What is the Microsoft Azure DevTest Labs feature all about?

In almost every project we work on, we set aside one to two weeks for provisioning the development and test environments. During this period we plan the project with low productivity: we buy software licenses if any are needed or upgrade existing ones to new versions, and we upgrade machine configurations to get ready for the new mission. So far this has been an accepted solution, considered a prerequisite step no matter how business-critical the project is; this was waste we were allowed to have.

A few companies tried to tackle this problem with solutions available in the market, such as CloudShare, a cloud computing company providing ready-to-use development and testing labs. However, adoption was not very high because companies' primary IT strategies were not aligned with such cloud platforms. For example, if your Active Directory is not in sync with the dev labs in CloudShare, you have to create an entire dummy replica to mimic it, which is not practical most of the time.

Recently, a more mature solution has been devised by Microsoft: Azure DevTest Labs, a feature that enables companies to quickly provision development and test environments. It is a more promising solution because of its market adoption: for customers who already run the majority of their workload on the Microsoft Azure cloud platform, building Dev, Test, and Prod in the same cloud makes sense, as it simply becomes an extension of their on-premises network.

So, in a nutshell, we are no longer allowed to waste one to two weeks; you are that much closer to your market and end customers, helping you win business.

So let us start by understanding some basics about Azure DevTest Labs:

  • "Lab" creation is the first activity. A "Lab" is the outer container, or boundary, for a collection of virtual machines used for development or testing.
    • A lab provides a secure boundary, so a machine in one lab cannot see machines in a different lab; user permissions are organized the same way.
    • In real life, a lab could be a project name or a sub-project name.
    • You can automate lab creation, including custom settings, by creating a Resource Manager template and using it to create identical labs again and again.
    • A lab owner has access to all resources within the lab. Therefore, they can modify policies, read and write any VMs, change the virtual network, and so on.
    • An entire lab can be shut down automatically to save resource cost when not in use.
  • Once you create a lab, you create multiple "virtual machines" in it.
    • Each virtual machine has an OS image attached to it.
    • You can limit the number of virtual machines in a lab, and also the number of virtual machines per user.
    • To save cost, each machine can be started and shut down automatically at a specific time of day or week.
    • You can use the out-of-the-box VM images available, or you can create and upload your own custom VM image.
    • A lab user can use the VMs but is not allowed to modify any settings.
  • Each virtual machine can have "artifacts". An artifact is a configuration item or tool that you want installed during VM provisioning, so your VM has it ready before the first logon.
    • Examples of artifacts include the 7-Zip utility, the Notepad++ editor, or a specific browser for testing purposes. The picture below shows a few sample artifacts; the full list, however, is long.


  • Sometimes, if you have more than one artifact to be added, you may decide the order in which they are applied.
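To get a feel for what the auto-start/auto-shutdown policies can save, here is a rough back-of-the-envelope sketch in Python. The hourly rate and the weekday 08:00-18:00 schedule are illustrative assumptions, not real Azure prices:

```python
# Rough illustration of why auto-shutdown matters (assumed numbers, not real Azure prices).

HOURLY_RATE = 0.20          # assumed cost of one dev/test VM per hour, in USD
HOURS_PER_WEEK = 24 * 7     # a VM left running around the clock

# Assumed schedule: auto-start at 08:00, auto-shutdown at 18:00, weekdays only.
WORK_HOURS_PER_WEEK = 10 * 5

def weekly_cost(hours: float, rate: float = HOURLY_RATE) -> float:
    """Cost of keeping one VM running for the given number of hours."""
    return hours * rate

always_on = weekly_cost(HOURS_PER_WEEK)        # 168 hours
scheduled = weekly_cost(WORK_HOURS_PER_WEEK)   # 50 hours

print(f"Always on : ${always_on:.2f}/week")
print(f"Scheduled : ${scheduled:.2f}/week")
print(f"Savings   : {100 * (1 - scheduled / always_on):.0f}%")  # about 70%
```

For real numbers you would plug in the actual hourly rate of your VM size from the Azure pricing page; the point is simply that a working-hours schedule cuts the bill by well over half.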

Along with the above core features, there are some very attractive use cases for companies, such as:

  1. Training setups: You don't have to work with a training vendor to rent a lab for training purposes, which may require some high-end machines. Organizations outsource entire training programs because of hardware requirements, as they cannot afford to procure such hardware just for training, given the capital and operational expenditure involved.

With Azure DevTest Labs, this wait time and these costly affairs are no longer required: you can easily configure the required training environment in minutes, use it wisely, save the cost, and complete your training. Once the training is over, delete the environment. That's it!

  2. VM image snapshots: We have all faced situations where some piece of code or functionality works fine in the Dev environment but fails in Test, or works in one place and fails on one specific machine. Testers sometimes have a tough time reproducing these issues because somebody changed some VM setting in the meantime, making it hard to find the root cause.

With Azure DevTest Labs, you can take a snapshot of a VM image and preserve it for later use to reproduce environment-specific defects. The ability to recreate the same environment with a few clicks improves dev-test team communication and collaboration.

  3. Self-service VM changes: No more follow-ups with IT support to add small features to VMs. With the power in your hands, you can select artifacts from the available set, add them, and test them. There is no dependency on other stakeholders; a lab owner or lab user can perform all such operations, and installation happens automatically without manual intervention.

To summarize, Azure DevTest Labs is a great addition to the platform; it makes sense whether or not you have started adopting a DevOps culture within your organization. Essentially, this feature enables each organization to save cost and time while achieving its business objectives.


Structured Web: 2025


If we look forward to the year 2025, we will have big dreams realized, like nanotechnology, artificial intelligence, next-generation cloud, and high-performance computing. The impact of such technologies on human life overall is unimaginable at this point in time. We are not far away from tiny nano-factories and nano-robots at home doing smart jobs for us. The computers around us will be a million times faster and smaller, ready to serve us with a fraction of today's energy consumption. These possibilities are beautiful and are likely to be realized, but one important question is: 'Are we ready for that?' 'Are we laying the correct foundation for next-generation computing?' The answer may be 'yes' or 'no' depending on individual perceptions and context. However, it would certainly be 'no' if we look at the current unstructured nature of the World Wide Web, the biggest information store freely available at our fingertips.

Due to the tremendous size of the Web, the way we have organized our web resources, and the rate of web adoption in developing countries, it will soon become difficult to easily identify relevant information and services of interest. Total dependence on merely text-based search engines for identifying information will not be sufficient, and we will lose credible information that search engines cannot surface effectively; such a loss may become unaffordable in the near future.

A lot of research has been happening around web standardization; the research on classifying web sites by Christoph Lindemann and Lars Littig [2] and the research on extracting and managing structured Web data by Michael John Cafarella [1] are remarkable.

This paper proposes a few techniques for structuring the Web to make it as usable as possible.

This is the first paper in a series targeting research on the 'Structured Web: 2025' topic.


Fig. 1  Conceptual view of Web showing scattered information without specific structure


Due to the heterogeneity of the Web and its lack of structure, it is crucial to identify the properties of a Web resource that best reflect its functionality. In the relational database world, we call this a schema. If we want to read any tuple from a database, we first need to know its schema. This principle is equally applicable to Web resources. Once we know the schema, the second step is that we should allow the database tuple to be read by anybody.


Here I propose a two-step methodology for describing the structure of a Web resource.

A. Every Web resource should describe and expose its properties.

B. Every Web resource should be accessible using unified structure.

Here I consider a Web resource to be anything that is publicly accessible.


This applies to one of the major web resources, i.e., the Web site. Every Web site should describe its schema using the properties below and should expose it for public access.

                                   TABLE I
                             Web Site Properties

Sr. No.   Web Site Property                   Description

3         Languages
4         Time Zone
5         Currency
          Web Content:
7           Text
8           Video
9           Audio
10          XML
12          RSS
13          Documents (Word, PDF, XLS, etc.)
            Size of Pages
17        Count of Pages
18        External site out degree
          Technical realization               JavaScript, or another scripting
          Domain dictionary                   Domain dictionary keywords
          Popular URLs                        Popular URLs of the site
          Web resource structure              See Section V.

Using the above information, available from each Web site, organizations can write crawlers that visit web sites and retrieve these details in order to maintain a database of all this information.

Where:

Domain dictionary keywords can be used by search engines to index the web site against those keywords.

Security signifies whether the website can be openly used by anybody or whether registration is required.


Once we understand the web site properties, we will be able to understand its general structure. The next level of categorization is based on how the actual web site content is made available for public access. This content access is different from accessing content through a rendered web page. Directly exposing content using URLs will help categorize the overall information in a relational-database-like table, as shown below.


                                   TABLE II
                      Web Site Content Access Structure

Sr. No. Content Type Name URL
1 Image Einstein.png http://www.example.com/Einstein.png
2 Image James Cameron http://www.example.com/JamesC.png
3 PDF USLReport http://www.example.com/USL.pdf
4 Text Football Game http://www.example.com/FootballGame.htm
5 Video President Speech http://www.example.com/Presdspeech.mp4
.. .. .. ..
.. .. .. ..

A. WebResource_Properties.XML file

Now the question is how any web site will expose its properties and content access structure to the outside world. The answer is one XML file with a standard schema that should be published by every website owner: WebResource_Properties.XML. This file should be present in each site's root virtual folder and should be publicly accessible using the following URL format: http://<website-domain>/WebResource_Properties.XML
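As an illustration, a minimal WebResource_Properties.XML for the fictional site www.example.com might look like the sketch below. The element names are my own assumption of what such a standard schema could contain, drawn from the properties in Table I and the content access structure in Table II; an actual standard would need to be agreed upon:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical schema: element names are illustrative, not a published standard. -->
<WebResourceProperties>
  <Domain>Health care</Domain>
  <Country>Ireland</Country>
  <Languages>English</Languages>
  <TimeZone>GMT</TimeZone>
  <Currency>EUR</Currency>
  <Security>Open</Security>
  <CountOfPages>25</CountOfPages>
  <WebContent>
    <ContentItem type="Image" name="Einstein.png" url="http://www.example.com/Einstein.png"/>
    <ContentItem type="PDF" name="USLReport" url="http://www.example.com/USL.pdf"/>
    <ContentItem type="Audio" name="Interview" url="http://www.example.com/interview.mp3"/>
  </WebContent>
</WebResourceProperties>
```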


Using the above mechanism, we can build a relational database table for all the websites exposing web resource properties.

One could easily write a piece of software that provides a list of all sites 'in Ireland, in the health care domain, with audio and images, having a page count > 20, and without any security' for accessing content.
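The kind of query described above can be sketched in a few lines of Python using the standard library's XML parser. The element names (Country, Domain, Security, CountOfPages, ContentItem) are assumptions about what a WebResource_Properties.XML might contain; in a real system the XML would be fetched from each site by a crawler rather than supplied as a string:

```python
# Sketch of the query described above: filter WebResource_Properties.XML documents
# for 'Ireland, health care, with audio and images, page count > 20, no security'.
# Element names are hypothetical, assumed for illustration.
import xml.etree.ElementTree as ET

def matches(xml_text: str) -> bool:
    root = ET.fromstring(xml_text)
    get = lambda tag: (root.findtext(tag) or "").strip().lower()
    content_types = {item.get("type", "").lower() for item in root.iter("ContentItem")}
    return (get("Country") == "ireland"
            and get("Domain") == "health care"
            and {"audio", "image"} <= content_types   # has both audio and images
            and int(get("CountOfPages") or 0) > 20
            and get("Security") == "open")            # no registration required

sample = """<WebResourceProperties>
  <Domain>Health care</Domain><Country>Ireland</Country>
  <Security>Open</Security><CountOfPages>25</CountOfPages>
  <WebContent>
    <ContentItem type="Image" name="Einstein.png" url="http://www.example.com/Einstein.png"/>
    <ContentItem type="Audio" name="Talk" url="http://www.example.com/talk.mp3"/>
  </WebContent>
</WebResourceProperties>"""

print(matches(sample))  # True
```

A crawler would download each known site's WebResource_Properties.XML, run a check like this, and store the extracted properties in a relational table for later querying.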

B. Ranking Website

Another way of classifying web resources/web sites is by ranking them. This ranking should be based on:

  • Relevant Content volume and quality
  • No. of users and/or web traffic

Web site ranking should be done by independent organizations to give the world a real sense of usability. A rank is always linked to a domain, so the domain always comes into the picture when comparing ranks.

C. Domain

Some of the domains can be listed as: affiliate site, archive site, blog, corporate site, commerce site, database site, development site, directory site, download site, employment site, etc.


The figure below shows two views of a web site:

A. The view rendered in the browser, which the user sees directly. Search engines work on this view when indexing a web site. A search engine cannot reach a web resource that has no link in the rendered page; it cannot crawl files that sit on web servers without links from any web page.

B. The view provided through the WebResource_Properties.XML file, a sample of which is shown on the right side of the figure.


Fig. 2  Web Virtual directory and two views of it


By implementing the above guidelines, web information can be structured to a level that lets us leverage the following advantages.

A. Technology-neutral categorization of web sites

Using the above method, web sites can be categorized, and the Web can be structured, in a technology-neutral way.

B. Improved search engine optimization

Search engines no longer need to depend on text-based indexing alone; the additional web resource properties can help produce more meaningful search results.

C. Minimal work to get started

Web resource owners don't need to make any changes to their web applications. Just one XML file will help bring in a lot of difference.

The figure below shows a conceptual view of the Web once such structuring happens over a period of time. The Web being a massive data store, it will take time for people to adopt such standards and apply them.


Fig 3. – Conceptual view of Web showing structured information after employing above techniques

The important point is that if we don't act in time, we will be at a great loss: millions of ideas, research results, and opinions from billions of people might slip into the dark ages just because nobody could find them at the right time and carry the work forward. People will keep reinventing the wheel, and the next generation will blame us for not managing the Web responsibly. If we start today, the hope is that the entire Web will be a structured data source by 2025, and the next generation might use a structured query language to search the Web seamlessly.

Because "information that cannot be found easily is as good as information that is not present."


[1]        Michael John Cafarella, Extracting and Managing Structured Web Data, University of Washington, 2009

[2]        Christoph Lindemann and Lars Littig, Classifying Web Sites, University of Leipzig, Johannisgasse 26, 2007

[3]        John M. Pierre, On the Automated Classification of Web Sites, California, USA, 2001

Future Technologies: An Overview

In this article, I have tried to put together a few future technologies (some of them are already a reality in some shape or form) on which a lot of research is happening in the Microsoft world. This article will help tech gurus keep tabs on the progress of technology in these areas.

Software Agents

A software agent is a software program that acts for a user or another program in a relationship of agency. Related and derived concepts include intelligent agents (in particular, those exhibiting some aspect of artificial intelligence, such as learning and reasoning), autonomous agents (capable of modifying the way in which they achieve their objectives), distributed agents (executed on physically distinct computers), multi-agent systems (distributed agents that do not have the capability to achieve an objective alone and thus must communicate), and mobile agents (agents that can relocate their execution onto different processors).

  1. Visual Studio Agents 2010 – include Test Controller 2010, Test Agent 2010, and Lab Agent 2010. Test Controller 2010 and Test Agent 2010 collectively enable scale-out load generation, distributed data collection, and distributed test execution. Lab Agent 2010 manages testing, workflow, and network isolation for virtual machines used with Visual Studio Lab Management 2010.
  2. Life like software agents
  3. Software distribution agents
  4. Research Projects

Natural Language Interpretation

Natural language processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human (natural) languages: specifically, the process of a computer extracting meaningful information from natural language input and/or producing natural language output. It began as a branch of artificial intelligence. In theory, natural language processing is a very attractive method of human-computer interaction.

  1. Excel formulas
  2. Machine Translation – research project
  3. VoiceXML 2.0 contribution
  4. Few more research projects

Machine Translation

Machine translation is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one natural language to another.

On a basic level, MT performs simple substitution of words in one natural language for words in another, but that alone usually cannot produce a good translation of a text, because recognition of whole phrases and their closest counterparts in the target language is needed. Solving this problem with corpus and statistical techniques is a rapidly growing field that is leading to better translations, handling differences in linguistic typology, translation of idioms, and the isolation of anomalies.

  1. Microsoft Translator (http://microsofttranslator.com)
  2. Windows Live Toolbar – add-in for user’s web sites
  3. Research on Syntax based MT, Phrase based MT, Word alignment, Language Modeling
  4. Office 2007/2010 Translate feature
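The basic word-for-word substitution level described above can be sketched in a few lines of Python. The tiny English-to-French dictionary is purely illustrative, and the output also shows why this level of MT is not enough on its own:

```python
# Naive word-for-word machine translation (the basic substitution level described above).
# The tiny dictionary is illustrative only; real MT needs phrase-level and statistical models.

en_to_fr = {
    "the": "le", "cat": "chat", "black": "noir",
    "eats": "mange", "fish": "poisson",
}

def substitute(sentence: str) -> str:
    """Replace each word independently; unknown words pass through unchanged."""
    return " ".join(en_to_fr.get(word, word) for word in sentence.lower().split())

print(substitute("The black cat eats fish"))  # "le noir chat mange poisson"
# A more natural French rendering would be "le chat noir mange du poisson":
# adjective order and the partitive article are lost, which is exactly why
# plain substitution alone cannot produce a good translation.
```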

Procedural Storytelling

Procedural generation refers to content generated algorithmically rather than manually, and is often used to generate game levels and other content. While procedural generation does not guarantee that a game or a sequence of levels is nonlinear, it is an important factor in reducing game development time, and it opens up avenues for generating larger and more or less unique seamless game worlds on the fly, using fewer resources. This kind of procedural generation is also called "worldbuilding", in which general rules are used to construct a believable world.

  1. Research on creating immersive 3D Worlds
  2. Digital Storytelling using Kinect
  3. Environmental storytelling
  4. Reduces game development time

Machine augmented cognition

Machine-augmented cognition (AugCog) is a research field at the frontier between human-computer interaction, psychology, ergonomics, and neuroscience that aims at creating revolutionary human-computer interactions. For instance, various research projects aim at evaluating the cognitive state of a user in real time (e.g., from EEG) and designing closed-loop systems to modulate information flow with respect to the user's cognitive capacity.


  1. Research on augmented cognition

Cloud Computing

Cloud computing provides computation, software applications, data access, and storage resources without requiring cloud users to know the location and other details of the computing infrastructure.

  1. Windows Azure Platform
  2. Office 365


Cyber Warfare

Cyber warfare is action by a nation-state to penetrate another nation's computers or networks for the purpose of causing damage or disruption.

  1. Research on steganography and steganalysis
  2. Research on warfare commands and control systems


4G

4G is the fourth generation of cellular wireless standards. It is a successor to the 3G and 2G families of standards.

  1. Windows Phone enablement on 4G
  2. OS support for 4G wireless and wired networks

Mesh Networking

Mesh networking is a type of networking in which each node must not only capture and disseminate its own data but also serve as a relay for other nodes; that is, it must collaborate to propagate data in the network.

A mesh network can be designed using a flooding technique or a routing technique. When using a routing technique, the message propagates along a path by hopping from node to node until the destination is reached. To ensure the availability of all its paths, a routing network must allow for continuous connections and reconfiguration around broken or blocked paths, using self-healing algorithms. A mesh network whose nodes are all connected to each other is a fully connected network. Mesh networks can be seen as one type of ad hoc network. Mobile ad hoc networks (MANETs) and mesh networks are therefore closely related, but MANETs also have to deal with the problems introduced by the mobility of the nodes.

  1. Toolkit for wireless mesh networking
  2. Research on mesh networking
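The routing technique described above, where a message hops from node to node and the network reconfigures around broken links, can be illustrated with a small breadth-first path search over an assumed five-node mesh topology:

```python
# Illustration of routing in a mesh: find a hop-by-hop path from source to
# destination, and reroute when a link breaks. The topology is made up.
from collections import deque

def find_path(links, src, dst):
    """Breadth-first search: returns the shortest hop-by-hop path, or None."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor in links.get(node, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "E"], "E": ["D"]}
print(find_path(mesh, "A", "E"))   # ['A', 'B', 'D', 'E']

# "Self-healing": the B-D link breaks, so traffic reroutes through C.
mesh["B"].remove("D"); mesh["D"].remove("B")
print(find_path(mesh, "A", "E"))   # ['A', 'C', 'D', 'E']
```

A flooding design would instead forward the message to every neighbor; routing trades that redundancy for efficiency, at the cost of needing the reconfiguration step shown above.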


Photonics

The science of photonics includes the generation, emission, transmission, modulation, signal processing, switching, amplification, detection, and sensing of light. The term photonics emphasizes that photons are neither purely particles nor purely waves: they have both particle and wave nature. It covers all technical applications of light over the whole spectrum, from the ultraviolet through the visible to the near-, mid-, and far-infrared. Most applications, however, are in the range of visible and near-infrared light.

  1. Research on photonics and nanostructures


5G

5G (5th-generation mobile networks or 5th-generation wireless systems) is a name used in some research papers and projects to denote the next major phase of mobile telecommunications standards beyond the 4G/IMT-Advanced standards effective since 2011. At present, 5G is not a term officially used for any particular specification or in any official document yet made public by telecommunication companies or standardization bodies such as 3GPP, the WiMAX Forum, or ITU-R. New standard releases beyond 4G are in progress by standardization bodies, but at this time they are not considered new mobile generations and fall under the 4G umbrella.

  1. Research on OS compatibility and overall protocol expectations


Multi-touch

In computing, multi-touch refers to a touch-sensing surface's (trackpad or touchscreen) ability to recognize the presence of two or more points of contact with the surface. This plural-point awareness is often used to implement advanced functionality such as pinch-to-zoom or activating predefined programs.

  1. Microsoft Surface
  2. Windows Touch technology
  3. Multi touch in Windows 7
  4. Multi touch programming platform

Gesture recognition

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques.

  1. Microsoft Kinect
  2. Gesture recognizers for Tablet PC

Speech Recognition

Speech recognition is technology that can translate spoken words into text. It is also known as "automatic speech recognition" (ASR), "computer speech recognition", "speech to text", or just "STT". Some SR systems use "training", where an individual speaker reads sections of text into the SR system. These systems analyze the person's specific voice and use it to fine-tune recognition of that person's speech, resulting in more accurate transcription. Systems that do not use training are called "speaker-independent" systems; systems that use training are called "speaker-dependent" systems.

  1. Windows Speech Recognition
  2. Speech Macros in Office
  3. Speech recognition programming SDK
  4. Kinect speech recognition
  5. Microsoft Research

Augmented Reality

Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one.

  1. Microsoft research – in the area of mobile phone


Haptic Technology

Haptic technology is a tactile feedback technology that takes advantage of the sense of touch by applying forces, vibrations, or motions to the user. This mechanical stimulation can be used to assist in the creation of virtual objects in a computer simulation, to control such virtual objects, and to enhance the remote control of machines and devices (telerobotics). It has been described as "doing for the sense of touch what computer graphics does for vision". Haptic devices may incorporate tactile sensors that measure forces exerted by the user on the interface.

  1. Microsoft Surface haptics


Holography

Holography is a technique that allows the light scattered from an object to be recorded and later reconstructed so that, when an imaging system (a camera or an eye) is placed in the reconstructed beam, an image of the object is seen even when the object is no longer present. The image changes as the position and orientation of the viewing system change, exactly as if the object were still present, making the image appear three-dimensional.

  1. Microsoft research on digital holography and virtual integral holography


Telepresence

Telepresence refers to a set of technologies that allow a person to feel as if they were present, to give the appearance of being present, or to have an effect, via telerobotics, at a place other than their true location. Telepresence requires that the users' senses be provided with such stimuli as to give the feeling of being in that other location. Additionally, users may be given the ability to affect the remote location. In this case, the user's position, movements, actions, voice, etc. may be sensed, transmitted, and duplicated in the remote location to bring about this effect. Information may therefore travel in both directions between the user and the remote location.

  1. Microsoft research with HP partnership

Immersive Virtual reality

A fully immersive virtual reality is one to which the user connects through direct brain stimulation. All senses would be stimulated, diffusing the boundary between reality and fiction.

  1. Microsoft Research
  2. Game development

Depth Imaging

Depth imaging is the name for a collection of techniques used to produce a 2D image showing the distance to points in a scene from a specific point, normally associated with some type of sensor device. The resulting image, the range image, has pixel values that correspond to distance; e.g., brighter values mean shorter distances, or vice versa. If the sensor used to produce the range image is properly calibrated, the pixel values can be given directly in physical units, such as meters.

  • Basic API support
  • Microsoft Research

Near-field communication

Near-field communication (NFC) is a set of standards for smartphones and similar devices to establish radio communication with each other by touching them together or bringing them into close proximity, usually no more than a few centimetres. Present and anticipated applications include contactless transactions, data exchange, and simplified setup of more complex communications such as Wi-Fi. Communication is also possible between an NFC device and an unpowered NFC chip, called a "tag".

  1. Microsoft Research

Biometric sensors

Biometric sensing applies biometrics to telecommunications, and telecommunications to remote biometric sensing. With the emergence of multimodal biometric systems gathering data from different sensors and contexts, international standards that support systems performing biometric enrollment and verification or identification have begun to focus on human physiological thresholds as constraints and frameworks for "plug and play" telebiometric networks.

  1. Windows Biometric Framework (WBF)
  2. O.S compatibility for sensors
  3. Microsoft research

Smart Power meters

A smart meter is usually an electrical meter that records consumption of electric energy in intervals of an hour or less and communicates that information at least daily back to the utility for monitoring and billing purposes. Smart meters enable two-way communication between the meter and the central system. Unlike home energy monitors, smart meters can gather data for remote reporting. Such an advanced metering infrastructure (AMI) differs from traditional automatic meter reading (AMR) in that it enables two-way communications with the meter.

  1. Microsoft Smart Energy Reference Architecture
  2. Battery metering
  3. Power and utilities industry : Delivery and smart grid solutions

Machine vision

Machine vision (MV) is the process of applying a range of technologies and methods to provide imaging-based automatic inspection, process control and robot guidance in industrial applications. While the scope of MV is broad and a comprehensive definition is difficult to distil, a “generally accepted definition of machine vision is ‘… the analysis of images to extract data for controlling a process or activity.'”

  1. Microsoft research

Computational Photography

Computational imaging refers to any image formation method that involves a digital computer. Computational photography refers broadly to computational imaging techniques that enhance or extend the capabilities of digital photography. The output of these techniques is an ordinary photograph, but one that could not have been taken by a traditional camera.

  1. Microsoft research


Tablet Computers

A tablet computer, or tablet, is a mobile computer, larger than a mobile phone or personal digital assistant, integrated into a flat touch screen and primarily operated by touching the screen rather than using a physical keyboard. It often uses an onscreen virtual keyboard, a passive stylus pen, or a digital pen.

  1. Microsoft Tablet PC

Context aware computing

In computer science, context awareness refers to the idea that computers can both sense and react based on their environment. Devices may have information about the circumstances under which they are able to operate and, based on rules or an intelligent stimulus, react accordingly. Context-aware devices may also try to make assumptions about the user's current situation. Context-aware computing is a mobile computing paradigm in which applications can discover and take advantage of contextual information (such as user location, time of day, nearby people and devices, and user activity). Since it was proposed about a decade ago, many researchers have studied this topic and built several context-aware applications to demonstrate the usefulness of this new technology.

  1. Microsoft research

Appliance robots

Appliance robots let you operate your home appliances from the web or a remote location.

  1. Microsoft robotics
  2. Microsoft research

Robotic surgery

  1. Microsoft robotics
  2. Microsoft research

Domestic robots

A domestic robot is a robot used for household chores.

  1. Microsoft robotics
  2. Microsoft research

Swarm robotics

Swarm robotics is a new approach to the coordination of multi-robot systems consisting of large numbers of mostly simple physical robots. It is supposed that a desired collective behavior emerges from the interactions between the robots and the interactions of robots with the environment. This approach emerged from the field of artificial swarm intelligence, as well as from biological studies of insects, ants, and other systems in nature where swarm behavior occurs.

  1. Microsoft research 

Friends, hope this review is helpful! 🙂

Laxmikant Patil


How do I choose Content Management System?

If you are planning to decide which Content Management System to adopt, the first thing you need is some idea of what you or your organization expects from a CMS.

However, not everybody can be so imaginative, and imagination is restricted by what you already perceive a content management system to be. You will probably need additional information to educate yourself on what the CMS products out there in the market are offering, what the trends are, and what people are doing to increase productivity through collaboration.

This article covers the areas you should consider before committing to a specific CM product. After reading it, you will be familiar with the terminology and jargon you will encounter on individual CM product sites.

The ultimate aims of this article are to:

  • Educate decision makers on typical CMS features
  • Relate those features to their own requirements
  • Map features/requirements against any product for effective product selection

The list below covers the most commonly expected features of a Content Management System:

  • Security
  • Enterprise Search
  • Integration
  • Technology
  • Deployment Flexibility
  • Scalability
  • Customer Service Support
  • Localization
  • Multi-Web Site Management
  • Translation Management
  • Brand Management
  • Target Audience Marketing
  • Multi-Channel Marketing
  • Browser Support
  • Workflows
  • Supported Content Types
  • Site Edit Feature
  • Word Connector
  • Archiving flexibility
  • Content Distributor
  • Visitor experience analytics
  • Personalization
  • Is it open source?
  • Supported Web Servers
  • SEO
  • SaaS
  • Cloud hosting option/Cloud Ready
  • Mobile version support
  • Version Control and Rollback features
  • Ad Management
  • Real-time auditing
  • Reporting
  • Email integration
  • Architecture
  • Social networking support
  • Disaster Recovery options
  • Record Management support
  • Web content Management support
  • Multivariable Testing support
  • Web Traffic analytics
  • Business Analytics
  • Image Edit options
  • Background Job Processor
  • Asynchronous Job processor
  • GEOIP feature
  • PDF generation feature
  • Library/content Load balancing
  • Diagnostics control
  • Subscription control at folder/content level
  • Tasks
  • Web Alerts
  • Wiki Support
  • Blog features
  • Content Editor
  • Content Migration support
  • Content Export formats
  • User communities
  • Third party components leverage
  • Drag and Drop architecture
  • Dynamic Page content and layout changes
  • Tags and Categories
  • Comments and Likes
  • Business intelligence
  • Role and rights Management
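The third aim above, mapping features against products, is often done with a weighted scoring matrix. Here is a small sketch; the feature weights and product ratings are made-up sample data, and the product names are placeholders to be replaced with your own evaluations.

```python
# Illustrative weighted-scoring sketch for CMS product selection.
# Weights express how important a feature is to you (0-5); ratings express
# how well a product delivers it (0-5). All numbers below are fabricated.

def score(weights, ratings):
    """Weighted sum of per-feature ratings for one product."""
    return sum(weights[feature] * ratings.get(feature, 0) for feature in weights)

weights = {"Security": 5, "Enterprise Search": 3, "Workflows": 4, "SEO": 2}

products = {
    "Product A": {"Security": 4, "Enterprise Search": 5, "Workflows": 3, "SEO": 4},
    "Product B": {"Security": 5, "Enterprise Search": 2, "Workflows": 4, "SEO": 3},
}

ranked = sorted(products, key=lambda p: score(weights, products[p]), reverse=True)
for name in ranked:
    print(name, score(weights, products[name]))
```

Extending the weights dict to the full checklist above turns a long feature list into a single comparable number per product.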

I am also listing some good products that I know about:

1. Company Name: Microsoft , www.Microsoft.com

Product Name: SharePoint 2007/2010 Server

Technology: Microsoft .Net

 2. Company Name: Ektron , www.Ektron.com

Product Name: Ektron CMS

Technology: Microsoft .Net

3. Company Name: Sitecore, www.Sitecore.com

Product Name: Sitecore CMS

Technology: Microsoft .Net

4. Company Name: dotCMS , www.dotCMs.org

Product Name: dotCMS CMS

Technology: Java

5. Company Name: CrownPeak , www.CrownPeak.com

Product Name: CrownPeak CMS


6. Company Name: Percussion, www.percussion.com

Product Name: Percussion CM1, CM2

Technology: Java

7. Company Name: Oxcyon, www.oxcyon.com

Product Name: Centralpoint

Technology: Microsoft .NET

8. Company Name: Limelight, www.clickability.com

Product Name: Limelight


9. Company Name: Autonomy Interwoven, www.interwoven.com

Product Name: Autonomy interwoven WCM

Technology: Java

10. Company Name: Bridgeline Digital, www.bridgelinedigital.com

Product Name: iAPPS Content Manager

Technology: Microsoft .NET

11. Company Name: SDL Tridion, www.sdltridion.com

Product Name: SDL Tridion WCM

Technology: Microsoft .NET

I hope this is helpful!

– Laxmikant Patil

IE 6, 7, 8 Features, Loopholes and vulnerabilities

This white paper discusses the feature differences between different IE versions (IE 6, 7, and 8) and the vulnerabilities and loopholes found in these versions.

Microsoft Internet Explorer’s journey started in 1995 (IE 1.0), and the browser is currently in its 9th major generation, available for free download as a Release Candidate. Microsoft’s work on IE has always been influenced by feedback from end users in the areas of usability, performance, and security. Ongoing development of web standards and the work done by competitors like Mozilla Firefox, Google Chrome, Safari, and Opera also drive Microsoft’s improvements. Microsoft has taken great efforts to keep increasing its market share by introducing IE on other operating systems such as Apple Mac and Unix, and on mobile devices via Internet Explorer Mobile (with Windows Phone 7 and Windows CE).

Every version of IE passes through regression testing by Microsoft and by real users worldwide. Microsoft keeps providing service packs and patches for issues identified by end users, and tries to keep IE updated against the latest reported security threats.

The section below discusses feature differences between different versions of IE (IE6 to IE8):

Feature Comparison

Feature*                          | IE6                              | IE7                               | IE8
----------------------------------|----------------------------------|-----------------------------------|------------------------------------
Compatibility View                | No                               | No                                | Yes
Accelerators                      | No                               | No                                | Yes
Web Slices                        | No                               | No                                | Yes
InPrivate Browsing                | No                               | No                                | Yes
Tabbed Browsing                   | No                               | Yes                               | Yes, improved
Search                            | No                               | Yes                               | Yes, improved
SmartScreen Filter                | Lacks advanced security features | Yes                               | Yes, improved
Favourites Bar                    | No                               | Yes                               | Yes, improved
InPrivate Filtering               | No                               | No                                | Yes
Security (Malware, Phishing)      | No                               | No                                | Yes
Cross-Site Scripting (XSS) Filter | No                               | No                                | Yes
Click-Jacking Prevention          | No                               | No                                | Yes
Domain Highlighting               | No                               | No                                | Yes
Data Execution Prevention         | No                               | No                                | Yes
DHTML                             | Yes                              | Yes                               | Yes, improved
CSS Support                       | Full CSS Level 1                 | CSS 2.1                           | CSS 2.1
DOM Level                         | Full DOM Level 1                 | Level 2.0                         | Level 2.0
SMIL                              | SMIL 2.0                         | No                                | No
RSS                               | No                               | Yes                               | Yes
Ajax Support                      | XMLHTTP as an ActiveX control    | Native XMLHTTP support            | Native XMLHTTP support
JavaScript                        | Yes                              | Improved                          | Improved, faster
OS Support                        | No: Win 7, WS 08 R2, Vista, WS 08 | No: Win 7, WS 08 R2; Yes: Vista, WS 08 | Yes: Win 7, WS 08 R2, Vista, WS 08

*Only selected features are considered for comparison

Vulnerabilities / Loopholes

Internet Explorer has been subject to many security vulnerabilities and concerns, and has been a carrier for much of the malware, adware, and computer viruses across the internet. A number of security flaws affecting IE originated not in the browser itself, but in the ActiveX-based add-ons it uses. Because the add-ons run with the same privileges as IE, their flaws can be as critical as browser flaws.

Below are some of the recent vulnerabilities and loopholes found in IE:

  • Microsoft Internet Explorer 6, 7, and 8 could not properly handle objects in memory, which allowed remote attackers to execute arbitrary code by accessing an object that (1) was not properly initialized or (2) is deleted, leading to memory corruption, related to a “dangling pointer,” aka “Uninitialized Memory Corruption Vulnerability“.
  • Remote code execution is one of the critical vulnerabilities observed in IE 6, 7, 8 browsers. This vulnerability could allow remote code execution if a user views a specially crafted web page using IE. One of the recent occurrences of it was fixed by Microsoft and security update was released.( http://www.microsoft.com/technet/security/bulletin/MS10-090.mspx)
  • Information Disclosure: An attacker who successfully exploited this vulnerability could gain the same user rights as the local user and steal the information. This vulnerability was found in IE 6, 7, 8. ( http://www.microsoft.com/technet/security/advisory/980088.mspx)
  • Microsoft Internet Explorer (IE6 to IE8) contained a memory corruption vulnerability, which could result in an invalid pointer being accessed after an object is incorrectly initialized or has been deleted. In certain circumstances, the invalid pointer access can be leveraged by an attacker to execute arbitrary code. This vulnerability is being actively exploited, and exploit code was publically available. (Attackers exploited this in the December 2009 and January 2010 during Operation Aurora, aka “HTML Object Memory Corruption Vulnerability.”)
  • Microsoft Internet Explorer 6 and 7 did not properly handle objects in memory that (1) were not properly initialized or (2) are deleted, which allowed remote attackers to execute arbitrary code via vectors involving a call to the getElementsByTagName method for the STYLE tag name, selection of the single element in the returned list, and a change to the outerHTML property of this element, related to Cascading Style Sheets (CSS) and mshtml.dll, aka “HTML Object Memory Corruption Vulnerability.”
  • Microsoft Internet Explorer 6, 6 SP1, 7, and 8 did not properly handle argument validation for unspecified variables, which allowed remote attackers to execute arbitrary code via a crafted HTML document, aka “HTML Component Handling Vulnerability.
  • GDI+ in Microsoft Internet Explorer 6 SP1 did not properly allocate an unspecified buffer, which allowed remote attackers to execute arbitrary code via a crafted TIFF image file that triggers memory corruption, aka “GDI+ TIFF Memory Corruption Vulnerability.
  • Buffer overflow in GDI+ in Microsoft Internet Explorer 6 SP1, allowed remote attackers to execute arbitrary code via a crafted TIFF image file, aka “GDI+ TIFF Buffer Overflow Vulnerability.
  • Heap-based buffer overflow in GDI+ in Microsoft Internet Explorer 6 SP1 allowed remote attackers to execute arbitrary code via a crafted PNG image file, aka “GDI+ PNG Heap Overflow Vulnerability.
  • Integer overflow in GDI+ in Microsoft Internet Explorer 6 SP1 allowed remote attackers to execute arbitrary code via a crafted WMF image file, aka “GDI+ WMF Integer Overflow Vulnerability.”
  • Unspecified vulnerability in Microsoft Internet Explorer 6, 6 SP1, and 7 allowed remote attackers to execute arbitrary code via a crafted data stream header that triggers memory corruption, aka “Data Stream Header Corruption Vulnerability.
  • Microsoft Internet Explorer 6 SP1, 6 and 7 on Windows XP SP2 and SP3, 6 and 7 on Windows Server 2003 SP1 and SP2, 7 on Windows Vista Gold and SP1, and 7 on Windows Server 2008 did not properly handle transition errors in a request for one HTTP document followed by a request for a second HTTP document, which allows remote attackers to execute arbitrary code via vectors involving (1) multiple crafted pages on a web site or (2) a web page with crafted inline content such as banner advertisements, aka “Page Transition Memory Corruption Vulnerability.
  • Microsoft Internet Explorer 7, when XHTML strict mode is used, allowed remote attackers to execute arbitrary code via the zoom style directive in conjunction with unspecified other directives in a malformed Cascading Style Sheets (CSS) stylesheet in a crafted HTML document, aka “CSS Memory Corruption Vulnerability.
  • Microsoft Internet Explorer 6 through 8 allowed remote attackers to spoof the address bar, via window.open with a relative URI, to show an arbitrary URL on the web site visited by the victim, as demonstrated by a visit to an attacker-controlled web page, which triggers a spoofed login form for the site containing that page.
  • Microsoft Internet Explorer 6.0.2900.2180 and earlier allowed remote attackers to cause a denial of service (CPU consumption and application hang) via JavaScript code with a long string value for the hash property (aka location.hash)
  • Microsoft Internet Explorer 8.0.7100.0 on Windows 7 RC on the x64 platform allowed remote attackers to cause a denial of service (application crash) via a certain DIV element in conjunction with SCRIPT elements that have empty contents and no reference to a valid external script location.
  • mshtml.dll in Microsoft Internet Explorer 7 and 8 on Windows XP SP3 allowed remote attackers to cause a denial of service (application crash) by calling the JavaScript findText method with a crafted Unicode string in the first argument, and only one additional argument, as demonstrated by a second argument of -1.
  • Microsoft Internet Explorer 6.0 through 8.0 beta 2 allowed remote attackers to cause a denial of service (application crash) via an onload=screen [“”] attribute value in a BODY element.
  • The XSS Filter in Microsoft Internet Explorer 8.0 Beta 2 allowed remote attackers to bypass the XSS protection mechanism and conduct XSS attacks by injecting data at two different positions within an HTML document, related to STYLE elements and the CSS expression property, aka a “double injection.”
  • Microsoft Internet Explorer 6 SP1 did not properly validate parameters during calls to navigation methods, which allowed remote attackers to execute arbitrary code via a crafted HTML document that triggers memory corruption, aka “Parameter Validation Memory Corruption Vulnerability.

All of the above vulnerabilities were confirmed and published by the National Vulnerability Database, and appropriate actions were taken by Microsoft. Please note that this is not the complete list of vulnerabilities found, just a list of recent ones.

Secunia Study:

An independent security advisory firm, Secunia, maintains a vulnerabilities database covering different versions of IE. This comparison of unpatched publicly known vulnerabilities in the latest stable browser versions is based on vulnerability reports by Secunia (Secunia.com):

Browser | Advisories | Vulnerabilities
--------|------------|----------------
IE6     | 150        | 227
IE7     | 50         | 151
IE8     | 18         | 77

SecurityFocus Study (SecurityFocus.com is an online computer security news portal and purveyor of information security services):

As per the SecurityFocus report, below is the list of vulnerabilities found in the latest stable versions of IE:

Browser | Vulnerabilities
--------|----------------
IE6     | 473
IE7     | 26
IE8     | 62


  1. If you are going to develop a new web application and are wondering how many IE versions it should support, it is clear from the above studies that IE6 should be your lowest priority, considering:
    1. The features available in IE7 and IE8 (and the effort required to implement IE6 compatibility)
    2. A sample of recent vulnerabilities (IE6: 16, IE7: 11, IE8: 9)
  2. Software giants like Google have begun the drive to phase out support for Microsoft’s Internet Explorer 6, among other older browsers.
  3. Nevertheless, Microsoft is making its own moves to make sure users upgrade to the latest versions of IE. For example, Office Web Applications (browser versions of Word, PowerPoint, Excel, and OneNote) will support Internet Explorer 7 and Internet Explorer 8 (as well as Firefox 3.5 on Windows, Mac, and Linux, and Safari 4 on Mac). There is no mention of IE6 in the support list: it is not officially supported, but customers will not be blocked from using it.
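If you do decide to deprioritize IE6, a server can detect the IE major version from the `MSIE <version>` token that IE 6–8 put in their User-Agent string. Here is a hedged sketch; the "full"/"degraded" tier policy is just an example, not a recommendation.

```python
import re

# Sketch: derive a support tier from the User-Agent string. IE 6-8
# identify themselves with an "MSIE <version>" token; the tier policy
# below is illustrative only.

def ie_major_version(user_agent):
    """Return the IE major version found in a User-Agent string, or None."""
    match = re.search(r"MSIE (\d+)", user_agent)
    return int(match.group(1)) if match else None

def support_tier(user_agent):
    version = ie_major_version(user_agent)
    if version is None:
        return "non-IE"
    return "full" if version >= 7 else "degraded"  # IE6 gets lowest priority

ua = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
print(ie_major_version(ua), support_tier(ua))  # prints "6 degraded"
```

In practice this check would sit in server middleware or analytics, letting you measure how much IE6 traffic you still serve before dropping support.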


Tabs: View and manage multiple websites in one browser window with enhanced tabbed browsing

Web Slices: Using Web Slices, you can keep up with frequently updated sites directly from the new Favourites Bar. If a Web Slice is available on a page, a green Web Slices icon will appear in the Command Bar. Click on this icon to easily subscribe and add the Web Slices to the Favourites Bar so you can keep track of that “slice” of the web.

Accelerators:  Accelerators help you to use fewer clicks to get driving directions, translate words, and perform routine tasks.

Click Jacking: Click-jacking is an emerging online threat where an attacker’s web page deceives you into clicking on content from another website without you realizing it.

Malware: Malware is software that a cybercriminal can use to steal your bank account information, track everything you type, send out malicious software or spam, or harm your computer.

Phishing: In Phishing, a cybercriminal pretends to be a legitimate organization, such as your bank, in order to deceive you into giving up personal information such as credit card numbers and account information.


For browser feature comparison:

  1. http://windows.microsoft.com/en-IN/internet-explorer/products/ie-9/compare-browsers
  2. http://en.wikipedia.org/wiki/Internet_Explorer
  3. http://www.microsoft.com/windows/internet-explorer/compare/compare-versions.aspx
  4. http://www.microsoft.com/windows/internet-explorer/features/safer.aspx
  5. http://www.microsoft.com/windows/products/winfamily/ie/ie7/features.mspx
  6. http://www.microsoft.com/windows/ie/ie6/evaluation/features/default.mspx
  7. http://www.webdevout.net/browser-support

For vulnerability reports:

  1. http://secunia.com/advisories/product/11/?task=statistics_2009
  2. http://en.wikipedia.org/wiki/Comparison_of_web_browsers#Vulnerabilities
  3. http://www.microsoft.com/technet/security/bulletin/MS10-090.mspx
  4. http://www.kb.cert.org/vuls/id/492515
  5. http://web.nvd.nist.gov
  6. http://www.cve.mitre.org/cve/

Kiosk Systems: Knowledge base for software professionals – Technology backgrounder

About kiosks

……. Kiosks were common in Persia, India, Pakistan, and in the Ottoman Empire from the 13th century onward ……. Indian Kiosk are generally called “Gumti” and sometimes “khokha” too……..

 The first self-service, interactive kiosk was developed in 1977 at the University of Illinois at Urbana-Champaign by a pre-med student, Murray Lappe. …….       

–    http://en.wikipedia.org/wiki/



Kiosks have been part of human life for many centuries. The quote above also shows how technology enabled the kiosk to operate independently to serve people. In the modern world, a kiosk is not just a computer with a touch screen enclosed in a box: it is an integration of mechanics, computer hardware, software, peripherals, and embedded controllers, and building one requires a high order of domain expertise and intellectual effort.

This whitepaper targets:
• Technical audiences who want to develop kiosk systems
• Organizations willing to start selling through kiosks
• Functional beginners who want basic knowledge of kiosk systems

In the IT world it is a common scenario that the client provides business objectives and high-level requirements for the complete system, but is not in a position to provide all the details of the system they expect to be developed.

The objective of this paper is to educate readers and help them become comfortable developing kiosk systems.

Business Scenarios

Some popular business examples where organizations around the world are building kiosk systems include the Digital Photo Kiosk, Bill Payment Kiosk, School Kiosk, Prepaid Electricity Kiosk, Internet Kiosk, Ticketing Kiosk, etc.

Based on the business intent of the kiosk, one needs to focus on certain aspects of it. A Digital Photo Kiosk, for example, uses an internal printer to print pictures instantly, so printer functionality becomes the main focus; a simple user interface and a high-quality printer are the best options there.

In cases like the School Kiosk, a student-friendly user interface is a major concern. Parents top up students’ smart cards to avoid cash transactions in school, so the smart card reader and cash acceptor devices play a major role. A school kiosk can use thermal printers for receipt printing; since the receipts need not be preserved for a long time, a thermal printer is the ideal solution. The smart card reader should be contactless so that students can operate it easily, with no physical contact with the reader. A better cash acceptor minimizes the support effort in case of failed transactions.

Designing an Internet kiosk is a major challenge in terms of security, since the user is allowed to browse the web. Internet kiosks may not need many peripherals, but they do need tight control over the user’s access rights to minimize virus attacks; security threats are the major concern.

A Prepaid Electricity Kiosk is used to top up credit on the consumer’s smart card; the card is then inserted into the electricity meter at home, which allows the equivalent amount of electricity to be consumed. Such a system uses a contact smart card reader for read/write operations along with a receipt printer. The card read/write speed of the reader should be the major focus here.

A Ticketing Kiosk vends tickets and provides various other allied services as a single window for the user. These kiosks need a very user-friendly interface, as the user may not be computer savvy and may have little patience to understand the functioning. The user can perform multiple functions on such a kiosk, ranging from finding a train schedule, fare calculation between two stations, and seat availability, to ticket status and ticket issuance.

 System Architecture

 Major components in the system are

1. Kiosk – the PC where the kiosk application runs, with its peripherals attached

2. Server – runs the business logic

3. Database Server – stores the data

4. Back office application – required to manage the kiosks and their peripherals

Kiosk Software Architecture
1. Device monitoring: ensures that all the peripherals are ready for use
2. Remote management software: a utility for managing the kiosk remotely
3. Multimedia support: any utility or third-party software for enhanced graphics, such as Adobe Flash, Microsoft Silverlight, or DirectX
4. OS tamper-proofing: the most important software running on the kiosk; it restricts user access to computer resources. The user should not be able to modify registry values or system files
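The device-monitoring component above can be sketched as a poll-and-aggregate loop: query each peripheral, and declare the kiosk ready only if every check passes. The peripheral names and check functions here are placeholders for real driver or SDK calls.

```python
# Sketch of kiosk device monitoring. Each check_* function stands in for a
# real peripheral status query (printer driver, cash-acceptor SDK, etc.).

def check_printer():
    return {"device": "printer", "ok": True, "detail": "paper ok"}

def check_cash_acceptor():
    # Simulated fault, to show how a failing peripheral is reported
    return {"device": "cash_acceptor", "ok": False, "detail": "cashbox full"}

def monitor(checks):
    """Run all peripheral checks; the kiosk is ready only if every check passes."""
    results = [check() for check in checks]
    ready = all(r["ok"] for r in results)
    faults = [r for r in results if not r["ok"]]
    return ready, faults

ready, faults = monitor([check_printer, check_cash_acceptor])
print("ready:", ready, "faults:", [f["detail"] for f in faults])
```

In a real kiosk this loop would run periodically and feed both the on-screen "out of service" state and the remote-management software.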

Software and Hardware Components

Software Components

System Software

The first thing that comes to mind while designing a kiosk is the kiosk operating system. One has to evaluate OS options on parameters such as:

* Integration with any existing system
* Ease of management by the team
* Security
* Initial purchase cost
* Maintenance cost
* Future support from the OS manufacturer

The three main contenders in the kiosk OS market are

o Microsoft Windows

o Linux

o Apple

Microsoft Windows

Microsoft provides a complete family of products that can be used as a kiosk OS:

• Windows XP Embedded
• Windows Embedded POSReady 2009
• Windows CE 5.0
• Windows Embedded CE 6.0
• Windows XP Professional
• Windows XP Embedded is one of the most widely used OSes on kiosks
• XP Embedded is a componentized version of Windows XP Professional and is much cheaper to license; it is quite stable, and applications built for Windows XP can generally be deployed on the embedded version without change
• XP Embedded has a clear future path, as Windows Embedded Standard is the next generation of Windows XP Embedded
• Another special OS from Microsoft is Windows Embedded POSReady 2009, aimed at point-of-sale solutions
• Embedded POSReady has features for seamless connectivity with peripherals, servers, and services
• Windows CE 5.0 is a distinctly different operating system and kernel, rather than a trimmed-down version of desktop Windows
• Windows CE 5.0 is the best choice if you need to change OS code for hardware interfacing or some other special reason. A distinctive feature of Windows CE compared to other Microsoft operating systems is that large parts of it are offered in source code form. Products like Platform Builder (an integrated environment for Windows CE OS image creation and integration, or customized operating system designs based on CE) offer several components in source code form to the general public
• Windows Embedded CE 6.0 (a renamed version of Windows CE 6.0) can be used to develop small-footprint devices with a componentized, real-time operating system. An OS image as small as 300 KB can be built from its roughly 700 components. When size is a concern, use this OS
• Windows XP Professional is one of the most widely used OSes for kiosks, similar to the XP Embedded version. Windows XP Professional is easily upgraded with the latest hotfix or service pack
• You can use Windows XP Professional for kiosks because of its robustness; it is among the most stable OSes on the market today and has outgrown the hardware-interface problems it had initially
• XP Professional benefits from the latest development technologies for building kiosk applications, such as the Microsoft .NET Framework, Windows Presentation Foundation, and Microsoft Silverlight
• Note that XP Embedded does not have the same end-user help functionality available in Windows XP Professional

Linux

• Linux is also used for kiosks because it is a bit more stable and secure
• If you want to take advantage of the open source nature of Linux, you may choose this OS
• Before you decide, be aware that a deeper understanding of the OS is required to implement and manage the kiosk, especially once you start doing a lot of custom work or integration with third-party components, hardware, etc.
• Linux also has an embedded version, but it is not so popular in the kiosk world
• Internet kiosks running Linux (e.g. Red Hat Linux) are popular

Apple

You will find very few people using Mac-based kiosks (e.g. WKiosk by App4Mac); the Mac is not so popular as a kiosk OS.

IBM OS/2 was the most popular OS for building ATMs, but IBM announced it was discontinuing OS/2 industry support after December 2004. Very few kiosks were built using this OS, and those mostly in the early days.

From a security point of view, whichever kiosk OS you choose has to be secured from public access. The user should not be able to tamper with the OS underneath. A few steps must be taken to secure the OS from tampering.

Here are a few ways to run XP in kiosk mode (secured):
1. Use the Group Policy Management console to restrict the public kiosk account to a restricted user, so that the kiosk user cannot change or tamper with OS files
2. Several custom programs from vendors like SiteKiosk, KioWare, SoftStack, etc. offer a secure shell that wraps your kiosk application
3. Use the Windows SteadyState shared-access computing tool (for Windows XP and above) to restrict access to the kiosk OS and data. It is freely available with licensed copies of Windows XP and Windows Vista (32-bit). Windows SteadyState is the successor to the Microsoft Shared Computer Toolkit

Application Software

Several development platforms and technologies are available for developing the kiosk software. Care should be taken while selecting the development platform, languages, and tools.


1. Choose languages, platforms, and libraries that are current and have a life of at least the next 10 years
2. While selecting any third-party tool/library/control, make sure its source code is available to you
3. Always follow the best practices demonstrated by giant IT players like Microsoft, Sun, etc.
4. Kiosk development is similar to any other product, except that a high level of modularity should be achieved for easy deployment and software upgrades
5. The application should satisfy not only functionality but also performance, usability, and maintainability
6. Choosing the RDBMS is also a crucial decision. Microsoft SQL Server and Oracle are the major RDBMSs in the market; for a network of around 100 kiosks, SQL Server is most often preferred
7. Before purchasing any hardware, perform a proof of concept and confirm that the hardware is compatible with the application you are building and with the OS

Hardware Components

 This section describes the most commonly used devices in the kiosk system

1. Printer: This section focuses on thermal printers only, as the thermal printer is the most preferred printer for kiosks.
It is very important to choose a printer after proper analysis. Ask yourself the questions below:

1. Usage – number of chits printed per day  
2. Printing speed
3. Output quality i.e. paper size and color
4. What is the initial investment
5. Consumables and maintenance charges
6. Printer form factor
7. Support for different fonts
8. Support for graphics printing

List to cross check before you select thermal printer –

• Print method – should be direct thermal and not thermal transfer method
• Fonts available
• Column capacity
• Character size, character set, characters per inch
• Interface – RS232,USB etc
• Print Speed – e.g. 170mm/sec
• Paper dimensions
• Driver support
• EMC and safety standards
• Mass – kilograms
• Auto paper cutter availability
• High MTBF

Examples: Epson, CADMUS

 2. Cash Acceptor Devices:

Despite the growing popularity of alternate payment methods, cash remains a popular form of payment.
The following points should be considered while selecting a cash acceptor:
1. Find out the number of transactions per day and the total capacity of the cash acceptor; usually it is in the range of 500-1000 notes
2. How many denominations do you want to accept through the kiosk system?
3. Maintenance cost over long-term use
4. Whether the acceptor should accept a single note at a time or multiple notes, and how notes should be stored: separate denominations in separate compartments, or together.
Check the time required to validate a note and move it into the cash box
5. Future enhancement capability of the note acceptor should be demonstrated, considering possible future changes by the government to the security features of notes
6. The same note acceptor should be configurable to validate currencies of different countries; usually this is done by changing the validation logic in the embedded IC
7. The cash acceptor should let the kiosk application know about:
o the total cash present in the cash box
o any errors that occurred during note validation
o a cashbox-full notification
o a complete log of events inside the note acceptor, for troubleshooting purposes
8. A common problem with cash acceptors is that if they keep running for a long time without a reset, or after a few years of use, they start jamming: inserted notes remain without getting stacked properly, and sometimes a note can even be damaged by winding
9. Verify the type of interface available with the cash acceptor: RS232, USB, etc.
10. Power requirements; usually DC 24 V, 10 A
11. Check the weight and size of the acceptor; it should fit into the kiosk cabinet
12. Cash recognition method, i.e. optical, magnetic, etc.
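The status reporting in point 7 can be modeled in software as a small wrapper object that tracks the cashbox total, validation errors, and a cashbox-full condition. This is a toy sketch: the capacity and denominations are sample values, not from any real device specification.

```python
# Sketch of a cash-acceptor wrapper exposing the status points above:
# total cash, validation errors, and cashbox-full notification.

class CashAcceptor:
    def __init__(self, capacity_notes=500):
        self.capacity_notes = capacity_notes
        self.notes = []     # denominations of accepted notes
        self.errors = []    # event/error log, for troubleshooting

    def insert_note(self, denomination, valid=True):
        """Accept a note if it validates and the cashbox has room."""
        if not valid:
            self.errors.append(f"validation failed for note {denomination}")
            return False
        if self.cashbox_full():
            self.errors.append("cashbox full, note rejected")
            return False
        self.notes.append(denomination)
        return True

    def total_cash(self):
        return sum(self.notes)

    def cashbox_full(self):
        return len(self.notes) >= self.capacity_notes

acceptor = CashAcceptor(capacity_notes=3)
for note in (100, 50, 100):
    acceptor.insert_note(note)
print(acceptor.total_cash(), acceptor.cashbox_full())  # prints "250 True"
```

A real integration would drive these state changes from the acceptor's serial/USB event stream rather than from method calls.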

3. Smart card Reader/Writer

There are two types of card readers used with kiosks. The following points should be considered while selecting a card reader:

Contact Smart Card Reader/Writer:

• Interface – prefer USB
• Ease of use
• Smart card acceptor: landing type (ensures longer card life and minimum damage to the card’s outer surface)
• Firmware should be easily upgradeable for future updates
• The power source should be USB
• Check whether your smart card reader has passed the Microsoft Windows Hardware Quality Labs certification program
• The card reader should conform to ISO 7816 and the PC/SC specification, and a PC/SC driver should be available

Contactless Smart Card Reader/Writer
• Ease of use
• Operation LED indicator
• Buzzer should be available
• High-speed transactions
• Should have a USB interface; avoid an RS232 interface
• Conforms to the PC/SC 2.0 specification
• RoHS compliant
• CE and FCC compliant
• Conforms to ISO/IEC 14443 (proximity cards, a few centimeters) or ISO/IEC 15693 (vicinity cards), which allows communication at distances up to 50 cm.
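The ISO 7816 conformance mentioned for contact readers covers, among other things, the command APDU format the host exchanges with the card. A minimal sketch of building a short-form command APDU (the AID used in the example is just a sample value, not tied to any particular card):

```python
def build_command_apdu(cla, ins, p1, p2, data=b"", le=None):
    """Build an ISO 7816-4 short-form command APDU: CLA INS P1 P2 [Lc data] [Le]."""
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data   # Lc field followed by the command data
    if le is not None:
        apdu += bytes([le])                 # Le field (expected response length)
    return apdu

# SELECT by name (AID): CLA=00, INS=A4, P1=04, P2=00
select = build_command_apdu(0x00, 0xA4, 0x04, 0x00,
                            data=bytes.fromhex("A0000000031010"))
print(select.hex().upper())  # 00A4040007A0000000031010
```

In practice the PC/SC driver layer handles the transport; the application only assembles and parses APDUs like the one above.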

Graphical User Interface guidelines –


Do –

• Large buttons
• Use a textured background
• Make touchable areas obvious
• Limit choices
• Keep guiding the user as much as possible
• Have simple navigation buttons like back, forward, start
• Notify the user on button click with a beep sound; use a 3-D button effect
• Use a standard layout for numeric screens, similar to ATMs or mobile phones
• Keep messages in simple English, or in regional languages
• Display the user's name somewhere on the screen
• The user should not be aware of the OS underneath
• Let your GUI promote your company brand


Don’t –

• No title bar
• No start menu
• No double-clicking anywhere
• No pull-down menus
• No scrolling or scroll bars
• No dragging and dropping
• Do not use a web application as the kiosk application
• Turn the cursor off
• Avoid black color for the background
• Avoid solid colors
• Don't change themes to a degree where the user will get confused by the change
• Avoid too many animating objects on the screen

Disaster Recovery


1. A disaster recovery plan should be ready while preparing the design of the kiosk system. The plan should cover cases such as software crashes, database corruption, server/hardware failures, network outages, theft, software viruses, unauthorized access or hacking, etc.
2. Whenever disaster happens, recover data which is in a logically complete state
3. The kiosk system should have the ability to disable certain features temporarily to avoid further losses
4. Remote access to the kiosk should be available at any point of time
5. Data loss due to disaster can be minimized by taking regular database backups
6. Transactions should be uploaded to the server as soon as they are completed. If your kiosk supports an offline transactions mode, take care that transaction data is not stored on the kiosk for a long time.
7. Refer to the ISO/IEC 24762:2008 standard for more information
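Point 6 above (upload transactions as soon as they complete, and do not let them linger on the kiosk) can be sketched as a small append-and-purge store. The class and file format below are hypothetical illustrations of the idea, not a prescribed design:

```python
import json, os, tempfile

class TransactionStore:
    """Persist completed transactions locally, then purge on successful upload
    so data never lingers on the kiosk."""
    def __init__(self, path):
        self.path = path

    def record(self, txn: dict):
        # append-only: each line is a logically complete transaction record
        with open(self.path, "a") as f:
            f.write(json.dumps(txn) + "\n")

    def upload_pending(self, send) -> int:
        if not os.path.exists(self.path):
            return 0
        with open(self.path) as f:
            pending = [json.loads(line) for line in f if line.strip()]
        for txn in pending:
            send(txn)              # raises on failure -> file is kept for retry
        os.remove(self.path)       # purge only after every upload succeeded
        return len(pending)

path = os.path.join(tempfile.mkdtemp(), "txns.jsonl")
store = TransactionStore(path)
store.record({"id": 1, "amount": 100})
store.record({"id": 2, "amount": 50})
uploaded = []
count = store.upload_pending(uploaded.append)
print(count)  # 2
```

Deleting the local file only after all records reach the server keeps the kiosk crash-safe: a failed upload leaves the data in place for the next attempt.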


Troubleshooting

1. Implement software and hardware logs
2. Implement email alerts on certain exceptions
3. Keep SQL queries ready to find out mismatches in the database
4. Perform a device test at every restart; don't allow transactions if this test fails
5. Implement a uniform error code methodology; one should easily relate an error code to its error source
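Point 5's uniform error code methodology can be as simple as reserving the leading digits of every code for the subsystem that raised it. A sketch (the SSNN scheme and the subsystem numbers are invented for illustration):

```python
# Hypothetical scheme: SSNN, where SS identifies the subsystem (error source)
# and NN the specific error, so code 2003 reads as "printer, error #3".
SUBSYSTEMS = {10: "cash acceptor", 20: "printer", 30: "card reader", 40: "database"}

def make_error_code(subsystem: int, error: int) -> int:
    return subsystem * 100 + error

def error_source(code: int) -> str:
    # anyone reading a log can map a code back to its source without a lookup table
    return SUBSYSTEMS.get(code // 100, "unknown")

code = make_error_code(20, 3)    # printer error #3
print(code, error_source(code))  # 2003 printer
```

The payoff is operational: support staff seeing 20xx in a log or an email alert immediately know which device to inspect.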

1. LogParser Utility – to parse log files and analyze the problem
2. Implement optional application instrumentation to capture application specific information
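In the spirit of the LogParser approach, even a few lines of scripting can aggregate error codes out of a kiosk log. A sketch with made-up log lines (the log format and the codes are hypothetical):

```python
import re
from collections import Counter

LOG = """\
2011-03-01 10:02:11 INFO  transaction 41 completed
2011-03-01 10:05:02 ERROR 2003 printer: paper out
2011-03-01 10:06:40 ERROR 1001 cash acceptor: note jam
2011-03-01 10:07:15 ERROR 2003 printer: paper out
"""

# Count how often each error code appears, similar in spirit to a LogParser
# query such as: SELECT code, COUNT(*) FROM kiosk.log GROUP BY code
pattern = re.compile(r"ERROR (\d+)")
counts = Counter(pattern.findall(LOG))
print(counts.most_common(1))  # [('2003', 2)]
```

Combined with the uniform error codes above, this kind of aggregation quickly points at the misbehaving subsystem across a fleet of kiosks.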

Here are some tools for remote control of kiosks:
Microsoft Windows: Symantec pcAnywhere, GoToMyPC, LogMeIn Pro, Radmin, RDC, rdesktop
Linux: Symantec pcAnywhere, GoToMyPC, KRDC, LogMeIn Pro, rdesktop
Mac OS X: Symantec pcAnywhere, Apple Remote Desktop, LogMeIn Pro, rdesktop


Security

1. The Payment Card Industry (PCI) Security Standards Council has taken several steps in managing the data security standards that govern the industry.
2. An encryption methodology should be implemented for sensitive data and logging
3. A remote monitoring tool should be used
4. Provide adequate virus protection (block unneeded ports, apply firewall restrictions)
5. Focus on user and network access management
6. Operating system access control
7. The kiosk application should not be affected by attacks like SQL injection; validate every user input before processing it.
8. Refer to the ISO/IEC 17799 and BS 7799-2 / ISO 27001 standards for more information
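Point 7's SQL injection advice boils down to never concatenating user input into SQL. A self-contained illustration using an in-memory SQLite database (Python's sqlite3 stands in here for whatever data access technology the kiosk actually uses):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pin TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '1234')")

user_input = "' OR '1'='1"   # classic injection attempt typed into a PIN field

# UNSAFE: string concatenation lets the input rewrite the query itself
unsafe = conn.execute(
    "SELECT COUNT(*) FROM users WHERE pin = '" + user_input + "'"
).fetchone()[0]

# SAFE: a parameterized query treats the input strictly as data, not SQL
safe = conn.execute(
    "SELECT COUNT(*) FROM users WHERE pin = ?", (user_input,)
).fetchone()[0]

print(unsafe, safe)  # 1 0
```

The concatenated query matches a row despite the wrong PIN because the injected `OR '1'='1'` rewrites the condition; the parameterized version correctly matches nothing.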

Change Management and Software upgrades


1. Software upgrades are usually done in a phased manner
2. Kiosk and server communication should happen through defined 'process codes'; this will help the server be always backward compatible
3. The kiosk application should have a report for viewing the versions of the application components (executables, DLLs, images, themes)
4. A smoke test should be performed after every upgrade
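The 'process codes' idea in point 2 can be sketched as a server-side dispatch table keyed by code: existing codes keep their handlers across releases, and unknown codes get a graceful answer, so older kiosks keep working against a newer server. The codes and handlers below are invented for illustration:

```python
# Registry mapping each numeric process code to its server-side handler.
HANDLERS = {}

def handles(code):
    def register(fn):
        HANDLERS[code] = fn
        return fn
    return register

@handles(101)
def cash_deposit(payload):
    return f"deposited {payload['amount']}"

@handles(102)
def balance_inquiry(payload):
    return "balance ok"

def dispatch(message):
    handler = HANDLERS.get(message["code"])
    if handler is None:
        # tolerate codes this server version doesn't know (forward compatibility)
        return "unsupported code"
    return handler(message["payload"])

print(dispatch({"code": 101, "payload": {"amount": 500}}))  # deposited 500
```

Because new features only ever add new codes, a phased rollout can upgrade the server first and the kiosks later without breaking anything in between.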

1. Use the successor of Microsoft's Systems Management Server (SMS), System Center Configuration Manager 2007, for task automation, compliance management, and policy-based security management, allowing for increased business agility.
2. Use BITS (Background Intelligent Transfer Service) technology for file transfer

What should go in a technical specifications document?


A technical specifications document (TSD) plays a major role in conveying an understanding of the project to any reader. Typically, in the software industry, there are two types of users who refer to a TSD:

1. Developers and project managers who will be directly working on the project

2. Client personnel like technical architects, CIOs, and project managers

I personally believe there should be a separate document for each intended audience. The reason is that each set of users expects certain things from the document which may prove unnecessary to the other.

Document for customer review:

Generally the customer approves the technical design, so it is necessary to convince the customer and assure him that all the technical details are covered in the document and that the system can meet the desired goals. So while preparing the document it is very necessary to cover all points that will help convince the client's technical people. A technical architect from the client side will not be interested in details at the class level or in class diagrams; he will be interested in whether the system complies with its requirements, which may be in terms of

– Functionality,

– Performance,

– Deployment environment,

– Network requirements etc

and finally how you depict the overall architecture:

– Which technologies you are proposing,

– Whether any third-party components are used,

– Are there any licensing considerations,

– Whether more hardware needs to be purchased,

– Which best practices you are proposing, etc.

So one should prepare a document, perhaps consisting of around 25-30 pages, which will provide a complete outlook of the system.

Document for the development team:

The development team is the actual user of the document; they will be interested in knowing the internals of the system: how the system is divided into parts, how each part works, their dependencies, sequences, reuse, specific performance considerations, best practices and their details, etc. It will be good if you can provide some sample programs and links to documentation sites for their further study.

Although it differs from organization to organization whether a single document is prepared or more than one, I am providing a list of topics/areas that should be considered in the TSD.

 1. Overall High Level Architecture(block diagrams)

2. Detailed architecture (block diagrams)

3. Component diagram/architecture

4. Technological implementation of each component or module (if different languages are used for the respective modules)

5. How components will be packaged and made ready for installation/deployment (software packaging)

6. Sequence diagrams for important operations

7. Deployment methodology (e.g. ClickOnce deployment, XCOPY)

8. Folder structure( Development as well as deployment)

9. How components or assemblies would be versioned and source controlled

10. Dataflow diagrams

11. State Diagrams

12. Deployment diagrams

13. Network Architecture

14. How Logging /Event Logging/Email alerts will be implemented

15. How data( database, log files) purging will be done

16. Reporting methodology used (SSRS, Crystal Reports, and why?)

17. Naming conventions (coding, assemblies, database objects)

18. Tiered and Layered Architecture

19. Security aspects and requirements( Application and network)

20. Performance aspects and requirements( Load testing methodology)

21. Pluggable and generic components

22. Highlight design patterns used

23. Database diagrams(ERD)

24. Disaster recovery plans and considerations in the application

25. Encryption methodology

26. How exceptions are handled

27. Class diagrams

28. Important algorithms

29. Usability Aspects

30. Data access technology proposed

31. Performance counters ( for troubleshooting)

Hope this is helpful !

Laxmikant Patil

Windows Azure platform – Tools and Utilities


I was looking for tools available on the Windows Azure platform, and thought to share them with you all. Certainly this is not the complete list available out there, but I found these useful to start with. My next post will cover a few more tools –

1. Windows Azure Monitoring Management Pack(http://www.microsoft.com/downloads/en/details.aspx?FamilyID=4f05f282-f23a-49da-8133-7146ee19f249):

Windows Azure Monitoring Management Pack enables you to monitor the availability and performance of applications that are running on Windows Azure.
Feature Summary
• Discovers Windows Azure applications.
• Provides status of each role instance.
• Collects and monitors performance information.
• Collects and monitors Windows events.
• Collects and monitors the .NET Framework trace messages from each role instance.
• Grooms performance, event, and the .NET Framework trace data from Windows Azure storage account.
• Changes the number of role instances via a task.
2. CloudXplorer from ClumsyLeaf (http://clumsyleaf.com/products/cloudxplorer):
CloudXplorer is a rich UI client for browsing Windows Azure blob storage.
Feature Summary
  • Copy and move blobs between folders, containers or even different accounts.
  • Rename and delete blobs, create new containers and folders.
  • Upload local files/directories and download blobs or entire blob folders.
  • Supports downloading/uploading of page blobs.
  • Auto-resume upload of large files.

3. Windows Azure Storage Explorer (http://azurestorageexplorer.codeplex.com/): Azure Storage Explorer is a useful GUI tool for inspecting and altering the data in your Windows Azure Storage projects, including the logs of your cloud-hosted applications. All 3 types of cloud storage can be viewed and edited: blobs, queues, and tables.

4. Windows Azure Traffic Manager: The Windows Azure Traffic Manager provides several methods of distributing internet traffic among two or more hosted services, all accessible with the same URL, in one or more Windows Azure datacenters. It uses a heartbeat to detect the availability of a hosted service. The Traffic Manager provides various ways of handling the lack of availability of a hosted service.
5.   Sqlcmd utility(http://msdn.microsoft.com/en-us/library/ee336280.aspx): You can connect to Microsoft SQL Azure Database with the sqlcmd command prompt utility that is included with SQL Server. The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command prompt. 
6.   SQL Server Management studio(http://msdn.microsoft.com/en-us/library/ee621784.aspx#ssms): The SQL Server Management Studio from SQL Server 2008 R2 and SQL Server 2008 R2 Express can be used to access, configure, manage and administer SQL Azure Database. Previous versions of SQL Server Management Studio are not supported.
7. Bulk copy utility (bcp.exe): You can transfer data to SQL Azure Database by using the bulk copy utility (bcp.exe). The bcp utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. It can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files.
8.  SQL Azure Reporting(http://msdn.microsoft.com/en-us/library/ee621784.aspx#azurereport): The Customer Technology Preview of SQL Azure Reporting is also available. Microsoft SQL Azure Reporting is a cloud-based reporting service built on SQL Azure Database, SQL Server, and SQL Server Reporting Services technologies. You can publish, view, and manage reports that display data from SQL Azure data sources.
9.  SQL Server Management Objects(http://msdn.microsoft.com/en-us/library/ee621784.aspx#ssmo): A partial set of SQL Server Management Objects (SMO) are enabled by SQL Azure Database. The partial set of SMO are only enabled in order to provide Management Studio access to SQL Azure.
10. Sql Server Migration Assistant(http://msdn.microsoft.com/en-us/library/ee621784.aspx#ssma): Starting with the SQL Server Migration Assistant 2008 for Access version 4.2 release, SSMA enables migrating Microsoft Access schema and data to SQL Azure Database and adds support for Access 2010 databases.
11. Data –tier applications(http://msdn.microsoft.com/en-us/library/ee621784.aspx#datatier):Starting with Microsoft SQL Server 2008 R2 and Microsoft Visual Studio 2010, data-tier applications (DACs) are introduced to help developers and database administrators to package schemas and objects into a single entity called DAC package.SQL Azure Database supports deleting, deploying, extracting, registering, and in-place upgrading DAC packages. SQL Server 2008 R2 and Microsoft Visual Studio 2010 included the DAC Framework 1.0, which supported only side-by-side upgrades.
12. Generate and publish script wizard(http://msdn.microsoft.com/en-us/library/ee621784.aspx#generate):You can use the Generate and Publish Scripts Wizard to transfer a database from a local computer to SQL Azure Database.The Generate and Publish Scripts Wizard creates Transact-SQL scripts for your local database and the wizard uses them to publish database objects to SQL Azure Database.
13. Cerebrata Cloud Storage Studio (http://www.cerebrata.com/products/cloudstoragestudio/): Cloud Storage Studio is a Windows (WPF) based client for managing Windows Azure Storage, an important component of Microsoft's Azure (Microsoft's cloud) platform and hosted applications.
14. Storage throughput measurement utility: The utility will perform a series of data-upload and -download tests using sample data and collect measurements of throughput, which are displayed at the end of the test, along with other statistics.
15. SpaceBlock file transfer utility (http://spaceblock.codeplex.com/): SpaceBlock is a simple Windows front-end for managing Amazon S3, Nirvanix, Azure Blob Storage, and now Sun Cloud Object Storage online service accounts.
16. Windows Azure Management Tool (http://wapmmc.codeplex.com/): The Windows Azure Platform Management Tool (MMC) enables you to easily manage your Windows Azure hosted services and storage accounts. This tool is provided as a sample with complete source code, so you can see how to perform various management and configuration tasks using the Windows Azure Management and Diagnostics APIs.
17. CSPack utility (http://msdn.microsoft.com/en-us/library/dd179441.aspx#Subheading2): The CSPack command-line tool packages your service to be deployed to the Windows Azure fabric. The cspack.exe utility generates a service package file that you can upload to Windows Azure via the Windows Azure Platform Management Portal. By default the package carries the .cspkg extension, but you can specify a different name if you choose.
18. AzureWatch utility(http://www.softsea.com/download/AzureWatch.html): AzureWatch aggregates and analyzes performance counters, queue lengths, and other metrics and matches that data against user-defined rules. When a rule produces a “hit”, a scaling action or a notification occurs.
19. Windows Azure Bootstrapper(http://bootstrap.codeplex.com/):
The Windows Azure Bootstrapper is a command line tool meant to be used by your running Web and Worker roles in Windows Azure.  This tool allows you to easily download resources (either public resources or ones in your blob storage), extract them if necessary, and launch them.  Since you don’t want to always download and run during restarts, it will also help track those dependencies and only launch an installer one time!  In addition, there are some very useful features that make it a great tool to package with your roles.
20. Windows Azure GAC Viewer (http://gacviewer.cloudapp.net/)
This tool shows you a dynamically generated list of all of the assemblies present in the GAC for an Azure instance. Additionally, it also allows you to upload your project file (*.csproj or *.vbproj) to have the references scanned and let you know if there are any discrepancies between what you are using and what is available (by default) in Azure.

21. Azure Database Upload(http://azuredatabaseupload.codeplex.com/):

This utility will allow users to take the data from a SQL Server database and upload it in their Azure table storage account. It provides an easy to use GUI to read data from a SQL Server and upload it into specified Azure table storage.

22. Azure file upload(http://azurefileupload.codeplex.com/):

This utility will allow users to take the data from a delimited flat file and upload it in their Azure table storage account. It provides an easy to use GUI to read data from a delimited flat file and upload it into specified Azure table storage.

23. DocaAzure utilities(http://www.softpedia.com/get/Programming/Components-Libraries/DocaAzure.shtml) :  DocaAzure is a handy package that contains various utilities to help you with your Windows Azure development. DocaAzure is developed in C# and includes:

* Lightweight messaging framework
* IDbSet implementation for Azure Tables
* SMTP relay and server
* Azure Tables & Blobs Backup to the same or other Storage Account
* Some other useful utilities 

24. CloudBerry Explorer for Windows Azure Blob Storage: CloudBerry Explorer makes managing files in Azure Blob Storage easy. By providing a user interface to Azure Blob Storage, CloudBerry Explorer lets you manage your files on Azure just as you would on a local computer.
25. Windows Azure Powershell CmdLets(http://wappowershell.codeplex.com/):
The Windows Azure Platform PowerShell Cmdlets enable you to browse, configure, and manage Windows Azure Compute and Storage services directly from PowerShell.  These tools can be helpful when developing and testing applications that use Windows Azure Services.  For instance, using these tools you can easily script the deployment and upgrade of Windows Azure applications, change the configuration for a role, and set and manage your diagnostic configuration. 
26. Windows Azure Hosted Services VM Manager(http://azureinstancemanager.codeplex.com/): Windows Azure Hosted Services VM Manager is a Windows Service that can manage the number of hosted services (VM’s) running in Azure on either a time based schedule or by CPU load. This allows the application to scale either dynamically or on a timed schedule.
27. Windows Azure Guidance(http://wag.codeplex.com/):

This is an open source project. The key themes of these projects are providing guidance on the scenarios below –

1. Moving to the Cloud
2. Developing for the Cloud
3. Integrating the Cloud

28. FTP to Azure Blob Storage Bridge(http://ftp2azure.codeplex.com/)
Deployed in a worker role, the code creates an FTP server that can accept connections from all popular FTP clients (like FileZilla, for example) for command and control of your blob storage account.
29. Storage Explorer online app (http://storageexplorer.cloudapp.net/login.aspx): Windows Azure Web Storage Explorer makes it easier for developers to browse and manage blobs, queues and tables from a Windows Azure Storage account. You'll no longer have to install a local client to do that. It's developed in C#.
Hope this is useful !
Laxmikant Patil