Cloud Storage Use Case Scenarios – Considering Windows Azure Storage Platform


What is Cloud Storage?

Cloud storage is a model of networked online storage where data is stored in virtualized pools of storage, generally hosted by third parties. It is based on highly virtualized infrastructure and has the same characteristics as cloud computing in terms of agility, scalability, elasticity and multi-tenancy.

Cloud storage services may be accessed through a web service application programming interface (API), a cloud storage gateway or through a Web-based user interface.

  • Cloud storage provides on-demand storage space
  • Cloud storage can be accessed over the Internet
  • Its pricing follows a pay-as-you-go, usage-based model
  • Cloud storage platforms expose rich application interfaces for communication
  • Users need not manage the storage; everything is transparent to the user
  • A high degree of security, data redundancy and disaster recovery is built into cloud storage
  • Companies can focus on their core business and get rid of on-premise storage disks, maintenance tasks, backups, data replication, space and cooling requirements, and the associated human resource bandwidth.

How is it different from traditional storage?

  • There is no difference from traditional storage at the functional interface level
  • Traditionally you had to pre-pay and buy a storage device; cloud storage is delivered on demand
  • Unlike traditional storage, there is no need for capital expenditure; cloud storage is billed on a pay-per-use model
  • Because cloud storage is maintained by a third party, it reduces CapEx and OpEx significantly
  • Traditional storage capacity used to be low because of the upfront investment required, whereas cloud storage gives you virtually unlimited capacity without upfront commitment or investment
  • Traditional on-premise storage could not provide the level of high availability promised by cloud storage
  • Unlike traditional storage, cloud storage services are specialized for structured storage, unstructured storage, NoSQL data, large binary files, media content streaming, etc.
  • Cloud storage services have data redundancy and disaster recovery built in
  • Cloud storage services are inherently built for high fault tolerance
  • Data stored in cloud storage can be accessed from anywhere in the world with adequate security, which gives data users a degree of comfort

Overall Cloud storage provides customers with required agility, scalability and cost effectiveness.

Economics of Cloud storage

While studying the economics, it is necessary to compare apples with apples. Organizations often compare the cost of on-premise storage devices with the cloud storage price per GB, which is misleading. To understand the actual cost of X GB of storage you need to consider the parameters below.

  • Upfront devices investment cost
  • Operating Costs
  • Redundancy cost
  • Capacity bought in advance to meet expected business demand
  • Human resource involvement
  • Disaster recovery
  • Hardware support
  • Technical trainings
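To make that apples-to-apples comparison concrete, the sketch below totals the listed on-premise cost parameters for 1 TB over three years and sets them against a flat per-GB cloud rate. Every figure is an illustrative assumption (not Forrester data), and the $0.07/GB/month rate is only an example price.

```python
# Illustrative 3-year TCO comparison for 1 TB of storage.
# Every figure below is a hypothetical assumption for demonstration only.

GB = 1024  # capacity under comparison, in GB

# On-premise: the parameters listed above, as one-off or yearly costs.
on_premise = {
    "upfront_devices": 3000.0,           # bought in advance of need
    "redundancy_hardware": 1500.0,       # mirrored disks / spares
    "operating_costs_per_year": 800.0,   # power, cooling, space
    "staff_time_per_year": 1200.0,       # human resource involvement
    "disaster_recovery_per_year": 500.0,
    "hardware_support_per_year": 300.0,
    "training_one_off": 400.0,
}

years = 3
on_premise_total = (
    on_premise["upfront_devices"]
    + on_premise["redundancy_hardware"]
    + on_premise["training_one_off"]
    + years * (
        on_premise["operating_costs_per_year"]
        + on_premise["staff_time_per_year"]
        + on_premise["disaster_recovery_per_year"]
        + on_premise["hardware_support_per_year"]
    )
)

# Cloud: a flat pay-as-you-go rate (assumed); redundancy, DR and
# hardware support are already included in the price.
price_per_gb_month = 0.07
cloud_total = GB * price_per_gb_month * 12 * years

print(f"On-premise 3-year TCO: ${on_premise_total:,.2f}")
print(f"Cloud 3-year TCO:      ${cloud_total:,.2f}")
```

The point is not the exact numbers but that most of the on-premise line items simply disappear from the cloud column.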

Source: Forrester Research – based on sample data provided under specific conditions* (cost models compared: Forrester internal storage, Forrester cloud storage, and Forrester cloud gateway storage)

Challenges in using Cloud storage

Security

  • Moving data storage outside organization premises increases the attack surface and introduces security risks
  • Security of data during transit

Accessibility

  • Performance, reliability and availability depend on the WAN bandwidth between the customer and the cloud service provider
  • Although cloud service providers promise 99.9% availability, this may still be a problem for some organizations

Regulatory Compliances

  • Data stored in cloud storage could be subject to government regulations and legal proceedings, e.g. personal information or health care records
  • Copyright and piracy infringement of intellectual property

For example:

1. The Patriot Act in the US allows the government to subpoena all data stored within the US. This might not be acceptable to many organizations.

2. European privacy acts require that data be stored within the country of origin. Storing it in the datacenter of an out-of-country cloud service provider might not meet these requirements.

Cloud Service Provider stability

  • Companies are not permanent and the services and products they provide can change. Outsourcing data storage to another company needs careful investigation and nothing is ever certain. Contracts set in stone can be worthless when a company ceases to exist or its circumstances change.

Total Cost

  • The total recurring cost plus bandwidth cost, added up, may not prove feasible for some customers, depending on the organization's size and the volume and business sensitivity of the data.


Many of the above concerns can be taken care of by using cloud storage gateways.

What is Cloud Storage gateway?

For many organizations, whether small and medium businesses or large enterprises, two major obstacles stop them from utilizing cloud storage; both are genuine, but resolvable.

The first is the relatively slow performance as measured by response time, an obvious consequence of limited Internet bandwidth. This slow response time often makes cloud-based storage unacceptable for some users. The second is the requirement to write application code against the representational state transfer (REST) API. If their applications don’t have a native interface to cloud storage, many small to medium businesses lack the aptitude, desire, skills or time to develop one themselves.

So, what is the solution?

The potential solution is a mechanism that overcomes the above two obstacles, so that enterprises don’t need to worry about storage, communication and performance complexities. Such a solution should wrap cloud storage so that users can use it just as they have been using on-premise storage.

‘Cloud storage gateways’ are the appropriate solution here: they are designed to overcome both obstacles, and they let you deal with cloud storage as if you were dealing with traditional SAN or NAS storage systems over NFS, iSCSI or FC. Additionally, they can act as a ‘primary storage’ unit providing features such as snapshots, thin provisioning, de-duplication and compression. This also eliminates the need for enterprises to write application code before using cloud storage.

There are many cloud storage gateways in the market, and one has to understand how a gateway moves data to cloud storage and how it brings it back when needed. Gateway companies use different approaches and algorithms, some of them patented. The efficiency achieved in data movement determines the quality and productivity of a cloud storage gateway. Additionally, some gateways allow you to use different cloud storage platforms, such as Windows Azure, Amazon S3, EMC Atmos, Nirvanix and others, providing complete flexibility.

Basic working of Cloud Storage Gateways

Cloud storage gateways are essentially customized appliances (servers) containing various types of specialty disk storage, such as HDDs and SSDs (solid state drives), with software control on top. On-premise applications interact with these disks as normal; data is stored on them initially and is then moved to the storage cloud based on policies and algorithms such as the age of the data, last access time, or number of snapshots.
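The policy-based movement just described can be sketched as a simple age-based tiering pass. This is an illustrative sketch, not a real gateway implementation: `upload_to_cloud` is a hypothetical placeholder for whatever API the appliance uses, and the 30-day threshold is an assumed policy.

```python
import os
import time

# Sketch of an age-based tiering policy, as a gateway appliance might
# apply it: files untouched longer than the threshold are candidates to
# move from the local SSD/HDD tiers to cloud storage.

AGE_THRESHOLD_SECONDS = 30 * 24 * 3600  # assumed: move data idle for 30 days

def tier_candidates(root, now=None):
    """Return paths whose last access time exceeds the age threshold."""
    now = time.time() if now is None else now
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if now - os.stat(path).st_atime > AGE_THRESHOLD_SECONDS:
                candidates.append(path)
    return candidates

def migrate(root, upload_to_cloud):
    """Move cold files to cloud storage, leaving a stub for later recall."""
    for path in tier_candidates(root):
        upload_to_cloud(path)      # hypothetical gateway upload call
        open(path, "w").close()    # truncate the local copy to a stub
```

Real gateways layer de-duplication, compression and snapshot awareness on top of a pass like this, but the core decision (local or cloud, based on policy) is the same.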

Though cloud storage gateways come at some cost, they relieve you of several responsibilities and provide a low-TCO solution. You get peace of mind, as the gateway takes care of data backups, snapshots, archival, de-duplication and compression, letting you use cost-effective cloud storage and disaster recovery while also acting as a primary storage medium alongside classic storage technologies.

There are a number of cloud storage gateway vendors in the market today, with more emerging every quarter. They include Cirtas Systems, CTERA Networks, Nasuni Corp., StorSimple Inc., TwinStrata Inc. and others that are still emerging.

Usage scenarios for Storage as a Service

1.      Web Facing Applications

Opting out of owned on-premise infrastructure for a business application and moving it to the cloud can streamline operations, especially for data-driven applications. Although cloud storage can be used for any application dealing with data, it helps especially when your data is growing rapidly or the existing data is larger than you want to manage on-premise. Cloud storage particularly suits web-facing applications, where uploading and downloading content is entirely up to end users and the data can grow by any extent. The data could be unstructured (simple files, documents, videos, audio, media content, database backups), structured content (SQL databases) or NoSQL data; cloud storage is applicable to all kinds.

A few examples of places where cloud storage can be used:

  • Media streaming: streaming of audio and video
  • File/document/photo/audio/video storage, e.g. Dropbox, YouTube
  • Content storage for social media sites such as Myspace, Facebook, Twitter, blogs, etc.
  • Content storage and sharing: pictures and content stored in cloud storage, e.g. SmugMug
  • NoSQL data storage

How about using cloud storage for on-premise web applications?

One can certainly use cloud storage for an on-premise application. However, for better performance the storage and the application should be co-located in the cloud to avoid possible latency. The REST API exposed by cloud storage services can easily be consumed by applications to leverage these services and reap benefits such as data redundancy, availability and cost effectiveness.
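As a sketch of consuming such a REST API, the snippet below builds the HTTP PUT request that Azure Blob storage's Put Blob operation expects, using only the standard library. The account, container, blob name and SAS (shared access signature) token in the URL are placeholders; obtaining a valid SAS token from your storage account keys is outside this sketch.

```python
import urllib.request

# Sketch: upload a file to blob storage through the REST API using a
# pre-issued SAS URL. Azure's Put Blob operation requires the
# x-ms-blob-type header on the PUT request.

def build_put_blob_request(sas_url, data):
    """Build an HTTP PUT request that creates a block blob."""
    req = urllib.request.Request(sas_url, data=data, method="PUT")
    req.add_header("x-ms-blob-type", "BlockBlob")
    req.add_header("Content-Length", str(len(data)))
    return req

req = build_put_blob_request(
    "https://myaccount.blob.core.windows.net/photos/pic1.jpg?sv=...&sig=...",
    b"<image bytes>",
)
# urllib.request.urlopen(req) would perform the actual upload.
```

The same pattern (plain HTTP verbs plus a few service-specific headers) is what makes cloud storage consumable from any language or platform.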

2.      Interfacing Smart Mobile devices

Mobiles are everywhere, whether business phones like BlackBerry, Windows Phone and iPhone, or tablets and iPads running full-blown applications that generate and push data to a central database for analysis, accounting and reporting. Sales forces, field agents and inspectors generate a lot of data, including audio and video content, which needs to be stored for long periods for process compliance or reference purposes. Nowadays devices come with built-in facilities to store content locally or in the cloud; it has become a simple option, and cloud storage is just a click away. Because of limited processing power, memory and bandwidth, data on mobile devices needs to be pushed and pulled frequently, with availability required from anywhere in the world.

Cloud storage proves a very efficient option in such cases, providing complete data availability; the data transfer rate matters little here, since transfers happen over the Internet whether the backend is cloud storage or an on-premise data center.

3.      Unstructured Data Storage

This data is the largest in size in any organization, and also the least controllable. People create copies of documents and version them as they wish; it is difficult to track or control how users manage documents, including emails, text documents, images and photos, manuals, training content, proposals, marketing content, accounting statements and so on.

An IDC paper, “The Diverse and Exploding Digital Universe,” highlights how a single email with a 1MB attachment, when sent to four people, consumes a total of 51MB of storage. (Source: “The Diverse and Exploding Digital Universe: An Updated Forecast of Worldwide Information Growth Through 2011,” March 2008, International Data Corporation.) In other words, email suffers from attachment size limitations and is also an inefficient way of sharing data.

If constraints are applied to storage sizes, users tend to delete content, which may create problems when it needs to be accessed in future. To deal with such situations, a strong storage policy is needed that recognizes the business need for, and impact of, data availability in the organization. There should be a flexible way of sharing data that increases collaboration, along with an approach that proves cost effective and adds value in terms of availability, disaster recovery, data redundancy, backups and versioning support.

Cloud storage helps you address all of the above concerns, fostering effective data storage, sharing and availability with a pay-as-you-go storage option.

4.      Backup/ Retention/Preservation to Cloud

I once met a customer in the construction business, building everything from homes to business towers, ships and dams. The company operates in 7 countries, following country-specific policies and regulations for record keeping. Some countries, such as the U.S. and Canada, require all records for a construction project to be retained for 10 to 15 years. The data must be recoverable and available when required.

One may meet such requirements through regular maintenance of storage policies and infrastructure, covering backup, verification, retention and de-duplication activities. Beyond regulatory compliance, it is important for every organization to retain, back up and preserve its data, which is a real asset and the outcome of thousands of hours of work.


Cloud storage takes over the responsibility of storing your data consistently for as many years as you want. Outsourcing such a pain area allows your IT staff to focus on innovation and value-added services for the business rather than mere record keeping. Below are some of the scenarios in which cloud storage has been used for some time:

  • Backup of individual machines and laptops to the cloud, e.g. CirrusStore
  • Backup of cloud computing data such as OS images, e.g. vSphere, OpenStack Swift storage
  • Backup from one cloud provider to another – doubly sure!
  • Restore of data backed up in the cloud, with retention periods and secure deletion
  • Data preservation (distinguished from archive/retention in that the goal of preservation is to actively maintain the upkeep of information, usually for long periods of time), e.g. preserving libraries and university archives/repositories
  • Archival of data that is less sensitive to latency
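A retention rule like the one the construction customer faces can be sketched as a small policy check: records carry a creation date and a per-country retention period, and anything past retention becomes eligible for secure deletion. The record layout and the periods used here are illustrative assumptions, not actual regulations.

```python
import datetime

# Illustrative retention periods per country (assumed values).
RETENTION_YEARS = {"US": 15, "Canada": 10, "UK": 7}

def is_expired(record, today):
    """True once a record has outlived its country's retention period."""
    years = RETENTION_YEARS[record["country"]]
    expiry = record["created"].replace(year=record["created"].year + years)
    return today >= expiry

record = {"country": "Canada",
          "created": datetime.date(2001, 5, 1)}
print(is_expired(record, datetime.date(2012, 1, 1)))  # past 10 years -> True
```

A cloud backup service would run a check like this on schedule and securely delete (or flag) expired records, instead of the organization maintaining that machinery itself.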


5.      Databases in cloud

Databases, be they SQL or NoSQL, can happily reside in the cloud. Some cloud service providers, such as Microsoft with Windows Azure, offer SQL databases in a ‘Database as a Service’ mode. Microsoft has also partnered with other database providers, such as MySQL, to make them available on its cloud platform. There is a long list of databases supported on cloud platforms, including:

  • Cassandra
  • Bigtable
  • HBase
  • Hypertable
  • Neo4j
  • MongoDB
  • Azure tables

A SQL database deployed in the cloud could serve as your primary database, as a backup copy, or as a secondary data source, for example for reporting. SQL databases in the cloud are charged based on monthly database size.

NoSQL databases present a very cost-effective way of managing data, and there are good examples in the industry of storing large data sets in NoSQL stores such as Azure tables, with partitioning policies spanning data across multiple datacenters around the world.
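As a sketch of such a partitioning policy, the snippet below derives partition and row keys for an entity in the style of Azure tables, where the partition key governs how data is spread across storage nodes and the row key is unique within a partition. The keying scheme itself (region plus customer) is an illustrative choice, not a prescribed one.

```python
# Sketch of a partitioning policy for a NoSQL store such as Azure
# tables. Entities are addressed by (PartitionKey, RowKey); a good
# partition key spreads load while keeping related entities together.

def entity_keys(region, customer_id, order_id):
    """Derive partition and row keys for an order entity."""
    # Partitioning by region keeps a tenant's data near its users and
    # lets the service spread partitions across many servers.
    partition_key = f"{region}-{customer_id}"
    row_key = f"order-{order_id}"
    return partition_key, row_key

pk, rk = entity_keys("europe", "C042", "98761")
print(pk, rk)  # europe-C042 order-98761
```

Queries that supply both keys hit exactly one partition, which is what makes this model cheap and fast at very large data sizes.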

6.      Cloud Storage for cloud computing

If you are using cloud computing for any reason, there is a high possibility that you will fall in love with cloud storage. Most cloud providers have storage integrated with their other offerings.

Whenever you use a cloud platform like Microsoft Windows Azure, whether in IaaS, PaaS or SaaS mode, you are actually using Azure cloud storage in some form or other.

Typical examples of cloud storage use arise with IaaS offerings:

  • VM image store: the guest OS image that is made available to hypervisors for starting a VM
  • Guest auxiliary storage: storage space provisioned at a given QoS, which the guest needs beyond its boot storage
  • In PaaS mode, entire deployment packages are stored in cloud storage to support autoscaling, adding more servers in response to the traffic the application attracts.

7.      Global Content Distribution in Cloud

Global content distribution is not a new mechanism: it boosts application performance when you have users around the globe by caching application content at many places nearer to the users for faster delivery. With the advent of cloud computing this mechanism has become more powerful, with integrated support from cloud providers; Microsoft, for example, offers a CDN (content delivery network) feature as part of the Windows Azure platform. Users need only a few clicks to configure it, and their site gains a tremendous boost in content delivery. With a growing number of CDN nodes around the globe, latency and scalability issues are being addressed proactively and easily.

8.      Scientific calculations and researchers to collaborate and discover

Scientific societies and researchers need large storage systems to hold simulations during their research, and not all scientists have the liberty to buy and maintain storage systems. Cloud storage offers them an efficient technique: use it only when required, pay for use, and release the resources once results have been drawn from the calculations. Cloud storage is also considered a good candidate for storing content generated in digital movie production; computer-generated movie production creates huge data sets that need to be stored only for the short duration of production, perhaps a few months.

For example, a movie like ‘Avatar’ generated one petabyte (one million gigabytes) of data, which was stored using a Microsoft digital asset management solution and could today be stored in cloud storage with added benefits.

Hope this helps!  🙂

Laxmikant Patil


Why use Windows Azure Storage instead of a local portable hard disk?


Since the inception of computers, enterprises have used hard disks for data storage and transfer. The most popular option is the portable hard disk, being cost effective, easy to use and able to be carried anywhere. Because of this success, enterprises became attracted to portable hard disks and started using them for business data storage, backup and archival, which is not the purpose these disks are meant for: portable hard disks were developed for temporary data and primarily for portability.

Companies need to look at more reliable storage solutions against the criteria below, because data that is not available on time, or data that is lost, is as bad as data that never existed!

Below is a comparison between the ‘portable hard disk’ and ‘Windows Azure Storage services’ options against different criteria.

Sr. No. | Criteria | Portable Hard Disk | Cloud (Windows Azure) | Winner
1 | Data security | Low – easily accessible | High – always secure access | Azure Storage
2 | Ease of data access | Easy – just plug the HDD into a USB port | Internet-based data access | HDD
3 | Portability | High | Low – but data is available around the world via the Internet | HDD
4 | Reach | Low – need to physically carry it everywhere | High – data accessible globally | Azure Storage
5 | Disaster recovery | Low – very little chance of data recovery | High – inbuilt disaster recovery; data is copied to 3 places and made available automatically in disaster scenarios | Azure Storage
6 | Data redundancy | Low – must be implemented explicitly, at extra cost | High – inbuilt data redundancy; data is copied to 3 places | Azure Storage
7 | Availability | Low – vulnerable to numerous environmental conditions | High – 99.9% promised availability with world-class datacenter support | Azure Storage
8 | Performance | High – local access | Low – Internet-based access | HDD
9 | Maintenance | Needs periodic verification of the device | No maintenance needed | Azure Storage
10 | Vulnerable to physical damage, heat, dust, wear and tear | Yes – highly vulnerable | No | Azure Storage
11 | Life of storage device | 2–3 years max | Virtually unlimited | Azure Storage
12 | Risk of device theft | High | Low | Azure Storage
13 | Data access concurrency | Cannot be accessed concurrently by more than a few users | Can be accessed by a large number of users concurrently | Azure Storage
14 | Governed SLAs | No | Yes – by Microsoft | Azure Storage
15 | Device drivers needed | Yes | No | Azure Storage
16 | Data access time | Less, thanks to local data transfer | More, because of Internet-based transfer | HDD
17 | Price | 1 TB for approx. $100 | $0.07 per GB per month | HDD
18 | Storage capacity | Fixed – must be decided at buying time | Virtually unlimited | Azure Storage
19 | Storage flexibility | Storage cannot grow by itself as data grows | Scalability out of the box, with the ability to store unlimited data and pay-as-you-go flexibility | Azure Storage
20 | Pricing model | CapEx – capital investment needed | OpEx – only monthly usage charges, no upfront commitment | Azure Storage
21 | Focus | Organization must spend time maintaining the HDD and its redundant copies | No additional effort needed once data is uploaded | Azure Storage
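Working through the price row of the table with the figures quoted there (which are dated; current rates differ), the two pricing models can be compared directly:

```python
# Price comparison using the figures quoted in the table above:
# a 1 TB portable disk for a one-off ~$100 versus Azure storage at
# $0.07 per GB per month. Rates are illustrative, not current.

disk_price = 100.00      # one-off purchase, $
azure_rate = 0.07        # $ per GB per month
tb_in_gb = 1024

monthly = tb_in_gb * azure_rate
print(f"1 TB in Azure: ${monthly:.2f}/month")
print(f"Months until Azure exceeds the disk price: {disk_price / monthly:.1f}")
```

On raw price per GB the disk wins quickly, which is why the table awards that row to the HDD; the cloud's case rests on the redundancy, availability and maintenance rows, whose on-premise costs are not in the disk's sticker price.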

To conclude, ‘Windows Azure Storage Services’ wins in most cases and proves the better option for storage. Weigh this information against your own scenario to analyze the benefits.

Hope this helps !

Laxmikant Patil 🙂

Structured Web: 2025


I. INTRODUCTION

Looking forward to the year 2025, we can expect big dreams to be realized: nanotechnology, artificial intelligence, next-generation cloud and high-performance computing. The impact of such technologies on human life is unimaginable at this point in time. We are not far from tiny nano-factories and nano-robots at home doing smart jobs for us. The computers around us will be vastly faster and smaller, ready to serve us at a fraction of today's energy consumption. These possibilities are beautiful and likely to be realized, but one important question is: ‘Are we ready for that?’ ‘Are we laying the correct foundation for next-generation computing?’ The answer may be ‘yes’ or ‘no’ depending on individual perception and context. However, it is certainly ‘no’ if we look at the current unstructured nature of the World Wide Web, the biggest information store freely available at our fingertips.

Given the tremendous size of the Web, the way we have organized our web resources, and the rate of Web adoption in developing countries, it will soon become difficult to identify relevant information and services of interest. Total dependence on merely text-based search engines for finding information will not be sufficient; we will lose credible information that search engines cannot surface effectively, and such a loss may become unaffordable in the near future.

A lot of research has happened around Web standardization: the research on classifying web sites by Christoph Lindemann and Lars Littig [2], and the research on extracting and managing structured Web data by Michael John Cafarella [1], are remarkable.

This paper proposes a few techniques for structuring the Web to make it more usable.

This is the first paper in a series targeting research on the ‘Structured Web: 2025’ topic.


Fig. 1  Conceptual view of Web showing scattered information without specific structure

II. PERSPECTIVE

Due to the heterogeneity of the Web and its lack of structure, it is crucial to identify the properties of a Web resource that best reflect its functionality. In the relational database world we call this a schema: to read any tuple from a database, we must first know its schema. This principle applies equally to Web resources. Once the schema is known, the second step is to allow the tuple to be read by anybody.

III. APPROACH

Here I propose a two-step methodology for describing the structure of a Web resource.

A. Every Web resource should describe and expose its properties.

B. Every Web resource should be accessible using unified structure.

Here I consider a Web resource to be anything that is publicly accessible.

IV. PROPERTIES DESCRIPTION

This applies to one of the major Web resources, the web site. Every web site should describe its schema using the properties below and expose it for public access.

TABLE I
Web site properties

Sr. No. | Category | Element
1 | Domain | Domain
2–5 | Presence | Country; Languages; Time Zone; Currency
6–13 | Web Content | Images; Text; Video; Audio; XML; XHTML; RSS; Documents (Word, PDF, XLS etc.)
14 | Security | Secure
15 | Audience | Adult
16–18 | Volume | Size of pages; Count of pages; External site out-degree
19 | Technical realization | JavaScript or another scripting language
20 | Domain dictionary | Domain dictionary keywords
21 | Popular URLs | Popular URLs of the site
22 | Rank | Rank (1…10)
23 | Subdomains | Subdomains
24 | Web resource structure | See Section V

Using this information, available from each web site, organizations can write crawlers that visit web sites and retrieve these details to maintain a database of all this information.

Where:

The domain dictionary keywords can be used by search engines to index the web site against those keywords.

Security signifies whether the website can be used openly by anybody or whether registration is required.
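Such a crawler can be sketched as follows: derive each site's well-known properties URL and fetch and parse the XML. The file name follows the convention proposed in Section V-A; the error handling and timeout are assumptions of this sketch.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Sketch of a properties crawler: for each host, fetch the well-known
# WebResource_Properties.xml file and parse it for the database.

def properties_url(host):
    """Well-known location of a site's properties file."""
    return f"http://{host}/WebResource_Properties.xml"

def crawl(hosts):
    """Fetch and parse the properties file of every reachable host."""
    results = {}
    for host in hosts:
        try:
            with urllib.request.urlopen(properties_url(host),
                                        timeout=10) as resp:
                results[host] = ET.fromstring(resp.read())
        except OSError:
            results[host] = None  # site does not expose the file
    return results

print(properties_url("www.example.com"))
# http://www.example.com/WebResource_Properties.xml
```

Because the file lives at a fixed, well-known path, the crawler needs no page rendering or link extraction at all, which is exactly the simplification this paper is after.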

V. RESOURCE ACCESSIBILITY

Once we understand a web site's properties, we understand its general structure. The next level of categorization describes how the actual web site content is made available for public access. This content access is different from access via a rendered web page: directly exposing content through URLs helps categorize the overall information in a relational-database-table-like form, as below.

TABLE II
Web resource structure

Sr. No. | Content Type | Name | URL
1 | Image | Einstein.png | http://www.example.com/Einstein.png
2 | Image | James Cameron | http://www.example.com/JamesC.png
3 | PDF | USLReport | http://www.example.com/USL.pdf
4 | Text | Football Game | http://www.example.com/FootballGame.htm
5 | Video | President Speech | http://www.example.com/Presdspeech.mp4
.. | .. | .. | ..

A. WebResource_Properties.XML file

Now, how will a web site expose its properties and content access structure to the outside world? The answer is a single XML file with a standard schema, published by every website owner: WebResource_Properties.XML. This file should be present in each site's root virtual folder and should be publicly accessible using the following URL format:

http://www.example.com/WebResource_Properties.xml

Using this mechanism, we can build a relational database table covering all websites that expose their web resource properties.

One could easily write a piece of software that lists, for example, all sites from ‘Ireland, in the health care domain, with audio and images, having a page count > 20 and without any security’ for content access.
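That query can be sketched against parsed properties files. The XML layout used here is a hypothetical rendering of the Table I schema, since no standard schema has been published yet:

```python
import xml.etree.ElementTree as ET

# Sketch: filter sites by properties drawn from their (hypothetical)
# WebResource_Properties.xml files.

SAMPLE = """
<WebResourceProperties>
  <Domain>Health care</Domain>
  <Presence><Country>Ireland</Country></Presence>
  <WebContent><Images>true</Images><Audio>true</Audio></WebContent>
  <Security><Secure>false</Secure></Security>
  <Volume><CountOfPages>35</CountOfPages></Volume>
</WebResourceProperties>
"""

def matches(doc):
    """Ireland, health care, audio and images, >20 pages, no security."""
    return (
        doc.findtext("Presence/Country") == "Ireland"
        and doc.findtext("Domain") == "Health care"
        and doc.findtext("WebContent/Audio") == "true"
        and doc.findtext("WebContent/Images") == "true"
        and int(doc.findtext("Volume/CountOfPages")) > 20
        and doc.findtext("Security/Secure") == "false"
    )

print(matches(ET.fromstring(SAMPLE)))  # True
```

Once every site exposes such a file, this kind of structured query replaces keyword guessing against a text index.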

B. Ranking Website

Another way of classifying web resources/web sites is to rank them. This ranking should be based on:

  • Relevant Content volume and quality
  • No. of users and/or web traffic

Web site ranking should be done by independent organizations to reflect real usability to the world. A rank is always linked to a domain, so the domain always comes into the picture when comparing ranks.

C. Domain

Some of the domains can be listed as affiliate site, archive site, blog, corporate site, commerce site, database site, development site, directory site, download site, employment site, etc.

VI. TWO VIEWS OF WEB SITE

The figure below shows the two views of a web site:

A. The view rendered in the browser, which the user sees directly. Search engines work on this view when indexing a web site; they cannot reach a web resource that has no link in the rendered page, and they cannot crawl files that sit on web servers without links from web pages.

B. The view provided through the WebResource_Properties XML file, a sample of which is shown on the right side of the figure.


Fig. 2  Web Virtual directory and two views of it

VII.  CONCLUSION

By implementing the above guidelines, Web information can be structured to a level that gives us the following advantages.

A. Technology neutral way of categorizing of web sites

Using the above method, web sites can be categorized and the Web structured in a technology-neutral way.

B. Improved search engine optimization

Search engines no longer need to depend on text-based indexing alone; the additional web resource properties can help produce more meaningful search results.

C. Minimal work to get started

Web resource owners don’t need to make any changes to their web applications; a single XML file brings a lot of difference.

The figure below shows a conceptual view of the Web once such structuring has happened over a period of time. The Web being a massive data store, it will take time for people to adopt and apply such standards.


Fig 3. – Conceptual view of Web showing structured information after employing above techniques

The important point is that if we don’t act in time, we face a great loss: millions of ideas, research results and opinions from billions of people might slip into a dark age simply because nobody could find them at the right time and carry the work forward. People will keep reinventing the wheel, and the next generation will blame us for not managing the Web responsibly. If we start today, there is hope that the entire Web will be a structured data source by 2025, and the next generation might use a structured query language to search the Web seamlessly.

Because “Information that cannot be found easily is as good as information that is not present.”

VIII. REFERENCES

[1]        Michael John Cafarella, Extracting and Managing Structured Web Data, University of Washington, 2009

[2]        Christoph Lindemann and Lars Littig, Classifying Web Sites, University of Leipzig, Johannisgasse 26, 2007

[3]        John M. Pierre, On the Automated Classification of Web Sites, California, USA, 2001

Future Technologies…. Overview


In this article I have tried to put together a few future technologies (some of them already a reality in some shape or form) on which a lot of research is happening in the Microsoft world. The article will help tech gurus keep tabs on the progress of technology in these areas.

Software Agents

A software agent is a software program that acts for a user or other program in a relationship of agency. Related and derived concepts include intelligent agents (in particular those exhibiting some aspect of artificial intelligence, such as learning and reasoning), autonomous agents (capable of modifying the way they achieve their objectives), distributed agents (executed on physically distinct computers), multi-agent systems (distributed agents that cannot achieve an objective alone and thus must communicate), and mobile agents (agents that can relocate their execution onto different processors).

  1. Visual Studio agents 2010 – Visual Studio Agents 2010 include Test Controller 2010, Test Agent 2010 and Lab Agent 2010. Test Controller 2010 and Test Agent 2010 collectively enable scale-out load generation, distributed data collection, and distributed test execution. Lab Agent 2010 manages testing, workflow and network isolation for virtual machines used with Visual Studio Lab Management 2010.
  2. Life like software agents
  3. Software distribution agents
  4. Research Projects

Natural Language Interpretation

Natural language processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human (natural) languages; specifically, the process of a computer extracting meaningful information from natural language input and/or producing natural language output. It began as a branch of artificial intelligence. In theory, natural language processing is a very attractive method of human–computer interaction.

  1. Excel formulas
  2. Machine Translation – research project
  3. VoiceXML 2.0 contribution
  4. Few more research projects

Machine Translation

Machine translation (MT) is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one natural language to another.

On a basic level, MT performs simple substitution of words in one natural language for words in another, but that alone usually cannot produce a good translation of a text, because recognition of whole phrases and their closest counterparts in the target language is needed. Solving this problem with corpus and statistical techniques is a rapidly growing field that is leading to better translations, handling differences in linguistic typology, translation of idioms, and the isolation of anomalies.

  1. Microsoft Translator (http://microsofttranslator.com)
  2. Windows Live Toolbar – add-in for user’s web sites
  3. Research on Syntax based MT, Phrase based MT, Word alignment, Language Modeling
  4. Office 2007/2010 Translate feature
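As a toy illustration of the substitution problem described above, the sketch below (with a hypothetical two-entry dictionary, not any real MT system or data set) contrasts naive word-for-word substitution with a phrase-first lookup; the word-level version gets the word order wrong, which is exactly the weakness phrase-based MT research addresses.

```python
# Hypothetical toy dictionaries, purely for illustration -- not a real
# MT system or data set.
WORD_DICT = {"the": "la", "white": "blanche", "house": "maison"}
PHRASE_DICT = {"the white house": "la maison blanche"}  # correct word order

def word_for_word(sentence):
    """Naive substitution: translate each word independently."""
    return " ".join(WORD_DICT.get(w, w) for w in sentence.lower().split())

def phrase_based(sentence):
    """Match whole phrases first, then fall back to word substitution."""
    s = sentence.lower()
    for src, tgt in PHRASE_DICT.items():
        s = s.replace(src, tgt)
    return word_for_word(s)
```

Here word_for_word("the white house") produces "la blanche maison" (the adjective lands in the wrong place), while phrase_based produces "la maison blanche".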

Procedural Storytelling

Procedural generation refers to content generated algorithmically rather than manually, and it is often used to generate game levels and other content. While procedural generation does not guarantee that a game or a sequence of levels is nonlinear, it is an important factor in reducing game development time, and it opens up avenues for generating larger and more or less unique seamless game worlds on the fly using fewer resources. This kind of procedural generation is also called “worldbuilding”, in which general rules are used to construct a believable world.

  1. Research on creating immersive 3D Worlds
  2. Digital Storytelling using Kinect
  3. Environmental storytelling
  4. Reduces game development time
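A minimal sketch of the idea above, assuming a simple random-walk carving rule: because generation is driven entirely by a seed, the same world can be rebuilt on the fly from a few bytes instead of being stored as level data.

```python
import random

def generate_world(width, height, steps, seed):
    """Carve a connected path of floor tiles ('.') into a wall-filled
    grid ('#') via a seeded random walk. The same seed always rebuilds
    the same world, so only the seed needs storing -- an illustrative
    toy, not a production worldbuilding algorithm."""
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]
    x, y = width // 2, height // 2
    for _ in range(steps):
        grid[y][x] = "."
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), width - 1)  # clamp walk to the grid
        y = min(max(y + dy, 0), height - 1)
    return ["".join(row) for row in grid]
```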

Machine augmented cognition

Machine augmented cognition (AugCog) is a research field at the frontier between human-computer interaction, psychology, ergonomics and neuroscience that aims at creating revolutionary human-computer interactions. For instance, various research projects aim at evaluating the cognitive state of a user in real time (e.g. from EEG) and at designing closed-loop systems that modulate information flow with respect to the user’s cognitive capacity.

  1. Research on augmented cognition

Cloud Computing

Cloud computing provides computation, software applications, data access, and storage resources without requiring cloud users to know the location and other details of the computing infrastructure.

  1. Windows Azure Platform
  2. Office 365

Cyber-Warfare

Cyber-warfare is action by a nation-state to penetrate another nation’s computers or networks for the purposes of causing damage or disruption.

  1. Research on steganography and steganalysis
  2. Research on warfare commands and control systems
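For readers unfamiliar with steganography, the sketch below shows the classic least-significant-bit scheme on raw bytes (an illustrative toy, not any particular research system); steganalysis is the complementary problem of detecting such hidden payloads.

```python
def hide(cover_bytes, message):
    """Embed the message's bits into the least-significant bit of each
    cover byte, so each carrier byte changes by at most 1."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover_bytes):
        raise ValueError("cover too small for message")
    out = bytearray(cover_bytes)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the LSB
    return bytes(out)

def reveal(stego_bytes, msg_len):
    """Reassemble msg_len bytes from the LSBs of the stego data."""
    out = bytearray()
    for i in range(msg_len):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (stego_bytes[i * 8 + j] & 1)
        out.append(byte)
    return bytes(out)
```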

4G

4G is the fourth generation of cellular wireless standards. It is a successor of the 3G and 2G families of standards.

  1. Windows Phone enablement on 4G
  2. OS support for 4G wireless and wired networks

Mesh Networking

Mesh Networking is a type of networking where each node must not only capture and disseminate its own data, but also serve as a relay for other nodes, that is, it must collaborate to propagate the data in the network.

A mesh network can be designed using a flooding technique or a routing technique. When using a routing technique, the message propagates along a path, by hopping from node to node until the destination is reached. To ensure all its paths’ availability, a routing network must allow for continuous connections and reconfiguration around broken or blocked paths, using self-healing algorithms. A mesh network whose nodes are all connected to each other is a fully connected network. Mesh networks can be seen as one type of ad hoc network. Mobile ad hoc networks (MANET) and mesh networks are therefore closely related, but MANET also have to deal with the problems introduced by the mobility of the nodes.

  1. Toolkit for wireless mesh networking
  2. Research on mesh networking
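The hop-by-hop routing described above can be sketched as a breadth-first search over a node adjacency map; “self-healing” then amounts to recomputing the route when a link disappears. This is an illustrative sketch only; real mesh protocols (e.g. AODV, OLSR) add metrics, route caching and control signalling.

```python
from collections import deque

def find_route(links, src, dst):
    """Return the first (shortest-hop) path from src to dst found by
    breadth-first search over the adjacency map, or None if unreachable."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # destination unreachable

# Hypothetical four-node mesh; each entry lists a node's direct links.
mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
```

If the A-B link breaks, simply rerunning find_route over the updated map reroutes the message via C, which is the essence of self-healing reconfiguration.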

Photonics

The science of photonics includes the generation, emission, transmission, modulation, signal processing, switching, amplification, detection and sensing of light. The term photonics thereby emphasizes that photons are neither purely particles nor purely waves: they have both particle and wave nature. It covers all technical applications of light over the whole spectrum, from the ultraviolet through the visible to the near-, mid- and far-infrared. Most applications, however, are in the range of visible and near-infrared light.

  1. Research on photonics and nanostructures

5G

  • 5G (5th generation mobile networks or 5th generation wireless systems) is a name used in some research papers and projects to denote the next major phase of mobile telecommunications standards beyond the 4G/IMT-Advanced standards effective since 2011. At present, 5G is not a term officially used for any particular specification or in any official document yet made public by telecommunication companies or standardization bodies such as 3GPP, WiMAX Forum, or ITU-R. New standard releases beyond 4G are in progress by standardization bodies, but are at this time not considered as new mobile generations but under the 4G umbrella.

    1. Research on OS compatibility and overall protocol expectations

Multi-touch

In computing, multi-touch refers to a touch sensing surface’s (track pad or touchscreen) ability to recognize the presence of two or more points of contact with the surface. This plural-point awareness is often used to implement advanced functionality such as pinch to zoom or activating predefined programs.

  1. Microsoft Surface
  2. Windows Touch technology
  3. Multi touch in Windows 7
  4. Multi touch programming platform
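Pinch-to-zoom, mentioned above, is commonly implemented by comparing the distance between the two contact points before and after the gesture. A minimal sketch of that calculation (a toy, not any platform’s actual touch API):

```python
import math

def pinch_zoom_factor(p1_start, p2_start, p1_end, p2_end):
    """Zoom factor = finger spread after the gesture divided by the
    spread before it. A factor > 1 means the fingers moved apart
    (zoom in); < 1 means they moved together (zoom out)."""
    before = math.dist(p1_start, p2_start)
    after = math.dist(p1_end, p2_end)
    return after / before
```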

Gesture recognition

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques.

  1. Microsoft Kinect
  2. Gesture recognizers for Tablet PC
  3. Speech recognition

Speech Recognition

Speech recognition is technology that can translate spoken words into text. Some SR systems use “training”, where an individual speaker reads sections of text into the SR system. These systems analyze the person’s specific voice and use it to fine-tune recognition of that person’s speech, resulting in more accurate transcription. Systems that do not use training are called “speaker independent” systems; systems that use training are called “speaker dependent” systems.

  1. Windows Speech Recognition
  2. Speech Macros in Office
  3. Speech recognition programming SDK
  4. Kinect speech recognition
  5. Microsoft Research

Augmented Reality

Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one.

  1. Microsoft research – in the area of mobile phone

Haptics

Haptics is a tactile feedback technology that takes advantage of the sense of touch by applying forces, vibrations or motions to the user. This mechanical stimulation can be used to assist in the creation of virtual objects in a computer simulation, to control such virtual objects, and to enhance the remote control of machines and devices (telerobotics). It has been described as “doing for the sense of touch what computer graphics does for vision”. Haptic devices may incorporate tactile sensors that measure forces exerted by the user on the interface.

  1. Microsoft Surface haptics

Holography

Holography is a technique that allows the light scattered from an object to be recorded and later reconstructed so that when an imaging system (a camera or an eye) is placed in the reconstructed beam, an image of the object will be seen even when the object is no longer present. The image changes as the position and orientation of the viewing system change, exactly as if the object were still present, thus making the image appear three-dimensional.

  1. Microsoft research on Digital holography ,Virtual integral holography

Telepresence

Telepresence refers to a set of technologies which allow a person to feel as if they were present, to give the appearance of being present, or to have an effect, via telerobotics, at a place other than their true location. Telepresence requires that the users’ senses be provided with such stimuli as to give the feeling of being in that other location. Additionally, users may be given the ability to affect the remote location. In this case, the user’s position, movements, actions, voice, etc. may be sensed, transmitted and duplicated in the remote location to bring about this effect. Therefore information may be traveling in both directions between the user and the remote location.

  1. Microsoft research with HP partnership

Immersive Virtual reality

A fully immersive virtual reality is one to which the user connects through direct brain stimulation. All senses would be stimulated, blurring the boundary between reality and fiction.

  1. Microsoft Research
  2. Game development

Depth Imaging

Depth imaging (range imaging) is the name for a collection of techniques used to produce a 2D image showing the distance to points in a scene from a specific point, normally associated with some type of sensor device. The resulting image, the range image, has pixel values that correspond to the distance, e.g. brighter values mean shorter distance, or vice versa. If the sensor used to produce the range image is properly calibrated, the pixel values can be given directly in physical units such as meters.

  • Basic API support
  • Microsoft Research
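The calibration point above can be sketched as a simple linear conversion from raw pixel values to metres. The scale and offset here are hypothetical calibration constants; for example, many depth sensors report millimetres, giving a scale of 0.001.

```python
def depth_to_meters(raw_image, scale=0.001, offset=0.0):
    """Map raw range-image pixel values to metres using a calibrated
    linear model (scale/offset are assumed calibration constants,
    e.g. scale = 0.001 for a sensor reporting millimetres)."""
    return [[v * scale + offset for v in row] for row in raw_image]
```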

Near-field communication

Near-field communication (NFC) is a set of standards for smartphones and similar devices to establish radio communication with each other by touching them together or bringing them into close proximity, usually no more than a few centimetres. Present and anticipated applications include contactless transactions, data exchange, and simplified setup of more complex communications such as Wi-Fi. Communication is also possible between an NFC device and an unpowered NFC chip, called a “tag”.

  1. Microsoft Research

Biometric sensors

Telebiometrics applies biometrics to telecommunications and telecommunications to remote biometric sensing. With the emergence of multimodal biometric systems gathering data from different sensors and contexts, international standards that support systems performing biometric enrollment and verification or identification have begun to focus on human physiological thresholds as constraints and frameworks for “plug and play” telebiometric networks.

  1. Windows Biometric Framework (WBF)
  2. O.S compatibility for sensors
  3. Microsoft research

Smart Power meters

A smart meter is usually an electrical meter that records consumption of electric energy in intervals of an hour or less and communicates that information at least daily back to the utility for monitoring and billing purposes. Smart meters enable two-way communication between the meter and the central system. Unlike home energy monitors, smart meters can gather data for remote reporting. Such an advanced metering infrastructure (AMI) differs from traditional automatic meter reading (AMR) in that it enables two-way communications with the meter.

  1. Microsoft Smart energy Reference architecture
  2. Battery metering
  3. Power and utilities industry: delivery and smart grid solutions
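The interval-recording-plus-daily-reporting behaviour described above can be sketched as a simple aggregation. This is a toy model of what an AMI head-end system might compute from hourly readings, not any vendor’s actual API.

```python
from collections import defaultdict

def daily_report(readings):
    """Aggregate interval readings into per-day kWh totals for the
    utility. readings is a list of (date_str, kwh) tuples, e.g. one
    tuple per hourly interval reported by the meter."""
    totals = defaultdict(float)
    for day, kwh in readings:
        totals[day] += kwh
    return dict(totals)
```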

Machine vision

Machine vision (MV) is the process of applying a range of technologies and methods to provide imaging-based automatic inspection, process control and robot guidance in industrial applications. While the scope of MV is broad and a comprehensive definition is difficult to distil, a “generally accepted definition of machine vision is ‘… the analysis of images to extract data for controlling a process or activity.'”

  1. Microsoft research

Computational Photography

Computational imaging refers to any image formation method that involves a digital computer. Computational photography refers broadly to computational imaging techniques that enhance or extend the capabilities of digital photography. The output of these techniques is an ordinary photograph, but one that could not have been taken by a traditional camera.

  1. Microsoft research

Tablets

A tablet computer, or a tablet, is a mobile computer, larger than a mobile phone or personal digital assistant, integrated into a flat touch screen and primarily operated by touching the screen rather than using a physical keyboard. It often uses an onscreen virtual keyboard, a passive stylus pen, or a digital pen.

  1. Microsoft Tablet PC

Context aware computing

In computer science context awareness refers to the idea that computers can both sense, and react based on their environment. Devices may have information about the circumstances under which they are able to operate and based on rules, or an intelligent stimulus, react accordingly. Context aware devices may also try to make assumptions about the user’s current situation. Context-aware computing is a mobile computing paradigm in which applications can discover and take advantage of contextual information (such as user location, time of day, nearby people and devices, and user activity). Since it was proposed about a decade ago, many researchers have studied this topic and built several context-aware applications to demonstrate the usefulness of this new technology…

  1. Microsoft research

Appliance robots

Appliance robots let you operate your home appliances from the web or a remote location.

  1. Microsoft robotics
  2. Microsoft research

Robotic surgery

  1. Microsoft robotics
  2. Microsoft research

Domestic robots

A domestic robot is a robot used for household chores.

  1. Microsoft robotics
  2. Microsoft research

Swarm robotics

Swarm robotics is a new approach to the coordination of multirobot systems consisting of large numbers of mostly simple physical robots. The premise is that a desired collective behavior emerges from the interactions between the robots and between the robots and the environment. The approach emerged in the field of artificial swarm intelligence, as well as from biological studies of insects, ants and other parts of nature where swarm behavior occurs.

  1. Microsoft research 

Friends, Hope this review is helpful ! 🙂

Laxmikant Patil

How do I choose Content Management System?


If you are planning to decide which Content Management System to adopt, the first thing you need is a clear picture of what you or your organization expects from a CMS.

However, your imagination is naturally limited by what you already perceive a content management system to be, so you will probably need additional information on what CMS products on the market are offering, what the trends are, and what people are doing to increase productivity through collaboration.

This article covers the areas you should consider before committing to a specific CM product. After reading it, you will be familiar with the terminology and jargon you will encounter when visiting individual CM product sites.

The ultimate aim of this article is to –

  • Educate decision makers to understand typical CMS features
  • Relate features to their own requirements
  • Map features/requirements against any product for effective product selection

The list below should help you understand the most commonly expected features of content management systems –

  • Security
  • Enterprise Search
  • Integration
  • Technology
  • Deployment Flexibility
  • Scalability
  • Customer Service Support
  • Localization
  • Multi-Web Site Management
  • Translation Management
  • Brand Management
  • Target Audience Marketing
  • Multi-Channel Marketing
  • Browser Support
  • Workflows
  • Supported Content Types
  • Site Edit Feature
  • Word Connector
  • Archiving flexibility
  • Content Distributor
  • Visitor experience analytics
  • Personalization
  • Is it open source?
  • Supported Web Servers
  • SEO
  • SaaS
  • Cloud hosting option/Cloud Ready
  • Mobile version support
  • Version Control and Rollback features
  • Ad Management
  • Real-time auditing
  • Reporting
  • Email integration
  • Architecture
  • Social networking support
  • Disaster Recovery options
  • Record Management support
  • Web content Management support
  • Multivariable Testing support
  • Web Traffic analytics
  • Business Analytics
  • Image Edit options
  • Background Job Processor
  • Asynchronous Job processor
  • GEOIP feature
  • PDF generation feature
  • Library/content Load balancing
  • Diagnostics control
  • Subscription control at folder/content level
  • Tasks
  • Web Alerts
  • Wiki Support
  • Blog features
  • Content Editor
  • Content Migration support
  • Content Export formats
  • User communities
  • Third party components leverage
  • Drag and Drop architecture
  • Dynamic Page content and layout changes
  • Tags and Categories
  • Comments and Likes
  • Business intelligence
  • Role and rights Management

I am also listing some good products that I know about –

1. Company Name: Microsoft , www.Microsoft.com

Product Name: SharePoint 2007/2010 Server

Technology: Microsoft .Net

 2. Company Name: Ektron , www.Ektron.com

Product Name: Ektron CMS

Technology: Microsoft .Net

3. Company Name: Sitecore , www.Sitecore.com

Product Name: Sitecore CMS

Technology: Microsoft .Net

4. Company Name: dotCMS , www.dotCMs.org

Product Name: dotCMS CMS

Technology: Java

5. Company Name: CrownPeak , www.CrownPeak.com

Product Name: CrownPeak CMS

Technology:

6. Company Name: percussion , www.percussion.org

Product Name: Percussion CM1, CM2

Technology: Java

 7. Company Name: Oxcyon , www.oxcyon.com

Product Name: Centralpoint

Technology: Microsoft .NET

 8. Company Name: LimeLight , www.clickability.com

Product Name: Limelight

Technology:

 9. Company Name: Autonomy Interwoven, www.interwoven.com

Product Name: Autonomy interwoven WCM

Technology: Java

 10. Company Name: Bridgeline Digital, www.bridgelinedigital.com

Product Name: iAPPS Content Manager

Technology: Microsoft .NET

11. Company Name: SDL Tridion, www.sdltridion.com

Product Name: SDL Tridion WCM

Technology: Microsoft .NET

I hope this is helpful!

– Laxmikant Patil

IE 6, 7, 8 Features, Loopholes and vulnerabilities


This white paper discusses the feature differences between IE versions (IE 6, 7 and 8) and the vulnerabilities and loopholes found in these versions.

Microsoft Internet Explorer’s journey started in 1995 (IE 1.0), and the browser is currently in its 9th major generation, available for free download as a Release Candidate. Microsoft’s work on IE has always been influenced by feedback from end users in the areas of usability, performance and security. Ongoing development of web standards and the work done by competitors such as Mozilla Firefox, Google Chrome, Safari and Opera also drive Microsoft to improve. Microsoft has made great efforts to keep increasing its market share by introducing IE on other operating systems such as Apple Mac and Unix, and on mobile devices via Internet Explorer Mobile (with Windows Phone 7 and Windows CE).

Every version of IE passes through regression testing by Microsoft and by real users worldwide. Microsoft keeps providing service packs and patches for issues identified by end users and tries to keep IE updated against the latest security threats and issues they report.

The section below discusses feature differences between the different versions of IE (IE6 to IE8) –

Feature Comparison

Feature* | IE6 | IE7 | IE8
Compatibility View | – | – | Yes
Accelerators | – | – | Yes
Web Slices | – | – | Yes
InPrivate Browsing | – | – | Yes
Tabbed Browsing | – | Yes | Yes, improved
Search | – | Yes | Yes, improved
SmartScreen Filter | Lacks advanced security features | Yes | Yes, improved
Favourites Bar | – | Yes | Yes, improved
InPrivate Filtering | – | – | Yes
Security (Malware, Phishing) | – | – | Yes
Cross-Site Scripting (XSS) Filter | – | – | Yes
Click-Jacking Prevention | – | – | Yes
Domain Highlighting | – | – | Yes
Data Execution Prevention | – | – | Yes
DHTML | Yes | Yes | Yes, improved
CSS Support | Full CSS Level 1 support | CSS 2.1 | CSS 2.1
DOM Level | Full DOM Level 1 support | Level 2.0 | Level 2.0
SMIL | SMIL 2.0 | – | –
MSXML | MSXML 3.0 | – | –
RSS | – | Yes | Yes
Ajax Support | XMLHTTP as an ActiveX control | Native XMLHTTP support | Native XMLHTTP support
JavaScript | Yes | Improved | Improved, faster
OS Support | No: Win 7, WS 08 R2, Vista, WS 08 | No: Win 7, WS 08 R2; Yes: Vista, WS 08 | Yes: Win 7, WS 08 R2, Vista, WS 08

*Only selected features are considered for comparison

Vulnerabilities / Loopholes

Internet Explorer has been subject to many security vulnerabilities and concerns, and it has been the target of much of the malware, adware and computer viruses across the Internet. A number of security flaws affecting IE originated not in the browser itself but in the ActiveX-based add-ons it uses. Because the add-ons have the same privileges as IE, their flaws can be as critical as browser flaws.

Given below are some of the recent vulnerabilities and loopholes found in IE –

  • Microsoft Internet Explorer 6, 7, and 8 could not properly handle objects in memory, which allowed remote attackers to execute arbitrary code by accessing an object that (1) was not properly initialized or (2) was deleted, leading to memory corruption, related to a “dangling pointer,” aka “Uninitialized Memory Corruption Vulnerability.”
  • Remote code execution is one of the critical vulnerabilities observed in IE 6, 7, 8 browsers. This vulnerability could allow remote code execution if a user views a specially crafted web page using IE. One of the recent occurrences of it was fixed by Microsoft and security update was released.( http://www.microsoft.com/technet/security/bulletin/MS10-090.mspx)
  • Information Disclosure: An attacker who successfully exploited this vulnerability could gain the same user rights as the local user and steal the information. This vulnerability was found in IE 6, 7, 8. ( http://www.microsoft.com/technet/security/advisory/980088.mspx)
  • Microsoft Internet Explorer (IE6 to IE8) contained a memory corruption vulnerability, which could result in an invalid pointer being accessed after an object is incorrectly initialized or has been deleted. In certain circumstances, the invalid pointer access can be leveraged by an attacker to execute arbitrary code. This vulnerability is being actively exploited, and exploit code was publically available. (Attackers exploited this in the December 2009 and January 2010 during Operation Aurora, aka “HTML Object Memory Corruption Vulnerability.”)
  • Microsoft Internet Explorer 6 and 7 did not properly handle objects in memory that (1) were not properly initialized or (2) were deleted, which allowed remote attackers to execute arbitrary code via vectors involving a call to the getElementsByTagName method for the STYLE tag name, selection of the single element in the returned list, and a change to the outerHTML property of this element, related to Cascading Style Sheets (CSS) and mshtml.dll, aka “HTML Object Memory Corruption Vulnerability.”
  • Microsoft Internet Explorer 6, 6 SP1, 7, and 8 did not properly handle argument validation for unspecified variables, which allowed remote attackers to execute arbitrary code via a crafted HTML document, aka “HTML Component Handling Vulnerability.”
  • GDI+ in Microsoft Internet Explorer 6 SP1 did not properly allocate an unspecified buffer, which allowed remote attackers to execute arbitrary code via a crafted TIFF image file that triggers memory corruption, aka “GDI+ TIFF Memory Corruption Vulnerability.”
  • Buffer overflow in GDI+ in Microsoft Internet Explorer 6 SP1 allowed remote attackers to execute arbitrary code via a crafted TIFF image file, aka “GDI+ TIFF Buffer Overflow Vulnerability.”
  • Heap-based buffer overflow in GDI+ in Microsoft Internet Explorer 6 SP1 allowed remote attackers to execute arbitrary code via a crafted PNG image file, aka “GDI+ PNG Heap Overflow Vulnerability.”
  • Integer overflow in GDI+ in Microsoft Internet Explorer 6 SP1 allowed remote attackers to execute arbitrary code via a crafted WMF image file, aka “GDI+ WMF Integer Overflow Vulnerability.”
  • Unspecified vulnerability in Microsoft Internet Explorer 6, 6 SP1, and 7 allowed remote attackers to execute arbitrary code via a crafted data stream header that triggers memory corruption, aka “Data Stream Header Corruption Vulnerability.”
  • Microsoft Internet Explorer 6 SP1, 6 and 7 on Windows XP SP2 and SP3, 6 and 7 on Windows Server 2003 SP1 and SP2, 7 on Windows Vista Gold and SP1, and 7 on Windows Server 2008 did not properly handle transition errors in a request for one HTTP document followed by a request for a second HTTP document, which allowed remote attackers to execute arbitrary code via vectors involving (1) multiple crafted pages on a web site or (2) a web page with crafted inline content such as banner advertisements, aka “Page Transition Memory Corruption Vulnerability.”
  • Microsoft Internet Explorer 7, when XHTML strict mode is used, allowed remote attackers to execute arbitrary code via the zoom style directive in conjunction with unspecified other directives in a malformed Cascading Style Sheets (CSS) stylesheet in a crafted HTML document, aka “CSS Memory Corruption Vulnerability.”
  • Microsoft Internet Explorer 6 through 8 allowed remote attackers to spoof the address bar, via window.open with a relative URI, to show an arbitrary URL on the web site visited by the victim, as demonstrated by a visit to an attacker-controlled web page, which triggers a spoofed login form for the site containing that page.
  • Microsoft Internet Explorer 6.0.2900.2180 and earlier allowed remote attackers to cause a denial of service (CPU consumption and application hang) via JavaScript code with a long string value for the hash property (aka location.hash).
  • Microsoft Internet Explorer 8.0.7100.0 on Windows 7 RC on the x64 platform allowed remote attackers to cause a denial of service (application crash) via a certain DIV element in conjunction with SCRIPT elements that have empty contents and no reference to a valid external script location.
  • mshtml.dll in Microsoft Internet Explorer 7 and 8 on Windows XP SP3 allowed remote attackers to cause a denial of service (application crash) by calling the JavaScript findText method with a crafted Unicode string in the first argument, and only one additional argument, as demonstrated by a second argument of -1.
  • Microsoft Internet Explorer 6.0 through 8.0 beta 2 allowed remote attackers to cause a denial of service (application crash) via an onload=screen [“”] attribute value in a BODY element.
  • The XSS Filter in Microsoft Internet Explorer 8.0 Beta 2 allowed remote attackers to bypass the XSS protection mechanism and conduct XSS attacks by injecting data at two different positions within an HTML document, related to STYLE elements and the CSS expression property, aka a “double injection.”
  • Microsoft Internet Explorer 6 SP1 did not properly validate parameters during calls to navigation methods, which allowed remote attackers to execute arbitrary code via a crafted HTML document that triggers memory corruption, aka “Parameter Validation Memory Corruption Vulnerability.”

All of the above vulnerabilities were confirmed and published by the National Vulnerability Database, and appropriate actions were taken by Microsoft. Please note that this is not the complete list of vulnerabilities found; it is just a list of recent ones.

Secunia Study:

Secunia, an independent security advisory firm, maintains a database of vulnerabilities in different versions of IE. The following comparison of unpatched, publicly known vulnerabilities in the latest stable browser versions is based on vulnerability reports by Secunia (Secunia.com):

Browser | Advisories | Vulnerabilities
IE6 | 150 | 227
IE7 | 50 | 151
IE8 | 18 | 77

SecurityFocus Study (SecurityFocus.com is an online computer security news portal and purveyor of information security services):

As per the SecurityFocus report, below is the list of vulnerabilities found in the latest stable versions of IE:

Browser | Vulnerabilities
IE6 | 473
IE7 | 26
IE8 | 62

Conclusion

  1. If you are going to develop a new web application and are deciding how many IE versions it should support, it is clear from the above study that IE6 should be your lowest priority, considering –
    1. the features available in IE7 and IE8 (and the effort required to implement IE6 compatibility)
    2. the (sample) vulnerability counts (IE6: 16, IE7: 11, IE8: 9)
  2. Software giants like Google have begun the drive to phase out support for Microsoft’s Internet Explorer 6, among other older browsers.
  3. Nevertheless, Microsoft is making its own moves to ensure users upgrade to the latest versions of IE. For example, Office Web Applications (browser versions of Word, PowerPoint, Excel and OneNote) will support Internet Explorer 7 and Internet Explorer 8 (as well as Firefox 3.5 on Windows, Mac and Linux, and Safari 4 on Mac). There is no mention of IE6 in the support list; it is not officially supported, but customers will not be blocked from using it.
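Phasing out IE6, as discussed above, is often implemented by inspecting the User-Agent header a browser sends. A simplified sketch is shown below; it assumes the pre-IE11 "MSIE n.0" User-Agent format, and in practice feature detection is generally preferable to UA sniffing.

```python
import re

def ie_version(user_agent):
    """Extract the MSIE major version from a User-Agent string, or
    return None for non-IE browsers (pre-IE11 UA format only)."""
    match = re.search(r"MSIE (\d+)", user_agent)
    return int(match.group(1)) if match else None

def is_supported(user_agent, minimum=7):
    """Example policy: reject IE older than `minimum`; assume any
    non-IE browser is acceptable (hypothetical cut-off, per the
    IE6 phase-out described above)."""
    version = ie_version(user_agent)
    return version is None or version >= minimum
```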

Glossary

Tabs: View and manage multiple websites in one browser window with enhanced tabbed browsing.

Web Slices: Using Web Slices, you can keep up with frequently updated sites directly from the new Favourites Bar. If a Web Slice is available on a page, a green Web Slices icon will appear in the Command Bar. Click on this icon to easily subscribe and add the Web Slices to the Favourites Bar so you can keep track of that “slice” of the web.

Accelerators:  Accelerators help you to use fewer clicks to get driving directions, translate words, and perform routine tasks.

Click Jacking: Click-jacking is an emerging online threat where an attacker’s web page deceives you into clicking on content from another website without you realizing it.

Malware: Malware is software that a cybercriminal can use to steal your bank account information, track everything you type, send out malicious software or spam, or harm your computer.

Phishing: In Phishing, a cybercriminal pretends to be a legitimate organization, such as your bank, in order to deceive you into giving up personal information such as credit card numbers and account information.

References

For Browser feature comparison

  1. http://windows.microsoft.com/en-IN/internet-explorer/products/ie-9/compare-browsers
  2. http://en.wikipedia.org/wiki/Internet_Explorer
  3. http://www.microsoft.com/windows/internet-explorer/compare/compare-versions.aspx
  4. http://www.microsoft.com/windows/internet-explorer/features/safer.aspx
  5. http://www.microsoft.com/windows/products/winfamily/ie/ie7/features.mspx
  6. http://www.microsoft.com/windows/ie/ie6/evaluation/features/default.mspx
  7. http://www.webdevout.net/browser-support

Vulnerabilities

  1. http://secunia.com/advisories/product/11/?task=statistics_2009
  2. http://en.wikipedia.org/wiki/Comparison_of_web_browsers#Vulnerabilities
  3. http://www.microsoft.com/technet/security/bulletin/MS10-090.mspx
  4. http://www.kb.cert.org/vuls/id/492515
  5. http://web.nvd.nist.gov
  6. http://www.cve.mitre.org/cve/

Kiosk Systems: Knowledge base for software professionals – Technology backgrounder


About kiosks

… Kiosks were common in Persia, India, Pakistan, and in the Ottoman Empire from the 13th century onward … Indian kiosks are generally called “Gumti” and sometimes “khokha” too …

The first self-service, interactive kiosk was developed in 1977 at the University of Illinois at Urbana-Champaign by a pre-med student, Murray Lappe. …

–    http://en.wikipedia.org/wiki/

 

Introduction

Kiosks have been part of human life for many centuries. The quote above also shows how technology enabled the kiosk to operate independently in the service of people. In the modern world a kiosk is not just a computer with a touch screen enclosed in a box – it is an integration of mechanical design, computer hardware, software, peripherals, and embedded controllers, and building one requires deep domain expertise and intellectual effort.

This whitepaper targets
• Technical audiences who want to develop kiosk systems
• Organizations willing to start sale through kiosks
• Functional beginners who want to possess basic knowledge of the kiosk systems

In the IT world it is a common scenario that the client provides the business objectives and high-level requirements for the complete system, but is not in a position to provide all the details of the system to be developed.

The objective of this paper is to educate readers and help them become comfortable with developing kiosk systems.

Business Scenarios

Some of the popular business examples where organizations around the world are building kiosk systems are Digital Photo Kiosk, Bill Payment Kiosk, School Kiosk, Prepaid Electricity Kiosk, Internet Kiosk, Ticketing Kiosk etc.

Based on the business intent of the kiosk, one needs to focus on certain aspects of it. For example, a Digital Photo Kiosk uses an internal printer to print pictures instantly, so printer functionality becomes the main focus there: a simple user interface combined with a high-quality printer is the best option.

In cases like the School Kiosk, a student-friendly user interface is a major concern. Parents top up students' smart cards to avoid cash transactions in school, so smart card reader and cash acceptor devices play a major role here. A school kiosk can use a thermal printer for receipts, since its receipts do not need to be preserved for a long time, making a thermal printer the ideal solution. The smart card reader should be contactless, so that students can operate it easily without needing physical contact with the reader. A better cash acceptor minimizes the support effort in case of failed transactions.

Designing an Internet kiosk is a major challenge in terms of security, since the user is allowed to browse the web. Internet kiosks may not need many peripherals, but they do need tight control over the user's access rights to minimize virus attacks; security threats are the major concern.

A Prepaid Electricity Kiosk is used to top up credit on a consumer's smart card; the card is then inserted into the electricity meter at home, which allows the equivalent amount of electricity to be consumed. Such a system uses a contact smart card reader for read/write operations, along with a receipt printer. The card read/write speed of the reader should be the major focus here.

A Ticketing Kiosk vends tickets and provides various other allied services as a single window for the user. These kiosks need a very user-friendly interface, as the user may not be computer savvy and may have little patience to understand the functioning. The user can perform multiple functions from such a kiosk, ranging from finding a train schedule, calculating the fare between two stations, and checking seat availability and ticket status, to ticket issuance.

 System Architecture

 The major components in the system are

1. Kiosk – a PC running the kiosk application, with the peripherals attached

2. Server – runs the business logic

3. Database Server – stores the data

4. Back office application – manages the kiosks and their peripherals

Kiosk Software Architecture
1. Device monitoring: ensures that all the peripherals are ready for use.
2. Remote management software: a utility for managing the kiosk remotely.
3. Multimedia support: any utility or third-party software for enhanced graphics, such as Adobe Flash, Microsoft Silverlight, or DirectX.
4. OS tamper proofing: the most important software running on the kiosk; it restricts the user from accessing computer resources. The user should not be able to modify registry values or system files.
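The device-monitoring component above can be sketched in a few lines. The probe functions and device names below are hypothetical placeholders; a real kiosk would query each peripheral through its driver or SDK.

```python
# Illustrative sketch of a kiosk device-monitoring check. Device names
# and probe callables are assumptions; real probes would talk to the
# peripheral hardware.

def check_devices(probes):
    """Run each probe and return a dict of device name -> 'OK'/'FAIL'."""
    status = {}
    for name, probe in probes.items():
        try:
            status[name] = "OK" if probe() else "FAIL"
        except Exception:
            status[name] = "FAIL"   # a crashing probe counts as a failure
    return status

def all_ready(status):
    """Transactions should be allowed only when every device reports OK."""
    return all(s == "OK" for s in status.values())
```

The kiosk application would run such a check at startup and periodically, disabling transactions whenever `all_ready` returns False.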

Software and Hardware Components

Software Components

System Software

The first thing that comes to mind while designing a kiosk is its operating system. The OS should be chosen against parameters such as

* Integration with any existing system
* Ease of management by the team
* Security
* Initial purchase cost
* Maintenance cost
* Future support from the OS manufacturer

The three main contenders in the kiosk OS market are

o Microsoft Windows

o Linux

o Apple

Microsoft Windows

Microsoft provides a complete family of products that can be used as the kiosk OS, such as

• Windows XP Embedded
• Windows Embedded POSReady 2009
• Windows CE 5.0
• Windows Embedded CE 6.0
• Windows XP Professional
• Windows XP Embedded is one of the most widely used OSes on kiosks
• XP Embedded is much cheaper to license and is quite stable; since it is a componentized version of Windows XP Professional, you can develop for Windows XP and deploy on the embedded version without change
• XP Embedded has a good future, as Windows Embedded Standard is the next generation of Windows XP Embedded
• Another special OS from Microsoft is Windows Embedded POSReady 2009, intended for point-of-sale solutions
• Embedded POSReady has features for seamless connectivity with peripherals, servers, and services
• Windows CE 5.0 is a distinctly different operating system and kernel, rather than a trimmed-down version of desktop Windows
• Windows CE 5.0 is the best choice if you need to change OS code for hardware interfacing or some other special reason. A distinctive feature of Windows CE compared to other Microsoft operating systems is that large parts of it are offered in source code form: Platform Builder (an integrated environment for creating and integrating Windows CE OS images, or customizing operating system designs based on CE) offered several components in source code form to the general public.
• Windows Embedded CE 6.0 (a renamed version of Windows CE 6.0) can be used to develop small-footprint devices with a componentized, real-time operating system. An OS image as small as 300 KB can be built from its 700 components; when size is a concern, use this OS.
• Windows XP Professional is one of the most widely used OSes for kiosks, similar to the XP Embedded version, and is easily upgraded with the latest hotfix or service pack.
• You can use Windows XP Professional for a kiosk because of its robustness; it is among the most stable OSes in the market today and has overcome the hardware-interface problems it had initially.
• XP Professional benefits from the latest development technologies for building kiosk applications, such as the Microsoft .NET Framework, Windows Presentation Foundation, and Microsoft Silverlight.
• Note that XP Embedded does not have the same end-user help functionality available in Windows XP Professional.

Linux
• Linux is also used for kiosks because it is somewhat more stable and secure
• If you want to take advantage of the open source nature of Linux, you may choose this OS
• Before you decide, be aware that it requires a better understanding of the OS to implement and manage the kiosk; this becomes necessary once you start doing a lot of custom work or integration with third-party components, hardware, etc.
• Linux also has embedded versions, but they are not so popular in the kiosk world
• Internet kiosks commonly run Linux (e.g. Red Hat Linux)

Apple
You will find very few people using Mac-based kiosks (e.g. WKiosk by app4mac). It is not so popular as a kiosk OS.

OS/2
IBM OS/2 was the most popular OS for building ATMs, but IBM announced it was discontinuing industry support for OS/2 after December 2004. Very few kiosks were built using this OS, and those mostly in the early days.

From a security point of view, whichever kiosk OS you choose has to be secured from public access. The user should not be able to tamper with the underlying OS, so a few steps must be taken to secure it.

Here are a few ways to run XP in kiosk (secure) mode:
1. Use the Group Policy Management Console to confine the public user to a restricted account, so that the kiosk user cannot change or tamper with OS files
2. Several custom programs from vendors like SiteKiosk, KioWare, SoftStack, etc. offer a secure shell that can be incorporated around your kiosk application
3. Use the Windows SteadyState shared-access computing tool, for Windows XP and above, to restrict access to the kiosk OS and data. It is freely available with licensed copies of Windows XP and Windows Vista 32-bit. Windows SteadyState is the successor to the Microsoft Shared Computer Toolkit.

Application Software

Several development platforms and technologies are available for developing software.
Care should be taken while selecting development platform, languages, and tools for the development.

 Guidelines

1. Choose languages, platforms, and libraries that are current and will have a life of at least the next 10 years
2. While selecting any third-party tool/library/control, make sure its source code is available to you
3. Always follow best practices demonstrated by major IT players like Microsoft, Sun, etc.
4. Kiosk development is similar to any other product development, except that a high level of modularity should be achieved for easy deployment and software upgrades.
5. The application should satisfy not only the functionality but also performance, usability, and maintainability requirements
6. Choosing the RDBMS is also a crucial decision. Microsoft SQL Server and Oracle are the major RDBMSs in the market; for a network of around 100 kiosks, SQL Server is most preferred.
7. Before purchasing any hardware, perform a proof of concept and confirm that the hardware is compatible with the application you are building and with the OS.

Hardware Components

 This section describes the most commonly used devices in the kiosk system

1. Printer: This section focuses on thermal printers only, the most preferred printer type for kiosks.
It is very important to choose a printer after proper analysis. Ask yourself the questions below:

1. Usage – number of chits printed per day  
2. Printing speed
3. Output quality i.e. paper size and color
4. What is the initial investment
5. Consumables and maintenance charges
6. Printer form factor
7. Support for different fonts
8. Support for graphics printing

List to cross check before you select thermal printer –

• Print method – should be direct thermal and not thermal transfer method
• Fonts available
• Column capacity
• Character size, character set, characters per inch
• Interface – RS232,USB etc
• Print Speed – e.g. 170mm/sec
• Paper dimensions
• Driver support
• EMC and safety standards
• Weight – in kilograms
• Auto paper cutter availability
• High MTBF

Examples: Epson, CADMUS
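As an illustration of driving a thermal receipt printer, the sketch below builds an Epson-style ESC/POS byte stream (initialize, text lines, full cut). The receipt content is invented; in a real kiosk the resulting bytes would be written to the printer's RS232 or USB port.

```python
# Minimal sketch of an ESC/POS command stream for a direct-thermal
# receipt printer. The two command constants are standard ESC/POS;
# the receipt text is illustrative.
ESC_INIT = b"\x1b\x40"      # ESC @ : initialize printer
GS_CUT   = b"\x1d\x56\x00"  # GS V 0 : full paper cut (if auto-cutter fitted)

def build_receipt(lines):
    """Build the byte stream for a simple text receipt ending in a cut."""
    data = bytearray(ESC_INIT)
    for line in lines:
        data += line.encode("ascii", "replace") + b"\n"
    data += GS_CUT
    return bytes(data)
```

In production the stream would go through the vendor's driver or straight to the serial port, e.g. via a serial library, after the device test has confirmed the printer is online.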
 

 2. Cash Acceptor Devices:

Despite the growing popularity of alternate payment methods, cash remains a popular form of payment.
The following points should be considered while selecting a cash acceptor:
1. Find out the number of transactions per day and the total capacity of the cash acceptor; usually it is in the range of 500–1000 notes
2. How many denominations do you want to accept through the kiosk system?
3. Maintenance cost, considering long-term use
4. Do you want the acceptor to accept a single note at a time, or multiple notes? How should notes be stored – separate denominations in separate compartments, or together?
Check the time required to validate a note and move it into the cash box
5. The note acceptor's capacity for future enhancement should be demonstrated, considering the possibility of the government changing the security features of notes in future
6. The same note acceptor should be configurable to validate the currencies of different countries; usually this is done by changing the validation logic in the embedded IC
7. The cash acceptor should let the kiosk application know about
o the total cash present in the cash box
o any errors that occurred during note validation
o cashbox-full notification
o a complete log of events happening inside the note acceptor, for troubleshooting purposes
8. A common problem with cash acceptors is that if they keep running for a long time without a reset, or after a few years of use, they start jamming: inserted notes remain without getting stacked properly, and sometimes a note can even be damaged by winding.
9. Verify the type of interface available on the cash acceptor – RS232, USB, etc.
10. Power requirements – usually DC 24 V, 10 A
11. Check the weight and size of the acceptor; it should fit into the kiosk cabinet.
12. Cash recognition method, i.e. optical, magnetic, etc.
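The bookkeeping implied by points 1, 7, and 8 can be sketched as below. The class, its capacity figure, and the error handling are illustrative assumptions, not any vendor's API.

```python
# Hypothetical sketch of the bookkeeping a kiosk application might keep
# for a note acceptor: per-denomination counts, total cash in the box,
# and a cashbox-full notification. Capacity figures are illustrative.

class CashBox:
    def __init__(self, capacity=1000):
        self.capacity = capacity          # typical range: 500-1000 notes
        self.notes = {}                   # denomination -> count

    def accept(self, denomination):
        """Record one accepted note; refuse it if the box is full."""
        if self.is_full():
            raise RuntimeError("cashbox full - note should be rejected")
        self.notes[denomination] = self.notes.get(denomination, 0) + 1

    def total_cash(self):
        return sum(d * n for d, n in self.notes.items())

    def is_full(self):
        return sum(self.notes.values()) >= self.capacity
```

The kiosk would raise a cashbox-full alert (point 7) from `is_full` and reconcile `total_cash` against the server during troubleshooting.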

3. Smart card Reader/Writer

There are two types of card readers used with kiosks – contact and contactless. The following points should be considered while selecting a card reader.

Contact Smart Card Reader/Writer:

• Interface – prefer USB
• Ease of use
• Smart card acceptor – landing type (ensures longer card life and minimum damage to the card's outer surface)
• Firmware should be easily upgradeable for future updates
• Power should be drawn from USB
• Check whether the smart card reader has passed the Microsoft Windows Hardware Quality Labs certification program
• The reader should conform to the ISO 7816 and PC/SC specifications, and a PC/SC driver should be available

Contactless Smart Card Reader/Writer
• Ease of use
• Operation LED indicator
• A buzzer should be available
• High-speed transactions
• Should have a USB interface; avoid RS232
• Conforms to the PC/SC 2.0 specification
• RoHS compliant
• CE and FCC compliant
• Conforms to ISO/IEC 14443 or ISO 15693, which allows communication at distances up to 50 cm

Graphical User Interface guidelines –

Do’s

• Large buttons
• Use a textured background
• Make touchable areas obvious
• Limit choices
• Guide the user as much as possible
• Have simple navigation buttons like back, forward, start
• Notify the user on button click with a beep sound; use a 3-D button effect
• Use a standard layout for numeric screens, similar to ATMs or mobile phones
• Keep messages in simple English, or in regional languages
• Display the user's name somewhere on screen
• The user should not know which OS is underneath
• Let your GUI promote your company brand

Don’ts

• No title bar
• No start menu
• No double-clicking anywhere
• No pull-down menus
• No scrolling or scroll bars
• No dragging and dropping
• Do not use a web application as the kiosk application
• Turn the cursor off
• Avoid a black background
• Avoid solid colors
• Don't change themes to a degree where the user will get confused by the change
• Avoid too many animated objects on the screen

Disaster Recovery

Guidelines

1. The disaster recovery plan should be ready while the kiosk system is being designed. The plan should cover cases such as software crashes, database corruption, server/hardware failures, network outages, theft, software viruses, unauthorized access, and hacking
2. Whenever a disaster happens, recover data to a logically complete state
3. The kiosk system should be able to disable certain features temporarily to avoid further losses
4. Remote access to the kiosk should be available at any point in time
5. Data loss due to a disaster can be minimized by taking regular database backups
6. Transactions should be uploaded to the server as soon as they are completed. If your kiosk supports an offline transaction mode, take care that transaction data is not stored on the kiosk for a long time.
7. Refer to the ISO/IEC 24762:2008 standard for more information
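Guideline 6 can be sketched as a small flush routine that uploads queued transactions and keeps only the failures for retry; `upload_to_server` is a hypothetical stand-in for the real server call.

```python
# Sketch of uploading completed transactions as soon as possible and
# purging them from the kiosk once the server confirms receipt. The
# upload callable is an assumed placeholder for the real network call.

def flush_transactions(pending, upload_to_server):
    """Upload queued transactions; return only the ones that failed."""
    still_pending = []
    for txn in pending:
        try:
            upload_to_server(txn)         # raises on network failure
        except Exception:
            still_pending.append(txn)     # retry on the next flush
    return still_pending
```

Running this after every transaction (and on a timer when offline) keeps the amount of data at risk on the kiosk small.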

Troubleshooting

Guidelines

1. Implement software and hardware logs
2. Implement email alerts on certain exceptions
3. Keep SQL queries ready to find mismatches in the database
4. Perform a device test at every restart; don't allow transactions if this test fails
5. Implement a uniform error-code methodology; one should be able to easily relate an error code to its source
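A minimal sketch of guideline 5's uniform error-code methodology; the prefixes below are assumptions for illustration, not any industry standard.

```python
# Illustrative uniform error-code scheme: a fixed prefix identifies the
# error source, so support staff can relate any code to its subsystem
# at a glance. Prefixes are invented for this example.
ERROR_SOURCES = {
    "PRN": "Printer",
    "CSH": "Cash acceptor",
    "CRD": "Smart card reader",
    "NET": "Network / server link",
}

def describe(code):
    """Map a code like 'PRN-102' back to its error source."""
    prefix = code.split("-", 1)[0]
    return ERROR_SOURCES.get(prefix, "Unknown source")
```

Logging such codes consistently in both the software and hardware logs makes remote troubleshooting far faster.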

Tools
1. LogParser Utility – to parse log files and analyze the problem
2. Implement optional application instrumentation to capture application specific information

Here are some tools for remote control of a kiosk:
Microsoft Windows: Symantec pcAnywhere, GoToMyPC, LogMeIn Pro, Radmin, RDC, rdesktop
Linux: Symantec pcAnywhere, GoToMyPC, KRDC, LogMeIn Pro, rdesktop
Mac OS X: Symantec pcAnywhere, Apple Remote Desktop, LogMeIn Pro, rdesktop

Security

Guidelines:

1. The Payment Card Industry (PCI) Security Standards Council has taken several steps to manage the data security standards that govern the industry.
2. An encryption methodology should be implemented for sensitive data and logging
3. A remote monitoring tool should be used
4. Provide adequate virus protection (block unneeded ports, apply firewall restrictions)
5. Focus on user and network access management
6. Operating system access control
7. The kiosk application should not be affected by attacks like SQL injection; validate every user input before processing it.
8. Refer to the ISO/IEC 17799 and BS 7799-2 / ISO 27001 standards for more information
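Guideline 7 in practice: the sketch below uses parameterized queries so that user input can never change the query text. sqlite3 is used only for illustration; the same placeholder approach applies to SQL Server through its data providers.

```python
# Demonstration of defending against SQL injection with bound
# parameters. The table and data are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pin TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '1234')")

def find_user(name):
    # The '?' placeholder makes the driver treat the value as data,
    # never as SQL text, so injection attempts match nothing.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

A classic injection string such as `"' OR '1'='1"` simply fails to match any row, instead of dumping the whole table as it would with string concatenation.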

 
Change Management and Software upgrades


Guidelines

1. Software upgrades are usually done in a phased manner
2. Kiosk–server communication should happen through defined 'process codes'; this helps the server remain backward compatible
3. The kiosk application should have a report for viewing the versions of the application components (executables, DLLs, images, themes)
4. A smoke test should be performed after every upgrade
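Guideline 2's 'process code' idea can be sketched as a dispatch table on the server; the codes and handlers below are illustrative assumptions.

```python
# Sketch of process-code based kiosk-server messaging: each request
# carries a numeric code and the server dispatches on it. Old kiosks
# keep working after an upgrade because unknown codes get a defined
# error response instead of breaking the protocol.

HANDLERS = {
    1001: lambda payload: {"status": "ok", "action": "balance", **payload},
    1002: lambda payload: {"status": "ok", "action": "top_up", **payload},
}

def handle_request(process_code, payload):
    handler = HANDLERS.get(process_code)
    if handler is None:
        return {"status": "error", "reason": "unsupported process code"}
    return handler(payload)
```

New process codes can be added server-side without disturbing kiosks that still send only the old ones.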

Tools
1. Use the next release of Microsoft's Systems Management Server (SMS), i.e. System Center Configuration Manager 2007, for task automation, compliance management, and policy-based security management, allowing for increased business agility.
2. Use BITS (Background Intelligent Transfer Service) for file transfer

Invalid ViewState Error – Validation of ViewState MAC failed


Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

 

In the exception above, the viewstate is not getting validated/authenticated by the server. This happens when the key used to protect the viewstate does not match across the servers in the web farm.
When we deploy an ASP.NET web application into a web farm environment, each web server's machine.config or web.config must specify the same keys used to protect the view state. The view state is protected for security reasons, and since by default each web server's machine.config auto-generates a different key, the keys must be set explicitly so they are all the same. If they differ, viewstate created by one server will not be understood by another server, hence the error above.
Instead of editing the machine.config file, which may affect other applications running on the same server, the best way is to add a machineKey element to each web server's web.config and define the same keys and algorithm there.
The machineKey element goes under the system.web node, e.g.:
<machineKey validation="SHA1"
  validationKey="A1B2C3D4E5F6F6E5D4C3B2A1A1B2C3D4E5F6F6E5D4C3B2A1A1B2C3D4E5F6F6E5D4C3B2A1A1B2C3D4E5F6F6E5D4C3B2A1A1B2C3D4E5F6F6E5D4C3B2A1B2C3D4E5"
  decryption="Auto"
  decryptionKey="A1B2C3D4E5F6F6E5D4C3B2A1A1B2C3D4E5F6F6E5D4C3B2A1" />
For the machineKey setting parameters you can refer to http://msdn.microsoft.com/en-us/library/w8h3skw9.aspx
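The sample keys above are placeholders; each deployment should generate its own random values. A quick way to produce hex strings of the usual lengths (here assumed: a 64-byte validation key and a 32-byte decryption key) is:

```python
# Generate random hex keys suitable for pasting into a machineKey
# element. Lengths are the commonly recommended sizes (assumptions:
# 64 bytes for the SHA1 validation key, 32 bytes for decryption).
import secrets

validation_key = secrets.token_hex(64).upper()   # 128 hex characters
decryption_key = secrets.token_hex(32).upper()   # 64 hex characters

print(validation_key)
print(decryption_key)
```

Whatever generator you use, the essential points are that the keys are cryptographically random and identical on every server in the farm.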

There are a few more reasons why this error may occur; check these while troubleshooting:

1. Check the application pool recycling settings in IIS. IIS keeps recycling the application pool to maintain application health, and during this recycling process requests from the client may run into an invalid viewstate situation. The fix in this case is to adjust the application pool settings so that recycling is less likely to occur at peak periods
2. Antivirus software or firewall settings may dissect the viewstate, making it difficult for the server to validate it.
3. Improper form posts – viewstate can only be posted back to the same page. Attempting to post an aspx form to another page will fail with an invalid viewstate exception. This behavior is by design.
Some developers choose to disable viewstate encryption (ViewStateEncryptionMode = ViewStateEncryptionMode.Never), which is bad design; encryption is required so that no one can tamper with the view state.

What should go in a technical specifications document?


Introduction

A technical specifications document (TSD) plays a major role in conveying an understanding of the project to any reader. Typically in the software industry there are two types of users who refer to a TSD:

1. Developers and project managers who will be directly working on the project

2. Client personnel such as technical architects, CIOs, and project managers

 I personally believe two documents should be created, one for each intended audience. The reason is that each set of users expects certain things from the document which may prove unnecessary to the other.

Document for customer review:

Generally the customer approves the technical design, so it is obviously necessary to convince the customer and assure him that all the technical details are covered in the document and the system can meet the desired goals. While preparing the document, it is very important to cover all the points that will help convince the client's technical people. A technical architect on the client side will not be interested in class-level details or class diagrams; he will be interested in whether the system complies with its requirements, which may be in terms of

– Functionality,

– Performance,

– Deployment environment,

– Network requirements etc

and finally how you depict the overall architecture:

– Which technologies you are proposing,

– Whether any third-party components are used,

– Whether there are any licensing considerations,

– Whether more hardware needs to be purchased,

– Which best practices you are proposing, etc.

So one should prepare a document, perhaps of around 25–30 pages, that provides a complete outlook of the system.

 Document for Development team:

The development team is the actual user of the document; they will be interested in knowing the internals of the system: how the system is divided into parts, how each part works, their dependencies, sequencing, reuse, specific performance considerations, best practices and their details, etc. It is good if you can provide some sample programs and links to documentation sites for their further study.

 Although it differs from organization to organization whether a single document or more than one is prepared, I am providing a list of topics/areas that should be considered in the TSD.

 1. Overall high-level architecture (block diagrams)

2. Detailed architecture (block diagrams)

3. Component diagram/architecture

4. Technological implementation of each component or module (if different languages are used for the respective modules)

5. How components will be packaged and made ready for installation/deployment (software packing)

6. Sequence diagrams for important operations

7. Deployment methodology (e.g. ClickOnce deployment, XCOPY)

8. Folder structure (development as well as deployment)

9. How components or assemblies would be versioned and source controlled

10. Dataflow diagrams

11. State diagrams

12. Deployment diagrams

13. Network architecture

14. How logging/event logging/email alerts will be implemented

15. How data (database, log files) purging will be done

16. Reporting methodology used (SSRS, Crystal Reports, and why)

17. Naming conventions (coding, assemblies, database objects)

18. Tiered and layered architecture

19. Security aspects and requirements (application and network)

20. Performance aspects and requirements (load testing methodology)

21. Pluggable and generic components

22. Highlight design patterns used

23. Database diagrams(ERD)

24. Disaster recovery plans and considerations in the application

25. Encryption methodology

26. How exceptions are handled

27. Class diagrams

28. Important algorithms

29. Usability Aspects

30. Data access technology proposed

31. Performance counters (for troubleshooting)

Hope this is helpful !

Laxmikant Patil

Windows Azure platform – Tools and Utilities


 

I was looking at the tools available for the Windows Azure platform and thought I would share them with you all. This is certainly not the complete list of what is available out there, but I found these useful to start with. My next post will cover a few more tools.

1. Windows Azure Monitoring Management Pack(http://www.microsoft.com/downloads/en/details.aspx?FamilyID=4f05f282-f23a-49da-8133-7146ee19f249):

Windows Azure Monitoring Management Pack enables you to monitor the availability and performance of applications that are running on Windows Azure.
 
 
Feature Summary
• Discovers Windows Azure applications.
• Provides status of each role instance.
• Collects and monitors performance information.
• Collects and monitors Windows events.
• Collects and monitors the .NET Framework trace messages from each role instance.
• Grooms performance, event, and .NET Framework trace data from the Windows Azure storage account.
• Changes the number of role instances via a task.
 
2. CloudXplorer from ClumsyLeaf (http://clumsyleaf.com/products/cloudxplorer):
CloudXplorer is a rich UI client for browsing Windows Azure blob storage.
 
Feature Summary
  • Copy and move blobs between folders, containers or even different accounts.
     
  • Rename and delete blobs, create new containers and folders.
     
  • Upload local files/directories and download blobs or entire blob folders.
  • Supports downloading/uploading of page blobs.
  • Auto-resume upload of large files.

3. Windows Azure Storage Explorer (http://azurestorageexplorer.codeplex.com/): Azure Storage Explorer is a useful GUI tool for inspecting and altering the data in your Windows Azure Storage projects, including the logs of your cloud-hosted applications. All three types of cloud storage can be viewed and edited: blobs, queues, and tables.

4. Windows Azure Traffic Manager: The Windows Azure Traffic Manager provides several methods of distributing internet traffic among two or more hosted services, all accessible with the same URL, in one or more Windows Azure datacenters. It uses a heartbeat to detect the availability of a hosted service, and provides various ways of handling the unavailability of a hosted service.
 
5.   Sqlcmd utility(http://msdn.microsoft.com/en-us/library/ee336280.aspx): You can connect to Microsoft SQL Azure Database with the sqlcmd command prompt utility that is included with SQL Server. The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command prompt. 
 
6.   SQL Server Management studio(http://msdn.microsoft.com/en-us/library/ee621784.aspx#ssms): The SQL Server Management Studio from SQL Server 2008 R2 and SQL Server 2008 R2 Express can be used to access, configure, manage and administer SQL Azure Database. Previous versions of SQL Server Management Studio are not supported.
 
7. Bulk copy utility (bcp.exe): You can transfer data to SQL Azure Database by using the bulk copy utility (bcp.exe). The bcp utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. It can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files.
 
8.  SQL Azure Reporting(http://msdn.microsoft.com/en-us/library/ee621784.aspx#azurereport): The Customer Technology Preview of SQL Azure Reporting is also available. Microsoft SQL Azure Reporting is a cloud-based reporting service built on SQL Azure Database, SQL Server, and SQL Server Reporting Services technologies. You can publish, view, and manage reports that display data from SQL Azure data sources.
 
9.  SQL Server Management Objects(http://msdn.microsoft.com/en-us/library/ee621784.aspx#ssmo): A partial set of SQL Server Management Objects (SMO) are enabled by SQL Azure Database. The partial set of SMO are only enabled in order to provide Management Studio access to SQL Azure.
 
10. Sql Server Migration Assistant(http://msdn.microsoft.com/en-us/library/ee621784.aspx#ssma): Starting with the SQL Server Migration Assistant 2008 for Access version 4.2 release, SSMA enables migrating Microsoft Access schema and data to SQL Azure Database and adds support for Access 2010 databases.
 
11. Data-tier applications (http://msdn.microsoft.com/en-us/library/ee621784.aspx#datatier): Starting with Microsoft SQL Server 2008 R2 and Microsoft Visual Studio 2010, data-tier applications (DACs) were introduced to help developers and database administrators package schemas and objects into a single entity called a DAC package. SQL Azure Database supports deleting, deploying, extracting, registering, and in-place upgrading of DAC packages. SQL Server 2008 R2 and Microsoft Visual Studio 2010 included DAC Framework 1.0, which supported only side-by-side upgrades.
 
12. Generate and Publish Scripts Wizard(http://msdn.microsoft.com/en-us/library/ee621784.aspx#generate): You can use the Generate and Publish Scripts Wizard to transfer a database from a local computer to SQL Azure Database. The wizard creates Transact-SQL scripts for your local database and then uses them to publish the database objects to SQL Azure Database.
 
13. Cerebrata Cloud Storage Studio(http://www.cerebrata.com/products/cloudstoragestudio/): Cloud Storage Studio is a Windows (WPF) based client for managing Windows Azure Storage, an important component of Microsoft’s Azure (Microsoft’s Cloud) platform, and hosted applications.
 
14. Storage throughput test utility: This utility performs a series of data-upload and -download tests against cloud storage using sample data and collects throughput measurements, which are displayed at the end of the test along with other statistics.
  
15. SpaceBlock file transfer utility(http://spaceblock.codeplex.com/): SpaceBlock is a simple Windows front-end for managing Amazon S3, Nirvanix, Azure Blob Storage, and now Sun Cloud Object Storage online service accounts.
 
16. Windows Azure Management Tool(http://wapmmc.codeplex.com/): The Windows Azure Platform Management Tool (MMC) enables you to easily manage your Windows Azure hosted services and storage accounts. This tool is provided as a sample with complete source code, so you can see how to perform various management and configuration tasks using the Windows Azure Management and Diagnostics APIs.
 
17. CSPack utility(http://msdn.microsoft.com/en-us/library/dd179441.aspx#Subheading2): The CSPack Command-Line Tool packages your service to be deployed to the Windows Azure fabric. The cspack.exe utility generates a service package file that you can upload to Windows Azure via the Windows Azure Platform Management Portal. By default the package file is given a .cspkg extension, but you can specify a different name if you choose.
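A minimal invocation might look like the sketch below; the service definition and output file names are placeholders for whatever your project actually uses:

```shell
# Package the service described by ServiceDefinition.csdef into a
# deployable .cspkg file, using the /out switch to name the output.
cspack ServiceDefinition.csdef /out:MyService.cspkg
```

The resulting MyService.cspkg, together with your service configuration (.cscfg) file, is what you upload through the Management Portal.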
 
18. AzureWatch utility(http://www.softsea.com/download/AzureWatch.html): AzureWatch aggregates and analyzes performance counters, queue lengths, and other metrics and matches that data against user-defined rules. When a rule produces a “hit”, a scaling action or a notification occurs.
 
19. Windows Azure Bootstrapper(http://bootstrap.codeplex.com/):
The Windows Azure Bootstrapper is a command line tool meant to be used by your running Web and Worker roles in Windows Azure.  This tool allows you to easily download resources (either public resources or ones in your blob storage), extract them if necessary, and launch them.  Since you don’t want to always download and run during restarts, it will also help track those dependencies and only launch an installer one time!  In addition, there are some very useful features that make it a great tool to package with your roles.
 
20. Windows Azure GAC Viewer (http://gacviewer.cloudapp.net/)
This tool shows you a dynamically generated list of all of the assemblies present in the GAC for an Azure instance. Additionally, it also allows you to upload your project file (*.csproj or *.vbproj) to have the references scanned and let you know if there are any discrepancies between what you are using and what is available (by default) in Azure.
 

21. Azure Database Upload(http://azuredatabaseupload.codeplex.com/):

This utility allows users to take data from a SQL Server database and upload it to their Azure Table storage account. It provides an easy to use GUI to read data from a SQL Server database and upload it into the specified Azure Table storage.
 

22. Azure file upload(http://azurefileupload.codeplex.com/):

This utility allows users to take data from a delimited flat file and upload it to their Azure Table storage account. It provides an easy to use GUI to read data from a delimited flat file and upload it into the specified Azure Table storage.
 

23. DocaAzure utilities(http://www.softpedia.com/get/Programming/Components-Libraries/DocaAzure.shtml): DocaAzure is a handy package that contains various utilities to help you with your Windows Azure development. DocaAzure is developed in C# and includes:

* Lightweight messaging framework
* IDbSet implementation for Azure Tables
* SMTP relay and server
* Azure Tables & Blobs Backup to the same or other Storage Account
* Some other useful utilities 

 
24. CloudBerry Explorer for Windows Azure Blob Storage: CloudBerry Explorer makes managing files in Azure Blob Storage easy. By providing a user interface to Azure Blob Storage, CloudBerry lets you manage your files on Azure just as you would on a local computer.
 
25. Windows Azure Powershell CmdLets(http://wappowershell.codeplex.com/):
The Windows Azure Platform PowerShell Cmdlets enable you to browse, configure, and manage Windows Azure Compute and Storage services directly from PowerShell.  These tools can be helpful when developing and testing applications that use Windows Azure Services.  For instance, using these tools you can easily script the deployment and upgrade of Windows Azure applications, change the configuration for a role, and set and manage your diagnostic configuration. 
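As an illustrative sketch only: a session with these cmdlets might look like the following. The snap-in and cmdlet names below are my assumption based on the project of that era, and the subscription ID and certificate variables are placeholders; verify both against the version of the cmdlets you actually install.

```shell
# Load the cmdlets (delivered as a PowerShell snap-in; name is an assumption).
Add-PSSnapin AzureManagementToolsSnapIn

# List the hosted services under a subscription, authenticating
# with a management certificate ($subscriptionId and $managementCert
# are placeholders you would set beforehand).
Get-HostedServices -SubscriptionId $subscriptionId -Certificate $managementCert
```

From there, deployment and configuration cmdlets can be chained in scripts, which is what makes these tools useful for automated deploy/upgrade scenarios.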
 
26. Windows Azure Hosted Services VM Manager(http://azureinstancemanager.codeplex.com/): Windows Azure Hosted Services VM Manager is a Windows service that can manage the number of hosted service instances (VMs) running in Azure, either on a time-based schedule or by CPU load. This allows the application to scale either dynamically or on a timed schedule.
 
27. Windows Azure Guidance(http://wag.codeplex.com/):

This is an open source project. The key themes for these projects are providing guidance on the scenarios below:

1. Moving to the Cloud
2. Developing for the Cloud
3. Integrating the Cloud

 
28. FTP to Azure Blob Storage Bridge(http://ftp2azure.codeplex.com/)
Deployed in a worker role, the code creates an FTP server that can accept connections from all popular FTP clients (like FileZilla, for example) for command and control of your blob storage account.
 
29. Storage Explorer online app(http://storageexplorer.cloudapp.net/login.aspx): Windows Azure Web Storage Explorer makes it easier for developers to browse and manage Blobs, Queues and Tables from a Windows Azure Storage account. You no longer have to install a local client to do that. It’s developed in C#.
 
Hope this is useful!
 
Laxmikant Patil