Microsoft Azure Virtual Machine Scale Set FAQs


Imagine a situation where your application has variable resource consumption, which is usually the case in the real world. If you keep a constant number of servers/resources, you have to provision for the higher side of the traffic pattern, leaving extra capacity idle, which is not economically efficient. Such unpredictable application usage is very common and needs a cost-efficient solution. The Virtual Machine Scale Set feature provides this ability: the platform can add more resources automatically based on current resource consumption. In simple terms, if your servers are running at, say, 75% CPU utilization, the platform can automatically add X number of VMs to sustain the load and deliver a consistent user experience; as soon as resource consumption drops, the additional virtual machines are removed, saving cost of ownership.

As the name indicates, the Virtual Machine Scale Set feature provides high availability to your application by spawning multiple identical virtual machines instantly. This feature is also useful when your application has large compute requirements, such as big data processing.

Now let us discuss a few frequently asked questions on scale sets –

Once I have a scale set defined which can create virtual machines dynamically, how does load distribution happen between them?

Microsoft Azure has an integrated load balancer which distributes the load across the VMs in the scale set.

Can both Windows and Linux VMs be created using this feature?

Yes, both Windows and Linux images, along with their extensions, are supported. Extensions are scripts which enable or configure additional features on a virtual machine.

Does this scaling happen automatically?

Yes. VM scale sets are integrated with Azure Insights Autoscale, which makes automatic scaling possible.

If I have a web application with variable resource needs, can I use the virtual machine scale set feature?

Yes, you can use it for web front ends or a services layer. In that case you will have to plan for application state so that it is not lost during scale out and scale in.

Is it necessary to use the Azure Resource Manager deployment model to create a scale set?

Yes, scale sets work only with the Azure Resource Manager model. You can create one using the portal UI, a JSON template, or the REST APIs; the choice is yours. Quickstart templates are available here.

Do VM scale sets work with Azure availability sets?

Yes. A VM scale set is an implicit availability set with 5 Fault Domains and 5 Update Domains. You don’t need to configure anything explicitly for that.

Can I use my custom VM images for a scale set, or am I limited to available platform images¹ only?

Yes, you can use your custom images, but only up to 40 VMs, because custom images are currently limited to a single storage account. If you use platform images¹, you can create up to 100 VMs, which can be distributed across multiple storage accounts.

¹ A platform image in this context is an operating system image from the Azure Marketplace, like Ubuntu 16.04, Windows Server 2012 R2, etc.

Does each VM in the scale set have a public IP associated with it?

A VM scale set is created inside a virtual network, and individual VMs in the scale set are not allocated public IP addresses. To connect to or RDP into these VMs, you have multiple options, such as going through other Azure resources in your virtual network that do have public IPs, like a load balancer or a VM outside the scale set.

Can I attach data disks to VMs in a scale set?

As of today, this feature is not available; you cannot attach a data disk. The available storage options are –

  1. Azure Files storage (SMB mounted drives)
  2. Azure Storage – blobs, tables
  3. The OS drive
  4. The temp drive, which is not backed by Azure Storage

Can I install my own software or set something up while VMs are being provisioned in a scale set?

Yes, you can install new software on a platform image using a VM Extension. A VM extension is software that runs when a VM is deployed. You can run any code you like at deployment time using a custom script extension.

It is advisable not to have a long-running script or VM extension, which can delay VM provisioning and impact application performance. Consider creating a custom image if a lot of customization is required on top of a platform image.
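For illustration only, here is a rough sketch (in Python, just to compose the JSON) of what a custom script extension entry might look like. The publisher/type and settings keys follow my understanding of the Linux custom script extension and should be verified against the current Azure documentation; the script URL and command are hypothetical placeholders.

```python
import json

# Illustrative sketch only: verify the exact extension schema against the
# current Azure documentation. The script URL and command are placeholders.
custom_script_extension = {
    "name": "installMySoftware",
    "properties": {
        # Publisher/type for the Linux custom script extension (assumed).
        "publisher": "Microsoft.Azure.Extensions",
        "type": "CustomScript",
        "typeHandlerVersion": "2.0",
        "autoUpgradeMinorVersion": True,
        "settings": {
            # Keep the script short; long-running scripts delay provisioning.
            "fileUris": ["https://example.com/scripts/install.sh"],
            "commandToExecute": "bash install.sh",
        },
    },
}

# This dictionary would sit under the scale set's
# virtualMachineProfile.extensionProfile.extensions array in a template.
print(json.dumps(custom_script_extension, indent=2))
```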

What are the available options for logging into VMs in a scale set?

Since individual VMs in the scale set do not have public IPs, you cannot RDP into them directly. You have the following options available –

  1. Connect to VMs using NAT rules: You can create a public IP address, assign it to a load balancer, and define inbound NAT rules which map a port on the IP address to a port on a VM in the VM scale set (see the sketch after this list).
  2. Create a standalone VM in the same virtual network/subnet, assign it a public IP, log in to this standalone VM using RDP, and then connect to individual VMs in the scale set using their internal IP addresses within the virtual network. Such a standalone VM is sometimes referred to as a “jumpbox”.
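To make the NAT-rule option concrete, here is a minimal sketch of the kind of port mapping such rules perform; the starting front-end port and the instance count are arbitrary example values, not Azure defaults.

```python
# Sketch of the mapping an inbound NAT rule pool performs: each scale set
# instance gets its own front-end port on the load balancer's public IP,
# forwarded to the RDP port (3389) on that instance. The starting port
# (50000) is an arbitrary example value.
FRONTEND_PORT_START = 50000
RDP_PORT = 3389

def nat_mapping(instance_count: int) -> dict[int, tuple[int, int]]:
    """Return {instance_id: (frontend_port, backend_port)}."""
    return {
        i: (FRONTEND_PORT_START + i, RDP_PORT)
        for i in range(instance_count)
    }

if __name__ == "__main__":
    for instance, (front, back) in nat_mapping(3).items():
        print(f"instance {instance}: <public-ip>:{front} -> VM port {back}")
```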

Typically, what types of rules can be defined for scaling a scale set out and in?

The picture below shows a typical rule available while configuring the scale set –

[Image: VM scale set scaling rule configuration]

In the above example, you are configuring a scale set with a minimum of 1 VM and a maximum of 10 VMs. Whenever the CPU percentage reaches 75%, 1 new VM is added; as soon as it drops to 25%, 1 VM is removed from the scale set.
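Purely as an illustration of that decision logic (not a call to the Azure Autoscale API), here is a minimal Python sketch using the same thresholds and limits:

```python
# Illustration of the scale-out/scale-in decision from the example above.
# Thresholds and limits mirror the rule shown; this is conceptual logic,
# not the Azure Autoscale service itself.
MIN_INSTANCES = 1
MAX_INSTANCES = 10
SCALE_OUT_CPU = 75.0   # add a VM at or above this average CPU %
SCALE_IN_CPU = 25.0    # remove a VM at or below this average CPU %

def desired_instance_count(current: int, avg_cpu_percent: float) -> int:
    """Return the instance count the rule would ask for."""
    if avg_cpu_percent >= SCALE_OUT_CPU and current < MAX_INSTANCES:
        return current + 1
    if avg_cpu_percent <= SCALE_IN_CPU and current > MIN_INSTANCES:
        return current - 1
    return current

# Example: 3 instances averaging 80% CPU -> scale out to 4.
print(desired_instance_count(3, 80.0))  # 4
print(desired_instance_count(4, 20.0))  # 3
```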

Hope this helps in gaining a basic understanding of virtual machine scale sets.

Happy reading !! Thanks.

 

What is the Microsoft Azure DevTest Labs feature all about?


In almost every project we work on, we set aside something like 1-2 weeks for environment (Dev/Test) provisioning, during which we plan the project at low productivity: we buy software licenses if any are needed or upgrade existing ones to new versions, and upgrade machine configurations to get ready for the next mission. So far this has been an accepted solution, considered a prerequisite step no matter how business-critical the project is; this was waste we were allowed to have.

A few companies tried to tackle this problem with solutions available in the market, such as CloudShare, a cloud computing company providing ready-to-use development and testing labs. However, adoption was not very high because the primary IT strategy was not aligned with their cloud computing platform. For example, if your Active Directory is not in sync with the dev labs in CloudShare, you have to create an entire dummy replica of it, which is not practical most of the time.

Recently a more mature solution has been devised by Microsoft – Azure DevTest Labs, a feature which enables companies to quickly provision development and test environments. It is a more promising solution because many customers already run a major part of their workload on the Microsoft Azure cloud platform; building Dev, Test, and Prod in the same cloud makes more sense, as it simply becomes an extension of your on-premises network.

So, in a nutshell, we are no longer allowed to waste 1-2 weeks; ultimately, you are that much closer to your market and end customers, helping win the business.

So let us start understanding some basics about Azure DevTest Labs –

  • “Lab” creation is the first activity; a lab is basically the outer container or boundary for a collection of virtual machines used for development or testing purposes
    • A lab provides a secure boundary, so a machine from one lab cannot see machines in a different lab; user permissions are organized the same way
    • In real life, a lab could map to a project name or a sub-project name
    • You can automate lab creation, including custom settings, by creating a Resource Manager template and using it to create identical labs again and again (see the sketch after this list).
    • A lab owner has access to all resources within the lab. Therefore, they can modify policies, read and write any VMs, change the virtual network, and so on.
    • An entire lab can be auto shut down to save resource cost when not in use
  • Once you create a Lab, you create multiple “Virtual machines” in it.
    • Each virtual machine will have an OS image attached to it.
    • You can limit the number of virtual machines in a lab, and also the number of virtual machines per user
    • To save cost, each machine can be auto started and auto shut down at a specific time of day/week
    • You can use out-of-the-box VM images available or you can create and upload your own Custom VM image
    • A lab user can use the VMs; however, they are not allowed to modify any lab settings
  • Each virtual machine can have “artifacts”; an artifact is a configuration or tool which you want to install during VM provisioning so your VM has it ready before first logon.
    • Examples of artifacts could be the 7-Zip utility, Notepad++, or a specific browser for testing purposes. The picture below shows a few sample artifacts; however, a long list is available –

[Image: sample artifacts available in Azure DevTest Labs]

  • If you have more than one artifact to be added, you can decide the order in which they should be applied
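As a minimal sketch of the lab-creation automation mentioned in the list above, the Python snippet below composes a bare-bones Resource Manager template with a single DevTest Labs resource; the API version, lab name, and location are assumptions for illustration and should be checked against the current template reference.

```python
import json

# Bare-bones ARM template sketch for a DevTest Lab. The apiVersion and the
# exact property set should be verified against the current template
# reference; "MyTeamLab" and the location are placeholder values.
lab_template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.DevTestLab/labs",
            "apiVersion": "2018-09-15",  # assumption: confirm a supported version
            "name": "MyTeamLab",
            "location": "westus",
        }
    ],
}

# Deploying this template repeatedly (via the portal, CLI, or REST API)
# would create identical labs with the same settings.
print(json.dumps(lab_template, indent=2))
```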

Along with the above core features, there are very attractive use cases for companies, such as –

  1. Training setups: You no longer have to work with a training vendor to rent a lab of high-end machines for training purposes. Organizations outsource entire training programs due to hardware requirements, as they cannot afford to procure such hardware just for training, given the capital and operational expenditure involved.

With Azure DevTest Labs, this wait time and these costly affairs are no longer required; you can easily configure the required training environment in minutes, use it wisely, save the cost, and complete your training. Once training is over, delete the environment. That's it!

  2. VM image snapshots: We have all faced situations where some piece of code or functionality works fine in the Dev environment but fails in Test, or works somewhere and fails on some specific machine. Sometimes testers have a tough time reproducing issues because somebody changed a VM setting in the meantime, making it hard to find the root cause.

With Azure DevTest Labs, you can take a snapshot of a VM image and preserve it for later use to reproduce defects which are environment specific. The ability to recreate the same environment with a few clicks improves Dev-Test team communication and collaboration.

  3. No more follow-ups with IT support for adding small features to VMs. With the power in your hands, you can select artifacts from the available set, add them, and test them. There is no dependency on other stakeholders; a lab owner can perform all such operations, and installation happens automatically without manual intervention.

To summarize, Azure DevTest Labs is a great addition to the platform, and it makes sense whether or not you have started adopting a DevOps culture within your organization. Essentially, this feature enables each organization to save cost and time while achieving its business objectives.

What is a microservice? Why do we need it?


Before we touch upon microservices, let us discuss why they are needed. Why now? Why not 5 years ago, or 5 years down the line?

We started the programming world with procedural programming in languages such as C and Pascal; then we invented a new way of doing the same thing using object-oriented principles, where we talked about object orientation, encapsulation, abstraction, and so on (C++, Smalltalk, etc.).

Initially, we were running all our classes and interfaces on the same machine with just a logical separation of functional layers such as presentation, business, and data access, followed by the actual database management system. We then found the need for more componentization, running components separately so they could scale individually, and we adopted COM (Component Object Model) and DCOM (Distributed Component Object Model). Finally, we reached the more mature and web-enabled option of services and modernized our applications using service-oriented architecture (SOA) and its principles.

Now, what has gone wrong with the programming and deployment models so far? Well, there is nothing wrong with what we did, but there is scope for improvement as we witness technological innovations around us which are primarily business driven.

The problem with our recent adoption of SOA-based applications or traditional client-server systems is this: we have applications which are divided into different logical layers (same machine) or sometimes different tiers (different machines), and the focus has been the functional nature of these layers, like presentation/web, business, and data access – but it was never the business requirements that drove this separation.

Because of this purely functional separation, applications became monolithic from a business point of view: I can't deal with a specific set of business requirements independently without disturbing other requirements, and maintenance becomes difficult too. I can't scale business scenarios separately or make them more resilient, and testing and deployment become more complicated, since you are always dealing with one functional layer or the application as a whole instead of a set of business scenarios. This means more time to deliver a few changes to the business and get feedback, and more time to test and understand impact and dependencies; since everybody is testing the same application, you need to wait for others to test and confirm before you deploy, even though one part of it may be ready to go.

Also, monolithic applications can only be scaled by cloning the whole application onto more servers. This is not efficient, as there is no way to provision more resources for specific business requirements than for other, least-used features; this inefficient use of resources increases the cost of ownership. With microservices adoption, resource utilization can be optimized.

So why is this new paradigm coming our way now and not earlier? I would say there are two reasons: 1. increased adoption of the agile delivery model, and 2. the cloud becoming a reality and the way to go.

The agile delivery model is widely used and becoming more and more popular because of its ability to deliver fast, helping the business achieve more, more aggressively – what else do you want?

As we helped the business go to market faster, test, and provide feedback so improvements could be made, organizational expectations rose for further engineering of the way we write our applications and how they can be deployed to sustain unpredictable business growth.

So, in a nutshell, if I can use an agile methodology (preferably, but not necessarily) to develop, test, and deploy a specific business requirement independently, without tight coupling to other requirements or components, I can write a microservice for it. Additional prerequisites are that microservices should be resilient and independently scalable for unpredictable user growth, and for high resiliency you need microservices to report their health continuously so corrective actions can be taken.
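As a minimal sketch of the health-reporting idea, using only the Python standard library (the /health path and the response shape are illustrative conventions I chose, not a prescribed standard):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Tiny HTTP handler exposing a /health endpoint for an orchestrator or
    load balancer to probe; the response shape is illustrative only."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "healthy"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # A real microservice would run this alongside its business endpoints.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```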

As a result, your application becomes a collection of microservices (each addressing one unique business requirement) which can uniquely identify and communicate with each other, yet still remain independent as far as their versioning, scaling, internal business data, and changes are concerned.

Cloud computing platforms such as Windows Azure help implement all of the above tenets of microservices, providing adequate resources and models for high resiliency, scalability, and cost efficiency.

So if we have to conclude with the top 3 rewards of using microservices, they would be –

  1. Faster delivery time due to focus on business requirements; respond to your customers faster
  2. Scalability which provides agility to your business; allow your business to venture into new geographies without any wait – onboard more customers, support inorganic growth
  3. Improved resource utilization to reduce cost of ownership

In my next article, we will discuss microservices in more detail.

Books reviewed by Me


I am happy to share that two of the books which I technically reviewed for Packt Publishing, UK, have been published recently. You can check out these books on Amazon!

Books Published

Lotus Notes to SharePoint Migration: Key Guiding Principles


There is a definite wave of application migration happening from Lotus Notes to the SharePoint platform. These initiatives are usually costly and time consuming. A clearly formulated strategy, implemented from day one, is needed for long-running benefits.

I have put together some key guiding principles based on my experience. Please have a look and provide your feedback.

  • Please note that the SharePoint platform journey for end users should start well before LN apps start appearing there.
  • SharePoint institutionalization is the key to success, even before the first app is migrated.
  • Don’t try to build LN look-alike apps in SharePoint; since these platforms are entirely different, you will end up spending a lot chasing that goal. Always think from a SharePoint perspective rather than from the existing Notes apps.
  • LN will be around for some more time; migration is not an overnight job, so please be prepared to support both platforms for some time.
  • Think cohesively for all business stakeholders including Partners, vendors, remote users, global users, multilingual users – so that all can participate, maintain and share information effectively.
  • Understand the pain areas and try to address them effectively, even by changing business processes to improve productivity.
  • Also, please note that not every Notes app needs to be migrated to SharePoint; always check feasibility. Some apps may remain where they are, and some may be redeveloped using ASP.NET/Java frameworks or even a SaaS route.
  • Take the chance to improve and standardize the application portfolio using industry best practices. It is your golden chance to reorganize.
  • Which app to take first –
    • An application which can easily be migrated to out-of-the-box (OOB) features of the target platform.
    • Sometimes business-critical apps are taken first to get an early-mover advantage; this can help establish the credibility of the platform.
    • However, a judicious decision can only be taken after a detailed assessment and after listening to the points of view of different stakeholders.
  • Meeting with stakeholders and understanding their expectations helps ensure the success of the migration exercise.
  • Establish an effective governance practice to control performance degradation due to content growth. Generally, the larger the number of people who get information from a particular type of site, the more tightly it is governed, and vice versa.
  • Focus on reuse and standardization by leveraging consistent frameworks, tools, and products.
  • A proper migration assessment paves the road to success; allow sufficient time to accomplish this activity.
  • Work closely with the ERP; surfacing that information in SharePoint, where it is controlled and secure, can give enormous power to users.
  • Devise a strategy for maintaining and sharing master data between applications and portals.
  • Future-proof the organization with mobility; build some apps to be usable on mobile devices.
  • What areas of the business offer the most opportunity for growth? Act in that direction, from both the migration and the SharePoint platform adoption perspective.
  • As far as possible, target using out-of-the-box features of SharePoint; this leads to cost savings and ease of maintenance in the future. Achieve as much as possible using the SharePoint client-side object model; this will keep your SharePoint upgrades lighter (lower cost).
  • Please note that not everything has to be achieved in SharePoint; do a cost-benefit analysis and don’t hesitate to look for a third-party product on Microsoft technology which can replace some Lotus Notes applications. Sometimes business processes can even be moved to the ERP.

Hope this will help anybody going down the migration route.

Happy reading !!

Regards,

Laxmikant Patil

Lotus Notes to SharePoint Migration Scenarios – White Paper


One more white paper written by me has been published on the company website at the location below. It is about migration scenarios when migrating Lotus Notes applications to the SharePoint platform.

http://www.kpit.com/downloads/whitepapers/lotus-notes-sharepoint-data.pdf

Below is a little glimpse of the first page; for the full read, please access the link above.

[Image: first page of the Lotus Notes to SharePoint Migration white paper]

Please have a look.

Happy reading !!

Regards,

Laxmikant Patil

Quality Management Systems (QMS) in SharePoint


How would you map a typical QMS (generic, rather than for a specific industry) onto SharePoint product features? Here I have given it a try; please have a look –

QMS Specific Core Features*

  • KPIs, interactive reports
  • Powerful Workflow engine
  • External partner Access
  • Document Management & Control
  • Document conversions
  • Records Management
  • Electronic and digital signatures
  • Metadata control
  • PDF rendering
  • Controlled Printing
  • Governance
  • Communities of Practice

Security & Compliance

  • Segregation of duties
  • Tasks configuration and management
  • Notifications
  • Escalations, Delegations
  • With its integrated security features, the platform supports security and identity management sufficient for non-repudiation
  • Microsoft cloud services and GxP compliance: Office 365 services are SAS 70 Type II and ISO 27001 certified, with Safe Harbor and HIPAA w/BAA available

Platform Development features

  • Widely used .Net based Platform
  • Built around collaborative features
  • Rapid development features and frameworks
  • Ability to integrate with ERPs, CRMs, SCMs
  • Tight integration with Microsoft Office
  • Data migration possible from existing Systems
  • Cloud as well as On-Premise availability
  • Works closely with Exchange, Lync

User Experience

  • MS Office like UI
  • Entire platform can be used on various mobile devices, PC and tablets
  • Sortable, filterable views of information; ability to control information for a set of users
  • Ability to create ad-hoc views and data exports

* Some of the features may need custom development

Below is a pictorial representation of the above information. I hope this is useful to you.

QMS in SharePoint

Happy reading !

Regards,

Laxmikant Patil