Over the last couple of months I was involved in designing and building a Windows Server 2012 R2 Remote Desktop Services environment on Microsoft Azure. During these months I learned some interesting things about Azure in combination with Remote Desktop Services, and I want to share those experiences in this blog post. To be clear upfront: I won't go into detail on each item, but I do want to share my experiences and point to possible solutions. Detailed solutions and choices depend on requirements such as cost, manageability and future-proofing of the solution. I've divided this blog post into pros and cons based on my experience in this last project.
Pros:
I will start with the pros of building a Remote Desktop Services environment on Microsoft Azure:
Use of Standard Azure Infrastructure components
Microsoft Azure offers a lot of out-of-the-box functionality: it has standard solutions for load balancing, firewalling, storage and networking, and much of this can be used in an RDS deployment. Let's start with the RD Gateway / RD Web Access roles. These roles can easily be load balanced with the standard load balancers in Microsoft Azure. The load balancer supports session persistence (link), which is required by the RD Gateway, and it can also load balance the UDP channel.
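As an illustration only, the sketch below shows what such load-balancing rules could look like when defined with the azure-mgmt-network Python SDK: a TCP 443 rule for the HTTPS channel and a UDP 3391 rule for the UDP channel, both with source-IP session persistence. The subscription, resource group and load balancer names are placeholders, and the frontend configuration, backend pool and probe are assumed to exist already; this is not the configuration from the project.

```python
# Sketch: add RD Gateway load-balancing rules with source-IP persistence
# to an existing Azure load balancer. Names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, SubResource

SUBSCRIPTION_ID = "<subscription-id>"      # placeholder
RESOURCE_GROUP = "rds-rg"                  # placeholder
LB_NAME = "rdgw-lb"                        # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
lb = client.load_balancers.get(RESOURCE_GROUP, LB_NAME)

# Reuse the load balancer's existing frontend, backend pool and probe.
frontend = SubResource(id=lb.frontend_ip_configurations[0].id)
backend = SubResource(id=lb.backend_address_pools[0].id)
probe = SubResource(id=lb.probes[0].id)

common = dict(
    load_distribution="SourceIPProtocol",  # session persistence (client affinity)
    frontend_ip_configuration=frontend,
    backend_address_pool=backend,
    probe=probe,
)

rules = [
    # HTTPS channel used by RD Gateway / RD Web Access.
    LoadBalancingRule(name="rdgw-https", protocol="Tcp",
                      frontend_port=443, backend_port=443, **common),
    # UDP transport channel (RDP 8+) through the RD Gateway.
    LoadBalancingRule(name="rdgw-udp", protocol="Udp",
                      frontend_port=3391, backend_port=3391, **common),
]

lb.load_balancing_rules.extend(rules)
client.load_balancers.begin_create_or_update(RESOURCE_GROUP, LB_NAME, lb).result()
```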
In most on-premises deployments the RD Connection Broker uses DNS round robin as its load-balancing mechanism. In Microsoft Azure you can use the internal load balancer instead of DNS round robin, another improvement you get from existing Azure functionality.
Dynamic Scaling and costs
One of the key characteristics of the cloud is that you only pay for the resources you actually use, which makes it very interesting for hosting an RDS environment. The user load on the RDSH servers is not the same throughout the day and week: in the evening and at night fewer RDSH servers are needed to host the user sessions. In Microsoft Azure we can start and stop these servers dynamically to reduce costs. Let's make this clear with an example:
Take a mid-sized company with 20 RDSH servers, where only 3 servers are needed to handle the user load in the evening and at night:
24*7 – Always On – 168 hours a week:

| Number | Type | Costs |
| --- | --- | --- |
| 20 | D3 Virtual Machine | € 7.441,14 |

10*5 – Only on during work hours – 10 hours a day:

| Number | Type | Costs |
| --- | --- | --- |
| 17 | D3 Virtual Machine (10*5) | € 2.125,33 |
| 3 | D3 Virtual Machine (24*7) | € 1.116,17 |
| | Total | € 3.241,50 |
With the above calculation we save roughly € 4.200,- per month, about € 50.000,- per year. This example clearly shows that you can save money by using dynamic scaling on Microsoft Azure.
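As a rough illustration of the calculation above, the sketch below recomputes both scenarios from an assumed hourly D3 rate (the actual rate depends on region and price list, so the outcome will not match the table to the cent); the point is the ratio between always-on hours and scheduled hours.

```python
# Rough cost comparison: 20 always-on D3 VMs versus 3 always-on + 17 scheduled.
# The hourly rate is an assumption for illustration, not an official Azure price.
HOURLY_RATE_D3 = 0.51                      # EUR per hour (assumed)
HOURS_PER_MONTH = 730                      # average hours in a month
WORK_HOURS_PER_MONTH = 10 * 5 * 52 / 12    # 10 h/day, 5 days/week ≈ 217 h

always_on = 20 * HOURLY_RATE_D3 * HOURS_PER_MONTH
scheduled = (3 * HOURLY_RATE_D3 * HOURS_PER_MONTH           # hosts that stay on 24*7
             + 17 * HOURLY_RATE_D3 * WORK_HOURS_PER_MONTH)  # hosts on during work hours

print(f"Always on : EUR {always_on:8.2f} per month")
print(f"Scheduled : EUR {scheduled:8.2f} per month")
print(f"Savings   : EUR {always_on - scheduled:8.2f} per month")
```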
Use of Azure Automation
Another standard solution provided by Azure is Automation. With Azure Automation you can automate actions on Microsoft Azure, such as starting or stopping an Azure Virtual Machine, and when you implement a Hybrid Worker inside your internal (AD) environment you can even execute actions in your AD environment. I've used Azure Automation to deploy new RDSH servers and update existing ones fully automated, without any manual actions, and also to support the dynamic scaling scenario described above. So with Azure Automation you can automate several repeatable processes, which saves a lot of time in the administration of the RDS platform.
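In the project this was implemented with Azure Automation runbooks, which are not shown in the post. Purely as an illustration of the idea, the sketch below shows the kind of start/stop logic such a scaling runbook could contain, written here against the azure-mgmt-compute Python SDK with placeholder resource group and host names. A production runbook would typically also drain a session host (disallow new connections and wait for sessions to end) before deallocating it.

```python
# Sketch of dynamic scaling for RDSH hosts: deallocate most hosts in the
# evening, start them all again in the morning. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"                 # placeholder
RESOURCE_GROUP = "rds-rg"                             # placeholder
RDSH_HOSTS = [f"rdsh-{i:02d}" for i in range(1, 21)]  # 20 session hosts
ALWAYS_ON = 3                                         # hosts kept running at night

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def scale_down() -> None:
    """Deallocate every session host except the always-on ones."""
    for vm_name in RDSH_HOSTS[ALWAYS_ON:]:
        # Deallocate (not just power off) so compute charges stop.
        compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm_name).result()

def scale_up() -> None:
    """Start all session hosts for the working day."""
    for vm_name in RDSH_HOSTS:
        compute.virtual_machines.begin_start(RESOURCE_GROUP, vm_name).result()

if __name__ == "__main__":
    scale_down()   # e.g. triggered by an evening schedule
```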
Cons/Challenges:
Of course there are also some challenges when building an RDS environment on Microsoft Azure. These are the ones I had to solve during the project:
Lack of Shared Storage
During this project we used the RDS User Profile Disk mechanism. User Profile Disks need to be placed on a highly available file share; when the User Profile Disk is not available, the user will not get his settings. Microsoft Azure does offer a highly available SMB storage solution, Azure File Storage, but unfortunately it cannot be used to host the User Profile Disks since Azure File Storage does not support NTFS permissions. There are no other Microsoft solutions that provide highly available file storage on Azure, so we decided to use third-party software to get a highly available file share. There are two solutions which can be used: SIOS DataKeeper and StarWind VSAN. Since both can provide highly available file shares, the decision comes down to the customer's requirements. We used the StarWind solution, to our satisfaction.
Scheduled Maintenance
Another important challenge is scheduled maintenance on the Azure platform. During planned maintenance your Virtual Machines are restarted within a specific timeframe, and currently you cannot control this reboot. To prevent loss of functionality during the reboot, VMs can be placed in an availability set, but for RD Session Hosts this is not a solution, since active user sessions are running on these servers: when the reboot occurs, the user loses his active session. Currently there is no solution for this scenario. It would be great if we could use functionality like 'Redeploy' in the future to control the reboot of the VM during maintenance, but for now this is a challenge we cannot mitigate.
Conclusion
Based on these pros and cons I'm really excited about running an RDS environment on Microsoft Azure. As with every project and environment, the planning and design phase is very important, but when the environment is designed properly, RDS can be hosted on Microsoft Azure without any issues. If you want to know more about hosting your Remote Desktop Services environment on Microsoft Azure, please let me know; I'm happy to help!
Very nice write-up. I have considered using Azure for RDS myself, but I have some reservations:
1. Did you consider a highly available connection broker, and if so, what were those considerations?
2. What was the user experience like? Latency? Response time?
3. Did you utilize an App-V infrastructure to deliver the apps to the 20 RDS session hosts, or were the apps individually installed on each host?
4. Did the customer already have their data in Azure or was data being accessed over a S2S tunnel? If a S2S tunnel, how was performance?
5. Were you able to leverage the local SSD of the D virtual machines for anything?
Hi,
1. I've implemented a highly available connection broker. For most environments that's my preferred implementation.
2. The user experience was equal to or better than the on-premises environment.
3. No App-V was used, but I've tested this in my lab and it works without any issues.
4. I always advise keeping your data as close as possible, so if you're planning to use Azure as your RDS platform, your data has to be close by (in Azure) or you need a very good connection to Azure.
5. I prefer ExpressRoute when we talk about production; other environments can use VPN connections.
Regards, Arjan
Rather than StarWind VSAN, couldn't you have spun up a file server virtual machine to host and share the User Profile Disks (UPDs)?
For an on-premises RDS solution, do you see any problem with storing UPDs within a VHD file attached to a virtual file server?
On Azure a single VM doesn't have any SLA, so for production this is a risk. On-premises, where you have more control over availability, you could go for a single VM. But when you want high availability you should go for a Scale-Out File Server cluster.
Regards, Arjan
Thanks for the reply Arjan, that makes sense.
With regard to my second question, do you see any problem with “nesting” VHDs within another VHD?
I’ve done that multiple times at customers without any issues. If the VHD can handle the IOPS and bandwidth needed for the UPDs it should not be an issue.
Regards, Arjan