One of the core benefits of cloud computing should be ease of deployment. At Lokad, we have been using Azure in production since 2010. We love the platform, and we depend on it absolutely. Yet, it remains very frustrating that .NET deployments have not made nearly as much progress as could have been expected six years ago.
The situation of .NET deployments is reminiscent of the data access re-inventions that were driving Joel Spolsky nuts 14 years ago. Now, Microsoft has shifted its attention to app deployments, and waves of stuff keep rolling in, without really addressing the core concerns, leaving the whole community disoriented.
At present, there are six major ways of deploying a .NET app on Azure:
- ASM, WebRole and WorkerRole
- ASM, Classic VM
- ARM, WebApp
- ARM, Azure Batch
- ARM, Azure Functions
- ARM, VM scale set
ASM stands for Azure Service Manager, while ARM stands for Azure Resource Manager. ASM covers the first generation of cloud services on Azure; ARM covers the second.
ASM to ARM transition is a mess
In Azure, pretty much everything comes in two flavors: the ASM one and the ARM one; even the Blob Storage accounts (the equivalent of S3 on AWS). Yet, no migration is possible. Once you create a resource - a storage account, a VM, etc. - on one side, it cannot be migrated to the other side. It's such a headache. Why is it the responsibility of clients to deal with this mess? Most resources should be accessible from both sides, or be "migratable" in a few clicks. And wherever the ASM/ARM distinction is not even relevant (e.g. storage accounts), the distinction should not even be visible.
So many ways to do the same thing
It's maddening to think that pretty much every service handles .NET deployments in a different way:
- With WebRole and WorkerRole, you locally produce a package of assemblies (think of it as a Zip archive containing a list of DLLs), and you push this package to Azure.
- With the Classic VM, you get a fresh barebone OS, and you do your own cooking to deploy.
- With WebApp, you push the source code through Git, and Azure takes care of compiling and deploying.
- With Azure Batch, you push your DLLs to the blob storage, and script how those files should be injected/executed in the target VM.
- With Azure Functions, you push the source code through Git, except that unlike WebApp, this is not-quite-exactly-C#.
- With the VM scale set, you end up cooking your own OS image that you push to deploy.
Unfortunately, the sanest option, the package option as used for WebRole and WorkerRole, is not even available in the ARM world.
The problem with Git pushes
Many companies - Facebook or Google for example - leverage a single source code repository. Lokad does too now (we transitioned to a single repository 2 years ago, and it's much better). While having a large repository creates some challenges, it also makes tons of things easier. Deploying through Git looks super cool in a demo, but as soon as your repository reaches hundreds of megabytes, problems arise. As a matter of fact, our own deployments on Azure routinely crash while our Git repository "only" weighs 370MB. By the time our repository reaches 1GB, we will probably have entirely given up on using Git pushes to deploy.
In hindsight, this was to be expected. The size of the VM needed to compile the app bears no relation to the size of the VM needed to run the app. Plus, compiling the app may require many software pieces that are not needed afterward (do you need your JS minifier to be shipped along with your webapp?). Thus, all in all, deployment through Git push only gets you so far.
The problem with OS management
Computer security is tough. For the average software company, or rather for about 99% of (software) companies, the only way to ensure decent security for their apps consists of not managing the OS layer. Dealing with the OS is only asking for trouble. Delegating the OS to a trusted party who knows what she is doing is about the only way not to mess it up, unless you are fairly good yourself; which, in practice, eliminates 99% of software practitioners (myself included).
From this perspective, the Classic VM and the VM scale set feel wrong for a .NET app. Managing the OS has no upside: the app will not be faster, the app will not be more reliable, the app will not have more capabilities. OS management only offers dramatic downsides if you get something wrong at the OS level.
Packages should have solved it all
In retrospect, the earliest deployment method introduced in Azure - the packages used for WebRole and WorkerRole - was really the right approach. Packages scale well and remain uncluttered by the original size of the source code repository. Yet, for some reason this approach was abandoned on the ARM side. Moreover, the old ASM design does not even deliver the most obvious benefits that this approach should have offered:
- The packages could have been made even more secure: signing and validating packages is straightforward.
- Deployment could have been super fast: injecting a .NET app into a pre-booted VM is also straightforward.
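To illustrate how little machinery package signing actually requires, here is a minimal Python sketch of the idea: sign the package bytes, and refuse to deploy anything whose signature does not match. The key handling and function names are my own assumptions for illustration, not an Azure API; a real setup would use asymmetric keys from a vault rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical illustration of package signing and validation.
# In practice the signing key would live in a key vault, and
# asymmetric signatures (e.g. Authenticode) would be preferred.
SECRET_KEY = b"deployment-signing-key"

def sign_package(package_bytes: bytes) -> str:
    """Produce a signature for the package content."""
    return hmac.new(SECRET_KEY, package_bytes, hashlib.sha256).hexdigest()

def validate_package(package_bytes: bytes, signature: str) -> bool:
    """Reject any package whose content does not match its signature."""
    expected = sign_package(package_bytes)
    return hmac.compare_digest(expected, signature)

package = b"...zipped assemblies..."
sig = sign_package(package)
assert validate_package(package, sig)            # genuine package passes
assert not validate_package(package + b"X", sig) # tampered package fails
```

A dozen lines of standard-library code; the cloud fabric could have validated every package before booting it, for free.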
For demo purposes, it would have been simple enough to have a Git-to-package utility service running on Azure to offer Heroku-like swiftness to small projects, with the possibility of transitioning naturally to package deployments afterward.
Almost reinventing the packages
Azure Batch is kinda like the package, but without the packaging. It's more like x-copy deployment with files hosted in the Blob Storage. Yet, because it's x-copy, supporting any signing mechanism will be tricky. Then, looking further ahead, the 1-blob-per-file pattern is near guaranteed to become a performance bottleneck for large apps. Indeed, the Blob Storage offers much better performance at retrieving a single 40MB package than 10,000 blobs of 4KB each. Thus, for large apps, batch deployments will be somewhat slow. Then, somebody, somewhere, will start re-inventing the notion of "package" to reduce the number of files ...
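A back-of-the-envelope estimate shows why the 1-blob-per-file pattern loses: per-request overhead, not bandwidth, dominates once file counts grow. The figures below (20ms overhead per request, 40MB/s throughput) are illustrative assumptions, not measured Azure numbers; parallel requests soften the effect but do not eliminate it.

```python
# Rough model: total download time = transfer time + per-request overhead.
# The overhead and throughput figures are illustrative assumptions.
PER_REQUEST_OVERHEAD_S = 0.020  # ~20 ms of latency per blob request
THROUGHPUT_MB_S = 40.0          # sustained download throughput

def download_time(num_blobs: int, total_mb: float) -> float:
    transfer = total_mb / THROUGHPUT_MB_S
    overhead = num_blobs * PER_REQUEST_OVERHEAD_S
    return transfer + overhead

# Same 40MB of app content, two layouts:
one_package = download_time(1, 40.0)       # about 1 second
many_blobs = download_time(10_000, 40.0)   # about 200 seconds of pure overhead
print(one_package, many_blobs)
```

Even with generous parallelism, the many-blobs layout pays for 10,000 round-trips; the single package pays for one.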
With the move toward .NET Core, .NET has never been more awesome; and yet, it could be so much more with a clarified vision and technology around deployments.