Cloud Development with Microservices in Azure
Cloud development sometimes winds up looking a lot like traditional application development and hosting. Developers can create large, monolithic web applications and deploy them to a virtual machine or an app service. In that sense, transitioning to Azure can be simple for most developers. However, to make the best use of Azure, developers need to start thinking differently about their software architecture: in terms of microservices.
What are microservices?
Microservices are a software architecture style in which applications are composed of small, independent modules that communicate with each other using well-defined API contracts. These service modules are highly decoupled building blocks, each serving a single, narrowly scoped purpose. The benefit of a microservices architecture is that it makes developing and scaling applications easier. It also makes collaboration between autonomous teams easier and can enable them to bring new functionality to market faster.
Why Microservices Architecture?
A microservice approach breaks up what would normally be a large application into small, independent services. This may not sound like much, but this design has several advantages:
Services can be built and maintained independently
Microservices architecture is based on a collection of highly decoupled services that handle a single action. Teams can independently build, verify, deploy, and monitor each service without having to worry about the bigger picture.
Updates to a single service can be made independently of other services. When a new version of a service goes live, the entire application does not have to be taken down, only the updated service. This reduces system downtime since microservices are small and can be deployed quickly.
Finally, adding functionality to an application can be as easy as creating a new service. Changes to the web app can be minimal, with the bulk of the responsibility left to the new service. Following this idea, features in the application can also be turned off or removed simply by stopping or removing a service. If a critical issue is found with a feature, the whole application does not need to be brought offline, only the affected service.
Help reduce cost
One great feature inherent to Azure is its autoscaling capability. When a resource runs out of compute power or memory, it can be configured to scale up or down automatically to meet variable demand. This is a powerful feature for use cases where demand for resources is unknown or unpredictable. However, in a large, monolithic application, the entire app service would have to be scaled up regardless of which part was experiencing high demand. With microservices, each service can be set to scale individually and only when needed.
Microservices in Azure are typically serverless and run under a ‘consumption’ pricing model. This means that each service is billed by its actual usage: the number of executions and the compute resources consumed while running. So even if you have a critical but rarely used service, its costs can be kept to a minimum.
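To make the billing model concrete, here is a small back-of-the-envelope estimator. The rates below are placeholders, not actual Azure prices; the real figures are on the Azure Functions pricing page and vary by region.

```python
# Hypothetical consumption-plan cost estimate. The rates below are
# placeholder assumptions, NOT current Azure prices.
PRICE_PER_MILLION_EXECUTIONS = 0.20   # assumed rate, USD
PRICE_PER_GB_SECOND = 0.000016        # assumed rate, USD

def monthly_cost(executions, avg_seconds, memory_gb):
    """Estimate monthly cost: a per-execution charge plus resource
    consumption (memory * duration, measured in GB-seconds)."""
    execution_charge = executions / 1_000_000 * PRICE_PER_MILLION_EXECUTIONS
    gb_seconds = executions * avg_seconds * memory_gb
    return execution_charge + gb_seconds * PRICE_PER_GB_SECOND

# A critical but rarely used service: 10,000 runs/month, 0.5 s each, 128 MB.
print(round(monthly_cost(10_000, 0.5, 0.128), 4))
```

Even before free-grant thresholds (which real consumption plans include), a low-traffic service like this costs pennies per month, which is the point of the pricing model.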
Easier troubleshooting and fixes
Problems with an application using microservices can easily be traced to the individual offending service. Once the problem service has been identified, it can be fixed and redeployed independently of the rest of the application.
Since each service exists in its own space, development teams have great liberty in choosing their deployment approach, language, platform, and programming model. Have a team of Python developers and a team of Java developers? Now they can work side by side on the same application.
Faster Development Time
Since each service is a small, self-contained module, large development teams can be broken up into smaller teams and can simultaneously work on different services. Each team does not have to worry about making breaking changes to the application if there is a strong contract defined between services.
In Azure, the basic building block for a microservices architecture is an Azure Function. Functions are event-driven, serverless, and can handle a variety of tasks. Functions can be hosted on either a Consumption plan or an Azure App Service plan. As mentioned, Consumption plans are billed per execution and by the resources consumed. Hosting a function on an App Service plan will bill at regular App Service plan rates but may be preferable for long-running functions or functions with predictable scaling and costs.
Azure Functions supports several different programming languages. Custom handlers can also be used to implement a function app in a language or runtime that is not officially supported. Officially supported languages include:
- C#
- JavaScript
- F#
- Java
- PowerShell
- Python
- TypeScript
A ‘trigger’ is what causes an Azure Function to run. Triggers can be created from many different sources, such as when a message arrives on a queue in Queue Storage or when an Event Hub receives an event. Functions can also be manually triggered through an HTTP request or set to run as a scheduled job with a timer. Supported trigger bindings include:
- Blob storage
- Cosmos DB
- Event Grid
- Event Hubs
- HTTP & webhooks
- IoT Hub
- Microsoft Graph events
- Queue Storage
- Service Bus
- Timer
Once triggered, functions can also bind to an input source and an output source. A binding is a way of declaratively connecting a function to another resource. Bindings can be configured as input bindings, output bindings, or both, and are provided to the function as parameters.
Both bindings and triggers provide a way for the function to access other resources and services without having to hardcode the connection details in code. For a full list of input and output sources and triggers, check out the official documentation.
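The declarative flavor of triggers and bindings can be illustrated with a tiny, hypothetical decorator-based registry in plain Python. This is a concept sketch only, not the azure-functions SDK; in a real function app, the host reads binding metadata and injects the connected resources as parameters.

```python
import json

# Toy registry mimicking how a functions host wires a queue trigger to a
# handler parameter. Illustrative only -- not the Azure Functions runtime.
REGISTRY = {}

def queue_trigger(queue_name):
    """Register a handler to run when a message arrives on a queue."""
    def wrap(fn):
        REGISTRY[queue_name] = fn
        return fn
    return wrap

@queue_trigger("orders")
def process_order(message):
    # The host deserializes the queue message and passes it in as a
    # parameter -- no connection details hardcoded in the function body.
    order = json.loads(message)
    return f"processing order {order['id']}"

# Simulate the host dispatching a message from the 'orders' queue.
print(REGISTRY["orders"]('{"id": 42}'))  # processing order 42
```

The key idea carried over from the real model: the function declares *what* it reacts to and receives, while the host owns *how* the connection is made.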
An important concept in Azure Functions is Durable Functions. Durable Functions allow a developer to define stateful serverless workflows. A Durable Function’s workflow is defined by an orchestrator function, and its state is held by entity functions. The primary use case for Durable Functions is simplifying complex, stateful coordination in serverless applications. Typical design patterns include function chaining, fan-out/fan-in, async HTTP APIs, monitoring, and human interaction.
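The function-chaining pattern can be sketched in plain Python with a generator: the orchestrator yields activity calls and a tiny harness drives it. The real Durable Functions SDK for Python also uses generator-based orchestrators, but the runtime below is a simplified stand-in, and the activity names are invented for the example.

```python
# Plain-Python sketch of the function-chaining pattern behind Durable
# Functions. The orchestrator yields (activity, input) pairs; a minimal
# event loop executes each activity and feeds the result back in.
def orchestrator(order):
    payment = yield ("bill", order)              # step 1: charge the customer
    stock = yield ("update_inventory", order)    # step 2: runs only after step 1
    return {"payment": payment, "stock": stock}

# Stand-ins for the activity functions the orchestrator coordinates.
ACTIVITIES = {
    "bill": lambda order: "paid",
    "update_inventory": lambda order: "reserved",
}

def run(orch, order):
    """Drive the orchestrator: call each yielded activity and send its
    result back until the workflow returns."""
    gen = orch(order)
    result = None
    try:
        while True:
            name, arg = gen.send(result)
            result = ACTIVITIES[name](arg)
    except StopIteration as done:
        return done.value

print(run(orchestrator, {"id": 1}))  # {'payment': 'paid', 'stock': 'reserved'}
```

Because the orchestrator only describes the sequence and never performs I/O itself, the runtime can checkpoint and replay it, which is what makes the workflow durable.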
Basic Serverless Architecture Example
Now that we understand the basics of microservice architecture in Azure, it is time for an example.
The above diagram depicts an application for an online store that uses Azure Functions to handle purchasing. In this example we are using an Azure App Service, an Azure Storage Queue, and several Azure Functions. When a customer makes a purchase in the store, the web application places a message on the queue. This message contains details such as the customer, the payment method, and the purchased items. Once a message hits the queue, the orchestrator function is triggered and grabs the message. It then kicks off a pipeline of three functions that processes the payment, updates inventory in the company database, and notifies the warehouse that an order has been placed. Finally, once the orchestration is completed, a final function notifies the user that their order has been processed successfully or has failed.
Now that we have the high-level overview, let us dive deeper into this design.
In our application, the web app acts as an entry point for the user. It includes the user interface and is mostly responsible for handing off tasks to the serverless functions. However, it may handle some small tasks on its own.
The Azure Storage Queue between the web app and the orchestrator function serves as a necessary intermediary. If the orchestrator function needs to be taken offline for maintenance or due to an error, orders can continue to be placed while the function is unavailable. The function can then process the backlog when it comes back online. The queue also helps even out scaling during peak demand periods. If there is suddenly a high influx of orders, the function does not have to scale immediately to meet demand; it can instead fill orders as fast as it can with current resources, eventually catching up. However, if the queue is consistently full and latency between the queue and the orchestrator is high, it is a good idea to scale out and add more instances.
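The buffering behavior described above can be sketched with an in-memory deque standing in for the Azure Storage Queue (the real queue is a durable, remote service; this is only a model of the decoupling):

```python
from collections import deque

# In-memory stand-in for an Azure Storage Queue. The web app keeps
# enqueuing orders even while the worker (the orchestrator function)
# is offline; the worker drains the backlog when it returns.
orders = deque()

def place_order(order_id):
    # The web app never waits on the worker -- it just enqueues and returns.
    orders.append({"id": order_id})

def drain(process):
    """Worker comes back online and processes the backlog in arrival order."""
    handled = []
    while orders:
        handled.append(process(orders.popleft()))
    return handled

# Orders arrive while the worker is down...
for i in range(3):
    place_order(i)
# ...and are all processed, in order, once it recovers.
print(drain(lambda o: o["id"]))  # [0, 1, 2]
```

The same shape explains the scaling benefit: the producer's rate and the consumer's rate are independent, with the queue absorbing the difference.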
The orchestrator function is triggered when a message arrives on the queue. In our scenario, it will grab the message, and pass off any relevant information to the billing function. The billing function will handle processing payment and return to the orchestrator. The orchestrator will then call the inventory and order placement functions in sequence, provided the previous functions succeed.
Using the orchestration, we effectively wrap these three functions in a transaction. If any of these functions fail, we can roll back the entire orchestration. For instance, we wouldn’t want to update inventory if the order failed to process because of a rejected payment, since the order will not actually be fulfilled. Likewise, we wouldn’t want to create a new order if billing or inventory updates failed, because the warehouse would never know to fulfill the order and the customer would never receive their purchase. The orchestrator receives success or failure messages from each function and handles any rollbacks according to its current state.
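This rollback behavior is essentially a saga: each step that succeeds registers a compensating action, and a failure unwinds them in reverse. The sketch below models it in plain Python; the step names mirror the example, but the implementation is hypothetical.

```python
# Saga-style rollback: run (action, compensation) pairs; on failure,
# undo the completed steps in reverse order and report the failure.
def run_order(steps):
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for undo in reversed(done):
                undo()
            return "failed"
    return "succeeded"

log = []
def bill():      log.append("billed")
def unbill():    log.append("refunded")     # compensation for billing
def inventory(): raise RuntimeError("out of stock")  # this step fails
def restock():   log.append("restocked")    # compensation (never needed here)

status = run_order([(bill, unbill), (inventory, restock)])
print(status, log)  # failed ['billed', 'refunded']
```

Billing succeeded, so its compensation (the refund) runs when the inventory update fails; the order step is never reached, matching the transactional behavior described above.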
Finally, the orchestrator function outputs to the email function the order information and whether it ultimately succeeded or failed. The email function can then notify the customer of their order status.
We have discussed the benefits of a microservices architecture and walked through a trivial example. However, with all the services Azure has to offer, we can do much more and the possibilities are almost endless. Large, complex applications can be broken down and developed as smaller, more cost-effective components. Software development lifecycles can be planned and carried out much more rapidly by dealing with each function as a mini application. While architecting an application with microservices in mind might take a little more forethought, the benefits and subsequent time savings far outweigh the initial investment.