This is a quick glance at the similarities and differences between AWS Lambda and Azure Functions, not an in-depth review of either service.
What is it all about
AWS Lambda and Azure Functions are event-driven functions used to build serverless applications. Both services provide automatic scaling, no upfront cost, no provisioning, no servers to manage, pay-per-use pricing, and so on.
According to Amazon Web Services: “AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running.”
Lambda is a service from Amazon Web Services that lets you run code with no provisioning needed and no servers to manage. There are no upfront costs, and you pay for what you use. You write your function, upload the code, and Lambda deals with the runtime, scaling, and high availability. Your code can be triggered from a wide range of services, or be invoked from web and mobile applications.
To create a function in Lambda you log in to your AWS Management Console and choose Lambda. Once there you create a new function, either from scratch or from an existing template. For each function you choose a runtime and a role. The role defines the permissions of your function; a role can be tied to a single function or shared by several, and it is possible to attach policies to a role. Once a function is created you can’t change its name.
For each function you create you need to specify a trigger. A trigger can be one or several of API Gateway, CloudWatch Events, CloudWatch Logs, DynamoDB, S3, Kinesis, SNS, or SQS. If you want to call your function from a web application you have to create an API Gateway with an API endpoint that your application can call. API Gateway is then responsible for triggering the function and returning the result. Compared to Azure Functions, there is no HTTP endpoint for a function by default.
You can supply code to your function in a couple of ways. Depending on the runtime you choose, you either write your code in the provided editor, or you upload a zip file directly or from S3. For instance, if you write functions using Node.js you can use the online editor, but if you want your functions to run on .NET Core your only option is to upload a file.
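As a concrete illustration, a minimal Python handler might look like the sketch below. The `name` field is a made-up example key, not part of any AWS-defined event shape; this is the kind of small function you could paste straight into the online editor for an editor-supported runtime:

```python
import json

def lambda_handler(event, context):
    """Entry point that Lambda invokes; 'event' carries the trigger payload.

    'name' is a hypothetical key used for this example only.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler is an ordinary function, you can also invoke it locally with a dict standing in for the event and `None` for the context.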
When you create the function you also have the possibility to add environment variables to instrument your functions.
There are quite a lot of settings for adding functionality to, or modifying the behavior of, your function. For instance, you can encrypt environment variables at rest and in transit, change the memory footprint and timeout of your function, send error and timeout messages to a dead-letter queue, add tracing using X-Ray, and add auditing and compliance using AWS CloudTrail.
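Environment variables let you change a function's behavior without redeploying it. A minimal sketch, assuming a hypothetical `LOG_LEVEL` variable set in the function's configuration:

```python
import os

def lambda_handler(event, context):
    # LOG_LEVEL is a hypothetical variable you would set in the
    # function's configuration; Lambda does not define it for you.
    log_level = os.environ.get("LOG_LEVEL", "WARNING")
    if log_level == "DEBUG":
        print("incoming event:", event)
    return {"handled": True, "log_level": log_level}
```

Flipping the variable in the console changes what the function logs on its next invocation, with no new deployment.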
You can use an alias to create a fixed endpoint for your function. That way, resources invoking your function don’t have to update the address when the function is updated with a new version. An alias is an always reachable endpoint for your function that you point to a specific version. When you update an alias to point to a new Lambda version, all incoming traffic immediately hits that new version, which could cause instability. Therefore it is possible to point your alias at two different versions of your Lambda and use a percentage weight to control the amount of traffic sent to each. For example, you can send 20% of the incoming traffic to the newly created version and the rest to the previous one. When the new version has proven itself, you can steer more and more traffic to it.
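The weighted routing can be pictured as a coin flip per request. The sketch below only simulates the 20/80 split described above to make the behavior concrete; it is an illustration, not code you would deploy:

```python
import random

def pick_version(weight_new: float) -> str:
    """Return which version a request is routed to, given the fraction
    of traffic the alias sends to the new version."""
    return "new" if random.random() < weight_new else "previous"

random.seed(0)  # deterministic, for the sake of the example
hits = sum(pick_version(0.2) == "new" for _ in range(10_000))
# roughly 20% of the 10,000 simulated requests land on the new version
```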
You can set a concurrency limit for each function you create, specifying the maximum number of concurrent executions allowed for that specific function. If you want a function to stop processing requests, you can throttle it by setting its concurrency to zero.
You can do some testing of your function in the console. There are preexisting event templates you can use, or you can write your own payload. When using API Gateway you can also test your function implicitly by calling it through the gateway.
The Lambda dashboard shows account-level metrics (errors, invocations, duration, and so on) for all your functions in the current region. You can find the same metrics for each function under its monitoring tab.
Another feature of AWS Lambda is Layers. With Layers you can share code between multiple functions: you package any code and share it as a layer.
Functions are by nature stateless. To preserve state you have to use other services, for instance S3, DynamoDB, or similar.
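Since any state must live outside the function, a common pattern is to pass the storage client into the handler so it can be swapped for a stub in tests. The sketch below keeps a made-up per-key counter in a DynamoDB-style table; in real code you would pass in `boto3.resource("dynamodb").Table("counters")` instead of an in-memory stand-in:

```python
def bump_counter(event, table):
    """Increment a counter stored outside the stateless function.

    'table' only needs get_item/put_item, matching the boto3 Table
    resource interface; 'counters' and the 'id'/'count' attributes
    are hypothetical names for this example.
    """
    key = {"id": event["id"]}
    item = table.get_item(Key=key).get("Item") or {**key, "count": 0}
    item["count"] += 1
    table.put_item(Item=item)
    return item["count"]
```

Injecting the table also makes the handler trivially testable with a fake table object, without touching AWS at all.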
According to Microsoft, with Azure Functions you can “Easily build the apps you need using simple, serverless functions that scale to meet demand. Use the programming language of your choice, and don’t worry about servers or infrastructure.”
With Azure Functions you can build web and mobile application backends, real-time file processing, automation of scheduled tasks, extensions to your SaaS applications, and so on. You get an event-driven, serverless compute experience: your functions scale on demand and you pay only for the resources you consume. You can choose a hosting plan that lets you pay per execution and dynamically allocates resources based on your application’s load. If you instead choose an App Service plan, you use a predefined capacity allocation with predictable costs and scale; that is, you are responsible for scaling your function app.
When you create a function in Azure you log in to the management portal and go to Function Apps. There you create a Function App, which acts as a grouping of, and a platform for, a number of functions. You choose the operating system of your Function App when you create it, and you get an endpoint that is reachable over the web. A Function App is tied to a subscription, a resource group, and a region. A cool thing when creating your Function App with Linux as the operating system is that you can choose to deploy your functions from a Docker image instead of from code. You will also note that the number of available runtimes is smaller than on AWS at the moment.
When you’ve done this, you are ready to create a function.
You can configure the platform for the functions. You can provide connection strings that can be used by the functions, you can create app specific settings to instrument your functions without redeploying them, and you can mount additional storage, etc. You can set quotas on the platform to limit the usage on a daily basis. You can adjust access control for your platform. You can assign custom domain names.
You get monitoring and logs using metrics for the Function App. Microsoft says that the built-in logging is useful for testing with small workloads; for high workloads in production you should use Application Insights. You shouldn’t use built-in logging and Application Insights at the same time, so make sure to disable built-in logging when you switch. You can bring your own dependencies, you have integrated security, and you can deploy your functions from the portal or use continuous integration.
If you choose to create your function in code, you can use for instance VS Code, or any editor of your choice together with the CLI. Using VS Code you can develop your function locally and upload it to Azure when it is finished. Depending on the runtime, you can also use the in-portal editor.
When creating a function using the in-portal editor you can choose to trigger it from webhooks + API, a timer, or other templates. The templates include invocations from Azure Service Bus, Blob storage, Cosmos DB, Event Hub, and Event Grid.
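Behind these templates, the trigger and output are described by bindings in a `function.json` file. A typical binding definition for an HTTP-triggered function looks roughly like this (`req` and `res` are just the conventional default binding names):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "get", "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

Swapping the trigger template essentially swaps the `type` of the input binding, for instance to `timerTrigger` or `queueTrigger`.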
As with AWS, it is possible to do some testing of your function in the portal, where you write your own payload.
Note that when you create functions you must also have a storage account.
Another cool thing with Azure Functions is that you can run your functions on-premises.
The following table is a small summary and side-by-side comparison of AWS Lambda and Azure Functions. It is not the whole picture, just a small subset of their capabilities.
| | AWS Lambda | Azure Functions |
|---|---|---|
| Scaling | Automatically scales your application by running code in response to each trigger. Your code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload. | The consumption plan lets you pay per execution and dynamically allocates resources based on your app’s load. You can also use an App Service plan, which gives you a predefined capacity allocation with predictable costs and scale. |
| How to test | Use predefined blueprints to unit test and load test your Lambda.<br>Write a Lambda to test another Lambda.<br>Use plugins for toolkits to test your Lambda locally.<br>Set up API Gateway and trigger your endpoints using for instance Postman. | Use unit testing frameworks, for instance in Visual Studio, and test your function there.<br>Define a payload when using the in-portal editor.<br>Use for instance Postman to trigger your function’s HTTP endpoint manually. |
| How to deploy | Depending on the runtime there are different options: upload a file from S3, upload a ZIP file directly, or use the in-browser editor. If you use Visual Studio there is a Lambda plugin. You can deploy using ARM, or use a third-party toolkit such as Serverless (https://serverless.com/). | Depending on the runtime there are different options: upload a ZIP file directly or use the in-browser editor. Deploy from the CLI, Visual Studio, VS Code, and so on. You can also deploy using ARM, or use continuous deployment directly from sources such as GitHub and Dropbox. It works the same way as continuous deployment for web apps, but Functions does not support deployment slots yet. |
| Environment variables | Yes, it is possible to add environment variables to instrument functions. | Yes, you can have connection strings and app settings. |
| Triggers | One or several of API Gateway, CloudWatch Events, CloudWatch Logs, DynamoDB, S3, Kinesis, SNS, or SQS. | HTTP calls, timers, Cosmos DB, Blob storage, Queue storage, events from Event Grid or Event Hub, or Service Bus. |
| Managing/orchestration | Use AWS Step Functions to define the steps in a workflow to coordinate the use of functions, using a state machine model. | Azure Logic Apps or Durable Functions for state transitions and communication between functions, or storage queues for cross-function communication. |
| Available runtimes | Node.js, Python 2.7, Python 3.6, Python 3.7, Ruby, Java, Go, .NET Core. You can also provide your own custom runtime for your Lambdas to run in. | Fewer runtimes than AWS at the moment. |
| Billing | You are charged for the total number of requests across all your functions. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100 ms. The price depends on the amount of memory you allocate to your function.<br>The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month.<br>NB: you may incur additional charges if your Lambda function uses other AWS services or transfers data. | The consumption plan is billed per second based on resource consumption and executions. There is also a premium plan with the same features and scaling as the consumption plan but with enhanced performance and VNET access, billed per second based on the number of vCPU-s and GB-s your premium functions consume.<br>You can also run Functions within your existing App Service plan at regular App Service plan rates.<br>Pay-as-you-go pricing includes a free grant of 1 million requests and 400,000 GB-s of resource consumption per month per subscription, across all function apps in that subscription.<br>NB: when you create a Function App a storage account is created automatically; its cost is not included in the above. |
| Cost | Lambda free tier: the first 1M requests per month are free, then $0.20 per 1M requests. The first 400,000 GB-seconds per month, up to 3.2M seconds of compute time, are free, then $0.00001667 for every GB-s used. | Execution time: $0.000016/GB-s.<br>Total executions: $0.20 per million executions. |
| Logging | Send error and timeout messages to a dead-letter queue. Tracing using X-Ray. Auditing and compliance using AWS CloudTrail. | Application Insights |
| Monitoring | CloudWatch and X-Ray | Application Insights |
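As a rough illustration of the AWS pricing above, here is the arithmetic for a hypothetical month with 3 million invocations of a 512 MB function averaging 100 ms each, using the list prices and free tier from the table. It is a back-of-the-envelope sketch, not a billing tool:

```python
# List prices and free tier taken from the table above (per month).
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per request beyond the free tier
PRICE_PER_GB_S    = 0.00001667         # USD per GB-second beyond the free tier
FREE_REQUESTS     = 1_000_000
FREE_GB_SECONDS   = 400_000

def monthly_lambda_cost(requests, memory_mb, avg_duration_s):
    """Back-of-the-envelope Lambda bill; a real invoice rounds each
    invocation's duration up (to the nearest 100 ms at the time of writing)."""
    gb_seconds = requests * (memory_mb / 1024) * avg_duration_s
    request_cost = max(requests - FREE_REQUESTS, 0) * PRICE_PER_REQUEST
    compute_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * PRICE_PER_GB_S
    return round(request_cost + compute_cost, 2)

# 3M requests at 512 MB for 100 ms each give
# 3,000,000 * 0.5 GB * 0.1 s = 150,000 GB-s, fully inside the free tier,
# so only the 2M requests beyond the free tier are billed.
```

With these numbers the compute time is free and the bill is just the 2 million extra requests at $0.20 per million.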