Creating AWS Lambdas using the AWS Command Line Interface

You can use the AWS CLI from a Linux shell, from the Windows command line, or remotely through, for instance, PuTTY or a terminal. After you have installed the CLI you have to configure it for programmatic access to AWS, which means you need an access key and a secret key. For this example you should create a new user and give it the access type Programmatic access. Give the user proper permissions – I will give the user AdministratorAccess for this post. Once that is done, create an access key and a secret key. Open up your console, type aws configure, and fill in the information you are prompted for.
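A typical aws configure session looks something like this (the keys below are AWS's documented example values – use your own, together with the region you want to work in):

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: eu-north-1
Default output format [None]: json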

Note that you should be careful with access keys. Make sure that you keep them safe; anyone with access to your keys will be able to perform the operations granted to the IAM user they are issued for.

You should also create an execution role for your function. In the IAM console, add a role that has the permissions policy AWSLambdaBasicExecutionRole. Make a copy of the role's ARN, as it is needed later when creating the function.
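If you prefer to stay in the terminal, you can create an equivalent role from the CLI. This is a sketch that assumes a role named LambdaExecutionRole and a trust policy stored in trust-policy.json; adjust both to your liking.

cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name LambdaExecutionRole \
  --assume-role-policy-document file://trust-policy.json

aws iam attach-role-policy \
  --role-name LambdaExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole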

Now let's look at how we can start to create Lambdas. Type aws lambda help to get the AWS Lambda API Reference. Let's look at aws lambda create-function help.

“Creates a new Lambda function. The function configuration is created
from the request parameters, and the code for the function is provided
by a .zip file. The function name is case-sensitive.”

When creating a function you need to specify at least a function name, runtime, role, handler, and the code for the function. That is, you have to give your function a name, specify which runtime to use, specify which execution role the function should use, give the name of the method within your code that Lambda calls to begin executing the function, and lastly provide the code as a zip file.

aws lambda create-function \
--function-name MyCliLambdaFunction01 \
--runtime nodejs8.10 \
--role arn:aws:iam::13371337:role/LambdaExecutionRole \
--handler index.handler \
--zip-file "fileb://Index.zip"

The response from AWS when the function is created successfully is quite verbose. As seen below in the confirmation, the call returns the function name, function ARN, chosen runtime, role for the function, handler, size of the provided code, chosen timeout, and so on.

{
    "FunctionName": "MyCliLambdaFunction01",
    "FunctionArn": "arn:aws:lambda:eu-north-1:13371337:function:MyCliLambdaFunction01",
    "Runtime": "nodejs8.10",
    "Role": "arn:aws:iam::13371337:role/LambdaExecutionRole",
    "Handler": "index.handler",
    "CodeSize": 369,
    "Description": "",
    "Timeout": 3,
    "MemorySize": 128,
    "LastModified": "2019-01-22T19:25:30.212+0000",
    "CodeSha256": "L712mUWN7et62IgYGBeyZ0fikp3l8=",
    "Version": "$LATEST",
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "RevisionId": "79ff3431-8f1-4d10-8867-d96fee4ef1507"
}

Ok, so now we have a function. Looking in the AWS Lambda Management Console we see that a function is created.

So, let's try and invoke the newly created function.

aws lambda invoke --function-name MyCliLambdaFunction01 outfile

This command invokes the function synchronously. The command returns the result of the execution, as seen below. The output from the function is stored in the file named outfile.

{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}

So, we have created a function and invoked it through the CLI, and received a 200 as a response. What else can we do through the CLI? Let's see if we can add an environment variable and then change the behaviour of the function based on the contents of that variable.

Let's update the function to add environment variables.

aws lambda update-function-configuration \
--function-name MyCliLambdaFunction01 \
--environment "Variables={KeyName1=true,KeyName2=StringToPrint}"

Now that we've added the two variables, we have to update our code. I'll update my Node.js code with an if-statement that evaluates KeyName1 and, if it is true, returns KeyName2.
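The full code is in the gist linked at the end of this post; a minimal sketch of the updated handler, assuming the two environment variables created above, could look like this:

// index.js – sketch of the updated handler.
// Environment variables always arrive as strings, so compare against 'true'.
exports.handler = async (event) => {
    if (process.env.KeyName1 === 'true') {
        return process.env.KeyName2;
    }
    return 'KeyName1 is not true';
};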

This is how the CLI is called to update the code of the function by giving it a new zip file.

aws lambda update-function-code \
--function-name MyCliLambdaFunction01 \
--zip-file "fileb://UpdatedIndex.zip"

I invoke the function in the same way as above.

aws lambda invoke --function-name MyCliLambdaFunction01 outfile
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}

And the content of outfile is:

"StringToPrint"

This is quite a simple use case, but it shows that it is possible to work with functions from the CLI without touching the Management Console.

I will look further into this in the future.

Further reading:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
https://docs.aws.amazon.com/cli/latest/reference/lambda/
https://docs.aws.amazon.com/lambda/latest/dg/env_variables.html

Node.js code
https://gist.github.com/mHallne/ac06cbed68ecab7e0d137dd357f14c51

AWS Lambdas and Azure Functions

This is a quick glance at the similarities and differences between AWS Lambda and Azure Functions. This is no in-depth review of either service.

What is it all about

AWS Lambda and Azure Functions are event-driven functions that are used to build serverless applications. The services provide automatic scaling, no upfront cost, no provisioning, no servers, pay-for-what-you-use pricing, and so on.

AWS Lambda

According to Amazon Web Services: “AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running.”

Lambda is a service from Amazon Web Services that lets you run code with no provisioning needed and no servers to manage. There is no upfront cost, and you pay for what you use. You write your function, upload the code, and Lambda deals with the runtime, scaling, and making it highly available. Your code can be triggered from a wide range of different services, or be invoked from web and mobile applications.

To create a function in Lambda you log in to your AWS Management Console and choose Lambda. Once there you create a new function, either from scratch or using existing templates. For each function you choose a runtime and a role. The role defines the permissions of your function. A role can be tied to a single function or to several functions, and it is possible to add policies to a role. Once a function is created you can't change its name.

For each function that you create you need to specify a trigger. A trigger could be one or several of API Gateway, CloudWatch Events, CloudWatch Logs, DynamoDB, S3, Kinesis, SNS, or SQS. If you want to call your function from a web application you have to create an API Gateway with an API endpoint that your application can call. API Gateway is then responsible for triggering the function and returning the result. Compared to Azure Functions, there is no HTTP endpoint for a function by default.

You can supply code to your function in a couple of ways. Depending on the runtime you choose, you either write your code in the provided editor, or you upload a zip file or a file from S3. For instance, if you write functions using Node.js you can use the online editor, but if you want your functions to run .NET Core your only option is to upload a file.

Editor Lambda Management Console

When you create the function you have the possibility to add environment variables to instrument functions.

There are quite a lot of settings that let you add functionality to, or modify the behavior of, your function. For instance you can choose to encrypt environment variables at rest and in transit, change the memory footprint and timeout of your function, send error messages and time-out messages to a dead letter queue, add tracing using X-Ray, and add auditing and compliance by using AWS CloudTrail.
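Several of these settings can also be changed from the CLI. As a small sketch, memory and timeout for a hypothetical function named MyFunction could be adjusted with something like:

aws lambda update-function-configuration \
  --function-name MyFunction \
  --memory-size 256 \
  --timeout 10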

You can use an alias to create a fixed endpoint to your function. That way, resources invoking your function don't have to update the address when the function is updated with a new version. So, an alias is an always reachable endpoint for your function that you can point to a specific version. When you update an alias to point to a new Lambda version, all incoming traffic hits the new version, which could cause instabilities. Therefore it is possible to point your alias to two different versions of your Lambda and use a percentage weight to control the amount of traffic sent to each version. So you can say that 20% of the incoming traffic should go to the newly created version, and the rest should go to the previous version. When the new version has proven itself, one can steer more and more traffic to it.
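A sketch of weighted alias routing from the CLI (the function name, alias name, versions, and weight below are just examples) could look like this:

aws lambda create-alias \
  --function-name MyFunction \
  --name live \
  --function-version 1

# Send 20% of the traffic to version 2, the rest to version 1.
aws lambda update-alias \
  --function-name MyFunction \
  --name live \
  --function-version 1 \
  --routing-config 'AdditionalVersionWeights={"2"=0.2}'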

You can set a concurrency limit for each function that you create. That way you can specify the maximum number of concurrent executions allowed for that specific function. If you want a function to stop processing requests, you can throttle it by setting the concurrency to zero.
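From the CLI, throttling a hypothetical function could look something like this:

aws lambda put-function-concurrency \
  --function-name MyFunction \
  --reserved-concurrent-executions 0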

You can do some testing of your function in the console. There are preexisting templates that you can use, or you write your own payload. When using API gateway you can test your function implicitly by testing them through your API gateway.

In the Lambda dashboard you have account metrics that shows errors, invocations, duration, and so on for all your functions in current region. You can find these metrics for each function under the monitoring tab in the function.

Another feature of AWS Lambda is Layers. With Layers you can share code between multiple functions: you package the code and publish it as a layer.
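Publishing a layer and attaching it to a function can also be done from the CLI. The layer name, zip file, and ARN below are placeholders:

aws lambda publish-layer-version \
  --layer-name my-shared-code \
  --zip-file "fileb://layer.zip" \
  --compatible-runtimes nodejs8.10

aws lambda update-function-configuration \
  --function-name MyFunction \
  --layers arn:aws:lambda:eu-north-1:123456789012:layer:my-shared-code:1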

Functions are by nature stateless. To preserve state you have to use other services, for instance S3, DynamoDB, or similar.
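As a sketch of what that could look like in Node.js, a handler could persist state to a hypothetical DynamoDB table named FunctionState (the execution role would also need DynamoDB write permissions):

// Assumes the aws-sdk that is bundled with the Node.js Lambda runtime.
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
    // Store something from the incoming event so a later invocation can read it back.
    await db.put({
        TableName: 'FunctionState',
        Item: { id: event.id, lastSeen: new Date().toISOString() }
    }).promise();
    return 'state saved';
};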

Azure Functions

According to Microsoft with Azure Functions you “Easily build the apps you need using simple, serverless functions that scale to meet demand. Use the programming language of your choice, and don’t worry about servers or infrastructure.”

With Azure Functions you can build web and mobile application backends, do real-time file processing, automate scheduled tasks, extend your SaaS applications, and so on. You get an event-driven, serverless compute experience. Your functions scale on demand and you pay only for the resources that you consume. You can choose a hosting plan that lets you pay per execution and dynamically allocates resources based on your application's load. If you choose an App Service plan you use a predefined capacity allocation with predictable costs and scale, that is, you are responsible for scaling your function app.

When you create a function in Azure you log in to the management portal and go to Function Apps. There you create a Function App that works as a grouping of a number of functions and as a platform for that grouping. You can choose the operating system of your Function App when you create it, and you get an endpoint that is reachable over the web. A Function App is tied to a subscription, a resource group, and a region. A cool thing when creating your Function App and selecting the Linux operating system is that you can choose to deploy your function from a Docker image instead of from code. You will also note that the number of runtimes is smaller than on AWS at the moment.

When you’ve done this, you are ready to create a function.

You can configure the platform for the functions. You can provide connection strings that can be used by the functions, you can create app specific settings to instrument your functions without redeploying them, and you can mount additional storage, etc. You can set quotas on the platform to limit the usage on a daily basis. You can adjust access control for your platform. You can assign custom domain names.

You get monitoring and logs using metrics for the Function App. Microsoft says that the built-in logging is useful for testing with small workloads; for high workloads in production you should use Application Insights. You shouldn't use built-in logging and Application Insights at the same time, so make sure to disable built-in logging. You can bring your own dependencies, you have integrated security, and you can deploy your function from the portal or use continuous integration.

If you choose to create your function in code you can use, for instance, VS Code or any editor of your choice together with the CLI. Using VS Code you can develop your function locally and, when it is finished, upload it to Azure. Depending on the runtime you can also use the in-portal editor.
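A local workflow with the Azure Functions Core Tools could look roughly like this (the project, function, and app names are placeholders, and the exact prompts depend on the Core Tools version):

func init MyFunctionApp
cd MyFunctionApp
func new --template "HTTP trigger" --name MyHttpFunction
func start
func azure functionapp publish MyFunctionAppName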

When creating a function using the in-portal editor you can choose to trigger it from webhooks + API, a timer, or other templates. Templates include invocations from Azure Service Bus, Blob storage, Cosmos DB, Event Hubs, or Event Grid.

As with AWS, it is possible to do some testing of your function in the console, and you write your own payload.

Note that when you create functions you must create a storage account.

Another cool thing with Azure Functions is that you can run your functions on-premises.

Summary

The following table is a small summary and side-by-side comparison of AWS Lambda and Azure Functions. This is not the whole picture, just a small subset of their capabilities.

Scaling
AWS: AWS Lambda automatically scales your application by running code in response to each trigger. Your code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload.
Azure: You can use a hosting plan that lets you pay per execution and dynamically allocates resources based on your app's load. You can also use an App Service plan, which gives you a predefined capacity allocation with predictable costs and scale.

How to test
AWS:
* Use predefined blueprints to unit test and load test your Lambda.
* Write a Lambda to test another Lambda.
* Use plugins for toolkits to test your Lambda locally.
* Set up API Gateway and trigger your endpoints using, for instance, Postman.
Azure:
* Use unit testing frameworks in, for instance, Visual Studio and test your function there.
* Define a payload when using the in-portal editor.
* Use, for instance, Postman to trigger your function's HTTP endpoint manually.

How to deploy
AWS: Depending on the runtime there are some different options: upload a file from S3, upload a ZIP file directly, or use the in-browser editor. If you use Visual Studio there is a plugin for Lambda that can be used. You can also deploy using CloudFormation, or use a third-party toolkit such as Serverless (https://serverless.com/).
Azure: Depending on the runtime there are some different options: upload a ZIP file directly or use the in-browser editor. Deploy from the CLI, Visual Studio, VS Code, and so on. You can also deploy using ARM templates, or use continuous deployment directly from a couple of sources, for instance GitHub or Dropbox. It works in the same way as continuous deployment for web apps, but Functions does not support deployment slots yet.

Versioning
AWS: Yes.
Azure: Yes.

Environment variables
AWS: Yes, it is possible to add environment variables to instrument functions.
Azure: Yes, you can have connection strings and app settings.

Triggers
AWS: A trigger could be one or several of API Gateway, CloudWatch Events, CloudWatch Logs, DynamoDB, S3, Kinesis, SNS, or SQS.
Azure: Your functions can be triggered by HTTP calls, timers, Cosmos DB, Blob storage, Queue storage, events from Event Grid or Event Hubs, or by Service Bus.

Managing/orchestration
AWS: Use AWS Step Functions to define the steps in a workflow to coordinate the use of functions, using a state machine model.
Azure: Use Azure Logic Apps or Durable Functions for state transitions and communication between functions, or use storage queues for cross-function communication.

Available runtimes
AWS: Node.js, Python 2.7, Python 3.6, Python 3.7, Ruby, Java, Go, .NET Core. You can also provide your own custom runtime for your Lambdas to run in.
Azure: C#, JavaScript, and F# are generally available. Java and Python 3.6 are in preview. Python, TypeScript, PHP, Batch, Bash, and PowerShell are all in an experimental state.

Billing
AWS: You are charged for the total number of requests across all your functions. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100 ms. The price depends on the amount of memory you allocate to your function. The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month. NB: you may incur additional charges if your Lambda function utilizes other AWS services or transfers data.
Azure: Azure Functions has a consumption plan that is billed based on per-second resource consumption and executions. Azure also has a premium plan with the same features and scaling as the consumption plan, but with enhanced performance and VNET access; it is billed per second based on the number of vCPU-s and GB-s your premium functions consume. You can also run Functions within your existing App Service plan at regular App Service plan rates. There is a free grant of 1 million requests and 400,000 GB-s of resource consumption per month per subscription in pay-as-you-go pricing, across all function apps in that subscription. NB: when creating a Function App a Storage account is created automatically, and the cost for the storage account isn't included in the figures above.

Cost
AWS: Lambda free tier: the first 1M requests per month are free, then $0.20 per 1M requests. The first 400,000 GB-seconds per month, up to 3.2M seconds of compute time, are free, then $0.00001667 for every GB-s used. Lambda@Edge: request pricing is $0.60 per 1 million requests ($0.0000006 per request), and you are charged $0.00005001 for every GB-second used.
Azure: Execution time $0.000016/GB-s, total executions $0.20 per million executions.

Logging
AWS: Send error messages and time-out messages to a dead letter queue. Tracing using X-Ray. Auditing and compliance using AWS CloudTrail.
Azure: Application Insights.

Monitoring
AWS: CloudWatch and X-Ray.
Azure: Application Insights.

Taking your CRM to the Cloud in one hour

This is an edited version of my talk at Microsoft TechDays Stockholm, 17 November 2016.

Do you dare to go from an on-premises CRM to a cloud based solution to gain all the benefits of the cloud? Developing and deploying your CRM solution for the cloud doesn't have to be impossible or costly. If you plan it carefully, follow best practices, and sprinkle it with a bit of ingenuity, it's hard to fail. It is possible to lower your cost, increase quality, get more automation, and have full control in the cloud. In this session, we will look at one way to manage your CRM transition to the cloud.

Intro

I just want to say that when someone dares you, usually you should not pay any attention. You shouldn’t even pay any attention if someone double dares you… You know what? Someone double dared me once, and somehow I accepted the challenge. A couple of months later I was running my first ultra-marathon in the Swedish mountains.

Some quick notes before we really start

It is hard to talk about one specific topic without touching on another, closely related topic. So, bear with me when some topics bleed into each other.

Microsoft rebranded Microsoft Dynamics CRM to Microsoft Dynamics 365 a couple of days ago. So, bear that in mind. I will use the term CRM, Microsoft CRM, and Microsoft Dynamics CRM.

In this talk I will ask a lot of questions that I don’t have the answer to. You are the ones that must answer these questions, whether it is in your own business or if it is in your customer’s business.

This session is a level 100. Actually, I figured out that it is really hard to do a level 100 talk, because you shouldn't expect the audience to have any previous knowledge about the topic. So, I've done my best to keep things simple, but this talk is more like a 175 than a 100…

We will have time for questions at the end.

Do you dare to go to the cloud?

So, do you dare to go to the cloud? I just said that you should ignore anyone saying that, so let's change the question so that you don't ignore me.

Do you dare to stay out of the cloud?

A typical scenario is that it is time to upgrade/install/move your CRM solution. You've decided to go with an on-prem solution. You should just go ahead and insert the CD to start the installation, right? How hard can it be?

Install and setup the server, install the CRM software. Create your business rules. That would be something that an Enterprise Architect and a Solution Architect could do in a couple of weeks, right?

Yeah, and don’t forget that the CRM needs a database to store everything on. It should be installed on a fast server with lots of memory and processing power.

And you must integrate your CRM solution to your Active Directory, and you should have your Identity provider installed so that you provide a Single Sign On solution throughout the organization.

And don’t forget about backups.

And security, your system must be secure.

And redundant storage.

And we have all these tools and 3rd party solutions that should integrate towards the CRM. Do we have to invest in a new integration platform? Will the one we just deployed cope?

Business Intelligence; we should have all that so that we could track our user’s behavior.

And all the machines to handle that load.

And we must connect to our online service thing, which is in another part of the network. So, we must set up a DMZ to separate the services. Secure, you know.

And we should have a solution for logging

And monitoring

Load balancing is necessary

And you have some users that want to access the CRM solution from outside the office, using a mobile device. Typically an iPhone…

You must have trained people to handle all this operational stuff. If only I could hire enough people to handle all this technology.

And you must keep an eye on the cost. How do you keep track of all the money spent on infrastructure, man hours, licenses, training, operations and so on?

TCO of On-Premises vs Cloud


This is a simplified figure to show that there are a lot of hidden costs with an on-prem solution. To the left in the figure you see that above the surface there is the cost of software licenses. But below the surface, hidden from our view, are the capital expenses for customization and implementation, hardware, head count, maintenance, and so on. Operational expenses are also listed under ongoing costs.

To the right in the figure you see that over half of the capital expense when deploying to the cloud is subscription fees. The other fixed, up-front costs aren't there. The ongoing, or operational, cost is much smaller because it is part of the subscription.

Do you still dare to stay out of the cloud?

Ok, so you’ll continue with your on-prem installation and deployment. You’ve got your system up and running and it is time to open it up and let in the users.

During the project, you must make predictions on how the system will be used. And it is pretty hard to do these predictions. How many users should it be able to handle? What is the expected load? How will the underlying infrastructure handle everyday usage of the system? Have you tested and planned for all use cases?

But, you decide to open up your system. Then this happens.

This image shows what happened when Niantic launched Pokémon Go in Australia and New Zealand. And, you all might agree, this is an extreme scenario. But it is useful for proving my point that you have advantages of being in the cloud.

The orange line in the figure is Niantic's original launch target load. That was the estimate that they had. Just to be sure, they also had an estimated worst case scenario with 5 times the load, so if something hit the fan they would have some extra capacity. The green line shows the actual number of transactions. It took roughly 15 minutes to exceed the estimated worst case.

Looking at this figure one sees that they did their predictions and forecasts, or guesstimates if you rather want to use that word. They also calculated a worst-case scenario. But I don't think that they could have foreseen what would actually happen. This is proof that they couldn't.

In this case, they worked closely with Google to solve the problem.

A more realistic scenario is something that we experienced after a go-live. It was at a customer running a cloud solution. The customer experienced performance issues in the web API that we'd built in front of our CRM solution. The requests sent from the web app, through the API, towards the CRM took longer and longer to return. We couldn't find any performance problems with the code. The solution in this case, as in the case of Niantic, was to scale the environment that hosts the solution. This is done without affecting the daily operation of the service. For us the problem was solved in a couple of minutes once we had ruled out software problems.

But, what do you do when you are running on-prem and you hit the roof within a week, or a month? How do you handle a pressure on the system that you didn't or couldn't foresee? I think the answer is: you can't, not fast enough at least.

“It is difficult to make predictions, especially about the future” -Clever Danish person

So, this is a word of wisdom that I usually keep in mind: "It is difficult to make predictions, especially about the future". But with the cloud this isn't really a problem, because the effort of adapting to current conditions is small compared to running an on-prem solution.

Go to the cloud!

So. My message is: Go to the cloud! You have to go to the cloud. Your competitors are going to the cloud. I don’t think you can afford not going to the cloud.

Your answer to my dare should be: I don’t dare to stay on-prem, I am going to the cloud.

Who am I?

My name is Mikael Hallne. I work at Kentor. My role varies from software developer to systems architect to technical project manager. But no matter which role I have, I tend to preach/nag/whine/rant about testing, quality, coding standards, a continuous mindset, and that sharing is caring – promoting the agile mindset. So, I always end up with a bigger responsibility in the project than first intended…

Long story short: I've been involved in a couple of projects where we've transitioned from an old CRM to a new Microsoft Dynamics CRM solution, both on-prem and to the cloud. I don't know that much about CRM itself, but I know a few things about making it work in the cloud and how to integrate services towards it. So, this is based on real world experience.

Why should you go to the cloud?

So why should you go to the cloud? Whether it is part of your corporate strategy and in your long-term plan, or if your company or customer are just curious, here are some of the benefits I see for going to the cloud. And, I guess you’ve heard quite many of these over the last two days.

It is a low entry barrier to get started in the cloud – It is really easy to get started. A couple of clicks. Some code. And you are up and running.

It gives you flexibility – It gives you the opportunity to scale your environment on demand. You turn the control knob, and you increase the power. You turn the control knob the other way, and you decrease the power.

With a low entry barrier and flexibility you can be agile – You can set up a new environment in a couple of minutes to a day, instead of months when hosting the infrastructure and solution yourself. You can test your ideas. You can fail fast, try many solutions, and in the end use the best.

Being agile leads to shorter time to market. It might be a couple of days from idea to implementation instead of months if you host your solution yourself.

Being in the cloud gives you better control over cost – The cost of your system is collected on one invoice that you’ll receive every month. And you pay for what you’ve used. You don’t have to spend money in advance for something that might add value in the future.

It reduces your TCO of infrastructure – You don’t have to have a capital expense in infrastructure. You don’t have to buy new servers, and invest in datacenters. That leads to less pain during maintenance as you don’t have to manage updates of your system.

It gives you reliability – The cloud vendor has SLAs that promise you uptime. You also get load balancing, data replication, and so on to ensure the reliability of the system.

It gives you security

It enables easy monitoring and lets you gather metrics

The cloud also enables you to use a continuous integration and delivery pipeline. It is there, it is cheap, it is easy to use.

The Cloud makes sure that you don’t outgrow your system.

What is The Cloud?

The Cloud is a giant Internet-based “resource pool” accessible from everywhere. And it gives you the possibility to use what you need, when you need it, with minimal management effort, using a “pay as you go” model.

As I said in the previous slide, the Cloud gives you:

Scalability/Flexibility – Scale on demand

Agility and short time to market

Reliability, Redundancy, Security

IaaS vs PaaS vs SaaS

The cloud usually offers three different service models:

Infrastructure as a Service

Platform as a Service

Software as a Service

You have on-premises to the left. You manage everything: networking, storage, servers, virtualization, operating system, runtime, and so on.

With Infrastructure as a Service you get your infrastructure in the cloud – for instance, a virtual machine that you've got complete control over. The service provider manages networking, storage, servers, virtual machines, and so on. You manage the rest, from the operating system and runtime up to the applications, so you are responsible for keeping your system up to date. It feels like an on-prem machine, but instead of having it in your data center it is in the cloud. That is, the service that you use and pay for is infrastructure. This is a perfect service if you want to move an existing on-prem application to the cloud without doing any changes to it.

We've typically used Infrastructure as a Service when we've set up FTP servers, or set up a virtual machine with an SQL Server that we had full control over during data migrations, or virtual machines running TeamCity or Octopus Deploy. Or imagine a startup that can't afford to own that much hardware, and can't afford the daily operations to keep such a service up and running. Instead they buy their infrastructure as a service.

Platform as a Service provides you with a bit more than Infrastructure as a Service. You also get an operating system, middleware, and a runtime, and you might get development tools and so on. That is, a PaaS gives you a complete development and deployment environment. You manage the application and let it execute on the platform provided. That is, the service that you use and pay for is the platform needed to run your application. Azure and Google App Engine are examples of Platform as a Service.

We usually use Platform as a Service when we host web applications and web APIs, run Azure WebJobs, host identity providers, and build all sorts of integrations towards the CRM.

With a Software as a Service solution the service provider manages everything. The service provider manages the hardware and software and, with the appropriate service agreement, will ensure the availability and the security of the application and your data. You manage the content. That is, the service that you use and pay for is software managed by someone else.

Examples of Software as a Service applications are web based email such as Gmail or Hotmail, Dropbox for storage, Trello, or Dynamics 365 that Microsoft offers.

Hybrid solution

Just a quick mention of hybrid solutions.

Simply taking something that used to be on-premises and making it run in the cloud is of limited value. There isn’t a 1 to 1 mapping between an on-prem installation of the CRM and Dynamics 365 (CRM online).

You might have much of your existing infrastructure on-premises, or you might be reluctant to move some key applications to the cloud. There are many ways that CRM Online can operate as part of a hybrid IT solution. Using CRM Online doesn't mean that you have to move your whole IT stack into the cloud.

A hybrid solution offers you the possibility to not have to choose between on-prem or the cloud. With a hybrid solution, you can integrate your existing environment (on-prem that is) with the public cloud (Azure for instance). So, the cloud works as an extension of your current environment.

You need a strategy

Ok, so now you've made up your mind. You are going to the cloud. You are going to "lean in", as Brad Anderson said at the keynote yesterday. Or, your customer has decided that they want to take on the cloud, so it is your job to help them lean in. And you are bringing your CRM system along for the ride. But, before you start anything, it is important to have a strategy.

I've joined projects that have been running for a couple of months and were lacking a proper strategy. Without a real strategy, everything is hard. The strategy is part of the foundation of the project, and it is a real struggle to try to finish and deliver a project that lacks one. We don't know when we are done. We don't know what we are going to achieve. We miss the fundamentals of what we are trying to achieve.

So, in my experience you need a well-defined strategy to start with. But, you also need a roadmap. Once you’ve got the strategy, you can create a roadmap, which is more detailed and hands on.

In the process of creating the roadmap we start to ask questions like:

  • Will we do IaaS, PaaS, or SaaS? A mix of them all?
  • Are we going to just move the existing solution, i.e. do a 1 to 1 mapping of existing functionality into the new solution? Or are we doing a rebuild?
  • Are there other systems in our existing IT solution that could or should be moved to the cloud?
  • What opportunities do we have to modernize the solution?
  • What opportunities are there for innovation?
  • How do we design our solution for it to really take advantage of the cloud?
  • How will we handle data migrations?
What we do here lays the foundation for future work. From these questions and the answers provided one can start to create high level requirements or epics and get a sense of what needs to be done.

At this part in the project, it is also an excellent opportunity to decide on testing strategies and whether to do continuous integration/deployment or not. Hint: you should go for as much automated testing as possible and you should do continuous integration/delivery!

As part of your strategy and roadmap you should decide whether you are going to do a pilot deployment – one with a limited set of data and functionality, used by a limited set of users, that way incrementally introducing the new system and new functionality to the business. Or do you want to do a big-bang deployment where you switch from the old system to the new one with a large data migration?

Indiana Jones


So, before a big bang release you feel nervous: when is a good time to do the switch? Will it work? Have we done everything? The business is nervous. People are afraid of switching. Everyone hesitates.

You all know what happens when Indy does the switch, right? Everything looks fine for a while. Suddenly a large rock comes tumbling down and he runs for his life, screaming and panicking. You can make the connection between this and a big bang release…

But, as you are all aware, the IT industry likes to think that it is agile. And some companies are. But not when it comes to delivering a system – then suddenly it is traditional waterfall with a big bang release at the end. So, part of your strategy should be to deliver agile, with a pilot, adding value incrementally.

Considerations before moving your CRM to the cloud

Ok, apart from creating the strategy and the roadmap, there are some other questions that you’ve got to ask yourself. These questions are more the ones that drive your design. And these are only a few of all the questions that you’ve got to ask yourself.

Will all your services in the cloud be located in, for instance, Europe so that you comply with the Data Protection Directive (PUL), or the new General Data Protection Regulation effective around 2018?

 

Microsoft updates their CRM system twice every year. You don't really have the choice of not updating – you can only postpone it for so long, and you can only skip one update. If you are on version 8.0 and 8.1 is released, you can skip that one if you want to. But once 8.2 is announced you are forced to update your system to 8.2. How do you handle that? How do you make sure that your system is built to handle that? We will talk about best practices in a while.

When you are in the cloud you have to design for the cloud. Microsoft CRM instances use a pool of shared resources. This is a key difference from on-prem installations, as it means that no portion of these shared resources is dedicated strictly to the instance running your solution – they're shared. That means that you must design your solution to accommodate scenarios where these resources don't perform your requests immediately.

If you, or any of your neighbors in the cloud sharing the resources, use an excessive amount of resources, you may have a governor placed on your usage. That might lead to strange errors if your solution isn't designed for the cloud.

And, as I said, these are only some of the considerations needed before going to the cloud.

Drawbacks of being in the cloud with your CRM

Are there any drawbacks of going to the cloud? Of course there are!

What you gain in flexibility, you might lose in control. For example:

You don’t have control over your servers. As mentioned earlier, you share a resource pool.

You don't have the same control over the SQL servers, and you have problems tweaking their performance. If you, for instance, want to add an index to improve performance you have to contact Microsoft support. We've seen that this is an issue from time to time. It is solvable, but it is time consuming and frustrating.

You are not 100 % in control of when to update the system. We have a customer that didn't want to update their system because they were in a business-critical phase during the autumn, so we chose to postpone the update. The message we got was that we choose when to perform the update – we are in control. All of a sudden a new version is announced, and we can't postpone the previous update any longer. We are forced to update during the most critical part of our customer's yearly invoicing period. I had quite a few calls with Microsoft to try to postpone that update.

You don't have full control over your data in the sense that it is located on someone else's computer.

You don't have full control over your data in the sense that you don't have direct access to the database, so you can't query it the way you want to.

And as Brad talked about yesterday, you don’t know who is accessing your data.

Are you sure that the cloud service provider is transparent enough so that you can see where your data is stored and how it is secured?

Best practices

Ok, so you've decided to go for an online, Software as a Service, solution. What do you do next? One of the things you should do is to follow best practices, or patterns and principles as Microsoft calls it. There are a lot of benefits to following them.

Has anyone read their best practices? It is this thick… There is a white paper online that is only a couple of pages. I suggest that you read it, as it gives you pointers on what to look at.

The idea behind following best practices is that it makes life easier when handling updates and upgrades of your CRM system. The CRM platform could, after an update, change in ways that break an unsupported customization. But if you have followed best practices and have supported customizations, they should continue to work after these updates.

Another reason to stick with supported customizations is that, with a short time to market, the release pace is quite quick. Your time to prepare is short, and it's much better spent bringing the new features and functionality into your solution than fixing unsupported code so it doesn't break your company's or a customer's deployment.

Microsoft also recommends that you consider the long-term maintenance requirements, and use the tools in CRM for building business logic when possible. They say that configurations are easier for others to analyze and understand without having to open up Visual Studio. These tools are a more integrated part of the CRM solution, which means that they’ll be checked for compatibility during solution import. And while it’s easy to write code that doesn’t follow Microsoft’s guidelines, it’s really hard to do that with a configuration. That is, writing unsupported code is easy, having unsupported features by using configuration is hard.

They also have the recommendation that you should use workflows instead of plug-ins, and business rules instead of JavaScript. They say that choosing configuration can potentially improve performance, require less maintenance, and decrease the number of bugs in your solution.

But, I think that there are a couple of drawbacks to following these best practices. When using configuration, it is hard to have the kind of version-controlled system that a lot of developers are used to. For instance, you can't have several developers working with customizations at the same time in the same system, and branching of your solution is near impossible. With configurations it is hard to do isolated testing, and it is a pain having several different environments.

Ingenuity

We always deliver a new CRM system, whether it is on-prem or online, using automated testing and continuous delivery. We do this through code, but in a way that doesn't violate best practices, because we usually maintain the systems that we've built – and we don't want to spend time fixing our own unsupported customizations…

With that in place you are able to run automated unit tests and integration tests of your CRM solution. You also have the possibility to abstract away your CRM database and test everything locally with an in-memory database. That way you have easy control of your data and can use data driven testing.

We've found that using TypeScript instead of JavaScript gives you all the benefits of a strongly typed language, and it compiles to standard JavaScript. The JavaScript can then be injected into the solution at deployment. That enables you to test your JavaScript resources in isolation.

So Microsoft does some pretty cool things, but Microsoft – when you need to brush up your existing best practices come talk to us. We’ve got some ideas how to improve them to get better automation and continuous delivery…

Conclusions

So, to sum things up:

I've talked about how it is easy to go to the cloud because of the low entry barrier.

I've talked about how it gives you flexibility. You can start small, fail fast, and learn continuously without a large capital investment, which enables agility and shorter time to market.

It lets your business focus on creating value instead of managing hardware.

It eases your adoption of continuous integration/deployment and devops.

It gives you control over cost

The cloud makes sure that you don’t outgrow your solution

 

And it is fun

Move to WordPress in Azure

I don't really know why I haven't moved my hosted WordPress blog to Azure yet. But now is the time.

I am surprised how easy it was. A couple of clicks and you are done.

First, login to your Azure portal and go to the marketplace.

Select and create a WordPress app.

Create WordPress app in Azure

Fill in the mandatory fields for creating an app. I chose to have a MySQL In App (Preview) database provider.

Select Database Provider

Once created, go to your current WordPress installation. Log in as admin, go to Tools, Export and choose All content. Then click Download Export File. You'll now get an XML file with your site content.

Export posts and content from WordPress

Go back to Azure and open your newly created site. Fill in a Name, Username, and Password. Log in as admin, go to Tools, Import. Select the newly created XML file and click Upload file and import.

Import posts and content to your new WordPress app

Voila, all your blog posts and other content are moved. But you still have to move all your media files. Go back to your old installation of WordPress. As admin, go to Tools, Export. Select Media and choose a Start date and an End date to export all the media created during that time period. Click Download Export File.

Go back to your new WordPress installation in Azure. As admin, go to Tools, Import. Select the file and click Upload file and import. Done, you now have all your media files.

There are two things left to do.

You have to do something about the themes and plugins in your new installation. I've always used a default theme with minor modifications, so I don't have to move them from the old installation to the new one. If you have quite a lot of modifications you should look into a plugin that helps you clone or move your old installation. You also have to install the plugins that you had on the old installation. Again, since I only use Akismet for spam protection, I only have to copy my API key.

Secondly, you have to point your URL to your new location. In Azure you navigate to your app and click Custom Domains. Copy the IP address that is shown. Click Add hostname.

Add Custom Domain to your WordPress app

Now you have to verify that you own the domain that you want to redirect to your Azure app. You do this by going to your current registrar and configuring the DNS settings there. You should create two CNAME records, as seen in the figure below.

Configure DNS settings
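The exact host names to use are shown under CNAME configuration in the Azure portal. As an illustration only, assuming a blog on the www subdomain and an app called mysite, the two records could look something like this:

www            CNAME   mysite.azurewebsites.net
awverify.www   CNAME   awverify.mysite.azurewebsites.net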

Once that is done, go back to Azure, fill in Hostname and click Validate. If configured correctly, you should get two green OK and can then click Add hostname. If you get any error, follow the links under CNAME configuration in Azure.

Add hostname

Now you're done. It can take a while for the DNS settings to propagate, so you might not be able to use your URL to reach the new WordPress installation in Azure right away. But give it a try after an hour or two and you should see that it works.

Enjoy.

IdentityServer3, Azure, and the invalid grant

TL;DR: IdentityServer3 hosted in Azure with automatic scaling doesn't work without creating a backing store for its operational data.

While maintaining a previous installation of IdentityServer3 we got some strange errors reported from the web developers. On the web site, which is hosted on EPiServer, they got the error when they validated the token.

Server Error in '/' Application.
invalid_grant

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.Exception: invalid_grant

Source Error:

Line 110: }
 Line 111:
 Line 112: var userInfoClient = new UserInfoClient(
 Line 113: new Uri(AUTHORITY + "/connect/userinfo"),
 Line 114: tokenResponse.AccessToken);

They also reported that if they redirected their development environment towards our acceptance/staging environment, they experienced the problem 90 out of 100 times. But, when they redirected it to our test environment they never saw the problems.

Trying to narrow the problem down we tried to login to the system using different users. We experienced the problem most of the times, but we couldn’t find a pattern. But we found out that if you refresh the page (F5) a couple of times when you get the error, login is successful after a few tries.

Going further in trying to narrow down the error, we made sure that test and acceptance had the same code and configuration. They did. We looked at the certificate that IdentityServer3 used. It was valid for another couple of years. We made sure that the redirect URI didn't have any trailing slashes. No luck.

On to looking at the logs in Azure and Application Insights. And here we see something.

2016-09-20T14:47:21  PID[76] Information 2016-09-20 14:47:21.765 +00:00 [Information] Start token request
2016-09-20T14:47:21  PID[76] Verbose     2016-09-20 14:47:21.796 +00:00 [Debug] Start client validation
2016-09-20T14:47:21  PID[76] Verbose     2016-09-20 14:47:21.796 +00:00 [Debug] Start parsing Basic Authentication secret
2016-09-20T14:47:21  PID[76] Verbose     2016-09-20 14:47:21.796 +00:00 [Debug] Parser found secret: "BasicAuthenticationSecretParser"
2016-09-20T14:47:21  PID[76] Information 2016-09-20 14:47:21.796 +00:00 [Information] Secret id found: "web"
2016-09-20T14:47:21  PID[76] Verbose     2016-09-20 14:47:21.796 +00:00 [Debug] Secret validator success: "HashedSharedSecretValidator"
2016-09-20T14:47:21  PID[76] Information 2016-09-20 14:47:21.796 +00:00 [Information] Client validation success
2016-09-20T14:47:21  PID[76] Information 2016-09-20 14:47:21.796 +00:00 [Information] Start token request validation
2016-09-20T14:47:21  PID[76] Information 2016-09-20 14:47:21.796 +00:00 [Information] Start validation of authorization code token request
2016-09-20T14:47:21  PID[76] Error       2016-09-20 14:47:21.796 +00:00 [Error] Invalid authorization code: 48b5eb90e8b...979f67
{
  "ClientId": "web",
  "GrantType": "authorization_code",
  "AuthorizationCode": "48b5eb90e8b...979f67",
  "Raw": {
    "grant_type": "authorization_code",
    "code": "48b5eb90e8b...979f67",
    "redirect_uri": "http://URL.com/"
  }
}
2016-09-20T14:47:21  PID[76] Information 2016-09-20 14:47:21.796 +00:00 [Information] End token request
2016-09-20T14:47:21  PID[76] Information 2016-09-20 14:47:21.796 +00:00 [Information] Returning error: invalid_grant

Oh, a lead! Invalid authorization code.

Ok. I really don’t know what to do now, but let’s try and hit F5 a couple of times. After three (can sometimes be more and sometimes less…) times the page loads properly. Looking at the logs I see that it parsed the token successfully.

2016-09-20T14:47:26  PID[7516] Information 2016-09-20 14:47:26.618 +00:00 [Information] Start token request
2016-09-20T14:47:26  PID[7516] Verbose     2016-09-20 14:47:26.665 +00:00 [Debug] Start client validation
2016-09-20T14:47:26  PID[7516] Verbose     2016-09-20 14:47:26.665 +00:00 [Debug] Start parsing Basic Authentication secret
2016-09-20T14:47:26  PID[7516] Verbose     2016-09-20 14:47:26.665 +00:00 [Debug] Parser found secret: "BasicAuthenticationSecretParser"
2016-09-20T14:47:26  PID[7516] Information 2016-09-20 14:47:26.665 +00:00 [Information] Secret id found: "web"
2016-09-20T14:47:26  PID[7516] Verbose     2016-09-20 14:47:26.665 +00:00 [Debug] Secret validator success: "HashedSharedSecretValidator"
2016-09-20T14:47:26  PID[7516] Information 2016-09-20 14:47:26.665 +00:00 [Information] Client validation success
2016-09-20T14:47:26  PID[7516] Information 2016-09-20 14:47:26.665 +00:00 [Information] Start token request validation
2016-09-20T14:47:26  PID[7516] Information 2016-09-20 14:47:26.665 +00:00 [Information] Start validation of authorization code token request
2016-09-20T14:47:26  PID[7516] Information 2016-09-20 14:47:26.665 +00:00 [Information] Validation of authorization code token request success
2016-09-20T14:47:26  PID[7516] Information 2016-09-20 14:47:26.665 +00:00 [Information] Token request validation success
{
  "ClientId": "web",
  "GrantType": "authorization_code",
  "AuthorizationCode": "48b5eb90e8b...979f67",
  "Raw": {
    "grant_type": "authorization_code",
    "code": "48b5eb90e8b...979f67",
    "redirect_uri": "http://URL.com/"
  }
}
2016-09-20T14:47:26  PID[7516] Information 2016-09-20 14:47:26.665 +00:00 [Information] Creating token response
2016-09-20T14:47:26  PID[7516] Information 2016-09-20 14:47:26.665 +00:00 [Information] Processing authorization code request
2016-09-20T14:47:26  PID[7516] Verbose     2016-09-20 14:47:26.665 +00:00 [Debug] Creating access token
2016-09-20T14:47:26  PID[7516] Verbose     2016-09-20 14:47:26.665 +00:00 [Debug] Creating JWT access token
2016-09-20T14:47:26  PID[7516] Verbose     2016-09-20 14:47:26.682 +00:00 [Debug] Creating identity token
2016-09-20T14:47:26  PID[7516] Information 2016-09-20 14:47:26.682 +00:00 [Information] Getting claims for identity token for subject: f3fae24c-4d77-...-5065f38ada71
2016-09-20T14:47:26  PID[7516] Verbose     2016-09-20 14:47:26.682 +00:00 [Debug] Creating JWT identity token
2016-09-20T14:47:26  PID[7516] Information 2016-09-20 14:47:26.712 +00:00 [Information] End token request

This doesn't really make any sense. Why would something throw an exception, and then work just fine if you hit reload a couple of times…? Trying to narrow down the error domain further, I looked at the authorization code. No luck, it was handled correctly. I looked at the certificate used – had it expired or something like that? Nope, it looked just fine. Ok, how about the redirect URL and other callback URLs? Some googling revealed that there could be problems with trailing slashes. No luck there either…

I sat down trying to figure out what changes I had made to the code or the environment in the last couple of weeks. The only thing I could think of that I hadn't checked yet was my settings in Azure. The IdentityServer is hosted in Azure, and I had recently changed the App Service plan so that it should scale automatically. Let's try and disable that change: the App Service plan should run on only one instance and not scale automatically.

Problem solved! Everything works as expected.

Apparently IdentityServer keeps some kind of state in memory that doesn't survive scaling out to several instances. To handle that you have to build a backing storage solution so that authorization codes are shared between processes/instances. That is left as an exercise for the reader…

Problems installing dotnet core RC2 on Ubuntu 14.04.04 due to libunwind

I am trying to install the RC2 of dotnet from Microsoft. I am following the instructions on https://www.microsoft.com/net/core#ubuntu.

When calling curl -sSL https://raw.githubusercontent.com/dotnet/cli/rel/1.0.0/scripts/obtain/dotnet-install.sh | bash /dev/stdin --version 1.0.0-preview1-002702 --install-dir ~/dotnet I get an error saying I have to install libunwind first.

To install libunwind you have to clone the git repo with
git clone git://git.sv.gnu.org/libunwind.git

Following instructions on the Internet tells you to start the installation by issuing autoreconf -i. But that might tell you that you have to install libtool. To install autoconf and libtool, issue the command sudo apt-get install autoconf libtool. Now we head back to installing libunwind. Run
autoreconf -i
./configure
make
sudo make install

Now you can continue to install dotnet. Enjoy

åäö in json payload

I am adding Slack notifications to my Octopus Deploy. I am not using the recommended way; instead I've added a post-deployment script that does exactly the same thing as the recommended way. Why? Eh, reasons…

The problem I encountered was that the project name contains the character 'ä'. When running the message that should be sent to Slack through ConvertTo-Json, 'ä' is converted to 'BAD+4'. Some googling led me to try wrapping my payload in a [System.Text.Encoding]::UTF8.GetBytes call. So, this is what I ended up with.

Before:
Invoke-RestMethod -Method POST -Body ($payload | ConvertTo-Json -Depth 4) -Uri $hook

After:
Invoke-RestMethod -Method POST -Body ([System.Text.Encoding]::UTF8.GetBytes($payload | ConvertTo-Json -Depth 4)) -Uri $hook

And it works!

Octopus deploy notification in Slack

Credit.

Coverage reports in Jenkins

In the current project we use Visual Studio Professional 2013, which can't give me any code coverage. We also use Jenkins as our build server, and it would be nice to see a code coverage report after each build – especially since we have a goal to increase code coverage.

I've been experimenting quite a lot to get this working. I have read many useful blog posts and Stack Overflow answers, but for some reason I couldn't get it to work. But then, by some magic, I decided to read the documentation of one of the tools. And there it was, the solution to all my problems… regsvr32!

I decided to use OpenCover to get a coverage report, and then ReportGenerator to generate nice reports based on the output from OpenCover. You also have to install the HTML Publisher plugin for Jenkins to publish the generated report.

To install OpenCover, either follow the instructions in the wiki on the project's GitHub, or go to the release catalogue and download the installer. Install it on your Jenkins server. For ReportGenerator, go to its release catalogue and get the zip file. Extract it and put it in a folder on the Jenkins server. I assume that you already have the other plugins needed to run unit tests installed in Jenkins.

Previously I used the VSTestRunner plugin for Jenkins to run unit tests. Now this build step has to be removed and replaced with an Execute Windows batch command step. Here one wraps the call to vstest.console.exe with the call to OpenCover like this:

C:\JenkinsTools\OpenCover\OpenCover.Console.exe -target:"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe" -targetargs:"D:\Jenkins\Workspaces\TestProject\bin\Debug\Tests.dll /Platform:x86 /Framework:framework45 /Logger:trx" -output:"CodeCoverage\TestProject_Build-%BUILD_ID%.VSTest.Coverage.xml" -mergebyhash -skipautoprops

It took some work to get this running properly. At first I saw errors when OpenCover committed its work: it said that it couldn't find the PDB files, or that I had to register a user. So I added -targetdir:"D:\Jenkins\Workspaces\TestProject\PDB\", where I stored all the PDBs for the moment, and the -register:user flag to register/de-register the profiler. It worked perfectly in the command prompt, but I couldn't get it to work when running from Jenkins. So, checking the documentation for OpenCover, it said:

you pre-register the profiler DLLs using the regsvr32 utility where applicable for your environment.

Run
regsvr32 x86\OpenCover.Profiler.dll
regsvr32 x64\OpenCover.Profiler.dll

And now it works!

So, on to generating reports based on the output from OpenCover. From the same Windows Batch command as above, add the following to generate a summary and a report.

C:\JenkinsTools\ReportGenerator\ReportGenerator.exe -reports:"CodeCoverage\TestProject_Build-%BUILD_ID%.VSTest.Coverage.xml" -targetDir:CodeCoverageHTML -reporttypes:Html

C:\JenkinsTools\ReportGenerator\ReportGenerator.exe -reports:"CodeCoverage\TestProject_Build-%BUILD_ID%.VSTest.Coverage.xml" -targetDir:CodeCoverageHTML -reporttypes:HtmlSummary

Add a post-build action to publish an HTML report like this:

Post-Build Action - Publish HTML Report

Now you’ll have a link to the generated report from the status page of your project.

Additional reading:
Continuous Delivery – Adding Static Analysis

Build errors in TeamCity 9.x

I’ve recently played around with TeamCity 9 installed on a virtual machine in Azure.

Everything worked as expected and all builds were working properly. But then, when I added the project that the others are working on, the build failed. When I build it locally it works fine, but when TeamCity fetches it from GitHub it doesn't build. The error message is

C:\TeamCity\buildAgent\work\f612de5cd923cd\Project\ProjectProject\Project.csproj(250, 3): error MSB4019: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk.

On my dev computer I have VS 2013 Professional to build with. On the TeamCity server I have the bundled version of MSBuild. According to this Stack Overflow post you can't build MS web applications without installing VS. So, the quickest and easiest solution for me was to ask one of the devs to copy the contents of the folder C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\WebApplications\ on their local machine to the TeamCity server.

If this doesn’t solve your problem, or the SO link doesn’t help, check this link out.