Serverless Architecture with Azure Functions – Barry Mills

I’ve been running the Derbyshire Dot Net user group for a number of years now and thought it was high time that I gave a talk myself. So on the 27th April I gave a talk on Serverless Architecture with Azure Functions. I had intended to post the slides from the talk on this site, but a number of the slides wouldn’t have made much sense without some background context. So, I decided to turn it into a blog post…

What is Serverless Architecture?


Serverless Architecture is something of a misleading term. It implies that there are no servers involved, but this is clearly not the case. Your code is still running on a server; however, as a developer, the idea is that you should not have to think about servers. You should only be focusing on developing core business functionality.

There are currently two schools of thought around serverless architecture, and these are BaaS (Backend as a Service) and FaaS (Function as a Service).

Backend as a Service (BaaS)

Backend as a Service (BaaS) provides developers with a way to connect their applications to back-end cloud storage and processing, while also providing common features such as authentication (e.g. Auth0), logging, push notifications, and other features that users expect from their applications. BaaS essentially means connecting to a vendor’s API so that the vendor provides your processing and you don’t have to.

Function as a Service (FaaS)

Function as a Service (FaaS), which is the focus of this post, is defined as:

server side logic is run in stateless compute containers that are event-triggered, and are ephemeral (may only last for one invocation)  – @mikebroberts

How does Serverless Architecture differ from Platform as a Service?

The key differentiator between Function as a Service (FaaS) and Platform as a Service (PaaS) is scaling.

With most PaaS offerings you still need to think about scale. For example, with Azure Websites you must start by selecting the App Service tier you need: the Shared tier allows you only 1 instance, Basic scales to 3 instances, Standard to 10, and Premium to 50. There is some level of automatic scaling with Azure Websites; however, if your scaling has reached the limit of your App Service tier (e.g. you are using all 10 instances in the Standard tier) and you need to scale further, then you will need some manual intervention to scale up to the next tier.

With a Function as a Service application this scaling is completely transparent. Even if you set up your PaaS application to auto-scale, you won’t be scaling to the level of individual requests (unless you have a very specifically shaped traffic profile), and so a FaaS application is much more efficient when it comes to costs.

Benefits of Serverless Architecture?

Cost


The major benefit of adopting a Serverless Architecture is reduced cost. Serverless Architecture is essentially a cloud-based offering, so it offers the same operational cost savings as IaaS, PaaS and SaaS because you are taking advantage of economies of scale. Because the vendor is running multi-tenant services, the cost of running the infrastructure is dramatically reduced and the saving is passed on to the client.

Backend as a Service (BaaS) offerings help to reduce the overall development cost because you are consuming an existing service rather than creating it from scratch. The added benefit of this approach is that the provider is likely an expert in the area of functionality you are consuming; if you were to attempt to re-create the functionality yourself you would most likely end up with an inferior product.

However, the biggest win for Serverless is the ability of Functions as a Service (FaaS) to elastically auto-scale. The major vendors, such as Azure Functions and AWS Lambda, charge based on actual usage. If you had a service that had to be ready to accept infrequent requests then you would need a PaaS service continually running, ready to accept the request. This means you would have to keep the resource switched on, constantly incurring a cost.

With FaaS the compute resource is only spun up when the request/trigger is received; the function is performed, and then the compute resource is torn down again. During my experimentation with Azure Functions I have yet to incur any cost, as you get 400,000 GB-s of resource usage and 1 million requests free every month.
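To make that free grant concrete, here is a rough worked example (the memory figure and per-execution duration are illustrative assumptions; the exact metering rules, such as rounding memory up to the nearest 128 MB, are the vendor’s and may change):

```
0.5 GB (512 MB observed memory) × 1 s per execution = 0.5 GB-s per execution
400,000 GB-s free grant ÷ 0.5 GB-s per execution    = 800,000 free executions/month
```

In other words, a modestly sized function can run hundreds of thousands of times a month before billing even starts.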

Disadvantages of Serverless Architecture?

Vendor Lock-in


It’s very likely that whatever Serverless features you’re using from one vendor will be implemented differently by another vendor. For example, Azure Functions uses a Run method in the run.csx file as the entry point, whereas when you create a Lambda function you specify a handler that AWS Lambda invokes when the service executes the function on your behalf.
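To illustrate the difference, here is a minimal, hedged sketch of the two entry-point conventions in C# (trimmed to essentials; the method bodies are placeholders):

```csharp
// Azure Functions -- run.csx: the runtime looks for a method named Run.
using System.Net;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("Azure Functions entry point invoked.");
    return req.CreateResponse(HttpStatusCode.OK, "Hello from Azure");
}
```

```csharp
// AWS Lambda -- an ordinary class; you register the method as the handler
// string "AssemblyName::Namespace.Function::FunctionHandler" at deploy time.
using Amazon.Lambda.Core;

public class Function
{
    public string FunctionHandler(string input, ILambdaContext context)
    {
        context.Logger.LogLine("Lambda entry point invoked.");
        return $"Hello from Lambda: {input}";
    }
}
```

Neither shape is portable to the other platform without change, which is exactly the lock-in risk.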

If you want to switch vendors you’ll almost certainly need to update your operational tooling (deployment, monitoring, and so on). In the case of deployment, if you moved from Amazon to Azure you would have to move from AWS CodeDeploy to a model that supports deployment to Azure, such as WebDeploy.

You may need to change your code (e.g. to satisfy a different FaaS interface), and you may even need to change your design or architecture if there are differences in how competing vendor implementations behave.

Execution Time


FaaS functions are time-limited: as of writing this post, both AWS Lambda and Azure Functions have a constraint of 5 minutes of processing before the execution is aborted, although I believe this limit is configurable in AWS Lambda.

Whenever possible, try to refactor large functions into smaller function sets that work together and return fast responses.

For example, a webhook or HTTP trigger function might require an acknowledgment response within a certain time limit. You can pass the HTTP trigger payload into a queue to be processed by a queue trigger function. This approach allows you to defer the actual work and return an immediate response. It is common for webhooks to require an immediate response.

– taken directly from the Azure Functions documentation
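As a hedged sketch of that advice in C# script form (the output binding name is an assumption; it would be declared in function.json):

```csharp
// run.csx -- accept fast, process later.
using System.Net;

public static async Task<HttpResponseMessage> Run(
    HttpRequestMessage req,
    IAsyncCollector<string> outputQueueItem, // queue output binding (illustrative name)
    TraceWriter log)
{
    string payload = await req.Content.ReadAsStringAsync();

    // Defer the real work to a separate queue-triggered function.
    await outputQueueItem.AddAsync(payload);

    // Acknowledge immediately, well within the webhook's response window.
    return req.CreateResponse(HttpStatusCode.Accepted);
}
```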

Start-up Time


Under the covers, Function as a Service (FaaS) implementations create a container for your code to run in on the first request. This means that the first request to a function may take quite a long time, as the platform is loading the container and any dependencies your application requires.

After a function is executed, the serverless vendor (Lambda / Azure Functions) keeps the container ready whilst waiting for another request to the function. Subsequent requests therefore re-use the existing container, so the function executes more speedily.

This works in unison with the previous constraint of bounded execution time. If your function does not get used within the time boundaries then the resource is torn down, and any further requests would once again require the container compute resource to be stood up, meaning the response time once again suffers.

Don’t use FaaS for time-critical apps unless you can guarantee a steady stream of traffic… or potentially ping the function every 5 minutes to keep it warm.
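If you do resort to pinging, the ping can itself be a Function. A minimal sketch, assuming a 5-minute schedule in function.json and a placeholder target URL:

```csharp
// run.csx -- keep-warm pinger; "schedule": "0 */5 * * * *" in function.json
// fires it every 5 minutes. The target URL below is a placeholder.
using System.Net.Http;

static HttpClient client = new HttpClient();

public static async Task Run(TimerInfo myTimer, TraceWriter log)
{
    var response = await client.GetAsync(
        "https://my-function-app.azurewebsites.net/api/MyTimeCriticalFunction");
    log.Info($"Keep-warm ping returned {response.StatusCode}");
}
```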

Microsoft’s offering: Azure Functions


Azure Functions are part of the Azure Web + Mobile suite of App Services that enable the creation of small pieces of meaningful, reusable methods, easily shared across services. These serverless, event-driven methods are often referred to as “nanoservices” due to their small size. Although an Azure Function can contain quite a bit of code, they are typically designed to serve a single purpose and respond to events.

Supported Languages


Azure Functions can be created in most common development and scripting languages, including JavaScript, C#, Python, Bash, Batch and PowerShell. They can be “triggered” by events in other Azure App Services, such as Web, Mobile, Logic, and API apps. Azure Functions can also be exposed via HTTP URLs for easy integration into legacy systems.

Common Scenarios for Azure Functions


Azure Functions are “event-driven”, meaning they run based on associated and configured events, or “triggers”. For example, a function could be triggered by a schedule, such as running a process once every 24 hours, or by an event in a document management system.

Azure Functions can also respond to Azure-specific events, such as an image added to a Storage Blob or a notification arriving in a Message Queue.
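As a hedged sketch of the blob scenario (the container path and parameter names are illustrative; they would be configured in function.json):

```csharp
// run.csx -- fires when a file lands in the configured container,
// e.g. "path": "images/{name}" in function.json.
using System.IO;

public static void Run(Stream myBlob, string name, TraceWriter log)
{
    // myBlob is the uploaded blob; {name} is captured from the blob path.
    log.Info($"New image uploaded: {name} ({myBlob.Length} bytes)");
}
```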

Azure Functions: Templates


Azure Functions can be created from scratch, but the Azure Portal exposes a large number of templates to help get developers started. These templates are designed with specific triggers in mind, such as a Blob Storage event (triggered when a file is uploaded to blob storage), a GitHub webhook event, and many more.

Templates can easily form the basis for robust function creation, and are really designed just to get a developer started.

Azure Functions: Timer Apps

azure-functions-timer-apps

Timer Functions are Azure Functions that are triggered by a specific time event.

Timer Functions are perfect for clean up and maintenance processes and can easily be combined with other functions for more robust scenarios.

Timer Functions use CRON expressions to configure their schedules.
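These are six-field CRON expressions (seconds first). As a sketch of a nightly clean-up function, with an assumed 1am schedule (the binding lives in function.json):

```csharp
// function.json would declare the trigger, e.g.:
//   { "bindings": [ { "name": "myTimer", "type": "timerTrigger",
//                     "direction": "in", "schedule": "0 0 1 * * *" } ] }
// Fields are {second} {minute} {hour} {day} {month} {day-of-week}.

// run.csx
using System;

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    if (myTimer.IsPastDue)
        log.Info("Timer is running late!");

    log.Info($"Clean-up run executed at {DateTime.Now}");
}
```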

Azure Functions: Data Processing Apps


Data Processing Azure Functions are triggered by some activity in a data store, such as a table, queue, or container.

Data Processing Functions also typically have both “in” and “out” parameters, meaning they can accept an object as a parameter, process the information, and return another object from within the function.

In and Out parameters are controlled by Bindings, which can be defined in the Azure Functions Portal or via the function.json file.

The In and Out parameters are not the same as a “return” value, as most Azure Functions either return void or an HTTP response such as “created”, “accepted”, or “OK”.
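A minimal sketch of that in/out shape (the parameter names are illustrative and would map to queue bindings declared in function.json; note the void return, with data flowing through the bindings):

```csharp
// run.csx -- "inputMessage" arrives from a queue trigger (in); whatever is
// assigned to "outputMessage" is written to a second queue (out).
public static void Run(string inputMessage, out string outputMessage, TraceWriter log)
{
    log.Info($"Processing queue item: {inputMessage}");
    outputMessage = inputMessage.ToUpperInvariant(); // the "processed" object
}
```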

Azure Functions: HTTP and Webhooks


Webhook and API Functions are designed to easily integrate with 3rd party systems, like GitHub, Office 365, and especially Microsoft PowerApps.

Since Webhook and API Functions are often exposed to external or legacy systems, they typically need CORS settings managed in order to “allow” external resources to “see” and execute the function.
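CORS is managed at the Function App level rather than in the function’s code. As one hedged example (using the cross-platform Azure CLI; the resource names are placeholders), an origin can be whitelisted from the command line:

```
# Allow a specific external origin to call functions in this app.
az functionapp cors add --resource-group my-rg --name my-function-app \
    --allowed-origins https://legacy.example.com
```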

Most Azure Logic Apps leverage Webhook and API Functions directly.

Anatomy of an Azure Function


An Azure Function is really a group of a few files that work in harmony.

All actual function logic resides in a script file written in a language of choice, such as “Run.csx” for C#.

A “Project” file (project.json) is similar to a project file in other technologies such as .NET Core, and contains “secondary” assembly references such as NuGet packages (or DLLs).

Finally, a “Function” file (function.json) contains information about triggers and parameters, such as the location of a Storage Queue connection string and whether a parameter is designated as “in”, “out” or bi-directional.

Optionally, additional configuration is also found in the Function App Settings such as an API Key or database connection string.
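To make that concrete, a minimal project.json for a C# script function pulling in a NuGet package might look like the following (the package and version are purely illustrative; the runtime restores packages automatically on deployment):

```json
{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Newtonsoft.Json": "10.0.3"
      }
    }
  }
}
```

App settings such as API keys are then read in code, for example via System.Environment.GetEnvironmentVariable, rather than being committed alongside these files.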

Azure Function Bindings


Without bindings, an Azure Function would just be a “disconnected” algorithm without any way to serve a purpose.

Bindings serve to connect function inputs and outputs to other services. Some of the most common binding types were covered in the talk slides, and an example definition is shown below; however, variations and adaptations can and do exist.
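As an illustrative (not exhaustive) function.json, wiring a queue trigger to a queue output to match the in/out sketch earlier (the queue names are assumptions; AzureWebJobsStorage is the conventional storage connection setting):

```json
{
  "bindings": [
    {
      "name": "inputMessage",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "incoming-items",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputMessage",
      "type": "queue",
      "direction": "out",
      "queueName": "processed-items",
      "connection": "AzureWebJobsStorage"
    }
  ],
  "disabled": false
}
```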

Testing Functions


Many Azure Functions are exposed via an actual URL that can be called directly from a web client or browser. When an Azure Function is not exposed via a URL, it’s common practice to call the function from another function, such as a Timer-based Function used for testing purposes only. Since Azure Functions can be nested, testing scenarios can be quite varied. For managing and testing Azure Functions that integrate with Storage Containers, Microsoft provides the Microsoft Azure Storage Explorer, as well as the Visual Studio Cloud Explorer. The Logs console in the Azure Function Designer is also a great way to view and trace function processing.

Closing thoughts

I finished the talk with a few demos (these are included in the actual slides and I may make them the subject of a further post). Some of the questions that were asked echoed my own thoughts, particularly on aspects of tooling. One of the key concerns raised was how we get Functions into source control and into a continuous delivery pipeline. There are some tools available to develop locally, but this requires a CLI utility that can be installed via npm from https://www.npmjs.com/package/azure-cli.
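For reference, the utility linked above installs like any other global npm package (assuming the package name from that link):

```
npm install -g azure-cli
```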

February Meetup – Ian Robinson talk on Umbraco

Umbraco is one of the most popular open source ASP.NET CMS products on the market. Used by the likes of Microsoft, Peugeot, Costa, and Heinz, it has been around for over 10 years and has a vibrant and active community of tens of thousands of developers. The software itself can be installed on dedicated servers, VMs or in the cloud, with the latest offering being “Umbraco as a Service”.

In this talk we will look at what you can use Umbraco for, the community that surrounds it, how you install and configure it, customising the look and feel of your website, how to add features using plugins from the community, and how to extend the back office yourself using C#, JavaScript and HTML.

Ian is the director of Chilli Information Solutions Ltd (www.chilli-is.co.uk) and has been developing software for over 15 years, going freelance 7 years ago. He specialises in the healthcare sector, developing e-learning, web applications, and HL7 integration solutions for clients such as Health Education England, the University Hospital of North Midlands NHS Trust, Staffordshire and Stoke-on-Trent Partnership NHS Trust, the Royal College of Anaesthetists and the Royal College of Obstetricians and Gynaecologists.

How to Speed up .Net and SQL Server Web Apps – An Overview

We’ve been running Derbyshire Dot Net for a number of years now and I thought it was about time that we started giving an overview of the evening’s talks, and this is something I’ll be looking to do every month from now on.

On 27th October 2016 we hosted Bart Read, who gave a presentation on “How to Speed up .Net and SQL Server Web Apps”. The talk was scenario-based and focussed on how Bart had diagnosed performance problems for a number of his consulting clients. The group was very impressed with Bart’s presentation, as he made what could be quite a dry subject into an entertaining talk.

SQL Query Performing Slowly

The first scenario Bart introduced was that of a Customer Support Centre application. It was a traditional 3-tiered architecture (ASP.NET, NHibernate, and SQL Server) in which the SQL Server database had a linked server which was accessed via a SQL synonym.

First, he took the application server out of the load balancer so that he could isolate performance profiling without affecting production systems. Then, using ANTS Performance Profiler, he was able to establish that there was a particular SQL query that was taking upwards of 40 seconds to complete.

Now that he had the badly performing SQL query, he needed the actual parameters that were used; individual parameter values can influence the performance of the system due to indexing strategies, how SQL Server builds its statistics, and many other variables. He ran Microsoft’s SQL Server Profiler tool, which allows you to collect all commands sent to SQL Server, and recommended that you filter by Application Name (as defined in web.config), DatabaseName, and potentially where the time taken is greater than a defined threshold. The suggestion was made that you capture the following events in SQL Profiler:

  • SQL:BatchStarting
  • SQL:BatchCompleted
  • RPC:Completed

A really useful tip from this section of the talk was that there is the option to save the profile to a ‘Trace Table’. The trace table is just a standard database table that the profiler writes to, which allows you finer-grained control over filtering the result set because you can run SQL queries against it.

At this point Bart explained that in some cases your application may run really slowly, yet when you run the same SQL statement in SQL Server Management Studio it runs extremely quickly. This is down to how SQL Server creates execution plans and caches them. SQL Server takes a query and, using the statistics it has collected over time, determines the most efficient query execution plan. This plan is then stored in a cache which is implemented as a hash table. The hash key is made up of a number of factors, including the text of the query to be executed. If the query is formatted differently (e.g. has a line break) then it will produce a different hash value and will not find the plan in the cache.

After this he explained that he went on to analyse the actual execution plan and found a query filtering a table on the linked server. However, as the table was on the linked server and not in the current database, the query was retrieving all 600,000 records for each request before filtering. The accepted solution in this case was to create a table-valued function on the linked server and do the filtering in the correct database.

Diagnosing a .Net Memory Leak

The second scenario was for the same client, but this time they had a .Net memory leak that they were unable to locate. Here Bart used the ANTS Memory Profiler tool, which allowed him to take snapshots of memory before and after the offending operation. In this particular scenario it was identified that the client was using Castle Windsor dependency injection with a Transient lifestyle when they should have been using PerWebRequest. However, this part of the talk was probably the highlight, due to a detailed but clearly understandable explanation of .Net garbage collection using the mark and sweep algorithm.

One of the key takeaways from this section was how misused the GC.Collect() method is: forcing a collection just promotes surviving Gen 0 objects into Gen 1, so it will likely slow the application down further.

…and the rest

The following few scenarios weren’t quite as in-depth, but covered issues with deadlocking, network latency, and unnecessary caching in the code. The talk finished with a roundup of browser-based performance, focussing on how the download of JavaScript framework files (Angular, React et al.) can affect performance. It was also noted that JavaScript is subject to the same garbage collection principles mentioned earlier, and that the Google Chrome browser has built-in developer tools for taking memory profile snapshots.

All in all this was a very informative talk and I’d have no qualms recommending it to another user group. The slides are available for download at http://www.slideshare.net/bartread/longer-version-2-x-45-mins-with-break-how-to-speed-up-net-and-sql-server-web-apps. If you have any questions please feel free to get in touch with Bart ([email protected]). Bart is available for performance consultancy gigs, should you find yourself in need.

Next Meetup – Bart Read – Speed up .NET and SQL Server Web Apps

October 27th 2016 – 6.30PM

Reserve your Place at Meetup

Bart talks about the techniques used to identify, troubleshoot, and fix performance problems in web apps across the whole stack, and illustrates these with a number of real world examples.

Bart works from his home near Cambridge in the UK as an independent consultant, entrepreneur and tech writer. He also contributes to the open source Node Tools for Visual Studio project hosted on CodePlex.

His expertise is in .NET, web, and mobile, and he’s always looking for new technologies to learn, use, and write about. As a result you’ll often find him at local developer community events.

Previously he worked at Red Gate Software for nearly 10 years – a company making tools for developers and DBAs. Over that time he was a developer, project manager, and product manager, but he also helped out with technical recruitment, and even ran the IT department for a while. He’s worked on some great products for .NET developers over the years: Nomad for Visual Studio, .NET Reflector, ANTS Performance Profiler, and ANTS Memory Profiler, along with many of Red Gate’s SQL tools.

Thanks to Red Gate Software we have a number of prizes to give away on Thursday.

Hope to see you there!

Next Meetup – Andrew Bullock – Writing Robust Systems

July 28th 2016 – 6.30PM

Reserve your Place at Meetup

Building stable systems can be hard, but much of the difficulty can stem from poor design rather than inherent complexity.

In this talk Andrew will be covering design patterns, processes and considerations for building stable, fail-well software.

Hopefully you’ll take away useful ways of thinking about software design and implementation, along with some helpful code snippets and patterns to enable you to get some quick wins as well as long term improvements.

Twitter @trullock

Next Meetup – Richard Wilde – Web UI Testing with CasperJS

The next meetup of Derbyshire DotNet is tomorrow (30th June 2016) at 18:30 at the Greyhound Pub on Friargate. We will be welcoming Richard Wilde of wildesoft.net, who will be delivering the talk below:

Unit testing helps us when writing any sort of application; however, we often end up writing unit tests for small pieces of logic that don’t really matter, and sometimes miss out on the bigger picture. We tend to shy away from Web UI testing as the feedback loop is just too slow. In this presentation we will look at a toolset that gives us end-to-end testing while aiming for far faster feedback.

Welcome to CasperJS, a navigation and testing utility written in JavaScript, which plays nicely alongside PhantomJS, a headless WebKit browser.

The promise is simple: wouldn’t it be nice to be able to perform some UI testing before you commit your code to source control?

About Richard Wilde

Richard started programming during the home computing era with a ZX81. He has been running his own software company, wildesoft.net, since 2004 and has a lot of experience delivering Microsoft-based solutions for all aspects of business. He is also the co-founder of Smart Devs, a user group based in Hereford. He also hangs out on Twitter as @rippo.

Cryptography in .NET Talk at NDC London

Back in January I had the opportunity to speak at the NDC London conference at the ExCeL centre. The video recording of my talk at NDC London is now available to watch online. This was my first major conference, so it was a little scary, but I really enjoyed the experience. The room was about two-thirds full and I got an excellent speaker rating at the end, so I must have done something right.

Next Meetup – SQL Azure by Tobiasz Koprowski (Data Platform MVP)

The next meetup of Derbyshire DotNet is tonight at 18:30 at the Institute for Innovation in Sustainable Engineering. The talk will be separated into two parts, but will focus on SQL Azure, SQL Server and Cloud Services.

In the first session Tobiasz will introduce everyone to the technology formerly known as SQL Azure (now Windows Azure SQL Database). Then, in a Tips and Tricks session, he will show which points, features, compatibilities and incompatibilities of SQL Azure are important for DBAs.

He will also cover functionality, performance, cost, SLA and security aspects. Then, after a short break, he will demonstrate how we can work with our data in the Cloud using SQL Azure, Blob Storage, and the backup, restore, encryption and availability functionality. We will learn how we can implement a hybrid environment, and when and why it is (or is not) good practice. Finally, we will find a few minutes for a discussion about the future of the DBA.

Tobiasz is an independent consultant and CEO of Shadowland Consulting, and a community leader focused on SQL Server, SharePoint, security, Cloud and collaboration solutions, as well as ITIL, DR, BCM and SLAs. He loves licensing agreements, and works on audit, consulting and implementation projects in Poland, Scandinavia, the rest of Europe and China. He is a former Vice-Chair of the GITCA EMEA Board, a member of the boards of the Polish Information Processing Society and ISSA Polska, and a member of several associations and communities around the world, including the Microsoft Terminology Community, Friends of RedGate PLUS, PASS, ISSA and ACM.

He has been an MCT and MVP since July 2010, and is a Subject Matter Expert at CQURE and Microsoft Connect. He is a former president of the Polish SQL Server User Group, and the creator and CEO (2009-2011) of the SQLDay Conference. He is an active blogger (owner of five blogs), an international speaker with experience at many different conferences, and co-author of SQL Server MVP Deep Dives Volume Two. He is currently settled not far from Sherwood Forest: a cross-mountain cyclist, amateur swimmer and runner, traveller, and fan of snooker and good music.