To understand the concept of serverless more clearly, let’s step back and look at the evolution of software deployment approaches:

The history of software deployment (Image source: The Origins of Serverless)

In each of the first three paradigms, there is a concept of “where” your application is running: on a physical server on-site, on a virtual machine, or in a container on a cloud host.

The modern age of serverless computing has taken the “where” out of our concerns about software deployment. This age has progressed as follows:

  • It began in 2014 with the launch of AWS Lambda, a FaaS platform built on Amazon’s cloud infrastructure.
  • In 2016, Microsoft followed suit with Azure Functions.
  • Then in 2017, Google joined the party with a beta version of its Google Cloud Functions solution, which reached production status in July 2018.

These three serverless providers have slightly different limitations, advantages, supported languages, and ways of doing things.

Serverless is not the correct approach for every problem. It’s better to see it as augmenting the previous techniques rather than supplanting them.

Staying aware of your project requirements will guide you in choosing the most efficient way to launch and maintain your software platform. Large deployments, for example, may need to use a combination of different deployment approaches.

It’s common for GPU compute to be deployed on bare metal, instead of in a container or virtual machine.

Serverless backend providers (such as Parse or Firebase) offer a set of capabilities:

  • Infrastructural services like message queues, databases, and edge caching.
  • Higher-level services like federated identity, role and capability management, and search.

The choice between serverless FaaS and hosted containers may come down to the style and type of your application:

  • FaaS could be a better choice for an event-driven style with few event types per application component.
  • Containers could be a better choice for synchronous-request–driven components with many entry points.
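To make the event-driven FaaS style concrete, here is a minimal sketch of a Lambda-style handler in Python: a single function that receives one event type and returns a response. The event shape and field names are illustrative, not any vendor’s exact API.

```python
import json

def handler(event, context=None):
    """A minimal FaaS-style handler: one event type in, one response out.

    The event shape ({"name": ...}) is hypothetical; real platforms
    define their own event and context structures.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

A component like this is a good fit for FaaS precisely because it has one entry point and no state between invocations.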

The key operational difference between FaaS and platform as a service (PaaS) is scaling. With a PaaS like Heroku, for example, you may need to think about how many dynos you want to run in order to scale.

With a FaaS application, scaling is completely transparent. Even if you set up your PaaS application to autoscale, you won’t be doing this to the level of individual requests unless you have a specifically shaped traffic profile.

Given this benefit, why would people still use PaaS?

There are several reasons, but the tooling is probably the biggest one. The need for better monitoring and remote debugging solutions is an area that still needs significant improvement in the serverless market.

Platform as a service (Image source: What is PaaS?)

Serverless doesn’t mean “no Ops”, although it might mean no “sysadmin”.

“Ops” refers to a lot more than server administration. It includes monitoring, deployment, security, networking, support, and often some amount of production debugging and system scaling.

All these problems exist with serverless applications, and we need to deal with them. We’re just outsourcing the “sysadmin” with serverless.

Although there are deviations in how the different serverless platforms work, they typically follow a workflow similar to this:

  • You implement your software and package it following the guidelines of your chosen platform. Depending on the vendor, you may need to write your software as a JavaScript function, or even package it into a container.
  • Once you’ve created the package, you upload it to the serverless platform.
  • The deployment then goes live.
  • The serverless platform will manage when to create or destroy replicas of your application and will respond to increased load by creating more copies of the container or function.

This lean workflow tends to be pretty popular with developers. They only need to be concerned with the creation of the software, and the serverless vendor takes care of the details.

Microservices are the opposite of a monolith, in which all of an application’s functionality runs as a single entity. The microservices architectural pattern breaks software down into a series of small services, such as:

  • A search service to find products in a database based on user search queries.
  • A shopping cart service to manage the items that users add to their cart.
  • A checkout service that handles the payment process.
A serverless architecture (Image source: Serverless Architectures)
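As an illustration of how small each of these services can be, here is a hypothetical in-memory core for the shopping cart service: one narrow slice of functionality, owning its own state, independent of search and checkout. This is a sketch, not a production service.

```python
class CartService:
    """Hypothetical core of a shopping cart microservice.

    State is kept in memory for illustration; a real service would
    use its own datastore behind an HTTP or messaging interface.
    """

    def __init__(self):
        self._carts = {}  # user_id -> {item: quantity}

    def add_item(self, user_id, item, qty=1):
        """Add qty of an item to a user's cart and return the cart."""
        cart = self._carts.setdefault(user_id, {})
        cart[item] = cart.get(item, 0) + qty
        return dict(cart)

    def get_cart(self, user_id):
        """Return a copy of the user's cart (empty if none exists)."""
        return dict(self._carts.get(user_id, {}))
```

Because the service owns only cart state, it can be deployed, scaled, and replaced without touching the search or checkout services.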

Both technologies — microservices and serverless — offer important advantages for cloud-native computing, but they solve different problems and have distinct deployment environments.

In order to deploy your application on a serverless platform, you don’t have to use a microservices architecture. Serverless is one way to host microservices, but it’s not the only way. In addition, not every microservice can run as a serverless function. Depending on your use case, it could be better to deploy your microservices inside containers.

There’s nothing stopping you from deploying a monolithic application on a serverless platform, although it’s difficult to imagine many use cases where this would offer real benefits.

In a traditional environment, you may have one long-duration task performing both coordination and execution. To take advantage of the efficiency that serverless offers, it’s better to re-architect your monolithic application into several coordinated small units (FaaS functions).
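The split described above can be sketched as a thin coordinator that invokes small, single-purpose functions. The step names here are illustrative; on a real FaaS platform each step would be a separately deployed function, triggered by the previous one rather than called directly.

```python
# Hypothetical small units: each does one thing and returns quickly.

def resize_image(task):
    """Step 1: pure execution, no coordination logic."""
    return {**task, "resized": True}

def tag_image(task):
    """Step 2: another independent unit of work."""
    return {**task, "tags": ["placeholder"]}

def coordinator(task, steps=(resize_image, tag_image)):
    """Thin coordination layer replacing one long-duration task.

    On a FaaS platform the hand-off between steps would happen via
    events or queues instead of in-process calls.
    """
    for step in steps:
        task = step(task)
    return task
```

Each small unit can then scale and bill independently, which is where the serverless efficiency comes from.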

With the serverless architecture, developers are able to create software and not have to worry about issues like hardware, operating system maintenance, scalability, or locality.

That said, the serverless paradigm comes with several concerns. Some of them are:

  • Performance: Infrequently used serverless code may suffer from greater response latency than code that is continuously running on a dedicated server, virtual machine, or container. This is because, unlike with autoscaling, the cloud provider typically “spins down” the serverless code completely when not in use. This means that if your code requires a significant amount of time to start up, it will create additional latency.
  • Vendor lock-in: As I’ve already described, no two serverless platforms are identical. They all support different languages and tools. This means you can’t drag-and-drop a serverless function from AWS Lambda into Azure Functions without reconfiguring it. For this reason, applications and software that run in a serverless environment are by default locked to a specific cloud vendor.
  • Security: Sometimes a cloud vendor cannot satisfy your specific security needs.
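One common way to soften the cold-start penalty mentioned above is to perform expensive initialization once at module scope, so that warm invocations reuse it instead of paying the setup cost on every request. A sketch, with the slow setup simulated by a counter:

```python
_INIT_COUNT = 0

def _create_expensive_client():
    """Simulates slow setup, e.g. opening database connections."""
    global _INIT_COUNT
    _INIT_COUNT += 1
    return {"ready": True}

# Runs once per container instance (the "cold start"); warm
# invocations of the handler below reuse the same client.
_CLIENT = _create_expensive_client()

def handler(event, context=None):
    # No per-invocation setup cost on warm starts.
    return {"client_ready": _CLIENT["ready"], "inits": _INIT_COUNT}
```

This doesn’t eliminate cold starts, but it keeps the startup cost out of the request path for every invocation after the first.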