Going Serverless? 8 Use Cases to Guide You


Does this scenario sound familiar? Your IT department is devoted to developing applications and new features as well as maintaining, patching, and monitoring your servers. You spend more time than you’d like putting out fires and keeping an eye on your infrastructure. Despite this heavy responsibility, you’re still being pushed to generate more value for your customers and innovate at a moment’s notice.

To cope with this challenge, many IT and R&D leaders are turning to serverless technologies such as AWS Lambda and Azure Functions. Serverless is the fastest-growing cloud service model, with the market projected to reach nearly $22B by 2025 (a compound annual growth rate of close to 28 percent). With a growth rate of this magnitude, you’ve likely heard the buzz surrounding serverless computing.

Let’s dive into the benefits and some potential use cases.

The Benefits of Serverless Technology

The focus for IT and innovation departments has shifted from infrastructure management to development. For many companies, this shift pays off: a 2018 Hackernoon survey found that early adopters of serverless technology reported a 77 percent increase in delivery speed and an average of four developer workdays saved per month. Removing the time-intensive tasks of provisioning and infrastructure management frees internal resources to focus on business needs, adding value, and development.

For businesses that need to experiment and innovate regularly at a breakneck pace, serverless often makes sense. Case in point: RightScale’s 2019 State of the Cloud Report found that serverless adoption grew 50 percent from 2018 to 2019.

How Businesses Use Serverless

As you work to determine whether a serverless approach would make sense for your organization, take a look at how other companies are leveraging similar strategies for a range of initiatives.

#1 Revenue Forecasting

IndieHackers is well known for its collaborative community of tech founders and entrepreneurs. Its team recently worked with DataBlade to make an existing forecasting model usable outside of the DataBlade platform. The model was key to making strategic budget and resource-allocation decisions.

IndieHackers and DataBlade decided to leverage a serverless web application to interact with an accessible, on-demand standalone implementation of the forecasting model. By employing a serverless strategy, IndieHackers and DataBlade were able to deliver a forecasting model backed by data science that is on-demand, scalable, and simple to deploy.

#2 Replace Inefficient Processes and Reduce Errors

Netflix has relied on AWS for years to help scale its infrastructure and meet customer demand. Neil Hunt, Netflix’s Chief Product Officer, highlighted AWS Lambda as a key component of the company’s initiatives to root out inefficient processes and reduce errors.

Netflix uses AWS Lambda with event-based triggers to help simplify managing its complex, dynamic infrastructure. These triggers are central to building a self-managing infrastructure: automating the encoding of media files, validating backup completions and instance deployments, and more.

Lambda functions check the countless Netflix files modified each day, determining whether they need to be backed up and verifying their validity and integrity.
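The event-triggered pattern described above can be sketched as a minimal Lambda handler. This is a hedged illustration, not Netflix’s actual code: the backup rule, file extensions, and field names are assumptions; only the event shape follows AWS’s standard S3 notification format.

```python
import json

def needs_backup(key, size_bytes):
    # Illustrative rule (an assumption): back up only non-empty media files.
    return size_bytes > 0 and key.endswith((".mp4", ".mov"))

def handler(event, context=None):
    # Entry point invoked by an S3 ObjectCreated event trigger; the event
    # follows the standard S3 notification shape (Records -> s3 -> object).
    results = []
    for record in event.get("Records", []):
        obj = record["s3"]["object"]
        results.append({
            "key": obj["key"],
            "backup": needs_backup(obj["key"], obj["size"]),
        })
    return {"statusCode": 200, "body": json.dumps(results)}
```

Because the trigger fires per event, there is nothing to poll and nothing to keep running: the validation logic only consumes compute when a file actually changes.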

#3 Manage Billions of Dollars in Transactions with a 100% Serverless Strategy

The Shamrock Trading Corporation is the parent company of five brands in the transportation services, finance, and technology industries. Shamrock depends on its invoicing system, trucking fleet software, and check depositing app to keep its customers in the financial services and logistics verticals happy.

Interestingly enough, all of those services are serverless. Shamrock moved to an entirely serverless strategy to reduce costs and eliminate the need for active scaling. By moving its Docker app to a serverless workload, the company cut its costs from $30,000 a month to $3,000. As a result of this success, Shamrock is converting more legacy applications to serverless.

#4 Collect, Analyze, and Deliver Play-by-Play Analytics

AWS drives Major League Baseball Advanced Media’s (MLBAM) Player Tracking System. The digital and interactive arm of MLB set out to find an innovative way to collect and analyze plays from ballparks across North America. The MLBAM team knew it needed to produce analytics within seconds during the baseball season, with the ability to turn the system off in the off-season.

With AWS, the Player Tracking System gets metrics and video into the hands of broadcasters within 12 seconds of the play’s completion. The system leverages a range of products including AWS Direct Connect, Lambda, Amazon EC2, Amazon S3, Amazon ElastiCache, Amazon DynamoDB, and Amazon CloudFront.

#5 Faster Time to Market for New Services

As a web and mobile app engagement company, Localytics needs to support and manage billions of data points each day from the mobile apps running the company’s software. To create new services, Localytics’ engineering department needed to access subsets of that data and get it to customers quickly. The company’s previous approach required the engineering team to handle infrastructure management, capacity planning, and utilization monitoring.

Localytics now uses AWS to ingest billions of data points each month, which ultimately end up in an Amazon Kinesis stream. As each new software feature is created, a microservice leverages Lambda to access that Kinesis data stream. The engineering team can focus on creating new services without provisioning or managing infrastructure for each microservice. Lambda scales up and down as the load requires, and each new feature functions independently as a microservice.
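A microservice in this pattern can be sketched as a Lambda handler fed by a Kinesis event. This is a minimal illustration of the Kinesis-to-Lambda flow, not Localytics’ implementation: the payload fields (`app_id`) are assumed, while the base64-encoded `kinesis.data` field matches the standard Lambda event format for Kinesis records.

```python
import base64
import json

def handler(event, context=None):
    # Lambda delivers Kinesis records with the payload base64-encoded in
    # record["kinesis"]["data"]. Decode each one and aggregate a simple
    # per-app event count (placeholder logic for a real feature service).
    counts = {}
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        point = json.loads(payload)
        counts[point["app_id"]] = counts.get(point["app_id"], 0) + 1
    return counts
```

Each feature team can deploy its own small handler like this against the same stream, which is what keeps the microservices independent of one another.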




And … 3 More Interesting Use Cases 

Looking for more on how and where to integrate serverless into your IT strategy? AWS provides a comprehensive set of guides, API references, tutorials, and projects.

Learn to run code on AWS Lambda without having to provision or manage servers. If you’re interested in releasing code faster, learn how to build a serverless application with AWS CodeStar and AWS Cloud9. You can also find documentation on hosting your website’s back-end logic on AWS Lambda so your team can spend more time on front-end functionality and UX.

Caveats to Consider

A serverless approach presents three main caveats for consideration.

1. Lock-In Effect

This potential drawback goes hand in hand with vendor dependency. It’s in your serverless provider’s best interest to make your user experience as “sticky” as possible. In other words, providers don’t want to make it easy for you to switch from one to another. If you do decide to change providers, you may have to re-engineer your applications.

2. Time-Limited Tasks

If you’re working with long-running tasks, serverless may not be the right approach. Your serverless provider imposes a time limit on each execution, which typically works well for short or real-time processes. A task that needs more time must be split across multiple function invocations; otherwise it will simply be cut off when it hits the timeout.

However, major providers have been actively working to make long jobs easier to run. For example, AWS offers AWS Step Functions, which lets you compose large workflows out of many small jobs, and Azure Functions’ Premium plan can be configured with an unbounded execution timeout.
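The split-across-invocations idea can be sketched as a handler that processes one chunk per call and returns a cursor, so the next invocation (or a Step Functions loop) resumes where the last one stopped. The chunk size, the item list, and the doubling "work" are all illustrative assumptions.

```python
# Assumed to fit comfortably within one invocation's time limit.
CHUNK = 3

def handler(event, context=None):
    # Process one chunk of the job per invocation; return a cursor and a
    # "done" flag so an orchestrator can decide whether to invoke again.
    items = event["items"]
    cursor = event.get("cursor", 0)
    processed = [i * 2 for i in items[cursor:cursor + CHUNK]]  # placeholder work
    next_cursor = cursor + CHUNK
    return {
        "results": processed,
        "cursor": next_cursor,
        "done": next_cursor >= len(items),
    }
```

In a Step Functions state machine, a Choice state would inspect the `done` flag and either loop back to this task or finish, which is exactly the “large workflow made of small jobs” pattern.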

3. Cold Starts

With the serverless pricing model, you generally don’t pay for function idle time. When there is no load on your service, the cloud provider “freezes” the containers in which your functions run. When users resume activity and the load increases again, the provider must unfreeze those containers, or even create new ones, to handle all the requests quickly.

This unfreezing or new-container creation is known as a “cold start” and typically adds anywhere from a few hundred milliseconds to several seconds of latency, a delay that may be noticeable to users. Subsequent requests to a warm container, however, are handled without such delay. Providers are actively addressing the issue: Azure’s Premium plan lets customers keep instances pre-warmed to eliminate cold starts, and in 2019 AWS launched Provisioned Concurrency, which keeps a configured number of execution environments initialized and ready to respond.

Final Thoughts

While a serverless approach won’t work for every scenario, it can certainly free up resources and provide greater flexibility in the right context. At IOD, we aim to bring the same level of freedom and flexibility to your tech content by taking care of the heavy lifting, research, and writing.

To maximize your reach with quality, expert-based content, contact us today.
