With the adoption of public cloud services on the rise and technical resources such as servers far from sight, companies are forced to address the elephant in the room: How can they manage the cloud costs of day-to-day operations? Or, more specifically, how can they keep costs from spiraling out of control?
From a business point of view, several benefits have been driving organizations to adopt the public cloud, such as enhanced capacity planning, massive economies of scale from companies like Amazon Web Services (AWS), the ability to trade upfront capital investments (CapEx) for monthly operating expenses (OpEx), and, above all, the ability to truly focus on their business rather than running and maintaining data centers.
As a market leader in the public cloud space, AWS has paved the way for today’s digital transformation and offers multiple mechanisms for businesses to innovate while keeping costs under control. Yet, those tools and processes are still quite unclear, or even unknown, to many business leaders.
To better understand cloud costs, let’s start by examining how AWS pricing actually works.
How AWS Pricing Works
From the very beginning, AWS has been quite transparent about how their pricing works and how customers can take advantage of it to gain better cost efficiencies. Architects can design systems and optimize costs by picking cloud services that match their usage needs while still having the option to expand later.
With AWS’ on-demand and pay-as-you-go pricing model, customers can get exactly what they need on a per-hour basis (or even per-second in some cases) while still having at their disposal a reservation-based payment model for long-term and predictable workloads.
The AWS pricing model, as described in AWS' own whitepaper, follows four key principles that help customers understand cloud cost best practices and avoid common pitfalls. We'll take a look at each of these principles below.
Understand the Fundamentals of Pricing
Every new cloud customer should first learn that there are three aspects that drive costs when using AWS: compute, storage, and outbound data transfer. The weight of each of these will vary according to your product and pricing model.
Compute usage is typically charged per hour, while storage is usually billed per gigabyte stored per month. As for data transfer, with a few exceptions, customers are not charged for inbound data transfers or for transfers between services within the same region. This means that you usually don't pay for data going into your AWS account and mainly have to worry about data going out of it, e.g., traffic served to the internet.
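To make these three cost drivers concrete, here is a minimal sketch of a monthly bill estimate. All rates below are hypothetical placeholders, not real AWS prices, which vary by service and region.

```python
# Rough monthly cost estimate from the three main AWS cost drivers.
# All rates are HYPOTHETICAL placeholders, not published AWS prices.

def estimate_monthly_cost(compute_hours, storage_gb, egress_gb,
                          hourly_rate=0.10, gb_month_rate=0.023,
                          egress_gb_rate=0.09):
    """Sum the three main cost drivers for one month."""
    compute = compute_hours * hourly_rate   # billed per hour of usage
    storage = storage_gb * gb_month_rate    # billed per GB-month stored
    transfer = egress_gb * egress_gb_rate   # only outbound data is billed
    return round(compute + storage + transfer, 2)

# One instance running all month (~730 h), 500 GB stored, 200 GB out:
print(estimate_monthly_cost(730, 500, 200))
```

Even a back-of-the-envelope model like this makes it clear which of the three drivers dominates your bill, which is where optimization effort should go first.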
Start Early with Cost Optimization
Don’t wait until your cloud workloads are in production to optimize costs. Customers that come from an on-premises environment often fall into this trap. Cloud adoption is not a mere technical exercise. It requires a cultural change that starts from the very beginning by looking at how cloud costs are planned and allocated.
Decision makers need full visibility of running costs, and mechanisms to control these should be in place early on. This drives organizations to optimize their costs frequently and with less effort. Also, having such a cost-efficient strategy from the start will give your team peace of mind as your cloud environment grows and becomes more complex.
Maximize the Power of Flexibility
You can do this by leveraging cloud-native capabilities, such as launching resources on-demand and turning them off when they’re not needed, instead of keeping services running 24/7. For predictable workloads that need to be constantly running, customers can still leverage a reservation model with a long-term commitment for extra savings.
This cloud elasticity can save a tremendous amount of money while still giving you the capacity for near-unlimited growth. Also, by using and paying only for the resources you need, you can focus more resources on feature development and innovation.
Choose the Right Pricing Model for the Job
In AWS, the same product can have multiple pricing models, so it's important to research the characteristics of each and choose the best fit for your workload. Pricing models vary from on-demand (pay-as-you-go with no long-term commitment or upfront costs) and dedicated instances (for instances on dedicated hardware) to spot (deeply discounted hourly rates for spare capacity that AWS can reclaim on short notice) and reservations (committing to long-term capacity in exchange for a sizable discount).
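A simple way to compare on-demand and reserved pricing is to compute the utilization break-even point: the number of hours per year above which the reservation becomes cheaper. The prices below are hypothetical, purely for illustration.

```python
# Break-even utilization: on-demand vs. a 1-year all-upfront reservation.
# Both prices are HYPOTHETICAL, for illustration only.

HOURS_PER_YEAR = 8760

def break_even_hours(on_demand_hourly, reserved_yearly):
    """Hours per year above which the reservation is the cheaper option."""
    return reserved_yearly / on_demand_hourly

hours = break_even_hours(0.10, 500.0)
print(f"Reservation wins above {hours:.0f} h/yr "
      f"({hours / HOURS_PER_YEAR:.0%} utilization)")
```

If your workload runs above that utilization level, reserve it; if it runs in short bursts well below it, on-demand or spot is likely the better fit.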
Getting Costs Under Control: Tips & Tricks
Once you understand AWS’ pricing principles and use them as a guideline, you can then learn how to make the best use of AWS’ built-in tools. There are a few interesting tricks here that business leaders can implement to help get their cloud costs under control.
Consolidated Billing and Reserved Resources
The AWS pricing principles suggest you reserve capacity for predictable workloads and gain substantial discounts. But how does this work in practice? The mechanics are fairly simple: you commit to using a certain type of resource (e.g., a certain number of EC2 M5 instances in the eu-west-1 region) for a certain period of time (minimum of one year) and receive a discount of up to 75%. The exact amount of the discount depends on various factors, such as the resource type, region, amount of upfront payment, and number of years.
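The effective discount of a reservation is easy to compute once you know the upfront payment and the discounted hourly rate. The figures in this sketch are hypothetical placeholders, not published AWS prices.

```python
# Effective 1-year discount of a partial-upfront reservation vs. on-demand.
# All figures are HYPOTHETICAL placeholders, not published AWS prices.

HOURS_PER_YEAR = 8760

def effective_discount(od_hourly, upfront, ri_hourly):
    """Fraction saved over one year by reserving instead of paying on-demand."""
    od_cost = od_hourly * HOURS_PER_YEAR
    ri_cost = upfront + ri_hourly * HOURS_PER_YEAR
    return 1 - ri_cost / od_cost

d = effective_discount(od_hourly=0.10, upfront=300.0, ri_hourly=0.03)
print(f"{d:.0%}")  # effective savings over one year
```

Running this kind of calculation per resource type makes the upfront-payment trade-off visible before you commit.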
This does not mean that a specific resource has to always be running. Since the reservation is for a certain resource type, not a specific deployed resource, you are free to stop, terminate, or re-deploy that resource as much as you want as long as you keep using the same type.
When customers have multiple AWS accounts, one useful approach is to enroll every account in the same AWS Organization and enable consolidated billing. This makes monthly operational management easier, and it also lets the reserved capacity you purchased apply across any of your AWS accounts, making it significantly more flexible.
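Conceptually, consolidated billing pools reserved capacity across accounts. The sketch below is a deliberate simplification (AWS applies shared reservations automatically, and the account names and hours are hypothetical), but it shows why a shared pool covers more usage than per-account reservations would.

```python
# Conceptual sketch: a pooled reservation covering usage across accounts
# under consolidated billing. Greatly simplified; AWS applies shared
# reservations automatically. Account names and hours are hypothetical.

def apply_shared_reservation(usage_hours, reserved_hours):
    """Greedily cover each account's usage from a shared pool of reserved hours."""
    remaining = reserved_hours
    covered = {}
    for account, hours in usage_hours.items():
        take = min(hours, remaining)
        covered[account] = take  # hours billed at the reserved rate
        remaining -= take
    return covered, remaining

usage = {"prod": 700, "staging": 200, "dev": 150}
covered, leftover = apply_shared_reservation(usage, reserved_hours=800)
print(covered, leftover)
```

With separate accounts, a reservation sized for one account's usage would sit idle whenever that account scaled down; the shared pool lets other accounts absorb it.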
In addition, with the introduction of Savings Plans, customers can commit to a consistent amount of usage (measured in dollars per hour) for one or three years and receive discounts that apply flexibly across instance families, sizes, and regions. AWS also surfaces recommendations for such commitments based on your historical usage.
Billing Alarms & Cost Explorer
When it comes to cloud costs, the worst situation is when you receive an unexpected invoice at the end of the month for used resources that did not bring any business value.
From an operational point of view, it’s important to not get caught by surprise. Therefore, customers must have ways to receive notifications and react swiftly when something unexpected happens.
In AWS, customers can leverage a feature named Billing Alarms, which allows you to set up an alarm to notify you of custom-defined conditions. A common scenario is to configure the alarm to send an email notification in case the monthly costs are predicted to go above a certain threshold based on the current usage pattern. This enables you to quickly react and troubleshoot the cause of the sudden increase without waiting until the end of the month.
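The forecasting behind such an alarm can be as simple as a linear projection of month-to-date spend. The sketch below illustrates the idea; the spend figures and threshold are hypothetical.

```python
# Linear month-end spend projection, the kind of forecast a billing
# alarm can act on. Spend figures and threshold are HYPOTHETICAL.
import calendar
from datetime import date

def forecast_month_end(spend_to_date, today):
    """Project month-end spend from the average daily burn rate so far."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return spend_to_date / today.day * days_in_month

projected = forecast_month_end(spend_to_date=120.0, today=date(2024, 5, 10))
threshold = 300.0
if projected > threshold:
    print(f"ALERT: projected ${projected:.2f} exceeds ${threshold:.2f}")
```

An alert that fires on day 10 gives you three weeks to investigate, instead of a surprise on the invoice.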
For troubleshooting both current and past expenses, AWS customers can use Cost Explorer, a built-in UI tool that provides a visualization and filtering of costs based on different factors, such as service, tagging, and time period. The most popular filtering method is tagging. This is made possible by having your development team tag AWS resources with custom key/value pairs such as use case, owner, department, or cost center.
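A tag-based breakdown like Cost Explorer's "group by tag" view boils down to summing line items per tag value. The records and the `cost-center` tag key below are hypothetical sample data.

```python
# Tag-based cost breakdown, in the spirit of Cost Explorer's
# "group by tag" view. Records and tag key are HYPOTHETICAL samples.
from collections import defaultdict

def costs_by_tag(records, tag_key, untagged="(untagged)"):
    """Sum costs per value of the given tag; missing tags go to one bucket."""
    totals = defaultdict(float)
    for r in records:
        totals[r.get("tags", {}).get(tag_key, untagged)] += r["cost"]
    return dict(totals)

records = [
    {"service": "EC2", "cost": 40.0, "tags": {"cost-center": "web"}},
    {"service": "S3",  "cost": 10.0, "tags": {"cost-center": "data"}},
    {"service": "EC2", "cost": 25.0, "tags": {"cost-center": "web"}},
    {"service": "RDS", "cost": 30.0},  # untagged resources surface here
]
print(costs_by_tag(records, "cost-center"))
```

Note how the untagged bucket makes gaps in your tagging discipline immediately visible, which is exactly why consistent tagging matters.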
For increased awareness, customers can also display billing information using CloudWatch metrics and dashboards. This enables a customized visualization of cost usage, correlated with system status (e.g., number of requests served).
These tools make it much easier for decision makers to track and understand how their cloud investment is being spent.
Engineering Teams in the Decision-Making Process
It is often said that when using cloud computing, your system scales with a credit card. While not wrong, it is crucial to know when and why that scaling occurs.
If customers are unaware of different product pricing and how volume affects them, costs can easily skyrocket. This can be due to the system responding to an increase in demand or a simple development mistake.
Engineering teams are right at the center when it comes to optimizing costs and utilizing the right type of technical resources. However, one common pitfall is choosing resources based purely on their technical characteristics. The total cost of ownership (TCO) needs to be taken into account for each component while designing the system. The TCO includes the technical specifications, pricing model, and operational costs.
AWS makes it easier for engineering teams to estimate the cost of their resource choices with its Pricing Calculator tool. This lets teams weigh the pros and cons of their choices and choose the AWS services that suit them best.
One important consideration to keep in mind is that while managed or serverless services might look more expensive than a DIY approach built on EC2 virtual instances, the human cost of operating the DIY setup often far exceeds the apparent savings.
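This trade-off becomes obvious once operational labor is priced into the comparison. The monthly figures, ops hours, and labor rate below are hypothetical assumptions, not real quotes.

```python
# Simple monthly TCO comparison: managed service vs. DIY on EC2.
# All figures are HYPOTHETICAL assumptions, not real quotes.

def monthly_tco(infra_cost, ops_hours, hourly_labor_rate=75.0):
    """Infrastructure bill plus the human cost of operating it."""
    return infra_cost + ops_hours * hourly_labor_rate

managed = monthly_tco(infra_cost=400.0, ops_hours=2)   # mostly hands-off
diy     = monthly_tco(infra_cost=250.0, ops_hours=20)  # patching, scaling, on-call
print(f"managed=${managed:.0f} diy=${diy:.0f}")
```

The infrastructure line item favors DIY, but once engineering time is counted, the managed option can come out well ahead. That is the essence of thinking in TCO rather than sticker price.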
Software engineering teams working in DevOps should continuously be on the lookout for ways to improve their operations. When talking about specific workloads, this eagerness to improve and adopt best practices should extend to all stakeholders. Bringing everyone to the table and performing frequent assessments, such as AWS Well-Architected Reviews, can pave the way for greater cost-efficiency as well as an increase in innovation.
Therefore, engineering teams should be an active part of the decision-making process with business leaders. Only by embracing business objectives as a common goal, and maximizing the potential for digital transformation that cloud technologies provide, can businesses truly thrive.
As businesses move forward in their digital transformation and execute their technology strategy, using a public cloud provider such as AWS gives a tremendous amount of speed and flexibility to accomplish their business goals.
For anyone using cloud services, it's critical to understand and control how money is being spent—making sure that only the resources you need are in use and that you are getting the most from each dollar spent.
With near-unlimited resources just an API-request away, it is fairly easy to go overboard without the proper guidance and boundaries in place. Therefore, make sure to have the proper people and structure in place (e.g., architecture and cloud steering group) that can manage and optimize your cloud investment and usage.