This post was written by Asim Razzaq, CEO of Yotascale
At the end of the month, when it’s time to pay the cloud bill, cloud hosting providers will tell you how much you owe, down to the last penny. However, predicting and reducing next month’s costs — or even the costs accrued during a fiscal year — becomes more complex and challenging. The reason is that workloads deployed to public cloud platforms are substantially more difficult to review, audit and predict than those in a private data center.
When it comes to cloud cost optimization for Amazon Web Services (AWS), the platform's built-in tools go a long way for basic usage and can point you in the right direction. As useful as these tools are, however, they usually fall short of handling complex use cases, especially when you are managing multiple accounts or deploying containerized applications.
Unfortunately, organizations with these more complex environments are underserved by the native cost visibility and cloud cost optimization tools available from their cloud vendor. Following are some tips for AWS cloud cost optimization, as well as some potential pitfalls to watch out for.
Gain cost visibility
Understanding your cost profile is paramount to saving money on cloud services. A few questions to consider: Who is generating spend? Where is it allocated? And what business value are these services connected to? Clear, project-level data about where your spend is concentrated will enable you to make consequential improvements.
AWS offers a built-in mechanism for tracking spend: tagging. Tags are key-value pairs supported by a wide range of AWS services and used for everything from access control to cost attribution. Properly tagging every instance in the infrastructure can be complex and time-intensive, especially in large environments, but it is manageable, and it is critical to gaining visibility into your AWS infrastructure.
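Because cost attribution breaks down wherever tags are missing, a simple audit script can catch gaps early. The sketch below is illustrative only: the resource records and the required tag keys are hypothetical, not pulled from a real account.

```python
# Flag resources that are missing required cost-attribution tags.
# The resource list and required tag keys below are illustrative.

REQUIRED_TAGS = {"team", "project", "environment"}

def missing_tags(resources):
    """Return {resource_id: sorted list of missing required tag keys}."""
    report = {}
    for res in resources:
        tag_keys = {t["Key"] for t in res.get("Tags", [])}
        gaps = REQUIRED_TAGS - tag_keys
        if gaps:
            report[res["ResourceId"]] = sorted(gaps)
    return report

resources = [
    {"ResourceId": "i-0abc", "Tags": [{"Key": "team", "Value": "payments"},
                                      {"Key": "project", "Value": "checkout"},
                                      {"Key": "environment", "Value": "prod"}]},
    {"ResourceId": "i-0def", "Tags": [{"Key": "team", "Value": "search"}]},
]

print(missing_tags(resources))
```

In a real environment, the resource list would come from an inventory export or a resource-listing API rather than a hardcoded list.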
For smaller organizations with less complex environments, AWS Cost Explorer can analyze tagged resources, break down costs by service, and offer forecasts and savings plan recommendations. However, if ten teams each run EC2 instances in the same department-wide account, its reports will lump every team's instances together.
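Once tags are in place, the per-team breakdown that an account-level report can't show is a straightforward regrouping of line items. A minimal sketch, with made-up line items standing in for entries from a cost and usage export:

```python
from collections import defaultdict

def spend_by_tag(line_items, tag_key="team"):
    """Sum costs per value of a tag, bucketing untagged spend separately."""
    totals = defaultdict(float)
    for item in line_items:
        owner = item.get("tags", {}).get(tag_key, "untagged")
        totals[owner] += item["cost"]
    return dict(totals)

# Illustrative line items; a real report would have thousands of these.
line_items = [
    {"service": "EC2", "cost": 120.0, "tags": {"team": "payments"}},
    {"service": "EC2", "cost": 80.0,  "tags": {"team": "search"}},
    {"service": "S3",  "cost": 15.5,  "tags": {}},
]

print(spend_by_tag(line_items))
```

The "untagged" bucket doubles as a health metric: the larger it is, the less trustworthy any per-team number becomes.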
AWS Cost Explorer is less usable for modern cloud architectures, as it doesn't properly support cost allocation for containers and Kubernetes. If you need container support, or you have multiple accounts, you should explore a third-party cloud cost management tool to gain the best visibility into your cloud spend.
Cost-based decisions are easier when the architecture designer has concrete information about cloud spend. That said, focus on the product first to avoid premature cost optimization: adopting ill-suited tools because they are less costly can be much more expensive in the long run.
Breaking down spend by project, rather than by resource, empowers engineers to create and achieve key performance indicators (KPIs) and other targets around minimizing application cost. Developers working to reduce the expense of running an application often focus on several factors. Serverless functions, like AWS Lambda, are often cheaper than instances for a wide variety of use cases. Managed databases are often a major cost center, and there are a plethora of options with various capabilities.
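The serverless-versus-instance question often comes down to simple arithmetic on request volume and duration. The back-of-envelope sketch below uses commonly cited Lambda-style rates (per GB-second plus per request), but treat all of the numbers as illustrative placeholders rather than current AWS pricing:

```python
def lambda_monthly_cost(invocations, avg_ms, mem_gb,
                        price_per_gb_s=0.0000166667, price_per_req=0.0000002):
    """Rough monthly serverless cost: GB-seconds plus a per-request charge.

    Default rates are illustrative, not a quote of current AWS pricing.
    """
    gb_seconds = invocations * (avg_ms / 1000.0) * mem_gb
    return gb_seconds * price_per_gb_s + invocations * price_per_req

def instance_monthly_cost(hourly_rate, hours=730):
    """Cost of an always-on instance over an average 730-hour month."""
    return hourly_rate * hours

# Hypothetical workload: 5M requests/month at 120 ms and 512 MB,
# versus a small instance at an illustrative $0.023/hour.
lam = lambda_monthly_cost(5_000_000, 120, 0.5)
ec2 = instance_monthly_cost(0.023)
print(f"serverless ~ ${lam:.2f}/mo, instance ~ ${ec2:.2f}/mo")
```

At this traffic level the function wins easily; as invocations climb, the per-request and per-GB-second charges eventually overtake the flat instance rate, which is exactly the tradeoff to model before committing.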
One way to save on server costs is to build a predictable application that can run on Reserved Instances, which cost much less than On-Demand Instances. The challenge with Reserved Instances is right-sizing them at the start of the service plan year to match anticipated usage. Over-reserving can leave idle reservations that might have been better served by Spot or On-Demand Instances.
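Whether a reservation pays off depends on how much of the term the workload actually runs, which reduces to a one-line break-even check. The rates below are hypothetical, not real AWS prices:

```python
def ri_breakeven_utilization(on_demand_hourly, ri_effective_hourly):
    """Fraction of the term a workload must run for a reservation to win.

    Below this utilization, paying on-demand (or using Spot) is cheaper
    than the amortized reservation cost.
    """
    return ri_effective_hourly / on_demand_hourly

# Illustrative: on-demand at $0.10/hr vs. a reservation amortizing
# to $0.06/hr over its term.
util = ri_breakeven_utilization(0.10, 0.06)
print(f"break-even utilization: {util:.0%}")
```

In this hypothetical, a workload running less than 60% of the time would be over-reserved, which is the idle-reservation trap described above.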
When you are investigating Reserved Instances or Savings Plans, AWS Trusted Advisor can help identify purchase opportunities. Note, however, that in more complex scenarios, especially Kubernetes environments, Trusted Advisor lacks the visibility needed to make recommendations as accurate as a third-party tool's.
Right-size your instances
According to Forbes, as much as 30% of cloud spend is wasted on idle or over-provisioned resources. Locating and shutting down these instances is a near-impossible task without detailed, accurate reports on your environment.
For smaller, less complex environments, AWS Compute Optimizer provides right-sizing recommendations for EC2 instances, EBS volumes and Lambda functions by monitoring logs and metadata to determine which instances are over- and under-provisioned. Compute Optimizer then reports the potential savings for each recommendation, but unfortunately it doesn't explain why it is making those recommendations, which makes their impact on production environments difficult to assess.
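The kind of signal a right-sizing tool acts on can be approximated with a crude utilization heuristic. A sketch, assuming CPU-percentage samples have already been exported somewhere; the 20% and 80% thresholds are arbitrary choices for illustration, not anything AWS publishes:

```python
def classify_instance(cpu_samples, low=20.0, high=80.0):
    """Classify an instance from its CPU utilization samples (percent).

    Over-provisioned if peak CPU never reaches `low`; under-provisioned
    if more than half the samples exceed `high`; otherwise right-sized.
    Thresholds are illustrative, not AWS defaults.
    """
    peak = max(cpu_samples)
    busy = sum(1 for s in cpu_samples if s > high) / len(cpu_samples)
    if peak < low:
        return "over-provisioned"
    if busy > 0.5:
        return "under-provisioned"
    return "right-sized"

print(classify_instance([3, 5, 8, 4, 6]))       # mostly idle
print(classify_instance([85, 92, 88, 90, 40]))  # sustained high CPU
```

A production version would use longer windows and percentiles rather than a raw peak, but even this toy version shows why the "why" matters: the classification is only as good as the thresholds and the sampling window behind it.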
Essentially, cloud cost optimization comes down to managing tradeoffs, such as cost versus speed, or flexibility versus unit economics. The right tool can help manage those tradeoffs and detect, attribute, optimize and forecast costs. Tradeoffs begin before cloud services are even provisioned. For some operations, it might not be worth spending valuable engineering time to trim a certain percentage off a low monthly AWS bill when those expensive engineer hours could be spent shipping features. However, for larger organizations with heavily used cloud resources, cutting the cost of frequently run processes by mere pennies can yield extensive dividends.
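The engineering-time tradeoff above can be made concrete with a payback calculation. All of the rates and hours here are hypothetical:

```python
def payback_months(engineer_hours, hourly_rate, monthly_savings):
    """Months before a cost-optimization effort pays for itself."""
    if monthly_savings <= 0:
        return float("inf")
    return (engineer_hours * hourly_rate) / monthly_savings

# 40 engineer-hours at an illustrative $150/hr to save $200/month:
print(payback_months(40, 150.0, 200.0))   # 2.5 years to break even
# The same effort saving $5,000/month pays back in just over a month:
print(payback_months(40, 150.0, 5000.0))
```

If the payback period exceeds the expected lifetime of the workload, the optimization isn't worth doing, which is exactly the "ship features instead" case.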
Generating insight and general visibility is a common trend through the tools AWS offers, but they have some notable limitations. First, engineering teams need to spend valuable time learning and configuring this wide array of services. Second, they need to figure out how to read and share account-level information at team- and project-level granularity.
All in all, understanding configuration best practices, as well as the tools and services built into the platform — and putting them to use — is the best first step to take toward control and visibility of cloud-deployed applications.
Asim Razzaq is the CEO and founder of Yotascale. Earlier in his career, Razzaq was Senior Director of Platform Engineering (Head of Infrastructure) at PayPal, where he was responsible for all core infrastructure processing payments and logins. He led the build-out of the PayPal private cloud and the PayPal developer platform, which generated billions of dollars in payments volume. He has held engineering leadership roles at early- and mid-stage startups and at large companies including eBay and PayPal, with teams focused on building cloud-scale platforms and applications.