Positional's Story with StormForge

Balancing Cost with Reliability to Scale Applications from Design to Launch


Key Results


75% CPU resource savings achieved by automatically rightsizing


46% overall cost savings achieved by automatically rightsizing


Countless reliability issues averted by identifying that memory was underprovisioned

The Challenges

Positional required a Kubernetes platform capable of scaling to thousands of customers quickly and cost-efficiently. As a young startup with a small team, they didn’t have a lot of engineers or time to invest in decoding Kubernetes complexities. 

In preparation for the public launch of their content marketing SaaS platform, which runs on Amazon EKS, co-founder and CTO Matthew Lenhard knew that their infrastructure was not production ready. Cloud costs were already rising, and based on how the infrastructure was configured, he knew they would run into performance issues. 

The majority of their EKS cluster was hand-tuned, with workloads running without any CPU or memory requests set. Knowing the performance risk of this setup, Lenhard made the cluster much larger than needed to account for peak traffic. While he couldn’t afford to let cloud costs keep increasing, he also needed to meet customers’ availability and latency needs.

With Optimize Live, “I can save myself a bunch of time, and we can wrap up everything we need on the infrastructure side quicker so that we can get back to focusing on products and the things that are actually going to drive value for our customers."

Matthew Lenhard
Co-founder and CTO

How Positional Balanced Savings with Reliability Using Optimize Live


Automatically set optimal CPU and memory requests, saving countless hours of manual labor and avoiding the cost of hiring another FTE or running a significantly larger cluster to safeguard performance


Increased the impact of their Karpenter deployment by setting the optimal CPU and memory requests using StormForge


Ran the minimum number of nodes with Karpenter by sizing pods optimally with Optimize Live

Straight from the Source

See what Matt Lenhard had to say about Positional’s experience with StormForge and Optimize Live’s continuous rightsizing capabilities

Positional had been in private beta for a year, during which their app load was fairly predictable. They knew how many customers they were onboarding at a time, so they understood the capacity required. That meant, “We didn't invest a ton of time thinking through scalability,” Lenhard said.

As they geared up for public launch, “All of that goes out the window very quickly.” 

They also had an eye toward a future product launch that involved ingesting a lot of customer data, so they expected more load spikes that would impact their availability. At that point, they needed to start taking their infrastructure scalability and availability more seriously.

When Lenhard started looking for options to address his challenges, he connected with the StormForge team and got a crash course in Kubernetes optimization best practices. They helped him envision the ideal setup and understand what would meet his needs yet wouldn't “take forever to get to.”

With this knowledge, he understood how to “balance the benefits of the availability [with] the cost optimization, alongside not spending too many resources on getting all of this set up. Which, for a small team like us, is super important.”

That led him to use Karpenter alongside StormForge.

“There's kind of the missing piece in between Karpenter and your workloads, which is, ‘How do you set up these requests and limits for your deployments?’ That was a hard question for us because we didn't really know. I mean, you could pipe everything into a Grafana dashboard and stare at it for a while and try to think, okay, ‘Here's our 99th percentile of CPU, and this is what costs will look like.’”
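As a rough sketch of the manual approach Lenhard describes (not Positional’s actual tooling or StormForge’s method), deriving a CPU request from the 99th percentile of observed usage might look something like the snippet below; the function name, sample values, and 20% headroom are all illustrative assumptions.

```python
import math

def p99_cpu_request(samples_millicores, headroom=1.2):
    """Illustrative only: pick a CPU request by taking the 99th-percentile
    observed usage sample (in millicores) and adding a safety margin.
    A rough stand-in for the per-workload analysis Optimize Live automates."""
    ordered = sorted(samples_millicores)
    idx = min(len(ordered) - 1, math.ceil(0.99 * len(ordered)) - 1)
    return int(ordered[idx] * headroom)

# Hypothetical usage samples scraped from a dashboard for one deployment.
usage = [120, 135, 150, 180, 210, 250, 400, 95, 160, 175]
print(p99_cpu_request(usage))  # -> 480, i.e. roughly a 480m CPU request
```

Doing that math by hand for every deployment, and keeping it current as traffic changes, was exactly the toil he wanted to avoid.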

“StormForge basically handled all that for us.” 

He realized that, “I can save myself a bunch of time, and we can wrap up everything we need on the infrastructure side quicker so that we can get back to focusing on products and the things that are actually going to drive value for our customers, and ultimately us.”

“The cool thing about StormForge is it basically just did all this for me, gave me this nice little patch file, and that's it. I can just hit apply, and it's off to the races.”

“It just really made it easy to handle both sides of things — the workload sizing and then [with Karpenter] the node scaling as well.”

“It used to be more of a balancing act between [cost and availability], but with Karpenter and StormForge, it's less of that because you kind of get the best of both worlds where you can stay online, stay available, without worrying too much about unused or over-provisioned resources that are going to eat up your AWS bill.”
