On-Demand

Survey Findings: Understanding the Impact of Application Cloud Waste

Rich: All right, why don’t we go ahead and get started. Thank you everyone for joining today’s webinar, where we’re going to be talking about the survey results around understanding the impact of cloud waste. We all know that cloud waste is a big problem in the industry. There are statistics like this that have been out there for a while, right? Companies overall waste about 17 billion dollars a year in cloud spend on idle and excess resources. That’s obviously a big number, but we wanted to find out a little bit more about the impact cloud waste has on individual enterprises, what people within organizations actually think about cloud waste, and how they prioritize it. So we put together a survey, and the questions we were hoping to answer were: Is cloud waste a problem for individual enterprises? Do people within IT organizations feel like cloud waste is a problem, and if so, what is the impact of that problem? What effect does it have on your day-to-day operations and your business outcomes? What are the causes of it? What do people think are the primary reasons for the amount of cloud waste that they have? And finally, is it a priority to fix, to address cloud waste? So that’s what we’re going to be going through today on the call: the results of that survey. I think there are some interesting findings. My name is Rich Bentley. I’m the Senior Director of Product Marketing at StormForge, based up here in Ann Arbor, Michigan, and I’ll be walking through the results of the survey on the call today. I’m going to be joined by Cody Crudgington, who leads our consulting team at StormForge. Cody works day to day with folks in the industry that are migrating to the cloud and going cloud native, helping those organizations really optimize the performance and efficiency of their applications. 
So Cody will be able to bring the perspective of what’s actually going on out there and what customers are seeing as they try to optimize their cloud applications.

So a couple of housekeeping things before we actually dive in to the survey results. First of all, if you have questions please enter them in the question tab on the Zoom interface and we will try to get to as many as we can at the end of the call today.

So with that, let’s dive into the survey results. The first thing I wanted to start with is one that I thought was maybe a little bit surprising, but it’s probably the best news in the survey: we asked folks how predictable their company’s cloud spend is. We wanted to find out whether you know, on a month-to-month basis, what your cloud spend is going to be, or whether it really varies a lot and is hard to predict. The good news is that in most cases cloud spend is fairly predictable, either highly predictable or somewhat predictable in the vast majority of cases. So that’s all good, right? You kind of know what to expect every month, what your bill is going to be, so you can plan around that. The bad news is that it’s increasing. With a few exceptions, the vast majority of companies are expecting their cloud spend to go up over the next 12 months, either by a lot in the case of 32% of the respondents, or at least somewhat in the case of 44% of respondents. Only 6% of the people that responded to the survey felt like their cloud spend was going to go down over time. Not surprising, and I think what we would expect, but when we get into the problem of cloud waste and spending, that problem is only going to get worse over time as we go forward, so something important to keep in mind.

The other part that’s probably the really bad news is that the folks we surveyed recognize that a very significant percentage of what they spend on cloud resources today is actually wasted and unnecessary. The average across all the responses to the survey was that about 48% of the money their organization spends on cloud resources is wasted. That’s a huge number. It actually is not surprising; we have seen that to be the case, and we’ll get more into this, but we weren’t sure if people recognized it was a major issue or not. It seems that the vast majority of people do recognize that a lot of the money they spend on cloud is actually wasted on unused or idle resources and really not needed. The other thing we wanted to make sure we understood was whether the people that responded to the survey felt confident that they actually knew how much of their cloud spend was wasted. Again, maybe a little bit surprising, but most of the survey respondents were pretty confident that they knew they had a lot of cloud waste: 35% were extremely confident, and another 49% were at least somewhat confident. So I want to bring Cody into the conversation at this point. Cody, based on your experience working with customers, could you comment on whether this is what you’re seeing as well? Do people generally understand the level of cloud waste that they have?

Cody: Yeah, I think they do. With most of the customers we work with, and even in my past lives, you can easily tell, through monitoring tools or whatever you’re using for your environment, that there’s a big gap between what you provision and what you’re actually using. It’s almost like a safety net for most people, but again, you’re not using those resources effectively at that point, right?

Rich: Yeah, so I think in a lot of ways there’s a couple of pieces of good news so far, right? One is that cloud spend is predictable; you know what you’re going to spend. The other is that at least you’re aware of the problem; people know that they’re probably spending more than they need to. We’ll get into the causes of that in a couple of minutes here. One thing we wanted to do is validate the results of the survey, which are really perceptions, right? These are people within IT organizations sharing what they believe to be the case, compared with what is actually the reality out there. To do that, we took a look at a nice piece of research that Datadog published a few months back. Datadog is a SaaS observability provider, so they have a lot of data from actual customers’ actual production usage, and this is what they found in terms of the actual usage of requested CPU and memory within cloud native environments. The charts are maybe a little bit confusing at first, but if we look at the CPU chart for example, the percentage of containers is what’s on the y-axis, the vertical axis, and each bar groups containers by how much of the CPU they requested they’re actually using. In other words, 30% of the containers that are out there running in production are actually using less than 10% of the CPU that they requested. If you add up the first three bars on each of these charts, it shows that about half of the containers out there running in production are using less than 30% of the requested CPU or the requested memory. So a huge amount of over-provisioning, idle resources, things like that. Again Cody, any additional comments on how this relates to what you’re seeing out there?

Cody: Not really, I mean this is pretty consistent with how the industry is allocating and using its resources. Again, you want to use what you’ve requested or provisioned as effectively and efficiently as possible, and it’s just not happening across the board.
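To make the chart arithmetic concrete, here is a toy version of the "add up the first three bars" calculation. Only the first bucket's 30% figure comes from the discussion above; the other bucket values are hypothetical, chosen so the shares sum to one:

```python
# Hypothetical utilization buckets in the style of the Datadog chart
# discussed above: share of running containers whose actual CPU usage
# falls into each band of their requested CPU. Illustrative values only.
buckets = {
    "0-10%": 0.30,   # 30% of containers use under 10% of requested CPU
    "10-20%": 0.12,  # made-up value
    "20-30%": 0.08,  # made-up value
    "30-50%": 0.20,  # made-up value
    "50-100%": 0.30, # made-up value
}

# "Adding up the first three bars": the share of containers using less
# than 30% of the CPU they requested.
under_30 = sum(share for _, share in list(buckets.items())[:3])
print(f"Containers using <30% of requested CPU: {under_30:.0%}")  # 50%
```

With these made-up buckets the first three bars sum to 50%, matching the "about half of the containers" reading Rich describes.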

Rich: Yeah, for sure. So we looked at what people perceive in terms of their cloud spend and their cloud waste, and the next thing we wanted to do is get into the impact of that. What does it actually mean to you as an enterprise IT professional? So we asked a few questions on what the biggest impacts are. We provided a few options here that we thought were the most likely impacts people would be thinking about, and no surprise, the number one impact is that it reduces your profitability. If you’re spending more on cloud than you need to, then your organization and your company as a whole is going to be less profitable, especially in those organizations where cloud spend is a significant element of your cost structure. So definitely reducing profitability is a big one. The second one is just making IT look bad to the rest of the organization. I think everybody in an IT role is familiar with that feeling that people look at IT as a cost center, as not being strategic to the organization, and everyone wants IT to be perceived as strategic and adding value to the organization, rather than just a cost center. So that’s an important one as well. Then other impacts were around being less competitive; if you’re spending more on your cloud resources than your primary competitors are, that’s going to put you at a disadvantage. There’s staying within budget, kind of related to the profitability one. Then lower down on the list was not being able to hire the resources you need; if you’re constrained in terms of budget and spending, a lot of times that may result in hiring freezes or limitations on being able to hire. And then being bad for the environment is pretty far down the list, which is maybe understandable at an individual organizational level. 
You may have other priorities that are higher than that, but at a macro kind of industry level, we know that the amount of cloud waste out there really does impact the environment and greenhouse gas emissions as well. 

Let me pause there. Cody, you’ve worked with a lot of customers on these kind of challenges. What are the types of impacts you see? Are these representative? Any good examples that you can kind of bring to the table? 

Cody: Yeah, absolutely. There’s a specific issue that comes to mind with a previous customer, where we had a problem with a stateful deployment. We were doing manual tuning by trial and error. I think there were about eight or so tunable parameters, and our deployment was completely unstable. It took a few sprints for us to get it right. In that time, we were seeing consistent issues in production, so we would spend a lot of time figuring out what was going on in production, fix it in the pipeline, redeploy, et cetera. Eventually we just threw resources at the application. Within six months or so during this time we doubled our cloud spend. Not just because of this application, there were other contributing factors, but about 35% or so, I believe, was attributed directly back to the stateful deployment. And it’s not just that, right? It’s also the time you’re wasting to fix these issues, correcting them in production, making sure you’re meeting your SLAs. With the increased cloud spend, you can tie it directly; you can see how it adds up and costs your organization. It’s not just profitability, it’s also your employees’ time, everything it takes to get it back up and working, so there are lots of factors that play into this.

Rich: Yeah, and we’ll talk through the different causes in a couple of minutes here, but you used the phrase “throwing resources at it,” and I hear that a lot. I think the common response is that if you’ve got a problem, a performance issue, or a reliability issue, the first reaction is to throw resources at it, because that’s the easiest thing to do. It doesn’t take a lot of time or effort, and that may be fine when you’re small and kind of ramping up, but as you scale, that becomes quite a cost to the organization.

Cody: Absolutely, absolutely.

Rich: So let’s talk about the kind of the prioritization of this. So we wanted to know whether cloud waste, whether addressing that and reducing or eliminating cloud waste, was a high priority for the organizations that we surveyed. We found that yes, it was a pretty high priority. In 33% of the cases, they listed it as a very high priority, and then another 43% said it was important, maybe not the highest priority for the organization, but still a pretty high priority. Only 6% said that it’s not a priority at all for those organizations. So obviously it’s something that people feel the pain of. They feel the impact of it and they want to solve it. 

So I want to take just a quick pause here. If we think about what we’ve learned so far from the survey, I think it’s actually really interesting, right? We’ve found that cloud waste is a real problem, right? That people within the enterprise believe it’s a problem. They understand that it’s a costly problem. It has an impact, right? Both in terms of profitability of the organization, but also the reputation of IT and meeting budgets and things like that. It’s going to get worse because everyone’s expecting their cloud spend to grow over the next 12 months, so even though it’s a problem today, it’s going to be a bigger problem 12 months from now. We know it’s a high priority to fix, so if you kind of take all that together, the question that sort of comes immediately to mind is, well why haven’t we fixed it already? If it’s such a big problem and it’s so costly, why haven’t we addressed it? So that’s where we’ll go next when we talk about the causes of cloud waste and kind of what people are perceiving as those causes. I think that will help give us the answers as to why it still is an issue, and then we’ll talk about sort of how to address that. 

So we asked about the biggest causes of cloud waste for your organization, and again we gave some responses that we felt were going to be most likely. What we found is that complexity is the number one culprit for cloud waste. Cloud complexity makes it hard to estimate how many resources are actually needed to effectively run your application. Over-provisioning is number two. This goes back to what you were saying, Cody, about throwing resources at it. I think it’s a common response to over-provision and build more resources into your application than are actually needed, but you’re doing it to play it safe and hopefully make sure you don’t have any performance issues. In other cases, it may be maintaining more environments than you need. You may be standing up development environments or QA environments, and maybe you’re not taking those down when they’re no longer needed; they’re still out there using up resources. The fourth one was developers having no incentive to run apps efficiently. A friend of mine tweeted a while ago that asking developers to decide how many resources their app should have is sort of like telling your kid in a candy store to just take what they need and no more. If you’re a developer, your incentive is to make your app run well, so if you’re not too worried about budget or cost, you’re more likely to over-provision that application and request more resources than you actually need. Farther down on the list were things like the amount of attention or oversight of cloud spend. It’s definitely seen as a cause, but not as big a one. I think most organizations are looking at their cloud spend and examining it, especially when times are tight, and looking at ways to reduce spend. So in a lot of cases they have visibility; they are looking at their cloud spend and making sure they understand it. So those ranked a little bit lower, but let me pause there again, Cody, and get your take on this.

Cody: Yeah, we all know that cloud native architectures can be complex. You can have multiple environments across multiple cloud providers, just a number of moving parts, but whatever your strategy is, we can all agree that the level of complexity since moving to more distributed environments has only increased. I remember when federation and things like that were really starting to take shape during the virtualization days; it was all about being more distributed, eliminating single points of failure, all under this amazing guise that it would make operations’ lives easier. But it’s not true at all. It makes for a better end user experience, but not necessarily a better experience for those who have to maintain it. And if you’re an operations group, you don’t want to get that 2:30 AM PagerDuty call. In my experience, it’s more on the SREs, not the developers, to maintain stability in the environment and avoid that call. So what do they do? They throw resources at it. It’s just the way things have gone for so long, and it’s time to address that. 

Rich: Yeah, for sure. Well, we will get into addressing that in just a minute here, but one other thing: when we talk about the causes of cloud waste, we wanted to ask specifically about Kubernetes. We suspected that cloud complexity was probably going to be a fairly common response as a cause of cloud waste, so we wanted to dive a little bit deeper into Kubernetes and how it contributes to that. I didn’t include the demographics, but the vast majority of people who responded to the survey were using Kubernetes either in production or ramping up toward production. What we found is that in most cases, about 63% if you add up the first two responses, they felt like Kubernetes was a factor in their complexity, which contributed to their cloud waste. So fairly significant, and if we look at why that is, we’ll talk a little bit here about why we see Kubernetes complexity being a challenge. This is a slide we use when we talk about our platform, but think about the amount of configuration needed when you’re deploying an application on Kubernetes: for every container that makes up that application, you’ve got settings like CPU requests and limits, memory requests and limits, and replicas, and when you add all those up and multiply by the number of containers, it’s a lot of different things that you need to configure and set. Then add to that application-specific parameters as well; if it’s a Java app, for example, you may have JVM heap size that you need to configure, garbage collection settings, things like that. All of those settings have an impact on the cost, the performance, and the reliability of running that application, and all of those things are interrelated as well. So if you change one thing, it has an impact on another thing. 
So the number of different combinations that you have when you deploy a Kubernetes application is really approaching infinity at some level, right? There are so many different options that you need to set, and all those settings have an impact on the cost, performance, and reliability of those applications, so it’s a very difficult thing for a person to figure out the best configuration by hand. Cody, does that kind of map to what you’ve seen out there? Anything else you would want to add to that? 

Cody: Yeah, I mean, we’ve said it a couple of times already. It’s pretty typical that you’ll have some idea of your application settings and what they take, but the load the application is actually working with may not line up with those settings, for example your CPU requests or your limits. So it just goes back to using those resources efficiently, so you’re not in this cycle of, again, just throwing resources at it.
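To put a rough number on "approaching infinity," here is a toy calculation. The parameter names mirror the settings discussed (CPU and memory requests and limits, replicas, JVM heap), but the candidate values and the ten-container application are hypothetical:

```python
# Illustrative sketch of why Kubernetes tuning is combinatorial. The knob
# names mirror the settings discussed above; the candidate values per knob
# and the container count are made up for the example.
per_container_params = {
    "cpu_request_millicores": [100, 250, 500, 1000, 2000],
    "cpu_limit_millicores": [250, 500, 1000, 2000, 4000],
    "memory_request_mib": [128, 256, 512, 1024, 2048],
    "memory_limit_mib": [256, 512, 1024, 2048, 4096],
    "replicas": [1, 2, 3, 5, 10],
    "jvm_heap_mib": [128, 256, 512, 1024, 2048],  # app-specific (Java)
}

def configuration_space(params: dict, containers: int) -> int:
    """Number of distinct configurations across all containers."""
    per_container = 1
    for values in params.values():
        per_container *= len(values)  # choices multiply per knob
    return per_container ** containers  # and again per container

# Six knobs with five candidate values each is 5**6 = 15,625 configurations
# for ONE container; a modest ten-container application explodes from there.
combos = configuration_space(per_container_params, containers=10)
print(f"{combos:.2e} possible configurations")
```

Even this coarse grid, which ignores that the knobs interact, yields a search space on the order of 10^41, which is why hand tuning by trial and error struggles.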

Rich: For sure. So one of the things we wanted to find out was who is actually making those decisions on resources. We asked which groups within IT were actually making the decisions about resource allocation, and what we found is that in most cases it was IT Ops or Cloud Ops, with a mix of other roles helping with that. Cody, does that match up with what you’ve seen, and is that what you would recommend to organizations? As a best practice, who do you see as the right folks to make that decision? 

Cody: Yeah, well, typically these days, if it’s not directly application related, there’s always a bit of contention about whose problem it is: is it infrastructure, is it the app, what’s going on here? If we look at the past, your dev group would create an application. They would tell your infrastructure group what the application’s resource requirements were, or at least what they thought they were in a general sense, and then the infra group would go and provision those nodes or VMs in a chosen environment to host the application. But what happens when you have a new code release? Are the requirements still the same? Is there more load on your application? Have they changed? Is there some kind of drift that you need to manage? This is where I think, and we kind of think, that the shift-left approach can help and get all teams involved in the application deployment. By shifting left and putting the optimization in your build pipelines, you get confidence when you go to deploy to production.

Rich: Yeah, yeah, that makes sense. Then the last thing we wanted to cover from the survey is how those decisions are made. How are the people making the resource allocation decisions actually doing it? We provided three options. One is best guess or trial and error: you try something out, see how it works, and go back and change it or tweak it as needed. The second is relying on defaults: as you get applications from service providers or different vendors, they may come with defaults that you start with. The third is using a solution like StormForge, which uses machine learning to optimize your application and test it before you put it into production. What we found was, interestingly, a pretty even split between those three, with a little more on the trial-and-error and relying-on-defaults side of things. We’ll get into the benefits of using a solution like StormForge in a little bit, but Cody, for folks that are doing trial and error or relying on defaults, is that something you see commonly, and what do you see as the primary drawbacks? 

Cody: Yeah, trial and error is likely by far the most common method to get your application stable, and I have empathy for QA teams out there that are doing it that way, because it’s just not effective or efficient. It takes a lot of hours to sometimes get these things right. This slide really took me by surprise given the pool of companies that we surveyed. I was really surprised to see that 27% are using machine learning to optimize their Kubernetes workloads, and I think that lends credibility to what we’re trying to accomplish. It’s now, I guess you could say, a widely used method to figure out whether or not your configuration is going to provide stability or instability in your environment. So I think this is where things are going, just because of how effectively it does work. 

Rich: Yeah, and obviously we’re a little bit biased on that front, since we provide a solution that does that, but I think it is pretty clear that as solutions like StormForge become viable ways to do this, it really is a much easier and more effective way of setting these resources. So this is the short pitch on what we do here. We’re not going to dive deep into the StormForge platform, and we’ll give you some ways to do that further down, but we call StormForge an application performance optimization platform. Really what it’s all about is finding the right parameters, all those Kubernetes and application parameters that I talked through a minute ago, the set of parameters that will really optimize for whatever your goals are, whether that’s cost efficiency, performance, or reliability, and we do that using machine learning. We have a process called Rapid Experimentation that you can almost think of a little bit like trial and error, but it’s done automatically, and it’s done with machine learning, which is able to really deal with that complexity and understand the very complicated parameter space that you have for your application. It runs a set of trials, homing in on the best configuration, the one that’s going to give you the best results in terms of cost and performance, and we do that as part of your pre-release process. So before you’re actually putting an application into production, we’re doing that Rapid Experimentation to find the best way to deploy it, and by the way, if you’re using a solution like StormForge, you can automate that and make it part of your CI/CD workflow. We often talk about it as adding continuous optimization, so instead of CI/CD, it’s really CI/CO/CD, right? Making continuous optimization a regular part of that automated release process. 
These are the type of results that we see from this. This is an example from a customer, a large travel website that you are likely very familiar with. They had put a lot of effort into the trial-and-error approach to making sure their applications performed well and were very efficient, and what you see here is a chart from the StormForge product. The blue triangle represents where they were at, so the baseline performance and cost of their application. They ran it through StormForge, and our machine learning learned their application and their environment and came back with a set of recommendations, which are the orange and green dots you see on the screen. What we found is that with all the time and effort they had put into optimizing this application, they had actually done a really good job of squeezing latency out of it; there wasn’t a whole lot more they could do to improve the performance and response time of the application. But what we also found is that they could essentially cut the cost of running that application in half and keep the same level of performance. They actually could have cut costs by more than half, but they would have sacrificed a bit of performance to do that, so they chose the highlighted configuration right here, which gave them the same level of performance at half the cost of running that application. 
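As an illustration of the shape of that experimentation loop (propose a configuration, run a trial, keep the cheapest result that still meets the performance objective), here is a deliberately naive random-search sketch. The pricing model, latency model, SLO, and search ranges are all made up, and StormForge's actual machine learning is far more sample-efficient than random search:

```python
import random

# A caricature of automated experimentation, NOT StormForge's algorithm:
# random search over CPU/memory settings against a made-up cost/latency
# model, keeping the cheapest configuration that meets a latency objective.
random.seed(42)

def run_trial(cpu_m: int, mem_mib: int) -> tuple[float, float]:
    """Pretend load test: returns (monthly_cost, p99_latency_ms)."""
    cost = cpu_m * 0.04 + mem_mib * 0.005       # toy pricing model
    latency = 5000 / cpu_m + 200000 / mem_mib   # toy performance model
    return cost, latency

LATENCY_SLO_MS = 250.0  # hypothetical performance objective
best = None
for _ in range(200):                             # 200 automated trials
    cpu = random.choice(range(100, 4001, 100))   # millicores
    mem = random.choice(range(128, 8193, 128))   # MiB
    cost, latency = run_trial(cpu, mem)
    if latency <= LATENCY_SLO_MS and (best is None or cost < best[0]):
        best = (cost, latency, cpu, mem)

cost, latency, cpu, mem = best
print(f"cheapest config meeting SLO: {cpu}m CPU, {mem}MiB at "
      f"${cost:.2f}/month, p99 {latency:.0f}ms")
```

The point of the sketch is the trade-off Rich describes: the loop surfaces configurations that cut cost while staying within the performance objective, rather than optimizing either in isolation.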

Those are fairly typical results we see. We usually see an impact and a benefit both in terms of being able to improve performance and improve the cost of actually running that application as well.

So we’ve only got a few minutes left here, and I want to highlight a couple of things that you can do as you leave the webinar today. First of all, we’ve got a panel discussion on this topic planned for next week, a week from today on Earth Day. We’ve got some really great panelists who are going to be joining us from the companies you can see here: the Cloud Native Computing Foundation, Amazon, and Boomi. They’re going to be talking about what you can do about cloud waste and what the impact of that problem is, really expanding on the topic that we’ve set up here. So I strongly recommend that folks attend; I think you’ll get a lot of great insights from the panelists. 

Then there are a number of other things you can do from the StormForge website as well. So we went over a number of the survey results here, but there are additional things that were in the survey that you can download. You can see the complete set of demographic data from the people who took the survey, so definitely encourage you to go and download the survey report. Also take the cloud waste pledge. So we’ve set up a pledge. We have committed to contributing to the reduction of cloud waste in 2021 and we’re looking for people who are also committed to doing that within their own organizations and are willing to sign on with the pledge and track how they’re doing along those lines as well. 

If you’d like to get started using the StormForge platform, you can sign up and start using it for free at www.stormforge.io/try-free/. There are also links there to contact us and follow us as well, so definitely encourage everyone to go out there and take advantage of some of these resources and start on the path of reducing your own cloud waste. 

So with that, we’ve got a few minutes left for questions, and a few have come in. I’m going to start with one. Cody, this one is probably a good one for you. They’re asking how much time, effort, and knowledge is needed to use StormForge’s machine learning for optimizing your applications?

Cody: Not much at all, not much overhead. Obviously, the more complex an experiment gets, meaning the more tunable parameters there are, the longer it can take, but I don’t want to say it takes much time, because we have experiment generation. It’s essentially a one-liner: you supply your manifests, and it’ll go through and read those, tell you what’s tunable and what’s not, and generate the experiment for you. So at this point, with our latest release, it’s fairly quick and easy to do. I think with the free tier, and please correct me if I’m wrong here, you get one or two tunable parameters, so if you just want to see how much you could save on CPU and memory alone, you can sign up for the free tier and we can get you started there. 

Rich: Cool. All right, the next question I wanted to take was: does the survey suggest that there are potential public relations impacts from cloud waste, or are the risks to the corporate image mostly internal? I can address that, and Cody, I’ll ask you to add on to that as well.

Yeah, we didn’t actually ask about the external impact. From my experience, I haven’t seen a lot of PR issues with cloud waste. It’s not usually something public that companies end up getting shamed for. So we didn’t ask about that, and we didn’t see it in any of the write-in responses either, but Cody, have you ever seen a situation, or dealt with customers, where their cloud waste had more of a public impact?

Cody: I mean, not necessarily. Of course the big three, Amazon, Google, Facebook, are always going to get hit first on this kind of thing, but I think with the adoption of FinOps, and this ties into the panel we’re doing on cloud waste that was on one of the previous slides, the Linux Foundation has now created another foundation called the FinOps Foundation, and that’s expected to gain a lot of traction this year from what I hear through people I know at the CNCF. They’re expecting it to be a really big topic. It may not be a PR issue now, but maybe three years from now, when this FinOps idea really takes hold, I think at that point there’s potential for some PR issues.

Rich: Yeah, I think that’s a really good point. Next question I wanted to take, Cody, maybe you can take this one, is if you cut your cloud resources down by say 40% or 50%, aren’t you then putting the performance of your applications at risk? How do you kind of balance that trade-off? 

Cody: Yeah, not at all. There’s a reason it’s machine learning and we’re not doing this by hand. It’s based on the load test, which is really the meat and potatoes of everything; we can come up with a load test for you, or you can bring your own load test and use it with our product. Using that, we can tell you exactly what the settings should be for a stable configuration, and if you want to pad it by 10% or something, there’s nothing wrong with that, we see that. But we will give you an absolute: with these configuration settings, you’re going to save X amount of dollars, if that’s your goal or objective, and we’re going to guarantee the performance of your application. So there’s really no risk, but again, we’ve seen customers who took our numbers and then added a few here and a few there to make themselves feel safe. 

Rich: Yeah, that makes sense. If you think back to the slide on complexity, there are so many different things that impact cost and performance, and that’s part of what the machine learning is able to do: find the configurations that give you the cost benefit without sacrificing the performance of the application. So the next question I want to take, and we’ll do a couple more here, was around how big the companies surveyed were in terms of their cloud spend, and whether we see any difference in cloud waste between larger and smaller companies. It’s a good question. When we did the survey, we kept it open to companies of all sizes because we wanted a range, and we got a really good mix: some with cloud spend under $5,000 a month, a decent percentage with cloud spend over $1,000,000 a month, and everything in between. Interestingly, we found that company size had maybe a small effect on the amount of cloud waste organizations reported. On the smaller end, they were somewhat less likely to see as much cloud waste, but it wasn’t the steep curve you might expect, where larger organizations have a lot more waste. It was really across the board; every organization perceived a fair amount of cloud waste. If anything, the organizations that estimated the largest percentage of waste were the ones in the middle. So I think larger organizations are putting a bit more focus and effort into reducing their cloud costs, smaller organizations maybe don’t have as much urgency around the problem, and that middle ground is where we saw more of the higher numbers.
All right, last question I think that we’ve got time for here just in the last couple of minutes, Cody this one’s for you. So when you’re using StormForge, where do you run your experiments? Do you need a canary instance?

Cody: So we prefer you to run in a dev or QA environment that resembles your production environment, but again, we want you to shift left as much as possible. So whether that’s kicking off the experiment in a build pipeline or something similar, we’re also API driven and command line driven, so you can easily script it out. But preferably a QA environment, something that resembles your production environment as closely as possible.

Rich: Great. There’s one more question I want to get to before we wrap up, and it goes along with the earlier question about company size: if you’re a small to mid-sized business and you want to reduce cloud waste, what low-cost first steps can you take without a lot of overhead?

Cody: Yeah, absolutely, I like this question a lot. We touched on a few things earlier, but regardless of whether you use StormForge or some other optimization tool, the first thing you can easily look at is how well you’re utilizing the resources you’ve allocated. A lot of the time, even back in the data center days, we were only using a small fraction of the CPU and a small fraction of the memory we had allocated to those applications. Some servers would sit there hosting a single application that was using 0.05 CPU. So the first easy step is making sure you’re utilizing the resources you have efficiently. I’m not saying you want to peg your CPU at 100% half the day, but if you can get somewhere around 60% CPU utilization and 60-80% memory utilization, that’s going to be a great start.
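To make that first step concrete, here is a minimal sketch of comparing actual usage against allocated requests and flagging anything far below the rough 60% targets Cody mentions. The usage and request figures are hypothetical, not numbers from the survey:

```python
def utilization_pct(used, allocated):
    """Percent of an allocated resource that is actually used."""
    return 100.0 * used / allocated

# Hypothetical usage vs. request figures for one workload
cpu_used, cpu_requested = 120, 1000    # millicores
mem_used, mem_requested = 400, 2048    # MiB

cpu_util = utilization_pct(cpu_used, cpu_requested)   # 12.0
mem_util = utilization_pct(mem_used, mem_requested)   # ~19.5

# Flag resources well below a rough 60% utilization target
for name, util in [("cpu", cpu_util), ("memory", mem_util)]:
    if util < 60:
        print(f"{name}: {util:.1f}% utilized - consider rightsizing")
```

In a Kubernetes environment the same comparison can be done by hand from `kubectl top pods` output against the resource requests in the pod specs, with no extra tooling required.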

Rich: Yep, great. All right, well, thanks Cody for adding your insights from your experience; really appreciate that. Before we wrap up, make sure you register for the panel discussion next week if the survey results have been interesting for you. I think next week’s event will really delve into what you can do about this and what we’re seeing around the industry. So with that, thank you everyone for attending. Thanks again, Cody, for adding your insights, look for the survey, and we’ll talk to you all soon. Have a great day.

Cody: Thank you.
