Frequently Asked Questions

StormForge’s customer support teams have fielded questions from a wide variety of engineering teams. In this FAQ, we have assembled the most frequently asked questions.

StormForge Optimize Live

StormForge Optimize Live continuously rightsizes Kubernetes workloads to ensure cloud-native applications are both cost effective and performant while removing developer toil.

As a vertical rightsizing solution, Optimize Live is autonomous, tunable, and works seamlessly with the Kubernetes horizontal pod autoscaler (HPA) at enterprise scale.

Optimize Live addresses both over- and under-provisioned workloads by analyzing usage data with advanced machine learning to recommend optimal resource requests and limits. Recommendations can be deployed automatically on a flexible schedule, accounting for changes in traffic patterns or application resource requirements, ensuring that workloads are always right-sized and freeing developers from the toil and cognitive load of infrastructure sizing.

Optimize Live realizes the full potential savings of bi-dimensional autoscaling by combining the benefits of horizontal and vertical pod autoscaling. When it detects a horizontal pod autoscaler (HPA) on a workload, Optimize Live sets the HPA target utilization alongside the vertical rightsizing recommendation to ensure optimal application scaling behavior.
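
To make that concrete, here is a simplified, hypothetical example of the two sets of knobs involved: the container resource requests and limits that vertical rightsizing adjusts, and the utilization target of an HPA scaling the same workload. The workload name and all values below are placeholders for illustration, not recommendations produced by Optimize Live.

  # Hypothetical workload: vertical rightsizing adjusts these requests and limits
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web-api
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: web-api
    template:
      metadata:
        labels:
          app: web-api
      spec:
        containers:
          - name: web-api
            image: example.com/web-api:1.0   # placeholder image
            resources:
              requests:
                cpu: 500m         # values a rightsizing recommendation would tune
                memory: 512Mi
              limits:
                cpu: "1"
                memory: 1Gi
  ---
  # HPA on the same workload: the utilization target can be tuned alongside the requests
  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: web-api
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: web-api
    minReplicas: 2
    maxReplicas: 10
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # percent of the CPU request at which to add replicas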

Software is not static; today, software companies ship more releases, more often. Developers are using StormForge to make sure they’re managing software appropriately and that it’s running with optimal efficiency, without having to spend time manually tuning applications. What does that mean from a value standpoint? It means they don’t have to choose between cost and performance or other metrics. Developers can unlock a whole new world by gaining a much broader view of their environment from a resource management standpoint and making more intelligent decisions.

The StormForge platform is built to be integrated into the DevOps toolchain. It is hardened from a cloud-native perspective and vertically integrated, so developers can get up and running quickly and realize value fast.

StormForge is the only solution that closes the loop between pre-production and production with both experimentation-based and observation-based optimization in a single platform. Our patent-pending Machine Learning engine allows us to provide a level of sophistication that goes above and beyond the basic statistical modeling used by other solutions. And, unlike observability and APM tools, which are important for seeing what’s happening in production, StormForge Optimize Live is proactive and enables you to automate the change or fix that comes from observing data. In that way, it’s the perfect partnership – marrying observability and APM tools with StormForge’s ML-driven optimization – and a big differentiator for StormForge.

StormForge started as a machine learning lab. We were a Docker Swarm shop for the first few years, and we were trying to solve our own lift-and-shift challenges in moving from Docker Swarm to Kubernetes. When our team of data scientists and engineers was performing this lift-and-shift, we realized how painful tuning is for developers.

We knew we weren’t alone, and we immediately went out and talked to as many developers as we could to see if they were experiencing some of the same problems. Workloads are moving to Kubernetes at a breakneck pace. That was really the start of StormForge – we were born out of a real business problem that stalls organizational momentum. So, we dedicated ourselves to solving the problem in a way that makes life easier for developers and gives businesses the intelligence and insight they need to make better resource decisions as they move to Kubernetes.

Start a free trial today or take our sandbox environment for a test drive, and begin improving application performance, reducing cloud costs, and enhancing the developer experience.

Kubernetes App Optimization

Kubernetes – or “k8s” – is an open-source software platform that automates Linux container operations. K8s enables IT and DevOps teams to manage workloads and services via a framework that runs containerized applications on distributed, cloud-based systems in automated and resilient ways.
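
As a small, hypothetical illustration of that model: a team declares the desired state of an application in a manifest like the one below, and Kubernetes continuously works to keep the cluster in that state, restarting or rescheduling containers as needed. All names and the image are placeholders.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: hello-web                # placeholder name
  spec:
    replicas: 3                    # desired state: always keep three copies running
    selector:
      matchLabels:
        app: hello-web
    template:
      metadata:
        labels:
          app: hello-web
      spec:
        containers:
          - name: hello-web
            image: example.com/hello-web:1.0   # placeholder container image
            ports:
              - containerPort: 8080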

Learn more about Kubernetes here.

With the continual growth and adoption of Kubernetes, companies need a way to manage the complexity of their Kubernetes applications. Monitoring workloads, tuning manually, and managing applications at scale – all without over-provisioning resources that dramatically inflate your cloud bill – are just a few of the reasons why Kubernetes management is needed to simplify and automate your Kubernetes environment.

Learn more about Kubernetes management here.

Cloud-Native is an approach to building and running applications that exploits the advantages of the cloud computing delivery model. Cloud-native architecture takes full advantage of the distributed, scalable, flexible nature of the public cloud. “Going Cloud Native” means developing and deploying applications that abstract away many layers of infrastructure — networks, servers, applications, etc.

Learn more about Cloud Native here.

When organizations go cloud-native with their key business applications and manage them with Kubernetes, StormForge helps ensure that those apps maintain the desired performance while minimizing cost at all times. StormForge addresses one of the big challenges in Kubernetes-driven environments: application performance optimization. Leveraging machine learning-powered automation, StormForge eliminates manual tasks and trial-and-error, replacing them with fast, dynamic analysis, adaptation, and action.

StormForge is purpose-built for Kubernetes. Its ground-breaking machine learning technology is a perfect fit for solving the complex problems that arise when going cloud-native. It is innovative both conceptually and in practice because it continuously optimizes and improves the Kubernetes operating environment.

Application optimization is the process of tuning, testing, and re-tuning an application’s parameters and configuration settings so that its operational performance is in line with the organization’s preferences – whether that be the lowest cost, the highest speed, or some other specific objective.

One real-world example is that many companies focus only on CPU and memory tuning. In practice, however, companies can and should consider many other parameters as a way of establishing and maintaining an optimized Kubernetes environment. This is a major difference between StormForge’s proactive approach to optimization and conventional APM (Application Performance Management) software products.
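
As a sketch of what those ‘other parameters’ can look like in practice, application-level settings – for example, a JVM’s heap size and garbage-collection threads – can be exposed alongside the container’s CPU and memory settings and considered together. The names, flags, and values below are hypothetical placeholders, one of many possible combinations:

  # Hypothetical Java workload: CPU/memory and JVM settings considered together
  apiVersion: v1
  kind: Pod
  metadata:
    name: orders-service               # placeholder name
  spec:
    containers:
      - name: orders-service
        image: example.com/orders-service:1.0   # placeholder image
        env:
          - name: JAVA_TOOL_OPTIONS
            # application-level parameters: heap size and GC threads
            value: "-Xmx768m -XX:ParallelGCThreads=2"
        resources:
          requests:
            cpu: 750m
            memory: 1Gi
          limits:
            cpu: 1500m
            memory: 1536Mi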

We recommend that you check out this StormForge blog post for a deeper dive into this topic: Improving performance cost-efficiency in Kubernetes Applications.

The simple answer is that Kubernetes is hard to work with because it was designed to manage containers at Google scale alongside legacy services. Kubernetes, in short, supports a fast-moving, complex, and demanding environment.

It is complicated for three reasons: (1) deploying applications with Kubernetes involves advanced levels of automation and multifaceted levels of abstraction (i.e., everything is broken down into smaller and smaller pieces); (2) over the years, there have been many major releases of Kubernetes (sometimes 3-4 per year) with a lot of new features and changes, and this pace is not expected to slow down; and (3) the levels and varieties of configurations are far greater than ever before because of containerization, resource sharing, and microservices.

StormForge can tune any application running on Kubernetes. Some of the most common use cases, languages, and technologies tuned by our customers include:

Languages:

  • Java
  • .NET
  • Python
  • JavaScript/Node.js
  • Go
  • PHP
  • Any other language that runs in a container on Kubernetes

Technologies:

  • Apache Spark
  • Cassandra
  • Drupal
  • Elasticsearch
  • Horizontal Pod Autoscaler
  • Nginx
  • PostgreSQL
  • Vertical Pod Autoscaler
  • WordPress
  • Any other technology that runs in a container on Kubernetes, including all of your custom applications

There are two ways to optimize an application:

The first is handling the task manually: a human changes settings or parameters, assesses how these “tweaks” have impacted results, and then repeats this trial-and-error process until one set of results is acceptable and desirable. Clearly, this approach – a typical APM approach – is time-consuming, error-prone, and ultimately ineffective in the Kubernetes environment.

The alternative is automated optimization of application performance that is software-defined – driven by artificial intelligence and machine learning. Given the huge number of variables involved with cloud-based applications, and the complexity of testing, adjusting, and re-testing millions of combinations of those variables, the automated approach is far superior to any manual effort.

Optimizing an application’s performance is important for several reasons. First, it means that the application will operate as users expect – in terms of availability, reliability, speed, responsiveness, etc. The second reason is financial. Over-provisioning cloud apps is one way to help ensure that they run well, but doing so can drive costs through the roof. Properly optimized applications perform well under load but do so in a cost-effective way, with ‘just right’ provisioning.

Namespaces are the way to partition a single Kubernetes cluster into multiple virtual clusters, thereby allowing businesses to separate resources and control security and access to those resources.

A “service” or “microservice” is a component or specific collection of pods that act as a logical slice of an application. In other words, a service is a portion of an application that executes a particular task.
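
For reference, both concepts map to small Kubernetes manifests. In the hypothetical example below (all names are placeholders), a Namespace carves out a virtual cluster for one team, and a Service groups the pods that make up one logical slice of an application behind a single address:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: team-payments              # placeholder: one team’s virtual cluster
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: checkout                   # placeholder: one logical slice of the application
    namespace: team-payments
  spec:
    selector:
      app: checkout                  # selects the pods that perform this task
    ports:
      - port: 80                     # port the service exposes
        targetPort: 8080             # port the pods listen on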

An experiment defines what you want to test, how it will be tested, and what we will do to optimize the application. Via a series of scientific trials, StormForge automatically explores a variety and range of parameters related to the Cloud environment, Kubernetes, the application(s), load, etc. StormForge’s Machine Learning algorithms vary the parameters used in these trials and automatically run multiple simulations, under load, in order to find the optimal parameters and, ultimately, the optimal configuration for all involved technologies.

Kubernetes Performance Testing

Application performance testing is the process of assessing the performance, stability, scalability, reliability, and resource usage of applications under a particular workload. By defining test scenarios (test cases) and using user-defined data sources, a real-world workload can be generated and the application tested against it. The metrics gathered during test runs are used to benchmark the current status of an application against its target status, as it might be defined in an SLA. If performance testing reveals that an application is not performing at the desired level, the next task is to use performance data and other sources of information to locate and diagnose the problems and take the necessary actions to resolve them. The overarching goal of performance testing, therefore, is not only to make an application fast, but to enable an organization to understand why the application behaves as it does and what limitations it has.

Companies are looking to automate, scale, deploy faster, and lower the cost of their applications. To determine whether or not they are actually gaining those potential benefits, organizations need to test application performance. By doing so, they get quantitative answers to key performance questions, such as the following: What is the actual speed, scalability, and stability of an application versus expectations? How does the application perform as the number of concurrent users rapidly increases or decreases? Is the application’s resource provisioning optimal?

With StormForge Performance Testing, engineers can create load tests in just minutes and scale tests from tens to hundreds of thousands of requests per second. Performance Testing also enables tests that replicate the activity of millions of concurrent users. The platform’s intuitive user interface allows teams to easily create repeatable, automated load tests to incorporate into their CI/CD workflows.

At this time, any application reachable via HTTP can be tested with StormForge Performance. In some cases, closed systems (not reachable over the internet) can also be tested. Feel free to contact us to discuss your specific needs.

The focus of performance testing changes over a project’s lifetime as different aspects of a system need to be evaluated. StormForge supports the following testing types: load, smoke, endurance (soak), throttle, stress, peak, and scalability.

Learn more about the different types of performance tests here.

The maximum load generated for each test depends on the plan an organization has purchased and on what is defined in each test case. With the highest plans, any kind of real-world load can be generated.

Of course. You can find the details in our documentation.

It’s pretty easy. Tests can be automated and scheduled using either the CLI in combination with your CI tooling or the scheduling feature in the WebUI.

Yes. Each test case is highly configurable to simulate real-world user behavior as observed by, for example, an organization’s marketing team (user journey, time-on-page, etc.). You can upload your own data sources and define which data sources are used in a test case.

Yes. Load can be generated from any inhabited continent to simulate traffic from clients in those regions.

Yes. In more and more cases, organizations are launching their mobile apps in regions with limited or varying mobile coverage. With Throttle testing in StormForge Performance, organizations can make sure they serve all of their customers and launch in new markets with confidence.

Sure thing. To distinguish between real user traffic and StormForge traffic to, for example, a production system, you can either set a custom HTTP header or filter for the standard headers StormForge sends. In addition, you can use basic auth to allow load tests to access your test environments.

StormForge Performance Testing fits in the CI/CD pipeline right after a change is deployed to an integration or testing environment. Performance testing should happen, in general, at the same time as integration and acceptance testing.
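
As a rough sketch of that placement – using GitHub Actions purely as an example – a performance test job can be made to run only after the deploy job has finished. The job names, scripts, and the load-test command below are placeholders, not StormForge’s actual CLI:

  # Hypothetical CI workflow: the load test runs right after a deploy to a test environment
  name: deploy-and-load-test
  on:
    push:
      branches: [main]
  jobs:
    deploy:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - name: Deploy to test environment
          run: ./scripts/deploy-to-test.sh       # placeholder deploy step
    performance-test:
      needs: deploy                              # runs only after the deploy job succeeds
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - name: Run load and performance tests
          run: ./scripts/run-load-test.sh        # placeholder: invoke your load-testing CLI here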

The number of users, collaborators, test cases, test data, and test runs depends on which plan you have purchased. Contact us for more details on what is included with each plan.

About StormForge

StormForge is the single, unified software company that emerged following Carbon Relay’s acquisition of StormForger in November 2020. The company’s team of world-class data scientists, machine learning experts, and seasoned DevOps engineers builds products that drive operational improvements in the Cloud and on-prem through better software performance and more effective resource utilization. StormForge’s main software offerings are automated performance testing and performance optimization of Kubernetes applications.

Carbon Relay acquired StormForger in November 2020 and rebranded the newly combined company as StormForge. We invite you to view our Press Release for more details.

StormForge offers application optimization and performance testing solutions for your Kubernetes applications. The company also offers a full range of Professional Services designed to help customers make a smooth transition from monolithic apps to microservices running in containers orchestrated by Kubernetes. These services help bridge customers from their legacy approaches to a next-generation environment while ensuring performance, reliability, and cost-efficiency.

StormForge’s Professional Services team can guide customers through all stages of the DevOps lifecycle. Our experienced, hands-on team delivers the technical expertise, oversight, and guidance customers need to ensure reliability and performance before their products are released. Specific service offerings include Application QuickStart, Kubernetes Auditing & Guidance, and Education Service. For customers with unique or specific requirements, StormForge also offers Custom Consulting Services.

For more information on these offerings, visit the Professional Services page.

The most efficient way to contact StormForge is by visiting our Contact Us page. There you will find ways to contact the team, Press / Media inquiry information, our phone numbers, and office locations.

General inquiries can also be made by phone at +1 857-233-9831 or via email at info@stormforge.io.
