This blog originally appeared on The New Stack.
There are many globally pervasive applications today (Netflix, Zoom and TikTok, for example) that use large amounts of computing power. Additionally, the carbon footprint of a number of innovative and important technologies (like IoT) is significant. So yes, the carbon emissions of data centers and growing internet traffic are, of course, a problem. However, they are not something a developer or someone else from the tech community can directly impact. What the collective “we” should focus on is reducing the resource needs of the cloud-based digital products we code today and tomorrow. This article takes a look at hidden carbon costs, the risks associated with the rising energy consumption of data centers, what the major cloud providers are already doing, and why resource efficiency in the cloud is so important and achievable.
Conflicting studies confuse the issue of carbon emissions and the data center. One study finds that greenhouse gas (GHG) emissions from data centers within the US amount to only 31.5 million tons, or 0.5% of total GHG emissions. However, another study from Yale finds that data centers worldwide account for over 2% of human-influenced carbon emissions (equivalent to the global aviation industry) and consume over 2% of the electricity produced globally. A third study argues that such estimates (like the previously cited reports) are not accurate, pointing to the lack of bottom-up studies that would precisely match the energy sources powering data centers worldwide with the carbon emissions of those sources. Overall, the results are inconclusive.
Fortunately, regardless of what the data may or may not be telling us, all of the major cloud providers have committed to renewable energy and are investing heavily in far more efficient hyperscale data centers. They have prioritized infrastructure optimization and scale in the way computing facilities are designed, managed and powered, and the initiatives at AWS, Google and Microsoft have been well documented. While their efforts toward cleaner cloud technology are commendable, one problem remains: data centers have to keep diesel-powered backup generators for their facilities, and diesel fuel has a shelf life of only 6-12 months (under the best conditions). Big players like Microsoft are trying to find better alternatives to this carbon-intensive backup technology. Natural gas and hydrogen fuel cells are candidates for a cleaner path, but these approaches are still in the pilot phase.
Despite the many carbon-related issues surrounding cloud technology, it is not the carbon emissions but rather the rapid increase in energy demand to run cloud-based applications that is worrisome. As more and more tasks are digitized and far more compute-intensive applications go mainstream, market demand could outrun the efficiency gains of Moore’s Law and thereby also outpace renewable energy production. More efficient chip and circuit design can help “flatten the curve,” but it will soon run into the limits of physics. How will future data center energy demand evolve as usage keeps growing, exacerbated by the fact that no storage technology exists today that can store electricity from renewables at that scale?
To further complicate matters, the digital realm does not exist in isolation. Other energy-intensive use cases like electric cars compete with software on the energy demand side and are already hitting the limits of today’s grid.
Everyone would like to contribute to fighting climate change and making it easier for humanity to have a sustainable future. How can the tech community do its part? Amazon chief sustainability architect Adrian Cockcroft has made an important observation about the cloud that cannot be ignored today: as we’ve successfully used the cloud for digital transformation, why not also use it for sustainability transformation?
To achieve this, the focus needs to be on cloud efficiency. Today, the cloud resources needed to run applications are abundant and easy to provision, and are therefore easily wasted. Overprovisioning computing resources in the cloud is not a new phenomenon. The FinOps lifecycle provides a proven framework for pursuing cloud efficiency.
Finding the optimal configuration for a complex system is not a trivial endeavor, especially when performance, reliability and stability goals have to be met at the same time. A team needs time as well as expertise in Kubernetes, the application itself and load test design. We believe this is the most important reason cloud waste through overprovisioning is a growing problem.
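To make the overprovisioning gap concrete, here is a minimal sketch in Python of the basic idea behind right-sizing a workload: collect observed usage samples, take a high percentile plus a safety margin, and compare the result with what the pod spec currently requests. The numbers, the right_size helper and the percentile-plus-headroom rule are purely illustrative assumptions, not StormForge’s method.

```python
from statistics import quantiles

def right_size(usage_samples, requested, percentile=95, headroom=1.2):
    """Suggest a resource request: high-percentile observed usage plus headroom."""
    # quantiles(..., n=100) returns the 1st through 99th percentile cut points.
    cut = quantiles(usage_samples, n=100)[percentile - 1]
    suggested = cut * headroom
    # Share of the current request that observed usage never needs.
    unused_share = max(requested - suggested, 0) / requested
    return suggested, unused_share

# Hypothetical per-pod CPU usage samples (in cores) collected over a day.
cpu_usage = [0.12, 0.15, 0.11, 0.30, 0.22, 0.18, 0.25, 0.14, 0.16, 0.21]
requested_cpu = 1.0  # cores currently requested in the pod spec

suggestion, unused = right_size(cpu_usage, requested_cpu)
print(f"Suggested CPU request: {suggestion:.2f} cores "
      f"(about {unused:.0%} of the current request sits idle)")
```

Even this toy version shows why the problem is hard in practice: the right percentile, headroom and observation window differ per workload, and getting them wrong risks the very performance and reliability goals teams are trying to protect.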
StormForge makes it easy for DevOps teams to make the right decisions for efficiency and sustainability. It empowers engineers with the information they need to deploy cloud native applications in a way that minimizes waste and optimizes resource utilization. With StormForge, this process is completely automated, so developers can focus on building innovations and not get bogged down in Kubernetes configuration.
We believe that as members of the tech community, we have a responsibility to make sure that applications run as resource-efficiently as possible, to give us as much time as possible to create the energy ecosystem we need for a sustainable, digital future. To see how StormForge can help, contact us here.