Since the start of the cloud era, on-premises infrastructure has taken a back seat to cloud-based alternatives. Many enterprise workloads moved from corporate data centers to the cloud hyperscalers: Amazon Web Services (AWS), Google Cloud and Microsoft Azure.
Not all roads end with cloud
One of the misconceptions of cloud was that all workloads would ultimately end up there. The marketing engines of cloud providers and pundits helped fuel this mantra, while reality has proven otherwise. That is not to say that enterprise workloads should not leverage public cloud, just that it is one of many solutions available.
A few of the reasons why enterprise workloads may not use public cloud: a) the workload is not well suited for cloud without significant rewriting, b) physical constraints or latency requirements, or c) regulatory or data-sovereignty restrictions.
The rise of private cloud
For workloads not destined for public cloud, enterprises still wanted a cloud-like experience but with on-premises options. Enter private cloud infrastructure.
At first, private cloud infrastructure was complicated and hard to manage. Fast forward to today, and solutions like HPE GreenLake provide elegant tools to manage an enterprise's fleet of systems.
Bringing infrastructure back in vogue
The ultimate drivers for on-premises infrastructure (servers and storage) come from specific applications, use cases and outcomes that are not well suited for public cloud or that require a hybrid approach.
While AI in the public cloud has garnered quite a bit of attention, there is a growing need for AI capabilities at the edge. Many AI applications will even use a hybrid approach with some functions leveraging public cloud while others leverage edge and on-premises devices.
That brings us back to on-premises infrastructure. New architectures and requirements from technologies like AI demand a different type of infrastructure than what was commonly used in the past.
Today's infrastructure is more capable and sophisticated, and it breaks several norms that have existed for decades.
New infrastructure architectures require much higher rack density than ever before, which creates challenges in space, weight, power and cooling. High-performance processors such as Nvidia's H200 GPU draw a lot of power and give off a tremendous amount of heat. As a result, systems built on the H200 require liquid cooling; air cooling alone is not enough to keep the chips running at optimal temperatures.
Lenovo’s 6th-Gen Neptune Liquid Cooling is one example of a new cooling architecture for servers and storage in the data center.
Bringing more power and liquid cooling to a data center, especially a corporate data center, requires a change in thinking and design. Many enterprises still operate under a 'no liquids in the data center' rule. That is about to change.
Liquid cooling is not new per se, but it is not widely used either. Today, liquid cooling comes in multiple forms, from specialized coolants to water. While hyperscalers have already trekked down the path of liquid cooling in their data centers, enterprises have yet to widely embrace this change.
The combination of liquid cooling, private cloud architectures and more sophisticated solutions opens the door to new options for on-premises infrastructure to support high-performance workloads such as AI.
CIO Perspective
The drive to embrace new architectures and technologies in the corporate data center is not a call to repatriate workloads from public cloud. The reality is that enterprise workloads are getting more complicated and require a hybrid approach to their architecture.
At the same time, new data center infrastructure is driving a resurgence in on-premises capabilities. Those new technologies are incredibly impressive, but they require a different approach: one that starts with business outcomes and works down to the underlying infrastructure.
New requirements demand a different way of thinking. We are quickly approaching a point where, if you want AI on-premises in a corporate data center, you will need to support liquid cooling.