A data centre-sized conundrum for CIOs: Cutting costs and evaluating TCO

Robert Hormuth, CVP, Architecture and Strategy, AMD, offers tips for CIOs on evaluating the total cost of ownership of data centre solutions and generating efficiencies. He explains why upgrading to newer infrastructure is more cost-efficient, as older systems often cost more in upkeep and energy.

CIOs with tightened budgets face a data centre-sized conundrum. Where can they cut the fat while preserving essential services and still investing in the technologies that grow revenue? One area stands out on any budget sheet: data centres, which are consistently among the largest cost centres for IT and continue to grow.

In fact, core IT spending around data centre needs – systems, software, devices and IT services – is projected to account for nearly 70% of total IT spend in 2023, according to Gartner. That’s an increase of more than 20% since a decade ago. 

Of course, CIOs can’t just gut their data centre budgets. Their infrastructure – whether on-prem, colocation, public or private cloud (or likely, some combination) – is now critical to operating in the modern world. Data and compute are accelerating faster than ever before, thanks in part to the pandemic turning our lives even more digital, as well as the explosion of data and memory-heavy AI workloads. And CIOs must meet their mandate to ensure their workforce is armed with software stacks that are robust, better than the competition’s and, critically, agile for developers to build and deploy cutting-edge solutions. 

That’s not to say bloat is better. As CIOs contend with what’s best for the business, their customers and their bottom line, there are areas in their data centres that may be overlooked for long-term growth and right-sizing. Several tips for CIOs to evaluate the total cost of ownership of data centre solutions and generate efficiencies include:

1. Reevaluating legacy systems

Life cycles for data centre hardware are contracting. While chips from three, five or 10 years ago may still function, advances in line with Moore's Law – the exponential improvement in semiconductors – mean those systems are not just unable to compete with modern servers; they're leagues behind.

Legacy systems are often seen as 'good enough', and blanket investment in upgrades strikes many as too costly and time-consuming. But the ROI is undeniable: outdated servers cost more in upkeep and energy, and occupy a larger footprint, than ultra-efficient, high-performance modern servers.

In contrast, modern data centre processors can deliver multifold gains in performance and efficiency. While every CIO will have different needs, data centre footprints are not infinite and investing in new physical real estate is difficult to execute in this economic climate. Bringing in new, highly efficient chips can not only improve performance, but also reduce the number of servers in data centres. This opens up worlds of possibilities, from leaving headroom for future growth (such as new AI-powered software solutions) to adding features that enhance the data centre's future capabilities (such as access to the most up-to-date memory and I/O technologies).

Legacy systems can also significantly drain budgets – both in day-to-day operations and when significant scale is needed. For example, systems more than four to five years old often cost more to run in upkeep and energy than the cost of upgrading to newer models. And this does not account for the 'soft costs' of more frequent unplanned downtime, performance degradation and increased security exposure. CIOs with outdated systems will lag behind competitors in the capacity to quickly scale up operations as needed.
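The upkeep-versus-upgrade argument can be sketched as a back-of-the-envelope model. All figures below – fleet sizes, wattages, maintenance and purchase costs, and the energy price – are illustrative assumptions, not AMD or vendor data:

```python
# Back-of-the-envelope TCO comparison: keep an ageing fleet vs refresh.
# Every number here is a hypothetical assumption for illustration only.

def annual_cost(servers, power_kw_each, maint_each, energy_price=0.12):
    """Annual energy + maintenance cost (USD) for a fleet of servers."""
    hours = 24 * 365
    energy = servers * power_kw_each * hours * energy_price
    maintenance = servers * maint_each
    return energy + maintenance

# Hypothetical legacy fleet: 100 five-year-old servers.
legacy = annual_cost(servers=100, power_kw_each=0.6, maint_each=1_500)

# Hypothetical refresh: consolidation onto 40 modern servers,
# plus purchase cost amortised over a five-year life.
modern = annual_cost(servers=40, power_kw_each=0.5, maint_each=800)
amortised_capex = 40 * 15_000 / 5  # assumed $15k per server over 5 years

print(f"Legacy annual run cost: ${legacy:,.0f}")
print(f"Modern annual cost (incl. capex): ${modern + amortised_capex:,.0f}")
```

Under these assumed inputs the refreshed fleet comes out cheaper per year even with the purchase price amortised in; the point of the sketch is that the comparison is a simple sum any CIO can run with their own numbers, not that these particular figures apply.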

2. High core density

Improving server efficiency is key to reducing data centre TCO. With electricity bills on all CIOs' minds, there's a misconception that newer, high-core-count chips will lead to skyrocketing energy usage. But that's not the case. Advancements in chip core density mean CIOs can do more with fewer servers, thereby reducing power consumption per workload. In fact, due to advances in manufacturing and process technologies, today's servers can process significantly higher workload volumes while consuming less power than older servers.

Prioritising consolidation in this way should be viewed as more than just a cost-cutting measure. It's critical to creating greater capacity within the envelope of the data centre, which allows CIOs to flex more easily with growing demands and do more with their current technology and space. For example, bringing new or expanded power feeds into a facility is extremely costly and complicated. While upgrading to higher-core processors also has upfront costs, the ability to process higher workloads means fewer servers are needed and energy use falls – potentially eliminating the need to secure more power, both to the server and for cooling.
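The consolidation arithmetic – fewer servers and less power per workload as core density rises – can be illustrated with a short sketch. The core counts, workload sizes and per-server wattages below are hypothetical:

```python
import math

# Consolidation sketch: server count and power per workload before and
# after a core-density upgrade. All inputs are hypothetical examples.

def power_per_workload(workloads, cores_per_server, cores_per_workload,
                       watts_per_server):
    """Return (servers needed, watts consumed per workload)."""
    servers = math.ceil(workloads * cores_per_workload / cores_per_server)
    return servers, servers * watts_per_server / workloads

# 1,000 workloads at 4 cores each, on 32-core legacy vs 128-core modern parts.
old_servers, old_w = power_per_workload(1_000, 32, 4, watts_per_server=450)
new_servers, new_w = power_per_workload(1_000, 128, 4, watts_per_server=500)

print(f"Legacy: {old_servers} servers, {old_w:.1f} W per workload")
print(f"Modern: {new_servers} servers, {new_w:.1f} W per workload")
```

Even though the assumed modern server draws more watts per box, quadrupling core density cuts the server count – and therefore the power per workload – sharply, which is the mechanism behind the per-workload savings described above.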

Overall, the ROI on these servers is undeniable when compared to the alternative.

3. The cost of doing nothing 

The cost of doing nothing is often overlooked when deciding whether to procure new technology in an era of tight budgets. Many companies account only for IT capital costs – a key factor, but not the only cost to consider. For example, many IT leaders fall into the trap of thinking that because their infrastructure is paid for, it is more cost-effective to leave it in place. The reality is that older hardware can cost more to operate on an annual basis than replacing it with more performant, energy-efficient servers – and again, that does not account for the soft costs mentioned earlier. Over the lifetime total cost of ownership, these costs can hamper the ability of a data centre to scale with future needs and evolving technology.

Companies should take into account the pace of innovation and expect that they will need to evolve their technology to integrate with or be replaced by newer and better versions every few years.

All companies must live with legacy systems to a certain extent. However, those that take a proactive approach to systems and infrastructure upgrades ensure that the choices they make around technology selection are optimised for change and scalability. 

Evaluating the total cost of ownership for a data centre isn't as straightforward as it initially seems. With the massive growth in data generation – according to Statista, the world will produce slightly over 180 zettabytes of data by 2025 – the need for data centres is only increasing. CIOs must look at the total costs associated with their IT strategy and choose solutions that maximise efficiency while remaining easily adaptable, scalable and customisable. For on-prem deployments, this means examining capital costs, power and cooling costs, security risks and competitive advantages. For cloud, it means choosing the most optimised and cost-effective instances. Updating to newer infrastructure may not only be a better business strategy – it might just add more money to the bottom line as well.

Intelligent Data Centres