What is a data center?
A data center is defined as a physical space which safely stores computer systems that in turn store and share data for computation by client systems. These computer systems at a data center typically include servers, data storage systems, networking equipment and security systems.
A simple example to help understand the function of a data center is a video streaming service. When a user plays an online video on a streaming service, the actual video data is stored on a server physically located in a data center. The user's laptop or phone (the client system) requests the video from the data center, and once fetched, the video plays on the device.
Digital services are the heart of the modern internet. From Netflix to Meta, companies around the world serve millions to billions of internet users with digital products. A data center drives the back end of this entire experience by giving these enterprises a centralized facility to run their digital infrastructure and services, around the clock and without interruption.
Another example: Imagine searching for something on Google. As you hit the search button, the packets of information requested go through a Google data center (via the internet and fiber cables) to be processed and provide the actual search results that you see on your device screen. This is precisely what all data centers are meant to do. They bring together powerful computers that store, process and disseminate data and applications to support business-critical use cases, web apps, virtual machines and more.
Ultimately, this ensures the smooth running of day-to-day business operations and functions.
The need for data centers has grown exponentially with the rise of affordable computing devices (smartphones, tablets) and high-speed internet. Data center facilities already consume roughly 1% of global electricity demand. They are available in all sizes — from fitting in a closet or a small room to a massive facility covering acres. All tech giants, including Google, Meta, Amazon and Microsoft, have built data centers across different parts of the world.
Data center architecture: Key design components
Small or large, a data center design is never complete without certain core components that drive its functionality, from IT operations to the storage of data and applications. These include:
- Servers: These are computing devices that include high-performance processors, RAM and sometimes GPUs to process massive volumes of data and drive applications. Multiple server units are combined to form a single data center rack. Depending on the use case, an individual server or rack may be dedicated to a task, application or specific client. On the whole, modern data centers are home to thousands of servers, working on various tasks/applications.
- Storage systems: The storage side of things for the servers is handled by storage systems that can include hard disk drives (HDDs), solid-state drives (SSDs) or old-school robotic tape drives. These units hold business-critical data and applications with multiple backups, allowing easy access to end users and recovery in case of cyberattacks or disasters.
- Network and communication infrastructure: This element connects the servers, storage systems and associated data center services to end-user locations. It largely comprises routers, switches, application delivery controllers, and endless cables that help information flow through the data center.
- Security: This is the final component. It includes elements that are responsible for maintaining the security of the information and applications housed in data centers. It can range from firewalls and encryption to comprehensive network and application security solutions.
The 4 tiers of data centers
When setting up a data center, an enterprise has to consider multiple factors, including its area of work, location, finances and the urgency of data access, in order to select the ideal infrastructure. To help with this, the American National Standards Institute (ANSI) and Telecommunications Industry Association (TIA) published a set of standards in 2005 for data center design and implementation. These standards classify data centers into four different categories or tiers, rated by metrics such as uptime, investment, redundancies and level of fault tolerance.
The four tiers are:
1) Basic
- The data centers in Tier 1 carry only basic infrastructure, such as a single distribution path for power, dedicated cooling equipment and a UPS for the servers.
- Tier 1 data centers are ideally suited for office buildings or organizations that do not need immediate access to data.
- These facilities also come with the lowest server hosting cost, owing to the lack of redundancy-specific hardware.
- These facilities have bare minimum redundancy measures, such as backups, and are expected to deliver an uptime of 99.671% in a year. In case of repairs and maintenance, they’ll also have to be shut down.
2) Redundancy capable
- Tier 2 data centers are pretty similar to basic ones, with a single distribution path to servers, but they have some redundancy in the form of additional capacity components (chillers, energy generators and UPS) to support the IT load.
- This allows individual components to be taken down for repairs and maintenance, usually without any downtime.
- The annual expected uptime of these data centers is 99.741%.
3) Concurrently maintainable
- Tier 3 data centers come with redundant capacity components of Tier 2 (cooling, power, etc.) as well as two distribution paths to the servers, one of which remains active, and the other sits as an alternative.
- This way, if one distribution path goes offline for any reason, the other goes active, keeping the servers online.
- The annual expected uptime of these data centers is 99.982%.
4) Fault tolerant
- These data centers are the most capable ones, with the highest levels of redundancy across all levels of the infrastructure.
- Tier 4 data centers have at least two simultaneously active distribution paths, and multiple independent, compartmentalized and physically isolated systems to ensure fault tolerance.
- They keep servers running in the face of both planned and unplanned disruptions, and promise an expected uptime of 99.99% per year.
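The uptime percentages above translate directly into the maximum downtime each tier permits per year. A minimal sketch of that conversion in Python (the 8,760-hour year is a simplification that ignores leap years):

```python
# Maximum annual downtime implied by each tier's expected uptime.
TIER_UPTIME_PCT = {1: 99.671, 2: 99.741, 3: 99.982, 4: 99.99}

HOURS_PER_YEAR = 8760  # non-leap year

def max_annual_downtime_hours(uptime_pct: float) -> float:
    """Hours per year a facility may be down while still meeting its uptime target."""
    return (1 - uptime_pct / 100) * HOURS_PER_YEAR

for tier, uptime in TIER_UPTIME_PCT.items():
    hours = max_annual_downtime_hours(uptime)
    print(f"Tier {tier}: up to {hours:.1f} hours of downtime per year")
```

Run as written, this shows a Tier 1 facility may be down almost 29 hours a year, while a Tier 4 facility is limited to well under an hour.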
Infrastructure requirements for implementation and maintenance
To implement a data center in any of the above-mentioned tiers, the main infrastructure requirements are real estate, IT components, and power and security systems.
1) Real estate infrastructure
First of all, an organization has to ensure that the real estate facility chosen for data center operations not only offers sufficient space for IT equipment (detailed above) but also provides environmental control to handle continuous server operations — which take a lot of energy and produce a lot of heat — and keep the equipment within specific temperature and humidity ranges.
This means installing HVAC (heating, ventilation and air conditioning) solutions like computer room air handlers, chillers, air economizers and pump packages in the facility, along with variable-speed drives to control the flow of energy from the mains to the process.
2) Power infrastructure
To run around the clock, data centers also need a closely located power source that reliably provides abundant energy and can withstand disruptions, with backup generators immediately available. Further, the power infrastructure should include UPS, switchgear, busway, power meters, breakers, distribution units and transformers — basically all things that carry power seamlessly from the main units down to the IT equipment.
3) IT components
The information technology (IT) components are the data center's main technical equipment (servers, storage and so on), together with the elements that house and support them: IT racks, IT pods, power distribution units, panels, breakers and various environmental and power sensors.
4) Security system
Since data centers host loads of business-critical information and applications, organizations are also required to have a support system in place to ensure the physical security of their site from potential breaches. This means having security measures such as biometric locks, access restrictions and video surveillance in place.
In addition, companies also need to have a dedicated team available at all times to monitor data center operations and perform regular maintenance on IT and infrastructure to prevent unexpected downtime.
Top 8 best practices for data center operations and management in 2022
Once a data center is up and running, these best practices can help streamline its operations for best results in terms of performance and affordability.
1) Optimize power usage effectiveness (PUE)
A data center manager should keep a constant eye on the power usage effectiveness (PUE) of their facility — total data center power divided by the energy used just for computing — to track how much energy is being utilized to run the IT equipment (ITE) (which is doing all the work) and how much is going toward non-ITE elements such as cooling.
If the resulting figure is 1.0, then ITE is using 100% of the power and none is wasted in the form of heat. If the PUE is higher than 1.0, then some energy is going elsewhere too. For instance, if the PUE is 1.8 then, for every 1.8 watts going into the building, 1 watt is powering the ITE and 0.8 is consumed elsewhere for non-ITE. This additional energy use, once identified, could be streamlined. Google already claims that its measures have taken the PUE for all its data centers close to the near-perfect score of one.
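The PUE arithmetic above can be sketched in a few lines of Python (the sample wattages are just the figures from the example, not measurements):

```python
def pue(total_facility_watts: float, it_equipment_watts: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    1.0 is the theoretical ideal; anything above it is overhead
    (cooling, lighting, power conversion losses, etc.).
    """
    return total_facility_watts / it_equipment_watts

# The example from the text: 1.8 W drawn for every 1 W of IT load.
ratio = pue(total_facility_watts=1.8, it_equipment_watts=1.0)
overhead = ratio - 1.0  # 0.8 W of non-IT consumption per watt of IT load
print(f"PUE = {ratio:.1f}, overhead = {overhead:.1f} W per IT watt")
```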
2) Reuse the excess heat
The excess heat generated from data centers should not be let out into the environment, but recovered for secondary uses, such as heating office buildings. This saves energy, helping not only the environment but also the business. Many companies, including Meta, Amazon and H&M, have set up systems to use excess heat from their data centers.
3) Implement predictive maintenance
Data center engineers generally either schedule IT maintenance and upgrades in bulk, or react to issues when they have already occurred. This causes unexpected downtime and can prove financially costly to the organization. Instead, organizations can implement data-driven predictive analytics, where algorithms can pick up potential issues well before they occur, allowing engineers to perform maintenance only on equipment that is about to break and not on everything.
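As a toy illustration of the idea (not any production system), one could flag sensor readings that drift sharply from their recent baseline; real predictive-maintenance pipelines use far richer models and telemetry, but the principle of catching a deviation before outright failure is the same:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, z=3.0):
    """Flag readings that sit more than `z` standard deviations from the
    trailing-window baseline -- a minimal stand-in for the predictive
    signal a real system would compute from equipment telemetry."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > z * sigma:
            flagged.append(i)
    return flagged

# Steady fan-bearing temperatures with one sudden spike at index 15.
temps = [40.0, 40.2, 39.9, 40.1, 40.0, 40.3, 39.8, 40.1, 40.0, 40.2,
         40.1, 39.9, 40.0, 40.2, 40.1, 48.5]
print(flag_anomalies(temps))  # [15] -- only the spike is flagged
```

An engineer would then schedule maintenance on that one unit, rather than servicing everything on a fixed calendar.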
4) Plan and automate
Put plans in place to streamline various data center activities, including the ability to respond to issues and conduct audits if required. Conduct test drills to make sure that the response protocol is followed adequately, and also implement automation to reduce human error at different levels within the facility.
5) Decommission aging hardware
Servers and networking equipment have a set lifespan and should be decommissioned according to the schedule laid down by the manufacturer. This will ensure that only high-performing hardware is active within the data center, delivering maximum results on every bit of energy consumed.
Notably, decommissioning has to be executed by following the proper data migration protocol to ensure information safety.
6) Implement data center infrastructure management (DCIM)
With so many upgrades and changes happening every day, organizations can find it difficult to keep tabs on the latest infrastructure of their data center. However, this problem can be avoided with a data center infrastructure management system (DCIM) that can serve as a single source of truth and also visualize the entire data center with centralized records of all upgrades/improvements. DCIM solutions also include robust reporting and analytics to help enterprises assess the upgrades made and their impact.
7) Set up backups
To ensure a smooth experience for end users, make sure to include redundancies in your data center infrastructure, from multiple capacity components to distribution paths to the servers. This will ensure high uptime, even in cases of unexpected disruptions such as natural disasters.
8) Focus on modularity
Instead of overbuilding the data center right away, go for a scalable, modular infrastructure that could be enhanced as the load increases. This is crucial because technology and user needs change every few years — requiring adjustments to be made.
With the above measures, a data center can successfully handle the data and applications of enterprises of all sizes. The role of these facilities has been critical and will only grow more important as enterprise data volumes continue to explode. According to IDC, 175 zettabytes of data will be in existence by 2025. At the current average internet speed, this would take 1.8 billion years to download.
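That download-time claim is easy to sanity-check. Assuming an average connection speed of 25 Mbps (an assumption; the article does not state the speed behind the figure), the arithmetic works out:

```python
# Rough check: how long would 175 ZB take to download at 25 Mbps?
ZETTABYTE_BITS = 8 * 10**21        # 1 ZB = 10**21 bytes
total_bits = 175 * ZETTABYTE_BITS
speed_bps = 25 * 10**6             # assumed average connection speed
seconds = total_bits / speed_bps
years = seconds / (365.25 * 24 * 3600)
print(f"~{years / 1e9:.1f} billion years")  # ~1.8 billion years
```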