25 October 2019 Reading time: 7 minutes

DCIM: software for server and data center management

ISPSystem
The largest data center in the world — SUPERNAP 8. © Switch

The largest data center in the world stands not far from Las Vegas; its site could hold almost 250 football fields. And the tallest data center occupies a 32-story skyscraper in New York.

Data centers of comparable scale are appearing in every region of the planet. Special software helps to manage them: it monitors the condition of the hardware and even predicts problems. But it wasn’t always that way.

From vacuum tube computers to the dot-com bubble

The history of data centers begins in 1945, when the USA built ENIAC, the first supercomputer. Magnetic-core memory banks, redundant power supplies, guarded racks and cabling — all of it bears a resemblance to modern data centers.

The Soviet Union answered with MESM. That machine was built on vacuum tubes and occupied a 60 sq. m room, which got so hot that part of the roof had to be removed for cooling. It performed roughly three thousand operations per minute — millions of times fewer than a modern smartphone.

Small Electronic Calculating Machine. © Balalaika24

The technology advanced, but for a long time computers were used only for military and space purposes: calculating the ballistics of nuclear weapons and the fuel consumption of space rockets. Later the technology moved into civilian use, and the first company to take it there was IBM.

In 1960, IBM built SABRE, a reservation system for American Airlines. Computers in dozens of airports were networked to a “server room” in the USA, with the data center itself placed underground. In 1964, SABRE successfully tested the IBM System/360 mainframe — the first fault-tolerant server for mission-critical systems.

IBM System/360. © Ben Franske

Mainframes were used for data storage and processing until the ‘80s. They occupied large areas in institutes and commercial organizations and were still considered “a compact solution”.

The Unix and PC servers we know today appeared only in the ‘80s and spread later, once the Internet began to develop. The dot-com boom of that era also shaped the data storage market: data centers used to be built by large companies only, but in the ‘90s everyone wanted in.

The commercial data center industry grew to the point where standardization was required: in 1987, the Uptime Institute was established. It set the data center reliability standards now accepted worldwide — the Tier levels.

Not everyone could afford their own infrastructure, so server rental was in demand. By the beginning of the 2000s, the first hosting providers had started to appear.

From manual to automated control

It seems obvious that automated computing hardware should be managed by automated software. Nonetheless, this wasn’t always the case.

In the early days, a single supercomputer required 20 people to keep it running. When servers became widespread, the position of system administrator appeared. Administrators powered servers on and rebooted them, installed and reinstalled operating systems, kept hardware records in paper journals and Excel spreadsheets, and monitored energy consumption with scattered sensors.

As the industry developed, automatic management systems for data centers — both the building itself and the hardware inside — started to appear. The automated dispatch control systems (ADCS) common at the time were adapted to manage the buildings. To manage the servers, software had to be created from scratch.

Nowadays the software has become much more powerful and functional, but the task-based separation has remained. Data center software is divided into two groups: BMS and DCIM.

A building management system (BMS) manages the building itself. It is needed so that the hardware inside the data center can run without interruption. A BMS monitors power supply, temperature, and humidity, and controls security; power feeds, ventilation and heating, and fire and security alarms are all under its control. Separate systems help to save on electricity by optimizing the load.

Data center infrastructure management (DCIM) is a system that helps to manage the hardware installed in the data center: servers, switches, routers. It is needed to prepare servers for work, configure network hardware, collect statistics, and keep records. A good DCIM can recognize problems and notify you about them.

DCIM is used by engineers of commercial data centers and by private infrastructure owners to manage their own or rented servers.

DCImanager is a universal solution for providers and IT infrastructure owners

There are many DCIM solutions. Despite the shared name, each has its own specialization: some focus on monitoring and inventory, others on hardware management, and the rest cover all needs. We recommend DCImanager, and here is what it is capable of.

We released the first version of DCImanager in 2007, when this kind of software was only beginning to develop. The new version came out in 2019.

A screenshot of the DCImanager 6 interface — the new version of the server management panel

DCImanager helps to prepare a server for work, manage it, monitor its condition, and arrange a warranty repair or replacement if it breaks down.

Preparation
When DCImanager finds a new server, it runs diagnostics: it determines the server’s characteristics, checks its performance, and configures IPMI. It removes old data if necessary. Then it installs the OS and other required software: the OS from templates, the rest from recipes.
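Under the hood, this stage is essentially a pipeline with a fixed order of steps. The sketch below is a minimal illustration of that pipeline in Python; the Server structure and all function names are our own assumptions for this example, not DCImanager’s actual API.

```python
# A minimal, hypothetical sketch of the preparation pipeline described above.
# The Server structure and all function names are illustrative assumptions,
# not DCImanager's real API.
from dataclasses import dataclass, field

@dataclass
class Server:
    ipmi_address: str
    os_template: str                              # e.g. "ubuntu-22.04"
    recipes: list = field(default_factory=list)   # extra software to install

def run_diagnostics(server): print(f"{server.ipmi_address}: detecting CPU, RAM, disks")
def configure_ipmi(server):  print(f"{server.ipmi_address}: configuring IPMI access")
def erase_disks(server):     print(f"{server.ipmi_address}: wiping old data")
def install_os(server):      print(f"{server.ipmi_address}: installing {server.os_template}")
def apply_recipe(server, r): print(f"{server.ipmi_address}: applying recipe {r}")

def prepare_server(server, wipe_old_data=True):
    run_diagnostics(server)         # determine characteristics, check health
    configure_ipmi(server)          # so the server can be managed out of band
    if wipe_old_data:
        erase_disks(server)         # remove the previous tenant's data
    install_os(server)              # the OS comes from a template
    for recipe in server.recipes:   # everything else comes from recipes
        apply_recipe(server, recipe)

prepare_server(Server("10.0.0.15", "ubuntu-22.04", ["docker", "monitoring-agent"]))
```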

Providers integrate DCImanager with the platforms they use to sell services. Thanks to that, delivery is almost fully automated: a client orders hardware, and DCImanager powers on a server, checks it, and installs the required software. The provider only has to configure the integration once; after that, everything works without their involvement.
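The flow itself is easy to picture. Here is a rough, standalone sketch of the order-to-delivery step, assuming the billing platform calls a handler when an order is paid; the order format, the server pool, and the provision() helper are made up for illustration and stand in for the preparation pipeline sketched above.

```python
# A rough, hypothetical sketch of automated delivery after a paid order.
# The order format, pool, and provision() helper are illustrative assumptions.

FREE_POOL = ["10.0.0.16", "10.0.0.17"]            # IPMI addresses of free servers

def provision(ipmi_address, os_template, recipes):
    # Stand-in for the preparation pipeline sketched above.
    print(f"{ipmi_address}: installing {os_template}, applying {recipes}")

def handle_paid_order(order):
    ipmi_address = FREE_POOL.pop()                 # pick free hardware for the plan
    provision(ipmi_address, order["os_template"], order.get("recipes", []))
    print(f"order {order['id']}: access to {ipmi_address} handed to the client")

handle_paid_order({"id": 42, "os_template": "debian-12", "recipes": ["nginx"]})
```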

Management
It is convenient to manage a server through DCImanager: reboot it, reinstall the OS, assign and delete IP addresses. Apart from that, in DCImanager you can:

  1. work with network hardware: configure ports and poll their status, save configuration files, assign VLANs, and set port speed and behavior;
  2. diagnose and restore a server through the Intelligent Platform Management Interface (IPMI), even if the server itself is down (see the sketch after this list);
  3. enable and disable power distribution units (PDUs) and control the load.
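Point 2 relies on IPMI, the out-of-band controller built into most server motherboards. Outside of any panel, the same operations can be reproduced with the standard ipmitool utility; the sketch below simply wraps a couple of its commands in Python, with the host and credentials as placeholders.

```python
# A minimal sketch of out-of-band server control over IPMI using the standard
# ipmitool CLI (assumed to be installed). Host and credentials are placeholders;
# a DCIM panel performs equivalent calls behind its UI.
import subprocess

IPMI_HOST, IPMI_USER, IPMI_PASS = "10.0.0.15", "admin", "secret"

def ipmi(*args):
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", IPMI_HOST, "-U", IPMI_USER, "-P", IPMI_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))   # is the server on or off?
print(ipmi("sensor", "list"))               # temperatures, fan speeds, voltages
ipmi("chassis", "power", "cycle")           # hard reboot even if the OS is frozen
```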

Monitoring
DCImanager collects statistics on traffic, power consumption, and temperature. If a switch or router goes down, if power draw or rack temperature exceeds a threshold, or if a server fails for any reason, the panel notifies the administrator. The new DCImanager version will also be able to analyze the internal state of a server: memory, hard drive, CPU.
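The logic behind these notifications boils down to comparing collected metrics against thresholds. Below is a toy sketch of that idea; a real DCIM gathers the values over SNMP, IPMI, and agents rather than from a hard-coded dictionary, and the thresholds here are only examples.

```python
# A toy sketch of threshold-based alerting, assuming the metrics have already
# been collected (e.g. over SNMP or IPMI). Threshold values are examples only.

THRESHOLDS = {
    "rack_temperature_c": 27.0,   # upper bound of the commonly recommended range
    "rack_power_kw": 5.0,
}

def check(metrics):
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"{name} = {value} exceeds threshold {limit}")
    return alerts

for message in check({"rack_temperature_c": 29.5, "rack_power_kw": 3.2}):
    print("ALERT:", message)      # in a real panel this becomes a notification
```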

Inventory
It is convenient to keep hardware records in the panel: its characteristics, when it was bought and at what price, and who the supplier is. If a server fails, it is immediately clear whether it is time to buy a new one or to replace it under warranty.
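As an illustration, an inventory record might look like the sketch below. The fields mirror the ones listed above; this is a hypothetical example, not DCImanager’s actual data model.

```python
# A hypothetical inventory record mirroring the fields mentioned above;
# not DCImanager's actual data model.
from dataclasses import dataclass
from datetime import date

@dataclass
class HardwareRecord:
    model: str
    serial_number: str
    supplier: str
    purchase_date: date
    purchase_price: float
    warranty_until: date

    def under_warranty(self, today):
        # If the unit fails, this tells us whether to claim warranty or buy a new one.
        return today <= self.warranty_until

record = HardwareRecord(
    model="Dell R640", serial_number="SN-001234", supplier="Example Supplier Ltd.",
    purchase_date=date(2019, 3, 1), purchase_price=2900.0,
    warranty_until=date(2022, 3, 1),
)
print(record.under_warranty(date(2019, 10, 25)))   # True — still covered by warranty
```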