25 November 2019 Reading time: 10 minutes

A brief history of virtualization, or why divide anything at all

Victoria Fedoseenko

Content manager

ISPSystem

Let's go back a few decades to better understand the value of virtualization.

A queue for the computer

Imagine for a second: it is the early 1960s, a large scientific institute with several hundred employees, and only one computer for the whole institute. If a scientist needs to compute something, they write a program and bring it to the operator who works with the computer. The operator runs the program and brings the results back to the scientist.

Computing machines were rare, slow, and very expensive back then. Programmers themselves did not have access to them, because individual work was considered inefficient: while a programmer entered data or paused to think about the problem, the machine sat idle, which was unacceptable. To prevent this idle time, programmers were kept away from the computers.

A few years later, scientists realized that efficiency would increase if more than one user could work with the computer. While one user was entering data, the computer would process the tasks of other users, filling the pauses and minimizing idle time. This idea was called time-sharing.

Time-sharing means that computing resources are shared among many users. The concept appeared in the early 1960s and led to many revolutionary changes, including the emergence of virtualization.

ENIAC, the first electronic computing machine

High-speed task switching

At first, the idea of time-sharing was implemented literally: the processor switched between tasks only during I/O operations. While one user was thinking or entering data, the computer processed the tasks of another user.

Hardware became more powerful, and the tasks of a handful of users were no longer enough to fill its capacity. So the processor was taught to switch between tasks more often. Each task received a "quantum" of time during which the processor worked on it. If one quantum wasn't enough to complete the task, the processor switched to another task and returned to the first one at its next quantum. The switches happened so quickly that each user could believe they had the machine to themselves.
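To make the idea of time quanta more concrete, here is a minimal sketch of round-robin scheduling in Python. The task names, work units, and quantum size are invented purely for illustration; real schedulers of that era were, of course, part of the operating system and written in assembly.

```python
from collections import deque

# Each task is (name, remaining_work_units); the values here are made up.
tasks = deque([("user_a_report", 5), ("user_b_payroll", 3), ("user_c_model", 7)])

QUANTUM = 2  # how much work the processor does on a task before switching

while tasks:
    name, remaining = tasks.popleft()
    work_done = min(QUANTUM, remaining)
    remaining -= work_done
    print(f"ran {name} for {work_done} unit(s), {remaining} left")
    if remaining > 0:
        tasks.append((name, remaining))  # not finished: back to the end of the queue
    else:
        print(f"{name} finished")
```

Because the quantum is short, every waiting task gets a turn over and over again, which is what creates the illusion of having the whole machine to yourself.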

The first implementation of the time-sharing concept was the Compatible Time-Sharing System (CTSS), developed at the Massachusetts Institute of Technology in the early 1960s. It was compatible with the Fortran Monitor System and ran on an IBM 7090 mainframe.

By the way, around the same time a time-sharing system was also built at Dartmouth College (where BASIC was invented). The scientists even managed to sell it, although it did not become widespread.

IBM 7094 console. There were no displays or input devices we are used to, only buttons and indicator lights. ©️ ArnoldReinhold
IBM 7094 console, two magnetic tape drives, and a punch card reader. This computer took up a whole room. ©️ NASA Ames Research Center

From time-sharing to virtualization

One of the first full-fledged operating systems to support time-sharing was Multics, a predecessor of the Unix family. Both Multics and the Dartmouth College system found practical use, although they were far from perfect: slow, unreliable, and insecure. Scientists wanted to improve them and even knew how, but the capabilities of the hardware stood in the way. Overcoming those limits was hard without support from manufacturers, and after some time the manufacturers joined the work as well.

In 1968, IBM created a new mainframe that supported the CP/CMS system, developed together with scientists in Cambridge. It was the first OS to support virtualization. CP/CMS was built around a virtual machine monitor, also known as a hypervisor.

The hypervisor runs directly on the hardware and creates several virtual machines on top of it. This approach is much more convenient than plain time-sharing, and here's why (a minimal modern-day sketch follows the list):

  1. Virtual machines share the mainframe's resources instead of taking turns using it, so efficiency is higher;
  2. Every VM is an exact copy of the underlying hardware, so you can run any OS on each machine;
  3. Every user has their own OS, so they don't affect other users, and the entire system becomes more reliable.
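The same idea is easy to see in today's hypervisors. Below is a minimal sketch that asks a modern Linux hypervisor (QEMU/KVM through the libvirt Python bindings) to define and start a virtual machine. This is only an illustration of the general principle, not CP/CMS or VMmanager code: the domain name and sizes are invented, the host is assumed to have KVM and libvirt-python installed, and a real definition would also include a disk image and a boot device.

```python
import libvirt  # assumes the libvirt-python bindings and a local QEMU/KVM hypervisor

# Minimal, illustrative domain definition; a real VM would also need a disk and boot device.
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the hypervisor on this host
domain = conn.defineXML(DOMAIN_XML)     # register the virtual machine
domain.create()                         # power it on
print([d.name() for d in conn.listAllDomains()])
conn.close()
```

Each machine defined this way gets its own virtual CPUs, memory, and devices and can boot its own operating system, which is exactly the property described in the list above.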

After some time, CP/CMS was improved, renamed, and put on sale. It became the basis for the VM/370 operating system, which ran on one of IBM's most popular mainframes, the System/370.

Such systems already looked much more familiar to a modern user: terminals connected to a mainframe. The mainframe was a big, powerful computer, while terminals were devices with a screen and a keyboard. Users worked at terminals to enter data and submit tasks, and the mainframe processed them. One mainframe could serve tens or even hundreds of terminals.

IBM 3270 terminal released in 1972. It looks like a computer, but the impression is deceptive: it's just an input/output device, and all computing happens on the mainframe. ©️ Jonathan Schilling

Sunset and dawn of virtualization technologies

The personal computer boom of the late 20th century affected not only the mainframe industry but also the development of virtualization technologies. As long as computers were expensive and bulky, companies preferred a mainframe with terminals because they could not afford to buy a computer for every employee.

Over time, computers became smaller and cheap enough for private companies, and in the 1980s personal computers replaced terminals. Along with mainframes, virtualization technologies faded into the background, but not for long.

Affordable computers spread widely. Operating systems became more functional but less reliable: an error in one application could crash the OS and affect other applications as well.

To improve stability, system administrators allocated one machine per application. Stability increased, but so did equipment costs. This is when virtualization came back onto the scene: instead of allocating several dedicated computers for the same work, it allowed several independent virtual machines to run on one physical computer.

As operating systems evolved, there was also a need to run one OS inside another, and virtualization solved this task as well. In 1988, SoftPC was released: software that allowed running MS-DOS and Windows applications on other operating systems. A few years later, Virtual PC appeared, making it possible to run other operating systems on Windows.

The development of the hosting industry became another reason to revive virtualization technologies.

Server virtualization

The evolution of the Internet led to a true rise in virtualization technologies.

At first, companies hosted their websites on their own servers. Buying the equipment wasn't a problem for big, successful companies. But in time, smaller companies and individuals also needed websites. They didn't have the money to buy their own servers, so other companies started renting equipment out to them. That was the beginning of the hosting industry.

In the early years, hosting providers offered either disk space on an FTP server or an entire server. With FTP hosting, websites of different users were kept in different folders of the same computer, which made it unreliable and insecure, while renting an entire server was expensive.

Virtual servers were a good alternative: inexpensive and almost as reliable as dedicated servers.

Server virtualization solutions began to evolve. In the early 2000s, VMware introduced its product for x86 servers, ESX Server. Within a few years many other solutions appeared: Xen, OpenVZ, and others. Virtualization technologies formed the basis of cloud technologies, but we will talk about them in another article.

To simplify virtualization management for hosting providers and owners of private infrastructure, virtualization control panels began to appear. In 2003, ISPsystem released VMmanager.

A modern data center. It can host part of the data of one large online store like Amazon, or millions of small sites. © Switch

VMmanager — a modern control panel for virtualization management

The list of virtual machines in VMmanager

VMmanager makes virtualization technologies available to anyone. You can create virtual machines with Linux or Windows and resell them to your clients or use them for your own needs.

Hosting providers use VMmanager to automate VPS provisioning. Web developers, administrators, software development companies, other commercial organizations, and freelancers use it to create isolated virtual machines.

The panel has a neat and convenient interface that lets users automate or speed up routine operations.

Other advantages of VMmanager:

  1. All-in-one solution: the panel can replace the console, equipment inventory spreadsheets, and diagnostic and monitoring tools.
  2. Simple and convenient interface: users can easily create VMs of the desired configuration and manage the whole virtual environment.
  3. Task management and troubleshooting: if you encounter a problem, it is easy to find the cause in the Task Log.

Feel free to run the demo to evaluate the capabilities of VMmanager. If you want to test it on premises, you can download and install it on your server. 30 days for free!


Request VMmanager demo