What's the Difference Between Virtualization and Emulation? Why It Matters


Both emulation and virtualization serve the same purpose: running another operating system in a virtual machine. However, each goes about it differently, and when virtualization is an option, it is much faster.

A question of performance

The short answer is that emulation is much slower than virtualization, and the difference boils down to hardware acceleration.

Emulation is the most basic way of running software built for a different system. An emulator takes instructions intended for the target system and translates them into something the host computer can understand and execute, which usually means emulating the target processor's opcodes and registers. A good example is running older games, like Nintendo 64 titles, on a modern PC. The PC cannot run N64 games directly, but an emulator can take the instructions intended for the N64 and reproduce their behavior as faithfully as possible.
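To make that concrete, here is a toy sketch of an emulator's core fetch-decode-execute loop in Python. The opcodes and registers are invented for illustration; a real emulator models the target CPU's actual instruction set.

```python
# Toy emulator core: fetch an instruction meant for an imaginary target
# CPU, decode it, and reproduce its effect on emulated registers.
LOAD_A, LOAD_B, ADD, HALT = 0x01, 0x02, 0x03, 0xFF  # made-up opcodes

def run(program: bytes) -> dict:
    regs = {"A": 0, "B": 0}  # the emulated registers live in host memory
    pc = 0                   # emulated program counter
    while True:
        op = program[pc]     # fetch
        if op == LOAD_A:     # decode, then execute each target opcode
            regs["A"] = program[pc + 1]  # ...as equivalent host operations
            pc += 2
        elif op == LOAD_B:
            regs["B"] = program[pc + 1]
            pc += 2
        elif op == ADD:      # A = (A + B) on the emulated 8-bit CPU
            regs["A"] = (regs["A"] + regs["B"]) & 0xFF
            pc += 1
        elif op == HALT:
            return regs
        else:
            raise ValueError(f"unknown opcode {op:#x}")

print(run(bytes([LOAD_A, 2, LOAD_B, 3, ADD, HALT])))  # {'A': 5, 'B': 3}
```

Every emulated instruction costs several host instructions in a loop like this, which is exactly why emulation is so much slower than running code natively.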

Although the term “emulation” is most commonly associated with video games, it is just as often used for professional applications. For example, you may have essential legacy software that only runs on a system like DOS. Running it in an emulator on a new server is often easier than keeping a decades-old machine alive. Emulation can also refer to software that mimics the behavior of legacy hardware, such as old network controllers.

However, emulation can be unnecessarily slow. An extremely common use case is running multiple Linux virtual machines on a host operating system. When the guest has the same architecture as the host, fully emulating the entire processor is very slow compared to native execution.

Therefore, most virtual machines use hardware-accelerated virtualization instead. On Intel processors, this technology is called Intel VT-x; on AMD, it is called AMD-V. Both accomplish the same goal of virtualizing x86 operating systems at near-native speed. If you are using a desktop computer, you may need to enable the feature in your BIOS/UEFI settings if it is not on by default.
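If you are on a Linux host, a quick sanity check is to look at the CPU flags in /proc/cpuinfo: “vmx” marks Intel VT-x and “svm” marks AMD-V. Here is a minimal sketch (Linux only; note that the flag can be present yet the feature still disabled in firmware):

```python
# Check /proc/cpuinfo (Linux) for hardware virtualization flags:
# "vmx" = Intel VT-x, "svm" = AMD-V.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x supported")
elif "svm" in flags:
    print("AMD-V supported")
else:
    print("No hardware virtualization flags found (unsupported or hidden by firmware)")
```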

Virtualization is usually managed by a hypervisor, a bare-bones operating system whose job is to run and manage multiple virtual machines. If you’re renting a VPS from a cloud provider like AWS, it’s likely running on a hypervisor such as AWS’s Nitro, Proxmox, or Hyper-V. Modern hypervisors can achieve very close to native (also called “bare metal”) performance. While there is always a bit of overhead, it’s far better than having to emulate everything.
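On a Linux host, the kernel ships its own hypervisor interface, KVM, which tools like QEMU use for hardware-accelerated virtual machines. It appears as the /dev/kvm device, so a quick availability check (a sketch, assuming a Linux system) looks like this:

```python
import os

# KVM is Linux's built-in hypervisor interface; QEMU and similar tools
# open /dev/kvm to run hardware-accelerated virtual machines.
if os.path.exists("/dev/kvm"):
    if os.access("/dev/kvm", os.R_OK | os.W_OK):
        print("KVM is available")
    else:
        print("KVM present, but no permission (try adding your user to the 'kvm' group)")
else:
    print("KVM unavailable: module not loaded or virtualization disabled in firmware")
```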

Virtualization almost always requires the guest and host to share the same architecture. For example, x86 processors from AMD and Intel can virtualize x86 operating systems like standard Windows and Linux. While it’s technically not impossible for an ARM processor to run an x86 operating system, it can’t do so through hardware virtualization, so in practice that’s usually not done.

This can be a problem, as in the case of Apple’s new ARM-based MacBooks, which run on Apple’s own M1 processors. Virtualization of x86 operating systems is not supported there. While you can still run them with programs like Parallels, this will be much slower, as it has to resort to emulation.

So, in conclusion: if you’re going to run software built for another operating system and want near-native speed, make sure you’re doing so with some form of hardware virtualization rather than emulation.

How does Docker compare?

Docker allows you to run application containers, which are isolated packages containing all the code needed to run an application. It is also quite secure: a host machine can run many Docker containers without worrying that they will escape the container or interfere with each other.

In many ways, Docker achieves the same end goal as running multiple applications in separate Linux virtual machines, but under the hood it does things a little differently.

Docker does not use emulation or virtualization. It runs all code directly on the host processor and operating system, with no virtualization overhead. To isolate containers, it makes clever use of Linux namespaces, among other kernel features, to place each process in its own “container jail.” Processes inside the jail cannot see or interact with files, processes, or system resources that are not assigned to them.
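To see one of those namespaces in action, here is a minimal sketch that calls the Linux unshare(2) syscall through ctypes (Linux only, and it needs root or CAP_SYS_ADMIN). After unsharing the UTS namespace, a hostname change is visible only inside this process and invisible to the rest of the system:

```python
import ctypes
import socket

CLONE_NEWUTS = 0x04000000  # flag for a new UTS namespace (hostname/domainname)

libc = ctypes.CDLL("libc.so.6", use_errno=True)

# Move this process into its own UTS namespace...
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed (try running as root)")

# ...so this hostname change stays inside the "jail".
name = b"container-jail"
if libc.sethostname(name, len(name)) != 0:
    raise OSError(ctypes.get_errno(), "sethostname failed")

print(socket.gethostname())  # prints "container-jail" only in this namespace
```

Real containers combine several namespaces (PID, mount, network, and so on) plus cgroups for resource limits, but the principle is the same.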

The result is a system in which multiple applications can run side by side on one host operating system, without the overhead of a separate operating system per virtual private server. For a provider like AWS, this saves a lot of money.

If you are planning to virtualize but are concerned about performance, Docker has virtually no overhead compared to running applications on bare metal. You can read our getting-started guide to learn more.

RELATED: How to package your application infrastructure with Docker
