By Frank Altschuler
May 23, 2008 02:15 PM EDT
Readers of Virtualization Journal know that virtualization provides enormous benefits to makers and users of computing platforms ranging from desktops to servers and even supercomputers. The reasons are by now obvious: cost savings through server consolidation, reduced administrative costs, and greater flexibility. Less obvious may be the degree to which virtualization can benefit deeply embedded applications such as cell phones, networking equipment, and point-of-sale terminals.
While there are similarities in some of the value propositions involved, there are also substantial differences due to the more challenging timing and resource budgets of embedded devices. Real-time processing in embedded applications puts a premium on low-latency, highly deterministic approaches to hypervisor design, while the available volatile and non-volatile memory is smaller, often by orders of magnitude, than that available in even a low-end desktop machine.
The virtualization technique most often used in enterprise computing or desktop applications is known as “full” or “native” virtualization. In this approach, each privileged instruction executed by a guest OS or application is trapped and, instead of being executed by the underlying hardware platform, is processed by software that fully emulates that hardware. This allows the greatest flexibility in hosted software, as essentially any and all software should, in theory at least, run unmodified.
Unfortunately, this approach carries a relatively large memory and processing overhead. In the enterprise space some of that overhead has been reduced by Intel’s and AMD’s inclusion of hardware virtualization support, but the system overhead is still significant. In the embedded space such hardware support is considerably less mature, and the processing headroom is typically not there. While it is typical in a desktop or server context to have ‘room for growth’ by virtue of more memory or speed than is strictly required at the time of purchase, in an embedded context that margin is more often than not labeled ‘waste’ and not tolerated.
In order to get around this issue, most commercial virtualization vendors have adopted a technique known as “paravirtualization.” In paravirtualization, the operating system and device drivers must be modified to take advantage of the characteristics of the hypervisor or Virtual Machine Monitor (VMM). In this modification, calls to hardware are replaced by API calls to the hypervisor. Since the analysis of which instructions must be managed, and just how those instructions should be managed, has all been done during the system’s design and development phase, no run-time instruction trapping or analysis is required. As a result, the performance overhead of running virtual machines in a paravirtualized system is far lower, often by orders of magnitude, than what is possible with full or native virtualization. It also means that, since the hypervisor effectively owns hardware access, isolation between different virtualized domains is much stronger, and systems can be built in a more robust fashion.
Why Should I Virtualize My Cell Phone?
I often wonder what the conversations were like years ago when microcontrollers were a new concept and customers would ask just what could be done with such a thing. Most of the now common applications such as engine controls, GPS units, and cell phones would have seemed like so much science fiction. But once the basic building blocks were well understood by designers, applications began to come out of the woodwork and the microcontroller became just another generally accepted tool, leading by stages to exactly those applications.
With virtualization we’re essentially at that same very early stage, where designers may have heard of the technology but haven’t fully internalized that they have another tool in their toolkit. The question now is more along the lines of “what can be done with lots of virtual processors?”
When looking at the architecture of a cell phone, as often as not there’s a baseband processor that runs the actual communications, and a separate applications processor that handles graphical display, multimedia, and other processing that’s not core to the phone’s basic functionality. Using virtualization, it’s very straightforward to integrate both the apps processing and the radio stack on the same physical device, saving BOM cost and considerable development time.
Another area of study is how to support handset functionality in a robust fashion and still have a degree of openness. The Open Handset Alliance’s “Android” platform attempts to answer the openness aspect, but does little to nothing to preserve the integrity of the handset, a critical issue with carriers. Using virtualization it is possible to create highly secure and independent profiles for the basic phone function and for the user, creating flexibility and preserving the integrity of the handset against malware or simple user error. The Open and Secure Terminal Initiative (OSTI) is a good example of this approach (http://www.nttdocomo.co.jp/english/corporate/technology/osti/).