In the decades since the advent of the ‘Wintel’ platform, and in some cases earlier, organizations have relied on a single operating system instance and its associated environment per server. With the commercialization of virtualization software and its maturation into an enterprise-grade solution, organizations can now run multiple operating environments on a single machine, allowing an IT organization to greatly increase the utilization of its server investments. For organizations with large server and datacenter footprints, virtualization frequently yields a wide range of savings, from reduced facility and energy costs to substantial reductions in hardware and support spending.
Despite virtualization’s growing acceptance as a cost-effective way to deploy systems and their operating environments, many organizations still decline to adopt it because they perceive a risk in running multiple operating environments on a single server. This all-or-nothing stance, however, leaves considerable savings unrealized.
For many organizations, using virtual server environments for development and testing not only produces material hardware and support savings but also gives developers and integrators a significantly more efficient way to work. In testing, for example, a team can ‘reset’ an environment after a failed installation or test scenario with a single command, rather than restoring it through the extensive scripting and manual effort previously required.
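As a minimal sketch of that single-command reset, the snippet below reverts a VirtualBox VM to a clean snapshot. The VM name (“test-env”) and snapshot name (“clean-baseline”) are hypothetical; hypervisors such as libvirt offer an equivalent (`virsh snapshot-revert test-env clean-baseline`).

```shell
#!/bin/sh
# Illustrative only: revert a test VM to a known-good snapshot.
VM="test-env"             # hypothetical VM name
SNAPSHOT="clean-baseline" # hypothetical snapshot taken before testing began

if command -v VBoxManage >/dev/null 2>&1; then
  # Power off if running, then restore the snapshot and restart headless.
  VBoxManage controlvm "$VM" poweroff 2>/dev/null || true
  VBoxManage snapshot "$VM" restore "$SNAPSHOT"
  VBoxManage startvm "$VM" --type headless
else
  echo "VBoxManage not found; sketch shown for illustration only"
fi
```

In practice the snapshot is taken once, after the environment is provisioned and verified, so every failed test run can return to that baseline in seconds.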
By embracing virtual environments for development and testing as a first step toward virtualization, organizations eliminate many of the perceived risks of running production systems virtually. The technology is instead applied to the portion of the system lifecycle where its accepted risk profile matches the organizational need.