Retaining desktop systems and other hardware for an additional one to three years before replacement can enable an organization to temporarily redirect investment into other areas. An ancillary benefit of delaying desktop replacement is the potential to align a refresh with the deployment of a new version of an operating system or, at a minimum, with the deployment of PCs that meet the technical requirements to run a future version.
Although not heavily advertised, many hardware vendors occasionally offer significant additional discounts based on the trade-in of existing equipment. These programs are designed to encourage earlier-than-planned technical refreshes or to displace competing products; vendors may provide set or negotiated discounts off list prices, on top of any already negotiated discount levels, if an organization agrees to turn equipment over to them. In most cases, these promotional programs can be leveraged to provide discounts that far outweigh the value of selling old and idle assets, or even the cost of acquiring systems from used hardware vendors specifically for trade-in. There are, however, several aspects of these trade-in programs that buyers should be aware of.
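The arithmetic behind that comparison can be sketched briefly. All figures below (list price, discount percentages, used-unit cost) are hypothetical and exist only to illustrate the comparison:

```python
# Hypothetical sketch: comparing a vendor trade-in discount against the cost
# of acquiring a used system specifically to turn in. All figures are assumed.

def trade_in_net_benefit(list_price, negotiated_discount,
                         trade_in_discount, used_unit_cost):
    """Per-unit net benefit of using the trade-in promotion."""
    price_without = list_price * (1 - negotiated_discount)
    price_with = list_price * (1 - negotiated_discount - trade_in_discount)
    # Savings from the extra discount, less the cost of obtaining the unit.
    return (price_without - price_with) - used_unit_cost

# Example: $10,000 list price, 20% negotiated discount, an extra 10% for
# trade-in, and $300 to buy a used system from a secondary-market vendor.
benefit = trade_in_net_benefit(10_000, 0.20, 0.10, 300)
print(benefit)
```

Under these assumed numbers, the extra discount is worth $1,000 per unit, so even paying $300 for a system to trade in leaves a net benefit.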
In the decades since the advent of the ‘Wintel’ system, and in some cases earlier, organizations relied upon the concept of a single-instance operating system and associated system environment per server. With the commercialization of virtualization software and its maturation into an enterprise-level solution, organizations can now run multiple instances of an operating environment on a single machine, enabling an IT organization to greatly increase the utilization of its server investments. For organizations with large server and associated datacenter footprints, virtualization frequently results in a wide range of savings, from reduced facility and energy costs to substantial reductions in hardware and support investments.
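A rough consolidation estimate makes the scale of those savings concrete. The server counts, consolidation ratio, and annual costs below are assumed purely for illustration:

```python
# Illustrative sketch (hypothetical numbers): estimating the annual savings
# from consolidating lightly utilized physical servers onto virtualization
# hosts that each run many operating system instances.

def consolidation_savings(physical_servers, vms_per_host,
                          cost_per_server, cost_per_host):
    """Return (hosts needed, change in annual cost) for a consolidation."""
    # Round up: a partially filled host still counts as a full host.
    hosts_needed = -(-physical_servers // vms_per_host)
    before = physical_servers * cost_per_server
    after = hosts_needed * cost_per_host
    return hosts_needed, before - after

# Example: 100 single-purpose servers at $4,000/yr each (power, space,
# support), consolidated 10:1 onto hosts costing $12,000/yr each.
hosts, savings = consolidation_savings(100, 10, 4_000, 12_000)
print(hosts, savings)
```

With these assumed figures, 100 servers collapse onto 10 hosts and annual run costs drop from $400,000 to $120,000; real consolidation ratios vary widely with workload utilization.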
Efficient IT organizations do not rely solely on a single individual, such as the CIO, to determine where and how the organization should invest in technology to best meet the needs of the organization. Instead, mature organizations typically leverage a group of individuals from both the organization’s lines of business and the IT organization to work in concert to direct, at the highest levels, where the organization should invest for maximum return or impact. This approach, frequently referred to as IT governance or steering, is intended to control and direct IT investments to their highest value and ensure the organization’s requirements are effectively met.
The use of outside vendors and resources is another major IT spending lever available to organizations. When organizations use outside resources, technical capability, or capacity, they frequently realize savings in several areas. First, when using outside resources to supply certain skills, an organization may save by paying only for the level of effort required, instead of paying for one or more full-time employees. Second, an organization can take advantage of deeper skill sets, which can result in savings through reduced time to complete tasks or higher-quality outputs. Additionally, organizations can save by leveraging shared administrative and infrastructure resources when using services such as hosting, where resources and capabilities are spread across multiple customer bases.
From our perspective, cutting IT costs is truly part art, part science. While many great ideas for reducing IT spending exist, ultimately the ability to increase efficiency is typically based on five fundamental factors.
1. Leadership’s Level of Passion and Commitment to Efficiency
When leadership is apathetic about efficiency, the organization as a whole is typically apathetic as well. This lack of passion is often acceptable, especially in businesses with substantial product or service margins, where time and energy may be better spent on areas other than squeezing every last nickel out of operations and administrative dollars.
Public sector IT organizations that use a working capital fund (WCF) to drive funding based on usage metrics from customer organizations (typically other offices within the agency) have a leg up on other IT organizations across the government that use more traditional fixed-cost budget approaches. By assigning IT costs based on usage or demand, using metrics such as numbers of ‘seats’, storage used, and web pages hosted, these organizations are often better positioned to provide agency leadership with the basic tools to make better decisions about controlling IT service demand and cost. The approach can simplify and clarify IT costs and more directly and explicitly link demand with the costs associated with supply.
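The mechanics of that usage-based linkage can be sketched in a few lines. The rate card and office usage figures below are assumptions invented for illustration, not real WCF rates:

```python
# Hypothetical sketch of a working-capital-fund style chargeback: allocate
# IT costs to customer offices according to usage metrics such as seats,
# storage consumed, and web pages hosted. All rates and usage are assumed.

RATES = {"seat": 1_200.0, "storage_gb": 0.50, "web_page": 25.0}  # annual rates

def charge(usage):
    """Bill one customer office based on its usage metrics."""
    return sum(RATES[metric] * qty for metric, qty in usage.items())

offices = {
    "Office A": {"seat": 250, "storage_gb": 10_000, "web_page": 40},
    "Office B": {"seat": 60, "storage_gb": 2_000, "web_page": 5},
}
bills = {name: charge(usage) for name, usage in offices.items()}
print(bills)
```

Because each office's bill rises and falls with its own consumption, leadership can see exactly which demand is driving which cost, which is the explicit supply-and-demand link the paragraph describes.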
I just got off a call with several federal acquisition officials asking about ways to improve their integrators’ performance, which led me to think about how the public sector market for services could be improved. From my perspective, the market’s characteristics, including its somewhat fixed ‘supply of demand’, regulations, and common contract structures, lead me to conclude that the ideal contract structure for most larger contracts is multi-award. There’s no denying that the private sector is motivated and driven by a desire to achieve financial profit, and the only meaningful way to sustain quality and value over the life of bigger and longer contracts is to keep the threat of losing revenue real over the entire period of performance.
If you equate an average resource's value to 1X a full-time equivalent (FTE), many people believe a good resource is somewhere between 25-50% more effective and a great resource is perhaps as much as two times more effective. But if you look at various skills and job types where the work is intellectually based, what you'll likely find is that a good resource is actually 3X or more as effective as an average one, and a great resource is 10X or more as effective. How can this be?
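Whatever the explanation, the cost implications of these multiples are easy to work through. The salary figures and premiums below are assumptions chosen only to illustrate the arithmetic:

```python
# Sketch of the cost-per-output arithmetic implied by the multiples above.
# Salaries and premiums are hypothetical illustration values.

def cost_per_unit_output(salary, effectiveness):
    """Cost of one 'average-FTE unit' of output from a resource."""
    return salary / effectiveness

average = cost_per_unit_output(100_000, 1.0)   # baseline: 1X average FTE
good = cost_per_unit_output(125_000, 3.0)      # 3X resource at a 25% premium
great = cost_per_unit_output(150_000, 10.0)    # 10X resource at a 50% premium
print(average, good, great)
```

Under these assumed numbers, even a 50% salary premium for a 10X resource buys output at $15,000 per average-FTE unit versus $100,000 for an average hire, which is why the multiples matter far more than the premiums.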