I've been working on some projects recently that depend heavily on virtualization. It works very well and there are certainly advantages: you can resize things with minimal effort, and there is even a level of redundancy built in. Every instance can have its own virtual host (no more stacking multiple E-Business Suite instances on a single box!).
Of course, there is a downside too. The "old way" of stacking multiple environments on a single box was somewhat self-limiting; with virtualization, it takes more discipline to prevent "instance creep" (where everybody ends up with their own private instance) on a project.
But some of this has me wondering. Manageability benefits aside, the primary selling point behind virtualization is "more efficient use of hardware".
When we size a system on traditional hardware, we size it for the busiest day of the year and then add a fudge factor to account for anticipated growth. The end result is a system that runs at 30-40% CPU utilization and maybe 50-60% memory utilization most of the time. The business views this as waste.
The solution being sold for this problem is virtualization. You can run virtual machines that are sized smaller and scale dynamically to handle growth or those busy days. The basic idea is that some systems need more CPU and memory today while others will need more tomorrow, so in aggregate you (theoretically) need fewer CPUs, operating closer to 100% utilization, and less RAM across the enterprise.
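To make that consolidation math concrete, here is a back-of-envelope sketch in Python comparing "size every box for its own peak plus a fudge factor" against a shared pool sized for the aggregate peak. The workload names, core counts, fudge factor, and peak-overlap figure are all made-up assumptions purely for illustration, not numbers from any real project.

```python
# Back-of-envelope consolidation math with hypothetical numbers.
# Each workload is traditionally sized for its own busiest day; since the
# peaks rarely coincide, a shared pool can be sized for the aggregate peak.

workloads = {
    # name: (typical CPU cores in use, busiest-day CPU cores) -- made-up values
    "erp_prod":  (12, 32),
    "erp_test":  (4, 16),
    "reporting": (8, 24),
    "batch":     (6, 20),
}

fudge_factor = 1.25  # hypothetical growth allowance, applied in both cases

# Traditional approach: every box gets its own peak times the fudge factor.
dedicated_cores = sum(peak * fudge_factor for _, peak in workloads.values())

# Virtualized approach: assume (optimistically) that only ~60% of the
# individual peaks ever overlap, and size the pool for that aggregate.
overlap = 0.60  # hypothetical; the whole argument hinges on this number
pooled_cores = sum(peak for _, peak in workloads.values()) * overlap * fudge_factor

typical_cores = sum(typ for typ, _ in workloads.values())

print(f"Dedicated servers: {dedicated_cores:.0f} cores "
      f"({typical_cores / dedicated_cores:.0%} typical utilization)")
print(f"Shared pool:       {pooled_cores:.0f} cores "
      f"({typical_cores / pooled_cores:.0%} typical utilization)")
```

With these invented numbers the pool needs roughly 40% fewer cores and runs at noticeably higher typical utilization. Whether that saving is real depends entirely on how much the peaks actually overlap, and on what the shared platform itself costs, which is the question the rest of this post is circling.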
This is all predicated on an assumption that I am beginning to think either is flawed or has simply changed: that "hardware is expensive and must be used efficiently".
The truth is, hardware costs continue to fall even as
compute power increases. Moore's Law is
very much alive.
Contrast this with the proposed solution: large-scale engineered systems (we're using a vBlock from VCE on this project) at extremely high cost. These systems bring their own management challenges, personnel and licensing costs, licensing complications (we're in an Oracle world, remember?), and even technology challenges. How much is it going to cost to upgrade these systems when they become old and slow? (Moore's Law strikes again.)
So, to me, this raises the question: which is more expensive? Individual servers with "wasted" capacity? Or the solution we're deploying to solve that "problem"?