You may not be able to kick sand into the faces of those wimpy servers in the data center much longer. The wimps are on the rise, even though they won’t take over the beach any time soon.
Over the last couple of years, a debate has raged in the server world over the relative merits of the brawny vs. the wimpy. The strong vs. the less powerful. The fast vs. the not-so-fast. In one corner we have traditional server processors like the Intel® Xeon®, which keep getting faster and denser but come surrounded by lots of chips, chipsets and memory. They consume lots of power, throw off lots of heat and take up lots of space. In the other corner are smaller, simpler processors like the Intel Atom® and some new ARM® cores. They may be somewhat slower, but they consume a heck of a lot less power and take up significantly less space. The wimpy adherents say the goal is to consume 5 watts instead of the 500 consumed by their brawny brethren.
Why is this happening? The equations are changing for operators of big data centers like those maintained by Google, Facebook, Microsoft, various financial institutions, large retail operations and other organizations. Increasingly, data centers look beyond raw performance figures to examine performance per watt per dollar. What they’re finding is that ever-faster performance doesn’t always make up for the cost of increased power consumption, rent for larger facilities and higher air-conditioning bills.
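To make that metric concrete, here is a minimal sketch of the performance-per-watt-per-dollar comparison. The throughput, power and price figures below are hypothetical placeholders, not published benchmarks; they only illustrate how a node that is much slower in absolute terms can still come out ahead once power and cost land in the denominator.

```python
# Illustrative comparison of performance per watt per dollar.
# All figures are hypothetical placeholders, not published benchmarks.

def perf_per_watt_per_dollar(requests_per_sec, watts, cost_dollars):
    """Normalize raw throughput by power draw and hardware cost."""
    return requests_per_sec / (watts * cost_dollars)

# A single "brawny" node: high throughput, high power, high price.
brawny = perf_per_watt_per_dollar(requests_per_sec=50_000, watts=500, cost_dollars=5_000)

# A single "wimpy" node: a fraction of the throughput at a fraction
# of the power and price.
wimpy = perf_per_watt_per_dollar(requests_per_sec=4_000, watts=5, cost_dollars=400)

print(f"brawny: {brawny:.3f} req/s per watt per dollar")  # 0.020
print(f"wimpy:  {wimpy:.3f} req/s per watt per dollar")   # 2.000
# The wimpy node is far slower in absolute terms, yet its normalized
# figure comes out well ahead, which is the whole argument.
```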
Onto the scene strolls the concept of wimpy servers (also known as micro-servers), which, by the way, brings with it a new acronym that could justify the concept of wimpiness on its own – FAWN, for fast array of wimpy nodes. Nice! I’ll save the Bambi jokes for later.
But a good acronym is no guarantee of success. Huge and not-so-huge arrays of wimpy processors have technical challenges of their own. Still, for certain applications, they work nicely. Mozilla, for example, has deployed a wimpy server farm – excuse me – an x86-based FAWN to serve Firefox downloads. The FAWN vendor SeaMicro (http://www.seamicro.com/sites/default/files/MozillaCaseStudy.pdf) claims Mozilla now uses one-fifth the power per request and that its Atom-based system takes up one-fourth the space. And big OEMs like HP are experimenting with ARM-based systems from companies like Calxeda: http://www.wired.com/wiredenterprise/2011/10/hp-arm-servers/.
Still, questions remain. From our perspective, it will be critical to validate and assure a high level of signal integrity on the inter-processor connections in a FAWN. And how does one monitor and measure signal integrity in a dense array of multiple processing units where physical access to those connections is severely limited or nonexistent? Sounds like a job for non-intrusive, software-driven high-speed I/O validation, test and debug.