"Looking at the specifications for these software products, it is clear that many will be challenged to support the hardware configurations possible today and those that will be accelerating in the future," said Carl Claunch, vice president and distinguished analyst at Gartner. "The impact is akin to putting a Ferrari engine in a go-cart; the power may be there, but design mismatches severely limit the ability to exploit it."
On average, organisations get double the number of processors per chip with each generation, approximately every two years. Each generation of microprocessor, through some combination of more cores and more threads per core, turns the same number of sockets into twice as many processors. In this way, a 32-socket, high-end server fitted with eight-core chips would deliver 256 processors in 2009. Two years later, with 16 processors per socket appearing on the market, the same machine swells to 512 processors in total. Four years from now, with 32 processors per socket shipping, that machine would host 1,024 processors.
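The arithmetic behind this progression is simple multiplication, as the minimal sketch below illustrates; the 32-socket server and two-year doubling cadence are taken from the example above, while the function and variable names are ours, purely for illustration.

```python
# Illustrative sketch of the doubling arithmetic described above.
# The 32-socket server and the per-socket counts come from the example in the text.

def total_processors(sockets, processors_per_socket):
    """Total logical processors in a multi-socket server."""
    return sockets * processors_per_socket

SOCKETS = 32
for year, per_socket in [(2009, 8), (2011, 16), (2013, 32)]:
    print(f"{year}: {SOCKETS} sockets x {per_socket} per socket = "
          f"{total_processors(SOCKETS, per_socket)} processors")
# Prints 256, 512 and 1,024 processors respectively.
```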
Gartner said that organisations need to take heed of the issue because there are real limits on the ability of the software to make use of all those processors. "Most virtualisation software today cannot use all 64 processors, much less the 1,024 of the high-end box, and database software, middleware and applications all have their own limits on scalability," Mr Claunch said. "There is a real risk that organisations will not be able to use all the processors that are thrust on them in only a few years' time."
Mr Claunch said that the software that runs today's servers has both hard and soft limits on the number of processors it can effectively handle. Hard limits are often documented by the vendor or creator of the product and are therefore relatively easy to discover. They are determined by implementation details inside the software that stop it from handling more processors. For example, an operating system might use an eight-bit field to hold the processor number, giving it a hard limit of 256 processors. Soft limits, however, are uncovered only through word of mouth and real-world cases. They are caused by the characteristics of the software design, which may deliver poor incremental performance or, in many cases, yield a decrease in useful work as more processors are added.
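The eight-bit example works out as follows; the sketch below is hypothetical and is not drawn from any particular operating system, with the constant and function names invented for illustration.

```python
# Hypothetical illustration of a hard limit: if the processor number is stored
# in an eight-bit field, only 2**8 = 256 distinct processors can be addressed.
PROCESSOR_ID_BITS = 8
HARD_LIMIT = 1 << PROCESSOR_ID_BITS   # 256

def register_processor(processor_id):
    """Reject any processor number that cannot fit in the eight-bit field."""
    if processor_id >= HARD_LIMIT:
        raise ValueError(
            f"processor id {processor_id} exceeds the {HARD_LIMIT}-processor hard limit")
    return processor_id
```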
Often the soft limit sits noticeably below the hard limit, meaning that overheads and inefficiencies deliver seriously diminished value at large processor counts that are technically still within the software's supported configurations.
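One common way to picture such a soft limit is an Amdahl's-law-style calculation, in which a serialised fraction of the work caps useful speedup well before the supported processor count is reached. The sketch below is a generic illustrative model, not part of the Gartner analysis, and the 5 per cent serial fraction is an assumed figure.

```python
# Illustrative only: a simple Amdahl's-law-style model of a soft limit.
# With even 5% of the work serialised, adding processors beyond a few dozen
# yields rapidly diminishing returns, far below a 256- or 1,024-processor hard limit.
def speedup(processors, serial_fraction=0.05):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for n in (8, 64, 256, 1024):
    print(f"{n:5d} processors -> speedup {speedup(n):.1f}x")
# e.g. 8 -> 5.9x, 64 -> 15.4x, 256 -> 18.6x, 1024 -> 19.6x
```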
"There is little doubt that multicore microprocessor architectures are doubling the number of processors per server, which in theory opens up tremendous new processing power," concluded Mr Claunch. "However, while hard limits are readily apparent, soft limits on the number of processors that server software can handle are learned only through trial and error, creating challenges for IT leaders. The net result will be hurried migrations to new operating systems in a race to help the software keep up with the processing power available on tomorrow's servers."
Additional information is available in the Gartner report "The Impact of Multicore Architectures on Server Scaling." The report is available on Gartner's website at http://www.gartner.com/....