Everyone knows that Intel is the most successful chip maker in the world. From its early designs, when there was no real competition; through the 80286 era, when demand was so high that Intel licensed AMD to sell identical parts under the AMD name; to today, when the company commands the great majority of its market. The only computer-related company with a bigger share of its own market is Microsoft, whose hegemony runs in the 90-percent range while Intel’s sits merely in the 80s.

So when one discusses downturns at Intel, there just are not many, and they rarely get air time. Failures are seldom talked about, and the latest one is not what you would call spectacular, but measured against everything else Intel has done, it really could qualify.

Although there was lots of talk, there were never any shipping parts, so the failure is one of time spent in development, not something that embarrassed the company once in the hands of the public.

[PC Perspective]

By now we should all know the story about Larrabee and Intel’s failed venture into the world of professional and consumer discrete graphics.  Early in 2007 Intel announced plans to deliver a fully programmable GPU architecture built around a collection of small x86 cores with some custom texture hardware.  We saw progression through the year and into 2009 that culminated with a very underwhelming set of "demos" and ray tracing examples.  Just a couple of months later, the project was "delayed" – a word that big companies like Intel use rather than "cancel" in order to prevent frightening the investors.

At the Intel Developer Forum this month, one of the key people behind Intel’s attempt to target NVIDIA and AMD’s discrete GPU business, Tom Piazza, shared some more insight into what went wrong. As many people inside the competition theorized during Larrabee’s development, Tom said, "I just think it’s impractical to try to do all the functions in software in view of all the software complexity. And we ran into a performance per watt issue trying to do these things."

It seems that even Intel now realizes that fixed-function hardware still has a place in computing and that x86 isn’t everything for everyone.  "Naturally a rasterizer wants to be fixed function.  There is no reason to have the programming; it takes so little area for what it does relative to trying to code things like that."
When asked pointedly if he ever expected a Larrabee-based graphics solution to be released in the future, he stated simply, "I don’t think so."
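To see why a rasterizer "wants to be fixed function," consider what rasterizing in software actually costs. Here is a minimal sketch in C of the standard edge-function test a software rasterizer must run for every pixel of every triangle; the tiny framebuffer and the triangle coordinates are made-up values for illustration, not anything from Larrabee. Dedicated rasterizer silicon does this arithmetic in a sliver of die area, while on Larrabee’s x86 cores every one of those multiplies cost instructions, and therefore watts.

    #include <stdio.h>

    /* Signed area test: positive when point (cx, cy) lies to the left
     * of the directed edge (ax, ay) -> (bx, by). */
    static int edge(int ax, int ay, int bx, int by, int cx, int cy)
    {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    int main(void)
    {
        /* Made-up 8x8 "framebuffer" and one counter-clockwise triangle. */
        int x0 = 1, y0 = 1, x1 = 7, y1 = 2, x2 = 3, y2 = 7;

        for (int y = 0; y < 8; y++) {
            for (int x = 0; x < 8; x++) {
                /* Per-pixel point-in-triangle test, repeated for every
                 * pixel and every triangle: exactly the work a
                 * fixed-function rasterizer does in dedicated silicon. */
                int inside = edge(x0, y0, x1, y1, x, y) >= 0
                          && edge(x1, y1, x2, y2, x, y) >= 0
                          && edge(x2, y2, x0, y0, x, y) >= 0;
                putchar(inside ? '#' : '.');
            }
            putchar('\n');
        }
        return 0;
    }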

This is similar to the shift in thinking at the time of the 486, when we were told it would be one of the last CISC (complex instruction set computing) chips Intel would offer, to be supplanted by a RISC (reduced instruction set computing) era. What was found early on, and later proven conclusively with the NetBurst architecture of the Pentium 4, was that a wall was being hit: chips bled power and produced great heat, so an endless climb into the stratosphere of ultra-high-speed computing was simply not possible.

That is why the Fermi GPUs have had such problems. Heat kills. It makes chips run less efficiently than they otherwise would, and it is why fixed-function hardware (as they call it above), much like CISC, is working for graphics, versus a RISC approach where lots of little instructions are executed very, very quickly. The latter is what Intel attempted with Larrabee, and you might say the company never learned the lesson of NetBurst.
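The arithmetic behind that wall is worth spelling out. Dynamic power in CMOS logic scales roughly as P = C × V² × f, and higher clock speeds generally demand higher voltage, so power (and heat) climbs much faster than frequency does. Below is a back-of-the-envelope sketch; the capacitance and voltage numbers are placeholders chosen to land in a familiar wattage range, not measured Pentium 4 or Larrabee figures.

    #include <stdio.h>

    /* Classic CMOS dynamic-power estimate: P = C * V^2 * f.
     * The capacitance and voltage figures below are illustrative
     * placeholders, not measured data for any real chip. */
    static double dyn_power(double cap_farads, double volts, double hertz)
    {
        return cap_farads * volts * volts * hertz;
    }

    int main(void)
    {
        double cap = 2.0e-8; /* assumed effective switched capacitance */

        /* Doubling frequency alone doubles power... */
        printf("2 GHz @ 1.2 V: %6.1f W\n", dyn_power(cap, 1.2, 2.0e9));
        printf("4 GHz @ 1.2 V: %6.1f W\n", dyn_power(cap, 1.2, 4.0e9));

        /* ...but higher clocks usually demand higher voltage, and the
         * squared term makes power and heat outrun performance. */
        printf("4 GHz @ 1.5 V: %6.1f W\n", dyn_power(cap, 1.5, 4.0e9));
        return 0;
    }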
