Larrabee's dead

http://www.theregister.co.uk/2009/12/04/larrabee_slips/

I blame the very tardy Tom Forsyth. :wink:

I blame its whole concept. It was doomed to fail from the very start.
I'm judging from my (imho vast) experience writing rasterizers and multicore code in x86 and ARM asm while everyone else was whizzing by with GPUs.

P.S. Plus, Intel has shown over five iterations of SSE that they simply don't understand what graphics software bleeds for.

And this is good news.

This project was stupid. Nowadays what matters is how much performance you get per watt. Just consider the rasterizer itself and the way they do it on Larrabee, in software across all the cores, where a GPU has dedicated hardware that is super fast and far more power efficient.
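To make that contrast concrete, here is a deliberately naive sketch (not Larrabee's actual binned renderer, purely an illustration) of the kind of edge-function rasterizer loop a "software rasterization on every core" design has to spend general-purpose cycles on, with scanlines split across cores via OpenMP:

```c
/* Illustrative sketch only -- not Larrabee's actual binned renderer.
 * A bare-bones edge-function triangle rasterizer, with the pixel loop
 * farmed out across CPU cores via OpenMP, to show the kind of work a
 * "software rasterizer on every core" design burns cycles on. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define W 256
#define H 256

static uint32_t framebuffer[W * H];

/* Twice the signed area of triangle (a,b,c); the sign tells which side
 * of edge a->b the point c falls on. */
static int edge(int ax, int ay, int bx, int by, int cx, int cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

static void draw_triangle(int x0, int y0, int x1, int y1, int x2, int y2,
                          uint32_t color)
{
    /* Every scanline is independent, so split them across cores.
     * No bounding box, no binning -- the naive full-screen loop. */
    #pragma omp parallel for schedule(static)
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            /* Pixel is inside if it is on the same side of all three edges. */
            int w0 = edge(x1, y1, x2, y2, x, y);
            int w1 = edge(x2, y2, x0, y0, x, y);
            int w2 = edge(x0, y0, x1, y1, x, y);
            if (w0 >= 0 && w1 >= 0 && w2 >= 0)
                framebuffer[y * W + x] = color;
        }
    }
}

int main(void)
{
    memset(framebuffer, 0, sizeof framebuffer);
    draw_triangle(20, 20, 200, 60, 60, 220, 0xFF00FF00u);
    printf("center pixel = 0x%08X\n", framebuffer[(H / 2) * W + W / 2]);
    return 0;
}
```

Every one of those inner-loop multiplies and compares is something a dedicated rasterizer does for free in fixed-function silicon.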

Intel is so bad with graphics, I still don't get how they can sell more in quantity than both nVidia and ATI. Fortunately for me I don't have to take care of it!

The best thing Intel could do is give up on graphics.
From what I read, a Larrabee v2 could happen. Maybe with a serious redesign (including dedicated hardware for graphics) and proper work, at least as much as nVidia and ATI put in, they will come up with something good.

To be honest, that wasn't really Larrabee's problem. Larrabee's problem was Intel's absolute desire to stick x86 everywhere.

When you get to large, in-order many-core arrays like that, the overhead of doing x86 translation for 32+ cores is very much non-trivial. You can live with it when you only have 4 cores, but not when you have 32 cores. And especially not when each core is much simpler than the main core of a regular CPU; this makes the x86 translation logic much more expensive per core by comparison.

All of that x86 translation was just wasted transistors and logic, from the standpoint of doing GPU work. Which meant that Larrabee's price vs. performance ratio would always be worse than the equivalent NVIDIA or ATI part.

Rasterization, especially these days, isn't that big of a deal. Whether the rasterization is in hardware or software, it's simply not the limiting factor in graphics: shaders are. And Larrabee's per-core price for shaders was just too expensive compared to NVIDIA or ATI.

Fortunately for everyone involved, Intel realized this before shipping the thing.

Larrabee, as a graphics part, was always more of a Trojan horse than anything else. The idea was that they slip it in as a GPU, then suddenly everyone has them, and people are able to leverage them in applications. The GPU was a means to an end, not the end itself.

Looks like AMD has a clear path with Fusion for putting GPUs on the same die as the CPU cores. Could be interesting, especially with OpenCL out there.

I've been a cautious skeptic too, but hopefully this is the thing that they "won't" do. They've got a sharp software graphics brain trust (Pharr, Forsyth, LeFohn, etc.), but it sounds like on the hardware design and production side, Intel had a lot of teeth to cut (and is still cutting them). Sure hope they don't pull the plug and dump the team out on the street. If not a Larrabee rev 2, hopefully there are some innovations they can spin off.

They were really nuts to wave the "here comes the Messiah/Larrabee" flag so early though (several SIGGRAPHs ago). Brush off, wipe the egg off your face, learn lessons, and retarget efforts.

Anybody else notice the SC09 Larrabee news that came out at about the same time? For instance, "Intel's Larrabee Hits 1TFlop - 2.7x faster than nVidia GT200!": "You've got ATI out with a card [the 5800] that can do five teraFLOPs now. For Intel to come out with a card that does one teraFLOP next year isn't going to cut it in the high-end space." "By way of comparison, …the Core i7-975, the top desktop of the Nehalem generation, hits 55.36 (GFLOPS) in turbo mode." Might have helped precipitate this.
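For context on those numbers, a quick back-of-the-envelope sketch: if we assume the quoted 55.36 GFLOPS is the i7-975's double-precision SSE peak (4 cores x 3.46 GHz turbo x 4 DP FLOPs per cycle), the figures line up like this. The Larrabee and ATI numbers below are just the ones quoted above, not independently verified:

```c
/* Back-of-the-envelope peak-FLOPS math, to show where figures like the
 * quoted 55.36 GFLOPS come from. Assumption: that number is the i7-975's
 * double-precision SSE peak, i.e. 4 cores x 3.46 GHz (turbo) x 4 DP
 * FLOPs per cycle (one 2-wide add + one 2-wide mul). */
#include <stdio.h>

static double peak_gflops(int cores, double ghz, int flops_per_cycle)
{
    return cores * ghz * flops_per_cycle;
}

int main(void)
{
    double nehalem  = peak_gflops(4, 3.46, 4);  /* ~55.4 GFLOPS DP        */
    double larrabee = 1000.0;                   /* SC09 "1 TFLOP" demo    */
    double ati_5800 = 5000.0;                   /* "five teraFLOPs" quote */

    printf("i7-975 peak:          %6.2f GFLOPS\n", nehalem);
    printf("Larrabee vs i7:       %6.1fx\n", larrabee / nehalem);
    printf("ATI 5800 vs Larrabee: %6.1fx\n", ati_5800 / larrabee);
    return 0;
}
```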

Exactly. The ISA is awful for gfx, and even with a whole new set of instructions (smirk at SSE) the outlook is grim.

Jon Peddie did a good write-up on this over the weekend.

http://jonpeddie.com/blogs/comments/larrabee-past-present-future/

It's obvious from the pace of this discussion that nobody ever took it seriously in the first place. A cynical attempt to get stock prices up, it would seem.

I took it seriously. Until it became apparent that it wouldn't be price/performance competitive with ATi and NVIDIA's offerings.

Really though, what killed my interest in it was OpenCL. Access to much of a GPU's performance, without being bound to a particular architecture. Once that existed, the reason for x86 in Larrabee mostly disappeared for me.
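That portability is the whole appeal. A minimal host-side sketch (illustrative only, error handling stripped, takes whatever default device the first platform reports, CPU or GPU alike):

```c
/* Minimal OpenCL host sketch (illustrative only): squares a float array
 * on whatever device the first platform exposes -- CPU, GPU, or otherwise.
 * Error handling is omitted to keep the sketch short. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void square(__global float *buf) {  \n"
    "    size_t i = get_global_id(0);             \n"
    "    buf[i] = buf[i] * buf[i];                \n"
    "}                                            \n";

int main(void)
{
    enum { N = 1024 };
    float data[N];
    for (int i = 0; i < N; ++i) data[i] = (float)i;

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    /* CL_DEVICE_TYPE_DEFAULT: take whatever is there, GPU or CPU. */
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "square", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof data, data, NULL);
    clSetKernelArg(kern, 0, sizeof buf, &buf);

    size_t global = N;
    clEnqueueNDRangeKernel(q, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);

    printf("data[10] = %f\n", data[10]);  /* expect 100.0 */

    clReleaseMemObject(buf);
    clReleaseKernel(kern);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
```

Nothing in that host code cares whether the device underneath is a Larrabee-style x86 array or a conventional GPU, which is exactly the point.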

Being neither a decent GPU nor the only good GPGPU option, it simply isn't necessary.

I hope Intel learned a lot by trying to put it together. Things they can apply to future many-core products. And at least this didn't turn out like Itanium, where they actually released the product to the benefit of no one.

What I hated about Larrabee was the people who came crawling out of their holes shouting "WITH LARRABEE I CAN DO EVERYTHING BETTER!!".

There were quite a few on these forums.

I believe that abstraction is a good thing. I believe that when we are forced to write our own software rasterizers (or to buy one from some middleware provider), it will be the death of many small companies and hobby developers. Yet many (hobby) developers claimed that writing their own rasterizer with the full power of Larrabee would enable them to do some "crazy stuff that would revolutionize everything".

I agree that a design like Larrabee might be appealing for certain areas. I agree that there is a need for CUDA and OpenCL. And I agree that it is nice to be able to get down to the metal and not be forced to work through an abstraction (as GPGPU was done a few years ago).

However, I do not agree that the most performance-critical part of a 3D application - the 3D rendering itself - should be forced upon the developer when the engineers of a chip know much better how to leverage its full power. I like my free monthly driver updates with bug fixes and performance improvements.

Sure, you say Intel would have shipped an OpenGL and Direct3D driver. But they made it pretty clear from the start that that would be just some implementation to get anything running on it at all. We all know how horrible Intel drivers are; does anyone think they would acquire some genius driver-writing skills overnight? If ATI has trouble with OpenGL drivers (and I'm sure they really WANT to make a good one), I doubt Intel could pull this off just like that.

No, Intel was clear about this the whole time: any OpenGL or Direct3D driver would be just for convenience; you are actually supposed to do all the rendering yourself. Certainly they would give you a library with some basic functions to get started, though.

And in that regard I always saw Larrabee as a step back, at least when it comes to rendering.

Don't get me wrong: I am not AGAINST Larrabee; I don't think it would be a bad thing to have it available. I am just not convinced that it would have such a positive impact at all. I think it would be a niche product for high-performance computing, where it is more convenient to have a 48+ core x86 CPU to program for than some much more complicated GPU. As I said earlier, what I was more pissed off with was actually the people hyping it into oblivion without being able to put Larrabee into its proper context. Maybe that's because I don't like fanboys in general, no matter the product.

Jan.