Sunday, November 18, 2007

80 times the power of a PS3? I don't think so.

Recently I caught a video from AMD talking about their upcoming Spider Platform. The Spider Platform consists of a quad-core Phenom processor, the 790 chipset, and a Radeon HD 3800 graphics card.

During the presentation, one of the AMD representatives stated they were delivering 80 times the power of a PlayStation 3.

I. Don't. Think. So.

Here's why. The AMD Phenom processor is based on x86 technology, extended with x86-64. x86 generally isn't considered that great of a processor architecture. The Cell processor, developed by IBM, Sony, and Toshiba, is based on the PowerPC architecture. PowerPC is generally considered to be one of the better architectures on the market, with perhaps only the old Alpha architecture considered better.

Now, bear in mind that there have been practical examples of PowerPC versus x86 in the market before. Take the older Apple Mac G3 and G4 designs: at 500 MHz they easily kept up with Intel and AMD designs over 1 GHz. Or consider the GameCube, which many developers went on record as saying was easily more powerful than the 733 MHz Celeron used in the Xbox, despite running at only 485 MHz.

Now, keeping in mind that in terms of IPC (instructions per clock cycle) PowerPC is considered to be better than x86, consider this: the IBM Cell processor used in the PS3 not only has a PowerPC processor built in, it also has 7 SPEs (Synergistic Processing Elements). Each SPE comprises its own memory controller and a 128-bit RISC processor. One of the SPEs is reserved by the PS3's operating system, leaving developers with six to work with.

So, in a best-case scenario, the Cell processor used in the PlayStation 3 can manage six different processing threads. Now keep in mind that the two most advanced engines on the PlayStation 3 today are Epic's Unreal Engine 3 and the NEON engine developed by Codemasters and Sony. Right now, both engines are barely pressing two processing threads. They aren't even reliably getting beyond a third of the processing power available. Why? Because the engines have to work across multiple platforms where those four extra processing threads probably aren't going to be available, say, on a single-core Athlon XP.
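
To make that lowest-common-denominator effect concrete, here's a minimal sketch in modern C++. It's purely hypothetical (the platform names and thread budgets are just the figures from above, not code from any real engine): a cross-platform engine fixes a small worker count that runs everywhere, and the Cell's extra hardware simply goes unused.

```cpp
// Hypothetical illustration: a cross-platform engine sizes its worker pool
// around the weakest target it ships on, not around the Cell's six SPEs.
#include <algorithm>
#include <cstddef>
#include <iostream>

// Assumed per-platform thread budgets, roughly matching the numbers above.
struct Platform {
    const char* name;
    std::size_t hardwareThreads;  // threads the hardware can actually run
};

int main() {
    const Platform targets[] = {
        {"Single-core Athlon XP (PC baseline)", 1},
        {"Quad-core Phenom (PC high end)",      4},
        {"PS3 Cell (PPE + 6 usable SPEs)",      6},
    };

    // A cross-platform engine typically hard-codes a small worker count
    // (renderer + game logic, say) so the same job layout works everywhere.
    const std::size_t crossPlatformWorkers = 2;

    for (const Platform& p : targets) {
        std::size_t used = std::min(crossPlatformWorkers, p.hardwareThreads);
        std::cout << p.name << ": using " << used << " of "
                  << p.hardwareThreads << " available threads\n";
    }
}
```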

This relatively low use of processing ability at the start of a console's life isn't exactly unusual on the exotic hardware used in game consoles. Just go pick up a Super Nintendo, a PlayStation 1, a PlayStation 2, a GameCube, or any other console but the original Xbox. You'll note a startling change in graphics quality as you compare launch titles to titles two or three years down the line. For example, go pick up Ratchet and Clank, Ratchet and Clank 3: Up Your Arsenal, Jak and Daxter, and Jak 3. Now, play the original titles... then play the third entries. See a difference?

The odd one out in console graphics improvement is, of course, the Xbox. Since it used a stock x86 processor and a not-so-stock Pixel Shader 1.4 graphics chip from Nvidia, there wasn't as much room for the system to grow as time went on. So what you had at the beginning of the console's life... was pretty much what you had at the end of it.


So, what does this have to do with AMD's claim?

Well, let's think about it for a second. A quad-core Phenom processor is only going to be able to process four threads. An engine optimized for the x86-64 platform with four processing threads isn't going to run very well on the six-thread Cell processor, and vice versa. So cross-platform engines are never really going to be able to push either system to its maximum. Engines that are designed from the ground up for each platform, however, will be able to actually use the platform, as witnessed in the Ratchet and Jak games.
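
By contrast, an engine built for one specific platform can size its job system to whatever the hardware actually offers. Another minimal, purely illustrative sketch in modern C++ (std::thread::hardware_concurrency() stands in for however a real engine would query its platform):

```cpp
// Illustrative only: spawn one worker per hardware thread the platform
// reports, rather than clamping to a cross-platform baseline.
#include <cstddef>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

int main() {
    // On a quad-core Phenom this reports 4; a Cell-specific engine would
    // instead hand one job stream to each of the PS3's six usable SPEs.
    std::size_t workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 1;  // the call may return 0 if it can't tell

    std::vector<std::thread> pool;
    for (std::size_t i = 0; i < workers; ++i) {
        pool.emplace_back([i] {
            std::string msg = "worker " + std::to_string(i)
                            + " running a platform-sized job\n";
            std::cout << msg;
        });
    }
    for (std::thread& t : pool) t.join();
}
```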

At best, then, in a dual-socket environment, a quad-core Phenom system would be able to process eight threads (4 × 2). Even with a high clock speed of 3 GHz, a quad-core Phenom is going to have a hard time matching the raw performance of Cell.
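
For a rough sense of scale, here's a back-of-envelope comparison of theoretical single-precision peaks. The per-cycle figures are my own assumptions (roughly 8 FLOPs per cycle per SPE and per Phenom core), not vendor benchmarks, and real games get nowhere near either number:

```cpp
// Rough, back-of-envelope theoretical peaks only -- not measured performance.
// The per-cycle figures below are assumptions, not published benchmarks.
#include <iostream>

int main() {
    // Cell: each SPE can issue a 4-wide fused multiply-add per cycle,
    // i.e. roughly 8 single-precision FLOPs per cycle at 3.2 GHz.
    const double speGflops      = 3.2 * 8.0;        // ~25.6 GFLOPS per SPE
    const double cellGameGflops = 6.0 * speGflops;  // 6 SPEs left to games

    // Phenom (K10): roughly 8 single-precision FLOPs per core per cycle
    // through its 128-bit SSE units, assuming a 3 GHz part.
    const double phenomGflops = 4.0 * 3.0 * 8.0;    // 4 cores at 3 GHz

    std::cout << "Cell (6 SPEs, SPEs only): ~" << cellGameGflops << " GFLOPS\n";
    std::cout << "Quad-core Phenom @ 3 GHz: ~" << phenomGflops   << " GFLOPS\n";
}
```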

From a strict processor viewpoint, then, I think AMD is talking rubbish when it comes to out-powering the PlayStation 3.

From a graphics viewpoint, though, AMD might have a point. The PlayStation 3's graphics are provided by an Nvidia chip known as RSX, which was reportedly based on the 7800 series. Judging from the GPU's 550 MHz clock speed at the PlayStation 3's launch, it likely shares more in common with the G71 7950 GT spin than with the original G70 7800 spin. Either way, that is Shader Model 3 hardware.

AMD's new Radeon HD 3800 series, though, is Shader Model 4 hardware, which means all of its shaders are unified: in the Radeon HD 3800, each shader can do either vertex shading or pixel shading. Granted, the Radeon HD can still only do 16 textures per cycle versus the RSX's 24, so the RSX will probably still be faster in texture-heavy titles. However, with textures on the way out and shaders on the way in, the Radeon HD has much more to pass around.
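
To see why unification matters, here's a conceptual sketch. It is not how any real driver schedules work, and the unit counts are just illustrative (loosely echoing a G70-style 8-vertex / 24-pixel split): with a fixed split, a shader-heavy frame leaves vertex units idle, while a unified pool keeps everything busy.

```cpp
// Conceptual sketch only -- contrasts a fixed vertex/pixel split
// (RSX-style Shader Model 3) with a unified pool (Radeon HD 3800-style
// Shader Model 4). Unit counts are illustrative, not real specs.
#include <algorithm>
#include <iostream>

struct FrameWork {
    int vertexJobs;
    int pixelJobs;
};

// Fixed split: units dedicated to one job type sit idle if the frame is
// unbalanced the other way.
int busyUnitsFixed(const FrameWork& f, int vertexUnits, int pixelUnits) {
    return std::min(f.vertexJobs, vertexUnits) + std::min(f.pixelJobs, pixelUnits);
}

// Unified pool: every unit can take whichever job type is waiting.
int busyUnitsUnified(const FrameWork& f, int totalUnits) {
    return std::min(f.vertexJobs + f.pixelJobs, totalUnits);
}

int main() {
    const FrameWork pixelHeavyFrame = {4, 60};  // shader-heavy, few vertices

    std::cout << "Fixed 8 vertex / 24 pixel units busy: "
              << busyUnitsFixed(pixelHeavyFrame, 8, 24) << " of 32\n";
    std::cout << "Unified pool of 32 units busy:        "
              << busyUnitsUnified(pixelHeavyFrame, 32) << " of 32\n";
}
```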

It is possible, then, that four Radeon HD 3800s coupled together could approach 80 times the power of the PlayStation 3's RSX in terms of shader output.

***

The primary question, though, is: will that matter? My gut reaction is no. Most game developers have to target a baseline of playability. Currently that baseline is Intel integrated graphics, so most titles built for x86 computers are never going to press what four 3800s coupled together can do.

That means that, 80 times the power of a PS3 or not, it will be a while before such limits are even remotely pushed.


3 comments:

Unknown said...

Marketing talk, of course.
But you are comparing chip for chip and video for video.
AMD has been advertising the upcoming Spider platform as able to provide a teraflop on the desktop. That's a quad-core plus 2 (or more?) video cards doing calculations (say, Folding), not gaming or 8-threaded applications, with the video cards providing most of the number-crunching power.
Whereas I am not aware that the PS3 graphics engine can be used for any calculation. So it is just Cell against a whole bunch of stuff.
I have an X1950 Pro doing my folding, and it is way faster than any processor. Given that the 38x0 cards are more powerful, I believe the 80-times claim could be correct. That's IF Stanford releases Folding for the 38x0.

Unknown said...

Okay, a bit late on realizing somebody had posted to this. First, I'd actually suggest you go back and read a post from 2006 that talked about Folding@Home. It is here.

Long story short, Folding is a relatively useless application with no long-term medical benefits. I've talked with a few doctors who graduated from or work at the Medical College of Georgia and University Hospital in Augusta, Georgia, and the general consensus is that Folding@Home will net them no useful information.


Anyways, since the PS3's RSX is essentially either an overclocked GeForce 7800 or an early spin of the 7900 chip, yes, it can be used for general calculations. Users of Debian Linux and Yellow Dog Linux have reported successfully getting GPGPU (general-purpose computing on graphics processing units) applications to work.

PS3DEV said...

80 times? Extremely exaggerated by a non-techie white collar bumble brain from AMD/ATI!

#1. PS3 RSX alone = 1.8 TFLOPS
Cell and RSX combined = over 2 TFLOPS

#2. I will explain:
RSX = "Multi-way Programmable Parallel Floating Point Shader Pipelines". Now no other device uses this description, but if you were a student of Graphics Cards, you would realize that this was basically describing multiple pathways through a grid or array of Shader processors.

#3. So it's not the "Fixed Function Pipeline" of a G70 with 8 vertex shaders and 24 pixel shaders. This describes a Unified Shader Architecture!

#4. It supports full 128-bit HDR lighting, same as the G80, while the G70 only supports 64-bit HDR like all last-gen GPUs. Dead giveaway!

#5. As early as Feb 2005 at GDC, Sony announced that the PS3 (RSX) would use OpenGL ES 2.0. At the time 2.0 wasn't out, so they wrote the RSX's feature support into OpenGL ES 1.1, including a fully programmable shader pipeline, and called it PSGL. The Khronos Group took anything that could be done in a shader out of the fixed-function pipeline (including transform and lighting), streamlining it for future embedded hardware. What does that mean?

#6. At least for OpenGL ES 2.0 devices (not the PC version), it becomes a unified shader model, and remember, RSX is fully compliant. OpenGL ES 2.0 will only run on advanced hardware, just like DX10.

In fact, your next cell phone or PSP phone will run on OpenGL ES 2.0 with GPUs like the PowerVR SGX = 720p capable for Bluetooth display to an HDTV. Unified shaders, Shader 4.0, and with OpenKode support, play DX10 games!!! So if the PS3's RSX can run the same API, it's NOT a G70, and there's no way anything but a mainframe will out-power a PS3 by 80 times, which would make the Spider Platform with 4 GPUs capable of over 160 TFLOPS total!

OpenGL ES 2.0 Cell Phone interface prototype of the Future!
http://www.youtube.com/watch?v=l8mWWkY3dBQ&e

Look Familiar? Team Design Demo ;)