Ok, a couple of things. I'm not motivated enough to go out and dig up every article I've read to back up a claim, but I don't make things up. When I write something, it comes either from my computer science degree or from an article I have read. There isn't a bandwidth problem with the xbox. The article below also addresses the HDTV question. Sure you can look at the xbox hardware (the most powerful on the market) and pick it apart, but my only question is why? Why not the GC? Because people like Seven are threatened by the xbox and feel the need to discredit it.
Iron Fist, all that crap you said about me is just that: crap. Anyone can say that about someone else's argument. I find it odd that in the posts where we agree, my arguments suddenly have substance.
Ok, but now I have quotes to back up my claims.
http://firingsquad.gamers.com/hardware/xboxtech/page7.asp
__________________
Calculating the actual bandwidth required for a scene is somewhat tricky since we cannot know what the cache efficiency will be, or how much bandwidth is required for the vertex shaders, and so on. Fortunately, we should still be able to make educated guesses and approximate.
Since the primary display for the Xbox is an NTSC TV rather than a monitor or HDTV, most games will stay around 640x480. This also limits the refresh rate to 60 Hz. These two constraints will significantly reduce the amount of bandwidth required. Now let's be specific and say that we want 32-bit color, a 32-bit Z-buffer, 4 textures per pixel, and trilinear filtering. Finally, let us assume a depth complexity (overdraw) of 4 and a multiplier of 1.2 for the Z-buffer operations.
The framebuffer will require
640 * 480 * 60 * (8 bytes per pixel) * 4 * 1.2 = 707.8 MB/sec.
For the RAMDAC to display at 60 Hz, we need to add
640 * 480 * 60 * 4 bytes per pixel = 73.7 MB/sec.
With four textures per pixel, 8 texels for a trilinear filter, and 31% cache efficiency you'll need:
640 * 480 * 60 * 4 bytes per pixel * 4 depth complexity * 4 textures * 8 texels * 31% = 2.9 GB/sec
This makes the grand total 3.7 GB/sec. Not too bad. There's even enough bandwidth left over for 16-sample anisotropic filtering (total of 4.55 GB/sec; 20% cache efficiency). If you wanted HRAA then your bandwidth requirement would be multiplied by 4. You would need to drop the framerate to 30 fps, and perhaps only use two textures per pixel (5.5 GB/sec).
What are we missing?
We haven't accounted for features such as early Z-checking or lossless Z-buffer compression, which significantly reduce bandwidth requirements. Early Z-checking combined with good developer foresight could easily reduce the overdraw to somewhere between 2 and 3 rather than our more extreme example of 4. If the overdraw was 2, for example, you'd be able to get HRAA with two textures per pixel at 60 fps.
Lossless Z-buffer compression will help somewhat as it will reduce the Z-buffer size by a factor of 4. Lastly, it's possible that developers may choose to ignore HRAA or use bilinear filtering instead. There are many more variables to consider, but the math will work out something like that.
The Xbox won't be able to compete with the GeForce3 at higher resolutions, but it doesn't have to play on that high-resolution field. Staying at lower resolutions makes a big difference in the bandwidth requirement. What about HDTV gaming? A 1280x720 resolution will require less bandwidth than an HRAA'd 640x480.
_________________
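Since numbers keep getting argued about in this thread, here's the article's arithmetic as a quick Python sanity check. All the constants are the article's own assumptions; nothing here is mine except the script itself.

```python
# Back-of-envelope Xbox bandwidth check, reproducing the FiringSquad figures.
# Sizes in bytes; results in decimal MB/s and GB/s, as the article uses.

W, H, FPS = 640, 480, 60   # NTSC resolution at 60 Hz
BPP = 4                    # 32-bit color
ZBPP = 4                   # 32-bit Z-buffer
OVERDRAW = 4               # depth complexity assumed by the article
Z_FACTOR = 1.2             # article's multiplier for Z-buffer operations

# Framebuffer: color + Z per pixel (8 bytes), times overdraw and the Z multiplier
framebuffer = W * H * FPS * (BPP + ZBPP) * OVERDRAW * Z_FACTOR
print(f"framebuffer: {framebuffer / 1e6:.1f} MB/s")   # ~707.8

# RAMDAC scan-out of the final 32-bit frame at 60 Hz
ramdac = W * H * FPS * BPP
print(f"ramdac:      {ramdac / 1e6:.1f} MB/s")        # ~73.7

# Textures: 4 layers, 8 texels per trilinear fetch, 31% cache efficiency
TEXTURES, TEXELS, MISS = 4, 8, 0.31
texture = W * H * FPS * BPP * OVERDRAW * TEXTURES * TEXELS * MISS
print(f"textures:    {texture / 1e9:.2f} GB/s")       # ~2.93

total = framebuffer + ramdac + texture
print(f"total:       {total / 1e9:.1f} GB/s")         # ~3.7

# The HDTV claim: 1280x720 pushes fewer pixels than 4x-supersampled 640x480
print(1280 * 720 < 640 * 480 * 4)                     # True
```

The last line is why the HDTV point holds: 921,600 pixels versus 1,228,800 for an HRAA'd NTSC frame.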
This pretty much squashes the bandwidth question, but I have more.
Here\'s one about the CPU:
http://www.anandtech.com/showdoc.html?i=1561&p=2
______________
The CPU itself runs at 733MHz which would make you think that Microsoft could have done much better with a solution from AMD. However if AMD had supplied a 200MHz FSB processor with a L2 cache similar in size to the Duron, then the performance of an equivalently clocked solution from AMD wouldn't have outshined this Coppermine-derived processor too much. The other thing to take into consideration is heat and power supply requirements. In order for the Xbox to be taken seriously as a gaming console and not just a PC in a black case it would have to be no louder than a DVD player and put out no more heat than an A/V receiver. It is a widely known fact that the Coppermine core runs significantly cooler and with lower current requirements than the Athlon/Duron cores.
So although on the surface it seems as if Microsoft may have made the wrong decision with the Xbox's CPU (we even thought so at first), if you think about it, the decision isn't all that bad.
________________
Here\'s one about the UMA:
http://www.ddj.com/documents/s=882/ddj0008a/0008a.htm
_________________
UMA has a significant advantage in that it allows the CPU, DVD and disk controllers, and GPU to access common data without copying; for example, models and textures can be streamed off the DVD into memory and used directly by the GPU. However, the history of UMA is spotty; witness IBM's PCjr UMA, which stopped the CPU virtually dead in its tracks by allotting two out of every three memory cycles to graphics. Not surprisingly, this is the aspect of Xbox that has aroused the greatest degree of public skepticism, so the accompanying text box entitled "Xbox Memory Bandwidth" discusses Xbox's memory bandwidth in high-end scenarios. The short version of the bandwidth story is that while there are scenarios in which Xbox could run out of bandwidth, there should be more than enough for most cases, particularly those that leverage the GPU's programmable pipeline. Under virtually any set of assumptions, Xbox has adequate memory bandwidth to handle 50 Mtris/sec. in real-world use, and usually plenty to hit the pipeline limits of the chip.
____________________
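As a back-of-envelope check on that 50 Mtris/sec figure: the vertex size, the reuse factor, and the 6.4 GB/sec total bus number below are my own illustrative assumptions, not figures from the DDJ piece.

```python
# Rough sanity check: how much of the memory bus does 50 Mtris/sec eat?
# VERTEX_BYTES, VERTS_PER_TRI, and TOTAL_BW are assumed values for
# illustration only, not numbers taken from the article.

TRIS_PER_SEC = 50_000_000
VERTEX_BYTES = 32      # e.g. position + normal + one UV set (assumption)
VERTS_PER_TRI = 1.0    # roughly one unique vertex per tri in stripped meshes

vertex_bw = TRIS_PER_SEC * VERTS_PER_TRI * VERTEX_BYTES
print(f"vertex fetch: {vertex_bw / 1e9:.1f} GB/s")   # 1.6 GB/s

# Against an assumed 6.4 GB/s of shared memory bandwidth, vertex traffic
# at full tilt leaves most of the bus for framebuffer and texture work.
TOTAL_BW = 6.4e9
print(f"share of bus: {vertex_bw / TOTAL_BW:.0%}")   # 25%
```

Even under those worst-case fetch assumptions the vertex stream is a quarter of the bus, which fits the article's conclusion that bandwidth is adequate.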
Here's another one of Seven's favourite xbox gripes that I'm throwing in just for fun: the xbox won't get much better because it's so easy to program for that the devs have already figured it all out.
http://www.ddj.com/documents/s=882/ddj0008a/0008a.htm
_______________
The best thing about Xbox is that it won't change. Ever. Judging by other consoles, Xbox should have a four or five year run, and wonderful as the profusion of constantly evolving PC technology is, I love the idea of being able to spend years working with a high-powered, fixed platform -- figuring out how to apply all that programmability, understanding the performance quirks, getting things right.
It's a long way from hunching over a dot-matrix printer. Still, there's a key similarity. Twenty years ago, I loved the feeling of having all that computer power at my command, and the part that really rocked was figuring out how to use it -- all of which goes double for Xbox.
_________________
Here\'s even more about the UMA and a bit about the higher resolution question:
http://www.epinions.com/content_21513080452
______________
In Xbox's Unified Memory Architecture (UMA), the GPU and memory controller are integrated into a single chip. This means that in order for the CPU to access the memory, it must go through the GPU chip. This may sound like a bad thing, but with the huge amount of memory bandwidth required for fast and texture-rich graphics, it allows for super-fast communication between the GPU and memory.
Furthermore, UMA allows game developers to choose how they want to allocate memory. They can choose to use 48 MB for graphics, 4 MB for audio and 4 MB for game logic (for use by the CPU); or they can choose to use 16 MB for graphics, 16 MB for audio, and 32 MB for game logic.
A non-unified architecture would require the developers to limit themselves in each type of memory, such as 32 MB for graphics, 8 MB for audio, and 24 MB for logic. While limitations like this are acceptable for output to a standard TV, UMA will allow developers to optimize their games for display in either low resolutions (TV) or high resolutions (HDTV or possibly a computer monitor). Whichever option the developers choose, it will have a direct impact on how much memory they need to dedicate to graphics.
_______________
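The splits the epinions piece describes are just different budgets carved out of one pool, which a tiny sketch makes concrete. The helper function and names are mine; the example numbers are the article's.

```python
# Sketch of UMA memory budgeting: one 64 MB pool that the developer
# carves up, rather than fixed banks per subsystem. The budget() helper
# is a hypothetical illustration, not an actual Xbox API.

MB = 1024 * 1024
TOTAL = 64 * MB

def budget(graphics_mb, audio_mb, logic_mb):
    """Validate a proposed split of the unified pool (sizes in MB)."""
    used = (graphics_mb + audio_mb + logic_mb) * MB
    assert used <= TOTAL, "split exceeds the 64 MB pool"
    return {"graphics": graphics_mb, "audio": audio_mb, "logic": logic_mb}

# The two splits from the article: graphics-heavy vs. logic/audio-heavy
print(budget(48, 4, 4))     # texture-rich, e.g. targeting HDTV output
print(budget(16, 16, 32))   # heavier on audio and game logic
```

On a non-unified design the first split simply isn't possible: graphics can never borrow from the logic bank, no matter how little logic the game needs.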
More on the UMA:
http://www.extremetech.com/article/0,3396,apn%253D3%2526s%253D1749%2526a%253D12561%2526app%253D1%2526ap%253D2,00.asp
___________
One of the ironies to be found with the 830G or 845G is that their UMA architecture, if done right, can deliver solid performance. SGI pioneered architectures where the graphics subsystem and the CPU shared a single pool of very fast memory for its legendary workstations. Xbox will be UMA, and promises to pack a lot of performance wallop into a $300 game console, and nForce will use a unified memory architecture and could well blow the doors off of any other integrated graphics currently on the market. In other words, UMA doesn't have to suck.
_____________
OK ok ok, I went overboard on purpose. I hate digging up quotes. I just want to talk about games and systems. And I don't mind if we take a console to task, but that's all Seven has done, and his knowledge isn't broad enough to really back it up. He brings up these theories of his that make no real sense. I see what he is saying about the UMA, but those problems are from years ago and any designer worth their salt will know how to manage it. The xbox engineers did a good job, and looking for performance bottlenecks when (relatively speaking) there are none seems odd to me. Bottlenecks are much easier to spot on the GC and PS2.
If we were talking about the fastest car and someone said "Yeah, but look at that engine, it can't go 400 mph!" -- of course it can't, and neither can any of its competitors. This is really my only point.