This is going to take a little space to explain, because it gets to the heart of the differences between how a PC-style system works and how the PS2 works.
At the risk of being a little over-simplistic: in most console (and PC) systems, you draw the data from your medium (whether CD, cartridge, or RAM) into the CPU. The CPU, depending on whether the subprogram is doing AI, physics, graphics (camera angle and text), or controller updates, gets the appropriate information (usually from memory) and performs the mathematical calculations necessary to the task. The CPU can do this very quickly, depending on processor speed and other factors.
However, there is one problem with this method: the CPU can only do one of these tasks at a time. Also, since the CPU can only hold a very small amount of data (it is designed to "push" data through, not hold it), most tasks require the CPU to wait while the information is brought in from "off-chip". Depending on the subroutine and the amount of data being manipulated, this can tie up the CPU for many clock cycles, as the CPU does its manipulations, writes the results to memory, and waits for the next data to come in from memory (when it could actually be doing calculations). Clever programming can mitigate this somewhat, but you're still stuck with the problem of the CPU sitting idle for long periods of time. This is known as the "CPU bottleneck". It was a big problem back in the early '80s, when CPU speeds were much lower than they are now (the average CPU ran at about 3-8 MHz!).
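To make the idle-time point concrete, here is a toy cycle-count model. All of the numbers (4 cycles of math per item, 20 cycles of memory wait) are invented for illustration, not taken from any real chip:

```python
# Toy model of the "CPU bottleneck": cycles doing math vs. cycles stalled on memory.
def cycles_spent(items, compute_cycles_per_item, memory_wait_cycles_per_item):
    busy = items * compute_cycles_per_item        # useful calculation time
    idle = items * memory_wait_cycles_per_item    # time waiting on "off-chip" data
    return busy, idle

# 1000 pieces of data, 4 cycles of math each, but 20 cycles to fetch each one:
busy, idle = cycles_spent(1000, 4, 20)
print(f"busy: {busy} cycles, idle: {idle} cycles")
print(f"CPU utilization: {busy / (busy + idle):.0%}")
```

Even in this crude sketch, the CPU spends the bulk of its time waiting rather than calculating, which is exactly the bottleneck described above.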
Engineers studied the problem and discovered that a lot of the CPU's time was spent doing graphics. So they developed a chipset to help the CPU render graphics in a more efficient manner, which let the CPU roughly double its workload. Soon, though, even this started to bog down, as the growing amount of graphics data tied up the CPU, which was constantly "talking" and trading data with the GPU (Graphics Processing Unit). So the engineers added memory to the GPU and designed it into a separate board that became the "end" of the video pipeline, sending the video signal to the monitor. This set-up proved to be the best choice, and it has been constantly refined ever since with faster, more powerful GPUs and faster, larger "video memory". The latest GeForce2 Ultra with 64 MB of DDR is actually little different in layout from the first Orchid 256 KB VGA cards introduced in 1986.
Most people don't realize the difference this makes in how computers (and consoles) run. Even the latest 1.5 GHz Willamette CPU from Intel could be brought to its knees trying to render a game like 'Half-Life' at 800x600 in 16-bit color, much less something like Q3 Arena at 1600x1200 in 32-bit color. It simply wouldn't be playable. The GPU card effectively quadruples the apparent power of the CPU, allowing not only better graphics but also better game AI and physics.
With that settled, all consoles now have GPU sets of their own. Now we come to a newer problem, one that has cropped up in the last eight years: system bandwidth.
If you have followed me this far, you remember that I mentioned the CPU having to get information from "off-chip", whether from memory or the media. It also has to send all the video information to the GPU. So in some ways the CPU has become a "traffic cop", making sure the data is sent to the appropriate areas, and occasionally doing the physics and AI. However, there is a speed bump here. Much like a city that has grown too large in a short period, the pathways from the CPU to the other areas are extremely slow (kind of like downtown New York at rush hour). You now have a 400 MHz CPU and a 200 MHz GPU, but only a 100 MHz (or 66 MHz) pathway between them. The same goes for the CPU and memory, and the CPU and the media. It's like designing a whole city with a traffic light on every corner. Having the fastest car (or CPU) makes little difference, because everything is forced to move around at the same slow speed. How do you get around this?
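To put rough numbers on that traffic-light analogy, here is a back-of-envelope bandwidth calculation. The 64-bit bus width is an assumption chosen just for illustration; real buses of the era varied:

```python
# Peak bus bandwidth = clock rate x bytes transferred per clock tick.
def peak_bandwidth_mb(clock_mhz, bus_width_bits):
    bytes_per_clock = bus_width_bits / 8
    return clock_mhz * bytes_per_clock  # millions of bytes per second

print(peak_bandwidth_mb(100, 64))  # the 100 MHz pathway: 800.0 MB/s peak
print(peak_bandwidth_mb(66, 64))   # the slower 66 MHz case: 528.0 MB/s peak
```

No matter how fast the 400 MHz CPU can crunch, it can never pull data across that pathway faster than the pathway's own peak.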
Basically, there are two different approaches to the problem (though a lot of permutations thereof). PCs (and some consoles) give the GPU and CPU lots of memory, then load it all up with the information in advance. This is known as 'pre-caching'. If you have ever waited in a game while it displayed "LOADING..", this is basically what it is doing. While this does help somewhat, it doesn't really attack the root cause of the problem: it still comes down to letting the different parts of the system communicate at a faster rate.
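As a sketch of what a pre-caching loader does (the asset names and read function here are hypothetical, purely for illustration):

```python
# Pre-caching: pay the slow medium cost once, up front, behind a "LOADING.." screen.
def preload(asset_names, read_from_medium):
    print("LOADING..")
    cache = {}
    for name in asset_names:
        cache[name] = read_from_medium(name)  # slow: disc/cartridge access
    return cache                              # fast: everything now sits in RAM

# Hypothetical assets; the lambda stands in for a real (slow) disc read.
cache = preload(["level1.map", "textures.pak"],
                lambda name: f"<data for {name}>")
print(cache["level1.map"])
```

After the load screen, the game loop only ever touches the fast in-memory cache, which is why the technique hides the slow pathways rather than fixing them.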
The PS2 takes a different approach. In essence, it has a series of 'super-highways' between the different parts of the system: the CPU, GPU, and memory are all linked together with high-speed connections. People often point out that the GPU has very little VRAM, and the reason is very simple: the CPU can supply as much information as the GPU needs very, very quickly. Hence the GPU doesn't need the huge VRAM that other systems, saddled with 'slow' connections, are forced to use to 'pre-cache' all the video information.
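The contrast with pre-caching can be sketched like this (the frame and asset structure is invented; it only illustrates the idea of streaming over a fast bus instead of caching):

```python
# Streaming: each frame, the CPU pushes just that frame's working set to the GPU
# over the high-speed connection, so no big VRAM cache is needed.
def stream_frames(frames, send_to_gpu):
    for frame_assets in frames:
        for asset in frame_assets:
            send_to_gpu(asset)  # fresh data every frame, fed over the fast path

sent = []
stream_frames([["sky", "road"], ["sky", "car"]], sent.append)
print(sent)  # ['sky', 'road', 'sky', 'car']
```

The viability of this design rests entirely on the connection being fast enough to refill the GPU every frame, which is the bet the PS2 makes.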
Now, how do an (Xbox) Pentium III and Nvidia GPU deal with this? The CPU tries to send as much information as possible to the GPU, and hopefully this leaves it enough free time to do the physics and AI necessary for detailed games. The one big question that remains is this: Microsoft is using what it calls a 'unified memory architecture'. The GPU has no video memory of its own, so it uses the same 64 MB 'memory pool' that the CPU uses to store its data. Supposedly, this allows programmers to 'divide up' the memory to leave sufficient space for both video and CPU. Has MS found a way to do this? Will it have enough bandwidth to handle the information load inherent in this type of setup? Only time will tell..
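A unified pool makes the memory budget a pure software decision; the split below is arbitrary, chosen only to show the trade-off programmers would face:

```python
# Unified memory architecture: one pool shared by CPU data and video data.
TOTAL_POOL_MB = 64

def split_pool(video_mb):
    cpu_mb = TOTAL_POOL_MB - video_mb  # whatever video takes, the CPU loses
    return cpu_mb

print(split_pool(20))  # give video 20 MB -> 44 MB left for game code and data
```

Every megabyte granted to textures is a megabyte taken from AI, physics, and world data, which is why the bandwidth and partitioning questions above matter.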