The main question I'm asking here is what matters more: 256MB of DDR over 128MB, or the core/memory clock speeds? Is the 256MB of DDR worth sacrificing core/memory speed for?
Memory bandwidth is more important than memory size right now. Even in the future, transfer speed will make a larger difference in performance than the amount of RAM your card has.
You want a higher clock speed, a wide memory bus (128-bit is the sweet spot right now), and 128MB of RAM.
No consumer-grade product out there can use 256MB of RAM effectively... and because the bandwidth is slow, not many professional programs can put it to any advantage either.
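To put rough numbers on that, here is a back-of-the-envelope sketch in Python (the clock speeds are made up for illustration, not the specs of any real card):

    # Peak memory bandwidth = bus width in bytes x effective memory clock.
    # DDR transfers data on both clock edges, hence the factor of 2.
    def peak_bandwidth_gb_s(bus_width_bits, mem_clock_mhz):
        bytes_per_transfer = bus_width_bits / 8
        return bytes_per_transfer * mem_clock_mhz * 2 / 1000

    print(peak_bandwidth_gb_s(128, 300))  # 9.6 GB/s (hypothetical 128MB card, faster memory)
    print(peak_bandwidth_gb_s(128, 200))  # 6.4 GB/s (hypothetical 256MB card, slower memory)

The made-up 256MB card has double the RAM but a third less bandwidth, and it's the bandwidth that games actually feel.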
Your best option of the above choices is the Radeon 9600 XT, though if you want a fair graphics-detail rating, you might want nVidia's 5950 card by MSI... the tests look excellent, but it is damn expensive. The current drivers for the ATI 9000+ series (and allegedly some of the earlier ones too) use a driver workaround similar to what nVidia used to do to inflate their FPS: lowering overall graphics detail per rendered scene. nVidia used this tactic in their drivers, dropping overlaid frames from their card's renderings of elaborate explosions and effects, to bankrupt 3dfx's line of competitive cards (which had a visual-quality edge over the GF and GF2 series of cards... probably the GF3s too... even though they had a slower FPS rate).
The reason your GF2 may appear to outdo your FX 5200, Darwin, may come down to the drivers the GF2 is using, as well as the finer details of your FX 5200. From what I have read, there are many versions of the FX 5200, some of them using "flip-chip" technology. The flip-chip versions are actually much slower in transfer rates and allegedly make more errors than the refined true FX 5200 chips. I don't know how to tell them apart other than by manufacturer and version... and even then, I had to research quite a bit to find my info before buying my MSI FX 5200.
Now, back to the issue of rendering detail layers in effects:
After ATI (who was behind in the market at the time) accused nVidia of modifying drivers to drop renderings from detailed effects, nVidia set a new standard for their drivers and cards as far as detail rendering goes. All of their new cards and drivers have been checked and do show they render all of the detail levels of the advanced effects in a game scene. BUT, the new ATI drivers have now been shown to drop detail levels and overlaid effects from advanced scenes. There are a whole 2 or 3 pages about it at THG (link at end of post) in the comparison between the current "top cards." ATI has admitted that "the drivers are not finished yet" and "the current drivers do not fully reflect the full capability of the card." That is the same line nVidia gave when they were accused (and rightly so) of the very same thing ATI is now doing.
nVidia is losing its competitive edge on the market (slowly but seemingly surely), so I purchased an nVidia card rather than an ATI card.
I want to see all of the details in a game that I buy, so I opted for the "slower FPS" card among the affordable choices: the MSI nVidia FX 5200 VTD128.
The next step up (which I think is much better) is the MSI nVidia FX 5600 VTDR128 (or is it VTD128? I forget.)
-If you want your full detail, opt for an nVidia.
-If you want better FPS at 2100x1600 resolutions, opt for the ATI Radeon 9600 XT 128MB.
Personally, I chose nVidia. I hated the fact that they killed 3dfx, but I hate even more the fact that ATI is pulling the same thing nVidia pulled on 3dfx.
FPS do not matter if the frames are not in full, glorious detail... but that is just what I think, as I want to see all of what the game designers spent years composing.
Be warned: nVidia does get fewer FPS at high resolutions than the Radeons do. What resolution are you running at? Choose your card based on your resolution and the detail you want. Those are the keys.
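A quick sketch of why resolution is the key (hypothetical numbers, just to show the scaling):

    # The pixels a card must push per second grow linearly with resolution.
    def pixels_per_second(width, height, fps):
        return width * height * fps

    print(pixels_per_second(1024, 768, 60))   # ~47 million pixels/s
    print(pixels_per_second(1600, 1200, 60))  # ~115 million pixels/s, ~2.4x the load

Same target FPS, almost two and a half times the fill-rate demand... that is where the high-resolution gap between the cards comes from.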
Other than that, the most important thing to understand is that 256MB is just a marketing ploy! They have all this extra RAM that won't be on their future cards, and they need to get rid of it. Pair it with a fast chip at a slower clock rate and there you go... people will buy it because it has more RAM, not for its true speed. It's similar to AMD saying "buy the 64-bit processor for the future." That makes no sense; by the time a 64-bit Windows OS comes out, any chip made now will cost less. Why buy now and waste that cash if you don't have a 64-bit OS running? Hm... marketing.
Don't buy the gimmick. Stay with 128MB and 128-bit.
Peace.
Anything I said above can be confirmed at:
http://www.tomshardware.com