36 Comments

  • feraltoad - Friday, May 5, 2006 - link

    I would have liked to see some high-resolution benchmarks, or at least one. At 1680x1050, would this show that the CPU was no longer the weakest link, with the video card scrambling to render the scene? Would the extra MHz of the CPUs still have a marked effect? Maybe you see the answers to these questions as obvious, but not me. :( I play at this resolution. I'm sure some people complain, but I've tried running around with FRAPS in different places, and as long as the FPS doesn't seem to drop below 24 it isn't bad at all. If I look at an Oblivion gate I get a steady 39 fps, granted this isn't with Dremoras trying to club me. I find that long views in towns (Anvil) are rougher than the Oblivion gates. Perhaps it's all the NPCs or the shadows of buildings?

    However, very nice article, and VERY DEPRESSING! Sad to see the kings of the video cards struggle with a game this much. Also insanely pathetic to see that an FX-60 will net you a whopping 9 fps in the most demanding part of the game. 9 FPS?!? That's a frickin' $1000 chip. You could buy a $115 A64 3000+ and OC it (just 250 HTT) to 3500+ speeds, and you only have a 9 fps difference from a $100 chip to a $1000 chip? That is messed up. (Yeah, I know you can OC the FX too, but still, at $1000 would you expect that to be necessary? It should brew coffee w/the other core while using office apps. Also, I don't believe anybody buys a 3000+ w/out planning to OC.) So much for "getting what you pay for". I guess it is comforting to normal people who can't drop a grand on the CPU alone.

    I too would have liked to see if an Ageia PhysX chip would have made a difference by freeing up the CPU for other things. Ageia could have really gained some momentum if they could have had this game support their chip & had a board for around ~$150.

    People keep talking about patches. Can a patch really significantly optimize a game in any way other than decreasing quality for performance?

    Jarred, what is the biggest increase in performance in a single game that you have seen with new video card drivers? Can we expect NVIDIA or ATI to dramatically increase performance with a driver update?

    After all, this will hurt video card sales if people can expect sub-mediocre performance from the best graphics cards. It will also boost 360 sales if they can't increase performance. Especially when I release my patent-pending super dongle that will allow two Xbox 360s to be used together in an SLI/Crossfire configuration. haha (j/k. but my dongle is quite super, I assure you. ;)
  • moletus - Tuesday, May 2, 2006 - link

    Two months ago, I was about to go shop for an X1900 XTX for 600 euros. Then I saw the Xbox 360.
    Guess what! (Hint: it cost 550 euros with all the bells and whistles. :)

    Why on earth burn so much money and still not be able to run the game, when you can have all the chocolate (+AA and HDR) in the world for 1/3 of the money?

    I'm no MickeySoft fanboy, but the 360 ride has been surprisingly smooth.
  • The Black Cursor - Monday, May 1, 2006 - link

    Both articles were quite informative, but I don't quite have enough information to make a decision between the upper-mid range GPUs and CPUs at similar price points:

    ATI X1800XT vs. NVIDIA 7900GT

    AMD A64 3700+ vs. AMD A64 X2 3800+

    Any opinions?


    Be seeing you...

    ---> TBC (To Buy Components)
  • xFlankerx - Monday, May 1, 2006 - link

    I say the X2 3800+ with the 7900GT. While the X2 most likely will not make a difference in the game itself, as noted in my last comment, it WILL provide an overall smoother experience from your PC.
  • xFlankerx - Sunday, April 30, 2006 - link

    I just came from reading the AnandTech Oblivion CPU testing. I beg to differ with the conclusion that you need a fast CPU to get the most out of your GPU. I'm more of the opinion that the CPU and GPU handle different tasks. We know that the CPU is much stronger than the GPU in MOST cases. Now, the CPU is what provides the GPU with instructions for what the GPU needs to do. If the CPU is already feeding the GPU more instructions than it can handle, then there is no point in having a faster CPU. See, there was hardly a difference in FPS between a 1.8GHz A64 and a 2.6GHz A64 in the areas that are GPU-intensive. The Oblivion gates are by far the most intensive part of the game, GPU-wise. Every single one of the single-card solutions was the bottleneck there, not the CPU. Only dual X1900 XTXs were able to take advantage of the faster CPU.

    On the other hand, the Town benchmarks are influenced quite a bit by the CPU. This is easily explained when you notice that in a town there is a lot for the CPU to calculate that is not in the Oblivion gates. There are hordes of NPCs in a city, and the CPU has to control every single one of them (Oblivion NPCs lead lives of their own, by far the most complicated AI in a game yet). Thus, the stronger the CPU, the better your frames per second in a crowded area will be. The way all the video cards' results clump together here somewhat reinforces the point.

    The Dungeon benchmarks did surprise me. There is a decent amount for the GPU to render, though not as much as in the other areas. However, there is very little for the CPU to calculate. And yet, we see quite a bit of improvement with a faster CPU. I'm not entirely sure how to explain that.

    My point is, basically, that "you don't need a fast CPU to get the most out of your GPU." If anything, it should be the other way around, since the GPU is what can't keep up with the CPU. I think the conclusion should be more like, "You need a fast GPU to handle the graphics of the game, but you will suffer a drop in FPS if your CPU cannot keep up with the AI in crowded areas."
  • kuk - Sunday, April 30, 2006 - link

    Any idea on how a Core Duo setup would perform?
  • ballero - Sunday, April 30, 2006 - link

    Anand, are you sure that the performance increases are due to the game?
    AFAIK the Catalyst drivers are multithreaded.
  • RyanHirst - Saturday, April 29, 2006 - link

    OK, I was just needling a bit about the Opteron servers; I didn't really think they'd be a part of the test. :>

    First, PCIe x16 is standard on all dual-Socket 940 nForce Professional boards, and several have two x16 slots. I did forget that you can't do CrossFire on an NVIDIA board -- but even just checking a server board with one, two, and four cores as a different test, would that not be worth a look?

    Hyperthreading doesn't really answer the question. A (perhaps) 5% improvement with HT is consistent with the "6 of one..." results HT gives in dynamic multithreaded environments (chess, say, or video games -- where the thread results must be compared to values from other threads, or completed within a certain time). There has never been any reason to assume that, because a program does not get a boost from HyperThreading, it will not benefit from an additional full processor.
    I'm not arguing that a dual Socket 940 rig is mainstream, but should we argue about which is less relevant to 99.9% of the gaming public, an Opteron or a $1200 CrossFire setup? The point I'm trying to make is that, regardless of how common it is, a multicore system could answer some questions that remain open. Namely,
    --What are the maximal advantages of multithreading in this first generation of games?

    Every review site seemed happy to pontificate on this subject when there was virtually no data, but now that there is a test candidate, no one seems interested in actually testing it.
    Was it ever in doubt that you can improve a game's performance by throwing a CrossFire x1900xtx at it? No one would argue with this premise. *THE* question that chipmakers are taking revolutionary stands on for video games... is about cores. And I would just REALLY like someone to take a look at it.
    Sorry that was long; I realized the first post sounded grumpy because I didn't explain my reasoning. Cheers, and I take the time to write this because your reviews are always the best; when I ask myself what information I would need to make an objective judgement about something, it's always information you present and analyze. Thanks again!
  • RyanHirst - Saturday, April 29, 2006 - link

    Oh yeah, and about the small boost in performance from the second core: since the multithreaded coding in Oblivion is (by their own statements) a result of their Xbox 360 version, I was assuming that the game would be generating low-complexity, nearly branchless threads, none of which would tax the extra processor at anything close to 100%... but that the game engine can generate a LOT of them. Is that not more or less the model of the Xbox 360 processor?
  • JarredWalton - Saturday, April 29, 2006 - link

    The Xbox 360 processor is quite a bit different, handling up to six threads over three cores. The problem is, it's a completely different machine architecture, so the game goes through a different compiler. Even if their code takes full advantage of the Xbox 360's capabilities, they might not bother porting the entire spectrum over to x86. More likely -- and as far as I know there's no way to prove this -- the Xbox 360 version doesn't use more than about half the potential of the Xenon CPU. (Go ahead and call me "doubting Thomas". LOL)

    Anyway, maybe Anand will find time to try running a few tests on a quad-core versus dual-core setup. I unfortunately don't have access to any quad-core systems. :(
  • RyanHirst - Sunday, April 30, 2006 - link

    hehe it's ok. Just chompin' at the bit curious, that's all. If anyone on eBay wanted the collectible books I no longer need, I'd have turned them into a magic pair of 275s and I'd know. Alas. Pantone 072 Blue.
  • bob661 - Saturday, April 29, 2006 - link

    I guess this round of testing only applies to ATI video cards. I guess us Nvidia owners are left to guess how CPU performance affects GPU performance. Oh well. :(
  • PrinceGaz - Sunday, April 30, 2006 - link

    Just refer to part 1 of the Oblivion article and find out where your nVidia card of choice lies in relation to the four ATI cards tested this time, and it is easy to see how it will perform with various CPUs.
  • Bladen - Saturday, April 29, 2006 - link

    My guess as to why CPUs help so much in towns is because the Radiant AI takes a fair amount of power.
  • BitByBit - Saturday, April 29, 2006 - link

    I don't know why, but the benchmark graphs didn't appear at all in that review, nor did they in the previous Oblivion review; I get nothing where the graphs should be.
    Has anyone else had this problem?
  • blackbrrd - Saturday, April 29, 2006 - link

    If you have referrers turned off, you won't see any images.
  • JarredWalton - Saturday, April 29, 2006 - link

    Someone else had a problem with Norton Internet Security blocking images for some reason.
  • Madellga - Monday, May 1, 2006 - link

    That was me. Turn off privacy control.
  • RyanHirst - Saturday, April 29, 2006 - link

    This article still left me with the nagging question about multithreaded performance. The Oblivion team made reference to the game being heavily optimized for multithreaded performance because they knew from the beginning they'd be writing it simultaneously for the Xbox 360.
    So the debate about the potential of multithreaded code in games has been going on for a while, and here we have the perfect test case, and we happen to know AnandTech had a couple of four-way servers in the shop over the last few weeks.... but the CPU guide leaves that question unanswered.
    If it's not unreasonable to heap $1200 in graphics hardware onto a motherboard for a game that is GPU bound only half of the time (outdoors), is it too much to ask that a $1200 pair of Opteron 275s be tested to see how multithreaded the first advertised multithreaded game really is? Is it not possible that the game can offload a large number of threads to extra cores?
    If we can talk about throwing over $1K at a game, isn't anyone the least bit curious how a 4-core Opteron rig with 4GB of RAM in NUMA might handle this game?
  • JarredWalton - Saturday, April 29, 2006 - link

    The P4 EE 955 runs up to four threads, and it doesn't seem to get any help from the extra capability. It could be that further INI tweaks would allow Oblivion to run better on multi-core (more than 2 core) systems, but if going from 1 to 2 cores gives 10 to 20% more performance, there's a very good chance that moving from 2 to 4 cores wouldn't give more than another 5%. Ideally, a CPU-limited game should be able to gain 50% or more performance from multi-threading, but rarely can we realize the ideal case.

    Also, FYI, the servers are at a completely different location than the GPUs for this testing. They also don't support dual X1900 cards in CrossFire - they might not even have x16 PCIe slots, let alone two of them. Server boards are, quite simply, not designed with gaming performance in mind. There are a few out there targeting the 3D graphics workstation market that might support SLI, but not CrossFire. More than two cores will really only become important when we have more than two cores in a single CPU. The desktop/home PC market isn't interested in multiple-socket motherboards (other than a few extreme enthusiasts).
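
    A quick Amdahl's law back-of-the-envelope sketch (an illustration of the core-scaling argument above, not AnandTech test data) shows why a 10-20% gain from the second core leaves little headroom for cores three and four:

    # Back-of-the-envelope Amdahl's law estimate; an illustration only,
    # not AnandTech benchmark data.

    def speedup(parallel_fraction, cores):
        # Amdahl's law: overall speedup given the parallelizable fraction and core count.
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    def parallel_fraction_from_2core_gain(gain):
        # Invert Amdahl's law for 2 cores to recover the parallel fraction
        # from an observed 1 -> 2 core speedup of (1 + gain).
        s2 = 1.0 + gain
        return 2.0 * (1.0 - 1.0 / s2)

    for gain in (0.10, 0.20):
        p = parallel_fraction_from_2core_gain(gain)
        extra = (speedup(p, 4) / speedup(p, 2) - 1.0) * 100
        print("2-core gain %.0f%%: parallel fraction ~%.2f, 2 -> 4 cores adds ~%.1f%% more"
              % (gain * 100, p, extra))

    At the 10% end this works out to roughly 5% extra from two more cores, in line with the estimate above; at the 20% end it is closer to 10%, still a modest return.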
  • goku - Saturday, April 29, 2006 - link

    It really ticks me off that Oblivion couldn't incorporate support for the new Ageia physics processor. It would have been nice to see all those calculations offloaded onto the PPU instead, so that the CPU wouldn't have such an effect on performance.
  • DigitalFreak - Saturday, April 29, 2006 - link

    Since they are using the Havock physics engine, it was never going to happen.
  • DigitalFreak - Saturday, April 29, 2006 - link

    Oops, Havok
  • Madellga - Saturday, April 29, 2006 - link

    I don't think that was supposed to happen, but when I clicked on the link under the title:

    SMP - enhancing performance, it goes to http://www.anandtech.com/printarticle.aspx?i=2747 , which is the same Oblivion CPU article we are reading.

    I think the idea is to take us to the guide you are using, isn't it?

    //s
  • kristof007 - Saturday, April 29, 2006 - link

    Same here. Please fix it Anand when you get a chance.
  • JarredWalton - Saturday, April 29, 2006 - link

    Done. That page was my doing - Anand ran the tests, I wrote page 5. I forgot to paste in the link. (Actually, I ran into some issues with undo/redo and apparently lost the link in the process. I had to rewrite two paragraphs at the time.)

    Jarred Walton
    Hardware Editor
    AnandTech.com
  • shortylickens - Friday, April 28, 2006 - link

    This makes me feel pretty good. I went out of my way to get the cheapest Socket 939 CPU I could find.
    Now that I've had the system for a while, I feel OK about doing one big CPU upgrade and I can actually see a performance boost.
  • bloc - Friday, April 28, 2006 - link

    The Sempron line is AMD's answer to Intel's Celeron line.

    Might it be possible to see benchmarks for the Socket 754 Semprons, as they're budget CPUs with huge overclocks?
  • kmmatney - Saturday, April 29, 2006 - link

    A Sempron 2800+ overclocked to 2.4 GHz performs about the same as an Athlon 64 3700+ clocked at 2.2 GHz. So for a rough estimate, lower the Athlon 64 speed by 10% to get the speed of a Sempron.

    My Sempron overclock at 2.45 GHz was 100% stable for all games and applications I'd ever used until Oblivion. With Oblivion, the game was crashing until I lowered the CPU speed to 2.35 GHz.
  • JarredWalton - Saturday, April 29, 2006 - link

    Part of the problem is that there's only one SLI motherboard for socket 754, and honestly I think that's more of a novelty product than something truly useful. Anyone spending the money on multiple GPUs is better off buying a faster processor as well.

    Anyway, looking at how cache seems to affect performance of the other chips, I would guess that a 128K/256K Sempron would be equivalent to a 512K Athlon 64 running 200 to 400 MHz slower. (i.e., a 2.0 GHz Athlon 64 -- 3200+ -- would probably be about equal to a 2.3-2.4 GHz Sempron.) Single channel memory plus a reduction in cache size should cause a moderate performance hit, clock for clock.

    Of course, none of that means that Sempron chips aren't worth considering, especially with overclocking. Assuming you're not running a super high end graphics configuration, though, you've probably reached the point where you're GPU limited to the same performance whether you have an Athlon X2 or a Sempron.
  • kmmatney - Saturday, April 29, 2006 - link

    If you look at the Tom's Hardware charts, plot the performance of the 256K cache Semprons, and then extrapolate to higher frequencies, a Sempron at 2.45 GHz will perform better than the Athlon 3500+ and closer to the Athlon 3700+. It does start to fall back a little in the heavy multitasking benchmarks, but for gaming and content creation it's very close to an Athlon 3700+.

    For instance, if you take the Far Cry benchmark at 1280 x 1024 (other benchmarks behave the same):

    Sempron 256K 1.4 GHz = 126.9
    Sempron 256K 1.6 GHz = 140.0
    Sempron 256K 1.8 GHz = 151.6
    Sempron 256K 2.0 GHz = 162.7

    This forms a linear trend with very little drop-off as speed increases.
    Now extrapolate to 2.4 GHz:

    Sempron 256K 2.4 GHz = 186.95 (predicted)
    Sempron 256K 2.45 GHz = 189.95 (predicted)
    Sempron 256K 2.5 GHz = 192.9

    Athlon 3700+ San Diego: 190.9
    Athlon 3500+ Venice : 186.2
    Athlon 3200+ Venice : 176.5

    For a given amount of money, an overclocked Sempron paired with a high end video card will give you the best bang-for-buck for gaming.
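
    For anyone who wants to reproduce that extrapolation, here is a minimal sketch of the straight-line fit described above, using only the Far Cry 1280x1024 numbers quoted in this comment (the 2.4 and 2.45 GHz outputs are predictions from those four points, not new measurements):

    # Linear fit and extrapolation of the quoted Sempron 256K Far Cry results.
    clocks = [1.4, 1.6, 1.8, 2.0]          # clock speeds in GHz
    fps    = [126.9, 140.0, 151.6, 162.7]  # quoted Far Cry 1280x1024 frame rates

    # Least-squares line fps = a * clock + b, done by hand to stay dependency-free.
    n = len(clocks)
    mean_x = sum(clocks) / n
    mean_y = sum(fps) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(clocks, fps)) \
        / sum((x - mean_x) ** 2 for x in clocks)
    b = mean_y - a * mean_x

    for clock in (2.4, 2.45):
        print("Sempron 256K %.2f GHz ~ %.1f fps (extrapolated)" % (clock, a * clock + b))

    The fit lands on roughly 187 and 190 fps, which is where the predicted figures above come from.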

  • JarredWalton - Saturday, April 29, 2006 - link

    I'm not talking about it as an overall platform; I'm talking specifically about Oblivion performance. Clearly, looking at the 3500+ vs. 3700+, the jump from 512K to 1024K L2 helps quite a bit. Looking at the Celeron D, 256K and a lower FSB kill performance. It's not too much of a stretch to guess that Sempron chips will do proportionately worse in Oblivion than in many other games/applications.
  • kmmatney - Saturday, April 29, 2006 - link

    Also, the low-end S939 Athlon 64s have come down in price, with the cheapest now at $109, so I would agree that Socket 939 is the way to go now, even for a low-end system.

    If you look at the area of the game that counts, the outdoor scenes, the extra 512K of cache gives you an extra 2 fps. An educated guess would put a Sempron 3100+ running at stock speeds at 28.5 fps. Overclocked to 2.4 GHz it would be around 35 fps. Not great, but very playable.
  • JarredWalton - Saturday, April 29, 2006 - link

    True, you won't notice a 2 FPS difference. The thing is, a few people are talking about an overclocked Sempron versus a stock-clocked Athlon 64. If you're going to overclock one, you have to overclock the other. My experience is that Socket 939 overclocks far better than Socket 754, so a lot of those Athlon 64 3000+ chips can hit 2.5 to 2.7 GHz on air cooling.
  • JarredWalton - Saturday, April 29, 2006 - link

    Oops -- posted too soon.

    You might be talking about a five to 10 frames per second difference at that point, which would definitely be noticeable. Of course, if you're looking at running a Sempron with a typical PCI Express or AGP card, you will likely be GPU limited anyway. Even a GeForce 7600 GT is going to struggle with the outdoor scenes.
  • Powermoloch - Friday, April 28, 2006 - link

    Yeah, I was wondering about that too :). My gaming rig is powered by a Sempron 3100+ (Paris), and I overclocked it to 2.069 GHz. Oblivion runs pretty well most of the time, and I'm really enjoying the game.
