I'm looking for a new graphics card and scan seem to be offering this one, which has a faster core clock than the ebuyer one.
Latest comments (75)
Rakib198
11 Jul 16#75
It's also worth noting that the GTX 1060 will not support an SLI configuration, whereas the RX 480 retains its CrossFire functionality.
rev6
8 Jul 16#74
If you need CUDA then obviously NVIDIA. Encoding is much better on the 970 as far as I know, unless the 480 is much improved over previous generations.
ikramhussain
4 Jul 16#73
Putting gaming aside, for video editing and encoding, would you go for this or the 8GB 480?
Deetea
4 Jul 16#72
Is the difference between the EVGA and OP's deal significant?
tropicocitwo
4 Jul 16#71
If he can wait, he should wait for an aftermarket-cooled 480. The jet-blower reference 480 is a horrible idea! That being said, aftermarket 970s OC very well and can reach 980 levels of performance, so the link to the EVGA SC version up above with £20 cashback is a very nice deal indeed.
Nate1492
3 Jul 16#70
We'll see what they do, but they may have to simply drop the clock rate.
hitman007
3 Jul 16 · 1 · #69
Software fix is due on Tuesday.
Nate1492
3 Jul 16#68
Make sure you buy a fire extinguisher ;-)
But seriously, the PCIe issue is big enough that you should wait until they either fix it (by throttling the card's power draw) or release a new revision of the card.
I wouldn't install a part that, at stock, overloads the slot by over 10%.
That means there is zero headroom for overclocking.
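The slot-power arithmetic behind that complaint is simple enough to sketch. This is a toy calculation: 75W is the commonly quoted PCIe x16 slot budget, and the measured draw below is an assumed example for illustration, not a lab result.

```python
# Toy PCIe slot-power arithmetic for the RX 480 complaints above.
# 75W is the commonly quoted PCIe x16 slot budget; the measured draw
# is an assumed illustrative figure, not a real measurement.
PCIE_SLOT_LIMIT_W = 75.0

def slot_overdraw_pct(measured_w: float, limit_w: float = PCIE_SLOT_LIMIT_W) -> float:
    """Percentage by which the slot draw exceeds the spec limit."""
    return (measured_w / limit_w - 1.0) * 100.0

def oc_headroom_w(measured_w: float, limit_w: float = PCIE_SLOT_LIMIT_W) -> float:
    """Watts left in the slot budget; negative means none at stock."""
    return limit_w - measured_w

measured = 83.0  # assumed stock draw, chosen to illustrate ">10% over"
print(f"overdraw: {slot_overdraw_pct(measured):.1f}%")   # 10.7% over spec
print(f"OC headroom: {oc_headroom_w(measured):.1f} W")   # -8.0 W: nothing left
```

Anything the card pulls above the slot budget at stock is headroom an overclock doesn't have, which is the "zero overhead for OC" point.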
milky228
3 Jul 16#67
That graph is complete ****.
Newegg have 4GB RX 480s for around £180; they outperform this card in a lot of games and decimate it in AMD-optimised titles, so it seems like a better shout if you need a card now. If not, wait until around September when prices have stabilised.
rev6
3 Jul 16#66
The 1060 might be something to look out for. It should be announced/released this month (7th I believe).
Meher377
3 Jul 16#65
Thank you so much for your time, rev6... I'm just waiting for the new releases, which will hopefully bring 970 prices down further... Would you agree?
rev6
3 Jul 16#64
If you want CUDA you buy NVIDIA. There's no alternative. OpenCL with AMD.
Meher377
3 Jul 16#63
Please elaborate... Treat me as ignorant on this matter :-! Lol. And apart from these two, what are the other options?
rev6
2 Jul 16#62
Then the choice is easy :smile:
Meher377
2 Jul 16#61
Indeed, more Cuda the better!
seanmorris100
2 Jul 16#60
'Try' is the key word there; 4k is still miles off, and 4k monitors are silly prices too - a very niche market. The 1080 will last you 2 years easy. The 480 is 2 years behind.
crackshotkv
2 Jul 16 · 1 · #59
Plenty of people try when you get to 980/1080 level cards - plus with 4k/UHD monitors/movies becoming more mainstream, it's going to be the future.
seanmorris100
2 Jul 16#58
No one plays games at 4k...
crackshotkv
2 Jul 16#57
While the 480 is far from futureproof (it's a midrange card, after all), the 1080 isn't really future proof either, given that even it struggles to handle 4k gaming well. We've probably got 2-3 more years before 4k settles down (and before Nvidia has an architecture that can run DX12 as efficiently as AMD's).
elrasho
2 Jul 16#56
Hardly on par; sometimes it's faster, but the power usage and heat generated by the card don't make it a viable option.
That's because you are looking at games released before the arrival of the GTX 1000 series. Nvidia's graphics performance relies heavily on driver optimisation, and with their driver teams packing their bags and moving from the EOL 900 series to the more important 1000 series, the only way for the 970's performance to go in new and future games is down. You only have to look at how the 780Ti went from being quite a bit ahead of the 290x and 970 to trailing both by a fairly large margin in games released after the 900 series launched.
rev6
2 Jul 16#53
Will you use CUDA?
Meher377
2 Jul 16#52
Sorry to interrupt the gamers here. I want to know whether the 970 or the 480 is better in this price bracket for video editing on a PC with an AMD processor. My specs are: AMD 9590 (I know), 16GB RAM, 990FX mobo.
GAVINLEWISHUKD
1 Jul 16 · 1 · #51
September 18, 2014
Did you get a good deal on a DeLorean!? :smiley:
seanmorris100
1 Jul 16#50
Why are people banging on about future proofing lol? It's AMD's weakest new card; it's not even better than a 970, which is 3 years old. It's not future proof at all, it's crap...
The 1080 is futureproof.
Elevation
1 Jul 16 · 1 · #49
Yeah...much like spring onions are scallions and coriander is "cilantro"......nice try.:smirk:
fishmaster
1 Jul 16 · 1 · #48
Those Nvidia graphs are utterly ridiculous. If they're by Nvidia they're absurd - totally taking advantage of people who can't read graphs properly. Shameful marketing by Nvidia: look how the bars appear to indicate double the performance, while the numbers underneath tell the true story. Nvidia just hope you look at the colourful bars and think 'ooh, Nvidia are way ahead here'.
Ah, read the comments - seems most people have spotted it >
freefall
1 Jul 16#47
The picture above is actually a Basin and not a Sink .... nice try
Rid1
1 Jul 16#46
You forgot about Shadowplay
ReadySetGoGo
1 Jul 16#45
Excuse me???
ro53ben
1 Jul 16 · 1 · #44
I googled RX 480 for more info and found two threads:
So a beta version of the game engine runs slower on DX12 using DX11 optimised drivers?
There isn't even a comparison to AMD in this article, at all.
TL;DR - A game that runs badly on AMD under DX11 runs significantly better under DX12. NVidia don't care as they already run just fine under DX11 but, when the DX12 engine is ready, will happily release a game ready driver that will take advantage.
mrcyco
1 Jul 16#42
To be fair, you almost can! My R9 290 can play most DX11 games on high at 1440p, and the RX 480 is faster, so that pretty much covers DX11. Seeing as AMD GPUs are in this gen and (supposedly) next-gen consoles, and Microsoft are pushing DX12 hard, I'd be surprised if DX12 doesn't dominate going forward.
BetaRomeo
1 Jul 16#41
Noooooo! Just because AMD's cards were less efficient, doesn't mean they aren't cost/performance-competitive!
In fact, if you have a decent CPU, don't need SteamOS/Linux, and don't have a Gsync monitor, I'd kindly ask you to please slightly-more-strongly consider an AMD card, because Nvidia have just a bit too much marketshare.
(I'm not saying to blindly buy AMD... just... give them another look? Please? Thank you!)
Edit: But don't buy a 480 until the PCI-E power overdraw situation is clearer. They've apparently damaged a few motherboards already.
ro53ben
1 Jul 16#40
If you're on a budget...maybe. Doesn't work too well with my G-Sync monitor though. :sunglasses:
rev6
1 Jul 16 · 2 · #39
Yes. Let's just forget about all the DX11 games :smile:
crackshotkv
1 Jul 16#38
Or if you intend to get the most out of DX12/Vulkan games, AMD seems to have the edge in most cases
ro53ben
1 Jul 16#37
Gotta laugh at the "future proof" claims above. Nothing in PC Gaming is future proof, not even my mouse mat.
Rhythmeister
1 Jul 16#36
I had hoped that the GTX 1060 would be the competitor to the RX 480 :confused:
alasdairgray
1 Jul 16#35
Do you know if Battlefield 1 will have a support agreement with AMD like they have done with previous Battlefields? Slightly better performance etc?
Freeboski
1 Jul 16#34
How can this week get any worse? I bought a 970 at the start of the week, AMD released the 480, and now 970 prices have dropped....
I give up!
czrsiNk
1 Jul 16#33
This.
ro53ben
1 Jul 16 · 1 · #32
So, in short, if you're mining or using one of the rare AMD optimised games...go AMD.
Otherwise, you're better off with nVidia.
BetaRomeo
1 Jul 16 · 4 · #31
Everything you've said is correct, but it stops in the middle and doesn't reach the conclusion.
Maxwell and Pascal can do graphics and compute tasks simultaneously, albeit fewer at once than AMD's line-up (as you can see, running too many forces a context switch on Nvidia hardware). So by improving their pre-emption and context switching, Nvidia have allowed their cards to run at pretty much maximum capacity even without async compute. Their DX11 instruction scheduling is what lets them max out their cards.
AMD, on the other hand, have had bigger chips with higher theoretical performance for years (just compare the TFLOPs), and still been beaten by their Nvidia equivalents, as despite their GCN architecture, when it comes to processing their workflow they are at a distinct disadvantage in larger applications such as games (but you're right, mining is handled fine). Their architecture allows instructions to flow without stacking, which results in a reduced flow of calls when compared with scheduling - which might sound like an odd choice, but does produce a significant reduction in latency.
Async compute helps alleviate this scheduling bottleneck that we find in DX11 games on AMD GPUs - again, in larger applications - and allows AMD's chips to reach what we'd expect from them (in addition to DX12 relieving some CPU pressure). This is pretty much spelled out by the hardware specifications of the Nvidia and AMD cards, and their comparative performances.
Throwing async compute onto Maxwell would hardly benefit Nvidia at all (hmmm... although it might improve mining performance?), because the hardware is already approaching its limits while using a scheduler-based architecture - clever pre-emption is the rough equivalent they use instead. I'm still working my way through the Pascal white papers, but it seems to be the same situation there (unless you've found something different..?).
TL;DR: Nvidia's architecture for DX11 was entirely API-appropriate, and maxed out their hardware, while we're only now seeing the true benefits of the untapped potential of AMD's GCN in DX12.
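The scheduling trade-off being argued here can be caricatured in a few lines of code. This is purely a toy latency model with made-up per-frame numbers, not a claim about real hardware:

```python
# Toy model of the async-compute argument: made-up per-frame numbers,
# not a GPU simulator.

def serial_time_ms(graphics_ms: float, compute_ms: float, switch_ms: float) -> float:
    """Queues run back to back, paying a context-switch cost between them."""
    return graphics_ms + compute_ms + switch_ms

def async_time_ms(graphics_ms: float, compute_ms: float) -> float:
    """Compute overlaps graphics on otherwise-idle units, so the longer job dominates."""
    return max(graphics_ms, compute_ms)

g, c, switch = 12.0, 4.0, 0.5  # assumed workloads and switch cost, in ms
print(serial_time_ms(g, c, switch))  # 16.5
print(async_time_ms(g, c))           # 12.0 - the overlap win async compute offers
```

If a card's scheduler already keeps it near the serial optimum (the Nvidia case above), the overlap win is small; if the DX11 path leaves units idle (the AMD case), it's large.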
Nate1492
1 Jul 16#30
So, based on 'rumors' I heard the reference AMD card could OC to 1600+.
I think I'll stick with facts good sir.
cheesemp
1 Jul 16 · 1 · #29
Considering it's going to cost ~£300 (based on rumours), I imagine the 480 will do pretty well.
adam0812
1 Jul 16#28
The 390X 8GB is £250 as well, so worth considering.
adam0812
1 Jul 16#27
Dying Light and Advanced Warfare used more than 3GB at 1080p and used to stutter like crazy; it was maddening on my 780Ti. Got a 980Ti - smooth as silk.
Doogeh
1 Jul 16#26
If you're stuck for a GPU below £200, then get the RX480 with 4GB of memory. If you have £230 to blow, go for the 8GB. Simple as that. The custom cards will be a lot better than the vanilla AMD one though, so maybe wait...
fishmaster
1 Jul 16 · 4 · #25
Disagree entirely, by definition this can't be how it works.
Asynchronous Compute - essentially, taking out-of-order instructions and processing them quickly. AMD's architecture is designed with this in mind; Nvidia's architecture can't do this, which is why you always see AMD cards used for Bitcoin mining - the GCN architecture was better for those computations.
Nvidia will always suffer a 'bottleneck' because the architecture doesn't support async compute. In real-world terms there are many factors which mean that async compute won't be a walkover for AMD in DX12 games - e.g. developers writing specifically to optimise for Nvidia cards.
So it's not that Nvidia can cope better with these computations and therefore it's less of a bottleneck for Nvidia; it's the other way round: AMD's architecture can process async computations and Nvidia's can't, not even with the Pascal architecture.
MBeeching
1 Jul 16 · 1 · #24
Sorry, misunderstood. Thought you meant a GTX 1080 only using 3gb VRAM :stuck_out_tongue:
moneybag
1 Jul 16 · 1 · #23
1080 as stated
MBeeching
1 Jul 16#22
What resolution are you using? On a 980Ti at 2160p, GTA V is usually close to 5GB and I've seen Dark Souls 3 at 5.3GB.
ReadySetGoGo
1 Jul 16 · 3 · #21
Because AMD just stole the price/performance crown with their new RX480
mrcyco
30 Jun 16 · 10 · #1
Unless you NEED Nvidia (Hairworks, the Nvidia controller... Moonlight), you should really consider the RX480. On par with the 970 but more RAM and a new chipset (so more room to grow), and the 8GB wouldn't cost too much more than this.
mikem1989 to mrcyco
1 Jul 16#20
+1
RX480 is much more future proof.
johnthehuman
1 Jul 16 · 1 · #16
Have we stopped the 3.5GB jokes now?
:disappointed:
moneybag to johnthehuman
1 Jul 16#19
Prefer that to the thousands of hilarious brexit jokes that people work SO hard to think of!
As an aside, on 1080 gaming I've never gone over 3GB VRAM usage on my games, though the 8GB cards will start to be fully utilised in the very near future.
ro53ben
1 Jul 16#18
Decent price for an old card. Given how many better cards NVidia make now, no wonder the prices are falling.
seany1977
1 Jul 16#17
I wonder why this is not hotter? I think this is a very good price. Weren't these cards going for about £279 six months ago?
Elevation
1 Jul 16 · 7 · #15
dsided
1 Jul 16#14
Any recommendations for (W)QHD 2560x1440 gaming with future Vive support, on a budget? No FPS - more walking simulators and racing games.
I've read that the 960Ti is the same as a 970 if you overclock it; then again, I could save up for a better card - no rush atm.
Rhythmeister
1 Jul 16#13
I'd like to think that the GTX 1060 will be the new budget king of 1920x1080 gaming which will make the RX 480 virtually obsolete almost immediately :disappointed:
mrcyco
1 Jul 16 · 2 · #11
Async is going to change things.
Currently running an R9 290 Tri-X; my fps in Warhammer went from 37.4 to 49.2 with DX12, and from reading posts in discussions, Nvidia (on the other hand) is struggling. I'm considering cancelling my order for the 1070 AMP Extreme.
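For what it's worth, those two figures work out like this (plain arithmetic on the fps numbers quoted above):

```python
# Relative gain from the quoted Warhammer DX11 -> DX12 fps numbers.
dx11_fps, dx12_fps = 37.4, 49.2

gain_pct = (dx12_fps / dx11_fps - 1.0) * 100.0
frame_dx11_ms = 1000.0 / dx11_fps  # average frame time before
frame_dx12_ms = 1000.0 / dx12_fps  # average frame time after

print(f"{gain_pct:.1f}% faster")                                      # 31.6% faster
print(f"{frame_dx11_ms:.1f} ms -> {frame_dx12_ms:.1f} ms per frame")  # 26.7 ms -> 20.3 ms
```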
BetaRomeo to mrcyco
1 Jul 161#12
I think you've got it the wrong way around. Async compute doesn't give AMD cards a boost. Rather, it relieves a significant software bottleneck. Nvidia doesn't "struggle" with async compute; it's just largely unnecessary, as their cards don't have the same bottleneck to begin with.
Either way, I'd cancel that order for a 1070 until prices have settled down a bit.
xela333
30 Jun 16 · 1 · #10
No, it's a proprietary Nvidia technology. The AMD version is called FreeSync, and that doesn't work with Nvidia cards.
Decent price and still tempting over the 480 until the aftermarket cards arrive at least.
Rhythmeister
30 Jun 16#3
Faster than the GTX 970 in the real world, just ask Techpowerup :wink: Probably better to wait for the GTX 1060 actually, it's getting announced on the 7th of July. Personally I'd rather support the underdog BUT my new monitor happens to support G-Sync and can do 144Hz so I may have to let the rest of you support them for me :man:
ReadySetGoGo
30 Jun 168#2
That is a good price for the 970 and means Nvidia have recognised how good AMD's new 480 is. I still went with the 480 though as it's newer technology and likely to improve performance over time compared to the 970.
These two articles are interesting in that respect. I haven't actually ever seen any proof of the so-called 'gimping':
http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/70125-gtx-780-ti-vs-r9-290x-rematch-8.html
http://www.babeltechreviews.com/nvidia-forgotten-kepler-gtx-780-ti-vs-290x-revisited/4/
Other links shared in the thread:
https://www.reddit.com/r/Amd/comments/4qfwd4/rx480_fails_pcie_specification/
http://videocardz.com/61753/nvidia-geforce-gtx-1060-specifications-leaked-faster-than-rx-480
Deal link: https://www.scan.co.uk/products/4gb-evga-geforce-gtx-970-sc-gaming-acx-20-pcie-30-7010mhz-gddr5-gpu-1165mhz-boost-1317mhz-cores-1664