Guru3D.com Forums

Old
  (#1251)
PrMinisterGR
Ancient Guru
 
PrMinisterGR's Avatar
 
Videocard: Sapphire 7970 Quadrobake
Processor: Core i7 2600k@4.5GHz
Mainboard: Sapphire Pure Black P67
Memory: Corsair Vengeance 16GB
Soundcard: ASUS Xonar D2X
PSU: EVGA SuperNova 750 G2
Default 05-15-2017, 16:49 | posts: 6,827

The only real "hole" in GCN's design is triangle throughput and graphics memory performance. Polaris was a great leap towards fixing both; if Vega hits at least some of these targets, things will be interesting.

We should note, though, that Volta (GV100) also includes the Tensor cores and some extra hardware to reach that 810mm2. In theory, if that were removed I can see it being around 700mm2, which is still insane. I don't believe we'll see it in a consumer-grade card until 7nm hits. For people forgetting their numbers, TSMC's "12nm" is basically an optimized 16nm node just for NVIDIA, so there aren't any really great node size reductions at play here. A 3840 shader core Titan is something like 610mm2, so for a 5000+ shader Volta GPU, 700mm2 is actually an optimistic target.
   
Reply With Quote
 
Old
  (#1252)
Denial
Ancient Guru
 
Denial's Avatar
 
Videocard: EVGA 1080Ti
Processor: i7-7820x
Mainboard: X299 SLI PLUS
Memory: 32GB GSkill 3600
Soundcard: ZxR & HD800 Lyr/KEF LS50
PSU: Seasonic 1000w
Default 05-15-2017, 16:53 | posts: 10,914 | Location: Terra Firma

Quote:
Originally Posted by PrMinisterGR View Post
If it's the professional card, that would mean even higher clocks for consumer cards. If we use the simple OC rule of 10% (which even Fiji adhered to), then that card will go up to at least 1760MHz. Which is insane for an AMD GPU.

The insane GV100 Volta, which will never be sold to normal consumers unless it's at 7nm, has 15Tflops of compute power at 810mm2. NVIDIA's CEO admitted that they have to go through multiple wafers just to get one properly working.

IF AMD has managed at least Maxwell-level graphics/triangle performance with Vega, and gotten those frequencies, a theoretically overclocked Vega will reach 14.5Tflops of compute power. That's insane: it's like a year earlier, and in a consumer product that is 40% smaller (Vega is 530mm2).

Quote:
Originally Posted by PrMinisterGR View Post
The only real "hole" in GCN's design is triangle throughput and graphics memory performance. Polaris was a great leap towards fixing both; if Vega hits at least some of these targets, things will be interesting.

We should note, though, that Volta (GV100) also includes the Tensor cores and some extra hardware to reach that 810mm2. In theory, if that were removed I can see it being around 700mm2, which is still insane. I don't believe we'll see it in a consumer-grade card until 7nm hits. For people forgetting their numbers, TSMC's "12nm" is basically an optimized 16nm node just for NVIDIA, so there aren't any really great node size reductions at play here. A 3840 shader core Titan is something like 610mm2, so for a 5000+ shader Volta GPU, 700mm2 is actually an optimistic target.
The Titan Xp is only 471mm2, not 610. GP100 is 610, but that comes with FP64. My 1080Ti right now hits 14.5Tflops (2050MHz OC) in a 471mm2 package, and it doesn't have a year of architectural enhancements like Vega does, nor the denser 14nm process.

A 5000+ shader GV chip is definitely doable at ~600mm2 - the problem is TDP. At 3840 shaders Nvidia is already at 250W. But Nvidia is claiming Volta has 50% perf/W improvements in FP32 - thus higher core counts and/or clocks should be doable at 250W.
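A back-of-the-envelope check of the TFLOPS numbers being thrown around in this thread (the shader counts and clocks below are the thread's figures and my assumptions, not official specs):

```python
# Peak FP32 throughput: each shader retires one FMA (2 ops) per clock.
def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz / 1e6

# GP102 (1080 Ti, 3584 shaders) at a 2050 MHz overclock:
print(round(fp32_tflops(3584, 2050), 1))  # -> 14.7, close to the 14.5 quoted

# Hypothetical 5120-shader Volta part at an assumed 1600 MHz:
print(round(fp32_tflops(5120, 1600), 1))  # -> 16.4
```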

Last edited by Denial; 05-15-2017 at 16:56.
   
Reply With Quote
Old
  (#1253)
PrMinisterGR
Ancient Guru
 
PrMinisterGR's Avatar
 
Videocard: Sapphire 7970 Quadrobake
Processor: Core i7 2600k@4.5GHz
Mainboard: Sapphire Pure Black P67
Memory: Corsair Vengeance 16GB
Soundcard: ASUS Xonar D2X
PSU: EVGA SuperNova 750 G2
Default 05-15-2017, 17:01 | posts: 6,827

Quote:
Originally Posted by Denial View Post
The Titan Xp is only 471mm2, not 610. GP100 is 610, but that comes with FP64. My 1080Ti right now hits 14.5Tflops (2050MHz OC) in a 471mm2 package, and it doesn't have a year of architectural enhancements like Vega does, nor the denser 14nm process.

A 5000+ shader GV chip is definitely doable at ~600mm2 - the problem is TDP. At 3840 shaders Nvidia is already at 250W. But Nvidia is claiming Volta has 50% perf/W improvements in FP32 - thus higher core counts and/or clocks should be doable at 250W.
Aaaaand... you're completely right. Now we have to wonder about the actual architectural improvements, and whether they all went into improving the clocks or into other parts as well. I have also learned to take NVIDIA's performance claims from presentations with a grain of salt. I'm sure they've made great strides, but I don't believe the 50% will hold in all situations.
   
Reply With Quote
Old
  (#1254)
OnnA
Ancient Guru
 
OnnA's Avatar
 
Videocard: Nitro Fiji-X HBM 1150/570
Processor: ZEN x8 k17 + Nepton 280L
Mainboard: ASUS Crosshair VI Hero
Memory: 16GB 3200 CL16 1T Ripjaws
Soundcard: SB-z Nichicon + Wood 5.1
PSU: Seasonic-X 750W Platinum
Question 05-15-2017, 22:26 | posts: 2,956 | Location: HolyWater Village


A little off-topic or not?

I just can't stand those pseudo-news pieces claiming (not literally) that HBM2 is slower than GDDR5x or GDDR6

OMG

They show up with some small numbers like ~512GB/s

HBM1 was 128GB/s per stack at 1.2V (spec / 1.3V on the GPU). That gives us ~512GB/s from 4 stacks
Basically my Fury has ~700GB/s with OC + V-mod + timing mod
Also, HBM1 has a 4096-bit bus at 1GT/s (transfers per second)
HBM2 is ~200-256GB/s per stack, so if Vega has 16GB then it will pack 4x256GB/s
and HBM2 has a 4096-bit bus at 2GT/s ! (transfers per second)
So that gets us close to ~1TB/s !!!! Yes, twice HBM1 (with OC, V-mod and timing mod it can go even higher)
And of course HBM has next-gen'ish response times (unmatched to date) !!
And it's high bandwidth, a ~4096-bit bus !!
And it will "eat" only ~15W !!!
And it's small
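The per-stack arithmetic above can be sketched quickly (per-stack interface width and transfer rates per the JEDEC HBM figures; the 4-stack Vega configuration is an assumption):

```python
# Per-stack bandwidth = interface width (bits) / 8 * transfer rate (GT/s).
# Each HBM stack has a 1024-bit interface; four stacks give the 4096-bit bus.
def stack_bw_gbs(bus_bits: int, gt_per_s: float) -> float:
    return bus_bits / 8 * gt_per_s

print(stack_bw_gbs(1024, 1.0) * 4)  # HBM1, 4 stacks -> 512.0 GB/s (Fiji)
print(stack_bw_gbs(1024, 2.0) * 4)  # HBM2, 4 stacks -> 1024.0 GB/s (~1 TB/s)
```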

Why is Fiji so great at 1440p or 4K with "only" 4GB of HBM?

Of course, it's only IMHO

Last edited by OnnA; 05-17-2017 at 18:12.
   
Reply With Quote
 
Old
  (#1255)
Tuga
Newbie
 
Videocard: nvidia 670 gtx
Processor: i7 5820k
Mainboard:
Memory: 32gb ddr4
Soundcard: SaffPro24,Gene M040,HD800
PSU: really have no idea
Default 05-15-2017, 22:49 | posts: 41

Things are getting very interesting now. I just hope the price isn't $999 or something...
   
Reply With Quote
Old
  (#1256)
Loophole35
Ancient Guru
 
Videocard: EVGA 1080ti SC
Processor: 2600k@4.8 w/ NH-D15S
Mainboard: P8Z77-V
Memory: G-Skills ddr3 16GB@1866
Soundcard: SoundBlaster Z
PSU: TX850
Default 05-16-2017, 16:39 | posts: 8,299 | Location: FLA,USA

We are supposed to be getting some official news on Vega and possibly "Threadripper" today from AMD.

Source: WCCF
   
Reply With Quote
Old
  (#1257)
Maddness
Master Guru
 
Maddness's Avatar
 
Videocard: Asus RX480 Strix
Processor: 6900k
Mainboard: Rampage V Edition 10
Memory: 16Gb Dominator DDR4 3400
Soundcard: Asus Xonar STX
PSU: Corsair 1200
Default 05-16-2017, 20:18 | posts: 388 | Location: Auckland

Quote:
Originally Posted by Loophole35 View Post
We are supposed to be getting some official news on Vega and possibly "Threadripper" today from AMD.

Source: WCCF
That would be great. I thought it was more along the lines of investor news though.
   
Reply With Quote
Old
  (#1258)
OnnA
Ancient Guru
 
OnnA's Avatar
 
Videocard: Nitro Fiji-X HBM 1150/570
Processor: ZEN x8 k17 + Nepton 280L
Mainboard: ASUS Crosshair VI Hero
Memory: 16GB 3200 CL16 1T Ripjaws
Soundcard: SB-z Nichicon + Wood 5.1
PSU: Seasonic-X 750W Platinum
Lightbulb 05-16-2017, 20:30 | posts: 2,956 | Location: HolyWater Village

Hmm, some new VEGA "hype news" hit my inbox.
Grain of salt, as always.

So:

$399 -> Better than 1080
$499 -> Better than 1080Ti/Titan
$599 -> Best until Big Volta GDDR6

VI.5 ....

Today is AMD Financial Analyst Day 2017.
-> http://webcastevents.com/events/amd/...ive/player.htm

UPD. 1
Confirmed by Mr. Papermaster
Raja K. will give us a running VEGA demo !!!!

UPD. 2
Sniper Elite 4 PC 1440p

Vega will pack at least 40% more perf. than the Fury-X (base Vega)
So:
Fury-X: 71FPS average
and
VEGA: ~100FPS average (IMO it will have ~125FPS)

and at 4K:
VEGA: ~71-80FPS

UPD. 3

57-80FPS in Rise of the Tomb Raider

The final configuration of Vega was finalized some two years ago, and AMD's vision for it was a GPU that could plow through 4K resolutions at over 60 frames per second. And Vega has achieved it: Sniper Elite 4 at over 60 FPS in 4K. Afterwards, Raja talked about AMD's High Bandwidth Cache Controller, running Rise of the Tomb Raider while giving the system only 2 GB of video memory, with the HBCC-enabled system delivering more than 3x the minimum frame rates of the non-HBCC system, something we've seen in the past, though on Deus Ex: Mankind Divided. So now we know that wasn't just a one-shot trick.


! So we have almost 3x Fury-X performance (in some areas "only" 1.5x) !

UPD. 4

Some tests of the VEGA Frontier Edition: 8-pin + 6-pin PCIe, so = 225W <- lol that PowaHHH

Last edited by OnnA; 05-17-2017 at 14:09.
   
Reply With Quote
Old
  (#1259)
Loophole35
Ancient Guru
 
Videocard: EVGA 1080ti SC
Processor: 2600k@4.8 w/ NH-D15S
Mainboard: P8Z77-V
Memory: G-Skills ddr3 16GB@1866
Soundcard: SoundBlaster Z
PSU: TX850
Default 05-16-2017, 21:19 | posts: 8,299 | Location: FLA,USA

Quote:
Originally Posted by OnnA View Post
Hmm, some new VEGA "hype news" hit my inbox.
Grain of salt, as always.

So:

$399 -> Better than 1080
$499 -> Better than 1080Ti/Titan
$599 -> Best until Big Volta GDDR6

VI.5 ....

Today is AMD Financial Analyst Day 2017.
-> http://webcastevents.com/events/amd/...ive/player.htm
Same source that said Vega would launch in October?
   
Reply With Quote
Old
  (#1260)
OnnA
Ancient Guru
 
OnnA's Avatar
 
Videocard: Nitro Fiji-X HBM 1150/570
Processor: ZEN x8 k17 + Nepton 280L
Mainboard: ASUS Crosshair VI Hero
Memory: 16GB 3200 CL16 1T Ripjaws
Soundcard: SB-z Nichicon + Wood 5.1
PSU: Seasonic-X 750W Platinum
Lightbulb 05-17-2017, 12:23 | posts: 2,956 | Location: HolyWater Village

AMD Financial Analyst Day Raja Koduri

Raja was very forthcoming today on AMD's stage, as he usually is. He introduced AMD's new Radeon Vega Frontier Edition, a product built to power deep machine learning as well as being an excellent workstation card.

Designed to handle the most demanding design, rendering, and machine intelligence workloads, this powerful new graphics card excels in:

Machine learning. Together with AMD’s ROCm open software platform, Radeon Vega Frontier Edition enables developers to tap into the power of Vega for machine learning algorithm development. Frontier Edition delivers more than 50 percent more performance than today’s most powerful machine learning GPUs.

Advanced visualization. Radeon Vega Frontier Edition provides the performance required to drive increasingly large and complex models for real-time visualization, physically-based rendering and virtual reality, through the design phase as well as the rendering phase of product development.

VR workloads. Radeon Vega Frontier Edition is ideal for VR content creation, supporting AMD’s LiquidVR technology to deliver the gripping content, advanced visual comfort and compatibility needed for next-generation VR experiences.

Revolutionized game design workflows. Radeon Vega Frontier Edition simplifies and accelerates game creation by providing a single GPU optimized for every stage of a game developer’s workflow, from asset production to playtesting and performance optimization.

Based on the new Vega GPU architecture, Radeon Vega Frontier Edition has been built from the ground up and features Vega’s High Bandwidth Cache Controller, the cornerstone of the world’s most advanced GPU memory architecture. Combined with HBM2, Radeon Vega Frontier Edition expands the capacity of traditional GPU memory to 256TB, allowing users to tackle massive datasets with ease.
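As a quick sanity check on the press release's number, the quoted 256TB is exactly what a 48-bit byte address space covers:

```python
# 2^48 bytes = 2^8 * 2^40 bytes = 256 TiB of addressable memory,
# matching the "256TB" figure AMD quotes for the HBCC address space.
addressable_tib = 2 ** 48 // 2 ** 40
print(addressable_tib)  # -> 256
```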

The official Radeon Vega Frontier Edition page is now live

-> http://radeon.com/en-us/vega-architecture/

Last edited by OnnA; 05-17-2017 at 13:32.
   
Reply With Quote
Old
  (#1261)
Lane
Ancient Guru
 
Videocard: 2x HD7970 - EK Waterblock
Processor: I7 4930K H2o EK Supremacy
Mainboard: Asus X79 Deluxe
Memory: G-Skill C9 2133mhz 16GB
Soundcard: X-FI Titanium HD + SP2500
PSU: CM 1000W
Default 05-18-2017, 18:39 | posts: 6,350 | Location: Switzerland

Quote:
Originally Posted by PrMinisterGR View Post
The only real "hole" in GCN's design is triangle throughput and graphics memory performance. Polaris was a great leap towards fixing both; if Vega hits at least some of these targets, things will be interesting.

We should note, though, that Volta (GV100) also includes the Tensor cores and some extra hardware to reach that 810mm2. In theory, if that were removed I can see it being around 700mm2, which is still insane. I don't believe we'll see it in a consumer-grade card until 7nm hits. For people forgetting their numbers, TSMC's "12nm" is basically an optimized 16nm node just for NVIDIA, so there aren't any really great node size reductions at play here. A 3840 shader core Titan is something like 610mm2, so for a 5000+ shader Volta GPU, 700mm2 is actually an optimistic target.
Well, the memory "bottleneck" was mainly due to the 4GB limitation of HBM1 (Fiji)... As for triangle output (for tessellation), I'd say it depends on the game (we'll see what this new geometry engine does on Vega). Benchmarks with extreme tessellation don't tell the whole story (if games don't use extreme levels, that won't matter).

The real advantage NVIDIA has (and which was a big secret), used since Maxwell, is tile-based rasterization. Vega will get it too.

This technique is used a lot in mobile GPUs.

http://www.anandtech.com/show/10536/...ation-analysis

http://www.realworldtech.com/tile-ba...n-nvidia-gpus/

It accelerates frame rendering and triangle output a lot (by not computing what is not needed) and has a big impact on power usage. Basically rendering only what is needed (without having to cull anything).

The funniest part is that nobody knew it was already used in Maxwell.
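For the curious, the binning step of a tile-based rasterizer can be sketched like this (a toy illustration only; the tile size and data layout are assumptions, not how Maxwell or Vega actually implement it):

```python
TILE = 16  # assumed tile size in pixels

def bin_triangles(triangles, width, height):
    """Map each tile (tx, ty) to the triangles whose screen-space bounding
    box overlaps it; the GPU can then shade one tile at a time from on-chip
    cache instead of round-tripping the whole framebuffer through DRAM."""
    bins = {}
    for i, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        x0, x1 = max(min(xs), 0), min(max(xs), width - 1)
        y0, y1 = max(min(ys), 0), min(max(ys), height - 1)
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

# One small triangle in a 64x64 target touches only tile (0, 0):
print(bin_triangles([[(0, 0), (10, 5), (5, 12)]], 64, 64))
```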

Last edited by Lane; 05-18-2017 at 18:55.
   
Reply With Quote
Old
  (#1262)
Fox2232
Ancient Guru
 
Fox2232's Avatar
 
Videocard: Fury X - XL2420T(Z)@144Hz
Processor: i5-2500k@4.5GHz NH-D14
Mainboard: MSI Z68A-GD80[g3]
Memory: 4x4GB 1600MHz 9,9,8,20 1T
Soundcard: Essence ST / AKG K-701
PSU: FSP Gold series 750W
Default 05-18-2017, 19:08 | posts: 5,514 | Location: EU, CZ, Brno

Quote:
Originally Posted by Lane View Post
Well, the memory "bottleneck" was mainly due to the 4GB limitation of HBM1 (Fiji)... As for triangle output (for tessellation), I'd say it depends on the game (we'll see what this new geometry engine does on Vega). Benchmarks with extreme tessellation don't tell the whole story (if games don't use extreme levels, that won't matter).

The real advantage NVIDIA has (and which was a big secret), used since Maxwell, is tile-based rasterization. Vega will get it too.

This technique is used a lot in mobile GPUs.

http://www.anandtech.com/show/10536/...ation-analysis

http://www.realworldtech.com/tile-ba...n-nvidia-gpus/

It accelerates frame rendering and triangle output a lot (by not computing what is not needed) and has a big impact on power usage. Basically rendering only what is needed (without having to cull anything).

The funniest part is that nobody knew it was already used in Maxwell.
That sounds pretty dumb. Did you mean something else than what you wrote?
Because culling IS the process which excludes geometry from being processed further.

"Basically rendering only what is needed" means that all unnecessary objects have been culled... "(without having to cull anything)" then sounds pretty stupid.

Btw, the guy in that video has a theory. Theories often happen to be wrong. From what I have seen there, I have a much better theory. But I'll keep it to myself, as I have no need to incite a flame war.
   
Reply With Quote
Old
  (#1263)
OnnA
Ancient Guru
 
OnnA's Avatar
 
Videocard: Nitro Fiji-X HBM 1150/570
Processor: ZEN x8 k17 + Nepton 280L
Mainboard: ASUS Crosshair VI Hero
Memory: 16GB 3200 CL16 1T Ripjaws
Soundcard: SB-z Nichicon + Wood 5.1
PSU: Seasonic-X 750W Platinum
Talking 05-18-2017, 19:23 | posts: 2,956 | Location: HolyWater Village

Quote:
Originally Posted by Fox2232 View Post
I have a much better theory. But I'll keep it to myself, as I have no need to incite a flame war.
I have some theories + some leaks too

So my leak about Sniper Elite 4 was correct

So if we stick with only those 2 demos, Rise of the Tomb Raider and Sniper Elite 4,
we can clearly see that this Vega has 1080Ti performance in both games at 4K !

Those are the facts.

Q. What don't we know?
A. Is this a big Vega or a medium/small one? (It's possible tho)
A. Will the big gaming Vega have 16GB HBM2? (2GT/s vs 1GT/s for HBM1)
A. Will it have 2x8-pin PCIe or 1x8 + 1x6?

Do you think it's possible for us (Fiji owners) to have HBCC to turn ON in ReLive? (I believe so)

Please give us some nerd opinions

Last edited by OnnA; 05-18-2017 at 19:33.
   
Reply With Quote
Old
  (#1264)
PrMinisterGR
Ancient Guru
 
PrMinisterGR's Avatar
 
Videocard: Sapphire 7970 Quadrobake
Processor: Core i7 2600k@4.5GHz
Mainboard: Sapphire Pure Black P67
Memory: Corsair Vengeance 16GB
Soundcard: ASUS Xonar D2X
PSU: EVGA SuperNova 750 G2
Default 05-18-2017, 19:30 | posts: 6,827

Quote:
Originally Posted by Fox2232 View Post
Btw, the guy in that video has a theory. Theories often happen to be wrong. From what I have seen there, I have a much better theory. But I'll keep it to myself, as I have no need to incite a flame war.
Please share; that's why we have this place, for conversation and flaming each other over indifferent tech companies
   
Reply With Quote
Old
  (#1265)
Lane
Ancient Guru
 
Videocard: 2x HD7970 - EK Waterblock
Processor: I7 4930K H2o EK Supremacy
Mainboard: Asus X79 Deluxe
Memory: G-Skill C9 2133mhz 16GB
Soundcard: X-FI Titanium HD + SP2500
PSU: CM 1000W
Default 05-18-2017, 20:22 | posts: 6,350 | Location: Switzerland

Quote:
Originally Posted by Fox2232 View Post
That sounds pretty dumb. Did you mean something else than what you wrote?
Because culling IS the process which excludes geometry from being processed further.

"Basically rendering only what is needed" means that all unnecessary objects have been culled... "(without having to cull anything)" then sounds pretty stupid.

Btw, the guy in that video has a theory. Theories often happen to be wrong. From what I have seen there, I have a much better theory. But I'll keep it to myself, as I have no need to incite a flame war.
I was a bit quick to write it; I don't want to get into a technical explanation, but just read the AnandTech article about it.
   
Reply With Quote
Old
  (#1266)
Denial
Ancient Guru
 
Denial's Avatar
 
Videocard: EVGA 1080Ti
Processor: i7-7820x
Mainboard: X299 SLI PLUS
Memory: 32GB GSkill 3600
Soundcard: ZxR & HD800 Lyr/KEF LS50
PSU: Seasonic 1000w
Default 05-18-2017, 20:30 | posts: 10,914 | Location: Terra Firma

I didn't think a tile-based renderer does culling - it just batches the tiles up and keeps them in cache instead of pushing the entire frame to and from memory - which decreases the needed bandwidth and obviously saves power. Maybe the same hardware unit that does the tiling also does culling - but I don't think the two are completely related.

Tom Peterson also mentioned that it uses GPU cycles, so it actually slightly lowers performance. They also turn it on/off in driver profiles, because it isn't worth doing in some titles.

Also, David Kanter's company has provided consulting services to AMD, Nvidia and other major tech companies. He's not a random YouTuber or something - he works in the industry.
   
Reply With Quote
Old
  (#1267)
OnnA
Ancient Guru
 
OnnA's Avatar
 
Videocard: Nitro Fiji-X HBM 1150/570
Processor: ZEN x8 k17 + Nepton 280L
Mainboard: ASUS Crosshair VI Hero
Memory: 16GB 3200 CL16 1T Ripjaws
Soundcard: SB-z Nichicon + Wood 5.1
PSU: Seasonic-X 750W Platinum
Lightbulb 05-18-2017, 22:56 | posts: 2,956 | Location: HolyWater Village

Raja Koduri explains where Radeon RX Vega is in a Reddit AMA

Here are the most interesting questions and answers from Reddit’s AMA -> https://www.reddit.com/r/Amd/comment..._amd_and_were/

Raja Koduri: I want to start things off today by saying thank you to everyone for all of your excitement, energy and enthusiasm for all things AMD, and in particular, for Vega.

Earlier this week we were thrilled to launch Radeon Vega Frontier Edition. We think it will have a big impact on machine intelligence and content creators. I also know some of you are disappointed that we didn’t launch RX Vega as well.
I wanted to hold this AMA and have an open discussion with you about our Vega launches. And while we’re not launching RX Vega today — so I won’t be talking about pricing or launch date — there are lots of rumors and innuendo I want to put to bed, and there are plenty of questions I can answer.
I know you guys can’t wait to see Radeon RX Vega. I know that a lot of you guys obsess over when you’ll be able to game on Vega. Your passion for all things Radeon is what drives every single person in the Radeon Technologies Group to push hard on all fronts – hardware and software engineering, display technologies, and successful developer relationships.
Everyone at AMD sees how much you guys talk about Vega and how eager you are to get your hands on it, and it fuels us.

Please know that we’re working incredibly hard on Radeon RX Vega. You are the lifeblood of Radeon, and you all deserve a graphics card that you’ll be incredibly proud to own.

Elmnator: Will the consumer RX version be as fast as the Frontier version?

RK: The consumer RX will be much better optimized for all the top gaming titles, and flavors of RX Vega will actually be faster than the Frontier version!

RaverendCatch: Raja, are you not shaving your beard until Vega is launched?

RK: Yes

TitanicFreak: My first question for you is what is your vision with Vega? What do you see Vega excelling in? Specifically in the consumer market. Do you see people using Vega similar to how its predecessor (Fiji) was used? Where both the Fury X and Nano excelled in m-ITX builds. Do you see Vega continuing that?
My other question would be: what is the difference between the Blue and Gold variants of the Frontier Edition? Do they share a similar TDP and clock speed, with the Gold edition merely being quieter due to its liquid cooling? Or is there something more separating the two?

RK: Primary vision with Vega was to establish our next-generation architecture that is capable of dealing with large data-sets (tera, peta, exa etc)
From a gaming perspective we wanted to build a product that tackles the challenge of 4K@60Hz for AAA gaming…
Like Fiji, Vega will excel in small form factors etc due to HBM2 advantages
Yes – the gold version may have more thermal headroom, which could help in some scenarios

nas360: Why does the official Frontier page show renders of the card with 2x8-pin connectors, while the one you were holding at the presentation has 1x8 + 1x6 connectors?

RK: I grabbed an engineering board from the lab on the way to the Sunnyvale auditorium, and that board works well with a 6- and an 8-pin. We decided to put two 8-pin connectors on the production boards to give our Frontier users extra headroom.

Pepri: Was the card used in the gaming demos a Frontier Edition? And if so, was it the water cooled one or the air cooled version?

RK: It was an air-cooled version

Does the card profit from DX12 a lot or is DX12 performance similar to DX11 performance?

RK: Our architecture is very well suited for explicit APIs such as DX12 and Vulkan. If a game or a game engine prioritizes low level access to the GPU, Vega will soar. At the same time we’re optimizing Vega for legacy APIs as well as much as possible.

Is there a difference in performance/clock speed between the water and the air cooled version or is one just quieter/cooler?

RK: There will be a slight difference in clock speeds, and therefore performance as well.

RA2lover: How difficult is the process of developing drivers for a GPU architecture with so many differences from previous product designs?

RK: Developing drivers for a new architecture is one of the most complex and difficult engineering tasks for a GPU company… In fact, this is one of the reasons why there are so few GPU companies.

WallyWest: Is the Frontier Edition a card like a Titan X (a professional/gaming card?), will we have the choice between RX driver and Pro Driver? Is it a Pro Card or a Gaming Card? Or both?

RK: The Frontier Edition was designed for a variety of use-cases like Machine Learning, real-time visualization, and game design. Can you play games on Frontier Edition? Yes, absolutely. It supports the RX driver and will deliver smooth 4K gaming. But because it is optimized for professional use cases (and priced accordingly), if gaming is your primary reason for buying a GPU, I’d suggest waiting just a little while longer for the lower-priced, gaming-optimized Radeon RX Vega graphics card

wickedplayer494: Does Frontier Edition use 4 stacks or 2 stacks of HBM2?

RK: Frontier edition employs 2 stacks of HBM2

480 GB/s of memory bandwidth is slower than Fiji’s 512 GB/s, and that was with first generation HBM. When HBM1 on Fiji can match or beat these speeds, it sort of makes you wonder, what even is the point of using HBM2 anyway if configurations don’t surpass Fiji’s memory bandwidth? Besides PCB space savings and latency

RK: Both Fiji’s and Vega’s HBM(2) implementations offer plenty of bandwidth for all workloads. (nalasco – need help here)

Can we please get the ability to overclock HBM2?

RK: We’ll see what we can do about that

Can we pretty please get a 16 GB variant of Radeon RX Vega?

RK: We will definitely look at that…

Proxiros: Thank you for this AMA knowing how valuable your time is. I don’t expect that you will reveal much today (NDA) but the only thing that all await is: Will Vega for consumers revealed at Computex ( http://www.amdcomputex.com.tw ) this year? or at least a launch date? Keep up the good work!

RK: We’ll be showing Radeon RX Vega off at Computex, but it won’t be on store shelves that week. We know how eager you are to get your hands on Radeon RX Vega, and we’re working extremely hard to bring you a graphics card that you’ll be incredibly proud to own. Developing products with billions of transistors and forward-thinking architecture is extremely difficult — but extremely rewarding — work.

And some of Vega’s features, like our High Bandwidth Cache Controller, HBM2, Rapid-Packed Math, or the new geometry pipeline, have the potential to really break new ground and fundamentally improve game development.
These aren’t things that can be mastered overnight. It takes time for developers to adapt and adopt new techniques that make your gaming experience better than ever. We believe those experiences are worth waiting for and shouldn’t be rushed out the door. We’re working as hard as we can to bring you Radeon RX Vega.

On HBM2, we’re effectively putting a technology that’s been limited to super expensive, out-of-reach GPUs into a consumer product. Right now only insanely priced graphics cards from our competitors that aren’t within reach of any gamer or consumer make use of it. We want to bring all of that goodness to you. And that’s not easy! It’s not like you can run down to the corner store to get HBM2.
The good news is that unlike HBM1, HBM2 is offered from multiple memory vendors – including Samsung and Hynix – and production is ramping to meet the level of demand that we believe Radeon Vega products will see in the market.

RA2lover: What things does the RX Vega have over the Radeon Vega FE that would make it worth the extra wait?

RK: RX will have fully optimized gaming drivers, as well as a few other goodies that I can't tell you about just yet… But you will like FE too if you can't wait

anihallatorx: I understand that the Vega architecture is focused mainly on increasing/enabling performance on large datasets, like utilizing HBM2, HBCC etc to fuel a vision of high frame rate 4K, VR and photorealistic situations. Where does it stand on the compute side of things? Like a new geometry engine?

RK: On the compute side of things… Vega FE will be the fastest single-GPU solution (>12.5 TFlops FP32) when it's available, and our NCU packs several additional optimizations, including Rapid Packed Math, which delivers >25 TFlops of FP16
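Those two numbers are consistent with Rapid Packed Math simply packing two FP16 operations into each FP32 lane (the 4096-shader count and ~1526 MHz clock below are assumptions for illustration, not confirmed Vega FE specs):

```python
shaders, clock_mhz = 4096, 1526            # assumed Vega FE configuration
fp32 = shaders * 2 * clock_mhz / 1e6       # 2 FMA ops per shader per clock
fp16 = fp32 * 2                            # Rapid Packed Math: 2x FP16 per lane
print(round(fp32, 1), round(fp16, 1))      # -> 12.5 25.0 (TFLOPS)
```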

KoolNerdz: What does Vega mean for Nvidia Volta? Is there gonna be a different design? Tell me a joke.

RK: One thing for certain is that Vega Instinct is well positioned to deliver dramatically better performance per dollar, and TCO (total cost of system ownership) is probably the most important metric to our machine learning and hyperscale customers; combined with Epyc/Naples, Vega is extremely attractive…
You want a joke
Vega needs some extra Volta(ge) for overclocking

Nicolii: How will the HBC affect content creation for 3D artists?
Does it help in any way in 3D programs? How so?

RK: Having 16GB of HBC on board will allow 3D artists to work on larger and even more complex models than ever before. Depending on the workload, we have seen scenarios where 16 GB of HBC delivers effectively the same performance as having 32 GB or 64 GB of regular VRAM

Last edited by OnnA; 05-18-2017 at 23:03.
   
Reply With Quote
Old
  (#1268)
Maddness
Master Guru
 
Maddness's Avatar
 
Videocard: Asus RX480 Strix
Processor: 6900k
Mainboard: Rampage V Edition 10
Memory: 16Gb Dominator DDR4 3400
Soundcard: Asus Xonar STX
PSU: Corsair 1200
Default 05-19-2017, 09:09 | posts: 388 | Location: Auckland

Some interesting answers there. Vega might be looking good after all.
   
Reply With Quote
Old
  (#1269)
haste
Master Guru
 
Videocard: GTX 1080 @ 2.1GHz
Processor: i7-2600K @ 4.8GHz
Mainboard: ASUS P8P67 DELUXE
Memory: 16GB DDR3 @ 2133MHz
Soundcard: SB X-Fi
PSU: EVGA P2 650W
Default 05-19-2017, 16:18 | posts: 386 | Location: CZ

I certainly hope they deliver... NVIDIA has been milking their enthusiast customers for way too long.

What I actually don't understand is the hype around HBCC. From my point of view, it will need app support to be able to stream data straight into VRAM? Considering how much Vega is used in games (not at all, currently), I don't know how many developers will be eager to bother with it.
   
Reply With Quote
Old
  (#1270)
Denial
Ancient Guru
 
Denial's Avatar
 
Videocard: EVGA 1080Ti
Processor: i7-7820x
Mainboard: X299 SLI PLUS
Memory: 32GB GSkill 3600
Soundcard: ZxR & HD800 Lyr/KEF LS50
PSU: Seasonic 1000w
Default 05-19-2017, 16:33 | posts: 10,914 | Location: Terra Firma

Quote:
Originally Posted by haste View Post
I certainly hope they deliver... NVIDIA has been milking their enthusiast customers for way too long.

What I actually don't understand is the hype around HBCC. From my point of view, it will need app support to be able to stream data straight into VRAM? Considering how much Vega is used in games (not at all, currently), I don't know how many developers will be eager to bother with it.
The HBCC controller on Vega doesn't require app support. The app sees a giant pool of memory, the HBCC controller automagically stores it in the best possible location, probably using some kind of heuristics system to determine access frequency, among other things.
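A toy model of that kind of transparent management, assuming a simple least-recently-used policy (the real HBCC heuristics are not public, so this is purely illustrative):

```python
from collections import OrderedDict

class PageCache:
    """Fast memory (HBM) holds a fixed number of pages; everything else
    notionally lives in the slower backing pool. The app never sees the split."""
    def __init__(self, fast_pages: int):
        self.capacity = fast_pages
        self.fast = OrderedDict()          # page id -> present, LRU-ordered

    def access(self, page: int) -> str:
        if page in self.fast:
            self.fast.move_to_end(page)    # mark as most recently used
            return "hit"
        if len(self.fast) >= self.capacity:
            self.fast.popitem(last=False)  # evict least recently used page
        self.fast[page] = True             # fault the page into fast memory
        return "miss"

cache = PageCache(fast_pages=2)
print([cache.access(p) for p in (1, 2, 1, 3, 2)])
# -> ['miss', 'miss', 'hit', 'miss', 'miss']
```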

You might be thinking of the Radeon Pro SSG - the Polaris GPU with the terabyte SSD on it. That requires app support - at least it did back when they announced it last year. I don't know if it's still using Polaris.
   
Reply With Quote
Old
  (#1271)
haste
Master Guru
 
Videocard: GTX 1080 @ 2.1GHz
Processor: i7-2600K @ 4.8GHz
Mainboard: ASUS P8P67 DELUXE
Memory: 16GB DDR3 @ 2133MHz
Soundcard: SB X-Fi
PSU: EVGA P2 650W
Default 05-19-2017, 16:41 | posts: 386 | Location: CZ

Quote:
Originally Posted by Denial View Post
The HBCC controller on Vega doesn't require app support. The app sees a giant pool of memory, the HBCC controller automagically stores it in the best possible location, probably using some kind of heuristics system to determine access frequency, among other things.
Well if that is the case, then it will all depend on how smart the controller actually is. Looking at all of the special cases that might occur from frame to frame... I'm a bit skeptical, but we'll see.
   
Reply With Quote
Old
  (#1272)
Denial
Ancient Guru
 
Denial's Avatar
 
Videocard: EVGA 1080Ti
Processor: i7-7820x
Mainboard: X299 SLI PLUS
Memory: 32GB GSkill 3600
Soundcard: ZxR & HD800 Lyr/KEF LS50
PSU: Seasonic 1000w
Default 05-19-2017, 16:50 | posts: 10,914 | Location: Terra Firma

Quote:
Originally Posted by haste View Post
Well if that is the case, then it will all depend on how smart the controller actually is. Looking at all of the special cases that might occur from frame to frame... I'm a bit skeptical, but we'll see.
Realistically, 99.9% of the time everything is going to fit into HBM. The only time the HBCC management stuff is going to impact games is when it runs out of VRAM. Which is why in all the tests they've been showing for it they artificially cap the VRAM amount to 2GB.

For 8GB+ Vega cards in gaming, it's a cool feature but I don't think it's really going to change much. If they ever do 2/4GB HBM2 cards in the line up - it will probably be better there.

The real reason they have it, though, is compute/HPC purposes. Nvidia has unified memory too, through CUDA, but the developer has to manually manage it to some degree. Whereas on Vega, you can just let the controller do all of it.

HBCC is basically Nvidia's TurboCache in hardware form.

Quote:
The major difference between TurboCache and HyperMemory is that the latter must first load a required surface into local memory before operating on it - possibly requiring the driver to kick something else off of local memory into system RAM. The separation of up and down stream bandwidth in PCI Express makes this relatively painless. TurboCache, on the other hand, sees all graphics memory as local and does not need to load a surface or texture to local RAM before operating on it. Shaders are able to read and write directly over the PCI Express bus into system RAM. Under the NVIDIA solution, the driver carries the burden of keeping the most used and most important bits of data in local memory.
And it can write/read from more than just system memory.
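The distinction the quote draws can be shown with a toy contrast (purely illustrative - no real driver API, just two dicts standing in for the memory pools):

```python
# HyperMemory-style copy-before-use vs TurboCache-style direct remote access.
local_vram = {}                                 # small fast pool
system_ram = {"texA": b"texture-bytes"}         # surfaces parked in system RAM

def hypermemory_read(name):
    """Must first stage the surface into local VRAM, then operate on it."""
    if name not in local_vram:
        local_vram[name] = system_ram[name]     # explicit upload step first
    return local_vram[name]

def turbocache_read(name):
    """Reads straight over the bus; no staging copy into local VRAM."""
    return local_vram.get(name) or system_ram[name]

assert hypermemory_read("texA") == b"texture-bytes"
assert "texA" in local_vram                     # the copy happened

local_vram.clear()
assert turbocache_read("texA") == b"texture-bytes"
assert "texA" not in local_vram                 # nothing was copied locally
```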

Last edited by Denial; 05-19-2017 at 16:57.
   
Reply With Quote
Old
  (#1273)
haste
Master Guru
 
Videocard: GTX 1080 @ 2.1GHz
Processor: i7-2600K @ 4.8GHz
Mainboard: ASUS P8P67 DELUXE
Memory: 16GB DDR3 @ 2133MHz
Soundcard: SB X-Fi
PSU: EVGA P2 650W
Default 05-19-2017, 17:01 | posts: 386 | Location: CZ

Quote:
Originally Posted by Denial View Post
For 8GB+ Vega cards in gaming, it's a cool feature but I don't think it's really going to change much. If they ever do 2/4GB HBM2 cards in the line up - it will probably be better there.

The real reason they have it, though, is compute/HPC purposes. Nvidia has unified memory too, through CUDA, but the developer has to manually manage it to some degree. Whereas on Vega, you can just let the controller do all of it.
Well that was exactly my point. They present HBCC so eagerly on DX12 games like Deus Ex or ROTTR, while the real benefit of it should be in super- or cloud computing. Unless I'm missing something, gamers will not benefit from it much. Maybe they are trying to justify the existence of future 6GB/4GB cards? I dunno...
   
Reply With Quote
Old
  (#1274)
Denial
Ancient Guru
 
Denial's Avatar
 
Videocard: EVGA 1080Ti
Processor: i7-7820x
Mainboard: X299 SLI PLUS
Memory: 32GB GSkill 3600
Soundcard: ZxR & HD800 Lyr/KEF LS50
PSU: Seasonic 1000w
Default 05-19-2017, 17:11 | posts: 10,914 | Location: Terra Firma

Quote:
Originally Posted by haste View Post
Well that was exactly my point. They present HBCC so eagerly on DX12 games like Deus Ex or ROTTR, while the real benefit of it should be in super- or cloud computing. Unless I'm missing something, gamers will not benefit from it much. Maybe they are trying to justify the existence of future 6GB/4GB cards? I dunno...
Well in earlier posts I said I think if AMD had the money they'd split their compute cards off from their gaming cards like Nvidia does. But that requires a ton of developer/driver support, engineering and money to just spin a completely separate chip.

So I think when they were building Vega, they added HBCC because it's a really awesome workstation/compute feature - but they are also selling the same Vega to gamers, so they said "how can we take this tech and market it towards gaming in order to differentiate our product?" and this is what they came up with. Which is pretty cool.

Another feature of Vega, somewhat related to the gamer/HPC differentiation, that I don't hear people talking about much is the packed math support. Nvidia has this on their compute cards but not on their gaming ones. AMD has it on both and intends to build gaming software libraries that utilize it. They already said TressFX is going to use it, and I'm sure basically all their vertex/particle sim stuff will eventually be ported to use it. It effectively doubles the performance of those workloads, which can potentially increase overall performance pretty drastically.

So you can have a scenario where you get TressFX running on multiple characters - turning it on impacts performance by 10-15% on Nvidia, but only 5-7% on AMD, because Nvidia has no FP16 packed math on its GTX cards.

The lesser precision shouldn't really impact those type of vertex/particle effects because they're basically somewhat random anyway.
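The "packed" part is just about storage width: two FP16 values fit in the same 32 bits as one FP32 value, which is why a 32-bit lane can issue two half-precision ops at once. A quick demonstration of the sizes using Python's `struct` module (this only shows the packing, not AMD's hardware path):

```python
import struct

# Two FP16 values occupy the same 4 bytes as one FP32 value - that's why
# packed math can double throughput per 32-bit register lane.
pair_fp16 = struct.pack("<ee", 1.5, -2.0)   # "e" = IEEE half-precision
single_fp32 = struct.pack("<f", 1.5)

assert len(pair_fp16) == len(single_fp32) == 4

# FP16 keeps only ~3 decimal digits of precision - fine for hair/particle
# jitter, not for anything needing exact large values.
a, b = struct.unpack("<ee", pair_fp16)
assert (a, b) == (1.5, -2.0)                # both values are exactly representable in FP16
```

Values like 1.5 round-trip exactly because they fit FP16's 10-bit mantissa; something like 0.1 would come back slightly off, which is exactly the "lesser precision" trade-off mentioned above.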

Last edited by Denial; 05-19-2017 at 17:29.
   
Reply With Quote
Old
  (#1275)
Fox2232
Ancient Guru
 
Fox2232's Avatar
 
Videocard: Fury X - XL2420T(Z)@144Hz
Processor: i5-2500k@4.5GHz NH-D14
Mainboard: MSI Z68A-GD80[g3]
Memory: 4x4GB 1600MHz 9,9,8,20 1T
Soundcard: Essence ST / AKG K-701
PSU: FSP Gold series 750W
Default 05-19-2017, 21:53 | posts: 5,514 | Location: EU, CZ, Brno

Quote:
Originally Posted by Denial View Post
So I think when they were building Vega, they added HBCC because it's a really awesome workstation/compute feature - but they are also selling the same Vega to gamers, so they said "how can we take this tech and market it towards gaming in order to differentiate our product?" and this is what they came up with. Which is pretty cool.

Another feature of Vega, somewhat related to the gamer/HPC differentiation, that I don't hear people talking about much is the packed math support. Nvidia has this on their compute cards but not on their gaming ones. AMD has it on both and intends to build gaming software libraries that utilize it. They already said TressFX is going to use it, and I'm sure basically all their vertex/particle sim stuff will eventually be ported to use it. It effectively doubles the performance of those workloads, which can potentially increase overall performance pretty drastically.

So you can have a scenario where you get TressFX running on multiple characters - turning it on impacts performance by 10-15% on Nvidia, but only 5-7% on AMD, because Nvidia has no FP16 packed math on its GTX cards.

The lesser precision shouldn't really impact those type of vertex/particle effects because they're basically somewhat random anyway.
Actually, due to HBCC taking data in smaller chunks, it mainly saves memory bandwidth.
So there is a plain benefit for APUs. Secondly, Raja mentioned something a bit weird, and I am not sure exactly how he meant it:
Quote:
Depending on the workload we have seen scenarios where 16 GB of HBC is effectively same performance as having 32 GB or 64 GB of regular VRAM.
That should not be forgotten for APU-based notebooks. In many scenarios it will make AMD's mobile chips even more attractive than they are now, at least to those who know how small the current difference to Intel's offerings already is.

And it would be crazy cool if an APU finally had two stacks of HBM2 (8GB) and therefore ~480GB/s access to unified system memory.

Last edited by Fox2232; 05-19-2017 at 21:56.
   
Reply With Quote