1 Billion transistors on a graphics card in under 3 years! True to life games in under 10 years!


posted on Nov, 5 2005 @ 01:55 PM
That's crazy!

FPS games coming with PTSD warnings on the boxes....

posted on Nov, 5 2005 @ 06:17 PM

Originally posted by sardion2000

Originally posted by Frosty
According to Moore's law this will mean far more than 5 billion by 2013.

I've read somewhere that graphics technology is progressing somewhat faster than Moore's law.

True, graphics processing power IS progressing faster than Moore's law.

Moore's law states that the number of transistors on a CPU doubles every 18-24 months.

The transistor count on a GPU has been tripling every 24 months (example: ATI Radeon 9800 XT, 110 million in 2003, and ATI Radeon X1800 XT, 321 million in 2005). Although this may be a special case... not too sure...

Performance has seen a similar three-fold increase over two years (3,296 Mtexels/s in 2003 vs. 10,000 Mtexels/s in 2005).

So if this continues up until 2013, a GPU should have about 26 billion transistors ((3^4) x 321 million ≈ 26 billion). They probably said 5 billion, though, because I think they can't make transistors any smaller than 16 nm...
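The extrapolation above can be sketched as a quick calculation. This is just the poster's tripling-every-two-years assumption written out; the function name and parameters are illustrative, not from any real source.

```python
# Projecting GPU transistor counts, assuming the tripling-every-2-years
# trend described in the post holds (a hypothetical extrapolation).

def project_transistors(base_count, base_year, target_year,
                        factor=3.0, period_years=2.0):
    """Extrapolate a transistor count forward by a fixed growth factor."""
    periods = (target_year - base_year) / period_years
    return base_count * factor ** periods

# Radeon X1800 XT: ~321 million transistors in 2005 (figure from the post)
projected = project_transistors(321e6, 2005, 2013)
print(f"Projected 2013 GPU transistor count: {projected / 1e9:.1f} billion")
# -> about 26.0 billion, matching the (3^4) x 321 million estimate
```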

[edit on 5-11-2005 by beyondSciFi]

posted on Nov, 7 2005 @ 10:48 AM
There are a lot of variables, but overall graphics technology is moving along at about the normal rate, doubling roughly every 1.5 years in transistor count: 1 million in '93 for the NV1, and 1 billion (10 doublings x 1.5 years) in 2008.

The big problem is that it keeps getting harder and harder to do as the transistors get closer and closer together. Imperfections in the manufacturing process cause voltage leaks and create heat. The NV1 had no heatsink or fan and ran fine in all conditions; current GPUs have moved from small heatsinks to large ones, to fans, to multiple fans, to huge contraptions that are loud, exhaust heat out of the rear of the PC, and take up multiple slots. We are on the verge of water cooling because of the heat problem. By the time we get to 1 billion transistors in a couple of years, we may need built-in refrigeration units to cool them.

And there is something else: Moore's law also predicts a point where we can go no further, a point where the transistors get so close together that they will no longer function because of the material they are made of, and when we hit that point we will need to switch to a new technology like nanotubes or something. I doubt GPUs will hit 20+ GHz on current silicon technology, because the signs are already here that manufacturers are having big problems controlling heat and are getting low chip yields. Multiple GPUs will probably be a better way to go, and they are already offering that now. It just costs a lot.
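The doubling arithmetic above (1 million in '93 to 1 billion in 2008) can be checked directly. The helper below is just a sketch to verify the post's numbers.

```python
from math import ceil, log2

# Checking the post's claim: 1 million transistors (NV1, 1993), doubling
# every 1.5 years, reaches ~1 billion around 2008.

def doublings_to_reach(start, target):
    """Number of doublings needed to grow from start to at least target."""
    return ceil(log2(target / start))

n = doublings_to_reach(1e6, 1e9)   # 10 doublings
year = 1993 + n * 1.5              # each doubling takes ~1.5 years
print(n, year)                     # 10 2008.0
```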

posted on Nov, 7 2005 @ 03:26 PM

Originally posted by ArchAngel
A CPU and a PPU are totally different designs.

One cannot do the job of the other.

Read through this and you may understand better.

There is no issue of me understanding the difference between a general purpose processor and a specialized DSP. (I do have a degree in computer science, after all.)

My point boils down to this -

Everyone will have a general purpose processing unit within their computer, and eventually most everyone will have a multicore general processing unit. This PPU add-on card is just that - an add-on. Aside from the gamers with disposable incomes, I don't feel the general gaming public will be keen on the idea of having to constantly upgrade YET ANOTHER add-on card to stay current.

You also stated that "one cannot do the job of the other", which is completely false. In-game physics today is done on the CPU, and once there is more overhead in terms of processor cycles I think it's totally logical to expect games to take advantage of multiple cores - thus allowing for better physics on hardware that most everyone already has. It's simply a matter of writing the code.

Don't get me wrong, a specialized DSP will always outperform a general purpose processing unit. That is because the specialized DSP is created for a single task. HOWEVER - a custom DSP will generally cost you more to make than a general purpose processing unit.

Again, to restate my point - multicore processors will be more ubiquitous than the specialized DSP, and therefore more likely to be optimised for, in my opinion.

[edit on 7-11-2005 by negativenihil]

posted on Nov, 11 2005 @ 12:17 PM

Originally posted by carcharodon
More than transistors, what we need is a new kind of memory. Today's memory is a bottleneck; it cannot keep up with either CPUs or GPUs.

There already is a new type of memory - it's called FBRAM (Frame-Buffer RAM). The most time consuming part of rendering is the implementation of Z-buffering and transparency. You have to read the Z-buffer, compare against the new depth position, then choose to discard that fragment or write out the Z-buffer value. Then you have to do the transparency calculation by reading the existing pixel colour, blending and writing out again.

FBRAM solved the latter problem by having the blending calculations implemented by the memory chip itself. As a programmer, you just tell it the arithmetic blending mode (add, subtract, multiply) and send the transparency/pixel values to the memory chip itself.
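The per-fragment work described above (depth test, then a read-modify-write blend) can be sketched in a few lines. This is a minimal illustration of the general technique, not FBRAM's actual interface; the function name, the dictionary-based buffers, and the simple "source-over" blend equation are all assumptions for clarity.

```python
# A minimal sketch of the per-fragment work the post describes: a depth
# (Z-buffer) test followed by an alpha-blend read-modify-write on the pixel.

def shade_fragment(framebuffer, zbuffer, x, y, frag_depth, frag_rgb, frag_alpha):
    """Depth-test a fragment, then blend it over the existing pixel."""
    if frag_depth >= zbuffer[(x, y)]:   # fragment is behind what's drawn
        return                          # discard it
    zbuffer[(x, y)] = frag_depth        # write back the new depth
    dst = framebuffer[(x, y)]           # read the existing pixel colour
    # Source-over blend: out = src * alpha + dst * (1 - alpha)
    framebuffer[(x, y)] = tuple(
        s * frag_alpha + d * (1.0 - frag_alpha) for s, d in zip(frag_rgb, dst)
    )

# Example: a 50%-transparent white fragment drawn over a black pixel
fb = {(0, 0): (0.0, 0.0, 0.0)}
zb = {(0, 0): 1.0}
shade_fragment(fb, zb, 0, 0, 0.5, (1.0, 1.0, 1.0), 0.5)
print(fb[(0, 0)])   # (0.5, 0.5, 0.5)
```

FBRAM's contribution, as the post says, was moving this blend step into the memory chip itself, so the GPU never has to read the pixel back.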

The other problem is solved by using deferred rendering, which divides the screen into lots of little squares, each of which is rendered separately, by having an internal Z-buffer/framebuffer cache, and sorting out the list of triangles so all triangles within a particular square are rendered together.

posted on Nov, 11 2005 @ 02:03 PM
Memory is definitely the biggest bottleneck. The latest from the big 2 uses 1.2 GHz RAM, the fastest on the planet, and it is still a bottleneck! In fact, you can probably trace every major advancement in vid cards in the last 10 years to advancements in memory: SDRAM, DDR RAM, DDR2 RAM, DDR3, etc. I think they should try to figure out a way to build large amounts of RAM right on the GPU die; that would really speed things up. One big fat GPU about 3" square with a big chiller on top to cool it, zooming along at a healthy 1+ GHz. In fact, give me two of them in SLI mode, woohoo! Sorry, I love to daydream. The future's so bright, I gotta wear shades!

posted on Nov, 11 2005 @ 02:43 PM

Originally posted by BirDMan_X
until there is nothing else to invent.

I don't see that ever happening... Don't know why you think we could actually make something so perfect it just can't get any better... that's nonsense.
