
NVIDIA Launches Revolutionary New Multi-GPU Technology

posted on Jun, 28 2004 @ 01:41 PM
I found this article but would like you CPU gurus to help make it easy for a layman to understand.

SANTA CLARA, Calif., June 28 /PRNewswire-FirstCall/ -- NVIDIA Corporation (Nasdaq: NVDA - News), the worldwide leader in visual processing solutions, today unveiled a revolutionary new technology that enables multiple NVIDIA® GeForce(TM) 6 Series or NVIDIA Quadro® graphics cards to operate in a single PC or workstation for a stunning increase in graphics horsepower. Appearing later this year in PCI Express-based PCs and workstations from the world's top manufacturers, the new NVIDIA SLI(TM) technology takes full advantage of the additional bandwidth and features of this new high-bandwidth bus architecture.

It's like a whole other language they speak. Could someone translate it for me? lol
ews/040628/sfm122_1.html

posted on Jun, 28 2004 @ 01:49 PM
It's a great way for them to sell more product by allowing the cards to function as co-processors for each other.

Example: three video cards, each with a 1 GHz processor and 128 MB of VRAM, with the NEW sharing technology.

1 GHz + 1 GHz + 1 GHz = 3 GHz, even if it's only at half bandwidth. So let's say all the cards are 128-bit:

128 + 128 + 128 pipes = 384 bits of headroom per tick.

Let's halve that: 192 bits of headroom per tick, times 3 GHz, so as a simple ratio that's 576 bits per tick.

Compared to 128 bits per tick times 1 GHz = 128.

That means the three co-processing video cards would have 4.5 times the performance of one card.
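The arithmetic above can be reproduced in a few lines of Python. To be clear, this is the poster's own back-of-envelope heuristic, not anything NVIDIA has published; real multi-GPU setups fall well short of linear scaling:

```python
def combined_throughput(num_cards, clock_ghz=1.0, bus_bits=128,
                        sharing_efficiency=0.5):
    """The post's model: sum the buses, apply the sharing penalty,
    multiply by the combined clock, and express as a ratio to one bus."""
    total_bits = num_cards * bus_bits                  # 384 for three cards
    effective_bits = total_bits * sharing_efficiency   # halved to 192
    return effective_bits * (num_cards * clock_ghz) / bus_bits

single = combined_throughput(1, sharing_efficiency=1.0)  # baseline: 1.0
triple = combined_throughput(3)                          # 4.5
print(triple / single)  # 4.5, matching the post's estimate
```

The 4.5x figure comes straight out of this model; actual SLI benchmarks at the time were quoted closer to 2x for two cards.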

[edit on 28-6-2004 by robertfenix]

posted on Jun, 28 2004 @ 02:11 PM
Here is a more wordy and in-depth article on Nvidia's new SLI offering:

SLI (Scan Line Interleaving) created by 3dfx six years ago was all about pure performance and higher resolutions. [...] You connected the two cards via an internal ribbon cable as well as an external pass-through cable. Once the drivers were installed it did everything “automagically”. All you had to do was sit back and enjoy much faster frame rates in your games. [...] NVIDIA purchased 3dfx's intellectual properties and now it seems they are ready to reveal to us a little something they may have learned from 3dfx that is built in to their new GeForce 6 series of GPUs. And that is a new "SLI" technology.

Basically, each card is drawing half the screen. You can expect that this will end up under the same export restrictions that the PS2 encountered.
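The "each card draws half the screen" idea is split-frame rendering. A minimal sketch of the partition, assuming a simple static split (real SLI drivers balance the split dynamically based on load):

```python
def split_scanlines(height, num_gpus=2):
    """Assign contiguous bands of scanlines to each GPU; the last GPU
    absorbs any remainder if the height doesn't divide evenly."""
    band = height // num_gpus
    assignments = []
    for gpu in range(num_gpus):
        start = gpu * band
        end = height if gpu == num_gpus - 1 else start + band
        assignments.append((gpu, start, end))
    return assignments

# A 1024x768 frame split across two GPUs:
print(split_scanlines(768))  # [(0, 0, 384), (1, 384, 768)]
```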

[edit on 28-6-2004 by Enki]

posted on Jun, 28 2004 @ 02:53 PM
Alienware came out with this earlier this month...

posted on Jun, 28 2004 @ 04:40 PM
Didn't know we had our own in-house nerds. Well sure there were a few out there, but didn't know who.

It's basically the idea that two is better than one. You use two video cards to divide up the rendering of the screen, or in some technologies to render every other frame. 3dfx pioneered it with their Voodoo2, which used two 3D add-on cards to render every other line. Several manufacturers have put multiple GPUs on a single PCB. Now, with PCI Express, you can run two video cards on a shared bus.
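The two schemes mentioned can be sketched side by side: alternate-frame rendering hands whole frames to each GPU in turn, while Voodoo2-style scan-line interleaving alternates individual lines. The helper names here are made up for illustration:

```python
def afr_gpu_for_frame(frame_index, num_gpus=2):
    """Alternate-frame rendering: whole frames round-robin across GPUs."""
    return frame_index % num_gpus

def interleaved_gpu_for_line(line_index, num_gpus=2):
    """Voodoo2-style scan-line interleaving: lines alternate between GPUs."""
    return line_index % num_gpus

print([afr_gpu_for_frame(f) for f in range(4)])         # [0, 1, 0, 1]
print([interleaved_gpu_for_line(l) for l in range(4)])  # [0, 1, 0, 1]
```

The math is the same modulo operation in both cases; the difference is the granularity (a frame versus a scanline), which changes how evenly the load balances and how much the GPUs must synchronize.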

Right now there is only one motherboard that is meant to run two PCI Express x16 slots, and NVIDIA hasn't even put a PCI Express version of this video card on shelves. The current AGP version costs nearly $600, so imagine buying two of those. Also, Alienware has developed their own form of this.

[edit on 28-6-2004 by Carrion]

posted on Jun, 28 2004 @ 04:45 PM
So I guess in plain English, my kid is going to be hounding me to "upgrade" to this so he can blast creatures from the netherworld in a more visually appealing way.

.......How much is this gonna cost?

BTW....Thanks for the info, there is no end to learning on this site!!!!!

posted on Jun, 28 2004 @ 04:56 PM
This is seriously exciting news for us gamers.

Don't think it's going to be cheap though.

posted on Jun, 28 2004 @ 06:27 PM
It's really not that revolutionary. It has been done quite a few times in the past by almost all the major graphics card producers. One of the first to do it was the SGI Onyx in 1993, and the company most famous for it is 3dfx. Before 3dfx went bankrupt, they had a massive video card that used four GPUs. NVIDIA ended up buying 3dfx's intellectual property and some of their engineers, and is finally incorporating the technology 3dfx helped develop into its video cards.

It is really going to be too expensive for most people unless they do it on lower-end video cards. For now this is going to be for professionals and people with more money than they know what to do with.

As far as price goes, you can expect to pay upwards of $1200 just for the video cards, not counting any other parts of the computer, for which you will also have to buy a somewhat special motherboard.

[edit on 28-6-2004 by Carrion]

posted on Jun, 28 2004 @ 11:04 PM
Then it looks as if the graphics wars are over, for now at least. Why? Get a load of this (source: Tom's Hardware):

"Then again, NVIDIA holds several patents relating to SLI, meaning that ATi won't just be able to copy this approach."

Nvidia hasn't come out with benchmarks yet, but they said something about hitting 1.9-2.0 times the speed of a single GPU. This is really nice and all, but no one plays Quake at 2000 FPS :\. Tell your kid that when his GeForce 6800 Ultra actually slows down, then he should ask for an additional card.
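That 1.9-2.0x figure is sub-linear because some per-frame work (driver overhead, geometry processed by both cards) doesn't parallelize. A rough sketch using Amdahl's law, with an assumed serial fraction (my guess, not Nvidia's number):

```python
def sli_speedup(num_gpus, serial_fraction=0.05):
    """Amdahl's law: only the parallel fraction of each frame's work
    scales with the number of GPUs."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / num_gpus)

print(round(sli_speedup(2), 2))  # 1.9x with an assumed ~5% serial work
```

With even a small serial fraction, adding more cards gives diminishing returns, which is why the quoted scaling stops short of a clean 2x.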

Also, yes, the Voodoo2 was the first to implement the technology. I remember it being advertised in one of my magazines. I still have it, I believe, lol.

Now let's go over Nvidia dominance over ATI:

1. Shader Model 3.0 - While ATI may have higher frame rates now, Nvidia has prepared for the future. It's like asking yourself: do I want Doom 95 or QuakeWorld to run faster? If you choose Doom 95, go with ATI, but I can assure you you'll be paying $500 for an upgrade later (and that "later" is only a few months away; don't expect your dear ATI to last another year with games such as Doom 3 and Half-Life 2 coming).

2. PCI Express - PCI Express is faster than AGP 8x. ATI has NO support for PCI Express. When Nvidia launches its GeForce 6800 capable of PCI-E then the benchmarks should be at least 30-50% higher.

3. Dual GPU - Nvidia has patents for this; ATI can't just copy it. The GeForce 6800 doubled the GeForce FX 5950 Ultra's benchmarks. Considering this, it would take an entire graphics card generation before ATI could beat a dual GeForce 6800 configuration.

4. GeForce 6800 Ultra Extreme - Although the Ultra Extreme was planned and would considerably raise performance, I believe Nvidia has cancelled it (according to TheRegister).

posted on Jun, 28 2004 @ 11:18 PM
Don't think you can count ATI out just yet. ATI is very smart and will do everything it can not to be left behind. Alienware proved you don't need anything real special to harness two video cards at the same time, and yes, their method works for ATI too. And ATI does have a PCI Express solution, and no, PCI Express will NOT be any faster than AGP for a long time.

Also, ATI does have experience with SLI-type setups. Ever heard of the Rage Fury MAXX? It incorporated two GPUs on a single PCB.

And the Voodoo2 was not the first; there were companies that did this long before the Voodoo2 came out in 1998. It was just the first to do it in the mainstream.

[edit on 28-6-2004 by Carrion]


posted on Jun, 28 2004 @ 11:59 PM
I'd just like to mention: Alienware's new "SLI" scheme works for all manufacturers of graphics chipsets. Alienware's technology is very, very interesting.

Below is info on Alienware's technology's flexibility:

Alienware’s Video Array works with video cards from any manufacturer; ATI, NVidia, 3DLabs, Matrox, or others. Since you are not tied to any one manufacturer’s products, you can configure the Video Array with the video cards that work best for your application.

The Video Array uses off-the-shelf video cards and drivers. There is no need to have any special provisions in hardware or software for Video Array to work. When there is a new feature or optimization implemented in the drivers, they become readily available through Video Array.

Alienware’s Video Array is not limited to 2 video cards. Future implementations may take advantage of this and put 4 or more video cards into one system. This would probably be more geared towards professional applications like rendering farms.

Read up on Alienware's technology:

Scroll to the bottom of the page to look at the comparison of Nvidia's and Alienware's "SLI" technology. I personally think Alienware's approach is better, unfortunately, it requires an Alienware ALX system.

Only the most diehard gamers will buy into the new "SLI" schemes from Nvidia and Alienware, along with people who have money to blow (as well as developers who can actually use the speed increase; time is money). 3dfx really brought the technology to the mainstream, along with their multiple graphics chips on a single card. I see multiple graphics chips on a single card as a much better solution than buying multiple separate cards, and it is possibly cheaper too.

ATI's Rage Fury MAXX was a dual graphics chip solution on one card. I see ATI returning to that solution using the Radeon series of GPUs (the term "GPU" wasn't around until Nvidia released the first GeForce, followed by ATI's Radeon series). Nevertheless, the great part about Alienware's "SLI" solution is that you can do it with cards from any manufacturer (the two cards must use the same manufacturer and graphics chip, of course), thus enabling a person (who has money) to buy a new series of graphics cards when they are released and immediately enable "SLI" on them.

I still see CGI rendering as the greatest benefit of Alienware's technology. Their technology is going to be a big seller for developers and people who render a lot of graphics.

I don't recommend any of the "SLI" solutions from Nvidia or Alienware for the average consumer, though. ATI and Nvidia come out with a new card every 6-12 months (the cycles aren't too predictable anymore as the technology becomes more and more complex; it used to be that they released a faster version of a previous card after 6 months and a new technology after another 6), and the new cards are usually much, much faster than the previous generation.

What is the point of plopping down money for two cards, to have almost double the speed for the moment, only for those two cards to be left in the dust when a new single-chip solution comes out 6-12 months later? Like I said, only developers would have a real use for this. Hardcore gamers just waste money to have the latest and greatest.

[edit on 6-29-2004 by EmbryonicEssence]

posted on Jun, 29 2004 @ 12:03 AM
I agree with Carrion, counting ATI out before this technology even becomes mainstream wouldn't be wise. They're still going to be a strong competitor for Nvidia to contend with for a long time, as I'm sure they won't allow themselves to fall behind in an evolving market.

I laughed when I read on that site how Nvidia claims to understand gamers, that's a load of crap. I'm sure they were worried about satisfying gamers when they made the sound level of the early GeForce FX cards similar to that of a vacuum cleaner. I'm a bit of an ATI fanboy I guess, so I'll just trust in their ability to keep up with and even surpass Nvidia in terms of quality.

It is nice to see some other computer aficionados here.

posted on Jun, 29 2004 @ 12:08 PM
I'm an ATI user and if this technology takes off, it will send the price of top end ATI cards plummeting.

Which is nice.
