
NERDGASM ALERT: Detailed Rendering of CG just got infinitely better. The polygon is dead


posted on Aug, 3 2011 @ 02:30 PM
reply to post by SaturnFX
 


So where can I download the executable demo? Sorry... no proof, no valid claims


Ah, and in case you think the videos are enough proof... then I believe some dinosaurs also exist in a park called "Jurassic Park" *lol*




posted on Aug, 3 2011 @ 02:34 PM

Originally posted by mrMasterJoe
reply to post by BIGPoJo
 


Ahem, sorry, what did you say, please? None of your answers actually addresses the corresponding issue. NONE.

(1) There are ALWAYS gaps in voxel-based systems unless you increase the voxel size (then your image looks blobbish) or increase the voxel density (then you might get voxel "bleeding" and effects similar to the Z-fighting known in the polygonal world). Don't forget that you need some sort of dynamic thinning of the voxel clouds, OR some clever sorting algorithm, OR some sort of level-of-detail scheme (which produces even more data to be stored and handled!). Depending on the distance to the camera you have additional issues.

(2) Which tech does not use voxels? Raytracing? *lol* You haven't understood the question. Games do have lighting and shadowing / shadow casting. How do you do that in a voxel-based system? You COULD use raytracing for each voxel in the cloud (= extremely expensive). So what?

(3) Do you REALLY think today's games store multiple trees or rocks when there are several of them in the game, instead of just storing the type of object and its coordinates? Not really, eh?
What you describe is how ALL of today's games work (unless they are some bullish freeware crap). ALSO... if the elephant is not 3D... how can I look at this elephant from any angle...? Oh well...


Sorry, your answers went to the bin




Because the angle of the elephant is being updated 20 times per second. The power is in the software, the rasterization. Remember, there is only one elephant. The one elephant is made up of one particle. Please see rasterization.

Rasterization

They are using just a few simple patterns to create entire scenes. They do not have to track individual atoms the way you would track polygons, especially if the atoms' layout is a simple pattern.
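If that reading is right, the "few simple patterns" idea is ordinary instancing: store each model once and keep only a lightweight placement per copy. A minimal C++ sketch of the idea (all names hypothetical, not Euclideon's actual data layout):

```cpp
// Toy instancing sketch -- hypothetical names, not Euclideon's data layout.
#include <cstdio>
#include <vector>

// One shared model: a bag of points (or triangles); stored exactly once.
struct Model {
    std::vector<float> points;   // xyz triples
};

// Each copy in the world only stores *where* it is, not the geometry itself.
struct Instance {
    const Model* model;          // shared, not duplicated
    float x, y, z;               // translation of this copy
};

int main() {
    Model elephant;
    elephant.points = {0.f, 0.f, 0.f, 1.f, 2.f, 0.5f};  // stand-in data

    // A "forest" of elephants costs one Model plus a few floats per copy.
    std::vector<Instance> scene;
    for (int i = 0; i < 1000; ++i)
        scene.push_back({&elephant, i * 5.f, 0.f, 0.f});

    std::printf("models stored: 1, instances placed: %zu\n", scene.size());
}
```

The memory argument is then straightforward: a thousand copies of the same rock cost one set of geometry plus a few floats each.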



posted on Aug, 3 2011 @ 02:40 PM
reply to post by mrMasterJoe
 


Those videos are real enough, but the technology, as several people have pointed out, is nothing special. It's simply a form of voxel renderer, with some kind of instancing (it uses the same data multiple times, to make it "unlimited").

I'm a former game programmer. I know lots of game programmers. They all think this guy is simply trying to hype a fairly undistinguished voxel engine so he can get grants and funding. He's already got $2 Million in a government grant by pretending he's invented something, so he's continuing to do the same thing.

www.commercialisationaustralia.gov.au...

There are other companies that do it better, and I'm sure Id and other companies have been experimenting with it for years. It's just not particularly practical, except for very static worlds.

See this other company doing the same thing:
www.atomontage.com...

John Carmack of Id talking about it three years ago:
www.pcper.com...



posted on Aug, 3 2011 @ 02:44 PM
reply to post by BIGPoJo
 


Sorry to say that - but your reply does not make sense to me. I know rasterization very well. But would you care to please elaborate on what the hell this has to do with the topic? Rasterization is the final step in any discrete visualisation system. Well - so much for that. And most games also use single models and render them many times...! The parameters make the difference.

An elephant in a voxel system is a cloud of 3D points defining the elephant's volumetric shape at a specific level of detail (~number of points) and occupying a certain area in 3D space. When the voxels are rendered, the elephant finally gets rasterized according to the viewpoint's parameters. Yeah. Well... where do you see "your" rasterization taking place here?

Tell me one thing, please - where does your alleged expertise come from...? I am VERY curious.
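To make the pipeline being argued about concrete: the "rasterization" step for a point/voxel cloud is just projecting each visible point into pixel coordinates. A toy C++ sketch, assuming a simple pinhole camera at the origin looking down +z (no depth buffer, splatting, or shading):

```cpp
// Toy point-cloud projection sketch -- simple pinhole camera, hypothetical names.
#include <cstdio>
#include <vector>

struct Point3 { float x, y, z; };

// Perspective-project one world-space point onto a w*h pixel grid.
// Assumes the camera sits at the origin looking down +z with focal length f (in pixels).
bool projectToPixel(const Point3& p, float f, int w, int h, int& px, int& py) {
    if (p.z <= 0.f) return false;            // behind the camera: not visible
    float sx = f * p.x / p.z;                // perspective divide
    float sy = f * p.y / p.z;
    px = static_cast<int>(sx + w * 0.5f);    // shift to pixel coordinates
    py = static_cast<int>(sy + h * 0.5f);
    return px >= 0 && px < w && py >= 0 && py < h;
}

int main() {
    std::vector<Point3> elephantCloud = {{0.f, 0.f, 5.f}, {0.2f, 0.1f, 5.f}, {-3.f, 0.f, 1.f}};
    int px, py;
    for (const Point3& p : elephantCloud)
        if (projectToPixel(p, 500.f, 640, 480, px, py))
            std::printf("point lands on pixel (%d, %d)\n", px, py);
}
```

A real voxel renderer would splat or ray-cast rather than loop over every stored point, but the projection math at the end of the pipeline is the same.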



posted on Aug, 3 2011 @ 02:48 PM
reply to post by Uncinus
 


Thank you =) You are the first guy in here who has at least some basic knowledge from somewhere.
This engine will never fulfill its bald claims, no matter what it actually does.



posted on Aug, 3 2011 @ 02:50 PM

Originally posted by mrMasterJoe
reply to post by BIGPoJo
 


Sorry to say that - but your reply does not make sense to me. I know rasterization very well. But would you care to please elaborate on what the hell this has to do with the topic? Rasterization is the final step in any discrete visualisation system. Well - so much for that. And most games also use single models and render them many times...! The parameters make the difference.

An elephant in a voxel system is a cloud of 3D points defining the elephant's volumetric shape at a specific level of detail (~number of points) and occupying a certain area in 3D space. When the voxels are rendered, the elephant finally gets rasterized according to the viewpoint's parameters. Yeah. Well... where do you see "your" rasterization taking place here?

Tell me one thing, please - where does your alleged expertise come from...? I am VERY curious.


They call the tech unlimited because you can zoom in and out while still only having to track a limited number of points. The software makes a pass at only the level of detail that it needs. You essentially repeat a pattern, rasterize, zoom out, repeat the bigger pattern, zoom out, rasterize, rinse and repeat.

Remember when virtualization came out? That stuff was pretty unbelievable too. Now processors cater to that type of thing.



posted on Aug, 3 2011 @ 02:56 PM
reply to post by BIGPoJo
 


Well, virtualization is nothing that special. Not even when it came out. I don't know what you're talking about.
If what you describe is in fact a fractal compression algorithm, then the detail it produces is rather random / erratic. But they claim to have scanned a rock and to be able to reproduce exactly that rock in "unlimited" detail...


Well, yeah fractal compression / design might one day change games a little. But I am pretty sure this is much more complicated than it looks at first glance.

Ok, for me this thread is done
I do not believe anyone will ever convince me of something extraordinary here... bye bye.

Cheers



posted on Aug, 3 2011 @ 02:58 PM
reply to post by SaturnFX
 


No offence, but this is just another distraction from what is really going on.

Although I commend the designer for giving a computer more graphics power by shrinking the shape of the polygon and placing it in a plane, instead of putting the polygon into the game as one big object.



posted on Aug, 3 2011 @ 03:05 PM

Originally posted by mrMasterJoe
reply to post by Uncinus
 


Thank you =) You are the first guy in here who has at least some basic knowledge from somewhere.
This engine will never fulfill its bald claims, no matter what it actually does.


Actually it does fulfill many of the claims, it's just that the claims are not as amazing as he makes them sound.

Unlimited world? Done decades ago with procedural content generation, used in games for 25 years. See Rescue on Fractalus from 1984 for a simple example, or Spore from 2008 for a more modern example.

Unlimited detail? Progressive mesh refinement, procedural geometry and procedural textures. Voxels are actually NOT unlimited. His engine has a hard 0.25 mm resolution limit.

Voxel rendering using a search algorithm rather than ray tracing? That's what voxel ray tracing is - the ray/voxel intersections are trivial to calculate, so you just "search" down the branches of a sparse voxel octree.
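For anyone who wants to see what "searching down the branches" looks like, here is a toy C++ sketch. It is not Euclideon's algorithm, and it uses naive fixed-step ray marching with an octree point lookup rather than a proper parametric ray/octree traversal, but the descent itself is the same idea:

```cpp
// Toy sparse-octree sketch -- naive ray marching, not a real parametric traversal.
#include <array>
#include <cstdio>
#include <memory>

// A sparse octree node over the cube [x0,x0+size) x [y0,...) x [z0,...).
// Children that contain nothing are simply null, hence "sparse".
struct Node {
    bool solid = false;                              // true only for occupied leaves
    std::array<std::unique_ptr<Node>, 8> child;      // 8 octants, mostly null
};

// Descend from the root toward the leaf containing point (px,py,pz).
// Returns true if we end on a solid node. This is the "search down the
// branches" step; each level halves the cube and picks one octant.
bool query(const Node* n, float x0, float y0, float z0, float size,
           float px, float py, float pz) {
    while (n) {
        if (n->solid) return true;
        float half = size * 0.5f;
        int ix = px >= x0 + half, iy = py >= y0 + half, iz = pz >= z0 + half;
        n = n->child[ix + 2 * iy + 4 * iz].get();
        x0 += ix * half; y0 += iy * half; z0 += iz * half;
        size = half;
    }
    return false;   // fell off into empty space
}

int main() {
    // Root cube of side 8 with one solid leaf in octant (1,0,0) at depth 1.
    Node root;
    root.child[1] = std::make_unique<Node>();
    root.child[1]->solid = true;

    // Naive ray "march": step along the ray and query the octree at each step.
    float ox = 0.5f, oy = 1.f, oz = 1.f, dx = 1.f, dy = 0.f, dz = 0.f;
    for (float t = 0.f; t < 8.f; t += 0.25f) {
        if (query(&root, 0.f, 0.f, 0.f, 8.f, ox + t * dx, oy + t * dy, oz + t * dz)) {
            std::printf("ray hits a voxel at t = %.2f\n", t);
            break;
        }
    }
}
```

Each level of the descent halves the cube and picks one of eight octants, so finding the voxel under a sample point costs a handful of comparisons even in a very large world.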



posted on Aug, 3 2011 @ 03:07 PM

Originally posted by mrMasterJoe
reply to post by SaturnFX
 


So where can I download the executable demo? Sorry... no proof, no valid claims


Ah, and in case you think the videos are enough proof... then I believe some dinosaurs also exist in a park called "Jurassic Park" *lol*


I keel u with fire!!!




posted on Aug, 3 2011 @ 03:24 PM
Cool. I'm drooling because I know what you're describing... makes you wonder if they're getting ready for a real Matrix...



posted on Aug, 3 2011 @ 03:31 PM

Originally posted by SaturnFX

Originally posted by john_bmth
he hasn't kept up to date with contemporary techniques and research. That doesn't give me much faith.


Another way to consider that though is...
how many ways do you need to learn to animate a GIF when you're not even trying for that anymore?... Someone who is working on the 3D VRML language probably won't give a toss about how the latest GIF animations are being processed, so to speak... (slightly poor example, but you get the point)



Believe you me, as a PhD student researching in this very field (real-time computer graphics), if you don't keep up to date, you die. There's no benefit from an outside-in perspective that you might get in other fields. This "unlimited detail" tech is a voxel engine, no matter what the inventor or anyone on this site claims. As with voxel techniques in general, it has some severe drawbacks that have been discussed intermittently throughout this thread.

This isn't the sort of field where you can make an accidental discovery that completely blows everything prior out of the water; the progress is made on solid math and leaps in processing power. It's just not a "discovery" science; it's limited by technology, not insight. The insight comes from utilising the limited technology of today to squeeze as much smoke and mirrors out of your algorithms as possible.

To do everything that this "unlimited detail" technology claims is simply not feasible. Actually, that's not quite right: as far as I recall they don't actually make claims about dynamic scenes and whatnot, they just don't discuss it at all, giving the impression that this is a one-size-fits-all solution that will be the "death of the polygon". It's not and it won't be. They aren't lying as such, rather they are being very economical with the truth.



posted on Aug, 3 2011 @ 03:42 PM

Originally posted by altered
reply to post by Thundersmurf
 


You would still have PCs increasing in power, because other aspects of the processing could be improved. For example, more powerful shaders in the video card, more realistic physics simulation (this is the biggest area for improvement, since the upper limit is basically a full atomic simulation of reality), etc.


Oh absolutely. I think that even if the raw ability to process info didn't increase, this technology could allow components to develop in a different direction. Perhaps more efficient in terms of heat, power consumption or size.

I liked some of the ideas from other ATS'ers regarding chemistry/physics models. We would hardly ever need to test things physically; it could instead be done through simulation.

There are a multitude of applications for this new technology and every single one makes me smile. It's pretty rare that we are at the forefront of something innovative enough to get the juices flowing.

Let the discussion continue



posted on Aug, 3 2011 @ 03:52 PM

Originally posted by Thundersmurf
Oh absolutely. I think that even if the raw ability to process info didn't increase, this technology could allow components to develop in a different direction. Perhaps more efficient in terms of heat, power consumption or size.

I liked some of the ideas from other ATS'ers regarding chemistry/physics models. We would hardly ever need to test things physically; it could instead be done through simulation.

There are a multitude of applications for this new technology and every single one makes me smile. It's pretty rare that we are at the forefront of something innovative enough to get the juices flowing.

Let the discussion continue


No, there are not a multitude of applications for this technology. It's not even new technology. It does not provide any kind of simulation of the real world beyond a rough visual static 3D representation. You can't actually do anything with it, other than fly around and look at it. It's just a different way of doing 3D graphics.



posted on Aug, 3 2011 @ 03:53 PM
reply to post by Uncinus
 

Hmm, John Carmack vs Bruce Dell... I know who my money would be on



posted on Aug, 3 2011 @ 04:03 PM
Guys, this is nothing new. There have been engines built on this same technology in the past, but they have problems which this man hasn't revealed, animation being one of them.

www.youtube.com... - This is animation; it looks fine, but it's running at 36 FPS.

Basically these guys are just inventing new terminology and trying to find funding for their engine. Don't get too excited; this is part of the future of gaming, but it won't come from these guys.



posted on Aug, 3 2011 @ 04:16 PM
I'm a bit skeptical about this. There's a reason why we can't just keep pushing the polygon count way up, and for the same reason this can't be possible. The amount of memory this would take is insane.

Even though this doesn't use polygons, it still has to use vertices to define where everything is in 3D space....



posted on Aug, 3 2011 @ 04:27 PM

Originally posted by Kryom
I'm a bit skeptical about this. There's a reason why we can't just keep pushing the polygon count way up, and for the same reason this can't be possible. The amount of memory this would take is insane.

Even though this doesn't use polygons, it still has to use vertices to define where everything is in 3D space....


The vertices are implicit; they are just regularly spaced positions in 3D space. You just need to store what is in each one, not where it is.

Imagine a photo - that's 2D. You don't need to store the position and size of each pixel; you just store what is in each pixel.

You get around the memory issue by not storing the groups of voxels that have nothing in them. And you only store the voxels on the surface of things, so there's a lot of empty space inside. It's called a sparse voxel octree.



And then he just duplicates a lot of stuff, so he only has to store it once.
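A rough C++ sketch of that "implicit position" point, with the caveat that this is a guess at a plausible layout rather than Euclideon's actual one: a plain voxel grid stores only the contents of each cell, and the cell's position falls out of its index, exactly like pixels in a photo. The sparse octree described above then avoids storing the empty regions at all.

```cpp
// Toy voxel-grid sketch -- a guess at a plausible layout, not Euclideon's.
#include <cstdint>
#include <cstdio>
#include <vector>

// Implicit positions: a dense voxel grid stores only *what* is in each cell.
// The cell's position is recovered from its index, like pixels in a photo.
struct VoxelGrid {
    int n;                               // grid is n x n x n cells
    float cellSize;                      // world-space size of one cell
    std::vector<std::uint8_t> material;  // 0 = empty, otherwise a material id

    VoxelGrid(int n_, float cell) : n(n_), cellSize(cell), material(n_ * n_ * n_, 0) {}

    std::uint8_t& at(int x, int y, int z) { return material[(z * n + y) * n + x]; }

    // Position is implied by the index -- nothing per-voxel is stored for it.
    void centerOf(int x, int y, int z, float& wx, float& wy, float& wz) const {
        wx = (x + 0.5f) * cellSize;
        wy = (y + 0.5f) * cellSize;
        wz = (z + 0.5f) * cellSize;
    }
};

int main() {
    VoxelGrid grid(64, 0.25f);            // 64^3 cells, 0.25 units each
    grid.at(10, 3, 7) = 1;                // mark one surface voxel as "rock"

    float x, y, z;
    grid.centerOf(10, 3, 7, x, y, z);
    std::printf("voxel (10,3,7) sits at (%.2f, %.2f, %.2f); only 1 byte of content stored per cell\n",
                x, y, z);
}
```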



posted on Aug, 3 2011 @ 04:50 PM

Originally posted by BIGPoJo

Indeed. This almost draws a parallel to current thoughts about quantum mechanics. Stuff does not exist until it is observed. In this model you would not have to track the points off screen, and you would not start tracking them until they come into the field of view. With current tech you have to track the points even if you are not looking at them (assumption here).

Huh? That's culling & visibility 101. It's not some weird quantum woo or groundbreaking idea; it's a standard technique in computer graphics. "The fastest triangle is the one you don't draw".
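For readers who haven't met it, "culling 101" just means cheaply rejecting objects that cannot be on screen before you spend any time drawing them. A toy C++ sketch of view-frustum culling with bounding spheres (simplified camera at the origin looking down +z; names and numbers are illustrative only):

```cpp
// Toy frustum-culling sketch -- simplified camera, illustrative numbers only.
#include <cmath>
#include <cstdio>

struct Sphere { float x, y, z, r; };   // bounding sphere of some object

// Skip objects whose bounding sphere cannot intersect the view frustum.
// Simplified camera: at the origin, looking down +z, symmetric FOV.
bool maybeVisible(const Sphere& s, float tanHalfFovX, float tanHalfFovY, float nearZ) {
    if (s.z + s.r < nearZ) return false;                          // entirely behind the near plane
    if (std::fabs(s.x) - s.r > s.z * tanHalfFovX) return false;   // off to the side
    if (std::fabs(s.y) - s.r > s.z * tanHalfFovY) return false;   // above/below the view
    return true;   // conservative: might still be occluded, but worth drawing
}

int main() {
    const float tx = std::tan(0.5f * 1.2f);   // ~69 degree horizontal FOV
    const float ty = std::tan(0.5f * 0.9f);   // ~52 degree vertical FOV

    Sphere inFront  = {0.f,  0.f, 10.f, 1.f};
    Sphere behind   = {0.f,  0.f, -5.f, 1.f};
    Sphere farRight = {50.f, 0.f,  5.f, 1.f};

    std::printf("inFront:  %d\n", maybeVisible(inFront,  tx, ty, 0.1f));
    std::printf("behind:   %d\n", maybeVisible(behind,   tx, ty, 0.1f));
    std::printf("farRight: %d\n", maybeVisible(farRight, tx, ty, 0.1f));
}
```

Real engines combine this with occlusion culling and spatial data structures, but the principle is exactly the one quoted above.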



posted on Aug, 3 2011 @ 05:12 PM
I've been thinking about this for a while and I may have figured out what he has done. As has been mentioned before, the engine would not render what cannot be seen, i.e. anything off screen. However, there is also another area that would be optimized out of the rendering process: any detail that lies below the native resolution of the screen.

The algorithms they may be using do not decide "what" you see but rather what you "cannot" see due to the resolution, because the visual limit for any current tech is determined by the resolution of the screen itself.

Following this line of thought, this new engine does not generate 3D images but renders complex 2D images.

Editing to add:

It is claimed that there are "64 atoms [pixels] per cubic millimeter". This is the maximum this technique can zoom in on an object before duplicating pixels. Surely this is overkill when considering game design; the level of detail would be cut off at a virtual distance averaging 6-12 inches (as per the status quo of current-gen games).
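That "what you cannot see due to the resolution" idea is essentially screen-space-error-driven level of detail: stop refining once a voxel would project to about a pixel. A back-of-the-envelope C++ sketch, using a pinhole camera and the 0.25 mm figure quoted earlier (the focal length and viewer distance are made-up numbers):

```cpp
// Back-of-the-envelope LOD sketch -- made-up camera numbers, pinhole model.
#include <cstdio>

// How many pixels does a voxel of a given world-space size cover at distance z?
// Pinhole model: projectedPixels = voxelSize * focalLengthInPixels / z.
float projectedPixels(float voxelSize, float focalPx, float z) {
    return voxelSize * focalPx / z;
}

int main() {
    const float focalPx  = 800.f;       // focal length expressed in pixels
    const float leafSize = 0.00025f;    // 0.25 mm leaf voxels, per the claimed limit

    // Walk up from the finest level: each coarser level doubles the voxel size.
    // Stop refining once a voxel covers about a pixel -- anything smaller is
    // detail the screen resolution cannot show anyway.
    float viewerDistance = 2.0f;        // metres from the camera
    float size = leafSize;
    int level = 0;
    while (projectedPixels(size, focalPx, viewerDistance) < 1.0f) {
        size *= 2.0f;
        ++level;
    }
    std::printf("at %.1f m, stop %d levels above the 0.25 mm leaves (voxel ~%.1f mm)\n",
                viewerDistance, level, size * 1000.f);
}
```

With these illustrative numbers, the finest few levels of detail never get touched at ordinary viewing distances, which is the point Glyph_D is making about the 0.25 mm claim being overkill.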



