
NERDGASM ALERT: Detailed Rendering of CG just got infinitely better. The polygon is dead


posted on Aug, 3 2011 @ 01:41 PM

Originally posted by T3hEn1337ened

Originally posted by Time2Think
reply to post by T3hEn1337ened
 


That's exactly what the point of all of this is man, they CAN now make better graphics than those 3 videos you posted up - if you watch the FIRST video that SaturnFX posted, the developers of the technology clearly state that they are NOT artists.


Riiight, but they can't.

See, what they CAN create is a still scene with unlimited detail. I'm not denying that that's impressive, and while game developers can't do that yet, they CAN create photorealistic graphics with movement, physics, lighting, and shading fully integrated.

It has nothing to do with being an artist. I understand that they're not artists, and that's why their textures look like crap, but until they can create a character that moves through a scene, gets hit by a rocket, and then flies across the room using ragdoll physics, their technology can't be used for anything but static objects. In a world where destructible environments and moving vegetation are all the rage, static objects don't get you very far.

Do you get what I'm trying to say? It's not about what their technology CAN do, it's about what it CAN'T.


Yes, that is the issue: the physics engines of today wouldn't support this, due to it all being "atoms" versus prims... motion will be exceptionally difficult.
So, as it stands, it would be fantastic for just background stuff (rocks, caves, etc.); however, with a new physics engine, then you're onto something.
This is just the meat; now someone needs to figure out how to cook and serve it. I would think it would be very exciting to consider a new engine based on points versus polys... granted, it will be a challenge for maximum use; however, once done, it will be incredible.




posted on Aug, 3 2011 @ 01:41 PM

Originally posted by SaturnFX

Originally posted by mrMasterJoe
reply to post by SaturnFX
 


Err, well. I only have to use some pretty simple and basic logic. Most people these days don't seem to be able to do that any more!

More detail = more data = more processing and storage requirements. GOT IT?
There is no way around that...

I do not even have to know the details of this tech. But you simply have to look up voxel-based rendering. Without stating the word "voxel" itself, they claim in the video that their tech is based on something that is known in medical visualisation. Well, and that IS voxel (volumetric) rendering.

You guys believe in fairy tales these days...
edit on 3-8-2011 by mrMasterJoe because: Typos


The cloud rendering technique is what is being touted here, how it appears, not the base mesh itself. This can be tweaked. They make a big deal about POV rendering (which means everything outside the rendered field of vision is not rendered; they made that point pretty clear).

So, yeah... only the immediately relevant information is rendered, through some sort of search-engine-style principle (you only get what you seek).

I guess the concept is: if you are looking forward, everything behind you disappears until you turn around... sort of surreal if you think about it. Again, just basing my assumptions on a claim.

There are numerous examples of this principle (vector graphics, for instance, where you can have infinite clarity yet the image file is very small, because it's based on algorithms versus point-by-point pixels).


Indeed. This almost draws a parallel to current thoughts about quantum mechanics: stuff does not exist until it is observed. In this model you would not have to track the points off screen; you would not start tracking them until they came into the field of view. With current tech you have to track the points even if you are not looking at them (assumption here).
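The "don't track what's behind you" idea boils down to a per-point visibility test. Here is a minimal sketch of that, my own simplification rather than anything from the actual engine: a point is only a candidate for rendering if it falls inside the camera's forward view cone.

```python
import math

def in_view_cone(point, cam_pos, cam_dir, fov_deg=90.0):
    """True if `point` lies within the camera's forward view cone.
    cam_dir must be a unit vector."""
    vx, vy, vz = (point[i] - cam_pos[i] for i in range(3))
    dist = math.sqrt(vx * vx + vy * vy + vz * vz)
    if dist == 0:
        return True
    # Cosine of the angle between the view direction and the point.
    cos_angle = (vx * cam_dir[0] + vy * cam_dir[1] + vz * cam_dir[2]) / dist
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

cam_pos, cam_dir = (0, 0, 0), (0, 0, 1)            # looking down +z
print(in_view_cone((0, 0, 5), cam_pos, cam_dir))   # in front -> True
print(in_view_cone((0, 0, -5), cam_pos, cam_dir))  # behind   -> False
```

Anything failing this test never needs to be touched that frame, which is the whole point being argued above.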



posted on Aug, 3 2011 @ 01:43 PM
reply to post by BIGPoJo
 


yes, that's true. the entire set of 3d models, whether detailed or billboarded, has to be loaded, whether the camera can see all the parts or not. unlimited detail tech is saying it only has to depict what the camera is currently pointing at and nothing else.



posted on Aug, 3 2011 @ 01:45 PM
reply to post by Time2Think
 


Well I have been working with cassette storage systems - so you could maybe guess how old I might be - but I will not tell you


I do know what I am talking about... I have a GeForce GTX 560 Ti in my PC and know what this thingy is capable of and what not. I keep up to date with today's game technology. I am a software developer myself, and quite a good one. But that isn't really important, because my basic reasoning should be clear to EVERYONE without any background in this topic. But somehow logic is gone these days.

I do not understand why the "it always becomes better" statement should be valid in ANY way. This just shows a lack of knowledge, because Moore's law isn't valid anymore.

The next step in computer graphics is realtime raytracing. Ask NVIDIA or AMD. But volumetric rendering with the claimed detail is waaay ahead. I do not say it is generally impossible to use volumetric rendering for games. But that tech is VERY demanding on the hardware. AND increasing the detail beyond the current level by a claimed factor of 100,000 (please, let's forget about "unlimited", would you?) for complete games is not possible in any way - today. In a best-case scenario you would need 100,000 times the amount of data for your 3D world. Where the hell do you want to store that, please? And do not forget processing this enormous amount of data in REALTIME?!
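To put the poster's 100,000x figure in concrete terms, here is a back-of-the-envelope calculation. The 20 GB baseline is my own assumed size for a contemporary game's assets, not a number from the thread:

```python
# Naive storage estimate: no instancing, no extra compression,
# just 100,000x more unique data than today.
baseline_gb = 20                       # assumed asset size of a current game
factor = 100_000                       # detail multiplier claimed in the thread
naive_total_gb = baseline_gb * factor  # brute-force storage requirement

naive_total_pb = naive_total_gb / 1_000_000  # gigabytes -> petabytes
print(f"Naive storage: {naive_total_gb:,} GB (~{naive_total_pb:.0f} PB)")
# -> roughly 2 petabytes, which is why any workable scheme must lean on
#    repetition/instancing rather than storing unique data per point.
```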

Sorry you cannot convince me with weak arguments.
edit on 3-8-2011 by mrMasterJoe because: (no reason given)



posted on Aug, 3 2011 @ 01:45 PM
In relation to Notch's voxel quotes :-

For the storage :-

unlimiteddetailtechnology.com...
You can see they use objects and tiles to construct the world.
So the world is a list of geometry which is bolted together.
This is how they handle the problem arising from massive point cloud datasets.

It isn't just a massive point cloud dataset of the world which is compressed and stored;
redundancy and repetition are the key to the storage issues.

The search algorithm - why it isn't a voxel engine.

unlimiteddetailtechnology.com...

Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine. It is best explained like this: if you had a word document and you went to the search tool and typed in a word like 'money' the search tool quickly searches for every place that word appeared in the document. Google and Bing are also search engines that go looking for things very quickly. [snip]

They claim to only render the points that are visible in the scene.
I'd also like to ask: how would they do the lighting?
I mean, you could calculate the world sunlight/shadows, but that would mean you would need to store a map
for each tile instance?
I assume they just do realtime lighting on each pixel?
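Limbo's point about redundancy can be sketched as instancing: store each unique tile's point cloud once, and let the world be a grid of tile ids. All names and numbers here are illustrative, not Euclideon's actual format:

```python
# Redundancy-based storage: unique tiles stored once, the world is a
# grid of one-integer tile ids referencing them.
unique_tiles = {
    0: [(x * 0.1, 0.0, z * 0.1) for x in range(10) for z in range(10)],  # ground tile, 100 pts
    1: [(0.5, y * 0.1, 0.5) for y in range(20)],                          # pillar tile, 20 pts
}

# A 100x100-tile world that reuses those two tiles everywhere.
world_grid = [[(row + col) % 2 for col in range(100)] for row in range(100)]

stored_points = sum(len(pts) for pts in unique_tiles.values())            # 120
placements = sum(len(row) for row in world_grid)                          # 10,000
naive_points = sum(len(unique_tiles[t]) for row in world_grid for t in row)

print(stored_points, placements, naive_points)
# 120 unique points plus 10,000 tile ids stand in for 600,000 points
# stored naively: repetition IS the compression.
```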

Limbo





edit on 3-8-2011 by Limbo because: (no reason given)

edit on 3-8-2011 by Limbo because: (Added points)



posted on Aug, 3 2011 @ 01:46 PM
it's like an IF statement in programming. IF your eyes can't see it, it isn't loaded to be depicted on the screen. since you can't see the interior of a castle before you actually enter each part of it, nothing inside the model of the castle is loaded into memory. the unlimited detail tech only has to render one screen full of pixels at a time, and the data in those pixels from the camera's perspective. it works like your eyes.
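undo's IF-statement analogy can be written as a lazy loader: asset data stays on disk until the camera actually sees the object. This is my own sketch of the idea; `load_from_disk` is a hypothetical stand-in, not a real API:

```python
loaded = {}  # cache of models that have actually been needed so far

def load_from_disk(name):
    # Stand-in for an expensive disk read of the model's point data.
    return f"<point data for {name}>"

def render(visible_objects):
    """Load (once) and draw only what the camera can currently see."""
    for name in visible_objects:
        if name not in loaded:           # the IF from the analogy
            loaded[name] = load_from_disk(name)
        # ... draw loaded[name] here ...

render(["castle_exterior"])                 # interior never touched
print(sorted(loaded))                       # ['castle_exterior']
render(["castle_exterior", "castle_hall"])  # now we walked inside
print(sorted(loaded))                       # ['castle_exterior', 'castle_hall']
```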



posted on Aug, 3 2011 @ 01:48 PM

Originally posted by undo
reply to post by BIGPoJo
 


yes, that's true. the entire set of 3d models, whether detailed or billboarded, has to be loaded, whether the camera can see all the parts or not. unlimited detail tech is saying it only has to depict what the camera is currently pointing at and nothing else.


This is the real secret. I bet they have a database full of points somewhere and they are throwing queries at the thing to find out where the points should be in their current field of view. Not exactly a new idea, but a powerful one. Another thing to consider: if they are using points that have small circle-sprite-like objects tied to them, they only have to know how big to scale the circle and where it is. Since the circle will always face the observer, you get the illusion that it's a round object, like an atom. With a polygon you have to track multiple points in space. The coloring and lighting could be tracked on chunks of these atoms instead of each individual one, further driving down costs.
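BIGPoJo's circle-sprite idea is essentially point splatting: each point becomes a screen-aligned disc whose radius shrinks with distance, so only one position and one scale rule are needed per point. A minimal sketch under that assumption:

```python
def splat_radius(world_radius, distance, focal_length=1.0):
    """Screen-space radius of a camera-facing disc (simple perspective divide).
    Because the disc always faces the viewer, no orientation is stored,
    unlike a polygon, which needs several vertices tracked in space."""
    return world_radius * focal_length / distance

# The same "atom" drawn at three distances: one stored point, one rule.
for d in (1.0, 2.0, 10.0):
    print(d, splat_radius(0.05, d))
# the radius halves each time the distance doubles
```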



posted on Aug, 3 2011 @ 01:54 PM
I have been following this company for some time now and I find their approach fairly interesting. I'd love for this technology to be used in the games I play today, it would be a lot easier on this laptop of mine. Anyways, here are some more tidbits on how this technology apparently works.


How does it work?

If you have a background in the industry you know the above pictures are impossible. A computer can't have unlimited power and it can't process unlimited point cloud data, because every time you process a point it must take up some processor time. But I assure you, it's real and it all works.

Unlimited Detail's method is very different to any 3D method that has been invented so far. The three current systems used in 3D graphics are ray tracing, polygons and point cloud/voxels; they all have strengths and weaknesses. Polygons run fast but have poor geometry; ray-trace and voxels have perfect geometry but run very slowly.

Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine. It is best explained like this: if you had a word document and you went to the SEARCH tool and typed in a word like MONEY, the search tool quickly searches for every place that word appeared in the document. Google and Yahoo are also search engines that go looking for things very quickly. Unlimited Detail is basically a point cloud search algorithm. We can build enormous worlds with huge numbers of points, then compress them down to be very small. The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen; it doesn't touch any unneeded points. All it wants is 1024*768 (if that is our resolution) points, one for each pixel of the screen. It has a few tricky things to work out, like: what objects are closest to the camera, what objects cover each other, how big should an object be as it gets further back. But all of this is done by a new sort of method that we call MASS CONNECTED PROCESSING. Mass connected processing is where we have a way of processing masses of data at the same time and then applying the small changes to each part at the end.

The result is a perfect, pure, bug-free 3D engine that gives Unlimited Geometry running super fast, and it's all done in software.

Source for quote

"the system isn't ray tracing at all, or anything like ray tracing. Ray tracing uses up lots of nasty multiplication and divide operators and so isn't very fast or friendly.
Unlimited Detail is a sorting algorithm that retrieves only the 3D atoms (I won't say voxels any more; it seems that word doesn't have the prestige in the games industry that it enjoys in medicine and the sciences) that are needed, exactly one for each pixel on the screen. It displays them using a very different procedure from individual 3D-to-2D conversion; instead we use a mass 3D-to-2D conversion that shares the common elements of the 2D positions of all the dots combined. And so we get lots of geometry and lots of speed. Speed isn't fantastic yet compared to hardware, but it's very good for a software application that's not written for dual core. We get about 24-30 fps at 1024*768 for that demo of the pyramids of monsters. This will probably be released as "backgrounds only" for the next few years, until we have made a lot more tools to work with; then we will move in to sprites as well."
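Dell's "exactly one point for each pixel" claim amounts to: for every screen pixel, keep only the nearest stored point that projects into it. The brute-force version below is my own sketch of that end result; it says nothing about the unpublished "mass connected processing" search itself:

```python
def render_points(points, width, height, focal=100.0):
    """Project 3D points and keep only the nearest one per pixel.
    The output has at most width*height entries no matter how many
    input points exist: the 'one point per pixel' property."""
    depth = {}   # (px, py) -> (z, point)
    for (x, y, z) in points:
        if z <= 0:
            continue                      # behind the camera
        px = int(width / 2 + focal * x / z)
        py = int(height / 2 + focal * y / z)
        if 0 <= px < width and 0 <= py < height:
            if (px, py) not in depth or z < depth[(px, py)][0]:
                depth[(px, py)] = (z, (x, y, z))
    return depth

# A 100,000-point cloud still yields at most width*height screen samples.
cloud = [(i * 0.001, 0.0, 1.0 + (i % 7)) for i in range(100_000)]
frame = render_points(cloud, width=64, height=64)
print(len(frame) <= 64 * 64)  # True
```

A real engine would of course use a hierarchical structure (an octree is a common guess) so it never visits most points at all; this sketch only shows what the per-frame output looks like.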

Kindest regards
Bruce Robert Dell
edit on 3-8-2011 by Elzon because: Source link added


Date of comment: Thursday, March 27, 2008
edit on 3-8-2011 by Elzon because: Comment date added



posted on Aug, 3 2011 @ 01:57 PM
reply to post by BIGPoJo
 


it reminds me of the double slit experiment



all pieces of data meant to be depicted on the screen are in superposition to one another and are not depicted until the measuring device (the camera) looks at them.



posted on Aug, 3 2011 @ 01:59 PM

How does it work? If you have a background in the industry you know the above pictures are impossible. A computer can't have unlimited power and it can't process unlimited point cloud data, because every time you process a point it must take up some processor time. But I assure you, it's real and it all works. Unlimited Detail's method is very different to any 3D method that has been invented so far. The three current systems used in 3D graphics are ray tracing, polygons and point clouds/voxels; they all have strengths and weaknesses. Polygons run fast but have poor geometry; ray-tracing and voxels have perfect geometry but run very slowly. Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine. It is best explained like this: if you had a word document and you went to the search tool and typed in a word like 'money', the search tool quickly searches for every place that word appeared in the document. Google and Bing are also search engines that go looking for things very quickly. Unlimited Detail is basically a point cloud search algorithm. We can build enormous worlds with huge numbers of points, then compress them down to be very small. The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen; it doesn't touch any unneeded points. All it wants is 1024*768 (if that is our resolution) points, one for each pixel of the screen. It has a few tricky things to work out, like: what objects are closest to the camera, what objects cover each other, how big should an object be as it gets further back. But all of this is done by a new sort of method that we call "mass connected processing". Mass connected processing is where we have a way of processing masses of data at the same time and then applying the small changes to each part at the end. 
The result is a perfect pure bug free 3D engine that gives Unlimited Geometry running super fast, and it's all done in software.


SOURCE

Right here they address the very arguments that people have against the tech. Please debunk this, please.



posted on Aug, 3 2011 @ 01:59 PM
reply to post by SaturnFX
 


Absolutely. The moment these unlimited detail people start working with some physics people to create a physics engine that can use cloud rendering technology, I'll be jumping up and down like a 7 year old who's just been given an espresso shot. (Parents, don't try this.)



posted on Aug, 3 2011 @ 01:59 PM
reply to post by BIGPoJo
 


In order to achieve a look similar to today's "polygonal" games you should take the following things into account (please offer me ANY solution to these problems, OK?):

- How do you ensure you ALWAYS fill in the "gaps" that occur with volumetric rendering, taking into account the Nyquist-Shannon sampling theorem?

- How do you do DYNAMIC lighting and shadow casting without using raytracing for a voxel cloud? Think about occlusion issues for some million voxels.

- How and where do you store the data for 100,000 times the detail of current games? 100,000 times the detail means 100,000 times the data you have today. And well, the data today is already cleverly compressed...


There are many more issues with voxels... but I think these are enough to show you that the claims made by this "company" cannot be true


Any suggestions? I am waiting...

edit on 3-8-2011 by mrMasterJoe because: (no reason given)



posted on Aug, 3 2011 @ 02:00 PM

Originally posted by undo
reply to post by BIGPoJo
 


it reminds me of the double slit experiment



all pieces of data meant to be depicted on the screen are in super position to one another and are not depicted until the measuring device (the camera) looks at them.


I have doubts about the double slit experiment. Just adding more gear, or turning the gear on, could affect the outcome, but that is for another topic.



posted on Aug, 3 2011 @ 02:04 PM

Originally posted by mrMasterJoe
reply to post by BIGPoJo
 


In order to achieve a look similar to today's "polygonal" games you should take the following things into account (please offer me ANY solution to these problems, OK?):

- How do you ensure you ALWAYS fill in the "gaps" that occur with volumetric rendering, taking into account the Nyquist-Shannon sampling theorem?

- How do you do shadow casting without using raytracing for a voxel cloud? Think about occlusion issues for some million voxels.

- How and where do you store the data for 100,000 times the detail of current games? 100,000 times the detail means 100,000 times the data you have today. And well, the data today is already cleverly compressed...


There are many more issues with voxels... but I think these are enough to show you that the claims made by this "company" cannot be true


Any suggestions? I am waiting...

edit on 3-8-2011 by mrMasterJoe because: (no reason given)


it's literally billboarding everything and rendering variations of the other sides of the billboard based on where the camera is pointing, methinks. it only has to load into memory what is currently in visual range, meaning the backs of things not meant to be seen through are not loaded until the camera's perspective calls for it.



posted on Aug, 3 2011 @ 02:09 PM
reply to post by undo
 


Billboards are a thing of the polygon world - they have nothing to do with voxels...



posted on Aug, 3 2011 @ 02:11 PM
reply to post by mrMasterJoe
 


just using an example from the polygon world to describe what i think it's doing. the camera is the brains of the outfit. the tech only has to depict a screen full of pixels with data, and no unnecessary data is loaded till the camera calls for it. maybe the camera is like a waitress who goes to the chef and places the order. i'm running out of analogies lol
edit on 3-8-2011 by undo because: (no reason given)



posted on Aug, 3 2011 @ 02:13 PM

Originally posted by mrMasterJoe
reply to post by BIGPoJo
 


In order to achieve a look similar to today's "polygonal" games you should take the following things into account (please offer me ANY solution to these problems, OK?):


OK



- How do you ensure to ALWAYS fill in the "gaps" that occur with volumetric rendering - taking into account the Nyquist-Shannon sampling theorem?

You only need to fill in the gaps when you zoom in. You could base the new dynamic points on the static ones.



- How do you do shadow casting without using raytracing for a voxel cloud? Think about occlusion issues for some million voxels?

This tech does not use voxels.



- How and where do you store the data for 100.000 times the detail of current games? 100.000 times the detail means 100.000 times the data you have today. And well - the data today is already cleverly compressed....


You only need to store the information for one atom. You then need to store some basic points like "where does this elephant exist?". If there are 300 elephants, you only need to track 300 points to know where they are. Let's imagine each elephant has 500 points to its geometry. To display these 300 elephants you would only need to track 800 points instead of 150,000. The geometry of the elephant is being rasterized and is NOT 3D. Now do you see the savings?
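The elephant arithmetic above checks out as an instancing argument (the 500-points-per-elephant figure is the poster's own illustration):

```python
elephants = 300            # placements in the world
points_per_elephant = 500  # shared geometry, stored once

naive = elephants * points_per_elephant      # store every copy in full
instanced = elephants + points_per_elephant  # 300 positions + 500 shared points
print(naive, instanced)  # 150000 800
```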



Any suggestions? I am waiting...


Sorry to keep you waiting, duties call.



posted on Aug, 3 2011 @ 02:21 PM
Honestly, nothing is sweeter than interrupting someone who is speaking about how something is impossible, to present them with the impossible thing accomplished.

Thanks folks... I will let you folks take over from here... the future looks bright, and I am pretty glad I am not part of the denial religion... I guess Notch is going to be making a few enemies for leading people into strawman arguments.



posted on Aug, 3 2011 @ 02:25 PM

Originally posted by SaturnFX
Honestly, nothing is sweeter than interrupting someone who is speaking about how something is impossible, to present them with the impossible thing accomplished.

Thanks folks... I will let you folks take over from here... the future looks bright, and I am pretty glad I am not part of the denial religion... I guess Notch is going to be making a few enemies for leading people into strawman arguments.


Remember people, Notch is just a fat neckbeard who made a popular game. He had some free time on his hands and used it creatively. If he can do it by himself then anyone can. He did not invent some new tech that revolutionized the industry, and he is jelly.

I am quite intrigued by this new tech and will probably be getting involved with it.



posted on Aug, 3 2011 @ 02:27 PM
reply to post by BIGPoJo
 


Ahem, sorry, what did you say please? None of your answers is an answer to the corresponding issue. NONE.

(1) There are ALWAYS gaps in voxel-based systems unless you increase the voxel size (then your image looks blobbish) or increase the voxel density (then you might get voxel "bleeding" and effects similar to the Z-fighting known in the polygonal world). Don't forget that you need some sort of dynamic thinning of the voxel clouds OR some clever sorting algorithm OR some sort of level-of-detail stuff (which produces even more data to be stored and handled!). Depending on the distance to the camera you have additional issues.

(2) Which tech does not use voxels? Raytracing? *lol* You haven't understood the question. Games do have lighting and shadowing / shadow casting. How do you do that in a voxel-based system? You COULD use raytracing for each voxel in the cloud (= extremely expensive). So what?

(3) Do you REALLY think today's games store multiple trees or rocks when there are multiple of them in the game, instead of just storing the type of object and its coordinates? Not really, eh?
What you describe is how today's games ALL work (unless they are some bullish freeware crap).
ALSO... when the elephant is not 3D... how can I look at this elephant from any angle...? Oh well...


Sorry, your answers went to the bin

edit on 3-8-2011 by mrMasterJoe because: (no reason given)



