
NERDGASM ALERT: Detailed Rendering of CG just got infinitely better. The polygon is dead

posted on Aug, 4 2011 @ 06:45 PM

Originally posted by Limbo

but this is how he says it is done in his videos. The storage space would be impossible otherwise and Notch would be right. Since he is overlapping trees, how does he pick the right pixel as he claims? This would contradict the one point per screen pixel that he claims.

He is either using one tile per space block OR, as he says in his grains of dust comment, each grain of dust is an atom etc. (in which case overlapping). Since 2 trees can occupy the same space, it is undefined which he picks, so he would have to sample 2 or more together.


You can still overlap instances and save most of the memory. You just need to instance the parts of the objects that don't overlap. A little hard to explain without diagrams though.

Imagine a palm tree. That's not a single node in an SVO: you start with one node big enough to cover it, then split it, discarding empty nodes, and you end up with a bunch of non-empty nodes of differing sizes, down to 1x1x1.

Now if two palm trees (or any two objects) overlap, then you only need new nodes for the region where they overlap, and as you keep subdividing you quickly get back to fairly large nodes that belong to just one of the two objects and can be shared again.

You can change the resolutions at which things can be stamped down by subdividing an instance.

Most things don't overlap much. You'd probably want to avoid overlapping anyway, so the master SVO doesn't grow.

Sorry this is confusing without diagrams.
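Maybe a rough code sketch helps instead. This is only my own illustration of the shared-subtree idea, in Python; the node layout, names and sizes are assumptions, not anything from their engine: build a sparse voxel octree from occupied unit voxels, throw away empty children, and intern identical subtrees so a second copy of an object reuses the first copy's nodes.

[code]
def build_svo(filled, origin, size, cache):
    """Return the node for the cube at `origin` with edge length `size`.

    filled -- set of (x, y, z) occupied unit voxels
    cache  -- dict that interns identical subtrees, so two identical palm
              trees end up pointing at the same child nodes
    """
    if size == 1:
        return "solid" if origin in filled else None

    ox, oy, oz = origin
    half = size // 2
    children = []
    for dx in (0, half):          # child order: x is the highest bit,
        for dy in (0, half):      # then y, then z
            for dz in (0, half):
                children.append(
                    build_svo(filled, (ox + dx, oy + dy, oz + dz), half, cache))

    if all(c is None for c in children):
        return None               # empty space is simply discarded ("sparse")

    key = tuple(children)
    return cache.setdefault(key, key)   # identical subtrees are stored only once


# Two copies of the same little "tree", 8 units apart inside a 16^3 world:
# the second copy adds no new subtrees, only the root above them differs.
tree = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1)}
scene = tree | {(x + 8, y, z) for (x, y, z) in tree}
cache = {}
root = build_svo(scene, (0, 0, 0), 16, cache)
print(len(cache), "distinct non-empty subtrees")
[/code]

Overlap is exactly where this stops being free: the nodes covering the overlapping region contain bits of both objects, so only those have to be duplicated, which is what I meant by only needing new nodes for the parts that overlap.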



posted on Aug, 4 2011 @ 06:49 PM

Originally posted by Limbo
If you think about points in space you can stick as many as you want per unit because they don't have volume.


Not in a simulation though; you are limited by the precision of the type of numbers you use. For single-precision floating point numbers there are only 8,388,608 possible locations between 1 and 2.

I wrote an article about it:

www.gamasutra.com...
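If anyone wants to check that number, here is a quick Python snippet (just an illustration, nothing to do with their engine) that counts the representable single-precision values between 1.0 and 2.0 by comparing bit patterns:

[code]
import struct

def float32_bits(x):
    """Reinterpret a single-precision float as its 32-bit integer pattern."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

# Consecutive float32 values have consecutive bit patterns within [1, 2),
# so the difference of the patterns is the count of representable values.
count = float32_bits(2.0) - float32_bits(1.0)
print(count)      # 8388608
print(2 ** 23)    # 8388608, i.e. one value per 23-bit mantissa pattern
[/code]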



posted on Aug, 4 2011 @ 06:51 PM
Wow, I enjoy reading these comments even though I have no idea what they mean. I don't mean to spam; I just model with a few different 3D software packages, yet this is all going way over my head.



posted on Aug, 4 2011 @ 07:03 PM
reply to post by Uncinus
 





"we've modified a standard voxel rendering algorithm to handle instances, but it's still not really practical as a game engine, or even really all that different from other people's voxel engines, and Id Software probably have a better version anyway, but we want more funding"


did he really say that?

that's not a quote is it? if it's not a quote, better remove the quote marks.



posted on Aug, 4 2011 @ 07:16 PM
don't know if this has been posted yet:







so anyway...

in the vid he points out that they are not using voxels....

they're actually using point cloud data.

they're saying they found a way around long processing times using search algorithms.

still not sure if this is a scam or not.... guess we'll have to wait and see


edit on 4-8-2011 by bladebosq because: (no reason given)



posted on Aug, 4 2011 @ 09:47 PM
reply to post by SaturnFX
 


This is awesome. I've always said the one thing I want to see in my lifetime is a real Matrix or holodeck type environment. I think we just got one step closer.



posted on Aug, 5 2011 @ 02:15 AM
Let's put it this way.

I would ALREADY be impressed if that (latest) "tech demo" would run on my computer RIGHT NOW; it being static with no animations would not even matter. Heck, we don't even know much about this latest "tech demo", it could have been rendered on 12 high-end PCs and taken a week.

But I have my doubts it would run.

(To be fair, let's assume that whatever they claim is in fact true: it would already be a GREAT improvement over current game engines, since *indeed* a lot of what we see in today's game engines is textures, pseudo-objects etc. etc., so a game engine which combines this (for static objects) with normal polygon rendering for the rest would already be sensational.)

But as said, I have my doubts.

EXAMPLE: Let's assume no one had ever heard of ray tracing, and a company publishes a tech demo based on ray tracing which makes everyone's jaw drop because they have maybe never seen a ray-traced image, let alone an animation made from it. STILL, this does not mean that ray tracing would be even remotely feasible to build a game engine around, at least not with current tech. That's how I see this too.



posted on Aug, 5 2011 @ 02:18 AM
I just nerded all over the floor



posted on Aug, 5 2011 @ 03:49 AM
Gamer here~!!! WOW~!!!!!! The elephant statue looks TOTALLY real~!! I was wondering when they were going to make the 'jump' and just how they were going to do it~!! Ok, from a gamer's POV... I'm seriously drooling~!!!! Seriously.



posted on Aug, 5 2011 @ 11:28 AM
I cannot wait until this gets into video games. I can't imagine what they could do with Elder Scrolls using this technology.



posted on Aug, 5 2011 @ 08:06 PM
Loved the news. F/S for the tech-fix. I do CGI and saw Voxels years ago, but this is better. I sent a link to my buddy at EA who does FX in Maya and he got back to me and said he was excited. EA does the best work. Crysis was OK, but I liked Battlefield 2 BC and am still wringing out my bib from the drool for Battlefield 3 out later this year. (BTW, played COD too but BF3 will be better than COD3 by a mile.)

But this atom-based poly-killer could be a game changer. FPS could go to 60 easy, I think, and when they get this into the hands of artists and creative programmers, toolmakers and such, they/we will think of easier and better ways of using such power and exploiting its advantages.

So I sent them a message to get the converters to start looking at the possibilities. We work in Maya and 3DSMax, and they say they have converters. Cool, but they need to make some applications for using this. Start with a 2D application, then move to 3D. Modo started out like that.

Can't wait to see what happens, but I've seen such great ideas flounder until they die or come up against a problem they cannot solve, be it technical or business and budget, so I never hold my breath.

ZG



posted on Aug, 5 2011 @ 09:31 PM
Wow, with this kind of technology we could take pics of Earth from satellite, turn the pics into these atoms, and BAM, a real-life video game where u can go where u please, meet up with friends and steal your neighbor's car. Can u say death to Rockstar?



posted on Aug, 5 2011 @ 09:50 PM
I would love for Bethesda to pick this up and redo Morrowind with this,
and Daggerfall, Oblivion;
will wait on Skyrim, but yeah.

Wouldn't also mind seeing Final Fantasy 7 or 11 redone.

Too bad we couldn't just download this and import models from games, convert them and put them back in
without redoing the engine ....lol

Would look really slick.



Open Source please?



posted on Aug, 7 2011 @ 08:21 AM
I dunno... Programs like Lightwave, and more so Modo and Maya, have the ability with newer machines to handle millions upon millions of polys with EXR-based images (ultra-high color space/resolution), so I don't really see the advantages of using voxels over polys at this point; in fact we already use voxels for things like water simulation, fire, clouds etc.

Not something that's going to be replacing traditional CGI anytime soon.



posted on Aug, 7 2011 @ 11:23 AM
The biggest issue I can see with this type of rendering, is the fact that instead of just a few polygons, you now have a MASSIVE amount of these "atoms".

Let's say you are playing a first person shooter and you throw a grenade. The grenade will have a physics impact on so many of these 'atoms' that it would be logical to expect an insane amount of lag from millions of physics calculations, one for each of these atoms.

Take that issue and multiply it by 32 (the average max number of players per FPS match) and you have a lag fest. I can see the awesome in using this for graphical rendering, but I see massive issues when integrating this form of graphical rendering with a physics engine.
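Back-of-the-envelope, the numbers look something like this (every figure below is a made-up assumption, purely to show the scale, not a measurement of any engine):

[code]
atoms_per_object = 5_000_000   # assumed point count for one destructible prop
objects_hit      = 10          # assumed props caught in one grenade blast
players          = 32          # the average max players per match, as above
ticks_per_second = 60          # a typical physics update rate (assumption)

per_atom_updates = atoms_per_object * objects_hit * players * ticks_per_second
print(f"{per_atom_updates:,} per-atom physics updates per second")  # 96,000,000,000
[/code]

Which is presumably why, even if something like this shipped, physics would still run on a handful of coarse collision shapes per object rather than on every rendered atom.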



posted on Aug, 8 2011 @ 01:52 PM
Just watched the QuakeCon keynote. He hints at tech similar to Unlimited Detail. I recommend watching the entire video, but he hints at toying around with something like Unlimited Detail around the 39 minute mark.


edit on 8-8-2011 by BIGPoJo because: (no reason given)



posted on Aug, 9 2011 @ 06:46 AM
reply to post by Uncinus
 


Sorry, I never explained myself very well. It was just that with them saying that this tech is much more resource friendly, we wouldn't need such powerful machines to run it. I just thought that, in turn, would lead to games/simulations being created that were much, much larger than we are used to.

Regardless of what it's going to be used for, this new technology (I'm right in saying it's not voxels, right?) will lead to big things. As with all new tech though, we'll see how quickly or readily it's adopted by the industry. I'm very excited about it.



posted on Aug, 9 2011 @ 01:50 PM
reply to post by Thundersmurf
 


It's voxels; they refer to voxels on their site.

It's detailed, but only because they repeat everything. It's really not an unlimited world.

[atsimg]http://files.abovetopsecret.com/images/member/6458f4e95fb1.jpg[/atsimg]



posted on Aug, 10 2011 @ 08:46 AM
reply to post by Uncinus
 


Now, I'm sure there's a post a few up from your reply which has a video 'part 2' in which they explain that they are not using voxels.



posted on Aug, 10 2011 @ 09:39 AM
reply to post by Thundersmurf
 


They use "atoms", 64 per cubic mm. Those ARE voxels. It's like saying "oh, no, we don't use a polygon engine, we use a fragment engine". It's the same thing.

What they claim is different is that they don't use "ray tracing" but instead a "search algorithm", but their description of it is moronic.


Unlimited Detail's method is very different to any 3D method that has been invented so far. The three current systems used in 3D graphics are ray tracing, polygons, and point clouds/voxels; they all have strengths and weaknesses. Polygons run fast but have poor geometry; ray tracing and voxels have perfect geometry but run very slowly.

Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine. It is best explained like this: if you had a Word document and you went to the search tool and typed in a word like 'money', the search tool quickly searches for every place that word appeared in the document. Google and Bing are also search engines that go looking for things very quickly. Unlimited Detail is basically a point cloud search algorithm. We can build enormous worlds with huge numbers of points, then compress them down to be very small. The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen; it doesn't touch any unneeded points. All it wants is 1024*768 (if that is our resolution) points, one for each pixel of the screen.

It has a few tricky things to work out, like: what objects are closest to the camera, what objects cover each other, how big should an object be as it gets further back. But all of this is done by a new sort of method that we call "mass connected processing". Mass connected processing is where we have a way of processing masses of data at the same time and then applying the small changes to each part at the end.


That sounds exactly like a description of ray-casting into a sparse voxel octree, but written as if he's trying to explain it to a child. Here's how grown-ups describe the same thing:

www.crs4.it...:2008:SGR%27


The method is based on the decomposition of a volumetric dataset into small cubical bricks, which are then organized into an octree structure maintained out-of-core. The octree contains the original data at the leaves, and a filtered representation of children at inner nodes. At runtime an adaptive loader, executing on the CPU, updates a view- and transfer function-dependent working set of bricks maintained on GPU memory by asynchronously fetching data from the out-of-core octree representation. At each frame, a compact indexing structure, which spatially organizes the current working set into an octree hierarchy, is encoded in a small texture. This data structure is then exploited by an efficient stackless raycasting algorithm, which computes the volume rendering integral by visiting non-empty bricks in front-to-back order and adapting sampling density to brick resolution. Block visibility information is fed back to the loader to avoid refinement and data loading of occluded zones. The resulting method is able to interactively explore multi-giga-voxel datasets on a desktop PC.


(actually that's a more advanced version of the same thing, as it includes GPU rendering)
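To make the comparison a bit more concrete, here is a toy Python version of that kind of traversal (a heavy simplification of the paper, and obviously not Euclideon's actual code): one ray per pixel, visit the octree children nearest-first, and stop at the first solid node, so only one "atom" is ever fetched per pixel. It assumes the same node layout as the little builder sketch earlier in the thread (None = empty, "solid" = filled 1x1x1 leaf, otherwise a tuple of 8 children with x as the highest bit).

[code]
def ray_box(origin, direction, lo, hi):
    """Slab test: entry distance t (>= 0) if the ray hits the box, else None."""
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, direction, lo, hi):
        if abs(d) < 1e-12:
            if o < l or o > h:
                return None
        else:
            t0, t1 = (l - o) / d, (h - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            tmin, tmax = max(tmin, t0), min(tmax, t1)
            if tmin > tmax:
                return None
    return tmin


def first_hit(node, origin, direction, lo, size):
    """Return the min corner of the nearest solid leaf hit by the ray, or None."""
    hi = tuple(c + size for c in lo)
    if node is None or ray_box(origin, direction, lo, hi) is None:
        return None
    if node == "solid":
        return lo   # one "atom" for this pixel; nothing deeper is touched

    half = size / 2
    candidates = []
    for i, child in enumerate(node):
        if child is None:
            continue   # empty octants are skipped entirely
        dx, dy, dz = (i >> 2) & 1, (i >> 1) & 1, i & 1
        child_lo = (lo[0] + dx * half, lo[1] + dy * half, lo[2] + dz * half)
        t = ray_box(origin, direction, child_lo, tuple(c + half for c in child_lo))
        if t is not None:
            candidates.append((t, child, child_lo))

    # Visit children front-to-back so the first solid hit is the nearest one.
    for _, child, child_lo in sorted(candidates, key=lambda c: c[0]):
        hit = first_hit(child, origin, direction, child_lo, half)
        if hit is not None:
            return hit
    return None


# One ray per pixel: using the 16^3 scene from the builder sketch, a ray
# fired down +x at height y=0.5, z=0.5 stops at the first solid voxel.
# hit = first_hit(root, (-5.0, 0.5, 0.5), (1.0, 0.0, 0.0), (0, 0, 0), 16)
[/code]

The "search" Euclideon describes is presumably doing something cleverer than this per-pixel walk, but structurally it is the same job: cull everything the camera can't see and fetch one point per screen pixel from a hierarchical point/voxel structure.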


