NERDGASM ALERT: Detailed Rendering of CG just got infinitely better. The polygon is dead

posted on Aug, 17 2011 @ 11:28 PM

Originally posted by Laokin
He was totally right though... their artists DO suck... LOL!

Naw mang, didn't you see that tree trunk they got an artist to make? Looked brilliant, nothing sucky about it...
Obviously the rock (for example) was better, but it was also real.


posted on Aug, 18 2011 @ 12:08 PM
Does this mean we will finally get to see the real Pentagon impact videos? I hope so. It's time all that conspiracy stuff was put to rest.

posted on Aug, 18 2011 @ 05:58 PM
He said it would make the graphics 100,000 times better. I don't see why the graphics industry would want to make such a big leap. There is a lot of money to be made by slowly advancing the technology you have.

posted on Aug, 18 2011 @ 06:08 PM

Originally posted by sabbathcrazy
I don't see why the graphics industry would want to make such a big leap.

Yes, they would. They plough a lot of money into cutting-edge research.

There is a lot of money to be made by slowly advancing the technology you have.

No there isn't. See above.

posted on Aug, 18 2011 @ 06:32 PM
Here's Bruce Dell hawking the same exact tech back in 2003.

IIRC he got $2 million from the Aussie government that year.

Basically it's a crock of #. Even their logo makes it seem like some kind of new-age cult.

Anyway, here's a competing voxel engine that looks significantly better and doesn't resort to making ridiculous claims like "unlimited" this or that. It's called Atomontage, and they did more in a few years than Unlimited Detail could do in a decade.

You can visit their website here: atomontage

posted on Aug, 21 2011 @ 10:34 AM
wow nice job. gave u some stars

posted on Aug, 25 2011 @ 04:51 AM
To everyone being a dick about this & ripping on this tech: I recommend you watch this interview, because the tech is both explained & demoed, and it pretty well covers all of the skeptics' BS claims.

Peace,

posted on Aug, 25 2011 @ 05:46 AM
reply to post by B.Morrison

...or you could read the thread instead of reiterating the same old arguments...

posted on Aug, 26 2011 @ 03:56 AM
reply to post by john_bmth

how bout no scott.

2nd line.

posted on Aug, 26 2011 @ 04:11 AM
reply to post by B.Morrison

That's really interesting! I've been following Unlimited Detail since it first showed up on YouTube.
I'm anxious to see an MMO-style 3D scene with it, ya know, with mountains and vegetation and player characters, animals, NPCs, vehicles, shorelines, starry skies, space-to-ground travel, ground-to-space travel, etc.

posted on Aug, 28 2011 @ 03:47 AM
As amazing as this is,

and as much as the creator should get pats on the back...

Anyone cure cancer yet?

No? No cancer cure? Oh... well... then creating better FPS graphics is a really exciting life...
Not that the creator did badly, but let's look at the hours a day sucked out of the life of the planet.
Let's say a video game has ONLY 10 million daily users playing an average of 1 hour. Both are low stats for a good game (google World of Warcraft for a WTF?).

So... 10 million hours a day. At a 40-hour work week, that's 250,000 weeks of work a day.
Average working life: 18 til 60? 51 weeks a year? So 42 years of 51 weeks, which comes to only 2,142 weeks of work in the life of one person.
That means a video game with 10 million people playing an hour a day kills the equivalent of about 117 people a day... 117 people who will never produce anything, never discover anything, never service anything, never help anyone...
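The arithmetic above can be checked in a few lines (all inputs are the assumptions stated above, not real statistics):

```python
# Back-of-the-envelope check of the figures above.
daily_player_hours = 10_000_000          # 10 million players x 1 hour/day
work_week_hours = 40

weeks_of_work_per_day = daily_player_hours / work_week_hours   # 250,000

working_years = 60 - 18                  # working from age 18 to 60
weeks_per_year = 51
lifetime_weeks = working_years * weeks_per_year                # 2,142

work_lives_per_day = weeks_of_work_per_day / lifetime_weeks
print(round(work_lives_per_day))         # -> 117 working lifetimes per day
```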

The dude who would have discovered the cure for cancer is probably a level 50 elf right now.

posted on Aug, 28 2011 @ 04:11 AM
Voxel octrees: nothing new here. Nvidia is already working on it, and so is John Carmack.

posted on Aug, 28 2011 @ 10:28 PM
Would love this to be true..
Imagine games with HUGE environments.

Need for speed, GTA, Entropia Universe, ArmA....

It doesn't HAVE to be global, just HUGE....

posted on Aug, 28 2011 @ 10:59 PM
Just thinking: those interested in this subject may want to look at this laser projection.

posted on Dec, 22 2011 @ 03:09 AM
I'd like to address the plausibility of Euclideon Unlimited Detail in a rational manner before trying to compare it outright as something that already exists or has too many disadvantages to work well. Below I'll state my case and explain as best as possible.

First and foremost, it is a variation of a Sparse Voxel Octree engine. That said, it does not appear to be a conventional Sparse Voxel Octree engine in every respect. While I personally have not had hands-on experience with this technology, I have seen enough of Mr. Dell's explanation that I can surmise what he is actually doing differently, and it is indeed extremely clever.

In regard to the standard model format or limitation thereof in using Sparse Voxel Octree methodologies, I am fairly certain Mr. Dell is telling the truth when he says those limits do not apply with what he is doing. What this comes down to is the methodology of access to that data, coupled with how that data is represented in screen space.

By technicality, a sparse voxel octree system is unlimited detail by default. The limitation is available memory in conjunction with the resolution of the screen space. This is a known advantage/disadvantage of that system, so Euclideon making the claim of "Unlimited Detail" does not immediately flag it as a scam. He's simply stating the blatantly obvious for those who already know Sparse Voxel Octree systems.

Where we find a somewhat unconventional claim is in the idea that Euclideon has solved the fundamental problem of the computational requirement for large resolutions, in conjunction with the file size constraints which would normally make those point cloud representations gargantuan in memory as they are further refined. But again, the clue to how he has gone about solving this problem is on the surface, in that he likens it to a search engine algorithm for optimizing what needs to be dealt with.

Normally in a Sparse Voxel Octree, it depends on branching and algorithmically subdividing to get the LOD we're looking for. In relation to high definition detail, that can be expensive as the algorithm continues trying to resolve further detail. However, I'd like to point out that this problem exists only if you are forced to traverse the hierarchy of the voxel algorithm in a linear manner, much like procedural textures resolving detail.
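That linear traversal can be sketched roughly as follows (a toy illustration; the node layout, names, and projection math are invented for the example, not Euclideon's code):

```python
# Toy sparse voxel octree refinement: descend the hierarchy until a
# node's projected size falls below one pixel, then emit it. This is
# the "linear" recursive LOD resolution described above.

class Node:
    def __init__(self, center, size, children=None):
        self.center = center           # (x, y, z) position of the voxel
        self.size = size               # edge length of this voxel
        self.children = children or [] # up to 8 occupied octants

def projected_size(node, camera_z, screen_height, fov_scale=1.0):
    """Rough screen-space size of the voxel in pixels (toy projection)."""
    depth = max(node.center[2] - camera_z, 1e-6)
    return node.size / depth * fov_scale * screen_height

def render(node, camera_z, screen_height, out):
    # Stop refining once the voxel is smaller than a pixel,
    # or when there is nothing finer to descend into.
    if projected_size(node, camera_z, screen_height) <= 1.0 or not node.children:
        out.append(node)
        return
    for child in node.children:
        render(child, camera_z, screen_height, out)
```

Every frame, this walks from the root again; the cost grows with how deep you must recurse before voxels reach pixel size, which is exactly the expense being pointed at.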

In light of this, we must ask what, then, is going on to circumvent these limitations?

The clue is when he mentions that his system is more like a search engine. What this implies is that he has stored the point cloud data in a manner by which individual points are indexed within the file itself and the inside of the file is searchable without loading the entire file to do so.
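A minimal sketch of "searchable without loading the entire file": store a byte offset per point, then seek straight to the one you need. The packed-binary layout here is invented purely for illustration; Euclideon's actual format is unknown.

```python
import io
import struct

def write_points(f, points):
    """Write points as packed doubles; return {index: byte_offset}."""
    offsets = {}
    for i, (x, y, z) in enumerate(points):
        offsets[i] = f.tell()
        f.write(struct.pack("<3d", x, y, z))
    return offsets

def read_point(f, offsets, i):
    """Fetch one point via its offset, without reading anything else."""
    f.seek(offsets[i])
    return struct.unpack("<3d", f.read(24))   # 3 doubles = 24 bytes

buf = io.BytesIO()
offsets = write_points(buf, [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])
print(read_point(buf, offsets, 1))   # -> (1.0, 2.0, 3.0)
```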

The best explanation I can give about this is the following analogy:

Let's say you have all of Wikipedia. Clearly, the amount of data required to show all of it at once is astronomical, and clearly you could not show the entirety of Wikipedia on your computer screen at the same time. This is why Wikipedia has a search box, where you type in a query and retrieve an individual page from the mountains of data it contains.

Because Wikipedia is indexed, it does not have to start on page 1 of Wikipedia and scan the billions of pages it contains in a linear manner to get to what you were looking for. Instead, the indexing algorithm knows to skip most of Wikipedia and only go straight to the search criteria matches for that circumstance. This is also the reason why it doesn't take your entire lunch hour to submit a search to Google before you get a result.

Now, imagine all of Wikipedia was named Wikipedia.3DS

It remains internally searchable, and a majority of that file (regardless of size) is immediately not relevant to the search criteria, and so a majority of the data in Wikipedia.3DS is ignored, except the point data you just searched for. Likewise, you never had to load (or resolve) all of Wikipedia.3DS before you could bother searching inside of it, no more than Google has to load the entire Internet to find what you're looking for.

I refer to this technique as Fractional File Indexing.
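The contrast between the page-1 scan and the search box can be put in code (a toy illustration with invented data, not Euclideon's file format):

```python
# A linear scan must touch every record until it finds a match, while
# an index jumps straight to the one you asked for.

data = [(f"point_{i}", (i * 0.1, i * 0.2, i * 0.3)) for i in range(100_000)]

def linear_scan(key):
    """Reading 'Wikipedia' from page 1: O(n) touches."""
    for k, v in data:
        if k == key:
            return v
    return None

# Build the index once, up front (the 'fractional file index' idea).
index = {k: v for k, v in data}

def indexed_lookup(key):
    """The search box: O(1) expected, regardless of data size."""
    return index.get(key)
```

The same principle works on disk: store byte offsets for octree nodes so a query can seek to just the subtree it needs, never loading the whole file.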

As for the file size of those searchable point cloud models, the point cloud data itself would be incredibly tiny to begin with because it's an algorithmic approach. That is another stated benefit of Sparse Voxel Octree systems: they are ridiculously efficient. What Euclideon seems to have innovated is the ability to rectify the steps within the algorithmic LOD so as to skip the majority of the computations, rather than working through them linearly, and go straight to the LOD branch they need at that moment. And since screen space and memory are the limits on detail, this method likely just freed up something like 75% of the computation time normally spent on mundane recursion.

In regard to animating a Sparse Voxel Octree, it is actually possible, as shown here:

posted on Dec, 22 2011 @ 03:22 AM
I was wondering how long it would actually take to create anything with this tech.

I mean seriously, building things up from atomic structure. Imagine creating an engine for this; have fun coding.
Well, interesting find nonetheless.

posted on Dec, 22 2011 @ 03:31 AM

*apologies for the long explanation in advance

Now, the thing about Sparse Voxel Octree is that it specifically is dependent on screen resolution and computer memory. The larger the viewport of the window, the more memory it is going to need in order to display all of those points.

However, this is based on the recursion process, which is a linear methodology. The more detail and the higher the viewport resolution, the more memory it needs to do that hierarchy recursion in order to algorithmically resolve the fidelity. This is likely not the case with Euclideon because of the search criteria aspect mentioned before - the Fractional File Index. It is likely skipping most of the steps in that algorithmic approach, the way a search engine skips 99.99% of the Internet to give you only the part it actually needs.

Because they wouldn't need to traverse the entire hierarchy of the algorithm in order to resolve fidelity, and because that search criteria system is actively ignoring anything in the hierarchy that is completely irrelevant, they are not wasting computation on the mundane recursion to get there via linear computation methods normally used in a Sparse Voxel Octree engine.

The file size is already tiny to begin with because it's an algorithmic hierarchy, and it's "unlimited detail" precisely because of this: it is infinite detail, but only in the context of what you are able to see at any given moment, within the limits of system memory and screen-space resolution. We're resolving detail within screen space, but the overall effect is that you don't have to represent infinite detail on a 1:1 file basis. In this manner, infinity is represented numerically, much as we say Pi is infinite without trying to write out every digit.

The other interesting part is that since screen space is the other limiting factor, we essentially end up with a sort of "True Geometry" threshold, much like True Color: a point where displaying further detail wouldn't make any discernible difference, because it would be smaller than the pixels on your screen. Again, this is inherent in standard Sparse Voxel Octree engines already; all Euclideon seems to have done is skip the recursion aspects and free up that computation to put toward the rest of the system, gaining a fidelity boost on a normal CPU.
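That "True Geometry" threshold can be put in rough numbers: the octree depth at which a voxel's on-screen footprint drops below one pixel. A small sketch, with an invented and very simplified projection model:

```python
import math

def required_depth(object_size, distance, screen_height_px, fov_scale=1.0):
    """Octree depth beyond which extra detail cannot change a pixel."""
    # Approximate size of one pixel's footprint at the object's distance.
    pixel_footprint = distance / (fov_scale * screen_height_px)
    if pixel_footprint >= object_size:
        return 0                      # the whole object is sub-pixel
    # Each octree level halves the voxel edge; find the first level
    # whose voxel is no larger than a pixel.
    return math.ceil(math.log2(object_size / pixel_footprint))

# A 10 m object, 20 m away, on a 1080-pixel-tall screen:
print(required_depth(10.0, 20.0, 1080))   # -> 10
```

Past that depth, refining further is wasted work, which is the True Color parallel: more precision than the display can show.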

If anything, a fractional file index method would solve a lot of the limitations that a standard Sparse Voxel Octree has. The true memory requirement, then, as Mr. Dell has stated previously, would be only what you would need to show a similar bitmapped image at the same resolution, not counting the program itself or the computation involved, regardless of the high-end optimizations to the process. But yes, something like this is plausible and would likely end up working just as Mr. Dell claims...

Whether or not this is the process he's using to achieve it, I cannot say at this time. But I can at least say that what he's claiming isn't as impossible as many are quick to announce. Not even Notch.

posted on Dec, 22 2011 @ 03:33 AM
reply to post by KingAtlas

Laser-scan in objects, or simply create 3D models as you normally would. Most 3D modelers end up with a polygon budget in order to make their models "real time"; those models start out as ridiculously high-polygon meshes. So in this context it's actually easier to create content for it, as the original ultra-high-definition models would just be converted directly to point cloud data.
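One plausible sketch of that conversion step (helper names invented for illustration; a real converter would also carry normals, color, and material data): scatter sample points uniformly over each triangle of the high-poly mesh.

```python
import random

def sample_triangle(a, b, c, rng):
    """Uniform random point on triangle abc via barycentric coordinates."""
    u, v = rng.random(), rng.random()
    if u + v > 1.0:                 # fold back into the triangle
        u, v = 1.0 - u, 1.0 - v
    w = 1.0 - u - v
    return tuple(w * a[i] + u * b[i] + v * c[i] for i in range(3))

def mesh_to_points(triangles, samples_per_tri=100, seed=0):
    """Convert a triangle list into a raw point cloud."""
    rng = random.Random(seed)
    points = []
    for a, b, c in triangles:
        points.extend(sample_triangle(a, b, c, rng)
                      for _ in range(samples_per_tri))
    return points
```

Density per triangle would normally scale with triangle area so that large faces don't end up sparser than small ones; a fixed count is used here only to keep the sketch short.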
