Unlimited Detail

Started by bobjob, August 03, 2011, 03:37:32


bobjob

Unlimited Detail is a new technology that could very well change the whole 3D graphics industry.

Unlimited Detail Update Video

If possible, would the LWJGL team be interested in creating a JNI bridge for Java?

I have emailed the company and asked them a few questions about how the SDK will work when it's released.

kappa

Do have a read of this. As nice as it sounds, it's way too good to be true.

bobjob

I think it's very logical.

Shoot 640x480 rail-gun shots in Quake, and determine what you hit for each pixel.

It's just a culling algorithm run for every single pixel, minus the call to hardware.

You only need to render each pixel once (per frame). Current hardware acceleration will generally shade a pixel multiple times (overdraw).

So it makes sense: as computers get faster, resolutions will become standardised, while ever-increasing polygon counts will only get more sluggish. There needs to be an advanced sorting system. Granted, it will be slow on old processors, but it makes logical sense.

So basically each frame needs to run a search algorithm in a loop of length 640x480 - one search per pixel. The geometry is not the bottleneck but rather the CPU, which will speed up and will eventually be assisted by hardware.
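
As a rough illustration of the "one search per pixel" idea, here is a minimal Java sketch. The Camera, Ray and PointIndex types are made up for the example; the actual search structure Euclideon uses is not public.

```java
// Sketch only: one spatial search per pixel, so the per-frame cost is
// width * height lookups regardless of how many points are in the scene.
// Camera, Ray and PointIndex are hypothetical stand-ins.
public final class PerPixelSearchSketch {

    interface Camera {
        Ray rayThroughPixel(int x, int y, int width, int height);
    }

    interface PointIndex {
        /** ARGB colour of the first point hit along the ray, or 0xFF000000 on a miss. */
        int firstHitColour(Ray ray);
    }

    static final class Ray {
        final float ox, oy, oz; // origin
        final float dx, dy, dz; // direction
        Ray(float ox, float oy, float oz, float dx, float dy, float dz) {
            this.ox = ox; this.oy = oy; this.oz = oz;
            this.dx = dx; this.dy = dy; this.dz = dz;
        }
    }

    /** Fills a 640x480-style frame buffer with exactly one search per pixel. */
    static void renderFrame(int[] frameBuffer, int width, int height,
                            Camera camera, PointIndex scene) {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                Ray ray = camera.rayThroughPixel(x, y, width, height);
                frameBuffer[y * width + x] = scene.firstHitColour(ray);
            }
        }
    }
}
```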

bobjob

Also, if there is any quality degradation with distance, due to limits on what can be kept in fast memory, it would be degradation more similar to mip mapping than to model swapping.

Geometry would very likely be stored more like textures, grouped together and increasing in quality as the viewer approaches. It would be impossible to notice a quality drop at a far distance, just like a texture shrinking.
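
To illustrate the mip-map analogy, here is a small Java sketch of distance-based detail selection. The level count and distances are arbitrary, and this is only a guess at the general scheme, not the real storage format.

```java
// Sketch only: picking a coarser point-cloud level as distance grows, in the same
// way a mip level is picked for a texture. All constants are invented for illustration.
public final class DetailLevelSketch {

    /** Number of stored detail levels per object; level 0 is the finest. */
    static final int LEVEL_COUNT = 8;

    /** Distance (in world units) up to which the finest level is used. */
    static final float BASE_DISTANCE = 4.0f;

    /** Roughly halves the detail for every doubling of distance, so the drop in
     *  quality at range looks like a texture shrinking rather than a model swap. */
    static int levelForDistance(float distance) {
        if (distance <= BASE_DISTANCE) {
            return 0;
        }
        int level = (int) Math.floor(Math.log(distance / BASE_DISTANCE) / Math.log(2.0));
        return Math.min(level, LEVEL_COUNT - 1);
    }
}
```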

princec

This isn't too good to be true at all, it's perfectly excellent. I don't see anything that isn't technically feasible going on here. There have been quite a lot of negative comments about it without really understanding how it achieves what it achieves - looking at what they've got there, I think that they have an ingenious way of instancing point clouds at arbitrary locations, possibly overlapping, possibly not, and possibly using some kind of ingenious data compression technique for the point clouds. Various naysayers have mentioned that all they've done is repeat the same instanced geometry 10,000 times but this is of course the same problem using polygons: there's only so many unique models you can actually come up with in a certain amount of time. One thing that has occurred to me is that their models don't appear to use texture data - the points are the texture - so it might not be quite so inefficient as it sounds. And the fact that they can model every grain of sand in a beach leads me to think they have some very clever recursive data structures to allow them to do this inside ordinary memory constraints. A sparse voxel tree it ain't.
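
As a purely speculative sketch of the kind of recursive, instanced point-cloud structure described above: children of a node may be shared references, so one stored sub-cloud (a single grain of sand, say) can appear in millions of places without extra memory. All names here are invented for illustration.

```java
// Sketch only: a recursive node whose children may be shared (instanced) references,
// so a single stored sub-cloud can be reused all over the scene.
public final class InstancedCloudNode {

    /** Packed ARGB colour standing in for this whole region when seen from afar. */
    final int averageColour;

    /** Eight octants; an entry may be null (empty space) or point to a node that is
     *  also referenced from elsewhere in the tree, which is what makes instancing cheap. */
    final InstancedCloudNode[] children;

    InstancedCloudNode(int averageColour, InstancedCloudNode[] children) {
        this.averageColour = averageColour;
        this.children = children;
    }

    boolean isLeaf() {
        return children == null;
    }
}
```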

I'd like to see their approach to animation - deformable point clouds sound far easier than Markus's somewhat bullish post about it makes them sound, and animation is possibly going to be built into the way they use their point cloud data in the first place.

What specifically interests me is whether they can convince hardware vendors to take a look at their algorithms and data structures and see about some hardware proof-of-concepts. They say themselves they're getting 25fps at those resolutions (which is rather good considering it's pure software anyway!).

Cas :)