LWJGL 3 with Oculus Rift

Started by BrickFarmer, February 12, 2015, 11:55:48


spasi

The ant launcher uses the JDK pointed to by %JAVA_HOME%. Set it to an x64 JDK.

BrickFarmer

A few steps closer after a couple of fresh tries. Now I'm trying the equivalent code from HelloLibOVR and I'm getting an UnsatisfiedLinkError: org.lwjgl.ovr.OVRInitParams.offsets.

I can see the ovr package in the .jar file, so that step worked, but maybe this is something not linked in for OVR? Or does it generate a separate native lib? Actually, I don't see an output that looks like ovr in the natives directory... Is there something else required other than setting OCULUS_SDK_PATH?

[edit] build.ovr is true when ant is running
[edit2] Actually, it looks like the 'release' target only downloads the native libs... so maybe I'm using the wrong target after native-compile?
Oculus Rift CV1, MBP 2016 - 2.9 i7 - Radeon Pro 460  OSX 10.12.4,  Win7 - i5 4670K - GTX1070.
Oculus Rift VR Experiments: https://github.com/WhiteHexagon

spasi

Hmm, yeah, the release target doesn't play nice with local builds. I'll have to make some changes there. For now, replace lwjgl.dll in the release with the one you've built in /libs.

BrickFarmer

Yes, that did the trick! HelloLibOVR works now :) Thanks!

Is it possible some classes have moved around in the latest sources? All my imports for org.lwjgl.system.glfw.GLFW... are failing, as are GL11.glLight and GL11.glLoadMatrix. I understand I'm on the cutting edge now, building this way, but I'm just wondering if this is a new change or just a bad snapshot of master. Thanks.

BrickFarmer

Besides the above question, I'm also wondering about the parameters for the create call.

line 56:

https://github.com/WhiteHexagon/example-lwjgl3-rift/blob/master/src/main/java/com/sunshineapps/riftexample/RiftClient0600.java

The code runs, but the OVRHmdDesc structure comes back empty. For these generated interfaces it seems like there is a pointer version and a buffer version, so I'm wondering if I'm constructing the OVRHmdDesc object in the right way and if buffer() is the right call. I thought it might be that the buffer needed a rewind, but that doesn't seem to be the case.

At line 77 I also have a similar dilemma. I found BufferUtils :) but again I'm not feeling confident in the way I'm using the buffer, since the result is also blank, although that might be related to the above failure.

spasi

Quote from: BrickFarmer on May 29, 2015, 20:24:43
Is it possible some classes have moved around in the latest sources? All my imports for org.lwjgl.system.glfw.GLFW... are failing, as are GL11.glLight and GL11.glLoadMatrix.

Yes, GLFW has moved and is a top-level package now: org.lwjgl.glfw. Functions like glLightfv have been renamed to match the native names; see this thread for details.
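
For example, old code like GL11.glLight(GL_LIGHT0, GL_POSITION, position) now looks roughly like this (a minimal sketch; the buffer contents are only illustrative and an existing OpenGL context is assumed):

// assumes: import static org.lwjgl.opengl.GL11.*;
// org.lwjgl.system.glfw.GLFW is now org.lwjgl.glfw.GLFW
FloatBuffer position = BufferUtils.createFloatBuffer(4);
position.put(new float[] { 0.0f, 1.0f, 1.0f, 0.0f }).flip();
glLightfv(GL_LIGHT0, GL_POSITION, position); // was GL11.glLight(...)
// GL11.glLoadMatrix(...) is likewise glLoadMatrixf(...) now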

Quote from: BrickFarmer on May 30, 2015, 12:04:34
Besides the above question, I'm also wondering about the parameters for the create call.

line 56:

https://github.com/WhiteHexagon/example-lwjgl3-rift/blob/master/src/main/java/com/sunshineapps/riftexample/RiftClient0600.java

The code runs, but the OVRHmdDesc structure comes back empty. For these generated interfaces it seems like there is a pointer version and a buffer version, so I'm wondering if I'm constructing the OVRHmdDesc object in the right way and if buffer() is the right call. I thought it might be that the buffer needed a rewind, but that doesn't seem to be the case.

At line 77 I also have a similar dilemma. I found BufferUtils :) but again I'm not feeling confident in the way I'm using the buffer, since the result is also blank, although that might be related to the above failure.

The ovrHmd_Create function writes a pointer to the pHmd parameter, which means you must use a PointerBuffer. This pointer indeed points to an ovrHmdDesc struct. The code below should answer both your questions:

// ovrHmd_Create writes a pointer to pHmd
PointerBuffer pHmd = BufferUtils.createPointerBuffer(1);
ovrHmd_Create(0, pHmd);
long hmd = pHmd.get(0); // This is the handle that you'll use in other ovrHmd* functions

// Create a ByteBuffer at the pointer and wrap it in an OVRHmdDesc
OVRHmdDesc desc = new OVRHmdDesc(memByteBuffer(hmd, OVRHmdDesc.SIZEOF));
// Read the struct fields
int resolutionW = desc.getResolutionW();
int resolutionH = desc.getResolutionH();
System.out.println("resolution W=" + resolutionW + " H=" + resolutionH);

// You can either use a struct instance, that wraps a ByteBuffer...
OVRFovPort defaultEyeFovL = new OVRFovPort();
OVRFovPort defaultEyeFovR = new OVRFovPort();

desc.getDefaultEyeFov(defaultEyeFovL.buffer(), ovrEye_Left);
desc.getDefaultEyeFov(defaultEyeFovR.buffer(), ovrEye_Right);

System.out.println(defaultEyeFovL.getUpTan()); // instance method
System.out.println(defaultEyeFovR.getDownTan()); // instance method

// ...or use ByteBuffers directly
ByteBuffer maxEyeFovL = BufferUtils.createByteBuffer(OVRFovPort.SIZEOF);
ByteBuffer maxEyeFovR = BufferUtils.createByteBuffer(OVRFovPort.SIZEOF);

desc.getMaxEyeFov(maxEyeFovL, ovrEye_Left);
desc.getMaxEyeFov(maxEyeFovR, ovrEye_Right);

System.out.println(OVRFovPort.UpTan(maxEyeFovL)); // static method
System.out.println(OVRFovPort.DownTan(maxEyeFovR)); // static method


Please note that the way structs are exposed in LWJGL is subject to change before the final 3.0 release. For now, it's up to you whether you want to use ByteBuffers directly or the wrapper classes. The wrapper classes are more convenient, but there's an extra instance allocated, so try to avoid them in loops or other performance-sensitive code and work with reusable ByteBuffer instances instead.
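
For example, a rough sketch of reusing a single ByteBuffer in a loop, using the desc instance and the accessors from the snippet above:

// Reuse one ByteBuffer for both eyes instead of allocating wrapper instances
ByteBuffer fov = BufferUtils.createByteBuffer(OVRFovPort.SIZEOF);
for (int eye = 0; eye < 2; eye++) { // ovrEye_Left = 0, ovrEye_Right = 1
    fov.clear(); // reset the position before refilling, just to be safe
    desc.getDefaultEyeFov(fov, eye);
    System.out.println("eye " + eye + " upTan = " + OVRFovPort.UpTan(fov));
}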

spasi

You may want to see this sample by Morgan McGuire. It uses GLFW, OpenGL and LibOVR 0.6:

Quote
Running this program should initialize a DK2 in direct-to-rift mode, display the Oculus health and safety warning, and show a simple 3D scene with full head tracking and per-user accommodation parameters.

BrickFarmer

Thanks for all that info. So today I have made quite good progress, at least I think so :) I'm looking at the following function, though:

ovrPosef EyeRenderPose[2];
ovr_CalcEyePoses(hmdState.HeadPose.ThePose, ViewOffset, EyeRenderPose);


I can see this probably translating to:

            OVRPosef headPose = new OVRPosef();
            hmdState.getHeadPoseThePose(headPose.buffer());
            OVRUtil.ovr_CalcEyePoses(headPose, hmdToEyeViewOffset, outEyePoses);


The problem I see is that the 2nd and 3rd parameters are both arrays. I ran into issues with the previous Oculus SDK where array parameters need to consist of contiguous memory blocks. So with JNA it involved something like:

private final Posef poses[] = (Posef[]) new Posef().toArray(2);


Is something similar still required? Either way, I could use a tip on how to construct array parameters, please.

I could take a guess that maybe I just create a ByteBuffer of double the size (since each takes an array of 2 elements, one for each eye) and handle the pointers etc. myself? I'm just worried this might get messy, and might not be the best way...

[edit]

Actually, do you think it would be possible, from the generating code, to identify when there are array parameters or variables that have ovrEye_Count as their size and generate a left and right accessor/setter? Since that is a special case of the OVR library, I understand if that is undesirable.

RenderPose[ovrEye_Count]


would generate:

setRenderPose0
setRenderPose1


Just an idea :) Otherwise I'm wondering if I should generate a wrapper or helper class for working with LibOVR, whatever you prefer...

spasi

This is indeed going to be ugly, but we can't do much better until Project Panama becomes a reality in Java. LibOVR in particular is an API heavy on structs and, as I said above, LWJGL needs more work on struct support. Also keep in mind that LWJGL always prefers performance and matching the native interface over convenience. I don't know what JNA does with Posef[], but I'm sure it involves copying memory.

So, I would do something like this:

// Get the head pose and eye rendering information
OVRPosef headPose = ...;
OVREyeRenderDesc leftEyeRD = ...;
OVREyeRenderDesc rightEyeRD = ...;

// hmdToEyeViewOffset is an ovrVector3f[2]
ByteBuffer hmdToEyeViewOffset = BufferUtils.createByteBuffer(2 * OVRVector3f.SIZEOF);
leftEyeRD.getHmdToEyeViewOffset(hmdToEyeViewOffset);
hmdToEyeViewOffset.position(OVRVector3f.SIZEOF);
rightEyeRD.getHmdToEyeViewOffset(hmdToEyeViewOffset);
hmdToEyeViewOffset.position(0);

// outEyePoses is an ovrPosef[2]
ByteBuffer outEyePoses = BufferUtils.createByteBuffer(2 * OVRPosef.SIZEOF);
ovr_CalcEyePoses(headPose.buffer(), hmdToEyeViewOffset, outEyePoses);

// Retrieve the eye poses
outEyePoses.limit(OVRPosef.SIZEOF);
OVRPosef leftEyePose = new OVRPosef(outEyePoses.slice().order(outEyePoses.order()));
outEyePoses.position(OVRPosef.SIZEOF);
outEyePoses.limit(2 * OVRPosef.SIZEOF);
OVRPosef rightEyePose = new OVRPosef(outEyePoses.slice().order(outEyePoses.order()));


The above could be improved in a few ways (that I can think of):

1) Methods like getHmdToEyeViewOffset could also accept an offset, so you wouldn't have to mess with hmdToEyeViewOffset's position (see the sketch below).
2) We could add a memSlice(buffer, from, to) method that would let you get the eye poses in 2 lines instead of 5.
3) As an alternative to 2, we could add struct constructors that accept an offset and do the slicing internally.
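
For example, idea 1 might turn the hmdToEyeViewOffset setup above into something like the following. This is purely hypothetical; an offset-accepting overload does not exist in the current build:

// Hypothetical offset-accepting getter, shown only to illustrate idea 1
leftEyeRD.getHmdToEyeViewOffset(hmdToEyeViewOffset, 0);
rightEyeRD.getHmdToEyeViewOffset(hmdToEyeViewOffset, OVRVector3f.SIZEOF);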

BrickFarmer

Panama looks perfect for this. Anyway, thanks for the rapid solution! I think that is fine for now, and it's at least explicit in its intention. I will probably wrap a lot of the boilerplate code, just to simplify reuse, at the cost of maintaining another wrapper.

I've uploaded my current progress; it builds locally, but does not yet run. I need to do some work on the FrameBuffer texture code tomorrow, and I want to remove some other dependencies.

spasi

The next nightly (#51) will have memSlice in MemoryUtil. Make sure to read the javadoc. If you use it, the last 5 lines above become:

OVRPosef leftEyePose = new OVRPosef(memSlice(outEyePoses, OVRPosef.SIZEOF));
OVRPosef rightEyePose = new OVRPosef(memSlice(outEyePoses, OVRPosef.SIZEOF, 2 * OVRPosef.SIZEOF));

BrickFarmer

Quote from: spasi on June 01, 2015, 15:43:17
The next nightly (#51) will have memSlice in MemoryUtil.
Great!

On ovrHmd_SubmitFrame I can see it's looking for a layerPtrList. This seems different from my earlier question. Would this be the correct way to construct a list of pointers?

layers = BufferUtils.createByteBuffer(Long.BYTES);
layers.putLong(layer0.getPointer());
layers.position(0);


Line 494 from my code. I'm just checking because I get a native crash on the ovrHmd_SubmitFrame call, and that structure seems the most likely candidate.

spasi

Use a PointerBuffer:

PointerBuffer layers = BufferUtils.createPointerBuffer(1);
layers.put(0, layer0);


There's an ovrHmd_SubmitFrame version that accepts a PointerBuffer instead of ByteBuffer + layerCount.

BrickFarmer

Quote from: spasi on June 01, 2015, 16:59:33
There's an ovrHmd_SubmitFrame version that accepts a PointerBuffer instead of ByteBuffer + layerCount.

Much easier :) However, I'm seeing conflicting info on the parameter type.

In ovr_capi_0_6_0.h
ovrLayerHeader const * const * layerPtrList

and the example in the comments for ovrHmd_SubmitFrame talks about headers rather than layers for the pointer list, but it also mentions "layerPtrList Specifies a list of ovrLayer pointers".

So I'm not sure which I should be using there, and also whether it needs getPointer() since it's expecting a long:

[edit] Tried both, and my crash must be elsewhere. I will try again tomorrow with a fresh mind :)
        OVRLayerHeader header = new OVRLayerHeader();
        header.setType(ovrLayerType_EyeFov);
        header.setFlags(ovrLayerFlag_TextureOriginAtBottomLeft);
        
        layer0 = new OVRLayerEyeFov();
        layer0.setHeader(header.buffer());
        for (int eye = 0; eye < 2; eye++) {
            layer0.setColorTexture(textureSet[eye].buffer(), eye);
            layer0.setViewport(textureSize.buffer(), eye);
            layer0.setFov(fovPorts[eye].buffer(), eye);
            // we update pose only when we have it in the render loop
        }
        
        layers = BufferUtils.createPointerBuffer(1);
        layers.put(0, header.getPointer());
        //layers.put(0, layer0.getPointer());

spasi

Quote from: BrickFarmer on June 01, 2015, 18:14:29
In ovr_capi_0_6_0.h
ovrLayerHeader const * const * layerPtrList

and the example in the comments for ovrHmd_SubmitFrame talks about headers rather than layers for the pointer list, but it also mentions "layerPtrList Specifies a list of ovrLayer pointers".

The Header field is always the first field in all ovrLayer* structs, which means that the address of an ovrLayer* struct will always be the same as &layer.Header (as seen in the example code).

Quote from: BrickFarmer on June 01, 2015, 18:14:29
whether it needs getPointer() since it's expecting a long:

All LWJGL structs implement the Pointer interface and there's a convenient put(int index, Pointer wrapper) method in PointerBuffer that invokes .getPointer() for you.

Quote from: BrickFarmer on June 01, 2015, 18:14:29
        OVRLayerHeader header = new OVRLayerHeader();
        header.setType(ovrLayerType_EyeFov);
        header.setFlags(ovrLayerFlag_TextureOriginAtBottomLeft);
        
        layer0 = new OVRLayerEyeFov();
        layer0.setHeader(header.buffer());
        for (int eye = 0; eye < 2; eye++) {
            layer0.setColorTexture(textureSet[eye].buffer(), eye);
            layer0.setViewport(textureSize.buffer(), eye);
            layer0.setFov(fovPorts[eye].buffer(), eye);
            // we update pose only when we have it in the render loop
        }
        
        layers = BufferUtils.createPointerBuffer(1);
        layers.put(0, header.getPointer());
        //layers.put(0, layer0.getPointer());

In the code above, you're creating an OVRLayerHeader struct, then copying its contents to the Header field of the OVRLayerEyeFov struct. LWJGL generates extra methods in struct classes for getting/setting fields of nested structs. You can change the code to:

        layer0 = new OVRLayerEyeFov();
        layer0.setHeaderType(ovrLayerType_EyeFov);
        layer0.setHeaderFlags(ovrLayerFlag_TextureOriginAtBottomLeft);
        ...


which does the same thing without an extra struct allocation + copy.