Big/little endians and MemoryUtil.memAlloc buffers

Started by jakethesnake, July 19, 2020, 14:13:21


jakethesnake

I've found that it's faster for me to pack my vertex data (bytes and shorts) into integers before putting them into a buffer and sending them to the GPU. Is this cross-platform? Or will the components come out in the wrong order in my shaders on some platforms?

example:

void put(byte a, byte b, short c){
     // mask b and c so sign extension doesn't clobber the higher bytes
     int i = ((a & 0xFF) << 24) | ((b & 0xFF) << 16) | (c & 0xFFFF);
     intBuffer.put(i);
}

Aisaaax

If the type sizes differ on a certain platform, this may be a problem, because you are shifting by constant values (24, 16). If the integer width is different somewhere, then this will break. If you want to make this platform-independent, you should first check the sizes of your types and calculate your shifts from them.
For example, make two constants
public static final int   SHIFT_FIRST, SHIFT_SECOND.
Then in the initialization sequence of your program, calculate their values as something like sizeof(int) - sizeof(byte)  //  sizeof(int) - sizeof(byte)*2  (with the sizes expressed in bits).

Keep in mind that there is no built-in sizeof() in Java, but you can work around that in various ways.
https://stackoverflow.com/questions/2370288/is-there-any-sizeof-like-method-in-java

That said, on most platforms you are safe to assume that an int is 4 bytes and a byte is, well, a byte. Most platforms have long since standardized the type sizes, so you can be pretty much certain that on any modern platform your code will not cause any issue. However, some embedded controllers or smaller custom processors can be different, I don't know. If you want to be certain, you'll need to double-check and calculate your shifts manually.
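
A minimal sketch of that idea in Java (my own illustration; the constant names are made up, and Integer.SIZE / Byte.SIZE report the widths in bits):

    public class PackShifts {
        // Shift widths derived from the reported type widths instead of hard-coded 24/16.
        public static final int SHIFT_FIRST  = Integer.SIZE - Byte.SIZE;      // 32 - 8  = 24
        public static final int SHIFT_SECOND = Integer.SIZE - 2 * Byte.SIZE;  // 32 - 16 = 16

        static int pack(byte a, byte b, short c) {
            return ((a & 0xFF) << SHIFT_FIRST) | ((b & 0xFF) << SHIFT_SECOND) | (c & 0xFFFF);
        }
    }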

KaiHH

Well, no. The two platforms we are talking about are the Java platform and OpenGL/GLSL. Both are platforms in their own right, and both define all basic types as fixed-size entities (int as a signed 32-bit integer [1, 2], short as a signed 16-bit integer, etc.), regardless of the underlying system they are implemented and run on. So varying basic type widths will never be a problem.

[1]: https://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
[2]: https://www.khronos.org/opengl/wiki/Data_Type_(GLSL)

Aisaaax

I'm pretty sure that if you run your Java application on a 16-bit microcontroller for some reason (which is possible), you will not have your 32-bit integers there.
That's what I was talking about.

KaiHH

Also, no. You will have 32-bit integers, just like you had 64-bit longs on a 32-bit JVM on a 32-bit OS/platform. Those 32-bit integers on the 16-bit microprocessor/hardware obviously won't be represented as a single native 32-bit hardware register, but that makes no difference. All operations (bitwise, etc., as defined by the Java platform) will behave as specified and, apart from performance, you won't observe any difference in behaviour, at least as long as we are talking about the "Java Platform" (regardless of any CPU/microcontroller it is running on). That's the whole deal with "write once, run anywhere".
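
To illustrate the point (my own example, not from the thread): the language itself pins the widths and the shift semantics, so the following prints the same values on every conforming JVM, whatever the hardware underneath:

    public class FixedWidthDemo {
        public static void main(String[] args) {
            System.out.println(Integer.SIZE);           // always 32
            System.out.println(Short.SIZE);             // always 16
            // Overflow and shift behaviour are defined by the language, not the CPU.
            System.out.println(Integer.MAX_VALUE + 1);  // -2147483648 everywhere
            System.out.println((byte) 0x80 << 24);      // Integer.MIN_VALUE everywhere
        }
    }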

jakethesnake

Thanks for your answers. I never expected the size to be a problem, but given both your answers, it won't be a problem for me. I was more worried about the order of the bytes. I'll explain it as the idiot I am:

In my Java world I'll pack a 4-byte int out of 4 bytes (A, B, C, D) like this:
ABCD
As long as I'm in my Java world, it will always have this order.

But then I put it into one of those LWJGL high-tech buffers and send it out to system/GPU memory. My fear is that some system might now read the int like this:
DCBA
(from my Java perspective)
Thus, my shader data might get the wrong values.

I don't really think this problem will occur, but I just want to double-check. Also, isn't endianness the same on all x86 platforms?

PS. Yes, I just checked: it's little endian on all x86 platforms, as well as most others, so don't mind me.

But still, it would in theory be interesting to know the answer; there might be some mobile platforms where things could get weird.
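
For what it's worth, here is a quick standalone java.nio illustration (not LWJGL-specific) of how to check the native order and see what a buffer's order does to a multi-byte put:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class OrderCheck {
        public static void main(String[] args) {
            System.out.println(ByteOrder.nativeOrder());  // LITTLE_ENDIAN on x86

            // java.nio buffers default to BIG_ENDIAN; order() controls how
            // multi-byte values like putInt() are laid out in memory.
            ByteBuffer buf = ByteBuffer.allocate(4).order(ByteOrder.nativeOrder());
            buf.putInt(0xAABBCCDD);
            for (int i = 0; i < 4; i++) {
                System.out.printf("%02X ", buf.get(i));   // DD CC BB AA on little-endian
            }
        }
    }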

Source_of_Truth

I had similar issues when putting floats and other data into buffers for uniform block objects in GLSL. The endianness of the system, a.k.a. its native order, is respected as long as you don't explicitly set a different order on the buffer. That means the trick is to simply NOT do anything :D and ignore this as much as possible.

The LWJGL implementation in MemoryUtil is:
    public static ByteBuffer memAlloc(int size) {
        return wrap(BUFFER_BYTE, nmemAllocChecked(size), size).order(NATIVE_ORDER);
    }


As such, it is already in a sensible order for OpenGL. Now, you put everything into the byte buffer, but unlike me, do check all the methods that ByteBuffer has ;).
Instead of messing with big/little-endian bit-shift magic, simply use the appropriate method; in my case myByteBuffer.putFloat(myFloat);
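
Put together, a rough sketch of that approach with LWJGL (assuming a current GL context and a bound GL_ARRAY_BUFFER; the class and method names are just illustrative, and the layout mirrors the put(byte, byte, short) from the original question):

    import static org.lwjgl.opengl.GL15.*;

    import java.nio.ByteBuffer;
    import org.lwjgl.system.MemoryUtil;

    public class VertexUpload {
        static void upload(byte a, byte b, short c) {
            // memAlloc() returns a buffer in native order, so typed puts like
            // putShort() already write the layout the driver expects.
            ByteBuffer vertexData = MemoryUtil.memAlloc(4);
            vertexData.put(a).put(b).putShort(c);
            vertexData.flip();

            glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW);
            MemoryUtil.memFree(vertexData);
        }
    }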