Help with converting objects

Started by jeussa, September 26, 2015, 13:15:49

Previous topic - Next topic

jeussa

Hello LWJGL community,

For a while now I've been working through several online tutorials on how to use LWJGL to render a game environment, but with little progress so far. That's why I'm now posting some questions on the forums. It would be great if some of you could help me out with this!

Okay, so right now my idea is to create (or at least get started with) my own small first-person RPG game. I'm planning to make this game work by hosting a server and allowing clients (along with the game engine) to connect to it. The server will then process the player's requests and send back objects which I format using my own bytable system (don't worry about that system, by the way).

Anyway, the main thing I'm currently struggling with is converting 3D objects into visible objects on the screen (including their textures).
The way I plan to save these objects is by using multiple arrays of coordinates.
So let's say I'd use a float[][]; then [0][?] would specify the list of x-coordinates of the object's vertices, [1][?] the y-coordinates and [2][?] the z-coordinates.
The same idea would be used for texture coordinates: [3][?] for the x-components and [4][?] for the y-components.
The coordinates specify the offset from a main vector.

I can use this to calculate where certain vertices are located and which vertices should be connected.
My main question would be: what is the most efficient way to create objects from an array of vertices, and what is the easiest way to position a camera using a location and a rotation (pitch and yaw)?
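For concreteness, the parallel-array layout described above could be flattened into a single per-vertex array (x, y, z, then the two texture components per vertex), which is the shape most rendering code expects. This is just an illustrative sketch; the class and method names are made up:

```java
// Illustrative sketch: flattening the float[][] layout described above
// (data[0..2] = x/y/z vertex positions, data[3..4] = texture coordinates)
// into one array with five consecutive floats per vertex.
public class VertexFlattener {
    public static float[] interleave(float[][] data) {
        int vertexCount = data[0].length;
        float[] out = new float[vertexCount * 5];
        for (int v = 0; v < vertexCount; v++) {
            out[v * 5]     = data[0][v]; // x
            out[v * 5 + 1] = data[1][v]; // y
            out[v * 5 + 2] = data[2][v]; // z
            out[v * 5 + 3] = data[3][v]; // texture coordinate (1st component)
            out[v * 5 + 4] = data[4][v]; // texture coordinate (2nd component)
        }
        return out;
    }
}
```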

Many thanks for your time!

Kai

Usually, OpenGL buffer objects are used to store models (i.e. the vertex attribute "streams").
You have the option of storing all vertex attributes (position, texture coordinates, normals, specularity, opacity, ...) in one linear buffer in an interleaved way, with a memory layout like XYZST|XYZST|XYZST|..., where XYZ is one vertex position and ST the corresponding texture coordinates.
Alternatively, you can store each vertex attribute in a separate buffer object, so you'd have one buffer with the layout XYZ|XYZ|XYZ|... and another with ST|ST|ST|...
The same index in both buffer objects then corresponds to the same vertex of your model.
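As a minimal sketch of the interleaved option (assuming 2D texture coordinates, so five floats per vertex), the data gets packed into a direct, native-order FloatBuffer, which is the form the LWJGL glBufferData overloads accept. The helper name is made up:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch: pack interleaved vertex data (XYZST|XYZST|...) into a direct
// FloatBuffer. OpenGL uploads require a direct buffer in native byte order.
public class BufferPacking {
    public static FloatBuffer pack(float[] interleavedVertices) {
        FloatBuffer fb = ByteBuffer
                .allocateDirect(interleavedVertices.length * Float.BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        fb.put(interleavedVertices);
        fb.flip(); // prepare the buffer for reading by OpenGL
        return fb;
    }
}
```

You would then hand the result to glBufferData(GL_ARRAY_BUFFER, fb, GL_STATIC_DRAW) and describe the layout with glVertexAttribPointer, using a stride of 5 * Float.BYTES.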
Google for "OpenGL vertex buffer objects" and you'll find very good resources on how to do what you want.
In case it is not clear: LWJGL is "just" a JNI binding for the very low-level native OpenGL/OpenAL/OpenCL APIs.
It is not primarily suited for large game development like what you seem to be aiming for. For that, you might want to use libraries that build on LWJGL and provide a more convenient programming environment, such as libGDX or jMonkeyEngine.

jeussa

Do I put these coordinates (XYZST) in a single Buffer (in my case a FloatBuffer)?
Also, does each vertex require a separate buffer, or can I make one buffer for the entire object?

Kai

Please see this demo in the joml-lwjgl3-demos repository: https://github.com/JOML-CI/joml-lwjgl3-demos/blob/master/src/org/joml/lwjgl/VboDemo.java

This showcases using simple OpenGL vertex buffer objects (VBOs) and also how to set up a "camera" using JOML together with the OpenGL matrix stack.
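For the camera part of the question, here is a plain-Java sketch of the underlying math: to view the world from a camera with a position and pitch/yaw, you apply the inverse of the camera's transform to world-space points (roughly what a JOML-built view matrix does for you). Sign conventions for pitch and yaw vary between engines, and the names here are illustrative:

```java
// Sketch of first-person camera math: transform a world-space point into
// camera space, given the camera's position and its pitch/yaw in radians.
// This applies the inverse camera transform: translate, then undo yaw
// (rotation about Y), then undo pitch (rotation about X).
public class CameraMath {
    public static float[] worldToCamera(float pitch, float yaw,
                                        float[] camPos, float[] p) {
        // 1. Translate so the camera sits at the origin.
        float x = p[0] - camPos[0];
        float y = p[1] - camPos[1];
        float z = p[2] - camPos[2];
        // 2. Undo the camera's yaw (rotate by -yaw about the Y axis).
        float cy = (float) Math.cos(-yaw), sy = (float) Math.sin(-yaw);
        float x1 =  cy * x + sy * z;
        float z1 = -sy * x + cy * z;
        // 3. Undo the camera's pitch (rotate by -pitch about the X axis).
        float cp = (float) Math.cos(-pitch), sp = (float) Math.sin(-pitch);
        float y1 = cp * y - sp * z1;
        float z2 = sp * y + cp * z1;
        return new float[] { x1, y1, z2 };
    }
}
```

With JOML you would typically build the equivalent view matrix once per frame instead of transforming points one by one on the CPU.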

The demos in this repository always build against the very latest Git repository version of LWJGL 3. You can grab a nightly build of LWJGL 3 here: http://www.lwjgl.org/download