In my application (which is basically intended for GIS data), most objects are already in a 'world', so in theory my model matrix could be the identity. Everything is measured in meters, and I would definitely prefer to keep it that way when playing with the models/data.
Initially I thought I could place a camera over a city (a much bigger box) and that the projection would take care of transforming 'world' coordinates (an arbitrary box) into device coordinates (the (-1,-1,-1) to (1,1,1) box). But now I wonder if the 'world' in OpenGL tutorials only refers to arranging objects relative to each other.
For now, I assume I need to create an extra step 'WW' (real world to device world) before applying the view and transformation matrices: T * V * WW * M * v (where v is a vertex position)
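If it helps to make the question concrete, here is a minimal sketch of what I mean by folding that extra WW step into one combined matrix (plain Java with a hypothetical column-major `float[16]` helper, no JOML calls, placeholder matrices):

```java
public class ComposeDemo {
    // Multiply two 4x4 column-major matrices: r = a * b
    static float[] mul(float[] a, float[] b) {
        float[] r = new float[16];
        for (int c = 0; c < 4; c++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    r[c * 4 + row] += a[k * 4 + row] * b[c * 4 + k];
        return r;
    }

    static float[] identity() {
        return new float[] {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    }

    // Uniform scale: the hypothetical 'WW' step (shrink world units)
    static float[] scale(float s) {
        return new float[] {s,0,0,0, 0,s,0,0, 0,0,s,0, 0,0,0,1};
    }

    public static void main(String[] args) {
        float[] T  = identity();     // projection (placeholder)
        float[] V  = identity();     // view (placeholder)
        float[] WW = scale(0.001f);  // e.g. meters -> kilometers
        float[] M  = identity();     // model

        // One combined matrix, computed on the CPU once per draw call
        float[] mvp = mul(mul(mul(T, V), WW), M);
        System.out.println(mvp[0]);  // 0.001 with these placeholder matrices
    }
}
```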
(in my case, a model matrix would be required only for moving objects)
Are the 'cameras' in JOML (perspective, ortho) bound to a (-1,-1,-1) to (1,1,1) box? In other words, do I have to scale my real world in the model matrix before applying the view and transformation matrices?
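My current understanding (please correct me if wrong) is that the ortho frustum bounds can be given directly in meters, and it is the projection itself that maps them onto the (-1..1) cube. A sketch of the x component of the standard glOrtho formula (plain Java, no JOML dependency, the city size is made up):

```java
public class OrthoDemo {
    // x component of the standard orthographic projection:
    // maps [left, right] (here in meters) onto [-1, 1]
    static float orthoX(float x, float left, float right) {
        return 2f * (x - left) / (right - left) - 1f;
    }

    public static void main(String[] args) {
        // A hypothetical 1000 m wide city tile
        float left = 0f, right = 1000f;
        System.out.println(orthoX(0f, left, right));     // -1.0 (west edge)
        System.out.println(orthoX(500f, left, right));   //  0.0 (centre)
        System.out.println(orthoX(1000f, left, right));  //  1.0 (east edge)
    }
}
```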
I would prefer to embed the WW step in the transformation matrix and work with my 'real' world up to the view step, if that is possible (I am having a hard time moving from 2D to 3D; I still can't visualise all the maths).
Another thing I still couldn't figure out is whether or not the transformation matrix fixes the aspect ratio, and what we need to update when the GLFW window is resized.
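What I have gathered so far (an assumption, not JOML-specific): the aspect ratio is baked into the perspective projection as the m00 term, so on a resize one would rebuild the projection with the new width/height (and call glViewport). A sketch of how aspect enters the horizontal scale, following the standard gluPerspective formula (plain Java, made-up window sizes):

```java
public class AspectDemo {
    // Horizontal scale of the standard perspective matrix:
    // m00 = 1 / (aspect * tan(fovy / 2))
    static float m00(float fovyRadians, float aspect) {
        return 1f / (aspect * (float) Math.tan(fovyRadians / 2.0));
    }

    public static void main(String[] args) {
        float fovy = (float) Math.toRadians(60.0);
        // Doubling the window width should halve the x scale,
        // so circles stay round after a resize
        float square = m00(fovy, 1.0f); //  800x800 window
        float wide   = m00(fovy, 2.0f); // 1600x800 window
        System.out.println(square / wide); // 2.0
    }
}
```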
minor questions / comments:
1. Great library! But I saw that some parts are patented. Does this contaminate the entire library, or does one just need to avoid a few methods in a commercial application?