
Shader version and render speed

Shader version and render speed
« on: April 09, 2015, 19:31:59 »
Hello,

I have a problem with my shader. My graphics card (NVIDIA GeForce GT 540M) should support OpenGL 4.4, and I installed the latest drivers, yet OpenGL Extensions Viewer tells me my card supports only OpenGL 3.0 and lower. I had discovered this earlier because I wasn't able to use GLSL versions higher than 130. But I really need the "inverse" function, which GLSL only provides from #version 140 on.
So my question is: why does my graphics card only support 3.0 when NVIDIA says it supports up to 4.4? Is there anything I can do?

My second question is: is it normal that I get 15 fps when I render a big model like the Stanford dragon (a 7 MB Wavefront OBJ file)? I'm using LWJGL 3.0.

Thanks for every answer!


Kai

Re: Shader version and render speed
« Reply #1 on: April 09, 2015, 21:27:26 »
Hi,

about 3.0 vs. 4.4:

If, in addition to your dedicated Nvidia card, your notebook also has an Intel integrated graphics adapter, the version mismatch could be due to Nvidia Optimus.
If you did not disable Optimus in the BIOS and tell the computer to use only the dedicated Nvidia card, then your OS is likely using the Intel adapter.
You can verify that by checking the GL_VENDOR string via glGetString, as in the snippet below.
Sadly, there is no reliable way to force Optimus to use the dedicated Nvidia card via the Nvidia control panel (configuring javaw.exe or java.exe to use the Nvidia adapter does not work, at least for me).
So I just disable the Intel adapter in the BIOS. That is a surefire way of ensuring that the Nvidia adapter is being used.
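
For example, a minimal check in LWJGL 3 (a sketch, assuming a GL context is already current on the calling thread and GL.createCapabilities() has been called):

Code:
import static org.lwjgl.opengl.GL11.*;

// Ask the driver which adapter is actually rendering:
String vendor   = glGetString(GL_VENDOR);   // e.g. "NVIDIA Corporation" vs. "Intel"
String renderer = glGetString(GL_RENDERER); // the adapter actually in use
String version  = glGetString(GL_VERSION);  // the highest GL version it exposes
System.out.println(vendor + " | " + renderer + " | GL " + version);

If this prints "Intel", then Optimus routed your context to the integrated adapter.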

about the 15 fps:

This could be due to Optimus using the Intel adapter.
But it also, of course, heavily depends on the way you are drawing the model. Are you using immediate mode? Client-side vertex arrays? Server-side vertex buffer objects? Are you maybe inadvertently updating the buffer every frame? How complex are your shaders? Are you using shaders at all, or the fixed-function pipeline?
In short: Your question about why you get (just) 15 fps cannot be answered without information on how you render the model.

Re: Shader version and render speed
« Reply #2 on: April 10, 2015, 05:19:06 »
Thank you sooo much!
I totally forgot about the useless Intel GPU. I changed the settings in the NVIDIA control panel; it was set to automatic. Now it reports version 4.3.
And the fps issue is solved.

And just to make it clear: yes, I'm using immediate mode and vertex arrays, and currently there is no server, so yeah :)

But I have another question: since I started using my NVIDIA card, the fps never go above 60. I actually have no setting for this in LWJGL. I'm using GLFW. There is a setting in the NVIDIA control panel (display refresh rate) that offers 60 Hz; I can't even change it because there is only one option in that combo box.
How do I increase my fps now? Although I only get 60 fps, javaw uses 25% of the CPU.
« Last Edit: April 10, 2015, 05:34:17 by Alpha11833 »

Cornix

Re: Shader version and render speed
« Reply #3 on: April 10, 2015, 07:17:15 »
Quote
yes, I'm using immediate mode and vertex arrays, and currently there is no server, so yeah
I think you misunderstood. Server-side vertex arrays mean that your vertex data is stored on the GPU side and you only tell the GPU to draw. The opposite is client-side vertex arrays, which are stored on the CPU side (probably in your RAM) and committed to the GPU every single frame. When you use immediate mode you are probably not using vertex arrays at all.
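
To illustrate the difference, here is a minimal sketch (assuming an LWJGL 3 compatibility context is current; the vertices array is a made-up placeholder for your model data) that uploads the data once into a server-side VBO and then draws from it every frame. With shaders you would use glVertexAttribPointer instead of the legacy glVertexPointer, but the upload-once idea is the same:

Code:
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

// Done ONCE at load time: copy the vertex data into GPU memory.
float[] vertices = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f
}; // placeholder: one triangle; your OBJ loader would fill this instead
FloatBuffer data = BufferUtils.createFloatBuffer(vertices.length);
data.put(vertices).flip();

int vbo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW); // server-side storage

// Done EVERY frame: no vertex data crosses the bus anymore.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0L); // offset into the bound VBO, not a RAM pointer
glDrawArrays(GL_TRIANGLES, 0, vertices.length / 3);
glDisableClientState(GL_VERTEX_ARRAY);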



About your FPS: it sounds to me like you probably have VSync enabled. This setting limits the FPS to the maximum refresh rate your screen can display. If you deactivate VSync you should be able to go higher, but then you may get graphical artifacts known as "tearing", because your screen can't keep up with the GPU.
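
With GLFW (which you are already using) VSync is controlled per context via the swap interval; a minimal sketch, assuming "window" is your GLFW window handle:

Code:
import static org.lwjgl.glfw.GLFW.*;

glfwMakeContextCurrent(window); // the swap interval applies to the current context
glfwSwapInterval(0);            // 0 = VSync off (uncapped fps), 1 = VSync on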