I have a small app that uses gl_sharing. This runs fine on my MacBook Pro if I force it to use the NVIDIA GeForce GT 650M, but fails if I use the integrated Intel graphics. I assume that this is because the integrated graphics doesn't support gl_sharing (which is fine).
However, I've not found any way to programmatically determine whether a given device supports gl_sharing. I assumed that I would be able to use CLDeviceCapabilities and look for one of CL_KHR_gl_sharing or CL_APPLE_gl_sharing, but neither of those flags is set on either device :-(
This is what I get back for the Intel:
OpenCL 1.2 - Extensions: cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_gl_depth_images cl_khr_gl_event cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics
And this is what I get for the NVIDIA:
OpenCL 1.2 - Extensions: cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_fp64 cl_khr_gl_depth_images cl_khr_gl_event cl_khr_gl_msaa_sharing cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics
Right now I've hard-coded the app to use the second device returned by getDevices, but that's hardly a good solution. Suggestions for a better one gratefully received.
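For what it's worth, one approach I've considered is bypassing the parsed capability flags and querying the raw extension string myself via clGetDeviceInfo with CL_DEVICE_EXTENSIONS, then doing a whole-token, case-insensitive search — Apple's implementation has historically reported the extension as cl_APPLE_gl_sharing rather than cl_khr_gl_sharing, so an exact case-sensitive match could miss it. A sketch of the matching logic in plain C (device_has_extension is a hypothetical helper name; the extensions string would come from clGetDeviceInfo):

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>
#include <strings.h>  /* strncasecmp (POSIX) */

/* Check a space-separated OpenCL extensions string (as returned by
 * clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, ...)) for a whole-token,
 * case-insensitive match of `name`. Token matching avoids false positives
 * from prefixes, e.g. "cl_khr_gl_event" must not match "cl_khr_gl". */
static bool device_has_extension(const char *extensions, const char *name)
{
    size_t want = strlen(name);
    const char *p = extensions;
    while (*p) {
        while (*p && isspace((unsigned char)*p)) p++;   /* skip separators */
        const char *start = p;
        while (*p && !isspace((unsigned char)*p)) p++;  /* advance to token end */
        size_t len = (size_t)(p - start);
        if (len == want && strncasecmp(start, name, len) == 0)
            return true;
    }
    return false;
}
```

With that, I could check each device for either cl_khr_gl_sharing or cl_APPLE_gl_sharing and pick the first one that matches — assuming the extension actually shows up in the raw string even though CLDeviceCapabilities doesn't set the flag, which is exactly what I haven't been able to confirm.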
Thanks in advance,
paul.butcher->msgCount++