Create a hole in an existing wall

Started by Vapalus, January 06, 2015, 17:33:47


Vapalus

Hello, I've got a "problem" which is already halfway solved by using framebuffer textures.
Just to explain where I am going with this: currently I am trying to create an engine which can render many objects as fast as possible - that's already done, drawProgram be blessed.

Now, I want to create a kind of portal effect, so that you can create a hole in a wall which may lead to a different dimension. That means I have to render that "dimension" first, and render the dimension I am in afterwards. Currently my concept contains several virtual cameras, each of which draws onto a framebuffered texture.

Now the problem I am facing is that this seems to be a rather resource-intensive way to do it:

import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL13;
import org.lwjgl.opengl.GL30;
import org.lwjgl.opengl.GLContext;
import org.lwjgl.opengl.EXTFramebufferObject;

import glDrawProgram.OpenGLCheck;
import glDrawProgram.Texture;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;


public class FrameBufferedTexture extends Texture {
	private static boolean fboEnabled = false;
	private int iFBOId;
	
	private int iWidth;
	private int iHeight;

	public static void initFrameBufferedTextures(){
		fboEnabled = GLContext.getCapabilities().GL_EXT_framebuffer_object;
	}
	
	public int getFrameBufferObjectID(){
		int iOutput = 0;
		IntBuffer buffer = ByteBuffer.allocateDirect(1*4).order(ByteOrder.nativeOrder()).asIntBuffer(); // allocate a 1 int byte buffer
		EXTFramebufferObject.glGenFramebuffersEXT( buffer ); // generate 
		iOutput = buffer.get();
		if (iOutput < 1){
			System.out.print("Error creating buffer");
			fboEnabled = false;
		}
		return iOutput;
	}

	public FrameBufferedTexture(int width, int height){
		super();
		iWidth = width;
		iHeight = height;
		iFBOId = getFrameBufferObjectID();
	}
	
	public void recreateTexture(){
		int texId = this.id;
		//ToDo: check if Layer is valid!!
		GL13.glActiveTexture(GL13.GL_TEXTURE0 + 0);
		GL11.glBindTexture(GL11.GL_TEXTURE_2D, texId);
		
		// All RGB bytes are aligned to each other and each component is 1 byte
		GL11.glPixelStorei(GL11.GL_UNPACK_ALIGNMENT, 1);
		
		// Upload the texture data and generate mip maps (for scaling)
		GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, iWidth, iHeight, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, BufferUtils.createByteBuffer(iWidth * iHeight * 3));
		GL30.glGenerateMipmap(GL11.GL_TEXTURE_2D);
		
		// Setup the ST coordinate system
		GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_REPEAT);
		GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_REPEAT);
		
		// Setup what to do when the texture has to be scaled
		GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, scaling);
		GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, min);
		
		OpenGLCheck.exitOnGLError("CreateTexture");
		
		this.id = texId;
	}
	
	public int preRender(){
		if (fboEnabled){
			//Create a texture here??
			recreateTexture();
			EXTFramebufferObject.glBindFramebufferEXT( EXTFramebufferObject.GL_FRAMEBUFFER_EXT, iFBOId );
			EXTFramebufferObject.glFramebufferTexture2DEXT( 
					EXTFramebufferObject.GL_FRAMEBUFFER_EXT,
					EXTFramebufferObject.GL_COLOR_ATTACHMENT0_EXT,
					GL11.GL_TEXTURE_2D, 
					this.id, 
					0);
	
			GL11.glPushAttrib(GL11.GL_VIEWPORT_BIT);
			GL11.glViewport( 0, 0, iWidth, iHeight );
			GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);	// reset OpenGL: clear the colour and depth buffers
			return iFBOId;
		}
		return -1;
	}
	
	public void postRender(){
		if (fboEnabled){
			EXTFramebufferObject.glBindFramebufferEXT( EXTFramebufferObject.GL_FRAMEBUFFER_EXT, 0);
			GL11.glPopAttrib();
			GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);	// reset OpenGL: clear the colour and depth buffers
		}
	}
}


Is there a faster way to do it?

Just to explain it:
It works by calling preRender first, then the camera gets set and renders all necessary objects, then the render target gets reset via postRender.
What I cannot understand is why re-creating the texture seems to be necessary every time. I also have the weird impression that I'm slowly filling up the graphics RAM?
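
For reference, the intended per-frame flow described above could look roughly like this (a sketch only; renderScene(), drawPortalQuad(), otherDimensionCamera and mainCamera are hypothetical placeholders for the engine's own code, not part of the class above):

	// Hypothetical per-frame usage of FrameBufferedTexture.
	FrameBufferedTexture portalTarget = new FrameBufferedTexture(1024, 1024);

	void renderFrame() {
		// 1. Redirect rendering into the FBO-backed texture.
		if (portalTarget.preRender() != -1) {
			renderScene(otherDimensionCamera);	// draw the "other dimension"
			portalTarget.postRender();		// back to the default framebuffer
		}
		// 2. Draw the current world; the hole in the wall is textured
		//    with portalTarget so it shows the other dimension.
		renderScene(mainCamera);
		drawPortalQuad(portalTarget);
	}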

Kai

Hi,

okay, so the first way of doing that that came to me (there may be other ways) would be:

1. Position and orient the camera view/projection in such a way that the "event horizon" (i.e. the wall/portal through which you see the "other world") would be at the exact same position and orientation in both "worlds"
2. Render the scene of the "other world" into some texture using a framebuffer object.
3. Render the "current world" into another texture, also using a framebuffer object, but at all pixels where you want to see through your portal into the "other world", also render a "masking" value into yet another texture (so you would have two textures bound in your framebuffer object).
4. In the final pass, use a fullscreen quad and a shader that samples the three textures (i.e. the one from the "other world", the one from "this world" and the "mask") and writes the "other world" pixel wherever the mask says so, otherwise the "this world" pixel (see the sketch below).

That's just a possible way, though it may not be the most bandwidth-friendly.
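
Step 4 could be sketched like this (GLSL kept as a Java string, LWJGL-style; the sampler/uniform names and the use of the mask's red channel are my assumptions, not code from this thread):

	// Sketch of the final composite pass (step 4 above).
	String compositeFragmentShader =
		  "#version 130\n"
		+ "uniform sampler2D thisWorld;   // colour of the current world\n"
		+ "uniform sampler2D otherWorld;  // colour of the world behind the portal\n"
		+ "uniform sampler2D portalMask;  // 1.0 where the portal is visible\n"
		+ "in vec2 texCoord;\n"
		+ "out vec4 fragColor;\n"
		+ "void main() {\n"
		+ "    float mask = texture(portalMask, texCoord).r;\n"
		+ "    vec4 here  = texture(thisWorld,  texCoord);\n"
		+ "    vec4 there = texture(otherWorld, texCoord);\n"
		+ "    fragColor = mix(here, there, mask);  // mask == 1 shows the other world\n"
		+ "}\n";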

Now, just one thing about your code:

You do not need to recreate your texture for each pass. Just create it once with the appropriate dimensions (width, height) and with (ByteBuffer) null as the "data" argument, since you only want OpenGL to allocate space for the texture and not upload any content from host memory (your Java program) to it. That is because the framebuffer object will write all the texels into it when rendering.
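
In LWJGL 2 that one-time allocation could look roughly like this (a minimal sketch; texId, width and height stand for the texture id and dimensions you already have):

	// Allocate storage for the render target once, without uploading any data.
	// The (ByteBuffer) null data argument makes OpenGL reserve memory only;
	// the FBO fills in the texels when the scene is rendered into it.
	GL11.glBindTexture(GL11.GL_TEXTURE_2D, texId);
	GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, width, height, 0,
			GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
	GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
	GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);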

Regards,
Kai

Vapalus

That's actually even more complicated than my own solution  ;D
I just render twice:
First I render the other dimension, then I create a texture of that, and I use that texture in my "real" world as a portal.
With a customized shader I remove the "stretching" of the texture so that it looks as if I could look right through it.
So I just have to render once for every dimension I may enter, with the possibility of creating a portal into a different dimension nearby.
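
The thread does not show that "de-stretching" shader, but one common way to get the look-right-through effect is to sample the portal texture with screen-space coordinates instead of the quad's own UVs; a minimal sketch under that assumption (uniform names are made up):

	// Hypothetical portal fragment shader: sampling with gl_FragCoord makes
	// the portal quad act like a window instead of showing a stretched image.
	String portalFragmentShader =
		  "#version 130\n"
		+ "uniform sampler2D portalTexture; // the other dimension, rendered to the FBO\n"
		+ "uniform vec2 viewportSize;\n"
		+ "out vec4 fragColor;\n"
		+ "void main() {\n"
		+ "    vec2 screenUV = gl_FragCoord.xy / viewportSize;\n"
		+ "    fragColor = texture(portalTexture, screenUV);\n"
		+ "}\n";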

Thank you for your idea anyway. I will look into that "recreation" problem. It gives me a headache - I found out that on some computers just redrawing is enough, on others it isn't. I blame NVidia.


Kai

See, there are many roads to Rome. :)
Either buying the parts and building an airplane yourself and then flying to Rome, like I would,
or just buying yourself a ticket and hopping onto the next flight there, like you would. :)

Quote: "I blame NVidia."
With such basic, well-used things as an FBO and a vertex/fragment shader, I find it hard to believe that there are still bugs in this area. But in others, I highly agree.
Usually both are to blame: NVidia for being too lenient, allowing you to do stuff that by the GL specs shouldn't work, but still does.
And on the other hand AMD, for really having some bugs and failing you at every single mistake you make. :)

Vapalus

I guess you are using the stencil buffer in your idea, or would you do it via shaders?
There is only one way to determine what is faster, and that is trying it.
But I think letting OpenGL do its own thing should be faster.

btw, my new code is now:

package camera;

import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL13;
import org.lwjgl.opengl.GLContext;
import org.lwjgl.opengl.EXTFramebufferObject;

import glDrawProgram.OpenGLCheck;
import glDrawProgram.Texture;

import java.nio.ByteBuffer;
import java.nio.IntBuffer;
import java.nio.ByteOrder;


public class FrameBufferedTextureFast extends Texture {
	private static boolean fboEnabled = false;
	private int iFBOId;
	
	private int iWidth;
	private int iHeight;

	public static void initFrameBufferedTextures(){
		fboEnabled = GLContext.getCapabilities().GL_EXT_framebuffer_object;
	}
	
	public int getFrameBufferObjectID(){
		int iOutput = 0;
		IntBuffer buffer = ByteBuffer.allocateDirect(1*4).order(ByteOrder.nativeOrder()).asIntBuffer(); // allocate a 1 int byte buffer
		EXTFramebufferObject.glGenFramebuffersEXT( buffer ); // generate 
		iOutput = buffer.get();
		if (iOutput < 1){
			System.out.print("Error creating buffer");
			fboEnabled = false;
		}
		return iOutput;
	}

	public FrameBufferedTextureFast(int width, int height){
		super(true);
		iWidth = width;
		iHeight = height;
		iFBOId = getFrameBufferObjectID();
		createTexture();
	}
	
	public void createTexture(){
		int texId = this.id;
		//ToDo: check if Layer is valid!!
		GL13.glActiveTexture(GL13.GL_TEXTURE0 + 0);
		GL11.glBindTexture(GL11.GL_TEXTURE_2D, texId);
		
		// All RGB bytes are aligned to each other and each component is 1 byte
		GL11.glPixelStorei(GL11.GL_UNPACK_ALIGNMENT, 1);
		
		// Allocate the texture storage with placeholder data (mipmap generation is disabled below)
		GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, iWidth, iHeight, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, BufferUtils.createByteBuffer(iWidth * iHeight * 3));
		//GL30.glGenerateMipmap(GL11.GL_TEXTURE_2D);
		
		// Setup the ST coordinate system
		GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_REPEAT);
		GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_REPEAT);
		
		// Setup what to do when the texture has to be scaled
		GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, scaling);
		GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, min);
		
		OpenGLCheck.exitOnGLError("CreateTexture");
		
		this.id = texId;
	}
	
	public int preRender(){
		if (fboEnabled){
			// The texture was already created in the constructor; just bind it here
			GL13.glActiveTexture(GL13.GL_TEXTURE0 + 0);
			GL11.glBindTexture(GL11.GL_TEXTURE_2D, this.id);
			
			EXTFramebufferObject.glBindFramebufferEXT( EXTFramebufferObject.GL_FRAMEBUFFER_EXT, iFBOId );
			EXTFramebufferObject.glFramebufferTexture2DEXT( 
					EXTFramebufferObject.GL_FRAMEBUFFER_EXT,
					EXTFramebufferObject.GL_COLOR_ATTACHMENT0_EXT,
					GL11.GL_TEXTURE_2D, 
					this.id, 
					0);
	
			GL11.glPushAttrib(GL11.GL_VIEWPORT_BIT);
			GL11.glViewport( 0, 0, iWidth, iHeight );
			GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);	// reset OpenGL: clear the colour and depth buffers
			return iFBOId;
		}
		return -1;
	}
	
	public void postRender(){
		if (fboEnabled){
			EXTFramebufferObject.glBindFramebufferEXT( EXTFramebufferObject.GL_FRAMEBUFFER_EXT, 0);
			GL11.glPopAttrib();
			GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);	// reset OpenGL: clear the colour and depth buffers
		}
	}
}


It seems to work on Intel, NVidia and ATI.

Kai

Either of them would do, I guess.

My idea just needs a memory buffer (most likely the image of a texture) with at least 1-bit-per-pixel precision (so GL_R8 would do) in which to store that "mask".
To achieve that, I would indeed use a shader that writes that bit. I prefer shaders, as I find stencilling a bit cumbersome to set up and limited in what you can do with it compared to a shader.
With stencilling you would need to render your masked area/object (the event horizon of that portal) separately from all other scene objects, because your stencil function cannot rely on depth there -> it needs a constant value.
But that's solely a personal opinion.
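
For comparison, the stencil route being discussed would look roughly like this in LWJGL 2 (a sketch; it assumes the context was created with a stencil buffer, and drawPortalQuad()/renderOtherWorld() are placeholders for the application's own draw calls):

	GL11.glEnable(GL11.GL_STENCIL_TEST);

	// Pass 1: mark the portal's pixels with stencil value 1 (no colour writes).
	GL11.glColorMask(false, false, false, false);
	GL11.glStencilFunc(GL11.GL_ALWAYS, 1, 0xFF);
	GL11.glStencilOp(GL11.GL_KEEP, GL11.GL_KEEP, GL11.GL_REPLACE);
	drawPortalQuad();

	// Pass 2: render the other world only where the stencil was marked.
	GL11.glColorMask(true, true, true, true);
	GL11.glStencilFunc(GL11.GL_EQUAL, 1, 0xFF);
	GL11.glStencilOp(GL11.GL_KEEP, GL11.GL_KEEP, GL11.GL_KEEP);
	renderOtherWorld();

	GL11.glDisable(GL11.GL_STENCIL_TEST);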

As for performance, your 2-scene-pass approach would surely be more performant than my 2 scene passes + 1 fullscreen pass idea.
And both are equally "OpenGL's way" of doing it. :)