Hi there, I'm playing with OpenGL and I've made a little experiment with 2.5D graphics. I want to have two images for each sprite: a colormap and a heightmap/z-map (more like a distance-to-camera map, similar to the ones used for depth-of-field effects). The heightmap adds a lot more feeling that the sprite is actually a 3D model, even though it is just an image. I've coded this working example:
import java.awt.Graphics2D;
import java.awt.Rectangle;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import org.lwjgl.opengl.GL11;

public class Image extends Node {

    public Image(String path, String dmap, int guid) {
        super(guid);
        this.path = path;
        this.dmap = dmap;
    }

    private String path, dmap;
    public int width, height;
    public int x = 100, y = 100;
    public double scale = 0.3;
    byte[] dataABGR, dataHeight;

    public void update() {
        if (dataABGR == null) {
            Rectangle r = new Rectangle();
            dataABGR = loadImage(path, r, scale);
            width = r.width;
            height = r.height;
            dataHeight = loadImage(dmap, r, scale);
        }
        GL11.glBegin(GL11.GL_POINTS);
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++) {
                int o = (x + y * width) * 4; // 4 bytes per pixel: A, B, G, R
                byte r = dataABGR[o + 3];
                byte g = dataABGR[o + 2];
                byte b = dataABGR[o + 1];
                byte a = dataABGR[o];
                float z = height(x, y);
                GL11.glColor4ub(r, g, b, a);
                if (a != 0) // #TODO fix this... gl should support alpha
                    GL11.glVertex3f(this.x + x, this.y + y, z / 500.0f);
            }
        GL11.glEnd();
    }

    private float height(int x, int y) {
        // blue channel of the (grayscale) heightmap, read as an unsigned byte
        return dataHeight[(x + y * width) * 4 + 1] & 0xFF;
    }

    public static byte[] loadImage(String path, Rectangle dimensions, double scale) {
        try {
            BufferedImage load = ImageIO.read(new File(path));
            int width = (int) (load.getWidth() * scale);
            int height = (int) (load.getHeight() * scale);
            dimensions.width = width;
            dimensions.height = height;
            BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
            Graphics2D g = img.createGraphics();
            g.setTransform(AffineTransform.getScaleInstance(scale, scale)); // scale via the Graphics2D transform
            g.drawImage(load, null, null);
            g.dispose();
            System.out.println("Loaded texture " + path + " mapped on " + width + "x" + height);
            return ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }
}
I suppose most of you don't want to read the code, so I'll explain a bit. The class represents a sprite with a heightmap; it loads the sprite and the heightmap, and on each update it draws every pixel (no, it doesn't hand the texture or byte array to OpenGL, it sends each pixel individually) at the correct x, y and the z position calculated from the heightmap. I like this method because I can draw or model my sprites, and I think I could compose them into nice complex structures. Maybe a plain 3D approach would be better, but because I like hand-drawn graphics I wanted to try this out.
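To make "each pixel individually" a bit more concrete: the same point cloud could also be built just once at load time into vertex/colour arrays and then replayed with a single draw call per frame. This is only a rough sketch of that idea, assuming LWJGL 2 and an active GL context; the PointCloudSprite class and its field names are made up for illustration and are not part of my code above.

    import java.nio.FloatBuffer;
    import org.lwjgl.BufferUtils;
    import org.lwjgl.opengl.GL11;

    public class PointCloudSprite {
        private FloatBuffer positions; // 3 floats (x, y, z) per point
        private FloatBuffer colors;    // 4 floats (r, g, b, a) per point
        private int pointCount;

        /** Runs the per-pixel loop once, at load time, instead of on every update. */
        public void build(byte[] abgr, byte[] heightMap, int width, int height, int originX, int originY) {
            positions = BufferUtils.createFloatBuffer(width * height * 3);
            colors = BufferUtils.createFloatBuffer(width * height * 4);
            pointCount = 0;
            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++) {
                    int o = (x + y * width) * 4;      // ABGR layout, 4 bytes per pixel
                    int a = abgr[o] & 0xFF;
                    if (a == 0) continue;             // skip fully transparent pixels
                    float z = (heightMap[o + 1] & 0xFF) / 500.0f;
                    positions.put(originX + x).put(originY + y).put(z);
                    colors.put((abgr[o + 3] & 0xFF) / 255f)
                          .put((abgr[o + 2] & 0xFF) / 255f)
                          .put((abgr[o + 1] & 0xFF) / 255f)
                          .put(a / 255f);
                    pointCount++;
                }
            positions.flip();
            colors.flip();
        }

        /** Per-frame draw: one glDrawArrays call instead of width*height glVertex3f calls. */
        public void draw() {
            GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
            GL11.glEnableClientState(GL11.GL_COLOR_ARRAY);
            GL11.glVertexPointer(3, 0, positions);
            GL11.glColorPointer(4, 0, colors);
            GL11.glDrawArrays(GL11.GL_POINTS, 0, pointCount);
            GL11.glDisableClientState(GL11.GL_COLOR_ARRAY);
            GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
        }
    }

That way the Java loop runs once instead of every frame, although it is still one point per pixel on the GPU side.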

OK so... now to the question. This approach works; here is a result, http://filesmelt.com/dl/result1.png, together with the colormap and the heightmap used in the example. The problem is that it is slow. I suppose there is some way to do the same thing with a shader, so the GPU does the work. My experience with 3D and OpenGL is more theoretical than practical, so that's why I'm asking here whether you know of a more performance-efficient way. I suppose I could send the colormap and the heightmap to the GPU and let some vertex shader do the work, but I don't know how.
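To make the question concrete, here is roughly the direction I imagine, as a very rough sketch rather than something I know to be right: upload the colormap and the heightmap as two textures, draw the sprite as a single textured quad, and let a fragment shader take the colour from the colormap, discard transparent pixels, and write the heightmap value into gl_FragDepth (with depth testing enabled). This assumes LWJGL 2 with a GL 2.0 / GLSL 1.20 context; colorTex and heightTex would be texture ids already filled via glTexImage2D from the same byte arrays my loadImage() produces, and the class and method names are just illustrative.

    import org.lwjgl.opengl.GL11;
    import org.lwjgl.opengl.GL13;
    import org.lwjgl.opengl.GL20;

    public class DepthSpriteShader {
        // GLSL 1.20: colour from the colormap, depth from the heightmap's green channel.
        private static final String VERT =
            "varying vec2 uv;\n" +
            "void main() {\n" +
            "  uv = gl_MultiTexCoord0.xy;\n" +
            "  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n" +
            "}";
        private static final String FRAG =
            "uniform sampler2D colormap;\n" +
            "uniform sampler2D heightmap;\n" +
            "varying vec2 uv;\n" +
            "void main() {\n" +
            "  vec4 c = texture2D(colormap, uv);\n" +
            "  if (c.a < 0.01) discard;\n" +
            "  gl_FragColor = c;\n" +
            "  gl_FragDepth = texture2D(heightmap, uv).g;\n" +
            "}";

        private final int program;

        public DepthSpriteShader() {
            program = GL20.glCreateProgram();
            GL20.glAttachShader(program, compile(GL20.GL_VERTEX_SHADER, VERT));
            GL20.glAttachShader(program, compile(GL20.GL_FRAGMENT_SHADER, FRAG));
            GL20.glLinkProgram(program);
        }

        private static int compile(int type, String src) {
            int id = GL20.glCreateShader(type);
            GL20.glShaderSource(id, src);
            GL20.glCompileShader(id);
            if (GL20.glGetShaderi(id, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE)
                throw new RuntimeException(GL20.glGetShaderInfoLog(id, 1024));
            return id;
        }

        /** Draws one sprite as a single textured quad; the GPU does the per-pixel work. */
        public void draw(int colorTex, int heightTex, float x, float y, float w, float h) {
            GL20.glUseProgram(program);
            GL13.glActiveTexture(GL13.GL_TEXTURE0);
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, colorTex);
            GL13.glActiveTexture(GL13.GL_TEXTURE1);
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, heightTex);
            GL20.glUniform1i(GL20.glGetUniformLocation(program, "colormap"), 0);
            GL20.glUniform1i(GL20.glGetUniformLocation(program, "heightmap"), 1);

            GL11.glBegin(GL11.GL_QUADS);
            GL11.glTexCoord2f(0, 0); GL11.glVertex2f(x, y);
            GL11.glTexCoord2f(1, 0); GL11.glVertex2f(x + w, y);
            GL11.glTexCoord2f(1, 1); GL11.glVertex2f(x + w, y + h);
            GL11.glTexCoord2f(0, 1); GL11.glVertex2f(x, y + h);
            GL11.glEnd();

            GL20.glUseProgram(0);
        }
    }

Does something along these lines look like the right direction, or would a vertex shader displacing a grid of vertices be a better fit here?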

EDIT: I'm now discussing this on http://www.java-gaming.org/index.php/topic,25002.msg213977.html