I understand, and I reckon none of this data is too sensitive. I was going to paste the most recent full crash log here, but nix that; this is what I got when I tried:
The following error or errors occurred while posting this message:
The message exceeds the maximum allowed length (20000 characters).
I can post a full log, but I guess it will take several additional posts.
Sounds like you might not be managing memory allocations properly (e.g. deallocating buffers while they're still in use).
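As a minimal, hypothetical sketch of what I mean (the uploadMatrix method and its location parameter are made up for illustration), a buffer allocated on the MemoryStack must not outlive its stack frame:

    import java.nio.FloatBuffer;
    import org.lwjgl.system.MemoryStack;
    import static org.lwjgl.opengl.GL20.glUniformMatrix4fv;

    void uploadMatrix(int location) {
        FloatBuffer matrix = null;
        try (MemoryStack stack = MemoryStack.stackPush()) {
            matrix = stack.mallocFloat(16);
            matrix.put(new float[16]).flip(); // ... fill with real data ...
        } // the stack frame is popped here, so 'matrix' now points at reusable memory
        glUniformMatrix4fv(location, false, matrix); // use-after-free: native crash or garbage
    }

The same class of bug applies to any explicitly freed buffer that some later call still reads from.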
There is a very good chance I am making such a mistake. I am still fairly new to OpenGL/GLFW/LWJGL.
What stb APIs do you use?
From STB, I believe I only use:
org.lwjgl.stb.STBTruetype.stbtt_GetPackedQuad;
org.lwjgl.stb.STBTruetype.stbtt_PackBegin;
org.lwjgl.stb.STBTruetype.stbtt_PackEnd;
org.lwjgl.stb.STBTruetype.stbtt_PackFontRange;
org.lwjgl.stb.STBTruetype.stbtt_PackSetOversampling;
org.lwjgl.stb.STBImage.stbi_failure_reason;
org.lwjgl.stb.STBImage.stbi_image_free;
org.lwjgl.stb.STBImage.stbi_info_from_memory;
org.lwjgl.stb.STBImage.stbi_is_hdr_from_memory;
org.lwjgl.stb.STBImage.stbi_load;
org.lwjgl.stb.STBImage.stbi_load_from_memory;
and the struct classes:
org.lwjgl.stb.STBTTAlignedQuad;
org.lwjgl.stb.STBTTPackContext;
org.lwjgl.stb.STBTTPackedchar;
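On the STBImage side, the pattern I follow is basically load, use, then free. Roughly like this (a simplified sketch rather than my actual code; the loadTexture method is hypothetical and 'encoded' stands for a ByteBuffer holding the compressed file bytes):

    import java.nio.ByteBuffer;
    import java.nio.IntBuffer;
    import org.lwjgl.system.MemoryStack;
    import static org.lwjgl.stb.STBImage.*;

    void loadTexture(ByteBuffer encoded) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            IntBuffer w    = stack.mallocInt(1);
            IntBuffer h    = stack.mallocInt(1);
            IntBuffer comp = stack.mallocInt(1);

            ByteBuffer pixels = stbi_load_from_memory(encoded, w, h, comp, 4);
            if (pixels == null) {
                throw new RuntimeException("stbi failure: " + stbi_failure_reason());
            }
            try {
                // upload to GL here, while 'pixels' is still valid
            } finally {
                stbi_image_free(pixels); // exactly one free per successful load
            }
        }
    }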
One of the main things I do with STB is:
private void load_fonts() {
    font_tex = glGenTextures();
    // native allocation: 6 ranges of 128 slots; needs an explicit chardata.free() eventually
    chardata = STBTTPackedchar.malloc(6 * 128);

    try (STBTTPackContext pc = STBTTPackContext.malloc()) {
        ByteBuffer ttf = ioResourceToByteBuffer("demo/arial.ttf", 160 * 1024);
        //ByteBuffer ttf = ioResourceToByteBuffer("demo/impact.ttf", 160 * 1024);
        ByteBuffer bitmap = BufferUtils.createByteBuffer(BITMAP_W * BITMAP_H);

        stbtt_PackBegin(pc, bitmap, BITMAP_W, BITMAP_H, 0, 1, null);
        for (int i = 0; i < 2; i++) {
            // pack ASCII 32..126 three times per font size, at 1x1, 2x2, and 3x1 oversampling
            int p = (i * 3 + 0) * 128 + 32;
            chardata.limit(p + 95);
            chardata.position(p);
            stbtt_PackSetOversampling(pc, 1, 1);
            stbtt_PackFontRange(pc, ttf, 0, scale[i], 32, chardata);

            p = (i * 3 + 1) * 128 + 32;
            chardata.limit(p + 95);
            chardata.position(p);
            stbtt_PackSetOversampling(pc, 2, 2);
            stbtt_PackFontRange(pc, ttf, 0, scale[i], 32, chardata);

            p = (i * 3 + 2) * 128 + 32;
            chardata.limit(p + 95);
            chardata.position(p);
            stbtt_PackSetOversampling(pc, 3, 1);
            stbtt_PackFontRange(pc, ttf, 0, scale[i], 32, chardata);
        }
        chardata.clear();
        stbtt_PackEnd(pc);

        glBindTexture(GL_TEXTURE_2D, font_tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, BITMAP_W, BITMAP_H, 0, GL_ALPHA, GL_UNSIGNED_BYTE, bitmap);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
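For completeness, since chardata comes from STBTTPackedchar.malloc rather than the stack, my understanding is that it needs exactly one explicit free once the font is no longer used. A sketch of the teardown I would expect to need (the dispose_fonts method is hypothetical; the fields match the code above):

    import static org.lwjgl.opengl.GL11.glDeleteTextures;

    private void dispose_fonts() {
        glDeleteTextures(font_tex); // GL object: deleted through GL, not the allocator
        chardata.free();            // explicitly malloc'ed struct buffer
    }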
Could you also post some relevant code?
The problem here is that my project is quite large and I don't yet know enough to work out where the problem is likely to be. Perhaps your help will give me a better sense of where to look. I will post a full error log in subsequent posts; if you want error logs for all 3 crash types, I can post all of those.
How do you handle allocations and why do you run OOM so often?
I'm not totally sure how to answer this. I may not fully understand how to allocate memory properly. Most of what I know I learned in the context of a larger task, such as the example above with its "chardata = STBTTPackedchar.malloc(6 * 128);". I've tried to make sure these allocations happen at the appropriate program states and locations, but I may be making a mistake.
As for the OOMs, I think those are caused by other things. It's a big project with a lot going on; I don't think anything related to LWJGL is getting out of hand.
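If it helps, I can try to verify that with LWJGL's debug allocator. As I understand it, enabling it before anything else initializes makes LWJGL report, at shutdown, any explicit native allocations that were never freed, along with the stack traces that made them. A minimal sketch of how I'd switch it on:

    import org.lwjgl.system.Configuration;

    public static void main(String[] args) {
        // must run before the first LWJGL allocation; at JVM shutdown it
        // reports explicit native allocations (memAlloc, Struct.malloc, ...)
        // that were never freed, with the allocating stack trace
        Configuration.DEBUG_MEMORY_ALLOCATOR.set(true);
        Configuration.DEBUG_STACK.set(true); // also helps catch MemoryStack misuse
        // ... start the game as usual ...
    }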