Seriously... what on earth is going on there. I can reproduce it, too, on Intel.
I mean, I know Intel has lots of bugs, but I would not have expected to see so many of them in a row.

You can work around this by "orphaning" the element/index buffer, i.e. allocating it again with glBufferData() (like you did in your earlier example with the vertex buffers, though there it cost you the attribute bindings).
Before you call glBufferSubData(), just size the buffer again using glBufferData(GL_ELEMENT_ARRAY_BUFFER, ...) with the exact same size you used when you initially created it.
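A minimal sketch of that workaround in LWJGL; the handle `ebo`, the size `INDEX_BUFFER_SIZE` and the `newIndices` ShortBuffer are made-up names for illustration, not from your code:

    // assuming static imports of org.lwjgl.opengl.GL15.*
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    // Re-allocate a store of the exact same size with no data; the old store,
    // which may still be queued for rendering, is detached ("orphaned").
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, INDEX_BUFFER_SIZE, GL_STREAM_DRAW);
    // Now upload the new indices into the fresh store.
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, newIndices);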
Normally, orphaning is a way to avoid flushing the render pipeline so that a new buffer store can be filled and used immediately, as described in that article. But in your case you actually need it to avoid the misbehaviour.
"Normally" with a sane driver it would make sure that the buffer is finished rendering when a call to glBufferSubData() is done on that same buffer. Apparently, not so with Intel. You can see that this is the case and the buffer is likely still in use or primed for rendering (and finally rendered at glSwapBuffers(), so Thread.sleep() would not help) if you simply insert a GL11.glFinish() in between your two draws (without orphaning the buffer using glBufferData).
Intel... Jesus...
So to summarize:
- Updating an ARRAY_BUFFER with glBufferData does not work when using a shader, because it makes the shader lose the attribute binding to that buffer. So here, either use glBufferSubData() or re-specify the attribute binding via glVertexAttribPointer() (see the sketch after this list).
- When updating an ELEMENT_ARRAY_BUFFER, the opposite is the case. Here glBufferSubData() alone does not work, and we must orphan the buffer with glBufferData() or flush and wait for all rendering to finish using glFinish().
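For the ARRAY_BUFFER case, the re-specification would look roughly like this (the handle `vbo`, attribute location 0 and the vertex layout are assumptions, not from your code):

    // assuming static imports of org.lwjgl.opengl.GL15.* and GL20.*
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, newVertexData, GL_STREAM_DRAW); // re-allocates the store
    // Re-establish the attribute binding the shader lost on this driver:
    glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0L);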
It will be interesting to see how mapped buffers (glMapBuffer/glUnmapBuffer) behave. Those are also covered in that article.