One way to do this is to build the ray from the camera itself: the ray that comes 'straight out' of the center of the field of view, along the camera's forward direction (the central axis the view frustum is built around). AFAIK this is what is often used.
This is done by converting the camera's facing (however it is stored) into a normalized direction vector (x, y, z). For example, if you store it as Euler angles ('yaw', 'pitch', 'roll'), you take the yaw and pitch and use a little trigonometry to turn those two radian values into a 3D direction; the roll can be ignored, since it is applied last and only spins the view around that same axis, so it does not move the center of the view. Quaternions work differently; even though I use quaternions internally, I convert from Euler angles because they are more human-readable and can reasonably be placed in configuration files without the aid of a calculator.
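As a minimal sketch of that yaw/pitch conversion (the axis convention here assumes yaw 0 / pitch 0 looks down -Z with +Y up, which matches typical OpenGL setups; your engine's convention may differ, and the class/method names are just for illustration):

```java
// Convert yaw/pitch (radians) into a normalized forward vector.
// Assumed convention: yaw 0, pitch 0 looks down -Z; +Y is up.
public class CameraRay {
    static double[] forwardFromYawPitch(double yaw, double pitch) {
        double x = -Math.sin(yaw) * Math.cos(pitch);
        double y = Math.sin(pitch);
        double z = -Math.cos(yaw) * Math.cos(pitch);
        return new double[] { x, y, z }; // already unit length by construction
    }

    public static void main(String[] args) {
        double[] f = forwardFromYawPitch(0.0, 0.0);
        double len = Math.sqrt(f[0] * f[0] + f[1] * f[1] + f[2] * f[2]);
        // Facing straight ahead should give a unit vector pointing down -Z.
        System.out.println(Math.abs(len - 1.0) < 1e-9 && f[2] == -1.0);
    }
}
```

The roll really does drop out here: rolling rotates the view around this same forward vector, so the ray is unchanged.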
Secondly, you need planar geometry to cast the ray against. Say you have a bunch of boxes in a room: you need to test the ray against each triangle or quad to see if the ray intersects it. The math is not trivial, but it boils down to dot products, cross products, and a few comparisons.
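For the per-triangle test, one common choice (not the only one) is the Möller–Trumbore algorithm. A self-contained sketch, using plain double[3] arrays so it needs no math library:

```java
// Möller-Trumbore ray/triangle intersection test.
public class RayTriangle {
    static double[] sub(double[] a, double[] b) {
        return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
    }
    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1] * b[2] - a[2] * b[1],
                              a[2] * b[0] - a[0] * b[2],
                              a[0] * b[1] - a[1] * b[0] };
    }
    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    /** Returns the distance t along the ray to the hit point, or -1 for no hit. */
    static double intersect(double[] orig, double[] dir,
                            double[] v0, double[] v1, double[] v2) {
        double[] e1 = sub(v1, v0), e2 = sub(v2, v0);
        double[] p = cross(dir, e2);
        double det = dot(e1, p);
        if (Math.abs(det) < 1e-9) return -1;   // ray parallel to triangle plane
        double inv = 1.0 / det;
        double[] t = sub(orig, v0);
        double u = dot(t, p) * inv;
        if (u < 0 || u > 1) return -1;         // outside barycentric range
        double[] q = cross(t, e1);
        double v = dot(dir, q) * inv;
        if (v < 0 || u + v > 1) return -1;
        double dist = dot(e2, q) * inv;
        return dist >= 0 ? dist : -1;          // only count hits in front of the origin
    }

    public static void main(String[] args) {
        // Ray straight down -Z from z=5 into a triangle lying in the z=0 plane.
        double d = intersect(new double[] { 0.2, 0.2, 5 }, new double[] { 0, 0, -1 },
                             new double[] { 0, 0, 0 },
                             new double[] { 1, 0, 0 },
                             new double[] { 0, 1, 0 });
        System.out.println(d); // distance to the plane: 5.0
    }
}
```

For quads, you can just run this twice, once per triangle. If you are already pulling in JOML (the math library commonly paired with LWJGL), it ships ready-made intersection helpers so you would not need to hand-roll this.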
Furthermore, you can apply certain optimizations to cut down the number of tests you need to perform per frame, or (in the case of AIs) only recalculate every so often. One obvious optimization is space partitioning: if you first test the ray against a partition's bounding box and it misses, nothing bounded by that partition can be hit, so you can skip all of it.
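The bounding-box rejection above is usually done with the "slab" ray/AABB test; a sketch (assuming axis-aligned boxes given as min/max corners):

```java
// Slab-method ray vs. axis-aligned bounding box test: intersect the ray's
// t-interval with each of the three axis slabs; the box is hit iff the
// intervals still overlap in front of the ray origin.
public class RayAabb {
    static boolean hits(double[] orig, double[] dir, double[] min, double[] max) {
        double tNear = Double.NEGATIVE_INFINITY, tFar = Double.POSITIVE_INFINITY;
        for (int i = 0; i < 3; i++) {
            // Division by a zero component yields +/-infinity, which the
            // min/max bookkeeping below handles correctly in the usual cases.
            double t1 = (min[i] - orig[i]) / dir[i];
            double t2 = (max[i] - orig[i]) / dir[i];
            tNear = Math.max(tNear, Math.min(t1, t2));
            tFar  = Math.min(tFar, Math.max(t1, t2));
        }
        return tFar >= tNear && tFar >= 0;
    }

    public static void main(String[] args) {
        double[] min = { -1, -1, -1 }, max = { 1, 1, 1 };
        // A ray aimed at the box hits; one offset to the side misses.
        System.out.println(hits(new double[] { 0, 0, 5 }, new double[] { 0, 0, -1 }, min, max));
        System.out.println(hits(new double[] { 5, 0, 5 }, new double[] { 0, 0, -1 }, min, max));
    }
}
```

Since this is only a few divides and comparisons, it is far cheaper than testing every triangle inside the partition, which is exactly why the early-out pays off.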
It would be cool if LWJGL included a simple raycast/ray-pick implementation.