In MATLAB I have calculated the fundamental matrix (of two images) using the normalized eight-point algorithm. From that I need to triangulate the corresponding image points in 3D space. From what I understand, to do this I would need the rotation and translation of the images' cameras. The easiest way, of course, would be to calibrate the cameras first and then take the images, but this is too restrictive for my application as it would require that extra step.
So that leaves me with auto (self) camera calibration. I see mention of bundle adjustment; however, in An Invitation to 3D Vision it seems to require an initial translation and rotation, which makes me think that a calibrated camera is needed or that my understanding is falling short.
So my question is: how can I automatically extract the rotation/translation so I can reproject/triangulate the image points into 3D space? Any MATLAB code or pseudocode would be fantastic.
You can use the fundamental matrix to recover the camera matrices and triangulate the 3D points from their images. However, you must be aware that the reconstruction you will obtain will be a projective reconstruction, not a Euclidean one. This is useful if your goal is to measure projective invariants in the original scene, such as the cross-ratio or line intersections, but it won't be enough to measure angles and distances (you will have to calibrate the cameras for that).
If you have access to Hartley and Zisserman's textbook, you can check section 9.5.3 where you will find what you need to go from the fundamental matrix to a pair of camera matrices that will allow you to compute a projective reconstruction (I believe the same content appears in section 6.4 of Yi Ma's book). Since the source code for the book's algorithms is available online, you may want to check the functions vgg_P_from_F, vgg_X_from_xP_lin, and vgg_X_from_xP_nonlin.
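If you'd rather not depend on the VGG routines, here is a minimal sketch of the same idea in plain MATLAB: build the canonical camera pair P1 = [I | 0], P2 = [[e']_x F | e'] from the fundamental matrix (Hartley & Zisserman, Result 9.14) and triangulate with the linear (DLT) method. It assumes F satisfies x2' * F * x1 = 0 and that x1, x2 are 3xN matrices of corresponding homogeneous image points; treat it as a starting point rather than a drop-in replacement for the vgg functions.

    % Epipole in the second image: e' satisfies F' * e' = 0.
    [~, ~, V] = svd(F');
    e2 = V(:, end);

    % Skew-symmetric (cross-product) matrix of a 3-vector.
    skew = @(a) [0, -a(3), a(2); a(3), 0, -a(1); -a(2), a(1), 0];

    % Canonical projective camera pair compatible with F.
    P1 = [eye(3) zeros(3,1)];
    P2 = [skew(e2) * F, e2];

    % Linear (DLT) triangulation of each correspondence.
    N = size(x1, 2);
    X = zeros(4, N);
    for i = 1:N
        A = [x1(1,i) * P1(3,:) - x1(3,i) * P1(1,:);
             x1(2,i) * P1(3,:) - x1(3,i) * P1(2,:);
             x2(1,i) * P2(3,:) - x2(3,i) * P2(1,:);
             x2(2,i) * P2(3,:) - x2(3,i) * P2(2,:)];
        [~, ~, VA] = svd(A);
        X(:, i) = VA(:, end);                 % homogeneous 3D point
    end
    X = bsxfun(@rdivide, X, X(4, :));         % dehomogenize

Keep in mind the recovered points are only defined up to a 3D homography; upgrading them to a metric reconstruction needs calibration information (known intrinsics or self-calibration).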
Peter Kovesi's MATLAB code should be very helpful to you, I think:
http://www.csse.uwa.edu.au/~pk/research/matlabfns/
Peter has posted a number of fundamental matrix solutions. The original algorithms are described in the Hartley and Zisserman book:
http://www.amazon.com/exec/obidos/tg/detail/-/0521540518/qid=1126195435/sr=8-1/ref=pd_bbs_1/103-8055115-0657421?v=glance&s=books&n=507846
Also, while you are at it, don't forget to check out the Fundamental Matrix Song:
http://danielwedge.com/fmatrix/
One fine composition, in my honest opinion!
If your 3D space can be chosen arbitrarily, you can set your first camera matrix to
P = [I | 0]
No translation, no rotation. That leaves you with a coordinate system defined by camera 1. Then it should not be too hard to recover the second camera matrix from the fundamental matrix, as sketched below.
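As a rough sketch (again assuming F satisfies x2' * F * x1 = 0), a second camera compatible with this choice can be read off F and its epipole; the vector v and scalar lambda below are free parameters that express the projective ambiguity of the reconstruction:

    % Epipole in the second image (F' * e2 = 0).
    [~, ~, V] = svd(F');
    e2 = V(:, end);

    % Cross-product matrix of a 3-vector.
    crossmat = @(a) [0, -a(3), a(2); a(3), 0, -a(1); -a(2), a(1), 0];

    P1 = [eye(3) zeros(3,1)];             % first camera fixed as [I | 0]
    P2 = [crossmat(e2) * F, e2];          % one compatible second camera

    % More generally, P2 = [crossmat(e2)*F + e2*v', lambda*e2] is compatible
    % with F for any 3-vector v and nonzero scalar lambda; that freedom is
    % exactly the projective ambiguity of the reconstruction.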