While rendering a full 3D model from the point cloud might be difficult, what about adding some depth information to individual photos? That could be used by the nVidia 3D Vision system, to export stereograms from a synth, and for more accurate rotation.
Forget about pictures; it would be great just to see the point clouds alone in 3D...
I would think that with enough photos taken at small angles around the object, a program could be written to offset left/right image pairs and accomplish this.
I've been toying with implementing this idea on the side. I think it would be relatively easy to create automatic "red-green" images or some similar pseudo-3D view.
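To make the idea concrete, here is a minimal sketch of that kind of anaglyph fusion (my own toy illustration, not anyone's actual implementation): grayscale views are plain 2D lists, the left view feeds the red channel, and the right view feeds green and blue, so red-cyan glasses route each view to the matching eye. Two photos taken at a small angle apart are simulated here as the same square shifted by one pixel.

```python
def make_anaglyph(left, right):
    """Fuse two grayscale views (2D lists, values 0-255) into a
    red-cyan anaglyph: red from the left view, green and blue from
    the right, so filter glasses split the views between the eyes."""
    return [
        [(l, r, r) for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

def shifted_square(dx, size=8):
    """Toy frame: a dark square on a light background, shifted dx
    pixels. Stands in for two photos taken at slightly different
    angles around an object."""
    img = [[255] * size for _ in range(size)]
    for y in range(3, 6):
        for x in range(3 + dx, 6 + dx):
            img[y][x] = 0
    return img

left, right = shifted_square(0), shifted_square(1)
anaglyph = make_anaglyph(left, right)

# Pixels dark in only one view become the telltale colored fringes:
print(anaglyph[3][3])  # left-only edge -> (0, 255, 255), cyan
print(anaglyph[3][6])  # right-only edge -> (255, 0, 0), red
print(anaglyph[3][4])  # dark in both views -> (0, 0, 0)
```

A real version would pick nearby photo pairs from the synth instead of synthetic frames, but the channel-merging step stays this simple.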