Forum : Technical Preview

Get all your questions answered about our latest Photosynth Technical Preview.


Topic: Motion Interpolation or true 3D?

plavozont1 (Over 1 year ago)
When I look at the synths I see some awesome motion interpolation: Photosynth generates a very smooth movement between two photos taken from different angles. And that seems to be the only thing Photosynth can do - smoothly move through a sequence of photos - so the question is: what's the use of it? It would be really awesome if it could convert a 2D video, consisting of 2D frames, into a 3D scene! That would be the easiest way to 3Dficate anything - we make a movie of an object - a room or a building - and convert that video into a 3D model of the room or building. On this site I see headers like "Capture your world in 3D" and "Experience the !NEW! Photosynth 3D", but I see the same old Photosynth that synths the photos, and I don't see any 3D. What is this all about?
NateLawrence (Over 1 year ago)
Hi, plavozont1, 

The Photosynth desktop application that was announced in mid-2006 and released back in August of 2008 was a little bit closer to being able to take any video as input (no matter what the camera motion was), and it focused on building the best sparse point cloud of the scene possible.

The new Photosynth doesn't put as much effort into generating a single global model from the photos. If you enter global camera mode in the new Photosynth viewer (press the [C] key), you may want to turn the point cloud on by opening the debug menu (type 42 into the viewer), checking the checkbox next to 'point cloud', and reloading the page. If you upload the same photos to both Photosynth 1 and Photosynth 2, the Photosynth 2 point cloud may not be as big, even if all the photos are matched and used in both versions.
PhotosynthTeam (Over 1 year ago)
You are right, plavozont1, that we are not offering a 3D model. Real models need real surface reconstruction, and trying to do that automatically from photos alone is very, very hard. Other products tackle the problem by giving people a rough model and then having them correct that model in a 3D editor, or by relying on different sensors (e.g. depth sensors).

For now we're sticking with allowing people to build and share experiences that we think feel better than either a photo or a video. If you feel that this isn't "real 3D", then perhaps you're right.
NateLawrence (Over 1 year ago)
Photosynth 2 uses a per-image depth map (you might call it a local model, rather than a global one) and focuses on crossfading between the pixels, depth maps, and 3D positions of the photos, rather than on a global model where the geometry for the whole scene is always loaded (like Photosynth 1's point clouds).

The reason that 'Photosynth 2' is heralded as 'the new 3D Photosynth' is that, whereas in Photosynth 1 each photo was simply projected onto a flat rectangle (or 'quad', as it was called), in Photosynth 2 each image is projected onto multiple depths, so that foreground objects move independently of background objects as you move between two photos' different positions.
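
To make the idea concrete, here's a rough sketch (just my own illustration in Python/NumPy, with assumed names and a simplified camera model, not anything from the actual Photosynth code) of what a per-image depth map lets the viewer do during a transition: lift each pixel of a photo into 3D through its depth, reproject it into an in-between camera, and then crossfade the two warped photos.

    # Rough illustration only (my assumptions, not Photosynth's actual pipeline):
    # lift each pixel of a photo into 3D via its per-image depth map, reproject
    # into an interpolated camera, and crossfade the two warped photos.
    import numpy as np

    def backproject(depth, K, cam_to_world):
        # depth: HxW depth map, K: 3x3 intrinsics, cam_to_world: 4x4 camera pose
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N pixels
        rays = np.linalg.inv(K) @ pix                                      # camera-space rays
        pts_cam = rays * depth.reshape(1, -1)                              # scale by depth
        pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])       # homogeneous
        return (cam_to_world @ pts_h)[:3]                                  # 3 x N world points

    def reproject(pts_world, K, world_to_cam):
        # project world-space points into another (e.g. interpolated) camera
        pts_h = np.vstack([pts_world, np.ones((1, pts_world.shape[1]))])
        uvw = K @ (world_to_cam @ pts_h)[:3]
        return uvw[:2] / uvw[2:3]                                          # 2 x N pixel coords

    # At blend weight t along the transition, both photos are warped like this
    # into the in-between camera and mixed as (1 - t) * warp_A + t * warp_B.

Because each photo carries its own depth, nearby pixels slide more than distant ones during the warp, which is exactly the foreground/background parallax described above.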

Another way of saying this would be that in Photosynth 2, the photos are actually projected onto the point cloud. The truth is that the photos are being projected onto a very simplified, low-polygon representation of the point cloud, but it's still more articulated than in PS1.
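
If it helps to picture that, one crude way to get such a simplified stand-in (again just my own sketch, with a made-up grid spacing and function name, not the real PS2 geometry pipeline) is to sample the photo's dense depth map on a coarse grid, lift those grid points into 3D, and keep the pixel coordinates as texture coordinates so the photo gets draped over the resulting low-polygon surface.

    # Rough illustration only: build a coarse vertex grid from a dense per-image
    # depth map so the photo can be textured onto low-polygon local geometry.
    import numpy as np

    def coarse_depth_mesh(depth, K, cam_to_world, step=32):
        h, w = depth.shape
        us = np.arange(0, w, step)
        vs = np.arange(0, h, step)
        uu, vv = np.meshgrid(us, vs)
        d = depth[vv, uu]                                              # coarsely sampled depths
        pix = np.stack([uu, vv, np.ones_like(uu)], -1).reshape(-1, 3).T
        pts_cam = (np.linalg.inv(K) @ pix) * d.reshape(1, -1)          # lift grid points to 3D
        pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
        verts = (cam_to_world @ pts_h)[:3].T                           # N x 3 world-space vertices
        uvs = np.stack([uu, vv], -1).reshape(-1, 2) / np.array([w, h], float)  # texture coords
        return verts, uvs   # connecting neighbouring grid points gives the low-poly mesh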
plavozont1 (Over 1 year ago)
Thanks, I get it. Maybe some day you guys will add some true 3D thing to the panorama and synth features. I saw some cool "3D" tours of museums and other places, but they are nothing but spherical panoramas. It would be totally awesome to have a tool that allows you to create a real 3D model of such things. I think you are not that far away from it - look at this beautiful point cloud generated by a Kinect: chromeexperiments.com/detail/kinectwebgl How about creating Photosynth for Xbox that will make gamers happy by giving them a 3D model of their flats? In light of the recent Project Morpheus for PlayStation 4, which uses accelerometers, gyroscopes and camera-based positioning to get the angle of the helmet in space, an Xbox helmet could use a mini Kinect mounted on it that would orient itself in space by matching the 3D model of the gamer's room against the picture from the mini Kinect. I imagined how I would look at myself through that helmet and see my virtual body dressed in some superhero suit - that was exciting!))