Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.
I love panoramas, and being able to create a scene that is completely three-dimensional is an amazing breakthrough. It would be even more incredible if the photos were blended together and projected onto the pointcloud, making one seamless three-dimensional image that moves smoothly from one point to the next. Obviously there would be limitations to this technique, but for those who already shoot panos and are used to fixing the focal length, aperture, shutter speed, and white balance before a shoot, the scene would likely come out quite nicely. Hopefully this technique will be possible in the near future.
>>There are certainly big improvements to be made to the current pointclouds, which are known as 'sparse reconstructions'. These are generated during the step that discovers the positions of the camera(s) that took the photos in a given collection.
>>The next step, once the positions of the cameras are known, is to use stereo vision algorithms to generate a dense reconstruction, which is still just a pointcloud but a fair improvement. Here is an example.
Original synth (sparse reconstruction) here:
The following two videos provide a look at what that same data looks like after dense reconstruction:
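To make the two steps above a little more concrete: both the sparse and the dense reconstructions ultimately rest on triangulation, where a point matched in two photos is back-projected to a single 3D position once the camera poses are known. Here is a minimal numpy sketch of linear (DLT) triangulation; the camera intrinsics and poses below are made-up values for illustration, not anything from Photosynth's actual pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates.
    Builds the homogeneous system A X = 0 and takes the SVD null vector."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]        # dehomogenize

# Synthetic check: project a known 3D point through two hypothetical
# cameras, then recover it from just the two image observations.
K = np.array([[800.0, 0, 320],          # assumed intrinsics (focal 800 px)
              [0, 800.0, 240],
              [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted 1 unit
X_true = np.array([0.5, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
print(X_est)   # recovers approximately [0.5, -0.2, 4.0]
```

A sparse reconstruction runs this over thousands of feature matches while simultaneously solving for the camera poses themselves; the dense step then does it for nearly every pixel using stereo matching.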
>>The University of Washington (where the Photo Tourism project was born) is doing research on what comes next, which is closer to what you envision.
Yes, the University of Washington is definitely taking it in the direction I'm looking for. I find that navigating a scene in Photosynth is currently a little jagged, and having the entire thing simply rendered in three dimensions and viewed with a simple flying camera would be great. Thanks so much, Nethanael.
I should also point out that it was a collaboration between UW and Microsoft Research, and that nearly all of the affiliated research continues to be worked on with Rick Szeliski of Microsoft Research.
An example of an idea introduced to Photosynth after the initial Photo Tourism work is "Finding Paths through the World's Photos" (finding a path of photos to follow through doorways, rather than floating through walls, as you transition from an indoor photograph to an outdoor one or vice versa), which you can now see when clicking on a highlight within a synth.
Following the collaborations between UW and Microsoft Research in the structure-from-motion space is a great way to keep up with what Photosynth will likely look like a year or two down the road.
Yasutaka Furukawa (now at Google) has recently posted some videos of his 'oriented patches' reconstructions under the title "Towards Internet-scale Multi-view Stereo".
See them here: http://www.cs.washington.edu/homes/furukawa/
The above research still seems to have been done in conjunction with Steve Seitz and Brian from UW and Rick Szeliski from Microsoft Research, regardless of Mr. Furukawa's current affiliation. Even so, I'm sure it means very interesting things from Google beyond their current synthing of Panoramio, Picasa, and Flickr photos into Street View. I know that they've also been using some level of computer vision to automate 3D cities for Google Earth, following in Virtual Earth's footsteps, in addition to relying on the wonderful user-generated models from SketchUp and the Google 3D Warehouse.