Get all your questions answered about our latest Photosynth Technical Preview.
Are there any thoughts on how panoramas captured in portrait can be improved? Take a look at these:
Photosynth knows about the whole scene but only displays the one photo that is in view. It would be great if the pillar-boxing could be removed and a blurred version of the whole scene displayed instead, with the image that's in view brought into focus.
I've been through my photo archive and most of my panos are taken in portrait...
Another one here: http://photosynth.net/preview/view/7581c4ad-0045-4566-a6eb-77582852cc50
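To make the blurred-backdrop suggestion above concrete, here's a minimal sketch in pure Python (grayscale pixels as floats in nested lists). It blurs the full reconstructed scene and then pastes the sharp in-view photo over its region, rather than pillar-boxing with black. The function names, the naive box blur, and the simple paste compositing are all my own assumptions for illustration, not anything from the actual Photosynth viewer.

```python
def box_blur(img, radius=1):
    """Naive box blur; a stand-in for whatever filter a real viewer would use."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += img[yy][xx]
                        count += 1
            out[y][x] = total / count  # average over the in-bounds neighborhood
    return out

def composite(scene, photo, x0, y0):
    """Blurred scene everywhere, with the sharp photo pasted at (x0, y0)."""
    out = box_blur(scene)
    for y, row in enumerate(photo):
        for x, px in enumerate(row):
            out[y0 + y][x0 + x] = px  # in-view photo stays sharp
    return out
```

In a real viewer this would presumably happen on the GPU, with the blur done once per scene texture rather than per frame, but the compositing idea is the same.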
I'm with you, Matt.
Seeing my photos in the context of others and the recovered geometry was part of what I loved about the original Photosynth and something that I'd like to see come back.
In that sense, I wouldn't limit this improvement to portrait synths.
I think that the current implementation works the way it does for a few reasons:
1: Cropping to the input photo aspect ratio allows the user to never see holes outside of the photos (except momentarily where the camera didn't stay level between shots).
2: Every photo currently has a single depth map computed for it, and the viewer only worries about projecting two photos onto two depth maps long enough to crossfade between them. This allows synths to run on less powerful hardware because the entire model is never loaded at one time.
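The two-photo crossfade in point 2 can be sketched as a weight function: given the camera's position along the capture path, pick the two nearest photos and blend between them, so at most two textures and depth-map meshes ever need to be resident. The function name and the linear-opacity choice are my assumptions for illustration, not Photosynth's actual code.

```python
def crossfade_weights(t, n_photos):
    """Camera position t in [0, n_photos - 1]; returns the indices of the
    two photos to project (each onto its own depth map) and their blend
    weights, which sum to 1."""
    t = max(0.0, min(t, n_photos - 1))  # clamp to the capture path
    i = min(int(t), n_photos - 2)       # photo "behind" the camera
    alpha = t - i                       # fraction of the way to photo i + 1
    return (i, 1.0 - alpha), (i + 1, alpha)
```

For example, a quarter of the way between photos 2 and 3 of a five-photo synth, `crossfade_weights(2.25, 5)` gives photo 2 a weight of 0.75 and photo 3 a weight of 0.25; only those two need to be loaded.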
Here's a video where Eric Stollnitz from Microsoft Research explains the steps behind generating what we know today as Photosynth 2 spins: http://youtube.com/watch?v=lGCoYVCzWT4&t=7m35s
Also, the Preview's 'About' page http://photosynth.net/preview/about/ says,
"Third, the technology uses the feature points in each photo to generate 3D shapes. It does so on a per-photo basis rather than trying to generate a global 3D model for the scene."
This is an important distinction from Photosynth 1 (where the point cloud *was* trying to generate a global 3D model for the scene).
For me, the ability to make a detailed point cloud of any part of the scene made this global model really important. However, a really dense point cloud model of the whole scene wouldn't render well in the ordinary person's web browser on their $400 laptop, tablet, or smartphone unless you were using something like Euclideon's closed source GeoVerse View http://euclideon.com/products/sdk/
Panoramas are a bit unique in that you'll only move left or right, and along that path everywhere the camera looks will be filled by photography and geometry. On something like a Walk synth, by contrast, if all shots were portrait you wouldn't have any geometry or photography to display on the sides at the beginning of the walk (though later shots could extend their view of the scene by using the wider view from further back, assuming a straight walk).
I think that the biggest limiting factor for keeping each photo separate and projecting them onto geometry for foreground objects in parallax pano synths is loading all the polygons and photos necessary to fill the screen at the same time while keeping the synth fluid and reactive on all devices.
See the Photosynth team's reply to Joscelin here, though: http://photosynth.net/discussion.aspx?cat=00581351-82d8-438d-a37b-7eadb3fb4991&dis=9283e2c6-6002-462e-82a0-4fbe03612adc
All those example panos would probably be better as "stitches", since there's no parallax to speak of. In the future we hope to have our backend automatically identify this and create a stitched experience when it makes sense, or the author explicitly requests this.
As a user, I do not feel we should be forced to select the type of synth unless we know what we want. Letting the service automatically detect the best type would be a much better approach in my view.
Some users will shoot for a specific type but most will not be skilled enough to know the best type to choose.
Matt, I think that letting the system automatically determine what is best by default (as both you and the Photosynth team are suggesting) is appropriate, but I do appreciate having a manual override to say, "No, I meant it to be this sort of panorama."
There are times when I give ICE a set of photos taken with a rotating camera and it falsely detects one of the planar motions that it supports, so it's good to be able to say, "No, try again with a rotating camera motion."
I think that there's a parallel here.