So, more of a question than a suggestion, but this didn't feel like it belonged in the lounge.
For my fellow users who don't know what I mean by synth linking, please read: ( http://getsatisfaction.com/livelabs/topics/linking_synths )
I've been wondering this for over a year now, so here we go.
We know that:
A) A photo is only uploaded to Photosynth once, no matter how many synths it is used in or by how many users, unless it has been edited since it was last uploaded.
B) Many people make multiple synths of a place, trying to get it right, if their first attempt wasn't as successful as they wanted.
C) We can use geo-alignment to position each of these versions, which contain different combinations of photos used in previous synths (more or fewer of a given set) or new photos (taken to fill in the gaps of a previously unsynthy synth).
D) Since the positions of the cameras that took the photos in any given synth, and of the objects pictured in them, are only approximate ( http://getsatisfaction.com/livelabs/topics/exporting_xyz_coordinates#reply_582257 ), any difference in the photos used will result in a slightly (or potentially radically) different hypothesised position for a given photo, or a different shape for the bit of pointcloud constructed from it.
Now, I assume that geo-alignment is the last big chunk of data necessary from end users before synth linking can occur, so hopefully we are close to seeing at least preliminary results for synth linking.
THE QUESTION, then, is this:
What happens when I have seven synths of the same place all uploaded to Photosynth at once?
Now that we can align our synths to the map, each photo (remember, in Photosynth's database each photo is a globally unique item, even when used multiple times) begins to have real-world co-ordinates on real maps of the Earth.
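As a minimal sketch of that global-uniqueness idea (point A above), imagine the photo store keyed by a content hash, so uploading identical bytes a second time adds a reference but no new copy. Everything here (`store`, `upload`, the choice of SHA-256) is my invention for illustration, not Photosynth's actual internals:

```python
import hashlib

# Toy model: a photo is stored once, keyed by a hash of its bytes,
# no matter how many synths reference it.
store = {}        # content hash -> photo bytes (stored once)
synth_refs = {}   # synth name -> list of content hashes it uses

def upload(synth, photo_bytes):
    key = hashlib.sha256(photo_bytes).hexdigest()
    store.setdefault(key, photo_bytes)            # kept only on first upload
    synth_refs.setdefault(synth, []).append(key)  # every synth gets a reference
    return key

# Two synths upload the same (unedited) photo:
k1 = upload("synth-1", b"same JPEG bytes")
k2 = upload("synth-2", b"same JPEG bytes")
# Both synths point at the same key; the bytes exist in the store once.
```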
So, again, the question is: What happens when multiple synths disagree about the location of a photo or the shape or position of a bit of pointcloud that a given photo and its neighbors generate?
I assume that the idea of synth linking is to be able to see photos and pointclouds from several synths all at once. This means that if some sort of averaging is not done for the positions of photos that occur in multiple synths, you would instead have the same photo rendered multiple times in different places on your screen at the same time. This is so sloppy that I assume that it is ruled out automatically.
This whole idea of averaging the position of repeat photos (and therefore averaging the position of the points that appear in them) seems to play with the synth as though it were made of elastic.
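To make the "elastic" averaging idea concrete, here is a toy sketch: the same photo's estimated 3D position from several synths is merged into one weighted average, weighting each synth's estimate by (say) how many feature matches supported it. The function, the weights and the coordinates are all invented for illustration; nothing here reflects Photosynth's real pipeline:

```python
def merge_positions(estimates):
    """Merge several (position, weight) estimates of the same photo.

    estimates: list of ((x, y, z), weight) pairs, one per synth
    that placed this photo; returns the weighted mean position.
    """
    total = sum(w for _, w in estimates)
    return tuple(
        sum(pos[i] * w for pos, w in estimates) / total
        for i in range(3)
    )

# The same photo as placed by three synths, with match counts as weights:
merged = merge_positions([
    ((10.0, 0.0, 5.0), 120),
    ((10.4, 0.2, 5.2), 80),
    ((9.8, -0.1, 4.9), 200),
])
# merged lands between the three estimates, pulled toward the
# best-supported one.
```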
All of the above assumes, to some degree, that the places where the photos occur, in both synths, are part of a good, solid, undistorted pointcloud. The question, then, is "What if a series of photos makes up an excellent strong reconstruction in one synth, but is part of a weak or incorrect reconstruction in a second synth (even though it may still be placed within the main pointcloud of that second synth)?". How do you get Photosynth to automatically detect the wrong alignment?
How do you use all the 'good' reconstruction to correct the poor reconstruction anywhere that those same image features occur in synths tagged within a given radius on the map?
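One plausible (though entirely assumed on my part) way to "detect a superior reconstruction" would be to compare reprojection error: a strong synth re-projects its 3D points back onto the photos with small pixel residuals, while a twisted one does not. A toy scoring sketch, with invented residual values:

```python
import math

def rms_error(residuals_px):
    """Root-mean-square reprojection error in pixels; lower is better."""
    return math.sqrt(sum(r * r for r in residuals_px) / len(residuals_px))

# Made-up residuals for the same photo set in two competing synths:
synth_a = [0.4, 0.6, 0.5, 0.3]   # tight residuals: strong reconstruction
synth_b = [2.1, 5.4, 0.9, 7.8]   # scattered residuals: weak reconstruction

# Prefer whichever reconstruction explains the shared photos better:
stronger = "A" if rms_error(synth_a) < rms_error(synth_b) else "B"
```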
I had originally thought of using photos over again in separate synths in a case where two or more synths were each excellent, strong reconstructions, but each had so many image features that you could not fit all the photos of a place into one synth (not with a 32-bit synther, anyway). In this sort of scenario you could use several photos (say 50 for good measure, though you wouldn't need so many) from the end of one synth as the beginning of its neighbour. In this way, when attempting to link neighbouring synths, Photosynth would have a very easy time of things.
It now begins to occur to me, though, that once synth linking is released, it may be possible to correct wrong or weak reconstructions in other people's synths by simply visiting the same location and providing several overlapping synths with above average pointclouds. Once image features in their synths are identified in yours, their twisted pointcloud could be made to straighten itself out and their photo positions could be refined, providing that Photosynth has some way to detect a superior reconstruction.
The previous paragraph is a bit of a departure from this topic, since none of the photos in their bad synth would be in use in your new healthy synth, but the same idea could apply to your own synths, where you can reuse photos from your earlier synths, sandwiching them into your new synths, made with an increased understanding of how Photosynth works. If the old photos can be linked to a strong reconstruction and Photosynth knows that they are the same objects, then when linking some of my old crummy synths to my new robust synths, it should be possible to straighten the old ones out.
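The "straightening out" step could, in principle, be a similarity transform fitted over the points the two synths share: solve for the scale and offset (and, in a real system, rotation too, e.g. via Umeyama's method) that carries the weak synth's copy of those points onto the strong synth's copy, then apply it to the whole weak cloud. A toy version that solves only scale and translation, with made-up points:

```python
def fit_scale_translation(weak_pts, strong_pts):
    """Fit s, t so that s * weak + t approximates strong (no rotation)."""
    n = len(weak_pts)
    cw = [sum(p[i] for p in weak_pts) / n for i in range(3)]    # weak centroid
    cs = [sum(p[i] for p in strong_pts) / n for i in range(3)]  # strong centroid
    # Scale from the ratio of total distances to each centroid:
    dw = sum(sum((p[i] - cw[i]) ** 2 for i in range(3)) ** 0.5 for p in weak_pts)
    ds = sum(sum((p[i] - cs[i]) ** 2 for i in range(3)) ** 0.5 for p in strong_pts)
    s = ds / dw
    t = [cs[i] - s * cw[i] for i in range(3)]
    return s, t

def apply_transform(s, t, p):
    return tuple(s * p[i] + t[i] for i in range(3))

# Three shared points, as each synth reconstructed them (invented values;
# the strong synth sees the weak cloud shifted and at twice the scale):
weak = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
strong = [(5.0, 5.0, 0.0), (7.0, 5.0, 0.0), (5.0, 7.0, 0.0)]

s, t = fit_scale_translation(weak, strong)
# apply_transform(s, t, p) now maps any point of the weak cloud into the
# strong cloud's frame.
```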
I'd be happy to hear thoughts from anyone - my fellow users and the Photosynth team. What do you think about all of this? What parts of what I lay out are too naive? Where am I right on track?
I feel like now is a good time to talk about this.
I guess there are many variables, each with its own degree of estimated accuracy:
1) The offset, orientation, FoV, focal point and other attributes of the view.
2) The estimated depth of a matchpoint in each view (giving its relative position).
3) The orientation, scale and position of the point cloud, given by the user.
1 and 2 are linked: where matchpoints exist in images from a greater range of locations and zoom levels within the same point cloud, their estimated accuracy increases.
3 is problematic: it depends on user input and is therefore always wrong! I guess the positions have to be relative to something, some form of reference point, and the best option here would be to have the bird's-eye photography from Bing Maps be the glue that holds everything together, while the rest stays flexible.
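Concretely, pinning a relative point cloud to the map means choosing an anchor (latitude, longitude), a scale and a heading, then mapping local east/north offsets in metres to geographic co-ordinates. A rough sketch using the standard ~111,320 metres-per-degree-of-latitude approximation (the anchor and offsets are invented, and the heading/rotation step is omitted for brevity):

```python
import math

M_PER_DEG_LAT = 111_320.0  # approximate metres per degree of latitude

def local_to_latlon(anchor_lat, anchor_lon, east_m, north_m):
    """Map a local (east, north) offset in metres to (lat, lon) degrees.

    Uses a flat-earth approximation around the anchor; longitude degrees
    shrink by cos(latitude).
    """
    lat = anchor_lat + north_m / M_PER_DEG_LAT
    lon = anchor_lon + east_m / (M_PER_DEG_LAT * math.cos(math.radians(anchor_lat)))
    return lat, lon

# A point 100 m due north of an anchor on the equator at longitude 30:
lat, lon = local_to_latlon(0.0, 30.0, 0.0, 100.0)
```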
Oh, I just thought of something: the matchpoints may be point-cloud specific. That is, they represent an estimate of the depth of a pixel or group of pixels, but they may not exist in the same way a pixel does outside the point cloud. Maybe they can still be used as guides, though, based on the number of images they appear in.
It would be interesting to see how accurate a long, long linked synth would be against a map... map software with seamless zoom to 3D... Right, I'm off to read up :p