Several people have explicitly mentioned that it would be great to see Photosynth use the GPS coordinates from all the individual geotagged photos in a synth to auto-align a pointcloud to the map, not just position it at the centre.
Specifically, 'rakerman' and I here: ( http://photosynth.net/discussion.aspx?cat=e57bdda7-ff98-40a1-b2ae-529b0213c2b8&dis=184f2052-3905-4b31-b796-7c0c2e3da34b ) and SoulSolutions here: ( http://photosynth.net/discussion.aspx?cat=01b6f15f-42eb-49cb-a221-ed56615e1c47&dis=cdf9fd5f-b5eb-4b92-b9fa-01e6fda81d3b ).
In the 'Bugs' reporting forum I briefly outlined one of the obstacles to using the current pointcloud (whose reconstruction is only approximate) for mapping imagery (which is positioned far more accurately, projection errors notwithstanding), here: ( http://photosynth.net/discussion.aspx?cat=a6bad539-de7d-4a16-8c4d-b143f7d5f984&dis=eb638cb7-2356-4c77-bc3c-255621d03c4e )
That said, I'd love to see both aspects realised.
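To make the rigid half of the request concrete, here's a rough toy sketch of what I imagine the auto-alignment would boil down to. This is my own illustration, not anything from Photosynth's internals: it assumes you already have the reconstructed camera positions and the GPS tags projected into planar map metres, and it solves for the single best-fit scale, rotation, and translation (the classic Umeyama/Kabsch least-squares approach) to drop the whole pointcloud onto the map. The function names are hypothetical.

```python
import numpy as np

def fit_similarity(recon_pts, map_pts):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping reconstructed camera positions onto their GPS map positions.
    Both inputs are (N, 3) arrays of matched points."""
    mu_r = recon_pts.mean(axis=0)
    mu_m = map_pts.mean(axis=0)
    R0 = recon_pts - mu_r
    M0 = map_pts - mu_m
    # SVD of the cross-covariance gives the optimal rotation.
    U, S, Vt = np.linalg.svd(M0.T @ R0)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    scale = (S * np.diag(D)).sum() / (R0 ** 2).sum()
    t = mu_m - scale * R @ mu_r
    return scale, R, t

def align_pointcloud(points, scale, R, t):
    """Apply the same rigid similarity transform to every pointcloud vertex."""
    return scale * points @ R.T + t
```

The point is that once you have a handful of GPS-tagged cameras, the whole cloud comes along for free with one transform; no per-point work needed when the reconstruction is already accurate.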
How outlandish a request is the elastic pointcloud idea?
I'm thinking of something that, to the end user, would look like the editing mode after ICE finishes stitching a panorama; but since the photos all have concrete positions on the map, there would be far less guesswork involved in stretching the pointcloud back into line.
It seems to me that this stretch-and-squash idea must come before the simpler-sounding auto-alignment from individual photo GPS tags if it is to apply across all synths, where the reconstruction may be strongly accurate (no problem there) or only weakly accurate (here's where you need the aforementioned squash and stretch).
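By squash and stretch I mean something loosely like the toy sketch below (again, purely my own illustration, and a crude stand-in for a proper non-rigid warp such as a thin-plate spline): each GPS-tagged camera contributes a correction vector, and every point in the cloud is displaced by an inverse-distance-weighted blend of the corrections of the cameras near it, so weakly reconstructed regions get pulled locally into line rather than the whole cloud moving as one rigid slab.

```python
import numpy as np

def elastic_warp(points, cam_recon, cam_gps, power=2.0, eps=1e-9):
    """Elastically stretch/squash a pointcloud toward GPS ground truth.
    Each camera's correction (GPS position minus reconstructed position)
    is blended per point, weighted by inverse distance to that camera."""
    corrections = cam_gps - cam_recon                 # (M, 3) per camera
    # Distance from every point to every camera: (N, M)
    d = np.linalg.norm(points[:, None, :] - cam_recon[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)                      # nearer camera, more pull
    w /= w.sum(axis=1, keepdims=True)                 # weights sum to 1 per point
    return points + w @ corrections
```

A nice property of this family of warps is that when the reconstruction is already strongly accurate, the correction vectors are near zero and the warp degenerates into doing almost nothing, so the same machinery covers both cases.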
Tangential to this concept would be correcting synths where the user has taken several panoramas from slightly different positions, each of which yields only a flat pointcloud of the same plane (say, a far-off building face), and the resulting planes are misaligned.
If those point planes remain separate from the rest of the points (rather than the pointcloud being one static object) and stay tied to the positions of the cameras whose photos contributed the features, then re-correcting the camera positions with GPS tags seems like it could stand a chance of aligning the two (or more) planes in the pointcloud.
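The mechanism I'm picturing is simple enough to sketch (once more, a hypothetical toy of my own, not Photosynth's actual data model): tag each point with the index of the camera whose photo contributed it, and when a camera's position is corrected to its GPS tag, translate that camera's points by the same offset, so two duplicate planes of the same building face snap together.

```python
import numpy as np

def recorrect_planes(points, point_cam_idx, cam_recon, cam_gps):
    """Keep each plane of points tied to the camera that saw it:
    when a camera's position is corrected to its GPS tag, its points
    are translated by the same per-camera offset.

    points        : (N, 3) pointcloud vertices
    point_cam_idx : (N,) index of the contributing camera per point
    cam_recon     : (M, 3) reconstructed camera positions
    cam_gps       : (M, 3) GPS-corrected camera positions
    """
    offsets = cam_gps - cam_recon             # (M, 3) correction per camera
    return points + offsets[point_cam_idx]    # broadcast correction per point
```

Obviously real features are seen by more than one camera, so a real implementation would have to blend corrections across the contributing cameras rather than pick a single owner, but the one-owner version shows why GPS-corrected camera positions could drag the duplicate planes into coincidence.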