Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.
I would like to know what the main differences are between the Photosynth software and Microsoft ICE (http://research.microsoft.com/en-us/um/redmond/groups/ivm/ice.html).
Is there any project to make them become same thing?
ICE is a traditional panorama stitcher. It tries to fuse photos seamlessly so that they appear to be a single large photo. It does not cope well if the photos were taken from different positions, or if the camera changes its focus distance or zoom between shots. Although it can match across different exposures, that is not desirable because the transition will most likely not be smooth. Even though ICE can stitch a full sphere of photography around you, it is what I would call two-dimensional stitching.
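To make "two-dimensional stitching" concrete, here is a toy sketch (my own illustration, not ICE's actual code): a 2D stitcher aligns overlapping photos by mapping pixels through a 3x3 homography, which is only exact when the camera purely rotates about one point — which is why photos taken from different positions break it.

```python
# Toy illustration of 2D stitching: pixels of one photo are mapped into a
# neighboring photo's frame with a 3x3 homography H. This is exact only for
# a rotating camera; any camera translation introduces parallax that no
# single H can model.

def apply_homography(H, x, y):
    """Map pixel (x, y) through homography H (3x3 nested list)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# A simple homography: shift the second photo 100 px to the right so its
# overlap lines up with the first photo.
H = [[1, 0, 100],
     [0, 1, 0],
     [0, 0, 1]]

print(apply_homography(H, 50, 80))  # -> (150.0, 80.0)
```

Real stitchers estimate H from matched feature points in the overlap region; the point is that the whole alignment lives in the 2D image plane, with no notion of depth.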
Photosynth uses the first step of 3D reconstruction to determine the positions of many different cameras at many different locations all around a subject. It can match across different zooms, different exposures, different focal lengths, and, most importantly, different positions. Photosynth is the beginning of 3D stitching. If you focus a large number of photos on a single subject from different positions, you will see the point cloud become dense enough to resemble a solid object, though Photosynth point clouds are all still what the computer vision field calls a sparse reconstruction.
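The core geometric idea behind that first reconstruction step can be sketched in a few lines (again my own toy example, not Photosynth's pipeline): once two camera positions are known, a scene point is recovered by triangulation — finding where the two viewing rays meet. Doing this for every matched feature is what produces the sparse point cloud.

```python
# Toy triangulation: recover a 3D point as the closest point between two
# viewing rays, p1 + t*d1 and p2 + s*d2 (midpoint method). Sparse
# reconstruction repeats this for every feature matched across photos.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest approach between two rays."""
    r = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]
    q2 = [p + s * u for p, u in zip(p2, d2)]
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two cameras at different positions both observe the scene point (0, 0, 5):
cam1, ray1 = [0, 0, 0], [0, 0, 1]
cam2, ray2 = [2, 0, 0], [-2, 0, 5]
print(triangulate(cam1, ray1, cam2, ray2))  # -> [0.0, 0.0, 5.0]
```

With noisy real photos the rays never meet exactly, which is why the midpoint (or a least-squares variant) is used rather than an exact intersection.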
The Photosynth team and others (university research labs, Autodesk, Google, etc.) have been experimenting with dense reconstruction, but it is still somewhat user-unfriendly, and it will probably be a while before this functionality is accurate enough to be added to ICE.
In short, ICE provides a smooth, continuous panorama, but the closest it will ever come to a 3D model is a bubble of photography with you trapped in the middle. If you only stand in one place and turn around, shooting outward in a circle, a panorama from ICE, Photoshop, or another panorama stitcher is appropriate, but it will never give you a 3D model, allow you to move around, or let you look at things from multiple perspectives.
Photosynth is focused on presenting each individual photo just as it was taken, only arranged in 3D space and nested in the 3D point cloud of the scene. If you take care to scan different objects in the scene by taking photos around them and give the computer a tour of how to get from one thing to the next, you are free to move anywhere you like with confidence that Photosynth will follow you. Just remember to take photos close to each other. With the first steps into 3D stitching, you can rotate the point cloud to any angle.
As to whether ICE and Photosynth will merge into a single program in the future, that makes sense to me as well, but I don't know when it might happen. The two programs come from different divisions: Microsoft Research makes ICE, while the Photosynth team sits inside Bing Maps.
As far as browsing panoramas and photosynths together, check out what David Gedye (Photosynth team leader) had to say over on this discussion: http://bit.ly/hl4PrE It also makes me think back to this classic discussion: http://getsatisfaction.com/livelabs/topics/linking_synths
If you're interested in possible future updates to Photosynth's visuals, check these out:
Blaise shows off an experiment that projects the photos in a synth onto the point cloud, rather than onto flat geometric planes (13:55 onward, for example):
2007 November 7th: http://en.sevenload.com/videos/xNyNIRTR-Blaise-Aguera-y-Arcas-Photosynth
For dense reconstructions, see these:
Blaise shows off an experiment that runs stereo vision algorithms on the photos, after the initial position approximation is uploaded, to generate *dense* point clouds (49:00 - 51:00):
2009 May 22nd: http://www.calit2.net/events/popup.php?id=1564
2009 June 12th: http://photosynth.ning.com/video/cyberspace-arriving-tedx
2010 June 3rd: http://photosynth.ning.com/video/augmented-reality-event-2010
From Yasutaka Furukawa, previously of University of Washington, now of Google:
Ambient Point Clouds for View Interpolation:
It would be cool if the Bing game had some user interaction rather than only passive uploads. :)
Humans are good at masking objects, and with the help of an edge-detection algorithm the machine could perhaps learn from the user. The keyboard navigation could have a more game-like feel, with customizations. It would also be good to be able to work with your own local files in an offline environment. The sharing could be even better, too. Maybe add some kind of reward arrangement.