Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.
Another thing that I think would be cool is the possibility of seeing Photosynth in action while you add pictures.
One would start adding or removing pictures one by one (or multiple pictures at once with Win7 touch) and the algorithm would match and overlap them to create a synth on the fly!
Would that be possible?
It seems like it would have to redo the entire synth every time you added or removed a photo (that is, it would have to compare all the image features or keypoints to each other all over again... it could skip finding image features for all the photos that are staying in the synth when new ones are added). It would definitely be cool to have the position of all the photos be held in 3D and animate to their new positions when more photos are added.
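The shortcut mentioned above (redo the matching, but skip re-extracting features for photos already in the synth) can be sketched roughly like this. Everything here is a hypothetical stand-in for Photosynth's real pipeline: `extract_features` and `match_pair` are toy placeholders, not the actual SIFT-style keypoint code.

```python
def extract_features(photo):
    """Toy stand-in for keypoint extraction (here: character bigrams of the name)."""
    return {photo[i:i + 2] for i in range(len(photo) - 1)}

def match_pair(feats_a, feats_b):
    """Toy similarity score: number of shared features."""
    return len(feats_a & feats_b)

class IncrementalSynth:
    def __init__(self):
        self.features = {}  # photo -> cached feature set (extracted once)
        self.matches = {}   # (photo_a, photo_b) -> match score

    def add_photo(self, photo):
        feats = extract_features(photo)  # only the NEW photo is processed
        for other, other_feats in self.features.items():
            # only pairs involving the new photo need matching;
            # all previously computed pairs are reused as-is
            self.matches[(other, photo)] = match_pair(other_feats, feats)
        self.features[photo] = feats

    def remove_photo(self, photo):
        self.features.pop(photo, None)
        self.matches = {pair: score for pair, score in self.matches.items()
                        if photo not in pair}

synth = IncrementalSynth()
for name in ["tower_north.jpg", "tower_east.jpg", "tower_close.jpg"]:
    synth.add_photo(name)
print(len(synth.matches))  # 3 photos -> 3 pairwise matches
```

The catch, as noted, is that the expensive 3D scene reconstruction downstream of this matching step would still have to rerun over the whole match graph each time.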
A related idea that I've had for a long time now is to have the option to let people watch while I'm uploading a new synth. The 3D data is the very last thing to get calculated and uploaded, so you couldn't do anything too amazing, but you could watch as new photos flow into the collection, upload the grouping data from the image feature matching that happens right before scene reconstruction so that viewers could watch all the photos uploaded so far group appropriately, and then at the end watch as they all fly into place in 3D. Awesome!
I know that I would love to watch one of LostInTheTriangle, Schn828, or alFaku's megasynths be formed in the manner that I describe above.
I would imagine that real-time synthing would be hard to do in terms of computational economy, but Windows 7 touch makes me imagine the possibility of manipulating synths in a radical new way: adding and removing multiple pictures from the 'dataset' of the synth. Maybe removing pictures that do not overlap with any other, and searching in real time (by adding) for pictures that could match the current dataset... that would give an extra WOW effect to Photosynth!
If it's possible to do a real-time video Photosynth, why wouldn't it be possible to synth on the fly?
I am sincerely puzzled.
A: The video being used is extremely low resolution, meaning many fewer details per image to match to the other input images, meaning faster image keypoint matching.
B: What you see there is actually far more related to what Microsoft Research's Image Composite Editor or Photoshop Photo Merge do - that is, create one large image from several smaller images.
C: Photosynth works with images taken at fairly different angles - up to 15 or so degrees different from each other. The more basic sort of stitching that panorama stitchers (like Microsoft Research ICE) do requires the input photos or video to be taken from close to exactly the same spot.
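Point A above can be put in rough numbers: brute-force keypoint matching compares every keypoint in one image against every keypoint in the other, so the cost of matching a pair scales with the product of their keypoint counts. The keypoint counts below are illustrative assumptions, not measurements from Photosynth or the video demo.

```python
def pair_match_cost(keypoints_a, keypoints_b):
    """Comparisons needed to brute-force match two images' keypoints."""
    return keypoints_a * keypoints_b

hi_res = 10_000  # assumed keypoints in a full-resolution photo
lo_res = 500     # assumed keypoints in a low-resolution video frame

speedup = pair_match_cost(hi_res, hi_res) // pair_match_cost(lo_res, lo_res)
print(speedup)  # 400 -> 400x fewer comparisons per image pair
```

With hundreds of times fewer comparisons per pair, it's plausible that the low-resolution video case can keep up in real time where full-resolution photos couldn't.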
thanks for the explanation, Nathanael!
You might also be interested in 'Core Tools for Augmented Reality', also shown at TechFest 2009. Take your pick of the two videos; they're both the same information although very slightly different presentations.
I was going to say that they demonstrate the camera having some idea of the geometry of a space and therefore being able to layer graphics on top of it semi-believably, but the overlays are still somewhat sketchy.
Imagine, though, if it was using a high quality Photosynth pointcloud to match the video to. You could do a lot more accurate collision detection for your overlays with a pointcloud as dense as the ones generated by Photosynth.
None of the videos we've been linking to really show the image features so well so I'll try to dig one up here that does a little better job...
Here we go... a video from Adobe researcher Dan Goldman showing some work done with University of Washington (where Photo Tourism was born). Watch the middle of the video to see the image features tracked in a medium resolution video.