Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.
Is there a way to use Photosynth to document the relative positions of objects that move through a scene at different times?
Hmmm? Could you provide more details?...
Yes, more details please, dherring. What example do you have in mind?
If your question concerns whether Photosynth can reconstruct both an environment as well as a solid object that moves through the scene, then no, it can't. Either focus on the object, moving the camera enough that it doesn't spend its time tracking the environment, or vice versa.
Today's Photosynth only understands stationary scenes. Any part of the scene that changes (people walking through a town square while you're synthing the square) won't be part of the reconstruction. If there are people sitting on a bench in the scene, though, they stand a good chance of making it into the point cloud as long as they don't shift in their seat or whip their head around too much.
Again, a concrete example of what you have in mind would make this easier to answer.
I am thinking about a more static version of what TV producers do during the Olympics, where they compare the paths of different skiers over the same portion of a downhill course. It requires the same kind of autocorrelation image processing that is presumably used to stitch together sequences of photos, but it would also render objects that deviate from the scene in separate photos as slightly more transparent.
Hmm. I'm envisioning a fairly wide angle shot for skiers on a slope which would mean that the skiers are quite small. This, in turn, means that they wouldn't provide much in the way of image features to lock onto.
Photosynth would definitely lock onto the slope and you could, in flipping through the different screen captures, trace the paths of different skiers, but Photosynth is using the image features to try to solve structure from motion. This method of tracking things means that you pay attention to whatever stays the same in the scene and discard momentarily visible items (in our case, pay attention to the slope and discard the skiers).
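To make the "keep what stays the same, discard what's transient" idea concrete, here is a toy sketch. This is not Photosynth's actual algorithm; it's a hypothetical illustration that treats features as bare 2-D points and keeps only those that reappear (within a small tolerance) across most frames:

```python
# Toy illustration (NOT Photosynth's real pipeline): separate persistent
# scene features from transient ones by counting how many frames each
# feature from the first frame reappears in.

def match(p, q, tol=1.0):
    """Two 2-D feature points 'match' if they lie within tol pixels."""
    return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol

def persistent_features(frames, min_fraction=0.8, tol=1.0):
    """Return features from the first frame that reappear (within tol)
    in at least min_fraction of all frames."""
    keep = []
    for feat in frames[0]:
        hits = sum(any(match(feat, q, tol) for q in frame) for frame in frames)
        if hits / len(frames) >= min_fraction:
            keep.append(feat)
    return keep

# Three frames: two static corners of the slope, plus a 'skier' that
# slides to a new position in every frame.
frames = [
    [(10.0, 10.0), (50.0, 40.0), (20.0, 5.0)],   # skier at (20, 5)
    [(10.2, 9.9), (49.8, 40.1), (30.0, 15.0)],   # skier at (30, 15)
    [(9.9, 10.1), (50.1, 39.9), (40.0, 25.0)],   # skier at (40, 25)
]

static = persistent_features(frames)
print(static)  # only the two stationary corners survive
```

A real system matches descriptors (not raw coordinates) and solves camera poses at the same time, but the filtering intuition is the same: the skier never matches frame to frame, so it never makes it into the reconstruction.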
Some of the rock climbing synths here on the site show examples of the camera following a climber up a rock face, though. Photosynth only reconstructs the rigid part of the scene (the rock face), but it is easy enough to trace the ascent and descent routes because each photo is presented in context.
There's nothing to say that Photosynth won't do what you're talking about in the future, but at that point you'd essentially be talking about Videosynth, where the system can track separate objects within the scene and reconstruct each individually. There are multiple iterations on this concept. The first would be to solve a rigid object moving through a scene: a baseball through a stadium (think slow motion to catch the details of the laces on the ball), or a car through a street while the camera dollies around in a way that gives the computer vision algorithms enough parallax on the surrounding buildings to ascertain their general structure. The second iteration would be tracking multiple rigid parts within a flexible object (think of individual eyes being tracked within a face, with the nose and ears being other practically stationary objects). A third iteration would be tracking a stationary pattern on a fully flexible, moving object (think of a distinctive weave or print on a piece of fabric blowing in the wind).
I'd like to be able to take multiple photos of a scene documenting the positions of objects moving through it at different times, without having to use a fixed camera perspective (for example, different cars driving over the same section of road). Photosynth should be able to realign and merge the backgrounds (the part that does not move). What I would also like to do is retain and overlay the objects that are moving through the scene in their precise locations relative to the background.
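As a rough sketch of the compositing half of this idea (the alignment half is the hard part that Photosynth already does): assuming the photos have already been registered to a common background, one hypothetical approach is to take the per-pixel median as the static background and blend every deviating pixel back in semi-transparently, so each car's position stays visible. The function name and thresholds below are illustrative, not any real API:

```python
import numpy as np

def overlay_movers(frames, alpha=0.5, thresh=30):
    """Composite already-aligned grayscale frames: estimate the static
    background as the per-pixel median, then blend pixels that deviate
    from it (the moving objects) back in at opacity alpha."""
    stack = np.stack(frames).astype(float)   # shape (n_frames, h, w)
    background = np.median(stack, axis=0)    # static-scene estimate
    out = background.copy()
    for frame in stack:
        moving = np.abs(frame - background) > thresh
        out[moving] = (1 - alpha) * out[moving] + alpha * frame[moving]
    return out

# Three 1x5 grayscale 'frames': background value 100, a bright car
# (value 200) moving one pixel to the right per frame.
frames = [np.array([[200, 100, 100, 100, 100]]),
          np.array([[100, 200, 100, 100, 100]]),
          np.array([[100, 100, 200, 100, 100]])]

result = overlay_movers(frames)
print(result)  # car appears at half opacity in all three positions
```

The median works as a background estimate only when each moving object occupies any given pixel in a minority of the frames; with just a few photos per position, as in the cars-on-a-road case, that assumption usually holds.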