In thinking again about synths constructed of spherical panoramas (http://bit.ly/psfpanosynths), and in talking with Joscelin over the past couple of days about navigation needs in linked synths (http://bit.ly/psflsnav), I was drawn back to the idea of maintaining zoom on a particular part of the scene during navigation (http://bit.ly/psfzoomspin) in order to use a single synth as multiple synth types.
I wanted a simple name for it, so I'm calling it "video highlights" for now. The idea is that you select a part of the scene that you want to stay centered and framed on, and the viewer automatically adjusts its zoom to compensate for the current photo's or pano's distance from that object or part of the scene.
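The distance compensation above can be sketched in a few lines. This is only an illustration of the geometry, not Photosynth's actual viewer API; the function name, coordinates, and `fill_fraction` parameter are all hypothetical:

```python
import math

def zoom_for_target(camera_pos, target_pos, target_radius, fill_fraction=0.8):
    """Return a vertical field of view (radians) that keeps a target of a
    given radius framed at a constant apparent size, whatever the camera
    distance. A smaller returned FOV means a more zoomed-in view."""
    dx, dy, dz = (t - c for t, c in zip(target_pos, camera_pos))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Angular radius of the target as seen from this pano's center.
    angular_radius = math.atan2(target_radius, distance)
    # Widen so the target fills only `fill_fraction` of the viewport height.
    return 2.0 * angular_radius / fill_fraction
```

As the user steps from pano to pano, the viewer would recompute this per position: nearby panos get a wide field of view, distant ones a narrow (zoomed-in) one, so the highlighted object stays the same size on screen.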
It primarily makes sense to me as a user navigation technique in synths constructed of spherical panos, but it may prove useful even for focusing attention on smaller parts of a spin or walk synth constructed of ordinary photos.
Hey Nate: Show me a synth that you think would really benefit from such a feature. What would you highlight in it?
I feel this would be a very interesting feature, especially if it were possible to use different zoom factors across multiple frames.
As I said, the primary use case in my mind is in synths constructed entirely of spherical panos, so, since neither Photosynth 1 nor Photosynth 2 supports uploading such a thing, I can't link you to an example of that.
It isn't difficult, though, to imagine a Walk of spherical panoramas through a museum or capitol building or cathedral or flower garden.
The question quickly becomes whether it is limited to being only a Walk synth, or whether we might equally well call it a Wall synth (as long as the panos are taken close enough to each other).
The very next step is to imagine a mural or inscription on the wall of a government building, a particular tree or flower bed in the garden, a statue in the cathedral, an exhibit in the museum, or a particular house on a street. If we lock the viewport to that portion of the point cloud through all the panos, we essentially have a spin synth of that part of the scene.
The higher the resolution of the panoramas, the greater the distance from which we can satisfactorily zoom in on any given object and still fill the viewer with an acceptable number of pixels.
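That resolution budget is easy to estimate. Here is a rough back-of-the-envelope sketch, assuming an equirectangular pano where the full 360 degrees map onto the image width; the function name and example numbers are mine, not anything from Photosynth:

```python
import math

def pixels_on_target(pano_width_px, distance_m, target_radius_m):
    """Approximate horizontal pixels an equirectangular pano devotes to a
    target of a given radius at a given distance. A 360-degree pano maps
    2*pi radians onto pano_width_px, so a target spanning `angle` radians
    occupies angle / (2*pi) of the image width."""
    angle = 2.0 * math.atan2(target_radius_m, distance_m)  # angular diameter
    return pano_width_px * angle / (2.0 * math.pi)
```

For example, a 16384-pixel-wide pano looking at a statue of roughly 1 m radius from 20 m away yields only a couple hundred pixels across the statue; comparing that number against the viewer's width tells you whether the locked-on crop will still look sharp.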
As I stated in the second link above, I would prefer to see this as a navigation technique, usable on the fly by any user, but I realize that means either:
1) getting Open Seadragon working in 3D or
2) doing server side cropping of the pertinent portions of the images to send to the viewer at the viewer's resolution (a much less desirable solution).
Whether it can be used in realtime by any user during navigation (and shared in comments) or is an authoring tool that only the synth's owner can use (much less attractive to me), being able to outline a particular portion of the point cloud ought to allow the system to prefetch the necessary tiles before beginning playback of a video highlight, in the same way that the viewer currently prefetches images in one direction.
As for examples in current synths, I would say that given any synth of a statue or a person standing still, a simple use case would be to zoom into the face or head of the statue or person.
In some cases of statues the inscription, information plaque, or pedestal may be of equal interest.
In cases of spins with a number of objects (a yard, a still life, etc.), you have essentially captured multiple spins. By giving the user the ability to lock onto only the tree, only the rose bush, or only the barbecue grill seen in the yard, you enable users to harvest more value from their viewing experience without stopping to zoom in and out on every photo. Similarly, with a large model set or a fruit stand, enabling us to zoom in on each piece of fruit or each component of the model lets us extract many little spins, even if we're seeing the same angles as when zoomed out.
This isn't really so different from the stabilized slideshows that Noah and Blaise talked about years ago, only applied to a more structured set of photos. As long as the photos are of sufficiently higher resolution than the viewer/monitor, you can actually harvest multiple stabilized slideshows per spin (in the case of what can be uploaded today) or extract multiple spins from a walk or wall of panoramas.
I'm sure that anyone who walks much, or has been a passenger in a vehicle, has taken the opportunity to watch something as they approach and pass it. Because of this (especially in the case of synths of spherical panos), it makes perfect sense to me as a navigation tool.
Back in the mid-1990s, Nintendo was making their first 3D Legend of Zelda game and found the need for a new control scheme, which they named Z-Targeting.
Unlike Mario 64, which had no such need (nor did the 2D overhead control scheme of previous Zelda games), fighting moving enemies in 3D made it difficult to maneuver and stay facing an opponent at the same time in order to land a sword strike. Nintendo's solution was simply to lock onto a particular enemy so that Link always faced them, regardless of the direction he walked.
In a synth of spherical panos the target won't be moving, but the principle is the same. Being able to specify a particular subject to keep the viewer locked onto is a natural way to move around the scene.
My hope would be that, given a specified portion of the point cloud, the viewer would handle the framing automatically, but I echo Joscelin's request for manual fine-tuning if/when necessary.
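The lock-on itself is just a per-pano look-at calculation, the stationary-scene analogue of Z-Targeting. A minimal sketch, assuming y is the up axis and a yaw/pitch viewer model (coordinate conventions and function name are my assumptions, not the actual viewer's):

```python
import math

def lock_on_orientation(pano_center, target):
    """Yaw and pitch (radians) that aim a pano viewer at a fixed 3D target
    point from the given pano center."""
    dx = target[0] - pano_center[0]
    dy = target[1] - pano_center[1]  # up axis
    dz = target[2] - pano_center[2]
    yaw = math.atan2(dx, dz)                    # heading around the up axis
    pitch = math.atan2(dy, math.hypot(dx, dz))  # elevation toward the target
    return yaw, pitch
```

Evaluating this at every pano along a walk, combined with the distance-based zoom compensation described earlier, keeps the chosen subject centered and constantly framed as the user moves through the scene.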
Hi Nate, great vision, and plenty of other ideas from you and a few others. How about clustering and prioritizing them to chart a course toward realization?