Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.
Alright, so I am just so pleased that we finally have geo-alignment in our hands. The UI is elegant, simple, and intuitive, and the feature is well overdue (and thus all the more welcome).
That being said, I've got two suggestions already.
1: It took me a little while to actually click the eye icon to hide the point cloud and verify that I had lined things up as well as I could. While still finessing my synth's position (and before investigating the eye icon), I subconsciously pressed the [Ctrl] key probably five times before trying the actual button. Perhaps it seems backwards to use [Ctrl] to show the point cloud in the regular viewer but to hide it in the editor, but in my mind I suppose the connection that had been made was, "Press the Ctrl key to see what's hidden underneath (or see past what's blocking your view)."
Any chance of adding the [Ctrl] key as a keyboard shortcut for the point cloud 'layer visibility' icon?
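To make the inverted semantics concrete, here's the behaviour I mean as a tiny Python sketch (names are mine, of course; the real editor is Silverlight, so this is only an illustration):

```python
def point_cloud_visible(ctrl_held: bool, in_editor: bool) -> bool:
    """Return whether the point cloud overlay should be drawn.

    In the regular viewer, holding [Ctrl] reveals the point cloud;
    in the alignment editor, where the overlay is shown by default,
    the same key would hide it to expose the map underneath.
    """
    if in_editor:
        return not ctrl_held
    return ctrl_held
```

Same key, same mental model ("see what's hidden"), just applied to whichever layer is on top.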
So the above was just a suggestion, but this second (and at the present time final, I promise) point is truly a pain point. Before we start, I know that it's not your team's code at fault and I'd like to underscore how happy I am that you're finally using Seadragon Maps in at least the editor (still looking forward to the geo-explore page using it). However...
With the Silverlight Map Control, in areas where there is not high resolution satellite imagery or orthographic photography, one is presented with the white tiles with the "No imagery available for this zoom level" type icon. When attempting my second geo-alignment, this proved to be very daunting as I had to shrink the point cloud down until it was almost unusably small in order to align it with the maps.
The only solution I see is that, where higher-resolution imagery is not available, the low-resolution tiles should be projected down into the finer-grained zoom levels. Pixelated is better than minuscule.
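The fallback I'm describing is standard quadtree tile math: when the requested zoom exceeds the deepest level that actually has imagery, fetch the ancestor tile and upscale the matching sub-region. A sketch (function and field names are hypothetical, not the map control's API):

```python
def oversample(x: int, y: int, z: int, max_zoom: int) -> dict:
    """Map a tile request beyond max_zoom onto a crop of an ancestor tile.

    Returns the ancestor tile's coordinates plus the sub-region of that
    tile (in tile-relative units, 0..1) that should be upscaled to fill
    the requested tile. Pixelated, but better than a blank "no imagery"
    tile.
    """
    dz = z - max_zoom            # levels beyond available imagery
    ax, ay = x >> dz, y >> dz    # ancestor tile in the quadtree
    n = 1 << dz                  # ancestor covers an n x n block of requests
    return {
        "ancestor": (ax, ay, max_zoom),
        "sub_x": (x - ax * n) / n,
        "sub_y": (y - ay * n) / n,
        "sub_size": 1 / n,
    }
```

For example, a request two levels past the imagery limit gets a quarter-width crop of its great-grandparent tile, blown up 4x.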
Nathanael, we made a fix late in the release for an unrelated issue which probably introduced this. If I'm right about the cause, we'll definitely have this fixed in our next update.
Nathanael, thanks a lot for your great feedback! I am a software engineer on the Photosynth team. Yes, the eye icon is used to toggle between showing the map underneath and showing the alignment overlay (pointcloud, image, frusta) to make sure things are aligned well. I think it's a good idea to add the [Ctrl] key as a shortcut for users who already know it from the viewer.
The map control we used for this release doesn't have the ability to oversample the tiles when it goes beyond the resolution limit, but we will have it fixed in the next update soon. In the meantime, if the problem is that the ring control has shrunk so much that it's too small to use, you could try zooming the point cloud out by pressing the "-" icon to see if that works.
Thanks again, and we look forward to more feedback from you!
Thanks for the tip, Dan. Believe it or not, I'd used both the grip just under the eye icon *in combination with* the minus button at the lower edge of the ring to get my pointcloud anywhere near small enough. ( This collection: http://photosynth.net/view.aspx?cid=8e0c916c-307a-4af6-8e03-69d299050036 )
A few random questions:
1: Is it possible to further reduce the diameter of the ring while editing the alignment by reducing the size of the browser window? I ask because I noticed that I could enlarge it further by using the browser fullscreen mode.
2: Is there a difference, in terms of the coordinates the viewer receives, between aligning the pointcloud to the map by a) using the ring scaler on the right of the ring or b) using the + and - icons on the lower edge?
3: Does the zoom level at which you align it also make a difference when testing the alignment? (Some pointclouds seem to grip the ground better than others.)
A couple of quick observations graphically:
1: The border on the alignment ring after locking alignment should probably use a vector path rather than a general-purpose Silverlight ellipse with a fixed border width, because it looks quite silly when you zoom out and breaks the illusion of getting further away.
2: The shadow under the current photo while editing alignment shifts its position under the photo based on which way the camera is 'illuminating' the picture frame. This is great design! (I think I would like the shadow to be filled with a radial gradient, though, and to be slightly more transparent than it is currently.)
And also on my mind:
I find watching the different frusta of the cameras move over the map while editing alignment to be absolutely fascinating. I would love it if I had some way to do this in overhead view (although it's that much cooler when you have the map as a backdrop... I suppose I would definitely want the map with this).
Slightly more ambitious: =]
I would love to be able to watch different users walking through my synths, using the above-mentioned overhead map view, and be able to chat with them if we were both online to answer questions, etc.
How about the ability to add our own map layers into the editor? For example we may have access to incredibly high resolution aerial images of the area in question or more recent images.
Also, has anyone tested whether the individual photos, if already geotagged, are now perfectly aligned? Or is it still just one photo?
Hey, here's a question!
In a number of my synths, I've got ortho and near-ortho images included as part of the set. Since you're including an ortho view of the point cloud (which is a SERIOUSLY kick-butt addition to the Photosynth feature set), any chance of tiling multiple ortho images on top of the point cloud, essentially making a large ortho image?
The thought here is to use the geometry of the point cloud to accurately place the ortho images and "build" larger scale orthogonal imagery piecemeal.
(Hope that this makes sense...)
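To pin down what "use the geometry of the point cloud to accurately place the ortho images" could mean in practice: if each ortho photo's matched features give 2D ground-plane correspondences into the point cloud, a least-squares similarity fit (scale, rotation, translation) would snap the image into place. This is just my illustration of the idea, not anything Photosynth actually does:

```python
import math

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform mapping src points onto dst.

    Treats points as complex numbers: after centering both sets,
    s * e^{i*theta} = sum(b * conj(a)) / sum(|a|^2).
    Returns (scale, theta, tx, ty) such that
    dst ~= scale * R(theta) @ src + (tx, ty).
    """
    n = len(src)
    a = [complex(x, y) for x, y in src]
    b = [complex(x, y) for x, y in dst]
    ma, mb = sum(a) / n, sum(b) / n
    a = [p - ma for p in a]          # center both point sets
    b = [p - mb for p in b]
    srot = sum(q * p.conjugate() for p, q in zip(a, b)) \
        / sum(abs(p) ** 2 for p in a)
    t = mb - srot * ma               # translation recovers the means
    return abs(srot), math.atan2(srot.imag, srot.real), t.real, t.imag
```

With a fit like this per image, each ortho shot lands on the cloud's geometry and the mosaic assembles itself piecemeal, which is exactly the "build larger scale orthogonal imagery" idea.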
SoulSolutions: My first reaction to your 'add our own map layers' comment was, "We've already got MapCruncher at our disposal. Surely that imagery will be available once the Silverlight Map Control releases." While hopefully that will be the case, the Photosynth geo-alignment editor is a real joy to use, and it would be a great asset to be able to add our own layers here on the site. Since the whole site is about mapping the world through photography and photogrammetry, it does seem very appropriate.
A twist on this approach might be a special setting on the synther where you could specify that you have ortho imagery to contribute, use the geo-alignment tool to get it close to where it ought to be and then start the specialised synth running your image against Bing Maps tiles (without having to download them ourselves or all that rot) to tighten the placement up. I'm not completely sold on this last scenario, but I'm turning it over in my head.
Tom, do the edges line up with each other enough to satisfy you in this regard? It feels like we'd be moving toward pano stitcher territory there. I'm not trying to shoot you down... I like the idea.
I assume that at some point in the very near future, whether it is here on the site or on Bing Maps 2.0 (or both), we'll have the option to view any geotagged synth with a Bing Maps ground plane. Presumably the pointcloud is layered on top of the map imagery, but then you want the photos that are close to parallel with the ground plane to be displayed as well, on top of the pointcloud, to form a high-res ortho view.
Have you ever used your purest ortho shots to stitch together a megatexture to use in any of your KAP synths, or do you feel that that would pollute the synth's results or require too much rotation of the photos to be worth the trouble?
Your suggestion resembles a discussion I was having recently about navigation methods. ('navigating a synth' harald182)
I haven't yet. So far I've used only the straight images as inputs to Photosynth, then used the things Photosynth produces to do other things with the imagery later.
The big difference between pano stitchery and what I'm after is that most pano stitching software makes no serious attempt to maintain any real 3D framework on which to drape the images. This, coupled with weak correction for lens aberrations and an insistence on projecting the stitched image onto a curved surface, results in distortion.
Here's an example of a stitched set of orthos versus a synth point cloud:
Synth (view ortho):
There's way more curvature in the garden on the left in the stitched image than there is in the point cloud. After walking it on the ground, I know the point cloud is more accurate.
Here's another example that is closer to what I'm after:
Stitch (which in this case had a lot less distortion):
Synth (view ortho):
Unfortunately there are no nice fencelines or plowed sections of garden in that set for me to get a good idea of how much distortion I'm getting. And with stitched imagery, it's always guesswork to find out how much there is, and how it's affecting the final image.
With a synth, so long as you're careful when doing the photography, there's remarkably little distortion in the point cloud. From the tests done by sir_ivar, it looks like "remarkably little" is darned close to "not much at all". Which has a LOT of appeal for something like this.
I'm glad you mentioned Bing Maps. It's not so much that I want to see my point clouds superimposed on top of the Bing Maps ground plane. It's that the combination of the point cloud and the ortho and near-ortho images would provide something like Bing Maps with far more resolution in the area of coverage than would ever be available from satellite imagery alone.
And THAT is what I'm after.
Hey Nathanael, to answer your questions above:
1: Although enlarging the window lets you enlarge the ring, reducing the size of the browser window won't allow you to reduce the diameter of the ring beyond its minimum.
2: You can use the point cloud zoom control or the ring resize control, and there is no difference in the data we save.
3: The point cloud zoom level is there purely for ease of alignment, and doesn't intrinsically affect the accuracy of the alignment. (Although zooming in would likely allow you to do the hand-alignment a little more precisely.)
With respect to the alignment ring border, the reason we decided to keep the border thickness constant while zooming in and out is that the map scale differences are enormous across zoom levels, so it would be easy to lose the ring on the map. One solution would be to implement power-law scaling of the alignment ring thickness, but that was getting pretty esoteric, so we decided against it. :-)
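For the curious, the power-law idea fits in one line: an exponent of 0 gives constant screen thickness (the current behaviour), 1 gives true map scaling (where the ring would vanish when zoomed out), and values in between compromise. The name and parameterisation here are my own, not the team's:

```python
def ring_thickness_px(base_px: float, map_scale: float, alpha: float) -> float:
    """Screen-space ring thickness under power-law zoom scaling.

    map_scale is the current zoom factor relative to the level at which
    the ring is base_px thick. alpha = 0 keeps the thickness constant on
    screen; alpha = 1 scales it like real map geometry; 0 < alpha < 1
    shrinks the ring when zooming out without ever losing it entirely.
    """
    return base_px * map_scale ** alpha
```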
@SoulSolutions: Thanks for the feedback. It would certainly be cool to add map layers, although that's more the domain of Bing Maps / MapCruncher.
With respect to your other question: if what you're asking is "do synths with geotags in the image EXIF get auto-aligned?", the answer is no.
Understood as far as wanting to see as many ortho shots layered together over the ground as possible. I suppose I was just thinking along the lines of every public synth eventually living inside of Bing Maps and the fact that we are, in essence, the imagery providers at that point. Our images will *be* the Bing Maps imagery.
I'll try to put my thoughts into more focus here:
Were you thinking of the photos still existing as individual two-dimensional planes while viewing them all in overhead view or more along the lines of using the pointcloud as a depth map for the photos to be projected onto? I know the latter would be cooler, but I don't think we'll see it in Silverlight for a while.
The curvature seen in panoramas is a direct result of trying to line up all image features from all the images. By contrast, Photosynth can figure out what angle each image was taken at and display it at that angle, compensating for the radial distortion of different shots by adjusting the virtual camera's focal length on the pointcloud, etc. Still, the fact remains that a photo will only ever look right when viewed from the angle at which the light it captured was travelling. This essentially means that keeping every photo as a pure flat quad will never line up *every* point between the 2D images, so there will nearly always be messy overlap at the edges where the photos meet, even when the calculation has correctly placed each point in 3D.
There are a few options for how to display the photos in 3D. Even once you know where the points are in 3D, and roughly where the cameras that took the photos were, you still have to decide where, between the pointcloud and the position the photo was taken from, to display the photo. Do you put it way back where the physical camera was (usually several feet away from the next photo), or do you position the photo as close to the pointcloud as possible, trying to line up the image features with their corresponding points (meaning that as you transition from one photo to the next you can see they actually overlap)? Photosynth opts for the latter and displays the photos right up on the pointcloud, but this is not where the light that formed the photo actually was.
( See http://channel9.msdn.com/posts/Dan/Drew-Steedly-and-Joshua-Podolak-on-Photosynth/ 12:44 - 14:52)
What I'm winding around to is that the way the viewer displays things is actually quite close to what you want: it shows each image as close to the 3D pointcloud as its flat nature will allow. Essentially, all that would change is that in overhead view more photos would flip on than usual, as long as they were close to parallel to the ground plane. I guess I'm just warning you that it may not be quite the seamless high resolution ortho view you're hoping for.
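To make "displays the photos right up on the pointcloud" concrete, here is one plausible way to pick the quad's depth (my own sketch, not Photosynth's actual method): slide the photo along the camera's view direction to the median depth of its matched points, so most features land on their 3D counterparts.

```python
def quad_depth(camera, view_dir, points):
    """Median depth of matched point-cloud points along the view direction.

    camera and view_dir are 3-vectors (view_dir assumed unit length);
    points is the list of 3D points this photo contributed to. Placing
    the photo quad at this depth lines most of its features up with the
    cloud, at the cost of moving it away from the true camera position.
    """
    depths = sorted(
        sum((p[i] - camera[i]) * view_dir[i] for i in range(3))
        for p in points
    )
    return depths[len(depths) // 2]
```

The median (rather than the mean) keeps one stray background point from dragging the whole photo backwards.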
Scott, thanks for the reply.
When talking about the zoom level that we align the pointcloud to (point 3), I was actually wondering whether the different layers of satellite imagery used for alignment might produce differing alignment results (beyond accuracy differing due to seeing more or less clearly), but from your answer above it sounds like that is not the case.
Excellent point as to the visibility of the alignment ring from high zoom levels. I wouldn't have minded seeing what it looked like with power-law scaling, but, honestly, seeing the whole thing, pointcloud representation and all, slowly inflate as one zoomed out would likely have been equally dissatisfying, unless you could apply power-law scaling to the ring alone and leave the rasterised pointcloud to scale normally. So I at least see where you're coming from.