TheBlakeE

Blake's Photosynths


Holy Cross Abbey
David-Photosynth-Team (Over 1 year ago)
Very nice! Great reconstruction...

Nathanael (Over 1 year ago)
The shape of the abbey really came through in overhead view in this version. Good work!

I-50 Colorado Mile Marker 256
Nathanael (Over 1 year ago)
Oh yeah. You nailed this one.

Red Rocks All main rock formations
Nathanael (Over 1 year ago)
By resizing first, we are spreading that information out over a larger area and the blurring is only making that information look more natural - like it does to us in the real world when we hold something too close for our eyes to focus on.

Nathanael (Over 1 year ago)
> Because of the 1.5 megapixel rule, images smaller than 1.5 megapixels are less than ideal for synthing. They might (this is just my personal theory) have a better chance if you resize them to at least 2.5 megapixels first and apply the minimum amount of Gaussian Blurring needed to make the corners of the pixels disappear. This gives the synther something it can shrink down to 1.5 megapixels, so the frame is at least the same size as the other photos' frames. The blurring acts as a crude interpolation for which colors should fall between what used to be two adjacent pixels before we blew the low resolution image up, and makes it less ugly to zoom into. It will look a little blurry, but if you think about it, the detail there was already a very sparse description of what was in front of the camera. ...
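A rough sketch of the prep step described above, in pure Python on a toy grayscale grid. This is only an illustration of the idea, not anything Photosynth itself does: nearest-neighbour upscaling creates hard pixel-block corners, and a crude 3x3 box blur (standing in for a light Gaussian blur) smooths them away.

```python
def upscale_nearest(pixels, factor):
    """Nearest-neighbour upscale of a 2-D grid of grey values by an
    integer factor -- each source pixel becomes a factor x factor block."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in pixels for _ in range(factor)]

def box_blur(pixels):
    """Crude 3x3 box blur standing in for a light Gaussian blur:
    averages each pixel with its neighbours to soften block edges."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [pixels[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

# A 2x2 "photo" blown up 3x gets hard block edges...
big = upscale_nearest([[0, 255], [255, 0]], 3)
# ...and the blur puts intermediate values along the block boundaries,
# a crude interpolation of what should fall between the original pixels.
smooth = box_blur(big)
```

In practice you would do this with an image library rather than by hand; the point is only that the blur fills the seams between enlarged pixels with in-between values.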

Nathanael (Over 1 year ago)
> Although the full resolutions are uploaded to be viewed, the computer vision algorithms in the synther only utilise 1.5 megapixel versions of the photos for feature extraction, matching, and subsequent scene reconstruction. Whatever detail can't be seen in a 1.5 megapixel version of a photo, Photosynth won't be able to see. This both ensures that pretty much all the photos will be about the same width or height when compared to each other (which increases their chances of being matched, because the size of things now depends on the percentage of the frame they occupy rather than a set number of pixels) and cuts down on the number of image features that your computer will have to match.
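A small sketch of what that downsampling works out to in numbers. This is just my arithmetic illustrating the rule, assuming a uniform scale that preserves aspect ratio; the actual resampling Photosynth uses isn't public.

```python
import math

def downscale_to_megapixels(width, height, target_mp=1.5):
    """Return the dimensions a photo would have after being scaled
    down so its total pixel count is about target_mp megapixels.
    Photos already at or below the target are left untouched."""
    pixels = width * height
    target = target_mp * 1_000_000
    if pixels <= target:
        return width, height
    scale = math.sqrt(target / pixels)
    return round(width * scale), round(height * scale)

# A 12 MP photo (4000x3000) and an 8 MP photo (3264x2448) both land
# at roughly 1414x1061, so feature sizes become comparable between them.
print(downscale_to_megapixels(4000, 3000))
print(downscale_to_megapixels(3264, 2448))
```

This is why photos from very different cameras can still match well: after the shrink, what matters is how much of the frame an object fills, not how many native pixels it covered.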

Nathanael (Over 1 year ago)
This does not, of course, explain everything we see in your synth. > The computer vision algorithms can only recognise features if they differ in size by a factor of 2 or less. Think of photos where the cliffs fill about 60% of the picture compared to another photo where that same cliff (from within 30 degrees of the first photo, no less) only takes up 20% of its photo. That's a factor of 3, so these just aren't likely to match. Maybe you'll get lucky, but probably not.
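That rule of thumb can be written down as a one-liner. This is my own heuristic encoding of the factor-of-2 limit described above, not code from the synther; the fractions are the share of the frame the object fills.

```python
def likely_to_match(frac_a, frac_b, max_ratio=2.0):
    """Heuristic for the factor-of-2 size rule: two views of the same
    object are unlikely to match if its apparent size (fraction of the
    frame it fills) differs by more than about max_ratio."""
    ratio = max(frac_a, frac_b) / min(frac_a, frac_b)
    return ratio <= max_ratio

print(likely_to_match(0.60, 0.20))  # cliff fills 60% vs 20%: ratio 3, unlikely
print(likely_to_match(0.60, 0.35))  # ratio ~1.7, within the limit
```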

Nathanael (Over 1 year ago)
Hmmm. Collecting all of these must have been a lot of work and the results must be disappointing in that light. I must admit that even I am a little surprised at some of the photos that didn't match. Anyway... onto some possible technical reasons why. Bear in mind that while I will be listing things that I really have heard the Photosynth team say, whether these things apply to this case or not is only my best guess. > The computer vision algorithms can only recognise features (such as distinctive corners in the cracks in the cliff, I suppose) from a maximum angle difference of 30 degrees. You wouldn't think that this would plague such large objects because it takes quite a lot of walking to change your angle on them much. Nevertheless there are quite a few different angles here. This means that anything within 30 degrees of each other could match but if you have another group that's just a little too different, then it will be its own isolated group.
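The 30-degree rule explains how a synth splinters into isolated groups, and it's easy to sketch. This is my illustration only (it ignores the wrap-around at 360 degrees): photos chain together while each step to the next bearing stays within 30 degrees, and one larger gap splits the chain.

```python
def angle_groups(bearings, max_step=30):
    """Cluster camera bearings (in degrees) into chains of photos that
    can link up: each step to the next-nearest bearing must be within
    max_step degrees, so a single larger gap creates an isolated group."""
    ordered = sorted(bearings)
    groups = [[ordered[0]]]
    for b in ordered[1:]:
        if b - groups[-1][-1] <= max_step:
            groups[-1].append(b)
        else:
            groups.append([b])
    return groups

# Bearings 0 through 60 chain fine, but the jump from 60 to 120
# exceeds 30 degrees, so the last two photos become their own group.
print(angle_groups([0, 20, 45, 60, 120, 140]))
```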

W0QL Ham Tower
Nathanael (Over 1 year ago)
I especially enjoy selecting the 'Tower 2' highlight and then the 'Front' highlight. The transition is very cool.

"S" Mountain Salida,  CO
"S" Mountain Salida, CO
TheBlakeE (Over 1 year ago)
Thank you so much for all your comments, Nathanael. I don't know how long you've been doing this, but it's impressive to be so patient and thorough with a newbie after all this time. My goal originally was just to do a 360 of S Mountain itself. While I was hiking up the back side of it, though, I discovered these interesting objects (the water tower and the colorful tree) and felt compelled to try those as well. I think I'll just forget about the tree for now, although I saw a great example that somebody did of a similar object, so maybe I'll come back to it later. I do want to try to get a better synth of the mountain itself, though, and to zoom from the mountain to the building at the top. I tried to geoalign it, but the pictures didn't link well enough for me to tell which way was which. Thanks again! Many more to come from TheBlakeE. :)

Nathanael (Over 1 year ago)
This means that synthing individual objects and geoaligning them may, in the end, be a better approach than trying to pack an entire environment into a single synth. I'll admit, though, that until synth linking is in our hands the desire to synth an entire scene is very powerful and there's nothing to say that you shouldn't. I will say that you may find better success in mastering synthing different types of individual objects. Once you nail that down and get a feel for about how many photos are necessary for the desired detail of different objects, it becomes very simple to string them together by also collecting photos looking from (and walking from) one object to the next (some of which you've done in this synth).

Nathanael (Over 1 year ago)
The tradeoff for this way of forcing Photosynth to get something right (the orbit) is that you must understand when an orbit is overkill, and when to use a dense orbit versus a sparse one. In the above case, where you want to cover an environment with multiple objects within the scene, trying to capture an orbit of photos for every surface (even if your camera had enough space on the memory card) would almost certainly choke your computer when you tried to reconstruct the scene with the synther. Shooting full orbits around different parts of individual objects can easily work, but you don't get the joy of seeing the entire scene in the pointcloud all together. This may change by the end of 2010 if the Photosynth team completes 'synth linking' this year (no promises there, though). I see that you've geotagged your synth above, but I can't tell whether you also geoaligned it. Assuming that you have, the idea is that you will eventually be able to navigate between all geoaligned synths.

Nathanael (Over 1 year ago)
Photosynth can only make jumps between photos that are pointed at the same place on the same surface and that differ by a maximum of 30 degrees. Photosynth fundamentally works by matching distinctive textures. From this, we can see exactly why the water tank was so successful and why the star was such a failure. The primary textures in the photos of the star were either the mountains behind it or the clouds further behind it. The star itself is nearly invisible to Photosynth because it offers no distinctive texture to lock onto. With objects which have no obvious texture, or which have extremely complicated surfaces, the number of photos necessary to derive a pointcloud (whether detailed or at least not malformed) jumps up significantly, in order for Photosynth to track their features and thereby calculate the positions of those features. The simplest way to lock Photosynth into succeeding at tracking something is to provide a dense orbit of photos around a surface.

Nathanael (Over 1 year ago)
24 photos is a good minimum number of photos when circling an object. In the tree's case, the uneven ground and the shrubs surrounding it make it difficult to complete a clean circuit, but as it is, at your close range I counted only 10 photos moving from one side to the other. I made this sort of mistake many times when I began synthing, and I am still looking for ways to beat it on very large and complex objects. I remember reading in the Photosynth Photography Guide (available here: http://photosynth.net/howtosynth.aspx ) when I began that I should have three photos of something from three different angles in order for Photosynth to detect and position it correctly. What I began realising after I had some breakthrough successes was that I had to stop thinking of objects as single objects and start thinking of them as collections of separate surfaces. For great pointclouds you need some overlap in addition to a decent coverage of each surface in the object.
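The 24-photo figure ties back to the 30-degree rule: a quick bit of arithmetic (my illustration, not anything official) shows why 24 shots in a full circle is comfortable while 10 is not, and sketches where evenly spaced orbit positions would fall around an object at the origin.

```python
import math

def orbit_spacing(num_photos):
    """Angle in degrees between neighbouring shots in a full circle."""
    return 360 / num_photos

def orbit_positions(num_photos, radius):
    """(x, y) camera positions for an evenly spaced orbit of the given
    radius around an object at the origin (a flat, idealised layout)."""
    return [(radius * math.cos(math.radians(i * orbit_spacing(num_photos))),
             radius * math.sin(math.radians(i * orbit_spacing(num_photos))))
            for i in range(num_photos)]

# 24 photos -> 15 degrees apart, well inside the 30-degree matching limit;
# 10 photos -> 36 degrees apart, already past it, so links start breaking.
print(orbit_spacing(24), orbit_spacing(10))
```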
