Red Rocks: All main rock formations

By: TheBlakeE


Description

This has me baffled. Why does it match some of the mountains but fail to recognize most of them? The texture of the mountains certainly hasn't changed.
Stats
Synthy 7%
Views 47
Favorites 0
Photos 246
Date Created 2/6/2010

Comments (5)
Nathanael Over 1 year ago
Hmmm. Collecting all of these must have been a lot of work and the results must be disappointing in that light.

I must admit that even I am a little surprised at some of the photos that didn't match. Anyway, on to some possible technical reasons why. Bear in mind that while these are all things I have genuinely heard the Photosynth team say, whether they apply to this case is only my best guess.

> The computer vision algorithms can only recognise features (such as distinctive corners in the cracks of the cliff, I suppose) across a maximum angle difference of about 30 degrees. You wouldn't think this would plague such large objects, since it takes a lot of walking to change your angle on them much, yet there are quite a few different angles here. Any photos taken within 30 degrees of one another can match, but a group that's just a little too different in angle will end up as its own isolated cluster.
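To put a rough number on that 30 degree limit, here's a little sketch. The positions, the helper names and the hard 30 degree cut-off are my own illustration, not anything taken from the synther itself:

```python
import math

# Illustration only: treat each camera position as looking at the same cliff
# and compare the viewing directions. The 30 degree threshold mirrors the
# rule of thumb described above; everything else is made up for the example.

def viewing_angle_deg(camera_xy, target_xy):
    """Bearing from a camera position toward the cliff, in degrees."""
    dx = target_xy[0] - camera_xy[0]
    dy = target_xy[1] - camera_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def within_matching_angle(cam_a, cam_b, target, max_diff_deg=30.0):
    """True if the two viewpoints differ by no more than ~30 degrees."""
    diff = abs(viewing_angle_deg(cam_a, target) - viewing_angle_deg(cam_b, target)) % 360.0
    diff = min(diff, 360.0 - diff)  # take the smaller way around the circle
    return diff <= max_diff_deg

cliff = (0.0, 100.0)
print(within_matching_angle((0.0, 0.0), (40.0, 0.0), cliff))  # ~22 degrees apart -> True
print(within_matching_angle((0.0, 0.0), (80.0, 0.0), cliff))  # ~39 degrees apart -> False
```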
Nathanael Over 1 year ago
This does not, of course, explain everything we see in your synth.

> The computer vision algorithms can only match features if their apparent sizes differ by a factor of 2 or less. Think of a photo where the cliff fills about 60% of the frame compared to another where that same cliff (shot from within 30 degrees of the first, no less) takes up only 20% of its frame. Those two just aren't likely to match. Maybe you'll get lucky, but probably not.
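Using the 60% versus 20% example, the arithmetic works out something like this (I'm reading those percentages as the fraction of the frame's width the cliff spans; the function and the hard factor-of-2 threshold are just my own illustration):

```python
# Illustration of the factor-of-2 scale limit described above.

def apparent_scale_ratio(fill_a, fill_b):
    """Ratio of the cliff's apparent size between two photos."""
    return max(fill_a, fill_b) / min(fill_a, fill_b)

ratio = apparent_scale_ratio(0.60, 0.20)
print(ratio)          # 3.0
print(ratio <= 2.0)   # False -> unlikely to match
```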
Nathanael Over 1 year ago
> Although the full resolution photos are uploaded for viewing, the computer vision algorithms in the synther only use 1.5 megapixel versions of the photos for feature extraction, matching, and the subsequent scene reconstruction. Whatever detail can't be seen in a 1.5 megapixel version of a photo, Photosynth won't be able to see. This both ensures that pretty much all the photos will be about the same width and height when compared to each other (which improves their chances of matching, because the size of a feature now depends on the percentage of the frame it occupies rather than an absolute number of pixels) and also cuts down on the number of image features your computer has to match.
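For a sense of how much gets thrown away, the resize arithmetic looks roughly like this (I don't know the exact resampling the synther uses; this only shows the size reduction):

```python
import math

# Shrink a photo to roughly 1.5 megapixels while keeping its aspect ratio.

def downsampled_size(width, height, target_pixels=1.5e6):
    pixels = width * height
    if pixels <= target_pixels:
        return width, height                      # already at or below the limit
    scale = math.sqrt(target_pixels / pixels)     # same factor on both axes
    return round(width * scale), round(height * scale)

print(downsampled_size(3872, 2592))  # a 10 megapixel photo -> about (1497, 1002)
```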
Nathanael Over 1 year ago
> Because of the 1.5 megapixel rule, images smaller than 1.5 megapixels are less than ideal for synthing. They might (this is just my personal theory) have a better chance if you resize them to at least 2.5 megapixels first and apply the minimum amount of Gaussian blur needed to make the corners of the pixels disappear (there's a rough code sketch below). This gives the synther something it can shrink back down to 1.5 megapixels, so that the frame is at least the same size as the other photos' frames. The blur acts as a crude interpolation for the colours that should fall between what used to be two adjacent pixels before the low resolution image was enlarged, and it makes the photo less ugly to zoom into. It will look a little blurry, but the amount of detail was already a very sparse description of what was there.
...
Nathanael Over 1 year ago
By resizing first, we are spreading that information out over a larger area and the blurring is only making that information look more natural - like it does to us in the real world when we hold something too close for our eyes to focus on.
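If you want to try that, it might look something like this with the Pillow library in Python. The file names, the 2.5 megapixel target and the blur radius are just guesses on my part for illustration; the idea is only "enlarge first, then soften the blocky pixel edges slightly":

```python
from PIL import Image, ImageFilter

def prepare_small_photo(path, out_path, target_pixels=2.5e6, blur_radius=1.0):
    """Upsize a sub-1.5 megapixel photo and lightly blur it before synthing."""
    img = Image.open(path)
    pixels = img.width * img.height
    if pixels < target_pixels:
        scale = (target_pixels / pixels) ** 0.5
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.BICUBIC)                 # enlarge the frame
        img = img.filter(ImageFilter.GaussianBlur(blur_radius))   # hide the pixel corners
    img.save(out_path)

# Hypothetical file names, just to show the call:
prepare_small_photo("small_cliff.jpg", "small_cliff_upsized.jpg")
```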