Duplication Test: 1MP Tiles, 50% Overlap - Mid Scale

By: Nathanael.Lawrence


Description

A test to simultaneously determine:

1) whether an intermediate scale of crops is needed, between the 1 megapixel chunks (which I have chosen to be small enough to evade being downsampled before feature extraction) and the original image (which the synther will downsample for synthing purposes but not for display purposes), in order for them to match, and

2) how many points duplication is capable of generating, due to overlapping tiles of original pixels. (The duplication is not quite pure, since each tile has its own JPEG compression and different contents, although derived from the same original pixels; however, the minimum possible compression was used, so the results should be a best-case scenario, assuming all matches are successfully aligned.)
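The two offset tilesets described above can be sketched in Python. This is a hypothetical illustration, not the tool actually used for this synth; the 2500x1500 image size is an arbitrary example.

```python
def tile_boxes(width, height, tile=1000, offset=0):
    """Crop boxes (left, top, right, bottom) for a grid of square tiles.

    A second tileset built with offset = tile // 2 starts at the centre
    of the first set's tiles, reproducing the 50% overlap in this synth.
    Tiles along the right and bottom edges are the remainders left when
    width and height are not multiples of `tile`.
    """
    return [(left, top, min(left + tile, width), min(top + tile, height))
            for top in range(offset, height, tile)
            for left in range(offset, width, tile)]

# A hypothetical 2500x1500 image: 6 primary tiles, 2 half-offset tiles.
primary = tile_boxes(2500, 1500)
shifted = tile_boxes(2500, 1500, offset=500)
```

Every pixel of the interior is then covered by tiles from both sets, which is exactly what lets features verify against copies of themselves.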
Stats
Synthy: 91%
Views: 56
Favorites: 0
Photos: 33
Date Created: 2/12/2010

Comments (12)
Nathanael.Lawrence Over 1 year ago
As expected, overlapping tiles artificially preserve image features in the pointcloud by allowing them to self-verify between different copies of themselves.
Nathanael.Lawrence Over 1 year ago
An unexpected phenomenon is observed: a visible lack of features along tile borders. This test used two sets of 1000x1000 pixel tiles, with one set offset so that the upper-left corner of each of its tiles falls precisely at the centre of the other set's tiles. I anticipated that this would give the sort of point density seen within each segment above, but the lack of features around the edges is mystifying. Presumably JPEG compression throws away extra data along the edges of images, since it is deemed to be passed over by the human eye. The effect seen above actually gives the false impression that the tiles used are half their actual size (excepting the topmost left tile and others along the edge of the image, which are merely the result of a remainder when dividing images whose length and width are not multiples of 1000).
Nathanael.Lawrence Over 1 year ago
Tiles larger than 1000x1000 can still be used to sneak under the synther's downscaling and therefore retain all image features during feature extraction. My choice to stick strictly to 1000x1000 pixel tiles above also had the side effect of producing extremely narrow tiles along the right side of the image, several of which did not successfully match.

If I knew precisely where the synther's boundary for downsampling lies, then I could use that as a maximum tile size and simply divide any arbitrarily sized image into square tiles which do not exceed that number. I have asked the Photosynth team this question once, but received no reply.
(See: http://getsatisfaction.com/livelabs/topics/what_is_the_maximum_resolution_used_for_feature_extraction )
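If that boundary were ever published, the division could be automated. A minimal sketch, assuming (purely as a guess, not a figure confirmed by the Photosynth team) a threshold of around 1.5 megapixels:

```python
import math

def safe_tile_size(width, height, max_pixels=1_500_000):
    """Largest near-square tile dimensions whose area stays under an
    assumed downsampling threshold (max_pixels is a guess, not a
    confirmed figure).

    Shrinking the tile so it divides the image evenly also avoids the
    narrow remainder tiles seen along the right edge of this synth."""
    side = math.isqrt(max_pixels)        # largest square under the limit
    cols = math.ceil(width / side)
    rows = math.ceil(height / side)
    return math.ceil(width / cols), math.ceil(height / rows)
```

For a hypothetical 3872x2592 (10 megapixel) original this yields 968x864 tiles in a 4x3 grid, each comfortably under the assumed limit, with no narrow strips left over.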
Nathanael.Lawrence Over 1 year ago
For the record, two different techniques are being tested here, one valid and one invalid. The valid one is using tiles of an original image smaller than the size to which the synther downsamples all images, so that the detail in these portions of the original pixels can be scanned and compared against the detail found in pieces of other unique photos taken from unique perspectives. The invalid one is the artificial preservation of points in the pointcloud by using multiple copies of the same photo, thus letting the features found within it match themselves perfectly. That preserves points for all features, regardless of whether they appear in photos (or portions of photos) taken from truly distinct points of view, and therefore regardless of the fact that their three-dimensional coordinates cannot be solved for.

For comparison, see these synths:
http://photosynth.net/view.aspx?cid=36ea0c15-bde3-4427-b783-e491dd41ee8a
http://photosynth.net/view.aspx?cid=5d0379c2-895f-4a70-bae7-2d3fc4f6d5a8
Nathanael.Lawrence Over 1 year ago
As to the first question posed in the description: the synther's 1.5 megapixel version of the original 10 megapixel image, although vastly different in scale from the full-sized portions available in the tiles, was still able to match them.

This means (since I will be using a 10 megapixel camera for the foreseeable future) that I need not bother with any intermediate-scaled tiles, or with any of the headaches that come with the math as larger and larger tile sizes are resized to 1.5 megapixel versions of themselves by the synther for matching.
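The scale gap being bridged here can be put in numbers (assuming the 1.5 megapixel working size is accurate):

```python
import math

# Linear scale ratio between full-resolution tiles and the synther's
# assumed 1.5 megapixel working copy of a 10 megapixel original.
# Megapixels measure area, so linear scale goes as the square root.
mp_original, mp_working = 10.0, 1.5
linear_scale = math.sqrt(mp_original / mp_working)  # ~2.58x linear detail
```

So the matcher evidently tolerates features seen at roughly 2.6x different linear scales, which is why no intermediate tileset was needed.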

This is good news for me, as it means that for the tests which I've been putting off since 2008 I can simply generate one tileset per parent image (without any nonsense about overlapping tilesets) and expect them to register to each other, minimising the artificial preservation of points which cannot be found in other unique images.
Nathanael Over 1 year ago
Ideally, I could simply use only the 1000x1000 pieces of every photo without even bothering about throwing the originals in (thus completely ruling out any possibility of false preservation), but I am wary that without the original as a guide, the pieces of each unique photo would never be correctly aligned with each other.
Nathanael Over 1 year ago
I have thought of one use for false preservation of points, and it runs thus. My fellow synther and inquiring mind, Jim (a.k.a. jimcseke), was recently experimenting with providing multiple edits of each photo, zooming in progressively with each edit, and found that if he rotated them as he zoomed, Photosynth retained many more points per image.

At first I was a bit put off, because I knew that the reason Photosynth throws away features (and therefore, ultimately, points from the pointcloud, even of features found at the default 1.5 megapixels) is that it has not been able to successfully identify those same features in other photos. Jim was preserving lots of points from each copy of the photo because the synther was comparing those parts to themselves.
Nathanael Over 1 year ago
Two things were happening at once. Jim was cropping in (allowing the synther to see features it normally wouldn't, because it works from small versions of the original images), which is good.

He was also making Photosynth keep parts of each image that could only be found in itself. These points, although kept in the pointcloud, would never be properly arranged in 3D, because no depth can be calculated from a single image: nothing moves relative to anything else. This was bad.

I was a bit unhappy to tell him that I didn't think the rotating could be of much use. Then I got to thinking: couldn't you use that sort of technique on things that are *meant* to be flat? Something like an information plaque, if photographed head on, could be cropped out and the crop rotated all the way around, so that its part of the pointcloud would be super dense. The result would be flat, as this technique always produces, but since the original object was flat, there's no problem.
Nathanael Over 1 year ago
I'm still a firm believer that for more *meaningful* points, which actually follow the correct shape of things, you need more distinct perspectives, not the same perspective rotated around in the computer (which has its own problems, such as the fact that pixels can only be rotated in 90 degree increments without making an absolute mess of them), but I was happy to find something I liked about Jim's rotating.

The cropping idea has been rolling around in my brain since the first week or two that I used Photosynth, but I hadn't ever done much with it except to glue some wide shots and close-ups together, even though I knew that it could make the entire pointcloud denser if you used crops of the correct size(s). The big holdup was mainly that I (to this day) don't know what rule the synther uses to decide whether to downsample an image before inspecting it for features; I don't know the magic number that marks that boundary.
Nathanael Over 1 year ago
Due to Jim's experiment, though, and due to sustained interest from myself and my fellow pointcloud aficionados and fanatics, I plan to try to nail down some good, specific but generally applicable practices this weekend: how to choose crop sizes, suggestions for programs to batch-create tiles for a given set of images, and so on.

Using the Seadragon team's tools for converting images to Seadragon's Deep Zoom Image format should actually work, as I can specify the tile size and compression quality. Once every image is converted, I'll only need the tiles of the largest (read: original) resolution from the image pyramid to combine in the synther with the original images, but that should provide an excellent demonstration of my theory, should all go well.
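To estimate what that yields, here is a sketch of the full-resolution level of a Deep Zoom pyramid (level numbering follows the published Deep Zoom scheme; the 10 megapixel image size is a stand-in):

```python
import math

def dzi_top_level_tiles(width, height, tile_size=1000):
    """Tile count at the full-resolution level of a Deep Zoom Image
    pyramid, ignoring tile overlap. Deep Zoom numbers its levels
    0..ceil(log2(max(width, height))); the top level holds the
    original pixels cut into tile_size squares."""
    top_level = math.ceil(math.log2(max(width, height)))
    cols = math.ceil(width / tile_size)
    rows = math.ceil(height / tile_size)
    return top_level, cols * rows
```

A hypothetical 3872x2592 original would sit at level 12 of its pyramid and cut into a 4x3 grid, so only those 12 tile files (plus the original) would go to the synther; all the lower pyramid levels can be discarded.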
Nathanael Over 1 year ago
As observed above, the synther seems reluctant to find image features which lie on the edges of tiles. In my coming tests I will certainly not have tiles from the same image overlap in the manner used in this synth. (Multiple tilesets at the same scale from the same image = a big fat negative for real synthing. We don't want self-verifying, self-sustained features, and therefore points, from within the same image.) It does, however, make me wonder whether a slight amount of overlap on the edges of each tile within an image's single tileset might be useful.

I am still somewhat uncertain as to how well the individual crops will match any other photo's crops, as they will all be automated and will therefore cut in half many things in the photos that you would keep together if you were cropping by hand. Will finding half a surface in one tile be enough to match it to a third of that same surface in another photo's tile?
Nathanael Over 1 year ago
Minimal overlap may be inevitable, but unless I am mistaken there is a parameter in Seadragon's tools to specify the amount of overlap between tiles. Time will tell.
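A single tileset with a small seam overlap, in the spirit of that Deep Zoom parameter, might look like the sketch below (the 8 pixel overlap is an arbitrary guess at what would keep edge features whole; Deep Zoom tools default to a much smaller value, typically 1 pixel):

```python
def overlapped_boxes(width, height, tile=1000, overlap=8):
    """Crop boxes for one tileset in which each tile extends `overlap`
    pixels past its neighbours, so a feature sitting on a tile seam
    appears whole in at least one tile."""
    boxes = []
    for top in range(0, height, tile):
        for left in range(0, width, tile):
            boxes.append((max(left - overlap, 0),
                          max(top - overlap, 0),
                          min(left + tile + overlap, width),
                          min(top + tile + overlap, height)))
    return boxes

# Two neighbouring tiles of a 2000x1000 image share a 16-pixel strip.
pair = overlapped_boxes(2000, 1000)
```

Unlike the 50% overlap in this synth, a strip this thin duplicates almost no features, so it should not artificially preserve points.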

I tell you what: I really do wish that the synther simply had an option to analyse the original photos, as originally provided, for image features, rather than requiring all of this cropping nonsense. We've even seen, from sneak peeks at the work being done by the Photosynth team as well as the GRAIL lab at the University of Washington, that dense pointclouds are the next step before a mesh can be created and the features (whose estimated positions are what we currently see represented by the points) can be textured onto the mesh. So we know that the Photosynth of the future will certainly afford us denser pointclouds, but it's all a waiting game as of now. This ugly cropping cheat seems to be the only way around it for now, short of knowing (or being) a programmer.