As detailed elsewhere, I have long been interested in letting the synther examine every last pixel of my photos for image features to match, rather than only the features it can find in the 1.5 or 2 megapixel versions of the photos, as it has done since its launch.
I knew that if I chopped the original-resolution photos into pieces smaller than the 1.5 megapixel limit, Photosynth would not resize those pieces. I doubted, however, that Photosynth was clever enough to place the tiles of the original photos accurately enough to make each original image appear seamless when slightly zoomed out from any given tile. I had therefore planned to include both the pieces of each image (so every detail would be examined) and the actual original images (to give the synther confirmation of how close the tiles were to one another, and to give viewers something to actually look at), even though the originals would only be matched at their 1.5 megapixel size.
Since all 130 original photos had previously been calculated to be 100% synthy, I knew the tiles had an excellent chance of matching. I also knew, though, that parts of some images, such as the background behind the hydrant, which was out of focus because of the shallow depth of field and so contained no texture to match, might fall out of the synth if I synthed the tiles alone, but might match their parent images if I could include both.
In the end, I tried on two separate weekends to make a synth containing the 130 originals as well as square tiles whose sides were exactly half the width of my portrait photos or half the height of my landscape shots (these tiles are still not quite under the 1.5 megapixel limit, but they still let the synther look at objects much more closely than the small versions of the original images do). Each time, this combination of photos and their pieces threw my iMac's dual processors into a furious 50+ hour battle that ended in an out-of-memory error.
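For anyone who wants to try the same trick, the tiling itself is easy to script. The sketch below is my own plain-Python take on it, not anything Photosynth prescribes: it computes square crop boxes with sides equal to half the shorter image dimension (half the width of a portrait shot, half the height of a landscape one), clamping the final row and column to the image edge so every pixel is covered.

```python
def spans(total, side):
    """Start offsets that cover `total` pixels with tiles of `side` pixels.

    Steps by `side`; if the last regular tile would fall short of the edge,
    one extra offset is clamped so the final tile ends exactly at the edge
    (overlapping its neighbour slightly rather than leaving pixels uncovered).
    """
    starts = list(range(0, total - side + 1, side))
    if starts[-1] + side < total:
        starts.append(total - side)
    return starts

def tile_boxes(width, height):
    """Square crop boxes (left, top, right, bottom) whose sides are half
    the shorter image dimension."""
    side = min(width, height) // 2
    return [(x, y, x + side, y + side)
            for y in spans(height, side)
            for x in spans(width, side)]

# A hypothetical 3000x2000 landscape shot (6 megapixels) yields six
# 1000x1000 tiles, each 1.0 megapixels.
boxes = tile_boxes(3000, 2000)
print(len(boxes))   # 6
print(boxes[0])     # (0, 0, 1000, 1000)
```

With an image library such as Pillow installed, each box could then be saved out with something like `img.crop(box).save(...)`, since `Image.crop` takes exactly this (left, top, right, bottom) tuple.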
Finally, I decided simply to try making a synth with only the pieces of the originals, leaving the whole original images out of the picture entirely. True, some tiles, being nothing but out-of-focus background, would have nothing to match, and the pieces of each original image would not fit together perfectly, but this experiment was really more about the point cloud anyway.
The resulting point cloud is, in fact, far denser than the one calculated from the original images alone, just as I had long ago predicted, though I wish I had a way to align the pieces more precisely. The Photosynth team plans eventually to let us resynth our photos at full resolution, which will do away with all this nonsense of manually tiling photos just to get a denser point cloud, and which will leave the photos unmolested in the process. In the interim, though, this is one way for anyone who simply wants to generate a dense point cloud for meshing to do so.