During one of his presentations, Blaise Aguera y Arcas alluded to some work in the pipeline on generating higher density point clouds from synths.
From the way he described it, the synth process uses an algorithm that finds matching features between images without depending on any particular camera geometry, orientation, etc. The resulting point cloud is relatively sparse, but it contains enough information to tie the images in a synth into a cohesive geometry.
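To illustrate the kind of geometry-independent matching being described, here is a minimal numpy sketch of nearest-neighbour descriptor matching with a ratio test. This is my own toy example, not Photosynth's actual algorithm; the function name and the synthetic descriptors are made up for illustration.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching with a ratio test.

    desc_a, desc_b: (N, D) arrays of feature descriptors.
    Returns a list of (index_a, index_b) pairs whose best match is
    clearly better than the runner-up, so ambiguous features are dropped.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match beats the second-best by a margin.
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# Toy data: descriptors in b are a reversed copy of a, plus tiny noise.
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 8))
b = a[::-1] + rng.normal(scale=0.01, size=(5, 8))
print(match_descriptors(a, b))  # → [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
```

Because the descriptors themselves encode local appearance rather than position, a matcher like this works regardless of where the cameras were, which is why only a sparse cloud falls out of it.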
Once that geometry is known, however, he said it should be possible to take stereo pairs and apply a different algorithm that picks up common points in each pair and ties those back into the geometry as well. Judging from the example he used in his talk, this algorithm creates a much, MUCH higher density point cloud.
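For anyone curious what the second stage might look like, here is a brute-force sketch of dense stereo block matching on a rectified pair. Again, this is just my own illustration of the general technique (SSD block matching), not whatever the Photosynth team is actually building; the function name and toy images are hypothetical.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, block=3):
    """Dense disparity via SSD block matching on a rectified stereo pair.

    left, right: 2-D float arrays of the same shape, rectified so that
    corresponding points lie on the same scanline. Returns a per-pixel
    integer disparity map. Unoptimised -- just to show the idea.
    """
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_cost = 0, np.inf
            # Slide the candidate window leftward in the right image.
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.sum((patch - cand) ** 2)
                if cost < best_cost:
                    best, best_cost = d, cost
            disp[y, x] = best
    return disp

# Toy pair: the right image is the left shifted 2 px, so the true
# disparity is 2 wherever it is observable.
rng = np.random.default_rng(1)
left = rng.random((12, 16))
right = np.roll(left, -2, axis=1)
d = block_match_disparity(left, right, max_disp=4)
print(d[6, 8])  # → 2
```

The key point is that once the camera geometry is known, every pixel can be searched along a single scanline, which is why the resulting cloud is dense rather than one point per matched feature.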
Given the work that sir_ivar has been posting, namely using Photosynth as a photogrammetric tool, there is a TON of appeal to having higher density, potentially lower noise point clouds to work with.
I realize that photogrammetry was never the thrust behind Photosynth, but the fact that Blaise Aguera y Arcas went so far as to mention, and then demonstrate these higher density point clouds at least indicates there's some interest in it from the standpoint of development.
Sooo... To make a long story long:
Any idea when we're likely to see this new feature in Photosynth?
And if the developers need image sets to test with, please, please let me know. I've got a set of several thousand pictures around a single site that they are more than welcome to use. I'll gladly CC-license the lot of them.
I second that. If you guys want a few die-hard users to alpha or beta test any new dense-reconstruction synthers, viewers, or models, you know who to call. I know that for research purposes you have free rein over which synths to test new reconstruction techniques on, but it would be most welcome for those of us who are interested to get an inside look.
Tom mentions the hope of reduced noise in the point cloud. Is it the case that the point cloud generated by the stereo vision techniques would completely replace the existing point cloud or simply merge with it?
Also, do you now use the full resolution photos when operating the stereo vision algorithms or still use a reduced resolution like the current synther does when detecting image features?
I don't have many sets with multiple thousands of photos, but I do have several with over 1,000, and a few that should make fantastic test subjects even though the original set is only a few hundred shots.
For those interested in this topic, also see: