Forum : New Feature Suggestions/Requests

Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.


Topic: How do you make a point cloud with as much resolution as a picture?

Joneyes (Over 1 year ago)
Is it possible to make a complete point cloud? If so, how?
Nathanael (Over 1 year ago)
I suppose the question is: how close do you want to get to the point cloud, and how big is the object? Because each point in the point cloud is always rendered as a single one of your physical monitor's pixels, no matter what distance you view it from, it will always be possible to zoom in until you can see between the points. The short answer is that taking tons of close-ups that are in really sharp focus with good lighting should do the trick.
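If it helps to put rough numbers on that, here's a quick back-of-the-envelope sketch in Python (the object size, point count, and screen width are all made-up values, purely for illustration):

    # Back-of-envelope: when does a point cloud stop looking "solid"?
    # All of the numbers below are hypothetical.
    import math

    object_width_m  = 1.0      # suppose the object is 1 m across
    num_points      = 100_000  # a fairly generous point cloud
    screen_width_px = 1000     # pixels across which we render it

    # Average spacing between neighbouring points on the surface.
    spacing_m = object_width_m / math.sqrt(num_points)  # ~3.2 mm

    # With the whole object filling the screen, one pixel covers:
    pixel_m = object_width_m / screen_width_px          # 1 mm

    # Zoom in and watch the gaps between points open up.
    for zoom in (1, 4, 16):
        gap_px = spacing_m * zoom / pixel_m
        print(f"zoom {zoom:>2}x: ~{gap_px:.0f} px between points")

However densely the surface is covered, some zoom level will always spread neighbouring points more than a pixel apart.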

More helpfully, though, research is being done to improve the automatic reconstruction. The type of output produced by the structure-from-motion techniques that Photosynth employs is referred to as a sparse reconstruction. Dense point clouds are certainly possible to generate, but work is still being done to make the process require fewer resources so that more people can actually use it.
Nathanael (Over 1 year ago)
Here's a link to some higher quality reconstructions that can be done with Photosynths: 
http://photosynth.ning.com/video/augmented-reality-event-2010
Read the comments on the video's page for more places where you can get a better look at what's being shown.

Additionally, there is always computer vision research being done at universities and industrial research labs around the world. Here are some links to research on dense reconstruction from the University of Washington, where Photosynth's predecessor, Photo Tourism, was born.
Nathanael (Over 1 year ago)
First off, Noah Snavely, the Ph.D. student who came up with the Photo Tourism project, has made his equivalent of the synther, named Bundler, available, and its output is very much like Photosynth's (or vice versa), minus the streaming zoomable photos.

Bundler homepage: http://phototour.cs.washington.edu/bundler/
Bundler manual: http://phototour.cs.washington.edu/bundler/bundler-v0.4-manual.html
Bundler for Windows: http://francemapping.free.fr/Portfolio/Prog3D/BUNDLER.html

I should warn you that Bundler doesn't have any menus. If you're going to use it, you'll need to use the command prompt to interact with it, but the instructions on how to operate it should be the same, regardless of which version you decide to install.
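If you want to poke at Bundler's output programmatically, the bundle.out file format is documented in the manual linked above. Here is a minimal Python sketch of a reader, based on my reading of that manual (v0.3 format; no error handling, and the per-point view lists are skipped):

    # Minimal reader for Bundler's bundle.out (format v0.3).
    # A sketch only; assumes a well-formed file.
    def read_bundle(path):
        with open(path) as f:
            f.readline()  # "# Bundle file v0.3" header comment
            num_cams, num_pts = map(int, f.readline().split())

            cameras = []
            for _ in range(num_cams):
                focal, k1, k2 = map(float, f.readline().split())
                R = [list(map(float, f.readline().split())) for _ in range(3)]
                t = list(map(float, f.readline().split()))
                cameras.append({"f": focal, "k": (k1, k2), "R": R, "t": t})

            points = []
            for _ in range(num_pts):
                xyz = list(map(float, f.readline().split()))
                rgb = list(map(int, f.readline().split()))
                f.readline()  # view list: which cameras saw this point
                points.append((xyz, rgb))

        return cameras, points

    cams, pts = read_bundle("bundle/bundle.out")
    print(len(cams), "cameras,", len(pts), "points")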
Nathanael (Over 1 year ago)
What came afterward was the work of Michael Goesele on Multi-view Stereo, as detailed here: http://grail.cs.washington.edu/projects/mvscpc/

This was followed by the work of Yasutaka Furukawa on oriented patches, as displayed here:
http://grail.cs.washington.edu/rome/dense.html 

and later advanced to this level: http://www.youtube.com/watch?v=ofHFOr2nRxU . Dr. Furukawa also made his software, PMVS, available.

PMVS2 Homepage: http://grail.cs.washington.edu/software/pmvs/
PMVS2 for Windows: http://francemapping.free.fr/Portfolio/Prog3D/PMVS2.html
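For the curious, PMVS2 is command-line driven too: you point it at a working directory (containing visualize/, txt/, and models/ subdirectories) plus the name of an option file inside it. A rough sketch of driving it from Python; the "pmvs/" and "option-0000" names are hypothetical, being whatever your Bundler-to-PMVS conversion step produced:

    # Running PMVS2 via subprocess; paths here are hypothetical
    # names from an earlier conversion step.
    import subprocess

    subprocess.run(["pmvs2", "pmvs/", "option-0000"], check=True)

    # The dense point cloud lands in pmvs/models/option-0000.ply,
    # which you can open in MeshLab.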
Nathanael (Over 1 year ago)
PMVS needs a lot of memory to run, however, so making large dense reconstructions was as good as impossible. To make larger reconstructions, Dr. Furukawa created CMVS, which splits Bundler's output into smaller clusters of cameras and lets PMVS2 work on each cluster individually before the results are all put back together. He also made CMVS available:

CMVS Homepage: http://grail.cs.washington.edu/software/cmvs/
CMVS for Windows: http://francemapping.free.fr/Portfolio/Prog3D/CMVS.html

(CMVS includes PMVS2)
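To give a feel for how the pieces chain together, here is a rough sketch of the whole CMVS pipeline driven from Python. The tool names follow the CMVS documentation, but treat the specific argument values (maximum images per cluster, CPU count) as assumptions to adjust for your own machine:

    # Sketch of the CMVS -> genOption -> PMVS2 pipeline.
    # Argument values are illustrative, not prescriptive.
    import glob
    import os
    import subprocess

    work = "pmvs/"  # directory prepared from Bundler's output

    # 1. Cluster the cameras so each chunk fits in memory
    #    (here: at most 50 images per cluster, using 4 CPUs).
    subprocess.run(["cmvs", work, "50", "4"], check=True)

    # 2. Write one PMVS2 option file per cluster.
    subprocess.run(["genOption", work], check=True)

    # 3. Run PMVS2 on each cluster; view all the resulting
    #    models/*.ply files together to see the merged cloud.
    for opt in sorted(glob.glob(os.path.join(work, "option-*"))):
        subprocess.run(["pmvs2", work, os.path.basename(opt)], check=True)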

Many of us Photosynth users who, like you, had seen this work wished that we could use PMVS2 with Photosynth output instead of Bundler's, but for a while no one had quite figured out how to convert Photosynth output to Bundler's format so that it would work with PMVS2. (See this discussion: http://synthexport.codeplex.com/Thread/View.aspx?ThreadId=204015)
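To make the conversion problem concrete: whatever tool does the converting has to write the synth's cameras and points back out in Bundler's text format, roughly the mirror image of the reader sketch earlier in this thread. The cameras and points structures below are hypothetical stand-ins for whatever you manage to extract from a synth:

    # Sketch of writing a minimal bundle.out for PMVS2's tooling.
    # `cameras`: list of {"f": float, "k": (k1, k2), "R": 3x3, "t": [x, y, z]}
    # `points`:  list of (xyz, rgb, views), where views is a list of
    #            (camera_index, keypoint_index, x, y) tuples.
    def write_bundle(path, cameras, points):
        with open(path, "w") as f:
            f.write("# Bundle file v0.3\n")
            f.write(f"{len(cameras)} {len(points)}\n")
            for cam in cameras:
                f.write(f"{cam['f']} {cam['k'][0]} {cam['k'][1]}\n")
                for row in cam["R"]:
                    f.write(" ".join(str(v) for v in row) + "\n")
                f.write(" ".join(str(v) for v in cam["t"]) + "\n")
            for xyz, rgb, views in points:
                f.write(" ".join(str(v) for v in xyz) + "\n")
                f.write(" ".join(str(v) for v in rgb) + "\n")
                f.write(str(len(views)) + " " +
                        " ".join(f"{c} {k} {x} {y}"
                                 for c, k, x, y in views) + "\n")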
Nathanael (Over 1 year ago)
Finally, though, about a week and a half ago, a French fellow named Henri Astre figured out how to connect Photosynths to PMVS2. The only problem now is that, from what Henri says, Photosynth doesn't include some of the information needed to use its output directly with CMVS, so we can't yet break Photosynth output into small enough clusters of cameras to do really high quality reconstruction with PMVS2.

If you'd like to try it with lower resolution copies of your photos or with smaller sets of photos, Henri's Photosynth Toolkit is available on his blog: 
http://www.visual-experiments.com/2010/08/19/my-photosynth-toolkit/
http://www.visual-experiments.com/2010/08/22/dense-point-cloud-created-with-photosyth-and-pmvs2/
Nathanael (Over 1 year ago)
Even though computer vision is an industry that is heating up these days, the sorts of techniques that Photosynth employs still only give you a dense reconstruction of whatever you already saw in your sparse reconstruction, as far as I have been able to observe.

Even in the video for 'Towards Internet-scale Multi-view Stereo', you could see that where there had been large gaps in the sparse point cloud, there were still noticeable gaps in the dense reconstruction.
Nathanael (Over 1 year ago)
There has also been some work done this year by some familiar names: Michael Goesele, Drew Steedly ( http://photosynth.net/userprofilepage.aspx?user=Drew ), and Rick Szeliski ( http://photosynth.net/userprofilepage.aspx?user=Rick ) among them. Their paper (and the accompanying video) is about how to disguise the parts of the photos that couldn't be reconstructed when moving between them. So as for when we'll see a model that is as continuous and as high resolution as a photo, the answer is, I think, "When you make a point cloud that is continuous from one side to the other, whose sparse reconstruction itself is fairly solid."

That said, I was impressed with the point cloud on the tree trunk in your recent synth: http://photosynth.net/view.aspx?cid=8e3afe8c-b6c8-4882-91f7-b4c8a1c1e6f7 . I think that the dense reconstruction of the trunk would probably look quite solid. I would expect the landscape to still thin out as you move away from the tree, though.
Joneyes (Over 1 year ago)
That's very interesting; some of that I found from one of your comments. If you know of a tutorial on how to use Bundler and CMVS, I would really like to try that software. I tried to use ARC3D, but there's the limitation of not being able to control any aspect of the process, and I think they are slightly different, though I could be wrong. What led me to ask this was firstly my curiosity about methods of capturing a scene in 3D so that it would accurately depict each view as if seen in person, like the work being done at Holografika. I once saw a demo video they took using a type of 3D camera, not stereographic but shooting from many viewpoints; it wasn't perfect, and they didn't say much about the camera, or whether they just shot it as a video. I wonder, because I have seen a holographic video capture setup made from many video cameras lined up in rows along a slight curvature: almost the camera positions you would use to make a Photosynth point cloud around the front of an object.
Joneyes (Over 1 year ago)
I wonder about the potential of the information gathered when taking photos and/or videos from different angles and viewpoints, potential that is not yet uncovered; perhaps there's more information between and within the photos that is not yet unlocked or completely understood. I would one day like to use a true 3D camera that could reconstruct a true digital holographic snapshot of a scene, one that could be displayed as a still or video on a holographic viewing device like those from Holografika, though hopefully by then the full 180 degrees of viewing would be realized, with 180 degrees of viewing in the vertical as well. I have a guess this might be where digital viewing is going, hopefully: photo and video, to holographic stills, to holographic motion.
Nathanael (Over 1 year ago)
joneyes, 

Josh Harle created a video tutorial, PDF, and some extra files that help streamline the process of using Bundler, CMVS, and PMVS2.

Check out his blog post here: http://blog.neonascent.net/archives/bundler-photogrammetry-package/

He also did a video on using Henri's Photosynth Toolkit: http://blog.neonascent.net/archives/photosynth-toolkit/
Joneyes (Over 1 year ago)
This actually helps a lot :) Thank you, Nathanael. It explains things in good detail; I have some direction now.
Nathanael (Over 1 year ago)
@joneyes,

Henri Astre has also released a Structure From Motion Toolkit of his own. You can download the current version from: 
http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/

Also, a couple of sites that I found since we last talked: 

1) http://hypr3d.com (It's a bit like a hybrid between ARC3D and PhotoCity.)
(Check out the comments for a downloadable version of my hydrant to view in Meshlab: http://www.hypr3d.com/2185/obligatory-hydrant-synth )
Then compare it to Photosynth's sparse reconstructions: 
http://photosynth.net/d3d/photosynth.aspx?cid=df4722d5-e640-42c3-b8bb-4338ac62acdb
http://photosynth.net/d3d/photosynth.aspx?cid=dee30328-b7fd-47cf-b9e6-491ffe4a3cc0

2) http://bit.ly/oi3dcloudcasterlite (Online Interactive has built a viewer for dense reconstructions from PMVS2 in Adobe Flash.)
Nathanael (Over 1 year ago)
I also asked Kathleen Tuite from PhotoCity whether I could download PhotoCity's dense reconstruction of my hydrant, since PhotoCity seems to use higher resolution copies of the photos than Hypr3D does at this point, and she kindly agreed.

http://twitter.com/#!/kaflurbaleen/status/2222621135474688

Again, you can use Meshlab to view the model. http://meshlab.sourceforge.net/
Nathanael (Over 1 year ago)
If you're interested in getting leads on some alternate reconstruction techniques, here is a video ( http://videolectures.net/cvpr2010_spotlights5/ ), which Henri Astre showed me, in which many researchers give short summaries of their techniques. Their names, the organizations where they did their work, and some of their slides are also available, so it should be relatively easy to find out more about their work.

I found the work on matching not only texture patches but also edges found in the images very interesting, and I think you'll really appreciate Shubao Liu's presentation of Ray Markov Random Fields for Image-Based 3D Modeling.
http://www.lems.brown.edu/~sbliu/projects/iray/iray.html
Paul1917 (Over 1 year ago)
Nathanael,

I've just come across another similar service, www.my3dscanner.com, to add to your incredible collection.