Forum: Photosynth Lounge

Photogrammetry, oblique image stitching, photos of pets dressed in clothes… this is a place to chat and share stories with your fellow Photosynthers. Not every topic has to be about photography; this is a place to relax and chat about whatever you fancy.


Topic: How do you improve point cloud quality?

Pierre_P (Over 1 year ago)
Hello dear Photosynth users,

Hello Nathanael! (I figure you answer two out of three posts on this forum!)

This message follows a previous one titled “Points cloud from videos ?”, where I explained that I needed to create a point cloud as detailed as possible (so as to rebuild the 3D shape of an object), and that I wanted to try uploading a video. Instinctively, I thought that providing as much information as possible (= as many images as possible) would greatly help the synther rebuild the object. Nathanael let me know that he was concerned about image resolution and overall quality (“blurry”), since the images are extracted from a video. I told him it was OK (I have a full HD camera), but I might have been wrong... He pointed me to a number of programs for extracting image sequences from videos; I used QuickTime Pro and it works great.
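
For reference, the frame extraction can also be scripted. Here is a minimal sketch using OpenCV in Python (the file name and sampling rate are placeholders I made up, not anything specific to my setup):

    # Minimal sketch: save every Nth frame of a video as a JPG.
    # Assumes Python with opencv-python installed (pip install opencv-python).
    import cv2

    VIDEO_PATH = "my_object.mp4"  # hypothetical input file
    EVERY_N = 10                  # keep 1 frame out of every 10

    cap = cv2.VideoCapture(VIDEO_PATH)
    saved = frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:                # end of video (or read error)
            break
        if frame_idx % EVERY_N == 0:
            cv2.imwrite("frame_%05d.jpg" % saved, frame)
            saved += 1
        frame_idx += 1
    cap.release()
    print("saved %d frames" % saved)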
Pierre_P (Over 1 year ago)
The results of my tests are a bit disappointing; let’s say that uploading 50, 100, or 300 images extracted from a video doesn’t provide obvious improvements compared to, for example, 25 pictures. I have not been able to upload e.g. 1500 images yet because of memory issues, but I will try soon; I do not expect much from it. What I observed in the images is that the resolution seems OK (to my eye…) but they might indeed be a bit blurry (I will take special care with this in future videos). I also noted that the way you move the camera around the object does make a difference. For instance, if you just slowly circle it (360°, strategy 1) at a stable vertical angle, the synther does a better job than if you move around it quarter by quarter (90°, strategy 2), stopping each time and sweeping the camera through 180 vertical degrees (i.e. in order to capture everything from the ground to the sky, which might otherwise be out of the camera's reach when you hold it level).
Pierre_P (Over 1 year ago)
The first strategy provided “acceptable” results with 113 to 315 images (i.e. the point cloud is not detailed enough, and I cannot even see obvious improvements in cloud density from 113 to 315 images, but the shape of the object is respected: the synther did its job, and each point of the cloud is indeed a point common to 3 or more images, a true XYZ point that I can use later in my work). I eventually tried with 524 images (god, it took a long time to process with my 1 GB of RAM!), and interestingly, the point cloud was really worse (yet the “synthy” indicator was better, 91%). The cloud density was more or less similar, but there were 2 objects in the cloud instead of one, some kind of “echo” of the real object right next to it, less “dense”… I suppose the triangulation went wrong somewhere.
Pierre_P (Over 1 year ago)
The second strategy provided horrible results. The point clouds are not dense at all, and the synther cannot assemble the images correctly: you have one part of the object here, another there, it’s a mess (and it displays 99% synthy; the synther has a good sense of humor).

I have been thinking about this whole thing ever since. There is a problem of (1) assembly (the images need to be put together correctly, in order for the point cloud to be a true representation of reality / of the object) and (2) point density, so that I can recognize the object or exploit the information the cloud carries.
Pierre_P (Over 1 year ago)
I have been browsing the forum for information, and there is some. I was really excited when I found Jimcseke's topic “Yes there is a way to vastly improve your synth”, then I read it… Jim's idea is not bad, but apparently it will not fix everything (or anything at all?). I also read your “Point cloud construction theory”, Nathanael; very interesting and informative, thank you. It made me think of a strategy to fix the first issue I talked about (problem 1): I could place “recognizable” objects/patterns/things around the target object (i.e. within the scene), so the synther could rely on them (recognize them as positive feature matches) to correctly reconstruct the scene (the benefit being that the points associated with the target object I’m interested in would, in theory, have a correct depth).
Pierre_P (Over 1 year ago)
Nathanael, you talk about “predetermined patterns” in your theory; is there any more info on these patterns? I mean, are there specific shapes or patterns the synther is trained to recognize, and therefore some “specific” objects that would particularly fit those patterns… and help the synther as much as possible?

Secondly, concerning the second problem (cloud density), I could not find much info apart from the “cropping” strategy. I did not read anything about image pre-processing. Nathanael, you advised avoiding pre-processing such as rotations as much as possible… What about color enhancement? Or using panchromatic images? Does the synther use one spectral band or more? And what about image formats? I read that the synther converts images to some Windows-related format prior to processing them, but the properties of the original format might play a role…? Or not.
Pierre_P (Over 1 year ago)
Christmas trees also gave me an idea… Wouldn’t small lights (“fairy lights”, “tinsel”? not sure of the translation) be easily recognizable by the synther (very particular pixel values… or even lasers)? Or, maybe, the pattern they form (if the synther recognizes patterns rather than pixel values…)? I could put a couple of strings of lights on my target object (say a tree); it could help with respect to problem (1), and maybe problem (2) (although I hope the synther will not add just ONE extra point per light, because I would need a lot of lights for the cloud to be dense enough…).
Pierre_P (Over 1 year ago)
I am not familiar with the general topic and have not been able to read everything on it so far; I’m sorry if my questions have been answered somewhere else (I see there is a HUGE amount of info on 3D reconstruction on the internet…). Am I missing something obvious? Does anyone have ideas / advice of any kind on how to densify the point cloud associated with an object?

Thank you for reading… and sorry for my English…

Pierre
Nathanael (Over 1 year ago)
Hello again, Pierre!

I apologize for taking so long to reply!

Taking photos for 3D reconstruction certainly takes some getting used to. 

Photosynth's main purpose is to show the actual source photos in spatial context; its point cloud (referred to in the computer vision industry as a sparse reconstruction) is a secondary focus.

There are other photogrammetry solutions available on the internet. Visit Olafur Haraldsson's Photogrammetry Forum for a nice list with links of services and utilities that are aimed at dense reconstruction. http://www.pgrammetry.com/forum/

Photosynth is well known for being one of the free, publicly available utilities which can handle close to 2000 photos well (granted, that will take some time to compute). To put the most photos together, you'll want a computer with a 64-bit version of Windows and more than 4GB of RAM. Photosynth is a 32-bit application and so can only use 4GB itself, but you'll want some memory left over for Windows, etc. as well.
Nathanael (Over 1 year ago)
I don't know if you are able to understand spoken English, but if so there are some videos of Photosynth team members talking about how it works that would probably help you understand what it is doing and what sort of image features Photosynth is able to recognize. Noah Snavely, Drew Steedly, and Blaise Agüera y Arcas are usually fairly informative to listen to.

http://channel9.msdn.com/blogs/nicfill/shutterspeed-ep04-the-photosynth-team
http://video-jsoe.ucsd.edu/asx/AguerayArcas.asx

Some time ago I made a list of Photosynth-related videos here if you want more: http://docs.com/XFO

Here's the wiki on David Lowe's SIFT, which Photo Tourism and Photosynth owe much to: http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
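
To make that concrete, here is a minimal sketch of SIFT keypoint detection using OpenCV in Python. Photosynth's own detector isn't public, so treat SIFT as the closest published analogue, and the file name below is just a placeholder:

    # Minimal sketch: detect SIFT keypoints in one photo.
    # Assumes opencv-python >= 4.4 (older versions keep SIFT in opencv-contrib-python).
    import cv2

    # Feature extraction works on greyscale intensities, not color.
    gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    print("found %d keypoints" % len(keypoints))

    # Draw the detected corners/blobs (with scale and orientation) to see
    # which parts of the image "light up".
    vis = cv2.drawKeypoints(cv2.imread("photo.jpg"), keypoints, None,
                            flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imwrite("keypoints.jpg", vis)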
Nathanael (Over 1 year ago)
To save you some time, though: as far as the "predetermined pattern" that I made reference to in my point cloud theory doc ( http://bit.ly/pointcloudtheory ) goes, Photosynth searches images for corners and blobs of color.

What this translates into is that, as long as your photos are in good crisp focus, any sort of speckled or spotted surface (concrete, brick, sand, tree bark, rust, wood grain, fruit skin, freckles, carpet, leaves, etc.) will light up like crazy in your point cloud (at least, as long as a small square of the image surrounding each blob is distinguishable from the area surrounding other blobs and therefore correctly matched between different images).

What determines how good your point cloud will be is whether Photosynth can match the images to each other, and this mainly depends on whether you are following Photosynth's Photography Guide ( http://photosynth.net/help.aspx#photosynthhelp ) about not changing perspective too much between shots.
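
If you want to see that matching criterion in action, here is a minimal sketch of descriptor matching between two photos using Lowe's ratio test (again OpenCV in Python as a stand-in for Photosynth's internal matcher; the file names are placeholders):

    # Minimal sketch: match SIFT descriptors between two photos and keep
    # only the distinctive matches via Lowe's ratio test.
    # Assumes opencv-python >= 4.4.
    import cv2

    sift = cv2.SIFT_create()
    img1 = cv2.imread("shot_a.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("shot_b.jpg", cv2.IMREAD_GRAYSCALE)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # For each descriptor in img1, find its 2 nearest neighbours in img2.
    matcher = cv2.BFMatcher()
    candidates = matcher.knnMatch(des1, des2, k=2)

    # Keep a match only if it is clearly better than the runner-up; this is
    # what "the square around one blob is distinguishable from the others"
    # amounts to in practice.
    good = []
    for pair in candidates:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    print("%d distinctive matches between the two photos" % len(good))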
Nathanael (Over 1 year ago)
There's nothing magical about a particular camera motion that Photosynth wants more than another. It is going to compare every photo in a synth to every other photo in that same synth. However, both the Photosynth Photography Guide and their beginners video tutorial ( http://bit.ly/howtosynth ) list (among other techniques) circling an object with photos like you describe above. 
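
(Incidentally, that all-pairs comparison is why big synths take so long to compute. A quick back-of-the-envelope count in Python:

    # Number of image pairs an all-pairs matcher must consider: n*(n-1)/2.
    for n in (25, 113, 315, 524, 2000):
        print("%5d photos -> %9d image pairs" % (n, n * (n - 1) // 2))
    # 524 photos is already ~137,000 pairs; 2000 photos is ~2,000,000.

So going from 113 to 524 photos multiplies the matching work by roughly twenty.)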

I will tell you plainly that this is my favorite technique. I have found it to be very reliable for me. If it were possible, I would shoot a full sphere of photos around each object that I care about. =) For more of my thinking along those lines, try reading http://bit.ly/sinkorsynth
Nathanael (Over 1 year ago)
Just for the record, I don't think that rotating an image will cause it to synth poorly. I mainly think that it is unnecessary and non-beneficial (aside from aesthetic purposes), and (in image editing more than synthing) I prefer to rotate an image as few times as possible. Obviously, if you've taken a photo with the camera sideways, rotating it right side up before uploading is desirable. https://getsatisfaction.com/livelabs/topics/why_are_my_photos_sometimes_sideways_in_photosynth

The only image enhancement that is specifically warned against in the Photosynth documentation is actually cropping, because the EXIF metadata retained in the image will no longer accurately describe the center of the image, and possibly the aspect ratio, depending on the crop. That said, Photosynth team members did list times when cropping is useful when asked. See this topic: https://getsatisfaction.com/livelabs/topics/don_t_crop_images
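
You can see the mismatch for yourself with a minimal sketch using Pillow in Python (the file names are placeholders; this just illustrates the stale-EXIF problem, not anything Photosynth-specific):

    # Minimal sketch: crop a JPG while carrying its EXIF over verbatim, then
    # compare the EXIF pixel dimensions to the real ones.
    # Assumes a recent Pillow (pip install Pillow) and a JPG that has EXIF.
    from PIL import Image

    img = Image.open("photo.jpg")
    exif_ifd = img.getexif().get_ifd(0x8769)  # the Exif sub-IFD
    print("EXIF claims:", exif_ifd.get(0xA002), "x", exif_ifd.get(0xA003))
    print("image is   :", img.size)

    cropped = img.crop((0, 0, img.width // 2, img.height // 2))
    cropped.save("cropped.jpg", exif=img.info.get("exif"))
    # cropped.jpg now carries EXIF describing the original dimensions (and,
    # implicitly, the original optical centre) while actually being half the size.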
Nathanael (Over 1 year ago)
Tony Ernst from the Photosynth team reports that Photosynth's image feature extraction and matching is done in greyscale: 
http://photosynth.net/discussion.aspx?cat=6b63cb81-8b57-4d5d-a978-41d5509bf59a&dis=87048c3c-6a66-4494-b867-5896be252c2a

Panchromatic images should be fine. 

The only image format that Photosynth takes as input is JPG.

As to your holiday lights idea, yes they can work quite well for synths.
Here's an example by Photosynth manager David Gedye: http://photosynth.net/d3d/photosynth.aspx?cid=c94a8cdf-bb10-405f-afa3-d57c56288e5c
More search results: http://photosynth.net/search.aspx?q=christmas%20lights&sortby=Best%20Synth#11
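
If you want to test the idea before decorating a whole tree, something like this quick blob-detection sketch (OpenCV in Python, my own improvised test rather than anything Photosynth uses) would show how strongly small bright lights stand out:

    # Curiosity sketch: count bright, light-sized blobs in a photo.
    # Assumes opencv-python; the file name is a placeholder.
    import cv2

    gray = cv2.imread("tree_with_lights.jpg", cv2.IMREAD_GRAYSCALE)

    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255        # look for bright blobs on a darker scene
    params.filterByArea = True
    params.minArea = 4            # ignore single-pixel noise
    params.maxArea = 400          # ignore large bright regions (sky, walls)

    detector = cv2.SimpleBlobDetector_create(params)
    blobs = detector.detect(gray)
    print("%d light-like blobs found" % len(blobs))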

I hope this helps.
If you have more questions please do ask. =)