Forum: New Feature Suggestions/Requests

Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.


Topic: The future of point clouds

Nathanael (Over 1 year ago)
So for some time I've been wondering about additional channels that could be added to point clouds.

If I understand correctly, we currently have 16-bit depth for the red, green, and blue channels.

I realise that the calculations used thus far in scene reconstruction are doing well just to arrange everything properly. Looking forward, though, surely we will see other techniques that could recognise a wall of coloured transparent glass on the side of an office, recognise features on the other side of it, and so gain an understanding of the transparency of the glass wall, given its structure and the structure of the objects on both sides of it.

Similarly interesting would be having enough photography at the proper exposures to understand the structure of light sources and their surroundings. If a light source is identified as such, its luminosity could be calculated, and possibly also the specularity of different objects within the scene, given that their structure is known as well.
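For the exposure side of that, the standard trick is merging the same view shot at several shutter speeds into a relative radiance map, so that bright light sources keep meaningful values instead of just clipping to white. A minimal sketch of the idea (Python with NumPy, purely illustrative; `merge_exposures` is a name of my own invention, it assumes a linear sensor response, and it skips the usual response-curve recovery):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge multiple exposures of the same view into a relative radiance map.

    images: list of float arrays in [0, 1], same shape, linear response assumed.
    exposure_times: matching shutter times in seconds.
    Well-exposed pixels are weighted most; clipped pixels contribute least,
    so bright light sources keep meaningful values instead of saturating.
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: peaks at mid-grey
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-6)  # relative radiance per pixel
```

With per-pixel radiance instead of clipped pixel values, a point identified as a light source could carry an actual luminosity estimate rather than just "white".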
Nathanael (Over 1 year ago)
So, a few questions are as follows:

1: Couldn't (and shouldn't) these properties be applied to the point clouds?
2: What would this involve? Simply adding new channels to the points in synths where the above-mentioned calculations have been performed? (See the sketch after these questions.)
3: If/when the above is achieved, how would you handle older point clouds? Would you perform a reanalysis of exceptional point clouds up on the servers, or simply ask people to resynth and discard their original copies, which are tied to the original synth dates and comments?
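To make question 2 concrete, here's a rough sketch of what an extended point record might look like (Python, purely illustrative: the extra field names, value ranges, and the use of optional values are my assumptions, not anything from the actual synth format; only the 16-bit RGB part is known):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SynthPoint:
    # Existing data: position plus 16 bits each of red, green, blue.
    x: float
    y: float
    z: float
    r: int  # 0..65535
    g: int  # 0..65535
    b: int  # 0..65535

    # Hypothetical additional channels discussed above. Defaulting them
    # to None means "not yet computed", so older point clouds would stay
    # valid and could be filled in by a later reanalysis pass.
    transparency: Optional[float] = None  # 0.0 opaque .. 1.0 fully clear
    luminosity: Optional[float] = None    # emitted light, if a light source
    specularity: Optional[float] = None   # 0.0 matte .. 1.0 mirror-like
```

Making the new channels optional like this would also bear on question 3: older point clouds could simply carry no value in those channels until a reanalysis fills them in, with no forced resynth.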
Nathanael (Over 1 year ago)
4: Presumably some of the above gets easier when you have other data to feed off of, such as correct orientation and scale at a given point on Earth. Since we now have geo-alignment in our hands, can you begin to solve for the effects of the sun in cohesive synths (shadows, reflections, etc.)? You now have structural data, plus coordinates and orientation for a specific place on Earth, plus the date and time, meaning the position of the sun relative to the structure in the synth is known. That could even extend to locating the sun where it is pictured in the sky of a synth, to finely calibrate cases where a camera's EXIF date or time metadata is not set correctly.
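For what it's worth, the sun's direction really is computable from exactly that data. Here's a rough sketch (Python; the function name and argument layout are just for illustration) using the standard low-precision solar position formulas, which are accurate to a small fraction of a degree, plenty for reasoning about shadows or sanity-checking a camera's clock:

```python
import math
from datetime import datetime, timezone

def sun_azimuth_elevation(lat_deg, lon_deg, when_utc):
    """Approximate solar azimuth/elevation (degrees) for a geo-aligned synth.

    lat_deg, lon_deg: latitude/longitude in degrees (east positive).
    when_utc: timezone-aware datetime in UTC (e.g. from photo EXIF).
    """
    # Days since the J2000.0 epoch (2000-01-01 12:00 UTC).
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    n = (when_utc - j2000).total_seconds() / 86400.0

    # Sun's ecliptic longitude from mean longitude and mean anomaly.
    L = math.radians((280.460 + 0.9856474 * n) % 360)
    g = math.radians((357.528 + 0.9856003 * n) % 360)
    lam = L + math.radians(1.915) * math.sin(g) + math.radians(0.020) * math.sin(2 * g)

    # Convert to right ascension / declination.
    eps = math.radians(23.439 - 4e-7 * n)  # obliquity of the ecliptic
    ra = math.atan2(math.cos(eps) * math.sin(lam), math.cos(lam))
    dec = math.asin(math.sin(eps) * math.sin(lam))

    # Local hour angle via Greenwich mean sidereal time.
    gmst_deg = (280.46061837 + 360.98564736629 * n) % 360
    H = math.radians((gmst_deg + lon_deg) % 360) - ra

    lat = math.radians(lat_deg)
    elev = math.asin(math.sin(lat) * math.sin(dec)
                     + math.cos(lat) * math.cos(dec) * math.cos(H))
    az = math.atan2(-math.cos(dec) * math.sin(H),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(H))
    # Azimuth measured clockwise from north.
    return math.degrees(az) % 360, math.degrees(elev)
```

Comparing the predicted azimuth and elevation with where the sun actually appears in a geo-aligned synth's sky would be exactly the clock-calibration check described above: a consistent offset points to a mis-set EXIF clock.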
Nathanael (Over 1 year ago)
Tangentially, in light of all of the above, shouldn't the Photosynth Photography Guide encourage all photographers to calibrate their cameras' date and time against one reliable source easily found on the internet?
Nathanael (Over 1 year ago)
Another tangent:

How well do near-infrared and ultraviolet data synth? Would they possibly provide matches where the visible spectrum does not? What other interesting calculations could you do with an expanded spectrum in the photos and in the point cloud?
tbenedict (Over 1 year ago)
HEYYYY!  I don't have the ability to do UV, but I can do IR.  I've never tried to synth an IR set, though.  I won't get much of a chance to try this until the weekend, but I'll happily test it and see what I get.  (From the ground, not from the air!)

Tom
Nathanael (Over 1 year ago)
I should have posted this back when I made the first post, but some of what makes me confident about identifying, interpreting, and utilizing specularities is this older video from Pravin Bhat and company (then of UW, now of Weta Digital).

"Using Photographs to Enhance Videos of a Static Scene" on vimeo: http://vimeo.com/1513129
Keep an eye on the windows behind the bust and especially on the picture frames late in the video.

Although only remotely related, I was thinking through all of this again after discovering Yasutaka Furukawa's videos of oriented patches in place of point clouds.

"Towards Internet-scale Multi-view Stereo" on YouTube:
http://www.cs.washington.edu/homes/furukawa/

The videos clearly demonstrate that the naive results of Structure from Motion, although a fantastic first step, still yield a simplistic reconstruction that is ignorant of luminosity, specularity, and translucency.
Nathanael (Over 1 year ago)
I look forward to reflections being traceable back to the actual objects they mirror, in synths where all of that coverage is intentionally shot.

A fully opaque world is a visually stunted world. Transparency, light sources, and recognition of reflections are going to go a long way in making sense of the world and really drawing people in.

I appreciate that these are all incredibly complicated problems, but it would be helpful to know that some provision is being made, so that when the analysis necessary to glean all of the above information becomes possible, an entirely different format does not have to be created.