Topic: development NEWS!

harald182 (Over 1 year ago)
In case you are interested, here's the page of the University of Washington team working on Photo Tourism (the backbone technology of Photosynth). There you can find very interesting information about their scientific work. Have fun.
Nathanael (Over 1 year ago)
Interesting. I hadn't read the 'Building Rome in a Day' paper yet. Thanks!
Nathanael (Over 1 year ago)
I'm not sure how long it's been up, but Yasutaka Furukawa, who has worked with the University of Washington's Steve Seitz and Brian Curless and with Rick Szeliski of Microsoft Research (see Dense Reconstruction), has fairly recently uploaded videos of that same oriented-patches approach to reconstruction, entitled "Towards Internet-scale Multi-view Stereo".

You can see three YouTube videos on his page here:

Also note that he now works for Google. This ought to be an interesting year for computer vision.
Nathanael (Over 1 year ago)
I believe what's being shown is basically this: the image features identified in the input photos (which are then tracked across the different photos to solve for the positions of the cameras and of the features themselves), each of which is normally represented by a single point in a typical point cloud, are now being presented as oriented patches rather than as one point per feature.
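To make that "solve for the positions of the cameras and the features" step concrete, here's a minimal sketch of triangulating one tracked feature once the camera matrices are known, using the standard linear (DLT) method. This is an illustrative toy, not code from Photo Tourism or Photosynth; the camera setup and function names are my own.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D observations in two cameras.

    P1, P2: 3x4 camera projection matrices; x1, x2: matched 2D pixel
    coordinates of the same feature. Builds the standard DLT system
    and takes the least-squares solution via SVD.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean

def project(P, X):
    """Project a 3D point into a camera's image plane."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true))  # True: noise-free matches triangulate exactly
```

In the real pipeline this is done jointly for thousands of features and unknown cameras (bundle adjustment); each triangulated point is what shows up as one dot in the point cloud.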

If anyone can correct me or clarify this, by all means do. 

I have questions as to whether some new composite texture is generated for each patch (since every point is the estimated position of a feature identified in multiple images, the lighting and exposure on that feature will differ from image to image), or whether the highest-resolution copy of that feature is simply selected for use in the model.

Also, note that places where your point clouds have holes are also places where there will be no patches.
swami_worldtraveler (Over 1 year ago)
Cool. Tnx, Nat.