Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.
Just from panning around the synths I've done, it looks like the point cloud for mapping the images does a darned good job of following the 3D contours of the scene.
I like the Photosynth interface, and don't want to suggest any features that break away from that, but are there any plans afoot for having Photosynth try to generate a 3D model of a scene, using the images as texture maps, and the point cloud to generate a 3D polygonal model?
I understand some synths really wouldn't lend themselves to that sort of treatment, but many of them would. And the resulting 3D model would be pretty interesting to look at.
Generating full 3D content from Photosynths has always been a personal goal of mine. There are people working on doing exactly this, but currently this technology only really works at a "research" level. This means you basically need:
1. A really good Photosynth.
2. A bunch of hand tweaking of certain parameters for your "automatic" algorithm.
3. Maybe a few user-generated hints.
... and then you can get a relatively good 3D model. I would expect things to improve with time, but I don't think we're quite there yet.
Using the original images as textures is a nice idea, but I would prefer if the cloud were pumped up with more particles, so that there is no need for textures any more. By "more particles" I mean as many as possible, to create the most accurate result (comparable to what games achieve with polygons). Yes, this works :)
Ok, let me just stick my foot in my mouth a little further, and maybe tackle #3 and #2 in your list, Marvin:
If it was possible to supply GPS coordinates for some key points in the synth, would this help with the hints and hand tweaking of parameters?
Yeah, more or less what I'm getting at is being able to take a whole collection of ortho or near-ortho aerial images, maybe a set of obliques to complement them, and a set of ground control points, and doing orthorectification, georeferencing, and photogrammetry off the resulting mish-mash of data. I realize this is outside the scope of what Photosynth is about, but a lot of the work has already been done.
In my dream world, I'd love to be able to go out with a bucket of tennis balls, lay them out on a grid, grab the GPS coords of each ball, photograph the site from the air using heavily overlapped shots, and then toss the whole lot into a magic black box that spits out a 3D model. (I'd pick up the balls.)
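The "magic black box" step of tying a reconstruction to surveyed tennis-ball coordinates is, at its simplest, a similarity-transform fit between the two coordinate systems. Here's a minimal sketch (function names are mine, and it assumes the model is already axis-aligned; a real pipeline would also solve for rotation, e.g. with Horn's method):

```python
# Fit a uniform scale + translation that maps model-space control points
# onto their surveyed (GPS-derived) coordinates. This sketch deliberately
# skips the rotation component of a full similarity transform.

def fit_scale_translation(model_pts, survey_pts):
    n = len(model_pts)
    # centroids of both control-point sets
    cm = [sum(p[i] for p in model_pts) / n for i in range(3)]
    cs = [sum(p[i] for p in survey_pts) / n for i in range(3)]
    # scale: ratio of summed distances from each centroid
    dm = sum(sum((p[i] - cm[i]) ** 2 for i in range(3)) ** 0.5 for p in model_pts)
    ds = sum(sum((p[i] - cs[i]) ** 2 for i in range(3)) ** 0.5 for p in survey_pts)
    scale = ds / dm
    # translation that maps the scaled model centroid onto the survey centroid
    trans = [cs[i] - scale * cm[i] for i in range(3)]
    return scale, trans

def apply_transform(scale, trans, p):
    return [scale * p[i] + trans[i] for i in range(3)]
```

With that fit in hand, every point in the synth cloud can be pushed through `apply_transform` to land in real-world coordinates.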
tom, your dream world scenario reminds me of this Alaskan case study: http://www.terrageomatics.com/index.php?option=com_frontpage&Itemid=28
I really like the idea of having the option of mesh or points, but I suppose the algorithms need to produce relatively noise-free data, especially for the closer-range stuff. Currently you tend to get extraneous points on most synths, and without adequate filtering these could cause spikes in the mesh, enough to ruin some synths. Then you'd have to throw existing synths into the mix; would they need resynthing with those filters?
Heck, complicated business this, isn't it? But you guys are doing a terrific job :)
Like PGRic points out, the big issues with 3D reconstruction are noise and errors in the registration of the images. I think that adding GPS data might help a bit (by making Photosynth's job of finding out where the cameras were a bit easier), but I don't think that will improve things enough to get you a good 3D reconstruction.
That said, there *may* be a solution specific to your aerial synths (I've looked through them. Beautiful datasets, by the way), since the geometry is mainly 2D. It would require a series of manual steps (with lots of hand tweaking), and you'd probably need your own viewer (there are a bunch available online) to view it.
I don't have the time to do this myself, and my guess is that it would probably take on the order of days until we worked out the kinks and you got a feel for the process, but I'm willing to coach and help anyone who wants to try (you'll need experience using a 3D modeller like Maya/LightWave, and probably a bit of programming experience).
The steps are:
1. Extracting the pointcloud.
2. Initial 3D reconstruction.
3. Editing the reconstruction by hand (because you're sure to get noise; it's only a question of how bad).
4. Adding texture (by hand).
5. Viewing it, and making it available for viewing.
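Step 3 above (editing out noise) can be partly automated. A common trick is statistical outlier removal: drop points whose nearest-neighbour distance is far above the cloud's average. A minimal sketch (brute-force O(n²), so fine for an experiment, not for a million-point synth):

```python
# Drop points whose nearest-neighbour distance exceeds
# mean + std_factor * standard deviation of all such distances.

def remove_outliers(points, std_factor=2.0):
    def dist(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3)) ** 0.5

    # nearest-neighbour distance for each point (brute force)
    nn = []
    for i, p in enumerate(points):
        nn.append(min(dist(p, q) for j, q in enumerate(points) if j != i))

    mean = sum(nn) / len(nn)
    var = sum((d - mean) ** 2 for d in nn) / len(nn)
    cutoff = mean + std_factor * var ** 0.5
    return [p for p, d in zip(points, nn) if d <= cutoff]
```

For large clouds you'd swap the brute-force search for a k-d tree, and judge each point by its k nearest neighbours rather than just one.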
I should reiterate that I don't think you can get a great result with the Photosynth pointcloud unless:
1. The model is simple (like your 2D terrain).
2. The synth reconstruction is really good.
Hate to rain on your parade, but you may be barking up the wrong digital tree. That said, I'd love to be proven wrong. Plus, this discussion is for "wishes," after all! BRAINSTORMING is good :)
I've been involved in 3D, personally and professionally, for 16 years now. This either makes me qualified or too long in the forest to see the future trees...
Building 3D models from 2D images is a topic of ongoing research as well as real-world applications. This problem is best tackled with a special-purpose solution, which by its nature is not trivial. But one can dream, and try, of course :)
So, Photosynth is great, and I love it, but it just may not have a sufficient basis. Perhaps it could be modified/enhanced, but that's a BIG perhaps, requiring a BIG effort.
"Instant" 2D to 3D is a nice dream, and one worth pursuing, but I'm not convinced Photosynth is the vehicle.
Just as there is no "Make art" button, there may be no "Make 3D" button.
But without dreams nothing moves forward. So, DREAM ON!…
A comment about the point cloud and tessellation (i.e. mesh generation by connecting points into polygons to form a surface; search for "Voronoi diagram"). The resulting model must be usable! For games, that means minimal geometry and specially/optimally made texture maps. Feature work requires higher-resolution models, but these still must be limited to fit current technology constraints. So efficiency commensurate with the intended application is always important. The synth point cloud is a poor candidate to base model generation on: it's overly dense, noisy, and point placement is not particularly good. Textures are an altogether different, troublesome issue. Here again, synth data is not well suited. For one thing, there can be TOO MANY photos. How are the "right" ones to be chosen? Finally, any model would require considerable pre-processing (automatic and/or manual), and/or post-processing.
So, again, Photosynth just doesn't seem like a good foundation to build a 2D image to 3D model creator.
P.S. Some very impressive models have been made using Autodesk's "ImageModeler" program:
Here are a couple of illustrative videos:
HOW IT WAS DONE:
Another app is "Insight3D." I don't really know anything about it, other than that it exists and that it's open source. Find it here:
Swami, thanks for the links, I'll give Insight a go, not seen that one before. I use Topcon PI3000 and ImageMaster: http://www.topconpositioning.com/products/software/surveying/office-software/imagemaster-(previously-pi-3000).html
that should have been:
Can we have an edit feature in forum please, it's not playing ball ;-)
I've actually *built* some reconstructions from selected Photosynth pointclouds (albeit with very simple texture maps), and while I agree with you that in the general case using a real 3D scanner rig is probably simpler, I think that in the Aerial Photography cases (where a 3D scanner is impractical) you can actually get something pretty good - if you're willing to spend time fixing things by hand (smoothing, removing outliers, etc).
Marvin, I'd be interested to see the 3D models you've been able to build using Photosynth. 3D scanners certainly have their applications, but I wasn't even considering them in my discussion. Again, Photosynth is very cool, but it's not a 2D-image-to-3D-model generation program. This type of intense, special-purpose task requires a special-purpose program tailored to that task from the base up. Sure, you can get a 3D model out of Photosynth, but other than the fact that it's free and might be a fun challenge, I don't see any real reason to use it. Even aerial photos could be better used in a special-purpose 2D -> 3D app. I'm not trying to be a buzz-kill here; it's just my take on it. Of course, everybody's entitled to their own take. Live Labs, prove me wrong! Again, I would be interested to see what you've managed to do...
PGRic, thanks for the link to the Topcon PI3000.
P.S. Yeah, formatting can get a little screwed up sometimes, especially from my PDA. Not sure why your long URL got split; maybe the parentheses tricked it. Here's my attempt to keep it whole: composed in Notepad, then copied with Word Wrap off, then pasted. Let's see if it works...
Nope; it could very likely be the parentheses :( It would probably have to do with the underlying engine of the webboard app. Fixable, though; one line of code even, perhaps!
To Marvin: aerial 3D scanners already exist and are very common, just expensive; they start around $1 million.
I would like to attempt to use a photosynth point cloud to build a 3d model.
While ImageModeler, SynthEyes, PFTrack, and Boujou are all fantastic photo-modeling products, the automated ease of Photosynth is quite compelling.
Some of the users in this forum appear to be a bit negative on the idea, but I see many possibilities.
Sure, it's just a point cloud, but do you realize what you can do with a point cloud? You can automatically determine correct spatial data as a base for your model. Once that is done, using smart particles and volume simulators, that's an instant 3D object. This is a big deal for us in the CGI profession.
I would hope to see Microsoft at least give us the option of exporting the point cloud. Being such a professional company, and having given so many excellent products to the CG community, I'm sure they will see fit to provide us with the option.
Sony Pictures Entertainment
I've extracted a point cloud from one of my synths using instructions online. The synth process was dead easy (toss a bunch of pictures at Photosynth.) The point cloud extraction process was not as straightforward, but once I had the files it was doable. I tossed everything into a 3D modeler, and took a look.
There were stray points. To be expected. I removed those. Then I created a mesh from the cleaned up point cloud and got a fairly serviceable model out of it.
It's not perfect by any stretch. And because I ditched the RGB color component of the individual points, there was no texture mapping applied to the resulting model. But I got a surface I could use to extract a DEM from my images:
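For anyone curious how the DEM step works, the core idea is simple gridding: bin the cleaned points into XY cells and keep one elevation per cell. A minimal sketch (the function name and the max-Z choice are mine; taking the mean or a plane fit per cell would be just as valid):

```python
# Grid a point cloud into a simple DEM: bin points into XY cells and
# keep the highest Z per cell (a "first-return" style surface).

def points_to_dem(points, cell_size=1.0):
    dem = {}  # (col, row) -> elevation
    for x, y, z in points:
        cell = (int(x // cell_size), int(y // cell_size))
        if cell not in dem or z > dem[cell]:
            dem[cell] = z
    return dem
```

Cells with no points simply stay absent from the dictionary; a production DEM tool would interpolate across those gaps.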
I, too, would really REALLY like to see point cloud extraction as an option during the synth process. Exporting it as a CSV file that could be imported into the software of the user's choice would be pretty cool.
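The CSV export itself would be trivial once the points are accessible; something along these lines (header names and tuple layout are my assumptions, not any Photosynth format):

```python
import csv
import io

# Dump (x, y, z, r, g, b) point tuples as CSV so any package of the
# user's choice can import them.

def cloud_to_csv(points):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["x", "y", "z", "r", "g", "b"])
    for row in points:
        writer.writerow(row)
    return buf.getvalue()
```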
I'm currently playing with a Photosynth .bin loader for the Irrlicht engine; once done I'll release the source along with a Windows binary that can view the cloud and save to 3DS and probably other file formats.
No promises, but given that most of the reverse engineering work is already done it shouldn't be too hard.
erm, in case I wasn't clear - already done by other people, not by me
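For a flavour of what such a loader does: it walks a packed binary buffer and unpacks fixed-size records. The sketch below parses a *hypothetical* format of three little-endian float32s per point; the real Photosynth .bin layout is more involved, so treat this purely as an illustration of the technique:

```python
import struct

# Parse a hypothetical packed binary cloud: each point is three
# little-endian float32s (12 bytes). NOT the actual Photosynth layout.

def read_points(data):
    count = len(data) // 12  # 3 floats * 4 bytes each
    return [struct.unpack_from("<3f", data, i * 12) for i in range(count)]
```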
I am not very into the specifics of 3D programming, but may I suggest voxels?
Voxels are used for high speed rendering of 3D scenes. Using the points as a base, you could make a fast voxel 3D model. It is not highly detailed, but it would make the point cloud more solid.
That is, if I understand what voxels exactly are.
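The voxel idea is easy to sketch: map each point to an integer grid cell, and the set of occupied cells becomes a solid (if blocky) stand-in for the surface. A minimal version, with the function name and voxel size as my own choices:

```python
# Voxelize a point cloud: each point falls into one integer grid cell;
# the set of occupied cells approximates the scanned surface as cubes.

def voxelize(points, voxel_size=0.5):
    return {tuple(int(c // voxel_size) for c in p) for p in points}
```

Rendering each occupied cell as a small cube (or ray-marching the grid) gives exactly the "more solid" look described above, at the cost of detail finer than the voxel size.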
For starters, I would like to see what could be done in a controlled environment with a known camera. A lot of other photogrammetry packages have a calibration method for the camera taking the photos. This helps to remove lens distortions and give accurate placement of objects in the photos.
I briefly watched the video on how to take photos, but if you could shoot a "grid" or something like that with your camera and then have Photosynth calibrated to your camera, there should be a much more accurate way to get point cloud data.
Also, it usually helps to keep the same focal length (i.e. don't zoom in between photos) and to avoid highly specular or reflective objects, since there is nothing to match between photos in a bright white spot.
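To show why calibration matters: lens distortion is commonly modelled with radial terms, and even the first term shifts pixels noticeably toward the image edges. A one-term sketch of the forward model (undoing it for real images requires an iterative solve, which this omits):

```python
# One-term radial distortion model: a point at normalised coordinates
# (x, y), measured from the principal point, lands at (x', y') on the
# distorted image. k1 is the camera's first radial coefficient.

def distort(x, y, k1):
    r2 = x * x + y * y          # squared radius from the principal point
    factor = 1.0 + k1 * r2      # radial scaling; grows toward the edges
    return x * factor, y * factor
```

A calibration grid shot lets software estimate k1 (and higher terms) for your specific lens, which is exactly the per-camera step described above.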
This would be a very useful tool indeed, but there needs to be some control of the point cloud density in order to make it easier.
In any case, Photosynth seems to be a groundbreaking tool with lots of potential.
I posted a story in the lounge about Canoma, a program that actually let you build fairly accurate textured 3d models from photographic data. You might find it fun to play with if you can get a copy. Sadly the geometry that you can model with Canoma is rather... um... primitive (I'll duck now).
Another program that does an amazing job with creating textured 3d objects from images is called 3DSOM from Creative Dimension in London. This can only be used for objects, but they do some pretty cool stuff with automatically calibrating and removing lens distortion from your digital camera.
How can I contact you for some assistance in exporting the point cloud?
What would you like to do?
I've been away for some time as you can tell...I am still interested in exporting point clouds for reconstructions. Basically, I would like to get the X,Y,Z and RGB information out.
Could you drop me a note at eliscio(at)hotmail.com?