Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.
Wouldn't it be amazing if Photosynth could automatically create combined synths from multiple synths of the same location? It could use tags and other information about each synth, on top of the image-matching algorithm, to match synths of the same places or objects.
Great suggestion - the team has been scheming about this for quite some time. Lots of technical problems to solve, but it's a very cool concept.
Take the case of the Giza pyramids: if you search for synths of them, you get several different attempts. I would guess that at least a few pictures from different attempts would overlap. What if the site offered an option to share users' photos from specific places in a folder named after that place? From then on, anyone creating synths of Giza would have the option of automatically sharing their photos to the folder...
The idea of an opt-in point for an ongoing public synth has merit...
Oh... I didn't realize I was supposed to put my other post in the Ideas section. How did they do the TED demo then? Was that because flickr is CC (Creative Commons)?
The only synth derived from flickr photos shown at the TED presentation (I think) was Noah Snavely's Notre Dame data set. Since he was doing academic research, he was allowed to simply download photos from flickr as he pleased, just to show that the theory worked. That is why Microsoft didn't include that example in their initial Photosynth Tech Preview: it was built from other people's photos, which were their property, and unlike Noah doing academic research, Microsoft is a big corporation. It was shown at TED mostly because it was an example of one of the first successes of the Photo Tourism project.
In Noah's case, he just downloaded all the photos he wanted to use to his own computer and set up the University of Washington servers to crunch the numbers there on location, much the same as we do today.
Check these links for more info: http://getsatisfaction.com/livelabs/topics/linking_synths
Linked synths gave me another idea:
Time or season incorporation.
As synths get linked and grow in size, surely there will be two very similar (overlapping) synths, one taken in winter and another taken in summer by two different people.
Now imagine taking that to the extreme: walking through a synth, but also having a slider that lets you adjust the overall time of day or season as a selection criterion for the images used.
Hmm. I wonder what Chicago downtown looks like at night, during the winter? Ah!
And another use for time: how about synthing a building's construction once a week as it takes shape? Sliding through time while walking through that synth would be something to see!
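The time-slider idea could be sketched roughly like this, a minimal Python illustration assuming each photo carries a capture timestamp. The photo records and the `select` helper are invented for this sketch; Photosynth exposes no such API:

```python
from datetime import datetime

# Hypothetical photo records with capture timestamps (made up for illustration).
photos = [
    {"file": "chicago_01.jpg", "taken": datetime(2008, 1, 12, 21, 30)},
    {"file": "chicago_02.jpg", "taken": datetime(2008, 7, 4, 14, 0)},
    {"file": "chicago_03.jpg", "taken": datetime(2008, 12, 20, 22, 15)},
]

def season(dt):
    # Meteorological seasons, northern hemisphere.
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer"}.get(dt.month, "autumn")

def select(photos, want_season=None, night=False):
    """Keep only the photos matching the slider's current settings."""
    out = []
    for p in photos:
        if want_season and season(p["taken"]) != want_season:
            continue
        if night and 6 <= p["taken"].hour < 20:  # crude daylight test
            continue
        out.append(p)
    return out

# Downtown Chicago, winter, at night:
print([p["file"] for p in select(photos, "winter", night=True)])
# -> ['chicago_01.jpg', 'chicago_03.jpg']
```

The slider itself would just re-run a selection like this as the user drags it, swapping which images feed the view.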
@Fracture: Yes! I'm with you all the way, re Chicago + downtown + night, etc.
I must say, though... Good luck with getting all the different shoots of a building being constructed to synth together. The question of what exactly will stay the same for Photosynth to latch onto is a profound one. Bare earth and gravel piles will be replaced by smooth sod, rafters will be covered by roofing, and studs will be covered by siding or brick. Windows will be full of reflections that change from day to day with the weather, and the surrounding grounds will vary with seasonal shifts as well.
All of that to say... I like the idea but the odds of it working with the way that Photosynth currently operates are... slim. If you could catch it in enough transitional phases with only half the siding on, etc. and take heaps of pictures it might all hang delicately together but probably not all in one point cloud.
Excellent points. In terms of getting the building construction concept to work, this is what I had in my mind:
It wouldn't really work _without_ the time-slider concept. The synths produced on different days would likely have to be shown with the time incorporated into the presentation.
I would also assume that you would have to maintain some commonalities between all of the independent synths. These could be surrounding buildings, GPS data, or manual alignment when creating the synth.
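To illustrate the GPS-commonality idea: a rough sketch of placing two independently built synths into one shared frame by translating each so its GPS-tagged anchor lands at the right offset. All names and coordinates here are invented, and this handles translation only; orientation and scale would still need the manual alignment mentioned above, or later refinement:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres

def gps_to_metres(lat, lon, lat0, lon0):
    """Equirectangular projection of a lat/lon relative to a reference point."""
    x = math.radians(lon - lon0) * EARTH_R * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * EARTH_R
    return (x, y)

# Each synth: a GPS anchor plus reconstructed points in local metre coordinates.
synth_a = {"gps": (29.9792, 31.1342), "points": [(0.0, 0.0), (5.0, 2.0)]}
synth_b = {"gps": (29.9794, 31.1344), "points": [(0.0, 0.0), (-3.0, 1.0)]}

lat0, lon0 = synth_a["gps"]  # use synth A's anchor as the shared origin

def to_shared(synth):
    dx, dy = gps_to_metres(synth["gps"][0], synth["gps"][1], lat0, lon0)
    return [(px + dx, py + dy) for px, py in synth["points"]]

shared = to_shared(synth_a) + to_shared(synth_b)
```

Once both clouds live in the same frame, checking whether they actually overlap becomes a plain geometry question.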
Eventual automation would be great. If synthing is here to stay, and collaborative synthing happens (PLEASE!), you'll eventually have things like city streets showing new buildings appearing and old ones disappearing.
With the current interface, such things are just buried in the jumble to be stumbled upon should you happen to reach the right photo.
It seems like this idea would be harder than we can first imagine.
I just recently made a decent synth of a place, but decided that I could do better. I went back on a different day and took more photos. Although I thought there wasn't any real difference, even just the change in lighting was so huge that Photosynth couldn't recognize it as the same object/place, even though nothing but the lighting had changed, from sunny to overcast.
Time and light conditions seem to be a big part of the synthing process.
Changes that hardly register with us stand in the way of synthing.
It would also seem that if we wanted a quick fix to match up different people's synths, we would have to be able to force one onto the other, or somehow have it compare the resulting point clouds. Of course, at this point, I don't really understand the value or feasibility of what I'm suggesting.
One approach might be to worry less about the images and concern yourself with point-clouds. They're like a sheet draped over a landscape and offer a "3D fingerprint" of a location.
When geo-tagging their photo-synth, the user already gives a fairly close approximation of the location of their synth. You could even ask the user to approximate the orientation and scale of their point-cloud over a map-view of some sort.
You then treat the point cloud like a solid, and see if it intersects (or "stacks on top of") other point-clouds users have placed in the vicinity.
If you get what appears to be an overlap, you can then start using the image information to fine tune.
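As a toy illustration of that coarse test, one could compare axis-aligned bounding boxes of the geo-positioned point clouds and only fall back to expensive image matching when the boxes intersect. The clouds and helpers here are made up; Photosynth's internals aren't public:

```python
# Coarse overlap test between two geo-positioned point clouds.

def bbox(points):
    """Axis-aligned bounding box: (min corner, max corner)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def boxes_intersect(a, b):
    """True if the bounding boxes of clouds a and b overlap on every axis."""
    (a_lo, a_hi), (b_lo, b_hi) = bbox(a), bbox(b)
    return all(a_lo[i] <= b_hi[i] and b_lo[i] <= a_hi[i] for i in range(3))

cloud_a = [(0, 0, 0), (10, 5, 3), (4, 2, 1)]
cloud_b = [(8, 4, 2), (15, 9, 6)]    # overlaps cloud_a
cloud_c = [(40, 40, 0), (50, 45, 5)]  # far away

if boxes_intersect(cloud_a, cloud_b):
    print("clouds A and B overlap: refine with image features")
if not boxes_intersect(cloud_a, cloud_c):
    print("clouds A and C are disjoint: no match candidate")
```

A real system would need something finer than bounding boxes (overlapping boxes don't guarantee the surfaces themselves line up), but as a cheap first filter it captures the "check the point clouds before the images" idea.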