Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.
Photos are great, and Photosynth is awesome! But pictures are somehow a kind of 'dead' information. If Microsoft could come up with some sort of hypertext for pictures, that would be great. Imagine a picture in a particular synth that had hyperlinks connecting both to the inside and the outside of that synth.
One could access some text about a particular detail inside a picture, or even play movies, hear sounds, tweet... the whole digital bonanza experience! :-)
I am no computer scientist but that would be just AWESOME! ;-)
What I have imagined is a sort of tele-immersion and augmented reality inside a synth. For instance, one could follow hyperlinks from inside a synth, and edit and create content "there", like it was some kind of 'Second Life' stuff.
People could share details found in synths, and connect with each other! They could create links between URLs and synths, they could "write" content to the synth, put more pictures into it.
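The idea above of anchoring links and media to regions of a synth photo could be sketched as a simple data structure. Everything here is hypothetical (Photosynth has no such API); the names `Hotspot`, `AnnotatedPhoto`, and the example URL are all made up for illustration:

```python
# Hypothetical sketch: hyperlink "hotspots" attached to regions of a photo.
from dataclasses import dataclass, field

@dataclass
class Hotspot:
    # Normalized rectangle (0..1) on the photo where the link lives.
    x: float
    y: float
    width: float
    height: float
    label: str
    target: str                # a URL, another synth, a video, an audio clip...
    media_type: str = "link"   # "link", "video", "audio", "text"

@dataclass
class AnnotatedPhoto:
    photo_id: str
    hotspots: list = field(default_factory=list)

    def links_at(self, px, py):
        """Return every hotspot whose region contains the point (px, py)."""
        return [h for h in self.hotspots
                if h.x <= px <= h.x + h.width and h.y <= py <= h.y + h.height]

# Example: a pyramid photo with a hotspot linking to a lecture video.
photo = AnnotatedPhoto("pyramid-042")
photo.hotspots.append(Hotspot(0.2, 0.1, 0.3, 0.4,
                              "Lecture on pyramid construction",
                              "http://example.com/lecture",
                              "video"))
hits = photo.links_at(0.3, 0.2)   # clicking inside the region finds the link
```

A viewer could then render these hotspots as clickable overlays while you pan through the synth, which is roughly the "hypertext for pictures" idea.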
I like this idea very much. When I read it I was reminded of other discussions of 'tagging' synths like this one:
You do take it a step further, though, with the suggestion to pull linked content back into the synth. As cool as this would be, I think the Photosynth team (because of their small size) would rather make synths as strong as possible, and as usable by other companies in their own programs, and let those companies build their own layering of Wikipedia articles, etc. Part of what you should consider is that Photosynth is not, in the larger scheme of things, an isolated community, but ultimately just one of the teams working on building Bing Maps|Virtual Earth. Every team can't bake in its own layers of linking.
Does Google let you actually edit Wikipedia articles from inside Google Earth or just read them?
As far as videos go, one thing I've wished for a while is that you could make a synth of a place with photos, then throw a video that moves through that space at the synth, and have it float through the pointcloud properly as points from the photos are recognized in the video.
Even a short video is many more images than you can usually synth together, but because videos are such low resolution, there is far less detail, meaning Photosynth doesn't run out of memory as fast when trying to match images. I would love to be able to synth my brother's apartment, then place his webcam feed inside that synth and have it move about the pointcloud as he walks around with his laptop.
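The "float a video through the pointcloud" idea boils down to: for each video frame, match the frame's features against features already associated with 3D points in the synth, then estimate a camera pose from the 2D-3D matches. A toy sketch, with the feature matching faked using string descriptors so it stays self-contained (real pipelines use descriptors like SIFT and a pose solver; none of the names below are Photosynth's):

```python
# Hypothetical index built when the synth was created:
# feature descriptor -> recovered 3D point in the pointcloud.
pointcloud_index = {
    "corner-of-window": (1.0, 2.0, 0.5),
    "doorknob":         (0.2, 1.1, 0.0),
    "poster-edge":      (2.5, 0.9, 1.8),
}

def localize_frame(frame_descriptors):
    """Return the 3D anchor points recognized in this frame.
    With enough correspondences (3-4+ in practice), a camera pose
    could be solved from them; here we only collect the matches."""
    return [pointcloud_index[d] for d in frame_descriptors
            if d in pointcloud_index]

video = [
    ["doorknob", "corner-of-window", "unknown-blur"],  # frame 0: 2 anchors
    ["poster-edge"],                                   # frame 1: 1 anchor
    ["unknown-blur"],                                  # frame 2: lost
]

for i, frame in enumerate(video):
    anchors = localize_frame(frame)
    print(f"frame {i}: {len(anchors)} recognized points")
```

Frames with too few matches (like frame 2) would have to be interpolated or dropped, which is why low-detail webcam footage makes this harder.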
I saw online a TechFest demo about video synths on mobile phones! That would be a revolution for citizen journalism, among other things!
I think the idea of matching everything visually to existing pictures is somehow a 'closed' idea, information-wise. What I thought of is transforming Photosynth 'space' into an HTML-like space. Someone experiencing a synth of a pyramid could find and watch a video inside that synth, say a lecture at a university. Users could also open files, share, and even write.
In other words, what Project Tuva (which I HOPE grows) did for video streaming could help Photosynth: a type of enhanced synth!
Hmm. To me, saying that matching videos or other visual information to existing synths is a closed idea is like saying that linking one page to another in HTML is a closed idea. I don't quite understand how you could think that, so perhaps I'm just misunderstanding you.
I think what you're describing is what many people call the metaverse, or cyberspace. In many ways Photosynth has the potential to be true cyberSPACE as it was originally envisioned. You describe browsing relevant pieces of the web from inside synths, and I completely agree with you.
I view building good, strong synths of places as parallel to big web companies like Google, Amazon, Yahoo, eBay, Wikipedia, or Microsoft building a big existing source of information. Once those large collections of information exist, other people can link to useful information they find there, adding to what is already there; but there has to be something there to begin with.
It's not so much that everything must tie to existing pictures as that visual data, whether photo or video, needs other visual data to link to, just as text needs other text to link to.
In any case, I think you're right that the future of the web lies in the direction of augmented reality. I view Photosynth as the tool with which to really begin building a large database of the physical world to link all of the existing web into. Once that large visual index of the real world exists, then all the information that currently lives in video, audio, and text can begin to be embedded into the space built by tools like Photosynth, certainly.
No, what I meant to say is that the video played should not necessarily have any 'visual' connection with the synth! Just that! Your idea is amazing, but it goes back to matching pictures, if I understood you correctly!
I found something that is somehow connected to Microsoft via the MIX Online team.
Imagine that space inside a synth: with links, new information pointing outside the synth. The synth as an environment to aggregate information!
I don't want to be negative, so don't feel like I'm trying to put your ideas down... I'm just talking things over.
You say that photos seem to you like a 'dead' sort of information, but what do you think is a living sort of information, then?
Text is just something that someone said in the past. Audio is only a record of a sound from the past. Video is only the record of something that has already happened and finally, yes photos are only records of things already past. I suppose I'm just wondering what you would consider to be 'live' information.
Certainly with any of these media types, more new information can be posted with them all the time, but they all seem like they could be used equally to either communicate how something was in the past or how something could be in the future. Even things like instructions that are teaching you how to do something new today can be done with photos as much as video or text. I'm just curious what you really meant.
Linking pictures has a 'tautological' character; linking text doesn't.
There is this A=A feeling to photosynths after you've played with them a lot, and that is the thing I would like to see addressed.
With text we need to go beyond tautology to create new (live) information; that is the 'philosophical' idea behind my suggestion!
Ooh, we're both typing at the same time. =]
So... synths as link aggregators for related information. I feel like that is very much compatible with what I'm thinking of. Perhaps you're thinking of something a little more immediate, then, that still links out to the web as it exists today.
I think this is quite close to what will happen in the end, if I'm right about what I was describing above; only you'd like to get started on cross-linking synths to the rest of the internet already, while synths are still little islands not well connected to each other, rather than waiting for videos to be automatically matched so that they move through a synth correctly.
I see the use. I really would like to see some good, strong tagging support for Photosynth that includes linking to external information.
Anyway... I'll stop hijacking your topic. =]
Imagine a school trip to a museum or something. Students with different cameras take, let's say, 210 pictures of the place, and a "lousy" synth, just 9% stitched, is made from these pictures.
But inside this synth, students can create content linking both inside and outside the synth.
Keeping Photosynth a 'pure' image product is not a good idea, in my view. It runs the risk of getting stuck in the tautological feeling I was describing. This is a great product, and it could become more 'impure', mixed up with text, audio, video, etc. In this way Photosynth could seriously enter the web as a revolutionary place to browse for information...