Forum : New Feature Suggestions/Requests

Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.


Topic: Video

mloewen (Over 1 year ago)
Using the same technology with video could have very interesting results.
ZInventor (Over 1 year ago)
Possibly take the video and separate the frames, using each frame as an image for the synth?

Sounds cool, and would be useful... it would super-simplify the process of synthing: shoot video of an area, walk around a bit, upload the video, done!

-Z
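
For anyone who wants to try this today, here is a minimal sketch of the "shoot video, extract frames, upload" idea, assuming Python with OpenCV installed; the file name and step size are placeholders, not anything Photosynth itself provides:

# Minimal sketch: save every Nth frame of a video as a JPEG for synthing.
# Assumes Python with OpenCV (pip install opencv-python); paths/values are placeholders.
import cv2

VIDEO_PATH = "walkaround.mp4"   # hypothetical input video
STEP = 15                       # e.g. keep ~2 frames per second from 30 fps footage

cap = cv2.VideoCapture(VIDEO_PATH)
index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break                   # end of the video
    if index % STEP == 0:
        cv2.imwrite(f"frame_{saved:05d}.jpg", frame)
        saved += 1
    index += 1
cap.release()
print(f"Saved {saved} of {index} frames")

The extracted JPEGs could then be uploaded like any other set of photos.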
madeeds (Over 1 year ago)
A few people have done this.  Here's one example: http://photosynth.net/view.aspx?cid=2c954b62-5526-4ddb-bc52-bdc09e1c2592.  It combines a few shots taken with a still camera plus some video.

The two main problems with video are:
1) Motion blur on the frames. (You need to move very slowly, use only a small fraction of the frames, and have good lighting. A sketch of one way to filter out the blurriest frames follows below.)
2) Low resolution. Even with the nicest video cameras you're not going to capture anything that is worth zooming in on.
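
On point 1, one simple way to thin out blurry frames automatically is to score each extracted frame for sharpness and keep only the best frame of every group. Here is a minimal sketch, assuming Python with OpenCV and frames named as in the earlier sketch; the variance-of-Laplacian score and the group size are just one reasonable choice, not something Photosynth does itself:

# Minimal sketch: keep the sharpest frame out of every group of extracted frames.
# Uses variance of the Laplacian as a simple blur score (higher = sharper).
# Assumes Python with OpenCV; the file pattern and group size are placeholders.
import cv2
import glob

GROUP = 10                      # keep 1 frame out of every 10

def sharpness(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

frames = sorted(glob.glob("frame_*.jpg"))
keep = [max(frames[i:i + GROUP], key=sharpness) for i in range(0, len(frames), GROUP)]
print("Selected", len(keep), "sharp frames")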
Nathanael (Over 1 year ago)
Here's the link again, since the forum attached the fullstop to the link: 
http://photosynth.net/view.aspx?cid=2c954b62-5526-4ddb-bc52-bdc09e1c2592
Scruff.R (Over 1 year ago)
It would be cool if this feature (http://www.youtube.com/watch?v=EfolwAPOO_g at time 10:22) found its way into Photosynth for us, too ;-)
Scruff.R (Over 1 year ago)
Sorry, not at 10:22 (that's the whole length of the video); it's at 5:55.
Nathanael (Over 1 year ago)
Scruff, you're definitely on my wavelength.

A few other places that people have mentioned this same sort of thing are: 

siblog here: 
http://photosynth.net/discussion.aspx?cat=01b6f15f-42eb-49cb-a221-ed56615e1c47&dis=a1b4ee40-b3b8-4a16-8bf1-a6807e6e6219

and myself here: 
http://photosynth.net/discussion.aspx?cat=01b6f15f-42eb-49cb-a221-ed56615e1c47&dis=0d6d7eca-96c6-477f-b255-5d106d592eba

so you can imagine how happy I was to see that talk in February.

Since Photosynths are part of Bing Maps, and what was demonstrated is being worked on as a feature of Bing Maps, it seems to me that it's simply a waiting game until Silverlight can do real 3D graphics. Once it can, we should see the same sort of 'video moves through space' applied to 3D models (point clouds, in Photosynth's case) instead of only to panoramas, which video doesn't map onto nicely unless the person with the video camera is standing in the same place the panorama was taken from.
Nathanael (Over 1 year ago)
Of course this only really works if there are enough stationary objects in the scene that remained the same between the photosynth and the video. 

Were you talking about just the ability to see videos move around synths or specifically the ability to see that happen when the video is being sent to you live? Either one is exciting to me.
michaeldenis (Over 1 year ago)
From a strictly practical standpoint, video has a few great uses in the current version of Photosynth. The most compelling use is the "Rotational Move" that results in the donut-like functional button in the Photosynth viewer. Using video to circle a single object greatly increases the fluidity of the motion and naturally creates a path of images for the rotation to follow. So long as a circular arc is shot around a set point, Photosynth should have an easier time following a handheld video arc than a shot-by-shot series of single-shot guesses.

I recommend manually setting the camera to "shoot fast" (a fast shutter speed) when capturing such videos where the camera actually moves, because otherwise the shots will be blurred. Consumer video cameras also lag far behind SLRs and their many interchangeable lenses, making matters even more difficult. Furthermore, I expect that few people use video decompilers often. Nonetheless, with recent advances in HD camera tech, I want to see testing.
Sintor (Over 1 year ago)
Have you ever noticed that many of the videos uploaded to the web were shot at the same show, but from different points of view? I mean public events like concerts, parades, plays, press conferences...
I wonder whether those videos could be of any use if you could collect enough of them and find a way to align them in time. You could then take one frame from each and every video source and use those frames for a photosynth. Then step one frame ahead (or some fixed fraction of time) and repeat.
Wouldn't that give you some kind of "videosynth"?

(Hope this makes sense. Sorry for my English.)
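
For what it's worth, here is a minimal sketch of that frame-stepping idea, assuming Python with OpenCV and assuming you already know each video's time offset relative to a shared clock (finding those offsets, for example by aligning the audio tracks, is the hard part and is not shown). All file names and numbers are placeholders:

# Minimal sketch: grab one frame from each video at the same shared instant,
# step forward in time, and repeat, giving one frame set per instant to synth.
# Assumes Python with OpenCV; the offsets come from some external alignment step.
import cv2

videos = {"cam_left.mp4": 0.0, "cam_front.mp4": 3.2, "cam_right.mp4": 1.7}  # file -> offset (s)
STEP_SECONDS = 1.0              # how far to advance between frame sets
NUM_STEPS = 30

caps = {name: cv2.VideoCapture(name) for name in videos}
for step in range(NUM_STEPS):
    t = step * STEP_SECONDS                  # time on the shared clock
    for name, cap in caps.items():
        local_t = t - videos[name]           # convert to this video's own timeline
        if local_t < 0:
            continue                         # this camera was not rolling yet
        cap.set(cv2.CAP_PROP_POS_MSEC, local_t * 1000.0)
        ok, frame = cap.read()
        if ok:
            stem = name.rsplit(".", 1)[0]
            cv2.imwrite(f"step{step:03d}_{stem}.jpg", frame)
for cap in caps.values():
    cap.release()

Each step's frame set could then be synthed on its own, which is roughly the "videosynth" described above.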
NateLawrence (Over 1 year ago)
Sintor, you're definitely on the right track.

A few years ago at Microsoft TechFest, research from Microsoft Innovation Labs Cairo was shown in which multiple people broadcast video from their cellphones from slightly different vantage points of the same event, and a server stitched the videos together into a larger video, although that was really more of a video panorama than a video synth.

Here are a few stories about it:
(Video Interview) http://techcrunch.com/2009/02/24/microsoft-techfest-qik-meets-photosynth-with-impressive-panoramic-mobile-movies/
(Video Interview) http://news.cnet.com/8301-13860_3-10171114-56.html
http://www.gizmodo.com.au/2009/02/realtime_mobile_video_stitching_is_so_crazy_it_just_might_work-2/
Poster: http://www.microsoft.com/presspass/events/msrtechfest/Posters/id183_20x15.jpg
Project group website: http://research.microsoft.com/en-us/projects/mobicast/
NateLawrence (Over 1 year ago)
More along the lines of what you describe has actually been demonstrated in a research project from ETH Zurich and University College London called Unstructured Video-Based Rendering. It does need a synth to register the videos to, but that shouldn't be a problem.

Video summary: http://www.vimeo.com/12062502
Project website with downloadable interactive demo and additional videos: http://cvg.ethz.ch/research/unstructured-vbr/
OmniSynThesis (Over 1 year ago)
Very interesting ideas!
Sintor (Over 1 year ago)
Thank you natelawrence for that great info and useful links
Sarhat (Over 1 year ago)
When I use video instead of still images, some problems occur in my synth, as follows:
1- The frames extracted from video do not carry any information about focal length or other camera metadata (a sketch of one possible workaround follows at the end of this post).
2- There is a great deal of distortion in the synth. For example, I used a video taken from a moving van and extracted about 300 frames; I can see that my point cloud has been distorted into a panorama-like shape, whereas the surfaces should be straight lines (http://photosynth.net/view.aspx?cid=9cab958c-596f-414e-ad77-6869a5601f52).
3- Also, the camera calibration parameters have very big errors: instead of a focal length of around 28 mm, I got around 200 mm.
4- The relative positions of the cameras are also wrong.
I think the problem occurs because no camera information is available in the headers of the frames extracted from the video.
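
On point 1, here is a minimal sketch of writing a focal length into the EXIF header of the extracted JPEG frames before uploading them, assuming Python with the piexif library and that you know (or can estimate) your camera's real focal length and its 35 mm equivalent. The numbers below are placeholders, and I can't promise this fully fixes the calibration errors in points 3 and 4:

# Minimal sketch: stamp focal length EXIF tags into extracted video frames.
# Assumes Python with piexif (pip install piexif); the values are placeholders
# that must be replaced with the real numbers for the camera that shot the video.
import glob
import piexif

PHYSICAL_FOCAL_MM = (45, 10)    # physical focal length as an EXIF rational, e.g. 4.5 mm
EQUIV_FOCAL_35MM = 28           # 35 mm equivalent focal length, e.g. 28 mm

exif_dict = {
    "0th": {},
    "Exif": {
        piexif.ExifIFD.FocalLength: PHYSICAL_FOCAL_MM,
        piexif.ExifIFD.FocalLengthIn35mmFilm: EQUIV_FOCAL_35MM,
    },
    "GPS": {},
    "1st": {},
    "thumbnail": None,
}
exif_bytes = piexif.dump(exif_dict)

for path in glob.glob("frame_*.jpg"):
    piexif.insert(exif_bytes, path)  # writes the EXIF block into the JPEG in place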
Nathanael (Over 1 year ago)
Sarhat, your point cloud in your video frame test is not accurate; however, it is rather attractive in its own way. :) http://photosynth.net/d3d/photosynth.aspx?cid=9cab958c-596f-414e-ad77-6869a5601f52&m=false&i=0:0:107&c=8.60879923689142:-7.71441067684134:9.30496185594986&z=1660.02399431506&d=-0.797792748739944:-0.436480477287229:-0.777309257860325&p=-7.32229108769998:-5.74316920504903&t=False

I have no doubt that you're correct about the cause.

I have noticed that choosing a camera position/motion which allows Photosynth to track both background and foreground objects with a high degree of overlap (such as video provides) can often avoid this curling effect in the point cloud.

Could you try a test where you choose a point on the ground, do your best to keep it in the centre of your viewfinder, and orbit it with the camera in a complete circle, with one or two other overlapping orbits of neighboring points on the ground? I'd be interested in the results.