Greetings, Photosynth viewer team,
Is there any chance of seeing breakthrough performance in pointcloud rendering like that shown in the following videos? Their company seems to be trying to make nice with ATI, NVIDIA, or Intel, but I'd be quite happy to see Microsoft Research, the Xbox division, Bing, or whoever snatch them up.
Feel free to provide technical critique and comment. I assume that this is the way forward for Bing Maps as Photosynth really takes off.
I know that Photosynth pointclouds aren't anywhere near as dense as the data these fellows are working from (although yours can be denser than they currently are, as you've demonstrated before), and it seems the whole operation would depend on at least a fiber-optic internet connection if it is going to run successfully against a remote database.
Presumably, though, even if the pointclouds must be fully downloaded before they can be queried at interactive speeds, that would not be so terribly different from what happens today: our pointclouds download in little chunks as they become available, until the entire thing is cached in our hard drive's temp folder.
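To make that chunk-by-chunk caching concrete, here is a minimal sketch (the class name and chunk scheme are my own assumptions, not Photosynth's actual implementation): chunks arrive in arbitrary order, the viewer can draw whatever has landed so far, and once every chunk is present the cloud is fully cached locally.

```python
# Hypothetical sketch of progressive pointcloud caching. Chunk IDs and the
# (x, y, z) point tuples are illustrative; the real viewer's format is unknown.
class PointCloudCache:
    def __init__(self, total_chunks):
        self.total_chunks = total_chunks
        self.chunks = {}  # chunk_id -> list of (x, y, z) points

    def receive_chunk(self, chunk_id, points):
        """Store a newly downloaded chunk; duplicate deliveries are ignored."""
        self.chunks.setdefault(chunk_id, points)

    @property
    def complete(self):
        """True once every chunk has been cached locally."""
        return len(self.chunks) == self.total_chunks

    def renderable_points(self):
        """Everything cached so far -- what the viewer could draw this frame."""
        return [p for chunk in self.chunks.values() for p in chunk]
```

The point of the sketch is just that rendering never has to wait for `complete`; the view fills in gradually, as it does today.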
In any case, it does seem that if their demo is truly running in software alone without any hardware acceleration, then there is great hope for performance gains in the Silverlight viewer before particle hardware acceleration is added to Silverlight.
If I understood it correctly, they do not have to download all the data beforehand; they only need to handle as many points as there are pixels on your monitor for the part you are currently viewing, just as DeepZoom does. They do not, however, use various resolutions, so they do not have the problem of an initially blurred view turning into a sharp one.
Very true that only the points your screen's pixels need at the moment have to be loaded, Ari.
I was just thinking that if one is moving through the scene or rotating around an object in unpredictable ways, the correct data still has to come back from the server in time to draw, and every frame will need different data. There is some overlap between frames in which points need to be loaded, but if you are moving quickly and expect a good framerate, then the amount of data per second is still quite substantial, I would think.
That is the only reason I imagined the pointcloud data for an area being downloaded locally before it can be manipulated. The parallel is that when Google performs a query on its database, it does not crawl the entire web again that very second before returning results to you; rather, it runs the query against its own local databases and then sends the search results page back to you.
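A back-of-envelope check of the bandwidth worry above (all numbers here are illustrative assumptions, not measurements): if each frame needs roughly one point per screen pixel and only a fraction of points carry over from the previous frame, the per-second transfer for fast camera motion is indeed substantial.

```python
# Rough streaming-rate estimate under stated assumptions: one point per
# pixel per frame, a fixed byte cost per point, and a given fraction of
# points reused from the previous frame.
def streaming_rate_mb_per_s(width, height, fps, bytes_per_point, reuse_fraction):
    points_per_frame = width * height
    new_points_per_frame = points_per_frame * (1.0 - reuse_fraction)
    return new_points_per_frame * bytes_per_point * fps / 1e6

# e.g. a 1024x768 view at 30 fps, 16 bytes per point, 80% frame-to-frame
# reuse: roughly 75 MB/s -- far beyond a typical home connection of the day.
rate = streaming_rate_mb_per_s(1024, 768, 30, 16, 0.8)
```

Even with generous reuse between frames, the estimate supports downloading and caching the region locally first.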
Yes, I agree: the amount of data that must be available at any instant is much larger than the data actually displayed. Using points as infinitesimally small polygons means dealing with a huge amount of data, which today's personal computers probably cannot handle. It is not clear what Unlimited Detail is actually doing, or whether what we see in their demos is real.