I've been wondering recently whether it would be possible, near the end of scene reconstruction, for the synther to take one more pass at fitting orphans into the main coordinate system: recognise paths through the photos whose positions have already been solved, and use deduction based on the filenames to infer positions for the orphans in question.
In synths constructed from randomly named photos, this inference breaks down completely and is worthless (e.g. photos scraped from the internet, taken on multiple cameras or multiple days, or taken on a camera that restarts its numbering in a new folder once you surpass 999 photos, so that there might be two or more DSC_0023.jpg files from different folders on the same camera). But suppose it can be observed that photos 538 through 545 all appear looking in the same direction, progressing in a straight line at relatively even increments; that images 546 through 549 are orphans; and that 550 through 575 continue the line established by 538 through 545, with an empty space of about four images between the two groups. The clear human intuition is to re-examine the orphaned photos in that light.
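To make the heuristic concrete, here is a minimal sketch of the filename-side deduction. All names here are mine, not the synther's, and it assumes simple trailing-number filenames; it merely flags orphans whose sequence numbers fall inside a small gap between two runs of solved photos:

```python
import re

def sequence_index(filename):
    """Extract the trailing number from a filename like 'DSC_0542.jpg'."""
    match = re.search(r'(\d+)\.\w+$', filename)
    return int(match.group(1)) if match else None

def suspect_orphans(solved, orphans, max_gap=10):
    """Flag orphans whose sequence numbers sit between solved photos,
    e.g. solved runs 538-545 and 550-575 with orphans 546-549 between."""
    solved_ids = sorted(i for i in (sequence_index(f) for f in solved)
                        if i is not None)
    suspects = []
    for name in orphans:
        idx = sequence_index(name)
        if idx is None:
            continue
        before = [i for i in solved_ids if i < idx]
        after = [i for i in solved_ids if i > idx]
        # suspect only if solved photos bracket the orphan across a small gap
        if before and after and after[0] - before[-1] <= max_gap:
            suspects.append(name)
    return suspects
```

An orphan like DSC_0999.jpg with no solved photos after it would correctly not be flagged, since nothing brackets it.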
I realise that, mathematically, the orphaned photos are orphaned for the very reason that their features did not match strongly enough to the preceding or following photos, but cross-referencing the positions, filenames, and date-taken metadata of all the images seems a logical enough step to take in one last effort to place lost images correctly.
I should make clear that I am not suggesting placing the photos simply on predictive logic. Rather, the desirable step is to use a higher resolution to extract more image features from these excluded images and their suspected neighbouring images, and then make a second pass at connecting the groups. Even without using higher resolutions to harvest more features, the deduction could be used to strongly suggest image pairs to jump-start that part of the reconstruction again.
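The pair-suggestion side could equally be driven by the date-taken metadata. A rough sketch, assuming EXIF timestamps have already been read into filename-to-datetime maps (the function, parameters, and 30-second window are all my own illustrative choices):

```python
from datetime import timedelta

def candidate_pairs(orphan_times, solved_times, window=timedelta(seconds=30)):
    """Suggest (orphan, solved) image pairs for a second matching pass,
    pairing each orphan with solved photos taken within `window` of it.
    Inputs map filename -> date-taken timestamp (e.g. from EXIF)."""
    pairs = []
    for orphan, taken in orphan_times.items():
        # consider solved photos in order of temporal closeness
        for solved, s in sorted(solved_times.items(),
                                key=lambda kv: abs(kv[1] - taken)):
            if abs(s - taken) <= window:
                pairs.append((orphan, solved))
    return pairs
```

The matcher could then retry just these pairs, perhaps at higher resolution, rather than re-running exhaustive matching over the whole collection.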
Perhaps the current synther already uses some amount of this sort of reasoning, but it is not apparent to me.
Please see this collection for a prime example of where I feel the above approach would have benefited:
My fellow users can load that synth on an iPhone in the iSynth app and switch to Orbit Mode to see lines or arcs of camera positions with obviously missing chunks, which gives a better idea of what I'm talking about. (Download iSynth here: http://www.cs.brown.edu/people/gpascale/iSynth/ )
Even in loosely structured synths such as mine, I realise that the photographer may point the camera at something else entirely, in unpredictable ways that interrupt the flow through the synth; still, I do feel there is a use case where this sort of deduction is appropriate.
Given the chance to see the camera positions laid out in front of them, with an orphan highlighted along with the reconstructed positions of its preceding and following neighbours, any human being could guess where the orphan should be placed; it is simple intuition that even a child could apply.
I'm curious as to why this is or is not a good suggestion, and whether this sort of process is already part of the synther.