This is an idea that I have had since about 1999. I considered it too far-fetched and half-baked to present other than to a few colleagues and friends ... until I saw that Ramesh Raskar of the MIT Media Lab had developed a camera that can view objects around a corner by viewing reflected light. So I sent the following note to him on November 16, 2010. But since this is something now in the minds of other researchers, I want to make it universally available.
The ultimate camera light capture/analysis
Before I go into my subject, let me give you my credentials, so that you will see that what I am proposing is legitimately feasible.
I am a retired complex data analyst from Chevron. I have a Master's Degree in Mathematics and was a member of the Mathematical Modeling group of Chevron Information Technology Company. When the PhD engineers, geologists, scientists and managers had a data analysis problem that their discipline's tools could not adequately resolve, they dropped those problems in our lap. I also was the head of the cross-company Advanced Information-Based Modeling (AIBM) research program for more than 10 years, managing a budget of roughly $150K per year for that program.
In my role with AIBM, I attended and reported on the leading state-of-the-art conferences: World Congress on Computational Intelligence (WCCI), Knowledge Discovery and Data Mining (KDD), Genetic and Evolutionary Computation Conference (GECCO), conferences on machine learning, neural networks, artificial life ... all of the cutting-edge fields of applied Mathematics. So I came home with many full-baked ideas, some of which led to projects within Chevron. But I also developed my share of half-baked ideas, "what if" ideas. And what I will tell you about below is one of those. However, your work on the around-the-corner camera now makes me think this idea is now somewhat more than half-baked.
It really comes out of all of the hundreds of presentations that I have seen on teasing meaning out of data: not just finding better tools for dealing with signal-to-noise problems, but finding tools to deal with multiple signals coming through the same space.
So here is what I conceived. Look at your office window. You see an image of what is outside, directly in your line of vision, augmented by your peripheral vision. Note the spot on the window itself where you were looking. Now change the angle at which you look through that same piece of glass. Perhaps stand up, or move a few feet to the right or to the left or approach the window and look at an angle downward through the same glass of the window through which you looked the first time. You will see things that you did not see at all the first time, even in your peripheral vision. And yet the light that is coming to your eye is coming through that same piece of glass, just at a different angle. In fact, the light from every viewable angle is coming through that part of the window at the same time. It is merely a matter of having the instrument that can interpret that light to see at an angle that the eye does not see.
This is data, coming through that piece of glass. And theoretically, just as we can train neural networks or genetic algorithms or whatever machine learning method we choose, we could capture the light from the target angle and then train the algorithm to "see" from that angle -- any angle that we choose. Theoretically anything at all that can be seen from any angle through that piece of glass can be seen in such a way by a properly trained algorithm.
But it does not stop there. Now consider a photograph, perhaps a photograph from 150 years ago. The light passing through the lens of that long gone camera was just like the light passing through your office window. And the light reaching your eye from the plane of the two-dimensional photograph theoretically contains all of the light that was present, from all directions, when the photograph was taken. You could use the same technology that you used with the window on the photograph. You could see things that we would normally say were not in the picture ... and yet they really are in the picture but just not in a way that we have ever before had the means to see.
It all comes down to data analysis, which is probably what you realized when you developed your around-the-corner camera.
Of course, both your camera and my ideas are fraught with privacy issues. If any photograph can be turned into something that shows far more than the eye can see, we will be able to see things that were not intended. Every new technological advance carries both great potential for good and great potential for harm, and this is certainly no exception. So this aspect must be kept in mind in all things.
I would be very glad to talk with you about this. I am going to post this note on my personal web site www.wwjohnston.net -- since you have clearly taken the first step in realizing things in this direction, perhaps others will also be interested. But I clearly see you as the one having both the vision and the means to carry this idea into full-baked form.
Send E-mail to firstname.lastname@example.org
Send mail to:
Walter Wesley Johnston
1865 Herndon Avenue, Suite K-187
Clovis, CA 93611-6163