
He's saying he can make a commercial sensor in 5 years. Now, that's admittedly not saying it'll be in phones in that time, but when it works it won't just be competitive with CMOS sensors, it'll do things they aren't capable of.


There are planar Fourier capture arrays, which are also lensless, can be implemented in an existing CMOS process, and seem to be better than this by every metric. AFAIK they are currently being evaluated for eye-tracking sensors, which aren't sensitive to resolution (128x128 is enough) but need to be flat.
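
For intuition on why these can be cheap to reconstruct: in the idealized model, each pixel of a planar Fourier capture array reports one 2D Fourier coefficient of the far-field scene, so recovery is a single inverse FFT. A toy sketch (idealized, not the actual PFCA pipeline):

    import numpy as np

    # Idealized model: each pixel of a 128x128 planar Fourier capture array
    # reports one complex 2D Fourier coefficient of the far-field scene.
    # Real devices solve a calibrated linear inverse; this is the toy case.
    n = 128
    rng = np.random.default_rng(0)
    scene = rng.random((n, n))               # stand-in for the true scene

    measured = np.fft.fft2(scene)            # idealized pixel outputs
    recovered = np.fft.ifft2(measured).real  # reconstruction: one inverse FFT

    assert np.allclose(recovered, scene)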

There are also single-pixel compressed sensing cameras, which can be made lensless as well and are limited by the optical modulator specs.

The problem with lensless sensors is always the computing power required to reconstruct the final image.
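
To make the cost point concrete: single-pixel reconstruction is typically an iterative sparse solver run over the whole sensing matrix. A minimal sketch of generic ISTA (not any particular camera's code; the sizes are hypothetical):

    import numpy as np

    def ista(Phi, y, lam=0.05, iters=500):
        # Solves min 0.5*||Phi x - y||^2 + lam*||x||_1 by
        # iterative shrinkage-thresholding.
        x = np.zeros(Phi.shape[1])
        L = np.linalg.norm(Phi, 2) ** 2      # step size from the Lipschitz bound
        for _ in range(iters):
            x = x - Phi.T @ (Phi @ x - y) / L                    # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # soft threshold
        return x

    rng = np.random.default_rng(1)
    n, m, k = 256, 96, 8                     # pixels, exposures, nonzeros
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random modulator patterns
    y = Phi @ x_true                                # one scalar per exposure

    x_hat = ista(Phi, y)                     # hundreds of full matrix products
    print("recovery error:", np.linalg.norm(x_hat - x_true))

Hundreds of iterations over the sensing matrix, per frame, is where the computing power goes.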


Admittedly I'm reading between the lines here, but I suspect that these don't need computationally expensive reconstruction. The reconstruction is done entirely by setting up the delays correctly on the input sensors; the image is then formed by simple interference. You'd read the image off just like an ordinary CCD.
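
If that reading is right, the receive side would look like classic delay-and-sum: fix the per-element delays, and "reconstruction" is just a coherent sum. A rough sketch under that assumption (the array geometry and names here are hypothetical):

    import numpy as np

    C = 3e8                                       # propagation speed (m/s)

    def focus_delays(point, elements, c=C):
        # Per-element delays that align arrivals from `point` at the summer.
        dists = np.linalg.norm(elements - point, axis=1)
        return (dists.max() - dists) / c          # relative delays, all >= 0

    def delay_and_sum(signals, delays, fs):
        # Shift each channel by its delay (in samples) and sum coherently.
        out = np.zeros(signals.shape[1])
        for sig, d in zip(signals, delays):
            s = min(int(round(d * fs)), out.size)
            out[s:] += sig[:out.size - s]
        return out

    # Hypothetical 8-element line array, focused on a point 1 mm away.
    elements = np.stack([np.arange(8) * 0.5e-6, np.zeros(8)], axis=1)
    delays = focus_delays(np.array([0.0, 1e-3]), elements)

Nothing iterative happens after capture; the expensive part, if any, is choosing the delays.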


From the article:

"He concedes that there are challenges: improving the optical performance of the elements; suppressing spillover effects between different signals in the device; and honing the algorithms that calibrate the camera’s performance."

The line about "honing the algorithms" is the complexity orbital-decay is referencing. These types of sensors do need computationally expensive reconstruction to generate the images we're used to seeing from the traditional optics and sensors currently found in many consumer devices. The filtering and focusing work the lenses do still needs to happen; these sensors essentially rely on complex math to replace the finely ground glass.
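
For a sense of what that math looks like: many lensless designs model the sensor reading as the scene convolved with a calibrated point-spread function, then invert that numerically. A generic Wiener-style sketch (illustrative only, not the article's algorithm):

    import numpy as np

    rng = np.random.default_rng(2)
    scene = rng.random((64, 64))
    psf = rng.random((64, 64))
    psf /= psf.sum()                               # stand-in calibrated PSF

    H = np.fft.fft2(np.fft.ifftshift(psf))         # transfer function
    measurement = np.fft.ifft2(np.fft.fft2(scene) * H).real  # sensor model

    eps = 1e-3                                     # regularization strength
    recovered = np.fft.ifft2(np.fft.fft2(measurement) * np.conj(H)
                             / (np.abs(H) ** 2 + eps)).real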


Those algorithms are already very well understood from radar and sonar, though. It's not like he's starting from scratch.

And I'd take issue with the characterisation that they need "complex math to replace the finely ground glass" - what replaces the glass is the analogue photon detection, delay, and amplification channel on the front end. My suspicion is that the only "complex math" is in calculating the delays before capturing the image, not in doing the reconstruction (again, unless I've missed something unique about moving from GHz to THz).
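
For what that pre-capture math might look like: steering a flat receive array is a closed-form delay calculation, done once per focal setting before any photons arrive. A sketch with illustrative numbers (assuming a ~1550 nm optical carrier, which the article doesn't specify):

    import numpy as np

    C = 3e8                                   # speed of light (m/s)
    f = 193e12                                # ~193 THz, i.e. a 1550 nm carrier
    pitch = (C / f) / 2                       # half-wavelength element spacing
    positions = np.arange(64) * pitch         # 64-element linear array

    theta = np.deg2rad(20)                    # desired look direction
    delays = positions * np.sin(theta) / C    # classic linear-array steering
    phases = (2 * np.pi * f * delays) % (2 * np.pi)  # equivalent phase shifts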


Five years? Read the aforelinked XKCD; it seems a pretty accurate representation of reality to me.



