
That isn't true: computational photography is not the thing traditional cameras are missing. What they need is a seamless user experience. While a traditional camera excels at capturing (in terms of user experience), it falls short at everything after that: processing, sharing, and so on.

Another important factor is “the best camera is the one you have with you.” That killed the compact camera segment, but I don’t think it eats much into the interchangeable-lens camera market, which is what you claim is becoming irrelevant.

To show that computational photography is not what's missing: it isn't new at all. Digital photographers have been using those techniques all along, including the HDR you mentioned. Some Sony cameras can stitch panoramas automatically, so in-camera processing exists too (some Sony cameras could even install apps that processed HDR, IIRC, but they killed that online store). Again, it comes down to user experience: if it is seamless and doesn't take much extra effort, it becomes a more useful tool (in terms of how often you use it).
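To make concrete what I mean by the technique not being new: here is a rough sketch (in Python, with a made-up weighting scheme, not any vendor's actual pipeline) of the bracketed-exposure merge that both cameras and phones perform:

    import numpy as np

    def merge_exposures(frames, exposure_times):
        # frames: aligned, linearized images as float arrays in [0, 1]
        # exposure_times: shutter times in seconds, one per frame
        num = np.zeros_like(frames[0])
        den = np.zeros_like(frames[0])
        for img, t in zip(frames, exposure_times):
            # weight mid-tones most; clipped shadows/highlights count little
            w = 1.0 - np.abs(img - 0.5) * 2.0
            num += w * (img / t)  # scale each frame back to radiance by its exposure
            den += w
        return num / np.maximum(den, 1e-6)  # estimated linear radiance map

    # e.g. hdr = merge_exposures([dark, mid, bright], [1/500, 1/125, 1/30])

The phone's advantage is only that this runs invisibly on every shutter press, not that the math is unavailable to a dedicated camera.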

Your criticism of “producing worse results” is, again, not about computational photography. It is a matter of FPS, i.e. how fast the sensor reads out data. Cell phone cameras led here, especially the iPhone (a few years ago Apple hired an indie app developer who had optimized his app heavily with hand-tuned intrinsics to capture the highest FPS ever seen on that model, IIRC 20 fps). But nowadays even full-frame sensors can be read out at insane frame rates, which makes your point irrelevant. That is no coincidence, because a) who do you think develops the sensors used in smartphones, and b) fast readout has many other uses beyond producing low-artifact HDR.

Also, if you want no-artifact HDR rather than merely low-artifact HDR, get the biggest sensor you can and shoot single-shot HDR, not a cell phone.

Lastly, computational photography is a necessity for smartphones simply because of physical limits. It really is a blessing that you can squeeze that much out of such a tiny camera. But make no mistake: it doesn't win in the quality department, and people who dislike it don't do so because it is “cheating” but because of the quality.

To name one feature that was once considered cheating and eventually became standard in lens design: aberration correction. I think Olympus started it in Micro Four Thirds: the lens's electronic communication sends barrel-distortion and CA correction data to the camera, which writes it into the EXIF data and applies the correction on the fly in every preview and rendered output (i.e. it is a mandatory correction). At first this was seen as an inferior lens-design philosophy; eventually it was recognized as a better optimization (since you only care about the end result, and whatever means gets you to a better end is a better optimization), with the biggest benefit being size (minimized size for the same quality).
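As an illustration of how that kind of mandatory correction can be applied once the lens reports its coefficients (the k1/k2 radial model and the nearest-neighbour sampling here are my own simplification, not Olympus's actual metadata format):

    import numpy as np

    def undistort(image, k1, k2):
        # Correct barrel/pincushion distortion with a simple radial polynomial.
        h, w = image.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
        # normalized coordinates centered on the optical axis
        x = (xx - w / 2.0) / (w / 2.0)
        y = (yy - h / 2.0) / (h / 2.0)
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2  # radial distortion polynomial
        # for each corrected output pixel, sample the distorted source location
        src_x = np.clip(x * scale * (w / 2.0) + w / 2.0, 0, w - 1).astype(int)
        src_y = np.clip(y * scale * (h / 2.0) + h / 2.0, 0, h - 1).astype(int)
        return image[src_y, src_x]

Nobody calls this cheating anymore; the correction is just part of the lens design budget.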

So no, it is not a controversial take, it is just a wrong take.


