Have iPhone cameras become too smart? (newyorker.com)
186 points by tomduncalf on March 21, 2022 | 299 comments



Controversial take: DSLRs and mirrorless cameras have largely made themselves irrelevant by ignoring computational photography advancements (computational photography is seen as "cheating").

I own several "real" cameras: FF, M43, and over a dozen lenses. Multiple camera companies. On paper many have very impressive specs. The problem is that camera companies focus primarily on raw specs with software being a secondary concern. To give one concrete example, I can press one button on a cellphone (almost any) and it will capture more dynamic range than my $2K DSLR.

How can a tiny sensor in a cellphone capture more dynamic range than a massive sensor? Software. It is stacking multiple rapid exposures using an electronic shutter and computationally merging them. You can do that with a DSLR, but it is time-consuming, happens outside of the camera, and produces worse results (e.g. motion in the frame will ruin your stack).
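
A minimal sketch of what that merging step looks like, assuming the burst frames are already aligned and in linear (not gamma-encoded) values; this is the rough idea, not any vendor's actual pipeline:

    import numpy as np

    def merge_exposures(frames, exposure_times):
        # frames: list of aligned float arrays in [0, 1], linear sensor data
        # exposure_times: shutter time in seconds for each frame
        acc = np.zeros_like(frames[0], dtype=np.float64)
        weight = np.zeros_like(acc)
        for frame, t in zip(frames, exposure_times):
            # trust mid-tones most; clipped highlights and noisy shadows least
            w = np.clip(1.0 - np.abs(frame - 0.5) * 2.0, 0.05, 1.0)
            acc += w * (frame / t)   # normalize each frame to a common radiance scale
            weight += w
        return acc / weight          # linear radiance map; tone-map it afterwards

Phone pipelines also register the frames and reject pixels that moved between shots, which is exactly the part that is painful to reproduce by hand from a DSLR.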

It is super aggravating, but it isn't actually surprising if you look at the amount of money spent on phone photography software alone compared to almost the entire camera industry. I've given up for most things; a cellphone can beat all but the absolute top end dedicated cameras and even then only for niches.


As a photographer on the side: nahhhh. I have a “real” camera because I want control. Give me the RAW data, and I can move mountains.

What has made interchangeable lens cameras increasingly irrelevant is the workflow. On a phone, somewhat obviously, I take a photo, and post it to Instagram, instantly. On a stand-alone camera? I…

…uh, well if I’m shooting RAW I need to either have RAW/JPEG turned on, or convert to JPEG in camera, and then save that image, and then use the awful built-in wifi to connect to a shoddily made app that will randomly not connect for reasons, and then hope the transfer finishes, or use an SD-to-Lightning adapter and dongle up my phone.

The DSLR/mirrorless workflow is still, conceptually, film, except replace a darkroom with Lightroom. It’s terrible for run-and-go casual shooting, which was a huge part of the lower-end market.


And Lightroom has a fair bit of computational photography built into it, as does Photoshop. Though I mostly shoot raw on my iPhone as well as on my “real” cameras. Phones are mostly easier for snapshots and sharing.


Distortion correction, chromatic aberration correction, perspective correction…yeah, all nice stuff. Especially the perspective feature, which is relatively recent. It’s nice to be able to straighten up an image quickly, in the Lightroom workflow, without having to round-trip to Photoshop.


I don't know about the latest versions of Lightroom (I still use my perpetual license) but the one I have remains untouched by the icy tendrils of "A.I.", and my output is still worlds better than what a phone produces.


AI is admittedly an overused term. But Lightroom absolutely has algorithms of various types that it uses when you make any number of changes. Do you know exactly what vibrancy does, to use one of the more trivial examples?


There are things you cannot do on your DSLR and computer setup. For instance, the phone can take the same picture simultaneously at three different focuses. That's just data you don't have, whether you blend it together by hand or not.


Excuse me? Focus is a property of the lens, no? I understand rapid e-shutter for dynamic range (kinda like bracketing then mixing in post), but how exactly is a single camera taking three focuses? Or do you mean the two+ distinct cameras being fused together? Because I don't think they do that (yet), but I could be wrong.

Personally, I like the control a more manual camera offers, but appreciate the quality of my phone's camera at times too. Generally, I agree, technology that tries to outsmart us feels out of place. Technology that tries to empower us feels just right.


yes, they do fuse different lenses together on pixels


How does it know my finger is over one? I just tested this on my iPhone 13 Mini, and the two photos, one without a finger over a lens and one with a finger over a lens, both look identical.

Given that the focal length of the lenses is different, it would be hard to "fuse" the images together, though I suppose it wouldn't be impossible. All this to accomplish what? A higher effective aperture? Or are you imagining some strange blending of foreground and background in focus, with middle-ground out of focus?


I should have capitalized Pixel, since I meant the phone. iphone doesn't do this as far as I know. if I remember correctly it's to blur the background.


generally, people use a DSLR with a bit more intention, and it makes sense to optimize the sensor and lens to capture a single image as envisioned by the photographer.

however, there are lenses available that combine multiple focuses, focal lengths, and framings by projecting multiple images onto the sensor, to be captured with a single actuation of the shutter.

modern cameras also include features for automatically integrating multiple captures to expand details such as depth of field, dynamic range, resolution, and so on. your phone camera does this too, it is just concealed and non-obvious.


> The DSLR/mirrorless workflow is still, conceptually, film, except replace a darkroom with Lightroom. It’s terrible for run-and-go casual shooting, which was a huge part of the lower-end market.

Yep. And there's a whole culture around it, too. The manufacturers who try to deliver improvements - Olympus, for example, has quite a bit of in-body smarts - generally get shat on for doing so. There's a lot of emotional investment in the "digital darkroom", make-everything-hard mindset.


I know we don’t say “disruption” any longer, as it’s a phrase co-opted by hucksters trying to sell ad tech…but the camera manufacturers got disrupted.

Because of what you said right here. They kept trying to appease their highest-end clientele without actually thinking about the job to be done for 98% of photography, which is to post kid and cat photos instantly.


No camera can compete against a phone for "post cat pic instantly", because a 10 year old camera phone can already produce a good enough picture and a phone has a cellular connection and instagram app to upload it instantly. A camera has to compete in the "better pictures" space.


That's you, but I have a "real" camera because I want higher quality photos. And it used to be that way

But when I compared my DSLR with my more modern cellphone - well, the colors were just nicer on the cellphone camera.

Just because the DSLR would have better software wouldn't force you to use it. If you want control do that, if you want incredibly high quality point-and-shoot photos, you can do that too.


It’s…extremely debatable that a phone produces better colors.

For my particular weird niche of photography: there is no way a phone is going to get anywhere near the richness of tone and color I get in very low light.

https://www.flickr.com/photos/perardi/albums/721577201444334...


Just a heads up: there are a few NSFW or borderline NSFW photos in here. It might not be a bad idea to tag that.

Just out of curiosity, are these flat out of the camera or after processing?


Almost all photos people put up online have some form of processing. Including these. Most cameras don’t have this contrasty of a look OOC.


A fellow Sony user, I see.


Sony user here. I just plug into the USB-C on the camera (a7R IV) to download photos from it, it mounts the SD cards as USB drives. No sense faffing about with their crappy wifi program.


Does your phone read the RAW files, or are you shooting JPEG? (iOS and macOS don’t preview Fuji RAW files, which is not a surprise, given how they require very special handling as they’re not traditional Bayer filter sensors.)

And how is the performance when editing on the phone? Those are some big files.


My iPad Pro reads the RAW files from the Sony A7R3. Performance is fine but I stick to JPG due to file size. The iPad 6 had terrible performance with the same RAW files; apps would frequently crash due to memory limitations.


About what I expected.

I’ve only tried editing Nikon Z7 RAW files on an iPad Pro, and I honestly don’t remember which generation of Pro. It was OK, but not stellar.

The file size, oof, yeah. That’s a good point. My lossless compression RAW files from my Fuji are ~30 megabytes. Given an iOS device averages 128GB of storage (my rough ballpark guess), that’s going to fill up quickly.


I don't edit on my phone. The screen isn't color calibrated and it's tiny.


Are you getting this to work straight to mobile? I use the crappy wifi share to get shots onto my phone on the go, but I'd much rather get the RAW files over vs. a downsampled JPEG if possible. Might be worth carrying a small USB-C to lightning cable if it works straight to iPhone.


Sony? Luxury.

Fuji. It’s like trying to connect a Palm Pilot to wifi or something.


I was thinking the same thing.


I get away with Sony Imaging Edge on iPhone + Lightroom mobile to post to Instagram on the fly. It's only 2MP JPEG but it's good enough for on-the-fly Instagram stories.


You need 3-5x exposure bracketed RAW to recreate most phone tricks, and there's no software that can do it. The big one is HDR photos in HEIF; "HDR" photo editing software does tone mapping into sRGB, which is literally the opposite of an HDR photo.

(I'm actually not sure why this is, since pro video workflows handle it. I think it's because phones and movie theaters have HDR displays but PCs don't.)

Presumably they can do something like deep fusion for noise reduction/superresolution even better than you already get from a real camera, but nobody seems very interested in doing it. Adobe can't even improve their image resizing for some reason (backwards compatibility?); when they come up with a new resizing method they have to put it in a different place in the UI and give it a silly name like "neural enhance RAW".


I’m curious to hear your take on phones shooting in RAW then?


The workflow is still easier that way.

I do have to go through the step of post-processing…but I'm doing that with JPEGs from the phone regardless.

Completely tangential rant: #nofilter is dumb. Your camera is actually producing a low-contrast black-and-white image that is then demosaiced and interpreted according to an engineer somewhere who is trying to make it look fairly literal. There's filters all the way down.

Wait, where was I? Right. Post-processing. So once I've edited the RAW file in the Photos app, or in Lightroom…boom, it's right there for me. It's on my phone, it's getting backed up to iCloud, and I can trivially post the photo anywhere.

Whereas, on a stand-alone camera, I have to jump through a series of connecting hoops. So RAW doesn't have much impact on the workflow.


You're not imagining the stuff you can do with raw sensor access.


As a "serious amateur" photographer at this stage of my life, the last thing I want my camera to do is be "smart".

I use a Fujifilm X-Pro3, and while I have a variety of lenses, I almost always have my 23mm f/1.4 mounted. With the crop factor that gives me a "traditional" 35mm focal length. I almost always shoot with fully manual exposure, and only use autofocus about half the time.

I chose this kit specifically because I have full control over the image with physical controls to make changes - I can change shutter speed, aperture, ISO, and even dynamic range without taking my eye out of the viewfinder.

The X-Pro3 is pretty much made for me and my use case. While it has a "screen", it's folded into the back of the camera and isn't visible unless I manually flip it down. It can't practically be exposed when I'm using the camera normally. It's handy if I want to shoot at waist level or if I want to show someone standing next to me an image... but that's it.

I have nothing against "computational photography", and it definitely has its place - but for me, as for most photographers, that place is in post-processing. Adobe is responsible for that stuff, not my camera. I expect my camera to produce a RAW file as an artifact, not an "image".


As a person who makes money shooting video, I wish I had more reason to invest in my camera body. Interchangeable lenses should come with chromatic aberration/distortion profiles on a chip that communicates with the camera, so we can buy cheaper and lighter lenses with fewer elements. You can always buy the fancy lens with perfect optics if you prefer that aesthetic. Professional camera software would always expose on/off switches for the correction, so the photographer retains control over the shot. I really don't see the problem with giving pro cameras more optional smarts; it just makes them more versatile.


Are you sure cameras don’t do that?

I honestly do not know. They definitely do for still photos—Nikon has had chromatic aberration correction and lens distortion correction for quite a while, and it’s keyed to individual lenses.

If they aren’t doing that in video, I suspect it’s a pure horsepower limitation. Reading the data off the sensor, and applying the corrections in real-time while writing to card may be too computationally intensive or bandwidth-limited.



>Interchangeable lenses should come with chromatic aberration/distortion profiles on a chip that communicates with the camera, so we can buy cheaper and lighter lenses with fewer elements.

Pretty much this, and this is what some people already do with pancake lenses, correcting the result in post. However there's a problem with this: for the universal vendor-supplied profiles to work well, the lens must still be made within pretty tight tolerances, so the cost reduction is limited. And proper manual calibration is really expensive and tedious, which defeats the point.

I don't know if there's a niche, but I'd love to see some startup doing the full-fledged lens and body calibration (STF/OTF and stuff) on demand to make high quality custom tailored profiles. That way you can use really cheap and compact lenses to get the same result; software correction can do wonders nowadays.


Software can't fix a slow aperture.


Of course. But it can fix many artifacts like field non-uniformity and distortions (both spectral and spatial), which otherwise require more lens elements, precise production and exotic materials to fix. Software correction is typically slightly lossy, but it's a reasonable trade-off. Per-lens/per-body calibration extends this much more than it's currently possible with vendor-supplied profiles, but requires pretty complex and expensive gear.
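
For what it's worth, the correction those calibration profiles encode is not exotic. A rough sketch of the standard Brown-Conrady style model in normalized image coordinates; the coefficients k1, k2, p1, p2 are the kind of numbers a per-lens (or per-copy) profile would store:

    def distort(x, y, k1, k2, p1, p2):
        # Maps ideal (undistorted) normalized coordinates to where they land
        # on the distorted image. To correct a photo, evaluate this for every
        # output pixel and resample the source image at the returned position.
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        return xd, yd

Lateral chromatic aberration correction is essentially the same remap applied per colour channel with slightly different coefficients.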


It can if the scene is willing to sit still. You can get rid of shot noise by fusing different exposures.


You've just described Olympus, and they mostly get shit for it. Apparently it's cheating or something.


The Olympus cameras look sweet. Hadn't looked into them before. They seem a bit more stills-focused, but very nice.


Ignoring the computational parts, Olympus makes some spectacular optics but isn't recognized (much) for those, either.


Pretty sure camera lenses already contain a ton of information about distortion and chromatic abberation which the camera uses to compensate automatically. But this is generally disabled when shooting RAW.


Lightroom has profiles for most lenses, and since it knows which one was used from the EXIF data, applying the correct correction is all automatic.


I think it would be nice to have a shooting mode where you could create a "folder of files" from one shot. maybe call it multi-raw?

examples:

  - exposure stacking
  - focus stacking
  - time lapse 
  - burst
  - panorama (or any kind of increased resolution)
You could generate a preview image in-camera as an aid, but you would be free to use external tools on the individual images afterwards.
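
Purely as a hypothetical illustration of the idea (no vendor does exactly this), a "multi-raw" capture could just be a folder of ordinary RAW frames plus a tiny manifest describing how they relate:

    multi_raw_manifest = {
        "mode": "exposure_stack",            # or focus_stack, burst, panorama, ...
        "frames": [
            {"file": "DSC0001.RAW", "exposure_ev": -2.0},
            {"file": "DSC0002.RAW", "exposure_ev": 0.0},
            {"file": "DSC0003.RAW", "exposure_ev": 2.0},
        ],
        "preview": "DSC0001_merged.JPG",     # the in-camera preview image
    }

External tools could then merge the frames however they like, or ignore the manifest entirely and treat it as a plain burst.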


I'm genuinely surprised that this isn't already a feature in professional and prosumer camera bodies.


It is.


> but for me, as for most photographers, that place is in post-processing

You’re forgetting that today everyone is a photographer. It’s not like the 90s or even early 2000s where a photo was an event and even owning a decent camera was the exception, let alone actually lugging the thing around consistently.


>Adobe is responsible for that stuff, not my camera.

I grew up on analog photography, and one of things I really appreciate about recent Fujifilm cameras is that you don't necessarily need to use Lightroom, Photoshop, etc. There's an "X Raw Studio" app, which lets you develop RAWs in-camera through a USB tether to your computer. It has a very, very limited set of RAW development choices, but for me it brings back a little bit of the feel of working in a darkroom. Limited options. I rarely use actual postprocessing software these days.


I don't see how tethering your camera to your computer is much different than popping out a card and loading any 1st or 3rd party software? Or even just shooting in some processed format?


There is no technical difference. There's no real technical argument for it -- the same thing could easily be done by software. It's more about the gestalt of photography and being limited to a small number of options done "in camera". Most of what Fujifilm has been doing in recent years has been trying to carve out a niche closer to old-style photography (the XPro3's removal of the back screen in the default configuration, for example, so that chimping becomes difficult). When you pull files into Lightroom/Camera Raw/Capture One, there is tons of flexibility, but there's also a ton of sliders and it eventually ends up feeling onerous and not fun in the long term for me. (If I was doing professional photography it would be different of course.) I think that's a big part of what makes people prefer their phones.


Fujifilm cameras have an image processing chip that is used for film simulations, which this computer software also uses. Essentially it's in-camera editing controlled through a PC.


That's probably a sign of the famous Japanese inability to write software more than anything. Sony cameras come with a free version of Capture One that recreates the official picture styles, though that's not that interesting.


> a cellphone can beat all but the absolute top end dedicated cameras and even then only for niches.

In what aspect? Artistically, a 2010 DSLR would shit on any single phone on the market

If your goal is to take pictures to document your life then yes, use your phone, 100% the better choice here. But if your job or passion is to make pictures, a proper camera will always be better; even my 1965 Leica produces better pictures than the current top end iPhone.

The problem with DSLRs/mirrorless is that they're proper tools and just like with a brush and a canvas you have to put time and effort to master them. No amount of computational photography will make you a good photographer and if a modern 2k$ camera is the bottleneck it certainly is more of your fault than the camera.

I also think the intersection between casual users who want an iPhone-like experience and people who drop 2k$ on a camera is extremely small. Different tools for different jobs


> The problem with DSLRs/mirrorless is that they're proper tools and just like with a brush and a canvas you have to put time and effort to master them. No amount of computational photography will make you a good photographer and if a modern 2k$ camera is the bottleneck it certainly is more of your fault than the camera.

This is a good analogy, I will have to borrow it.

I've got friends who ended up buying expensive DSLRs after they saw my photos taken with an f/1.8 lens. But they had kit lenses and were complaining about their cameras. I told them to get an f/1.8 lens. Then they were complaining about zoom. Also they could not use the maximum aperture outside, so I helped them get ND filters. Now they are complaining about that. And they forget to take the ND filters off indoors, get way too many photos incorrectly focused, etc.

Most people expect pro cameras to just act like cell phone cameras.


What's a good resource to learn how to use cameras correctly?


The best resource is probably your local community college or art center. Nothing beats a class with assignments to force you to practice.

And the internet. Just search for whatever questions you have and you’ll get tons of results. Now that it’s all digital, you can try a bit of everything and see what you like (and ignore what you don’t).


I did take photography classes in college that helped me understand basics.

But I learned a lot more by joining a local photography meetup. They would hire models to practice portrait shoots with and the main people were very helpful. They charged $50-$60 per shoot though.

Fair warning, back then I ran into a lot of photographers who did photography for living and were pretty grumpy about dwindling incomes. Don't let their snarky comments discourage you from going to meetups.


"Understanding Exposure" is good on the absolute basics. However I don't remember if it covers more philosophical stuff like "zoom with your feet", and it probably won't help someone who just straight up keeps forgetting to take off / put on a filter.


I think by far the most distracting feature is zoom. It gives false sense of being able to virtually move around like a drone would. Prime lenses used in aperture priority mode and all automatics off helped me a lot.


Tony & Chelsea Northrup's Youtube channel and their book(s) is something I could recommend. (Not affiliated with them in any way.)


> but if your job or passion is to make pictures, a proper camera will always be better

The camera you have with you is better than the camera you don't. A smartphone camera which delivers 90% of the results while being on your person 100% of the time is therefore a strong contender.

> No amount of computational photography will make you a good photographer and if a modern 2k$ camera is the bottleneck it certainly is more of your fault than the camera.

- Posts about trade-offs and the loss of competitive advantage between DSLRs/mirrorless Vs. smartphones.

- Replies that they aren't a "good photographer" because they hold this opinion. And are therefore clearly the "bottleneck" in quality photographs. QED?

This is exactly why photography never evolves (see also film). The community is actually the problem. More specs! More specs! Ignore technological advancement! The camera market shrinks 10% YoY, but no changes, no listening to the market; those who point out the flaws in the strategy aren't "good photographers", just bads "bottleneck[ed]" by their lack of skill.

CPU and GPU power has doubled, power consumption has halved, and computer vision has had a renaissance. Use it onboard? Nah, that's only what 95% of the consumer and prosumer market wants; let's just continue to serve the 5% and keep the train rolling towards bankruptcy.

If your equipment needs RAW, you've just shrunk your market share by 80%. If you cannot post instantly you've just shrunk it by 90%. So fight to be the king of the tiny kingdom that is the remaining 5-10%. RAW is a great facility to have, but it is just that, a facility. When it picks up the slack for antiquated onboard abilities it is just a crutch.


> A smartphone camera which delivers 90% of the results while being on your person 100%

It delivers 0% of what I, and many, want. It's like saying a renault twingo is 90% of a porsche 911 because it can go from A to B; it's not, and people who say it is simply don't know what they're talking about or don't have the perspective to understand the need for such tools.

> This is exactly why photography never evolves (see also film)... More specs! More specs! Ignore technological advancement!

Photography _never_ evolves? Er, ok, well if that's what you truly believe I don't see much room for argumentation

> If your equipment needs RAW, you've just shrunk your market share by 80%. If you cannot post instantly you've just shrunk it by 90%.

> The camera market shrinks 10% YoY

It's fine. DVDs don't sell much anymore either; people move to what's best for them, a dedicated camera isn't it, and if a phone does 90% of what they perceive as enough then let it be. Nobody's going to buy a Sony a7R IV to take snapshots of their lunch and post them as a 480p image on Instagram.

People use phones because they have phones with them anyway, not because of the computational photography assistance. If it was as easy as you imply, don't you think we'd see viable products already? There is no market for camera-oriented phones nor for phone-like cameras


> It's like saying a renault twingo is 90% of a porsche 911 because it can go from A to B,

Well, it generally is, given 90% or more of a typical 911's driving is probably not on the track or strip. Think of how many rich retirees have them simply for the build quality and status that comes with it.

What everyone's missing is that smartphone cameras are a completely different use case than dedicated cameras; whether it be your hobby or profession, a DSLR/other standalone camera is something you take because you're trying to take stunning pictures and you're willing to tinker with the photo before and after it's shot. A phone camera is designed to make shots look stunning, and while you can shoot in RAW, it's designed to get shots for people that only take pictures to later remember the time they spent in a place - these are the same people that would've instead purchased a cheap pocket camera for their road trip and would only see those photos again when they were scrapbooking.


> In what aspect? Artistically, a 2010 DSLR would shit on any single phone on the market

I can 100% guarantee that it won't when I'm holding it.

Sure, you can invest time to get good at it and the money into the tools to apply those skills. I have other priorities so I don't and the very best option is a smartphone camera.

It makes no sense that point and shoots aren't attacking this. Even "cheap" ones have a massive edge in terms of (imaging) hardware over phones.

Someone like Apple or Google should create a dedicated camera and just knock that entire segment out from under the dinosaurs.


I would have expected that by now Nikon/Canon/Fuji would employ hundreds of ML engineers. Does anybody know whether that's going to happen, eventually? Or are they going to sit on their laurels?


Sony don’t make a multi-lens parallel processing computational camera. I think that’s telling.


DSLRs take amazing pictures easily. They collect a lot of light, which obviates the need for a lot of computer magic, and they have autofocus and autoexposure and simple basic adjustments you can play with.

What they won't do is pre-apply filters to make the colors super poppy and generically "artsy" like the phone will.


> What they won't do is pre-apply filters to make the colors super poppy and generically "artsy" like the phone will.

Of course they do. Fuji’s X100 line has a whole slate of film simulations (mimicking old 35mm film). My Olympus bodies have 10-15 art filters built in. They just don’t have fancy names like Instagram - they tend to be more descriptive.


Do they also take good pictures in low light with shaky hands? Surely there is a point where "work smarter, not harder" actually pays off - your own eyes likely have better capabilities than any camera, yet hardware-wise the latter is surely better than your slightly incorrect bio-lenses; the eyes just have a huge NN attached to them


Well, no, human eyes are way worse in low light compared to modern DSLR sensors.

With a monochrome sensor you can already exceed 50% quantum efficiency, requiring only a few hundred photons per pixel to get decent quality, and around 10,000 in the bright spots to reasonably match what a TV can reproduce (dynamic range and perceived noise).

But with a color sensor you can get recognizable/useful colors down to about 100 photons per pixel. Yes, it will be noisy, but at that light level (assuming a remotely-fast lens) your eyes will give you a hard time when you even try to read normal text. Should be enough to walk without tripping, though.
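
A back-of-the-envelope version of that arithmetic: photon arrival is Poisson-distributed, so shot-noise SNR is roughly the square root of the detected photon count (the 50% quantum efficiency figure is from above):

    import math

    for incident in (100, 400, 10000):
        detected = incident * 0.5            # ~50% quantum efficiency
        snr = math.sqrt(detected)            # Poisson shot noise: SNR ~ sqrt(N)
        print(f"{incident:>6} photons/pixel -> SNR ~ {snr:.1f} ({20 * math.log10(snr):.0f} dB)")

So ~100 incident photons per pixel gives an SNR around 7, visibly noisy but enough to recognize colors, while ~10,000 photons gets you into the high 30s of dB, where the image starts to look clean.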


Human eyes are better than cameras at a lot of things - that's one of the main problems with self-driving cars. We can see at 240fps, see light polarization, we have different "pixel wells" for low- and high-luminance environments, etc. Even the latest HDTV color system omits a lot of colors (electric greens) people can see.


You're definitely going to get a better shot from an f/1.8 lens in low light than a cellphone camera, even vs night mode on iPhone. If you just set it on a stable surface you can take exposures >10" and have it come out crystal clear. (Or carry a Gorillapod.)


An f/1.8 lens never comes out crystal clear if you want the entire scene to be in focus. Though since you mentioned long exposures you probably meant closing it to f/4+, but then it doesn't matter what the top spec is.


I didn't mean the entire image would be in focus, just that your subject would be clear.

A faster aperture will let in more light, so you will not need as long an exposure to get the same amount of detail. It can make a meaningful difference in low light shots.

Adjust your aperture to get the appropriate amount of bokeh, of course; but generally I prefer as wide an aperture as possible in most situations, as long as the focus is appropriate.


This is amazingly far away from the point that point-and-shoots are terrible at their job compared to smartphones.


That statement is completely dependent on what you think the job of the camera is.

A high-quality P&S (like the Sony RX, Fuji X100, or Ricoh GR) is going to outperform a phone camera in just about any edge-case (low light, motion, natural bokeh). Even more so if you shoot raw.

Somebody said it best in a sibling comment - a high-end camera is a tool, like canvas and brush, and you need to know how to use it, practice with it, and get in the zone when using it.

Nobody is arguing that cell phone cameras aren't amazing. They are, no question. And they're almost always in your pocket/purse, so even better. They're great for generic snaps of life.

But there are things they simply can't do - wildlife photography and outdoor sports (not enough zoom), low light - toddlers playing inside or indoor sports (sensor too small, stacking exposures with night mode doesn't work with fast moving subjects), natural feeling bokeh (portrait mode has some odd edge cases). If you don't care about any of those, then a cell phone camera is likely all you need. If you do care about those, then a high-end super zoom, mirrorless, or DSLR is going to make a massive difference.


I am not at all talking about filters or "artsy" output.

Look at the discussions about low light below. Why the heck should I know about f stops when my camera could just measure and adjust accordingly (including giving instructions to hold) like the phone does?


Smartphones rely on stacking multiple exposures in very quick succession. They can do that because a smartphone sensor can scan extremely quickly. In a smartphone, it doesn't make a difference whether you take one long exposure, or several short ones and add them up.

But the bigger the sensor, the slower it scans. Scan & stack is not yet feasible for bigger sensors.

Besides, smartphone sensors need to do this to overcome their limited dynamic range. Bigger sensors are not typically limited by dynamic range, and hence don't need this.


You're talking about the technical implementation. I'm talking about the experience of "point at thing, click button"

There is a massive gulf between smartphones and standalone cameras there in the outcome.

My point is that there is literally a product segment called "point and click" that is terrible (relatively speaking) at doing that thing.

All the replies here are focusing on how cameras can produce better results. The difference is my phone will in my unskilled hands.


Your 1965 Leica takes better pictures than a current camera!

Btw, why do people keep saying "DSLR" to mean "real camera"? "DSLRs" are not at all the ideal of picture quality; actually they're worse than mirrorless, and back when they were "SLRs" they were worse than rangefinder and view cameras.


I agree with this. I own both a mid-range DSLR and an iPhone 12 Mini. In the past six months since I got the iPhone, I've transitioned all my short films to being shot with its tiny sensor over the DSLR's -- it just looks much, much better and saves me a ton of work. I really wish there was someone making the equivalent of an iPhone's computational photography, but used on a full-size sensor. Imagine the capabilities!


Same boat. My Sony mirrorless is great (mostly), but the firmware/software just doesn't evolve, and it adds enough friction that I barely use the thing. Even though the resulting images can be better, they're only better with 5x the work (in front of a real computer). A high quality phone/ipad has a lot of benefits for HCI over a laptop, certainly for casual use.

I'd be happy with a mirrorless shell that I snap a phone into to control the rest of the workflow. Give me that iPhone auto-AI picture magic (and some tunables to turn it down when needed) and I'd be happy. Well not in bright sun, but whatever.

I was just looking for an EyeFi the other day, which are apparently no longer a thing. The Wifi workflow on my A6000 is just so bad as to be unusable, so I end up batching photos into a big editing session every 1-2 weeks, uggh :(


>by ignoring computational photography advancements

I don't believe it's true. Dedicated cameras never "ignored" the means to get better dynamic range and signal/noise ratio with stacking. It's just not needed most of the time, being a very specific trade-off for certain scenes. And RAW processing software is pretty much embracing computational photography (calibrated optics correction, noise reduction, highlight reconstruction etc, including things like auto retouching and tonemapping if you don't want to do it manually). It hugely improved over the years, being able to pull out much better results from the same RAW shot.

What they're lacking is miniaturization and speed. Big sensors and many megapixels mean slow reading and writing speeds; my smartphone can record RAW video at 4K@60fps which is out of reach for my camera. Modern super-sharp lenses became huge. Electronic viewfinder and variable aperture mechanism can't be made reasonably small. Thankfully, the need for mechanical shutter seems to slowly go away as recent Nikons show.

I hope there will be more super-compact cameras like Sigma fp in the future; combine it with a pancake lens and proper calibration ("computational photography") to compensate for the lack of corrective elements in pancakes, and you got yourself a pocket camera with vastly better capabilities.

Smartphones can sometimes get nice results and are pretty good at motion compensation in deep stacks, but they are slow, un-ergonomic and very restricted in what you can shoot well with them. And the people who use dedicated cameras tend to shoot RAW with smartphones for best results as well. (stacking is available in bayer domain so it's not a problem)


> Dedicated cameras never "ignored" the means to get better dynamic range and signal/noise ratio with stacking.

I respectfully disagree. No dedicated format for stacks (instead just filenames), no first party software to combine the stacks, and absolutely no on-board combination.

Heck, on a fairly recent camera I own, you cannot even configure the camera to always take a stack, cellphone-style. You have to bind a button to open a menu and select the stack size, every single shot.

> And RAW processing software is pretty much embracing computational photography (calibrated optics correction, noise reduction, highlight reconstruction etc, including things like auto retouching and tonemapping if you don't want to do it manually).

True. However, the market has shown time and time again that onboard is more popular than post-processing. People want to snap and post; they don't want to snap and then post the next day/week after they've post-processed it in PhotoLab or Lightroom.


The whole stacking argument is, in my opinion, a little silly.

Why do we need stacking? Because it allows you to do exposure bracketing to get better dynamic range.

Now the question is, in a high end mirrorless like, say, an a7R IV, you get something like 12-14 stops of dynamic range (expressing this as a number is kind of ambiguous). Do you really need one or two more stops of dynamic range from bracketing? I don't think so.

If you expose smartly you can absolutely make use of the incredible dynamic range.

The only problem I see is on iPhone, it shoots in HEIC which is a great format because it lets you encode much higher dynamic range (and the software support is there to display it correctly). Check out a recent iPhone and take a picture with the sun and something medium-dark and you'll see that the sun is eye searingly bright whereas the medium-dark subject is still visible. I wish there was a flow that lets me produce HEIC images (AFAIK even Photoshop doesn't allow HDR workflows). A lot of the pro imaging workflow is only "aware" of printed end results and low-end digital images like Instagram or JPEG web publishing. Very frustrating in my opinion.


> absolutely no on-board combination.

This is false. My 2016 (I think?) Olympus Pen-F does multiple-exposure combination on-board. The results aren't always the best, since you can't control much, and it's JPEG only, but it does do it.

The reference manual: https://www.manualslib.com/manual/1081745/Olympus-Pen-F.html...


The old (by now) CMOSIS CMV12000 does 4000x3000 pixels at 10bit per pixel and 300 Hz.

It reads the 4000 pixel rows in parallel, from the bottom and the top. There's no hard reason why this couldn't be extended to a larger sensor, with the per-row time at-worst linearly increasing with the row count (due to capacitance on the column-lines). But even then, the row count would only have quadratic influence on the readout time, and the column count not at all. Combines to, for fixed aspect ratio, a constant number of pixels-per-second.

So going to the 8k equivalent of 8000x6000 pixels (48MP), you'd quarter those 300fps and still have 75fps left. Assuming you take wide-screen video of 8000x4320 (some sacrificial columns that don't matter for speed and double the rows of "4K" (2160p)), the reduced row count would take that up to ~104fps.

Oh, that chip also has a pipelined electronic global shutter, so you can take the long exposure of a HDR stack while the short exposure is being read out. Also no jello and shutter down to 1/50000.

How big of a sensor were you thinking, that RAW video recording would be inconceivable for the big sensor? And yeah, sure, you'll have to do something with those many gigabits. It's about 36 Gbit/s for the above example. A PCIe4x4 connection would do, but you'd need about 2-and-a-half PCIe4 M.2 SSD's worth of sustained sequential write to handle uncompressed RAW.
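
The arithmetic in that last paragraph, spelled out (assuming the readout pixel rate stays constant as the sensor scales, as argued above):

    pixel_rate = 4000 * 3000 * 300                    # CMV12000: 3.6 Gpixel/s

    fps_48mp = pixel_rate / (8000 * 6000)             # ~75 fps at 8000x6000
    fps_video = pixel_rate / (8000 * 4320)            # ~104 fps at 8000x4320

    gbit_per_s = 8000 * 4320 * 10 * fps_video / 1e9   # 10-bit raw readout
    print(fps_48mp, fps_video, gbit_per_s)            # 75.0, ~104.2, ~36 Gbit/s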


There may be more dynamic range in an iPhone photo, but the tones are quite harsh if you don't control the light. Fine texture also seems lacking when viewing on a (desktop) computer screen.

I very much prefer the rendering of a "big camera", even with less dynamic range, than what can be obtained from an iPhone.

Of course, as others have said, the iPhone quality is still amazing for what it is, especially given that you can carry it in your jeans pockets. Even the tiny Olympus Pens are worlds away from that. And just as always, the best camera is the one you have with you.

I also agree that it would probably be great to have a "big camera" with the smarts of an iPhone. But there's probably not that big of a market for that, so the manufacturers don't invest in it. Although I'm surprised Sony didn't do anything, since they're in the camera phone business, too. At least, much more than say Nikon, Canon or Olympus.


> a cellphone can beat all but the absolute top end dedicated cameras and even then only for niches

Until you zoom in, and then it's all a big smudge. Unless you are talking about $1k+ cellphones only, which is outside of the price range of most.


$1k cellphones are extremely popular with the public and people who post photos online.


Prime lenses on cameras are much better quality for the same price, so the best kind of zoom is already the one with your feet.


My experience with the iPhone camera (X) is that smudge artifacts (AI, compression) are quite noticeable even on the device's screen. As soon as you put that on your 30" desktop monitor, the photo is nearly unviewable. This hasn't been my experience with mirrorless cameras, whose output looks pretty good at 1:1 on a 100dpi screen.

There are two photographs that come to mind where the iPhone utterly disappointed me. One was a morning park scene, rays of light filtering through leaves. The leaves just looked like blobs. Another was on a boat at night; with my eyes I could clearly see texture in the buildings on the illuminated skyline, reflections on the water, and texture in the water. The iPhone shot was just a black screen with some blurred specks of light.

I know I could have gotten both of those photographs with my A7ii. I have handheld nighttime pictures with stars in the sky. A big sensor gathers a lot of light, and even more important than getting close is collecting as much light as you can.


iPhone cameras have evolved a lot since the X. There have been 4 generations of iPhones released since then.


I meant, when you open the image at its true resolution on a computer. But I agree that prime lenses are close to always better in quality than variable length ones, even if for some situations you need a telephoto anyway (and having a fixed 300mm in my pocket does not seem practical).


There is that, but I feel like pixel resolution turns out to not matter pretty often. Obviously it does matter if you’re cropping or blowing it up for a background etc, but even printing can look good with a low res image if you’re viewing it from far enough away.

They do make clip-on zoom lenses for phones but I haven’t actually used any, not sure how much they help.


Surprisingly decent even with cheap plastic lenses. I was surprised anyway, after not expecting much.


Problem with smartphones is the lack of glass up front. I have an iPhone 13 Pro which is about as good as it gets but there’s no linear optical zoom on it. That means you have to spend half of the time getting closer and further away from things or deal with huge quality drop from cropping. The sensor and processing aren’t anywhere near as important as that.

With all that my 13 Pro is about as capable as my old Nikon D3100 which I bought over 12 years ago.

I am actually considering buying a Nikon Z50 so I can actually go back to an era of more control.


>Problem with smartphones is the lack of glass up front.

They have "periscope lenses" in some phones


Yes aware of that. The optics aren’t quite there yet. They seem to suffer from internal reflections and aberrations you don’t get on much larger better engineered lenses.

The iPhone 13 pro has a couple of annoying things already in that space.


Computational photography can be done on your computer. What you should ask yourself is why there isn't any software for this sort of thing widely available. Because the sort of people who want overprocessed photos are happy with their phones.

People buy DSLRs for many reasons, none of which include unremovable sharpening filters, automatic replacement of faces with leaves, or turning a bright orange sky blue. Or replacing the blurry 300px moon you just shot with a 3MP stock photo.


Does your computer know as much about the scene as your phone/camera at the time of shooting it? I really don’t think that a computer can do the same computational photography as a real-time device, at least not with current formats.

It’s probably the same as AOT vs JIT: the proper solution would be to move more data to the computer where we have faster hardware and fewer resource constraints, which would be the equivalent of profile-guided optimizations in my analogy (things like movement, maybe even some basic description of the scene based on NN and previously seen images)


Well there's CHDK for Canon cameras. I had it installed on a point-and-shoot a few years back and really liked the features it gave.

https://chdk.fandom.com/wiki/CHDK


I did the same some 10 years ago only to discover I'm too lazy to do post-processing on my timelapses. You can definitely use it to record successions of stills and videos, which I guess is part of what phones do. It would be better if there was native firmware support for some of that stuff though, and I imagine that some combination of firmware acquisition settings and accelerometer data would give you the same capabilities as smartphones. Perhaps a secondary camera with a high framerate to get optical flow/camera pose data from? Dunno.

You can still do it manually, but give it 3 years and you'll be able to compose that photo from scratch with an off-the-shelf generative NN photoshop plugin anyway. But still I'd say the market for DSLRs will remain the same because their main function (in my opinion) is to capture an accurate representation of reality with predictable errors in pixel space and not in some latent one.


> automatic replacement of faces with leaves

this was confirmed to be fake news.


The leaf was in front of the face but the algorithm decided to blend it in that way.


For your computer to compete, it would need the full IMU and lidar stream, which is required for many of these computational tricks.


As a sports photographer, I disagree. There's no stacking multiple frames in my world.

Also, as a portrait photographer, I disagree too. Until we can use flashes properly, I'm not getting rid of my camera.

Plus, dynamic range isn't everything.


I have to side with OP here. Your pluses are really worth it only if you need quality that 98% of the people here won't ever need (OK, 1-3 pictures printed as a bigger canvas in the next 10 years, but still). Most people today see pics on phones only, even those done by pros in studios. The rest is mostly a tablet/notebook screen; 1% (like me) view something bigger. At that level, I can take almost/completely the same pics as you with my Samsung S22 Ultra for those portraits (not the sports part, but if it's for family, videos usually trump photos for the target audience).

The minuses are staggering: the amount of money and time invested, the annoyance to shooter and subjects, the speed with which you can capture most moments in life.

If I understood you correctly, you're pro/semi-pro. That's not the topic here; it's everybody else snapping pics. As a long-term full frame shooter, for the last 2 weeks it's been an almost dead platform for me. I'll consider taking it on longer vacations, but I expect it to come out of the bag very rarely.


We're just not photographing the same subjects in the same way. 90% of the pictures I take, be it professionally or personally, I couldn't take with a smartphone.


> As a sports photographer, I disagree.

You don't disagree with the post you replied to. Niches will always require specialized equipment as it said. It was a comment about general purpose photography (and its ever shrinking market share).


It's not about the niche, it's about doing a job with the right tools.

Did you ever build a desktop gui app with php? You actually can, it's just not the most productive tool.

Conversely, you can take good pictures with a smartphone (for some specific cases), it's just not a great tool for the job.


Moving objects aren't a niche, they are real life outside of selfies and posing.


Surely the background is somewhat stationary during the shot, so there is ample room to refine that with less fast-moving data.

Sure, you have to get better hardware to properly capture the fast-moving subject of the photo, but it is not an either-or question.



For sports photography, why wouldn't you want stacked exposures for the non-moving background parts of the frame? If it allows getting a comparable noise floor with faster shutter speeds, that seems like a strictly good trade.


I see where this is coming from, but I don't care about the background being noisy and blurry. If anything, that's a plus in my book.

However I want the subject to be tack-sharp, and he's moving and changing shape all the time.


Is it really fair to say that about the software, though?

As it is, DSLR and mirrorless camera makers put a lot of money into making good lenses and bodies.

I think it is fair to say that the software was usually on desktops, with the likes of Adobe providing all features needed for making a good photo great.

Is it really necessary for the camera companies to cram half baked, low powered software into compact camera cases and increase the cost by another 600 - 1000 USD?

There was a time when everyone bought point and shoot ones to get their pictures. Mobiles have almost fully replaced those ones. Some people bought DSLRs and the like as a step up compared to PAS ones.

Good mobile cameras might push DSLRs and the like back to being a professional's choice rather than an enthusiast's choice.

I would rather they build software to interface with phones to make use of their tremendous computational power rather than re-invent the wheel.

How about having a DSLR that can sync its photos with an iPhone and an iPhone app that automates the processing of the photos?


Most of what the cell phone camera is doing in software is also available in the post processing software for DSLRs.

It's not as convenient, but the quality is better.

But HDR isn't necessary in most cases, especially given that RAW files give you an extra stop in both directions when post-processing. OTOH, the properties of different lenses (FOV, DOF, magnification, macro) can't be reproduced in a cell phone camera (only approximated in software, and not usually well).

A big DSLR isn't necessary for snapshots, but for serious applications (landscape, portraiture, nature, sports, events, etc...) phones don't hold up.


> Most of what the cell phone camera is doing in software is also available in the post processing software for DSLRs.

This is not completely true, since a DSLR doesn't have an IMU and LIDAR stream, which are used extensively. Smartphones aren't just sticking independently captured images together.


> especially given that RAW files give you an extra stop in both directions

With modern gear you can even recover 4 or 5 stops of shadow details relatively easily: https://theartofphotography.tv/nikon-z6-dynamic-range/


As someone who learned photography with film in an old Nikon but hasn't kept up, how is it possible to get two extra stops from a RAW file? And why does it need to be done in post processing instead of the camera just recording the image with the sensor's full dynamic range in the first place?


It manifests as extra detail hidden in the shadow or highlights, which is only exposed when you start messing with color curves or drag shadow or exposure controls way up.

I asked a very similar question in another comment thread recently and the answer I got was that the camera's awareness of the scene isn't nearly good enough when it comes to adjusting its own sensitivity curves, and that our eyes and brain are much better at postprocessing the raw data than the camera is.


One example is that cameras don't know what time of day it is (or rather, they could know but often choose not to care) and so auto-exposure/auto-white balance try to turn everything into noon daytime. So if you wanted your picture to look like the actual color cast and brightness you're seeing, it's up to you to remember what it looked like.

Of course, phones aren't necessarily good at this either.


Ugh, god no. I don't want fancy software trying to think for me. I want raw sensor data captured with as much detail and fidelity as possible that I can feed into my virtual darkroom workflow.

The dynamic range thing is frustrating, indeed. But even so the data is usually still there. Perhaps better scene programs are a solution for traditional DSLR cameras, but most definitely NOT "computational photography"

Photography needs better sensors and better scene programs, not a bunch of algorithms with their poorly-thought-out predefined notions of what is "good"


I don’t think you know what computational photography is. Do you also think that the famous photograph of the black hole is not true to life?

It is not AI upscaling or whatever “magic” that just guesses at noise; most of it is just clever “hacks” to get out more data than you would ordinarily get, e.g. by using the small vibration of the photographer’s hand to distribute the adjacent color filters’ input to where it belongs, and the like. Much of it is done in hardware as well.
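
A toy sketch of that hand-shake trick (multi-frame super-resolution, sometimes called handheld pixel shift), assuming grayscale frames and sub-pixel offsets already estimated from the gyro or optical flow; real pipelines do this per colour channel and far more robustly:

    import numpy as np

    def superres_accumulate(frames, offsets, scale=2):
        # frames: list of (h, w) arrays from a handheld burst
        # offsets: per-frame (dy, dx) sub-pixel shifts in input pixels
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        ys, xs = np.mgrid[0:h, 0:w]
        for frame, (dy, dx) in zip(frames, offsets):
            # bin each frame's samples onto a finer grid at its shifted position
            ty = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
            tx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
            np.add.at(acc, (ty, tx), frame)
            np.add.at(cnt, (ty, tx), 1)
        return acc / np.maximum(cnt, 1)   # average samples; empty cells stay 0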


That part is fine, though I have considered that to be just 'image processing'.

There is a clear difference between what the iPhone 13 does computationally and what my DSLR body does computationally. The body doesn't claim to use AI, it doesn't refer to machine learning models, and aside from the physical characteristics of the hardware that affect the images it takes, what I get out of it is pretty much raw sensor data. Which then gets fed into an image processing postproduction workflow.

I would think that any photographer worth their salt would know that a lot of famous photographs of the stars are false color, as NASA very prominently annotates exactly how it achieved the image that it did on its site. But I would bet they are still hand-tuning the end result. Do these new iPhones come at you with "hey, it looks like you underexposed the face so I fixed this for you" and then ask you to keep or undo the changes? From the description of it, they don't. It just happens, without your consent, because Apple thinks their phone is smarter than you are.


A lot of people assume the worst about computational photography; there were stories about Samsung phones having an "unbelievably" good Moon photography mode, where everyone assumed it was just pasting in a photo of the Moon instead of just knowing not to overexpose it.


> How can a tiny sensor in a cellphone capture more dynamic range than a massive sensor? Software

No, it's also hardware. The cellphone has much more powerful CPUs and GPUs than your DSLR, which enable this


The CPU and GPU aren't really relevant in a DSLR. What does matter, though, is the efficiency of the sensor and the quality of the lenses. If I'm shooting a DSLR I'm also going to be editing the RAW data on a much, much more powerful device than a cell phone.


The point is that if DSLRs were to do onboard computational photography like OP wants them to it would require them to have more powerful CPUs and GPUs


I kind of agree. I'm not an absolute kind of person.

For day to day photos, where there is no set up, sure, the camera phones are great because you can quickly take that photo. As they say, the best camera is the one you have with you.

As a complete amateur, trying to take a photo of my kid at soccer, I have a Sony A6000 and I bought a used Sigma 100-400mm F5-6.3 DG DN OS. The camera phone is simply useless. Also, good luck trying to take a photo of that bird and showing it to your kids!


I agree. 90% of DSLR shots don’t need to push the limits of physics. 100 vs 1600 ISO doesn’t matter if you have indistinguishable noise reduction. F/1.8 gives a nice bokeh, but if you only have two depth planes (a foreground subject and a distant background), no one will notice if you fake the blur.

That said, I still have to carry my mirrorless into the backcountry because tiny sensors can physically only let in so much light, which makes them unsuitable for astrophotography. I’ve seen the “Astro-modes” on new flagship phones and the result is always “wow, this is good...for a phone”, but nowhere close to what I can get unedited from my camera.

And the worst thing is that it’s unlikely to change. Companies that have the ability to make really good phone cameras have no interest in ILCs, and the ones who make really good ILCs don’t care about software.

And good luck trying to compete in the middle without the hundreds of millions that Apple/Google put into ML research and the decades of research that Canon has done on colour science.


Fun fact: your eyes are apparently a very low-quality camera. The reason we can make out so much detail is because our brains are really good at processing and focusing.


The replies here are focused on “control” and how DSLRs are meant to be the _raw_ experience. I think they’re missing the mark.

All DSLRs come with an “auto” mode. Why should it be the case that this mode is so rubbish? There is currently a gap in the market for a camera that combines superior hardware with superior computational photography. You can keep your RAW images but why not have a computationally processed output option to go alongside that?


> Controversial take: DSLRs and mirrorless cameras have largely made themselves irrelevant by ignoring computational photography advancements

Preach.

Next up: $500,000 cinematic cameras and lenses, because we'll be capturing light volumes and skeletal postures.

The biggest advantage won't be in equipment cost savings, either, but rather substantially increased workflow efficiency.


My Sony full frame mirrorless can take multiple frames for HDR with the flip of a setting. I'm sure whatever you're using can too. Swap the shutter setting to turn on bracketing and decide what attribute you want to change (exposure, aperture, shutter speed, or ISO are all options).
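
If you then want to merge the bracket off-camera, that doesn't have to be painful either. A minimal sketch, assuming OpenCV and three hypothetical bracketed JPEGs pulled off the card (Mertens exposure fusion needs no exposure metadata):

    import cv2
    import numpy as np

    # Hypothetical -2/0/+2 EV bracket copied from the camera
    frames = [cv2.imread(p) for p in ("br_m2.jpg", "br_0.jpg", "br_p2.jpg")]

    # Mertens fusion weights each bracket by contrast/saturation/exposedness,
    # so no exposure times are needed; output is float, roughly in [0, 1]
    fused = cv2.createMergeMertens().process(frames)
    cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))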

That said, rarely are HDR shots actually desirable IMO.


Maybe they're focusing on what's important. It's plausible they're blind and dumb, or it's possible they reviewed the iPhone's features and shots and said nope.

Most smartphone pics have a weird feel to them; I thought I was being picky, but it seems the article's author feels the same. The computational processing alters what the optics captured too much.


Possible but unlikely. They have had a decade to see this coming and innovate. They just kept selling the same thing with more megapixels.


This isn't my experience _at all_.

Until about six months ago, I carried a Fujifilm X-E2. It's an older camera (~2014), and was 16.3 MP. I upgraded to an X-Pro3, which is 26.1MP. Given that the MP stat increases with the square of resolution, that's not a huge change.
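
(For the arithmetic: sqrt(26.1 / 16.3) ≈ 1.27, so the X-Pro3 resolves only about 27% more pixels along each edge than the X-E2.)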

My iPhone X has a 12MP camera, the same as the iPhone 13 Pro. The technical resolution _doesn't matter_. It hasn't mattered since the 10MP barrier was breached.

I would prefer the Nikon that I carried in ~2004 to the iPhone 13 Pro for "photography". Note that that is in quotes, because "photography" is very different from "taking pictures". I use my iPhone when I'm with my family most of the time and it's great for that. It's great for documenting activities and "capturing memories". That's not at all the same thing as "photography".

My X-Pro3 can produce excellent images at 12,800 ISO and f/1.4. My iPhone doesn't even tell me what ISO it's capable of as far as I know, has a static f/2.4 aperture, and the OS "Camera" app doesn't even let me set shutter speed. It just "does its best".

They aren't comparable tools.


This has nothing to do with the fact that the professional camera industry has failed to adapt to new and improved tools and methods that apple has largely pioneered in the field.


Yes, a photo is not a snapshot. But no one can deny that the iPhone 13 Pro is among the best snapshot tools in the world.


I mean I get the point, smartphones did feel like incredible wonders, thin, instant, stupid simple.. but if it swaps "reality" for gimmicks then I can see why actual camera brands wouldn't pursue it or at least leverage it in their own products.

Absolute ease is rarely a requirement for quality, be it in photography, music .. or most things I guess.


As a photographer I understood that my pictures never reflected reality as I saw it with my own eyes. I used various knobs and functions to get the vision I desired.

DSLRs have their places, but it's hard to deny that the entire professional camera industry was caught off-guard by the rise of computational photography and failed to respond in kind.


They don’t swap reality for gimmicks, that “article” is fake.


I agree with you, but... I don't think it's quite there yet. "Professional" cameras aren't yet irrelevant. But eventually they will be if they don't make some changes.

This is film vs digital all over again.


Honestly don't know: can you post-process raw DSLR images on a computer to achieve the same quality as the iPhone camera?


I think the mirrorless cameras do some of this. Canon does skin smoothing, selfie modes, HDR, etc.


It is not true; computational photography is not the thing that traditional cameras are missing. What they need is a seamless user experience. And while traditional cameras excel at capturing (in terms of user experience), they fall short at everything after that: processing, sharing, etc.

Another important factor is “the best camera is the one you have with you.” This killed the compact camera segment. I don’t think it eats much into the interchangeable lens camera market, which is what you claim is becoming irrelevant.

Just to show that computational photography is not missing: it’s not new at all. Digital photographers have been using those techniques all along, including the HDR you mentioned. Some Sony cameras can stitch panoramas automatically, so there are examples of in-camera processing too (some Sony cameras could also install apps that processed HDR, IIRC, but they killed that online store). Again, it is the user experience: if it is seamless and doesn’t take much additional effort, it becomes a more useful tool (in terms of how often you use it).

Your criticism of “producing worse results” again is not a matter of computational photography. It is just a matter of FPS, i.e. how fast the sensor reads out the data. Cell phone cameras, especially the iPhone, read out very quickly (a few years ago Apple hired an indie app developer who had optimized their app like crazy with hand-tuned intrinsics to capture the highest FPS seen on that model, IIRC 20 fps). But nowadays even full-frame sensors can be read out at insane FPS, making your point irrelevant. This is not a coincidence, because a) who do you think develops the sensors used in smartphones, and b) fast readout has many other uses besides producing low-artifact HDRs.

Also, if you want no-artifact HDR, not just low-artifact HDR, then get the biggest sensor you can and take a single-shot HDR, not a cell phone shot.

Lastly, computational photography is a necessity for smartphones, simply because of physical limits. Indeed it’s a blessing that you can squeeze that much out of such a tiny camera. But make no mistake, it doesn’t win in the quality department, and people don’t dislike it because it is “cheating” but because of the quality.

To name one feature that was once considered cheating and eventually became standard in lens design: aberration correction. I think Olympus started it in M43: the lens sends barrel distortion and CA correction information to the camera over the electronic contacts, the camera writes it into the EXIF data, and the correction is applied both on the fly and in any preview or rendered output (i.e. it is a mandatory correction). At first it was seen as an inferior lens design philosophy; eventually it was seen as a better optimization (since you only care about the end result, whatever means gets you to a better end is a better optimization), where the biggest benefit is size (an optimized/minimized design for the same quality).
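
As a rough sketch of what that on-the-fly correction amounts to (the camera matrix and distortion coefficients below are made-up placeholders; a real body fills them from the per-lens profile it receives over the mount's contacts):

    import cv2
    import numpy as np

    img = cv2.imread("uncorrected_frame.png")  # hypothetical frame straight off the sensor
    h, w = img.shape[:2]

    # Placeholder intrinsics and a mild barrel-distortion coefficient
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.array([-0.12, 0.03, 0.0, 0.0, 0.0])

    corrected = cv2.undistort(img, camera_matrix, dist_coeffs)
    cv2.imwrite("corrected_frame.png", corrected)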

So no, it is not a controversial take, it is just a wrong take.


> I've given up for most things, a cellphone can beat all but the absolute top end dedicated cameras and even then only for niches.

Phone cameras absolutely unequivocally cannot beat the clarity, sharpness, detail, natural bokeh, shutter speed, and overall quality of even an affordable dedicated camera in the $1K range. The only time a phone camera can somewhat keep up is when you're looking at the photos on your tiny 4" phone screen and throwing some heavy duty filters on it to the point where anything will look "good". Throw that photo up on a computer monitor, or TV screen, or get it printed and you can immediately see the difference just in sharpness and detail alone.

Yes, phone cameras are far more convenient, and they have more filter options, and so on. But there's no comparison whatsoever. It's just that most people don't care enough when they're putting it up as a tiny little square on Instagram or sending it as a Snap that someone will look at for 5 seconds and never see again.


Genuinely asking, is it also true when you are in bad conditions? Sure, cameras take in much more light, but there surely is a point where “working smarter not harder” actually turns it around.

For example, I have seen the Pixel 6 take a better photo of a river in front of a town at night, due to the very demanding dynamic range plus the movement of the waves in the dark.


Assuming a modern camera with a prime lens shot at the widest aperture, yes, absolutely, although to get the best results you'd have to do some post-processing in Lightroom/Capture One (which the phone is already doing automatically in the background).


Instagram hasn't had "heavy duty" filters since pre-iPhone 4 days when it was a lomography app. Everyone goes for an overly well lit but realistic look these days.

Although a lot of them do just shoot on larger cameras.


It's not that cameras have become too smart. It's that they're making choices for us, and these are not the choices we would make for ourselves.

11 years ago, I gave a talk about this at the Internet Archive. It seems to have held up. https://youtu.be/UMMogOoWEbI


>It's that they're making choices for us, and these are not the choices we would make for ourselves.

Of course, you can always take off the training wheels and shoot raw.

Apple even worked with Adobe to create ProRaw, which allows you to selectively turn on or off various parts of their image processing pipeline after you shoot the image.

https://lux.camera/understanding-proraw/


ProRaw is demosaiced and includes the computational photography stuff as well.

The big picture is that cameras profoundly influence how we see and share ourselves, others, and the world. Fairly recently, cameras (and their developers) started making opinionated decisions for us about things like the color of sky, the texture of skin, and much more. Sure, any one of us can opt out and do things RAW, but the rest of the world will go on taking pictures with these decisions baked in. I don't think the answer is RAW, because the problem is not pixel data -- it is who controls image making in the first place.


What’s your proposed way of solving the issue? HDR can be considered more natural to how our own eyes work, being pedantic the “old-school” model is much more foreign than the NN-enhanced one. Sure, one can overdo it, e.g. replacing an image of a moon with a photograph of it, but I don’t think that NN-based color balance, HDR and the like are worsening the problem.


I don't see how this is different than the film stock you shot on, the lens you used, or the format you shot it on.


> Fairly recently, cameras (and their developers) started making opinionated decisions for us about things like the color of sky

That's called "auto white balance" and we've been stuck with it since the advent of digital cameras. Phones don't do much more than that; if they detect the sky it's to adjust denoising to not cause banding artifacts etc.


I've always especially felt this way about Pixel cameras. They are no doubt good, but there is something just off about them I cannot easily identify. I believe it both oversharpens faces and makes white skin tanner.

Is there any company out there doing just HDR, and not making color choices? Or is it baked in at the SOC level now?


I've noticed the same thing with my Pixel. It adds too much contrast in areas that shouldn't have it. Some simple shots of my pets in uniform lighting end up looking ghoulish because it turns their fur into a streaky blob of very dark and light areas. I've also had weird instances where it tries to "restore" fine detail in zoomed-in pictures by inserting grid-like patterns on objects (very similar to images generated by a GAN).


Apple's ProRaw can "just do HDR", assuming you don't want to demosaic yourself.


> It's that they're making choices for us

I think that's the definition of a "smart" appliance, and you just described what my problem with "smart" appliances is: they are too smart by half, and end up making bad choices when pushed a little.


> they are too smart by half, and end up making bad choices when pushed a little.

That seems like the opposite of smart: it's dumb yet intrusive. Or as Kurt Vonnegut said: "Beware of the man who works hard to learn something, learns it, and finds himself no wiser than before." These devices do a lot of thinking, but have no wisdom.


They are smart, if you consider "smarts" to mean making lots of complex decisions. And they make good decisions a lot of the time too.

But you can't encode a set of heuristics that works every time into a physical device - that is the real problem with "smart". In a small minority of cases, the situation will get so complicated that the rules you embedded will produce suboptimal behaviour. To overcome that, you'd need to embed a human-like intelligence in the device.

Similarly, they will fail for the minority of people who have drastically different goals (e.g. aesthetics) for the device. Now you end up having to clone the owner wholesale.

Okay, really the problem with "smart" devices is that they typically don't disclose what rules they follow, and typically don't come to the user asking for help choosing when tradeoffs enter the picture (literally in the context of this article).

"Dumb" devices, meanwhile, lay all choices flat.

Granted, cameras long ago stopped being completely dumb, but they often do a good job of letting the user second-guess them, with things like optional manual color balance and different priority modes.


To be honest, even our very own eyes are not that “smart”; they can be fooled quite easily by, e.g., visual illusions and magic tricks.


> They are smart, if you consider "smarts" to mean making lots of complex decisions.

I don’t, really. Making lots of good decisions, sure. But just because they’re complicated? No.


Choice implies a non-processed alternative but hardware has been driven in a direction so as to make the non-processed alternative unusable (meh hardware, processing to fix).


My digital camera had profiles for outdoors, night/dark areas, people, animals, and macro. Seems to me Apple did what they do best (and credit to GNOME too): remove the profile options because the users are assumed to be stupid, and use some shit algorithm to decide which profile to use for you.


Computational photography is much more advanced than that.


Digital cameras had stuff like face detection too. Yeah, phones are more advanced, but they're still too stupid to just work. I remember a picture being posted where the code made a leaf appear on top of someone's face. These photos are fabricated: objects are moved around, hidden, or made to appear. You can't just use a photo as evidence without first knowing all the technical details of which algorithm created it.


As mentioned elsewhere, that story is fake. Computational photography is more about increasing the sources of data, e.g. by reusing the shakiness of one’s hand to decompose the adjacent color channels’ input. This is a statistically sound method to increase the signal-to-noise ratio, nothing like “this is a leaf, put a leaf here”.


Delegating the issue to hardware seems off to me.

Aren't hardware manufacturers also making choices for you?

I get that software allows for far broader picture manipulation, but there are a decent number of similar choices at the lens and sensor level, no?


True but hardware makes it obvious why the image looks the way it does and is consistent. Photographers can even tell what lenses were used for a photo by looking at it.

Software makes it more of a black box. The same picture taken days apart could look different due to a software update you weren't even aware of. Hardware is also a forced choice so is more deliberate which again is the opposite of the software situation of changing algos and processing.


Pretty much this. I have a Pixel 4a and if I use an alternate camera that doesn't have Google's algos, the images look like ass.


You can still capture RAW and edit it yourself, right?


After watching far too many videos comparing the iPhone 13 Pro Max, Samsung S22 Ultra, and Pixel 6 Pro, I decided against the iPhone because of the automatic skin smoothing.

The iPhone 13 Pro Max removed wrinkles, sun spots, moles, hair, etc to the point where the results looked like overprocessed, manually edited photos. This wasn't subtle -- the reviewers commented on it as well.

While I don't do video, I was struck by a specific example: the iPhone 13 Pro Max's automatic enhancement for dark scenes decided a barely visible wall was red and recently painted. It looked like someone had drawn a box in the photo and did a red fill.

I suspect it is only a matter of time before Samsung and Pixel's photos look as overprocessed. But, for now, we have some choice.


> I decided against the iPhone because of the automatic skin smoothing.

There’s no such thing. I doubt any reviewer making videos knows what they’re talking about unless it’s from a very few sources like DPReview.

The iPhone camera has never had skin smoothing - that’s something only Asian camera apps do. Deep Fusion is more like the opposite of that.


> The iPhone camera has never had skin smoothing

https://en.wikipedia.org/wiki/IPhone_XS#Excessive_smoothing_...

You were saying?


Indeed, that’s a case of people conjuring a skin smoothing filter out of a bug, like it says.

Actually I really didn’t like the denoising on that camera, it seemed poorly tuned in general…


Do people really choose between Android and iOS based on the camera? They are all very close on quality, and IIRC if you don't like the stock camera apps you can get a third party app with less opinionated processing. But even so, the whole experience, including the ecosystem, matters so much more to me than the camera. I have a real camera for when it matters that much.


I did, and went with the S22 Ultra in the end. Long-term Nikon full frame shooter. I must say both the iPhone 13 Pro Max and this are very formidable devices; each has its strengths and weaknesses, and it took quite a bit of time to decide. I like how the S22 feels like a real computer where I can plug in my variometer for paragliding via an OTG cable, sideload apps of my own choice, use normal charging cables like the rest of the universe, upload/download files straight from the filesystem without any installed apps, and so on. I am getting into the whole S Pen experience; it's pretty nice for sketching or precise mouse-like work, e.g. when editing photos on the phone. The Apple notch making it look like some basic 2015 Chinese phone is just a cherry on top.

As for cameras, 10x physical zoom is not something to neglect; Apple doesn't have anything to offer there. In the 2 weeks I've had it, I've shot quite a lot of otherwise impossible kids, family, nature, animal, etc. pics with this zoom. That was my main motivation for a new phone: a camera for pictures that I always have with me, unlike a dedicated camera (2.5 kg being just a detail; I didn't mind carrying it, but I didn't have it with me on a lot of casual occasions when a great situation happened).

Another reason that swayed me: with this upgrade I wanted to improve my headphones, the priorities being sound quality and battery life to play my collection of FLACs. AirPods Pro were just not cutting it, so I went with the Sennheiser Momentum 2. You can't connect an iPhone and the Senns via any above-default Bluetooth protocol; they support aptX, and Apple has their own solution and nothing else.

The rest of the cameras are more or less comparable; sometimes one wins, other times the other. BUT: this over-processing that Apple is famous for is something I don't like; as a long-term DSLR shooter I 'have it in the eye' how processing should look. A lot of people have gotten so used to it, though, that they consider e.g. full-frame pictures with a normal level of processing bland and artificial compared to Apple's.


Have you looked at the marketing material for any of those phones (or any modern flagship for that matter)? Over half of it focuses on the various cameras they have and their specs/features.


> Do people really choose between Android and iOS based on the camera?

Kinda. I have a 4a because of the camera more or less. It was between that and the iPhone SE 2020.

I did kinda wanna switch up to Android (I've gone back and forth over the years. I don't lock myself into any one company's eco.)


The 4a Camera app consistently makes the "wrong" choice for me (flat lighting and colours). I wish I knew of a Halide-like app for Android.


My wife is thinking of upgrading her iPhone for better snapshots of our baby.

I’m a professional photographer. Hell, she is too - she second shoots at my weddings. We have the best cameras and lenses money can buy…but they aren’t in her pocket when our baby does something cute.


> Do people really choose between Android and iOS based on the camera?

iPhone 13 launch marketing/advertisement was almost entirely about camera.

https://www.youtube.com/watch?v=XKfgdkcIUxw


Probably some people are not heavily plugged into an ecosystem; they need at most 3-5 apps to work and that is enough for them. Others, sure, have laptops, watches, and TVs already in the ecosystem, and tons of money already spent on apps and media, so they are prisoners.


Even without the ecosystem, it seems like the actual UI experience is different enough that it would be the dominant factor. Every time I switch back and forth between Android and iOS it's like whiplash. They do tend to copy each other heavily, so it's converging, but there are still plenty of behavioral differences.


My cousin switches every few years and it seems like it’s always a crapshoot of whether she’ll have iOS or Android. Drives my phone crazy as it’ll try to send her iMessages and failover to texts for months.


As someone who dislikes both iOS and Android, I chose my phone based on other factors. I chose my phone due to its size and form. In their case, they would be choosing it based on photography.


>Do people really choose between Android and iOS based on the camera?

We did. My wife wanted an iPhone because it takes great pictures, and I got an S9 because I can tinker with the ROM.


Wow I can’t imagine what a pain it would be for one of us to have an android phone and the other iOS. We use the react emojis for each other constantly, I send videos to my wife via airdrop, and all of our images are synced and shared via iCloud. Also my wife can seamlessly FaceTime me wherever I am to help calm a grumpy toddler who “wants daddy”.

I guess I sound like I’ve really drank the iKoolaid but I feel like we’d just constantly be frustrated if we weren’t using the same device family.


I like to tinker with self-hosting so usually if we need something, I set it up. We use rocketchat for IM, images are synced manually to a samba share on the network, and if we wanted to video chat (not really our style though) we could use skype or I could host Element or something. It's really not much of a pain, but also it's not like I don't see the appeal of living in a world where the defaults are there and they're usable.


Yes, camera quality is one of the main drivers of differentiation between device choice in many markets.

Interestingly, it's usually "more processing the better".


This is fantastic. You've confused the over-sharpening that some Android cameras do with Apple's non-existent "skin smoothing" feature.


Are you saying that iPhone doesn’t do the “skin smoothing” as described?


Yes that is what I’m saying. The iOS camera app does not do skin smoothing.


Check some side-by-side comparisons of the same portrait scenes on YouTube. Apple removes all moles, freckles, wrinkles and so on, so folks look like (very good looking) plastic toys from fashion magazine covers. Some like it, some don't.


Just took some pictures with my iPhone 13 Pro and it 100% doesn't do this. Not even in Portrait mode.

Maybe there's some way to monkey with it just right.

I think people are confusing low light noise reduction for something else.


Check some side-by-side comparisons of the same portrait scenes on YouTube.

No one is going to go dig for that, if you wish for folks to make such a comparison, it would be better to post a link so that everyone is on the same page.


Samsung is the king of oversmoothing. I've used the note 4, s8, 10+ and s21 ultra and imo the camera has always been subpar exactly because of that. I actually have no idea how most reviews have been constantly praising the cameras of flagship Samsung phones when in practice even at launch it was always a disappointing experience.

Even with the S21+, the only way to really get decent shots from the camera in low/medium light before a recent update was to use modded GCam builds. Even with the update I'd say it maybe compares to an iPhone 8 or X.

Overprocessing and oversaturation were so bad before the s10+ that I remember not even bothering with the camera at all because of how frustrating it was to always get mediocre pictures.


I was surprised when I moved from a mid-range to an S21. The smudging effect was just as evident unless I used the 64mp setting that is seemingly only available in one mode in the Samsung camera app and nowhere else.


The Pixel 6 photos are quite over processed.

You can shoot raw, but I just want the same normal workflow without the run around.


Haven’t Samsung’s flagship phones favored over (or more than normal) saturated photos over the years? I think the key to liking a phone camera is to get one that takes photos you like (I know this is tautological). Or play with apps that will allow you to change from the defaults. This may not help as much on iOS since the default camera app that can be launched quickest from the Lock Screen is Apple’s camera app.


I feel like it is inevitable that consumers gravitate towards cameras that make them look the prettiest, if that is not already where manufacturers are going.

Samsung absolutely does not want someone to switch and then find out that they "look worse" on their new Samsung phone.

I guess it's just a matter of whether or not they implement options for post-processing.


Personally I'm tired of the HDR-ification of everything. House listings? It's like realtors figured out what HDR was and went NUTS. Those images are like the comic sans of photography.


I've seen them add light fixtures to photos. Like, the light fixtures do not exist in the actual house. Not a lamp, something connected to the house. The way they're going, that industry's gonna get a regulatory smack-down at some point.


Here in New York I've seen listings use extremely bright, daylight-emulating lights to make rooms appear "sun-drenched" when their only window faces another building, and remain dim year-round.

I even saw this on a video walkthrough of an apartment – they clearly had placed the lights right outside the window.

Finding a place to live is challenging.


We sold a place last year. The agent asked whether we wanted to pay for rental furniture or virtual furniture for the pics. We said neither.

We ended up with the virtual furniture for free. I think they wanted to figure out how it works and our place was small and boxy so an easy starting point.

Not that you could trust the ultrawide angle HDR photos much in the first place, now you have to put up with shoddily inserted virtual furniture wrecking your size/perspective references. It is an Alice in Wonderland trip...


I recently saw a listing where badly inserted trees and piles of rocks were littered around the back yard, maybe to hide crap the owner hadn't bothered to clean up? Not sure, but it was obvious.


UGH! Is that what virtual staging is going to next, straight up lies? It's one thing to use a fake couch, but a fake fixture is not acceptable.


It's even better when they throw on HDR for one of those faux Mediterranean houses built with the fit and finish of an Olive Garden. It's like a tell to stay away: someone who worshipped Carmela Soprano's kitchen once owned this home.


That, and the "realtor lens". Driveways suddenly become 50% longer. TVs are 3 meters long and .75 meters tall. That bedroom looks like a long, narrow jail cell (well, not in real life; such is the unintended consequence of misrepresentation). We all know what you're doing, few (if any) are fooled, just take the shot as it is or just leave it out, since that photo might as well be a picture of the Eiffel Tower for all of its lack of usefulness in representing the house.


Going to respectfully disagree, though maybe we are referring to different images. When I look at images on Redfin, it's clear someone either used HDR or manually edited in the scenes outside the window. To me, this looks like what my eyes would see when in the room vs what the camera itself would show which would either be blown out views outside the windows or the room too dark.

Random example: https://www.redfin.com/CA/Santa-Monica/1319-Harvard-St-90404...


I did real estate photography for a while.

Most real estate photographers use HDR, and the higher end uses off-camera flash (as did I). There’s no nefarious reason behind this - it’s because photos where you can actually see out the windows look better. It’s also more like what your eyes would see rather than a crazy blown out white rectangle. When the house is on a lake you want people to see what the view is like. It was the rooms where I didn’t do that where I was hiding something, like an air conditioning unit being all you could see or whatnot.

The stupidly wide lenses most use? Yeah, that's deceptive. I also think it looks bad, so I tried to ride the line, using the tightest lens I could while still showing the room.


>To me, this looks like what my eyes would see when in the room vs what the camera itself would show which would either be blown out views outside the windows or the room too dark.

>Random example: https://www.redfin.com/CA/Santa-Monica/1319-Harvard-St-90404...

Honestly it's hard to tell without a reference picture. Looking at the first picture, it seems reasonable that the living room would be well lit because of the huge windows. However, the section with the tall houseplants looks nearly as bright as the open living room area, which seems doubtful.


The first picture where the walls are bright white would have the windows look like a bright glowing rectangle vs. being able to see the outdoors through the window if they didn't use HDR. They overexposed the interior a little bit, which means the outdoors would especially look like bright white rectangles. Even when you have a lot of windows the indoors is significantly dimmer than the outdoors on sunny days unless the room is a greenhouse.

They did the HDR so well that you couldn't tell it was HDR. Artificial HDR is a choice, just like natural HDR is in that photo.

In pictures 8 and 9 they didn't HDR, so the windows look like white rectangles and very overexposed. In picture 17 they did the HDR treatment on the horizontal windows, but did not on the skylights. Those skylights should show up as blue vs. white rectangles if they were doing the HDR treatment on the entire photo, etc.

I think they also use off camera flash and bounced it off the ceiling, which is why I think it looks kind of glowy-white with all the white walls and floors in that place.


> or manually edited in the scenes outside the window

This is definitely a thing. A lot of real estate photos are manipulated to replace the content of copyrighted photos and paintings, to hide unsightly views through windows, or to show clear skies when the photographs were taken on a rainy day.


That's how HDR was originally. Everyone used it to the Xtreme++!! Everything looked post-apocalyptic instead of just rolling the highlights back to proper exposure and pulling details out of the shadows. It's perfectly fine to shoot HDR without going nuts, but very few people choose to do it that way.


Probably shouldn't call it shooting HDR (even though they do call it that); you're doing the exact opposite: converting HDR into SDR, so it looks overly tone-mapped as a result.

A proper HDR photo should blind you when viewed on an HDR display, which are kinda expensive when they're desktop sized.


The whole idea that the iPhone 7 was dumb but now the 12 is too smart shows how poorly researched this article is. The apertures on all cell phone cameras are tiny, so you’re running into the limits of physics and getting very few photons per pixel imaged. The noise is just very high and uneven. So all phone cameras use computational photography, including those from way before the iPhone 7, in order to achieve the results they get. You can argue that the algorithms are getting worse, but you cannot say they weren’t smart before.


To add, the 13's cameras are even bigger now. I wouldn't be surprised if we see another 1.5x increase in the size of the sensors by the iPhone 15's launch. It turns out that giving the algorithm more to work with vastly increases how much it can do.


I found this interesting as I'm someone who used to travel with at least one camera most of the time, but I've recently come back from a holiday where I used my iPhone 13 Pro as my only camera.

Overall I actually found it a great experience – I love having an ultra wide angle lens in my pocket and after a bit of time getting used to it, I find the 3x zoom a more useful focal length than the 2x zoom on my previous iPhone X – but I did find the over-processing (the "painterly" look) frustrating, and even more annoying is that it will often use the wide angle lens and upscale rather than using the 3x zoom (even with ProRAW enabled – see [1]).

I understand why the software makes these choices for the "average" user, but it would be nice to enable a "pro" mode which reduces noise reduction and favours the zoom lens in more situations.

I ended up using the stock camera app for "snapshots" (e.g. photos where I didn't care too much about the quality, or where I was just using the wide angle lens), and Lightroom Mobile's camera for shots where I wanted more control, or when I wanted to ensure that the zoom lens was being used (Halide is also good for this, but I found it convenient to have the photos in LR Mobile for processing immediately, even if Lightroom's UX is clunkier).

This actually worked pretty well for me – you can trust the stock camera to take a photo which will look good at mobile screen sizes even in challenging conditions, so it's great for "capturing the moment", while if you are taking a photo where you care about the details, it's usually not an issue to take a few extra seconds to open LR... but it would be nice to be able to do this in the stock camera.

The results with RAWs from Lightroom are actually pretty impressive IMO – there's more noise than the stock camera, but I prefer this to the smudged noise reduction look and I'm sure with some processing I can find a happy medium. Even the ultra wide photos are reasonably sharp.

This was a long way of saying that if you're frustrated by this issue, try a third party camera app and hopefully you'll find that you can get more out of the newer iPhone's great cameras, while still having the default camera there for quick snapshots!

[1] https://lux.camera/iphone-13-pro-camera-app-intelligent-phot...


You mean you cannot manually choose the lens on iPhone 13?? :O

I have an old Samsung Note 8; I was contemplating an iPhone 13 Pro for my wife so we can improve the casual/random photos of our kids, but that'd be a deal breaker :-<

[context - we have a Nikon D800, a D7200, a couple of D90s, a V1, etc. lying around the house, so we do like photos, and being able to zoom or choose a lens is something we take for granted :]


You can manually choose the lens, but in edge cases where the stock camera app decides that you're too close to the subject to be able to focus with, e.g. the 3x lens, it switches to the 0.5x ultrawide lens instead, to maintain focus, and crops to maintain the same field of view. Using a non-stock app lets you force it to respect your lens setting, out-of-focus and all.


Exactly this, also the iPhone "helps" you by sometimes digitally cropping the 1x image to the 3x field of view if the result from the 3x lens was not judged to be good enough.

It has always done this to some extent, but as the other link I posted ([1]) describes, this both happens more frequently with the 3x lens on the 13 Pro than the 2x lens on previous generations because the 3x zoom has a smaller aperture than the 2x, and also the effect is more noticeable, because it's blowing a 1x image up to 3x rather than 2x.

I hope Apple will offer some facility to tweak this in future as it seems to me it frequently chooses digital zoom rather than the zoom lens even in "not that challenging" conditions. I do get why they'd do this though – the reality is that if it always used the zoom lens, you'd end up with a lot more noisy/blurred photos due to physics – but it would be nice to say "I'm OK with that"!

One other thing I should mention is that you can't set a third party camera app as the default, so the camera button on the lock screen always opens the stock camera. In practice I don't find this a huge issue, as if I'm grabbing a quick snapshot from the lock screen it might be of some fleeting moment, in which case I'd probably rather get a usable digitally zoomed image, than a blurry optical zoom one.

[1] https://lux.camera/iphone-13-pro-camera-app-intelligent-phot...


You've been able to disable this behavior since like a month after the iPhone 13/13 Pro came out.

It's the auto-macro functionality in the camera settings. The default behavior switches to the ultra-wide below the minimum focus distance of the other two cameras.


As far as I know you can’t prevent it from switching to 1x image digitally zoomed when shooting at 3x though, unless you use a third party app


Had a very difficult time shooting a photo through a fence recently, because it kept switching lenses to focus on the fence itself, which changes the framing, making it difficult to adjust the focus back.


I did the same for my honeymoon last fall. I hate carrying around a "full size" camera because 1) it's bulky and heavy and expensive so I have to worry about it constantly and 2) it makes me look like your standard tourist. Pocket. 13. Snap. Snap. Pocket. Go. Happy wife. Airdrop her the photos next time we sit down for food for upload to instagram.


Yeah, I'm not sure I'll go back to using a full size camera except for situations which call for a specialist lens (e.g. going on safari, maybe astrophotography one day). I think for me the killer feature is that the photos are there on your phone (and in the cloud), ready to be browsed/shared/edited. I'm terrible for never getting round to downloading/editing the photos from an SD card!


My Sony a6500 has been relegated to webcam duties.


Honestly, I just carry my a6300 with me most of the time. I’ve used some of the modded apps that are available (it’s running android after all) to ensure I can single-click pair it to my phone and send all photos over, or send photos over automatically as soon as they’re taken.

It’s just as convenient for quick sharing, and the photos are so much better.


Can you share what those apps are? I'm highly interested. The main reason my a6000 is gathering dust is that saving to Google Photos is a pain.


There's an FTP uploader that's publicly available, but mostly I just wrote my own custom upload tool.

Search for OpenMemories Framework, there's something of an unofficial SDK available.


@kuschku (sorry I can’t reply to yours due to comment depth) - do you have a link to these apps? I have a few Sony cameras gathering dust, so this sounds interesting!


I think this is what sets this technology apart. The human element. We don't take photos to look at in awe of the quality and crispness. We take them to share with others. Usually DSLRs force you to get to a computer, insert the memory card, process the photos in Lightroom, then share from your computer.

Phones make this process seamless because the designers realized that the key to a good photo experience is the human experience of sharing.


I feel the opposite: I don't care about the camera. I just take snaps of people and random things and like the memory more than anything else. I wish Apple made high-end phones with cameras that were flush with the body. For me the protruding lenses are just an annoying and user-hostile design statement.*

And ditto iPads: since the first iPad came out I've taken a grand total of one photo with one. The aggressive cluster of lenses is a real downer since I can't lay it flat.

* Yes I know some people chose their phone by the camera and need the physics of a longer lens barrel. They are not wrong, nor am I: I'm just saying that one size doesn't fit all.


I feel like a lot of that was just a reaction to the reality that overwhelmingly, people put their $800 smartphone in a protective case, and that that protective case typically adds 1-3mm of thickness on the back of it.

It was the hardware designers saying "welp, we could really use that extra few mm of depth for the camera lens, and keeping the rest of the body thinner is a better deal than beefing up the whole thing for that handful of users who don't use a case."

Maybe there's a non-case case out there for you that just sticks on the back and is thick enough to make the whole thing flush without adding any bulk around the sides? Or even just a strip of foamy tape or something that can run across the top of the device adjacent to the camera pop-out so that the device is able to sit flat on a table.


While I agree with you on the design choices made, the camera bump has gotten to absurd levels. It's now roughly 50% of the thickness of the phone body [1]. This means that unless one gets a case that's unnecessarily thick, it'll still rock on a table even with a case on. There's also a limit to case thickness without affecting other features like wireless charging and MagSafe.

I'm not sure this is good design anymore. Perhaps if the bump was centered so the phone doesn't sit lopsided on a table, or maybe even a tapered design.

[1] https://imgur.com/a/UVt4FyH


Okay, yeah, that is pretty thick. I'm still rocking an iPhone 7 and for me, the bump is well under 1mm and doesn't protrude at all from a fairly normal-thickness case.

My partner has an iPhone 13, though, and my impression was that it wasn't all that different for her, but clearly that's not the case— I should examine it more closely. I see from looking at some phone cases online, many of them do have a raised "frame" around the camera area, so I can see how that would contribute to issues with it rocking when set down on a flat surface.

EDIT: Okay, I realised she uses a popsocket on the back of hers, so this has never come up because it either sits on the table face down, or is face-up but propped up on the socket.


IMO it looks like Apple took this tradeoff by assuming people will get a case, and all the ones I've seen (and have[0]) make the back of the case either flush with the sensors or further out than the protruding bump.

0: https://www.spigen.com/collections/iphone-13-pro-max/product...


As someone who has had a naked iPhone since the iPhone 3G and has only once dropped an iPhone in a way that damaged it, all the cases just seem so crappy and cheap compared to the feel of my iPhone. I’ve tried a few and always return them. I almost never put my phone down though as it’s almost always in my pocket or charging or being used, so I don’t really care much about the bump on my 12 Pro Max.


> flush camera

isn't that what phone case is for?


Case simply makes the whole device larger.

Apple could make the camera flush by bulking out the back of the phone with more battery (though these days battery life is adequate for me so this wouldn't be as much of a win as it might have been a few years ago).


Cases are so ugly and make the device feel cheap though. Why get this amazing precision-engineered device and cover it with a chonky plastic case that makes the phone look like an 80s movie artist's vision of the future?


1. because they don't make them durable enough to withstand a fall

2. because they make them slippery as hell

etc.

I tried to use my phone without a case to enjoy the finish and the smaller dimensions, but the edges were already inconvenient to hold, and God forbid I left it lying anywhere with vibration on: it would slowly slip across any surface that was not perfectly even, even without vibrations.


The whole article talks about photo differences, but there aren't any concrete examples of the problem images shown. Kind of disappointing, because it's difficult to tell whether the differences are huge or a mild exaggeration.


It's kind of the house style of the New Yorker to be picture-averse, which is definitely to the detriment of this article.


Yeah, slightly odd decision I agree!

If you scroll down to "The 75mm Telephoto Camera" section of https://lux.camera/iphone-13-pro-camera-app-intelligent-phot..., there are some images demonstrating the "painted" look you get from the noise reduction.


It's the same thing with the webcams on the M1 Macs.

Comparing my 2015 Macbook Pro and my 2021 Macbook Pro, I can say that they both have crappy webcams. The 2015 has a somewhat grainy, high contrast look. The 2021 webcam has extreme smoothing applied, to the point where my facial hair looks like it's painted on. The result is that my face always looks blurry. There are also weird movement glitches that are probably caused by temporal smoothing.

From a distance, the pictures from the new webcams look better. But up close it's frustrating that my face is always blurry.


Webcams still remain an issue on laptops because the space for them is extremely limited; the iPhone 13 Pro camera has tons of z-space to work with, while the Mac has to keep it thin to continue to look like a modern laptop and not a 2003 thinkpad.


As long as there's a photography thread...

I'm going to Africa next year. I have been before, and taken a DMC-ZS60. It's OK but not great. By far the best feature is that it packs a tremendous zoom lens into a very small and light package, which is really a must for wildlife photography.

I would like to step up my photography game for my next trip without having to add too much bulk and weight. Once upon a time I had a DMC-GX1 with a 300mm lens and that worked well, but I found that even that was too big for me to comfortably lug around and so I got rid of it because I never used it. That camera is also getting pretty old. Has the technology improved at all? Is there a better alternative out there for a good zoom (300mm or better) without the bulk of a regular DSLR?

[UPDATE]

In case anyone is interested, here are a few shots from the ZS60:

https://flownet.com/ron/trips/Seabourn2019/Pages/422.html

https://flownet.com/ron/trips/Seabourn2019/Pages/496.html

https://flownet.com/ron/trips/Africa2022/Pages/303.html

https://flownet.com/ron/trips/west_africa_2015/Pages/707.htm...

https://flownet.com/ron/trips/west_africa_2015/Pages/950.htm...

And one from the GX1 for comparison. (Another advantage to that camera is a fast shutter!)

https://flownet.com/ron/trips/NWP2015/Pages/162.html


Are these from a safari? I'm impressed at how close up these pictures of big cats look


Yes that is what a good zoom lens does for you!


I took a fairly elderly Panasonic DMC-GX80 on holiday with me and despite the lack of HDR, AI and clever stuff with multiple exposures, I found that for most of the interesting photos I wanted to take, it did the right thing.

My phone on the other hand did the blandest thing possible. Impossible to get an atmospheric color cast, a dramatic silhouette or anything that made a photo interesting to me. It was fine in most situations with "normal" lighting but they aren't usually the things I want to photograph.

The problem isn't the smartness - it's that combined with the pathological desire to simplify UI and remove options. Give me the clever stuff but let me tweak it.

But no - settings are bad and options are confusing to the user.


As several here have noted, an antidote is to use both RAW and jpg formats. I have my Pixel set to dual outputs, which yields both Google's processed image and a dng I can use later.

For the shifts in color (blue skies at night or colorful sunsets turned dun), that problem exists for most cameras. No matter the camera you use, if you turn off automatic white balance and pin it to 'daylight' or ~5500K, you'll find a whole world of color returns to your images. There's post-processing work (definitely work in RAW, of course) to do, but it brings back a perspective that is often lost.
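
A minimal sketch of pinning the white balance while developing the DNG, assuming the rawpy and imageio packages; the multipliers are rough daylight-ish placeholders, and the right values depend on the sensor:

    import rawpy
    import imageio

    with rawpy.imread("PXL_0001.dng") as raw:   # hypothetical RAW file from the phone
        rgb = raw.postprocess(
            use_camera_wb=False,
            use_auto_wb=False,
            # Placeholder (R, G, B, G) multipliers approximating daylight
            user_wb=[2.0, 1.0, 1.5, 1.0],
        )
    imageio.imwrite("daylight_wb.jpg", rgb)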

As for the author's lament, "Now every photo we take on our iPhones has had the salt applied generously, whether it is needed or not." , I'm less sad. People like salty snacks and fast food, but that doesn't mean a meal from a skilled chef is any less delicious. If anything, it makes the work of an artisan stand out more to those who appreciate it.

An example -- check out Bianca Germain's images https://biancagermainphoto.com/ (@biancagermain). The composition of her environmental portraiture captures context and narrative, something an algorithm cannot do.


Observation about the author's Ms. McCabe deciding to carry a Pixel: I was out this weekend with my wife and her friends, and they all had iPhones. They all kept asking me to use my Pixel, because "it takes better pictures." Not sure how much more damning it can be than a bunch of 40-50 year old moms noticing how much their camera sucks.

The Pixel is definitely, unabashedly, engaging in similar computational chicanery, but it's better at its particular brand of misdirection than the iPhone is at its brand. And this is the first time soccer moms have noticed.

I have a friend who was a PM at Apple and moved to Google. He said never expect to work with Apple on a software project. It didn't make any sense for them to contract for software services, because they didn't have any data.


Sounds like a meme, or perhaps a compliment to your eye or a social media page? There’s a photographic style setting they can use if they prefer a less neutral look on their phones.


> they all had iPhones

It seems as though the camera is not important enough to drive purchasing decisions, and is more of a single-issue determinant


While most readers probably have at least some firsthand experience with this, some examples would've gone a long way to illustrate the issues brought up.


> “Make it less smart—I’m serious,” she said. Lately she’s taken to carrying a Pixel, from Google’s line of smartphones, for the sole purpose of taking pictures.

And here we completely sabotage the premise of the headline, and make it clear that this whole topic is a subjective perception issue by the consumers. Pixels do FAR more ML-based post-processing than iPhones. It just so happens that I guess they do a BETTER job.

Which means that iPhone cameras aren't too smart, but rather aren't smart enough.


Samsung has some crazy thing turned on by default which makes selfies into air-brushed portraits where you've applied blush and a sparkle to your eye. It's surreal, ridiculous and generally disturbing.

Happily, since it's a non-iOS device, I can just turn it off.

Apple is Apple. Bitching about the way they over-coddle their users is a pastime almost as old as I am at this point, so I won't. I'd be a devoted Apple zealot if they gave power users the ability to customize their devices. It's disappointing they don't, as they have such nice kit, but I'm not their target market and never have been, so I just use other products.

Apple hasn't noticed my personal boycott, yet, but any day now, they'll notice and cave in to my demands, I'm sure.


Samsung has some crazy thing turned on by default which makes selfies into air-brushed portraits where you've applied blush and a sparkle to your eye. It's surreal, ridiculous and generally disturbing.

I believe the word you are looking for is Korean. https://vitalbar.com/blog/what-is-the-korean-beauty-standard...


It looks fake and unappealing to me on Asian faces, but becomes even more disturbing when applied to other ethnicities.


One could argue the iPhone 7 had computational photography as well. But let's split it between when Apple actually mentions it (iPhone X) and prior iPhones.

For a long time, iPhone cameras were the most realistic of all smartphone cameras. No HDR, no filters; they tried to preserve the image as close to real life as it could be. And that means things do look a little dull most of the time without proper lighting and adjustment. It carried over from the Steve Jobs era. Samsung, meanwhile, much like with their TVs, likes to tune their colour profile with high contrast and popping colours. It was easy for most consumers to think Samsung takes better-quality pictures, but Apple stood its ground and refused to give in: if you wanted professionally-edited-looking photos, use a separate app.

This changed with the iPhone X, or more accurately with the release of the Google Pixel in 2016, when the race of computational photography began. Apple decided to show its own take with the iPhone X a year later. And since then Apple has gone all in on eye-popping colour. The direction of their camera profile took a 180-degree turn.

And it is sort of strange that in 2022, the Google Pixel and Samsung actually have less colour pop and contrast than an iPhone.

I can't name many things, if anything, that Tim Cook's Apple has done right. But I can list many things they have changed and done wrong. The camera is one of them.


Digital photos are inherently computational; I think the distinction is where you stop processing and how much human control is exercised over the final product. Even film has some carefully developed characteristics that are used by the photographer and developer to influence the image.

I love the full spectrum of it, from my DSLR to my iPhone. I'd love to see camera companies embrace the ability to capture the raw data to make deeper processing available and to open up the pipeline and let me tweak to my preference.


I think it's a welcome effect for many average people on the street. As a hobbyist photographer who likes nature and astrophotography with quite serious gear (5D Mark IV, Sigma 14mm f/1.8, Canon 50mm f/1.2, etc.), I find the iPhone camera really impressive, and it is actually the primary reason I upgrade every year.

Yet, I'd love to see an "off" switch when needed. But at least we've got RAW shooting which doesn't apply much effects (but still overprocesses a little too much for "raw", agreed.)


Third party camera apps like Lightroom and Halide can take much “RAWer” RAWs than the ProRAW option of the stock camera


An article talking about photos that has no photos in it, huh.


I tried to show mates how filthy my new apartment was when it was turned over to me, and what I'm assuming to be the anti-shadow AI in the processing removed so much of the dirt..


If the iPhone provided access to the RAW image then this wouldn't be a big deal. Discard the processed image if you want. Maybe Apple could make it a setting for whether to keep the RAW photo?

I have an iPhone SE and you can take good photographs with that camera - if you know what you're doing. I've been doing photography as a hobby since the days of processing my own film and having my own darkroom and I can tell you the iPhone is capable of capturing great shots. But you have to know what you're doing and you can't have the camera's software interfering with what you're doing.

Can an iPhone replace your DSLR (assuming you know what you're doing)? No. The DSLR simply captures more information, has more detail, especially in low-lit situations, and has better color. But for a device you always have in your pocket? It's a pro-level snapshot camera! But it's not replacing your DSLR and for most people that's just fine.


They offer ProRAW if you have a Pro phone. Silly, because the 12 and 12 Pro have the same cameras except for the telephoto, AFAIK.


Seems there are quite a few ways to get around this.

Didn't know you could change the keyframe in a live photo, and it appears that removes the processing.

https://appletoolbox.com/disable-photo-auto-enhance-iphone/


I'm pretty sure the live photo frames are lower quality though, as it stores the live photo part as a compressed video rather than individual images, so you're looking at a grab of a video which could have artefacts and lower resolution.


There's a good record of phone imaging over the years to be found in a few devoted sites still comparing phone cameras with the Lumia 1020, the 2013 flagship of Windows Phones known for their camera hardware and naturalistic processing.

I find it quite remarkable that the 1020's 41 MP, 1/1.5", f/2.2 camera, relying only on oversampling, has performance that still falls within the range of the flagships of today. It even achieves comparable performance when zooming, despite modern dedicated telephotos!

Nokia 808 vs 1020 vs Pixel 5 vs iPhone 12 Pro Max:

http://allaboutwindowsphone.com/features/item/24153_Youwante...
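
The oversampling trick is simple enough to show in a few lines: average a block of noisy sensor samples into one output pixel, and independent noise shrinks by roughly the square root of the sample count, which is how 41 MP binned down to ~5 MP ends up so clean. A toy illustration with made-up numbers:

    // Hypothetical illustration of oversampling/binning: one output pixel from a
    // block of noisy samples. Averaging n samples cuts random noise by about √n,
    // so ~8:1 binning reduces it by roughly 2.8x.
    func binBlock(_ samples: [Double]) -> Double {
        samples.reduce(0, +) / Double(samples.count)
    }

    let noisySamples = [104.0, 97.0, 101.0, 99.0, 103.0, 96.0, 100.0, 102.0]  // true value ~100
    print(binBlock(noisySamples))   // 100.25 — far closer to the truth than any single sample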


Sun Microsystems used to have a tagline: "The network is the computer." Now it's starting to look like the network is the camera.

Basically photography is about Pentagram, Twatter and Faceborg more than it is about the photo itself.

The medium is starting to alter the aesthetics of the images themselves¹.

Joking aside, a professional photographer needs to be able to get reproducible results. Because the algorithms in the phones are mostly black box, with few parameters, they can't be used in a professional setting.

Even if you have control over the algorithm, they are still really weak. Portrait mode, for example, can't reliably separate the subject's hair from the background properly.

Doing this optically does not suffer from these problems at all.

1: For example, vertical videos.
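
On the portrait-mode point: about the closest thing a third-party app gets from the system is a person-segmentation mask from the Vision framework, and that mask is the whole interface. A rough sketch of my own (not Apple's actual portrait pipeline), assuming you already have a CGImage:

    import CoreGraphics
    import CoreVideo
    import Vision

    // Hypothetical example: ask Vision for its best person mask. How hair edges
    // get matted is entirely up to the model; there are no parameters to tune.
    func personMask(for image: CGImage) throws -> CVPixelBuffer? {
        let request = VNGeneratePersonSegmentationRequest()
        request.qualityLevel = .accurate                       // slowest, highest-quality mask
        request.outputPixelFormat = kCVPixelFormatType_OneComponent8

        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])

        // Single-channel mask: values near 255 mean "person", near 0 mean "background".
        return request.results?.first?.pixelBuffer
    }

Any fake bokeh built on that mask inherits its mistakes, which is exactly the reproducibility problem for professional work.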


> Even if you have control over the algorithm, they are still really weak. Portrait mode, for example, can't reliably separate the subject's hair from the background properly.

This is harder than it sounds with a "real" camera lens: perfectly focusing on one point isn't necessarily the right thing to do for other points, even if they're the same distance away. Most people who upgrade to a dedicated camera aren't going to look this stuff up; most reviews don't even cover it.

https://www.lensrentals.com/blog/2016/09/fun-with-field-of-f...


I think the point about resulting uncanniness is valid, but can't most of these features be turned off? Portrait mode, despite the blurring issues, seems to be a popular feature.

Having just traded from an iPhone 5 to a 13 Pro, I can without hesitation say the upgrade to the quality of my pictures blew my mind. Let's not overthink this.


> Having just traded from an iPhone 5 to a 13 Pro, I can without hesitation say the upgrade to the quality of my pictures blew my mind. Let's not overthink this.

Okay, but that's what, a 6 generation leap? It had better be better because of hardware alone!


Sure, and I expected as much for that reason. It's just funny to see the author talking trash about the 12 Pro featurization after trading from a 7...should still be a big upgrade in general quality.


Funny, I just had this problem. We got a taro-mousse cake for my daughter's birthday and I was trying to take a photo of it on my iPhone 11 Pro. I could see the brilliant purple of the taro section of the cake with my own eyes, but the iPhone would hesitate before eventually turning it into a darker, almost brown color.


Might have to tap on it and drag up. If it takes up too much of the picture, it won't have a neutral reference in the background to help figure out how bright the room is supposed to look. (Cameras usually just work off the picture they're taking to determine this, and yet everyone says they're too smart.)

I have the same problem with a black cat that turns brown if you try to expose off it.
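
For what it's worth, that tap-and-drag gesture is just exposure compensation. Here's a rough sketch of the equivalent in a custom camera app; the helper and the clamping are my own illustration, assuming you already hold an AVCaptureDevice:

    import AVFoundation

    // Hypothetical example: bias the meter, e.g. +1 EV to keep a bright subject
    // from being rendered darker than it looked in person.
    func applyExposureBias(_ ev: Float, to device: AVCaptureDevice) throws {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }

        // Clamp to what the hardware actually supports.
        let bias = max(device.minExposureTargetBias,
                       min(device.maxExposureTargetBias, ev))
        device.setExposureTargetBias(bias, completionHandler: nil)
    }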


Was it possible to fix this by tapping on the cake itself? Might it have been affected by the color of the lighting in the room? We had issues with that in our kitchen, which had warm yellow lights. Our 'paprika' colored plates always showed up weird in food photos.


Refocusing on the cake didn't help for us. This was in a fairly bright sunlit room, which I guess is technically warm lighting?


Can you fix it by adjusting the temperature or tint in the Photos app?


Maybe, I'll try it!


I've noticed with my 12 Pro, my wife's 11 Pro, and my mom's S10 that shots will sometimes look distorted in weird ways, with someone's face unnaturally elongated or an arm in the foreground shortened. You'll think you captured the shot and will only notice it afterward.


I noticed this recently. I was in Lightroom editing some photos from iPhone 13 Pro and zooming in they looked like impressionist paintings, small blotches of color, instead of what I'm used to from previous phones.


This is a terrible article full of outright falsehoods with no examples.

It seems like they just quoted a bunch of older "get off my lawn" photographers who either don't know how to use the latest iPhones or haven't even tried them.

Pretty much everything they're talking about is either outright wrong, or it's behavior that can be turned off in the settings, influenced in the camera app, or avoided entirely by using an alternate app.

Some of the complaints in the article are as simple as the person complaining doesn't know how to control exposure compensation or automatic exposure lock on the iPhone. Both of these are things that have not changed in a long time on the iPhone. Portrait mode is imperfect for sure, but the author doesn't seem to understand that a large aperture on a traditional camera is also liable to produce weird effects in a picture and will most certainly do things like make things in the background disappear. Using that wisely separates a good photographer from a bad one.

There is a RAW mode for people who are complaining about that. Maybe it's not "RAW enough" but the same thing has happened in the DSLR/MILC world for a while too. Sony was doing processing on their RAW files a long time ago.

Some of the stuff going on with DSLRs/MILCs these days is targeted very heavily at spec-sheet chasers and far less at artistic/working photographers. It's gotten a bit out of hand lately. I have a bunch of pro-level photo gear and a lot of it has lost its fun over the last 10 years; smartphone photography has gotten a lot more interesting. I've printed 60" wide photos from one of my cameras and it's 10 years old now. There's zero reason for me to upgrade it for any realistic improvement in megapixels or the other things the camera companies are selling, for which they want $4000.

Likewise the ultra-expensive new lenses that improve performance in the corner of the frame but weigh 2x as much as what I have already. The spec chasers love all this stuff to death, but if the subject of the photo isn't in the corner no viewer cares, and not many quality compositions focus on the corners. It's certainly not worth upgrading a $1000 lens to a $2000-3000 lens. Even MILC is in many ways a sideways move, and often focuses on the same stuff smartphones are doing.

The 3-lens smartphones have gotten really compelling for anyone who thinks about how they shoot. If you're not chasing big prints, you can take the smartphone and get the same results as hauling an entire bag of expensive camera gear that cost you thousands.

The biggest weakness of the smartphones is flash and external lighting, but there have been hacks and add-on devices to control external flashes for quite a while now too. As in years. A lot of that is the same kind of stuff you used to have to do with "pro" photo gear.

No matter what the camera is someone will always complain about the camera being the cause of their bad photography. It is almost never the camera's fault. It is almost always a problem between the floor and the shutter button. Be a maker not a taker, etc..


Film cameras might become hip again, for the same feeling as vinyl LPs. Digital photos have become a little too perfect.

Why do we call addictive smartphones that track users to sell advertising "smart"?


I've been shooting film on and off for about a decade now, and a good film shot pretty much always looks better than a phone picture.

The problem is that going from camera to digitally shareable photo is either 1) time-consuming or 2) quite expensive. I mostly do my own scanning, which takes a lot of time and manual labor, because getting good-quality digital scans of negatives costs a lot of money, and frankly I don't take enough good pictures per roll to make it worthwhile.


What's smart are the companies doing it for their own interests ($$$), not necessarily the users buying into it.


Like most technology today, it is now being forced on us rather than being available as a tool.


An article about photos... with no photos.



