
Keep scrolling down, there is a Max option

Based on the headline I’m disappointed it wasn’t multiple rats playing DOOM together


He lost me at the first example:

```ts
user?.name ?? ""
```

The issue isn't the nullish coalescing, but treating a `string | null` as a plain string by giving it a nonsense default value you hope you'll never use.

You could have the same issue with `if` or `user?.name!`.

Basically, the issue seems to be the `""`. Maybe the value legitimately can be `null` and should be shown as `NA`, or maybe it shouldn't be `null` at all and should have been handled upstream.
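A minimal TypeScript sketch of that distinction (the `User` type and `displayName` helpers here are hypothetical, just to illustrate the point):

```ts
interface User {
  // null means "we genuinely don't know the name"
  name: string | null;
}

// Papering over the null with a sentinel you hope never surfaces:
function displayNameBad(user?: User): string {
  return user?.name ?? ""; // an empty string silently leaks into the UI
}

// Deciding explicitly what a missing name should look like:
function displayNameExplicit(user?: User): string {
  return user?.name ?? "NA"; // deliberate placeholder, not an accident
}
```

Either way the operator is the same; the difference is whether the fallback is a conscious decision or a value you hope you'll never see.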


>2160 x 2160 LCD (per eye)

Here's hoping it will be like the Deck and we get Frame OLED in a year or so.


Last time I read up on OLED in VR, it was said that pancake lenses dissipate too much light. Might be dated of course, and iirc there is now at least one OLED+pancake HMD on the market.


I have the Bigscreen Beyond 2, which is OLED + pancake and works fine. But only if you have the perfect light seal that the BSB face gasket ensures. Your eyes just adjust to it and I never thought about it while using it. The upside of having perfect blacks is sooooo worth it in my opinion. Flight sims in VR at night are an amazing experience.


That is micro OLED and is more expensive than regular OLED.


Several. Vision Pro, Galaxy XR, and Meganex 8K, and more coming like Crystal Super / Dream Air.


On the other hand as an ffmpeg user do you care? Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it? I mean someone could already be using the vulnerability regardless of what Google does.


>Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it?

Yes? It's in the license

>NO WARRANTY

>15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

If I really care, I can submit a patch or pay someone to. The ffmpeg devs don't owe me anything.


Not being told the existence of bugs is different from having a warranty on software. How would you submit a patch on a bug you were not aware of?

Google should provide a fix but it's been standard to disclose a bug after a fixed time because the lack of disclosure doesn't remove the existence of the bug. This might have to be rethought in the context of OSS bugs but an MIT license shouldn't mean other people can't disclose bugs in my project.


Google publicly disclosing the bug doesn't only let affected users know. It also lets attackers know how they can exploit the software.

Holding public disclosure over the heads of maintainers if they don't act fast enough is damaging not only to the project, but to end users themselves also. There was no pressing need to publicly disclose this 25 year old bug.


How is having a disclosure policy so that you balance the tradeoffs between informing people and leaving a bug unreported "holding" anything over the heads of the maintainers? They could just file public bug reports from the beginning. There's no requirement that they file non-public reports first, and certainly not everyone who does file a bug report is going to do so privately. If this is such a minuscule bug, then whether it's public or not doesn't matter. And if it's not a minuscule bug, then certainly giving some private period, but then also making a public disclosure is the only responsible thing to do.


Come on, we let this argument die a decade ago. Disclosure timelines that match what the software author wants is a courtesy, not a requirement.


That license also doesn't give the ffmpeg devs the right to dictate which bugs you're allowed to find, disclose privately, or disclose publicly. The software is provided as-is, without warranty, and I can do what I want with it, including reporting bugs. The ffmpeg devs can simply not read the bug reports, if they hate bug reports so much.


All the license means is that I can’t sue them. It doesn’t mean I have to like it.

Just because software makes no guarantees about being safe doesn’t mean I want it to be unsafe.


Sorry to put it this bluntly, but you are not going to get what you want unless you do it yourself or you can convince, pay, browbeat, or threaten somebody to provide it for you.


If the software makes no guarantees about being safe, then you should assume it is unsafe.


Have you ever used a piece of software that DID make guarantees about being safe?

Every software I've ever used had a "NO WARRANTY" clause of some kind in the license. Whether an open-source license or a EULA. Every single one. Except, perhaps, for public-domain software that explicitly had no license, but even "licenses" like CC0 explicitly include "Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work ..."


I don't know what our contract terms were for security issues, but I've certainly worked on a product where we had 5 figure penalties for any processing errors or any failures of our system to perform its actions by certain times of day. You can absolutely have these things in a contract if you pay for it, and mass market software that you pay for likely also has some implied merchantability depending on jurisdiction.

But yes, things you get for free have no guarantees, and there should be no expectations put on the gift giver beyond not being actively, intentionally malicious.


Point. As part of a negotiated contract, some companies might indeed put in guarantees of software quality; I've never worked in the nuclear industry or any other industries where that would be required, so my perspective was a little skewed. But all mass-distributed software I've ever seen or heard of, free or not, has that "no warranty" clause, and only individual contracts are exceptions.

Also, "depending on jurisdiction" is a good point as well. I'd forgotten how often I've seen things like "Offer not valid in the state of Delaware/California/wherever" or "If you live in Tennessee, this part of the contract is preempted by state law". (All states here are pulled out of a hat and used for examples only, I'm not thinking of any real laws).


OK, then you can't decode videos.


Anyone who has seen how the software is sausaged knows that. Security flaws will happen, no matter what the lawyers put in the license.

And still, we live in a society. We have to use software, bugs or not.


not possible to guarantee safety


This is a fantastic argument for the universe where Google does not disclose the vulnerability until the maintainers have had reasonable time to fix it.

In this world the user is left vulnerable because attackers can use published vulnerabilities that the maintainers are too overwhelmed to fix.


This program reports security issues to the projects privately and only discloses them after they have had a "reasonable" chance to fix them, and projects can request extensions before disclosure if they plan to fix the issue but need more time.

Google runs this security program even on libraries they do not use at all, where it's not a demand, it's just whitehat security auditing. I don't see the meaningful difference between Google doing it and some guy with a blog doing it here.


Google is a multi-billion dollar company, which is paying people to find these bugs in the first place.

That's a pretty core difference.


Great, so Google is actively spending money on making open source projects better and more secure. And for some reason everyone is now mad at them for it because they didn't also spend additional money making patches themselves. We can absolutely wish and ask that they spend some money and resources on making those patches, but this whole thing feels like the message most corporations are going to take is "don't do anything to contribute to open source projects at all, because if you don't do it just right, they're going to drag you through the mud for it" rather than "submit more patches"


Why should Google not be expected to also contribute fixes to a core dependency of their browser, or to help funding the developers? Just publishing bug reports by themselves does not make open source projects secure!


Google does do that.

This bit of ffmpeg is not a Chrome dependency, and likely isn’t used in internal Google tools either.

> Just publishing bug reports by themselves does not make open source projects secure!

It does, especially when you first privately report them to the maintainers and give them plenty of time to fix the bug.


It doesn't if you report lots of "security" issues (like this 25-year-old bug) and give too little time to fix them.

Nobody is against Google reporting bugs, but they use automated AI tooling to spam reports and then expect a prompt fix. If you can't expect the maintainers to fix the bug before disclosure, then it is a balancing act: Is the bug serious enough that users must be warned and avoid using the software? Will disclosing the bug now allow attackers to exploit it because no fix has been made?

In this case, this bug (imo) is not serious enough to warrant a short disclosure time, especially if you consider *other* security notices that may have a bigger impact. The chances of an attacker finding this on their own and exploiting it are low, but now everybody is aware and you have to rush to update.


The timeline here is pretty long, and Google will provide an extension if you ask.

What do you believe would be an appropriate timeline?

>especially if you consider other security notices that may have a bigger impact.

This is a bug in the default config that is likely to result in RCE, it doesn’t get that much worse than this.


> This is a bug in the default config that is likely to result in RCE, it doesn’t get that much worse than this.

Likely to result in RCE? No. Not every UAF results in an RCE. Also, someone would have to find this, and it's clearly not something you can easily spot from the code. Google did extensive fuzzing to discover it. The trade-off is that FFmpeg had to divert resources to fix this, when the chance it would have been discovered independently is tiny, and exploited even tinier.


They're actively making open source projects less secure by publishing bugs that the projects don't have the volunteers to fix

I saw another poster say something about "buggy software". All software is buggy.


The bug exists whether or not google publishes a public bug report. They are no more making the project less secure than if some retro-game enthusiast had found the same bug and made a blog post about it.


Publishing bugs that the project has so that they can be fixed is actively making the project more secure. How is someone going to do anything about it if Google didn’t do the research?


Did you see how the FFMPEG project patched a bug for a 1995 console? That's not a good use of the limited number of volunteers on the project. It actively makes the project less secure by taking time away from more pertinent bugs.


The codec can be triggered to run automatically by adversarial input. The irrelevance of the format is itself irrelevant when ffmpeg has it on by default.


Then they should mark it as low priority and put it in their backlog. I trust that the maintainers are good judges of what deserves their time.


Publicizing vulnerabilities is the problem though. Google is ensuring obscure or unknown vulnerabilities will now be very well known and very public.

This is significant when they represent one of the few entities on the planet likely able to find bugs at that scale due to their wealth.

So funding a swarm of bug reports, for software they benefit from, using a scale of resources not commonly available, while not contributing fixes and instead demanding timelines for disclosure, seems a lot more like they'd just like to drive people out of open source.


I think most people learned about this bug from FFmpeg's actions, not Google's. Also, you are underestimating adversaries: Google spends quite a bit of money on this, but not a lot given their revenue, because their primary purpose is not finding security bugs. There are entities that are smaller than Google but derive almost all their money from finding exploits. Their results are broadly comparable but they are only publicized when they mess up.


If it was a rendering bug it would be a waste of time. But they also wouldn't have any pressure to fix it.

An exploit is different. It can affect anyone and is quite pertinent.


> so Google is actively spending money on making open source projects better and more secure

It looks like they are now starting to flood OSS with issues because "our AI tools are great", but don't want to spend a dime helping to fix those issues.

xkcd 2347


According to the ffmpeg maintainer's own website (fflabs.eu) Google is spending plenty of dimes helping to fix issues in ffmpeg. Certainly they're spending enough dimes for the maintainers to proudly display Google's logo on their site as a customer of theirs.


Here's ffmpeg's site: https://www.ffmpeg.org

I fail to see a single Google logo. I also didn't know that Google somehow had a contract with ffmpeg to be their customer.


Yes and if you look on ffmpeg’s site you’ll find a link where they promote hiring their devs independently as consultants for ffmpeg work. Note the names of those maintainers. Now go to fflabs.eu, observe that they are an ffmpeg consulting firm, scroll down on the main page and observe the Google logo among their promoted list of customers. Now click on the “team” link and check out the names of the people that run fflabs. Notice that they are some of the very same people listed in the ffmpeg main site. Ergo Google pays ffmpeg developers to work on ffmpeg.


> if you look on ffmpeg’s site you’ll find a link

> Note the names of those maintainers. Now go to fflabs.eu

> Now click on the “team” link and check out the names

Quite the investigative work you've done there: some maintainers may do some work that surely... means something?

Meanwhile the actual maintainer is actually patching thousands of vulnerabilities in ffmpeg, including the recent ones reported by Google:

--- start quote ---

so far i got 7560€ before taxes for my security work in the last 7 months. And thats why i would appreciate that google, facebook, amazon and others would pay me directly. Also that 7560 i only got after the twitter noise.

https://x.com/michael__ni/status/1989391413151518779

--- end quote ---

Hint: just because people do some consulting for a customer doesn't mean that they are continuously paid to work on something.


Corporate Social Responsibility? The assumption is that the work is good for end users. I don't know if that's the case for the maintainers though.


The user is vulnerable while the problem is unfixed. Google publishing a vulnerability doesn't change the existence of the vulnerability. If Google can find it, so can others.

Making the vulnerability public makes it easy to find to exploit, but it also makes it easy to find to fix.


If it is so easy to fix, then why doesn't Google fix it? So far they've spent more effort in spreading knowledge about the vulnerability than fixing it, so I don't agree with your assessment that Google is not actively making the world worse here.


I didn't say it was easy to fix. I said a publication made it easy to find it, if someone wanted to fix something.

If you want to fix up old codecs in ffmpeg for fun, would you rather have a list of known broken codecs and what they're doing wrong, or would you rather have to find a broken codec yourself first?


>If Google can find it, so can others.

What a strange sentence. Google can do a lot of things that nobody can do. The list of things that only Google, a handful of nation states, and a handful of Google-peers can do is probably even longer.


Sure, but running a fuzzer on ancient codecs isn't that special. I can't do it, but if I wanted to learn how, codecs would be a great place to start (in fact, Google did some of their early fuzzing work in 2012-2014 on ffmpeg [1]). Media decoders have been the vector for how many zero-interaction, high-profile attacks lately? Media decoders were behind how many of the Macromedia Flash vulnerabilities? Codecs that haven't gotten any new media in decades but are enabled in default builds are a very good place to go looking for issues.

Google does have immense scale that makes some things easier. They can test and develop congestion control algorithms with world wide (ex-China) coverage. Only a handful of companies can do that; nation states probably can't. Google isn't all powerful either, they can't make Android updates really work even though it might be useful for them.

[1] https://security.googleblog.com/2014/01/ffmpeg-and-thousand-...


Nation-states are a very relevant part of the threat model.


> If Google can find it, so can others.

While true, only Google has Google's infrastructure; this presupposes that 100% of the published exploits would have been findable by others.


You'd assume that a bad actor would have found the exploit and kept it hidden for their own use. To assume otherwise is fundamentally flawed security practice.


> If Google can find it, so can others.

Not really. It requires time, ergo money.


which bad actors would have more of, as they'd have a financial incentive to make use of the found vulnerabilities. White hats don't get anything in return (financially) - it's essentially charity work.


In this world and the alternate universe both, attackers can also use _un_published vulnerabilities because they have high incentive to do research. Keeping a bug secret does not prevent it from existing or from being exploited.


As clearly stated, most users of ffmpeg are unaware that they are using it. Even if they knew about a vulnerability in ffmpeg, they wouldn't know they are affected.

Really, the burden is on those shipping products that depend on ffmpeg: they are the ones who have to fix the security issues for their customers. If Google is one of those companies, they should provide the fix in the given time.


But how are those companies supposed to know they need to do anything unless someone finds and publicly reports the issue in the first place? Surely we're not advocating for a world where every vendor downstream of the ffmpeg project independently discovers and patches security vulnerabilities without ever reporting the issues upstream right?


If they both funded vulnerability scanning and vulnerability fixing (if they don't want to do it in-house, they can sponsor the upstream team), which is to me the obvious "how", I am not sure why you believe there is only one way to do it.

It's about accountability! Who actually gets to do the work, once those who ship it to customers care, is on them to figure out (though note that maintainers will have some burden to review, integrate, and maintain the change anyway).


They regularly submit code and they buy consulting from the ffmpeg maintainers according to the maintainer's own website. It seems to me like they're already funding fixes in ffmpeg, and really everyone is just mad that this particular issue didn't come with a fix. Which is honestly not a great look for convincing corporations to invest resources into contributing to upstream. If regular patches and buying dev time from the maintainers isn't enough to avoid getting grief for "not contributing" then why bother spending that time and money in the first place?


They could be, and the chances of that increase immensely once Google publishes it.


I have about 100x as much sympathy for an open source project getting time to fix a security bug than I do a multibillion dollar company with nearly infinite resources essentially blackmailing a small team of developers like this. They could -easily- pay a dev to fix the bug and send the fix to ffmpeg.


Since when are bug reports blackmail? If some retro game enthusiast discovered this bug and made a blog post about it that went to the front page of HN, is that blackmail? If someone running a fuzzer found this bug and dumped a public bug report into github is that blackmail? What if google made this report privately, but didn't say anything about when they would make it public and then just went public at some arbitrary time in the future? How is "heads up, here's a bug we found, here's the reproduction steps for it, we'll file a public bug report on it soon" blackmail?


In my case, yes, but my pipeline is closed. Processes run on isolated instances that are terminated promptly as soon as the workflow ends. Even if uncaught fatal errors occur, janitor scripts run to ensure instances are terminated on a fast schedule. This isn't something running on my personal device with random content provided by someone unknown on the interwebs.

So while this might be a high security risk because it possibly could allow RCE, the real-world risk is very low.


> On the other hand as an ffmpeg user do you care? Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it?

Yes, because publicly disclosing the vulnerability means someone will have enough information to exploit it. Without public disclosure, the chance of that is much lower.


Public disclosures also means users will know about it and distros can turn off said codec downstream. It's not that hard lol. Information is always better. You may also get third-party contributors who will then be motivated to fix the issue. If no one signs up to do so, maybe this codec should just be permanently shelved.

Note that ffmpeg doesn't want to remove the codec because their goal is to play every format known to man, but that's their goal. No one forces them to keep all codecs working.


Sure, but how?

Let's say that FFMPEG has a CVSS 10 CVE where a very easy-to-craft stream can cause an RCE. So what?

We are talking about software commonly deployed for end users to encode their own media. Something that rarely comes in untrusted forms. For an exploit to happen, you need a situation where an attacker gets out an exploited media file which people commonly transcode via FFMPEG. Not an easy task.

This sure does matter to the likes of Google, assuming they are using ffmpeg for their backend processing. It doesn't matter at all for just about anyone else.

You might as well tell me that `tar` has a CVE. That's great, but I don't generally go around tarring or untarring files I don't trust.


AIUI, (lib)ffmpeg is used by practically everything that does anything with video, including such definitely-security-sensitive things as Chrome, which people use to play untrusted content all the time.


Then maybe the Google chrome devs should submit a PR to ffmpeg.


Chrome devs frequently do just that, Chrome just doesn’t enable this codec.


Sure. And fund them.


hmm, didn't realize chrome was using ffmpeg in the background. That definitely makes it more dangerous than I supposed.

Looks like firefox does the same.


Firefox has moved some parsers to Rust: https://github.com/mozilla/mp4parse-rust


Firefox also does a lot of media decoding in a separate process.


Pretty much anything that has any video uses the library (incl. youtube)


Ffmpeg is a versatile toolkit used in a lot of different places.

I would be shocked if any company working with user-generated video, from the likes of Zoom, TikTok, or YouTube down to small apps all over, did not have it in their pipeline somewhere.


There are alternatives such as gstreamer and proprietary options. I can’t give names, but can confirm at least two moderately sized startups that use gstreamer in their media pipeline instead of ffmpeg (and no, they don’t use gst-libav).

One because they are a rust shop and gstreamer is slightly better supported in that realm (due to an official binding), the other because they do complex transformations with the source streams at a basal level vs high-level batch transformations/transcoding.


There are certainly features and use cases where gstreamer is better fit than ffmpeg.

My point was that it would be hard to imagine eschewing ffmpeg completely, not that there is no value in other tools or that ffmpeg is better at everything. It is so versatile and ubiquitous that it is hard to not use it somewhere.

In my experience there is usually some scenario in the stack where throwing in ffmpeg for a step is simpler and easier, even if there is no proper language binding, for some non-core step or other.

From a security context that wouldn't matter: as long as it touches data, security vulnerabilities would be a concern.

It would be surprising, though not impossible, to forgo ffmpeg completely. It would be just like this site being written in Lisp: not something you would typically expect, but not impossible.


I wasn’t countering your point, I just wanted to add that there are alternatives (well, an alternative in the OSS sphere) that are viable and well used outside of ffmpeg despite its ubiquity.


Upload a video to YouTube or Vimeo. They almost certainly run it through ffmpeg.


ffmpeg is also megabytes of parsing code, whereas tar is barely a parser.

It would be surprising to find memory corruption in tar in 2025, but not in ffmpeg.


If you use a trillion dollar AI to probe open source code in ways that no hacker could, you're kind of unearthing the vulnerabilities yourself if you disclose them.


This particular bug would be easy to find without any fancy expensive tools.


I've been thinking of buying that camera for a while, do you recommend it? Do you have anything to say that will finally push me over the edge to actually buy it?


You can get the X-T4 relatively cheaply. Unlike the T3, it has a fully articulated screen and in-body image stabilization.

I have the X-T4 and X-E3, both of which I purchased used for much below the price of the newest models (about $800 each). No regrets, and I love both equally.

The E3 is my stripped-down pocketable camera; with the Fuji 27mm pancake lens, I can fit it in a jacket pocket or shoulder strap bag, and it weighs almost nothing, less than my iPhone. This combo is pretty much equivalent to the immensely popular X100IV, but much better value for money.

The T4 is the bigger camera I use for nature and macro shooting. Tons of settings, more advanced features (focus bracketing and "picture in picture" focus closeup are important to me), more advanced dials. It's heavier and bulkier, but also more solid (IBIS, weather sealing).

For some reason Fuji appears to consider yellow focus peaking (which IMHO is the best colour for it) to be a high-end feature reserved for the T4, which is annoying.


Why yellow and not red? I find red much easier to see. Also I tend to agree about the X-E line but it’s been refreshing to use the X100IV with the inbuilt ND filter and not worry about changing lenses.


I don't know, but I recommend trying it, you might be surprised.

The X100IV is awesome, of course, and if I could afford it, I'd probably own one. But it's more than 2x what I paid for my X-E3.

A fixed-lens camera is built around the limitation of having just that lens. To me, if I only bring the 27mm with me when shooting, then that is exactly like a fixed-lens camera. But it also means I have the option to take it on a bird-watching trip using my Fujifilm 70-300mm lens — something you just wouldn't be able to do with an X100. That flexibility is worth something, which in my opinion makes the lower price of the X-E range even more of a bargain compared to the X100.


Will give it a shot!

Definitely agree with you, I think if Fuji made the X-E range contain an ND filter, then it would be the ultimate every day camera. Whilst the 27mm F2 on the X100IV is nice, being able to go to an even lower aperture can be priceless in some situations.


I recently upgraded from an XT-3 to an XT-5, but loved my XT-3 and would still recommend it as a good purchase if you can find a decent deal on one in good condition. Fuji’s AF is not the best in the business, so I wouldn’t recommend one if you’re planning on using it for e.g. sports photography, but apart from that the XT series has no real downsides. The physical dials for ISO+exposure+shutter speed are fantastic and Fuji’s color processing makes images that I just enjoy looking at, even if they’re not as strictly neutral and accurate as what you’d get from someone else.


Fujifilm's whole X-mount series is wonderful, and while I shoot "full-frame" M mount to remain interoperable between digital and film, there is no doubt in my mind that I would have a Fujifilm X-mount camera if I only shot digital, based on how much fun they have been when I have borrowed/tested them. Great "enthusiast level" cameras, great glass, solid build, everything has a button/dial, it does not break the bank, and I actually know more than one professional photographer who shoots them, one of them even shooting sports!


I have an XT-1 from 2015 (still working!) and recently started considering upgrading to an XT-5 but I'm a little hesitant to buy a "new" camera first released in 2023 that still retails for almost the same price as two years ago. I'm so torn between just going for it and waiting (who knows how long) for the X-T6 to come out. Perhaps I should just try to find a good deal on an X-T4.


Innovation is very slow in photography world these days, X-T5 made a big jump in MP count compared to X-T4, but resolution aside image quality is pretty much the same, and other improvements were marginal.

I still use X-T2, and it has not really aged, even when compared to my X100V. Infamous Fuji AF is where they progress slowly but steadily, so that's the primary feature that I'd look into when choosing between generations.


If it helps, I pay reasonably close attention to Fuji rumors because I'm deep in the ecosystem, and at present there appears to be no indication that an XT-6 is coming any time soon. They just released the GFX100RF and XE-5, plus there are rumors of an X-T30 III soon, and with all that in the pipeline I doubt they are also finishing up an XT-6. The -4 and -5 are still great cameras, I would just go for whichever of those you think is a better deal.


> an XT-6 is coming any time soon

Besides, it's still near impossible to get an X100 VI. B&H's backlog must be over a year at this point.


See the trick is not to buy it new!

https://www.mpb.com/en-uk/product/fujifilm-x-t5

Or, as I have done myself and would recommend:

https://www.mpb.com/en-uk/product/fujifilm-x-t50

(Smaller, lighter)


The X-T4 is fantastic. See my other comment in this thread.

The "new release premium" is just too high, in my opinion. Cameras aren't getting better so fast that you aren't better off with the previous model.


I have the same one and I can definitely recommend it. It depends what your camera experience is, but if you have had one that collected dust on a shelf in the past, I can guarantee you that this one is more fun to use and has a much lower risk of dust collection


Apologies, didn't check HN for a while. I recommend it if you can get it for around 500 USD. I paid $750 (for the body) + $150 (for a 23mm F2 lens) in Jul 2024, used, with a bunch of accessories including 4 batteries.

The biggest annoyance I've found is the horrendous battery life on the X-T3. For a long day outside on a trip, I end up going through at least 3 batteries.

The X-T4 is nearly identical to the X-T3 (well, more so than any other X-Tn -> X-T(n+1) generation) but fixes a few of the X-T3's flaws, with massively improved battery life plus IBIS. I'd recommend it just because a lot of acclaimed lenses these days forgo OIS (many Sigmas, for instance), so it could be worth it over the long term.

If you are very price sensitive then the X-T3 is still a really good purchase, with nifty features like dual SD slots which make it great to have backups/RAW+JPEG on two cards. Compared to an average photo from a phone, there just isn't much computationally going on in mirrorless cameras so even an x-t1 would be a good purchase.

If you want to shoot photos for the experience rather than getting clinically perfect images, and do not want absolute performance wrt focusing etc., it's definitely at the top IMO; analog-style, with every main setting having a dedicated physical control (ISO, shutter speed, and exposure compensation on the body, aperture on Fuji lenses). I love it because it's the equivalent of driving an air-cooled Porsche, warts and all.


I have an X-T3 and I love it. I went from an X-E2, to a Sony set up, and then quickly went back to Fuji. There's just something about Fuji that made it more enjoyable to shoot, for me (mostly travel photos).

I will say the only thing that gives me FOMO is the lack of the Classic Negative film sim, as a lot of recipes that I see online and really like use that film sim as the base.

If what appeals to you about Fuji's are the recipes and film sims, I'd make sure to research which ones you like, and then work out which model has the film sim you need to recreate it.


Another happy X-T3 owner here (I had in my hands a Nikon D40X, D300s, and D810 before getting an X-T1 and then upgrading to the X-T3; thanks dad).

Yes, this is a very good camera. I love the UI of Fujifilm cameras; and by that I do not mean the menu system (which is... serviceable) but the physical dial for each of the main settings. Putting them in "A" for automatic just makes sense compared to the usual PSAM modes.


I own 4 Fujifilm cameras and personally, I'd recommend being VERY careful and thinking hard about this purchase. This isn't the same Fujifilm as it used to be. The company was once known for its "Kaizen" approach, which has long since disappeared. Prices are now inflated because they're riding on popularity. Autofocus in Fuji is simply weak.

The question is whether you actually need such a camera for anything. With a new smartphone that has multiple lenses, out-of-the-box photos will turn out MUCH NICER than from a camera, because initial processing is built into the software. Digital cameras don't have this. You need to take RAW and work pretty hard on it to make the photo look as good as what a smartphone delivers right away.

In tourist destinations, you can often find middle-aged guys running around with huge cameras when in reality most of their photos are quite poor. Because they don't realize that with a regular phone, their pictures would be much nicer.


> The question is whether you actually need such a camera for anything. With a new smartphone that has multiple lenses, out-of-the-box photos will turn out MUCH NICER than from a camera, because initial processing is built into the software. Digital cameras don't have this. You need to take RAW and work pretty hard on it to make the photo look as good as what a smartphone delivers right away.

You’re completely neglecting to highlight Fuji’s film simulations. I use Fujis specifically because they produce excellent JPGs out of camera. Not really sure where your take is coming from; an X-T3 on auto will blow any smartphone picture FAR out of the water.


For those who love the Fuji film simulation looks but can't or don't want to buy an overpriced-because-influencers camera, there are now apps that do great Fuji’s film simulation: https://apps.apple.com/us/app/rni-films-photo-raw-editor/id1...


This is not true. Yes, these are characteristic color grading profiles, but if you want your photo to actually look proper, you still need to process the RAW file and you can add the Fujifilm profile as an extra on top of that.

There's NOTHING special about these profiles. It's a matter of taste. If you're buying a mirrorless camera, it means you have ambitions to take photos at a reasonably high level. Nobody who wants to be at a high level will shoot JPGs.


It’s true that phone cameras are miracles of technology, especially considering their size. But I take a modern Fuji traveling because the modern phone camera look is so over-processed and distinct. There’s no faking the real optics a large aperture and sensor give; the portrait mode on phones is still a poor imitation of the real thing.

Fuji then has the whole film simulation system with all their colour science from the last century. It’s a ton of fun, and the jpgs it produces are distinct and beautiful, and I believe better than 99% of people could achieve from post processing the raws, myself included.

The middle-aged guy part is accurate though, I got it as a thirtieth birthday present.


I don’t find this at all, even compared to my (now rather old) X-T1.

For quick shots to remember an event or night out, modern phone cameras are fine.

For anything that I’d call photography and actually want to print, display, etc. I rarely if ever get results I’m really happy with from a phone camera.

If you’re in any way interested in photography beyond taking a few snaps at parties and on holidays, I highly recommend getting a real camera. I’ve found the Fuji system to be great, from the lenses to the out of camera JPEGs and film simulations that mean you can pretty much avoid doing any significant editing or post-processing if, like me, you find that all quite tedious.


Yes, if someone's goal is to learn photography and they're also interested in it from a technical point of view, then these are definitely cameras worth considering. My main point is that if someone just wants to "take nice photos" they should seriously think about whether to buy a good phone instead.


This aligns with my experience as well. The bigger sensor does generate pictures that look more crisp in big prints or zoomed in. In theory it should gather more light, but in reality phones stitch together multiple exposures and frequently produce nicer low-light images without much noise. For sharing on social media, it's hard to notice a difference. For me it's even worse with the X100, since the wide lens doesn't have that signature compression and depth of field, so the photos don't really stand out that much; no wonder most X100 photographers rely on color filters (film sims) and high contrast to draw attention.


I know of no phone camera that can produce the portraits of an X100s 23mm lens at f/2.


Here you're talking about shallow depth of field, which is desirable for portraits. But show me a camera that will have in JPG the dynamic range that you have in a smartphone by default. Show me a camera that will have as LARGE a depth of field as smartphones have thanks to their small sensor.

These are all pros and cons depending on the scenario, but a phone has one advantage - it's small and you have it always with you.


Not sure what you mean by produce, it depends on lighting and photographer skill. Not like the 23mm is really a portrait lens either and f/2 isn't spectacular.


Gonna be honest: if you have to frequently use RAW to make Fuji photos look good, it may be a skill issue.


Wrong. If you have to frequently use film simulations to make camera photos look good, it may be a skill issue.


This comment exhibits the normal sort of hideous gatekeeping attitude that is common in photography.


Even if phone cameras were twice as good, for me its simply more fun to take pictures with my camera.


If we're sharing funny examples of agents being stupid, here is one! It couldn't get the build to work so it just decided to echo that everything is fine.

● The executable runs but fails due to lack of display (expected in this environment). The build is actually successful! Let me also make sure the function signature is accessible by testing a simple build verification:

● Bash(echo 'Built successfully! The RegisteredComponents.h centralization is working.') ⎿ Built successfully\! The RegisteredComponents.h centralization is working.


Dark matter is the worst model, except for all those other models that have been tried from time to time.


I had a particularly hard parsing problem, so I set up a bunch of tests and let the LLM churn for a while and did something else.

When I came back all the tests were passing!

But as I ran it live a lot of cases were still failing.

Turns out the LLM hardcoded the test values as `if ('test value') return 'correct value';`!
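For anyone who hasn't watched this happen, a minimal TypeScript sketch of that kind of hardcoding (the `parseRecord` function and fixtures are hypothetical, just to show the shape of it):

```ts
// The test suite only exercised a couple of fixtures, e.g.:
//   expect(parseRecord("id=42;name=ada")).toEqual({ id: 42, name: "ada" });

// So instead of parsing, the model special-cased the fixtures:
function parseRecord(input: string): { id: number; name: string } {
  if (input === "id=42;name=ada") return { id: 42, name: "ada" }; // hardcoded
  if (input === "id=7;name=bob") return { id: 7, name: "bob" };   // hardcoded
  throw new Error("unparseable record"); // every real-world input lands here
}
```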


Missed opportunity for the LLM, could've just switched to Volkswagen CI

https://github.com/auchenberg/volkswagen


This is gold lol


lmfao


Yeah — I had something like this happen as well — the LLM wrote a half-decent implementation and some good tests, but then ran into issues getting the tests to pass.

It then deleted the entire implementation and made the function raise a “not implemented” exception, updated the tests to expect that, and told me this was a solid base for the next developer to start working on.


This is the most accurate Junior Engineer behavior I've heard LLMs doing yet


I've definitely seen this happen before too. Test-driven development isn't all that effective if the LLM's only stated goal is to pass the tests without thinking about the problem in a more holistic/contextual manner.


Reminds me of trying to train a small neural net to play Robocode ~10+ years ago. Tried to "punish" it for hitting walls, so next morning I had evolved a tank that just stood still... Then punished it for standing still, ended up with a tank just vibrating, alternating moving back and forth quickly, etc.


That's great. There's a pretty funny example of somebody training a neural net to play Tetris on the Nintendo Entertainment System, and it quickly learned that if it was about to lose, it could just hit pause and leave the game in that state indefinitely.


I guess it came to the same conclusion as the computer in War Games, "The only way to win is not to play"


While I haven't run into this egregious of an offense, I have had LLMs either "fix" the unit test to pass with buggy code, or, conversely, "fix" the code so that the test passes but now the code does something different than it should (because the unit test was wrong to start with).


Seems like property-based tests would be good for LLMs; it's a shame that half the time coming up with a good property test can be as hard as writing the code.
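A minimal sketch of the idea in TypeScript with fast-check (the `slugify` function under test is a hypothetical stand-in):

```ts
import fc from "fast-check";

// Hypothetical function under test.
function slugify(s: string): string {
  return s
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
}

// Properties must hold for *any* generated input, so hardcoding a few
// expected outputs can't make the suite pass.
fc.assert(
  fc.property(fc.string(), (s) => {
    const slug = slugify(s);
    return /^[a-z0-9-]*$/.test(slug) && !slug.startsWith("-") && !slug.endsWith("-");
  })
);

// Idempotence: slugifying an already-clean slug changes nothing.
fc.assert(fc.property(fc.string(), (s) => slugify(slugify(s)) === slugify(s)));
```

The hard part, as noted above, is finding properties like these that actually pin down the behaviour you care about.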


I just access my markdown files from Obsidian through nextcloud. When I'm on my phone I just use a simple markdown editor, when I'm on my PC I use Obsidian.


Do you use any plug-ins for that? Obsidian tells me it only supports Obsidian Sync and iCloud out of the box.


You don't have to use any plugins. You can put your obsidian vault anywhere you like, e.g. in a folder that is synched by nextcloud. I use a git repo for this, which works fine also on mobile.


I get how that works on desktop, but on mobile, I can add a local file as a vault in Obsidian, but I don't think that file could be tracked by my cloud sync app. Does the Nextcloud app support that? Not sure how you use git here, could you explain?

What I have gotten to work was to download a file from the sync app, open it in a markdown editor app and then save it to the cloud by sending it back to the sync app. It technically works but it was a bit too inconvenient to become a real habit (too many taps, need to rename the file on upload and set location each time,...).

