Why would you need to replace the batteries? Do they fail outright at around 10 years, become unsafe, or do they just lose capacity?
Curious!
Even if they're at 50% capacity, they would still work, right? But if there are other considerations, especially around safety, that would definitely change the calculus. I'm not sure where to learn about this type of thing.
And if you do end up short on capacity (though who knows how your demand might shift over a decade), you probably won't need to replace the original batteries to get that 20% back; you can likely just expand the pack to bring the capacity up.
Embarrassingly dumb question: if you're one of the few users who don't run a dark-background terminal … how well do these TUIs render on a light background?
Not a dumb question at all. I grew up using actual green screen terminals, and the advent of high-resolution colour monitors and applications with dark text on a white background felt like a blessing. I truly do not understand the regression to dark mode. It's eyestrain hell for me.
Unfortunately, I was unable to test in my light-background terminal, since the application crashes on startup.
If I'm working in a dark room, then light mode is eye strain hell. With dark mode, the minimum brightness I can achieve is about 100x lower than with light mode.
OLED monitors will bring green screen terminals back in style quite soon (with occasional orange and red highlights for that Hollywood haxx0r UX effect)
My personal experience is mixed. Half the time, I get something usable, the other half I get something that prints light yellow on slightly-darker yellow or highlights an item with a dark blue background and dark green text. I'm sure there's something I can tweak in my terminal app to fix this, but it's easier to just avoid those apps.
No, the author is highlighting the fact that the aspect ratio a video is stored in doesn't always match the aspect ratio a video is displayed in. So simply calculating the aspect ratio from the number of horizontal and vertical pixels gives you the storage ratio, but doesn't always give you the correct display ratio.
Yes I think they are conflating square pixels with square pixel aspect ratios.
If a video file only stores a singular color value for each pixel, why does it care what shape the pixel is in when it's displayed? It would be filled in with the single color value regardless.
The obvious solution is just to throw more LLMs at it, to verify the output of the other LLM and confirm it's doing its job...
\s (mostly because you know this will be the "Solution" that many will just run with, despite the very real issue of how "persuadable" these systems are)...
The real answer is that even that will fail, and there will have to be a feedback loop with a human, which in many cases will lead to more churn fixing the AI's work than if the human had just done it in the first place.
That's instead of focusing on the places where an AI tool can truly cut down on time spent, like searching for something (which can still fail, but where the cost of a failure is far lower than when producing output).
I'd assume an outcome is a negotiated agreement between buyer and Agent provider.
Think of all the n8n workflows. If we take a simple example of an expense-receipt processing workflow, or a lead-sourcing workflow, I'd think the outcomes can be counted pretty well. In these cases: receipts successfully entered into the ERP, or the number of entries captured in Salesforce.
I am sure there are cases where outcomes are fuzzy, for instance an employer-employee agreement.
But in some cases, for instance, my accounting agent would only get paid if it successfully uploads my tax returns.
Surely not applicable in all cases. But in cases where a human is measured on outcomes, the same should apply to agents too, I guess.
Indeed. The whole AI game is predicated on the fact that they can deliver work equivalent to humans in some cases. If that is never going to be the case, then this whole agentic stuff goes belly-up.
The alternative scenario is they get better and do some work really well. That is an interesting territory to focus on.
This is the problem with this, in simple cases like “you add N employees” then you can vaguely approximate it, like they do in the article.
But for anything that’s not this trivial example, the person who knows the value most accurately is … the customer! Who is also the person who is paying the bill, so there’s strong financial incentive for them not to reveal this info to you.
I often go back to the customer-support voice AI agent example. Let's say the bot can resolve tickets successfully at a certain rate. That's easy to capture. Why is this difficult? What cases am I missing?
Meaning ... SSDs initially reused IDE/SATA interfaces, which had inherent bottlenecks because those standards were designed for spinning disks.
To fully realize SSD performance, a new transport had to be built from the ground up, one that eliminated those legacy assumptions, constraints and complexities.
Chrome is able to capture the mass consumer market thanks to Google's dark pattern of nagging you to install Chrome any time you're on a Google property.
Edge targets the enterprise Fortune 500 user, who is required to use Microsoft/Office 365 at work (and its deep security-permission ties to SharePoint).
Safari has the Mac/iOS audience via being the default on those platforms (and deep platform integration).
Brave (based on Chromium) and LibreWolf (based on Firefox) have even carved out the users who value privacy.
---
What's Firefox's target user?
Long ago, Firefox was the better IE, and it had great plugins for web developers. But that was before Chrome existed and Google captured the mass market. And developers needed to follow their users.
So what target user is left for Firefox?
Note: not trolling. I loved Firefox. I just don't genuinely understand who it's for anymore.
Ostensibly nerds. Linux users and maybe Mac users. Technical people who understand more about the software industry than all Mozilla Corp management since Brendan.
It's difficult to monetize us when the product is a zero dollar intangible, especially when trust has been eroded such that we've all fled to Librewolf like you said.
It's difficult to monetize normies when they don't use the software due to years of continuous mismanagement.
I think giving Mozilla a new CEO is like assigning a new captain to the Titanic. I will be surprised if this company still exists by 2030.
Right and to your point, there's not a whole lot of precedent for browsers successfully funding themselves when the browser itself is the primary product.
Opera was the lightweight, high-performance, extension-rich, diversely funded, portable, niche-hardware-adapted, early-to-mobile browser, practically built from the dreams of niche users who want customization and privacy. It's a perfect natural experiment in what it looks like to get most, if not all, decisions right, both in the features users want and in creative attempts to diversify revenue. But by the same token, it's also the perfect refutation of the fantasy that making the right decisions gives you a path to revenue. If that were how it worked, Opera would be a trillion-dollar company right now.
But it didn't work, because the economics of web browsers basically don't exist. You have to be a trillion-dollar company already, dominate distribution on a given platform, and force-preload your browser.
Browsers are practically full scale operating systems these days with tens of millions of lines of code, distributed for free. Donations don't work; paying for the browser doesn't work. If it did, Opera (the OG Opera, not the new ownership it got sold to) would still be here.
> Browsers are practically full scale operating systems these days with tens of millions of lines of code, distributed for free.
Well there's your problem! Google owns the server, the client, and the standards body, so ever-increasing complexity is inevitable if you play by their rules. Tens of thousands of lines of code could render the useful parts of the web.
Can you say more? I do think Google has effectively pushed embrace-extend-extinguish, changing the rules so that it's a game they can win. And I do think part of the point of web standards protocols is to limit complexity. So I agree the rules as they exist now favor Google. I think the "real" solution was for the standards bodies to stay in control but seems like that horse left the barn.
Yes, I would literally pay a nominal fee for Firefox if I were confident in the org's direction. As things stand though, the trust is gone as you said.
Mozilla is (or at least started as) a nonprofit. Even the corporation is only there to fulfill the nonprofit's goals. They shouldn't even be thinking about monetization; they should be thinking about getting donations and securing grants.
It seems as if you ask Mozilla, the answer would be "Not current Firefox users."
I really don't know the answer to this question, and I don't know if Mozilla has defined it internally, which probably leads to a lot of the problems that the browser is facing. Is it the privacy focused individual? They seem to be working very hard against that. Is it the ad-sensitive user? Maybe, but they're not doing a lot to win that crowd over.
It kind of feels like Firefox is not targeted at anyone in particular. But long gone are the days when you can just be an alternative browser.
Maybe the target user is someone who wants to use Firefox, regardless of what that means.
I've been using Firefox for a long time, longer than it's had that name, and it used to be excellent for my tab hoarding habits. Specifically, it could handle a large number of tabs, and every couple of months it would crash and lose all of them. I would have to start over from scratch, with an amazing sense of catharsis and freedom, and I never had to make the decision on my own that I would never be able to make.
Now, it's no better than the others. I'm at 1919 tabs right now, and it hasn't lost any for many years. It's rock solid, it's good at unloading the tabs so I don't even need to rely on non-tab-losing crash/restarts to speed things up, and it doesn't even burn enough memory on them to force me to reconsider my ways.
This is a perfect example of how Mozilla's mismanagement has driven Firefox into the ground. Bring back involuntary tab bankruptcy and spacebar heating!
It seems to me Android users who want to block ads are a strong target market. Desktop Chrome has extensions and despite the nerf, it has adblockers that mostly work; Android Chrome doesn't have extensions.
A built in adblocker would probably help Firefox attract those users, but might destroy their Google revenue stream.
I think the problem with that is that Firefox Android with uBO still feels like it has worse First Contentful Paint than Chrome Android. Even on a high-end phone the difference can feel ridiculous; sites render after 1-2s on Chrome but sometimes I can count up to 5 with FF.
The benefits of having uBO might matter more to you and me, but let's not forget that faster rendering was arguably the main reason Chrome Desktop got popular 20 years ago, which caused Firefox to rewrite its engine 2 (3?) times since then to catch up. 20 years later this company still hasn't learned with Android.
Maybe I'm less sensitive to that, but I hadn't really noticed on a phone that wasn't high-end in 2020 and certainly isn't now. I'll have to pay attention to sites being slow and compare a Chromium-based browser next time I notice one.
I switched from Firefox desktop to Chrome when Chrome was new because it was multi-process and one janky page couldn't hang or crash the whole browser. I vaguely remember the renderer being a little faster, but multi-process was transformative. Firefox took years to catch up with that.
I'm very sensitive to ads though. If a browser doesn't have a decent adblocker, I'm not using it. Perhaps surprisingly, the Chromium browser with good extension support on Android is Edge.
Somehow its target user group includes my father, who is 90 years old. As far as I can recall, we got him using Firefox years ago and he became a committed user.
I wish more browsers would target seniors. Accessibility and usability are universally a nightmare.
Firefox users are people who would use LibreWolf, but installed it, tried it, saw it doesn't have dark mode, and figured that Firefox was good enough after all.
> Power generation turbines are designed to work at ambient sea level conditions. They don't rely on ambient air being especially cold for cooling, they can keep cool thanks to the large mass flow rate
What could be contributing to this: Veritasium recently did a whole video on how jet engines operate at temperatures above their components' melting point,
and how the cold air at altitude is what keeps them from melting.
> Lower-level languages don’t have this same problem to the same extent.
Of course they do.
If the computer directly executed what you write down in what you call a "low level language", it would be slow as fuck.
Without highly optimizing compilers even stuff like C runs pretty slow.
If something about the optimizer or some other translation step of a compiler changes, it often has a significant influence on the performance of the resulting compilation artifacts.
First of all, nothing in the article is about optimization. Scala does not even have an optimizer…
It was about translation strategies and macro expansion.
But this makes no difference. You have all the issues you just named exactly the same in so-called "high level languages" as you have in C. C is in fact a high-level language, and the code you write in C has not much in common with what the machine actually executes. The last time that was the case was about 40 years ago.
1. Whether the C optimizer kicks in or not is pure dark magic. Nobody can tell just from looking at the code. The optimization techniques are way too complex to be understood ad hoc; not even experts can do that.
2. The difference between the optimizer doing its work and the computer just verbatim executing whatever someone wrote down is hilariously large! Adding -O2 can make your code many orders of magnitude faster. Or it may do almost nothing… (But as said, you can't know what will happen just from looking at the code; that's again point 1.)
3. Nor can you express what the machine does in C. The machine does not execute anything like C; the last time it did is over 50 years ago… For at least 30 years, CPUs have contained a kind of JIT compiler that translates the compiled assembly into the actual internal machine operations. A modern CPU effectively has to emulate a computer that still works like a PDP-11 to match the C machine model, even though the real hardware doesn't look anything like a PDP-11 any more! When writing C, you have only very indirect influence on the actual machine code. It's mostly about nudging the CPU-internal JIT to do something, but you have no control over it, exactly as you have no control over what, for example, the JVM JIT does. It's the exact same situation, just one level lower.