
As a 25+ year veteran programmer who's been mostly unimpressed with the quality of AI-generated code -

I've still learned from it. Just read each line it generates carefully. Read the API references of unfamiliar functions or language features it uses. You'll learn things.

You'll also see a lot of stupidity, overcomplication, outdated or incorrect API calls, etc.


I don't think formal verification really addresses most day-to-day programming problems:

    * A user interface is confusing, or the English around it is unclear
    * An API you rely on changes, is deprecated, etc.
    * Users use something in unexpected ways
    * Updates forced by vendors or open source projects cause things to break
    * The customer isn't clear what they want
    * Complex behavior between interconnected systems, out of the purview of the formal language (OS + database + network + developer + VM + browser + user + web server)
For some mathematically pure task, sure, it's great. Or a low-level library like a regular expression parser or a compression codec. But I don't think that represents a lot of what most of us are tasked with, and those low-level "mathematically pure" libraries are generally pretty well handled by now.

In fact, automated regression tests done by AI with visual capabilities may have a bigger impact than formal verification. You can have an army of testers now, painstakingly going through every corner of your software.

In practice it ends up being a bit like static analysis though, in that you get a ton of false positives.

All said, I’m now running all commits through Codex (which is the only thing it’s any good at), and it’s really pretty good at code reviews.


This will only work somewhat when customers expect features to work in a standard way. When customers spec things to work in non-standard ways, you'll just end up with a bunch of false positives.

This. When the bugs come streaming in you better have some other AI ready to triage them and more AI to work them, because no human will be able to keep up with it all.

Bug reporting is already about signal vs noise. Imagine how it will be when we hand the megaphone to bots.


TBH most day-to-day programming problems are barely worth having tests for. But if we had formal specs, and even just hand-wavy correspondences between the specs and the implementation for the low-level things everybody depends on, that would be a huge improvement for the reliability of the whole ecosystem.

A limited form of formal verification is already mainstream. It is called type systems. The industry in general has been slowly moving to encode more invariants into the type system, because every invariant that is in the type system is something you can stop thinking about until the type checker yells at you.

A lot of libraries document invariants that are either not checked at all, only at runtime, or somewhere in between. For instance, the requirement that a collection not be modified during iteration. Or that two regions of memory do not overlap, or that a variable is not modified without owning a lock. These are all things that, in principle, can be formally verified.
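As a minimal illustration (a TypeScript sketch with made-up names, not tied to any particular library), here is one such invariant, "this collection is never empty", moved from a doc comment into the type itself:

    // A hypothetical alias: a tuple type with at least one element.
    type NonEmpty<T> = [T, ...T[]];

    // head never needs to handle the empty case, and callers can't hit it.
    function head<T>(xs: NonEmpty<T>): T {
      return xs[0];
    }

    head([1, 2, 3]); // ok
    // head([]);     // compile-time error instead of a runtime surprise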

No one claims that good type systems prevent buggy software. But, they do seem to improve programmer productivity.

For LLMs, there is an added benefit. If you can formally specify what you want, you can make that specification your entire program. Then have an LLM driven compiler produce a provably correct implementation. This is a novel programming paradigm that has never before been possible; although every "declarative" language is an attempt to approximate it.


> No one claims that good type systems prevent buggy software.

That's exactly what languages with advanced type systems claim. To be more precise, they claim to eliminate entire classes of bugs. So they reduce bugs, they don't eliminate them completely.


No nulls, no nullability bombs.

Forcing devs to pre-fix/avoid bugs before the compiler will allow the app means the programs are more correct as a group.

Wrong, incomplete, insufficient, unhelpful, unimpressive, and dumb are all still very possible. But more correct than likely in looser systems.


> No nulls, no nullability bombs.

I hate this meme. Null indicates something. If you disallow null that same state gets encoded in some other way. And if you don't properly check for that state you get the exact same class of bug. The desirable type system feature here is the ability to statically verify that such a check has occurred every time a variable is accessed.

Another example is bounds checking. Languages that stash the array length somewhere and verify against it on access eliminate yet another class of bug without introducing any programmer overhead (although there generally is some runtime overhead).


The whole point of "no nullability bombs" is to make it obvious in the type system when the value might be not present, and force that to be handled.

Javascript:

  let x = foo();
  if (x.bar) { ... } // might blow up
Typescript:

  let x = foo(); // type of x is Foo | undefined
  if (x === undefined) { ...; return; } // I am forced to handle this
  if (x.bar) { ... } // this is now safe, as Typescript knows x can only be a Foo now
(Of course, languages like Rust do that cleaner, since they don't have to be backwards-compatible with old Javascript. But I'm using Typescript in hopes of a larger audience.)

If you eliminate the odd integers from consideration, you've eliminated an entire class of integers. Yet the set of remaining integers is of the same size as the original.

Peak HN gnomism. While the set of possible errors may be infinite, their distribution is not uniform.

No, because integers in computing are generally finite.

There cannot be infinite bugs in a limited program.

Programs are not limited; the number of Turing machines is countably infinite.

When you say things like "eliminate a class of bugs", that is played out in the abstraction: an infinite subset of that infinity of machines is eliminated, leaving an infinity.

How you then sample from that infinity in order to have something which fits on your actual machine is a separate question.


How do you count how many bugs a program has? If I replace the Clang code base by a program that always outputs a binary that prints hello world, how many bugs is that? Or if I replace it with a program that exits immediately?

Maybe another example is compiler optimisations: if we say that an optimising compiler is correct if it outputs the most efficient (in number of executed CPU instructions) output program for every input program, then every optimising compiler is buggy. You can always make it less buggy by making more of the outputs correct, but you can never satisfy the specification on ALL inputs because of undecidability.


Because the number of states a program can be in is so huge (when you consider everything that can influence how a program runs and the context where and when it runs), it is practically infinite for current computational power. But yes, it is theoretically finite and can even be calculated.

> For LLMs, there is an added benefit. If you can formally specify what you want, you can make that specification your entire program. Then have an LLM driven compiler produce a provably correct implementation. This is a novel programming paradigm that has never before been possible; although every "declarative" language is an attempt to approximate it.

The problem is there is always some chance a coding agent will get stuck and be unable to produce a conforming implementation in a reasonable amount of time. And then you are back in a similar place to what you were with those pre-LLM solutions - needing a human expert to work out how to make further progress.


With the added issue that now the expert is working with code they didn't write, and that could in general be harder to understand than human-written code. So they could find it easier to just throw it away and start from scratch.

Some type systems (e.g., Haskell's) are closing in on becoming formal verification languages themselves.

And one can see how quickly they became mainstream...

Given that it's the AI doing the coding, it could become mainstream pretty quickly, so long as it's decent at Haskell. Which it already is, surprisingly so for such a niche language. It doesn't necessarily write great code, but it's good enough, and the straitjacket type system makes it very hard for the model to sneak in creative hacks like using globals, or to trip itself up with mutable state.

I think that's because the barrier to entry for a beginner is much higher than, say, Python's.

IMHO, these strong type systems are just not worth it for most tasks.

As an example, I currently mostly write GUI applications for mobile and desktop as a solo dev. 90% of my time is spent on figuring out API calls and arranging layouts. Most of the data I deal with are strings with their own validation and formatting rules that are complicated and at the same time usually need to be permissive. Even at the backend all the data is in the end converted to strings and integers when it is put into a database. Over-the-wire serialization also discards most typing (although I prefer protocol buffers to alleviate this problem a bit).

Strong typing can be used in between those steps but the added complexity from data conversions introduces additional sources of error, so in the end the advantages are mostly nullified.


> Most of the data I deal with are strings with their own validation and formatting rules that are complicated and at the same time usually need to be permissive

this is exactly where a good type system helps: you have an unvalidated string and a validated string which you make incompatible at the type level, thus eliminating a whole class of possible mistakes. same with object ids, etc.

don't need haskell for this, either: https://brightinventions.pl/blog/branding-flavoring/
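Here's a rough sketch of what that looks like in TypeScript (made-up names and a deliberately simplistic check, just to show the shape of the technique):

    // The brand exists only at the type level, so there is no runtime cost,
    // but the compiler refuses to mix validated and unvalidated strings.
    type ValidatedEmail = string & { readonly __brand: "ValidatedEmail" };

    function validateEmail(raw: string): ValidatedEmail | undefined {
      return /^[^@\s]+@[^@\s]+$/.test(raw) ? (raw as ValidatedEmail) : undefined;
    }

    function sendWelcome(to: ValidatedEmail): void {
      console.log("sending welcome mail to " + to);
    }

    const input = "user@example.com";
    // sendWelcome(input);          // type error: a plain string is rejected
    const email = validateEmail(input);
    if (email !== undefined) {
      sendWelcome(email);           // fine: validation has provably happened
    }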


That's neat, I was about to ask which languages support that since the vast majority don't. I didn't know that you can do that in Typescript.

Any language with a type system, really...

Even OOP: if you have a string class, you can have a String_Formated_For_API subtype.

Just extends String, and add some checking.

But now the type checker "knows" it can print() a String_Formated_For_API just fine, but that call_API() won't accept a plain string.
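A rough sketch of that flavour in TypeScript (a wrapper class rather than a literal String subclass, with hypothetical names):

    // Nominal-ish wrapper: the private constructor means the only way to get
    // one is through the checked factory.
    class StringFormattedForApi {
      private constructor(readonly value: string) {}

      static parse(raw: string): StringFormattedForApi | undefined {
        // illustrative format check only
        return /^[A-Za-z0-9_-]+$/.test(raw)
          ? new StringFormattedForApi(raw)
          : undefined;
      }
    }

    function callApi(arg: StringFormattedForApi): void {
      console.log("calling API with " + arg.value);
    }

    // callApi("raw user input"); // type error: a plain string is not accepted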


I would argue that the barrier to entry is on par with Python for a person with no experience, but you need much more time with Haskell to become proficient in it. In Python, on the other hand, you can learn the basics and these will get you pretty far.

Python has a reputation for being good for beginners so it's taught to beginners so it has a reputation for being good for beginners.

I blame syntax. It's too unorthodox nowadays. Historical reasons don't matter all that much; everything mainstream is a C-family member.

Piggybacking off your comment, I just completed a detailed research paper where I compared Haskell to C# with an automated trading strategy. I have many years of OOP and automated trading experience, but struggled a bit at first implementing in Haskell syntax. I attempted to stay away from LLMs, but ended up using them here and there to get the syntax right.

Haskell is actually a pretty fun language, although it doesn't fly off my fingers like C# or C++ does. I think a really great example of the differences is displayed in the recursive Fibonacci sequence.

In C#:

    public int Fib(int n)
    {
        if (n <= 1)
            return n;
        else
            return Fib(n - 1) + Fib(n - 2);
    }
In Haskell:

    fib :: Integer -> Integer
    fib n
      | n <= 1    = n
      | otherwise = fib (n - 1) + fib (n - 2)
As you might know, this isn't even scratching the surface of the Haskell language, but it does a good job highlighting the syntax differences.

When using a switch expression in C#, they are a lot more similar:

    public int Fib(int n) => n switch
    {
        <= 1 => n,
        _    => Fib(n - 1) + Fib(n - 2)
    };

> No one claims that good type systems prevent buggy software. But, they do seem to improve programmer productivity.

To me it seems they reduce productivity. In fact, for Rust, which seems to match the examples you gave about locks or regions of memory, the common wisdom is that it takes longer to start a project, but one reaps the benefits later thanks to more confidence when refactoring or adding code.

However, even that weaker claim hasn’t been proven.

In my experience, the more information is encoded in the type system, the more effort is required to change code. My initial enthusiasm for the idea of Ada and Spark evaporated when I saw how much ceremony the code required.


> In my experience, the more information is encoded in the type system, the more effort is required to change code.

I would tend to disagree. All that information encoded in the type system makes explicit what is needed in any case and is otherwise only carried informally in people's heads by convention. Maybe in some poorly updated doc or code comment where nobody finds it. Making it explicit and compiler-enforced is a good thing. It might feel like a burden at first, but you're otherwise just closing your eyes and ignoring what can end up being important. Changed assumptions are immediately visible. Formal verification just pushes the boundary of that.


In practice it would be encoded in comments, automated tests and docs, with varying levels of success.

It’s actually similar to tests in a way: they provide additional confidence in the code, but at the same time ossify it and make some changes potentially more difficult. Interestingly, they also make some changes easier, as long as not too many types/tests have to be adapted.


This reads to me like an argument for better refactoring tools, not necessarily for looser type systems. Those tools could range from mass editing tools, IDEs changing signatures in definitions when changing the callers and vice versa, to compiler modes where the language rules are relaxed.

I was thinking about C++ and if you change your mind about whether some member function or parameter should be const, it can be quite the pain to manually refactor. And good refactoring tools can make this go away. Maybe they already have, I haven’t programmed C++ for several years.

Constraints Liberate, Liberties Constrain. (I also recommend watching the presentation with the same title)

> All that information encoded in the type system makes explicit what is needed in any case and is otherwise only carried informally in peoples' heads by convention

This is, in fact, better for LLMs: they are better at carrying information and convention in their KV cache than at figuring out the actual types by jumping between files and burning tokens in context / risking losing it on compaction (or getting it wrong and having to do a compilation cycle).

If a typed language lets a developer fearlessly build a semantically inconsistent or confusing private API, then LLMs will perform worse on it even though correctness is more guaranteed.


It is definitely harder to refactor Haskell than it is Typescript. Both are "safe" but one is slightly safer, and much harder to work with.

Capturing invariants in the type system is a two-edged sword.

At one end of the spectrum, the weakest type systems limit the ability of an IDE to do basic maintenance tasks (e.g. refactoring).

At the other end of the spectrum, dependent types and especially sigma types capture arbitrary properties that can be expressed in the logic. But then constructing values in such types requires providing proofs of these properties, and the code and proofs are inextricably mixed in an unmaintainable mess. This does not scale well: you cannot easily add a new proof on top of existing self-sufficient code without temporarily breaking it.
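For a tiny taste of that, a Lean 4 sketch (illustrative names only): a subtype bundles a value with a proof of the property, so every construction site owes that proof.

    -- A subtype (a simple sigma-like type): naturals paired with evidence of evenness.
    abbrev EvenNat := { n : Nat // n % 2 = 0 }

    -- Constructing a value means discharging the proof obligation on the spot.
    def four : EvenNat := ⟨4, by decide⟩
    -- def five : EvenNat := ⟨5, by decide⟩  -- rejected: no such proof exists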

Like other engineering domains, proof engineering has tradeoffs that require expertise to navigate.


> but one reaps the benefits later thanks to more confidence when refactoring or adding code.

To be honest, I believe it makes refactoring/maintenance take longer. Sure, safer, but this is not a one-time only price.

E.g. you decide to optimize this part of the code and only return a reference or change the lifetime - this is an API-breaking change and you have to potentially recursively fix it. Meanwhile GC languages can mostly get away with a local-only change.

Don't get me wrong, in many cases this is more than worthwhile, but I would probably not choose Rust for the n+1th backend CRUD app, for this and similar reasons.


The choice of whether to use GC is completely orthogonal to that of a type system. On the contrary, being pointed at all the places that need to be recursively fixed during a refactoring is a huge saving in time and effort.

I was talking about a type system with affine types, as the topic was Rust specifically.

I compared it to a statically typed language with a GC - where the runtime takes care of a property that Rust has to do statically, requiring more complexity.


In my opinion, programming languages with a loose type system or no explicit type system only appear to foster productivity, because it is way easier to end up with undetected mistakes that can bite later, sometimes much later. Maybe some people argue that then it is someone else's problem, but even in that case we can agree that the overall quality suffers.

"In my experience, the more information is encoded in the type system, the more effort is required to change code."

Have you seen large js codebases? Good luck changing anything in them, unless they are really, really well written, which is very rare. (My own js code is often a mess)

When you can change types on the fly somewhere hidden in code ... then this leads to the opposite of clarity for me. And so lots of effort is required to change something in a proper way that does not lead to more mess.


There are two types of slowdown at play:

a) It’s fast to change the code, but now I have failures in some apparently unrelated part of the code base. (Javascript) and fixing that slows me down.

b) It’s slow to change the code because I have to re-encode all the relationships and semantic content in the type system (Rust), but once that’s done it will likely function as expected.

Depending on project, one or the other is preferable.


Or: I’m not going to do this refactor at all, even though it would improve the codebase, because it will be near impossible to ensure everything is correct after making so many changes.

To me, this has been one of the biggest advantages of both tests and types. They provide confidence to make changes without needing to be scared of unintended breakages.


There's a tradeoff point somewhere where it makes sense to go with one or the other. You can write a lot of code in Bash and Elisp without having to care about the type of whatever you're manipulating, because you're handling one type and encoding the actual values in a type system would be very cumbersome. But then there are other domains which are fairly well known, so the investment in encoding them in a type system does pay off.

Soon a lot of people will go out of their way to try to convince you that Rust is the most productive language, that functions having longer signatures than their bodies is actually a virtue, and that putting .clone(), Rc<> or Arc<> everywhere to avoid borrow-checker complaints makes Rust easier and faster to write than languages that don't force you to do so.

Of course it is a hyperbole, but sadly not that large.


> For LLMs, there is an added benefit. If you can formally specify what you want, you can make that specification your entire program. Then have an LLM driven compiler produce a provably correct implementation. This is a novel programming paradigm that has never before been possible; although every "declarative" language is an attempt to approximate it.

That is not novel and every declarative language precisely embodies it.


I think most existing declarative languages still require the programmer to specify too many details to get something usable. For instance, Prolog often requires the use of 'cut' to get reasonable performance for some problems.

> No one claims that good type systems prevent buggy software. But, they do seem to improve programmer productivity.

They really don’t. How did you arrive at such a conclusion?


Not that I can answer for OP, but as a personal anecdote: I've never been more productive than when writing in Rust; it's a goddamn delight. Every codebase feels like it could've been my own, and you can get up to speed from 0 to 100 in no time.

Yeah, I’ve been working mainly in rust for the last few years. The compile time checks are so effective that run time bugs are rare. Like you can refactor half the codebase and not run the app for a week, and when you do it just works. I’ve never had that experience in other languages.

Through empirical evidence? Do you think that the vast majority of software devs moved to typing for no reason?

> Do you think that the vast majority of software devs moved to typing for no reason?

It is quite clear that this industry is mostly driven by hype and fads, not by empirical studies.

Empirical evidence in favor of the claim that static typing and complex type systems reduce bugs or improve productivity is highly inconclusive at best.


It's a bad reason. A lot of best practices are temporary blindnesses, comparable, in some sense, with the supposed love of BASIC before or despite Dijkstra. So, yes, it's possible there is no good reason. Though I don't think it's the case here.

We don't actually have empirical evidence on the topic, surprisingly.

It's just people's hunches.


I feel like the terms logical, empirical, rational and objective are used interchangeably by the general public, with one being in vogue at a time.

> Complex behavior between interconnected systems, out of the purview of the formal language (OS + database + network + developer + VM + browser + user + web server)

Isn't this what TLA+ was meant to deal with?


Not really; some components have a lot of properties that are very difficult to model. Take network latency, or storage performance in the OS.

Actually, formal verification could help massively with four of those problems — all but the first (UI/UX) and fifth (requirements will always be hard).

A change in the API of a dependency should be detected immediately and handled silently.

Reliance on unspecified behavior shouldn't happen in the first place; the client's verification would fail.

Detecting breakage caused by library changes should be where verification really shines; when you get the update, you try to re-run your verification, and if that fails, it tells you what the problem is.

As for interconnected systems, again, that's pretty much the whole point. Obviously, achieving this dream will require formalizing pretty much everything, which is well beyond our capabilities now. But eventually, with advances in AI, I think it will be possible. It will take something fundamentally better than today's LLMs, though.


That has been the problem with unit and integration tests all the time. Especially for systems that tend to be distributed.

AI makes creating mock objects much easier in some cases, but it still creates a lot of busy work and makes configuration more difficult. And at this point it is often difficult configuration management that causes the issues in the first place. Putting everything in some container doesn't help either; on the contrary.


> But I don't think that represents a lot of what most of us are tasked with

Give me a list of all the libraries you work with that don't have some sort of "okay but not that bit" rule in the business logic, or "all of those functions are f(src, dst) but the one you use most is f(dst, src) and we can't change it now".

I bet it's a very short list.

Really we need to scrap every piece of software ever written and start again from scratch with all these weirdities written down so we don't do it again, but we never will.


Scrapping everything wouldn't help. 15 years ago the project I'm on did that - for a billion dollars. We fixed the old mistakes but made plenty of new ones along the way. We are trying to fix those now, and I can't help but wonder what new mistakes we are making now that we will regret in 15 years.

Computers are terrible and software is terrible and we should just go back to tilling the fields with horses and drinking beer.

Yeah, there were about 5 or 10 videos in like the past few weeks about this "complexity" and the unpredictability of third parties and moving pieces that AI doesn't control and even forgets (small context window). I am sure you have seen at least one of them ;)

But it's true. AI is still super narrow and dumb. It doesn't even understand basic prompts.

Look at computer games now - they still don't look real despite almost 30 years since Half-Life 1 started the revolution, I would claim. Damn, I think I ran it on a 166 MHz computer on some of the lowest details even.

Yes, it just gets better and better but still looks super uncanny - at least to me. And it's been basically 30 years of constant improvements. Heck, Roomba is going bankrupt.

I am not saying things don't improve, but the hype and AI bubble are insane and the reality doesn't match the expectations and predictions at all.


> An API you rely on changes, is deprecated, etc

Formal verification will eventually lead to good, stable API design.

> Users use something in unexpected ways

> Complex behavior between interconnected systems

It happens when there's no formal verification during the design stage.

Formal verification literally means covering 100% of state changes: for every possible input/output, every execution branch should be verified.


Formal verification has nothing to do with the quality of the API.

Given the spec, formal verification can tell you if your implementation follows the spec. It cannot tell you if the spec is good.


That's something I agree with.

I am right now working on an offline API client: https://voiden.md/. I wonder if this can be a feature.


> It cannot tell you if the spec is good

I beg to differ: if a spec is hard to verify, that's a bad sign.


All non-trivial specs, like the one for seL4, are hard to verify. Lots of that complexity comes from interacting with the rest of the world which is a huge shared mutable global state you can't afford to ignore.

Of course, you can declare that the world itself is inherently sinful and imperfect, and is not ready for your beautiful theories but seriously.


> Of course, you can declare that the world itself is inherently sinful and imperfect, and is not ready for your beautiful theories

i see we are both familiar with haskellers (friendly joke!)


it can tell you if your spec is bad, but it can't tell you if your spec is good

That is one problem of many solved, isn't that good?

That the spec solves the problem is called validation in my domain and treated explicitly with different methods.

We use formal validation to check for invariants, but also "it must return a value xor an error, but never just hang".


> Formal verification will eventually lead to good, stable API design.

Why? Has it ever happened like this? Because to me it would seem that if the system is verified to work, then it works no matter how the API is shaped, so there is no incentive to change it to something better.


> if the system is verified to work, then it works no matter how the API is shaped

That's the case for one-off integrations, but the messy part always comes when the system's goal changes.

Let's say formal verification could help to avoid some anti-patterns.


> Let's say formal verification could help to avoid some anti-patterns.

I'd still like to hear about the actual mechanism of this happening. Because I personally find it much easier to believe that the moment keeping the formal verification up to date becomes untenable for whatever reason (specs changing too fast, external APIs to use are too baroque, etc) people would rather say "okay, guess we ditch the formal verification and just keep maintaining the integration tests" instead of "let's change everything about the external world so we could keep our methodology".


> I'd still like to hear about the actual mechanism of this happening

I am not an expert on this, but the worst APIs I've seen are those with hidden state.

e.g. a .toggle() API. Call it an odd number of times, it goes to one state; call it an even number of times, it goes back.

And there are "call A before you call B" types of APIs, where the client has to keep to a strict call order (which is itself a state machine of some kind).
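As an aside, that second kind can at least sometimes be pushed into the type system too. A rough TypeScript sketch (hypothetical names) where calling B before A simply doesn't type-check:

    // Each protocol state gets its own type; only connect() produces a Connected.
    type Disconnected = { state: "disconnected" };
    type Connected = { state: "connected"; socket: string };

    function connect(_conn: Disconnected): Connected {
      return { state: "connected", socket: "fake-socket" };
    }

    function send(conn: Connected, msg: string): void {
      console.log("sending " + msg + " over " + conn.socket);
    }

    const start: Disconnected = { state: "disconnected" };
    // send(start, "hi");        // type error: B before A is rejected
    send(connect(start), "hi");  // ok: the required call order is enforced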


> I am not an expert on this, but the worst APIs I've seen are those with hidden state.

> e.g. a .toggle() API. Call it an odd number of times, it goes to one state; call it an even number of times, it goes back.

This is literally a dumb light switch. If you have trouble proving that, starting from lights off, flicking a simple switch twice will still keep the lights off, then, well, I have bad news to tell you about the feasibility of using formal methods for anything more complex than a dumb light switch. Because the rest of the world is a very complex and stateful place.

> (which itself is a state machine of some kind)

Yes? That's pretty much the raison d'être of formal methods: for anything pure and immutable, normal intuition is usually more than enough; it's tracking the paths through enormous configuration spaces that our intuition has problems with. If formal methods can't help with that with a comparable amount of effort, then they are just not worth it.


At that point you create an entirely new API, fully versioned, and backwardly compatible (if you want it to be). The point the article is making is that AI, in theory, entirely removes the person from the coding process so there's no longer any need to maintain software. You can just make the part you're changing from scratch every time because the cost of writing bug-free code (effectively) goes to zero.

The theory is entirely correct. If a machine can write provably perfect code there is absolutely no reason to have people write code. The problem is that the 'If' is so big it can be seen from space.


Isn’t this where the Eiffel design by contract people speak up about code reuse?

100% of state changes in business software is unknowable on a long horizon, and relies on thoroughly understanding business logic that is often fuzzy, not discrete and certain.

Formal verification does not guarantee that business logic works as everybody expected, nor that it's future-proof; however, it does provide a workable path towards:

Things can only happen if you allow them to happen.

In other words, your software may come to a stage where it's no longer applicable, but it never crashes.

Formal verification has had little adoption only because it costs 23x your original code with "PhD-level training".


The reason it doesn't work is businesses change faster than you can model every detail AND keep it all up to date. Unless you have something tying your model directly to every business decision and transaction that happens, your model will never be accurate. And if we're talking about formal verification, that makes it useless.

So much of what's aimed at nontechnical consumers these days is full of dishonesty and abuse. Microsoft kinda turned Windows into something like this: you need OneDrive "for your protection", new telemetry and ads with every update, etc.

In much of the physical world thankfully there's laws and pretty-effective enforcement against people clubbing you on the head and taking your stuff, retail stores selling fake products and empty boxes, etc.

But the tech world is this ever-boiling global cauldron of intangible software processes and code - hard to get a handle on what to even regulate. Wish people would just be decent to each other, and that that would be culturally valued over materialism and moneymaking by any possible means. Perhaps it'll make a comeback.


This was a nearly poetic way to put it. Thank you for ascribing words to a problem that equally frustrates me.

I spend a lot of time trying to think of concrete ways to improve the situation, and would love to hear people's ideas. Instinctively I tend to agree it largely comes down to treating your users like human beings.


The situation won’t be improved for as long as an incentive structure exists that drives the degradation of the user experience.

Get as off-grid as you possibly can. Try to make your everyday use of technology as deterministic as possible. The free market punishes anyone who “respects their users”. Your best bet is some type of tech co-op funded partially by a billionaire who decided to be nice one day.


We're not totally unempowered here, as folks who know how to tech. We can build open source alternatives that are as easy to use and install as the <epithet>-ware we are trying to combat.

Part of the problem has been that there's a mountain to climb vis a vis that extra ten miles to take something that 'works for me' and turn it into 'gramps can install this and it doesn't trigger his alopecia'.

Rather, that was the problem. If you're looking for a use case for LLMs, look no further. We do actually have the capacity to build user-friendly stuff at a fraction of the time cost that we used to.

We can make the world a better place if we actually give a shit. Make things out in the open, for free, that benefit people who aren't in tech. Chip away at the monopolies by offering a competitive service because it's the right thing to do and history will vindicate you instead of trying to squeeze a buck out of each and every thing.

I'm not saying "don't do a thing for money". You need to do that. We all need to do that. But instead of your next binge watch or fiftieth foray into Zandronum on brutal difficulty, maybe badger your llm to do all the UX/UI tweaks you could never be assed to do for that app you made that one time, so real people can use it. I'm dead certain that there are folks reading this now who have VPN or privacy solutions they've cooked up that don't steal all your data and aren't going to cost you an arm and a leg. At the very least, someone reading this has a network plugin that can sniff for exfiltrated data to known compromised networks (including data brokers) - it's probably just finicky to install, highly technical, and delicate outside of your machine. Tell claude to package that shit so larry luddite can install it and reap the benefits without learning what a bash is or how to emacs.


I agree, and with how much money people in this field can make, I'm surprised there aren't more retired hackers banding together to build something like this. Personally I still have a mortgage to pay off, but eventually I would like to be involved in something like this.

What product(s) do you think present the best opportunity for reinventing today with a genuine, user-centric approach?

Personally I feel it's everything from the ground up - silicon IC's through to device platforms and cloud services. But we need a plan to chip away at the problem one bite at a time.


Probably a phone OS would be the most impactful. If it had the ability to really cut back on tracking and data sharing by default.

But if you’re talking about building hardware… that feels like something the NSA would be happy to be involved with whether you want them to be or not. I’d vote for an 80/20 solution that gets people protected from some of the most rampant data mining going on by corporations vs. state actors.

The other issue to keep in mind is that the tech ecosystem absolutely will suffocate anything like this by disabling access to their apps / website with this OS. So at the end of the day I really don’t know if there’s a solution to any of this.


And still, there is plenty of software that you can't run on anything but Windows. That's a major blocker at this point and projects like 'mono' and 'wine', while extremely impressive, are still not good enough to run that same software on Linux.

Wasn't it now (end of 2025) that Dario Amodei said Claude (or LLMs in general) would be doing almost all programming work?

This article is my typical experience with LLM coding. Endless correction and handholding, and manual cleanup of subtle mistakes. With no long-term learning from them.

Kinda makes me livid, the amount of false hype coming out of the mouths of the stewards of these investor-subsidized LLM companies.

But they're amazing Google replacements, and learning tools. And once in a blue moon they ace a coding assignment and delight me.

Edit: 90% of coding work by June to September 2025: https://www.businessinsider.com/anthropic-ceo-ai-90-percent-...


I mean, Claude got 90% there. The author could contribute 10% of work to fix the alignment and have a finished page.


He assembled all the assets, did a bunch of prompting prep, etc. The alignment was kinda the main job, and he would have had to do a bunch of cleanup after. If this was a freelance job a client was paying for, I'd definitely tell myself to save cost next time by not using AI.


Only a matter of time before supply catches up and then likely overshoots (maybe combined with AI / datacenter bubble popping), and RAM becomes dirt cheap. Sucks for those who need it now though.


If OpenAI was building out with DDR5, that would create a big ol' supply of DDR5 to make RAM cheap.

No. They bought the wafers for HBM. The only thing that's going to get cheap when OpenAI implodes is server stuff for homelab bros. The RAM manufacturers are using their production capacity to produce HBM for OpenAI.


Probably a good time to mention Fossil, an excellent SCM from the makers of SQLite, with a number of GitHub-like features built in:

https://fossil-scm.org/

Single ~6 MB executable, blazing fast. I've been a happy user for years.


There's no code review system for Fossil, because the SQLite guy doesn't believe in formalized code review. It also doesn't have any good flow for accepting external contributions other than .patch/bundle files, because the SQLite guy doesn't believe anyone but him is good enough to regularly contribute code.

It might be great for single-person projects, which I guess is fine for hobby stuff, but unless you luck out like Richard Hipp and manage to become a well-paid hermit working in an effectively one-person cathedral model, then it's not really going to work for most projects.


Minimalist VPS: download the executable, copy and paste the Apache config, activate it, and you are good to go. With certain providers you don't even need to buy a domain.

Personal, small and medium sized projects are 99% of all projects.


Fossil is great. I'd encourage anyone to try it out on their next personal project.


Unfortunately companies use the "security boogeyman" to push ever-increasing ads, telemetry, performance degradation, features you probably don't want that disrupt your workflow and muscle memory, breaking API changes to libraries, etc.

If you could sign a contract with e.g. Microsoft (or hell, NPM) to only receive updates that explicitly fix bugs and security holes, that'd be amazing - but I've rarely if ever seen it.


During the early XP days Windows had granular updates where you could decline everything but security updates if you wanted. Even when they pushed out the Windows Genuine Advantage update (which offered a user no genuine advantages at all, just possible hassles) you could still decline it.


Exactly--if I could guarantee that I was getting just security updates and bug fixes, I'd be happy to turn on automatic Windows updates (and application updates too, for that matter).


Unfortunately it’s not just a boogeyman. That’s why it’s worked so well historically.

You are less secure if you wave off years of security patches.


Moved to Arch 6+ months ago after 25+ years in Windows, it's been SO nice. My computer belongs to me again, lightning fast, no ads and BS every update, no 500 background processes.

Definitely took some setup work - I have a lot of scripts and custom tools. But so worth it! Happy trails.


So glad to have developed my own chat interface, with interchangeable models. Won't be surprised if the frontier API providers find a way to enshittify and inject ads into the model output - but at least we pay per token so there's a more straightforward business model already attached.


And the ego boost of it all - being one of the special few who sees "the truth" that others are too brainwashed/dumb/whatever to see. Makes one feel quite important.


Indeed, this might be why religion seems so odd to outsiders.

It's implausible, yet that's what stimulates the tribal feelings among the believers.


Those are the simple cows to be milked, but numerous 'gurus' in these communities are very well aware of the bullshit they propagate to the weak and gullible; it's just such easy, uncritical prey. You can always just go deeper into paranoia.

Makes me think that Mr. Trump switched from being a Democrat to a Republican and pushed for MAGA-esque folks who often love him to the death due to a very similar principle - just spit out some populist crap that stirs core emotions - the worse the better - make them feel like victims, find an easy target to blame that can't defend itself well (immigrants), add some conspiracy (of which he is actually part, as a Wall Street billionaire).

The extreme left wouldn't easily swallow that ridiculous mix from a nepotistic billionaire who managed to bankrupt casinos and avoided military duty (on top of some proper hebephilia with his close friend Mr. E and who knows what else).

But what do I know, I'm just an outside observer. Still, nobody around the world has an umbrella thick enough that this crap doesn't eventually fall on them too.


I think Trump's just been running a simple popularity-seeking loop for a while. Do a thing; if his people like it, do it more; otherwise do it less.

I've heard that even Hitler was like this: that he didn't start out hating Jews, but repeatedly reacted to the fact that he got louder cheers whenever he blamed things on Jews. But I don't know how to verify if this is true.


The feedback-response part is true, but the specific point about Jews is not. He hated Jews long before he rose anywhere near power.


Hitler was enthralled by Henry Ford, and copied what he learned about anti-semitism.

https://www.thehistoryreader.com/historical-figures/hitlers-...

