Yeah, this feels like another reincarnation of the ancient "who watches the watchmen?" problem [1]. Time and time again we see that the incentives _really really_ matter here; subtle changes can produce entirely new problems.

1. https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%...


Or we could have devices attest user age. On setup, the device has the option to store a root ("guardian"?) email address. Whenever "adult mode" is activated or the root email is changed, a notification must first be sent to the prior root email. That notification may contain a code that must be entered to proceed with the relevant action, though the user should be warned of the potential device-crippling consequences.

This setting is stored in a secure enclave and survives factory resets.
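
Roughly, the flow I'm imagining looks like the sketch below (Rust purely for illustration). Everything in it is hypothetical, names and stub helpers included; a real implementation would live in firmware, with the guardian address and any confirmation code handled by the enclave.

    // Hypothetical sketch of the guardian-gated flow described above.
    struct GuardianConfig {
        guardian_email: Option<String>, // persisted in the secure enclave
    }

    enum Request {
        EnableAdultMode,
        ChangeGuardian(String), // proposed new guardian address
    }

    fn handle(cfg: &GuardianConfig, req: &Request, code: Option<&str>) -> Result<(), &'static str> {
        match &cfg.guardian_email {
            // No guardian configured: nothing to gate.
            None => Ok(()),
            Some(email) => {
                // The *prior* guardian is always notified first.
                notify(email, req);
                // They may have opted into requiring a confirmation code.
                if requires_code(email) && !code_is_valid(email, code) {
                    return Err("guardian confirmation code required");
                }
                Ok(())
            }
        }
    }

    // Stubs standing in for the email side channel and enclave checks.
    fn notify(_email: &str, _req: &Request) {}
    fn requires_code(_email: &str) -> bool { true }
    fn code_is_valid(_email: &str, code: Option<&str>) -> bool { code.is_some() }

    fn main() {
        let cfg = GuardianConfig { guardian_email: Some("guardian@example.com".into()) };
        // Blocked without the emailed code; allowed with one.
        assert!(handle(&cfg, &Request::EnableAdultMode, None).is_err());
        assert!(handle(&cfg, &Request::EnableAdultMode, Some("424242")).is_ok());
        let _ = Request::ChangeGuardian("new@example.com".into()); // same gate applies
    }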

I will note that these two systems are not mutually exclusive. There are plenty of ways to "think of the children" that don't trample on everybody's freedom.


Yeah, not doing this on my Linux devices.


You don't have to; that's the point.

EDIT: or to rephrase, this proposal is opt-in (device attests the user is a minor) not mandatory (device is required to attest the user is an adult)


I misunderstood then; I’m all in favor of that approach. If mainstream manufacturers include an optional child mode, then that doesn’t affect adults. I do think it’s better if the child device simply blocks adult-labeled content rather than attesting that the user is a child, just to avoid leaking any information about minors. But it’s still an OK solution.


I personally have taken several road trips (1000+ miles) with an EV across the United States and have not found charging to be a "huge issue".

But I (clearly) must be wrong, sorry to disagree with the spokesman of America.


> Classic Motte and Bailey.

For this to be a "classic motte and bailey", you will need to point us to instances where _the original poster_ actually advanced the "bailey" (which you characterize as "rust eliminates all bugs").

It instead appears that you are attributing _other people's comments_ to the OP. That is not a fair argumentation technique, and it could easily be turned against you to make any of your comments into a "classic motte and bailey".


As somebody who "learned" C++ (Borland C++... the aggressively blue memories...) first at a very young age, I heartily agree.

Rust just feels natural now. Possibly because I was exposed to this harsh universe of problems early. Most of the stupid traps that I fell into are clearly marked and easy to avoid.

It's just so easy to write C++ that seems like it works until it doesn't...
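
To sketch what I mean (a toy example of my own, not anything from the article): mutating a collection while iterating it. The C++ analogue, push_back inside a range-for, compiles cleanly and often appears to work, right up until a reallocation invalidates the iterator.

    fn main() {
        let mut v = vec![1, 2, 3];
        for x in &v {        // the loop holds an immutable borrow of `v`
            if *x == 1 {
                // v.push(4);
                // ^ error[E0502]: cannot borrow `v` as mutable because it
                //   is also borrowed as immutable. The trap is clearly
                //   marked at compile time instead of biting at runtime.
            }
        }
        v.push(4);           // fine once the loop's borrow has ended
        println!("{:?}", v); // [1, 2, 3, 4]
    }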


> the options are to build more software or to hire fewer engineers.

To be cheeky, there are at least three possibilities you are writing off here: we build _less_ software, we hire _more_ engineers, or things just kinda stay the same.

More on all of these later.

> I am not convinced that software has a growing market

Analyzing market dynamics in response to major technological shocks is like reading tea leaves. These are chaotic systems with significant nonlinearities.

The rise of the ATM is a classic example. An obvious but naive prediction would have been fewer employed bank tellers. After all, they're automated _teller_ machines.

However, the opposite happened. ATMs drastically reduced the cost of running a bank branch (which previously required manually counting lots of cash). More branches, fewer tellers per branch... but the net result was _more_ tellers employed thirty years later. [1]

They are, of course, now doing very different things.

Let's now spitball some of those other scenarios above:

- Less "software" gets written. LLMs fundamentally change how people interact with computers. More people just create bespoke programs to do what they want instead of turning to traditional software vendors.

- More engineers get hired. The business of writing software by hand is mostly automated. Engineers shift focus to quality or other newly prioritized business goals, possibly by using LLMs for automation instead of e.g. traditional end-to-end tests.

- Things stay mostly the same, employment- and software-wise. If software engineers are still ultimately needed to check the output of these things, the net effect could just be that they spend a bit less time typing raw code. They might work a bit less; attempts to turn everyone into an "LLM tech lead" that manages multiple concurrent LLMs could go poorly. Engineers might mostly take the efficiency gains for themselves as recovered free-ish (HN / Reddit, for example) time.

Or, let's be real, the technology could just mostly be a bust. The odds of that are not zero.

And finally, let's consider the scenario you dismiss ("more software"). It's entirely possible that making something cheaper drastically increases the demand for it. The bar for "quality software" could rise dramatically due to competition between increasingly LLM-enhanced firms.

I won't represent any of these scenarios as _likely_, but they all seem plausible to me. There are too many moving parts in the software economy to make any serious prediction on how this will all pan out.

1. https://www.economist.com/democracy-in-america/2011/06/15/ar... (while researching this, I noticed a recent twist to this classic story. Teller employment actually _has_ been declining in the 2020s, as has the total number of ATMs. I can't find any research into this, but a likely culprit is yet another technological shock: the rise of mobile banking and payment apps)


The most critical skill in the coming era, assuming that AI follows its current trajectory and there are no research breakthroughs for e.g. continual learning, is going to be delegation.

The art of knowing what work to keep, what work to toss to the bot, and how to verify it has actually completed the task to a satisfactory level.

It'll be different from delegating to a human; as the technology currently sits, there is no point in giving out "learning tasks". I also imagine it'll be a good idea to keep enough tasks for yourself to keep your own skills sharp, so if anything, it's kind of the reverse.


> Sometimes after a night’s sleep, we wake up with an insight on a topic or a solution to a problem we encountered the day before.

The current crop of models does not "sleep" in any way. The associated limitations on long-term task adaptation are obvious barriers to their general utility.

> When conversing with LLMs, I never get the feeling that they have a solid grasp on the conversation. When you dig into topics, there is always a little too much vagueness, a slight but clear lack of coherence, continuity and awareness, a prevalence of cookie-cutter verbiage. It feels like a mind that isn’t fully “there” — and maybe not at all.

One of the key functions of REM sleep seems to be the ability to generalize concepts and make connections between "distant" ideas in latent space [1].

I would argue that the current crop of LLMs is overfit on recall ability, particularly on their training corpus. The inherent trade-off is that they are underfit on "conceptual" intelligence: the ability to make connections between those distant ideas.

As a result, you often get "thinking shaped objects", to paraphrase Janelle Shane [2]. It does feel like the primordial ooze of intelligence, but it is clear we still need several transformer-shaped breakthroughs before actual (human-comparable) intelligence.

1. https://en.wikipedia.org/wiki/Why_We_Sleep

2. https://www.aiweirdness.com/


Not really, no. The founders were not omniscient, but many of them publicly wrote about the problematic rise of political "factions" contrary to the general interest: https://en.wikipedia.org/wiki/Federalist_No._10


> One thing that's been really off putting about the technology industry is how fake-it-till-you-make-it has become so pervasive.

It feels accidental, but it's definitely amusing that the models themselves are aping this ethos.

