It's less the fact that someone owns JS's trademark, and more that it's specifically Oracle (they got it when they bought Sun).
Oracle is an incredibly litigious company. Their awful reputation in this respect means that the JS ecosystem can never be sure they won't swoop in and attempt to demand rent someday. This is made worse by the army of lawyers they employ; even if they're completely in the wrong, whatever project they go after probably won't be able to afford a defense.
> Oracle is an incredibly litigious company. Their awful reputation in this respect means that the JS ecosystem can never be sure they won't swoop in and attempt to demand rent someday. This is made worse by the army of lawyers they employ; even if they're completely in the wrong, whatever project they go after probably won't be able to afford a defense.
That is why on one level I am surprised by the petition. They are talking to a supercharged litigation monster and asking it "Dear Oracle, ... We urge you to release the mark into the public domain". You know what a litigation-happy behemoth does in that case? It goes and asks some AI to write a "Javascript: as She Is Spoke" junk book on Amazon just so it can hang on to the trademark. Before, they didn't care, but now that someone has pointed it out, they'll go out of their way to assert their usage of it.
On the other hand, maybe someone there cares about their image and would be happy to improve it in the tech community's eyes...
> It goes and asks some AI to write a "Javascript: as She Is Spoke" junk book on Amazon just so it can hang on to the trademark.
IANAL, but I don't think that would be enough to keep the trademark.
Also, the petition was a "we'll ask nicely first so we can all avoid the hassle and expense of legal proceedings". They are now in the process of getting the trademark invalidated, but Oracle, illogically but perhaps unsurprisingly, is fighting it.
I was just using it as an example of doing the absolute minimum. They could write a dumb Javascript debugger or something with minimal effort.
But yeah, IANAL either and am just guessing; I just know Oracle is shady, and if you challenge them legally they'll throw their weight around. And I'm not sure if responding to a challenge with a new "product" is enough to reset the clock on it. Hopefully the judge will see through their tricks.
Trademark law is kind of about hypotheticals, though. The purpose of a trademark is to prevent theoretical damages from potential confusion, neither of which you ever have to show to be real.
In this case, the trademark existing and belonging to Oracle is creating more confusion than no trademark existing, so deleting it is morally right. And because Oracle isn't actually enforcing it, it is also legally right.
Imho this is just the prelude to getting better press. "We filed a petition to delete the JavaScript trademark" doesn't sound nearly as good as "We collected 100k signatures for a letter to Oracle and got only silence; now we formally petition the USPTO". It's also a great opportunity to find pro-bono legal counsel or someone who would help fund the petition.
The other aspect here is that general knowledge (citation needed) says that if a company doesn't actively defend their trademark, they often won't be able to keep it if challenged in court. Or perhaps general knowledge is wrong.
Assuming Oracle did decide to go down that route, who would they sue? No one really uses the JavaScript name in anything official except for "JavaScriptCore" that Apple ships with Webkit.
My bad, after reading more it seems Deno is trying to get Oracle's trademark revoked, but I found out that "Rust for Javascript" devs have received a cease and desist from Oracle regarding the JS trademark, which may have triggered Deno to go after Oracle.
The incredibly litigious company here is Deno. Deno sued on a whim, realized they were massively unprepared, then asked the public to fund a legal campaign that will benefit Deno themselves, a for-profit, VC-backed company.
This personal vendetta will likely end with the community unable to use the term JavaScript. Nobody should support this.
1. Oracle is the litigious one here. My favorite example is that time they attacked a professor for publishing less-than-glowing benchmarks of their database: https://danluu.com/anon-benchmark/ What's to stop them from suing anyone using the term JavaScript in a way that isn't blessed by them? That's what Deno is trying to protect against.
2. Deno is filing a petition to cancel the trademark, not claim it themselves. This would return it to the public commons.
It should be obvious from these two facts that any member of the public that uses JavaScript should support this, regardless of what they think of Deno-the-company.
I think what the GP was referring to was the "new" owner of Sears, who reorganized the company into dozens of independent business units in the early 2010s (IT, HR, apparel, electronics, etc). Not departments, either; full-on internal businesses intended as a microcosm of the free market.
Each of these units was then given access to an internal "market" and directed to compete with the others for funding.
The idea was likely to try and improve efficiency... But what ended up happening is siloing increased, BUs started infighting for a dwindling set of resources (beyond normal politics you'd expect at an organization that size; actively trying to fuck each other over), and cohesion decreased.
It's often pointed to as one of the reasons for their decline, and worked out so badly that it's commonly believed their owner (who also owns the company holding their debt and stands to immensely profit if they go bankrupt) desired this outcome... to the point that he got sued a few years ago by investors over the conflict of interest and, let's say "creative" organizational decisions.
This happened at a place where I worked years ago, but not as 'on purpose.' We were a large company where most pieces depended on other pieces, and everything was fine - until a new CEO came in who started holding the numbers of each BU under a microscope. This led to each department trying to bill other departments as an enterprise customer, who then retaliated, which then led to internal departments threatening to go to competitors who charged less for the same service. Kinda stupid how that all works - on paper it would have made a few departments look better if they used a bottom barrel competitor, but in reality the company would have bled millions of dollars as a whole...all because one rather large BU wanted to goose its numbers.
Why is that a bad thing? If an internal department that’s not core to their business is less efficient than an external company - use the external company.
Anecdote: Even before Amazon officially killed Chime, everyone at least on the AWS side was moving to officially supported Slack.
I guess it depends on circumstances, but it boils down to each department only costing the others some marginal cost in practice.
Imagine a hosting company and a DNS company, both with plenty of customers and capacity. The hosting company says: I'll host your site if you provide DNS for ours. A drop in the bucket for each.
One year the DNS company decides it needs to show more revenue, so it will begin charging the hosting company $1000/yr, and guess what: the hosting company does the same. Instead of paying, they each get mad and find $500/yr competitors. What was accomplished here?
Further, it just looks bad in many cases. Imagine if Amazon.com decided AWS was too expensive, and decided to move their stuff off to say, Azure only. That wouldn't be a great look for AWS and in turn hurts...Amazon.
I do get your point, but there are a lot of... intangibles about being in a company together.
There is more politics than you think within Amazon Retail about moving compute over to AWS. I’m not sure how much of Amazon Retail runs on AWS instead of its own infrastructure (CDO).
I know one project from Amazon got killed because their AWS bill was too high. Yeah AWS charges Amazon Retail for compute when they run on AWS hardware.
As a rule, organizations are created to avoid the transaction costs of those supporting tasks. If you externalize every single supporting task into a market, you will be slowed to a crawl, won't be able to use most competitive advantages, and will pay way more than doing them in house.
But removing the market competition is a breeding ground for inefficiency. So there's a balance there, and huge conglomerates tying their divisions together serves only to make the competitive ones die by the need to use the services of the inefficient ones.
My four years at AWS kind of indoctrinated me. As they said, every time you decide to buy vs. build, you have to ask yourself "does it make the beer taste better?"
Don’t spend energy on undifferentiated heavy lifting. If you are Dropbox it makes sense to move away from S3 for instance.
to put a finer point on it, it wasn't just competition or rewarding-the-successful, the CEO straight up set them at odds with each other and told them directly to battle it out.
basically "coffee is for closers... and if you don't sell you're fired" as a large scale corporate policy.
That was a bullshit separation of a single horizontal cut of the market (all of those segments did consumer retail sales) without overlap.
The part about no overlaps already made it impossible for them to compete. The only "competition" they had was in the sense of TV gameshow competition where candidates do worthless tasks, judged by some arbitrary rules.
That has absolutely no similarity to how Samsung is organized.
that just sounds more like a case of a square peg in a round hole. Yes, WP is a nightmare, just like NPM and its ilk are to me as well. Adding WP to my list invited this level of response, and I realize now I should have left it off. It really doesn't do much for moving the conversation in the right direction.
That's my point. At some point, people's fear of learning code is causing them to do things in ways that are unnecessary and overcomplicated, which is quite a bit ironic.
You say fear. I say unnecessary for task at hand. My mom doesn't need to learn how to code to make a website for her florist. She just needs a site that can host some basic information like contact info, gallery of example images, and maybe some cheesy "about" page that people feel like is oh so important.
We're obviously reading a developer centric forum where people seem to have a hard time seeing things from anything other than a developer's point of view. Have hammer, everything is a nail situation. People just not wanting to become a coder isn't because they are scared of it. They just don't want to do it. I don't want to be a florist. I don't go bitching to florists that there's not an easy way to make floral arrangements without learning basics nor does it make me scared of it. Whatever "fear" you want to imply really makes you sound out of touch with non-developers.
I realize that for the simple use cases like that it's fine. I'm talking about people at work using complicated workflows in "low code" tools or spreadsheets full of macros. At some point it's equally or more complex, just in a different way.
Having been involved in a “no code” product, I’ll just say that it’s a really crappy way to write programs. You’re better off creating a DSL of some sort and asking people to type. Demanding that people click the mouse three times to open an input box where they can type something and then doing that a few hundred times is not “better.” It’s infuriating.
It's due to every hyperscaler building out new AI datacenters. For example, you have Google recently saying things like "Google tells employees it must double capacity every 6 months to meet AI demand", and that they need to increase capacity by 1000x within 4-5 years.
There's legitimately interesting research in using it to accelerate certain calculations. For example, usually you see a few talks at chemistry conferences on how it's gotten marginally faster at (very basic) electronic structure calculations. Also some neat stuff in the optimization space. Stuff you keep your eye on hoping it's useful in 10 years.
The most similar comparison is AI stuff, except even that has found some practical applications. Unlike AI, there isn't really much practicality for quantum computers right now beyond bumping up your h-index.
Well, maybe there is one. As a joke with some friends after a particularly bad string of natural 1's in D&D, I used IBM's free tier (IIRC it's 10 minutes per month) and wrote a dice roller to achieve maximum randomness.
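For the curious, the logic of such a roller is simple enough to sketch classically; on real hardware only the bit source changes (each bit would come from putting a qubit into superposition and measuring it). A minimal sketch, assuming nothing about IBM's actual API:

```python
import random

def roll_d20(randbit=lambda: random.getrandbits(1)):
    """Uniform 1-20 roll via rejection sampling over 5 raw bits.

    On quantum hardware, each bit would be a Hadamard-then-measure
    on one qubit; here a classical PRNG stands in so the sketch is
    runnable anywhere.
    """
    while True:
        # Assemble a 5-bit number in the range 0..31.
        n = 0
        for _ in range(5):
            n = (n << 1) | randbit()
        if n < 20:          # reject 20..31 to keep the distribution uniform
            return n + 1    # map 0..19 -> 1..20

rolls = [roll_d20() for _ in range(1000)]
```

The rejection step matters: simply taking `n % 20` over 0..31 would bias the low faces, which rather defeats the point of "maximum randomness".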
that was my understanding too - in the fields of chemistry, materials science, pharmaceutical development, etc... quantum tech is somewhat promising and might be pretty viable in those specific niche fields within the decade.
I don't personally care if a product includes AI, it's the pushiness of it that's annoying.
That, and the inordinate amount of effort being devoted to it. It's just hilarious at this point that Microsoft, for example, is moving heaven and earth to put AI into everything Office, and yet Excel still automatically converts random things into dates (the "ability" to turn it off that they added a few years ago only works half the time, and only affects CSV imports), with no reliable way to disable it.
I think a lot of the pushiness is a frantic effort to keep the bubble inflated and keep the market out of the trough of disillusionment. It won't work. The trough of disillusionment is inevitable. There is no jumping straight from peak of inflated expectations straight to the slope of enlightenment, because the market fundamentally needs the cleansing action of the trough of disillusionment to shake out the theoreticals and the maybes and get to what actually works.
Hopefully after the pop, rather than shoving it in our faces, they can return to advertising at us to use the things, with the things needing to prove themselves to get real sales. That beats corporations getting 10% stock pumps in a day based on statistics about how "used" their AI stuff is, while not telling the market how few people actually chose to use it rather than just becoming a metric when it was pushed on them.
>I don't personally care if a product includes AI, it's the pushiness of it that's annoying.
I agree with you in principle, but in practice these two are currently inextricable; if there's AI in the product, then it will be pushed / impossible to turn off / take resources away from actual product improvement.
Exactly! I honestly can't remember the last time my Windows start menu search bar functioned as it's supposed to. Across multiple laptops over more than 5 years, I have to hit the Windows key three to seven times to get it to let me type into it. It either doesn't open, doesn't show anything, or doesn't let me type into it.
I mean, c'mon, it's literally called the fucking Windows key and it doesn't work. As per standard Microsoft, it's a feature that worked perfectly on all versions before Cortana (their last "AI assistant" type push). I wonder what new core functionalities of their product they're going to fuck up and never fix.
I was an insider user of Windows for close to a decade, really stuck with it through WSL's development... But the first time I saw internet ads on my start menu search result was kind of it for me, I switched my default boot to Linux and really haven't looked back. I don't really need Windows for my workflows, and though I'm using Windows for my current job, I'm at a point I'd rather not be.
Windows as an OS really kind of peaked around Windows 7 IMO... though I do like the previews on the taskbar, that's about the only advancement since that I appreciate at all... besides WSL2(g) that is. I used to joke that Windows was my favorite Linux distro, now I just don't want it near me. Even my SO would rather be off of it.
Microsoft could have made Windows privacy respecting, continued investing in WSL, baked PowerToys into the OS, etc. and actually made one hell of a workhorse operating system that could rival the mac for developer mindshare. They could partner with Google and/or Samsung and make some deep Android integration to rival Apple's ecosystem of products. Make Windows+Android just as seamless and convenient as mac + iOS.
Instead they opted for forced online accounts, invasive telemetry, and ads in the OS instead of actually trying to keep and win over the very enthusiasts that help ensure their product gets chosen in the enterprise world where they make their cash.
Now they're going to scrap the concept of Windows as something you interact with directly altogether and make it "Agentic", whatever the hell that means.
I don't think their bet is going to pay off, especially if the bubble crashes. I think it will be one of the biggest blunders and mistakes that Microsoft will have made.
Probably because it's just easier to catch a predatory fish than a land predator.
Throw a line in the pond, whatever bites will bite. Clean it and you've got dinner.
Versus with hunting, historically (and even now) if you miss your shot or don't hit a part that immediately takes it down, now you've got an angry wolf/bear/moose bearing down on you. Wolf is also probably too close to dog for most cultures.
Nowadays you can get meat from bear/moose/whatever, but there isn't much of a culinary tradition associated with them. So the only people out for them are the curious or macho types
The most annoying instance of this is installers on Windows that just assume you want to go into `C:\Program Files`, which nowadays requires admin to modify.
This is very annoying on company machines where you may not have admin, since now there's red tape with your IT because the installer was poorly written.
Half the reason I use the WSL is because you at least get "root" on it, so permissions are never an issue
Edit: there may be something lost in translation. This post is in reference to software your IT already approves, which happens to only install to program files.
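For what it's worth, the fix on the installer side is simple: default to a per-user directory that needs no elevation. A rough sketch of the idea (the directory layout and app name are illustrative, not any specific installer's behavior):

```python
import os
import sys

def default_install_dir(app_name="ExampleApp"):
    """Pick a per-user install directory that needs no admin rights.

    Mirrors what per-user installers typically do: something under
    %LOCALAPPDATA% on Windows, under the home directory elsewhere.
    """
    if sys.platform == "win32":
        base = os.environ.get("LOCALAPPDATA", os.path.expanduser("~"))
        return os.path.join(base, "Programs", app_name)
    return os.path.join(os.path.expanduser("~"), ".local", app_name)

path = default_install_dir()
os.makedirs(path, exist_ok=True)  # succeeds without elevation
```

Installers built this way (many Electron apps, for instance, install per-user) sidestep the IT red tape entirely for already-approved software.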
It's a feature. You shouldn't be installing software on your work computer. Your IT department should be vetting it, deploying it, and keeping it up-to date for you.
Maybe you can tell the difference between report.pdf and report.exe, but too many people can't, so unfortunately we can't let everyone install anything.
> Your IT department should be vetting it, deploying it, and keeping it up-to date for you.
There are not enough IT staff at my organization to do this. They have an approved list of software that may be installed. Some common installations are automated, others are niche-enough that it's DIY.
We don't live in a perfect world where the IT staffing ratio is 1:20 (or whatever arbitrary number you would consider "good"), so this is how my organization does it.
> unfortunately we can't let everyone install anything.
Your IT department should consider giving you your own admin account. But it's their call.
Seems like a bit of an extreme solution for one-off installations that are rare enough to not be worth bothering to automate.
A good example of this is scientific software like Gaussian (a "common" quantum mechanics package): it needs admin, and it has an expensive and strict license that gets audited. It's approved, but we have a single-digit number of people using it. It's just not worth the time to automate a script around an install that only happens once every year or so on average, when they can just temporarily elevate the user.
> You shouldn't be installing software on your work computer. Your IT department should be vetting it, deploying it, and keeping it up-to date for you.
If I actually had to depend on IT to do all that, it would take forever to get anything done.
In a Windows environment this can be managed with AppLocker, or an endpoint management solution, or 3rd-Party tool like Threatlocker.
It becomes less about controlling the users and more about stopping any bad guy dead in their tracks. If nothing but what has been explicitly authorized can execute, then 99% of ransomware attacks will be stopped immediately, even after the user clicks the link.
Your company software procurement process shouldn’t be so onerous that people turn to Shadow IT. You have to work with people where they are.
No, that's the default behavior in Windows. If you install to, say, AppData, it's fine. If you install to Program Files, you need admin because it is a protected folder.
> The company does NOT want you installing random crap on their machines.
Why do you immediately jump to the conclusion that the post is about installing "random crap?"
Where did I write that it was not approved in advance...?
The post is about requiring admin to install to Program Files. Even if it is an approved piece of software, you're still going to need admin to install it.
> while quoting an HR executive at a Fortune 100 company griping: "All of these copilots are supposed to make work more efficient with fewer people, but my business leaders are also saying they can't reduce head count yet."
I'm surprised McKinsey convinced someone to say the quiet part out loud
The incentive structure for managers (and literally everyone up the chain) is to maximize headcount. The more people you manage, the more power you have within the organization.
No one wants to say on their resume, "I manage 5 people, but trust me, with AI, it's like managing 20 people!"
Managers also don't pay people's salaries. The Tech Tools budget is a different budget than People salaries.
Also keep in mind, for any problem space there is an unlimited number of things to do. 20 people working 20% more efficiently won't reach infinity any faster than 10 people.
> The incentive structure for managers (and literally everyone up the chain) is to maximize headcount. More people you managed, the more power you have within the organization
Ding ding ding!
AI can absolutely reduce headcount. It already could 2 years ago, when we were just getting started. At the time I worked at a company that did just that, successfully automating away thousands of jobs that couldn't be automated pre-LLMs. The reason it ""worked"" was that it was outsourced headcount, so there was very limited political incentive to keep them if they were replaceable.
The bigger and older the company, the more ossified the structures are that want to keep headcount equal, and ideally grow it. This is by far the biggest cause of all these "failed" AI projects. It's super obvious once you start noticing that jobs that were being outsourced, or done by temp/contracted workers, are being replaced much more rapidly. As is the fact that tech startups are hiring much less than before. Not talking about YC-and-co startups here; those are global exceptions, indeed affected a lot by ZIRP and whatnot. I'm talking about the 99.9% of startups that don't get big VC funds.
A lot of the narrative on HN that it isn't happening and AI is all a scam is IMO out of reasonable fear.
If you're still not convinced, think about it this way. Before LLMs were a thing, if I asked you what the success rate of software projects at non-tech companies was, what would you have said? 90% failure rate? To my knowledge, the numbers are indeed close. And what's the biggest reason? Almost never "this problem cannot be technically solved". You'd probably name other, more common reasons.
Why would this be any different for AI? Why would those same reasons suddenly disappear? They don't. All the politics, all the enterprise salesmen, the lack of understanding of actual needs, the personal KPIs to hit - they're all still there. And the politics are even worse than with trad. enterprise software now that the premise of headcount reduction looms larger than ever.
Yes, and it’s instructive to see how automation has reduced head count in oil and gas majors. The reduction comes when there’s a shock financially or economically and layoffs are needed for survival. Until then, head count will be stable.
Trucks in the oil sands can already operate autonomously in controlled mining sites, but wide adoption is happening slowly, waiting for driver turnover and equipment replacement cycles.
> The bigger and older the company, the more ossified the structures are that have a want to keep headcount equal, and ideally grow it.
I don't know, most of the companies doing regular layoffs whenever they can get away with it are pretty big and old. Be it in tech - IBM/Meta/Google/Microsoft - or in physical things - car manufacturers, shipyards, etc.
Through top-down, hard mandates directly by the exec level, absolutely! They're an unstoppable force, beating those incentives.
The execs aren't the ones directly choosing, overseeing, and implementing these AI efforts - or in the preceding decades, the software efforts. 9 out of 10 times, they know very little about the details. They may ""spearhead"" it insofar as that's possible, but there are tons of layers in between, each with their own incentives, whose cooperation is required to actually make it work.
If the execs say "Whole office full-time RTO from next month 5 days a week", they really don't depend on those layers at all, as it's suicide for anyone to just ignore it or even fake it.
Did you not see the backlash the Duolingo CEO got and how hard he backtracked? Coming out and saying "We're replacing a big bunch of people with LLMs" is about the worst PR you can get in 2025. It's really an awful idea for anyone but maybe pure B2B companies that are barely hanging on and super desperate for investor cash.
This was a big, traditional non-tech company.
Also as implied, these were cheap offshore contracting jobs being replaced. Still magnitudes more expensive than LLMs, making it very "worth it" from a company perspective. But not prime earnings call material.
Everyone in the industry also knows that it's not particularly unique, far from being something no one else has been able to do. Go look at the job markets for translation, data entry, and customer support compared to 2 years ago. And as mentioned, even junior web devs.
Maybe 40 years ago or in some cultures, but I've always focused on $/person. If we have a smaller team that can generate $2M in ARR per developer, that's far superior to $200K. The problem is that once you have 20 people doing the job, nobody thinks it's possible to do it with 10. You're right that "there is an unlimited number of things to do"; there are really obvious things that must be done and must not be done, but the majority IME are should-be-dones or could-be-dones, and in every org I've experienced it's a challenge to constrain the number of parallel initiatives, which is the necessary first step to reducing active headcount.
we use AI (LLMs) to improve the recall and precision of our classification models for content moderation. Our human moderators can only process so many items per day, at a high cost.
AI (LLMs) acts as a pre-filter, auto-approving or auto-rejecting items before they get to the humans for review.
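A rough sketch of how such a pre-filter can sit in front of a moderation queue (the thresholds, names, and toy stand-in classifier are all illustrative, not anyone's production setup):

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str         # "ok" or "violation"
    confidence: float  # model's self-reported confidence, 0..1

def route(item, classify, approve_at=0.95, reject_at=0.95):
    """Pre-filter one item: auto-act when the model is confident,
    otherwise queue it for a human moderator.

    `classify` stands in for whatever LLM call produces a Verdict.
    """
    v = classify(item)
    if v.label == "ok" and v.confidence >= approve_at:
        return "auto-approve"
    if v.label == "violation" and v.confidence >= reject_at:
        return "auto-reject"
    return "human-review"  # only uncertain items consume moderator time

# Toy stand-in classifier so the sketch runs without an LLM.
def toy_classify(item):
    if "spam" in item:
        return Verdict("violation", 0.99)
    if len(item) < 40:
        return Verdict("ok", 0.97)
    return Verdict("ok", 0.6)
```

The recall/precision win comes from tuning the two thresholds: raise them and more items fall through to humans (safer, costlier); lower them and the model handles more of the volume.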
I don't mean to be dismissive and crappy right out of the gate with that question, I'm merely drawing on my experience with AI and the broader trends I see emerging: AI is leveraged when you need knowledge products for the sake of having products, not when they're particularly for something. I've noticed a very strange phenomenon where middle managers will generate long, meandering report emails to communicate what is, frankly, not complicated or terribly deep information, and send them to other people, who then paradoxically use AI to summarize those emails, likely into something quite similar to what was prompted to be generated in the first place.
I've also noticed it being leveraged heavily in spaces where a product existing, like a news release, article, social media post, etc. is in itself the point, and the quality of it is a highly secondary notion.
This has led me to conclude that AI is best leveraged in cases where nobody, including the creator of a given thing, really... cares much what the thing is, whether it's good, or whether it does its job well. It exists because it should exist, and its existence performs the function far more than anything to do with the actual thing that exists.
And in my organization at least, our "cultural opinion" on such things would be... well, if nobody cares what it says, and nobody is actually reading it... then why the hell are we generating it and then summarizing it? Just skip the whole damn thing, send a short list email of what needs communicating, and be done.
He's either lying or hard-selling. The company in his profile, "neofactory.ai", says they "will build our first production line in Dallas, TX in Q3." Well, we just entered Q4, so not that. Despite that, it has no mentions online and the website is just a "contact us" form.
The anthropologist David Graeber wrote a book called "Bullshit Jobs" that explored the subject. It shouldn't be surprising that a prodigious bullshit generator could find a use in those roles.
I am still of the conviction that "reducing employee head count" with AI should start at the top of the org chart. The current iterations of AI already talk like the C-suites, and deliver approximately same value. It would provide additional benefits, in that AIs refuse to do unethical things and generally reason acceptably well. The cost cutting would be immense!
I am not kidding. In any large corps, the decision makers refuse to take any risks, show no creativity, move as a flock with other orgs, and stay middle-of-the-road, boring, beige khaki. The current AIs are perfect for this.
> I am still of the conviction that "reducing employee head count" with AI should start at the top of the org chart. The current iterations of AI already talk like the C-suites
That is exactly what it can't do. We need someone to hold liable for key decisions.
Right, because one really widely-known fact about CEOs is that whenever anything goes wrong at a company, they take the full blame, and if it's criminal, they go to jail!
Can it turn simple yes-or-no questions, or "hey who's the person I need to ask about X?" into scheduled phone calls that inexplicably invite two or three other people as an excuse to fill up its calendar so it looks very busy?
It's not the top IME, but the big fat middle of the org chart (company age seems to mirror physical age, maybe?), where middle to senior managers can hide out, deliver little demonstrable value, and ride with the tides. Some of these people are far better at surfing the waves than at performing the tasks of their job title, and they will outlast you - both your political skills and your tolerance for BS.
> In any large corps, the decision makers refuse to take any risks, show no creativity, move as a flock with other orgs, and stay middle-of-the-road, boring, beige khaki.
It's hard to take this sentiment seriously from a source that doesn't have direct experience with the c-suite. The average person only gets to see the "public relations" view of the c-suite (mostly the CEO) so I can certainly see why a "LLM based mouthpiece" might be better.
The c-suite is involved in thousands of decisions that 90% of the rest of the world is not privy to.
FWIW - As a consumer, I'm highly critical of the robotic-like external personas the c-suite take on so I can appreciate the sentiment, but it's simply not rooted in any real experience.
> AI in its current state will likely not replace any workers.
This is a puzzling assertion to me. Hasn’t even the cheapest Copilot subscription arguably replaced most of the headcount that we used to have of junior new-grad developers? And the Zendesks of the world have been selling AI products for years now that reduce L1 support headcount, and quite effectively too since the main job of L1 support is/was shooting people links to FAQs or KB articles or asking them to try restarting their computer.
> Pretty soon we will have articles like "That time that CEO's thought that AI could replace workers".
Yup, it's just the latest management fad. Remember Six Sigma? Or Agile (in its full-blown cultish form; some aspects can be mildly useful)? Or matrix management? Business leaders, as a class, seem almost uniquely susceptible to fads. There is always _some_ magic which is going to radically increase productivity, if everyone just believes hard enough.
I was working with a team on a pretty simple AI solution we were adding to our larger product. Every time we talk to someone we're telling them "still need a human to validate this..."
I mean, nah, we've seen enough of these cycles to know exactly how this will end: with a sigh and a whimper and the Next Big Thing taking the spotlight. After all, where are all the articles about "that time that CEOs thought blockchain could replace databases", etc.?
I think they can. IME, LLMs have me working somewhat less and doing somewhat more. It's not a tidal wave, but I'm stuck a little bit less on bugs, and at some things, like regex or SQL, I'm much faster. It's something like 5-10% more productive. That level of slack is easy to take up by doing more, but theoretically it means being able to lose 1 out of every 10-20 devs.
How does it make sense to trade one group of labor (human) who are generally loosely connected, having little collective power for another (AI)? What you're really doing isn't making work more "efficient", you're just outsourcing work to another party -- one who you have very little control over. A party that is very well capitalized, who is probably interested in taking more and more of your margin once they figure out how your business works (and that's going to be really easy because you help them train AI models to do your business).
That's not required. All that is required is becoming a sole source of labor, or a source that is the only realistic choice economically.
If you ask me, that's the real long game on AI. That is exactly why all these billionaires keep pouring money in. They know the only way to continue growth is to start taking over large sections of the economy.
Yes, that's the difference between robot makers (tool makers for others) and AI, which is not only trying to be a tool for other companies but also to take over their businesses: by acquiring their knowledge, then using a combination of capture through lack of visibility and (mis-)use of the information gathered to directly compete.
Classic enshittification combined with embedding internally to company operations to become indispensable.
both make a lot of sense, but the biggest mistake they make is to see people as capacity, or as a counter.
Each human can be a bit more productive; I fully believe 10-15% is possible with today's tools if we do it right. But each human has their own unique set of experience and knowledge. If we are a team of 10 and we each do our jobs 10% faster, that doesn't mean you can let one of us go. It just means we all do our jobs 10% faster, which we probably waste by drinking more coffee or taking longer lunch breaks.
Organizations that successfully adapt are those that use new technology to empower their existing workers to become more productive. Organizations looking to replace humans with robots are run by idiots and they will fail.