That's the whole point. They aren't law, and they were (probably) never meant to be so far-reaching, and yet the clear purpose of this Executive Order is to tell the states what laws they can enact. The EO doesn't have the legal power to do that directly, but it clearly outlines the intention to withdraw federal funding from states that refuse to toe the line.
> The order directs Attorney General Pam Bondi to create an “AI Litigation Task Force” within 30 days whose "sole responsibility shall be to challenge State AI laws" that clash with the Trump administration's vision for light-touch regulation.
The EO isn't about Federal Preemption. Trump's not creating a law to preempt states. So asking how Federal Preemption is even relevant is on point.
> My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones. …
Sounds like leaving it up to Congress! But then the administration vows to thwart state laws despite the absence of any extant preempting federal law, effectively imposing a sort of supposed Executive preemption:
> Until such a national standard exists, however, it is imperative that my Administration takes action to check the most onerous and excessive laws emerging from the States that threaten to stymie innovation.
So the preemption link is relevant, I think; and at any rate, it's helpful background for those not familiar with the concept, which constitutes the field against which this is happening.
Also, why are they small-federal-government, states'-rights on some issues, but big-federal-government, centralized-power on this one? It doesn't make sense to me.
I think the message between the lines is what's important, and it goes like this:
"We in the executive branch have an agreement with the Supreme Court allowing us to bypass congress and enact edicts. We will do this by sending the Justice Department any state law that gets in the way of our donors, sending the layup to our Republican Supreme Court, who will dunk on the States for us and nullify their law."
We don't have to go through the motions of pretending we still live in a constitutional republic; it's okay to talk frankly about reality as it exists.
It goes deeper than that - the Supreme Court will issue non-binding "guidance" on the "shadow docket", so that when/if the fascists/destructionists [0] lose the Presidency, they can go back to being obstructionists weaponizing high-minded ideals in bad faith. As a libertarian, the way I see it is we can disagree politically on what constitutes constructive solutions, but it's time to unite, stop accepting any of the fascists' nonsense, and take back the fucking government - full support for the one remaining mainstream party that at least nominally represents the interests of the United States, while demanding they themselves stop preemptively appeasing the fascists. The Libertarian, Green, or even new parties can step up as the opposition. Pack the courts with judges who believe in America first and foremost, make DC and PR states to mitigate the fascists' abuse of the Senate, and so on. After we've stopped the hemorrhaging, work on fundamental things like adopting ranked-pairs voting instead of this plurality trash.
[0] I'd be willing to call them something else if they picked an honest name for themselves - they are most certainly not "conservatives"
It's right in the text of the EO: they intend to argue that the state laws are preempted by existing federal regulations, and they also direct the creation of new regulations to create preemption if necessary, specifically calling on the FCC and FTC to make new federal rules to preempt disfavored state laws. Separately it talks about going to Congress for new laws but mostly this lays out an attempt to do it with executive action as much as possible, both through preemption and by using funding to try to coerce the states.
There's a reasonable argument that nationwide regulation is the more efficient and proper path here but I think it's pretty obvious that the intent is to make toothless "regulation" simply to trigger preemption. You don't have to do much wondering to figure out the level of regulation that David Sacks is looking for.
This is quite literally going to lead to a Supreme Court case about Federal Preemption. Bondi will challenge some CA law; they will lose and appeal until they get to the Supreme Court. I don't have any grace to give people at this point; you have to be willfully turning a blind eye if you do not see where this will end up.
Federal preemption requires federal law (i.e., laws written by Congress). How else would it get to the Supreme Court?
The EO mentions Congress passing new law a few times, in addition to an executive task force to look into challenging state laws based on constitutional violations or federal statutes. That's the only way they'd get in front of a judge.
If the plan is for the executive to invent new laws, it's not mapped out in this EO.
> Federal preemption requires federal law (aka laws written by congress). How else would it get to the supreme court?
1. No federal preemption currently. (No federal law, therefore no regulation on the matter that should preempt.)
2. State passes and enforces law regarding AI.
3. Trump directs Bondi to challenge the state law on nonsense grounds.
4. In the lawsuit, the state points out that there is no federal preemption; oh yeah, 10th Amendment; and that the administration's argument is nonsense.
5. The judge, say Aileen Cannon, invalidates the state law.
6. Circuit Court reverses.
7. Administration seeks and immediately gets a grant of certiorari — and the preemption matter is in the Supreme Court.
> passing new law … only way they'd get it in front of a judge.
The EO directs Bondi to investigate whether, and argue that, existing executive regulations (presumably on other topics) preempt state legislation.
Regardless, the EO makes it a priority to find and take advantage of some way to challenge and possibly invalidate state laws on the subject. This is a new take on preemption: creation of a state-law vacuum on the subject, through scorched-earth litigation (how Trumpian!), despite an utter absence of federal legislation on the matter.
The Task Force can try to challenge state AI laws. They can file whatever lawsuits they want. They will probably lose most of their suits, because there's very little ground for challenging state AI regulations.
I’m working my way through “The End of the World is Just the Beginning”, and the main thesis is that everyone is preparing for demographic collapse. Global populations are declining almost everywhere, and this breaks the current global order. For example, what does the Chinese economy look like when all the people subject to the one-child policy retire? What are the knock-on effects of labor becoming more expensive almost everywhere? Can immigration solve this problem? What about the cultural friction of mass immigration? What happens to the places that everyone emigrates from?
The book basically argues that a significant amount of the world is headed for destabilization, and a destabilized world involves a lot less trust.
Side note: personally, I find the writing style and general tone to be hyperbolic, but some of the analysis is interesting.
I think we are already seeing this happen, and it is not rocket science. The US is not willing to be the world police anymore, because it is more and more expensive, and some of the elites and many ordinary people feel they are not getting much in return.
So this has left, and is going to leave, a lot of power gaps around the globe, and regional wars are picking up pace.
China is not particularly happy about this, because it is not ready and perhaps doesn't even want to be the next world police. The US has always wanted China to share the responsibility, but China is hesitant, which is understandable. Plus, most people in China do not want a destabilized world, for now.
What I'd expect is that China will gradually lose steam (people on HN have been observing this for like 10 years already) when the people born in the 1970s/1980s retire. The officials who are resistant to the idea of expansion (because it damages their power base) are going to retire then. I'd expect the world to be a LOT hotter then. So that's about 2030-2040, and it might come a bit earlier, as the other players are already moving the pieces (e.g. Russia).
Not sure how to prepare my family for that time, though. I mean, it's just my guess, so my wife just rolls her eyes and wants to buy more houses/stocks because "houses/stocks always go up if you look at the chart". What I think is that the whole economic-geopolitical logic is going to change forever, and what is gone is gone for good. The next globalization is maybe 50 years away, and we will not live to see it. In hindsight, I believe the 2008 financial crisis was the turning point. They managed to drag it out for another 20 years, for which I send my kudos.
Again, just my guess. I have always been wrong on the pessimistic side, so I hope I'm wrong this time too.
I am so, so, so tired of hearing this argument. At a minimum, AI provides efficiency gains. Skilled engineers can now produce more code. This puts downward pressure on jobs. We’re not going to eliminate every software engineering job, but the options are to build more software or to hire fewer engineers. I am not convinced that software has a growing market (it’s already everywhere), so that implies downward pressure. The same is true for customer support, photography, video production (ads), paralegal work, pharma, and basically any job that involves filing paperwork.
Eliminating jobs has absolutely happened. How many jobs exist today for newspaper printing? Photograph development? Film development? Call switchboard operation? Technology absolutely eats jobs. There have been more jobs created over time, but the current economic situation makes large-scale job adjustments work less well.
AI cannot provide customer support. It cannot answer questions.
> photography, video production (ads)
AI cannot take photographs or make videos. Or at least, only ones that look like utter trash.
> paralegal work, pharma, and basically any job that involves filing paperwork.
Right, so you'd be happy with a random number generator with a list of words picking what medication you're supposed to get, or preparing your court case?
AI is useless, and always will be. It is not "intelligence", it's crude pattern matching - a big Eliza bot.
I am so, so, so tired of hearing this argument. At a minimum, switching from assembly language to high-level programming languages provided efficiency gains. Skilled engineers were able to produce more code. This put upward pressure on jobs. The demand for new software is effectively infinite.
Unlike higher-level programming languages, AI doesn't actually make programmers more efficient (https://arxiv.org/abs/2507.09089). Many people who are great programmers and love programming aren't interested in having their role reduced to QA, where they just review the bad code AI designed and wrote all day long.
In a hypothetical world where AI is actually good enough at writing software, infinite demand for software won't save even one programmer's job, because zero programmers will be needed to create any of it. Everyone who needs software will just ask AI to do it for them. Zero programming jobs needed ever again.
Pretending 16 samples is authoritative is absolutely hilarious and wild, copium this pure could kill someone.
Also, working on a codebase you already know biases results in the first place -- they missed out on what has become a cornerstone of this stuff for AISWE people like me: repo tours. tree-sitter feeds the codebase outline to the LLM, and I get to find all the stuff in the code I care about with either a single well-formatted meta prompt or by just asking questions when I need to.
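For the unfamiliar, here's a rough Python sketch of the repo-tour idea (assuming the tree-sitter and tree-sitter-python packages; the binding API has shifted between versions, and the names here are illustrative, not anyone's canonical tooling):

    # Build a compact symbol map of a repo to hand to an LLM as context,
    # instead of pasting in whole files.
    from pathlib import Path

    import tree_sitter_python
    from tree_sitter import Language, Parser

    parser = Parser(Language(tree_sitter_python.language()))

    def outline(path: Path) -> str:
        """Top-level defs/classes of one file, cheap enough to keep in context."""
        tree = parser.parse(path.read_bytes())
        rows = []
        for node in tree.root_node.children:
            if node.type in ("function_definition", "class_definition"):
                name = node.child_by_field_name("name")
                rows.append(f"L{node.start_point[0] + 1}: {node.type} {name.text.decode()}")
        return "\n".join(rows)

    # Feed the model a map of the repo, then drill down on request.
    for f in sorted(Path(".").rglob("*.py")):
        print(f"## {f}\n{outline(f)}")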
I'll concede one thing to the authors of the study, Claude Code is not that great. Everyone I know has moved on since before July. I personally am hacking on my own fork of Qwen CLI (which is itself a Gemini fork) and it does most of what I want with the models of my choice which I swap out depending on what I'm doing. Sometimes they're local on my 4090 and sometimes I use a frontier or larger openweights model hosted somewhere else. If you're expecting a code assistant to drop in your lap and just immediately experience all of its benefits you'll be disappointed. This is not something anyone can offer without just prescribing a stack or workflow. You need to make it your own.
The study is about dropping just 16 people into a tooling they're unfamiliar with, have no mechanical sympathy for, and aren't likely to shape and mold it to their own needs.
You want conclusive evidence? Go make friends with people who hack their own tooling. Basically everyone I hang out with has extended BMAD, written their own agents.md for specific tasks, made their own slash commands and "skills" (convenient name and PR hijacking of a common practice, but whatever, thanks for MCP I guess). Literally what kind of dev are you if you're not hacking your own tools???
You got four ingredients here you have to keep in mind when thinking about this stuff: the model, the context, the prompt, and the tooling. If you're not intervening to set up the best combination of each for each workflow you are doing then you are just letting someone else determine how that workflow goes.
"Universal function approximators that can speak English got invented, and nobody wants to talk to them" is not the sci-fi future I was hoping for when I was longing for statistical language modeling to lead to code generation back in 2014, as a young NLP practitioner learning Python for the first time.
If you can't make it work fine, maybe it's not for you, but I would probably turn violent if you tried to take this stuff from me.
> the options are to build more software or to hire fewer engineers.
To be cheeky, there are at least three possibilities you are writing off here: we build _less_ software, we hire _more_ engineers, or things just kinda stay the same.
More on all of these later.
> I am not convinced that software has a growing market
Analysis of market dynamics in response to major technological shocks is reading tea leaves. These are chaotic systems with significant nonlinearities.
The rise of the ATM is a classic example. An obvious but naive predicted result would be fewer employed bank tellers. After all, they're automated _teller_ machines.
However, the opposite happened. ATMs drastically reduced the cost of running a bank branch (which previously required manually counting lots of cash). More branches, fewer tellers per branch... but the net result was _more_ tellers employed thirty years later. [1]
They are, of course, now doing very different things.
Let's now spitball some of those other scenarios above:
- Less "software" gets written. LLMs fundamentally change how people interact with computers. More people just create bespoke programs to do what they want instead of turning to traditional software vendors.
- More engineers get hired. The business of writing software by hand is mostly automated. Engineers shift focus to quality or other newly prioritized business goals, possibly enabled by LLMs automating things like traditional end-to-end tests.
- Things stay mostly the same, employment- and software-wise. If software engineers are still ultimately needed to check the output of these things, the net effect could just be that they spend a bit less time typing raw code. They might work a bit less; attempts to turn everyone into an "LLM tech lead" who manages multiple concurrent LLMs could go poorly. Engineers might mostly take the efficiency gains for themselves as recovered free-ish time (HN / Reddit, for example).
Or, let's be real, the technology could just mostly be a bust. The odds of that are not zero.
And finally, let's consider the scenario you dismiss ("more software"). It's entirely possible that making something cheaper drastically increases the demand for it. The bar for "quality software" could rise dramatically due to competition between increasingly LLM-enhanced firms.
I won't represent any of these scenarios as _likely_, but they all seem plausible to me. There are too many moving parts in the software economy to make any serious prediction on how this will all pan out.
1. https://www.economist.com/democracy-in-america/2011/06/15/ar...
(while researching this, I noticed a recent twist to this classic story. Teller employment actually _has_ been declining in the 2020s, as has the total number of ATMs. I can't find any research into this, but a likely culprit is yet another technological shock: the rise of mobile banking and payment apps)
LLMs can now detect garbage much more cheaply than humans can. This might increase costs slightly for the companies that own the AIs, but it almost certainly will not result in hiring human reviewers.
> LLMs can now detect garbage much more cheaply than humans can.
Off the top of my head, I don't think this is true for training data. I could be wrong, but it seems very fallible to let GPT-5 be the source of ground truth for GPT-6.
I don't think an LLM can even detect garbage during a training run. While training, the system is only tasked with predicting the next token in the training set; it isn't trying to reason about the validity of the training set itself.
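To make that concrete, here's a minimal sketch of a standard next-token training step (PyTorch-style; `model` is assumed to be any module mapping token ids to logits). Nothing in the loss asks whether the text is true, only whether the model reproduces it:

    import torch.nn.functional as F

    def training_step(model, tokens, optimizer):
        # tokens: (batch, seq_len) integer ids taken straight from the corpus
        inputs, targets = tokens[:, :-1], tokens[:, 1:]
        logits = model(inputs)  # (batch, seq_len - 1, vocab)
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),  # flatten for cross-entropy
            targets.reshape(-1),
        )
        optimizer.zero_grad()
        loss.backward()     # gradient of "predict the corpus as written"
        optimizer.step()
        return loss.item()  # low loss means good mimicry, not validity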
There are multiple people claiming this in this thread, but with no more than an "it doesn't work, full stop". Would be great to hear some concrete information.
What about garbage that is difficult to tell from truth?
For example, say I have an AD&D website, how does AI tell whether a piece of FR history is canon or not? Yeah I know it's a bit extreme, but you get the idea.
Next step will be to mask the real information with typ0canno. Or parts of the text, otherwise search engines will fail miserably. Also squirrel anywhere so dogs look in the other direction. Up.
Imagine filtering the meaty parts with something like /usr/games/rasterman:
> what about garbage thta are dififult to tell from truth?
> for example.. say i have an ad&d website.. how does ai etll whether a piece of fr history is canon ro not? yeah ik now it's a bit etreme.. but u gewt teh idea...
or /usr/games/scramble:
> Waht aobut ggaabre taht are dficiuflt to tlel form ttruh?
> For eapxlme, say I hvae an AD&D wisbete, how deos AI tlel wthheer a pciee of FR hsiotry is caonn or not? Yaeh I konw it's a bit emxetre, but you get the ieda.
Sadly, punny humans will have a harder time deciphering the mess and trying to get the silly references. But that is a sacrifice the Titans are willing to make for their own good.
What cost do they incur while tokenizing highly mistyped text? Woof. To later decide real crap or typ0 cannoe.
Trying to remember the article that tested small inlined weirdness to get surprising output. That was the inspiration for the up up down down left right left right B A approach.
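For the curious, the scramble trick is easy to reproduce. A rough Python sketch (my own, not the bsdgames filter): keep each word's first and last letters and shuffle the interior:

    import random
    import re

    def scramble_word(m: re.Match) -> str:
        w = m.group(0)
        if len(w) <= 3:
            return w  # nothing to shuffle
        inner = list(w[1:-1])
        random.shuffle(inner)
        return w[0] + "".join(inner) + w[-1]

    def scramble(text: str) -> str:
        # Only touch alphabetic runs so punctuation and markup survive.
        return re.sub(r"[A-Za-z]+", scramble_word, text)

    print(scramble("What about garbage that is difficult to tell from truth?"))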
I think OP is claiming that if enough people are using these obfuscators, the training data will be poisoned. The LLM being able to translate it right now is not a proof that this won't work, since it has enough "clean" data to compare against.
If enough people are doing that, then vernacular English has changed to be like that.
And it still isn't a problem for LLMs. There is sufficient history for them to learn on, and in any case low-resource language learning shows they're better than humans at picking up language patterns.
If it follows an approximate grammar then an LLM will learn from it.
You're missing the point.
The goal of garbage production is not to break the bots or poison LLMs, but to remove load from your own site. The author says as much in the article. He found that feeding bots garbage is the cheapest strategy, that's all.
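As a sketch of how cheap that strategy can be (my own illustration, not the article's code; the bot check here is a naive placeholder, and render_real_page is a hypothetical stand-in for your normal handler):

    import random
    from flask import Flask, request

    app = Flask(__name__)
    WORDS = "the of and a to in is it you that he was for on are".split()

    def looks_like_bot(user_agent: str) -> bool:
        # Placeholder heuristic; real setups match known crawler UA strings or IP ranges.
        return any(tag in user_agent.lower() for tag in ("bot", "spider", "crawl"))

    @app.route("/", defaults={"path": ""})
    @app.route("/<path:path>")
    def page(path):
        if looks_like_bot(request.headers.get("User-Agent", "")):
            # Word salad costs almost nothing compared to rendering a real page.
            return " ".join(random.choices(WORDS, k=500))
        return render_real_page(path)  # hypothetical: your normal handler

    def render_real_page(path):
        return f"real content for /{path}"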
Does this target any particular area? Or does it make people more hairy everywhere? If it targets a particular area, can anyone explain to me how scientists discover such specific pathways?
Counterargument: I can learn things at record speed (for me) because I can learn things in the order that makes sense to me. I find it much more motivating to start with: “why is AI bad at playing video games?”, than to start with “what is the chain rule?”
I will certainly need to learn about the chain rule eventually, but I find that I get lost in the details (and unmotivated to continue) without an end goal that is interesting to me.
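(For reference, the chain rule in question is just the rule for differentiating a composition; backpropagation is this identity applied layer by layer:)

    % Chain rule for a composition h(x) = f(g(x)):
    \[
      h'(x) = f'(g(x)) \cdot g'(x)
    \]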
AI loves to make vibe charts (sometimes with lots of extra steps), but that’s part of the process. It is nontrivial to wrangle LLMs through larger projects, and people need to learn that too
Of course! We still have computers the size of the old vacuum-tube mainframes. They are just built with vastly more powerful hardware and are used for the specialized tasks that supercomputing facilities care about.
But it has the potential to alter the economics of AI quite dramatically.
Because, similar to the US, they have authoritarian tendencies - strong nationalism and anti-immigration. How are you going to round up the bad people if you don't have surveillance everywhere?
Well the Axis powers from World War II are the most obvious demonstrations of nationalism begetting authoritarianism. Germany, Italy, and Japan were nationalist in the extreme. And Italy from that time is such a clear example that it's basically the canonical example used to teach how fascism emerges.
Contemporary examples include the Philippines, Hungary, Poland's Law and Justice Party, and arguably Russia, Turkey and India. Modi is a Hindu nationalist. The United States unfortunately is shaping up to count as an example as well.
Extreme forms of nationalism tend to have a narrative of grievance, a desire to restore a once-great national identity, and a tendency to divide the world into loyal citizens and enemies without and within, against whom authoritarian powers must be mobilized.
So there's a conceptual basis, in terms of setting the stage for rationalizing authoritarianism, as well as abundant historical examples demonstrating the marriage of nationalism and authoritarianism in action. There's nothing wrong with not knowing, but I would say there's an extremely strong and familiar historical canon to those who study the topic.
But that would only be something nationalism signaled if the same weren't also true of totalitarian states like the USSR, the CCP, etc.
Those also had:
- grievance narratives;
- a tendency to divide the world into loyal citizens and enemies; and,
- the use of the above to justify authoritarian powers.
You haven’t shown that nationalism played a particular part in that cycle; just that it also happened in nationalist states. Almost like the problem is those factors, rather than nationalism.
The USSR absolutely used a nationalist view in their propaganda [0]
As did the CCP [1]:
> Ideals and convictions are the spiritual banners for the united struggle of a country, nation and party, wavering ideals and convictions are the most harmful form of wavering.
I actually considered listing them as additional examples, but I had to stop somewhere and they had their own distinct wrinkles.
I think the major difference in their respective cases pertains to the ideological dynamics of the particular strains of communism that manifested in those countries. What they lacked was a fixation on the purity of national heritage as a primary source of moral truth and a foundation for self-conception. Instead they tended to regard themselves as part of a universal, international struggle and understood conflict in economic and ideological terms. What they had in common was the sense that conflict with their chosen enemy necessitated authoritarianism.
There's more than one path to authoritarianism, and they overlap. Different mechanisms don't disprove one another, they exist side by side.
Here is an interesting review of how the two are historically strongly correlated[1].
Their conclusion is that "[...] ethnic and elitist forms of nationalism, which combine to forge exclusive nationalism, help to perpetuate autocratic regimes by continually legitimating minority exclusions [...]"
Right-wing nationalism as we're currently experiencing it is exclusive. It broadly advocates for restoring revised historical cultural narratives of a particular ethnic group, for immigration restriction and immigrant removal, for further minority culture erasure, and so on.
Do you know the history of nationalism in Europe, and Germany in particular? Hint: it’s the “Na” part of “Nazi.”
You are getting downvoted because this is pretty basic stuff. Either you’re part of today’s lucky 10k, or your post reads very much like far-right Gish galloping.
I don't know. It seems, from what you're saying, that you, and honestly an enormous number of people, need to actually learn about 20th-century European history and WWII. People are throwing around terms like Nazi and Gestapo, and I think they have no idea what they mean. The left is not against authoritarianism. The left does not even really want to eliminate the police. They just want to be the ones who decide who the thought-criminals are and what to do with them. Also, that is not what Gish galloping is. I don't know what is happening here.
I do know my history. The Nazi party was a pan-German nationalist party. I'm not sure why this is controversial.
Germans, and Germany are obviously quite sensitive to the dangers of nationalism and authoritarianism. Not just because of WW2, but also the experience of East Germany.
Authoritarian? You're saying this because of immigration; this comes from a position that is basically open borders. It is an interesting double standard. The people that hold this position would not consider non-Western countries that don't want to have open borders or have dramatic demographic shifts in their population and culture to be "authoritarian." This whole notion of "rounding up the bad people" is just infantile leftist stuff. How do you have a sovereign country if you are not able to have a policy that prevents unfettered 'immigration' or unable to deport those that immigrated contrary to law?
The whole concept of a country as a related group of people from one ethnicity or historical origin is relatively recent.
Feudalism did not have this concept; a country was the land belonging to a king (or equivalent), mediated through a set of nobles. There was no concept of illegal or legal immigration; the population of a country were the people who worked for, or were owned by, the nobles ruling that country. There were land rights granted to peasants who had historically lived in that place, but these could be, and often were, overruled by nobles.
European nobility had no such idea of ethnicity or national grouping; the English monarchy is a German family, and most of the European nobility were related to each other much more closely than to the citizens of their countries.
Early post-monarchy states didn't have this concept. The English Civil War and the French Revolution didn't create states that had a defined concept of the citizen as a member of any ethnic grouping. Again, there's no mention of immigration in any of the documents from this period. It just wasn't a concept they thought about.
The whole concept that a nation-state is a formalisation of a historical grouping of ethnically related people is a very recent one, only a couple of hundred years old.
So to answer your question: It is very easy to have a sovereign country without a policy that prevents unfettered immigration; you just don't care about your population being ethnically diverse. Your citizens are the people who live in your country, and have undergone whatever ceremony and formality you decide makes them citizens.
This is, after all, how America historically did this; if you arrived in America and pledged allegiance, you became a citizen of America.
No idea! But I found this part of the paper confusing: "between the conception of Aert van Beethoven’s son Hendrik in Kampenhout, Belgium, in c.1572, and the conception of Ludwig van Beethoven seven generations later in 1770, in Bonn, Germany". I think that means that there were 8 generations from Aert to the famous composer, so Aert was the great...grandfather with 6 greats. (How many people know their family history that far back?)
The Wikipedia article for the composer's grandfather (who was also called Ludwig) says: in 1712 two boys named Ludwig van Beethoven were born. The two families were distantly related. [...] it is not certain "which Ludwig" actually settled in Bonn in 1733
But presumably both of those boys were officially descended from Aert so it doesn't matter for the purposes of this analysis.
> How many people know their family history that far back?
That is not so uncommon in Europe. On both my maternal and paternal lines, the oldest ancestors I am aware of lived almost 400 years ago, in the mid-17th century. Before that, the registers from my region are very patchy because of the devastations of the Thirty Years' War. Where this is not the case, it is not uncommon to be able to trace the line back to the 16th century, especially for people living in towns (not to speak of the aristocracy).
Or, as another example, here is a photo of a reunion of the "descendants and collateral relatives of Dr. Martin Luther and Katharina von Bora" (not related to myself): https://www.lutheriden.de/the-lutheriden.html
> How many people know their family history that far back?
Very common to be able to do this in Europe. I've traced my family back to the early 1500s via Ancestry.com, and my family tree isn't anything special. Before that period, records get very sparse among the common folk, though, and you pretty much need to have a strong connection with nobility to go further back in time.
> One Beethoven biographer63 has previously suggested, on circumstantial grounds, that Ludwig senior may not have been Johann van Beethoven’s biological father
> 63 Canisius, C. Beethoven “Sehnsucht und Unruhe in der Musik”: Aspekte zu Leben und Werk Originalausgabe. Schott, 1992
So it's been hypothesized but presumably not demonstrated genetically.