I don't get what's miserable or unhappy about criticizing the political incoherence of some people. You may disagree with them, but why the personal attacks?
Nothing like outputting specific colors to see which branch the current pixel is taking. It's like printf debugging, but colorful and with only three floats of output.
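A minimal sketch of the trick in a GLSL fragment shader; the variable names and thresholds here are made up for illustration, not from any particular shader:

```glsl
// Debug visualization: paint each branch a distinct solid color so you
// can see, per pixel, which code path ran. `dist`, `nearThreshold`, and
// `farThreshold` are hypothetical placeholders for your own values.
vec3 debugColor = vec3(0.0);

if (dist < nearThreshold) {
    debugColor = vec3(1.0, 0.0, 0.0);  // red: near branch
} else if (dist < farThreshold) {
    debugColor = vec3(0.0, 1.0, 0.0);  // green: mid branch
} else {
    debugColor = vec3(0.0, 0.0, 1.0);  // blue: far branch
}

fragColor = vec4(debugColor, 1.0);     // the three floats of "printf" output
```

Overwriting the real output at the end keeps the rest of the shader untouched, so you can drop the last line and ship the same code once you're done debugging.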
Ah yes the duality of anti-AI crowds on HN. “GenAI is just fancy autocorrect”, and “autocorrect isn’t actually GenAI”.
The thing is, if you're talking about making laws (as GP is), your "surely people understand this difference" strategy counts for squat, and the impact will be larger than you think.
You don't seem to understand what people mean when they say "AI is just fancy autocorrect". They're talking about the little word suggestions above the keyboard, not about correcting spelling. And yes, of course those suggestions are provided by some sort of ML model, and yes, if you actually wrote a whole article using only them, it should be marked as AI-generated. But literally no one is doing that, maybe because it's not fancy enough autocorrect. Either way, this is not the gotcha you think it is.
A law like this would obviously need some sort of sensible definition of what "AI" means in this context. Online translation tools also use ML models, and so do the systems that unlock your device with your face, so classifying all of that as "AI contributions" would make the definition completely useless.
I assume the OP was talking about things like LLMs and diffusion models which one could definitely single out for regulatory purposes. At the end of the day I don't think it would ever be realistically possible to have a law like this anyway, at least not one that wouldn't come with a bunch of ambiguity that would need to be resolved in court.
OK, so define it for us, please. Because, once again, this thread is talking about introducing laws about "AI". OP was talking about LLMs, you say. So SLMs are fine, then? If not, where is the boundary? If they are fine, then congratulations: you have created a new industry of people pushing the boundaries of what SLMs can do, as well as how they are defined.
Laws are built on definitions and this hand-wavy BS is how we got nonsense like the current version of the AI act.
Why are you so mad at me? I'm not even the OP; ask them these questions. I'm also not convinced we need regulation like this in the first place, so I can't tell you where this boundary should be. But a boundary could certainly be found, and it would be well beyond simple spellchecking autocorrect.
I also don't understand why you think this would be so impossible to define. There are regulations in all kinds of areas that target specific things, like chemicals or drugs. Just because some of those have incentivized people to slightly change a regulated thing into an unregulated thing doesn't mean we don't regulate those areas at all. So what makes AI systems so different that you think an adequate definition is impossible?
ollama can't connect to MCP servers; it can merely run models whose output instructs a connected system to connect to an MCP server (e.g. mcphost using ollama to run a prompt and then itself connecting to an MCP server if the response calls for it).
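A minimal sketch of that division of labor, with stubbed-out stand-ins (the function names, the tool name, and the JSON shape are all hypothetical, not ollama's or MCP's actual API): the model only emits a structured tool-call request, and the host process is what actually talks to the MCP server.

```python
import json

def fake_model_response(prompt: str) -> str:
    # Stand-in for a model runner like ollama generating text. A
    # tool-capable model just outputs a description of the call it
    # wants made; it never opens any connections itself.
    return json.dumps({"tool": "get_weather", "arguments": {"city": "Oslo"}})

def dispatch_to_mcp_server(tool: str, arguments: dict) -> str:
    # Placeholder for a real MCP client connection, which would live
    # in the host (e.g. something like mcphost), not in the model.
    return f"{tool} called with {arguments}"

def host_loop(prompt: str) -> str:
    # The host runs the model, inspects the output, and only then
    # connects to the MCP server on the model's behalf.
    raw = fake_model_response(prompt)
    call = json.loads(raw)
    if "tool" in call:
        return dispatch_to_mcp_server(call["tool"], call["arguments"])
    return raw
```

The point of the sketch is the boundary: everything network-facing sits in the host functions, and the model's only contribution is the JSON string.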
I mean, both of these things are actually happening (drone deliveries and people spending a lot of time in VR), just at a much, much smaller scale than they were hyped up to be.
Drones and VR require significant upfront hardware investment, which curbs adoption. On the other hand, adopting LLM-as-a-service has none of these costs, so no wonder so many companies are getting involved with it so quickly.
Right, but abstract costs are still costs to someone, so how far does that go before mass adoption turns into a mass liability for whoever is ultimately on the hook? It seems like everyone is making an extremely risky wager: that LLMs will find their "killer app" before the real costs of maintaining them become too much to bear.

I don't think these kinds of bets often pay off. The opposite, actually: I think every truly revolutionary technological advance in the contemporary timeframe arose out of its very obvious killer app(s); they were in a sense inevitable. Speculative tech, the blockchain being one of the more salient and frequently cited examples, tends to move in pretty clear bubbles, in my estimation.

I've not yet been convinced this one is any different, aside from the absurd scale at which it has been cynically sold as the biggest thing since Gutenberg. That makes it somewhat distinct, but it's still a rather poor argument against it being a bubble.
Considering what we've been seeing in the Russia-Ukraine and Iran-Israel wars, drones are definitely happening at scale. For better or for worse, I expect worldwide production of drones to greatly expand over the coming years.
This makes no sense. Just because something didn't become as big as the hypemen said it would doesn't make the inventions, or the users of those inventions, disappear.
For something to be considered "happening" you can't just have a handful of localized examples. It has to be happening at a scale large enough that even people unfamiliar with the tech notice it. Then you can say it's "happening". Otherwise, it's just smaller groups of people doing stuff.