That seems like a potentially very useful addition to the robots.txt "standard": crawler categories.
Wanting to disallow LLM training (or optionally only that of closed-weight models), but encouraging search indexing or even LLM retrieval in response to user queries, seems popular enough.
If you're using a specific user agent, then you're saying "I want this specific user agent to follow this rule, and not any others." Don't be surprised when a new bot, which you never named, does exactly what you said and crawls anyway! If you don't want any bots reading something, use a wildcard.
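Concretely, that's the difference between these two robots.txt groups (a sketch; Applebot-Extended is just used as an example of a named crawler):

```
# Named group: only a bot identifying as "Applebot-Extended"
# matches this group. Per the Robots Exclusion Protocol, a bot
# obeys the most specific group that matches its user agent.
User-agent: Applebot-Extended
Disallow: /

# Wildcard group: applies to every bot that is not matched by
# a more specific named group above. A brand-new crawler that
# nobody has heard of falls through to this group.
User-agent: *
Disallow: /private/
```

Note the asymmetry: a new bot you've never heard of is only constrained by the `*` group, so a file that lists specific agents with no wildcard fallback permits everything else by default.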
Yes, but given the lack of generic "robot types" (e.g. "allow algorithmic search crawlers, allow archival, deny LLM training crawlers"), neither opt-in nor opt-out seems like a particularly great option in an age where new crawlers are appearing rapidly (and often, such as here, are announced only after the fact).
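If such categories did exist, the idea might look something like this (entirely hypothetical syntax; no crawler recognizes any of these tokens today):

```
# HYPOTHETICAL: "category:" tokens are not part of any standard.
# The point is opting in/out by purpose, not by individual bot name.
User-agent: category:search
Allow: /

User-agent: category:archival
Allow: /

User-agent: category:llm-training
Disallow: /
```

The appeal is that a rule like this would cover crawlers that don't exist yet, which is exactly the case per-agent rules can't handle.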
Sure, but I still think it's OK to look at Apple with a raised eyebrow when they say "and our previously secret training data crawler obeys robots.txt so you can always opt out!"
I've been online since before the web existed, and this is the first time I've ever seen this idea of some implicit obligation to give people advance notice before you deploy a crawler. Looks to me like people are making up new rules on the fly because they don't like Apple and/or LLMs.
It's not controversial, it's just not how the ecosystem works. There has never been an expectation that someone make a notification about impending crawling.
It might be nice if there were categories that well-behaved bots could follow, as noted above, but even then the problem exists for bots doing new things that don't fall into existing categories.
My complaint here isn't what they did. It's that they explain it as "here's how to opt out" when the information was too late to allow people to opt out.
Robots.txt is already the understood mechanism for getting robots to avoid scraping a website.