
It's worrying that they don't specify in which cases they require identity checks.


Just a few days ago, on Friday, my 15-year-old son had his Claude account suspended with a demand for ID to prove he is 18 or older. He had his own Claude Max subscription (he out-earns me fairly frequently in his circle of gaming programmers), and was unaware Anthropic had a must-be-18 rule, as was I. Their email said "Our team found signals that your account was used by a child. This breaks our rules, so we paused your access to Claude." So I guess if you ever ask a question that seems to originate from a teen or younger, expect to hit an ID gate.

So now he's a Codex user. OpenAI and Google both have a minimum age of 13.

EDIT: I should note that Anthropic gave him a refund for the whole month that was underway, despite the month being nearly over. So good on them.


> he out-earns me fairly frequently in his circle of gaming programmers

Can you expand on this? Your teenage son makes more money than you do professionally, by vibe coding video games?


Who said anything about "vibe coding"? Using coding tools like Claude Code as just another tool in the belt is something the overwhelming bulk of professional devs do now (and given that my son managed to find a number of clients paying for his work, he qualifies as professional). Pejorative "vibe coding" nonsense doesn't change this.


Call it whatever you want then, I'm still interested in the question at hand – your son makes more money than you do professionally, using Claude Code to make something video game related?


Almost certainly referring to cheating software.


He makes solutions for people and they pay him money for doing so. I mean...pretty much exactly how we all operate? He's excellent at networking and has built an enormous connection tree.


That's interesting, good for him! When I was a teenager I spent all my time playing Halo 2, I didn't write my first line of code until my early 20s.


This sounds so sketchy and vague. Why do you need to keep defending it, if it's completely legit?

"Defending it"? Sketchy and vague is similarly hilariously pathetic language, and you sound like an absolute creep.

I have zero obligation to detail the work my minor son does to random weird foot-stomping, entitled creeps on HN, and these bizarrely insecure demands from professional failures are...telling.

In this case I assume you're keying off of the other clown who, based upon absolutely nothing (but apparently their own professional failure), is certain it must be "cheating software".

How pathetic. If this is your lot in life, Jesus Christ find a different career. Maybe the trades or something.


Yeah. I do not get the 18-plus age gate. It's not like they're protecting anyone. AI is so freely available now that anyone who wants it can get it.

Anthropic made the best models by hiring non-technical folks like philosophers to build the best training sets and evaluations. Now, it seems like their philosophers are telling people how they can and can't use their model.


Liability. OpenAI has faced several court cases now, I believe, where children killed themselves after interacting with ChatGPT. There's less liability if the user is an adult.


> Anthropic gave him a refund for the whole month that was underway, despite the month being nearly over

I sense an opportunity for free tokens.

Ideas for prompts that reliably trigger the age check?


... so let me understand this.

It is frequently said that programming directly is obsolete, and the skill you must have now is knowing how to operate agentic AIs.

Yet you aren't allowed to do this until you're 18.

So, developing software is now 18+ only?


Qwen3 runs locally on reasonable hardware, and is comparable to a mid-2025 Claude Sonnet (albeit possibly rather slower).

Local models are chasing the online frontier models pretty hard.

So worst case, that's the fallback (FWIW, YMMV)

edit: Qwen-3.5 MoE (and other local MoE models like it)


What's "reasonable hardware"?


People have tried to run Qwen3-235B-A22B-Thinking-2507 on 4x used $600 Nvidia 3090s with 24 GB of VRAM each (96 GB total), and while it runs, it is too slow for production-grade use (<8 tokens/second). So we're already at $2,400 before you've purchased system memory and a CPU, and it is still too slow for a "Sonnet equivalent" setup...

You can quantize it, of course, but if the idea is "as close to Sonnet as possible," then while quantized models are objectively more efficient, they sacrifice precision for it.

So the next step is to up that speed: 4x $1,300 Nvidia 5090s with 32 GB of VRAM each (128 GB total), or $5,200 before RAM/CPU/etc. All of this additional cost to increase your tokens/second without lobotomizing the model. Even this may not be enough.

I guess my point is: You see this conversation a LOT online. "Qwen3 can be near Sonnet!" but then when asked how, instead of giving you an answer for the true "near Sonnet" model per benchmarks, they suddenly start talking about a substantially inferior Qwen3 model that is cheap to run at home (e.g. 27B/30B quantized down to Q4/Q5).

The local models that are "near Sonnet" absolutely DO exist. The hardware to actually run them is the bottleneck, and it is a HUGE financial/practical one. A $10K all-in budget isn't actually insane for this class of model, and the sky really is the limit (again, to reduce quantization and/or increase tokens/second).

PS - And electricity costs are non-trivial for 4x 3090s or 4x 5090s.
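For anyone who wants to sanity-check the numbers above, the weight-memory math is simple arithmetic. This is a rough back-of-envelope sketch (no vendor specs assumed); real quantized files run somewhat larger because of quantization scales and layers kept at higher precision:

```python
# Back-of-envelope weight-memory math for a 235B-parameter model.
# Treat these figures as lower bounds on what the files actually occupy.

def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Size of the raw weights in GB (1 GB = 1e9 bytes)."""
    return params_billions * bits_per_weight / 8

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weights_gb(235, bits):.0f} GB of weights")
# 16-bit: ~470 GB, 8-bit: ~235 GB, 4-bit: ~118 GB -- even at 4-bit the
# weights alone exceed the 96 GB of a 4x 3090 rig, before any KV cache,
# which is why layers spill into system RAM and throughput collapses.
```

That gap between ~118 GB of 4-bit weights and 96 GB of VRAM is the whole story: whatever doesn't fit gets served from much slower system memory on every token.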


I may have genuinely new data for you.

Qwen3.5-35B-A3B is reported to perform slightly better than the model you mentioned.

It runs fine but non-optimally on a single 3090, even with 131,072 tokens of context, and due to the hybrid attention architecture, the memory usage and compute scale rather less drastically than ctx^2. I've had friends with smaller cards still getting work out of it. Generation runs at around 20 tokens/sec on that 3090 (without doing anything special yet). You'll need enough DRAM to hold the bits of the model that don't fit. Nothing to write home about, but genuinely usable in a pinch or for tasks that don't need immediate interactivity.

It's the first local model that passes my personal kimbench usability benchmark at least. Just be aware that it is extremely verbose in thinking mode. Seems to be a qwen thing.

(edit: On rechecking my numbers; I now realize I can possibly optimize this a lot better)
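The context-scaling point above can be sketched numerically. A toy KV-cache model showing why a sliding-window (hybrid) attention layer keeps memory bounded as context grows; every architecture number here (layer count, heads, head dimension, window size) is made up for illustration and is not the actual Qwen architecture:

```python
# KV-cache size for a stack of decoder layers. Full attention caches keys
# and values for every past token; a sliding-window layer caches only the
# last `window` tokens. All architecture numbers below are illustrative.

def kv_cache_mb(ctx, layers, kv_heads, head_dim, bytes_per=2, window=None):
    """Approximate KV-cache size in MB (factor of 2 = keys + values)."""
    tokens = min(ctx, window) if window is not None else ctx
    return 2 * layers * kv_heads * head_dim * tokens * bytes_per / 1e6

full = kv_cache_mb(131072, layers=48, kv_heads=4, head_dim=128)
windowed = kv_cache_mb(131072, layers=48, kv_heads=4, head_dim=128, window=4096)
print(f"full attention: ~{full/1000:.1f} GB, windowed: ~{windowed/1000:.2f} GB")
# roughly 12.9 GB vs 0.4 GB for the same 131,072-token context
```

Real hybrid models interleave full-attention and windowed layers, so the true cache lands between these two extremes, but the shape of the saving is the same.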


With respect, this isn't "new data" it is an anecdote. And it kind of represents exactly the problem I was talking about above:

- Qwen is near Sonnet 4.5!

- How do I run that?

- [Starts talking about something inferior that isn't near Sonnet 4.5].

It is this strange bait-and-switch discussion that happens over and over. Not least because Sonnet has a 200K context window, and most of these anecdotes aren't for anywhere near that context size.


You're not wrong; but... IMHO it's closer to Sonnet 4.0 [1] on my personal benchmark [2]. And I HAVE run it at just over 200K tokens of context: it works, it's just a bit slow at that size. It's not great, but... usable to me? I used Sonnet 4.0 over the API for half a year or so before, after all.

Only way to know if your own criteria are now matched -or not yet- is to test it for yourself with your own benchmark or what have you.

And it does show a promising direction going forward: usable (to some) local models becoming efficient enough to run on consumer hardware.

[1] released mid-2025

[2] take with salt - only tests personal usability

+ Note that some benchmarks do show Qwen3.5-35B-A3B matching Sonnet 4.5 (released later last year); but I treat those with the same skepticism you do, clearly ;)


One sure would expect Qwen3.5-35B-A3B to "perform slightly better" than Qwen3-235B-A22B!


> The hardware to actually run them is the bottleneck, and it is a HUGE financial/practical bottleneck.

That's unsurprising, seeing as inference for agentic coding is extremely context- and token-intensive compared to general chat. Especially if you want it to be fast enough for a real-time response, as opposed to just running coding tasks overnight in a batch and checking the results as they arrive. Maybe we should go back to viewing "coding" as a batch task, where you submit a "job" to be queued for the big iron and wait for the results.


A machine with 128GB of unified system RAM will run reasonable-fidelity quantizations (4-bit or more).

If you ever want to answer this type of question yourself, you can look at the size of the model files. Loading a model usually uses an amount of RAM around the size it occupies on disk, plus a few gigabytes for the context window.

Qwen3.5-122B-A10B is 120GB. Quantized to 4 bits it is ~70GB. You can run a 70GB model in 80GB of VRAM or 128GB of unified normal RAM.

Systems with that capability cost a few thousand USD to purchase new.

If you are willing to sacrifice some performance, you can take advantage of the model being a mixture-of-experts and use disk space to get by with less RAM/VRAM, but inference speed will suffer.
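The mixture-of-experts trade-off in that last paragraph can be made concrete. A naive sketch using the 122B-total / 10B-active split implied by the model name above (the split comes from the name; everything else is bare arithmetic, and real quantized files run a bit larger):

```python
# Why a mixture-of-experts model needs less *hot* memory than its total
# size suggests: only the experts routed to for the current token must be
# resident in VRAM/RAM; cold experts can be paged in from disk.

def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Naive raw-weight size in GB for a 4-/8-/16-bit representation."""
    return params_billions * bits_per_weight / 8

total_4bit = weights_gb(122, 4)   # every expert, resident or on disk: ~61 GB
active_4bit = weights_gb(10, 4)   # active per token, must be hot: ~5 GB
print(f"total ~{total_4bit:.0f} GB, hot set ~{active_4bit:.0f} GB")
# The catch: the routed expert set changes from token to token, so the hot
# set churns constantly -- hence "inference speed will suffer" when paging.
```

So the floor on fast memory is far below the model's headline size, but you pay for the gap in throughput every time a needed expert has to be fetched.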


If you want something off the shelf, get a MacBook Pro M5 (base "Pro" chip) with 48 GB of RAM:

Gemma 4 31B Q6: 9 tok/s. I'd say it is smarter than GPT-4o, but yeah, it's slow. Good for coding.

Gemma 4 26B A4B Q4: 50 tok/s. Feels faster than ChatGPT 5.4, but not as smart (as it reasons less). Good for general chatting and research.


Give Gemma4 a look, too. I've had terrific results with that and OpenCode locally.


> It is frequently said that programming directly is obsolete

Who says this?


The CEO of the company in question, the one with the age limit, for one.


I would hope most people can recognize that someone trying to sell you something might be among the least trustworthy sources about that thing.


I mean, you can disagree with the sentiment (I certainly do), but there are still an awful lot of people saying it.


I thought it was true too, for a couple of months. Then the honeymoon phase ended and now I only use Claude to write commit message drafts (which I rewrite myself) and review PRs.


Yes, today’s kids should instead learn to be influencers.

This is genuine advice I've seen from high-profile business types. We're fucked, in the sense that our children will be made into online attention whores.


It seems out of step and foolish, and the cynic in me says that Anthropic has a side hustle of identity harvesting and is looking for justifications. On the flip side, there is a real risk of pearl-clutching if a child ever uses AI, and maybe Anthropic just wants to steer clear of all of that. Though simply putting it in the ToS should be sufficient legal shielding, and the idea that they're harvesting chats to age-fingerprint conversations seems dubious.


Basically the only relevant question, and it's the one they didn't answer


An equally valid question is: "Does the company you use for identity verification follow the same commitments with regard to user privacy and the selling/processing of user data as Anthropic itself?"

And the answer to that question is:

"Hell no! We used the cheapest, shadiest company we could find for that. They'll process and sell all your data. Thank you for continuing to be a valued Anthropic customer!".


I can guess at least one valid reason:

* preventing actors from North Korea, China, Russia, Iran, etc. from accessing the service. They absolutely use workarounds to access AI; e.g., I bet there are companies acting as proxies between Anthropic and those countries.

I imagine there will be quite a few false positives while identifying those.


This will do absolutely nothing to prevent those actors from accessing Claude... they already recruit young unemployed Americans to do proxy job interviews[0][1], etc. They'll just pay young unemployed Americans to do verification for them.

[0] https://www.tradingview.com/news/cointelegraph:6192f38e3094b...

[1] https://youtube.com/watch?v=QebpXFM1ha0


That sounds likely to increase their costs and create new opportunities to get caught. Not a silver bullet but not "absolutely nothing". Like how anti-money laundering laws don't wipe out all crime, but are still worthwhile.


If the API costs are gonna be thousands, or the subscription will be $20/month, is it really that expensive to pay some guy on Discord a $50 gift card to verify the account as a one-time setup? Better yet, we'll probably start seeing fake porn websites and other phishing sites that ask to verify your age but end up proxy verifying a bunch of these services in an automated manner with minimal costs, and you'll be able to buy verified Claude accounts for tens of cents on account marketplaces. Just as you have been able to buy verified Discord accounts, aged Steam accounts, etc...


I am Chinese, and I can tell you definitively that it doesn't cost $50. It costs only about $7 on the Chinese platform "Xianyu".


Get caught how? They don't tell the person what they're going to use the accounts for or who will use them. The straw-buyer patsy knows nothing.

On the scale of intelligence budgets this would be in the realm of petty cash.


They won't even need to do that. With enough time and money, they can certainly figure out how to not trigger the ID verification system.


Also, as many teenagers know, it's trivial to get a fake ID card.


The "Why did my account get banned after verification?" section gives some reasons:

- Repeated violations of our Usage Policy

- Account creation from an unsupported location

- Terms of Service violations

- Under-18 usage


Those are reasons for banning after verification, not reasons for requesting identity verification in the first place.


They request ID for bans so that they can ban you personally. ID checks may as well be a sign that you've already been banned and they're fishing for ways to make the ban harder to evade. Venmo does the same thing.


Maybe Anthropic just likes creating a market for dark identities. Because that's the most likely effect of such stupidity: generating more ID-theft victims with no change to services for criminals.


Is a "dark identity" one that's never been shared with an identity-theft-as-a-service? Or is it just of one that's (supposed to be) privacy-conscious (and wouldn't otherwise have been an easy victim)?


> ID checks may as well be a sign that you've already been banned and they're fishing for ways to make the ban harder to evade.

So identity verification is basically a canary that your account is about to get banned, or is on the chopping block. At that point you're better off abandoning ship rather than handing over your ID.


Basically exactly my point. If you could use the service without ID verification, and others can still use the service without ID verification, but you've been blocked because you haven't handed over your ID, then leave or start a new account. That is if you're averse to being banned personally. If you don't mind the risk then you can verify ID and prepare to jump ship if it's a ban.


Wouldn't the reasons for requesting identification be the same as those for banning people? The system has flagged that you might be from the wrong location, under 18, creating multiple free accounts, etc., so it is validating.


You're a hero


Great work! I really appreciate these tools with the privacy angle!


Thank you! I am a big privacy advocate!


This will hurt a lot of the startup AI wrappers around these productivity tools.


Briefings are a feature, not a moat. If that's your whole product, Google was always going to eat you eventually. We do this too at amaiko.ai. Full M365 integration, but not locked to Microsoft. Can pull in Google services, Jira, whatever else. The defensible part isn't "here's what's on your calendar," it's the AI actually building memory over weeks and adapting to how you work.


It’s a Google Labs experiment. We all know how that goes.

Startups will be fine.


It was only a matter of time before this happened.


There is an update:

"Cloudflare Dashboard and Cloudflare API service issues"

Investigating - Cloudflare is investigating issues with Cloudflare Dashboard and related APIs.

Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed. Dec 05, 2025 - 08:56 UTC


LinkedIn, Perplexity as well


Politicians to reach out to in Germany, with a template email:

poststelle@bmi.bund.de, poststelle@bmjv.bund.de, info@bmds.bund.de, baerbel.bas@bundestag.de, lars.klingbeil@bundestag.de, friedrich.merz@bundestag.de, landesleitung@csu-bayern.de, fraktion@cducsu.de, matthias.miersch@bundestag.de, sebastian.fiedler@bundestag.de, alexander.throm@bundestag.de, johannes.schaetzl@bundestag.de, ralph.brinkhaus@bundestag.de

Dear Sir or Madam,

I am writing to you today to express my deep concern about the planned introduction of the so-called "chat control" (Chatkontrolle).

Blanket surveillance of private communication constitutes a massive infringement of our fundamental rights. It endangers the privacy of all citizens and undermines core principles of a democratic state governed by the rule of law. Protecting the confidentiality of communication is an indispensable part of our free society.

Moreover, numerous experts have shown that blanket scanning of private messages is not an effective means of combating child sexual abuse material. Instead, such a measure weakens the security of digital communication as a whole and creates dangerous surveillance infrastructure that can easily be abused.

I therefore urge you to take a clear stand against the introduction of chat control in the relevant vote and to champion the protection of civil rights and privacy.

Kind regards


lol



I don't see how this solves the problem. If there is a new law, it still needs to be enforced, so companies still need the same identity checks to make sure they are compliant.

I agree that it should be the responsibility of parents, but if good and bad parenting were left to parents alone, I think we would live in a different world.


If that were the case, then Russia would also admit they did it. It's weird to not hide your IP but still deny the hack at the political level.


The major powers are endlessly engaged in hacking operations against each other. This is just normal, and no one needs to "admit" to it for that reality to be true. The notable part of this story isn't that Russia tried to compromise a US system, but instead is that some Russian party (whether official or unofficial) apparently had DOGE credentials moments after they were created, which indicates that DOGE is thoroughly compromised. Which should surprise absolute no-one.


Look at what they did with the 2016 election. They hacked that too and didn't hide anything, but when they were accused by the US government they claimed innocence and blamed Ukraine. That allows Russian people to say, "Look how awful those Ukrainians are for hacking America, and look at how awful America is for blaming Russia."

So they hack their enemy, and then use that to reinforce the false narratives they tell their own people. It's gaslighting at the national level. Russia is as if your emotionally abusive partner was your government. America is becoming the same.


Nah, it's the same as the "little green men" in Crimea back in the day.

Everyone knew it was Russia. They were still like "I don't know what you're talking about".

It's all power games.

