This keeps coming up and we keep having the same debates about what Age Verification isn't.
For the folks in the back row:
Age Verification isn't about Kids or Censorship, It's about Surveillance
Age Verification isn't about Kids or Censorship, It's about Surveillance
Age Verification isn't about Kids or Censorship, It's about Surveillance
Without even reaching for my tinfoil hat, the strategy at work here is clear [0 1 2]. If we have to know that you're not a minor, then we also have to know who you are, and any technique that obfuscates who you are has to be made illegal. By turning this from "keep an eye on your kids" into "prove you're not a kid" they've created the conditions to make privacy itself illegal.
VPNs are next. Then PGP. Then anything else that makes it hard for them to know who you are, what you say, and who you say it to.
Please, please don't fall into the trap and start discussing whether or not this is going to be effective to protect kids. It isn't, and that isn't the point.
Like many mention in other comments on this post, it's possible to implement using ZKPs. There are likely other methods that would be effective without compromising privacy. None of them are part of the Age Verification discussion because kids are not the actual point of Age Verification.
When I say "if we have to know you're not a kid, we have to know who you are" I'm not stating an actual truth, but the argument as it is playing out politically.
> None of them are part of the Age Verification discussion because kids are not the actual point of Age Verification.
The EU age verification solution says implementations SHOULD implement[1] their ZKP protocol[2]. Not linking it to the user is stated as an explicit goal:
Unlinkability: The goal of the solution is to prevent user profiling and tracking by avoiding linkable transactions. Initially, the solution will rely on batch issuance to protect users from colluding RPs. Zero-Knowledge Proof (ZKP) mechanisms will be considered to offer protection. More details are provided in Section 7.
Is there a good explanation of how ZKPs prevent attestation providers (which presumably know your identity) from linking an issued proof back to you if, for example, the website elects to store it? I can wrap my head around RSA and ECC and PKI, but I haven't managed to make sense of this yet.
Assuming that's even a goal, of course. The cited paragraph mentions RPs (the websites, from what I understand), but makes no mention of attestation providers.
This is, of course, very technical, but here is how it works at a high level.
In the non-ZKP presentation, the "holder" (phone) sends the credential to the relying party (website), and the RP executes some verification algorithm. In the ZK presentation, the holder executes the verification algorithm and sends to the RP a proof that the algorithm was executed correctly.
The "proof" has this magical property that it reveals nothing other than that the check passed. (You will have to take on faith that such proofs exist.) In particular, if the check was the predicate "I have a signature by ISSUER on HASH, and SHA256(DOCUMENT)==HASH, and DOCUMENT["age_gt_18"]=TRUE", anybody looking at the proof cannot infer ISSUER, HASH, DOCUMENT, or anything else really. "Cannot infer" means that the proof is some random object and all HASH, DOCUMENT, ISSUER, etc. that satisfy the predicate are equally likely, assuming that the randomness used in the proof is private to the holder. Moreover, generating a proof uses fresh randomness each time, so given two proofs of the same statement, you still cannot tell whether they come from the same ISSUER, HASH, DOCUMENT, ...
the more I think about it, the more I feel like I need someone with deep knowledge to explain ZKPs to me.
So like, we've got this algorithm that gets sent our way and we run it and that provides kind of a cryptographic hash or whatever. But if we're running the algorithm ourselves what's to stop us from lying? Where does the 'proof' come from? What's the check that it's running and why do we inherently trust the source it's checking?
Here is a hopefully simple example of how this ZKP thing may even be possible. Imagine that you give me a Sudoku puzzle. I solve it, and then I want to prove to you that I have solved it without telling you the solution. It sounds impossible, but here is one way to do it.
I compute the solution. I randomly scramble the digits 1-9 and I put the scrambled solution in a 9x9 array of lock boxes on a table. I have the keys to the 81 locks but I am not giving you the keys yet. You randomly ask me to open either 1) one random row chosen by you; 2) one random column chosen by you; 3) one random 3x3 block chosen by you; or 4) the cells corresponding to the original puzzle you posed to me. In total you have 28 possibilities, and assume that you choose them with equal probability. You tell me what you want and I open the corresponding lock boxes. You verify that the opened lock boxes are consistent with me knowing a solution, e.g. all numbers in a row are distinct, the 3x3 block consists of distinct numbers, etc. If I am cheating, then at least one of your 28 choices will be inconsistent, and you catch me with probability at least 1/28. So if we repeat this game 1000 times and I don't know the solution, you will catch me with probability at least 1-(27/28)^1000, which is effectively 1. However, every time we repeat the game, I pick a different random scrambling of the integers 1-9, so you don't learn anything about the solution.
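One round of this game can be sketched in code. This is a deliberate toy (a 4x4 grid, and only the "open one random row" challenge out of the full menu of challenges), with SHA256 commitments standing in for the lock boxes:

```python
import hashlib
import random
import secrets

# A solved 4x4 Sudoku, as a toy stand-in for the 9x9 case.
SOLUTION = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]

def commit(value):
    """Lock box: SHA256 over a fresh secret nonce plus the value."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + bytes([value])).hexdigest()
    return digest, nonce

def one_round(solution):
    # Prover: scramble the digits 1..4 with a fresh random permutation,
    # then commit to every cell of the scrambled solution.
    perm = dict(zip([1, 2, 3, 4], random.sample([1, 2, 3, 4], 4)))
    boxes = [[commit(perm[v]) for v in row] for row in solution]
    commitments = [[digest for digest, _ in row] for row in boxes]

    # Verifier: pick a random row to open.
    r = random.randrange(4)

    # Prover: open that row by revealing the scrambled values and nonces.
    opened = [(perm[solution[r][c]], boxes[r][c][1]) for c in range(4)]

    # Verifier: check each opening matches its commitment, and that the
    # revealed values are all distinct. Nothing about the solution leaks
    # because the scrambling is fresh every round.
    for c, (v, nonce) in enumerate(opened):
        assert hashlib.sha256(nonce + bytes([v])).hexdigest() == commitments[r][c]
    return len({v for v, _ in opened}) == 4

assert all(one_round(SOLUTION) for _ in range(50))
print("50 honest rounds passed; each opened row was just 4 distinct random-looking digits")
```

Each round uses a fresh scrambling and fresh nonces, which is exactly why the verifier learns nothing beyond "the check passed".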
All of ZKP is a fancy way to 1) encode arbitrary computations in this sort of protocol, and 2) amplify the probability of success via clever error-correction tricks.
The other thing you need to know is that the protocol I described requires interaction (I lock the boxes and you tell me which ones to open), but there is a way to remove the interaction. Observe that in the Sudoku game above, all you are doing is flipping random coins and sending them to me. Of course you cannot let me pick the random coins, but if we agree that the random coins are just the SHA256 hash of what I told you, or something else similarly unpredictable, then you will be convinced of the proof even if the "coins" are something that I compute myself by using SHA256. This is called the "Fiat-Shamir transformation".
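A minimal sketch of that transformation: the verifier's coin flips get replaced by a hash of the prover's own commitments, so the prover cannot steer the challenge (28 choices matches the Sudoku game above):

```python
import hashlib

def fiat_shamir_challenge(commitments, num_choices):
    """Derive the verifier's 'coin flips' from a hash of the prover's
    own commitments, removing the need for interaction."""
    transcript = "|".join(commitments).encode()
    digest = hashlib.sha256(transcript).digest()
    # Interpret the hash as a big integer and reduce it to a choice index.
    return int.from_bytes(digest, "big") % num_choices

# The prover cannot pick the challenge: changing any commitment changes
# the hash, and hence (unpredictably) the resulting challenge.
c1 = fiat_shamir_challenge(["box-a", "box-b", "box-c"], 28)
c2 = fiat_shamir_challenge(["box-a", "box-b", "box-X"], 28)
print("challenge for transcript 1:", c1)
print("challenge for transcript 2:", c2)
```

The same input always yields the same challenge, so the verifier can recompute it afterwards and confirm the prover didn't cheat on the coin flips.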
How do we implement the lock boxes? I tell you SHA256(NONCE, VALUE) where the NONCE is chosen by me. Given the hash, you cannot compute VALUE. To open the lock box, I tell you NONCE and VALUE, which you believe under the assumption that I cannot find a collision in SHA256.
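That lock-box construction is a standard hash commitment, and it fits in a few lines:

```python
import hashlib
import secrets

def commit(value: bytes):
    """Commit: hash a fresh secret nonce together with the value.
    The digest hides the value; the nonce is kept by the prover."""
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + value).hexdigest(), nonce

def open_box(digest, nonce, claimed_value: bytes):
    """Open: reveal nonce and value; verifier recomputes the hash.
    Opening to a different value would require a SHA256 collision."""
    return hashlib.sha256(nonce + claimed_value).hexdigest() == digest

digest, nonce = commit(b"7")
assert open_box(digest, nonce, b"7")      # honest opening verifies
assert not open_box(digest, nonce, b"3")  # lying fails
print("commitment opens only to the committed value")
```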
I think a parent should be able to give their kid access if they deem their kid mature enough. If the kid can handle social media without it becoming an addiction or a self-esteem issue or similar, then it would generally be a net positive. For example, social media may include YouTube which has a lot of educational content. Why hold the kid back?
Excellent question. More generally, what prevents me from copying the credential and giving it to somebody else?
The currently favored approach works like this. The DOCUMENT contains a device public key DPK. The corresponding secret key is stored in some secure hardware on the phone, designed so that I (or malware or whatever) cannot extract the secret key from the secure hardware. Think of it as a yubikey or something, but embedded in the phone. Every presentation flow will demand that the secure element produce a signature of a random challenge from the RP under the secret key of the secure hardware. In the ZKP presentation, the ZKP prover produces a proof that this signature verifies correctly, without disclosing the secret key of the secure hardware.
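The challenge-response part of that flow can be sketched as follows. This is a simulation, not the real protocol: HMAC with a symmetric key stands in for the secure element's asymmetric signature, the biometric check is modeled as a boolean, and in the real flow the holder would additionally produce a ZK proof that the signature verifies rather than sending the signature itself:

```python
import hashlib
import hmac
import secrets

class SecureElement:
    """Stand-in for the phone's secure hardware: it holds a device key
    that never leaves it, and refuses to sign unless unlocked."""
    def __init__(self):
        self._device_key = secrets.token_bytes(32)  # never exported
        self.unlocked = False

    def sign(self, challenge: bytes) -> bytes:
        if not self.unlocked:
            raise PermissionError("biometric unlock required")
        # HMAC here is a toy stand-in for a real asymmetric signature
        # whose public key would be bound into DOCUMENT.
        return hmac.new(self._device_key, challenge, hashlib.sha256).digest()

se = SecureElement()
challenge = secrets.token_bytes(16)  # fresh nonce from the relying party

try:
    se.sign(challenge)
except PermissionError as e:
    print("locked:", e)

se.unlocked = True  # fingerprint matched, entirely on-device
sig = se.sign(challenge)
print("signed the RP's challenge:", sig.hex()[:16], "...")
```

The fresh per-session challenge is what stops a copied credential from being replayed: without the secure element (and the unlock), nobody else can answer a new challenge.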
In your example, the parent could give the phone to the kid. However, in current incarnations, the secure hardware refuses to generate a signature unless unlocked by some kind of biometric identification, e.g. fingerprint. The fingerprint never leaves the secure hardware.
How does the issuer (e.g. the republic of France) know that DOCUMENT is bound to a given fingerprint? This is still under discussion, but as a first bid, a French citizen goes to city hall with his phone and obtains DOCUMENT after producing a fingerprint on the citizen's phone (as opposed to a device belonging to the republic of France). You can imagine other mechanisms based on physical tokens (yubikeys or embedded chips in credit cards, or whatever). Other proposals involve taking pictures compared against a picture stored in DOCUMENT. As always, one needs to be clear about the threat model.
In all these proposals the biometric identification unlocks the secure hardware into signing a nonce. The biometrics themselves are not part of the proof and are not sent to the relying party or to the issuer.
Another excellent question. The current answer in the EU seems to be "you need a phone". My preferred answer (despite being one of the Google guys who designed the ZKP mechanism) would be that the government sends you some sort of plastic card with a chip that does not tie you to a phone. Still fighting that battle.
If privacy is an explicit goal, why isn't it a MUST? Why even bother with the initial batch issuance phase? And what's stopping them from silently adopting a batch size of 1?
Yeah, getting into the car with the guy holding the gun doesn't become okay because you have a great argument you're waiting to use down the road. He's already got the gun out.
We should have started arguing when he just said he had a gun, indoors, in the crowd. We shouldn't have quietly walked outside at his demand. But that all happened. Here we are now, at the car, and he's got the gun out, and he's saying "get in", and we're probably not going to win from here -- but pal, it's time to start arguing. Or better yet, fighting back hard.
Because that car isn't going anywhere we want to be. We absolutely can not get in the car right now, and just plan to argue the point later. It doesn't matter how right the argument is at all.
The thing is that as far as I can tell, a ZKP of age involves a state or similar attestor issuing an ID/wallet that can be queried for age without revealing identity.
But the attestor has to have certainty about the age of the person it issues IDs to. That raises obvious questions.
What states are going to accept private attestors? What states are going to accept other states as attestors? What state won't start using its issued ID/wallet for any purpose it sees fit?
This system seems likely to devolve into national Internets populated only by those holding such IDs. That can all happen without ZKPs being broken.
> Are you aware of any age verification systems that do not have this property?
As I understand it, it's the goal of OpenID4VP[1][2]. Using it, a site can request to know if the user is over 18, say, and the user can return proof of just that one claim - I'm over 18 - without sharing identifying information.
The new EU age verification solution[3] builds on this for example.
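For a rough feel of how a wallet can disclose just one claim, here is a salted-hash sketch in the style of SD-JWT (one of the credential formats used with OpenID4VP). It is not the actual wire format, and HMAC with a shared key stands in for the issuer's real asymmetric signature (a real verifier would hold only the issuer's public key):

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # stand-in for the issuer's signing key

def issue(claims: dict):
    """Issuer: salt and hash each claim, then sign only the hash list.
    The signature covers all claims without exposing any of them."""
    salted = {k: (secrets.token_hex(8), v) for k, v in claims.items()}
    digests = sorted(
        hashlib.sha256(f"{salt}|{k}|{v}".encode()).hexdigest()
        for k, (salt, v) in salted.items()
    )
    signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                         hashlib.sha256).hexdigest()
    return salted, digests, signature

def present(salted, digests, signature, claim):
    """Holder: disclose a single claim plus its salt, nothing else."""
    salt, value = salted[claim]
    return {"claim": claim, "salt": salt, "value": value,
            "digests": digests, "signature": signature}

def verify(p):
    """Verifier: check the issuer's signature over the hash list, then
    that the one disclosed claim hashes into that signed list."""
    ok_sig = hmac.compare_digest(
        p["signature"],
        hmac.new(ISSUER_KEY, json.dumps(p["digests"]).encode(),
                 hashlib.sha256).hexdigest())
    d = hashlib.sha256(
        f"{p['salt']}|{p['claim']}|{p['value']}".encode()).hexdigest()
    return ok_sig and d in p["digests"]

cred = issue({"name": "Alice", "birth_year": 1990, "age_over_18": True})
presentation = present(*cred, "age_over_18")
assert verify(presentation)
assert "name" not in presentation
print("verified age_over_18 without revealing name or birth year")
```

Note this salted-hash approach is weaker than a true ZKP: the undisclosed claims stay hidden, but repeated presentations of the same credential are linkable unless something like batch issuance or ZK proofs is layered on top, which is exactly what the cited unlinkability goal is about.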
Can't read the specs at the moment, but what prevents the age verification service and the age-gated website from colluding and de-anonymizing your porn use?
I haven't had time either to fully wrap my head around the details.
At least in the EU solution they say there would be multiple attestation services the user could choose from. So that would be technically better than nothing.
2) Cigarette vending machines accept VISA cards and government IDs and they're offline.
3) A medium-sized social media network required photos (not scans) of GovIDs, where only the year of birth and validity date need to be visible. The rest could be blacked out physically.
4) You can guess users' ages and request solid proof only for those you are unsure about.
The problem is that we technical users think of a one-size-fits-all technical approach that works, without a single fail, for all global users. That is bound to fail.
It is only a law and you can break it big time or small time. Reddit's approach might prove way too weak; it'll be fined and given a year to improve. Others might leave the market. Others will be too strict and struggle to get users. Others might have weak enforcement and keep a low profile forever. Others will start small, below the radar, explode in popularity, and then enforcement will have to improve.
You can also request identity and then delete it. (Yes, some will fail to delete and get hacked.)
Giving Facebook a free pass is stupid. They're selling your age cohort "10-11" within 0.0037ms for $0.0003 to the highest bidder on their ad platform.
> Age Verification isn't about Kids or Censorship, It's about Surveillance
We know this because, instead of putting easy-to-use parental controls on new devices sold (and making it easy to install on old ones) with good defaults [1], they didn't even try that, and went directly for the most privacy-hostile solution.
[1] So lazy parents with whatever censorship the government thinks is appropriate for kids, while involved parents can alter the filtering, or remove the software entirely.
Parental control software has existed for decades. It hasn't worked.
Over 70% of teenagers <18 today have watched porn [1]. We all know (many from experience) that kids easily get around whatever restrictions adults put on their computers. We all know the memes about "click here if you're 18" being far less effective than "click here if you're not a robot."
Yes, there were other ways of trying to solve the problem. Governments could've mandated explicit websites (which includes a lot of mainstream social media these days) include the RTA rating tag instead of it being a voluntary thing, which social media companies still would've fought; and governments could've also mandated all devices come with parental control software to actually enforce that tag, which still would've been decried as overreach and possibly would've been easily circumventable for anyone who knows what they're doing (including kids).
But at the end of the day, there was a legitimate problem, and governments are trying to solve the problem, ulterior motives aside. It's not legal for people to have sex on the street in broad daylight (and even that would arguably be healthier for society than growing up on staged porn is). This argument is much more about whether it's healthy for generations to be raised on porn than many detractors want to admit.
“Ease of access” and “easy access to the most depraved shit you can think of that’s out there” is what changed. That is what is wrong and why many people feel we need to find some way to control that access.
The Internet didn’t come along until I was well into adulthood. Think about what porn access looked like in the late ‘70s and ‘80s. As a teen we were “lucky” if by some rare miracle a friend stole their dad’s Playboy, Penthouse, or Hustler and stashed it in the woods (couldn’t risk your parents finding it under your mattress) for us dudes to learn the finer points of female anatomy. In a week it would be washed out from the elements with nary a nipple to be seen. Those magazines (even Hustler) were soft compared to what a few clicks can find today. Basically you got degrees of nudity back then, but we appreciated it.
Hardcore video was very rare to see as a horny teen kid in the ‘80s. Most porn movies were still pretty well confined to theaters, but the advent of VHS meant (again by sheer luck) you had to have a friend whose parents happened to be into it, who had rented or bought a video; it had to be in the house and accessible, and all the adults had to be gone from the house so you could hurry up and watch a few minutes on the family’s one TV with a VCR. You needed to build in viewing time along with rewind time to hide your tracks.
Now…parents just leave the room for a few minutes and a willing kid with a couple of clicks could be watching something far beyond the most hardcore thing I saw as a teen.
But it's been illegal to peddle porn to minors for much longer than it's been illegal to peddle social media, so it's a good proxy for how effective our current efforts are.
The approximate substitute-good for porn is actual sex, which parents generally stop teens from doing. The substitute-good for social media is talking to people in person, which parents are generally happy with.
It's been illegal to sell porn to minors since approximately forever. If that is constitutional (not saying it is, but I'd be surprised if it wasn't since it's such an established practice), then I don't see how requiring age verification on porn sites wouldn't be. Requiring health warnings might be another matter, though. Not sure about that.
I am someone who is very privacy focused. I've literally never had a social media account on any platform and I'm 42. From day one of facebook, I never wanted my information online. Like many here, I'm deeply concerned about privacy and surveillance.
In real life, we think age verification is a good thing. Kids shouldn't buy porn. Teenagers shouldn't get into bars. etc... There has to be room somewhere for reasonable discussion about making sure children do not have access to things they shouldn't. I think it's important to note, that complete dismissal of this idea only turns away your allies and hurts our cause in the long run.
> In real life, we think age verification is a good thing. Kids shouldn't buy porn. Teenagers shouldn't get into bars. etc...
These are not equivalent, I don't have to scan my face, upload my ID and share my personal biometric data with various 3rd parties, who will sell and leak my data, every time I want to look at porn or sip a beer.
Also, there are countries where teenagers can drink and go to pubs, and society hasn't crumbled. We also have several generations of young adults with access to porn, and the sky didn't fall.
Maybe we shouldn't use the government to implement a "papers, please" process just to use and post on the internet, maybe we should instead legislate the root cause of the problem: algorithmic optimization and manipulation. That way everyone benefits, not just kids, and we won't have to scan our faces to look at memes on Reddit.
In real life the situation is different. When I buy alcohol, someone looks at my driver's licence, does not make a copy of it, forgets it quickly, and cannot tie it to other information about me. As soon as it's online and it's copied, I can't tell what happens on anyone else's servers. I don't want any company knowing my actual name and location; that can then be tied to more data, which is what Google etc. have been trying to do for years, and this would just completely fast-track it. I would in theory be fine with something where it never leaves my computer, but that is obviously impossible.
A lot of the proposals don't involve you sending your driver's license or "other information" to anyone. The site in question asks you to verify with a trusted third party (usually a government entity), and that trusted third party only provides them with the end result of the validation.
> which is what Google etc have been trying to do for years but this would just completely fast track that.
Excuse me? They have done that for years. There's nothing to "fast track" here. Big Tech already implemented surveillance.
How many of those proposals do not have a government-mandated app as a spider in the middle of the web, which is aware of all the apps and websites you try to visit which ask for validation?
In the online world you can’t make sure of anything. Florida, for instance, requires age verification for porn sites. Guess how many mainstream sites not based in the US are completely ignoring the law, and guess how many others are easily accessible via a VPN? If you guessed the two together cover less than 100% of sites, you would be wrong - and even that is tilted toward sites that just ignored it.
The one thing you can control is your child's access through their device using parental controls.
I can absolutely guarantee you that any teenager can easily get access to weed, cigarettes and alcohol despite the laws and definitely can use a VPN. It only takes one smart kid to show them how.
> I can absolutely guarantee you that any teenager can easily get access to weed, cigarettes and alcohol
Is your argument then that we shouldn't age gate those things in reality either? Would you suggest that teenagers would smoke and drink just as much as they do now had it been legal to sell to minors?
Laws don't just exist to stop you, they also exist to shape society. They exist as signals for what we deem appropriate behavior.
So we make meaningless laws that are inconsistently enforced? What do you think happens when little Johnny is caught with weed in his car in a 95% White high-income school district vs little Jerome in a 95% Black school district?
Also how much “shaping of society” do you expect to happen when you pass a law that no one respects?
How many kids do you think a law is going to stop from going to the porn sites that completely ignored the law?
How many kids say “I really want to smoke weed but it’s illegally so I won’t do it”?
I'm not dismissing that idea. It is a perfectly reasonable thing to think about, part of why we have age verification techniques that already work well in critical places like online vape shops.
I'm even willing to talk about the possibility that we could use more robust systems deployed more broadly. A lot of folks here are talking about ZKPs in this regard, and that's not a bad idea at all.
The issue I'm trying to sound the horn on is that the current push for AV in the US and EU has nothing to do with kids. I think you could put together a working group on ZKPs and Age Verification, write up a paper and run experiments, and when you bring it to the lawmakers they're gonna say something to the tune of:
"yeah but that's not trustworthy enough and too technical for people to understand so we're just going to serve legal notices to VPN providers instead to tell them that they can't anymore"
...or something to that tune. I'm not a mind reader, I've just read the reports (by lawmakers) mentioning VPNs as an "area of concern".
This is a political gambit and not a new one. The more we treat the current issue as having anything to do with protecting kids the more we legitimize what is an obvious grift.
> The issue I'm trying to sound the horn on is that the current push for AV in the US and EU has nothing to do with kids. I think you could put together a working group on ZKPs and Age Verification, write up a paper and run experiments, and when you bring it to the lawmakers they're gonna say something to the tune of:
The EU is currently running large-scale field trials of the EU Digital Identity Wallet, which it has been working on for several years. It uses ZKPs for age verification. They expect to roll it out to the public near the end of 2026.
I appreciate the mention - I had not yet heard of this EU DIW thing. That said, I can't find any resources on it that mention the use of ZKPs. Could you share a link?
> The issue I'm trying to sound the horn on is that the current push for AV in the US and EU has nothing to do with kids.
> i had not yet heard of this EU DIW thing
That's a joke of a comment chain. You're "sounding the horn" on something you have barely even researched? You're spreading misinformation and fear mongering without even reading any of the proposals?
The other guy was gallant, I'm not. That's clown behavior.
Ya got me. Nevermind that the DSA (which I have read, in part) and the DIW (new to me) are different things, and that one does not mention the other [0]. Also the DSA is happening now while the Wallet thing isn't rolled out.
There are actual discussions about VPN regulation in relation to AV in the US [1]. The UK's OSA [2] is blatant about the need to violate encryption. Australia's OSA [3] has also come under criticism for precisely the things I'm talking about. Is it a stretch to extend this reasoning to the EU's incredibly similar legislation? Honk my nose if you must but I don't think so.
Here's the thing - I don't want you to listen to me, or anyone else on the internet, as an 'expert'. Verify your information personally, even when you trust it.
I think equating online and real life is a massive mistake. When you go into a grocery store you are constantly on CCTV. Does that mean that when you shop on Amazon, Amazon recording you via webcam should be acceptable? Obviously not. The restrictions in real life are temporary. If you try to buy porn, go into a bar, etc., you are asked for ID, and they look at it and hand it back. They don't take your ID and your picture, store them forever, and then sell information about you to other people.
The concern about children is aimed at the wrong target. Instead of targeting everyone it would make far more sense to target the platforms. With Roblox having a pedo problem the company should face punishment. That will actually get them to change their ways. However all these massive platforms are major donors to politicians so the chance of that happening is low to none.
> They don't take your ID, your picture and store it forever and then sell information about you to other people.
It would not surprise me in the least if there are brick-and-mortar businesses doing this, especially larger companies in jurisdictions (such as the majority of the United States) with weak/nonexistent privacy protections.
They don't need to. If you bought something with a card they just store that - let the data brokerage handle connecting it with actual ID cards and other elements of your identity.
But yeah, walmart is for sure logging their transactions and selling the data. It's practically free money.
> In real life, we think age verification is a good thing.
Ok. In real life, do we think having agents from the government and corporations following you everywhere, writing down your every move and word, is a good thing? Or rather, what kind of crime would one have to have committed, so that they would only be allowed out in public with surveillance agents trailing them everywhere?
If HN is social media, then so are PHPBB, NNTP, BBS, etc. and the term loses its semantic relevance.
My heuristic is that social media focuses on particular people, regardless of what they're talking about. In contrast, forums (like HN) focus on a particular topic, regardless of who's talking about it.
Doesn't matter what you want it to mean. What matters is what those in power want it to mean. It's very easy to stretch the definition to cover all sites where people can post content for strangers to see, or stretch it even wider to all digital media where people can interact with a social group.
AFAIK nobody here is. The point is that with relevance to the current discussion on potential future age-verification laws, only the widest definition matters, because that's what's at risk.
Age verification is absolutely about kids. It’s also being used as (or hijacked into) a vehicle by people who want increased surveillance.
There is a ton of evidence that there are harms to unrestricted online access for kids and teens (the book The Anxious Generation is a cultural touchstone for this topic at this point). There is a real, well-reasoned, and valid movement to do something about this problem.
The solutions proposed aren’t always well targeted and are often hijacked by the pro-surveillance movement, but it’s important to call out that these solutions aren’t well targeted instead of declaring the age verification push isn’t addressing a real problem and constituency.
As many others have mentioned in this thread and others, there are ways - effective and straightforward ways - that we could be protecting our kids from the harms that come with the www.
The harms are real. The solution is a Surveillance Wolf wearing a dead Save The Kids Sheep(tm).
Solutions that might work - RTA headers [0]. More robust parental controls. Not this reimagining of the rules of the internet in service of a fairly vague and ineffective goal. It's like the whole AV concept was designed not to work in the current context at all - almost as if that was the point.
Perhaps I'm going a little out on a limb. I don't think I am - but quick, tell me you need to know where I'm dialing from without asking me where I'm dialing in from.
Unfortunately The Anxious Generation is a very well-written house of cards built on questionable studies [1] and its success is simply a reflection of the fact it capitalizes on the trendiest moral panic of our times.
Social media is akin to violent video games in the 2000s, TV addiction in the '90s, satanic heavy metal in the '80s, and even 'bicycle face' in the 1890s bicycle craze.
Jonathan Haidt seems extremely earnest and thoughtful, but unfortunately being lovingly catapulted to fame as the guy who affirms everyone's gut reaction to change (moral panic) makes it extremely difficult financially, emotionally, and socially for him to steelman the opposite side of that thing.
Even if he hadn't compiled a bunch of suspect research from pre-2010 to make his claims, the field of Psychology is at the center of the replication crisis and is objectively its worst offender. Psychology studies published in prestigious academic journals have been found to replicate only 36% of the time. [2]
I feel like the EFF has stretched a bit far on this one. They need to be advocating for good solutions, not portraying age verification as fundamentally about surveillance and censorship.
As many are pointing out zero knowledge proofs exist and resolve most of the issues they are referring to. And it doesn't have to be complex. A government (or bank, or anybody that has an actual reason to know your identity) provided service that mints a verifiable one time code the user can plug into a web site is very simple and probably sufficient. Pretty standard PKI can do it.
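A sketch of that mint-and-redeem flow, with HMAC standing in for a real PKI signature (in practice the site would verify the code with the service's public key rather than sharing a secret, and all names here are illustrative):

```python
import hashlib
import hmac
import secrets
import time

SERVICE_KEY = secrets.token_bytes(32)  # held by the issuing service
_used = set()  # nonces already redeemed (would be a shared store in practice)

def mint_code():
    """Issuing service: mint a one-time 'over 18' code. The code carries
    no identity, only the attribute, a nonce, and a short expiry."""
    nonce = secrets.token_hex(8)
    expiry = int(time.time()) + 300
    payload = f"over18|{nonce}|{expiry}"
    tag = hmac.new(SERVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{payload}|{tag}"

def redeem(code):
    """Website: check the tag, the expiry, and burn the nonce so the
    code cannot be replayed."""
    payload, _, tag = code.rpartition("|")
    expected = hmac.new(SERVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(tag, expected):
        return False
    _, nonce, expiry = payload.split("|")
    if int(expiry) < time.time() or nonce in _used:
        return False
    _used.add(nonce)
    return True

code = mint_code()
assert redeem(code)      # first use succeeds
assert not redeem(code)  # replay is rejected
print("one-time over-18 code verified exactly once")
```

Because the code contains only the attribute, the site learns "over 18" and nothing else, and the issuing service never learns which site the code was redeemed at (unless, as discussed elsewhere in the thread, the two collude).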
The real battle to be lost here is that uploading actual identity to random web sites becomes normalised. Or worse, governments have to know what web sites you are going to. That's what needs to be fought against.
The portrayals in this debate are overwhelmingly dichotomous, which gives me pause, because there are entities who benefit from both sides of this debate, but neither would benefit from a sensible privacy-preserving solution.
So instead of advocating for those sensible and workable solutions, the discussions are always centred on either blocking any attempt at reform while hyperventilating about vague authoritarianism or a similarly vague need to protect the innocent.
Meanwhile in the world of smartphone data providers, social media networks, and the meta/googles of the world: they all know your personal information and identity up to the wazoo - and have far more information on every one of you than what is possessed by your own governments (well except for the governments that are also buying up that data.)
So let me be clear, the gate is open, the horse has bolted - recapturing your privacy is where attention should be focused in this debate... even if it's bad for shareholders.
> Meanwhile in the world of smartphone data providers, social media networks, and the meta/googles of the world: they all know your personal information and identity up to the wazoo - and have far more information on every one of you than what is possessed by your own governments (well except for the governments that are also buying up that data.)
This is where I'm concerned too. We are seeing a proliferation of third party verification services that I have to interact with and that have no real obligations to citizens, because their customer is the website.
I'd like to see governments step in as semi-trusted third parties to provide primitives that allow us to bootstrap some sort of anonymous verification system. By semi-trusted, I mean trusted to provide attestations like "This person is a US citizen over the age of 18" but not necessarily trusted with an access log of all our websites.
> They need to be advocating for good solutions, not
No, fighting back against horrible proposals does not require suggesting an alternative proposal to the alleged problem. That only serves to benefit the malicious actors proposing the bad thing in the first place, in the hope that we'll settle on something Not As Bad.
Thank god for the EFF and their everlasting fight to stop these nonsense internet laws. I'm glad they don't waste their time on "well how about this" solutions. The middle ground will never be enough for the proponents of surveillance, and will always be an incremental loss for the victims.
Yep, this is the first time I've disagreed with the EFF on anything civil-liberties related.
My view is that there's no reason why we can't come together and come up with a rating system for websites (through HTTP headers; there are already a couple of proposals, such as the RTA header and another from the W3C).
Once a website just sends a header saying this is adult only content, what YOU as a user do with it is up to you. You could restrict it at the OS level (which is another thing we ALREADY have).
This would match the current system, which allows households to set their devices to block whatever they want, and the devices get metadata from the content producers.
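As a rough sketch of how that self-labeling could plug into client-side controls: the RTA label string below is the real de facto adult-content label, but the fetch and filter functions are hypothetical stand-ins for an OS- or browser-level parental control.

```python
# Hypothetical sketch of header-based self-labeling. The RTA label value is
# the real de facto string; everything else here is illustrative.

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"   # de facto adult-content label

def fetch_headers(site):
    """Stand-in for an HTTP request; returns the site's response headers."""
    demo = {
        "adult.example":  {"Rating": RTA_LABEL},
        "school.example": {},
    }
    return demo.get(site, {})

def allowed_for_child(site):
    """Client-side parental control: block anything carrying the RTA label."""
    return fetch_headers(site).get("Rating") != RTA_LABEL

print(allowed_for_child("adult.example"))   # False: blocked by the filter
print(allowed_for_child("school.example"))  # True: no adult label present
```

The key property is that the decision happens entirely on the user's device; the site never learns anything about who is asking.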
What good solutions are there that prevent the age verification service and the website from comparing notes (because Big Brother told them to) and figuring out who you are and what you're doing?
If they voluntarily collude then yes, you can't avoid that. It's like third party cookies - once two parties collude it's game over. But that just outlines a situation where the user's chosen trusted service is hostile to their interests and they need to find one that isn't.
If Big Brother starts mandating the collusion - then yes, there's a hill to die on. But in some ways that's the point here. There are hills to die on - this just isn't it. And if you pick the wrong hill then you already died so you are losing the ones that really mattered. If the EFF pointed out to everyone that there is a privacy preserving answer to the core issue that is driving this, they could then mount a strong defense for the part that is truly problematic, since it isn't actually required to solve the problem.
This is only hypothetical for government ID's, but in theory government IDs could provide pairwise pseudonymous identifiers with services. Your ID with a single service is stable, but it is different with each service.
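One way pairwise pseudonymous identifiers could be derived is with an HMAC over the service's domain, keyed by a user-held secret: the ID is stable per (user, service) pair but differs across services, so two services cannot join their records on it. The derivation scheme here is an assumption for illustration, not any government's actual design.

```python
# Hypothetical pairwise pseudonymous ID derivation: stable per service,
# unlinkable across services. The secret-issuance step is assumed.
import hmac
import hashlib

def pairwise_id(user_secret: bytes, service_domain: str) -> str:
    """Derive a service-scoped identifier from a user-held secret."""
    return hmac.new(user_secret, service_domain.encode(),
                    hashlib.sha256).hexdigest()[:16]

secret = b"issued-to-this-user-by-the-id-scheme"
a1 = pairwise_id(secret, "forum.example")
a2 = pairwise_id(secret, "forum.example")
b  = pairwise_id(secret, "shop.example")
print(a1 == a2, a1 == b)   # True False: stable per service, unlinkable across
```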
The reality is that even countries that have digital IDs like Belgium which would be 1 of the many requirements of implementing such a zero-knowledge system are pushing for surveillance heavy legislation right now.
Once a system is in place that infringes on rights nobody will modify it to give citizens more rights.
I'd be OK with an "I am a child" header mandated by law to be respected by service providers (e.g., "adult sites" must not permit a client setting the header to proceed). On the client side, mandate that consumer devices that might reasonably be expected to be used by children (every smartphone, tablet, smart TV, etc.) have parental controls that set the header. Leave it to parents to set the controls. Perhaps even hold parents culpable for not doing so, as a minimum supervision requirement, just as one may hold parents culpable for neglecting their children in other ways.
Forcing providers to divine the age of the user, or requiring an adult's identity to verify that they are not a child, is backwards, for all the reasons pointed out. But that's not the only way to "protect the children". Relying on a very minimal level of parental supervision of device use should be fine; we already expect far more than that in non-technology areas.
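A minimal sketch of the server side of that idea, assuming a hypothetical `Child-User` request header set by the device's parental controls (the header name and response shape are made up for illustration):

```python
# Hypothetical server-side handling of a device-set child flag. The law, in
# this proposal, would oblige adult sites to honor it; the names are assumed.

def serve_adult_content(request_headers):
    """An adult site refusing requests that carry the child flag."""
    if request_headers.get("Child-User") == "1":
        return 451, "Content unavailable to flagged child devices"
    return 200, "<adult content>"

status, _ = serve_adult_content({"Child-User": "1"})
print(status)  # 451: request came from a device with parental controls on
```

Note the inversion relative to ID upload: the server learns only a single boolean about the requester, and only when the flag is set.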
A server header exists to say something is adult and could be used for user-generated content as well. [1] It just needs legislation and an afternoon from interns at assorted companies. It's not perfect (nothing is), but it could easily trigger existing parental controls, and parental controls could be added back into user agents. No third parties required. I think I've beat this horse into dust [2], so I should just hire kvetchers to politely remind congress at this point.
I like the first part of the idea, which is the header. Heck, even enable it by default. As long as the tracking of the toggle isn't a thing, it's a perfect compromise. While we're at it, respecting Do Not Track headers would also be nice.
This completely leaves it up to the families / parents to control and gives some level of compliance to make the effort worth while.
There may even be a way to generate enough noise with the request to prevent any form of tracking. This sort of thing should really be isolated that way, to prevent potential abuse by data brokers selling the information.
> As long as the tracking of the toggle isn't a thing, it's a perfect compromise.
This concept does not involve any tracking if implemented as designed. The user agent detects the RTA header and triggers parental controls if enabled. Many sites already voluntarily self label. [1] Careful how far one drills down as these sites are NSFW and some may be malicious.
If we must do something like this, I think a good solution would be an optional server header that describes the types of objectionable content that may be present (including “none”). Browsers on child devices from mainstream vendors would refuse to display any “unrated” resources without the header, and would block any resources that parents deem age-inappropriate, with strict but fair default settings that can be overridden. Adult browsers would be unaffected. Legislatures could attempt to craft laws against intentionally miscategorized sites, as doing this would be intentionally targeting kids with adult content.
There is no perfect solution that avoids destroying the internet, but this would be a pretty good solution that shelters kids from accidentally entering adult areas, and it doesn’t harm adult internet users. It also avoids sending out information about the user’s age since filtering happens on the client device.
It was derided as a "system for mass censorship", and got shot down. In hindsight a mistake, and it should have been implemented - it was completely voluntary by the user.
It’s close, but I see why it failed. There’s no need to include licensing/rights management in there. Also this was before pervasive HTTPS, so it would have been possible for governments and ISPs to snoop the info and possibly block it. If it could be limited to just content ratings, and kept private behind SSL, this isn’t a bad approach.
But this also needs some kind of guarantee that lawmakers won’t try to force it on FOSS projects that want to operate outside the system. And that companies like Google won’t use EEE to gradually expand this header into other areas and eventually cut off consenting adults who want to operate outside this system. I’m not sure if it is possible to get those guarantees.
I think the idea is that the manufacturers are culpable for making a parental restriction mode that's set-and-forget and not easily thwarted from inside the mode and parents are culpable for declining to set it.
Which I still don't love, but is at least more fair.
It could be added at the router? The child's computer could be identified and this header added, in a MITM situation... but, maybe that would be easy to defeat, by replacing the cert on the client? Not my area of expertise... really just asking...
There's no reason to hold the parents culpable. It would be up to the device manufacturer to ensure that this isn't possible on a system that has parental controls enabled. This is already a solved problem - see how MDM solutions do it, and see Apple's ban on alternative browsers.
It's not even necessary to block parents from giving their children Linux desktops or whatever. It'll largely solve the problem if parents are merely expected to enable parental controls on devices that have the capability.
My only gripe here is the idea of "perhaps hold the parents culpable." I'm not opposed to the idea, but what sucks is we are ultimately all paying the cost of it going wrong. The idea that we can shunt that away to a few irresponsible people is just demonstrably not the case.
Worse, it leads to situations where society seems to want to be flat-out kid-free in many ways, with families reportedly afraid to let their kids walk to and from school unsupervised.
I don't know an answer, mind. So this is where I have a gripe with no real answer. :(
Add to that, clearly those "bad parents" are the result of bad parenting in the first place, so really it's the grandparents that are to blame...
Wait, those grandparents also had bad models to work with, so really it's the great-grandparents that were to blame...
No, wait, it was the society that they grew up in that encouraged poor behaviour toward them, and forced them to react by taking on toxic behaviours. We all should pay because we all actively contribute to the world around us, and that includes being silent when we see bad things happening.
>Worse, it leads to situations where society seems to want to flat out be kid free in many ways. With families reportedly afraid to let their kids walk to and from school unsupervised.
Less the false dichotomy, and more the stickiness of each of those options. To your point (I think), those aren't the only options available, but people do seem to be attracted quite heavily to them.
I was referencing the towns that have called the cops because there were some unsupervised kids in a park. I comfort myself by saying this isn't nearly as common as the fear mongers online would have you think. That there are cases it happens still worries me.
Note that I'm not even necessarily worried about cops getting called. Quite the contrary, I am fine with the idea of cops having a more constant presence around parks and such. I do worry about people that get up in arms about how things are too unsafe for kids to be let outside. If that is the case, what can we do to make it safe?
> Perhaps even hold parents culpable for not doing so, as a minimum supervision requirement
Even the idea of prosecuting parents for allowing their child to access 'information,' no matter what that information is, just sounds like asking for 1984-style insanity.
A good rule of thumb when creating laws: imagine someone with opposite political views from yours applying said law at their discretion (because it will happen at some point!).
Another good question to ask yourself: is this really a severe enough problem that government needs to apply authoritarian control via its monopoly on violence to try to solve? Or is it just something I'm abstractly worried about because some pseudo-intellectuals are doing media tours to try to sell books by inciting moral panic?
As with every generation who is constantly worried about what "kids these days" are up to, it's highly highly likely the kids will be fine.
The worrying is a good instinct, but when it becomes an irrational media hysteria (the phase we're in for the millennial generation who've had kids and are becoming their parents), it creates perverse incentives and leads to dumb outcomes.
The truth is the young are more adaptable than the old. It's the adults we need to worry about.
> Even the idea of prosecuting parents for allowing their child to access 'information,' no matter what that information is, just sounds like asking for 1984-style insanity.
This assumes an absolutist approach to enforcement, which I did not advocate and is not a fundamental part of my proposed solution. In any case, the law already has to make a subjective decision in non-technology areas. It would be no different here. Courts would be able to consider the surrounding context, and over time set precedents for what does and does not cross the bar in a way that society considers acceptable.
But what if we didn't collectively spend billions of dollars and hundreds of thousands of hours battling with money, lobbyists, lawyers, judges and political campaigns over what is largely a moral panic?
What could humanity do instead with all that time and resources?
I know the US is a nation built by lawyers, for lawyers, but this is both its best strength and worst weakness. Sometimes it's in everyones best interest to accept the additional risks individually as opposed to bubble wrapping everything in legislation and expanding the scope of the corrupt lawyer-industrial complex.
Maybe the lawyers could use the extra time fixing something actually important like healthcare or education instead.
I am a Russian proxy site, I make requests for you without the header. I serve you the content because I don't care about following American laws.
Alternatively, just use an older browser that doesn't serve the header.
If anything, you'd want the reverse. A header that serves as a disclaimer saying "I'm an adult, you can serve me anything" and then the host would only serve if the browser sends that header. And you'd have to turn it on through the settings/parental controls.
Now, this doesn't handle the proxy situation. You could still have a proxy site that served the request with the header for you, but there's not much you can do about that regardless.
> 1) Given that it just says you're a "child", how does that work across jurisdictions where the adult age may not be 18?
It's a client-side flag saying "treat this request as coming from a child (whatever that means to you)". I don't follow what the jurisdiction concern is.
[EDIT] Oooooh you mean if a child is legally 18 where the server is, but 16 where the client is. But the header could be un-set for a 5-year-old, too, so I don't think that much matters. The idea would be to empower parents to set a policy that flags requests from their kids as coming from a child. If they fail to do that, I suppose that'd be on them.
The concern is that websites have no way to tell the actual age in this scenario, so you'd be potentially inconveniencing and/or blocking legitimate users (according to the server jurisdiction's rules).
It doesn't seem sufficient, and would probably lead to age verification laws anyway.
Say you're a parent, with child, living in country A where someone becomes an adult when they're 18. Once the child is 18, they'll use their own devices/browsers/whatever, and the flag is no longer set. But before that, the flag is set.
Now it doesn't matter that in country B or country C the age of becoming an adult is 15 or 30. Because the flag is set locally on the client's device, all they need to do is block requests with the flag, and assume it's faithful. Then other parents in country B or country C set/unset the flag on their devices when it's appropriate.
No need to tell actual ages, and a way for services to say "this is not for children", and parents are still responsible for their own children. Sounds actually pretty OK to me.
Except that if you're in country B, which has a law that says "you may not make information available to children that discloses that Santa Claus is made up," and the age of becoming an adult in your country is 18 -- knowing that a person accessing your site from country A is an adult in country A (which means, say, ≥ 16) is not sufficient to comply with the law.
I’m not sure why the age of majority in the region of the server would be relevant. The user is not traveling to that region, the laws protecting them should be the laws in their own region.
I don't know if "should" is intended as a moral statement or a regulatory statement, but it's not at all unusual for server operators to need to comply with laws in the country in which they are operating…
> 1) Given that it just says you're a "child", how does that work across jurisdictions where the adult age may not be 18?
So namespace it then. "I'm a child as defined by the $country_code government". It's no more of a challenge than what identity-based age verification already needs to do.
> 2) It seems like it could be abused by fingerprinters, ad services, and even hostile websites that want to show inappropriate content to children.
This is still strictly better than identity-based age verification. Hostile or illegal sites can already do this anyway. Adding a single boolean flag which a large proportion of users are expected to have set isn't adding any significant fingerprinting information.
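A quick check of that last claim: by Shannon's formula, a single boolean contributes at most one bit of fingerprinting entropy, and considerably less when most devices share the same value.

```python
# Entropy contributed by one boolean flag set with probability p:
# H(p) = -p*log2(p) - (1-p)*log2(1-p), capped at 1 bit.
import math

def entropy_bits(p):
    """Shannon entropy of a boolean flag set with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(round(entropy_bits(0.5), 3))   # 1.0 bit in the worst case
print(round(entropy_bits(0.2), 3))   # ~0.722 bits if 20% of devices set it
```

For comparison, browser fingerprints are typically estimated at well over ten bits of entropy, so one widely-shared flag changes little.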
Any time law-makers claim that a law is meant to protect children you can guarantee that the safety of children had almost nothing to do with it. This is all a push to normalize digital ID (to protect the children!); once normalized it will become mandatory.
I always ask myself who wins with these laws (well, any law really). so far, the only winner seems to be the government and data collectors. It seems these laws are intended to collect leverage in the long run.
I'd argue that this is negligible for data collectors and governments. Governments already know who you are and what sites you visit for 99.99% of the population. Data collectors already know who you are and have a pretty good idea of the sites you visit.
What unique information is this going to give the government and data collectors to abuse? Let's establish one case that both affects average people and is "bad", and not waste time discussing things that only affect a tiny minority of privacy-minded people.
Keep in mind the law states a platform must provide multiple ways to reasonably verify a user is older than 16. No mention of giving the specific user's age or requiring a government ID.
I'm just waiting for governments to start requiring OS makers to verify identity on consumer phone/laptop/console devices before you can use them.
After all, they can legitimately claim it solves much of the issues with other verification schemes - no need to trust third party sites or apps, lower risk of phishing, easier to implement internationally and with foreign nationals, etc.
Of course, the downside (for individuals) is it would take just one legal tweak or pressure from the government to destroy anonymity for good.
Why don't they use zero-knowledge proofs? Also, a question for the US constitution experts: is this considered a violation of free speech? The article is not clear on this.
"Free Speech" in the American legal sense (1st Amendment to the Constitution) applies to government prohibition on speech, with a particular emphasis on political speech.
It doesn't prevent one person from prohibiting speech... I can tell a pastor to stop preaching on my lawn. But, the government cannot tell a pastor not to preach in the publicly-owned town square (generally, there are exceptions).
There are arguments that certain online forums are effectively "town squares in the internet age" (Twitter in particular, at least pre-Musk). But, I always found that analogy to fall apart - twitter (or whatever online forum) is more like an op-ed section in a newspaper, IMO. And newspapers don't have to publish every op-ed that gets submitted.
Also, the 1st Amendment does not protect you from the consequences of your speech. I can call my boss an asshole to his face legally - and he can fire me (generally, there are labor protections and exceptions).
Some proposed implementations do this. Without the requirement there is no chance of your ID or age being leaked; with zero-knowledge proofs, there is a chance they leak, but it can be made small, potentially arbitrarily so. Other implementations come with larger risks.
Zero-knowledge proofs are either trivially defeated by re-using the same credentials or don't have useful privacy guarantees. There really isn't an in-between here for something like age verification.
The idea is that e.g. the government would give you an app that lives on your phone. When you apply for the app you provide some documents to prove your age, but you don't say anything about what sites you plan to visit. When you want to visit an age-restricted site you use the app to generate a proof that you have it, but the site doesn't learn anything more than that, and the government doesn't learn that you used the app.
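The unlinkability property described above can be illustrated with a toy RSA blind signature: the government signs an age token without seeing it, so the (token, signature) pair a site later verifies cannot be linked back to the issuance session. This is a simplification of what real schemes (ZKPs, BBS+ credentials) provide; the tiny hardcoded primes and the whole flow are for illustration only.

```python
# Toy RSA blind-signature sketch of an unlinkable age token.
# Illustrative only: real systems use vetted crypto and far larger keys.
import random
from math import gcd

# --- "Government" key setup (tiny hardcoded primes: NOT secure) ---
p, q = 1_000_003, 1_000_033
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)                  # private signing exponent

# --- User: blind a random token before asking for a signature ---
token = random.randrange(2, n)       # the value the site will later see
while True:
    r = random.randrange(2, n)
    if gcd(r, n) == 1:
        break
blinded = (token * pow(r, e, n)) % n # government never sees `token` itself

# --- Government: checks the user's age out-of-band, then signs blindly ---
blind_sig = pow(blinded, d, n)

# --- User: unblind; (token, sig) is unlinkable to the signing session ---
sig = (blind_sig * pow(r, -1, n)) % n

# --- Website: verifies the signature without learning who the user is ---
print("age token verified:", pow(sig, e, n) == token)
```

The site learns only "the government vouched for this bearer's age", and the government learns nothing about where the token was spent.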
It's funny because the same "perfect is the enemy of good" argument is used both to criticize age verification in the first place (why bother if it isn't perfect) but then also to dismiss proposals to implement it better (why bother if they don't perfectly fix the problem).
No. It's mostly that the proposed age verification schemes have fundamental problems that disqualify them from being considered "good" and none of the "better" implementations fix those problems at all.
Age verification in general is not intended to defend against people lying or using stolen credentials. If you’re 13 but know the password to your dead grandpa’s account and the website in question has no idea he’s dead, there’s no way to defend against that, with or without a ZKP.
What the ZKP does is let you limit the information the site collects to the fact that you are over 18, and nothing else. It's an application of the principle of least privilege. It lets you give the website that one fact without revealing your name, birthdate, address, browsing history, and all your other private data.
What prevents one kid in a friend group or in a school from sharing the same identifier?
After all - if it doesn't share anything other than a guarantee of the "age" of someone who is authenticating with the website then how would the website know there's re-use of identifiers?
- If I can do a zero knowledge proof once per day against someone who is under age, I can eventually determine their birthday.
- If I can do a zero knowledge proof with an arbitrary age, I can eventually determine anyone's birthday.
- If the only time people need to verify their age is to visit some site that they'd rather not anyone know they visit and that requires showing identity - even if it's 100% secure, a good share of people will balk simply because they do not believe it is secure or creating a chilling effect on speech.
- If the site that verifies identity is only required for porn, then it has a list of every single person who views porn. If the site that verifies identity is contacted every time age has to be re-registered, then it knows how often people view porn.
- If the site that verifies identity is a simple website and the population has been trained that uploading identity documents is totally normal, then you open yourself up to phishing attacks.
- If the site that verifies identity is not secure or keeps records, then anyone can have the list (via subpoena or hacking).
- If the protocol ever exchanges any unique identifier from the site that verifies your identity and the site that verifies identity keeps records, then one may piece together, via subpoena (or government espionage, hacking) every site you visit.
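The first two bullets above describe a binary-search attack: if a verifier can repeatedly ask "is this user older than X?" for thresholds of its choosing, about seven queries pin down an exact age in a 0-127 range, even though each individual proof reveals only one bit. The function names here are hypothetical.

```python
# Sketch of the threshold-query attack described above. Each query leaks
# one bit; log2(range) queries recover the exact age.

def make_oracle(true_age):
    """Simulates a ZKP age-check endpoint answering threshold queries."""
    return lambda threshold: true_age >= threshold

def recover_age(oracle, lo=0, hi=128):
    """Binary-search the largest threshold the oracle still satisfies."""
    queries = 0
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(mid):
            lo = mid
        else:
            hi = mid
    return lo, queries

age, n_queries = recover_age(make_oracle(34))
print(age, n_queries)   # 34 7: exact age recovered in 7 threshold queries
```

This is why schemes that only ever answer a single fixed question ("over 18: yes/no") leak far less than ones supporting arbitrary thresholds.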
Frankly, the fact that everyone promoting these systems hasn't admitted there are any potential security risks should be like an air raid siren going off in people's heads.
And at the end of all of this, none of it will prevent a child from getting access. Between VPNs, sharing accounts, getting older siblings/friends to do age verification for them, sites in jurisdictions that simply don't care, the darkweb, copying the token/cert/whatever from someone else, proxying age verification requests to an older sibling/rando, etc., there are way, way too many ways around it.
So one must ask, why does taking all this risk for so little reward make any sense?
(IANAL) That demonstrates the opposite: that's a voluntary system with no force of law behind it—the private sector "self-regulating" itself, if you will.
The film rating systems were created under threat of legislation in the first half of the 20th century (so, in lieu of actual legislation). The transformative 1st Amendment rulings of the Warren Court would have made such laws unconstitutional after the 1960's, but the dynamic that created these codes predates that—predates the modern judicial interpretation of the 1st Amendment.
Because safeguarding user privacy is not a goal. Scoring political points with "think of the children" agendas, while getting kickbacks from companies salivating at the opportunity to gather even more personal data, is.
OnlyFans is legal prostitution, so we need to protect that. Better to regulate the entire internet and take away your rights than to question why it's allowed.
No, it's legal (in some jurisdictions) pornography. Prostitution on the platform, whatever its legal status in the jurisdictions involved, is also, from what I understand, explicitly against the platform's ToS.
I will say that it's a weird legal distinction in many states that paying someone to have sex is illegal unlessss.... you record it and sell the recording. Then it's legal.
Way to split hairs. Something being against the ToS can still be legal.
Prostitution obviously cannot physically happen on an online platform, but it sure is a convenient way to advertise and attract customers, and serve as the payment processor.
> Way to split hairs. Something being against the ToS can still be legal.
Well, no, violating a binding legal agreement is illegal.
> Prostitution obviously cannot physically happen on an online platform, but it sure is a convenient way to advertise and attract customers, and serve as the payment processor.
Which is explicitly prohibited by the law in many places OF operates, and judging from the number of people who are creators on the platform I've seen complaining about people jeopardizing their status with the platform by soliciting it on the platform, also by the actively-enforced terms of the platform. OF is simply not "legal prostitution", and it is ridiculous to describe it that way.
> Well, no, violating a binding legal agreement is illegal.
Not touching the rest of this thread's arguments, but that isn't really true. Breaking ToS, or any other contract, is not "illegal"-- it's not a crime. It opens you up to civil (not criminal) penalties if the other party sues, but that's it.
What they should do instead is invest in technology that can do age verification while protecting privacy. This is obviously a required piece of technology. It is not acceptable for children to grow up on the Internet and easily access pornography by simply going to a website. Imagine letting your children loose in a city where they can wander in and out of peep shows without friction.
While the "required piece of technology" aspect is debatable, there is certainly enough demand for it that it is going to happen in one way or another.
So I agree that instead of fighting some change that I think is inevitable, they should make it so that it works in the most privacy-conscious way possible. And I mean with real technical solutions, like an open-source app or browser extension you can download, a proof-of-concept server for age verification, etc... using the best crypto has to offer.
Not to mention people lose accounts because someone reported them as underage, and now they don't want to fully dox themselves over this. Who can blame them, considering Discord's own support ticket system was hacked, exposing people who had had to validate their age.
I would be happy if we just moved to a way we could more realistically enable audits of information flow in our lives. I don't, necessarily, want to restrict my kids consumptions. It does worry me that I don't know how to teach them to audit all of the information that is being exposed to them. Or worse, collected about them.
I'm not entirely sure what you mean by 'audit', but teach them critical thinking, and show them the strategies the media uses to manipulate them. Teach them there's often more than one side to a story.
Things like this will give them a huge advantage in not being manipulated and lied to.
To explain it with a budgeting analogy: you can forward-plan what you will spend money on, but you also need to be able to see where all of your money went. This is nigh impossible with data flow nowadays.
I'd be comfortable with it having large segments of "uncategorized." But right now, if I scan over to my ISP to see how much data I have used for the month, I have little to no help in saying how much of that was what.
Ah okay. I think this would probably be pretty tricky, security-wise, no? One of my first thoughts that might help would be writing a simple tool that parses history from your browsers to categorize it. Other than that, there are things like https://activitywatch.net/ (which seems to have a desktop and Android version)
Yeah, just writing out the idea, I would imagine I should be able to see a lot of this with my router?
Again, I get that that will be a lot I have to write off as "uncategorized." I'm not even trying to drive all telemetry down to zero. I'm comfortable knowing that my HVAC may send diagnostic stuff in, as an example. But it seems kind of crazy to me that this is not something that is often discussed? Do I just miss those discussions?
Generally speaking, I share the HN consensus on age verification laws. But there is a real problem with kids' unfettered internet access. Just think about all the adults who are hopelessly addicted to social media. The negative effects are amplified when it comes to developing minds.
My SO has been teaching for nearly 20 years now, and mental health in kids has fallen off a cliff in the last two decades. I could fill this page with online bullying stories. Some of which, are especially cruel. Half her students are on medication for anxiety. It's out of control, honestly.
That said, I don't know how to solve it. It's easy to put this on the parents, but that's not the answer. Otherwise, it would be solved already. Some don't care. Some don't have the time to care because they're trying to keep the lights on, and dinner on the table. And, some simply think it doesn't apply to them or their children. Parents on HN are hyper-aware of this sort of thing, but that's definitely the minority.
I know a family that would be most folks' least-likely candidate for something bad to happen online. Single income, relatively well off, the parent at home has an eye on the kids 24/7. And, if you met the kids, you would most likely qualify them as "good kids". Without going into detail, their life was turned upside down because one of the kids was "joking around" online.
Again, I don't know what the answer to the problem is. Clearly, age verification laws are a veiled attempt to both collect and control data. And, EFF's emphasis on advertising restrictions as a solution, seems off the mark. There's more to it than that. Idk, this shit makes me want to log off permanently, and pretend it's 1992.
Parents have complete control of the Chat/Porn server and since the orgasmatron necessarily has all your desires stored in its LLM (Large Lust Model) it trivially knows your age and will lock you out.
And internet porn can be banned regardless of age. (that's only half sarcastically said).
Demand for home Large Lust Models and orgasmatrons will soar. You heard it here first. Opportunity for entrepreneurs. And these home-based products are the only way to keep porn away from kids (if parents don't care now, they never will) and to maintain privacy on the internet.
Every place where I've worked in I.T., the rule was "No porn downloading at work. Porn belongs in the home." (especially in the days of slow home modems)
And to be really enforceable, all offshore sites would have to agree to the scheme, including certain Russian ones who are glad to pollute our children's and adults' minds with porn, propaganda and conspiracy theories.
Lastly: There always was and will be media. Micro-SD cards now? If not phones, thrift store picture frames and RPi's. "Porn finds a way."
Is the EFF captured? This is a resource against misguided laws but what's a law they'd actually approve of? This entire resource is boring defense of the status quo.
I wonder what psychological effect having little or no privacy would have on people. Are we all going to become paranoid schizophrenics? How would a world of paranoid schizophrenics work? How insane would world events be from that point on?
Paranoid, maybe. Schizophrenic? No. Firstly, "paranoid schizophrenia" is an outdated diagnosis. Paranoia is a common symptom of schizophrenia, but schizophrenics exhibiting paranoia are not considered to have a separate mental illness from those who don't. Secondly, schizophrenia is not caused simply by psychological stress, and it is associated with a large cluster of positive and negative symptoms, paranoia being only one of them.
China is an example of this. Somewhere that, according to the UN's data, executed "undesirable" people with such gusto that it incidentally drove the organ donor waitlist time so low that it couldn't be explained by any other factor.
"Perfect" security is only attainable with zero dissent, zero individuality, zero privacy, and zero freedom.
Asking for a year of birth is the best solution and always will be. Once kids are old enough to figure that out, you're not going to stop them from much.
> we must fight back to protect the internet that we know and love.
This is not compelling. The internet I know and love has been dying for a long time for unrelated reasons. The new internet that is replacing that one is an internet that I very much do not love and would be totally ok to see lots of it get harder to access.
The parts where traffic generates money for the kind of people who would think putting an advertisement on a screen on someone's home refrigerator is an acceptable thing to do (morally, not legally or whatever).
"SAN FRANCISCO-With ill-advised and dangerous age verification laws proliferating across the United States and around the world, creating surveillance and censorship regimes that will be used to harm both youth and adults, the Electronic Frontier Foundation has launched a new resource hub that will sort through the mess and help"
The surveillance and censorship system is built, administered and maintained by Silicon Valley companies who have adopted this as their "business model": "monetising" surveillance of other people's noncommercial internet use
These Silicon Valley companies have been surveilling internet subscribers for over a decade, relentlessly connecting online identity to offline identity, hell bent on knowing who is accessing what webpage on what website, where they live, what they are interested in, and so on, building detailed advertising profiles (including the age of the ad target) tied to IP addresses, then selling the subscribers out to advertisers and collecting obscene profits (and killing media organisations that hire journalists in the process)
Now these companies are being forced to share some of the data they collect and store
Gosh, who would have foreseen such an outcome
These laws are targeting the Silicon Valley companies, not internet subscribers
But the companies want to spin it as an attack on subscribers
The truth is the companies have been attacking subscriber privacy and attempting to gatekeep internet publication^1 for over a decade, in the name of advertising and obscene profits
1. Discourage subscribers from publishing websites and encourage them to create pages on the company's website instead. Centralise internet publication, collect data, perform surveillance and serve advertisements
I am disappointed to find no mention of zero-knowledge proofs or any other indication that we won't have to trust anyone with this task.
We have the technology to do age verification without revealing any more information to the site and without the verification authority finding out what sites we are browsing. However, most people are ignorant of it.
If we don't push for the use of privacy-preserving technology we won't get it, and we will get more tracking. You cannot defeat age verification on the internet; age verification is already a feature of our culture. The only way out is to ensure that privacy-preserving technologies are mandated.
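To make "we have the technology" concrete: one classic building block for unlinkable attestations is the blind signature. Here is a toy sketch in Python using textbook RSA with tiny, insecure keys (the key values, the `attribute:over-18` label, and the fixed blinding factor are all illustrative assumptions, not any real protocol). The issuer signs a blinded value, so it cannot later link the finished token to the issuance session, and a site verifying the token learns only the attribute, not the holder's identity.

```python
# Toy RSA blind signature sketch: an issuer attests "over 18" without being
# able to link the finished token back to the signing session.
# TOY KEYS ONLY - textbook RSA with tiny primes, for illustration.
import hashlib

n, e, d = 3233, 17, 2753  # n = 61*53; (e, n) public, d private to issuer

def h(msg: str) -> int:
    """Hash a message into the RSA modulus range."""
    return int.from_bytes(hashlib.sha256(msg.encode()).digest(), "big") % n

m = h("attribute:over-18")  # the bare attribute, no identity attached

# --- User blinds the value with a factor r coprime to n ---
r = 7  # fixed here for illustration; would be random in practice
blinded = (m * pow(r, e, n)) % n

# --- Issuer signs the blinded value; it sees only m * r^e mod n ---
sig_blinded = pow(blinded, d, n)

# --- User unblinds: (m^d * r) * r^-1 = m^d mod n ---
sig = (sig_blinded * pow(r, -1, n)) % n

# --- Any site verifies with the public key alone ---
assert pow(sig, e, n) == m
print("over-18 token verified; issuance session unlinkable")
```

The point of the construction is the unblinding step: the issuer only ever sees `blinded` and `sig_blinded`, so when `(m, sig)` later shows up at a site, the issuer cannot tell which signing session produced it. Real deployments (e.g. the batch-issuance and ZKP approaches mentioned above) are far more involved, but the privacy goal is the same.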
Realistically all but the largest sites are going to contract out age verification to third parties. There will probably be verification companies that will have a wide range of verifications.
I think, sadly, this is a lost battle in public opinion. And the gambling of digital assets on Roblox and other casino-like websites is also starting to get public attention, which will turn public opinion further.
The CNIL gave up 3 years ago and gave guidelines; you can read about it here [0]. At the time it read like "Oh well, we tried. We said it is incompatible with privacy and the GDPR multiple times, and we insist one more time that giving tools to parents is the only privacy-safe solution despite its obvious problems. But since your fucking law will pass, the best we can do is draw up guidelines and present solutions and how to implement them correctly".
I think the EFF should do the same. That's just how it is. Define solutions you'll agree with. Fight the fight on chat control and other issues where public opinion can be changed; this one is too late, and honestly, if it's done well, it might be fine.
If the first implementation is correct, we will have to fight to maintain the status quo, which in a conservative society is the easiest, especially when no other solution has been tested. If it's not, we will have to fight to make it correct, then fight to maintain it, and both are harder. The EFF should reluctantly agree and draft the technical solution themselves.
The net got too big; the 90% got in because of Facebook and Google, and automated bots took over from there.
Either we create the fix, or the feds take it over. We need to sever the idea of a global internet: per-country and allied nations only.
anonymous cert-chain verified ID stored on device. problem fixed.
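The "anonymous cert-chain verified ID" idea can be sketched in a few lines: a pinned root key certifies an issuer key, and the issuer signs a bare attribute that lives on the device, with no name or account attached. The sketch below uses textbook RSA with tiny toy keys; all key values and the `attribute:over-18` label are illustrative assumptions, not any deployed scheme.

```python
# Toy two-link certificate chain: root certifies issuer, issuer certifies an
# anonymous "over-18" attribute held on the device. TOY KEYS, illustration only.
import hashlib

def h(msg: str, n: int) -> int:
    """Hash a message into the given RSA modulus range."""
    return int.from_bytes(hashlib.sha256(msg.encode()).digest(), "big") % n

def sign(msg: str, d: int, n: int) -> int:
    return pow(h(msg, n), d, n)

def verify(msg: str, sig: int, e: int, n: int) -> bool:
    return pow(sig, e, n) == h(msg, n)

# Root of trust (pinned in every verifier) and an intermediate issuer
root_e, root_n, root_d = 17, 3233, 2753   # n = 61*53
iss_e, iss_n, iss_d = 5, 8633, 5069       # n = 89*97

# Link 1: root signs the issuer's public key
issuer_cert = sign(f"{iss_e}:{iss_n}", root_d, root_n)

# Link 2: issuer signs the bare attribute stored on the device -
# note there is no name, account, or device identifier in the message
token_sig = sign("attribute:over-18", iss_d, iss_n)

# Verifier walks the chain knowing only the root public key
assert verify(f"{iss_e}:{iss_n}", issuer_cert, root_e, root_n)
assert verify("attribute:over-18", token_sig, iss_e, iss_n)
print("chain valid: device proves the age attribute, not an identity")
```

Whether this actually "fixes the problem" depends on what isn't shown here: without blinding or ZKPs, a single reused token is itself a tracking cookie, which is why the batch-issuance and unlinkability goals quoted above matter.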
Online age verification is disingenuous and a pretext to give governments the hard-coded technical option to regulate speech and association.
There's a great game being played out by these users of force against the advocates of desire. Everything about the bureaucracies pushing digital ID is unwanted. This isn't about age verification tech; it's about illegitimate power for unwanted people who are actuated by forcing their will on others.
We should treat these actions with the open disgust they deserve.
In Switzerland you are forced to receive an SMS code to your phone on every portal in every public space everywhere to establish your identity on every network. No SMS = No public wifi anywhere in Switzerland.
That's a funny choice; I thought Europe was done with SMS. I can see this 1-to-1 mapping with other cellphone-derived messaging like WhatsApp being an issue for privacy, but it's certainly possible to have multiple phones.
I believe cyber cafes in India must verify identity via ID before allowing internet access and maintain logs, browsing history, etc. for at least one year.
I understand this is a technology forum, frequented mostly by liberal adults, who built a lot of their internet nous on the totally free internet of the 90s and 00s. I am one of them.
Equally, I think insisting that there must be no controls on internet access whatsoever is not right either. There is now plenty of evidence that, e.g., social media is very harmful to teenagers; and frankly, at one point going on FB got me depressed every time I did it, before I even noticed. And as a parent, you realise how little control you have over your children's tech access. Case in point: my kids seem to have access to very poorly locked-down iPads at school. I complained, but they frankly don't understand.
We all accept kids can't buy alcohol and cigarettes, even if that encroaches on their freedom. But of course, flashing an ID when you're over 18 is not very privacy-invading.
Likewise, I think it is much better to discuss better means of effecting these access controls. As some comments here mention, there are e.g. zero knowledge proofs.
I'm sure I'll be told it's all a sham to collect data and it's not about kids. And maybe it is. But I care about kids not having access to TikTok and Pornhub. So I'd rather make the laws better than moan about how terrible it is to limit access to porn and dopamine shots.