Hacker News | Levitz's comments

A whole lot of websites are inaccessible from my country when there's football on, due to a judicial order meant to curb piracy.

The whole deal with Chat Control is also not to be forgotten. I do think you guys see this place with rose tinted glasses sometimes.


Does that football scenario mean that the rule of law doesn't exist or that it does exist and is being enforced?

I agree with you that both of those laws are stupid, but that's a completely separate discussion to what I'm claiming above.


Depends on how you interpret the ECHR.

Does it allow blocking half the internet during football games?

It almost certainly does not: https://hudoc.echr.coe.int/fre#{%22itemid%22:[%22001-115705%...

AFAIU this is common because lower courts often deliberately choose not to interpret the ECHR, leaving that for appeals courts.


I interpret the ECHR as what it is: not a regulatory body by any stretch of the imagination. It can recommend changes to national law, but it cannot force any state to make them. You seem to be interpreting it as some sort of equivalent to the US Supreme Court, which it is not.

But now we're straying even further from my original argument which boils down to "laws mean something" into arguing the intricacies of how laws are supposed to be changed. I'm not interested in having that discussion, as it has nothing to do with my original claim.


ECHR decisions are (supposed to be) legally binding. If they're not obeyed, that's not a good look for rule of law in Europe.

ECHR decisions are certainly not mere recommendations.

>It can recommend changes to the national law, but it cannot force any state to do so

ECHR can simply invalidate national law.


According to whom? You?

The only thing the ECHR cares about is one piece of "legislation", which is not a law but a declaration (the Declaration of Human Rights), so that you have some internationally recognised body to go to whenever you feel your local judicial system has done you an injustice. That is all it does. That is all it is meant to do. That is the sole reason for its existence. It is not a legislative body at all.

> ECHR can simply invalidate national law.

It can't. You're either making things up or severely misunderstanding the court. It can say "this law doesn't align with the Declaration" and that's it. The law still exists. The ECHR relies on signatories being willing to make the necessary changes themselves. Some are and get right on it; some aren't. The election law in my country has lost 5 cases in the ECHR, the oldest dating back to 2009, and not a single one of the verdicts has been acted on as of now. This is horrible, I want to see them fixed, but the ECHR can't force us to, and the country faces zero consequences for not addressing any of them (as of yet).

There is a separate court, the European Court of Justice, which is the equivalent of the US Supreme Court and is tasked with interpreting EU-wide laws and making sure national laws are aligned as much as possible. That is a judicial body with an enforcement mechanism. The ECHR is not; you don't know what you're talking about.


>According to whom? You?

According to the European Convention on Human Rights, it's sort of the whole point.


And what would those values and priorities be? Because it doesn't seem to me that they align with what they actually do.

For example, it seems to me there is a whole lot of worry around megacorporations, often related to capitalism and the inequalities it brings.

In that context, if you don't place privacy as a priority, how are you not either stupid or ignorant? Is my premise just wrong?


You can be in favor of privacy while simultaneously thinking porn, gambling, and advertisers shouldn't be targeting children. The age verification bills I've read have steep penalties for retaining information, so that seems fine since that's literally more protection than you get in person.

It's really more just concluding that those corporations should be liable for their behavior. It also has nothing to do with "the Internet" which is largely unaffected. Except of course ideas for forcing OS behavior coming out of California which are obviously bad.

I actually think things could be a lot simpler if we just made the laws like alcohol: it's illegal (with criminal liability) for a non-parent adult to provide <restricted thing> to a child. Simple enough. Seems to work fine as-is for Internet alcohol purchases. Businesses dealing in restricted industries can figure out how to avoid that liability. That's entirely compatible with making it illegal for businesses to stalk everyone, which we should also do!


> The age verification bills I've read have steep penalties for retaining information, so that seems fine since that's literally more protection than you get in person.

The best way (and only way) to prevent retained information is not to share it in the first place.

> You can be in favor of privacy while simultaneously thinking porn, gambling, and advertisers shouldn't be targeting children.

There are other methods to achieve this without mandatory identification. You could require this content to be served with an HTTP header declaring its legal minimum viewing age or content type, and block it browser-side. Governments could maintain filter lists for different age brackets and release them to everyone, allowing easy compliance in a device's parental control settings.
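As a minimal sketch of that scheme (the header name "Content-Min-Age" is hypothetical, not an existing standard), client-side parental controls would only need a check like this:

```python
# Hypothetical client-side check for a self-categorization header.
# "Content-Min-Age" is an assumed header name, not a real standard.

def is_blocked(response_headers: dict, viewer_age: int) -> bool:
    """Return True if parental controls should block this page."""
    raw = response_headers.get("Content-Min-Age")
    if raw is None:
        return False  # uncategorized content passes by default
    try:
        min_age = int(raw)
    except ValueError:
        return True  # malformed rating: fail closed
    return viewer_age < min_age

# Example: a gambling site self-categorizes as 18+
headers = {"Content-Type": "text/html", "Content-Min-Age": "18"}
print(is_blocked(headers, viewer_age=14))  # True: blocked for a minor
print(is_blocked(headers, viewer_age=30))  # False: allowed for an adult
```

The policy question (government filter lists, defaults for uncategorized sites) sits entirely on top of this; the mechanism itself is trivial.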


Headers could maybe work in a world where the technology were ubiquitous and people knew how to set it up (c.f. the v-chip's failures), and kids couldn't just buy their own device for $20 and use it on the actually ubiquitous free public wi-fi to avoid any restrictions.

And actually I think it's a better world where kids can obtain e.g. a raspberry pi that they completely control no questions asked and free public wi-fi exists all over, and the onus is on service providers to not deal with children if they're not supposed to. Basically, a high trust society.

In any case, "don't retain records" is actually a pretty easy task. Trivial, actually (use a device with no disk to handle PII, an API that just returns yes/no to the rest of the system, and heavily restrict the firewall, e.g. no outbound connections). Or you buy a token/gift card in person with ID check. If you think the penalties aren't steep enough to get compliance, just raise them (e.g. business ending fines plus jail time).


If you implemented that simple solution the expected outcome is businesses collecting ID at the door. But unlike the age verification bills there'd be no prohibition of or penalty for misuse of the collected information. It's a strictly worse outcome.

You can make intentional targeting illegal without criminalizing the accidental. And mandating self categorization of content by service providers would enable standardized filtering that was broadly effective.

The above won't get kids off of social media and it won't serve the purposes of the surveillance state but it will meet the stated goals of those pushing these measures.

Keeping children off of social media is a much trickier problem. I think we'd be better served by banning certain sorts of algorithmic feeds.


Okay, so make it illegal for them to record any information which is what the actual laws do (or better, explicitly criminalize all the other current stalking). The point is you don't need to be prescriptive about how to prevent children from accessing the sites. Just make it so you can face massive fines and be arrested if you don't. They can figure out how to comply with the law, and they can be effective or be shut down.

They're not actually owed a solution for how to make their business model work. They can just be told that what they're doing is unacceptable, and they can figure out what they'd like to do next. If you're worried they might react with some other unacceptable thing, we can clarify that that's not okay either.


I agree that open ended requirements are better than the imposition of prescriptive solutions. But I don't want online ID verification and that's where your proposal logically leads so I am equally opposed to it.

> They're not actually owed a solution for how to make their business model work. They can just be told that what they're doing is unacceptable,

You listed a few different things previously. Which one are we talking about here?

I think the rest of us are owed a solution where we can still do what we want without having our privacy violated. Regulations need to take the end user into account.

I already proposed what I think would be a workable solution to achieve the stated goals without unduly eroding the status quo. Do you have any response to it?


Self categorization has been the status quo since the 90s and has proven to be insufficient. More generally, assuming people agree that something is a social problem/should be restricted, I don't think "have a third party come up with a solution that people can buy to filter us" makes sense. The liability belongs on the people dealing in the restricted item.

We don't give kids special debit cards that detect and block purchases of cigarettes and alcohol and say "make sure your kids don't get cash". We make it a crime to sell those things to a child.

Why is online ID verification a problem for e.g. porn and gambling but it's fine for alcohol? Why should it be fully anonymous? Should we also allow anonymous porn and cigarette vending machines in person? Why is online special?

This whole idea of anonymous access can't even work in a world where you actually pay for things online, which makes the whole proposition even more dubious. If you're an adult and spending money online, you already told them who you are (modulo darknet markets with crypto). Or you could buy a porn gift card in person with an ID flash like other restricted physical items if you're uncomfortable with online payments. And treat the gift card as restricted as well: giving it to a minor is a crime. So then what's the problem exactly? Ad supported porn specifically somehow is important enough to be special?

More to the point: as far as I know, if you perform a sex act in plain view inside of a private establishment that's open to the general public with no restrictions, then that's public indecency/lewd conduct, a criminal act, even if the owner consents. If children are present it can become a felony and you're going on the sex offender list along with jail time. Why is an unrestricted public website different?

Why are you "owed" this privacy online when someone running an open to all, fully anonymous, unchecked porn theatre in person would be arrested? How about the privacy you are owed is that your business stays between you and whomever you interact with, and even they can be asked/required not to keep or share notes about you? But they can still be expected to know you are an adult before they sell you adult services.


TBH I think this is all either fundamentally flawed or incredibly weak except for your final paragraph. That one actually poses a somewhat interesting question - why the seeming disparity between online and offline porn regulations in the US? Still, it fails to address (or even acknowledge) the differences in the impact of requiring ID between those scenarios.

Also I think you have this entire thing exactly backwards. It's not on the rest of us to convince the other camp that ID shouldn't be required. Rather it's on the other camp to put forward a convincing case that ID should be required - that there is no realistic alternative and that the tradeoffs are worth the cost. Otherwise the current status quo wins out.

> Self categorization has been the status quo since the 90s and has proven to be insufficient.

What are you on about? Legally mandated self categorization has never been tried and would presumably work if there were penalties for violations. You don't even need 100% compliance, you just need high enough compliance that the default becomes to filter out any site that fails to do so.

Voluntary self categorization isn't particularly useful because almost no operators bother to do it. So you're left with no (workable) option other than whitelist filtering.

> have a third party come up with a solution that people can buy to filter us

I never suggested anything of the sort.

> The liability belongs on the people dealing in the restricted item.

The items are not currently restricted and I don't agree with you that they should be. However I would agree to changing things to make all providers liable for accurately self categorizing the content they serve up by means of a standardized header format or some other protocol.

> Why is online ID verification a problem for e.g. porn and gambling but it's fine for alcohol?

Presumably because you have to take receipt of the shipment so the vendor is already going to collect your PII.

Why is legally requiring that a gambling website send a header categorizing itself as such unworkable yet somehow it's all going to work out just fine if we require them to do the much more complicated thing of securely handling and accurately verifying identification documents? That seems like an obvious contradiction to me.

> Why should it be fully anonymous? Should we also allow anonymous porn and cigarette vending machines in person?

Don't we effectively do exactly that? There's no requirement for ID retention on sale of alcohol or cigarettes and until recently the norm was for the clerk to briefly eyeball your license. They also didn't used to bother checking ID if you looked old enough. (That's changed at the major retailers around here lately but that's a different matter.)

Anyway I never claimed the brick and mortar way of doing things was ideal so arguing as though I've agreed to that seems rather disingenuous.

> If you're an adult and spending money online, you already told them who you are

But I did not give them a copy of my ID or any otherwise unnecessary PII and do not want to be required to do so. Also, there are plenty of ways to pay for things online without readily revealing your identity to the counterparty. I expect you are well aware of that fact.

> Why is an unrestricted public website different?

For practical reasons I'd imagine. Analogies are great and all but at the end of the day a global electronic communication network has rather different properties than a physical brick and mortar location that you walk into.

Regardless, the reputable services all seem to agree with you (as do I) and thus go out of their way to send headers marking them as adult only. It's roughly equivalent to a shop hanging a "no under 18 allowed" on the door but then not bothering to ID anyone. If parents can't be bothered to configure even the most basic of controls on their children's devices why should the rest of society be made to suffer for that?


Sending a header is unworkable because nothing obeys it, there are embedded browsers all over, and even if you mandated that every app/browser do so, kids can get a computer/phone for $20 with no restrictions.

There's no requirement for ID retention online either. In fact, unlike in person, it is banned. And a framework where you just say "you are liable for what you provide to children" actually allows for a site employee to briefly eyeball your ID or just look at you and decide you look old enough (though that doesn't really work with realtime video generation).

Record retention is a different question from checking. I think I and the actual relevant laws have been pretty clear that we should disallow that. No, we do not have anonymous cigarette vending machines (at least anywhere I've been in the US). They are always behind a counter with an ID check.

Except for crypto, I don't think I am familiar with any way to pay for something online without revealing my identity. I'm pretty sure 100% of online purchases I've made over the last 20 years have required name/address and usually phone number as part of payment details. Even with crypto, as far as I know common wisdom on darknet markets is (or was?) to use your real name/address as that's the least suspicious. I don't actually know a single place where you don't give that info to your counterparty. I can't imagine it's common.

What parental controls? As far as I know, Safari is the only modern browser that checks RTA headers (if it still does). There are no options for Chrome or more importantly Firefox, which is the only browser that's fit for purpose with malware blocking (especially for children). Similarly Android has no controls.
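For reference, the RTA label mentioned above is a real published string ("RTA-5042-1996-1400-1577-RTA"), served either as a `Rating` response header or embedded in a meta tag. A filtering client could check for it roughly like this (a sketch; the function name is made up):

```python
# Check for the real RTA (Restricted To Adults) self-label, which sites
# can serve as a "Rating" HTTP header or as a <meta name="rating"> tag.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def is_rta_labeled(headers: dict, html: str = "") -> bool:
    """True if the response self-identifies as adult-only via RTA."""
    if headers.get("Rating", "") == RTA_LABEL:
        return True
    # Crude fallback: look for the label anywhere in the page markup.
    return RTA_LABEL in html

print(is_rta_labeled({"Rating": "RTA-5042-1996-1400-1577-RTA"}))  # True
print(is_rta_labeled({}, "<p>hello</p>"))                         # False
```

This is exactly the kind of check that, per the comment above, only Safari is known to act on today.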

I don't see what part of being online makes it less practical to check ID. It seems more practical to me. It's just cheaper not to, and online businesses are big on avoiding labor. That's not some fundamental right of theirs.


The browsers don't support it because only a few major sites bother to send it. The issue here is not support by client software; it is lack of participation. That could be fixed via legal mandate, no different than requiring ID checks or anything else.

Right now if you want to build out a filtering solution there's nothing to base it on. We could fix that via regulation and then filtering would just work.

> kids can get a computer/phone for $20 with no restrictions.

At that point ID checks are no good either. They can just visit a site from a different country that doesn't respect our legal framework or hop on tor or bittorrent or whatever else.

In fact when it comes to ID checks if you don't enable parental controls and filtering then they will be able to bypass it in the exact same way as above except using their regular device that you gave to them! No need to go purchase a new one!

So you're inevitably going to end up needing a client side filtering solution regardless. As I keep telling you, the solution you're gunning for here is strictly worse than content filtering based on mandatory headers.

> Except for crypto, I don't think I am familiar with any way to pay for something online without revealing my identity.

There are also virtual credit card services. Or gift cards (which you yourself mentioned earlier).

Of course anything shipped needs a name and address (and likely phone number) but there are plenty of services you can pay for that don't involve shipping a physical item.

> That's not some fundamental right of theirs.

Never said or even implied that to be the case. I think I've been pretty clear that I see it as a threat to privacy, that I don't personally want it, and that I don't think it's the best (or even a particularly good) solution for the stated problems.

It's bizarre to me. You are putting all this effort towards advocating for new regulation that would require a change to how services operate. Simultaneously you argue against a less intrusive solution on the basis that no one currently does it. For some reason everyone can start checking IDs but sending a header is a bridge too far? It's inconsistent.


> They can just visit a site from a different country that doesn't respect our legal framework

That's called noncompliance. This is why a simpler framework is better: do you demonstrably serve content to children in this jurisdiction illegally? Then you'll incur fines and a warrant here. Better not have revenue or visit here. And we could put the same liability on advertisers funding it so there's just no financial incentive for anyone.

Bittorrent is trivial to block, other countries are easy to block on your router, and it would be simple enough to just say running an open proxy incurs liability for anything you front if you obscure the originating location or allow international traffic. Again the basic principle is "are you providing access to the general public with no gating to restricted material?" In any case, obscure Russian forums you can access through Tor are an afterthought compared to e.g. Reddit, which hosts both Roblox forums and porn today with no wall between them. There's no reason to allow that.

Note also that provider liability doesn't mean we can't also have filtering. Liability just creates the correct incentives for providers to help ensure the solution actually works. If liability with no prescription for a solution would lead to ID checks and not working with vendors to have working filters, that kind of reveals what we think would actually work.

As far as virtual cards go, do they not still require payment information? Surely businesses don't want to deal with anonymous purchases, since that's begging for fraud? In any case, service provider liability is still compatible here. I didn't say they need to check ID. Neither does e.g. the Texas law. It says someone needs to verify age. They can use a commercial service for it. The virtual card provider or gift card retailer could provide that service and assume or share liability.

I'm not even necessarily advocating for a new regulation. I'm saying recognize public indecency/lewd behavior for what it is, and ban things like gambling in children's games. Recognize that public websites with no access gates are public spaces and act accordingly. And yes I consider checking ID for a handful of specific services to be less intrusive than everyone supporting some header. I don't consider the former to be intrusive at all really. The latter is basically impossible if for no other reason than there are already billions of devices that don't. It's a fantasy non solution that basically amounts to "do nothing".


I don't know why I'm the only person online willing to steelman this, but...

The early Internet users weren't people who subscribed to AOL to look at porn in the 90's. They were the people who were granted access to the ARPANET to work in the 80's. The Internet was an exclusive community back then. You had government employees, knowledge workers, and elite university students who had all passed institutional screening processes. You were only allowed to use the ARPANET if you were using it to do something useful and aligned. Therefore you could feel reasonably assured that anyone you talked to online was going to be better than the average person you'd find going outside and walking down the street. If you wanted to know who they were, you could just finger their username. If you wanted to know who owned a domain, you could whois it, get their name and then even write them mail or call them.

People have wanted that old Internet back for a long time, i.e. the one that existed before Eternal September. Those are the people who run your tech companies, the ones who remember what it was like. These people understand that what people actually want isn't always the same as what they say they want. They understand why the only truly successful spaces on the modern Internet are the ones like Facebook that got people to be non-anonymous. Another example: the best places to work, the ones folks desperately want to get into, are companies like Google whose intranets are much more like the old Internet. These are really the only Internet spaces that normal people want to use, because people want to interact with other people who are similar to them, and because people want to know who other people are. Otherwise we can't operate as the social creatures that evolution designed us to be. I don't think any civilization in history has operated its public square as a gigantic red light district where everyone is required to wear a mask. So why should we?

Overcoming the anonymous religion problem that somehow glommed onto the hacker and cyberpunk movements is more important and urgent now than it's ever been, because the Internet has been filling up with billions of AI agents. It's gonna be Eternal September in overdrive. Humanity is really facing a tradeoff where you'll have to have gatekeeping again and won't be allowed to conceal who you are, or you can be gaslit by machines forever in your own robot fantasy.


4chan is and has for a long time been in cooperation with US law enforcement. You can literally, right now, enter the site and when reporting a post there's an option to report a post as breaking US law.

If the UK is not happy about how the site deals with such matters, the UK can block the site.

>Next on the escalation ladder is govt writing the rules for you that you either take or leave

No. This is not "next". This is "now" but the UK doesn't want to actually look to be doing what they are actively doing, and now we've got this mess.


It's not the first time I see comments similar to this and I honestly can't even begin to grasp how anyone can think that the US is in any way shape or form at risk from Iran.

If it spends enough to trigger the debt bomb literally pounding sand, that could do it. It isn't Iran that is the danger though. The US could just walk away any time and be fine.

And leave their “partners” to clean up the mess?

I'm sure people said the same about the USSR invading Afghanistan.

The US right now cannot keep its bridges from collapsing. It cannot keep its children safe from men with guns. It cannot keep its citizens fed or housed. It is failing to provide adequate healthcare for a majority of its population, it cannot even keep its children vaccinated against measles. Our science agencies are being run by crackpots. Our mass media is being combined under one single owner.

This doesn't even consider the impending existential challenges of climate change.

And this nation, instead of fixing its crumbling domestic infrastructure - educational systems, health care systems, or anything that would benefit the citizens of the US - has chosen to launch an attack against a foreign nation that has already cost tens of billions of dollars and will likely cost vastly more.

All the political and economic capital that is required to maintain and improve stable conditions is instead being poured into murder in a desert thousands of miles from home.


I'd argue that Vietnam and Al-Qaida/Afghanistan/Iraq were much lesser "risks" than Iran, and those still left lasting scars on US society, self-image, and standing.

Well, why don't you open Bloomberg or the Financial Times to understand why. The damage being caused is potentially civilization-ending. At the very least, this is already going to be very expensive for everyone for many years.

I think you are right on every point, but I think it's worth noting that WoW is kind of a different beast.

You don't play a "match", and you don't play "against" other players most of the time. In this context "botting" and "cheating" overlap because having your character do stuff 24/7 unattended is an evident advantage over the rest of the population, but it's not like you are directly hindering anyone's progress the vast majority of the time.

How often does actual cheating happen in WoW, anywhere it matters? M+? Raiding? PvP?


Most cheating is botting: bots farm dungeons and other activities, earn gold, and that gold is then sold on black markets for dollars to other players.

That indirectly hinders other players' progression: it causes deflation (so you can't earn as much gold selling your ores); it causes inflation (more circulating gold; yes, these are contradictory); it denies other players farm (if a bot gathered an ore vein, another player has to search for a different one); and so on. Illegal gold selling also raises expectations (other players bought super good gear, why don't you?) and causes burn-out (because farming gold fairly is much harder than just buying it).

But mainly it just makes players angry, because they can see these bots moving along a predetermined route and stealing resources from under their noses. I'm not really sure bots are that bad in the grand scheme of things, but living players certainly don't like competing with automatons.

There were also cheaters who used instant-cast interrupts in arenas, but competitive PvP doesn't seem that popular nowadays, so I'm not sure how widespread that is.


That and back in my WoW days it was "competitive" raiding with "creative use of game mechanics" - although those bans usually lasted for a few weeks or so most of the time.

I only remember coming across one of these myself, basically a boss fight where you could exploit pathing to take one of the enemies out of the encounter.

This was all just for boasting rights - being server first for a given boss fight with an undergeared raid group.

It was sometimes a coin flip to know whether the strategy you came up with was going to get flagged by a GM or not. Usually for brand new fights they'd come warn you before an actual take-down they disapproved of, and you'd only get banned if you ignored the warning and continued. I can only assume Blizzard had alerts of some sort so in-game GMs would be notified when a guild started a given encounter.


>The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity.

Anonymity is not the problem though. We've gone with anonymity for a long while and it has worked fine. Would a removal of anonymity suddenly fix all this? No, absolutely not. Astroturfing and PR campaigns happened before AI comments were a concern, same as bad actors.

The problem here is the "recent" development of trusting whatever you read online. Of insisting that content should be personal, trustable and real, when none of this can ever be ensured. The separate, but related problem of engagement-based economy makes it way worse.

And remember: social media sites don't actually want to get rid of bots, for the most part. That's not in their interest, as long as bots increase engagement, does anyone trust them to actively hurt their bottom line in order to promote honest, productive discourse? Please.


The central complaint isn't "distaste" because you can't call for someone to be banned because of a "distaste".

"Jesse Singal has distributed private medical information on Bluesky without the consent of the patient" translates to publishing a quote from a patient included in a therapist's letter of support for hormones.

The problem in this situation is that the complaint itself as well as the whole drama surrounding the person is an exercise of harassment towards Singal. In this context, I don't think that saying "waffles" is out of order. I'm not sure of what else can be done about crybullying, since by its very nature innocent bystanders would be surely affected if action was taken against those complaining.


Distributing private medical information without consent is a violation of Bluesky's terms.

And to me, that sounds like a much more concrete example of someone being a bully.


>“Don’t use Bluesky Social to break the law or cause harm to others,”

Is this, quoted in the change.org petition, the relevant line?

The law was not broken, it is also fairly evident that the intention was not to "cause harm to others", nor has any harm seemingly come to the patient from this (it requires a huge stretch of imagination to think of a case in which it could).


Is it private if it is in a public affidavit?


In my opinion, inappropriately leaked information should probably still be considered private, even if it was made publicly accessible. But even if not, Singal says the same leaker directly contacted him with a new leak, which he also published.


> In my opinion, inappropriately leaked information should probably still be considered private.

How is that relevant to BSky's terms of service? The information was public and did not identify the person.

> But even if not, Singal says the same leaker directly contacted him with a new leak, which he also published.

I notice that you didn't say whether this new leak was private information, or whether it was also already public knowledge, or whether it in any way identified a person.


> I notice that you didn't say whether this new leak was private information

The new leak was, according to journalist Jesse Singal himself, absolutely private information.


Please cite Singal's statement and let's see what he actually said.


I think this entire thread has run its course; if it's not this detail, it'll be another, as a few others have already moved goalposts further down the discussion than the ones you're setting here.

But if you wish to sate personal curiosity, it is in his Substack, linked from the first link I posted, which was itself from the link posted by its GP.


The only thing that seems remotely related to your claims is this:

    When the office of Missouri Attorney General Andrew Bailey began an investigation, [Reed] said she handed over the spreadsheet, after scrubbing out the personally identifying information that could spark HIPAA problems. She shared a copy of it with me as well — it contains 17 alleged detransitioners or desisters and 60 allegedly worrisome cases.

What's your problem with what happened exactly? Is it your position that your "private information" cannot be used, ever, to expose what some see as a medical scandal, even though it cannot identify you or in any way be associated with you? What does "private" even mean to you if sharing this dataset did not violate HIPAA?


> In my opinion, inappropriately leaked information should probably still be considered private.

I'd love to see the limitations of this opinion you definitely hold honestly and without favor.

You started by posting a change.org petition that links to a deleted post - in other words an "appeal to petition" that has no evidence. Now you are suggesting there is another leak that was published (presumably not mentioned in this petition?) that also has no evidence. Where is the evidence?

Everything from an actual search engine request for these posts (which, to be clear, are deleted) suggests that these are anonymized and public, and contain no identifying information.


>The effect of that data is serving you better ads.

On the contrary, that makes the ads worse, since they get better at convincing me to buy some crap I don't need.

The more irrelevant to my profile they are, the better.


I'm definitely in the night owl camp, and I'd much rather have sunlight in the mornings because I'm already going to have trouble waking up each morning; making it so I can't even set my circadian rhythm properly just adds insult to injury.

It amazes me that we actually argue about this based on vibes. We know that people are better off the closer their wake-up time is to sunrise.


Astroturfing was already a thing.

Identification fixes nothing here: you log in with your account, then plug in the AI.

The problems with social media have nothing to do with ID and everything to do with godawful incentives. The argument seems to be that it's a large price to pay but that it's worth it. Worth it for what? The end result is absolutely terrible either way.


Astroturfing will still be a thing after ID. What, you think the government is going to go after their own bot armies?


I think it would be a lot more difficult for anyone to do, and it isn't like people will be using government platforms, at least not in the West.


>I think it would be a lot more difficult for anyone to do

Why? Like, what makes you think that?


Because of ID tracking? Say you have to attach your government-approved ID to use social media. It is now trivial to check how many accounts you have made and how much you have posted. You certainly can't be posting faster than the fastest typist in the world. And if you're mostly just copy-pasting, is the quality of the posts actually worth engaging with?

While I am not against internet ID, there is a case to be made against social media for the harms they are causing.


It is already trivial to check how many posts people make on social media with their own accounts, and astroturfing campaigns still happen.

Why would social media companies fight against this? They, much like the public, actually like the engagement. That is the whole problem.

Look at X, where you can now see where people are posting from. Do people honestly engage with the feature? No: they don't bother to check it when they agree with the content, and they use it as an excuse to dismiss the content in bad faith when they don't.

This is not a control problem. Social media networks are not at a loss for options in how to engage with this; they simply don't want to. The point can be made that states might want to fix this and are unable to, but if that were actually the case there are half a dozen better ways to do it, among them banning the services.

The idea that the entirety of the population ought to throw privacy away so people can still browse Instagram is repugnant to me.


I think we're speaking past each other. I'm talking about the way a single user can create multiple accounts on a single platform to create the illusion of consensus. If you repeatedly see a single user creating many posts / comments on a single topic, it quickly saturates your attention.

With an approved ID, it will be trivial to enforce 1 ID 1 account on 1 platform. This is not possible now.

To my knowledge, no country had tried it until recently. The issue of government distrust is valid, but that shifts the problem to one of government accountability, not accessibility. Demand that the rule of law be upheld, hold those in power accountable, and be vigilant of their trespasses; do not abdicate what little power you hold. That is what is required for civil society to function properly.


Let's say the government issues hundreds of thousands of IDs to people who don't exist and uses them to verify bots (or room full of paid humans) that post pro-government messages all day, at "normal" rates that a human posts.

It's amazing how there is a much larger crowd, of completely real people, who approve of the government, than those nasty dissenters. We know they're real people because we trust the government vouching for its own IDs.

And because of the real ID policy, the government can also ask the social media company for the ID used by opposed posters, and find out where they live and "visit" them, maybe "warn" them.

Hooray for democracy!


This sounds like an unreasonable amount of distrust in a government. If a government is truly malicious, it no longer matters if an ID was issued in the first place.

Take the current US administration. If it were to point the finger at a user for something the government didn't like, I doubt many people would agree; more likely, people would oppose the government rather than the user. The most important thing is to prevent the government from using violence against people for speaking up, which is somewhat lacking in the US.

More effort should be made to hold governments accountable, not to find ways to skirt around them.


It doesn't even have to be malicious. The UK government had the https://en.wikipedia.org/wiki/Windrush_scandal where it lost the only identity documents of thousands of people, and also tried to remove them from the UK for not having these documents.

Governments shouldn't work like Google's technical support, where they are in 100% control and you have zero recourse if they don't like you, or even if they just fuck up. Governments should be accountable to their people, there need to be systems (like courts) to rein in the government's unlawful actions. It goes without saying that government shouldn't build fully centralised systems of authority, and certainly shouldn't be implicitly trusted by third parties - because when they do that, things go badly for the citizens of that government. Or citizens of other countries (see e.g. the USA fucking with ICC staff)

...and yet here we are, discussing systems that would lock people out of all sorts of things if they won't or can't get a trusted proof they're in a central database we trust the custodians of 100% - those custodians never make mistakes or abuse their position, right? Why the rush to adopt the more fragile system?

What I worry about is more and more "nudge theory" or dark patterns coming in; you may be entitled to something, or have rights, and the government doesn't like people having that, or paying for people to have it. They won't say "no, people can't have these rights and entitlements" and take the hit at the ballot box (though sometime they do and that is strictly worse), but they will deliberately put in roadblocks and gotchas (digital or otherwise) that oh-so-unfortunately sometimes don't work, or are cumbersome and thus discourage people from exercising their rights.


Point taken on incompetence as opposed to maliciousness. I'm not gung-ho on a central database. Perhaps issuing a physical ID, similar to a driving license, would suffice? And if we want to prevent tech corps from scanning your face, just make it a PIN-locked card, à la bank cards. Social media isn't a human right anyways.

