jameskilton's comments | Hacker News

This is the only statement that matters. We knew exactly who Trump is and exactly what he would do if voted in again. The "opposition" did nothing during Trump's first term, and now they are completely powerless to do anything.

America voted for this, we failed miserably to prevent this from happening for the past 30 years, and we will pay the price for this for generations.


Trust is hard to earn, and easy to lose.

Turns out the easiest way to destroy the greatest nation in the world is to fail to hold the rich and powerful to account.

No-one should trust us anymore, and if Europe doesn't step up and do something, this will be China's world to run for generations to come (Russia unfortunately is working on committing suicide).


My daughter will not get a phone at all until she's at least 16 and probably finally actually needs one.

As for the Switch and Nintendo Online, I didn't find it confusing or difficult at all to set up a child's account and make sure she can't buy anything without my permission. I make sure my daughter knows what she can and can't do, and I keep an eye on things to make sure she follows my rules. I don't trust parental controls to do everything for me.

Now that said, Minecraft on the Switch is one gawd-awful Frankenstein amalgamation of permissions and accounts run by Nintendo and Microsoft. I got it working, but it's by far the worst experience I've ever dealt with just to play a game, even single player.


> My daughter will not get a phone at all until she's at least 16 and probably finally actually needs one.

It’s all fine and dandy, until you find that they’ve actually just saved up their pocket money and gifts for the last year and a half to buy the phone (age 11 in my daughter’s case) and that all the after-school and weekend activities are being arranged on phones. Seeing your kids excluded from real-world activities is tough.

In our case, a combination of talking to the kids plus Apple parental controls offered a reasonable approach.


My daughters are younger than that, but a lot of the neighbor girls who are in that age range got Apple Watches before phones. Which kind of makes sense, because it allows them to text but keeps them off of apps and such.

I had a cell phone before my parents. Paid cash for a TracFone when I was 16 or 17 and used that to sell weed. Where there's a will, there's a way.

Heh. When I was in high school, cell phones and pagers were banned based on the assumption that only drug dealers could afford them.

Yep. Even 20 years ago, phones were basically necessary to have a social life in high school. It’s where everything got planned.

My daughter is 14. Still no phone. You can make this work.

My parents did the no phone until 16 rule, and it was awful. Completely cut me off socially.

The "socially" part is the problem though. A lot of bullying occurs via those social media platforms that teenagers are using.

It's true, and it can definitely be a problem. But I wasn't getting invited to in-person events because I wasn't contactable. Kids don't ring doorbells in 2025, they text people if they want to meet up.

A lot of bullying occurs in any environment teenagers exist en masse.

Right; which is why allowing teenagers to be safe at home instead of exposed to it 24/7 is a smart choice.

Allowing these teenagers who are being bullied to explore spaces where they feel safe and comfortable seems like a good idea too though. As someone who was bullied in school, being online did not make that issue any worse, and allowed me to find friends I couldn't otherwise have.

Yet in the broader sense online bullying targeting other teenagers is a commonly cited problem, including in incidents of teen suicide. "It didn't make it worse for me" doesn't counteract what we provably know is occurring[0][1][2].

Young teen suicide (ages 10 to 14) has increased from roughly 1 per 100K in the early 2000s to nearly 3 per 100K over the last five years. Older teen suicide (ages 15 to 19) has increased from 6 per 100K to 11 per 100K over the same time period[3].

[0] https://www.jmir.org/2018/4/e129/

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC12230417/

[2] https://pubmed.ncbi.nlm.nih.gov/32017089/

[3] https://www.cdc.gov/nchs/products/databriefs/db471.htm


1 and 2 do not seem to suggest that cyberbullying is more harmful in this regard than other forms of bullying - and in fact only 3 seems to contrast these concepts at all.

> Sensitivity analyses suggested that cybervictimization only and both cyber- and face-to-face victimization were associated with a higher risk of suicidal ideation/attempt compared to face-to-face victimization only and no victimization; however, analyses were based on small n. In prospective analyses, cybervictimization was not associated with suicidal ideation/attempt 2 years later after accounting for baseline suicidal ideation/attempt and other confounders. In contrast, face-to-face victimization was associated with suicidal ideation/attempt 2 years later in the fully adjusted model, including cybervictimization.

In fact, reading 3, it looks like the highest prevalence of cyberbullying capped out at a whopping... 16% of 15-year-olds, with a sharp drop down to 7% just 2 years later.

I have to say, there's lots of things to worry about with kids going online. I just don't think bullying in particular is one of them.


As someone who was not popular and got bullied some in school, I think cyberbullying would have been worse, since it comes home with you. I was in school when SMS was finally becoming widespread, and some of the bullying happened through it. It sucked being at home and still getting reminded of the shit at school.

I can't imagine today with 24/7 social media apps on the phone.


In my case, as you said it may not have exacerbated it, but for me it certainly perpetuated it.

A retreat into the online world seems like a comfort in difficult times but it is a retreat, and the longer you stay retreated, the less likely it is you'll regain the ground again.


Social media is not the same thing as social communication.

This is going to show how naive I am. Because I am middle aged, do not have a cell phone, and still to this day just show up at people's houses unannounced if I want a social experience.

This still is possible for me, surely it is possible for kids.


That seems like a great strategy if your goal is for your child to be the weird kid that has no friends.

There are pros and cons to that goal.


> This still is possible for me, surely it is possible for kids.

I think there's a real generational divide here. What is normal in my parents' generation (I'm in my early 30s) is not normal from roughly my generation downwards (which coincides both with mobile phone ownership amongst children/teens becoming common, and with children/teens becoming much more restricted in how much freedom they had in terms of being allowed outside by themselves).

Even amongst people my age, people would consider it weird and probably even rude if I turned up unannounced (a "What are you up to?" text message would probably be the norm). And I think that's more exaggerated amongst younger generations. Perhaps that's different if you live very close to your friends. But a lot of people don't.


I feel sorry for your daughter. 16 was very late to get one as far back as the late 90s - I was very glad to get one at 14 as it meant I wasn’t quite such a weirdo outcast.

I didn’t have a cell phone until I was 17, but I still used the house phone to call and talk to friends. With a house phone, a parent can always listen in on conversations while still respecting the child’s privacy. The child also knows that they can be listened in on and that their privacy is restricted.

The child may also learn about making a social effort to keep in touch rather than relying on a beacon to ping them about social events.


16 is too late. You can’t teach your kids to use communication devices maturely through abstinence. You just have to watch what they do online. Which means reading their WhatsApp et al messages after they’ve gone to bed.

Yes there will be some problems created from them having devices, but parenting isn’t supposed to be easy, it’s supposed to be educational and supportive for the children. Which forced abstinence is not.


> Which means reading their WhatsApp et al messages after they’ve gone to bed.

Do they know you do this? Otherwise this seems like a very effective way to create trust issues in your kids.


Of course they do. You should be open and honest.

For us, it’s a system that’s worked well. So well, in fact, that our kids have felt comfortable coming to us when they see something concerning in a group chat rather than waiting for us to find it. And in return, we’ve learned to trust their judgement a lot more because they’ve demonstrated mature behaviour online.


Are you sure the kids aren't learning to delete the messages?

You have it backwards: it’s not about trying to catch my children doing bad things (though there is that benefit too), it’s more about ensuring that other people are not doing, or trying to do, bad things to my children.

I trust my own children but you’re right that I cannot guarantee that they’re not bullying others and deleting those messages. However I’d hope other parents are monitoring their children’s phone usage and would tell either me or the school if my child was causing issues. That’s how a healthy community of parents is supposed to work.

Also your comment has a tone of “kids can find a way to bypass parental oversight, so why bother parenting in the first place?” I don’t know if that is intentional or not. But it’s an attitude I have seen other parents adopt and, unsurprisingly, their kids are usually the little shits that cause trouble because they know there are zero repercussions.


Yeah I’m pretty sure invading your kids privacy like that is setting you up for worse trouble.

A better way to frame this is supervised vs unsupervised access. And it depends on their age.

At 11 I wouldn’t expect them to have unsupervised internet access. At 16 I might, but by the time they’re 16 I wouldn’t need to monitor their online activity so closely, because they’ll have several years of trust and experience built up.


If they're 10, tell them that literally anything they type into their device is being stored for parental review. No expectation of privacy.

Obviously, this'll have to change at around 16, but those conversations need to happen anyway.


Then you are a terrible parent and your kids will get their social activities through detention as they can't do homework.

Probably best to link to the repo itself, this is not meant to be used yet. https://github.com/rue-language/rue


This is the dumbest timeline.


The dumbest timeline is the one with nuclear war. This might not be it.

If we're optimistic, we can assume that Trump, Xi and Putin have some kind of deal for a new world order where the US is no longer the world police, and the US gets to have its oligarchs just like Russia does.

Maybe that part of the deal is that Trump gets the Americas. It sure sucks for the new vassal states, but it beats having a nuclear war.


Putin seems intent on keeping up his threats; he might just use a "low yield" nuke to shake out the weak hands in Europe, of which it appears there are plenty. The question is how EU NATO would respond. I doubt they would then match him, nuke for nuke.

Could it be that Trump is leaning towards just letting Putin and the EU settle their differences by themselves, while he concentrates on his side of the world, where Venezuela is too easy a prize to win? The old playbook: find a US-leaning Venezuelan leader who can be bought off with CIA money, get rid of Maduro, by force if needed, and then the huge discoveries in the oilfields of Guyana next door, which Exxon, Hess Corporation, CNOOC and others have their hands deep in, are secured.


But, but this is what they voted for. "Government is bad, we can do it ourselves!"


Government fails -> “government can’t do anything” -> vote for smaller government -> government fails -> “government can’t…


You've already thought about this more than anyone in the administration has.

This is about keeping the non-whites out of America.



I've always driven small sedans (a Nissan Sentra SE-R back in the '90s, currently a Toyota Corolla). When I learned to drive in 1980, it was common to be able to see traffic through the two cars in front of you because (1) most cars were small sedans, and (2) really dark window tinting hadn't become a thing. Now I'm usually looking at the rear of an SUV or tall pickup truck and can't anticipate traffic even one car ahead.

Anyway, for years I've always responded to the "I feel so much safer in my big car/truck" with "I always stand up in movie theaters because the view is so much better"


Must be a very important movie, I guess, to analogize the inability to see a film to being killed to death.


Either you don't understand what an analogy is, or you misunderstand this one in particular.

My point was that both involve doing something that benefits me while directly disadvantaging other people. When people talk about their massive truck feeling safer, somehow this dynamic is ignored. But if someone applied the same reasoning to standing up in a movie theater, the selfishness would be apparent to everyone.



> The PR was opened, the workflow run, and the PR closed within the space of 1 minute (screenshots include timestamps in UTC+2, the author's timezone):

It's an unfortunately common problem with GitHub Actions: it's easy to set things up such that any PR opened against your repo runs the workflows as defined in the PR's branch. So you fork the repo, make a malicious change to an existing workflow, open a PR, and your code gets executed automatically.

Frankly at this point PRs from non-contributors should never run workflows, but I don't think that's the default yet.
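
For illustration, here's a minimal sketch of the most common shape of this footgun (a hypothetical workflow, not the one from this incident). The `pull_request_target` trigger runs in the context of the base repo, with its secrets available, but this job then checks out and executes the untrusted PR head:

    # .github/workflows/ci.yml -- hypothetical vulnerable setup
    on: pull_request_target      # runs with the base repo's secrets
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
            with:
              # checks out the attacker's PR branch instead of the base branch
              ref: ${{ github.event.pull_request.head.sha }}
          # install scripts from the PR now run in a job that can read secrets
          - run: npm install && npm test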


Problem is that you might want to have the tests run before even looking at it.

I think the mistake was to put secrets in there and allow publishing directly from github's CI.

Hilariously the people at pypi advise using trusted publishers (publishing on pypi from github rather than uploading locally) as a way to avoid this issue.

https://blog.pypi.org/posts/2025-11-26-pypi-and-shai-hulud/


> Problem is that you might want to have the tests run before even looking at it.

Why is this a problem? The default `pull_request` trigger isn't dangerous in GitHub Actions; the issue here is specifically with `pull_request_target`. If all you want to do is have PRs run tests, you can do that with `pull_request` without any sort of credential or identity risk.
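
For comparison, a minimal test-only sketch (the test command is a placeholder): with plain `pull_request`, workflows triggered by fork PRs run with a read-only token and, by default, no secrets, so executing the PR's code is comparatively low-risk:

    # .github/workflows/test.yml -- safe-by-default PR testing
    on: pull_request
    permissions:
      contents: read        # read-only GITHUB_TOKEN; fork PRs get no secrets
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make test  # placeholder for the project's test suite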

> Hilariously the people at pypi advise using trusted publishers (publishing on pypi from github rather than uploading locally) as a way to avoid this issue.

There are two separate things here:

1. When we designed Trusted Publishing, one of the key observations was that people do use CI to publish, and will continue to do so because it conveys tangible benefits (most notably, it doesn't tie release processes to an opaque phase on a developer's machine). Given that people do use CI to publish, giving them a scheme that provides self-expiring, self-scoping credentials instead of long-lived ones is the sensible thing to do.

2. Separately, publishing from CI is probably a good thing for the median developer: developer machines are significantly more privileged than the average CI runner (in terms of access to secrets/state that a release process simply doesn't need). One of the goals behind Trusted Publishing was to ensure that people could publish from an otherwise minimal CI environment, without even needing to configure a long-lived credential for authentication.

Like with every scheme, Trusted Publishing isn't a magic bullet. But I think the prescription to use it here is essentially correct: Shai-Hulud propagates through stored credentials, and a compromised credential from a TP flow is only useful for a short period of time. In other words, Trusted Publishing would make it harder for the parties behind Shai-Hulud to group and orchestrate the kinds of compromise waves we're seeing.
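
To make that concrete, a sketch of what a Trusted Publishing release workflow can look like (the environment name and build steps are illustrative, not prescriptive):

    # .github/workflows/release.yml
    on:
      release:
        types: [published]
    jobs:
      publish:
        runs-on: ubuntu-latest
        environment: pypi      # illustrative; can be pinned in the PyPI publisher config
        permissions:
          id-token: write      # mints the short-lived OIDC token for PyPI
        steps:
          - uses: actions/checkout@v4
          - run: python -m pip install build && python -m build
          - uses: pypa/gh-action-pypi-publish@release/v1

Note that no long-lived PyPI token is stored in the repo at all, which is exactly the property that limits what Shai-Hulud-style credential theft can harvest.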


> the issue here is specifically with `pull_request_target`

I just went to github to search for references to that trigger-type, and I admit I was surprised at the sheer number of times it is visible in a code-search.

It seems like a common pattern, sadly.


Yes, it’s shockingly common. I’m of the opinion that GitHub should remove it entirely, since only a tiny minority of uses of it are demonstrably safe.


The "just don't make mistakes, how hard is it" kind of argument (and we're talking about something very obscure and badly documented here) didn't work for C, and in my opinion it doesn't work for this either.


I can’t find a single place in that comment where I said anything like “just don’t make mistakes.” Where in the world did you get that from?


It does largely avoid the issue if you configure the publisher to allow only specific environments AND require reviews before jobs targeting that environment can run.

https://docs.pypi.org/trusted-publishers/adding-a-publisher/

Publishing a malicious version would then require a full merge, which is a fairly high bar.

AWS allows something similar.
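
Concretely, that means pointing the publish job at a protected environment and adding required reviewers to it in the repo settings; the PyPI trusted publisher can then be restricted to that same environment name (the name here is illustrative):

    jobs:
      publish:
        runs-on: ubuntu-latest
        environment: release   # jobs targeting this environment wait for the
                               # environment's required reviewers to approve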


As we're seeing, properly configuring GitHub Actions is rather hard. By default, force pushes are allowed on any branch.


Yes and anyone who knows anything about software dev knows that the first thing you should do with an important repo is set up branch protections to disallow that, and require reviews etc. Basic CI/CD.

This incident reflects extremely poorly on PostHog because it demonstrates a lack of thought about security beyond the surface level. It tells us that any dev at PostHog has access at any time to publish packages, without review (because we know that the secret to do this is accessible as a plain GHA secret, which can be read from any GHA run, and those presumably run on any internal dev's PR). The most charitable interpretation is that this is consciously justified because it reduces friction, in which case I would say that demonstrates poor judgement, a bad balance.

A casual audit would have revealed this and suggested something like restricting the secret to a specific GHA environment and requiring reviews to push to that env. Or something like that.
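
As a sketch of that kind of baseline (OWNER/REPO is a placeholder, and the exact protection fields you want will vary), branch protection can be scripted against GitHub's REST API, e.g. with the gh CLI:

    # require one approving review on main and disallow force pushes
    echo '{
      "required_status_checks": null,
      "enforce_admins": true,
      "required_pull_request_reviews": { "required_approving_review_count": 1 },
      "restrictions": null,
      "allow_force_pushes": false
    }' | gh api -X PUT repos/OWNER/REPO/branches/main/protection --input -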


Nobody understands GitHub. I guess someone at Microsoft did, but they probably got fired at some point.

You can't really fault people for this.

It's literally the default settings.


There's a lot of research on this, particularly from Robin Dunbar, who gave us "Dunbar's Number" https://en.wikipedia.org/wiki/Dunbar%27s_number

