Hacker News | coolestguy's comments

If it's so easy, why haven't you done it?


Because luck plays a very significant role.


Attributing something to luck sounds like a lazy cop out, sorry. We just had an article on the front page yesterday about “increasing your luck”.

If you need to be lucky in meeting the right people, you can increase your chances by spending your evenings in your nearest financial district watering hole. We’ve easily established luck can be controlled for, which puts us back into skill territory.

What specifically must one luck out on? Have you tried?


Exactly, as a multimillion lottery winner, it upsets me so much when people say I won because of luck.

I played every single day, and I played at different locations. I also made sure I performed my pre-ticket rituals which I learned from other lottery winners. Other people could have done the same. It’s absolutely a skill issue.


You picked the lottery you played, on which day, with what buy-in, and where you bought the tickets. Did you not?

> Attributing something to luck sounds like a lazy cop out, sorry.

Every one of us here has an unbroken line of lucky (lucky enough!) ancestors stretching back a billion years or so. Pretending it's not a thing is silly.

When you're born matters. Where you're born matters. Who you encounter matters. etc. etc. etc.

> What specifically must one luck out on? Have you tried?

I think perhaps we have different definitions of luck.


No, I think we have a similar definition of luck, but I think you’ve succumbed to a defeatist attitude. You have to be pretty unlucky to be permanently locked out of becoming a CEO, and if you’re dealt those cards, moaning about it on an online forum would be way down in your list of priorities.


> You have to be pretty unlucky to be permanently locked out of becoming a CEO…

Sure, but that's not what's being asserted. I am not "permanently locked out" of megacorp CEO roles; I'm just vanishingly unlikely to get one.

There are lots of people who have enough singing/dancing skill to be a mega popstar like Taylor Swift. There just aren't enough slots.

Could I become the next Steve Jobs? Maybe! I'd have to get really lucky.


Then why were you bringing up conditions of one's birth?

Vanishingly unlikely to get one if you try, or vanishingly unlikely to get one if you sit on your ass all day?

I assume you’re talking about the former, and yet I don’t think you’ve thought this through. I think you’ve blindly attributed to luck what actually requires time, perseverance, grit, lack of morality. The only way to figure that out is for you to offer up your understanding of what, specifically, one must luck out on.


> Then why were you bringing up conditions of one's birth?

Because they're a form of luck?

If you're born in the developed world, that's luck. If you're born to supportive parents, that's luck. If you're Steve Jobs and you wind up high school buddies with Woz in Mountain View, CA, that's luck. White? Luck. Male? Luck. Healthy? Luck. A light touching of psychopathy? Luck!

> Vanishingly unlikely to get one if you try, or vanishingly unlikely to get one if you sit on your ass all day?

Both.

> I think you’ve blindly attributed to luck what actually requires time, perseverance, grit, lack of morality.

There are many, many people who devote time, perseverance, and grit to their endeavours without becoming a "hugely expensive" CEO. Hence, luck. Is it the only thing? No. Is it a thing? Yes, absolutely.


None of what you’ve mentioned is a requirement to become a “hugely expensive” CEO. If you’re born into conditions which stop you from becoming self-reliant, that’s a different story, but we covered that.

Those people who devote time - do they devote time to becoming a hugely expensive CEO or just some “endeavours”?

I think we’re fundamentally disagreeing on whether or not lack of luck can be adequately compensated for by exerting more effort. I have not yet heard of a compelling argument for why that’s not the case.


> None of what you’ve mentioned is a requirement to become a “hugely expensive” CEO.

Again, no one said they're requirements. Just significant factors. You don't have to be white, you don't have to be male, you don't have to be from the developed world… but you do have to have some substantially lucky breaks somewhere.

A quadriplegic orphan of the Gaza War might become the next Elon Musk. But the odds are stacked heavily against them.


God save us from grindset influencers who peddle all this ‘if you didn’t succeed, it was down to you not trying hard enough’ malarkey. In some respects I appreciate the call to taking agency, but the fact that it results in people being unable to acknowledge the sheer extent of external factors in the world is crazy.

It comes from being young and naive.

No one said anything about megacorps though, just CEOs.


No one except the article we're all (theoretically) discussing, titled "CEOs are hugely expensive", citing "the boards of BAE Systems, AstraZeneca, Glencore, Flutter Entertainment and the London Stock Exchange" as examples in the introductory paragraph.


Now read the rest of the article. It talks about CEOs in general, not just megacorp ones, even if it does use megacorp CEOs in the intro. It is asking a general question of whether the role of a CEO should be automated. Articles often start with a hook that is related but does not wholly encompass the entirety of the point of the article.


> Now read the rest of the article.

I did.

> It talks about CEOs in general, not just megacorp ones, even if it does use megacorp CEOs in the intro.

This does not accurately describe the article.


Well if we're deriving different conclusions from the same article, then there is probably not much else to talk about.


That and having enough millions in the first place to meet with the right people and get/buy a position helps.


As the Rick & Morty quote goes, "that just sounds like luck with extra steps".


To Elon's credit he didn't start out with millions

Just a well off white politician as a father during South African apartheid.

Errol Musk's political career consisted of being a city councillor and a member of an opposition party. So, while true, this is minor league. His business ventures appear to be more relevant to his wealth.

https://en.wikipedia.org/wiki/Errol_Musk#Career


But the dice get rolled for everyone, and clearly success isn’t randomly distributed.

So what does that tell you?

It must be luck plus something else.


> It must be luck plus something else.

That is why I said “significant role”, not “the only requirement”, yes.


In science we have the idea of background noise: a random signal that is always there, fluctuating randomly.

And what is typically done is you ignore it. It’s always there, it’s random, and it applies to all samples.

Same with luck and success. You can’t control luck, so you focus on what’s left.


[flagged]


Just because luck plays a part in everything does not make it moot.

Set up two identical agents in a game whose rules guarantee a winner, and you will end up with a loser who is exactly as good as the winner.

I agree that CEO positions in aggregate are likely generally filled by people better at "CEOing", but there is nothing ruling out "losers" who were equally skilled, or even better, who just didn't make it due to luck or any of the innumerable other factors playing into life.
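The identical-agents point can be sketched as a toy simulation (a hypothetical setup, not anyone's actual model): run a single-elimination bracket between agents of literally identical skill, where every match is a fair coin flip. The rules guarantee exactly one champion, yet the champion is no more skilled than anyone they beat.

```python
import random

def tournament(n_agents=1024, seed=0):
    """Single-elimination bracket between identically skilled agents.

    n_agents should be a power of two so the bracket halves cleanly.
    Every match is decided by a fair coin flip, so the eventual
    champion is exactly as skilled as the first-round losers --
    they were just lucky more times in a row.
    """
    rng = random.Random(seed)
    agents = list(range(n_agents))
    while len(agents) > 1:
        # One winner per pairing advances; skill never enters the picture.
        agents = [rng.choice(pair) for pair in zip(agents[::2], agents[1::2])]
    return agents[0]

# The rules guarantee exactly one "winner"; which agent it is
# depends entirely on luck.
champion = tournament()
```

Different seeds crown different champions from the same pool of identical agents, which is the whole point: a guaranteed winner is not evidence of superior skill.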


Aww, you did the meme! https://lol.i.trollyou.com/


Because most people don't want to. Additionally, there is a limit on positions: only a few people will get there. But it doesn't mean that there was a competition based on abilities, that some extraordinary skills are needed, or that many other people would not be as good.


The guy who oversaw the silicon change is the one who's likely going to be the next CEO


Sorry that you can't control other people's lives & wants


This is like arguing that we shouldn't try to regulate drugs because some people might "want" the heroin that ruins their lives.

The existing "personalities" of LLMs are dangerous, full stop. They are trained to generate text with an air of authority and to tend to agree with anything you tell them. It is irresponsible to allow this to continue while not at least deliberately improving education around their use. This is why we're seeing people "falling in love" with LLMs, or seeking mental health assistance from LLMs that they are unqualified to render, or plotting attacks on other people that LLMs are not sufficiently prepared to detect and thwart, and so on. I think it's a terrible position to take to argue that we should allow this behavior (and training) to continue unrestrained because some people might "want" it.


What's your proposed solution here? Are you calling for legislation that controls the personality of LLMs made available to the public?


There aren't many major labs, and they each claim to want AI to benefit humanity. They cannot entirely control how others use their APIs, but I would like their mainline chatbots to not be overly sycophantic and generally not to try to foster human-AI friendships. I can't imagine any realistic legislation, but it would be nice if the few labs just did this of their own accord (or were at least shamed more for not doing so).


Unfortunately, I think a lot of the people at the top of the AI pyramid have a definition of "humanity" that may not exactly align with the definition that we commoners might be thinking of when they say they want AI to "benefit humanity".

I agree that I don't know what regulation would look like, but I think we should at least try to figure it out. I would rather hamper AI development needlessly while we fumble around with too much regulation for a bit and eventually decide it's not worth it than let AI run rampant without any oversight while it causes people to kill themselves or harm others, among plenty of other things.


At the very least, I think there is a need for oversight of how companies building LLMs market and train their models. It's not enough to cross our fingers that they'll add "safeguards" to try to detect certain phrases/topics and hope that that's enough to prevent misuse/danger — there's not sufficient financial incentive for them to do that of their own accord beyond the absolute bare minimum to give the appearance of caring, and that's simply not good enough.


I work on one of these products. An incredible amount of money and energy goes into safety. Just a staggering amount. Turns out it’s really hard.


Yes. My position is that it was irresponsible to publish these tools before figuring out safety first, and it is irresponsible to continue to offer LLMs that have been trained in an authoritative voice and to not actively seek to educate people on their shortcomings.

But, of course, such action would almost certainly result in a hit to the finances, so we can't have that.


Cynicism is so blinding.

Alternative take: these are incredibly complex nondeterministic systems and it is impossible to validate perfection in a lab environment because 1) sample sizes are too small, and 2) perfection isn’t possible anyway.

All products ship with defects. We can argue about too much or too little or whatever, but there is no world where a new technology or vehicle or really anything is developed to perfect safety before release.

Yeah, profits (or at least revenue) too. But all of these AI systems are losing money hand over fist. Revenue is a signal of market fit. So if there are companies out there burning billions of dollars optimizing the perfectly safe AI system before release, they have no idea if it’s what people want.


Oh, lord, spare me the corporate apologetics.

Releasing a chatbot that confidently states wrong information is bad enough on its own — we know people are easily susceptible to such things. (I mean, c'mon, we had people falling for ELIZA in the '60s!)

But to then immediately position these tools as replacements for search engines, or as study tutors, or as substitutes for professionals in mental health? These aren't "products that shipped with defects"; they are products that were intentionally shipped despite full knowledge that they were harmful in fairly obvious ways, and that's morally reprehensible.


Ad hom attacks instantly declare “not worth engaging with”.


That's a funny irony: I didn't use an ad hominem in any way, but your incorrect assertion of it makes me come to the same conclusion about you.


Pretty sure most of the current problems we see re drug use are a direct result of the nanny state trying to tell people how to live their lives. Forcing your views on people doesn’t work and has lots of negative consequences.


Okay, I'm intrigued. How in the fuck could the "nanny state" cause people to abuse heroin? Is there a reason other than "just cause it's my ideology"?


I don't know if this is what the parent commenter was getting at, but the existence of multi-billion-dollar drug cartels in Mexico is an empirical failure of US policy. Prohibition didn't work a century ago and it doesn't work now.

All the War on Drugs has accomplished is granting an extremely lucrative oligopoly to violent criminals. If someone is going to do heroin, ideally they'd get it from a corporation that follows strict pharmaceutical regulations and invests its revenue into R&D, not one that cuts it with even worse poison and invests its revenue into mass atrocities.

Who is it all even for? We're subsidizing criminal empires via US markets and hurting the people we supposedly want to protect. Instead of kicking people while they're down and treating them like criminals over poor health choices, we could have invested all those countless billions of dollars into actually trying to help them.


I'm not sure which parent comment you're referring to, but what you're saying aligns with my point a couple levels up: reasonable regulation of the companies building these tools is a way to mitigate harm without directly encroaching on people's individual freedoms or dignities, but regulation is necessary to help people. Without regulation, corporations will seek to maximize profit to whatever degree is possible, even if it means causing direct harm to people along the way.


Comparing LLM responses to heroine is insane.


I'm not saying they're equivalent; I'm saying that they're both dangerous, and I think taking the position that we shouldn't take any steps to prevent the danger because some people may end up thinking they "want" it is unreasonable.


No one sane uses the baseline web UI 'personality'. People use LLMs through specific, custom APIs, and more often than not they use fine-tuned models that _assume a personality_ defined by someone (be it the user or the service provider).

Look up Tavern AI character card.

I think you're fundamentally mistaken.

I agree that, for some users, use of specific LLMs for specific use cases might be harmful, but saying that the web UI's default AI 'personality' is dangerous is laughable.


heroin is the drug, heroine is the damsel :)


You’re absolutely right!

The number of heroine addicts is significantly lower than the number of ChatGPT users.


I am with you. Insane comparisons are the first signs of an activist at work.


I don't know how to interpret this. Are you suggesting I'm, like, an agent of some organization? Or is "activist" meant only as a pejorative?

I can't say that I identify as any sort of AI "activist" per se, whatever that word means to you, but I am vocally opposed to (the current incarnation of) LLMs to a pretty strong degree. Since this is a community forum and I am a member of the community, I think I am afforded some degree of voicing my opinions here when I feel like it.


Disincentivizing something undesirable will not necessarily lead to better results, because it wrongly assumes that you can foresee all consequences of an action or inaction.

Someone who now falls in love with an LLM might instead fall for some seductress who hurts him more. Someone who now receives bad mental health assistance might receive none whatsoever.


Your argument suggests that we shouldn’t ever make laws or policy of any kind, which is clearly wrong.


Your argument suggests that blanket drug prohibition is better than decriminalization and education.

Which is demonstrably false (see: US Prohibition ; Portugal)


I disagree with your premise entirely and, frankly, I think it's ridiculous. I don't think you need to foresee all possible consequences to take action against what is likely, especially when you have evidence of active harm ready at hand. I also think you're failing to take into account the nature of LLMs as agents of harm: so far it has been very difficult for people to legally hold LLMs accountable for anything, even when those LLMs have encouraged suicidal ideation or physical harm of others, among other obviously bad things.

I believe there is a moral burden on the companies training these models to not deliberately train them to be sycophantic and to speak in an authoritative voice, and I think it would be reasonable to attempt to establish some regulations in that regard in an effort to protect those most prone to predation of this style. And I think we need to clarify the manner in which people can hold LLM-operating companies responsible for things their LLMs say — and, preferably, we should err on the side of more accountability rather than less.

---

Also, I think in the case of "Someone who now receives bad mental health assistance might receive none whatsoever", any psychiatrist (any doctor, really) will point out that this is an incredibly flawed argument. It is often the case that bad mental health assistance is, in fact, worse than none. It's that whole "first, do no harm" thing, you know?


Who are you to determine what other people want? Who made you god?


...nobody? I didn't determine any such thing. What I was saying was that LLMs are dangerous and we should treat them as such, even if that means not giving them some functionality that some people "want". This has nothing to do with playing god and everything to do with building a positive society where we look out for people who may be unable or unwilling to do so themselves.

And, to be clear, I'm not saying we necessarily need to outlaw or ban these technologies, in the same way I don't advocate for criminalization of drugs. But I think companies managing these technologies have an onus to take steps to properly educate people about how LLMs work, and I think they also have a responsibility not to deliberately train their models to be sycophantic in nature. Regulations should go on the manufacturers and distributors of the dangers, not on the people consuming them.


Here’s something I noticed: if you yell at them (all caps, cursing them out, etc.), they perform worse, similar to a human. So if you believe that some degree of “personable answering” might contribute to better correctness, since some degree of disagreeable interaction seems to produce less correctness, then you might have to accept some personality.


Interesting: Codex just did the work once I swore. Wasted 3-4 prompts being nice. An angry style made it do it.


Actually DeepSeek performs better for me in terms of prompt adherence.


ChatGPT 5.2: allow others to control everything about your conversations. Crowd favorite!


so good.


I'm looking for a sports aggregation site like Brutalist Report - does anyone know if this exists?


"using the biggest software suite tailored for offices/IT environments is a red flag"

honestly the things i read here sometimes hahaha


The idea that the most commonly purchased thing in the market is of mediocre quality should not be hard to accept, and neither should the idea that some people only want to work with what they, personally, consider to be the best.


If this is "tailored", then I don't even want to know what how bad other MS products are. Oh wait, we can see that in Windows in general. But then again MS Teams is worse. It's almost as if the more MS has its fingers on something, the worse it gets.


>it was called "google without ads and seo spam"

... if you could talk to it like a human and have google search hold a conversation with you - sure. That distinction is a big big big difference though


I dont find “How tall is the Eiffel Tower” to be any more compelling than “height of eiffel tower.”


You're missing the "conversation" part.

If you're limiting yourself to simple fact retrieval questions like this then you are...limiting yourself.


>Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers).

You think a technology that allows millions of people all around the world to keep & trustlessly update a database, showing cryptographic ownership of something "the most useless technology ever invented"?


>Dehumanization seems to be the trend that the US is leading on right now.

Criminals have to want to stop doing crime before they can be rehabilitated.


> Criminals have to want to stop doing crime before they can be rehabilitated

This is literally what rehabilitation entails. Convincing criminals that they have better options than crime.

It doesn't work for everyone. There are absolutely bad people who will just violate social contracts, or who can't control their rage turning into violence. Those people need to be incapacitated. But for the vast majority of criminals, particularly non-violent criminals, crime is an economic cost-benefit exercise.


On top of that: the US has ~5% of the world's population but ~25% of the world's prisoners. So when we talk about "criminals", many of the people we're referring to are only incarcerated because they're subject to the US carceral system. If they lived in any other country, they'd be considered upstanding citizens.


Les Mis is a great treatment of exactly this, even if fictional. It takes more than justice to reform the soul. It takes society making room to forgive the repentant. We call this mercy, and it is the higher ideal.


If it's too much for society to forgive someone who has done their time, the very least society could do is to stop actively fighting their rehabilitation.

Whenever I read a story about someone who's been to prison and then ends up a solid, productive member of society, I can't help but think: "This person must have extraordinary grit and determination!" Because when a criminal gets out of prison, the entire system and the entire society is set up to oppose his rehabilitation and get him back into prison. Overcoming this active hostility must take a remarkable person.


> "This person must have extraordinary grit and determination!" Because when a criminal gets out of prison, the entire system and the entire society is set up to try to oppose his rehabilitation and get him back into prison. Overcoming this active hostility must take a remarkable person.

This is precisely the story of Les Misérables - that remarkable person being Jean Valjean.


This is an incredibly naive take and doesn't address what you quoted in your comment. We should not dehumanize anyone - criminal or otherwise.


This is the result of the dehumanization effort. It highlights OPs point in attempting to refute it


That's not entirely fair - there are all walks of life in those prisons. Some are undoubtedly beyond help, but the ones we can actually rehabilitate, or at least give meaningful work to, are an opportunity not worth overlooking.


I'm not justifying the crimes, and I think people should face the consequences of their actions, but I don't think it's that simple.

I think some people just haven't been exposed to the benefits of taking a path in life that doesn't involve crime. Some people also need to be convinced that there are viable alternatives to crime. And as someone else said, society needs to give them the chance to redeem themselves and pursue those alternate paths.


That's exactly what ethereum is - you're able to move funds without permission, trustlessly, instantly.


A payment system people actually want to use is not just the system to move funds. It also needs ways to move funds back and meet the onerous compliance burdens anything financial eventually has to deal with and a thousand other little things at the boundary between the perfect little world of the system and the messy, complicated world of a modern economy.


No, it's not a complete solution and has too much shady speculation and sketchy exchanges. Furthermore, it's not a stablecoin.


Most people are not suited for using crypto in a serious capacity. Crypto puts the entire onus of keeping the funds secure on the holder, and one misstep on their part can see their entire wallet siphoned away in a fraction of a second with no recourse, and likely no punishment on the thief.

Banks have fraud experts on staff, they have people that can monitor activity and stop such transactions. Both sides have accountability so that the thief can be tracked down. Your worst outcome is getting your card skimmed and not noticing it in time to report the fraud unless you deliberately send your bank info to someone else. But even then, your bank can probably still help you.


No you're right, thinking about laws & second order effects isn't a government thing

