"I cannot imagine figuring out how to raise a newborn without ChatGPT. Clearly, people did it for a long time, no problem."
Basically, he didn’t know much about newborns and relied on ChatGPT for answers. It was a self-deprecating attempt at humor on a late-night show, like every other freaking guest would make, no matter how cliché. With a marketing slant, of course. He clearly said other people don’t need ChatGPT.
Given all of the replies in this thread, HN is apparently willing to stretch the truth if it casts Sam Altman in any negative light.
I disagree with the use of “literally” by the person above you, since Sam didn’t literally say those words (unless you subscribe to the new meaning of “literally” in the dictionary, of course).
At the same time, their interpretation doesn’t seem that far off. As per your comment, Sam said he “cannot imagine figuring out how” which is pretty close to admitting he’s clueless how anyone does it, which is what your parent comment said.
It’s the difference between “I don’t know how to paint” and “I cannot imagine figuring out how to paint”. Or “I don’t know how to plant a garden” and “I cannot imagine figuring out how to plant a garden”. Or “I don’t know how to program” and “I cannot imagine figuring out how to program”.
In the former cases, one may not know specifically how to do them but can imagine figuring them out. They could read a book, try things out, ask someone who has achieved the results they seek… If you can imagine how other people might’ve done it, you can imagine figuring it out. In the latter cases, it means you have zero idea where to start; you can’t even imagine how other people do it, hence you don’t know how anyone does it.
The interpretation in your parent comment may be a bit loose (again, I disagree with the use of “literally”, though that’s a lost battle), but it is hardly unfair.
The interpretation is very off. You are way too focused on whether the first sentence is quoted accurately. But
>Clearly, people did it for a long time, no problem.
in fact means Altman thinks the exact opposite of "he didn't know how anyone could raise a baby without using a chatbot". What he means is that while he can't imagine it, people make do anyway, so clearly it very much is possible to raise kids without ChatGPT.
What the gp did is the equivalent of someone saying "I don't believe this, but XYZ" and quoting them as simply saying they believe XYZ. People are eating it up though because it's a dig at someone they don't like.
I think what Altman defenders in this particular thread are failing to realise is that his real comment is already worthy of scrutiny and ridicule, and that it is dangerous.
Saying “no no, he didn’t mean everyone, he was only talking about himself” is not meaningfully better; he’s still encouraging everyone to do what he does and use ChatGPT to obsess over their newborn. It is enough of a representation of his own cluelessness (or greed, take your pick) to warrant criticism.
> One example given by Altman was meeting another father and hearing that this dad's six-month-old son had already started crawling, while Altman's had not. That prompted Altman to go to the bathroom and ask ChatGPT questions about when the average child crawls and if his son is behind.
> The OpenAI CEO said he "got a great answer back" and was told that it was normal for his son not to be crawling yet.
To be fair, that is a relatable anxiety. But I can't imagine Altman having the same difficulties as normal parents. He can easily pay for round-the-clock childcare, including during nights, weekends, mealtimes, and sickness. Not that he does, necessarily, but it's there when he needs it. He'll never know the crushing feeling of spending all day and all night soothing a coughing, congested one-year-old while feeling like absolute hell himself and having no other recourse.
Sounds like a great way for someone to accidentally harm their infant. What an irresponsible thing to say. There are all sorts of little food risks, especially until they turn 1 or so (and of course other matters too, but food immediately comes to mind).
The stakes are too high and the amount you’re allowed to get wrong is so low. Having been through the infant-wringer myself yeah some people fret over things that aren’t that big of a deal, but some things can literally be life or death. I can’t imagine trying to vet ChatGPT’s “advice” while delirious from lack of sleep and still in the trenches of learning to be a parent.
But of course he just had to get that great marketing sound bite didn’t he?
Sam Altman decided to irresponsibly talk bullshit about parenting because yes, he needed that marketing sound bite.
I cannot believe someone will wonder how people managed to decode "my baby dropped pizza and then giggled" before LLMs. I mean, if someone is honestly terrified about the answer to this life-or-death question and cannot figure out life without an LLM, they probably shouldn't be a parent.
Then again, Altman is faking it. Not sure if what he's faking is this affectation of being a clueless parent, or of being a human being.
Those aren’t the questions people will ask, though. They’ll go “what body temperature is too high?” Baby temperatures are not the same as ours; the thresholds for fevers and such are different.
They will ask “how much water should my newborn drink?” That’s a dangerous thing to get wrong (outside of certain circumstances, the answer is “none.” Milk/formula provides necessary hydration).
They will ask about healthy food alternatives. What if it tells them to feed their baby fresh honey in some homemade concoction (botulism risk)?
People googled this stuff before, but a basic search doesn’t insist to you that it’s right or consistently feed you bad info wrapped in emotional reassurance in the same fashion.
I was mostly arguing that Altman's statements, if taken at face value, show him to be unfit to be a parent. I stand by this, but mostly because I think people like him -- Altman, Musk, I tend to conflate -- are robots masquerading as human beings.
That said, of course Altman is being cynical about this. He's just marketing his product, ChatGPT. I don't believe for a minute he really outsources his baby's well-being to an LLM.
For people invested in AI it is becoming something like Maslow's Hammer - "it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail"
Wow, that's profoundly dangerous. Personally, I don't see how anyone could raise a kid without having a nurse in the family. I wouldn't trust AI to determine whether something were really a medical issue or not, and I would definitely have been at the doctor's far, far more often otherwise.
You don't need nurses -_-, just your own parents or someone who had kids before and some random books for theoretical questions.
Raising a kid is really very natural and instinctive; it's just things like how to get it to sleep, what to feed it and when, and how to wash it. I felt no terror myself and just read my book or asked my parents when I had some stupid doubt.
They feel like slightly more noisy cats, until they can talk. Then they become little devils you need to tame back to virtue.
Sam Altman has revealed himself to be the type of tech bro who is embarrassingly ignorant about the world and who, when faced with a problem, doesn’t think “I’ll learn how to solve this” but “I know exactly what’ll fix this issue I understand nothing about: a new app”.
He said they have no idea how to make money and that they’ll achieve AGI then ask it how to profit; he’s baffled that chatbots are making social media feel fake; and there’s the thing you mentioned about raising a child…