I made this comment on a post that got flagged for reasons I don't understand. I don't want to feel like I wasted my time so I'm re-commenting here:
I was watching "HyperNormalisation" by Adam Curtis for the second time. In his segment on Eliza, an early example of a chat bot, I realized that Curtis makes a mistake in his interpretation of Eliza. For Curtis it's narcissism that makes Eliza attractive. Curtis often levels the charge that Westerners are individualistic and self-centered.
But when an interview with the creator Joseph Weizenbaum is shown, starting at 01hr:22min, he never says that. He relates how his secretary took to it, and even though she knew it was a primitive computer program, she wanted some privacy while she used it. Weizenbaum was puzzled by that, but then the secretary (or possibly another woman) says that Eliza "doesn't judge me and doesn't try to have sex with me."
What jumped out at me was that Weizenbaum's secretary was using Eliza as a thinking tool to clarify her thoughts. Most high school graduates in America don't learn critical thinking skills, as far as I can tell. Eliza is a useful tool because it encourages critical thinking and second-order thinking: it asks questions, reflects your answers back at you, and rephrases its questions. The secretary didn't want to use Eliza because she was a narcissist; she wanted to talk through some sensitive issues with what she knew was a dumb program so she could try to resolve them.
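For anyone curious how little machinery that "reflecting back" actually takes, here is a toy sketch in Python of the ELIZA trick: pattern matching plus pronoun reflection, nothing more. The rules below are invented for illustration; they are in the spirit of Weizenbaum's DOCTOR script, not the originals.

```python
import re

# Pronoun swaps used to "reflect" the user's words back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# A few toy rules (first match wins; the last is a catch-all).
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i feel (.*)", "What makes you feel {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Can you say more about that?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the user's phrase."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return an ELIZA-style reply: match a rule, reflect the capture."""
    text = statement.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."
```

So `respond("I feel nobody listens to me")` comes back as "What makes you feel nobody listens to you?" — no understanding anywhere, yet the question still prompts you to articulate your own answer, which is exactly the thinking-tool effect described above.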
That's how I feel about ChatGPT so far. It's a great thinking tool. Someone to bounce ideas around with. Of course, I know it's a dumb computer program and it makes mistakes, but it's still a cool new tool to have in the toolbox.
This is insane. People aren't going to use ChatGPT to think; they are going to use it for the opposite. That ChatGPT doesn't even know when it's wrong is why this is a problem. The vast majority of projects I've seen purport to replace your need for other people (developers, lawyers, etc.), but not being a domain expert, any such user is not going to be aware of the significant flaws in ChatGPT's answers. Heck, someone posted a ChatGPT bot that was supposed to digest legal contracts and make them understandable, but the bot couldn't get the basics of the Y Combinator SAFE right and had the directionality of the share assignment completely backwards. That's a fatal mistake that a layman won't realize.
> That ChatGPT doesn't even know when it's wrong is why this is a problem.
Have you never had a conversation with someone about a topic which they know nothing about, and they say/ask something that is wrong/stupid, but it still raises some question(s) you haven't thought about before?
I kind of think of ChatGPT like that: a friend that is mostly dumb, but sometimes makes my brain pull in a direction I haven't previously explored.
It's good that you think of ChatGPT like that. My point is that, clearly, that's not how most people are envisioning it, nor is that how businesses are purporting to sell it.
No, what's your point? Google doesn't purport to offer legal advice, unlike the AI companies that do but simultaneously disclaim any liability. You can be obtuse if you want, but it's completely disingenuous to compare companies purporting to offer legal advice with a google search.
I'm thinking about ChatGPT in the general sense, as a search engine replacement. I have no knowledge of or interest in apps that offer legal advice using ChatGPT as a back end. Seems risky unless you show a lot of disclaimers.
HyperNormalisation by Adam Curtis
https://www.youtube.com/watch?v=yS_c2qqA-6Y
Eliza
https://en.wikipedia.org/wiki/ELIZA