
I feel the main thing LLMs are teaching us thus far is how to write good prompts that reproduce the things we want from any of them. A good prompt will work on a person too. This prompt would work on a person; it would certainly intimidate me.

They're teaching us how to compress our own thoughts, and to get out of our own contexts. They don't know what we meant, they know what we said. The valuable product is the prompt, not the output.



Einstein predicted LLMs too?

> If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and five minutes thinking about solutions.

(not sure if that was the original quote)

Edit: Actually an interesting read now that I look at the origin: https://quoteinvestigator.com/2014/05/22/solve/


Thanks, now I want to read a sci-fi short story where LLM usage has gotten so high that human-to-human language has evolved to be like LLM prompts. People now talk to each other in very intimidating, very specific paragraph-long instructions, even for simple requests and conversation.


so an extremely resource-intensive rubber duck


For you, yes. For me it's like my old teapot: I bought it at Target just because I walked past it, back when I didn't drink tea and didn't own a french press, and I didn't even start using it until five years after I bought it. Since then it's become my morning buddy (and sometimes my late-night friend). Thousands of cups; never fails. I could recognize it by its unique scorch and scuff marks anywhere.

It is indifferent towards me, though always dependable.


How is it as a conversationalist?


Either shrill or silent.


Then to what do you impute the state of mind called indifference?



