
  Generate an image of a clock face, but instead of the usual 12 hour numbering, number it with 13 hours. 

Gemini 2.5 Flash, or "Nano Banana", or whatever we're calling it these days. https://imgur.com/a/1sSeFX7

A normal(ish) 12-hour clock. It numbered the face twice, in two concentric rings. The outer ring is normal, but the inner ring numbers the 4th hour as "IIII" (fine, and a thing clocks actually do) and the 8th hour as "VIIII" (wtf).
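For what it's worth: "IIII" is the additive form of 4 and genuinely traditional on dials, while "VIIII" is the additive form of 9, which dials don't use (they use "IX"), and here it apparently isn't even at the 9th position. A minimal Python sketch of the two conventions (my own illustration, not from the thread):

    # Subtractive vs. additive Roman numerals for clock hours (1-12).
    # Traditional dials mix conventions: additive "IIII" for 4, but
    # subtractive "IX" for 9 -- so "VIIII" (additive 9) is doubly odd.
    def roman(n: int, additive: bool = False) -> str:
        if not additive and n in (4, 9):
            return {4: "IV", 9: "IX"}[n]
        tens, rest = divmod(n, 10)
        fives, ones = divmod(rest, 5)
        return "X" * tens + "V" * fives + "I" * ones

    for hour in (4, 8, 9):
        print(hour, roman(hour), roman(hour, additive=True))
    # 4 IV   IIII   <- the traditional clock-face form of 4
    # 8 VIII VIII   <- 8 is VIII either way
    # 9 IX   VIIII  <- "VIIII" is additive 9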



It should already be pretty clear that anything based on (limited to?) communicating in words/text can never grasp conceptual thinking.

We have yet to design a language to cover that, and it might be just a quixotic endeavor we're all diving into.


> We have yet to design a language to cover that, and it might be just a quixotic endeavor we're all diving into.

We have a very comprehensive and precise spec for that [0].

If you don't want to hop through the certificate warning, here's the transcript:

- Some day, we won't even need coders any more. We'll be able to just write the specification and the program will write itself.

- Oh wow, you're right! We'll be able to write a comprehensive and precise spec and bam, we won't need programmers any more.

- Exactly

- And do you know the industry term for a project specification that is comprehensive and precise enough to generate a program?

- Uh... no...

- Code, it's called code.

[0]: https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...


I've been thinking about that a lot too. Fundamentally, it's just a different way of telling the computer what to do. If it seems like telling an LLM to make a program is less work than writing it yourself, then either your program is extremely trivial, or there are dozens of nearly identical programs in the training set.

If you're actually doing real work, you have nothing to fear from LLMs, because any prompt specific enough to produce a given program is going to be comparable in complexity and effort to writing it yourself.
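As a toy illustration (leap years are my own example, not from upthread): an English spec precise enough to pin down the behavior ends up roughly as long as the code it specifies.

    # The spec, stated precisely enough to generate the function:
    # "Return True for years divisible by 4, except years divisible
    #  by 100, unless those are also divisible by 400."
    def is_leap_year(year: int) -> bool:
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

Past some threshold of precision, the spec and the implementation carry the same information.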


I don’t think that’s clear at all. In fact, the proficiency of LLMs at a wide variety of tasks would seem to indicate that language is a highly efficient encoding of human thought, much more so than people used to think.


Yeah, it’s amazing that the parent post so thoroughly misunderstands the fundamental realities of LLMs. The compression they reveal in language, even if blurry, is incredible.



Really? I can grasp the concept behind that command just fine.



