
LLMs either need to be extremely good with very long context, or their output has to be constrained by and translated into formal logic, such as Prolog or Datalog, to make sure it's compatible with the rules and state of the world, and to determine what effect player and NPC actions have on the world.
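A minimal sketch of that idea, in Python rather than actual Prolog/Datalog: the world state lives in a fact base, and an LLM-proposed action only commits if its preconditions hold. All the fact and action names here are made up for illustration.

```python
# Datalog-style fact base: each fact is a (relation, subject, object) tuple.
facts = {
    ("at", "npc_guard", "tavern"),
    ("at", "sword", "armory"),
    ("at", "mug", "tavern"),
}

def holds(fact):
    return fact in facts

def actor_location(actor):
    # Look up where the actor currently is, if anywhere.
    return next((loc for rel, who, loc in facts
                 if rel == "at" and who == actor), None)

def apply_action(actor, verb, target):
    """Validate the action's preconditions, then mutate the fact base."""
    if verb != "pick_up":
        return f"error: unknown verb {verb}"
    loc = actor_location(actor)
    if not holds(("at", target, loc)):
        # Precondition failed: the target isn't where the actor is.
        return f"error: {target} is not at {loc}"
    facts.discard(("at", target, loc))
    facts.add(("carries", actor, target))
    return "ok"
```

So `apply_action("npc_guard", "pick_up", "sword")` fails (the sword is in the armory, the guard is in the tavern), while picking up the mug succeeds and updates the fact base. A real Datalog engine would add derived rules on top, but the gate between "what the LLM says" and "what actually happens" is the same.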


Longer context lengths will massively help, but game engines will also provide structure to the LLM. If you instruct an NPC to pick up a weapon and there isn't one, the engine will return an error to the LLM saying there's no weapon, and the LLM will have the character say "What do you mean? There's no weapon."
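That feedback loop can be sketched in a few lines; `llm_complete` is a placeholder for whatever model API the game would actually call, and the command names are made up.

```python
def llm_complete(prompt):
    # Placeholder: a real game would call its language model here.
    return f"(NPC reacts in character to: {prompt!r})"

def execute(command, world):
    """The engine validates the command and reports failures back."""
    if command == "pick_up_weapon" and not world.get("weapons"):
        return "error: no weapon in reach"
    return "ok"

world = {"weapons": []}
result = execute("pick_up_weapon", world)
if result.startswith("error"):
    # The engine error becomes prompt context, so the NPC can
    # acknowledge the failure in-world instead of hallucinating success.
    line = llm_complete(
        f"You tried to pick up a weapon, but the engine says: {result}. "
        "Respond in character.")
```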


Another reason extremely long context lengths may not be fully necessary is that engines can store information in a database and update it based on further interactions.
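A quick sketch of that, assuming SQLite as the store: NPC memories persist outside the context window, and only the few most relevant rows get re-injected into the prompt each turn.

```python
import sqlite3

# In-memory DB for the example; a game would persist this to disk.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memory (npc TEXT, fact TEXT, turn INTEGER)")

def remember(npc, fact, turn):
    db.execute("INSERT INTO memory VALUES (?, ?, ?)", (npc, fact, turn))

def recall(npc, limit=3):
    """Fetch the NPC's most recent facts to splice into the prompt."""
    rows = db.execute(
        "SELECT fact FROM memory WHERE npc = ? ORDER BY turn DESC LIMIT ?",
        (npc, limit))
    return [r[0] for r in rows]

remember("guard", "player stole an apple", 1)
remember("guard", "player returned the apple", 2)
context = recall("guard")  # newest memories first
```

The context window then only needs to hold the retrieved slice, not the whole interaction history.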


Logical conversation trees driving the LLM context seem way more reasonable to me than the other way around.
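One way to picture that arrangement, with hypothetical node names: the tree drives the flow and each node supplies the narrow context the LLM is allowed to paraphrase, rather than the LLM deciding where the conversation goes.

```python
# A tiny dialogue tree: player choices move between nodes,
# and each node's context constrains what the LLM may say.
tree = {
    "greet": {"context": "The guard is suspicious of strangers.",
              "next": {"ask_rumors": "rumors", "leave": None}},
    "rumors": {"context": "The guard has heard wolves near the mill.",
               "next": {"leave": None}},
}

def node_prompt(node_id):
    # The LLM only fills in wording within the node's context.
    return f"Stay on topic: {tree[node_id]['context']} Reply in character."

def choose(node_id, option):
    """Deterministic transition: the tree, not the model, picks the next node."""
    return tree[node_id]["next"].get(option)
```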

But then you are adding a component of "how can I interrogate the NPC?" to the game. Anything you add to a game makes it a different game, and not necessarily a better one.


"Ahoy there, one-armed pirate captain, couldst thou tell me how to implement the sieve of Eratosthenes in Python?"



