> they might have a big base of predefined response "templates".
Sort of. They fine-tuned the existing largest GPT-3 model on samples of dialog, which would be like your templates. The program doesn't "render templates", but fine-tuning has shifted its weights so that responses following those patterns are statistically more likely to be produced for a given prompt. See their homepage [0], Methods section.
> Also, they can have specific "plugin" calculators or things like that, so that once tokenized, the operations would be performed by the plugin and not by some magic AI understanding.
This is likely the direction they are going to take it, but this tech demo doesn't seem to include it. Some of the "ChatGPT jailbreaks" suggest that they are experimenting with enabling web search, likely in a manner like you describe. [1]
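To make the "plugin calculator" idea concrete, here's a minimal sketch (all names hypothetical, not OpenAI's actual design): a dispatcher checks whether the prompt is pure arithmetic and, if so, routes it to a real, safe evaluator instead of asking the language model to "do math" statistically.

```python
# Hypothetical sketch of the "plugin" idea: route arithmetic to a
# real calculator rather than relying on the model's token statistics.
import ast
import operator
import re

# Whitelisted operators so we never eval() arbitrary input.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calc(expr: str) -> float:
    """Safely evaluate a basic arithmetic expression via the AST."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(prompt: str) -> str:
    # If the prompt looks like pure arithmetic, use the plugin;
    # otherwise fall through to the (imaginary) language model.
    if re.fullmatch(r"[\d\s+\-*/().]+", prompt.strip()):
        return str(calc(prompt.strip()))
    return "<LM response>"  # placeholder for the actual model call
```

In a real system the model itself would presumably decide when to invoke the tool (e.g. by emitting a special token), but the division of labor is the same: the plugin performs the operation, the model handles the language around it.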
[0] https://openai.com/blog/chatgpt/
[1] https://twitter.com/goodside/status/1598253337400717313