It doesn't know anything. Large language models are essentially Markov chains with a very large context for their conditional probabilities. If the output contains the current date, that date was supplied out of band in some other way. It could be part of the "system prompt" (an extra set of tokens prepended to the input that shifts the conditional probabilities of the output), or the output could be fixed up after the fact with extra parsing and filtering applied after sampling.
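A minimal sketch of the "out of band" injection, assuming the common chat-messages shape (the role/content format is an assumption, not any specific vendor's API):

```python
from datetime import date

def build_messages(user_input: str) -> list[dict]:
    # Inject today's date via the system prompt. The model never "knows"
    # the date; the host application fetches it and the model merely sees
    # these tokens as part of its conditioning context.
    system = f"Current date: {date.today().isoformat()}. Answer the user."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_input},
    ]

msgs = build_messages("What day is it?")
print(msgs[0]["content"])
```

Every request rebuilds the system prompt, which is why the model can appear to "know" the date without it existing anywhere in the weights.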
LLMs are not magic, and encoding model metadata in the output is just asking for trouble. Inline model metadata should be treated as a statistically probable hallucination, just like all other LLM output.