Assume free association = hallucination. Assume hallucinations are exactly what makes LLMs useful, and your question can be rephrased as "Can you list some examples where LLMs were useful to you?"
Is not the purpose of a model to interpolate between two points? This is the underlying basis of "hallucination" (when that works out /not/ in our favour) or "prediction" (when it does). So it's a matter of semantics and a bit of overuse of the term "hallucination". But the model would be no more useful than a search engine if it just regurgitated its training data verbatim.
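To make the interpolation point concrete, here's a toy sketch in Python (plain line-fitting, not a claim about how an LLM works internally): the model sees only two training points, and the query value, the fit_line helper and the numbers are all made up for illustration. The prediction it returns was never in the training data, which is exactly the sense in which it's "invented" yet useful.

    # Toy sketch: "prediction is interpolation between known points".
    # The training data, query and helper below are purely illustrative.

    def fit_line(p1, p2):
        """Return slope and intercept of the line through two (x, y) points."""
        (x1, y1), (x2, y2) = p1, p2
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        return slope, intercept

    train = [(1.0, 2.0), (3.0, 6.0)]       # the only "training data" the model ever sees
    slope, intercept = fit_line(*train)

    query = 2.0                            # never appears in the training data
    prediction = slope * query + intercept # 4.0 -- invented by the model, yet useful

    print(f"f({query}) = {prediction}")    # a verbatim lookup / search engine would return nothing here

A pure lookup table could only answer f(1.0) and f(3.0); everything else it says is, in this loose sense, a hallucination.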
Not all lies are useless; some can be insightful even when blatantly wrong in themselves (for instance: taken literally, every scientific model is a lie). I can definitely see how an LLM hallucinating can help foster creativity (the same way psychedelics can), even if everything it says is bullshit.
I'm using hallucination to mean "not exactly the thing", not outright lying. So maybe the "truth" is "My socks are wet." A hallucination could be "My socks are damp."