This is the key problem. The LLM won't ask questions or clarify something that it doesn't understand; it'll just proceed with what it thinks it knows, and more often than not, get it wrong.
It does usually summarize what you want, but that's simply a restatement of the prompt (sometimes verbatim), which is not the same as the kind of follow-up questions a good junior engineer would ask.
Prompt engineering involves (among other things) anticipating this and encouraging the model to ask clarifying questions before it begins.
A separate but related point: models are getting better at recognizing and expressing their own uncertainty, but again, they won't do that automatically; you need to ask for that behavior in your prompt.
And finally, models aren't yet where they should be at stopping to ask questions. A lot of the Devin-style agentic products are going to push and evaluate their models on their ability to do this, so it's a capability you can reasonably expect to see in future models, and it will make a lot of my post obsolete.
So right now you need to ask the model to ask you clarifying questions and tell you what it’s uncertain of - before it goes off and does work for you.
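To make that concrete, here is a minimal sketch of the pattern in Python: a small wrapper that front-loads a clarification instruction onto any task prompt before it is sent to a model. The preamble wording and the function name are illustrative assumptions, not a canonical recipe; adapt the phrasing to your own workflow.

```python
# Illustrative sketch: prepend a clarification instruction to a task prompt
# so the model asks questions and states uncertainty *before* doing work.
# The exact wording below is an assumption, not a prescribed standard.

CLARIFY_PREAMBLE = (
    "Before you start, do two things:\n"
    "1. Ask me clarifying questions about any requirement you find ambiguous.\n"
    "2. List anything you are uncertain about and how it affects your plan.\n"
    "Only proceed with the task once I have answered.\n\n"
)

def with_clarification(task: str) -> str:
    """Return a prompt that requests questions and uncertainty up front."""
    return CLARIFY_PREAMBLE + "Task: " + task

# Example usage (the task text is hypothetical):
prompt = with_clarification("Add retry logic to our payment webhook handler.")
print(prompt)
```

The point of keeping this as a reusable wrapper is that the clarify-first instruction gets applied consistently, rather than being something you remember to type only some of the time.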