Are you using LangGraph tool nodes? I run very small, non-RLHF instruction models and see maybe a 2% failure rate on matching the response format to the tool definition. I would also guess your orchestration and pipeline are not configured correctly.
I think it's the other way around. Python became the language of choice for AI because it was already popular. Lots of things made it popular: use in systems-management scripts, web apps (Django, especially), then numerical work,...
I think the reason is that it's easy to learn enough to get things done, yet it's very flexible and very readable; and once the ecosystem started gaining momentum (which it clearly had by the time of the XKCD cartoon), that became an advantage too.
It reminds me of the proposal to shake hands at the end of GoldenEye:
> Miyamoto, with a series of suggestions for the game. “One point was that there was too much close-up killing – he found it a bit too horrible. I don’t think I did anything with that input. The second point was, he felt the game was too tragic, with all the killing. He suggested that it might be nice if, at the end of the game, you got to shake hands with all your enemies in the hospital.”
One of my favorite childhood video games is 8 Eyes for the NES - after beating each boss, the player character sits down with them at a table, and a little skeleton butler walks over and serves them tea. The little scene plays over and over: you and the defeated bad guy, sitting at the table, sipping tea, while a skeleton wanders over and offers a periodic refill.
No local model I've tried manages to get function calling right.
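One mitigation that works regardless of framework is to validate the model's raw tool-call output against the tool definition before dispatching, and retry on mismatch. Below is a minimal stdlib-only sketch of such a check; the `get_weather` tool schema and `validate_tool_call` helper are hypothetical names for illustration, assuming the model emits an OpenAI-style JSON function call:

```python
import json

# Hypothetical tool definition (OpenAI-style function schema), for illustration only.
TOOL_DEF = {
    "name": "get_weather",
    "parameters": {
        "required": ["city"],
        "properties": {"city": {"type": "string"}},
    },
}

def validate_tool_call(raw: str, tool_def: dict):
    """Return the parsed arguments if the model output matches the tool
    definition, else None so the caller can retry or fall back."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(call, dict) or call.get("name") != tool_def["name"]:
        return None
    args = call.get("arguments", {})
    if not isinstance(args, dict):
        return None
    params = tool_def["parameters"]
    # Reject calls missing required parameters or containing unknown keys.
    if any(k not in args for k in params.get("required", [])):
        return None
    if any(k not in params["properties"] for k in args):
        return None
    return args

# A well-formed call passes; a bare-text "call" is rejected.
ok = validate_tool_call('{"name": "get_weather", "arguments": {"city": "Oslo"}}', TOOL_DEF)
bad = validate_tool_call('get_weather(city="Oslo")', TOOL_DEF)
```

With a check like this in front of the executor, a 2% formatting failure rate mostly turns into retries instead of crashes.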