Exactly. Looks like everybody's complaining that Siri isn't a better Ask Jeeves, when that's not the design goal. What people expect is an LLM that has full access to the phone. Nobody's even remotely close to shipping that.
My Siri-initiated timers are always done with my phone, probably 50 or more each week (work stuff). The only time I get a failure is when I release the side button too quickly. I've made certain the spoken feedback is enabled to reduce the risk of me making that mistake. (Settings > Siri > Siri Responses > Prefer Spoken Responses)
As for, "What time is it?"... Try activating Siri and only saying, "Time."
I suspect that's the main difference; if you're trying to use hands-free voice activation via "hey Siri" you get a much different experience than if you can touch the watch/phone to trigger Siri first.
And thinking back over it, more than half the failures are complete, i.e., it likely never activated at all. Very few are "it set a timer, but for the wrong time".
Good chance that's what captures our different Siri experiences. The few times I've used spoken activation were always with AirPods, and I always waited for the Siri acknowledgment (been a while; is it "Uh-huh"?) after saying "Hey, Siri." But my experience activating Siri with speech is so minimal that I wouldn't trust it as the basis for anything broader.
I wonder if we are getting different versions based on geolocation (I'm in Europe), because my experience is the absolute opposite of this. I actually had the thought "maybe I should switch to Apple to stop having to deal with this" just this week (although reading this thread, Siri sounds just as bad).
My experience is only through Android Auto, and it honestly makes me furious how bad it is. There is absolutely no other tech product in my life that comes even close to how badly voice commands are handled on Android.
In my experience, literally everything sucks:
- single language voice recognition (me speaking in English with an accent)
- multi-language voice recognition (English commands that include localised names from the country I'm in)
- action in context (understand what I'm actually asking it to do)
- supported actions (what it can actually do)
Some practical examples from just this week:
- I had to repeat three times that "no, I don't want to reply" because I made the mistake of having Google read a WhatsApp message while driving, and it got stuck in the "would you like to reply?" loop (it almost always gets stuck; it's my go-to example when showing people how bad it is)
- I asked it to queue a very specific playlist on Spotify, and it just couldn't get it right (no matter how specific my command was, I couldn't get it to play a playlist from MY account instead of an unrelated public playlist)
- I asked to add a song to a playlist, and it said it couldn't do that (at least it understood what I was asking? maybe)
And in general I gave up trying to use google maps through voice commands, because it's just not capable of understanding an English command if it contains a street/location name pronounced in the local language/accent.