Not sure what side of the debate you're taking here, but I think you've outlined the issue perfectly.
Engineers: "We couldn't have a list of commands, that's not how humans work, you're supposed to treat Alexa like a human, and the possibilities are endless"
Users: "Ok, then. Alexa, take out the trash."
Alexa: "Sorry, I can't do that."
(Ok, so obviously the possibilities aren't endless, right?)
I can somewhat understand general knowledge queries. For those, you can totally make the case that there's just too many things you can ask about to enumerate them all.
But imperative commands, like sending text messages, setting timers, or home automation? There's a finite list of those, since at the end of the day they actually have to be authored by some human who's writing a (say) Alexa skill. The number of utterances that may map to those skills is unbounded, but the number of skills isn't. So yes, for "command"-like things, they really should be able to give a list of them.
> (Ok, so obviously the possibilities aren't endless, right?)
This does not follow from the above. The set of positive integers is countably infinite. So is the set of positive even integers. Even if "half of the positive integers are missing!" there are still "endless" positive even integers.
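To spell out the standard argument (the notation here is mine, not anything from upthread): doubling is a bijection between the positive integers and the positive even integers, so the two sets have the same cardinality:

    f \colon \mathbb{Z}^{+} \to \{2, 4, 6, \dots\}, \qquad f(n) = 2n

Every positive even integer is hit exactly once, so "half the integers" is still a countably infinite set.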
By that logic the calculator app has an (effectively) infinite amount of functionality since there is an infinite number of integers which you can add together.
Well, I elaborated after. There's an actual finite set of skills that are coded up by actual engineers. A natural language system isn't hallucinating the ABI for the function calls that send text messages. There's code there which takes the utterance and sends the texts. What I'm saying is that you can take an inventory of what skills have been written (and/or are installed), and y'know... document them somewhere.
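To make that concrete, here's a toy sketch. The names (SkillRegistry, register, list_skills) are made up for illustration, not any real assistant SDK:

    # Hypothetical sketch of a skill registry; not a real assistant API.
    from typing import Callable, Dict

    class SkillRegistry:
        def __init__(self) -> None:
            self._skills: Dict[str, Callable[..., str]] = {}

        def register(self, name: str, handler: Callable[..., str]) -> None:
            # Unboundedly many utterances may resolve to this one name.
            self._skills[name] = handler

        def list_skills(self) -> list:
            # The finite, enumerable inventory I'm talking about.
            return sorted(self._skills)

    registry = SkillRegistry()
    registry.register("set_timer", lambda minutes: f"timer set for {minutes} minutes")
    registry.register("send_text", lambda to, body: f"texted {to}: {body}")
    print(registry.list_skills())  # ['send_text', 'set_timer']

However many phrasings the NLU layer accepts, they all bottom out in that finite table, and that's exactly the thing that could be documented.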
I'm not giving a middlebrow dismissal. There exists a real discoverability problem with virtual assistants, and asking users to "just try things" leads them to try things that don't work, and then conclude that the assistant must not be as useful as they thought.
Moreover, when an assistant doesn't do a thing, you're unlikely to try it again later; instead, most people will conclude "I guess it can't do that" and move on. If the feature gets added later, it's too late: the user has already written it off.
With every failed request, your confidence that the assistant really is intelligent and can understand you diminishes further. Every time a user hits a dead end with a virtual assistant, it doesn't encourage them to try more things that do work; it leaves them less confident that anything will.
I can't count the number of times my wife has been surprised I can get Siri to do things. Her typical response is "I can never get her to understand me so I just stick with timers." It's a real problem, and I'm not being dismissive of anything.
In contrast, reread your comment in this context. You're taking my comment, reading it in the least charitable way, condescending to me about the meaning of "finite" when the rest of my comment clarifies what I mean, and being completely dismissive of the point I'm trying to make. How can you say I'm the one issuing middlebrow dismissals?
> why you felt the need to make a comment just to make yourself look smart.
I hardly think it made me look smart. It's borderline trivial. The parent comment was insanely reductive in the standard HN style. I was hoping to help reduce the appearance of future such comments.
Sibling comments indicate that it had no positive effect. Such is life.
> I was hoping to help reduce the appearance of future such comments.
> Sibling comments indicate that it had no positive effect.
I'm really not trying to attack you here, but this honestly reads like a high-school kid trying to make themselves sound smart by emulating Spock from Star Trek.
Engineers: "We couldn't have a list of commands, that's not how humans work, you're supposed to treat Alexa like a human, and the possibilities are endless"
Users: "Ok, then. Alexa, take out the trash."
Alexa: "Sorry, I can't do that."
(Ok, so obviously the possibilities aren't endless, right?)
I can somewhat understand general knowledge queries. For those, you can totally make the case that there's just too many things you can ask about to enumerate them all.
But imperative commands, like sending text messages, setting timers, or home automation? There's a finite list of those, since at the end of the day they actually have to be authored by some human who's writing a (say) Alexa skill. The number of utterances that may map to those skills are unbounded, but the number of skills aren't. So yes, at the end of the day, for "command" like things, they really should be able to give a list of them.