I'd assume an outcome is a negotiated agreement between the buyer and the agent provider.
Think of all the n8n workflows. Take a simple example like an expense receipt processing workflow or a lead sourcing workflow; I'd think the outcomes can be counted pretty well. In these cases, it's receipts successfully entered into the ERP, or the number of leads captured in Salesforce.
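To make that concrete, here's a minimal sketch of what metering on counted outcomes could look like. Everything here is hypothetical (the `BillableOutcome` shape, the statuses, and the per-outcome rate are all made up, standing in for whatever the buyer and provider actually negotiate):

```typescript
// Hypothetical sketch of outcome-based billing for a receipt-processing agent.
// Only outcomes the buyer agreed to pay for (receipts successfully entered
// into the ERP) count toward the invoice; failures and retries cost nothing.

type OutcomeStatus = "entered_in_erp" | "failed" | "needs_review";

interface BillableOutcome {
  receiptId: string;
  status: OutcomeStatus;
}

// Assumed negotiated term: a flat rate per successful outcome.
const RATE_PER_SUCCESS_USD = 0.25;

function computeInvoice(outcomes: BillableOutcome[]): number {
  const successes = outcomes.filter((o) => o.status === "entered_in_erp");
  return successes.length * RATE_PER_SUCCESS_USD;
}

// Example: 2 of 3 receipts made it into the ERP, so the buyer owes for 2.
const month: BillableOutcome[] = [
  { receiptId: "r-001", status: "entered_in_erp" },
  { receiptId: "r-002", status: "failed" },
  { receiptId: "r-003", status: "entered_in_erp" },
];

console.log(computeInvoice(month)); // 0.5
```

The interesting part isn't the arithmetic; it's that both sides have to agree up front on what counts as "entered_in_erp", which is exactly the negotiated agreement I mean.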
I am sure there are cases where outcomes are fuzzy, for instance an employer-employee agreement.
But in other cases it's clear-cut; for instance, my accounting agent would only get paid if it successfully uploads my tax returns.
Surely not applicable in all cases. But in cases where a human is measured on outcomes, the same should be applicable for agents too, I guess.
Indeed. The whole AI game is predicated on the premise that agents can deliver work equivalent to humans in some cases. If that never turns out to be the case, then this whole agentic stuff goes belly-up.
The alternative scenario is that they get better and do some work really well. That is interesting territory to focus on.