An attempt to be largely descriptive, and a little prescriptive, about the issue of “decision” in transaction systems.
Consider that much of transacting is already largely automated.  Take a simple situation – can the delivery person leave the package on the front porch?  In the first instance, the question has an algorithmic answer: did the recipient check a box saying that packages can be left on the porch?  At a second level, if there is a history of deliveries, there might be a pattern to follow.  But what if the package is unusually expensive, or perishable, or sensitive to heat or cold – a computer, or puppies?  And what if the algorithm says one thing, but experience demonstrates that harm is minimized by another course of action?
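A minimal sketch of this layering in Python – every name and threshold here (Package, Recipient, VALUE_ESCALATION_LIMIT, the three-delivery pattern) is invented for illustration, not taken from any real delivery system.  Unusual packages are checked first and escalate past both the checkbox and the history; otherwise the algorithmic answer comes first, then experience:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Package:
        declared_value: float        # in dollars
        perishable: bool             # e.g. puppies
        temperature_sensitive: bool  # e.g. a computer in summer heat

    @dataclass
    class Recipient:
        porch_ok_checkbox: Optional[bool]  # explicit instruction; None if never set
        prior_porch_deliveries: int        # count of uneventful porch drops

    # Hypothetical threshold -- above this, neither the checkbox nor the
    # delivery history should settle the question on its own.
    VALUE_ESCALATION_LIMIT = 500.0

    def leave_on_porch(pkg: Package, rcpt: Recipient) -> str:
        """Return 'yes', 'no', or 'escalate' (hand the call to a human)."""
        # Unusual packages override the routine answer entirely.
        if (pkg.declared_value > VALUE_ESCALATION_LIMIT
                or pkg.perishable or pkg.temperature_sensitive):
            return "escalate"
        # First level -- algorithm: the checkbox, if the recipient ever set it.
        if rcpt.porch_ok_checkbox is not None:
            return "yes" if rcpt.porch_ok_checkbox else "no"
        # Second level -- experience: a pattern of prior successful deliveries.
        if rcpt.prior_porch_deliveries >= 3:
            return "yes"
        # No instruction, no pattern: defer upward.
        return "escalate"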
Conversely, anti-lock brakes are an example of an algorithm overriding a human decision.  Except that the human can anticipate that the algorithm will do its work; the algorithm is also part of the environment in which the human decides.  The recent autopilot disaster in Indonesia highlights the problem of what should decide.
A rough schematization of a system of decision could be (a toy encoding in code follows the two lists):
  • What makes the decision (in order of increasing authority and effort, with some entanglement of 3 and 4):
    1. Algorithm (formula, protocol, program)
    2. Experience (data, priors)
    3. Rule (text)
    4. Human (applied judgment)
  • Who makes the decision.  This list is more approximate than the first.  The decisions include the choice of partner, the parameters of the transaction, adjustments, dispute resolution, dissuasion, interdiction, and even punishment.  From first instance to systemic:
    1. Parties
    2. Advisors
    3. Interested third parties, e.g. lenders, landlord, spouse
    4. Market-maker
    5. Insurer
    6. Regulator
    7. Legislator
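One possible encoding of this two-axis schema – a sketch, not anything the schema itself prescribes; the enum names and the escalate helper are invented here.  Each axis becomes an ordered enumeration, so “increasing authority and effort” and “first instance to systemic” are literal orderings, and escalation is a step up the ladder:

    from enum import IntEnum

    class Mechanism(IntEnum):
        """What makes the decision, in order of increasing authority and effort."""
        ALGORITHM = 1   # formula, protocol, program
        EXPERIENCE = 2  # data, priors
        RULE = 3        # text (entangled with HUMAN, per the note above)
        HUMAN = 4       # applied judgment

    class Decider(IntEnum):
        """Who makes the decision, from first instance to systemic."""
        PARTIES = 1
        ADVISORS = 2
        INTERESTED_THIRD_PARTIES = 3  # e.g. lenders, landlord, spouse
        MARKET_MAKER = 4
        INSURER = 5
        REGULATOR = 6
        LEGISLATOR = 7

    def escalate(level: IntEnum) -> IntEnum:
        """Step one rung up either ladder, capped at its top."""
        members = list(type(level))
        idx = members.index(level)
        return members[min(idx + 1, len(members) - 1)]

    # escalate(Mechanism.ALGORITHM) -> Mechanism.EXPERIENCE
    # escalate(Decider.REGULATOR)   -> Decider.LEGISLATOR

Making the two ladders explicit also makes the interaction between them discussable: a dispute might move up the Decider axis while staying at RULE on the Mechanism axis, or vice versa.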
See – Common Trust Fabric