AI and Accountability

AI is going to replace things that, up until now, humans were uniquely suited for.

I'll use a current project as an example. I'm working on a side project that is heavily reliant on AI. Without getting too far into the weeds, it's a system that allows "smart" replaying of sets of related API calls to enable better testing. I could get 90% of the way to success pretty easily without any AI help. Sure, things like variables would get named "var1," "var2," etc., but it worked. Then there are weirder cases where you need to take context cues to figure out how to select the right value to use in a replay. Consider a response like this:

{
  "documents": [
    {
      "type": "ticket",
      "number": 123456
    },
    ...
  ]
}

On a subsequent call, we need to find the ticket number, but it might not be in the same place every time.

You may have multiple documents, and they may not arrive in the same order (think JSONPath selectors to get the document's number where the type is "ticket").
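To make that concrete, here's a minimal sketch of the kind of selection rule I mean, written as a plain Python function rather than a JSONPath expression so the logic is explicit. The function name and the sample payload are my own illustration, not part of the actual project:

```python
def find_document_number(payload, doc_type="ticket"):
    """Return the number of the first document whose type matches.

    Roughly equivalent to the JSONPath
    $.documents[?(@.type == 'ticket')].number
    but spelled out by hand so the selection rule is visible.
    """
    for doc in payload.get("documents", []):
        if doc.get("type") == doc_type:
            return doc.get("number")
    return None  # no matching document found

# Order doesn't matter: we match on type, not position.
response = {
    "documents": [
        {"type": "invoice", "number": 999},
        {"type": "ticket", "number": 123456},
    ]
}
print(find_document_number(response))  # 123456
```

The point is that the rule itself ("take the number from the document whose type is ticket") has to come from somewhere, and that's the part I'm handing to the AI.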

Now, in lieu of AI, I can present that to a human to make the decision. The end result is the same: I have the inputs that went to either the AI or the human, and I have the answer provided. In neither case do I have an audit trail of how the answer was arrived at. I would argue that you might get more of an audit trail from the AI, because there may be logged "reasoning" steps from the process. I know I can't audit what I did to come to the answer, other than "it felt right."

So, why is AI worse? Are humans better?

I think I need to have a bit more of a think.
