As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
If the source code for said accusing AI cannot be examined and audited by the defense, then the state is denying the defendant their right to face their accuser. Mistrial.
This makes no sense. The source code isn't "their accuser" (and the AI is quite obviously not the defendant, either).
AI is nothing but a distraction here. It's not an entity. The negligence analysis is exactly the same as it would be for any other piece of software that causes harm.
It's rarely going to be criminal (though it should be more often, "AI" nonsense aside, when company executives take grossly negligent shortcuts that kill people), but AI doesn't require any extra laws.
What determines the decisions and actions of an AI? Hint: it is not the source code. For a trained model, the behavior comes from the learned weights and the data they were trained on, which auditing the code alone won't reveal.
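To make that concrete, here is a minimal sketch (plain NumPy; the feature values, weights, and labels are all made up for illustration, not taken from any real system). Two models share the exact same source code, and the decision flips purely because the weights differ:

    import numpy as np

    # Toy sketch: identical "source code", different learned weights.
    # Everything here (inputs, weights, labels) is hypothetical.

    def decide(features, weights):
        # One decision function shared by both models: a linear
        # score followed by a threshold.
        score = float(np.dot(features, weights))
        return "flag" if score > 0 else "clear"

    applicant = np.array([0.9, -0.2, 0.4])   # hypothetical input

    weights_a = np.array([1.0, 0.5, 0.5])    # one training run
    weights_b = np.array([-1.0, 0.5, 0.5])   # a different training run

    print(decide(applicant, weights_a))  # "flag"  (score = 1.0)
    print(decide(applicant, weights_b))  # "clear" (score = -0.8)

Auditing decide() tells you almost nothing about who gets flagged; the weights do the deciding, and in a real model there are billions of them.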