Decisions without morality

Over the past decade, we have seen the rise of Fake News, and the dangers of social media allowing us to stay in our closed opinion spheres, finding support for almost any misguided opinion there is.

Artificial Intelligence will aggravate that problem.

A.I. supplies the psychological confirmation mechanisms that convince the ignorant that they are right: they can continuously iterate and demand changes until they are satisfied.

In my experience, the only thing that (misguidedly) satisfies an ignorant client is to get exactly that which they want to see through unlimited iterations.

What "proves" to them that the result is "good" (something they themselves, by definition, lack the objective capability to determine) is that they persuade themselves through requesting changes, and that those changes in and of themselves supposedly mean that they have "refined" the results. In reality, all they have done is go through a long series of bizarre, entirely subjective mutations which at no point were guided by deliberate, knowledge-based and empirical judgment.

That is, in my mind, the real problem with A.I.: it enables limitless, automated and arbitrary mutations, without any of us ever really understanding the purposefulness or adequacy of the decisions made along the way. We simply outsource important decisions to the A.I., without understanding or even being aware of those decisions, relying entirely on the machine to make the right decisions.

This is certifiably insane.

We have no way of ascertaining the factual accuracy of the decisions made by the A.I. or, more importantly, the MORAL consequences of the decisions it makes.

We dismiss the dangers of A.I. at our peril. We may think that the current outputs produced by A.I.s are unsatisfactory, and that A.I. poses no threat for that reason. But those are mere details that A.I. will get progressively better at resolving. That is, after all, precisely what A.I. does: it learns and improves its output, and it is doing so at an ever-accelerating pace. If we believe that we can judge A.I. by the quality of what is produced TODAY, then we are missing the core of the problem.

Specific results are difficult for A.I. to produce TODAY, simply because they are still new to the A.I., and the machine lacks sufficient data upon which to base its results. This is a challenge that will disappear on its own through the very way that A.I. is constructed: to gather more data, and refine its output indefinitely.

The current problem with A.I. is this: the advocates for Artificial Intelligence are experts on the technology, not subject matter experts in the fields in which A.I. is being disruptively accelerated and implemented. The ACTUAL subject matter experts have no means through which to validate the veracity of A.I. output, and the technical experts don’t appear to care about the morality of it.

The long-term problem with A.I. is an exacerbation of that problem: the more advanced A.I.s get, the harder it will be for their human counterparts to validate or justify their results, or even understand how they arrived at those results.

The way I see it, the key issue is that A.I. is a “black box”: it produces results without us knowing how it arrived at those results, and we therefore lack the ability to validate them. And this is an escalating problem, which worsens the more complex A.I. gets: the black box gets blacker and blacker. We will eventually require “Voight-Kampff-detectors” in the hands of A.I.-psychologists, who will try to analyze the A.I., and understand its decisions.

Talk about chasing your own tail.

If we outsource decision-making – in any form – to the A.I. machine, we are in deep trouble. Because A.I.s CAN and DO make mistakes: their decisions are only as good as the data fed to them.

Ultimately, decisions are validated not on the grounds of veracity, but on the grounds of morality – morality that A.I.s don’t have. There may be thousands of data points that support a certain decision, but the decision may still be an immoral one.

Decisions without morality?

That is Skynet right there. Or, if you wish to be more sophisticated: Asimov's laws of robotics – laws that are yet to be enacted.
