The Technology of A.I. is a Red Herring
I’m a designer and illustrator with a background in art history, and I’ve been working in tech since 1995, the past ten years on implementations of MarTech SaaS solutions involving machine learning, among other things. I understand both Art and Tech well enough to realize that the true impact of A.I. is ultimately about neither.
First, the debate about the technology of A.I. is a red herring. This is not ultimately about Technology, or even about Art – Tech is an enabler, and Art is the victim, but the perpetrator of the abuse is Business.
Generative A.I. acts as a business disruptor, and as long as such disruption is profitable (and unregulated), it will persist. Generative A.I. art is happening not because Technology is compelling it, but because Business Disruption is driving it, with Technology acting as an enabler, entirely without the consent of the victim, Art. It is much the same with self-driving cars disrupting the transportation industry: you’d be a fool to think that is happening for technological reasons. Tech is just the symptom of the underlying profit motive. And wherever there is a profit motive, a technological enablement, and a lack of regulation, bad things tend to happen. This is not, as some suggest, a doomsday prophecy – it is simply learning from history!
Analogies abound about other technological advancements, but this is not science fiction. A.I. is not merely being sold on what it might one day do; it is already doing it. Publishers are already integrating the technology, artists are already being made obsolete, and their work is already being appropriated, on an industrial scale. The genie is out of the bottle, and as always, legislation is struggling to catch up.
Generative A.I. is not ultimately a question of what is possible, but of what is right and what is wrong. There is nothing right about appropriating the work of unwitting human beings for purposes they have not agreed to. And what is wrong will be the driver of legislation and regulation, which is where A.I. is going next. Don’t believe me? Just look at the E.U. As it did with data privacy laws, the E.U. is leading the legislative field in A.I. regulation. But don’t expect the U.S. to take any initiative here – true to its core capitalist nature, the U.S. will always lead with its business interests and let all other interests fall by the wayside. Until it is forced to reconsider.
Perhaps that actually offers a glimmer of hope, because American profit interests are to some degree dependent on being able to do business in Europe. And just like with data privacy legislation, E.U. regulations may ultimately convince American businesses to comply.
Do what’s right.
A.I.’s False Sheen of Magic
Much is made of the derivative nature of A.I. And while A.I. is indeed derivative at its core, I’m not sure I agree that lack of originality is its most damning issue.
Computers are getting progressively better at experimenting and finding creative new combinations. Creativity is a process, not magic, and you could argue that there is very little true originality anyway – almost all forms of expression build on that which came before. That holds true even for human evolution itself.
No, the truly horrific prospect I worry about is IF and WHEN computers manage to match human ingenuity in combining old things into new ones. If that happens, we have outsourced what is a very core part of humanity, and where does that leave us? Even if computers COULD create original works, why on Earth would we want them to?! Are we really, as a species, looking to remove and replace human thought and creativity…? The very notion is antithetical to human existence.
Users and endorsers of A.I. are betraying some very core humanist principles, and they’re doing so seemingly cluelessly in terms of the consequences. Those of us who work in the creative application of technology (myself included) have a responsibility to step up and draw a line between what is acceptable and what is not. I create graphic design and design systems for machine learning, but it is MY work that is fed to the machine to determine which creative combinations are the most productive – not anyone else’s work. I think the unapproved, unacknowledged appropriation of anyone’s work – ANY work – needs to be outlawed, and more than that, I think that ought to be a completely foregone conclusion.
I’m especially disturbed by the combination of greed and completely uncritical adoption of technology, where nobody seems to reflect on the consequences. This seems to me a uniquely American phenomenon.
I’ve heard vapid American capitalists justify this destructive “disruption” (in fact, almost any disruption of almost any market) in the most crass way possible, implying that something is right just because it makes money. This amounts to a form of nihilism at best, or economic fascism at worst.
Furthermore, I have heard vapid American technologists justify the wholesale plundering of our cultural heritage that A.I. enables, simply because it represents a technological advancement – as if humanity is already of secondary importance to computers.
Both these positions are as baffling as they are horrifying.
Some speak out (with the best of intentions) against A.I. in defense of the human creative process, attributing almost magical powers to the latter. I am more than a little leery of suggesting there is “magic” involved in the creative process, even though it often involves powers and influences unseen. But just because something is subjective and subconscious does not make it magical.
This is in fact one of the very core problems with A.I.: that its influences and sources are unknown, which gives tech evangelists the liberty to imply the occurrence of “magic” (though of a different kind). The human brain processes only that which we have fed to it, and that which we have been fed at conception, through DNA. All of this is dangerously analogous to A.I. We get into very murky waters if we sanction ideas on the sole basis of the inspiration for those ideas being unknown to us.
If anything, we need to insist on more transparency, and not endorse the obscuring of references. Such obfuscation enables theft. As humans, we are the sum total of our experiences, and this is in fact equally true of A.I. – the difference is how far A.I. is able to interpolate and morph those experiences. This currently has its limits, but that will surely evolve, and the only way to escape this slippery slope is to make an absolute, unequivocal demand for transparency. We cannot argue theft if we cannot prove what was stolen.
So, I’m using the “magic” analogy as a word of caution here, because when we’re talking tech that is this complex, some will (and do!) find it indistinguishable from magic. What we call it matters, and we need this process to be seen for what it really is, with full transparency and zero romanticization: industrialized theft, and super-charged pillaging.
The further we let this progress, the more difficult it will be to show how it is done, and to demonstrate that zero magic is involved. We are now, in fact, in urgent need of what the illusionist Houdini did to spiritualism in the 1920s:
A.I. is not a boon to humanity, and its false sheen of magic needs to be debunked.
