Tagged: technology

Shame on you, H.R.!

I struggle mightily with understanding why H.R. is so willingly making itself an enabler of generative A.I., specifically in the hiring of creatives, and in the displacement and replacement of creative talent.

At best, it is inconsistent and self-defeating. At worst, it is deeply immoral.

First, you put up requirements for creatives to showcase expensive art, design or writing degrees. Then, you expect them to prove their abilities by proffering extensive examples of their work experience, and providing references to back it up. Next, you expect to be served examples of creative work, which will presumably be judged (subjectively, but hopefully by people who are qualified to make that judgment) on its inherent qualities. Then, at the very end, you expect them to replace all of that knowledge, all of that expertise, all of those evident qualities, with homogenized and industrialized machine output, reducing all of their varied experience to prompt writing. Why on Earth would anyone take on a mountain of student debt to learn a trade, take pride in the quality of their work, and use that pride to further the objectives of their employer, only to have their work reduced to data entry? Why would they become complicit in the complete erosion of their entire profession, and in the depleted value of the skillset they’ve fought so long to attain?

There are profound problems inherent in equating generative A.I. with human expertise and human learning. Let’s be clear here: whatever A.I. tech advocates may say to the contrary, generative A.I. engines don’t actually learn in the true semantic sense of the word. They are not sentient, they are not able to draw conclusions and weigh them against each other, or extrapolate from what they see. There is no lateral thinking whatsoever. There is no trial-and-error, no judicious application of the many considerations going into the appropriateness and “feel” of creative work. There is no social context for the use of the output of generative A.I. These machines are trained to identify patterns, replicate them, and fuse together replicated pieces, like a high-tech meat grinder. This is a BIG difference compared to how humans work, how actual learning works, and how creativity works.

More importantly, that replication is fraught with legal, commercial and moral problems. Some examples:

TRUE LEARNING VS. REPLICATING
A human artist who learns from another artist would probably start by copying the other artist’s work, but eventually transcend those influences and emerge with a style more their own. With generative A.I., sure, you can feed the machine examples of an artist’s work, but it doesn’t become that artist by ingesting those pieces of visual data, and it never develops aesthetic sensibilities of its own. All the A.I. learns to do is imitate that style, which is light-years away from the organic inspiration and learning that happens between human artists. Moreover, the output is judged by how precisely the A.I. approximates someone else’s work, whereas a human artist is typically judged on the uniqueness of their output. If a human artist simply replicated the work of another artist, there is a word for that: counterfeiting. The counterfeiter could (and would!) be sued and taken to court. Surely, you wouldn’t expect a company like Disney, for instance, to accept the wholesale, industrial-scale duplication of artistic work it has spent generations refining, would you?

UNIQUE VS. COPYCAT
Then we have the field of branding and corporate identity, which would suffer immensely from the uncritical adoption of generative A.I. First, anything incorporated into a brand’s visual or verbal presence that has been assembled from replicated pieces of other creative output is, by definition, not unique. Hence, it flies in the face of the very nature of brand building. Its value in defining a distinct identity, or in catching the attention of potential customers by standing out, is inherently compromised. Second, what is not unique cannot be owned or claimed as your own: others can freely copy it, and the coherence and recognizability of a brand would suffer greatly as a result. Commercially, this would devalue the brand far more than any savings to be had from using A.I.

DEVELOPING VS. STEALING
There are ways of creatively developing new, improved forms of output without directly replicating something that already exists, and that process is not new: it’s called R&D. The perennial truth in that field is that there are no shortcuts. If a car manufacturer wanted to develop a sportier car, they wouldn’t steal a racecar, take it apart, replicate its patented componentry, and implement it as-is. Instead, they would analyze the racecar, understand how it worked, apply whatever mechanical and aerodynamic principles were observable and applicable to their product, and then iterate and test until they found a workable set of components that produced the intended result, and were feasible to produce economically. Part of that testing would involve how it felt to drive the car, a sensation any A.I. would struggle to account for in its output. Moreover, the cheating involved in simply replicating existing components would be considered a crime; one for which the manufacturer would be taken to court and punished by the letter of the law. There’s an entire legal profession dedicated to this, and you had better believe they are gearing up to incorporate defense against the abuse of A.I. into their law practices.

GROWTH VS. ATROPHY
If you, as an H.R. professional, think that you are contributing to growing the competence of your creative department by hiring people with a focus on generative A.I. usage, you have an extremely short-sighted perspective. Do you seriously think, for instance, that the creative ability to envision and visualize bespoke solutions won’t be affected if you have people do nothing but sit and feed prompts to a machine, and judge which of its outputs are usable…? That removing creative immersion in artistic decision-making will not lead to the atrophy of creative abilities in your staff…? Imagine if you had your Art Directors do nothing but conduct Google image searches all day, being fed algorithm-homogenized images for years, recycling the same visuals in a giant aesthetic echo chamber. Do you think they would ever come up with anything eye-opening or truly creative again? We’ve already seen the effects of algorithm-based selection of content in the increasingly isolated thought bubbles of social media: it leads to an erosion of human contact, and a depletion of human ingenuity. Instead of coming up with sentiments that more truthfully and genuinely represent people’s opinions and feelings, we are reduced to sharing and recycling memes, and using pre-defined emojis. Imagine applying that same homogenizing effect to the entire field of creative work! If that doesn’t give you pause, you seriously haven’t given it enough thought.

CONVINCING VS. CONNING
The entire field of marketing is based on the principles of persuasion: that advertising can somehow convince a potential customer to change their purchasing decision in your favor. This has, so far, been entirely dependent on human-to-human communication, as it should be. Meaning, if you are being persuaded, it was ultimately another human being who persuaded you. If they did so through illegitimate means – by exaggerating, obfuscating, lying, swindling – then that is something for which a human can be held accountable. In the case of machine marketing output, what does that accountability look like? Nobody knows, and it will take decades of legal wrangling in court to establish enough precedents for the law to be considered settled. In the meantime, we will see A.I.s step over the line and repeatedly lie to people, without knowing they are doing so, without their handlers knowing it happened, and without anyone being held accountable. We’re already seeing it happen: very recently, a summer reading list with A.I.-generated content, including fake books and quotes, was published by several newspapers. This will lead to a deepening of already dangerous levels of untruth.

AGENCY AND RESPONSIBILITY
It is my firm belief that, at a certain level – say Director level and up – any professional should have a say in which tools are used to practice their trade, and how they are used. More than that, it should be part of their responsibilities. That is, in fact, part of what you are hiring them for, and more importantly, you should be hiring them to stand up for what is right, not to become unthinking tools of machine adoption. Denying professionals that agency, that choice, and that say in the execution of their own professional duties, is tantamount to a form of abuse – especially if it leads to the depletion and devaluing of their hard-earned capabilities, and the long-term dismantling of their own profession.

If you go into H.R. thinking that it’s your job to enable your employer’s abuse of their employees, I would suggest there is a deep flaw in your moral compass.

Shame on you.

The Technology of A.I. is a Red Herring

I’m a designer and illustrator with art history schooling, and I’ve been working in tech since 1995, the past 10 years with implementations of MarTech SaaS solutions involving machine learning, among other things. I understand both Art and Tech well enough to realize the true impact of A.I. is ultimately about neither.

First, the debate about the technology of A.I. is a red herring. This is not ultimately about Technology, or even about Art – Tech is an enabler, and Art is the victim, but the perpetrator of the abuse is Business.

Generative A.I. acts as a business disruptor, and as long as such disruption is profitable (and unregulated), it will persist. Generative A.I. art is happening not because Technology is compelling it, but because Business Disruption is driving it, and Technology is acting as an enabler, entirely without the consent of the victim, Art. Much in the same way as self-driving cars are disrupting the transportation industry: you’d be a fool to think that is happening for technological reasons. Tech is just the symptom of the underlying profit motive. And wherever there is a profit motive, a technological enablement, and a lack of regulation, bad things tend to happen. This is not, as some suggest, a doomsday prophecy – it is simply learning from history!

Analogies abound about other technological advancements, but this is not science fiction. A.I. is not merely being sold on promised capabilities; it is already delivering on them. Publishers are already integrating the technology, artists are already being made obsolete, and their work is already being appropriated, on an industrial scale. The genie is out of the bottle, and as always, legislation is struggling to catch up.

Generative A.I. is not ultimately about what is possible, but what is right, and what is wrong. There is nothing right about appropriating the work of unwitting human beings for purposes they have not agreed to. And what is wrong will be the driver in legislation and regulation, which is where A.I. is going next. Don’t believe me? Just look at the E.U. As it did with data privacy laws, the E.U. is leading the legislative field in A.I. regulations. But don’t expect the U.S. to take any initiatives here – true to its core capitalist nature, the U.S. will always lead with its business interests, and let all other interests fall by the wayside. Until they are forced to reconsider.

Perhaps that actually offers a glimmer of hope, because American profit interests are to some degree dependent on being able to do business in Europe. And just like with data privacy legislation, E.U. regulations may ultimately convince American businesses to comply.

Do what’s right.

A.I.’s False Sheen of Magic

Much is made of the derivative nature of A.I. And while A.I. is indeed derivative at its core, I’m not sure I agree that lack of originality is its most damning issue.

Computers are getting progressively better at experimenting and finding creative new combinations. Creativity is a process, not magic, and you could argue that there is very little true originality anyway – almost all forms of expression build on that which came before. That holds true even for human evolution itself.

No, the truly horrific prospect I worry about is IF and WHEN computers manage to match human ingenuity in combining old things into new ones. If that happens, we have outsourced what is a very core part of humanity, and where does that leave us? Even if computers COULD create original works, why on Earth would we want them to?! Are we really, as a species, looking to remove and replace human thought and creativity…? The very notion is antithetical to human existence.

Users and endorsers of A.I. are betraying some very core humanist principles, and they’re doing so seemingly without a clue as to the consequences. Those of us who work in the creative application of technology (myself included) have a responsibility to step up and draw a line between what is acceptable and what is not. I create graphic design and design systems for machine learning, but it is MY work that is fed to the machine, to determine which creative combinations are the most productive – not anyone else’s work. I think the unapproved, unacknowledged appropriation of anyone’s work – ANY work – needs to be outlawed, and more than that, I think that ought to be a completely foregone conclusion.

I’m especially disturbed by the unsettling combination of greed and completely uncritical adoption of technology, where nobody seems to reflect on the consequences. This seems to me a uniquely American phenomenon.

I’ve heard vapid American capitalists justify this destructive “disruption” (in fact, almost any disruption of almost any market) in the most crass way possible, implying that something is right just because it makes money. This amounts to a form of nihilism at best, or economic fascism at worst.

Furthermore, I have heard vapid American technologists justify the wholesale plundering of our cultural heritage that A.I. enables, simply because it represents a technological advancement – as if humanity is already of secondary importance to computers.

Both these positions are as baffling as they are horrifying.

Some speak out (with the best of intentions) against A.I. in defense of the human creative process, attributing almost magical powers to the latter. I am more than a little leery of suggesting there is “magic” involved in the creative process, even though it often involves powers and influences unseen. But just because something is subjective and subconscious does not make it magical.

This is in fact one of the very core problems with A.I.: that its influences and sources are unknown, which gives tech evangelists the liberty to imply the occurrence of “magic” (though of a different kind). The human brain processes only that which we have fed to it, and that which we have been fed at conception, through DNA. All of this is dangerously analogous to A.I. We get into very murky waters if we sanction ideas on the sole basis of the inspiration for those ideas being unknown to us.

If anything, we need to insist on more transparency, and not endorse the obscuring of references. Such obfuscation enables theft. As humans, we are the sum total of our experiences, and this is in fact equally true of A.I. – the difference is how far A.I. is able to interpolate and morph those experiences. This currently has its limits, but that will surely evolve, and the only way to escape this slippery slope is to make an absolute, unequivocal demand for transparency. We cannot argue theft if we cannot prove what was stolen.

So, I’m using the “magic” analogy as a word of caution here, because when we’re talking tech this complex, some will (and do!) find it indistinguishable from magic. What we call it matters, and we need this process to be seen for what it really is, with full transparency and zero romanticization: industrialized theft, and super-charged pillaging.

The further we let this progress, the more difficult it will be to show how it is done, and to demonstrate that zero magic is involved. We are now, in fact, in urgent need of what the illusionist Houdini did to spiritualism in the early 1900s:

A.I. is not a boon to humanity, and its false sheen of magic needs to be debunked.