
The Brittleness of Ideation

I’m intensely protective of the creative process. It needs to be nurtured and cared for in its fragile infancy, before ideas are fully formed.

Ideas need to be moved forward gently and carefully, while teasing out their shape and structure. Every contributor needs to be afforded the headspace to go into all the nooks and crannies of their creativity, and explore hidden potential without hesitation or stifling judgment. This is not typically a matter of individual personality or psychological fortitude; even those who seem outwardly resilient alter their patterns of exploration based on some measure of self-doubt, external factors, or interpersonal dynamics. This usually happens imperceptibly and subconsciously, and there are no guarantees that these subtle course alterations are for the better.

The creative process depends on open-mindedness and a nurturing, supportive attitude. In this regard, it is not unlike therapy – especially when the process is collective and collaborative. The realm of ideas needs to be a safe space for all participants, where they can feel secure in probing for those elusive impulses that may or may not deliver something of substance. The key response should always be “yes, and…”, or a “maybe”, as opposed to a flat “no”. The rule of thumb must be to suspend judgment and disbelief, and move forward with the assumption that every grain of sand might potentially be hiding a speck of gold. Otherwise, the risk is that viable ideas are overlooked, or modified in ways that do not fully leverage their potential. Ideas are constantly evolving, so to regard them as static is simply to ignore their inherent potential.

Issuing opinions or judgments is almost always premature while ideas are still congealing. While the instinct to judge, to limit and to reject is profoundly human – it is related to our fight-or-flight instincts – it is deeply at odds with the ideation process. Rejection is an expression of fear, whereas acceptance is an expression of love. Evaluation inserted too early in the ideation process only serves to restrict the freedom of movement of ideas before they are fully formed, especially if such evaluation is based on personal bias and hypotheticals – almost regardless of how well informed such hypotheticals may be. Only when an idea has been given a clear shape and direction is it meaningful to evaluate it, and decide how to proceed, and this “ready” state is not always immediately apparent.

Therefore, there is really no need to rush to judgment, at any stage of the ideation process. Let the process take its time. Every single decision already happens under the natural pressure of second-guessing and doubt; that is a fully normal condition in navigating the ambiguity of the unknown and unseen. We must, as creatives, strive to lessen these doubts, and not succumb to self-censorship. This is especially true in collaborative ideation, as we can unwittingly have a censoring and suppressing effect on others by merely issuing an opinion, and this often leads to the halting of forward momentum, the stifling of creativity, and/or the strangling of the flow of thought. Even those who may seem outwardly resilient to such undue influence are subject to these effects, and those who lack the necessary understanding of these psychological group dynamics are typically better removed from the process, as their presence can be quite destructive. This calls for some self-awareness: I have myself had this effect on ideation on occasion, and have had to bow out of the process for that reason.

Instead, what we need to do as creatives is to welcome, document and catalogue ideas, move on to the next idea, and then see if patterns emerge where certain concepts share similarities, or may contradict each other to the point where it eventually becomes necessary to pick a path. There is ample time for judgment at the end of this process, but to prejudge the value of an idea before it has fully revealed itself is never productive. Though picking paths may give a false sense of progression, it does not enhance ideation to choose a direction too narrowly before it is well understood where a path may lead; it only restricts options in a manner that is rarely well considered or constructive. Some paths may converge, and some may diverge, but they might all lead to meaningful destinations.

If we apply a divergent mindset, as opposed to a convergent one, we ensure that we explore more possibilities and possible combinations of ideas. Even if some of these possibilities turn out not to be applicable to the task at hand, they very often lead to new trains of thought, and spawn new ideas which may serve some other purpose at a later point. We do ourselves a disservice by rejecting them.

This is why the true value of an idea can never just be assessed within the narrow confines of one specific objective. An idea always has value; it just needs to be nurtured, appreciated and considered in the right light.

The Technology of A.I. is a Red Herring

I’m a designer and illustrator with art history schooling, and I’ve been working in tech since 1995, the past 10 years with implementations of MarTech SaaS solutions involving machine learning, among other things. I understand both Art and Tech well enough to realize that the true impact of A.I. is ultimately about neither.

First, the debate about the technology of A.I. is a red herring. This is not ultimately about Technology, or even about Art – Tech is an enabler, and Art is the victim, but the perpetrator of the abuse is Business.

Generative A.I. acts as a business disruptor, and as long as such disruption is profitable (and unregulated), it will persist. Generative A.I. art is happening not because Technology is compelling it, but because Business Disruption is driving it, and Technology is acting as an enabler, entirely without the consent of the victim, Art. Much in the same way as self-driving cars are disrupting the transportation industry: you’d be a fool to think that is happening for technological reasons. Tech is just the symptom of the underlying profit motive. And wherever there is a profit motive, a technological enablement, and a lack of regulation, bad things tend to happen. This is not, as some suggest, a doomsday prophecy – it is simply learning from history!

Analogies abound about other technological advancements, but this is not science fiction. A.I. is not merely being sold as capable of these things – it is already doing them. Publishers are already integrating the technology, artists are already being made obsolete, and their work is already being appropriated, on an industrial scale. The genie is out of the bottle, and as always, legislation is struggling to catch up.

Generative A.I. is not ultimately about what is possible, but what is right, and what is wrong. There is nothing right about appropriating the work of unwitting human beings for purposes they have not agreed to. And what is wrong will be the driver in legislation and regulation, which is where A.I. is going next. Don’t believe me? Just look at the E.U. As it did with data privacy laws, the E.U. is leading the legislative field in A.I. regulations. But don’t expect the U.S. to take any initiatives here – true to its core capitalist nature, the U.S. will always lead with its business interests, and let all other interests fall by the wayside. Until they are forced to reconsider.

Perhaps that actually offers a glimmer of hope, because American profit interests are to some degree dependent on being able to do business in Europe. And just like with data privacy legislation, E.U. regulations may ultimately convince American businesses to comply.

Do what’s right.

A.I.’s False Sheen of Magic

Much is made of the derivative nature of A.I. And while A.I. is indeed derivative at its core, I’m not sure I agree that lack of originality is its most damning issue.

Computers are getting progressively better at experimenting and finding creative new combinations. Creativity is a process, not magic, and you could argue that there is very little true originality anyway – almost all forms of expression build on that which came before. That holds true even for human evolution itself.

No, the truly horrific prospect I worry about is IF and WHEN computers manage to match human ingenuity in combining old things into new ones. If that happens, we have outsourced what is a very core part of humanity, and where does that leave us? Even if computers COULD create original works, why on Earth would we want them to?! Are we really, as a species, looking to remove and replace human thought and creativity…? The very notion is antithetical to human existence.

Users and endorsers of A.I. are betraying some very core humanist principles, and they’re doing so seemingly cluelessly in terms of the consequences. Those of us who work in the creative application of technology (which includes myself) have a responsibility to step up and draw a line for what is acceptable, and what is not. I create graphic design and design systems for machine learning, but that is MY work that is fed to the machine, to determine which creative combinations are the most productive – not anyone else’s work. I think the unapproved, unacknowledged appropriation of anyone’s work – ANY work – needs to be outlawed, and more than that, I think that ought to be a completely foregone conclusion.

I’m especially disturbed by the unsettling combination of greed and completely uncritical adoption of technology, where nobody seems to reflect on the consequences. This seems to me a uniquely American phenomenon.

I’ve heard vapid American capitalists justify this destructive “disruption” (in fact, almost any disruption of almost any market) in the most crass way possible, implying that something is right just because it makes money. This amounts to a form of nihilism at best, or economic fascism at worst.

Furthermore, I have heard vapid American technologists justify the wholesale plundering of our cultural heritage that A.I. enables, simply because it represents a technological advancement – as if humanity is already of secondary importance to computers.

Both these positions are as baffling as they are horrifying.

Some speak out (with the best of intentions) against A.I. in defense of the human creative process, attributing almost magical powers to the latter. I am more than a little leery of suggesting there is “magic” involved in the creative process, even though it often involves powers and influences unseen. But just because something is subjective and subconscious does not make it magical.

This is in fact one of the very core problems with A.I.: that its influences and sources are unknown, which gives tech evangelists the liberty to imply the occurrence of “magic” (though of a different kind). The human brain processes only that which we have fed to it, and that which we have been fed at conception, through DNA. All of this is dangerously analogous to A.I. We get into very murky waters if we sanction ideas on the sole basis of the inspiration for those ideas being unknown to us.

If anything, we need to insist on more transparency, and not endorse the obscuring of references. Such obfuscation enables theft. As humans, we are the sum total of our experiences, and this is in fact equally true of A.I. – the difference is how far A.I. is able to interpolate and morph those experiences. This currently has its limits, but that will surely evolve, and the only way to escape this slippery slope is to make an absolute, unequivocal demand for transparency. We cannot argue theft if we cannot prove what was stolen.

So, I’m using the “magic” analogy as a word of caution here, because when we’re talking tech that is this complex, some will (and do!) find it indistinguishable from magic. What we call it matters, and we need this process to be seen for what it really is, with full transparency and zero romanticization: industrialized theft, and super-charged pillaging.

The further we let this progress, the more difficult it will be to show how it is done, and to demonstrate that zero magic is involved. We are now, in fact, in urgent need of the kind of debunking the illusionist Houdini delivered to spiritualism in the early 1900s:

A.I. is not a boon to humanity, and its false sheen of magic needs to be debunked.

Your Brand Personalized

Dynamic Creative In User Experience Design

All design is inherently subjective.

How people perceive and are affected by design depends on who they are, what their circumstances look like and what their expectations are. Subjectivity applies not only to media and graphic design, but to art, fashion, architecture or any other form of visual expression.

We judge with our eyes.

This is especially problematic in the world of user experience, which to a large extent revolves around functionality and usability. The usefulness of browser- and app-based experiences depends on how well they enable users to accomplish what they’re trying to do. Accomplishing goals, on the other hand, also depends on how motivated users are. This adds an emotional dimension to this otherwise highly rational discipline.

Structuring a website (or an app) so that the user merely understands how to use it is simply not sufficient. If the user does not want to use a website, the empirical knowledge of how to use it is largely inconsequential. Therefore, content must be packaged in a way that the user is emotionally motivated to partake of it, and here design has an important role to play.

A study conducted by Carleton University in Ottawa, published in the journal Behaviour & Information Technology, determined that users form their impressions of a website and its visual appeal within the first 1/20th of a second of visiting it. Even more surprisingly, these first impressions colored the entire experience of the site, whether or not the whole site actually turned out to match that initial perception. The conclusion of the study was that this first impression was “unlikely to involve cognition” – meaning it is largely an emotional response.

However, presenting a differentiated audience with a unified, undifferentiated design – however well optimized – will not account for the variances in people’s preferences and goals. Such a design will never be entirely effective. Given that users are different and have different preferences and expectations, effective UX design has to be personalizable. This is quickly becoming an expectation, if not the norm.

A recent study published by Salesforce found that 80% of users expect online experiences to be personalized and tailored to their needs. This means that brands simply cannot afford to broadcast the same uniform message to a single, undifferentiated audience. Marketers need to find ways of communicating to individuals, not audiences.

This poses a problem of scale.

Very few marketers or publishers of content can afford to employ armies of designers and content producers to tailor experiences to each individual user. More importantly, they cannot do so in real time.

Enter dynamic personalization.

By devising smartly constructed, modular design systems based on creative componentry that can be freely interchanged, it is possible to compose entire user experiences based on incoming media signals. These signals can reveal behavioral, demographic and psychographic details about each individual user, allowing the experience to programmatically flex and adjust to some of those factors.
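To make this concrete, here is a minimal TypeScript sketch of what such signal-driven composition might look like. All signal fields, component names and copy below are hypothetical placeholders, not references to any specific product or implementation.

```typescript
// Hypothetical signals a media or analytics layer might supply about a visitor.
type UserSignals = {
  device: "mobile" | "desktop";
  returningVisitor: boolean;
  interest?: "price" | "quality";
};

// A creative component: an interchangeable building block of the experience.
type CreativeComponent = { id: string; render: (signals: UserSignals) => string };

// A small library of interchangeable hero variants.
const heroVariants: CreativeComponent[] = [
  { id: "hero-price", render: () => "<Hero headline='Best value, guaranteed' />" },
  { id: "hero-quality", render: () => "<Hero headline='Crafted to last' />" },
  { id: "hero-default", render: () => "<Hero headline='Welcome' />" },
];

// Compose an experience from incoming signals instead of serving one static page.
function composeExperience(signals: UserSignals): string[] {
  const hero =
    heroVariants.find((v) => v.id === `hero-${signals.interest}`) ??
    heroVariants.find((v) => v.id === "hero-default")!;

  const layout = signals.device === "mobile" ? "<StackedLayout>" : "<TwoColumnLayout>";
  const cta = signals.returningVisitor
    ? "<Cta label='Pick up where you left off' />"
    : "<Cta label='Get started' />";

  return [layout, hero.render(signals), cta];
}

// Example: a returning mobile visitor with a price-oriented profile.
console.log(composeExperience({ device: "mobile", returningVisitor: true, interest: "price" }));
```

The specific components matter less than the principle: the design system is expressed as interchangeable parts that can be assembled per visitor, in real time.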

This ensures a successful, results-oriented communicative solution that scales. In addition, it may even be able to predict favorable outcomes through the application of A.I. technology such as machine learning. Done right, it will allow marketers to dial in the most effective combination of content and design with increasing accuracy.

Testing naturally becomes a central element in such solutions, where a multitude of creative options can be fed into the learning engine. The ideal mix can thus be determined, assembled and verified as users arrive on site. In iProspect’s own testing, we regularly demonstrate incremental conversion gains through dynamic, personalized experiences. However, fully capitalizing on this opportunity requires a shift in how brands look at design and how brands go to market.
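As a rough illustration of how such a learning loop can shift traffic toward better-performing combinations, here is a minimal epsilon-greedy sketch in TypeScript. The variant names and conversion signal are hypothetical, and a production system would rely on richer signals and more sophisticated models.

```typescript
// Hypothetical sketch: learn which creative variant converts best while still exploring.
type VariantStats = { shows: number; conversions: number };

const stats: Record<string, VariantStats> = {
  "hero-price": { shows: 0, conversions: 0 },
  "hero-quality": { shows: 0, conversions: 0 },
  "hero-default": { shows: 0, conversions: 0 },
};

const EPSILON = 0.1; // explore 10% of the time, exploit the best-known variant otherwise

function conversionRate(s: VariantStats): number {
  return s.shows === 0 ? 0 : s.conversions / s.shows;
}

function pickVariant(): string {
  const ids = Object.keys(stats);
  if (Math.random() < EPSILON) {
    // Exploration: occasionally show a random variant so every option keeps gathering data.
    return ids[Math.floor(Math.random() * ids.length)];
  }
  // Exploitation: show the variant with the best observed conversion rate so far.
  return ids.reduce((best, id) =>
    conversionRate(stats[id]) > conversionRate(stats[best]) ? id : best
  );
}

function recordOutcome(id: string, converted: boolean): void {
  stats[id].shows += 1;
  if (converted) stats[id].conversions += 1;
}

// Usage: pick a variant for each visit, then feed the observed outcome back into the stats.
const shown = pickVariant();
recordOutcome(shown, Math.random() < 0.05); // placeholder for a real conversion event
```

Over enough traffic, even this naive loop tends to favor the stronger variant; the same structure extends to multivariate creative mixes and more advanced learning engines.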

The era of the static, monolithic brand is over. Modern brands need to understand their audiences, find ways of communicating on a personal level, and truly become interactive.

(Reposted)

Casting Call

Design, and especially identity design – but really all types of design that depend on recognizability – is not like auteurship, or artistry.

As a designer, you are not speaking in your own voice – you need to find the voice of the client, and understand how that voice resonates (or doesn’t resonate) with the intended audience.

This means that your job as a designer is more akin to that of a casting director.

You need for your design to play a role, and to express itself in certain ways that are consistent with the script – i.e., the client’s brand strategy. And if the client is unsure about that strategy – meaning, they are essentially shooting a movie without a script – you need to define the role for them, and find the right actor. Your design needs to embody the personality that you would advise the client to assume.

You need to conduct a Casting Call.

You need to define the client’s visual persona, and articulate how it should act when it’s on the stage, in order to be able to convey the right message in a credible and convincing way.

Some clients will come to you and ask for a certain actor: “We’d like for Al Pacino to play this role!”. Meaning, they come to you with an outside-in idea of someone who they think would represent them, when the actor’s job is to FORGET about who they are, not act as themselves, and instead embody the ROLE.

For that reason, you ought to take all such suggestions with a huge scoop of salt.

Treating a visual brand identity like a known commodity, like a well-known actor, means the audience is at risk of only seeing THAT actor; seeing THAT face – not experiencing the role that the script actually calls for.

This is the same as applying a recognizable “style” to a brand identity. “We want clean and modern”, say many corporations, afraid of stepping out of line with expectations and industry tropes. But their identity, as defined by the brand strategy, may not express itself best in a clean and modern “style”, just like Al Pacino might not be the right face, or the right voice, for that role.

Think about your brand as if it’s an actor about to get on the stage, and then define the role based on how you need the audience to experience and react to your brand. Make sure your designer conducts a casting call, and casts the right actor in the role – an actor who will be able to let audiences see past their recognizable face, and experience the persona the brand is trying to express.

If your brand identity design does the job right, it will be able to embody the true persona of the brand, as opposed to pretending, and putting on stage clothes while still being recognizable as someone else’s borrowed personality.

Design Ethos

Never just create a one-off asset when you can construct something reusable

Never just create a reusable single template when you can design something modular

Never just design modular components when you can devise a framework

Never just devise a framework when you can architect a design system

Never just architect a design system when you can define a user experience

Never just define a user experience when you can plan out a user journey

Never just plan out a single user journey when you can improve lifetime value

(Reposted from 2020)

Decisions without morality

Over the past decade, we have seen the rise of Fake News, and the dangers of social media allowing us to stay in our closed opinion bubbles, finding support for almost any misguided opinion there is.

Artificial Intelligence will only aggravate that problem.

A.I. supplies the psychological confirmation mechanisms that convince the ignorant that they are right: they can continuously iterate and demand changes until they are satisfied.

In my experience, the only thing that (misguidedly) satisfies an ignorant client is to get exactly that which they want to see through unlimited iterations.

What “proves” to them that the result is “good” (which they themselves by definition lack the objective capability to determine) is that they persuade themselves through requesting changes, and that the progressive results of those changes in and of themselves supposedly mean that they have “refined” the results. When, in reality, all they have really done is go through a long series of bizarre, totally subjective mutations which at no point were guided by deliberate, knowledge-based or empirical judgments.

That is, in my mind, the real problem with A.I.: it enables limitless, automated and arbitrary mutations, without any of us ever really understanding the purposefulness or adequacy of the decisions made along the way. We simply outsource important decisions to the A.I., without understanding or even being aware of those decisions, relying entirely on the machine to make the right decisions – even though the machine has no awareness of “right” or “wrong”.

This is certifiably insane.

We ought to be deeply troubled by the fact that we have no way of ascertaining the factual accuracy of the decisions made by the A.I. or, more importantly, the MORAL consequences of the decisions it makes.

We dismiss the dangers of A.I. at our peril. We may think that the current outputs produced by A.I.s are unsatisfactory, and that A.I. poses no threat for that reason. But those are mere details that A.I. will get progressively better at resolving. That is, after all, precisely what A.I. does: it learns, iterates and improves its output, and that is happening at an ever accelerating pace. If we believe that we can judge A.I. by the quality of what is produced TODAY, then we are missing the very essence of the problem.

Specific results are difficult for A.I. to produce TODAY, simply because they are still new to the A.I., and the machine lacks sufficient data upon which to base its results. This is a challenge that will disappear on its own through the very way that A.I. is constructed: to gather more data, and refine its output indefinitely.

The current problem with A.I. is this: the advocates for Artificial Intelligence are experts on the technology, not subject matter experts in the fields in which A.I. is being disruptively accelerated and implemented. The ACTUAL subject matter experts have no means through which to validate the veracity of A.I. output, and the technical experts don’t appear to care about the morality of it.

The long-term problem with A.I. is an exacerbation of that very problem: the more advanced A.I.s get, the harder it will be for their human counterparts to validate or justify their results, or even understand how they arrived at those results.

The way I see it, the key issue is that A.I. is a “black box”: it produces results without us knowing how it arrived at those results, and we therefore lack the ability to validate or dispute them. And this is an escalating problem, which worsens the more complex A.I. gets: the black box gets blacker and blacker. We will eventually require “Voight-Kampff-detectors” in the hands of A.I.-psychologists, who will try to analyze the A.I., and understand its decisions.

Talk about chasing your own tail.

If we outsource decision-making – in any form – to the A.I. machine, we are in deep trouble. Because A.I.s CAN and DO make mistakes: their decisions are only as good as the data fed to them.

Ultimately, decisions are validated not on the grounds of veracity, but on the grounds of morality – morality that A.I.s don’t have. There may be thousands of data points that support a certain decision, but the decision may still be an immoral one.

Decisions without morality?

That is Skynet right there. Or, if you wish to be more sophisticated: Asimov’s Three Laws of Robotics – laws that are yet to be enacted.

The Risk of Creative Misalignment

The creative process involves some highly delicate group dynamics.

The objectives of each stage of the process are very different, and because this requires different approaches at each different stage, misalignment in how people perceive the process and in how they behave can cause disruption.

Usually, this disruption occurs as a result of a misalignment or asymmetry in roles, behaviors or contributions during the process. However, these are mainly triggers, not necessarily the underlying causes.

This misalignment is actually a very common problem in nearly all creative work – a misalignment between intents and attitudes, which can often cause conflicts. Some people are in ideation mode whereas other people are in resolution or critique mode, etc.

Since this is a very common part of my day job, and since I have also actually lectured at various design schools on creativity, as well as written a book on the creative process, I wanted to share my perspective on these possible creative misalignments.

First, let me dig into the parameters of the creative process:

Basically, the creative process is one of repeat contractions and expansions through different stages. The process is convergent and divergent at different times, and trying to be divergent while others are trying to be convergent (and vice versa) is what causes most creative disagreements. At times, there is a need for decisiveness and clarity, then sometimes there’s a need to allow for ambiguity and an open mind, and finally, there is sometimes a need for constructive refinement, which is where special considerations for tact and tone need to be made, since efforts and emotional investments are at stake.

I tend to consistently split the creative process into three stages: Strategy, Tactics and Execution.

The Strategy phase is where the work is convergent and participants try to define the “Why”, meaning, what is the actual purpose of the work. This often requires an emphasis on analysis and clarity in terms of objectives. You can’t really have multiple purposes for creative work (or at least you shouldn’t), and the required clarity here CAN turn argumentative, since the “Why” is often a loaded, weighty question. If there is argumentation here, it probably needs to happen in order to sort through all the parameters of the work, and determine which ones are important. Applying a divergent mindset at this stage can be challenging, since it lacks the requisite focus, and most people tend to find that frustrating.

EXAMPLE: In putting together or evaluating a creative brief, you often need to probe quite deeply into the purpose for the project, and the intended outcomes. We can’t really expect everyone to have the exact same thoughts on this, so it is important to iron out all the kinks and let this debate play out. The benefit of a more thorough vetting of the “why” is that you will hopefully have a lot more alignment on the objectives going forward.

Then there is the Tactics phase, where focus is on discovery and finding options – the “What” stage of the creative work, meaning, What it is that actually solves for the “Why”. This is where ideation happens, and that always benefits from suspended judgment, since ideation is by nature ambiguous and divergent. Applying a convergent mindset at this stage means people may feel that their ideas are being overlooked or dismissed, so it is important to catalogue all ideas and reserve judgment for later. At the same time, this is a process that has its distinct, specific purpose, and applying this divergent mindset can be destructive at other stages of the process.

EXAMPLE: When considering different possibilities for creative solutions, these may take many different forms, and may initially not be very relevant, as ideas tend to deepen and get progressively more useful the further into the process one gets. For that reason, it is vital to get all ideas on the table, in order to build relevance and probe more deeply into the subject matter. Usually, it is easier to gauge the usefulness of individual ideas only when one has exhausted most of the possible avenues. The best mindset at this stage, in my experience, is to strive to catalogue all ideas, and sort them into categories. A single early idea may not be perfect in its initial form, but it may well trigger other, better ideas within the same category at a later stage.

Finally, there is the Execution part of the process. The objective here is to choose and refine ONE chosen solution, threading it through the needle of the “Why” (i.e. the purpose), arriving at mechanics that help define the “How” – i.e., how to execute and finalize the idea. At this point, effort has likely already been invested in order to build the chosen solution, and while that needs to be reviewed and evaluated (constructively, positively and encouragingly), it is also highly likely to cause tension if participants revisit any of the previous stages, as this will cause rework, and also means that the solution was not based on the right parameters from the beginning (hence, causing wasted effort). Quite often, this causes exasperation, especially with the person managing the execution, as there can be a feeling of added, previously unknown (and often subjective) parameters being applied after the fact.

EXAMPLE: When presented with a proposed solution, consider that you are now not at the initial stages of the process – the solution that is being reviewed is a product of decisions and conclusions made at earlier stages, and a selection of one specific option among many. For that reason, it is best to focus on refining what you see, and present these suggested refinements as improvements, as opposed to rework.

Now, as I mentioned above, I believe we can trace most potential creative disagreements to situations where team members have been at misaligned stages of the process mentally, and where comments are therefore difficult to reconcile with each individual’s mindset. This may cause frustration or, at worst, a feeling of work being dismissed, or a role being under threat.

For this reason, the creative process needs to be shepherded carefully through the stages, and efforts should be made to frame discussions appropriately, in order to establish greater clarity around what the objective is at each stage.

In summary:

Strategy – Means argumentation is allowed and may in fact be necessary, as important considerations should not be left unaddressed.

Tactics – Means argumentation can be destructive, and an open mind is necessary, in order to consider all possibilities and encourage all team members to bring out their ideas, without fear of being judged.

Execution – Means argumentation may be inevitable, in determining what works and what doesn’t, but it’s important to respect the effort that has already been made, and ideally try to build on it as opposed to want to throw it out. (I am personally fundamentally against throwing out creative work – even if it didn’t fit the current objective perfectly, there may always be another purpose for that work).

The Creative vs. The Scientific Process

The Creative Process is one of more or less constant ambiguity, refining ideas progressively in pursuit of a goal that may not have one single, empirically verifiable answer. For that reason, it is not productive to apply The Scientific Process in creative matters, because it will only tell you what doesn’t work, and why. It won’t tell you what might work, or what is needed to make it work. It may even, for that reason, be a blocker for continued ideation, due to its lack of recognition of what actually does work, even in an incomplete solution, and what can be used to build upon. This lack of recognition of the (partial) merits of ideas can also be demotivating, and serve to affect group dynamics in a restrictive way, where people lose the motivation to continue contributing to a solution, since progress is viewed rather unforgivingly, in black-or-white terms.

Ideation needs are best met iteratively with The Creative Process, where solutions are continuously and productively brought forward, and tested out with a less dogmatic or restrictively proof-based perspective on what the final solution might look like.

Recognizing the ambiguity inherent in the process means suspending judgment, and instead striving to contribute constructively.

(This approach to the ideation process actually has some similarities with Agile Methodology).