
Territorialism in UX

I am honestly getting more than a little weary of the constant territorialism in our industry. It is perhaps a natural consequence of a relatively new discipline still trying to find its footing, and define its role and identity, but I wonder how meaningful it really is, and how much of it is just a distracting knee-jerk reaction.

As UX work becomes more prevalent, we see practitioners defensively narrowing their positions and fragmenting their responsibilities – presumably to protect themselves against feared incursions from other disciplines, or perhaps in a desperate attempt to appear more relevant. This results in a proliferation of increasingly niche responsibilities and skillsets, and it does not seem productive.

Fundamentally, these things are all interconnected. There is no reason to incessantly try to define and own little fiefs within the larger domain – it just reduces and limits us as UX practitioners.

Don’t get me wrong, specialization can be a good thing, but it doesn’t have to mean a conscious narrowing of scope. You can focus on something without losing sight of the broader context. Specialized expertise comes naturally with experience, and if you narrow your perspectives prematurely, you may actually end up strangling your understanding of the discipline, and reduce your grasp of the bigger picture. In essence, you’d be digging yourself a hole that might be difficult to climb out of.

UI doesn’t exist in a vacuum – in fact, it could be argued that this is the very reason why the term “UX” was coined in the first place, and why “CX” even exists.

UX may not be synonymous with CX, but a UX designer will only stand to lose from being separated from the broader consumer context.

UI may not be synonymous with UX, but it is definitely part of it, and isolating it just means the discipline loses some of its cohesion.

UI may not be synonymous with graphic design, but pretending they are unrelated simply weakens the relevance of the discipline. UI used to be called GUI, the “G” representing the graphic (abstract, visual) nature of interfaces. You really cannot divorce graphic design from UI; the former is the foundation for the latter. A UI designer really needs to understand typography, color, value, proportion, composition, texture, contrast, and so on. Yes, they do need to understand more than that, but those are the basics.

You don’t automatically become a good UI designer by being a good graphic designer, but if you don’t know the basics, you’ll be neither.

At this point, it seems to me, we need to be building bridges between the different disciplines, as opposed to creating islands.

How To

Self-help is all the rage. There are books on almost every subject known to man, and the Internet is overflowing with helpful content, trying to instruct people on how to do stuff – all kinds of stuff. There are presumably knowledgeable people lecturing and speaking at conferences about how best to do things, all sorts of things, and people are gobbling it up left and right.

I love the good intent behind this type of advice, truly, and I don’t want this to sound uncharitable, but I’ve personally had a pretty important realization on this topic over the past five or six years:

It’s not for me.

Don’t get me wrong: I used to LOVE this kind of “how-to” content. I read it voraciously, and I think I believed that it would make me productive; that it would clarify things for me; that it would empower me. I read loads of interviews with other creators, and clung to their words as if they were magic formulas. I thought that by absorbing their methods, and using their tools, I could emulate their successes.

But I’ve found that it actually does the opposite. For me. By focusing so much on how other people do things, and by trying desperately to internalize that, I lost sight of how I do things myself. I’ve lost sight of the fact that I actually do know how to do stuff. I have the capacity to work through things, and figure them out for myself.

And after realizing this, I’ve become many many times more productive – so much so that it’s actually astonished me. I’ve been more productive in the past five years than I had been through the entirety of my career up to that point. It’s made me a bit regretful of the time I wasted; all the time I sat there procrastinating, thinking I needed to read up on something before I went ahead and did it.

I’ve done some soul searching to figure this out, and the answer – while not exactly revolutionary – still hit me like an epiphany. I think that I (and many others) learn principally by doing – not by reading, hearing, or observing. And while I was trying to learn how to do stuff in those other ways, I actually wasn’t learning, and it made me feel that I couldn’t do whatever it was that I was trying to do. But by not relying on those crutches, and by instead immersing myself in the work and figuring things out for myself to find my own solutions, I’ve really uncorked my productivity.

So, for all the people out there who are helped by this type of helpful “how to” content: more power to you. But please also consider that you may actually be able to figure things out for yourself, and this may be a way of learning that could actually feel much more empowering for you. It could rid you of crutches, of methods which may ultimately not be for you, but which will hold you back as you struggle to make sense of them.

Sometimes, the right way to do something is what works for YOU, not what works for others.

Now I guess I need to go write a self-help book on the subject. Let’s see, how do you write self-help books…?

I’d better Google that.

The Brittleness of Ideation

I’m intensely protective of the creative process. It needs to be nurtured and cared for in its fragile infancy, before ideas are fully formed.

Ideas need to be moved forward gently and carefully, while teasing out their shape and structure. Every contributor needs to be afforded the headspace to go into all the nooks and crannies of their creativity, and explore hidden potential without hesitation or stifling judgment. This is not typically a matter of individual personality or psychological fortitude; even those who seem outwardly resilient alter their patterns of exploration based on some measure of self-doubt, external factors, or interpersonal dynamics. This usually happens imperceptibly and subconsciously, and there are no guarantees that these subtle course alterations are for the better.

The creative process depends on open-mindedness and a nurturing, supportive attitude. In this regard, it is not unlike therapy – especially when the process is collective and collaborative. The realm of ideas needs to be a safe space for all participants, where they can feel secure in probing for those elusive impulses that may or may not deliver something of substance. The key response should always be “yes, and…”, or a “maybe”, as opposed to a flat “no”. The rule of thumb must be to suspend judgment and disbelief, and move forward with the assumption that every grain of sand might potentially be hiding a speck of gold. Otherwise, the risk is that viable ideas are overlooked, or modified in ways that do not fully leverage their potential. Ideas are constantly evolving, so regarding them as static simply ignores their inherent potential.

Issuing opinions or judgments is almost always premature while ideas are still congealing. While the instinct to judge, to limit and to reject is profoundly human – it is related to our fight-or-flight instincts – it is deeply at odds with the ideation process. Rejection is an expression of fear, whereas acceptance is an expression of love. Evaluation inserted too early in the ideation process only serves to restrict the freedom of movement of ideas before they are fully formed, especially if such evaluation is based on personal bias and hypotheticals – almost regardless of how well informed such hypotheticals may be. Only when an idea has been given a clear shape and direction is it meaningful to evaluate it, and decide how to proceed, and this “ready” state is not always immediately apparent.

Therefore, there is really no need to rush to judgment, at any stage of the ideation process. Let the process take its time. Every single decision already happens under the natural pressure of second-guessing and doubt; that is a fully normal condition in navigating the ambiguity of the unknown and unseen. We must, as creatives, strive to lessen these doubts, and not succumb to self-censorship. This is especially true in collaborative ideation, as we can unwittingly have a censoring and suppressing effect on others by merely issuing an opinion, and this often leads to the halting of forward momentum, the stifling of creativity, and/or the strangling of the flow of thought. Even those who may seem outwardly resilient to such undue influence are subject to these effects, and those who lack the necessary understanding of these psychological group dynamics are typically better removed from the process, as their presence can be quite destructive. This calls for some self-awareness: I have myself had this effect on ideation on occasion, and have had to bow out of the process for that reason.

Instead, what we need to do as creatives is to welcome, document and catalogue ideas, move on to the next idea, and then see if patterns emerge where certain concepts share similarities, or may contradict each other to the point where it eventually becomes necessary to pick a path. There is ample time for judgment at the end of this process, but to prejudge the potential value of an idea before it has fully revealed itself is never productive. Though picking paths may give a false sense of progression, it does not enhance ideation to choose a direction too narrowly before it is well understood where a path may lead; it only restricts options in a manner that is rarely well considered or constructive. Some paths may converge, and some may diverge, but they might all lead to meaningful destinations.

If we apply a divergent mindset, as opposed to a convergent one, we ensure that we explore more possibilities and combinations of ideas, and even if some of these possibilities turn out not to be applicable for the task at hand, they very often lead to new trains of thought, and spawn new ideas which may serve some other purpose at a later point. We do ourselves a disservice by rejecting them.

This is why the true value of an idea can never just be assessed within the narrow confines of one specific objective. An idea always has value, it just needs to be nurtured, appreciated and considered in the right light.

The Technology of A.I. is a Red Herring

I’m a designer and illustrator with art history schooling, and I’ve been working in tech since 1995, the past 10 years with implementations of MarTech SaaS solutions involving machine learning, among other things. I understand both Art and Tech well enough to realize the true impact of A.I. is ultimately about neither.

First, the debate about the technology of A.I. is a red herring. This is not ultimately about Technology, or even about Art – Tech is an enabler, and Art is the victim, but the perpetrator of the abuse is Business.

Generative A.I. acts as a business disruptor, and as long as such disruption is profitable (and unregulated), it will persist. Generative A.I. art is happening not because Technology is compelling it, but because Business Disruption is driving it, and Technology is acting as an enabler, entirely without the consent of the victim, Art. Much in the same way as self-driving cars are disrupting the transportation industry: you’d be a fool to think that is happening for technological reasons. Tech is just the symptom of the underlying profit motive. And wherever there is a profit motive, a technological enablement, and a lack of regulation, bad things tend to happen. This is not, as some suggest, a doomsday prophecy – it is simply learning from history!

Analogies abound about other technological advancements, but this is not science fiction. A.I. is not just being sold as capable of these things – it is already doing them. Publishers are already integrating the technology, artists are already being made obsolete, and their work is already being appropriated, on an industrial scale. The genie is out of the bottle, and as always, legislation is struggling to catch up.

Generative A.I. is not ultimately about what is possible, but what is right, and what is wrong. There is nothing right about appropriating the work of unwitting human beings for purposes they have not agreed to. And what is wrong will be the driver in legislation and regulation, which is where A.I. is going next. Don’t believe me? Just look at the E.U. As it did with data privacy laws, the E.U. is leading the legislative field in A.I. regulations. But don’t expect the U.S. to take any initiatives here – true to its core capitalist nature, the U.S. will always lead with its business interests, and let all other interests fall by the wayside. Until they are forced to reconsider.

Perhaps that actually offers a glimmer of hope, because American profit interests are to some degree dependent on being able to do business in Europe. And just like with data privacy legislation, E.U. regulations may ultimately convince American businesses to comply.

Do what’s right.

A.I.’s False Sheen of Magic

Much is made of the derivative nature of A.I. And while A.I. is indeed derivative at its core, I’m not sure I agree that lack of originality is its most damning issue.

Computers are constantly getting progressively better at experimenting and finding creative new combinations. Creativity is a process, it is not magic, and you could argue that there is very little true originality anyway – almost all forms of expression build on that which came before. That holds true even for human evolution itself.

No, the truly horrific prospect I worry about is IF and WHEN computers manage to match human ingenuity in combining old things into new ones. If that happens, we have outsourced what is a very core part of humanity, and where does that leave us? Even if computers COULD create original works, why on Earth would we want them to?! Are we really, as a species, looking to remove and replace human thought and creativity…? The very notion is antithetical to human existence.

Users and endorsers of A.I. are betraying some very core humanist principles, and they’re doing so seemingly clueless about the consequences. Those of us who work in the creative application of technology (which includes myself) have a responsibility to step up and draw a line for what is acceptable, and what is not. I create graphic design and design systems for machine learning, but that is MY work that is fed to the machine, to determine which creative combinations are the most productive – not anyone else’s work. I think the unapproved, unacknowledged appropriation of anyone’s work – ANY work – needs to be outlawed, and more than that, I think that ought to be a completely foregone conclusion.

I’m especially disturbed by the unsettling combination of greed and completely uncritical adoption of technology, where nobody seems to reflect on the consequences. This seems to me a uniquely American phenomenon.

I’ve heard vapid American capitalists justify this destructive “disruption” (in fact, almost any disruption of almost any market) in the most crass way possible, implying that something is right just because it makes money. This amounts to a form of nihilism at best, or economic fascism at worst.

Furthermore, I have heard vapid American technologists justify the wholesale plundering of our cultural heritage that A.I. enables, simply because it represents a technological advancement – as if humanity is already of secondary importance to computers.

Both these positions are as baffling as they are horrifying.

Some speak out (with the best of intentions) against A.I. in defense of the human creative process, attributing almost magical powers to the latter. I am more than a little leery of suggesting there is “magic” involved in the creative process, even though it often involves powers and influences unseen. But just because something is subjective and subconscious does not make it magical.

This is in fact one of the very core problems with A.I.: that its influences and sources are unknown, which gives tech evangelists the liberty to imply the occurrence of “magic” (though of a different kind). The human brain processes only that which we have fed to it, and that which we have been fed at conception, through DNA. All of this is dangerously analogous to A.I. We get into very murky waters if we sanction ideas on the sole basis of the inspiration for those ideas being unknown to us.

If anything, we need to insist on more transparency, and not endorse the obscuring of references. Such obfuscation enables theft. As humans, we are the sum total of our experiences, and this is in fact equally true of A.I. – the difference is how far A.I. is able to interpolate and morph those experiences. This currently has its limits, but that will surely evolve, and the only way to escape this slippery slope is to make an absolute, unequivocal demand for transparency. We cannot argue theft if we cannot prove what was stolen.

So, I’m using the “magic” analogy as a word of caution here, because when we’re talking tech that is this complex, some will (and do!) find it indistinguishable from magic. What we call it matters, and we need this process to be seen for what it really is, with full transparency and zero romanticization: industrialized theft, and super-charged pillaging.

The further we let this progress, the more difficult it will be to show how it is done, and to demonstrate that zero magic is involved. We are now, in fact, in urgent need of what the illusionist Houdini did to spiritualism in the early 1900s:

A.I. is not a boon to humanity, and its false sheen of magic needs to be debunked.

Your Brand Personalized

Dynamic Creative In User Experience Design

All design is inherently subjective.

How people perceive and are affected by design depends on who they are, what their circumstances look like and what their expectations are. Subjectivity applies not only to media and graphic design, but to art, fashion, architecture or any other form of visual expression.

We judge with our eyes.

This is especially problematic in the world of user experience, which to a large extent revolves around functionality and usability. The usefulness of browser- and app-based experiences depends on how well they enable users to accomplish what they’re trying to do. Accomplishing goals, in turn, also depends on how motivated users are. This adds an emotional dimension to this otherwise highly rational discipline.

Structuring a website (or an app) so that the user merely understands how to use it is simply not sufficient. If the user does not want to use a website, the empirical knowledge of how to use it is largely inconsequential. Therefore, content must be packaged in a way that emotionally motivates the user to partake of it, and here design has an important role to play.

A study conducted by Carleton University in Ottawa, published in the journal Behaviour & Information Technology, determined that users form their impression of a website and its visual appeal within the first 1/20th of a second of visiting it. Even more surprisingly, these first impressions colored the entire experience of the site, whether or not the whole site actually turned out to match that initial perception. The conclusion of the study was that this first impression was “unlikely to involve cognition” – meaning it is largely an emotional response.

However, presenting a differentiated audience with a unified, undifferentiated design – however well optimized – will not account for the variances in people’s preferences and goals. Such a design will never be entirely effective. Given that users are different and have different preferences and expectations, effective UX design has to be personalizable. This is quickly becoming an expectation, if not the norm.

A recent study published by Salesforce found that 80% of users expect online experiences to be personalized and tailored to their needs. This means that brands simply cannot afford to broadcast the same uniform message to a single, undifferentiated audience. Marketers need to find ways of communicating to individuals, not audiences.

This poses a problem of scale.

Very few marketers or publishers of content can afford to employ armies of designers and content producers to tailor experiences to each individual user. More importantly, they cannot do so in real time.

Enter dynamic personalization.

By devising smartly constructed, modular design systems based on creative componentry that can be freely interchanged, it is possible to compose entire user experiences based on incoming media signals. These signals can reveal behavioral, demographic, and psychographic details about each individual user, allowing the experience to programmatically flex and adjust to some of those factors.
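
To make the idea of composable creative componentry a little more concrete, here is a minimal sketch in TypeScript. Every identifier in it is hypothetical – it is not drawn from any particular platform or library – but it shows how incoming signals might be mapped onto interchangeable components, one variant per slot:

```typescript
// A minimal, hypothetical sketch of signal-driven creative assembly.
// All names here are illustrative, not taken from any specific product.

type UserSignals = {
  device: "mobile" | "desktop";
  returningVisitor: boolean;
  interests?: string[];                     // e.g. inferred from behavioral data
  ageBracket?: "18-34" | "35-54" | "55+";   // e.g. from demographic data
};

type CreativeComponent = {
  id: string;
  slot: "hero" | "callToAction";
  // Decides whether this variant is appropriate for a given user.
  matches: (signals: UserSignals) => boolean;
};

const componentLibrary: CreativeComponent[] = [
  { id: "hero-minimal", slot: "hero", matches: s => s.device === "mobile" },
  { id: "hero-rich", slot: "hero", matches: s => s.device === "desktop" },
  { id: "cta-welcome-back", slot: "callToAction", matches: s => s.returningVisitor },
  { id: "cta-first-visit", slot: "callToAction", matches: s => !s.returningVisitor },
];

// Compose an experience by picking the first matching variant for each slot.
function composeExperience(signals: UserSignals): Record<string, string> {
  const experience: Record<string, string> = {};
  for (const component of componentLibrary) {
    if (!(component.slot in experience) && component.matches(signals)) {
      experience[component.slot] = component.id;
    }
  }
  return experience;
}

// A returning mobile visitor gets the minimal hero and a "welcome back" call to action.
console.log(composeExperience({ device: "mobile", returningVisitor: true }));
```

In practice the signal model and the component library would be far richer, but the principle is the same: the experience is assembled per user, rather than designed as a single fixed page.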

This ensures a successful, results-oriented communicative solution that scales. In addition, it may even be able to predict favorable outcomes through the application of AI technology such as machine learning. Done right, it will allow marketers to dial in the most effective combination of content and design with increasing accuracy.

Testing naturally becomes a central element in such solutions, where a multitude of creative options can be fed into the learning engine. The ideal mix can thus be determined, assembled and verified as users arrive on site. In iProspect’s own testing, we regularly demonstrate incremental conversion gains through dynamic, personalized experiences. However, fully capitalizing on this opportunity requires a shift in how brands look at design and how brands go to market.
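
One common way to let such testing feed a learning engine – not necessarily how any particular platform does it, but a hedged illustration of the principle – is a simple epsilon-greedy bandit, sketched here in TypeScript with made-up identifiers:

```typescript
// A purely illustrative epsilon-greedy "learning engine" over creative variants:
// mostly show the best-performing variant, occasionally explore the alternatives.

type VariantStats = { impressions: number; conversions: number };

class CreativeOptimizer {
  private stats = new Map<string, VariantStats>();

  constructor(variantIds: string[], private epsilon = 0.1) {
    for (const id of variantIds) {
      this.stats.set(id, { impressions: 0, conversions: 0 });
    }
  }

  // Pick a variant to show the next visitor.
  choose(): string {
    const ids = [...this.stats.keys()];
    if (Math.random() < this.epsilon) {
      // Explore: pick a random variant.
      return ids[Math.floor(Math.random() * ids.length)];
    }
    // Exploit: pick the variant with the best observed conversion rate.
    return ids.reduce((best, id) => (this.rate(id) > this.rate(best) ? id : best));
  }

  // Record whether the shown variant led to a conversion.
  record(id: string, converted: boolean): void {
    const s = this.stats.get(id);
    if (!s) return;
    s.impressions += 1;
    if (converted) s.conversions += 1;
  }

  private rate(id: string): number {
    const s = this.stats.get(id)!;
    return s.impressions === 0 ? 0 : s.conversions / s.impressions;
  }
}

// Usage: feed a handful of creative options in, and let observed outcomes steer the mix.
const optimizer = new CreativeOptimizer(["hero-minimal", "hero-rich", "hero-video"]);
const shown = optimizer.choose();
optimizer.record(shown, /* converted */ true);
```

The specifics matter less than the shape of the loop: creative options go in, outcomes come back, and the mix shifts toward what demonstrably works as traffic accumulates.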

The era of the static, monolithic brand is over. Modern brands need to understand their audiences, find ways of communicating on a personal level, and truly become interactive.

(Reposted)

Casting Call

Design, and especially identity design – but really all types of design that depend on recognizability – is not like auteurship, or artistry.

As a designer, you are not speaking in your own voice – you need to find the voice of the client, and understand how that voice resonates (or doesn’t resonate) with the intended audience.

This means that your job as a designer is more akin to that of a casting director.

You need your design to play a role, and to express itself in certain ways that are consistent with the script – i.e., the client’s brand strategy. And if the client is unsure about that strategy – meaning, they are essentially shooting a movie without a script – you need to define the role for them, and find the right actor. Your design needs to embody the personality that you would advise the client to assume.

You need to conduct a Casting Call.

You need to define the client’s visual persona, and articulate how it should act when it’s on the stage, in order to be able to convey the right message in a credible and convincing way.

Some clients will come to you and ask for a certain actor: “We’d like Al Pacino to play this role!” Meaning, they come to you with an outside-in idea of someone who they think would represent them, when the actor’s job is to FORGET about who they are, not act as themselves, and instead embody the ROLE.

For that reason, you ought to take all such suggestions with a huge scoop of salt.

Treating a visual brand identity like a known commodity, like a well-known actor, means the audience is at risk of only seeing THAT actor, seeing THAT face – not experiencing the role that the script actually calls for.

This is the same as applying a recognizable “style” to a brand identity. “We want clean and modern”, say many corporations, afraid of stepping out of line with expectations and industry tropes. But their identity, as defined by the brand strategy, may not express itself best in a clean and modern “style”, just like Al Pacino might not be the right face, or the right voice, for that role.

Think about your brand as if it’s an actor about to get on the stage, and then define the role based on how you need the audience to experience and react to your brand. Make sure your designer conducts a casting call, and casts the right actor in the role – an actor who will be able to let audiences see past their recognizable face, and experience the persona they are trying to express.

If your brand identity design does the job right, it will be able to embody the true persona of the brand, as opposed to pretending, and putting on stage clothes while still being recognizable as someone else’s borrowed personality.

Design Ethos

Never just create a one-off asset when you can construct something reusable

Never just create a single reusable template when you can design something modular

Never just design modular components when you can devise a framework

Never just devise a framework when you can architect a design system

Never just architect a design system when you can define a user experience

Never just define a user experience when you can plan out a user journey

Never just plan out a single user journey when you can improve lifetime value

(Reposted from 2020)

Decisions without morality

Over the past decade, we have seen the rise of Fake News, and the dangers of social media allowing us to stay in our closed opinion bubbles, finding support for almost any misguided opinion there is.

Artificial Intelligence will only aggravate that problem.

A.I. supplies the psychological confirmation mechanisms that convince the ignorant that they are right: they can continuously iterate and demand changes until they are satisfied.

In my experience, the only thing that (misguidedly) satisfies an ignorant client is to get exactly that which they want to see through unlimited iterations.

What “proves” to them that the result is “good” (which they themselves by definition lack the objective capability to determine) is that they persuade themselves by requesting changes, and that the progressive results of those changes in and of themselves supposedly mean that they have “refined” the results. When, in reality, all they have really done is go through a long series of bizarre, totally subjective mutations which at no point were guided by deliberate, knowledge-based or empirical judgments.

That is, in my mind, the real problem with A.I.: it enables limitless, automated and arbitrary mutations, without any of us ever really understanding the purposefulness or adequacy of the decisions made along the way. We simply outsource important decisions to the A.I., without understanding or even being aware of those decisions, relying entirely on the machine to make the right decisions – even though the machine has no awareness of “right” or “wrong”.

This is certifiably insane.

We ought to be deeply troubled by the fact that we have no way of ascertaining the factual accuracy of the decisions made by the A.I. or, more importantly, the MORAL consequences of the decisions it makes.

We dismiss the dangers of A.I. at our peril. We may think that the current outputs produced by A.I.s are unsatisfactory, and that A.I. poses no threat for that reason. But those are mere details that A.I. will get progressively better at resolving. That is, after all, precisely what A.I. does: it learns, iterates and improves its output, and that is happening at an ever accelerating pace. If we believe that we can judge A.I. by the quality of what is produced TODAY, then we are missing the very essence of the problem.

Specific results are difficult for A.I. to produce TODAY, simply because they are still new to the A.I., and the machine lacks sufficient data upon which to base its results. This is a challenge that will disappear on its own through the very way that A.I. is constructed: to gather more data, and refine its output indefinitely.

The current problem with A.I. is this: the advocates for Artificial Intelligence are experts on the technology, not subject matter experts in the fields in which A.I. is being disruptively accelerated and implemented. The ACTUAL subject matter experts have no means through which to validate the veracity of A.I. output, and the technical experts don’t appear to care about the morality of it.

The long-term problem with A.I. is an exacerbation of that very problem: the more advanced A.I.s get, the harder it will be for their human counterparts to validate or justify their results, or even understand how they arrived at those results.

The way I see it, the key issue is that A.I. is a “black box”: it produces results without us knowing how it arrived at those results, and we therefore lack the ability to validate or dispute them. And this is an escalating problem, which worsens the more complex A.I. gets: the black box gets blacker and blacker. We will eventually require “Voight-Kampff detectors” in the hands of A.I. psychologists, who will try to analyze the A.I., and understand its decisions.

Talk about chasing your own tail.

If we outsource decision-making – in any form – to the A.I. machine, we are in deep trouble. Because A.I.s CAN and DO make mistakes: their decisions are only as good as the data fed to them.

Ultimately, decisions are validated not on the grounds of veracity, but on the grounds of morality – morality that A.I.s don’t have. There may be thousands of data points that support a certain decision, but the decision may still be an immoral one.

Decisions without morality?

That is Skynet right there. Or, if you wish to be more sophisticated: Asimov’s 3 laws of robotics – laws that are yet to be enacted.