Hi everyone, Kevin again.
I recently wrote about our loss of meaning and disillusionment in the face of recent turbulence: the layoffs, the rise of AI, and the rapid commoditization of design all seem to indicate we are gently running along the edge of a precipice hidden by fog, waiting for us to misstep along the way.
While some see it this way, others are jumping off the cliff as the only way to embrace what seems unavoidable. Perhaps the scary precipice isn't that deep? Maybe the fog hides an ocean of opportunities? What to believe when what is at stake is the very thing we thought we knew for certain? A dilemma, then, one charged with strong feelings on either side.
The third perspective
The dilemma is unsolvable as long as both perspectives stand: on one side, the threats to our current ways of working and living far outweigh any benefits change might bring; on the other, the opportunities for meaningful change are there to be explored and, eventually, conquered.
This is nothing but a rather fragile landscape, asymmetric in terms of investment, left open to polarization. In this space of increasing tension, some ran along the edge and accidentally looked into the precipice and its apparent depth: the “disillusioned” I talked about in my previous article.
But the precipice isn't what it seems: it is a yet-undecided, amorphous space, not limited to two opposing states. We need to go beyond simplistic dualism. We need to increase diversity. We need a third perspective.
“The world is in fact cursed.
What the world being cursed does mean is that you can't just blame one side, [...] you can't just idealize the other because even the victimizer is a victim and even the victim is a victimizer.
The world is cursed [and we should be] on a quest of seeing the world with eyes uncluttered by hate”. – Avatar vs Princess Mononoke: How to have a message
The world is cursed –it is bound to change, with unforeseeable consequences. The precipice has always been there. The dilemma is not one to be solved, but one to be sidestepped and, eventually, dissolved.
How? The third perspective is not necessarily a space of reconciliation and peace –convergence between two sides– but first and foremost one of liminality. Liminal spaces can hold several, possibly contradictory, truths. In my previous article, I discussed Deleuze's philosophy and his work with Guattari on Schizoanalysis. Similarly, the third perspective is a process of navigating different territories and making novel connections.
Embracing the unavoidable
Previously, I discussed the sentiment that we (as designers) paved the way for our own downfall and why, while that sentiment is understandable, it is also overly simplistic and tells only part of the story.
Similarly, fully embracing the opposite view isn't free of issues. Just as the wave of criticism against some of our tools for being a gateway to our replacement by AI forces us to look at the kinds of patterns at play, so embracing AI thoughtlessly because it is "unavoidable" is overly reductive. Allow me to bring up an example as a case for discussion.
I recently stumbled upon a post in this LinkedIn group about using AI in our design process. The post is only visible to group members (but just in case, here is the link), and it links to a Medium article titled “Design Sprints are dead, long live the AI Sprints?”.
The author describes how he used ChatGPT to perform a Design Sprint by himself (emphasis mine):
What I am about to show you is a potential application of GenAI to replace the current Design Sprint process. You’ll see the full process and results so you can judge the quality ( or not ) of it by yourself.
I had to trick the bot once or twice to get it to perform what I wanted but overall it was quite a smooth flow I would say. I could go from an initial problem to a ready-to-show prototype in under 4 hours!
Of course, this is kinda absurd at first. Indeed, if the point is speed towards an output, then why even bother performing a simulacrum of a Design Sprint? If the point is the quality of the solution, then using ChatGPT to replace key actors' knowledge is problematic.
As I replied in the LinkedIn comments:
Well, I know some people will find what AI can do amazing. It certainly is. But this misses the point on so many levels (misplaced admiration).
Design Sprint is not that great of a method for what it claims to do. This is even stupider.
AI isn't the problem. The subversion of one form of knowledge by another without taking into account the context is the issue –that is, human knowledge and experience by AI. That's the pattern at play.
As I developed in my reply to the author:
Just to be clear, I'm playing a lot with AI myself recently, so I'm not blaming you for trying. But you need to understand the type of *knowledge space* you're playing with before even considering which tool to use. Every tool comes with *bounded applicability*. [ChatGPT] uses Large Language Models' existing data to generate predictive patterns of *believable answers*.
When dealing with *unknown unknowns*, like in design, you have an issue of *epistemological knowledge*, [of coming to know the context and its relationships]. This is what makes things unclear, uncertain, etc. *Relationships in context are what provide meaning*. Replacing any amount of epistemological knowledge by AI in the context of "solving" a problem for social complex adaptive systems comes with a high cost in terms of assumptions, diversity, meaning, etc.
However, feeding knowledge/meaning to AI and then using it to play with themes in a conversational fashion will likely bring interesting connections/inspirations, because this is exactly what these tools are good at doing. We should use this kind of tool not to replace human experience & knowledge, but to sustain its autonomy –that's an ethical design principle.
I think I can never advocate for this enough: understanding the advantages and limitations of our tools, combined with an understanding of the context, is key. It can open new opportunities for using them (constraints and affordances). AI tools are no different.
Coming to design principles
A third perspective here is to recognize the constraints and opportunities on both sides. Human knowledge is situated, context-sensitive (see contextualism) and tacit, even in “expert domains”. Tools built on Large Language Models, like ChatGPT, have limited access to context, no access to human experience, and an unreliable relationship to expert knowledge and facts –and although the latter can improve, context and experience won't. AI can be used to extract summaries, highlight elements, and show relationships based on probabilistic inference. Its relevance therefore depends heavily on the original input, which must convey the necessary context for it to perform "efficiently".
So, here are some principles (as a proposal) for approaching AI within a design process:
- Define context: Seek situated (human and non-human) knowledge and experiences to highlight relationships (narrative research);
- Foster collaboration: form small groups around said knowledge, experiences, and relationships to generate themes and new relationships;
- Support theme generation with AI: each group acts as a situated context, feeding the AI with its own knowledge and understanding, while encouraging reinterpretation and re-contextualisation of the output by the group (a minimal sketch of this step follows the list);
- Design interventions: use the generated themes and relationships to design interventions (portfolio of strategies) back into the context.
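To make the third principle a bit more concrete, here is a minimal sketch (in Python) of what "feed the AI with the group's situated context, then hand the output back for reinterpretation" could look like. The data structure, the prompt wording, and the `call_llm` stub are assumptions for illustration, not a prescribed implementation; the point is simply that the model only ever works on the context the group makes explicit, and that its output comes back as raw material, not as a decision.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GroupContext:
    """Situated knowledge gathered by one small group (hypothetical structure)."""
    participants: List[str]
    observations: List[str]      # narrative research notes, quotes, field observations
    relationships: List[str]     # relationships the group has already identified

def build_theme_prompt(ctx: GroupContext) -> str:
    """Assemble a prompt that carries only the group's own context.

    The model is asked to propose candidate themes and connections,
    not decisions or solutions.
    """
    return (
        "You are assisting a design research group.\n"
        "Using ONLY the material below, propose candidate themes and "
        "possible connections between them. Flag anything that looks "
        "like an assumption rather than something grounded in the notes.\n\n"
        "Observations:\n- " + "\n- ".join(ctx.observations) + "\n\n"
        "Relationships already identified by the group:\n- "
        + "\n- ".join(ctx.relationships)
    )

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model or API the group actually uses (assumption)."""
    return "(model output: candidate themes and connections)"

def generate_candidate_themes(ctx: GroupContext) -> str:
    """Return raw candidate themes for the group to reinterpret and re-contextualise."""
    return call_llm(build_theme_prompt(ctx))

if __name__ == "__main__":
    ctx = GroupContext(
        participants=["researcher", "designer", "domain expert"],
        observations=["Users improvise workarounds when the tool fails offline."],
        relationships=["Trust in the tool depends on prior failure experiences."],
    )
    print(generate_candidate_themes(ctx))  # a starting point for the group, not a conclusion
```

The deliberate choice here is that the model never sees anything the group has not made explicit, and its output never bypasses the group: that is what keeps human experience and knowledge autonomous rather than replaced.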
Thanks for reading!
Kevin from Design & Critical Thinking.