Abstract

This article examines a dominant user experience (UX) design philosophy that prioritizes the simplicity of systems, and argues that these practices often treat users as incapable of engaging with deeper complexity. Through an analysis of digital interfaces, the study demonstrates that oversimplified designs yield static and overly constrained systems, which in turn limit adaptability, stifle problem-solving, and erode user trust. The discussion calls for a paradigmatic shift toward design approaches that recognize and respect users’ agency and cognitive abilities by offering systems that grow with their users and give them choices to access that dynamism, thereby challenging established UX methodologies that favor control over empowerment.


From User-friendly to Growing Systems

I used to believe in the idea of making things user-friendly. I could reference specific instances proving the utility of user-friendliness in the development of systems—places like Three Mile Island, specific mechanisms like the aircraft controls of the B-17 bomber during WWII, “small” aesthetic changes like changing the U.S. insignia on military aircraft. I had a noble quest of finding un-navigable applications, dismantling them, and providing ways for them to be more streamlined. Then, doubt crept in. Two things, separated in time by years, changed how I thought about user-friendliness: a person and a book.

I used to work with a prominent User Experience (UX) designer with significant acclaim and a long background in the field. This person was there at the inception of the discipline of UX. When they spoke about design, though, they kept revealing an uncaring attitude toward the user. They couched this attitude in UX “truisms” such as “the user doesn’t know what they want, but they know what the problem is”. They were fundamentally problem-centric as a person. They wanted to understand and solve problems using design, so much so that I think they lost sight of who they were doing it for. It showed in how they told stories about the problems they solved—denigrating people, glorifying problems. I always had the sense that if they could, they’d abstract people away and just solve problems. And this was unacceptable to me.

A few years later, I read Life in Code by Ellen Ullman, a history of Ullman’s career in and around software development. In one section, she writes about how programmers’ push to make software more user-friendly tends toward treating the user as if they were dumb. This tendency toward user-friendliness-as-idiot-proofing necessitates cloaking the complicated internals of a system—non-coders are too dumb to understand anyway, so the sentiment goes. Interestingly, this dynamic ultimately turned around on software developers themselves as their own tools became more abstracted, such as programming languages that don’t let the programmer easily manipulate a computer’s memory. Even though Ullman levied this critique at software development, I realized I had been doing the same thing. I’d been solving problems people had while holding a bias that those who had the problem were dumb, and my interventions reflected that. I was so encouraged to design for the “lowest common denominator” that I didn’t even see what was happening. Ullman’s book and my encounters with that senior designer have turned everything I know about design on its head.

I suggest we replace the idea of “user-friendliness”. Software developers and designers—the entire technological enterprise, really—have avoided significant mishaps under the banner of user-friendliness, but that banner also places the user in a position where they cannot learn from the system. It casts the user as an inert, dumb thing the person developing and/or designing technology must work around. We need, now more than ever, to give users access to the knowledge that gets abstracted away for the sake of a clean interface. We need purposeful, human-centered complexity and dynamic systems that can unveil themselves.

What I’m proposing is similar to the advent of positive psychology. Positive psychology was a response to a narrow lens on mental well-being. Before positive psychology, the goal of the psychiatrist was to remove or minimize mental illness, with wellness framed as the absence of mental illness. Martin Seligman pioneered the field of positive psychology by researching happiness and mental flourishing, thus extending what it means to be mentally well. Similarly, I’m suggesting that the user-friendly paradigm has given us a perspective that works, but it doesn’t give us systems that promote the user’s increasing knowledge and competence.

You’ve seen examples of this in software—even if you haven’t noticed it—but they’re often not fully explored. Any software you’ve used that has both a “simplified view” and a “standard view” hints at how we can develop dynamic systems while still couched in the language of user-friendliness. Some users are “simple,” designers and developers seem to say, while providing a mildly scalable system that fits the user’s changing interest in and knowledge of the system.

We can do better. We must do better, because a better future for all of us is predicated on the knowledge that the public has and has access to. Knowing firsthand that clean interfaces disguise a world of complexity, I believe the general public needs to know how these technologies work and how to reason and conjecture about them. There are other ways to accomplish this goal, but I see it as the responsibility of the system, and of the designers and developers behind it, to impart knowledge of its internals to its users and to give them an adequate ramp toward complete knowledge of the system.

An example is in order. The interfaces generative AI chatbots employ are generally simple: a textbox to write a prompt in, maybe a sidebar with your “conversational history”, maybe a few settings to toggle about which model to use. A user types their prompt into the textbox and a world of complexity gets executed behind the scenes without them having to know what a generative pre-trained transformer (GPT) is, without needing to grapple with the chorus of models that now often work in tandem to answer the prompt, without having to know that the data the model is trained on will bias the results, and ultimately without ever understanding how they got the answer they got. A system designed with human-centered complexity (HCC) in mind would do things differently. As a dynamic system, an HCC system would have a series of views that give the user access to ever more of the system’s internals. There would be a beginner’s view that looks like what we have for ChatGPT today, and, based on the user’s interest—and ultimately the user’s agency—the system would unveil more options for interfacing with the model, such as a temperature slider that affects the randomness of the output, or access to fine-tune the model (along with an explanation of what exactly it means to fine-tune a model). The system would dynamically respond to the user, and the user would use their own agency to explore and opt into ever more complex features as they have the appetite.
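To make the idea a little more concrete, here is a minimal sketch of how such progressive disclosure could be modeled in code. It is illustrative only: the tier names, setting names, and descriptions are my own assumptions, not drawn from any real chatbot’s interface or API.

```typescript
// A hypothetical sketch of progressive disclosure for a chat interface.
// Tiers, setting names, and descriptions are illustrative assumptions only.

type Tier = "beginner" | "intermediate" | "expert";

interface Setting {
  id: string;
  label: string;
  description: string; // explains the internals this setting exposes
  minTier: Tier;       // the lowest tier at which the setting becomes visible
}

const TIER_ORDER: Tier[] = ["beginner", "intermediate", "expert"];

const SETTINGS: Setting[] = [
  {
    id: "model",
    label: "Model",
    description: "Which underlying model answers your prompt.",
    minTier: "beginner",
  },
  {
    id: "temperature",
    label: "Temperature",
    description: "Controls how random or deterministic the output is.",
    minTier: "intermediate",
  },
  {
    id: "fineTune",
    label: "Fine-tuning",
    description: "Adapt the model to your own data, with an explanation of what fine-tuning means.",
    minTier: "expert",
  },
];

// The user opts in to the next tier; the system never forces complexity on them.
function visibleSettings(userTier: Tier): Setting[] {
  const rank = (t: Tier) => TIER_ORDER.indexOf(t);
  return SETTINGS.filter((s) => rank(s.minTier) <= rank(userTier));
}

// Example: a beginner sees only the model picker; an expert sees everything.
console.log(visibleSettings("beginner").map((s) => s.label)); // ["Model"]
console.log(visibleSettings("expert").map((s) => s.label));   // ["Model", "Temperature", "Fine-tuning"]
```

The design choice that matters in this sketch is the opt-in: the user, not the designer, decides when the next layer of complexity becomes visible.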


I’m calling for a paradigm shift in how we think about user interfaces and experiences. Instead of assuming that every user needs everything perfectly simplified, we should focus on designing systems that encourage exploration and learning. This means embedding educational elements and transparency into our designs so that users can gradually uncover and understand the complexities of the technology. Such systems would not only cater to the immediate needs of novice users but also provide a pathway for them to become experts if they choose. Ultimately, I want human-centered complexity to lead to a more informed and engaged user base, one capable of using technology not just to complete tasks, but to innovate and solve increasingly complex problems themselves.


Written by Austin Wiggins.

Austin L. Wiggins is a designer and public interest technologist with a Master of Science in Public Interest Technology. Austin works within the federal government at the intersection of design and technology. Austin's writing is dedicated to rethinking technology beyond its purely functional attributes by exploring its philosophical, social, cultural, and ethical dimensions. Austin's work combines everyday wisdom with academic research in the philosophy, psychology, and sociology of technology to inspire analysis and critical dialogue in the fields of design and technology. In addition, Austin runs a newsletter called "Multidisciplined" that examines the intersection of design, technology, and society, providing insightful commentary on contemporary trends and challenges.

Links

Multidisciplined | Austin Wiggins | Substack
A publication exploring technology through all the disciplines that interact with it: from philosophy and sociology to programming and design.