A quote from this article stuck with me:
My reporting tool now says “Describe the graph you’d like to create” — and in doing so transfers the burden of abstraction onto me.
As in all systems, rules matter. Designing AI-native applications must also follow certain rules, most of which stem directly from traditional interfaces and have evolved out of the HITL (human-in-the-loop) paradigm. Here are three to think about: trust, value perception, and cognitive effort.
Trust
Trust has long been a super rule of interfaces. Users need to feel safe navigating and entering data; they want the communication to happen on a good-vibes level. Without trust, there is no adoption.
Users may gladly tolerate AI as a suggestion engine — as a magical addition to an ongoing project — but they are far less tolerant when it becomes an agent. When a product claims it can write, design, code, or decide for the user, the psychological stakes increase.
Instant positive outcomes help build trust. Whenever the interface does its job right, within an acceptable time frame, trust starts to grow. The same goes in reverse for wrong answers or, worse in the agentic world, wrong actions. We keep a human in the loop for the sake of controllability and accountability. There is a sweet spot between asking too many questions and too few; finding the right balance in delegation is the real design challenge.
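The delegation balance described above can be sketched as a simple confirmation gate. This is a minimal, hypothetical illustration (the `ProposedAction` type, the confidence threshold, and the reversibility flag are all assumptions, not an API from the article): the agent interrupts the user only when the stakes warrant it.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    confidence: float   # model's self-reported confidence, 0.0-1.0 (assumed signal)
    reversible: bool    # can the action be undone cheaply?

def should_ask_user(action: ProposedAction, threshold: float = 0.8) -> bool:
    """Ask for confirmation only when the stakes warrant it:
    an irreversible action, or low confidence."""
    if not action.reversible:
        return True  # irreversible actions always go through the human
    return action.confidence < threshold

# A confident, reversible action runs without interrupting the user...
assert should_ask_user(ProposedAction("rename draft", 0.95, True)) is False
# ...while an irreversible one pauses for confirmation regardless of confidence.
assert should_ask_user(ProposedAction("send email", 0.99, False)) is True
```

Tuning `threshold` is one way to move along the spectrum between asking too many questions and too few.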
Value perception
Value is about grasping a feature’s power the first time you use it, and about coming back to use it again. It’s a perception, and as with all perception there is a spectrum. We need to answer two questions: How can the average user immediately see why an AI feature is worth their time? And what does the experience need to be like for users to come back?
Research consistently shows that perceived usefulness and perceived ease of use are the primary drivers of AI adoption. Notably, perceived ease of use often has a stronger impact than perceived usefulness.
International Journal of Human-Computer Interaction
Cognitive effort
Nothing new here: the blank page is the biggest risk to user effort. What is new is that we can now easily create engagement via LLMs. Before, a simple CRUD interface would suggest random things a user might want to do next, anything to avoid a dead end. Today is different. We know more. We can generate personalized starting points to reduce ambiguity. The key is that users should not have to “speak AI”. The best possible outcome is a natural (text-based) conversation that flows instinctively and leads to satisfactory results.
Instead of asking users to imagine what might be valuable — a high cognitive effort activity — our systems can instead optimize for surfacing plausible starting points. Designing interfaces where the user reacts rather than invents doesn’t eliminate work; it redistributes cognitive labor.
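The idea of surfacing plausible starting points can be sketched as a tiny ranking step: rather than presenting a blank prompt, the interface scores a handful of candidate starters against the user’s recent context and offers the best few. Everything here is an illustrative assumption (the context dict, the candidate list, and the word-overlap scoring stand in for whatever signals and model a real product would use):

```python
# Hypothetical context signals a product might already have about the user.
RECENT_CONTEXT = {"last_viewed": "quarterly sales", "role": "analyst"}

# Hypothetical candidate starting points the interface could offer.
CANDIDATE_STARTERS = [
    "Summarize quarterly sales by region",
    "Draft an email to the team",
    "Compare this quarter to the last one",
]

def score(starter: str, context: dict) -> int:
    """Naive relevance signal: count words the starter shares with the context."""
    context_words = set(" ".join(context.values()).lower().split())
    return len(context_words & set(starter.lower().split()))

def suggest(context: dict, k: int = 2) -> list[str]:
    """Return the k most plausible starting points for the user to react to."""
    ranked = sorted(CANDIDATE_STARTERS, key=lambda s: score(s, context), reverse=True)
    return ranked[:k]
```

The user then picks or tweaks one of `suggest(...)`’s results instead of inventing a request from scratch, which is exactly the redistribution of cognitive labor described above.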