Ever-Updating Collection of New Approaches to UI/UX in the AI Era
Dials, Knobs, and Sliders
Physical knobs, digital sliders, and quadrant “dials” are being used to fine-tune AI responses and inputs. For example, Figma’s new Figma Slides features a novel control: a slider for adjusting the tone of generated text. Moving the orange indicator lets you blend between casual and professional tones, or between concise and expanded content.
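A control like this usually boils down to mapping the dial’s position onto prompt instructions. Here’s a minimal TypeScript sketch of the idea; the axis names and the `buildToneInstruction` helper are hypothetical, not Figma’s implementation:

```ts
// Map a 2D dial position to a tone instruction for the model.
// Both axes are normalized to [0, 1]; the wording is illustrative.
interface DialPosition {
  formality: number; // 0 = casual, 1 = professional
  length: number;    // 0 = concise, 1 = expanded
}

function buildToneInstruction({ formality, length }: DialPosition): string {
  const tone = formality < 0.5 ? "casual, friendly" : "polished, professional";
  const verbosity = length < 0.5 ? "keep it brief" : "elaborate with detail";
  return `Rewrite the text in a ${tone} tone and ${verbosity}.`;
}

// e.g. appended to the system prompt before regenerating the slide copy:
// buildToneInstruction({ formality: 0.8, length: 0.2 })
// -> "Rewrite the text in a polished, professional tone and keep it brief."
```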
This idea was spotted in a demo shared by Twitter user Johannes Stelzer. While I don’t have all the details, the concept is fantastic.
For a personal example, I built a Chrome extension that summarizes any webpage, with a slider to adjust the length of the output. Want a four-word summary? Slide it down. Prefer a detailed essay? Slide it up.
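Under the hood, my version amounts to feeding the slider’s value into the summarization prompt. A simplified sketch (the `callModel` stand-in and the word budgets are illustrative; the real extension has more plumbing):

```ts
// Stand-in for whatever LLM API the extension calls.
declare function callModel(prompt: string): Promise<string>;

// Translate a slider position (0-100) into a word budget for the summary.
function wordBudget(sliderValue: number): number {
  const min = 4;   // four-word summary at the bottom of the slider
  const max = 500; // essay-length at the top
  return Math.round(min + (max - min) * (sliderValue / 100));
}

async function summarize(pageText: string, sliderValue: number): Promise<string> {
  const budget = wordBudget(sliderValue);
  return callModel(`Summarize the following page in at most ${budget} words:\n\n${pageText}`);
}
```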
Node-and-Edge Graph
LangGraph Studio uses a node-and-edge system to visualize the logic flow of LangChain agents. Each node represents a “micro-agent” that completes a specific task, and edges define how state flows between them. This modular approach lets users build complex agents from simple, connected building blocks.
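The underlying idea is just a directed graph: nodes are task functions, edges route shared state between them. Here’s a generic TypeScript sketch of that structure (my own illustration, not LangGraph’s actual API):

```ts
// A generic node-and-edge agent graph: each node is a "micro-agent"
// that transforms shared state; edges declare execution order.
type AgentState = Record<string, unknown>;
type NodeFn = (state: AgentState) => Promise<AgentState>;

class AgentGraph {
  private nodes = new Map<string, NodeFn>();
  private edges = new Map<string, string>(); // from -> to

  addNode(name: string, fn: NodeFn) { this.nodes.set(name, fn); return this; }
  addEdge(from: string, to: string) { this.edges.set(from, to); return this; }

  // Walk the graph from an entry node, threading state through each micro-agent.
  async run(entry: string, state: AgentState): Promise<AgentState> {
    let current: string | undefined = entry;
    while (current) {
      const fn = this.nodes.get(current);
      if (!fn) throw new Error(`Unknown node: ${current}`);
      state = await fn(state);
      current = this.edges.get(current);
    }
    return state;
  }
}

// e.g. new AgentGraph()
//   .addNode("research", researchAgent)
//   .addNode("write", writerAgent)
//   .addEdge("research", "write")
//   .run("research", { topic: "infinite canvases" });
```

Real frameworks add conditional edges and cycles; this sketch only supports a linear chain, but the node/edge mental model is the same.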
The Infinite Canvas
An infinite canvas provides an open, continuous workspace for limitless creativity. Examples include Figma, FigJam, TLDraw, and Visual Electric. Here’s a peek at Visual Electric’s canvas in action.
Figma and FigJam also use this concept to inspire creativity. In one example, Julie W. Design is brainstorming with FigJam AI on an infinite canvas.
Another example combines Claude’s Artifacts and TLDraw’s infinite canvas for nearly instant site-building capabilities.
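What makes a canvas “infinite” is mostly a camera transform: the document lives in unbounded world coordinates, and the viewport maps a pan offset and zoom factor onto the screen. A minimal sketch of that transform (the names are my own, not any of these products’ APIs):

```ts
// An "infinite" canvas is really a camera over unbounded world coordinates.
interface Camera { x: number; y: number; zoom: number }

// Convert a point on screen to a point in the world, and back.
function screenToWorld(c: Camera, sx: number, sy: number) {
  return { x: sx / c.zoom + c.x, y: sy / c.zoom + c.y };
}
function worldToScreen(c: Camera, wx: number, wy: number) {
  return { x: (wx - c.x) * c.zoom, y: (wy - c.y) * c.zoom };
}

// Panning just moves the camera; zooming scales around the cursor so the
// point under the pointer stays fixed.
function zoomAt(c: Camera, sx: number, sy: number, factor: number): Camera {
  const before = screenToWorld(c, sx, sy);
  const zoom = c.zoom * factor;
  return { zoom, x: before.x - sx / zoom, y: before.y - sy / zoom };
}
```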
Voice Input
While voice input isn’t new, it feels more natural within AI chat interfaces, a pattern popularized by OpenAI’s ChatGPT. David Lam points out that speaking fits this conversational mental model seamlessly.
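In the browser, the simplest way to prototype this is the Web Speech API, piping the transcript straight into the chat box. A rough sketch (Chrome-only in practice, and the webkit-prefixed constructor needs an `any` cast in TypeScript; the `#chat-input` selector is hypothetical):

```ts
// Pipe speech into a chat input using the browser's Web Speech API.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.continuous = false;     // stop after one utterance
recognition.interimResults = false; // only final transcripts

recognition.onresult = (event: any) => {
  const transcript = event.results[0][0].transcript;
  // Treat the transcript exactly like a typed chat message.
  const input = document.querySelector<HTMLInputElement>("#chat-input");
  if (input) input.value = transcript;
};

recognition.start();
```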
Dot by New.Computer integrates voice input elegantly alongside its text-based interactions. Notably, the interface includes a “pinch-out” gesture that reveals a hidden card view, showcasing Jason Yuan’s mastery of fluid, innovative UI design.
Visual Input
AI tools are also embracing visual input. For example, OpenInterpreter uses a webcam to “read” a sticky note held up by its creator, Killian Lucas, then connects to Wi-Fi using the credentials written on it.
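The general pattern behind demos like this: grab a webcam frame, send it to a vision-capable model, and act on the reply. A sketch of that pattern using OpenAI’s chat API (not OpenInterpreter’s actual code; the model name may need updating, and a real app would proxy the call through a server rather than ship an API key to the browser):

```ts
import OpenAI from "openai";

// Demo only: in production, route this through your own server.
const openai = new OpenAI({ dangerouslyAllowBrowser: true });

// Grab the current webcam frame as a base64 data URL.
function captureFrame(video: HTMLVideoElement): string {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);
  return canvas.toDataURL("image/jpeg");
}

// Ask a vision-capable model to read the sticky note in the frame.
async function readStickyNote(video: HTMLVideoElement): Promise<string> {
  const response = await openai.chat.completions.create({
    model: "gpt-4o", // any vision-capable model
    messages: [{
      role: "user",
      content: [
        { type: "text", text: "Read the Wi-Fi network name and password on this note." },
        { type: "image_url", image_url: { url: captureFrame(video) } },
      ],
    }],
  });
  return response.choices[0].message.content ?? "";
}
```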
Side-by-Side Layout
In this UI approach, a chat window sits on the left while results or outputs appear on the right. The layout works well on desktop but needs swipes or tabs on mobile. Here are a few examples, with a minimal layout sketch after the list:
- Layla’s travel planning AI displays the chat on the left where users share their travel preferences, while the right side shows links, videos, and offers related to their itinerary.
- Claude’s coding assistant uses a similar layout. You ask a coding question in the left window, and on the right, there’s a toggle to switch between viewing the code and its results.
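Structurally, this is just two panes: a chat column fixed on one side and an output area filling the rest. A bare-bones React/TypeScript sketch of the desktop version (the component and props are hypothetical):

```tsx
import React from "react";

// Two-pane "chat on the left, output on the right" layout.
export function SideBySide(props: { chat: React.ReactNode; output: React.ReactNode }) {
  return (
    <div style={{ display: "grid", gridTemplateColumns: "minmax(320px, 1fr) 2fr", height: "100vh" }}>
      <section style={{ borderRight: "1px solid #ddd", overflowY: "auto" }}>{props.chat}</section>
      <section style={{ overflowY: "auto" }}>{props.output}</section>
    </div>
  );
}

// On mobile you'd swap gridTemplateColumns for a single column, plus
// tabs or a swipe gesture to toggle between the two panes.
```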
Whole-Body Interface
This example demonstrates how MediaPipe tracks body pose and gestures in real time to drive an interface. While it leans on classic computer-vision machine learning rather than generative AI, the outcome is undeniably cool!
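For the curious, MediaPipe ships a JavaScript package for exactly this kind of tracking. A rough sketch using the pose landmarker from `@mediapipe/tasks-vision` (API details from memory, so double-check against the current docs):

```ts
import { FilesetResolver, PoseLandmarker } from "@mediapipe/tasks-vision";

// Load the WASM runtime and a pretrained pose model, then track a <video> feed.
async function trackBody(video: HTMLVideoElement) {
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
  );
  const landmarker = await PoseLandmarker.createFromOptions(vision, {
    baseOptions: { modelAssetPath: "pose_landmarker_lite.task" },
    runningMode: "VIDEO",
  });

  const loop = () => {
    const result = landmarker.detectForVideo(video, performance.now());
    // Each detected pose is a set of landmarks with normalized x/y
    // coordinates; map them to whatever the interface should do.
    if (result.landmarks.length > 0) {
      const nose = result.landmarks[0][0];
      console.log(`nose at (${nose.x.toFixed(2)}, ${nose.y.toFixed(2)})`);
    }
    requestAnimationFrame(loop);
  };
  loop();
}
```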