Junior UX/UI designer
Improving the Project Page UX, including the integration of AI into the workflow
The integration of AI within a workflow has always been at the very core of the MosaiQ platform. As such, we have always strived to find solutions that make the workflow easier to understand, quick to adopt, and delightful to use, trusting the theory that positive experiences reinforce positive judgment.
Dismantled previous ideas to make space for new, more suitable ones
Technical constraints
Ever-changing priorities within the workflow (research is an iterative process)
To make the UX of the Project Page more intuitive (what users can do and how)
To make the assistant’s interactions (whether with the database or with the user) feel “natural”
To speed up the workflow
Improved the overall workflow by implementing common features such as selecting and/or prioritising files
Made both the AI-to-User and AI-to-Database interactions more intuitive and user-friendly
The surge of interest in AI has accelerated not only its technical evolution but also the refinement of its language — allowing machines to communicate their reasoning in ways that are increasingly understandable to humans.
Watching this shift unfold has been fascinating: as AI became more transparent in its “thinking,” people’s perception of it began to change — from cautious skepticism to genuine curiosity and trust.
In my observation, the ability to follow the AI’s reasoning process gives users a greater sense of control, as they can trace how conclusions are formed and even question certain outcomes. Ultimately, what has become evident is that while AI is an incredibly powerful tool, it still requires guidance, training, and human oversight to perform tasks effectively and responsibly.
As the AI assistant is the core feature of the MosaiQ web app, our goal was to create an interface that felt both intuitive and adaptable, able to evolve alongside users’ needs and the fast-paced growth of AI technology.
Visual feedback - Designing clear & visible assistant interactions
Since the Assistant’s output is directly tied to the content it analyses, clear visual indicators are essential to show users exactly which files or elements the AI is referencing.
Question:
What kind of visual cue can we design? Where should it go?
Answer:
Over time, the solution took shape around three simple visual cues designed to keep users informed at every stage:
In the database: selected articles are visually confirmed with a checked box and a highlighted background.
In the chat: if the database isn’t visible, a cue above the input field displays the selected files through a dropdown.
When the processing begins: a chat bubble appears, summarising the action taken and the articles involved.
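The three cues above can be sketched as a small state-to-cue mapping. This is a hedged illustration only: the names `SelectionCue`, `CueContext`, and `cuesFor` are assumptions for this example, not MosaiQ’s actual code.

```typescript
// Hypothetical mapping from selection state to the visual cues shown to the user.
type SelectionCue = "checkbox-highlight" | "chat-dropdown" | "summary-bubble";

interface CueContext {
  selectedFileIds: string[];  // articles currently selected in the database
  databaseVisible: boolean;   // is the database panel on screen?
  processingStarted: boolean; // has the assistant begun working?
}

function cuesFor(ctx: CueContext): SelectionCue[] {
  const cues: SelectionCue[] = [];
  if (ctx.selectedFileIds.length > 0) {
    if (ctx.databaseVisible) {
      // In the database: checked box + highlighted background per article.
      cues.push("checkbox-highlight");
    } else {
      // In the chat: dropdown above the input field listing selected files.
      cues.push("chat-dropdown");
    }
  }
  // When processing begins: a chat bubble summarises the action and articles.
  if (ctx.processingStarted) cues.push("summary-bubble");
  return cues;
}
```

The point of the sketch is that the user is never without feedback: at least one cue is visible whenever a selection or a running action exists.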
Assistant Actions
Assistant Actions refers to all the actions users can perform through the assistant. These include:
To limit the assistant’s interaction to selected articles vs the full database
To choose the Assistant’s mode (Relevant text VS Full text)
To add the chat to an existing project
To export the chat
To launch AI Modules
To open and interact with Templates
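The action set above can be modelled as a small discriminated union plus two settings. All identifiers here (`AssistantScope`, `AssistantAction`, `withScope`) are illustrative assumptions, not the production schema.

```typescript
// Illustrative model of the Assistant Actions listed above.
type AssistantScope = "selected-articles" | "full-database";
type AssistantMode = "relevant-text" | "full-text";

interface AssistantSettings {
  scope: AssistantScope; // limit interaction to selected articles vs full database
  mode: AssistantMode;   // Relevant text vs Full text
}

type AssistantAction =
  | { kind: "add-chat-to-project"; projectId: string }
  | { kind: "export-chat" }
  | { kind: "launch-ai-module"; moduleId: string }
  | { kind: "open-template"; templateId: string };

// Settings changes applied immutably, so the UI can diff and re-render cleanly.
function withScope(s: AssistantSettings, scope: AssistantScope): AssistantSettings {
  return { ...s, scope };
}
```

Modelling the actions as a closed union makes it straightforward to render each one as an icon button and to add new actions as the engine grows more capable.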
These actions were developed over time, and many became possible only as the underlying engine grew more powerful.
Assistant Actions timeline:
Early iteration - In the earliest version, the assistant’s functionality was limited to answering user queries. There were no interactive actions or system-level integrations — the experience was purely conversational. This simplicity helped us establish a clear foundation for how users interacted with the assistant before introducing more complex behaviours.
Early - Mid iteration - As the product matured, we observed that users wanted greater transparency around the assistant’s activity. To address this, we introduced:
Visibility into which articles the assistant was interacting with, helping users better understand its sources and context.
We also added the option to “Restrict information to chat answers,” enabling users to narrow the assistant’s focus and keep conversations outcome-oriented. This iteration emphasised clarity, trust, and control.
Latest iteration - In the latest version, we focused on tidying and simplifying the design to create a more cohesive and efficient interface. All actions were reduced to icon-based buttons for a cleaner, more intuitive experience.
To preserve visibility and hierarchy:
The AI Modules icon was placed as the first clickable item, maintaining its prominence within the workflow.
It was followed by the Export/Move Chat to Project action, keeping essential project tools within quick reach.
Finally, the Assistant Mode control was refined to allow users to choose whether the assistant should work with the most relevant retrieved information or the full text, enhancing contextual precision.
This iteration reflects a mature, minimal, and highly functional design — balancing usability, aesthetics, and user empowerment.
2.1) Assistant & AI Modules
A feature that quickly gained prominence was the Assistant’s ability to run AI modules — predefined sets of tasks that the Assistant executes automatically.
These modules are powerful because they can be tailored to the unique needs of each user. For instance, a module could review a selected document, analyse previously defined parameters, and generate the outcome in a required format.
By automating the first step of the analysis, AI modules speed up the workflow and free users to focus on the next stages of their work.
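The “predefined set of tasks” idea can be sketched as a module descriptor plus a planner that expands it over the selected documents. The shape (`AIModule`, `planRun`) is an assumption for this example; MosaiQ’s real module format is not shown here.

```typescript
// Illustrative sketch of an AI module: an ordered list of tasks the Assistant
// runs automatically over each selected document.
interface AIModule {
  name: string;
  steps: string[];                               // tasks, in execution order
  outputFormat: "summary" | "table" | "report";  // format required for the outcome
}

// Expand the module into the concrete work items the Assistant would execute.
// This planner stands in for the real engine: it only records the plan.
function planRun(module: AIModule, docIds: string[]): string[] {
  return docIds.flatMap(id => module.steps.map(step => `${step} on ${id}`));
}
```

Because the module is just data, the same descriptor can be reused, shared, or tailored per user without changing the Assistant itself.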
Read more about AI modules
Assistant & Templates
Templates are among the earliest features in MosaiQ, providing the structure for research outputs.
At first, they were fully flexible: users could add or reorder sections and subsections as their work evolved, supporting an iterative research process.
Later, we experimented with combining templates and AI modules, enabling users to run an AI module and have its output automatically formatted within the chosen template — streamlining the workflow even further.
Read more about Templates
Developing the language - How to convey the AI’s thinking process
As mentioned, the advent of more sophisticated language, together with the ability to expose the reasoning behind a response, further sparked curiosity about AI and its capabilities.
Questions:
How can we show the thinking process without disrupting the current workflow? Where should it go? Should it be visible at all time or just requested? How can that action be triggered?
Answer:
Influenced by ChatGPT’s design, we iterated on various versions; at this stage we agreed that:
The thinking process should be triggered on demand (not displayed by default)
The Logic button - We added a “Logic” button at the top of the Assistant Action panel, just above the text placeholder. The button triggers the AI to show its thinking process.
Users should be able to close/hide the thinking process
The Thinking Process can be displayed in two ways:
Main Chat (3.a): It appears directly within the conversation and can be shown or hidden using the arrow icon next to its title
Side Panel (3.b, 3.c): When triggered, it opens in a panel on either side of the chat window
NB: This feature has not been developed yet.
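Since the feature is not yet built, the agreed behaviour can be captured as a tiny state sketch: hidden by default, shown on demand via “Logic”, and closable by the user. Names (`ThinkingDisplay`, `toggleLogic`) are hypothetical.

```typescript
// Hedged sketch of the "Logic" toggle: the thinking process starts hidden and
// the same control shows or hides it.
type ThinkingDisplay = "hidden" | "main-chat" | "side-panel";

interface ThinkingState {
  display: ThinkingDisplay;
}

function toggleLogic(
  state: ThinkingState,
  target: Exclude<ThinkingDisplay, "hidden">
): ThinkingState {
  // Pressing "Logic" reveals the reasoning in the chosen location;
  // pressing it again (or the close arrow) hides it.
  return state.display === "hidden" ? { display: target } : { display: "hidden" };
}
```

Keeping “hidden” as the default state matches the agreement that the reasoning is triggered, not displayed by default.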
Designing an interactive flow - How can users actively interact with the AI?
As the Assistant’s replies grew more advanced, users began expecting a more human, conversational way to engage with it. During Beta testing, many expressed the desire to ask follow-up questions about specific parts of a response — to steer the conversation in multiple directions, just like in a real dialogue.
Design Inspiration:
To address this need for more natural, contextual follow-ups, we began exploring interaction patterns that would allow users to reference specific parts of a response seamlessly.
The idea was first inspired by WhatsApp’s reply-to-message feature, where users can select and respond to individual chat bubbles. We initially explored replicating this behaviour, but since the Assistant’s responses weren’t structured in separate bubbles, allowing users to highlight text directly felt far more natural.
At the same time, ChatGPT was introducing a similar interaction model. Because our interface aligned closely with theirs in both look and function — and this behaviour was quickly becoming a familiar convention — adopting a comparable solution felt like the most intuitive and user-friendly choice.
NB: This feature has not been developed yet.
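As this interaction is also undeveloped, here is a minimal sketch of the highlight-to-follow-up idea: the user selects a span of an Assistant reply and the quoted text is attached to the next question, WhatsApp-reply style. All identifiers (`Quote`, `buildFollowUp`) are illustrative assumptions.

```typescript
// Hypothetical sketch: attach a highlighted span of a previous reply to a
// follow-up question, so the Assistant knows which part is being referenced.
interface Quote {
  messageId: string; // which Assistant reply the highlight came from
  text: string;      // the highlighted span itself
}

function buildFollowUp(quote: Quote, question: string): string {
  // Prefix the follow-up with the quoted span, reply-to-message style.
  return `> ${quote.text}\n${question}`;
}
```

Carrying the `messageId` alongside the text would also let the UI scroll back to, or highlight, the original passage when the follow-up is viewed.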
This portfolio was designed and coded by Mor Shmueli. It is open-sourced and hosted on Netlify