Junior UX/UI designer
Improving the Project Page UX, including the integration of AI into the workflow
MosaiQ Labs develops AI tools for knowledge workers. The Project Page is the core workspace where users collect files, analyse them, and create deliverables.
I redesigned the page to unify the file database and the AI assistant into a more intuitive system. This included refining navigation, improving file interaction patterns, and strengthening how the assistant communicates its actions and data sources.
Rapidly changing priorities, technical limitations, and the need to rebuild older concepts into more scalable solutions.
Improve the clarity and usability of the Project Page and make AI interactions feel natural, predictable, and supportive of real research behaviours.
A more transparent and efficient research environment: clearer file selection, better filtering and organisation, and more intuitive assistant feedback that helps users understand when—and how—the AI is working.
MosaiQ Labs is a startup with the mission of integrating AI solutions into knowledge workers’ workflows. Data collection showed that users followed a specific research model that can be broken down into three stages: data collection, data analysis, and production of deliverables. The Project Page is where all three come together.
The Project Page is where all the saved files are collected, and where they can be accessed for analysis with the support of the AI. It is divided into two main sections: the file database on the left, and the AI assistant on the right. The Assistant has many functions and can interact with the database in various ways:
It can interact with the whole content of the database or with selected articles
It stores AI Modules. These can be called directly from the assistant to interact with the database or selected articles
It stores Templates. As the name suggests, templates are the structure of the final deliverable.
It stores the chat history
It allows users to export or download the entire chat or a selected segment of it
It nests the Assistant Mode. The Assistant Mode lets users choose between answers based on key excerpts or full-text analysis
Because research is ultimately an iterative process with much back-and-forth, the goal was to create a flexible environment in which all components could interact with one another as well as be used separately.
Nevertheless, the design of each component presented its own challenges. Here’s an overview of how we overcame them.
What if users need to work with multiple file types?
In 2022, Beta testing showed that users primarily relied on web articles and PDFs for their work. This insight led us to limit the supported file types and design a streamlined UI focused on simplifying file search.
As the Beta user base expanded, however, new needs emerged — particularly the ability to work with additional file types such as tables and charts. Our challenge was to integrate these formats while maintaining, and ideally enhancing, the efficiency of the search experience.
Problem:
The file types are divided into separate sections, and users can only scroll down to find an article, making the action quite time-consuming when working with a large database.
Question:
How do we add more file types while improving the search for files?
Answer:
To solve these issues, we looked at two separate features that together would provide an efficient answer to the problem.
We decided to have just one file list.
Inspired by the Mac file manager, “Finder”, we added columns at the top of the list so users can organise it by (File) Name, Type, Source, or Date Added, with an Actions column alongside.
We added filters above the list, giving users the option to view the full list or only the required file types.
What if files could be selected? Which visual cues should we incorporate to make a solid connection?
Problem:
Users want to select one or more files to interact with the assistant. They don’t always want the assistant to analyse the whole project’s content.
Question:
How do we visualise the assistant’s interaction with the selected files?
Answer:
To solve these issues we implemented two features:
Inspired by the well-known Gmail pattern, we added checkboxes to select the files of interest.
In addition to the checked checkbox, we gave selected files a different background colour to strengthen the contrast between selected and unselected articles.
We added a “Reply to” visual cue in the assistant to show what the AI is interacting with. For a single item it shows the article’s title, while for more than one item it shows “Reply to | n files selected”.
What if users want to see the citation’s original context?
Problem:
Unless instructed otherwise, the machine extracts, analyses, and combines data from all the sources nested in the project to provide a comprehensive answer. But how can users know exactly where the machine has taken the information from?
While viewing the source of a citation has always been possible, it didn’t seem to be a predominant feature at first, so relatively little space was allotted to it.
Question:
How do users understand where the machine has extracted the information from?
Answer:
We enlarged the citation “bubble” and juxtaposed it over the assistant’s answer. On top of that, we added a shadow effect to make the bubble stand out from the text behind it.
We also thought about darkening the screen underneath to accentuate the contrast even more.
And if that isn’t enough and the user wants to see the snippet of text in its source, we added a “View in source” tab in the lower right corner of the bubble. When clicked, it opens the citation’s original source file in the Viewer on the left side of the screen.
What if users want to add more content to an existing project?
Problem:
As research is quite an iterative process and information can come from a variety of sources at any time, users may need to add further content to an existing project.
Question:
How can users add more content to an existing project?
Answer:
We added an “Add content” button at the top of the page. Once clicked, the button opens a menu with options to upload various file types.
This portfolio was designed and coded by Mor Shmueli. It is open-sourced and hosted on Netlify.