The Problem
After a market pivot, our product had to cater to the needs of researchers and post-graduate academics. Generative AI is a tool that empowers ESL (English as a second language) students to translate their knowledge into clear, coherent pieces of writing.
However, generative AI has constraints that can lead to problematic outcomes in academia:
AI tools often hallucinate 'sources', eroding trust in the system and potentially leading the user to false information.
Our current model would sometimes suggest a source to users, e.g. (James, 2021). However, the user had no clear path to checking the legitimacy of that source; in fact, sometimes it did not exist at all.
Researchers must be able to back up claims with relevant, high-quality sources.
The Solution
User Research
As a 5-person start-up we adhered to Lean UX principles to optimize for speed of learning. This case study documents multiple build-measure-learn loops. User interviews served as the primary driver of insights.
- Synthesis of the key outcomes
- Understanding which factors users value when assessing a potential citation
- Event-driven user text chats via Intercom
- Google’s human-centred AI guidelines as a reference for best practices
Primary Frustration
When the AI hallucinates a citation, it creates more work for the user, forcing them to search for the source in order to validate its legitimacy.
Secondary Frustration
Users often already have an appropriate source but currently have no way to add it to the text or bibliography.
"How might we create an intuitive citation experience that allows researchers to verify the sources that our AI model has consulted?"
Design Iterations
I cycled through multiple design iterations, regularly sharing prototypes with our community of users for feedback.
Version 1
The system decides when to automatically show a citation (confidence threshold of at least 90%). Users can toggle citations on/off inside content settings.
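The Version 1 rule above can be sketched as a simple gate: a citation is rendered automatically only when the model's confidence clears the threshold and the user has left citations switched on. This is a minimal illustrative sketch; the function and field names are assumptions, not the product's real implementation.

```python
# Hypothetical sketch of the Version 1 gating rule. Names are illustrative.

CONFIDENCE_THRESHOLD = 0.90  # "at least 90%", per the design above


def should_show_citation(confidence: float, citations_enabled: bool) -> bool:
    """Return True when a citation should be rendered automatically.

    confidence: the model's confidence score for the suggested source (0-1).
    citations_enabled: the user's on/off toggle in content settings.
    """
    return citations_enabled and confidence >= CONFIDENCE_THRESHOLD
```

Note how the user toggle short-circuits the check: even a high-confidence citation stays hidden when the feature is off, which is exactly the behaviour users found hard to discover.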
Issues
Not enough user control; citations appear unpredictably
Violated the principle of user control and human-in-the-loop
The on/off setting suffers from poor discoverability
Version 2
Users can highlight a block of text to surface the citations that were used as part of the generative process.
Issues
Stopping to highlight breaks the user's writing flow
What if a block of text is edited after the citation is added? A user might change the sentiment of an argument, thus rendering the citation incorrect
Citations need to come from journals for Master's and PhD students
Final Version
A keyboard shortcut (@) surfaces a list of sources that relate to the preceding text block. Users can filter by 'web' or 'journals'.
Manual citations allow users to add their own pre-found sources and then format them with one click.
Users can add a placeholder as a reminder to come back later, which keeps the writer in flow.
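The final design's source filter can be sketched as follows. This is a minimal illustration of the 'web' vs 'journals' toggle; the data model and names are assumptions for the sake of the example.

```python
# Illustrative sketch of the final design's source filter. Field names
# are hypothetical, not the product's real data model.

from dataclasses import dataclass


@dataclass
class Source:
    title: str
    kind: str  # "web" or "journal"


def filter_sources(sources: list, kind: str) -> list:
    """Keep only the sources matching the selected filter."""
    return [s for s in sources if s.kind == kind]


candidates = [
    Source("Peer-reviewed journal article", "journal"),
    Source("Blog post on referencing", "web"),
]

journal_only = filter_sources(candidates, "journal")
```

Restricting the filter to journals is what addresses the Version 2 finding that Master's and PhD students need journal-quality citations.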