GOOGLE LENS AND AI SEARCH
Google Lens is an app that enables reverse image search
The Lens desktop extension brings reverse image search and translation to desktop
Translate is a popular Lens feature for travelers because it can translate real-world objects like signposts, menus, or food packaging in real time
LENS NATURAL WORLD DIARY STUDY
TL;DR
Goal: Improve the Lens search experience and results generation for users learning about entities in the natural world, whether plant or animal.
Methodology: One-week, manually moderated diary study followed by 60-minute follow-up interviews, which included concept tests for a new information architecture and results hierarchy as well as usability tests.
Recommendations: Improve the results hierarchy and information architecture through tiered disclosure of information: foreground visual matches to an image query and surface general information about location, habitat, physical description, and threat level before more in-depth information about care, endangerment, pest control, etc.
LENS ON DESKTOP
TL;DR
Goal: Improve search experience, usability, and UX/UI design within Lens on Chrome
Methodology: 45-60 minute usability tests
Recommendations: Improve the search flow from the Chrome side bar
Since joining the Google Lens team in 2022, my primary responsibility has been conducting research on Google Lens and its AI search features. For this project, I conducted a series of research studies on Lens functionality and experience on desktop, since Lens was originally developed for mobile. The series consisted of two rounds of moderated 45-60 minute interviews with 10-12 participants each, with each round lasting a week. At the conclusion of each round, I created a deck that I shared with primary stakeholders on the research and product development teams.
LENS TRANSLATE QUERY FORMULATION
TL;DR
Goal: Improve search experience, usability, and UX/UI design within Lens on mobile for translation use case
Methodology: 30-45 minute usability tests
Recommendations: Add specific design elements such as tooltips and updated iconography/symbols for CTAs/buttons, along with broader navigation improvements
This project entailed designing and implementing research to improve the translation function within Lens, focused specifically on the experience of translating signs and objects with Lens Translate. The challenging component was mapping the primary user path, which was complex: users navigated from taking a photo within Lens, to translating text within that photo, to searching the translated and/or original text. I conducted 12 moderated usability sessions to map the user flow within Lens, provide feedback to UX and product designers on pain points, and recommend next steps.
OTHER PROJECTS
Desktop on Chrome Diary Study
Lens desktop “search what you see” extension
Semantic disambiguation