We designed the AI transformation of our products

Client: Zenoti

Duration: 3 years

Collaborating closely with product, we pursued a two-pronged AI strategy: seamlessly integrating intelligence into existing products while pioneering novel AI-driven innovations that redefine the user experience.

The Challenge

The landscape of our industry was shifting rapidly, with AI poised to revolutionize user expectations. We risked falling behind if we didn't adapt. Crucially, there was no clear precedent within our industry, no direct competitor response to emulate, and no explicit customer need to address.

We also faced our own challenges working with this technology. Just as mobile, social, and the web did before it, AI is forcing us to rethink what's possible for many of the experiences we build. The team also struggled to understand the limitations and capabilities of AI, which hindered our usual brainstorming process; we often had to test an idea's technical feasibility before fleshing it out. With no shared workflow or language, it wasn't easy to work with AI engineers and learn this directly from them. And even after understanding how the technology works, it was challenging to design the fuzzy, open-ended interactions AI offers, unlike the predictable interactions we designers are used to.

But rather than waiting for disruption to happen, we had to drive it ourselves: identify the right opportunities to integrate AI (weighing whether GenAI improves UX or introduces complexity*), enhance our products, exceed evolving user expectations, and maintain our market leadership.

* Scenarios when GenAI is beneficial

  • Tasks that are open-ended and creative, where AI augments the user.
    E.g., summarizing notes, drafting replies.

  • Creating or transforming complex outputs.
    E.g., analyzing a large dataset to give a bite-sized analysis.

  • Where structured UX fails to capture user intent.

The Discovery

To kickstart our AI initiative, we hosted a hackathon in early 2024, bringing together designers, PMs, and engineers in a collaborative sprint of innovation. Multiple teams formed, each tasked with proposing and showcasing AI-powered solutions for our product. The energy was palpable as they hacked through countless ideas, from simple enhancements to potentially profound solutions, including some "aha!" moments with the potential to revolutionize how our users interact with our products. To capitalize on this momentum, a committee was established to evaluate the projects, celebrate the winners, and strategically select the most impactful ideas for further development. Teams were then assembled around these selected projects, ready to bring them to fruition.

The Principles

With no established playbook to follow, we embraced a culture of experimentation. We immersed ourselves in the world of AI, exploring various tools and platforms, and carefully analyzing the user experiences they offered. This hands-on approach allowed us to learn firsthand what resonated with users and what fell short. This research led us to define a set of core principles that would ensure our AI integrations were always designed with the user in mind.

#1 Make users aware of AI's capabilities, creating additional value in ways they didn't know existed

What happens when a tool can do a million things, but the user only knows a handful? AI's capabilities far exceed what most people instinctively think to use them for. While open chat experiences provide full creative control, they can also overwhelm users with the question: Where do I start?

Given AI's broad applications across our portfolio, we focused on instantly clarifying its value. Whether it's surfacing action items from existing content, summarizing client details, or refining text ("make it shorter"), our goal was to reduce friction and help users get immediate results.

To combat hesitation, we introduced Suggested Prompts—predefined questions and commands that guide users toward AI’s capabilities. These prompts not only educate users on what’s possible but also keep the conversation flowing.

However, we found that mid-conversation suggestions were even more effective. When prompts adapt to context, interacting with AI feels more intuitive, reducing effort while keeping users in control.

Our research revealed that suggested prompts alone don’t always help—especially when users struggle with writing advanced queries or understanding nuanced features. That’s where Templates come in. By providing structured inputs, templates eliminate the guesswork, ensuring users don’t waste time crafting the perfect prompt.

Ultimately, our approach isn’t just about making AI accessible—it’s about making it effortless to tap into its power.
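The idea of context-adaptive suggested prompts can be sketched in a few lines. This is purely illustrative; the context keys and prompt strings are made up, not Zenoti's actual implementation.

```python
# Illustrative: surface suggested prompts that adapt to the user's current
# context, so mid-conversation suggestions stay relevant. All contexts and
# prompt strings here are hypothetical examples.
SUGGESTED_PROMPTS = {
    "notes": ["Summarize these notes", "List action items"],
    "reply": ["Draft a reply", "Make it shorter"],
    "default": ["What can you help me with?"],
}

def suggestions_for(context: str) -> list[str]:
    """Fall back to generic prompts when the context is unknown."""
    return SUGGESTED_PROMPTS.get(context, SUGGESTED_PROMPTS["default"])
```

A template would extend the same idea: instead of a flat string, each entry carries structured input fields the user fills in.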

#2 Get clarifications from the user when the initial prompt isn’t sufficiently clear

Most users aren't going to be so good at prompting that they get a great outcome on their first try. To help them get there with less friction, we designed our AI tools to seek quick clarifications, letting users validate whether the AI is on the right track and intervene if needed.

Such follow-ups can break what might have been a multi-step prompt into smaller pieces. Additional questions also give the user the sense that the AI is responding to their input, rather than simply collecting more information.
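The clarification loop can be sketched as a simple gate before generation: if required details are missing from the prompt, ask one small question instead of generating. The slot names and extraction logic below are assumptions for illustration; a real system would use the model itself to detect missing intent.

```python
# Hypothetical sketch: ask a clarifying question before generating when
# the prompt is missing key details. Slots and keywords are illustrative.
REQUIRED_SLOTS = {"audience", "tone"}  # details needed for a campaign draft

def extract_slots(prompt: str) -> dict:
    """Naive keyword-based slot extraction, purely for demonstration."""
    slots = {}
    if "clients" in prompt or "members" in prompt:
        slots["audience"] = "clients"
    if "friendly" in prompt or "formal" in prompt:
        slots["tone"] = "friendly" if "friendly" in prompt else "formal"
    return slots

def next_step(prompt: str):
    """Return a clarifying question, or a go-ahead to generate."""
    missing = REQUIRED_SLOTS - extract_slots(prompt).keys()
    if missing:
        slot = sorted(missing)[0]  # ask one small question at a time
        return ("clarify", f"Quick check: what {slot} should I use?")
    return ("generate", prompt)
```

Asking for one slot at a time is what breaks a multi-step prompt into the smaller pieces described above.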

#3 Have AI distill a resource down by synthesizing or summarizing for key takeaways

The summary pattern lets users get the essence across all of the sources and focus on what matters. We also label these summaries clearly, to ensure users know the content is AI-generated and some data could be missed.

We use different patterns: in certain touchpoints, we offer nudges to let users know the AI can help them summarize content.

But since this is such a low-risk pattern, we often introduce it upfront as well.

#4 Use filters to direct the AI, limiting its references during input or constraining formats during output

Working with AI is a process of tuning inputs and controlling outcomes. Filters allow users to set boundaries around AI to improve the quality and accuracy of their results. Parameters can be combined with filters to let users govern both the reference inputs and output qualities like tone and style, creating different results.

When the user has a clear sense of what inputs they want to direct the AI to, they should be able to constrain the data it pulls in as references.

When they know the outcome they are targeting, instead, they can constrain the format. 
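The split between input filters and output parameters could be modeled as a single request object. Everything here, the field names and payload shape included, is a sketch of the pattern, not a real API.

```python
from dataclasses import dataclass, field

@dataclass
class GenerationRequest:
    """Illustrative request: filters bound the inputs the AI may reference,
    parameters constrain the shape of the output. Names are assumptions."""
    prompt: str
    source_filters: dict = field(default_factory=dict)   # limit references
    output_params: dict = field(default_factory=dict)    # constrain format/tone

    def to_payload(self) -> dict:
        return {
            "prompt": self.prompt,
            "retrieval": {"filters": self.source_filters},
            "generation": self.output_params,
        }

# Example: constrain both sides at once
req = GenerationRequest(
    prompt="Summarize this month's client feedback",
    source_filters={"date_range": "2024-05", "source": "reviews"},
    output_params={"format": "bullet_list", "tone": "neutral", "max_items": 5},
)
```

Keeping the two dictionaries separate mirrors the UX: one set of controls for what the AI reads, another for what it writes.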

#5 Be transparent about what sources AI is using and give users control over what to use

RAG (retrieval-augmented generation) was a breakthrough in how LLMs source data, allowing them to combine their foundational training data with outside sources, dramatically increasing the data connections they can draw from to form responses.

From a UX perspective, RAG gives the user transparency into the data the AI is using, like the full URLs of the references and the ability to quickly preview them.

Giving users the ability to see and manage sources is pretty powerful. We even made it possible for business owners to connect our conversational bot Zeenie to their proprietary data. This helped to connect fragmented data across the enterprise and decreased the amount of time users spent looking for answers and resources to common questions.
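The source-transparency pattern boils down to the response carrying its citations, so the UI can show URLs and previews and let users toggle a source off before regenerating. The structure and function names below are illustrative, not how Zeenie is actually built.

```python
# Hedged sketch of a RAG response that carries its sources, enabling the
# UI to show full URLs, snippets, and per-source toggles.
def build_response(answer: str, retrieved: list[dict]) -> dict:
    """Attach title, URL, and a short preview snippet for each source."""
    return {
        "answer": answer,
        "sources": [
            {"title": d["title"], "url": d["url"], "snippet": d["text"][:120]}
            for d in retrieved
        ],
    }

def exclude_source(enabled: list[dict], url: str) -> list[dict]:
    """User toggles a source off; the next generation skips it."""
    return [d for d in enabled if d["url"] != url]
```

The manage-sources control described above is just `exclude_source` applied to the retrieval set before the next query.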

#6 Give users control to adjust parts of the content without impacting the whole

When AI operates as an assistant, like an editor, we need ways to let it interact with our content directly, including the ability to adjust parts of a piece without regenerating or impacting the whole. This reduces the burden on the user to nail the initial prompt, decreases the time it takes to get them to their initial aha moment, and lets the AI learn and work alongside them.

For example, when drafting an email campaign, we may ask AI to adjust the tone of a paragraph that is not quite right.

Similarly, when generating an image for the same campaign, users should be able to use one of the options or isolate a part of the option and have the AI apply new tokens only to that area.

The goal is for AI to never take control away from the user unless directed to do so. All suggestions need to be reviewed before they are accepted, and users have the option to control or regenerate the output.

#7 Establish mechanisms for users to provide feedback, enabling continuous improvement based on real user experiences

Giving users the ability to rate their interactions has become a table-stakes pattern in service and conversational experiences. By making it clear to the user that they are interacting with the model and how their feedback will be used, a thumbs-up or down signals to prompt engineers whether the design of the model itself is effective.

Along with this, we also include other questions, like comparing two different versions or rating the quality of regenerations.

Lastly, we account for implicit feedback by capturing user actions such as skips, dismissals, edits, or interaction frequency. These passive signals help us tune our models and recommendations.
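Explicit and implicit signals can share one event schema, so a thumbs-down and a silent dismissal flow into the same tuning pipeline. The event kinds and JSON shape here are assumptions for illustration.

```python
import json
import time

# Hypothetical feedback logger: one schema covers explicit signals
# (thumbs, version comparisons) and implicit ones (skip, dismiss, edit).
ALLOWED_KINDS = {"thumbs", "compare", "skip", "dismiss", "edit"}

def feedback_event(kind: str, target_id: str, value=None) -> str:
    """Serialize one feedback signal as a JSON line for later analysis."""
    if kind not in ALLOWED_KINDS:
        raise ValueError(f"unknown feedback kind: {kind}")
    return json.dumps({
        "ts": time.time(),
        "kind": kind,
        "target": target_id,          # e.g. the generated message's id
        "value": value,               # e.g. "up"/"down", preferred version id
    })
```

Because passive actions use the same schema, frequency counts over `kind` give the model-tuning signals described above without extra plumbing.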

#8 Give visual cues and/or labels to help users identify AI features and content

How do you know if you are interacting with a person or a model? How can you distinguish between information returned via a prompt request from information that was manually entered?

Until AI becomes ubiquitous, I feel it’s still important to differentiate AI from other features and highlight its “novelty”. There are different visual cues we use to do exactly that.

To maintain a degree of uniformity with other platforms as well as stay true to our brand, we decided to employ teal gradients and the sparkle icon to highlight AI across the board.

#9 Use variations when users are likely to need to select multiple options, such as with image generators

The faster AI can help the user get to a good result, the faster they reach their "aha" moment. That can take a while for certain kinds of prompts and generated content. It becomes especially time-consuming with rich media generation, when you have a strong idea of what you're looking for but don't know how to get the AI there.

In these moments, variants let the AI proactively seek more information from the user, capturing their intent before spending a ton of time and processing power on a poor result.

Users can review multiple iterations against a single prompt, compare the results, and regenerate a particular variant with multiple sub-variants. This keeps the user in the driver's seat while reducing the processor hours burned on bad results. Overall, a win-win for all parties.

#10 Avoid "black box" AI by making the decision-making process transparent and understandable to the user

When a user can understand how the AI logically produced its output, they can learn to improve their prompting and remain in control. AI is often better at writing prompts than humans: AI generators take the prompt users write and make logical adjustments to get the best outcome.

Making the prompting process transparent helps users become savvier with AI, whether by improving their own prompting techniques or by getting more comfortable ceding responsibility to the AI itself, where that makes more sense.

#11 (As much as possible) Let users work backward through different prompts and variations to see how they arrived at the current result

Working with AI can feel like wandering through a maze in the dark. It's nearly impossible to trace your steps as you work with AI to generate results.

Users can highlight some text in their marketing content and ask the AI to generate a different way of writing it. Even after accepting the change, they can go back to compare the two approaches and adjust the instructions around length and tone.

This approach lets users see what they left behind and understand the path the model took to reach its response. It builds trust in the model and helps users improve their results through personal feedback loops.

#12 String together workflows to synthesize and manage content on autopilot

Generative workflows work much like manual workflow patterns. They can be built by hand or from an open-ended prompt, and each step is usually a self-contained prompt, combining generative and non-generative steps. Workflows can be validated by running a test to generate a sample outcome and checking its accuracy.
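The stringing-together idea is essentially function composition: each step consumes the previous step's output. The step names and contents below are made up for illustration; in practice a generative step would call a model rather than return a canned string.

```python
# Minimal sketch of a chained workflow mixing non-generative and
# generative steps. All step names and data are hypothetical.
def run_workflow(steps, initial_input):
    """Run steps in order, feeding each output into the next step."""
    result = initial_input
    for step in steps:
        result = step(result)
    return result

def collect_reviews(_):
    """Non-generative step: fetch raw inputs (stubbed here)."""
    return ["Great service!", "Waited too long.", "Loved the new stylist."]

def summarize(reviews):
    """Stand-in for a generative summarization step (would call an LLM)."""
    return f"{len(reviews)} reviews: mixed feedback on wait times and staff."

summary = run_workflow([collect_reviews, summarize], None)
```

Testing a workflow, as described above, is just running the chain once on sample input and inspecting the outcome before putting it on autopilot.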

#13 Communicate the ongoing evolution of AI systems to users in a timely way

AI systems are dynamic and constantly evolving entities. This evolution can lead to changes in behavior that are difficult to predict. Communicating this ongoing evolution to users in a timely, relevant, and understandable way can be complicated.

#14 Extend the creative process to include engineers

There are many potential ways to approach any AI solution, so as designers, getting too prescriptive too early risked diminishing the creativity of our engineering counterparts. This is a far more creative and expressive engineering process than we are generally accustomed to. Our job was often simply to help engineers make better user-centered choices along the way, inspiring them with examples, prototypes, and vision videos.

#15 Design graceful error handling so the AI can recover from mistakes without disrupting the user experience

AI errors can stem from biased or flawed data collection, processing, model development, or implementation. It can be unclear who or what should be held responsible when an issue arises, but our UX goal was to inform users and allow them to recover gracefully whenever a mistake happens.

#16 Augment vs automate

One of the critical decisions in GenAI apps is whether to fully automate a task or to augment human capability. Automation is best for tasks that are tedious, time-consuming, and low-risk, like summarizing all feedback into a brief note.

Augmentation enhances tasks users want to remain involved in, increasing efficiency, creativity, and control. For example, drafting a message or campaign requires careful oversight due to the potential for significant harm if errors occur.
