Advancing AI design beyond the basics

May 24, 2025

Authors

Shwet Sharvary

Lead Design Consultant, Fractal BxD

Arkaprabha Dey

Senior Design Consultant, Fractal BxD

Contributors

Raj Aradhyula

Chief Design Officer, Fractal BxD

Ramchand Matta

Principal Design Consultant, Fractal BxD

Trust is the foundation of any good relationship, including the one between users and AI. Without it, even the most innovative tools risk being abandoned. The next three principles of the framework focus on building trust and enabling collaboration.

A procurement manager notices an AI-recommended order quantity that seems unusually high.

Instead of being forced to accept this recommendation, she can easily override it based on her market knowledge and experience. The system allows this adjustment and learns from it, improving future recommendations. This is about creating a collaborative partnership between human expertise and AI capabilities.

In our earlier article on elevating the human experience, we explored the first three principles for driving change. Now, let’s look at the next three.

Principle 4: Make the human in charge

Help users recover from errors by allowing corrections and keeping them in control of system decisions, including the option to override AI actions when necessary. This safeguards accuracy and bolsters user confidence.

"How do we ensure users maintain control when AI doesn't align with their intentions?"

The procurement scenario demonstrates the importance of building safeguards and override mechanisms into AI systems. This principle, make the human in charge, ensures people can intervene when necessary, retaining control while still benefiting from AI assistance.

Consider a supply chain management tool that lets managers manually adjust inventory levels and override automated reordering decisions. Similarly, HR professionals might need to override AI candidate screening decisions when the system hasn't fully captured the nuances of an applicant's potential.

The illustrative example above shows an interface that empowers users by letting them override AI decisions. This capability is crucial when users find the AI's reasoning for accepting or rejecting an application inconsistent with their own.
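To make the override pattern more concrete, here is a minimal sketch in Python of how a system could record a human override and keep it as feedback for future model iterations. The class and method names (ReorderAssistant, apply_override) are hypothetical assumptions for illustration, not an actual product's API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class OrderRecommendation:
    """An AI-suggested order quantity plus an optional human override."""
    sku: str
    ai_quantity: int
    human_quantity: Optional[int] = None
    override_reason: str = ""

    @property
    def final_quantity(self) -> int:
        # The human decision always wins when it exists.
        return self.human_quantity if self.human_quantity is not None else self.ai_quantity


class ReorderAssistant:
    """Hypothetical assistant that keeps the human in charge of reordering."""

    def __init__(self) -> None:
        self.feedback_log: list[OrderRecommendation] = []

    def recommend(self, sku: str, ai_quantity: int) -> OrderRecommendation:
        # In a real system this quantity would come from a forecasting model.
        return OrderRecommendation(sku=sku, ai_quantity=ai_quantity)

    def apply_override(self, rec: OrderRecommendation, quantity: int, reason: str) -> None:
        rec.human_quantity = quantity
        rec.override_reason = reason
        # Logged overrides become labelled feedback for the next model iteration.
        self.feedback_log.append(rec)


# Usage: the procurement manager trims an unusually high suggestion.
assistant = ReorderAssistant()
rec = assistant.recommend(sku="SKU-1042", ai_quantity=5000)
assistant.apply_override(rec, quantity=1200, reason="Seasonal demand dip not in the model")
print(rec.final_quantity)  # -> 1200
```

The design choice worth noting is that the override is never discarded: the human decision takes effect immediately, and the logged reason becomes a signal the system can learn from over time.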

Key questions designers must address include:

  • What happens when the AI fails to respond?

  • How will humans take control when AI systems err or show limitations?

  • How will the feedback system handle an inaccurate output?

A data scientist stares at her screen, puzzled by an AI-generated recommendation that doesn't align with her expertise.

The system suggests an unusual market strategy but does not explain its choice. Despite the AI's sophisticated algorithms, the lack of transparency makes it difficult to trust or act on its advice. However, she then switches to a different AI tool — one that displays its data sources, reasoning process, and allows her to explore the underlying factors. Suddenly, the same type of recommendation becomes valuable, actionable intelligence rather than a mysterious black box output.

Principle 5: Explain AI decisions

Make the AI's decision-making process transparent to users, ensuring they can understand, trust, and interact effectively with the system while upholding ethical standards and fairness.

"How can we ensure users trust and effectively utilize AI-generated outcomes?"

The data scientist's experience highlights a crucial design principle: Explain AI decisions. This principle focuses on making AI's decision-making process clear and comprehensible to users, fostering trust and enabling effective collaboration between humans and AI.

Consider a document fraud detection system that doesn't just flag suspicious documents but clearly outlines the specific patterns or anomalies that triggered the alert. This transparency allows investigators to understand the basis for these flags and take appropriate action with confidence.


This example illustrates explainable AI through a familiar interaction—a progress bar. Instead of simply displaying the time left, it highlights the AI's logic, confidence, and decision-making process in real-time, making the system more transparent and trustworthy for users of all skill levels.
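Building on the fraud-detection example, the sketch below shows one possible shape for an explainable flag: an overall risk score plus the specific pieces of evidence that triggered it, each with a weight an investigator can inspect. The data model and the sample evidence are illustrative assumptions, not output from a real system.

```python
from dataclasses import dataclass


@dataclass
class Evidence:
    pattern: str   # what was detected
    detail: str    # where or how it appears in the document
    weight: float  # contribution to the overall risk score (0 to 1)


@dataclass
class FraudFlag:
    document_id: str
    risk_score: float           # model confidence that the document is fraudulent
    evidence: list[Evidence]    # the specific anomalies that triggered the alert

    def explain(self) -> str:
        # Rank evidence by weight so the strongest reasons appear first.
        lines = [f"Document {self.document_id}: risk {self.risk_score:.0%}"]
        for ev in sorted(self.evidence, key=lambda e: e.weight, reverse=True):
            lines.append(f"  - {ev.pattern} ({ev.weight:.0%}): {ev.detail}")
        return "\n".join(lines)


flag = FraudFlag(
    document_id="INV-2291",
    risk_score=0.87,
    evidence=[
        Evidence("Font mismatch", "Total amount uses a different typeface than the body", 0.45),
        Evidence("Duplicate invoice number", "Same number was filed by another vendor last month", 0.35),
        Evidence("Metadata anomaly", "File creation date is after the submission date", 0.20),
    ],
)
print(flag.explain())
```

Exposing the ranked evidence is what turns the score from a black-box number into something an investigator can verify or challenge.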

To implement transparent AI effectively, designers should consider:

  • Is the AI system transparent enough to build trust?

  • Is the result credible or dependable? Can the user verify it?

  • Will the user understand how the AI generated the results?

A customer service representative opens a new AI-powered support interface for the first time.

Instead of being overwhelmed by complex commands or technical jargon, she finds an intuitive system that suggests complete sentences and follow-up questions based on the conversation's context. Visual cues help her understand customer sentiment, and predictive text accelerates her response time while reducing cognitive load.

Principle 6: Seamlessly blend AI into workflows

Design interfaces to actively facilitate natural and intuitive interactions with AI, enabling accessibility while considering the user's familiarity with AI behavior. Seamlessly integrate GenAI features into existing interfaces.

"How can we make AI interfaces feel natural and effortless for users?"

The customer service representative's experience exemplifies our final principle: Seamlessly blend AI into workflows. This principle ensures that AI interfaces feel natural and integrated, making complex technology accessible to all users.

Imagine a virtual shopping assistant that enables customers to upload photos of items they are searching for, utilizing image recognition to find similar products in the retailer's inventory. Alternatively, consider a one-click prompt enhancement feature that automatically improves user inputs by filling in necessary details and objectives.

This illustrative example features an e-commerce platform with a camera icon, enabling users to effortlessly search for similar products in real life, thereby showcasing the convenience of AI-powered visual shopping.
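The one-click prompt enhancement mentioned above can be sketched very simply: take the user's terse request and fill in the objective, audience, and output format that the product already knows from context. The function and field names below are hypothetical assumptions meant only to illustrate the idea.

```python
def enhance_prompt(raw_prompt: str, objective: str = "", audience: str = "", fmt: str = "") -> str:
    """One-click prompt enhancement: fill in details the user left implicit.

    The defaults stand in for whatever the product infers from the user's
    current workflow (open document, selected customer, and so on).
    """
    parts = [f"Task: {raw_prompt.strip()}"]
    if objective:
        parts.append(f"Objective: {objective}")
    if audience:
        parts.append(f"Audience: {audience}")
    if fmt:
        parts.append(f"Output format: {fmt}")
    # Nudge the model to ask rather than guess when context is still missing.
    parts.append("If key details are missing, ask one clarifying question before answering.")
    return "\n".join(parts)


# Usage: a terse request becomes a fully specified prompt behind a single click.
print(enhance_prompt(
    "summarize this complaint",
    objective="Draft a response the support rep can send",
    audience="Frustrated customer, non-technical",
    fmt="Three short paragraphs, empathetic tone",
))
```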

Designers should evaluate their systems by asking:

  • Is the interface intuitive for users new to GenAI?

  • Can we add AI/GenAI features that embed naturally in current workflows, making them almost invisible?

  • Will GenAI reduce effort and make tasks simpler?

Building trust through thoughtful design

Together, these three principles form the foundation of trustworthy AI systems: make the human in charge, explain AI decisions, and seamlessly blend AI into workflows. They ensure that AI remains a powerful tool that enhances rather than replaces human capabilities.

"When user control, transparency, and intuitive design come together, AI transforms from a mysterious black box into a trusted partner in decision-making."

In the final article of this series, we’ll dive into the toolkit we’ve developed to assist organizations in creating a roadmap to successful AI implementation.
