Reimagining Mobile Banking with GenAI

As part of a 20-week capstone with KeyBank, I explored how generative AI can meet users where they are, offering real-time budgeting tools, transaction insights, and financial literacy support directly within their mobile app experience.

Role

UX Designer, Researcher, Product Strategist

Timeline

January – June 2025

Team

4 designers

Partner

KeyBank

Challenge

KeyBank wanted to explore how generative AI could enhance the mobile banking experience, beyond basic chatbots, while maintaining user trust, clarity, and control.

The challenge: design an AI-enhanced experience that builds trust, feels intuitive, and empowers users to make smarter money choices.

Solution

We focused on in-line GenAI, AI that surfaces relevant insights (like budget nudges or savings updates) directly within the user’s flow, making it helpful but never intrusive.
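
To make "in-line" concrete, here is a minimal sketch of how such an insight might be modeled and matched to the user's current screen. The names and fields are our own illustration for this write-up, not KeyBank's implementation.

```typescript
// Hypothetical shape for an in-line GenAI insight. All names here are
// illustrative, not from any actual KeyBank codebase.
type InlineInsight = {
  id: string;
  kind: "budget_nudge" | "savings_update" | "transaction_note";
  message: string;     // short, plain-language copy shown in-line
  explanation: string; // "why am I seeing this?" text, for transparency
  relevantScreen: "dashboard" | "transactions" | "transfers";
  dismissible: true;   // users can always dismiss (user agency)
};

// Surface an insight only when it matches the screen the user is already on,
// so it appears within their flow rather than interrupting it.
function insightsForScreen(
  all: InlineInsight[],
  screen: InlineInsight["relevantScreen"]
): InlineInsight[] {
  return all.filter((insight) => insight.relevantScreen === screen);
}
```

The key design choice this sketch captures is that insights are filtered by context rather than pushed: nothing appears outside the screen it relates to.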

Process

Literature Review: We began by reviewing academic and industry sources to understand how generative AI is currently being explored in banking, with a focus on trust, usability, and transparency.

Competitive Analysis: We evaluated mobile banking apps from key competitors to identify common AI features, gaps in user experience, and best practices for integration.

User Interviews: We conducted interviews to explore users’ behaviors, needs, and concerns around AI in banking. We focused on their comfort levels, trust factors, and pain points with current tools.

Data Analysis: We synthesized findings from 22 survey responses and our interviews, mapping out recurring patterns, emotional responses, and feature preferences.

Prototyping: Our team translated insights into low-fidelity sketches and mid-fidelity wireframes. We focused on designing contextual GenAI touchpoints that aligned with user mental models.

User Testing: We tested our interactive prototype with a diverse group of users, capturing feedback on tone, clarity, and usability. This helped us validate and refine our designs.

Refinements: Based on testing and stakeholder input, we improved the onboarding flow, clarified insight language, and simplified interaction paths to enhance trust and efficiency.


Competitive Analysis

Miro Board of Competitive Analysis

What We Learned From These Companies

Everyday In-Line GenAI Examples

Here are some examples of in-line AI that you've probably seen before.

On the left, we see an example of actionable in-line AI from Google Search. Here, AI is embedded directly within the content, allowing users to take immediate actions like explaining a concept, finding related images, or copying text - without leaving their current workflow.

On the right, we have an example of in-line AI suggestions. In this case, Grammarly's AI provides real-time writing feedback, such as recommending more concise phrasing.

User Research

After our competitive analysis, we conducted a round of interviews. We began our user research by sending out a screening survey to gather information on users' mobile banking habits, comfort with AI, and prior experience with digital banking tools. This helped us identify a range of users who actively manage their finances through mobile apps.

From that pool, we selected eight participants to take part in semi-structured interviews. Each interview lasted around 25 to 30 minutes and was conducted over Zoom or in person. We specifically included participants who were 18 and older, used mobile banking at least occasionally, and had varying levels of familiarity with AI features in apps.

The interviews focused on four key areas and included 12 main questions designed to explore user behaviors, attitudes toward AI, and expectations for financial features.

What We Wanted to Learn

Before going into the interviews, we identified four key areas we wanted to learn more about from participants.

First, we explored Current Behavior & Awareness — how people currently use their mobile banking apps, what they do most often, and whether they’re aware that AI is already part of some financial tools.

Second, we asked about Reactions to AI Use Cases, like personalized savings goals or transaction explanations. We wanted to see which types of features felt helpful and which ones felt unnecessary or even uncomfortable.

We also asked about Data and Security with AI, because concerns around privacy and trust came up earlier in our background research. We asked participants how they’d feel about AI seeing their transaction history, and what types of controls they’d want in place.

Finally, we asked about Usage Patterns & Frequency with questions about how often they thought they’d actually interact with AI features if they were part of their banking app. That helped us understand where to design for regular use versus more occasional, on-demand support.

Findings

One of the most consistent things we heard from our interviews was that people primarily use their mobile banking apps for basic, routine tasks: things like checking balances, paying bills, transferring money, or depositing checks.

These actions are deeply embedded in users' financial routines, and many participants said they open their app multiple times a week just to do one or two of these things quickly.

But even though these are the most common tasks, users still described frustrations with navigation, like needing several clicks to find something simple, or struggling to locate features like Zelle or deposit tools. This finding highlighted a clear need for faster access to high-frequency actions, which we kept in mind when designing features like shortcuts and predictive assistance later in our prototype.

Another major pain point we heard across multiple interviews was around navigation and usability. Even though users were doing basic tasks, they often had to click through multiple screens just to complete them. Features like paying bills, transferring money, or freezing their cards were described as buried or hard to find.

One participant mentioned it should only take four clicks total to do something simple, but in their experience, many of these tasks felt tedious and unintuitive.

We also heard that the interface felt cluttered or hard to scan, which made checking balances or locating frequent actions frustrating, especially on smaller screens.

These insights pushed us to design features that are in-line with the user’s flow and easily accessible, minimizing the need to dig through menus or navigate to a different screen. 

While many banking apps already include AI tools like chatbots or automated budgeting, we found that most users either don’t use them or have had negative experiences with them in the past.

Participants described current AI features as generic, unhelpful, or frustrating because they were often unable to complete tasks or provide relevant support. Several people said they tried using a chatbot but ended up stuck in loops or had to involve a human agent to actually get the help they needed.

These responses revealed a significant gap between what's available and what users actually find useful. People weren’t necessarily against AI — they just hadn’t encountered implementations that felt responsive or worth engaging with.

This finding helped shape our goal to use AI in a way that feels personalized, capable, and actionable, instead of just another chatbot.

Key Themes

Understanding Our Users

To make sure our design decisions were grounded in real user needs, we developed two personas based on patterns we saw in our interviews and usability testing. 

First, we have Sam, who represents younger professionals or recent grads - people who are comfortable with tech and want to get in and out of the app quickly. Sam primarily uses banking apps to check balances, pay bills, and make quick transfers. They’re open to helpful AI features, like summaries or savings nudges, but are skeptical of anything too pushy or complicated. For Sam, it was important that our design included shortcuts, streamlined personalization, and clear transaction info.

Then there's Natalie, a more experienced user who has multiple accounts and responsibilities - often both personal and business-related. Natalie is open to using AI, but only if it’s transparent, explainable, and easy to control. She needs reassurance that the system understands her context and that she can approve or adjust anything before it happens. Her needs really reinforced our emphasis on trust, permission settings, and explainable AI recommendations.

These two personas capture the range of expectations we saw, from users who want speed and simplicity to those who need confidence and control, and both shaped the direction of our design decisions throughout the project.

Core Principles

Overall, we identified four core principles describing what users want (sketched in code after this list):

Invisible until useful: Users want AI to be available when it’s needed, but not intrusive. It should work quietly in the background and only surface when it can add real value.

User agency: People want control. They need to be able to easily edit, dismiss, or correct AI suggestions, ensuring the experience feels empowering rather than restrictive.

Transparency: Trust is critical. Users expect explainable AI—systems that are clear about how decisions are made, helping to build confidence and reliability.

Familiar UI: Finally, users prefer interfaces that feel familiar. By incorporating KeyBank’s existing design language, we make sure the AI features are easy to adopt and don’t disrupt the user’s workflow.
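
As a rough illustration of how these principles might translate into a suggestion model, here is a hedged sketch; every name and the confidence threshold are assumptions we made for this example, not code from the prototype.

```typescript
// Illustrative only: one way the four principles could shape a suggestion model.
type AiSuggestion = {
  message: string;
  rationale: string;   // Transparency: explain why the suggestion was generated
  confidence: number;  // 0 to 1; gates the "invisible until useful" check below
  actions: {
    accept: () => void;  // User agency: apply the suggestion
    edit: () => void;    // ...or adjust it first
    dismiss: () => void; // ...or dismiss it entirely
  };
};

// Assumed cutoff for this sketch; in practice it would be tuned with users.
const USEFULNESS_THRESHOLD = 0.8;

// "Invisible until useful": only surface suggestions likely to add real value;
// everything else stays quietly in the background.
function shouldSurface(suggestion: AiSuggestion): boolean {
  return suggestion.confidence >= USEFULNESS_THRESHOLD;
}
```

The fourth principle, familiar UI, lives in the presentation layer rather than the model: suggestions that do surface are rendered with KeyBank's existing components.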

Prototype Highlights

Early Ideation/Low-Fidelity

When I started sketching, I knew we wanted an in-line notification, perhaps a pop-up, and we wanted it to be subtle and customizable.

Wireframe Flows

Wireframes: Budgeting

Wireframes: In-line Transfer

Wireframes: In-line Analyze Transaction

Wireframes: Dynamic Dashboard

Usability Testing

Our goal for this round of testing was to evaluate how usable, clear, and effective our inline generative AI features really were—especially after implementing feedback from earlier research.

We brought back five participants from our original interview group to keep the user insights consistent and to see how well the design changes addressed their earlier pain points. Each test lasted around 40 to 45 minutes.

We gave each participant four key tasks that reflected real scenarios, such as adjusting a budget, responding to an AI prompt, analyzing a transaction, and navigating the dashboard. 

These tasks helped us assess not just functionality, but also how well the in-line AI fit into their natural mobile banking flow.

Our usability testing revealed three core behavioral insights that shaped how we refined the design:

First, Clarity and Context Are Key. While users appreciated the value of inline AI suggestions, they often weren’t sure how those suggestions were generated or what effect they’d have, like whether a budget change was temporary or impacted future months. That led us to introduce an onboarding flow to help users build trust and better understand how the AI works before they act on it.

Second, we found the need to Strengthen Feedback After Actions. Features like AI-assisted transfers and budget adjustments were well-received, especially with Face ID confirmation, which added a layer of trust and intentionality. Still, users wanted reassurance that their actions had been applied correctly, so we added feedback confirmations to close that loop (sketched in code after these insights).

Finally, our third key insight is that Users See Value in In-Line AI Support. Participants consistently told us the AI felt helpful, relevant, and behavior-changing as long as it appeared at the right time, in the right place. When done well, inline AI nudges felt like a natural part of the experience, not a disruption.
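
To make the second insight concrete, here is a minimal sketch of the confirm-then-give-feedback loop we designed around AI-assisted transfers. The function names and the biometric hook are illustrative assumptions, not a real Face ID API.

```typescript
// Illustrative sketch of "close the loop": confirm identity before an
// AI-assisted action, then always show explicit feedback afterward.
type ActionResult = { ok: boolean; summary: string };

async function runAiAssistedTransfer(
  confirmWithFaceId: () => Promise<boolean>, // assumed biometric hook
  applyTransfer: () => Promise<ActionResult>,
  showConfirmation: (message: string) => void
): Promise<void> {
  // Face ID gate adds intentionality before anything is applied.
  if (!(await confirmWithFaceId())) {
    showConfirmation("Transfer canceled: identity was not confirmed.");
    return;
  }
  const result = await applyTransfer();
  // Feedback confirmation so users know whether the action actually took effect.
  showConfirmation(
    result.ok ? result.summary : "Something went wrong. Your transfer was not applied."
  );
}
```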

Alongside the broader behavioral patterns, our usability testing also uncovered some key micro-level interaction insights that helped us refine the interface further:

Discoverability of Microinteractions Needs Improvement: Participants often missed subtle but important elements like clickable terms, thumbs-up/down icons, or small arrows. This told us we needed to improve visual cues and touch target sizes to make interactive elements more intuitive and accessible.

Personalization Preferences Are Strong, but Subtle: In our A/B testing, users strongly preferred toggle switches over checkboxes. They felt toggles were more modern, confident, and final—whereas checkboxes felt temporary or uncertain. This small detail helped us ensure our controls match user expectations.

Insight Timing & Frequency Should Be Flexible: While some users liked weekly financial summaries, others said they only check their accounts monthly. This highlighted the need for adjustable insight settings, letting users tailor how often they receive updates so AI support fits their actual behavior (a minimal preferences shape is sketched after this list).
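
The adjustable-frequency finding, for example, could map to a small preferences model like the one below; the shape and defaults are hypothetical, chosen to mirror the weekly-versus-monthly behaviors we heard.

```typescript
// Hypothetical user preference shape for insight timing; names are ours.
type InsightPreferences = {
  summaryFrequency: "weekly" | "monthly" | "off"; // matches behaviors we heard
  budgetNudges: boolean;   // rendered as toggles, per the A/B preference above
  savingsUpdates: boolean;
};

const defaults: InsightPreferences = {
  summaryFrequency: "weekly",
  budgetNudges: true,
  savingsUpdates: true,
};
```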

These micro-level findings reminded us that even subtle interaction patterns can shape the overall perception of clarity, trust, and usefulness in AI-powered experiences.

Final Prototype Flows

SUPR-Q Survey Results

After the usability testing, we had participants complete the SUPR-Q (Standardized User Experience Percentile Rank Questionnaire), which measures user experience across usability, trust, appearance, and loyalty, with two questions from each category rated on a scale of 1 to 5.

Our overall average score was 4.1, reflecting generally strong user satisfaction with the prototype.
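
To make the scoring concrete, here is a minimal sketch of how a SUPR-Q tally works: two ratings per category, averaged per category and overall. The numbers below are placeholders, not our participants' actual responses.

```typescript
// Hypothetical SUPR-Q tally: 8 items (2 per category), each rated 1 to 5.
// These ratings are made-up placeholders for illustration only.
const responses: Record<string, number[]> = {
  usability:  [4, 3],
  trust:      [4, 4],
  appearance: [5, 5],
  loyalty:    [4, 4],
};

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

// Per-category averages, plus an overall average across all 8 items.
const perCategory = Object.fromEntries(
  Object.entries(responses).map(([category, scores]) => [category, mean(scores)])
);
const overall = mean(Object.values(responses).flat());
```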

The highest score was in Appearance, with a 4.7, showing that users found the design clean, modern, and visually appealing.

The lowest score was in Usability, with a 3.7, indicating room for improvement. Interestingly, many participants said they enjoyed using the prototype during testing, suggesting that while the overall experience was positive, specific interaction issues like unclear labels or small touch targets may have lowered the perceived usability score.

These results helped confirm that while users liked the look and felt confident in the app, there’s still room to refine clarity and flow in key interactions.

Next Steps

As we move forward, our first priority is to develop higher-fidelity prototypes. We'll take the feedback from usability testing and refine our wireframes to improve clarity, layout, and microinteractions to make the experience more polished and intuitive.

Next, we plan to expand testing with a more diverse user group. Currently, our users are primarily ages 22–35, digitally fluent, and comfortable with mobile banking. While this gave us consistent feedback and insights into Gen Z and millennial behaviors, expanding to a more diverse group will help validate our designs across different ages, financial goals, and tech comfort levels.

Finally, we'll begin aligning our designs with KeyBank's existing design system. This will help ensure consistency with their broader product ecosystem and allow for easier scalability if these features are implemented in the future.


Conclusion

This capstone showed how in-line GenAI can enhance mobile banking without sacrificing trust, clarity, or control. By surfacing insights directly within users' existing flows, and grounding every decision in research around transparency, user agency, and familiarity, we delivered a prototype that participants rated highly, with an overall SUPR-Q average of 4.1, along with a clear roadmap for refining and scaling these features with KeyBank.