Helping retail investors make sound decisions

Project type:
Mobile App
Work scope:
End-to-End Product Design
Timeline:
Feb. 2025 – ongoing

Context

During a casual conversation, an acquaintance who works as an investment consultant told me about a product idea he had been considering for some time.

He had seen many people struggle to make sound investment decisions that delivered solid returns while reflecting their values and investment style.

He believed an AI Assistant could be a great way to solve this problem, effectively turning his expertise into a scalable digital product.

I decided to help turn this idea into reality.

Challenge

We were starting from scratch. There was no name, no branding, no research, no team… nothing but a busy founder’s idea and a tried-and-true Design Process.

The goal was to lay the foundation for an MVP.

Solution

1. Discover

We knew some retail investors needed assistance. But who actually needs help, and what does that help even look like?

I started by looking for a few potential users to interview. I eventually found three retail investors from different backgrounds, age brackets, and experience levels who agreed to be interviewed.

The pictures are merely illustrative, not the actual interviewees.

In parallel, I conducted desk research to understand the problem space: identifying competitors, gathering market data, and uncovering retail investor demographics and behavior patterns.

The insights from the desk research and user interviews allowed me to establish a Problem Statement and Proto-Persona.

I’m wary of getting investment advice from other people. I’m trying to figure it out on my own.
— Yevhenii M.

At this point, I had a touchpoint with the founder to discuss, validate, and enrich the information collected thus far.

2. Define

Armed with enough data and user insights, I created a Perspective Grid to turn those insights into stories: Person → Needs → Gap/Barrier/Pain → Value → Implied Capabilities → Possible Solutions → Features

This Perspective Grid then informed an Experience Map, which provided a more comprehensive visualization of the customer journey, including phases, actions, thoughts, feelings, and opportunities.

Both of these artifacts resulted in a list of potential features. However, I knew it wouldn’t be possible to develop all of them for the MVP (and maybe not all of them were even worth pursuing).

To choose which ones would make the cut, I had another touchpoint with the founder. First, I presented the discovery insights and explained how they resulted in the list of features. Then, we discussed and prioritized the list, establishing which ones should be in the MVP.

With the list of features prioritized and approved, I established the Information Architecture and User Flows for each section of the app.

Information Architecture for AssetChat
Sample of three User Flows. There were seven in total.

Great! Now I could finally start creating screens… right?

Not quite. I still had important questions to answer, such as:

  • What content should be on each page?
  • How should we prioritize this content?
  • How would users navigate between the different features and sections?

To answer these questions, I employed a combination of:

  • Looking up the content and UI patterns used by competitors.
  • Using AI wireframing tools such as Uizard to quickly explore potential solutions.
  • Applying the Object-Oriented UX (OOUX) methodology to achieve structure, prioritization, and intuitive contextual navigation.
An overview of the OOUX artifacts I generated

3. Develop

Now we’re talking! The only thing left before I could start working on the UI was establishing basic branding — logo, name, and color palette.

Branding is not my area of expertise and wasn’t a priority in this project, so I did it “quick and dirty,” trusting that the founder would hire a specialist later on to make it shine.

Finally, I created the main screens (onboarding, home, main sections) that would allow me to assemble a prototype to conduct usability tests.

I created prototypes in Figma and Lovable. Click the images below to see them.

For a better experience, press “Z” (fit width and height)
For a better experience, view as mobile version (Inspect ⟶ Toggle device ⟶ Mobile ⟶ fit to window)

I chose these tools because each provides different insights while testing. Generally, a Figma prototype will allow for richer visual design while sacrificing realistic interactions. On the other hand, vibe coding tools like Lovable result in more realistic interactions with less polished visuals.

I then conducted usability testing with one of the people I had interviewed, uncovering quite a few areas for improvement.

Below is the Figma prototype after adjustments (bear in mind it's an ongoing project).

Next steps

  • Finishing all screens that were not included in the prototypes (e.g., settings, quiz).
  • Validating all screens and content with the founder.
  • Running more usability tests once we have all the screens.
  • Handing off to the development team.

Reflection

I learned many lessons with this project:

  • Designing a product from scratch follows the same process as improving an existing one, but with less certainty along the way. There are some things we just have to assume and then test.
  • There’s a fine line between being clever and reinventing the wheel.
  • There’s more than one way to prototype.
  • I need to get comfortable letting the user struggle during usability tests.
  • Collaborating with a developer along the way would have streamlined the Design process.
  • If I were starting over now, I would build on top of an existing Design System instead of creating everything from scratch (although this was a great exercise).

All in all, working on an AI-based product has been an exciting challenge, and I hope to do more of that in the future!