Crypto trading is opaque by default — volatile markets, complex interfaces, and no clear signal on when to act. VLADAM wanted to change that with an AI-powered trading platform that surfaced real-time recommendations from machine learning models. I led UX design on the MVP from concept to launch.
Designing the AI-driven crypto trading platform
The core design problem wasn't the interface — it was trust. Users needed to act on AI-generated recommendations with real money on the line. That meant every design decision had to answer the same question: does this make the AI feel more or less reliable?
I led design from early concept through to a tested, launch-ready prototype. Research focused on understanding how both experienced traders and complete newcomers interpreted market data — two groups with very different mental models and very different thresholds for trusting an automated signal.
The main tension throughout was simplicity versus transparency. Beginner users needed the interface to feel approachable; experienced traders needed enough context to evaluate the recommendation themselves. Most of the iteration happened in that gap.
Retrospective after the project
The MVP launch directly contributed to the client securing €175K in pre-seed investment — investors and early users cited the platform's clarity and ease of use as a key differentiator in a crowded space.
Designing AI outputs for trust is a problem I'd approach differently today. In 2021, "showing the recommendation" felt like enough. Now I'd spend considerably more time on explainability — helping users understand why the AI was suggesting what it was, not just what it suggested. That layer of transparency is what separates AI features people rely on from ones they ignore.