AI Agent–Driven Lab Experience

Jan 29, 2026

An AI-powered lab experience designed to guide learners through complex, hands-on cybersecurity scenarios in real time. The agent supports learners as they work inside live environments—offering contextual guidance, hints, and validation—while preserving the challenge and realism of the lab. The goal was to improve confidence, reduce dead ends, and help learners progress without removing the need for critical thinking.

0M+ users reached

0+ months of collaboration

7,000+ hours of time saved

The process

The design process focused on preserving the integrity of hands-on learning while reducing unnecessary friction. By combining behavioral data, scenario analysis, and iterative prototyping, the AI agent was shaped to support learning in real time without undermining problem-solving. Every decision prioritized realism, trust, and learner confidence at scale.

01

Discovery

/research and insights

We analyzed how learners interacted with hands-on labs, where they became blocked, and what caused frustration or abandonment. Usage patterns revealed that learners often understood concepts but struggled with execution and next steps inside live environments.

02

Design

/concepts and execution

The AI agent was designed to behave like a mentor rather than a solution engine. Guidance was contextual, optional, and timed to moments of friction—helping learners reason through problems instead of bypassing them. Language, tone, and visibility were carefully tuned to avoid breaking immersion.

03

Testing & Iteration

/feedback and refinement

Early versions were tested to find the right balance between assistance and autonomy. Iterations focused on when the agent should intervene, how much information it should provide, and how learners responded to different levels of guidance during active lab work.

04

Delivery

/handoff and launch

The agent was embedded directly into the lab experience, working alongside terminals, tasks, and objectives. The design ensured the AI felt like part of the environment rather than an overlay, with clear states and predictable behavior learners could trust.

05

Scaling

/growth optimization

The system was designed to support a wide range of lab types, difficulty levels, and future scenarios. Its modular structure allows new behaviors, prompts, and capabilities to be added as the platform and curriculum evolve.

Performance at scale

Satisfaction score: 87%

37+ countries

0K+ views

0+ languages

17M+ active users

[Active-user growth chart, 2021–2025]