Validating Learning Models through UX Research
Client
Senior project - Interactive Media Management Program
Duration
3 weeks
Roles
UX Researcher, Facilitator, Interviewer
Tools
Figma, Miro, Maze, Google Workspace
Team
1 Collaborator
Quick rundown
- Problem and main goal: Validate that the learning method and gamified elements of an e-learning app do not hinder users' learning or overall experience.
- Testing Methods: I ran remote testing (using Maze) and in-person testing to ensure consistency across both.
- Participant Screening: I identified suitable test participants using specific filtering questions.
- Feedback Collection: I collected feedback after the onboarding flow and after lesson completion.
- User Suggestions: Participants praised the app but suggested improvements to clarity and instructions.
The problem
While designing a gamified e-learning mobile application that teaches young adults how to use storytelling effectively at work or school, one of the essential moments in the design was validating the learning method and confirming that the gamified elements did not hinder the user's learning or overall experience.
I needed users to understand the lesson and how to interact with the app so they could complete it successfully on their first attempt; the learning model refers to this initial understanding as the pre-assessment. On top of this, the gamification elements should enhance the experience enough that users want to keep using the app, not detract from it.
For this, I had about one week until my deadline.
My approach
With a clear purpose, I hatched a plan that translated into actionable objectives, using two separate methods to reach enough potential users: remote testing with Maze, and in-person testing where I asked users about their experience live.
For Maze, I was lucky enough to have the help of a mentor, an eleven-year veteran of user research who guided me on how to use Maze and read the reports, for which I am ever grateful.
A challenge I ran into while building the test was keeping enough cohesion between the two methods so that their results would not conflict, while also working within the limitations of Maze's free version. After some consideration and a discussion with my mentor, I decided to make my in-person tests a little more thorough; since they involved fewer people, I would balance them out with the more quantifiable data from the remote test.
After building the structure, I turned my sights to potential users. I used a modified list of filtering questions to determine whether the people giving feedback were as close as possible to the application's target audience. At this point, it was a matter of time and footwork.
Screening Questions
- What do you do currently?
- What do you think about using phone applications to learn?
- Would you use a mobile application to learn something?
- Have you found it hard to explain, present, or convey something at school or work? Think about making a presentation, discussing an idea, pitching something to anyone, or speaking in public.
- What do you know about storytelling?
The Outcome
We came up with a test that asked participants to act as marketing students aiming to enhance their pitching skills while interacting with the prototype. They undertook two tasks: first, completing the onboarding process to receive a learning path and reach the home screen; second, selecting, starting, and completing their first storytelling lesson. Questions following each task gathered feedback on the application's functionality, areas of confusion or improvement, and user preferences. A final question assessed participants' perception of the app's potential to enhance storytelling in professional or educational settings. However, it was by observing their actual behaviour through Maze that we found some of the most interesting insights.
The Impact
The end of this journey marked the beginning of a new one, as users praised the app's goal-based learning approach, likening it to platforms like Duolingo and highlighting its effective task completion.
However, users suggested improvements: refining the onboarding questions for better user profile creation, ensuring clearer objectives and user assistance during lessons to minimize confusion, clarifying the app's purpose at the outset, and providing clearer instructions, particularly around the lesson selection process. These suggestions were documented for future iterations, as the project was paused in its final design stage.
After usability testing carefully focused on the learning model and gamification elements, I can confidently say that users adopted the learning model well and that the gamified elements enhanced their learning experience.