StoryLab Science

For my Master’s thesis project, my partner Jodalys Herrera and I crafted an AI-enhanced learning tool for science. By harnessing the power of storytelling and interactive gaming, our project offers an engaging and enjoyable learning journey.

Science is a challenging subject to teach in elementary classrooms, and as a result, students often receive limited science instruction each day. To address this, StoryLab Science (SLS) was designed as a supplementary educational tool that integrates Kundu’s framework, based on Bandura’s research, for enhancing self-efficacy in online education to promote scientific literacy and confidence in learners. SLS uses question-driven narratives that align with NGSS standards and multiple representations of science to deepen understanding. Learners explain their understanding to a teachable agent, and SLS provides timely AI-generated feedback to reinforce correct answers and address misconceptions.

To date, the development of SLS has involved four prototypes and five learner studies. The evaluation process has included both quantitative and qualitative analysis to assess learning outcomes and the technological effectiveness of SLS. These studies have generated valuable insights into the impact of SLS on scientific literacy, learner engagement, and the overall effectiveness of the platform, and this data-driven approach has informed the continuous refinement of SLS, ensuring it provides a comprehensive and engaging science education experience for young learners. Our next steps include testing a fully functional prototype of SLS with learners, using pre- and post-surveys to evaluate its impact on learning outcomes, scientific literacy, and engagement with science.

Read full thesis project here: https://purl.stanford.edu/tb353kq3490

Prototyping

Figma

Our Figma prototype showcases the interactive demo presented at LDT Expo, designed to engage learners in an immersive science lesson covering Gravity, Electricity, & Magnets. Through an enchanting narrative featuring Moo-nica, the adventurous space cow, alongside Finn and his trusty sidekick, Baldwing the hairless chicken, learners embark on a captivating journey. They absorb scientific concepts through a variety of mediums, including videos, textual content, and interactive games, all while aiding Moo-nica in her quest to return home.

Highlighting the visual appeal of our product, the prototype incorporates stunning illustrations with a parallax effect, enhancing the overall user experience. Moreover, our platform has an integrated AI system tailored to each learner's interests and age group, offering personalized assistance to address any misconceptions they encounter.

For further details, refer to our comprehensive write-up and demo video located at the top of this page.


Early Development

In the initial stages of our project, we conducted user testing on the navigational format of our learning experience using flashcards. Following trials within our target age group, we discovered that offering the three learning modules (reading, watching, or playing) as options was more effective than a linear progression through each module. If users struggle to understand the material before explaining concepts to the teachable agent in the story, they are free to revisit and explore the alternative mediums to reinforce their learning.


Teachable Agent & AI Development

To effectively refine our AI system, we initially conducted user testing of our prompts using the GPT-4 website. Our aim was to ensure that the prompts facilitated meaningful conversations between the user and the teachable agent (AI), thereby enhancing the user’s learning journey. By incorporating parameters such as Name, Age, and Interests, we sought to engage users by tapping into their existing knowledge and intrinsic interests, allowing us to create meaningful metaphors tailored to each individual.
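The parameterization described above can be sketched as a simple prompt template. This is an illustrative sketch only; the function name, field names, and wording below are hypothetical stand-ins, not our exact production prompt.

```python
def build_tutor_prompt(name: str, age: int, interests: list[str], concept: str) -> str:
    """Assemble a hypothetical system prompt for the teachable agent.

    The parameter names and phrasing here are illustrative examples,
    not the exact prompt used in SLS.
    """
    interests_text = ", ".join(interests)
    return (
        f"You are a friendly teachable agent helping {name}, age {age}, "
        f"learn about {concept}. {name} enjoys {interests_text}; "
        f"use metaphors drawn from those interests to explain ideas. "
        f"Gently correct misconceptions and affirm correct answers."
    )
```

Filling the template with a learner profile (e.g., a nine-year-old who likes soccer and space, studying gravity) yields a prompt that steers the model toward metaphors grounded in that learner's existing interests.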

Following this phase, we developed our own backend AI chat system. Leveraging speech-to-text and text-to-speech technology, we aimed to create a more natural conversational experience for users, surpassing the limitations of traditional writing and reading interactions.


Quantitative and Qualitative Data

We collected audio recordings from nine children explaining scientific concepts in controlled studio settings with minimal background noise. We processed the recordings with a speech-to-text API and computed the Word Error Rate (WER), a critical metric in speech recognition: WER measures how accurately spoken language is transcribed relative to a reference transcript, with lower values indicating a more accurate model. The transcripts were then fed into our ChatGPT prompt. To personalize the feedback it generated, we sent out a parent survey to fill our prompts with topics of interest relevant to our target learners. Additionally, we developed qualitative codes and a rubric to assess the effectiveness of ChatGPT's feedback, capturing factors such as accuracy, positive affirmation, reference to learners’ interests or prior knowledge, and addressing of misconceptions. Using these codes, we rated the feedback on a scale of 1 to 3.
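For readers unfamiliar with the metric, WER is the word-level edit distance (substitutions, insertions, and deletions) between the hypothesis transcript and the reference, divided by the number of words in the reference. A minimal sketch of the standard computation (not our exact analysis code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / reference word count."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Dynamic-programming table of edit distances between prefixes.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution (free if words match)
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, transcribing "the cat sat on the mat" as "the cat sat on mat" drops one of six reference words, giving a WER of 1/6 (about 0.17); a perfect transcript scores 0.0.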

Read the full report and the data that we collected in the publication.
