Alpha - Developing Core Game Modes
- Tiffany Zhou
- Apr 14
- 2 min read
This week saw a lot of progress on some fundamentals.
Starting with Story Mode, we incorporated much more feedback for the user as they play the game. The mode now includes instructions on what to do once the user clicks on a story, and answer choices are marked with either a green "correct" or a red "incorrect" UI text element. Finally, once the user has unlocked every character in the story, they are told that they have won the game. Some final steps for this mode will be adding many more characters and letting the user know which characters they have unlocked when the characters button is clicked.
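For the curious, here is a minimal sketch of that feedback logic in plain Python. The real game implements this in Unity, and the character names, function names, and values below are purely illustrative:

```python
# Illustrative sketch of Story Mode's answer feedback and win check.
# Names and values are hypothetical; the actual implementation is in Unity.

ALL_CHARACTERS = {"Happy Hana", "Sad Sam", "Angry Andy"}  # placeholder roster

def mark_answer(choice: str, correct_answer: str) -> tuple[str, str]:
    """Return the label and color used to mark the chosen answer."""
    if choice == correct_answer:
        return ("correct", "green")
    return ("incorrect", "red")

def check_win(unlocked: set[str]) -> bool:
    """The player wins once every character in the story is unlocked."""
    return unlocked >= ALL_CHARACTERS

unlocked = {"Happy Hana", "Sad Sam"}
print(mark_answer("kindness", "kindness"))  # ('correct', 'green')
print(check_win(unlocked))                  # False until all are unlocked
```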
We also made progress on Express Mode. First, we swapped Express Mode's camera from the webcam to the AR camera. Second, the MoodMe SDK we had been using struggled to identify certain emotions (sadness in particular), heavily overweighted others (neutral), and as of milestone 2 only detected 3 emotions. So, the second major improvement came from importing our own neural network, which now allows us to detect 7 emotions. Some thresholds still need to be adjusted to improve accuracy, but we now have the groundwork for more emotions.
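As a rough illustration of the thresholding we still need to tune, here is a minimal Python sketch. The seven labels follow the common FER-style convention (our model's ordering may differ), and the threshold values and helper names are placeholders rather than our production settings:

```python
import numpy as np

# Common 7-emotion label set (FER-style); our model's labels may differ.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Placeholder per-emotion confidence thresholds; these are the knobs
# we still need to tune, not final numbers.
THRESHOLDS = {e: 0.40 for e in EMOTIONS}
THRESHOLDS["neutral"] = 0.60  # e.g., require more evidence before "neutral"

def softmax(logits: np.ndarray) -> np.ndarray:
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def classify(logits: np.ndarray) -> str | None:
    """Return the top emotion if it clears its threshold, else None."""
    probs = softmax(logits)
    top = int(probs.argmax())
    label = EMOTIONS[top]
    return label if probs[top] >= THRESHOLDS[label] else None

# Example: raw scores from the network for one video frame.
print(classify(np.array([0.1, 0.0, 0.2, 2.5, 0.3, 0.1, 1.0])))  # "happy"
```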
Additionally, we started reworking Express Mode's prompted portion. We wanted to integrate an LLM so the user could engage in natural dialogue for roleplay purposes: the AI would provide random scenarios and task the user with responding using a given emotion. Unfortunately, we are having issues getting the speech-to-text portion to compile with Xcode.
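To make the planned roleplay flow concrete, here is a rough sketch of the kind of scenario prompt we have in mind, written against an OpenAI-style chat API. The model name, prompt wording, and API choice are all assumptions, since we have not settled on an LLM yet:

```python
from openai import OpenAI  # assumption: an OpenAI-style chat API

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_scenario(target_emotion: str) -> str:
    """Ask the LLM for a short roleplay scenario for the given emotion."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You create one-sentence roleplay scenarios for "
                        "young children practicing social-emotional skills."},
            {"role": "user",
             "content": f"Give me a scenario where the child should "
                        f"respond with a {target_emotion} expression."},
        ],
    )
    return response.choices[0].message.content

print(get_scenario("sad"))
```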
We also scheduled and held an interview with stakeholder Fatima Joumaa, an early elementary teacher who focuses on social-emotional learning in her classroom. She had great insights about our product and ideas for improvement.