WeatherBug Voice: Rethinking Weather Information Communications

This project was conducted as part of the UBC Designing for People CREATE training program.

The DFP training program included a course on Design Thinking, followed by a practical application project partnered with an active company.

This page outlines how we executed the process of design thinking to generate our recommendations for how to handle errors in the context of Weather VUIs.


Our project objective with WeatherBug was to "investigate the best strategies to retrieve weather information, and design an intuitive Voice User Interface (VUI) for WeatherBug users".

To begin our empathize phase, we interviewed six people with varying levels of experience with voice user interfaces, having each try out a few scenarios with existing VUIs (Alexa, Google Assistant) and then discussing the results with them.

We identified some key aspects that these users are looking for in VUIs, such as consistent behavior between applications.


We narrowed the design space down to an issue of Trust. How do we establish trust between a user and their weather VUI, given the limitations of the technology and the data?

We generated a set of personas to outline the users we were designing for.

Ideate (and then do it again)

We developed a series of storyboards and use cases from which to develop a prototype.

At the end of this process, we determined that our scope was too large and we needed to go back and repeat these last three steps.

We re-empathized using a newly received set of user data in the form of VUI reviews, focusing specifically on issues of trust. We re-defined the problem space, narrowing further to error handling, and we re-ideated, reading prior research on how to handle VUI errors and storyboarding several error scenarios our design could address.


We developed a low-fidelity prototype on a whiteboard, which we tried out by acting as the VUI ourselves and responding to a few test users.

Based on this feedback, we built a Wizard-of-Oz system: a web front end that users could interact with as if it were a VUI. The backend allowed a tester sitting in another room to listen to what the user said and have the machine synthesize an arbitrary string of text into speech, enabling rapid prototyping of VUI interactions.
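The core of such a Wizard-of-Oz setup is a simple relay between the participant and the hidden tester. The sketch below illustrates that relay in Python; all names are hypothetical and the real system was a web application with speech synthesis, not this in-memory version.

```python
import queue


class WizardOfOzRelay:
    """Minimal in-memory sketch of a Wizard-of-Oz relay (hypothetical).

    What the participant says is queued for the wizard, who types an
    arbitrary reply; in the real system that reply string would be sent
    to a text-to-speech engine and played back as the VUI's voice.
    """

    def __init__(self) -> None:
        self.user_utterances: queue.Queue[str] = queue.Queue()  # participant -> wizard
        self.wizard_replies: queue.Queue[str] = queue.Queue()   # wizard -> participant

    def user_says(self, transcript: str) -> None:
        """Record what the participant said for the wizard to hear."""
        self.user_utterances.put(transcript)

    def wizard_hears(self) -> str:
        """The wizard listens to the next participant utterance."""
        return self.user_utterances.get()

    def wizard_replies_with(self, text: str) -> None:
        """The wizard types an arbitrary reply (would be synthesized to speech)."""
        self.wizard_replies.put(text)

    def vui_speaks(self) -> str:
        """The next line the 'VUI' speaks back to the participant."""
        return self.wizard_replies.get()
```

Because the wizard can reply with any string, interaction designs can be changed between (or even during) sessions without writing any dialogue logic, which is what makes the technique useful for rapid prototyping.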


Using our VUI Wizard-of-Oz system, we conducted a formal lab study to investigate how users responded to various error-handling strategies in the event of a failure. Our final demonstration video outlines some of the valuable qualitative results we gathered.
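As an illustration, error-handling strategies of this kind can be sketched as response templates keyed by strategy. The strategy names and wordings below are hypothetical examples, not the actual conditions used in the study.

```python
# Illustrative error-handling strategies for a weather VUI.
# Strategy names and response wordings are hypothetical.

RETRY_PROMPT = "Could you say that again?"


def respond_to_error(strategy: str, query: str) -> str:
    """Compose the VUI's reply to a failed request under a given strategy."""
    if strategy == "apologize_retry":
        # Acknowledge the failure, then invite the user to rephrase.
        return f"Sorry, I couldn't get the forecast for {query}. {RETRY_PROMPT}"
    if strategy == "explain_limitation":
        # Be transparent about why the request failed.
        return f"I don't have weather data for {query} right now."
    if strategy == "offer_alternative":
        # Redirect the user toward something the system can do.
        return (f"I couldn't find {query}, "
                "but I can give you the forecast for a nearby city.")
    raise ValueError(f"unknown strategy: {strategy}")
```

A lab study can then present the same failure under each strategy and compare how participants' trust and recovery behavior differ.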