The Design Lab, UC San Diego
(+ the Nissan Silicon Valley Research Center)

How might autonomous vehicles communicate with human beings in urban road environments?

Above: the driver waves to the pedestrian as a way to signal they can cross.

Human beings can wave, hold eye contact, and make all sorts of gestures to communicate their intentions on the road.

Autonomous vehicles can’t – at least not in the same way.

Given this, how might autonomous vehicles establish trust and clear communication with pedestrians in noisy & unpredictable urban environments?

We used a variety of ethnographic studies, simulator studies, lab interviews, and Wizard-of-Oz prototyping to understand what people really do on urban roads, and how best to design for human trust in autonomous vehicles.


Who: Jim Hollan, Don Norman, Colleen Emmenegger, Ben Bergen, Malte Risto, Tavish Grade, Melissa Wright
When: November 2015 - March 2017
Methods Used: Ethnography of urban road environments (downtown La Jolla, Pacific Beach, university campus intersections, senior centers), Wizard-of-Oz prototyping, multi-modal video coding, participant interviews, surveys
Tools: ChronoViz (~TechSmith Morae), iMovie, GoPro cameras + mounts


Research Approaches

Figure 1, above: A pedestrian jaywalking & running across Voigt Dr. on the university campus. The pedestrian makes eye contact with the bus driver before proceeding to run across the road.

Using Ethnography to Build a Vocabulary of Road User Behavior

Challenge: There’s a rich layer of communication on the road, much of it habitual and outside of conscious awareness (e.g., making eye contact with other drivers, shifting your body position to indicate you are about to run, waiting for someone else to claim the right-of-way at an intersection). But there doesn’t seem to be an adequate, standardized language for describing these rich behaviors.

Goal: Understand how context shapes the meaning of a signal, and surface recurring communication patterns from hours of collected traffic interaction footage.

Method: We set up video cameras throughout urban environments, focusing on intersections not regulated by stoplights. Our working assumption was that in ambiguous traffic situations (including intersections without traffic lights), drivers and pedestrians are more likely to resort to overt, directed signaling to negotiate safe passage on the road.

Role: As the research assistant, I accompanied my research sensei on the recording trips, collected the video data, and then sat with the team to analyze it. I used the video data to make the figures.
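To give a concrete sense of what “building a vocabulary” looks like in practice, here is a minimal sketch of an annotation schema for this kind of video coding. The field names, vocabulary values, and the SignalAnnotation type are illustrative assumptions, not the lab’s actual coding scheme (the team used ChronoViz for the real analysis).

```python
# Hypothetical sketch of a behavior-coding schema for road-user video.
# All field names and vocabulary values are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class SignalAnnotation:
    video_id: str   # which recording the event comes from
    start_s: float  # event onset, in seconds into the video
    end_s: float    # event offset
    actor: str      # "pedestrian", "driver", "cyclist", ...
    signal: str     # "eye_contact", "hand_wave", "body_lean", ...
    context: str    # "uncontrolled_intersection", "marked_crosswalk", ...
    outcome: str    # "crossed", "yielded", "waited", ...

annotations = [
    SignalAnnotation("voigt_dr_03", 412.0, 414.5, "pedestrian",
                     "eye_contact", "uncontrolled_intersection", "crossed"),
    SignalAnnotation("voigt_dr_03", 414.5, 416.0, "pedestrian",
                     "body_lean", "uncontrolled_intersection", "crossed"),
]

# Tallying signals per context is the kind of simple count that surfaces
# recurring communication patterns across hours of footage.
pattern_counts = Counter((a.context, a.signal) for a in annotations)
for (context, signal), n in pattern_counts.items():
    print(f"{context}: {signal} x{n}")
```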


Interviewing Road Users on Road User Behavior

After conducting our ethnographic research, we brought participants into our lab for semi-structured interviews.

Goal: We had built this rich vocabulary, but we wanted to validate it and understand how everyday people talk about behavior and interaction on the road. And what would they notice that we hadn’t?

Role: I conducted 20 lab interviews, each session lasting 30 minutes to an hour. I was responsible for participant recruitment, setting up the lab equipment and study, and transcribing the interviews with the team.

Methods Used: talk-aloud procedure, stimulated recall, semi-structured interviewing, participant recruitment.
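A vocabulary is only useful if different people can apply it consistently. This check isn’t described in the writeup above, but a common way to test it is inter-coder agreement; below is a minimal Cohen’s kappa sketch over hypothetical labels, assuming two coders labeled the same set of clips.

```python
# Hypothetical sketch: Cohen's kappa for two coders applying the same
# behavior vocabulary to the same clips. Labels are illustrative.
from collections import Counter

def cohens_kappa(coder_a: list, coder_b: list) -> float:
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    # Chance agreement: probability both coders pick the same label at random.
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

a = ["eye_contact", "hand_wave", "body_lean", "hand_wave", "eye_contact"]
b = ["eye_contact", "hand_wave", "eye_contact", "hand_wave", "eye_contact"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.67 here; 1.0 is perfect agreement
```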


Using Driving Simulators to Measure How Downstream Information Affects Behavior

Scenario: You are a driver in an unfamiliar place, with seen and unseen dangers (construction zones, slippery road conditions, approaching ambulances). What role could an Intelligent Driver Support System play in helping you navigate the road safely and efficiently?

Approach: We repurposed an old police training simulator and created different traffic events along the simulator route. The experimental condition determined which location- and time-sensitive messages participants heard as they drove through the simulator world (a minimal sketch of this condition logic follows the list):

1. Advice Only (e.g. “slow down”, “merge to the left lane”)
2. Information Only (e.g. “slippery road conditions ahead”)
3. Information + Advice
4. Control (no messages)
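As a hedged illustration of how these four conditions might gate message playback, here is a small sketch; the event names, message strings, and the message_for helper are illustrative assumptions, not the study’s actual implementation.

```python
# Hypothetical sketch: mapping experimental condition x traffic event to the
# message a participant hears. Events and message text are illustrative.
from typing import Optional

EVENT_MESSAGES = {
    "slippery_road": {"info": "Slippery road conditions ahead.",
                      "advice": "Slow down."},
    "construction":  {"info": "Construction zone ahead.",
                      "advice": "Merge to the left lane."},
}

def message_for(condition: str, event: str) -> Optional[str]:
    """Return the prompt played at `event` under `condition` (None = silence)."""
    parts = EVENT_MESSAGES[event]
    if condition == "advice_only":
        return parts["advice"]
    if condition == "info_only":
        return parts["info"]
    if condition == "info_plus_advice":
        return f'{parts["info"]} {parts["advice"]}'
    return None  # control condition: no messages

print(message_for("info_plus_advice", "slippery_road"))
# -> Slippery road conditions ahead. Slow down.
```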

Role: I recruited 40+ participants and ran 40+ simulator experiments. I was responsible for data collection, simulator troubleshooting, and synthesizing the survey data. After the experiment, I worked with the rest of the team to craft a narrative out of the raw data.

Result: A full report and presentation delivered to Toyota, and an extended abstract based on the simulator experiment submitted to HFES 2017.


Deliverables & Impact

Poster for the Autonomous Vehicle Symposium (AVS 2016).

Here we argue for standards in how autonomous vehicles communicate with humans.

 

All the research above was in service of a safe and seamless Assistant experience for drivers – especially ahead of its public release.

Stakeholders were present with us every step of the way: product managers, interaction & conversation designers, engineers. Some even sat in on our driving simulator and ride-along studies. Here are some of the things I did to ensure that this research made an impact on the actual product and wider organization:

  • Wrote email newsletters and shareouts for wider internal audiences (e.g., teams beyond Google Geo Assistant, such as Android Auto, Geo Driving, and other Assistant teams)

  • Created video highlight reels to bridge users and stakeholders. In these highlights I focused on both what worked well and what did not (focusing on the failures alone would not represent the whole story) – and the safety implications of each finding.

  • Created bug lists for product polishing, flagging potentially distracting issues – especially before a public launch or shipping products with car manufacturer partners

  • Conducted literature reviews to consolidate past & current research on this nebulous, emerging design space, and to showcase how this research plays a role in the grander research ecosystem.

  • All of this was part of a larger effort to showcase the Google Assistant as a hero in driving use cases, as seen at Google’s public 2019 I/O conference.

Reflections


