
Engineering Automation: Introducing the Self-Driving AI Meetup Series

by Chelsea Ouellette

Reflecting on the Past and Future of Self-Driving Cars

In February 2017, Uber ATG, Mercedes-Benz Research & Development North America, and the Bosch Research and Technology Center came together to host the inaugural session of the Self-Driving AI Meetup series at Mercedes-Benz Research & Development North America in Sunnyvale, California. The event, titled “Machine Learning and Self-Driving Vehicles,” was attended by over 150 people, with almost 2,000 tuning in via Facebook Live. The new meetup series was founded to foster an open, collaborative space for members of the artificial intelligence (AI) community to engage, explore, and discuss state-of-the-art research in machine learning for self-driving vehicles.

Jan Becker, lecturer at Stanford University and senior director of Automated Driving at Faraday Future, and Anca Dragan, principal investigator of the InterACT Lab at UC Berkeley, each delivered a presentation on the relationship between machine learning and self-driving technology during the event.

In his presentation, titled “From Research to Product Development,” Becker discussed the intersection of AI and culture, walking the audience through a history of the representation of self-driving cars in advertising, mass media, and politics. According to Becker, the concept of self-driving vehicles can be traced back to a January 5, 1918 Scientific American article depicting a self-driving concept car. In the 1950s, Becker suggests, references to self-driving vehicles in popular culture ballooned, with some media outlets suggesting this technology would be fueled by radar and analytics.

Referencing an informational video on self-driving cars from the 1950s, Becker notes that, “considering that [this content] was made over 60 years ago, these are actually pretty accurate depictions. Some of these things are true today.” When the 1960s arrived, Becker added, research on the technology accelerated, including the debut of the Stanford Artificial Intelligence Laboratory Cart, a small robotic cart designed by the university and one of the first functional self-driving systems.

At the close of his presentation, Becker addressed what needs to be done in order to bring viable self-driving products to market, including ensuring that self-driving technology is reliable and can be easily adopted by humans.


Building Robot Cars with Humans in Mind

Dragan’s presentation, titled “Cars that Coordinate with People,” addressed the ways in which robots use machine learning to interact with people to predict behavior, particularly as it relates to self-driving cars. During her talk, Dragan offered examples of case studies from her lab, including a scenario in which a self-driving car accurately predicted the behavior of a human-controlled vehicle driving nearby.

“What I’ve learned in robotics is that we care about having robots reach their objectives while obeying task constraints, like we don’t want cars to go off the road,” Dragan said. “What we care about in my lab is a particular version of this problem where the robot can’t do this in isolation but it actually needs to -- whether it wants to or not -- interact with people.”


Q&A: UC Berkeley's Anca Dragan on Robot Cars

After the meetup, we spoke with Dragan to learn more about the motives behind her decision to pursue AI as a career, her lab’s current research, and where she thinks “robot car” development is headed in the future.

As a young girl, what most piqued your interest in pursuing a career in robotics and human-robot interaction?

I had no idea I would end up in this area when I was growing up — I had only heard of robots in cartoons and movies, and I didn't even know if they existed in real life! But, I grew up in Romania where math is actually considered “cool.” I hope we will see that more in the U.S. as well. I really started liking math in middle school, then I got into computer science as a more applied version of math.

I came across AI as a sub-area, and it was really fascinating to think of algorithms for agents that can make their own intelligent decisions, plus it sounded really useful — like it could make it easier for us to have more resources and a higher quality of life. Only in grad school, when I was working on robot motion planning, did I take a step back and think, “hmmm... the ultimate goal is for these robots to help people. I wonder what implications this has for the algorithms we're building...” It turned out that was a very rich question to explore!

How do you define your line of work, algorithmic human-robot interaction? How does it differ from other forms of AI?

Algorithmic human-robot interaction is about robotics algorithms that go beyond function (like driving on the road and avoiding collision) and explicitly deal with interaction (like coordinating with pedestrians and human drivers, keeping the passengers comfortable, and so on). Regardless of the application, we have found that robotics algorithms make implicit assumptions about people that are not quite true. For instance, a robot cleaning your table will treat you as an obstacle that happens to be moving, but of course you are also making decisions about what to do, adapting to the robot, and have preferences about how the job should get done. Or when a robot learns a task from a person, it tends to assume that the person will just be able to give it a close-to-optimal demonstration of how exactly it should move, but of course in reality people are good at providing some types of guidance but not others. Algorithmic human-robot interaction is about generalizing these algorithms to the interaction setting, where they account for what people want and how they will act.
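To make that distinction concrete, here is a minimal, hypothetical sketch (not code from the InterACT Lab) contrasting the “moving obstacle” assumption with a simple decision-maker model of a person: the first just extrapolates the person's current velocity, while the second assumes the person acts to make progress toward a goal. All names, numbers, and the goal itself are illustrative assumptions.

```python
import numpy as np

def predict_as_obstacle(pos, vel, horizon=5, dt=0.5):
    """Treat the person as a moving obstacle: extrapolate constant velocity."""
    return np.array([pos + vel * dt * (k + 1) for k in range(horizon)])

def predict_as_agent(pos, goal, speed=1.0, horizon=5, dt=0.5):
    """Treat the person as a decision-maker: assume they move toward a goal."""
    traj, p = [], np.array(pos, dtype=float)
    for _ in range(horizon):
        direction = goal - p
        dist = np.linalg.norm(direction)
        if dist > 1e-6:
            p = p + direction / dist * min(speed * dt, dist)
        traj.append(p.copy())
    return np.array(traj)

# A pedestrian currently walking along x, with an (assumed) goal at (5, 2):
pos, vel, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([5.0, 2.0])
print(predict_as_obstacle(pos, vel))  # keeps heading straight along x
print(predict_as_agent(pos, goal))    # bends toward the inferred goal
```

The two predictions diverge quickly, which is exactly the gap Dragan describes: an algorithm built on the obstacle assumption will be surprised by behavior a decision-aware model anticipates.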

Your research with the InterACT Lab focuses on moving beyond building robots that solely respond to the physical human state, and into building machines that can predict a human's internal state. How does this philosophy guide your work with self-driving cars?

This is definitely one of the areas of priority for us. Human internal states, like the person's beliefs and preferences, are central to getting these interactions right, but are difficult to estimate since the robot can't just access this information directly. With cars, this boils down to estimating the driving style of another driver, what a pedestrian plans to do, or how a passenger actually wants the car to drive.
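As a rough illustration of what “estimating the driving style of another driver” can look like, the sketch below keeps a belief over two hypothetical styles and updates it with Bayes' rule from observed accelerations, assuming each style predicts actions with simple Gaussian noise. The styles, parameters, and observation model are made up for this example; they are not the lab's actual method.

```python
import numpy as np

# Hypothetical internal states: each style predicts a typical acceleration (m/s^2).
STYLES = {"aggressive": 2.0, "defensive": 0.5}
NOISE_STD = 0.8  # assumed observation noise

def likelihood(observed_accel, predicted_accel, std=NOISE_STD):
    """Gaussian likelihood (up to a constant) of an observed action under a style."""
    return np.exp(-0.5 * ((observed_accel - predicted_accel) / std) ** 2)

def update_belief(belief, observed_accel):
    """One Bayes update: P(style | action) ∝ P(action | style) * P(style)."""
    posterior = {s: belief[s] * likelihood(observed_accel, a) for s, a in STYLES.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

belief = {"aggressive": 0.5, "defensive": 0.5}  # uniform prior
for accel in [1.8, 2.1, 1.9]:  # the other car keeps accelerating hard
    belief = update_belief(belief, accel)
print(belief)  # belief shifts toward "aggressive"
```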

What elements of machine learning are most significant in the development of safe, efficient, self-driving vehicles?

There is a lot of debate right now as to how machine learning can be most useful. Some approaches learn to map sensor inputs directly to control outputs for the car, others impose a little more structure and separate perception from action decisions. The jury is still out, but I think when it comes to interaction and safety, machine learning can really help in updating and customizing predictive models about the physical world and about people.
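The two camps she describes can be summarized, very schematically, as below: an end-to-end mapping from raw sensor data straight to controls, versus a modular pipeline that separates perception, prediction, and planning. This is only a structural sketch under assumed, made-up interfaces, not either approach's actual implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Control:
    steering: float
    throttle: float

# --- End-to-end: one learned function from sensors to controls. ---
def end_to_end_policy(camera_image, lidar_points) -> Control:
    # A single learned model would sit here; placeholder output for the sketch.
    return Control(steering=0.0, throttle=0.3)

# --- Modular: separate perception, prediction, and planning stages. ---
def perceive(camera_image, lidar_points) -> List[dict]:
    """Detect and localize nearby agents (placeholder)."""
    return [{"id": 1, "position": (10.0, 2.0), "velocity": (1.0, 0.0)}]

def predict(agents: List[dict]) -> List[dict]:
    """Attach predicted motion to each agent; learned models can be updated or swapped in here."""
    return [{**a, "predicted_path": [(a["position"][0] + k, a["position"][1]) for k in range(5)]}
            for a in agents]

def plan(predictions: List[dict]) -> Control:
    """Choose controls that respect the predictions (placeholder logic)."""
    slow_down = any(p["predicted_path"][0][0] < 15.0 for p in predictions)
    return Control(steering=0.0, throttle=0.1 if slow_down else 0.3)

def modular_policy(camera_image, lidar_points) -> Control:
    return plan(predict(perceive(camera_image, lidar_points)))
```

In the modular version, the predictive models Dragan mentions live in `predict`, where machine learning can update and customize them without touching the rest of the stack.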

In your opinion, what are the key challenges facing the advancement of the self-driving car industry as it relates to human-car interaction?

I am a little biased, but one thing that we've found is that people don't decide what to do in a vacuum, but instead, respond to the actions of others. This makes an autonomous car's job difficult because it needs to consider and address the effects and influence of its actions on others. We have been working hard over the last year or so on tackling this challenge, but there is still much to do.
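One hedged way to picture “considering the effects of its actions on others” is the toy sketch below: instead of predicting the human's action once and planning around it, the car scores each of its own candidate actions using a model of how the human would respond to that action. The response model, action set, and scoring are illustrative assumptions, not the InterACT Lab's method.

```python
# Toy coupled planning: the human's predicted response depends on the car's action.
CAR_ACTIONS = ["merge_now", "wait"]

def predict_human_response(car_action: str) -> str:
    """Assumed response model: the human yields if the car commits, keeps speed otherwise."""
    return "yield" if car_action == "merge_now" else "keep_speed"

def score(car_action: str, human_response: str) -> float:
    """Made-up utility: progress for the car minus a penalty for disrupting the human."""
    progress = 1.0 if car_action == "merge_now" else 0.2
    disruption = 0.4 if human_response == "yield" else 0.0
    return progress - disruption

best = max(CAR_ACTIONS, key=lambda a: score(a, predict_human_response(a)))
print(best)  # the chosen action accounts for the human's predicted reaction to it
```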





The organizers and speakers of February’s meetup pose for a picture after the event. From left: Ilyas Atishev (MBRDNA), Marius Wiggert (MBRDNA), Anca Dragan, Axel Gern (Head of Autonomous Driving, MBRDNA), Jan Becker (Stanford University, Faraday Future), David Dao (MBRDNA), Chelsea Ouellette (Uber ATG), Angela Klein (Bosch), and Moritz Dechant (Bosch). Photo: MBRDNA


If this topic interests you, join our meetup group to learn about upcoming sessions of our Self-Driving AI Meetups in 2017.

Chelsea Ouellette is a program manager on Uber ATG’s recruiting team.