Bios and Abstracts

Dr. Anca Dragan

Anca Dragan is an Assistant Professor in EECS at UC Berkeley, where she runs the InterACT lab. Her goal is to enable robots to work with, around, and in support of people. She works on algorithms that enable robots to a) coordinate with people in shared spaces, and b) learn what people want them to do. Anca did her PhD in the Robotics Institute at Carnegie Mellon University on legible motion planning. At Berkeley, she helped found the Berkeley AI Research Lab, is a co-PI for the Center for Human-Compatible AI, and has been honored with the Presidential Early Career Award for Scientists and Engineers (PECASE), a Sloan Research Fellowship, the NSF CAREER award, the Okawa award, MIT's TR35, and an IJCAI Early Career Spotlight.

Learning Intended Rewards: Extracting all the right information from all the right places

AI work tends to focus on how to optimize a specified reward function, but reward functions that consistently lead to the desired behavior are not so easy to specify. Rather than optimizing a specified reward, which is already hard, robots have the much harder job of optimizing the intended reward. While the specified reward does not carry as much information as we make our robots pretend, the good news is that humans constantly leak information about what the robot should optimize. In this talk, we will explore how to read the right amount of information from different types of human behavior -- and even the lack thereof.

Dr. Ayanna Howard

Ayanna Howard, Ph.D. is the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing at the Georgia Institute of Technology. She also holds a faculty appointment in the School of Electrical and Computer Engineering and serves as the Chief Technology Officer of Zyrobotics. Dr. Howard’s career focus is on intelligent technologies that must adapt to and function within a human-centered world. Her work, which encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics, has resulted in over 250 peer-reviewed publications across a number of projects, from healthcare robots in the home to AI-powered STEM apps for children with diverse learning needs. To date, her unique accomplishments have been highlighted through a number of awards and articles, including features in USA Today, Upscale, and TIME Magazine, as well as recognition as one of the 23 most powerful women engineers in the world by Business Insider and one of the Top 50 U.S. Women in Tech by Forbes. In 2013, she founded Zyrobotics, which is currently licensing technology derived from her research and has released its first suite of STEM educational products to engage children of all abilities. Prior to Georgia Tech, Dr. Howard was a Senior Robotics Researcher and Deputy Manager in the Office of the Chief Scientist at NASA's Jet Propulsion Laboratory. She has also served as the Associate Director of Research for the Institute for Robotics and Intelligent Machines, Chair of the Robotics Ph.D. program, and Associate Chair for Faculty Development in the School of Electrical and Computer Engineering at Georgia Tech.

Designing Socially Interactive Agents for Healthcare

For many individuals living with a disability, physical therapy is provided as an intervention mechanism to support their functional goals. With recent advances in AI, therapeutic interventions using robots are now ideally positioned to make an impact in this domain. Numerous challenges, though, must still be addressed to enable successful interaction between patients, clinicians, and robots: designing methods for extracting insights on a patient’s outcomes; developing learning methods to endow robots with the ability to interact with the patient; and ensuring that the robot can provide feedback to the caregiver and clinician in a trustworthy manner. In addition, as AI systems become more fully integrated into the day-to-day activities of human beings, disparate impacts from human-AI interactions must be more carefully investigated. This issue can be especially problematic in scenarios where the user might experience tangible harms, especially as they relate to healthcare outcomes. In this talk, I will discuss the role of robotics and AI technologies in healthcare and highlight our methods and studies in this domain.

Dr. Daphne Koller

Daphne Koller is the CEO and Founder of insitro, a startup company that aims to rethink drug development using machine learning. She is also the Co-Chair of the Board and Co-Founder of Coursera, the largest platform for massive open online courses (MOOCs). Daphne was the Rajeev Motwani Professor of Computer Science at Stanford University, where she served on the faculty for 18 years. She has also been the Chief Computing Officer of Calico, an Alphabet company in the healthcare space. She is the author of over 200 refereed publications appearing in venues such as Science, Cell, and Nature Genetics. Daphne was recognized as one of TIME Magazine’s 100 most influential people in 2012 and Newsweek’s 10 most important people in 2010. She has been honored with multiple awards and fellowships during her career, including the Sloan Foundation Faculty Fellowship in 1996, the ONR Young Investigator Award in 1998, the Presidential Early Career Award for Scientists and Engineers (PECASE) in 1999, the IJCAI Computers and Thought Award in 2001, the MacArthur Foundation Fellowship in 2004, and the ACM Prize in Computing in 2008. Daphne was inducted into the National Academy of Engineering in 2011 and elected a fellow of the American Academy of Arts and Sciences in 2014 and of the International Society for Computational Biology in 2017. Her teaching was recognized via the Stanford Medal for Excellence in Fostering Undergraduate Research, and as a Bass University Fellow in Undergraduate Education.

Machine learning: a new approach to drug discovery

Modern medicine has given us effective tools to treat some of the most significant and burdensome diseases. At the same time, it is becoming consistently more challenging to develop new therapeutics: clinical trial success rates hover around the mid-single-digit range; the pre-tax R&D cost to develop a new drug (once failures are incorporated) is estimated to be greater than $2.5B; and the rate of return on drug development investment has been decreasing linearly year by year, with some analyses estimating that it will hit 0% before 2020. A key contributor to this trend is that the drug development process involves multiple steps, each of which involves a complex and protracted experiment that often fails. We believe that, for many of these phases, it is possible to develop machine learning models to help predict the outcome of these experiments, and that those models, while inevitably imperfect, can outperform predictions based on traditional heuristics. The key will be to train powerful ML techniques on sufficient amounts of high-quality, relevant data. To achieve this goal, we are bringing together cutting-edge methods in functional genomics and lab automation to build a bio-data factory that can produce relevant biological data at scale, allowing us to create large, high-quality datasets that enable the development of novel ML models. Our first goal is to engineer in vitro models of human disease that, via the use of appropriate ML models, are able to provide good predictions regarding the effect of interventions on human clinical phenotypes. Our ultimate goal is to develop a new approach to drug development that uses high-quality data and ML models to design novel, safe, and effective therapies that help more people, faster, and at a lower cost.

Dr. Dan Yamins

Dan Yamins is a computational neuroscientist at Stanford University, where he is an Assistant Professor of Psychology and Computer Science and a faculty scholar at the Wu Tsai Neurosciences Institute. He works on science and technology challenges at the intersection of neuroscience, artificial intelligence, psychology, and large-scale data analysis.

The brain is the embodiment of the most beautiful algorithms ever written. Dan's research group, the Stanford NeuroAILab, seeks to "reverse engineer" these algorithms, both to learn how our minds work and to build more effective artificial intelligence systems.

Recent Advances in Brain-Inspired Self-Supervised Learning

Neural networks have proven to be effective learning machines for a variety of challenging AI tasks, as well as surprisingly good models of the brain areas that underlie real human intelligence. However, most successful neural networks are trained in a supervised fashion on labeled datasets, requiring the costly collection of large numbers of annotations. Unsupervised approaches to learning in neural networks are thus of substantial interest for furthering artificial intelligence, both because they would enable the training of networks without the need for annotation, and because they would be better models of the kind of general-purpose learning deployed by humans. In this talk, I will describe a spectrum of recent approaches to unsupervised learning, based on ideas from cognitive science and neuroscience. First, I will discuss breakthroughs in neurally inspired unsupervised learning of deep visual embeddings that achieve performance levels on challenging visual categorization tasks competitive with those of direct supervision of modern convnets. Second, I'll discuss our work building perception systems that make accurate long-range predictions of physical futures in realistic environments, and show how these support richer self-supervised visual learning. Finally, I'll talk about the use of intrinsic motivation and curiosity to create interactive agents that self-curricularize, producing novel visual behaviors and learning powerful sensory representations.