CITP Lecture: Aligning Machine Learning, Law, and Policy for Responsible Real-World Deployments

Attendance restricted to Princeton University faculty, staff and students.


Machine learning (ML) is being deployed to a vast array of real-world applications with profound impacts on society. ML can have positive impacts, such as aiding in the discovery of new cures for diseases and improving government transparency and efficiency. But it can also be harmful: reinforcing authoritarian regimes, scaling the spread of disinformation, and exacerbating societal biases. As we rapidly move toward systemic use of ML in the real world, there are many unanswered questions about how to successfully use ML for social good while preventing its potential harms. Many of these questions inevitably require pursuing a deeper alignment between ML, law, and policy.

Are certain algorithms truly compliant with current laws and regulations? Are there better designs that would make them better attuned to the regulatory and policy requirements of the real world? Are laws, policies, and regulations sufficiently informed by the technical details of ML algorithms, or will they be ineffective and out of sync? In this talk, we will discuss ways to bring together ML, law, and policy to address these questions, drawing on real-world examples throughout, including a unique collaboration with the Internal Revenue Service. We will show how investigating questions of alignment between ML, law, and policy can advance core ML research, drive interdisciplinary work, and reshape how we think about certain laws and policies. Henderson hopes that this agenda will lead to more effective and responsible ways of deploying ML in the real world, steering toward positive impacts and away from potential harms.

Bio: Peter Henderson is a joint J.D.-Ph.D. (Computer Science, AI) candidate at Stanford University, where he is advised by Dan Jurafsky for his Ph.D. and by Dan Ho for his J.D. at Stanford Law School. He is also an Open Philanthropy AI Fellow and a Graduate Student Fellow at the Regulation, Evaluation, and Governance Lab. At Stanford Law School, Henderson co-led the Domestic Violence Pro Bono Project, worked on client representation with the Three Strikes Project, and contributed to the Stanford Native Law Pro Bono Project. Previously, he was advised by David Meger and Joelle Pineau for his M.Sc. at McGill University and the Montréal Institute for Learning Algorithms.

Henderson has spent time as a software engineer and applied scientist at Amazon AWS/Alexa and has worked with Justice Cuéllar at the California Supreme Court. He is a part-time researcher with the Internal Revenue Service's Research, Applied Analytics and Statistics Division and a technical advisor at the Institute for Security+Technology.

His research focuses on aligning machine learning, law, and policy for responsible real-world deployments. This alignment process is twofold: (1) guided by law, policy, and ethics, develop general AI systems capable of safely tackling longstanding challenges in government and society; (2) empowered by a deep technical understanding of AI, ensure that laws and policies keep general AI systems safe and beneficial for all.

Some of his work has been covered by TechCrunch, Science, The Wall Street Journal, Bloomberg, and other outlets. He also occasionally posts a roundup of news at the intersection of AI, law, and policy. More broadly, he is interested in a wide range of technical machine learning research, policy, and legal work.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

Date and Time
Tuesday April 18, 2023 4:30pm - 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Speaker
Peter Henderson, from Stanford University

Contributions to and/or sponsorship of any event do not constitute departmental or institutional endorsement of the specific program, speakers, or views presented.
