News

Enabling robots to learn from past experiences

EECS Prof. Pieter Abbeel and Assistant Prof. Sergey Levine are developing algorithms that enable robots to learn from past experiences, and even from other robots.  They use deep reinforcement learning to push robots past a crucial threshold in demonstrating human-like intelligence: the ability to independently solve problems and master new tasks more quickly and efficiently.  An article in the Berkeley Engineer delves into the innovations and advances that allow Abbeel and Levine to help robots make "good" choices, generalize between tasks, improvise with objects, multi-task, and manage unexpected challenges in the world around them.
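Learning from past experience is commonly implemented in deep reinforcement learning with an experience replay buffer: the agent stores the transitions it observes and repeatedly learns from random samples of them.  The toy Q-learning sketch below illustrates that general mechanism only; the five-state chain task and all parameters are hypothetical stand-ins, not code from either lab.

```python
# Minimal sketch of learning from stored experience via a replay buffer
# (an illustration of the general idea, not the Berkeley labs' actual code).
import random
from collections import deque, defaultdict

# Hypothetical toy task: walk right along a 5-state chain to reach a reward.
N_STATES, ACTIONS = 5, [-1, +1]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

q = defaultdict(float)                # Q(s, a) table
buffer = deque(maxlen=1000)           # replay buffer of past transitions
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(200):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda m: q[(s, m)])
        s2, r, done = step(s, a)
        buffer.append((s, a, r, s2, done))   # store the experience
        s = s2
        # Learn from a random minibatch of *past* transitions.
        for (bs, ba, br, bs2, bdone) in random.sample(buffer, min(8, len(buffer))):
            target = br if bdone else br + gamma * max(q[(bs2, m)] for m in ACTIONS)
            q[(bs, ba)] += alpha * (target - q[(bs, ba)])

print({k: round(v, 2) for k, v in q.items()})
```

Sampling old transitions at random breaks the correlation between consecutive steps, which is part of what makes reusing past experience stable in practice.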

Using machine learning to reinvent cybersecurity two ways: Song and Popa

EECS Prof. and alumna Dawn Song (Ph.D. '02, advisor: Doug Tygar) and Assistant Prof. Raluca Ada Popa are featured in the cover story of the Spring 2020 issue of the Berkeley Engineer, titled "Reinventing Cybersecurity."  Faced with the challenge of protecting users' personal data while recognizing that sharing access to that data "has fueled the modern-day economy" and supports scientific research, Song has proposed a paradigm of "controlled use" and an open-source approach built on a new set of principles based on game theory.  Her lab is creating a platform that applies cryptographic techniques to both machine-learning models and hardware solutions, allowing users to keep their data safe while also making it accessible.  Popa's work focuses on keeping data encrypted in cloud computing environments, even while machine-learning algorithms compute on it, instead of just surrounding the data with firewalls.  "Sharing without showing" allows sensitive data to be made available for collaboration without decryption.  This approach is made practical by a machine-learning training system that is dramatically faster than other approaches: "So instead of training a model in three months, it takes us under three hours."
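Popa's actual systems rely on far more sophisticated cryptography, but the "sharing without showing" idea can be illustrated with a textbook building block: additive secret sharing, in which parties jointly compute an aggregate while no one ever sees another party's raw value.  The sketch below is a minimal toy illustration of that building block, not code from her lab; the hospitals and patient counts are hypothetical.

```python
# Toy illustration of "sharing without showing" via additive secret sharing
# (a textbook technique, not the actual system built in Popa's lab).
import secrets

P = 2**61 - 1  # a public prime modulus; all arithmetic is mod P

def share(value, n_parties):
    """Split a value into n random shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three hypothetical hospitals each hold a private patient count.
private_values = [120, 75, 240]
n = len(private_values)

# Each hospital sends one share to every party; a share alone reveals nothing.
all_shares = [share(v, n) for v in private_values]

# Each party sums the shares it received...
partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]

# ...and only the combined partial sums reveal the aggregate, never the inputs.
total = sum(partial_sums) % P
assert total == sum(private_values)
print("joint total:", total)
```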

Pieter Abbeel and Sergey Levine: teaching computers to teach themselves

EECS Prof. Pieter Abbeel and Assistant Prof. Sergey Levine both appear in a New York Times article titled "Computers Already Learn From Us. But Can They Teach Themselves?," which describes the work of scientists who "are exploring approaches that would help machines develop their own sort of common sense."  Abbeel, who runs the Berkeley Robot Learning Lab, uses a method called self-play, in which reinforcement-learning systems compete against themselves in order to learn faster.  Levine, who runs the Robotic AI & Learning Lab, is using a form of self-supervised learning in which robots explore their environment to build a base of knowledge.
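In self-play, a single learner acts as both sides of a game, so every match generates training signal for a win and a loss at once.  As a minimal illustration of the idea (a toy example, not research code from either lab), the sketch below has one Q-table learn the game of Nim by playing against a copy of itself:

```python
# Minimal sketch of self-play: one agent learns Nim by playing both sides.
import random
from collections import defaultdict

PILE, MOVES = 12, (1, 2, 3)        # take 1-3 stones; taking the last stone wins
q = defaultdict(float)
alpha, epsilon = 0.2, 0.15

def pick(state):
    legal = [m for m in MOVES if m <= state]
    if random.random() < epsilon:
        return random.choice(legal)
    return max(legal, key=lambda m: q[(state, m)])

for game in range(20000):
    state, history = PILE, []       # (state, move) pairs for both "players"
    while state > 0:
        move = pick(state)
        history.append((state, move))
        state -= move
    # The player who made the last move wins (+1); the opponent loses (-1).
    # Because the same Q-table plays both sides, rewards alternate in sign.
    reward = 1.0
    for (s, m) in reversed(history):
        q[(s, m)] += alpha * (reward - q[(s, m)])
        reward = -reward            # self-play: the other "self" gets the flip side

# Optimal play avoids leaving a multiple of 4; inspect the learned policy.
for s in range(1, PILE + 1):
    best = max((m for m in MOVES if m <= s), key=lambda m: q[(s, m)])
    print(f"pile={s:2d} -> take {best}")
```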

Keeping classified information secret in a world of quantum computing

Computer Science and Global Studies double major Jake Tibbetts has published an article in the Bulletin of the Atomic Scientists titled "Keeping classified information secret in a world of quantum computing."  Tibbetts, who is a research assistant at the LBNL Center for Global Security Research and a member of the Berkeley Nuclear Policy Working Group, argues that instead of worrying about winning the quantum supremacy race against China, U.S. policymakers and scholars should shift their focus to a more urgent national security problem: how to maintain the long-term security of secret information secured by existing cryptographic protections, which will fail against an attack by a future quantum computer.  Some possible avenues include deploying honeypots to misdirect and waste the resources of entities attempting to steal classified information; reducing the deployment time for new encryption schemes; and triaging cryptographic updates to the systems that communicate and store sensitive and classified information.

Two EECS papers win 2019 ACM SIGPLAN Distinguished Paper Awards

Two papers co-authored by Berkeley EECS authors won ACM SIGPLAN Distinguished Paper Awards at the 2019 Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), the top programming language conference, held in October as part of the ACM SIGPLAN conference on Systems, Programming, Languages, and Applications: Software for Humanity (SPLASH).  "Duet: An Expressive Higher-Order Language and Linear Type System for Statically Enforcing Differential Privacy" was co-authored by Prof. Dawn Song (Ph.D. '02, advisor: Doug Tygar), graduate student Lun Wang, undergraduate researcher Pranav Gaddamadugu, and alumni Neel Somani (CS B.A. '19), Nikhil Sharma (EECS B.S. '18/M.S. '19), and Alex Shan (CS B.A. '18), along with researchers in Vermont and Utah.  "Aroma: Code Recommendation via Structural Code Search" was co-authored by Prof. Koushik Sen, along with authors at Facebook and UC Irvine.  Together the papers took two of the five honors awarded.
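Duet is a language and type system that verifies differential privacy guarantees statically, at compile time.  The guarantee itself is typically provided at run time by mechanisms such as the Laplace mechanism, sketched below for background (this is the textbook mechanism in plain Python, not Duet itself; the age data and query are hypothetical).

```python
# Minimal sketch of the Laplace mechanism underlying differential privacy
# (the kind of guarantee Duet enforces statically; this is not Duet itself).
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 62, 18, 54]          # hypothetical sensitive data
print(private_count(ages, lambda a: a >= 30, epsilon=0.5))
```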

"Oracle-Guided Component-Based Program Synthesis" wins 2020 ICSE Most Influential Paper Award

The paper "Oracle-Guided Component-Based Program Synthesis," co-authored by alumnus Susmit Jha (M.S./Ph.D. '11), Sumit Gulwani (Ph.D. '05, advisor: George Necula), EECS Prof. Sanjit A. Seshia, and Ashish Tiwari--and part of Susmit Jha's Ph.D. dissertation advised by Sanjit Seshia--will receive the 2020 Most Influential Paper Award by the ACM/IEEE International Conference on Software Engineering (ICSE). ICSE is the premier conference on software engineering and this award recognizes the paper judged to have had the most influence on the theory or practice of software engineering during the 10 years since its original publication. The citation says, in part, that the paper: "...has made a significant impact in Software Engineering and beyond, inspiring subsequent work not only on program synthesis and learning, but also on automated program repair, controller synthesis, and interpretable artificial intelligence."

Using deep learning to expertly detect hemorrhages in brain scans

A computer algorithm co-developed by Vision Group alumnus Weicheng Kuo (Ph.D. '19), postdoc Christian Häne, their advisor Prof. Jitendra Malik, and researchers at UCSF bested two out of four expert radiologists at finding tiny brain hemorrhages in head scans, an advance that may one day help doctors treat patients with traumatic brain injuries, strokes, and aneurysms.  The algorithm found some small abnormalities that the experts missed, noted their location within the brain, and classified them according to subtype.  The researchers used a type of deep learning known as a fully convolutional neural network (FCN) to train the algorithm on a relatively small number of images that were packed with data.  Each small abnormality was manually delineated at the pixel level.  The richness of this data, along with other steps that prevented the model from misinterpreting random variations, or "noise," as meaningful, produced an extremely accurate algorithm.
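What distinguishes a fully convolutional network is that it outputs a prediction for every pixel rather than a single label per image, which is why pixel-level delineations can serve as dense training signal.  The PyTorch sketch below shows that architectural pattern in miniature; it is a generic toy model with made-up shapes, not the published hemorrhage detector.

```python
# Minimal sketch of a fully convolutional network for pixel-wise labeling
# (the architectural pattern only, not the published hemorrhage model).
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Downsample while extracting features...
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # ...then upsample back to full resolution: one score per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),               # 1x1 conv -> per-pixel logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyFCN()
scan = torch.randn(1, 1, 64, 64)               # one single-channel 64x64 slice
mask = (torch.rand(1, 1, 64, 64) > 0.95).float()   # toy pixel-level labels
loss = nn.BCEWithLogitsLoss()(model(scan), mask)   # dense supervision
loss.backward()
print(model(scan).shape)                       # torch.Size([1, 1, 64, 64])
```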

How to Stop Superhuman A.I. Before It Stops Us

EECS Prof. Stuart Russell has penned a New York Times op-ed titled "How to Stop Superhuman A.I. Before It Stops Us," in which he explains why we need to design artificial intelligence that is beneficial, not just smart.  "Instead of building machines that exist to achieve their objectives," he writes, we need to build "machines that have our objectives as their only guiding principle..."  This will make them "necessarily uncertain about what these objectives are, because they are in us — all eight billion of us, in all our glorious variety, and in generations yet unborn — not in the machines."  Russell has just published a book titled "Human Compatible: Artificial Intelligence and the Problem of Control" (Viking, October 8, 2019).

Sergey Levine, Francis Bach, and Pieter Abbeel are top 3 most prolific NeurIPS 2019 authors

Two EECS faculty and one alumnus are the authors with the most papers accepted to the upcoming Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019), one of the most popular and influential AI conferences in the world.  CS Prof. Sergey Levine took the top spot with 12 papers, alumnus Francis Bach (Ph.D. '05, advisor: Michael Jordan) was the second most prolific contributor with 10, and Prof. Pieter Abbeel placed third with nine.  Only about one in five of the 6,743 papers submitted to the conference this year was accepted.  Registration for the 8,000 spots at last year's NeurIPS (formerly NIPS) sold out in 12 minutes, so a lottery has been implemented for this year's conference, which will take place in December.

EECS students, postdocs, alumni and faculty make strong showing at 2019 USENIX Security Symposium

EECS students, postdocs, alumni, and faculty were front and center at the 28th USENIX Security Symposium in Santa Clara last week.  In addition to the Test of Time and Distinguished Paper Awards (see below), keynote speaker Alex Stamos (B.S. '01), previously the Chief Security Officer of Facebook, highlighted the threat model work of current ICSI postdoc Alisa Frik (advisor: Serge Egelman).  Alumnus Nicholas Carlini (Ph.D. '18, advisor: David Wagner) gave a talk on his neural network research, co-authored with CS Prof. Dawn Song and postdoc Chang Liu.  ICSI researchers Primal Wijesekera and Serge Egelman, and former ICSI postdoc Joel Reardon, won a Distinguished Paper Award for "50 Ways to Leak Your Data: An Exploration of Apps' Circumvention of the Android Permissions System."  Grad students Frank Li (advisor: Vern Paxson) and Nathan Malkin (advisors: Serge Egelman and David Wagner) received a Distinguished Paper Award at the SOUPS '19 technical session for "Keepers of the Machines: Examining How System Administrators Manage Software Updates For Multiple Machines."  The zip bomb research of alumnus David Fifield (Ph.D. '17, advisor: Doug Tygar) won a Best Paper Award at the WOOT '19 technical session.

Two CS grad students, co-advised by David Culler and Raluca Popa, also made presentations.  Sam Kumar presented "JEDI: Many-to-Many End-to-End Encryption and Key Delegation for IoT" and Michael P. Andersen presented "WAVE: A Decentralized Authorization Framework with Transitive Delegation."