News

"Oracle-Guided Component-Based Program Synthesis" wins 2020 ICSE Most Influential Paper Award

The paper "Oracle-Guided Component-Based Program Synthesis," co-authored by alumni Susmit Jha (M.S./Ph.D. '11) and Sumit Gulwani (Ph.D. '05, advisor: George Necula), EECS Prof. Sanjit A. Seshia, and Ashish Tiwari--and part of Susmit Jha's Ph.D. dissertation, advised by Sanjit Seshia--will receive the 2020 Most Influential Paper Award from the ACM/IEEE International Conference on Software Engineering (ICSE). ICSE is the premier conference on software engineering, and the award recognizes the paper judged to have had the most influence on the theory or practice of software engineering in the ten years since its original publication. The citation says, in part, that the paper "...has made a significant impact in Software Engineering and beyond, inspiring subsequent work not only on program synthesis and learning, but also on automated program repair, controller synthesis, and interpretable artificial intelligence."

Using deep learning to expertly detect hemorrhages in brain scans

A computer algorithm co-developed by Vision Group alumnus Weicheng Kuo (Ph.D. '19), postdoc Christian Häne, their advisor Prof. Jitendra Malik, and researchers at UCSF bested two out of four expert radiologists at finding tiny brain hemorrhages in head scans, an advance that may one day help doctors treat patients with traumatic brain injuries, strokes, and aneurysms. The algorithm found some small abnormalities that the experts missed, noted their location within the brain, and classified them according to subtype. The researchers used a type of deep learning known as a fully convolutional neural network (FCN) to train the algorithm on a relatively small number of images that were packed with data: each small abnormality was manually delineated at the pixel level. The richness of this data — along with other steps that prevented the model from misinterpreting random variations, or “noise,” as meaningful — produced an extremely accurate algorithm.
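The research model itself is a large deep network trained on annotated CT scans, but the defining property of a fully convolutional network is simple to illustrate: because every layer is a convolution, the network produces a prediction for every pixel, and the output keeps the input's spatial dimensions. The toy sketch below (a hypothetical two-layer FCN in pure Python, not the authors' architecture) shows that per-pixel dense prediction.

```python
import math

def conv2d_same(image, kernel):
    """'Same' 2-D convolution: zero-pads so output matches input size."""
    k = len(kernel)          # kernel is k x k, k odd
    pad = k // 2
    h, w = len(image), len(image[0])
    padded = [[0.0] * (w + 2 * pad) for _ in range(h + 2 * pad)]
    for i in range(h):
        for j in range(w):
            padded[i + pad][j + pad] = image[i][j]
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = sum(kernel[a][b] * padded[i + a][j + b]
                            for a in range(k) for b in range(k))
    return out

def toy_fcn(image, k3, w1, b1):
    """3x3 conv + ReLU, then a 1x1 conv 'classifier' head with sigmoid.

    Every operation is convolutional, so the per-pixel probability map
    has the same height and width as the input image.
    """
    feat = conv2d_same(image, k3)
    feat = [[max(0.0, v) for v in row] for row in feat]
    return [[1.0 / (1.0 + math.exp(-(w1 * v + b1))) for v in row]
            for row in feat]

# Toy 4x4 "scan" and a hand-picked edge-like 3x3 kernel (illustrative only).
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
k3 = [[-1, 0, 1],
      [-1, 0, 1],
      [-1, 0, 1]]
probs = toy_fcn(image, k3, w1=1.0, b1=-1.0)
```

Contrast this with a classifier ending in fully connected layers, which collapses the image to a single label; the FCN's dense output is what lets the model both find an abnormality and report where it is, which is why the pixel-level annotations were such rich training signal.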

How to Stop Superhuman A.I. Before It Stops Us

EECS Prof. Stuart Russell has penned a New York Times Op-Ed titled "How to Stop Superhuman A.I. Before It Stops Us," in which he explains why we need to design artificial intelligence that is beneficial, not just smart. "Instead of building machines that exist to achieve their objectives," he writes, we need to build "machines that have our objectives as their only guiding principle..." This will make them "necessarily uncertain about what these objectives are, because they are in us — all eight billion of us, in all our glorious variety, and in generations yet unborn — not in the machines." Russell has just published a book titled "Human Compatible: Artificial Intelligence and the Problem of Control" (Viking, October 8, 2019).

Sergey Levine, Francis Bach, and Pieter Abbeel are top 3 most prolific NeurIPS 2019 authors

Two EECS faculty and one alumnus are the authors with the greatest number of papers accepted to the upcoming Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019), one of the most popular and influential AI conferences in the world. CS Prof. Sergey Levine took the top spot with 12 papers, alumnus Francis Bach (Ph.D. '05, advisor: Michael Jordan) was the second most prolific contributor with 10 papers, and Prof. Pieter Abbeel placed third with nine. Only one in five of the 6,743 papers submitted to the conference this year was accepted. Registration to be one of the 8,000 attendees at last year's NeurIPS (formerly NIPS) conference sold out in 12 minutes, so a lottery has been implemented for this year's conference, which will take place in December.

EECS students, postdocs, alumni and faculty make strong showing at 2019 USENIX Security Symposium

EECS students, postdocs, alumni, and faculty were front and center at the 28th USENIX Security Symposium in Santa Clara last week. In addition to the Test of Time and Distinguished Paper Awards (see below), keynote speaker Alex Stamos (B.S. '01), previously the Chief Security Officer of Facebook, highlighted the threat model work of current ICSI postdoc Alisa Frik (advisor: Serge Egelman). Alumnus Nicholas Carlini (Ph.D. '18, advisor: David Wagner) gave a talk on his neural networks research, co-authored by CS Prof. Dawn Song and postdoc Chang Liu. ICSI researchers Primal Wijesekera and Serge Egelman, and former ICSI postdoc Joel Reardon, received a Distinguished Paper Award for "50 Ways to Leak Your Data: An Exploration of Apps' Circumvention of the Android Permissions System." Grad students Frank Li (advisor: Vern Paxson) and Nathan Malkin (advisors: Serge Egelman and David Wagner) received a Distinguished Paper Award at the SOUPS '19 technical session for "Keepers of the Machines: Examining How System Administrators Manage Software Updates For Multiple Machines." The zip bomb research of alumnus David Fifield (Ph.D. '17, advisor: Doug Tygar) received a Best Paper Award at the WOOT '19 technical session.

Two CS grad students, co-advised by David Culler and Raluca Popa, also made presentations.  Sam Kumar presented "JEDI: Many-to-Many End-to-End Encryption and Key Delegation for IoT" and Michael P. Andersen presented "WAVE: A Decentralized Authorization Framework with Transitive Delegation."

Grant Ho, Vern Paxson, and David Wagner win USENIX Security Symposium Distinguished Paper Award

Graduate student Grant Ho and his co-advisors, Profs. Vern Paxson and David Wagner, were honored with a Distinguished Paper Award at the 2019 USENIX Security Symposium for "Detecting and Characterizing Lateral Phishing at Scale." In the paper, they presented "the first large-scale characterization of lateral phishing attacks, based on a dataset of 113 million employee-sent emails from 92 enterprise organizations." Ho, Paxson, and Wagner previously won the same award at the 2017 USENIX Security Symposium for their paper "Detecting Credential Spearphishing Attacks in Enterprise Settings."

David Wagner, Eric Brewer, Ian Goldberg, and Randi Thomas win 2019 USENIX Test of Time Award

CS Profs. and alumni David Wagner (Ph.D. '00) and Eric Brewer (B.S. '89), and alumni Ian Goldberg (Ph.D. '00) and Randi Thomas (M.S.), have won the 2019 USENIX Test of Time Award for their 1996 paper titled "A Secure Environment for Untrusted Helper Applications." The paper, which introduced a fundamental and crucial technique for confining untrusted applications in computer systems and made a significant contribution to the computer security field, was written by Wagner, Goldberg, and Thomas when they were Brewer's graduate students. “Beyond its strong academic impact — cited by 890 papers," said award committee member Dan Boneh, "the technique is now used to confine web pages in the Chrome browser, and to confine applications running on Android."

GauGAN AI art tool wins two major awards at SIGGRAPH 2019 Real-Time Live Competition

A viral real-time AI art application, co-created by three current and former graduate students of CS Prof. Alexei Efros, has won two coveted awards--Best in Show and Audience Choice--at the SIGGRAPH 2019 Real-Time Live Competition. The interactive application, called GauGAN, was co-created by Ph.D. candidate Taesung Park during a summer internship at NVIDIA, along with alumni and NVIDIA researchers Jun-Yan Zhu (Ph.D. '17, ACM SIGGRAPH Outstanding Doctoral Dissertation winner) and Ting-Chun Wang (Ph.D. '17), as well as NVIDIA's Ming-Yu Liu. GauGAN is the first semantic image synthesis model that can turn rough sketches into stunning, photorealistic landscape scenes.

Michael Jordan on the goals and remedies for AI

CS Prof. Michael Jordan has written a commentary in the Harvard Data Science Review (HDSR) titled "Dr. AI or: How I Learned to Stop Worrying and Love Economics" (a play on the title of the film Dr. Strangelove). In it, he argues that instead of trying to put "‘thought’ into the computer, and expecting that ‘thinking computers’ will be able to solve our problems and make our lives better," we should explore the prospect of bringing microeconomics "into the blend of computer science and statistics that is currently being called ‘AI.'"

Shruti Agarwal and Hany Farid use facial quirks to unmask ‘deepfakes’

CS graduate student Shruti Agarwal and her thesis advisor Prof. Hany Farid have created a new weapon in the war against "deepfakes," the hyper-realistic AI-generated videos of people appearing to say and do things they never actually said or did. The new forensic technique, which uses the subtle characteristics of how a person speaks to recognize whether a new video of that individual is real, was presented this week at the Conference on Computer Vision and Pattern Recognition (CVPR) in Long Beach. “The basic idea is we can build these soft biometric models of various world leaders, such as 2020 presidential candidates," said Farid, "and then as the videos start to break, for example, we can analyze them and try to determine if we think they are real or not.”
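The core of a soft-biometric defense is a per-person model fit only on genuine footage: extract mannerism features from each video, learn what "normal" looks like for that individual, and flag videos whose features fall outside it. The sketch below is a deliberately minimal toy (a centroid-plus-threshold outlier check on hypothetical, already-extracted feature vectors), not the authors' actual method, which correlates specific facial expressions and head movements using far richer features.

```python
import math

def centroid(vectors):
    """Mean feature vector over a person's genuine videos."""
    n, d = len(vectors), len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(d)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fit_soft_biometric(genuine_features):
    """Model = centroid of genuine footage plus a distance threshold.

    The 1.1 margin is an arbitrary illustrative choice.
    """
    c = centroid(genuine_features)
    thresh = max(distance(v, c) for v in genuine_features) * 1.1
    return c, thresh

def is_suspect(features, model):
    """Flag a video whose mannerism features fall outside the model."""
    c, thresh = model
    return distance(features, c) > thresh

# Hypothetical 2-D mannerism features from three genuine videos,
# plus one video whose features deviate sharply.
genuine = [[1.0, 0.9], [0.9, 1.1], [1.1, 1.0]]
model = fit_soft_biometric(genuine)
flag_fake = is_suspect([3.0, 0.2], model)   # far from the genuine cluster
flag_real = is_suspect([1.0, 1.05], model)  # consistent with the cluster
```

The hard part in practice is the feature extraction (tracking facial action units and head pose across a video); once those features exist, per-person modeling reduces to exactly this kind of one-class outlier detection.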