News

John Kubiatowicz and Group's (Circa 2000) Paper Named Most Influential at ASPLOS 2018

At the ASPLOS conference in late March, John Kubiatowicz and his group from 2000 were celebrated for their paper, "OceanStore: an architecture for global-scale persistent storage." The paper was named Most Influential Paper 2018, and the authors receiving the award included David Bindel, Yan Chen, Steven Czerwinski, Patrick Eaton, Dennis Geels, Ramakrishna Gummadi, Sean Rhea, Hakim Weatherspoon, Chris Wells, and Ben Zhao, as well as Kubi, a long-time Berkeley CS faculty member. The paper was originally published in the Proceedings of the ninth international conference on Architectural support for programming languages and operating systems (ASPLOS IX).


AI training may leak secrets to canny thieves

A paper released on arXiv last week by a team of researchers including Prof. Dawn Song and Ph.D. student Nicholas Carlini (B.A. CS/Math '13) reveals just how vulnerable deep learning is to information leakage.  The researchers labeled the problem “unintended memorization” and explained that it happens when miscreants gain access to a model’s code and apply a variety of search algorithms. That's not an unrealistic scenario, considering that the code for many models is available online, and it means that text messages, location histories, emails, or medical data can be leaked.  The team doesn't “really know why neural networks memorize these secrets right now,” Carlini says.  “At least in part, it is a direct response to the fact that we train neural networks by repeatedly showing them the same training inputs over and over and asking them to remember these facts.”  The best way to avoid the problem is to never feed secrets in as training data. But if that’s unavoidable, developers will have to apply differentially private learning mechanisms to bolster security, Carlini concluded.
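Differentially private learning, the mitigation Carlini points to, typically works by capping each training example's influence on the model and then adding calibrated noise. The NumPy sketch below shows one such update step in the style of DP-SGD; the function name, parameters, and toy gradients are illustrative, not taken from the paper:

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, clip_norm=1.0,
                noise_mult=1.1, lr=0.1, rng=None):
    """One differentially-private gradient step: clip each example's
    gradient to bound its influence, average, then add Gaussian noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # scale down any gradient whose norm exceeds clip_norm
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # noise scaled to the clipping bound masks any single example's signal
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return weights - lr * (avg + noise)

# usage: two toy per-example gradients
w = np.zeros(3)
grads = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 0.5, 0.0])]
w_new = dp_sgd_step(w, grads)
```

Because clipping bounds how much any single secret can shift the weights and the noise masks whatever residual signal remains, a memorized secret becomes much harder to extract; libraries such as TensorFlow Privacy implement production versions of this idea.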

Ben Recht wins NIPS Test of Time Award

Prof. Ben Recht has won the Neural Information Processing Systems (NIPS) 2017 Test of Time Award for a paper he co-wrote with Ali Rahimi in 2007 titled "Random Features for Large-Scale Kernel Machines."  Deep learning, which involves stacking many layers of neural networks to learn the features of giant databases and develop clever algorithms, is being used to carry out more and more tasks in an expanding number of areas.  In their acceptance speech at the NIPS conference, Recht and Rahimi posited that more theory is needed to understand the state-of-the-art empirical performance of deep learning, and called for simple theorems and simple, easily reproducible experiments.  "We are building systems that govern healthcare and mediate our civic dialogue. We influence elections," said Rahimi. "I would like to live in a society where systems are built on top of verifiable, rigorous, thorough knowledge and not alchemy."
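The award-winning idea replaces an expensive kernel machine with an explicit low-dimensional random feature map whose plain inner products approximate the kernel. A minimal NumPy sketch of the random Fourier feature construction for the Gaussian (RBF) kernel follows; the function name and default parameters are mine, not the paper's:

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=0.5, seed=0):
    """Map X (n_samples, d) to a random feature space where inner
    products approximate the Gaussian kernel exp(-gamma * ||x - y||^2),
    following the random Fourier feature recipe of Rahimi & Recht."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # frequencies drawn from the kernel's spectral density: N(0, 2*gamma*I)
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
    # random phase offsets
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

With enough random features, a linear method trained on `random_fourier_features(X)` behaves like a nonlinear kernel machine at a fraction of the cost, which is what made the technique practical at large scale.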

Three EECS-affiliated papers win Helmholtz Prize at ICCV 2017

Three papers with Berkeley authors received the Helmholtz Prize at the International Conference on Computer Vision (ICCV) 2017 in Venice, Italy.  This award honors papers that have stood the test of time (more than ten years after first publication) and is bestowed by the IEEE Technical Committee on Pattern Analysis and Machine Intelligence (PAMI).  Seven papers won this year, among them: "Recognizing action at a distance," by A. Efros, A. Berg, G. Mori and J. Malik, ICCV 2003; "Discovering objects and their location in images," by J. Sivic, B. Russell, A. Efros, A. Zisserman and W. Freeman, ICCV 2005; and "The pyramid match kernel: Discriminative classification with sets of image features," by K. Grauman and T. Darrell, ICCV 2005.

RISELab researchers investigate how to build more secure, faster AI systems

Computer Science faculty in the Real-Time Intelligent Secure Execution Lab (RISELab) have outlined challenges in systems, security and architecture that may impede the progress of Artificial Intelligence, and have proposed new research directions to address them.  The paper, "A Berkeley View of Systems Challenges for AI," was authored by Profs. Stoica, Song, Popa, Patterson, Katz, Joseph, Jordan, Hellerstein, Gonzalez, Goldberg, Ghodsi, Culler and Abbeel, as well as Michael Mahoney in Statistics/ICSI. The challenges outlined include AI systems that make timely and safe decisions in unpredictable environments, that are robust against sophisticated adversaries, and that can process ever-increasing amounts of data across organizations and individuals without compromising confidentiality.

Aviad Rubinstein helps show that game players won’t necessarily find a Nash equilibrium

CS graduate student Aviad Rubinstein (advisor: Christos Papadimitriou) is featured in a Quanta Magazine article titled "In Game Theory, No Clear Path to Equilibrium," which describes the results of his paper proving that no method of adapting strategies in response to previous games will converge efficiently to even an approximate Nash equilibrium for every possible game. The paper, titled "Communication complexity of approximate Nash equilibria," was co-authored by Yakov Babichenko and published last September.  Economists often use Nash equilibrium analyses to justify proposed economic reforms, but the new results suggest that economists can't assume that game players will reach a Nash equilibrium unless they can justify what is special about the particular game in question.
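A toy illustration of the kind of non-convergence at issue (this sketches naive best-response dynamics, not the paper's communication-complexity argument): in rock-paper-scissors, players who repeatedly switch to the move that beats the opponent's last move cycle forever and never settle into the game's mixed Nash equilibrium.

```python
# Pure best-response dynamics in rock-paper-scissors: each player keeps
# switching to the move that beats the opponent's last move, so play
# cycles with period 6 instead of converging to the mixed Nash
# equilibrium (play each move with probability 1/3).
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def best_response_play(rounds, start=("rock", "scissors")):
    """Return the sequence of joint moves under best-response dynamics."""
    history = [start]
    for _ in range(rounds - 1):
        a, b = history[-1]
        # each player best-responds to what the other just played
        history.append((BEATS[b], BEATS[a]))
    return history
```

Starting from ("rock", "scissors"), the dynamics visit six distinct joint moves and then repeat, so no amount of further play converges; Rubinstein and Babichenko's result shows, far more generally, that any such adaptive process can fail to reach even an approximate equilibrium efficiently.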

Berkeley CS faculty among the most influential in their fields

U.C. Berkeley had the most AMiner Most Influential Scholar Award winners ranked in the top ten across all fields of computer science in 2016, and the most winners ranked in the top five in the fields of Computer Vision, Database, Machine Learning, Multimedia, Security, Computer Networking, and System.  The 28 CS faculty members included in the rankings were among the 100 most-cited authors in 12 of the 15 research areas evaluated. Two were among the 100 most-cited authors in 3 different areas each: Scott Shenker ranked #1 in Computer Networking, #51 in System, and #99 in Theory; and Trevor Darrell ranked #8 in Multimedia, #18 in Computer Vision, and #100 in Machine Learning.  Out of the 700,000 researchers indexed, only 16 appeared on three or more area top-100 lists.  See a more detailed breakdown of our influential faculty scholars.

Scott Beamer receives 2016 SPEC Kaivalya Dixit Distinguished Dissertation Award

Dr. Scott Beamer's dissertation, "Understanding and Improving Graph Algorithm Performance," has been selected to receive the 2016 Standard Performance Evaluation Corp. (SPEC) Kaivalya Dixit Distinguished Dissertation Award.  The award recognizes outstanding doctoral dissertations in the field of computer benchmarking, performance evaluation, and experimental system analysis in general.  Submissions are evaluated on scientific originality, scientific significance, practical relevance, impact, and quality of the presentation.

Among other comments, the members of the committee were impressed with Beamer's deep understanding of open-source graph frameworks, with the quality of his implementations, and with the creation of a graph benchmark suite that is already in use, is relevant for High Performance Computing, and is likely to have further impact in the future. The committee also remarked on the clarity and simplicity of the ideas presented in the document.

The award will be presented at the International Conference on Performance Engineering (ICPE) in April.

MIT TR35 logo

Sergey Levine, Oriol Vinyals and Wei Gao named on MIT TR35

Prof. Sergey Levine, EECS alumnus Oriol Vinyals, and EECS postdoc Wei Gao (working with Ali Javey) have been named to MIT Technology Review’s 2016 TR35 (Innovators Under 35), a list recognizing young innovators who push the edge of science and create new approaches to tackling technology challenges. In the “Pioneers” category, Prof. Levine teaches robots to watch and learn from their own successes, supervising their own learning, and Oriol Vinyals is working to create computers that can teach themselves how to play and win complex games by enabling them to learn from experience. In the “Inventors” category, Wei Gao is building sweatbands that monitor your health on a molecular level.

Scott Aaronson answers every ridiculously big question thrown at him

EECS alumnus Scott Aaronson (Computer Science Ph.D. '04) "Answers Every Ridiculously Big Question (John Horgan) Throws at Him" in a Cross-Check interview for Scientific American.  Aaronson, an Associate Professor at MIT (soon UT Austin) and an authority on quantum computation, riffs on simulated universes, the Singularity, unified theories, P/NP, the mind-body problem, free will, why there’s something rather than nothing, and more.