News

Aviad Rubinstein wins 2017 ACM Doctoral Dissertation Award

CS alumnus Aviad Rubinstein (Ph.D. '17, advisor: Christos Papadimitriou) is the recipient of the Association for Computing Machinery (ACM) 2017 Doctoral Dissertation Award for his dissertation “Hardness of Approximation Between P and NP.”  In his thesis, Rubinstein established the intractability of the approximate Nash equilibrium problem and of several other important problems that lie between P and NP-completeness, a long-standing question in theoretical computer science.  His work was featured in a July Quanta Magazine article titled "In Game Theory, No Clear Path to Equilibrium." After graduating, Rubinstein became a Rabin Postdoc at Harvard; he will join Stanford as an Assistant Professor in the fall.

HäirIÖ: Human Hair as Interactive Material

CS Prof. Eric Paulos and his graduate students in the Hybrid Ecologies Lab, Sarah Sterman, Molly Nicholas, and Christine Dierk, have created a prototype of a wearable color- and shape-changing braid called HäirIÖ.  The hair extension is built from a custom circuit, an Arduino Nano, an Adafruit Bluetooth board, shape memory alloy, and thermochromic pigments.  The Bluetooth chip lets devices such as phones and laptops communicate with the hair, causing it to change shape and color, as well as respond when the hair is touched. Their paper, "HäirIÖ: Human Hair as Interactive Material," was presented at the ACM International Conference on Tangible, Embedded and Embodied Interaction (TEI) last week. They have posted a how-to guide and instructional videos that include comprehensive hardware, software, and electronics documentation, as well as information about the design process. "Hair is a unique and little-explored material for new wearable technologies," the guide says.  "Its long history of cultural and individual expression make it a fruitful site for novel interactions."
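For a sense of how a host device might drive such a braid, here is a minimal Python sketch using the bleak Bluetooth LE library.  The device address, characteristic UUID, and command bytes are illustrative placeholders rather than the project's actual protocol; the team's how-to guide documents the real hardware and firmware.

```python
# Minimal sketch (not HäirIÖ's actual protocol): write a short command packet
# to a BLE peripheral such as an Adafruit Bluefruit board.  The address,
# characteristic UUID, and payload below are hypothetical placeholders.
import asyncio
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder address of the braid's Bluetooth board
UART_TX_CHAR = "6e400002-b5a3-f393-e0a9-e50e24dcca9e"  # Nordic UART write characteristic, common on Bluefruit boards

async def send_command(command: bytes) -> None:
    """Connect to the wearable and write one command packet over Bluetooth LE."""
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.write_gatt_char(UART_TX_CHAR, command)

if __name__ == "__main__":
    # A made-up two-byte packet, e.g. "curl the braid, shift to the second color".
    asyncio.run(send_command(b"\x01\x02"))
```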

Making computer animation more agile, acrobatic — and realistic

Graduate student Xue Bin “Jason” Peng (advisors: Pieter Abbeel and Sergey Levine) has made a major advance in realistic computer animation using deep reinforcement learning to recreate natural motions, even for acrobatic feats like break dancing and martial arts. The simulated characters can also respond naturally to changes in the environment, such as recovering from tripping or being pelted by projectiles.  “We developed more capable agents that behave in a natural manner,” Peng said. “If you compare our results to motion-capture recorded from humans, we are getting to the point where it is pretty difficult to distinguish the two, to tell what is simulation and what is real. We’re moving toward a virtual stuntman.”  Peng will present his paper at the 2018 SIGGRAPH conference in August.
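The core idea behind this style of motion-imitation learning is simple to sketch: at each simulation step, the reinforcement-learning agent is rewarded for how closely its character's pose tracks a motion-capture reference clip.  The snippet below is an illustrative simplification, not the authors' exact reward formulation; the function and parameter names are placeholders.

```python
# Illustrative motion-imitation reward: reward the agent for matching the
# joint angles of a motion-capture reference frame (a simplification of the
# kind of pose-tracking objective used in this line of work).
import numpy as np

def imitation_reward(sim_joint_angles: np.ndarray,
                     ref_joint_angles: np.ndarray,
                     scale: float = 2.0) -> float:
    """Exponentiated negative pose error: 1.0 for a perfect match with the
    reference, decaying smoothly as the simulated pose diverges."""
    pose_error = float(np.sum((sim_joint_angles - ref_joint_angles) ** 2))
    return float(np.exp(-scale * pose_error))

# Example: a simulated pose close to the reference earns a reward near 1.
reward = imitation_reward(np.array([0.10, -0.52, 0.33]),
                          np.array([0.12, -0.50, 0.30]))
```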

John Kubiatowicz and Group's (Circa 2000) Paper Named Most Influential at ASPLOS 2018

At the ASPLOS conference in late March, John Kubiatowicz and his group from 2000 were celebrated for their paper, "OceanStore: An Architecture for Global-Scale Persistent Storage."  The paper received the ASPLOS 2018 Most Influential Paper Award, and the authors receiving the award included David Bindel, Yan Chen, Steven Czerwinski, Patrick Eaton, Dennis Geels, Ramakrishna Gummadi, Sean Rhea, Hakim Weatherspoon, Chris Wells, and Ben Zhao, as well as Kubiatowicz himself, a long-time Berkeley CS faculty member. The paper was originally published in the Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS IX).

AI training may leak secrets to canny thieves

A paper released on arXiv last week by a team of researchers including Prof. Dawn Song and Ph.D. student Nicholas Carlini (B.A. CS/Math '13) reveals just how vulnerable deep learning is to information leakage.  The researchers labelled the problem “unintended memorization” and explained that it can be exploited if miscreants gain access to the model's code and apply a variety of search algorithms. That's not an unrealistic scenario, considering that the code for many models is available online, and it means that text messages, location histories, emails, or medical data can be leaked.  The team doesn't “really know why neural networks memorize these secrets right now,” Carlini says.  “At least in part, it is a direct response to the fact that we train neural networks by repeatedly showing them the same training inputs over and over and asking them to remember these facts.”  The best way to avoid the problem is to never feed secrets in as training data; if that is unavoidable, developers will have to apply differentially private learning mechanisms to bolster security, Carlini concluded.
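To make the threat concrete, the sketch below illustrates the kind of search the researchers describe: an attacker who can score text with the model ranks every possible completion of a secret-bearing template and checks whether one candidate stands out.  It is a hedged illustration, not code from the paper; `sequence_log_likelihood` is a stand-in for whatever scoring access the attacker has to the model.

```python
# Illustrative secret-extraction search: enumerate candidate secrets for a
# known template and rank them by how likely the model thinks each one is.
# A candidate that scores far above random chance was plausibly memorized.
from itertools import product

def rank_candidates(sequence_log_likelihood, template: str, digits: int = 4):
    """Fill a template like 'my pin is ####' with every digit string and
    return (filled_text, candidate) pairs, most likely first."""
    candidates = ("".join(d) for d in product("0123456789", repeat=digits))
    scored = [(template.replace("#" * digits, c), c) for c in candidates]
    return sorted(scored, key=lambda pair: -sequence_log_likelihood(pair[0]))

# Toy usage with a fake scorer that has "memorized" the digits 2017:
best_text, best_secret = rank_candidates(
    lambda s: 1.0 if "2017" in s else 0.0, "my pin is ####")[0]
```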

Ben Recht wins NIPS Test of Time Award

Prof. Ben Recht has won the Neural Information Processing Systems (NIPS) 2017 Test of Time Award for a paper he co-wrote with Ali Rahimi in 2007 titled "Random Features for Large-Scale Kernel Machines."  Deep learning, which stacks many layers of neural networks to learn the features of giant databases and develop clever algorithms, is being used to carry out more and more tasks in an expanding number of areas.  In their acceptance speech at the NIPS conference, Recht and Rahimi posited that more theory is needed to understand the state-of-the-art empirical performance of deep learning, and called for simple theorems and simple, easily reproducible experiments.  "We are building systems that govern healthcare and mediate our civic dialogue. We influence elections," said Rahimi. "I would like to live in a society where systems are built on top of verifiable, rigorous, thorough knowledge and not alchemy."
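The awarded paper's central technique, random Fourier features, can be summarized in a few lines: project inputs through random frequencies so that inner products of the resulting features approximate a Gaussian (RBF) kernel, letting fast linear methods stand in for expensive kernel machines.  The snippet below is a minimal numpy illustration rather than the authors' original code.

```python
# Random Fourier features: dot products of these features approximate the
# RBF kernel exp(-gamma * ||x - y||^2), following the recipe of sampling
# frequencies from the kernel's Fourier transform.
import numpy as np

def random_fourier_features(X: np.ndarray, num_features: int = 500,
                            gamma: float = 1.0, seed: int = 0) -> np.ndarray:
    """Map X (n_samples x n_dims) to a num_features-dimensional embedding."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# With enough features, Z[0] @ Z[1] tracks the exact kernel value:
Z = random_fourier_features(np.array([[0.0, 1.0], [0.5, 0.5]]))
approx_kernel = float(Z[0] @ Z[1])  # close to exp(-1.0 * 0.5) ≈ 0.61
```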

Three EECS-affiliated papers win Helmholtz Prize at ICCV 2017

Three papers with Berkeley authors received the Helmholtz Prize at the International Conference on Computer Vision (ICCV) 2017 in Venice, Italy.  This award honors papers that have stood the test of time (more than ten years after first publication) and is bestowed by the IEEE Technical Committee on Pattern Analysis and Machine Intelligence (PAMI).  Seven papers won this year, among them: "Recognizing Action at a Distance," by A. Efros, A. Berg, G. Mori, and J. Malik, ICCV 2003; "Discovering Objects and Their Location in Images," by J. Sivic, B. Russell, A. Efros, A. Zisserman, and W. Freeman, ICCV 2005; and "The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features," by K. Grauman and T. Darrell, ICCV 2005.

RISELab researchers investigate how to build more secure, faster AI systems

Computer Science faculty in the Real-Time Intelligent Secure Execution Lab (RISELab) have outlined challenges in systems, security, and architecture that may impede the progress of Artificial Intelligence, and they propose new research directions to address them.  The paper, "A Berkeley View of Systems Challenges for AI," was authored by Profs. Stoica, Song, Popa, Patterson, Katz, Joseph, Jordan, Hellerstein, Gonzalez, Goldberg, Ghodsi, Culler, and Abbeel, as well as Michael Mahoney of Statistics/ICSI. Some of the challenges outlined include building AI systems that make timely and safe decisions in unpredictable environments, that are robust against sophisticated adversaries, and that can process ever-increasing amounts of data across organizations and individuals without compromising confidentiality.

Aviad Rubinstein helps show that game players won’t necessarily find a Nash equilibrium

CS graduate student Aviad Rubinstein (advisor: Christos Papadimitriou) is featured in a Quanta Magazine article titled "In Game Theory, No Clear Path to Equilibrium," which describes the results of his game theory paper proving that no method of adapting strategies in response to previous games will converge efficiently to even an approximate Nash equilibrium for every possible game. The paper, titled "Communication Complexity of Approximate Nash Equilibria," was co-authored by Yakov Babichenko and published last September.  Economists often use Nash equilibrium analyses to justify proposed economic reforms, but the new results suggest that they cannot assume game players will reach a Nash equilibrium unless they can justify what is special about the particular game in question.

Berkeley CS faculty among the most influential in their fields

U.C. Berkeley had the most AMiner Most Influential Scholar Award winners ranked in the top ten across all fields of computer science in 2016, and the most ranked in the top five in the fields of Computer Vision, Database, Machine Learning, Multimedia, Security, Computer Networking, and System.  The 28 CS faculty members included in the rankings were among the 100 most-cited authors in 12 of the 15 research areas evaluated. Two were among the 100 most-cited authors in three different areas each: Scott Shenker ranked #1 in Computer Networking, #51 in System, and #99 in Theory; and Trevor Darrell ranked #8 in Multimedia, #18 in Computer Vision, and #100 in Machine Learning.  Out of the 700,000 researchers indexed, only 16 appeared on three or more area top-100 lists.  See a more detailed breakdown of our influential faculty scholars.