News

Yannis Ioannidis and the Greek spin-off that will become the voice of Samsung

CS alumnus Yannis Ioannidis (Ph.D. '86) is featured in an article about Samsung's purchase of the Greek text-to-speech company Innoetics for close to 50 million euros. Ioannidis is president of the ATHENA Research & Innovation Center, which nurtured the startup and provided critical support as it developed its technology. Innoetics' text-to-speech software learns languages by listening to native speakers, whose voices it can then mimic with great accuracy; it is currently fluent in 19 languages. Samsung plans to use the technology across a wide range of its product ecosystem. Ioannidis says that, as a result of the purchase, “any voice emanating from a Samsung device in the years to come will be ‘Greek,’ the product of Greek technology.” Ioannidis is currently a professor of Informatics and Telecommunications at the University of Athens.

Stuart Russell is featured speaker at IP EXPO Europe

CS Prof. Stuart Russell will speak about the use of AI, its long-term future, and its relation to humanity at the 2017 IP EXPO Europe showcase. IP EXPO Europe is an information technology trade show, held annually in England, that "brings together some of the biggest names, in their respective fields, to tackle the technological issues facing organisations right now." Other speakers include Brad Anderson of Microsoft and chess champion Garry Kasparov.

Alexei Efros's team offers custom colorization using deep neural networks

CS Prof. Alexei Efros (also an alumnus, Ph.D. '03) and his team have developed a new technique, leveraging deep neural networks, that lets novices, even those with limited artistic ability, quickly add realistic color to black-and-white images. "The goal of our previous project was to just get a single, plausible colorization," says Richard Zhang, a co-author and Ph.D. candidate advised by Efros. "If the user didn't like the result, or wanted to change something, they were out of luck. We realized that empowering the user and adding them in the loop was actually a necessary component for obtaining desirable results." They will present their paper, "Real-Time User-Guided Image Colorization with Learned Deep Priors," at SIGGRAPH 2017 in August.

EECS faculty envision California's next-gen infrastructure

EE Profs. Claire Tomlin, Costas Spanos, and Connie Chang-Hasnain, and CS Prof. David Culler are featured in a Berkeley Engineer article titled "Smart moves: California's next-gen infrastructure," which describes current UC Berkeley research projects that promise to transform the way we live. “What’s enabling these infrastructure changes is our ability to compute faster, to share information faster and to provide that information to users very quickly,” says Tomlin. She envisions “rail-to-drone” expressways, converting railroad rights-of-way into aerial corridors where closely spaced fleets of drones travel safely. Spanos predicts self-monitoring buildings so smart they band together and form bargaining alliances, and Chang-Hasnain's team has been working on manufacturing highly efficient but low-cost solar cells by growing nanoscale “forests” of expensive photovoltaics on inexpensive silicon substrates. The Berkeley Institute for Data Science, co-founded by Culler, is “equipping students not just to consume data but to produce insight” that will help guide the changes to come.

Justine Sherry wins the 2016 ACM SIGCOMM Doctoral Dissertation Award

CS alumna Justine Sherry (M.S. '12/Ph.D. '16, advisor: Sylvia Ratnasamy) has won the ACM SIGCOMM Doctoral Dissertation Award for Outstanding PhD Thesis in Computer Networking and Data Communication. Her thesis, "Middleboxes as a Cloud Service," brought the benefits of cloud computing to the networking domain. She is now an assistant professor in the Carnegie Mellon School of Computer Science.

Lauren Barghout Joins Last Studio Standing as Chief Vision Scientist

Lauren Barghout, a visiting scholar at the Berkeley Initiative in Soft Computing (BISC), has been hired as Chief Vision Scientist of Last Studio Standing, the largest hand-drawn animation studio in the Western Hemisphere. Barghout invented a Gestalt-based fuzzy inference system and labeling technique that is used commercially in image/video background removal, object recognition, and image labeling systems. Her research has contributed to the understanding of context-dependent spatial vision, spatial masking, theoretical and computational psychophysics, and the application of fuzzy set theory to human and machine vision. "Most animation relies solely on a direction and what's being created," she said. "This invention allows for a softer, more human-like understanding. It captures the flavor and nuance of a subject naturally -- and, again, it's softer. As such, it requires less manual clean up."

Aviad Rubinstein helps show that game players won’t necessarily find a Nash equilibrium

CS graduate student Aviad Rubinstein (advisor: Christos Papadimitriou) is featured in a Quanta Magazine article titled "In Game Theory, No Clear Path to Equilibrium," which describes the results of his paper proving that no method of adapting strategies in response to previous games will converge efficiently to even an approximate Nash equilibrium for every possible game. The paper, "Communication Complexity of Approximate Nash Equilibria," was co-authored with Yakov Babichenko and published last September. Economists often use Nash equilibrium analyses to justify proposed economic reforms, but the new results suggest that they cannot assume game players will reach a Nash equilibrium unless they can justify what is special about the particular game in question.

Vern Paxson's cybersecurity startup Corelight raises $9.2M in Series A funding

Corelight, a cybersecurity startup co-founded by CS Prof. Vern Paxson, has raised $9.2 million in Series A funding from Accel Partners, with participation from Osage University Partners and Riverbed Technology co-founder (and former Berkeley CS professor) Dr. Steve McCanne. Corelight provides powerful network visibility solutions for cybersecurity built on Bro, a widely used open-source framework that Paxson began developing at LBNL in 1995. The Corelight Sensor, which enables wide-ranging real-time understanding of network traffic, is already used by many of the world’s most capable security operations, including Amazon and five other Fortune 100 companies.

Compressed light field microscopy helps build a window into the brain

In a project funded by a $21.6M grant from DARPA, a light field microscope developed by EE Associate Prof. Laura Waller, MCB Assistant Prof. Hillel Adesnik, and their lab groups is being used to create a window into the brain through which researchers — and eventually physicians — can monitor and activate thousands of individual neurons using light. The microscope is based on CS Assistant Prof. Ren Ng's revolutionary light field camera, which captures light through an array of lenses and computationally reconstructs images at any focus. The microscope forms the first tier of a two-tier device referred to as a cortical modem: it "reads" through the surface of the brain to visualize up to a million neurons, while the second tier "writes" by projecting light patterns onto those neurons using 3D holograms, stimulating them in a way that reflects normal brain activity. The goal of the project is to read from a million individual neurons and simultaneously stimulate 1,000 of them with single-cell accuracy. “By encoding perceptions into the human cortex," says MCB Prof. Ehud Isacoff, "you could allow the blind to see or the paralyzed to feel touch.”

Ten faculty members are involved in the project, four of whom are from EECS: Laura Waller, Ren Ng, Jose Carmena, and Rikky Muller. The project is led by Ehud Isacoff of the Helen Wills Neuroscience Institute.

The tale of Lester Mackey's pursuit of the Netflix Prize

In October 2006, Netflix announced "The Netflix Prize," a $1M competition in which teams of programmers raced to make the Netflix recommendation engine 10% more accurate. The nail-biting competition is profiled in a Thrillist article that prominently features participant and CS alumnus Lester Mackey (Ph.D. '12), then an undergraduate at Princeton. "It was so much fun," he said. "The contest was structured so well. We had to learn so much to be competitive and I met so many people along the way." The winners beat the second-place team by only 20 minutes. Mackey is now a researcher at Microsoft Research New England and an adjunct professor of Statistics at Stanford University.