Pat Goldberg Memorial 2012 Best Papers in CS, EE and Math - overview
2012 Best Paper Awards Focus on Cloud, Security, Smarter Energy, Graphene Transistors, and Machine Learning
Close to 120 papers in computer science, electrical engineering and mathematical sciences published in refereed conference proceedings and journals in 2012 were submitted by IBM Research authors worldwide for the Pat Goldberg Memorial 2012 Best Papers in CS, EE and Math.
As usual, the overall quality of the papers was high. From these submissions, the Research Professional Interest Communities (PICs) nominated 36 papers based on technical significance (depth and breadth) and expected impact on computer science, electrical engineering or mathematical sciences, or their applications in the research disciplines covered by the PICs.
A committee consisting of PIC site coordinators or their representatives (see the sidebar) reviewed the nominated papers and selected the following five as winners of the Pat Goldberg Memorial 2012 Best Paper Awards. In alphabetical order:
Paper title: ELI: Bare-Metal Performance for I/O Virtualization
Authors and affiliations: Abel Gordon1, Nadav Amit2, Nadav Har'El1, Muli Ben-Yehuda1, Alex Landau1, Assaf Schuster2, and Dan Tsafrir2
1IBM Research - Haifa; 2Technion
Name of journal or conference: International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) 2012
Summary: This paper presents ELI (Exit-Less Interrupts), a novel mechanism that solves a fundamental problem for cloud providers and data center management: I/O virtualization performance overhead. This overhead limits the use of machine virtualization, a key component in efficiently managing the cost of modern computing environments, for many workloads that require maximum I/O performance, such as relational databases, object caching systems and storage appliances. By introducing an innovative shadow interrupt descriptor table mechanism, ELI made it possible for the first time to achieve nearly zero-overhead I/O virtualization, i.e., close to bare-metal performance, when running the most demanding I/O-intensive workloads. The authors believe that ELI will accelerate the adoption of virtualization for any type of workload, helping the industry build more flexible and cost-efficient systems. The paper may also influence hardware vendors such as Intel and AMD to integrate into their processors a hardware mechanism that behaves like the software-only approach presented by ELI.
The paper has already been heavily cited, with over 60 citations according to Google Scholar.
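The performance effect at stake can be illustrated with a toy back-of-envelope model. This is not the ELI implementation; the interrupt rate and per-exit cost below are illustrative assumptions chosen only to show how per-interrupt VM exits erode I/O throughput and what removing them recovers.

```python
def effective_throughput(base_gbps, interrupts_per_sec, exit_cost_us):
    """Throughput left after paying VM-exit costs for every interrupt.

    overhead = fraction of each second the CPU spends in exit handling.
    """
    overhead = interrupts_per_sec * exit_cost_us / 1e6
    overhead = min(overhead, 1.0)
    return base_gbps * (1.0 - overhead)

# Illustrative numbers: a 10 Gb/s NIC, 150k interrupts/s, ~5 us per exit.
baseline = effective_throughput(10.0, 150_000, 5.0)  # interrupts trap to the host
exitless = effective_throughput(10.0, 150_000, 0.0)  # ELI-style: delivered directly
print(f"{baseline:.2f} Gb/s with exits vs {exitless:.2f} Gb/s exit-less")
```

Even this crude model shows why interrupt-heavy workloads suffer: at these assumed rates, exit handling alone consumes most of a core, and eliminating it restores near-bare-metal throughput.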
Paper title: Homomorphic Evaluation of the AES Circuit
Authors and affiliations: Shai Halevi1, Craig Gentry1, and Nigel Smart2
1IBM Research - Watson; 2University of Bristol
Name of journal or conference: International Cryptology Conference (CRYPTO) 2012
Summary: This paper represents a large step forward in the advancement of fully-homomorphic encryption. It describes optimizations that allow for a fully-homomorphic implementation of a complex and practical circuit, the AES encryption algorithm. The optimizations are both AES-focused and applicable more generally to future implementations of fully-homomorphic circuits. They reduced the running time of a circuit like AES by roughly two orders of magnitude, down to 36 hours to compute the encryption of a single AES block. Further optimizations that process multiple blocks simultaneously achieve an amortized rate of just over five minutes per block, reducing the effective per-block running time from around 150 days to five minutes.
To note a few of the contributions of the work: strategies for minimizing costly operations that are unique to fully-homomorphic encryption; new ways of structuring algorithms to reduce the required key space; and twists on the key concepts regarding how to handle noise in evaluations. These techniques can be applied to a wide variety of algorithms to see how amenable they are to fully-homomorphic encryption implementations. Further, the integration of SIMD and other parallel optimizations that allow multiple blocks (in this case up to 720) to be processed simultaneously will likely be a hallmark of fully-homomorphic implementations moving forward. In scenarios where throughput, and not latency, is the primary concern, fully-homomorphic encryption is becoming more practical.
The paper has already been heavily cited, with over 90 citations according to Google Scholar.
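For readers new to the idea, a toy scheme in the spirit of "homomorphic encryption over the integers" (DGHV-style, not the BGV-based scheme used in the paper) shows how adding and multiplying ciphertexts evaluates XOR and AND on hidden bits, and why noise growth is the central engineering obstacle mentioned above. The parameters are tiny illustrative values with no security whatsoever.

```python
import random

p = 10007  # secret key: an odd integer (toy-sized)

def encrypt(bit, noise_bound=10):
    # Ciphertext = bit + small even noise + multiple of the secret key.
    r = random.randint(1, noise_bound)
    q = random.randint(1, 1000)
    return bit + 2 * r + p * q

def decrypt(c):
    # Works only while the accumulated noise stays below p.
    return (c % p) % 2

a, b = 1, 0
ca, cb = encrypt(a), encrypt(b)

# Homomorphic XOR and AND are plain integer + and * on ciphertexts.
# Each operation grows the noise term (the 2r parts add and multiply),
# which is why noise management dominates FHE design.
assert decrypt(ca + cb) == a ^ b
assert decrypt(ca * cb) == a & b
```

In a real scheme the noise budget is exhausted after a limited number of multiplications, which is exactly what the paper's optimizations (modulus switching, SIMD packing of up to 720 blocks) are designed to manage at scale.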
Paper title: Lagrangian Duality and Branch-and-Bound Algorithms for Optimal Power Flow
Authors and affiliations: Dzung Phan (IBM Research - Watson)
Name of journal or conference: Operations Research
Summary: This paper provides an exact branch-and-bound algorithm to solve a continuous (no discrete variables), nonconvex, nonlinear optimization problem, namely the AC optimal power flow problem (AC OPF). The AC OPF problem is to calculate the lowest-cost power flows through a network of generators and consumers of power connected by power lines. The paper considers a fairly general subcase of the AC OPF problem. As the problem is nonlinear and nonconvex, it is fairly common in the literature to solve it with nonlinear solvers that provide only locally optimal solutions. The main contribution of this paper is a branch-and-bound algorithm that finds a globally optimal solution; numerical experiments show the efficacy of this algorithm on reasonably sized instances. The lower bounds used in the algorithm are based on a specific Lagrangian relaxation which is solved over an ellipsoid containing the domain of feasible solutions. Branching is performed in a novel way by subdividing a domain into two subdomains contained inside two smaller ellipsoids.
The solution methods have the potential to influence and improve real-life power flow solvers. This is an important area, as even small improvements in solution techniques can have significant economic impact. The research was done as part of SERI (the IBM Smarter Energy Research Institute), and several utilities have already shown interest in the methods.
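The bound-and-subdivide pattern is easiest to see on a toy problem. The sketch below minimizes a one-dimensional nonconvex function by bisecting intervals and pruning with a cheap term-wise lower bound; the paper's algorithm follows the same skeleton but computes Lagrangian lower bounds over shrinking ellipsoids enclosing the AC OPF feasible set. Everything here is a simplified placeholder, not the paper's method.

```python
def f(x):
    # A nonconvex objective with two local minima on [-2, 2].
    return x**4 - 3 * x**2 + x

def lower_bound(l, u):
    # Valid (if loose) bound: sum of term-wise minima over [l, u].
    quartic = 0.0 if l <= 0.0 <= u else min(l**4, u**4)
    return quartic - 3 * max(l * l, u * u) + l

def branch_and_bound(l, u, eps=1e-6):
    best = min(f(l), f(u))        # incumbent: best feasible value seen
    stack = [(l, u)]
    while stack:
        a, b = stack.pop()
        mid = (a + b) / 2
        best = min(best, f(mid))  # any point evaluated tightens the upper bound
        if lower_bound(a, b) < best - eps:
            # Region might still contain something better: subdivide it.
            stack += [(a, mid), (mid, b)]
    return best

print(branch_and_bound(-2.0, 2.0))
```

The guarantee is the same as in the paper's setting: when every unexplored region's lower bound exceeds the incumbent minus eps, the incumbent is provably within eps of the global optimum, something a local nonlinear solver cannot certify.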
Paper title: Light–matter Interaction in a Microcavity-controlled Graphene Transistor
Authors and affiliations: Michael Engel3, Mathias Steiner1, Antonio Lombardo2, Andrea Ferrari2, Hilbert v. Löhneysen3, Phaedon Avouris1, and Ralph Krupke3
1IBM Research - Watson; 2Cambridge University; 3Karlsruhe Institute of Technology
Name of journal or conference: Nature Communications
Summary: Graphene consists of a layer of carbon atoms arranged in a honeycomb structure. It possesses extraordinary electrical properties that could potentially be used in the development of high-speed optoelectronic devices, such as photodetectors, optical modulators and solid-state lasers. The paper demonstrates that placing a sheet of graphene in an optical microcavity can have profound effects on its optoelectronic properties. It describes how such optical confinement allows spectrally selective generation of photocurrent and even alters the electrical transport properties of the material. The results show that wavelength-selective optoelectronic components can also be made from graphene.
The technical challenge was to couple the graphene to the electrodes and to integrate the material in an optical microcavity. A sheet of graphene was embedded between two optically transparent dielectric materials, Si3N4 and Al2O3, which are in turn enclosed by silver mirrors with a spacing equal to one-half of the resonant wavelength of the cavity. At the center of this optical cavity the anti-node in the optical field enhances the absorption or emission of photons by the graphene at the resonant wavelength, and inhibits it at other wavelengths. Applying a voltage across the graphene and illuminating the device with a laser generated 20 times more photocurrent at the resonant wavelength, providing a spectral selectivity not observed in unconfined graphene. Similarly, when a current is applied to unconfined graphene, it heats up and emits a featureless thermal spectrum, whereas the graphene in the optical cavity displays a strong emission peak at the resonant wavelength.
The paper has already been heavily cited, with over 60 citations according to Google Scholar.
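The half-wavelength cavity condition described above fixes which color the device selects. As a back-of-envelope check, the fundamental resonance of a planar mirror cavity satisfies n·L = λ/2, so the resonant wavelength follows directly from the mirror spacing and the effective refractive index of the dielectric stack. The numbers below are illustrative assumptions, not the device's measured values.

```python
def resonant_wavelength_nm(spacing_nm, n_effective):
    """Fundamental mode of a planar half-wavelength cavity: n * L = lambda / 2."""
    return 2.0 * n_effective * spacing_nm

# e.g. an assumed ~230 nm dielectric stack with effective index ~1.9
# would resonate near 874 nm.
print(resonant_wavelength_nm(230.0, 1.9))
```

This is why the photocurrent and thermal emission peaks lock to a single wavelength: only light satisfying this condition builds up an anti-node at the graphene sheet.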
Paper title: Sublinear Optimization for Machine Learning
Authors and affiliations: Kenneth Clarkson1, Elad Hazan2, David Woodruff1
1IBM Research - Almaden; 2Technion
Name of journal or conference: Journal of the ACM
Summary: This paper gives sublinear (in the input data) time approximation algorithms for some basic machine learning optimization problems. The two main results are for linear classification and minimum enclosing ball. The paper uses a simple method called "L_2 sampling" for approximating vector dot products, together with the classical iterative techniques of stochastic gradient descent and multiplicative weights updates, applied in a primal-dual framework. Such dot product estimates have low accuracy; the paper shows that despite this, the iterative algorithms achieve good accuracy. The running time of the approximation algorithms is shown to be linear in the number of points plus the dimension. The previously best running time for approximating these quantities was linear in the input data, which could be as large as the number of points multiplied by the dimension. Moreover, the paper proves that the running time of the new algorithms is almost the best possible in the RAM model. The lower bounds are proven by showing that a certain number of entries of the input data must be read by any algorithm that has an acceptable chance of success.
Given the fundamental nature of these problems and the need for fast algorithms on large inputs, this is a result that is likely to have broad impact in machine learning.
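The L_2 sampling primitive at the heart of these algorithms is simple enough to sketch: sample coordinate i with probability x_i²/‖x‖², then return ‖x‖²·y_i/x_i, an unbiased estimate of the dot product ⟨x, y⟩ that reads only one entry of y per sample. The sketch below is a minimal illustration under that definition (in the paper, x is preprocessed once so each estimate is genuinely sublinear); the vectors and sample count are arbitrary.

```python
import random

def l2_sample_dot(x, y, num_samples=2000, rng=random):
    """Unbiased dot-product estimate via L_2 sampling on x."""
    norm_sq = sum(v * v for v in x)
    weights = [v * v / norm_sq for v in x]  # p_i = x_i^2 / ||x||^2
    total = 0.0
    for _ in range(num_samples):
        i = rng.choices(range(len(x)), weights=weights)[0]
        total += norm_sq * y[i] / x[i]      # E[estimate] = <x, y>
    return total / num_samples

random.seed(0)
x = [1.0, -2.0, 3.0]
y = [4.0, 0.5, -1.0]
print(l2_sample_dot(x, y))  # concentrates around the true value <x, y> = 0
```

The individual estimates are noisy, which is exactly the paper's point: its contribution is showing that primal-dual iterative methods still converge to good solutions when fed such low-accuracy estimates.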
This article first appeared on the IBM intranet. It was written by Stephen Lavenberg.