# Pat Goldberg Best Paper Awards for 2010 announced

**2010 Best Paper awards focus on double patterning lithography, the distinct elements problem, discrepancy minimization and nested virtualization.**

Close to 120 papers in computer science, electrical engineering and mathematical sciences published in refereed conference proceedings and journals in 2010 were submitted by IBM Research authors worldwide for the 2010 Pat Goldberg Memorial Best Paper Awards in CS, EE and Math.

As usual, the overall quality of the papers was high. From these submissions, the Research Professional Interest Communities (PICs) nominated 34 papers based on technical significance (depth and breadth) and expected impact on computer science, electrical engineering or mathematical sciences, or their applications in the research disciplines covered by the PICs.

A committee consisting of the PIC site coordinators (see sidebar) and head of IBM Computer Science Laura Haas reviewed the nominated papers and selected the following four as winners of the 2010 Pat Goldberg Memorial Best Paper Awards. In alphabetical order:

**Authors and affiliations**: Jae-Seok Yang (UT Austin), Katrina Lu (Intel), Minsik Cho (IBM T. J. Watson Research Center), Kun Yuan (UT Austin), and David Pan (UT Austin)

**Paper title**: A new graph theoretic, multiobjective layout decomposition framework for double patterning lithography

**Name of journal or conference**: ASPDAC 2010 (Asian and South Pacific Design Automation Conference)

Double patterning lithography (DPL) is the leading photolithography technology candidate for 14nm, and may be further extended as triple/quadruple patterning for sub-7nm nodes. Layout decomposition is a fundamental step in DPL that splits the original layout into two orthogonal sub-masks (resulting in a litho-etch-litho-etch process). This step has several key challenges:

(a) Each sub-mask has to be conflict-free, i.e., it must satisfy all DPL rules as well as the design rules.

(b) The stitch count (a stitch is created when a single polygon is severed) should be minimized for higher tolerance against significant overlay errors (e.g., 3σ = 5nm).

(c) The two sub-masks should have similar pattern densities for better lithographic results.

In this paper, the authors first proved that layout decomposition for DPL is equivalent to a well-researched constrained min-cut graph partitioning problem (addressing challenges a and b). They then proposed a new partitioning framework to address the remaining challenge (c), based on an overlay self-compensation concept and a balanced partitioning objective, with the aim of enhancing overall manufacturing yield.
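The graph formulation can be illustrated with a toy sketch. The function name, the reduction of features to points, and the plain BFS 2-coloring below are illustrative assumptions, not the authors' algorithm (which handles general polygons, stitches, and the balancing objective): build a conflict graph connecting features closer than the DPL spacing, then try to 2-color it so conflicting features land on different masks.

```python
from collections import deque

def decompose(features, min_spacing):
    """Toy DPL layout decomposition via 2-coloring of a conflict graph.

    features: list of (x, y) feature centers (a hypothetical simplification;
    real layouts use polygons). Two features closer than min_spacing conflict
    and must be assigned to different masks.
    Returns {feature index: 0 or 1}, or None if an odd conflict cycle makes
    the layout un-decomposable without stitching.
    """
    n = len(features)
    # Build the conflict graph: an edge joins features that violate spacing.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = features[i], features[j]
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 < min_spacing ** 2:
                adj[i].append(j)
                adj[j].append(i)
    # BFS 2-coloring: a conflict-free decomposition exists iff the graph
    # is bipartite.
    mask = {}
    for start in range(n):
        if start in mask:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in mask:
                    mask[v] = 1 - mask[u]
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None  # odd cycle: stitching would be required
    return mask

# Usage: three features in a row, each too close to its neighbor.
print(decompose([(0, 0), (40, 0), (80, 0)], min_spacing=50))
# adjacent features land on different masks: {0: 0, 1: 1, 2: 0}
```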

Compared with previous state-of-the-art approaches, which relied on integer linear programming, the proposed algorithm is orders of magnitude faster and provides near-optimal decomposition results, while seamlessly handling multiple objective functions, which none of the previous works could. For example, the proposed approach achieved a speedup of more than 10^5 with only 0.5% quality degradation on the largest circuit the previous work could handle.

Given DPL's importance in sub-14nm manufacturing in the semiconductor industry, the paper received the Best Paper Award at ASPDAC 2010, a PIC targeted conference, for both its theoretical elegance and its significant real-world impact.

**Authors and affiliations**: Daniel M. Kane (Harvard University), Jelani Nelson (MIT), and David P. Woodruff (IBM Almaden Research Center)

**Paper title**: An optimal algorithm for the distinct elements problem

**Name of journal or conference**: PODS 2010 (ACM Symposium on Principles of Database Systems)

This paper won the Best Paper award at ACM PODS 2010, a PIC top-quality conference held in Indianapolis, Indiana. The following are some of the reasons cited by the PODS committee for the award:

"The Distinct Elements (F0) problem is one of the central problems in the data streams area. It has attracted a lot of interest over several decades, particularly in the database community -- there have been many VLDB/SIGMOD/PODS/ICDE papers on this topic. This new paper makes a significant advance in our understanding of the difficulty of this problem, with strong bounds on the complexity."

"This paper presents an optimal algorithm for estimating the number of distinct elements in a stream. This is a classic, important problem that has been well-studied; providing the first optimal algorithm is a nice accomplishment."

In summary, the paper resolves a classic question that was not only challenging theoretically, but also of practical interest to the database community. The optimal bounds are achieved through a novel combination of techniques, including subsampling, balls-and-bins arguments, k-wise independent hash functions, and clever modifications and improvements of several data structures. This work can be viewed as the culmination of a long line of work on this problem, achieving optimal parameters in both space and time complexity.
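The flavor of such streaming sketches can be conveyed with the classic KMV (k minimum values) estimator, a much simpler precursor to the paper's result. The code below is an illustration of the general hashing/subsampling idea only, not the optimal algorithm of Kane, Nelson, and Woodruff:

```python
import hashlib

def estimate_distinct(stream, k=256):
    """KMV (k minimum values) estimate of the number of distinct elements.

    Hash every item to a pseudo-uniform value in [0, 1) and keep the k
    smallest distinct hash values seen. If the k-th smallest is m, the
    distinct count is roughly (k - 1) / m. Space is O(k), independent of
    the stream length. Illustrative classic sketch, not the paper's
    optimal algorithm.
    """
    kept = set()  # up to k smallest distinct hash values
    for item in stream:
        # Map the item deterministically to a value in [0, 1).
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16) / 2.0 ** 160
        if h in kept:
            continue
        if len(kept) < k:
            kept.add(h)
        elif h < max(kept):
            kept.discard(max(kept))
            kept.add(h)
    if len(kept) < k:
        return len(kept)  # fewer than k distinct items seen: count is exact
    return int((k - 1) / max(kept))

# Usage: 1,000 distinct values, each repeated three times.
print(estimate_distinct(list(range(1000)) * 3))  # roughly 1000 (randomized)
```

The estimate's relative error shrinks like 1/sqrt(k), so a few hundred kept hashes already give single-digit-percent accuracy regardless of how long the stream is.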

**Authors and affiliations**: Nikhil Bansal (IBM T. J. Watson Research Center)

**Paper title**: Constructive algorithms for discrepancy minimization

**Name of journal or conference**: FOCS 2010 (IEEE Symposium on Foundations of Computer Science)

The paper gives a novel algorithm for the Discrepancy Minimization Problem and related problems. In the simplest form the problem is as follows:

We are given a family of subsets S_1, ..., S_n of the set {1, ..., n}. The goal is to set x_i = +1 or x_i = -1 for every i in {1, ..., n} so as to minimize the discrepancy, defined as max_k |∑_{i ∈ S_k} x_i|. In 1985, Spencer showed that there always exists an assignment with discrepancy at most 6√n, and that this bound is tight. However, Spencer's result is not "constructive": it does not give an efficient algorithm for finding such an assignment. Moreover, Spencer conjectured that no such algorithm would exist. In his own words: "Nikhil Bansal has given an algorithmic implementation of my result that given any n sets on n points there is a 2-coloring with all discrepancies less than 6√n. I had long conjectured that no algorithm would exist."
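The definition can be checked on a tiny instance. The brute-force search below is illustrative only: it is exponential in n (Bansal's algorithm, by contrast, runs in polynomial time), and the example family is hypothetical.

```python
from itertools import product

def discrepancy(sets, x):
    """Discrepancy of a ±1 assignment x: max over the family of
    |sum of x_i for i in the set|."""
    return max(abs(sum(x[i] for i in s)) for s in sets)

def best_discrepancy(sets, n):
    """Brute-force minimum discrepancy over all 2^n sign assignments."""
    return min(discrepancy(sets, x) for x in product((-1, 1), repeat=n))

# Toy family of three subsets of {0, 1, 2, 3} (0-indexed for convenience).
sets = [{0, 1, 2}, {1, 2, 3}, {0, 3}]
print(best_discrepancy(sets, 4))  # 1: odd-sized sets force discrepancy >= 1
```

For instance, x = (+1, +1, -1, -1) gives sums of 1, -1, and 0 over the three sets, matching the optimum of 1.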

The paper not only settles a question that had been open for 25 years, but also introduces a novel rounding technique based on semidefinite programming (SDP). To find the assignment, the algorithm performs a Brownian-style random walk starting at the origin x = (x_1, ..., x_n) = (0, ..., 0) and making small random steps distributed as Gaussian random variables. Once x hits one of the faces of the cube [-1, 1]^n, the corresponding coordinate x_i is fixed to +1 or -1, and the process continues. The covariance matrix of the Gaussian steps is recomputed by solving an SDP after each step. The paper uses very clever arguments to show that a good solution to the SDP always exists, and then that the process converges to a good assignment with high probability. This is certainly a breakthrough in the area of SDP algorithms, and several follow-up papers have already been published.

**Authors and affiliations**: Muli Ben-Yehuda, Zvi Dubitzky, Michael Factor, Nadav Har’El, Abel Gordon, Orit Wasserman, Ben-Ami Yassour (IBM Haifa Research Lab), Anthony Liguori, Michael D. Day (IBM Linux Technology Center)

**Paper title**: The Turtles Project: Design and Implementation of Nested Virtualization

**Name of journal or conference**: OSDI 2010 (USENIX Symposium on Operating Systems Design and Implementation)

This paper was awarded the Jay Lepreau Best Paper Award at OSDI 2010, a PIC top-quality conference. Turtles made significant contributions to the field of operating systems, more specifically virtualization. It demonstrated that efficient "nested virtualization" for x86 platforms is feasible through two innovative technologies: multi-dimensional paging for MMU virtualization and multi-level device assignment for I/O virtualization. With efficient nested virtualization, hardware vendors can enhance their platforms by running hypervisors in firmware, and cloud offerings can be substantially improved.

The novelty and importance of this work have already started to have an impact on both industry and academia. Some software and hardware vendors have decided to add nested virtualization support to their offerings, influenced by the results and technologies presented in this paper. Computer science classes at several universities have already included this paper in their course materials.

For more information about the Pat Goldberg Memorial Best Paper Awards, contact Steve Lavenberg.

*This article originally appeared on the IBM intranet.*

*Last updated on August 1, 2011*