IBM Programming Languages Day - PL Day 2010
The seventh annual Programming Languages Day will be held at the IBM Thomas J. Watson Research Center on Thursday, July 29, 2010. The day is held in cooperation with the New Jersey and New England Programming Languages and Systems Seminars. The main goal of the event is to increase awareness of each other's work and to encourage interaction and collaboration.
The Programming Languages Day features a keynote presentation and 12 regular presentations. Prof. Doug Lea, State University of New York at Oswego, will deliver the keynote presentation this year.
You are welcome from 9AM onwards, and the keynote presentation will start at 9:30AM sharp. We expect the program to run until 5:30PM. The Programming Languages Day will be held in room GN-F15 in the Hawthorne-1 building in Hawthorne, New York.
If you plan to attend the Programming Languages Day, please register by sending an e-mail with your name, affiliation, contact information, and dietary restrictions to etorlak at us.ibm.com so that we can plan for lunch and refreshments.
Program Committee:
- Adriana Compagnoni, Stevens Institute of Technology
- Joshua Guttman, Worcester Polytechnic Institute
- Emina Torlak, IBM T.J. Watson Research Center
Program
9:00-9:30 | BREAKFAST

Keynote
9:30-10:30 | Doug Lea (State University of New York at Oswego) | Engineering Fine-Grained Parallelism Support for Java7

10:30-10:45 | BREAK

Session 1: Parallelism and Concurrency
10:45-11:10 | Arun Raman (Princeton University) | Speculative Parallelization Using Software Multi-threaded Transactions
11:10-11:35 | Guojing Cong, George Almasi, and Vijay Saraswat (IBM Research) | Engineering Distributed Graph Algorithms in PGAS Languages
11:35-12:00 | Ryan Newton (Intel) | Intel Concurrent Collections for Haskell

12:00-1:00 | LUNCH

Session 2: Domain Specific Languages and Constructs
1:00-1:25 | Dominic Duggan and Ye Wu (Stevens Institute of Technology) | Secure Nested Transactions
1:25-1:50 | Robert Grabowski and Lennart Beringer (Princeton University) | Noninterference for Dynamic Security Environments
1:50-2:15 | Ashish Agarwal (Yale University) | Mechanizing Optimization and Statistics

2:15-2:30 | BREAK

Session 3: Logic and Analysis
2:30-2:55 | Ming Fu (Yale University) | Reasoning About Optimistic Concurrency Using a Program Logic for History
2:55-3:20 | Kenneth Roe and Scott Smith (Johns Hopkins University) | A Framework for Describing Recursive Data Structure Topologies
3:20-3:45 | David Van Horn (Northeastern University), Christopher Earl and Matthew Might (University of Utah) | Push-Down Control-Flow Analysis of Higher-Order Programs

3:45-4:15 | COFFEE

Session 4: Languages and Tools for the Web
4:15-4:40 | Robert Muth (Google) | NaCl: Spice up Your Browser
4:40-5:05 | Jose Castanos, David Edelsohn, Kazuaki Ishizaki, Takeshi Ogasawara, Priya Nagpurkar, Akihiko Tozawa, and Peng Wu (IBM Research) | Compilers are from Mars, Dynamic Scripting Languages are from Venus
5:05-5:30 | Nate Foster (Cornell University), Michael Freedman, Rob Harrison, Matthew Meola, Jennifer Rexford, and David Walker (Princeton University) | Frenetic: Functional Reactive Programming for Networks
Abstracts and Slides
Engineering Fine-Grained Parallelism Support for Java7
Doug Lea (State University of New York at Oswego)
(pdf)
The next major Java release will contain a workstealing framework that efficiently supports a wide range of parallel usages, including lightweight actors and parallel operations on collections. This talk will present an overview of the rationale, design, and implementation, along with sample current and upcoming usages by languages running on JVMs.
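To give a flavor of the fork/join style the framework supports, here is a minimal sketch using the java.util.concurrent classes that shipped with this framework (the array-summing task and its threshold are our own illustration, not taken from the talk):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Recursively splits a summation until chunks are small enough to
// compute directly; idle worker threads steal the forked subtasks.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000; // illustrative cutoff
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {               // small enough: sum sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        left.fork();                               // schedule left half asynchronously
        long right = new SumTask(data, mid, hi).compute();
        return right + left.join();                // wait for the forked/stolen half
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);
        ForkJoinPool pool = new ForkJoinPool();    // the work-stealing pool
        System.out.println(pool.invoke(new SumTask(data, 0, data.length)));
    }
}
```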
Speculative Parallelization Using Software Multi-threaded Transactions
Arun Raman (Princeton University)
With the right techniques, multicore architectures may be able to continue the exponential performance trend that elevated the performance of applications of all types for decades. While many scientific programs can be parallelized without speculative techniques, speculative parallelism appears to be the key to continuing this trend for general-purpose applications. Recently proposed code parallelization techniques, such as those by Bridges et al. and by Thies et al., demonstrate scalable performance on multiple cores by using speculation to divide code into atomic units (transactions) that span multiple threads in order to expose data parallelism. Unfortunately, most software and hardware Thread-Level Speculation (TLS) memory systems and transactional memories are not sufficient because they only support single-threaded atomic units. Multi-threaded Transactions (MTXs) address this problem, but they require expensive hardware support as currently proposed in the literature. This work proposes a Software MTX (SMTX) system that captures the applicability and performance of hardware MTX, but on existing multicore machines. The SMTX system yields a harmonic mean speedup of 13.36x on native hardware with four 6-core processors (24 cores in total) running speculatively parallelized applications.
Engineering Distributed Graph Algorithms in PGAS Languages
Guojing Cong, George Almasi, and Vijay Saraswat (IBM Research)
(ppt)
Due to their memory-intensive workloads and erratic access patterns, irregular graph algorithms are notoriously hard to implement and optimize for high performance on distributed-memory systems. Although the recently proposed PGAS paradigm improves ease of programming, no high-performance PGAS implementation of large-scale graph analysis is known.
We present the first fast PGAS implementation of graph algorithms for the connected components and minimum spanning tree problems. By improving memory access locality, our implementation achieves much better communication efficiency and cache performance on a cluster of SMPs than a naive implementation. With additional algorithmic and PGAS-specific optimizations, it achieves significant speedups over both the best sequential implementation and the best single-node SMP implementation for large, sparse graphs with more than a billion edges.
We further analyze the optimizations applied in our study and discuss how they may be automated by either the compiler or the runtime system. This material is based on the Supercomputing 2010 paper.
Intel Concurrent Collections for Haskell
Ryan Newton (Intel)
(pptx)
Intel Concurrent Collections (CnC) is a parallel programming model in which a network of steps (functions) communicates through message passing as well as a limited form of shared memory. This talk describes a new implementation of CnC for Haskell. Compared to existing parallel programming models for Haskell, CnC occupies a useful point in the design space: pure and deterministic like Strategies, but more explicit about granularity and the structure of the computation, which affords the programmer greater control over parallel performance. We present results on 4-, 32-, and 48-core machines demonstrating parallel speedups ranging between 7X and 22X on non-trivial benchmarks.
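The talk's implementation is in Haskell, whose API differs; purely as a language-neutral illustration of the write-once "item collection" that underlies CnC's determinism, here is a hypothetical Java sketch (all class and method names are our own, not the CnC API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch: a write-once (single-assignment) table in the
// spirit of a CnC item collection. Steps communicate only through puts
// and blocking gets, which is what makes the model deterministic.
class ItemCollection<K, V> {
    private static final class Cell<V> {
        final CountDownLatch ready = new CountDownLatch(1);
        volatile V value;
    }
    private final ConcurrentHashMap<K, Cell<V>> cells = new ConcurrentHashMap<>();

    private Cell<V> cell(K key) {
        return cells.computeIfAbsent(key, k -> new Cell<>());
    }

    // Each key may be written at most once; a second put is an error.
    public void put(K key, V value) {
        Cell<V> c = cell(key);
        synchronized (c) {
            if (c.ready.getCount() == 0)
                throw new IllegalStateException("double put for key: " + key);
            c.value = value;
            c.ready.countDown();
        }
    }

    // Blocks until some step has produced the item for this key.
    public V get(K key) throws InterruptedException {
        Cell<V> c = cell(key);
        c.ready.await();
        return c.value;
    }
}

public class CnCSketch {
    public static void main(String[] args) throws InterruptedException {
        ItemCollection<Integer, String> items = new ItemCollection<>();
        new Thread(() -> items.put(1, "hello from a producer step")).start();
        System.out.println(items.get(1)); // blocks until the producer's put
    }
}
```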
Secure Nested Transactions
Dominic Duggan and Ye Wu (Stevens Institute of Technology)
Nested transactions are a well-known abstraction for building secure distributed systems. This article considers information flow security properties of nested transactions. The motivation for considering nested transactions is that they provide a natural generalization of information flow control from the sequential to the concurrent case: transactions provide synchronization between processes of different security levels, and nested transactions allow processes of different security levels to collaborate without leaking information. Process aborts prevent the termination leaks normally associated with information flow in concurrent processes. Because process aborts are now observable, nesting of transactions must replace the stacking of security contexts used in the sequential case. The article considers two semantics for nested transactions: Tau_{Zero}, a global calculus of nested transactions, and Tau_{One}, a language for the compositional description of transactional applications that is used to reason about security properties of such applications.
Noninterference for Dynamic Security Environments
Robert Grabowski and Lennart Beringer (Princeton University)
(pdf)
Noninterference specifies that a program transfers data between objects of different confidentiality levels in a manner that respects a given information flow policy. Type systems are one technique for statically analysing programs with respect to this property. However, few existing analyses consider mobile code executed in varying security environments where the confidentiality levels and flow policies are only known at runtime.
In this talk, we present a suitable generalisation of noninterference for Java/JVM-style mobile code. Dynamic policy dependency is supported by a language construct to query the security levels and policies at runtime and to guard operations that induce potentially insecure information flows. We present type systems for Java and bytecode that guarantee that programs execute securely in any environment. We also outline a type-preserving compilation that produces a certificate of dynamic noninterference, making the approach suitable for proof-carrying code scenarios.
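To illustrate the guarded-flow pattern the abstract describes, here is a hypothetical Java sketch (the enum, interface, and class names are our own invention, not the paper's language construct or type system):

```java
// Hypothetical sketch: the security environment is only known at run
// time, so the program queries the active policy before performing a
// potentially insecure information flow.
enum Level { LOW, HIGH }

interface Policy {
    boolean flowsTo(Level from, Level to); // does the active policy allow from -> to?
}

class GuardedChannel {
    private final Policy policy; // supplied by the runtime environment
    GuardedChannel(Policy policy) { this.policy = policy; }

    void send(String data, Level dataLevel, Level channelLevel) {
        if (!policy.flowsTo(dataLevel, channelLevel)) {
            throw new SecurityException("flow " + dataLevel + " -> " + channelLevel + " denied");
        }
        System.out.println("sent: " + data); // stand-in for the real output operation
    }
}

public class GuardDemo {
    public static void main(String[] args) {
        // Example policy: forbid only HIGH -> LOW flows.
        Policy p = (from, to) -> !(from == Level.HIGH && to == Level.LOW);
        new GuardedChannel(p).send("public data", Level.LOW, Level.HIGH); // allowed
    }
}
```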
Mechanizing Optimization and Statistics
Ashish Agarwal (Yale University)
(pdf)
Scientific and engineering investigations are most often formalized in the language of numerical mathematics. The tools supporting this are numerous but disparate, leading to sub-optimal use of existing mathematical theory. We present a unifying framework by taking a programming-languages-based approach to this problem. Our richly typed language allows optimization and statistics problems to be declared naturally, and a library of transformations allows users to interactively compile input problems to solvable forms. We implement our system as a domain specific language embedded in OCaml. Here, we focus on three features: disjunctive constraints, measure types and random variables, and indexing.
By disjunctive constraints, we mean disjunctions over propositions on reals, e.g. x <= w \/ x >= w + 4.0. The usual solution strategy involves converting these into mixed-integer linear programming (MILP) constraints using the big-M, convex-hull, or other methods. Automation is clearly needed because these methods are algebraically tedious, and manual application limits them to experts. We provide the first robust implementations and compare our results with those of ILOG CPLEX. The big-M reformulation of the example disjunction is sketched below.
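As a worked instance of the standard big-M method (our own illustration, not taken from the talk), the example disjunction becomes a MILP constraint set by introducing a binary variable y and a sufficiently large constant M:

```latex
x \le w \;\lor\; x \ge w + 4
\quad\rightsquigarrow\quad
\left\{
\begin{aligned}
& x \le w + M\,y \\
& x \ge (w + 4) - M\,(1 - y) \\
& y \in \{0, 1\}
\end{aligned}
\right.
```

Setting y = 0 enforces the first disjunct and relaxes the second; y = 1 does the opposite. The convex-hull method trades extra variables for a tighter linear relaxation.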
Statistics is increasingly important due to the growing amount of data generated in the sciences. We introduce language features that enable declarative expression of statistical models and estimation problems. A type 'prob T' characterizes probability measures over type T, a special let binding introduces random variables, and some standard measures (e.g. Normal, Gaussian) can be used to construct more complex ones. We demonstrate with an example how our software facilitates exploring the large space of algorithms for solving statistical problems.
Finally, matrices are accepted canonical forms in mathematics, but practitioners employ a more flexible indexing notation: e.g. forall i in {A,B,C} x_i <= w_i. Especially in optimization, this need is so critical that virtually every tool supports it. However, indexing has been treated as a mere syntactic convenience and is eliminated at parse time. We present a dependently typed theory that enables far richer index sets to be expressed. Importantly, our theory brings indexing into the formal realm, providing an O(n) to O(1) reduction in memory requirements and the potential for a corresponding computational improvement.
Reasoning about Optimistic Concurrency Using a Program Logic for History
Ming Fu (Yale University)
(pptx)
Optimistic concurrency algorithms provide good performance for parallel programs, but they are extremely hard to reason about. Program logics such as concurrent separation logic and rely-guarantee reasoning can be used to verify these algorithms, but they make heavy use of history variables, which may obscure the high-level intuition underlying the design of these algorithms. In this paper, we propose a novel program logic that uses invariants on history traces to reason about optimistic concurrency algorithms. We use past tense temporal operators in our assertions to specify execution histories. Our logic supports modular program specifications with history information by providing separation over both space (program states) and time. We verify Michael's non-blocking stack algorithm and show that the intuition behind such algorithms can be naturally captured using trace invariants.
A Framework for Describing Recursive Data Structure Topologies
Kenneth Roe and Scott Smith (Johns Hopkins University)
(ppt)
This paper presents a framework for verifying invariants on heap data structures such as lists and trees in a C-like language with a low-level store model. The goal of the system is to detect common errors such as memory leaks, dangling pointers and looped data structures. The framework provides a language for expressing invariants, and a set of inference rules for verifying them on code that manipulates the data structures. This work builds on the work done by Cook et al. which uses separation logic with recursive predicates to document data structure invariants. The key extension here is the ability to express and reason about data structures more complex than linked lists. The heap description includes a spatial component describing the basic set of lists and trees in the heap. Regular expressions over struct fields are used to describe invariants on pointer fields outside of those creating the basic lists and trees. Also, many special constructs are included to describe intermediate states when a program is in the middle of updating a data structure. To demonstrate its utility, we use the framework to analyze a program that manipulates a tree and a linked list of indices into that tree. A Coq implementation of the logic is currently being developed to experiment with formal verification of small programming examples.
Push-Down Control-Flow Analysis of Higher-Order Programs
David Van Horn (Northeastern University), Christopher Earl and Matthew Might (University of Utah)
(pdf)
Context-free approaches to static analysis gain precision over classical approaches by perfectly matching returns to call sites---a property that eliminates spurious interprocedural paths. Vardoulakis and Shivers's recent formulation of CFA2 showed that it is possible (if expensive) to apply context-free methods to higher-order languages and achieve the same boost in precision that they provide for first-order programs.
To this young body of work on context-free analysis of higher-order programs, we contribute the first polyvariant and polynomial-time pushdown control-flow analysis framework, which we derive as an abstract interpretation of a CESK machine with an unbounded stack.
In the end, we arrive at a framework for control-flow analysis that can efficiently compute pushdown generalizations of classical control-flow analyses.
NaCl: Spice up Your Browser
Robert Muth (Google)
Native Client (NaCl) is an open-source technology for running x86 native code in web applications, with the goal of maintaining the browser neutrality, OS portability, and safety that people expect from web apps. Native Client uses software fault isolation and a specialized runtime to direct all system interaction and side effects through managed interfaces. It supports performance-oriented features generally absent from web application programming environments, such as thread support, hand-coded assembler, etc. We combine these properties in an open architecture designed to leverage existing web standards, and to encourage community review and 3rd-party tools.
This talk will cover system design and implementation, and some of our experiences securing and using the system.
We will also talk about our recent efforts to port the system to non-x86 instruction sets and our plans for helping programmers to support multiple instruction sets.
Compilers are from Mars, Dynamic Scripting Languages are from Venus
Jose Castanos, David Edelsohn, Kazuaki Ishizaki, Takeshi Ogasawara, Priya Nagpurkar, Akihiko Tozawa, and Peng Wu (IBM Research)
(pdf)
There has been a recent surge of interest in compiling dynamic scripting languages (DSLs), such as Javascript, Python, PHP, Ruby, and Lua, to improve performance. The most popular implementations of DSLs today are primarily interpreted and can be 10 to 1000 times slower than C and Java. Given this performance disparity, one might expect compilation to easily yield orders-of-magnitude performance improvements. To our surprise, performance improvement from today's DSL compilers remains elusive: speed-up factors are typically within 2x, and a DSL JIT can even be significantly slower than interpretation.
In this talk, we will examine various approaches taken by today's DSL compilers, with a special focus on Python. Python offers almost every flavor of compilation approach explored by the community, such as attaching an existing optimizer to the CPython interpreter (Unladen Swallow), converting Python to an intermediate language with a mature JIT (Jython, IronPython, and the DaVinci machine), exploring trace compilation (PyPy), and runtime type specialization (Psyco). We observe two interesting directions taken by today's Python JIT projects. One is to compile primarily at the level of the language used to implement Python (C for CPython, Java for Jython, and RPython for PyPy). The other is to customize existing optimizers designed for statically typed languages (LLVM, Java JIT, and CLR JIT) to compile DSLs. We will discuss the implications and trade-offs of such approaches. We will also briefly talk about our early experience in building a JIT compiler for Python in our ongoing FIORANO project, and present our vision of how to compile dynamic scripting languages.
Frenetic: Functional Reactive Programming for Networks
Nate Foster (Cornell University), Michael Freedman, Rob Harrison, Matthew Meola, Jennifer Rexford, and David Walker (Princeton University)
(pdf)
Effective network management is a difficult task. Operators must configure the devices in the network to provide a number of interrelated services ranging from basic routing, to service discovery, to load balancing, to traffic monitoring, to authentication and access control. Unfortunately, the interface for programming the network is typically defined at a very low level of abstraction -- it is usually derived from the features of the underlying hardware and is designed for efficiency rather than expressiveness and ease-of-use.
This paper proposes a new programming language for networks based on functional reactive programming. Our language, Frenetic, is organized around two distinct levels of abstraction:
- A collection of high-level declarative operators, inspired by functional reactive programming, for processing streams of network traffic.
- A run-time system that handles all of the low-level details related to installing and uninstalling packet-processing rules on network switches.
We describe the design of Frenetic as well as its implementation on top of the OpenFlow/NOX platform. We show how Frenetic enables a modular style of programming and we demonstrate the utility of the language using examples inspired by common network management tasks.