IBM Student Workshop for Frontiers of Cloud Computing 2012 - Program & Schedule


Program At A Glance

First Day - Monday July 30, 2012

Start   Duration   What
 8:15   0:30       Gathering, with breakfast
 8:45   0:15       Welcome
 9:00   1:00       Keynote Address by Jim Comfort
10:00   0:50       Two student talks of session 1 (C&N + DFTC): Chen, Li
10:50   0:20       Break
11:10   0:50       Two student talks of session 1 (C&N + DFTC): Lin, Yoon
12:00   1:00       Lunch
13:00   0:25       IBM talk of session 1: Zafer
13:25   0:25       Panel of session 1
13:50   0:50       Two student talks of session 2 (Mobile + PMA): Tiwari, Tan
14:40   0:20       Break
15:00   0:25       One student talk of session 2: Liu
15:25   0:25       IBM talk of session 2 (PMA): Tan
15:50   0:25       Panel of session 2
16:15   1:15       Three student talks of session 3 (S&P + OS): Chen, Weaver, Abu-Libdeh
17:30   1:00       Free time / travel to restaurant
18:30   ....       Dinner at Tramonto

 

Second Day - Tuesday July 31, 2012

Start   Duration   What
 8:30   0:30       Gathering, with breakfast
 9:00   0:25       IBM talk of session 3: Halevi
 9:25   0:25       Panel of session 3
 9:50   0:25       One student talk of session 4 (DM + SS): Palanisamy
10:15   0:20       Break
10:35   0:50       Two student talks of session 4 (DM + SS): Fan, Shen
11:25   0:25       IBM talk of session 4: Luis Lastras
11:50   0:25       Panel of session 4
12:15   1:00       Lunch
13:15   1:15       Three student talks of session 5 (Services + Web): Xin, Luo, Jiang & Zhou
14:30   0:25       IBM talk of session 5: Tao Tao
14:55   0:25       Panel of session 5
15:20   0:15       Best Talk award and goodbyes
15:35              End

 

 

Program Details

Student Presentations

Shiyao Chen (C&N); Cornell

Title: Deadline scheduling with resource augmentation and penalty: Optimal competitive ratio

Abstract: We consider an online preemptive scheduling problem where jobs with deadlines arrive sporadically. A commitment requirement is imposed such that the scheduler has to either accept or decline a job immediately upon arrival. The scheduler's decision to accept an arriving job constitutes a contract with the customer; if the accepted job is not completed by its deadline as promised, the scheduler loses the value of the corresponding job and has to pay an additional penalty depending on the amount of unfinished workload. The objective of the online scheduler is to maximize the overall profit, i.e., the total value of the admitted jobs completed before their deadlines, less the penalty paid for the admitted jobs that miss their deadlines. We show that the maximum competitive ratio is 3 − 2√2 and propose a simple online algorithm that achieves it. The optimal scheduler combines a threshold-based admission policy with a greedy scheduling policy.
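The threshold-plus-greedy structure lends itself to a compact sketch. The version below is hypothetical (the value-density test and the `threshold` value are illustrative, not the paper's ratio-optimal rule): admit a job only if its value density clears a threshold and earliest-deadline-first can still meet every promise on a single machine.

```python
class Job:
    def __init__(self, size, deadline, value):
        self.size, self.deadline, self.value = size, deadline, value
        self.remaining = size

def can_admit(job, admitted, now, threshold=0.5):
    """Hypothetical threshold-plus-feasibility admission test."""
    if job.value / job.size < threshold:      # value density too low
        return False
    # EDF feasibility: for every deadline d, the remaining work of all
    # jobs due by d must fit in the interval [now, d]
    work = 0.0
    for j in sorted(admitted + [job], key=lambda j: j.deadline):
        work += j.remaining
        if now + work > j.deadline:
            return False
    return True

# at time 0, a low-density job is declined; a feasible one is accepted
accepted = [Job(size=2, deadline=5, value=4)]
print(can_admit(Job(size=3, deadline=6, value=1), accepted, now=0))  # False
print(can_admit(Job(size=3, deadline=6, value=3), accepted, now=0))  # True
```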

Min Li (C&N, DFTC); Virginia Tech

Title: CAM: A Topology-Aware Minimum Cost Flow Based Resource Manager for MapReduce Applications in the Cloud

Abstract: MapReduce has emerged as a prevailing distributed computation paradigm for enterprise and large-scale data-intensive computing. The model is also increasingly used in the massively parallel cloud environment, where MapReduce jobs run on sets of virtual machines (VMs) on a pay-as-needed basis. However, MapReduce jobs suffer from performance degradation in the cloud due to inefficient resource allocation. In particular, the MapReduce model is designed for, and leverages information from, native clusters, whereas the cloud presents a virtual cluster topology that overlays or hides the actual network information. This results in two placement anomalies, loss of data locality and loss of job locality, where jobs are placed physically away from their data or from other associated jobs, adversely affecting their performance. We propose CAM, a cloud platform that provides an innovative resource scheduler particularly designed for hosting MapReduce applications in the cloud. CAM reconciles both data and VM resource allocation with a variety of competing constraints, such as storage utilization, changing CPU load, and network link capacities. CAM uses a flow-network-based algorithm that optimizes MapReduce performance under the specified constraints, not only through initial placement but also by readjusting through VM and data migration. Additionally, our platform exposes otherwise-hidden lower-level topology information to the MapReduce job scheduler so that it can make optimal task assignments. Evaluation of CAM using both micro-benchmarks and simulations on a 23-VM cluster shows that, compared to a state-of-the-art resource allocator, our system reduces network traffic and average MapReduce job execution time by factors of 3 and 8.6, respectively.
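As a toy rendering of the minimum-cost-flow idea (not CAM's actual formulation; all names, capacities, and costs below are invented), VM demand can be routed to hosts so that data-local edges, given cost 0, are preferred:

```python
import networkx as nx

# Toy placement as min-cost flow: route VM demand to hosts; edge weights
# encode topology cost (0 = data-local, 5 = cross-rack, illustrative).
G = nx.DiGraph()
vms = {"vm1": 2, "vm2": 1}           # CPU units required
hosts = {"hostA": 2, "hostB": 2}     # CPU units available
cost = {("vm1", "hostA"): 0, ("vm1", "hostB"): 5,
        ("vm2", "hostA"): 5, ("vm2", "hostB"): 0}

for vm, need in vms.items():
    G.add_node(vm, demand=-need)                    # sources supply flow
for host, cap in hosts.items():
    G.add_edge(host, "sink", capacity=cap, weight=0)
G.add_node("sink", demand=sum(vms.values()))        # absorbs all demand
for (vm, host), w in cost.items():
    G.add_edge(vm, host, capacity=vms[vm], weight=w)

flow = nx.min_cost_flow(G)           # {vm: {host: units placed}, ...}
print(flow["vm1"], flow["vm2"])      # both VMs land on their local host
```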

Minghong Lin (DFTC); CalTech

Title: Energy Efficient Algorithms in Data Centers

Abstract: Power consumption imposes a significant cost on data centers, so it is not surprising that optimizing energy cost in data centers is receiving increasing attention. In this talk, we focus on the algorithmic issues of energy optimization for data centers, combining algorithm analysis and experiments. At the local data center level, we develop online algorithms that dynamically adapt the number of active servers to match the current workload. At the global level, we propose online algorithms for geographical load balancing that exploit the diversity of renewable sources and of propagation delays. In both contexts the new algorithms provide significantly improved performance guarantees compared with the "standard" approaches using Receding Horizon Control.
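In the spirit of the local-level algorithms, here is a toy right-sizing policy with hysteresis; it is a stand-in for the provably competitive algorithms in the talk, with `beta` as an illustrative server toggling cost:

```python
import math

def lazy_capacity(workloads, beta=4.0, per_server=1.0):
    """Toy online right-sizing: scale up immediately to meet load, but
    scale down only after accumulated idle capacity outweighs the
    switching cost beta (hysteresis)."""
    servers, idle_credit, plan = 0, 0.0, []
    for w in workloads:
        need = math.ceil(w / per_server)      # servers required right now
        if need > servers:
            servers, idle_credit = need, 0.0  # scaling up is mandatory
        elif need < servers:
            idle_credit += (servers - need)   # cost of staying oversized
            if idle_credit >= beta:           # now cheaper to power down
                servers, idle_credit = need, 0.0
        plan.append(servers)
    return plan

print(lazy_capacity([3, 5, 2, 2, 2, 2, 6]))   # [3, 5, 5, 2, 2, 2, 6]
```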

Young Yoon (DFTC); University of Toronto

Title: Automated Planning for Network Topology Transformation

Abstract: Reconfiguring a topology is an important management technique for sustaining the efficiency and robustness of a network. But the problem of transforming the network from an old topology to a newly refined topology at runtime has received relatively little attention. The key challenge is to minimize the disruption caused by topology transformation operations; excessive disruption can be costly and harmful, and may thus deter migration to a better topology. To address this issue, we must find a sequence of transformation steps that incurs the least service disruption. We refer to this as the incremental topology transformation (ITT) problem. The ITT problem can be formulated naturally as an automated planning problem and solved with numerous off-the-shelf planning techniques. However, we found that state-of-the-art domain-independent planning techniques did not scale to large ITT instances. This shortcoming motivated us to develop a suite of planners that use novel domain-specific heuristics to guide the search for a solution. We empirically evaluated our planners on a wide range of topologies. Our results illustrate that they offer a viable solution to a diversity of ITT problems, and we envision that our approach could eventually become a compelling addition to the arsenal of techniques employed by network administrators.
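Framed as planning, small ITT instances can even be solved by brute-force search; the toy below uses plain BFS over add/drop-link steps (the talk's planners use domain-specific heuristics precisely because this does not scale, and they also weigh disruption, which BFS ignores):

```python
from collections import deque

def itt_plan(start, goal, neighbors):
    """Shortest sequence of transformation steps from start to goal."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for step, nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [step]))
    return None

# a topology is a frozenset of undirected links; a step adds or drops one
def neighbors(topo, nodes=("a", "b", "c")):
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            e = frozenset((u, v))
            op = "drop" if e in topo else "add"
            yield "%s %s-%s" % (op, u, v), (topo - {e} if e in topo
                                            else topo | {e})

start = frozenset({frozenset(("a", "b")), frozenset(("b", "c"))})
goal = frozenset({frozenset(("a", "c")), frozenset(("b", "c"))})
print(itt_plan(start, goal, neighbors))   # e.g. ['drop a-b', 'add a-c']
```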

Balaji Palanisamy (DM); Georgia Institute of Technology

Title: Purlieus: Locality-aware Resource Allocation for MapReduce in a Cloud

Abstract: Purlieus is a MapReduce resource allocation system aimed at enhancing the performance of MapReduce jobs in the cloud. Purlieus provisions virtual MapReduce clusters in a locality-aware manner, giving MapReduce virtual machines (VMs) access to input data and, importantly, intermediate data on local or close-by physical machines. We demonstrate how this locality-awareness during both the map and reduce phases of a job not only improves its runtime performance but also reduces the network traffic generated in the cloud data center. This is accomplished using a novel coupling of the otherwise independent data and VM placement steps. We present a detailed evaluation of Purlieus and demonstrate significant savings in network traffic and an almost 50% reduction in job execution times for a variety of workloads.

Bin Fan (DM & Storage); Carnegie Mellon

Title: Small Cache, Big Effect: Provable Load Balancing for Randomly Partitioned Cluster Services

Abstract: Load balancing requests across a cluster of back-end servers is critical for avoiding performance bottlenecks and meeting service-level objectives (SLOs) in large-scale cloud computing services. We show how a small, fast, popularity-based front-end cache can ensure load balancing for an important class of such services; furthermore, we prove an O(n log n) lower bound on the necessary cache size and show that this size depends only on the total number of back-end nodes n, not the number of items stored in the system. We validate our analysis through simulation and empirical results from running a key-value storage system on an 85-node cluster.
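A quick simulation conveys the claim; the oracle cache of the hottest keys and the constant c below are illustrative stand-ins for the paper's online cache and proven bound:

```python
import hashlib, math, random
from collections import Counter

def simulate(n_nodes=85, n_items=100_000, n_reqs=200_000, c=8):
    """Toy check: a front-end cache of roughly O(n log n) hot items
    flattens a skewed workload before it reaches the randomly
    partitioned back-ends."""
    cache_size = int(c * n_nodes * math.log(n_nodes))
    node_of = lambda k: int(hashlib.md5(b"%d" % k).hexdigest(), 16) % n_nodes
    weights = [1.0 / (i + 1) for i in range(n_items)]     # Zipf-like skew
    reqs = random.choices(range(n_items), weights=weights, k=n_reqs)
    hot = {k for k, _ in Counter(reqs).most_common(cache_size)}  # oracle
    load = Counter(node_of(k) for k in reqs if k not in hot)     # misses
    mean = sum(load.values()) / n_nodes
    print("cache:", cache_size,
          "max/mean back-end load: %.2f" % (max(load.values()) / mean))

simulate()
```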

Yue Tan (PMA); Ohio State University

Title: Provisioning for Cloud Computing with Quality of Service Constraints

Abstract: We present a stochastic modeling approach to guide the resource provisioning task for future service clouds as demand grows large. The problem maps to a capacity planning problem in a general multi-class Erlang loss network model under quality-of-service (QoS) constraints. We give an improved optimization formulation that is not only easier to solve, but also yields asymptotically exact provisioning solutions with improved QoS guarantees.
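For a flavor of the underlying loss-model computation (a single-class Erlang-B simplification of the multi-class network in the talk), the blocking probability and a minimal QoS-meeting capacity can be computed with the standard recurrence:

```python
def erlang_b(offered_load, servers):
    """Erlang-B blocking probability via the numerically stable
    recurrence B(0)=1, B(k) = E*B(k-1) / (k + E*B(k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# smallest capacity meeting a 1% blocking QoS target at 50 Erlangs
m = next(m for m in range(1, 200) if erlang_b(50, m) <= 0.01)
print(m, erlang_b(50, m))
```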

Devesh Tiwari (PMA, Mobile); North Carolina State University

Title: A Practical Performance Modeling Approach for Understanding In-Memory MapReduce on Multi-core Architectures (Winner, Best Talk Award)

Abstract: The MapReduce parallel programming model has seen wide adoption in data center applications. Recently, lightweight, fast, in-memory MapReduce runtime systems have been proposed for shared-memory systems. However, which factors affect performance, and which performance bottlenecks exist for a given program, are not well understood. In this talk, I will present a practical performance model that captures the key performance factors, important trends, and behavior of in-memory MapReduce on multi-core architectures. I will discuss how our analytical model yields several important findings and implications for system designers, performance tuners, and programmers.

Our model quantifies the relative contribution of different key performance factors for both the map and reduce phases, and shows that the performance of MapReduce programs is highly input-content dependent. It reveals that performance is heavily affected by the order in which distinct keys are encountered during the map phase and by the frequency of those distinct keys. We also show that data-structure and algorithm design choices affect the map and reduce phases differently, sometimes affecting the map phase positively while affecting the reduce phase negatively. If time permits, I will discuss how such practical performance models can be useful for performance optimization, computation offloading, and guiding computation placement decisions in exascale computing systems.

Bo (Irvine) Chen (Security & Privacy); New Jersey Institute of Technology

Title: Remote Data Integrity Checking for Public Clouds

Abstract: Remote Data Integrity Checking (RDC) allows clients to efficiently check the integrity of data stored at untrusted servers. This lets data owners assess the risk of outsourcing data to public clouds, making RDC a valuable tool for data auditing.

In this talk, I will present some recent results on RDC. My talk will be two-fold. First, I will introduce R-DPDP (Robust Dynamic Provable Data Possession), the first RDC scheme that provides robustness and, at the same time, supports dynamic updates while requiring small, constant client storage; the main challenge was to reduce the client-server communication overhead during updates under an adversarial setting. Second, I will present RDC-NC, a novel RDC scheme for network-coding-based distributed storage systems. Unlike previous work on RDC, which focused on minimizing the costs of the prevention phase, we take a holistic look and initiate the investigation of RDC schemes for distributed systems that rely on network coding, minimizing the combined costs of both the prevention and repair phases. RDC-NC mitigates new attacks that stem from the underlying principle of network coding, and it preserves, in an adversarial setting, the minimal repair communication overhead that network coding achieves in a benign setting.

Gabriel Weaver (S&P); Dartmouth

Title: XUTools: Next-Generation UNIX Tools to Process Real-World Security Artifacts

Abstract: Security policies are hard to manage. UNIX tools are traditionally useful here, but they do not operate on many of the languages in which security policies are expressed and implemented: current UNIX text-processing tools operate only on regular languages, which is not general enough for modern contexts. We designed and implemented eXtended UNIX text-processing tools (xutools) that enable practitioners to extract (xugrep), count (xuwc), and compare (xudiff) texts in terms of language-specific structures that correspond to productions in context-free grammars. Our hypothesis is that xutools will enable new computational experiments on multi-versioned structured texts. Although xutools are applicable to a variety of domains, in this talk we focus on how they address several real-world security problems that we encountered during our fieldwork.
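To illustrate the kind of structure-aware matching xutools provide (a hand-rolled toy, not the real grammar-driven implementation), the sketch below returns whole interface blocks of a Cisco-style config rather than single matching lines, as plain grep would:

```python
import re

def xugrep_interfaces(config_text, pattern):
    """Return whole `interface` blocks whose text matches `pattern`.
    Real xutools parse context-free grammars; this hand-rolls one
    production (header line plus indented body) for illustration."""
    blocks, current = [], []
    def flush():
        if current and re.search(pattern, "\n".join(current)):
            blocks.append("\n".join(current))
        current.clear()
    for line in config_text.splitlines():
        if re.match(r"interface\s", line):
            flush()
            current.append(line)
        elif current and line.startswith(" "):
            current.append(line)       # indented continuation of the block
        else:
            flush()
    flush()
    return blocks

cfg = ("interface Vlan1\n ip address 10.0.0.1 255.0.0.0\n"
       "interface Vlan2\n shutdown\n")
print(xugrep_interfaces(cfg, r"shutdown"))   # the whole Vlan2 block
```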

Zhiming Shen (Storage); North Carolina State University

Title: Optimizing Virtual Machine Image I/O for the Cloud

Abstract: An Infrastructure as a Service (IaaS) cloud typically maintains a large virtual machine (VM) image repository. Images are either copied beforehand or streamed on demand to the compute nodes to boot new VM instances. I/O contention in accessing the image repository is attracting more and more attention, as it can affect application performance, cloud user experience, and infrastructure cost. In this work, we propose solutions for optimizing VM image I/O so as to improve the overall performance of cloud applications. First, our VM image access deduplication mechanism consolidates accesses to the image repository for blocks with identical contents. Second, we define a simple rule that partitions the image page cache functionally, so that the host and guest page caches hold more public and private data, respectively. Finally, we propose an adaptive page cache management policy to improve overall memory utilization.
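A minimal sketch of the access-deduplication idea, assuming the repository keeps a precomputed block-digest map (all names and interfaces here are illustrative, not the system's actual design):

```python
class DedupImageReader:
    """Toy access deduplication: a per-image digest map (computed
    offline) lets reads of identical-content blocks hit a shared cache
    instead of the image repository."""
    def __init__(self, digest_map, fetch_block):
        self.digest_map = digest_map    # (image, offset) -> content digest
        self.fetch_block = fetch_block  # (image, offset) -> bytes (slow path)
        self.cache = {}                 # digest -> block contents

    def read(self, image, offset):
        digest = self.digest_map[(image, offset)]
        if digest not in self.cache:    # only the first reader pays the I/O
            self.cache[digest] = self.fetch_block(image, offset)
        return self.cache[digest]

fetched = []
reader = DedupImageReader(
    digest_map={("img1", 0): "d1", ("img2", 0): "d1"},   # identical blocks
    fetch_block=lambda img, off: fetched.append((img, off)) or b"data")
reader.read("img1", 0); reader.read("img2", 0)
print(fetched)    # [('img1', 0)] -- the second read was deduplicated
```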

Yexi Jiang (Florida International University) and Yang Zhou (Georgia Institute of Technology) (Services Computing)

Title: Cloud Services Marketplace

Abstract: Cloud Services Marketplace (CSM) is an IBM exploratory project aiming to be "an AppStore for Services". The core of CSM is a platform that interfaces with both customers and service providers. Traditionally, a customer needs to contact a human service agent to purchase services; in the era of the AppStore, one-click checkout, and pay-as-you-go cloud services, customers expect that services, too, can be purchased online efficiently and conveniently. However, due to the complex nature of services, simple keyword search does not work well, so a new marketplace interface is needed. In CSM, exploring and purchasing services are conducted through natural-language conversation: the customer states requirements in natural language and interacts with the system through a series of questions and answers. Based on the customer's input, the system gradually understands the customer's needs, then locates and configures service products for the customer. Internally, CSM is built around a Services Knowledge Base (SKB) and utilizes semantic web technologies. We will discuss how CSM utilizes offline RDF mining to build a ranking system for better service selection.
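Since the abstract mentions an RDF-backed knowledge base, here is a hypothetical illustration of ranking services by matched capabilities with a SPARQL query over the rdflib Python library; the ex: namespace, the capability model, and the query are invented, not CSM's actual schema:

```python
import rdflib

# Toy SKB fragment: services linked to capabilities, then a SPARQL
# query ranks services by how many requested capabilities they match.
g = rdflib.Graph()
ttl = """
@prefix ex: <http://example.org/skb#> .
ex:backup1 a ex:Service ; ex:capability ex:Encryption, ex:Dedup .
ex:backup2 a ex:Service ; ex:capability ex:Dedup .
"""
g.parse(data=ttl, format="turtle")

q = """
PREFIX ex: <http://example.org/skb#>
SELECT ?svc (COUNT(?cap) AS ?score) WHERE {
  ?svc a ex:Service ; ex:capability ?cap .
  FILTER (?cap IN (ex:Encryption, ex:Dedup))
} GROUP BY ?svc ORDER BY DESC(?score)
"""
for row in g.query(q):
    print(row.svc, row.score)   # backup1 outranks backup2
```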

Yuan Luo (Web); Indiana University

Title: A Hierarchical MapReduce Framework

Abstract: MapReduce is a programming model well suited to processing large datasets with high-throughput parallelism on a large number of compute resources. While it has proven useful for data-intensive, high-throughput applications, the conventional MapReduce model limits itself to scheduling jobs within a single cluster. As job sizes grow, single-cluster solutions become increasingly inadequate. Additionally, the input dataset can be very large and widely distributed across multiple clusters, and feeding large datasets repeatedly to remote computing resources becomes the bottleneck. When mapping such data-intensive tasks to compute resources, scheduling algorithms need to determine whether to bring data to computation or computation to data. We present a Hierarchical MapReduce framework that gathers computation resources from different clusters and runs MapReduce jobs across them. Applications implemented in this framework adopt the Map-Reduce-GlobalReduce model, where computations are expressed as three functions: Map, Reduce, and GlobalReduce. Two scheduling algorithms are introduced: Compute Capacity Aware Scheduling for compute-intensive jobs and Data Location Aware Scheduling for data-intensive jobs. Experimental evaluations using a molecule binding prediction tool, AutoDock, and grep demonstrate promising results for our framework.
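The Map-Reduce-GlobalReduce model is easy to sketch in miniature; the version below is an in-process toy (the real framework distributes the work and schedules across clusters):

```python
from collections import defaultdict

def local_mapreduce(records, map_fn, reduce_fn):
    """One cluster's Map and local Reduce phases, in-process."""
    groups = defaultdict(list)
    for rec in records:
        for key, val in map_fn(rec):
            groups[key].append(val)
    return {k: reduce_fn(k, vs) for k, vs in groups.items()}

def hierarchical_mapreduce(partitions, map_fn, reduce_fn, global_reduce_fn):
    """Each cluster reduces locally; GlobalReduce merges the partials."""
    partials = [local_mapreduce(p, map_fn, reduce_fn) for p in partitions]
    merged = defaultdict(list)
    for partial in partials:
        for k, v in partial.items():
            merged[k].append(v)
    return {k: global_reduce_fn(k, vs) for k, vs in merged.items()}

# word count across two "clusters"
wc = hierarchical_mapreduce(
    [["a b a"], ["b b"]],
    map_fn=lambda line: [(w, 1) for w in line.split()],
    reduce_fn=lambda k, vs: sum(vs),
    global_reduce_fn=lambda k, vs: sum(vs))
print(wc)   # {'a': 2, 'b': 3}
```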

Reynold S. Xin (DM, Web); University of California, Berkeley

Title: Shark - Hive on Spark

Abstract: In this talk, I will introduce Spark and Shark, two core components of the Berkeley Data Analytics System. Spark is a new cluster computing system that aims to make data analytics fast, both fast to run and fast to write, while staying highly compatible with Hadoop. Shark is a large-scale data warehouse system for Spark designed to be compatible with Apache Hive. It can answer HiveQL queries up to 30 times faster than Hive without modification to the existing data or queries. Shark supports Hive's query language, metastore, serialization formats, and user-defined functions. Shark recently won the Best Demo Award at SIGMOD 2012.

Hongbo Liu (Mobile); Stevens Institute of Technology

Title: Collaborative Secret Key Extraction Leveraging Received Signal Strength for Mobile Cloud Computing

Abstract: Cloud computing gives users access to many powerful computing resources. Mobile cloud computing is an emerging cloud service model that follows the trend of extending the cloud to the edge of networks, encompassing numerous mobile devices closely associated with their users. Because mobile cloud computing relies heavily on wireless communication, securing the wireless connection between users and cloud resources is critical, yet it is a challenging problem: traditional cryptographic methods are not always applicable in dynamic mobile wireless environments. Using physical-layer information of the radio channel to generate secret keys among wireless devices has been proposed as an alternative, and Received Signal Strength (RSS) based secret key extraction has gained much attention because RSS readings are readily available in wireless infrastructures. However, the problem of using RSS to generate keys among multiple devices to ensure secure group communication remains open. In this work, we propose a framework for collaborative key generation among a group of wireless devices leveraging RSS. The framework includes a secret key extraction scheme that exploits the trend exhibited in RSS resulting from shadow fading, which is robust against outsider adversaries performing stalking attacks. To deal with mobile devices not within each other's communication range, we employ relay nodes to achieve reliable key extraction. To enable secure group communication, two protocols, star-based and chain-based, are developed in our framework, exploiting RSS from multiple devices to perform group key generation collaboratively. Our experiments in both outdoor and indoor environments confirm the feasibility of using RSS for group key generation among multiple wireless devices under various mobile scenarios. The results also demonstrate that our collaborative key extraction scheme achieves a lower bit mismatch rate than existing work while maintaining a comparable bit generation rate.
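To make RSS-based extraction concrete, here is a toy level-crossing quantizer in the spirit the abstract describes; the thresholds, the parameter alpha, and the index-reconciliation step are illustrative simplifications, not the scheme from the talk:

```python
import statistics

def rss_to_bits(rss, alpha=0.5):
    """Toy quantizer: readings well above the mean become 1s, well
    below become 0s, and in-between samples are dropped (both parties
    drop the same indices after exchanging `kept`)."""
    mu, sd = statistics.mean(rss), statistics.stdev(rss)
    hi, lo = mu + alpha * sd, mu - alpha * sd
    bits, kept = [], []
    for i, r in enumerate(rss):
        if r > hi:
            bits.append(1); kept.append(i)
        elif r < lo:
            bits.append(0); kept.append(i)
    return bits, kept    # `kept` indices are reconciled with the peer

alice = [-48, -60, -47, -71, -52, -69, -45, -70]   # dBm, illustrative
print(rss_to_bits(alice))
```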

Hussam Abu-Libdeh (OS); Cornell University

Title: Building scalable strongly consistent services using Elastic Replication (Winner, Best Talk Award)

Abstract: Most of the scalable and high-performance services used in datacenters today provide relaxed consistency guarantees in order to achieve good responsiveness. One reason for this is that it is believed that expensive majority-based consensus protocols are needed in order to provide strong consistency in asynchronous and partially synchronous environments such as a datacenter or the Internet. In this talk, I will briefly describe our research into building a new lightweight replication protocol that does not use majority voting and yet provides strong consistency in the presence of crash faults and imperfect failure detectors.

 

IBM Speakers

 

Session 1 (C&N + DFTC): Murtaza Zafer

Title: Minimum congestion mapping in a cloud

Abstract: We study a basic resource allocation problem that arises in cloud computing environments. The physical network of the cloud is represented as a graph, with vertices denoting servers and edges corresponding to communication links. A workload is a set of processes with processing and mutual communication requirements. Workloads arrive and depart over time, and the resource allocator must map each workload upon arrival onto the physical network. The objective is to minimize congestion, defined as the maximum per-node or per-link congestion in the physical network. We present approximation algorithms for specific classes of the static single-workload mapping problem and then use them to obtain competitive algorithms for the online case of workload arrivals and departures.

This is joint work with N. Bansal, K. Lee and V. Nagarajan.
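For concreteness, the congestion objective can be written down directly; the sketch below evaluates a given mapping rather than computing the approximation algorithms from the talk, and all data structures plus the route function are illustrative:

```python
def congestion(mapping, proc_load, comm_load, node_cap, edge_cap, route):
    """Maximum per-node or per-link utilization induced by `mapping`
    (process -> host). route(h1, h2) yields the physical links carrying
    traffic between two hosts."""
    node_use, edge_use = {}, {}
    for proc, host in mapping.items():
        node_use[host] = node_use.get(host, 0) + proc_load[proc]
    for (u, v), demand in comm_load.items():
        for e in route(mapping[u], mapping[v]):
            edge_use[e] = edge_use.get(e, 0) + demand
    return max([node_use[h] / node_cap[h] for h in node_use] +
               [edge_use[e] / edge_cap[e] for e in edge_use])

# two processes on two servers joined by one link
print(congestion(mapping={"p": "s1", "q": "s2"},
                 proc_load={"p": 4, "q": 2}, comm_load={("p", "q"): 3},
                 node_cap={"s1": 8, "s2": 8}, edge_cap={"l1": 10},
                 route=lambda a, b: ["l1"] if a != b else []))   # 0.5
```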

Session 2 (Mobile + PMA): Jian Tan

Title: Delay Tails in MapReduce Scheduling

Abstract: MapReduce/Hadoop production clusters exhibit heavy-tailed job processing times. These phenomena result from the workload characteristics and the adopted scheduling algorithms. Analytically understanding the delays under different MapReduce schedulers can facilitate the design and deployment of large Hadoop clusters. The map and reduce tasks of a MapReduce job differ fundamentally yet depend tightly on each other, which complicates the analysis. This dependence also leads to an interesting starvation problem with the widely used Fair Scheduler, due to its greedy approach to launching reducers. To address this issue, we design and implement Coupling Scheduler, which launches reducers gradually by coupling the progress of map and reduce tasks. Real experiments demonstrate improvements in job response times of up to an order of magnitude.

Based on extensive measurements and source code investigation, we propose analytical models for the default FIFO scheduler, Fair Scheduler, and our Coupling Scheduler. For a class of heavy-tailed map service time distributions, i.e., regularly varying of index -a, we derive the distribution tail of the job processing delay under each of the three schedulers. The default FIFO scheduler causes the delay to be regularly varying of index -a+1. Interestingly, we discover a criticality phenomenon for Fair Scheduler: the delay under it can change from regularly varying of index -a to index -a+1, depending on the maximum number of reduce tasks of a job; other more subtle behaviors exist as well. In contrast, the job processing delay distribution under Coupling Scheduler can be one order lower than under Fair Scheduler in some conditions, implying better performance.
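A minimal sketch of the coupling idea, assuming a launch policy that ties the reducer fraction to map progress (the real scheduler's policy is more nuanced than this proportional rule):

```python
def reducers_to_launch(map_done, map_total, reduce_total, launched):
    """Coupling-style heuristic: keep the fraction of launched reducers
    no higher than the fraction of finished map tasks, instead of
    greedily grabbing all reduce slots up front."""
    target = int(reduce_total * map_done / map_total)
    return max(0, target - launched)

# with 25% of maps done, launch 25% of the job's reducers
print(reducers_to_launch(map_done=25, map_total=100,
                         reduce_total=8, launched=0))   # -> 2
```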

Session 3 (S&P + OS): Shai Halevi

Title: Proofs of ownership in remote storage systems

Abstract: Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file, so the server lets the attacker download the entire file. (In parallel to our work, a subset of these attacks was recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs of ownership (PoWs), which let a client efficiently prove to a server that the client holds a file, rather than just some short information about it. We formalize the concept of proof of ownership under rigorous security definitions and the rigorous efficiency requirements of petabyte-scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme, and our performance measurements indicate that it incurs only a small overhead compared to naive client-side deduplication.
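As a rough illustration of the Merkle-tree direction mentioned at the end of the abstract (not the paper's specific encodings or security-hardened construction), a client can answer a random leaf challenge with a sibling path that the server verifies against a stored root:

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def merkle_levels(blocks):
    """Build a Merkle tree bottom-up; returns all levels, root last."""
    level = [H(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]        # duplicate an odd tail
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(blocks, i):
    """Client side: sibling path for challenged leaf i."""
    levels, path = merkle_levels(blocks), []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((i % 2, level[i ^ 1]))     # (am I right child?, sibling)
        i //= 2
    return path

def verify(root, leaf_block, path):
    """Server side: recompute the root from the challenged leaf."""
    h = H(leaf_block)
    for is_right, sib in path:
        h = H(sib + h) if is_right else H(h + sib)
    return h == root

blocks = [b"block%d" % i for i in range(5)]
root = merkle_levels(blocks)[-1][0]            # server stores only this
print(verify(root, blocks[3], prove(blocks, 3)))   # True
```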

Session 4 (DM + SS): Luis Lastras

Title: New memory technologies and the Cloud: from a single memory cell to distributed storage

Abstract: The technologies of choice for main memory and persistent storage in data centers have traditionally been DRAM, hard disks, and tape, and more recently flash. In recent years, the memory industry has been studying new memory technologies, such as phase change memory, magnetic RAM, and resistive RAM, which can help either continue the success of DRAM and flash or fill voids in performance and density that currently exist. These new memory technologies come with a surprisingly large number of design possibilities; in many of them, the cost of writing, the information density, the retention of the data, and the life of the memory can be traded off in many potentially useful ways.

In this talk, we will briefly review these developments and describe some of the work our group has done to understand and exploit these technologies, ranging from the basic information theory of a rewritable memory cell and the problem of wear leveling, to the problem of structuring systems that exploit these technologies. We will also touch on the problem of distributed storage in the cloud, in particular our group's past work on efficient data encodings in such settings.
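As one concrete, published example of the wear-leveling problem mentioned above (the Start-Gap scheme of Qureshi et al., MICRO 2009, not necessarily this group's own approach), a spare line rotated through the address space spreads writes without a full remap table; a sketch of the mapping, with data copying on each gap move omitted:

```python
class StartGap:
    """Start-Gap-style wear leveling: n logical lines live in n+1
    physical lines, and a 'gap' line slowly rotates so that writes
    spread over all physical lines."""
    def __init__(self, n_lines, rotate_every=100):
        self.n, self.rotate_every = n_lines, rotate_every
        self.start, self.gap, self.writes = 0, n_lines, 0

    def physical(self, logical):
        p = (logical + self.start) % self.n
        return p + 1 if p >= self.gap else p   # skip over the gap line

    def on_write(self, logical):
        self.writes += 1
        if self.writes % self.rotate_every == 0:
            self.gap -= 1                      # move the gap one line down
            if self.gap < 0:                   # full rotation completed:
                self.gap = self.n              # reset gap and shift the
                self.start = (self.start + 1) % self.n   # mapping by one
        return self.physical(logical)

wl = StartGap(n_lines=4, rotate_every=1)
print([wl.on_write(0) for _ in range(6)])   # logical 0 drifts across lines
```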

Session 5 (Services + Web): Tao Tao

Title: IBM SmartCloud Enterprise Plus

Abstract: The IBM SmartCloud has two implementation options: Enterprise and Enterprise Plus. Enterprise (SCE) expands on our existing Development and Test Cloud, allowing customers to extend internal development and test efforts, reducing application deployment tasks from days to minutes via automation and rapid provisioning, with over 30% cost reduction versus traditional application environments. Enterprise Plus (SCE+) complements and expands on the value of Enterprise, offering brand-new capabilities that provide a core set of multi-tenant services to manage virtual server, storage, network, and security infrastructure components, including managed operational production services. SCE+ reached general availability on March 20th, with global rollout planned throughout 2012. Based upon enterprise clients' requirements for managed private clouds, SCE+ provides unique capabilities in security, reliability, workload infrastructure choices, and self-service control of entitling and managing virtualized computing resources. SCE+ enables IBM to take a strategic lead over our cloud competition and to cost-competitively cloudify IBM service management processes for its SO and ITS clients. Research has been playing a critical role in creating SCE+ since 2Q2010, and is now leading the development of several key SCE+ components with innovative technologies in the areas of the self-service portal, ITIL-aligned IT service management processes, automated compliance-checklist generation for server creation and removal, policy-based patch management, integrated server provisioning and quality management, and end-to-end integration of business and operations support services.