Tag Archives: scaling

Michael Reiter, University of North Carolina – June 2012

WACCO: A Wide-Area Cluster-Consistent Object Store

The proposed project will construct a system called WACCO, an abbreviation for “Wide-Area Cluster-Consistent Objects”. WACCO manages access to stateful, deterministic objects over a logically tree-based overlay network of proxies that is arranged to respect geography; i.e., neighbors in the tree tend to be geographically close or, more to the point, enjoy low latency between them. Each client is assigned to a nearby proxy through which it accesses objects, and object access is managed through a protocol that offers a novel type of consistency that we dub cluster consistency. Cluster consistency is strong: it ensures sequential consistency, a consistency condition initially conceived for shared-memory systems, and additionally that clusters of concurrent reads see the most recent preceding update to the object on which the reads are performed.

Scalability of services implemented using WACCO is achieved through two strategies. First, WACCO uses the logical tree structure of the overlay to aggregate read demand, permitting the responses to some reads to answer others. As such, under high read concurrency, the vast majority of reads are not propagated to the location of the object; rather, most are paused in the tree awaiting others to complete, from which the return result can be “borrowed.” Second, WACCO employs migration to dynamically change where each object resides. This permits the object to move closer to demand as it fluctuates, e.g., due to diurnal patterns.
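As a rough illustration of the read-aggregation idea (and not WACCO’s actual protocol), the Python sketch below shows how a single proxy might let concurrent reads of the same object wait on, and then “borrow,” one in-flight response; the class and method names are hypothetical.

    # Hypothetical sketch of read aggregation at one proxy: while a read for
    # an object is in flight toward the object's current location, later reads
    # for the same object wait and reuse ("borrow") the returned value.
    import threading

    class Proxy:
        def __init__(self, fetch_from_parent):
            self.fetch_from_parent = fetch_from_parent  # forwards a read up the tree
            self.lock = threading.Lock()
            self.in_flight = {}   # object_id -> threading.Event
            self.results = {}     # object_id -> most recent borrowed value

        def read(self, object_id):
            with self.lock:
                event = self.in_flight.get(object_id)
                if event is None:
                    # First concurrent read: this caller forwards the request.
                    event = threading.Event()
                    self.in_flight[object_id] = event
                    leader = True
                else:
                    leader = False
            if leader:
                value = self.fetch_from_parent(object_id)
                with self.lock:
                    self.results[object_id] = value
                    del self.in_flight[object_id]
                event.set()
                return value
            # Followers wait for the in-flight read and borrow its result.
            event.wait()
            return self.results[object_id]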

WACCO was initially conceived to support global-scale services such as content distribution networks (CDNs), while ensuring much greater responsiveness to data updates than existing designs allow. As such, WACCO’s design places a premium on supporting both frequent updates and widespread concurrent reads on a per-object basis. The proposed work includes the implementation of WACCO and its evaluation in the CDN domain, as well as exploration of its use for applications running across geographically distributed datacenters.

 

Luigi Rizzo, Università di Pisa, Italy – May 2012

High Speed Packet Capture and Storage Systems

This proposal extends the netmap framework, an earlier research effort by the PI that enables lossless packet capture on 10GbE network interfaces using commodity hardware. More specifically, the proposal will extend netmap to leverage hardware acceleration features such as on-NIC timestamps, checksums, and packet classification. It will also study how to leverage multiple cores and how to implement an efficient packet-processing pipeline that performs functions such as filtering, anonymization, and storage.
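As an illustrative sketch only (netmap itself is a C framework, and its real API is not used here), the following Python code shows one way a capture pipeline could be split into filtering, anonymization, and storage stages, each of which could run on its own core; all names and packet fields are hypothetical.

    # Hypothetical three-stage capture pipeline: filter -> anonymize -> store.
    # Each stage runs as its own process and hands packets to the next stage
    # through a queue; a None value signals shutdown.
    import hashlib
    from multiprocessing import Process, Queue

    def filter_stage(packets_in, packets_out, allowed_port):
        for pkt in iter(packets_in.get, None):
            if pkt["dst_port"] == allowed_port:
                packets_out.put(pkt)
        packets_out.put(None)

    def anonymize_stage(packets_in, packets_out):
        for pkt in iter(packets_in.get, None):
            # Replace the source address with a digest so flows remain
            # distinguishable without revealing the original address.
            pkt["src_ip"] = hashlib.sha256(pkt["src_ip"].encode()).hexdigest()[:8]
            packets_out.put(pkt)
        packets_out.put(None)

    def store_stage(packets_in, path):
        with open(path, "w") as f:
            for pkt in iter(packets_in.get, None):
                f.write(repr(pkt) + "\n")

    if __name__ == "__main__":
        q1, q2, q3 = Queue(), Queue(), Queue()
        stages = [
            Process(target=filter_stage, args=(q1, q2, 80)),
            Process(target=anonymize_stage, args=(q2, q3)),
            Process(target=store_stage, args=(q3, "capture.log")),
        ]
        for p in stages:
            p.start()
        q1.put({"src_ip": "10.0.0.1", "dst_port": 80, "payload": b"..."})
        q1.put(None)
        for p in stages:
            p.join()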

Erez Zadok, Stony Brook University – January 2012

Dedup Workload Modeling, Synthetic Datasets, and Scalable Benchmarking

Electronic data volumes keep growing at rapid rates, costing users precious space and increasing total cost of ownership (energy, performance, etc.). Data deduplication is a popular technique to reduce the actual amount of data that has to be retained. Several vendors offer dedup-based products, and many publications are available. Alas, there is a serious lack of comparable results across systems. Often, the problem is a lack of realistic data sets that can be shared without violating privacy; moreover, good data sets can be very large and difficult to share. Many papers publish results using small or non-representative data sets (e.g., successive Linux kernel source tarballs). Lastly, there is no agreement on what constitutes “realistic” data sets.

We propose to develop tools and techniques to produce realistic, scalable, dedupable data sets, taking actual workloads into account. We will begin by analyzing the dedupability properties of several different data sets we have access to; we will develop and release tools that let anyone analyze their own data sets without violating privacy. Next, we will build models that describe the important inherent properties of those data sets. We will then be able to synthesize data that follows these models, generating data sets far larger than the originals while faithfully modeling the original data.
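As a minimal sketch of the kind of privacy-preserving analysis described above (the proposal’s actual tools and models may differ), the Python code below splits files into fixed-size chunks, keeps only chunk hashes, and reports a deduplication ratio; the 4 KB chunk size is an assumption.

    # Minimal dedupability analysis: fixed-size chunking plus chunk hashes,
    # so the content itself never needs to be shared.
    import hashlib
    from collections import Counter

    CHUNK_SIZE = 4096  # bytes; an assumed, commonly used chunk size

    def chunk_hashes(path):
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                yield hashlib.sha256(chunk).hexdigest()

    def dedup_ratio(paths):
        counts = Counter(h for p in paths for h in chunk_hashes(p))
        total = sum(counts.values())   # chunks before deduplication
        unique = len(counts)           # chunks actually stored
        return total / unique if unique else 1.0

    # Example (hypothetical file names):
    # print(dedup_ratio(["disk_image_1.raw", "disk_image_2.raw"]))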

 

Angela Demke Brown, Ashvin Goel, University of Toronto – July 2011

A Policy-based Architecture for Scalable Storage Systems

The complexity of modern storage systems continues to grow, making management of these systems a first-class concern. A storage system today may need to exploit the widely-varying characteristics of heterogeneous storage units to meet the simultaneous demands of many customers with differing requirements. In addition, desirable properties such as cost-effectiveness, scalability, reliability and power-efficiency may conflict with each other. A further challenge arises due to virtualization and other layers of indirection between applications and storage hardware, because application-level optimizations to exploit hardware features may not have the desired effect. Performance may be lost when the underlying physical layout does not match the assumptions made at the higher level. Worse, reliability may be reduced if an underlying deduplication system removes extra copies of data blocks that were deliberately replicated, such as critical file system metadata. Finally, existing management interfaces are not extensible, making it difficult to express novel policies. As a result, significant time and effort is spent designing, customizing, and maintaining storage solutions.

We argue that a scalable storage system must expose a more flexible mechanism for researchers or storage administrators to express the desired properties. We propose a policy-driven architecture that introduces extensibility and dynamism into the control plane of a data center’s storage system. Our proposed system consists of two parts: (1) a domain-specific policy language that allows the construction of sophisticated policies using both static and dynamic properties of the available storage devices and the storage requests; (2) an extension of the storage system’s control plane, capable of interpreting and enforcing these policies by monitoring the stream of requests and the dynamic characteristics of the storage devices. Existing work on policy-based storage management is mainly concerned with storage allocation or configuration, and focuses on static properties of the storage devices (e.g. capacity, cost, throughput, reliability). Our goal is to automatically manage the daily operation of the storage system, adjusting to changes in workload requests, power consumption, and load hotspots according to high-level policies.
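As a loose illustration of the kinds of rules such a policy language might express (the actual domain-specific language is not specified here, so the rule format and field names are hypothetical), the Python sketch below routes a request to a device based on static properties such as media type and dynamic ones such as current utilization.

    # Hypothetical policy rules combining static device properties (media)
    # with dynamic ones (utilization) and request properties (access rate).
    POLICIES = [
        # (name, predicate over (request, device), priority)
        ("hot-data-to-ssd",
         lambda req, dev: req["access_rate"] > 100 and dev["media"] == "ssd", 10),
        ("cold-data-to-hdd",
         lambda req, dev: req["access_rate"] <= 100 and dev["media"] == "hdd", 5),
        ("avoid-overloaded",
         lambda req, dev: dev["utilization"] < 0.9, 1),
    ]

    def place(request, devices):
        """Pick the device matching the highest-priority applicable policy."""
        best, best_prio = None, -1
        for dev in devices:
            for name, pred, prio in POLICIES:
                if pred(request, dev) and prio > best_prio:
                    best, best_prio = dev, prio
        return best

    devices = [
        {"name": "ssd0", "media": "ssd", "utilization": 0.4},
        {"name": "hdd0", "media": "hdd", "utilization": 0.2},
    ]
    print(place({"access_rate": 250}, devices)["name"])  # -> ssd0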