Cornell Systems Lunch
CS 7490 Spring 2017
The Systems Lunch is a seminar for discussing recent, interesting papers in the systems area, broadly defined to span operating systems, distributed systems, networking, architecture, databases, and programming languages. The goal is to foster technical discussions among the Cornell systems research community. We meet once a week on Fridays at noon in Gates 114.
The systems lunch is open to all Cornell Ph.D. students interested in systems. First-year graduate students are especially welcome. Non-Ph.D. students must obtain permission from the instructor. Student participants are expected to sign up for CS 7490, Systems Research Seminar, for one credit.
To join the systems lunch mailing list please send an empty message to firstname.lastname@example.org with the subject line "join". More detailed instructions can be found here.
Links to papers and abstracts below are unlikely to work outside the Cornell CS firewall. If you have trouble viewing them, this is the likely cause.
|January 27||Building Resilient Systems with Secure End-to-End Data Provenance
Decision makers need timely and accurate data to make critical decisions. In many cases, that data is presented without evidence that it is trustworthy, leaving decision makers no choice but to blindly trust that the data is correct. Data provenance is a vital technology that provides decision makers with the evidence they need to make decisions based on good data, by tracking the history of ownership and processing of data as it moves through a system. In this talk, we describe the challenges of building resilient systems that leverage data provenance to protect data, and the security guarantees necessary for trustworthy whole-system provenance. We detail our experiences with integrating data provenance into a prototype system and describe our open-source Linux Provenance Modules framework for capturing trustworthy whole-system data provenance.
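The core idea above is that provenance records the history of ownership and processing of data, and that this history must itself be trustworthy. As a minimal illustration of the tamper-evidence aspect (not the speaker's Linux Provenance Modules framework, which captures provenance in the kernel), here is a toy hash-chained provenance log in Python; all names and fields are hypothetical:

```python
import hashlib
import json

class ProvenanceLog:
    """Toy hash-chained provenance log: each record notes an action on a
    data object and commits to the hash of the previous record, so any
    later tampering with history breaks the chain and is detectable."""

    GENESIS = "0" * 64

    def __init__(self):
        self.records = []
        self.prev_hash = self.GENESIS

    def record(self, actor, action, data_id):
        """Append a provenance record linked to the previous one."""
        entry = {"actor": actor, "action": action, "data": data_id,
                 "prev": self.prev_hash}
        self.prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.records.append(entry)
        return self.prev_hash

    def verify(self):
        """Recompute the chain; return False if any record was altered."""
        prev = self.GENESIS
        for entry in self.records:
            if entry["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return True
```

A decision maker can then check `verify()` before trusting the recorded lineage of a data item.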
|Thomas Moyer (MIT Lincoln Lab)|
|February 3||Lightweight Authentication of Freshness in Outsourced Key-Value Stores
Yuzhe Tang, Ting Wang, Ling Liu, Xin Hu, and Jiyong Jang
Annual Computer Security Applications Conference (ACSAC) 2014
|Yuzhe (Richard) Tang|
|February 10||Light-Weight Contexts: An OS Abstraction for Safety and Performance
James Litton (UMD and MPI-SWS); Anjo Vahldiek-Oberwagner, Eslam Elnikety, and Deepak Garg (MPI-SWS); Bobby Bhattacharjee (UMD); Peter Druschel (MPI-SWS)
|February 17||HIL: Designing an Exokernel for the Data Center
Jason Hennessey, Sahil Tikale, Ata Turk, Emine Ugur Kaynar (Boston University); Chris Hill (MIT); Peter Desnoyers (Northeastern University); Orran Krieger (Boston University)
|February 24||Incremental Consistency Guarantees for Replicated Objects
Rachid Guerraoui, Matej Pavlovic, and Dragos-Adrian Seredinschi (EPFL)
|March 3||Towards Automatic Generation of Security-Centric Descriptions for Android Apps
Mu Zhang (NEC Labs), Yue Duan, Qian Feng, Heng Yin (Syracuse University)
|Mu Zhang (NEC Labs)|
|March 10||Queues Don’t Matter When You Can JUMP Them!
Matthew P. Grosvenor, Malte Schwarzkopf, Ionel Gog, Robert N. M. Watson, Andrew W. Moore, Steven Hand, and Jon Crowcroft (University of Cambridge)
|March 17||Speaker cancelled due to weather conditions, no meeting.|
|March 24||Slicer: Auto-Sharding for Datacenter Applications
Abstract: Sharding -- stateful distributed scaling -- is a fundamental building block of large-scale applications, but most have their own custom, ad-hoc implementations. Our goal is to make sharding as easily reusable as a filesystem or lock manager. Slicer is Google’s general purpose sharding service. It monitors signals such as load hotspots and server health to dynamically shard work over a set of servers. Its goals are to maintain high availability and reduce load imbalance while minimizing churn from moved work.
In this paper, we describe Slicer’s design and implementation. Slicer has the consistency and global optimization of a centralized sharder while approaching the high availability, scalability, and low latency of systems that make local decisions. It achieves this by separating concerns: a reliable data plane forwards requests, and a smart control plane makes load-balancing decisions off the critical path. Slicer’s small but powerful API has proven useful and easy to adopt in dozens of Google applications. It is used to allocate resources for web service front-ends, coalesce writes to increase storage bandwidth, and increase the efficiency of a web cache. It currently handles 2-7M req/s of production traffic. The median production Slicer-managed workload uses 63% fewer resources than it would with static sharding.
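To make the key-to-server mapping concrete: Slicer assigns contiguous slices of a hashed keyspace to servers and rebalances them dynamically. The sketch below shows only the static version of that mapping (hash the key, find the slice that contains it); the class and its interface are hypothetical, not Slicer's API:

```python
import bisect
import hashlib

class StaticSharder:
    """Toy range-sharder: hash each key into a 63-bit keyspace, then split
    the keyspace into contiguous slices, one per server. Slicer itself
    moves slice boundaries in response to load; this static version only
    illustrates the key -> slice -> server mapping."""

    KEYSPACE = 2 ** 63

    def __init__(self, servers):
        self.servers = list(servers)
        n = len(self.servers)
        # Exclusive upper bound of each server's slice of the keyspace.
        self.bounds = [self.KEYSPACE * (i + 1) // n for i in range(n)]

    def _hash(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.KEYSPACE

    def server_for(self, key):
        """Return the server whose slice contains hash(key)."""
        return self.servers[bisect.bisect_right(self.bounds, self._hash(key))]
```

Because requests are routed by slice rather than by individual key, the control plane can rebalance load by moving slice boundaries without touching every key.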
Bio: Jon Howell is a distributed systems engineer at Google working on Slicer. Previously, at Microsoft Research, he contributed to the Ironfleet mechanically verified distributed system, the Pinocchio remote verification system, the Embassies secure application architecture, the record-breaking Flat Datacenter Storage system, and the FARSITE peer-to-peer file system.
|Jon Howell (Google)|
|March 31||Rethinking Optical Switching in the Data Center
Abstract: The immense bandwidth requirements of next-generation data center networks provide an opportunity for optical circuit switching to deliver substantial cost and energy advantages over state-of-the-art electronic packet switching. However, existing optical switching proposals face deployment challenges in the data center environment due to limited scalability at the physical layer and/or significant control complexity at the network layer. Our work attempts to overcome these limitations by taking a holistic design approach where we jointly redesign the network architecture and optical switching hardware subject to data center-specific constraints. The end result is a hybrid optical-electronic network solution that delivers higher throughput using less power than a comparably-priced electronic network while remaining practical to deploy and operate at scale.
Bio: William (Max) Mellette is a postdoctoral researcher in Computer Science at UC San Diego working with George Porter on data center network architecture. He received his PhD from UC San Diego in Electrical and Computer Engineering with a concentration in Photonics, where he designed and built optical switches targeting data center networks. His research interests span all aspects of networked systems, with a focus on the intersection of hardware and network design.
|Max Mellette (UCSD)|
|April 7||Spring Break, no meeting.|
|April 14||ACSU Luncheon, no meeting.|
|April 21||Consolidating Concurrency Control and Consensus for Commits under Conflicts
Shuai Mu and Lamont Nelson, New York University; Wyatt Lloyd, University of Southern California; Jinyang Li, New York University
|April 28||In Search of Smarter Hardware Prediction Mechanisms
Abstract: In today's data-driven world, memory system performance is often critical to overall system performance. In this talk, we present the Hawkeye Cache, a novel method of solving the age-old problem of cache replacement. We then step back and discuss our broader vision, explaining how machine learning algorithms can help us improve the Hawkeye Cache as well as other memory system optimizations.
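The baseline against which any replacement policy is judged is Belady's MIN algorithm, which is optimal but requires knowing the future access trace. As background for the talk (this is the classic oracle, not the Hawkeye predictor itself), a minimal simulation of Belady's policy:

```python
def belady_evict(cache, future):
    """Belady's MIN rule: evict the cached line whose next use lies
    furthest in the future (or that is never used again)."""
    def next_use(line):
        try:
            return future.index(line)
        except ValueError:
            return float("inf")  # never reused: the ideal victim
    return max(cache, key=next_use)

def simulate_belady(trace, capacity):
    """Run Belady's optimal replacement over an access trace; return hits."""
    cache, hits = set(), 0
    for i, line in enumerate(trace):
        if line in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.remove(belady_evict(cache, trace[i + 1:]))
            cache.add(line)
    return hits
```

On the cyclic trace `a b c a b c` with a two-entry cache, this policy scores hits where LRU scores none, which is why it serves as the upper bound for learned policies.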
Bio: Calvin Lin is a University Distinguished Teaching Professor of Computer Science at The University of Texas at Austin. His research interests are in compilers, security, and computer architecture. He is also the Director of the department's Turing Scholars Honors Program and has led an NSF-funded effort to develop and disseminate a new high school CS Principles course that uses a project-based learning pedagogy. When he is not working, he can be found chasing his two young sons or coaching the UT men's ultimate frisbee team.
|Calvin Lin (UT Austin)|
|May 5||Sealed-Glass Proofs: Using Transparent Enclaves to Prove and Sell Knowledge
Florian Tramèr, Fan Zhang, Huang Lin, Jean-Pierre Hubaux, Ari Juels, and Elaine Shi
European Symposium on Security and Privacy, 2017 (EuroS&P 2017)