09:00 - 10:45
The Transactional Manifesto
Maurice Herlihy, Microsoft Research and Brown University
Computer architecture is about to undergo, if not another
revolution, then a vigorous shaking-up. The major chip manufacturers
have, for the time being, simply given up trying to make processors
run faster. Instead, they have recently started shipping
"multicore" architectures, in which multiple processors (cores)
communicate directly through shared hardware caches, providing
increased concurrency instead of increased clock speed.
As a
result, system designers and software engineers can no longer rely
on increasing clock speed to hide software bloat. Instead, they must
somehow learn to make effective use of increasing parallelism. This
adaptation will not be easy. Conventional synchronization techniques
based on locks and conditions are unlikely to be effective in such a
demanding environment. Coarse-grained locks, which protect
relatively large amounts of data, do not scale, and fine-grained
locks introduce substantial software engineering
problems.
Transactional memory is a computational model in
which threads synchronize by optimistic, lock-free transactions.
This synchronization model promises to alleviate many (perhaps not
all) of the problems associated with locking, and there is a growing
community of researchers working on both software and hardware
support for this approach. This talk will survey the area, with a
focus on open research problems.
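To make the programming model concrete, the following is a minimal sketch of an optimistic transaction in Python. It is illustrative only: the names (TVar, atomically) and the version-validation commit scheme are assumptions of this sketch, not Herlihy's design, and a real transactional memory must also handle contention management and transactions that observe inconsistent state mid-run.

import threading

class TVar:
    """A transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()  # serialises commits; reads take no locks

def atomically(tx):
    """Run tx(read, write) as an optimistic transaction: reads record the
    version they saw, commit re-validates the read set, and the whole
    transaction is re-executed if a concurrent commit interfered."""
    while True:
        read_set, write_set = {}, {}

        def read(tvar):
            if tvar in write_set:                    # read-your-own-writes
                return write_set[tvar]
            read_set.setdefault(tvar, tvar.version)  # remember version seen
            return tvar.value

        def write(tvar, value):
            write_set[tvar] = value

        result = tx(read, write)

        with _commit_lock:
            if all(t.version == v for t, v in read_set.items()):
                for t, value in write_set.items():
                    t.value = value
                    t.version += 1
                return result
        # validation failed: another transaction committed first; retry

For example, a transfer between two accounts then needs no explicit locks:

a, b = TVar(100), TVar(0)

def transfer(read, write):
    write(a, read(a) - 10)
    write(b, read(b) + 10)

atomically(transfer)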
Flexible approaches to consistency of replicated data
Marc Shapiro, INRIA Rocquencourt and LIP6
Replication and
consistency are essential features of any distributed system and
have been studied extensively. Protocols differ greatly (despite all
claiming to maintain consistency) but a systematic comparison is
lacking. In practice, there seems to be no protocol that is both
decentralised and able to maintain good semantic properties. To fill
this gap, we developed the Action-Constraint Framework to capture both
the semantics of replicated data and the behaviour of a replication
algorithm. It enables us to decompose the problem of ensuring
consistency into simpler, easily understandable sub-problems. As the
sub-problems are largely orthogonal, sub-solutions can be mixed and
matched. Our unified framework enables a systematic exploration of
both pessimistic and optimistic protocols, both full and partial
replication, strong and weak consistency, etc. I will present some
preliminary results, including a serialisation protocol with good
decentralisation properties, and simulations comparing a number of
representative sub-solution combinations. Finally, I will present
the Joyce implementation of the framework, supporting cooperative
applications. Joint work with Nishith Krishna and James O'Brien,
performed at Microsoft Research Cambridge.
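To convey the flavour of the framework, here is a small hypothetical rendering in Python. The two constraint kinds shown (an ordering constraint and a dependence constraint) are illustrative stand-ins chosen for this sketch; the class names and the soundness check are assumptions, not the published formalism.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str  # e.g. "credit(acct, 10)"

@dataclass(frozen=True)
class NotAfter:
    """Ordering: if both actions execute, `first` must not run after `second`."""
    first: Action
    second: Action

@dataclass(frozen=True)
class Enables:
    """Dependence: `dependent` may execute only if `prereq` does."""
    prereq: Action
    dependent: Action

def is_sound(schedule, constraints):
    """Check that an ordered schedule of actions satisfies every
    constraint; actions absent from the schedule count as aborted."""
    pos = {a: i for i, a in enumerate(schedule)}
    for c in constraints:
        if isinstance(c, NotAfter):
            if c.first in pos and c.second in pos and pos[c.first] > pos[c.second]:
                return False
        elif isinstance(c, Enables):
            if c.dependent in pos and c.prereq not in pos:
                return False
    return True

credit = Action("credit(acct, 10)")
debit = Action("debit(acct, 10)")
cs = [Enables(prereq=credit, dependent=debit), NotAfter(first=credit, second=debit)]
assert is_sound([credit, debit], cs)
assert not is_sound([debit], cs)  # a debit without the credit that enables it

Because each constraint kind is checked independently, sub-solutions enforcing different constraint kinds can be combined freely, which is the orthogonality the framework exploits.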
Directions in System Engineering: combining DSLs, Aspects and Components
Gilles Muller, École des Mines de Nantes
Engineering systems software is widely regarded as one of the most
difficult programming tasks. It raises numerous challenges that have
so far been only partially addressed, among them the evolution of
legacy systems, the safety of systems code, and the high degree of
expertise required to develop low-level services. Overall, this calls
for dedicated methodologies and tools for capturing the specific needs
and properties of systems software.
In this talk, we first review several recent approaches that we have
successfully applied, such as Domain-Specific Languages, Aspects, and
Components. We then describe the synergy between these approaches and
conclude by presenting some of the new challenges they raise.
11:15 - 12:30
Work-in-Progress Session 1

Rethinking OS support for high-speed networking
Herbert Bos

Sprint: adaptive data management for in-memory database clusters
Fernando Pedone
Demand for high performance and availability, combined with plummeting
hardware prices, has led to the widespread emergence of large computing
clusters. Most typically, cluster nodes are interconnected through very
fast network switches and equipped with powerful processors and large
main memories. These hardware trends invalidate many fundamental design
decisions of current systems and require a re-evaluation of data
structures and algorithms, adapted to the new environment. Moreover,
from an operational perspective, configuring applications for such
environments becomes too complex to be done manually. Sprint exploits
the characteristics of modern computing clusters to achieve highly
efficient and available adaptive data management. In this short talk I
will give an overview of some aspects of Sprint's architecture and some
open problems we have identified.
Nizza - Towards small, application-specific Trusted Computing Bases
Hermann Haertig

Automating DBMS Configuration: What if you could ask "what if"?
Dushyanth Narayanan
14:00 - 15:00
Work-in-Progress Session 2

On Detours and Shortcuts to solve distributed systems problems
Paulo Esteves Veríssimo

AutoPatch
Christof Fetzer
The AutoPatch project tries to increase the dependability (i.e.,
security, reliability, and availability) of software by automatically
patching the code. The system identifies certain issues, i.e., design
failures, and generates patches to fix them until manual patches
become available.
Thresher: A Filtering Archive for High-frequency Snapshots
Liuba Shrira
A snapshot system keeps past states of data so that read-only
applications can run against a consistent past state - so-called
back-in-time execution. Decreasing storage costs and new efficient
versioned storage techniques have enabled a new generation of snapshot
systems that can capture virtually unlimited amounts of past states.
Current snapshot systems suffer from the limitation that, while it is
easy to collect many past states, it is difficult for the storage
system to distinguish between important and unimportant states (that
is, to filter). This makes it hard to retain important states over
longer time-scales, or to store them with faster access.
We describe Thresher, the first high-performance non-disruptive
snapshot system that provides the ability to single out important
states, that is, to disentangle important incremental updates from the
others after the fact.
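The retention idea behind filtering can be sketched in a few lines of Python. This toy model is purely illustrative: the names (FilteringArchive, mark_important) are invented here, it copies whole states rather than incremental updates, and Thresher itself implements filtering inside the storage layer without disrupting the snapshot stream.

import itertools
from dataclasses import dataclass

@dataclass
class Snapshot:
    seq: int
    state: dict          # toy model: a full copy of the data
    important: bool = False

class FilteringArchive:
    """Keeps every recent snapshot, but only 'important' ones long term."""
    def __init__(self, window=2):
        self.window = window           # recent snapshots kept unconditionally
        self.snapshots = []
        self._seq = itertools.count()

    def snapshot(self, state, important=False):
        self.snapshots.append(Snapshot(next(self._seq), dict(state), important))
        self._filter()
        return self.snapshots[-1].seq

    def mark_important(self, seq):
        """Promote an already-taken snapshot: filtering after the fact."""
        for s in self.snapshots:
            if s.seq == seq:
                s.important = True

    def _filter(self):
        # Age unimportant snapshots out of long-term storage.
        old, recent = self.snapshots[:-self.window], self.snapshots[-self.window:]
        self.snapshots = [s for s in old if s.important] + recent

db = {"x": 0}
arch = FilteringArchive(window=2)
arch.snapshot(db, important=True)      # e.g. an end-of-day state to keep
for i in range(1, 6):
    db["x"] = i
    arch.snapshot(db)
assert [s.seq for s in arch.snapshots] == [0, 4, 5]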