
List of accepted papers
(In order of appearance)
Track A: Algorithms, Automata, Complexity and Games
 Competitive Algorithms for Due Date Scheduling
 Nikhil Bansal, HoLeung Chan and Kirk Pruhs

We consider several online scheduling problems that arise when
customers request make-to-order products from a company. At the time of
the order, the company must quote a due date to the customer. To
satisfy the customer, the company must produce the good by the due date.
The company must have an online algorithm with two components: The first
component sets the due dates, and the second component schedules the
resulting jobs with the goal of meeting the due dates.
The most basic quality of service measure for a job is the quoted
lead time, which is the difference between the due date and the
release time. We first consider the basic problem of minimizing the
average quoted lead time. We show that there is a (1+ε)-speed
O(log k/ε)-competitive algorithm for this
problem (here k is the ratio of the maximum work of a job to the
minimum work of a job), and that this algorithm is essentially optimally
competitive. This result extends to the case that each job has a weight
and the objective is weighted quoted lead time.
We then introduce the following general setting: there is a
nonincreasing profit function p_{i}(t) associated
with each job J_{i}. If the customer for job
J_{i} is quoted a due date of d_{i}, then
the profit obtained from completing this job by its due date is
p_{i}(d_{i}). We consider the objective
of maximizing profits. We show that if the company must finish each job
by its due date, then there is no O(1)-speed polylog-competitive
algorithm. However, if the company can miss the due date of a job, at
the cost of forgoing the profits from that job, then we show that there
is a (1+ε)-speed O(1+1/ε)-competitive algorithm, and that this
algorithm is essentially optimally competitive.
 Mechanism Design for Fractional Scheduling on Unrelated Machines
 George Christodoulou, Elias Koutsoupias and Annamária Kovács

In this paper, we consider the mechanism design version of the
fractional variant of the scheduling problem on unrelated machines. We
give a lower bound of 2–1/n for any fractional truthful
mechanism, while we propose a truthful mechanism that achieves
approximation of 1+(n–1)/2, for n machines. We also focus
on an interesting family of allocation algorithms, the
task-independent algorithms. We give a lower bound of
1+(n–1)/2 that holds for every (not only monotone) allocation
algorithm in this class. Hence, our truthful task-independent
mechanism is the best one can hope for from this family of
algorithms.
 Estimating Sum by Weighted Sampling
 Rajeev Motwani, Rina Panigrahy and Ying Xu

We study the classic problem of estimating the sum of n
variables. The traditional uniform sampling approach requires a linear
number of samples to provide any nontrivial guarantees on the estimated
sum. In this paper we consider various sampling methods besides uniform
sampling, in particular sampling a variable with probability
proportional to its value, referred to as linear weighted
sampling. If only linear weighted sampling is allowed, we show
an algorithm that estimates the sum with Õ(√n) samples, which is
almost optimal in the sense that Ω(√n) samples are necessary
for any reasonable sum estimator. If both uniform sampling and linear
weighted sampling are allowed, we show a sum estimator with
Õ(∛n) samples. More generally, we may allow general
weighted sampling where the probability of sampling a variable is
proportional to any function of its value. We prove a lower bound of
Ω(∛n) samples for any reasonable sum estimator using general
weighted sampling, which implies that our algorithm combining uniform
and linear weighted sampling is an almost optimal sum estimator.
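As a toy illustration of why value-proportional sampling helps (this sketch, with names of our own choosing, is not the paper's Õ(√n)-sample estimator): under linear weighted sampling the expected reciprocal of a sampled value is n/S, so averaging reciprocals yields an estimate of the sum S.

```python
import random

def weighted_sample(xs):
    """Draw one index with probability proportional to its value
    (simulates the linear-weighted-sampling oracle)."""
    return random.choices(range(len(xs)), weights=xs, k=1)[0]

def estimate_sum(xs, samples=1000):
    """Toy estimator: under linear weighted sampling,
    E[1/x_i] = n/S, hence S ≈ n / mean(1/x_i)."""
    n = len(xs)
    inv = [1.0 / xs[weighted_sample(xs)] for _ in range(samples)]
    return n / (sum(inv) / samples)
```

Note this estimator can have high variance on skewed inputs, which is exactly why nontrivial sample-complexity bounds such as those above are needed.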
 Sampling Methods for Shortest Vectors, Closest Vectors and Successive Minima
 Johannes Blömer and Stefanie Naewe

In this paper we introduce a new lattice problem, the subspace
avoiding problem (SAP). We describe a
probabilistic single exponential time algorithm for SAP
for arbitrary ℓ_{p} norms. We also describe polynomial
time reductions for four classical problems from the geometry of
numbers, the shortest vector problem SVP, the
closest vector problem CVP, the successive
minima problem SMP, and the shortest
independent vectors problem SIVP to
SAP, establishing probabilistic single exponential time
algorithms for them. These results generalize and extend previous results
of Ajtai, Kumar and Sivakumar. The results on SMP and
SIVP are new for all norms. The results on
SVP and CVP generalize previous results of
Ajtai et al. for the ℓ_{2} norm to arbitrary
ℓ_{p} norms.
 Low Distortion Spanners
 Seth Pettie

A spanner of an undirected unweighted graph is a subgraph
that approximates the distance metric of the original graph with some
specified accuracy. Specifically, we say H⊆G is an
f-spanner of G if any two vertices u, v at
distance d in G are at distance at most f(d)
in H. There is clearly some tradeoff between the sparsity of
H and the distortion function f, though the nature of this
tradeoff is still poorly understood.
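The definition can be made concrete with a brute-force checker (an illustrative sketch with our own naming): run BFS from every vertex in both graphs and verify that dist_H(u,v) ≤ f(dist_G(u,v)) for all reachable pairs.

```python
from collections import deque

def bfs_dist(adj, src, n):
    """Single-source distances in an unweighted graph; None = unreachable."""
    dist = [None] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_f_spanner(n, edges_g, edges_h, f):
    """Check that H (a subgraph of G) is an f-spanner of G:
    every pair at distance d in G is at distance at most f(d) in H."""
    def mk(edges):
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        return adj
    g, h = mk(edges_g), mk(edges_h)
    for s in range(n):
        dg, dh = bfs_dist(g, s, n), bfs_dist(h, s, n)
        for v in range(n):
            if dg[v] is not None and (dh[v] is None or dh[v] > f(dg[v])):
                return False
    return True
```

For example, removing one edge from a 4-cycle leaves a path that is a (d+2)-spanner but not a distance-preserving subgraph.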
In this paper we present a simple, modular framework for constructing
sparse spanners that is based on interchangeable components called
connection schemes. By assembling connection schemes in
different ways we can recreate the additive 2- and 6-spanners of
Aingworth et al. and Baswana et al. and improve on the
(1+ε,β)-spanners
of Elkin and Peleg, the sublinear additive spanners of Thorup and Zwick,
and the (non-constant) additive spanners of Baswana et al. Our
constructions rival the simplicity of all comparable algorithms and
provide substantially better spanners, in some cases reducing the
density doubly exponentially.
 Minimum Weight 2-Edge-Connected Spanning Subgraphs in Planar Graphs
 André Berger and Michelangelo Grigni

We present a linear time algorithm exactly solving the 2-edge-connected
spanning subgraph (2-ECSS) problem in a graph of bounded
treewidth. Using this with Klein's diameter reduction
technique, we find a linear time PTAS for the problem in
unweighted planar graphs, and the first PTAS for the problem in
weighted planar graphs.
 Labeling Schemes for Vertex Connectivity
 Amos Korman

This paper studies labeling schemes for the vertex connectivity
function on general graphs. We consider the problem of labeling the
nodes of any n-node graph in such a way that given the labels of
two nodes u and v, one can decide whether u and
v are k-vertex connected in G, i.e., whether there
exist k vertex-disjoint paths connecting u and v.
The paper establishes an upper bound of
k^{2}log n on the number of bits used in a
label. The best previous upper bound for the label size of such a
labeling scheme is 2^{k}log n.
Key words: Graph algorithms, vertex-connectivity, labeling
schemes.
 Unbounded-Error One-Way Classical and Quantum Communication Complexity
 Kazuo Iwama, Harumichi Nishimura, Rudy Raymond and Shigeru Yamashita

This paper studies the gap between quantum one-way communication
complexity Q(f) and its classical counterpart
C(f) in the unbounded-error setting, i.e., where it
suffices that the success probability is strictly greater than 1/2. It
is proved that for any (total or partial) Boolean function
f, Q(f)=⌈C(f)/2⌉, i.e., the former is
always exactly one half of the latter. The result has an
application to obtaining an exact bound for the existence of an
(m,n,p)-QRAC, which is the n-qubit random
access coding that can recover any one of m original bits with
success probability ≥p. We can prove that an
(m,n,>1/2)-QRAC exists if and only if
m≤2^{2n}–1. Previously, only the
nonexistence of a (2^{2n},n,>1/2)-QRAC was
known.
 A Lower Bound on EntanglementAssisted Quantum Communication Complexity
 Ashley Montanaro and Andreas Winter

We prove a general lower bound on the boundederror
entanglementassisted quantum communication complexity of Boolean
functions. The bound is based on the concept that any classical or
quantum protocol to evaluate a function on distributed inputs can be
turned into a quantum communication protocol. As an application of this
bound, we give a very simple proof of the statement that almost all
Boolean functions on n+n bits have communication
complexity linear in n, even in the presence of unlimited
entanglement.
 Separating Deterministic from Nondeterministic NOF Multiparty Communication Complexity
 Paul Beame, Matei David, Toniann Pitassi and Philipp Woelfel

We solve some fundamental problems in the number-on-forehead (NOF)
k-party communication model. We show that there exists a
function which has at most logarithmic communication complexity for
randomized protocols with a one-sided error probability of 1/3 but which
has linear communication complexity for deterministic protocols. The
result is true for k=n^{O(1)} players, where
n is the number of bits on each player's forehead. This
separates the analogues of RP and P in the NOF communication model. We
also show that there exists a function which has constant randomized
complexity for public coin protocols but at least logarithmic complexity
for private coin protocols. No larger gap between private and public
coin protocols is possible. Our lower bounds are existential and we do
not know of any explicit function which allows such separations.
However, for the 3player case we exhibit an explicit function which has
Ω(log log n) randomized complexity for private coins
but only constant complexity for public coins.
It follows from our existential result that any function that is
complete for the class of functions with polylogarithmic
nondeterministic k-party communication complexity does not have
polylogarithmic deterministic complexity. We show that the set
intersection function, which is complete in the number-in-hand model, is
not complete in the NOF model under cylindrical reductions.
 An Optimal Decomposition Algorithm for Tree Edit Distance
 Erik D. Demaine, Shay Mozes, Benjamin Rossman and Oren Weimann

The edit distance between two ordered rooted trees with
vertex labels is the minimum cost of transforming one tree into the
other by a sequence of elementary operations consisting of deleting and
relabeling existing nodes, as well as inserting new nodes. In this
paper, we present a worst-case O(n^{3})-time
algorithm for this problem, improving the previous best
O(n^{3}log n)-time algorithm. Our
result requires a novel adaptive strategy for deciding how a dynamic
program divides into subproblems, together with a deeper understanding
of the previous algorithms for the problem. We prove the optimality of
our algorithm among the family of decomposition strategy
algorithms — which also includes the previous fastest algorithms
— by tightening the known lower bound of
Ω(n^{2}log^{2}n) to
Ω(n^{3}), matching our algorithm's running time.
Furthermore, we obtain matching upper and lower bounds of
Θ(nm^{2}(1+log (n/m))) when the two
trees have sizes m and n where m<n.
 On Commutativity Based Edge Lean Search
 Dragan Bošnački, Edith Elkind, Blaise Genest and Doron Peled

Exploring a graph through search is one of the most basic building blocks of
various applications. In a setting with a huge state space,
such as in testing and verification, optimizing the search may be crucial.
We consider the problem of visiting all states in a graph where edges
are generated by actions and the (reachable) states are not known in advance.
Some of the actions may commute, i.e., they result in the same state
for every order in which they are taken (this is the case when the actions
are performed independently by different processes).
We show how to use commutativity to achieve full coverage of the
states while traversing considerably fewer edges.
 Commitment Under Uncertainty: Two-Stage Stochastic Matching Problems
 Irit Katriel, Claire KenyonMathieu and Eli Upfal

We define and study two versions of the bipartite matching problem in
the framework of two-stage stochastic optimization with recourse. In
one version the uncertainty is in the second stage costs of the edges,
in the other version the uncertainty is in the set of vertices that
need to be matched. We prove lower bounds and analyze efficient
strategies for both cases. These problems model real-life stochastic
integral planning problems such as commodity trading, reservation
systems and scheduling under uncertainty.
 On the Complexity of Hard-Core Set Constructions
 ChiJen Lu, ShiChun Tsai and HsinLung Wu

We study a fundamental result of Impagliazzo (FOCS'95) known
as the hard-core set lemma. Consider any function
f:{0,1}^{n}→{0,1} which is "mildly hard", in the
sense that any circuit of size s must disagree with f on
at least a δ fraction of inputs. Then the hard-core set lemma says
that f must have a hard-core set H of density δ on
which it is "extremely hard", in the sense that any circuit of
size
s'=O(s/((1/ε^{2}) log(1/(εδ))))
must disagree with f on at least a (1–ε)/2 fraction of
inputs from H.
There are three issues with the lemma which we would like to address:
the loss of circuit size, the need for nonuniformity, and its
inapplicability to low-level complexity classes. We introduce two models
of hard-core set constructions, a strongly black-box one and a weakly
black-box one, and show that those issues are unavoidable in such
models.
First, we show that in any strongly black-box construction, one can
only prove the hardness of a hard-core set for smaller circuits of size
at most
s'=O(s/((1/ε^{2}) log(1/δ))).
Next, we show that any weakly black-box construction must be inherently
nonuniform — to have a hard-core set for a class G of
functions, we need to start from the assumption that f is hard
against a nonuniform complexity class with Ω((1/ε) log|G|)
bits of advice. Finally, we show that weakly black-box constructions in
general cannot be realized in a low-level complexity class such as
AC^{0}[p] — the assumption that f is hard
for AC^{0}[p] is not sufficient to guarantee the
existence of a hard-core set.
 Approximation by DNF: Examples and Counterexamples
 Ryan O'Donnell and Karl Wimmer

Say that f:{0,1}^{n}→{0,1} ε-approximates
g:{0,1}^{n}→{0,1} if the
functions disagree on at most an ε fraction of points. This
paper contains two results about approximation by DNF and other
small-depth circuits:
 For every constant 0<ε<1/2 there is a DNF of size
2^{O(√n)} that ε-approximates the Majority function on
n bits, and this is optimal up to the constant in the
exponent.
 There is a monotone function ℱ:{0,1}^{n}→{0,1}
with total influence (AKA average sensitivity)
B(ℱ)≤O(log n) such that any DNF or CNF
that .01-approximates ℱ requires size
2^{Ω(n/log n)}
and such that any unbounded fan-in AND-OR-NOT circuit that
.01-approximates ℱ requires size Ω(n/log n). This
disproves a conjecture of Benjamini, Kalai, and Schramm.
 Exotic Quantifiers, Complexity Classes, and Complete Problems
 Peter Bürgisser and Felipe Cucker

We define new complexity classes in the Blum-Shub-Smale theory of
computation over the reals, in the spirit of the polynomial hierarchy,
with the help of infinitesimal and generic quantifiers. Basic
topological properties of semialgebraic sets like boundedness,
closedness, compactness, as well as the continuity of semialgebraic
functions are shown to be complete in these new classes. All attempts
to classify the complexity of these problems in terms of the previously
studied complexity classes have failed. We also obtain completeness
results in the Turing model for the corresponding discrete problems. In
this setting, it turns out that infinitesimal and generic quantifiers
can be eliminated, so that the relevant complexity classes can be
described in terms of usual quantifiers only.
 Online Conflict-Free Colorings for Hypergraphs
 Amotz BarNoy, Panagiotis Cheilaris, Svetlana Olonetsky and Shakhar Smorodinsky

We provide a framework for online conflict-free coloring
(CF-coloring) of any hypergraph. We use this framework to obtain an
efficient randomized online algorithm for CF-coloring any
k-degenerate hypergraph. Our algorithm uses
O(k log n) colors with high probability
and this bound is asymptotically optimal for any constant k.
Moreover, our algorithm uses
O(k log k log n) random
bits with high probability. As a corollary, we obtain asymptotically
optimal randomized algorithms for online CF-coloring of some hypergraphs
that arise in geometry. Our algorithm uses exponentially fewer random
bits compared to previous results.
We introduce deterministic online CF-coloring algorithms for points
on the line with respect to intervals and for points in the plane with
respect to half-planes (or unit discs) that use Θ(log n)
colors and recolor O(n) points in total.
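To make the notion concrete for points on a line: a coloring is conflict-free with respect to intervals if every interval containing at least one point contains a point whose color occurs exactly once within it. A brute-force checker (our illustrative sketch, not one of the paper's algorithms):

```python
def is_cf_colored(points, colors):
    """Brute-force conflict-freeness check for points on a line with
    respect to intervals: every run of consecutive points (the point
    set of any interval) must contain a uniquely-colored point."""
    pts = sorted(zip(points, colors))
    n = len(pts)
    for i in range(n):
        for j in range(i, n):
            window = [c for _, c in pts[i:j + 1]]
            if not any(window.count(c) == 1 for c in window):
                return False
    return True
```

The classical static solution colors the i-th point by the number of trailing zeros of its rank, using O(log n) colors, in line with the Θ(log n) bound above; the checker confirms that pattern is conflict-free.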
 Distributed Computing with Advice: Information Sensitivity of Graph Coloring
 Pierre Fraigniaud, Cyril Gavoille, David Ilcinkas and Andrzej Pelc

We study the problem of the amount of information (advice) about a
graph that must be given to its nodes in order to achieve fast
distributed computations. The required size of the advice provides a
measure of the information sensitivity of a network problem. A problem is
information sensitive if little advice is enough to solve the problem
rapidly (i.e., much faster than in the absence of any advice), whereas
it is information insensitive if it requires giving a lot of information
to the nodes in order to ensure fast computation of the solution. In
this paper, we study the information sensitivity of distributed graph
coloring.
 Universal Algebra and Hardness Results for Constraint Satisfaction Problems
 Benoît Larose and Pascal Tesson

We present algebraic conditions on constraint languages Γ
that ensure the hardness of the constraint satisfaction problem
CSP(Γ) for complexity classes L, NL, P, NP and Mod_{p}L.
These criteria also give nonexpressibility results for various
restrictions of Datalog. Furthermore, we show that if
CSP(Γ) is not first-order definable then it is L-hard. Our
proofs rely on tame congruence theory and on a fine-grained analysis
of the complexity of reductions used in the algebraic study of
CSPs. The results pave the way for a refinement of the
dichotomy conjecture stating that each CSP(Γ) lies in P or
is NP-complete and they match the recent classification
for Boolean CSP. We also infer a partial
classification theorem for the complexity of CSP(Γ) when
the associated algebra of Γ is the idempotent reduct of a
preprimal algebra.
 On the power of k-consistency
 Albert Atserias, Andrei Bulatov and Victor Dalmau

The k-consistency algorithm for constraint-satisfaction
problems proceeds, roughly, by finding all partial solutions
on at most k variables and iteratively deleting those
that cannot be extended to a partial solution by one more
variable. It is known that if the core of the structure
encoding the scopes of the constraints has treewidth at most
k, then the k-consistency algorithm is always correct.
We prove the exact converse to this: if the core of the
structure encoding the scopes of the constraints does not
have treewidth at most k, then the k-consistency
algorithm is not always correct. This characterizes the
exact power of the k-consistency algorithm in structural
terms.
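A minimal sketch of the procedure as paraphrased above, in our own simplified rendering over explicit assignments (the paper itself works with relational structures and cores): enumerate all consistent partial assignments on at most k variables, then repeatedly delete any that cannot be extended by one more variable.

```python
from itertools import combinations, product

def k_consistency(variables, domain, constraints, k):
    """Return False if the k-consistency algorithm certifies
    unsatisfiability (the empty assignment is deleted), True otherwise.
    `constraints` maps tuples of variables to sets of allowed value
    tuples."""
    def consistent(assign):
        for scope, allowed in constraints.items():
            if all(v in assign for v in scope):
                if tuple(assign[v] for v in scope) not in allowed:
                    return False
        return True

    # all consistent partial solutions on at most k variables
    H = set()
    for r in range(k + 1):
        for vs in combinations(variables, r):
            for vals in product(domain, repeat=r):
                a = dict(zip(vs, vals))
                if consistent(a):
                    H.add(frozenset(a.items()))

    # iteratively delete assignments not extendable by one more variable
    changed = True
    while changed:
        changed = False
        for a in sorted(H, key=len, reverse=True):
            avars = {v for v, _ in a}
            if len(avars) >= k:
                continue
            for x in variables:
                if x in avars:
                    continue
                if not any(a | {(x, d)} in H for d in domain):
                    H.discard(a)
                    changed = True
                    break
    return frozenset() in H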
 Complexity of Propositional Proofs Under a Promise
 Nachum Dershowitz and Iddo Tzameret

We study — within the framework of propositional proof
complexity — the problem of certifying unsatisfiability of CNF
formulas under the promise that any satisfiable formula has many
satisfying assignments, where "many" stands for an explicitly
specified function λ in the number of variables n. To this end,
we develop propositional proof systems under different measures of
promises (that is, different λ) as extensions of resolution. This is
done by augmenting resolution with axioms that, roughly, can eliminate
sets of truth assignments defined by Boolean circuits. We then
investigate the complexity of such systems, obtaining an exponential
separation in the average-case between resolution under different size
promises:
 Resolution has polynomial-size refutations for all unsatisfiable
3-CNF formulas when the promise is ε∙2^{n}, for
any constant 0<ε<1.
 There are no subexponential-size resolution refutations for random 3-CNF
formulas when the promise is 2^{δn}
(and the number of clauses is o(n^{3/2})),
for any constant 0<δ<1.
 Sharp Tractability Borderlines for Finding Connected Motifs in Vertex-Colored Graphs
 Michael R. Fellows, Guillaume Fertin, Danny Hermelin and Stéphane Vialette

We study the problem of finding occurrences of motifs in
vertex-colored graphs, where a motif is a multiset of colors, and an
occurrence of a motif is a subset of connected vertices whose multiset
of colors equals the motif. This problem has applications in metabolic
network analysis, an important area in bioinformatics. We give two
positive results and three negative results that together draw sharp
borderlines between tractable and intractable instances of the
problem.
 Parameterized Algorithms for Directed Maximum Leaf Problems
 Noga Alon, Fedor V. Fomin, Gregory Gutin, Michael Krivelevich and Saket Saurabh

We prove that finding a rooted subtree with at least k leaves in a
digraph is a fixed-parameter tractable problem. A similar result
holds for finding rooted spanning trees with many leaves in digraphs
from a wide family ℒ that includes all strong and acyclic
digraphs. This completely settles an open question of Fellows and
solves another one for digraphs in ℒ. Our algorithms are
based on the following combinatorial result, which can be viewed as a
generalization of many results for a 'spanning tree with many
leaves' in the undirected case, and which is interesting on its own:
if a digraph D∊ℒ of order n with minimum in-degree at
least 3 contains a rooted spanning tree, then D contains one
with at least (n/2)^{1/5}–1 leaves.
 Parameterized Approximability of the Disjoint Cycle Problem
 Martin Grohe and Magdalena Grüber

We give an fpt approximation algorithm for the directed
vertex-disjoint cycle problem. Given a directed graph G with n
vertices and a positive integer k, the algorithm constructs a
family of at least k/ρ(k) disjoint cycles of
G if the graph G has a family of at least k
disjoint cycles (and otherwise may still produce a solution, or just
report failure). Here ρ is a computable function such that
k/ρ(k) is nondecreasing and unbounded. The running
time of our algorithm is polynomial.
The directed vertex-disjoint cycle problem is hard for the
parameterized complexity class W[1], and to the best of our knowledge
our algorithm is the first fpt approximation algorithm for a natural
W[1]-hard problem.
Key words: approximation algorithms, fixed-parameter
tractability, parameterized complexity theory.
 Linear Problem Kernels for NP-Hard Problems on Planar Graphs
 Jiong Guo and Rolf Niedermeier

We develop a generic framework for deriving linear-size problem
kernels for NP-hard problems on planar graphs. We demonstrate the
usefulness of our framework in several concrete case studies, giving new
kernelization results for CONNECTED VERTEX COVER,
MINIMUM EDGE DOMINATING SET,
MAXIMUM TRIANGLE PACKING, and
EFFICIENT DOMINATING SET on
planar graphs. On the route to these results, we present effective,
problemspecific data reduction rules that are useful in any approach
attacking the computational intractability of these problems.
 Balanced Families of Perfect Hash Functions and Their Applications
 Noga Alon and Shai Gutner

The construction of perfect hash functions is a well-studied topic.
In this paper, this concept is generalized with the following
definition. We say that a family of functions from [n] to
[k] is a δ-balanced (n,k)-family of perfect
hash functions if for every S⊆[n], |S|=k,
the number of functions that are 1-1 on S is between
T/δ and δT for some constant T>0. The
standard definition of a family of perfect hash functions requires that
there be at least one function that is 1-1 on S, for each
S of size k. In the new notion of balanced families, we
require the number of 1-1 functions to be almost the same (taking
δ to be close to 1) for every such S. Our main result is
that for any constant δ>1, a δ-balanced
(n,k)-family of perfect hash functions of size
2^{O(k log log k)} log n
can be constructed in time
2^{O(k log log k)} n log n.
Using the technique of color-coding we can apply our explicit
constructions to devise approximation algorithms for various counting
problems in graphs. In particular, we exhibit a deterministic polynomial
time algorithm for approximating both the number of simple paths of
length k and the number of simple cycles of size k for any
k≤O(log n / log log log n)
in a graph with n vertices. The approximation is up to any fixed
desirable relative error.
Key words: approximate counting of subgraphs, color-coding,
perfect hashing.
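Color-coding, which this paper derandomizes via balanced families, is easiest to see in its basic randomized detection form (a sketch under our own naming; the paper's contribution is deterministic approximate counting, not this primitive):

```python
import random

def has_simple_path(adj, k, trials=300):
    """Randomized color-coding: test whether the graph (given as
    adjacency lists) contains a simple path on k vertices. Each trial
    colors vertices with k colors uniformly at random; a dynamic
    program over sets of used colors then looks for a 'colorful'
    path, one whose k vertices all received distinct colors. A fixed
    k-path is colorful with probability k!/k^k, so the error is
    one-sided and vanishes with the number of trials."""
    n = len(adj)
    for _ in range(trials):
        color = [random.randrange(k) for _ in range(n)]
        # reach[v] = color sets of colorful paths ending at v
        reach = [{frozenset([color[v]])} for v in range(n)]
        for _ in range(k - 1):
            nxt = [set() for _ in range(n)]
            for u in range(n):
                for s in reach[u]:
                    for v in adj[u]:
                        if color[v] not in s:
                            nxt[v].add(s | {color[v]})
            reach = nxt
        if any(reach[v] for v in range(n)):
            return True
    return False
```

Distinct colors force distinct vertices, which is why the DP over 2^k color sets suffices instead of tracking vertex sets.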
 An Exponential Improvement on the MST Heuristic for Minimum Energy Broadcasting in Ad Hoc Wireless Networks
 Ioannis Caragiannis, Michele Flammini and Luca Moscardelli

In this paper we present a new approximation algorithm for the
Minimum Energy Broadcast Routing (MEBR) problem in ad hoc
wireless networks that has exponentially better approximation factor
than the well-known Minimum Spanning Tree (MST) heuristic. Namely, for
any instance where a minimum spanning tree of the set of stations is
guaranteed to cost at most ρ times the cost of an optimal
solution for MEBR, we prove that our algorithm achieves an approximation
ratio bounded by
2 ln ρ – ln 2 + 2. This
result is particularly relevant for its consequences on Euclidean
instances where we significantly improve previous results.
 Succinct Ordinal Trees Based on Tree Covering
 Meng He, J. Ian Munro and S. Srinivasa Rao

Various methods have been used to represent a tree of n nodes
in essentially the information-theoretic minimum space while supporting
various navigational operations in constant time, but different
representations usually support different operations. Our main
contribution is a succinct representation of ordinal trees, based on
that of Geary et al., that supports all the navigational
operations supported by various succinct tree representations while
requiring only 2n+o(n) bits. It also supports
efficient level-order traversal, a useful ordering previously supported
only with a very limited set of operations.
Our second contribution expands on the notion of a single succinct
representation supporting more than one traversal ordering, by showing
that our method supports two other encoding schemes as abstract data
types. In particular, it supports extracting a word
(O(lg n) bits) of the balanced parenthesis sequence
or depth-first unary degree sequence in O(f(n))
time, using at most
n/f(n) + o(n) additional
bits, for any f(n) in O(lg n) and
Ω(1).
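For illustration, the balanced-parenthesis sequence mentioned above is simply the DFS trace of the tree (a sketch with our own naming; extracting O(lg n)-bit words of this sequence is what the representation supports):

```python
def to_balanced_parens(children, root=0):
    """Balanced-parenthesis encoding of an ordinal tree given as child
    lists in left-to-right order: write '(' on entering a node during
    a DFS and ')' on leaving it. An n-node tree takes exactly 2n
    parentheses, i.e. 2n bits, matching the 2n+o(n) space bound."""
    out = []
    def dfs(u):
        out.append('(')
        for c in children[u]:
            dfs(c)
        out.append(')')
    dfs(root)
    return ''.join(out)
```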
 A Framework for Dynamizing Succinct Data Structures
 Ankur Gupta, WingKai Hon, Rahul Shah and Jeffrey Scott Vitter

We present a framework to dynamize succinct data structures, to
encourage their use over nonsuccinct versions in a wide variety of
important application areas. Our framework can dynamize most
state-of-the-art succinct data structures for dictionaries, ordinal
trees, labeled trees, and text collections. Of particular note is its
direct application to XML indexing structures that answer subpath
queries. Our framework focuses on achieving
information-theoretically optimal space along with near-optimal
update/query bounds.
As the main part of our work, we consider the following problem
central to text indexing: Given a text T over an alphabet Σ,
construct a compressed data structure answering the queries
char(i), rank_{s}(i), and
select_{s}(i) for a symbol s∊Σ. Many
data structures consider these queries for static text T. We
build on these results and give the best known query bounds for the
dynamic version of this problem, supporting arbitrary insertions and
deletions of symbols in T.
Specifically, with an amortized update time of
O(n^{ε}), any static succinct data
structure D for T, taking t(n) time for
queries, can be converted by our framework into a dynamic succinct data
structure that supports rank_{s}(i),
select_{s}(i), and
char(i) queries in
O(t(n) + log log n)
time, for any constant ε>0. When |Σ|=polylog(n), we
achieve O(1) query times. Our update/query bounds are
near-optimal with respect to the lower bounds.
 In-Place Suffix Sorting
 Gianni Franceschini and S. Muthukrishnan

Given string T=T[1,...,n], the suffix sorting
problem is to lexicographically sort the suffixes
T[i,...,n] for all i. This problem is
central to the construction of suffix arrays and trees with many
applications in string processing, computational biology and
compression. A bottleneck in these applications is the amount of
workspace needed to perform suffix sorting beyond the space
needed to store the input as well as the output. In particular,
emphasis is placed even on the constant c of the
O(n)=cn space algorithms known for this
problem.
Currently the best previous result takes
O(nv + n log n) time and
O(n/√v) extra space, for any v∊[1,√n]
for strings from a general alphabet. We improve this and present the
first known inplace suffix sorting algorithm. Our algorithm takes
O(n log n) time using O(1)
workspace and is optimal in the worst case for the general alphabet.
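For reference, the problem itself is one line in its naive form (our baseline sketch, which uses Θ(n) extra space for the suffix slices and up to O(n² log n) time, exactly the costs the in-place algorithm avoids):

```python
def naive_suffix_sort(T):
    """Return the suffix array of T: starting positions of all
    suffixes T[i:] in lexicographic order. Comparing whole suffixes
    makes this simple but far from in-place."""
    return sorted(range(len(T)), key=lambda i: T[i:])
```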
 Strong Price of Anarchy for Machine Load Balancing
 Amos Fiat, Haim Kaplan, Meital Levy and Svetlana Olonetsky

As defined by Aumann in 1959, a strong equilibrium is a Nash
equilibrium that is resilient to deviations by coalitions. We give
tight bounds on the strong price of anarchy for load balancing on
related machines. We also give tight bounds for k-strong
equilibria, where the size of a deviating coalition is at most
k.
Keywords: Game theory, Strong Nash equilibria, Load
balancing, Price of Anarchy
 Efficient Algorithms for Constant Well Supported Approximate Equilibria in Bimatrix Games
 Spyros C. Kontogiannis and Paul G. Spirakis

In this work we study the tractability of well supported approximate
Nash Equilibria (SuppNE in short) in bimatrix games. In view of the
apparent intractability of constructing Nash Equilibria (NE in
short) in polynomial time, even for bimatrix games, understanding
the limitations of the approximability of the problem is of great
importance.
We initially prove that SuppNE are immune to the addition of
arbitrary real vectors to the rows (columns) of the row (column)
player's payoff matrix. Consequently we propose a polynomial time
algorithm (based on linear programming) that constructs a
0.5-SuppNE for arbitrary win-lose games.
We then parameterize our technique for win-lose games, in order to
apply it to arbitrary (normalized) bimatrix games. Indeed, this new
technique leads to a weaker φ-SuppNE for win-lose games,
where φ = (√5–1)/2 is the golden ratio.
Nevertheless, this parameterized technique extends nicely to a
technique for arbitrary [0,1]-bimatrix games, which assures a
0.658-SuppNE in polynomial time.
To our knowledge, these are the first polynomial time
algorithms providing ε-SuppNE for normalized or win-lose bimatrix
games, for some nontrivial constant ε∊[0,1) bounded
away from 1.
Keywords: Bimatrix Games, Well Supported Approximate Equilibria
 Holographic Algorithms: The Power of Dimensionality Resolved
 JinYi Cai and Pinyan Lu

Valiant's theory of holographic algorithms is a novel methodology
to achieve exponential speedups in computation. A fundamental
parameter in holographic algorithms is the dimension of the linear basis
vectors.
We completely resolve the problem of the power of higher dimensional
bases. We prove that 2-dimensional bases are universal for
holographic algorithms.
 Reconciling Data Compression and Kolmogorov Complexity
 Laurent Bienvenu and Wolfgang Merkle

While data compression and Kolmogorov complexity are both about
effective coding of words, the two settings differ in the following
respect. A compression algorithm or compressor, for short, has to map a
word to a unique code for this word in one shot, whereas with the
standard notions of Kolmogorov complexity a word has many different
codes and the minimum code for a given word cannot be found effectively.
This gap is bridged by introducing decidable Turing machines and a
corresponding notion of Kolmogorov complexity, where compressors and
suitably normalized decidable machines are essentially the same
concept.
Kolmogorov complexity defined via decidable machines yields
characterizations, in terms of the initial segment complexity of
sequences, of the concepts of Martin-Löf randomness, Schnorr randomness,
Kurtz randomness, and computable dimension. These results can also be
reformulated in terms of time-bounded Kolmogorov complexity. Other
applications of decidable machines are presented, such as a simplified
proof of the Miller-Yu theorem (characterizing Martin-Löf randomness
by the plain complexity of the initial segments) and a new
characterization of computably traceable sequences via a natural lowness
notion for decidable machines.
 Size Competitive Meshing without Large Angles
 Gary L. Miller, Todd Phillips and Donald Sheehy

We present a new meshing algorithm for the plane, Overlay Stitch
Meshing (OSM), accepting as input an arbitrary Planar Straight Line
Graph and producing a triangulation with all angles smaller than
170°. The output triangulation has size competitive with that of any
optimal mesh with an equally bounded largest angle. The competitive
ratio is O(log(L/s)) where L and s
are respectively the largest and smallest features in the input. OSM
runs in
O(n log(L/s) + m)
time/work where n is the input size and m is the output
size. The algorithm first uses Sparse Voronoi Refinement to compute a
quality overlay mesh of the input points alone. This triangulation is
then combined with the input edges to give the final mesh.
 Lower Bounds for Quantile Estimation in Random-Order and Multi-Pass Streaming
 Sudipto Guha and Andrew McGregor

We present lower bounds on the space required to estimate the
quantiles of a stream of numerical values. Quantile estimation is
perhaps the most studied problem in the data stream model and it is
relatively well understood in the basic single-pass data stream model in
which the values are ordered adversarially. Natural extensions of this
basic model include the random-order model, in which the values
are ordered randomly, and the
multi-pass model, in which an algorithm is permitted a limited
number of passes over the stream.
We present lower bounds that complement existing upper bounds
in both models. One consequence is an exponential
separation between the random-order and adversarial-order models: using
Ω(polylog n) space, exact selection requires
Ω(log n) passes in the adversarial-order model, while
O(log log n) passes are sufficient in the
random-order model.
 Streaming and Fully Dynamic Centralized Algorithms for Constructing and Maintaining Sparse Spanners
 Michael Elkin

We present a streaming algorithm for constructing sparse
spanners and show that it significantly outperforms the
state-of-the-art algorithm for this task. Specifically, the
processing time per edge of our algorithm is drastically
smaller than that of the state-of-the-art algorithm, and all other
efficiency parameters of our algorithm are no greater (and some of them
strictly smaller) than the respective parameters of the
state-of-the-art algorithm.
We also devise a fully dynamic centralized algorithm maintaining
sparse spanners. This algorithm has a very small incremental update
time, and a nontrivial decremental update time. To our knowledge, this
is the first fully dynamic centralized algorithm for maintaining sparse
spanners that provides nontrivial bounds on both incremental and
decremental update time for a wide range of stretch parameter
t.
 Checking and Spot-Checking the Correctness of Priority Queues
 Matthew Chu, Sampath Kannan and Andrew McGregor

We revisit the problem of memory checking considered by Blum et al.
In this model, a checker monitors the behavior of a data structure
residing in unreliable memory given an arbitrary sequence of
user-defined operations. The checker is permitted a small amount of
separate reliable memory and must fail a data structure if it is not
behaving as specified and pass it otherwise. How much additional
reliable memory is required by the checker? First, we present a checker
for an implementation of a priority queue. The checker uses
O(√n log n) space, where n is the
number of operations performed. We then present a spot-checker
using only
O(ε^{–1} log δ^{–1} log n)
space that, with probability at least 1–δ, will fail the
priority queue if it is ε-far (defined appropriately) from
operating like a priority queue and pass the priority queue if it
operates correctly. Finally, we prove a range of lower bounds that
complement our checkers.
 On the Chromatic Number of Random Graphs
 Amin CojaOghlan, Konstantinos Panagiotou and Angelika Steger

In this paper we study the chromatic number
χ(G_{n,p}) of the
binomial random graph G_{n,p},
where
p = p(n) ≤ n^{–3/4 – δ},
for every fixed δ>0. We prove that a.a.s. χ(G_{n,p}) is ℓ, ℓ+1, or ℓ+2, where ℓ
is the maximum integer satisfying
2(ℓ–1)log(ℓ–1) ≤ np.
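Given n and p, the integer ℓ in this statement can be read off directly from the displayed inequality. A minimal sketch (the function name `ell` and the natural-log convention are assumptions for illustration, not from the paper):

```python
import math

def ell(n, p):
    """Largest integer l >= 2 with 2*(l-1)*log(l-1) <= n*p (natural log).

    Illustrative helper only; l = 2 always qualifies since
    2*(2-1)*log(2-1) = 0 <= n*p.
    """
    l = 2
    # Candidate l+1 qualifies iff 2*l*log(l) <= n*p.
    while 2 * l * math.log(l) <= n * p:
        l += 1
    return l

# Example with np = 10 (n = 10**6, p = 10**-5, within p <= n^{-3/4-δ}):
# 2*3*log(3) ~ 6.59 <= 10 but 2*4*log(4) ~ 11.09 > 10, so ell = 4.
```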
 Quasi-Randomness and Algorithmic Regularity for Graphs with General Degree Distributions
 Noga Alon, Amin CojaOghlan, Hiệp Hàn, Mihyun Kang, Vojtěch Rödl and Mathias Schacht

We deal with two very related subjects: quasi-randomness and regular
partitions. The purpose of the concept of quasi-randomness is to
measure how much a given graph "resembles" a random one. Moreover, a
regular partition approximates a given graph by a bounded number of
quasi-random graphs. Regarding quasi-randomness, we present a new
spectral characterization of low discrepancy, which extends to sparse
graphs. Concerning regular partitions, we present a novel concept of
regularity that takes into account the graph's degree distribution, and
show that if G=(V,E)
satisfies a certain boundedness condition, then
G admits a regular partition. In addition, building on the work of
Alon and Naor, we provide an algorithm that computes a
regular partition of a given (possibly sparse) graph G in polynomial
time.
Key words: quasi-random graphs, Laplacian eigenvalues,
sparse graphs, regularity lemma, Grothendieck's inequality
 Complexity of the Cover Polynomial
 Markus Bläser and Holger Dell

The cover polynomial introduced by Chung and Graham is a two-variable
graph polynomial for directed graphs. It counts the (weighted) number of
ways to cover a graph with disjoint directed cycles and paths,
it is an interpolation between the determinant and the permanent,
and it is believed to be a directed analogue of the Tutte polynomial.
Jaeger, Vertigan, and Welsh showed that the Tutte polynomial is #P-hard
to evaluate at all but a few special points and curves.
It turns out that the same holds for the cover polynomial:
We prove that, in almost the whole plane,
the problem of evaluating the cover polynomial
is #P-hard under polynomial-time Turing reductions,
while only three points are easy.
Our construction uses a gadget that is easier to analyze
and more general
than the XOR-gadget used by Valiant in his proof that
the permanent is #P-complete.
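To make the determinant/permanent contrast concrete (this is generic brute-force code over the Leibniz expansions, not the paper's construction): the two polynomials sum the same products over permutations, differing only in the sign factor.

```python
from itertools import permutations

def perm_and_det(A):
    """Brute-force permanent and determinant of a square matrix A
    via the Leibniz expansions; n! terms, so only for tiny matrices."""
    n = len(A)
    perm = det = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        perm += prod  # permanent: every term with sign +1
        # determinant: sign is the parity of the inversions of sigma
        inversions = sum(sigma[i] > sigma[j]
                         for i in range(n) for j in range(i + 1, n))
        det += (-1) ** inversions * prod

    return perm, det

# Example: for [[1, 2], [3, 4]] the permanent is 1*4 + 2*3 = 10,
# the determinant is 1*4 - 2*3 = -2.
```

The determinant is computable in polynomial time, while computing the permanent is #P-complete, which is what makes an interpolation between the two a natural object for hardness results.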

