Removing the Haystack to Find the Needle(s):
Minesweeper, an Adaptive Join Algorithm
Abstract
We describe a new algorithm, Minesweeper, that is able to satisfy stronger runtime guarantees than previous join algorithms (colloquially, 'beyond worst-case guarantees') for data in indexed search trees. Our first contribution is developing a framework to measure this stronger notion of complexity, which we call certificate complexity, that extends notions of Barbay et al. and Demaine et al.; a certificate is a set of propositional formulae that certifies that the output is correct. This notion captures a natural class of join algorithms. In addition, the certificate allows us to define a strictly stronger notion of runtime complexity than traditional worst-case guarantees. Our second contribution is a dichotomy theorem for the certificate-based notion of complexity. Roughly, we show that Minesweeper evaluates β-acyclic queries in time linear in the certificate plus the output size, while for any β-cyclic query there is some instance that takes superlinear time in the certificate (and for which the output is no larger than the certificate size). We also extend our certificate-complexity analysis to queries with bounded treewidth and the triangle query.
1 Introduction
Efficiently evaluating relational joins is one of the most well-studied problems in relational database theory and practice. Joins are a key component of problems in constraint satisfaction, artificial intelligence, motif finding, geometry, and others. This paper presents a new join algorithm, called Minesweeper, for joining relations that are stored in ordered data structures, such as B-trees. Under some mild technical assumptions, Minesweeper is able to achieve stronger runtime guarantees than previous join algorithms.
The Minesweeper algorithm is based on a simple idea. When data are stored in an index, successive tuples indicate gaps, i.e., regions in the output space of the join where no possible output tuples exist. Minesweeper maintains gaps that it discovers during execution and infers where to look next. In turn, these gaps may indicate that a large number of tuples in the base relations cannot contribute to the output of the join, so Minesweeper can efficiently skip over such tuples without reading them. By using an appropriate data structure to store the gaps, Minesweeper guarantees that we can find at least one point in the output space that needs to be explored, given the gaps so far. The key technical challenges are the design of this data structure, called the constraint data structure, and the analysis of the join algorithm under a more stringent runtime complexity measure.
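To make the gap intuition concrete, here is a minimal sketch for the simplest case, intersecting two sorted lists (a single-attribute join). The function and its structure are our own illustration, not the paper's algorithm; the point is only that consecutive indexed values define gaps that can be skipped with binary search instead of scanned.

```python
from bisect import bisect_left

def intersect_sorted(r, s):
    """Intersect two sorted lists by leapfrogging over gaps.

    When r[i] < s[j], every element of r strictly below s[j] lies in a
    known gap of s, so we binary-search past the gap instead of
    scanning those elements one by one.
    """
    out, i, j = [], 0, 0
    while i < len(r) and j < len(s):
        if r[i] == s[j]:
            out.append(r[i])
            i, j = i + 1, j + 1
        elif r[i] < s[j]:
            i = bisect_left(r, s[j], i)   # skip the gap in one jump
        else:
            j = bisect_left(s, r[i], j)
    return out
```

On instances where the lists interleave only a few times, the number of comparisons is governed by the number of gap crossings rather than the input size, which is the behavior the certificate measure is designed to capture.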
To measure our stronger notion of runtime, we introduce the notion of a certificate for an instance of a join problem: essentially, a certificate is a set of comparisons between elements of the input relations that certify that the join output is exactly as claimed. We use the certificate as a measure of the difficulty of a particular instance of a join problem. That is, our goal is to find algorithms whose running times can be bounded by some function of the smallest certificate size for a particular input instance. Our notion has two key properties:

Certificate complexity captures the computation performed by widely implemented join algorithms. We observe that the set of comparisons made by any join algorithm that interacts with the data by comparing elements of the input relations (implicitly) constructs a certificate. Examples of such join algorithms are index-nested-loop join, sort-merge join, hash join [Footnote 1: Within a logarithmic factor, an ordered tree can simulate a hash table.], Grace join, and block-nested-loop join. Hence, our results provide a lower bound for this class of algorithms, as any such algorithm must take at least as many steps as the number of comparisons in a smallest certificate for the instance.

Certificate complexity is a strictly finer notion of complexity than traditional worst-case data complexity. In particular, we show that there is always a certificate that is no larger than the input size. In some cases, the certificate may be much smaller (even constant-sized for arbitrarily large inputs).
These two properties allow us to model a common situation in which indexes allow one to answer a query without reading all of the data, a notion that traditional worst-case analysis is too coarse to capture. We believe ours is the first beyond worst-case analysis of join queries.
Throughout, we assume that all input relations are indexed consistently with a particular ordering of all attributes called the global attribute order (GAO). In effect, this assumption means that we restrict ourselves to algorithms that compare elements in GAO order. This model, for example, excludes the possibility that a relation will be accessed using indexes with multiple search keys during query evaluation.
With this restriction, our main technical results are as follows. Given a β-acyclic query, we show that there is some GAO such that Minesweeper runs in time that is essentially optimal in the certificate sense, i.e., in time Õ(|C| + Z), where C is a smallest certificate for the problem instance, Z is the output size, and Õ hides factors that depend (perhaps exponentially) on the query size and at most logarithmically on the input size. [Footnote 2: The exponential dependence on the query is similar to traditional data complexity; the logarithmic dependence on the data is an unavoidable technical necessity (see Appendix C).] Assuming the 3SUM conjecture, this boundary is tight, in the sense that for any β-cyclic query (and any GAO) there is some family of instances that requires a runtime of Ω(|C|^{4/3−ε}) for any ε > 0, and for which the output is no larger than the certificate size. For α-acyclic join queries, which form the more traditional notion of acyclicity in database theory and a strictly larger class than β-acyclic queries, Yannakakis's seminal join algorithm has a worst-case running time that is linear in the input size plus the output size (in data complexity). However, we show that in the certificate world this boundary has changed: assuming the exponential time hypothesis, the runtime of any algorithm for α-acyclic queries cannot be bounded by any polynomial in |C|. [Footnote 3: In Appendix J, we show that both worst-case optimal algorithms [40, 53] and Yannakakis's algorithm run in time superlinear in |C| + Z for β-acyclic queries on some family of instances.]
We also describe how to extend our results to notions of treewidth. Recall that any ordering of attributes can be used to construct a tree decomposition. Given a GAO that induces a tree decomposition with an (induced) treewidth w, Minesweeper runs in time Õ(|C|^{w+1} + Z). In particular, for a query with treewidth w, there is always a GAO that achieves this induced treewidth. Moreover, we show that no algorithm (comparison-based or not) can improve this exponent by more than a constant factor. However, our algorithm does not have an optimal exponent: for the special case of the popular triangle query, we introduce a more sophisticated data structure that allows us to run in time Õ(|C|^{3/2} + Z), while Minesweeper runs in time Õ(|C|^2 + Z).
Outline of the Remaining Sections
In Section 2, we describe the notion of a certificate and formally state our main technical problem and results. In Section 3, we give an overview of the main technical ideas of Minesweeper, including a complete description of our algorithm and its associated data structures. In Section 4, we describe the analysis of Minesweeper for β-acyclic queries. In Section 5, we then describe how to extend the analysis to queries with low treewidth and the triangle query. In Section 6, we discuss related work. Most of the technical details are provided in the appendix.
2 Problem and Main Result
Roughly, the main problem we study is:
Given a natural join query Q and a database instance I, compute Q(I) in time O(f(|C|) + Z), where C is a smallest "certificate" certifying that the output is as claimed by the algorithm, and Z is the output size.
We will assume that all relations in the input are already indexed. Ideally, we aim for f(|C|) = Õ(|C|). We make this problem precise in this section.
2.1 The inputs to Minesweeper
We assume a set of attributes A1, …, An and denote the domain of attribute Ai by D(Ai). Throughout this paper, without loss of generality, we assume that all attributes are on the domain ℕ of natural numbers. We define three items: (1) the global attribute order; (2) our notation for order; and (3) our model for how the data are indexed.
The Global Attribute Order
Minesweeper evaluates a given natural join query Q consisting of a set of relations indexed in a way that is consistent with an ordering of all attributes occurring in Q that we call the global attribute order (GAO). To avoid burdening the notation, we assume that the GAO is simply the order A1, A2, …, An. We assume that all relations are stored in ordered search trees (e.g., B-trees) whose search keys are consistent with this global order. For example, an index over R(A1, A4, A6) with search key (A1, A4, A6) is consistent, while one with search key (A4, A1, A6) is not.
TupleOrder Notation
We will extensively reason about the relative order of tuples, and we introduce notation to facilitate the arguments. For a relation R whose attributes, listed in GAO order, are A_{i1}, …, A_{ik} (so that i1 < i2 < ⋯ < ik), we define an index tuple to be a tuple x = (x1, …, xℓ) of positive integers, where ℓ ≤ k. Such tuples index tuples in the relation R. We define their meaning inductively. If x = (x1), then R[x] denotes the x1'th smallest value in the set π_{A_{i1}}(R). Inductively, define R[(x1, …, xℓ)] to be the xℓ'th smallest value in the set
π_{A_{iℓ}}(σ_{A_{i1} = R[(x1)], …, A_{iℓ−1} = R[(x1,…,xℓ−1)]}(R)).
For example, if R(A, B) = {(1, 2), (1, 8), (3, 2)}, then R[(1)] = 1, R[(2)] = 3, R[(1, 1)] = 2, R[(1, 2)] = 8, and R[(2, 1)] = 2.
We use the following convention to simplify the algorithm's description: for any index tuple (x1, …, xℓ−1),
(1) R[(x1, …, xℓ−1, 0)] = −∞, and
(2) R[(x1, …, xℓ−1, c + 1)] = +∞, where c is the largest valid value of the last coordinate.
Model of Indexes
The relation R is indexed such that the values of the various attributes of tuples from R can be accessed using index tuples. We assume appropriate size information is stored so that we know what the correct ranges of the coordinates are; for example, following the notation described above, the correct range of each coordinate is {1, …, c}, where c is the number of distinct values of the corresponding attribute among tuples agreeing with the prefix. With the convention specified in (1) and (2), 0 and c + 1 are out-of-range coordinates. These coordinates are used for the sake of brevity only; an index tuple, by definition, cannot contain out-of-range coordinates.
The index structure for R supports the query FindGap(x, v), which takes as input an index tuple x = (x1, …, xℓ−1) and a value v, and returns a pair (x^L, x^R) of coordinates such that

- R[(x1, …, xℓ−1, x^L)] ≤ v,

- R[(x1, …, xℓ−1, x^R)] ≥ v,

- x^R ≤ x^L + 1, and

- x^L (resp. x^R) is the maximum (resp. minimum) index satisfying this condition.

Note that it is possible for x^L = x^R, which holds exactly when R[(x1, …, xℓ−1, x^L)] = v. Moreover, we assume throughout that FindGap runs in time O(log N). This model captures widely used indexes, including B-trees [45, Ch. 10] and tries [53].
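As an illustration, FindGap over a single sorted run of distinct values can be sketched with binary search. This is our own sketch of the stated contract (1-based coordinates, with 0 and len + 1 playing the out-of-range roles), not the paper's implementation:

```python
from bisect import bisect_left, bisect_right

def find_gap(vals, v):
    """FindGap over the sorted distinct values extending one index-tuple
    prefix.  Returns (xL, xR), 1-based, with vals[xL] <= v <= vals[xR],
    where xL is maximal and xR is minimal; 0 and len(vals) + 1 stand for
    the out-of-range coordinates whose values are -inf and +inf.
    """
    xL = bisect_right(vals, v)       # number of values <= v
    xR = bisect_left(vals, v) + 1    # 1-based position of first value >= v
    return xL, xR
```

When v occurs in the list, xL = xR points at it; otherwise xR = xL + 1 and the pair brackets the gap containing v, as the contract above requires.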
2.2 Certificates
We define a certificate, which is a set of comparisons that certifies the output is exactly as claimed. We do not want the comparisons to depend on the specific values in the instance, only on their order. To facilitate that, we think of R[x] as a variable that can be mapped to a specific domain value by a database instance. [Footnote 4: We use variables as a perhaps more intuitive, succinct way to describe the underlying morphisms.] These variables are only defined for the valid index tuples imposed by the input instance described in the previous section.
A database instance instantiates all variables R[x], where R is a relation of the query and x is an index tuple in relation R. (In particular, the input database instance described in the previous section is such a database instance.) For an instance I, we use R^I[x] to denote the instantiation of the variable R[x]. Note that each such variable is on the domain of some attribute A; for short, we call such a variable an A-variable. A database instance fills in specific values at the nodes of the search-tree structures of the input relations.
Example 2.1.
Consider the query Q = R(A) ⋈ S(A) on the input instance I1 defined by R = {2, 4, 6, 8} and S = {4, 8}. This instance can be viewed as defining the following variables: R[(1)] = 2, R[(2)] = 4, R[(3)] = 6, R[(4)] = 8, and S[(1)] = 4, S[(2)] = 8. Another database instance I2 can define the same index variables but using different constants; in particular, set R[(1)] = 1, R[(2)] = 3, R[(3)] = 5, R[(4)] = 7, and S[(1)] = 3, S[(2)] = 7.
We next formalize the notion of certificates. Consider an input instance to Minesweeper, consisting of the query Q, the GAO, and a set of relations already indexed consistently with the GAO.
Definition 2.2 (Argument).
An argument for the input instance is a set of symbolic comparisons of the form
(3) R[x] θ S[y], where θ ∈ {<, =, >},
R and S are relations of the query, and x and y are two index tuples [Footnote 5: Note again that the index tuples are constructed from the input instance as described in the previous section.] such that R[x] and S[y] are both A-variables for some attribute A. Note that we allow R = S. [Footnote 6: Equality constraints between index tuples from the same relation are required to guarantee that certificates are no longer than the input; see the second property below.] A database instance I satisfies an argument if R^I[x] θ S^I[y] is true for every comparison (R[x] θ S[y]) in the argument.
An index tuple x for a relation R is called a full index tuple if the length of x equals the arity of R. Let I be a database instance for the problem. Then the full index tuple x is said to contribute to an output tuple t ∈ Q(I) if the tuple (R^I[(x1)], R^I[(x1, x2)], …, R^I[x]) is exactly the projection of t onto the attributes of R. A collection W of full index tuples is said to be a witness for I if W has exactly one full index tuple from each relation R of Q, and all index tuples in W contribute to the same output tuple t.
Definition 2.3 (Certificate).
An argument C for the input instance is called a certificate iff the following condition is satisfied: if I and I′ are two database instances of the problem, both of which satisfy C, then every witness for I is a witness for I′, and vice versa. The size of a certificate is the number of comparisons in it.
Example 2.4.
Continuing with Example 2.1, the argument C = {R[(2)] = S[(1)], R[(4)] = S[(2)]} is a certificate for Q. For every database instance that satisfies the two equalities (such as I1 and I2 in the example), the sets of witnesses are the same, namely {(2), (1)} and {(4), (2)}. Notice we do not need to spell out all of the output values in the certificate.
Consider the instance I3 in which R = {2, 4, 6, 8} and S = {4, 6}. While I3 is very similar to I1, I3 does not satisfy the certificate, since R^{I3}[(4)] ≠ S^{I3}[(2)]. The certificate also does not apply to an instance I4 defined by R = {2, 4, 6, 8, 10} and S = {4, 8}, since I4 defines a different set of variables from I1; e.g., R[(5)] is defined in I4 but not in I1.
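The witness semantics for the set-intersection query can be checked mechanically. The helper below and the concrete instances are our own illustrative assumptions in the spirit of Examples 2.1 and 2.4: two instances whose index variables satisfy the same equalities produce identical witness sets.

```python
def witnesses(r, s):
    """All witness pairs (i, j), 1-based, for the set-intersection query
    R(A) join S(A): R[i] and S[j] contribute to the same output value.
    Assumes r and s are sorted lists of distinct values."""
    pos = {v: i + 1 for i, v in enumerate(r)}
    return [(pos[v], j + 1) for j, v in enumerate(s) if v in pos]

# Two instances that both satisfy the equalities R[2] = S[1], R[4] = S[2]:
I1 = ([2, 4, 6, 8], [4, 8])
I2 = ([1, 3, 5, 7], [3, 7])
```

Although I1 and I2 contain different constants, `witnesses(*I1)` and `witnesses(*I2)` coincide, which is exactly what the certificate guarantees.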
Properties of optimal certificates
We list three important facts about C_opt, a minimum-sized certificate:

The set of comparisons issued by a very natural class of (non-deterministic) comparison-based join algorithms is a certificate; this result not only justifies the definition of certificates, but also shows that |C_opt| is a lower bound for the runtime of any comparison-based join algorithm.

|C_opt| can be shown to be at most linear in the input size, no matter what the data and the GAO are, and in many cases it can even be of constant size. Hence, running time measured in |C_opt| is a strictly finer notion of runtime complexity than input-based runtimes; and

|C_opt| depends on the data and the GAO.
We explain the above facts more formally in the following two propositions. The proofs of the propositions can be found in Appendix B.
Proposition 2.5 (Certificate size as runtime lowerbound of comparisonbased algorithms).
Let Q be a join query whose input relations are already indexed consistently with a GAO as described in Section 2.1. Consider any comparison-based join algorithm that only performs comparisons of the form shown in (3). Then, the set of comparisons performed during an execution of the algorithm is a certificate. In particular, if C_opt is an optimal certificate for the problem, then the algorithm must run in time Ω(|C_opt|).
Proposition 2.6 (Upper bound on optimal certificate size).
Let Q be a general join query on m relations and n attributes. Let N be the total number of tuples from all input relations. Then, no matter what the input data and the GAO are, we have |C_opt| = O(N), where the constant inside the O depends only on the query.
In Appendix B, we present examples demonstrating that |C_opt| can vary anywhere from O(1) to Θ(N), that the input data or the GAO can change the certificate size, and that same-relation comparisons are needed.
2.3 Main Results
Given a set of input relations already indexed consistently with a fixed GAO, we wish to compute the natural join of these relations as quickly as possible. As illustrated in the previous section, a runtime approaching O(|C_opt|) is optimal among comparison-based algorithms. Furthermore, runtimes as a function of |C_opt| can be sublinear in the input size. Ideally, one would like a join algorithm running in time O(|C_opt|). However, such a runtime is impossible, because for many instances the output size Z is superlinear in the input size, while |C_opt| is at most linear in the input size. Hence, we aim for runtimes of the form O(f(|C_opt|) + Z), where f is some function; a runtime of Õ(|C_opt| + Z) is essentially optimal.
Our algorithm, called Minesweeper, is a general-purpose join algorithm. Our main results analyze its runtime behavior on various classes of queries in the certificate complexity model. Recall that α-acyclic (often just acyclic) is the standard notion of (hypergraph) acyclicity in database theory [1, p. 128]. A query Q is β-acyclic, a stronger notion, if every subquery of Q obtained by removing atoms from Q remains acyclic. For completeness, we include these definitions and examples in Appendix A.
Let N be the input size, n the number of attributes, m the number of relations, Z the output size, a the maximum arity of the input relations, and C_opt any optimal certificate for the instance. Our key results are as follows.
Theorem 2.7.
Suppose the input query Q is β-acyclic. Then there is some GAO such that Minesweeper computes its output in time Õ(|C_opt| + Z), where Õ hides a factor depending only on the query and a log N factor.
As is standard in database theory, we ignore the dependency on the query size, and so the above theorem states that Minesweeper runs in time Õ(|C_opt| + Z). [Footnote 7: For β-acyclic queries with a fixed GAO, our results are loose; our best upper bound on the complexity uses the treewidth machinery from Section 5.]
What about β-cyclic queries? The short answer is that we cannot achieve this guarantee. It is obvious that any join algorithm must take time Ω(Z). Using 3SUM-hardness, a well-known complexity-theoretic assumption [44], we are able to show the following.
Proposition 2.8.
Unless the 3SUM problem can be solved in subquadratic time, for any β-cyclic query in any GAO, there does not exist an algorithm that runs in time O(|C_opt|^{4/3−ε} + Z) for any ε > 0 on all instances.
We extend our analysis of Minesweeper to queries that have bounded treewidth and to triangle queries in Section 5. These results are technically involved and we only highlight the main technical challenges.
3 The Minesweeper Algorithm
We begin with an overview of the main ideas and technical challenges of the Minesweeper algorithm. Intuitively, Minesweeper probes into the space of all possible output tuples and explores the gaps in this space where no output tuples exist. These gaps are encoded by a technical notion called constraints, which we describe next. (For illustration, we present complete end-to-end results for set intersection and the bow-tie query in Appendices H and I.)
3.1 Notation for Minesweeper
We need some notation to describe our algorithm. Define the output space of the query Q to be the space D(A1) × D(A2) × ⋯ × D(An), where D(Ai) is the domain of attribute Ai. [Footnote 8: Recall that we assume D(Ai) = ℕ for simplicity.] By definition, a tuple t = (t1, …, tn) is an output tuple if and only if, for every relation R of Q, the projection of t onto ā(R) is a tuple of R, where ā(R) is the set of attributes in R.
Constraints
A constraint is an n-dimensional vector c = (c1, …, cn) in which each component ci has one of three types:

- an open-interval component ci = (l, r) on the attribute Ai, for some l < r,

- a wildcard or * component, and

- an equality component of the type ci = ⟨v⟩ for a domain value v.
In any constraint, there is exactly one interval component. All components after the interval component are wildcards. Hence, we will often not write down the wildcard components that come after the interval component. The prefix that comes before the interval component is called a pattern, which consists of any number of wildcards and equality components. The equality components encode the coordinates of the axis-parallel affine planes containing the gap. For example, in three dimensions the constraint (*, (2, 8), *) can be viewed as the open region between the affine hyperplanes A2 = 2 and A2 = 8, and the constraint (⟨4⟩, (2, 8), *) can be viewed as the strip inside the plane A1 = 4 between the lines A2 = 2 and A2 = 8. We encode these gaps syntactically to facilitate efficient insertion, deletion, and merging.
Let t = (t1, …, tn) be an arbitrary tuple from the output space, and let c = (c1, …, cn) be a constraint. Then t is said to satisfy constraint c if, for every i ∈ [n], one of the following holds: (1) ci = *, (2) ci = ⟨v⟩ and ti = v, or (3) ci = (l, r) and l < ti < r. We say a tuple t is active with respect to a set of constraints if t does not satisfy any constraint in the set (geometrically, no constraint covers the point t).
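The satisfaction test above transcribes directly into code. The encoding of components is our own assumption: '*' for wildcards, ('=', v) for equalities, and ('in', l, r) for open intervals; trailing wildcards may be omitted, matching the shorthand used in the text.

```python
WILDCARD = "*"

def satisfies(t, c):
    """Does tuple t satisfy constraint c?  Components of c are '*',
    ('=', v), or ('in', l, r); zip() leaves any trailing components of
    t unconstrained, i.e., implicit wildcards."""
    for ti, ci in zip(t, c):
        if ci == WILDCARD:
            continue
        if ci[0] == "=" and ti == ci[1]:
            continue
        if ci[0] == "in" and ci[1] < ti < ci[2]:
            continue
        return False
    return True

def active(t, constraints):
    """t is active iff no constraint covers it."""
    return not any(satisfies(t, c) for c in constraints)
```

For instance, (4, 5, 1) satisfies (*, (2, 8), *) because 2 < 5 < 8, while (4, 9, 1) is active with respect to that single constraint.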
3.2 A Highlevel Overview of Minesweeper
We break Minesweeper into two components: (1) a special data structure called the constraint data structure (CDS), and (2) an algorithm that uses this data structure. Algorithm 1 gives a high-level overview of how Minesweeper works, which we will make precise in the next section.
The CDS stores the constraints already discovered during execution. For example, consider the query Q = R(A) ⋈ S(A, B). If Minesweeper determines that R[(1)] = 2 and R[(2)] = 8, then we can deduce that there is no tuple in the output whose A-value lies in the open interval (2, 8). This observation is encoded as the constraint ((2, 8), *). A key challenge for the CDS is to efficiently find an active tuple t, given the set of constraints already stored in the CDS.
The outer algorithm queries the CDS to find active tuples and then probes the input relations. If there is no active tuple t, the algorithm terminates. Given an active t, Minesweeper makes queries into the index structures of the input relations. These queries either report that t is an output tuple, in which case t is output, or they discover constraints that are then inserted into the CDS. Intuitively, the queries into the index structures are crafted so that at least one of the returned constraints is responsible for ruling out t in any optimal certificate.
We first describe the interface of the CDS and then the outer algorithm which uses the CDS.
3.3 The CDS
The CDS is a data structure that implements two functions as efficiently as possible: (1) InsConstraint(c), which takes a new constraint c and inserts it into the data structure, and (2) getProbePoint(), which returns an active tuple t with respect to all constraints that have been inserted into the CDS, or null if no such t exists.
Implementation
To support these operations, we implement the CDS using a tree structure called ConstraintTree, which is a tree with at most n levels, one for each of the attributes in GAO order. Figure 1 illustrates such a tree. More details are provided in Appendix E.
Each node in the CDS corresponds to a prefix (i.e. pattern) of constraints; each node has two data structures:
(1) v.equalities is a sorted list with one entry per child of v in the underlying tree. Each entry in the sorted list is labeled with an element of the corresponding attribute's domain and has a pointer to the subtree rooted at the corresponding child. There are two exceptions: (a) if v is a leaf, then v.equalities is empty, and (b) each v has at most one additional child node labeled with *.
(2) v.intervals is a sorted list of disjoint open intervals on the corresponding attribute. A key property is that, given a value t, we can in logarithmic time output the smallest value t′ ≥ t that is not covered by any interval in v.intervals (via the Next function). We maintain the invariant that, for every node v in a ConstraintTree, none of the labels in v.equalities is contained in an interval in v.intervals.
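A sketch of the intervals list with interval-merging insertion and the Next query, under the simplifying assumption of integer domains; the class and method names are ours, and the linear-scan insertion stands in for the logarithmic-time structure the paper assumes.

```python
from bisect import bisect_right

class IntervalList:
    """Sorted list of disjoint open intervals supporting next(t): the
    smallest value t' >= t not covered by any stored interval."""

    def __init__(self):
        self.ivs = []  # disjoint open intervals (l, r), sorted by left endpoint

    def insert(self, l, r):
        # Absorb every stored interval overlapping (l, r); this merging
        # work is what makes the insertion cost amortized.
        keep = []
        for (a, b) in self.ivs:
            if b <= l or r <= a:      # disjoint as open sets: keep as-is
                keep.append((a, b))
            else:                     # overlapping: grow (l, r)
                l, r = min(l, a), max(r, b)
        keep.append((l, r))
        keep.sort()
        self.ivs = keep

    def next(self, t):
        # Rightmost interval starting at or before t, then jump past
        # consecutive covering intervals (at most one jump per interval).
        i = bisect_right(self.ivs, (t, float("inf"))) - 1
        while 0 <= i < len(self.ivs) and self.ivs[i][0] < t < self.ivs[i][1]:
            t = self.ivs[i][1]        # right endpoint of an open interval
            i += 1
        return t
```

For example, after inserting (2, 5) and (5, 9) the intervals stay separate (they are disjoint as open sets) and next(3) returns 5; inserting (4, 9) instead merges everything into (2, 9), so next(3) returns 9.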
The following proposition is straightforward, hence we omit the proof. Note that when we insert a new interval that overlaps existing intervals and/or contains values appearing in v.equalities, we have to merge the intervals and/or remove those entries from v.equalities; hence the cost is amortized.
Proposition 3.1.
The operation InsConstraint can be implemented in amortized time O(n log g), where g is the total number of constraint vectors already inserted.
3.4 The outer algorithm
Algorithm 2 contains all the details that were missing from the high-level view of Algorithm 1. Appendix D.1 has a complete run of Minesweeper on a specific query. Appendices H and I have complete end-to-end descriptions for two specific queries, which help clarify the general algorithm. We prove the following result.
Theorem 3.2.
Let N denote the input size, Z the number of output tuples, n the number of attributes, and m the number of relations. Let C_opt be any optimal certificate for the input instance. Then, the total runtime of Algorithm 2 is
Õ(|C_opt| + Z) + T_CDS,
where T_CDS is the total time taken by the constraint data structure. The algorithm inserts Õ(|C_opt| + Z) constraints into the CDS and issues Õ(|C_opt| + Z) calls to getProbePoint.
Our proof strategy bounds the number of iterations of the algorithm using an amortized analysis. We pay for each probe point t returned by the CDS either by charging a comparison in the certificate or by charging an output tuple. If t is an output tuple, we charge the output tuple. If t is not an output tuple, then we observe that at least one of the constraints we discovered must rule out t. Recall that each constraint is essentially a pair of elements from some base relation. If one element from each such pair is not involved in any comparison in C_opt, then we can perturb the instance slightly by moving the comparison-free element to align with t. This means C_opt does not have enough information to rule out t as an output tuple, reaching a contradiction. Hence, when t is not an output tuple, essentially some gap must map to a pair of comparisons. Finally, using the geometry of the gaps, we show that each comparison is charged at most Õ(1) times and each output tuple is charged Õ(1) times. Thus, in total, the number of iterations is Õ(|C_opt| + Z).
When C_opt is an optimal-size certificate, the runtime above is roughly linear in |C_opt| plus the total time the CDS takes. Note, however, that |C_opt| can be very small, even constant. Hence, we basically shift all of the burden of join evaluation to the CDS. Thus, one should not hope for an efficient CDS for general queries:
Theorem 3.3 (Limitation of any CDS).
Unless the exponential time hypothesis fails, no constraint data structure can process the constraints and the probe-point accesses in time polynomial (independent of the query) in the number of constraints inserted and probe points accessed.
Complete proofs of the above theorems are included in Appendix D. In the next sections, we analyze the CDS, specifically the function getProbePoint. Our analysis exploits properties of the query and the GAO for β-acyclic and bounded-treewidth queries.
4 β-acyclic queries
We describe how to implement getProbePoint for β-acyclic queries. In particular, we show that there is some GAO that allows getProbePoint to run in amortized logarithmic time. Hence, by Theorem 3.2, our running time is Õ(|C_opt| + Z), which we argued previously is essentially optimal.
4.1 Overview
Recall that, given a set of constraints, getProbePoint returns an active tuple t, i.e., a tuple that does not satisfy any of the constraints stored in the CDS. Essentially, during execution there may be a large number of constraints, and getProbePoint needs to answer an alternating sequence of insertions and constraint-satisfaction queries. The question is: how do we split this work between insertion time and querying time?
In Minesweeper, we take a lazy approach: we insert all the constraints without doing any cleanup on the CDS. Then, when getProbePoint is called Minesweeper might have to do hard work to return a new active tuple, applying memoization along the way so the heavy labor does not have to be repeated in the future. When the GAO has a special structure, this strategy helps keep every CDS operation at amortized logarithmic time. We first give an example to build intuition about how our lazy approach works.
Example 4.1.
Consider a query with three attributes A1, A2, A3, where A1 and A2 range over [m] and A3 ranges over [2D], and suppose the constraints that are inserted into the CDS are:

- (⟨i⟩, *, (0, D)) for all i ∈ [m],

- (*, *, (D − 1, 2D)),

- and (*, *, (2D − 1, 2D + 1)).

There are m + 2 = O(m) constraints, and there is no active tuple of the form (i, j, t3) for i, j ∈ [m]. Without memoization, the brute-force strategy will take Ω(m²) time, because for every pair (i, j), the algorithm will have to verify in Õ(1) time that the constraint (⟨i⟩, *, (0, D)) forbids all t3 ∈ (0, D), the constraint (*, *, (D − 1, 2D)) forbids all t3 ∈ (D − 1, 2D), and the constraint (*, *, (2D − 1, 2D + 1)) forbids t3 = 2D.
But we can do better by remembering inferences that we have made. Fix the value i = 1. Minesweeper recognizes in Õ(1) time that there is no t3 for which (1, 1, t3) is active. Minesweeper is slightly smarter: none of the constraints used in this conclusion places any condition on t2, so it concludes in Õ(1) time that no tuple beginning with t1 = 1 can be active, no matter what t2 is. Minesweeper remembers this inference by inserting the inferred constraint ((0, 2), *, *) into the CDS. Then, for i = 2, it takes only Õ(1) time to conclude that no tuple of the form (2, j, t3) can be active. It does this inference by inserting the constraint ((1, 3), *, *), which is merged with ((0, 2), *, *) to become ((0, 3), *, *). Overall, we need only Õ(m) time to reach the same conclusion as the brute-force strategy.
4.2 Patterns
Recall that getProbePoint returns a tuple t = (t1, …, tn) such that t does not satisfy any of the constraints stored in the CDS. We find t by computing its coordinates t1, t2, …, one value at a time, backtracking if necessary. We need some notation to describe the algorithm and the properties that we exploit.
Let k ≤ n be an integer. A vector p = (p1, …, pk) for which pi ∈ D(Ai) ∪ {*} is called a pattern. The number k is the length of the pattern. If pi is a domain value ⟨v⟩, then it is an equality component of the pattern, while pi = * is a wildcard component of the pattern.
A node u at depth k in the tree ConstraintTree can be identified with a pattern of length k corresponding naturally to the labels on the path from the root of ConstraintTree down to node u. The pattern for node u is denoted by P(u). In particular, P(root) = (), the empty pattern.
Let p = (p1, …, pk) be a pattern. Then, a specialization of p is another pattern q of the same length for which qi = pi whenever pi ≠ *. In other words, we can get a specialization of p by changing some of its wildcard components into equality components. If q is a specialization of p, then p is a generalization of q. For two nodes u and v of the CDS, if P(u) is a specialization of P(v), then we also say that node u is a specialization of node v.
The specialization relation defines a partially ordered set. When q is a specialization of p, we write q ⊑ p. If in addition we know q ≠ p, then we write q ⊏ p.
Let F(u) be the principal filter generated by u in this partial order, i.e., the set of all nodes v of the CDS such that P(v) is a generalization of P(u). The key property of constraints that we exploit is summarized by the following proposition.
Proposition 4.2.
Using the notation above, for a β-acyclic query there exists a GAO such that, for each node u of the CDS, the principal filter F(u) is a chain.
Recall that a chain is a totally ordered set. In particular, F(u) has a smallest pattern (or bottom pattern). Note that the patterns in F(u) might come from constraints inserted from relations, constraints inserted for outputs of the join, or even constraints inserted due to backtracking. Thinking of the constraints geometrically, this condition means that the constraints form a collection of axis-aligned affine subspaces of the output space in which one is contained inside another.
In Appendix F, we prove Proposition 4.2 using a result of Brouwer and Kolen [15]. The class of GAOs in the proposition is called nested elimination orders. We show that there exists a GAO that is a nested elimination order if and only if the query is β-acyclic. We also show that β-acyclicity can be decided, and such a GAO found, in polynomial time.
4.3 The getProbePoint Algorithm
Algorithm 3 describes the getProbePoint algorithm specialized to β-acyclic queries. In turn, this algorithm uses Algorithm 4, which is responsible for efficiently inferring the constraints imposed by patterns above a given level. We walk through the steps of the algorithm below.
Initially, let u be the root node of the CDS. We set t1 = u.intervals.Next(0), the smallest value that does not belong to any interval stored in u.intervals. We work under the implicit assumption that any interval inserted into ConstraintTree that is unbounded above is of the form (v, +∞) for some value v; this is safe because the domain values are in ℕ. In particular, if Next returns +∞, then the constraints cover the entire output space and null can be returned.
Inductively, let (t1, …, tk−1), for 2 ≤ k ≤ n, be the prefix of t we have built thus far. Our goal is to compute tk. What we need to find is a value tk such that tk does not belong to the intervals stored in v.intervals for any node v whose pattern generalizes (t1, …, tk−1). For this, we call Algorithm 4, which uses Prop. 4.2 to efficiently find such a tk, or to report that no such tk exists. We defer its explanation for the moment. We only note that if such a tk cannot be found (i.e., if +∞ is returned after the search), then we have to backtrack, because every tuple that begins with the prefix (t1, …, tk−1) satisfies some constraint stored in ConstraintTree. Line 17 of Algorithm 3 shows how we backtrack. In particular, we save this information (by inserting a new constraint into the CDS) in line 17 to avoid ever exploring this path again.
Next Chain Value.
The key to Algorithm 4 is that such a value can be found efficiently, since one only needs to look through a chain of constraint sets. We say that one pattern immediately precedes another in the chain when no third pattern lies strictly between them. Every interval stored at a node higher up in the chain infers an interval at a node lower in the chain. For instance, in Example 4.1 the chain consists of three nodes, and every constraint at the top pattern infers a more specialized constraint at the middle pattern, which in turn infers a constraint at the bottom pattern. Hence, if we inferred every single constraint downward from the top pattern to the bottom pattern, we would spend a lot of time. The idea of Algorithm 4 is to infer as large an interval as possible from a node higher in the chain before specializing it downward. Our algorithm ensures that whenever we infer a new constraint (line 17 of Algorithm 4), this constraint subsumes an old constraint, which will never be charged again in a future inference.
4.4 Runtime Analysis
The proofs of the following main results are in Appendix F.
Lemma 4.3.
Suppose the input query is acyclic. Then there exists a GAO such that each of the two operations getProbePoint and InsConstraint of ConstraintTree takes amortized time polylogarithmic in the total number of constraints ever inserted.
The above lemma and Theorem 3.2 lead directly to one of our main results.
Corollary 4.4 (Restatement of Theorem 2.7).
Suppose the input query is acyclic. Then there exists a GAO such that Minesweeper computes its output in time Õ(|C| + Z), where |C| is the size of an optimal certificate and Z is the output size.
In particular, its data-complexity runtime is essentially optimal in the certificate-complexity world: Õ(|C| + Z).
Beyond acyclic queries, we show that we cannot do better, modulo a well-known complexity-theoretic assumption.
Proposition 4.5 (Restatement of Proposition 2.8).
Unless the 3SUM problem can be solved in subquadratic time, for any cyclic query in any GAO, there does not exist an algorithm that runs in time O(|C|^{4/3−ε} + Z) for any ε > 0 on all instances.
Comparison with Worst-Case Optimal Algorithms
It is natural to wonder whether Yannakakis' worst-case optimal algorithm for acyclic queries, or the worst-case optimal algorithms of [40] (henceforth NPRR) and [53] (henceforth LFTJ), can achieve runtimes of Õ(|C| + Z) for acyclic queries. We outline the intuition for why this cannot be the case.
Yannakakis' algorithm performs pairwise semijoin reducers. If we pick an instance with a small certificate in which some pair of relations involved in a semijoin is large, then Yannakakis' algorithm must scan those relations and will exceed the Õ(|C| + Z) bound. For NPRR and LFTJ, consider the family of instances in which one computes all paths of some constant length k in a directed graph G (this can be realized by a "path" query of length k whose relations are all copies of the edge set of G). Now consider the case where the longest path in G is shorter than k. In this case the output is empty, the certificate is small relative to the input, and by Corollary 4.4 Minesweeper runs in time roughly linear in the certificate. When G nevertheless has many paths shorter than k, both NPRR and LFTJ will have to explore all of those paths, leading to a much larger runtime.
Appendix J presents a rich family of acyclic queries and a family of instances that combine both of the ideas above to show that all three worst-case optimal algorithms can have arbitrarily worse runtimes than Minesweeper. In particular, even running those worst-case algorithms in parallel cannot achieve the certificate-based guarantees.
5 Extensions
We extend our results in two directions: to queries with bounded treewidth, and to faster algorithms for the triangle query.
5.1 Queries with bounded treewidth
While Proposition 2.8 shows that Õ(|C| + Z) time is not achievable for cyclic queries, we are able to show the following analog of the treewidth-based runtimes known under the traditional worst-case complexity notion [19, 6].
Theorem 5.1 (Minesweeper for bounded treewidth queries).
Suppose the GAO has elimination width bounded by w. Then Minesweeper runs in time Õ(|C|^{w+1} + Z).
In particular, if we ignore the dependence on the query size, the runtime is Õ(|C|^{w+1} + Z). Further, if the input query has treewidth bounded by w, then there exists a GAO for which Minesweeper runs in the above time.
The overall structure of the algorithm remains identical to the acyclic case; the only change is in getProbePoint. The getProbePoint algorithm for general queries remains very similar in structure to that of the acyclic case (Algorithm 3), and if the input query is acyclic (with a nested elimination order as the GAO), then the general getProbePoint algorithm is exactly Algorithm 3. The new issue we have to deal with is that the poset at each depth is not necessarily a chain. Our solution is simple: we mimic the behavior of Algorithm 3 on a shadow of the poset that is a chain, and make use of both the algorithm and the analysis for the acyclic case. Appendix G contains all the algorithmic details and the proofs of the above theorem, along with the following negative result.
It is natural to wonder whether Theorem 5.1 is tight. In addition to the obvious dependence on the certificate size, the next result indicates that the exponential dependence on the width also cannot be avoided, even if we restrict attention to the class of acyclic queries.
Proposition 5.2.
Unless the exponential time hypothesis is false, for every large enough constant w there is an acyclic query Q for which there is no algorithm with runtime |C|^{o(w)}. Further, Q has treewidth w.
Our analysis of Minesweeper is off by at most a small additive constant in the exponent.
Proposition 5.3.
For every w, there exists an (acyclic) query with treewidth w with the following property: for every possible global ordering of attributes, there exists an (infinite family of) instances on which the Minesweeper algorithm takes time |C|^{Ω(w)}.
5.2 The Triangle Query
We consider the triangle query, which can be viewed as enumerating the triangles in a given graph. Using the CDS described so far, Minesweeper computes this query in time superlinear in the certificate size (via a straightforward application of our more general analysis in Theorem 5.1), and this analysis is tight. The central inefficiency is that the CDS wastes time determining that many tuples with the same prefix have been ruled out by existing constraints. In particular, the CDS considers all possible such pairs, of which there can be quadratically many. By designing a smarter CDS, we explore far fewer such pairs. We can prove the following result. (The details are in Appendix L.)
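For intuition, the triangle query over a graph's edge relation can be enumerated directly from sorted adjacency lists. This simple sketch (our own code, not the improved CDS of Theorem 5.4) fixes a vertex order, which plays the role of a global attribute order, and extends a prefix only with vertices adjacent to everything chosen so far:

```python
def triangles(edges):
    """Enumerate triangles (a, b, c) with a < b < c in an undirected graph.

    Ordering each triangle by vertex id acts like a GAO; restricting
    to higher-numbered neighbors skips tuples that cannot extend
    the current prefix to an output triangle.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    out = []
    for a in sorted(adj):
        # candidates for the second coordinate, in sorted order
        higher = sorted(w for w in adj[a] if w > a)
        for i, b in enumerate(higher):
            # c must be adjacent to both a and b, and larger than b
            for c in higher[i + 1:]:
                if c in adj[b]:
                    out.append((a, b, c))
    return out
```

The inner membership test is where a certificate-aware algorithm gains: a smarter data structure can rule out whole ranges of candidate pairs at once instead of testing them one by one.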
Theorem 5.4.
We can solve the triangle query in time Õ(|C|^{3/2} + Z).
6 Related Work
Our work touches on a few different areas, and we structure the related work around each of them: join processing, certificates for set intersection, and complexity measures that are finer than worst-case complexity.
6.1 Join Processing
Many positive and negative results regarding conjunctive query evaluation also apply to natural join evaluation. On the negative side, both problems are hard in terms of expression complexity [16], but are easier in terms of data complexity [50] (when the query is assumed to be of fixed size). They are W[1]-complete and thus unlikely to be fixed-parameter tractable [43, 33].
On the positive side, a large class of conjunctive queries (and thus natural join queries) is tractable. In particular, the classes of acyclic queries and bounded-treewidth queries can be evaluated efficiently [55, 17, 31, 27, 54]. For example, Yannakakis' algorithm can evaluate an acyclic natural join query in time linear in the input size plus the output size, up to a factor depending on the query size. Acyclic conjunctive queries can also be evaluated efficiently in the I/O model [42], and in the RAM model even when there are inequalities [54]. For queries with treewidth w, it was recognized early on that a runtime of about N^{w+1} is attainable, where N is the input size [19, 28]. Our result strictly generalizes these results. In Appendix J, we show that Yannakakis' algorithm does not meet our notion of certificate optimality.
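Yannakakis' semijoin-reducer idea mentioned above can be sketched for the special case of a chain (path) join. This is our own illustrative code with hypothetical helper names, not the paper's or Yannakakis' actual pseudocode; relations are lists of tuples with explicit attribute schemas:

```python
def semijoin(r, s, r_attrs, s_attrs):
    """Keep the tuples of r that join with at least one tuple of s."""
    common = [a for a in r_attrs if a in s_attrs]
    keys = {tuple(t[s_attrs.index(a)] for a in common) for t in s}
    return [t for t in r if tuple(t[r_attrs.index(a)] for a in common) in keys]

def full_reducer_chain(rels, schemas):
    """Full reducer for a chain join R1 -- R2 -- ... -- Rk:
    a backward sweep then a forward sweep of pairwise semijoins
    removes every dangling tuple."""
    rels = [list(r) for r in rels]
    for i in range(len(rels) - 2, -1, -1):   # backward sweep
        rels[i] = semijoin(rels[i], rels[i + 1], schemas[i], schemas[i + 1])
    for i in range(1, len(rels)):            # forward sweep
        rels[i] = semijoin(rels[i], rels[i - 1], schemas[i], schemas[i - 1])
    return rels
```

After the two sweeps, every surviving tuple participates in some output tuple, so the final join can then be produced in time proportional to input plus output; the certificate-based critique above is that the sweeps themselves still scan entire relations.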
The notion of treewidth is loose for some queries. For instance, if we replicate every attribute c times, the treewidth is inflated by a factor of c; but by considering all duplicates of an attribute as one big compound attribute, the runtime should only be multiplied by a polynomial in c, and there should not be a factor of c in the exponent of the runtime. Furthermore, there is an inherent incompatibility between treewidth and acyclicity: an acyclic query can have very large treewidth yet still be tractable. A series of papers [17, 2, 32, 31, 27] refined the treewidth notion, leading to generalized hypertree width [31] and ultimately fractional hypertree width [39], which allows for a unified view of tractable queries. (An acyclic query, for example, has fractional hypertree width at most 1.)
The fractional hypertree width notion comes out of a recent tight worst-case bound on the output size in terms of the input relation sizes [7]. An algorithm running in time matching that bound, and thus worst-case optimal, was presented in [40]. Given a tree decomposition of the input query that minimizes the maximum fractional edge cover over all bags, we can run this algorithm on each bag and then Yannakakis' algorithm [55] on the resulting bag relations, obtaining a total runtime of Õ(N^{fhw} + Z), where fhw is the fractional hypertree width. The leapfrog triejoin algorithm [53] is also worst-case optimal and runs fast in practice; it is based on the idea that we can efficiently skip unmatched intervals. The indices are also built or selected to be consistent with a chosen GAO. In Appendix J, we show that neither leapfrog triejoin nor the algorithm from [40] can achieve the certificate guarantees of Minesweeper for acyclic queries.
Notions of acyclicity
There are at least five notions of acyclic hypergraphs, four of which were introduced early on in database theory (see, e.g., [24]), and at least one new one introduced recently [22]. The five notions are not equivalent; they form a strict hierarchy: Berge-acyclicity implies γ-acyclicity, which implies β-acyclicity, which implies α-acyclicity.
α-acyclicity, or simply acyclicity [11, 12, 26, 29, 38], was recognized early on to be a very desirable property of database schemes; in particular, it allows for a data-complexity optimal algorithm in the worst case [55]. However, an α-acyclic hypergraph may have a subhypergraph that is not α-acyclic. For example, if we take any hypergraph and add a hyperedge containing all vertices, we obtain an α-acyclic hypergraph. This observation leads to the notion of β-acyclicity: a hypergraph is β-acyclic if and only if every one of its subhypergraphs is (α-)acyclic [24]. It was shown (relatively) recently [41] that SAT is in P for β-acyclic CNF formulas and is NP-complete for α-acyclic CNF formulas. Extending this result, it was shown that a negative conjunctive query is polytime solvable if and only if it is β-acyclic [14]. A separation between these acyclicity notions also showed up in logic [21], while Berge-acyclicity is restrictive and, thus far, is of only historical interest [13].
Graph triangle enumeration
In social network analysis, computing and listing the triangles in a graph is at the heart of the clustering coefficient and the transitivity ratio. There are four decades of research on computing, estimating, upper-bounding, and lower-bounding the number of triangles and the runtime of such algorithms [49, 37, 52, 51, 5, 36]. This problem can easily be reduced to a join query of the form R(A,B) ⋈ S(B,C) ⋈ T(A,C).
6.2 Certificates for Intersection
The problem of finding the union and intersection of two sorted arrays using the fewest number of comparisons is well studied, dating back at least to Hwang and Lin [35] in 1972. In fact, the idea of skipping elements using a binary-search jumping (or leapfrogging) strategy was already present in [35]. Demaine et al. [20] used the leapfrogging strategy for computing the intersection of sorted sets. They introduced the notion of proofs to capture the intrinsic complexity of such a problem. Then, the ideas of gaps and certificate encodings were introduced to show that their algorithm is average-case optimal. (See Appendix K for a more technical discussion.)
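The doubling ("galloping") search underlying these leapfrogging strategies is easy to state. A minimal sketch in Python (our own code, assuming plain sorted lists; not the pseudocode of [20] or [35]):

```python
from bisect import bisect_left

def gallop(arr, target, lo=0):
    """First index i >= lo with arr[i] >= target, found by doubling the
    step size and then binary-searching the bracketed range; the cost is
    logarithmic in the distance jumped, not in len(arr)."""
    if lo >= len(arr) or arr[lo] >= target:
        return lo
    step = 1
    while lo + step < len(arr) and arr[lo + step] < target:
        step *= 2
    # arr[lo + step // 2] < target, and arr[lo + step] >= target (if in range)
    return bisect_left(arr, target, lo + step // 2 + 1,
                       min(len(arr), lo + step + 1))

def intersect(a, b):
    """Intersect two sorted lists, leapfrogging past gaps in either list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i, j = i + 1, j + 1
        elif a[i] < b[j]:
            i = gallop(a, b[j], i)
        else:
            j = gallop(b, a[i], j)
    return out
```

On instances with long unmatched runs, the total number of comparisons is proportional to the size of a small proof of the answer rather than to the input size, which is exactly the adaptive behavior studied in this line of work.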
DLM's notion of proof inspired another adaptive complexity notion for the set intersection problem, called a partition certificate, by Barbay and Kenyon [8, 9], where, instead of a system of inequalities, essentially a set of gaps is used to encode and verify the output. Barbay and Kenyon's idea of a partition certificate is very close to the set of intervals that Minesweeper outputs. In the analysis of Minesweeper for the set intersection problem in Appendix H, we (implicitly) show a correspondence between these partition certificates and DLM-style proofs. In addition to the fact that join queries are more general than set intersection, our notion of certificate is value-oblivious: our certificates do not depend on specific values in the domain, while Barbay and Kenyon's partition certificates do.
It should be noted that these lines of inquiry are not only of theoretical interest; they have yielded good experimental results in text data mining and text compression [10]. (We thank Jérémy Barbay for bringing these references to our attention.)
6.3 Beyond Worstcase Complexity
There is a fairly large body of work on analyzing algorithms with more refined measures than worst-case complexity. (See, e.g., the excellent lectures by Roughgarden on this topic [46].) This section recalls the prior work most closely related to ours.
A fair amount of work has been done on designing adaptive algorithms for sorting [23], where the goal is a sorting algorithm whose runtime (or number of comparisons) matches a notion of difficulty of the instance (e.g., the number of inversions or the length of the longest monotone subsequence; the survey [23] lists at least eleven such measures of disorder). This line of work is similar to ours in that the goal is to run in time proportional to the difficulty of the input. The major difference is that there the main goal is to shave the logarithmic factor over the linear runtime, whereas in our work the potential gains are of much higher order and we ignore log factors.
Another related line of work is the self-improving algorithms of Ailon et al. [4], where the goal is an algorithm that runs on inputs drawn i.i.d. from an unknown distribution and, in expectation, converges to a runtime related to the entropy of the distribution. In some sense this setup is similar to online learning, while our work requires worst-case per-instance guarantees.
The notion of instance-optimal join algorithms was (to the best of our knowledge) first explicitly studied in the work of Fagin et al. [25]. That paper studies the problem of computing the top-ranked objects, where the ranking is some aggregate of total orderings of the objects according to different attributes. (It is assumed that the algorithm can only iterate through each list in sorted order of the individual attribute scores.) The results in that paper are stronger than ours in that Fagin et al. achieve a constant optimality ratio (as opposed to our logarithmic one). On the other hand, their results are for a problem that is arguably narrower than the class of join queries we consider.
The only other paper with provable instance-optimal guarantees that we are aware of is the Afshani et al. result on some geometric problems [3]. Their quantitative results are somewhat incomparable to ours: on the one hand, they obtain a constant optimality ratio; on the other hand, that ratio holds only for order-oblivious comparison algorithms (while our results, with a logarithmic optimality ratio, hold against all comparison-based algorithms).
7 Conclusion and Future Work
We described the Minesweeper algorithm for processing join queries over data stored in ordered data structures that model traditional relational databases. We showed that Minesweeper can achieve stronger runtime guarantees than previous algorithms; in particular, we believe Minesweeper is the first algorithm to offer beyond-worst-case guarantees for joins. Our analysis is based on a notion of certificates, which provide a uniform, algorithm-independent measure of the difficulty of the problem. In particular, certificates capture what we argue is a natural class of comparison-based join algorithms.
Our main technical result is that for acyclic queries there is some GAO such that Minesweeper runs in time linear in the certificate size plus the output size. Thus, Minesweeper is optimal, up to a logarithmic factor, among comparison-based algorithms. Moreover, the class of acyclic queries is the boundary of this notion of complexity, in that we show that for every cyclic query some instances require time superlinear in the certificate size. And so, we are able to completely characterize those queries that can be evaluated in time linear in the certificate and hence are optimal in a strong sense. Conceptually, certificates change the complexity landscape for join processing: the analogous boundary under traditional worst-case complexity contains acyclic queries for which, assuming a strong form of the exponential time hypothesis, no fixed polynomial bound in the certificate size is possible. We then considered how to extend our results using treewidth. We showed that the same Minesweeper algorithm obtains a runtime of Õ(|C|^{w+1} + Z) for queries with treewidth w. For the triangle query (with treewidth 2), we presented a modified algorithm that runs in time Õ(|C|^{3/2} + Z).
Future Work
We are excited by the notion of certificate-based complexity for join algorithms; we see it as contributing to an emerging push beyond worst-case analysis in theoretical computer science. We hope for future work in several directions on joins and certificate-based complexity.
Indexing and Certificates
The interplay between indexing and certificates may provide fertile ground for further research. For example, the certificate size depends on the order of attributes: a certificate in one order may be smaller than in another order. We do not yet have a handle on how the certificate size changes for the same data under different orders. Ideally, one would know the smallest certificate size for any query and process the query in that order. Moreover, we do not know how to make use of multiple access paths (e.g., B-trees with different search keys) in either the analysis or the algorithm. Such indexes may enable dramatically faster algorithms and new types of query optimization.
Fractional Covers
A second direction: join processing has seen a slew of powerful techniques based on increasingly sophisticated notions of covers and decompositions for queries. We expect that such covers (hypertree, fractional hypertree, etc.) could be used to tighten and improve our bounds. For the triangle query, we match the fractional cover bound with the input size replaced by the certificate size. But is this possible for all queries?
Acknowledgments
We thank LogicBlox, Mahmoud Abo Khamis, Semih Salihoglu and Dan Suciu for many helpful conversations.
HQN’s work is partly supported by NSF grant CCF1319402 and a gift from Logicblox. CR’s work on this project is generously supported by NSF CAREER Award under No. IIS1353606, NSF award under No. CCF1356918, the ONR under awards No. N000141210041 and No. N000141310129, Sloan Research Fellowship, Oracle, and Google. AR’s work is partly supported by NSF CAREER Award CCF0844796, NSF grant CCF1319402 and a gift from Logicblox.
References
 [1] S. Abiteboul, R. Hull, and V. Vianu, Foundations of Databases, AddisonWesley, 1995.
 [2] I. Adler, G. Gottlob, and M. Grohe, Hypertree width and related hypergraph invariants, European J. Combin., 28 (2007).
 [3] P. Afshani, J. Barbay, and T. M. Chan, Instanceoptimal geometric algorithms, in FOCS, 2009, pp. 129–138.
 [4] N. Ailon, B. Chazelle, K. L. Clarkson, D. Liu, W. Mulzer, and C. Seshadhri, Selfimproving algorithms, SIAM J. Comput., 40 (2011), pp. 350–375.
 [5] N. Alon, On the number of subgraphs of prescribed type of graphs with a given number of edges, Israel J. Math., 38 (1981).
 [6] S. Arnborg and A. Proskurowski, Linear time algorithms for NPhard problems restricted to partial k-trees, Discrete Appl. Math., 23 (1989), pp. 11–24.
 [7] A. Atserias, M. Grohe, and D. Marx, Size bounds and query plans for relational joins, in FOCS, 2008, pp. 739–748.
 [8] J. Barbay and C. Kenyon, Adaptive intersection and tthreshold problems, in SODA, 2002, pp. 390–399.
 [9] , Alternation and redundancy analysis of the intersection problem, ACM Transactions on Algorithms, 4 (2008).
 [10] J. Barbay and A. LópezOrtiz, Efficient algorithms for context query evaluation over a tagged corpus, in SCCC, M. Arenas and B. Bustos, eds., IEEE Computer Society, 2009, pp. 11–17.
 [11] C. Beeri, R. Fagin, D. Maier, A. Mendelzon, J. Ullman, and M. Yannakakis, Properties of acyclic database schemes, in STOC, New York, NY, USA, 1981, ACM, pp. 355–362.
 [12] C. Beeri, R. Fagin, D. Maier, and M. Yannakakis, On the desirability of acyclic database schemes, J. ACM, 30 (1983), pp. 479–513.
 [13] C. Berge, Graphs and Hypergraphs, Elsevier Science Ltd, 1985.
 [14] J. BraultBaron, A Negative Conjunctive Query is Easy if and only if it is BetaAcyclic, in CSL 12, vol. 16, 2012, pp. 137–151.
 [15] A. Brouwer and A. Kolen, A superbalanced hypergraph has a nest point, (1980). Tech. Report.
 [16] A. K. Chandra and P. M. Merlin, Optimal implementation of conjunctive queries in relational data bases, in STOC, 1977.
 [17] C. Chekuri and A. Rajaraman, Conjunctive query containment revisited, Theor. Comput. Sci., 239 (2000), pp. 211–229.
 [18] J. Chen, S. Lu, S.H. Sze, and F. Zhang, Improved algorithms for path, matching, and packing problems, in SODA, 2007, pp. 298–307.
 [19] R. Dechter and J. Pearl, Tree clustering for constraint networks., Artificial Intelligence, 38 (1989), pp. 353–366.
 [20] E. D. Demaine, A. LópezOrtiz, and J. I. Munro, Adaptive set intersections, unions, and differences, in SODA, 2000, pp. 743–752.
 [21] D. Duris, Hypergraph acyclicity and extension preservation theorems, in LICS, 2008, pp. 418–427.
 [22] , Some characterizations of γ and β-acyclicity of hypergraphs, Information Processing Letters, 112 (2012).
 [23] V. EstivillCastro and D. Wood, A survey of adaptive sorting algorithms, ACM Comput. Surv., 24 (1992), pp. 441–476.
 [24] R. Fagin, Degrees of acyclicity for hypergraphs and relational database schemes, J. ACM, 30 (1983), pp. 514–550.
 [25] R. Fagin, A. Lotem, and M. Naor, Optimal aggregation algorithms for middleware, J. Comput. Syst. Sci., 66 (2003), pp. 614–656.
 [26] R. Fagin, A. O. Mendelzon, and J. D. Ullman, A simplified universal relation assumption and its properties, TODS, 7 (1982).
 [27] J. Flum, M. Frick, and M. Grohe, Query evaluation via treedecompositions, J. ACM, 49 (2002), pp. 716–752.
 [28] E. C. Freuder, Complexity of ktree structured constraint satisfaction problems, in AAAI, AAAI’90, AAAI Press, 1990, pp. 4–9.
 [29] N. Goodman and O. Shmueli, Tree queries: a simple class of relational queries, ACM Trans. Database Syst., 7 (1982).
 [30] G. Gottlob, M. Grohe, N. Musliu, M. Samer, and F. Scarcello, Hypertree decompositions: Structure, algorithms, and applications, in WG, D. Kratsch, ed., vol. 3787 of LNCS, Springer, 2005.
 [31] G. Gottlob, N. Leone, and F. Scarcello, Hypertree decompositions and tractable queries, J. Comput. Syst. Sci., 64 (2002), pp. 579–627.
 [32] G. Gottlob, Z. Miklós, and T. Schwentick, Generalized hypertree decompositions: Nphardness and tractable variants, J. ACM, 56 (2009), pp. 30:1–30:32.
 [33] M. Grohe, The parameterized complexity of database queries, in PODS, 2001, pp. 82–92.
 [34] P. Heggernes and B. W. Peyton, Fast computation of minimal fill inside a given elimination ordering, SIAM J. Matrix Anal. Appl., 30 (2008), pp. 1424–1444.
 [35] F. K. Hwang and S. Lin, A simple algorithm for merging two disjoint linearly ordered sets, SIAM J. Comput., 1 (1972), pp. 31–39.
 [36] A. Itai and M. Rodeh, Finding a minimum circuit in a graph, SIAM J. Comput., 7 (1978), pp. 413–423.
 [37] M. N. Kolountzakis, G. L. Miller, R. Peng, and C. E. Tsourakakis, Efficient triangle counting in large graphs via degreebased vertex partitioning, Internet Mathematics, 8 (2012), pp. 161–185.
 [38] D. Maier and J. D. Ullman, Connections in acyclic hypergraphs: extended abstract, in PODS, ACM, 1982, pp. 34–39.
 [39] D. Marx, Approximating fractional hypertree width, ACM Transactions on Algorithms, 6 (2010).
 [40] H. Q. Ngo, E. Porat, C. Ré, and A. Rudra, Worstcase optimal join algorithms: [extended abstract], in PODS, 2012, pp. 37–48.
 [41] S. Ordyniak, D. Paulusma, and S. Szeider, Satisfiability of Acyclic and Almost Acyclic CNF Formulas, in FSTTCS 2010, vol. 8, 2010.
 [42] A. Pagh and R. Pagh, Scalable computation of acyclic joins, in PODS, 2006, pp. 225–232.
 [43] C. H. Papadimitriou and M. Yannakakis, On the complexity of database queries, in PODS, 1997, pp. 12–19.
 [44] M. Pǎtraşcu, Towards polynomial lower bounds for dynamic problems, in STOC, 2010, pp. 603–610.
 [45] R. Ramakrishnan and J. Gehrke, Database Management Systems, McGrawHill, Inc., New York, NY, USA, 3 ed., 2003.
 [46] T. Roughgarden, Lecture notes for CS369N "Beyond worst-case analysis". http://theory.stanford.edu/~tim/f09/f09.html, 2009.
 [47] , Problem set #1 (CS369N: Beyond worst-case analysis). http://theory.stanford.edu/~tim/f11/hw1.pdf, 2011.
 [48] W. Schafhauser, New Heuristic Methods for Tree Decompositions and Generalized Hypertree Decompositions, Master’s thesis, 2006.
 [49] S. Suri and S. Vassilvitskii, Counting triangles and the curse of the last reducer, in WWW, 2011, pp. 607–614.
 [50] M. Y. Vardi, The complexity of relational query languages (extended abstract), in STOC, 1982, pp. 137–146.
 [51] V. Vassilevska and R. Williams, Finding a maximum weight triangle in n^{3−δ} time, with applications, in STOC, 2006, pp. 225–231.
 [52] V. Vassilevska and R. Williams, Finding, minimizing, and counting weighted subgraphs, in STOC, ACM, 2009, pp. 455–464.
 [53] T. L. Veldhuizen, Leapfrog triejoin: a worstcase optimal join algorithm, ICDT, (2014). To Appear.
 [54] D. E. Willard, An algorithm for handling many relational calculus queries efficiently, J. Comput. Syst. Sci., 65 (2002), pp. 295–331.
 [55] M. Yannakakis, Algorithms for acyclic database schemes, in VLDB, 1981, pp. 82–94.
Appendix A The GAO and query’s structure
The input query can be represented by a hypergraph H = (V, E), where V is the set of all attributes and E is the collection of the input relations' attribute sets. Any global attribute order (GAO) is just a permutation of the vertices of H. In the literature on logic, constraint satisfaction, databases [48], graphical models, and sparse matrix computation [34], a permutation of the vertices of a hypergraph is called an elimination order, and such orders can be used to characterize many important properties of the hypergraph.
Our algorithm is no different: its performance is intimately related to properties of the GAO, which in turn reflect structural properties of the query hypergraph. In this section we state some relevant known results and derive two slightly new results on the relationship between elimination orders and notions of width and acyclicity of hypergraphs.
a.1 Basic concepts
There are many definitions of acyclic hypergraphs. A hypergraph is acyclic if the GYO procedure returns an empty hypergraph [1, p. 128]. Essentially, GYO iterates two steps: (1) remove any hyperedge that is empty or contained in another hyperedge, and (2) remove vertices that appear in at most one hyperedge. If the result is empty, then the hypergraph is acyclic. A query is β-acyclic if the hypergraph formed by any subset of its hyperedges is acyclic. Thus, the requirement that a hypergraph be β-acyclic is (strictly) stronger than being acyclic. We illustrate this with an example.
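The GYO procedure described above is straightforward to implement. A minimal sketch (our own code, with hyperedges given as sets of attribute names):

```python
from collections import Counter

def is_alpha_acyclic(hyperedges):
    """Test acyclicity with the GYO reduction: repeatedly
    (1) drop a hyperedge that is empty or contained in another hyperedge,
    (2) drop "ear" vertices that occur in at most one hyperedge.
    The hypergraph is acyclic iff everything gets eliminated."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        # (1) remove one empty edge or one edge contained in another edge
        for i, e in enumerate(edges):
            if not e or any(i != j and e <= f for j, f in enumerate(edges)):
                edges.pop(i)
                changed = True
                break
        # (2) strip vertices appearing in only one remaining edge
        count = Counter(v for e in edges for v in e)
        for e in edges:
            lonely = {v for v in e if count[v] <= 1}
            if lonely:
                e -= lonely
                changed = True
    return not edges
```

β-acyclicity could then be tested, albeit very inefficiently, by running this check on every subset of the hyperedges.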
Example A.1.
We map freely between hypergraphs and queries. The triangle query, with three binary relations over the attributes A, B, and C taken pairwise, is cyclic (and hence β-cyclic). However, if one adds a relation over all three attributes A, B, C, the resulting query is acyclic, but it is still β-cyclic.
We can also define these concepts via a notion of tree decomposition.
Definition A.2 (Tree decomposition).
Let H = (V, E) be a hypergraph. A tree decomposition of H is a pair (T, χ), where T is a tree and χ assigns to each node t of the tree a set χ(t) of vertices of H. The sets χ(t), for nodes t of T, are called the bags of the tree decomposition. The bags must satisfy two properties:
1. For every hyperedge F ∈ E, there is a bag χ(t) such that F ⊆ χ(t).
2. For every vertex v ∈ V, the set of nodes t with v ∈ χ(t) forms a connected subtree of T.