Aghighi and Bäckström have previously studied cost-optimal planning (COP) and net-benefit planning (NBP) for three action cost domains: the positive integers (Z_+), the non-negative integers (Z_0) and the positive rationals (Q_+). These were indistinguishable under standard complexity analysis for both problems, but were separated for COP using parameterised complexity analysis. With the plan cost, k, as parameter, COP was W[2]-complete for Z_+, but para-NP-hard for both Z_0 and Q_+, i.e. presumably much harder. NBP was para-NP-hard for all three domains and thus remained inseparable. We continue by considering combinations with several additional parameters, and also the non-negative rationals (Q_0). Examples of new parameters are the plan length, l, and the largest denominator of the action costs, d. Our findings include: (1) COP remains W[2]-hard for all domains, even when combining all parameters; (2) COP for Z_0 is in W[2] for the combined parameter {k,l}; (3) COP for Q_+ is in W[2] for {k,d}; and (4) COP for Q_0 is in W[2] for {k,d,l}. For NBP we consider further additional parameters, of which the most crucial one for reducing complexity is the sum of variable utilities. Our results help to explain the previous results, e.g. the separation between Z_+ and Q_+ for COP, and to refine the previous connections with empirical findings.
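Result (3) is presumably best understood via the standard scaling argument: when every denominator is at most d, multiplying all costs by the least common multiple of the denominators yields an equivalent instance over Z_+, with the new cost bound still bounded by a function of {k, d} alone. A minimal sketch of this argument (the function name and representation are ours):

```python
from fractions import Fraction
from math import lcm

def scale_to_integer_costs(costs, k):
    """Scale positive rational action costs to positive integers.

    costs: action costs as Fractions, with denominators at most d.
    k: the plan-cost bound (the parameter).
    The scaling factor L is the lcm of all denominators, so L <= lcm(1..d)
    and the scaled bound k*L is still a function of {k, d} only.
    """
    k = Fraction(k)
    L = lcm(k.denominator, *(c.denominator for c in costs))
    return [int(c * L) for c in costs], int(k * L)

# Costs with denominators at most d = 4; the bound 5/2 scales to 30.
costs = [Fraction(1, 2), Fraction(3, 4), Fraction(2, 3)]
print(scale_to_integer_costs(costs, Fraction(5, 2)))  # ([6, 9, 8], 30)
```

Note that lcm(1,...,d) can be exponential in d, which is harmless for parameterised bounds but indicates why d itself, rather than the encoding length of the costs, is the relevant parameter.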
Cost-optimal planning (COP) uses action costs and asks for a minimum-cost plan. It is sometimes assumed that there is no harm in using actions with zero cost or rational cost. Classical complexity analysis does not contradict this assumption; planning is PSPACE-complete regardless of whether action costs are positive or non-negative, integer or rational. We thus apply parameterised complexity analysis to shed more light on this issue. Our main results are the following. COP is W[2]-complete for positive integer costs, i.e. it is no harder than finding a minimum-length plan, but it is para-NP-hard if the costs are non-negative integers or positive rationals. This is a very strong indication that the latter cases are substantially harder. Net-benefit planning (NBP) additionally assigns goal utilities and asks for a plan with maximum difference between its utility and its cost. NBP is para-NP-hard even when action costs and utilities are positive integers, suggesting that it is harder than COP. In addition, we also analyse a large number of subclasses, using both the PUBS restrictions and restrictions on the number of preconditions and effects.
Bäckström has previously studied a number of optimization problems for partial-order plans, like finding a minimum deordering (MCD) or reordering (MCR), and finding the minimum parallel execution length (PPL), which are all NP-complete. We revisit these problems, this time applying parameterized complexity analysis rather than standard complexity analysis. We consider various parameters, including both the original and desired size of the plan order, as well as its width and height. Our findings include that MCD and MCR are W[2]-hard and in W[P] when parameterized with the desired order size, and that MCD is fixed-parameter tractable (fpt) when parameterized with the original order size. Problem PPL is fpt if parameterized with the size of the non-concurrency relation, but para-NP-hard in most other cases. We also consider this problem when the number (k) of agents, or processors, is restricted, finding that this number is a crucial parameter; the problem is fixed-parameter tractable with the order size, the parallel execution length and k as parameters, but para-NP-hard without k as a parameter.
Cost-optimal planning has become a very well-studied topic within planning. Needless to say, cost-optimal planning has proven to be computationally hard both theoretically and in practice. Since cost-optimal planning is an optimisation problem, it is natural to analyse it from an approximation point of view. Even though such studies may be valuable in themselves, additional motivation is provided by the fact that there is a very close link between approximability and the performance of heuristics used in heuristic search. The aim of this paper is to analyse approximability (and indirectly the performance of heuristics) with respect to lower time bounds. That is, we are not content with merely classifying problems into complexity classes; we also study their time complexity. This is achieved by replacing standard complexity-theoretic assumptions (such as P not equal to NP) with the exponential time hypothesis (ETH). This enables us to analyse, for instance, the performance of the h^+ heuristic and obtain general trade-off results that correlate approximability bounds with bounds on time complexity.
The use of computational complexity in planning, and in AI in general, has always been a disputed topic. A major problem with ordinary worst-case analyses is that they do not provide any quantitative information: they do not tell us much about the running time of concrete algorithms, nor do they tell us much about the running time of optimal algorithms. We address problems like this by presenting results based on the exponential time hypothesis (ETH), which is a widely accepted hypothesis concerning the time complexity of 3-SAT. By using this approach, we provide, for instance, almost matching upper and lower bounds on the time complexity of propositional planning.
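For reference, the hypothesis itself is usually stated as follows (the standard formulation, with n the number of variables in a 3-SAT instance):

$$\text{(ETH)}\qquad \text{3-SAT with } n \text{ variables cannot be solved in time } 2^{o(n)}.$$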
We apply the theory of parameterised complexity to planning, using the concept of fixed-parameter tractability (fpt) which is more relaxed than the usual tractability concept. The parameter we focus on is the maximal number of paths in the domain-transition graphs, and we show that for this parameter, optimal planning is fpt for planning instances with polytree causal graphs and acyclic domain-transition graphs. If this parameter is combined with the additional parameters of domain size for the variables and the treewidth of the causal graph, then planning is fpt also for instances with arbitrary causal graphs. Furthermore, all these parameters can themselves be tested in advance in fpt time. These results also imply that delete-relaxed planning is fpt, even in its recent generalisation to non-binary variables.
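The path parameter can indeed be checked ahead of time: in an acyclic domain-transition graph the number of directed paths follows from a simple dynamic program over the successors. A minimal sketch (the adjacency-dict representation, the function name, and the convention of counting single-vertex paths are our illustrative choices):

```python
from functools import lru_cache

def count_paths(dtg):
    """Count the directed paths of an acyclic domain-transition graph.

    dtg: dict mapping every vertex to its list of successors.
    Each path is counted once, at its start vertex; the single-vertex
    path at each vertex is included in the count.
    """
    @lru_cache(maxsize=None)
    def starting_at(v):
        return 1 + sum(starting_at(w) for w in dtg[v])
    return sum(starting_at(v) for v in dtg)

# A diamond a->b->d, a->c->d has 10 paths (4 trivial, 4 edges, 2 longer).
print(count_paths({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))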
This thesis consists of two papers on different but related topics. The first paper is concerned with the use of logic as a tool to model mechanical assembly processes. A restricted 2+-dimensional world is introduced; although this world is considerably simpler than a 3-dimensional one, it is powerful enough to capture most of the interesting geometrical problems arising in assembly processes. The geometry of this 2+-dimensional world is axiomatized in first-order logic. A basic set of assembly operations is identified, and these operations are expressed in a variant of dynamic logic which is modified to attack the frame problem.
The second paper presents a formalism for reasoning about systems of sequential and parallel actions that may interfere or interact with each other. All synchronization of actions is implicit in the definitions of the actions and no explicit dependency information exists. The concept of action hierarchies is defined, and the couplings between the different abstraction levels are implicit in the action definitions. The hierarchies can be used both top-down and bottom-up and thus support both planning and plan recognition in a more general way than is usual.
Bäckström studied the parameterised complexity of planning when the domain-transition graphs (DTGs) are acyclic. He used the parameters d (domain size), k (number of paths in the DTGs) and w (treewidth of the causal graph), and showed that planning is fixed-parameter tractable (fpt) in these parameters, and fpt in the parameter k alone if the causal graph is a polytree. We continue this work by considering some additional cases of non-acyclic DTGs. In particular, we consider the case where each strongly connected component (SCC) in a DTG must be a simple cycle, and we show that planning is fpt for this case if the causal graph is a polytree. This is done by first preprocessing the instance to construct an equivalent abstraction and then applying Bäckström's technique to this abstraction. We use the parameters d and k, reinterpreting the latter as the number of paths in the condensation of a DTG, together with two new parameters: c (the number of contracted cycles along a path) and p_max (an upper bound for walking around cycles, when this is not unbounded).
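The structural restriction itself is easy to test: a strongly connected vertex set is a simple cycle exactly when every vertex has out-degree 1 within the set (strong connectivity then forces a single cycle through all of them). A sketch of such a check, assuming a networkx DiGraph representation (the function name is ours):

```python
import networkx as nx

def sccs_are_simple_cycles(dtg):
    """Test that every nontrivial strongly connected component of the
    DTG is a simple cycle: strong connectivity plus out-degree exactly
    1 within the component forces one cycle through all its vertices."""
    for comp in nx.strongly_connected_components(dtg):
        if len(comp) > 1:
            for v in comp:
                if sum(1 for w in dtg.successors(v) if w in comp) != 1:
                    return False
    return True
```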
The early classifications of the computational complexity of planning under various restrictions in STRIPS (Bylander) and SAS+ (Bäckström and Nebel) have influenced subsequent research in planning in many ways. We go back and reanalyse their subclasses, but this time using the more modern tool of parameterized complexity analysis. This provides new results that together with the old ones give a more detailed picture of the complexity landscape. We demonstrate separation results not possible with standard complexity theory, which contributes to explaining why certain cases of planning have seemed simpler in practice than theory has predicted. In particular, we show that certain restrictions of practical interest are tractable in the parameterized sense of the term, and that a simple heuristic is sufficient to make a well-known partial-order planner exploit this fact.
Macros have long been used in planning to represent subsequences of operators. Macros can be used in place of individual operators during search, sometimes reducing the effort required to find a plan to the goal. Another use of macros is to compactly represent long plans. In this paper we introduce a novel solution concept called automaton plans in which plans are represented using hierarchies of automata. Automaton plans can be viewed as an extension of macros that enables parameterization and branching. We provide several examples that illustrate how automaton plans can be useful, both as a compact representation of exponentially long plans and as an alternative to sequential solutions in benchmark domains such as LOGISTICS and GRID. We also compare automaton plans to other compact plan representations from the literature, and find that automaton plans are strictly more expressive than macros, but strictly less expressive than HTNs and certain representations allowing efficient sequential access to the operators of the plan.
Macros have a long-standing role in planning as a tool for representing repeating subsequences of operators. Macros are useful both for guiding search towards a solution and for representing plans compactly. In this paper we introduce automata plans which consist of hierarchies of finite state automata. Automata plans can be viewed as an extension of macros that enables parametrization and branching. We provide several examples of the utility of automata plans, and prove that automata plans are strictly more expressive than macro plans. We also prove that automata plans admit polynomial-time sequential access of the operators in the underlying “flat” plan, and identify a subset of automata plans that admit polynomial-time random access. Finally, we compare automata plans with other representations allowing polynomial-time sequential access.
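The compactness claim is easy to convey in the plain macro case (automata add parameterization and branching on top of this). In the sketch below, n + 1 macro definitions expand to a flat plan of 2^n operators, which is accessed sequentially without ever being materialized; the representation and names are our illustration:

```python
def expand(symbol, macros):
    """Yield the primitive operators of the flat plan denoted by
    `symbol`, one at a time (sequential access). Anything without a
    definition in `macros` is treated as a primitive operator."""
    if symbol not in macros:
        yield symbol
    else:
        for s in macros[symbol]:
            yield from expand(s, macros)

# n + 1 definitions whose flat expansion has 2^n primitive operators.
n = 20
macros = {f"M{i}": [f"M{i-1}", f"M{i-1}"] for i in range(1, n + 1)}
macros["M0"] = ["op"]
print(sum(1 for _ in expand(f"M{n}", macros)))  # 1048576 = 2**20
```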
The use and study of compact representations of objects is widespread in computer science. AI planning can be viewed as the problem of finding a path in a graph that is implicitly described by a compact representation in a planning language. However, compact representations of the path itself (the plan) have not received much attention in the literature. Although both macro plans and reactive plans can be considered as such compact representations, little emphasis has been placed on this aspect in earlier work. There are also compact plan representations that are defined by their access properties, for instance, that they have efficient random access or efficient sequential access. We formally compare two such concepts with macro plans and reactive plans, viewed as compact representations, and provide a complete map of the relationships between them.
The causal graph of a planning instance is an important tool for planning both in practice and in theory. The theoretical studies of causal graphs have largely analysed the computational complexity of planning for instances where the causal graph has a certain structure, often in combination with other parameters like the domain size of the variables. Chen and Giménez disregarded the structure altogether and considered only the size of the weakly connected components. They proved that planning is tractable if the components are bounded by a constant and otherwise intractable. Their intractability result was, however, conditioned by an assumption from parameterised complexity theory that has no known useful relationship with the standard complexity classes. We approach the same problem from the perspective of standard complexity classes, and prove that planning is NP-hard for classes with unbounded components under an additional restriction we refer to as SP-closed. We then argue that most NP-hardness theorems for causal graphs are difficult to apply and, thus, prove a more general result; even if the component sizes grow slowly and the class is not densely populated with graphs, planning still cannot be tractable unless the polynomial hierarchy collapses. Both these results still hold when restricted to the class of acyclic causal graphs. We finally give a partial characterization of the borderline between NP-hard and NP-intermediate classes, giving further insight into the problem.
Modelling abstraction as a function from the original state space to an abstract state space is a common approach in combinatorial search. Sometimes this is too restricted, though, and we have previously proposed a framework using a more flexible concept of transformations between labelled graphs. We also proposed a number of properties to describe and classify such transformations. This framework enabled the modelling of a number of different abstraction methods in a way that facilitated comparative analyses. It is of particular interest that these properties can be used to capture the concept of refinement without backtracking between levels; how to do this has been an open question for at least twenty years. In this paper, we continue our previous research by analysing the complexity of testing the various transformation properties for both explicit and implicit graph representations.
Abstraction has been used in search and planning from the very beginning of AI. Many different methods and formalisms for abstraction have been proposed in the literature but they have been designed from various points of view and with varying purposes. Hence, these methods have been notoriously difficult to analyse and compare in a structured way. In order to improve upon this situation, we present a coherent and flexible framework for modelling abstraction (and abstraction-like) methods based on transformations on labelled graphs. Transformations can have certain method properties that are inherent in the abstraction methods and describe their fundamental modelling characteristics, and they can have certain instance properties that describe algorithmic and computational characteristics of problem instances. The usefulness of the framework is demonstrated by applying it to problems in both search and planning. First, we show that we can capture many search abstraction concepts (such as avoidance of backtracking between levels) and that we can put them into a broader context. We further model five different abstraction concepts from the planning literature. Analysing what method properties they have highlights their fundamental differences and similarities. Finally, we prove that method properties sometimes imply instance properties. Taking also those instance properties into account reveals important information about computational aspects of the five methods.
Compact representation of objects is a common concept in computer science. Automated planning can be viewed as a case of this concept: a planning instance is a compact implicit representation of a graph and the problem is to find a path (a plan) in this graph. While the graphs themselves are represented compactly as planning instances, the paths are usually represented explicitly as sequences of actions. Some cases are known where the plans always have compact representations, for example, using macros. We show that these results do not extend to the general case, by proving a number of bounds for compact representations of plans under various criteria, like efficient sequential or random access of actions. In addition to this, we show that our results have consequences for what can be gained from reformulating planning into some other problem. As a contrast to this we also prove a number of positive results, demonstrating restricted cases where plans do have useful compact representations, as well as proving that macro plans have favourable access properties. Our results are finally discussed in relation to other relevant contexts.
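One such favourable access property can be sketched concretely: with expansion lengths memoized, the i-th operator of an exponentially long macro plan is found by descending the (acyclic) macro hierarchy rather than expanding it, so the time is roughly proportional to the depth of the hierarchy (times the body length at each level). The code reuses the macro-table representation from the earlier sketch and is our illustration, not the paper's construction:

```python
def flat_length(symbol, macros, memo):
    """Length of the flat plan denoted by `symbol`, memoized."""
    if symbol not in macros:
        return 1
    if symbol not in memo:
        memo[symbol] = sum(flat_length(s, macros, memo)
                           for s in macros[symbol])
    return memo[symbol]

def operator_at(symbol, i, macros, memo):
    """Random access: the i-th primitive operator (0-based) of the
    flat plan, found by descent instead of full expansion."""
    while symbol in macros:
        for s in macros[symbol]:
            n = flat_length(s, macros, memo)
            if i < n:
                symbol = s
                break
            i -= n
    return symbol

# With the 2^20-operator plan from the earlier sketch:
# operator_at("M20", 123456, macros, {}) == "op"
```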
Complexity analysis of planning is problematic. Even very simple planning languages are PSPACE-complete, yet cannot model many simple problems naturally, while many languages with much more powerful features are also PSPACE-complete. It is thus difficult to separate planning languages in a useful way and to get complexity figures that better reflect reality. This paper introduces new methods for complexity analysis of planning and similar combinatorial search problems, in order to achieve more precision and more complexity separations than standard methods allow. Padding instances with the solution size yields a complexity measure that is immune to this factor and reveals other causes of hardness that are otherwise hidden. Further combining this method with limited nondeterminism improves the precision, making even finer separations possible. We demonstrate with examples how these methods can narrow the gap between theory and practice.
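One standard way to make the padding construction precise (our notation, with ℓ(x) the minimum solution size for instance x): the padded version of a problem L is

$$L' = \{\, \langle x, 1^{\ell(x)} \rangle : x \in L \,\},$$

so that running times are measured in |x| + ℓ(x). An algorithm that is exponential only because solutions are long thus becomes polynomial on padded instances, and any hardness that remains must have other causes.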
There are two major uses of abstraction in planning and search: refinement (where abstract solutions are extended into concrete solutions) and heuristics (where abstract solutions are used to compute heuristics for the original search space). These two approaches are usually viewed as unrelated in the literature. It is reasonable to believe, though, that they are related, since they are both intrinsically based on the structure of abstract search spaces. We take the first steps towards formally investigating their relationships, employing our recently introduced framework for analysing and comparing abstraction methods. By adding some mechanisms for expressing metric properties, we can capture concepts like admissibility and consistency of heuristics. We present an extensive study of how such metric properties relate to the properties in the original framework, revealing a number of connections between the refinement and heuristic approaches. This also provides new insights into, for example, Valtorta's theorem and spurious states.
Most planning formalisms allow instances with shortest plans of exponential length. While such instances are problematic, they are usually unavoidable and can occur in practice. There are several known cases of restricted planning problems where plans can be exponential but always have a compact (i.e. polynomial-size) representation, often using recursive macros. Such compact representations are important since exponential plans are difficult both to use and to understand. We show that these results do not extend to the general case, by proving a number of bounds for compact representations of plans under various criteria, like efficient sequential or random access of actions. Further, we show that this is unlikely to be circumvented by reformulating planning into some other problem. The results are discussed in the context of abstraction, macros and plan explanation.
There is an extensive literature on the complexity of planning, but explicit bounds on time and space complexity are very rare. On the other hand, problems like the constraint satisfaction problem (CSP) have been thoroughly analysed in this respect. We provide a number of upper- and lower-bound results (the latter based on various complexity-theoretic assumptions such as the Exponential Time Hypothesis) for both satisficing and optimal planning. We show that many classes of planning instances exhibit a dichotomy: either they can be solved in polynomial time or they cannot be solved in subexponential time and thus require O(2^{cn}) time for some c > 0. In many cases, we can even prove closely matching upper and lower bounds; for every ε > 0, the problem can be solved in time O(2^{(1+ε)n}) but not in time O(2^{(1-ε)n}). Our results also indicate, analogously to CSPs, the existence of sharp phase transitions. We finally study and discuss the trade-off between time and space. In particular, we show that depth-first search may sometimes be a viable option for planning under severe space constraints.
There is an extensive literature on the complexity of planning, but explicit bounds on time and space complexity are very rare. On the other hand, problems like the constraint satisfaction problem have been thoroughly analysed in this respect. We provide a number of upper- and lower-bound results for both plan satisfiability (PSAT) and length-optimal planning (LOP), with an emphasis on monotone planning (where actions have only positive effects), which is used in, for instance, h^+ and similar heuristics. Let v and a be the number of variables and actions, respectively. We consider both restrictions on the number and polarity of preconditions and effects of actions and the PUBS restrictions in SAS^+. For all such classes, we show that PSAT and LOP are each either tractable or not solvable in subexponential time 2^{o(v)} or time 2^{o(a)}, unless the so-called Exponential Time Hypothesis (ETH) is false. There is also a sharp transition: monotone LOP can be solved in time 2^{o(v)} if a ∈ o(v/log v) but not if a ∈ Ω(v). We also study upper bounds and discuss the trade-off between time and space, providing a polynomial-space algorithm for monotone LOP that beats depth-first search in most cases. This raises the important question of how lower bounds are affected by polynomial space restrictions.
The propositional planning problem is a notoriously difficult computational problem, which remains hard even under strong syntactical and structural restrictions. Given its difficulty, it becomes natural to study planning in the context of parameterized complexity. In this paper we continue the work initiated by Downey, Fellows and Stege on the parameterized complexity of planning with respect to the parameter "length of the solution plan." We provide a complete classification of the parameterized complexity of the planning problem under two of the most prominent syntactical restrictions: the so-called PUBS restrictions introduced by Bäckström and Nebel, and restrictions on the number of preconditions and effects as introduced by Bylander. We also determine which of the considered fixed-parameter tractable problems admit a polynomial kernel and which do not.
The propositional planning problem is a notoriously difficult computational problem. Downey et al. (1999) initiated the parameterized analysis of planning (with plan length as the parameter) and Bäckström et al. (2012) picked up this line of research and provided an extensive parameterized analysis under various restrictions, leaving open only one stubborn case. We continue this work and provide a full classification. In particular, we show that the case when actions have no preconditions and at most e postconditions is fixed-parameter tractable if e ≤ 2 and W[1]-complete otherwise. We show fixed-parameter tractability by a reduction to a variant of the Steiner Tree problem; this problem has been shown fixed-parameter tractable by Guo et al. (2007). If a problem is fixed-parameter tractable, then it admits a polynomial-time self-reduction to instances whose input size is bounded by a function of the parameter, called the kernel. For some problems, this function is even polynomial, which has desirable computational implications. Recent research in parameterized complexity has focused on classifying fixed-parameter tractable problems according to whether they admit polynomial kernels or not. We revisit all the previously obtained restrictions of planning that are fixed-parameter tractable and show that none of them admits a polynomial kernel unless the polynomial hierarchy collapses to its third level.
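For reference, the standard definition (our phrasing): a kernelization for a parameterized problem Q is a polynomial-time algorithm that maps an instance (x, k) to an instance (x', k') such that

$$(x, k) \in Q \iff (x', k') \in Q \qquad \text{and} \qquad |x'| + k' \le f(k)$$

for some computable function f; the kernel is polynomial when f is a polynomial.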
There has been a tremendous advance in domain-independent planning over the past decades, and planners have become increasingly efficient at finding plans. However, this has not been matched by any corresponding improvement in detecting unsolvable instances. Such instances are obviously important but largely neglected in planning. In other areas, such as constraint solving and model checking, much effort has been spent on devising methods for detecting unsolvability. We introduce a method for detecting unsolvable planning instances that is loosely based on consistency checking in constraint programming. Our method balances completeness against efficiency through a parameter k: the algorithm identifies more unsolvable instances but takes more time for increasing values of k. We present empirical data for our algorithm and some standard planners on a number of unsolvable instances, demonstrating that our method can be very efficient where the planners fail to detect unsolvability within reasonable resource bounds. We observe that planners based on the h^m heuristic or pattern databases are better than other planners for detecting unsolvability. This is not a coincidence since there are similarities (but also significant differences) between our algorithm and these two heuristic methods.
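The flavour of such a consistency check can be conveyed by a small sketch: the k = 2 case of an h^m-style reachability fixpoint, which soundly (but incompletely) reports unsolvability when some pair of goal facts is never jointly reachable. This is our illustration of the general idea under a STRIPS-like encoding, not the algorithm of the paper:

```python
from itertools import combinations

def subsets2(facts):
    """All subsets of size 1 or 2 of a set of facts."""
    return ({frozenset((f,)) for f in facts}
            | {frozenset(p) for p in combinations(facts, 2)})

def h2_unsolvable(init, goal, actions):
    """Sound but incomplete unsolvability test in the spirit of h^2.

    actions: iterable of (pre, add, delete) sets of ground facts.
    A <=2-set S of facts is marked reachable if it holds initially, or
    some action adds part of S, deletes none of it, and has all
    <=2-subsets of pre plus the rest of S already reachable.
    Returns True only when the instance is provably unsolvable.
    """
    universe = set(init) | set(goal)
    for pre, add, dele in actions:
        universe |= pre | add | dele
    reach = subsets2(init)
    candidates = subsets2(universe) - reach
    changed = True
    while changed:
        changed = False
        for s in list(candidates):
            for pre, add, dele in actions:
                if s & dele or not s & add:
                    continue
                if subsets2(pre | (s - add)) <= reach:
                    reach.add(s)
                    candidates.discard(s)
                    changed = True
                    break
    return not subsets2(goal) <= reach

# p and q are each reachable alone, but never jointly: unsolvable.
acts = [({"p"}, {"q"}, {"p"}), ({"q"}, {"p"}, {"q"})]
print(h2_unsolvable({"p"}, {"p", "q"}, acts))  # True
```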
This paper formally presents a class of planning problems, the SAS-PUS class, which allows non-binary state variables and parallel execution of actions. The class is proven to be tractable, and we provide a sound and complete, polynomial-time algorithm for planning within this class. Since the SAS-PUS class is an extension of the previously presented SAS-PUBS class, this result means that we are getting closer to tackling realistic planning problems in sequential control. In such problems, a restricted problem representation is often sufficient, but the size of the problems makes tractability an important issue.
This article describes a polynomial-time, O(n^3), planning algorithm for a limited class of planning problems. Compared to previous work on complexity of algorithms for knowledge-based or logic-based planning, our algorithm achieves computational tractability, but at the expense of only applying to a significantly more limited class of problems. Our algorithm is proven correct, and it always returns a parallel minimal plan if there is a plan at all.
One of the most widespread approaches to reactive planning is Schoppers' universal plans. We propose a stricter definition of universal plans which guarantees a weak notion of soundness, not present in the original definition, and isolate three different types of completeness that capture different behaviors exhibited by universal plans. We show that universal plans which run in polynomial time and are of polynomial size cannot satisfy even the weakest type of completeness unless the polynomial hierarchy collapses. By relaxing either the polynomial time or the polynomial space requirement, the construction of universal plans satisfying the strongest type of completeness becomes trivial. As an alternative approach, we study randomized universal planning. By considering a randomized version of completeness and a restricted (but nontrivial) class of problems, we show that there exist randomized universal plans running in polynomial time and using polynomial space which are sound and complete for the restricted class of problems. We also report experimental results on this approach to planning, showing that the performance of a randomized planner is not easily compared to that of a deterministic planner.
Sequential control is a common control problem in industry. Despite its importance, fairly little theoretical research has been devoted to this problem. We study a subclass of sequential control problems, which we call the SAS-PUBS class, and present a planning algorithm for this class. The algorithm is developed using formalisms from artificial intelligence (AI). For planning problems in the SAS-PUBS class, the algorithm finds a plan from a given initial state to a desired final state if and only if a plan solving the stated planning problem exists. Furthermore, the complexity of the given algorithm increases polynomially with the number of state variables.
This paper formally presents a class of planning problems which allows non-binary state variables and parallel execution of actions. The class is proven to be tractable, and we provide a sound and complete polynomial-time algorithm for planning within this class. This result means that we are getting closer to tackling realistic planning problems in sequential control, where a restricted problem representation is often sufficient, but where the size of the problems makes tractability an important issue.
This paper describes a polynomial-time, O(n^3), planning algorithm for a limited class of planning problems. Compared to previous work on complexity of algorithms for knowledge-based or logic-based planning, our algorithm achieves computational tractability, but at the expense of only applying to a significantly more limited class of problems. Our algorithm is proven correct and complete, and it always returns a minimal plan if there is a plan at all.
The industry wants provably correct and fast formal methods for handling combinatorial dynamical systems. One example of such problems is error recovery in industrial processes. We have used a provably correct, polynomial-time planning algorithm to plan for a miniature assembly line, which assembles toy cars. Although somewhat limited, this process has many similarities with real industrial processes. By exploring the structure of this assembly line we have extended a previously presented algorithm, thus extending the class of problems that can be handled in polynomial time. The planning tool presented here contains general-purpose algorithms that generate plans in the form of GRAFCET charts that are automatically translated into PLC code using a commercial PLC compiler.
The industry wants provably correct and fast formal methods for dealing with combinatorial dynamical systems. One example of such problems is error recovery in industrial processes. We have used a provably correct, polynomial-time planning algorithm to plan for a miniature assembly line, which assembles toy cars. Although somewhat limited, this process has many similarities with real industrial processes. By exploring the structure of this assembly line we have extended a previously presented algorithm, thus enlarging the class of problems that can be handled in polynomial time.
This paper presents a provably correct and efficient, polynomial-time planning tool and its application to a miniature assembly line for toy cars. Although somewhat limited, this process has many similarities with real industrial processes. One of our previous polynomial-time planning algorithms has been extended and adapted to work for a larger class of planning problems, including this particular process. The plans produced by the planner are then translated into GRAFCET charts, which are compiled into code for a programmable logic controller. Although capable of producing ordinary assembly plans, the system is mainly intended for producing plans in error recovery situations.
Issues concerning diagnosis, supervision and safety arise in many technologically advanced products. There is now a trend to extend the functionality of diagnosis and supervision systems to handle more advanced situations. This report collects some of the initiatives taking place in research and some of the developments taking place in industry.