{{redirect2|A*|A star}}
{{Infobox Algorithm
|class=[[Search algorithm]]
|data=[[Graph (data structure)|Graph]]
|time=<math>O(|E|) = O(b^d)</math>
|space=<math>O(|V|) = O(b^d)</math>
|optimal=yes, if an [[admissible heuristic]] is used
|complete=yes
}}
{{graph search algorithm}}

In [[computer science]], '''A*''' (pronounced "A star"<small> ([[File:Speaker Icon.svg|13px|link=|alt=]] [[:Media:En-us-a-star.ogg|listen]])</small>) is a [[computer algorithm]] that is widely used in [[pathfinding]] and [[graph traversal]], the process of plotting an efficiently traversable path between points, called nodes. Noted for its [[Computer performance|performance]] and accuracy, it enjoys widespread use. However, in practical travel-routing systems, it is generally outperformed by algorithms which can pre-process the graph to attain better performance.<ref>{{cite book
| chapter = Engineering route planning algorithms
| author = Delling, D.; Sanders, P.; Schultes, D.; Wagner, D.
| title = Algorithmics of large and complex networks
| pages = 117–139
| publisher = Springer
| year = 2009
| doi = 10.1007/978-3-642-02094-0_7
}}</ref>

[[Peter E. Hart|Peter Hart]], [[Nils Nilsson (researcher)|Nils Nilsson]] and [[Bertram Raphael]] of Stanford Research Institute (now [[SRI International]]) first described the algorithm in 1968.<ref name="nilsson">{{cite journal
| first = P. E.
| last = Hart
| coauthors = Nilsson, N. J.; Raphael, B.
| title = A Formal Basis for the Heuristic Determination of Minimum Cost Paths
| journal = [[Institute of Electrical and Electronics Engineers|IEEE]] Transactions on Systems Science and Cybernetics
| volume = 4
| issue = 2
| pages = 100–107
| year = 1968
| doi = 10.1109/TSSC.1968.300136
}}</ref> It is an extension of [[Edsger Dijkstra|Edsger Dijkstra's]] [[Dijkstra's algorithm|1959 algorithm]]. A* achieves better time performance by using [[Heuristic (computer science)|heuristics]].

==Description==
A* uses a [[best-first search]] and finds a least-cost path from a given initial [[node (graph theory)|node]] to one [[goal node]] (out of one or more possible goals). As A* traverses the graph, it follows a path of the lowest expected total cost or distance, keeping a sorted [[priority queue]] of alternate path segments along the way.

It uses a knowledge-plus-[[heuristic]] cost function of node ''x'' (usually denoted ''f(x)'') to determine the order in which the search visits nodes in the tree. The cost function is a sum of two functions:
* the past path-cost function, which is the known distance from the starting node to the current node ''x'' (usually denoted ''g(x)'')
* a future path-cost function, which is an [[Admissible heuristic|admissible]] "heuristic estimate" of the distance from ''x'' to the goal (usually denoted ''h(x)'').

The ''h(x)'' part of the ''f(x)'' function must be an [[admissible heuristic]]; that is, it must not overestimate the distance to the goal. Thus, for an application like [[routing]], ''h(x)'' might represent the straight-line distance to the goal, since that is physically the smallest possible distance between any two points or nodes.

If the [[heuristic]] ''h'' satisfies the additional condition <math>h(x) \le d(x,y) + h(y)</math> for every edge (''x'', ''y'') of the graph (where ''d'' denotes the length of that edge), then ''h'' is called [[Consistent heuristic|monotone, or consistent]]. In such a case, A* can be implemented more efficiently—roughly speaking, no node needs to be processed more than once (see ''closed set'' below)—and A* is equivalent to running [[Dijkstra's algorithm]] with the [[reduced cost]] ''d'(x, y) := d(x, y) + h(y) - h(x)''.
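
For illustration, here is a minimal sketch of these two cost functions in Python. The coordinate-pair node representation and the names <code>g_score</code> and <code>straight_line</code> are assumptions made for the example, not part of the algorithm's definition:

<syntaxhighlight lang="python">
import math

def straight_line(a, b):
    """Admissible heuristic h(x): the Euclidean distance from a to b
    never overestimates the true travel cost between them."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def f_cost(g_score, node, goal):
    """f(x) = g(x) + h(x): the known cost from the start to `node`
    plus an optimistic estimate of the remaining cost to `goal`."""
    return g_score[node] + straight_line(node, goal)
</syntaxhighlight>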

==History==
In 1968 Nils Nilsson suggested a heuristic approach for [[Shakey the Robot]] to navigate through a room containing obstacles. This path-finding algorithm, called A1, was a faster version of the then best known formal approach, [[Dijkstra's algorithm]], for finding shortest paths in graphs. Bertram Raphael suggested some significant improvements upon this algorithm, calling the revised version A2. Then Peter E. Hart introduced an argument that established A2, with only minor changes, to be the best possible algorithm for finding shortest paths. Hart, Nilsson and Raphael then jointly developed a proof that the revised A2 algorithm was ''optimal'' for finding shortest paths under certain well-defined conditions.

==Process==
Like all [[informed search algorithm]]s, it first searches the routes that ''appear'' to be most likely to lead towards the goal. What sets A* apart from a [[greedy algorithm|greedy]] [[best-first search]] is that it also takes the distance already traveled into account; the ''g(x)'' part of the heuristic is the cost from the starting point, not simply the local cost from the previously expanded node.

Starting with the initial node, it maintains a [[priority queue]] of nodes to be traversed, known as the ''open set'' or ''fringe''. The lower ''f(x)'' for a given node ''x'', the higher its priority. At each step of the algorithm, the node with the lowest ''f(x)'' value is removed from the queue, the ''f'' and ''g'' values of its neighbors are updated accordingly, and these neighbors are added to the queue. The algorithm continues until a goal node has a lower ''f'' value than any node in the queue (or until the queue is empty). (Goal nodes may be passed over multiple times if there remain other nodes with lower ''f'' values, as they may lead to a shorter path to a goal.) The ''f'' value of the goal is then the length of the shortest path, since ''h'' at the goal is zero in an admissible heuristic.

The algorithm described so far gives us only the length of the shortest path. To find the actual sequence of steps, the algorithm can be easily revised so that each node on the path keeps track of its predecessor. After this algorithm is run, the ending node will point to its predecessor, and so on, until some node's predecessor is the start node.

Additionally, if the heuristic is ''monotonic'' (or [[Consistent heuristic|consistent]], see below), a ''closed set'' of nodes already traversed may be used to make the search more efficient.

==Pseudocode==
The following [[pseudocode]] describes the algorithm:

<syntaxhighlight lang="pascal">
function A*(start, goal)
    closedset := the empty set    // The set of nodes already evaluated.
    openset := {start}    // The set of tentative nodes to be evaluated, initially containing the start node
    came_from := the empty map    // The map of navigated nodes.

    g_score[start] := 0    // Cost from start along best known path.
    // Estimated total cost from start to goal through y.
    f_score[start] := g_score[start] + heuristic_cost_estimate(start, goal)

    while openset is not empty
        current := the node in openset having the lowest f_score[] value
        if current = goal
            return reconstruct_path(came_from, goal)

        remove current from openset
        add current to closedset
        for each neighbor in neighbor_nodes(current)
            if neighbor in closedset
                continue
            tentative_g_score := g_score[current] + dist_between(current, neighbor)

            if neighbor not in openset or tentative_g_score < g_score[neighbor]
                came_from[neighbor] := current
                g_score[neighbor] := tentative_g_score
                f_score[neighbor] := g_score[neighbor] + heuristic_cost_estimate(neighbor, goal)
                if neighbor not in openset
                    add neighbor to openset

    return failure

function reconstruct_path(came_from, current_node)
    if current_node in came_from
        p := reconstruct_path(came_from, came_from[current_node])
        return (p + current_node)
    else
        return current_node
</syntaxhighlight>

'''Remark:''' the above pseudocode assumes that the heuristic function is ''monotonic'' (or [[Consistent heuristic|consistent]], see below), which is the case in many practical problems, such as the shortest-distance path in road networks. However, if the assumption is not true, nodes in the '''closed''' set may be rediscovered and their cost improved. In other words, the closed set can be omitted (yielding a tree search algorithm) if a solution is guaranteed to exist, or if the algorithm is adapted so that new nodes are added to the open set only if they have a lower ''f'' value than at any previous iteration.
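
For readers who want something executable, the following is a minimal runnable translation of the pseudocode into Python, using a binary heap for the open set. The adjacency-dict graph representation and the function names are illustrative assumptions, not part of the algorithm itself:

<syntaxhighlight lang="python">
import heapq
from itertools import count

def a_star(graph, start, goal, heuristic):
    """Minimal A* sketch. `graph` maps each node to a dict of
    {neighbor: edge_cost}; `heuristic(node, goal)` must be admissible.
    Returns a least-cost path as a list of nodes, or None."""
    tie = count()  # breaks f-score ties without comparing nodes
    open_heap = [(heuristic(start, goal), next(tie), start)]
    g_score = {start: 0}
    came_from = {}
    closed_set = set()

    while open_heap:
        _, _, current = heapq.heappop(open_heap)
        if current == goal:
            # Follow the predecessor links back to the start.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        if current in closed_set:
            continue  # stale duplicate entry; node already expanded
        closed_set.add(current)
        for neighbor, cost in graph[current].items():
            tentative_g = g_score[current] + cost
            if neighbor not in g_score or tentative_g < g_score[neighbor]:
                g_score[neighbor] = tentative_g
                came_from[neighbor] = current
                f = tentative_g + heuristic(neighbor, goal)
                heapq.heappush(open_heap, (f, next(tie), neighbor))
    return None

# Toy usage with a zero heuristic (no domain knowledge):
roads = {'A': {'B': 1, 'C': 4}, 'B': {'C': 1, 'D': 5},
         'C': {'D': 1}, 'D': {}}
print(a_star(roads, 'A', 'D', lambda n, goal: 0))  # ['A', 'B', 'C', 'D']
</syntaxhighlight>

Rather than updating queue entries in place, this sketch pushes duplicate entries and skips stale ones as they are removed; see ''Implementation details'' below.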

[[Image:Astar progress animation.gif|thumb|Illustration of A* search for finding a path from a start node to a goal node in a [[robotics|robot]] [[motion planning]] problem. The empty circles represent the nodes in the ''open set'', i.e., those that remain to be explored, and the filled ones are in the closed set. The color on each closed node indicates the distance from the start: the greener, the farther. One can first see the A* moving in a straight line in the direction of the goal, then, when hitting the obstacle, it explores alternative routes through the nodes from the open set. {{see also|Dijkstra's algorithm}}]]

===Example===
An example of the A* algorithm in action, where nodes are cities connected with roads and ''h(x)'' is the straight-line distance to the target point:

[[File:AstarExample.gif|An example of the A* algorithm in action (nodes are cities connected with roads, h(x) is the straight-line distance to the target point). Green: start; blue: target; orange: visited.]]

'''Note:''' This example uses a comma as the [[decimal separator]].

==Properties==
Like [[breadth-first search]], A* is ''complete'' and will always find a solution if one exists.

If the heuristic function ''h'' is [[admissible heuristic|admissible]], meaning that it never overestimates the actual minimal cost of reaching the goal, then A* is itself admissible (or ''optimal'') if we do not use a closed set. If a closed set is used, then ''h'' must also be ''monotonic'' (or [[consistent heuristic|consistent]]) for A* to be optimal. This means that for any pair of adjacent nodes ''x'' and ''y'', where ''d(x,y)'' denotes the length of the edge between them, we must have:

:<math>h(x) \le d(x,y) + h(y)</math>

This ensures that for any path ''X'' from the initial node to ''x'':

:<math>L(X) + h(x) \le L(X) + d(x,y) + h(y) = L(Y) + h(y)</math>

where ''L'' is a function that denotes the length of a path, and ''Y'' is the path ''X'' extended to include ''y''. In other words, it is impossible to decrease (total distance so far + estimated remaining distance) by extending a path to include a neighboring node. (This is analogous to the restriction to nonnegative edge weights in [[Dijkstra's algorithm]].) Monotonicity implies admissibility when the heuristic estimate at any goal node itself is zero, since (letting ''P = (f,v<sub>1</sub>,v<sub>2</sub>,...,v<sub>n</sub>,g)'' be a shortest path from any node ''f'' to the nearest goal ''g''):

:<math>h(f) \le d(f,v_1) + h(v_1) \le d(f,v_1) + d(v_1,v_2) + h(v_2) \le \ldots \le L(P) + h(g) = L(P)</math>
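
The monotonicity condition is mechanical to verify. A small hedged sketch in Python (the directed edge-list representation is an assumption made for illustration):

<syntaxhighlight lang="python">
def is_consistent(edges, h, goal):
    """Check h(x) <= d(x, y) + h(y) on every directed edge (x, y, d),
    plus h(goal) == 0; together these imply h is also admissible."""
    return h(goal) == 0 and all(h(x) <= d + h(y) for x, y, d in edges)
</syntaxhighlight>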

A* is also optimally efficient for any heuristic ''h'', meaning that no optimal algorithm employing the same heuristic will expand fewer nodes than A*, except when there are multiple partial solutions where ''h'' exactly predicts the cost of the optimal path. Even in this case, for each graph there exists some order of breaking ties in the priority queue such that A* examines the fewest possible nodes.

===Special cases===
[[Dijkstra's algorithm]], as another example of a uniform-cost search algorithm, can be viewed as a special case of A* where ''h(x) = 0'' for all ''x'' (see the sketch below). General [[depth-first search]] can be implemented using A* with a global counter ''C'' initialized to a very large value. Every time we process a node, we assign ''C'' to all of its newly discovered neighbors; after each single assignment, we decrease the counter ''C'' by one. Thus, the earlier a node is discovered, the higher its ''h(x)'' value. However, both Dijkstra's algorithm and depth-first search can be implemented more efficiently without including an ''h(x)'' value at each node.
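
Using the hypothetical <code>a_star</code> sketch from the pseudocode section, the Dijkstra special case is a one-line wrapper:

<syntaxhighlight lang="python">
def dijkstra(graph, start, goal):
    """Dijkstra's algorithm as the special case of A* with h(x) = 0:
    the expansion order is then driven purely by g(x)."""
    return a_star(graph, start, goal, lambda node, goal: 0)
</syntaxhighlight>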

===Implementation details===
There are a number of simple optimizations or implementation details that can significantly affect the performance of an A* implementation. The first detail to note is that the way the priority queue handles ties can have a significant effect on performance in some situations. If ties are broken so the queue behaves in a [[LIFO (computing)|LIFO]] manner, A* will behave like [[depth-first search]] among equal-cost paths (avoiding exploring more than one equally optimal solution).

When a path is required at the end of the search, it is common to keep with each node a reference to that node's parent. At the end of the search these references can be used to recover the optimal path. If these references are being kept, then it can be important that the same node doesn't appear in the priority queue more than once (each entry corresponding to a different path to the node, and each with a different cost). A standard approach here is to check if a node about to be added already appears in the priority queue. If it does, then the priority and parent pointers are changed to correspond to the lower-cost path. When finding a node in a queue to perform this check, many standard implementations of a [[Heap (data structure)|min-heap]] require ''O(n)'' time. Augmenting the heap with a [[hash table]] can reduce this to constant time{{clarify|date=September 2013}}.
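
A common alternative, used in the runnable sketch earlier, is "lazy deletion": allow duplicate queue entries and discard stale ones as they are removed, trading a larger heap for avoiding the lookup entirely. A hedged sketch of the removal step (the <code>best_g</code> map is an assumed companion structure holding each node's best-known cost):

<syntaxhighlight lang="python">
import heapq

def pop_live_node(open_heap, best_g):
    """Pop heap entries of the form (f, g, node), skipping entries
    whose g no longer matches the best-known cost for that node."""
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if g == best_g.get(node):
            return node  # live entry: reflects the current best path
        # otherwise the entry is stale; a cheaper path superseded it
    return None
</syntaxhighlight>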

==Admissibility and optimality{{anchor|Admissibility and Optimality}}==
A* is [[admissible heuristic|admissible]] and considers fewer nodes than any other admissible search algorithm with the same heuristic. This is because A* uses an "optimistic" estimate of the cost of a path through every node that it considers—optimistic in that the true cost of a path through that node to the goal will be at least as great as the estimate. But, critically, as far as A* "knows", that optimistic estimate might be achievable.

Here is the main idea of the proof:

When A* terminates its search, it has found a path whose actual cost is lower than the estimated cost of any path through any open node. But since those estimates are optimistic, A* can safely ignore those nodes. In other words, A* will never overlook the possibility of a lower-cost path and so is admissible.

Suppose now that some other search algorithm B terminates its search with a path whose actual cost is ''not'' less than the estimated cost of a path through some open node. Based on the heuristic information it has, Algorithm B cannot rule out the possibility that a path through that node has a lower cost. So while B might consider fewer nodes than A*, it cannot be admissible. Accordingly, A* considers the fewest nodes of any admissible search algorithm.

This is only true if both:

* A* uses an [[admissible heuristic]]. Otherwise, A* is not guaranteed to expand fewer nodes than another search algorithm with the same heuristic.<ref>{{cite journal
| first = Rina
| last = Dechter
| coauthors = Judea Pearl
| title = Generalized best-first search strategies and the optimality of A*
| journal = [[Journal of the ACM]]
| volume = 32
| issue = 3
| pages = 505–536
| year = 1985
| doi = 10.1145/3828.3830
| url = http://portal.acm.org/citation.cfm?id=3830&coll=portal&dl=ACM
}}</ref>

* A* solves only one search problem rather than a series of similar search problems. Otherwise, A* is not guaranteed to expand fewer nodes than [[incremental heuristic search]] algorithms.<ref>{{cite journal
| first = Sven
| last = Koenig
| coauthors = Maxim Likhachev; Yaxin Liu; David Furcy
| title = Incremental heuristic search in AI
| journal = [[AI Magazine]]
| volume = 25
| issue = 2
| pages = 99–112
| year = 2004
| url = http://portal.acm.org/citation.cfm?id=1017140
}}</ref>

[[Image:Weighted A star with eps 5.gif|thumb|A* search that uses a heuristic that is 5.0 (= ε) times a [[consistent heuristic]], and obtains a suboptimal path.]]

===Bounded relaxation===
While the admissibility criterion guarantees an optimal solution path, it also means that A* must examine all equally meritorious paths to find the optimal path. It is possible to speed up the search at the expense of optimality by relaxing the admissibility criterion. Often we want to bound this relaxation, so that we can guarantee that the solution path is no worse than ''(1 + ε)'' times the optimal solution path. This new guarantee is referred to as ''ε''-admissible.

There are a number of ''ε''-admissible algorithms:

* Weighted A*. If ''h<sub>a</sub>(n)'' is an admissible heuristic function, the weighted version of the A* search uses ''h<sub>w</sub>(n) = ε h<sub>a</sub>(n)'', ''ε > 1'', as the heuristic function and performs the A* search as usual (which typically finishes faster than using ''h<sub>a</sub>'', since fewer nodes are expanded). The path found by the search algorithm can then have a cost of at most ''ε'' times that of the least-cost path in the graph.<ref name="pearl84"/> A minimal sketch of this weighting appears after this list.

* Static Weighting<ref>{{cite journal
| first = Ira
| last = Pohl
| title = First results on the effect of error in heuristic search
| journal = Machine Intelligence
| volume = 5
| pages = 219–236
| year = 1970
}}</ref> uses the cost function ''f(n) = g(n) + (1 + ε)h(n)''.

* Dynamic Weighting<ref>{{cite conference
| first = Ira
| last = Pohl
| title = The avoidance of (relative) catastrophe, heuristic competence, genuine dynamic weighting and computational issues in heuristic problem solving
| booktitle = Proceedings of the Third International Joint Conference on Artificial Intelligence (IJCAI-73)
| volume = 3
| pages = 11–17
| place = California, USA
| date = August 1973
}}</ref> uses the cost function ''f(n) = g(n) + (1 + ε w(n))h(n)'', where <math>w(n) = \begin{cases} 1 - \frac{d(n)}{N} & d(n) \le N \\ 0 & \text{otherwise} \end{cases}</math>, and where ''d(n)'' is the depth of the search and ''N'' is the anticipated length of the solution path.

* Sampled Dynamic Weighting<ref>{{cite conference
| first = Andreas
| last = Köll
| coauthors = Hermann Kaindl
| title = A new approach to dynamic weighting
| booktitle = Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI-92)
| pages = 16–17
| place = Vienna, Austria
| date = August 1992
}}</ref> uses sampling of nodes to better estimate and debias the heuristic error.

* <math>A^*_\varepsilon</math><ref>{{cite journal
| first = Judea
| last = Pearl
| coauthors = Jin H. Kim
| title = Studies in semi-admissible heuristics
| journal = IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)
| volume = 4
| issue = 4
| pages = 392–399
| year = 1982
}}</ref> uses two heuristic functions. The first is used to build the FOCAL list of candidate nodes; the second, ''h<sub>F</sub>'', is used to select the most promising node from the FOCAL list.

* ''A<sub>ε</sub>''<ref>{{cite conference
| first = Malik
| last = Ghallab
| coauthors = Dennis Allard
| title = ''A<sub>ε</sub>'' – an efficient near admissible heuristic search algorithm
| booktitle = Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83)
| volume = 2
| pages = 789–791
| place = Karlsruhe, Germany
| date = August 1983
}}</ref> selects nodes with the function ''A f(n) + B h<sub>F</sub>(n)'', where ''A'' and ''B'' are constants. If no nodes can be selected, the algorithm will backtrack with the function ''C f(n) + D h<sub>F</sub>(n)'', where ''C'' and ''D'' are constants.

* AlphA*<ref>{{cite paper
| first = Bjørn
| last = Reese
| title = AlphA*: An ''ε''-admissible heuristic search algorithm
| year = 1999
}}</ref> attempts to promote depth-first exploitation by preferring recently expanded nodes. AlphA* uses the cost function ''f<sub>α</sub>(n) = (1 + w<sub>α</sub>(n)) f(n)'', where <math>w_\alpha(n) = \begin{cases} \lambda & g(\pi(n)) \le g(\tilde{n}) \\ \Lambda & \text{otherwise} \end{cases}</math>, where ''λ'' and ''Λ'' are constants with <math>\lambda \le \Lambda</math>, ''π(n)'' is the parent of ''n'', and ''ñ'' is the most recently expanded node.
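
A hedged Python sketch of the weighted-A* idea mentioned first in the list (wrapping an existing admissible heuristic; the <code>a_star</code> function is the earlier hypothetical sketch):

<syntaxhighlight lang="python">
def weighted(h, epsilon):
    """Inflate an admissible heuristic h by a factor epsilon > 1.
    The resulting search is epsilon-admissible: the path it returns
    costs at most epsilon times the optimal path."""
    return lambda node, goal: epsilon * h(node, goal)

# e.g. a_star(graph, start, goal, weighted(straight_line, 1.5))
</syntaxhighlight>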

==Complexity==
The [[computational complexity theory|time complexity]] of A* depends on the heuristic. In the worst case, the number of nodes expanded is [[exponential time|exponential]] in the length of the solution (the shortest path), but it is [[polynomial time|polynomial]] when the search space is a tree, there is a single goal state, and the heuristic function ''h'' meets the following condition:

:<math>|h(x) - h^*(x)| = O(\log h^*(x))</math>

where ''h''<sup>*</sup> is the optimal heuristic, the exact cost to get from ''x'' to the goal. In other words, the error of ''h'' will not grow faster than the [[logarithm]] of the "perfect heuristic" ''h''<sup>*</sup> that returns the true distance from ''x'' to the goal.<ref name="pearl84">{{cite book
| first = Judea
| last = Pearl
| title = Heuristics: Intelligent Search Strategies for Computer Problem Solving
| publisher = Addison-Wesley
| year = 1984
| isbn = 0-201-05594-5
}}</ref><ref name="aima">{{cite book
| first = S. J.
| last = Russell
| coauthors = Norvig, P.
| title = [[Artificial Intelligence: A Modern Approach]]
| publisher = Prentice Hall
| location = Upper Saddle River, N.J.
| year = 2003
| pages = 97–104
| isbn = 0-13-790395-2
}}</ref>

==Applications==
A* is commonly used for pathfinding in applications such as games, but it was originally designed as a general graph traversal algorithm.<ref name="nilsson"/> It finds application in diverse problems, including the problem of [[parsing]] using [[Stochastic context-free grammar|stochastic grammars]] in [[Natural language processing|NLP]].<ref>{{cite conference
| last1 = Klein
| first1 = Dan
| last2 = Manning
| first2 = Christopher D.
| title = A* parsing: fast exact Viterbi parse selection
| conference = Proc. NAACL-HLT
| year = 2003
}}</ref>

==Variants of A*==
* [[D*]]
* [[Any-angle path planning|Field D*]]
* [[IDA*]]
* [[Fringe search|Fringe]]
* [[Incremental heuristic search|Fringe Saving A* (FSA*)]]
* [[Incremental heuristic search|Generalized Adaptive A* (GAA*)]]
* [[Incremental heuristic search|Lifelong Planning A* (LPA*)]]
* [[SMA*|Simplified Memory bounded A* (SMA*)]]
* [[Any-angle path planning|Theta*]]
* A* can be adapted to a [[bidirectional search]] algorithm. Special care needs to be taken for the stopping criterion.<ref>{{cite paper
| title = Efficient Point-to-Point Shortest Path Algorithms
| url = http://www.cs.princeton.edu/courses/archive/spr06/cos423/Handouts/EPP%20shortest%20path%20algorithms.pdf
}} from [[Princeton University]]</ref>

==References==
{{Reflist}}

==Further reading==
* {{cite journal
| first = P. E.
| last = Hart
| coauthors = Nilsson, N. J.; Raphael, B.
| title = Correction to "A Formal Basis for the Heuristic Determination of Minimum Cost Paths"
| journal = [[Association for Computing Machinery|SIGART]] Newsletter
| volume = 37
| pages = 28–29
| year = 1972
}}
* {{cite book
| first = N. J.
| last = Nilsson
| title = Principles of Artificial Intelligence
| publisher = Tioga Publishing Company
| location = Palo Alto, California
| year = 1980
| isbn = 0-935382-01-1
}}

==External links==
* [http://www.policyalmanac.org/games/aStarTutorial.htm A* Pathfinding for Beginners]
* A* with [http://harablog.wordpress.com/2011/09/07/jump-point-search/ Jump point search]
* [http://theory.stanford.edu/~amitp/GameProgramming/ Clear visual A* explanation, with advice and thoughts on path-finding]
* Variation on A* called [http://www.cs.ualberta.ca/~mmueller/ps/hpastar.pdf Hierarchical Path-Finding A* (HPA*)]
* [http://www.heyes-jones.com/astar.html A* Algorithm tutorial]
* [http://www.humblebeesoft.com/blog/?p=18 A* Pathfinding in Objective-C (Xcode)]
* [http://dx.doi.org/10.1016/j.knosys.2011.09.008 Dyna-h, an A*-similar heuristic approach in Reinforcement Learning]

{{DEFAULTSORT:A Search Algorithm}}
[[Category:Graph algorithms]]
[[Category:Routing algorithms]]
[[Category:Search algorithms]]
[[Category:Combinatorial optimization]]
[[Category:Game artificial intelligence]]
[[Category:Articles with example pseudocode]]

{{Link GA|de}}