There are now three least-cost path algorithms in the package org.matsim.router:
For every router, there is a class that computes preprocessing data and is passed to the router's constructor in order to accelerate the routing procedure. The preprocessing classes are located in org.matsim.demandmodeling.router.util:
Condition: The link costs must be non-negative; otherwise Dijkstra does not work.
The new version is much faster on short routes than the former version (about 50 times as fast if the start and end node of the route are ~1 km apart) and slightly faster on long routes (~1.5 times faster with preprocessing). Dijkstra can be used with or without preprocessing data (the preprocessing simply marks all nodes that lie in dead ends). Preprocessing is fast (2 seconds on a network with 400'000 nodes on our computer leibnitz) and requires little additional memory (~1 pointer per node), so it should be used where possible. Invoking Dijkstra may then look like this:
PreProcessDijkstra preProcessData = new PreProcessDijkstra();
preProcessData.run(network);
TravelCostI costFunction = ...
LeastCostPathCalculator routingAlgo = new Dijkstra(network, costFunction, preProcessData);
routingAlgo.calcLeastCostPath(fromNode, toNode, startTime);
If you don't want to preprocess the network, you can invoke Dijkstra as follows:
LeastCostPathCalculator routingAlgo = new Dijkstra(network, costFunction);
AStarEuclidean is about 3 times faster than the above Dijkstra.
Conditions:
- The same as for Dijkstra: the link costs must be non-negative; otherwise the algorithm does not work.
- The length stored in the links must be greater than or equal to the Euclidean distance between the link's start and end node; otherwise the algorithm is not guaranteed to deliver least-cost paths. In this case, PreProcessEuclidean prints a warning message.
- The cost calculator that computes the cost of each link must implement the TravelMinCostI interface, i.e. it must implement the function getLinkMinimumTravelCost(Link link). The TravelTimeCalculator class does implement it.
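A minimal, self-contained sketch of a cost function satisfying the TravelMinCostI requirement. The Link class and interface below are simplified stand-ins for the real MATSim types, defined here only so the example compiles on its own; the free-speed travel time as lower bound is one common choice of minimum cost, not prescribed by the API.

```java
public class MinCostExample {

    // Simplified stand-in for a MATSim link (real links carry more attributes).
    static class Link {
        final double length;     // in meters
        final double freespeed;  // in meters per second
        Link(double length, double freespeed) {
            this.length = length;
            this.freespeed = freespeed;
        }
    }

    // Simplified stand-in for the TravelMinCostI interface described above.
    interface TravelMinCostI {
        double getLinkMinimumTravelCost(Link link);
    }

    // The minimum cost must never exceed the actual cost at any time of day;
    // the free-speed travel time is a valid lower bound for time-based costs.
    static class FreespeedMinCost implements TravelMinCostI {
        public double getLinkMinimumTravelCost(Link link) {
            return link.length / link.freespeed;
        }
    }

    public static void main(String[] args) {
        TravelMinCostI costFunction = new FreespeedMinCost();
        Link link = new Link(1000.0, 10.0); // 1 km at 10 m/s
        System.out.println(costFunction.getLinkMinimumTravelCost(link)); // 100.0
    }
}
```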
PreProcessEuclidean.run() is very fast and needs (almost) no additional memory.
TravelMinCostI costFunction = ...
PreProcessEuclidean preProcessData = new PreProcessEuclidean(costFunction);
preProcessData.run(network);
LeastCostPathCalculator routingAlgo = new AStarEuclidean(network, preProcessData);
A note about the so-called overdo factor: you can drastically accelerate the routing of AStarEuclidean by providing an overdo factor > 1 (e.g. 1.5, 2, or 3). In this case, AStarEuclidean no longer calculates least-cost paths but tends to deliver paths of minimal distance instead. The greater the overdo factor, the faster the algorithm, but the more the calculated routes diverge from the least-cost ones.
A typical invocation then looks like this:
LeastCostPathCalculator routingAlgo = new AStarEuclidean(network, preProcessData, 2);
AStarLandmarks is about twice as fast as AStarEuclidean. PreProcessLandmarks.run() takes about 1 minute on 400'000 nodes on leibnitz and requires 2*X double values per node, where X is the number of landmarks. Currently X is set to 16 (but can be set to another value, such as 12 or 8), so with 400'000 nodes we need 2*16*8*400'000 bytes = about 100 MB of additional memory.
Conditions: The same as for AStarEuclidean.
TravelMinCostI costFunction = ...
PreProcessLandmarks preProcessData = new PreProcessLandmarks(costFunction);
preProcessData.run(network);
LeastCostPathCalculator routingAlgo = new AStarLandmarks(network, preProcessData);