Skiena's book The Algorithm Design Manual contains the following explanation of the Floyd-Warshall algorithm:
floyd(adjacency_matrix *g)
{
    int i, j;          /* dimension counters */
    int k;             /* intermediate vertex counter */
    int through_k;     /* distance through vertex k */

    for (k = 1; k <= g->nvertices; k++)
        for (i = 1; i <= g->nvertices; i++)
            for (j = 1; j <= g->nvertices; j++) {
                through_k = g->weight[i][k] + g->weight[k][j];
                if (through_k < g->weight[i][j])
                    g->weight[i][j] = through_k;
            }
}
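(The excerpt doesn't include the adjacency_matrix type. A sketch of what the routine assumes, with the field names inferred from the code above and MAXV as an assumed bound, would be:)

    #define MAXV 1000                   /* assumed maximum number of vertices */

    typedef struct {
        int weight[MAXV+1][MAXV+1];     /* weight[i][j] = edge cost, or a large
                                           sentinel value when there is no edge */
        int nvertices;                  /* number of vertices, 1-indexed */
    } adjacency_matrix;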
The Floyd-Warshall all-pairs shortest path algorithm runs in O(n³) time, which is asymptotically no better than n calls to Dijkstra’s algorithm. However, the loops are so tight and the program so short that it runs better in practice. It is notable as one of the rare graph algorithms that work better on adjacency matrices than adjacency lists.
Can someone please elaborate on why the claim in the second sentence (that the loops are so tight and the program so short that it runs better in practice) is true?
If you look at that algorithm, there are three nested loops with only two statements inside them, and all of the real work happens in the innermost loop. If you instead ran n runs of Dijkstra's algorithm, there would be logic both in the outer loop (one run per source vertex) and in the nested loops inside each run, which is arguably less clean. With three tight loops over a single matrix, the computer also has an easier time with memory: the accesses are predictable and sequential, so they are cache-friendly, whereas Dijkstra's over adjacency lists (plus a priority queue) jumps around memory.
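To make the memory point concrete, here is a minimal self-contained sketch (the names floyd_warshall, dist, NV, and INF are mine, not Skiena's) of the same triple loop over a plain 2D array. The inner loop walks row i and row k sequentially, which is exactly the streaming access pattern caches and prefetchers handle well; n runs of Dijkstra's on adjacency lists instead chase pointers through the list nodes and a priority queue.

    #include <limits.h>

    #define NV  4                   /* example size; any n works */
    #define INF (INT_MAX / 2)       /* "no edge": large, but safe to add twice */

    /* Same relaxation as Skiena's floyd(), written over a plain static matrix.
       dist[i][j] starts as the edge weight (or INF) and ends as the shortest
       i -> j distance.  Every inner-loop access walks along row i or row k,
       so the working set streams through the cache. */
    void floyd_warshall(int dist[NV][NV])
    {
        for (int k = 0; k < NV; k++)
            for (int i = 0; i < NV; i++)
                for (int j = 0; j < NV; j++)
                    if (dist[i][k] + dist[k][j] < dist[i][j])
                        dist[i][j] = dist[i][k] + dist[k][j];
    }

    int main(void)
    {
        int d[NV][NV] = {
            { 0,   3,   INF, 7   },
            { 8,   0,   2,   INF },
            { 5,   INF, 0,   1   },
            { 2,   INF, INF, 0   }
        };
        floyd_warshall(d);
        /* d[0][2] is now 5 (0 -> 1 -> 2), d[1][0] is now 5 (1 -> 2 -> 3 -> 0). */
        return 0;
    }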