Skiena's algorithms book contains the following explanation of the Floyd-Warshall algorithm:
floyd(adjacency_matrix *g)
{
    int i, j;      /* dimension counters */
    int k;         /* intermediate vertex counter */
    int through_k; /* distance through vertex k */

    for (k = 1; k <= g->nvertices; k++)
        for (i = 1; i <= g->nvertices; i++)
            for (j = 1; j <= g->nvertices; j++) {
                through_k = g->weight[i][k] + g->weight[k][j];
                if (through_k < g->weight[i][j])
                    g->weight[i][j] = through_k;
            }
}
The Floyd-Warshall all-pairs shortest path runs in O(n³) time, which is asymptotically no better than n calls to Dijkstra's algorithm. However, the loops are so tight and the program so short that it runs better in practice. It is notable as one of the rare graph algorithms that work better on adjacency matrices than adjacency lists.
Can someone please elaborate on why the bold part is true?
Dijkstra's algorithm with a standard binary heap runs in O(E log V); a Fibonacci heap improves this to O(E + V log V), but it is a more complicated data structure, and its constant factors are larger than Floyd-Warshall's. See the "Running time" section in this link. Note also that on a dense graph E ≈ V², so running binary-heap Dijkstra from every vertex costs about O(V³ log V), while Floyd-Warshall's tight triple loop is a plain O(V³) with very small constants.
The difference in this question is the same as the difference between quicksort and heapsort: the algorithm with equal (or even worse) asymptotic bounds can win in practice because its inner loop is simpler and its constant factors are smaller.