I posted a similar question last time, when I compared the running times of two different implementations of insertion sort, and I have a similar problem now. I know that heap sort's complexity is O(n log n), the same as quick sort's in the average case. But here are my results when I sort an array of 10,000 randomly generated integers:
Quick sort: Time taken for execution: 0.005288
Heap sort: Time taken for execution: 0.234245
As you can see, my heap sort is taking much more time than it should.
Here is my code for the max_heapify, build_max_heap and heapsort functions:
void max_heapify(int *a, int n, int i)
{
    // 1-based heap: the children of node i are at 2*i and 2*i + 1
    int largest = 0;
    if ((2 * i) <= n && a[2 * i] > a[i]) largest = 2 * i;
    else largest = i;
    if ((2 * i + 1) <= n && a[2 * i + 1] > a[largest]) largest = 2 * i + 1;
    if (largest != i)
    {
        swap(a[i], a[largest]);
        max_heapify(a, n, largest);
    }
}

void build_max_heap(int *a, int n)
{
    for (int i = 1; i <= n / 2; i++)
    {
        max_heapify(a, n, i);
    }
}

void heapsort(int *a, int n)
{
    while (n > 0)
    {
        build_max_heap(a, n);
        swap(a[1], a[n--]);   // move the current max to the end
    }
}
Can anybody tell me the flaw in the implementation? The code works; I tested it myself on smaller inputs, but it's not efficient.
Any help is appreciated. Thanks!
You are trying to draw a conclusion from a single random sample (regardless of how many elements are in the sample), which is extremely unreliable.
The difference may also be due to code outside of the algorithms themselves; for example, the first algorithm to run may have paid one-time initialization costs that the second did not. Both issues can be addressed by running multiple samples, say, by alternating between the two algorithms.