Using standard Java HashMap (compared to Trove THashMap) causes non-HashMap code to run slower

I use a HashMap to cache about 2 million values calculated through a recursive algorithm. I use either HashMap<Integer, Double> from the Collections Framework, or the TIntDoubleHashMap from the Trove library, controlled by the boolean useTrove variable, as in the code below.

I do expect the Trove library to be faster, as it avoids auto-boxing etc. And indeed, the put() and get() calls take about 300ms to run (in total) for the THashMap compared to about 500ms for the HashMap<>.

Now, my overall program runtime is about 2.8s when using THashMap, and 6.7s when using HashMap<>. This difference cannot be explained by the increased runtime for the put() and get() calls alone.

  1. I suspect this vastly increased runtime with HashMap<> is driven by the fact that this implementation is quite memory-inefficient, as each int/double key/value must be boxed into an Integer/Double object, and that the increased memory usage causes cache misses in other parts of the program. Does this explanation make sense, and how could I confirm or reject it? (A minimal way to start checking this is sketched after the code listing below.)

  2. In general, how do I explore algorithmic optimisations for such scenarios? Profiling the algorithm doesn't readily point to the HashMap<> being the culprit, at least if CPU time alone is considered. Is it simply a question of knowing in advance that memory usage needs to be prioritised for memory-hungry programs?

Code in full below.

import java.util.HashMap;
import gnu.trove.map.hash.TIntDoubleHashMap;

class RuntimeStopWatch {
    long elapsedTime;
    long startTime;
    RuntimeStopWatch() { reset(); }
    void reset() { elapsedTime = 0; }
    void start() { startTime = System.nanoTime(); }
    void stop() {
        long endTime = System.nanoTime();
        elapsedTime += (endTime - startTime);
        startTime = endTime;
    }
    void printElapsedTime(String prefix) {
        System.out.format(prefix + "%dms\n", elapsedTime / 1000000);
    }
}

public class HashMapBehaviour {

    static RuntimeStopWatch programTime = new RuntimeStopWatch();
    static RuntimeStopWatch hashMapTime = new RuntimeStopWatch();
    static HashMap<Integer, Double> javaHashMapCache;
    static TIntDoubleHashMap troveHashMapCache;
    static boolean useTrove;

    public static void main(String[] args) {
//        useTrove = true;
        useTrove = false;

        javaHashMapCache = new HashMap<>();
        troveHashMapCache = new TIntDoubleHashMap();

        programTime.start();
        recursiveFunction(29, 29, 178956970);
        programTime.stop();

        programTime.printElapsedTime("Program: ");
        hashMapTime.printElapsedTime("Hashmap: ");
    }


    // Memoised recursion: results are cached under the key bitString | (1 << n).
    static double recursiveFunction(int n, int k, int bitString) {
        if (k == 0) return 0.0;
        if (useTrove) {
            hashMapTime.start();
            if (troveHashMapCache.containsKey(bitString | (1 << n))) return troveHashMapCache.get(bitString | (1 << n));
            hashMapTime.stop();
        } else {
            hashMapTime.start();
            if (javaHashMapCache.containsKey(bitString | (1 << n))) return javaHashMapCache.get(bitString | (1 << n));
            hashMapTime.stop();
        }
        double result = 0.0;
        for (int i = 0; i < (n >> 1); i++) {
            double play1 = recursiveFunction(n - 1, k - 1, stripSingleBit(bitString, i));
            double play2 = recursiveFunction(n - 1, k - 1, stripSingleBit(bitString, n - i - 1));
            result += Math.max(play1, play2);
        }
        if (useTrove) {
            hashMapTime.start();
            troveHashMapCache.put(bitString | (1 << n), result);
            hashMapTime.stop();
        } else {
            hashMapTime.start();
            javaHashMapCache.put(bitString | (1 << n), result);
            hashMapTime.stop();
        }
        return result;
    }

    // Removes the bit at bitIndex and shifts the higher bits down by one.
    static int stripSingleBit(int bitString, int bitIndex) {
        return ((bitString >> (bitIndex + 1)) << bitIndex) | (bitString & ((1 << bitIndex) - 1));
    }
}
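
For reference, one minimal way to start checking the memory-pressure hypothesis from point 1 is to compare GC activity and heap use between the two runs. The sketch below uses only the standard java.lang.management beans and simply wraps the main() above; it is illustrative rather than a rigorous measurement.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Prints GC counts, GC time and heap use for one run of the program above.
// Run it once with useTrove = true and once with useTrove = false and compare.
public class GcStats {

    static long totalGcCount() {
        long count = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            count += gc.getCollectionCount();
        }
        return count;
    }

    static long totalGcTimeMs() {
        long time = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            time += gc.getCollectionTime();
        }
        return time;
    }

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        long gcCountBefore = totalGcCount();
        long gcTimeBefore = totalGcTimeMs();

        HashMapBehaviour.main(args);

        System.out.format("GC collections: %d, GC time: %dms, heap used: %dMB%n",
                totalGcCount() - gcCountBefore,
                totalGcTimeMs() - gcTimeBefore,
                memory.getHeapMemoryUsage().getUsed() / (1024 * 1024));
    }
}

Running the JVM with -verbose:gc (or -Xlog:gc on newer JDKs) gives a similar picture without any code changes.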
1 Answer

One big thing with Trove is that you'll want to pre-size the collection. Because storage in the T*Maps is based on a single array, failing to pre-size the map results in a lot of array allocation and copying as it grows. HashMap is much less affected because its entries are separate linked objects; a resize only redistributes references rather than copying the stored data.

So, in summary: try sizing your collection up front with new TIntDoubleHashMap(<expected_size>).
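
For the program in the question that would look something like the lines below; the capacity of roughly two million comes from the question's own figure, and pre-sizing the java.util.HashMap as well avoids rehashing on that side (the exact numbers are illustrative, not tuned):

// In main(), pre-size both caches for the ~2 million cached values:
javaHashMapCache = new HashMap<>(2_800_000, 0.75f);   // HashMap grows once size > capacity * loadFactor
troveHashMapCache = new TIntDoubleHashMap(2_000_000); // Trove map sized to the expected entry count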

At a grander scope, think about what you're optimizing for. Trove can be the most efficient option for overall memory usage, and sometimes for performance as well. The big performance gains don't come from super-snazzy hashing algorithms, though; they come from reduced GC pressure, because fewer temporary (boxing) objects are created. Whether or not this matters to you depends entirely on your application.

The load factor also lets you trade the "density" of data in the backing array against lookup speed, so tuning it can be useful: if you see a lot of collisions during lookups and want better performance, lower the factor; if you want to minimize memory at the cost of some performance, raise it.
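
As a rough sketch of that trade-off (the capacities and factors here are illustrative only):

// Sparser backing array: fewer collisions, faster lookups, more memory.
TIntDoubleHashMap fastLookups = new TIntDoubleHashMap(2_000_000, 0.5f);

// Denser backing array: less memory, but more probing on collisions.
TIntDoubleHashMap compact = new TIntDoubleHashMap(2_000_000, 0.9f);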

If you have memory to burn and just want lookup performance, HashMap is pretty tough to beat... especially if the contents of the map are static. The JVM is very good at optimizing away temporary objects, so don't discount this too quickly. (Premature optimization, etc...)

Keep in mind that this kind of hand-rolled micro-benchmark also isn't necessarily a great indicator of real-world performance: it doesn't account for things like GC pressure or JIT warm-up. Tools like JMH can help with writing more representative tests.
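
A rough sketch of what such a JMH benchmark could look like; the class name, sizes and key stride are made up for illustration, and it assumes the JMH annotations are on the classpath:

import java.util.HashMap;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

import gnu.trove.map.hash.TIntDoubleHashMap;

// Compares get() on a boxed HashMap vs. a Trove primitive map.
// JMH handles JIT warm-up, forking and statistics for us.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MapLookupBenchmark {

    static final int SIZE = 2_000_000;   // roughly the cache size from the question

    HashMap<Integer, Double> boxedMap = new HashMap<>(SIZE * 2);
    TIntDoubleHashMap primitiveMap = new TIntDoubleHashMap(SIZE);

    @Setup
    public void fill() {
        for (int i = 0; i < SIZE; i++) {
            boxedMap.put(i, (double) i);
            primitiveMap.put(i, i);
        }
    }

    @Benchmark
    public double boxedLookup() {
        double sum = 0;
        for (int i = 0; i < SIZE; i += 1024) sum += boxedMap.get(i);
        return sum;   // returning the result keeps the JIT from eliminating the loop
    }

    @Benchmark
    public double primitiveLookup() {
        double sum = 0;
        for (int i = 0; i < SIZE; i += 1024) sum += primitiveMap.get(i);
        return sum;
    }
}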