Does synchronisation with ConcurrentHashMap's .compute() guarantee visibility?


Within ConcurrentHashMap.compute() I increment and decrement a long value located in shared memory. Reads, increments and decrements are performed only within the compute method, always on the same key. So access to the long value is synchronised by locking on a ConcurrentHashMap segment, and thus each increment/decrement is atomic. My question is: does this synchronisation on the map also guarantee visibility of the long value? Can I rely on the map's internal synchronisation, or should I make my long value volatile?

I know that when you explicitly synchronise on a lock, visibility is guaranteed. But I do not perfectly understand ConcurrentHashMap's internals. Or maybe I can trust it today, but tomorrow ConcurrentHashMap's internals may somehow change: exclusive access would be preserved, but visibility would disappear... which is an argument for making my long value volatile.

Below is a simplified example. According to this test there is no race condition today. But can I trust this code long-term without making the long value volatile?

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class LongHolder {

    private final ConcurrentMap<Object, Object> syncMap = new ConcurrentHashMap<>();
    private long value = 0;

    public void increment() {
        syncMap.compute("1", (k, v) -> {
            if (++value == 2000000) {
                System.out.println("Expected final state. If this gets printed, this simple test did not detect visibility problem");
            }
            return null;
        });
    }
}

class IncrementRunnable implements Runnable {

    private final LongHolder longHolder;

    IncrementRunnable(LongHolder longHolder) {
        this.longHolder = longHolder;
    }

    @Override
    public void run() {
        for (int i = 0; i < 1000000; i++) {
            longHolder.increment();
        }
    }
}


public class ConcurrentMapExample {
    public static void main(String[] args) throws InterruptedException {
        LongHolder longHolder = new LongHolder();
        Thread t1 = new Thread(new IncrementRunnable(longHolder));
        Thread t2 = new Thread(new IncrementRunnable(longHolder));
        t1.start();
        t2.start();
    }
}

UPD: adding another example which is closer to the code I am working on. I would like to remove map entries when no one is using the object any more. Please note that reading and writing of the counter happens only inside the remapping function of ConcurrentHashMap.compute:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ObjectProvider {

    private final ConcurrentMap<Long, CountingObject> map = new ConcurrentHashMap<>();

    public CountingObject takeObjectForId(Long id) {
        return map.compute(id, (k, v) -> {
            CountingObject returnLock = v == null ? new CountingObject() : v;

            returnLock.incrementUsages();
            return returnLock;
        });
    }

    public void releaseObjectForId(Long id, CountingObject o) {
        map.compute(id, (k, v) -> o.decrementUsages() == 0 ? null : o);
    }
}

class CountingObject {
    private int usages;

    public void incrementUsages() {
        ++usages;
    }

    public int decrementUsages() {
        return --usages;
    }
}

UPD2: I admit that I failed to provide the simplest code examples previously, so I am posting the real code now:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

public class LockerUtility<T> {

    private final ConcurrentMap<T, CountingLock> locks = new ConcurrentHashMap<>();

    public void executeLocked(T entityId, Runnable synchronizedCode) {
        CountingLock lock = synchronizedTakeEntityLock(entityId);
        try {
            lock.lock();
            try {
                synchronizedCode.run();
            } finally {
                lock.unlock();
            }
        } finally {
            synchronizedReturnEntityLock(entityId, lock);
        }

    }

    private CountingLock synchronizedTakeEntityLock(T id) {
        return locks.compute(id, (k, l) -> {
            CountingLock returnLock = l == null ? new CountingLock() : l;

            returnLock.takeForUsage();
            return returnLock;
        });
    }

    private void synchronizedReturnEntityLock(T lockId, CountingLock lock) {
        locks.compute(lockId, (i, v) -> lock.returnBack() == 0 ? null : lock);
    }

    private static class CountingLock extends ReentrantLock {
        private volatile long usages = 0;

        public void takeForUsage() {
            usages++;
        }

        public long returnBack() {
            return --usages;
        }
    }
}

There are 2 answers below.

Answer 1 (score 2):

No, this approach will not work, not even with volatile. You would have to use AtomicLong, LongAdder, or the like to make this properly thread-safe. ConcurrentHashMap no longer even uses segmented locking these days.

Also, your test does not prove anything. Concurrency issues by definition do not happen every time, and sometimes not even once in a million runs.

You must use a proper concurrent Long accumulator like AtomicLong or LongAdder.
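A minimal sketch of that suggestion (the class name AdderHolder and its methods are mine, standing in for the question's LongHolder):

```java
import java.util.concurrent.atomic.LongAdder;

// Sketch: the counter itself carries the thread-safety, so no map-based
// locking is needed for a plain counter. LongAdder is usually preferable
// to AtomicLong under high contention, because it spreads updates across
// internal cells and only sums them on read.
class AdderHolder {
    private final LongAdder value = new LongAdder();

    public void increment() {
        value.increment(); // atomic update, visible to all threads
    }

    public long current() {
        return value.sum(); // snapshot of the accumulated total
    }
}
```

With a counter like this, the compute-based locking in the question's LongHolder becomes unnecessary.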

Answer 2 (score 8):

Do not get fooled by the line in the documentation of compute:

The entire method invocation is performed atomically

This does not hold for side effects like your value++; the atomicity only covers the internal data of ConcurrentHashMap.

The first thing you are missing is that the locking in the CHM implementation has changed a lot (as the other answer has noted). But even if it had not, your understanding of:

I know that when you explicitly synchronize on a lock, visibility is guaranteed

is flawed. The JLS guarantees this only when both the reader and the writer use the same lock, which in your case obviously does not happen; as such, no guarantees are in place. In general, the happens-before guarantees that you would require here only work in pairs: they must cover both the reader and the writer.
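To make the "pairs" point concrete, here is a minimal sketch of my own (not code from the question) where visibility is guaranteed precisely because reader and writer synchronize on the same monitor:

```java
// Both threads synchronize on the same object, so the JLS happens-before
// rule for monitor unlock/lock applies: a reader that acquires the lock
// after a writer has released it is guaranteed to see the written value.
class GuardedCounter {
    private final Object lock = new Object();
    private long value = 0; // no volatile needed: always accessed under 'lock'

    public void increment() {
        synchronized (lock) { value++; }
    }

    public long get() {
        synchronized (lock) { return value; }
    }
}
```

If the reader skipped the synchronized block, the unlock/lock happens-before edge would be gone and the reader could observe a stale value.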