In a discussion on the Guava mailing list after my previous post, Charles Fry and Louis Wasserman had some great ideas that spurred me to rewrite the distributed CacheLoader functionality from scratch.
The first part is static factory methods to turn an AsyncFunction into a CacheLoader:
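The original code block didn't survive here. As a rough sketch of the idea (the class name, and the exact signatures, are my guesses, chosen to match the `fromAsyncFunction(asyncLookup, 30L, TimeUnit.SECONDS)` call in the usage snippet later in the post), such a factory might look like:

```java
import com.google.common.cache.CacheLoader;
import com.google.common.util.concurrent.AsyncFunction;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

public final class CacheLoaders {
  private CacheLoaders() {}

  // Hypothetical sketch: adapt an AsyncFunction into a CacheLoader,
  // waiting up to the given timeout for each asynchronous result.
  public static <K, V> CacheLoader<K, V> fromAsyncFunction(
      final AsyncFunction<K, V> function,
      final long timeout, final TimeUnit unit) {
    return new CacheLoader<K, V>() {
      @Override public V load(K key) throws Exception {
        return function.apply(key).get(timeout, unit);
      }

      // Kick off every lookup while the keys are still being iterated,
      // then wait for the results. A lazily generated Iterable of keys
      // therefore overlaps key generation with the asynchronous work.
      @Override public Map<K, V> loadAll(Iterable<? extends K> keys)
          throws Exception {
        Map<K, ListenableFuture<V>> futures =
            new LinkedHashMap<K, ListenableFuture<V>>();
        for (K key : keys) {
          futures.put(key, function.apply(key));
        }
        Map<K, V> result = new LinkedHashMap<K, V>();
        for (Map.Entry<K, ListenableFuture<V>> entry : futures.entrySet()) {
          result.put(entry.getKey(), entry.getValue().get(timeout, unit));
        }
        return result;
      }
    };
  }
}
```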
This is independent of any mention of Hazelcast; it works with any AsyncFunction. The Iterable-ness of the keys passed to loadAll is preserved all the way through, so asynchronous processing can start before the keys have been completely iterated. This could be handy if, for example, the keys themselves are being generated asynchronously.
The other piece is a class with methods to create an AsyncFunction out of a Hazelcast-based Executor:
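That code block is also missing here, and the real `HazelcastAsyncFunction` builder isn't shown, so the following is only a simplified sketch of the core trick (the class and method names are mine): wrap a synchronous Guava `Function` and a plain `ExecutorService` into an `AsyncFunction` by submitting each call through Guava's `MoreExecutors.listeningDecorator`. When the executor happens to be Hazelcast's distributed executor, the calls fan out across the cluster, yet no Hazelcast type appears in the signature. The key-routing piece (`withTaskKeyFunction` in the usage snippet) is omitted, and a real distributed task would also need to be `Serializable`.

```java
import com.google.common.base.Function;
import com.google.common.util.concurrent.AsyncFunction;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;

public final class ExecutorAsyncFunctions {
  private ExecutorAsyncFunctions() {}

  // Hypothetical sketch: run each application of `function` on the
  // given executor, exposing the pending result as a ListenableFuture.
  public static <K, V> AsyncFunction<K, V> onExecutor(
      final Function<K, V> function, ExecutorService executor) {
    final ListeningExecutorService service =
        MoreExecutors.listeningDecorator(executor);
    return new AsyncFunction<K, V>() {
      @Override public ListenableFuture<V> apply(final K input) {
        return service.submit(new Callable<V>() {
          @Override public V call() {
            return function.apply(input);
          }
        });
      }
    };
  }
}
```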
Note that the public API of this class does not use any Hazelcast types.
The whole implementation is much cleaner and takes full advantage of existing Guava machinery. Here's what it looks like in use in my code:
```java
asyncLookup = HazelcastAsyncFunction
    .from(syncLookup())
    .onExecutor(hazelcastExecutor())
    .withTaskKeyFunction(mapAccountToServer());

LoadingCache cache = CacheBuilder.newBuilder().build(
    fromAsyncFunction(asyncLookup, 30L, TimeUnit.SECONDS));

...

// Later on, this call causes the account lookups to be
// magically distributed across the cluster:
Map accountNameToInfo = cache.getAll(accountNames);
```
Update 2012-Mar-13: Louis Wasserman pointed out (in comments to this post) a race in the implementation of CacheLoaders. I've updated the code to remove the race. It doesn't use as many cool Guava tricks, but it's quite a bit simpler now.
Update 2012-Mar-20: Unpacked the example to make it less frightening.