Comment by deedubaya
See https://github.com/huntresslabs/ttl_memoizeable for an alternative implementation.
For those who don’t understand why you might want something like this: if you’re operating at high enough throughput that eventual consistency is effectively indistinguishable from atomic consistency, and IO (e.g. Redis calls) becomes a bottleneck, you may want to cache in memory with something like this.
My implementation above was born out of the need to adjust global state on the fly in a system processing hundreds of thousands of requests per second.
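To make the pattern concrete, here’s a minimal sketch of TTL memoization in plain Ruby. This is not the gem’s actual API; the class and method names (`TTLCache`, `fetch`, the `$redis` global, the `"global_config"` key) are illustrative assumptions. The idea is simply: serve a memoized value from process memory and only hit the slow backing store once per TTL window.

```ruby
# Illustrative sketch only, not the ttl_memoizeable API.
class TTLCache
  def initialize(ttl_seconds)
    @ttl = ttl_seconds
    @lock = Mutex.new
    @value = nil
    @expires_at = Time.at(0) # forces a refresh on the first read
  end

  # Returns the cached value, recomputing it via the block at most once
  # per TTL window. Reads between refreshes see the (possibly stale,
  # eventually consistent) in-memory value with no IO.
  def fetch
    @lock.synchronize do
      if Time.now >= @expires_at
        @value = yield
        @expires_at = Time.now + @ttl
      end
      @value
    end
  end
end

# Hypothetical usage: refresh a global config from Redis at most once
# every 5 seconds instead of on every request.
# CONFIG_CACHE = TTLCache.new(5)
# def current_config
#   CONFIG_CACHE.fetch { $redis.get("global_config") }
# end
```

The trade-off is exactly the one described above: readers may see a value that is up to one TTL stale, in exchange for dropping a network round trip from the hot path.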