madsohm 21 hours ago

Since using `def` to create a method returns a symbol with the method name, you can do something like this too:

  memoize def expensive_calculation(arg)
    @calculation_count += 1
    arg * 2
  end, ttl: 10, max_size: 2

  memoize def nil_returning_method
    @calculation_count += 1
    nil
  end

JamesSwift 21 hours ago

Looks good. I'd suggest making your `get` wait to acquire the lock until it's actually needed. E.g. instead of

  @lock.synchronize do
    entry = @store[key]
    return nil unless entry

    ...
you can do

  entry = @store[key]
  return nil unless entry

  @lock.synchronize do
    entry = @store[key]
And similarly for other code paths.
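
Put together, a double-checked version of `get` might look roughly like this (a sketch only; `entry_expired?` and the `entry[:value]` layout are placeholders, not the gem's actual internals):

  def get(key)
    # Fast path: don't take the lock just to learn that the key is absent.
    entry = @store[key]
    return nil unless entry

    @lock.synchronize do
      # Re-check under the lock in case another thread evicted or replaced it.
      entry = @store[key]
      return nil unless entry
      return nil if entry_expired?(entry) # placeholder for the real TTL check

      entry[:value]
    end
  end
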
  • chowells 21 hours ago

    Does the memory model guarantee that double-checked locking will be correct? I don't actually know for Ruby.

    • JamesSwift 20 hours ago

      I think it wouldn't even be a consideration here, since we aren't initializing the store, only accessing a key. And there's already a check-then-set race condition in that scenario, so I think it's doubly fine.

  • hp_hovercraft84 14 hours ago

    Good call, but I'd like to make sure it remains thread-safe, since @store is a plain hash. I'll consider something like this in a future update. Thanks!

deedubaya 17 hours ago

See https://github.com/huntresslabs/ttl_memoizeable for an alternative implementation.

For those who don’t understand why you might want something like this: if your throughput is high enough that eventual consistency is effectively the same as atomic consistency, and IO hurts (e.g. Redis calls), you may want to cache in memory with something like this.

My implementation above was born out of the need to adjust global state on-the-fly in a system processing hundreds of thousands of requests per second.

film42 a day ago

Nice! In Rails I end up using Rails.cache most of the time because it's always "right there", but I like how you break the cache out per method to minimize contention. Depending on your workload, it might make sense to use a read-write lock instead of a Monitor.

Only suggestion is to not wrap the underlying method's error in your memo wrapper.

> raise MemoTTL::Error, "Failed to execute memoized method '#{method_name}': #{e.message}"

It doesn't look like you need to catch this for any operational or state-tracking reason, so IMO you should not catch and wrap. When errors are wrapped with a string like this (and caught/re-raised) you lose the original stack trace, which makes debugging challenging. Especially when your error is something like "pg condition failed for select" and you can't see where it failed in the driver.

  • JamesSwift 20 hours ago

    I thought Ruby would auto-wrap the original exception as long as you are raising from a rescue block (i.e. as long as $! is non-nil). So in that case you can just

      raise "Failed to execute memoized method '#{method_name}'"
    
    And Ruby will set `cause` for you.

    https://pablofernandez.tech/2014/02/05/wrapped-exceptions-in...
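
    A quick demonstration of how `cause` gets attached (the method name in the message is just an example):

      begin
        begin
          raise ArgumentError, "pg condition failed for select"
        rescue
          # Re-raising inside a rescue block: Ruby attaches the original
          # exception as `cause` automatically.
          raise "Failed to execute memoized method 'expensive_calculation'"
        end
      rescue => e
        e.message       # => "Failed to execute memoized method 'expensive_calculation'"
        e.cause.class   # => ArgumentError
        e.cause.message # => "pg condition failed for select"
      end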

    • film42 an hour ago

      TIL! That's pretty cool. I still think if you have no reason to catch an error (i.e. state tracking, etc.) then you should not.

  • hp_hovercraft84 a day ago

    Thanks for the feedback! That's a very good point, I'll update the gem and let it bubble up.

locofocos a day ago

Can you pitch me on why I would want to use this, instead of Rails.cache.fetch (which supports TTL) powered by redis (with the "allkeys-lru" config option)?

  • thomascountz a day ago

    I'm not OP nor have I read through all the code, but this gem has no external dependencies and runs in a single process (as does ActiveSupport::Cache::MemoryStore). Could be a "why you should," or a "why you should not" use this gem, depending on your use case.

  • hp_hovercraft84 a day ago

    Good question. I built this gem because I needed a few things that Rails.cache (and Redis) didn’t quite fit:

    - Local and zero-dependency. It caches per object in memory, so no Redis setup, no serialization, no network latency.
    - Isolated and self-managed. Caches aren’t global. Each object/method manages its own LRU + TTL lifecycle and can be cleared with instance helpers.
    - Easy to use. You just declare the method, set the TTL and max size, and you're done. No key names, no block wrapping, no external config.
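
    Roughly what that declaration ends up looking like (a sketch; the class and the `include MemoTTL` line are illustrative, following the `memoize :name, ttl:, max_size:` examples elsewhere in this thread):

      class PriceLookup
        include MemoTTL  # assumption about how the gem is mixed in

        def price_for(sku)
          sleep 0.1             # stand-in for an expensive lookup
          sku.to_s.length * 100
        end

        memoize :price_for, ttl: 60, max_size: 100
      end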

    • JamesSwift 21 hours ago

      For what it's worth, ActiveSupport::Cache::Store is a really flexible API with minimal contractual obligations (read_entry, write_entry, and delete_entry are the entire set of required methods), but it still allows you to layer specific functionality (e.g. TTL) on top with an optional options param. You could get the best of both worlds by adhering to that contract; then people could swap in e.g. the Redis cache store if they wanted a network-shared store.

      EDIT: see https://github.com/rails/rails/blob/main/activesupport/lib/a...
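
      A bare-bones sketch of that contract (the keyword signatures here follow recent Rails and may differ slightly between versions):

        require "active_support/cache"

        class SimpleMemoryStore < ActiveSupport::Cache::Store
          def initialize(options = {})
            super
            @data = {}
          end

          private

          # The three methods a Store subclass is expected to implement.
          def read_entry(key, **options)
            @data[key]
          end

          def write_entry(key, entry, **options)
            @data[key] = entry
            true
          end

          def delete_entry(key, **options)
            !!@data.delete(key)
          end
        end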

      • hp_hovercraft84 14 hours ago

        That's actually a really good idea! I'll definitely consider this in a future update. Thanks!

  • film42 a day ago

    Redis is great for caching a customer config that's hit 2000 times/second by your services, but even then, an in-memory cache with a short TTL would make Redis more tolerant to failure. This would be great for the in-memory part.
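
    For example, something along these lines (illustrative only; the key format, the 5-second TTL, and the `include MemoTTL` line are all assumptions, not the gem's documented API):

      require "redis"

      class CustomerConfig
        include MemoTTL  # assumption about how the gem is mixed in

        def initialize
          @redis = Redis.new
        end

        # Only roughly one call per customer every 5 seconds actually hits Redis;
        # repeat reads inside that window are served from process memory.
        def config_for(customer_id)
          @redis.get("customer:#{customer_id}:config")
        end

        memoize :config_for, ttl: 5, max_size: 10_000
      end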

gurgeous a day ago

This is neat, thanks for posting. I am using memo_wise in my current project (TableTennis) in part because it allows memoization of module functions. This is a requirement for my library.

Anyway, I ended up with a hack like this, which works fine but didn't feel great.

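   # Clear this method's memo_wise results hash once it holds more than 100
   # entries (a crude size cap rather than a true LRU).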
   def some_method(arg)
     @_memo_wise[__method__].tap { _1.clear if _1.length > 100 }
     ...
   end
   memo_wise :some_method

qrush a day ago

Congrats on shipping your first gem!!

I found this pretty easy to read through. I'd suggest setting a description on the repo too so it's easy to find.

https://github.com/mishalzaman/memo_ttl/blob/main/lib/memo_t...

  • hp_hovercraft84 a day ago

    As in identify where the source code is in the README?

    • zerocrates 21 hours ago

      I think they mean just setting a description for the repo on GitHub (using the gear icon next to "About") that says what the project is. That description text can come up in GitHub and Google searches.

wood-porch 21 hours ago

Will this correctly retrieve 0 values? AFAIK 0 is falsey in Ruby

  return nil unless entry

  • chowells 21 hours ago

    No, Ruby is stricter than that. Only nil and false are falsy.

    • wood-porch 20 hours ago

      Doesn't that shift the problem to caching false then :D

      • RangerScience 19 hours ago

        you can probably always just do something like:

          def no_items?
            !items.present?
          end
          
          def items
            # something long
          end
        
          memoize :items, ttl: 60, max_size: 10
        
        Just make sure the expensive operation returns a truthy value, then add some sugar for the falsey case, and you're done.