GeeCON Prague 2022: Jakub Marchwicki - Caching beyond simple put and gets
First things first: whenever there is caching in the talk title, the famous quote from Phil Karlton must follow. So let's get this one out of the way: there are only two hard things in Computer Science - cache invalidation and naming things. Yes, caching is hard. It looks simple at first glance, which leads to a simplistic design and implementation. The application starts slowing down: let's introduce some caches. A simple library, a simple put, a few gets, simple invalidation logic (or maybe not). Fast forward, and we end up with even worse bottlenecks. Instead of fixing an issue, we've just covered it with yet another layer of mud: stale data (already invalid), services which cannot be scaled out (different data lands on different nodes), and latency and read-access time traded for the risk of actually losing the data (eventual writes). We can surely do better! In this talk, we will walk through different use-cases (as cache design depends on the use-case) which might require caching. We will look at how caching can be designed effectively. We will start from a business scenario (it's easier to identify with), through an abstraction, to implementation examples on top of Hazelcast. Everything from zero to a caching hero in a single talk.
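For context, the "simple put, a few gets" approach the abstract warns about is usually a cache-aside read path. Below is a minimal illustrative sketch (not from the talk itself); the class and method names are made up for the example, and a plain `ConcurrentHashMap` stands in for a distributed map such as Hazelcast's `IMap`, which exposes the same `get`/`put`/`remove` contract.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Naive cache-aside: check the cache, fall back to the backend on a miss,
// store the result, and hope the invalidation logic keeps up.
class UserCache {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();
    private final Function<Long, String> loader; // e.g. a database read

    UserCache(Function<Long, String> loader) {
        this.loader = loader;
    }

    String get(Long id) {
        String cached = cache.get(id);    // a few gets...
        if (cached != null) {
            return cached;                // may already be stale
        }
        String loaded = loader.apply(id); // cache miss: hit the backend
        cache.put(id, loaded);            // ...and a simple put
        return loaded;
    }

    void invalidate(Long id) {            // the "simple invalidation logic (or maybe not)"
        cache.remove(id);
    }
}
```

On a single node this works; the trouble the abstract describes starts once several service instances each hold their own copy of this map and invalidation has to reach all of them.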