By Prashant M, 10/25/25
We have in-memory DBs at home
Yes, yes we do. However, to combine the control that memcached gives you with the scalability of Kubernetes, we must design an architecture that supports an M:M (many-to-many) relationship between clients and servers. This allows for as many requests and as much storage as we could possibly expect. If the active user pool has a lot of overlap and frequently picks similar items, then great! We just need more clients to handle that traffic against a single memcached server. In general, statelessness and non-persistent databases are rarely mentioned in the same sentence, yet it seems logical that if your cache is not mission critical, it should be incredibly elastic. Hence, we designed our caches to be exactly that.
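To make the M:M idea concrete, here's a minimal sketch of how a client pool might route keys across a fleet of memcached servers with a consistent-hash ring, so clients and servers can scale independently. The class, server addresses, and vnode count are all illustrative assumptions, not pacific-coast's actual implementation.

```python
import hashlib
from bisect import bisect

class HashRing:
    """Minimal consistent-hash ring mapping cache keys to memcached
    server addresses (illustrative sketch, not production routing)."""

    def __init__(self, servers, vnodes=100):
        # Place each server at several virtual points on the ring so
        # keys spread evenly and only a small slice of keys moves
        # when a server is added or removed.
        self.ring = sorted(
            (self._hash(f"{server}#{i}"), server)
            for server in servers
            for i in range(vnodes)
        )

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(self, key):
        # Walk clockwise to the first ring point at or past the key's
        # hash; wrap around to the start of the ring if we fall off.
        points = [point for point, _ in self.ring]
        idx = bisect(points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

# Hypothetical server addresses; every client computes the same mapping,
# so no coordination is needed between clients.
ring = HashRing(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
ring.server_for("item:42:name")  # deterministic: same key, same server
```

Because routing is pure client-side math, adding a new memcached pod just means handing clients an updated server list, which fits the "incredibly elastic" goal above.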
Building Memcached into Pacific Coast
If you read Intro to Storage then you know where this is going, and you can feel free to skip to the next section. If you haven’t, I highly recommend you do at the end of this article, but for now I’ll give the gist. Pacific-coast controls all of our data at Runway Avenue: we run our Postgres, FAISS, block storage, and memcached databases together. This lets our developers make a simple request to store or get something and have all of the business logic evaluated by pacific-coast, resulting in well-optimized storage handling and really simple APIs. All of the aforementioned databases hold a lot of data, with the exception of memcached and FAISS, since both are caches in a sense: FAISS in the sense that models naturally look things up by vector search, and memcached in the sense that if I’m looking to quickly load an object, why should every request hit Postgres? It makes zero sense to design a system where you’re not hot-setting things into RAM at every level, especially when you’re on-prem. memcached solves a lot of that by giving every lookup we do O(1) complexity and by letting us pre-load the small pieces of data a user actually needs, squashing load times from seconds down to milliseconds rather than fetching the whole thing at once.
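The "why should every request hit Postgres?" logic above is the classic read-through cache pattern. Here's a sketch of it using plain dicts as stand-ins for the real memcached and Postgres connections; the key format and item fields are made up for illustration.

```python
import json

# Stand-ins for real connections: in pacific-coast these would be an
# actual memcached client and a Postgres cursor (names are illustrative).
cache = {}  # pretend memcached: key -> bytes
postgres = {"item:42": {"name": "Desk Lamp", "price": 1999}}

def get_item(key):
    """Read-through lookup: serve from RAM when the key is hot, fall
    back to Postgres on a miss, then hot-set the result for next time."""
    raw = cache.get(key)
    if raw is not None:
        return json.loads(raw)  # cache hit: O(1), no DB round trip
    row = postgres[key]  # cache miss: one Postgres query
    cache[key] = json.dumps(row).encode()  # hot-set into RAM
    return row

get_item("item:42")  # first call hits Postgres; repeat calls are pure RAM
```

The first request for a key pays the Postgres cost once; every request after that is a RAM lookup, which is where the seconds-to-milliseconds squash comes from.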
What can we use Memcached for?
Anything and everything. The end goal is to load all of the following into memcached clusters:
- The entire Postgres production catalog as a hot-set, i.e. the bare minimum configuration for users to see an item on their base page before clicking on anything:
  - Item name
  - Price
  - Cover Image URL
- Image caching
  - Could be low resolution or high resolution, depending on where application development stands at this point in time; this will be revisited later
- A full metadata set for items that are frequently accessed
Point is, memcached doesn’t speak “.img” or “.pg”; memcached speaks bytes. That means our memcached clients do a lot of preprocessing to ensure data is clean and routed to the right servers: data must be marshaled by the client on the way in, then de-marshaled and verified on the way out.
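A minimal sketch of that marshal/de-marshal/verify round trip, assuming a JSON payload with a CRC32 prefix for integrity checking. The framing scheme and field names are assumptions for illustration, not pacific-coast's actual wire format.

```python
import json
import zlib

def marshal(obj):
    """Serialize a value to bytes and prepend a 4-byte CRC32 so the
    client can verify integrity when reading the bytes back out."""
    body = json.dumps(obj).encode("utf-8")
    crc = zlib.crc32(body).to_bytes(4, "big")
    return crc + body

def demarshal(blob):
    """Split off the CRC32 prefix, verify the body, and decode it."""
    crc, body = blob[:4], blob[4:]
    if zlib.crc32(body).to_bytes(4, "big") != crc:
        raise ValueError("corrupt cache entry")
    return json.loads(body)

# A hot-set catalog record like the one described above (made-up values).
entry = marshal({"name": "Desk Lamp", "price": 1999,
                 "cover": "https://cdn.example.com/42.jpg"})
demarshal(entry)  # round-trips back to the original dict
```

memcached itself only ever sees the opaque `entry` bytes; all the format awareness lives in the client, which is exactly the division of labor described above.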
Conclusion
Memcached is an incredibly unique tool that allows us to control data at the RAM and byte level, giving us insane user responsiveness and letting us shape our RAM like water flowing between containers. Within pacific-coast it’s positioned to be one of our most versatile tools, giving us an edge that most startups overlook. It contributes to our on-prem focus and positions our system to be incredibly fast and very scalable.