Architecture Idea: Passive Models


This is an idea for scaling out certain data when transitioning to a highly clustered architecture.

TL;DR: Don’t just read, mostly subscribe.

It is ideally suited for data that is read often but rarely written; the higher the read:write ratio, the more you gain from this technique.

This tends to happen to some types of data as they grow up into a cluster. Even if the data has only a 2:1 read:write ratio on a single server (a very small margin in this context, meaning it is read twice for every time it is written), when you scale out you often don’t get a 4:2 ratio; instead you get 4:1, because one of the two writes ends up being redundant (provided you can publish knowledge of the change fast enough that the other edges don’t make the same change).

With many workloads, such as configuration data, you are quickly scaling at cN:1 with a very large c (where c is the number of requests each node serves between configuration changes and N is the number of nodes), meaning that real-world e-commerce systems are doing billions of wasted reads of data that hasn’t changed.  Nearly all modern data stores can do reads like this incredibly fast, but they still cost something, produce no value, and compete for resources with requests that really do need to read information that has changed.  For configuration data on a large-scale site, c can easily be in the millions.

So, this is an attempt to rein in this cN:1 scaling and constrain it to N:1: one read per node per write, so a 32-server cluster would be 32:1 in the worst case, instead of millions to one.
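To make the arithmetic concrete, here is a back-of-the-envelope comparison; the figures are illustrative, not measurements.

```python
# Back-of-the-envelope comparison of the two scaling behaviours.
# Illustrative numbers only.
N = 32           # servers in the cluster
c = 1_000_000    # requests each server handles between configuration changes

reads_per_change_naive = c * N   # every request re-reads the configuration
reads_per_change_passive = N     # each node reads once, after its cache is invalidated

print(reads_per_change_naive)    # 32,000,000
print(reads_per_change_passive)  # 32
```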

Pairing a Store with a Hub

defn: Hub – any library or service that provides a publish/subscribe API.

defn: Store – any library or service that provides a CRUD API.

Clients use the Store’s CRUD as any ORM would, and aggressively cache the responses in memory. When a Client makes a change to data in the Store, it simultaneously publishes an alert through the Hub to all other Clients. Clients use these messages to invalidate their internal caches. The next time that resource is requested, its newly updated version is fetched from the Store.
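A minimal sketch of the Client side might look like the following. The Store and Hub objects, and their get/put/publish/subscribe methods, are stand-ins for whatever CRUD and pub/sub backends you actually pair, not a real API.

```python
class PassiveModelClient:
    def __init__(self, store, hub, node_id):
        self.store = store          # CRUD backend (the Store)
        self.hub = hub              # publish/subscribe backend (the Hub)
        self.node_id = node_id      # used to ignore our own invalidations
        self.cache = {}             # in-memory cache: key -> document
        # Listen for invalidation messages from other Clients.
        self.hub.subscribe("invalidate", self._on_invalidate)

    def read(self, key):
        # Serve from the cache; only touch the Store on a miss.
        if key not in self.cache:
            self.cache[key] = self.store.get(key)
        return self.cache[key]

    def write(self, key, document):
        # Write through to the Store, update our own cache,
        # and tell everyone else their copy is stale.
        self.store.put(key, document)
        self.cache[key] = document
        self.hub.publish("invalidate", {"key": key, "from": self.node_id})

    def _on_invalidate(self, message):
        # Drop the stale entry; the next read will fetch the new version.
        if message["from"] != self.node_id:
            self.cache.pop(message["key"], None)
```

Including the sender’s node_id in the message lets the writer ignore its own invalidation, so it doesn’t throw away the copy it just wrote.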

Since the messages broadcast through the Hub do not cause immediate reads, bursts of writes can coalesce without causing a corresponding spike in reads; the read load after a change is always the same, determined by the data’s usage pattern and how you spread traffic around your cluster.

To stick with the example of configuration data, let’s suppose the usage pattern is to read the configuration on every request, across a cluster of web servers load balanced by a round-robin rule. When an administrative application changes and commits the configuration, it also invalidates the cached configuration on each web server through the Hub. As the round-robin proceeds around the cluster, the first request to reach each web server fetches the updated configuration directly from the Store; after that, each server serves from its refreshed cache. Load balancing rules that reuse servers, such as lowest-load, can have even higher cache efficiency.
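As a usage walk-through of that scenario, here is a sketch that wires the PassiveModelClient above to tiny in-memory stand-ins for the Store and Hub; FakeStore and FakeHub are invented for illustration, where a real deployment would use a database and a message bus.

```python
class FakeStore:
    def __init__(self):
        self.docs = {}
        self.reads = 0
    def get(self, key):
        self.reads += 1
        return self.docs.get(key)
    def put(self, key, document):
        self.docs[key] = document

class FakeHub:
    def __init__(self):
        self.subscribers = {}
    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)
    def publish(self, topic, message):
        for callback in self.subscribers.get(topic, []):
            callback(message)

store, hub = FakeStore(), FakeHub()
admin = PassiveModelClient(store, hub, node_id="admin")
web = [PassiveModelClient(store, hub, node_id=f"web-{i}") for i in range(4)]

admin.write("config", {"feature_x": True})    # commit + broadcast invalidation

# As the round-robin proceeds, each web server misses its cache exactly once,
# fetches the fresh configuration, and then serves from memory.
for request in range(12):
    node = web[request % len(web)]             # round-robin load balancing
    node.read("config")

print(store.reads)    # 4 -- one read per web server, regardless of request count
```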

From the perspective of the code using the Client, the writes made by others just seem to take a little bit longer to fully commit, and in exchange we never ask the database for anything until we know it has new information.

Further Work

The Store layer depends on aggressive caching, which means constraining the CRUD to operations you can hash and cache effectively. Map/reduce is not allowed, and so on; it really is best for an ORM-like scenario, where you have discrete documents and use summary documents more than complicated queries.
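One way to picture that constraint, using purely illustrative helper functions rather than any particular Store’s API: every cacheable read must reduce to a stable key that an invalidation message can name.

```python
def document_key(collection, doc_id):
    # Discrete documents cache well: the key is obvious, and a write to
    # "config:site" can invalidate exactly "config:site" on every node.
    return f"{collection}:{doc_id}"

def summary_key(collection, name):
    # Summary documents (precomputed rollups) are just more documents,
    # so they cache and invalidate the same way.
    return f"{collection}:summary:{name}"

# An ad-hoc map/reduce or complicated query has no stable key like this,
# so there is nothing precise to publish when the underlying data changes;
# that is why it falls outside what the Store should expose.
```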

