Ask HN: How would you design for this scale today?

4 points by phs318u 2 days ago

4 comments

20 years ago I worked at a utility (electricity distribution). The government introduced a mandate to switch everyone from "basic" meters (4 data points per year, read quarterly by a human looking at a dial) to "remotely read interval meters" (48 data points per day, streamed periodically over the day). The company had around a million meters, so our data intake was going to jump by roughly 3.5 orders of magnitude. Key requirements: store 15 months of meter data hot/active, bill customers on time-of-use tariffs, estimate/substitute missing data (previous period lookup or guess), and, crucially, send the meter data to the regulator and market participants within 24 hours of receipt.

The solution I came up with back in 2005 was a sharded Oracle 10g database storing 1 read per row (controversial!) and - the biggie - Oracle Advanced Queuing running worker jobs that could scale independently (number of processes per queue). The design was basically a queue per business action (e.g. send data, aggregate consumption, etc.), with each worker process designed to do one thing: read a message off the queue (1 msg = 1 unit of work) and do its thing (if successful, write messages to other queues for downstream activity).
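
To make the pattern concrete, here is a minimal sketch in Python using in-process standard-library queues and threads; the queue names, handler, and message shape are illustrative stand-ins for the Oracle AQ queues and PL/SQL/C++ workers described above, not the original implementation.

    import queue
    import threading

    # One queue per business action, mirroring the queue-per-action design.
    # Queue names and the message format are assumptions for illustration.
    queues = {
        "validate": queue.Queue(),
        "aggregate": queue.Queue(),
        "send_to_market": queue.Queue(),
    }

    def worker(queue_name, handler):
        # Each worker does one thing: take 1 message (= 1 unit of work),
        # process it, and on success enqueue messages for downstream actions.
        q = queues[queue_name]
        while True:
            msg = q.get()
            for next_queue, next_msg in handler(msg):
                queues[next_queue].put(next_msg)
            q.task_done()

    def validate(msg):
        # Hypothetical handler: check the interval read, then fan it out.
        return [("aggregate", msg), ("send_to_market", msg)]

    # A queue scales independently by changing how many workers drain it.
    for _ in range(4):
        threading.Thread(target=worker, args=("validate", validate), daemon=True).start()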

This all ran on a single piece of Sun hardware running Solaris, with the worker processes being either C++ applets or PL/SQL stored procs. It processed 48 million rows per day and stored about 18 billion rows in the active table. It ran for about 10 years before being decommissioned in favour of a COTS solution.

My question is: how would such a solution be architected today? What design, what technologies, etc.? I'm asking for my own learning (I've been working away from the pointy end of an IDE for quite some time).

Addendum: probably the only thing I wished we had done was use Oracle RAC. Instead we opted for an active-passive topology (annoyingly wasteful of 50% of our compute).

Feel free to ask clarifying questions.

mtmail a day ago

The solution back then makes sense. It was a couple of years too early for CouchDB, Cassandra, or MySQL Cluster (https://en.wikipedia.org/wiki/MySQL_Cluster), which are better suited for write-heavy applications and clustering across servers.

Later, spinning hard drives were replaced by SSDs, then NVMe. More open-source NoSQL and columnar storage solutions appeared. Cloud services started offering hosted databases with effectively unlimited scale. 500GB would all be 'hot', ready to be queried in real time.

Today I'd see three options:

* multiple cloud servers that receive the data and put it into a managed cloud database like Google BigQuery. They'll handle all the scaling, including region replication and backups. You might overpay, but likely still less than an Oracle software licence (see the first sketch after this list).

* a specialist SaaS for IoT/time-series data, for example ClickHouse. They can handle 10,000 incoming rows per second. The data store later compacts the data, partitions it by date, and applies other optimizations that make recent data faster to query than older data.

* place it into JSON or CSV files, one per hour or one per day, and query them with DuckDB (see the second sketch after this list).
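
A minimal sketch of the first option using the google-cloud-bigquery Python client; the project.dataset.table ID and the row schema are assumptions:

    from google.cloud import bigquery

    # Table ID and row shape are assumptions for illustration.
    client = bigquery.Client()
    table_id = "my-project.metering.interval_reads"

    rows = [
        {"meter_id": 42, "read_at": "2024-01-01T00:30:00Z", "kwh": 0.73},
        {"meter_id": 42, "read_at": "2024-01-01T01:00:00Z", "kwh": 0.69},
    ]

    # Streaming insert; BigQuery handles storage, replication and scaling.
    errors = client.insert_rows_json(table_id, rows)
    if errors:
        print("insert failed:", errors)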
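
And a minimal sketch of the third option, assuming hourly CSV dumps under a reads/ directory with hypothetical columns meter_id, ts and kwh:

    import duckdb

    con = duckdb.connect()

    # Query the CSV files in place; DuckDB infers the schema from the headers.
    # The file layout and column names are assumptions for illustration.
    rel = con.sql("""
        SELECT meter_id,
               date_trunc('day', ts) AS day,
               sum(kwh)              AS total_kwh
        FROM read_csv_auto('reads/2024-*/*.csv')
        GROUP BY meter_id, day
        ORDER BY meter_id, day
    """)
    rel.show()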

theamk a day ago

Input data rate: 1e6 * 48 / (24*60*60) = ~560 TPS

Working data size (1 day): (8B meter id + 8B timestamp + 8B value) * 48 * 1e6 = 1.1 gigabyte

Archival: 1.1GB/day * 15 months = 482 GB ... but looks like mostly write-only data?

That is pretty small as far as modern servers go. My usual approach for IoT things is an event-sourcing-like architecture: you have logging servers (for disaster recovery) and processing server(s), which keep things in RAM. If the processing servers crash, they restart, and the logging servers re-send data from the last checkpoint. A sketch of that pattern follows.
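
A minimal single-machine sketch of that pattern; the log file, checkpoint file, and running-total state are hypothetical placeholders for the logging and processing servers.

    import json
    import os

    LOG_PATH = "meter_reads.log"        # durable append-only log ("logging server")
    CHECKPOINT_PATH = "checkpoint.txt"  # byte offset the processor has applied

    def append_read(read):
        # Logging side: durably append each incoming read before acknowledging it.
        with open(LOG_PATH, "a") as f:
            f.write(json.dumps(read) + "\n")
            f.flush()
            os.fsync(f.fileno())

    class Processor:
        # Processing side: keeps state in RAM and rebuilds it after a crash
        # by replaying the log from the last checkpoint.
        def __init__(self):
            self.totals = {}  # meter_id -> cumulative kWh, held in memory
            self.offset = 0
            if os.path.exists(CHECKPOINT_PATH):
                self.offset = int(open(CHECKPOINT_PATH).read() or 0)
            self.replay()

        def replay(self):
            with open(LOG_PATH) as f:
                f.seek(self.offset)
                for line in f:
                    self.apply(json.loads(line))
                self.offset = f.tell()
            # A real system would checkpoint periodically, not only after replay.
            with open(CHECKPOINT_PATH, "w") as f:
                f.write(str(self.offset))

        def apply(self, read):
            self.totals[read["meter_id"]] = self.totals.get(read["meter_id"], 0.0) + read["kwh"]

    append_read({"meter_id": 42, "ts": "2024-01-01T00:30:00Z", "kwh": 0.73})
    print(Processor().totals)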

But I am sure that Postgres could be used as well, or some sort of time-series database. Based on your description, you really don't need much as far as databases go - basically range queries and that's it (see the sketch below).
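
A minimal sketch of that in Postgres via psycopg2; the table name, columns, and connection string are assumptions:

    import psycopg2

    conn = psycopg2.connect("dbname=meters")  # connection details are an assumption
    cur = conn.cursor()

    # One interval read per row, keyed by meter and timestamp.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS interval_read (
            meter_id bigint        NOT NULL,
            read_at  timestamptz   NOT NULL,
            kwh      numeric(10,4) NOT NULL,
            PRIMARY KEY (meter_id, read_at)
        )
    """)

    # The workload is basically range queries like this one.
    cur.execute("""
        SELECT read_at, kwh
        FROM interval_read
        WHERE meter_id = %s
          AND read_at >= %s AND read_at < %s
        ORDER BY read_at
    """, (42, "2024-01-01", "2024-02-01"))
    rows = cur.fetchall()
    conn.commit()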

aristofun 9 hours ago

Those numbers and all the features are easy-peasy for a single, not very big Flink cluster.

fouc 10 hours ago

Elixir would be a good fit I think.