Comment by keepamovin 9 hours ago

You can see it now; I forgot to make it public.

1. download_hn.sh - a bash script that queries BigQuery and saves the data to *.json.gz (sketched after the list).

2. etl-hn.js - does the sharding and builds the ID -> shard map, plus the user stats shards (also sketched below).

3. Then either npx serve docs locally or upload docs to Cloudflare Pages.
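
For step 1, it's roughly this shape. This is a sketch in Node rather than the actual bash (which presumably wraps the bq CLI); bigquery-public-data.hacker_news.full is the public HN dataset, but the column list and output filename here are just assumptions:

```js
// Sketch only: the real download_hn.sh is bash. Same idea with the
// official @google-cloud/bigquery client (needs GCP credentials).
const { BigQuery } = require('@google-cloud/bigquery');
const fs = require('fs');
const zlib = require('zlib');

const sql = `
  SELECT id, \`by\`, type, time, title, url, text, score, parent
  FROM \`bigquery-public-data.hacker_news.full\`
  WHERE id IS NOT NULL
  LIMIT 1000  -- small limit for the sketch; drop it for a full pull`;

// Gzip-compress newline-delimited JSON, mirroring the *.json.gz output.
const gzip = zlib.createGzip();
gzip.pipe(fs.createWriteStream('hn.json.gz'));

new BigQuery()
  .createQueryStream({ query: sql }) // stream rows instead of buffering
  .on('data', (row) => gzip.write(JSON.stringify(row) + '\n'))
  .on('error', (err) => { console.error(err); process.exit(1); })
  .on('end', () => gzip.end());
```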
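
And for step 2, a minimal sketch of the sharding idea, assuming fixed-size ID-range shards. SHARD_SIZE, the docs/shards layout, and shard-map.json are all made-up names, and the user stats shards are omitted:

```js
// Sketch of etl-hn.js's sharding step: bucket items by ID range and
// emit an ID -> shard lookup map. Everything is held in memory here
// for brevity; the real ETL would stream.
const fs = require('fs');
const path = require('path');
const zlib = require('zlib');
const readline = require('readline');

const SHARD_SIZE = 10000;        // items per shard -- assumed
const OUT_DIR = 'docs/shards';   // assumed output layout

async function main() {
  fs.mkdirSync(OUT_DIR, { recursive: true });
  const shards = new Map(); // shard index -> items in that ID range

  const rl = readline.createInterface({
    input: fs.createReadStream('hn.json.gz').pipe(zlib.createGunzip()),
    crlfDelay: Infinity,
  });
  for await (const line of rl) {
    if (!line.trim()) continue;
    const item = JSON.parse(line);
    const idx = Math.floor(item.id / SHARD_SIZE); // ID range -> shard
    if (!shards.has(idx)) shards.set(idx, []);
    shards.get(idx).push(item);
  }

  // One JSON file per shard, plus a map the static front end can use
  // to resolve an item ID to the file that contains it.
  const shardMap = {};
  for (const [idx, items] of shards) {
    const file = `shard-${idx}.json`;
    fs.writeFileSync(path.join(OUT_DIR, file), JSON.stringify(items));
    shardMap[idx] = file;
  }
  fs.writeFileSync(path.join(OUT_DIR, 'shard-map.json'),
                   JSON.stringify(shardMap));
}

main().catch((err) => { console.error(err); process.exit(1); });
```

With pure ID-range sharding the shard is computable from the ID alone, so an explicit map only really earns its keep if shards are split by size or hashed instead.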

The ./tools/predeploy-checks.sh script basically runs the entire pipeline; you can run it unattended with AUTO_RUN=true (i.e. AUTO_RUN=true ./tools/predeploy-checks.sh).