Comment by nextaccountic 16 hours ago
Is there anything more production-grade built around the same idea of HTTP range requests as that SQLite thing? This has so much potential.
Hadn't seen PMTiles before, but that matches the mental model exactly! I chose physical file sharding over range requests on a single db because it felt safer for 'dumb' static hosts like CF: less risk of a single 22 GB file getting stuck or cached weirdly. Maybe it would work, though.
My only gripe is that the tile metadata is stored as JSON, which I get is for compatibility with existing software, but it means that for, say, a simple C program to implement the full spec, you need to ship a JSON parser on top of the PMTiles parser itself.
That's neat, but... is it just for cartographic data?
I want something like a db with indexes.
Look into using DuckDB with remote HTTP/S3 parquet files (there's a quick sketch below). Parquet files are organized as columnar vectors, grouped into chunks of rows; each row group stores metadata about the rows it contains, which the query engine can use to prune out data that doesn't need to be scanned. https://duckdb.org/docs/stable/guides/performance/indexing
LanceDB has a similar mechanism for operating on remote vector embeddings/text search.
It’s a fun time to be a dev in this space!
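A minimal sketch of the DuckDB approach; the URL and column names are made up, and the httpfs extension handles the range requests:

    import duckdb

    con = duckdb.connect()
    # httpfs adds http(s)/S3 support; newer DuckDB versions can autoload it,
    # but loading explicitly is harmless.
    con.execute("INSTALL httpfs")
    con.execute("LOAD httpfs")

    # DuckDB fetches the parquet footer first, then requests only the byte
    # ranges for the row groups and columns this query actually touches.
    rows = con.execute("""
        SELECT station_id, avg(temperature)
        FROM read_parquet('https://example.com/weather/2024.parquet')  -- placeholder URL
        GROUP BY station_id
    """).fetchall()
    print(rows)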
There was a UK government GitHub repo that did something interesting with this kind of trick against S3 but I checked just now and the repo is a 404. Here are my notes about what it did: https://simonwillison.net/2025/Feb/7/sqlite-s3vfs/
Looks like it's still on PyPI though: https://pypi.org/project/sqlite-s3vfs/
You can see inside it with my PyPI package explorer: https://tools.simonwillison.net/zip-wheel-explorer?package=s...
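Roughly what using it looks like, going from memory of its README, so treat the exact class and argument names as assumptions: it registers an apsw VFS backed by a boto3 bucket, so SQLite pages are read from and written to S3 objects instead of a local file.

    import apsw            # sqlite-s3vfs is built on the apsw SQLite bindings
    import boto3
    import sqlite_s3vfs

    # "my-bucket" is a placeholder; credentials come from the usual boto3 chain.
    bucket = boto3.Session().resource("s3").Bucket("my-bucket")
    s3vfs = sqlite_s3vfs.S3VFS(bucket=bucket)

    # Open a database whose pages live as S3 objects instead of a local file.
    db = apsw.Connection("cats.db", vfs=s3vfs.name)
    db.cursor().execute("CREATE TABLE IF NOT EXISTS cats (name TEXT)")
    db.cursor().execute("INSERT INTO cats VALUES ('Pancakes')")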
I recovered it from https://archive.softwareheritage.org/browse/origin/directory... and pushed a fresh copy to GitHub here:
https://github.com/simonw/sqlite-s3vfs
This comment was helpful in figuring out how to get a full Git clone out of the heritage archive: https://news.ycombinator.com/item?id=37516523#37517378
Here's a TIL I wrote up of the process: https://til.simonwillison.net/github/software-archive-recove...
I also have a locally cloned copy of that repo from when it was on GitHub. Same latest commit as your copy of it.
From what I can see on GitHub, your copy of the repo doesn't have the tags.
Do you have the tags locally?
If you don’t have the tags, I can push a copy of the repo to GitHub too and you can get the tags from my copy.
Didn't you do something similar for Datasette, Simon?
Nothing smart with HTTP range requests yet - I have https://lite.datasette.io which runs the full Python server app in the browser via WebAssembly and Pyodide but it still works by fetching the entire SQLite file at once.
Oh! I must've been confusing it with your TIL, where you linked to an explainer of this technique:
https://simonwillison.net/2021/May/2/hosting-sqlite-database...
https://phiresky.github.io/blog/2021/hosting-sqlite-database...
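The core trick in that post is easy to demo without any of the VFS machinery: fetch just the first 100 bytes of a statically hosted SQLite file (its header) with a Range request and read the page size out of it. A sketch with a placeholder URL:

    import struct
    import urllib.request

    # Any static host that honours Range headers will do; this URL is made up.
    url = "https://example.com/data.sqlite3"

    # The SQLite file header is the first 100 bytes; bytes 16-17 hold the
    # page size as a big-endian unsigned 16-bit integer.
    req = urllib.request.Request(url, headers={"Range": "bytes=0-99"})
    with urllib.request.urlopen(req) as resp:
        header = resp.read()

    assert header[:16] == b"SQLite format 3\x00"
    (page_size,) = struct.unpack_from(">H", header, 16)
    if page_size == 1:  # the value 1 means 65536 in the file format
        page_size = 65536
    print("page size:", page_size)

A range-aware VFS generalizes exactly this: every page read the query needs becomes one small Range request.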
I played around with this a while back; you can see a demo here. It also lets you pull new WAL segments in and apply them to the current database. I never got much time to go any further with it than this.
https://just.billywhizz.io/sqlite/demo/#https://raw.githubus...
GDAL's vsis3 dynamically fetches chunks of rasters from S3 using range requests. It's the underlying technology for several mapping systems.
There is also a file format that optimizes for this access pattern, Cloud Optimized GeoTIFF: https://cogeo.org/
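A sketch of what that looks like from Python; the URL is a placeholder, /vsicurl/ is for plain HTTP and /vsis3/ for authenticated S3:

    from osgeo import gdal

    # /vsicurl/ makes GDAL issue HTTP range requests instead of downloading the
    # whole raster; a Cloud Optimized GeoTIFF's internal tiling and overviews
    # keep those requests small.
    ds = gdal.Open("/vsicurl/https://example.com/imagery/scene.tif")  # placeholder URL

    band = ds.GetRasterBand(1)
    # Read a 512x512 window starting at pixel (2048, 2048); only the
    # overlapping internal tiles get fetched.
    window = band.ReadAsArray(2048, 2048, 512, 512)
    print(window.shape)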
This is somewhat related to a large dataset browsing service a friend and I worked on a while back - we made index files, and the browser ran a lightweight query planner to fetch static chunks which could be served from S3/torrents/whatever. It worked pretty well, and I think there’s a lot of potential for this style of data serving infra.
This is pretty much what is so remarkable about parquet files: not only do you get seekable data, you can fetch only the columns you want, too.
I believe there are also indexing opportunities (not necessarily via e.g. Hive partitioning), but frankly I am kinda out of my depth on it.
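For example, with pyarrow over fsspec (URL and column names are made up), you can read the footer metadata and then a single row group's worth of two columns without touching the rest of the file:

    import fsspec
    import pyarrow.parquet as pq

    # fsspec's HTTP filesystem turns seeks and reads into Range requests.
    with fsspec.open("https://example.com/data/trips.parquet", "rb") as f:  # placeholder URL
        pf = pq.ParquetFile(f)

        # The footer alone gives you the schema, row group layout, and the
        # per-column statistics used for pruning.
        print(pf.metadata.num_row_groups, pf.schema_arrow)

        # Fetch just two columns from the first row group.
        table = pf.read_row_group(0, columns=["pickup_ts", "fare_amount"])
        print(table.num_rows)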
Yes — PMTiles is exactly that: a production-ready, single-file, static container for vector tiles built around HTTP range requests.
I’ve used it in production to self-host Australia-only maps on S3. We generated a single ~900 MB PMTiles file from OpenStreetMap (Australia only, up to Z14) and uploaded it to S3. Clients then fetch just the required byte ranges for each vector tile via HTTP range requests.
It’s fast, scales well, and bandwidth costs are negligible because clients only download the exact data they need.
https://docs.protomaps.com/pmtiles/