Comment by seanalltogether 5 hours ago

Maybe I'm wrong, but it seems like you would only want to parse partial values for objects and arrays, not strings or numbers. Objects and arrays can be unbounded, so it makes sense to process what you can, when you can, whereas a string or number usually is not.

rictic 5 hours ago

Numbers, booleans, and nulls are atomic with jsonriver: you get them all at once, only when they're complete.

For my use case I wanted a streaming parse of strings: I was rendering JSON produced by an LLM to incrementally build a UI, and some of the strings (descriptions) were long enough that it was nice to see them render incrementally.
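The idea can be sketched without the library. This is a minimal illustration, not jsonriver's actual API: given chunks of a JSON document whose top-level value is a string, yield the partial string contents as each chunk arrives.

```javascript
// Minimal sketch (not jsonriver's real API): yield the partial contents
// of a top-level JSON string as network chunks arrive.
function* partialStrings(chunks) {
  let buf = "";
  for (const chunk of chunks) {
    buf += chunk;
    // Match the opening quote plus any complete characters/escapes so far,
    // stopping before a trailing lone backslash or the closing quote.
    const m = buf.match(/^"((?:[^"\\]|\\.)*)/);
    // Re-quote the captured body so JSON.parse decodes escapes for us.
    if (m) yield JSON.parse('"' + m[1] + '"');
  }
}

const chunks = ['"A long descri', 'ption that stre', 'ams in."'];
const partials = [...partialStrings(chunks)];
// partials grows chunk by chunk, ending with the complete string.
```

A real streaming parser tracks this state across nested objects and arrays instead of re-scanning a buffer, but the user-visible effect is the same: the UI can render each longer prefix as it arrives.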

everforward 5 hours ago

It could be useful if you're doing something with the string that operates sequentially anyway (e.g. block-by-block AES, or SHA sums).

I _think_ the intended use of this is for people with bad internet connections, so your UI can show data that's already been received without waiting for a full response. E.g. if their connection is 1 KB/s and you send an 8 KB JSON blob that's mostly a single text field, you can show them the first kilobyte after a second rather than waiting 8 seconds for the whole blob.

At first I thought maybe it was for handling gigantic JSON blobs that you don't want to entirely load into memory, but the API looks like it still loads the whole thing into memory.

xg15 3 hours ago

There is JSON that has very long string literals. Usually it's long-ish text, HTML content, or base64-encoded binary data.

So I'd definitely count strings as "unbounded" as well.

AaronFriel 5 hours ago

If you're generating long reports, code, etc. with an LLM, partial strings matter quite a lot for user experience.