upghost 16 hours ago

I'm sure you realize it gets reassembled into "huge opaque strings" when it is fed into the LLM as context. The arbitrary transport of the context as JSON is just a bit of protocol theater.

You don't really have to write a parser for the output yourself; Python already ships one in the form of the `ast` module[1].
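
For example, a minimal sketch of that approach, assuming the model emits a Python-style call such as `get_weather(city="Paris")` (the function name and arguments here are made up for illustration):

```python
import ast

def parse_tool_call(source: str):
    # Parse a single expression, e.g. 'get_weather(city="Paris")'.
    tree = ast.parse(source, mode="eval")
    call = tree.body
    if not isinstance(call, ast.Call):
        raise ValueError("expected a single call expression")
    name = ast.unparse(call.func)  # handles dotted names like tools.get_weather
    # literal_eval only accepts literals, so no arbitrary code runs here
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
    return name, kwargs

print(parse_tool_call('get_weather(city="Paris", units="metric")'))
# -> ('get_weather', {'city': 'Paris', 'units': 'metric'})
```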

But I get your drift. Depending on your workflow, this could seem like more work.

[1]: https://docs.python.org/3/library/ast.html#module-ast

max-privatevoid 15 hours ago

The inference engine can do whatever it wants. This is already the case. The actual format of the tool call text varies by model, and the inference engine handles the translation to and from the JSON representation; that's the least of its concerns.
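
Roughly what that translation layer does, sketched in Python with a made-up model-side syntax (the `<tool_call>` tags and field names are hypothetical; every model family has its own):

```python
import json
import re

# Hypothetical model-side syntax; real formats differ per model family:
#   <tool_call>get_weather {"city": "Paris"}</tool_call>
MODEL_CALL = re.compile(r"<tool_call>(\w+)\s+(\{.*?\})</tool_call>", re.DOTALL)

def to_wire_json(model_text: str) -> str:
    """Normalize a model-specific tool call to the JSON the client sees."""
    m = MODEL_CALL.search(model_text)
    if m is None:
        raise ValueError("no tool call found")
    name, args = m.group(1), json.loads(m.group(2))
    return json.dumps({"name": name, "arguments": args})

print(to_wire_json('<tool_call>get_weather {"city": "Paris"}</tool_call>'))
# {"name": "get_weather", "arguments": {"city": "Paris"}}
```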

What I don't want to happen is for some shitty webdev who writes an AI client in JavaScript to be forced to write a custom parser for some bespoke tool call language (call it "MLML", the Machine Learning Markup Language, to be superseded by YAMLML and then YAYAMLML, ...), or, god forbid, somehow embed a WASM build of Python in their project to be able to `import ast`, instead of just parsing JSON and looking at the fields of the resulting object.
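
The client side should stay as boring as this (sketched in Python; the field names follow the common OpenAI-style shape, which varies by API):

```python
import json

# Illustrative wire payload; exact field names vary by API.
raw = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

call = json.loads(raw)
print(call["name"], call["arguments"]["city"])  # get_weather Paris
```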

    upghost 7 hours ago

    Yeah that's fair, I concede the point.

    I got a good snicker out of the YAYAMLMLOLOL :D

    Seems like it's tools calling tools all the way down heh