Understanding Tool Calling in LLMs – Step-by-Step with REST and Spring AI
(muthuishere.medium.com) 76 points by muthuishere 13 hours ago
I think it's interesting and odd that tool calling took the form of this gnarly json blob. I much prefer the NexusRaven[1] style where you provide python function stubs with docstrings and get back python function invocations with the arguments populated. Of course I don't really understand why MCP is popular over REST or CLI, either.
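For a concrete contrast (a hedged sketch in Python, with a made-up get_weather tool), here is roughly how the same tool looks in the JSON-schema style versus the Python-stub style:

    # JSON-schema style: the tool is described as a nested dict (the "gnarly json blob"),
    # and the model replies with a JSON object naming the function and its arguments.
    json_style_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["city"],
            },
        },
    }

    # Stub style: the tool is a Python signature with a docstring, and the model
    # replies with a plain function invocation as text, e.g.
    #   get_weather(city="Paris", unit="celsius")
    stub_style_tool = '''
    def get_weather(city: str, unit: str = "celsius"):
        """Get the current weather for a city."""
    '''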
The actual API call is still going to be JSON. How do you deal with that? Pack your Python function definitions into an array of huge opaque strings? And who would want to write a parser for that?
I'm sure you realize it gets reassembled into "huge opaque strings" when it is fed into the LLM as context. The arbitrary transport of the context as JSON is just a bit of protocol theater.
You don't really have to parse the output yourself; Python already has a parser in the form of the ast library[1].
But I get your drift. Depending on your workflow this could seem like more work.
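As a rough sketch of that approach (the model output string here is hypothetical), the ast module turns an invocation into a function name and keyword arguments without any hand-written parser:

    import ast

    # Hypothetical model output in the "Python invocation" style
    model_output = 'get_weather(city="Paris", unit="celsius")'

    # Parse the string as a single Python expression
    call = ast.parse(model_output, mode="eval").body
    assert isinstance(call, ast.Call) and isinstance(call.func, ast.Name)

    name = call.func.id  # "get_weather"
    # literal_eval only accepts literals, so code smuggled into an argument is rejected
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}

    print(name, kwargs)  # get_weather {'city': 'Paris', 'unit': 'celsius'}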
The inference engine can do whatever it wants. This is already the case. The actual format of the tool call text varies by model, and the inference engine handles the translation from/to the JSON representation, that's the least of its concerns.
What I don't want to happen is for some shitty webdev who writes an AI client in JavaScript to be forced to write a custom parser for some bespoke tool call language (call it "MLML", the Machine Learning Markup Language, to be superseded by YAMLML and then YAYAMLML, ...), or god forbid, somehow embed a WASM build of Python in their project to be able to `import ast`, instead of just parsing JSON and looking at the fields of the resulting object.
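For what it's worth, that translation is mostly mechanical. Here is a sketch, assuming a Hermes-style model that wraps its calls in <tool_call> tags (the tag format genuinely varies by model), of what the inference engine does before the client ever sees the response:

    import json
    import re

    # Hypothetical raw completion from a model that emits Hermes-style tool-call tags
    raw = 'Let me check.\n<tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>'

    # Extract the tagged block and re-emit it in the OpenAI-style tool_calls shape
    tool_calls = []
    match = re.search(r"<tool_call>(.*?)</tool_call>", raw, re.DOTALL)
    if match:
        call = json.loads(match.group(1))
        tool_calls.append({
            "type": "function",
            "function": {
                "name": call["name"],
                # OpenAI-compatible APIs ship arguments as a JSON string, not an object
                "arguments": json.dumps(call["arguments"]),
            },
        })

    print(tool_calls)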
"Just write this...." adds an annotation
One of the many issues with Spring is that the abstractions it provides are extremely leaky [1]. They leak frequently, and when they do, an engineer is faced with the need to comprehend a pile of technology [2] that was supposed to be abstracted away in the first place.
In what ways are the abstractions leaky? @Tool and @GetMapping make no demands on how to implement “this is a tool” or “this is a GET REST endpoint.” That they’re coupled with Spring (or rather, that Spring is the only implementation of the semantics of these annotations) doesn’t constitute a leaky abstraction.
The precise semantics usually aren’t that well specified, and debugging is difficult when something goes wrong. Annotation-based frameworks are generally harder to reason about than libraries you only call into. One reason is that with frameworks you don’t know very well which parts of the framework code are involved in calling your code, whereas with libraries the answer is usually “the parts you call into”.
Spring has more “synergy” in a sense than using a bunch of separate libraries, but because of that it’s also a big ball of mud that your code sits on top of without being “on top of it” in the sense of being in control.
This is fair. I think the complaint is that Spring is _beautiful_ in small-to-medium-sized demos, but in any sufficiently large application you always seem to need to dig in, figure out what Spring is doing inside those annotations, and do something unspeakable involving the giant stack of factory factory context thread local bean counter manager handler method proxy managers, etc.
Also Spring is a kind of franchise or brand, and the individual projects under the umbrella vary a lot in quality.
I think about this occasionally, trying to rationalize it. I see similar patterns in other things like R and Julia: they design something in the environment to seem like a composable tool, and maybe it is, but only within two or three specific compositions, while the way the environment is described sure seems to imply some kind of universality that just isn't there. Some even seem to keep patching every leak (maybe Spring means "spring a leak"? Haha), and there's a sunk-cost-fallacy thing with an immense documentation page.
How do you pass a user token to MCP calls? Do you hand the token to the LLM and expect it to fill an argument?
Usually via environment variables in the MCP server definition, or a config file
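For example (a sketch of the mcpServers config shape that Claude Desktop and several other clients read; the server name, package, and variable are made up), the token lives in the client-side server definition and never passes through the model:

    {
      "mcpServers": {
        "my-api": {
          "command": "npx",
          "args": ["-y", "my-api-mcp-server"],
          "env": {
            "MY_API_TOKEN": "<user token goes here>"
          }
        }
      }
    }

The server process reads the variable from its environment at startup, so the LLM only ever sees tool names and arguments, never the credential.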
I started diving into LLMs a few weeks ago, and one thing that immediately caught me off guard was how little standardization there is across all the various pieces you would use to build a chat stack.
Want to swap out your client for a different one? Good luck - it probably expects a completely different schema. Trying a new model? Hope you're ready to deal with a different chat template. It felt like every layer had its own way of doing things, which made understanding the flow pretty frustrating for a newbie.
So I sketched out a diagram that maps out what (rough) schema is being used at each step of the process - from the initial request all the way through Ollama (via its OpenAI-compatible endpoints) and an MCP server - showing what transformations occur where.
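As a flavor of what those transformations look like (field names follow the OpenAI chat-completions format and MCP's tools/call method; the get_weather tool is made up), the same logical call shows up in two different shapes:

    # 1. What an OpenAI-compatible endpoint (e.g. Ollama's) returns in the assistant message:
    openai_tool_call = {
        "id": "call_0",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},  # JSON *string*
    }

    # 2. What the client then sends to the MCP server as a JSON-RPC request:
    mcp_request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "get_weather", "arguments": {"city": "Paris"}},  # plain object
    }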
Figured I'd share it as it may help someone else.
https://moog.sh/posts/openai_ollama_mcp_flow.html
Somewhat ironically, Claude built the JS hooks for my SVG with about five minutes of prompting.