Comment by tybaa 3 days ago

Hey! Glad to hear you're excited about it! Yes, we're currently working on improving our MCP support in general. We'll have more to share soon, but part of that is supporting SSE servers directly.

joshstrange 3 days ago

Very cool. Like I said, I can make it work with stdio, but I have an SSE MCP proxy I wrote to combine multiple MCP servers (just to make plugging all my tools into a new client easier to test). That said, after looking at the docs I think I'll be tempted to move my tools in directly, but I'll probably keep them behind MCP for portability.

  • tybaa 3 days ago

    Oh nice, did you write your own proxy or are you using something like https://www.npmjs.com/package/mcp-proxy ?

    • joshstrange 2 days ago

      I have used `mcp-proxy`, but (afaik) you can only use it 1-to-1, and I wanted an N-to-1 proxy so that, instead of configuring all my MCP servers in each of the clients I've tested out, I could just add one server and pull in everything.

      I found `mcp-proxy-server` [0], which seemed like it would do what I want, but I ran into multiple problems. I added some minor debug logging to it and the ball sort of rolled downhill from there. Now it's more my code than what was there originally, but I have tool proxying working for multiple clients (respecting sessionIds, etc.), I think I've solved most of the issues I've run into, and I've added features like optional tool prefixing so there isn't overlap between MCP servers.
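
      To make the prefixing idea concrete, here's a rough sketch of the routing core in TypeScript. All the names here (`AggregatingProxy`, `Upstream`, the `__` separator) are made up for illustration; this is not the actual `mcp-proxy-server` code:

```typescript
// Sketch of N-to-1 tool routing with optional prefixing.
// Illustrative only -- not the real mcp-proxy-server implementation.

type ToolHandler = (args: Record<string, unknown>) => unknown;

interface Upstream {
  name: string;                        // upstream MCP server id
  tools: Record<string, ToolHandler>;  // tool name -> handler
}

class AggregatingProxy {
  private routes = new Map<string, ToolHandler>();

  constructor(upstreams: Upstream[], prefixTools = true) {
    for (const up of upstreams) {
      for (const [tool, handler] of Object.entries(up.tools)) {
        // Prefix with the server name so two upstreams can both
        // expose e.g. a "search" tool without colliding.
        const exposed = prefixTools ? `${up.name}__${tool}` : tool;
        if (this.routes.has(exposed)) {
          throw new Error(`tool name collision: ${exposed}`);
        }
        this.routes.set(exposed, handler);
      }
    }
  }

  // The combined tool list the single downstream client sees.
  listTools(): string[] {
    return [...this.routes.keys()];
  }

  // Route a call back to the upstream that owns the tool.
  callTool(name: string, args: Record<string, unknown>): unknown {
    const handler = this.routes.get(name);
    if (!handler) throw new Error(`unknown tool: ${name}`);
    return handler(args);
  }
}
```

      With prefixing on, two upstreams that both expose a `search` tool come through as, say, `jira__search` and `github__search`; with it off, the constructor fails fast on the collision instead of silently shadowing one of them.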

      Given what I know now, I don't think N-to-1 is quite as useful as I thought. Or rather, it really depends on your client. If you can toggle individual tools on/off in your client then it's not a big problem, but sometimes you don't want all the tools, and if your client only allows toggling per MCP server then you'll have an issue.

      I love the idea of workflows and how you have defined agents. I think my current issue is almost too many tools; the LLM sometimes gets confused over which ones to use. I'm especially thrilled with the HTTP endpoints you expose for the agents. My main MCP server (my custom tools, vs the third-party ones) exposes an HTTP GUI for calling the tools (faster iteration than trying them through LLMs), and I've been using that plus third-party chat clients (LibreChat and OpenWebUI) as my "LLM testing" platform (because I wasn't aware of better options), but neither of those tools lets you "re-expose" the agents via an API.

      All in all, I'm coming to the conclusion that 90% of the MCP servers out there are really cool for seeing what's possible, but it's probably best to write your own tools/MCP, since most MCP servers are just thin wrappers around an API. Also, it's so easy to create an MCP server that they're popping up all over the place, often at low quality (they don't fully implement the API, take shortcuts for the author's use case, etc.). Using LLMs to write the "glue" code from API to tool is a fairly minor effort, and I think it's worth "owning". To sum all that up: I think my usage of third-party MCP servers is going to trend toward 0 as I "assimilate" them into my own codebase for more control, but I really like MCP as a way to vend tools to various LLM clients/tools.
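
      As a hypothetical example of what "owning the glue" looks like: one hand-written tool wrapping one API endpoint. The `ToolDef` shape and the tracker URL below are stand-ins I made up, not the real MCP SDK types or a real API:

```typescript
// Sketch of hand-written API->tool glue. The ToolDef shape and the
// endpoint URL are hypothetical stand-ins for illustration.

interface ToolDef {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>; // JSON Schema for the arguments
  handler: (args: Record<string, unknown>) => Promise<string>;
}

// fetchFn is injectable so the tool can be exercised without the
// network -- the same fast-iteration idea as an HTTP GUI for tools.
function makeIssueSearchTool(fetchFn: typeof fetch = fetch): ToolDef {
  return {
    name: "issue_search",
    description: "Free-text search over the issue tracker.",
    inputSchema: {
      type: "object",
      properties: { query: { type: "string" } },
      required: ["query"],
    },
    handler: async ({ query }) => {
      const url =
        "https://tracker.example.com/api/issues?q=" +
        encodeURIComponent(String(query));
      const res = await fetchFn(url);
      if (!res.ok) throw new Error(`API error: ${res.status}`);
      // Owning this layer means you decide how results are shaped
      // for the model, instead of inheriting someone's shortcuts.
      return JSON.stringify(await res.json());
    },
  };
}
```

      Keeping `fetchFn` injectable means the tool can be tested outside any LLM loop, which is the same reason I built the HTTP GUI for my own tools.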

      [0] https://github.com/adamwattis/mcp-proxy-server

      • tybaa 2 days ago

        Thanks for sharing! It's so helpful to hear real world experiences like this. Would you be interested in meeting up on a call sometime? I'd love to chat about how you're using MCP to help inform how we can make all of this easier for folks. We're actively thinking about our APIs for tool use and MCP right now.