CodeMode & Recently Updated Servers: Remote MCP Updates
How code generation is replacing multi-step tool calling, and tracking the latest MCP servers on Remote-MCP.com
Hey everyone, it’s been a while since the last post due to a busy period for me personally. It’s also been a busy period for Remote MCP: since then we’ve had CodeMode, ChatGPT Apps, and the Official Registry. Here’s our view on CodeMode.
We’re also excited to release a new page on our website: Recently Updated. The page is powered by the Official Registry, so everyone can keep up to date with new Remote MCP Servers. It only includes Remote MCP Servers that have been submitted to the registry - if you think any are worthy of inclusion on our main list, please submit a PR to add the server.
Let’s kick on.
CodeMode
A new paradigm for LLM tool use. CodeMode is built on the observation that models are very good at generating code, but not yet as reliable at multi-step tool calling. To solve this, CodeMode swaps the ask, replacing tool calling with code generation. The early results appear to be the holy grail: higher performance with lower token use.
So how does this work?
Simply put, rather than exposing N tools to a model, you expose a single tool:
CodeMode
Description: A tool that can generate code to achieve a goal
Input type: { "functionDescription": string }
Output type: { "result": string }
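As a rough sketch, here’s what that single tool might look like as an MCP-style tool definition (name, description, JSON Schema input) in TypeScript. The field wording follows the description above and isn’t taken from any specific implementation:

```typescript
// A minimal sketch of the single CodeMode tool definition, using the MCP-style
// tool shape. Field wording is illustrative, not from a real implementation.
const codeModeTool = {
  name: "CodeMode",
  description: "A tool that can generate code to achieve a goal",
  inputSchema: {
    type: "object",
    properties: {
      functionDescription: {
        type: "string",
        description: "What the generated code should accomplish",
      },
    },
    required: ["functionDescription"],
  },
} as const;
```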
The CodeMode tool generates code, executes it, and returns the result as the tool output. This collapses multi-step tool calls into a single (albeit more complex) step. But how can one tool stand in for all N?
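To make the control flow concrete, here’s a hedged sketch of what the handler behind that tool could look like. `generateCode` (an LLM call that returns source text) and `runInSandbox` (an isolated execution environment with the tool library injected) are hypothetical helpers, not names from any real library:

```typescript
// Hedged sketch of the CodeMode handler. Both helpers passed in here are
// hypothetical stand-ins for an LLM call and a sandboxed runtime.
async function handleCodeMode(
  input: { functionDescription: string },
  generateCode: (goal: string) => Promise<string>,
  runInSandbox: (source: string) => Promise<unknown>,
): Promise<{ result: string }> {
  // 1. Produce code that accomplishes the described goal.
  const source = await generateCode(input.functionDescription);
  // 2. Execute it in a sandbox where the tool library is available.
  const output = await runInSandbox(source);
  // 3. Return the execution result as the single tool output.
  return { result: JSON.stringify(output) };
}
```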
The trick is to expose the N tools as a library, with each tool corresponding to a library function (see the sketch after this list). Then we find:
- The tool’s input schema precisely defines the function’s arguments, fully typed
- The tool’s output schema defines the function’s return type
- The tool’s description acts as documentation on how and why to use the function
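For illustration, here’s how a hypothetical `get_weather` tool might be re-exposed as a typed library function. The tool name, schemas, and the `callTool` transport are all made up for the example:

```typescript
// Illustrative only: a hypothetical "get_weather" MCP tool re-exposed as a
// typed library function. Input schema becomes the argument type, output
// schema becomes the return type, and the description becomes the doc comment.
interface GetWeatherInput {
  city: string;
  units?: "metric" | "imperial";
}

interface GetWeatherOutput {
  temperature: number;
  conditions: string;
}

// Placeholder transport so the sketch is self-contained.
declare function callTool(name: string, args: unknown): Promise<unknown>;

/** Fetch the current weather for a city. (Lifted from the tool's description.) */
async function getWeather(input: GetWeatherInput): Promise<GetWeatherOutput> {
  // Under the hood this forwards the call to the MCP server's "get_weather" tool.
  return (await callTool("get_weather", input)) as GetWeatherOutput;
}
```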
Now, when a model generates code to achieve the task, it can use each tool as if it were a built-in function. When the generated code is executed, the tools are invoked directly through those library bindings. For a Remote MCP server, this means the server’s full tool set is available to the code-generation step as a library.
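And here’s the kind of code a model might then generate: several tool calls that would previously have been separate model round-trips, composed in a single execution. Again, `getWeather` and `sendEmail` are hypothetical bindings like the one sketched above:

```typescript
// The kind of code a model might generate inside CodeMode: multiple tool
// calls composed in one execution instead of separate model round-trips.
declare function getWeather(input: {
  city: string;
  units?: "metric" | "imperial";
}): Promise<{ temperature: number; conditions: string }>;

declare function sendEmail(input: {
  to: string;
  subject: string;
  body: string;
}): Promise<void>;

async function run(): Promise<string> {
  const weather = await getWeather({ city: "London", units: "metric" });
  if (weather.temperature < 5) {
    await sendEmail({
      to: "team@example.com",
      subject: "Cold weather alert",
      body: `It is ${weather.temperature}°C and ${weather.conditions} in London.`,
    });
  }
  return `London: ${weather.temperature}°C, ${weather.conditions}.`;
}
```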
There are a lot of moving parts to running this setup in production, with sandboxing the code execution being critical. The Cloudflare blog post goes into more detail and is a very good read on the topic; it uses dynamic worker loaders, a sandboxing primitive that is worth a deep dive of its own.
Recently Updated Remote MCP Servers
Make sure to keep track of new and updated servers on our new page!

