It's a little different. These are systems that can explicitly trade cost for better or worse outcomes, in ways that aren't otherwise very configurable. With an HTTP API, you can read the docs, pick the small-image vs. large-image endpoint or whatever, and have a clear idea of what you're getting and what it costs. For LLMs, it would be very nice to be able to communicate the desired and actual cost breakdown for each sub-action.
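Roughly what I have in mind, as a sketch only (none of these field names come from any existing spec, they're invented for illustration): the caller states a budget up front, and the agent reports what each sub-action actually cost.

```typescript
// Hypothetical per-sub-action cost breakdown an agent could return
// alongside its result. Field names are illustrative, not a real spec.
interface SubActionCost {
  action: string;        // e.g. "web_search", "summarize_page"
  inputTokens: number;
  outputTokens: number;
  estimatedUsd: number;  // what was budgeted for this step
  actualUsd: number;     // what it actually cost
}

interface CostReport {
  budgetUsd: number;            // desired spend, communicated up front
  subActions: SubActionCost[];  // actual spend, broken down per step
  totalUsd: number;
}

const example: CostReport = {
  budgetUsd: 0.05,
  subActions: [
    { action: "web_search", inputTokens: 900, outputTokens: 150, estimatedUsd: 0.004, actualUsd: 0.003 },
    { action: "summarize_page", inputTokens: 6200, outputTokens: 400, estimatedUsd: 0.02, actualUsd: 0.026 },
  ],
  totalUsd: 0.029,
};
```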
It would also be nice to do that for HTTP, for the same reason. You can also read the fine docs for your MCP, and the LLM can read the docs too.