TPMJS vs Manual Function Calling
Manual function calling means writing JSON schemas, validation, and handlers per tool per provider. TPMJS auto-extracts schemas from Zod definitions and serves them via a universal MCP interface.
The Per-Provider Schema Problem
Each LLM provider expects tool schemas in a different format. A single tool requires separate definitions for each provider:
OpenAI format
```json
{
  "type": "function",
  "function": {
    "name": "search_web",
    "description": "Search the web",
    "parameters": {
      "type": "object",
      "properties": {
        "query": {
          "type": "string",
          "description": "Search query"
        },
        "limit": {
          "type": "number",
          "description": "Max results"
        }
      },
      "required": ["query"]
    }
  }
}
```

Anthropic format
```json
{
  "name": "search_web",
  "description": "Search the web",
  "input_schema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "Search query"
      },
      "limit": {
        "type": "number",
        "description": "Max results"
      }
    },
    "required": ["query"]
  }
}
```

Two providers, one tool — already two schema definitions to maintain. Add the handler function, validation logic, error handling, and tests, and each tool costs 1-4 hours. Multiply by ten tools and three providers: 30 schema definitions, 30 validation functions, and a maintenance surface that grows linearly.
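The duplication is easy to sketch: both providers consume the same underlying JSON Schema, just wrapped in different envelopes. The shapes below mirror the two formats above (the SDK calls that would consume them are omitted):

```typescript
// One canonical JSON Schema for search_web, hand-wrapped per provider.
const searchWebSchema = {
  type: 'object',
  properties: {
    query: { type: 'string', description: 'Search query' },
    limit: { type: 'number', description: 'Max results' },
  },
  required: ['query'],
};

// OpenAI nests the schema under function.parameters...
const openAiTool = {
  type: 'function',
  function: {
    name: 'search_web',
    description: 'Search the web',
    parameters: searchWebSchema,
  },
};

// ...while Anthropic expects the same schema under input_schema.
const anthropicTool = {
  name: 'search_web',
  description: 'Search the web',
  input_schema: searchWebSchema,
};
```

Sharing the inner schema object helps, but the envelopes still drift independently, and every new provider adds another wrapper to maintain by hand.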
How TPMJS Eliminates Schema Boilerplate
TPMJS tools define their schema once using Zod — the same Zod object used for runtime validation is also the schema definition. The TPMJS enrichment pipeline auto-extracts the JSON Schema from the Zod definition, so there's no separate schema to write or maintain:
```typescript
// This is the entire tool definition.
// Zod schema = input validation = JSON Schema = MCP inputSchema
import { tool } from 'ai';
import { z } from 'zod';

export const searchWeb = tool({
  description: 'Search the web and return results',
  parameters: z.object({
    query: z.string().describe('Search query'),
    limit: z.number().optional().describe('Max results'),
  }),
  execute: async ({ query, limit = 10 }) => {
    // Implementation — no separate schema, no per-provider format.
    // Encode the query so special characters survive the URL.
    const response = await fetch(
      `https://api.example.com/search?q=${encodeURIComponent(query)}&limit=${limit}`
    );
    return response.json();
  },
});
```
What happens automatically after publishing:

1. npm publish → the TPMJS changes feed picks up the package
2. Enrichment sync extracts inputSchema from the Zod definition via the executor
3. Quality score is calculated: tier + log(downloads) + log(stars) + metadata
4. The tool becomes available via MCP, CLI, web search, and the AI SDK

Feature Comparison
| Capability | Manual Function Calling | TPMJS |
|---|---|---|
| Schema authoring | Hand-write JSON Schema per provider format | Zod schema in code — auto-extracted to JSON Schema during sync |
| Input validation | Implement per tool (LLMs send malformed data) | Zod .parse() built into every tool — validated before execute() |
| Cross-provider support | Different schema format per vendor (OpenAI vs Anthropic vs Google) | MCP protocol — one interface for all providers |
| Schema-code sync | Manual — schemas drift from implementation | Schema is the code — Zod object is the source of truth |
| Discovery | None — tools live in your codebase | BM25-scored search + 44 categories + quality scoring |
| Reuse across projects | Copy-paste or internal packages | npm install — published to the public registry |
| Health monitoring | You build it | Automated import + execution health checks during enrichment |
| Time to add a tool | 1-4 hours (schema + validation + handler + tests) | npm install + 1 import |
| Error handling | Per tool, per provider | Standardized executor contract with executionTimeMs tracking |
At Scale
Manual: 5 tools, 2 providers
- 10 JSON Schema definitions
- 10 validation functions
- 5 handler implementations
- 5 test suites
- ~2,000 lines of boilerplate that must stay in sync
TPMJS: 5 tools
- 0 separate schema definitions (Zod is the schema)
- 0 validation functions (Zod .parse() is validation)
- 5 npm install commands
- 5 import statements
- Works with every provider via MCP — no per-provider code
Built-In Quality Signals
With manual function calling, there's no way to know if a tool implementation is reliable. TPMJS provides automated quality scoring:
| Signal | Contribution |
|---|---|
| Tier score | 0.4 (minimal) or 0.6 (rich metadata) |
| Downloads | log₁₀(downloads + 1) / 15, capped at 0.2 |
| Stars | log₁₀(stars + 1) / 10, capped at 0.1 |
| Metadata | +0.04 params, +0.03 returns, +0.03 aiAgent |
Scores range from 0.00 to 1.00 and are recalculated daily. This lets agents prioritize higher-quality tools automatically.
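The components combine into a small worked example. The function below is a reconstruction from the listed formulas, not TPMJS's actual code; the field names and the final clamp to 1.00 are assumptions:

```typescript
// Hypothetical reconstruction of the quality score from the formulas above.
interface ScoreInputs {
  tier: 'minimal' | 'rich';
  downloads: number;
  stars: number;
  hasParams: boolean;
  hasReturns: boolean;
  hasAiAgent: boolean;
}

function qualityScore(t: ScoreInputs): number {
  const tier = t.tier === 'rich' ? 0.6 : 0.4;
  const downloads = Math.min(Math.log10(t.downloads + 1) / 15, 0.2);
  const stars = Math.min(Math.log10(t.stars + 1) / 10, 0.1);
  const metadata =
    (t.hasParams ? 0.04 : 0) + (t.hasReturns ? 0.03 : 0) + (t.hasAiAgent ? 0.03 : 0);
  return Math.min(tier + downloads + stars + metadata, 1);
}

// A rich-metadata tool with 10,000 downloads and 100 stars saturates
// both logarithmic caps: 0.6 + 0.2 + 0.1 + 0.1 = 1.00.
```

The logarithmic caps mean popularity helps but cannot dominate: a tool with ten million downloads scores the same downloads component as one with a thousand, so metadata quality stays decisive.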
One schema. Every provider. Zero boilerplate.
Define your tool with Zod, publish to npm. TPMJS handles the rest.