@tpmjs/tools-hllm
Execute a topology with streaming SSE response. Supports topology types: single, sequential, parallel, map-reduce, scatter, debate, reflection, consensus, brainstorm, decomposition, rhetorical-triangle, tree-of-thoughts, react.
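For reference, the topology names listed above can be captured as a TypeScript union. This is only a sketch: the `TopologyType` name is not exported by the package and is introduced here for illustration.

```typescript
// Illustrative union of the topology names listed in the description.
// The type name is hypothetical; it is not part of @tpmjs/tools-hllm.
type TopologyType =
  | 'single' | 'sequential' | 'parallel' | 'map-reduce' | 'scatter'
  | 'debate' | 'reflection' | 'consensus' | 'brainstorm' | 'decomposition'
  | 'rhetorical-triangle' | 'tree-of-thoughts' | 'react';

// Using the union catches misspelled topology names at compile time.
const topology: TopologyType = 'debate';
console.log(topology);
```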
Install this tool and use it with the AI SDK
```bash
npm install @tpmjs/tools-hllm
# or: pnpm add @tpmjs/tools-hllm
# or: yarn add @tpmjs/tools-hllm
# or: bun add @tpmjs/tools-hllm
# or: deno add npm:@tpmjs/tools-hllm
```

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { executeTopology } from '@tpmjs/tools-hllm';

const result = await generateText({
  model: openai('gpt-4o'),
  tools: { executeTopology },
  prompt: 'Your prompt here...',
});

console.log(result.text);
```

Available configuration options
- `topology` (string): Type of topology to execute.
- `prompt` (string): The prompt to send to the topology.
- `model` (string): Model to use (e.g., "gpt-4", "claude-3-opus"). Uses the default if not specified.
- `systemPrompt` (string): System prompt to set the context.
- `temperature` (number): Temperature for response randomness (0-2). Default: 0.7.
- `maxTokens` (number): Maximum tokens in the response.
- `tools` (array): Tool IDs to make available to the topology.
- `sessionId` (string): Session ID to continue a conversation.
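As a sketch of how these options fit together, the snippet below builds an options object matching the schema above and runs a minimal client-side check before it would be sent. The `ExecuteTopologyOptions` interface and `validateOptions` helper are assumptions introduced for illustration; they are not part of the package's API.

```typescript
// Hypothetical option shape mirroring the configuration options above.
// Neither this interface nor the validator is exported by @tpmjs/tools-hllm.
interface ExecuteTopologyOptions {
  topology: string;        // required
  prompt: string;          // required
  model?: string;
  systemPrompt?: string;
  temperature?: number;    // 0-2, default 0.7
  maxTokens?: number;
  tools?: string[];        // tool IDs
  sessionId?: string;      // continue a prior conversation
}

// Minimal sanity check: required fields present, temperature in range.
function validateOptions(opts: ExecuteTopologyOptions): string[] {
  const errors: string[] = [];
  if (!opts.topology) errors.push('topology is required');
  if (!opts.prompt) errors.push('prompt is required');
  if (opts.temperature !== undefined && (opts.temperature < 0 || opts.temperature > 2)) {
    errors.push('temperature must be between 0 and 2');
  }
  return errors;
}

const opts: ExecuteTopologyOptions = {
  topology: 'debate',
  prompt: 'Should tabs or spaces be used for indentation?',
  temperature: 0.7,
  maxTokens: 512,
};

console.log(validateOptions(opts)); // []
```

An empty error list means the object is at least structurally plausible before handing it to the tool.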