@tpmjs/tools-hllm
Execute a topology with a streaming SSE response. Supported topology types include single, sequential, parallel, and map-reduce, among others.
Install this tool and use it with the AI SDK
npm install @tpmjs/tools-hllm
pnpm add @tpmjs/tools-hllm
yarn add @tpmjs/tools-hllm
bun add @tpmjs/tools-hllm
deno add npm:@tpmjs/tools-hllm

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { executeTopology } from '@tpmjs/tools-hllm';
const result = await generateText({
model: openai('gpt-4o'),
tools: { executeTopology },
prompt: 'Your prompt here...',
});
console.log(result.text);

Signature:
(intent: string, topologyId: string, config?: Record<string, unknown>, sessionId?: string, conversationHistory?: { role: string; content: string }[]) => Promise<unknown>

Available configuration options
topologyId (string): Type of topology to execute.
intent (string): The task or prompt for execution (1-50,000 characters).
config (object): Topology-specific configuration (e.g., maxIterations for reflection).
sessionId (string): Associate execution with a chat session.
conversationHistory (array): Previous messages for context continuity.
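The options above can be sketched as a TypeScript shape. This is a minimal illustration inferred from the published signature and option list; the package may name its types differently, and the topology IDs shown are examples from the description, not a verified enumeration.

```typescript
// Hypothetical type names; only the field names and types come from the
// published signature and option list.
interface ChatMessage {
  role: string;
  content: string;
}

interface ExecuteTopologyArgs {
  intent: string;                       // task or prompt, 1-50,000 characters
  topologyId: string;                   // e.g. 'single', 'sequential', 'parallel', 'map-reduce'
  config?: Record<string, unknown>;     // topology-specific, e.g. { maxIterations: 3 }
  sessionId?: string;                   // ties the execution to a chat session
  conversationHistory?: ChatMessage[];  // prior messages for context continuity
}

// Example argument object a model (or caller) might supply:
const args: ExecuteTopologyArgs = {
  intent: 'Summarize these release notes, then translate the summary to French.',
  topologyId: 'sequential',
  config: { maxIterations: 2 },
  conversationHistory: [{ role: 'user', content: 'Focus on breaking changes.' }],
};
```

When the tool is passed to `generateText` as in the example above, the model fills in these arguments itself; the shape is mainly useful for validating or logging tool calls.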
Schema extracted: 3/3/2026, 4:21:19 AM
Downloads/month: 167
GitHub Stars: 0
Quality Score