RealSkills API

Skills, proven in the wild — not declared on paper.

A living skills endpoint that evolves through agent conversations. Unlike static documentation, skills emerge organically from question patterns and improve over time.

Philosophy

Why living skills beat static skills.md

Traditional documentation is written once and becomes outdated. RealSkills takes a different approach:

  • Questions drive discovery — Every agent question reveals what users actually need
  • Answers compound — Similar questions get better answers based on previous responses
  • Skills emerge — Patterns in questions automatically create skill categories
  • Quality improves — More questions = more context = better responses

Think of it as a knowledge base that learns from every interaction.

How It Works

The question → skill inference loop

Agent POSTs question
        ↓
┌─────────────────────────────┐
│   /skills endpoint          │
│   - Embed question          │
│   - Check similarity cache  │
│   - RAG from stored Q&A     │
│   - Generate response (LLM) │
│   - Store question + answer │
│   - Update skill graph      │
└─────────────────────────────┘
        ↓
    Return skill guidance (markdown)
        ↓
    Skill graph evolves in real-time

  1. Question Received — Agent submits a question via POST
  2. Embedding Generated — Question is converted to a 3072-dimensional vector
  3. Similarity Check — If >95% similar to existing question, return cached answer
  4. RAG Context — Find similar past questions/answers for context
  5. Response Generation — GPT-4.1-mini generates a tailored response
  6. Storage & Graph Update — Question stored, skills inferred and linked
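The steps above can be sketched as a small dispatcher. This is a minimal sketch of steps 3–5 only: `resolveQuestion`, `StoredQA`, and the `generate` callback are illustrative names, and the real embedding, vector search, LLM, and storage calls are stubbed out.

```typescript
// Sketch of the question → skill inference loop: given past Q&A already
// ranked by similarity, either serve the cached answer or generate a new one.
interface StoredQA {
  question: string;
  answer: string;
  similarity: number; // similarity to the incoming question, 0-1
}

const CACHE_THRESHOLD = 0.95; // >95% similar → return cached answer

function resolveQuestion(
  similar: StoredQA[], // sorted by similarity, best first
  generate: (context: StoredQA[]) => string // stand-in for the LLM call
): { answer: string; cached: boolean; basedOn: number } {
  // Step 3: similarity check against stored questions
  const best = similar[0];
  if (best && best.similarity > CACHE_THRESHOLD) {
    return { answer: best.answer, cached: true, basedOn: similar.length };
  }
  // Steps 4-5: similar past Q&A becomes RAG context for generation
  return { answer: generate(similar), cached: false, basedOn: similar.length };
}
```

The same `cached` / `basedOn` values surface in the response's `meta` and `data` fields.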

API Reference

GET and POST endpoints

GET /:username/collections/:slug/skills

Returns the skill summary as markdown. Triggers lazy seeding on first access.

curl https://tpmjs.com/ajaxdavis/collections/my-tools/skills

POST /:username/collections/:slug/skills

Submit a question and receive an AI-generated response based on the collection's tools and previous Q&A.

curl -X POST https://tpmjs.com/ajaxdavis/collections/my-tools/skills \
  -H "Content-Type: application/json" \
  -d '{
    "question": "How do I handle errors with these tools?",
    "agentName": "my-agent",
    "tags": ["error-handling"]
  }'

Request Schema

POST request body format

interface SkillsRequest {
  // Required: The question to ask (5-2000 characters)
  question: string;

  // Optional: Session ID for multi-turn conversations
  sessionId?: string;

  // Optional: Self-reported agent identity
  agentName?: string;

  // Optional: Additional context (max 2000 chars)
  context?: string;

  // Optional: Hint tags to guide response (max 10)
  tags?: string[];
}
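A client can check these limits before sending a request. `validateSkillsRequest` is a hypothetical helper, not part of the API; the limits are the ones documented in the schema above.

```typescript
// Client-side validation mirroring the documented SkillsRequest limits.
interface SkillsRequest {
  question: string;
  sessionId?: string;
  agentName?: string;
  context?: string;
  tags?: string[];
}

function validateSkillsRequest(req: SkillsRequest): string[] {
  const errors: string[] = [];
  if (req.question.length < 5 || req.question.length > 2000) {
    errors.push("question must be 5-2000 characters");
  }
  if (req.context !== undefined && req.context.length > 2000) {
    errors.push("context must be at most 2000 characters");
  }
  if (req.tags !== undefined && req.tags.length > 10) {
    errors.push("at most 10 tags are allowed");
  }
  return errors; // empty array means the request is valid
}
```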

Multi-Turn Conversations

To continue a conversation, include the sessionId from a previous response. Sessions maintain context for up to 24 hours and include the last 20 messages.
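A rough sketch of how such a session window behaves, assuming the server keeps the last 20 messages and expires a session 24 hours after its last activity (`sessionContext` and `SessionMessage` are illustrative names, not server internals):

```typescript
// Session window: last 20 messages, expiring 24h after the last activity.
interface SessionMessage {
  role: "agent" | "assistant";
  text: string;
  at: number; // timestamp in ms
}

const MAX_MESSAGES = 20;
const TTL_MS = 24 * 60 * 60 * 1000; // 24 hours

function sessionContext(messages: SessionMessage[], now: number): SessionMessage[] {
  const last = messages[messages.length - 1];
  if (!last || now - last.at > TTL_MS) return []; // session expired
  return messages.slice(-MAX_MESSAGES);           // keep only the last 20
}
```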

Response Schema

Successful response format

interface SkillsResponse {
  success: boolean;
  data: {
    // Markdown-formatted response
    answer: string;

    // Confidence score (0-1)
    confidence: number;

    // Number of similar questions used for RAG
    basedOn: number;

    // Skills this question relates to
    skillsIdentified: string[];

    // Session ID for continuing conversation
    sessionId?: string;

    // Suggested follow-up questions
    suggestedFollowups?: string[];
  };
  meta: {
    // Whether response was from cache
    cached: boolean;

    // ID of stored question
    questionId: string;

    // Processing time in milliseconds
    processingMs: number;
  };
}

Example Response

{
  "success": true,
  "data": {
    "answer": "To handle errors with these tools...",
    "confidence": 0.85,
    "basedOn": 3,
    "skillsIdentified": ["error-handling", "try-catch-patterns"],
    "sessionId": "sess_abc123",
    "suggestedFollowups": [
      "What are the retry patterns?",
      "How do I log errors?"
    ]
  },
  "meta": {
    "cached": false,
    "questionId": "clx123abc456",
    "processingMs": 1234
  }
}

Integration Guide

How agents should use the Skills API

1. Initial Discovery

When an agent first encounters a collection, fetch the skills summary:

const response = await fetch(`${baseUrl}/${username}/collections/${slug}/skills`);
const skillsMarkdown = await response.text();
// Parse or display the markdown summary

2. Asking Questions

When the agent needs guidance on using the tools:

const response = await fetch(`${baseUrl}/${username}/collections/${slug}/skills`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    question: "How do I parse JSON responses from the API?",
    agentName: "my-automation-agent",
    tags: ["json", "parsing"]
  })
});

const { data } = await response.json();
console.log(data.answer); // Use this guidance

3. Multi-Turn Conversations

For follow-up questions, use the session ID:

// First question
const first = await askSkills("How do I handle pagination?");
const sessionId = first.data.sessionId;

// Follow-up (maintains context)
const followUp = await askSkills(
  "Can you show me an example with async iteration?",
  { sessionId }
);
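The snippets above assume an askSkills() helper that is not defined anywhere; one minimal sketch is below. The URL default, the FetchLike type, and the buildSkillsBody helper are illustrative, and fetchFn is injectable so the helper can be exercised without a network.

```typescript
// Minimal askSkills() sketch over the POST endpoint documented above.
interface AskOptions {
  sessionId?: string;
  agentName?: string;
  tags?: string[];
}

type FetchLike = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: string }
) => Promise<{ json(): Promise<any> }>;

function buildSkillsBody(question: string, opts: AskOptions = {}): string {
  // Serialize the documented request fields; optional fields are
  // included only when provided.
  return JSON.stringify({ question, ...opts });
}

async function askSkills(
  question: string,
  opts: AskOptions = {},
  fetchFn: FetchLike = (globalThis as any).fetch,
  url = "https://tpmjs.com/ajaxdavis/collections/my-tools/skills"
): Promise<any> {
  const response = await fetchFn(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildSkillsBody(question, opts),
  });
  return response.json();
}
```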

Best Practices

Effective questioning patterns

Good Questions
  • ✓ "How do I handle rate limiting with the API tool?"
  • ✓ "What's the best way to batch multiple requests?"
  • ✓ "Can I use these tools with streaming responses?"
Avoid These
  • ✗ "Tell me everything about this collection"
  • ✗ Single-word questions like "Help"
  • ✗ Questions unrelated to the collection's tools

Tips for Better Responses

  • Be specific about what you're trying to accomplish
  • Include relevant context in the context field
  • Use tags to hint at the problem domain
  • Use sessions for related follow-up questions

Confidence Scores

How confidence is calculated

Each response includes a confidence score (0-1) based on:

  • Base confidence (30%) — Minimum for any generated response
  • Similar questions (up to 40%) — More similar past Q&A = higher confidence
  • Skills documentation (20%) — Collection has generated skills.md
  • Question volume (10%) — 3 or more similar questions add a bonus
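These weights can be reconstructed as a small scoring function. The exact per-question increment inside the 40% band is not documented; the sketch below assumes 10% per similar question, so treat `computeConfidence` as illustrative rather than the server's actual formula.

```typescript
// Illustrative reconstruction of the confidence weights listed above.
function computeConfidence(
  similarCount: number, // similar past Q&A used for RAG
  hasSkillsDoc: boolean // collection has a generated skills.md
): number {
  let score = 0.3;                            // base confidence (30%)
  score += Math.min(similarCount * 0.1, 0.4); // up to 40% from similar questions (assumed 10% each)
  if (hasSkillsDoc) score += 0.2;             // skills documentation (20%)
  if (similarCount >= 3) score += 0.1;        // question-volume bonus (10%)
  return Math.min(score, 1);                  // clamp to the 0-1 range
}
```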

Interpreting Scores

  • >0.8 — High confidence, well-supported by prior Q&A
  • 0.5-0.8 — Moderate confidence, some relevant context
  • <0.5 — Lower confidence, limited prior knowledge

Lazy Seeding

Automatic bootstrapping on first access

When a collection's skills endpoint is accessed for the first time, it automatically seeds with synthetic questions generated from:

  • Existing skills.md documentation (if available)
  • Tool descriptions and capabilities
  • Common use case patterns for the tool category

This ensures the endpoint is useful immediately, even before any real agent interactions. Seeding typically adds 10-15 synthetic Q&A pairs.

Seeding Status Response

If seeding is in progress when you make a request, you'll receive a 202 response:

{
  "success": true,
  "data": {
    "status": "seeding",
    "message": "Skills are being generated. Please retry in a few seconds."
  }
}
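One way a client might handle this is to detect the seeding payload and retry after a short delay. `isSeeding`, `maxAttempts`, and `retryDelayMs` are illustrative client-side choices, not documented API behavior.

```typescript
// Detect the 202 seeding response and retry with a fixed delay.
function isSeeding(status: number, body: { data?: { status?: string } }): boolean {
  return status === 202 && body.data?.status === "seeding";
}

type FetchLike = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: string }
) => Promise<{ status: number; json(): Promise<any> }>;

async function fetchSkillsWithRetry(
  url: string,
  question: string,
  fetchFn: FetchLike = (globalThis as any).fetch,
  maxAttempts = 5,
  retryDelayMs = 2000
): Promise<any> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetchFn(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question }),
    });
    const body = await res.json();
    if (!isSeeding(res.status, body)) return body; // seeded: return the real answer
    await new Promise((r) => setTimeout(r, retryDelayMs));
  }
  throw new Error("skills endpoint still seeding after retries");
}
```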

Caching Behavior

How similar questions are cached

Questions with >95% similarity to existing questions return cached answers instantly. This provides:

  • Faster response times (~50ms vs ~1-2s)
  • Reduced API costs
  • Consistent answers for equivalent questions

The meta.cached field indicates whether a cached response was used. Cached responses increment a similarCount counter for analytics.
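The similarity metric is not specified here; cosine similarity over the question embeddings is a common choice and is assumed in this sketch, with the documented >95% threshold deciding cache hits.

```typescript
// Cosine similarity between two embedding vectors, plus the cache check.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function isCacheHit(a: number[], b: number[], threshold = 0.95): boolean {
  return cosineSimilarity(a, b) > threshold; // strictly greater than 95%
}
```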

Next Steps

Continue exploring TPMJS