Building a Reusable AI SDK

Recently, I’ve found myself building AI-enabled applications, like the AI content optimization platform https://buffbyteai.xyz/ and an anaemia detection app powered by computer models (https://nailtechapp.netlify.app/). Some of my biggest recurring problems have been managing the API keys for my LLM providers, dynamic prompts, variable prefill, AI personas, retry/failover mechanisms, and response structure validation (structure, types, defaults, etc.).

Imagine if I could easily use the same keys across different apps, dynamically switch providers in a split second, and validate the input and output of the LLMs I use. That would make my development of AI-enabled applications super fast!

The pain points are real:

  • Repetitive setup across projects

  • Inconsistent error handling patterns

  • Manual JSON validation and parsing

  • API key management across environments

  • No standardized way to handle retries and failures

  • Provider-specific code that's hard to switch

Initial Thoughts

The solution seemed obvious: build a unified SDK that abstracts away provider differences while offering consistent developer experience. But the challenge was balancing simplicity with flexibility. Too simple, and it becomes limiting. Too complex, and it defeats the purpose of reducing boilerplate.

I wanted something that felt natural to use: a fluent API that could handle the 80% use case elegantly while still allowing customization for edge cases. The key insight was that most AI interactions follow the same pattern: authenticate, send a prompt with variables, get a structured response, handle errors.

The Solution Architecture

The SDK centers around two main methods:

Initialize once, use everywhere:

const ai = sdk.initialize({
  auth: {
    type: "embedded",                         // or "fetch" to pull keys from `url`
    url: "https://my-keys-api.com/keys",      // used when type is "fetch"
    keys: { claude: "key1", openai: "key2" }  // used when type is "embedded"
  },
  settings: {
    retryOnFail: true,
    maxRetry: 3,
    cachable: true,
    trimPrompt: false
  }
})

Prompt with intelligence:

const result = await ai.prompt('Get weather in {{CITY}}', {
  expectJson: true,
  jsonStructure: { temp: "number", condition: "string" },
  validateJSON: true,
  errorOnInvalidJSON: true
}, { CITY: "San Francisco" })  // variables to interpolate into the prompt

The authentication system supports both embedded keys and remote fetching, solving the multi-environment deployment challenge. Variable interpolation with required validation prevents runtime errors. JSON structure validation ensures type safety with AI responses.
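
Under the hood, the interpolation step can be as small as a regex pass over the prompt. Here is a minimal sketch of that behaviour; the function name interpolate and its error message are illustrative, not the SDK's actual internals:

function interpolate(template, variables = {}) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) => {
    if (!(name in variables)) {
      // Fail fast with a helpful error instead of sending "{{CITY}}" to the model
      throw new Error(`Missing required variable: ${name}`)
    }
    return String(variables[name])
  })
}

interpolate('Get weather in {{CITY}}', { CITY: 'San Francisco' })
// => 'Get weather in San Francisco'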

Decision Matrix

When designing the feature set, I evaluated each potential feature against three criteria:

  • Impact (how much pain it solves)

  • Complexity (implementation difficulty)

  • Usage Frequency (how often developers need it)

High Impact + Low Complexity + High Usage → V1

  • Variable interpolation and validation

  • Multi-provider authentication

  • JSON response handling and validation

  • Basic retry logic and caching

High Impact + High Complexity → V2

  • Streaming responses (complex WebSocket/SSE handling)

  • Usage tracking and cost estimation (requires pricing data maintenance)

Medium Impact + Medium Complexity → V2

  • Template management system

  • Advanced middleware and hooks

Low Impact or Very High Complexity → Future/Never

  • Complex prompt engineering features

  • Built-in fine-tuning capabilities

This matrix helped avoid feature creep while ensuring V1 addresses the most painful developer problems.

Version 1: Core Foundation

V1 focuses on the essential developer experience:

Authentication & Configuration

  • Multi-provider support (Claude, OpenAI, extensible to others)

  • Flexible auth: embedded keys or remote URL fetching

  • Global settings with per-request overrides
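
For the remote option, all the SDK needs is an endpoint that returns a provider-to-key map. A hedged sketch of that fetch step, where the loadKeys helper name and the payload shape are my assumptions for illustration:

// Fetch provider keys from a remote endpoint at startup (hypothetical helper)
async function loadKeys(url) {
  const res = await fetch(url)
  if (!res.ok) throw new Error(`Key fetch failed with status ${res.status}`)
  // Assumed payload shape: { claude: "key1", openai: "key2" }
  return res.json()
}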

Smart Prompting

  • {{VARIABLE}} interpolation with validation

  • Required variable checking (throws helpful errors)

  • Clean variable passing via options object

Response Handling

  • Automatic JSON parsing and validation

  • Type-safe structure checking against expected schema

  • Configurable error handling for invalid responses
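
For flat schemas like the { temp: "number", condition: "string" } example earlier, the structure check can be a simple typeof comparison. A minimal sketch; validateStructure is an illustrative name, not the SDK's real internal:

function validateStructure(data, schema) {
  const errors = []
  for (const [key, expectedType] of Object.entries(schema)) {
    if (!(key in data)) {
      errors.push(`Missing field: ${key}`)
    } else if (typeof data[key] !== expectedType) {
      errors.push(`Field "${key}" should be ${expectedType}, got ${typeof data[key]}`)
    }
  }
  return { valid: errors.length === 0, errors }
}

validateStructure({ temp: 18, condition: 'Foggy' }, { temp: 'number', condition: 'string' })
// => { valid: true, errors: [] }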

Reliability

  • Retry logic for failed requests

  • Response caching to reduce API calls

  • Prompt trimming for token optimization

This gives developers a production-ready foundation that eliminates most AI integration boilerplate while maintaining flexibility.
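
To make the retry setting concrete, here is one way retryOnFail and maxRetry could translate into code. This sketch uses exponential backoff, which is my assumption rather than the SDK's confirmed strategy:

async function withRetry(fn, { maxRetry = 3 } = {}) {
  let lastError
  for (let attempt = 0; attempt <= maxRetry; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt === maxRetry) break
      // Exponential backoff before the next attempt: 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** attempt))
    }
  }
  throw lastError
}

// Usage: wrap any provider call, e.g. withRetry(() => callProvider(), { maxRetry: 3 })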

Version 2: Advanced Features

Once V1 proves the core concept, V2 will add sophistication:

Streaming Support
Real-time response streaming for better user experience in chat applications and long-form content generation.
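
One plausible shape for this API is an async iterator over text chunks; this is purely speculative, since streaming is a V2 feature that does not exist yet:

// Hypothetical V2 API: stream chunks as they arrive (inside an async function)
for await (const chunk of ai.promptStream('Summarize {{TOPIC}}', {}, { TOPIC: 'the news' })) {
  process.stdout.write(chunk)
}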

Template Management
Reusable prompt templates with versioning, making it easier to maintain and iterate on prompts across teams.

Usage Analytics
Token tracking, cost estimation, and usage limits to help manage AI budgets and optimize performance.

Advanced Middleware
Hooks for logging, analytics, custom validation, and request modification, enabling complex workflow integration.
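
Middleware could follow the familiar before/after hook pattern. A speculative example of what registering hooks might look like, where ai.use and the hook names are invented for illustration:

// Hypothetical V2 API: hooks that run around every prompt call
ai.use({
  beforeRequest: (req) => console.log('Prompting provider:', req.provider),
  afterResponse: (res) => console.log('Response received:', res.cached ? 'from cache' : 'fresh')
})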

Why This Approach Works

This SDK design succeeds because it follows proven software engineering principles:

Progressive Enhancement: Start simple, add complexity gradually based on real usage patterns.

Separation of Concerns: Authentication, prompting, and response handling are cleanly separated but work together seamlessly.

Developer Experience First: The API feels natural and reduces cognitive load rather than adding abstraction complexity.

Flexibility Without Bloat: Core functionality handles most use cases, while extension points allow customization without forcing complexity on simple users.

I have chosen the name “Ajala AI SDK”. Ajala stood out to me because of the legendary “Ajala, THE TRAVELER”, who travelled the entire world on a bicycle. I hope to do that someday too, but definitely not on a bicycle 😂😂😂. I’m building Ajala AI SDK in the open, and you can follow the development on GitHub. Star the repo to stay updated, open issues for features you need, or reach out if you'd like to contribute code, documentation, or ideas.
Building developer tools is always better as a community effort!

Bye.