
An enhanced TypeScript SDK for OpenAI API with built-in context management, proxy support, streaming, and enhanced error handling. Includes logging, retry mechanism, and fully typed responses for seamless AI integration.

bgarciaoliveira/openai-enhanced-sdk


OpenAI TypeScript Enhanced SDK

OpenAI Enhanced SDK is a fully typed TypeScript SDK that simplifies integration with the OpenAI API. It offers features the official SDK lacks, such as conversation context management, proxy support, an automatic request retry mechanism, and a detailed logging system. The SDK also implements most of the functionality provided by the OpenAI API. Note: this is an unofficial SDK and is not affiliated with OpenAI.

Features

  • Complete API Coverage: Implements all major OpenAI API endpoints.
  • Context Management: Manage conversation context easily for chat completions.
  • Streaming Support: Supports streaming for completions and chat completions.
  • Robust Error Handling: Provides custom error classes for different error types.
  • TypeScript Support: Includes comprehensive type definitions.
  • Logging: Configurable logging using Winston.
  • Retry Mechanism: Built-in retry logic using axios-retry.
  • Proxy Support: Enhanced proxy configuration for flexible network setups.
  • Extensible: Easily extendable for future API endpoints.

Installation

Install the package via npm:

npm install openai-enhanced-sdk

Getting Started

Initialization

import OpenAIClient from 'openai-enhanced-sdk';
import { HttpsProxyAgent } from 'https-proxy-agent';

const apiKey = process.env.OPENAI_API_KEY;

// Proxy agent configuration
const proxyAgent = new HttpsProxyAgent('http://proxy.example.com:8080');

const client = new OpenAIClient(apiKey, {
  baseURL: 'https://api.openai.com/v1',
  timeout: 10000,
  proxyConfig: proxyAgent,
  axiosConfig: {
    headers: {
      'Custom-Header': 'custom-value',
    },
  },
  axiosRetryConfig: {
    retries: 5,
    retryDelay: 2000,
  },
  loggingOptions: {
    logLevel: 'info',
    logToFile: true,
    logFilePath: 'logs/openai-client.log',
  },
});

Authentication

Ensure you have your OpenAI API key stored securely, preferably in an environment variable:

export OPENAI_API_KEY=your_api_key_here
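Because the key is read from the environment, `process.env.OPENAI_API_KEY` is typed as `string | undefined`. A small guard helper (hypothetical, not part of the SDK) makes missing configuration fail fast instead of surfacing later as an authentication error:

```typescript
// Hypothetical helper (not part of the SDK): fail fast when the key is missing.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env['OPENAI_API_KEY'];
  if (!key) throw new Error('OPENAI_API_KEY is not set');
  return key;
}

// const client = new OpenAIClient(requireApiKey(process.env));
```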

Usage Examples

Context Management

Add a single entry to context

client.addToContext({
  role: 'system',
  content: 'You are a helpful assistant.',
});

Add multiple entries to context

const contextEntries = [
  {
    role: 'user',
    content: 'Tell me a joke.',
  },
  {
    role: 'assistant',
    content: 'Why did the chicken cross the road? To get to the other side!',
  },
];
client.addBatchToContext(contextEntries);

Get current context

const context = client.getContext();
console.log(context);

Clear context

client.clearContext();

Use context in chat completion

// Entries previously added via addToContext / addBatchToContext
// are applied to this request alongside the messages below.
const response = await client.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Do I need an umbrella today?' }],
});
console.log(response);
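To make the semantics of the four context methods above concrete, here is a minimal sketch of the behavior they imply. This is an illustration only, not the SDK's actual implementation; `withContext` shows the assumed merge of stored context with a request's messages.

```typescript
type Role = 'system' | 'user' | 'assistant';
interface ContextEntry { role: Role; content: string; }

// Minimal sketch of the context behavior the SDK's methods imply.
class ContextStore {
  private entries: ContextEntry[] = [];
  add(entry: ContextEntry): void { this.entries.push(entry); }
  addBatch(batch: ContextEntry[]): void { this.entries.push(...batch); }
  get(): ContextEntry[] { return [...this.entries]; }
  clear(): void { this.entries = []; }
  // Prepend stored context to the messages of a chat completion request.
  withContext(messages: ContextEntry[]): ContextEntry[] {
    return [...this.entries, ...messages];
  }
}
```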

Proxy Configuration

Using Proxy with Custom HTTPS Agent

import { HttpsProxyAgent } from 'https-proxy-agent';

const proxyAgent = new HttpsProxyAgent('http://localhost:8080');

const proxyClient = new OpenAIClient(apiKey, {
  proxyConfig: proxyAgent,
});

List Models

const models = await client.listModels();
console.log(models);

Create Completion

const completion = await client.createCompletion({
  model: 'text-davinci-003',
  prompt: 'Once upon a time',
  max_tokens: 5,
});
console.log(completion);

Create Chat Completion

const chatCompletion = await client.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello, how are you?' }],
});
console.log(chatCompletion);
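Streaming is also listed among the SDK's features. Streamed completions typically arrive as an async sequence of token chunks that the caller accumulates; the sketch below mocks that consumption pattern with a local generator, since the SDK's actual streaming method name is not shown here.

```typescript
// Mock of a streamed response: an async sequence of token chunks.
async function* mockStream(): AsyncGenerator<string> {
  for (const chunk of ['Hello', ', ', 'world', '!']) yield chunk;
}

// Accumulate chunks as they arrive, the same way a streamed
// completion would be consumed.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let text = '';
  for await (const chunk of stream) text += chunk; // append each delta
  return text;
}
```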

Create Embedding

const embedding = await client.createEmbedding({
  model: 'text-embedding-ada-002',
  input: 'OpenAI is an AI research lab.',
});
console.log(embedding);

Create Image

const image = await client.createImage({
  prompt: 'A sunset over the mountains',
  n: 1,
  size: '512x512',
});
console.log(image);

Error Handling

import { APIError, AuthenticationError, ValidationError, RateLimitError } from 'openai-enhanced-sdk';

try {
  const completion = await client.createCompletion({
    model: 'text-davinci-003',
    prompt: 'Hello, world!',
  });
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error('Authentication Error:', error.message);
  } else if (error instanceof ValidationError) {
    console.error('Validation Error:', error.message);
  } else if (error instanceof RateLimitError) {
    console.error('Rate Limit Exceeded:', error.message);
  } else if (error instanceof APIError) {
    console.error('API Error:', error.message);
  } else {
    console.error('Unknown Error:', error);
  }
}
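The `instanceof` checks above rely on the error classes forming a proper hierarchy rooted in `Error`. The sketch below illustrates how such a hierarchy is commonly defined in TypeScript; it is not the SDK's actual source. The `setPrototypeOf` call keeps `instanceof` working even when targeting older JavaScript runtimes.

```typescript
// Illustrative error hierarchy (not the SDK's actual source).
class APIError extends Error {
  constructor(message: string, public readonly status?: number) {
    super(message);
    this.name = new.target.name;
    // Keep instanceof working after transpilation to ES5 targets.
    Object.setPrototypeOf(this, new.target.prototype);
  }
}
class AuthenticationError extends APIError {}
class RateLimitError extends APIError {}
```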

Configuration

You can customize the client using the OpenAIClientOptions interface:

import { HttpsProxyAgent } from 'https-proxy-agent';
import axiosRetry from 'axios-retry';

const proxyAgent = new HttpsProxyAgent('http://localhost:8080');

const client = new OpenAIClient(apiKey, {
  baseURL: 'https://api.openai.com/v1',
  timeout: 10000, // 10 seconds timeout
  proxyConfig: proxyAgent,
  loggingOptions: {
    logLevel: 'debug',
    logToFile: true,
    logFilePath: './logs/openai-sdk.log',
  },
  axiosRetryConfig: {
    retries: 5,
    retryDelay: axiosRetry.exponentialDelay,
  },
});
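`axiosRetry.exponentialDelay` spaces retries out with exponentially growing waits. The function below illustrates the general backoff shape (it is not axios-retry's exact formula, which also adds random jitter): each retry doubles the wait, starting from a base delay.

```typescript
// Illustrative exponential backoff (not axios-retry's exact formula):
// delay doubles with each retry, starting from a base delay.
function backoffDelay(retryCount: number, baseMs = 100): number {
  return Math.pow(2, retryCount) * baseMs;
}
```

With the default base, retry 0 waits 100 ms, retry 1 waits 200 ms, retry 2 waits 400 ms, and so on.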

Logging

The SDK uses Winston for logging. You can configure logging levels and outputs:

loggingOptions: {
  logLevel: 'info', // 'error' | 'warn' | 'info' | 'debug'
  logToFile: true,
  logFilePath: './logs/openai-sdk.log',
}

Testing

The SDK includes comprehensive unit tests using Jest. To run the tests:

npm run test

Documentation

For more detailed information, please refer to the OpenAI Enhanced SDK Documentation.

Contributing

Contributions are welcome! Please open an issue or submit a pull request on GitHub.

License

This project is licensed under the MIT License.


If you have any questions or need assistance, feel free to reach out!
