OpenAI Enhanced SDK is a fully typed TypeScript SDK that simplifies integration with the OpenAI API. It offers features the official SDK does not, such as conversation context management, proxy support, an automatic request retry mechanism, and a detailed logging system. The SDK also implements most of the functionality exposed by the OpenAI API. Note: This is an unofficial SDK and is not affiliated with OpenAI.
- Features
- Installation
- Getting Started
- Usage Examples
- Configuration
- Logging
- Testing
- Documentation
- Contributing
- License
- Complete API Coverage: Implements all major OpenAI API endpoints.
- Context Management: Manage conversation context easily for chat completions.
- Streaming Support: Supports streaming for completions and chat completions.
- Robust Error Handling: Provides custom error classes for different error types.
- TypeScript Support: Includes comprehensive type definitions.
- Logging: Configurable logging using Winston.
- Retry Mechanism: Built-in retry logic using axios-retry.
- Proxy Support: Enhanced proxy configuration for flexible network setups.
- Extensible: Easily extendable for future API endpoints.
Install the package via npm:
npm install openai-enhanced-sdk
import OpenAIClient from 'openai-enhanced-sdk';
import { HttpsProxyAgent } from 'https-proxy-agent';
const apiKey = process.env.OPENAI_API_KEY;
// Proxy agent configuration
const proxyAgent = new HttpsProxyAgent('http://proxy.example.com:8080');
const client = new OpenAIClient(apiKey, {
baseURL: 'https://api.openai.com/v1',
timeout: 10000,
proxyConfig: proxyAgent,
axiosConfig: {
headers: {
'Custom-Header': 'custom-value',
},
},
axiosRetryConfig: {
retries: 5,
retryDelay: (retryCount) => retryCount * 2000, // axios-retry expects a function returning milliseconds
},
loggingOptions: {
logLevel: 'info',
logToFile: true,
logFilePath: 'logs/openai-client.log',
},
});
Ensure you have your OpenAI API key stored securely, preferably in an environment variable:
export OPENAI_API_KEY=your_api_key_here
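Because the client takes the key as a constructor argument, a small guard at startup lets the application fail fast when the variable is missing. A minimal sketch (`requireEnv` is an illustrative helper, not part of the SDK):

```typescript
// Illustrative helper (not part of the SDK): read a required
// environment variable and throw immediately if it is not set.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv('OPENAI_API_KEY');
```

Failing at startup produces a clear error message instead of an opaque authentication failure on the first API call.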
client.addToContext({
role: 'system',
content: 'You are a helpful assistant.',
});
const contextEntries = [
{
role: 'user',
content: 'Tell me a joke.',
},
{
role: 'assistant',
content: 'Why did the chicken cross the road? To get to the other side!',
},
];
client.addBatchToContext(contextEntries);
const context = client.getContext();
console.log(context);
client.clearContext();
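Under the hood, context management of this kind usually amounts to an append-only list of messages. A minimal sketch of such a store (illustrative only; the SDK's actual internals may differ):

```typescript
interface ContextEntry {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Illustrative sketch of a conversation context store;
// the SDK's real implementation may differ.
class ContextStore {
  private entries: ContextEntry[] = [];

  addToContext(entry: ContextEntry): void {
    this.entries.push(entry);
  }

  addBatchToContext(batch: ContextEntry[]): void {
    this.entries.push(...batch);
  }

  getContext(): ContextEntry[] {
    // Return a copy so callers cannot mutate the internal list.
    return [...this.entries];
  }

  clearContext(): void {
    this.entries = [];
  }
}
```

Returning a copy from `getContext()` keeps the store as the single source of truth for the conversation history.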
const response = await client.createChatCompletion({
model: 'gpt-3.5-turbo',
messages: [{ role: 'user', content: 'Do I need an umbrella today?' }],
});
console.log(response);
import { HttpsProxyAgent } from 'https-proxy-agent';
const proxyAgent = new HttpsProxyAgent('http://localhost:8080');
const proxyClient = new OpenAIClient(apiKey, {
proxyConfig: proxyAgent,
});
const models = await client.listModels();
console.log(models);
const completion = await client.createCompletion({
model: 'text-davinci-003',
prompt: 'Once upon a time',
max_tokens: 5,
});
console.log(completion);
const chatCompletion = await client.createChatCompletion({
model: 'gpt-3.5-turbo',
messages: [{ role: 'user', content: 'Hello, how are you?' }],
});
console.log(chatCompletion);
const embedding = await client.createEmbedding({
model: 'text-embedding-ada-002',
input: 'OpenAI is an AI research lab.',
});
console.log(embedding);
const image = await client.createImage({
prompt: 'A sunset over the mountains',
n: 1,
size: '512x512',
});
console.log(image);
import { APIError, AuthenticationError, RateLimitError, ValidationError } from 'openai-enhanced-sdk';

try {
const completion = await client.createCompletion({
model: 'text-davinci-003',
prompt: 'Hello, world!',
});
} catch (error) {
if (error instanceof AuthenticationError) {
console.error('Authentication Error:', error.message);
} else if (error instanceof ValidationError) {
console.error('Validation Error:', error.message);
} else if (error instanceof RateLimitError) {
console.error('Rate Limit Exceeded:', error.message);
} else if (error instanceof APIError) {
console.error('API Error:', error.message);
} else {
console.error('Unknown Error:', error);
}
}
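The error classes used above can be thought of as a small hierarchy over a base `APIError`. A sketch of what such a hierarchy might look like (the SDK's actual definitions may differ):

```typescript
// Illustrative error hierarchy; the SDK's actual classes may differ.
class APIError extends Error {
  constructor(message: string, public readonly status?: number) {
    super(message);
    this.name = 'APIError';
  }
}

class AuthenticationError extends APIError {
  constructor(message: string) {
    super(message, 401); // HTTP 401 Unauthorized
    this.name = 'AuthenticationError';
  }
}

class RateLimitError extends APIError {
  constructor(message: string) {
    super(message, 429); // HTTP 429 Too Many Requests
    this.name = 'RateLimitError';
  }
}
```

Because each subclass extends `APIError`, an `instanceof APIError` check also matches the more specific errors, so order the `catch` branches from most to least specific, as in the example above.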
You can customize the client using the OpenAIClientOptions interface:
import { HttpsProxyAgent } from 'https-proxy-agent';
import axiosRetry from 'axios-retry';
const proxyAgent = new HttpsProxyAgent('http://localhost:8080');
const client = new OpenAIClient(apiKey, {
baseURL: 'https://api.openai.com/v1',
timeout: 10000, // 10 seconds timeout
proxyConfig: proxyAgent,
loggingOptions: {
logLevel: 'debug',
logToFile: true,
logFilePath: './logs/openai-sdk.log',
},
axiosRetryConfig: {
retries: 5,
retryDelay: axiosRetry.exponentialDelay,
},
});
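`axiosRetry.exponentialDelay` produces delays that roughly double with each attempt. A pure function with the same shape (a sketch only; axios-retry's real implementation also adds random jitter on top):

```typescript
// Sketch of exponential backoff: 100ms, 200ms, 400ms, ... doubling
// with each retry. axios-retry's exponentialDelay additionally mixes
// in random jitter to avoid synchronized retries.
function exponentialDelay(retryCount: number, baseMs: number = 100): number {
  return Math.pow(2, retryCount) * baseMs;
}
```

Exponential backoff spaces retries further and further apart, which gives a rate-limited or overloaded API time to recover instead of hammering it at a fixed interval.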
The SDK uses Winston for logging. You can configure logging levels and outputs:
loggingOptions: {
logLevel: 'info', // 'error' | 'warn' | 'info' | 'debug'
logToFile: true,
logFilePath: './logs/openai-sdk.log',
}
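Winston's level option acts as a severity threshold: messages at the configured level or more severe are logged, and less severe ones are dropped. A sketch of that filtering rule for the four levels this SDK's loggingOptions accepts (Winston itself defines additional npm levels such as `verbose` and `silly`):

```typescript
// npm-style level priorities as used by Winston: lower number = more severe.
const levelPriority: Record<string, number> = {
  error: 0,
  warn: 1,
  info: 2,
  debug: 3,
};

// A message is emitted when its level is at least as severe as the threshold.
function shouldLog(threshold: string, messageLevel: string): boolean {
  return levelPriority[messageLevel] <= levelPriority[threshold];
}
```

So with `logLevel: 'info'`, `error` and `warn` messages still appear, while `debug` messages are suppressed.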
The SDK includes comprehensive unit tests using Jest. To run the tests:
npm run test
For more detailed information, please refer to the OpenAI Enhanced SDK Documentation.
Contributions are welcome! Please open an issue or submit a pull request on GitHub.
This project is licensed under the MIT License.
If you have any questions or need assistance, feel free to reach out!