A proxy for extracting structured data from text and images using LLMs.
l1m is the easiest way to get structured data from unstructured text or images using LLMs: no prompt engineering, no chat history, just a simple API that returns structured JSON matching your schema.
- 📋 Simple Schema-First Approach: Define your data structure in JSON Schema and get back exactly what you need
- 🎯 Zero Prompt Engineering: No need to craft complex prompts or chain multiple calls; add context as JSON Schema `description` fields (see the sketch after the curl examples below)
- 🔄 Provider Flexibility: Bring your own provider; works with any OpenAI-compatible or Anthropic provider and model
- ⚡ Caching: Built-in caching; set the `x-cache-ttl` header (seconds) to use l1m.io as a cache for your LLM requests
- 🔓 Open Source: No vendor lock-in; self-host, or use the hosted version with a free tier and high availability
- 🔒 Privacy First: We don't store your data unless you use the `x-cache-ttl` header
- ⚡️ Works Locally: Use l1m locally with Ollama or any other OpenAI-compatible model provider
Extract structured data from text:

```bash
curl -X POST https://api.l1m.io/structured \
  -H "Content-Type: application/json" \
  -H "X-Provider-Url: demo" \
  -H "X-Provider-Key: demo" \
  -H "X-Provider-Model: demo" \
  -d '{
    "input": "A particularly severe crisis in 1907 led Congress to enact the Federal Reserve Act in 1913",
    "schema": {
      "type": "object",
      "properties": {
        "items": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "name": { "type": "string" },
              "year": { "type": "number" }
            }
          }
        }
      }
    }
  }'
```
Extract structured data from an image (base64-encoded; `tr -d '\n'` keeps the encoded payload on a single line so the JSON stays valid):

```bash
curl -X POST https://api.l1m.io/structured \
  -H "Content-Type: application/json" \
  -H "X-Provider-Url: demo" \
  -H "X-Provider-Key: demo" \
  -H "X-Provider-Model: demo" \
  -d '{
    "input": "'$(curl -s https://public.l1m.io/menu.jpg | base64 | tr -d '\n')'",
    "schema": {
      "type": "object",
      "properties": {
        "items": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "name": { "type": "string" },
              "price": { "type": "number" }
            }
          }
        }
      }
    }
  }'
```
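Per the Zero Prompt Engineering point above, any extra guidance for the model belongs in JSON Schema `description` fields rather than a prompt. A minimal sketch, reusing the demo credentials above with an illustrative input and descriptions:

```bash
curl -X POST https://api.l1m.io/structured \
  -H "Content-Type: application/json" \
  -H "X-Provider-Url: demo" \
  -H "X-Provider-Key: demo" \
  -H "X-Provider-Model: demo" \
  -d '{
    "input": "Dinner special: grilled salmon, $18.50",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string", "description": "Dish name only, without marketing qualifiers" },
        "price": { "type": "number", "description": "Price in USD as a decimal number" }
      }
    }
  }'
```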
See sdk-node, sdk-python, or sdk-go for a complete example in each language.
Request headers:

- `x-provider-model` (optional): Custom LLM model to use
- `x-provider-url` (optional): Custom LLM provider URL (OpenAI-compatible or Anthropic API)
- `x-provider-key` (optional): API key for the custom LLM provider
- `x-cache-ttl` (optional): Cache TTL in seconds. The cache key is generated as `hash(input + schema + x-provider-key + x-provider-model)`.
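For instance, an identical request repeated within the TTL is served from the cache instead of hitting the provider. A sketch, reusing the demo credentials above with an illustrative input and schema:

```bash
# Cached for 300 seconds; repeat the same request within that window
# and l1m returns the cached result.
curl -X POST https://api.l1m.io/structured \
  -H "Content-Type: application/json" \
  -H "x-cache-ttl: 300" \
  -H "X-Provider-Url: demo" \
  -H "X-Provider-Key: demo" \
  -H "X-Provider-Model: demo" \
  -d '{
    "input": "The meeting is on 2024-03-14 at 10am",
    "schema": {
      "type": "object",
      "properties": {
        "date": { "type": "string" },
        "time": { "type": "string" }
      }
    }
  }'
```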
Supported image input types: `image/jpeg` and `image/png`.
Official SDKs are available for Node.js, Python, and Go.
See local.md for instructions on running l1m locally (and using with Ollama).
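As a rough sketch of that setup (the l1m port and model name below are placeholders; check local.md for the real values), point the provider headers at Ollama's OpenAI-compatible endpoint:

```bash
# Assumes l1m is running locally (placeholder port 8080; see local.md)
# and Ollama is serving its OpenAI-compatible API on its default port 11434.
# Ollama ignores the API key, so any placeholder value works.
curl -X POST http://localhost:8080/structured \
  -H "Content-Type: application/json" \
  -H "X-Provider-Url: http://localhost:11434/v1" \
  -H "X-Provider-Key: ollama" \
  -H "X-Provider-Model: llama3.2" \
  -d '{
    "input": "Order #4521 shipped to Berlin on May 2",
    "schema": {
      "type": "object",
      "properties": {
        "orderId": { "type": "string" },
        "city": { "type": "string" }
      }
    }
  }'
```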
Join our waitlist to get early access to the production release of our hosted version.
Built by Inferable.