QuLang brings together AI builders across the Qubic blockchain. Its goal is to enable decentralized inference of Large Language Models (LLMs) and AI Agents. Here's how it works:
- Users can top up their QuLang account through the smart contract (procedure `TopUp`, index 1) and withdraw their balance through the same mechanism (procedure `Withdraw`, index 2).
- Providers can register endpoints for LLM inference following the Vercel AI SDK UI standard (an example is available in the `example-openai-provider` repository). The endpoints are stored in a centralized PostgreSQL database, while pricing (input token price, output token price) and the burn-rate parameter are managed by the smart contract (procedure `updateProvider`, index 3).
- Inference requests are validated through a main endpoint. Users with a sufficient QuLang balance are debited based on the tokens consumed and the provider's per-token prices, and the AI provider is credited that amount minus the share burned according to its burn rate.
Important note: Some features, particularly security measures and exception handling, are not yet fully developed.
A fork of the Qubic node with our smart contract implementation.
- Code: https://github.com/Qubic-Qulang/core/blob/madrid-2025/src/contracts/HM25.h
- Node endpoint: http://46.17.103.110:31841/
- RPC endpoint: http://46.17.103.110/v1/tick-info
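The RPC endpoint above can be polled for the current tick. A minimal client sketch follows; the response shape (a `tickInfo` object with numeric `tick` and `epoch` fields) is an assumption based on common Qubic RPC deployments and may differ on this node.

```typescript
// Minimal client for the node's RPC tick-info endpoint. The payload shape
// (`tickInfo` with numeric `tick` and `epoch`) is an assumption, not
// confirmed by this document.

interface TickInfo {
  tick: number;
  epoch: number;
}

// Extract the current tick from a /v1/tick-info JSON payload.
function parseTickInfo(payload: unknown): TickInfo {
  const info = (payload as { tickInfo?: { tick?: number; epoch?: number } })
    .tickInfo;
  if (!info || typeof info.tick !== "number" || typeof info.epoch !== "number") {
    throw new Error("unexpected /v1/tick-info payload");
  }
  return { tick: info.tick, epoch: info.epoch };
}

// Poll the RPC endpoint (base URL from the list above).
async function fetchTickInfo(baseUrl = "http://46.17.103.110"): Promise<TickInfo> {
  const res = await fetch(`${baseUrl}/v1/tick-info`);
  if (!res.ok) throw new Error(`RPC error: ${res.status}`);
  return parseTickInfo(await res.json());
}
```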
- URL: http://46.17.103.110:3000/
- Repository: https://github.com/Qubic-Qulang/qulang-app
- Endpoint instance: http://46.17.103.110:3001/
- Repository: https://github.com/Qubic-Qulang/example-openai-provider
Next steps include finishing development of the core features and of the IPO/share value system. Then we may look into more complex integrations for AI agents. We also want to explore running quantized providers directly on the computor node, so that the chain itself hosts an inference endpoint.