Developer Quickstart
Make your first minimal working call through the OpenAI-compatible interface
If you're planning to write your own code to integrate with MoleAPI, remember these two items first:

- Base URL: `https://api.moleapi.com/v1`
- Authentication method: `Authorization: Bearer <YOUR_API_KEY>`

The most common first endpoint is `POST /chat/completions`.
Step 0: Put your API Key into your environment first
Don't hardcode your API Key in your source code; read it from an environment variable instead.
PowerShell

```powershell
$env:MOLEAPI_KEY="sk-xxxxxxxxxxxxxxxx"
```

Bash / zsh

```bash
export MOLEAPI_KEY="sk-xxxxxxxxxxxxxxxx"
```

Why use environment variables first
This lets you switch between local testing, CI, and production by only changing environment configuration, without repeatedly modifying source code.
cURL examples

Bash / zsh

```bash
curl https://api.moleapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MOLEAPI_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "user", "content": "Hello, please introduce MoleAPI in one sentence."}
    ]
  }'
```

PowerShell

```powershell
$headers = @{
    "Content-Type"  = "application/json"
    "Authorization" = "Bearer $env:MOLEAPI_KEY"
}
$body = @{
    model    = "gpt-4o-mini"
    messages = @(
        @{
            role    = "user"
            content = "Hello, please introduce MoleAPI in one sentence."
        }
    )
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post `
    -Uri "https://api.moleapi.com/v1/chat/completions" `
    -Headers $headers `
    -Body $body
```

If the request succeeds, you will typically receive a JSON response that includes at least:
- `id`
- `model`
- `choices[0].message.content`
As long as you can see the model return text normally, your minimal integration flow is working.
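For instance, those fields can be read off directly from the parsed body. The sample below is a hypothetical response (shape only; real ids and text will differ):

```python
import json

# A hypothetical sample response body, shaped like the fields listed above.
sample = """
{
  "id": "chatcmpl-123",
  "model": "gpt-4o-mini",
  "choices": [
    {"message": {"role": "assistant", "content": "Hello from MoleAPI."}}
  ]
}
"""

resp = json.loads(sample)
print(resp["id"], resp["model"])
print(resp["choices"][0]["message"]["content"])
```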
Minimal Python example
from openai import OpenAI
import os
client = OpenAI(
api_key=os.environ["MOLEAPI_KEY"],
base_url="https://api.moleapi.com/v1",
)
resp = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "你好,请返回一句接入成功提示。"}],
)
print(resp.choices[0].message.content)Minimal Node.js example
```javascript
import OpenAI from "openai";

async function main() {
  const client = new OpenAI({
    apiKey: process.env.MOLEAPI_KEY,
    baseURL: "https://api.moleapi.com/v1",
  });
  const resp = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello, please return a one-line success message." }],
  });
  console.log(resp.choices[0]?.message?.content);
}

main().catch(console.error);
```

When running it successfully for the first time, verify only these 3 things:
- The request returns a valid JSON response
- The returned content is not an error message
- The model name and Group you are using are currently available
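Those three checks can be sketched against an already-parsed response dict. The field names follow the OpenAI-compatible schema shown above; the helper itself is ours, not part of any SDK:

```python
def validate_chat_response(resp: dict, expected_model: str) -> list[str]:
    """Run the three checks above; an empty list means all of them pass."""
    problems = []
    # 1. The request returned a valid chat-completion JSON object.
    try:
        content = resp["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return ["not a valid chat-completion response"]
    # 2. The returned content is real text, not an error payload.
    if "error" in resp:
        problems.append("response carries an error object")
    if not (isinstance(content, str) and content.strip()):
        problems.append("message content is empty")
    # 3. The model that served the request matches what you asked for
    #    (some gateways return variant names; relax this check if needed).
    if resp.get("model") != expected_model:
        problems.append(f"served by {resp.get('model')!r}, not {expected_model!r}")
    return problems
```

Usage: `validate_chat_response(response.json(), "gpt-4o-mini")` after any of the calls above.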
Once this step is stable, you can gradually add:
- Streaming output
- Longer context
- Tool calling
- Multimodal capabilities such as images and audio
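Streaming output is usually the first of these to add. A sketch reusing the Python client above: `stream=True` and the chunk shape follow the OpenAI-compatible streaming format, while the `chunk_text` helper is ours:

```python
import os


def chunk_text(chunk) -> str:
    """Pull the text fragment out of one streaming chunk (empty string if none)."""
    if chunk.choices and chunk.choices[0].delta.content:
        return chunk.choices[0].delta.content
    return ""


def main() -> None:
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["MOLEAPI_KEY"],
        base_url="https://api.moleapi.com/v1",
    )
    stream = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello, stream one sentence back."}],
        stream=True,
    )
    # Print fragments as they arrive instead of waiting for the full reply.
    for chunk in stream:
        print(chunk_text(chunk), end="", flush=True)
    print()


if __name__ == "__main__":
    main()
```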
Minimal troubleshooting order
- First, confirm that the Base URL and path are correct
- Then confirm that the API Key is valid and has access to the target model
- Finally, verify the model name and current Group
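That order can be probed mechanically: the HTTP status code usually tells you which of the three to look at first. A stdlib-only sketch; the status-to-cause mapping below is our heuristic, not an official error table:

```python
import json
import os
import urllib.error
import urllib.request


def diagnose(status: int) -> str:
    """Map an HTTP status to the most likely cause, in the order above (heuristic)."""
    if 200 <= status < 300:
        return "OK"
    if status == 404:
        return "Base URL or path is wrong"
    if status in (401, 403):
        return "API Key is invalid or lacks access to the target model"
    if status == 400:
        return "check the model name and current Group"
    return f"unexpected status {status}"


def main() -> None:
    body = json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "ping"}],
    }).encode()
    req = urllib.request.Request(
        "https://api.moleapi.com/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('MOLEAPI_KEY', '')}",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as r:
            print(diagnose(r.status))
    except urllib.error.HTTPError as e:
        print(e.code, diagnose(e.code))
    except urllib.error.URLError as e:
        print("network error:", e.reason)


if __name__ == "__main__":
    main()
```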
Start small, then scale up
Start with the shortest possible request, then gradually add longer context, streaming output, and more complex parameters.
If cURL works but your project code does not
This usually means the platform itself is fine, and the issue is more likely in your project integration layer. Check these first:
- Whether your code is actually reading `MOLEAPI_KEY`
- Whether `baseURL` / `base_url` in the SDK configuration is still pointing to another platform
- Whether your application code is concatenating the request URL a second time
- Whether your proxy, gateway, or server-side environment variables differ from your local setup
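These checks can be scripted. A sketch: `MOLEAPI_KEY` comes from this guide, while `OPENAI_BASE_URL` is an environment variable the official OpenAI SDKs also read, so a stale value there can silently redirect your requests to another platform:

```python
import os


def config_report(env: dict) -> list[str]:
    """List likely integration-layer problems based on the process environment."""
    findings = []
    key = env.get("MOLEAPI_KEY", "")
    if not key:
        findings.append("MOLEAPI_KEY is not set in this process")
    elif not key.startswith("sk-"):
        findings.append("MOLEAPI_KEY is set but does not look like an API key")
    base = env.get("OPENAI_BASE_URL", "")
    if base and "moleapi.com" not in base:
        findings.append(f"OPENAI_BASE_URL points elsewhere: {base}")
    return findings


if __name__ == "__main__":
    for line in config_report(dict(os.environ)) or ["environment looks consistent"]:
        print(line)
```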
What to read next
In addition to the traditional OpenAI `chat/completions` endpoint, some MoleAPI models also support OpenAI's newer `responses` endpoint, the Gemini API, and Anthropic request formats. For supported models and invocation details, see the API Reference.
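If you want to try the `responses` endpoint, the call shape is similar. A sketch with the same Python client; whether a given model accepts it depends on the model list, so treat this as an assumption to verify against the API Reference:

```python
import os


def main() -> None:
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["MOLEAPI_KEY"],
        base_url="https://api.moleapi.com/v1",
    )
    # The responses endpoint takes `input` instead of `messages` and
    # exposes the concatenated reply text as `output_text`.
    resp = client.responses.create(
        model="gpt-4o-mini",
        input="Hello, please confirm the responses endpoint works.",
    )
    print(resp.output_text)


if __name__ == "__main__":
    main()
```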