Getting Started
Let's get Pollux running and see your first result.
1. Install
pip install pollux-ai
Or download the latest wheel from Releases.
2. Set Your API Key
Get a key from Google AI Studio, then:
export GEMINI_API_KEY="your-key-here"
Get a key from OpenAI Platform, then:
export OPENAI_API_KEY="your-key-here"
Get a key from the Anthropic Console, then:
export ANTHROPIC_API_KEY="your-key-here"
Get a key from OpenRouter, then:
export OPENROUTER_API_KEY="your-key-here"
Not sure which provider? Start with Gemini for explicit context caching, OpenAI for broad model selection, Anthropic for implicit caching and extended thinking, or OpenRouter for routed model access.
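If you keep more than one key around, a quick standard-library check shows which providers are usable in the current shell. This helper is not part of Pollux (the function name is illustrative); the environment variable names are the ones from the export commands above:

```python
import os

# Provider -> environment variable, matching the export commands above.
PROVIDER_ENV_VARS = {
    "gemini": "GEMINI_API_KEY",
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
}


def available_providers() -> list[str]:
    """Return the providers whose API keys are set in this shell."""
    return [p for p, var in PROVIDER_ENV_VARS.items() if os.environ.get(var)]
```

Run it once before step 3 to confirm your key was exported in the same shell you launch Python from.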
3. Run
import asyncio

from pollux import Config, Source, run


async def main() -> None:
    config = Config(provider="gemini", model="gemini-2.5-flash-lite")
    result = await run(
        "What are the key points?",
        source=Source.from_text(
            "Pollux handles fan-out, fan-in, and broadcast execution patterns. "
            "It can also reuse context through caching."
        ),
        config=config,
    )
    print(result["status"])
    print(result["answers"][0])


asyncio.run(main())
If you chose OpenAI, change config to
Config(provider="openai", model="gpt-5-nano"). For Anthropic, use
Config(provider="anthropic", model="claude-haiku-4-5"). For OpenRouter,
use Config(provider="openrouter", model="google/gemma-3-27b-it:free").
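The provider and model swaps above can be collected into one lookup so the rest of your script stays unchanged when you switch providers. A minimal sketch, using the starter model names suggested on this page (the table name and helper are illustrative, not part of Pollux):

```python
# Provider -> starter model, as suggested on this page.
STARTER_MODELS = {
    "gemini": "gemini-2.5-flash-lite",
    "openai": "gpt-5-nano",
    "anthropic": "claude-haiku-4-5",
    "openrouter": "google/gemma-3-27b-it:free",
}


def starter_config_args(provider: str) -> dict[str, str]:
    """Keyword arguments for Config(**...) for a given provider."""
    return {"provider": provider, "model": STARTER_MODELS[provider]}
```

With this in place, switching providers is `Config(**starter_config_args("openai"))` instead of editing the model string by hand.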
No API key yet?
Use Config(provider="gemini", model="gemini-2.5-flash-lite", use_mock=True) to
run the pipeline locally without network calls. See Mock Mode.
4. See the Output
ok
The key points are: (1) Pollux supports three execution patterns (fan-out,
fan-in, and broadcast); (2) it provides context caching to avoid
re-uploading the same content for repeated prompts.
That's your first Pollux result. The status is ok and the answer
references details from the source text. Once this works, swap in your real
input: Source.from_file("document.pdf").
What happened?
Pollux owned: normalizing your prompt and source into a request, planning the API call, executing it, and extracting the answer into a standard ResultEnvelope.
You owned: writing the prompt, choosing what to analyze, and deciding what to do with the result.
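Deciding what to do with the result sits on your side of that boundary. A minimal sketch of a guard you might write, assuming only the two envelope keys shown above (status and answers); the helper name is illustrative:

```python
def first_answer(result: dict) -> str:
    """Return the first answer if the run succeeded, else raise."""
    if result.get("status") != "ok":
        raise RuntimeError(f"run did not succeed: {result.get('status')!r}")
    answers = result.get("answers") or []
    if not answers:
        raise RuntimeError("run succeeded but returned no answers")
    return answers[0]
```

Checking status before touching answers keeps a failed run from silently producing an index error deeper in your code.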
This boundary runs through every part of Pollux. You'll see it called out on each page of the docs.
What's next? Read Core Concepts for the full mental model, then Sending Content to Models for realtime calls or Submitting Work for Later Collection if your workload can run in the background.