OpenAI Integration
Full observability and governance for OpenAI APIs. Track token usage, latency, and costs, and monitor quality across GPT-4 and GPT-3.5 deployments.
OpenAI SDK >= 1.0.0 · GPT-4 / GPT-4o · GPT-3.5 · Embeddings
Installation
Terminal
pip install turingpulse openai
Quick Start
main.py
from openai import OpenAI
from turingpulse import init
from turingpulse.integrations.openai import instrument_openai
# Initialize TuringPulse
init(
    api_key="sk_live_...",
    project_id="my-project"
)
# Instrument OpenAI - wraps all API calls
instrument_openai()
# Your code works exactly the same - now with full tracing!
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
Supported Features
Models
- GPT-4o / GPT-4o-mini
- GPT-4 / GPT-4 Turbo
- GPT-3.5 Turbo
- o1-preview / o1-mini
- text-embedding-3
- DALL·E 3
Capabilities
- Chat completions
- Streaming responses
- Function calling
- Vision (GPT-4V)
- Embeddings
- Assistants API
Tracked Metrics
- Token usage (in/out)
- Latency (time to first token, total)
- Cost estimation
- Finish reasons
- Tool calls
- Error rates
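Cost estimation multiplies the tracked input/output token counts by per-model rates. A minimal sketch of that arithmetic (the pricing table and `estimate_cost` helper below are illustrative only — the rates are not OpenAI's current price list, and this is not a TuringPulse API):

```python
# Illustrative per-1M-token rates in USD. Real prices change over time,
# so treat these numbers as placeholders, not OpenAI's current pricing.
PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate USD cost from the usage counts a chat completion reports."""
    rates = PRICING[model]
    return (prompt_tokens * rates["input"]
            + completion_tokens * rates["output"]) / 1_000_000

# On a non-streaming completion, the counts come from response.usage
# (prompt_tokens / completion_tokens).
cost = estimate_cost("gpt-4o", prompt_tokens=1_000, completion_tokens=500)
print(f"${cost:.6f}")  # → $0.007500
```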
Streaming Support
streaming.py
client = OpenAI()
# Streaming is automatically tracked
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

# Trace captures time-to-first-token and the full response
Function Calling
functions.py
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            }
        }
    }
}]
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Weather in NYC?"}],
    tools=tools
)

# Function calls are captured in the trace
With KPIs & Alerts
kpis.py
from turingpulse.integrations.openai import instrument_openai, OpenAIConfig
instrument_openai(
    config=OpenAIConfig(
        agent_id="openai-service",
        kpis=[
            {"kpi_id": "latency_ms", "use_duration": True, "threshold": 5000},
            {"kpi_id": "cost_usd", "threshold": 0.10, "comparator": "gt"},
            {"kpi_id": "tokens", "threshold": 4000, "comparator": "gt"},
        ],
        alert_channels=["slack://alerts"],
    )
)
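Each KPI pairs an observed value with a threshold via its comparator (`"gt"` presumably meaning "alert when the value exceeds the threshold"). A sketch of that evaluation logic under that assumption — this is illustrative, not TuringPulse's actual implementation:

```python
# Hypothetical comparator semantics: returns True when a KPI observation
# should trigger an alert. The comparator names below are assumptions,
# not a documented TuringPulse contract.
COMPARATORS = {
    "gt": lambda value, threshold: value > threshold,
    "lt": lambda value, threshold: value < threshold,
}

def breaches(kpi: dict, value: float) -> bool:
    """Check an observed value against a KPI dict like those passed above."""
    compare = COMPARATORS[kpi.get("comparator", "gt")]
    return compare(value, kpi["threshold"])

cost_kpi = {"kpi_id": "cost_usd", "threshold": 0.10, "comparator": "gt"}
print(breaches(cost_kpi, 0.12))  # → True
print(breaches(cost_kpi, 0.05))  # → False
```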
💡 Cost Tracking
TuringPulse automatically calculates costs based on OpenAI pricing. View cost breakdowns by model in the dashboard.
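A by-model breakdown like the dashboard's can also be approximated locally from exported trace data. A sketch, assuming each exported record carries `model` and `cost_usd` fields (both field names are illustrative, not a documented export schema):

```python
from collections import defaultdict

# Hypothetical exported trace records; "model" and "cost_usd" are
# illustrative field names, not TuringPulse's actual export format.
traces = [
    {"model": "gpt-4o", "cost_usd": 0.0075},
    {"model": "gpt-4o", "cost_usd": 0.0030},
    {"model": "gpt-3.5-turbo", "cost_usd": 0.0004},
]

def cost_by_model(records: list[dict]) -> dict[str, float]:
    """Sum estimated cost per model across trace records."""
    totals: dict[str, float] = defaultdict(float)
    for record in records:
        totals[record["model"]] += record["cost_usd"]
    return dict(totals)

print(cost_by_model(traces))
```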