# Log Traces

Traces give you instant visibility into what's working, what's not, and why, along with advanced analysis and debugging features.
## Overview
A trace represents a single execution of your AI workflow. Each trace contains:
- **Spans** - Individual operations (LLM calls, tools, etc.)
- **Metadata** - Custom attributes you attach
- **Metrics** - Latency, tokens, and cost
- **Status** - Success, error, or running
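
To make this concrete, here is a rough sketch of the information a single trace carries. The field names below are illustrative only, not TuringPulse's actual schema:

```python
# Illustrative sketch only -- these field names are assumptions,
# not TuringPulse's actual trace schema
example_trace = {
    "workflow_id": "customer-support-agent",
    "status": "success",                                    # success | error | running
    "metadata": {"user_id": "user-123", "channel": "web"},  # custom attributes you attach
    "metrics": {"latency_ms": 1240, "tokens": 512, "cost_usd": 0.004},
    "spans": [
        {"name": "retrieval", "type": "retrieval"},         # individual operations
        {"name": "generation", "type": "llm"},
    ],
}
```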
## Basic Usage
### Using the Decorator

The simplest way to log traces is with the `@trace` decorator:
```python
from turingpulse import trace

@trace(
    workflow_id="customer-support-agent",
    workflow_name="Customer Support Agent"
)
def handle_support_request(query: str, user_id: str):
    # Your agent logic here
    response = agent.run(query)
    return response

# Call your function normally
result = handle_support_request("How do I reset my password?", "user-123")
```

### Using the Context Manager
For more control, use the context manager:
```python
from turingpulse import Trace

with Trace(
    workflow_id="customer-support-agent",
    metadata={"user_id": "user-123", "channel": "web"}
) as trace:
    # Your agent logic
    response = agent.run(query)

    # Add custom spans
    with trace.span("post-processing"):
        result = process_response(response)

    # Set the output
    trace.set_output(result)
```

## Adding Spans
Spans represent individual operations within a trace. Create nested spans to capture the hierarchy of your workflow:
```python
from turingpulse import trace, span

@trace(workflow_id="rag-agent")
def answer_question(question: str):
    # Retrieval span
    with span("retrieval", span_type="retrieval"):
        docs = vector_store.search(question, k=5)

    # LLM span (auto-captured if using supported providers)
    with span("generation", span_type="llm"):
        response = llm.chat(
            messages=[
                {"role": "system", "content": "Answer based on context."},
                {"role": "user", "content": f"Context: {docs}\n\nQuestion: {question}"}
            ]
        )

    return response
```

### Span Types
| Type | Use Case |
|---|---|
| `llm` | LLM API calls |
| `tool` | Tool/function invocations |
| `retrieval` | Vector search, RAG |
| `agent` | Agent decision steps |
| `chain` | Chain executions |
| `custom` | Any other operation |
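
For instance, an agent decision step that wraps a tool call can be tagged with the matching types. This is a minimal sketch reusing the `span` context manager from above; `get_weather` is a placeholder for your own tool function:

```python
from turingpulse import trace, span

@trace(workflow_id="weather-agent")
def run_agent(query: str):
    # An agent decision step that contains a tool invocation
    with span("plan-action", span_type="agent"):
        # get_weather is a placeholder for your own tool function
        with span("get_weather", span_type="tool"):
            forecast = get_weather(query)
    return forecast
```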
## Adding Metadata
Attach custom metadata to traces and spans for filtering and analysis:
```python
from turingpulse import trace, get_current_trace

@trace(
    workflow_id="support-agent",
    metadata={
        "environment": "production",
        "version": "1.2.3"
    }
)
def handle_request(request):
    # Get the current trace
    current_trace = get_current_trace()

    # Add metadata dynamically
    current_trace.set_metadata("user_id", request.user_id)
    current_trace.set_metadata("request_type", classify_request(request))

    # Add tags for filtering
    current_trace.add_tag("priority", "high")
    current_trace.add_tag("department", "billing")

    return process(request)
```

## Capturing Inputs & Outputs
TuringPulse automatically captures function inputs and outputs, but you can customize what gets logged:
```python
from turingpulse import trace

@trace(
    workflow_id="chat-agent",
    capture_input=True,   # Log function arguments
    capture_output=True,  # Log return value
)
def chat(messages: list, user_id: str):
    response = llm.chat(messages)
    return response

# Or manually set input/output
from turingpulse import get_current_trace

def process_request(request):
    trace = get_current_trace()

    # Set custom input (e.g., a sanitized version)
    trace.set_input({
        "query": request.query,
        "user_id": request.user_id
        # Exclude sensitive fields
    })

    result = agent.run(request)

    # Set custom output
    trace.set_output({
        "response": result.response,
        "confidence": result.confidence
    })
    return result
```

## Error Handling
Errors are automatically captured and attached to traces:
```python
from turingpulse import trace

@trace(workflow_id="my-agent")
def risky_operation():
    try:
        result = external_api.call()
        return result
    except APIError:
        # The error is automatically captured and the
        # trace status is set to "error"
        raise

# Or manually record errors
from turingpulse import get_current_trace

def handle_request(request):
    trace = get_current_trace()
    try:
        result = process(request)
        return result
    except Exception as e:
        trace.record_exception(e)
        trace.set_status("error", str(e))
        raise
```

## Async Support
TuringPulse fully supports async/await:
```python
from turingpulse import trace, span

@trace(workflow_id="async-agent")
async def async_agent(query: str):
    # Async spans work the same way
    async with span("retrieval"):
        docs = await vector_store.asearch(query)

    async with span("generation"):
        response = await llm.achat(messages=[...])

    return response

# Run with asyncio
import asyncio
result = asyncio.run(async_agent("What is the weather?"))
```

## Framework Integrations
For supported frameworks, tracing is automatic:
### LangGraph
```python
from turingpulse.integrations.langgraph import instrument

# Wrap your compiled graph
instrumented_graph = instrument(
    graph.compile(),
    workflow_id="langgraph-agent"
)

# Run normally - all nodes are traced
result = instrumented_graph.invoke({"messages": [...]})
```

### LangChain
```python
from turingpulse.integrations.langchain import TuringPulseCallbackHandler

# Add the callback handler
handler = TuringPulseCallbackHandler(workflow_id="langchain-agent")

# Use with any LangChain component
chain.invoke(input, config={"callbacks": [handler]})
```

See the SDK Integration guides for more details.
## Best Practices
- **Use consistent workflow IDs** - Use the same ID across environments for easy comparison
- **Add meaningful metadata** - User IDs, request types, and versions help with debugging
- **Create spans for key operations** - Don't over-instrument; focus on important steps
- **Handle sensitive data** - Use `capture_input=False` or sanitize inputs (see the sketch after this list)
- **Set appropriate sampling** - In high-volume production, consider sampling traces
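
For the sensitive-data point, here is a minimal sketch that combines `capture_input=False` with a manually sanitized input, using only the APIs shown earlier (`classify_request` and `process` stand in for your own functions, as in the examples above):

```python
from turingpulse import trace, get_current_trace

@trace(workflow_id="support-agent", capture_input=False)  # don't auto-log raw arguments
def handle_request(request):
    # Log a sanitized view instead of the raw request, which may
    # contain emails, tokens, or other PII
    get_current_trace().set_input({
        "request_type": classify_request(request),
        "query_length": len(request.query),
    })
    return process(request)
```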
## Next Steps
- Log Conversations - Multi-turn tracking
- Log User Feedback - Capture ratings
- Cost Tracking - Monitor spend
- Evaluation - Score your traces