Weave supports ingestion of OpenTelemetry compatible trace data through a dedicated endpoint. This endpoint allows you to send OTLP (OpenTelemetry Protocol) formatted trace data directly to your Weave project.
Path: `/otel/v1/traces`
Method: `POST`
Content-Type: `application/x-protobuf`
Base URL: The base URL for the OTEL trace endpoint depends on your W&B deployment type. For the W&B SaaS deployment used in the examples below, it is https://trace.wandb.ai.
Entity: You can only log traces to a project under an entity that you have access to. You can find your entity name by visiting your W&B dashboard at https://wandb.ai/home and checking the Teams field in the left sidebar.
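For reference, a raw request to this endpoint looks like the following sketch. This is a hypothetical illustration only, assuming the SaaS base URL above and a file `trace.bin` that already contains an OTLP-protobuf-serialized trace; in practice the OTLP exporter in the examples below handles this for you.

```bash
# Hypothetical raw request to the Weave OTEL traces endpoint
AUTH=$(printf 'api:%s' "$WANDB_API_KEY" | base64)
curl -X POST "https://trace.wandb.ai/otel/v1/traces" \
  -H "Content-Type: application/x-protobuf" \
  -H "Authorization: Basic $AUTH" \
  -H "project_id: <your-entity>/<your-project>" \
  --data-binary @trace.bin
```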
This example shows how to use the OpenInference OpenAI instrumentation. Many more instrumentations are available in the official repository: https://github.com/Arize-ai/openinference. First, install the required dependencies:
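The package names below are inferred from the imports in the example and may need adjusting for your environment:

```bash
pip install openai openinference-instrumentation-openai opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
```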
Performance recommendation: Always use BatchSpanProcessor instead of SimpleSpanProcessor when sending traces to Weave. SimpleSpanProcessor exports each span synchronously as soon as it ends, which can slow down other workloads. BatchSpanProcessor batches spans and exports them asynchronously, which is why these examples use it and why it is recommended in production.
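For illustration, the difference is only which processor class wraps the exporter. A minimal sketch, assuming `tracer_provider` and `exporter` are configured as in the examples below:

```python
from opentelemetry.sdk.trace.export import BatchSpanProcessor, SimpleSpanProcessor

# Recommended: spans are queued and exported asynchronously in batches
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))

# Avoid in production: each span is exported synchronously as soon as it ends
# tracer_provider.add_span_processor(SimpleSpanProcessor(exporter))
```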
Next, paste the following code into a Python file such as openinference_example.py:
```python
import base64

import openai
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

OPENAI_API_KEY = "YOUR_OPENAI_API_KEY"

WANDB_BASE_URL = "https://trace.wandb.ai"
PROJECT_ID = "<your-entity>/<your-project>"
OTEL_EXPORTER_OTLP_ENDPOINT = f"{WANDB_BASE_URL}/otel/v1/traces"

# Can be found at https://wandb.ai/authorize
WANDB_API_KEY = "<your-wandb-api-key>"

AUTH = base64.b64encode(f"api:{WANDB_API_KEY}".encode()).decode()
OTEL_EXPORTER_OTLP_HEADERS = {
    "Authorization": f"Basic {AUTH}",
    "project_id": PROJECT_ID,
}

tracer_provider = trace_sdk.TracerProvider()

# Configure the OTLP exporter
exporter = OTLPSpanExporter(
    endpoint=OTEL_EXPORTER_OTLP_ENDPOINT,
    headers=OTEL_EXPORTER_OTLP_HEADERS,
)

# Add the exporter to the tracer provider
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))

# Optionally, print the spans to the console.
tracer_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)


def main():
    client = openai.OpenAI(api_key=OPENAI_API_KEY)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Describe OTEL in a single sentence."}],
        max_tokens=20,
        stream=True,
        stream_options={"include_usage": True},
    )
    for chunk in response:
        if chunk.choices and (content := chunk.choices[0].delta.content):
            print(content, end="")


if __name__ == "__main__":
    main()
```
Finally, once you have set the fields specified above to their correct values, run the code:
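```bash
python openinference_example.py
```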
Next, install the OpenLLMetry instrumentation for OpenAI (see the command below), then paste the following code into a Python file such as openllmetry_example.py. Note that this is the same code as above, except that OpenAIInstrumentor is imported from opentelemetry.instrumentation.openai instead of openinference.instrumentation.openai.
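The package name below is an assumption based on the import path used in this example; adjust it to your environment if needed:

```bash
pip install opentelemetry-instrumentation-openai
```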
```python
import base64

import openai
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

OPENAI_API_KEY = "YOUR_OPENAI_API_KEY"

WANDB_BASE_URL = "https://trace.wandb.ai"
PROJECT_ID = "<your-entity>/<your-project>"
OTEL_EXPORTER_OTLP_ENDPOINT = f"{WANDB_BASE_URL}/otel/v1/traces"

# Can be found at https://wandb.ai/authorize
WANDB_API_KEY = "<your-wandb-api-key>"

AUTH = base64.b64encode(f"api:{WANDB_API_KEY}".encode()).decode()
OTEL_EXPORTER_OTLP_HEADERS = {
    "Authorization": f"Basic {AUTH}",
    "project_id": PROJECT_ID,
}

tracer_provider = trace_sdk.TracerProvider()

# Configure the OTLP exporter
exporter = OTLPSpanExporter(
    endpoint=OTEL_EXPORTER_OTLP_ENDPOINT,
    headers=OTEL_EXPORTER_OTLP_HEADERS,
)

# Add the exporter to the tracer provider
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))

# Optionally, print the spans to the console.
tracer_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)


def main():
    client = openai.OpenAI(api_key=OPENAI_API_KEY)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Describe OTEL in a single sentence."}],
        max_tokens=20,
        stream=True,
        stream_options={"include_usage": True},
    )
    for chunk in response:
        if chunk.choices and (content := chunk.choices[0].delta.content):
            print(content, end="")


if __name__ == "__main__":
    main()
```
Finally, once you have set the fields specified above to their correct values, run the code:
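```bash
python openllmetry_example.py
```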
If you would prefer to use OTEL directly instead of an instrumentation package, you may do so. Span attributes are parsed according to the OpenTelemetry semantic conventions for generative AI described at https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/. First, install the required dependencies:
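The package names below are inferred from the imports in the example and may need adjusting for your environment:

```bash
pip install openai opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
```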
Next, paste the following code into a Python file such as opentelemetry_example.py:
```python
import base64
import json

import openai
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

OPENAI_API_KEY = "YOUR_OPENAI_API_KEY"

WANDB_BASE_URL = "https://trace.wandb.ai"
PROJECT_ID = "<your-entity>/<your-project>"
OTEL_EXPORTER_OTLP_ENDPOINT = f"{WANDB_BASE_URL}/otel/v1/traces"

# Can be found at https://wandb.ai/authorize
WANDB_API_KEY = "<your-wandb-api-key>"

AUTH = base64.b64encode(f"api:{WANDB_API_KEY}".encode()).decode()
OTEL_EXPORTER_OTLP_HEADERS = {
    "Authorization": f"Basic {AUTH}",
    "project_id": PROJECT_ID,
}

tracer_provider = trace_sdk.TracerProvider()

# Configure the OTLP exporter
exporter = OTLPSpanExporter(
    endpoint=OTEL_EXPORTER_OTLP_ENDPOINT,
    headers=OTEL_EXPORTER_OTLP_HEADERS,
)

# Add the exporter to the tracer provider
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))

# Optionally, print the spans to the console.
tracer_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

trace.set_tracer_provider(tracer_provider)

# Creates a tracer from the global tracer provider
tracer = trace.get_tracer(__name__)


def my_function():
    with tracer.start_as_current_span("outer_span") as outer_span:
        client = openai.OpenAI(api_key=OPENAI_API_KEY)
        input_messages = [{"role": "user", "content": "Describe OTEL in a single sentence."}]

        # This will only appear in the side panel
        outer_span.set_attribute("input.value", json.dumps(input_messages))
        # This follows conventions and will appear in the dashboard
        outer_span.set_attribute("gen_ai.system", "openai")

        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=input_messages,
            max_tokens=20,
            stream=True,
            stream_options={"include_usage": True},
        )
        out = ""
        for chunk in response:
            if chunk.choices and (content := chunk.choices[0].delta.content):
                out += content

        # This will only appear in the side panel
        outer_span.set_attribute("output.value", json.dumps({"content": out}))


if __name__ == "__main__":
    my_function()
```
Finally, once you have set the fields specified above to their correct values, run the code:
```bash
python opentelemetry_example.py
```
The span attribute prefixes gen_ai and openinference are used to determine which convention to apply, if any, when interpreting the trace. If neither prefix is detected, all span attributes are visible in the trace view. The full span is available in the side panel when you select a trace.
Add specific span attributes to organize your OpenTelemetry traces into Weave threads, then use Weave's thread UI to analyze related operations such as multi-turn conversations or user sessions. Add the following attributes to your OTEL spans to enable thread grouping:
- `wandb.thread_id`: Groups spans into a specific thread
- `wandb.is_turn`: Marks a span as a conversation turn (appears as a row in the thread view)
The following code shows several examples of organizing OTEL traces into Weave threads. They use wandb.thread_id to group related operations and wandb.is_turn to mark high-level operations that appear as rows in the thread view.
Initial setup
Use this configuration to run these examples:
```python
import base64
import json
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configuration
ENTITY = "YOUR_ENTITY"
PROJECT = "YOUR_PROJECT"
PROJECT_ID = f"{ENTITY}/{PROJECT}"
WANDB_API_KEY = os.environ["WANDB_API_KEY"]

# Set up OTLP endpoint and headers
OTEL_EXPORTER_OTLP_ENDPOINT = "https://trace.wandb.ai/otel/v1/traces"
AUTH = base64.b64encode(f"api:{WANDB_API_KEY}".encode()).decode()
OTEL_EXPORTER_OTLP_HEADERS = {
    "Authorization": f"Basic {AUTH}",
    "project_id": PROJECT_ID,
}

# Initialize tracer provider
tracer_provider = trace_sdk.TracerProvider()

# Configure the OTLP exporter
exporter = OTLPSpanExporter(
    endpoint=OTEL_EXPORTER_OTLP_ENDPOINT,
    headers=OTEL_EXPORTER_OTLP_HEADERS,
)

# Add the exporter to the tracer provider
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))

# Optionally, print the spans to the console
tracer_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

# Set the tracer provider
trace.set_tracer_provider(tracer_provider)

# Create a tracer from the global tracer provider
tracer = trace.get_tracer(__name__)
```
Trace a basic single-turn thread
```python
def example_1_basic_thread_and_turn():
    """Example 1: Basic thread with a single turn"""
    print("\n=== Example 1: Basic Thread and Turn ===")

    # Create a thread context
    thread_id = "thread_example_1"

    # This span represents a turn (direct child of thread)
    with tracer.start_as_current_span("process_user_message") as turn_span:
        # Set thread attributes
        turn_span.set_attribute("wandb.thread_id", thread_id)
        turn_span.set_attribute("wandb.is_turn", True)

        # Add some example attributes
        turn_span.set_attribute("input.value", "Hello, help me with setup")

        # Simulate some work with nested spans
        with tracer.start_as_current_span("generate_response") as nested_span:
            # This is a nested call within the turn, so is_turn should be false or unset
            nested_span.set_attribute("wandb.thread_id", thread_id)
            # wandb.is_turn is not set or set to False for nested calls
            response = "I'll help you get started with the setup process."
            nested_span.set_attribute("output.value", response)

        turn_span.set_attribute("output.value", response)
        print(f"Turn completed in thread: {thread_id}")


def main():
    example_1_basic_thread_and_turn()


if __name__ == "__main__":
    main()
```
Trace a multi-turn conversation sharing one thread ID
```python
def example_2_multiple_turns():
    """Example 2: Multiple turns in a single thread"""
    print("\n=== Example 2: Multiple Turns in Thread ===")

    thread_id = "thread_conversation_123"

    # Turn 1
    with tracer.start_as_current_span("process_message_turn1") as turn1_span:
        turn1_span.set_attribute("wandb.thread_id", thread_id)
        turn1_span.set_attribute("wandb.is_turn", True)
        turn1_span.set_attribute("input.value", "What programming languages do you recommend?")

        # Nested operations
        with tracer.start_as_current_span("analyze_query") as analyze_span:
            analyze_span.set_attribute("wandb.thread_id", thread_id)
            # No is_turn attribute or set to False for nested spans

        response1 = "I recommend Python for beginners and JavaScript for web development."
        turn1_span.set_attribute("output.value", response1)
        print(f"Turn 1 completed in thread: {thread_id}")

    # Turn 2
    with tracer.start_as_current_span("process_message_turn2") as turn2_span:
        turn2_span.set_attribute("wandb.thread_id", thread_id)
        turn2_span.set_attribute("wandb.is_turn", True)
        turn2_span.set_attribute("input.value", "Can you explain Python vs JavaScript?")

        # Nested operations
        with tracer.start_as_current_span("comparison_analysis") as compare_span:
            compare_span.set_attribute("wandb.thread_id", thread_id)
            compare_span.set_attribute("wandb.is_turn", False)  # Explicitly false for nested

        response2 = "Python excels at data science while JavaScript dominates web development."
        turn2_span.set_attribute("output.value", response2)
        print(f"Turn 2 completed in thread: {thread_id}")


def main():
    example_2_multiple_turns()


if __name__ == "__main__":
    main()
```
Trace deeply nested operations and mark only the outermost span as a turn
```python
def example_3_complex_nested_structure():
    """Example 3: Complex nested structure with multiple levels"""
    print("\n=== Example 3: Complex Nested Structure ===")

    thread_id = "thread_complex_456"

    # Turn with multiple levels of nesting
    with tracer.start_as_current_span("handle_complex_request") as turn_span:
        turn_span.set_attribute("wandb.thread_id", thread_id)
        turn_span.set_attribute("wandb.is_turn", True)
        turn_span.set_attribute("input.value", "Analyze this code and suggest improvements")

        # Level 1 nested operation
        with tracer.start_as_current_span("code_analysis") as analysis_span:
            analysis_span.set_attribute("wandb.thread_id", thread_id)
            # No is_turn for nested operations

            # Level 2 nested operation
            with tracer.start_as_current_span("syntax_check") as syntax_span:
                syntax_span.set_attribute("wandb.thread_id", thread_id)
                syntax_span.set_attribute("result", "No syntax errors found")

            # Another Level 2 nested operation
            with tracer.start_as_current_span("performance_check") as perf_span:
                perf_span.set_attribute("wandb.thread_id", thread_id)
                perf_span.set_attribute("result", "Found 2 optimization opportunities")

        # Another Level 1 nested operation
        with tracer.start_as_current_span("generate_suggestions") as suggest_span:
            suggest_span.set_attribute("wandb.thread_id", thread_id)
            suggestions = ["Use list comprehension", "Consider caching results"]
            suggest_span.set_attribute("suggestions", json.dumps(suggestions))

        turn_span.set_attribute("output.value", "Analysis complete with 2 improvement suggestions")
        print(f"Complex turn completed in thread: {thread_id}")


def main():
    example_3_complex_nested_structure()


if __name__ == "__main__":
    main()
```
Trace background operations that belong to a thread but aren't turns
```python
def example_4_non_turn_operations():
    """Example 4: Operations that are part of a thread but not turns"""
    print("\n=== Example 4: Non-Turn Thread Operations ===")

    thread_id = "thread_background_789"

    # Background operation that's part of the thread but not a turn
    with tracer.start_as_current_span("background_indexing") as bg_span:
        bg_span.set_attribute("wandb.thread_id", thread_id)
        # wandb.is_turn is unset or false - this is not a turn
        bg_span.set_attribute("wandb.is_turn", False)
        bg_span.set_attribute("operation", "Indexing conversation history")
        print(f"Background operation in thread: {thread_id}")

    # Actual turn in the same thread
    with tracer.start_as_current_span("user_query") as turn_span:
        turn_span.set_attribute("wandb.thread_id", thread_id)
        turn_span.set_attribute("wandb.is_turn", True)
        turn_span.set_attribute("input.value", "Search my previous conversations")
        turn_span.set_attribute("output.value", "Found 5 relevant conversations")
        print(f"Turn completed in thread: {thread_id}")


def main():
    example_4_non_turn_operations()


if __name__ == "__main__":
    main()
```
After sending these traces, you can view them in the Weave UI under the Threads tab, where they are grouped by wandb.thread_id and each turn appears as a separate row.
Weave automatically maps OpenTelemetry span attributes from various instrumentation frameworks to its internal data model. When multiple attribute names map to the same field, Weave applies them in priority order, allowing frameworks to coexist in the same traces.
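For example, a single span can carry attributes from more than one convention. The following is a hedged sketch only (it reuses the tracer from the setup code above, and the exact priority order is internal to Weave):

```python
# Sketch: two conventions describe the same logical input field.
# Weave resolves them to a single field using its internal priority order.
with tracer.start_as_current_span("mixed_conventions") as span:
    span.set_attribute("input.value", '{"prompt": "hi"}')  # OpenInference-style attribute
    span.set_attribute("gen_ai.prompt.0.content", "hi")    # gen_ai-style attribute (OpenLLMetry)
```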