Langfuse Python SDK
Installation
The SDK was rewritten in v3 and released in June 2025. Refer to the v3 migration guide for instructions on updating your code.
pip install langfuse
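The client reads its credentials and host from environment variables when they are not passed explicitly (see the `Langfuse` parameters below). A minimal sketch, using placeholder keys:

```python
import os

# Placeholder values; real keys come from your Langfuse project settings.
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"  # default host
```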
Docs
Please see our docs for detailed information on this SDK.
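For orientation before the API reference below, here is a minimal, illustrative tracing sketch built from this package's exports (`Langfuse`, `observe`, `get_client`); the function and tag names are hypothetical:

```python
from langfuse import Langfuse, get_client, observe

langfuse = Langfuse()  # reads credentials from the environment

@observe()  # wraps each call of the decorated function in a span
def handle_query(question: str) -> str:
    # From anywhere in the call stack, get_client() returns the initialized client
    get_client().update_current_trace(tags=["quickstart"])
    return f"answer to: {question}"

handle_query("What is Langfuse?")
langfuse.flush()  # send buffered spans before the process exits
```

The package `__init__.py` below lists the public exports.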
1""".. include:: ../README.md""" 2 3from ._client.attributes import LangfuseOtelSpanAttributes 4from ._client.get_client import get_client 5from ._client import client as _client 6from ._client.observe import observe 7from ._client.span import LangfuseEvent, LangfuseGeneration, LangfuseSpan 8 9Langfuse = _client.Langfuse 10 11__all__ = [ 12 "Langfuse", 13 "get_client", 14 "observe", 15 "LangfuseSpan", 16 "LangfuseGeneration", 17 "LangfuseEvent", 18 "LangfuseOtelSpanAttributes", 19]
class Langfuse:
    """Main client for Langfuse tracing and platform features.

    This class provides an interface for creating and managing traces, spans,
    and generations in Langfuse as well as interacting with the Langfuse API.

    The client features a thread-safe singleton pattern for each unique public API key,
    ensuring consistent trace context propagation across your application. It implements
    efficient batching of spans with configurable flush settings and includes background
    thread management for media uploads and score ingestion.

    Configuration is flexible through either direct parameters or environment variables,
    with graceful fallbacks and runtime configuration updates.

    Attributes:
        api: Synchronous API client for Langfuse backend communication
        async_api: Asynchronous API client for Langfuse backend communication
        langfuse_tracer: Internal LangfuseTracer instance managing OpenTelemetry components

    Parameters:
        public_key (Optional[str]): Your Langfuse public API key. Can also be set via LANGFUSE_PUBLIC_KEY environment variable.
        secret_key (Optional[str]): Your Langfuse secret API key. Can also be set via LANGFUSE_SECRET_KEY environment variable.
        host (Optional[str]): The Langfuse API host URL. Defaults to "https://cloud.langfuse.com". Can also be set via LANGFUSE_HOST environment variable.
        timeout (Optional[int]): Timeout in seconds for API requests. Defaults to 5 seconds.
        httpx_client (Optional[httpx.Client]): Custom httpx client for making non-tracing HTTP requests. If not provided, a default client will be created.
        debug (bool): Enable debug logging. Defaults to False. Can also be set via LANGFUSE_DEBUG environment variable.
        tracing_enabled (Optional[bool]): Enable or disable tracing. Defaults to True. Can also be set via LANGFUSE_TRACING_ENABLED environment variable.
        flush_at (Optional[int]): Number of spans to batch before sending to the API. Defaults to 512. Can also be set via LANGFUSE_FLUSH_AT environment variable.
        flush_interval (Optional[float]): Time in seconds between batch flushes. Defaults to 5 seconds. Can also be set via LANGFUSE_FLUSH_INTERVAL environment variable.
        environment (Optional[str]): Environment name for tracing. Default is 'default'. Can also be set via LANGFUSE_TRACING_ENVIRONMENT environment variable. Can be any lowercase alphanumeric string with hyphens and underscores that does not start with 'langfuse'.
        release (Optional[str]): Release version/hash of your application. Used for grouping analytics by release.
        media_upload_thread_count (Optional[int]): Number of background threads for handling media uploads. Defaults to 1. Can also be set via LANGFUSE_MEDIA_UPLOAD_THREAD_COUNT environment variable.
        sample_rate (Optional[float]): Sampling rate for traces (0.0 to 1.0). Defaults to 1.0 (100% of traces are sampled). Can also be set via LANGFUSE_SAMPLE_RATE environment variable.
        mask (Optional[MaskFunction]): Function to mask sensitive data in traces before sending to the API.
        blocked_instrumentation_scopes (Optional[List[str]]): List of instrumentation scope names to block from being exported to Langfuse. Spans from these scopes will be filtered out before being sent to the API. Useful for filtering out spans from specific libraries or frameworks. For exported spans, you can see the instrumentation scope name in the span metadata in Langfuse (`metadata.scope.name`).
        additional_headers (Optional[Dict[str, str]]): Additional headers to include in all API requests and OTLPSpanExporter requests. These headers will be merged with default headers. Note: If httpx_client is provided, additional_headers must be set directly on your custom httpx_client as well.
        tracer_provider (Optional[TracerProvider]): OpenTelemetry TracerProvider to use for Langfuse. This can be useful for keeping tracing disconnected between Langfuse and other OpenTelemetry-span-emitting libraries. Note: To track active spans, the context is still shared between TracerProviders. This may lead to broken trace trees.

    Example:
        ```python
        from langfuse import Langfuse

        # Initialize the client (reads from env vars if not provided)
        langfuse = Langfuse(
            public_key="your-public-key",
            secret_key="your-secret-key",
            host="https://cloud.langfuse.com",  # Optional, default shown
        )

        # Create a trace span
        with langfuse.start_as_current_span(name="process-query") as span:
            # Your application code here

            # Create a nested generation span for an LLM call
            with span.start_as_current_generation(
                name="generate-response",
                model="gpt-4",
                input={"query": "Tell me about AI"},
                model_parameters={"temperature": 0.7, "max_tokens": 500}
            ) as generation:
                # Generate response here
                response = "AI is a field of computer science..."

                generation.update(
                    output=response,
                    usage_details={"prompt_tokens": 10, "completion_tokens": 50},
                    cost_details={"total_cost": 0.0023}
                )

                # Score the generation (supports NUMERIC, BOOLEAN, CATEGORICAL)
                generation.score(name="relevance", value=0.95, data_type="NUMERIC")
        ```
    """

    _resources: Optional[LangfuseResourceManager] = None
    _mask: Optional[MaskFunction] = None
    _otel_tracer: otel_trace_api.Tracer

    def __init__(
        self,
        *,
        public_key: Optional[str] = None,
        secret_key: Optional[str] = None,
        host: Optional[str] = None,
        timeout: Optional[int] = None,
        httpx_client: Optional[httpx.Client] = None,
        debug: bool = False,
        tracing_enabled: Optional[bool] = True,
        flush_at: Optional[int] = None,
        flush_interval: Optional[float] = None,
        environment: Optional[str] = None,
        release: Optional[str] = None,
        media_upload_thread_count: Optional[int] = None,
        sample_rate: Optional[float] = None,
        mask: Optional[MaskFunction] = None,
        blocked_instrumentation_scopes: Optional[List[str]] = None,
        additional_headers: Optional[Dict[str, str]] = None,
        tracer_provider: Optional[TracerProvider] = None,
    ):
        self._host = host or cast(
            str, os.environ.get(LANGFUSE_HOST, "https://cloud.langfuse.com")
        )
        self._environment = environment or cast(
            str, os.environ.get(LANGFUSE_TRACING_ENVIRONMENT)
        )
        self._project_id: Optional[str] = None
        sample_rate = sample_rate or float(os.environ.get(LANGFUSE_SAMPLE_RATE, 1.0))
        if not 0.0 <= sample_rate <= 1.0:
            raise ValueError(
                f"Sample rate must be between 0.0 and 1.0, got {sample_rate}"
            )

        timeout = timeout or int(os.environ.get(LANGFUSE_TIMEOUT, 5))

        self._tracing_enabled = (
            tracing_enabled
            and os.environ.get(LANGFUSE_TRACING_ENABLED, "True") != "False"
        )
        if not self._tracing_enabled:
            langfuse_logger.info(
                "Configuration: Langfuse tracing is explicitly disabled. No data will be sent to the Langfuse API."
            )
        debug = debug if debug else (os.getenv(LANGFUSE_DEBUG, "False") == "True")
        if debug:
            logging.basicConfig(
                format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
            )
            langfuse_logger.setLevel(logging.DEBUG)

        public_key = public_key or os.environ.get(LANGFUSE_PUBLIC_KEY)
        if public_key is None:
            langfuse_logger.warning(
                "Authentication error: Langfuse client initialized without public_key. Client will be disabled. "
                "Provide a public_key parameter or set LANGFUSE_PUBLIC_KEY environment variable. "
            )
            self._otel_tracer = otel_trace_api.NoOpTracer()
            return

        secret_key = secret_key or os.environ.get(LANGFUSE_SECRET_KEY)
        if secret_key is None:
            langfuse_logger.warning(
                "Authentication error: Langfuse client initialized without secret_key. Client will be disabled. "
                "Provide a secret_key parameter or set LANGFUSE_SECRET_KEY environment variable. "
            )
            self._otel_tracer = otel_trace_api.NoOpTracer()
            return

        # Initialize api and tracer if requirements are met
        self._resources = LangfuseResourceManager(
            public_key=public_key,
            secret_key=secret_key,
            host=self._host,
            timeout=timeout,
            environment=environment,
            release=release,
            flush_at=flush_at,
            flush_interval=flush_interval,
            httpx_client=httpx_client,
            media_upload_thread_count=media_upload_thread_count,
            sample_rate=sample_rate,
            mask=mask,
            tracing_enabled=self._tracing_enabled,
            blocked_instrumentation_scopes=blocked_instrumentation_scopes,
            additional_headers=additional_headers,
            tracer_provider=tracer_provider,
        )
        self._mask = self._resources.mask

        self._otel_tracer = (
            self._resources.tracer
            if self._tracing_enabled and self._resources.tracer is not None
            else otel_trace_api.NoOpTracer()
        )
        self.api = self._resources.api
        self.async_api = self._resources.async_api

    def start_span(
        self,
        *,
        trace_context: Optional[TraceContext] = None,
        name: str,
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        version: Optional[str] = None,
        level: Optional[SpanLevel] = None,
        status_message: Optional[str] = None,
    ) -> LangfuseSpan:
        """Create a new span for tracing a unit of work.

        This method creates a new span but does not set it as the current span in the
        context. To create and use a span within a context, use start_as_current_span().

        The created span will be the child of the current span in the context.

        Args:
            trace_context: Optional context for connecting to an existing trace
            name: Name of the span (e.g., function or operation name)
            input: Input data for the operation (can be any JSON-serializable object)
            output: Output data from the operation (can be any JSON-serializable object)
            metadata: Additional metadata to associate with the span
            version: Version identifier for the code or component
            level: Importance level of the span (info, warning, error)
            status_message: Optional status message for the span

        Returns:
            A LangfuseSpan object that must be ended with .end() when the operation completes

        Example:
            ```python
            span = langfuse.start_span(name="process-data")
            try:
                # Do work
                span.update(output="result")
            finally:
                span.end()
            ```
        """
        if trace_context:
            trace_id = trace_context.get("trace_id", None)
            parent_span_id = trace_context.get("parent_span_id", None)

            if trace_id:
                remote_parent_span = self._create_remote_parent_span(
                    trace_id=trace_id, parent_span_id=parent_span_id
                )

                with otel_trace_api.use_span(
                    cast(otel_trace_api.Span, remote_parent_span)
                ):
                    otel_span = self._otel_tracer.start_span(name=name)
                    otel_span.set_attribute(LangfuseOtelSpanAttributes.AS_ROOT, True)

                    return LangfuseSpan(
                        otel_span=otel_span,
                        langfuse_client=self,
                        environment=self._environment,
                        input=input,
                        output=output,
                        metadata=metadata,
                        version=version,
                        level=level,
                        status_message=status_message,
                    )

        otel_span = self._otel_tracer.start_span(name=name)

        return LangfuseSpan(
            otel_span=otel_span,
            langfuse_client=self,
            environment=self._environment,
            input=input,
            output=output,
            metadata=metadata,
            version=version,
            level=level,
            status_message=status_message,
        )

    def start_as_current_span(
        self,
        *,
        trace_context: Optional[TraceContext] = None,
        name: str,
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        version: Optional[str] = None,
        level: Optional[SpanLevel] = None,
        status_message: Optional[str] = None,
        end_on_exit: Optional[bool] = None,
    ) -> _AgnosticContextManager[LangfuseSpan]:
        """Create a new span and set it as the current span in a context manager.

        This method creates a new span and sets it as the current span within a context
        manager. Use this method with a 'with' statement to automatically handle span
        lifecycle within a code block.

        The created span will be the child of the current span in the context.

        Args:
            trace_context: Optional context for connecting to an existing trace
            name: Name of the span (e.g., function or operation name)
            input: Input data for the operation (can be any JSON-serializable object)
            output: Output data from the operation (can be any JSON-serializable object)
            metadata: Additional metadata to associate with the span
            version: Version identifier for the code or component
            level: Importance level of the span (info, warning, error)
            status_message: Optional status message for the span
            end_on_exit (default: True): Whether to end the span automatically when leaving the context manager. If False, the span must be manually ended to avoid memory leaks.
        Returns:
            A context manager that yields a LangfuseSpan

        Example:
            ```python
            with langfuse.start_as_current_span(name="process-query") as span:
                # Do work
                result = process_data()
                span.update(output=result)

                # Create a child span automatically
                with span.start_as_current_span(name="sub-operation") as child_span:
                    # Do sub-operation work
                    child_span.update(output="sub-result")
            ```
        """
        if trace_context:
            trace_id = trace_context.get("trace_id", None)
            parent_span_id = trace_context.get("parent_span_id", None)

            if trace_id:
                remote_parent_span = self._create_remote_parent_span(
                    trace_id=trace_id, parent_span_id=parent_span_id
                )

                return cast(
                    _AgnosticContextManager[LangfuseSpan],
                    self._create_span_with_parent_context(
                        as_type="span",
                        name=name,
                        remote_parent_span=remote_parent_span,
                        parent=None,
                        end_on_exit=end_on_exit,
                        input=input,
                        output=output,
                        metadata=metadata,
                        version=version,
                        level=level,
                        status_message=status_message,
                    ),
                )

        return cast(
            _AgnosticContextManager[LangfuseSpan],
            self._start_as_current_otel_span_with_processed_media(
                as_type="span",
                name=name,
                end_on_exit=end_on_exit,
                input=input,
                output=output,
                metadata=metadata,
                version=version,
                level=level,
                status_message=status_message,
            ),
        )

    def start_generation(
        self,
        *,
        trace_context: Optional[TraceContext] = None,
        name: str,
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        version: Optional[str] = None,
        level: Optional[SpanLevel] = None,
        status_message: Optional[str] = None,
        completion_start_time: Optional[datetime] = None,
        model: Optional[str] = None,
        model_parameters: Optional[Dict[str, MapValue]] = None,
        usage_details: Optional[Dict[str, int]] = None,
        cost_details: Optional[Dict[str, float]] = None,
        prompt: Optional[PromptClient] = None,
    ) -> LangfuseGeneration:
        """Create a new generation span for model generations.

        This method creates a specialized span for tracking model generations.
        It includes additional fields specific to model generations such as model name,
        token usage, and cost details.

        The created generation span will be the child of the current span in the context.

        Args:
            trace_context: Optional context for connecting to an existing trace
            name: Name of the generation operation
            input: Input data for the model (e.g., prompts)
            output: Output from the model (e.g., completions)
            metadata: Additional metadata to associate with the generation
            version: Version identifier for the model or component
            level: Importance level of the generation (info, warning, error)
            status_message: Optional status message for the generation
            completion_start_time: When the model started generating the response
            model: Name/identifier of the AI model used (e.g., "gpt-4")
            model_parameters: Parameters used for the model (e.g., temperature, max_tokens)
            usage_details: Token usage information (e.g., prompt_tokens, completion_tokens)
            cost_details: Cost information for the model call
            prompt: Associated prompt template from Langfuse prompt management

        Returns:
            A LangfuseGeneration object that must be ended with .end() when complete

        Example:
            ```python
            generation = langfuse.start_generation(
                name="answer-generation",
                model="gpt-4",
                input={"prompt": "Explain quantum computing"},
                model_parameters={"temperature": 0.7}
            )
            try:
                # Call model API
                response = llm.generate(...)

                generation.update(
                    output=response.text,
                    usage_details={
                        "prompt_tokens": response.usage.prompt_tokens,
                        "completion_tokens": response.usage.completion_tokens
                    }
                )
            finally:
                generation.end()
            ```
        """
        if trace_context:
            trace_id = trace_context.get("trace_id", None)
            parent_span_id = trace_context.get("parent_span_id", None)

            if trace_id:
                remote_parent_span = self._create_remote_parent_span(
                    trace_id=trace_id, parent_span_id=parent_span_id
                )

                with otel_trace_api.use_span(
                    cast(otel_trace_api.Span, remote_parent_span)
                ):
                    otel_span = self._otel_tracer.start_span(name=name)
                    otel_span.set_attribute(LangfuseOtelSpanAttributes.AS_ROOT, True)

                    return LangfuseGeneration(
                        otel_span=otel_span,
                        langfuse_client=self,
                        input=input,
                        output=output,
                        metadata=metadata,
                        version=version,
                        level=level,
                        status_message=status_message,
                        completion_start_time=completion_start_time,
                        model=model,
                        model_parameters=model_parameters,
                        usage_details=usage_details,
                        cost_details=cost_details,
                        prompt=prompt,
                    )

        otel_span = self._otel_tracer.start_span(name=name)

        return LangfuseGeneration(
            otel_span=otel_span,
            langfuse_client=self,
            input=input,
            output=output,
            metadata=metadata,
            version=version,
            level=level,
            status_message=status_message,
            completion_start_time=completion_start_time,
            model=model,
            model_parameters=model_parameters,
            usage_details=usage_details,
            cost_details=cost_details,
            prompt=prompt,
        )

    def start_as_current_generation(
        self,
        *,
        trace_context: Optional[TraceContext] = None,
        name: str,
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        version: Optional[str] = None,
        level: Optional[SpanLevel] = None,
        status_message: Optional[str] = None,
        completion_start_time: Optional[datetime] = None,
        model: Optional[str] = None,
        model_parameters: Optional[Dict[str, MapValue]] = None,
        usage_details: Optional[Dict[str, int]] = None,
        cost_details: Optional[Dict[str, float]] = None,
        prompt: Optional[PromptClient] = None,
        end_on_exit: Optional[bool] = None,
    ) -> _AgnosticContextManager[LangfuseGeneration]:
        """Create a new generation span and set it as the current span in a context manager.

        This method creates a specialized span for model generations and sets it as the
        current span within a context manager. Use this method with a 'with' statement to
        automatically handle the generation span lifecycle within a code block.

        The created generation span will be the child of the current span in the context.

        Args:
            trace_context: Optional context for connecting to an existing trace
            name: Name of the generation operation
            input: Input data for the model (e.g., prompts)
            output: Output from the model (e.g., completions)
            metadata: Additional metadata to associate with the generation
            version: Version identifier for the model or component
            level: Importance level of the generation (info, warning, error)
            status_message: Optional status message for the generation
            completion_start_time: When the model started generating the response
            model: Name/identifier of the AI model used (e.g., "gpt-4")
            model_parameters: Parameters used for the model (e.g., temperature, max_tokens)
            usage_details: Token usage information (e.g., prompt_tokens, completion_tokens)
            cost_details: Cost information for the model call
            prompt: Associated prompt template from Langfuse prompt management
            end_on_exit (default: True): Whether to end the span automatically when leaving the context manager. If False, the span must be manually ended to avoid memory leaks.

        Returns:
            A context manager that yields a LangfuseGeneration

        Example:
            ```python
            with langfuse.start_as_current_generation(
                name="answer-generation",
                model="gpt-4",
                input={"prompt": "Explain quantum computing"}
            ) as generation:
                # Call model API
                response = llm.generate(...)

                # Update with results
                generation.update(
                    output=response.text,
                    usage_details={
                        "prompt_tokens": response.usage.prompt_tokens,
                        "completion_tokens": response.usage.completion_tokens
                    }
                )
            ```
        """
        if trace_context:
            trace_id = trace_context.get("trace_id", None)
            parent_span_id = trace_context.get("parent_span_id", None)

            if trace_id:
                remote_parent_span = self._create_remote_parent_span(
                    trace_id=trace_id, parent_span_id=parent_span_id
                )

                return cast(
                    _AgnosticContextManager[LangfuseGeneration],
                    self._create_span_with_parent_context(
                        as_type="generation",
                        name=name,
                        remote_parent_span=remote_parent_span,
                        parent=None,
                        end_on_exit=end_on_exit,
                        input=input,
                        output=output,
                        metadata=metadata,
                        version=version,
                        level=level,
                        status_message=status_message,
                        completion_start_time=completion_start_time,
                        model=model,
                        model_parameters=model_parameters,
                        usage_details=usage_details,
                        cost_details=cost_details,
                        prompt=prompt,
                    ),
                )

        return cast(
            _AgnosticContextManager[LangfuseGeneration],
            self._start_as_current_otel_span_with_processed_media(
                as_type="generation",
                name=name,
                end_on_exit=end_on_exit,
                input=input,
                output=output,
                metadata=metadata,
                version=version,
                level=level,
                status_message=status_message,
                completion_start_time=completion_start_time,
                model=model,
                model_parameters=model_parameters,
                usage_details=usage_details,
                cost_details=cost_details,
                prompt=prompt,
            ),
        )

    @_agnosticcontextmanager
    def _create_span_with_parent_context(
        self,
        *,
        name: str,
        parent: Optional[otel_trace_api.Span] = None,
        remote_parent_span: Optional[otel_trace_api.Span] = None,
        as_type: Literal["generation", "span"],
        end_on_exit: Optional[bool] = None,
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        version: Optional[str] = None,
        level: Optional[SpanLevel] = None,
        status_message: Optional[str] = None,
        completion_start_time: Optional[datetime] = None,
        model: Optional[str] = None,
        model_parameters: Optional[Dict[str, MapValue]] = None,
        usage_details: Optional[Dict[str, int]] = None,
        cost_details: Optional[Dict[str, float]] = None,
        prompt: Optional[PromptClient] = None,
    ) -> Any:
        parent_span = parent or cast(otel_trace_api.Span, remote_parent_span)

        with otel_trace_api.use_span(parent_span):
            with self._start_as_current_otel_span_with_processed_media(
                name=name,
                as_type=as_type,
                end_on_exit=end_on_exit,
                input=input,
                output=output,
                metadata=metadata,
                version=version,
                level=level,
                status_message=status_message,
                completion_start_time=completion_start_time,
                model=model,
                model_parameters=model_parameters,
                usage_details=usage_details,
                cost_details=cost_details,
                prompt=prompt,
            ) as langfuse_span:
                if remote_parent_span is not None:
                    langfuse_span._otel_span.set_attribute(
                        LangfuseOtelSpanAttributes.AS_ROOT, True
                    )

                yield langfuse_span

    @_agnosticcontextmanager
    def _start_as_current_otel_span_with_processed_media(
        self,
        *,
        name: str,
        as_type: Optional[Literal["generation", "span"]] = None,
        end_on_exit: Optional[bool] = None,
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        version: Optional[str] = None,
        level: Optional[SpanLevel] = None,
        status_message: Optional[str] = None,
        completion_start_time: Optional[datetime] = None,
        model: Optional[str] = None,
        model_parameters: Optional[Dict[str, MapValue]] = None,
        usage_details: Optional[Dict[str, int]] = None,
        cost_details: Optional[Dict[str, float]] = None,
        prompt: Optional[PromptClient] = None,
    ) -> Any:
        with self._otel_tracer.start_as_current_span(
            name=name,
            end_on_exit=end_on_exit if end_on_exit is not None else True,
        ) as otel_span:
            yield (
                LangfuseSpan(
                    otel_span=otel_span,
                    langfuse_client=self,
                    environment=self._environment,
                    input=input,
                    output=output,
                    metadata=metadata,
                    version=version,
                    level=level,
                    status_message=status_message,
                )
                if as_type == "span"
                else LangfuseGeneration(
                    otel_span=otel_span,
                    langfuse_client=self,
                    environment=self._environment,
                    input=input,
                    output=output,
                    metadata=metadata,
                    version=version,
                    level=level,
                    status_message=status_message,
                    completion_start_time=completion_start_time,
                    model=model,
                    model_parameters=model_parameters,
                    usage_details=usage_details,
                    cost_details=cost_details,
                    prompt=prompt,
                )
            )

    def _get_current_otel_span(self) -> Optional[otel_trace_api.Span]:
        current_span = otel_trace_api.get_current_span()

        if current_span is otel_trace_api.INVALID_SPAN:
            langfuse_logger.warning(
                "Context error: No active span in current context. Operations that depend on an active span will be skipped. "
                "Ensure spans are created with start_as_current_span() or that you're operating within an active span context."
            )
            return None

        return current_span

    def update_current_generation(
        self,
        *,
        name: Optional[str] = None,
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        version: Optional[str] = None,
        level: Optional[SpanLevel] = None,
        status_message: Optional[str] = None,
        completion_start_time: Optional[datetime] = None,
        model: Optional[str] = None,
        model_parameters: Optional[Dict[str, MapValue]] = None,
        usage_details: Optional[Dict[str, int]] = None,
        cost_details: Optional[Dict[str, float]] = None,
        prompt: Optional[PromptClient] = None,
    ) -> None:
        """Update the current active generation span with new information.

        This method updates the current generation span in the active context with
        additional information. It's useful for adding output, usage stats, or other
        details that become available during or after model generation.

        Args:
            name: The generation name
            input: Updated input data for the model
            output: Output from the model (e.g., completions)
            metadata: Additional metadata to associate with the generation
            version: Version identifier for the model or component
            level: Importance level of the generation (info, warning, error)
            status_message: Optional status message for the generation
            completion_start_time: When the model started generating the response
            model: Name/identifier of the AI model used (e.g., "gpt-4")
            model_parameters: Parameters used for the model (e.g., temperature, max_tokens)
            usage_details: Token usage information (e.g., prompt_tokens, completion_tokens)
            cost_details: Cost information for the model call
            prompt: Associated prompt template from Langfuse prompt management

        Example:
            ```python
            with langfuse.start_as_current_generation(name="answer-query") as generation:
                # Initial setup and API call
                response = llm.generate(...)

                # Update with results that weren't available at creation time
                langfuse.update_current_generation(
                    output=response.text,
                    usage_details={
                        "prompt_tokens": response.usage.prompt_tokens,
                        "completion_tokens": response.usage.completion_tokens
                    }
                )
            ```
        """
        if not self._tracing_enabled:
            langfuse_logger.debug(
                "Operation skipped: update_current_generation - Tracing is disabled or client is in no-op mode."
            )
            return

        current_otel_span = self._get_current_otel_span()

        if current_otel_span is not None:
            generation = LangfuseGeneration(
                otel_span=current_otel_span, langfuse_client=self
            )

            if name:
                current_otel_span.update_name(name)

            generation.update(
                input=input,
                output=output,
                metadata=metadata,
                version=version,
                level=level,
                status_message=status_message,
                completion_start_time=completion_start_time,
                model=model,
                model_parameters=model_parameters,
                usage_details=usage_details,
                cost_details=cost_details,
                prompt=prompt,
            )

    def update_current_span(
        self,
        *,
        name: Optional[str] = None,
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        version: Optional[str] = None,
        level: Optional[SpanLevel] = None,
        status_message: Optional[str] = None,
    ) -> None:
        """Update the current active span with new information.

        This method updates the current span in the active context with
        additional information. It's useful for adding outputs or metadata
        that become available during execution.
        Args:
            name: The span name
            input: Updated input data for the operation
            output: Output data from the operation
            metadata: Additional metadata to associate with the span
            version: Version identifier for the code or component
            level: Importance level of the span (info, warning, error)
            status_message: Optional status message for the span

        Example:
            ```python
            with langfuse.start_as_current_span(name="process-data") as span:
                # Initial processing
                result = process_first_part()

                # Update with intermediate results
                langfuse.update_current_span(metadata={"intermediate_result": result})

                # Continue processing
                final_result = process_second_part(result)

                # Final update
                langfuse.update_current_span(output=final_result)
            ```
        """
        if not self._tracing_enabled:
            langfuse_logger.debug(
                "Operation skipped: update_current_span - Tracing is disabled or client is in no-op mode."
            )
            return

        current_otel_span = self._get_current_otel_span()

        if current_otel_span is not None:
            span = LangfuseSpan(
                otel_span=current_otel_span,
                langfuse_client=self,
                environment=self._environment,
            )

            if name:
                current_otel_span.update_name(name)

            span.update(
                input=input,
                output=output,
                metadata=metadata,
                version=version,
                level=level,
                status_message=status_message,
            )

    def update_current_trace(
        self,
        *,
        name: Optional[str] = None,
        user_id: Optional[str] = None,
        session_id: Optional[str] = None,
        version: Optional[str] = None,
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        tags: Optional[List[str]] = None,
        public: Optional[bool] = None,
    ) -> None:
        """Update the current trace with additional information.

        This method updates the Langfuse trace that the current span belongs to. It's useful for
        adding trace-level metadata like user ID, session ID, or tags that apply to
        the entire Langfuse trace rather than just a single observation.

        Args:
            name: Updated name for the Langfuse trace
            user_id: ID of the user who initiated the Langfuse trace
            session_id: Session identifier for grouping related Langfuse traces
            version: Version identifier for the application or service
            input: Input data for the overall Langfuse trace
            output: Output data from the overall Langfuse trace
            metadata: Additional metadata to associate with the Langfuse trace
            tags: List of tags to categorize the Langfuse trace
            public: Whether the Langfuse trace should be publicly accessible

        Example:
            ```python
            with langfuse.start_as_current_span(name="handle-request") as span:
                # Get user information
                user = authenticate_user(request)

                # Update trace with user context
                langfuse.update_current_trace(
                    user_id=user.id,
                    session_id=request.session_id,
                    tags=["production", "web-app"]
                )

                # Continue processing
                response = process_request(request)

                # Update span with results
                span.update(output=response)
            ```
        """
        if not self._tracing_enabled:
            langfuse_logger.debug(
                "Operation skipped: update_current_trace - Tracing is disabled or client is in no-op mode."
            )
            return

        current_otel_span = self._get_current_otel_span()

        if current_otel_span is not None:
            span = LangfuseSpan(
                otel_span=current_otel_span,
                langfuse_client=self,
                environment=self._environment,
            )

            span.update_trace(
                name=name,
                user_id=user_id,
                session_id=session_id,
                version=version,
                input=input,
                output=output,
                metadata=metadata,
                tags=tags,
                public=public,
            )

    def create_event(
        self,
        *,
        trace_context: Optional[TraceContext] = None,
        name: str,
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        version: Optional[str] = None,
        level: Optional[SpanLevel] = None,
        status_message: Optional[str] = None,
    ) -> LangfuseEvent:
        """Create a new Langfuse observation of type 'EVENT'.

        The created Langfuse Event observation will be the child of the current span in the context.

        Args:
            trace_context: Optional context for connecting to an existing trace
            name: Name of the event (e.g., function or operation name)
            input: Input data for the operation (can be any JSON-serializable object)
            output: Output data from the operation (can be any JSON-serializable object)
            metadata: Additional metadata to associate with the event
            version: Version identifier for the code or component
            level: Importance level of the event (info, warning, error)
            status_message: Optional status message for the event

        Returns:
            The Langfuse Event object

        Example:
            ```python
            event = langfuse.create_event(name="process-event")
            ```
        """
        timestamp = time_ns()

        if trace_context:
            trace_id = trace_context.get("trace_id", None)
            parent_span_id = trace_context.get("parent_span_id", None)

            if trace_id:
                remote_parent_span = self._create_remote_parent_span(
                    trace_id=trace_id, parent_span_id=parent_span_id
                )

                with otel_trace_api.use_span(
                    cast(otel_trace_api.Span, remote_parent_span)
                ):
                    otel_span = self._otel_tracer.start_span(
                        name=name, start_time=timestamp
                    )
                    otel_span.set_attribute(LangfuseOtelSpanAttributes.AS_ROOT, True)

                    return cast(
                        LangfuseEvent,
                        LangfuseEvent(
                            otel_span=otel_span,
                            langfuse_client=self,
                            environment=self._environment,
                            input=input,
                            output=output,
                            metadata=metadata,
                            version=version,
                            level=level,
                            status_message=status_message,
                        ).end(end_time=timestamp),
                    )

        otel_span = self._otel_tracer.start_span(name=name, start_time=timestamp)

        return cast(
            LangfuseEvent,
            LangfuseEvent(
                otel_span=otel_span,
                langfuse_client=self,
                environment=self._environment,
                input=input,
                output=output,
                metadata=metadata,
                version=version,
                level=level,
                status_message=status_message,
            ).end(end_time=timestamp),
        )

    def _create_remote_parent_span(
        self, *, trace_id: str, parent_span_id: Optional[str]
    ) -> Any:
        if not self._is_valid_trace_id(trace_id):
            langfuse_logger.warning(
                f"Passed trace ID '{trace_id}' is not a valid 32 lowercase hex char Langfuse trace id. Ignoring trace ID."
            )

        if parent_span_id and not self._is_valid_span_id(parent_span_id):
            langfuse_logger.warning(
                f"Passed span ID '{parent_span_id}' is not a valid 16 lowercase hex char Langfuse span id. Ignoring parent span ID."
            )
        int_trace_id = int(trace_id, 16)
        int_parent_span_id = (
            int(parent_span_id, 16)
            if parent_span_id
            else RandomIdGenerator().generate_span_id()
        )

        span_context = otel_trace_api.SpanContext(
            trace_id=int_trace_id,
            span_id=int_parent_span_id,
            trace_flags=otel_trace_api.TraceFlags(0x01),  # mark span as sampled
            is_remote=False,
        )

        return trace.NonRecordingSpan(span_context)

    def _is_valid_trace_id(self, trace_id: str) -> bool:
        pattern = r"^[0-9a-f]{32}$"

        return bool(re.match(pattern, trace_id))

    def _is_valid_span_id(self, span_id: str) -> bool:
        pattern = r"^[0-9a-f]{16}$"

        return bool(re.match(pattern, span_id))

    def _create_observation_id(self, *, seed: Optional[str] = None) -> str:
        """Create a unique observation ID for use with Langfuse.

        This method generates a unique observation ID (span ID in OpenTelemetry terms)
        for use with various Langfuse APIs. It can either generate a random ID or
        create a deterministic ID based on a seed string.

        Observation IDs must be 16 lowercase hexadecimal characters, representing 8 bytes.
        This method ensures the generated ID meets this requirement. If you need to
        correlate an external ID with a Langfuse observation ID, use the external ID as
        the seed to get a valid, deterministic observation ID.

        Args:
            seed: Optional string to use as a seed for deterministic ID generation.
                If provided, the same seed will always produce the same ID.
                If not provided, a random ID will be generated.

        Returns:
            A 16-character lowercase hexadecimal string representing the observation ID.

        Example:
            ```python
            # Generate a random observation ID
            obs_id = langfuse.create_observation_id()

            # Generate a deterministic ID based on a seed
            user_obs_id = langfuse.create_observation_id(seed="user-123-feedback")

            # Correlate an external item ID with a Langfuse observation ID
            item_id = "item-789012"
            correlated_obs_id = langfuse.create_observation_id(seed=item_id)

            # Use the ID with Langfuse APIs
            langfuse.create_score(
                name="relevance",
                value=0.95,
                trace_id=trace_id,
                observation_id=obs_id
            )
            ```
        """
        if not seed:
            span_id_int = RandomIdGenerator().generate_span_id()

            return self._format_otel_span_id(span_id_int)

        return sha256(seed.encode("utf-8")).digest()[:8].hex()

    @staticmethod
    def create_trace_id(*, seed: Optional[str] = None) -> str:
        """Create a unique trace ID for use with Langfuse.

        This method generates a unique trace ID for use with various Langfuse APIs.
        It can either generate a random ID or create a deterministic ID based on
        a seed string.

        Trace IDs must be 32 lowercase hexadecimal characters, representing 16 bytes.
        This method ensures the generated ID meets this requirement. If you need to
        correlate an external ID with a Langfuse trace ID, use the external ID as the
        seed to get a valid, deterministic Langfuse trace ID.

        Args:
            seed: Optional string to use as a seed for deterministic ID generation.
                If provided, the same seed will always produce the same ID.
                If not provided, a random ID will be generated.

        Returns:
            A 32-character lowercase hexadecimal string representing the Langfuse trace ID.

        Example:
            ```python
            # Generate a random trace ID
            trace_id = langfuse.create_trace_id()

            # Generate a deterministic ID based on a seed
            session_trace_id = langfuse.create_trace_id(seed="session-456")

            # Correlate an external ID with a Langfuse trace ID
            external_id = "external-system-123456"
            correlated_trace_id = langfuse.create_trace_id(seed=external_id)

            # Use the ID with trace context
            with langfuse.start_as_current_span(
                name="process-request",
                trace_context={"trace_id": trace_id}
            ) as span:
                # Operation will be part of the specific trace
                pass
            ```
        """
        if not seed:
            trace_id_int = RandomIdGenerator().generate_trace_id()

            return Langfuse._format_otel_trace_id(trace_id_int)

        return sha256(seed.encode("utf-8")).digest()[:16].hex()

    def _get_otel_trace_id(self, otel_span: otel_trace_api.Span) -> str:
        span_context = otel_span.get_span_context()

        return self._format_otel_trace_id(span_context.trace_id)

    def _get_otel_span_id(self, otel_span: otel_trace_api.Span) -> str:
        span_context = otel_span.get_span_context()

        return self._format_otel_span_id(span_context.span_id)

    @staticmethod
    def _format_otel_span_id(span_id_int: int) -> str:
        """Format an integer span ID to a 16-character lowercase hex string.

        Internal method to convert an OpenTelemetry integer span ID to the standard
        W3C Trace Context format (16-character lowercase hex string).

        Args:
            span_id_int: 64-bit integer representing a span ID

        Returns:
            A 16-character lowercase hexadecimal string
        """
        return format(span_id_int, "016x")

    @staticmethod
    def _format_otel_trace_id(trace_id_int: int) -> str:
        """Format an integer trace ID to a 32-character lowercase hex string.

        Internal method to convert an OpenTelemetry integer trace ID to the standard
        W3C Trace Context format (32-character lowercase hex string).

        Args:
            trace_id_int: 128-bit integer representing a trace ID

        Returns:
            A 32-character lowercase hexadecimal string
        """
        return format(trace_id_int, "032x")

    @overload
    def create_score(
        self,
        *,
        name: str,
        value: float,
        session_id: Optional[str] = None,
        dataset_run_id: Optional[str] = None,
        trace_id: Optional[str] = None,
        observation_id: Optional[str] = None,
        score_id: Optional[str] = None,
        data_type: Optional[Literal["NUMERIC", "BOOLEAN"]] = None,
        comment: Optional[str] = None,
        config_id: Optional[str] = None,
        metadata: Optional[Any] = None,
    ) -> None: ...

    @overload
    def create_score(
        self,
        *,
        name: str,
        value: str,
        session_id: Optional[str] = None,
        dataset_run_id: Optional[str] = None,
        trace_id: Optional[str] = None,
        score_id: Optional[str] = None,
        observation_id: Optional[str] = None,
        data_type: Optional[Literal["CATEGORICAL"]] = "CATEGORICAL",
        comment: Optional[str] = None,
        config_id: Optional[str] = None,
        metadata: Optional[Any] = None,
    ) -> None: ...
    def create_score(
        self,
        *,
        name: str,
        value: Union[float, str],
        session_id: Optional[str] = None,
        dataset_run_id: Optional[str] = None,
        trace_id: Optional[str] = None,
        observation_id: Optional[str] = None,
        score_id: Optional[str] = None,
        data_type: Optional[ScoreDataType] = None,
        comment: Optional[str] = None,
        config_id: Optional[str] = None,
        metadata: Optional[Any] = None,
    ) -> None:
        """Create a score for a specific trace or observation.

        This method creates a score for evaluating a Langfuse trace or observation. Scores can be
        used to track quality metrics, user feedback, or automated evaluations.

        Args:
            name: Name of the score (e.g., "relevance", "accuracy")
            value: Score value (can be numeric for NUMERIC/BOOLEAN types or string for CATEGORICAL)
            session_id: ID of the Langfuse session to associate the score with
            dataset_run_id: ID of the Langfuse dataset run to associate the score with
            trace_id: ID of the Langfuse trace to associate the score with
            observation_id: Optional ID of the specific observation to score. Trace ID must be provided too.
            score_id: Optional custom ID for the score (auto-generated if not provided)
            data_type: Type of score (NUMERIC, BOOLEAN, or CATEGORICAL)
            comment: Optional comment or explanation for the score
            config_id: Optional ID of a score config defined in Langfuse
            metadata: Optional metadata to be attached to the score

        Example:
            ```python
            # Create a numeric score for accuracy
            langfuse.create_score(
                name="accuracy",
                value=0.92,
                trace_id="abcdef1234567890abcdef1234567890",
                data_type="NUMERIC",
                comment="High accuracy with minor irrelevant details"
            )

            # Create a categorical score for sentiment
            langfuse.create_score(
                name="sentiment",
                value="positive",
                trace_id="abcdef1234567890abcdef1234567890",
                observation_id="abcdef1234567890",
                data_type="CATEGORICAL"
            )
            ```
        """
        if not self._tracing_enabled:
            return

        score_id = score_id or self._create_observation_id()

        try:
            new_body = ScoreBody(
                id=score_id,
                sessionId=session_id,
                datasetRunId=dataset_run_id,
                traceId=trace_id,
                observationId=observation_id,
                name=name,
                value=value,
                dataType=data_type,  # type: ignore
                comment=comment,
                configId=config_id,
                environment=self._environment,
                metadata=metadata,
            )

            event = {
                "id": self.create_trace_id(),
                "type": "score-create",
                "timestamp": _get_timestamp(),
                "body": new_body,
            }

            if self._resources is not None:
                # Force the score to be in sample if it was for a legacy trace ID, i.e. non-32 hexchar
                force_sample = (
                    not self._is_valid_trace_id(trace_id) if trace_id else True
                )

                self._resources.add_score_task(
                    event,
                    force_sample=force_sample,
                )

        except Exception as e:
            langfuse_logger.exception(
                f"Error creating score: Failed to process score event for trace_id={trace_id}, name={name}. Error: {e}"
            )

    @overload
    def score_current_span(
        self,
        *,
        name: str,
        value: float,
        score_id: Optional[str] = None,
        data_type: Optional[Literal["NUMERIC", "BOOLEAN"]] = None,
        comment: Optional[str] = None,
        config_id: Optional[str] = None,
    ) -> None: ...

    @overload
    def score_current_span(
        self,
        *,
        name: str,
        value: str,
        score_id: Optional[str] = None,
        data_type: Optional[Literal["CATEGORICAL"]] = "CATEGORICAL",
        comment: Optional[str] = None,
        config_id: Optional[str] = None,
    ) -> None: ...

    def score_current_span(
        self,
        *,
        name: str,
        value: Union[float, str],
        score_id: Optional[str] = None,
        data_type: Optional[ScoreDataType] = None,
        comment: Optional[str] = None,
        config_id: Optional[str] = None,
    ) -> None:
        """Create a score for the current active span.

        This method scores the currently active span in the context. It's a convenient
        way to score the current operation without needing to know its trace and span IDs.

        Args:
            name: Name of the score (e.g., "relevance", "accuracy")
            value: Score value (can be numeric for NUMERIC/BOOLEAN types or string for CATEGORICAL)
            score_id: Optional custom ID for the score (auto-generated if not provided)
            data_type: Type of score (NUMERIC, BOOLEAN, or CATEGORICAL)
            comment: Optional comment or explanation for the score
            config_id: Optional ID of a score config defined in Langfuse

        Example:
            ```python
            with langfuse.start_as_current_generation(name="answer-query") as generation:
                # Generate answer
                response = generate_answer(...)
                generation.update(output=response)

                # Score the generation
                langfuse.score_current_span(
                    name="relevance",
                    value=0.85,
                    data_type="NUMERIC",
                    comment="Mostly relevant but contains some tangential information"
                )
            ```
        """
        current_span = self._get_current_otel_span()

        if current_span is not None:
            trace_id = self._get_otel_trace_id(current_span)
            observation_id = self._get_otel_span_id(current_span)

            langfuse_logger.info(
                f"Score: Creating score name='{name}' value={value} for current span ({observation_id}) in trace {trace_id}"
            )

            self.create_score(
                trace_id=trace_id,
                observation_id=observation_id,
                name=name,
                value=cast(str, value),
                score_id=score_id,
                data_type=cast(Literal["CATEGORICAL"], data_type),
                comment=comment,
                config_id=config_id,
            )

    @overload
    def score_current_trace(
        self,
        *,
        name: str,
        value: float,
        score_id: Optional[str] = None,
        data_type: Optional[Literal["NUMERIC", "BOOLEAN"]] = None,
        comment: Optional[str] = None,
        config_id: Optional[str] = None,
    ) -> None: ...

    @overload
    def score_current_trace(
        self,
        *,
        name: str,
        value: str,
        score_id: Optional[str] = None,
        data_type: Optional[Literal["CATEGORICAL"]] = "CATEGORICAL",
        comment: Optional[str] = None,
        config_id: Optional[str] = None,
    ) -> None: ...

    def score_current_trace(
        self,
        *,
        name: str,
        value: Union[float, str],
        score_id: Optional[str] = None,
        data_type: Optional[ScoreDataType] = None,
        comment: Optional[str] = None,
        config_id: Optional[str] = None,
    ) -> None:
        """Create a score for the current trace.

        This method scores the trace of the currently active span. Unlike score_current_span,
        this method associates the score with the entire trace rather than a specific span.
        It's useful for scoring overall performance or quality of the entire operation.
        Args:
            name: Name of the score (e.g., "user_satisfaction", "overall_quality")
            value: Score value (can be numeric for NUMERIC/BOOLEAN types or string for CATEGORICAL)
            score_id: Optional custom ID for the score (auto-generated if not provided)
            data_type: Type of score (NUMERIC, BOOLEAN, or CATEGORICAL)
            comment: Optional comment or explanation for the score
            config_id: Optional ID of a score config defined in Langfuse

        Example:
            ```python
            with langfuse.start_as_current_span(name="process-user-request") as span:
                # Process request
                result = process_complete_request()
                span.update(output=result)

                # Score the overall trace
                langfuse.score_current_trace(
                    name="overall_quality",
                    value=0.95,
                    data_type="NUMERIC",
                    comment="High quality end-to-end response"
                )
            ```
        """
        current_span = self._get_current_otel_span()

        if current_span is not None:
            trace_id = self._get_otel_trace_id(current_span)

            langfuse_logger.info(
                f"Score: Creating score name='{name}' value={value} for entire trace {trace_id}"
            )

            self.create_score(
                trace_id=trace_id,
                name=name,
                value=cast(str, value),
                score_id=score_id,
                data_type=cast(Literal["CATEGORICAL"], data_type),
                comment=comment,
                config_id=config_id,
            )

    def flush(self) -> None:
        """Force flush all pending spans and events to the Langfuse API.

        This method manually flushes any pending spans, scores, and other events to the
        Langfuse API. It's useful in scenarios where you want to ensure all data is sent
        before proceeding, without waiting for the automatic flush interval.

        Example:
            ```python
            # Record some spans and scores
            with langfuse.start_as_current_span(name="operation") as span:
                # Do work...
                pass

            # Ensure all data is sent to Langfuse before proceeding
            langfuse.flush()

            # Continue with other work
            ```
        """
        if self._resources is not None:
            self._resources.flush()

    def shutdown(self) -> None:
        """Shut down the Langfuse client and flush all pending data.

        This method cleanly shuts down the Langfuse client, ensuring all pending data
        is flushed to the API and all background threads are properly terminated.

        It's important to call this method when your application is shutting down to
        prevent data loss and resource leaks. For most applications, using the client
        as a context manager or relying on the automatic shutdown via atexit is sufficient.

        Example:
            ```python
            # Initialize Langfuse
            langfuse = Langfuse(public_key="...", secret_key="...")

            # Use Langfuse throughout your application
            # ...

            # When application is shutting down
            langfuse.shutdown()
            ```
        """
        if self._resources is not None:
            self._resources.shutdown()

    def get_current_trace_id(self) -> Optional[str]:
        """Get the trace ID of the current active span.

        This method retrieves the trace ID from the currently active span in the context.
        It can be used to get the trace ID for referencing in logs, external systems,
        or for creating related operations.

        Returns:
            The current trace ID as a 32-character lowercase hexadecimal string,
            or None if there is no active span.

        Example:
            ```python
            with langfuse.start_as_current_span(name="process-request") as span:
                # Get the current trace ID for reference
                trace_id = langfuse.get_current_trace_id()

                # Use it for external correlation
                log.info(f"Processing request with trace_id: {trace_id}")

                # Or pass to another system
                external_system.process(data, trace_id=trace_id)
            ```
        """
        if not self._tracing_enabled:
            langfuse_logger.debug(
                "Operation skipped: get_current_trace_id - Tracing is disabled or client is in no-op mode."
            )
            return None

        current_otel_span = self._get_current_otel_span()

        return self._get_otel_trace_id(current_otel_span) if current_otel_span else None

    def get_current_observation_id(self) -> Optional[str]:
        """Get the observation ID (span ID) of the current active span.

        This method retrieves the observation ID from the currently active span in the context.
        It can be used to get the observation ID for referencing in logs, external systems,
        or for creating scores or other related operations.

        Returns:
            The current observation ID as a 16-character lowercase hexadecimal string,
            or None if there is no active span.

        Example:
            ```python
            with langfuse.start_as_current_span(name="process-user-query") as span:
                # Get the current observation ID
                observation_id = langfuse.get_current_observation_id()

                # Store it for later reference
                cache.set(f"query_{query_id}_observation", observation_id)

                # Process the query...
            ```
        """
        if not self._tracing_enabled:
            langfuse_logger.debug(
                "Operation skipped: get_current_observation_id - Tracing is disabled or client is in no-op mode."
            )
            return None

        current_otel_span = self._get_current_otel_span()

        return self._get_otel_span_id(current_otel_span) if current_otel_span else None

    def _get_project_id(self) -> Optional[str]:
        """Fetch and return the current project id. Persisted across requests. Returns None if no project id is found for api keys."""
        if not self._project_id:
            proj = self.api.projects.get()
            if not proj.data or not proj.data[0].id:
                return None

            self._project_id = proj.data[0].id

        return self._project_id

    def get_trace_url(self, *, trace_id: Optional[str] = None) -> Optional[str]:
        """Get the URL to view a trace in the Langfuse UI.

        This method generates a URL that links directly to a trace in the Langfuse UI.
        It's useful for providing links in logs, notifications, or debugging tools.

        Args:
            trace_id: Optional trace ID to generate a URL for. If not provided,
                the trace ID of the current active span will be used.

        Returns:
            A URL string pointing to the trace in the Langfuse UI,
            or None if the project ID couldn't be retrieved or no trace ID is available.
1710 1711 Example: 1712 ```python 1713 # Get URL for the current trace 1714 with langfuse.start_as_current_span(name="process-request") as span: 1715 trace_url = langfuse.get_trace_url() 1716 log.info(f"Processing trace: {trace_url}") 1717 1718 # Get URL for a specific trace 1719 specific_trace_url = langfuse.get_trace_url(trace_id="1234567890abcdef1234567890abcdef") 1720 send_notification(f"Review needed for trace: {specific_trace_url}") 1721 ``` 1722 """ 1723 project_id = self._get_project_id() 1724 final_trace_id = trace_id or self.get_current_trace_id() 1725 1726 return ( 1727 f"{self._host}/project/{project_id}/traces/{final_trace_id}" 1728 if project_id and final_trace_id 1729 else None 1730 ) 1731 1732 def get_dataset( 1733 self, name: str, *, fetch_items_page_size: Optional[int] = 50 1734 ) -> "DatasetClient": 1735 """Fetch a dataset by its name. 1736 1737 Args: 1738 name (str): The name of the dataset to fetch. 1739 fetch_items_page_size (Optional[int]): All items of the dataset will be fetched in chunks of this size. Defaults to 50. 1740 1741 Returns: 1742 DatasetClient: The dataset with the given name. 1743 """ 1744 try: 1745 langfuse_logger.debug(f"Getting datasets {name}") 1746 dataset = self.api.datasets.get(dataset_name=name) 1747 1748 dataset_items = [] 1749 page = 1 1750 1751 while True: 1752 new_items = self.api.dataset_items.list( 1753 dataset_name=self._url_encode(name, is_url_param=True), 1754 page=page, 1755 limit=fetch_items_page_size, 1756 ) 1757 dataset_items.extend(new_items.data) 1758 1759 if new_items.meta.total_pages <= page: 1760 break 1761 1762 page += 1 1763 1764 items = [DatasetItemClient(i, langfuse=self) for i in dataset_items] 1765 1766 return DatasetClient(dataset, items=items) 1767 1768 except Error as e: 1769 handle_fern_exception(e) 1770 raise e 1771 1772 def auth_check(self) -> bool: 1773 """Check if the provided credentials (public and secret key) are valid. 1774 1775 Raises: 1776 Exception: If no projects were found for the provided credentials. 1777 1778 Note: 1779 This method is blocking. It is discouraged to use it in production code. 1780 """ 1781 try: 1782 projects = self.api.projects.get() 1783 langfuse_logger.debug( 1784 f"Auth check successful, found {len(projects.data)} projects" 1785 ) 1786 if len(projects.data) == 0: 1787 raise Exception( 1788 "Auth check failed, no project found for the keys provided." 1789 ) 1790 return True 1791 1792 except AttributeError as e: 1793 langfuse_logger.warning( 1794 f"Auth check failed: Client not properly initialized. Error: {e}" 1795 ) 1796 return False 1797 1798 except Error as e: 1799 handle_fern_exception(e) 1800 raise e 1801 1802 def create_dataset( 1803 self, 1804 *, 1805 name: str, 1806 description: Optional[str] = None, 1807 metadata: Optional[Any] = None, 1808 ) -> Dataset: 1809 """Create a dataset with the given name on Langfuse. 1810 1811 Args: 1812 name: Name of the dataset to create. 1813 description: Description of the dataset. Defaults to None. 1814 metadata: Additional metadata. Defaults to None. 1815 1816 Returns: 1817 Dataset: The created dataset as returned by the Langfuse API. 
1818 """ 1819 try: 1820 body = CreateDatasetRequest( 1821 name=name, description=description, metadata=metadata 1822 ) 1823 langfuse_logger.debug(f"Creating datasets {body}") 1824 1825 return self.api.datasets.create(request=body) 1826 1827 except Error as e: 1828 handle_fern_exception(e) 1829 raise e 1830 1831 def create_dataset_item( 1832 self, 1833 *, 1834 dataset_name: str, 1835 input: Optional[Any] = None, 1836 expected_output: Optional[Any] = None, 1837 metadata: Optional[Any] = None, 1838 source_trace_id: Optional[str] = None, 1839 source_observation_id: Optional[str] = None, 1840 status: Optional[DatasetStatus] = None, 1841 id: Optional[str] = None, 1842 ) -> DatasetItem: 1843 """Create a dataset item. 1844 1845 Upserts if an item with id already exists. 1846 1847 Args: 1848 dataset_name: Name of the dataset in which the dataset item should be created. 1849 input: Input data. Defaults to None. Can contain any dict, list or scalar. 1850 expected_output: Expected output data. Defaults to None. Can contain any dict, list or scalar. 1851 metadata: Additional metadata. Defaults to None. Can contain any dict, list or scalar. 1852 source_trace_id: Id of the source trace. Defaults to None. 1853 source_observation_id: Id of the source observation. Defaults to None. 1854 status: Status of the dataset item. Defaults to ACTIVE for newly created items. 1855 id: Id of the dataset item. Defaults to None. Provide your own id if you want to dedupe dataset items. Id needs to be globally unique and cannot be reused across datasets. 1856 1857 Returns: 1858 DatasetItem: The created dataset item as returned by the Langfuse API. 1859 1860 Example: 1861 ```python 1862 from langfuse import Langfuse 1863 1864 langfuse = Langfuse() 1865 1866 # Uploading items to the Langfuse dataset named "capital_cities" 1867 langfuse.create_dataset_item( 1868 dataset_name="capital_cities", 1869 input={"input": {"country": "Italy"}}, 1870 expected_output={"expected_output": "Rome"}, 1871 metadata={"foo": "bar"} 1872 ) 1873 ``` 1874 """ 1875 try: 1876 body = CreateDatasetItemRequest( 1877 datasetName=dataset_name, 1878 input=input, 1879 expectedOutput=expected_output, 1880 metadata=metadata, 1881 sourceTraceId=source_trace_id, 1882 sourceObservationId=source_observation_id, 1883 status=status, 1884 id=id, 1885 ) 1886 langfuse_logger.debug(f"Creating dataset item {body}") 1887 return self.api.dataset_items.create(request=body) 1888 except Error as e: 1889 handle_fern_exception(e) 1890 raise e 1891 1892 def resolve_media_references( 1893 self, 1894 *, 1895 obj: Any, 1896 resolve_with: Literal["base64_data_uri"], 1897 max_depth: int = 10, 1898 content_fetch_timeout_seconds: int = 5, 1899 ) -> Any: 1900 """Replace media reference strings in an object with base64 data URIs. 1901 1902 This method recursively traverses an object (up to max_depth) looking for media reference strings 1903 in the format "@@@langfuseMedia:...@@@". When found, it (synchronously) fetches the actual media content using 1904 the provided Langfuse client and replaces the reference string with a base64 data URI. 1905 1906 If fetching media content fails for a reference string, a warning is logged and the reference 1907 string is left unchanged. 1908 1909 Args: 1910 obj: The object to process. Can be a primitive value, array, or nested object. 1911 If the object has a __dict__ attribute, a dict will be returned instead of the original object type. 1912 resolve_with: The representation of the media content to replace the media reference string with. 
1913 Currently only "base64_data_uri" is supported. 1914 max_depth: int: The maximum depth to traverse the object. Default is 10. 1915 content_fetch_timeout_seconds: int: The timeout in seconds for fetching media content. Default is 5. 1916 1917 Returns: 1918 A deep copy of the input object with all media references replaced with base64 data URIs where possible. 1919 If the input object has a __dict__ attribute, a dict will be returned instead of the original object type. 1920 1921 Example: 1922 obj = { 1923 "image": "@@@langfuseMedia:type=image/jpeg|id=123|source=bytes@@@", 1924 "nested": { 1925 "pdf": "@@@langfuseMedia:type=application/pdf|id=456|source=bytes@@@" 1926 } 1927 } 1928 1929 result = langfuse.resolve_media_references(obj=obj, resolve_with="base64_data_uri") 1930 1931 # Result: 1932 # { 1933 # "image": "data:image/jpeg;base64,/9j/4AAQSkZJRg...", 1934 # "nested": { 1935 # "pdf": "data:application/pdf;base64,JVBERi0xLjcK..." 1936 # } 1937 # } 1938 """ 1939 return LangfuseMedia.resolve_media_references( 1940 langfuse_client=self, 1941 obj=obj, 1942 resolve_with=resolve_with, 1943 max_depth=max_depth, 1944 content_fetch_timeout_seconds=content_fetch_timeout_seconds, 1945 ) 1946 1947 @overload 1948 def get_prompt( 1949 self, 1950 name: str, 1951 *, 1952 version: Optional[int] = None, 1953 label: Optional[str] = None, 1954 type: Literal["chat"], 1955 cache_ttl_seconds: Optional[int] = None, 1956 fallback: Optional[List[ChatMessageDict]] = None, 1957 max_retries: Optional[int] = None, 1958 fetch_timeout_seconds: Optional[int] = None, 1959 ) -> ChatPromptClient: ... 1960 1961 @overload 1962 def get_prompt( 1963 self, 1964 name: str, 1965 *, 1966 version: Optional[int] = None, 1967 label: Optional[str] = None, 1968 type: Literal["text"] = "text", 1969 cache_ttl_seconds: Optional[int] = None, 1970 fallback: Optional[str] = None, 1971 max_retries: Optional[int] = None, 1972 fetch_timeout_seconds: Optional[int] = None, 1973 ) -> TextPromptClient: ... 1974 1975 def get_prompt( 1976 self, 1977 name: str, 1978 *, 1979 version: Optional[int] = None, 1980 label: Optional[str] = None, 1981 type: Literal["chat", "text"] = "text", 1982 cache_ttl_seconds: Optional[int] = None, 1983 fallback: Union[Optional[List[ChatMessageDict]], Optional[str]] = None, 1984 max_retries: Optional[int] = None, 1985 fetch_timeout_seconds: Optional[int] = None, 1986 ) -> PromptClient: 1987 """Get a prompt. 1988 1989 This method attempts to fetch the requested prompt from the local cache. If the prompt is not found 1990 in the cache or if the cached prompt has expired, it will try to fetch the prompt from the server again 1991 and update the cache. If fetching the new prompt fails, and there is an expired prompt in the cache, it will 1992 return the expired prompt as a fallback. 1993 1994 Args: 1995 name (str): The name of the prompt to retrieve. 1996 1997 Keyword Args: 1998 version (Optional[int]): The version of the prompt to retrieve. If neither version nor label is specified, the `production` label is returned. Specify either version or label, not both. 1999 label: Optional[str]: The label of the prompt to retrieve. If neither version nor label is specified, the `production` label is returned. Specify either version or label, not both. 2000 cache_ttl_seconds: Optional[int]: Time-to-live in seconds for caching the prompt. Must be specified as a 2001 keyword argument. If not set, defaults to 60 seconds. Disables caching if set to 0. 2002 type: Literal["chat", "text"]: The type of the prompt to retrieve. Defaults to "text".
2003 fallback: Union[Optional[List[ChatMessageDict]], Optional[str]]: The prompt string to return if fetching the prompt fails. Important on the first call where no cached prompt is available. Follows Langfuse prompt formatting with double curly braces for variables. Defaults to None. 2004 max_retries: Optional[int]: The maximum number of retries in case of API/network errors. Defaults to 2. The maximum value is 4. Retries use a constant backoff. 2005 fetch_timeout_seconds: Optional[int]: The timeout in seconds for fetching the prompt. Defaults to the timeout set on the SDK, which is 5 seconds by default. 2006 2007 Returns: 2008 The prompt object retrieved from the cache, or freshly fetched if not cached or expired, of type 2009 - TextPromptClient, if type argument is 'text'. 2010 - ChatPromptClient, if type argument is 'chat'. 2011 2012 Raises: 2013 Exception: Propagates any exceptions raised during the fetching of a new prompt, unless there is an 2014 expired prompt in the cache, in which case it logs a warning and returns the expired prompt. 2015 """ 2016 if self._resources is None: 2017 raise Error( 2018 "SDK is not correctly initialized. Check the init logs for more details." 2019 ) 2020 if version is not None and label is not None: 2021 raise ValueError("Cannot specify both version and label at the same time.") 2022 2023 if not name: 2024 raise ValueError("Prompt name cannot be empty.") 2025 2026 cache_key = PromptCache.generate_cache_key(name, version=version, label=label) 2027 bounded_max_retries = self._get_bounded_max_retries( 2028 max_retries, default_max_retries=2, max_retries_upper_bound=4 2029 ) 2030 2031 langfuse_logger.debug(f"Getting prompt '{cache_key}'") 2032 cached_prompt = self._resources.prompt_cache.get(cache_key) 2033 2034 if cached_prompt is None or cache_ttl_seconds == 0: 2035 langfuse_logger.debug( 2036 f"Prompt '{cache_key}' not found in cache or caching disabled."
2037 ) 2038 try: 2039 return self._fetch_prompt_and_update_cache( 2040 name, 2041 version=version, 2042 label=label, 2043 ttl_seconds=cache_ttl_seconds, 2044 max_retries=bounded_max_retries, 2045 fetch_timeout_seconds=fetch_timeout_seconds, 2046 ) 2047 except Exception as e: 2048 if fallback: 2049 langfuse_logger.warning( 2050 f"Returning fallback prompt for '{cache_key}' due to fetch error: {e}" 2051 ) 2052 2053 fallback_client_args: Dict[str, Any] = { 2054 "name": name, 2055 "prompt": fallback, 2056 "type": type, 2057 "version": version or 0, 2058 "config": {}, 2059 "labels": [label] if label else [], 2060 "tags": [], 2061 } 2062 2063 if type == "text": 2064 return TextPromptClient( 2065 prompt=Prompt_Text(**fallback_client_args), 2066 is_fallback=True, 2067 ) 2068 2069 if type == "chat": 2070 return ChatPromptClient( 2071 prompt=Prompt_Chat(**fallback_client_args), 2072 is_fallback=True, 2073 ) 2074 2075 raise e 2076 2077 if cached_prompt.is_expired(): 2078 langfuse_logger.debug(f"Stale prompt '{cache_key}' found in cache.") 2079 try: 2080 # refresh prompt in background thread, refresh_prompt deduplicates tasks 2081 langfuse_logger.debug(f"Refreshing prompt '{cache_key}' in background.") 2082 2083 def refresh_task() -> None: 2084 self._fetch_prompt_and_update_cache( 2085 name, 2086 version=version, 2087 label=label, 2088 ttl_seconds=cache_ttl_seconds, 2089 max_retries=bounded_max_retries, 2090 fetch_timeout_seconds=fetch_timeout_seconds, 2091 ) 2092 2093 self._resources.prompt_cache.add_refresh_prompt_task( 2094 cache_key, 2095 refresh_task, 2096 ) 2097 langfuse_logger.debug( 2098 f"Returning stale prompt '{cache_key}' from cache." 2099 ) 2100 # return stale prompt 2101 return cached_prompt.value 2102 2103 except Exception as e: 2104 langfuse_logger.warning( 2105 f"Error when refreshing cached prompt '{cache_key}', returning cached version. 
Error: {e}" 2106 ) 2107 # creation of refresh prompt task failed, return stale prompt 2108 return cached_prompt.value 2109 2110 return cached_prompt.value 2111 2112 def _fetch_prompt_and_update_cache( 2113 self, 2114 name: str, 2115 *, 2116 version: Optional[int] = None, 2117 label: Optional[str] = None, 2118 ttl_seconds: Optional[int] = None, 2119 max_retries: int, 2120 fetch_timeout_seconds: Optional[int], 2121 ) -> PromptClient: 2122 cache_key = PromptCache.generate_cache_key(name, version=version, label=label) 2123 langfuse_logger.debug(f"Fetching prompt '{cache_key}' from server...") 2124 2125 try: 2126 2127 @backoff.on_exception( 2128 backoff.constant, Exception, max_tries=max_retries + 1, logger=None 2129 ) 2130 def fetch_prompts() -> Any: 2131 return self.api.prompts.get( 2132 self._url_encode(name), 2133 version=version, 2134 label=label, 2135 request_options={ 2136 "timeout_in_seconds": fetch_timeout_seconds, 2137 } 2138 if fetch_timeout_seconds is not None 2139 else None, 2140 ) 2141 2142 prompt_response = fetch_prompts() 2143 2144 prompt: PromptClient 2145 if prompt_response.type == "chat": 2146 prompt = ChatPromptClient(prompt_response) 2147 else: 2148 prompt = TextPromptClient(prompt_response) 2149 2150 if self._resources is not None: 2151 self._resources.prompt_cache.set(cache_key, prompt, ttl_seconds) 2152 2153 return prompt 2154 2155 except Exception as e: 2156 langfuse_logger.error( 2157 f"Error while fetching prompt '{cache_key}': {str(e)}" 2158 ) 2159 raise e 2160 2161 def _get_bounded_max_retries( 2162 self, 2163 max_retries: Optional[int], 2164 *, 2165 default_max_retries: int = 2, 2166 max_retries_upper_bound: int = 4, 2167 ) -> int: 2168 if max_retries is None: 2169 return default_max_retries 2170 2171 bounded_max_retries = min( 2172 max(max_retries, 0), 2173 max_retries_upper_bound, 2174 ) 2175 2176 return bounded_max_retries 2177 2178 @overload 2179 def create_prompt( 2180 self, 2181 *, 2182 name: str, 2183 prompt: List[Union[ChatMessageDict, ChatMessageWithPlaceholdersDict]], 2184 labels: List[str] = [], 2185 tags: Optional[List[str]] = None, 2186 type: Optional[Literal["chat"]], 2187 config: Optional[Any] = None, 2188 commit_message: Optional[str] = None, 2189 ) -> ChatPromptClient: ... 2190 2191 @overload 2192 def create_prompt( 2193 self, 2194 *, 2195 name: str, 2196 prompt: str, 2197 labels: List[str] = [], 2198 tags: Optional[List[str]] = None, 2199 type: Optional[Literal["text"]] = "text", 2200 config: Optional[Any] = None, 2201 commit_message: Optional[str] = None, 2202 ) -> TextPromptClient: ... 2203 2204 def create_prompt( 2205 self, 2206 *, 2207 name: str, 2208 prompt: Union[ 2209 str, List[Union[ChatMessageDict, ChatMessageWithPlaceholdersDict]] 2210 ], 2211 labels: List[str] = [], 2212 tags: Optional[List[str]] = None, 2213 type: Optional[Literal["chat", "text"]] = "text", 2214 config: Optional[Any] = None, 2215 commit_message: Optional[str] = None, 2216 ) -> PromptClient: 2217 """Create a new prompt in Langfuse. 2218 2219 Keyword Args: 2220 name : The name of the prompt to be created. 2221 prompt : The content of the prompt to be created. 2222 is_active [DEPRECATED] : A flag indicating whether the prompt is active or not. This is deprecated and will be removed in a future release. Please use the 'production' label instead. 2223 labels: The labels of the prompt. Defaults to None. To create a default-served prompt, add the 'production' label. 2224 tags: The tags of the prompt. Defaults to None. Will be applied to all versions of the prompt. 
2225 config: Additional structured data to be saved with the prompt. Defaults to None. 2226 type: The type of the prompt to be created. "chat" vs. "text". Defaults to "text". 2227 commit_message: Optional string describing the change. 2228 2229 Returns: 2230 TextPromptClient: The prompt if type argument is 'text'. 2231 ChatPromptClient: The prompt if type argument is 'chat'. 2232 """ 2233 try: 2234 langfuse_logger.debug(f"Creating prompt {name=}, {labels=}") 2235 2236 if type == "chat": 2237 if not isinstance(prompt, list): 2238 raise ValueError( 2239 "For 'chat' type, 'prompt' must be a list of chat messages with role and content attributes." 2240 ) 2241 request: Union[CreatePromptRequest_Chat, CreatePromptRequest_Text] = ( 2242 CreatePromptRequest_Chat( 2243 name=name, 2244 prompt=cast(Any, prompt), 2245 labels=labels, 2246 tags=tags, 2247 config=config or {}, 2248 commitMessage=commit_message, 2249 type="chat", 2250 ) 2251 ) 2252 server_prompt = self.api.prompts.create(request=request) 2253 2254 if self._resources is not None: 2255 self._resources.prompt_cache.invalidate(name) 2256 2257 return ChatPromptClient(prompt=cast(Prompt_Chat, server_prompt)) 2258 2259 if not isinstance(prompt, str): 2260 raise ValueError("For 'text' type, 'prompt' must be a string.") 2261 2262 request = CreatePromptRequest_Text( 2263 name=name, 2264 prompt=prompt, 2265 labels=labels, 2266 tags=tags, 2267 config=config or {}, 2268 commitMessage=commit_message, 2269 type="text", 2270 ) 2271 2272 server_prompt = self.api.prompts.create(request=request) 2273 2274 if self._resources is not None: 2275 self._resources.prompt_cache.invalidate(name) 2276 2277 return TextPromptClient(prompt=cast(Prompt_Text, server_prompt)) 2278 2279 except Error as e: 2280 handle_fern_exception(e) 2281 raise e 2282 2283 def update_prompt( 2284 self, 2285 *, 2286 name: str, 2287 version: int, 2288 new_labels: List[str] = [], 2289 ) -> Any: 2290 """Update an existing prompt version in Langfuse. The Langfuse SDK prompt cache is invalidated for all prompts with the specified name. 2291 2292 Args: 2293 name (str): The name of the prompt to update. 2294 version (int): The version number of the prompt to update. 2295 new_labels (List[str], optional): New labels to assign to the prompt version. Labels are unique across versions. The "latest" label is reserved and managed by Langfuse. Defaults to []. 2296 2297 Returns: 2298 Prompt: The updated prompt from the Langfuse API. 2299 2300 """ 2301 updated_prompt = self.api.prompt_version.update( 2302 name=name, 2303 version=version, 2304 new_labels=new_labels, 2305 ) 2306 2307 if self._resources is not None: 2308 self._resources.prompt_cache.invalidate(name) 2309 2310 return updated_prompt 2311 2312 def _url_encode(self, url: str, *, is_url_param: Optional[bool] = False) -> str: 2313 # httpx ≥ 0.28 does its own WHATWG-compliant quoting (e.g. encodes bare 2314 # “%”, “?”, “#”, “|”, … in query/path parts). Re-quoting here would 2315 # double-encode, so we skip when the value is about to be sent straight 2316 # to httpx (`is_url_param=True`) and the installed version is ≥ 0.28. 2317 if is_url_param and Version(httpx.__version__) >= Version("0.28.0"): 2318 return url 2319 2320 # urllib.parse.quote does not escape slashes "/" by default; 2321 # we pass safe="" to force escaping of slashes as well. 2322 # This is necessary for prompts in prompt folders. 2323 return urllib.parse.quote(url, safe="")
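To make the caching, fallback, and retry behavior of get_prompt concrete, here is a minimal sketch. The prompt name and template variable are hypothetical, and `compile()` and `is_fallback` are assumed to be the prompt client's variable-substitution helper and fallback indicator (the latter matching the `is_fallback` flag passed to the fallback clients above):

```python
from langfuse import Langfuse

langfuse = Langfuse()

# Fetch the `production` label of a text prompt; serve a fallback if the
# very first fetch fails (e.g., a network error before the cache is warm).
prompt = langfuse.get_prompt(
    "movie-critic",          # hypothetical prompt name
    type="text",
    cache_ttl_seconds=60,    # default TTL; 0 disables caching
    fallback="Summarize the plot of {{movie}} in one sentence.",
)

# Assumed helper: substitutes the double-curly-brace variables
text = prompt.compile(movie="Dune")

if prompt.is_fallback:  # assumed flag: True when the fallback was served
    print("Prompt fetch failed; using fallback")
```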
Main client for Langfuse tracing and platform features.
This class provides an interface for creating and managing traces, spans, and generations in Langfuse as well as interacting with the Langfuse API.
The client features a thread-safe singleton pattern for each unique public API key, ensuring consistent trace context propagation across your application. It implements efficient batching of spans with configurable flush settings and includes background thread management for media uploads and score ingestion.
Configuration is flexible through either direct parameters or environment variables, with graceful fallbacks and runtime configuration updates.
Attributes:
- api: Synchronous API client for Langfuse backend communication
- async_api: Asynchronous API client for Langfuse backend communication
- langfuse_tracer: Internal LangfuseTracer instance managing OpenTelemetry components
Arguments:
- public_key (Optional[str]): Your Langfuse public API key. Can also be set via LANGFUSE_PUBLIC_KEY environment variable.
- secret_key (Optional[str]): Your Langfuse secret API key. Can also be set via LANGFUSE_SECRET_KEY environment variable.
- host (Optional[str]): The Langfuse API host URL. Defaults to "https://cloud.langfuse.com". Can also be set via LANGFUSE_HOST environment variable.
- timeout (Optional[int]): Timeout in seconds for API requests. Defaults to 5 seconds.
- httpx_client (Optional[httpx.Client]): Custom httpx client for making non-tracing HTTP requests. If not provided, a default client will be created.
- debug (bool): Enable debug logging. Defaults to False. Can also be set via LANGFUSE_DEBUG environment variable.
- tracing_enabled (Optional[bool]): Enable or disable tracing. Defaults to True. Can also be set via LANGFUSE_TRACING_ENABLED environment variable.
- flush_at (Optional[int]): Number of spans to batch before sending to the API. Defaults to 512. Can also be set via LANGFUSE_FLUSH_AT environment variable.
- flush_interval (Optional[float]): Time in seconds between batch flushes. Defaults to 5 seconds. Can also be set via LANGFUSE_FLUSH_INTERVAL environment variable.
- environment (Optional[str]): Environment name for tracing. Default is 'default'. Can also be set via LANGFUSE_TRACING_ENVIRONMENT environment variable. Can be any lowercase alphanumeric string with hyphens and underscores that does not start with 'langfuse'.
- release (Optional[str]): Release version/hash of your application. Used for grouping analytics by release.
- media_upload_thread_count (Optional[int]): Number of background threads for handling media uploads. Defaults to 1. Can also be set via LANGFUSE_MEDIA_UPLOAD_THREAD_COUNT environment variable.
- sample_rate (Optional[float]): Sampling rate for traces (0.0 to 1.0). Defaults to 1.0 (100% of traces are sampled). Can also be set via LANGFUSE_SAMPLE_RATE environment variable.
- mask (Optional[MaskFunction]): Function to mask sensitive data in traces before sending to the API.
- blocked_instrumentation_scopes (Optional[List[str]]): List of instrumentation scope names to block from being exported to Langfuse. Spans from these scopes will be filtered out before being sent to the API. Useful for filtering out spans from specific libraries or frameworks. For exported spans, you can see the instrumentation scope name in the span metadata in Langfuse (metadata.scope.name).
- additional_headers (Optional[Dict[str, str]]): Additional headers to include in all API requests and OTLPSpanExporter requests. These headers will be merged with default headers. Note: If httpx_client is provided, additional_headers must be set directly on your custom httpx_client as well.
- tracer_provider (Optional[TracerProvider]): OpenTelemetry TracerProvider to use for Langfuse. Setting this can be useful to keep Langfuse tracing disconnected from other OpenTelemetry-span-emitting libraries. Note: To track active spans, the context is still shared between TracerProviders. This may lead to broken trace trees.
Example:
from langfuse import Langfuse

# Initialize the client (reads from env vars if not provided)
langfuse = Langfuse(
    public_key="your-public-key",
    secret_key="your-secret-key",
    host="https://cloud.langfuse.com",  # Optional, default shown
)

# Create a trace span
with langfuse.start_as_current_span(name="process-query") as span:
    # Your application code here

    # Create a nested generation span for an LLM call
    with span.start_as_current_generation(
        name="generate-response",
        model="gpt-4",
        input={"query": "Tell me about AI"},
        model_parameters={"temperature": 0.7, "max_tokens": 500}
    ) as generation:
        # Generate response here
        response = "AI is a field of computer science..."

        generation.update(
            output=response,
            usage_details={"prompt_tokens": 10, "completion_tokens": 50},
            cost_details={"total_cost": 0.0023}
        )

        # Score the generation (supports NUMERIC, BOOLEAN, CATEGORICAL)
        generation.score(name="relevance", value=0.95, data_type="NUMERIC")
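Because spans are batched and sent by background threads, short-lived processes should flush or shut down explicitly before exiting. A minimal lifecycle sketch using only methods documented on this page (the span name is illustrative):

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST

# Blocking credential check; discouraged in production hot paths
if not langfuse.auth_check():
    raise RuntimeError("Langfuse credentials could not be verified")

with langfuse.start_as_current_span(name="nightly-job") as span:
    span.update(output="done")

langfuse.flush()     # send pending spans now instead of waiting for the flush interval
langfuse.shutdown()  # flush remaining data and stop background threads
```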
153 def __init__( 154 self, 155 *, 156 public_key: Optional[str] = None, 157 secret_key: Optional[str] = None, 158 host: Optional[str] = None, 159 timeout: Optional[int] = None, 160 httpx_client: Optional[httpx.Client] = None, 161 debug: bool = False, 162 tracing_enabled: Optional[bool] = True, 163 flush_at: Optional[int] = None, 164 flush_interval: Optional[float] = None, 165 environment: Optional[str] = None, 166 release: Optional[str] = None, 167 media_upload_thread_count: Optional[int] = None, 168 sample_rate: Optional[float] = None, 169 mask: Optional[MaskFunction] = None, 170 blocked_instrumentation_scopes: Optional[List[str]] = None, 171 additional_headers: Optional[Dict[str, str]] = None, 172 tracer_provider: Optional[TracerProvider] = None, 173 ): 174 self._host = host or cast( 175 str, os.environ.get(LANGFUSE_HOST, "https://cloud.langfuse.com") 176 ) 177 self._environment = environment or cast( 178 str, os.environ.get(LANGFUSE_TRACING_ENVIRONMENT) 179 ) 180 self._project_id: Optional[str] = None 181 sample_rate = sample_rate or float(os.environ.get(LANGFUSE_SAMPLE_RATE, 1.0)) 182 if not 0.0 <= sample_rate <= 1.0: 183 raise ValueError( 184 f"Sample rate must be between 0.0 and 1.0, got {sample_rate}" 185 ) 186 187 timeout = timeout or int(os.environ.get(LANGFUSE_TIMEOUT, 5)) 188 189 self._tracing_enabled = ( 190 tracing_enabled 191 and os.environ.get(LANGFUSE_TRACING_ENABLED, "True") != "False" 192 ) 193 if not self._tracing_enabled: 194 langfuse_logger.info( 195 "Configuration: Langfuse tracing is explicitly disabled. No data will be sent to the Langfuse API." 196 ) 197 198 debug = debug if debug else (os.getenv(LANGFUSE_DEBUG, "False") == "True") 199 if debug: 200 logging.basicConfig( 201 format="%(asctime)s - %(name)s - %(levelname)s - %(message)s" 202 ) 203 langfuse_logger.setLevel(logging.DEBUG) 204 205 public_key = public_key or os.environ.get(LANGFUSE_PUBLIC_KEY) 206 if public_key is None: 207 langfuse_logger.warning( 208 "Authentication error: Langfuse client initialized without public_key. Client will be disabled. " 209 "Provide a public_key parameter or set LANGFUSE_PUBLIC_KEY environment variable. " 210 ) 211 self._otel_tracer = otel_trace_api.NoOpTracer() 212 return 213 214 secret_key = secret_key or os.environ.get(LANGFUSE_SECRET_KEY) 215 if secret_key is None: 216 langfuse_logger.warning( 217 "Authentication error: Langfuse client initialized without secret_key. Client will be disabled. " 218 "Provide a secret_key parameter or set LANGFUSE_SECRET_KEY environment variable. " 219 ) 220 self._otel_tracer = otel_trace_api.NoOpTracer() 221 return 222 223 # Initialize api and tracer if requirements are met 224 self._resources = LangfuseResourceManager( 225 public_key=public_key, 226 secret_key=secret_key, 227 host=self._host, 228 timeout=timeout, 229 environment=environment, 230 release=release, 231 flush_at=flush_at, 232 flush_interval=flush_interval, 233 httpx_client=httpx_client, 234 media_upload_thread_count=media_upload_thread_count, 235 sample_rate=sample_rate, 236 mask=mask, 237 tracing_enabled=self._tracing_enabled, 238 blocked_instrumentation_scopes=blocked_instrumentation_scopes, 239 additional_headers=additional_headers, 240 tracer_provider=tracer_provider, 241 ) 242 self._mask = self._resources.mask 243 244 self._otel_tracer = ( 245 self._resources.tracer 246 if self._tracing_enabled and self._resources.tracer is not None 247 else otel_trace_api.NoOpTracer() 248 ) 249 self.api = self._resources.api 250 self.async_api = self._resources.async_api
252 def start_span( 253 self, 254 *, 255 trace_context: Optional[TraceContext] = None, 256 name: str, 257 input: Optional[Any] = None, 258 output: Optional[Any] = None, 259 metadata: Optional[Any] = None, 260 version: Optional[str] = None, 261 level: Optional[SpanLevel] = None, 262 status_message: Optional[str] = None, 263 ) -> LangfuseSpan: 264 """Create a new span for tracing a unit of work. 265 266 This method creates a new span but does not set it as the current span in the 267 context. To create and use a span within a context, use start_as_current_span(). 268 269 The created span will be the child of the current span in the context. 270 271 Args: 272 trace_context: Optional context for connecting to an existing trace 273 name: Name of the span (e.g., function or operation name) 274 input: Input data for the operation (can be any JSON-serializable object) 275 output: Output data from the operation (can be any JSON-serializable object) 276 metadata: Additional metadata to associate with the span 277 version: Version identifier for the code or component 278 level: Importance level of the span (info, warning, error) 279 status_message: Optional status message for the span 280 281 Returns: 282 A LangfuseSpan object that must be ended with .end() when the operation completes 283 284 Example: 285 ```python 286 span = langfuse.start_span(name="process-data") 287 try: 288 # Do work 289 span.update(output="result") 290 finally: 291 span.end() 292 ``` 293 """ 294 if trace_context: 295 trace_id = trace_context.get("trace_id", None) 296 parent_span_id = trace_context.get("parent_span_id", None) 297 298 if trace_id: 299 remote_parent_span = self._create_remote_parent_span( 300 trace_id=trace_id, parent_span_id=parent_span_id 301 ) 302 303 with otel_trace_api.use_span( 304 cast(otel_trace_api.Span, remote_parent_span) 305 ): 306 otel_span = self._otel_tracer.start_span(name=name) 307 otel_span.set_attribute(LangfuseOtelSpanAttributes.AS_ROOT, True) 308 309 return LangfuseSpan( 310 otel_span=otel_span, 311 langfuse_client=self, 312 environment=self._environment, 313 input=input, 314 output=output, 315 metadata=metadata, 316 version=version, 317 level=level, 318 status_message=status_message, 319 ) 320 321 otel_span = self._otel_tracer.start_span(name=name) 322 323 return LangfuseSpan( 324 otel_span=otel_span, 325 langfuse_client=self, 326 environment=self._environment, 327 input=input, 328 output=output, 329 metadata=metadata, 330 version=version, 331 level=level, 332 status_message=status_message, 333 )
Create a new span for tracing a unit of work.
This method creates a new span but does not set it as the current span in the context. To create and use a span within a context, use start_as_current_span().
The created span will be the child of the current span in the context.
Arguments:
- trace_context: Optional context for connecting to an existing trace
- name: Name of the span (e.g., function or operation name)
- input: Input data for the operation (can be any JSON-serializable object)
- output: Output data from the operation (can be any JSON-serializable object)
- metadata: Additional metadata to associate with the span
- version: Version identifier for the code or component
- level: Importance level of the span (info, warning, error)
- status_message: Optional status message for the span
Returns:
A LangfuseSpan object that must be ended with .end() when the operation completes
Example:
span = langfuse.start_span(name="process-data")
try:
    # Do work
    span.update(output="result")
finally:
    span.end()
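The trace_context parameter lets this span attach to a trace started elsewhere, for example in an upstream service. A sketch assuming the IDs were received over the wire; the hex values below are placeholders, with lengths matching the trace/observation ID formats documented above:

```python
span = langfuse.start_span(
    trace_context={
        "trace_id": "1234567890abcdef1234567890abcdef",  # 32 lowercase hex chars
        "parent_span_id": "1234567890abcdef",            # 16 lowercase hex chars
    },
    name="downstream-work",
)
try:
    # Work recorded as part of the remote trace
    span.update(output="result")
finally:
    span.end()
```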
335 def start_as_current_span( 336 self, 337 *, 338 trace_context: Optional[TraceContext] = None, 339 name: str, 340 input: Optional[Any] = None, 341 output: Optional[Any] = None, 342 metadata: Optional[Any] = None, 343 version: Optional[str] = None, 344 level: Optional[SpanLevel] = None, 345 status_message: Optional[str] = None, 346 end_on_exit: Optional[bool] = None, 347 ) -> _AgnosticContextManager[LangfuseSpan]: 348 """Create a new span and set it as the current span in a context manager. 349 350 This method creates a new span and sets it as the current span within a context 351 manager. Use this method with a 'with' statement to automatically handle span 352 lifecycle within a code block. 353 354 The created span will be the child of the current span in the context. 355 356 Args: 357 trace_context: Optional context for connecting to an existing trace 358 name: Name of the span (e.g., function or operation name) 359 input: Input data for the operation (can be any JSON-serializable object) 360 output: Output data from the operation (can be any JSON-serializable object) 361 metadata: Additional metadata to associate with the span 362 version: Version identifier for the code or component 363 level: Importance level of the span (info, warning, error) 364 status_message: Optional status message for the span 365 end_on_exit (default: True): Whether to end the span automatically when leaving the context manager. If False, the span must be manually ended to avoid memory leaks. 366 367 Returns: 368 A context manager that yields a LangfuseSpan 369 370 Example: 371 ```python 372 with langfuse.start_as_current_span(name="process-query") as span: 373 # Do work 374 result = process_data() 375 span.update(output=result) 376 377 # Create a child span automatically 378 with span.start_as_current_span(name="sub-operation") as child_span: 379 # Do sub-operation work 380 child_span.update(output="sub-result") 381 ``` 382 """ 383 if trace_context: 384 trace_id = trace_context.get("trace_id", None) 385 parent_span_id = trace_context.get("parent_span_id", None) 386 387 if trace_id: 388 remote_parent_span = self._create_remote_parent_span( 389 trace_id=trace_id, parent_span_id=parent_span_id 390 ) 391 392 return cast( 393 _AgnosticContextManager[LangfuseSpan], 394 self._create_span_with_parent_context( 395 as_type="span", 396 name=name, 397 remote_parent_span=remote_parent_span, 398 parent=None, 399 end_on_exit=end_on_exit, 400 input=input, 401 output=output, 402 metadata=metadata, 403 version=version, 404 level=level, 405 status_message=status_message, 406 ), 407 ) 408 409 return cast( 410 _AgnosticContextManager[LangfuseSpan], 411 self._start_as_current_otel_span_with_processed_media( 412 as_type="span", 413 name=name, 414 end_on_exit=end_on_exit, 415 input=input, 416 output=output, 417 metadata=metadata, 418 version=version, 419 level=level, 420 status_message=status_message, 421 ), 422 )
Create a new span and set it as the current span in a context manager.
This method creates a new span and sets it as the current span within a context manager. Use this method with a 'with' statement to automatically handle span lifecycle within a code block.
The created span will be the child of the current span in the context.
Arguments:
- trace_context: Optional context for connecting to an existing trace
- name: Name of the span (e.g., function or operation name)
- input: Input data for the operation (can be any JSON-serializable object)
- output: Output data from the operation (can be any JSON-serializable object)
- metadata: Additional metadata to associate with the span
- version: Version identifier for the code or component
- level: Importance level of the span (info, warning, error)
- status_message: Optional status message for the span
- end_on_exit (default: True): Whether to end the span automatically when leaving the context manager. If False, the span must be manually ended to avoid memory leaks.
Returns:
A context manager that yields a LangfuseSpan
Example:
with langfuse.start_as_current_span(name="process-query") as span:
    # Do work
    result = process_data()
    span.update(output=result)

    # Create a child span automatically
    with span.start_as_current_span(name="sub-operation") as child_span:
        # Do sub-operation work
        child_span.update(output="sub-result")
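With end_on_exit=False, the context manager still activates the span for its block but leaves it open, so you must end it yourself later. A sketch, where start_background_work is a hypothetical helper whose work outlives the block:

```python
with langfuse.start_as_current_span(name="long-running", end_on_exit=False) as span:
    start_background_work()  # hypothetical: work continues after the block exits

# ... later, once the work actually completes:
span.update(output="finished")
span.end()  # required, otherwise the span leaks
```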
424 def start_generation( 425 self, 426 *, 427 trace_context: Optional[TraceContext] = None, 428 name: str, 429 input: Optional[Any] = None, 430 output: Optional[Any] = None, 431 metadata: Optional[Any] = None, 432 version: Optional[str] = None, 433 level: Optional[SpanLevel] = None, 434 status_message: Optional[str] = None, 435 completion_start_time: Optional[datetime] = None, 436 model: Optional[str] = None, 437 model_parameters: Optional[Dict[str, MapValue]] = None, 438 usage_details: Optional[Dict[str, int]] = None, 439 cost_details: Optional[Dict[str, float]] = None, 440 prompt: Optional[PromptClient] = None, 441 ) -> LangfuseGeneration: 442 """Create a new generation span for model generations. 443 444 This method creates a specialized span for tracking model generations. 445 It includes additional fields specific to model generations such as model name, 446 token usage, and cost details. 447 448 The created generation span will be the child of the current span in the context. 449 450 Args: 451 trace_context: Optional context for connecting to an existing trace 452 name: Name of the generation operation 453 input: Input data for the model (e.g., prompts) 454 output: Output from the model (e.g., completions) 455 metadata: Additional metadata to associate with the generation 456 version: Version identifier for the model or component 457 level: Importance level of the generation (info, warning, error) 458 status_message: Optional status message for the generation 459 completion_start_time: When the model started generating the response 460 model: Name/identifier of the AI model used (e.g., "gpt-4") 461 model_parameters: Parameters used for the model (e.g., temperature, max_tokens) 462 usage_details: Token usage information (e.g., prompt_tokens, completion_tokens) 463 cost_details: Cost information for the model call 464 prompt: Associated prompt template from Langfuse prompt management 465 466 Returns: 467 A LangfuseGeneration object that must be ended with .end() when complete 468 469 Example: 470 ```python 471 generation = langfuse.start_generation( 472 name="answer-generation", 473 model="gpt-4", 474 input={"prompt": "Explain quantum computing"}, 475 model_parameters={"temperature": 0.7} 476 ) 477 try: 478 # Call model API 479 response = llm.generate(...) 
480 481 generation.update( 482 output=response.text, 483 usage_details={ 484 "prompt_tokens": response.usage.prompt_tokens, 485 "completion_tokens": response.usage.completion_tokens 486 } 487 ) 488 finally: 489 generation.end() 490 ``` 491 """ 492 if trace_context: 493 trace_id = trace_context.get("trace_id", None) 494 parent_span_id = trace_context.get("parent_span_id", None) 495 496 if trace_id: 497 remote_parent_span = self._create_remote_parent_span( 498 trace_id=trace_id, parent_span_id=parent_span_id 499 ) 500 501 with otel_trace_api.use_span( 502 cast(otel_trace_api.Span, remote_parent_span) 503 ): 504 otel_span = self._otel_tracer.start_span(name=name) 505 otel_span.set_attribute(LangfuseOtelSpanAttributes.AS_ROOT, True) 506 507 return LangfuseGeneration( 508 otel_span=otel_span, 509 langfuse_client=self, 510 input=input, 511 output=output, 512 metadata=metadata, 513 version=version, 514 level=level, 515 status_message=status_message, 516 completion_start_time=completion_start_time, 517 model=model, 518 model_parameters=model_parameters, 519 usage_details=usage_details, 520 cost_details=cost_details, 521 prompt=prompt, 522 ) 523 524 otel_span = self._otel_tracer.start_span(name=name) 525 526 return LangfuseGeneration( 527 otel_span=otel_span, 528 langfuse_client=self, 529 input=input, 530 output=output, 531 metadata=metadata, 532 version=version, 533 level=level, 534 status_message=status_message, 535 completion_start_time=completion_start_time, 536 model=model, 537 model_parameters=model_parameters, 538 usage_details=usage_details, 539 cost_details=cost_details, 540 prompt=prompt, 541 )
Create a new generation span for model generations.
This method creates a specialized span for tracking model generations. It includes additional fields specific to model generations such as model name, token usage, and cost details.
The created generation span will be the child of the current span in the context.
Arguments:
- trace_context: Optional context for connecting to an existing trace
- name: Name of the generation operation
- input: Input data for the model (e.g., prompts)
- output: Output from the model (e.g., completions)
- metadata: Additional metadata to associate with the generation
- version: Version identifier for the model or component
- level: Importance level of the generation (info, warning, error)
- status_message: Optional status message for the generation
- completion_start_time: When the model started generating the response
- model: Name/identifier of the AI model used (e.g., "gpt-4")
- model_parameters: Parameters used for the model (e.g., temperature, max_tokens)
- usage_details: Token usage information (e.g., prompt_tokens, completion_tokens)
- cost_details: Cost information for the model call
- prompt: Associated prompt template from Langfuse prompt management
Returns:
A LangfuseGeneration object that must be ended with .end() when complete
Example:
generation = langfuse.start_generation(
    name="answer-generation",
    model="gpt-4",
    input={"prompt": "Explain quantum computing"},
    model_parameters={"temperature": 0.7}
)
try:
    # Call model API
    response = llm.generate(...)

    generation.update(
        output=response.text,
        usage_details={
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens
        }
    )
finally:
    generation.end()
543 def start_as_current_generation( 544 self, 545 *, 546 trace_context: Optional[TraceContext] = None, 547 name: str, 548 input: Optional[Any] = None, 549 output: Optional[Any] = None, 550 metadata: Optional[Any] = None, 551 version: Optional[str] = None, 552 level: Optional[SpanLevel] = None, 553 status_message: Optional[str] = None, 554 completion_start_time: Optional[datetime] = None, 555 model: Optional[str] = None, 556 model_parameters: Optional[Dict[str, MapValue]] = None, 557 usage_details: Optional[Dict[str, int]] = None, 558 cost_details: Optional[Dict[str, float]] = None, 559 prompt: Optional[PromptClient] = None, 560 end_on_exit: Optional[bool] = None, 561 ) -> _AgnosticContextManager[LangfuseGeneration]: 562 """Create a new generation span and set it as the current span in a context manager. 563 564 This method creates a specialized span for model generations and sets it as the 565 current span within a context manager. Use this method with a 'with' statement to 566 automatically handle the generation span lifecycle within a code block. 567 568 The created generation span will be the child of the current span in the context. 569 570 Args: 571 trace_context: Optional context for connecting to an existing trace 572 name: Name of the generation operation 573 input: Input data for the model (e.g., prompts) 574 output: Output from the model (e.g., completions) 575 metadata: Additional metadata to associate with the generation 576 version: Version identifier for the model or component 577 level: Importance level of the generation (info, warning, error) 578 status_message: Optional status message for the generation 579 completion_start_time: When the model started generating the response 580 model: Name/identifier of the AI model used (e.g., "gpt-4") 581 model_parameters: Parameters used for the model (e.g., temperature, max_tokens) 582 usage_details: Token usage information (e.g., prompt_tokens, completion_tokens) 583 cost_details: Cost information for the model call 584 prompt: Associated prompt template from Langfuse prompt management 585 end_on_exit (default: True): Whether to end the span automatically when leaving the context manager. If False, the span must be manually ended to avoid memory leaks. 586 587 Returns: 588 A context manager that yields a LangfuseGeneration 589 590 Example: 591 ```python 592 with langfuse.start_as_current_generation( 593 name="answer-generation", 594 model="gpt-4", 595 input={"prompt": "Explain quantum computing"} 596 ) as generation: 597 # Call model API 598 response = llm.generate(...) 
599 600 # Update with results 601 generation.update( 602 output=response.text, 603 usage_details={ 604 "prompt_tokens": response.usage.prompt_tokens, 605 "completion_tokens": response.usage.completion_tokens 606 } 607 ) 608 ``` 609 """ 610 if trace_context: 611 trace_id = trace_context.get("trace_id", None) 612 parent_span_id = trace_context.get("parent_span_id", None) 613 614 if trace_id: 615 remote_parent_span = self._create_remote_parent_span( 616 trace_id=trace_id, parent_span_id=parent_span_id 617 ) 618 619 return cast( 620 _AgnosticContextManager[LangfuseGeneration], 621 self._create_span_with_parent_context( 622 as_type="generation", 623 name=name, 624 remote_parent_span=remote_parent_span, 625 parent=None, 626 end_on_exit=end_on_exit, 627 input=input, 628 output=output, 629 metadata=metadata, 630 version=version, 631 level=level, 632 status_message=status_message, 633 completion_start_time=completion_start_time, 634 model=model, 635 model_parameters=model_parameters, 636 usage_details=usage_details, 637 cost_details=cost_details, 638 prompt=prompt, 639 ), 640 ) 641 642 return cast( 643 _AgnosticContextManager[LangfuseGeneration], 644 self._start_as_current_otel_span_with_processed_media( 645 as_type="generation", 646 name=name, 647 end_on_exit=end_on_exit, 648 input=input, 649 output=output, 650 metadata=metadata, 651 version=version, 652 level=level, 653 status_message=status_message, 654 completion_start_time=completion_start_time, 655 model=model, 656 model_parameters=model_parameters, 657 usage_details=usage_details, 658 cost_details=cost_details, 659 prompt=prompt, 660 ), 661 )
Create a new generation span and set it as the current span in a context manager.
This method creates a specialized span for model generations and sets it as the current span within a context manager. Use this method with a 'with' statement to automatically handle the generation span lifecycle within a code block.
The created generation span will be the child of the current span in the context.
Arguments:
- trace_context: Optional context for connecting to an existing trace
- name: Name of the generation operation
- input: Input data for the model (e.g., prompts)
- output: Output from the model (e.g., completions)
- metadata: Additional metadata to associate with the generation
- version: Version identifier for the model or component
- level: Importance level of the generation (info, warning, error)
- status_message: Optional status message for the generation
- completion_start_time: When the model started generating the response
- model: Name/identifier of the AI model used (e.g., "gpt-4")
- model_parameters: Parameters used for the model (e.g., temperature, max_tokens)
- usage_details: Token usage information (e.g., prompt_tokens, completion_tokens)
- cost_details: Cost information for the model call
- prompt: Associated prompt template from Langfuse prompt management
- end_on_exit (default: True): Whether to end the span automatically when leaving the context manager. If False, the span must be manually ended to avoid memory leaks.
Returns:
A context manager that yields a LangfuseGeneration
Example:
with langfuse.start_as_current_generation(
    name="answer-generation",
    model="gpt-4",
    input={"prompt": "Explain quantum computing"}
) as generation:
    # Call model API
    response = llm.generate(...)

    # Update with results
    generation.update(
        output=response.text,
        usage_details={
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens
        }
    )
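completion_start_time is how you record time to first token for streaming calls. A sketch, with llm.stream(...) standing in for your model client:

```python
from datetime import datetime, timezone

with langfuse.start_as_current_generation(
    name="stream-answer",
    model="gpt-4",
    input={"prompt": "Explain quantum computing"},
) as generation:
    first_token_at = None
    chunks = []
    for chunk in llm.stream(...):  # hypothetical streaming client
        if first_token_at is None:
            first_token_at = datetime.now(timezone.utc)  # first token arrived
        chunks.append(chunk)

    generation.update(
        output="".join(chunks),
        completion_start_time=first_token_at,
    )
```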
780 def update_current_generation( 781 self, 782 *, 783 name: Optional[str] = None, 784 input: Optional[Any] = None, 785 output: Optional[Any] = None, 786 metadata: Optional[Any] = None, 787 version: Optional[str] = None, 788 level: Optional[SpanLevel] = None, 789 status_message: Optional[str] = None, 790 completion_start_time: Optional[datetime] = None, 791 model: Optional[str] = None, 792 model_parameters: Optional[Dict[str, MapValue]] = None, 793 usage_details: Optional[Dict[str, int]] = None, 794 cost_details: Optional[Dict[str, float]] = None, 795 prompt: Optional[PromptClient] = None, 796 ) -> None: 797 """Update the current active generation span with new information. 798 799 This method updates the current generation span in the active context with 800 additional information. It's useful for adding output, usage stats, or other 801 details that become available during or after model generation. 802 803 Args: 804 name: The generation name 805 input: Updated input data for the model 806 output: Output from the model (e.g., completions) 807 metadata: Additional metadata to associate with the generation 808 version: Version identifier for the model or component 809 level: Importance level of the generation (info, warning, error) 810 status_message: Optional status message for the generation 811 completion_start_time: When the model started generating the response 812 model: Name/identifier of the AI model used (e.g., "gpt-4") 813 model_parameters: Parameters used for the model (e.g., temperature, max_tokens) 814 usage_details: Token usage information (e.g., prompt_tokens, completion_tokens) 815 cost_details: Cost information for the model call 816 prompt: Associated prompt template from Langfuse prompt management 817 818 Example: 819 ```python 820 with langfuse.start_as_current_generation(name="answer-query") as generation: 821 # Initial setup and API call 822 response = llm.generate(...) 823 824 # Update with results that weren't available at creation time 825 langfuse.update_current_generation( 826 output=response.text, 827 usage_details={ 828 "prompt_tokens": response.usage.prompt_tokens, 829 "completion_tokens": response.usage.completion_tokens 830 } 831 ) 832 ``` 833 """ 834 if not self._tracing_enabled: 835 langfuse_logger.debug( 836 "Operation skipped: update_current_generation - Tracing is disabled or client is in no-op mode." 837 ) 838 return 839 840 current_otel_span = self._get_current_otel_span() 841 842 if current_otel_span is not None: 843 generation = LangfuseGeneration( 844 otel_span=current_otel_span, langfuse_client=self 845 ) 846 847 if name: 848 current_otel_span.update_name(name) 849 850 generation.update( 851 input=input, 852 output=output, 853 metadata=metadata, 854 version=version, 855 level=level, 856 status_message=status_message, 857 completion_start_time=completion_start_time, 858 model=model, 859 model_parameters=model_parameters, 860 usage_details=usage_details, 861 cost_details=cost_details, 862 prompt=prompt, 863 )
Update the current active generation span with new information.
This method updates the current generation span in the active context with additional information. It's useful for adding output, usage stats, or other details that become available during or after model generation.
Arguments:
- name: The generation name
- input: Updated input data for the model
- output: Output from the model (e.g., completions)
- metadata: Additional metadata to associate with the generation
- version: Version identifier for the model or component
- level: Importance level of the generation (info, warning, error)
- status_message: Optional status message for the generation
- completion_start_time: When the model started generating the response
- model: Name/identifier of the AI model used (e.g., "gpt-4")
- model_parameters: Parameters used for the model (e.g., temperature, max_tokens)
- usage_details: Token usage information (e.g., prompt_tokens, completion_tokens)
- cost_details: Cost information for the model call
- prompt: Associated prompt template from Langfuse prompt management
Example:
with langfuse.start_as_current_generation(name="answer-query") as generation:
    # Initial setup and API call
    response = llm.generate(...)

    # Update with results that weren't available at creation time
    langfuse.update_current_generation(
        output=response.text,
        usage_details={
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens
        }
    )
865 def update_current_span( 866 self, 867 *, 868 name: Optional[str] = None, 869 input: Optional[Any] = None, 870 output: Optional[Any] = None, 871 metadata: Optional[Any] = None, 872 version: Optional[str] = None, 873 level: Optional[SpanLevel] = None, 874 status_message: Optional[str] = None, 875 ) -> None: 876 """Update the current active span with new information. 877 878 This method updates the current span in the active context with 879 additional information. It's useful for adding outputs or metadata 880 that become available during execution. 881 882 Args: 883 name: The span name 884 input: Updated input data for the operation 885 output: Output data from the operation 886 metadata: Additional metadata to associate with the span 887 version: Version identifier for the code or component 888 level: Importance level of the span (info, warning, error) 889 status_message: Optional status message for the span 890 891 Example: 892 ```python 893 with langfuse.start_as_current_span(name="process-data") as span: 894 # Initial processing 895 result = process_first_part() 896 897 # Update with intermediate results 898 langfuse.update_current_span(metadata={"intermediate_result": result}) 899 900 # Continue processing 901 final_result = process_second_part(result) 902 903 # Final update 904 langfuse.update_current_span(output=final_result) 905 ``` 906 """ 907 if not self._tracing_enabled: 908 langfuse_logger.debug( 909 "Operation skipped: update_current_span - Tracing is disabled or client is in no-op mode." 910 ) 911 return 912 913 current_otel_span = self._get_current_otel_span() 914 915 if current_otel_span is not None: 916 span = LangfuseSpan( 917 otel_span=current_otel_span, 918 langfuse_client=self, 919 environment=self._environment, 920 ) 921 922 if name: 923 current_otel_span.update_name(name) 924 925 span.update( 926 input=input, 927 output=output, 928 metadata=metadata, 929 version=version, 930 level=level, 931 status_message=status_message, 932 )
Update the current active span with new information.
This method updates the current span in the active context with additional information. It's useful for adding outputs or metadata that become available during execution.
Arguments:
- name: The span name
- input: Updated input data for the operation
- output: Output data from the operation
- metadata: Additional metadata to associate with the span
- version: Version identifier for the code or component
- level: Importance level of the span (info, warning, error)
- status_message: Optional status message for the span
Example:
with langfuse.start_as_current_span(name="process-data") as span:
    # Initial processing
    result = process_first_part()

    # Update with intermediate results
    langfuse.update_current_span(metadata={"intermediate_result": result})

    # Continue processing
    final_result = process_second_part(result)

    # Final update
    langfuse.update_current_span(output=final_result)
934 def update_current_trace( 935 self, 936 *, 937 name: Optional[str] = None, 938 user_id: Optional[str] = None, 939 session_id: Optional[str] = None, 940 version: Optional[str] = None, 941 input: Optional[Any] = None, 942 output: Optional[Any] = None, 943 metadata: Optional[Any] = None, 944 tags: Optional[List[str]] = None, 945 public: Optional[bool] = None, 946 ) -> None: 947 """Update the current trace with additional information. 948 949 This method updates the Langfuse trace that the current span belongs to. It's useful for 950 adding trace-level metadata like user ID, session ID, or tags that apply to 951 the entire Langfuse trace rather than just a single observation. 952 953 Args: 954 name: Updated name for the Langfuse trace 955 user_id: ID of the user who initiated the Langfuse trace 956 session_id: Session identifier for grouping related Langfuse traces 957 version: Version identifier for the application or service 958 input: Input data for the overall Langfuse trace 959 output: Output data from the overall Langfuse trace 960 metadata: Additional metadata to associate with the Langfuse trace 961 tags: List of tags to categorize the Langfuse trace 962 public: Whether the Langfuse trace should be publicly accessible 963 964 Example: 965 ```python 966 with langfuse.start_as_current_span(name="handle-request") as span: 967 # Get user information 968 user = authenticate_user(request) 969 970 # Update trace with user context 971 langfuse.update_current_trace( 972 user_id=user.id, 973 session_id=request.session_id, 974 tags=["production", "web-app"] 975 ) 976 977 # Continue processing 978 response = process_request(request) 979 980 # Update span with results 981 span.update(output=response) 982 ``` 983 """ 984 if not self._tracing_enabled: 985 langfuse_logger.debug( 986 "Operation skipped: update_current_trace - Tracing is disabled or client is in no-op mode." 987 ) 988 return 989 990 current_otel_span = self._get_current_otel_span() 991 992 if current_otel_span is not None: 993 span = LangfuseSpan( 994 otel_span=current_otel_span, 995 langfuse_client=self, 996 environment=self._environment, 997 ) 998 999 span.update_trace( 1000 name=name, 1001 user_id=user_id, 1002 session_id=session_id, 1003 version=version, 1004 input=input, 1005 output=output, 1006 metadata=metadata, 1007 tags=tags, 1008 public=public, 1009 )
Update the current trace with additional information.
This method updates the Langfuse trace that the current span belongs to. It's useful for adding trace-level metadata like user ID, session ID, or tags that apply to the entire Langfuse trace rather than just a single observation.
Arguments:
- name: Updated name for the Langfuse trace
- user_id: ID of the user who initiated the Langfuse trace
- session_id: Session identifier for grouping related Langfuse traces
- version: Version identifier for the application or service
- input: Input data for the overall Langfuse trace
- output: Output data from the overall Langfuse trace
- metadata: Additional metadata to associate with the Langfuse trace
- tags: List of tags to categorize the Langfuse trace
- public: Whether the Langfuse trace should be publicly accessible
Example:
with langfuse.start_as_current_span(name="handle-request") as span: # Get user information user = authenticate_user(request) # Update trace with user context langfuse.update_current_trace( user_id=user.id, session_id=request.session_id, tags=["production", "web-app"] ) # Continue processing response = process_request(request) # Update span with results span.update(output=response)
1011 def create_event( 1012 self, 1013 *, 1014 trace_context: Optional[TraceContext] = None, 1015 name: str, 1016 input: Optional[Any] = None, 1017 output: Optional[Any] = None, 1018 metadata: Optional[Any] = None, 1019 version: Optional[str] = None, 1020 level: Optional[SpanLevel] = None, 1021 status_message: Optional[str] = None, 1022 ) -> LangfuseEvent: 1023 """Create a new Langfuse observation of type 'EVENT'. 1024 1025 The created Langfuse Event observation will be the child of the current span in the context. 1026 1027 Args: 1028 trace_context: Optional context for connecting to an existing trace 1029 name: Name of the span (e.g., function or operation name) 1030 input: Input data for the operation (can be any JSON-serializable object) 1031 output: Output data from the operation (can be any JSON-serializable object) 1032 metadata: Additional metadata to associate with the span 1033 version: Version identifier for the code or component 1034 level: Importance level of the span (info, warning, error) 1035 status_message: Optional status message for the span 1036 1037 Returns: 1038 The Langfuse Event object 1039 1040 Example: 1041 ```python 1042 event = langfuse.create_event(name="process-event") 1043 ``` 1044 """ 1045 timestamp = time_ns() 1046 1047 if trace_context: 1048 trace_id = trace_context.get("trace_id", None) 1049 parent_span_id = trace_context.get("parent_span_id", None) 1050 1051 if trace_id: 1052 remote_parent_span = self._create_remote_parent_span( 1053 trace_id=trace_id, parent_span_id=parent_span_id 1054 ) 1055 1056 with otel_trace_api.use_span( 1057 cast(otel_trace_api.Span, remote_parent_span) 1058 ): 1059 otel_span = self._otel_tracer.start_span( 1060 name=name, start_time=timestamp 1061 ) 1062 otel_span.set_attribute(LangfuseOtelSpanAttributes.AS_ROOT, True) 1063 1064 return cast( 1065 LangfuseEvent, 1066 LangfuseEvent( 1067 otel_span=otel_span, 1068 langfuse_client=self, 1069 environment=self._environment, 1070 input=input, 1071 output=output, 1072 metadata=metadata, 1073 version=version, 1074 level=level, 1075 status_message=status_message, 1076 ).end(end_time=timestamp), 1077 ) 1078 1079 otel_span = self._otel_tracer.start_span(name=name, start_time=timestamp) 1080 1081 return cast( 1082 LangfuseEvent, 1083 LangfuseEvent( 1084 otel_span=otel_span, 1085 langfuse_client=self, 1086 environment=self._environment, 1087 input=input, 1088 output=output, 1089 metadata=metadata, 1090 version=version, 1091 level=level, 1092 status_message=status_message, 1093 ).end(end_time=timestamp), 1094 )
Create a new Langfuse observation of type 'EVENT'.
The created Langfuse Event observation will be the child of the current span in the context.
Arguments:
- trace_context: Optional context for connecting to an existing trace
- name: Name of the event (e.g., function or operation name)
- input: Input data for the operation (can be any JSON-serializable object)
- output: Output data from the operation (can be any JSON-serializable object)
- metadata: Additional metadata to associate with the event
- version: Version identifier for the code or component
- level: Importance level of the event (info, warning, error)
- status_message: Optional status message for the event
Returns:
The Langfuse Event object
Example:
```python
event = langfuse.create_event(name="process-event")
```
1183 @staticmethod 1184 def create_trace_id(*, seed: Optional[str] = None) -> str: 1185 """Create a unique trace ID for use with Langfuse. 1186 1187 This method generates a unique trace ID for use with various Langfuse APIs. 1188 It can either generate a random ID or create a deterministic ID based on 1189 a seed string. 1190 1191 Trace IDs must be 32 lowercase hexadecimal characters, representing 16 bytes. 1192 This method ensures the generated ID meets this requirement. If you need to 1193 correlate an external ID with a Langfuse trace ID, use the external ID as the 1194 seed to get a valid, deterministic Langfuse trace ID. 1195 1196 Args: 1197 seed: Optional string to use as a seed for deterministic ID generation. 1198 If provided, the same seed will always produce the same ID. 1199 If not provided, a random ID will be generated. 1200 1201 Returns: 1202 A 32-character lowercase hexadecimal string representing the Langfuse trace ID. 1203 1204 Example: 1205 ```python 1206 # Generate a random trace ID 1207 trace_id = langfuse.create_trace_id() 1208 1209 # Generate a deterministic ID based on a seed 1210 session_trace_id = langfuse.create_trace_id(seed="session-456") 1211 1212 # Correlate an external ID with a Langfuse trace ID 1213 external_id = "external-system-123456" 1214 correlated_trace_id = langfuse.create_trace_id(seed=external_id) 1215 1216 # Use the ID with trace context 1217 with langfuse.start_as_current_span( 1218 name="process-request", 1219 trace_context={"trace_id": trace_id} 1220 ) as span: 1221 # Operation will be part of the specific trace 1222 pass 1223 ``` 1224 """ 1225 if not seed: 1226 trace_id_int = RandomIdGenerator().generate_trace_id() 1227 1228 return Langfuse._format_otel_trace_id(trace_id_int) 1229 1230 return sha256(seed.encode("utf-8")).digest()[:16].hex()
Create a unique trace ID for use with Langfuse.
This method generates a unique trace ID for use with various Langfuse APIs. It can either generate a random ID or create a deterministic ID based on a seed string.
Trace IDs must be 32 lowercase hexadecimal characters, representing 16 bytes. This method ensures the generated ID meets this requirement. If you need to correlate an external ID with a Langfuse trace ID, use the external ID as the seed to get a valid, deterministic Langfuse trace ID.
Arguments:
- seed: Optional string to use as a seed for deterministic ID generation. If provided, the same seed will always produce the same ID. If not provided, a random ID will be generated.
Returns:
A 32-character lowercase hexadecimal string representing the Langfuse trace ID.
Example:
```python
# Generate a random trace ID
trace_id = langfuse.create_trace_id()

# Generate a deterministic ID based on a seed
session_trace_id = langfuse.create_trace_id(seed="session-456")

# Correlate an external ID with a Langfuse trace ID
external_id = "external-system-123456"
correlated_trace_id = langfuse.create_trace_id(seed=external_id)

# Use the ID with trace context
with langfuse.start_as_current_span(
    name="process-request",
    trace_context={"trace_id": trace_id}
) as span:
    # Operation will be part of the specific trace
    pass
```
1306 def create_score( 1307 self, 1308 *, 1309 name: str, 1310 value: Union[float, str], 1311 session_id: Optional[str] = None, 1312 dataset_run_id: Optional[str] = None, 1313 trace_id: Optional[str] = None, 1314 observation_id: Optional[str] = None, 1315 score_id: Optional[str] = None, 1316 data_type: Optional[ScoreDataType] = None, 1317 comment: Optional[str] = None, 1318 config_id: Optional[str] = None, 1319 metadata: Optional[Any] = None, 1320 ) -> None: 1321 """Create a score for a specific trace or observation. 1322 1323 This method creates a score for evaluating a Langfuse trace or observation. Scores can be 1324 used to track quality metrics, user feedback, or automated evaluations. 1325 1326 Args: 1327 name: Name of the score (e.g., "relevance", "accuracy") 1328 value: Score value (can be numeric for NUMERIC/BOOLEAN types or string for CATEGORICAL) 1329 session_id: ID of the Langfuse session to associate the score with 1330 dataset_run_id: ID of the Langfuse dataset run to associate the score with 1331 trace_id: ID of the Langfuse trace to associate the score with 1332 observation_id: Optional ID of the specific observation to score. Trace ID must be provided too. 1333 score_id: Optional custom ID for the score (auto-generated if not provided) 1334 data_type: Type of score (NUMERIC, BOOLEAN, or CATEGORICAL) 1335 comment: Optional comment or explanation for the score 1336 config_id: Optional ID of a score config defined in Langfuse 1337 metadata: Optional metadata to be attached to the score 1338 1339 Example: 1340 ```python 1341 # Create a numeric score for accuracy 1342 langfuse.create_score( 1343 name="accuracy", 1344 value=0.92, 1345 trace_id="abcdef1234567890abcdef1234567890", 1346 data_type="NUMERIC", 1347 comment="High accuracy with minor irrelevant details" 1348 ) 1349 1350 # Create a categorical score for sentiment 1351 langfuse.create_score( 1352 name="sentiment", 1353 value="positive", 1354 trace_id="abcdef1234567890abcdef1234567890", 1355 observation_id="abcdef1234567890", 1356 data_type="CATEGORICAL" 1357 ) 1358 ``` 1359 """ 1360 if not self._tracing_enabled: 1361 return 1362 1363 score_id = score_id or self._create_observation_id() 1364 1365 try: 1366 new_body = ScoreBody( 1367 id=score_id, 1368 sessionId=session_id, 1369 datasetRunId=dataset_run_id, 1370 traceId=trace_id, 1371 observationId=observation_id, 1372 name=name, 1373 value=value, 1374 dataType=data_type, # type: ignore 1375 comment=comment, 1376 configId=config_id, 1377 environment=self._environment, 1378 metadata=metadata, 1379 ) 1380 1381 event = { 1382 "id": self.create_trace_id(), 1383 "type": "score-create", 1384 "timestamp": _get_timestamp(), 1385 "body": new_body, 1386 } 1387 1388 if self._resources is not None: 1389 # Force the score to be in sample if it was for a legacy trace ID, i.e. non-32 hexchar 1390 force_sample = ( 1391 not self._is_valid_trace_id(trace_id) if trace_id else True 1392 ) 1393 1394 self._resources.add_score_task( 1395 event, 1396 force_sample=force_sample, 1397 ) 1398 1399 except Exception as e: 1400 langfuse_logger.exception( 1401 f"Error creating score: Failed to process score event for trace_id={trace_id}, name={name}. Error: {e}" 1402 )
Create a score for a specific trace or observation.
This method creates a score for evaluating a Langfuse trace or observation. Scores can be used to track quality metrics, user feedback, or automated evaluations.
Arguments:
- name: Name of the score (e.g., "relevance", "accuracy")
- value: Score value (can be numeric for NUMERIC/BOOLEAN types or string for CATEGORICAL)
- session_id: ID of the Langfuse session to associate the score with
- dataset_run_id: ID of the Langfuse dataset run to associate the score with
- trace_id: ID of the Langfuse trace to associate the score with
- observation_id: Optional ID of the specific observation to score. Trace ID must be provided too.
- score_id: Optional custom ID for the score (auto-generated if not provided)
- data_type: Type of score (NUMERIC, BOOLEAN, or CATEGORICAL)
- comment: Optional comment or explanation for the score
- config_id: Optional ID of a score config defined in Langfuse
- metadata: Optional metadata to be attached to the score
Example:
```python
# Create a numeric score for accuracy
langfuse.create_score(
    name="accuracy",
    value=0.92,
    trace_id="abcdef1234567890abcdef1234567890",
    data_type="NUMERIC",
    comment="High accuracy with minor irrelevant details"
)

# Create a categorical score for sentiment
langfuse.create_score(
    name="sentiment",
    value="positive",
    trace_id="abcdef1234567890abcdef1234567890",
    observation_id="abcdef1234567890",
    data_type="CATEGORICAL"
)
```
1428 def score_current_span( 1429 self, 1430 *, 1431 name: str, 1432 value: Union[float, str], 1433 score_id: Optional[str] = None, 1434 data_type: Optional[ScoreDataType] = None, 1435 comment: Optional[str] = None, 1436 config_id: Optional[str] = None, 1437 ) -> None: 1438 """Create a score for the current active span. 1439 1440 This method scores the currently active span in the context. It's a convenient 1441 way to score the current operation without needing to know its trace and span IDs. 1442 1443 Args: 1444 name: Name of the score (e.g., "relevance", "accuracy") 1445 value: Score value (can be numeric for NUMERIC/BOOLEAN types or string for CATEGORICAL) 1446 score_id: Optional custom ID for the score (auto-generated if not provided) 1447 data_type: Type of score (NUMERIC, BOOLEAN, or CATEGORICAL) 1448 comment: Optional comment or explanation for the score 1449 config_id: Optional ID of a score config defined in Langfuse 1450 1451 Example: 1452 ```python 1453 with langfuse.start_as_current_generation(name="answer-query") as generation: 1454 # Generate answer 1455 response = generate_answer(...) 1456 generation.update(output=response) 1457 1458 # Score the generation 1459 langfuse.score_current_span( 1460 name="relevance", 1461 value=0.85, 1462 data_type="NUMERIC", 1463 comment="Mostly relevant but contains some tangential information" 1464 ) 1465 ``` 1466 """ 1467 current_span = self._get_current_otel_span() 1468 1469 if current_span is not None: 1470 trace_id = self._get_otel_trace_id(current_span) 1471 observation_id = self._get_otel_span_id(current_span) 1472 1473 langfuse_logger.info( 1474 f"Score: Creating score name='{name}' value={value} for current span ({observation_id}) in trace {trace_id}" 1475 ) 1476 1477 self.create_score( 1478 trace_id=trace_id, 1479 observation_id=observation_id, 1480 name=name, 1481 value=cast(str, value), 1482 score_id=score_id, 1483 data_type=cast(Literal["CATEGORICAL"], data_type), 1484 comment=comment, 1485 config_id=config_id, 1486 )
Create a score for the current active span.
This method scores the currently active span in the context. It's a convenient way to score the current operation without needing to know its trace and span IDs.
Arguments:
- name: Name of the score (e.g., "relevance", "accuracy")
- value: Score value (can be numeric for NUMERIC/BOOLEAN types or string for CATEGORICAL)
- score_id: Optional custom ID for the score (auto-generated if not provided)
- data_type: Type of score (NUMERIC, BOOLEAN, or CATEGORICAL)
- comment: Optional comment or explanation for the score
- config_id: Optional ID of a score config defined in Langfuse
Example:
```python
with langfuse.start_as_current_generation(name="answer-query") as generation:
    # Generate answer
    response = generate_answer(...)
    generation.update(output=response)

    # Score the generation
    langfuse.score_current_span(
        name="relevance",
        value=0.85,
        data_type="NUMERIC",
        comment="Mostly relevant but contains some tangential information"
    )
```
1512 def score_current_trace( 1513 self, 1514 *, 1515 name: str, 1516 value: Union[float, str], 1517 score_id: Optional[str] = None, 1518 data_type: Optional[ScoreDataType] = None, 1519 comment: Optional[str] = None, 1520 config_id: Optional[str] = None, 1521 ) -> None: 1522 """Create a score for the current trace. 1523 1524 This method scores the trace of the currently active span. Unlike score_current_span, 1525 this method associates the score with the entire trace rather than a specific span. 1526 It's useful for scoring overall performance or quality of the entire operation. 1527 1528 Args: 1529 name: Name of the score (e.g., "user_satisfaction", "overall_quality") 1530 value: Score value (can be numeric for NUMERIC/BOOLEAN types or string for CATEGORICAL) 1531 score_id: Optional custom ID for the score (auto-generated if not provided) 1532 data_type: Type of score (NUMERIC, BOOLEAN, or CATEGORICAL) 1533 comment: Optional comment or explanation for the score 1534 config_id: Optional ID of a score config defined in Langfuse 1535 1536 Example: 1537 ```python 1538 with langfuse.start_as_current_span(name="process-user-request") as span: 1539 # Process request 1540 result = process_complete_request() 1541 span.update(output=result) 1542 1543 # Score the overall trace 1544 langfuse.score_current_trace( 1545 name="overall_quality", 1546 value=0.95, 1547 data_type="NUMERIC", 1548 comment="High quality end-to-end response" 1549 ) 1550 ``` 1551 """ 1552 current_span = self._get_current_otel_span() 1553 1554 if current_span is not None: 1555 trace_id = self._get_otel_trace_id(current_span) 1556 1557 langfuse_logger.info( 1558 f"Score: Creating score name='{name}' value={value} for entire trace {trace_id}" 1559 ) 1560 1561 self.create_score( 1562 trace_id=trace_id, 1563 name=name, 1564 value=cast(str, value), 1565 score_id=score_id, 1566 data_type=cast(Literal["CATEGORICAL"], data_type), 1567 comment=comment, 1568 config_id=config_id, 1569 )
Create a score for the current trace.
This method scores the trace of the currently active span. Unlike score_current_span, this method associates the score with the entire trace rather than a specific span. It's useful for scoring overall performance or quality of the entire operation.
Arguments:
- name: Name of the score (e.g., "user_satisfaction", "overall_quality")
- value: Score value (can be numeric for NUMERIC/BOOLEAN types or string for CATEGORICAL)
- score_id: Optional custom ID for the score (auto-generated if not provided)
- data_type: Type of score (NUMERIC, BOOLEAN, or CATEGORICAL)
- comment: Optional comment or explanation for the score
- config_id: Optional ID of a score config defined in Langfuse
Example:
with langfuse.start_as_current_span(name="process-user-request") as span: # Process request result = process_complete_request() span.update(output=result) # Score the overall trace langfuse.score_current_trace( name="overall_quality", value=0.95, data_type="NUMERIC", comment="High quality end-to-end response" )
1571 def flush(self) -> None: 1572 """Force flush all pending spans and events to the Langfuse API. 1573 1574 This method manually flushes any pending spans, scores, and other events to the 1575 Langfuse API. It's useful in scenarios where you want to ensure all data is sent 1576 before proceeding, without waiting for the automatic flush interval. 1577 1578 Example: 1579 ```python 1580 # Record some spans and scores 1581 with langfuse.start_as_current_span(name="operation") as span: 1582 # Do work... 1583 pass 1584 1585 # Ensure all data is sent to Langfuse before proceeding 1586 langfuse.flush() 1587 1588 # Continue with other work 1589 ``` 1590 """ 1591 if self._resources is not None: 1592 self._resources.flush()
Force flush all pending spans and events to the Langfuse API.
This method manually flushes any pending spans, scores, and other events to the Langfuse API. It's useful in scenarios where you want to ensure all data is sent before proceeding, without waiting for the automatic flush interval.
Example:
```python
# Record some spans and scores
with langfuse.start_as_current_span(name="operation") as span:
    # Do work...
    pass

# Ensure all data is sent to Langfuse before proceeding
langfuse.flush()

# Continue with other work
```
1594 def shutdown(self) -> None: 1595 """Shut down the Langfuse client and flush all pending data. 1596 1597 This method cleanly shuts down the Langfuse client, ensuring all pending data 1598 is flushed to the API and all background threads are properly terminated. 1599 1600 It's important to call this method when your application is shutting down to 1601 prevent data loss and resource leaks. For most applications, using the client 1602 as a context manager or relying on the automatic shutdown via atexit is sufficient. 1603 1604 Example: 1605 ```python 1606 # Initialize Langfuse 1607 langfuse = Langfuse(public_key="...", secret_key="...") 1608 1609 # Use Langfuse throughout your application 1610 # ... 1611 1612 # When application is shutting down 1613 langfuse.shutdown() 1614 ``` 1615 """ 1616 if self._resources is not None: 1617 self._resources.shutdown()
Shut down the Langfuse client and flush all pending data.
This method cleanly shuts down the Langfuse client, ensuring all pending data is flushed to the API and all background threads are properly terminated.
It's important to call this method when your application is shutting down to prevent data loss and resource leaks. For most applications, using the client as a context manager or relying on the automatic shutdown via atexit is sufficient.
Example:
```python
# Initialize Langfuse
langfuse = Langfuse(public_key="...", secret_key="...")

# Use Langfuse throughout your application
# ...

# When application is shutting down
langfuse.shutdown()
```
1619 def get_current_trace_id(self) -> Optional[str]: 1620 """Get the trace ID of the current active span. 1621 1622 This method retrieves the trace ID from the currently active span in the context. 1623 It can be used to get the trace ID for referencing in logs, external systems, 1624 or for creating related operations. 1625 1626 Returns: 1627 The current trace ID as a 32-character lowercase hexadecimal string, 1628 or None if there is no active span. 1629 1630 Example: 1631 ```python 1632 with langfuse.start_as_current_span(name="process-request") as span: 1633 # Get the current trace ID for reference 1634 trace_id = langfuse.get_current_trace_id() 1635 1636 # Use it for external correlation 1637 log.info(f"Processing request with trace_id: {trace_id}") 1638 1639 # Or pass to another system 1640 external_system.process(data, trace_id=trace_id) 1641 ``` 1642 """ 1643 if not self._tracing_enabled: 1644 langfuse_logger.debug( 1645 "Operation skipped: get_current_trace_id - Tracing is disabled or client is in no-op mode." 1646 ) 1647 return None 1648 1649 current_otel_span = self._get_current_otel_span() 1650 1651 return self._get_otel_trace_id(current_otel_span) if current_otel_span else None
Get the trace ID of the current active span.
This method retrieves the trace ID from the currently active span in the context. It can be used to get the trace ID for referencing in logs, external systems, or for creating related operations.
Returns:
The current trace ID as a 32-character lowercase hexadecimal string, or None if there is no active span.
Example:
with langfuse.start_as_current_span(name="process-request") as span: # Get the current trace ID for reference trace_id = langfuse.get_current_trace_id() # Use it for external correlation log.info(f"Processing request with trace_id: {trace_id}") # Or pass to another system external_system.process(data, trace_id=trace_id)
1653 def get_current_observation_id(self) -> Optional[str]: 1654 """Get the observation ID (span ID) of the current active span. 1655 1656 This method retrieves the observation ID from the currently active span in the context. 1657 It can be used to get the observation ID for referencing in logs, external systems, 1658 or for creating scores or other related operations. 1659 1660 Returns: 1661 The current observation ID as a 16-character lowercase hexadecimal string, 1662 or None if there is no active span. 1663 1664 Example: 1665 ```python 1666 with langfuse.start_as_current_span(name="process-user-query") as span: 1667 # Get the current observation ID 1668 observation_id = langfuse.get_current_observation_id() 1669 1670 # Store it for later reference 1671 cache.set(f"query_{query_id}_observation", observation_id) 1672 1673 # Process the query... 1674 ``` 1675 """ 1676 if not self._tracing_enabled: 1677 langfuse_logger.debug( 1678 "Operation skipped: get_current_observation_id - Tracing is disabled or client is in no-op mode." 1679 ) 1680 return None 1681 1682 current_otel_span = self._get_current_otel_span() 1683 1684 return self._get_otel_span_id(current_otel_span) if current_otel_span else None
Get the observation ID (span ID) of the current active span.
This method retrieves the observation ID from the currently active span in the context. It can be used to get the observation ID for referencing in logs, external systems, or for creating scores or other related operations.
Returns:
The current observation ID as a 16-character lowercase hexadecimal string, or None if there is no active span.
Example:
with langfuse.start_as_current_span(name="process-user-query") as span: # Get the current observation ID observation_id = langfuse.get_current_observation_id() # Store it for later reference cache.set(f"query_{query_id}_observation", observation_id) # Process the query...
1697 def get_trace_url(self, *, trace_id: Optional[str] = None) -> Optional[str]: 1698 """Get the URL to view a trace in the Langfuse UI. 1699 1700 This method generates a URL that links directly to a trace in the Langfuse UI. 1701 It's useful for providing links in logs, notifications, or debugging tools. 1702 1703 Args: 1704 trace_id: Optional trace ID to generate a URL for. If not provided, 1705 the trace ID of the current active span will be used. 1706 1707 Returns: 1708 A URL string pointing to the trace in the Langfuse UI, 1709 or None if the project ID couldn't be retrieved or no trace ID is available. 1710 1711 Example: 1712 ```python 1713 # Get URL for the current trace 1714 with langfuse.start_as_current_span(name="process-request") as span: 1715 trace_url = langfuse.get_trace_url() 1716 log.info(f"Processing trace: {trace_url}") 1717 1718 # Get URL for a specific trace 1719 specific_trace_url = langfuse.get_trace_url(trace_id="1234567890abcdef1234567890abcdef") 1720 send_notification(f"Review needed for trace: {specific_trace_url}") 1721 ``` 1722 """ 1723 project_id = self._get_project_id() 1724 final_trace_id = trace_id or self.get_current_trace_id() 1725 1726 return ( 1727 f"{self._host}/project/{project_id}/traces/{final_trace_id}" 1728 if project_id and final_trace_id 1729 else None 1730 )
Get the URL to view a trace in the Langfuse UI.
This method generates a URL that links directly to a trace in the Langfuse UI. It's useful for providing links in logs, notifications, or debugging tools.
Arguments:
- trace_id: Optional trace ID to generate a URL for. If not provided, the trace ID of the current active span will be used.
Returns:
A URL string pointing to the trace in the Langfuse UI, or None if the project ID couldn't be retrieved or no trace ID is available.
Example:
```python
# Get URL for the current trace
with langfuse.start_as_current_span(name="process-request") as span:
    trace_url = langfuse.get_trace_url()
    log.info(f"Processing trace: {trace_url}")

# Get URL for a specific trace
specific_trace_url = langfuse.get_trace_url(trace_id="1234567890abcdef1234567890abcdef")
send_notification(f"Review needed for trace: {specific_trace_url}")
```
1732 def get_dataset( 1733 self, name: str, *, fetch_items_page_size: Optional[int] = 50 1734 ) -> "DatasetClient": 1735 """Fetch a dataset by its name. 1736 1737 Args: 1738 name (str): The name of the dataset to fetch. 1739 fetch_items_page_size (Optional[int]): All items of the dataset will be fetched in chunks of this size. Defaults to 50. 1740 1741 Returns: 1742 DatasetClient: The dataset with the given name. 1743 """ 1744 try: 1745 langfuse_logger.debug(f"Getting datasets {name}") 1746 dataset = self.api.datasets.get(dataset_name=name) 1747 1748 dataset_items = [] 1749 page = 1 1750 1751 while True: 1752 new_items = self.api.dataset_items.list( 1753 dataset_name=self._url_encode(name, is_url_param=True), 1754 page=page, 1755 limit=fetch_items_page_size, 1756 ) 1757 dataset_items.extend(new_items.data) 1758 1759 if new_items.meta.total_pages <= page: 1760 break 1761 1762 page += 1 1763 1764 items = [DatasetItemClient(i, langfuse=self) for i in dataset_items] 1765 1766 return DatasetClient(dataset, items=items) 1767 1768 except Error as e: 1769 handle_fern_exception(e) 1770 raise e
Fetch a dataset by its name.
Arguments:
- name (str): The name of the dataset to fetch.
- fetch_items_page_size (Optional[int]): All items of the dataset will be fetched in chunks of this size. Defaults to 50.
Returns:
DatasetClient: The dataset with the given name.
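Example:
A minimal usage sketch, assuming a dataset named "capital_cities" already exists in the project:
```python
from langfuse import Langfuse

langfuse = Langfuse()

# Items are fetched transparently in pages of `fetch_items_page_size`
dataset = langfuse.get_dataset("capital_cities")

for item in dataset.items:
    # Each item carries the fields set at creation time
    print(item.input, item.expected_output)
```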
1772 def auth_check(self) -> bool: 1773 """Check if the provided credentials (public and secret key) are valid. 1774 1775 Raises: 1776 Exception: If no projects were found for the provided credentials. 1777 1778 Note: 1779 This method is blocking. It is discouraged to use it in production code. 1780 """ 1781 try: 1782 projects = self.api.projects.get() 1783 langfuse_logger.debug( 1784 f"Auth check successful, found {len(projects.data)} projects" 1785 ) 1786 if len(projects.data) == 0: 1787 raise Exception( 1788 "Auth check failed, no project found for the keys provided." 1789 ) 1790 return True 1791 1792 except AttributeError as e: 1793 langfuse_logger.warning( 1794 f"Auth check failed: Client not properly initialized. Error: {e}" 1795 ) 1796 return False 1797 1798 except Error as e: 1799 handle_fern_exception(e) 1800 raise e
Check if the provided credentials (public and secret key) are valid.
Raises:
- Exception: If no projects were found for the provided credentials.
Note:
This method is blocking; its use in production code is discouraged.
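Example:
Since the call blocks, a common pattern is to verify credentials once at application startup rather than per request; a minimal sketch:
```python
from langfuse import Langfuse

langfuse = Langfuse()

# Verify credentials once at startup, not per request
if not langfuse.auth_check():
    raise RuntimeError("Langfuse credentials are invalid or the client is misconfigured.")
```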
1802 def create_dataset( 1803 self, 1804 *, 1805 name: str, 1806 description: Optional[str] = None, 1807 metadata: Optional[Any] = None, 1808 ) -> Dataset: 1809 """Create a dataset with the given name on Langfuse. 1810 1811 Args: 1812 name: Name of the dataset to create. 1813 description: Description of the dataset. Defaults to None. 1814 metadata: Additional metadata. Defaults to None. 1815 1816 Returns: 1817 Dataset: The created dataset as returned by the Langfuse API. 1818 """ 1819 try: 1820 body = CreateDatasetRequest( 1821 name=name, description=description, metadata=metadata 1822 ) 1823 langfuse_logger.debug(f"Creating datasets {body}") 1824 1825 return self.api.datasets.create(request=body) 1826 1827 except Error as e: 1828 handle_fern_exception(e) 1829 raise e
Create a dataset with the given name on Langfuse.
Arguments:
- name: Name of the dataset to create.
- description: Description of the dataset. Defaults to None.
- metadata: Additional metadata. Defaults to None.
Returns:
Dataset: The created dataset as returned by the Langfuse API.
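Example:
A short sketch; the name, description, and metadata values are illustrative:
```python
from langfuse import Langfuse

langfuse = Langfuse()

dataset = langfuse.create_dataset(
    name="capital_cities",
    description="Country-to-capital mapping for evals",  # illustrative
    metadata={"owner": "eval-team"},                     # illustrative
)
print(dataset.name)
```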
1831 def create_dataset_item( 1832 self, 1833 *, 1834 dataset_name: str, 1835 input: Optional[Any] = None, 1836 expected_output: Optional[Any] = None, 1837 metadata: Optional[Any] = None, 1838 source_trace_id: Optional[str] = None, 1839 source_observation_id: Optional[str] = None, 1840 status: Optional[DatasetStatus] = None, 1841 id: Optional[str] = None, 1842 ) -> DatasetItem: 1843 """Create a dataset item. 1844 1845 Upserts if an item with id already exists. 1846 1847 Args: 1848 dataset_name: Name of the dataset in which the dataset item should be created. 1849 input: Input data. Defaults to None. Can contain any dict, list or scalar. 1850 expected_output: Expected output data. Defaults to None. Can contain any dict, list or scalar. 1851 metadata: Additional metadata. Defaults to None. Can contain any dict, list or scalar. 1852 source_trace_id: Id of the source trace. Defaults to None. 1853 source_observation_id: Id of the source observation. Defaults to None. 1854 status: Status of the dataset item. Defaults to ACTIVE for newly created items. 1855 id: Id of the dataset item. Defaults to None. Provide your own id if you want to dedupe dataset items. Id needs to be globally unique and cannot be reused across datasets. 1856 1857 Returns: 1858 DatasetItem: The created dataset item as returned by the Langfuse API. 1859 1860 Example: 1861 ```python 1862 from langfuse import Langfuse 1863 1864 langfuse = Langfuse() 1865 1866 # Uploading items to the Langfuse dataset named "capital_cities" 1867 langfuse.create_dataset_item( 1868 dataset_name="capital_cities", 1869 input={"input": {"country": "Italy"}}, 1870 expected_output={"expected_output": "Rome"}, 1871 metadata={"foo": "bar"} 1872 ) 1873 ``` 1874 """ 1875 try: 1876 body = CreateDatasetItemRequest( 1877 datasetName=dataset_name, 1878 input=input, 1879 expectedOutput=expected_output, 1880 metadata=metadata, 1881 sourceTraceId=source_trace_id, 1882 sourceObservationId=source_observation_id, 1883 status=status, 1884 id=id, 1885 ) 1886 langfuse_logger.debug(f"Creating dataset item {body}") 1887 return self.api.dataset_items.create(request=body) 1888 except Error as e: 1889 handle_fern_exception(e) 1890 raise e
Create a dataset item.
Upserts if an item with id already exists.
Arguments:
- dataset_name: Name of the dataset in which the dataset item should be created.
- input: Input data. Defaults to None. Can contain any dict, list or scalar.
- expected_output: Expected output data. Defaults to None. Can contain any dict, list or scalar.
- metadata: Additional metadata. Defaults to None. Can contain any dict, list or scalar.
- source_trace_id: Id of the source trace. Defaults to None.
- source_observation_id: Id of the source observation. Defaults to None.
- status: Status of the dataset item. Defaults to ACTIVE for newly created items.
- id: Id of the dataset item. Defaults to None. Provide your own id if you want to dedupe dataset items. Id needs to be globally unique and cannot be reused across datasets.
Returns:
DatasetItem: The created dataset item as returned by the Langfuse API.
Example:
```python
from langfuse import Langfuse

langfuse = Langfuse()

# Uploading items to the Langfuse dataset named "capital_cities"
langfuse.create_dataset_item(
    dataset_name="capital_cities",
    input={"input": {"country": "Italy"}},
    expected_output={"expected_output": "Rome"},
    metadata={"foo": "bar"}
)
```
1892 def resolve_media_references( 1893 self, 1894 *, 1895 obj: Any, 1896 resolve_with: Literal["base64_data_uri"], 1897 max_depth: int = 10, 1898 content_fetch_timeout_seconds: int = 5, 1899 ) -> Any: 1900 """Replace media reference strings in an object with base64 data URIs. 1901 1902 This method recursively traverses an object (up to max_depth) looking for media reference strings 1903 in the format "@@@langfuseMedia:...@@@". When found, it (synchronously) fetches the actual media content using 1904 the provided Langfuse client and replaces the reference string with a base64 data URI. 1905 1906 If fetching media content fails for a reference string, a warning is logged and the reference 1907 string is left unchanged. 1908 1909 Args: 1910 obj: The object to process. Can be a primitive value, array, or nested object. 1911 If the object has a __dict__ attribute, a dict will be returned instead of the original object type. 1912 resolve_with: The representation of the media content to replace the media reference string with. 1913 Currently only "base64_data_uri" is supported. 1914 max_depth: int: The maximum depth to traverse the object. Default is 10. 1915 content_fetch_timeout_seconds: int: The timeout in seconds for fetching media content. Default is 5. 1916 1917 Returns: 1918 A deep copy of the input object with all media references replaced with base64 data URIs where possible. 1919 If the input object has a __dict__ attribute, a dict will be returned instead of the original object type. 1920 1921 Example: 1922 obj = { 1923 "image": "@@@langfuseMedia:type=image/jpeg|id=123|source=bytes@@@", 1924 "nested": { 1925 "pdf": "@@@langfuseMedia:type=application/pdf|id=456|source=bytes@@@" 1926 } 1927 } 1928 1929 result = await LangfuseMedia.resolve_media_references(obj, langfuse_client) 1930 1931 # Result: 1932 # { 1933 # "image": "data:image/jpeg;base64,/9j/4AAQSkZJRg...", 1934 # "nested": { 1935 # "pdf": "data:application/pdf;base64,JVBERi0xLjcK..." 1936 # } 1937 # } 1938 """ 1939 return LangfuseMedia.resolve_media_references( 1940 langfuse_client=self, 1941 obj=obj, 1942 resolve_with=resolve_with, 1943 max_depth=max_depth, 1944 content_fetch_timeout_seconds=content_fetch_timeout_seconds, 1945 )
Replace media reference strings in an object with base64 data URIs.
This method recursively traverses an object (up to max_depth) looking for media reference strings in the format "@@@langfuseMedia:...@@@". When found, it (synchronously) fetches the actual media content using the provided Langfuse client and replaces the reference string with a base64 data URI.
If fetching media content fails for a reference string, a warning is logged and the reference string is left unchanged.
Arguments:
- obj: The object to process. Can be a primitive value, array, or nested object. If the object has a __dict__ attribute, a dict will be returned instead of the original object type.
- resolve_with: The representation of the media content to replace the media reference string with. Currently only "base64_data_uri" is supported.
- max_depth (int): The maximum depth to traverse the object. Default is 10.
- content_fetch_timeout_seconds (int): The timeout in seconds for fetching media content. Default is 5.
Returns:
A deep copy of the input object with all media references replaced with base64 data URIs where possible. If the input object has a __dict__ attribute, a dict will be returned instead of the original object type.
Example:
```python
obj = {
    "image": "@@@langfuseMedia:type=image/jpeg|id=123|source=bytes@@@",
    "nested": {
        "pdf": "@@@langfuseMedia:type=application/pdf|id=456|source=bytes@@@"
    }
}

result = langfuse.resolve_media_references(obj=obj, resolve_with="base64_data_uri")

# Result:
# {
#     "image": "data:image/jpeg;base64,/9j/4AAQSkZJRg...",
#     "nested": {
#         "pdf": "data:application/pdf;base64,JVBERi0xLjcK..."
#     }
# }
```
1975 def get_prompt( 1976 self, 1977 name: str, 1978 *, 1979 version: Optional[int] = None, 1980 label: Optional[str] = None, 1981 type: Literal["chat", "text"] = "text", 1982 cache_ttl_seconds: Optional[int] = None, 1983 fallback: Union[Optional[List[ChatMessageDict]], Optional[str]] = None, 1984 max_retries: Optional[int] = None, 1985 fetch_timeout_seconds: Optional[int] = None, 1986 ) -> PromptClient: 1987 """Get a prompt. 1988 1989 This method attempts to fetch the requested prompt from the local cache. If the prompt is not found 1990 in the cache or if the cached prompt has expired, it will try to fetch the prompt from the server again 1991 and update the cache. If fetching the new prompt fails, and there is an expired prompt in the cache, it will 1992 return the expired prompt as a fallback. 1993 1994 Args: 1995 name (str): The name of the prompt to retrieve. 1996 1997 Keyword Args: 1998 version (Optional[int]): The version of the prompt to retrieve. If no label and version is specified, the `production` label is returned. Specify either version or label, not both. 1999 label: Optional[str]: The label of the prompt to retrieve. If no label and version is specified, the `production` label is returned. Specify either version or label, not both. 2000 cache_ttl_seconds: Optional[int]: Time-to-live in seconds for caching the prompt. Must be specified as a 2001 keyword argument. If not set, defaults to 60 seconds. Disables caching if set to 0. 2002 type: Literal["chat", "text"]: The type of the prompt to retrieve. Defaults to "text". 2003 fallback: Union[Optional[List[ChatMessageDict]], Optional[str]]: The prompt string to return if fetching the prompt fails. Important on the first call where no cached prompt is available. Follows Langfuse prompt formatting with double curly braces for variables. Defaults to None. 2004 max_retries: Optional[int]: The maximum number of retries in case of API/network errors. Defaults to 2. The maximum value is 4. Retries have an exponential backoff with a maximum delay of 10 seconds. 2005 fetch_timeout_seconds: Optional[int]: The timeout in milliseconds for fetching the prompt. Defaults to the default timeout set on the SDK, which is 5 seconds per default. 2006 2007 Returns: 2008 The prompt object retrieved from the cache or directly fetched if not cached or expired of type 2009 - TextPromptClient, if type argument is 'text'. 2010 - ChatPromptClient, if type argument is 'chat'. 2011 2012 Raises: 2013 Exception: Propagates any exceptions raised during the fetching of a new prompt, unless there is an 2014 expired prompt in the cache, in which case it logs a warning and returns the expired prompt. 2015 """ 2016 if self._resources is None: 2017 raise Error( 2018 "SDK is not correctly initalized. Check the init logs for more details." 2019 ) 2020 if version is not None and label is not None: 2021 raise ValueError("Cannot specify both version and label at the same time.") 2022 2023 if not name: 2024 raise ValueError("Prompt name cannot be empty.") 2025 2026 cache_key = PromptCache.generate_cache_key(name, version=version, label=label) 2027 bounded_max_retries = self._get_bounded_max_retries( 2028 max_retries, default_max_retries=2, max_retries_upper_bound=4 2029 ) 2030 2031 langfuse_logger.debug(f"Getting prompt '{cache_key}'") 2032 cached_prompt = self._resources.prompt_cache.get(cache_key) 2033 2034 if cached_prompt is None or cache_ttl_seconds == 0: 2035 langfuse_logger.debug( 2036 f"Prompt '{cache_key}' not found in cache or caching disabled." 
2037 ) 2038 try: 2039 return self._fetch_prompt_and_update_cache( 2040 name, 2041 version=version, 2042 label=label, 2043 ttl_seconds=cache_ttl_seconds, 2044 max_retries=bounded_max_retries, 2045 fetch_timeout_seconds=fetch_timeout_seconds, 2046 ) 2047 except Exception as e: 2048 if fallback: 2049 langfuse_logger.warning( 2050 f"Returning fallback prompt for '{cache_key}' due to fetch error: {e}" 2051 ) 2052 2053 fallback_client_args: Dict[str, Any] = { 2054 "name": name, 2055 "prompt": fallback, 2056 "type": type, 2057 "version": version or 0, 2058 "config": {}, 2059 "labels": [label] if label else [], 2060 "tags": [], 2061 } 2062 2063 if type == "text": 2064 return TextPromptClient( 2065 prompt=Prompt_Text(**fallback_client_args), 2066 is_fallback=True, 2067 ) 2068 2069 if type == "chat": 2070 return ChatPromptClient( 2071 prompt=Prompt_Chat(**fallback_client_args), 2072 is_fallback=True, 2073 ) 2074 2075 raise e 2076 2077 if cached_prompt.is_expired(): 2078 langfuse_logger.debug(f"Stale prompt '{cache_key}' found in cache.") 2079 try: 2080 # refresh prompt in background thread, refresh_prompt deduplicates tasks 2081 langfuse_logger.debug(f"Refreshing prompt '{cache_key}' in background.") 2082 2083 def refresh_task() -> None: 2084 self._fetch_prompt_and_update_cache( 2085 name, 2086 version=version, 2087 label=label, 2088 ttl_seconds=cache_ttl_seconds, 2089 max_retries=bounded_max_retries, 2090 fetch_timeout_seconds=fetch_timeout_seconds, 2091 ) 2092 2093 self._resources.prompt_cache.add_refresh_prompt_task( 2094 cache_key, 2095 refresh_task, 2096 ) 2097 langfuse_logger.debug( 2098 f"Returning stale prompt '{cache_key}' from cache." 2099 ) 2100 # return stale prompt 2101 return cached_prompt.value 2102 2103 except Exception as e: 2104 langfuse_logger.warning( 2105 f"Error when refreshing cached prompt '{cache_key}', returning cached version. Error: {e}" 2106 ) 2107 # creation of refresh prompt task failed, return stale prompt 2108 return cached_prompt.value 2109 2110 return cached_prompt.value
Get a prompt.
This method attempts to fetch the requested prompt from the local cache. If the prompt is not found in the cache or if the cached prompt has expired, it will try to fetch the prompt from the server again and update the cache. If fetching the new prompt fails, and there is an expired prompt in the cache, it will return the expired prompt as a fallback.
Arguments:
- name (str): The name of the prompt to retrieve.
Keyword Args:
- version (Optional[int]): The version of the prompt to retrieve. If neither version nor label is specified, the `production` label is returned. Specify either version or label, not both.
- label (Optional[str]): The label of the prompt to retrieve. If neither version nor label is specified, the `production` label is returned. Specify either version or label, not both.
- cache_ttl_seconds (Optional[int]): Time-to-live in seconds for caching the prompt. Must be specified as a keyword argument. If not set, defaults to 60 seconds. Disables caching if set to 0.
- type (Literal["chat", "text"]): The type of the prompt to retrieve. Defaults to "text".
- fallback (Union[Optional[List[ChatMessageDict]], Optional[str]]): The prompt string to return if fetching the prompt fails. Important on the first call, where no cached prompt is available. Follows Langfuse prompt formatting with double curly braces for variables. Defaults to None.
- max_retries (Optional[int]): The maximum number of retries in case of API/network errors. Defaults to 2. The maximum value is 4. Retries use exponential backoff with a maximum delay of 10 seconds.
- fetch_timeout_seconds (Optional[int]): The timeout in seconds for fetching the prompt. Defaults to the default timeout set on the SDK, which is 5 seconds.
Returns:
The prompt object, retrieved from the cache or freshly fetched if not cached or expired, of type:
- TextPromptClient, if the type argument is 'text'.
- ChatPromptClient, if the type argument is 'chat'.
Raises:
- Exception: Propagates any exceptions raised during the fetching of a new prompt, unless there is an expired prompt in the cache, in which case it logs a warning and returns the expired prompt.
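Example:
A usage sketch; the prompt name "movie-critic" and the {{title}} variable are illustrative:
```python
from langfuse import Langfuse

langfuse = Langfuse()

# Fetch the version currently labeled 'production' (the default)
prompt = langfuse.get_prompt("movie-critic")

# Pin a specific version or label instead (specify one, not both)
prompt_v2 = langfuse.get_prompt("movie-critic", version=2)

# Provide a fallback so the very first call cannot fail hard
prompt_safe = langfuse.get_prompt(
    "movie-critic",
    fallback="As a critic, rate the movie {{title}}.",
)

# Fill in the template variables
text = prompt.compile(title="Inception")
```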
2204 def create_prompt( 2205 self, 2206 *, 2207 name: str, 2208 prompt: Union[ 2209 str, List[Union[ChatMessageDict, ChatMessageWithPlaceholdersDict]] 2210 ], 2211 labels: List[str] = [], 2212 tags: Optional[List[str]] = None, 2213 type: Optional[Literal["chat", "text"]] = "text", 2214 config: Optional[Any] = None, 2215 commit_message: Optional[str] = None, 2216 ) -> PromptClient: 2217 """Create a new prompt in Langfuse. 2218 2219 Keyword Args: 2220 name : The name of the prompt to be created. 2221 prompt : The content of the prompt to be created. 2222 is_active [DEPRECATED] : A flag indicating whether the prompt is active or not. This is deprecated and will be removed in a future release. Please use the 'production' label instead. 2223 labels: The labels of the prompt. Defaults to None. To create a default-served prompt, add the 'production' label. 2224 tags: The tags of the prompt. Defaults to None. Will be applied to all versions of the prompt. 2225 config: Additional structured data to be saved with the prompt. Defaults to None. 2226 type: The type of the prompt to be created. "chat" vs. "text". Defaults to "text". 2227 commit_message: Optional string describing the change. 2228 2229 Returns: 2230 TextPromptClient: The prompt if type argument is 'text'. 2231 ChatPromptClient: The prompt if type argument is 'chat'. 2232 """ 2233 try: 2234 langfuse_logger.debug(f"Creating prompt {name=}, {labels=}") 2235 2236 if type == "chat": 2237 if not isinstance(prompt, list): 2238 raise ValueError( 2239 "For 'chat' type, 'prompt' must be a list of chat messages with role and content attributes." 2240 ) 2241 request: Union[CreatePromptRequest_Chat, CreatePromptRequest_Text] = ( 2242 CreatePromptRequest_Chat( 2243 name=name, 2244 prompt=cast(Any, prompt), 2245 labels=labels, 2246 tags=tags, 2247 config=config or {}, 2248 commitMessage=commit_message, 2249 type="chat", 2250 ) 2251 ) 2252 server_prompt = self.api.prompts.create(request=request) 2253 2254 if self._resources is not None: 2255 self._resources.prompt_cache.invalidate(name) 2256 2257 return ChatPromptClient(prompt=cast(Prompt_Chat, server_prompt)) 2258 2259 if not isinstance(prompt, str): 2260 raise ValueError("For 'text' type, 'prompt' must be a string.") 2261 2262 request = CreatePromptRequest_Text( 2263 name=name, 2264 prompt=prompt, 2265 labels=labels, 2266 tags=tags, 2267 config=config or {}, 2268 commitMessage=commit_message, 2269 type="text", 2270 ) 2271 2272 server_prompt = self.api.prompts.create(request=request) 2273 2274 if self._resources is not None: 2275 self._resources.prompt_cache.invalidate(name) 2276 2277 return TextPromptClient(prompt=cast(Prompt_Text, server_prompt)) 2278 2279 except Error as e: 2280 handle_fern_exception(e) 2281 raise e
Create a new prompt in Langfuse.
Keyword Args:
- name: The name of the prompt to be created.
- prompt: The content of the prompt to be created.
- is_active [DEPRECATED]: A flag indicating whether the prompt is active or not. This is deprecated and will be removed in a future release. Please use the 'production' label instead.
- labels: The labels of the prompt. Defaults to None. To create a default-served prompt, add the 'production' label.
- tags: The tags of the prompt. Defaults to None. Will be applied to all versions of the prompt.
- config: Additional structured data to be saved with the prompt. Defaults to None.
- type: The type of the prompt to be created. "chat" vs. "text". Defaults to "text".
- commit_message: Optional string describing the change.
Returns:
- TextPromptClient: The prompt if the type argument is 'text'.
- ChatPromptClient: The prompt if the type argument is 'chat'.
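Example:
A sketch of creating both prompt types; names and contents are illustrative:
```python
from langfuse import Langfuse

langfuse = Langfuse()

# Text prompt, served by default via the 'production' label
langfuse.create_prompt(
    name="movie-critic",
    prompt="As a critic, rate the movie {{title}}.",
    labels=["production"],
    type="text",
)

# Chat prompt: a list of role/content messages instead of a string
langfuse.create_prompt(
    name="movie-critic-chat",
    prompt=[{"role": "system", "content": "You are a movie critic."}],
    labels=["production"],
    type="chat",
)
```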
2283 def update_prompt( 2284 self, 2285 *, 2286 name: str, 2287 version: int, 2288 new_labels: List[str] = [], 2289 ) -> Any: 2290 """Update an existing prompt version in Langfuse. The Langfuse SDK prompt cache is invalidated for all prompts witht he specified name. 2291 2292 Args: 2293 name (str): The name of the prompt to update. 2294 version (int): The version number of the prompt to update. 2295 new_labels (List[str], optional): New labels to assign to the prompt version. Labels are unique across versions. The "latest" label is reserved and managed by Langfuse. Defaults to []. 2296 2297 Returns: 2298 Prompt: The updated prompt from the Langfuse API. 2299 2300 """ 2301 updated_prompt = self.api.prompt_version.update( 2302 name=name, 2303 version=version, 2304 new_labels=new_labels, 2305 ) 2306 2307 if self._resources is not None: 2308 self._resources.prompt_cache.invalidate(name) 2309 2310 return updated_prompt
Update an existing prompt version in Langfuse. The Langfuse SDK prompt cache is invalidated for all prompts with the specified name.
Arguments:
- name (str): The name of the prompt to update.
- version (int): The version number of the prompt to update.
- new_labels (List[str], optional): New labels to assign to the prompt version. Labels are unique across versions. The "latest" label is reserved and managed by Langfuse. Defaults to [].
Returns:
Prompt: The updated prompt from the Langfuse API.
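Example:
A sketch; the prompt name and version are illustrative:
```python
from langfuse import Langfuse

langfuse = Langfuse()

# Promote version 3 to 'production'; the SDK prompt cache for this name
# is invalidated so subsequent get_prompt calls pick up the change
updated = langfuse.update_prompt(
    name="movie-critic",
    version=3,
    new_labels=["production"],
)
```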
9def get_client(*, public_key: Optional[str] = None) -> Langfuse: 10 """Get or create a Langfuse client instance. 11 12 Returns an existing Langfuse client or creates a new one if none exists. In multi-project setups, 13 providing a public_key is required. Multi-project support is experimental - see Langfuse docs. 14 15 Behavior: 16 - Single project: Returns existing client or creates new one 17 - Multi-project: Requires public_key to return specific client 18 - No public_key in multi-project: Returns disabled client to prevent data leakage 19 20 The function uses a singleton pattern per public_key to conserve resources and maintain state. 21 22 Args: 23 public_key (Optional[str]): Project identifier 24 - With key: Returns client for that project 25 - Without key: Returns single client or disabled client if multiple exist 26 27 Returns: 28 Langfuse: Client instance in one of three states: 29 1. Client for specified public_key 30 2. Default client for single-project setup 31 3. Disabled client when multiple projects exist without key 32 33 Security: 34 Disables tracing when multiple projects exist without explicit key to prevent 35 cross-project data leakage. Multi-project setups are experimental. 36 37 Example: 38 ```python 39 # Single project 40 client = get_client() # Default client 41 42 # In multi-project usage: 43 client_a = get_client(public_key="project_a_key") # Returns project A's client 44 client_b = get_client(public_key="project_b_key") # Returns project B's client 45 46 # Without specific key in multi-project setup: 47 client = get_client() # Returns disabled client for safety 48 ``` 49 """ 50 with LangfuseResourceManager._lock: 51 active_instances = LangfuseResourceManager._instances 52 53 if not public_key: 54 if len(active_instances) == 0: 55 # No clients initialized yet, create default instance 56 return Langfuse() 57 58 if len(active_instances) == 1: 59 # Only one client exists, safe to use without specifying key 60 instance = list(active_instances.values())[0] 61 62 # Initialize with the credentials bound to the instance 63 # This is important if the original instance was instantiated 64 # via constructor arguments 65 return Langfuse( 66 public_key=instance.public_key, 67 secret_key=instance.secret_key, 68 host=instance.host, 69 tracing_enabled=instance.tracing_enabled, 70 ) 71 72 else: 73 # Multiple clients exist but no key specified - disable tracing 74 # to prevent cross-project data leakage 75 langfuse_logger.warning( 76 "No 'langfuse_public_key' passed to decorated function, but multiple langfuse clients are instantiated in current process. Skipping tracing for this function to avoid cross-project leakage." 77 ) 78 return Langfuse( 79 tracing_enabled=False, public_key="fake", secret_key="fake" 80 ) 81 82 else: 83 # Specific key provided, look up existing instance 84 target_instance: Optional[LangfuseResourceManager] = active_instances.get( 85 public_key, None 86 ) 87 88 if target_instance is None: 89 # No instance found with this key - client not initialized properly 90 langfuse_logger.warning( 91 f"No Langfuse client with public key {public_key} has been initialized. Skipping tracing for decorated function." 92 ) 93 return Langfuse( 94 tracing_enabled=False, public_key="fake", secret_key="fake" 95 ) 96 97 # target_instance is guaranteed to be not None at this point 98 return Langfuse( 99 public_key=public_key, 100 secret_key=target_instance.secret_key, 101 host=target_instance.host, 102 tracing_enabled=target_instance.tracing_enabled, 103 )
Get or create a Langfuse client instance.
Returns an existing Langfuse client or creates a new one if none exists. In multi-project setups, providing a public_key is required. Multi-project support is experimental; see the Langfuse docs.
Behavior:
- Single project: Returns existing client or creates new one
- Multi-project: Requires public_key to return specific client
- No public_key in multi-project: Returns disabled client to prevent data leakage
The function uses a singleton pattern per public_key to conserve resources and maintain state.
Arguments:
- public_key (Optional[str]): Project identifier
- With key: Returns client for that project
- Without key: Returns single client or disabled client if multiple exist
Returns:
Langfuse: Client instance in one of three states:
1. Client for the specified public_key
2. Default client for a single-project setup
3. Disabled client when multiple projects exist without a key
Security:
Disables tracing when multiple projects exist without explicit key to prevent cross-project data leakage. Multi-project setups are experimental.
Example:
```python
# Single project
client = get_client()  # Default client

# In multi-project usage:
client_a = get_client(public_key="project_a_key")  # Returns project A's client
client_b = get_client(public_key="project_b_key")  # Returns project B's client

# Without specific key in multi-project setup:
client = get_client()  # Returns disabled client for safety
```
```python
def observe(
    self,
    func: Optional[F] = None,
    *,
    name: Optional[str] = None,
    as_type: Optional[Literal["generation"]] = None,
    capture_input: Optional[bool] = None,
    capture_output: Optional[bool] = None,
    transform_to_string: Optional[Callable[[Iterable], str]] = None,
) -> Union[F, Callable[[F], F]]:
    """Wrap a function to create and manage Langfuse tracing around its execution, supporting both synchronous and asynchronous functions."""
    function_io_capture_enabled = os.environ.get(
        LANGFUSE_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED, "True"
    ).lower() not in ("false", "0")

    should_capture_input = (
        capture_input if capture_input is not None else function_io_capture_enabled
    )

    should_capture_output = (
        capture_output
        if capture_output is not None
        else function_io_capture_enabled
    )

    def decorator(func: F) -> F:
        return (
            self._async_observe(
                func,
                name=name,
                as_type=as_type,
                capture_input=should_capture_input,
                capture_output=should_capture_output,
                transform_to_string=transform_to_string,
            )
            if asyncio.iscoroutinefunction(func)
            else self._sync_observe(
                func,
                name=name,
                as_type=as_type,
                capture_input=should_capture_input,
                capture_output=should_capture_output,
                transform_to_string=transform_to_string,
            )
        )

    """Handle decorator with or without parentheses.

    This logic enables the decorator to work both with and without parentheses:
    - @observe - Python passes the function directly to the decorator
    - @observe() - Python calls the decorator first, which must return a function decorator

    When called without arguments (@observe), the func parameter contains the function to decorate,
    so we directly apply the decorator to it. When called with parentheses (@observe()),
    func is None, so we return the decorator function itself for Python to apply in the next step.
    """
    if func is None:
        return decorator
    else:
        return decorator(func)
```
Wrap a function to create and manage Langfuse tracing around its execution, supporting both synchronous and asynchronous functions.
This decorator provides seamless integration of Langfuse observability into your codebase. It automatically creates spans or generations around function execution, capturing timing, inputs/outputs, and error states. The decorator intelligently handles both synchronous and asynchronous functions, preserving function signatures and type hints.
Using OpenTelemetry's distributed tracing system, it maintains proper trace context propagation throughout your application, enabling you to see hierarchical traces of function calls with detailed performance metrics and function-specific details.
Arguments:
- func (Optional[Callable]): The function to decorate. When used with parentheses @observe(), this will be None.
- name (Optional[str]): Custom name for the created trace or span. If not provided, the function name is used.
- as_type (Optional[Literal["generation"]]): Set to "generation" to create a specialized LLM generation span with model metrics support, suitable for tracking language model outputs.
- capture_input (Optional[bool]): Whether to capture the function's arguments as the observation input. Defaults to the LANGFUSE_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED environment variable (enabled unless set to "false" or "0").
- capture_output (Optional[bool]): Whether to capture the function's return value as the observation output. Defaults to the same environment variable.
- transform_to_string (Optional[Callable[[Iterable], str]]): Optional function to collapse an iterable output (e.g., streamed chunks) into a single string for output capture.
Returns:
Callable: A wrapped version of the original function that automatically creates and manages Langfuse spans.
Example:
For general function tracing with automatic naming:
```python
@observe()
def process_user_request(user_id, query):
    # Function is automatically traced with name "process_user_request"
    return get_response(query)
```
For language model generation tracking:
```python
@observe(name="answer-generation", as_type="generation")
async def generate_answer(query):
    # Creates a generation-type span with extended LLM metrics
    response = await openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": query}]
    )
    return response.choices[0].message.content
```
For trace context propagation between functions:
```python
@observe()
def main_process():
    # Parent span is created
    return sub_process()  # Child span automatically connected to parent

@observe()
def sub_process():
    # Automatically becomes a child span of main_process
    return "result"
```
Raises:
- Exception: Propagates any exceptions from the wrapped function after logging them in the trace.
Notes:
- The decorator preserves the original function's signature, docstring, and return type.
- Proper parent-child relationships between spans are automatically maintained.
- Special keyword arguments can be passed to control tracing (see the sketch after these notes):
- langfuse_trace_id: Explicitly set the trace ID for this function call
- langfuse_parent_observation_id: Explicitly set the parent span ID
- langfuse_public_key: Use a specific Langfuse project (when multiple clients exist)
- For async functions, the decorator returns an async function wrapper.
- For sync functions, the decorator returns a synchronous wrapper.
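For example, a caller can pin an invocation to a known trace without changing the wrapped function's signature. A minimal sketch (`run_pipeline` and the trace ID value are illustrative):

```python
@observe()
def handle_request(query):
    return run_pipeline(query)

# The langfuse_* kwargs below are consumed by the decorator and are not
# forwarded to handle_request itself. The ID value is illustrative.
handle_request(
    "What is Langfuse?",
    langfuse_trace_id="1234567890abcdef1234567890abcdef",
)
```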
```python
class LangfuseSpan(LangfuseSpanWrapper):
    """Standard span implementation for general operations in Langfuse.

    This class represents a general-purpose span that can be used to trace
    any operation in your application. It extends the base LangfuseSpanWrapper
    with specific methods for creating child spans, generations, and updating
    span-specific attributes.
    """
```
Standard span implementation for general operations in Langfuse.
This class represents a general-purpose span that can be used to trace any operation in your application. It extends the base LangfuseSpanWrapper with specific methods for creating child spans, generations, and updating span-specific attributes.
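In typical usage, spans are obtained from factory methods on the client (or on a parent span) rather than by calling the constructor directly:

```python
# Spans are normally created via factory methods, not the constructor.
span = langfuse.start_span(name="fetch-user-data")
try:
    ...  # do work
finally:
    span.end()  # spans must be ended explicitly
```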
```python
def __init__(
    self,
    *,
    otel_span: otel_trace_api.Span,
    langfuse_client: "Langfuse",
    input: Optional[Any] = None,
    output: Optional[Any] = None,
    metadata: Optional[Any] = None,
    environment: Optional[str] = None,
    version: Optional[str] = None,
    level: Optional[SpanLevel] = None,
    status_message: Optional[str] = None,
):
    """Initialize a new LangfuseSpan."""
    super().__init__(
        otel_span=otel_span,
        as_type="span",
        langfuse_client=langfuse_client,
        input=input,
        output=output,
        metadata=metadata,
        environment=environment,
        version=version,
        level=level,
        status_message=status_message,
    )
```
Initialize a new LangfuseSpan.
Arguments:
- otel_span: The OpenTelemetry span to wrap
- langfuse_client: Reference to the parent Langfuse client
- input: Input data for the span (any JSON-serializable object)
- output: Output data from the span (any JSON-serializable object)
- metadata: Additional metadata to associate with the span
- environment: The tracing environment
- version: Version identifier for the code or component
- level: Importance level of the span (info, warning, error)
- status_message: Optional status message for the span
```python
def update(
    self,
    *,
    name: Optional[str] = None,
    input: Optional[Any] = None,
    output: Optional[Any] = None,
    metadata: Optional[Any] = None,
    version: Optional[str] = None,
    level: Optional[SpanLevel] = None,
    status_message: Optional[str] = None,
    **kwargs: Any,
) -> "LangfuseSpan":
    """Update this span with new information."""
    if not self._otel_span.is_recording():
        return self

    processed_input = self._process_media_and_apply_mask(
        data=input, field="input", span=self._otel_span
    )
    processed_output = self._process_media_and_apply_mask(
        data=output, field="output", span=self._otel_span
    )
    processed_metadata = self._process_media_and_apply_mask(
        data=metadata, field="metadata", span=self._otel_span
    )

    if name:
        self._otel_span.update_name(name)

    attributes = create_span_attributes(
        input=processed_input,
        output=processed_output,
        metadata=processed_metadata,
        version=version,
        level=level,
        status_message=status_message,
    )

    self._otel_span.set_attributes(attributes=attributes)

    return self
```
Update this span with new information.
This method updates the span with new information that becomes available during execution, such as outputs, metadata, or status changes.
Arguments:
- name: Span name
- input: Updated input data for the operation
- output: Output data from the operation
- metadata: Additional metadata to associate with the span
- version: Version identifier for the code or component
- level: Importance level of the span (info, warning, error)
- status_message: Optional status message for the span
- **kwargs: Additional keyword arguments (ignored)
Example:
```python
span = langfuse.start_span(name="process-data")
try:
    # Do work
    result = process_data()
    span.update(output=result, metadata={"processing_time": 350})
finally:
    span.end()
```
```python
def start_span(
    self,
    name: str,
    input: Optional[Any] = None,
    output: Optional[Any] = None,
    metadata: Optional[Any] = None,
    version: Optional[str] = None,
    level: Optional[SpanLevel] = None,
    status_message: Optional[str] = None,
) -> "LangfuseSpan":
    """Create a new child span."""
    with otel_trace_api.use_span(self._otel_span):
        new_otel_span = self._langfuse_client._otel_tracer.start_span(name=name)

    return LangfuseSpan(
        otel_span=new_otel_span,
        langfuse_client=self._langfuse_client,
        environment=self._environment,
        input=input,
        output=output,
        metadata=metadata,
        version=version,
        level=level,
        status_message=status_message,
    )
```
Create a new child span.
This method creates a new child span with this span as the parent. Unlike start_as_current_span(), this method does not set the new span as the current span in the context.
Arguments:
- name: Name of the span (e.g., function or operation name)
- input: Input data for the operation
- output: Output data from the operation
- metadata: Additional metadata to associate with the span
- version: Version identifier for the code or component
- level: Importance level of the span (info, warning, error)
- status_message: Optional status message for the span
Returns:
A new LangfuseSpan that must be ended with .end() when complete
Example:
```python
parent_span = langfuse.start_span(name="process-request")
try:
    # Create a child span
    child_span = parent_span.start_span(name="validate-input")
    try:
        # Do validation work
        validation_result = validate(request_data)
        child_span.update(output=validation_result)
    finally:
        child_span.end()

    # Continue with parent span
    result = process_validated_data(validation_result)
    parent_span.update(output=result)
finally:
    parent_span.end()
```
```python
def start_as_current_span(
    self,
    *,
    name: str,
    input: Optional[Any] = None,
    output: Optional[Any] = None,
    metadata: Optional[Any] = None,
    version: Optional[str] = None,
    level: Optional[SpanLevel] = None,
    status_message: Optional[str] = None,
) -> _AgnosticContextManager["LangfuseSpan"]:
    """Create a new child span and set it as the current span in a context manager."""
    return cast(
        _AgnosticContextManager["LangfuseSpan"],
        self._langfuse_client._create_span_with_parent_context(
            name=name,
            as_type="span",
            remote_parent_span=None,
            parent=self._otel_span,
            input=input,
            output=output,
            metadata=metadata,
            version=version,
            level=level,
            status_message=status_message,
        ),
    )
```
Create a new child span and set it as the current span in a context manager.
This method creates a new child span and sets it as the current span within a context manager. It should be used with a 'with' statement to automatically manage the span's lifecycle.
Arguments:
- name: Name of the span (e.g., function or operation name)
- input: Input data for the operation
- output: Output data from the operation
- metadata: Additional metadata to associate with the span
- version: Version identifier for the code or component
- level: Importance level of the span (info, warning, error)
- status_message: Optional status message for the span
Returns:
A context manager that yields a new LangfuseSpan
Example:
```python
with langfuse.start_as_current_span(name="process-request") as parent_span:
    # Parent span is active here

    # Create a child span with context management
    with parent_span.start_as_current_span(name="validate-input") as child_span:
        # Child span is active here
        validation_result = validate(request_data)
        child_span.update(output=validation_result)

    # Back to parent span context
    result = process_validated_data(validation_result)
    parent_span.update(output=result)
```
```python
def start_generation(
    self,
    *,
    name: str,
    input: Optional[Any] = None,
    output: Optional[Any] = None,
    metadata: Optional[Any] = None,
    version: Optional[str] = None,
    level: Optional[SpanLevel] = None,
    status_message: Optional[str] = None,
    completion_start_time: Optional[datetime] = None,
    model: Optional[str] = None,
    model_parameters: Optional[Dict[str, MapValue]] = None,
    usage_details: Optional[Dict[str, int]] = None,
    cost_details: Optional[Dict[str, float]] = None,
    prompt: Optional[PromptClient] = None,
) -> "LangfuseGeneration":
    """Create a new child generation span."""
    with otel_trace_api.use_span(self._otel_span):
        new_otel_span = self._langfuse_client._otel_tracer.start_span(name=name)

    return LangfuseGeneration(
        otel_span=new_otel_span,
        langfuse_client=self._langfuse_client,
        environment=self._environment,
        input=input,
        output=output,
        metadata=metadata,
        version=version,
        level=level,
        status_message=status_message,
        completion_start_time=completion_start_time,
        model=model,
        model_parameters=model_parameters,
        usage_details=usage_details,
        cost_details=cost_details,
        prompt=prompt,
    )
```
Create a new child generation span.
This method creates a new child generation span with this span as the parent. Generation spans are specialized for AI/LLM operations and include additional fields for model information, usage stats, and costs.
Unlike start_as_current_generation(), this method does not set the new span as the current span in the context.
Arguments:
- name: Name of the generation operation
- input: Input data for the model (e.g., prompts)
- output: Output from the model (e.g., completions)
- metadata: Additional metadata to associate with the generation
- version: Version identifier for the model or component
- level: Importance level of the generation (info, warning, error)
- status_message: Optional status message for the generation
- completion_start_time: When the model started generating the response
- model: Name/identifier of the AI model used (e.g., "gpt-4")
- model_parameters: Parameters used for the model (e.g., temperature, max_tokens)
- usage_details: Token usage information (e.g., prompt_tokens, completion_tokens)
- cost_details: Cost information for the model call
- prompt: Associated prompt template from Langfuse prompt management
Returns:
A new LangfuseGeneration that must be ended with .end() when complete
Example:
```python
span = langfuse.start_span(name="process-query")
try:
    # Create a generation child span
    generation = span.start_generation(
        name="generate-answer",
        model="gpt-4",
        input={"prompt": "Explain quantum computing"}
    )
    try:
        # Call model API
        response = llm.generate(...)

        generation.update(
            output=response.text,
            usage_details={
                "prompt_tokens": response.usage.prompt_tokens,
                "completion_tokens": response.usage.completion_tokens
            }
        )
    finally:
        generation.end()

    # Continue with parent span
    span.update(output={"answer": response.text, "source": "gpt-4"})
finally:
    span.end()
```
```python
def start_as_current_generation(
    self,
    *,
    name: str,
    input: Optional[Any] = None,
    output: Optional[Any] = None,
    metadata: Optional[Any] = None,
    version: Optional[str] = None,
    level: Optional[SpanLevel] = None,
    status_message: Optional[str] = None,
    completion_start_time: Optional[datetime] = None,
    model: Optional[str] = None,
    model_parameters: Optional[Dict[str, MapValue]] = None,
    usage_details: Optional[Dict[str, int]] = None,
    cost_details: Optional[Dict[str, float]] = None,
    prompt: Optional[PromptClient] = None,
) -> _AgnosticContextManager["LangfuseGeneration"]:
    """Create a new child generation span and set it as the current span in a context manager."""
    return cast(
        _AgnosticContextManager["LangfuseGeneration"],
        self._langfuse_client._create_span_with_parent_context(
            name=name,
            as_type="generation",
            remote_parent_span=None,
            parent=self._otel_span,
            input=input,
            output=output,
            metadata=metadata,
            version=version,
            level=level,
            status_message=status_message,
            completion_start_time=completion_start_time,
            model=model,
            model_parameters=model_parameters,
            usage_details=usage_details,
            cost_details=cost_details,
            prompt=prompt,
        ),
    )
```
Create a new child generation span and set it as the current span in a context manager.
This method creates a new child generation span and sets it as the current span within a context manager. Generation spans are specialized for AI/LLM operations and include additional fields for model information, usage stats, and costs.
Arguments:
- name: Name of the generation operation
- input: Input data for the model (e.g., prompts)
- output: Output from the model (e.g., completions)
- metadata: Additional metadata to associate with the generation
- version: Version identifier for the model or component
- level: Importance level of the generation (info, warning, error)
- status_message: Optional status message for the generation
- completion_start_time: When the model started generating the response
- model: Name/identifier of the AI model used (e.g., "gpt-4")
- model_parameters: Parameters used for the model (e.g., temperature, max_tokens)
- usage_details: Token usage information (e.g., prompt_tokens, completion_tokens)
- cost_details: Cost information for the model call
- prompt: Associated prompt template from Langfuse prompt management
Returns:
A context manager that yields a new LangfuseGeneration
Example:
```python
with langfuse.start_as_current_span(name="process-request") as span:
    # Prepare data
    query = preprocess_user_query(user_input)

    # Create a generation span with context management
    with span.start_as_current_generation(
        name="generate-answer",
        model="gpt-4",
        input={"query": query}
    ) as generation:
        # Generation span is active here
        response = llm.generate(query)

        # Update with results
        generation.update(
            output=response.text,
            usage_details={
                "prompt_tokens": response.usage.prompt_tokens,
                "completion_tokens": response.usage.completion_tokens
            }
        )

    # Back to parent span context
    span.update(output={"answer": response.text, "source": "gpt-4"})
```
```python
def create_event(
    self,
    *,
    name: str,
    input: Optional[Any] = None,
    output: Optional[Any] = None,
    metadata: Optional[Any] = None,
    version: Optional[str] = None,
    level: Optional[SpanLevel] = None,
    status_message: Optional[str] = None,
) -> "LangfuseEvent":
    """Create a new Langfuse observation of type 'EVENT'."""
    timestamp = time_ns()

    with otel_trace_api.use_span(self._otel_span):
        new_otel_span = self._langfuse_client._otel_tracer.start_span(
            name=name, start_time=timestamp
        )

    return cast(
        "LangfuseEvent",
        LangfuseEvent(
            otel_span=new_otel_span,
            langfuse_client=self._langfuse_client,
            input=input,
            output=output,
            metadata=metadata,
            environment=self._environment,
            version=version,
            level=level,
            status_message=status_message,
        ).end(end_time=timestamp),
    )
```
Create a new Langfuse observation of type 'EVENT'.
Arguments:
- name: Name of the event (e.g., function or operation name)
- input: Input data for the event (can be any JSON-serializable object)
- output: Output data from the event (can be any JSON-serializable object)
- metadata: Additional metadata to associate with the event
- version: Version identifier for the code or component
- level: Importance level of the event (info, warning, error)
- status_message: Optional status message for the event
Returns:
The LangfuseEvent object
Example:
```python
event = langfuse.create_event(name="process-event")
```
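Because this method lives on a span, a slightly fuller sketch is recording a point-in-time event on an active span (the event name and metadata payload here are illustrative):

```python
with langfuse.start_as_current_span(name="process-request") as span:
    # Events are point-in-time observations attached to the current trace.
    span.create_event(
        name="cache-miss",                  # illustrative name
        metadata={"cache_key": "user:42"},  # illustrative payload
    )
```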
Inherited Members
- langfuse._client.span.LangfuseSpanWrapper
- trace_id
- id
- end
- update_trace
- score
- score_trace
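The inherited members above include the scoring helpers from LangfuseSpanWrapper. A minimal sketch, assuming the name/value keyword signature of score() and score_trace():

```python
span = langfuse.start_span(name="rank-results")
try:
    # Score this observation...
    span.score(name="relevance", value=0.95)
    # ...or the whole trace it belongs to.
    span.score_trace(name="user-feedback", value=1)
finally:
    span.end()
```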
```python
class LangfuseGeneration(LangfuseSpanWrapper):
    """Specialized span implementation for AI model generations in Langfuse.

    This class represents a generation span specifically designed for tracking
    AI/LLM operations. It extends the base LangfuseSpanWrapper with specialized
    attributes for model details, token usage, and costs.
    """
```
Specialized span implementation for AI model generations in Langfuse.
This class represents a generation span specifically designed for tracking AI/LLM operations. It extends the base LangfuseSpanWrapper with specialized attributes for model details, token usage, and costs.
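Instances are normally obtained from factory methods on the client or a parent span rather than constructed directly, for example:

```python
# Created via a factory method, not the constructor:
generation = langfuse.start_generation(name="summarize", model="gpt-4")
try:
    ...  # call the model, then generation.update(...)
finally:
    generation.end()
```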
```python
def __init__(
    self,
    *,
    otel_span: otel_trace_api.Span,
    langfuse_client: "Langfuse",
    input: Optional[Any] = None,
    output: Optional[Any] = None,
    metadata: Optional[Any] = None,
    environment: Optional[str] = None,
    version: Optional[str] = None,
    level: Optional[SpanLevel] = None,
    status_message: Optional[str] = None,
    completion_start_time: Optional[datetime] = None,
    model: Optional[str] = None,
    model_parameters: Optional[Dict[str, MapValue]] = None,
    usage_details: Optional[Dict[str, int]] = None,
    cost_details: Optional[Dict[str, float]] = None,
    prompt: Optional[PromptClient] = None,
):
    """Initialize a new LangfuseGeneration span."""
    super().__init__(
        otel_span=otel_span,
        as_type="generation",
        langfuse_client=langfuse_client,
        input=input,
        output=output,
        metadata=metadata,
        environment=environment,
        version=version,
        level=level,
        status_message=status_message,
        completion_start_time=completion_start_time,
        model=model,
        model_parameters=model_parameters,
        usage_details=usage_details,
        cost_details=cost_details,
        prompt=prompt,
    )
```
Initialize a new LangfuseGeneration span.
Arguments:
- otel_span: The OpenTelemetry span to wrap
- langfuse_client: Reference to the parent Langfuse client
- input: Input data for the generation (e.g., prompts)
- output: Output from the generation (e.g., completions)
- metadata: Additional metadata to associate with the generation
- environment: The tracing environment
- version: Version identifier for the model or component
- level: Importance level of the generation (info, warning, error)
- status_message: Optional status message for the generation
- completion_start_time: When the model started generating the response
- model: Name/identifier of the AI model used (e.g., "gpt-4")
- model_parameters: Parameters used for the model (e.g., temperature, max_tokens)
- usage_details: Token usage information (e.g., prompt_tokens, completion_tokens)
- cost_details: Cost information for the model call
- prompt: Associated prompt template from Langfuse prompt management
```python
def update(
    self,
    *,
    name: Optional[str] = None,
    input: Optional[Any] = None,
    output: Optional[Any] = None,
    metadata: Optional[Any] = None,
    version: Optional[str] = None,
    level: Optional[SpanLevel] = None,
    status_message: Optional[str] = None,
    completion_start_time: Optional[datetime] = None,
    model: Optional[str] = None,
    model_parameters: Optional[Dict[str, MapValue]] = None,
    usage_details: Optional[Dict[str, int]] = None,
    cost_details: Optional[Dict[str, float]] = None,
    prompt: Optional[PromptClient] = None,
    **kwargs: Dict[str, Any],
) -> "LangfuseGeneration":
    """Update this generation span with new information."""
    if not self._otel_span.is_recording():
        return self

    processed_input = self._process_media_and_apply_mask(
        data=input, field="input", span=self._otel_span
    )
    processed_output = self._process_media_and_apply_mask(
        data=output, field="output", span=self._otel_span
    )
    processed_metadata = self._process_media_and_apply_mask(
        data=metadata, field="metadata", span=self._otel_span
    )

    if name:
        self._otel_span.update_name(name)

    attributes = create_generation_attributes(
        input=processed_input,
        output=processed_output,
        metadata=processed_metadata,
        version=version,
        level=level,
        status_message=status_message,
        completion_start_time=completion_start_time,
        model=model,
        model_parameters=model_parameters,
        usage_details=usage_details,
        cost_details=cost_details,
        prompt=prompt,
    )

    self._otel_span.set_attributes(attributes=attributes)

    return self
```
Update this generation span with new information.
This method updates the generation span with new information that becomes available during or after the model generation, such as model outputs, token usage statistics, or cost details.
Arguments:
- name: The generation name
- input: Updated input data for the model
- output: Output from the model (e.g., completions)
- metadata: Additional metadata to associate with the generation
- version: Version identifier for the model or component
- level: Importance level of the generation (info, warning, error)
- status_message: Optional status message for the generation
- completion_start_time: When the model started generating the response
- model: Name/identifier of the AI model used (e.g., "gpt-4")
- model_parameters: Parameters used for the model (e.g., temperature, max_tokens)
- usage_details: Token usage information (e.g., prompt_tokens, completion_tokens)
- cost_details: Cost information for the model call
- prompt: Associated prompt template from Langfuse prompt management
- **kwargs: Additional keyword arguments (ignored)
Example:
generation = langfuse.start_generation(
    name="answer-generation",
    model="gpt-4",
    input={"prompt": "Explain quantum computing"}
)
try:
    # Call model API
    response = llm.generate(...)

    # Update with results
    generation.update(
        output=response.text,
        usage_details={
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens,
            "total_tokens": response.usage.total_tokens
        },
        cost_details={
            "total_cost": 0.0035
        }
    )
finally:
    generation.end()
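The same method can also record sampling parameters and the moment the first token arrived; Langfuse can derive time-to-first-token latency from completion_start_time. A minimal sketch for a streaming call, using only parameters documented above (the llm client and its stream method are hypothetical placeholders):

from datetime import datetime, timezone

generation = langfuse.start_generation(
    name="summarize",
    model="gpt-4",
    model_parameters={"temperature": 0.2, "max_tokens": 256},
    input={"prompt": "Summarize the quarterly report"},
)
try:
    first_token_at = None
    chunks = []
    for chunk in llm.stream(...):  # hypothetical streaming client
        if first_token_at is None:
            first_token_at = datetime.now(timezone.utc)  # first token observed
        chunks.append(chunk)
    generation.update(
        output="".join(chunks),
        completion_start_time=first_token_at,  # basis for time-to-first-token
    )
finally:
    generation.end()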
Inherited Members
- langfuse._client.span.LangfuseSpanWrapper
- trace_id
- id
- end
- update_trace
- score
- score_trace
class LangfuseEvent(LangfuseSpanWrapper):
    """Specialized span implementation for Langfuse Events."""

    def __init__(
        self,
        *,
        otel_span: otel_trace_api.Span,
        langfuse_client: "Langfuse",
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        environment: Optional[str] = None,
        version: Optional[str] = None,
        level: Optional[SpanLevel] = None,
        status_message: Optional[str] = None,
    ):
        """Initialize a new LangfuseEvent span.

        Args:
            otel_span: The OpenTelemetry span to wrap
            langfuse_client: Reference to the parent Langfuse client
            input: Input data for the event
            output: Output from the event
            metadata: Additional metadata to associate with the event
            environment: The tracing environment
            version: Version identifier for the model or component
            level: Importance level of the event (DEBUG, DEFAULT, WARNING, ERROR)
            status_message: Optional status message for the event
        """
        super().__init__(
            otel_span=otel_span,
            as_type="event",
            langfuse_client=langfuse_client,
            input=input,
            output=output,
            metadata=metadata,
            environment=environment,
            version=version,
            level=level,
            status_message=status_message,
        )
Specialized span implementation for Langfuse Events.
    def __init__(
        self,
        *,
        otel_span: otel_trace_api.Span,
        langfuse_client: "Langfuse",
        input: Optional[Any] = None,
        output: Optional[Any] = None,
        metadata: Optional[Any] = None,
        environment: Optional[str] = None,
        version: Optional[str] = None,
        level: Optional[SpanLevel] = None,
        status_message: Optional[str] = None,
    ):
        """Initialize a new LangfuseEvent span.

        Args:
            otel_span: The OpenTelemetry span to wrap
            langfuse_client: Reference to the parent Langfuse client
            input: Input data for the event
            output: Output from the event
            metadata: Additional metadata to associate with the event
            environment: The tracing environment
            version: Version identifier for the model or component
            level: Importance level of the event (DEBUG, DEFAULT, WARNING, ERROR)
            status_message: Optional status message for the event
        """
        super().__init__(
            otel_span=otel_span,
            as_type="event",
            langfuse_client=langfuse_client,
            input=input,
            output=output,
            metadata=metadata,
            environment=environment,
            version=version,
            level=level,
            status_message=status_message,
        )
Initialize a new LangfuseEvent span.
Arguments:
- otel_span: The OpenTelemetry span to wrap
- langfuse_client: Reference to the parent Langfuse client
- input: Input data for the event
- output: Output from the event
- metadata: Additional metadata to associate with the event
- environment: The tracing environment
- version: Version identifier for the model or component
- level: Importance level of the event (DEBUG, DEFAULT, WARNING, ERROR)
- status_message: Optional status message for the event
Inherited Members
- langfuse._client.span.LangfuseSpanWrapper
- trace_id
- id
- end
- update_trace
- score
- score_trace
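Unlike generations, events represent point-in-time observations, so they are typically created once with their final input and output rather than updated or ended later. A minimal sketch, assuming an initialized client and the v3 create_event helper (treat the helper name as an assumption if you are on a different SDK version):

langfuse.create_event(
    name="cache-hit",                      # point-in-time observation
    input={"key": "user:42"},
    output={"served_from": "redis"},
    level="DEBUG",
    status_message="served from cache",
)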
class LangfuseOtelSpanAttributes:
    # Langfuse-Trace attributes
    TRACE_NAME = "langfuse.trace.name"
    TRACE_USER_ID = "user.id"
    TRACE_SESSION_ID = "session.id"
    TRACE_TAGS = "langfuse.trace.tags"
    TRACE_PUBLIC = "langfuse.trace.public"
    TRACE_METADATA = "langfuse.trace.metadata"
    TRACE_INPUT = "langfuse.trace.input"
    TRACE_OUTPUT = "langfuse.trace.output"

    # Langfuse-observation attributes
    OBSERVATION_TYPE = "langfuse.observation.type"
    OBSERVATION_METADATA = "langfuse.observation.metadata"
    OBSERVATION_LEVEL = "langfuse.observation.level"
    OBSERVATION_STATUS_MESSAGE = "langfuse.observation.status_message"
    OBSERVATION_INPUT = "langfuse.observation.input"
    OBSERVATION_OUTPUT = "langfuse.observation.output"

    # Langfuse-observation of type Generation attributes
    OBSERVATION_COMPLETION_START_TIME = "langfuse.observation.completion_start_time"
    OBSERVATION_MODEL = "langfuse.observation.model.name"
    OBSERVATION_MODEL_PARAMETERS = "langfuse.observation.model.parameters"
    OBSERVATION_USAGE_DETAILS = "langfuse.observation.usage_details"
    OBSERVATION_COST_DETAILS = "langfuse.observation.cost_details"
    OBSERVATION_PROMPT_NAME = "langfuse.observation.prompt.name"
    OBSERVATION_PROMPT_VERSION = "langfuse.observation.prompt.version"

    # General
    ENVIRONMENT = "langfuse.environment"
    RELEASE = "langfuse.release"
    VERSION = "langfuse.version"

    # Internal
    AS_ROOT = "langfuse.internal.as_root"
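These constants are the attribute keys the SDK writes onto OpenTelemetry spans, so spans created with a plain OTel tracer can carry Langfuse semantics as well. A minimal sketch using only the standard OpenTelemetry API; it assumes the Langfuse client is initialized so its span processor is registered on the active tracer provider, and the JSON encoding shown is an assumption about how the backend expects complex values:

import json

from opentelemetry import trace

from langfuse import LangfuseOtelSpanAttributes

tracer = trace.get_tracer("my-instrumentation")

with tracer.start_as_current_span("fetch-context") as span:
    # Classify the raw OTel span for Langfuse using the constants above
    span.set_attribute(LangfuseOtelSpanAttributes.OBSERVATION_TYPE, "span")
    span.set_attribute(LangfuseOtelSpanAttributes.OBSERVATION_LEVEL, "DEFAULT")
    # OTel attribute values must be primitives, so complex values are
    # passed as JSON strings here (an assumption, not a documented contract)
    span.set_attribute(
        LangfuseOtelSpanAttributes.OBSERVATION_INPUT, json.dumps({"doc_id": 7})
    )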