Comment by hrpnk a year ago

Is there a reason you didn't just implement OpenTelemetry (OT) straight away? Curious about the trade-offs of opting for a home-grown telemetry format inspired by OT instead.

calebkaiser a year ago

Good question! It mostly came down to implementation speed, as well as some uncertainty about performance/overhead. We will be releasing OpenTelemetry-compatible ingestion endpoints in the near future, but since Opik has so many features that aren't related to OT, we decided to move forward without it for the initial release. It is a great project, though, and something we will be implementing soon; it will be especially useful for building out integrations with frameworks that are already OpenTelemetry-compatible.
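
To make that concrete: since OTLP is the standard wire format, pointing a stock OpenTelemetry SDK at an ingestion endpoint is roughly all it should take once those endpoints land. A rough sketch with the Python SDK (the endpoint URL and auth header below are just placeholders, not our actual API):

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

    # Point the exporter at an OTLP-compatible ingestion endpoint (placeholder URL).
    exporter = OTLPSpanExporter(
        endpoint="https://example.com/v1/traces",
        headers={"Authorization": "Bearer <api-key>"},  # placeholder auth header
    )

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)

    # Any instrumentation that emits OTel spans now flows to that endpoint.
    tracer = trace.get_tracer("example-app")
    with tracer.start_as_current_span("llm-call") as span:
        span.set_attribute("model", "gpt-4o")  # arbitrary example attribute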

  • baggiponte a year ago

    Have you seen the two “prototypes” of a standard for LLM telemetry? One is OpenLLMetry, maintained by the folks at Traceloop; it seems to be the more popular of the two. The other is OpenInference, IIRC by Arize AI.
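
    For reference, getting started with OpenLLMetry is basically a single init call on top of the standard OTel SDK. Roughly something like this, written from memory, so the exact signature may be off:

        # OpenLLMetry quickstart, roughly (pip install traceloop-sdk).
        # Written from memory; check the Traceloop docs for the current API.
        from traceloop.sdk import Traceloop

        Traceloop.init(app_name="my-llm-app")  # auto-instruments common LLM client libraries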

    • calebkaiser a year ago

      Of the two, the only one I've ever personally explored is OpenLLMetry. Extremely cool project. In general, this is one of those areas where the field still needs to "shake out" a bit.

    • kakaly0403 a year ago

      There is a GenAI semantic conventions spec from OpenTelemetry for tracing LLM-based applications. Currently there are three library implementations of this spec: Langtrace, OpenLLMetry, and OpenLit. Microsoft has an implementation for .NET as well. OpenInference, though OpenTelemetry-compatible, does not adhere to the standard spec.
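
      In practice the spec mostly boils down to a set of gen_ai.* span attributes. A rough sketch with the plain OTel Python SDK (attribute names are from my reading of the spec, so double-check against the current version):

          from opentelemetry import trace

          tracer = trace.get_tracer("genai-example")

          # An LLM call recorded as a span carrying GenAI semantic-convention attributes.
          with tracer.start_as_current_span("chat gpt-4o") as span:
              span.set_attribute("gen_ai.system", "openai")
              span.set_attribute("gen_ai.operation.name", "chat")
              span.set_attribute("gen_ai.request.model", "gpt-4o")
              span.set_attribute("gen_ai.usage.input_tokens", 42)
              span.set_attribute("gen_ai.usage.output_tokens", 128)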