LLM observability dashboard (beta)


The LLM observability dashboard provides an overview of your LLM usage and performance. It includes insights on:

  • Users
  • Traces
  • Costs
  • Generations
  • Latency
*Screenshot: the LLM observability dashboard*

This dashboard is a great starting point for understanding your LLM usage and performance. You can use it to answer questions like:

  • Are users using my LLM-powered features?
  • What are my LLM costs by customer, by model, and in total?
  • Are generations erroring?
  • How many of my users are interacting with my LLM features?
  • Are there generation latency spikes?

To dive into specific generations, open the Generations tab to see a list of recent generation events captured by PostHog.
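
If you instrument your app manually rather than through one of the SDK integrations, you can capture generation events yourself. Below is a minimal sketch using the PostHog Python SDK. The `$ai_generation` event name and `$ai_*` property names follow PostHog's LLM observability schema, but the model name, token counts, and user ID here are placeholder values; check the latest docs for the full property list.

```python
# Minimal sketch: manually capturing an LLM generation event with the
# PostHog Python SDK. Values below are placeholders for illustration.
import time
import uuid

from posthog import Posthog

posthog = Posthog("<your_project_api_key>", host="https://us.i.posthog.com")

trace_id = str(uuid.uuid4())  # groups related generations into one trace
start = time.time()

# ... call your LLM provider here and collect its response ...

posthog.capture(
    distinct_id="user_123",  # placeholder user ID
    event="$ai_generation",
    properties={
        "$ai_trace_id": trace_id,
        "$ai_model": "gpt-4o-mini",          # placeholder model name
        "$ai_provider": "openai",
        "$ai_input_tokens": 12,              # placeholder token counts
        "$ai_output_tokens": 5,
        "$ai_latency": time.time() - start,  # seconds
        "$ai_http_status": 200,
    },
)
```

Events captured this way appear alongside SDK-captured generations in the dashboard and the Generations tab.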

