To use Trace Viewer, select a trajectory from the 🤗 HuggingFace dataset using the size, complexity, and index selectors, or upload a JSON trace file with the 📁 Load JSON button. Once loaded, the first grid state becomes visible, alongside two panels showing the prompt tokens and the model output tokens for the current trace step.
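The exact schema of an uploaded trace file is not specified here; purely as an illustration, a minimal trace with hypothetical field names (`steps`, `grid`, `prompt_tokens`, `output_tokens`) could be round-tripped through JSON like this:

```python
import json

# Hypothetical trace structure; the real Load JSON schema may differ.
trace = {
    "steps": [
        {
            "grid": [["#", ".", "A"], ["#", ".", "G"]],  # rows of tile symbols
            "prompt_tokens": ["<bos>", "grid", ":"],
            "output_tokens": ["move", "right"],
        }
    ]
}

# Serialize and reload, as the viewer would when reading an uploaded file.
encoded = json.dumps(trace)
loaded = json.loads(encoded)
print(len(loaded["steps"]))  # number of trace steps available for playback
```

The field names above are assumptions for the sketch, not the viewer's documented format; consult a dataset trajectory for the authoritative layout.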
Use the ▶️ Play, ⏸️ Pause, ⏩ Next, ⏪ Back, and 🔄 Reset buttons to move across trace steps, and use the slider to control animation speed. A white arrow on the agent tile (A) indicates the action direction predicted by the model. A summary of grid and model parameters is available in the ⚙️ Config tab.
Hover over a token to show an info tooltip with next-token probabilities displayed as bar charts. Tokens with probe information also display a decoded grid showing the model's internal representation of tile types; tiles marked with "X" have no probe data available. Clicking a token pins its position; when advancing steps, the decoded grid updates to show probe data for the same token position in the new step (if available), letting you track how the model's internal representation evolves across steps. Use the layer selector dropdown above the decoded grid to switch between model layers. Hovering over decoded grid tiles shows the prediction probabilities for all tile classes as bar charts. Click the pinned token again to unpin it.
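The per-token bar charts described above show probability distributions, which conceptually come from a softmax over the model's raw logits. A minimal sketch in plain Python, with hypothetical logit values and action labels:

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens.
logits = [2.0, 1.0, 0.5, -1.0]
probs = softmax(logits)

# Sort (token, probability) pairs as a tooltip bar chart might rank them.
top = sorted(zip(["up", "down", "left", "right"], probs),
             key=lambda pair: pair[1], reverse=True)
```

The same idea applies to the decoded-grid tooltips, with tile classes in place of action tokens.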