🔍 Token Log Probability Analyzer - Model Comparison
Compare how two ERNIE models predict each token in your text, with a detailed per-token log probability breakdown.
(Interface layout: two side-by-side Token Analysis panels, one for ERNIE-4.5-Base-PT and one for ERNIE-4.5-PT, followed by clickable example inputs.)
How to Interpret Results
This interface compares two ERNIE models side by side:
- ERNIE-4.5-Base-PT (left): Base model, better at general language patterns
- ERNIE-4.5-PT (right): Instruction-tuned model, better at following complex instructions
Analysis Components
For each model, you'll see:
- Summary: Key metrics including Total Log Probability and average token probability
- Token Analysis Table: Detailed breakdown of each token's log probability and probability (a sketch of how these values are computed follows this list)
- Token Probability Chart: Visual representation of each token's prediction probability
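To make the table concrete, here is a minimal sketch of how per-token log probabilities can be computed with PyTorch and Hugging Face Transformers. The checkpoint name is a placeholder assumption, not necessarily the exact one this interface loads; any causal language model works the same way.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: placeholder checkpoint id -- substitute the ERNIE model you are analyzing.
model_name = "baidu/ERNIE-4.5-0.3B-PT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Log-softmax over the vocabulary gives log P(token | context) at every position.
log_probs = torch.log_softmax(logits, dim=-1)

# Logits at position i predict token i+1, so shift targets by one.
target_ids = inputs["input_ids"][0, 1:]
token_log_probs = log_probs[0, :-1].gather(1, target_ids.unsqueeze(1)).squeeze(1)

for tok_id, lp in zip(target_ids, token_log_probs):
    token = tokenizer.decode([tok_id.item()])
    print(f"{token:>12}  log_prob={lp.item():+.4f}  prob={lp.exp().item():.4f}")
```

Each printed row corresponds to one row of the Token Analysis Table: the token, its log probability, and the probability recovered by exponentiating.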
Model Comparison
- Model Comparison Summary: Shows which model has higher overall confidence
- Model Comparison Chart: Side-by-side visualization of token probabilities (see the plotting sketch after this list)
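As an illustration of how such a side-by-side chart can be drawn, here is a hedged matplotlib sketch. The token labels and probability values are made-up placeholders; real values would come from running both models as sketched above.

```python
import numpy as np
import matplotlib.pyplot as plt

tokens = ["The", "quick", "brown", "fox"]      # placeholder tokens
probs_base = [0.12, 0.08, 0.31, 0.54]          # hypothetical ERNIE-4.5-Base-PT values
probs_tuned = [0.10, 0.11, 0.28, 0.61]         # hypothetical ERNIE-4.5-PT values

x = np.arange(len(tokens))
width = 0.35

fig, ax = plt.subplots()
ax.bar(x - width / 2, probs_base, width, label="ERNIE-4.5-Base-PT")
ax.bar(x + width / 2, probs_tuned, width, label="ERNIE-4.5-PT")
ax.set_xticks(x)
ax.set_xticklabels(tokens)
ax.set_ylabel("P(token | context)")
ax.set_title("Per-token probability, model vs. model")
ax.legend()
plt.show()
```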
Key Concepts
Log Probability:
- Ranges from -∞ to 0
- Closer to 0 = higher model confidence
- Used instead of raw probability to avoid numerical underflow (demonstrated in the example after this list)
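A quick numerical illustration of the underflow point: multiplying two hundred raw probabilities underflows double precision to 0.0, while summing their logs stays finite.

```python
import math

probs = [0.01] * 200                    # 200 tokens, each with probability 0.01

product = 1.0
for p in probs:
    product *= p
print(product)                          # 0.0 -- 1e-400 underflows double precision

total_log_prob = sum(math.log(p) for p in probs)
print(total_log_prob)                   # about -921.03 -- finite and easy to compare
```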
Total Log Probability:
- Sum of individual token log probabilities
- Measures overall model confidence in the entire sequence
- Allows direct comparison between models on the same text; the total closer to 0 indicates higher confidence (see the sketch below)
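A small sketch of the total and average metrics, assuming the per-token log probabilities have already been computed; the values below are hypothetical. Here the average token probability is taken as the geometric mean, i.e. exp of the mean log probability, though the interface may define it differently.

```python
import torch

# Hypothetical per-token log probabilities from two models on the same text.
base_lp = torch.tensor([-2.1, -0.4, -1.3, -0.9])    # ERNIE-4.5-Base-PT
tuned_lp = torch.tensor([-1.8, -0.6, -1.1, -0.7])   # ERNIE-4.5-PT

for name, lp in [("Base", base_lp), ("Tuned", tuned_lp)]:
    total = lp.sum().item()                  # Total Log Probability
    avg_prob = lp.mean().exp().item()        # geometric-mean token probability
    print(f"{name}: total log prob = {total:.2f}, avg token prob = {avg_prob:.3f}")

# The total closer to 0 means that model assigned the sequence higher probability.
```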
Why Compare Models?
- Base models may be better at general language
- Instruction-tuned models may be better at specific tasks
- Model strengths vary with the type of text, so a side-by-side comparison shows which one fits your input