About This Demo
This demonstration shows how token context windows work in Large Language Models:
- Each message shows its token count
- Messages outside the context window appear faded
- Like an LLM's active memory, only the most recent tokens that fit within the context window are available for processing; older tokens "fade" from memory
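The fading behavior described above can be sketched as a simple truncation step: keep the newest messages whose combined token counts still fit in the window, dropping the oldest first. This is a minimal illustration, not the demo's actual implementation; the message structure, token counts, and window size are all assumed for the example.

```python
def fit_to_window(messages, max_tokens):
    """Return the most recent messages whose total tokens fit in max_tokens."""
    kept = []
    total = 0
    # Walk from newest to oldest; stop once the window is full.
    for msg in reversed(messages):
        if total + msg["tokens"] > max_tokens:
            break  # older messages "fade" out of the window
        kept.append(msg)
        total += msg["tokens"]
    kept.reverse()  # restore chronological order
    return kept

# Illustrative conversation with made-up token counts.
history = [
    {"text": "Hello!", "tokens": 3},
    {"text": "Tell me about context windows.", "tokens": 8},
    {"text": "Sure, a context window limits how much text the model sees.", "tokens": 12},
]
active = fit_to_window(history, max_tokens=20)
# Only the two newest messages (8 + 12 = 20 tokens) fit;
# the oldest message has faded out.
```

A real chat application would compute token counts with the model's tokenizer rather than storing them by hand, but the trimming logic is the same.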
Want to learn more about tokens? Check out the Tokenizer Playground.