Think of tokens as credits for using our AI brain, powered by Large Language Models (LLMs). Just as ChatGPT limits your daily messages, we use tokens to manage our AI resources fairly and efficiently. LLMs are expensive to run, which is why we need to balance usage across all users.

Why Do We Need Tokens?

Running LLMs is like running a supercomputer: it's resource-intensive and costly. Think of tokens as your AI budget:

  • Each conversation with our AI uses tokens
  • Different operations consume different amounts of tokens
  • Your plan includes a monthly token allowance
  • Tokens automatically refresh each month

How Tokens Work

  • Monthly Refresh: Your plan's token allowance refreshes at the start of each month
  • Automatic Metering: Once your plan tokens are used up, your account switches to pay-as-you-go billing
  • Same Rate: Metered usage costs the same per token as your plan's allowance
  • Token Accumulation: Unused tokens carry forward and are added to next month's allowance (see the example below)
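
Here is a simplified sketch of how a billing cycle plays out. The plan size, usage figures, per-token rate, and the run_month helper are all hypothetical, chosen only to illustrate the refresh, carry-forward, and metering rules above; your actual allowance and rate depend on your plan.

```python
# Hypothetical illustration of the token lifecycle described above.
# The plan allowance, usage numbers, and per-token rate are made up.

PLAN_ALLOWANCE = 1_000_000   # tokens included in a hypothetical plan
METERED_RATE = 0.000002      # assumed price per token; metered and plan rates match

def run_month(carry_over: int, usage: int) -> tuple[int, float]:
    """Return (tokens to carry forward, metered charge for the month)."""
    balance = PLAN_ALLOWANCE + carry_over    # monthly refresh + unused tokens
    if usage <= balance:
        return balance - usage, 0.0          # still within the allowance
    overage = usage - balance                # tokens beyond the allowance
    return 0, overage * METERED_RATE         # pay-as-you-go at the same rate

# Month 1: light usage, so 200,000 unused tokens carry forward.
carry, charge = run_month(carry_over=0, usage=800_000)
print(carry, charge)   # 200000 0.0

# Month 2: heavy usage; carried-over tokens are spent first,
# then the remaining 300,000 tokens are billed at the metered rate.
carry, charge = run_month(carry_over=carry, usage=1_500_000)
print(carry, charge)   # 0 0.6
```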

Tips for Token Efficiency

  1. Be Specific: Clear, precise instructions use fewer tokens than vague requests that need follow-up
  2. Batch Changes: Combine several small updates into one request
  3. Plan Ahead: Organize your development into phases so each request stays focused

If you run out of tokens before your billing cycle ends, your account automatically switches to metered billing at the same per-token rate as your plan. Any tokens remaining at the end of the month are added to your next month's allowance.