
What is a Token, in the context of ChatGPT

Since OpenAI's breakthrough, "token" has become a term worth explaining. It recurs both when describing the capacity of a GPT model and when calculating how much you will pay to use the service.

How Large Language Models Work

At a high level, Large Language Models (LLMs) such as the GPT-4 model behind ChatGPT work by using a massive neural network to process vast amounts of text data. The model is trained on a large corpus of text, such as text from the internet or a collection of books, and learns to recognise patterns in the data. This training allows the model to generate coherent and contextually relevant responses to new text prompts.

When a user inputs a prompt, the large language model uses its trained neural network to generate a response. The model doesn't simply regurgitate pre-written responses, but rather generates a response on the fly by drawing on its learned patterns and context from the prompt. This is why large language models can generate human-like responses that seem tailored to the specific prompt.

Tokens are therefore important both for users in everyday use and for the way the technology itself functions.

To provide you with an answer, the GPT technology first needs to tokenize the text. It divides a text into smaller units called tokens. These can be whole words, parts of words, or even individual characters.

In the case of GPT, the text you want to input into the service is divided into tokens before being fed into the model. Each token represents a meaningful unit within the text.

For example:

Sentence: “Learning to work effectively, however, can positively impact not just your workdays but also your personal mental health.”

Tokenization: [ “Learning”, “to”, “work”, “effectively”, “,”, “however”, “,”, “can”, “positively”, “impact”, “not”, “just”, “your”, “workdays”, “but”, “also”, “your”, “personal”, “mental”, “health”, “.” ]

Pay attention to the "," and ".", which are also tokens because they carry meaning in the sentence. The sentence therefore consists of 21 tokens.
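Note that the word-by-word split above is a simplification: GPT models actually use a byte-pair-encoding tokenizer, so longer or rarer words are often split into sub-word pieces. If you want to see exactly how a text is tokenized and counted, OpenAI's open-source tiktoken library can be used. The sketch below assumes GPT-3.5 Turbo as the target model; pick the encoding for the model you actually use.

```python
# Minimal sketch: count tokens the way GPT-3.5 Turbo does, using OpenAI's
# open-source tiktoken library (pip install tiktoken). The model name is
# an assumption; swap in the model you actually target.
import tiktoken

text = ("Learning to work effectively, however, can positively impact "
        "not just your workdays but also your personal mental health.")

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
tokens = enc.encode(text)

print(f"Token count: {len(tokens)}")
# Decode each token individually to see where the text was split.
print([enc.decode([t]) for t in tokens])
```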

As mentioned, this matters to the user, who wants the technology to understand all the information being entered so that it can provide the best possible answers. A detail like a comma can be crucial to the meaning: "Let's eat Grandma!" versus "Let's eat, Grandma!" makes a fateful difference.

Token limitations

The limit in OpenAI's GPT-3.5 is 4,096 tokens. The number of tokens will expand as the technology evolves: GPT-4 allows 8,192 tokens in its 8K edition and 32,768 tokens in the 32K edition. GPT-3.5 and GPT-3.5 Turbo are essentially GPT-3 with additional technology that improves response time and customisation.

In an average document, a word is five to six letters long. Optimistically, with one token per word, 4,096 tokens would therefore cover 4,096 words, or approximately 20,000 characters; in practice the figure is somewhat lower, since some words are split into several tokens.

On average, an A4 page holds about 500 tokens, which means GPT-4 32K can fit roughly 64 pages of information within its limit. The estimated price for asking a question about all that information is $1.92.
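These figures follow from simple arithmetic. The sketch below reproduces them under the article's own assumptions of roughly 500 tokens per A4 page and the GPT-4 32K prompt price of $0.06 per 1,000 tokens:

```python
# Back-of-the-envelope estimate using the assumptions above:
# ~500 tokens per A4 page, 32,000-token window, $0.06 per 1,000 prompt tokens.
TOKENS_PER_PAGE = 500
CONTEXT_WINDOW = 32_000        # GPT-4 32K
PROMPT_PRICE_PER_1K = 0.06     # USD, GPT-4 32K prompt price

pages = CONTEXT_WINDOW / TOKENS_PER_PAGE                         # ≈ 64 pages
full_prompt_cost = CONTEXT_WINDOW / 1000 * PROMPT_PRICE_PER_1K   # ≈ $1.92

print(f"Pages that fit in the window: {pages:.0f}")
print(f"Cost of sending a full prompt: ${full_prompt_cost:.2f}")
```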

The token limit applies both to the information you send into the model and to the information it returns. The price, however, is not necessarily the same for input and output; it is set per 1,000 tokens.

When Ayfie delivers GPT technology, we do so through Microsoft Azure.

Their models have the following prices:


Model             Prompt (input, per 1,000 tokens)     Completion (output, per 1,000 tokens)
GPT-3.5 Turbo     $0.002                               $0.002
GPT-4 8K          $0.030                               $0.060
GPT-4 32K         $0.060                               $0.120
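To see how these rates translate into a bill, the helper below multiplies prompt and completion token counts by the per-1,000-token prices from the table. It is only an illustrative sketch; the prices are those listed above and may change.

```python
# Illustrative helper: estimate the cost of one request in USD from the
# per-1,000-token prices in the table above (subject to change).
PRICES = {
    "gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002},
    "gpt-4-8k":      {"prompt": 0.030, "completion": 0.060},
    "gpt-4-32k":     {"prompt": 0.060, "completion": 0.120},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated cost in USD for a single request."""
    price = PRICES[model]
    return (prompt_tokens / 1000) * price["prompt"] \
         + (completion_tokens / 1000) * price["completion"]

# Example: a 4,000-token prompt answered with 500 tokens on GPT-4 8K.
print(f"${estimate_cost('gpt-4-8k', 4000, 500):.3f}")  # ≈ $0.150
```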

For this reason, it is important to understand what a token is: first to understand what the technology makes possible, and then to understand how using it incurs costs.

Example of cost:

- The document you are reading consists of 593 tokens.
- If this document were translated, transcribed, or condensed with GPT-3.5, the upload would cost approximately NOK 0.01, €0.001, or $0.001.

The answer would cost the same or less, depending on the question put to the model.
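To check these numbers: 593 tokens at GPT-3.5 Turbo's $0.002 per 1,000 tokens works out as shown below. The NOK figure assumes an exchange rate of roughly 10 NOK per USD, which is an illustrative assumption, not part of the pricing.

```python
# Verify the upload cost for this document at GPT-3.5 Turbo prices.
tokens = 593
usd = tokens / 1000 * 0.002   # ≈ $0.0012
nok = usd * 10                # assumes ~10 NOK per USD (illustrative rate only)
print(f"≈ ${usd:.4f}  (≈ {nok:.2f} NOK)")
```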

Polite responses from the model with unnecessary elaboration, while pleasant, might become a "cash cow" for OpenAI in the future. This is something that can be controlled in the solution's configuration.

At Ayfie, we offer our customers OpenAI through Ayfie AI Personal Assistant, securely within Ayfie's domain and delivered via Microsoft Azure. It is currently free to test and use for our customers and any other interested end-users worldwide, existing clients or not. However, this is a freemium model that we will review over the coming months.

In cases where we set up Ayfie Personal Assistant on the customer's own domain, token costs come in addition to the cost of the solution. If this is something you would be interested in discussing in more detail, please contact us via the contact form.

Sources:
- Azure OpenAI Service Pricing