Not Magic, Just Math: How AI Generates Text

By Jared Seigel, Branch Librarian

Artificial intelligence (AI) is everywhere these days, especially in the form of chatbots. These chatbots can help people write emails, answer questions or carry on conversations that feel surprisingly human. But how do these tools actually work? What makes them seem so intelligent? 

Modern chatbots are powered by large language models (LLMs), a type of computer program trained to process and generate language. They learn by reading an enormous amount of text, hundreds of billions of words from books, websites and articles. As they read, they look for patterns in how people use words and sentences. For example, they might learn that the most likely word to complete the sentence “The sky is ____” is “blue.”
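
If you're curious what that kind of pattern-learning looks like at its very simplest, here is a small sketch in Python (my own illustration, not how any real chatbot is built): it counts which word tends to follow which in a toy collection of sentences, then uses those counts to guess the next word.

```python
from collections import Counter, defaultdict

# A toy "training corpus". Real models read hundreds of billions of
# words; a few repeated sentences are enough to show the idea.
corpus = (
    "the sky is blue . "
    "the sky is blue . "
    "the sky is clear ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Rank the words seen after `word` by how often they appeared."""
    counts = follows[word]
    total = sum(counts.values())
    return [(w, n / total) for w, n in counts.most_common()]

print(predict_next("is"))
# [('blue', 0.666...), ('clear', 0.333...)]
```

A real LLM does something far more sophisticated, weighing relationships across whole passages rather than single word pairs, but the core job is the same: predict what comes next.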

LLMs don’t read text the way we do. Instead, they break everything down into smaller pieces called tokens, which can be a full word, part of a word or even a punctuation mark. The model learns to predict which tokens are most likely to come next. It stores these patterns as statistical weights: numbers that represent the strength of different word relationships (e.g., sky and blue). This learning happens through a process called self-supervised learning: the model guesses the next token, compares its guess with what the text actually says, and adjusts itself, over and over, billions of times. Along the way it tunes billions of internal settings, called parameters, to improve its accuracy.

AI breaks down text into small parts called tokens, then uses numerical values learned during training to “understand” how those parts connect and make sense together. Image generated using Microsoft Copilot.
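
You can even see tokens for yourself using tiktoken, an open-source tokenizer library published by OpenAI. The exact way words get split varies from model to model, so the sample output below is only approximate.

```python
# pip install tiktoken
import tiktoken

# "cl100k_base" is the encoding used by recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Librarians love unexpectedly long words."
token_ids = enc.encode(text)  # text -> a list of integer token IDs

# Decode each ID individually to see the pieces: whole words,
# word fragments, spaces and punctuation all count as tokens.
pieces = [enc.decode([tid]) for tid in token_ids]
print(pieces)
# Output will look something like:
# ['Libr', 'arians', ' love', ' unexpectedly', ' long', ' words', '.']
```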

Many of the most advanced of these models are known as GPTs, short for generative pre-trained transformers, named for the transformer architecture they’re built on. Tools like ChatGPT, Google Gemini and Claude all use transformer-based models to chat, answer questions and help with writing or research. You can guide what they do just by how you phrase a request, a technique known as prompt engineering.
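
Here is a rough sketch of what that looks like in code, using OpenAI’s Python library. The model name is just a stand-in, and you would need your own API key; the only thing that changes between the two calls is the wording of the prompt.

```python
# pip install openai   (requires an OPENAI_API_KEY in your environment)
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt to a chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute a current one
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Same topic, two phrasings: the wording steers the response.
print(ask("Explain how libraries catalog books."))
print(ask("Explain how libraries catalog books to a ten-year-old, in three sentences."))
```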

These models are so good at predicting patterns in language that their responses can sound surprisingly natural, as if they came from a real mind. But AI doesn’t have a mind, and GPTs aren’t perfect. Because they learn from human-written text, they can pick up the biases and mistakes in that text. And while their abilities can seem impressive, they don’t really “understand” anything.

So while chatbots may sound smart (except when they don’t), it’s not because they understand us the way a person would. They’re just very good at recognizing patterns and making educated guesses. As AI tools become more common, a basic understanding of how they work is becoming an essential part of digital literacy. By recognizing both their strengths and limitations, we can use them more thoughtfully, ask better questions and stay critical as well as curious about what’s coming next.