Quiet-STaR trains AI to think before it speaks, promising a leap in machine understanding


Researchers at Stanford University have developed a method called “Quiet-STaR” that enables AI systems to learn to think between the lines. This could pave the way for more versatile and efficient AI that can better solve complex tasks.

When humans write or speak, we often pause to think. We consider how best to phrase an argument, or what the other person is thinking.

This “thinking” is hidden between the lines of almost all texts – for example, in the intermediate steps of mathematical proofs that are not explicitly mentioned. So far, AI has struggled to capture such unspoken thought processes. But that could change.

Internal reasoning helps LLMs generate better answers

Quiet-STaR (Quiet Self-Taught Reasoner) teaches an LLM to think quietly before it speaks. At each point in a text, the AI generates possible reasons why the text continues one way rather than another.

Through trial and error, it learns which considerations lead to the most likely continuations: it thinks before it "speaks", that is, before it generates the next part of the text.

The technology is based on the “Self-Taught Reasoner” (STaR), which teaches AI systems to derive reasons from a few examples and to learn from correct answers. However, while STaR only works for certain question-answer tasks, Quiet-STaR is designed to teach language models to infer implicit reasoning from any text.
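The original STaR loop can be sketched in a few lines: sample a rationale and answer for each problem, keep only the rationales that led to a correct answer, and fine-tune on what was kept. The function names and the stand-in "model" below are invented for illustration; a real system would call an LLM and a fine-tuning API.

```python
# Minimal sketch of one STaR bootstrap iteration (names invented).

def star_round(problems, generate_rationale, is_correct, finetune):
    """Sample rationales, keep those that yield correct answers,
    and fine-tune on the surviving (problem, rationale) pairs."""
    kept = []
    for problem, gold in problems:
        rationale, answer = generate_rationale(problem)
        if is_correct(answer, gold):
            kept.append((problem, rationale))
    finetune(kept)
    return kept

# Toy stand-in for the model: it only "reasons" correctly about addition.
def gen(problem):
    if "+" in problem:
        a, b = map(int, problem.split("+"))
        return f"add {a} and {b}", str(a + b)
    return "guess", "0"

trained_on = []
kept = star_round(
    [("2+3", "5"), ("7*6", "42")],
    gen,
    lambda ans, gold: ans == gold,
    trained_on.extend,
)
print(kept)  # only the addition example survives the correctness filter
```

The filter is what makes the loop self-taught: the model's own successful rationales become its next round of training data, which is exactly the mechanism Quiet-STaR generalizes from question-answer pairs to arbitrary text.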

Video: Zelikman et al.

This sounds simple, but it poses significant challenges: the AI has to learn both how to generate "thoughts" and how to use them effectively. Generating and evaluating many candidate continuations at every position in a text is also computationally expensive.

The researchers tackle this with efficient parallel sampling algorithms and techniques such as "teacher forcing," in which the model is fed the text's actual next tokens during training rather than its own predictions.
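The training signal behind this can be sketched as follows: under teacher forcing, the true next tokens are fed in as context, and a thought is rewarded by how much it improves the log-likelihood of those tokens (a REINFORCE-style signal). The `log_prob` function below is a hand-written stand-in for a real language model, and the "carry the 1" example is invented.

```python
# Hedged sketch of the reward for a thought under teacher forcing:
# reward = log P(true tokens | context, thought) - log P(true tokens | context).
import math

def thought_reward(log_prob, context, thought, true_tokens):
    """Score a thought by the log-likelihood gain on the true continuation.
    Note the teacher forcing: true_tokens[:i] (the real text, not samples)
    is fed back in as context at each step."""
    with_thought = sum(
        log_prob(context + [thought] + true_tokens[:i], tok)
        for i, tok in enumerate(true_tokens)
    )
    without = sum(
        log_prob(context + true_tokens[:i], tok)
        for i, tok in enumerate(true_tokens)
    )
    return with_thought - without

# Invented stand-in model: the thought "carry the 1" makes digit "2" likely.
def toy_log_prob(ctx, tok):
    p = 0.9 if ("carry the 1" in ctx and tok == "2") else 0.1
    return math.log(p)

r = thought_reward(toy_log_prob, ["1", "3", "+", "9", "="],
                   "carry the 1", ["2", "2"])
print(r)  # positive: the helpful thought is reinforced
```

A positive reward means the thought made the real continuation more predictable, so the model is nudged toward generating thoughts like it; unhelpful thoughts receive zero or negative reward.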

