Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
Mistral-7B continually pretrained with Quiet-STaR (https://arxiv.org/abs/2403.09629), which trains the model to generate 8 hidden thought tokens before each output token.
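The decoding pattern described above can be illustrated with a minimal toy sketch: before each visible output token, a fixed number of hidden thought tokens are appended to the context (conditioning the next prediction) but never emitted. This is not the model's actual inference code; `generate_with_thoughts`, `step_fn`, and the dummy model below are hypothetical names for illustration only.

```python
# Toy sketch of Quiet-STaR-style decoding (illustrative only; the real model
# learns its thought tokens during continued pretraining).

def generate_with_thoughts(step_fn, prompt, n_output=3, n_thoughts=8):
    """Before each emitted output token, append n_thoughts hidden 'thought'
    tokens to the context; only output tokens are returned."""
    context = list(prompt)
    outputs = []
    for _ in range(n_output):
        for _ in range(n_thoughts):              # 8 hidden thought tokens
            context.append(step_fn(context, thought=True))
        tok = step_fn(context, thought=False)    # one visible output token
        context.append(tok)
        outputs.append(tok)
    return outputs

# Dummy "model": tags each token as thought ('t') or output ('o').
toy_step = lambda ctx, thought: ("t" if thought else "o") + str(len(ctx))

out = generate_with_thoughts(toy_step, ["p0"], n_output=2, n_thoughts=8)
print(out)
```

The key point the sketch captures is that the thought tokens stay in the context (so later predictions can attend to them) while the returned sequence contains only the output tokens.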