Latent Reasoning

  • Training Large Language Models to Reason in a Continuous Latent Space [paper][code]

    • FAIR at Meta, UC San Diego

  • LLM Pretraining with Continuous Concepts [paper][code]

    • FAIR at Meta, KAIST, UC San Diego

  • Token Assorted: Mixing Latent and Text Tokens for Improved Language Model Reasoning [paper]

    • Meta AI, UC Berkeley, UCL

  • Scalable Language Models with Posterior Inference of Latent Thought Vectors [paper]

    • UCLA, Lambda Inc, Salesforce Research, KUNGFU.AI

  • Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach [paper]

    • Tübingen AI Center, University of Maryland, College Park, Lawrence Livermore National Laboratory

  • Inner Thinking Transformer: Leveraging Dynamic Depth Scaling to Foster Adaptive Internal Thinking [paper]

    • CAS, Baidu, Beijing Normal University

  • Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens

    • Date: 20 Jun 2025

    • Three-stage training that lets the VLM think with latent images (a toy sketch follows this list):

      1. Train the VLM to reconstruct the helper image in latent space

      2. Optimize only the reasoning loss

      3. Reinforcement-learning fine-tuning
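
Below is a minimal, hypothetical PyTorch sketch of that three-stage recipe. The `LatentVLM` module, the MSE latent-reconstruction objective, the REINFORCE-style stage 3, and every hyperparameter are illustrative assumptions for exposition, not the paper's actual code.

```python
# Hypothetical three-stage training sketch; all names and losses are
# illustrative stand-ins, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D = 100, 64

class LatentVLM(nn.Module):
    """Toy stand-in for a VLM that can emit latent visual tokens."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.GRU(D, D, batch_first=True)
        self.latent_head = nn.Linear(D, D)    # predicts latent image tokens
        self.text_head = nn.Linear(D, VOCAB)  # predicts text tokens

    def forward(self, x):
        h, _ = self.backbone(x)
        return self.latent_head(h), self.text_head(h)

model = LatentVLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: input embeddings, latent targets from a "helper image",
# and target text token ids (all random placeholders).
x = torch.randn(8, 16, D)
helper_latents = torch.randn(8, 16, D)
text_targets = torch.randint(0, VOCAB, (8, 16))

def text_loss(logits):
    return F.cross_entropy(logits.reshape(-1, VOCAB), text_targets.reshape(-1))

# Stage 1: ground the latent tokens on the helper image while also
# supervising the text output.
for _ in range(10):
    lat, logits = model(x)
    loss = F.mse_loss(lat, helper_latents) + text_loss(logits)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: drop the reconstruction term and optimize only the reasoning
# (text) loss, letting the latents drift toward whatever helps the answer.
for _ in range(10):
    _, logits = model(x)
    loss = text_loss(logits)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 3: RL fine-tuning, shown as REINFORCE with a toy per-token-accuracy
# reward and a mean baseline.
for _ in range(10):
    _, logits = model(x)
    dist = torch.distributions.Categorical(logits=logits)
    sample = dist.sample()                                 # (batch, seq)
    reward = (sample == text_targets).float().mean(dim=1)  # (batch,)
    advantage = reward - reward.mean()
    loss = -(dist.log_prob(sample).mean(dim=1) * advantage).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```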
