Introducing an Enhanced AI Reasoning Technique

by Carl Nash


Executives using an AI computing simulation.
Image: Envato/DC_Studio

Researchers from AI company DeepSeek and Tsinghua University have introduced a new technique to enhance “reasoning” in large language models (LLMs).

Reasoning capabilities have emerged as a critical benchmark in the race to build top-performing generative AI systems. China and the U.S. are actively competing to develop the most powerful and practical models. According to a Stanford University report in April, China’s LLMs are rapidly closing the gap with their U.S. counterparts. In 2024, China produced 15 notable AI models compared to 40 in the U.S., but it leads in patents and academic publications.

What is DeepSeek’s new technique?

DeepSeek researchers published a paper, titled “Inference-Time Scaling for Generalist Reward Modeling,” on Cornell University’s arXiv, the archive of scientific papers. Note that papers published on arXiv are not necessarily peer-reviewed.

In the paper, the researchers detailed a combination of two AI training methods: generative reward modeling (GRM) and self-principled critique tuning (SPCT).

“In this work, we investigate how to improve reward modeling (RM) with more inference compute for general queries, i.e. the inference-time scalability of generalist RM, and further, how to improve the effectiveness of performance-compute scaling with proper learning methods,” the researchers wrote.


Reward modeling is the process of training AI to align more closely with user preferences. With self-principled critique tuning, the model generates its own critiques, or “principles,” during inference and uses them to refine its answers. The combined approach aims to let LLMs deliver more relevant answers while making better use of inference compute.
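In rough terms, the idea can be sketched as follows. The snippet below is a minimal illustration, not DeepSeek’s implementation: it assumes a generic `generate` callable standing in for any LLM, and it shows how a generative reward model might self-generate principles and critiques at inference time, then average several samples so that extra compute buys a more reliable score.

```python
from statistics import mean
from typing import Callable

def score_once(generate: Callable[[str], str], query: str, answer: str) -> float:
    """One hypothetical GRM pass: self-generate principles, critique the answer, emit a score."""
    principles = generate(
        f"List the principles a good answer to this query should satisfy:\n{query}"
    )
    critique = generate(
        f"Principles:\n{principles}\n\nQuery:\n{query}\n\nAnswer:\n{answer}\n\n"
        "Critique the answer against the principles, then end with a score from 1 to 10."
    )
    # Naive extraction of the final numeric score from the critique text.
    digits = [int(tok) for tok in critique.split() if tok.isdigit()]
    return float(digits[-1]) if digits else 5.0

def score_with_inference_scaling(
    generate: Callable[[str], str], query: str, answer: str, samples: int = 8
) -> float:
    """Spend more inference compute: sample several critiques and average their scores."""
    return mean(score_once(generate, query, answer) for _ in range(samples))
```

In this sketch, increasing `samples` is the inference-time scaling knob: more sampled critiques cost more compute but, in principle, yield a steadier reward signal without retraining the model.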

“Empirically, we show that SPCT significantly improves the quality and scalability of GRMs, outperforming existing methods and models in various RM benchmarks without severe biases, and could achieve better performance compared to training-time scaling,” the researchers wrote.

They called the models trained with this method DeepSeek-GRM.

“DeepSeek-GRM still meets challenges in some tasks, which we believe can be addressed by future efforts in generalist reward systems,” the researchers wrote.

What’s next for DeepSeek?

DeepSeek has generated significant buzz around its R1 model, which rivals leading reasoning-focused models such as OpenAI o1. A second model, DeepSeek-R2, is rumored for release in May. The company also launched DeepSeek-V3-0324, an updated version of its V3 model, in late March.

According to the paper, models built with the new GRM-SPCT method will be open-sourced, though no release date has been specified.


