NVIDIA Team Scores Kaggle Win With Reasoning Model

By Harish | April 15, 2025

The final days of the AI Mathematical Olympiad’s latest competition were a transcontinental relay for team NVIDIA.

Every evening, two team members on opposite ends of the U.S. would submit an AI reasoning model to Kaggle — the online Olympics of data science and machine learning. They’d wait a tense five hours before learning how well the model tackled a sample set of 50 complex math problems.

After seeing the results, the U.S. team would pass the baton to teammates waking up in Armenia, Finland, Germany and Northern Ireland, who would spend their day testing, modifying and optimizing different model versions.

“Every night I’d be so disappointed in our score, but then I’d wake up and see the messages that came in overnight from teammates in Europe,” said Igor Gitman, senior applied scientist. “My hopes would go up and we’d try again.”

While the team was disheartened by their lack of improvement on the public dataset during the competition’s final days, the real test of an AI model is how well it can generalize to unseen data. That’s where their reasoning model leapt to the top of the leaderboard — correctly answering 34 out of 50 Olympiad questions within a five-hour time limit using a cluster of four NVIDIA L4 GPUs.

“We got the magic in the end,” said Northern Ireland-based team member Darragh Hanley, a Kaggle grandmaster and senior large language model (LLM) technologist.

Building a Winning Equation

The NVIDIA team competed under the name NemoSkills — a nod to their use of the NeMo-Skills collection of pipelines for accelerated LLM training, evaluation and inference. The seven members each contributed different areas of expertise, spanning LLM training, model distillation and inference optimization.

For the Kaggle challenge, over 2,200 participating teams submitted AI models tasked with solving 50 math questions — complex problems at the National Olympiad level, spanning algebra, geometry, combinatorics and number theory — within five hours.

The team’s winning model uses a combination of natural language reasoning and Python code execution.
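In practice, tool-integrated reasoning of this kind is usually implemented as a generate-execute-continue loop: the model writes a reasoning trace that may contain Python snippets, a harness runs each snippet and feeds the output back into the context, and generation continues until a final answer appears. The sketch below illustrates only that general pattern, not the team's NeMo-Skills implementation; the `generate` function is a hypothetical stand-in for any LLM completion call.

```python
import re
import subprocess

CODE_BLOCK = re.compile(r"```python\n(.*?)```", re.DOTALL)

def run_snippet(code: str, timeout: int = 10) -> str:
    """Execute one Python snippet in a subprocess and return its output."""
    proc = subprocess.run(
        ["python", "-c", code], capture_output=True, text=True, timeout=timeout
    )
    return proc.stdout.strip() or proc.stderr.strip()

def solve(problem: str, generate) -> str:
    """Generate-execute-continue loop: `generate(context)` is a placeholder
    for an LLM completion call that stops after a code block or a final answer."""
    context = (
        f"Problem: {problem}\n"
        "Reason step by step. You may compute with Python code blocks.\n"
    )
    for _ in range(8):  # cap the number of reasoning/execution rounds
        completion = generate(context)
        context += completion
        if "\\boxed{" in completion:  # model committed to a final answer
            return context
        blocks = CODE_BLOCK.findall(completion)
        if blocks:  # run the latest snippet and append its output to the context
            context += f"\n[code output]\n{run_snippet(blocks[-1])}\n"
    return context
```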

To complete this inference challenge on the small cluster of NVIDIA L4 GPUs available via Kaggle, the NemoSkills team had to get creative.

Their winning model used Qwen2.5-14B-Base, a foundation model with chain-of-thought reasoning capabilities, which the team fine-tuned on millions of synthetically generated solutions to math problems.

These synthetic solutions were primarily generated by two larger reasoning models — DeepSeek-R1 and QwQ-32B — and used to teach the team’s foundation model via a form of knowledge distillation. The end result was a smaller, faster, long-thinking model capable of tackling complex problems using a combination of natural language reasoning and Python code execution.
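In rough terms, that distillation step means sampling many long solutions from the stronger teacher models, keeping only the traces whose final answer matches the known ground truth, and fine-tuning the smaller student on what survives. The following is a minimal sketch of the data-generation side under those assumptions; `sample_solution` is a hypothetical stand-in for a call to a teacher model such as DeepSeek-R1 or QwQ-32B, and the team's exact filtering may differ.

```python
import json

def extract_final_answer(trace: str) -> str:
    """Naive parser: return the content of the last \\boxed{...} in a trace."""
    start = trace.rfind("\\boxed{")
    return trace[start + 7 : trace.find("}", start)] if start != -1 else ""

def build_distillation_set(problems, sample_solution, samples_per_problem=8):
    """problems: iterable of {"question": str, "answer": str} records."""
    records = []
    for prob in problems:
        for _ in range(samples_per_problem):
            trace = sample_solution(prob["question"])  # long teacher reasoning trace
            if extract_final_answer(trace) == prob["answer"]:  # keep correct traces only
                records.append({"prompt": prob["question"], "completion": trace})
    return records

def save_jsonl(records, path="distill_sft.jsonl"):
    """Write the filtered traces as JSONL for a standard SFT pipeline."""
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
```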

To further boost performance, the team’s solution reasons through multiple long-thinking responses in parallel before determining a final answer. To optimize this process and meet the competition’s time limit, the team also used an innovative early-stopping technique.

A reasoning model might, for example, be set to answer a math problem 12 different times before picking the most common response. Using the asynchronous processing capabilities of NeMo-Skills and NVIDIA TensorRT-LLM, the team was able to monitor generations in flight and exit inference early once the model had converged on the same answer four or more times.
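A stripped-down version of that early-exit logic can be expressed with plain asyncio, as below. This is only an illustrative sketch: the team's pipeline runs on NeMo-Skills with a TensorRT-LLM backend, and `generate_async` here is a hypothetical coroutine that returns the final answer from one reasoning attempt.

```python
import asyncio
from collections import Counter

async def solve_with_early_stop(problem, generate_async, num_samples=12, quorum=4):
    """Launch num_samples independent reasoning attempts in parallel and stop
    as soon as `quorum` of them agree on the same final answer."""
    tasks = [asyncio.create_task(generate_async(problem)) for _ in range(num_samples)]
    votes = Counter()
    try:
        for finished in asyncio.as_completed(tasks):
            answer = await finished      # final answer from one completed attempt
            votes[answer] += 1
            if votes[answer] >= quorum:  # consensus reached: skip the remaining work
                return answer
        return votes.most_common(1)[0][0] if votes else None
    finally:
        for task in tasks:               # free compute held by unfinished attempts
            task.cancel()
```

In a scheme like this, easy questions finish after only a handful of attempts, leaving more of the five-hour budget for problems where the samples disagree.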

TensorRT-LLM also enabled the team to harness FP8 quantization, a compression method that delivered a 1.5x speedup over the more commonly used FP16 format. ReDrafter, a speculative decoding technique developed by Apple, provided a further 1.8x speedup.
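To make the FP8 idea concrete, the toy sketch below quantizes a weight matrix from FP16 to the 8-bit E4M3 format with a single per-tensor scale, halving the bytes moved per element at the cost of a small rounding error. It assumes a recent PyTorch build with the torch.float8_e4m3fn dtype and illustrates only the numeric format; the team's actual deployment used TensorRT-LLM's FP8 path, and ReDrafter's speculative decoding is a separate optimization not shown here.

```python
import torch

FP8_MAX = 448.0  # largest finite value in the float8_e4m3fn format

def quantize_fp8(weight: torch.Tensor):
    """Per-tensor quantization: scale so the largest weight maps near FP8_MAX."""
    scale = weight.abs().max().float() / FP8_MAX
    w_fp8 = (weight.float() / scale).to(torch.float8_e4m3fn)
    return w_fp8, scale

def dequantize_fp8(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return (w_fp8.float() * scale).half()

w = torch.randn(4096, 4096, dtype=torch.float16)
w_q, s = quantize_fp8(w)
w_hat = dequantize_fp8(w_q, s)

print(f"bytes per element: {w.element_size()} -> {w_q.element_size()}")  # 2 -> 1
print(f"max abs rounding error: {(w - w_hat).abs().max().item():.4f}")
```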

The final model performed even better on the competition’s unseen final dataset than it did on the public dataset — a sign that the team successfully built a generalizable model and avoided overfitting their LLM to the sample data.

“Even without the Kaggle competition, we’d still be working to improve AI reasoning models for math,” said Gitman. “But Kaggle gives us the opportunity to benchmark and discover how well our models generalize to a third-party dataset.”

Sharing the Wealth 

The team will soon release a technical report detailing the techniques used in their winning solution — and plans to share their dataset and a series of models on Hugging Face. The advancements and optimizations they made over the course of the competition have been integrated into NeMo-Skills pipelines available on GitHub.

Key data, technology, and insights from this pipeline were also used to train the just-released NVIDIA Llama Nemotron Ultra model.

“Throughout this collaboration, we used tools across the NVIDIA software stack,” said Christof Henkel, a member of the Kaggle Grandmasters of NVIDIA, known as KGMON. “By working closely with our LLM research and development teams, we’re able to take what we learn from the competition on a day-to-day basis and push those optimizations into NVIDIA’s open-source libraries.”

After the competition win, Henkel regained the title of Kaggle World Champion — ranking No. 1 among the platform’s over 23 million users. Another teammate, Finland-based Ivan Sorokin, earned the Kaggle Grandmaster title, held by just over 350 people around the world.

For their first-place win, the group also won a $262,144 prize that they’re directing to the NVIDIA Foundation to support charitable organizations.

Meet the full team: Igor Gitman, Darragh Hanley, Christof Henkel, Ivan Moshkov, Benedikt Schifferer, Ivan Sorokin and Shubham Toshniwal.

The sample math questions featured in the original article's visuals are from the 2025 American Invitational Mathematics Examination. Find the full set of questions and solutions on the Art of Problem Solving wiki.


