New State of the Art AI Optimizer: Rectified Adam (RAdam). Improve your AI accuracy instantly compared to Adam, and learn why it works.

Less Wright
5 min read · Aug 15, 2019

A new paper by Liu, Jiang, He et al. introduces RAdam, or "Rectified Adam". It's a new variant of the classic Adam optimizer that provides an automated, dynamic adjustment to the adaptive learning rate, based on the authors' detailed study of the effects of variance and momentum during training. As a result, RAdam holds the promise of immediately improving virtually any AI architecture compared to vanilla Adam:

RAdam is robust to various learning rates while still converging rapidly and achieving greater accuracy (CIFAR dataset)
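Concretely, the rectification is a per-step multiplier on the adaptive learning rate. Here is a minimal sketch of that rule in plain Python, following the paper's notation (rho_inf, rho_t, r_t); it is an illustration of the idea, not the authors' reference implementation:

```python
import math

def radam_step_size(step, beta2=0.999):
    """Return RAdam's rectification multiplier for a given step,
    plus a flag for whether the adaptive update should be used."""
    # Maximum length of the approximated simple moving average (SMA)
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    beta2_t = beta2 ** step
    # Length of the approximated SMA at this step
    rho_t = rho_inf - 2.0 * step * beta2_t / (1.0 - beta2_t)

    if rho_t > 4.0:
        # Variance of the adaptive learning rate is tractable:
        # rectify it and use the adaptive (Adam-style) update.
        r_t = math.sqrt(
            ((rho_t - 4.0) * (rho_t - 2.0) * rho_inf)
            / ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t)
        )
        return r_t, True
    # Early in training the variance is too large to trust:
    # fall back to an SGD-with-momentum style step instead.
    return 1.0, False
```

With the default beta2 = 0.999, rho_t first exceeds 4 at step 5, so RAdam effectively runs a short, automatic warmup: momentum-only steps at the very start, then rectified adaptive steps from there on.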

I have tested RAdam myself inside the FastAI framework and quickly set new high accuracy records versus two of the hard-to-beat FastAI leaderboard scores on Imagenette. Unlike many papers I have tested this year, where things only seem to work well on the specific datasets used in the paper and not so well on new datasets I try them with, RAdam appears to be a true improvement and, in my opinion, is likely to be the permanent successor to vanilla Adam.

RAdam and XResNet50, 86% in 5 epochs
Imagenette Leaderboard — current high = 84.6%
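For reference, here is roughly how RAdam slots into a fastai (v1) Learner as a drop-in optimizer. This is a sketch, not the exact script behind the runs above: the `radam` import is a placeholder for whichever PyTorch RAdam port you use, and the hyperparameter values shown are illustrative defaults:

```python
from functools import partial
from fastai.vision import *   # fastai v1 API, current as of this writing
from radam import RAdam       # placeholder: any PyTorch RAdam implementation

# Imagenette at 160px, resized to 128 for training ('val' is its valid folder)
path = untar_data(URLs.IMAGENETTE_160)
data = ImageDataBunch.from_folder(path, valid='val', size=128, bs=64)

# XResNet50 trained from scratch, with RAdam swapped in for the default optimizer
learn = Learner(data, models.xresnet50(c_out=10), metrics=accuracy,
                opt_func=partial(RAdam, betas=(0.9, 0.999), eps=1e-5))
learn.fit_one_cycle(5, max_lr=1e-2)
```

Because RAdam exposes the standard optimizer interface, the only change from a vanilla Adam run is the `opt_func` argument.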
