# Intuition of Adam Optimizer

**Prerequisites:** Optimization techniques in Gradient Descent

**Adam Optimizer**


Adaptive Moment Estimation (Adam) is an optimization algorithm for gradient descent. The method is efficient when working with large problems involving a lot of data or parameters, and it requires little memory. Intuitively, it is a combination of the 'gradient descent with momentum' algorithm and the RMSprop algorithm.

**How does Adam work?**

Adam optimizer involves a combination of two gradient descent methodologies:

**Momentum:**

This algorithm accelerates gradient descent by taking into consideration the 'exponentially weighted average' of the gradients. Using averages makes the algorithm converge towards the minima at a faster pace.

The weight update with momentum is:

W_{t+1} = W_t - α_t · m_t

m_t = β · m_{t-1} + (1 - β) · (∂L/∂W_t)

where,

- m_t = aggregate of gradients at time t [current] (initially, m_t = 0)
- m_{t-1} = aggregate of gradients at time t-1 [previous]
- W_t = weights at time t
- W_{t+1} = weights at time t+1
- α_t = learning rate at time t
- ∂L/∂W_t = derivative of the loss function with respect to the weights at time t
- β = moving average parameter (constant, 0.9)
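The momentum update above can be sketched in a few lines of Python on a toy quadratic loss L(w) = w^2; the function name, learning rate, and loop length here are illustrative choices, not part of the original article.

```python
def momentum_step(w, grad, m_prev, alpha=0.1, beta=0.9):
    """One gradient-descent-with-momentum update:
    m_t = beta * m_{t-1} + (1 - beta) * grad
    W_{t+1} = W_t - alpha * m_t
    """
    m = beta * m_prev + (1 - beta) * grad  # exponentially weighted average of gradients
    w_next = w - alpha * m                 # step along the smoothed gradient
    return w_next, m

# Toy run: minimize L(w) = w^2, whose gradient is dL/dw = 2w.
w, m = 5.0, 0.0
for _ in range(200):
    w, m = momentum_step(w, 2 * w, m)
```

Because successive gradients reinforce each other inside m, the iterate moves toward the minimum faster than plain gradient descent would with the same learning rate.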

**Root Mean Square Propagation (RMSP):**

Root mean square prop, or RMSprop, is an adaptive learning-rate algorithm that tries to improve upon AdaGrad. Instead of taking the cumulative sum of squared gradients as in AdaGrad, it takes the 'exponential moving average' of the squared gradients.

The weight update with RMSprop is:

W_{t+1} = W_t - (α_t / (√v_t + ε)) · (∂L/∂W_t)

v_t = β · v_{t-1} + (1 - β) · (∂L/∂W_t)^2

where,

- W_t = weights at time t
- W_{t+1} = weights at time t+1
- α_t = learning rate at time t
- ∂L/∂W_t = derivative of the loss function with respect to the weights at time t
- v_t = exponentially weighted average of the squares of past gradients (initially, v_t = 0)
- β = moving average parameter (constant, 0.9)
- ε = a small positive constant (10^{-8})
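As a minimal sketch, assuming the same toy loss L(w) = w^2, one RMSprop step might look like this in Python (the function name and hyperparameter values are illustrative):

```python
import math

def rmsprop_step(w, grad, v_prev, alpha=0.01, beta=0.9, eps=1e-8):
    """One RMSprop update:
    v_t = beta * v_{t-1} + (1 - beta) * grad^2
    W_{t+1} = W_t - alpha / (sqrt(v_t) + eps) * grad
    """
    v = beta * v_prev + (1 - beta) * grad ** 2        # moving average of squared gradients
    w_next = w - alpha * grad / (math.sqrt(v) + eps)  # step size adapts to gradient scale
    return w_next, v

# Toy run: minimize L(w) = w^2 (dL/dw = 2w).
w, v = 5.0, 0.0
for _ in range(1000):
    w, v = rmsprop_step(w, 2 * w, v)
```

Dividing by √v_t keeps the effective step size roughly constant regardless of the raw gradient's magnitude, and ε guards against division by zero when v_t → 0.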

NOTE: Time (t) could be interpreted as an iteration (i).

The Adam optimizer inherits the strengths of the above two methods and builds upon them to give a more optimized gradient descent.

Here, we control the rate of gradient descent so that there is minimal oscillation on reaching the global minimum, while taking steps big enough to pass the local-minima hurdles along the way. Combining the features of the above methods thus lets us reach the global minimum efficiently.

**Mathematical Aspect of Adam Optimizer**

Taking the formulas used in the above two methods, we get

m_t = β_1 · m_{t-1} + (1 - β_1) · (∂L/∂W_t)

v_t = β_2 · v_{t-1} + (1 - β_2) · (∂L/∂W_t)^2

Parameters used:

1. ε = a small positive constant to avoid a 'division by 0' error when v_t → 0 (10^{-8})
2. β_1, β_2 = decay rates of the averages of gradients in the above two methods (β_1 = 0.9, β_2 = 0.999)
3. α = step size parameter / learning rate (0.001)

Since m_t and v_t are both initialized to 0 (as in the methods above), they tend to be 'biased towards 0', especially because both β_1 and β_2 are close to 1. The Adam optimizer fixes this problem by computing 'bias-corrected' estimates (m_hat)_t and (v_hat)_t. This also helps control the weights when approaching the global minimum, preventing large oscillations near it. The formulas used are:

(m_hat)_t = m_t / (1 - (β_1)^t)

(v_hat)_t = v_t / (1 - (β_2)^t)

Intuitively, we adapt the gradient descent step after every iteration so that it remains controlled and unbiased throughout the process; this adaptive estimation of the gradient's moments is what gives Adam (Adaptive Moment Estimation) its name.

Now, instead of the raw moment estimates m_t and v_t, we take the bias-corrected estimates (m_hat)_t and (v_hat)_t. Putting them into our general weight-update equation, we get

W_{t+1} = W_t - α · (m_hat)_t / (√(v_hat)_t + ε)
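Putting the moment estimates, the bias correction, and the final weight update together, a self-contained sketch of the full Adam step could look like the following (the toy loss L(w) = w^2, the learning rate, and the iteration count are my own illustrative choices, not from the article):

```python
import math

def adam_step(w, grad, m, v, t, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with bias correction (t counts iterations from 1)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (momentum part)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (RMSprop part)
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    w = w - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# Toy run: minimize L(w) = w^2 (dL/dw = 2w).
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 5001):
    w, m, v = adam_step(w, 2 * w, m, v, t)
```

Note how the bias correction matters most in the first few iterations: at t = 1, m is only (1 - β_1) times the gradient, and dividing by (1 - β_1^1) restores its full magnitude.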

**Performance:**

Building upon the strengths of the previous methods, the Adam optimizer delivers considerably better results than the earlier optimizers, converging to an optimized gradient descent faster. The plot shown below clearly depicts how the Adam optimizer outperforms the rest of the optimizers by a considerable margin in terms of training cost (lower) and performance (higher).