MADGRAD¶
- class madgrad.MADGRAD(params: Any, lr: float = 0.01, momentum: float = 0.9, weight_decay: float = 0, eps: float = 1e-06, decouple_decay: bool = False)¶
MADGRAD: A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization.
MADGRAD is a general-purpose optimizer that can be used in place of SGD or Adam, and may converge faster and generalize better. Currently GPU-only. Typically, the same learning rate schedule used for SGD or Adam may be used. The overall learning rate is not comparable to either method and should be determined by a hyper-parameter sweep.
MADGRAD requires less weight decay than other methods, often as little as zero. Momentum values used for SGD or Adam’s beta1 should work here also.
On sparse problems both weight_decay and momentum should be set to 0.
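A minimal usage sketch (the toy model and batch below are hypothetical placeholders; MADGRAD behaves like any other torch.optim.Optimizer):

    import torch
    import torch.nn.functional as F
    from madgrad import MADGRAD

    device = "cuda"  # the optimizer is documented as currently GPU-only
    model = torch.nn.Linear(10, 2).to(device)  # hypothetical model
    optimizer = MADGRAD(model.parameters(), lr=1e-2, momentum=0.9, weight_decay=0)

    inputs = torch.randn(32, 10, device=device)          # hypothetical batch
    targets = torch.randint(0, 2, (32,), device=device)

    for _ in range(5):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(inputs), targets)
        loss.backward()
        optimizer.step()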
- Arguments:
- params (iterable):
Iterable of parameters to optimize or dicts defining parameter groups (a per-group sketch follows this argument list).
- lr (float):
Learning rate (default: 1e-2).
- momentum (float):
Momentum value in the range [0,1) (default: 0.9).
- weight_decay (float):
Weight decay, i.e., an L2 penalty (default: 0).
- eps (float):
Term added to the denominator outside of the root operation to improve numerical stability (default: 1e-6). This parameter is less important in MADGRAD than in Adam. On problems with very small gradients, setting this to 0 will improve convergence.
- decouple_decay (bool):
Apply AdamW-style decoupled weight decay (EXPERIMENTAL).
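Since MADGRAD subclasses torch.optim.Optimizer, parameter groups work the usual way. A sketch with a hypothetical two-part model, decaying the body but not the head:

    import torch
    from madgrad import MADGRAD

    body = torch.nn.Linear(10, 10)  # hypothetical feature extractor
    head = torch.nn.Linear(10, 2)   # hypothetical classifier head

    optimizer = MADGRAD(
        [
            {"params": body.parameters(), "weight_decay": 1e-4},
            {"params": head.parameters(), "weight_decay": 0.0},
        ],
        lr=1e-2,
        momentum=0.9,
    )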
- step(closure: Optional[Callable[[], float]] = None) → Optional[float]¶
Performs a single optimization step.
- Arguments:
- closure (callable, optional): A closure that reevaluates the model and returns the loss (a sketch follows).
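The closure protocol matches torch.optim; a minimal sketch reusing the hypothetical model, optimizer, inputs, and targets from the first sketch above:

    def closure() -> float:
        # Reevaluate the model and return the loss, as step() expects.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(inputs), targets)
        loss.backward()
        return loss.item()

    loss_value = optimizer.step(closure)  # returns the value produced by the closure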
- property supports_flat_params: bool¶
- property supports_memory_efficient_fp16: bool¶
- class madgrad.MirrorMADGRAD(params: Any, lr: float = 0.01, momentum: float = 0.9, weight_decay: float = 0, eps: float = 0, decouple_decay: bool = False)¶
Mirror MADGRAD: A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization.
Mirror MADGRAD keeps the weighting and momentum of MADGRAD but uses mirror descent rather than dual averaging as the base method. In general, the mirror variant works better than standard MADGRAD on problems where the generalization gap is not an issue, such as large Transformer model training. On CIFAR-10/ImageNet and smaller NLP models, the standard variant should be preferred. The mirror variant is also more numerically stable, which may help with large-model training.
Currently does not support sparse gradients.
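Construction mirrors MADGRAD; a minimal sketch with a hypothetical small Transformer layer standing in for a large model (eps is left at its default of 0, and gradients must be dense):

    import torch
    from madgrad import MirrorMADGRAD

    model = torch.nn.TransformerEncoderLayer(d_model=64, nhead=4)  # hypothetical model
    optimizer = MirrorMADGRAD(model.parameters(), lr=1e-2, momentum=0.9, weight_decay=0)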
- Arguments:
- params (iterable):
Iterable of parameters to optimize or dicts defining parameter groups.
- lr (float):
Learning rate (default: 1e-2).
- momentum (float):
Momentum value in the range [0,1) (default: 0.9).
- weight_decay (float):
Weight decay, i.e., an L2 penalty (default: 0).
- eps (float):
Term added to the denominator outside of the root operation to improve numerical stability (default: 0). This parameter is less important in MADGRAD than in Adam. A value of 0 will likely give the best results.
- decouple_decay (bool):
Apply AdamW-style decoupled weight decay (EXPERIMENTAL). Decay is applied before the step.
- step(closure: Optional[Callable[[], float]] = None) → Optional[float]¶
Performs a single optimization step.
- Arguments:
- closure (callable, optional): A closure that reevaluates the model and returns the loss.
- property supports_flat_params: bool¶
- property supports_memory_efficient_fp16: bool¶