Gradient Descent method.
This optimization method is based on the fact that the
gradient of a function points in the direction of its
maximum growth. In order to minimize the function, we
step in the opposite direction (-∇f(x)). Of course, this
is done by updating x (and ∇f(x)) iteratively.
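At each iteration k, the update presumably takes the standard
gradient descent form

    x_{k+1} = x_k - LR * ∇f(x_k)

where LR is the learning rate described below.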
HYPERPARAMETERS
x : real*8, dimension(dimx)
Initial guess. Must have the same dimension as the
input variable of the function being minimized.
LR : real*8
Learning rate. Sets the size of the step taken at
each iteration.
eps : real*8
Relative tolerance for stopping. If the relative step
(the step divided by the current x) is smaller than
eps, the process stops and we consider the
optimization done.
Nmax : integer, optional. Default = 1,000,000
Maximum number of iterations.
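A minimal sketch of the loop these parameters drive, assuming a
user-supplied gradient routine grad_f and a component-wise
relative-step test (both are illustrative placeholders, not the
module's actual interface):

```fortran
! Minimal gradient descent sketch. grad_f stands in for the
! objective's gradient and is not part of the documented interface.
subroutine grad_descent(x, LR, eps, Nmax)
    implicit none
    real*8, intent(inout) :: x(:)          ! initial guess in, minimizer out
    real*8, intent(in)    :: LR            ! learning rate
    real*8, intent(in)    :: eps           ! relative-step stopping tolerance
    integer, intent(in), optional :: Nmax  ! maximum number of iterations
    real*8  :: step(size(x))
    integer :: k, n

    n = 1000000                            ! documented default for Nmax
    if (present(Nmax)) n = Nmax
    do k = 1, n
        step = LR * grad_f(x)              ! move against the gradient
        x = x - step
        ! stop once the relative step falls below eps
        if (maxval(abs(step) / max(abs(x), 1d-30)) < eps) exit
    end do
contains
    function grad_f(x) result(g)           ! example: gradient of f(x) = sum(x**2)
        real*8, intent(in) :: x(:)
        real*8 :: g(size(x))
        g = 2d0 * x
    end function grad_f
end subroutine grad_descent
```

Note the small floor (1d-30) in the stopping test; it is an assumed
guard against division by zero when a component of x crosses zero.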