GD Subroutine

public subroutine GD(format_str, output_str)

GRADIENT DESCENT

Gradient Descent method. This optimization method is based on the fact that the gradient of a function points in its direction of maximum growth. To minimize the function, we therefore move in the opposite direction ($-\nabla f$). Of course, this is done by updating $x$ (and $y = f(x)$) iteratively.
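
As an illustration, the standard update rule is $x_{n+1} = x_n - LR\,\nabla f(x_n)$. The following minimal sketch implements it; it is not this library's code, and the objective $f(x) = \sum_i x_i^2$, together with the names gd_demo, gd_sketch and grad_f, are placeholders:

module gd_demo
    implicit none
contains

    ! Hypothetical gradient of f(x) = sum(x**2), whose minimum is x = 0.
    function grad_f(x) result(g)
        real*8, intent(in) :: x(:)
        real*8 :: g(size(x))
        g = 2.0d0 * x
    end function grad_f

    ! Sketch of the iteration: step against the gradient, stop when the
    ! relative step falls below eps or after Nmax iterations.
    subroutine gd_sketch(x, LR, eps, Nmax)
        real*8, intent(inout) :: x(:)
        real*8, intent(in)    :: LR, eps
        integer, intent(in)   :: Nmax
        real*8  :: step(size(x))
        integer :: n
        do n = 1, Nmax
            step = LR * grad_f(x)        ! move along -grad f
            x = x - step
            ! relative-error stopping rule: step divided by current x
            if (maxval(abs(step) / max(abs(x), 1d-12)) < eps) exit
        end do
    end subroutine gd_sketch

end module gd_demo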

HYPERPARAMETERS

x : real*8, dimension(dimx) Initial guess. Must have the same dimension as the function's input variable.

LR : real*8 Learning rate. Scales the gradient to set the size of each iteration step.

eps : real*8 Relative-error tolerance for stopping. If a relative step (the step divided by the current x) is smaller than eps, the process stops and the optimization is considered done.

Nmax : integer, optional. Default = 1,000,000. Maximum number of iterations. (A usage sketch of these hyperparameters follows this list.)

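To make these roles concrete, here is a hypothetical driver for the sketch above; the LR and eps values are illustrative choices, and Nmax is the documented default:

program gd_usage
    use gd_demo
    implicit none
    real*8 :: x(2)
    x = [3.0d0, -1.5d0]                       ! initial guess
    ! assumed values for LR and eps; Nmax uses the documented default
    call gd_sketch(x, LR=1.0d-1, eps=1.0d-8, Nmax=1000000)
    print '(a,2f12.8)', ' minimizer ~', x    ! should approach (0, 0)
end program gd_usage
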
Arguments

character(len=20) :: format_str

Formatting string used to write the reported values.

character(len=1000) :: output_str

Current x and y values for reporting.
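
A hedged example of calling GD with the documented signature. This page does not show how x, LR, eps and Nmax are supplied; here they are assumed to be configured elsewhere (e.g., as module variables) before the call, and the format string is an illustrative choice:

character(len=20)   :: fmt
character(len=1000) :: report

fmt = '(2es15.6)'      ! assumed format for the reported x and y values
! x, LR, eps and Nmax are assumed to be set elsewhere before this call
call GD(fmt, report)
print *, trim(report)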