Algorithms

For problems with larger search spaces and a greater number of model evaluations, Genetic Algorithm or Random Forest may be more appropriate.

Below is a list of recommendations for algorithm selection.

  • Short model run times, large search space (> 100,000 models, expected number of samples > 1,000) – GA, RF, GBRT, or PSO.

  • Small search space (< 100,000 models, expected number of samples < 1,000) – Gaussian Process.

  • Very small search space (< 500 models), many cores (> 20) – Exhaustive Search.
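
For example, the search algorithm is selected in the pyDarwin options file. Below is a minimal sketch (the "algorithm" values are those described in this section; the other keys shown are illustrative and may differ from your pyDarwin version):

    import json

    # "algorithm" is one of: "EX" (exhaustive), "GA", "GP", "RF", "GBRT", "PSO"
    options = {
        "algorithm": "GP",
        "random_seed": 11,
    }
    with open("options.json", "w") as f:
        json.dump(options, f, indent=4)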

Genetic Algorithm

Genetic Algorithm (GA) is a reproduction of the mathematics of evolution/survival of the fittest. A more detailed discussion of GA can be found here, and a very readable (but somewhat dated) reference is Genetic Algorithms in Search, Optimization and Machine Learning by David Goldberg. Details of the options (not all of which are available in pyDarwin) can be found here. Briefly, GA represents the search space as a bit string, with each “gene” being a binary number that is decoded into the integer value for that option. For example, for a dimension of additive vs. additive + proportional residual error, the integer codes would be:

  1. Additive error (e.g., +EPS(1))

  2. Additive + proportional error (e.g., EXP(EPS(1))+EPS(2))

It is straightforward to encode these values [1,2] as a single binary bit [0,1]. For dimensions with more than 2 values, more than 1 bit is needed. For example, if 1, 2, or 3 compartments are searched, the options might be:

  1. One compartment (ADVAN1)

  2. Two compartment (ADVAN3)

  3. Three compartment (ADVAN11)

and the bit string representation might be:

  • 1 - [0,0]

  • 2 - [0,1] and [1,0]

  • 3 - [1,1]
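
One way to reproduce this decoding in code (an illustrative sketch only, not pyDarwin's internal implementation) is to read each gene's bits as a binary integer and scale it onto the available options, so that some options are covered by two bit patterns when the counts do not divide evenly:

    def decode_gene(bits, n_options):
        """Decode a gene (a list of bits) into a 1-based option index."""
        value = int("".join(str(b) for b in bits), 2)  # e.g. [1, 0] -> 2
        # Scale the 2**len(bits) patterns onto the n_options choices.
        return round(value * (n_options - 1) / (2 ** len(bits) - 1)) + 1

    # The 2-bit compartment gene above: [0,0] -> 1, [0,1] -> 2, [1,0] -> 2, [1,1] -> 3
    for bits in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(bits, "->", decode_gene(bits, 3))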

The bit strings for each gene are concatenated into a “chromosome”. The search starts with a population of random bit strings. These bit strings are decoded, and NONMEM control files are constructed from the template file by substituting the selected text from the token set. The resulting NONMEM control files are run and the fitness of each is calculated. The next generation is created by randomly selecting sets of candidate parents from the population and choosing the best of each set by tournament selection. Once the parents are selected, they undergo crossover and mutation, and a new generation is created. This process is repeated until no further improvement is seen.
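
This generation step can be sketched generically with tournament selection, single-point crossover, and bit-flip mutation (a textbook illustration, not pyDarwin's code; lower fitness is treated as better, as with an objective-function value):

    import random

    def tournament(population, fitness, k=2):
        """Pick k random candidates; the one with the lowest fitness wins."""
        picks = random.sample(range(len(population)), k)
        return population[min(picks, key=lambda i: fitness[i])]

    def crossover(mom, dad):
        """Single-point crossover of two bit-string chromosomes."""
        point = random.randrange(1, len(mom))
        return mom[:point] + dad[point:], dad[:point] + mom[point:]

    def mutate(chromosome, rate=0.02):
        """Flip each bit with probability `rate`."""
        return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

    def next_generation(population, fitness):
        new_population = []
        while len(new_population) < len(population):
            child1, child2 = crossover(tournament(population, fitness),
                                       tournament(population, fitness))
            new_population += [mutate(child1), mutate(child2)]
        return new_population[:len(population)]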

Gaussian Process

Gaussian Process is one of the options used in Bayesian Optimization. The Gaussian Process specifies the form of the prior and posterior distribution. Initially the samples are random, as is the case for all the global search algorithms. Once some models have been run, their results are used to update the distribution (the “tell” step), and new, more informative samples can then be generated from it (the “ask” step).
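
This ask/tell loop can be illustrated with the scikit-optimize package, which provides a Gaussian Process surrogate (a generic sketch; mock_fitness is a hypothetical stand-in for running NONMEM and scoring the model):

    from skopt import Optimizer

    # Two integer dimensions, e.g. number of compartments (1-3) and
    # residual error model (1-2), as in the GA encoding above.
    opt = Optimizer(dimensions=[(1, 3), (1, 2)], base_estimator="GP")

    def mock_fitness(x):
        """Hypothetical stand-in for running NONMEM and computing fitness."""
        return abs(x[0] - 2) + abs(x[1] - 2)

    for _ in range(20):
        x = opt.ask()        # generate the next, more informative sample
        y = mock_fitness(x)  # run the model and compute its fitness
        opt.tell(x, y)       # update the posterior with the observed result

    best = min(opt.yi)
    print(best, opt.Xi[opt.yi.index(best)])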

Random Forest

Random Forest works by repeatedly splitting the search space (based on the “goodness” of each model in this case), continuously dividing it into “good” and “bad” regions. As before, the initial divisions are random, but they become increasingly well informed as real values for the fitness/reward of models are included.
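
The splitting idea can be illustrated with generic scikit-learn code (not pyDarwin's internals, and with made-up numbers): a random forest fit to models that have already been run can score candidates that have not:

    from sklearn.ensemble import RandomForestRegressor

    # Integer-coded models already run, and the fitness observed for each
    # (illustrative values only).
    X_run = [[1, 1], [2, 1], [3, 2], [1, 2]]
    y_run = [112.3, 98.7, 105.1, 110.8]

    forest = RandomForestRegressor(n_estimators=100, random_state=0)
    forest.fit(X_run, y_run)

    # Predict fitness for unrun candidates; promising ones are run next.
    candidates = [[2, 2], [3, 1]]
    print(forest.predict(candidates))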

Gradient Boosted Random Tree

Gradient Boosted Random Tree is similar to Random Forest, but may increase the precision of the tree building by constructing the trees progressively and calculating a gradient of the reward/fitness with respect to each decision.
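
In the scikit-optimize interface shown earlier, both of these tree-based surrogates are available by swapping the base_estimator argument (illustrative; not pyDarwin's code):

    from skopt import Optimizer

    opt_rf = Optimizer(dimensions=[(1, 3), (1, 2)], base_estimator="RF")
    opt_gbrt = Optimizer(dimensions=[(1, 3), (1, 2)], base_estimator="GBRT")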

Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO [1]) is another approach to optimization that, like Genetic Algorithm, attempts to reproduce a natural optimization process. In the case of PSO, the natural process is the swarming behavior of birds and fish, although the specifics of the relationship to bird and fish behavior are largely speculative. Each particle (a candidate NONMEM model in this case) moves through the search space, as one might imagine individuals in a school of fish or a flock of birds moving together, with each bird/fish also moving somewhat independently.

The velocity of each particle’s movement is based on two factors:

  1. Random movement

  2. Coordinated movement

The coordinated movement is, in turn, defined by the following parameters in the Options List (see the velocity update after this list):

  • inertia (\(w\)): the particle tends to continue moving in the same direction as its previous velocity

  • cognitive (\(c_1\)): the particle tends to move in the direction toward its own best known position

  • social (\(c_2\)): the particle tends to move in the direction toward the current best known position among all particles
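
In the standard PSO formulation, these three terms combine in the velocity update for particle \(i\) at iteration \(t\) (a generic statement of the algorithm, not necessarily the exact form implemented in pyDarwin):

\[
v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \left(p_i - x_i^{t}\right) + c_2 r_2 \left(g_i - x_i^{t}\right),
\qquad x_i^{t+1} = x_i^{t} + v_i^{t+1}
\]

where \(x_i^{t}\) and \(v_i^{t}\) are the particle's position and velocity, \(p_i\) is its own best known position, \(g_i\) is the best known position among the particles it exchanges information with, and \(r_1\), \(r_2\) are fresh uniform random draws on \([0, 1]\) that supply the random movement.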

Other parameters for PSO include: population_size, neighbor_number, p_norm, and break_on_no_change.

As with the other optimization algorithms, the downhill step may also be implemented. The topology defines the region of the swarm within which individual particles (models in this case) exchange information and thereby act in coordination. The “star” topology is the only one currently available in pyDarwin; it permits particles (i.e., models) to coordinate with a set of nearest neighbors in a star shape, up to the number of neighbors specified in neighbor_number.
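
For a concrete feel of these parameters, below is a minimal standalone PSO run using the pyswarms package on a toy objective (a generic demonstration; it does not show how pyDarwin drives PSO internally):

    import pyswarms as ps

    def sphere(x):
        """Toy objective: each row of x is one particle's position."""
        return (x ** 2).sum(axis=1)

    # c1 = cognitive, c2 = social, w = inertia, as described above.
    options = {"c1": 0.5, "c2": 0.3, "w": 0.9}
    optimizer = ps.single.GlobalBestPSO(n_particles=20, dimensions=2,
                                        options=options)
    best_cost, best_pos = optimizer.optimize(sphere, iters=100)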