What is Particle Swarm Optimization?

[Image: a flock of geese (Geese.png)]

In general, traders like ecological analogies for the markets. Traders live in a world where they "eat what they kill." Hence, strategies "live," "die," and "evolve" in an almost biological sense. Successful strategies "breed like animals" as information leaks from desk to desk. Market crashes are described as "cleansing fires," clearing out the "deadwood" and making way for upstarts to take their place.

Thus it should come as no surprise that one of our favorite optimization routines was inspired by watching a flock of birds.

The basic idea is iterative and simple. For example, suppose you have a strategy with two parameters.

You want to offload the process of setting these values to an algorithm because, as a human, your spirit is willing but your brain is spongy and bruised from years of trading the markets. Ideally, you want your algorithm to pick the best strategy.

How can we do this?

The first step, of course, is to explicitly define what "best" means. To keep things simple, let's define the best strategy as the one that makes the most money over our training dataset. One way to accomplish this is simply to try every possible combination of the two parameters until you find the strategy with the best PnL. This is known as brute-force search, and it is only feasible if your parameter space is small and your computational power is cheap. In most practical examples, brute force is out of the question because either:

1) the parameter space is too large or

2) the objective function takes too long to execute. 

In our example, calculating the PnL of an individual strategy takes a nontrivial amount of time to complete. Thus we must search for more clever optimization techniques.
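To make this concrete, here is a minimal sketch of brute-force search in Python. The backtest(p1, p2) function is a hypothetical stand-in for whatever evaluates a parameter pair and returns the strategy's PnL over the training dataset; it and the candidate grids are purely illustrative:

```python
import itertools

def brute_force_search(backtest, p1_candidates, p2_candidates):
    """Try every (p1, p2) combination and keep the most profitable one.

    `backtest` is a hypothetical, user-supplied function that returns
    the strategy's PnL over the training dataset for a parameter pair.
    """
    best_params, best_pnl = None, float("-inf")
    for p1, p2 in itertools.product(p1_candidates, p2_candidates):
        pnl = backtest(p1, p2)  # the expensive call: one full backtest
        if pnl > best_pnl:
            best_params, best_pnl = (p1, p2), pnl
    return best_params, best_pnl
```

Note that the number of backtests grows multiplicatively with every parameter you add, which is exactly why this approach breaks down so quickly in practice.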

This is where the birds come into the picture. (Way) back in 1995, James Kennedy and Russell Eberhart published their original paper on the subject, appropriately titled:

Particle Swarm Optimization [CiteSeerX LINK]

"A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described."

Intuitively, imagine our search space as a 2-dimensional grid, where the x axis represents the first parameter and the y axis the second. The Particle Swarm Optimization (PSO) algorithm begins by initializing a random field of points (known as particles) within this search space. It then evaluates the objective function for each particle and updates each particle's position according to a system of transition equations. In the standard formulation, each particle carries a velocity v in addition to its position p, and each iteration applies:

v_new = w*v + c1*r1*(p_local - p) + c2*r2*(p_global - p)
p_new = p + v_new

where p = (x, y) is the particle's position, p_local = (x_local, y_local) is the best position that particle has seen so far, p_global = (x_best, y_best) is the best position found by the whole swarm, r1 and r2 are uniform random draws from [0, 1], and w, c1, c2 are tuning constants controlling inertia and the pull towards the local and global bests.

Put simply, each iteration we:

1) evaluate the objective function for each particle,

2) find the best particle for the iteration and compare it against (x_best, y_best), updating if needed,

3) for each particle, compare the output of the objective function against (x_local, y_local), updating if needed, and

4) move each particle in a random way towards a blend of its global and local best positions.
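The sketch below shows one minimal way these four steps might look in Python. The objective(x, y) function is again a hypothetical stand-in for the strategy's PnL, and the coefficients w, c1, c2 are conventional inertia/attraction defaults rather than values prescribed by the original paper:

```python
import random

def pso(objective, bounds, n_particles=30, n_iters=100,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch. `objective` maps (x, y) to PnL (higher is
    better); `bounds` is ((x_min, x_max), (y_min, y_max))."""
    (x_lo, x_hi), (y_lo, y_hi) = bounds

    # Initialize a random field of particles with zero velocity.
    pos = [(random.uniform(x_lo, x_hi), random.uniform(y_lo, y_hi))
           for _ in range(n_particles)]
    vel = [(0.0, 0.0)] * n_particles

    # Each particle remembers its own best; the swarm shares a global best.
    local_best = list(pos)
    local_best_val = [objective(x, y) for x, y in pos]
    g = max(range(n_particles), key=lambda i: local_best_val[i])
    global_best, global_best_val = local_best[g], local_best_val[g]

    for _ in range(n_iters):
        for i in range(n_particles):
            x, y = pos[i]
            vx, vy = vel[i]
            lx, ly = local_best[i]
            gx, gy = global_best
            r1, r2 = random.random(), random.random()

            # Transition equations: blend inertia with random pulls
            # towards the particle's own best and the swarm's best.
            vx = w * vx + c1 * r1 * (lx - x) + c2 * r2 * (gx - x)
            vy = w * vy + c1 * r1 * (ly - y) + c2 * r2 * (gy - y)
            x, y = x + vx, y + vy
            pos[i], vel[i] = (x, y), (vx, vy)

            val = objective(x, y)
            if val > local_best_val[i]:    # update this particle's best
                local_best[i], local_best_val[i] = (x, y), val
                if val > global_best_val:  # update the swarm's best
                    global_best, global_best_val = (x, y), val

    return global_best, global_best_val
```

As a quick sanity check on a toy objective, pso(lambda x, y: -(x - 3)**2 - (y + 1)**2, ((-10, 10), (-10, 10))) should converge near (3, -1), the maximum of that surface.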

Using an iterative algorithm such as PSO can lead to effective model parameters, thereby reducing parameter risk. In fact, PSO can be extremely effective in training many machine learning models, for example fitting the weights of an artificial neural network.