Overview
We have done extensive work on novel approaches in the field of swarm intelligence. Recently, we proposed Generative Adversarial Optimization (GAO), which combines neural networks with black-box optimization. In 2010 we proposed the Fireworks Algorithm (FWA), a powerful swarm intelligence algorithm that has attracted considerable attention. Historically, we also studied Particle Swarm Optimization (PSO).
Published Monographs
- Ying Tan, Swarm Intelligence, Volume 3: Applications, IET (The Institution of Engineering and Technology), 2018. ISBN: 978-1-78561-629-7.
PSO Research
Particle Swarm Optimization (PSO) is a stochastic global optimization algorithm. It maintains a population of n particles, each representing a potential solution. Particles adjust their positions based on the best solutions that they and their neighbours have found.
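The position adjustment described above can be sketched as the standard PSO velocity/position update. This is a minimal illustration; the coefficient values (w, c1, c2) are common defaults, not the settings used in the work described here.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One iteration of the standard PSO update for a swarm of n particles.

    positions, velocities, pbest: lists of d-dimensional lists; gbest: d-dim list.
    Coefficient values are illustrative defaults.
    """
    n, d = len(positions), len(positions[0])
    for i in range(n):
        for j in range(d):
            r1, r2 = random.random(), random.random()
            # velocity = inertia + cognitive pull (own best) + social pull (swarm best)
            velocities[i][j] = (w * velocities[i][j]
                                + c1 * r1 * (pbest[i][j] - positions[i][j])
                                + c2 * r2 * (gbest[j] - positions[i][j]))
            positions[i][j] += velocities[i][j]
    return positions, velocities
```

Each particle is pulled both toward its own best-known position and toward the best position found by the swarm, which is the exploration/exploitation trade-off studied below.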
1 Research on PSO Theory
How PSO balances exploration and exploitation has always been an important question. We formulated this balance as a mathematical model and proposed two quantities that measure exploration and exploitation, respectively. By recording how these quantities change during the optimization procedure, we showed that our definitions are reasonable.
2 Improvement on PSO
2.1 CPSO
CPSO adopts a clone strategy in the search process: the best results of the last M generations are cloned into the population and mutated with a certain probability. Experimental results show that this increases both the search range and the diversity of the swarm.
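The clone-and-mutate step described above might look as follows. This is a hedged sketch: the parameter names, the refill-to-population-size loop, and the Gaussian mutation are illustrative assumptions, not the exact operators of CPSO.

```python
import random

def clone_and_mutate(elites, pop_size, p_mut=0.1, sigma=0.1):
    """Clonal step sketched from the CPSO description above: the best
    solutions of the last M generations ('elites') are cloned to fill the
    population, and each cloned coordinate is mutated with probability
    p_mut. Gaussian mutation and parameter values are assumptions."""
    population = []
    while len(population) < pop_size:
        clone = list(random.choice(elites))          # clone an elite solution
        for j in range(len(clone)):
            if random.random() < p_mut:              # mutate with probability p_mut
                clone[j] += random.gauss(0.0, sigma)
        population.append(clone)
    return population
```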
2.2 AR-CPSO
Before the clone operation in CPSO, we use the advance-and-retreat method to locate a nearby local optimum, which validates the quality of the last flight and restricts the particle to the neighbourhood of that local optimum.
2.3 RBH-PSO
The main idea of RBH-PSO is to search more carefully near the global optimum. In each iteration, a "black hole" is generated at a random position within radius r of the current global best. Each particle is attracted to the black hole with probability p, in which case its position is set to that of the black hole; the remaining particles update as in normal PSO.
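The black-hole step described above can be sketched as follows. This is a minimal illustration of the mechanism only (the normal PSO update for non-attracted particles is omitted); sampling the black hole uniformly in a box of radius r is an assumption, and the r and p values are placeholders.

```python
import random

def rbh_update(positions, gbest, r=0.5, p=0.2):
    """Random-black-hole step sketched from the RBH-PSO description above.
    A black hole is generated at random within radius r of the global best;
    each particle is moved onto it with probability p. Particles not
    attracted would follow the normal PSO update (not repeated here)."""
    # random point within the axis-aligned box of radius r around gbest
    black_hole = [g + random.uniform(-r, r) for g in gbest]
    for i in range(len(positions)):
        if random.random() < p:
            positions[i] = list(black_hole)
    return positions, black_hole
```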
Fig. 1. Comparison of CPSO, AR-CPSO, RBH-PSO and PSO.
3 Implementing PSO on GPU
We exploit parallel computing on the GPU via the CUDA platform to improve performance. The algorithm for GPU-based PSO is shown in Fig. 2; the four steps inside the "for" loop are all executed in parallel. We tested the algorithm on four benchmark functions, f1–f4, with the results given in Fig. 3. The larger the population, the higher the speedup of GPU-based PSO over its CPU counterpart.
Fig. 2. Parallel PSO on GPU
Fig. 3. Speedup of GPU-SPSO over CPU-SPSO as a function of population size and test function
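The four per-iteration steps are all population-wide, element-wise operations, which is why they parallelize well. The NumPy sketch below shows the same data-parallel structure on the CPU; on the GPU, each vectorized line corresponds to a CUDA kernel launched over the n x d particle arrays. This is an illustration of the structure, not the actual CUDA implementation, and the sphere fitness function and coefficients are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def vectorized_pso_step(X, V, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One data-parallel PSO iteration over the whole swarm at once.
    X, V, pbest: (n, d) arrays; gbest: (d,) array."""
    n, d = X.shape
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # update velocities
    X = X + V                                                  # update positions
    f = np.sum(X**2, axis=1)           # evaluate fitness (sphere function here)
    improved = f < np.sum(pbest**2, axis=1)
    pbest[improved] = X[improved]      # update personal bests where improved
    return X, V, pbest
```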
4 New optimization algorithm – Fireworks Algorithm
The Fireworks Algorithm (FWA) is based on a simulation of fireworks explosions. The search process maintains m parallel explosions, and some information from each explosion is kept to guide the next one. Mutation and immune strategies are introduced to avoid premature convergence.
Fig. 4. Workflow of the Fireworks Algorithm
Fig. 5. Comparison between FWA and PSO
Fireworks Algorithm (FWA)
(Prof. Tan's speech on FWA, PPT, ZIP file, FWA Matlab codes, FWA C codes)
Swarm intelligence
Optimization problems are among the major problems in science and engineering and can be found everywhere, in both academia and industry. Many methods have been proposed to solve them. Recently, Swarm Intelligence (SI) has become a very active approach to optimization problems and has shown great success. James Kennedy defined it as follows: "Swarm intelligence refers to a kind of problem-solving ability that emerges in the interactions of simple information-processing units."[3] The units that make up a swarm can be living creatures or lifeless bodies. Since its introduction, SI has developed a number of branches, with many researchers exploring the concepts, behaviour and cognition underlying them and making great contributions to these fields; examples include Particle Swarm Optimization (PSO)[4], Ant Colony Optimization (ACO)[5], Genetic Algorithms (GA)[6] and Fish School Search (FSS)[7]. PSO, for instance, simulates the behaviour of flocks searching for food, whose members share social information and previous experience among the swarm and its individuals.
Fig. 1. From biological phenomenon or process to artificial computational approach
Motivation of FWA
In the same way, inspired by observing fireworks explosions, we proposed a novel swarm intelligence algorithm, the Fireworks Algorithm (FA), for the global optimization of complex functions. The FA is presented and implemented by simulating the explosion process of fireworks. It employs two explosion (search) processes, together with well-designed mechanisms for keeping the diversity of sparks.
Fig2. Mathematical Modeling Process of Fireworks Algorithm
Fireworks Algorithm
When a firework is set off, a shower of sparks fills the local space around it. In our view, the explosion of a firework can be seen as a search, through the generated sparks, of the local space around the specific point where the firework is set off. Mimicking this process, a rough framework of the FA is depicted in Fig. 3.
Fig.3. Framework of fireworks algorithm
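The framework in Fig. 3 can be sketched roughly as follows. This is a deliberately simplified illustration: the real FA adapts each firework's explosion amplitude and spark count to its fitness and adds Gaussian mutation sparks, all of which are omitted here (a fixed amplitude is used instead), and the toy objective is a placeholder.

```python
import random

def fwa_minimal(f, bounds, n_fireworks=5, n_sparks=20, n_iters=100):
    """Minimal sketch of the FA framework: each firework explodes into
    sparks in its neighbourhood, and the best locations survive to the
    next generation. Adaptive amplitudes, per-firework spark counts and
    mutation sparks of the real FA are omitted for brevity."""
    lo, hi = bounds
    d = 2  # dimensionality of the toy problem
    fireworks = [[random.uniform(lo, hi) for _ in range(d)]
                 for _ in range(n_fireworks)]
    for _ in range(n_iters):
        sparks = list(fireworks)                 # keep parents (elitism)
        amp = (hi - lo) * 0.1                    # fixed explosion amplitude
        for fw in fireworks:
            for _ in range(n_sparks // n_fireworks):
                sparks.append([min(hi, max(lo, x + random.uniform(-amp, amp)))
                               for x in fw])
        sparks.sort(key=f)                       # minimisation
        fireworks = sparks[:n_fireworks]         # select next generation
    return fireworks[0]

random.seed(0)
best = fwa_minimal(lambda x: sum(v * v for v in x), (-5.0, 5.0))
```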
To validate the convergence speed of the FA, we conducted more thorough experiments. Fig. 4 depicts the convergence curves of the FA, the CPSO and the SPSO on eight benchmark functions, averaged over 20 independent runs. These results show that the proposed FA converges much faster than the CPSO and the SPSO. Table 1 further shows that the FA can find excellent solutions within only 10,000 function evaluations, which also reflects its fast convergence.
Fig.4. Convergence curves of the FA, the CPSO and the SPSO on eight benchmark functions.
Table 1. Statistical mean and standard deviation of the solutions found by the FA, the CPSO and the SPSO on nine benchmark functions over 20 independent runs of 10,000 function evaluations
Application based on FWA
Low-rank approximations are used in several content-based retrieval and data mining applications, such as text and multimedia mining and web search, and achieve a more compact representation of the data with only limited loss of information. They reduce storage and runtime requirements, and also reduce redundancy and noise in the data representation while capturing the essential associations. Non-negative Matrix Factorization (NMF, Lee and Seung 1999) leads to a low-rank approximation that satisfies non-negativity constraints: a data matrix A of size m x n is approximated as A ≈ W H, where W (m x k) and H (k x n) are the NMF factors. NMF requires all entries in W, H and A to be zero or positive.
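As background for the initialization studies below, here is a minimal sketch of how the NMF factors are usually computed once initialized: the classical Lee-Seung multiplicative update rule. This is standard NMF machinery, not the swarm-based method of the cited papers, and the iteration count and small constants are illustrative.

```python
import numpy as np

def nmf_multiplicative(A, k, n_iters=200, seed=0):
    """Lee-Seung multiplicative updates computing A ~ W H with rank k.
    Non-negativity is preserved because each update only multiplies
    non-negative factors by non-negative ratios."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k)) + 1e-4    # random non-negative initialization
    H = rng.random((k, n)) + 1e-4
    eps = 1e-12                      # guard against division by zero
    for _ in range(n_iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the final factorization depends on the starting point, a better initialization of W and H (the topic of the papers below) directly improves the achievable approximation error.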
Figure 5 - Scheme of very coarse NMF approximation with very low rank k. Although k is significantly smaller than m and n, the typical structure of the original data matrix can be retained (note the three different groups of data objects in the left, middle, and right part of A).
Optimization Strategy 1 – Initialization
Algorithm 1 – Pseudo code for the initialization procedure for NMF factors W and H. The two for-loops in lines 4 and 10 can be executed concurrently. SIO = Swarm Intelligence Optimization
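The key property exploited by Algorithm 1 can be sketched as follows: the Frobenius error decomposes over rows, so each row of W can be initialized independently (and hence concurrently) against its own objective. The `optimizer` interface below is an assumption; the papers plug in swarm intelligence optimizers here, while this sketch uses a simple random search as a stand-in.

```python
import numpy as np

def init_W_rowwise(A, H, optimizer, n_evals=100):
    """Row-wise initialization in the spirit of Algorithm 1: given a fixed H,
    each row w_i of W is optimized independently to minimize ||A[i] - w_i H||.
    'optimizer' is any black-box minimizer over k dimensions."""
    m, k = A.shape[0], H.shape[0]
    W = np.empty((m, k))
    for i in range(m):                   # rows are independent -> parallelizable
        obj = lambda w: np.linalg.norm(A[i] - w @ H)
        W[i] = optimizer(obj, dim=k, n_evals=n_evals)
    return W

def random_search(obj, dim, n_evals):
    """Stand-in for a swarm optimizer: keep the best of n_evals random points."""
    rng = np.random.default_rng(0)
    cands = rng.random((n_evals, dim))
    return cands[np.argmin([obj(c) for c in cands])]
```

Columns of H are initialized the same way by transposing the problem, which is why the two for-loops in Algorithm 1 can run concurrently.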
Evaluation of Optimization Strategy 1
Initialization: Before evaluating the improvement in NMF approximation quality as such, we first measure the initial error after initializing W and H (before running the NMF algorithm). Figure 3 and Figure 4 show the average approximation error (i.e. the Frobenius norm / fitness) per row (left) and per column (right) for the data set DS-RAND.
Figure 3 – Left-hand side: average approximation error per row (after initializing the rows of W). Right-hand side: average approximation error per column (after initializing H). NMF rank k = 5. Legends are ordered by approximation error (top = worst, bottom = best).
For more details about this paper, please see [8][9].
References
[1] Y. Tan and Y. Zhu (2010). Fireworks algorithm for optimization. ICSI 2010, Part I, LNCS 6145, pp. 355-364.
[2] D. Bratton and J. Kennedy (2007). Defining a standard for particle swarm optimization. Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pp. 120-127.
[3] J. Kennedy (2006). Swarm intelligence. In: Handbook of Nature-Inspired and Innovative Computing, Springer, Chapter 6, pp. 187-219.
[4] J. Kennedy and R. Eberhart (1995). Particle swarm optimization. Proceedings of the 1995 IEEE International Conference on Neural Networks (Perth, Australia), IEEE Service Center, Piscataway, NJ, IV, pp. 1942-1948.
[5] M. Dorigo, V. Maniezzo and A. Colorni (1996). Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 26(1), pp. 29-41.
[6] J. Holland (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor.
[7] C. J. A. B. Filho, F. B. de Lima Neto, A. J. C. C. Lins, A. I. S. Nascimento and M. P. Lima (2008). A novel search algorithm based on fish school behavior. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 2646-2651.
[8] A. Janecek and Y. Tan (2011). Swarm intelligence for non-negative matrix factorization. IJSIR, 2011.
[9] A. Janecek and Y. Tan (2011). Using population based algorithms for initializing nonnegative matrix factorization. ICSI 2011, Part II, LNCS 6729, pp. 307-316.