Swarm Intelligence PhD Thesis


One reported approach hybridizes Particle Swarm Optimization with Simulated Annealing [ 26 ] and reduces runtime as well as the number of iterations. In general, the values of C1 and C2 are kept constant; an empirically found optimum pair seems to be 2.
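The constant-coefficient update discussed above can be sketched as a generic, textbook PSO step. This is a minimal illustration, not code from any of the surveyed papers; the function name, the inertia weight w, and the default values are assumptions:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One synchronous PSO update with constant acceleration coefficients:
    c1 weights the cognitive (personal-best) pull, c2 the social pull."""
    new_pos, new_vel = [], []
    for x, v, p in zip(positions, velocities, pbest):
        # fresh stochastic factors are drawn per dimension
        nv = [w * vi
              + c1 * random.random() * (pi - xi)
              + c2 * random.random() * (gi - xi)
              for vi, xi, pi, gi in zip(v, x, p, gbest)]
        new_vel.append(nv)
        new_pos.append([xi + vi for xi, vi in zip(x, nv)])
    return new_pos, new_vel
```

The personal-best and global-best bookkeeping is left to the surrounding optimization loop.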

Ratnaweera et al. later proposed time-varying acceleration coefficients as an alternative to keeping them constant. The topology of the swarm of particles establishes the degree of connectivity of its members to the others: it describes the subset of particles with whom a particle can initiate information exchange [ 28 ]. The lBest variant associates a fraction of the total number of particles with the neighborhood of any particular particle.


This structure leads to multiple best particles, one in each neighborhood, and consequently the velocity update equation of the PSO has multiple social attractors. Under such circumstances, the swarm is not attracted towards any single global best but rather towards a combination of subswarm bests. This lowers the convergence speed but significantly increases the chance of finding the global optimum. In the gBest variant, all particles simultaneously influence the social component of the velocity update in the swarm, leading to increased convergence speed and potential stagnation at a local optimum if the true global optimum is not where the best particle of the neighborhood is.
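The lBest/gBest distinction above amounts to how each particle's social attractor is chosen. A minimal ring-topology sketch (the function name and the k-neighbors-per-side convention are assumptions for illustration):

```python
def lbest(pbest_fitness, k=1):
    """Return, for each particle, the index of the fittest particle in its
    ring neighborhood of k neighbors on each side (minimization)."""
    n = len(pbest_fitness)
    bests = []
    for i in range(n):
        # neighborhood indices wrap around the ring
        neigh = [(i + d) % n for d in range(-k, k + 1)]
        bests.append(min(neigh, key=lambda j: pbest_fitness[j]))
    return bests
```

With k = n // 2 every neighborhood covers the whole swarm, so the scheme degenerates to gBest: a single social attractor with the faster convergence (and stagnation risk) described above.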

There have been some fundamental contributions to the development of PSO topologies over the last two decades [ 29 , 30 , 31 ].

In [ 31 ], Mendes et al. examined the influence of various neighborhood topologies on PSO performance. Interested readers can also refer to the recent work by Liu et al. to gain an understanding of topology selection in PSO-driven optimization environments [ 32 ]. In this section, the underlying constraints for convergence of the swarm to an equilibrium point are reviewed.

The above relation can be simplified by replacing the stochastic factors with the acceleration coefficients C1 and C2, such that when C1 and C2 are chosen to satisfy the condition in Equation 12, the swarm converges. This equilibrium point may not be an optimum, and particles may prematurely converge to it. A hybridized PSO implementation integrates the inherent social, cooperative character of the algorithm with tested optimization strategies arising from distinctly different traditional or evolutionary paradigms, towards the central goal of intelligent exploration-exploitation.
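Equation 12 itself is not reproduced in this excerpt. In the commonly analyzed simplified model (deterministic trajectory, single particle, stochastic factors replaced by their expectations, inertia ignored), the classical stability condition takes the form

```latex
0 < C_1 + C_2 < 4,
```

under which the trajectory converges to the weighted mean \((C_1 p + C_2 g)/(C_1 + C_2)\) of the personal best \(p\) and the global best \(g\). This is the standard result from the PSO stability literature, not necessarily the exact form of Equation 12; it is consistent with the remark that the convergence point need not be an optimum.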

This is particularly helpful in offsetting weaknesses of the underlying algorithms and in distributing the randomness in a guided way. The literature on hybrid PSO algorithms is rich and growing by the day; in this section, some of the most notable works as well as a few recent approaches are outlined. Popular approaches to hybridizing GA and PSO involve using the two algorithms sequentially or in parallel, or using GA operators such as selection, mutation and reproduction within the PSO framework.

Authors in [ 36 ] ran one algorithm until a stopping criterion was reached and then used its final solution as the starting point for the other algorithm for fine tuning. How the stopping criterion is chosen varies; one option is to switch between the algorithms when the active one fails to improve upon past results over a chosen number of iterations.
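The stagnation-based switching idea can be sketched as follows. The function name, the patience parameter, and the callable interfaces are hypothetical, not taken from [ 36 ]; minimization is assumed:

```python
def run_switching(pso_step, ga_step, fitness_of_best, population,
                  patience=20, max_iters=500):
    """Alternate between two optimizers, switching whenever the active one
    fails to improve the best fitness for `patience` iterations.
    `pso_step`/`ga_step` each map a population to a new population."""
    active, other = pso_step, ga_step
    best = fitness_of_best(population)
    stale = 0
    for _ in range(max_iters):
        population = active(population)
        f = fitness_of_best(population)
        if f < best:
            best, stale = f, 0       # progress made: reset the counter
        else:
            stale += 1
            if stale >= patience:    # stagnation: hand over to the other method
                active, other = other, active
                stale = 0
    return population, best
```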


In [ 37 ], the first algorithm is terminated once a specified number of iterations has been exceeded. The best particles from the first algorithm populate the particle pool of the second algorithm, and the empty positions are filled with randomly generated particles; this preserves the diversity of the otherwise similarly performing population at the end of the first phase. Authors in [ 37 ] also put forth the idea of exchanging the fittest particles between GA and PSO running in parallel for a fixed number of iterations. Yang et al. used this method to optimize three unconstrained and three constrained problems.
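The elitist hand-over between the two phases can be sketched as below; all names and the interface are illustrative, since [ 37 ] does not prescribe this exact form:

```python
import random

def seed_second_phase(pool, fitness, n_keep, n_total, dim, low, high):
    """Carry the n_keep fittest particles into the next algorithm's
    population and fill the remaining slots with fresh random particles
    to restore diversity (minimization)."""
    survivors = sorted(pool, key=fitness)[:n_keep]
    fresh = [[random.uniform(low, high) for _ in range(dim)]
             for _ in range(n_total - n_keep)]
    return survivors + fresh
```

Injecting random individuals alongside the elites is what keeps the second phase from starting with a near-duplicate, prematurely converged population.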

Li et al. and Valdez et al. proposed further hybrids. In the latter, simple fuzzy rules were used to decide whether to consider GA or PSO particles, to change their parameters, or to take a decision. One such method was tested on the Indian Pines hyperspectral dataset as well as for road detection; it automatically selected the most informative features within an acceptable processing time and did not require the user to set the number of desired features beforehand.

Benvidi et al. applied a hybrid approach to the analysis of food colorants. Results indicated that the designed model accurately determined concentrations in real as well as synthetic samples, and the method emerged as a powerful tool for estimating the concentration of food colorants with a high degree of spectral overlap using a nonlinear artificial neural network. Yu et al. and Nik et al. reported further variants in which a GA operator initiates reproduction when particles stagnate; one such hybrid was named DPSO with mutation-crossover. In a clustering application, the exploration ability of the algorithm was first used to find an initial kernel of solutions containing cluster centroids, which was subsequently used by k-means in a local search.
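The division of labor above (global search finds candidate centroids, k-means refines them locally) can be illustrated with a plain k-means refinement step. The function name is hypothetical, and decoding the best particle into a list of centroids is assumed to have happened already:

```python
def kmeans_refine(points, centroids, iters=10):
    """Local k-means refinement of centroids delivered by a global search,
    e.g. the best PSO particle decoded as a list of centroids."""
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid's cluster
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # update step: move each centroid to its cluster mean
        # (an empty cluster keeps its previous centroid)
        centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids
```

Because k-means only descends to the nearest local optimum, the quality of the final clustering hinges on the kernel of centroids the swarm supplies.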

For treating constrained optimization problems, Garg used PSO to operate in the direction of improving the objective vector while using GA to update the decision vectors [ 48 ].


In [ 49 ], Zhang et al. tested a hybrid PSO and GA method with a small population to optimize five engine operating parameters: EGR rate, pilot timing, pilot ratio, main injection timing, and injection pressure. Results demonstrated significant speed-up and superior optimization compared to GA alone. A further set of results indicated that DEC during the spring equinox, summer solstice, autumnal equinox and winter solstice increased by approximately 1.

Differential evolution (DE), by Price and Storn [ 63 ], is a very popular and effective metaheuristic for solving global optimization problems. Several approaches to hybridizing DE with PSO exist in the literature, some of which are elaborated in what follows. Hendtlass [ 64 ] introduced a combination of particle swarm and differential evolution algorithms (SDEA) and tested it on a graduated set of trial problems.

The SDEA algorithm works the same way as a particle swarm algorithm, except that DE is run intermittently to move particles from worse-performing areas to better ones. Experiments on a set of four benchmark problems, viz. the Goldstein-Price function, the six-hump camel back function, the Timbo2 function and the n-dimensional 3 Potholes function, showed improvements in performance.

It was noted that the new algorithm required more fitness evaluations, and that it would be feasible to fall back on the component swarm-based algorithm for problems with computationally heavy fitness functions. A related strategy employed the two kinds of operations at random, rather than a combination of both at the same time.
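The random-switch idea (applying either a DE move or a PSO move to a particle, never both at once) can be sketched as follows. The function name, the p_de probability, and the DE/rand/1 mutation are assumptions for illustration, not the exact scheme of any cited work:

```python
import random

def depso_move(i, pop, vel, pbest, gbest,
               w=0.7, c1=1.5, c2=1.5, F=0.5, p_de=0.5):
    """Move particle i either by a DE/rand/1 mutation or by a standard PSO
    velocity update, chosen at random with probability p_de for DE."""
    x = pop[i]
    if random.random() < p_de:
        # DE branch: perturb a random base vector with a scaled difference
        # of two other randomly chosen particles (requires len(pop) >= 4)
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        new_x = [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
        new_v = vel[i]  # velocity carried over unchanged
    else:
        # PSO branch: usual inertia + cognitive + social update
        new_v = [w * vi
                 + c1 * random.random() * (pi - xi)
                 + c2 * random.random() * (gi - xi)
                 for vi, xi, pi, gi in zip(vel[i], x, pbest[i], gbest)]
        new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```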

Talbi and Batouche [ 66 ] used DEPSO to approach the multimodal rigid-body image registration problem by finding the optimal transformation that superimposes two images through maximization of mutual information. Hao et al. and Das et al. proposed further DEPSO variants; the modified algorithm was used to optimize well-known benchmarks as well as constrained optimization problems, and the authors demonstrated the superiority of the proposed method, achieved through a synergism between the two underlying multi-agent search processes, PSO and DE. In a filter design application, two different fitness functions were considered: one based on passband and stopband ripple, the other on the MSE between the desired and obtained responses.

While promising results were obtained with respect to performance and convergence time, it was noted that the DEPSO update could also be applied to the personal best position instead of the global best. Vaisakh et al. applied a DEPSO variant to a power system optimization problem; an IEEE bus test system is used to illustrate its effectiveness, and the results confirm the superiority of the proposed algorithm.

Huang et al. presented a computational example supporting the claim that the hybrid is an efficient method to estimate and back-analyze the mechanics parameters of systems. Xu et al. contributed a further DE-PSO hybrid. Xiao and Zuo [ 77 ] used a multipopulation strategy to diversify the population, employing every subpopulation at a different peak and subsequently using a hybrid DEPSO operator to find the optimum in each.