Article Type: Research Paper

Authors

1 Professor, School of Electrical Engineering, Iran University of Science and Technology

2 Ph.D. Student, School of Electrical Engineering, Iran University of Science and Technology

3 M.Sc. Student, School of Electrical Engineering, Iran University of Science and Technology

4 M.Sc. Student in Electrical Engineering, Imam Khomeini Maritime University, Nowshahr

Abstract

Because real targets and clutter in active sonar have very similar physical characteristics, separating these targets is one of the challenging problems for researchers and practitioners in the acoustics field. Multi-Layer Perceptron (MLP) networks are among the most widely used neural networks for classifying real-world targets. Training is one of the most important parts of developing such networks and has attracted considerable attention in recent years. Recursive and gradient-descent methods have long been the conventional choice for training MLP networks. Poor classification accuracy, entrapment in local minima, and low convergence speed are drawbacks of these traditional methods. To overcome them, the use of heuristic and meta-heuristic algorithms has become very common in recent years. This paper uses the hybrid Particle Swarm Optimization and Gravitational Search Algorithm (PSOGSA) to train the MLP network. The Gravitational Search Algorithm (GSA) is a recent meta-heuristic optimization method based on the interaction of gravity and mass. It has been shown that this algorithm has good global-search ability, but it exploits the search space slowly in the final iterations. Given the distinctive ability of Particle Swarm Optimization (PSO) in the exploitation phase, PSO is used to address this shortcoming. The results show that the classifiers based on GSA, PSO, and PSOGSA classify the sonar dataset with accuracies of 92.7500%, 93.6741%, and 94.42308%, respectively. Moreover, the convergence speed of the hybrid algorithm is better than that of the two benchmark algorithms.
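For readers interested in how the two search mechanisms are combined, the sketch below illustrates the kind of velocity update used in hybrid PSOGSA, following the general form proposed by Mirjalili and Hashim: the GSA gravitational acceleration drives exploration, while the PSO global-best term pulls agents toward the best solution found so far. This is a minimal illustrative sketch, not the authors' implementation; the parameter names and values (w, c1, c2, G) are assumptions.

```python
import numpy as np

def psogsa_step(X, V, fit, G, w=0.6, c1=0.5, c2=1.5, eps=1e-10):
    """One illustrative PSOGSA iteration over an agent population.

    X   : (n, d) agent positions (e.g., candidate MLP weight vectors)
    V   : (n, d) agent velocities
    fit : (n,) fitness of each agent (lower is better)
    G   : gravitational constant, typically decayed over iterations
    """
    n, d = X.shape
    # GSA part: map fitness to normalized masses (best agent -> largest mass).
    worst, best = fit.max(), fit.min()
    m = (worst - fit) / (worst - best + eps)
    M = m / (m.sum() + eps)
    # Gravitational acceleration exerted on each agent by all the others.
    acc = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            if i != j:
                diff = X[j] - X[i]
                acc[i] += np.random.rand() * G * M[j] * diff / (np.linalg.norm(diff) + eps)
    # PSO part: pull toward the global best position of the population.
    gbest = X[np.argmin(fit)]
    V = (w * V
         + c1 * np.random.rand(n, d) * acc
         + c2 * np.random.rand(n, d) * (gbest - X))
    return X + V, V
```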

Article Title [English]

Classification of Sonar Target using Hybrid Particle Swarm and Gravitational Search

Authors [English]

  • M. R. Mosavi 1
  • M. Khishe 2
  • A. Moridi 3

1 Department of Electrical Engineering, Iran University of Science and Technology

2 Department of Electrical Engineering, Iran University of Science and Technology

3 Department of Electrical Engineering, Iran University of Science and Technology

Abstract [English]

Due to the very similar physical properties of real targets and clutter in active sonar, separating these targets is a persistent challenge for researchers and industrial practitioners in the acoustics field. The Multi-Layer Perceptron (MLP) is one of the most widely used Neural Networks (NNs) for real-world classification purposes. Training is an important part of developing these NNs and has recently attracted many researchers. Recursive and gradient-descent methods have traditionally been used to train MLP NNs. Poor classification accuracy, entrapment in local minima, and low convergence speed are the disadvantages of these traditional methods. To overcome these disadvantages, the use of heuristic and meta-heuristic algorithms has become very common in recent years. This paper presents a hybrid of Particle Swarm Optimization and the Gravitational Search Algorithm (PSOGSA) for training MLP networks. GSA, a recent meta-heuristic optimization method, is based on the interaction of gravity and mass. It has been shown that this algorithm has good global-search ability, but it suffers from slow exploitation of the search space in the final iterations. Given the distinctive ability of PSO in the exploitation phase, the hybrid method uses PSO to resolve this shortcoming. The results show that the classifiers based on GSA, PSO, and PSOGSA classify the sonar dataset with accuracies of 92.7500%, 93.6741%, and 94.42308%, respectively. The convergence speed of the hybrid algorithm is also better than that of the two benchmark algorithms.
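To make the MLP-training formulation concrete, the following hedged sketch shows one common way such a classifier can be encoded for a population-based optimizer: each search agent is a flat vector holding all MLP weights and biases, and its fitness is the classification error on the sonar samples. The layer sizes (60 inputs, as in the benchmark sonar dataset, and 12 hidden neurons) and the function names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def decode(vec, n_in=60, n_hid=12, n_out=1):
    """Unpack one flat search-agent vector into MLP weights and biases."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid];                              i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def fitness(vec, X, y):
    """Fitness of an agent = misclassification rate of the decoded MLP."""
    W1, b1, W2, b2 = decode(vec)
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))    # sigmoid hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output layer
    pred = (out.ravel() > 0.5).astype(int)      # threshold: target vs. clutter
    return np.mean(pred != y)                   # lower is better
```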

Keywords [English]

  • Classification
  • Sonar
  • Clutter
  • Particle Swarm Optimization
  • Gravitational Search Algorithm
