A Comparative Analysis of Optimization Techniques from a Machine Learning Perspective

Dr. Rajendra Singh

Volume 7, Issue 3, 2023

Pages: 36-48

Abstract

Optimization techniques are central to the success and efficiency of machine learning (ML) models. This paper provides a comprehensive overview of the optimization techniques used across the ML space, highlighting their theoretical underpinnings, practical applications, and relative merits. We begin with a discussion of gradient-based methods, including Stochastic Gradient Descent (SGD), which are prevalent because of their ability to handle large datasets and complex models efficiently. We then examine second-order methods, such as Newton's method and quasi-Newton methods like BFGS, noting their superior convergence properties at the cost of increased computational overhead. In addition, we discuss recent advances in optimization procedures tailored to specific ML problems, such as hyperparameter tuning and neural architecture search. By providing a comparative analysis of these techniques, we aim to guide practitioners in selecting the most appropriate optimization strategy for their ML applications, while also identifying areas for future research and development in optimization methodology. Machine learning is developing rapidly: it has achieved remarkable theoretical advances and is widely applied in many fields. Optimization, as an essential component of machine learning, has attracted considerable attention from researchers. With the explosive growth of data volumes and the increasing complexity of models, optimization methods in machine learning face a steadily growing set of challenges, and a continuous stream of work on solving the resulting optimization problems has been proposed. A systematic review and comparison of optimization methods from the machine learning perspective is therefore of great significance, as it can offer guidance for advances in both optimization and machine learning research.
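
To make the trade-off sketched above concrete, the following minimal Python example contrasts a first-order method (mini-batch SGD, which repeatedly updates w <- w - lr * gradient computed on a sampled batch) with a quasi-Newton method (BFGS, as implemented in SciPy) on a toy least-squares problem. The dataset, learning rate, batch size, and iteration count are illustrative assumptions chosen for this sketch, not settings taken from the paper.

# Illustrative comparison of a first-order method (mini-batch SGD) and a
# quasi-Newton method (BFGS) on a toy least-squares problem. All sizes and
# hyperparameters below are assumptions for demonstration purposes.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_samples, n_features = 500, 10
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.1 * rng.normal(size=n_samples)

def loss(w):
    # Mean squared error over the full dataset.
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad(w, idx=None):
    # Gradient over a mini-batch (idx) or, if idx is None, the full dataset.
    Xb, yb = (X, y) if idx is None else (X[idx], y[idx])
    return Xb.T @ (Xb @ w - yb) / len(yb)

# Stochastic Gradient Descent: cheap per-step updates on small mini-batches.
w = np.zeros(n_features)
lr, batch = 0.05, 32
for step in range(2000):
    idx = rng.integers(0, n_samples, size=batch)
    w -= lr * grad(w, idx)
print("SGD final loss: ", loss(w))

# Quasi-Newton (BFGS): fewer iterations and faster convergence near the
# optimum, but each iteration uses the full-data gradient and maintains an
# approximate inverse Hessian.
res = minimize(loss, np.zeros(n_features), jac=grad, method="BFGS")
print("BFGS final loss:", res.fun)

On a problem of this size BFGS typically reaches a lower loss in far fewer iterations, but each of its iterations evaluates the full-data gradient and updates a curvature estimate, whereas the SGD loop keeps the per-step cost proportional to the batch size; this is exactly the convergence-versus-overhead trade-off highlighted in the abstract.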

