Employability of Deep Learning Techniques in Sentiment Analysis from Twitter Data

Saksham Agarwal

Volume 7, Issue 2, 2023

Pages: 1–10

Abstract

This study presents a comparative analysis of deep learning techniques for sentiment analysis of Twitter data. Two families of neural networks are examined: convolutional neural networks (CNNs), which have excelled in image processing, and recurrent neural networks (RNNs), in particular long short-term memory (LSTM) networks, which have proven successful in natural language processing (NLP) tasks. The work evaluates and compares ensembles and combinations of CNNs and LSTMs, and additionally assesses different word-embedding techniques, including Word2Vec and Global Vectors for Word Representation (GloVe). All models are evaluated on data from the International Workshop on Semantic Evaluation (SemEval), a well-established benchmark in the field. Various configurations are tested, and the best-performing models are compared. The study contributes to sentiment analysis by analyzing these methods' performance, advantages, and limitations within a consistent testing framework: the same dataset and the same computing environment.
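One common way to combine CNN and LSTM sentiment classifiers of the kind compared in this study is soft voting: averaging the class probabilities each model assigns to a tweet. The sketch below is illustrative only and is not the paper's implementation; the model outputs shown are hypothetical, and the three-way label scheme (negative, neutral, positive) is assumed from the SemEval task setup.

```python
# Soft-voting ensemble over per-model class probabilities.
# Labels follow the SemEval three-way scheme.
LABELS = ("negative", "neutral", "positive")

def soft_vote(model_probs):
    """Average class probabilities across models and return the argmax label.

    model_probs: one probability triple per model, each summing to ~1.0
    over (negative, neutral, positive).
    """
    n = len(model_probs)
    avg = [sum(p[i] for p in model_probs) / n for i in range(len(LABELS))]
    best = max(range(len(LABELS)), key=avg.__getitem__)
    return LABELS[best], avg

# Hypothetical outputs of a CNN and an LSTM for a single tweet:
label, avg = soft_vote([(0.2, 0.3, 0.5),   # CNN prediction
                        (0.1, 0.2, 0.7)])  # LSTM prediction
# label == "positive"
```

Averaging probabilities (soft voting) rather than majority-voting on hard labels lets a confident model outweigh an uncertain one, which is one reason such ensembles often beat their individual members.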




