International Journal of Information Technology and Applied Sciences (IJITAS) 2023-08-08T14:28:44+00:00 IJITAS Journal Open Journal Systems <p><strong>International Journal of Information Technology and Applied Sciences (IJITAS) -ISSN 2709-2208 (Online)-</strong> is a peer-reviewed international journal that currently publishes 4 issues annually. IJITAS is published by the <a href="" target="_blank" rel="noopener">World Organization of Applied Sciences (WOAS)</a>. IJITAS publishes technical papers, as well as review articles and surveys, describing recent research and development work covering all areas of computer science, information systems, and computer/electrical engineering.</p> <p align="justify"><em><strong>Cross Reference</strong></em></p> <p align="justify"><strong>International Journal of Information Technology and Applied Sciences (IJITAS)</strong> is a member of <strong>CrossRef</strong>. The DOI prefix allotted to IJITAS is <a href=""><strong>10.52502/ijitas</strong></a></p> Performance Comparison of Machine Learning and Deep Learning Models for Sentiment Analysis of Hotel Reviews 2023-08-08T14:26:19+00:00 Muhammad Sanwal Muhammad Mamoon Mazhar <p>This research paper conducts a thorough comparison of BERT and LSTM architectures against conventional machine learning models for the sentiment classification task. To establish a baseline, conventional machine learning algorithms including Logistic Regression (LR), Support Vector Machines (SVM), Random Forest (RF), Decision Tree (DT), and BernoulliNB are utilized. The BERT and LSTM models, along with the machine learning models, have been implemented and trained on a dataset of hotel reviews, and their performance has been assessed using multiple metrics: accuracy, precision, recall, and F1-score. 
In text classification and sentiment analysis tasks, the experimental results highlight the superior effectiveness of the BERT and LSTM models compared to traditional machine learning algorithms. The BERT model distinguishes itself through bidirectional training and contextual embeddings, showing outstanding performance in capturing contextual information and subtle nuances in textual data. The LSTM technique also attains noteworthy results through its effective modeling of sequential data and preservation of temporal dependencies. This research yields significant findings from the comparative analysis of conventional machine learning models and the BERT and LSTM architectures for text classification and sentiment analysis. The results provide valuable insights into the merits and constraints of each model, aiding researchers and practitioners in making informed decisions when choosing suitable models for specific natural language processing (NLP) tasks.</p> 2023-08-08T00:00:00+00:00 Copyright (c) 2023
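The baseline pipeline described in the abstract (train conventional classifiers on review text, then score them with accuracy, precision, recall, and F1) can be sketched roughly as follows. This is an illustrative sketch only, not the authors' actual code: the tiny review list, labels, and choice of TF-IDF features are hypothetical stand-ins for the paper's hotel-review dataset.

```python
# Hedged sketch: comparing a few conventional baselines on hypothetical
# hotel reviews, scored with the metrics named in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical stand-in data (the real study uses a hotel-review dataset).
reviews = [
    "great room and friendly staff",
    "terrible service, dirty room",
    "loved the breakfast and the view",
    "noisy and overpriced",
    "clean and comfortable stay",
    "awful experience, never again",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

# Turn raw text into sparse TF-IDF feature vectors.
X = TfidfVectorizer().fit_transform(reviews)

results = {}
for name, clf in [("LR", LogisticRegression()),
                  ("SVM", LinearSVC()),
                  ("BernoulliNB", BernoulliNB())]:
    clf.fit(X, labels)          # train on the (toy) training set
    pred = clf.predict(X)       # a real study would score a held-out test set
    results[name] = {
        "accuracy": accuracy_score(labels, pred),
        "precision": precision_score(labels, pred),
        "recall": recall_score(labels, pred),
        "f1": f1_score(labels, pred),
    }

for name, metrics in results.items():
    print(name, metrics)
```

In practice the data would be split into training and test sets (e.g. with `train_test_split`) so the metrics reflect generalization rather than training fit; the loop structure makes it straightforward to add RF and DT baselines or swap in predictions from the BERT and LSTM models for a like-for-like comparison.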