Authors: Sen, Tarik Uveys; Yakit, Mehmet Can; Gumus, Mehmet Semih; Abar, Orhan; Bakal, Gokhan
Date Accessioned: 2025-09-25
Date Available: 2025-09-25
Date Issued: 2025
ISSN: 1568-4946
eISSN: 1872-9681
DOI: https://doi.org/10.1016/j.asoc.2025.113092
Handle: https://hdl.handle.net/20.500.12573/3479
ORCID: Bakal, Mehmet/0000-0003-2897-3894

Abstract: Text classification, a cornerstone of natural language processing (NLP), finds applications in diverse areas, from sentiment analysis to topic categorization. While deep learning models have recently dominated the field, traditional n-gram-driven approaches often struggle to achieve comparable performance, particularly on large datasets. This gap largely stems from deep learning's superior ability to capture contextual information through word embeddings. This paper explores a novel approach that leverages the often-overlooked power of n-gram features to enrich word representations and boost text classification accuracy. We propose a method that transforms textual data into graph structures, utilizing discriminative n-gram series to establish long-range relationships between words. By training a graph convolution network on these graphs, we derive contextually enhanced word embeddings that encapsulate dependencies extending beyond local contexts. Our experiments demonstrate that integrating these enriched embeddings into a long short-term memory (LSTM) model for text classification yields improvements of around 2% in classification performance across diverse datasets. This achievement highlights the synergy of combining traditional n-gram features with graph-based deep learning techniques for building more powerful text classifiers.

Language: en
Rights: info:eu-repo/semantics/closedAccess
Keywords: Text-Graph Transformation; Graph Convolution Network; Deep Learning; Text Mining; Graph Mining
Title: Combining N-Grams and Graph Convolution for Text Classification
Type: Article
DOI: 10.1016/j.asoc.2025.113092
Scopus ID: 2-s2.0-105001822010
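The abstract's pipeline — connect words that co-occur within n-grams into a graph, then propagate features with a graph convolution to obtain context-enriched word embeddings — can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's implementation: the function names (`build_word_graph`, `gcn_layer`), the toy documents, and the use of a single untrained Kipf–Welling-style propagation step are all assumptions; the paper's actual method uses discriminative n-gram selection and a trained network.

```python
# Hedged sketch: word graph from n-gram co-occurrence + one GCN propagation
# step (D^{-1/2} (A+I) D^{-1/2} X W with ReLU), using only NumPy.
import numpy as np

def build_word_graph(docs, n=2):
    """Adjacency over the vocabulary: link words co-occurring in any n-gram."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(vocab)))
    for d in docs:
        toks = d.split()
        for i in range(len(toks) - n + 1):
            gram = toks[i:i + n]
            for a in gram:
                for b in gram:
                    if a != b:
                        A[idx[a], idx[b]] = 1.0  # symmetric by construction
    return vocab, A

def gcn_layer(A, X, W):
    """Single symmetric-normalized graph convolution with ReLU activation."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # degree normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

# Toy corpus (hypothetical); in the paper, graphs come from discriminative n-grams.
docs = ["the movie was great", "the plot was dull"]
vocab, A = build_word_graph(docs, n=2)
rng = np.random.default_rng(0)
X = rng.normal(size=(len(vocab), 8))   # initial word features
W = rng.normal(size=(8, 4))            # weight matrix (fixed here, trained in practice)
emb = gcn_layer(A, X, W)               # contextually smoothed word embeddings
print(emb.shape)                       # one 4-dim embedding per vocabulary word
```

In the full method, embeddings like `emb` would be learned end-to-end and then fed into an LSTM classifier; the point of the sketch is only that n-gram co-occurrence gives the adjacency that lets information flow between words beyond their local context.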