GCRIS Repository

Browsing by Author "Sen, Tarik Uveys"

Now showing 1 - 3 of 3
    Conference Object
    Text Classification Experiments on Contextual Graphs Built by N-Gram Series
    (Springer International Publishing AG, 2025) Sen, Tarik Uveys; Yakit, Mehmet Can; Gumus, Mehmet Semih; Abar, Orhan; Bakal, Gokhan
    Traditional n-gram textual features, commonly employed in conventional machine learning models, offer lower performance on high-volume datasets than modern deep learning algorithms, which have been studied intensively over the past decade. The main reason for this performance disparity is that deep learning approaches handle textual data through word vector space representations, capturing contextually hidden information more effectively. Nonetheless, the potential of the n-gram feature set to reflect context remains open to further investigation. In particular, building graphs from discriminative n-gram series with high classification power has never been fully exploited by researchers. Hence, the main goal of this study is to contribute to classification power by including long-range neighborhood relationships for each word in the word embedding representations. To achieve this goal, we transformed the textual data into a graph structure using n-gram series and then trained a graph convolutional network model. Consequently, we obtained contextually enriched word embeddings and observed an F1-score improvement from 0.78 to 0.80 when we integrated these convolution-based word embeddings into an LSTM model. This research contributes to improving classification capabilities by leveraging graph structures derived from discriminative n-gram series.
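    The pipeline the abstract describes (n-gram co-occurrence graph → graph convolution → contextual word embeddings) can be sketched minimally in NumPy. This is an illustrative toy, not the authors' implementation: the corpus, the bigram-based edge rule (standing in for their discriminative n-gram selection), and the 8-dimensional embedding size are all assumptions.

    ```python
    import numpy as np

    # Toy corpus: nodes are words; edges link words that co-occur in a bigram
    # (a hypothetical stand-in for the paper's discriminative n-gram series).
    corpus = ["the movie was great", "the movie was awful", "great acting"]
    vocab = sorted({w for doc in corpus for w in doc.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)

    # Build a symmetric adjacency matrix from bigram co-occurrence.
    A = np.zeros((n, n))
    for doc in corpus:
        toks = doc.split()
        for a, b in zip(toks, toks[1:]):
            A[idx[a], idx[b]] = A[idx[b], idx[a]] = 1.0

    # One graph-convolution layer: H = ReLU(D^-1/2 (A + I) D^-1/2 X W)
    A_hat = A + np.eye(n)                  # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)        # symmetric normalization
    X = np.eye(n)                          # one-hot initial node features
    rng = np.random.default_rng(0)
    W = rng.normal(size=(n, 8))            # 8-dim embeddings (assumed size)
    H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

    print(H.shape)  # one contextually smoothed embedding per vocabulary word
    ```

    Each row of `H` mixes a word's own features with its graph neighbors', which is how the convolution injects long-range neighborhood information before the embeddings are handed to a downstream classifier such as an LSTM.
    
    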
    Conference Object
    Citation - Scopus: 4
    A Transfer Learning Application on the Reliability of Psychological Drugs' Comments
    (Institute of Electrical and Electronics Engineers Inc., 2023) Sen, Tarik Uveys; Bakal, Gokhan
    As digitalization and the Internet continue to gain popularity, the accuracy of personal reviews and opinions becomes a critical issue. This applies in particular to patients taking psychological drugs, for whom accurate information is crucial, as it is for other patients and medical professionals. In this study, we analyze drug reviews from drugs.com to determine the effectiveness of reviews for psychological drugs. Our dataset includes over 200,000 drug reviews, which we labeled as positive, negative, or neutral according to their rating scores. We apply machine learning (ML) models, including Logistic Regression, Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) algorithms, to predict the sentiment class of each review. Our results demonstrate an F1-weighted score of 85.3% for the LSTM model. By applying the transfer learning technique, we further improved the LSTM model's F1 score by nearly 3%. Our findings showed that there is no contextual difference between comments written by patients suffering from psychological diseases and those written by patients with other diseases.
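    The transfer-learning idea the abstract describes (pre-train on a broad review corpus, then fine-tune on the psychological-drug subset) can be illustrated with a tiny bag-of-words logistic regression. Everything here is a hypothetical stand-in for the drugs.com data and the paper's neural models: the vocabulary, both mini-corpora, and the training hyperparameters are assumptions.

    ```python
    import numpy as np

    # Minimal logistic regression trained by gradient descent; the key point
    # is that fine-tuning STARTS from the source-domain weights (transfer).
    vocab = ["great", "effective", "helped", "terrible", "sick", "worse"]

    def featurize(text):
        toks = text.split()
        return np.array([float(w in toks) for w in vocab])

    def train(X, y, w, lr=0.5, epochs=200):
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid probabilities
            w -= lr * X.T @ (p - y) / len(y)   # cross-entropy gradient step
        return w

    # Source domain: general drug reviews (hypothetical).
    src = [("great effective", 1), ("terrible sick", 0),
           ("effective", 1), ("worse", 0)]
    Xs = np.array([featurize(t) for t, _ in src])
    ys = np.array([y for _, y in src], dtype=float)

    # Target domain: psychological-drug reviews (much smaller, hypothetical).
    tgt = [("helped great", 1), ("worse terrible", 0)]
    Xt = np.array([featurize(t) for t, _ in tgt])
    yt = np.array([y for _, y in tgt], dtype=float)

    w = train(Xs, ys, np.zeros(len(vocab)))    # pre-train on source domain
    w = train(Xt, yt, w, epochs=50)            # fine-tune: weights carry over

    score = 1.0 / (1.0 + np.exp(-featurize("helped effective") @ w))
    print(round(score, 3))  # > 0.5 → predicted positive
    ```

    Because the fine-tuning pass reuses the source-domain weights rather than re-initializing, knowledge learned from the larger corpus transfers to the smaller target domain, mirroring (in miniature) the roughly 3% F1 gain the study reports for its LSTM.
    
    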
    Article
    Citation - WoS: 3
    Citation - Scopus: 4
    Combining N-Grams and Graph Convolution for Text Classification
    (Elsevier, 2025) Sen, Tarik Uveys; Yakit, Mehmet Can; Gumus, Mehmet Semih; Abar, Orhan; Bakal, Gokhan
    Text classification, a cornerstone of natural language processing (NLP), finds applications in diverse areas, from sentiment analysis to topic categorization. While deep learning models have recently dominated the field, traditional n-gram-driven approaches often struggle to achieve comparable performance, particularly on large datasets. This gap largely stems from deep learning's superior ability to capture contextual information through word embeddings. This paper explores a novel approach to leverage the often-overlooked power of n-gram features for enriching word representations and boosting text classification accuracy. We propose a method that transforms textual data into graph structures, utilizing discriminative n-gram series to establish long-range relationships between words. By training a graph convolution network on these graphs, we derive contextually enhanced word embeddings that encapsulate dependencies extending beyond local contexts. Our experiments demonstrate that integrating these enriched embeddings into a long short-term memory (LSTM) model for text classification leads to improvements of around 2% in classification performance across diverse datasets. This achievement highlights the synergy of combining traditional n-gram features with graph-based deep learning techniques for building more powerful text classifiers.