Authors: Etcil, Mustafa; Kolukisa, Burak; Bakir-Güngör, Burcu
Date deposited: 2025-09-25
Date of publication: 2024
ISBN: 9798350365887
DOI: https://doi.org/10.1109/UBMK63289.2024.10773404
Handle: https://hdl.handle.net/20.500.12573/3793

Abstract: Portfolio optimization is a form of investment management that aims to maximize returns while minimizing risk. However, the inherent complexity and unpredictability of financial markets make this a challenging task. Recent advances in machine learning, particularly deep reinforcement learning (DRL), offer promising solutions by enabling dynamic and adaptive trading strategies. This paper presents a comprehensive evaluation of three actor-critic DRL algorithms applied to portfolio optimization: Advantage Actor-Critic (A2C), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). Each strategy was implemented in both a sentiment-aware and a non-sentiment-aware version, allowing a direct comparison of their performance. The sentiment-aware models incorporated sentiment analysis using FinBERT and knowledge graphs to measure market sentiment from financial news, while the non-sentiment-aware models relied solely on stock prices and technical indicators. Our comparative study demonstrates that incorporating sentiment analysis yields consistently superior risk-adjusted returns and greater portfolio resilience during market fluctuations compared to non-sentiment-aware strategies.
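As a minimal illustration of the distinction the abstract draws (all names, dimensions, and values here are hypothetical, not taken from the paper), the sentiment-aware variant can be thought of as appending a per-asset news sentiment score to the price and technical-indicator features that form the DRL agent's observation:

```python
import numpy as np

def build_observation(prices, indicators, sentiment=None):
    """Concatenate per-asset features into a flat DRL observation vector.

    prices:     (n_assets,) latest close prices
    indicators: (n_assets, n_ind) technical indicators (e.g. MACD, RSI)
    sentiment:  (n_assets,) aggregated news sentiment scores, or None
                for the non-sentiment-aware variant
    """
    parts = [np.asarray(prices, dtype=float).ravel(),
             np.asarray(indicators, dtype=float).ravel()]
    if sentiment is not None:
        parts.append(np.asarray(sentiment, dtype=float).ravel())
    return np.concatenate(parts)

# Hypothetical example: 3 assets, 2 indicators each
prices = [101.2, 54.7, 230.1]
indicators = [[0.3, 55.0], [-0.1, 48.2], [0.8, 61.5]]
sentiment = [0.6, -0.2, 0.1]  # e.g. per-asset scores aggregated from a
                              # FinBERT-style news classifier

obs_plain = build_observation(prices, indicators)            # 9 features
obs_aware = build_observation(prices, indicators, sentiment) # 12 features
```

The two observation shapes correspond to the two model families compared in the paper; everything else (agent, reward, environment) is unchanged between the variants.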
Rights: © 2025 Elsevier B.V. All rights reserved.
Language: en
Access: info:eu-repo/semantics/closedAccess
Keywords: Deep Reinforcement Learning; Knowledge Graphs; Portfolio Management; Sentiment Analysis; Adversarial Machine Learning; Contrastive Learning; Financial Markets; Reinforcement Learning; Actor-Critic; Inherent Complexity; Investment Management; Machine Learning; Portfolio Optimization; Trading Strategies
Title: Evaluating the Impact of Sentiment Analysis on Deep Reinforcement Learning-Based Trading Strategies
Type: Conference Object
DOI: 10.1109/UBMK63289.2024.10773404
Scopus ID: 2-s2.0-85215502592