Fine Tuning DeepSeek and Llama Large Language Models with LoRA

Date

2025

Journal Title

Journal ISSN

Volume Title

Publisher

IEEE

Open Access Color

Green Open Access

No

OpenAIRE Downloads

OpenAIRE Views

Publicly Funded

No
Impulse
Average
Influence
Average
Popularity
Average

Research Projects

Journal Issue

Abstract

In this paper, Low-Rank Adaptation (LoRA) fine-tuning of two different large language models (DeepSeek R1 Distill 8B and Llama 3.1 8B) was performed on a Turkish dataset. Training was performed on Google Colab using an A100 40 GB GPU, while the testing phase was carried out on Runpod using an L4 24 GB GPU. The 64.6-thousand-row dataset was transformed into question-answer pairs from the fields of agriculture, education, law, and sustainability. In the testing phase, 40 test questions were asked of each model via the Ollama web UI, and the results were supported with graphs and detailed tables. It was observed that fine-tuning improved the performance of the existing language models.
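
The abstract describes the approach only at a high level. A minimal sketch of this style of LoRA fine-tuning, assuming the Hugging Face transformers, peft, and datasets libraries, could look as follows; the base model name, adapter rank, target modules, training hyperparameters, and the hypothetical file turkish_qa.jsonl (with question/answer columns) are illustrative assumptions rather than the paper's reported configuration.

# Minimal sketch of LoRA fine-tuning on a Turkish question-answer dataset.
# All names and hyperparameters below are assumptions for illustration.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "meta-llama/Llama-3.1-8B"   # or a DeepSeek-R1-Distill 8B checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

# Attach low-rank adapters to the attention projections; only these small
# matrices are trained while the 8B base weights stay frozen.
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# Turn each Turkish question-answer row into a single training prompt.
data = load_dataset("json", data_files="turkish_qa.jsonl", split="train")

def to_example(row):
    text = f"Soru: {row['question']}\nCevap: {row['answer']}{tokenizer.eos_token}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = data.map(to_example, remove_columns=data.column_names)

args = TrainingArguments(output_dir="lora-out",
                         per_device_train_batch_size=4,
                         gradient_accumulation_steps=4,
                         num_train_epochs=1,
                         learning_rate=2e-4,
                         bf16=True,
                         logging_steps=50)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False)).train()

model.save_pretrained("lora-adapter")    # only the adapter weights are saved

For the evaluation step described in the abstract, the saved adapter would typically be merged into the base weights and exported (for example to GGUF) before being served through the Ollama web UI; the exact conversion and serving steps used by the authors are not given in this record.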

Description

Keywords

Large Language Models, Fine-Tuning, LoRA, Turkish LLM Dataset

Turkish CoHE Thesis Center URL

Fields of Science

Citation

WoS Q

N/A

Scopus Q

N/A
OpenCitations Citation Count

N/A

Source

33rd Signal Processing and Communications Applications Conference (SIU), Jun 25-28, 2025, Istanbul, Turkiye

Volume

Issue

Start Page

1

End Page

4
PlumX Metrics
Citations

Scopus: 1

Captures

Mendeley Readers: 2

SCOPUS™ Citations

1

Web of Science™ Citations

1

Page Views

2

OpenAlex FWCI
4.81974515

Sustainable Development Goals

2 ZERO HUNGER