Authors: Uluirmak, Bugra Alperen; Kurban, Rifat
Date: 2025-10-20
Year: 2025
ISBN: 9798331566562; 9798331566555
ISSN: 2165-0608
URI: https://doi.org/10.1109/SIU66497.2025.11112387
Abstract: In this paper, Low-Rank Adaptation (LoRA) fine-tuning of two different large language models (DeepSeek R1 Distill 8B and Llama 3.1 8B) was performed using a Turkish dataset. Training was carried out on Google Colab with an A100 40 GB GPU, while testing was carried out on Runpod with an L4 24 GB GPU. The 64.6-thousand-row dataset was transformed into question-answer pairs from the fields of agriculture, education, law, and sustainability. In the testing phase, 40 test questions were asked of each model via the Ollama web UI, and the results were supported with graphs and detailed tables. It was observed that the performance of the existing language models improved with the fine-tuning method.
Language: tr
Rights: info:eu-repo/semantics/closedAccess
Keywords: Large Language Models; Fine-Tuning; LoRA; Turkish LLM Dataset
Title: Fine Tuning DeepSeek and Llama Large Language Models with LoRA
Alternative Title (Turkish): DeepSeek ve Llama Büyük Dil Modellerinin LoRa ile İnce Ayarı
Type: Conference Object
DOI: 10.1109/SIU66497.2025.11112387
Scopus ID: 2-s2.0-105015366215
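
For context, the sketch below shows how LoRA fine-tuning of a causal language model on question-answer pairs is commonly set up with Hugging Face Transformers and PEFT. The record does not state which libraries, hyperparameters, or file names the authors used, so the model ID, LoRA rank, target modules, and dataset path here are illustrative assumptions only, not the paper's actual configuration.

```python
# Minimal LoRA fine-tuning sketch (assumed libraries: transformers, peft, datasets).
# All model IDs, hyperparameters, and file names below are hypothetical.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "meta-llama/Llama-3.1-8B"  # or a DeepSeek R1 Distill 8B checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

# Wrap the base model with low-rank adapters; only the adapter weights are trained.
lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)

# Hypothetical Turkish question-answer dataset rendered as plain-text prompts.
def to_text(example):
    return {"text": f"Soru: {example['question']}\nCevap: {example['answer']}"}

dataset = load_dataset("json", data_files="turkish_qa.jsonl")["train"].map(to_text)
tokenized = dataset.map(
    lambda e: tokenizer(e["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4, fp16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-adapter")  # saves adapter weights only, not the base model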