Browsing by Author "Uluirmak, Bugra Alperen"

Now showing 1 - 1 of 1
Conference Object
Citation - WoS: 1
Citation - Scopus: 1
Fine Tuning DeepSeek and Llama Large Language Models with LoRA
(IEEE, 2025) Uluirmak, Bugra Alperen; Kurban, Rifat
In this paper, Low-Rank Adaptation (LoRA) fine-tuning of two large language models (DeepSeek R1 Distill 8B and Llama 3.1 8B) was performed on a Turkish dataset. Training was carried out on Google Colab using an A100 40 GB GPU, while the testing phase was run on Runpod using an L4 24 GB GPU. The 64.6-thousand-row dataset was transformed into question-answer pairs drawn from the fields of agriculture, education, law, and sustainability. In the testing phase, 40 test questions were posed to each model via the Ollama web UI, and the results were supported with graphs and detailed tables. It was observed that fine-tuning improved the performance of both existing language models.
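
The record contains only the abstract, so the script below is a minimal sketch of the LoRA setup it describes, built on the Hugging Face transformers, datasets, and peft libraries. The model checkpoint name, adapter rank, scaling factor, prompt template, training hyperparameters, and the qa_pairs.jsonl input file are all illustrative assumptions, not values taken from the paper.

```python
# A minimal LoRA fine-tuning sketch with Hugging Face transformers + peft.
# Hyperparameters, the prompt template, and the qa_pairs.jsonl file
# (with "question"/"answer" fields) are illustrative assumptions,
# not the settings reported in the paper.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, TaskType, get_peft_model

MODEL_ID = "meta-llama/Llama-3.1-8B"  # or the DeepSeek R1 Distill 8B checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often ship without a pad token

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# LoRA freezes the base weights and trains small low-rank adapters
# injected into the attention projection matrices.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,              # adapter rank (assumed)
    lora_alpha=32,     # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 8B parameters

def to_features(example):
    # Hypothetical Turkish QA prompt template; the paper's format is not given.
    text = (
        f"### Soru:\n{example['question']}\n\n"
        f"### Cevap:\n{example['answer']}{tokenizer.eos_token}"
    )
    return tokenizer(text, truncation=True, max_length=512)

raw = load_dataset("json", data_files="qa_pairs.jsonl")["train"]
train_ds = raw.map(to_features, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=train_ds,
    # mlm=False yields next-token labels with padding positions masked out.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # writes only the adapter weights, not the 8B base
```

Swapping MODEL_ID between the two checkpoints reuses the same script for both models, and because only the small adapter matrices receive gradients, single-GPU training on the A100 40 GB setup the abstract mentions is plausible.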