Intro

This guide walks through fine-tuning an LLM with LoRA using the AdapterHub library. AdapterHub is compatible with many parameter-efficient fine-tuning techniques.


AdapterHub is a framework that simplifies the integration, training, and usage of adapters and other parameter-efficient fine-tuning methods for Transformer-based language models.
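The core idea behind LoRA, which the setup below uses: the pretrained weight matrix W stays frozen and only a low-rank update B·A is trained. A toy pure-Python sketch of this (dimensions and values are made up for illustration, no real model or library involved):

```python
# Toy illustration of the LoRA idea: W stays frozen, only B @ A is trained.
# Dimensions are tiny for readability; real models use d in the thousands.

def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, r = 4, 1                          # hidden size d, LoRA rank r (r << d)
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen weight
B = [[0.1] * r for _ in range(d)]    # d x r, trainable
A = [[0.2] * d for _ in range(r)]    # r x d, trainable

delta_W = matmul(B, A)               # low-rank update, rank <= r
W_eff = [[W[i][j] + delta_W[i][j] for j in range(d)] for i in range(d)]

# Parameter savings: a full update has d*d entries, LoRA trains only 2*d*r.
full_params = d * d
lora_params = d * r + r * d
print(full_params, lora_params)      # → 16 8
```

For a 4096×4096 layer at rank 8, the same arithmetic gives about 65K trainable parameters instead of 16.7M, which is why LoRA fine-tuning fits in far less memory.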

Environment Setup

Prereqs

Set up the virtual environment

  • Create a .venv in scratch
# create a directory to store the environment
mkdir -p ~/scratch/dsgt-arc/llm-lora
 
# create a virtual environment using uv
uv venv ~/scratch/dsgt-arc/llm-lora/.venv
 
# activate it
source ~/scratch/dsgt-arc/llm-lora/.venv/bin/activate
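The create-and-activate pattern above can be sanity-checked with a throwaway environment (`/tmp/demo-venv` is a placeholder path, not the real scratch location):

```shell
# Create and activate a disposable venv to confirm the pattern works
python3 -m venv /tmp/demo-venv
. /tmp/demo-venv/bin/activate
command -v python   # now resolves inside the environment
```

After activation, `python` should resolve to `/tmp/demo-venv/bin/python`; if it still points at the system interpreter, the activate step did not take effect.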
  • Link the scratch .venv into the LoRA project folder and install dependencies from pyproject.toml
# navigate to project directory
cd ~/dsgt-arc/fall-2025-interest-group-projects/project/01-llm-lora/
 
# create a symbolic link to your environment in this project folder
ln -s ~/scratch/dsgt-arc/llm-lora/.venv $(pwd)/.venv
 
# install project dependencies defined in pyproject.toml
uv pip install -e .
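The symlink step can be sanity-checked with throwaway paths before touching the real directories (`/tmp/demo-*` are placeholders):

```shell
# Mimic the scratch-venv / project-folder layout with disposable paths
mkdir -p /tmp/demo-scratch/.venv /tmp/demo-project
ln -sfn /tmp/demo-scratch/.venv /tmp/demo-project/.venv   # -f makes it safe to re-run
readlink /tmp/demo-project/.venv                          # → /tmp/demo-scratch/.venv
```

If `readlink` prints the scratch path, the project's `.venv` is just a pointer to the environment stored in scratch, which is the layout the install step relies on.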

Run the Jupyter notebook in VS Code

  • Copy the Jupyter notebook finetune.ipynb into your user folder
cp finetune.ipynb ../../user/ctio3/01-llm-lora/

  • Open the Jupyter notebook in VS Code and select the venv kernel: Select Interpreter → Enter Interpreter Path

  • Run the notebook and allow it to install dependencies.

Debugging

  • With the .venv activated, create a kernel spec that points to its Python interpreter
python -m ipykernel install --user \
    --name=llm-lora-test \
    --display-name "Python (llm-lora-test)"

Run Jupyter in browser

  • Activate the .venv
source ~/scratch/dsgt-arc/llm-lora/.venv/bin/activate    
  • Launch the Jupyter notebook
jupyter notebook /storage/home/hcoda1/1/ctio3/dsgt-arc/fall-2025-interest-group-projects/user/ctio3/01-llm-lora/finetune.ipynb

LLM LoRA fine-tuning

Exercises

Next