Georgia Tech PACE-ICE Access Guide
https://pace.gatech.edu/ice-cluster/
Prerequisites
- Georgia Tech account (GT username/password)
- VPN access (required when off-campus); use the full GT VPN client, not the browser-based VPN
- Terminal application (built in on Mac/Linux; on Windows, use PowerShell or WSL)
Step 1: Connect to GT VPN (If Off-Campus)
Required whenever you are not on the GT campus network
- Install GT VPN: https://faq.oit.gatech.edu/content/how-do-i-get-started-campus-vpn
- Connect to VPN before accessing PACE
- Verify connection (GT resources should be accessible)
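One way to sanity-check the connection is to test whether the PACE login node answers on the SSH port. This is a convenience sketch, not an official PACE tool; it assumes `nc` (netcat) is installed:

```shell
# Hypothetical VPN sanity check: the port test succeeds only when the
# GT network is reachable (i.e., on campus or with the VPN connected).
HOST=login-ice.pace.gatech.edu
if ! command -v nc >/dev/null 2>&1; then
  echo "nc not installed; skipping check"
elif nc -z -w 5 "$HOST" 22 2>/dev/null; then
  echo "PACE login node reachable -- VPN looks good"
else
  echo "Cannot reach PACE login node -- check your VPN connection"
fi
```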
Step 2: Choose Your Access Method
PACE offers three ways to access the cluster. All three reach the same system; they are just different interfaces:
Method A: SSH (Terminal/Command Line)
Best for: Running scripts, submitting batch jobs, file management
How to access:
ssh your_gt_username@login-ice.pace.gatech.edu
# Example: ssh gpolazzo3@login-ice.pace.gatech.edu
You’ll see a prompt like:
[gpolazzo3@login-ice-1 ~]$
What you can do:
- Navigate files: ls, cd, pwd
- Edit code: nano, vim
- Submit jobs: sbatch job_script.sh
- Check job status: squeue -u $USER
- Load software: module load anaconda3
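If you connect often, an entry in your OpenSSH client config saves typing the full hostname every time. This is a standard OpenSSH feature; the alias name pace-ice below is just an example:

```
# ~/.ssh/config
Host pace-ice
    HostName login-ice.pace.gatech.edu
    User your_gt_username
```

After this, `ssh pace-ice` is equivalent to the full command above, and `scp` accepts the same alias.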
Method B: OnDemand Web Interface (Browser-Based)
Best for: Jupyter notebooks, interactive development, GUI applications
How to access:
- Go to: https://ondemand-ice.pace.gatech.edu
- Log in with GT credentials
- You’ll see a dashboard with options:
Available tools:
- Ignore the Classes section (it’s a course-specific app that, for whatever reason, is visible to all users)
- Jupyter Notebook - Run Python notebooks with GPU
- Interactive Desktop - Full Linux desktop in browser
- File Browser - Upload/download files, shows directory visually
- Job Monitor - Check running jobs
To launch Jupyter:
- Click “Interactive Apps” → “Jupyter Notebook”
- Fill in resources (see “Requesting Resources” below)
- Click “Launch”
- Wait for job to start
- Click “Connect to Jupyter”
Method C: Interactive Desktop (For GUI Applications)
Best for: Using Vivado, GUI tools, visual debugging
How to access:
- Go to: https://ondemand-ice.pace.gatech.edu
- Click “Interactive Apps” → “Interactive Desktop”
- Configure resources
- Launch and connect
- You get a full Linux desktop in your browser
Step 3: Understanding PACE Storage
Your Personal Directories
# Home directory (50 GB limit)
/home/gpolazzo3
# Use for: Personal configs, small scripts, our projects
# Scratch directory (temporary, no backup, no quota)
/storage/scratch1/gpolazzo3
# Use for: Temporary job outputs, intermediate files, anything too large for your home quota
# WARNING: Files deleted after 60 days!
Shared VIP Project Storage
Your VIP group has shared storage - ask your team lead for the exact path
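Given the quotas above, a common pattern is to run jobs on scratch and copy only the final results home. The sketch below uses temporary directories as stand-ins for /home/$USER and /storage/scratch1/$USER so it can run anywhere; on PACE you would substitute the real paths:

```shell
# Stand-ins for the real PACE paths (hypothetical, for illustration only).
HOME_DIR=$(mktemp -d)   # stands in for /home/$USER (50 GB quota)
SCRATCH=$(mktemp -d)    # stands in for /storage/scratch1/$USER (no quota, purged after 60 days)

# A job writes bulky intermediates and a small result file on scratch.
mkdir -p "$SCRATCH/run1"
echo "bulky checkpoint data" > "$SCRATCH/run1/checkpoint.bin"
echo "final accuracy: 0.93"  > "$SCRATCH/run1/results.txt"

# Keep only what matters: copy the small result back to home before the purge.
cp "$SCRATCH/run1/results.txt" "$HOME_DIR/"
ls "$HOME_DIR"
```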
Step 4: Requesting Compute Resources
Understanding Partitions
- ice-gpu - GPU nodes (A100, V100)
- ice-cpu - CPU-only nodes
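To see how busy a partition is, you can parse sinfo-style output and count idle nodes. The sample text below is illustrative, not real ice-gpu data; on PACE you would pipe the real `sinfo -p ice-gpu` output into the same awk command:

```shell
# Sample sinfo-style output (hypothetical numbers, for illustration only).
sample='PARTITION AVAIL NODES STATE
ice-gpu up 4 idle
ice-gpu up 12 alloc
ice-cpu up 20 idle'

# Sum idle nodes per partition. On PACE, replace `echo "$sample"` with
# `sinfo -p ice-gpu` (or omit -p to see all partitions).
echo "$sample" | awk '$4 == "idle" { idle[$1] += $3 }
                      END { for (p in idle) print p, idle[p], "idle nodes" }'
```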
Understanding Accounts
Your account determines resource access. Check yours:
sacctmgr show associations user=$USER format=Account,Partition,QOS
Common accounts:
- coc - College of Computing general access (we are checking with PACE support whether there is a VIP-specific account)
Interactive Jobs (For Testing)
Request a GPU node for interactive work:
salloc --account=coc \
--partition=ice-gpu \
--gres=gpu:A100:1 \
--nodes=1 \
--ntasks-per-node=8 \
--mem=32G \
--time=01:00:00
# Wait for allocation (shows "Granted job allocation")
# You're now on a GPU node!
# When done:
exit
Batch Jobs (For Long Training)
Create a Slurm script (example.slurm):
#!/bin/bash
#SBATCH --job-name=my_experiment
#SBATCH --account=coc
#SBATCH --partition=ice-gpu
#SBATCH --gres=gpu:A100:1
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --mem=32G
#SBATCH --time=04:00:00
#SBATCH --output=logs/%x_%j.out
#SBATCH --error=logs/%x_%j.err
# Load software
module load anaconda3
conda activate my-env
# Run your code
python train_model.py
Submit job:
sbatch example.slurm
Check status:
squeue -u $USER # Your jobs
squeue -p ice-gpu # All GPU jobs
Step 5: Software Management
Using Modules
PACE provides pre-installed software via modules:
# See available modules
module avail
# Load Anaconda
module load anaconda3
# Load CUDA
module load cuda/11.8
Creating Your Own Environment
# Load anaconda
module load anaconda3
# Create environment from YAML
conda env create -f environment.yml
# Or create manually
conda create -n myenv python=3.9
# Activate
conda activate myenv
# Install packages
pip install tensorflow numpy
Helpful Commands
# Check your allocations
pace-check-queue ice-gpu
# Check job status
squeue -u $USER
# Cancel a job
scancel JOB_ID
# Check GPU availability
sinfo -p ice-gpu
# Monitor job output (while running)
tail -f logs/my_job_12345.out
# Copy files to PACE
scp file.txt gpolazzo3@login-ice.pace.gatech.edu:~/
# Copy files from PACE
scp gpolazzo3@login-ice.pace.gatech.edu:~/results.tar.gz .
Getting Help
- PACE Documentation: https://gatech.service-now.com/home?id=kb_article_view&sysparm_article=KB0042102
- Submit Support Ticket: pace-support@gatech.edu