Fine-tune LLMs in VS Code Using Google Colab GPUs
.. PLUS: Agent Skills for Claude Code
In today's newsletter:
Apify Agent Skills: Make Coding Agents 10x more powerful
Fine-tuning LLMs in VS Code with Unsloth
Reading time: 4 minutes.
Apify released Agent Skills: pre-built workflows that connect coding agents to Apify Actors for data collection, extraction, and automation.
Each skill packages domain-specific knowledge and tooling. Describe what data you need, and the agent selects the right Actor, handles authentication, extracts the data, and formats the output.
Skills currently available:
Ultimate Scraper - Scrape Instagram, Facebook, TikTok, YouTube, Google Maps, Google Search
Competitor Intelligence - Analyze pricing, content, ads, market positioning across platforms
Brand Reputation Monitoring - Track reviews, ratings, sentiment
Trend Analysis - Discover emerging trends across social platforms
Actor Development - Develop, debug, and deploy Apify Actors
Skills can be extended with your team's domain knowledge and packaged into reusable workflows.
Compatible with Claude Code, Cursor, Codex, Gemini CLI, and other coding agents.
Install using the command:

You can now fine-tune LLMs directly from Visual Studio Code (VS Code), either locally or on Google Colab GPUs through the Colab extension.
In this tutorial, you'll learn how to use Unsloth's open-source training notebooks and connect any fine-tuning notebook in VS Code to a Colab runtime, so you can train on your local GPU or a free Colab GPU.
Let’s get started:
What You Need Before Starting
VS Code
A Google account to authenticate with Colab
The Jupyter extension for VS Code (most setups already have it)
Git (installed by default on most machines)
No local GPU required.
Setting it up
Step 1: Install the Google Colab Extension
Open the Extensions panel in VS Code (Ctrl+Shift+X on Windows/Linux, Cmd+Shift+X on Mac).
Search for "Colab" and install the Google Colab extension from Google.
This extension is what connects your local VS Code environment to a live Colab runtime in the cloud.

Step 2: Clone the Unsloth Notebooks Repository
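To get the notebooks locally, clone the repository and open it in VS Code (this assumes the repo lives at github.com/unslothai/notebooks, the path Unsloth publishes; adjust if yours differs):

```shell
# Clone the Unsloth notebooks repository and open it in VS Code
git clone https://github.com/unslothai/notebooks.git
cd notebooks
code .
```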

The notebooks are organized by model and task. For example, to run Qwen3-4B with GRPO (reinforcement learning from verifiable rewards):

Open whichever notebook fits your use case.
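The GRPO notebooks optimize the model against a programmatic, verifiable reward rather than a learned reward model. A minimal stdlib sketch of what such a reward function looks like (the function name and matching rule here are illustrative, not Unsloth's exact API):

```python
def correctness_reward(completions, answer):
    """Score 1.0 for completions whose final line contains the reference
    answer, 0.0 otherwise. Because the check is a plain program, the
    reward is "verifiable": no learned judge model is needed."""
    rewards = []
    for text in completions:
        stripped = text.strip()
        final_line = stripped.splitlines()[-1] if stripped else ""
        rewards.append(1.0 if answer in final_line else 0.0)
    return rewards

# Two sampled completions for the same prompt, scored against "42":
print(correctness_reward(["Reasoning...\n42", "I think 41"], "42"))  # [1.0, 0.0]
```

In GRPO, a batch of completions is sampled per prompt and scored this way; the relative scores within the group drive the policy update.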

Step 3: Connect to a Colab Runtime
In the notebook toolbar (top right), click Select Kernel, then choose Colab from the dropdown.

VS Code will then prompt you to:
Click + Add New Colab Server
Log in and grant access in the browser window that opens for Google authentication
Return to VS Code once authenticated
You only need to authenticate once.
Step 4: Select Your GPU
After connecting, configure the runtime:
Set Hardware Accelerator to GPU
Select your GPU type. T4 is the standard option on the free tier.
Give the server a name (e.g. "unsloth-qwen3")

Step 5: Select the Python Kernel
Once your Colab server is live, VS Code will display the available kernels from that runtime. Select the Python 3 kernel.
You are now running on Colab's GPU from inside VS Code.
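If you want to confirm the kernel really has GPU access before training, run a quick check in a notebook cell (this assumes PyTorch is preinstalled, as it is on standard Colab images):

```python
# Sanity check: confirm the Colab kernel sees a GPU.
import torch

print(torch.cuda.is_available())       # True on a GPU runtime
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. a T4 on the free tier
```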
Step 6: Run the Notebook
Click Run All in the notebook toolbar, or step through cells manually.
The first few cells install dependencies and take a couple of minutes. After that, training starts: model loading, LoRA adapter setup, dataset preparation, and training. All output appears inline.
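The dataset-preparation cells typically convert raw instruction/response pairs into the chat-message format the tokenizer's chat template expects before tokenization. A stdlib-only sketch of that transformation (field names are illustrative, not Unsloth's exact schema):

```python
import json

def to_chat_format(pairs):
    """Convert (instruction, response) pairs into chat-style records:
    each record holds a list of role/content messages, the shape most
    fine-tuning notebooks feed to the tokenizer's chat template."""
    return [
        {"messages": [
            {"role": "user", "content": instruction},
            {"role": "assistant", "content": response},
        ]}
        for instruction, response in pairs
    ]

pairs = [("What is LoRA?", "A parameter-efficient fine-tuning method.")]
records = to_chat_format(pairs)
print(json.dumps(records[0], indent=2))
```

Writing records like these out as JSONL gives you a dataset you can load in any of the notebooks and adapt to the template the chosen model uses.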
What You Can Fine-Tune with This Setup
Once you are set up, the repo has over 100 notebooks ready to run. Pick any of these tasks and start experimenting immediately:
Conversational and instruction tuning on Llama 3.1/3.2, Qwen3, Gemma 3, Phi 4, and Mistral
Reinforcement learning with GRPO on Qwen3, Llama 3.1, Gemma 3, and Mistral
Vision fine-tuning on Llama 3.2 Vision (11B), Qwen2.5 VL, and Pixtral
Text-to-speech on Sesame CSM, Orpheus, Llasa TTS, and Spark TTS
That’s all for today. Thank you for reading today’s edition. See you in the next issue with more AI Engineering insights.
PS: We curate this AI Engineering content for free, and your support means everything. If you find value in what you read, consider sharing it with a friend or two.
Your feedback is valuable: If there’s a topic you’re stuck on or curious about, reply to this email. We’re building this for you, and your feedback helps shape what we send.
WORK WITH US
Looking to promote your company, product, or service to 160K+ AI developers? Get in touch today by replying to this email.
