RECENT POSTS

Fine-Tuning: QLoRA

The goal of this post is to demonstrate how to fine-tune an LLM (Llama-3.1-8B-Instruct) using QLoRA to solve a classic machine learning task: classifying emails as spam or not spam. If you haven’t read the previous post on fine-tuning yet, I highly recommend doing so, as it covers the foundational concepts, approaches, parameters, and other…

Fine-Tuning: Getting Started

Fine-tuning is the process of further training a previously trained model (usually a general-purpose one) so that it adapts to a more specific task or dataset. The goal is to specialize the model by leveraging the knowledge it has already acquired during pretraining, without having to train it from scratch. During fine-tuning, only part of the…

Agent2Agent and Model Context Protocol

This post aims to demonstrate how to integrate the Model Context Protocol (MCP) into the example from the previous article, where I used the Agent2Agent Protocol (A2A) to implement a CRUD system. I therefore highly recommend reading the previous post first, as it provides the foundation needed to understand this content. For better…