Conference Talk 7: Best Practices For Fine Tuning Mistral
- Mastering LLMs Course Notes: My notes from the course Mastering LLMs: A Conference For Developers & Data Scientists by Hamel Husain and Dan Becker.
Mistral Overview
- Mistral AI: Paris-based team (50+ people) specializing in large language models (LLMs).
- Model Timeline:
- Sept 2023: Mistral 7B released
- Blog Post: Mistral 7B
- Dec 2023: Mixtral 8x7B and Mistral Medium (commercial) released; API platform launched
- Blog Post: Mixtral of experts
- Feb 2024: Mistral Small and Mistral Large (flagship) released
- Blog Post: Au Large
- Feb 2024: Le Chat - free conversational AI interface launched
- Blog Post: Le Chat
- Apr 2024: Open-source Mixtral 8x22B model released
- Blog Post: Cheaper, Better, Faster, Stronger
- May 2024: Codestral - specialized model for code generation (80+ languages)
- Blog Post: Codestral: Hello, World!
- LangChain Tutorial: Self-correcting code assistants with Codestral
- Model Offerings:
- Open-source (Apache 2.0 License): Mistral 7B, Mixtral 8x7B, Mixtral 8x22B
- Homepage: Open source models
- Docs: Open-weight models
- Enterprise-Grade: Mistral Small, Mistral Large (supports fine-tuning)
- Specialized: Codestral for coding, Embedding model
Customization
- Blog Post: My Tailor is Mistral
- GitHub Repository: mistral-finetune
- Documentation: Model customization
- Developer Examples: Model customization
Benefits of Prompting
- Documentation: Prompting Capabilities
- Out-of-the-box functionality: No data or training required.
- Easy updates: Adaptable to new workflows and prototyping.
Benefits of Fine-Tuning
- Guide: Fine-tuning
- Performance: A fine-tuned model often outperforms prompting, and can even beat larger general-purpose models on the target task.
- Efficiency: Shorter prompts make inference faster and cheaper than packing instructions and examples into a large prompt.
- Task Alignment: Tailored to specific tasks and behaviors.
- Knowledge Integration: Ability to teach new facts and information.
Demos
- GitHub Repository: Mistral Cookbook
- GitHub Repository: Mistral Inference
Mistral API
- GitHub Repository: mistralai/client-python

```bash
pip install -U "mistralai>=0.4.2"
```
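A minimal setup sketch, assuming the 0.4.x Python client and an API key stored in the `MISTRAL_API_KEY` environment variable; the resulting `client` object is what the demo snippets below call `chat()` on:

```python
import os

from mistralai.client import MistralClient

# Assumes MISTRAL_API_KEY is set in your environment.
client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])
```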
Fine-tuned model names have a specific structure:
```text
ft:open-mistral-7b:b6e34a5e:20240531:a29e61db
```

- `ft`: indicates a fine-tuned model.
- `open-mistral-7b`: specifies the base model used.
- `b6e34a5e:20240531:a29e61db`: represents the specific fine-tuned version.
Abstract Generator Demo
- Generates abstracts from research paper titles.
- Trained on title-abstract pairs from arxiv.org.
- Highlights how effective fine-tuning can be for specific tasks.
- Example:

```python
client.chat(
    model='ft:mistral-small-latest:8e2706f0:20240604:d861257a',
    messages=[
        {"role": "user", "content": "Fine-tuning is all you need"}
    ],
    temperature=0.2,
).choices[0].message.content
'Fine-tuning is a common practice in deep learning, where a pre-trained model is fine-tuned on a downstream task.\nIn this paper, we show that fine-tuning is all you need for a wide range of tasks.\nWe propose a simple yet effective fine-tuning recipe that works well for both vision and language tasks.\nOur recipe includes a learning rate warmup stage, a cosine learning rate decay schedule, and a stochastic weight averaging (SWA) stage.\nWe show that our recipe can significantly improve the performance of fine-tuning on a wide range of tasks, including image classification, object detection, instance segmentation, semantic segmentation, and language understanding.\nWe also show that our recipe can improve the performance of fine-tuning on small datasets, where the performance of fine-tuning is usually worse than training from scratch.\nOur recipe is simple and easy to implement, and we hope it will be useful for the deep learning community.'
Medical Chatbot Demo
- Trained on the ai-medical-chatbot dataset from Hugging Face.
- HuggingFace Dataset: ruslanmv/ai-medical-chatbot
- Example:
```python
client.chat(
    model='ft:open-mistral-7b:b6e34a5e:20240531:a29e61db',
    messages=[
        {"role": "user", "content": "Hello doctor, My reverse elbow armpits have developed a darker (my skin color is fair) pigmentation. This pigmentation has also affected the whole of my ..."}
    ],
    temperature=0.2,
).choices[0].message.content
```
'Hi, It seems that you might be having some fungal infection. Apply clotrimazole cream locally. Take tablet fluconazole 150 mg once a week for three weeks. Keep local part clean and dry. Avoid oily and spicy food. Ok and take care.'
News Article Stylist (Economist Style Guide) Demo
Showcases how to generate training data using a larger model (e.g., Mistral Large) when you don’t have an existing dataset.
Process:

1. Define Prompt: “You are a news article stylist following the Economist style guide.”
2. Generate Data: Use Mistral Large to rewrite news articles in the Economist style, providing guidelines and examples (a sketch follows this list).
3. Fine-tune: Train a smaller model (e.g., Mistral 7B) on the generated data.
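A hypothetical sketch of the data-generation step, reusing the `client` from above. The prompt wording, the `raw_articles` list, and the chat-format output records are illustrative assumptions, not the exact code from the talk:

```python
# Hypothetical sketch: use Mistral Large to build rewrite pairs for fine-tuning.
STYLE_PROMPT = (
    "You are a news article stylist following the Economist style guide. "
    "Rewrite the following article accordingly:\n\n"
)

training_examples = []
for article in raw_articles:  # raw_articles: your own list of article strings
    response = client.chat(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": STYLE_PROMPT + article}],
        temperature=0.2,
    )
    rewritten = response.choices[0].message.content
    # Store each pair in the chat format the fine-tuning API expects.
    training_examples.append({
        "messages": [
            {"role": "user", "content": article},
            {"role": "assistant", "content": rewritten},
        ]
    })
```

Each record in `training_examples` can then be written out as one line of a `.jsonl` training file.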
- Example:

```python
news = "Incoming Florida Senate President Bill Galvano named the Naples Republican the Senate's majority leader for the upcoming legislative session. Kathleen Passidomo was unimpressed ..."

response = client.chat(
    model='ft:mistral-small-latest:b6e34a5e:20240604:ee1ab18b',
    messages=[
        {"role": "user", "content": news}
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```
Kathleen Passidomo, a Naples Republican, has been named the Senate's majority leader for the upcoming legislative session by incoming Florida Senate President Bill Galvano. Passidomo was impressed with the appointment because of her good rapport with both Senate Republicans and Democrats and her ability to build consensus. Passidomo agrees that her people skills will be key in shaping the Senate agenda for the coming session, which begins on March 5. Committee meetings will start on December 11. Although the legislative priorities will be the same as last year, including water quality and mental health. In a message to other senators, Galvano praised Passidomo's ability to not only extract favorable consideration from other senators but to maintain the respect of the members. Committee chairmen recognize leadership's ability to guide the Senate to a top spot now sets Passidomo up for even greater things in the future. Assuming she keeps her seat in the 2022 election and Republicans maintain control of the Senate in ...
= "Please carefully review the provided news article and identify all stylistic issues that detract from its clarity, precision, and sophistication. To ensure the accuracy of your analysis, quote the specific text and explain the reason for your suggested improvement." prompt = client.chat( response ='ft:mistral-small-latest:b6e34a5e:20240604:ee1ab18b', model=[ messages"role": "user", "content": prompt+news} { ],=0.2, temperature )print(response.choices[0].message.content)
1. The article starts with a repetitive sentence, "Incoming Florida Senate President Bill Galvano named the Naples Republican the Senate's majority leader for the upcoming legislative session." This can be simplified to avoid repetition.
2. The use of informal language, such as "you can put my office in the men's room for all I care," detracts from the article's professional tone.
3. The article uses indirect speech in some instances, such as "What he was saying is that he was naming her senate majority leader," which can be replaced with direct speech for clarity and conciseness.
4. The sentence "Galvano says he picked Passidomo for the post because she has a good rapport with both Senate Republicans and Democrats and is a consensus builder" can be rephrased to avoid redundancy.
5. The use of parentheses in the sentence "Most of it (the job) is member relations, helping members get their bills heard" can be avoided by rephrasing the sentence.
6. The sentence "The Senate majority leader only gets one vote, just like other senators, so what's the upside for constituents?" can be rephrased to avoid the use of a question and make it more assertive.
7. The sentence "That table includes the Senate president, president pro tempore, in this case Sen. David Simmons, Passidomo as majority leader and perhaps one or two others" can be rephrased for clarity.
8. The sentence "Traditionally, legislators in leadership positions don't file as many bills as rank and file members" can be rephrased to avoid the use of "don't" and make it more assertive.
9. The sentence "Some recent predecessors in the post haven't filed any" can be rephrased to avoid the use of "haven't" and make it more assertive.
10. The sentence "In a message to other senators, Galvano praised Passidomo's efforts on the latter" can be rephrased to avoid the use of "the latter" and make it more precise.
11. The sentence "The role of the Senate majority leader isn't to extract favorable consideration from other senators" can be rephrased to avoid the use of "isn't" and make it more assertive.
12. The sentence "Being named to a top spot now sets Passidomo up for even greater things in the future" can be rephrased to avoid the use of "sets up" and make it more assertive.
13. The sentence "Assuming she keeps her seat in the 2022 election and Republicans maintain control of the Senate in 2022 and beyond, she could be Senate president herself in the session that follows" can be rephrased for clarity and conciseness.
14. The sentence "Passidomo said she was taken by surprise by her sudden promotion" can be rephrased to avoid the use of "was taken by surprise" and make it more assertive.
15. The sentence "Third floor, to be exact" can be avoided as it does not add any significant information to the article.
Mistral Fine-tune API Walkthrough
- Documentation: https://docs.mistral.ai/guides/finetuning/
- Jupyter Notebook: mistral/fine_tune/mistral_finetune_api.ipynb
- Data Preparation:
  - Format: Data must be in JSON Lines (`.jsonl`) format.
  - Size Limits:
    - Training data: each file <= 512 MB (multiple files allowed).
    - Evaluation data: total size <= 1 MB.
  - Reformatting: Use the provided scripts to adapt data from sources like Hugging Face.
  - Validation: The `mistral-finetune` repository includes a data validation script.
- Uploading Data:
  - Documentation: Upload dataset
  - Use the `files.create` function, specifying the file name and purpose (“fine-tune”).
- Creating a Fine-tuning Job:
- Provide file IDs for training and evaluation data.
- Choose the base model (Mistral 7B or Mistral Small).
- Set hyperparameters (e.g., learning rate, number of steps).
- Monitoring Progress:
- Retrieve job status and metrics using the job ID.
- Using the Fine-tuned Model:
- Access the fine-tuned model using the model name reported by the completed job (the full flow is sketched after this list).
- Weight & Biases Integration (Optional):
- Configure API key for tracking metrics and visualizations.
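A condensed sketch of the whole flow, again assuming the 0.4.x Python client and the `client` object from earlier; the file names, step count, and learning rate are placeholders, not recommendations:

```python
from mistralai.models.jobs import TrainingParameters

# Each line of a .jsonl file holds one chat-format example, e.g.:
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}

# 1. Upload training and evaluation files.
with open("train.jsonl", "rb") as f:
    train_file = client.files.create(file=("train.jsonl", f), purpose="fine-tune")
with open("eval.jsonl", "rb") as f:
    eval_file = client.files.create(file=("eval.jsonl", f), purpose="fine-tune")

# 2. Create a fine-tuning job on a base model.
job = client.jobs.create(
    model="open-mistral-7b",
    training_files=[train_file.id],
    validation_files=[eval_file.id],
    hyperparameters=TrainingParameters(training_steps=50, learning_rate=1e-4),
)

# 3. Poll the job by ID until it reports success.
retrieved = client.jobs.retrieve(job.id)
print(retrieved.status)

# 4. Use the fine-tuned model under the name the completed job reports.
response = client.chat(
    model=retrieved.fine_tuned_model,
    messages=[{"role": "user", "content": "Fine-tuning is all you need"}],
    temperature=0.2,
)
print(response.choices[0].message.content)
```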
Getting Started Fine-Tuning Mistral 7B (Local)
Jupyter Notebook: tutorials/mistral_finetune_7b.ipynb
Covers fine-tuning Mistral 7B
Steps:
- Clone Repository:

```bash
git clone https://github.com/mistralai/mistral-finetune.git
```
- Install Dependencies: Follow instructions in the repository.
- Download Model: Download the desired Mistral model (e.g., 7B v0.3).
- Prepare Data: Similar to the API walkthrough.
- Configure Training:
  - Use a configuration file (`.yaml`) to specify data paths, model parameters, and hyperparameters.
  - Adjust the sequence length based on available GPU memory.
- Start Training: Execute the training script.
- Inference:
  - Utilize the `mistral-inference` package.
    - GitHub Repository: mistral-inference

    ```bash
    pip install mistral-inference
    ```

  - Load the tokenizer, base model, and fine-tuned LoRA weights.
  - Generate text (a sketch follows).
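A minimal inference sketch following the pattern in the `mistral-finetune` README; every path is a placeholder for your own downloaded model and LoRA checkpoint, and in older `mistral-inference` releases `Transformer` lives in `mistral_inference.model` instead:

```python
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# Load the tokenizer, the base model, and the fine-tuned LoRA weights.
tokenizer = MistralTokenizer.from_file("/path/to/model/tokenizer.model.v3")
model = Transformer.from_folder("/path/to/model")
model.load_lora("/path/to/lora.safetensors")

# Build a chat request, tokenize it, and generate a completion.
request = ChatCompletionRequest(
    messages=[UserMessage(content="Fine-tuning is all you need")]
)
tokens = tokenizer.encode_chat_completion(request).tokens
out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=256,
    temperature=0.2,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]))
```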