Direct preference optimization (preview)
Direct preference optimization (DPO) is an alignment technique for large language models that adjusts model weights based on human preferences. It differs from reinforcement learning from human feedback (RLHF) in that it doesn't require fitting a reward model and trains on simpler binary preference data. DPO is computationally lighter and faster than RLHF, while being equally effective at alignment.
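For background, the standard DPO objective from the research literature (not an Azure-specific detail) makes this concrete: the preferred completion y_w and the non-preferred completion y_l are compared directly through the policy being trained and a frozen reference model, which is why no separate reward model needs to be fitted.

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
  \left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```

Here β controls how far the fine-tuned policy is allowed to drift from the reference model, and σ is the logistic function.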
Why is DPO useful?

DPO is especially useful in scenarios where there's no clear-cut correct answer and subjective elements like tone, style, or specific content preferences matter. The approach also lets the model learn from both positive examples (what's considered correct or ideal) and negative examples (what's less desired or incorrect). DPO also makes it easier to assemble high-quality training datasets: while many organizations struggle to produce sufficiently large datasets for supervised fine-tuning, they often already have preference data collected from user logs, A/B tests, or smaller manual annotation efforts.

Direct preference optimization dataset format
Direct preference optimization files have a different format than supervised fine-tuning. You provide a "conversation" containing the system message and the initial user message, and then "completions" with paired preference data. You can only provide two completions. The dataset uses three top-level fields:

| Field | Required | Description |
|---|---|---|
| input | Yes | Contains the system message and the initial user message |
| preferred_output | Yes | Must contain at least one assistant message (roles: assistant, tool only) |
| non_preferred_output | Yes | Must contain at least one assistant message (roles: assistant, tool only) |
jsonl format:
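The entry below is an illustrative sketch based on the fields described above; the message content is made up, and the record is shown pretty-printed for readability, whereas in the actual jsonl file each record occupies a single line.

```json
{
  "input": {
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Summarize the plot of Hamlet in one sentence." }
    ]
  },
  "preferred_output": [
    { "role": "assistant", "content": "Prince Hamlet seeks revenge for his father's murder, and his hesitation leads to tragedy for nearly everyone at the Danish court." }
  ],
  "non_preferred_output": [
    { "role": "assistant", "content": "Hamlet is a play about a prince." }
  ]
}
```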
Direct preference optimization model support
The following models support direct preference optimization fine-tuning:

| Model | DPO support | Region availability |
|---|---|---|
| gpt-4o-2024-08-06 | Yes | See fine-tuning models |
| gpt-4.1-2025-04-14 | Yes | See fine-tuning models |
| gpt-4.1-mini-2025-04-14 | Yes | See fine-tuning models |
How to use direct preference optimization fine-tuning
- Navigate to Build in the top section of Azure AI Foundry.
- Select Fine-tune from the side menu.
- Prepare jsonl datasets in the preference format (a validation sketch follows this list).
- Select a model, and then select Direct Preference Optimization as the customization method.
- Upload your training and validation datasets, and preview them as needed.
- Select hyperparameters; the defaults are recommended for initial experimentation.
- Review the selections and create a fine-tuning job.
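For the dataset preparation step, the sketch below is one minimal way to sanity-check a preference file before uploading it. The file name and checks are assumptions based on the field descriptions earlier in this article, not an official validator.

```python
import json

# Hypothetical file name; point this at your own training or validation file.
DATASET_PATH = "dpo_training_data.jsonl"

REQUIRED_FIELDS = ("input", "preferred_output", "non_preferred_output")


def validate_record(record: dict, line_number: int) -> list[str]:
    """Return a list of problems found in one preference record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in record:
            problems.append(f"line {line_number}: missing field '{field}'")
    # Each output must be a non-empty list of assistant (or tool) messages.
    for field in ("preferred_output", "non_preferred_output"):
        messages = record.get(field)
        if not isinstance(messages, list) or not messages:
            problems.append(f"line {line_number}: '{field}' must be a non-empty list of messages")
            continue
        roles = {m.get("role") for m in messages if isinstance(m, dict)}
        if not roles or not roles <= {"assistant", "tool"}:
            problems.append(f"line {line_number}: '{field}' must contain only assistant or tool messages")
    return problems


def main() -> None:
    problems = []
    with open(DATASET_PATH, encoding="utf-8") as handle:
        for line_number, raw_line in enumerate(handle, start=1):
            if not raw_line.strip():
                continue  # skip blank lines
            try:
                record = json.loads(raw_line)
            except json.JSONDecodeError as error:
                problems.append(f"line {line_number}: invalid JSON ({error})")
                continue
            problems.extend(validate_record(record, line_number))
    print("\n".join(problems) if problems else "No problems found.")


if __name__ == "__main__":
    main()
```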
Direct preference optimization - REST API
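The REST reference documentation is the source of truth for the exact route and api-version. The sketch below is a hedged example of how creating a DPO fine-tuning job might look, assuming the Azure OpenAI fine-tuning jobs endpoint and a method block that mirrors the OpenAI DPO fine-tuning schema; the resource name, api-version, file IDs, and beta value are placeholders.

```python
import os
import requests

# Placeholder values: replace with your resource name, a DPO-capable API version, and your uploaded file IDs.
RESOURCE = "https://YOUR-RESOURCE-NAME.openai.azure.com"
API_VERSION = "2025-02-01-preview"  # assumption: use the latest version that supports DPO
API_KEY = os.environ["AZURE_OPENAI_API_KEY"]

body = {
    "model": "gpt-4o-2024-08-06",
    "training_file": "file-xxxxxxxxxxxxxxxx",    # jsonl file in the preference format
    "validation_file": "file-yyyyyyyyyyyyyyyy",  # optional validation file
    "method": {
        "type": "dpo",
        "dpo": {
            # assumption: beta hyperparameter as in the OpenAI DPO schema
            "hyperparameters": {"beta": 0.1}
        },
    },
}

response = requests.post(
    f"{RESOURCE}/openai/fine_tuning/jobs",
    params={"api-version": API_VERSION},
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
print(response.json())  # includes the fine-tuning job ID and status
```

You can then poll the job status with a GET request to the same route plus the returned job ID.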
Next steps
- Explore the fine-tuning capabilities in the Azure OpenAI fine-tuning tutorial.
- Review fine-tuning model regional availability.
- Learn more about Azure OpenAI quotas.