Maximize AI capabilities, lower resource use, and ensure cost-effective generative AI deployments across your applications.
Run Phi-3 in the cloud, at the edge, or on device, for greater flexibility in deployment and operation.
Phi-3 models were developed in accordance with Microsoft AI principles: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness.
USE CASES
Use Phi-3 for generative AI applications
Local deployments
Operate effectively in offline environments where data privacy is paramount or connectivity is limited.
Accurate and relevant answers
Generate more coherent, accurate, and contextually relevant outputs with an expanded context window.
Latency-bound scenarios
Deploy at the edge to deliver faster responses.
Cost-constrained tasks
Use Phi-3 for simple tasks to reduce resource requirements and lower costs without compromising performance.
Customization and precision
Boost performance by fine-tuning the models with domain-specific data.
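To illustrate what a local deployment involves, here is a minimal sketch of building a chat prompt for a Phi-3 instruct model. It assumes the published Phi-3-mini chat markup, which uses the <|user|>, <|assistant|>, and <|end|> special tokens; verify the exact template against the model card before relying on it.

```python
def build_phi3_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into a
    Phi-3-style prompt string.

    Assumes the Phi-3-mini instruct chat markup (<|user|>, <|assistant|>,
    <|end|>); check the model card for the authoritative template.
    """
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>\n{m['content']}<|end|>\n")
    # The model generates its reply after the assistant tag.
    parts.append("<|assistant|>\n")
    return "".join(parts)


prompt = build_phi3_prompt(
    [{"role": "user", "content": "Summarize this invoice in one sentence."}]
)
```

In practice, a library such as Hugging Face transformers can apply the model's own chat template for you (tokenizer.apply_chat_template), which avoids hand-maintaining the markup; the sketch above only shows what that template produces.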
Phi-3-mini was trained and optimized for English, and its capabilities in other languages are limited. For the best results in other languages, we encourage you to use Microsoft Translator to translate prompts and responses.