Generate and Run Olive Optimized Stable Diffusion Models with Automatic1111 WebUI on AMD GPUs

Prepared by Hisham Chowdhury (AMD), Lucas Neves (AMD), and Justin Stoecker (Microsoft)

Did you know you can enable Stable Diffusion with Microsoft Olive under Automatic1111 (Xformer) to get a significant speedup via Microsoft DirectML on Windows? Microsoft and AMD have been working together to optimize the Olive path on AMD hardware, accelerated via the Microsoft DirectML platform API and the AMD User Mode Driver's ML (Machine Learning) software layer, giving users access to the power of the AMD GPU's AI (Artificial Intelligence) capabilities.

The Automatic1111-directML branch now supports Microsoft Olive under the Automatic1111 WebUI interface, which allows for generating optimized models and running them all under the Automatic1111 WebUI, without a separate branch needed to optimize for AMD platforms.

The original blog, with additional instructions on how to manually generate and run Stable Diffusion with Automatic1111 and Olive optimizations, is available here - ORIGINAL HOW-TO GUIDE

Olive is a Python tool that can be used to convert, optimize, quantize, and auto-tune models for optimal inference performance with ONNX Runtime execution providers like DirectML. Olive greatly simplifies model processing by providing a single toolchain to compose optimization techniques, which is especially important with more complex models like Stable Diffusion that are sensitive to the ordering of optimization techniques.

The DirectML sample for Stable Diffusion applies the following techniques:

1. Model conversion: translates the base models from PyTorch to ONNX.
2. Transformer graph optimization: fuses subgraphs into multi-head attention operators and eliminates inefficient operations introduced by the conversion.
3. Quantization: converts most layers from FP32 to FP16 to reduce the model's GPU memory footprint and improve performance.

Combined, the above optimizations enable DirectML to leverage AMD GPUs for greatly improved performance when performing inference with transformer models like Stable Diffusion.

Here is how to generate a Microsoft Olive optimized Stable Diffusion model and run it using the Automatic1111 WebUI:

Prerequisites:

- Installed Anaconda/Miniconda (Miniconda for Windows)
- Ensure the Anaconda/Miniconda directory is added to PATH
- A platform with an AMD Graphics Processing Unit (GPU)
- Driver: AMD Software: Adrenalin Edition 23.7.2 or newer

Steps:

1. Create a conda environment by entering the following command in the terminal, followed by the Enter key: conda create --name Automatic1111_olive python=3.10.6
2. Install the Automatic1111 WebUI. This step will install all the dependencies needed for Olive, onnxruntime, and other packages, and then start the WebUI; this may take a few minutes.
3. CTRL+CLICK on the URL following "Running on local URL:" to open the WebUI.
4. Go to the Olive optimization tab and start the optimization pass.
5. Select the optimized model that will appear in the checkpoint dropdown.
6. Go to the "txt2img" tab and run your inference!
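The environment setup and install steps above can be sketched as a single terminal session. Note that this post does not spell out the clone URL or launch flags; the repository address, the submodule step, and the `--onnx --backend directml` options below are assumptions based on the Automatic1111-directML branch, so confirm them against the linked ORIGINAL HOW-TO GUIDE before running.

```shell
# Create and activate an isolated conda environment
# (Python version as stated in the post)
conda create --name Automatic1111_olive python=3.10.6
conda activate Automatic1111_olive

# Clone the DirectML branch of the Automatic1111 WebUI
# (repo URL is an assumption; verify against the original how-to guide)
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml
cd stable-diffusion-webui-directml
git submodule update --init --recursive

# First launch installs Olive, onnxruntime, and the other dependencies,
# then prints a line like: Running on local URL: http://127.0.0.1:7860
# (launch flags are an assumption and may differ by branch version)
webui.bat --onnx --backend directml
```

On the first launch, expect several minutes of dependency installation before the "Running on local URL:" line appears.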