Stable Diffusion 3.x uses a split checkpoint (i.e., the UNET, VAE, and CLIP weights aren't stored in a single file), so it can only be used in the "Advanced" checkpoint mode.
After automatic installation, a "metacheckpoint" named "SD 3 Medium" (or similar, depending on the model you chose) is created. Selecting that option in "Simple" mode is enough.
After manual installation, the models must be selected manually in the "Advanced" checkpoint mode.
Due to licensing restrictions, Metastable is unable to provide a fully automatic installation procedure for SD 3.x. The models can still be installed manually, as follows:
If you don't have a HuggingFace account yet, create one here: https://huggingface.co/join
Log into your HuggingFace account.
Choose a model and navigate to:
Fill in the "You need to agree to share your contact information to access this model" form and submit it. Access should be granted instantly.
Regardless of the model you've chosen, submit the same form for SD 3 Medium as well: https://huggingface.co/stabilityai/stable-diffusion-3-medium
Download the following files:
Model file (depending on the model you're trying to use):
Text encoders:
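
If you prefer to script the downloads, here is a minimal sketch using the `huggingface_hub` library. The repo ID and file paths are assumptions based on the SD 3 Medium repository layout, so adjust them for the model you chose; it also assumes you've already authenticated with `huggingface-cli login` so the gated files are accessible.

```python
# Hypothetical download sketch using huggingface_hub.
# Assumes prior authentication via `huggingface-cli login`.
from huggingface_hub import hf_hub_download

# Assumed repo ID and checkpoint file name; adjust for the model you chose.
REPO_ID = "stabilityai/stable-diffusion-3-medium"

# Model checkpoint.
hf_hub_download(repo_id=REPO_ID, filename="sd3_medium.safetensors",
                local_dir="downloads")

# Text encoders (assumed to live in the repo's text_encoders/ subfolder).
for name in ("clip_l.safetensors", "clip_g.safetensors", "t5xxl_fp16.safetensors"):
    hf_hub_download(repo_id=REPO_ID, filename=f"text_encoders/{name}",
                    local_dir="downloads")
```
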
Open Metastable.
Go to "Settings", "About Metastable" and click on the "Reveal in explorer" button in the "Storage" section.
In the newly opened file explorer window, open the "models" directory.
Move your model file (sdX_X.safetensors) to the "checkpoint" directory.
Move your text encoder files (clip_l.safetensors, clip_g.safetensors, t5xxl_fp16.safetensors) to the "text_encoder" directory.
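
The two moves above can also be scripted. The sketch below is a hypothetical helper: `STORAGE` is a placeholder for the directory that "Reveal in explorer" opens on your machine, and the file names assume the SD 3 Medium downloads from the earlier sketch.

```python
# Hypothetical helper that moves the downloaded files into the Metastable
# storage layout described above. STORAGE is a placeholder; replace it with
# the directory that "Reveal in explorer" opens on your machine.
from pathlib import Path
import shutil

STORAGE = Path("/path/to/metastable/storage")  # placeholder path
DOWNLOADS = Path("downloads")                  # from the download sketch above

# The model checkpoint goes into models/checkpoint.
shutil.move(DOWNLOADS / "sd3_medium.safetensors",
            STORAGE / "models" / "checkpoint")

# The text encoders go into models/text_encoder.
for name in ("clip_l.safetensors", "clip_g.safetensors", "t5xxl_fp16.safetensors"):
    shutil.move(DOWNLOADS / "text_encoders" / name,
                STORAGE / "models" / "text_encoder")
```
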
| Feature | SD 3.x |
| --- | --- |
| Text-to-image | ✅ |
| Image-to-image | ✅ |
| Inpainting | ✅ |
| LoRA | ✅ |
| ControlNet | ✅ |
| IPAdapter | ❌ |
| PuLID | ❌ |