# Stable Diffusion 2.1 Inpainting (Core ML)
This is a Core ML conversion of Stable Diffusion 2.1 Inpainting optimized for on-device inference on Apple Silicon.
## Model Details
| Property | Value |
|---|---|
| Original Model | Stable Diffusion 2.1 Inpainting |
| Parameters | 865 million |
| Resolution | 512x512 |
| Format | Core ML (.mlmodelc) |
| Compute Units | CPU + Neural Engine (Split Einsum) |
## Intended Use
This model is optimized for:
- iOS/macOS Deployment: Native Apple Silicon inference
- Inpainting Tasks: Fill in masked areas of images
- Object Removal: Remove unwanted objects from photos
- Object Replacement: Replace specific parts of images
## Capabilities
- Mask-based Editing: Paint areas to regenerate
- Seamless Blending: Natural integration with original content
- Prompt-guided Generation: Control what appears in masked areas
- High Quality Output: SD 2.1 quality improvements
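Mask-based editing in pipelines like this one typically works by zeroing out the painted region before it is passed to the VAE encoder, and by downsampling the mask to the UNet's latent resolution. A minimal numpy sketch of that preprocessing (the function name and the nearest-neighbour 8× downscale are illustrative assumptions, not part of this bundle):

```python
import numpy as np

def prepare_inpainting_inputs(image: np.ndarray, mask: np.ndarray):
    """image: (H, W, 3) float32 in [0, 1]; mask: (H, W) float32, 1.0 = regenerate."""
    # Zero out the painted region; only the kept pixels reach the VAE encoder.
    masked_image = image * (1.0 - mask[..., None])
    # The UNet runs at 1/8 spatial resolution (512 -> 64), so the mask is
    # downsampled to match (simple nearest-neighbour striding here).
    latent_mask = mask[::8, ::8]
    return masked_image, latent_mask

if __name__ == "__main__":
    img = np.ones((512, 512, 3), dtype=np.float32)
    msk = np.zeros((512, 512), dtype=np.float32)
    msk[128:384, 128:384] = 1.0  # square region to repaint
    masked, lmask = prepare_inpainting_inputs(img, msk)
    print(masked.shape, lmask.shape)  # (512, 512, 3) (64, 64)
```

After denoising, the decoded region is blended back into the original image, which is what gives the seamless result described above.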
## Bundle Contents

- `TextEncoder.mlmodelc`: CLIP text encoder
- `Unet.mlmodelc`: 9-channel UNet (latent + masked image + mask)
- `VAEDecoder.mlmodelc`: Variational autoencoder decoder
- `VAEEncoder.mlmodelc`: Variational autoencoder encoder
- `vocab.json`: Tokenizer vocabulary
- `merges.txt`: BPE merges
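The "9-channel" UNet takes a concatenation of three latent-space tensors at each denoising step: the current noisy latent (4 channels), the VAE-encoded masked image (4 channels), and the downsampled mask (1 channel). A hedged numpy sketch of that input assembly (shapes follow the standard SD 2.x inpainting layout at 512x512; variable names are illustrative, and the exact channel order must match what the converted `Unet.mlmodelc` expects):

```python
import numpy as np

# At 512x512 pixel resolution, SD latents are 64x64.
noisy_latent = np.random.randn(1, 4, 64, 64).astype(np.float32)   # current denoising state
masked_latent = np.random.randn(1, 4, 64, 64).astype(np.float32)  # VAE-encoded masked image
mask = np.zeros((1, 1, 64, 64), dtype=np.float32)                 # 1.0 where the UNet repaints

# 4 + 4 + 1 = 9 input channels, which is where the "9-channel UNet" name comes from.
unet_input = np.concatenate([noisy_latent, masked_latent, mask], axis=1)
print(unet_input.shape)  # (1, 9, 64, 64)
```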
## Usage

Compatible with Apple's [ml-stable-diffusion](https://github.com/apple/ml-stable-diffusion) framework and with iOS/macOS apps that use Core ML.
## License
This model is released under the CreativeML Open RAIL++-M license.
## Attribution
- Original Model: Stable Diffusion 2.1 Inpainting by Stability AI
- Core ML Conversion: jc-builds
## Base Model

stabilityai/stable-diffusion-2-inpainting