Stable Diffusion 2.1 Inpainting (Core ML)

This is a Core ML conversion of Stable Diffusion 2.1 Inpainting optimized for on-device inference on Apple Silicon.

Model Details

  • Original Model: Stable Diffusion 2.1 Inpainting
  • Parameters: 865 million
  • Resolution: 512×512
  • Format: Core ML (.mlmodelc)
  • Compute Units: CPU + Neural Engine (Split Einsum attention)

Intended Use

This model is optimized for:

  • iOS/macOS Deployment: Native Apple Silicon inference
  • Inpainting Tasks: Fill in masked areas of images
  • Object Removal: Remove unwanted objects from photos
  • Object Replacement: Replace specific parts of images
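All of these tasks start from a user-painted mask. As an illustrative sketch (not the exact resizing the converted pipeline uses), the following NumPy snippet binarizes a 512×512 grayscale mask and reduces it to the 64×64 latent resolution the diffusion model works at; the max-pooling choice is an assumption that conservatively marks any latent cell touched by the mask:

```python
import numpy as np

def prepare_mask(mask_512, latent_size=64):
    """Turn a 512x512 grayscale mask (0-255) into a binary 64x64
    latent-resolution mask. Values > 127 mark pixels to regenerate."""
    binary = (mask_512 > 127).astype(np.float32)   # 1 = repaint, 0 = keep
    factor = binary.shape[0] // latent_size        # 512 / 64 = 8
    # Max-pool over 8x8 blocks so any painted pixel marks its latent cell.
    pooled = binary.reshape(latent_size, factor, latent_size, factor).max(axis=(1, 3))
    return pooled

mask = np.zeros((512, 512), dtype=np.uint8)
mask[128:384, 128:384] = 255   # square region to repaint
latent_mask = prepare_mask(mask)
```

Real pipelines often use nearest-neighbor or area interpolation here instead; the key point is that the mask must match the 64×64 latent grid before it reaches the UNet.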

Capabilities

  • Mask-based Editing: Paint areas to regenerate
  • Seamless Blending: Natural integration with original content
  • Prompt-guided Generation: Control what appears in masked areas
  • High-Quality Output: inherits Stable Diffusion 2.1's quality improvements over earlier versions
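The blending behavior reduces to a simple per-pixel composite: generated content is kept only where the mask is set, and original pixels survive everywhere else. A minimal NumPy sketch of that idea (the converted pipeline may instead blend in latent space at each denoising step):

```python
import numpy as np

def composite(original, generated, mask):
    """Blend generated content into the original image: keep original
    pixels where mask == 0, take generated pixels where mask == 1.
    original/generated are float32 HxWxC in [0, 1]; mask is HxWx1."""
    return mask * generated + (1.0 - mask) * original

orig = np.zeros((4, 4, 3), dtype=np.float32)   # stand-in original (black)
gen = np.ones((4, 4, 3), dtype=np.float32)     # stand-in generated (white)
m = np.zeros((4, 4, 1), dtype=np.float32)
m[1:3, 1:3] = 1.0                              # central 2x2 region repainted
out = composite(orig, gen, m)
```

With a soft (non-binary) mask, the same formula produces the feathered edges that make the result blend naturally.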

Bundle Contents

  • TextEncoder.mlmodelc - CLIP text encoder
  • Unet.mlmodelc - 9-channel UNet (latent + masked image + mask)
  • VAEDecoder.mlmodelc - Variational autoencoder decoder
  • VAEEncoder.mlmodelc - Variational autoencoder encoder
  • vocab.json - Tokenizer vocabulary
  • merges.txt - BPE merges
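The 9-channel UNet input noted above is just a channel-wise concatenation of three tensors: the 4-channel noisy latent, the 4-channel VAE encoding of the masked image, and the 1-channel latent-resolution mask. A sketch of that assembly (shapes and the exact channel order are assumptions about this conversion; concatenation order must match what the converted Unet.mlmodelc expects):

```python
import numpy as np

def build_unet_input(noisy_latent, masked_image_latent, mask):
    """Concatenate the inpainting conditioning tensors along the channel
    axis: 4 + 4 + 1 = 9 channels, matching the 9-channel UNet."""
    assert noisy_latent.shape[1] == 4 and masked_image_latent.shape[1] == 4
    assert mask.shape[1] == 1
    return np.concatenate([noisy_latent, masked_image_latent, mask], axis=1)

latent = np.random.randn(1, 4, 64, 64).astype(np.float32)  # current noisy latent
masked = np.random.randn(1, 4, 64, 64).astype(np.float32)  # VAE-encoded masked image
mask = np.zeros((1, 1, 64, 64), dtype=np.float32)          # 1 = region to repaint
x = build_unet_input(latent, masked, mask)
```

This is why the bundle ships a VAEEncoder as well as a decoder: the masked source image must be encoded into latent space before each denoising step can condition on it.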

Usage

Compatible with Apple's ml-stable-diffusion Swift package and with iOS/macOS apps that load the Core ML bundles directly.

License

This model is released under the CreativeML Open RAIL++-M license.

Attribution

Original Stable Diffusion 2.1 Inpainting model by Stability AI.
