TL;DR — How to store the torch.compile artifacts for StableDiffusionXLPipeline to avoid recompilation?
Context
- I use `StableDiffusionXLPipeline` with `torch.compile` via `pipe.unet.compile(fullgraph=True, mode="max-autotune")`.
- In a serverless environment the workers can be shut down, which would discard `torch.compile`'s work.
- To avoid cold-start latency, I want to save the compiled artifacts to a persistent disk so that subsequent `torch.compile` runs can reuse them.
- To point the compilation output at a specific directory, I set `os.environ["TORCH_COMPILE_CACHE_DIR"] = "/workspace/torch_compile_cache"`.
- However, that directory is empty when listed with `ls -a`.
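For completeness, this is roughly how I set the cache directory (a minimal sketch). I also set `TORCHINDUCTOR_CACHE_DIR`, which the PyTorch docs describe as the variable Inductor's on-disk cache reads; setting both before `import torch` avoids any ordering surprises:

```python
import os

# Point compile caches at persistent storage *before* importing torch,
# so any config read at import time already sees these values.
os.environ["TORCH_COMPILE_CACHE_DIR"] = "/workspace/torch_compile_cache"
# TORCHINDUCTOR_CACHE_DIR is the documented location for Inductor's
# on-disk cache; setting both costs nothing while debugging.
os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/workspace/torch_compile_cache"
```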
What I tried
- Checked the `~/.cache` directory.
- Checked the `/tmp` directory, where I saw the following, but those are not the artifacts:

```
root@6fe163448e4c:/workspace# ls -a /tmp/
.  ..  tmpcfjos3cw  tmpqcgd5g4w  torchinductor_root
```
Code Block
```python
import time

import torch
from diffusers import EulerAncestralDiscreteScheduler, StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "ANY Stable Diffusion Model",
    torch_dtype=torch.float16,
    use_safetensors=True,
    device_map="cuda",
    cache_dir="/workspace",
)
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)
pipeline.unet.compile(fullgraph=True, mode="max-autotune")

# Generate the image (Nodes 2, 3, 4, 5)
start_time = time.time()
image_no_skip = pipeline(
    prompt="Super man",
    negative_prompt="Super woman",
    width=512,
    height=512,
    num_inference_steps=1,
    guidance_scale=1.0,
).images[0]
print(f"Total Time: {time.time() - start_time}")
```
Question:
How can we store the `torch.compile` artifacts so that they can be reused across workers?