Skylion007/openwebtext
How to use Continuous-Rivals-Discrete/langflow-owt with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="Continuous-Rivals-Discrete/langflow-owt", trust_remote_code=True)
# Load model directly
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("Continuous-Rivals-Discrete/langflow-owt", trust_remote_code=True, dtype="auto")

How to use Continuous-Rivals-Discrete/langflow-owt with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Continuous-Rivals-Discrete/langflow-owt"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Continuous-Rivals-Discrete/langflow-owt",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
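The same completions call can be made from Python. Below is a minimal sketch using only the standard library, assuming the vLLM server started above is listening on localhost:8000; the request body mirrors the curl example (model, prompt, max_tokens, temperature):

```python
import json
import urllib.request

def build_completion_request(model: str, prompt: str,
                             max_tokens: int = 512,
                             temperature: float = 0.5) -> dict:
    # Same JSON body as the curl example above.
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(url: str, payload: dict) -> dict:
    # POST to the OpenAI-compatible /v1/completions endpoint.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_completion_request(
    "Continuous-Rivals-Discrete/langflow-owt", "Once upon a time,")

# With the server running:
# result = complete("http://localhost:8000/v1/completions", payload)
# print(result["choices"][0]["text"])
```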
How to use Continuous-Rivals-Discrete/langflow-owt with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "Continuous-Rivals-Discrete/langflow-owt" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Continuous-Rivals-Discrete/langflow-owt",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Or run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "Continuous-Rivals-Discrete/langflow-owt" \
--host 0.0.0.0 \
--port 30000
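Besides the OpenAI-compatible route, SGLang also exposes a native /generate endpoint. The sketch below builds such a request with the standard library; the field names ("text", "sampling_params", "max_new_tokens") follow SGLang's native API and should be checked against your SGLang version:

```python
import json
import urllib.request

def build_generate_request(prompt: str, max_new_tokens: int = 512,
                           temperature: float = 0.5) -> dict:
    # SGLang's native /generate endpoint takes the prompt under "text"
    # and decoding options under "sampling_params".
    return {
        "text": prompt,
        "sampling_params": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }

def generate(payload: dict,
             url: str = "http://localhost:30000/generate") -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_generate_request("Once upon a time,")

# With the server from above running:
# print(generate(payload)["text"])
```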

How to use Continuous-Rivals-Discrete/langflow-owt with Docker Model Runner:
docker model run hf.co/Continuous-Rivals-Discrete/langflow-owt
LangFlow is a continuous diffusion language model that operates in embedding space. Unlike discrete diffusion models (MDLM, SEDD, DUO), LangFlow performs diffusion directly on continuous token embeddings, enabling smoother denoising dynamics.
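To make "diffusion on continuous token embeddings" concrete, here is a toy, self-contained sketch of the general idea: corrupt an embedding by linear interpolation toward Gaussian noise, then walk the noise level back down step by step. It uses an oracle denoiser (the noise is known), so it only illustrates the mechanics of continuous diffusion in embedding space; it is not the actual LangFlow parameterization from the paper, where a trained network predicts the clean embedding.

```python
import random

def noisy_embedding(x0, noise, t):
    # Interpolate between the clean embedding x0 (t=0) and pure noise (t=1),
    # a common continuous-diffusion / flow-style corruption in embedding space.
    return [(1.0 - t) * a + t * n for a, n in zip(x0, noise)]

def denoise_step(xt, noise, t, dt):
    # Oracle step: with the noise known, recover x0 exactly from xt,
    # then re-noise at the smaller time t - dt. A trained model would
    # instead *predict* x0 (or a velocity) from xt alone.
    x0_hat = [(a - t * n) / (1.0 - t) for a, n in zip(xt, noise)]
    return noisy_embedding(x0_hat, noise, t - dt)

rng = random.Random(0)
x0 = [0.5, -1.0, 2.0]                      # a "token embedding"
noise = [rng.gauss(0.0, 1.0) for _ in x0]  # fixed Gaussian noise

xt = noisy_embedding(x0, noise, 0.9)       # heavily corrupted
for k in range(9, 0, -1):                  # t = 0.9, 0.8, ..., 0.1
    xt = denoise_step(xt, noise, k / 10, 0.1)

# After the full reverse trajectory, xt recovers x0 (up to float error).
```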
For more details, please see our paper: LangFlow: Continuous Diffusion Rivals Discrete in Language Modeling.
To generate text with the pre-trained model, use the following snippet:
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('gpt2')
model = AutoModelForMaskedLM.from_pretrained('chumengl/langflow-owt', trust_remote_code=True)
# Generate samples
samples = model.generate_samples(num_samples=5, num_steps=128)
texts = tokenizer.batch_decode(samples, skip_special_tokens=True)
for text in texts:
    print(text)
@article{chen2026langflow,
title={LangFlow: Continuous Diffusion Rivals Discrete in Language Modeling},
author={Chen, Yuxin and Liang, Chumeng and Sui, Hangke and Guo, Ruihan and Cheng, Chaoran and You, Jiaxuan and Liu, Ge},
journal={arXiv preprint arXiv:2604.11748},
year={2026}
}
Contact: Chumeng Liang (chumengl@illinois.edu)