Instructions for using Universal-NER/UniNER-7B-type with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Universal-NER/UniNER-7B-type with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Universal-NER/UniNER-7B-type")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Universal-NER/UniNER-7B-type")
model = AutoModelForCausalLM.from_pretrained("Universal-NER/UniNER-7B-type")
```
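For a quick end-to-end check, the sketch below fills the model's conversation template (see the Inference section further down) into the pipeline above. The example text and the entity type "person" are illustrative placeholders, and the exact prompt formatting should be verified against the repo.

```python
# A minimal sketch: build a prompt following the UniNER inference template, then generate.
from transformers import pipeline

pipe = pipeline("text-generation", model="Universal-NER/UniNER-7B-type")

text = "Barack Obama was born in Hawaii."  # placeholder input text
entity_type = "person"                     # one entity type per query

prompt = (
    "A virtual assistant answers questions from a user based on the provided text.\n"
    f"USER: Text: {text}\n"
    "ASSISTANT: I've read this text.\n"
    f"USER: What describes {entity_type} in the text?\n"
    "ASSISTANT:"
)

output = pipe(prompt, max_new_tokens=128, return_full_text=False)
print(output[0]["generated_text"])  # expected: a JSON list of matching entities
```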
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Universal-NER/UniNER-7B-type with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Universal-NER/UniNER-7B-type"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Universal-NER/UniNER-7B-type",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
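Because the vLLM server exposes an OpenAI-compatible API, it can also be queried from Python. The sketch below assumes the server started above is reachable on localhost:8000 and uses the `openai` client; the prompt follows the template from the Inference section, with a placeholder text and entity type.

```python
# A sketch of querying the local vLLM server via its OpenAI-compatible completions endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # local server; key is unused

prompt = (
    "A virtual assistant answers questions from a user based on the provided text.\n"
    "USER: Text: Barack Obama was born in Hawaii.\n"
    "ASSISTANT: I've read this text.\n"
    "USER: What describes person in the text?\n"
    "ASSISTANT:"
)

completion = client.completions.create(
    model="Universal-NER/UniNER-7B-type",
    prompt=prompt,
    max_tokens=256,
    temperature=0,
)
print(completion.choices[0].text)  # expected: a JSON list of matching entities
```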
Use Docker
```sh
docker model run hf.co/Universal-NER/UniNER-7B-type
```
- SGLang
How to use Universal-NER/UniNER-7B-type with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Universal-NER/UniNER-7B-type" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Universal-NER/UniNER-7B-type",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
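The same OpenAI-compatible completions endpoint can be called from Python. The sketch below assumes the SGLang server started above is listening on localhost:30000 and uses plain `requests`, again with a placeholder text and entity type filled into the inference template.

```python
# A sketch of calling the local SGLang server's OpenAI-compatible completions endpoint.
import requests

prompt = (
    "A virtual assistant answers questions from a user based on the provided text.\n"
    "USER: Text: Barack Obama was born in Hawaii.\n"
    "ASSISTANT: I've read this text.\n"
    "USER: What describes person in the text?\n"
    "ASSISTANT:"
)

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Universal-NER/UniNER-7B-type",
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": 0,
    },
    timeout=60,
)
print(response.json()["choices"][0]["text"])  # expected: a JSON list of matching entities
```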
Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Universal-NER/UniNER-7B-type" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Universal-NER/UniNER-7B-type",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use Universal-NER/UniNER-7B-type with Docker Model Runner:
```sh
docker model run hf.co/Universal-NER/UniNER-7B-type
```
UniNER-7B-type
Description: A UniNER-7B model trained from LLaMA-7B on the Pile-NER-type data, without any human-labeled data. The data was collected by prompting gpt-3.5-turbo-0301 to label entities in passages and provide entity tags. The data collection prompt is as follows:
Given a passage, your task is to extract all entities and identify their entity types. The output should be in a list of tuples of the following format: [("entity 1", "type of entity 1"), ... ].
Check our paper for more information, and check our repo for instructions on how to use the model.
Comparison with UniNER-7B-definition
The UniNER-7B-type model excels when handling entity tags. It performs better on the Universal NER benchmark, which consists of 43 academic datasets across 9 domains. In contrast, UniNER-7B-definition performs better at processing entity types defined in short sentences and is more robust to type paraphrasing.
Inference
The template for inference instances is as follows:
```
A virtual assistant answers questions from a user based on the provided text.
USER: Text: {Fill the input text here}
ASSISTANT: I've read this text.
USER: What describes {Fill the entity type here} in the text?
ASSISTANT: (model's predictions in JSON format)
```
Note: Inferences are based on one entity type at a time. For multiple entity types, create separate instances for each type.
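To make the one-type-at-a-time note above concrete, here is a small sketch (not from the original repo) that builds one prompt per entity type and parses each JSON response; `generate` is a placeholder for whichever backend is used (Transformers pipeline, vLLM, SGLang, ...).

```python
# A sketch of querying one entity type at a time, as the note above requires.
import json

def build_prompt(text: str, entity_type: str) -> str:
    """Fill the UniNER inference template for a single entity type."""
    return (
        "A virtual assistant answers questions from a user based on the provided text.\n"
        f"USER: Text: {text}\n"
        "ASSISTANT: I've read this text.\n"
        f"USER: What describes {entity_type} in the text?\n"
        "ASSISTANT:"
    )

def extract_entities(text, entity_types, generate):
    """`generate` is any callable mapping a prompt string to the model's text output."""
    results = {}
    for entity_type in entity_types:
        raw = generate(build_prompt(text, entity_type))
        try:
            results[entity_type] = json.loads(raw)  # the model answers with a JSON list
        except json.JSONDecodeError:
            results[entity_type] = []               # fall back if the output is not valid JSON
    return results
```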
License
This model and its associated data are released under the CC BY-NC 4.0 license. They are intended primarily for research purposes.
Citation
@article{zhou2023universalner,
title={UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition},
author={Wenxuan Zhou and Sheng Zhang and Yu Gu and Muhao Chen and Hoifung Poon},
year={2023},
eprint={2308.03279},
archivePrefix={arXiv},
primaryClass={cs.CL}
}