kvaishnavi committed (verified) · Commit 3bf694f · Parent(s): 3d8799e

Update README.md

Files changed (1): README.md (+2, -2)
README.md CHANGED
@@ -21,13 +21,13 @@ Here are some of the optimized configurations we have added:
 1. ONNX model for int4 NPU: ONNX model for Qualcomm NPU using int4 quantization.

 ## Model Run
-You can see how to run this model with ORT GenAI [here](https://github.com/microsoft/onnxruntime-genai/blob/main/examples/python/model-vision.py)
+You can see how to run this model with ORT GenAI [here](https://github.com/microsoft/onnxruntime-genai/blob/main/examples/python/model-vision.py).

 For NPU:

 ```bash
 # Download the model directly using the Hugging Face CLI
-huggingface-cli download microsoft/Fara-7B-onnx --include npu/qnn-int4/* --local-dir .
+huggingface-cli download onnxruntime/fara-7b-onnx --include npu/qnn-int4/* --local-dir .

 # Install ONNX Runtime GenAI
 pip install --pre onnxruntime-genai
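For context on the run step referenced in the diff: the linked model-vision.py example uses the onnxruntime-genai Python API. Below is a minimal sketch of that flow for the NPU folder downloaded above. The model path, image file, and prompt template are placeholders (assumptions, not part of this commit), and the exact API surface varies between onnxruntime-genai releases, so treat the linked example for your installed version as the authoritative reference.

```python
# Minimal sketch of the onnxruntime-genai run flow referenced above.
# Assumptions: the model was downloaded to ./npu/qnn-int4, an input image exists
# at screenshot.png, and the prompt template below is a placeholder -- use the
# chat template from the linked model-vision.py example / the model card.
import onnxruntime_genai as og

model = og.Model("npu/qnn-int4")                  # folder created by the huggingface-cli command above
processor = model.create_multimodal_processor()   # prepares combined text + image inputs
tokenizer_stream = processor.create_stream()      # incremental token-to-text decoding

image = og.Images.open("screenshot.png")          # placeholder image path
prompt = "<|user|>\n<|image_1|>\nDescribe this screenshot.<|end|>\n<|assistant|>\n"  # placeholder template
inputs = processor(prompt, images=image)

params = og.GeneratorParams(model)
params.set_search_options(max_length=3072)
params.set_inputs(inputs)                         # newer releases may expose generator.set_inputs(inputs) instead

generator = og.Generator(model, params)
while not generator.is_done():
    generator.generate_next_token()               # some older releases also require generator.compute_logits() first
    print(tokenizer_stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```

The same Python flow should apply to the other published variants as well, since the genai_config.json shipped in each model folder selects the execution provider; only the folder passed to og.Model changes.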