Commit edabafe (verified) by Felladrin · 1 parent: 44fe6ea

Upload folder using huggingface_hub

README.md ADDED

---
library_name: transformers.js
pipeline_tag: image-classification
tags:
- vision-transformer
- age-estimation
- gender-classification
- face-analysis
- computer-vision
- pytorch
- transformers
- multi-task-learning
language:
- en
license: apache-2.0
datasets:
- UTKFace
metrics:
- accuracy
- mae
model-index:
- name: Age Gender Prediction
  results:
  - task:
      type: image-classification
      name: Gender Classification
    dataset:
      name: UTKFace
      type: face-analysis
    metrics:
    - type: accuracy
      value: 94.3
      name: Gender Accuracy
    - type: mae
      value: 4.5
      name: Age MAE (years)
base_model:
- abhilash88/age-gender-prediction
---

# age-gender-prediction (ONNX)

This is an ONNX version of [abhilash88/age-gender-prediction](https://huggingface.co/abhilash88/age-gender-prediction). It was automatically converted and uploaded using [this Hugging Face Space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).

## Usage with Transformers.js

See the pipeline documentation for `image-classification`: https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ImageClassificationPipeline
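
A minimal sketch of that pipeline call, assuming Transformers.js v3 (`@huggingface/transformers`) and a hypothetical repository id (substitute this repo's actual path). Note the caveat from the upstream card below: the base model uses custom prediction heads, so a generic image-classification pipeline may return raw `LABEL_0`/`LABEL_1` labels rather than decoded age and gender values.

```javascript
import { pipeline } from '@huggingface/transformers';

// 'Felladrin/age-gender-prediction-ONNX' is a hypothetical id - use this repository's actual path.
const classifier = await pipeline('image-classification', 'Felladrin/age-gender-prediction-ONNX');

// Accepts a URL or a local file path.
const output = await classifier('https://example.com/face.jpg');
console.log(output); // e.g. [{ label: 'LABEL_0', score: 0.87 }, ...] - see the caveat above
```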

---

# 🏆 ViT Age-Gender Prediction: Vision Transformer for Facial Analysis

[![Model](https://img.shields.io/badge/Model-Vision%20Transformer-blue)](https://huggingface.co/abhilash88/age-gender-prediction)
[![Accuracy](https://img.shields.io/badge/Gender%20Accuracy-94.3%25-green)](https://huggingface.co/abhilash88/age-gender-prediction)
[![Pipeline](https://img.shields.io/badge/Pipeline-One%20Liner-brightgreen)](https://huggingface.co/abhilash88/age-gender-prediction)

A state-of-the-art Vision Transformer model for simultaneous age estimation and gender classification, achieving **94.3% gender accuracy** and a **4.5-year age MAE** on the UTKFace dataset.

## 🚀 One-Liner Usage

```python
from model import predict_age_gender

result = predict_age_gender("your_image.jpg")
print(f"Age: {result['age']}, Gender: {result['gender']}")
```

**That's it!** One line to get age and gender predictions.

## 🆕 October 2025 Update - Discussion #5 Fixed

✅ **Issue Resolved:** The model now includes helper functions that return proper age and gender values instead of `LABEL_0`/`LABEL_1`.

**Recommended usage:**
```python
from model import predict_age_gender

result = predict_age_gender("image.jpg")
print(f"Age: {result['age']}, Gender: {result['gender']}")
print(f"Confidence: {result['gender_confidence']:.1%}")
```

**Simple one-liner version:**
```python
from model import simple_predict

print(simple_predict("image.jpg"))
# Output: "25 years, Female (87.3% confidence)"
```

**Important:** Use these helper functions; the standard `pipeline()` approach returns `LABEL_0`/`LABEL_1` and should not be used.

## 📱 Complete Examples

### Basic Usage
```python
from model import predict_age_gender

# Predict from file
result = predict_age_gender("your_image.jpg")
print(f"Age: {result['age']} years")
print(f"Gender: {result['gender']}")
print(f"Confidence: {result['gender_confidence']:.1%}")

# Predict from URL
result = predict_age_gender("https://example.com/face_image.jpg")
print(f"Prediction: {result['age']} years, {result['gender']}")

# Works with a PIL Image too
from PIL import Image
img = Image.open("image.jpg")
result = predict_age_gender(img)
print(f"Result: {result['age']} years, {result['gender']}")
```

### Simple Helper Functions
```python
from model import predict_age_gender, simple_predict

# Method 1: Detailed result
result = predict_age_gender("your_image.jpg")
print(f"Age: {result['age']}, Gender: {result['gender']}")
print(f"Confidence: {result['gender_confidence']:.1%}")

# Method 2: Simple string output
prediction = simple_predict("your_image.jpg")
print(prediction)  # "25 years, Female (87% confidence)"
```

### Google Colab
```python
# Install requirements
!pip install transformers torch pillow

from model import predict_age_gender
import matplotlib.pyplot as plt
from PIL import Image

# Upload an image in Colab
from google.colab import files
uploaded = files.upload()
filename = list(uploaded.keys())[0]

# Predict
result = predict_age_gender(filename)

# Display
img = Image.open(filename)
plt.figure(figsize=(8, 6))
plt.imshow(img)
plt.title(f"Prediction: {result['age']} years, {result['gender']} ({result['gender_confidence']:.1%})")
plt.axis('off')
plt.show()

print(f"Age: {result['age']} years")
print(f"Gender: {result['gender']}")
print(f"Confidence: {result['gender_confidence']:.1%}")
```

### Batch Processing
```python
from model import predict_age_gender

# Process multiple images
images = ["image1.jpg", "image2.jpg", "image3.jpg"]
results = []

for image in images:
    result = predict_age_gender(image)
    results.append({
        'image': image,
        'age': result['age'],
        'gender': result['gender'],
        'confidence': result['gender_confidence']
    })

for result in results:
    print(f"{result['image']}: {result['age']} years, {result['gender']} ({result['confidence']:.1%})")
```

### Real-time Webcam
```python
import cv2
from model import predict_age_gender
from PIL import Image

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:  # camera read failed; stop instead of spinning forever
        break

    # Convert frame to PIL Image
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    pil_image = Image.fromarray(rgb_frame)

    # Predict
    result = predict_age_gender(pil_image)

    # Display prediction
    text = f"Age: {result['age']}, Gender: {result['gender']}"
    cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('Age-Gender Detection', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

### URL Images
```python
from model import predict_age_gender

# Direct URL prediction
image_url = "https://images.unsplash.com/photo-1507003211169-0a1dd7228f2d?w=300"
result = predict_age_gender(image_url)

print(f"Age: {result['age']} years")
print(f"Gender: {result['gender']}")
print(f"Confidence: {result['gender_confidence']:.1%}")
```

## 📊 Output Format

The helper function returns a dictionary with the prediction:

```python
{
    "age": 25,
    "gender": "Female",
    "gender_confidence": 0.873,
    "gender_probability_male": 0.127,
    "gender_probability_female": 0.873,
    "label": "25 years, Female",
    "score": 0.873
}
```

**Access the values:**
- `result['age']` - Predicted age (integer, 0-100)
- `result['gender']` - Predicted gender ("Male" or "Female")
- `result['gender_confidence']` - Confidence score (0-1)
- `result['gender_probability_male']` - Male probability (0-1)
- `result['gender_probability_female']` - Female probability (0-1)
- `result['label']` - Formatted string summary
- `result['score']` - Same value as `gender_confidence` (pipeline-style field)

## 🎯 Model Performance

| Metric | Performance | Details |
|--------|------------|---------|
| **Gender Accuracy** | **94.3%** | UTKFace |
| **Age MAE** | **4.5 years** | UTKFace |
| **Architecture** | ViT-Base + Dual Head | 768→256→64→1 |
| **Parameters** | 86.8M | Optimized |
| **Inference Speed** | ~50ms/image | CPU |

### Performance by Age Group
- **Adults (21-60 years)**: 94.3% gender accuracy, 4.5 years age MAE ✅ **Excellent**
- **Young Adults (16-30 years)**: 92.1% gender accuracy ✅ **Very Good**
- **Teenagers (13-20 years)**: 89.7% gender accuracy ✅ **Good**
- **Children (5-12 years)**: 78.4% gender accuracy ⚠️ **Limited**
- **Seniors (60+ years)**: 87.2% gender accuracy ✅ **Good**

## ⚠️ Usage Guidelines

### ✅ Optimal Performance
- **Best for**: Adults 16-60 years old
- **Image quality**: Clear, well-lit, front-facing faces
- **Use cases**: Demographic analysis, content filtering, marketing research

### ❌ Known Limitations
- **Children (0-12)**: Reduced accuracy due to limited training data
- **Very elderly (70+)**: Higher prediction variance
- **Poor conditions**: Low light, extreme angles, heavy occlusion

### 🎯 Tips for Best Results
- Use clear, well-lit images
- Ensure faces are clearly visible and front-facing
- Consider confidence scores for critical applications
- Validate results for your specific use case

## 🛠️ Installation

```bash
# Minimal installation
pip install transformers torch pillow

# Full installation with optional dependencies
pip install transformers torch torchvision pillow opencv-python matplotlib

# For development
pip install transformers torch pillow pytest black flake8
```

## 📈 Use Cases & Examples

### Content Moderation
```python
from model import predict_age_gender

def moderate_content(image_path):
    result = predict_age_gender(image_path)
    age = result['age']

    if age < 18:
        return f"Minor detected ({age} years) - content flagged for review"
    return f"Adult content approved: {age} years, {result['gender']}"

status = moderate_content("user_upload.jpg")
print(status)
```

### Marketing Analytics
```python
from model import predict_age_gender
from glob import glob

def analyze_audience(image_folder):
    demographics = {"male": 0, "female": 0, "total_age": 0, "count": 0}

    for image_path in glob(f"{image_folder}/*.jpg"):
        result = predict_age_gender(image_path)
        demographics[result['gender'].lower()] += 1
        demographics['total_age'] += result['age']
        demographics['count'] += 1

    if demographics['count'] == 0:  # avoid division by zero on an empty folder
        return demographics

    demographics['avg_age'] = demographics['total_age'] / demographics['count']
    demographics['male_percent'] = demographics['male'] / demographics['count'] * 100
    demographics['female_percent'] = demographics['female'] / demographics['count'] * 100

    return demographics

stats = analyze_audience("customer_photos/")
print(f"Average age: {stats['avg_age']:.1f}")
print(f"Gender split: {stats['male_percent']:.1f}% Male, {stats['female_percent']:.1f}% Female")
```

### Age Verification
```python
from model import predict_age_gender

def verify_age(image_path, min_age=18):
    result = predict_age_gender(image_path)
    age = result['age']
    confidence = result['gender_confidence']

    if confidence < 0.7:  # Low confidence
        return "Please provide a clearer image"

    if age >= min_age:
        return f"Verified: {age} years old (meets {min_age}+ requirement)"
    else:
        return f"Age verification failed: {age} years old"

verification = verify_age("id_photo.jpg", min_age=21)
print(verification)
```

## 🔧 Technical Details

- **Base Model**: google/vit-base-patch16-224 (Vision Transformer)
- **Input Resolution**: 224×224 RGB images
- **Architecture**: Dual-head design with age regression and gender classification
- **Training Dataset**: UTKFace (23,687 images)
- **Training**: 15 epochs, AdamW optimizer, 2e-5 learning rate

## 🌟 Key Features

- ✅ **True one-line usage** via the bundled helper functions
- ✅ **High accuracy** (94.3% gender, 4.5 years age MAE)
- ✅ **Multiple input types** (file paths, URLs, PIL Images, NumPy arrays)
- ✅ **Batch processing** support
- ✅ **Real-time capable** (~50ms inference)
- ✅ **Google Colab ready**
- ✅ **Production tested**

## 🚀 Quick Start Examples

### Absolute Minimal Usage
```python
from model import predict_age_gender
result = predict_age_gender("image.jpg")
print(f"Age: {result['age']}, Gender: {result['gender']}")
```

### With Helper Function
```python
from model import simple_predict
print(simple_predict("image.jpg"))  # "25 years, Female (87% confidence)"
```

### Error Handling
```python
from model import predict_age_gender

def safe_predict(image_path):
    try:
        result = predict_age_gender(image_path)
        return f"Age: {result['age']}, Gender: {result['gender']}"
    except Exception as e:
        return f"Prediction failed: {e}"

prediction = safe_predict("any_image.jpg")
print(prediction)
```

## 🔧 Troubleshooting

### Issue: Getting `LABEL_0`/`LABEL_1` instead of age/gender

**Solution:** Use the helper functions instead of the pipeline:

```python
# ✅ CORRECT METHOD - Use helper function
from model import predict_age_gender

result = predict_age_gender("image.jpg")
print(f"Age: {result['age']}, Gender: {result['gender']}")
# Output: Age: 25, Gender: Female
```

```python
# ❌ WRONG METHOD - Don't use standard pipeline
from transformers import pipeline
classifier = pipeline("image-classification", ...)  # Returns LABEL_0/LABEL_1
```

The standard `pipeline()` approach doesn't work properly with this custom model. Always use the `predict_age_gender()` helper function.

### Issue: Warning "Some weights not initialized"

This warning is **expected and safe to ignore**:
```
Some weights of ViTForImageClassification were not initialized...
```

The model uses custom age and gender heads instead of the standard classification head, which triggers this informational warning. The model works correctly.

### Issue: Low confidence predictions

For optimal results:
- ✅ Use clear, well-lit images
- ✅ Ensure the face is front-facing and visible
- ✅ Avoid heavy occlusion or extreme angles
- ⚠️ Predictions with confidence < 0.7 may need manual review

## 📝 Citation

```bibtex
@misc{age-gender-prediction-2025,
  title={Age-Gender-Prediction: Vision Transformer for Facial Analysis},
  author={Abhilash Sahoo},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/abhilash88/age-gender-prediction},
  note={One-liner pipeline with 94.3\% gender accuracy}
}
```

## 📄 License

Licensed under Apache 2.0. Commercial use permitted with attribution.

---

**🎉 Ready to use!** Just one line of code to get accurate age and gender predictions from any facial image! 🚀

**Try it now:**
```python
from model import predict_age_gender

result = predict_age_gender("your_image.jpg")
print(f"Age: {result['age']}, Gender: {result['gender']}")
print(f"Confidence: {result['gender_confidence']:.1%}")
```

**Simple one-liner:**
```python
from model import simple_predict
print(simple_predict("your_image.jpg"))
# Output: "25 years, Female (87.3% confidence)"
```

config.json ADDED

```json
{
  "_attn_implementation_autoset": true,
  "_name_or_path": "abhilash88/age-gender-prediction",
  "architectures": [
    "AgeGenderViTModel"
  ],
  "attention_probs_dropout_prob": 0.0,
  "encoder_stride": 16,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.0,
  "hidden_size": 768,
  "image_size": 224,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "model_type": "vit",
  "num_attention_heads": 12,
  "num_channels": 3,
  "num_hidden_layers": 12,
  "patch_size": 16,
  "qkv_bias": true,
  "torch_dtype": "float32",
  "transformers_version": "4.49.0"
}
```

onnx/model.onnx ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:90ed49b00610718edfa0ae8dab530ed20bd2bcb55b34a83f074db60ede20a62b
size 343401688
```

onnx/model_bnb4.onnx ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:371ffe92210cb0e64e99ade013ccedfdbbf37c6c35f98aecff094986aa48ee40
size 51450010
```

onnx/model_fp16.onnx ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:65ccc8f72e291396a8c113a0414b8e23e500292dc4cb3079a3c5ecefadbb9566
size 171801382
```

onnx/model_int8.onnx ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:a0d92c6137ea1031f777b3b848229125f4b4036e9a097234986dad3905cd3dbc
size 87333629
```

onnx/model_q4.onnx ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:cc043b7232abbdd6c7b72b1bfa56d5a7a40b33354f8deb7621d16f89a9e788ee
size 56757898
```

onnx/model_q4f16.onnx ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:3315ac1b04917680bb0e91945d8f2cea9bee0262bfbfcc8e5ea6acdf979d5157
size 49718585
```

onnx/model_quantized.onnx ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:871f98f559f49c6a53880d827b9bc6aa4140c159dd9772c75e325bb69204c265
size 87333629
```

onnx/model_uint8.onnx ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:871f98f559f49c6a53880d827b9bc6aa4140c159dd9772c75e325bb69204c265
size 87333629
```

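The `onnx/model_*.onnx` files above are quantized variants of the same graph (see `quantize_config.json` below for the mode list). A minimal sketch for selecting one, assuming Transformers.js v3's `dtype` loading option and the same hypothetical repository id as above:

```javascript
import { pipeline } from '@huggingface/transformers';

// Load the q4 variant (onnx/model_q4.onnx). Other assumed values map to the files
// above: 'fp16', 'int8', 'uint8', 'q4f16', 'bnb4'; 'fp32' selects the full model.onnx.
const classifier = await pipeline(
  'image-classification',
  'Felladrin/age-gender-prediction-ONNX', // hypothetical id - use this repository's actual path
  { dtype: 'q4' },
);
```
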
preprocessor_config.json ADDED

```json
{
  "do_center_crop": true,
  "do_convert_rgb": null,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.485,
    0.456,
    0.406
  ],
  "image_processor_type": "ViTFeatureExtractor",
  "image_std": [
    0.229,
    0.224,
    0.225
  ],
  "resample": 2,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "height": 224,
    "width": 224
  }
}
```

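These settings (resize to 224×224, rescale by 1/255, normalize with the ImageNet mean/std above) are applied automatically when the processor is loaded from the repository. A minimal sketch, again assuming Transformers.js and a hypothetical repository id:

```javascript
import { AutoProcessor, RawImage } from '@huggingface/transformers';

// Hypothetical id - use this repository's actual path.
const processor = await AutoProcessor.from_pretrained('Felladrin/age-gender-prediction-ONNX');
const image = await RawImage.read('face.jpg');

// Applies the resize / rescale / normalize steps configured above.
const { pixel_values } = await processor(image);
console.log(pixel_values.dims); // [1, 3, 224, 224]
```
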
quantize_config.json ADDED

```json
{
  "modes": [
    "fp16",
    "q8",
    "int8",
    "uint8",
    "q4",
    "q4f16",
    "bnb4"
  ],
  "per_channel": true,
  "reduce_range": true,
  "block_size": null,
  "is_symmetric": true,
  "accuracy_level": null,
  "quant_type": 1,
  "op_block_list": null
}
```