---
language:
- en
- zh
license: apache-2.0
tags:
- Mixtral
- openbmb/MiniCPM-2B-sft-bf16-llama-format
- MoE
- merge
- mergekit
- moerge
- MiniCPM
base_model:
- openbmb/MiniCPM-2B-sft-bf16-llama-format
model-index:
- name: MoECPM-Untrained-4x2b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 46.76
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 72.58
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 53.21
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 38.41
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.51
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 44.58
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
      name: Open LLM Leaderboard
---
# MoECPM Untrained 4x2b

## Model Details

### Model Description

A MoE model built out of four MiniCPM-2B-sft models. It is intended as a starting point for further training; this untrained version has not been tested and probably does not perform well as-is (if it works at all).
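
As a quick sanity check, the merge should load with the standard transformers auto classes. A minimal sketch, assuming the checkpoint is public under the `Inv/MoECPM-Untrained-4x2b` repo id used in the leaderboard links below and follows the Mixtral-style layout that mergekit MoE merges typically produce:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: public repo id, taken from the leaderboard links below.
model_id = "Inv/MoECPM-Untrained-4x2b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the source experts are bf16 checkpoints
    device_map="auto",
)

# Smoke test: the merge is untrained, so expect low-quality output.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```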

## Uses

- Training (a minimal fine-tuning sketch follows below)

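Since the merge is meant to be trained before use, here is a minimal full-parameter fine-tuning sketch with the Hugging Face `Trainer`. The dataset, hyperparameters, and output path are illustrative placeholders, not a recommended recipe:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "Inv/MoECPM-Untrained-4x2b"  # repo id from the leaderboard links below
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:  # Llama-format tokenizers often ship without one
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder corpus; substitute the data you actually want to train on.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
dataset = dataset.filter(lambda x: x["text"].strip())  # drop empty lines

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="moecpm-4x2b-trained",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Swap in your own corpus and adjust the batch size and learning rate to your hardware; the sketch only shows the mechanical setup.
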
### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Inv__MoECPM-Untrained-4x2b).

| Metric                           | Value |
|----------------------------------|------:|
| Avg.                             | 53.51 |
| AI2 Reasoning Challenge (25-Shot)| 46.76 |
| HellaSwag (10-Shot)              | 72.58 |
| MMLU (5-Shot)                    | 53.21 |
| TruthfulQA (0-shot)              | 38.41 |
| Winogrande (5-shot)              | 65.51 |
| GSM8k (5-shot)                   | 44.58 |