yang-z committed · verified · Commit d77d6fb · 1 parent: 89dd0b7

Update README.md

Files changed (1): README.md (+79 −0)
 
````
{prefix}[SUF]{suffix}[MID]
````

It is recommended to use our template during inference.
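For scripted inference, the template above is plain string concatenation. A minimal sketch (the function name `build_fim_prompt` is ours, not part of the release):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a FIM prompt using the template above: {prefix}[SUF]{suffix}[MID]."""
    return f"{prefix}[SUF]{suffix}[MID]"

# The model is expected to generate the missing middle after [MID].
prompt = build_fim_prompt("module adder(\n", "\nendmodule")
```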
## Run CodeV-All Models with Twinny

The instructions below use `codev-all-qc` as an example. For other models, please make the corresponding adjustments.

### Install Ollama

Refer to the [official documentation](https://github.com/ollama/ollama/tree/main/docs).

### Import a Model in Ollama

#### Create a Modelfile

Create a file named `Modelfile` and fill it with the following content:

```
FROM path/to/codev-all-qc

TEMPLATE """{{ .Prompt }}"""

PARAMETER stop "```"
```

Replace `path/to/codev-all-qc` with the actual path to your model. You can also customize parameters (e.g., temperature); see the [Modelfile Reference](https://github.com/ollama/ollama/blob/main/docs/modelfile.md) for details.
#### Import CodeV-ALL

Start the Ollama service:

```
ollama serve
```

Create the model:

```
ollama create codev-all-qc -f path/to/Modelfile
```

Replace `path/to/Modelfile` with the actual path to your Modelfile, and wait for the model creation process to complete.
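Before wiring up the editor, you can sanity-check the imported model through Ollama's REST API (`POST /api/generate` on a running `ollama serve` instance). A sketch assuming the default `localhost:11434` endpoint; `"raw": true` stops Ollama from applying any prompt template on top of ours, and the `complete_middle` helper is our illustration:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def fim_request_body(model: str, prefix: str, suffix: str) -> bytes:
    # "raw" passes the prompt through untouched; "stream": False returns one JSON object.
    payload = {
        "model": model,
        "prompt": f"{prefix}[SUF]{suffix}[MID]",  # FIM template from earlier in this README
        "raw": True,
        "stream": False,
    }
    return json.dumps(payload).encode("utf-8")

def complete_middle(model: str, prefix: str, suffix: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=fim_request_body(model, prefix, suffix),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```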
### Twinny Setup

#### Install Twinny

Open VS Code and install Twinny from the Extensions Marketplace.

<img src="./assets/image-20250912155617922.png" alt="image-20250912155617922" style="zoom: 35%;" />

#### Twinny Configuration

Open the FIM Configuration page.

<img src="./assets/7449b0e6ac2ff722339b7c74f37a8b0e.png" alt="7449b0e6ac2ff722339b7c74f37a8b0e" style="zoom:33%;" />

Enter the settings as shown below. The model name should match the one used during `ollama create`. Modify the hostname according to your setup: if Ollama is running on a different node, use that node's IP address; for local use, use `0.0.0.0`. Click Save.

<img src="./assets/image-20250912160402939.png" alt="image-20250912160402939" style="zoom: 35%;" />

Go to Template Configuration and open the template editor.

<img src="./assets/image-20250912160957699.png" alt="image-20250912160957699" style="zoom: 35%;" />

Open `fim.hbs`, replace its content with the following, and save:

```
<|fim_prefix|>```verilog\n<verilog>{{{prefix}}}<|fim_suffix|>{{{suffix}}}<|fim_middle|>
```
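To see what this template expands to, note that Handlebars triple braces (`{{{...}}}`) insert a value verbatim, so the rendering can be mimicked with a plain substitution. The helper below is our illustration, not Twinny code, and it treats the `\n` in the file as the literal two characters written there:

```python
# fim.hbs body as a Python string; "\\n" keeps the literal backslash-n from the file.
FIM_TEMPLATE = "<|fim_prefix|>```verilog\\n<verilog>{{{prefix}}}<|fim_suffix|>{{{suffix}}}<|fim_middle|>"

def render_fim(prefix: str, suffix: str) -> str:
    # Triple-mustache inserts values unescaped, so straight replacement
    # reproduces the prompt this template produces for a prefix/suffix pair.
    return (FIM_TEMPLATE
            .replace("{{{prefix}}}", prefix)
            .replace("{{{suffix}}}", suffix))

prompt = render_fim("module adder(", ");")
```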
+
155
+ <img src="./assets/image-20250912160901631.png" alt="image-20250912160901631" style="zoom: 33%;" />
156
+
157
+ Finally, ensure the Fim option is checked in the template settings. Note: you may need to re-enable this each time VS Code restarts.
158
+
159
+ <img src="./assets/bd1fc20b0075656ba4e5321523832e19.png" alt="bd1fc20b0075656ba4e5321523832e19" style="zoom:35%;" />
160
+
161
+ #### Try FIM
162
+
163
+ You can now try FIM while writing code in VS Code. Note: The first time you use completion, Ollama will load the model, which may cause a significant delay.
164
+
165
+ <img src="./assets/image-20250225124004805.png" alt="image-20250225124004805" style="zoom: 67%;" />
166
  ## Paper
167
  **Arxiv:** <https://arxiv.org/abs/2407.10424>
168