ajinauser committed · verified
Commit 2330b94 · 1 Parent(s): 37b24e1

Update README.md

Files changed (1)
  1. README.md +1 -28
README.md CHANGED
@@ -85,34 +85,7 @@ Pick a file (e.g., `jina-code-embeddings-1.5b-F16.gguf`). You can either:
 
 ---
 
-## A) CLI embeddings with `llama-embedding`
-
-### Auto-download from Hugging Face (repo + file)
-
-```bash
-./llama-embedding \
-  --hf-repo jinaai/jina-code-embeddings-1.5b-GGUF \
-  --hf-file jina-code-embeddings-1.5b-F16.gguf \
-  --pooling last \
-  -p "Find the most relevant code snippet given the following query:
-print hello world in python"
-```
-
-### Local file
-
-```bash
-./llama-embedding \
-  -m /path/to/jina-code-embeddings-1.5b-F16.gguf \
-  --pooling last \
-  -p "Find the most relevant code snippet given the following query:
-print hello world in python"
-```
-
-> Outputs a single **896-d** vector to stdout. For smaller sizes, slice client-side.
-
----
-
-## B) HTTP service with `llama-server`
+## HTTP service with `llama-server`
 
 ### Auto-download from Hugging Face (repo + file)
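
For context on the retained `llama-server` section, here is a minimal sketch of the HTTP workflow. It assumes a recent llama.cpp build where `llama-server` accepts `--hf-repo`/`--hf-file`, `--embeddings`, `--pooling`, and `--port`, listens on port 8080 by default, and exposes the OpenAI-compatible `/v1/embeddings` endpoint; these flags and routes come from llama.cpp conventions rather than this commit, so check `./llama-server --help` for your build.

```bash
# Sketch: serve the GGUF over HTTP (flag names assumed from current llama.cpp).
./llama-server \
  --hf-repo jinaai/jina-code-embeddings-1.5b-GGUF \
  --hf-file jina-code-embeddings-1.5b-F16.gguf \
  --embeddings \
  --pooling last \
  --port 8080

# In another shell: request an embedding via the OpenAI-compatible endpoint
# and count the returned dimensions (896 per the README note above).
curl -s http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "Find the most relevant code snippet given the following query:\nprint hello world in python"}' \
  | jq '.data[0].embedding | length'
```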