Commit da6ac42 (verified, parent d6fba20) by bibproj: Update README.md
---
license: gemma
library_name: mlx
pipeline_tag: text-generation
tags:
- transformers
- mlx
- translation
language:
- ar
- bg
- zh
- cs
- da
- nl
- en
- fi
- fr
- de
- el
- gu
- he
- hi
- hu
- id
- it
- ja
- ko
- fa
- pl
- pt
- ro
- ru
- sk
- es
- sv
- tl
- th
- tr
- uk
- vi
base_model:
- yanolja/YanoljaNEXT-Rosetta-27B-2511
---

# mlx-community/YanoljaNEXT-Rosetta-27B-2511-mlx-bf16

This model [mlx-community/YanoljaNEXT-Rosetta-27B-2511-mlx-bf16](https://huggingface.co/mlx-community/YanoljaNEXT-Rosetta-27B-2511-mlx-bf16) was converted to MLX format from [yanolja/YanoljaNEXT-Rosetta-27B-2511](https://huggingface.co/yanolja/YanoljaNEXT-Rosetta-27B-2511) using mlx-lm version **0.28.4**.

More translation-related MLX model quantizations for Apple silicon machines such as the Mac Studio are available at https://huggingface.co/bibproj.

## Model Description

This model is a 27-billion-parameter, decoder-only language model built on the Gemma 3 27B architecture and fine-tuned by Yanolja NEXT. It is specifically designed to translate structured data (JSON format) while preserving the original data structure.

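To make the structured-translation idea concrete, the sketch below builds a prompt that wraps a JSON payload in a translation instruction. The instruction wording and the `build_translation_prompt` helper are illustrative assumptions, not the format the model was trained on; consult the base model card for the exact prompt format.

```python
import json

def build_translation_prompt(payload: dict, target_language: str) -> str:
    # Hypothetical prompt format: instruct the model to translate only the
    # values, leaving keys and structure intact, then append the payload.
    return (
        f"Translate the JSON values below into {target_language}. "
        "Keep every key and the overall structure unchanged.\n\n"
        + json.dumps(payload, ensure_ascii=False, indent=2)
    )

prompt = build_translation_prompt(
    {"title": "Ocean view room", "description": "Breakfast included."},
    "Korean",
)
print(prompt)
```

The resulting string can be passed as the `prompt` in the generation example below, optionally routed through the tokenizer's chat template first.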
The model was trained on a multilingual dataset covering the following languages equally:
- Arabic
- Bulgarian
- Chinese
- Czech
- Danish
- Dutch
- English
- Finnish
- French
- German
- Greek
- Gujarati
- Hebrew
- Hindi
- Hungarian
- Indonesian
- Italian
- Japanese
- Korean
- Persian
- Polish
- Portuguese
- Romanian
- Russian
- Slovak
- Spanish
- Swedish
- Tagalog
- Thai
- Turkish
- Ukrainian
- Vietnamese

While optimized for these languages, it may also perform effectively on other languages supported by the base Gemma 3 model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/YanoljaNEXT-Rosetta-27B-2511-mlx-bf16")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
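Since the model targets JSON output, the generated text may wrap the translated object in extra tokens or markdown fences. The helper below is one illustrative, model-independent way to recover the first JSON object from such output (the `extract_json` name and the sample string are assumptions for the example; it does not handle braces inside string values).

```python
import json

def extract_json(text: str) -> dict:
    # Scan for the first balanced {...} span and parse it.
    # Limitation: a brace inside a string value would confuse this scan.
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object in output")
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(text[start : i + 1])
    raise ValueError("unbalanced JSON object in output")

sample = 'Sure! ```json\n{"title": "오션 뷰 객실"}\n```'
print(extract_json(sample))  # {'title': '오션 뷰 객실'}
```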