Nohobby committed (verified)
Commit 1c0b7b3 · 1 Parent(s): 3e671da

Update README.md

Files changed (1): README.md (+29 / -17)
README.md CHANGED
@@ -1,37 +1,49 @@
  ---
  base_model:
- - unsloth/DeepSeek-R1-Distill-Llama-70B
- - mergekit-community/L3.3-L3.1-NewTempusBlated-70B
  - Nohobby/AbominationSnowPig
  - SicariusSicariiStuff/Negative_LLAMA_70B
- - ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
  library_name: transformers
  tags:
  - mergekit
  - merge

  ---
- # merge

- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- ## Merge Details
- ### Merge Method

- This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [mergekit-community/L3.3-L3.1-NewTempusBlated-70B](https://huggingface.co/mergekit-community/L3.3-L3.1-NewTempusBlated-70B) as a base.

- ### Models Merged

- The following models were included in the merge:
- * [unsloth/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B)
- * [Nohobby/AbominationSnowPig](https://huggingface.co/Nohobby/AbominationSnowPig)
- * [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
- * [ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4](https://huggingface.co/ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4)

- ### Configuration

- The following YAML configuration was used to produce this model:

  ```yaml
  models:
  - model: unsloth/DeepSeek-R1-Distill-Llama-70B
@@ -58,4 +70,4 @@ parameters:
  dtype: float32
  out_dtype: bfloat16
  tokenizer_source: base
- ```
 
  ---
  base_model:
+ - sophosympatheia/Nova-Tempus-70B-v0.2
+ - nbeerbower/Llama-3.1-Nemotron-lorablated-70B
+ - sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
+ - deepseek-ai/DeepSeek-R1-Distill-Llama-70B
+ - ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
  - Nohobby/AbominationSnowPig
  - SicariusSicariiStuff/Negative_LLAMA_70B
  library_name: transformers
  tags:
  - mergekit
  - merge

  ---
+ # Prikol

+ > I don't even know anymore

+ ![I need to be isolated from society](https://files.catbox.moe/x9t3zo.png)
+
+ ### Overview
+
+ I have yet to try it

+ Prompt format: Llama3
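
For anyone templating requests by hand, "Llama3" here refers to the standard Llama 3 Instruct chat template; it looks roughly like this (placeholders in braces; this is the generic Llama 3 template, not something spelled out in the card):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```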

+ ### Quants

+ -

+ ## Merge Details

+ ### Step1
+ ```yaml
+ base_model: sophosympatheia/Nova-Tempus-70B-v0.2
+ merge_method: model_stock
+ dtype: bfloat16
+ models:
+ - model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
+ - model: sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
+ tokenizer:
+   source: sophosympatheia/Nova-Tempus-70B-v0.2
+ ```
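
Assuming each step's YAML is saved to its own file, a config like Step1 can be run with mergekit's `mergekit-yaml` CLI; the file and output names below are placeholders, not taken from the card:

```sh
# Placeholder paths: write the Step1 YAML to step1.yaml, then merge into ./step1-out.
# --cuda performs the tensor arithmetic on GPU when one is available.
mergekit-yaml step1.yaml ./step1-out --cuda
```

The same invocation pattern applies to the Step2 config below.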

+ ### Step2
  ```yaml
  models:
  - model: unsloth/DeepSeek-R1-Distill-Llama-70B

  dtype: float32
  out_dtype: bfloat16
  tokenizer_source: base
+ ```