Ontocord.AI committed on
Commit ed3b869 · 1 Parent(s): 48626b3
Update README.md
README.md CHANGED
@@ -1,15 +1,18 @@
 ---
 license: apache-2.0
-tags:
-- Composer
-- MosaicML
-- llm-foundry
-datasets:
-- the_pile_books3
 inference: false
 ---
+# Given-MPT-7B
 
-
+This is a merge of the following MPT-7B models:
+
+- *g*orilla-llm/gorilla-mpt-7b-hf-v0
+- *i*bm/mpt-7b-instruct2
+- Teh*V*enom/MPT-7b-WizardLM_Uncensored-Storywriter-Merge
+- *e*mozilla/mpt-7b-storysummarizer
+- *n*omic-ai/gpt4all-mpt
+
+# Original Model Card From MPT-7B-StoryWriter-65k+
 
 MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths.
 It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).