CraftGPT committed
Commit 8cbbbb0 · verified · 1 parent: ef59a8b

Update README.md

Files changed (1): README.md (+86 -106)
README.md CHANGED
@@ -3,197 +3,177 @@ tags:
  - embedding
  - minecraft
  - block2vec
  ---
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
- This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
-
  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

  ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]

- ### Downstream Use [optional]

- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- [More Information Needed]

  ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]

  ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]

  ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

  ## How to Get Started with the Model

- Use the code below to get started with the model.
-
- [More Information Needed]

  ## Training Details

  ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]

  ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]

  #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

- #### Speeds, Sizes, Times [optional]

- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
  ### Testing Data, Factors & Metrics

  #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]

  #### Factors

- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]

  #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]

  ### Results

- [More Information Needed]

  #### Summary

- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->

- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]

  ### Model Architecture and Objective

- [More Information Needed]

- ### Compute Infrastructure

- [More Information Needed]

  #### Hardware

- [More Information Needed]

  #### Software

- [More Information Needed]
-
- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**

- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]

  - embedding
  - minecraft
  - block2vec
+ - transformerblock2vec
+ - 3d
+ - voxel
+ license: mit
  ---

+ # TransformerBlock2Vec

+ This model card describes TransformerBlock2Vec, a transformer-based embedding model that maps Minecraft build chunks (up to 16x8x16 blocks) into a 144-dimensional embedding space. It uses 3D Rotary Positional Embeddings (RoPE) and is trained to predict masked blocks in a sequence, enabling downstream tasks such as build vs. terrain segmentation.

  ## Model Details

  ### Model Description

+ TransformerBlock2Vec is a transformer encoder that maps 3D Minecraft build chunks (up to 16x8x16 blocks) into a 144-dimensional embedding space. A custom 3D RoPE implementation encodes block positions, and the model is trained with a masked-prediction objective, reconstructing the 30% of blocks that are masked in each input sequence. DeepSpeed and FlashAttention keep training efficient on consumer hardware (e.g., an RTX 4070).

+ - **Model type:** Transformer-based embedding model
+ - **License:** MIT

+ ### Model Sources

+ - **Repository:** [https://github.com/Kingburrito777/TransformerBlock2Vec](https://github.com/Kingburrito777/TransformerBlock2Vec)
+ - **Paper:** TBA!
+ - **Demo:** TBA!

  ## Uses

+ The model works very well for distinguishing user-made builds from terrain in Minecraft worlds, achieving 95% accuracy on unseen data for build vs. terrain segmentation.
+ The embedding space of 3D Minecraft data also enables downstream tasks such as search and retrieval, generative AI, and context understanding (bots).

  ### Direct Use

+ TransformerBlock2Vec can be used to generate 144-dimensional embeddings for Minecraft build chunks, enabling tasks such as clustering similar builds, visualizing build distributions with t-SNE, or classifying chunks as builds vs. terrain. It is particularly suited to extracting meaningful representations from Minecraft data. A usage sketch follows.

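+ The sketch below assumes a pickled checkpoint and a forward pass that returns per-token hidden states; the checkpoint path, the model's call signature, and the mean-pooling step are illustrative assumptions, not the repository's documented API.

+ ```python
+ # Illustrative sketch only: the checkpoint path and forward() output are assumptions.
+ import torch
+ from sklearn.manifold import TSNE
+
+ model = torch.load("checkpoints/transformerblock2vec.pt", map_location="cpu")
+ model.eval()
+
+ @torch.no_grad()
+ def embed_chunk(token_ids: torch.Tensor) -> torch.Tensor:
+     """Mean-pool final hidden states into a single 144-d chunk embedding."""
+     hidden = model(token_ids.unsqueeze(0))  # assumed output: (1, seq_len, 144)
+     return hidden.mean(dim=1).squeeze(0)    # (144,)
+
+ # Dummy chunk tokens: flattened 16x8x16 block-id sequences (real ids come from
+ # the Litematica parser; a 0..1095 block vocabulary is an assumption).
+ chunks = [torch.randint(0, 1096, (16 * 8 * 16,)) for _ in range(256)]
+ emb = torch.stack([embed_chunk(c) for c in chunks]).numpy()
+
+ # 2-D view of the embedding space, as used for the t-SNE plots in this card.
+ coords = TSNE(n_components=2, perplexity=30).fit_transform(emb)
+ ```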
 
+ ### Downstream Use

+ The model supports downstream tasks such as the following (a search sketch follows the list):

+ - **Build vs. Terrain Segmentation:** identifying user-made structures in raw Minecraft worlds with 95% accuracy.
+ - **Schematic Search Engine:** nearest-neighbor retrieval of similar builds based on embedding similarity.
+ - **Generative Model Pretraining:** embeddings for text-to-voxel generative models.
+ - **Duplicate Analysis:** near-duplicate builds sit close together in the embedding space, giving an effective way to remove duplicates from the corpus.

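+ As a sketch of the search-engine and duplicate-analysis ideas above, nearest neighbors can be retrieved by cosine similarity over precomputed chunk embeddings. The `(N, 144)` matrix layout is an assumption carried over from the sketch in Direct Use.

+ ```python
+ # Cosine-similarity search over an (N, 144) embedding matrix (illustrative).
+ import numpy as np
+
+ def top_k_similar(query: np.ndarray, emb: np.ndarray, k: int = 5) -> np.ndarray:
+     """Indices of the k chunks whose embeddings are most similar to `query`."""
+     q = query / np.linalg.norm(query)
+     db = emb / np.linalg.norm(emb, axis=1, keepdims=True)
+     return np.argsort(db @ q)[::-1][:k]
+
+ def near_duplicates(emb: np.ndarray, thresh: float = 0.99):
+     """Pairs of chunks above a similarity threshold (0.99 is arbitrary)."""
+     db = emb / np.linalg.norm(emb, axis=1, keepdims=True)
+     sim = db @ db.T
+     i, j = np.where(np.triu(sim, k=1) > thresh)
+     return list(zip(i.tolist(), j.tolist()))
+ ```
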
  ### Out-of-Scope Use

+ - Not directly integrated into the Minecraft game in any way; real-time use requires a separate application or a Java mod.
+ - Not suitable for predicting block properties (e.g., stair directions) without additional finetuning.
+ - Not intended for non-Minecraft 3D data.

  ## Bias, Risks, and Limitations

+ - **Bias:** The model is trained on a dataset of Minecraft schematics, which may overrepresent certain build styles or block types depending on the scraped data sources.
+ - **Risks:** Misclassifying terrain as builds (or vice versa) could lead to incorrect data extraction in downstream tasks.
+ - **Limitations:**
+   - Limited to chunks of 16x8x16 or smaller.
+   - Excludes block properties (e.g., stair orientations) to reduce vocabulary size.
+   - Requires significant compute for training (5 days on an RTX 4070 with DeepSpeed).
+   - Performance may degrade on highly unique or novel builds not represented in the training data.

  ### Recommendations

+ Users should:

+ - Validate model outputs for specific use cases, especially with modded or atypical builds.
+ - Consider finetuning for tasks requiring block property predictions.
+ - Be aware of potential biases in the training data and augment the dataset when targeting underrepresented build styles.

  ## How to Get Started with the Model

+ - **Repository:** [https://github.com/Kingburrito777/TransformerBlock2Vec](https://github.com/Kingburrito777/TransformerBlock2Vec)

  ## Training Details

  ### Training Data

+ The model is trained on a PostgreSQL database of approximately 108 billion tokens, the largest known dataset of its kind. Schematics are converted to the Litematica format and augmented with metadata (e.g., block counts, dimensions). Chunks of up to 16x8x16 blocks are extracted, and only non-empty chunks are used for training (a chunk-extraction sketch follows). Schematic files are preprocessed with a terrain-removal algorithm to reduce the redundancy of terrain-like builds in the training data, resulting in a model more directly suited to embedding builds rather than overrepresented generated terrain.
+ Curating and refining the data is paramount to the model's success in downstream tasks, and it is currently the most difficult part of this endeavor.

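+ A sketch of the chunk-extraction step; the dense `volume` array of block ids (0 = air) is an assumed intermediate representation, and the repository's parser may organize this differently.

+ ```python
+ # Carve a build volume into 16x8x16 training chunks, keeping only non-empty ones.
+ import numpy as np
+
+ def extract_chunks(volume: np.ndarray, size=(16, 8, 16)):
+     sx, sy, sz = size
+     X, Y, Z = volume.shape
+     for x in range(0, X, sx):
+         for y in range(0, Y, sy):
+             for z in range(0, Z, sz):
+                 chunk = volume[x:x + sx, y:y + sy, z:z + sz]
+                 if np.any(chunk != 0):  # skip all-air chunks
+                     yield chunk
+ ```
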
  ### Training Procedure

+ #### Preprocessing

+ - Schematics are loaded with an optimized Litematica parser.
+ - Chunks are augmented with random rotations (25% probability) and flips (25% probability per axis).
+ - Sequences are padded with a PAD token (1097), separated with a SEP token (1096), and 30% of tokens are replaced with a MASK token (1098); a masking sketch follows the list.

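+ A sketch of the masking step using the special-token ids listed above; the exact sampling procedure (e.g., whether masks are re-drawn each epoch) is an assumption.

+ ```python
+ # Mask ~30% of non-special tokens for the masked-block-prediction objective.
+ import torch
+
+ PAD, SEP, MASK = 1097, 1096, 1098
+ MASK_RATE = 0.30
+
+ def mask_sequence(tokens: torch.Tensor):
+     """Return (inputs, labels); unmasked label positions are set to -100."""
+     maskable = (tokens != PAD) & (tokens != SEP)
+     chosen = (torch.rand(tokens.shape) < MASK_RATE) & maskable
+     inputs = tokens.clone()
+     inputs[chosen] = MASK
+     labels = tokens.clone()
+     labels[~chosen] = -100  # ignored by cross-entropy during training
+     return inputs, labels
+ ```
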
  #### Training Hyperparameters

+ - **Batch size:** 86 with 4 gradient accumulation steps (effective batch size of 328 under the DeepSpeed configuration)
+ - **Learning rate:** 2e-4 during warmup, 1e-5 thereafter
+ - **Epochs:** 4 (took 5 days!)
+ - **Optimizer:** DeepSpeed-managed AdamW
+ - **Dropout:** 0.1
+ - **Training regime:** mixed precision (fp16) with DeepSpeed; an illustrative config sketch follows

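+ An illustrative DeepSpeed setup matching the regime above. The values mirror this card, but the author's actual configuration is not published, so treat every field as an assumption.

+ ```python
+ # Illustrative DeepSpeed config mirroring the hyperparameters listed above.
+ import deepspeed
+
+ ds_config = {
+     "train_micro_batch_size_per_gpu": 86,
+     "gradient_accumulation_steps": 4,
+     "fp16": {"enabled": True},   # mixed-precision regime
+     "optimizer": {
+         "type": "AdamW",
+         "params": {"lr": 1e-5},  # post-warmup learning rate
+     },
+ }
+
+ # engine, optimizer, _, _ = deepspeed.initialize(
+ #     model=model, model_parameters=model.parameters(), config=ds_config)
+ ```
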
+ #### Sizes, Times

+ - **Training time:** ~5 days on a single RTX 4070 with DeepSpeed and FlashAttention
+ - **Checkpoint size:** ~30 MB

  ## Evaluation

  ### Testing Data, Factors & Metrics

  #### Testing Data

+ Evaluated on a held-out set of unseen Minecraft schematics and raw world data, including both user-made builds and terrain chunks.

  #### Factors

+ - **Build type:** various structures (e.g., houses, farms, castles)
+ - **Chunk size:** up to 16x8x16

  #### Metrics

+ - **Accuracy (segmentation):** 95%+ for build vs. terrain classification on unseen data
+ - **Accuracy (masked prediction):** 50-90% on the masking task the model was trained on, provided the input chunks are not too far from the training distribution
+ - **Loss:** cross-entropy on masked block prediction

  ### Results

+ On the masking task it was trained on (not typical of real-world use), the model correctly predicts 40-90% of masked blocks given a masking rate below 30%.
+ With a trained segmentation head, the model achieves 95% accuracy in distinguishing builds from terrain, with clear separation in the 144-dimensional embedding space (visualized via t-SNE); a minimal head sketch follows. It generalizes well to unseen builds but may struggle with highly unique or novel block distributions. See [the GitHub repo](https://github.com/Kingburrito777/TransformerBlock2Vec) for details.

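+ A minimal sketch of such a segmentation head; the head architecture and training loop are assumptions (the card only states that a head was trained over the embeddings).

+ ```python
+ # Linear build-vs-terrain head over 144-d chunk embeddings (illustrative).
+ import torch
+ import torch.nn as nn
+
+ head = nn.Linear(144, 2)  # classes: 0 = terrain, 1 = build
+ opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
+ loss_fn = nn.CrossEntropyLoss()
+
+ def train_step(emb_batch: torch.Tensor, labels: torch.Tensor) -> float:
+     """emb_batch: (B, 144) chunk embeddings; labels: (B,) class ids."""
+     opt.zero_grad()
+     loss = loss_fn(head(emb_batch), labels)
+     loss.backward()
+     opt.step()
+     return loss.item()
+ ```
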
  #### Summary

+ TransformerBlock2Vec provides a robust embedding space for Minecraft builds, enabling accurate segmentation and, with further development, generative tasks.

+ ## Model Examination

+ t-SNE visualizations of the 144-dimensional embedding space show distinct clusters for different build types, with terrain chunks separable from user-made structures. The model captures 3D spatial relationships effectively due to the 3D RoPE implementation.
 
 

+ ## Technical Specifications

  ### Model Architecture and Objective

+ - **Architecture:** Transformer encoder with 6 layers, 8 attention heads, 144-dimensional embeddings, and SwiGLU feed-forward networks; 3D RoPE provides positional encoding, and FlashAttention keeps attention efficient (a RoPE sketch follows the list).
+ - **Objective:** Masked block prediction (30% of tokens masked) to learn a 144-dimensional embedding space for Minecraft chunks.

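+ A sketch of the 3D RoPE idea: split each head's channels into three equal groups and rotate each group by the block's x, y, or z coordinate. The frequency schedule, channel layout, and inferred head size are assumptions; the repository's implementation may differ.

+ ```python
+ # 3D rotary positional embedding sketch: one rotary rotation per spatial axis.
+ import torch
+
+ def rope_1d(t: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
+     """Rotate channel pairs of t (seq, d) by integer positions pos (seq,)."""
+     d = t.shape[-1]  # must be even
+     freqs = 10000 ** (-torch.arange(0, d, 2).float() / d)
+     ang = pos.float()[:, None] * freqs[None, :]  # (seq, d/2)
+     cos, sin = ang.cos(), ang.sin()
+     t1, t2 = t[..., 0::2], t[..., 1::2]
+     out = torch.empty_like(t)
+     out[..., 0::2] = t1 * cos - t2 * sin
+     out[..., 1::2] = t1 * sin + t2 * cos
+     return out
+
+ def rope_3d(t: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
+     """t: (seq, head_dim) with head_dim divisible by 6; xyz: (seq, 3) coords."""
+     g = t.shape[-1] // 3
+     parts = [rope_1d(t[..., i * g:(i + 1) * g], xyz[:, i]) for i in range(3)]
+     return torch.cat(parts, dim=-1)
+
+ # With 144-d embeddings and 8 heads, head_dim would be 18 = 3 axes x 6 channels
+ # (an inferred layout, not confirmed by the card).
+ q = torch.randn(16 * 8 * 16, 18)
+ coords = torch.stack(torch.meshgrid(
+     torch.arange(16), torch.arange(8), torch.arange(16), indexing="ij",
+ ), dim=-1).reshape(-1, 3)
+ q_rot = rope_3d(q, coords)
+ ```
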
+ A technical breakdown with proper metrics and further details will follow.

+ ### Compute Infrastructure
  #### Hardware

+ - **Hardware Type:** NVIDIA RTX 4070
+ - **Hours used:** ~120 hours (5 days) for training

  #### Software

+ - PyTorch
+ - DeepSpeed (distributed training and mixed precision)
+ - FlashAttention
+ - Python 3.8+
+ - PostgreSQL (data storage)

+ ## Citation
  **BibTeX:**

+ ```bibtex
+ [tba]
+ ```

+ ## Glossary

+ - **3D RoPE:** 3D Rotary Positional Embeddings, a positional encoding method for 3D voxel data.
+ - **Litematica:** a Minecraft schematic file format for storing 3D build data.
+ - **Chunk:** a 3D block region in Minecraft (up to 16x8x16 in this model).
+ - **Embedding space:** the 144-dimensional vector space into which build chunks are mapped.

+ ## More Information

+ NOT AN OFFICIAL MINECRAFT PRODUCT. NOT APPROVED BY OR ASSOCIATED WITH MOJANG OR MICROSOFT. This project is part of the CraftGPT initiative to build generative AI for Minecraft.