---
library_name: vllm
language:
- en
- fr
- es
- de
- it
- pt
- nl
- zh
- ja
- ko
- ar
license: apache-2.0
inference: false
base_model:
- mistralai/Ministral-3-8B-Reasoning-2512
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- mistral-common
- abliterated
- uncensored
---

# huihui-ai/Huihui-Ministral-3-8B-Reasoning-2512-abliterated
This is an uncensored version of [mistralai/Ministral-3-8B-Reasoning-2512](https://huggingface.co/mistralai/Ministral-3-8B-Reasoning-2512) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) for details on the technique).

Only the text components of the model were processed; the vision components were left unchanged.

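As a rough sketch of the idea behind abliteration (a simplification, not the linked script's exact implementation), refusal behavior is suppressed by projecting an estimated "refusal direction" out of the model's hidden states. The helper name `ablate_direction` below is illustrative:

```python
import numpy as np

def ablate_direction(hidden: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of each hidden-state row along `direction`.

    `direction` stands in for an estimated refusal direction; in the real
    technique it is computed from activation differences on harmful vs.
    harmless prompts.
    """
    d = direction / np.linalg.norm(direction)  # unit vector
    return hidden - np.outer(hidden @ d, d)    # h - (h . d) d for every row

# Every row of the result is orthogonal to the removed direction.
```

After this projection, no hidden state retains any component along the removed direction, which is what makes the model stop expressing the corresponding behavior.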
## Chat with Image

```python
import base64
import io

import torch
from PIL import Image
from transformers import Mistral3ForConditionalGeneration, MistralCommonBackend

model_id = "huihui-ai/Huihui-Ministral-3-8B-Reasoning-2512-abliterated"

tokenizer = MistralCommonBackend.from_pretrained(model_id, trust_remote_code=True)

model = Mistral3ForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

# image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"
image_path = "/png/Battle.png"

# Encode the local image as a base64 data URL.
with Image.open(image_path) as img:
    extension_to_format = {
        ".png": "PNG",
        ".jpg": "JPEG",
        ".jpeg": "JPEG",
    }
    image_format = extension_to_format.get(
        "." + image_path.lower().split(".")[-1], "JPEG"
    )

    buffered = io.BytesIO()
    img.save(buffered, format=image_format)
    base64_string = base64.b64encode(buffered.getvalue()).decode("utf-8")
    image_url = f"data:image/{'png' if image_format == 'PNG' else 'jpeg'};base64,{base64_string}"

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
            },
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]

tokenized = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to("cuda")

# Pixel values must match the model's compute dtype.
tokenized["pixel_values"] = tokenized["pixel_values"].to(dtype=torch.bfloat16, device="cuda")
image_sizes = [tokenized["pixel_values"].shape[-2:]]

output = model.generate(
    **tokenized,
    image_sizes=image_sizes,
    max_new_tokens=512,
)[0]

# Decode only the newly generated tokens.
decoded_output = tokenizer.decode(output[len(tokenized["input_ids"][0]):])
print(decoded_output)
```
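The base64 data-URL step in the example above can be factored into a small standalone helper. This is a sketch; `image_to_data_url` is an illustrative name, not part of any library:

```python
import base64

def image_to_data_url(image_bytes: bytes, fmt: str = "png") -> str:
    """Wrap raw image bytes in a base64 data URL, as the example above does inline."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:image/{fmt};base64,{encoded}"
```

Keeping the encoding separate makes it easy to swap in image bytes from any source (a file, an HTTP response, an in-memory buffer) without touching the chat-message construction.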

### Usage Warnings

- **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.

- **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security.

- **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.

- **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.

- **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.

- **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.

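Since the card declares `library_name: vllm`, the model can presumably also be served with vLLM. A minimal sketch, assuming the usual Mistral loading flags apply to this checkpoint; they may need adjusting depending on which file format the repository ships:

```shell
# Launch an OpenAI-compatible server (flags follow Mistral's recommended vLLM setup).
vllm serve huihui-ai/Huihui-Ministral-3-8B-Reasoning-2512-abliterated \
  --tokenizer_mode mistral --config_format mistral --load_format mistral
```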
### Donation

If you like this model, please click "like" and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.

##### Your donation helps fund our ongoing development and improvement; even a cup of coffee makes a difference.
- bitcoin(BTC):
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```
- Support our work on [Ko-fi](https://ko-fi.com/huihuiai)!