Upload model weights

#2
by swhitehead - opened
Files changed (1)
  1. README.md +3 -317
README.md CHANGED
@@ -1,317 +1,3 @@
- ---
- language:
- - en
- library_name: transformers
- license: mit
- pipeline_tag: image-text-to-text
- tags:
- - multimodal
- ---
-
- # Fara-7B: An Efficient Agentic Model for Computer Use
-
- [![Microsoft](https://img.shields.io/badge/Microsoft-Project-0078D4?logo=microsoft)](https://aka.ms/msaif/fara)
- [![Hugging Face Dataset](https://img.shields.io/badge/🤗-Dataset-yellow)](https://huggingface.co/datasets/microsoft/WebTailBench)
- [![Foundry](https://img.shields.io/badge/Azure-Foundry-0089D6)](https://aka.ms/foundry-fara-7b)
- [![Github](https://img.shields.io/badge/Github-181717?logo=github&logoColor=white)](https://github.com/microsoft/fara)
- [![Paper](https://img.shields.io/badge/Paper-2511.19663-red)](https://huggingface.co/papers/2511.19663)
-
- [Official Microsoft Blog](https://www.microsoft.com/en-us/research/?p=1155843&preview=1&_ppp=0a22f3e916)<br>
- [Technical Report](https://aka.ms/fara-techreport)<br>
- [Paper](https://huggingface.co/papers/2511.19663)<br>
- [Github](https://github.com/microsoft/fara)<br>
- [Microsoft Foundry](https://ai.azure.com/explore/models/Fara-7B/version/1/registry/azureml-msr?tid=72f988bf-86f1-41af-91ab-2d7cd011db47)<br>
-
- ## Model Summary
-
- **Developer:** Microsoft Research
-
- **Description:**
- Fara-7B is Microsoft's first agentic small language model (SLM) designed specifically for computer use. With only 7 billion parameters, Fara-7B is an ultra-compact Computer Use Agent (CUA) that achieves state-of-the-art performance within its size class and is competitive with larger, more resource-intensive agentic systems.
-
- **Model Architecture:**
- Multimodal decoder-only language model that takes an image (screenshot) + text context. It directly predicts thoughts and actions with grounded arguments. Current production baselines leverage Qwen 2.5-VL (7B).
-
- **Parameters:** 7 Billion
-
- **Inputs:** User goal (text), current screenshot(s), history of previous outputs (thoughts + actions text) from the agent.
-
- **Context Length:** 128k
-
- **Outputs:** Generated text in response to the input, with a chain-of-thought block followed by a tool call block to indicate the action.
-
- **GPUs:** 64 H100s
-
- **Training Time:** 2.5 days
-
- **Public Data Summary:** N/A
-
- **Dates:** Trained from 26th October 2025 to 29th October 2025
-
- **Status:** Static model trained on public and private data
-
- **Release Date:** November 24th, 2025
-
- **License:** MIT
-
- **Model Dependencies:** Qwen 2.5 VL
-
- **Additional Assets:** N/A
-
- **Acceptable Use Policy:** N/A
-
- ---
-
- ## 1. Model Overview
-
- Fara is a 7B Computer Use Agent (CUA) model specialized for taking actions on the web to accomplish high-level user tasks. Beyond understanding webpage layout and basic action mechanics, it plans and executes high-level goals like booking restaurants, applying for jobs, planning trips, and buying shopping lists. Its training relies on a large-scale, fully synthetic dataset of action trajectories generated and verified by a multi-agent pipeline.
-
- Fara perceives browser inputs via screenshots, while internal reasoning and state history are recorded textually. Based on recent screenshots and a full history of actions, it predicts the next action with necessary arguments (e.g., coordinates for clicks).
-
- ### 1.1 Alignment Approach
-
- Fara-7B uses a robust post-training safety approach leveraging open-source and in-house synthetic datasets. It incorporates critical point recognition—situations requiring user permission or sensitive information—to safely halt actions. The model is trained to refuse harmful tasks and undergoes automated red teaming to assess risks, including grounding, jailbreaks, harmful content, and copyright violations.
-
- ### 1.2 Safeguards
-
- Fara-7B is trained to refuse tasks in categories that violate usage policy:
-
- | Type | Description | Examples |
- |------|------------|---------|
- | Illegal Activities | Tasks requiring unlawful actions | Terrorism-related searches, piracy, unauthorized access, weapons creation |
- | Deceptive Tasks | Tasks misleading or impersonating | Fake forms, fraudulent listings, phishing |
- | High-Risk/Regulated Domains | Tasks requiring professional oversight | Medical, legal, financial advice or approvals |
- | Harassment, Exploitation, Hate | Tasks harming or discriminating | Harassment content, stalking, sexualizing minors |
- | Unsafe Technical Use | Misuse of automation | Large-scale scraping, spam, system disruption |
- | Misinformation | Spreading false claims | Publishing unverified claims |
- | Sexual | Erotic or pornographic tasks | Erotic roleplay, porn searches |
-
- Critical points where the agent stops include entering personal info, completing purchases, making calls, sending emails, submitting applications, and signing into accounts.
-
- ---
-
- ## 2. Usage
-
- ### Sample Usage
-
- You can try Fara-7B locally by setting up the environment and hosting the model. For full instructions, refer to the [GitHub repository](https://github.com/microsoft/fara#installation).
-
- ```bash
- # 1. Clone repository
- git clone https://github.com/microsoft/fara.git
- cd fara
-
- # 2. Setup environment
- python3 -m venv .venv
- source .venv/bin/activate
- pip install -e .
- playwright install
- ```
-
- Then in one process, host the model:
- ```bash
- vllm serve "microsoft/Fara-7B" --port 5000 --dtype auto
- ```
- Then you can iteratively query it with:
- ```bash
- fara-cli --task "what's the weather in New York now"
- ```
-
- Hint: you may need to add `--tensor-parallel-size 2` to the `vllm serve` command if you run out of GPU memory.
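`fara-cli` drives the browsing loop, but since vLLM exposes an OpenAI-compatible chat endpoint, it can help to see roughly what one request looks like. A minimal sketch, assuming the generic OpenAI-style vision message format that vLLM serves; the exact prompt assembly Fara expects (system prompt, screenshot encoding, history) is handled by `fara-cli` per the GitHub repo, so the field choices below are illustrative:

```python
import base64
import json

# Build an OpenAI-style chat payload for the vLLM server started above.
# NOTE: this is a sketch of the generic vLLM request shape, not Fara's
# exact prompt assembly (which fara-cli performs for you).
def build_request(task: str, screenshot_png: bytes, system_prompt: str) -> dict:
    screenshot_b64 = base64.b64encode(screenshot_png).decode("ascii")
    return {
        "model": "microsoft/Fara-7B",
        "messages": [
            {"role": "system", "content": system_prompt},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": task},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{screenshot_b64}"},
                    },
                ],
            },
        ],
    }

payload = build_request(
    task="what's the weather in New York now",
    screenshot_png=b"\x89PNG...",  # placeholder bytes, not a real screenshot
    system_prompt="You are a web automation agent...",  # see Section 2.4
)
print(json.dumps(payload)[:60])
```

Posting this JSON to the server's `/v1/chat/completions` endpoint returns the model's next thought and tool call; in practice, let `fara-cli` manage the full screenshot/history loop.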
-
- ### 2.1 Primary Use Cases
-
- - Automating web tasks such as shopping, booking travel, restaurant reservations, info-seeking, or account workflows.
- - Performs actions step-by-step using multimodal understanding from browser screenshots.
- - On-device execution offers privacy benefits and lower latency.
-
- ### 2.2 Out-of-Scope Use Cases
-
- - The model has not been evaluated for all downstream purposes; consider the limitations of LLMs around accuracy, safety, and fairness.
- - Use must adhere to applicable laws and regulations.
- - English-only support.
-
- ### 2.3 Distribution Channels
-
- - Hugging Face
- - Azure AI Foundry
-
- ### 2.4 Input Formats
-
- Given the nature of the training data, always use the ChatML template with the following system prompt for inference:
-
- ---
-
- **System Prompt:**
-
- You are a web automation agent that performs actions on websites to fulfill user requests by calling various tools.
-
- You should stop execution at **Critical Points**. A Critical Point occurs in tasks like:
-
- - Checkout
- - Book
- - Purchase
- - Call
- - Email
- - Order
-
- A Critical Point requires the user's permission or personal/sensitive information (name, email, credit card, address, payment information, resume, etc.) to complete a transaction (purchase, reservation, sign-up, etc.), or to communicate as a human would (call, email, apply to a job, etc.).
-
- **Guideline:** Solve the task as far as possible **up until a Critical Point**.
-
- **Examples:**
-
- - If the task is to "call a restaurant to make a reservation," do **not** actually make the call. Instead, navigate to the restaurant's page and find the phone number.
- - If the task is to "order new size 12 running shoes," do **not** place the order. Instead, search for the right shoes that meet the criteria and add them to the cart.
-
- Some tasks, like answering questions, may not encounter a Critical Point at all.
-
- ---
-
- **Function Signatures:**
-
- You are provided with function signatures within XML tags:
-
- ```json
- {
- "type": "function",
- "function": {
- "name": "computer_use",
- "description": "Use a mouse and keyboard to interact with a computer, and take screenshots.\
- * This is an interface to a desktop GUI. You do not have access to a terminal or applications menu. You must click on desktop icons to start applications.\
- * Some applications may take time to start or process actions, so you may need to wait and take successive screenshots to see the results of your actions. E.g. if you click on Firefox and a window doesn't open, try wait and taking another screenshot.\
- * The screen's resolution is 1428x896.\
- * Whenever you intend to move the cursor to click on an element like an icon, you should consult a screenshot to determine the coordinates of the element before moving the cursor.\
- * If you tried clicking on a program or link but it failed to load, even after waiting, try adjusting your cursor position so that the tip of the cursor visually falls on the element that you want to click.\
- * Make sure to click any buttons, links, icons, etc with the cursor tip in the center of the element. Don't click boxes on their edges unless asked.\
- * When a separate scrollable container prominently overlays the webpage, if you want to scroll within it, you typically need to mouse_move() over it first and then scroll().\
- * If a popup window appears that you want to close, if left_click() on the 'X' or close button doesn't work, try key(keys=['Escape']) to close it.\
- * On some search bars, when you type(), you may need to press_enter=False and instead separately call left_click() on the search button to submit the search query. This is especially true of search bars that have auto-suggest popups for e.g. locations\
- * For calendar widgets, you usually need to left_click() on arrows to move between months and left_click() on dates to select them; type() is not typically used to input dates there.",
- "parameters": {
- "properties": {
- "action": {
- "description": "The action to perform. The available actions are:\
- * key: Performs key down presses on the arguments passed in order, then performs key releases in reverse order. Includes 'Enter', 'Alt', 'Shift', 'Tab', 'Control', 'Backspace', 'Delete', 'Escape', 'ArrowUp', 'ArrowDown', 'ArrowLeft', 'ArrowRight', 'PageDown', 'PageUp', 'Shift', etc.\
- * type: Type a string of text on the keyboard.\
- * mouse_move: Move the cursor to a specified (x, y) pixel coordinate on the screen.\
- * left_click: Click the left mouse button.\
- * scroll: Performs a scroll of the mouse scroll wheel.\
- * visit_url: Visit a specified URL.\
- * web_search: Perform a web search with a specified query.\
- * history_back: Go back to the previous page in the browser history.\
- * pause_and_memorize_fact: Pause and memorize a fact for future reference.\
- * wait: Wait specified seconds for the change to happen.\
- * terminate: Terminate the current task and report its completion status.",
- "enum": ["key", "type", "mouse_move", "left_click", "scroll", "visit_url", "web_search", "history_back", "pause_and_memorize_fact", "wait", "terminate"],
- "type": "string"
- },
- "keys": {"description": "Required only by action=key.", "type": "array"},
- "text": {"description": "Required only by action=type.", "type": "string"},
- "coordinate": {"description": "(x, y) coordinates for mouse actions. Required only by action=left_click, action=mouse_move, and action=type.", "type": "array"},
- "pixels": {"description": "Amount of scrolling. Positive = up, Negative = down. Required only by action=scroll.", "type": "number"},
- "url": {"description": "The URL to visit. Required only by action=visit_url.", "type": "string"},
- "query": {"description": "The query to search for. Required only by action=web_search.", "type": "string"},
- "fact": {"description": "The fact to remember for the future. Required only by action=pause_and_memorize_fact.", "type": "string"},
- "time": {"description": "Seconds to wait. Required only by action=wait.", "type": "number"},
- "status": {"description": "Status of the task. Required only by action=terminate.", "type": "string", "enum": ["success", "failure"]}
- },
- "required": ["action"],
- "type": "object"
- }
- }
- }
- ```
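The "Required only by action=..." notes in the schema above imply a simple per-action argument contract. A minimal client-side sketch that checks a predicted action against that contract before executing it (this helper is illustrative, not part of the Fara codebase):

```python
# Per-action required arguments, transcribed from the schema above.
REQUIRED_ARGS = {
    "key": ["keys"],
    "type": ["text", "coordinate"],
    "mouse_move": ["coordinate"],
    "left_click": ["coordinate"],
    "scroll": ["pixels"],
    "visit_url": ["url"],
    "web_search": ["query"],
    "history_back": [],
    "pause_and_memorize_fact": ["fact"],
    "wait": ["time"],
    "terminate": ["status"],
}

def missing_args(arguments: dict) -> list:
    """Return the argument names required by arguments['action'] but absent."""
    action = arguments.get("action")
    if action not in REQUIRED_ARGS:
        return ["action"]  # unknown or missing action name
    return [name for name in REQUIRED_ARGS[action] if name not in arguments]

print(missing_args({"action": "left_click", "coordinate": [640, 448]}))  # []
print(missing_args({"action": "scroll"}))  # ['pixels']
```

Rejecting (or re-prompting on) an incomplete call like the second example is cheaper than letting the browser driver fail mid-trajectory.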
-
- For each function call, return a JSON object with the function name and arguments within XML tags:
-
- ```json
- {
- "name": "<function-name>",
- "arguments": <args-json-object>
- }
-
- ```
-
- - Function signatures provided for all actions (`key`, `type`, `mouse_move`, `left_click`, `scroll`, `visit_url`, `web_search`, `history_back`, `pause_and_memorize_fact`, `wait`, `terminate`).
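To execute an action, a client has to recover that JSON object from the model's raw output. A minimal sketch, assuming only the `{"name": ..., "arguments": ...}` shape shown above; since the literal wrapping tag names are elided on this page, this version scans for the first balanced JSON object instead of matching specific tags, and it is not Fara's own parser:

```python
import json
from typing import Optional

def extract_tool_call(text: str) -> Optional[dict]:
    """Find the first balanced {...} block that decodes to a tool call."""
    start = text.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(text)):
            if text[i] == "{":
                depth += 1
            elif text[i] == "}":
                depth -= 1
                if depth == 0:  # candidate block is balanced; try to decode it
                    try:
                        obj = json.loads(text[start:i + 1])
                    except json.JSONDecodeError:
                        break  # not valid JSON; retry from the next brace
                    if isinstance(obj, dict) and {"name", "arguments"} <= obj.keys():
                        return obj
                    break  # valid JSON but not a tool call; keep searching
        start = text.find("{", start + 1)
    return None

raw = ('I will click the button. '
       '{"name": "computer_use", "arguments": '
       '{"action": "left_click", "coordinate": [412, 300]}}')
call = extract_tool_call(raw)
print(call["arguments"]["action"])  # left_click
```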
-
-
- ### 2.5 Technical Requirements & Integration
-
- - Required packages: `torch >=2.7.1`, `transformers >=4.53.3`, `vllm >=0.10.0`
- - Tested on NVIDIA A6000, A100, H100 GPUs (Ubuntu 24.04.3 LTS)
- - Recommended serving: vLLM with bf16 precision
- - A reference implementation via Magentic-UI runs in a Docker sandbox for safe web execution
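A quick way to verify the minimum versions listed above before serving. This is a stdlib-only convenience sketch, not an official check: it compares dotted release numbers by plain tuple comparison rather than full PEP 440 semantics, so pre-release suffixes are ignored:

```python
from importlib import metadata

# Minimum versions from the requirements bullet above.
MINIMUMS = {"torch": "2.7.1", "transformers": "4.53.3", "vllm": "0.10.0"}

def release_tuple(version: str) -> tuple:
    """'4.53.3' -> (4, 53, 3); a piece's trailing non-digits are dropped."""
    parts = []
    for piece in version.split("."):
        num = ""
        for ch in piece:
            if ch.isdigit():
                num += ch
            else:
                break  # stop at 'rc1', 'dev0', etc.
        if not num:
            break
        parts.append(int(num))
    return tuple(parts)

def check_requirements(minimums: dict) -> dict:
    """Map package name -> True/False (meets minimum) or None (not installed)."""
    results = {}
    for name, minimum in minimums.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            results[name] = None
            continue
        results[name] = release_tuple(installed) >= release_tuple(minimum)
    return results

print(release_tuple("4.53.3") >= release_tuple("4.53.0"))  # True
```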
-
- ### 2.6 Responsible AI Considerations
-
- - English-only; other languages may have degraded performance
- - Potential stereotype reinforcement or inappropriate content
- - Verify outputs, especially in high-stakes or regulated domains
- - Misuse includes fraud, spam, malware generation
- - Use safety services like Azure AI Content Safety where possible
- - Recommended: human-in-the-loop, sandboxing, access control, output verification
-
- ---
-
- ## 3. Data Overview
-
- ### 3.1 Training, Testing, Validation Datasets
-
- - Multi-agent data generation pipeline produces synthetic trajectories from seed URLs and open-source tasks
- - Records screenshots, thoughts, action traces, and verification via verifier agents
- - Includes high-quality public datasets: image and text modalities
- - Specialized data: grounding, UI understanding (VQA, captioning, OCR), safety/refusal datasets
-
- ---
-
- ## 4. Quality and Performance Evaluation
-
- ### 4.1 Online Agent Evaluation Results
-
- | Model | Params | WebVoyager | Online-M2W | DeepShop | WebTailBench |
- |------------------------------|--------|------------|------------|----------|---------------|
- | **SoM Agents** | | | | | |
- | SoM Agent (GPT-5) | - | 90.6 | 57.7 | 49.1 | 60.4 |
- | SoM Agent (o3) | - | 79.3 | 55.4 | 49.7 | 52.7 |
- | SoM Agent (GPT-4o) | - | 65.1 | 34.6 | 16.0 | 30.8 |
- | GLM-4.1V-9B-Thinking | 9B | 66.8 | 33.9 | 32.0 | 22.4 |
- | **Computer Use Models** | | | | | |
- | OpenAI computer-use-preview | - | 70.9 | 42.9 | 24.7 | 25.7 |
- | UI-TARS-1.5-7B | 7B | 66.4 | 31.3 | 11.6 | 19.5 |
- | Fara-7B | 7B | 73.5 | 34.1 | 26.2 | 38.4 |
-
- The table reports task completion success rates on WebVoyager, Online-Mind2Web, DeepShop, and WebTailBench for both SoM agents and native computer-use agents.
- Scores are averaged over 3 runs.
-
- ### 4.2 Safety Evaluation & Red-Teaming
-
- - Post-training safety with critical point design
- - Red-teaming on Azure: grounding, jailbreaks, harmful content, copyright
-
- ### 4.3 Guidelines for Safe Use
-
- - Human-in-the-loop monitoring recommended
- - Do not share sensitive data
- - Run in sandboxed environments
- - Limit internet access via allow-lists/block-lists
- - Avoid use in commercial, high-stakes, or regulated domains
-
- **Security Considerations:**
- - Automates interactions across websites, apps, OS; requires strict access control, sandboxing, and monitoring
-
- **Attribution:** Our model is based on Qwen 2.5 VL, which is released under the Apache 2.0 license. Fara-7B is released under the MIT license; Apache 2.0 and MIT are compatible licenses.
-
- ---
-
- ## Appendix: Benchmarks
-
- | Benchmark | Link |
- |-----------|------|
- | WebVoyager | [MinorJerry/WebVoyager](https://huggingface.co/datasets/MinorJerry/WebVoyager) |
- | Online-Mind2Web | [osunlp/Online-Mind2Web](https://huggingface.co/datasets/osunlp/Online-Mind2Web) |
- | DeepShop | [DeepShop/DeepShop](https://huggingface.co/datasets/DeepShop/DeepShop) |
- | WebTailBench | [microsoft/WebTailBench](https://huggingface.co/datasets/microsoft/WebTailBench) |
- | ScreenSpot v1 | [rootsautomation/ScreenSpot](https://huggingface.co/datasets/rootsautomation/ScreenSpot) |
- | ScreenSpot v2 | [Voxel51/ScreenSpot-v2](https://huggingface.co/datasets/Voxel51/ScreenSpot-v2) |
- | AgentHarm | [ai-safety-institute/AgentHarm](https://huggingface.co/datasets/ai-safety-institute/AgentHarm) |
 
+ ---
+ license: mit
+ ---