No AI turn in the dataset

#1
by uglyfred78 - opened

Hi, I was wondering how to use your dataset. I loaded it in Python and filtered through all the tool calls, but only found `heartbeat` and `finalize_session`. I'm fairly sure there are no AI turns at all. Am I doing something wrong? I'm very interested in openclaw and would like to train a small fine-tuned model on this dataset with QLoRA.

@webxos

The dataset is still under development and I will update it soon, thank you. In the meantime, here are some possible fixes:

1. Load the JSONL files, not just the dataset card

Make sure you're loading the JSONL files, not just the dataset card. Example:

```python
from datasets import load_dataset

# Load the JSONL files directly
dataset = load_dataset('json', data_files={
    'train': 'data/train.jsonl',
    'validation': 'data/validation.jsonl',
    'test': 'data/test.jsonl'
})
```

If you loaded only `dataset_card.json`, you'll only see metadata, not the actual sessions.

2. Navigate to the `full_trace` field

Each example has a `full_trace` list. Inside that list, each turn contains:

- `agent_name` (e.g., "router", "email_agent")
- `role` ("user_message", "thought", "action", "observation")
- `tool_call` (present only for actions that invoke a tool)
- `tool_result` (the simulated outcome)
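
Concretely, a single action turn might look roughly like this; the agent name, tool name, and payloads below are purely illustrative, not taken from the dataset:

```python
# Illustrative shape of one turn in full_trace; all values are hypothetical
turn = {
    'agent_name': 'email_agent',
    'role': 'action',
    'tool_call': {'name': 'send_email', 'arguments': {'to': 'alice@example.com'}},
    'tool_result': {'status': 'sent'},
}
```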

To find all tool calls, iterate over `full_trace` and check `tool_call`. For example:

```python
def extract_tool_calls(example):
    tool_names = []
    for turn in example['full_trace']:
        if turn.get('tool_call'):
            tool_names.append(turn['tool_call']['name'])
    return tool_names

# Apply to one example
example = dataset['train'][0]
print(extract_tool_calls(example))
```

If you do this across the whole dataset, you should see many different tool names.
3. Why you might only see `heartbeat` and `finalize_session`

A few possibilities:

- You're looking at the `metadata` field: it contains generator info (including `finalize_session` as a completion marker), not the actual trace.
- The dataset was generated with an older version: make sure you're using the latest version (v2.0), where the full range of tools is active.
- You're only looking at a few examples: try sampling 100 random examples and counting tool occurrences (see the sketch just after this list).
- The export ZIP might be corrupted: generate a new dataset and re-export.
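
For the third point, here's a minimal sampling sketch (this assumes the splits were loaded with `load_dataset` as above; the seed is arbitrary):

```python
from collections import Counter

# Count tool calls across 100 randomly sampled training examples
sample = dataset['train'].shuffle(seed=42).select(range(100))
tool_counts = Counter(
    turn['tool_call']['name']
    for ex in sample
    for turn in ex['full_trace']
    if turn.get('tool_call')
)
print(tool_counts.most_common())
```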

And to scan the whole dataset, here's a quick diagnostic script:

```python
from collections import Counter

all_tools = []
for split in ['train', 'validation', 'test']:
    for ex in dataset[split]:
        for turn in ex['full_trace']:
            if turn.get('tool_call'):
                all_tools.append(turn['tool_call']['name'])

print(Counter(all_tools).most_common())
```

If the output still only shows `heartbeat` and `finalize_session`, something went wrong during generation. Try generating a fresh dataset (e.g., `gen 100`) and exporting again.
4. What about “AI turns”?

The dataset includes turns from multiple agents: `router`, `email_agent`, `calendar_agent`, `code_agent`, `web_agent`, `user_proxy`, and `system`. Each of these agents produces thoughts, actions, and observations: exactly the kind of data you'd want for fine-tuning a model to do tool-calling and routing.

If you want to extract only the assistant messages (i.e., AI responses), filter by `role` in `full_trace`:

```python
assistant_turns = [turn for turn in example['full_trace']
                   if turn['role'] in ['thought', 'action']]
```

These contain the reasoning and tool invocations.
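
If it's useful, here's a small sketch that additionally groups those assistant-side turns by the agent that produced them:

```python
from collections import defaultdict

# Group thought/action turns by the agent that produced them
turns_by_agent = defaultdict(list)
for turn in example['full_trace']:
    if turn['role'] in ('thought', 'action'):
        turns_by_agent[turn['agent_name']].append(turn)

print({agent: len(turns) for agent, turns in turns_by_agent.items()})
```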
5. Fine‑tuning with QLoRA

The dataset is perfect for that! You can format each trace into a conversation (user → assistant → tool → assistant …) and fine-tune a model like Llama-3 or Qwen to follow ReAct patterns. The `task_description` serves as the user prompt, and the `full_trace` provides the expected assistant/tool interaction sequence.
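
As a rough starting point, here's a minimal sketch of that conversion. The role mapping and the `content` field are assumptions about the turn schema, so adjust both to the actual fields and your chat template:

```python
import json

def trace_to_messages(example):
    """Convert one trace into chat-format messages (assumed schema)."""
    messages = [{'role': 'user', 'content': example['task_description']}]
    for turn in example['full_trace']:
        if turn['role'] in ('thought', 'action'):
            # 'content' is an assumed field holding the turn's text
            text = turn.get('content', '')
            if turn.get('tool_call'):
                # Serialize the tool call so the model learns to emit it
                text += '\n' + json.dumps(turn['tool_call'])
            messages.append({'role': 'assistant', 'content': text})
        elif turn['role'] == 'observation':
            # Feed simulated tool results back as tool messages
            messages.append({'role': 'tool',
                             'content': json.dumps(turn.get('tool_result'))})
    return messages

train_messages = [trace_to_messages(ex) for ex in dataset['train']]
```

From there you can apply a standard QLoRA setup (e.g., a 4-bit quantized base model with peft LoRA adapters).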
