[Data Quality] Remove test cases with randomized/non-deterministic temporary paths from dataset

#2 opened by OnionZuo

I've identified several instances in the Hugging Face dataset where test cases contain randomized temporary file paths, making them non-deterministic and unsuitable for reproducible benchmarking. These PASS_TO_PASS (P2P) test cases embed randomly generated temporary directory names that vary between runs. This affects at least 4 instances across the Go multilang dataset.

Examples

- **ollama__ollama-11509**
  - `github.com/ollama/ollama/server/internal/cache/blob::TestOpenErrors//tmp/TestOpenErrors2982046416/001`
  - `github.com/ollama/ollama/server/internal/cache/blob::TestOpenErrors//tmp/go-build3264530104/b606/blob.test`
- **rqlite__rqlite-2182**
  - `github.com/rqlite/rqlite/v8/internal/rarchive.Test_IsZipFile//tmp/rqlite-archive-test3705504462`
  - `github.com/rqlite/rqlite/v8/internal/rarchive.Test_IsZipFile//tmp/rqlite-archive-test861724217`
- **rqlite__rqlite-2190**
  - `github.com/rqlite/rqlite/v8/internal/rarchive.Test_IsZipFile//tmp/rqlite-archive-test2446697833`
  - `github.com/rqlite/rqlite/v8/internal/rarchive.Test_IsZipFile//tmp/rqlite-archive-test2006852349`
- **rqlite__rqlite-2197**
  - `github.com/rqlite/rqlite/v8/internal/rarchive.Test_IsZipFile//tmp/rqlite-archive-test1337815809`
  - `github.com/rqlite/rqlite/v8/internal/rarchive.Test_IsZipFile//tmp/rqlite-archive-test3023757337`
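
For context, here is a minimal Go sketch (not the actual ollama/rqlite test code) of how such paths arise. Paths like `/tmp/TestOpenErrors2982046416/001` match the pattern produced by `testing.T.TempDir`, and `/tmp/rqlite-archive-test...` matches `os.MkdirTemp` with a prefix; both generate a fresh random name on every run, so any test identifier that embeds the path cannot be stable:

```go
// Sketch only: illustrates why identifiers that embed temp paths are
// non-deterministic. This is not the real ollama or rqlite test code.
package example

import (
	"os"
	"path/filepath"
	"testing"
)

func TestOpenErrors(t *testing.T) {
	// t.TempDir creates e.g. /tmp/TestOpenErrors2982046416/001,
	// with a different random component on every run.
	dir := t.TempDir()
	path := filepath.Join(dir, "missing-file")

	if _, err := os.Open(path); err == nil {
		t.Fatalf("expected error opening %s", path)
	}
	// If the benchmark harness records the temp path as part of the
	// test case name, the resulting identifier changes between runs.
}
```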

Yes, this is an intrinsic problem of the automated pipeline for SWE task creation, for Python as well as other languages. Automatic filtering does not cover all of the numerous cases.

We ran the regression tests three times to check test-status consistency and exclude unstable instances, but some problematic instances inevitably remain.

Thank you for your contribution.

Since some instances may become invalid (flaky) over time, users are encouraged to run the evaluation with the gold patch three times to filter out invalid instances before using SWE-bench-Live for benchmarking and training.
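
As a rough illustration of that filtering step, here is a minimal Go sketch. It assumes a hypothetical per-run report format (a JSON file mapping instance_id to whether the gold patch resolved it), which is not the harness's actual output; it keeps only the instances resolved consistently in all three runs:

```go
// Sketch only: filter out flaky instances by intersecting gold-patch results
// across three evaluation runs. The report format below is hypothetical.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Hypothetical report files, one per gold-patch evaluation run.
	runs := []string{"run1.json", "run2.json", "run3.json"}

	counts := map[string]int{}
	for _, path := range runs {
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Assumed format: {"instance_id": true/false, ...}
		var resolved map[string]bool
		if err := json.Unmarshal(data, &resolved); err != nil {
			panic(err)
		}
		for id, ok := range resolved {
			if ok {
				counts[id]++
			}
		}
	}

	// Keep only instances that the gold patch resolves in every run;
	// the rest are treated as flaky/invalid and excluded.
	for id, n := range counts {
		if n == len(runs) {
			fmt.Println(id)
		}
	}
}
```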
