ArneBinder committed on
Commit d102135 · verified · 1 Parent(s): 0bfa3b2

use pie-modules instead of pytorch-ie (#5)

- from https://github.com/ArneBinder/pie-datasets/pull/204 (3f57c3ca11fb82fcb04489d17ee4c169a52795d7)

Files changed (3)
  1. README.md +16 -14
  2. aae2.py +2 -2
  3. requirements.txt +2 -2
README.md CHANGED

@@ -9,7 +9,7 @@ Therefore, the `aae2` dataset as described here follows the data structure from
  ```python
  from pie_datasets import load_dataset
  from pie_datasets.builders.brat import BratDocumentWithMergedSpans
- from pytorch_ie.documents import TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions
+ from pie_modules.documents import TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions

  # load default version
  dataset = load_dataset("pie/aae2")

@@ -53,11 +53,11 @@ See [PIE-Brat Data Schema](https://huggingface.co/datasets/pie/brat#data-schema)

  ### Data Splits

- | Statistics | Train | Test |
- | ---------------------------------------------------------------- | -------------------------: | -----------------------: |
- | No. of document | 322 | 80 |
- | Components <br/>- `MajorClaim`<br/>- `Claim`<br/>- `Premise` | <br/>598<br/>1202<br/>3023 | <br/>153<br/>304<br/>809 |
- | Relations\*<br/>- `supports`<br/>- `attacks` | <br/>3820<br/>405 | <br/>1021<br/>92 |
+ | Statistics | Train | Test |
+ | ------------------------------------------------------------ | -------------------------: | -----------------------: |
+ | No. of document | 322 | 80 |
+ | Components <br/>- `MajorClaim`<br/>- `Claim`<br/>- `Premise` | <br/>598<br/>1202<br/>3023 | <br/>153<br/>304<br/>809 |
+ | Relations\*<br/>- `supports`<br/>- `attacks` | <br/>3820<br/>405 | <br/>1021<br/>92 |

  \* included all relations between claims and premises and all claim attributions.

@@ -90,17 +90,19 @@ See further statistics in Stab & Gurevych (2017), p. 650, Table A.1.

  See further description in Stab & Gurevych 2017, p.627 and the [annotation guideline](https://github.com/ArneBinder/pie-datasets/blob/db94035602610cefca2b1678aa2fe4455c96155d/data/datasets/ArgumentAnnotatedEssays-2.0/guideline.pdf).

- **Note that** relations between `MajorClaim` and `Claim` were not annotated; however, each claim is annotated with an `Attribute` annotation with value `for` or `against` - which indicates the relation between itself and `MajorClaim`. In addition, when two non-related `Claim` 's appear in one paragraph, there is also no relations to one another. An example of a document is shown here below.
+ **Note that** relations between `MajorClaim` and `Claim` were not annotated; however, each claim is annotated with an `Attribute` annotation with value `for` or `against` - which indicates the relation between itself and `MajorClaim`. In addition, when two non-related `Claim` 's appear in one paragraph, there is also no relations to one another. An example of a document is shown here below.

  #### Example

- ![Example](img/sg17f2.png)
+ ![Structure](img/sg17f2.png)
+
+ ![Example](img/aae2_train_47.png)

  ### Document Converters

  The dataset provides document converters for the following target document types:

- - `pytorch_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations` with layers:
+ - `pie_modules.documents.TextDocumentWithLabeledSpansAndBinaryRelations` with layers:
    - `labeled_spans`: `LabeledSpan` annotations, converted from `BratDocumentWithMergedSpans`'s `spans`
      - labels: `MajorClaim`, `Claim`, `Premise`
    - `binary_relations`: `BinaryRelation` annotations, converted from `BratDocumentWithMergedSpans`'s `relations`

@@ -112,13 +114,13 @@ The dataset provides document converters for the following target document types
    - build a `supports` or `attacks` relation from each `Claim` to every `MajorClaim`
    - no relations between each `MajorClaim`
    - labels: `supports`, `attacks`, and `semantically_same` if `connect_first`
- - `pytorch_ie.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions` with layers:
+ - `pie_modules.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions` with layers:
    - `labeled_spans`, as above
    - `binary_relations`, as above
    - `labeled_partitions`, `LabeledSpan` annotations, created from splitting `BratDocumentWithMergedSpans`'s `text` at new lines (`\n`).
      - every partition is labeled as `paragraph`

- See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type
+ See [here](https://github.com/ArneBinder/pie-modules/blob/main/src/pie_modules/documents.py) for the document type
  definitions.

  #### Relation Label Statistics after Document Conversion

@@ -162,7 +164,7 @@ input:
  revision: 1015ee38bd8a36549b344008f7a49af72956a7fe
  ```

- For token based metrics, this uses `bert-base-uncased` from `transformer.AutoTokenizer` (see [AutoTokenizer](https://huggingface.co/docs/transformers/v4.37.1/en/model_doc/auto#transformers.AutoTokenizer), and [bert-based-uncased](https://huggingface.co/bert-base-uncased) to tokenize `text` in `TextDocumentWithLabeledSpansAndBinaryRelations` (see [document type](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py)).
+ For token based metrics, this uses `bert-base-uncased` from `transformer.AutoTokenizer` (see [AutoTokenizer](https://huggingface.co/docs/transformers/v4.37.1/en/model_doc/auto#transformers.AutoTokenizer), and [bert-based-uncased](https://huggingface.co/bert-base-uncased) to tokenize `text` in `TextDocumentWithLabeledSpansAndBinaryRelations` (see [document type](https://github.com/ArneBinder/pie-modules/blob/main/src/pie_modules/documents.py)).

  For relation-label statistics, we collect those from the default relation conversion method, i.e., `connect_first`, resulting in three distinct relation labels.

@@ -351,7 +353,7 @@ Three non-native speakers; one of the three being an expert annotator.

  ### Social Impact of Dataset

- "\[Computational Argumentation\] have
+ "[Computational Argumentation] have
  broad application potential in various areas such as legal decision support (Mochales-Palau and Moens 2009), information retrieval (Carstens and Toni 2015), policy making (Sardianos et al. 2015), and debating technologies (Levy et al. 2014; Rinott et al.
  2015)." (p. 619)

@@ -366,7 +368,7 @@ The relations between claims and major claims are not explicitly annotated.
  "The proportion of non-argumentative text amounts to 47,474 tokens (32.2%) and
  1,631 sentences (22.9%). The number of sentences with several argument components
  is 583, of which 302 include several components with different types (e.g., a claim followed by premise)...
- \[T\]he identification of argument components requires the
+ [T]he identification of argument components requires the
  separation of argumentative from non-argumentative text units and the recognition of
  component boundaries at the token level...The proportion of paragraphs with unlinked
  argument components (e.g., unsupported claims without incoming relations) is 421
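The `labeled_partitions` layer mentioned in the README diff is produced by splitting the document text at newlines, with every resulting span labeled `paragraph`. A minimal pure-Python sketch of that splitting logic (an illustration only, not the actual `RegexPartitioner` implementation from `pie_modules`):

```python
import re

def partition_at_newlines(text: str):
    """Split text at newlines into (start, end, label) spans.

    Each newline-separated chunk becomes one partition labeled
    "paragraph"; start/end are character offsets into the original
    text, as span annotations are.
    """
    return [(m.start(), m.end(), "paragraph")
            for m in re.finditer(r"[^\n]+", text)]
```

For example, `partition_at_newlines("Intro\nBody")` yields two `paragraph` spans, `(0, 5)` and `(6, 10)`; empty lines produce no partition.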
aae2.py CHANGED

@@ -2,9 +2,9 @@ import os
  from typing import Dict

  import pandas as pd
+ from pie_modules.annotations import BinaryRelation
  from pie_modules.document.processing import RegexPartitioner
- from pytorch_ie.annotations import BinaryRelation
- from pytorch_ie.documents import (
+ from pie_modules.documents import (
      TextDocumentWithLabeledSpansAndBinaryRelations,
      TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions,
  )
requirements.txt CHANGED

@@ -1,2 +1,2 @@
- pie-datasets>=0.8.0,<0.11.0
- pie-modules>=0.8.3,<0.12.0
+ pie-datasets>=0.10.11,<0.11.0
+ pie-modules>=0.15.9,<0.16.0
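The tightened pins above use pip's `>=lower,<upper` syntax, a half-open version range. A small self-contained sketch of what such a range accepts, using plain tuple comparison (real resolvers use `packaging.specifiers`; this simplification only handles plain dotted-integer versions like `0.15.9`):

```python
def parse(version: str):
    # "0.15.9" -> (0, 15, 9); tuples compare component-wise
    return tuple(int(part) for part in version.split("."))

def satisfies(version: str, lower: str, upper: str) -> bool:
    """True iff version lies in the half-open range [lower, upper),
    i.e. it matches the pin ">=lower,<upper"."""
    return parse(lower) <= parse(version) < parse(upper)

# The new pins from requirements.txt:
pins = {
    "pie-datasets": ("0.10.11", "0.11.0"),
    "pie-modules": ("0.15.9", "0.16.0"),
}
```

Under these pins, the previously allowed `pie-modules` floor `0.8.3` is now rejected, which is the point of the version bump: the code imports `pie_modules.annotations` and `pie_modules.documents`, which older releases do not provide.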