Dataset Viewer
Auto-converted to Parquet
Columns:
inputs: string, lengths 244 to 11.7k
targets: string, lengths 12 to 173
_template_idx: int64, values 0 to 9
_task_source: string, 1 distinct value
_task_name: string, 1 distinct value
_template_type: string, 2 distinct values
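As a minimal sketch, a dataset with this schema could be loaded and inspected with the Hugging Face datasets library as shown below; the repository id is a placeholder, not the actual path of this dataset.

```python
# Minimal sketch: load the auto-converted Parquet data and inspect one row.
# "org/niv2-task461-qasper-question-generation" is a hypothetical repo id;
# replace it with the dataset's actual path on the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("org/niv2-task461-qasper-question-generation", split="train")

print(ds.features)           # column names and dtypes: inputs, targets, _template_idx, ...
print(len(ds))               # number of rows

row = ds[0]
print(row["inputs"][:200])   # start of the few-shot prompt text
print(row["targets"])        # the reference question for this example
print(row["_task_name"])     # task461_qasper_question_generation
```

The rows below list each example's inputs and targets, followed by its per-row metadata.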
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Q: We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). A: Did they use a relation extraction method to construct the edges in the graph? **** Q: Following the previous works, we report the precision ($P.$), recall ($R.$) and $F_1$ scores for target recognition and targeted sentiment. A: How is the effectiveness of the model evaluated? **** Q: Therefore, in order to enhance the prediction model's robustness, we will adopt differential privacy (DP) method. DP is a system for sharing information about a dataset publicly by describing groups' patterns within the dataset while withholding information about individuals in the dataset. DP can be achieved if the we are willing to add random noise to the result. For example, rather than simply reporting the sum, we can inject noise from a Laplace or gaussian distribution, producing a result that’s not quite exact, that masks the contents of any given row. It intuitively requires that the mechanism that outputs information about an underlying dataset is robust to one sample's any change, thus protecting privacy. A mechanism ${f}$ is a random function that takes a dataset $\mathcal {N}$ as input, and outputs a random variable ${f}(\mathcal {N})$. For example, suppose $\mathcal {N}$ is a news articles dataset, then the function that outputs compound score of articles in $\mathcal {N}$ plus noise from the standard normal distribution is a mechanism [7]. A:
How does the differential privacy mechanism work? ****
_template_idx: 4 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution is here: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Now, solve this: An attention-based sequence-to-sequence model with the emoji vector as additional input as discribed in MojiTalk BIBREF16. CVAE. An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenate it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, early stopping strategy and bag-of-word auxiliary loss are applied during the training. We use the implementation released by BIBREF16 Solution:
What baselines other than standard transformers are used in experiments?
_template_idx: 6 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
Teacher: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Teacher: Now, understand the problem? If you are still confused, see the following example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Reason: This is a good question, and it is answerable based on the context. Now, solve this instance: There are various possible extensions for this work. For example, using all frames assigned to a phone, rather than using only the middle frame. Student:
Do they propose any further additions that could be made to improve generalisation to unseen speakers?
_template_idx: 2 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Each annotator annotated 90 reference sentences (i.e. from the training corpus) with which style they thought the sentence was from. The accuracy on this baseline task for annotators A1, A2, and A3 was 80%, 88%, and 80% respectively, giving us an upper expected bound on the human evaluation. How they perform manual evaluation, what is criteria? We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Did they use a relation extraction method to construct the edges in the graph? The model consists of one layer of LSTM followed by three dense layers. The LSTM layer uses a dropout value of 0.2.
Do they use dropout?
_template_idx: 0 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example is below. Q: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. A: how was the dataset built? Rationale: This is a good question, and it is answerable based on the context. Q: To further improve the performance, we adopt system combination on the decoding lattice level. By combining systems, we can take advantage of the strength of each model that is optimized for different domains. The best result for vlsp2018 of 4.85% WER is obtained by the combination weights 0.6:0.4 where 0.6 is given to the general language model and 0.4 is given to the conversation one. On the vlsp2019 set, the ratio is change slightly by 0.7:0.3 to deliver the best result of 15.09%. A:
What is the language model combination technique used in the paper?
_template_idx: 9 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Q: Our novelties include: Using self-play learning for the neural response ranker (described in detail below). Optimizing neural models for specific metrics (e.g. diversity, coherence) in our ensemble setup. Training a separate dialog model for each user, personalizing our socialbot and making it more consistent. Using a response classification predictor and a response classifier to predict and control aspects of responses such as sentiment, topic, offensiveness, diversity etc. Using a model predictor to predict the best responding model, before the response candidates are generated, reducing computational expenses. Using our entropy-based filtering technique to filter all dialog datasets, obtaining higher quality training data BIBREF3. Building big, pre-trained, hierarchical BERT and GPT dialog models BIBREF6, BIBREF7, BIBREF8. Constantly monitoring the user input through our automatic metrics, ensuring that the user stays engaged. A: What is novel in author's approach? **** Q: We also propose an effective model to integrate clinical named entity information into pre-trained language model. In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. A: How they introduce domain-specific features into pre-trained language model? **** Q: As it is reported that conservatives and liberals exhibit different behaviors on online social platforms BIBREF19BIBREF20BIBREF21, we further assigned a political bias label to different US outlets (and therefore news articles) following the procedure described in BIBREF2. In order to assess the robustness of our method, we performed classification experiments by training only on left-biased (or right-biased) outlets of both disinformation and mainstream domains and testing on the entire set of sources, as well as excluding particular sources that outweigh the others in terms of samples to avoid over-fitting. A:
How is the political bias of different sources included in the model? ****
_template_idx: 4 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example is below. Q: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. A: how was the dataset built? Rationale: This is a good question, and it is answerable based on the context. Q: We use one public dataset Social Honeypot dataset and one self-collected dataset Weibo dataset to validate the effectiveness of our proposed features. Before directly performing the experiments on the employed datasets, we first delete some accounts with few posts in the two employed since the number of tweets is highly indicative of spammers. For the English Honeypot dataset, we remove stopwords, punctuations, non-ASCII words and apply stemming. For the Chinese Weibo dataset, we perform segmentation with "Jieba", a Chinese text segmentation tool. After preprocessing steps, the Weibo dataset contains 2197 legitimate users and 802 spammers, and the honeypot dataset contains 2218 legitimate users and 2947 spammers. A:
What is the benchmark dataset and is its quality high?
_template_idx: 9 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Example solution: how was the dataset built? Example explanation: This is a good question, and it is answerable based on the context. Problem: The second series of tests consisted of using all the documents of the subcorpus “specialist” $E$, because the documents of the subcorpus of Annodis are not identical. As far as system evaluations are concerned, we use the 78 $E$ documents as reference. We calculate the precision $P$, the recall $R$ and the $F$-score on the text corpus used in our tests, as follow:
Solution: How is segmentation quality evaluated?
_template_idx: 5 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example Input: To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e, perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactical changes. We use two approaches described in literature: back-translation and noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system and hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that though very similar to the original test set, result in wrong predictions. We will measure the model robustness against such changes. Example Output: How authors create adversarial test set to measure model robustness? Example Input: Single-Turn: This dataset consists of single-turn instances of statements and responses from the MiM chatbot developed at Constellation AI BIBREF5. Multi-Turn: This dataset is taken from the ConvAI2 challenge and consists of various types of dialogue that have been generated by human-computer conversations. Example Output: what datasets did they use? Example Input: We train a single layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations. Example Output:
What unimodal detection models were used?
_template_idx: 3 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Q: We consider two different models for each language pair: the Baseline and the Document model. We evaluate them on 3 test sets and report BLEU and TER scores. All experiments are run 8 times with different seeds, we report averaged results and p-values for each experiment. A: What evaluation metrics did they use? **** Q: How the music taste of the audience of popular music changed in the last century? The trend lines of the MUSIC model features, reported in figure FIGREF12, reveal that audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious. In other words, the audiences of popular music are getting more demanding as the quality and variety of the music products increases. A: What trends are found in musical preferences? **** Q: Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT. A:
How is their model different from BERT? ****
_template_idx: 4 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Example solution: how was the dataset built? Example explanation: This is a good question, and it is answerable based on the context. Problem: Following previous work BIBREF13, BIBREF12, BIBREF10, BIBREF17, BIBREF9, BIBREF7, we choose SemCor3.0 as training corpus, which is the largest corpus manually annotated with WordNet sense for WSD.
Solution: Is SemCor3.0 reflective of English language data in general?
_template_idx: 5 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution is here: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Now, solve this: Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\rightarrow $ ue). We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language. Solution:
Are agglutinative languages used in the prediction of both prefixing and suffixing languages?
_template_idx: 6 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
instruction: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. question: This section reports the results of the experiments conducted on two data sets for evaluating the performances of wDTW and wTED against other baseline methods. We denote the following distance/similarity measures. WMD: The Word Mover's Distance introduced in Section SECREF1 . VSM: The similarity measure introduced in Section UID12 . PV-DTW: PV-DTW is the same as Algorithm SECREF21 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 where INLINEFORM1 is the PV embedding of paragraph INLINEFORM2 . PV-TED: PV-TED is the same as Algorithm SECREF23 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 . answer: Which are the state-of-the-art models? question: We downloaded 76 videos from a tutorial website about an image editing program . answer: What is the source of the triples? question: We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. answer:
which languages are explored?
_template_idx: 9 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Ex Input: In this paper, we use three data sets from the literature to train and evaluate our own classifier. Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 . Ex Output: Which publicly available datasets are used? Ex Input: We conduct our experiments on $\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains. Ex Output: Over which datasets/corpora is this work evaluated? Ex Input: Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, system generates a Greeting type response to former Greeting type utterance. DA recognition is aimed to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. There are two trends to solve this problem: 1) as a sequence labeling problem, it will predict the labels for all utterances in the whole dialogue history BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, it will treat utterance independently without any context history BIBREF5, BIBREF15. Ex Output:
What is dialogue act recognition?
_template_idx: 1 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Q: . We address various different challenges: dialogue act annotated data is not available for customer service on Twitter, the task of dialogue act annotation is subjective, existing taxonomies do not capture the fine-grained information we believe is valuable to our task, and tweets, although concise in nature, often consist of overlapping dialogue acts to characterize their full intent. A: Which dialogue acts are more suited to the twitter domain? **** Q: We propose a natural language generation model based on BERT, making good use of the pre-trained language model in the encoder and decoder process, and the model can be trained end-to-end without handcrafted features. During model training, the objective of our model is sum of the two processes, jointly trained using "teacher-forcing" algorithm. During training we feed the ground-truth summary to each decoder and minimize the objective. $$L_{model} = \hat{L}_{dec} + \hat{L}_{refine}$$ (Eq. 23) At test time, each time step we choose the predicted word by $\hat{y} = argmax_{y^{\prime }} P(y^{\prime }|x)$ , use beam search to generate the draft summaries, and use greedy search to generate the refined summaries. A: How are the different components of the model trained? Is it trained end-to-end? **** Q: Our multi-task BERT models involve six different Arabic classification tasks. Author profiling and deception detection in Arabic (APDA). LAMA+DINA Emotion detection. Sentiment analysis in Arabic tweets. A:
What are the tasks used in the mulit-task learning setup? ****
_template_idx: 4 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Let me give you an example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. The answer to this example can be: how was the dataset built? Here is why: This is a good question, and it is answerable based on the context. OK. solve this: the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ). After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. Answer:
What discourse relations does it work best/worst for?
_template_idx: 8 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
instruction: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. question: However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts BIBREF2 , BIBREF3 , BIBREF4 . For instance, in some datasets, negation words like “not” and “nobody” are often associated with a relationship of contradiction. As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases. answer: Is such bias caused by bad annotation? question: We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach: Reduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset. Selection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation. Rank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class. answer: What are the three steps to feature elimination? question: We combine 13k questions gathered from this pipeline with an additional 3k questions with yes/no answers from the NQ training set to reach a total of 16k questions. answer:
what is the size of BoolQ dataset?
_template_idx: 9 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
Teacher: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Teacher: Now, understand the problem? If you are still confused, see the following example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Reason: This is a good question, and it is answerable based on the context. Now, solve this instance: Hate memes were retrieved from Google Images with a downloading tool. We used the following queries to collect a total of 1,695 hate memes: racist meme (643 memes), jew meme (551 memes), and muslim meme (501 Memes). Non-hate memes were obtained from the Reddit Memes Dataset . Student:
What is the source of memes?
_template_idx: 2 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Ex Input: Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Ex Output: How was the corpus obtained? Ex Input: We conduct our annotation study on Amazon Mechanical Turk, presenting Turkers with Human Intelligence Tasks (henceforth, HITs) consisting of a single conversation between a customer and an agent. In each HIT, we present Turkers with a definition of each dialogue act, as well as a sample annotated dialogue for reference. For each turn in the conversation, we allow Turkers to select as many labels from our taxonomy as required to fully characterize the intent of the turn. Additionally, annotators are asked three questions at the end of each conversation HIT, to which they could respond that they agreed, disagreed, or could not tell: Ex Output: How are customer satisfaction, customer frustration and overall problem resolution data collected? Ex Input: It is worth mentioning that the collected texts contain a large quantity of errors of several types: orthographic, syntactic, code-switched words (i.e. words not in the required language), jokes, etc. Hence, the original written sentences have been processed in order to produce “cleaner” versions, in order to make the data usable for some research purposes (e.g. to train language models, to extract features for proficiency assessment, ...). Ex Output:
Are any of the utterances ungrammatical?
_template_idx: 1 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Input: Consider Input: Our novelties include: Using self-play learning for the neural response ranker (described in detail below). Optimizing neural models for specific metrics (e.g. diversity, coherence) in our ensemble setup. Training a separate dialog model for each user, personalizing our socialbot and making it more consistent. Using a response classification predictor and a response classifier to predict and control aspects of responses such as sentiment, topic, offensiveness, diversity etc. Using a model predictor to predict the best responding model, before the response candidates are generated, reducing computational expenses. Using our entropy-based filtering technique to filter all dialog datasets, obtaining higher quality training data BIBREF3. Building big, pre-trained, hierarchical BERT and GPT dialog models BIBREF6, BIBREF7, BIBREF8. Constantly monitoring the user input through our automatic metrics, ensuring that the user stays engaged. Output: What is novel in author's approach? Input: Consider Input: Yet, it is possible that the precise interpretability scores that are measured here are biased by the dataset used. However, it should be noted that the change in the interpretability scores for different word coverages might be effected by non-ideal subsampling of category words. Although our word sampling method, based on words' distances to category centers, is expected to generate categories that are represented better compared to random sampling of category words, category representations might be suboptimal compared to human designed categories. Output: What are the weaknesses of their proposed interpretability quantification method? Input: Consider Input: To compensate for the exclusion of slot-specific parameters, we incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN).
Output: What network architecture do they use for SIM?
_template_idx: 2 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Q: Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators. A: What are the supported natural commands? **** Q: In the pyramid scoring, the content units in the gold human written summaries are organized in a pyramid. In this pyramid, the content units are organized in tiers and higher tiers of the pyramid indicate higher importance. A: What manual Pyramid scores are used? **** Q: Comparing Eq.DISPLAY_FORM14 with Eq.DISPLAY_FORM22, we can see that Eq.DISPLAY_FORM14 is actually a soft form of $F1$, using a continuous $p$ rather than the binary $\mathbb {I}( p_{i1}>0.5)$. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance. To address this issue, we propose to multiply the soft probability $p$ with a decaying factor $(1-p)$, changing Eq.DISPLAY_FORM22 to the following form: One can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified. A close look at Eq.DISPLAY_FORM14 reveals that it actually mimics the idea of focal loss (FL for short) BIBREF16 for object detection in vision. Focal loss was proposed for one-stage object detector to handle foreground-background tradeoff encountered during training. It down-weights the loss assigned to well-classified examples by adding a $(1-p)^{\beta }$ factor, leading the final loss to be $(1-p)^{\beta }\log p$. A:
How are weights dynamically adjusted? ****
_template_idx: 4 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
Teacher: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Teacher: Now, understand the problem? If you are still confused, see the following example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Reason: This is a good question, and it is answerable based on the context. Now, solve this instance: FarsNet [20] [21] is the first WordNet for Persian, developed by the NLP Lab at Shahid Beheshti University and it follows the same structure as the original WordNet. Student:
What is the WordNet counterpart for Persian?
_template_idx: 2 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
Teacher: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Teacher: Now, understand the problem? If you are still confused, see the following example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Reason: This is a good question, and it is answerable based on the context. Now, solve this instance: Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below. Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. Perplexity is commonly used to evaluate the quality of a language model. Parameterized Metrics learn a parameterized model to evaluate generated text. Student:
What previous automated evalution approaches authors mention?
_template_idx: 2 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example is below. Q: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. A: how was the dataset built? Rationale: This is a good question, and it is answerable based on the context. Q: Our experiments on a goal-oriented dialog corpus, the personalized bAbI dialog dataset, show that leveraging personal information can significantly improve the performance of dialog systems. A:
What datasets did they use?
_template_idx: 9 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Let me give you an example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. The answer to this example can be: how was the dataset built? Here is why: This is a good question, and it is answerable based on the context. OK. solve this: Task 1: Quora Duplicate Question Pair Detection Task 2: Ranking questions in Bing's People Also Ask Answer:
On which tasks do they test their conflict method?
_template_idx: 8 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Ex Input: We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary. Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol. Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag. Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 . Ex Output: What are the three different sources of data? Ex Input: For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT. For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT. On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. Both improvements result in a 2.36 point increase in the F1 score with respect to the best SEM architecture (BiLSTM-CRF), giving CamemBERT the state of the art for NER on the FTB. Ex Output: How much better was results of CamemBERT than previous results on these tasks? Ex Input: Their results improved the baseline provided by BIBREF8 . Another influential work was BIBREF12 , an unsupervised algorithm called Double Propagation which roughly consists of incrementally augmenting a set of seeds via dependency parsing. n spite of this, Table TABREF20 shows that our system outperforms the best previous approaches across the five languages. In some cases, such as Turkish and Russian, the best previous scores were the baselines provided by the ABSA organizers, but for Dutch, French and Spanish our system is significantly better than current state-of-the-art. In particular, and despite using the same system for every language, we improve over GTI's submission, which implemented a CRF system with linguistic features specific to Spanish BIBREF28 . Ex Output:
What was the baseline?
_template_idx: 1 | _task_source: NIv2 | _task_name: task461_qasper_question_generation | _template_type: fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example Input: We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users. Example Output: What is the source of the user interaction data? Example Input: We used the relation classification dataset of the SemEval 2010 task 8 BIBREF8 . It consists of sentences which have been manually labeled with 19 relations (9 directed relations and one artificial class Other). 8,000 sentences have been distributed as training set and 2,717 sentences served as test set. Example Output: Which dataset do they train their models on? Example Input: Our experiments on a goal-oriented dialog corpus, the personalized bAbI dialog dataset, show that leveraging personal information can significantly improve the performance of dialog systems. Example Output:
What datasets did they use?
3
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Input: Consider Input: The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features. The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1. Output: What baseline and classification systems are used in experiments? Input: Consider Input: We conducted our experiments on the CMU ARCTIC database BIBREF33, which contains parallel recordings of professional US English speakers sampled at 16 kHz. One female (slt) was chosen as the target speaker and one male (bdl) and one female (clb) were chosen as sources. We selected 100 utterances each for validation and evaluation, and the other 932 utterances were used as training data. For the TTS corpus, we chose a US female English speaker (judy bieber) from the M-AILABS speech dataset BIBREF34 to train a single-speaker Transformer-TTS model. With the sampling rate also at 16 kHz, the training set contained 15,200 utterances, which were roughly 32 hours long. Output: What datasets are experimented with? Input: Consider Input: Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below. Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. Perplexity is commonly used to evaluate the quality of a language model. Parameterized Metrics learn a parameterized model to evaluate generated text.
Output: What previous automated evalution approaches authors mention?
2
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Example solution: how was the dataset built? Example explanation: This is a good question, and it is answerable based on the context. Problem: Our semi-supervised approach is quite straightforward: first a model is trained on the training set and then this model is used to predict the labels of the silver data. This silver data is then simply added to our training set, after which the model is retrained. However, an extra step is applied to ensure that the silver data is of reasonable quality.
Solution: What semi-supervised learning is applied?
5
NIv2
task461_qasper_question_generation
fs_opt
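The semi-supervised procedure quoted in the row above (train on the gold set, predict labels for the "silver" data, add the silver data to training, retrain, with an extra quality check) is simple enough to sketch. The code below is a minimal illustration and not the authors' system: the choice of classifier, the 0.9 confidence threshold used as the quality filter, and all variable names are my own assumptions.

# Minimal self-training sketch: train on gold data, pseudo-label unlabeled
# "silver" data, keep only confident predictions, then retrain.
# The classifier choice and the 0.9 threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_gold, y_gold, X_silver, threshold=0.9):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_gold, y_gold)                      # step 1: train on gold data

    proba = model.predict_proba(X_silver)          # step 2: label silver data
    confident = proba.max(axis=1) >= threshold     # quality filter on silver labels
    idx = proba.argmax(axis=1)[confident]

    X_aug = np.vstack([X_gold, X_silver[confident]])           # step 3: merge
    y_aug = np.concatenate([y_gold, model.classes_[idx]])

    model.fit(X_aug, y_aug)                        # step 4: retrain on gold + silver
    return model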
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Let me give you an example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. The answer to this example can be: how was the dataset built? Here is why: This is a good question, and it is answerable based on the context. OK. solve this: We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. These results underline the feasibility of assessing document quality via visual features, and the complementarity of visual and textual document representations for quality assessment. Answer:
Which is more useful, visual or textual features?
8
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Q: The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. A: Do they experiment with the dataset? **** Q: For the datasets, we use Karpathy and Fei-Fei's split for MS-COCO dataset BIBREF10 . A: What dataset/corpus is this work evaluated over? **** Q: Table 1 shows the results of all models on WikiLarge dataset. We can see that our method (NMT+synthetic) can obtain higher BLEU, lower FKGL and high SARI compared with other models, except Dress on FKGL and SBMT-SARI on SARI. It verified that including synthetic data during training is very effective, and yields an improvement over our baseline NMF by 2.11 BLEU, 1.7 FKGL and 1.07 SARI. Results on WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) than NMT from adding simplified training data with synthetic ordinary sentences. A:
by how much did their model improve? ****
4
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Let me give you an example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. The answer to this example can be: how was the dataset built? Here is why: This is a good question, and it is answerable based on the context. OK. solve this: To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }. INLINEFORM0 tag indicates that the current word appears before the pun in the given context. INLINEFORM0 tag highlights the current word is a pun. INLINEFORM0 tag indicates that the current word appears after the pun. Answer:
What is the tagging scheme employed?
8
NIv2
task461_qasper_question_generation
fs_opt
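The three-tag scheme quoted in the row above (one tag for words before the pun, one for the pun word, one for words after it) can be written as a tiny labelling function. The paper's actual tag symbols are elided in the context ("INLINEFORM"), so the "B", "P", "A" names below are placeholders of my own, and the example sentence is invented.

# Toy tagger for the before/pun/after scheme described above.
# "B", "P", "A" are placeholder tag names; the paper's own symbols are elided.
def tag_sentence(tokens, pun_index):
    tags = []
    for i, _ in enumerate(tokens):
        if i < pun_index:
            tags.append("B")    # word appears before the pun
        elif i == pun_index:
            tags.append("P")    # the pun word itself
        else:
            tags.append("A")    # word appears after the pun
    return tags

# Example: tag_sentence(["I", "used", "to", "be", "a", "banker", "but",
#                        "I", "lost", "interest"], pun_index=9)
# -> ['B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'P']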
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Ex Input: More specifically, we observe the impact of: (i) pre-trained word embeddings BIBREF11, BIBREF12, recurrent BIBREF13 and transformer-based sentence encoders BIBREF14 as question representation strategies; (ii) distinct convolutional neural networks used for visual feature extraction BIBREF15, BIBREF16, BIBREF17; and (iii) standard fusion strategies, as well as the importance of two main attention mechanisms BIBREF18, BIBREF19. Ex Output: What type of experiments are performed? Ex Input: We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. Ex Output: What are the baseline models? Ex Input: We construct three datasets based on IMDB reviews and Yelp reviews. The IMDB dataset is binarised and split into a training and test set, each with 25K reviews (2K reviews from the training set are reserved for development). For Yelp, we binarise the ratings, and create 2 datasets, where we keep only reviews with $\le $ 50 tokens (yelp50) and $\le $200 tokens (yelp200). Ex Output:
What datasets do they use?
1
NIv2
task461_qasper_question_generation
fs_opt
Teacher: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Teacher: Now, understand the problem? If you are still confused, see the following example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Reason: This is a good question, and it is answerable based on the context. Now, solve this instance: We adopt the mixture of two multilingual translation corpus as our training data: MultiUN BIBREF20 and OpenSubtitles BIBREF21. MultiUN consists of 463,406 official documents in six languages, containing around 300M words for each language. OpenSubtitles is a corpus consisting of movie and TV subtitles, which contains 2.6B sentences over 60 languages. We select four shared languages of the two corpora: English, Spanish, Russian and Chinese. Statistics of the training corpus are shown in Table TABREF14. Student:
What multilingual parallel data is used for training proposed model?
2
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example Input: Bag-of-words feature vectors were used to train a multinomial logistic regression model. Let INLINEFORM0 be the true label, where INLINEFORM1 is the total number of labels and INLINEFORM2 is the concatenation of the weight vectors INLINEFORM3 associated with the INLINEFORM4 th party then DISPLAYFORM0 Example Output: What model are the text features used in to provide predictions? Example Input: The third person plural pronoun `they' has no gender in English and most other languages (unlike the third person singular pronouns `he', `she', and `it'). If these sentences are translated into French, then `they' in the first sentence should be translated `elles', as referring to Jane and Susan, and `they' in the second sentence should be translated `ils', as referring to Fred and George. A couple of examples: The word `sie' in German serves as both the formal second person prounoun (always capitalized), the third person feminine singular, and the third person plural. Example Output: What language do they explore? Example Input: There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters. Example Output:
Do the authors mention any confounds to their study?

3
NIv2
task461_qasper_question_generation
fs_opt
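One of the contexts in the row above describes training a multinomial logistic regression on bag-of-words feature vectors to predict a party label. A minimal scikit-learn sketch of that setup follows; the toy documents, labels, and pipeline details are invented for illustration and are not the paper's data or code.

# Bag-of-words features + multinomial logistic regression, as described above.
# Documents and party labels here are invented toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["tax cuts for working families",
        "strong national defense and security",
        "invest in public schools and teachers"]
labels = ["party_a", "party_b", "party_c"]

clf = make_pipeline(
    CountVectorizer(),                  # bag-of-words feature vectors
    LogisticRegression(max_iter=1000),  # softmax over the party labels
)
clf.fit(docs, labels)
print(clf.predict(["funding for schools"]))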
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example is below. Q: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. A: how was the dataset built? Rationale: This is a good question, and it is answerable based on the context. Q: Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term “fast” pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term “fast” refers to. A:
How does car-speak pertain to a car's physical attributes?
9
NIv2
task461_qasper_question_generation
fs_opt
instruction: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. question: It is worth mentioning that the collected texts contain a large quantity of errors of several types: orthographic, syntactic, code-switched words (i.e. words not in the required language), jokes, etc. Hence, the original written sentences have been processed in order to produce “cleaner” versions, in order to make the data usable for some research purposes (e.g. to train language models, to extract features for proficiency assessment, ...). answer: Are any of the utterances ungrammatical? question: Our system was ranked second in the competition only 0.3 BLEU points behind the winning team UPC-TALP. The relative low BLEU and high TER scores obtained by all teams are due to out-of-domain data provided in the competition which made the task equally challenging to all participants. The fact that out-of-domain data was provided by the organizers resulted in a challenging but interesting scenario for all participants. answer: Does the use of out-of-domain data improve the performance of the method? question: The DDI corpus contains thousands of XML files, each of which are constructed by several records. For a sentence containing INLINEFORM0 drugs, there are INLINEFORM1 drug pairs. answer:
How big is the evaluated dataset?
9
NIv2
task461_qasper_question_generation
fs_opt
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. PROBLEM: In this work, we analyze a set of tweets related to a specific classical music radio channel, BBC Radio 3, interested in detecting two types of musical named entities, Contributor and Musical Work. SOLUTION: What language is the Twitter content in? PROBLEM: To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users, 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task. SOLUTION: What are the in-house data employed? PROBLEM: For every input data point and possible target class, LRP delivers one scalar relevance value per input variable, hereby indicating whether the corresponding part of the input is contributing for or against a specific classifier decision, or if this input variable is rather uninvolved and irrelevant to the classification task at all. SOLUTION:
Does the LRP method work in settings that contextualize the words with respect to one another?
8
NIv2
task461_qasper_question_generation
fs_opt
Given the task definition, example input & output, solve the new input case. In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? This is a good question, and it is answerable based on the context. New input case for you: For every input data point and possible target class, LRP delivers one scalar relevance value per input variable, hereby indicating whether the corresponding part of the input is contributing for or against a specific classifier decision, or if this input variable is rather uninvolved and irrelevant to the classification task at all. Output:
Does the LRP method work in settings that contextualize the words with respect to one another?
1
NIv2
task461_qasper_question_generation
fs_opt
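The LRP description quoted in the two rows above says only what the method outputs (one scalar relevance value per input variable, for or against a given class), not how those relevances are computed. For a single linear layer, the commonly used epsilon rule redistributes each output neuron's relevance to its inputs in proportion to their contributions; the sketch below is a generic illustration of that rule, not the specific system evaluated in the quoted paper.

# Epsilon-rule LRP for one linear layer: redistribute the relevance R of each
# output neuron to the inputs in proportion to their contribution a_j * w_jk.
# Generic illustration only; bias relevance is simply dropped here.
import numpy as np

def lrp_linear(a, W, b, R_out, eps=1e-6):
    """a: (d_in,) activations, W: (d_in, d_out), b: (d_out,), R_out: (d_out,)."""
    z = a @ W + b                           # pre-activations, shape (d_out,)
    s = R_out / (z + eps * np.sign(z))      # stabilised relevance ratios
    return a * (W @ s)                      # input relevances, shape (d_in,)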
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example is below. Q: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. A: how was the dataset built? Rationale: This is a good question, and it is answerable based on the context. Q: We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques: Propaganda Techniques ::: 1. Loaded language. Using words/phrases with strong emotional implications (positive or negative) to influence an audience BIBREF11. Propaganda Techniques ::: 2. Name calling or labeling. Labeling the object of the propaganda as something the target audience fears, hates, finds undesirable or otherwise loves or praises BIBREF12. Propaganda Techniques ::: 3. Repetition. Repeating the same message over and over again, so that the audience will eventually accept it BIBREF13, BIBREF12. Propaganda Techniques ::: 4. Exaggeration or minimization. Either representing something in an excessive manner: making things larger, better, worse, or making something seem less important or smaller than it actually is BIBREF14, e.g., saying that an insult was just a joke. Propaganda Techniques ::: 5. Doubt. Questioning the credibility of someone or something. Propaganda Techniques ::: 6. Appeal to fear/prejudice. Seeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative, possibly based on preconceived judgments. Propaganda Techniques ::: 7. Flag-waving. Playing on strong national feeling (or with respect to a group, e.g., race, gender, political preference) to justify or promote an action or idea BIBREF15. Propaganda Techniques ::: 8. Causal oversimplification. Assuming one cause when there are multiple causes behind an issue. We include scapegoating as well: the transfer of the blame to one person or group of people without investigating the complexities of an issue. Propaganda Techniques ::: 9. Slogans. A brief and striking phrase that may include labeling and stereotyping. Slogans tend to act as emotional appeals BIBREF16. Propaganda Techniques ::: 10. Appeal to authority. 
Stating that a claim is true simply because a valid authority/expert on the issue supports it, without any other supporting evidence BIBREF17. We include the special case where the reference is not an authority/expert, although it is referred to as testimonial in the literature BIBREF14. Propaganda Techniques ::: 11. Black-and-white fallacy, dictatorship. Presenting two alternative options as the only possibilities, when in fact more possibilities exist BIBREF13. As an extreme case, telling the audience exactly what actions to take, eliminating any other possible choice (dictatorship). Propaganda Techniques ::: 12. Thought-terminating cliché. Words or phrases that discourage critical thought and meaningful discussion about a given topic. They are typically short and generic sentences that offer seemingly simple answers to complex questions or that distract attention away from other lines of thought BIBREF18. Propaganda Techniques ::: 13. Whataboutism. Discredit an opponent's position by charging them with hypocrisy without directly disproving their argument BIBREF19. Propaganda Techniques ::: 14. Reductio ad Hitlerum. Persuading an audience to disapprove an action or idea by suggesting that the idea is popular with groups hated in contempt by the target audience. It can refer to any person or concept with a negative connotation BIBREF20. Propaganda Techniques ::: 15. Red herring. Introducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made BIBREF11. Those subjected to a red herring argument are led away from the issue that had been the focus of the discussion and urged to follow an observation or claim that may be associated with the original claim, but is not highly relevant to the issue in dispute BIBREF20. Propaganda Techniques ::: 16. Bandwagon. Attempting to persuade the target audience to join in and take the course of action because “everyone else is taking the same action” BIBREF15. Propaganda Techniques ::: 17. Obfuscation, intentional vagueness, confusion. Using deliberately unclear words, to let the audience have its own interpretation BIBREF21, BIBREF11. For instance, when an unclear phrase with multiple possible meanings is used within the argument and, therefore, it does not really support the conclusion. Propaganda Techniques ::: 18. Straw man. When an opponent's proposition is substituted with a similar one which is then refuted in place of the original BIBREF22. A:
What are the 18 propaganda techniques?
9
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example is below. Q: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. A: how was the dataset built? Rationale: This is a good question, and it is answerable based on the context. Q: Embedding: We developed different variations of our models with a simple lookup table embeddings learned from scratch and using high-performance contextual embeddings, which are ELMo BIBREF11, BERT BIBREF16 and ClinicalBERT BIBREF13 (trained and provided by the authors). A:
What embeddings are used?
9
NIv2
task461_qasper_question_generation
fs_opt
instruction: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. question: Then, it selects a subset of a 1,700-hour ( INLINEFORM2 1.1M instances) unlabeled dataset. answer: How much data samples do they start with before obtaining the initial model labels? question: As indicated in its name, Recurrent Deep Stacking Network stacks and concatenates the outputs of previous frames into the input features of the current frame. answer: What does recurrent deep stacking network do? question: Embedding: We developed different variations of our models with a simple lookup table embeddings learned from scratch and using high-performance contextual embeddings, which are ELMo BIBREF11, BERT BIBREF16 and ClinicalBERT BIBREF13 (trained and provided by the authors). answer:
What embeddings are used?
9
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example is below. Q: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. A: how was the dataset built? Rationale: This is a good question, and it is answerable based on the context. Q: The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. A:
What insights into the relationship between demographics and mental health are provided?
9
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Example solution: how was the dataset built? Example explanation: This is a good question, and it is answerable based on the context. Problem: The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words.
Solution: What is the size of this dataset?
5
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Ex Input: The baseline system for the SLC task is a very simple logistic regression classifier with default parameters, where we represent the input instances with a single feature: the length of the sentence. The baseline for the FLC task generates spans and selects one of the 18 techniques randomly. Ex Output: What was the baseline for this task? Ex Input: We evaluate performance of our models on the SQUAD BIBREF16 dataset (denoted $\mathcal {S}$ ). We use the same split as that of BIBREF4 , where a random subset of 70,484 instances from $\mathcal {S}\ $ are used for training ( ${\mathcal {S}}^{tr}$ ), 10,570 instances for validation ( ${\mathcal {S}}^{val}$ ), and 11,877 instances for testing ( ${\mathcal {S}}^{te}$ ). Ex Output: Which datasets are used to train this model? Ex Input: The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words. Ex Output:
What is the size of this dataset?
1
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution is here: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Now, solve this: For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. Solution:
After how many hops does accuracy decrease?
6
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. [Q]: Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA) To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Dataset Quality Analysis ::: Dataset Assessment and Comparison We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. Dataset Quality Analysis ::: Agreement with PropBank Data It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. [A]: How was quality measured? [Q]: We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. The expected concept-sets in our task are supposed to be likely co-occur in natural, daily-life scenes . The concepts in images/videos captions, which usually describe scenes in our daily life, thus possess the desired property. We therefore collect a large amount of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total. [A]: Where do the concept sets come from? [Q]: We performed the annotation with freely available tools for the Portuguese language. [A]:
Are the annotations automatic or manually created?
5
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example is below. Q: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. A: how was the dataset built? Rationale: This is a good question, and it is answerable based on the context. Q: For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with the hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two model involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 . A:
Which models were compared?
9
NIv2
task461_qasper_question_generation
fs_opt
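The BiLSTM-max sentence encoder mentioned in the row above (a single bidirectional LSTM layer with 600 hidden units per direction, followed by max pooling over time) is easy to sketch in PyTorch. The embedding dimension and the omission of padding masks below are simplifying assumptions of mine, not details from the quoted papers.

# BiLSTM with max pooling over time (BiLSTM-max), as in the quoted context:
# one bidirectional layer, 600 hidden units per direction.
# Embedding size and the lack of padding masks are simplifying assumptions.
import torch
import torch.nn as nn

class BiLSTMMax(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=600):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=1,
                            batch_first=True, bidirectional=True)

    def forward(self, token_ids):              # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))  # (batch, seq_len, 2*hidden)
        return h.max(dim=1).values             # max over time -> (batch, 2*hidden)

# enc = BiLSTMMax(vocab_size=10000)
# sent_vec = enc(torch.randint(0, 10000, (4, 12)))   # -> shape (4, 1200)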
Part 1. Definition In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Part 2. Example Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Answer: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Part 3. Exercise Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 . Answer:
What is a commonly used evaluation metric for language models?
7
NIv2
task461_qasper_question_generation
fs_opt
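Perplexity, the metric named in the row above, is the exponential of the average negative log-likelihood the model assigns to held-out tokens. A small worked sketch (the token probabilities are invented purely for illustration):

# Perplexity = exp(mean negative log-likelihood per token).
# The token probabilities below are invented purely for illustration.
import math

def perplexity(token_probs):
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

print(perplexity([0.1, 0.25, 0.05, 0.2]))   # higher probabilities -> lower perplexity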
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Example solution: how was the dataset built? Example explanation: This is a good question, and it is answerable based on the context. Problem: EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues.
Solution: what datasets were used?
5
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. [Q]: We use two datasets in this work: the training is done on the Fisher Corpus English Part 1 (LDC2004S13) BIBREF15 and testing on the Suicide Risk Assessment corpus BIBREF16 , along with Fisher. [A]: Which dataset do they use to learn embeddings? [Q]: Our system learns Perceptron models BIBREF37 using the Machine Learning machinery provided by the Apache OpenNLP project with our own customized (local and clustering) features. The local features constitute our baseline system on top of which the clustering features are added. [A]: what are the baselines? [Q]: A quantitative measure is required to reliably evaluate the achieved improvement. One of the methods proposed to measure the interpretability is the word intrusion test BIBREF41 . But, this method is expensive to apply since it requires evaluations from multiple human evaluators for each embedding dimension. In this study, we use a semantic category-based approach based on the method and category dataset (SEMCAT) introduced in BIBREF27 to quantify interpretability. Specifically, we apply a modified version of the approach presented in BIBREF40 in order to consider possible sub-groupings within the categories in SEMCAT. [A]:
What experiments do they use to quantify the extent of interpretability?
5
NIv2
task461_qasper_question_generation
fs_opt
You will be given a definition of a task first, then an example. Follow the example to solve a new instance of the task. In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Why? This is a good question, and it is answerable based on the context. New input: We used fastText and SVM BIBREF16 for preliminary experiments. Solution:
What classification models were used?
0
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. [EX Q]: We conducted our experiments on the CMU ARCTIC database BIBREF33, which contains parallel recordings of professional US English speakers sampled at 16 kHz. One female (slt) was chosen as the target speaker and one male (bdl) and one female (clb) were chosen as sources. We selected 100 utterances each for validation and evaluation, and the other 932 utterances were used as training data. For the TTS corpus, we chose a US female English speaker (judy bieber) from the M-AILABS speech dataset BIBREF34 to train a single-speaker Transformer-TTS model. With the sampling rate also at 16 kHz, the training set contained 15,200 utterances, which were roughly 32 hours long. [EX A]: What datasets are experimented with? [EX Q]: Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 . Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification. Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 . Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. For Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). [EX A]: Are results reported only on English datasets? [EX Q]: 1) Typical Methods. Three typical knowledge graph embedding methods includes TransE, TransR and TransH are selected as baselines. 2) Path-based Methods. We compare our method with two typical path-based model include PTransE, and ALL-PATHS BIBREF18. 3) Attribute-incorporated Methods. Several state-of-art attribute-incorporated methods including R-GCN BIBREF24 and KR-EAR BIBREF26 are used to compare with our methods on three real datasets. [EX A]:
What seven state-of-the-art methods are used for comparison?
6
NIv2
task461_qasper_question_generation
fs_opt
Part 1. Definition In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Part 2. Example Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Answer: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Part 3. Exercise Table TABREF14 shows the effectiveness of our model (multi-task transformer) over the baseline transformer BIBREF8 . Answer:
what are the baselines?
7
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution is here: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Now, solve this: Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 . Solution:
Is the model presented in the paper state of the art?
6
NIv2
task461_qasper_question_generation
fs_opt
You will be given a definition of a task first, then an example. Follow the example to solve a new instance of the task. In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Why? This is a good question, and it is answerable based on the context. New input: To reduce variance and boost accuracy, we ensemble 10 CNNs and 10 LSTMs together through soft voting. The models ensembled have different random weight initializations, different number of epochs (from 4 to 20 in total), different set of filter sizes (either INLINEFORM0 , INLINEFORM1 or INLINEFORM2 ) and different embedding pre-training algorithms (either Word2vec or FastText). Solution:
How many CNNs and LSTMs were ensembled?
0
NIv2
task461_qasper_question_generation
fs_opt
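One of the contexts quoted in the row above describes ensembling 10 CNNs and 10 LSTMs through soft voting. As a hedged illustration only (the probability arrays below are invented and stand in for real model outputs), soft voting averages each model's class-probability distribution and takes the argmax:

```python
import numpy as np

# Hypothetical class probabilities from three of the ensembled models
# (rows = examples, columns = classes); real outputs would come from the CNNs/LSTMs.
probs_per_model = [
    np.array([[0.7, 0.3], [0.4, 0.6]]),
    np.array([[0.6, 0.4], [0.5, 0.5]]),
    np.array([[0.8, 0.2], [0.3, 0.7]]),
]

avg_probs = np.mean(probs_per_model, axis=0)  # soft vote: average the distributions
predictions = avg_probs.argmax(axis=1)        # final ensemble labels
print(avg_probs)
print(predictions)  # [0 1] for this toy input
```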
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Ex Input: As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 . Ex Output: what sentiment sources do they compare with? Ex Input: Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting. Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15 have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small. Ex Output: Why is big data not appropriate for this task? Ex Input: We tested the system on two datasets, different in size and complexity of the addressed language. Experimental Evaluation ::: Datasets ::: NLU-Benchmark dataset The first (publicly available) dataset, NLU-Benchmark (NLU-BM), contains $25,716$ utterances annotated with targeted Scenario, Action, and involved Entities. Experimental Evaluation ::: Datasets ::: ROMULUS dataset The second dataset, ROMULUS, is composed of $1,431$ sentences, for each of which dialogue acts, semantic frames, and corresponding frame elements are provided. This dataset is being developed for modelling user utterances to open-domain conversational systems for robotic platforms that are expected to handle different interaction situations/patterns – e.g., chit-chat, command interpretation. Ex Output:
Which publicly available NLU dataset is used?
1
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example Input: Hausa language is the second most spoken indigenous language in Africa with over 40 million native speakers BIBREF20, and one of the three major languages in Nigeria, along with Igbo and Yorùbá. The language is native to the Northern part of Nigeria and the southern part of Niger, and it is widely spoken in West and Central Africa as a trade language in eight other countries: Benin, Ghana, Cameroon, Togo, Côte d'Ivoire, Chad, Burkina Faso, and Sudan. Yorùbá language is the third most spoken indigenous language in Africa after Swahilli and Hausa with over 35 million native speakers BIBREF20. The language is native to the South-western part of Nigeria and the Southern part of Benin, and it is also spoken in other countries like Republic of Togo, Ghana, Côte d'Ivoire, Sierra Leone, Cuba and Brazil. Example Output: In which countries are Hausa and Yor\`ub\'a spoken? Example Input: To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e, perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactical changes. We use two approaches described in literature: back-translation and noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system and hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that though very similar to the original test set, result in wrong predictions. We will measure the model robustness against such changes. Example Output: How authors create adversarial test set to measure model robustness? Example Input: Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 . Example Output:
Is the model presented in the paper state of the art?
3
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Ex Input: We aim to find such content in the social media focusing on the tweets. Ex Output: Does the dataset contain content from various social media platforms? Ex Input: If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos. Ex Output: How is the lexicon of trafficking flags expanded? Ex Input: Given the fact that the research on offensive language detection has to a large extent been focused on the English language, we set out to explore the design of models that can successfully be used for both English and Danish. Ex Output:
What is the challenge for other language except English
1
NIv2
task461_qasper_question_generation
fs_opt
Given the task definition, example input & output, solve the new input case. In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? This is a good question, and it is answerable based on the context. New input case for you: Given the fact that the research on offensive language detection has to a large extent been focused on the English language, we set out to explore the design of models that can successfully be used for both English and Danish. Output:
What is the challenge for other language except English
1
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. -------- Question: For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue. Answer: What problems are found with the evaluation scheme? Question: We conducted our experiments on the CMU ARCTIC database BIBREF33, which contains parallel recordings of professional US English speakers sampled at 16 kHz. One female (slt) was chosen as the target speaker and one male (bdl) and one female (clb) were chosen as sources. We selected 100 utterances each for validation and evaluation, and the other 932 utterances were used as training data. For the TTS corpus, we chose a US female English speaker (judy bieber) from the M-AILABS speech dataset BIBREF34 to train a single-speaker Transformer-TTS model. With the sampling rate also at 16 kHz, the training set contained 15,200 utterances, which were roughly 32 hours long. Answer: What datasets are experimented with? Question: To further demonstrate the effectiveness of the additional task-specific BiRNN layers in our architecture, we conducted an ablation study using the CoNLL04 dataset. We trained and evaluated in the same manner described above, using the same hyperparameters, with the following exceptions: We used either (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind. We increased the number of shared BiRNN layers to keep the total number of model parameters consistent with the number of parameters in the baseline model. We average the results for each set of hyperparameter across three trials with random weight initializations. Answer:
What were the variables in the ablation study?
7
NIv2
task461_qasper_question_generation
fs_opt
Detailed Instructions: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. See one example below: Problem: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Problem: Comparing the natural and artificial sources of our parallel data wrt. several linguistic and distributional properties, we observe that (see Fig. FIGREF21 - FIGREF22 ): artificial sources are on average shorter than natural ones: when using BT, cases where the source is shorter than the target are rarer; cases when they have the same length are more frequent. automatic word alignments between artificial sources tend to be more monotonic than when using natural sources, as measured by the average Kendall INLINEFORM0 of source-target alignments BIBREF22 : for French-English the respective numbers are 0.048 (natural) and 0.018 (artificial); for German-English 0.068 and 0.053. The intuition is that properties (i) and (ii) should help translation as compared to natural source, while property (iv) should be detrimental. Solution:
what is their explanation for the effectiveness of back-translation?
4
NIv2
task461_qasper_question_generation
fs_opt
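The back-translation context quoted in the row above measures how monotonic word alignments are via the average Kendall tau of source-target alignments. A minimal sketch for a single toy sentence pair (the alignment below is invented, and this is not the paper's exact averaging procedure):

```python
from scipy.stats import kendalltau

# Hypothetical alignment: source token i is aligned to target position target_positions[i].
# A perfectly monotonic alignment gives tau = 1.0; crossings lower the score.
source_positions = [0, 1, 2, 3, 4]
target_positions = [0, 2, 1, 3, 4]  # one crossing between positions 1 and 2

tau, _ = kendalltau(source_positions, target_positions)
print(round(tau, 3))  # 0.8 for this toy alignment
```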
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. [Q]: The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. [A]: Which dimensionality do they use for their embeddings? [Q]: Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity. [A]: What does the "sensitivity" quantity denote? [Q]: We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions. Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts. [A]:
How do they correlate user backstory queries to user satisfaction?
5
NIv2
task461_qasper_question_generation
fs_opt
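The Gunrock context quoted in the row above reports linear regressions of user engagement on average word count (beta, SE, t, p). A hedged sketch of one such regression on invented numbers (not the paper's data), using scipy:

```python
from scipy.stats import linregress

# Made-up per-user averages standing in for the real interaction logs.
mean_word_counts = [4.2, 5.1, 6.3, 7.0, 8.4, 9.1]
overall_ratings  = [2.5, 3.0, 3.2, 3.8, 4.1, 4.4]

result = linregress(mean_word_counts, overall_ratings)
# result.slope plays the role of the reported beta; result.stderr and
# result.pvalue correspond to the quoted SE and p-value.
print(result.slope, result.stderr, result.pvalue)
```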
instruction: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. question: We evaluate our model in a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command. Each environment contains between three and five objects differentiated by their size (small, large), shape (round, square) and color (red, green, blue, yellow, pink), totalling in 20 different objects. Depending on the generated scenario, combinations of these three features are necessary to distinguish the targets from each other, allowing for tasks of varying complexity. answer: What simulations are performed by the authors to validate their approach? question: TOEFL listening comprehension test is for human English learners whose native language is not English. This paper reports how today's machine can perform with such a test. The listening comprehension task considered here is highly related to Spoken Question Answering (SQA) BIBREF0 , BIBREF1 . answer: What is the new task proposed in this work? question: Comparing the natural and artificial sources of our parallel data wrt. several linguistic and distributional properties, we observe that (see Fig. FIGREF21 - FIGREF22 ): artificial sources are on average shorter than natural ones: when using BT, cases where the source is shorter than the target are rarer; cases when they have the same length are more frequent. automatic word alignments between artificial sources tend to be more monotonic than when using natural sources, as measured by the average Kendall INLINEFORM0 of source-target alignments BIBREF22 : for French-English the respective numbers are 0.048 (natural) and 0.018 (artificial); for German-English 0.068 and 0.053. The intuition is that properties (i) and (ii) should help translation as compared to natural source, while property (iv) should be detrimental. answer:
what is their explanation for the effectiveness of back-translation?
9
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Let me give you an example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. The answer to this example can be: how was the dataset built? Here is why: This is a good question, and it is answerable based on the context. OK. solve this: Adding word alignments in parallel sentences results in small, non significant improvements, even if there is some labeled data available in the source language. This difficulty in showing the usefulness of parallel corpora for SRI may be due to the current assumptions about role alignments, which mean that only a small percentage of roles are aligned. Further analyses reveals that annotating small amounts of data can easily outperform the performance gains obtained by adding large unlabeled dataset as well as adding parallel corpora. Answer:
Overall, does having parallel data improve semantic role induction across multiple languages?
8
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Example solution: how was the dataset built? Example explanation: This is a good question, and it is answerable based on the context. Problem: The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. Solution:
Does API provide ability to connect to models written in some other deep learning framework?
5
NIv2
task461_qasper_question_generation
fs_opt
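The Torch-Struct context in the row above says the library mirrors the distributions API used by TensorFlow and PyTorch. For readers unfamiliar with that API, here is plain PyTorch's Categorical distribution (standard PyTorch, not Torch-Struct itself; the logits are arbitrary):

```python
import torch
from torch.distributions import Categorical

logits = torch.tensor([1.0, 2.0, 0.5])  # arbitrary unnormalized scores
dist = Categorical(logits=logits)

sample = dist.sample()                   # draw one outcome
print(sample, dist.log_prob(sample), dist.entropy())
```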
You will be given a definition of a task first, then an example. Follow the example to solve a new instance of the task. In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Why? This is a good question, and it is answerable based on the context. New input: Recently bhat-EtAl:2017:EACLshort provided a CS dataset for the evaluation of their parsing models which they trained on the Hindi and English Universal Dependency (UD) treebanks. We extend this dataset by annotating 1,448 more sentences. Solution:
How big is the provided treebank?
0
NIv2
task461_qasper_question_generation
fs_opt
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. PROBLEM: To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM) SOLUTION: What models do they propose? PROBLEM: To compare our neural models with the traditional approaches, we experimented with a number of existing models including: Support Vector Machine (SVM), a discriminative max-margin model; Logistic Regression (LR), a discriminative probabilistic model; and Random Forest (RF), an ensemble model of decision trees. SOLUTION: what was their baseline comparison? PROBLEM: Recently bhat-EtAl:2017:EACLshort provided a CS dataset for the evaluation of their parsing models which they trained on the Hindi and English Universal Dependency (UD) treebanks. We extend this dataset by annotating 1,448 more sentences. SOLUTION:
How big is the provided treebank?
8
NIv2
task461_qasper_question_generation
fs_opt
Teacher: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Teacher: Now, understand the problem? If you are still confused, see the following example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Reason: This is a good question, and it is answerable based on the context. Now, solve this instance: Surprisingly, using the best QA model (bert-large-wwm) does not lead to the best correlations with human judgments. On CNN/DM, bert-large-wwm slightly underperforms bert-base and bert-large. Student:
What models are evaluated with QAGS?
2
NIv2
task461_qasper_question_generation
fs_opt
Given the task definition, example input & output, solve the new input case. In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? This is a good question, and it is answerable based on the context. New input case for you: In view of the differences between English and Chinese, we made adaptations for the PDTB-3 scheme such as removing AltLexC and adding Progression into our sense hierarchy. Output:
How are resources adapted to properties of Chinese text?
1
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Ex Input: Fenerbahçe We have decided to consider tweets about popular sports clubs as our domain for stance detection. Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbahçe (namely, Target-2) which are two of the most popular football clubs in Turkey. Then, we have annotated the stance information in the tweets for these targets as Favor or Against. For the purposes of the current study, we have not annotated any tweets with the Neither class. Ex Output: How were the tweets annotated? Ex Input: Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. Ex Output: Is there any evidence that encoders with latent structures work well on other tasks? Ex Input: Our model encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model to predict the emotion class. Ex Output:
How do they combine audio and text sequences in their RNN?
1
NIv2
task461_qasper_question_generation
fs_opt
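The emotion-recognition context quoted in the row above combines audio and text sequences with dual RNNs followed by a feed-forward layer. A minimal PyTorch sketch of that general shape (the dimensions, GRU choice, and class count are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class DualRNNEmotion(nn.Module):
    def __init__(self, audio_dim=40, text_dim=100, hidden=64, n_classes=4):
        super().__init__()
        self.audio_rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        self.text_rnn = nn.GRU(text_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)  # feed-forward combiner

    def forward(self, audio, text):
        _, audio_h = self.audio_rnn(audio)                   # final hidden states
        _, text_h = self.text_rnn(text)
        combined = torch.cat([audio_h[-1], text_h[-1]], dim=-1)
        return self.classifier(combined)

model = DualRNNEmotion()
logits = model(torch.randn(2, 50, 40), torch.randn(2, 12, 100))
print(logits.shape)  # torch.Size([2, 4])
```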
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. One example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution is here: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Now, solve this: The used classifiers are Support Vector Machine (SVM), Logistic regression (Log.Reg), Random Forest (RF) and gradient boosting (XGB). Solution:
What classical machine learning algorithms are used?
6
NIv2
task461_qasper_question_generation
fs_opt
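The context in the row above lists SVM, logistic regression, random forest, and gradient boosting (XGB) as the classical classifiers. A runnable sketch of that comparison on synthetic features (sklearn's GradientBoostingClassifier stands in for XGB, and the data is fabricated):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

classifiers = {
    "SVM": SVC(),
    "Log.Reg": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
    "XGB (stand-in)": GradientBoostingClassifier(random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=3)
    print(name, round(scores.mean(), 3))
```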
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. PROBLEM: We perform our experiments using ActivityNet Captions dataset BIBREF2 that is considered as the standard benchmark for dense video captioning task. The dataset contains approximately 20k videos from YouTube and split into 50/25/25 % parts for training, validation, and testing, respectively. SOLUTION: What domain does the dataset fall into? PROBLEM: We use two datasets in this work: the training is done on the Fisher Corpus English Part 1 (LDC2004S13) BIBREF15 and testing on the Suicide Risk Assessment corpus BIBREF16 , along with Fisher. SOLUTION: Which dataset do they use to learn embeddings? PROBLEM: The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. SOLUTION:
Is there any example where geometric property is visible for context similarity between words?
8
NIv2
task461_qasper_question_generation
fs_opt
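The electricity-consumption context in the row above observes that season words cluster in the learned embedding matrices (winter words together, far from summer ones). A toy cosine-similarity check on invented 3-d vectors, only to illustrate what "close" and "far" mean here; the real embeddings are learned from the corpora described in the row:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Entirely made-up vectors standing in for learned word embeddings.
emb = {
    "winter":   np.array([0.9, 0.1, 0.0]),
    "cold":     np.array([0.8, 0.2, 0.1]),
    "heatwave": np.array([0.1, 0.9, 0.2]),
}
print(cosine(emb["winter"], emb["cold"]))      # high: same seasonal cluster
print(cosine(emb["winter"], emb["heatwave"]))  # low: different cluster
```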
Part 1. Definition In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Part 2. Example Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Answer: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Part 3. Exercise We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. Answer:
How large is the dataset?
7
NIv2
task461_qasper_question_generation
fs_opt
Part 1. Definition In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Part 2. Example Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Answer: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Part 3. Exercise Task: given a caption and a paired image (if used), the goal is to label every token in a caption in BIO scheme (B: beginning, I: inside, O: outside) BIBREF27 . Answer:
Can named entities in SnapCaptions be discontigious?
7
NIv2
task461_qasper_question_generation
fs_opt
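The caption-labeling context in the row above tags every token with the BIO scheme (B: beginning, I: inside, O: outside). A small helper that turns entity spans into BIO tags; the caption, span indices, and the PER label are invented for illustration:

```python
def spans_to_bio(tokens, spans):
    """Convert (start, end, label) spans over token indices (end exclusive) to BIO tags."""
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

tokens = ["selfie", "with", "Taylor", "Swift", "today"]
print(spans_to_bio(tokens, [(2, 4, "PER")]))
# ['O', 'O', 'B-PER', 'I-PER', 'O']
```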
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Q: We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. A: What different properties of the posterior distribution are explored in the paper? **** Q: Specifically, Roget's Thesaurus BIBREF38 , BIBREF39 is used to derive the concepts and concept word-groups to be used as the external lexical resource for our proposed method. By using the concept word-groups, we have trained the GloVe algorithm with the proposed modification given in Section SECREF4 on a snapshot of English Wikipedia measuring 8GB in size, with the stop-words filtered out. A: Do they report results only on English data? **** Q: We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). A:
How did they get relations between mentions? ****
4
NIv2
task461_qasper_question_generation
fs_opt
Teacher: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Teacher: Now, understand the problem? If you are still confused, see the following example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Reason: This is a good question, and it is answerable based on the context. Now, solve this instance: To achieve this, a method is proposed that transforms the goal-labels of the used dataset (DSTC2) into labels whose behaviour can be replicated during deployment. Student:
what corpus is used to learn behavior?
2
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Input: Consider Input: The resulting set of 46 documents makes up our base corpus. Note that these documents vary greatly in size, which means the resulting corpus is varied, but not balanced in terms of size (Table TABREF43). Output: How big PIE datasets are obtained from dictionaries? Input: Consider Input: We compare the above model to a similar model, where rather than explicitly represent $K$ features as input, we have $K$ features in the form of a genre embedding, i.e. we learn a genre specific embedding for each of the gothic, scifi, and philosophy genres, as studied in BIBREF8 and BIBREF7. Output: Is this style generator compared to some baseline? Input: Consider Input: To achieve this, a method is proposed that transforms the goal-labels of the used dataset (DSTC2) into labels whose behaviour can be replicated during deployment. Output:
what corpus is used to learn behavior?
2
NIv2
task461_qasper_question_generation
fs_opt
Detailed Instructions: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. See one example below: Problem: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Problem: For training and testing we use the train-validation-test split of WikiTableQuestions BIBREF0 , a dataset containing 22,033 pairs of questions and answers based on 2,108 Wikipedia tables. This dataset is also used by our baselines, BIBREF0 , BIBREF3 . Solution:
Does the dataset they use differ from the one used by Pasupat and Liang, 2015?
4
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Q: In our experiments, we use as input the 2210 tokenized sentences of the Stanford Sentiment Treebank test set BIBREF2 , preprocessing them by lowercasing as was done in BIBREF8 . A: Which datasets are used for evaluation? **** Q: Our official scores (column Ens Test in Table TABREF19 ) have placed us second (EI-Reg, EI-Oc), fourth (V-Reg) and fifth (V-Oc) on the SemEval AIT-2018 leaderboard. A: What were the scores of their system? **** Q: The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments. A:
What are results of comparison between novel method to other approaches for creating compositional generalization benchmarks? ****
4
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. -------- Question: We first run our experiment on BiLSTM, BiLSTM-CNN, BiLSTM-CRF BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. Answer: Which machine learning models do they explore? Question: TOEFL listening comprehension test is for human English learners whose native language is not English. This paper reports how today's machine can perform with such a test. The listening comprehension task considered here is highly related to Spoken Question Answering (SQA) BIBREF0 , BIBREF1 . Answer: What is the new task proposed in this work? Question: The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ). This paper aims to give an overview of linguistic bias and unwarranted inferences resulting from stereotypes and prejudices. I will build on earlier work on linguistic bias in general BIBREF3 , providing examples from the Flickr30K data, and present a taxonomy of unwarranted inferences. Finally, I will discuss several methods to analyze the data in order to detect biases. Answer:
What is the size of the dataset?
7
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. [EX Q]: A baseline model for automatic extraction of anglicisms was created using the annotated corpus we just presented as training material. As mentioned in Section 3, the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of texts will be labeled as anglicism (in a similar way to an NER task). The chosen model was conditional random field model (CRF), which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data BIBREF23, BIBREF24. [EX A]: Does the paper motivate the use of CRF as the baseline model? [EX Q]: Yet, it is possible that the precise interpretability scores that are measured here are biased by the dataset used. However, it should be noted that the change in the interpretability scores for different word coverages might be effected by non-ideal subsampling of category words. Although our word sampling method, based on words' distances to category centers, is expected to generate categories that are represented better compared to random sampling of category words, category representations might be suboptimal compared to human designed categories. [EX A]: What are the weaknesses of their proposed interpretability quantification method? [EX Q]: In this work, the metrics detailed below are proposed and we evaluate their quality through a human evaluation in subsection SECREF32. In addition to the automatic metrics, we proceeded to a human evaluation. We chose to use the data from our SQuAD-based experiments in order to also to measure the effectiveness of the proposed approach to derive Curiosity-driven QG data from a standard, non-conversational, QA dataset. We randomly sampled 50 samples from the test set. Three professional English speakers were asked to evaluate the questions generated by: humans (i.e. the reference questions), and models trained using pre-training (PT) or (RL), and all combinations of those methods. Before submitting the samples for human evaluation, the questions were shuffled. Ratings were collected on a 1-to-5 likert scale, to measure to what extent the generated questions were: answerable by looking at their context; grammatically correct; how much external knowledge is required to answer; relevant to their context; and, semantically sound. The results of the human evaluation are reported in Table TABREF33. [EX A]:
How they evaluate quality of generated output?
6
NIv2
task461_qasper_question_generation
fs_opt
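The anglicism-detection context quoted in the row above frames the task as sequence labeling with a conditional random field. A hedged sketch using the third-party sklearn-crfsuite package (assumed installed; the toy features, sentences, and ENG/O labels are not from the paper):

```python
import sklearn_crfsuite  # assumed dependency: pip install sklearn-crfsuite

def token_features(sentence, i):
    word = sentence[i]
    # Minimal hand-crafted features; a real system would use a much richer set.
    return {"lower": word.lower(), "is_title": word.istitle(), "suffix3": word[-3:]}

# Toy training data: tokens labeled as anglicism (ENG) or not (O).
sentences = [["es", "un", "hobby", "muy", "cool"], ["me", "gusta", "el", "cine"]]
labels = [["O", "O", "ENG", "O", "ENG"], ["O", "O", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```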
Given the task definition, example input & output, solve the new input case. In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Output: how was the dataset built? This is a good question, and it is answerable based on the context. New input case for you: (Information Retrieval). This baseline has been successfully used for related tasks like Question Answering BIBREF39 . We create two versions of this baseline: one with the pool of perspectives INLINEFORM0 and one with the pool of evidences INLINEFORM1 . Output:
Which machine baselines are used?
1
NIv2
task461_qasper_question_generation
fs_opt
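The context in the row above uses an information-retrieval baseline over pools of perspectives and evidences. One common way to realize such a baseline is TF-IDF retrieval; a sketch on made-up claims and perspectives (not the paper's actual pools or retrieval engine):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

pool = [  # hypothetical pool of candidate perspectives
    "Animal testing should be banned.",
    "Standardized tests measure student ability.",
    "School uniforms reduce bullying.",
]
claim = ["Schools should require uniforms."]

vectorizer = TfidfVectorizer()
pool_vecs = vectorizer.fit_transform(pool)
claim_vec = vectorizer.transform(claim)

# TF-IDF rows are L2-normalized by default, so the linear kernel equals cosine similarity.
scores = linear_kernel(claim_vec, pool_vecs).ravel()
print(scores.argsort()[::-1])  # best-matching perspectives first
```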
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. PROBLEM: On the other hand, Go-Explore Seq2Seq shows promising results by solving almost half of the unseen games. Figure FIGREF62 (in Appendix SECREF60) shows that most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game. These results demonstrate both the relative effectiveness of training a Seq2Seq model on Go-Explore trajectories, but they also indicate that additional effort needed for designing reinforcement learning algorithms that effectively generalize to unseen games. SOLUTION: How do the authors show that their learned policy generalize better than existing solutions to unseen games? PROBLEM: We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. SOLUTION: What are the baseline models? PROBLEM: For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . SOLUTION:
Which pre-trained transformer do they use?
8
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Example input: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Example output: how was the dataset built? Example explanation: This is a good question, and it is answerable based on the context. Q: In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets. ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents. MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.) AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press. WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset. WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation. A:
What datasets used for evaluation?
3
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. [EX Q]: In contrast to existing bottlenecks, this work targets three different types of social networks (Formspring: a Q&A forum, Twitter: microblogging, and Wikipedia: collaborative knowledge repository) for three topics of cyberbullying (personal attack, racism, and sexism) without doing any explicit feature engineering by developing deep learning based models along with transfer learning. [EX A]: What cyberbulling topics did they address? [EX Q]: Secondly, a well-known problem of uneven yet unknown distribution of word senses is alleviated by modifying a naïve Bayesian classifier. Thanks to this correction, the classifier is no longer biased towards senses that have more training data. Finally, the aggregated feature values corresponding to target word senses are used to build a naïve Bayesian classifier adjusted to a situation of unknown a priori probabilities. [EX A]: How do they deal with unknown distribution senses? [EX Q]: (Information Retrieval). This baseline has been successfully used for related tasks like Question Answering BIBREF39 . We create two versions of this baseline: one with the pool of perspectives INLINEFORM0 and one with the pool of evidences INLINEFORM1 . [EX A]:
Which machine baselines are used?
6
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Q: E2E NLG challenge Dataset: The training set of the E2E challenge dataset which consists of 42K samples was partitioned into a 10K paired and 32K unpaired datasets by a random process. The Wikipedia Company Dataset: The Wikipedia company dataset presented in Section SECREF18 was filtered to contain only companies having abstracts of at least 7 words and at most 105 words. The dataset was then divided into: a training set (35K), a development set (4.3K) and a test set (4.3K). The training set was also partitioned in order to obtain the paired and unpaired datasets. A: What non-annotated datasets are considered? **** Q: Comparing the natural and artificial sources of our parallel data wrt. several linguistic and distributional properties, we observe that (see Fig. FIGREF21 - FIGREF22 ): artificial sources are on average shorter than natural ones: when using BT, cases where the source is shorter than the target are rarer; cases when they have the same length are more frequent. automatic word alignments between artificial sources tend to be more monotonic than when using natural sources, as measured by the average Kendall INLINEFORM0 of source-target alignments BIBREF22 : for French-English the respective numbers are 0.048 (natural) and 0.018 (artificial); for German-English 0.068 and 0.053. The intuition is that properties (i) and (ii) should help translation as compared to natural source, while property (iv) should be detrimental. A: what is their explanation for the effectiveness of back-translation? **** Q: In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets. ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents. MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.) AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press. WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset. WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation. A:
What datasets used for evaluation? ****
4
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Ex Input: As mentioned in subsec:datasets, all the word-similarity datasets contain pairs of words annotated with similarity or relatedness scores, although this difference is not always explicit. Below we provide some details for each. MEN contains 3000 annotated word pairs with integer scores ranging from 0 to 50. Words correspond to image labels appearing in the ESP-Game and MIRFLICKR-1M image datasets. MTurk287 contains 287 annotated pairs with scores ranging from 1.0 to 5.0. It was created from words appearing in both DBpedia and in news articles from The New York Times. MTurk771 contains 771 annotated pairs with scores ranging from 1.0 to 5.0, with words having synonymy, holonymy or meronymy relationships sampled from WordNet BIBREF56 . RG contains 65 annotated pairs with scores ranging from 0.0 to 4.0 representing “similarity of meaning”. RW contains 2034 pairs of words annotated with similarity scores in a scale from 0 to 10. The words included in this dataset were obtained from Wikipedia based on their frequency, and later filtered depending on their WordNet synsets, including synonymy, hyperonymy, hyponymy, holonymy and meronymy. This dataset was created with the purpose of testing how well models can represent rare and complex words. SimLex999 contains 999 word pairs annotated with similarity scores ranging from 0 to 10. In this case the authors explicitly considered similarity and not relatedness, addressing the shortcomings of datasets that do not, such as MEN and WS353. Words include nouns, adjectives and verbs. SimVerb3500 contains 3500 verb pairs annotated with similarity scores ranging from 0 to 10. Verbs were obtained from the USF free association database BIBREF66 , and VerbNet BIBREF63 . This dataset was created to address the lack of representativity of verbs in SimLex999, and the fact that, at the time of creation, the best performing models had already surpassed inter-annotator agreement in verb similarity evaluation resources. Like SimLex999, this dataset also explicitly considers similarity as opposed to relatedness. WS353 contains 353 word pairs annotated with similarity scores from 0 to 10. WS353R is a subset of WS353 containing 252 word pairs annotated with relatedness scores. This dataset was created by asking humans to classify each WS353 word pair into one of the following classes: synonyms, antonyms, identical, hyperonym-hyponym, hyponym-hyperonym, holonym-meronym, meronym-holonym, and none-of-the-above. These annotations were later used to group the pairs into: similar pairs (synonyms, antonyms, identical, hyperonym-hyponym, and hyponym-hyperonym), related pairs (holonym-meronym, meronym-holonym, and none-of-the-above with a human similarity score greater than 5), and unrelated pairs (classified as none-of-the-above with a similarity score less than or equal to 5). This dataset is composed by the union of related and unrelated pairs. WS353S is another subset of WS353 containing 203 word pairs annotated with similarity scores. This dataset is composed by the union of similar and unrelated pairs, as described previously. Ex Output: Which similarity datasets do they use? Ex Input: We map each relation type $R(x,y)$ to at least one parametrized natural-language question $q_x$ whose answer is $y$ . Ex Output: How is the input triple translated to a slot-filling task? Ex Input: Controls are calculated heuristically. All words found in the control word lists are then removed from the reference sentence. The remaining words, which represent the content, are used as input into the model, along with their POS tags and lemmas. In this way we encourage models to construct a sentence using content and style independently. Ex Output:
How they know what are content words?
1
NIv2
task461_qasper_question_generation
fs_opt
Detailed Instructions: In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. See one example below: Problem: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Solution: how was the dataset built? Explanation: This is a good question, and it is answerable based on the context. Problem: In this research, we briefly discuss the steps of QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers. Solution:
what ml based approaches were compared?
4
NIv2
task461_qasper_question_generation
fs_opt
In this task, you will be presented with a context from an academic paper and you have to write an answerable question based on the context. Your questions can be extractive, abstractive, or yes-no questions. Ex Input: To evaluate the influence of our hypersphere feature for off-the-shelf NER systems, we perform the NE recognition on two standard NER benchmark datasets, CoNLL2003 and ONTONOTES 5.0. Ex Output: Do they evaluate on NER data sets? Ex Input: Inspired by the supervised reordering in conventional SMT, in this paper, we propose a Supervised Attention based NMT (SA-NMT) model. Specifically, similar to conventional SMT, we first run off-the-shelf aligners (GIZA++ BIBREF3 or fast_align BIBREF4 etc.) to obtain the alignment of the bilingual training corpus in advance. Then, treating this alignment result as the supervision of attention, we jointly learn attention and translation, both in supervised manners. Since the conventional aligners delivers higher quality alignment, it is expected that the alignment in the supervised attention NMT will be improved leading to better end-to-end translation performance. Ex Output: Which conventional alignment models do they use as guidance? Ex Input: We adapted BERTNLU from ConvLab-2. We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). Ex Output:
What are the benchmark models?
1
NIv2
task461_qasper_question_generation
fs_opt