Negation Scope Resolution: Quantifying Neural Uncertainty In An Imbalanced Setting

EasyChair Preprint no. 1602
7 pages • Date: October 7, 2019

Abstract

Negation scope detection is an interesting task for neural machine learning models because of the sequential dependencies in the input data. A neural classifier that can untangle the negated parts of a sentence from the non-negated parts is useful for downstream tasks. Additionally, classification tasks generally have to contend with quite imbalanced data sets. In natural language, only a subset of sentences contain negations, so negation-annotated data is prone to an imbalance between the many annotated sentences without any negations (positive sentences) and the sentences with negations (negative sentences). This paper examines how this kind of imbalance affects neural model performance by comparing models trained on the full data set with models trained on a subset from which the positive sentences have been filtered out. The results, evaluated on the *SEM 2012 shared task on negation scope detection, show that classifiers are affected by the imbalance differently depending on their architecture, and that including part-of-speech (PoS) features helps to reduce this difference.

Keyphrases: BiLSTM, NLP, negation analysis, negation scope detection, neural network, scope detection, scope match
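The setup the abstract describes can be illustrated with a short sketch. The following is a minimal example (in PyTorch, which the abstract does not specify; it is not the authors' implementation) of the two ingredients mentioned: filtering the positive sentences out of the corpus, and a BiLSTM token tagger that can optionally concatenate PoS embeddings to its word embeddings. The data layout, function names, and dimensions are all illustrative assumptions.

```python
# Sketch only: assumes a corpus is a list of sentences, each a list of
# (token, pos_tag, scope_label) triples, with scope_label = 1 for tokens
# inside a negation scope and 0 otherwise.
import torch
import torch.nn as nn


def drop_positive_sentences(corpus):
    """Filter out 'positive' sentences (no negation), keeping only
    'negative' sentences, i.e. those with at least one in-scope token."""
    return [sent for sent in corpus if any(label for _, _, label in sent)]


class BiLSTMScopeTagger(nn.Module):
    """Per-token binary classifier: in-scope vs. out-of-scope."""

    def __init__(self, vocab_size, pos_size, word_dim=100, pos_dim=20,
                 hidden_dim=128, use_pos=True):
        super().__init__()
        self.use_pos = use_pos
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(pos_size, pos_dim) if use_pos else None
        in_dim = word_dim + (pos_dim if use_pos else 0)
        self.bilstm = nn.LSTM(in_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, 2)

    def forward(self, word_ids, pos_ids=None):
        x = self.word_emb(word_ids)
        if self.use_pos:
            # Concatenate PoS embeddings to the word embeddings.
            x = torch.cat([x, self.pos_emb(pos_ids)], dim=-1)
        h, _ = self.bilstm(x)
        return self.out(h)  # (batch, seq_len, 2) logits per token
```

Training the same tagger on the full corpus and on `drop_positive_sentences(corpus)`, with and without PoS features, mirrors the comparison the abstract outlines.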