Classifier Probes May Just Learn from Linear Context Features

Jenny Kunz and Marco Kuhlmann. Classifier Probes May Just Learn from Linear Context Features. In Proceedings of the 28th International Conference on Computational Linguistics (COLING), pages 5136–5146, Barcelona, Spain (Online), 2020.

Abstract

Classifiers trained on auxiliary probing tasks are a popular tool for analyzing the representations learned by neural sentence encoders such as BERT and ELMo. While many authors are aware of the difficulty of distinguishing between "extracting the linguistic structure encoded in the representations" and "learning the probing task," the validity of probing methods calls for further research. Using a neighboring word identity prediction task, we show that the token embeddings learned by neural sentence encoders contain a significant amount of information about the exact linear context of the token, and we hypothesize that, with such information, learning standard probing tasks may be feasible even without additional linguistic structure. We develop this hypothesis into a framework in which analysis efforts can be scrutinized, and we argue that, with current models and baselines, conclusions that representations contain linguistic structure are not well-founded. Current probing methodology, such as restricting the classifier's expressiveness or using strong baselines, can help to better estimate the complexity of learning the probing task, but it does not provide a foundation for speculations about the nature of the linguistic structure encoded in the learned representations.
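To make the setup concrete, below is a minimal sketch of a "neighboring word identity" probe of the kind the abstract describes: a simple classifier trained on frozen contextual token embeddings to predict the identity of the following token. The choice of `bert-base-uncased`, the toy sentences, the use of the last hidden layer, and the scikit-learn logistic-regression probe are illustrative assumptions, not the paper's exact experimental setup.

```python
# Sketch of a probe that predicts the identity of the next token from a
# token's contextual embedding (assumptions: bert-base-uncased, last layer,
# a linear scikit-learn classifier, and two toy training sentences).
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "the cat sat on the mat",
    "a dog slept under the table",
]

features, labels = [], []
with torch.no_grad():
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
        tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
        # For each non-special token, the probe's input is its contextual
        # embedding and the label is the identity of the *next* token.
        for i in range(1, len(tokens) - 2):  # skip [CLS] and [SEP]
            features.append(hidden[i].numpy())
            labels.append(tokens[i + 1])

# Train a linear probe; high accuracy would indicate that the embeddings
# carry substantial information about the token's exact linear context.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("train accuracy:", probe.score(features, labels))
```

In a realistic version of this experiment, the probe would be evaluated on held-out sentences and compared against strong baselines (e.g., non-contextual embeddings), since the paper's point is precisely that high probe accuracy alone does not license conclusions about encoded linguistic structure.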

Links