How can we employ theoretical insights and practical tools from knowledge representation and reasoning to enhance machine learning, and when is it worthwhile to do so? This paper, based on an invited talk delivered at ECSQARU 2019 on this question, emphasizes the knowledge representation and reasoning side of knowledge-enhanced machine learning through three case studies: the finite model theory of probabilistic languages, the generation of explanations for embeddings, and an "explainable" version of the Winograd Challenge.
- Knowledge representation
- Machine learning