Conflict datasets are explicit about the criteria they use, but how do we know that they accurately capture what they claim to? And how can we compare across conflict datasets when each views conflict through a different lens? We propose using new, massive, semi-structured data sources such as Wikipedia as a common point of connection across conflict datasets. We match 11 conflict datasets to the Wikidata pages that best capture their conflicts. This matching allows us to validate the datasets against one another and to identify areas of overlap and possible gaps in their measurement. We discuss what these gaps mean for results based on these datasets.