For a project mapping the distribution of bobcats (Lynx rufus) and lynx (Lynx canadensis) in British Columbia, researcher TJ Gooliaff was using images from trail cameras, camera phones and other sources solicited from the public. The two species’ distributions often overlap, and as Gooliaff looked through the images — sometimes blurry, dark or showing only part of the animal — he found the two species were sometimes difficult to distinguish.
“It was not always easy to tell bobcats and lynx apart,” said Gooliaff, who was a master’s student at the University of British Columbia Okanagan at the time. “I wondered if bobcat and lynx experts are reliably able to tell them apart from images, because trail camera and citizen-science studies are becoming more and more common to map the distributions of many species.”
In what started as a side project, Gooliaff and his supervisor, Dr. Karen Hodges, created an online survey to measure agreement among experts in classifying bobcats and lynx from camera images.
They uploaded 300 high-quality images of bobcats and lynx and sent them, in six batches, to 27 bobcat and lynx experts to see how much the experts agreed on species classifications. The survey displayed each image on the screen and prompted the expert to classify the animal as “bobcat,” “lynx” or “unknown.”
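Agreement among multiple raters on a categorical task like this is commonly summarized with Fleiss' kappa, which corrects raw agreement for the agreement expected by chance. The study's own analysis details aren't given here, so the sketch below is a minimal, self-contained illustration of the statistic: each row counts how many experts assigned one image to each category (bobcat, lynx, unknown), and the example counts are invented for demonstration.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for agreement among multiple raters.

    counts: one row per image; counts[i][j] is the number of raters
    who assigned image i to category j (e.g. columns = bobcat,
    lynx, unknown). Every image must have the same number of raters.
    """
    N = len(counts)          # number of images
    n = sum(counts[0])       # raters per image
    k = len(counts[0])       # number of categories

    # Observed agreement: average fraction of rater pairs per image
    # that chose the same category.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N

    # Chance agreement from the overall category proportions.
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical counts for 4 images rated by 5 experts
# (columns: bobcat, lynx, unknown):
ratings = [
    [5, 0, 0],   # unanimous bobcat
    [0, 5, 0],   # unanimous lynx
    [2, 2, 1],   # heavy disagreement
    [1, 3, 1],
]
print(round(fleiss_kappa(ratings), 3))  # -> 0.353
```

A kappa of 1 means perfect agreement and 0 means agreement no better than chance, so middling values like the one above are the quantitative version of "far from perfect."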
“It turns out agreement between the experts was far from perfect,” Gooliaff said. “Even experts have a difficult time telling the two species apart.”
The side project turned into a study published in Ecology and Evolution.
Gooliaff said he was surprised by how many experts disagreed on the species classification as well as the frequent use of the “unknown” option.
Each batch of images tested a specific question. One batch contained a combination of images taken during summer and winter. “There was far higher agreement during the winter,” he said, possibly because when lynx shed their winter fur in summer, their browner coat more closely resembles a bobcat’s.
Another batch contained images with different landscapes, including forest, grassland and developed areas. “The thinking was people would partially base classifications on the background features rather than the animals themselves,” he said.
Gooliaff then tested the consistency of the individual experts. The first and last batches of the survey images, which occurred three months apart, were the same images, just rearranged. He compared what experts called each image between the two batches and found that no expert was completely consistent. “In plenty of cases, experts called an image a bobcat and then three months later called the same image a lynx,” he said.
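The consistency check described above boils down to comparing each expert's labels for the same images across the two batches. A minimal sketch of that comparison, with invented example labels (the function name and data are assumptions, not from the study):

```python
def consistency(first_calls, second_calls):
    """Fraction of images an expert labeled identically in both batches.

    first_calls, second_calls: one expert's labels for the same images
    shown three months apart, in matching order.
    """
    same = sum(a == b for a, b in zip(first_calls, second_calls))
    return same / len(first_calls)

# Hypothetical calls for five repeated images:
batch1 = ["bobcat", "lynx", "lynx", "unknown", "bobcat"]
batch2 = ["bobcat", "lynx", "bobcat", "unknown", "bobcat"]
print(consistency(batch1, batch2))  # -> 0.8
```

A score of 1.0 would mean the expert was completely consistent; the study found no expert reached that.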
The study yields some important lessons, Gooliaff said, especially since many similar-looking species have overlapping distributions. “When using wildlife images, we have to be more careful in the ways we classify species,” he said. “There may be a lot more species misclassification going on than we think there is.” He also suggests that publications report how images were classified, by whom and by how many people.
Another message, he said, is not being afraid to call something “unknown” if there are no defining characteristics in the image. When species are difficult to tell apart, he suggests having multiple people classify the images rather than relying on a single person.
“Camera trapping is becoming more common and we still think it has huge value,” he said. “But we need to be careful about classifying similar-looking species. We think if many experts classify an image then they usually get it right in the end when we combine all of their opinions. If you just rely on one person’s classification, it is unreliable.”
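Combining several experts' opinions on one image, as suggested above, can be done with a simple majority vote that falls back to “unknown” when no label clearly wins. This is an illustrative sketch, not the study's procedure; the threshold and function name are assumptions.

```python
from collections import Counter

def consensus_label(votes, min_share=0.5):
    """Combine several experts' calls for one image by majority vote.

    votes: list of labels such as ["bobcat", "lynx", "unknown"].
    Returns the most common label when it exceeds min_share of the
    votes; otherwise returns "unknown" (a cautious fallback).
    """
    label, count = Counter(votes).most_common(1)[0]
    if count / len(votes) > min_share:
        return label
    return "unknown"

print(consensus_label(["bobcat", "bobcat", "lynx"]))   # -> bobcat
print(consensus_label(["bobcat", "lynx", "unknown"]))  # -> unknown
```

The fallback mirrors Gooliaff's advice: when the raters can't agree, recording “unknown” is safer than forcing a species call.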
Dana Kobilinsky is associate editor at The Wildlife Society. Contact her at email@example.com with any questions or comments about her article. You can follow her on Twitter at @DanaKobi.