{"id":45,"date":"2026-02-10T17:20:03","date_gmt":"2026-02-10T17:20:03","guid":{"rendered":"https:\/\/site.uvm.edu\/amber\/?page_id=45"},"modified":"2026-02-12T21:03:37","modified_gmt":"2026-02-12T21:03:37","slug":"machine-learning-models","status":"publish","type":"page","link":"https:\/\/site.uvm.edu\/amber\/machine-learning-models\/","title":{"rendered":"Machine Learning Models"},"content":{"rendered":"\n<p>AMBER utilizes a variety of machine learning model to identify wildlife within media files.  We are continually adding model options to our analysis portfolio.  If you have a model to run, or are in need of new model development, contact us for more details.<\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><mark style=\"background-color:#f9f9f9\" class=\"has-inline-color has-contrast-color\"><strong><em>Trail Camera Analysis<\/em><\/strong>     <\/mark><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary>DeepFaune New England<\/summary>\n<p><a href=\"https:\/\/code.usgs.gov\/vtcfwru\/deepfaune-new-england\">DeepFaune New England<\/a> (DFNE) model for species classification analyzes trail camera imagery. This model is a re-trained version of the <a href=\"https:\/\/plmlab.math.cnrs.fr\/deepfaune\/software\" target=\"_blank\" rel=\"noreferrer noopener\">DeepFaune<\/a> model for classifying European species in trial cameras, fine-tuned to classify taxa from northeastern North America. 
DFNE classifies 24 taxa, including the &#8220;no-species&#8221; label indicating the absence of an animal.<\/p>\n\n\n\n<p><img loading=\"lazy\" decoding=\"async\" width=\"2560\" height=\"1441\" class=\"wp-image-15\" style=\"width: 500px\" src=\"http:\/\/site.uvm.edu\/amber\/files\/2026\/02\/dpne_annotated_images-scaled.jpg\" alt=\"\" srcset=\"https:\/\/site.uvm.edu\/amber\/files\/2026\/02\/dpne_annotated_images-scaled.jpg 2560w, https:\/\/site.uvm.edu\/amber\/files\/2026\/02\/dpne_annotated_images-300x169.jpg 300w, https:\/\/site.uvm.edu\/amber\/files\/2026\/02\/dpne_annotated_images-1024x576.jpg 1024w, https:\/\/site.uvm.edu\/amber\/files\/2026\/02\/dpne_annotated_images-768x432.jpg 768w, https:\/\/site.uvm.edu\/amber\/files\/2026\/02\/dpne_annotated_images-1536x864.jpg 1536w, https:\/\/site.uvm.edu\/amber\/files\/2026\/02\/dpne_annotated_images-2048x1152.jpg 2048w\" sizes=\"auto, (max-width: 2560px) 100vw, 2560px\" \/><\/p>\n<\/details>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary>SpeciesNet<\/summary>\n<p><a href=\"https:\/\/github.com\/google\/cameratrapai?tab=readme-ov-file#overview\">SpeciesNet<\/a> runs two AI models: (1) an object detector that finds objects of interest in wildlife camera images, and (2) an image classifier that classifies those objects to the species level. This ensemble is used for species recognition in the&nbsp;<a href=\"https:\/\/www.wildlifeinsights.org\/\">Wildlife Insights<\/a>&nbsp;platform.  The code is now open source, allowing AMBER to add SpeciesNet to our machine learning repertoire.<\/p>\n<\/details>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary>MegaDetector<\/summary>\n<p><a href=\"https:\/\/github.com\/agentmorris\/MegaDetector\/blob\/main\/megadetector.md\">MegaDetector<\/a>&nbsp;is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems.<\/p>\n\n\n\n<p>MegaDetector only finds animals; it doesn&#8217;t identify them to species level.  Both SpeciesNet and DeepFaune New England use MegaDetector to locate animals within images, and then classify the detected targets to species.<\/p>\n<\/details>\n\n\n\n<div style=\"margin-top:var(--wp--preset--spacing--10);margin-bottom:var(--wp--preset--spacing--10);height:18px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong><em><mark style=\"background-color:#f9f9f9\" class=\"has-inline-color\">Audio Analysis <\/mark><\/em><\/strong><\/h2>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary>BirdNET<\/summary>\n<p><a href=\"https:\/\/birdnet.cornell.edu\/\">BirdNET<\/a> uses deep learning to recognize over 6,000 species globally within audio files. It was developed by the&nbsp;<a href=\"https:\/\/www.birds.cornell.edu\/ccb\/\">K. 
Lisa Yang Center for Conservation Bioacoustics<\/a>&nbsp;at the&nbsp;<a href=\"https:\/\/www.birds.cornell.edu\/home\">Cornell Lab of Ornithology<\/a>&nbsp;in collaboration with&nbsp;<a href=\"https:\/\/www.tu-chemnitz.de\/index.html.en\">Chemnitz University of Technology<\/a>.<\/p>\n\n\n\n<p>Go to&nbsp;<a href=\"https:\/\/birdnet.cornell.edu\/\">https:\/\/birdnet.cornell.edu<\/a>&nbsp;to learn more about the project.<\/p>\n<\/details>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary>HawkEars<\/summary>\n<p><a href=\"https:\/\/github.com\/jhuus\/HawkEars\">HawkEars<\/a> is a desktop program that scans audio recordings for bird sounds and generates&nbsp;<a href=\"https:\/\/www.audacityteam.org\/\">Audacity<\/a>&nbsp;label files. It is inspired by&nbsp;<a href=\"https:\/\/github.com\/kahst\/BirdNET\">BirdNET<\/a>, and intended as an improved productivity tool for analyzing field recordings.&nbsp;AMBER runs HawkEars directly and stores results in your database, accessed through our web portal.<\/p>\n<\/details>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary>Ruffed Grouse Drumming Model<\/summary>\n<p>Ruffed Grouse are a flagship species in North America, of interest to many conservation groups.  See <a href=\"https:\/\/wildlife.onlinelibrary.wiley.com\/doi\/10.1002\/wsb.1395\">this article<\/a> for more details. <\/p>\n<\/details>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary>Customized Templates<\/summary>\n<p>When acoustic targets have a recognizable signal, simple template-matching may be a powerful way to screen audio files for a target wildlife species of interest.  
AMBER can create customized templates, screen files for potential signal matches, and further develop machine learning models to locate true detections.<\/p>\n<\/details>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary>Soundscape Analyses<\/summary>\n<p><a href=\"https:\/\/github.com\/chrisbartha\/ABGQI-CNN\">ABGQI<\/a> is a CNN focused on soundscape analysis, predicting five soundscape components: human noise (Anthropophony), wildlife vocalizations (Biophony), weather phenomena (Geophony), Quiet periods (Q), and microphone Interference (I).&nbsp;AMBER users may elect to use this model for soundscape monitoring, among other applications.<\/p>\n<\/details>\n\n\n\n<div style=\"height:68px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n","protected":false},"excerpt":{"rendered":"<p>AMBER utilizes a variety of machine learning models to identify wildlife within media files. We are continually adding model options to our analysis portfolio. If you have a model to run, or need new model development, contact us for more details. 
Trail Camera Analysis Audio Analysis<\/p>\n","protected":false},"author":6081,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"class_list":["post-45","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/site.uvm.edu\/amber\/wp-json\/wp\/v2\/pages\/45","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/site.uvm.edu\/amber\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/site.uvm.edu\/amber\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/site.uvm.edu\/amber\/wp-json\/wp\/v2\/users\/6081"}],"replies":[{"embeddable":true,"href":"https:\/\/site.uvm.edu\/amber\/wp-json\/wp\/v2\/comments?post=45"}],"version-history":[{"count":17,"href":"https:\/\/site.uvm.edu\/amber\/wp-json\/wp\/v2\/pages\/45\/revisions"}],"predecessor-version":[{"id":157,"href":"https:\/\/site.uvm.edu\/amber\/wp-json\/wp\/v2\/pages\/45\/revisions\/157"}],"wp:attachment":[{"href":"https:\/\/site.uvm.edu\/amber\/wp-json\/wp\/v2\/media?parent=45"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}