<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:academictorrents="http://academictorrents.com/" version="2.0">
<channel>
<title>Animals - Academic Torrents</title>
<description>collection curated by erotemic</description>
<link>https://academictorrents.com/collection/animals</link>
<item>
<title>FishTrack23: An Ensemble Underwater Dataset for Multi-Object Tracking (Dataset)</title>
<description>@inproceedings{dawkins2024fishtrack23,
title= {FishTrack23: An Ensemble Underwater Dataset for Multi-Object Tracking},
journal= {},
author= {Matthew Dawkins and Jack Prior and Bryon Lewis and Robin Faillettaz and Thompson Banez and Mary Salvi and Audrey Rollo and Julien Simon and Alexa Abanga and Matthew Campbell and Matthew Lucero and Aashish Chaudhary and Benjamin Richards and Anthony Hoogs},
booktitle= {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages= {7167--7176},
year= {2024},
url= {https://openaccess.thecvf.com/content/WACV2024/papers/Dawkins_FishTrack23_An_Ensemble_Underwater_Dataset_for_Multi-Object_Tracking_WACV_2024_paper.pdf},
abstract= {Tracking fish in optical underwater imagery contains a number of challenges not encountered in terrestrial domains. Video may contain large schools comprised of many individuals, dynamic natural backgrounds, variable target scales, volatile collection conditions, and non-fish moving confusors including debris, marine snow, and other organisms. Lastly, there is a lack of large public datasets for algorithm evaluation available in this domain. FishTrack aims to address these challenges by providing a large quantity of expert-annotated fish groundtruth tracks, in imagery and video collected across a range of different backgrounds, locations, collection conditions, and organizations.},
keywords= {NOAA, deep learning, Computer Vision, object detection, object classification, marine biology, IFREMER, Fish, Object Tracking, CDFW},
terms= {},
license= {CC-BY-4.0},
superseded= {}
}

</description>
<link>https://academictorrents.com/download/70695b973afa53be67dbfb72a2478775885598b9</link>
</item>
<item>
<title>Whale Shark ID Dataset (Dataset)</title>
<description>@article{wildme2020whaleshark,
title= {Whale Shark ID Dataset},
journal= {},
author= {Wild Me},
year= {2020},
url= {https://www.wildme.org},
abstract= {Our released whale shark (Rhincodon typus) dataset represents a collaborative effort based on the data collection and population modeling efforts conducted at Ningaloo Marine Park in Western Australia from 1995-2008 (Holmberg et al. 2008, 2009). Photos (7,888) and metadata from 2,441 whale shark encounters were collected from 464 individual contributors, especially from the original research of Brad Norman and from members of the local whale shark tourism industry who sight these animals annually from April-June. Images were annotated with bounding boxes around each visible whale shark, and viewpoints were labeled (e.g., left, right, etc.). A total of 543 individual whale sharks were identified by their unique spot patterning, first using computer-assisted spot pattern recognition (Arzoumanian et al. 2005) and then manual review and confirmation. A total of 7,693 named sightings were exported.

The dataset is released in the Microsoft COCO format (https://cocodataset.org/) and therefore uses flat image folders with associated YAML metadata files. We have collapsed the entire dataset into a single "train" label and have left "val" and "test" empty; we do this as an invitation to researchers to experiment with their own novel approaches for dealing with the unbalanced and chaotic distribution of the number of sightings per individual.  All of the images in the dataset have been resized to have a maximum linear dimension of 3,000 pixels.  The metadata for each animal sighting is defined by an axis-aligned bounding box and includes information on the rotation of the box (theta), the viewpoint of the animal, a species (category) ID, a source image ID, an individual string ID name, and other miscellaneous values.  The temporal ordering of the images, and an anonymized ID for the original photographer, can be determined from the metadata for each image.

For research or press contact, please direct all correspondence to Wild Me at info@wildme.org.  Wild Me (https://www.wildme.org) is a registered 501(c)(3) not-for-profit based in Portland, Oregon, USA and brings state-of-the-art computer vision tools to ecology researchers working around the globe on wildlife conservation.

Direct download mirror: https://wildbookiarepository.azureedge.net/datasets/whaleshark.coco.tar.gz},
keywords= {coco, identification, wildlife, whale shark},
terms= {Use of this dataset in scientific research must provide attribution under the CDLA-Permissive License (version 1.0) and must also cite the original research publication: 

@article{holmberg2009estimating,
  title={Estimating population size, structure, and residency time for whale sharks Rhincodon typus through collaborative photo-identification},
  author={Holmberg, Jason and Norman, Bradley and Arzoumanian, Zaven},
  journal={Endangered Species Research},
  volume={7},
  number={1},
  pages={39--53},
  year={2009}
}},
license= {Community Data License Agreement – Permissive – Version 1.0 (https://cdla.io/permissive-1-0/)},
superseded= {}
}

</description>
<link>https://academictorrents.com/download/bb47cd1d6dde2f49b040495382c778c102409080</link>
</item>
<item>
<title>Great Zebra and Giraffe Count ID Dataset (Dataset)</title>
<description>@article{wildme2020gzgc,
title= {Great Zebra and Giraffe Count ID Dataset},
journal= {},
author= {Wild Me},
year= {2020},
url= {https://www.wildme.org},
abstract= {Our dataset for plains zebra (Equus quagga) is taken from a two-day census of the Nairobi National Park, located just south of the capital’s airport in Nairobi, Kenya.  The “Great Zebra and Giraffe Count” (GZGC) photographic census was organized on February 28th and March 1st, 2015, with the participation of 27 different teams of citizen scientists and 55 total photographers, and collected 9,406 images of plains zebra and Masai giraffe (Giraffa tippelskirchi) (Parham et al. 2017).  Only images containing either zebras or giraffes were included in the exported dataset, a total of 4,948 images, with the biographical information of the original contributors removed.  All images are labeled with bounding boxes around the individual animals for which there is ID metadata, meaning some images contain missing boxes and are not intended to be used for object detection training or testing.  Viewpoints for all animal annotations were also added.  All ID assignments were completed using the HotSpotter algorithm (Crall et al. 2013) by visually matching the stripes and spots seen on the body of the animal.  A total of 2,056 combined names are released for 6,286 individual zebra and 639 giraffe sightings.  This dataset presents a challenging comparison to the whale shark dataset, since it contains a significantly higher number of animals that are seen only once during the survey.

The dataset is released in the Microsoft COCO format (https://cocodataset.org/) and therefore uses flat image folders with associated YAML metadata files. We have collapsed the entire dataset into a single "train" label and have left "val" and "test" empty; we do this as an invitation to researchers to experiment with their own novel approaches for dealing with the unbalanced and chaotic distribution of the number of sightings per individual.  All of the images in the dataset have been resized to have a maximum linear dimension of 3,000 pixels.  The metadata for each animal sighting is defined by an axis-aligned bounding box and includes information on the rotation of the box (theta), the viewpoint of the animal, a species (category) ID, a source image ID, an individual string ID name, and other miscellaneous values.  The temporal ordering of the images, and an anonymized ID for the original photographer, can be determined from the metadata for each image.

For research or press contact, please direct all correspondence to Wild Me at info@wildme.org.  Wild Me (https://www.wildme.org) is a registered 501(c)(3) not-for-profit based in Portland, Oregon, USA and brings state-of-the-art computer vision tools to ecology researchers working around the globe on wildlife conservation.

Direct download mirror: https://wildbookiarepository.azureedge.net/datasets/gzgc.coco.tar.gz},
keywords= {zebra, wildlife, coco, identification, giraffe},
terms= {Use of this dataset in scientific research must provide attribution under the CDLA-Permissive License (version 1.0) and must also cite the original research publication: 

@inproceedings{parham2017animal,
  title={Animal population censusing at scale with citizen science and photographic identification},
  author={Parham, Jason and Crall, Jonathan and Stewart, Charles and Berger-Wolf, Tanya and Rubenstein, Daniel I},
  booktitle={AAAI Spring Symposium-Technical Report},
  year={2017}
}},
license= {Community Data License Agreement – Permissive – Version 1.0 (https://cdla.io/permissive-1-0/)},
superseded= {}
}

</description>
<link>https://academictorrents.com/download/69160c6bf11275321017f18124dbaff2d381b21c</link>
</item>
</channel>
</rss>
