• An automatic zooplankton identification model has been developed for 114 taxonomic categories.
• The model successfully distinguishes species and stages.
• Internal and external validations show high performance for identifying key zooplankton taxa.
• The model enables unprecedented insights into the fine-scale vertical distribution of taxa.
We deployed the Lightframe On-sight Keyspecies Investigation (LOKI) system, a novel underwater imaging system providing cutting-edge imaging quality, in the Canadian Arctic during fall 2013. A Random Forests machine learning model was built to automatically identify zooplankton in LOKI images. The model successfully distinguished between 114 different categories of zooplankton and particles. The high-resolution taxonomic tree included many species and stages, as well as sub-groups based on animal orientation or condition in the images. Results from a machine learning regression model of prosome length (R² = 0.97) were used as a key predictor in the automatic identification model. Internal validation on test data demonstrated that the model performed with overall high accuracy (86%) and specificity (86%). This was confirmed by confusion matrices for external testing results, based on automatic identifications for two complete stations. For station 101, from which images had also been used for training, accuracy and specificity were 85%. For station 126, from which no images had been used to train the model, accuracy and specificity were 81%. Further comparisons between model results and microscope identifications of zooplankton in samples from the two test stations showed good agreement for most taxa. LOKI's image quality makes it possible to build accurate automatic identification models of very high taxonomic detail, which will play a critical role in future studies of zooplankton dynamics and zooplankton coupling with other trophic levels.
Also see ResearchGate at goo.gl/ZctMep for the publication!
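As a toy illustration of the modeling pipeline described in the abstract, the sketch below trains a Random Forests classifier on synthetic data, appends a regression of body ("prosome") length as an extra predictor, and derives overall accuracy and mean per-class specificity from a confusion matrix. All data, feature names, and dimensions are invented; this is not the study's actual 114-category model or its LOKI features.

```python
# Hedged sketch of a Random Forests identification model with a
# length-regression feature, evaluated via accuracy and specificity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_classes = 4                       # toy stand-in for the study's 114 categories
y = rng.integers(0, n_classes, 600)
# Synthetic "image features" (e.g. area, grey level, texture moments)
X = rng.normal(size=(600, 6)) + y[:, None] * 0.8

# Regression model of prosome length; its prediction becomes an extra predictor
length = y * 0.5 + rng.normal(scale=0.05, size=600)
reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, length)
X_full = np.column_stack([X, reg.predict(X)])

X_tr, X_te, y_tr, y_te = train_test_split(X_full, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Accuracy and mean per-class specificity from the confusion matrix
cm = confusion_matrix(y_te, clf.predict(X_te))
tp = np.diag(cm)
fp = cm.sum(axis=0) - tp
fn = cm.sum(axis=1) - tp
tn = cm.sum() - (tp + fp + fn)
acc = tp.sum() / cm.sum()                # overall accuracy
spec = float((tn / (tn + fp)).mean())    # mean per-class specificity
```

With real LOKI data, the confusion matrix would additionally be inspected per taxon, as in the station 101/126 external tests.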
Most studies on buoyant microplastics in the marine environment rely on sea surface sampling. Consequently, microplastic amounts can be underestimated, as turbulence leads to vertical mixing. Models that correct for vertical mixing are based on limited data. In this study we report measurements of the depth profile of buoyant microplastics in the North Atlantic subtropical gyre, from 0 to 5 m depth. Microplastics were separated into size classes (0.5–1.5 and 1.5–5.0 mm) and types ('fragments' and 'lines'), and associated with a sea state. Microplastic concentrations decreased exponentially with depth, with both sea state and particle properties affecting the steepness of the decrease. Concentrations approached zero within 5 m depth, indicating that most buoyant microplastics are present on or near the surface. Plastic rise velocities were also measured, and were found to differ significantly for different sizes and shapes. Our results suggest that (1) surface samplers such as manta trawls underestimate total buoyant microplastic amounts by a factor of 1.04–30.0 and (2) estimates of depth-integrated buoyant plastic concentrations should be made separately for different particle sizes and types. Our findings can assist with improving buoyant ocean plastic vertical mixing models, mass balance exercises, impact assessments and mitigation strategies.
Available here: http://www.nature.com/articles/srep33882
or at ResearchGate: https://goo.gl/E3CY9N
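The exponential decrease with depth reported above implies a simple correction for surface-only sampling. The sketch below, with purely illustrative mixing lengths and a hypothetical trawl net depth of 0.25 m, integrates a profile N(z) = N₀·exp(−z/L) to show how the underestimation factor grows with deeper mixing (rougher sea states).

```python
# Hedged sketch: underestimation factor of a surface trawl for an
# exponential depth profile. L, z_net, and z_max are illustrative values,
# not the study's measurements.
import math

def underestimation_factor(L, z_net=0.25, z_max=5.0):
    """Ratio of plastic in 0..z_max m to plastic caught in 0..z_net m,
    for N(z) = N0 * exp(-z / L) with mixing length L (metres)."""
    total = L * (1 - math.exp(-z_max / L))    # integral of exp(-z/L), 0..z_max
    caught = L * (1 - math.exp(-z_net / L))   # integral over the net's reach
    return total / caught

calm = underestimation_factor(L=0.1)   # calm sea: profile hugs the surface
rough = underestimation_factor(L=2.0)  # rough sea: deeper vertical mixing
```

Under these assumed mixing lengths, the factor stays close to 1 in calm conditions and grows substantially in rough ones, consistent in direction with the 1.04–30.0 range reported above.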
ZOOMIE is an image treatment tool developed to ensure optimal quality for images collected with the Lightframe On-sight Keyspecies Investigation (LOKI) system, an underwater zooplankton camera system. ZOOMIE does this by identifying cases where multiple pictures of the same specimen have been taken (hereafter referred to as double images), a phenomenon that frequently occurs when imaging plankton in a constrained volume during vertical deployments. Identifying double images can be carried out manually but is very time consuming. By applying ZOOMIE, the time needed to identify double images is substantially reduced. Accounting for double images is essential whenever representative distributions of imaged specimens are required.
ZOOMIE can automatically filter thousands of images based on previously extracted image parameters (e.g. area, mean grey pixel value, kurtosis; here extracted using the LOKI browser software (Isitec GmbH; http://www.isitec.de/start.htm)). The filtering applies a set of rules that compares the image parameters of multiple images in order to detect double images and exclude them. These rules can be changed easily in the ZOOMIE scripts, so researchers can adapt the thresholds for detecting double images to their own LOKI settings. After running the main script to find double images, further scripts can be executed to automatically move images flagged for exclusion to a separate folder.
Finally, the results can be visualized on an internal homepage, using the actual images, which are linked to the database. Here, the outcome of the processing can be validated and manually corrected by dragging and dropping images, so that any images wrongly allocated to a double-image group can be reassigned.
Although ZOOMIE was developed for LOKI images and the exclusion of double images, it could easily be adapted to other tasks requiring the management and comparison of large numbers of images.
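A minimal sketch of the kind of rule set described above, assuming illustrative thresholds and parameter names (area, mean grey value, kurtosis): consecutive frames whose parameters are nearly identical are flagged as probable double images, keeping the first of each pair. The actual ZOOMIE rules and thresholds would be tuned to the LOKI settings in use.

```python
# Hedged sketch of a ZOOMIE-style duplicate rule set; thresholds and
# field names are illustrative, not ZOOMIE's actual configuration.
from dataclasses import dataclass

@dataclass
class Frame:
    name: str
    area: float        # object area in pixels
    mean_grey: float   # mean grey pixel value
    kurtosis: float    # grey-value kurtosis

def is_double(a: Frame, b: Frame,
              area_tol=0.05, grey_tol=2.0, kurt_tol=0.1) -> bool:
    """True if frame b looks like a repeat capture of frame a."""
    return (abs(a.area - b.area) / max(a.area, 1e-9) < area_tol
            and abs(a.mean_grey - b.mean_grey) < grey_tol
            and abs(a.kurtosis - b.kurtosis) < kurt_tol)

def flag_doubles(frames):
    """Names of frames flagged for exclusion (the first of each run is kept)."""
    return [cur.name for prev, cur in zip(frames, frames[1:])
            if is_double(prev, cur)]

frames = [Frame("img1", 100.0, 120.0, 1.50),
          Frame("img2", 101.0, 121.0, 1.52),   # near-identical -> double
          Frame("img3", 300.0, 90.0, 3.00)]
flagged = flag_doubles(frames)   # -> ["img2"]
```

In a real workflow, the flagged names would then drive the file-moving step, and the drag-and-drop review page would allow misassigned frames to be restored.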
Climate change is negatively affecting tropical regions through increasing temperatures and decreasing precipitation, leading, among other effects, to changes in local hydrology and a shrinking water supply. In order to make accurate future predictions of carbon stock and forest health, it is necessary to better understand the current baseline carbon stock and how it varies across space. Here we adapted an existing carbon stock assessment method and applied it to two tropical regions, in Nicaragua and Costa Rica, managed by the Maderas Rainforest Conservancy. Carbon stock was calculated based on 1) above-ground tree biomass, 2) above-ground sapling biomass, 3) leaf litter, herb and grass biomass, 4) soil organic carbon, 5) below-ground biomass, 6) stumps and deadwood, and 7) regenerating plants. Our results show a strata-pooled average of 234.09 ± 379 Mg C ha⁻¹ (n = 40) at the Costa Rican site and 209.20 ± 216 Mg C ha⁻¹ (n = 40) at the Nicaraguan site. These values are much higher than those available on a biome-wide scale, highlighting the extent of carbon stock loss outside these study areas as a result of anthropogenic disturbances, in comparison to more pristine areas. Local investigations into carbon stocks in the tropics are necessary to better estimate the current state of tropical carbon content, and adapting existing sampling protocols to local conditions allows this to be done efficiently. Furthermore, local estimates of carbon stock enable non-governmental organizations (NGOs) to participate in the Reducing Emissions from Deforestation and forest Degradation (REDD) program led by the United Nations.
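The plot-level calculation behind such numbers can be sketched as a sum over the seven carbon pools, followed by a pooled mean ± SD across plots. The pool values below are invented for illustration; they are not the study's measurements.

```python
# Hedged sketch: total plot carbon as the sum of seven pools, then a
# pooled mean +/- SD across plots (Mg C per hectare). All values invented.
import statistics

POOLS = ("trees", "saplings", "litter_herbs", "soil_organic",
         "belowground", "deadwood", "regeneration")

def plot_carbon(pools: dict) -> float:
    """Sum the seven carbon pools for one plot (Mg C ha^-1)."""
    return sum(pools[p] for p in POOLS)

plots = [
    {"trees": 180, "saplings": 6, "litter_herbs": 4, "soil_organic": 40,
     "belowground": 35, "deadwood": 8, "regeneration": 2},
    {"trees": 120, "saplings": 4, "litter_herbs": 3, "soil_organic": 55,
     "belowground": 25, "deadwood": 12, "regeneration": 1},
]
stocks = [plot_carbon(p) for p in plots]   # -> [275, 220]
mean_c = statistics.mean(stocks)           # pooled average
sd_c = statistics.stdev(stocks)            # sample standard deviation
```

The study's large SDs relative to the means (e.g. ±379 on 234.09) reflect exactly this kind of between-plot variability, dominated by a few plots with very large trees.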
The tropics eternally fascinate us. But tropical land- and seascapes mean many things to many people (Forsyth and Miyata 1987; Kricher and Plotkin 1999). For some, they can be a great home, a wonderful holiday, and a study site, while for others they constitute a miserable life (with an average daily income of US$ 4) in a life-threatening habitat (Collier 2007; Davis 2007). It is not an overstatement to say that in the tropics, one can die easily. To the rest of the world, however, the tropics still represent a land of opportunity (a "lebensraum"; Figs. 1.2 and 1.3)…
It is now understood that the Ross Sea stands as one of the last relatively pristine ocean areas. Many decades of international research have been carried out under the Antarctic Treaty System, which stipulates that data acquired under this scheme must be shared with the global community. In line with Carlson (Nature 469:293, 2011; Polar Research 10.3402/polar.v32i0.20789, 2013), we find little evidence of enforcement towards making digital geographic information system (GIS) project data available online for the wider Ross Sea ecosystem. While >40 digital datasets can easily be found for most areas and pixels worldwide, only approx. 100 digital datasets can be found for the study area, despite many decades of research in the Ross Sea. This simply shows that data from many studies in the region are not available. High-quality population and trend data, explicit in space and time, are mostly missing from the public realm, e.g., from the Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR.org). This presents an ethical dilemma, because it appears that sufficient data exist for a proactive and precautionary management of this region. Yet no coherent and efficient management scheme truly exists and is applied for this precious part of the world, now heavily affected by global stressors and the mismanagement of data and resources.
Abstract of the feasibility study:
The research described in this feasibility report indicates that The Ocean Cleanup Array is a feasible and viable method to remove large amounts of plastic pollution from a major accumulation zone known as the Great Pacific Garbage Patch. Computer simulations have shown that floating barriers are suitable to capture and concentrate floating plastic debris. Combined with ocean current models to determine how much plastic would encounter the structure, a cleanup efficiency of 42% of all plastic within the North Pacific gyre can be achieved in ten years using a 100 km Array. In collaboration with offshore experts, it has been determined that this Array can be made and installed using current materials and technologies. The estimated costs are €317 million in total, or €31.7 million per year when depreciated over ten years, which translates to €4.53 per kilogram of collected ocean debris.
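A quick back-of-envelope check of the abstract's figures (treated as exact here, although they are rounded estimates): €317 million over ten years at €4.53 per kilogram implies a collected mass on the order of 70 million kg.

```python
# Hedged arithmetic check of the feasibility-study figures only;
# no new data, just consistency of the quoted numbers.
total_cost_eur = 317e6                          # quoted total cost
years = 10
cost_per_year = total_cost_eur / years          # -> EUR 31.7 million per year
cost_per_kg = 4.53                              # quoted EUR per kg collected
implied_mass_kg = total_cost_eur / cost_per_kg  # ~7.0e7 kg over ten years
```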
One of the most powerful figures in the article. Here we see Figure 4a, which shows predicted change from 2010 to 2100 based on future CanESM2 data. The values shown are mean predicted relative occurrence indices (ROI), pooled over all 38 species that were modeled. Warm colours show high predicted change and cool colours show lower change. The general trends of our study indicate a decline in ROI predictions for 2100. We interpret this as an indication of declining habitat quality and a decreasing distribution range for traditional Antarctic species. Eastern Antarctic waters are predicted to be among the most affected regions.
The common raven (Corvus corax) is an abundant generalist of the northern hemisphere, known to congregate and roost near human-related food sources. Due to a growing human footprint and the associated anthropogenic food subsidies, raven populations have increased dramatically throughout the USA over the past several years. The sub-arctic region has also witnessed increased urbanization and industrialization, and ravens have taken advantage of these changes. During 2004 and 2006, we surveyed parking lots on a bi-weekly basis in the city of Fairbanks in interior Alaska, documenting an influx of ravens in winter. Between 2010 and 2012, we recorded the presence and absence of ravens at a permanent set of 30 suspected raven locations and 21 randomized locations within the city limits of Fairbanks. We used machine learning (RandomForests) and 12 spatial GIS datasets from the Fairbanks North Star Borough to predict the relative occurrence of ravens during winter and summer in Fairbanks. Our research showed a positive correlation between raven occurrence and commercial and residential zones in both winter and summer, an inverse geographic relationship between ravens and the waste transfer station in winter, and a direct correlation with proximity to restaurants in summer. These results emphasize the link that ravens have with commercial, anthropogenic food sources, and how Fairbanks and its subsidized, urban habitat may be shaping part of the wider sub-arctic biodiversity landscape.
Keywords: Common raven · Fairbanks · Alaska · Distribution model · Machine learning · Subsidized predator
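As a hedged sketch of this kind of presence/absence modeling, the code below fits a Random Forest to invented "GIS" predictors and uses the class-1 probability as a relative occurrence index (ROI). The predictors, their relationships, and all values are illustrative; they are not the Fairbanks North Star Borough layers.

```python
# Hedged sketch: presence/absence modeled with a Random Forest, with
# predicted probabilities as a relative occurrence index. Toy data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 200
# Toy predictors: distances (km) to a commercial zone, a restaurant,
# and a waste transfer station
X = rng.uniform(0, 5, size=(n, 3))
# Presence made more likely close to commercial zones and restaurants
presence = (rng.random(n) < np.exp(-(X[:, 0] + X[:, 1]) / 3)).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, presence)
roi = rf.predict_proba(X)[:, 1]   # relative occurrence index per location
```

In the real study, `roi` would be mapped back onto the borough's GIS grid, separately for winter and summer, to produce the predictive surfaces.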