    Review of 'BoatNet: Automated Small Boat Composition Detection using Deep Learning on Satellite Imagery'

    The article discusses maritime energy emissions with a focus on emissions caused by small boats.
    Average rating:
        Rated 3.5 of 5.
    Level of importance:
        Rated 4 of 5.
    Level of validity:
        Rated 3 of 5.
    Level of completeness:
        Rated 3 of 5.
    Level of comprehensibility:
        Rated 4 of 5.
    Competing interests:
    None

    Reviewed article


    BoatNet: Automated Small Boat Composition Detection using Deep Learning on Satellite Imagery

    Tracking and measuring national carbon footprints is one of the keys to achieving the ambitious goals set by countries. According to statistics, more than 10% of global transportation carbon emissions result from shipping. However, accurate tracking of the emissions of the small boat segment is not well established. Past research has begun to look into the role played by small boat fleets in terms of Greenhouse Gases (GHG), but this either relies on high-level techno-activity assumptions or the installation of GPS sensors to understand how this vessel class behaves. This research is undertaken mainly in relation to fishing and recreational boats. With the advent of open-access satellite imagery and its ever-increasing resolution, it can support innovative methodologies that could eventually lead to the quantification of GHG emissions. This work used deep learning algorithms to detect small boats in three cities in the Gulf of California in Mexico. The work produced a methodology named BoatNet that can detect, measure and classify small boats even under low-resolution and blurry satellite images, achieving an accuracy of 93.9% with a precision of 74.0%. Future work should focus on attributing a boat activity to fuel consumption and operational profile to estimate small boat GHG emissions in any given region. The data curated and produced in this study is freely available at https://github.com/theiresearch/BoatNet.

      Review information

      10.14293/S2199-1006.1.SOR-EARTH.AWCDRL.v1.RJUQPC
      This work has been published open access under Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Conditions, terms of use and publishing policy can be found at www.scienceopen.com.

      Subjects: Earth & Environmental sciences, Computer science, Statistics, Geosciences
      Keywords: Small boats activity, Statistics, Object Detection, Climate Change, Transfer Learning, Deep Learning, Energy, The Environment, Policy and law, Climate, Sustainable development

      Review text

      The article discusses maritime energy emissions with a focus on emissions caused by ships. The authors point out that existing energy emission models usually do not consider small boat fleets, which becomes the main focus of this paper. A model titled BoatNet for quantifying small boat fleet emission inventories is introduced, which can detect, measure and classify small boats in order to improve the accuracy of small boat carbon inventories globally. BoatNet applies a CNN to RGB very high-resolution satellite imagery.
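
      As an illustration of the kind of pipeline summarised above, the following is a minimal sketch of applying a YOLOv5-style detector to a single satellite image tile. It uses a generic COCO-pretrained model rather than the authors' trained BoatNet weights, and the confidence threshold and file name are assumptions for illustration only.

      ```python
      # Minimal sketch of YOLOv5-based boat detection on one satellite image tile.
      # Assumptions: a generic COCO-pretrained yolov5s model (NOT the authors'
      # BoatNet weights), an illustrative confidence threshold, a hypothetical file.
      import torch

      model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained on COCO
      model.conf = 0.25  # assumed detection confidence threshold

      results = model("harbour_tile.jpg")      # hypothetical Google Earth Pro screenshot
      detections = results.pandas().xyxy[0]    # DataFrame: xmin, ymin, xmax, ymax, name, ...

      # Keep only the COCO "boat" class and report the pixel size of each detection.
      boats = detections[detections["name"] == "boat"]
      for _, box in boats.iterrows():
          width_px = box["xmax"] - box["xmin"]
          height_px = box["ymax"] - box["ymin"]
          print(f"boat at ({box['xmin']:.0f}, {box['ymin']:.0f}), "
                f"{width_px:.0f} x {height_px:.0f} px, confidence {box['confidence']:.2f}")
      ```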

      The paper is well written, and the research is innovative, relevant and important. The main drawback is the documentation of the proposed methodology. The following issues, mostly related to satellite remote sensing, should be considered before publication:

      1. The authors do not provide details on the imagery source and do not discuss the resolution of the satellite imagery that was extracted from Google Earth Pro. For the applied eye altitude (200 m), Google Earth Pro usually uses mosaicked true-color composites derived from Digital Globe's WorldView-1/2/3 series, GeoEye-1, and Airbus' Pleiades, all of which provide data at around 0.5 m spatial resolution. How does this relate to the resolution you extracted at 200 m eye altitude?
      2. Google Earth Pro does display the satellite imagery source, for instance “Image © 2022 CNES / Airbus”, which refers to Airbus' Pleiades imagery. However, the authors did not mention the satellite imagery source(s) of the 694 high-resolution images used.
      3. The paper also leaves a few further open questions: What would be the benefit of additional spectral bands in the IR part of the electromagnetic spectrum, if available? WorldView-3, for example, comes with 29 bands, PlanetScope with 5 bands. What would be the minimum required spatial resolution? Which multispectral (high-resolution) satellite imagery is available for free, and which is not? A discussion of satellite imagery access, availability, costs and in particular resolution (spatial, spectral, temporal) would be beneficial for this research.
      4. Did you account for the count of duplicates?
      5. Details on how satellite images were extracted from Google Earth Pro are missing.
      6. The issue of boats located on two adjacent images could have been addressed by applying tiling with spatial overlap (for instance 50%, depending on the chosen tile size; see the sketch after this list); see page 7, second paragraph: “…some large vessels …do not appear fully in an image”.
      7. The authors could also consider masking out land areas.
      8. Page 6, 2nd paragraph: The authors mention shadows and clouds but then only discuss the removal of haze in the next sentence. This leaves the reader with open questions about shadows and clouds (although Google Earth Pro imagery consists of more or less cloud-free mosaics almost everywhere).
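
      Regarding point 6 above, a possible implementation of tiling with spatial overlap is sketched below. The tile size, the 50% overlap and the function name are illustrative assumptions, not taken from the paper; edge strips narrower than one tile are ignored for brevity.

      ```python
      # Sketch of cutting a large satellite scene into overlapping tiles (point 6).
      # With 50% overlap, a boat cut in half at one tile border usually lies fully
      # inside a neighbouring tile. Tile size and overlap are assumed values.
      import numpy as np

      def tile_with_overlap(image: np.ndarray, tile: int = 640, overlap: float = 0.5):
          """Yield (top, left, tile_array) for overlapping square tiles."""
          step = max(1, int(tile * (1.0 - overlap)))   # stride between tile origins
          rows, cols = image.shape[:2]
          for top in range(0, max(rows - tile, 0) + 1, step):
              for left in range(0, max(cols - tile, 0) + 1, step):
                  yield top, left, image[top:top + tile, left:left + tile]
          # Note: the last partial strip at the right/bottom edge is not covered here.

      scene = np.zeros((2000, 3000, 3), dtype=np.uint8)   # placeholder harbour scene
      print(sum(1 for _ in tile_with_overlap(scene)), "overlapping tiles generated")
      ```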

      Other, mostly minor comments:

      1. Abstract: 1st sentence: improve wording
      2. Abstract: I suggest replacing the term “Techno-activity” with “Technology…”
      3. Abstract: Replace GPS with “Global navigation satellite system (GNSS)”
      4. Abstract: “…The work produced a methodology named BoatNet that can detect, measure and classify small boats…” – I suggest also informing the reader about the target classes (shipping/leisure)
      5. Page 1: The unit ‘Mt’ should be written in full when used for the first time: “Megaton (Mt)”
      6. The same applies to CO2e: carbon dioxide equivalent (page 2, last line)
      7. Page 2: first 3 paragraphs: I suggest adding the respective literature references to back up your statements
      8. Page 2 – section C.: First sentence: “Bringing deep….is essential.” Why is it essential – what for?
      9. Page 2 – section C.: Third sentence: I suggest using the term ‘resolution’ instead of ‘quality’
      10. Page 2 – section C.: Fourth sentence: I suggest rewording into something like: “Machine learning is widely used for satellite imagery analysis.”
      11. Whole text: You may consider replacing “Satellite image” with “Satellite imagery”
      12. Page 3 – line 8: “…and fuel used data” Did you mean fuel-used data? The sentence wasn’t clear to me.
      13. Page 3 – line 10: “CO2e” Does “e” refer to estimate? Please write the full form when using an abbreviation for the first time in the text.
      14. Page 3 – section B – paragraph 2: “…each number is neither zero nor new, but…” Are you sure that is correct? What is a “new” number? Please adjust to improve clarity.
      15. Page 5 – end of chapter II: I recommend adding a summary of the literature review, including an argument for why the YOLO CNN was selected for this work (just before the last paragraph)
      16. Page 5 – end of chapter II, last paragraph: “…aims at detecting small boats…” I recommend adding the fact that the proposed model intends to detect specific boat types (fishing, recreational)
      17. Page 6 - Fig. 5 caption: reference should be replaced by ref 51 (Dwivedi …Yolov5), or at least ref 51 should be added.
      18. Page 6 – last paragraph: “To validate… was done beteen BoatNet…” → correct “beteen” to “between”
      19. Page 7: What is the loss of prediction (detection) accuracy due to image resizing? Depending on research goals (and other factors), it is sometimes worth running a multi-day training.
      20. Page 8: False counts caused by boats located close to one another: couldn’t the boat type classification at least allow distinguishing between different (adjacent) boat types?
      21. Page 8, last paragraph: “Nevertheless, as Figures 11, 12, 13 demonstrate, the model still detects most small boats in poorly detailed satellite images,…” What exactly did you refer to with “poorly detailed”?
      22. “precision of training can be up to 93.9%,” … Why isn’t this result explained in the results section? How did you derive the 93.9% (see the sketch after this list for the standard metric definitions)?
      23. Page 10, fourth paragraph: “Due to the low data quality of the selected regions, the images are less suitable as training datasets.” What exactly did you mean by “low data quality”?
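
      As an aside to point 22: the standard definitions of detection precision and accuracy are recalled below, since it is unclear from the paper which of these the reported 93.9% refers to. The confusion-matrix counts are made-up numbers chosen only to illustrate the computation; they are not results from the paper.

      ```python
      # Textbook detection metrics relevant to point 22 (counts below are invented
      # purely for illustration and are NOT taken from the reviewed paper).
      tp, fp, fn, tn = 90, 10, 5, 20   # hypothetical true/false positives/negatives

      precision = tp / (tp + fp)                    # share of detections that are real boats
      recall = tp / (tp + fn)                       # share of real boats that were detected
      accuracy = (tp + tn) / (tp + fp + fn + tn)    # overall share of correct decisions

      print(f"precision = {precision:.3f}, recall = {recall:.3f}, accuracy = {accuracy:.3f}")
      ```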
