blumenstiel committed on
Commit
91707da
·
1 Parent(s): b5ce5aa

Update ReadMe


Signed-off-by: Benedikt Blumenstiel <[email protected]>

Files changed (1)
  1. README.md +23 -52
README.md CHANGED
@@ -11,78 +11,49 @@ tags:
  - Foundation model
  ---
  ### Model and Inputs
- The pretrained [Prithvi-EO-1.0-100m](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M/blob/main/README.md) model is finetuned to segment the extent of floods on Sentinel-2 images from the [Sen1Floods11 dataset](https://github.com/cloudtostreet/Sen1Floods11).

  The dataset consists of 446 labeled 512x512 chips that span all 14 biomes, 357 ecoregions, and 6 continents of the world across 11 flood events. The benchmark associated with Sen1Floods11 provides results for fully convolutional neural networks trained in various input/labeled-data setups, considering Sentinel-1 and Sentinel-2 imagery.

- We extract the following bands for flood mapping:
-
- 1. Blue
- 2. Green
- 3. Red
- 4. Narrow NIR
- 5. SWIR 1
- 6. SWIR 2

  Labels represent no water (class 0), water/flood (class 1), and no data/clouds (class -1).

- The Prithvi-100m model was initially pretrained using a sequence length of 3 timesteps. Based on the characteristics of this benchmark dataset, we focus on single-timestamp segmentation. This demonstrates that our model can be utilized with an arbitrary number of timestamps during finetuning.

- ![](sen1floods11-finetuning.png)

- ### Code
-
- The code for this finetuning is available through [GitHub](https://github.com/NASA-IMPACT/hls-foundation-os/).
-
- The configuration used for finetuning is available through this [config](https://github.com/NASA-IMPACT/hls-foundation-os/blob/main/fine-tuning-examples/configs/sen1floods11.py).
-
- ### Results
-
- Finetuning the geospatial foundation model for 100 epochs leads to the following performance on the test dataset:
-
- | **Classes** | **IoU**| **Acc**|
- |:------------------:|:------:|:------:|
- | No water | 96.90% | 98.11% |
- | Water/Flood | 80.46% | 90.54% |
-
- | **aAcc** |**mIoU**|**mAcc**|
- |:------------------:|:------:|:------:|
- | 97.25% | 88.68% | 94.37% |
-
- The performance of the model has been further validated on an unseen, holdout flood event in Bolivia. The results are consistent with the performance on the test set:

- | **Classes** | **IoU**| **Acc**|
- |:------------------:|:------:|:------:|
- | No water | 95.37% | 97.39% |
- | Water/Flood | 77.95% | 88.74% |
-
- | **aAcc** |**mIoU**|**mAcc**|
- |:------------------:|:------:|:------:|
- | 96.02% | 86.66% | 93.07% |
-
- Finetuning took ~1 hour on an NVIDIA V100.
-
- ### Inference
- The GitHub repo includes an inference script for running the flood mapping model on Sentinel-2 images. Inputs must be GeoTIFF files containing the six bands described above (Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2), in that order, for a single timestep. There is also a **demo** that leverages the same code **[here](https://huggingface.co/spaces/ibm-nasa-geospatial/Prithvi-100M-sen1floods11-demo)**.

  ### Feedback

- Your feedback is invaluable to us. If you have any feedback about the model, please feel free to share it with us. You can do this by submitting issues on our open-source repository, [hls-foundation-os](https://github.com/NASA-IMPACT/hls-foundation-os/issues), on GitHub.

  ### Citation

- If this model helped your research, please cite our model in your publications. Here is an example BibTeX entry:

  ```
- @misc{Prithvi-100M-flood-mapping,
- author = {Jakubik, Johannes and Fraccaro, Paolo and Oliveira Borges, Dario and Muszynski, Michal and Weldemariam, Kommy and Zadrozny, Bianca and Ganti, Raghu and Mukkavilli, Karthik},
- month = aug,
- doi = {10.57967/hf/0973},
- title = {{Prithvi 100M flood mapping}},
- repository-code = {https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M-sen1floods11},
- year = {2023}
  }
- ```
 
  - Foundation model
  ---
  ### Model and Inputs
+ The pretrained [Prithvi-EO-2.0-300M-TL](https://huggingface.co/ibm-nasa-geospatial/Prithvi-EO-2.0-300M-TL) model is finetuned to segment the extent of floods on Sentinel-2 images from the [Sen1Floods11 dataset](https://github.com/cloudtostreet/Sen1Floods11).

  The dataset consists of 446 labeled 512x512 chips that span all 14 biomes, 357 ecoregions, and 6 continents of the world across 11 flood events. The benchmark associated with Sen1Floods11 provides results for fully convolutional neural networks trained in various input/labeled-data setups, considering Sentinel-1 and Sentinel-2 imagery.

+ We use the following six bands for flood mapping: Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2.
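For illustration, selecting these six bands from a full Sentinel-2 stack can be sketched in Python. The 13-band L1C ordering and the index mapping below are assumptions for this sketch (B8A as Narrow NIR, B11/B12 as SWIR 1/2), not code taken from the repository:

```python
import numpy as np

# Assumed Sentinel-2 L1C band order in the input stack
S2_BANDS = ["B1", "B2", "B3", "B4", "B5", "B6", "B7",
            "B8", "B8A", "B9", "B10", "B11", "B12"]
# The six model inputs: Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2
MODEL_BANDS = ["B2", "B3", "B4", "B8A", "B11", "B12"]

def select_bands(stack: np.ndarray) -> np.ndarray:
    """Pick the six model bands from a (13, H, W) Sentinel-2 stack."""
    idx = [S2_BANDS.index(b) for b in MODEL_BANDS]
    return stack[idx]

chip = np.random.rand(13, 512, 512).astype(np.float32)
inputs = select_bands(chip)  # shape (6, 512, 512)
```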

  Labels represent no water (class 0), water/flood (class 1), and no data/clouds (class -1).
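A minimal NumPy sketch of how segmentation scores such as IoU typically skip the -1 class (this is an illustration of the metric, not the project's evaluation code):

```python
import numpy as np

def iou_per_class(pred: np.ndarray, label: np.ndarray, ignore_index: int = -1):
    """IoU for the two classes, skipping pixels labeled ignore_index (no data/clouds)."""
    valid = label != ignore_index
    ious = {}
    for cls in (0, 1):  # 0 = no water, 1 = water/flood
        p = (pred == cls) & valid
        t = (label == cls) & valid
        union = np.logical_or(p, t).sum()
        ious[cls] = np.logical_and(p, t).sum() / union if union else float("nan")
    return ious

label = np.array([[0, 0, 1], [1, -1, 1]])
pred  = np.array([[0, 1, 1], [1, 1, 0]])
ious = iou_per_class(pred, label)  # class 0 IoU = 1/3, class 1 IoU = 1/2
```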

+ The Prithvi-EO-2.0-300M-TL model was initially pretrained using a sequence length of 4 timestamps. Based on the characteristics of this benchmark dataset, we focus on single-timestamp segmentation. This demonstrates that our model can be utilized with an arbitrary number of timestamps during fine-tuning.

+ ### Fine-tuning

+ The model was fine-tuned using [TerraTorch](https://github.com/IBM/terratorch):

+ ```shell
+ terratorch fit -c sen1floods11.yaml
+ ```

+ The configuration used for fine-tuning is available in this [config](https://github.com/NASA-IMPACT/Prithvi-EO-2.0/blob/main/configs/sen1floods11.yaml).

+ ### Inference and demo

+ A **demo** running this model is available **[here](https://huggingface.co/spaces/ibm-nasa-geospatial/Prithvi-EO-2.0-Sen1Floods11-demo)**.

+ This repo includes an inference script that runs the flood model on Sentinel-2 L1C images:

+ ```shell
+ python inference.py --data_file examples/India_900498_S2Hand.tif
+ ```
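The script's exact output format is not shown here; assuming the prediction is loaded as a 2-D array of class indices (0 = no water, 1 = water/flood, -1 = no data/clouds), it can be summarized like this. The 10 m pixel size is an assumption for the sketch:

```python
import numpy as np

def flood_summary(mask: np.ndarray, pixel_size_m: float = 10.0) -> dict:
    """Summarize a predicted flood mask, excluding no-data/cloud pixels (-1)."""
    valid = mask != -1
    n_valid = int(valid.sum())
    n_flood = int((mask == 1).sum())
    return {
        "valid_fraction": n_valid / mask.size,
        "flood_fraction": n_flood / n_valid if n_valid else float("nan"),
        # assumes square pixels; 10 m is typical for these Sentinel-2 bands
        "flood_area_km2": n_flood * (pixel_size_m ** 2) / 1e6,
    }

mask = np.array([[1, 1, 0, 0], [0, -1, -1, 0]])
summary = flood_summary(mask)  # 75% valid, 1/3 of valid pixels flooded
```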

  ### Feedback

+ Your feedback is invaluable to us. If you have any feedback about the model, please feel free to share it with us by submitting issues on GitHub or starting a discussion on Hugging Face.

  ### Citation

+ If this model helped your research, please cite [Prithvi-EO-2.0](https://arxiv.org/abs/2412.02732) in your publications.

  ```
+ @article{Prithvi-EO-V2-preprint,
+ author = {Szwarcman, Daniela and Roy, Sujit and Fraccaro, Paolo and Gíslason, Þorsteinn Elí and Blumenstiel, Benedikt and Ghosal, Rinki and de Oliveira, Pedro Henrique and de Sousa Almeida, João Lucas and Sedona, Rocco and Kang, Yanghui and Chakraborty, Srija and Wang, Sizhe and Kumar, Ankur and Truong, Myscon and Godwin, Denys and Lee, Hyunho and Hsu, Chia-Yu and Akbari Asanjan, Ata and Mujeci, Besart and Keenan, Trevor and Arévolo, Paulo and Li, Wenwen and Alemohammad, Hamed and Olofsson, Pontus and Hain, Christopher and Kennedy, Robert and Zadrozny, Bianca and Cavallaro, Gabriele and Watson, Campbell and Maskey, Manil and Ramachandran, Rahul and Bernabe Moreno, Juan},
+ title = {{Prithvi-EO-2.0: A Versatile Multi-Temporal Foundation Model for Earth Observation Applications}},
+ journal = {arXiv preprint arXiv:2412.02732},
+ year = {2024}
  }
+ ```