BiliSakura committed · Commit 2e4a064 · verified · Parent(s): 26722d3

Upload RS-Painter model files

Files changed (1): README.md (added, +83 −0)
---
license: apache-2.0
library_name: diffusers
tags:
- inpainting
- stable-diffusion
- remote-sensing
- semantic-segmentation
- diffusion-models
- few-shot
- sat-imagery
- StableDiffusionInpaintPipeline
inference: true
pipeline_tag: inpainting
model-index:
- name: RS-Painter-Diffusers
  results: []
---
# RS-Painter-Diffusers
**Tackling Few-Shot Segmentation in Remote Sensing via Inpainting Diffusion Model**

[![Paper](https://img.shields.io/badge/arXiv-PDF-b31b1b)](https://arxiv.org/abs/2503.03785)
[![License](https://img.shields.io/badge/License-Apache--2.0-929292)](https://www.apache.org/licenses/LICENSE-2.0)

**ICLR Machine Learning for Remote Sensing Workshop, 2025 (Best Paper Award)**

- Original Paper: [arXiv:2503.03785](https://arxiv.org/abs/2503.03785)
- Project Website: [https://steveimmanuel.github.io/rs-paint](https://steveimmanuel.github.io/rs-paint)
- Original Repository: [https://huggingface.co/SteveImmanuel/RSPaint](https://huggingface.co/SteveImmanuel/RSPaint)
## Model Description

RS-Painter is an image-conditioned, diffusion-based approach for generating diverse novel-class samples for few-shot semantic segmentation in the remote sensing domain. It enforces semantic consistency by measuring the cosine similarity between the generated samples and the conditioning image, and uses the Segment Anything Model (SAM) to obtain precise segmentation masks for the generated objects. The resulting high-quality synthetic data can be used to train off-the-shelf segmentation models, significantly improving performance in low-data scenarios.

This model is compatible with the Hugging Face `diffusers` library and can be used with `StableDiffusionInpaintPipeline`.
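The semantic-consistency check described above can be sketched as a simple cosine-similarity filter over embedding vectors. The function names, the threshold value, and the choice of embedding model are illustrative assumptions, not part of this repository:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def keep_consistent_samples(cond_emb, sample_embs, threshold=0.8):
    """Return the indices of generated samples whose embedding is
    sufficiently similar to the conditioning image's embedding.
    (Hypothetical helper; threshold is an illustrative value.)"""
    return [i for i, emb in enumerate(sample_embs)
            if cosine_similarity(cond_emb, emb) >= threshold]
```

In practice the embeddings would come from an image encoder applied to the conditioning image and to each generated sample; samples below the threshold are discarded before SAM-based mask extraction.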
## Quick Start

```python
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load the pipeline (the safety checker is disabled, as is common for
# remote-sensing imagery)
pipeline = StableDiffusionInpaintPipeline.from_pretrained(
    "BiliSakura/RS-Painter-Diffusers",
    safety_checker=None,
    requires_safety_checker=False,
)
# pipeline.to("cuda")  # uncomment to run on GPU

# Load the input image and a binary mask (white = region to inpaint)
image = Image.open("input_image.png").convert("RGB")
mask = Image.open("mask.png").convert("L")

# Generate the inpainted result
result = pipeline(
    prompt="a beautiful landscape",
    image=image,
    mask_image=mask,
    num_inference_steps=50,
)

# Save the result
result.images[0].save("output.png")
```
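If you do not already have a mask file, a binary mask in the format the pipeline expects (mode `"L"`, white for the region to inpaint) can be built programmatically. This helper is a minimal sketch, not part of the repository; the function name and box coordinates are illustrative:

```python
from PIL import Image, ImageDraw

def make_box_mask(size, box):
    """Create a binary inpainting mask: white (255) marks the region to
    fill in, black (0) is kept from the input image.
    (Hypothetical helper for illustration.)"""
    mask = Image.new("L", size, 0)  # start fully black (keep everything)
    ImageDraw.Draw(mask).rectangle(box, fill=255)  # paint the inpaint region white
    return mask
```

For example, `mask = make_box_mask(image.size, (128, 128, 384, 384))` would mark a central square of a 512×512 image for inpainting.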
## Citation

If you use this model in your research, please cite:

```bibtex
@article{2025rspaint,
  title={Tackling Few-Shot Segmentation in Remote Sensing via Inpainting Diffusion Model},
  author={Immanuel, Steve Andreas and Cho, Woojin and Heo, Junhyuk and Kwon, Darongsae},
  journal={arXiv preprint arXiv:2503.03785},
  year={2025}
}
```