Submitted by swdsld t3_z2n3cl in MachineLearning

We made a background removal tool named transparent-background, based on our recent work "Revisiting Image Pyramid Structure for High Resolution Salient Object Detection" (InSPyReNet), which will be published at ACCV 2022.

For better performance, we trained our model on various publicly available salient object detection datasets. We think our tool actually works better than currently available tools such as Apple's recent background removal tool for iOS and macOS, or https://www.remove.bg.

You can use our tool from the command line or through a Python API.

Please visit our GitHub repository and try it out on your own images and videos.

transparent-background: https://github.com/plemeri/transparent-background

InSPyReNet: https://github.com/plemeri/InSPyReNet

Here is a sample result comparing Apple's recent background removal tool with ours.

Input image

Result from Apple's recent background removal tool

Result from our tool "transparent-background"

Comments

ajt9000 t1_ixhshux wrote

Looks great! Nice job

fl0o0ps t1_ixidpn0 wrote

By pyramid do you mean Itti & Koch's work?

swdsld OP t1_ixjolp8 wrote

Thank you for your attention! That's right. Previous deep-learning-based segmentation models already used pyramid structures, and we improved this structure for high-resolution scenarios.

fl0o0ps t1_ixm6mun wrote

Cool. What's your ROI graph look like for the saliency detection part? Or is that classified?

swdsld OP t1_ixngmf0 wrote

Salient object detection methods, including ours, use deep learning to output a pixel-wise binary classification result.
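As a toy illustration of what "pixel-wise binary classification" means here (this is a generic sketch, not the actual InSPyReNet code): the network emits a per-pixel foreground probability, and thresholding it yields the binary saliency mask used as the cutout matte.

```python
import numpy as np

# Hypothetical per-pixel foreground probabilities, as a sigmoid output
# layer of a saliency model might produce (values in [0, 1]).
prob_map = np.array([
    [0.05, 0.10, 0.08],
    [0.20, 0.95, 0.90],
    [0.15, 0.88, 0.92],
])

# Pixel-wise binary classification: 1 = salient (foreground), 0 = background.
binary_mask = (prob_map > 0.5).astype(np.uint8)
print(binary_mask)
```

The mask can then be used directly as the alpha channel of the output image.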

earthsworld t1_ixij3th wrote

Not bad, but still seeing a ton of white pixels around your edges when the image is placed on a darker background. The Apple results are much better in comparison.

swdsld OP t1_ixjnqnl wrote

Thank you for your feedback! However, could you check the results once more? Our result, the last one, clearly segments the propeller and landing gear correctly, while Apple's result misses the propeller and shows many white pixels around the landing gear.

Also, we agree that in other cases Apple's tool might produce better results than ours, but I just wanted to share my work, which can perform better in some cases with (I guess) relatively fewer training images than official tools from companies like Apple.

boyetosekuji t1_ixmnpd0 wrote

Great job! I tried the popular remove.bg service, and your result looks better than theirs. https://i.imgur.com/ZCIURkA.png

swdsld OP t1_ixnf5at wrote

Thank you very much for noticing!

boyetosekuji t1_ixnhgzt wrote

Have you run into any use cases where bg removal suffers?

swdsld OP t1_ixnicaw wrote

We also have some problems detecting all objects in a given image. This is quite a common problem for salient object detection because objects in the datasets are usually center-biased. It could be our next research topic!

boyetosekuji t1_ixnqtst wrote

What's the reason for the smoke effect https://imgur.com/a/GGdWIig instead of a sharp outline? Open in a new tab (white bg).

swdsld OP t1_ixou56r wrote

Thank you for letting me know about the problem. It's actually quite common in these kinds of background removal tools. When the model is not sure whether a region is part of the salient object, it produces such artifacts. This can happen with both our method and other companies' tools. We can mitigate the problem by training the model on more varied scenes, which is what I'm planning for a future release.
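The "smoke" is the model emitting intermediate alpha values where it is uncertain. As a generic workaround (a post-processing sketch with NumPy and Pillow, not part of transparent-background itself), you can harden the alpha matte by snapping uncertain values to fully opaque or fully transparent, trading soft edges for a sharp cutout:

```python
import numpy as np
from PIL import Image

def harden_alpha(rgba: Image.Image, threshold: int = 128) -> Image.Image:
    """Snap semi-transparent alpha values to 0 or 255, removing
    the hazy 'smoke' at the cost of softer edge blending."""
    arr = np.array(rgba.convert("RGBA"))
    alpha = arr[..., 3]
    arr[..., 3] = np.where(alpha >= threshold, 255, 0).astype(np.uint8)
    return Image.fromarray(arr, mode="RGBA")

# Tiny example: a 2x1 image with one hazy (alpha=100) and one solid
# (alpha=200) pixel; after hardening they become 0 and 255 respectively.
img = Image.fromarray(
    np.array([[[255, 0, 0, 100], [0, 255, 0, 200]]], dtype=np.uint8), "RGBA"
)
out = harden_alpha(img)
print(list(out.getdata()))
```

The threshold is a tuning knob; a higher value removes more haze but can eat into thin structures like the propeller edges discussed above.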

Tools like remove.bg and Apple's native tool are commercial, so they have likely been trained on massive datasets including privately annotated ground truths. We don't think we can outperform them in every scenario, but I'm doing my best training my method on the datasets currently available to me.

Stay tuned for the future updates!

boyetosekuji t1_ixovllt wrote

I've got some good images with your tool; this one was not bad either, aesthetically. Would implementing a depth map work? Although it would add another GPU-intensive task. Keep going, good luck.

swdsld OP t1_ixoxfyf wrote

Using depth map would definitely be a great option. Thank you for your kind comment!

boyetosekuji t1_ixoywrk wrote

If you want to check depth maps, take a look at this paper; they claim better edge detection, which is valuable for e.g. the wires of the San Francisco bridge, etc.

Due-Philosopher-1426 t1_ixmtp5o wrote

I assume the code is open source. Are there any licensing requirements if someone were to use this commercially?

swdsld OP t1_ixnfsk0 wrote

We use the MIT license, which means you can use our code freely, including commercially, as long as all copies of the software or its substantial portions include a copy of the MIT License terms and a copyright notice. Please check the License section of our repository. Thanks!
