I understand now that I should not select my model architecture based only on the performance reviews I read in blog posts. It requires digging deeper into the architecture and understanding how it works to find the right one for the use case.
First I had the symbols transparent on the background, but it was already super hard for a human to find the symbols, so I figured that in a real dataset, the laundry symbols will always be on a single-colored background. That's why I started adding random background colors.
Initially, my thinking went in the direction of: "if I train with such random backgrounds but clearly visible symbols on top, the production model will later find laundry symbols on any kind of image the user sends".
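The compositing step I describe (random solid-color background, symbol alpha-blended on top) can be sketched roughly like this. This is a minimal pure-Python illustration under my own assumptions, not the actual pipeline code; in practice a library like Pillow or OpenCV would do this far more efficiently:

```python
import random

def random_background(width, height):
    """Create a single solid random-color background as a grid of RGB tuples."""
    color = tuple(random.randint(0, 255) for _ in range(3))
    return [[color for _ in range(width)] for _ in range(height)]

def paste_symbol(background, symbol_rgba, x, y):
    """Alpha-blend a small RGBA symbol onto the background at offset (x, y)."""
    for row, line in enumerate(symbol_rgba):
        for col, (r, g, b, a) in enumerate(line):
            br, bgc, bb = background[y + row][x + col]
            alpha = a / 255
            background[y + row][x + col] = (
                round(r * alpha + br * (1 - alpha)),
                round(g * alpha + bgc * (1 - alpha)),
                round(b * alpha + bb * (1 - alpha)),
            )
    return background

# Tiny example: a 2x2 fully opaque black "symbol" on an 8x8 random background
bg = random_background(8, 8)
symbol = [[(0, 0, 0, 255)] * 2 for _ in range(2)]
bg = paste_symbol(bg, symbol, x=3, y=3)
print(bg[3][3])  # the covered pixel is now pure black: (0, 0, 0)
```

The key point is that only the background color is randomized per image; the symbol itself stays crisp and fully visible on top.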
Waterfront_xD OP t1_ituyhmt wrote
Reply to comment by StephaneCharette in [P] Object detection model learns backgrounds and not objects by Waterfront_xD
If I could, I would give two upvotes :D