Submitted by t0ns0fph0t0ns t3_110h9ey in MachineLearning
Comments
vzq t1_j88vobl wrote
Big Guess Who energy!
Appropriate_Fish_451 t1_j88wdns wrote
That's one of the most disturbing faces I've ever seen.
prehensile_dick t1_j8a37r0 wrote
wow you can barely tell they are fake /s
tdgros t1_j8an84z wrote
they're the same picture
leeliop t1_j8azsq4 wrote
When is the NFT drop
ferndoll6677 t1_j8b4kz9 wrote
Most epic game. How many questions would be required?
JackBlemming t1_j8c2llw wrote
Everyone involved in this project is being paid too much.
vzq t1_j8cblxe wrote
Not a lot if you play optimally.
log₂(1e6) is only about 20.
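(With roughly 1e6 candidate faces and ideal yes/no questions, each answer halves the remaining pool, so about 20 questions suffice. A quick sanity check in Python, where the pool size of 1,000,000 is just the figure assumed in the comment above:)

```python
import math

# Each optimal yes/no question halves the candidate pool,
# so the worst case needs ceil(log2(pool_size)) questions.
pool_size = 1_000_000  # assumed number of candidate faces
questions = math.ceil(math.log2(pool_size))
print(questions)  # 20
```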
piman01 t1_j8cbt1b wrote
Wow nice. Surprisingly good performance
CyberDainz t1_j8eycl2 wrote
112x112 resolution. Completely useless in 2k23
cheeseler t1_j8h1h3j wrote
Something about the last row reminds me of Zoombinis
inagy t1_j9ch6ox wrote
This reminds me of the Best of Talking Heads album cover :)
t0ns0fph0t0ns OP t1_j88sq1x wrote
>State-of-the-art face recognition models show impressive accuracy, achieving over 99.8% on Labeled Faces in the Wild (LFW) dataset. However, these models are trained on large-scale datasets that contain millions of real human face images collected from the internet. Web-crawled face images are severely biased (in terms of race, lighting, make-up, etc.) and often contain labeling noise. Most importantly, these face images are collected without explicit consent, raising more pressing privacy and ethical concerns. To avoid the problems associated with real face datasets, we introduce a large-scale synthetic dataset for face recognition, obtained by photo-realistic rendering of diverse and high-quality digital faces using a computer graphics pipeline. We compare our method to SynFace, a recent method trained on GAN-generated synthetic faces, and reduce the error rate on LFW by 52.5% (accuracy from 91.93% to 96.17%). We first demonstrate that aggressive data augmentation can significantly help reduce the domain gap between our synthetic faces and real face images. Taking advantage of having full control over the rendering pipeline, we also study how each attribute (e.g., variation in facial pose, accessories, and textures) affects the accuracy. Finally, by fine-tuning the network on a smaller number of real face images that could reasonably be obtained with consent, we achieve accuracy that is comparable to the methods trained on millions of real face images, while alleviating the problems associated with large datasets.
>
>project page: microsoft.github.io
>
>video presentation: youtube.com
>
>paper: arxiv.org
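For reference, the 52.5% figure is a relative reduction in LFW error rate: going from 91.93% to 96.17% accuracy takes the error from 8.07% to 3.83%, and (8.07 - 3.83) / 8.07 ≈ 52.5%.

The abstract's main lever for closing the synthetic-to-real domain gap is aggressive data augmentation. Below is a minimal sketch of what such a pipeline could look like, assuming torchvision-style transforms on 112x112 face crops; the specific transforms and parameters are illustrative assumptions, not the paper's exact recipe.

```python
from torchvision import transforms

# Hypothetical "aggressive" augmentation pipeline for 112x112 face crops.
# Transform choices and parameters are illustrative, not the paper's recipe.
train_augment = transforms.Compose([
    transforms.RandomResizedCrop(112, scale=(0.8, 1.0)),       # random crop/zoom
    transforms.RandomHorizontalFlip(p=0.5),                     # mirror faces
    transforms.ColorJitter(brightness=0.4, contrast=0.4,
                           saturation=0.4, hue=0.1),            # lighting/color shifts
    transforms.RandomGrayscale(p=0.1),                          # occasional grayscale
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),   # simulate camera blur
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# Usage: tensor = train_augment(pil_face_image)  # -> (3, 112, 112) tensor
```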