Submitted by Avelina9X t3_10gxs5i in MachineLearning
So, these pictures below are taken from a 144p video on YouTube. You cannot tell me that these aren't CNN upscaling artefacts.
So this raises the question: how exactly is this implemented? What model are they using that is tiny enough to run on (I assume) WebGL2? Is it a CNN inside GLSL shaders? Is it something else? CPU side or GPU side?
And also... how have I not seen a single other person pointing this out anywhere on the internet? Believe me, I looked. Ain't no one talking about this.
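If it really is a CNN in the shaders, I'd imagine each layer being a single fragment-shader pass over a texture, something roughly like this (just my own sketch with made-up weights and names, not whatever YouTube actually ships):

```typescript
// Rough sketch of one CNN layer (a 3x3 convolution + ReLU) as a WebGL2
// fragment shader pass. Single input channel for brevity; a real layer
// would also sum over input channels and have learned weights baked in.
const convFragmentShader = `#version 300 es
precision highp float;
uniform sampler2D u_src;    // previous layer / decoded video frame
uniform float u_kernel[9];  // 3x3 weights for this output channel
out vec4 outColor;

void main() {
  ivec2 p = ivec2(gl_FragCoord.xy);
  float acc = 0.0;
  for (int dy = -1; dy <= 1; dy++) {
    for (int dx = -1; dx <= 1; dx++) {
      float v = texelFetch(u_src, p + ivec2(dx, dy), 0).r;
      acc += v * u_kernel[(dy + 1) * 3 + (dx + 1)];
    }
  }
  outColor = vec4(max(acc, 0.0)); // ReLU
}`;

function compileShader(gl: WebGL2RenderingContext, type: number, src: string): WebGLShader {
  const shader = gl.createShader(type)!;
  gl.shaderSource(shader, src);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader) ?? "shader compile failed");
  }
  return shader;
}

// Each layer would be one render pass into a framebuffer-attached texture,
// ping-ponging textures between passes until the final upscaled frame is
// drawn to the canvas.
const gl = document.createElement("canvas").getContext("webgl2")!;
const shader = compileShader(gl, gl.FRAGMENT_SHADER, convFragmentShader);
```

If it worked like that, everything would stay GPU side and the CPU would just issue draw calls, which is why I'd expect it to show up as GPU load rather than CPU.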
EDIT/UPDATE: this is now doing it in ALL videos in Chrome. It only happens in Chrome, not in Discord or Edge, so it's not GPU/Windows fuckery. But the strange thing is that other friends testing this with the same version of Chrome ***DON'T*** have this? And the even stranger thing is... this is running on Intel Integrated Graphics...
IntelArtiGen t1_j55at5j wrote
I don't really see how or why they would do it. What's the video? You can check the codec with right click > "Stats for nerds"; the codec should tell you which algorithm was used to encode/decode the video. Using CNNs client-side for this task would probably be quite CPU/GPU intensive and I doubt they would do it (except perhaps as an experiment). And using CNNs server-side wouldn't make sense if it increases the size of the data download.
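For a rough idea of the client-side cost, here's a back-of-envelope estimate (all layer sizes are my own made-up assumptions, not anything YouTube has published):

```typescript
// Back-of-envelope cost of a small CNN upscaler per frame.
// Layer sizes below are illustrative assumptions, not an actual model.
const inW = 256, inH = 144;  // 144p source frame
const scale = 4;             // hypothetical 4x upscale factor
const layers = [
  { cin: 3,  cout: 32, k: 3 },                 // feature extraction
  { cin: 32, cout: 32, k: 3 },                 // hidden layer
  { cin: 32, cout: 3 * scale * scale, k: 3 },  // pixel-shuffle style output
];

// Multiply-accumulates for one conv layer applied at input resolution.
const macsPerLayer = (l: { cin: number; cout: number; k: number }): number =>
  inW * inH * l.cin * l.cout * l.k * l.k;

const totalMacs = layers.reduce((sum, l) => sum + macsPerLayer(l), 0);
const fps = 30;
console.log(`~${(totalMacs / 1e6).toFixed(0)} MMACs per frame`);
console.log(`~${((totalMacs * fps) / 1e9).toFixed(1)} GMACs per second at ${fps} fps`);
// Even this tiny network comes out around 25-30 GMACs/s -- trivial for a
// discrete GPU, a noticeable but not impossible load on integrated graphics.
```

So it's not out of the question on a laptop iGPU, but you'd expect it to show up clearly in GPU utilization while the video plays.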
It does look like CNN artifacts.