Avelina9X
Avelina9X t1_ja2pi2q wrote
Reply to comment by Cheetus_Deleteus_ in Police cat~ by asilvertintedrose
All Cats Are Bastards
Avelina9X OP t1_j5o47kr wrote
Reply to comment by muchcharles in [D] Did YouTube just add upscaling? by Avelina9X
Not in Task Manager, but at least something would show up in GPU-Z, like a clock increase over idle, memory bus usage, GPU utilisation, thermals, power draw, etc.
Avelina9X OP t1_j57babi wrote
Reply to comment by Syzygianinfern0 in [D] Did YouTube just add upscaling? by Avelina9X
I have neither a 30 nor a 40 series card... plus this is running on integrated graphics.
Avelina9X OP t1_j57b8cf wrote
Reply to comment by currentscurrents in [D] Did YouTube just add upscaling? by Avelina9X
It's not even running on my 1660 Ti. It's running on my integrated Intel graphics. The dedicated graphics is completely idle during this. Aaaand there's nothing related in the Chrome flags at all.
Avelina9X OP t1_j55l3ks wrote
Reply to comment by wintermute93 in [D] Did YouTube just add upscaling? by Avelina9X
What version of Chrome? What's your region? I'm in the UK, using a GTX 1660 Ti (but with Chrome running on Intel Iris graphics) and Chrome version 109.0.5414.75 (Official Build) (64-bit) (cohort: Stable).
Avelina9X OP t1_j55kz89 wrote
Reply to comment by NotARedditUser3 in [D] Did YouTube just add upscaling? by Avelina9X
Correction: Vimeo does this. It's only in Chrome. But other people also running 109.0.5414.75 (Official Build) (64-bit) (cohort: Stable) do not see this behaviour.
Avelina9X OP t1_j55kvbx wrote
Reply to comment by tomvorlostriddle in [D] Did YouTube just add upscaling? by Avelina9X
Okay. This is occurring in Chrome, but only Chrome (not Discord or Edge). It happens on YouTube and Vimeo. But this doesn't occur in other people's Chrome even though we're on the same version, 109.0.5414.75 (Official Build) (64-bit) (cohort: Stable).
Avelina9X OP t1_j55kcjo wrote
Reply to comment by F1ckReddit in [D] Did YouTube just add upscaling? by Avelina9X
Yeah, that's really weird. We're documenting Google Chrome silently adding upscaling. I think it's a really worthwhile discussion for the community to figure out what model it's using, as well as how they're implementing it in a cross-platform, GPU-agnostic way that is buttery smooth and doesn't use a ton of resources.
Avelina9X OP t1_j55hcfz wrote
Reply to comment by NotARedditUser3 in [D] Did YouTube just add upscaling? by Avelina9X
It's a GTX 1660 Ti in a tablet laptop. No other video platform does this.
Avelina9X OP t1_j55h54h wrote
Reply to comment by IntelArtiGen in [D] Did YouTube just add upscaling? by Avelina9X
I think it's client-side, which is why I mentioned it's perhaps using a GLSL-based CNN. That's absolutely possible in WebGL2, and I've been experimenting with that sort of tech myself (not for upscaling, but just as a proof-of-concept CNN in WebGL).
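For example, a single 3x3 conv layer maps pretty naturally onto a fragment shader pass. This is just a sketch of the general idea with placeholder uniform names and weights, not whatever Chrome/YouTube might actually be running:

```typescript
// Sketch only: one 3x3 convolution as a WebGL2 fragment shader pass.
// Each output pixel is a weighted sum of its neighbourhood, so a conv layer
// becomes a render-to-texture pass. Weights are placeholders, not a trained model.
const convFragmentShader = `#version 300 es
precision highp float;

uniform sampler2D u_input;   // previous layer's output (or the decoded video frame)
uniform vec2 u_texelSize;    // 1.0 / input resolution
uniform float u_weights[9];  // 3x3 kernel uploaded from JS
uniform float u_bias;

in vec2 v_uv;
out vec4 outColor;

void main() {
  float acc = 0.0;
  for (int dy = -1; dy <= 1; dy++) {
    for (int dx = -1; dx <= 1; dx++) {
      vec2 offset = vec2(float(dx), float(dy)) * u_texelSize;
      float px = texture(u_input, v_uv + offset).r; // single channel for brevity
      acc += px * u_weights[(dy + 1) * 3 + (dx + 1)];
    }
  }
  // ReLU, then write out; the result is rendered to a texture and fed to the next pass.
  outColor = vec4(max(acc + u_bias, 0.0), 0.0, 0.0, 1.0);
}`;
```

You'd pair that with a trivial full-screen-quad vertex shader, render each layer into a framebuffer texture, and chain passes; multiple channels would get packed into RGBA or multiple render targets.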
Avelina9X OP t1_j55gyws wrote
Reply to comment by f10101 in [D] Did YouTube just add upscaling? by Avelina9X
Here's the video link: https://www.youtube.com/watch?v=yPUGPLAfhTk
But if YouTube are doing A/B testing, your hardware/account/IP/region might not be marked for rollout yet.
Avelina9X OP t1_j558wqk wrote
Reply to comment by LiquidDinosaurs69 in [D] Did YouTube just add upscaling? by Avelina9X
I'm not going crazy, right? Those are absolutely CNN upscaling artefacts.
Submitted by Avelina9X t3_10gxs5i in MachineLearning
Avelina9X OP t1_j4vngcs wrote
Reply to comment by C0hentheBarbarian in [D] Has any work been done on VQ-VAE Language Models? by Avelina9X
Ahah! It seems like the reason I couldn't find anything is that I was being too specific about text sequence models and disregarding the domain of audio. Thank you!
Avelina9X OP t1_j4vn6su wrote
Reply to comment by gunshoes in [D] Has any work been done on VQ-VAE Language Models? by Avelina9X
Ahhhh! So it seems like this is something that's been explored in the parallel domains of TTS and ASR rather than in pure text LMs. Thanks for pointing me in this direction!
Avelina9X OP t1_j4vn244 wrote
Reply to comment by dojoteef in [D] Has any work been done on VQ-VAE Language Models? by Avelina9X
Thank you for the resource! I'll have a deep dive into this!
Submitted by Avelina9X t3_109yuvi in MachineLearning
Avelina9X t1_jbt4o8y wrote
Reply to [D] What's the Time and Space Complexity of Transformer Models Inference? by Smooth-Earth-9897
So the attention mechanism has O(N^2) time and space complexity in the sequence length N. However, if you are memory constrained, it is possible to get the memory requirement down to O(N) by computing only one token at a time and caching the previous keys and values, since you never materialise the full N x N attention matrix; each new token then costs O(N) compute against the cache. This is only really possible at inference time, and it requires that the architecture was implemented with caching in mind.
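Rough sketch of the caching idea (toy single-head attention with plain arrays and made-up names, not any particular library's API):

```typescript
// Toy single-head attention with a KV cache: at each decode step we append the
// new key/value and attend the new query over everything cached, so the full
// N x N attention matrix is never materialised.
type Vec = number[];

const dot = (a: Vec, b: Vec) => a.reduce((s, x, i) => s + x * b[i], 0);

function softmax(xs: Vec): Vec {
  const m = Math.max(...xs);
  const exps = xs.map(x => Math.exp(x - m));
  const z = exps.reduce((s, x) => s + x, 0);
  return exps.map(x => x / z);
}

class KVCache {
  keys: Vec[] = [];
  values: Vec[] = [];
}

// One decoding step: O(N * d) time against the cache, O(N * d) cache memory.
function attendStep(cache: KVCache, q: Vec, k: Vec, v: Vec): Vec {
  cache.keys.push(k);
  cache.values.push(v);
  const d = q.length;
  const scores = cache.keys.map(key => dot(q, key) / Math.sqrt(d));
  const weights = softmax(scores);
  // Weighted sum of the cached values.
  return cache.values
    .map((val, i) => val.map(x => x * weights[i]))
    .reduce((acc, val) => acc.map((x, j) => x + val[j]));
}
```

Per-step work and the cache both grow linearly in N, so generating N tokens is still O(N^2) compute overall, but peak memory drops from O(N^2) to O(N).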