EarthquakeBass
EarthquakeBass t1_jdy7796 wrote
Reply to comment by SmellElectronic6656 in [D] Can we train a decompiler? by vintergroena
It’s very useful for malware analysis. In malware it’s all about hiding your tracks, so clearing up the intent of even just some code helps white hats a lot. Example: perhaps the malware inserts some magic bytes into a file to exploit an auto-run vulnerability. ChatGPT might recognize that pattern from its training data much more quickly than a human would.
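Something like this toy Python sketch (the byte values, file name, and function name are made up for illustration, not taken from any real sample). A decompiler surfaces this as opaque byte shuffling, but an LLM can often label the intent from context:

```python
# Hypothetical illustration: the magic bytes and file name are placeholders.
MAGIC = b"\x4d\x5a\x90\x00"  # fake "magic" header a vulnerable parser might auto-run

def plant_payload(path: str) -> None:
    """Prepend magic bytes so a naive file handler treats the file as runnable."""
    with open(path, "rb") as f:
        original = f.read()
    with open(path, "wb") as f:
        f.write(MAGIC + original)  # header now matches the auto-run signature

# Demo on a throwaway file.
with open("sample.bin", "wb") as f:
    f.write(b"harmless contents")
plant_payload("sample.bin")
```

A human reverse engineer sees loads and stores; a model that has seen thousands of similar loaders in its training data can guess "this is planting a crafted header" almost immediately.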
EarthquakeBass t1_j64jhk3 wrote
Reply to comment by currentscurrents in [R] Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers by currentscurrents
https://en.m.wikipedia.org/wiki/Huang%27s_law
A bit of marketing flair for sure, but I think at the crossroads of hardware improvements, ensembling, clever optimizations, etc., we will keep improving models at a pretty darn fast pace. GPT-3 alone has dramatically improved the productivity of engineers, I’m sure of it.
EarthquakeBass t1_je6wa0g wrote
Reply to [D] The best way to train an LLM on company data by jaxolingo
I think Azure might actually have support for private OpenAI stuff. Azure OpenAI Service lets you deploy the models inside your own tenant, so company data doesn’t go through the public OpenAI API.
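Rough sketch of what querying a private deployment looks like with the openai Python SDK (v0.x); the endpoint, key, deployment name, and API version below are placeholders for your own resource:

```python
# Sketch: calling a private Azure OpenAI deployment via the openai SDK (v0.x).
# Endpoint, key, and deployment name are placeholders from your Azure resource.
import openai

openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"  # your Azure endpoint
openai.api_version = "2023-05-15"  # an Azure OpenAI API version
openai.api_key = "YOUR-AZURE-KEY"  # from the Azure portal, not api.openai.com

resp = openai.ChatCompletion.create(
    engine="my-gpt35-deployment",  # the deployment you created in Azure
    messages=[{"role": "user", "content": "Summarize our onboarding doc."}],
)
print(resp.choices[0].message["content"])
```

Note that `engine` refers to your Azure deployment name rather than a model name, which is how the SDK routes requests to your private instance instead of OpenAI’s shared endpoint.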