AllowFreeSpeech
AllowFreeSpeech t1_je3rjmv wrote
Reply to comment by currentscurrents in [D] Do we really need 100B+ parameters in a large language model? by Vegetable-Skill-9700
A 20:1 ratio of tokens to parameters, i.e. roughly 20 training tokens per parameter (so a 10B-parameter model would call for about 200B tokens).
AllowFreeSpeech t1_jc8fcdz wrote
Reply to comment by spiritus_dei in [D] ChatGPT without text limits. by spiritus_dei
Here is a link to the abstract page: https://arxiv.org/abs/2301.04589
AllowFreeSpeech t1_j2ilhjb wrote
Reply to [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
Fwiw, it may be easier to learn an algorithm represented in a Prolog-like or Lisp-like language than in various modern C-like programming languages. I am not sure.
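To illustrate the difference I have in mind (a hypothetical sketch; the encoding and the `count_nodes` helper are made up purely for illustration): the same algorithm as a C-like source string versus a Lisp-style s-expression, where every node has the same uniform (operator, operands...) shape that one generic recursion can walk.

```python
# Hypothetical illustration: Euclid's GCD as a C-like source string vs. a
# Lisp-style s-expression encoded as nested Python tuples. The s-expression
# form is a uniform tree, the property speculated to be easier to learn.

c_like = """
int gcd(int a, int b) {
    while (b != 0) {
        int t = b;
        b = a % b;
        a = t;
    }
    return a;
}
"""

# ("define", name, params, body) -- every node is (operator, *operands).
s_expr = (
    "define", "gcd", ("a", "b"),
    ("if", ("=", "b", 0),
        "a",
        ("gcd", "b", ("mod", "a", "b"))),
)

def count_nodes(tree):
    """Count nodes in the s-expression tree with one generic recursion."""
    if not isinstance(tree, tuple):
        return 1
    return 1 + sum(count_nodes(child) for child in tree)

print(count_nodes(s_expr))  # the whole program is one small, regular tree
```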
AllowFreeSpeech t1_j2el1hw wrote
Reply to comment by sockalicious in Pharmaceuticals | Free Full-Text | Quercetin in the Prevention and Treatment of Coronavirus Infections: A Focus on SARS-CoV-2 by Zilkin
That's because of your silly reductionist worldview, which assesses substances individually and not in combination. I would get into details, but this would absolutely be the worst subreddit for a neutral discussion about it.
AllowFreeSpeech t1_j1ofv9d wrote
Reply to [P] A self-driving car using Nvidia Jetson Nano, with movement controlled by a pre-trained convolution neural network (CNN) written in Taichi by TaichiOfficial
Something that could help general Taichi users is an easy way to convert a Pandas dataframe into a sensible Taichi structure. This is not trivial, since Pandas supports many different data types, but it still makes sense for Taichi to handle the conversion.
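For the numeric-only case, something like the rough sketch below is what I have in mind (the column names, dtype choice, and shapes are just placeholders); everything else Pandas allows, such as categoricals, strings, datetimes, and missing values, is the hard part.

```python
# Rough sketch: numeric-only Pandas dataframe -> Taichi field, via NumPy.
# Column names, dtype, and shapes are placeholders for illustration only.
import numpy as np
import pandas as pd
import taichi as ti

ti.init(arch=ti.cpu)

df = pd.DataFrame({"x": [1.0, 2.0, 3.0], "y": [4.0, 5.0, 6.0]})

# Go through NumPy: numeric columns -> one contiguous float32 array.
arr = df.to_numpy(dtype=np.float32)

# One Taichi field for the whole table, shape = (rows, cols).
field = ti.field(dtype=ti.f32, shape=arr.shape)
field.from_numpy(arr)

print(field.to_numpy())  # round-trips the numeric data
```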
AllowFreeSpeech t1_j1gbxjx wrote
Reply to comment by Dry_Task4749 in [P] A self-driving car using Nvidia Jetson Nano, with movement controlled by a pre-trained convolution neural network (CNN) written in Taichi by TaichiOfficial
Numba is rubbish: nothing in its docs, even read in detail, prepares you for how many unreasonable errors you will hit (a lot) with anything that is not a very trivial function. It is overrated.
AllowFreeSpeech t1_j1et95s wrote
Reply to comment by thiru_2718 in [P] A self-driving car using Nvidia Jetson Nano, with movement controlled by a pre-trained convolution neural network (CNN) written in Taichi by TaichiOfficial
Free speech doesn't mean you can post nonsense, garbage, or foreign-language material. It still requires you to stay on-topic and to stick to the language in use in a public forum. Free speech means the freedom to post reasoned, disagreeable opinions, but only ones that are on-topic and in the forum's established language.
Use your head: if I went to a Chinese forum and started posting in Japanese, how welcome would I be there?
AllowFreeSpeech t1_j1dp8av wrote
Reply to comment by thiru_2718 in [P] A self-driving car using Nvidia Jetson Nano, with movement controlled by a pre-trained convolution neural network (CNN) written in Taichi by TaichiOfficial
Isn't it obvious? I expect whatever is shared on this subreddit to be consistently in English. If you want to use other languages, don't post that shit here. This is not the place for it.
AllowFreeSpeech t1_j19jvmd wrote
Reply to [P] A self-driving car using Nvidia Jetson Nano, with movement controlled by a pre-trained convolution neural network (CNN) written in Taichi by TaichiOfficial
In the repo, why are the code comments not consistently in English, e.g. in this file?
With Taichi, is the backprop being done manually or automatically?
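For reference, automatic differentiation in Taichi looks roughly like the sketch below (not taken from the linked repo; the field names and the toy loss are made up):

```python
# Minimal sketch of Taichi's built-in autodiff, to contrast with hand-written
# backprop. Not from the linked repo; the names and the toy loss are made up.
import numpy as np
import taichi as ti

ti.init(arch=ti.cpu)

n = 8
x = ti.field(dtype=ti.f32, shape=n, needs_grad=True)
loss = ti.field(dtype=ti.f32, shape=(), needs_grad=True)

@ti.kernel
def compute_loss():
    for i in range(n):
        loss[None] += x[i] ** 2  # toy loss: sum of squares

x.from_numpy(np.arange(n, dtype=np.float32))

with ti.ad.Tape(loss=loss):  # records the forward pass, backprops on exit
    compute_loss()

print(x.grad.to_numpy())     # dL/dx_i = 2 * x_i, computed automatically
```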
AllowFreeSpeech t1_izyuqhl wrote
Is this going to be another one of those throwaway ideas like "capsule networks"...
AllowFreeSpeech t1_itte8kt wrote
Reply to [N] OpenAI Gym and a bunch of the most used open source RL environments have been consolidated into a single new nonprofit (The Farama Foundation) by jkterry1
According to Google Translate, farama means "slice" in Romanian, whereas fărâmă means "bit" (as in a very small amount). In Maori, however, faramā means pharmacist. Translating from Arabic, farama means "pharma", which fits.
AllowFreeSpeech t1_ir79coc wrote
Reply to [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
Is it limited to matmul or is it actually and demonstrably a generic algorithm optimizer?
AllowFreeSpeech t1_jeevp3b wrote
Reply to [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
What bothers me is that most researchers don't care to use any model compression or efficiency techniques; they want others to pay for their architectural inefficiencies. IMO such funding could be a bad idea if it were to stifle competition among neural architectures, and a good idea otherwise.
For example, is matrix-matrix multiplication necessary, or can matrix-vector multiplication do the job? Similarly, are dense networks necessary, or can sparse networks do the job? Alternatively, the funding could go toward engineering optical and analog hardware that is significantly more power efficient.
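As a back-of-the-envelope illustration of the dense-vs-sparse point (the layer size and density are arbitrary, not tied to any real model):

```python
# Back-of-the-envelope comparison: a dense layer's matrix-vector product vs.
# the same layer with 90% of its weights pruned and stored in CSR format.
# The layer size and density are arbitrary; only the MAC/memory ratio matters.
import numpy as np
import scipy.sparse as sp

out_dim, in_dim, density = 4096, 4096, 0.1

dense_w = np.random.randn(out_dim, in_dim).astype(np.float32)
sparse_w = sp.random(out_dim, in_dim, density=density, format="csr", dtype=np.float32)
x = np.random.randn(in_dim).astype(np.float32)

y_dense = dense_w @ x    # ~out_dim * in_dim multiply-accumulates
y_sparse = sparse_w @ x  # ~nnz multiply-accumulates, i.e. ~10x fewer here

print("dense MACs: ", out_dim * in_dim)
print("sparse MACs:", sparse_w.nnz)
```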