dwarfarchist9001
dwarfarchist9001 t1_ja30ctf wrote
Reply to AI that can translate whole videos ? by IluvBsissa
The technology to do all the individual steps is already there. It's just a matter of doing it yourself or waiting for some business to do the work for you. I expect such a service will be available later this year, either as a standalone product or built into YouTube Premium.
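As a sketch of those individual steps, the pipeline is roughly speech recognition, then machine translation, then speech synthesis. Every function here is a hypothetical stub standing in for a real model; only the wiring is the point:

```python
# Toy end-to-end sketch of a video-translation pipeline. Each stage is
# a stub standing in for a real model (ASR, MT, TTS respectively).
def speech_to_text(audio: bytes) -> str:
    return "hello world"  # stand-in for an automatic speech recognition model

def translate_text(text: str, src: str, dst: str) -> str:
    # stand-in for a machine translation model
    return {"hello world": "bonjour le monde"}.get(text, text)

def text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")  # stand-in for a text-to-speech model

def translate_video_audio(audio: bytes, src: str = "en", dst: str = "fr") -> bytes:
    transcript = speech_to_text(audio)
    translated = translate_text(transcript, src, dst)
    return text_to_speech(translated)

print(translate_video_audio(b"..."))
```

The engineering work is mostly in gluing real models into slots like these and keeping the dubbed audio aligned with the video.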
dwarfarchist9001 t1_j9w8701 wrote
Reply to comment by fangfried in What are the big flaws with LLMs right now? by fangfried
Going from O(n^2) to O(n log(n)) for context window size lets you have a context window of 1.3 million tokens using the same space needed for GPT-3's 8,000 tokens.
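A back-of-envelope check of those numbers, ignoring the constant factors that dominate in practice:

```python
import math

def quadratic_cost(n: int) -> float:
    """Asymptotic attention cost at O(n^2), constant factors dropped."""
    return float(n) ** 2

def nlogn_cost(n: int) -> float:
    """Asymptotic attention cost at O(n log n), constant factors dropped."""
    return n * math.log2(n)

print(quadratic_cost(8_000))      # 6.4e7
print(nlogn_cost(1_300_000))      # ~2.6e7, same ballpark as the quadratic 8k budget
```

So a 1.3M-token window under n log n scaling lands within the asymptotic budget of an 8k quadratic window, though real implementations' constants can shift the break-even point considerably.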
dwarfarchist9001 t1_j9lb1wl wrote
Reply to comment by gelukuMLG in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Yes, but how many parameters must you actually have to store all the knowledge you realistically need? Maybe a few billion parameters are enough to store the basics of every concept known to man, while more specific details are stored in an external file that the neural net can access with API calls.
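A toy sketch of that split (the store and its keys are made up for illustration): general knowledge lives in the model's weights, while specifics sit in an external store the model queries on demand:

```python
# Hypothetical external knowledge store; a real system might back this
# with a database or web API rather than an in-memory dict.
EXTERNAL_STORE = {
    "boiling_point_water_c": 100,
    "speed_of_light_m_s": 299_792_458,
}

def lookup(key: str):
    """Stand-in for the API call the model would make for specific
    facts it does not keep in its own parameters."""
    return EXTERNAL_STORE.get(key, "unknown")

print(lookup("speed_of_light_m_s"))
```

This is essentially the retrieval-augmented pattern: the model only needs enough parameters to know what to ask for and how to use the answer.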
dwarfarchist9001 t1_j9kpzs8 wrote
Reply to comment by Borrowedshorts in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Humans don't suffer from overfitting even when they train on the same data repeatedly.
dwarfarchist9001 t1_j9knt85 wrote
Reply to comment by gelukuMLG in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
It was shown recently that for LLMs ~0.01% of parameters explain >95% of performance.
dwarfarchist9001 t1_j8p0hmt wrote
Reply to comment by Big_Foot_7911 in What will the singularity mean? Why are we persuing it? by wastedtime32
>Singularity defines the point in a function where it takes an infinite value,
It doesn't need to be infinite; it can also be undefined or otherwise not well-behaved. For instance, the function 1/x is never infinite for any finite value of x, but it has a singularity because it is undefined at x=0. Another example is f(x) = |x|, which has a definite value of y=0 at x=0, yet x=0 is still considered a singularity for the purposes of calculus because the function is not differentiable at that point.
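A quick numerical illustration of the 1/x case: the function is finite at every nonzero x, yet its values grow without bound as x approaches 0, so no value assigned at x=0 could make it continuous there.

```python
def f(x: float) -> float:
    """1/x: finite everywhere except x = 0, where it is undefined."""
    return 1.0 / x

for x in (0.1, 0.01, 0.001):
    print(x, f(x))   # the values blow up as x shrinks toward 0
```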
dwarfarchist9001 t1_j8oqtw9 wrote
The word singularity comes from math, where it means a point at which a function becomes discontinuous (i.e. the value at that point is either undefined or infinite). Things like evolution, immortality, and fusing together into a hive mind have nothing to do with it, at least not inherently.
The idea of the technological singularity comes from the observation that the rate of new technological advances is getting faster over time, such that our total level of technology is growing hyperbolically. If human technology continues growing at the current hyperbolic rate, or something close to it, then relatively soon we will reach the singularity on the graph of technological innovation: the point in time where the rate of new innovation becomes infinite.
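What distinguishes hyperbolic growth from ordinary exponential growth is that it diverges at a finite time. A minimal sketch, using 1/(T_S - t) with an arbitrary placeholder date T_S:

```python
T_S = 2.0  # arbitrary placeholder "singularity" time, for illustration only

def hyperbolic(t: float) -> float:
    """Hyperbolic growth 1/(T_S - t): finite for t < T_S, but it
    diverges as t approaches T_S (exponential growth never does this
    at any finite time)."""
    return 1.0 / (T_S - t)

for t in (0.0, 1.0, 1.9, 1.99):
    print(t, hyperbolic(t))   # values explode as t nears T_S
```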
The general assumption of this subreddit is that the technological singularity occurs as the result of the creation of a self improving AI which will then proceed to rapidly create better and better versions of itself. The hope is that if AI is controllable or at least benevolent it could bring about a golden age where all science is known and essentially all economic activity is automated.
dwarfarchist9001 t1_ja3156t wrote
Reply to comment by hducug in How Far to the Technological Singularity? by FC4945
Some people mistakenly believe that the technological singularity is synonymous with AGI. As for timelines, AGI by 2025-2030 is unlikely but not impossible.