Kinexity

Kinexity t1_jdde4wi wrote

>"The challenge with integrating artificial limbs, or restoring function to arms or legs, is extracting the information from the nerve and getting it to the limb so that function is restored."

Also, the title itself says "paralyzed limbs". I'm criticizing the restorative function of those implants, not artificial limb replacement.

−2

Kinexity t1_jciwhos wrote

No, the singularity is well defined if we talk about the point in time at which it happens. You can define it as:

  • Moment when AI evolves beyond human comprehension speed
  • Moment when AI reaches its peak
  • Moment when scientific progress exceeds human comprehension

There are probably other ways to define it, but those are the ones I can think up on the spot. In the classical singularity scenario those points in time are pretty close to each other.

LLMs are a dead end on the way to AGI. They get us pretty far in terms of capabilities, but their internals are too limited to get something more. I have yet to see ChatGPT ask me a question back, which would be a clear sign that it "comprehends" something. There is no intelligence behind it. It's like taking a machine which has a hardcoded response to every possible prompt in every possible context - it would seem intelligent while not being intelligent. That's what LLMs are, with the difference being that they are way more efficient than the scheme I described while also making way more errors.
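The hardcoded-response machine described above can be sketched in a few lines of Python (a toy illustration; the table entries and phrases here are made up for the example):

```python
# Toy version of the "hardcoded response" machine: it seems intelligent
# only because its authors pre-filled an answer for every (context, prompt)
# pair they anticipated. There is no comprehension behind the lookup.
responses = {
    ("", "What is 2+2?"): "4",
    ("What is 2+2?", "And doubled?"): "8",
}

def reply(context: str, prompt: str) -> str:
    # Any pair outside the table exposes the machine immediately.
    return responses.get((context, prompt), "...")

print(reply("", "What is 2+2?"))          # -> 4
print(reply("", "Why is the sky blue?"))  # -> ...
```

The point of the analogy: such a machine passes every anticipated exchange yet never asks anything back, because asking requires a model of what it doesn't know.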

Btw, don't equate this with the Chinese room thought experiment, because I'm not making a point about whether computers "can think". I assume they could for the sake of the argument. What I am saying is that LLMs don't think.

Finally, saying that LLMs are a step towards singularity is like saying that chemical rockets are a step towards intergalactic travel.

0

Kinexity t1_jch4ihg wrote

Let's start off with one thing - this sub is a circlejerk of basement dwellers disappointed with their lives who want a magical thing to come and change them. Recently it's been overflowing with group jerking-off sessions over GPT-4 being proto-AGI (which it probably isn't), which means that sanity levels are low and most people will completely oversell the singularity and how soon it will come.

Putting that aside - yes, future changes are hard to comprehend and predict. It's like the industrial revolution but on steroids, so it's hard to imagine what will happen. Put your hopes away if you don't want to get disappointed, because while all the things you mentioned should be possible, they are not guaranteed to be achieved. When it happens you'll know, but probably only after the fact. It's like it was with ozone depletion - we were shitting ourselves and trying to prevent it until the levels stopped dropping and we could say in retrospect that the crisis was slowly going away. The singularity will probably be like this - you won't notice it until it's already in the past.

−1

Kinexity t1_jbznlup wrote

There is a repo for CPU inference written in pure C++: https://github.com/ggerganov/llama.cpp

The 30B model can run on just over 20GB of RAM and takes about 1.2 sec per token on my i7-8750H. Though proper Windows support has yet to arrive, and as of right now the output is garbage for some reason.

Edit: the fp16 version works. It's the 4-bit quantisation that returns garbage.
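As a rough sanity check on those RAM numbers, here is a back-of-the-envelope estimate (my own approximation, not anything from the llama.cpp docs - weights only, with an assumed ~20% overhead for the KV cache and buffers):

```python
def model_ram_gb(n_params: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Very rough RAM estimate: weight storage times an assumed overhead factor."""
    return n_params * bits_per_weight / 8 / 1e9 * overhead

print(model_ram_gb(30e9, 4))   # 4-bit 30B: ~18 GB, consistent with "just over 20GB"
print(model_ram_gb(30e9, 16))  # fp16 30B: ~72 GB, far beyond a typical laptop
```

This is why 4-bit quantisation matters here: it's the only format of the 30B model that fits in laptop-class RAM at all.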

29

Kinexity t1_j9yyi6o wrote

That's true, but assuming they can somehow tweak flagging rates (as in, it's not like they just fed some flagging model a bunch of hateful tokens and it's fully automatic), then it's pretty fucked up that there are differences between races and sexes.

Obviously this rests on an assumption, and it shows that they should have been more transparent about how flagging works.

1

Kinexity t1_j9mmiib wrote

The human brain runs general intelligence, and as such, if AGI cannot exist, it would mean that the Universe is uncomputable and that our brains run on what is basically magic we cannot tackle at all. Even in that situation you could get something arbitrarily close to AGI.

>What's your reasoning for thinking ASI might not be able to exist?

I like looking at emergence as phase transitions. The emergence of animal intelligence from a lack of it would be one phase transition, and the emergence of human intelligence from animal intelligence would be another. It's not guaranteed to work like this, but if you look at emergence in other things, it seems to work in a similar manner. I classify superintelligence as something which would be another transition above us - able to do something that human intelligence fundamentally cannot. Idk if there is such a thing, and as such there is no proof that ASI, as I define it, can exist.

2

Kinexity t1_j9mh4lt wrote

Society is an emergent property of a group of humans, but not in terms of intelligence. If you took a perfectly intelligent human (whatever that means), gave him infinite amounts of time, and removed the problem of entropy breaking things, then he'd be able to do all the things that the whole of human society has achieved. AGI is by definition of human-level intelligence, and I'd guess grouping AGIs together is unlikely to produce superintelligence.

1