Submitted by Kinexity t3_125w3yy in singularity
Kinexity
Kinexity t1_jdwx3uf wrote
Reply to comment by JackD4wkins in Scientists discover how cancer cells evade immune system by BousWakebo
If it works and they pass the trials, then more power to them. The paper I saw about it was from last year, so it shouldn't be surprising that it hasn't taken off yet; it's also evidence that if it took so much longer to develop than immunotherapy, then it was indeed harder to get it to work.
Kinexity t1_jdwrahf wrote
Reply to comment by JackD4wkins in Scientists discover how cancer cells evade immune system by BousWakebo
No. It's the opposite. We don't have reliable methods to attack the DNA of cancer cells. Using the immune system to do the job for us has been proven to work safely and reliably.
Kinexity t1_jdwmi96 wrote
Reply to comment by JackD4wkins in Scientists discover how cancer cells evade immune system by BousWakebo
Why do we have to complicate the process to meet some arbitrary goal that doesn't make our cure better but rather makes it harder to deploy?
Kinexity t1_jdw9p5d wrote
Reply to AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
This doesn't have much to do with LLMs or AI. You can download the whole of English Wikipedia; it takes a fraction of your compute to open and only weighs ~60GB.
Kinexity t1_jdp5k33 wrote
Reply to comment by Unfrozen__Caveman in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
In the end, people will buy whatever is cheaper. Automation is unstoppable on all fronts because of competition.
Kinexity t1_jdde4wi wrote
Reply to comment by Corsair4 in New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
>"The challenge with integrating artificial limbs, or restoring function to arms or legs, is extracting the information from the nerve and getting it to the limb so that function is restored."
Also, the title itself says "paralyzed limbs". I'm criticizing the restoration function of those implants, not artificial limb replacement.
Kinexity t1_jdcl2ky wrote
Reply to New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
I think this is a dead end in the situation they present it. We need to learn how to repair e.g. a broken spine (which, by the way, has already been done several times), not just slap an implant on and call it done.
Kinexity t1_jcv2hkq wrote
Reply to comment by Neither_Novel_603 in 1.7 Billion Parameter Text-to-Video ModelScope Thread by Neither_Novel_603
I like how the seed has a slider to set it. Almost an r/badUIbattles candidate.
Kinexity t1_jciwhos wrote
Reply to comment by alexiuss in Skeptical yet uninformed. New to the scene. by TangyTesticles
No, the singularity is well defined if we talk about the time span in which it happens. You can define it as:
- The moment when AI evolves beyond human comprehension speed
- The moment when AI reaches its peak
- The moment when scientific progress exceeds human comprehension
There are probably other ways to define it, but those are the ones I can think of on the spot. In a classical singularity event those points in time are pretty close to each other. LLMs are a dead end on the way to AGI. They get us pretty far in terms of capabilities, but their internals are too lacking to get something more. I have yet to see ChatGPT ask me a question back, which would be a clear sign that it "comprehends" something. There is no intelligence behind it. It's like taking a machine which has a hardcoded response to every possible prompt in every possible context - it would seem intelligent while not being intelligent. That's what LLMs are, with the difference being that they are way more efficient than the scheme I described while also making way more errors.
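The hardcoded-response machine above can be sketched as a toy lookup table (everything here - the table contents and function names - is made up for illustration):

```python
# A toy version of the "hardcoded response" machine: it maps every
# (context, prompt) pair it knows to a canned reply. It looks intelligent
# on known inputs while containing no intelligence at all.

RESPONSES = {
    ("", "hello"): "Hi! How can I help you today?",
    ("hello", "what is 2+2?"): "2+2 is 4.",
}

def hardcoded_bot(context: str, prompt: str) -> str:
    """Return the canned reply for this exact (context, prompt), if any."""
    return RESPONSES.get((context, prompt), "I have no response for that.")

print(hardcoded_bot("", "hello"))           # seems smart on a known input
print(hardcoded_bot("hello", "what is 2+2?"))
print(hardcoded_bot("", "anything novel"))  # falls apart outside the table
```

The point of the analogy: an LLM is a vastly compressed, lossy approximation of such a table, not a fundamentally different kind of thing.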
Btw, don't equate that with the Chinese room thought experiment, because I'm not making a point here about whether a computer "can think". I assume it could for the sake of the argument. I'm simply saying that LLMs don't think.
Finally, saying that LLMs are a step towards singularity is like saying that chemical rockets are a step towards intergalactic travel.
Kinexity t1_jchxgjp wrote
Reply to comment by alexiuss in Skeptical yet uninformed. New to the scene. by TangyTesticles
Yeah, yeah, yeah. Honestly, it's easier to prove my point this way:
!RemindMe 10 years
The singularity will not be here in a decade. I'm going to make so much karma off of that shit when I post about it.
Kinexity t1_jch4ihg wrote
Let's start off with one thing - this sub is a circlejerk of basement dwellers disappointed with their lives who want some magical thing to come and change them. Recently it's been overflowing with group jerking-off sessions over GPT-4 being proto-AGI (which it probably isn't), which means that sanity levels are low and most people will completely oversell the singularity and how soon it will come.
Putting that aside - yes, future changes are hard to comprehend and predict. It's like the industrial revolution but on steroids, so it's hard to imagine what will happen. Put your hopes away if you don't want to get disappointed, because while all the things you mentioned should be possible, they are not guaranteed to be achieved. When it happens you'll know, but probably only after the fact. It's like it was with ozone depletion - we were shitting ourselves and trying to prevent it until levels stopped dropping and we could say in retrospect that the crisis was slowly going away. The singularity will probably be like this - you won't notice it until it's already in the past.
Kinexity t1_jc1lwah wrote
Reply to comment by light24bulbs in [P] Discord Chatbot for LLaMA 4-bit quantized that runs 13b in <9 GiB VRAM by Amazing_Painter_7692
That is fast. We are literally talking about a high end laptop CPU from 5 years ago running a 30B LLM.
Kinexity t1_jbznlup wrote
Reply to comment by remghoost7 in [P] Discord Chatbot for LLaMA 4-bit quantized that runs 13b in <9 GiB VRAM by Amazing_Painter_7692
There is a repo for CPU inference written in pure C++: https://github.com/ggerganov/llama.cpp
A 30B model can run in just over 20GB of RAM and takes ~1.2s per token on my i7-8750H. Though proper Windows support has yet to arrive, and as of right now the output is garbage for some reason.
Edit: fp16 version works. It's 4 bit quantisation that returns garbage.
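The "just over 20GB" figure is roughly what the arithmetic of 4-bit quantisation predicts. A back-of-the-envelope sketch (the overhead factor is a loose illustrative assumption, not a measurement):

```python
# Rough RAM estimate for a quantized LLM: params * bits_per_weight / 8 bytes,
# plus some overhead for activations, KV cache, and per-group scale factors.

def est_ram_gb(n_params: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Approximate resident memory in GB for quantized model weights."""
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(f"30B @ 4-bit: ~{est_ram_gb(30e9, 4):.0f} GB")   # in the ~20 GB ballpark
print(f"30B @ fp16 : ~{est_ram_gb(30e9, 16):.0f} GB")  # several times larger
```

This is why 4-bit quantisation is the difference between a 30B model fitting in laptop RAM and not fitting at all.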
Kinexity t1_jayeonp wrote
Reply to comment by TinyBurbz in Security robots patrolling a parking lot at night in California by Dalembert
There is no advantage. At best it's another idea whose creator did not think about the drawbacks. At worst it's an investor-money grift.
Kinexity t1_ja5p0gm wrote
Reply to comment by DonManuel in The ultimate solar panels are coming: perovskites with 250% more efficiency by Renu_021
The base efficiency isn't what matters. Perovskite panels suck because of their shitty lifespan, and currently there doesn't seem to be much change in that regard.
Kinexity t1_ja003li wrote
Reply to comment by [deleted] in Almost 40% of domestic tasks could be done by robots ‘within decade’ | Artificial intelligence (AI) by Gari_305
Then all the more power to you. No one is going to ban humans from cooking. Most people lack either the time or the will, and bad diet is a serious problem, which is why I think of cooking automation as a necessity.
Kinexity t1_j9yyi6o wrote
Reply to comment by Depression_God in Likelihood of OpenAI moderation flagging a sentence containing negative adjectives about a demographic as 'Hateful'. by grungabunga
That's true, but assuming they can somehow tweak flagging rates (as in, it's not just that they fed some flagging model a bunch of hateful tokens and it's automatic), then it's pretty fucked up that there are differences between races and sexes.
Obviously this rests on an assumption, and it shows that they should have been more transparent about how flagging works.
Kinexity t1_j9ygnsn wrote
Reply to comment by EconomicRegret in Almost 40% of domestic tasks could be done by robots ‘within decade’ | Artificial intelligence (AI) by Gari_305
The big assumption in your comment is that you would need people to think up recipes. Just like with image generation, it will probably turn out that a dumb model can do that just as well as a human.
Kinexity t1_j9vqhhb wrote
Reply to comment by Jayco424 in Optimism in the Singularity in face of the Fermi-Paradox by [deleted]
ASI is unnecessary for space conquest. AGI alone is enough.
Kinexity t1_j9uqy4v wrote
Reply to comment by altmorty in Almost 40% of domestic tasks could be done by robots ‘within decade’ | Artificial intelligence (AI) by Gari_305
If it were for cooking only, then what you say makes sense, but if people already had robots to do chores, then those robots might as well have cooking functionality.
Kinexity t1_j9u88wf wrote
Reply to comment by SaintLouisduHaHa in Almost 40% of domestic tasks could be done by robots ‘within decade’ | Artificial intelligence (AI) by Gari_305
I think the really big thing would be a robot which can cook. No need to go to a restaurant and pay a lot of money for decent food when you can ask your robot to make it for you when you come back from work.
Kinexity t1_j9mmiib wrote
Reply to comment by WithoutReason1729 in Why are we so stuck on using “AGI” as a useful term when it will be eclipsed by ASI in a relative heartbeat? by veritoast
The human brain runs general intelligence, so if AGI cannot exist, it would mean that the Universe is uncomputable and that our brains run on magic we basically cannot tackle at all. Even in that situation you could get something arbitrarily close to AGI.
>What's your reasoning for thinking ASI might not be able to exist?
I like looking at emergence as phase transitions. The emergence of animal intelligence from the lack of it would be one phase transition, and the emergence of human intelligence from animal intelligence would be another. It's not guaranteed to work like this, but if you look at emergence in other things it seems to work in a similar manner. I classify superintelligence as something which would be another transition above us - able to do something that human intelligence fundamentally cannot. I don't know if there is such a thing, and as such there is no proof that ASI, as I define it, can exist.
Kinexity t1_j9mh4lt wrote
Reply to comment by turnip_burrito in Why are we so stuck on using “AGI” as a useful term when it will be eclipsed by ASI in a relative heartbeat? by veritoast
Society is an emergent property of a group of humans, but not in terms of intelligence. If you took a perfectly intelligent human (whatever that means), gave him infinite amounts of time, and removed the problem of entropy breaking things, then he'd be able to do everything that the whole of human society achieved. AGI is by nature human-level intelligence, and I'd guess grouping AGIs together is unlikely to produce superintelligence.
Kinexity t1_je31fe6 wrote
Reply to comment by S3ndD1ckP1cs in IBM unveils world's first quantum computer dedicated to healthcare research by Dr_Singularity
Cancer survival rates have been steadily going up for the last several decades. Although advances are slow, they are there nonetheless.