CommunismDoesntWork t1_jef7r37 wrote
Reply to comment by Relevant_Ad7319 in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
Unix adopted the philosophy that text is the ultimate API, which is why everything on Linux can be done through the CLI, including moving the mouse. And LLMs are very good at working with text. So, in a sense, everything already has an API.
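A quick sketch of the idea: standard tools all speak plain text through pipes, and even the mouse is drivable from text commands (the mouse line assumes the X11 utility `xdotool` is installed; the pipeline itself uses only coreutils):

```shell
# Text pipelines: every tool reads and writes plain text, so any
# program (or LLM) that can emit text can drive them.
printf 'spacex\ntesla\nneuralink\n' | sort | tr 'a-z' 'A-Z'

# Even the mouse is scriptable via a text command
# (requires the xdotool utility under X11):
# xdotool mousemove 640 360 click 1
```

Anything an LLM can type into a terminal, it can automate this way.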
CommunismDoesntWork t1_je1bklp wrote
Reply to [D] FOMO on the rapid pace of LLMs by 00001746
Maybe figure out how to train an LLM with far less data and much faster?
CommunismDoesntWork t1_jdvoemp wrote
Reply to Story Compass of AI in Pop Culture by roomjosh
Now try to do the reverse. Given all this data, have it come up with a plot based on evil/good, cautionary/optimistic score. I wonder what a plot in the top left would be like.
CommunismDoesntWork t1_jdqzp8i wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
How do you know GPT runs in O(1)? Different prompts seem to take more or less time to compute.
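For intuition, here's a toy cost model (all constants hypothetical, not measured from GPT): if generating token i means attending over the prompt plus everything generated so far, total work grows with both prompt length and output length, so per-request wall-clock time is anything but constant:

```python
def toy_generation_cost(prompt_tokens: int, output_tokens: int, c: float = 1.0) -> float:
    """Hypothetical autoregressive cost model: producing token i attends
    over all (prompt_tokens + i) earlier tokens at unit cost c each."""
    return sum(c * (prompt_tokens + i) for i in range(output_tokens))

short_cost = toy_generation_cost(prompt_tokens=10, output_tokens=5)
long_cost = toy_generation_cost(prompt_tokens=1000, output_tokens=500)
print(short_cost, long_cost)  # the long request costs orders of magnitude more
```

O(1) could only describe a single forward pass at a fixed context size, not a whole response.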
CommunismDoesntWork t1_jdia6kb wrote
Reply to comment by BinarySplit in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
It can do this just fine
CommunismDoesntWork t1_jdefd1g wrote
Reply to comment by orrk256 in New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
Can you stop sucking Marx's cock for a second?
CommunismDoesntWork t1_jddvzjp wrote
Reply to comment by cyberFluke in New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
Stalin isn't going to suck your dick bro
CommunismDoesntWork t1_jddbke7 wrote
Reply to comment by Enzo-chan in New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
>Tesla still uses lithium-ion batteries which is the norm for any EVs today,
And it was unheard of like 6 years ago. Are we just going to pretend auto companies would have switched to EVs the way they are now if Tesla hadn't come along and started eating their lunch? That's a revolution.
SpaceX revolutionized space if you were already in the space industry. For average people, we won't see the ramifications until Starship, agreed, but everyone in the space industry felt the revolution in 2015 when SpaceX landed the Falcon 9 for the first time.
CommunismDoesntWork t1_jddayh4 wrote
Reply to comment by [deleted] in New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
Define wealthy? His dad was an engineer and his mom a regular model (not a supermodel). That's upper middle class at best. "But the emerald mine": his dad invested $40k of his life savings in a 50% stake in a mine and doubled his money over 10 years, which barely beats the stock market.
CommunismDoesntWork t1_jddagpj wrote
Reply to comment by BookOfWords in New 'biohybrid' implant will restore function in paralyzed limbs | "This interface could revolutionize the way we interact with technology." by chrisdh79
Stop spreading the lie that Elon didn't found Tesla and SpaceX. First, SpaceX was solely founded by Elon, and he's been the CEO and chief engineer since inception.
Tesla was co-founded by 5 people. Elon Musk and JB Straubel were independently going to start an EV company using tech from AC Propulsion. AC Propulsion then introduced them to Marc and Martin because they wanted to do the same thing, and the 4 decided to team up, with Musk providing the initial funding for Tesla. Tesla was nothing but a piece of paper when they teamed up. The courts decided all of them get to call themselves founders because there's no hard and fast rule for what counts as being a founder.
CommunismDoesntWork t1_jdd87tx wrote
Reply to comment by KerfuffleV2 in [P] New toolchain to train robust spiking NNs for mixed-signal Neuromorphic chips by FrereKhan
I haven't, that's really cool though!
CommunismDoesntWork t1_jdcxx5u wrote
Reply to comment by FrereKhan in [P] New toolchain to train robust spiking NNs for mixed-signal Neuromorphic chips by FrereKhan
>But in principle there's nothing standing in the way of building a 100B parameter SNN.
That's awesome. In that case, I'd pivot my research if I were you. These constrained optimization problems on limited hardware are fun, and I'm sure they have some legitimate uses, but LLMs have proven that scale is king. Going in the opposite direction and trying to get SNNs to scale to billions of parameters might be world-changing.
NNs are only going to get bigger and more costly to train. If SNNs and their accelerators can speed up training and ultimately reduce costs, that would be massive. You could be the first person in the world to create a billion-parameter SNN. Once you show the world it's possible, the floodgates will open.
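For anyone wondering what the "spiking" part actually means, here's a minimal leaky integrate-and-fire neuron in plain Python (toy constants of my own choosing, nothing to do with the linked toolchain):

```python
def lif_simulate(inputs, tau=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays by
    a factor tau each step, accumulates the input current, and emits a
    spike (then resets to zero) whenever it crosses threshold.
    Returns the binary spike train."""
    v, spikes = 0.0, []
    for current in inputs:
        v = tau * v + current  # leak, then integrate
        if v >= threshold:     # fire
            spikes.append(1)
            v = 0.0            # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_simulate([0.4, 0.4, 0.4, 0.0, 0.9, 0.9]))
```

That hard all-or-nothing spike is non-differentiable, which is exactly why training SNNs at scale needs tricks like the surrogate gradients this kind of toolchain provides.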
CommunismDoesntWork t1_jdcpcv9 wrote
Reply to comment by FrereKhan in [P] New toolchain to train robust spiking NNs for mixed-signal Neuromorphic chips by FrereKhan
Are those chips general purpose SNN accelerators in the same way GPUs are general purpose NN accelerators? If so, what's stopping someone from creating a 100B parameter SNN similar to LLMs?
CommunismDoesntWork t1_jdcloqz wrote
Reply to comment by FrereKhan in [P] New toolchain to train robust spiking NNs for mixed-signal Neuromorphic chips by FrereKhan
Is there specialized hardware for SNNs yet?
CommunismDoesntWork t1_jd7iuwz wrote
Reply to comment by darklinux1977 in AI democratization => urban or rural exodus ? by IntroVertu
Starlink solves this. Maybe not its exact current version, but the idea of mega-constellations. And of course, existing fiber isn't going anywhere.
CommunismDoesntWork t1_j9ngi4q wrote
Reply to comment by xott in Stephen Wolfram on Chat GPT by cancolak
It's simple but not interesting from a research perspective. Humans don't need calculators to do math after all. Someone has done it though. They posted about it on the machine learning subreddit a few days ago
CommunismDoesntWork t1_j9b1qjb wrote
Reply to [D] Large Language Models feasible to run on 32GB RAM / 8 GB VRAM / 24GB VRAM by head_robotics
I'm surprised PyTorch doesn't have an option to load models partially on a just-in-time basis yet. That way even an arbitrarily large model could be inferred on.
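A rough sketch of what that could look like, in plain NumPy rather than PyTorch (file layout, shapes, and the ReLU "layers" are all made up for illustration): keep each layer's weights on disk, memory-map one layer at a time, apply it, and let it go before touching the next, so peak RAM stays around one layer regardless of total model size.

```python
import os
import tempfile
import numpy as np

def save_layers(layers, folder):
    """Persist each weight matrix to its own .npy file on disk."""
    paths = []
    for i, w in enumerate(layers):
        p = os.path.join(folder, f"layer_{i}.npy")
        np.save(p, w)
        paths.append(p)
    return paths

def stream_forward(x, paths):
    """Just-in-time inference: memory-map one layer's weights at a time,
    so only the current layer needs to be resident in RAM."""
    for p in paths:
        w = np.load(p, mmap_mode="r")  # lazily paged in from disk
        x = np.maximum(x @ w, 0.0)     # toy ReLU layer
        del w                          # drop the mapping before the next layer
    return x

with tempfile.TemporaryDirectory() as d:
    rng = np.random.default_rng(0)
    layers = [rng.standard_normal((8, 8)) for _ in range(3)]
    paths = save_layers(layers, d)
    out = stream_forward(np.ones(8), paths)
    print(out.shape)  # (8,)
```

The trade-off is obvious: you pay disk bandwidth on every forward pass, which is why this is a last resort rather than the default.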
CommunismDoesntWork t1_j85b26a wrote
Reply to comment by vtjohnhurt in ChatGPT Powered Bing Chatbot Spills Secret Document, The Guy Who Tricked Bot Was Banned From Using Bing Chat by vadhavaniyafaijan
It looks like these are the hidden instructions that get appended to everyone's prompts.
CommunismDoesntWork t1_j75nm9r wrote
Reply to comment by Brashendeavours in Possible first look at GPT-4 by tk854
Go back to futurology
CommunismDoesntWork t1_j6xl8r8 wrote
Reply to comment by Imonfire1 in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
Teams for Linux now works as a Progressive Web App, which means it now has the same features as the Windows app.
CommunismDoesntWork t1_j6i7fm5 wrote
There's no audio?
CommunismDoesntWork t1_j3rwrk4 wrote
Reply to comment by LightbulbMaster42 in This biotech startup says mice live longer after genetic reprogramming by ChickenTeriyakiBoy1
The Kim dynasty shows no signs of stopping, even with deaths in the family.
CommunismDoesntWork OP t1_j259z6s wrote
Reply to comment by Belostoma in ChatGPT is cool, but for the next version I hope they make a ResearchAssistantGPT by CommunismDoesntWork
ChatGPT has been shown to have problem-solving and analytical reasoning skills. It can also explain the reasoning behind its answers. It can be confidently incorrect sometimes, but ChatGPT is for sure more than just "predicting what word should come next". There's a spark of AGI in it, even if it's not perfect. Transformers have been shown to be Turing complete, so there's nothing fundamentally limiting it.
CommunismDoesntWork OP t1_j2531og wrote
Reply to comment by Belostoma in ChatGPT is cool, but for the next version I hope they make a ResearchAssistantGPT by CommunismDoesntWork
>being potentially useful for the early stages of exploring a new idea and an unfamiliar body of work.
Exactly, this is what I had in mind when I was quizzing ChatGPT on the immune system. I wanted it to teach me basically everything there is to know about the immune system, which is something I know almost nothing about. If you keep asking ChatGPT "why", it will eventually bottom out and won't go into any more detail, whereas I imagine a research-oriented GPT could keep going deeper and deeper until it hits the current limit of our understanding of a particular subject.
>New research hyper-relevant to mine is likely to cite at least one of my papers, so I already get an alert.

>There are many times when my research takes me into a new sub-field for just one or two questions ancillary to my own work
But how do you know a completely separate area isn't relevant to your work? Not a sub-field, but a completely separate area. Let's say a team is trying to cure Alzheimer's. At the same time, a different team is working to cure AIDS. The AIDS group makes a discovery about biology that at first only looks applicable to AIDS, so only people studying AIDS learn about it. But as the Alzheimer's team uncovers more raw facts about Alzheimer's, they uncover a fact that, when combined with the AIDS discovery, could create a cure for Alzheimer's. But then many years go by without anyone making the connection, or in the worst-case scenario, the Alzheimer's team randomly rediscovers the same thing the AIDS team discovered years ago. Where I think a research assistant GPT would really shine is in being able to absorb all of these independent data points and instantly make the connections. If it even speeds up research by a week, it would totally be worth it.
CommunismDoesntWork t1_jegsr5b wrote
Reply to comment by junkboxraider in [News] Twitter algorithm now open source by John-The-Bomb-2
As far as I know, there was never any evidence to back up those claims