duffmanhb t1_ja2ugba wrote
Reply to comment by visarga in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
I hope so. I'm still waiting for them to accept my invite. But as soon as I get it, the first thing I'll do is create some LLaMA bots for Reddit and see how effective they are compared to GPT-3 at posting believable comments. If it's nearly as good, but can be run locally, it'll completely change the bot game on social media.
duffmanhb t1_ja2o0mz wrote
Reply to comment by Akimbo333 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
No idea... They only allow in published researchers.
duffmanhb t1_ja2nzfa wrote
Reply to comment by Z1BattleBoy21 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
Siri was exclusively cloud-based for the longest time. They only brought basic functions over to local hardware.
duffmanhb t1_j9jps75 wrote
Reply to comment by Destiny_Knight in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
So basically it's fine-tuned for this specific topic, whereas GPT is large because it's trained on a general dataset for multi-domain use.
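For context, here's a minimal sketch of what that kind of topic-specific fine-tuning can look like in practice, assuming Hugging Face transformers, a small Flan-T5 checkpoint, and the SciQ science-question dataset; these are illustrative stand-ins, not the actual setup from the paper being discussed:

```python
# Illustrative sketch only: NOT the paper's method, just generic topic-specific
# fine-tuning. Checkpoint, dataset, and hyperparameters are all assumptions.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "google/flan-t5-base"   # ~250M params, well under 1B
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# SciQ: crowd-sourced science multiple-choice questions (assumed stand-in dataset).
dataset = load_dataset("allenai/sciq")

def preprocess(example):
    # Frame the question as the input and the correct answer as the target text.
    prompt = f"Answer the science question: {example['question']}"
    model_inputs = tokenizer(prompt, truncation=True, max_length=512)
    labels = tokenizer(example["correct_answer"], truncation=True, max_length=32)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset["train"].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-t5-sciq",
        per_device_train_batch_size=8,
        num_train_epochs=3,
        learning_rate=3e-4,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    # Pads inputs and labels per batch so variable-length sequences train cleanly.
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```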
duffmanhb t1_j8erc4s wrote
Reply to comment by magnets-are-magic in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
Oh of course... There is still a lot. This breakthrough will probably pay off drastically for the next 10 years. We still have all the fine-tuning benefits, as well as squeezing out the benefits of scale. Tons and tons of low-hanging fruit for a while.
duffmanhb t1_j8dp0eb wrote
I don't think he understands how S-curves work. We had a major breakthrough when we figured out how to get micro transistors to work as analogue transistors instead of binary ones, which allowed us to pick up where we left off in the 60s.
However, all this explosive growth will probably slow down once the low-hanging fruit from this breakthrough is picked, and we'll likely top out for a while until we get another breakthrough.
duffmanhb t1_j8d3s02 wrote
Reply to comment by TheRidgeAndTheLadder in Bing Chat blew ChatGPT out of the water on my bespoke "theory of mind" puzzle by Fit-Meet1359
It's definitely not 4. It's just a 3.5 backend with modified fine-tuning for use in a search engine.
duffmanhb t1_j8d3p0h wrote
Reply to comment by Fit-Meet1359 in Bing Chat blew ChatGPT out of the water on my bespoke "theory of mind" puzzle by Fit-Meet1359
What's interesting is that someone in your other thread got the exact same, to-the-letter response from ChatGPT. This says two things: it's likely the same build as 3.5 on the backend... and there's a formula it's using to arrive at the same exact response.
duffmanhb t1_j8d3a5s wrote
Reply to comment by Hazzman in Bing Chat blew ChatGPT out of the water on my bespoke "theory of mind" puzzle by Fit-Meet1359
> The reason we won't have access to this is
I think it's more that people don't have a place to create a room dedicated to medical procedures?
duffmanhb t1_j80ku0z wrote
Google's LaMDA already has this theoretically figured out. They showed its ability to handle general tasks like this over a year ago. And I'm sure they are much further ahead already.
duffmanhb t1_j5y3gte wrote
Reply to comment by Diamondsfullofclubs in An ALS patient set a record for communicating via a brain implant: 62 words per minute by esprit-de-lescalier
Changing data transfer mediums is the insanely easy part. The hard part is the tech itself. Switching over to a custom secured wireless protocol is just a minor hardware and software change. In the meantime, just use something wireless, and clamp down on security once it's ready for market.
duffmanhb t1_j5y0bmf wrote
Reply to comment by nearfar47 in An ALS patient set a record for communicating via a brain implant: 62 words per minute by esprit-de-lescalier
I mean, considering these are all in the early prototype and experimentation phases, I don't think you need to worry about a hacker creating a bespoke virus specifically for the 5 people on the planet who would have one.
duffmanhb t1_j5whxsn wrote
Reply to comment by PhasmaFelis in An ALS patient set a record for communicating via a brain implant: 62 words per minute by esprit-de-lescalier
I heard a lecture from a big tech CEO (whom I won't mention because Reddit hates him and I don't want to derail), who believes the future will eventually have GPT-style fine-tuned models for individuals. To the point that others will simply be able to engage with your AI clone to get 80% of the answers they'd need from you directly, massively increasing productivity. Instead of having to take up your time, they can just talk to your AI clone for guidance or answers. He also believes there will often be AI-to-AI conversations, where you set the topic and have the two AIs make decisions... Again, with the belief that the AI will be so advanced that the overwhelming majority of the time it'll be an incredibly accurate reflection of real life.
He also theorized that future dating apps will work like this, sort of like that Black Mirror episode, where our AI clones would interact with EVERYONE on the app, and the app would then determine who we best get along with, optimizing our matching.
duffmanhb t1_j5wg8f3 wrote
Reply to comment by c0mpost in An ALS patient set a record for communicating via a brain implant: 62 words per minute by esprit-de-lescalier
We've discovered just how adaptive the brain is when it comes to engaging with the outside world. For the most part, the brain will simply form whatever pathways are needed to adapt to the new tool it's using.
duffmanhb t1_j5wfzy3 wrote
Reply to An ALS patient set a record for communicating via a brain implant: 62 words per minute by esprit-de-lescalier
Why is Neuralink the only company that realized they can avoid the massive wires and hardware sticking out of people's heads just by using Bluetooth? I don't get why so many of these companies still use these bulky fucking wires that look ridiculous when a wireless solution is better for everyone.
duffmanhb t1_j5ksrpu wrote
Reply to comment by tatleoat in University of Toronto researchers used AI to discover a potential new cancer drug — in less than a month by BigShoots
It's wild how a decade ago protein folding was an insane problem to solve. People were pooling together cloud supercomputers just to solve a single protein. Then two years ago Google released a program that can do any and all folding almost instantly.
duffmanhb t1_j5h4dte wrote
Reply to comment by User1539 in People are already working on a ChatGPT + Wolfram Alpha hybrid to create the ultimate AI assistant (things are moving pretty fast it seems) by lambolifeofficial
Based on what I've heard about Google's AI -- that type of AGI is already there. I don't think any AGI will ever make everyone content, as it's a broad, moving goalpost that's ill-defined, and fundamentally digital processing is going to be different from biological processing, but the AI Google has is really, really good. Mostly because it's multiple different types of AIs all networked together, connected to the internet, and able to learn novel tasks on demand.
duffmanhb t1_j59z0nd wrote
Reply to comment by visarga in Google to relax AI safety rules to compete with OpenAI by Surur
You're not thinking big enough. Google searches aren't just things like, "What's the velocity required to escape gravity?" or "What was the kWh rate in California for the last 10 years?"
People will still use Google, or some future permutation of it. As long as they are using Google, they are feeding it data, which it will use to deliver ads (probably better than ever). It doesn't have to deliver those ads through Google.com; there are many other ways. They can still deliver your precise answer, but under that, deliver products that are perfectly optimized to be exactly what you are looking for. If it's something you want to buy, or could potentially buy but don't know you want yet... all the data Google generates from you using their AI will let them deliver ads better than ever.
Say, for instance, you're a PERFECT candidate for solar panels on your roof. But it's nothing you've ever even considered, been educated on, looked into, or really been interested in. Google will be able to use your AI searches to get such an intimate understanding of you that it realizes, "Visarga is an amazing candidate for residential solar and they don't even know it. But they would absolutely love to get solar for their home if they knew more about it. The data shows they would be thrilled to have this. So we can now find a way to get them in contact with an installer so they can get solar."
That's MASSIVELY valuable for EVERYONE: the installer, who doesn't want to spend time educating everyone and seeking out ideal candidates, and the consumer, who would be thrilled to get this but has no idea about it. This is what Google already tries to do, and with the data AI models will be able to deliver, they're going to optimize it beyond belief. Sure, you won't see an ad alongside your answer in a ChatGPT-style interface, but that's probably not what the future of this AI integration is going to look like. It's not going to be the blank interface you're seeing now. It'll be integrated into other things.
duffmanhb t1_j59yp0e wrote
Reply to comment by StillBurningInside in Google to relax AI safety rules to compete with OpenAI by Surur
People will use Google to research and search things... Further, the search engine itself isn't where it needs to deliver ads. It's gathering data on you to figure out what you want and need in that moment, and if it's something to buy, they will use that data to find exactly the product you're seeking. If anything, this level of depth will improve their ad delivery across the web.
You're acting like Google won't know how to adapt and will instead just sit around complaining that their old model and way of doing things doesn't work.
duffmanhb t1_j59rk9f wrote
Reply to comment by Fmeson in Google to relax AI safety rules to compete with OpenAI by Surur
Google's models are leaps and bounds beyond OpenAI's.
It's frustrating that they won't release it, but that's by and large BECAUSE it's so advanced. Google's AI is connected to the internet, so all of its information is up to date, dynamic, and constantly evolving. The very nature of connecting it to the web with constant streams of information pretty much inherently removes most safeguards and leaves open tons of room for rapid growth and abuse that Google won't be able to stay ahead of with millions of people using it.
It's also potentially a general AI. It's not just ChatGPT-style; their AI is also connected to EVERYTHING you can imagine, not just knowledge databases from 2020 and before... It more closely resembles an actual mind like a human's, with tons and tons of different "brains" all working together. You can work with maps, weather data, traffic, breaking news, art, the Internet of Things, you name it. They connect everything in their AI.
This is what Google has been working on for the past year. It's been entirely about improvement and guardrails. But it looks like Google has realized the cat's out of the bag, so they want to bring it to market sooner rather than later, before everyone starts building businesses on the OpenAI framework instead of theirs.
duffmanhb t1_j59qrr8 wrote
Reply to comment by visarga in Google to relax AI safety rules to compete with OpenAI by Surur
You're complaining about the search results. OpenAI isn't a challenge to their ad model. Getting better search results has nothing to do with ads. No one is clicking through ads looking for information. They are clicking through shitty search results that are SEO-packed to the tits, making the whole slew of results generic, AI-generated crap.
Google WANTS better search results that meet users' needs, to drive traffic. OpenAI has massively cornered Google on certain information-seeking searches, which Google wants to tackle right away. Google's ads face absolutely no threat from better search results.
duffmanhb t1_j4ra1m4 wrote
Reply to comment by ThePokemon_BandaiD in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
That's what Google has done... It's also connected to the internet. Their AI is fully integrated from end to end, but they refuse to release it because they are concerned about how it would impact everyone. So they are focusing on setting up safety barriers or something.
Like dude, just release it.
duffmanhb t1_j4i5jof wrote
Reply to comment by ramriot in Zero Days (2016) - Stuxnet, a piece of self-replicating computer malware that the U.S. and Israel unleashed to destroy a key part of an Iranian nuclear facility, and which ultimately spread beyond its intended target. [01:53:51] by Missing_Trillions
That's interesting. I had no idea that it was recoded and re-released into the wild. Could it have been Israel? It definitely doesn't sound like something the US would do. Maybe Iran, after discovering it, tried to repurpose it?
I was always under the impression that it got out because the original attack vector was a USB drive with some boss's naked wife on there, incentivizing him to bring it into the office... and then he also brought it back out.
duffmanhb OP t1_j4bs6ld wrote
Reply to comment by [deleted] in Breakthrough milestone in understanding the reversal of aging by duffmanhb
No idea on the specifics, but all I got out of it was that it leveraged the same sort of mechanism. I think I recall reading something about using peptides to activate it? I could be wrong.
duffmanhb t1_ja7cc6u wrote
Reply to Singularity claims its first victim: the anime industry by Ok_Sea_6214
Just posting criticism because that's more interesting: the problem it's going to run into, especially based on their video, is that human acting is far from exaggerated animation. Animation has a smooth flow to it, with intense extremes when wanted. That's something the AI can't really replicate.