ActuatorMaterial2846
ActuatorMaterial2846 t1_je8luak wrote
Reply to comment by jetro30087 in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
Interesting, curious what size this particular Llama model is, or is that not even relevant?
ActuatorMaterial2846 t1_je8ik1t wrote
Reply to comment by FlyingCockAndBalls in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
It's actually quite technical, but essentially, the transformer architecture helps each part of the sentence “talk” to all the other parts at the same time. This way, each part can understand what the whole sentence is about and what it means.
Here is the paper that imo changed the world 6 years ago and is the reason for the current state of AI.
https://arxiv.org/abs/1706.03762
If it goes over your head (it did for me), ask bing or chatgpt to summarise it for you. It helped me get my head around this stuff, as I'm in no way an expert nor do I study this field.
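To make the "every part of the sentence talks to every other part" idea a bit more concrete, here's a toy sketch of the attention mechanism from that paper, in Python with NumPy. The numbers are made up and it's nowhere near a real model, but the shape of the computation is the same: each token scores its relevance to every other token, then takes a weighted mix of them.

```python
import numpy as np

def softmax(x, axis=-1):
    # Turn raw scores into weights that sum to 1
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each row of Q asks: "how relevant is every other token to me?"
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # one weight per (token, token) pair
    return weights @ V                  # weighted mix of all token values

# 3 tokens, 4-dimensional embeddings (made-up numbers)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = attention(X, X, X)  # self-attention: every token attends to every token
print(out.shape)  # (3, 4): one updated vector per token
```

The key point is that all tokens are processed simultaneously rather than one after another, which is what made transformers so much easier to scale than the older recurrent networks.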
ActuatorMaterial2846 t1_je8gg68 wrote
Reply to Thoughts on this? by SnaxFax-was-taken
It's certainly possible. But I've read his books, and his path to immortality runs through nanobots. Although I have great respect for the man, I have no clue where we are in terms of nanotech. I haven't read any papers or notable research on it.
So yes, I think he is usually on to something when he makes his predictions, and I'm particularly in agreement with his AGI predictions (albeit he seems a little conservative compared to others). But I'm not sure nanotech will advance quickly enough to get us to the stage he expects by 2030.
ActuatorMaterial2846 t1_je8fqgw wrote
Reply to comment by Not-Banksy in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
No worries. I'll also point out that the magic behind all this is the transformer architecture in particular. It's the real engine behind LLMs and other models.
ActuatorMaterial2846 t1_je8e3lg wrote
So what happens is they compile a dataset. Basically a big dump of data. For large language models, that is mostly text, books, websites, social media comments. Essentially as many written words as possible.
The training is done through what's called a neural network, usually built on the transformer architecture and run on a bunch of GPUs (graphics processing units) linked together. What happens inside the neural network during training is a bit of a mystery; 'black box' is the term often used, because the computations are extremely complex. So not even the researchers understand exactly what happens in there.
Once the training is complete, the learned weights are packaged into a program, often referred to as a model. These models can then be refined and tweaked to behave a particular way for public release.
This is a very very simple explanation and I'm sure there's an expert who can explain it better, but in a nutshell that's what happens.
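Since I'm hand-waving a bit, here's a cartoon version of that training loop in plain Python. It's a made-up toy (fitting two numbers to a tiny fake dataset), nothing like the real scale, but it's the same basic idea: make a prediction, measure the error, nudge the weights to reduce it, repeat.

```python
# Toy illustration of "training": repeatedly nudge weights to reduce
# prediction error on a dataset. Real LLM training does the same thing
# with billions of parameters spread across many GPUs.
data = [(x, 2 * x + 1) for x in range(10)]  # made-up dataset: y = 2x + 1

w, b = 0.0, 0.0  # the whole "model" here is just two numbers
lr = 0.01        # learning rate: how big each nudge is

for epoch in range(2000):
    for x, y in data:
        pred = w * x + b      # forward pass: make a prediction
        err = pred - y        # how wrong were we?
        w -= lr * err * x     # backward pass: nudge the weights
        b -= lr * err         # in the direction that reduces the error

print(round(w, 2), round(b, 2))  # ends up close to 2.0 and 1.0
```

After enough passes over the data, the weights settle near the values that generated the dataset. The "black box" part is that with billions of weights instead of two, nobody can read off *why* the final numbers are what they are.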
ActuatorMaterial2846 t1_je40on3 wrote
Reply to comment by lawandordercandidate in we gotta put the genie back in the bottle. it's the only way. by lawandordercandidate
Jokes aside, I'm not sure you are considering all the variables here. Sam Altman isn't the be-all and end-all of AI, and although he is a smart dude, he is not even the brains behind OpenAI's development.
Furthermore, OpenAI/Microsoft are not the only players. They are the biggest in the public/commercial sector, but there are many different organisations working on this technology.
Things will change, and they will change drastically. We haven't had a societal shift in living memory; the industrial revolution was the last major example, and this advancement will be orders of magnitude bigger.
That doesn't mean we are "fucked", but it does mean, once again, a shift in the human hierarchical structure. It's possible that money may eventually no longer be relevant. There are so many factors to consider.
Then there are the obvious positives, especially when it comes to health and medical advancements, but also unimaginable leisure and pleasurable activities at your fingertips ☺️.
And the obvious negatives too: hair-trigger defence systems and automated robotic weapons, mass propaganda and misinformation, scamming, etc.
The world is changing, and capitalism is not robust enough to withstand it; there will be a new order that we will all have to adapt to. Scary, but nothing suggests it will be bad or good overall, just certainly intense and different.
Civilisation is fluid and forever changing; we just typically don't live long enough to see it happen. That is changing as technology speeds towards the singularity and we live longer, and soon possibly indefinitely.
ActuatorMaterial2846 t1_je3ze5a wrote
In the words of a very famous book. "Don't panic! Consider how lucky you are that life has been good to you so far. Alternatively, if life hasn't been good to you so far, which given your current circumstances seems more likely, consider how lucky you are that it won't be troubling you much longer."
ActuatorMaterial2846 t1_je1r5pm wrote
Reply to Which communities have you found where people are both smart about what AI is and isn't currently capable of, but where everyone in there is convinced we'll have AI soon that's smarter than 95% of humans at all computer based tasks within a few years? by TikkunCreation
r/machinelearning seems to be a good sub. People seem grounded, yet have a very good understanding of the field and are up to date.
It's a little less fanciful than this sub, but that's why I like this sub.
ActuatorMaterial2846 t1_jd5oap8 wrote
Reply to The Future Timelines by EchoingSimplicity
I think fast, adaptive, un-aligned. I think the choice by openAI to go for profit shows a level of hubris amongst the creators in the sector.
It just seems so arrogant to close off their research and then spout pseudo-intellectual drivel about alignment and the human condition to justify it, as if only they can solve the mystery.
If it is to be human-aligned, it needs to be open, so that academics, intellectuals, and the general public can see the direction it's heading in, not just a small group of technocrats who think they know best for society.
ActuatorMaterial2846 t1_jaei6xk wrote
Reply to Digital Molecular Assemblers: What synthetic media/generative AI actually represents, and where I think it's going | Even now, people misunderstand just how transformative generative AI really is. Those who do understand, however, are too caught up in techno-idealism to see the likely ground truth by Yuli-Ban
Transhumanism will be widely adopted. In fact, it kind of already is. The preventative vaccines many of us are required to take early in life are a good example of how it will be adopted.
If you take Kurzweil's predictions regarding nanobots, the concept doesn't seem nearly as invasive as, say, cutting your skull open to put a Neuralink in your brain.
ActuatorMaterial2846 t1_jab26pm wrote
We are quite close imo. Chatbots are pretty dumb and most people will base their opinions on the likes of chatgpt or bing.
Some people seem to also think it's just slightly improved technology from decades ago and are not familiar with the advancements in transformer architecture and neural networks. Machines are learning on their own with no human input apart from initial parameters. Protein folding is a massive leap also. We are on the cusp of a new technological age now.
ActuatorMaterial2846 t1_jaax7vx wrote
The 'grabby aliens' hypothesis is quite compelling, and many astrophysicists and biologists seem to consider it even more plausible.
Basically, a series of hard steps need to be accomplished, and the galaxy, at least, is too young and too hostile for it to be swarming with intelligent advanced civilisations.
The Fermi paradox has been around a while, and there are a few other theories too, the 'dark forest' being one example. So I wouldn't succumb to such a basic, old concept when so many great minds have come up with plausible counters to the great filter.
E: I guess this isn't a thread for discussion then...
E2: I just realised I'm in r/futurology, makes sense now...
ActuatorMaterial2846 t1_ja7jpy2 wrote
Reply to comment by ashareah in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
Realistically, they should be forced to open their data to public scrutiny. This secrecy to one-up one another in the name of profit is downright fucking dangerous. I'm certain these companies have some ethical questions to answer for.
E: Holy crap, lol the downvotes. How has this butthurt people so much.
ActuatorMaterial2846 t1_ja4xeos wrote
I'm an electrician, I have worked on all kinds of housing and buildings, old, new, prefab etc.
What I think he is saying is that it is similar to prefab buildings. With such designs, wiring, plumbing, and general utilities are all pre-installed and simply need to be hooked up to the mains once the structure has been put together. In Altman's example, I believe he is looking at 3D printing specifically. Similar ideas have been proposed for lunar and Mars bases.
Essentially it's about utilising the raw materials within the proposed structure's environment. Piping would actually be pretty simple; it can be ceramic, for example. Electricity is harder, though, because you need conductive material, which is not found just anywhere.
ActuatorMaterial2846 t1_ja14o3n wrote
Reply to AI image generator Midjourney blocks porn by banning words about the human reproductive system by marketrent
Well, I guess someone is looking for money from investors. This is just going to drive people to the open source sector.
ActuatorMaterial2846 t1_j9ydjnf wrote
Reply to comment by KarmaStrikesThrice in ChatGPT on your PC? Meta unveils new AI model that can run on a single GPU by 10MinsForUsername
Is this to do with advancements in file compression? I heard Emad Mostaque talk about this regarding stable diffusion.
ActuatorMaterial2846 t1_j9yauge wrote
Reply to comment by astrologicrat in We are in the early days of AI used as tool for biological design. It’s potential to design new proteins + DNA sequences from the building blocks of life is astonishing. by MichaelTen
Yeah, I think people took that comment about 'instantly killing us by releasing a poison in the atmosphere' a bit too seriously. Maybe because it was so specific, idk.
But he does have a point that we should be concerned about an autonomous entity smarter than humans in all cognitive ability. An entity that has no known desire apart from a core function to improve and adapt to its environment.
Such an entity would most certainly begin competing with us for resources. So, his emphasis on alignment is correct, and he is probably not overstating the difficulty in achieving that.
Everything else he says is a bit too doomer with little to back it up.
ActuatorMaterial2846 t1_j9y4n77 wrote
Reply to comment by HabeusCuppus in How long before we start to see chat AI that specializes in a certain field at a human or better level? by saleemkarim
Harvey bot, man.
ActuatorMaterial2846 t1_j9w2p6b wrote
>Beware the snake oil. They have impressive ML (“Machine Learning”) models built/trained from content, algorithms, and neural networks. That is not “AI” and it is not “AGI”. Beware the snake oil. Remember what it actually is. Don’t fall for the hucksters and word games. twitter.com/cccalum/status…
These comments annoy me. Of course it's AI in every definition of the term.
When you see someone say this, they are simply a denialist refusing to look at objective reality. You could beat someone like this over the head with objective truth and they would deny it with each blow. I will never understand such closed-minded, dogmatic attitudes.
ActuatorMaterial2846 t1_j9ru6ih wrote
Reply to comment by mouserat_hat in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
Machine learning
ActuatorMaterial2846 t1_j9r171j wrote
Reply to And Yet It Understands by calbhollo
I'm pretty stupid, but I just want to grasp something if it can be clarified.
A basic function can be described as an equation with a fixed answer: 1+1=2.
But what these neural networks seem to do is take a basic function and produce an approximation. That approximation seems to be based on context, perhaps on the equations preceding or following it.
I've heard it described as complex matrices with inscrutable floating-point numbers.
Have I grasped this or am I way off?
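If I've grasped it right myself, the "inscrutable floating-point matrices" bit can be pictured like this. Below is a toy two-layer network in plain Python with made-up weights: the whole thing is just matrix multiplies plus a nonlinearity, and the floating-point numbers in the matrices are what training tunes. It computes an approximation shaped by those numbers, not an exact symbolic rule like 1+1=2.

```python
# A tiny 2-layer network, with weights as plain floating-point matrices.
# In a real model these matrices have billions of entries and are learned,
# which is why nobody can read meaning directly off the numbers.
W1 = [[0.5, -0.2], [0.1, 0.8]]  # made-up hidden-layer weights
W2 = [[0.3], [-0.6]]            # made-up output-layer weights

def matmul(A, B):
    # Ordinary matrix multiplication on nested lists
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def relu(M):
    # The nonlinearity: clip negatives to zero
    return [[max(0.0, v) for v in row] for row in M]

def forward(x):
    h = relu(matmul([x], W1))   # hidden layer: matrix multiply + nonlinearity
    return matmul(h, W2)[0][0]  # output: another matrix multiply

# Same input always gives the same output, but the answer is an
# approximation determined by the weight matrices above.
print(forward([1.0, 1.0]))
```

Stack enough of these layers with learned weights and you get a function approximator, which is basically what the "complex matrices" description is pointing at.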
ActuatorMaterial2846 t1_j9qma3y wrote
Reply to comment by Dhiox in What are ‘robot rights,’ and should AI chatbots have them? by HarpuasGhost
I'm more convinced that we may never create an AI with sentience. An AI will likely always mimic it though.
However, I do think an AGI and ASI are inevitable. Sentience isn't required for such things to exist.
Such intelligence just has to be similar to the AlphaGo or AlphaFold models, except capable of doing all human cognitive tasks at that level or higher, and it needs to be able to operate autonomously.
There are organisms that behave like this in the world, albeit not intelligent as we consider it or even alive, but still incredibly complex, autonomous and adaptable.
ActuatorMaterial2846 t1_j9dyomx wrote
Reply to What are the gaming evolution in this advanced artificial intelligence technology, and how are they transforming the gaming experience? by decentralizedmemes
Roblox and Minecraft are apparently incorporating text prompts to build game worlds. It'll start quietly, I think, with the games I mentioned but also with modding.
The issue is that it requires a lot of power to generate this stuff, on top of the typical graphics generation. Perhaps there will be additional components within hardware in the very near future, but until then, it will have to be generated online.
ActuatorMaterial2846 t1_j93i5ce wrote
Reply to comment by Circlemadeeverything in UN says AI poses 'serious risk' for human rights by Circlemadeeverything
I actually kind of agree. The transformer architecture isn't the complicated part; it's the neural networks held by large companies and governments, which are very expensive. It's easy to see such tech remaining in the hands of the powerful, but I'm not convinced that will be the case in the near future.
There are already proven examples of this technology being completely open source. Stability AI is already leaps and bounds ahead of DALL-E 2, for example.
When GPT and chatbots get nerfed, it will drive more people to seek out open-source options. DALL-E 2 is a locked-down system and will likely be a paid platform, yet Stable Diffusion is open source and runs on a user's own hardware. I'm not sure the big corps will be able to keep up.
However, my concern is the sophistication of the neural networks that are no doubt classified, most definitely in the hands of governments and militaries.
ActuatorMaterial2846 t1_jeh3ffn wrote
Reply to Opinions on TaskMatrix.ai by iuwuwwuwuuwwjueej
It's revolutionary, at least on paper. Waiting to see some demonstrations.