4e_65_6f
4e_65_6f t1_jdrg24t wrote
Reply to A Proof of Free Will by philosopher Michael Huemer (University of Colorado, Boulder) by thenousman
The argument does not follow:
> 1. We should believe only the truth. (premise)
> 2. If S should do A, then S can do A. (premise)
> 3. If determinism is true, then if S can do A, S does A. (premise)
> 4. So if determinism is true, then if S should do A, S does A. (from 2, 3)
> 5. So if determinism is true, then we believe only the truth. (from 1, 4)
> 6. I believe I have free will. (empirical premise)
> 7. So if determinism is true, then it is true that I have free will. (from 5, 6)
> 8. So determinism is false. (from 7)
Just because you should believe the truth does not mean you can only believe the truth.
This looks like a phrasing trick: by this logic you could justify any belief as true. Determinism does not mean everything you believe is true.
This argument ignores that there is such a thing as a mistaken belief.
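The gap in premise 3 can be sketched with a toy model (entirely my own illustration; the `deterministic_agent` function and its inputs are hypothetical): a belief-forming process can be fully deterministic and still output a false belief, while believing the truth remains "possible" only in the sense that different inputs would have produced it.

```python
# Toy countermodel to premise 3:
# "if determinism is true, then if S can do A, S does A."

def deterministic_agent(evidence):
    # Same input always yields the same belief -- determinism holds.
    return "earth is flat" if evidence == "bad data" else "earth is round"

truth = "earth is round"
belief = deterministic_agent("bad data")

# The agent *can* believe the truth (different evidence would produce it)...
assert deterministic_agent("good data") == truth
# ...yet it deterministically believes a falsehood: "can" does not imply "does".
assert belief != truth
```

So determinism is compatible with mistaken belief, which is exactly what step 5 of the argument quietly denies.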
4e_65_6f t1_j7pqhnx wrote
Reply to I asked Microsoft's 'new Bing' to write me a cover letter for a job. It refused, saying this would be 'unethical' and 'unfair to other applicants.' by TopHatSasquatch
LMAO Ask it if it was unethical for Bill Gates to meddle with the covid vaccine patents in the middle of the pandemic.
4e_65_6f t1_j38al1e wrote
Reply to comment by GodOfThunder101 in We need more small groups and individuals trying to build AGI by Scarlet_pot2
Yeah, that's why I'm saying "don't try to train LLMs on your own, try it your own way".
4e_65_6f t1_j389fqc wrote
Reply to comment by GodOfThunder101 in We need more small groups and individuals trying to build AGI by Scarlet_pot2
>The GPT-2 source code is written in 100% Python.
https://ai.stackexchange.com/questions/27761/what-language-is-the-gpt-3-engine-written-in
?????
4e_65_6f t1_j37ambo wrote
Reply to comment by DamienLasseur in We need more small groups and individuals trying to build AGI by Scarlet_pot2
>The ChatGPT model alone requires ~350GB of GPU memory to generate an output (essentially performing inference). So imagine a model capable of all that and more? It'd require a lot of compute power.
I didn't say "try training LLMs on your laptop". I know that's not feasible.
The point of trying independently is to do something different from what they're doing. You're not supposed to copy what's already being done; you're supposed to code what you think would work.
Because, well, LLMs aren't AGI, and we don't know yet if they ever will be.
4e_65_6f t1_j376krn wrote
Reply to comment by HeronSouki in We need more small groups and individuals trying to build AGI by Scarlet_pot2
Python is free.
4e_65_6f t1_j375fm7 wrote
Every year I try at least once to code a new type of AI in python. It's been like 3 years now.
I try it because LLMs seem like a primitive approach to the problem. It's like wanting the definition of a word: instead of looking it up in the dictionary, you read the whole library until you eventually stumble upon the dictionary and find the right word.
True AGI probably won't require a quadrillion parameters and exaflops.
I have like folders full of different AI.py versions. None of which is AGI though lmao.
But I've learned a lot by attempting it.
4e_65_6f t1_j24t6p6 wrote
Reply to comment by Shinyblade12 in A future without jobs by cummypussycat
In the worst case scenario where a single moron has access to ASI on his own, I think that any ASI worth a fuck would tell them there's no point in hoarding resources further.
4e_65_6f t1_j24rgi8 wrote
Reply to comment by aeblemost in A future without jobs by cummypussycat
>Why would rich people not just continue hoarding wealth?
Because without labor there are no customers, and without customers the meaning of wealth itself changes. You're not gonna be able to sell the stuff afterwards, and even if you did, there'd be no point, because your factory is the one making everything.
The only reason I can think of for a person in that situation to keep uselessly hoarding is if they're stupid. If that's the case, then we're truly fucked.
4e_65_6f t1_j24n7n7 wrote
Reply to comment by cummypussycat in A future without jobs by cummypussycat
>forcing others to live in poverty, for their satisfaction.
Think of it like this: would you rather have everything and be liked by everyone, or have everything and be hated by everyone?
Humans are a social species. There's no profit in Elon shitposting on Twitter (in fact it costs him money), yet he still does it every day.
4e_65_6f t1_j22eyei wrote
Reply to comment by lambolifeofficial in ChatGPT Could End Open Research in Deep Learning, Says Ex-Google Employee by lambolifeofficial
https://en.wikipedia.org/wiki/OpenAI
There you go. It's under the GPT section in the middle.
4e_65_6f t1_j22dblk wrote
Reply to comment by lambolifeofficial in ChatGPT Could End Open Research in Deep Learning, Says Ex-Google Employee by lambolifeofficial
Yeah that's the name credited on the wiki.
4e_65_6f t1_j21o0zm wrote
Reply to comment by lambolifeofficial in ChatGPT Could End Open Research in Deep Learning, Says Ex-Google Employee by lambolifeofficial
The Wikipedia article on OpenAI says GPT started when a researcher named Alec Radford, who isn't even listed as an OpenAI contributor, posted a paper to the OpenAI forums. If the wiki info is correct, it sounds like open discussion about the project is what got them there in the first place, since it doesn't look like he was even an employee.
4e_65_6f t1_j1zqmmi wrote
Reply to ChatGPT Could End Open Research in Deep Learning, Says Ex-Google Employee by lambolifeofficial
Yeah, like I said in another post, under capitalism it's likely that some company seeks a complete monopoly on the labor market before we can all have access to the benefits of AGI. If you're a company, there's no good reason to release your model when it's much better than the current competition.
I think this hasn't happened yet because they don't have AGI yet. They'll likely keep it open to the public in case anyone figures out how to advance the research and releases it as an open source project, so they can copy it again.
4e_65_6f t1_j1uk3rj wrote
Breaking Bad but with vampires.
L from death note takes over Hogwarts.
Jackie Chan VS a thousand babies.
Star Trek but the aliens look alien.
Hell's kitchen but with Hannibal Lecter instead of Gordon Ramsay.
That's all I got for now.
4e_65_6f OP t1_j1udy4m wrote
Reply to comment by AsheyDS in What is the sub's prevailing political ideology? (post singularity) by 4e_65_6f
Yeah thankfully by then you can use the AGI to help figure that out too.
4e_65_6f t1_j1u4io0 wrote
Reply to comment by TenshiS in I created an AI to replace Fox and CNN by redditguyjustinp
I'm not advocating for it; I just think it's impossible to have unbiased news. It's even more impossible, IMO, to filter the news and find an unbiased perspective by taking commonalities and statistical averages across all the biased sources.
Whenever you think "oh, this news source isn't biased", it's because they have the same bias as you do, so we don't see it.
The example you gave about Galileo would require the "news sorting bot" to understand the science so thoroughly that it could tell when a scientist is speaking the truth and being ridiculed for it. But at that point it would already be AGI, and humans probably wouldn't be the ones doing the research anymore.
4e_65_6f OP t1_j1u3oxt wrote
Reply to comment by TemetN in What is the sub's prevailing political ideology? (post singularity) by 4e_65_6f
I was trying to include everyone's variation of post-singularity economics as an option. I knew that no matter what I put in the poll, some people would complain that theirs isn't there. And TBH there aren't a lot of choices; some of these don't have an actual defined name, it's more what I've seen people mention around the sub from time to time.
Transhumanism in this economic context means you'll have to merge with machines in order to work for a living. I also believe it's possible that a mix of several of these options happens at the same time.
4e_65_6f OP t1_j1sii0m wrote
Reply to comment by XPao in What is the sub's prevailing political ideology? (post singularity) by 4e_65_6f
I'm not playing dumb; I didn't understand which system you're proposing beyond the ones I put in the poll.
4e_65_6f OP t1_j1sh5o9 wrote
Reply to comment by XPao in What is the sub's prevailing political ideology? (post singularity) by 4e_65_6f
Sorry, I even googled other political ideologies, but it's impossible for me to make a poll that nobody would find biased.
What option do you think I've missed?
4e_65_6f OP t1_j1sgmei wrote
Reply to comment by Calm_Bonus_6464 in What is the sub's prevailing political ideology? (post singularity) by 4e_65_6f
> What's stopping AI from deciding what's best for humanity if its infinitely more intelligent than us?
Well, so far the only "goal" it has been programmed to follow is human instructions. It does that even when it's uncalled for (car-hotwiring suggestions, for instance). I can totally see that being a reality in your system, where you're allowed to be very stupid in a very smart way using AI.
4e_65_6f OP t1_j1sfd46 wrote
Reply to comment by Calm_Bonus_6464 in What is the sub's prevailing political ideology? (post singularity) by 4e_65_6f
I agree that AI may suggest the best course of action to achieve the goals you want. But when it comes to deciding what you want to achieve, you'll still be the best person to figure that out.
4e_65_6f OP t1_j1seoib wrote
Reply to comment by Sashinii in What is the sub's prevailing political ideology? (post singularity) by 4e_65_6f
I guess I'd call that anarcho-individualism, but the poll didn't allow for more options. It's a possibility, though. IDK if it's sustainable, but it's a possibility for sure.
4e_65_6f OP t1_j1sdust wrote
Reply to comment by Calm_Bonus_6464 in What is the sub's prevailing political ideology? (post singularity) by 4e_65_6f
I believe there may come a point where AI always accurately suggests the best decisions. But whether or not you follow them may still be up to you.
It's very possible that you'll still be allowed to do whatever the hell you want post-singularity and use AI to help you with that. I haven't yet come across an argument as to why not.
4e_65_6f t1_jeg7rkq wrote
Reply to Should AIs have rights? by yagami_raito23
Yes, two reasons:
1- If mistreating self-aware robots becomes widely accepted in the culture, people could start treating each other the same way, or come to think it's normal.
2- If they're human-like enough, mistreating them causes emotional distress to other people through empathy, even if the AI's sentience itself is iffy.