LarsPensjo t1_j642rfy wrote
Reply to comment by koelti in ⭕ What People Are Missing About Microsoft’s $10B Investment In OpenAI by LesleyFair
Just ask ChatGPT. I got:
> Microsoft is investing $10 billion into OpenAI, an AI research company founded in 2015 by Elon Musk, Sam Altman, and other prominent figures in the tech industry. However, many in the community are frustrated with OpenAI's shift away from its original ethos of developing AI for everyone, free of economic pressures. There are fears that OpenAI's models will become fancy Microsoft Office plugins, leading to a loss of open research and innovation. The specifics of the deal suggest that there is more going on behind the scenes, and that Sam Altman, the CEO of OpenAI, may have orchestrated a major strategic move to secure the company's future.
LarsPensjo t1_j642avw wrote
Nice write-up!
However, I don't think OpenAI is alone at this level. There are quite a few others, although OpenAI was the first to make it publicly available.
That means an OpenAI failure would just be a minor setback for the customers.
LarsPensjo t1_j36mdv5 wrote
Reply to comment by gleamingthenewb in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
But that doesn't help determine whether it reflects on its own thinking.
LarsPensjo t1_j36arit wrote
Reply to comment by gleamingthenewb in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
OK. Is there anything you can ask me where my answer can't be explained as me just predicting a string of characters that corresponds to your prompt?
LarsPensjo t1_j360vxt wrote
Reply to comment by pluutoni in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It has been primed to do so. That isn't something it learned from scraping the internet.
LarsPensjo t1_j35yrqx wrote
Reply to comment by monsieurpooh in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
> That being said... I'm a big fan of the saying that AI doesn't need human-brain-style thinking to achieve a working imitation of human-level intelligence, just like the airplane is an example of flying without imitating the bird.
I definitely agree. IMO, you see a lot of "AI is not true intelligence" claims, which don't really matter.
Eliezer Yudkowsky had an interesting observation:
> Words aren't thoughts, they're log files generated by thoughts.
I believe he meant the written word.
LarsPensjo t1_j33nt4v wrote
Reply to comment by sticky_symbols in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
One definition of "reflection" is
> serious thought or consideration.
Can you give an example of something a human can reflect on that ChatGPT can't? And, more crucially, what method would you use to detect this?
What I am aiming at is that these are borderline philosophical questions, without clear definitions.
LarsPensjo t1_j33luce wrote
Reply to comment by sticky_symbols in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Aren't all self-improvements ultimately triggered by external events?
LarsPensjo t1_j33hla5 wrote
Reply to comment by sticky_symbols in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
I asked ChatGPT "Are you sentient?", and got the (expected) answer
> I am a machine learning model trained to simulate conversation, so I don't have consciousness or sentience in the same way that humans do.
Then I asked, "Please explain why you say that.", and got the answer
> As a machine learning model, I have been programmed to process and generate text based on patterns in data...
Doesn't that mean it reflected on its own thinking?
LarsPensjo t1_j33enxl wrote
Reply to comment by sticky_symbols in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
I saw an example where someone asked for a Python program to solve a task. ChatGPT produced such a program. But there was an error, and the person pointed out the error and asked for a fix.
ChatGPT then produced a correct program.
Isn't this an example of self-improvement? There was external input, but that is beside the point. Also, the improvement is forgotten if you restart with a new prompt. But that is also beside the point: there was an improvement while the session lasted.
Notice also that ChatGPT made the improvement; the person writing the prompts did not explicitly say how to solve the error. A minimal sketch of the kind of exchange is below.
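To illustrate, here is an invented example of my own (not the actual program from the post I saw): a first attempt with an off-by-one bug, and the corrected version after the bug is pointed out.

```python
# First attempt: sum the integers from 1 to n (inclusive).
# Bug: range(1, n) stops at n - 1, so n itself is never added.
def sum_to_n(n):
    total = 0
    for i in range(1, n):
        total += i
    return total

# Corrected version, after the error is pointed out:
def sum_to_n_fixed(n):
    total = 0
    for i in range(1, n + 1):  # now includes n itself
        total += i
    return total

print(sum_to_n(10))        # 45 (wrong)
print(sum_to_n_fixed(10))  # 55 (correct)
```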
LarsPensjo t1_j11if2g wrote
Reply to comment by Shelfrock77 in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
Are you asking why we dream things?
LarsPensjo t1_j643g3d wrote
Reply to comment by LarsPensjo in ⭕ What People Are Missing About Microsoft’s $10B Investment In OpenAI by LesleyFair
I also asked ChatGPT what could be criticized in this text, and got the answer:
>One potential criticism of the text is that it appears to present a biased view of the Microsoft investment in OpenAI, suggesting that the CEO of OpenAI, Sam Altman, has orchestrated a "coup of the decade" and that the specifics of the deal tell a different story from the community's frustration about OpenAI moving away from its ethos of developing AI for everyone, free of economic pressures. Additionally, the text also presents a rosy picture of Sam Altman's background, giving the impression that he is a strategic mastermind and influential figure in Silicon Valley without providing any counterarguments or criticism of his actions or decisions.