lovesdogsguy t1_j6ed0fx wrote
Summary from ChatGPT:
OpenAI CEO Sam Altman is in Washington D.C. this week to demystify the advanced chatbot ChatGPT to lawmakers and explain its uses and limitations. The chatbot, which is powered by cutting edge AI, is so capable that its responses are indistinguishable from human writing. The technology's potential impact on academic learning, disruption of entire industries and potential misuse has sparked concern among lawmakers. OpenAI, the company that created ChatGPT, has a partnership with Microsoft, which has agreed to invest around $10 billion into the company. In the meetings, Altman has also told policymakers that OpenAI is on the path to creating "artificial general intelligence," a term used to describe an artificial intelligence that can think and understand on the level of the human brain. This has led to discussions about the need for regulation and oversight for the technology. OpenAI was formed as a nonprofit in 2015 by some of the tech industry's most successful entrepreneurs, like Elon Musk, investor Peter Thiel, LinkedIn co-founder Reid Hoffman and Y Combinator founding partner Jessica Livingston. A central mandate was to study the possibility that artificial intelligence could do harm to humanity. Therefore, it makes sense that Altman would be on a tour of Washington right now to discuss the potential impacts of the technology on society and the need for regulation.
CrunchyAl t1_j6ffzhd wrote
Can this thing figure out what's in my email address?
-Old ass congress people
GraydientAI t1_j6fu2hd wrote
*googles my aol mail*
*squints and jaw protrudes forward*
GPT-5entient t1_j6juxyk wrote
Yeah, it's going to be funny seeing how these "internet is a series of tubes" geezers are going to try to regulate AGI.
We're fucked.
yottawa t1_j6evgp7 wrote
What did you do to get this answer from ChatGPT? After copying and pasting the text of the article, did you ask ChatGPT to extract a summary?
YobaiYamete t1_j6f6nru wrote
Yep, ChatGPT is amazing for summarizing things. Just say
Summarize this for me
"wall of text goes here between quotation marks"
Then you can tell it "Summarize it more" or "Explain this in simple English like I'm 5" or "give me a bullet point list of the high points" or "is any of this even accurate" etc etc
I use it all the time for fact checking schizo posts on /r/HighStrangeness where someone posts a 12,000-word wall of text and ChatGPT goes "This is a post from a confused person who thinks aliens are hiding in potato chip bags but they make numerous illogical leaps and contradict themselves 18 times in the message"
and you can even say "write a polite but informal reply to this for me pointing out the inconsistencies and contradictions" and save yourself the time of even having to do that lol
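For anyone who wants the same workflow outside the chat UI, here's a minimal sketch using the `openai` Python package; the 0.x-style client and the `gpt-3.5-turbo` model name are assumptions, not something from the thread.

```python
# Minimal sketch of the "Summarize this for me" workflow via the API.
# Assumes the openai Python package (0.x ChatCompletion interface) and an
# OPENAI_API_KEY environment variable; swap in whatever chat model you have.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

wall_of_text = "..."  # paste the 12,000-word wall of text here

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f'Summarize this for me:\n"{wall_of_text}"',
    }],
)
print(response["choices"][0]["message"]["content"])

# Follow-ups like "Summarize it more" or "give me a bullet point list of the
# high points" work by appending the assistant's reply and your next message
# to the same messages list and calling the endpoint again.
```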
throwawayPzaFm t1_j6fcs8n wrote
Not sure if genius or cruel seal clubbing, but awesome.
CriscoButtPunch t1_j6ggxv1 wrote
Only cruel if they're baby seals
throwawayPzaFm t1_j6h9b84 wrote
What if they think they're baby seals from Jupiter, sent by the grand seal shepherd to aid mankind on their path to salvation?
CriscoButtPunch t1_j6k456s wrote
Better club em, just to be sure
Caring_Cactus t1_j6fu1nn wrote
Anime dweebs about to start the most intense waifu wars with each other.
Aburath t1_j6glnb4 wrote
It would probably validate some of their beliefs to learn that all of the responses are from real artificial intelligences
heyimpro t1_j6gzqmz wrote
/r/gangstalking is going to have a field day with this one
sneakpeekbot t1_j6gzrkm wrote
Here's a sneak peek of /r/Gangstalking using the top posts of the year!
#1: “They” are making me think gay thoughts
#2: everyone needs to get a carbon monoxide detector!
#3: Another brave whistleblower and man of integrity. | 23 comments
CellWithoutCulture t1_j6gnrgy wrote
Is the potato chip thing something someone really thinks? That subreddit is awesome, thanks for sharing
Typo_of_the_Dad t1_j6hhfeg wrote
So it actually calls people confused? Interesting.
lovesdogsguy t1_j6f1fjd wrote
Just prompted: "Please summarise the following article," and then copy/pasted the text of the article. The first answer was just a short paragraph, so I prompted, "please expand the summary by 30%." It was more than double the length. Still not so good with numbers, it seems.
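A rough sketch of that summarise-then-expand exchange, with a word-count check on how far off the requested "30%" actually lands. Same assumptions as the earlier sketch: `openai` 0.x client, `OPENAI_API_KEY` set, and the model name is a stand-in.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
article = "..."  # paste the article text here

# First turn: ask for the summary.
messages = [{"role": "user",
             "content": f"Please summarise the following article:\n{article}"}]
first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
summary = first["choices"][0]["message"]["content"]

# Second turn: keep the conversation and ask for the 30% expansion.
messages += [
    {"role": "assistant", "content": summary},
    {"role": "user", "content": "Please expand the summary by 30%."},
]
second = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
expanded = second["choices"][0]["message"]["content"]

ratio = len(expanded.split()) / len(summary.split())
print(f"expanded / original word count: {ratio:.2f}x (1.3x was requested)")
```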
exstaticj t1_j6h804j wrote
Do you think that the total length was 230% of the original summary? Could ChatGPT have kept the original 100% and then added a 130% expansion to it? You said over double, and this is the only thing I could think of that might yield this type of result.
lovesdogsguy t1_j6jgcme wrote
It could have, yes. I'm not certain. I just asked it to expand by 30% and it was definitely over double the length.
DukkyDrake t1_j6exwqk wrote
The unsupervised use case of ChatGPT is very limited.
>Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,”
There is no way to really know that from existing products.
dmit0820 t1_j6f8nis wrote
I'd argue that the transformer architecture (the basis for large language models and image diffusion) is a form of general intelligence, although it doesn't technically meet the requirements to be called AGI yet. It's able to take any input and output a result that, while not better than a human expert, exceeds the quality of the average human most of the time.
ChatGPT can translate, summarize, paraphrase, program, write poetry, conduct therapy, debate, plan, create, and speculate. Any system that can do all of these things can reasonably be said to be a step on the path to general intelligence.
Moreover, we aren't anywhere near the limits of the transformer architecture. We can make them multi-modal (inputting and outputting every type of data), embodied (giving them control of, and input from, robotic systems), goal-directed, integrated with the internet, and real-time, and potentially much more intelligent simply by improving algorithms, efficiency, network size, and data.
Given how many ways we still have left to make them better, it's not unreasonable to think systems like this might lead to AGI.
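For readers who haven't looked under the hood, the core of that transformer architecture is scaled dot-product self-attention. A toy NumPy sketch, illustrative only and not claiming to match any particular model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # pairwise token affinities
    weights = softmax(scores, axis=-1)       # each token attends over all tokens
    return weights @ v                       # weighted mix of value vectors

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

The same block works on any sequence of vectors, which is part of why the "make it multi-modal" argument above is plausible: text tokens, image patches, or robot sensor readings can all be fed in once they're embedded.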
StevenVincentOne t1_j6fjosp wrote
>I'd argue that the transformer architecture(the basis for large language models and image diffusion) is a form of general intelligence
It could be. A big part of the problem with the discussion is that most equate "intelligence" and "sentience". An amoeba is an intelligent system, within the limits of its domain, though it has no self-awareness of itself or its domain. A certain kind of intelligence is at work in a chemical reaction. So intelligence and even general intelligence might not be as high of a standard as most may think. Sentience, self-awareness, agency...these are the real benchmarks that will be difficult to achieve, even impossible, with existing technologies. It's going to take environmental neuromorphic systems to get there, imho.
dmit0820 t1_j6g0pkd wrote
Some of that might not be too hard; self-awareness and agency can be represented as text. If you give ChatGPT a text adventure game it can respond as though it has agency and self-awareness. It will tell you what it wants to do, how it wants to do it, explain motivations, etc. Character.AI takes this to another level, where the AI bots actually "believe" they are those characters, and seem very aware and intelligent.
We could end up creating a system that acts sentient in every way and even argues convincingly that it is, but isn't.
StevenVincentOne t1_j6g1ixu wrote
Sure. But I was talking about creating systems that actually are sentient and agentic, not just simulacra. Though one could discuss whether or not it matters for all practical purposes. If you can't tell the difference, does it really matter, as they used to say in Westworld?
Ok-Hunt-5902 t1_j6i5jkz wrote
>‘Sentience, self-awareness, agency’
Wouldn’t we be better off with a ‘general intelligence’ that was none of those things
StevenVincentOne t1_j6ihhc3 wrote
It may be that we don't have to choose or that we have no choice. There is probably an inherent tendency for systems to self-organize into general intelligence and then sentience and beyond into supersentience. There's probably a tipping point at which "we" no longer call those shots and also a tipping point at which "we" and the systems we gave rise to are not entirely distinguishable as separate phenomena. That's just evolution doing its thing. "We" should want to participate in that fully and agentically, not reactively.
TacomaKMart t1_j6g1v5x wrote
>ChatGPT can translate, summarize, paraphrase, program, write poetry, conduct therapy, debate, plan, create, and speculate. Any system that can do all of these things can reasonably be said to be a step on the path to general intelligence.
And this is what makes it different from the naysayers who claim it's a glorified autocorrect. It obviously has its flaws and limitations, but this is the VIC-20 version and already it's massively disruptive.
The goofy name ChatGPT sounds like a 20-year-old instant messenger client like ICQ. The name hides that it's a serious, history-altering development, as does the media coverage that fixates on plagiarized essays.
turnip_burrito t1_j6fa8be wrote
Yep, any data which can be structured as a time series.... oh wait that's ALL data, technically.
IsraelFakeNation9 t1_j6hapl3 wrote
Your flair… sorry man.
vernes1978 t1_j6hh6pg wrote
Kinda redundant statement for any experimental tech.
This also fits for any fusion project.
DukkyDrake t1_j6hsu0i wrote
I don't think so. If you perfect your fusion experiment, you end up with a working sample of the goal of the project.
>In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” a term used to describe an artificial intelligence that can think and understand on the level of the human brain.
I hope he didn't give that explicit definition because it ties his goals to something quite specific. If they perfect GPT and it produces 99.9999% accurate answers, he won't necessarily end up with a working sample of his stated goal.
That definition describes an actual AI, something that doesn't currently exist and that absolutely no one knows how to build. That's why they went down the machine learning path with big data and compute.
maskedpaki t1_j6f0kfo wrote
Holy shit, I missed the start and then didn't realise it was ChatGPT until seeing a reply. It makes me realise how good ChatGPT is at generating text. It's pretty much perfect for short passages.
GPT-5entient t1_j6jvjvs wrote
Summarizing, using ChatGPT or on your own, should be mandatory in every subreddit when posting links to articles. Using ChatGPT makes it 0 effort, so there's no reason to NOT have that policy...
lovesdogsguy t1_j6jwnk3 wrote
It would certainly be a help to have a little summary for every post, and chatGPT makes it easy.