blueSGL t1_jed7gnq wrote
Reply to comment by Stryker1-1 in GPT-4 poses too many risks and releases should be halted, AI group tells FTC. by VAMSI_BEUNO
How has this narrative sprung up so quickly and spread so widely?
https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
https://futureoflife.org/open-letter/ai-open-letter/
Back in 2015 the same org drafted an open letter flagging potential issues with AI, years before any sort of commercialization effort.
There are alignment researchers who have signed the letter, both times.
Current models cannot be controlled or explained in fine-grained enough detail to steer reliably (the problem is being worked on by people like Neel Nanda and Chris Olah, but it's still in very early stages, and they need more time and more people working on it).
The current 'safety' measures are bashing at a near-infinite whack-a-mole board whenever the model outputs something deemed wrong, and that is far from 'safe'.
blueSGL t1_jecxney wrote
Reply to comment by agorathird in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
>a lot of them look like randoms so far.
...
>Population
>We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021.
I mean, who exactly do you want to tell you these things? I can pull quotes from people at OpenAI saying they are worried about what might be coming in the future.
blueSGL t1_jecv6ta wrote
Reply to comment by agorathird in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
> There's consideration from the people working on these machines.
https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/
>In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.
If half the engineers that designed a plane were telling you there is a 10% chance it'll drop out of the sky, would you ride it?
edit: as for the people from the survey:
> Population
> We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021.
blueSGL t1_jecuq9j wrote
Reply to comment by Simcurious in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Was Elon Musk planning this back in 2014 too?
Playing the long game?
https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
blueSGL t1_je8saz1 wrote
Reply to comment by Shack-app in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
What will never happen? Interpretability? It's being worked on right now; there are already some interesting results. It's just an early field that needs time, money, and researchers put into it. Alignment as a whole needs more time, money, and researchers.
blueSGL t1_je8q3lm wrote
Reply to comment by Mrkvitko in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
> 3. it will be initially unaligned
If we had:

- a provable mathematical solve for alignment...
- the ability to directly reach into the shoggoth's brain, watch it thinking, know what it's thinking, and prevent eventualities that people consider negative outputs...

...that worked 100% on existing models, I'd be a lot happier about our chances right now.
Given that current models cannot be controlled or explained in fine-grained enough detail (the problem is being worked on, but it's still in very early stages), what makes you think larger models will be easier to analyze or control?
The current 'safety' measures are bashing at a near infinite whack-a-mole board whenever it outputs something deemed wrong.
As has been shown, OpenAI has not found all the ways to coax out negative outputs. The internet contains far more people than OpenAI has alignment researchers, and those internet denizens are more driven to find flaws.
Basically, until the AI 'brain' can be exposed and interpreted, with safety checks added at that level, we have no way of preventing some clever sod from working out a way to break the safety protocols imposed at the surface level.
blueSGL t1_je6rikh wrote
Reply to comment by Iffykindofguy in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
> "growth at all cost"
so cancer.
blueSGL t1_je3sis4 wrote
Reply to TV star Paul O'Grady dies aged 67 by ucd_pete
Anyone coming here never having seen the comedy stylings of Lily Savage, watch this:
blueSGL t1_jdl93th wrote
Reply to Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
> AGI, Theory of Mind, Creativity
Marvin Minsky classified words such as these as "suitcase words": words into which people pack multiple meanings.
These words act almost like thought-terminating clichés; once spoken, they all but guarantee the derailment of the conversation. Further comments end up arguing about what to put in the suitcase rather than the initial point of discussion.
blueSGL t1_jdl756z wrote
Reply to comment by Blacky372 in [D] Do we really need 100B+ parameters in a large language model? by Vegetable-Skill-9700
> with specialized expert data from literally 50 experts in various fields that worked on the response quality in their domain.
Sounds like a future goal for Open Assistant.
If one were being unethical... create a bot that posts Open Assistant's current answers to technical questions in small specialist subreddits and wait for Cunningham's Law to take effect. (I'm only half joking.)
blueSGL t1_jdl02u6 wrote
Reply to comment by liyanjia92 in [P] ChatGPT with GPT-2: A minimum example of aligning language models with RLHF similar to ChatGPT by liyanjia92
>So with GPT-2 medium, what we really do here is to parent a dumb kid, instead of a "supernaturally precocious child" like GPT-3. What interested me is that RLHF does actually help to parent this dumb kid to be more socially acceptable.
> In other words, if we discover the power of alignment and RLHF earlier, we might foresee the ChatGPT moment much earlier when GPT-2 is out in 2019.
That just reads to me as capability overhang. If there is "one simple trick" to make the model "behave", what's to say this is the only one? (Or that the capabilities derived from the current behavior modification are the 'best they can be'.) Scary thought.
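To make "the power of RLHF" concrete, here's a toy sketch of the idea in plain PyTorch — REINFORCE with a hand-coded reward instead of PPO with a learned reward model, so treat it as the shape of the technique, not what OpenAI actually runs:

```python
import torch

# Toy "vocabulary" and a hand-rolled reward standing in for a learned
# reward model: the policy is rewarded for picking "polite" tokens.
vocab = ["sure", "here", "help", "no", "refuse", "insult"]
reward = torch.tensor([1.0, 1.0, 1.0, -0.5, -0.5, -1.0])

# The "policy": one logit per token (a real policy is a whole LM).
logits = torch.zeros(len(vocab), requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    probs = torch.softmax(logits, dim=0)
    dist = torch.distributions.Categorical(probs)
    sample = dist.sample((64,))          # sample a batch of "responses"
    r = reward[sample]                   # score them with the "reward model"
    # REINFORCE: push up log-prob of samples in proportion to reward.
    loss = -(dist.log_prob(sample) * (r - r.mean())).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print({w: round(p.item(), 2) for w, p in zip(vocab, torch.softmax(logits, 0))})
# After training, probability mass shifts onto the rewarded tokens.
```

Same base capability, different surface behavior, just from a reward signal — which is exactly why it reads as overhang to me.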
blueSGL t1_jdd7maq wrote
Reply to comment by 141_1337 in ChatGPT bug leaked users' conversation histories by swimmerRei5687
After refusing to say how many parameters GPT-4 has, and refusing to give over any details of the training dataset or methodology, all in the name of staying 'competitive', I'm taking the stance that they are going to do everything in their power to obfuscate the size of the model and how much it costs to run.
e.g. Sam Altman has said in the past that the model would be a lot smaller than people expect and that more data can be crammed into smaller models. (The Chinchilla and especially the very recent LLaMA papers bear this out.)
Would I put it past the new 'competitive', profit-driven OpenAI to rate-limit a GPT-4 that is actually similar in size to GPT-3, to give the impression the model is bigger and takes more compute to generate answers? No. (The difference in inference cost is pure profit.)
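(For anyone who wants the back-of-the-envelope on that: the Chinchilla heuristic is roughly 20 training tokens per parameter. Illustrative numbers below, mine, not anything OpenAI has confirmed.)

```python
# Back-of-the-envelope using the Chinchilla heuristic of ~20 training
# tokens per parameter (Hoffmann et al., 2022). Illustrative only.
def chinchilla_tokens(params: float) -> float:
    return 20 * params

for params in [7e9, 13e9, 70e9, 175e9]:
    print(f"{params/1e9:>5.0f}B params -> ~{chinchilla_tokens(params)/1e12:.2f}T tokens")

# GPT-3 (175B) saw ~0.3T tokens, an order of magnitude short of the
# ~3.5T this heuristic suggests -- which is why LLaMA 13B, trained on
# 1T tokens, can be competitive with it despite being >10x smaller.
```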
blueSGL t1_jdafkx8 wrote
Reply to comment by Traveshamockery in Persuasive piece by Robert Wright. Worrying about the rapid advancement of AI no longer makes you a kook. by OpenlyFallible
https://github.com/ggerganov/llama.cpp [CPU loading with comparatively low memory requirements (LLaMA 7b running on phones and Raspberry Pi 4) - no fancy front end yet]
https://github.com/oobabooga/text-generation-webui [GPU loading with a nice front end with multiple chat and memory options]
/r/LocalLLaMA
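If you'd rather drive it from Python than the raw llama.cpp CLI, there are community bindings (llama-cpp-python). A minimal sketch — the model path is a placeholder for whatever quantized weights you have locally:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Point this at your own quantized ggml model file.
llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")

out = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],  # stop before the model starts a new question
)
print(out["choices"][0]["text"])
```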
blueSGL t1_jd53pbu wrote
/r/LocalLLaMA
blueSGL t1_jd1lnsk wrote
Reply to comment by mjrossman in AI displacing jobs is a red herring, how we self-organize is the more fundamental trend by mjrossman
What are your thoughts on Microsoft Office 365 Copilot?
blueSGL t1_jd0uijz wrote
Reply to comment by tdgros in [P] OpenAssistant is now live on reddit (Open Source ChatGPT alternative) by pixiegirl417
Depends on the age of the cow, I suppose.
blueSGL t1_jczq00a wrote
Reply to comment by Eleganos in Teachers wanted to ban calculators in 1988. Now, they want to ban ChatGPT. by redbullkongen
How much of that food can be sown, grown, kept, and harvested without the aid of machinery, fertilizers, or anything else reliant on just-in-time trade infrastructure?
blueSGL t1_jcz9z0p wrote
Reply to comment by nmarshall23 in Expert: Misinformation targeting Black voters is rising — and AI could make it more “sophisticated” by Wagamaga
> I'm dreading the day AI can write code.
Self-fixing code generation for simple programs is already in the pipeline (and that was the middle of last year): https://www.youtube.com/watch?v=_3MBQm7GFIM&t=260s @ 4:20
GPT-4 can do some impressive things:
>"Not only have I asked GPT-4 to implement a functional Flappy Bird, but I also asked it to train an AI to learn how to play. In one minute, it implemented a DQN algorithm that started training on the first try."
https://twitter.com/DotCSV/status/1635991167614459904
There's also a script dubbed "Wolverine" that hooks into GPT-4 and recursively resolves errors in Python scripts:
https://twitter.com/bio_bootloader/status/1636880208304431104
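The loop behind these self-fixing scripts is dead simple: run the script, catch the traceback, hand source plus traceback back to the model, write the patched version, repeat. A rough sketch — `ask_llm` is a hypothetical stand-in for whatever LLM API call you'd actually wire in:

```python
import subprocess
import sys

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call (e.g. an OpenAI
    chat completion). Expected to return the full corrected script."""
    raise NotImplementedError

def self_fixing_run(path: str, max_attempts: int = 5) -> None:
    for attempt in range(max_attempts):
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True
        )
        if result.returncode == 0:
            print(result.stdout)
            return
        # Feed the source plus the traceback back to the model.
        source = open(path).read()
        fixed = ask_llm(
            f"This script crashed.\n\nSCRIPT:\n{source}\n\n"
            f"TRACEBACK:\n{result.stderr}\n\n"
            "Return the full corrected script, nothing else."
        )
        open(path, "w").write(fixed)  # naive: overwrites the file in place
    print(f"still failing after {max_attempts} attempts")
```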
blueSGL t1_jcus2nc wrote
Another step closer to the "Infinite Simpsons Generator"
Submitted by blueSGL t3_11vqqel in singularity
blueSGL t1_jcs9g8e wrote
Reply to comment by eratonnn in We've had public access to ChatGPT for 3 months now. Has anyone made any actual profitable business or quality thing with it? by eratonnn
Have you seen that Microsoft are directly integrating it into their office suite under the banner of "Office 365 Copilot"?
Here are some timestamped links to the presentation.
Auto writing personal stuff: @ 10:12
Business document generation > PowerPoint: @ 15:04
Control Excel using natural language: @ 17:57
Auto email writing w/ document references in Outlook: @ 19:33
blueSGL t1_jcjgsl1 wrote
Reply to comment by Necessary_Ad_9800 in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Exactly.
I'm just eager to see what fine-tunes get made on LLaMA now, and how model merging affects them. The combination of those two techniques has led to some crazy advancements in the Stable Diffusion world. No idea if merging works with LLMs as it does for diffusion models. (Has anyone even tried yet?)
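For the curious, the naive version of checkpoint merging is just a weighted average over matching tensors. A sketch, with no promises it does anything sensible for LLMs:

```python
import torch

def merge_checkpoints(path_a: str, path_b: str, alpha: float = 0.5) -> dict:
    """Naive linear merge of two same-architecture checkpoints:
    merged = alpha * A + (1 - alpha) * B, tensor by tensor."""
    sd_a = torch.load(path_a, map_location="cpu")
    sd_b = torch.load(path_b, map_location="cpu")
    merged = {}
    for key, tensor_a in sd_a.items():
        tensor_b = sd_b.get(key)
        if tensor_b is not None and tensor_b.shape == tensor_a.shape:
            merged[key] = alpha * tensor_a + (1 - alpha) * tensor_b
        else:
            merged[key] = tensor_a  # keep A's weights where B doesn't match
    return merged

# merged = merge_checkpoints("fine_tune_a.pt", "fine_tune_b.pt", alpha=0.3)
# torch.save(merged, "merged.pt")
```

This is the same trick the Stable Diffusion merge tools use; whether the averaged weights behave well for transformer LMs is the open question.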
blueSGL t1_jcjga2i wrote
Reply to [R] RWKV 14B ctx8192 is a zero-shot instruction-follower without finetuning, 23 token/s on 3090 after latest optimization (16G VRAM is enough, and you can stream layers to save more VRAM) by bo_peng
Is it possible to split the model and do inference across multiple lower-VRAM GPUs, or does a single card have to have the minimum 16 GB of VRAM?
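Not sure about RWKV's own code, but the generic pattern for sharding a causal LM across several small GPUs is HF transformers plus accelerate with `device_map` — a sketch with a stand-in model name, not RWKV-specific:

```python
# Sketch of layer-sharding a causal LM across multiple small GPUs with
# HF accelerate's device_map (pip install transformers accelerate).
# Whether RWKV's own repo supports this is a separate question.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neox-20b"  # stand-in; swap for your model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",                    # shard layers across visible GPUs
    max_memory={0: "10GiB", 1: "10GiB"},  # cap per-GPU usage
)

inputs = tokenizer("Hello", return_tensors="pt").to(0)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```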
blueSGL t1_jcbqxnq wrote
Reply to comment by [deleted] in GPT4 makes functional Flappy Bird AND an AI that learns how to play it. by gantork
The 'pro gamer move' that seems less meme-y by the day.
blueSGL t1_jegptub wrote
Reply to comment by SkyeandJett in Meta AI: Robots that learn from videos of human activities and simulated interactions by TFenrir
I guess this kinda puts the argument "how will the AGI/ASI interact with the world" to bed as a reason not to be concerned about alignment (which seems to be in vogue at the moment).