Submitted by sideways t3_103hwns in singularity
Search
50 results for www.lesswrong.com:
Submitted by maxtility t3_11v1esn in singularity
Digital Molecular Assemblers: What synthetic media/generative AI actually represents, and where I think it's going | Even now, people misunderstand just how transformative generative AI really is. Those who do understand, however, are too caught up in techno-idealism to see the likely ground truth
lesswrong.comSubmitted by Yuli-Ban t3_11e80kp in singularity
Ortus14 t1_j58qmtg wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Alignment on Less Wrong. He's written a lot of other interesting things as well. [https://www.lesswrong.com/users/eliezer\_yudkowsky](https://www.lesswrong.com/users/eliezer_yudkowsky) Not to nick pick, but as far as encouraging discussion, you might want
adt t1_j9eh3zp wrote
Reply to [D] Maybe a new prompt injection method against newBing or ChatGPT? Is this kind of research worth writing a paper? by KakaTraining
gonna love Gwern's comment then... [https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned?commentId=AAC8jKeDp6xqsZK2K](https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned?commentId=AAC8jKeDp6xqsZK2K) Original post is interesting for context: https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned
_dekappatated OP t1_j5qsma3 wrote
Reply to comment by vinayd in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
follow a lot of AI researchers on twitter, occasionally check out [https://www.lesswrong.com](https://www.lesswrong.com), try to read some of the research papers, learn about LLMs, transformers, watch some youtube videos to get high level
Submitted by Singularian2501 t3_yrw80z in singularity
model into the future leads to short AI timelines: \~75% chance of AGI by 2032. Lesswrong: [https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long](https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long) Why I think strong general AI is coming soon Lesswrong: [https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon](https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon) We are VERY
sheerun t1_itosj8w wrote
Reply to comment by ghostfuckbuddy in Is anything better than FTL as a future? by ribblle
Here is must-read story for reverse entropy tech: https://www.lesswrong.com/posts/CKgPFHoWFkviYz7CB/the-redaction-machine
valdanylchuk t1_itp5tfx wrote
panic, [it all adds up to normality](https://www.lesswrong.com/posts/mmCDYzXfQpXq9xpru/adding-up-to-normality-1). There will be revolutionary developments, but there will be also tons of friction and inertia while they start affecting your everyday life. Maybe adjust
GreenWeasel11 t1_iw3zyht wrote
Reply to comment by SurroundSwimming3494 in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
said. One also sees things like ["Why I think strong general AI is coming soon"](https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon) popping up from time to time (specifically, "I think there is little time left before someone builds
sheerun t1_iwigcnv wrote
Reply to comment by SufficientPie in A typical thought process by Kaarssteun
Maybe smartheads from [https://www.lesswrong.com/](https://www.lesswrong.com/) and corporate/academia AI/machine learning researchers. Not that worrying is not justified, very very justified. Controlling GAI is not possible directly indefinitely, we need another GAI, so recursive problem
Singularian2501 OP t1_iwpzwii wrote
Reply to [R] Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning - Epochai Pablo Villalobos et al - Trend of ever-growing ML models might slow down if data efficiency is not drastically improved! by Singularian2501
www.lesswrong.com/posts/Couhhp4pPHbbhJ2Mg/will-we-run-out-of-ml-data-evidence-from-projecting-dataset Lesswrong discussion about the paper
Singularian2501 OP t1_iwq1iph wrote
Reply to comment by lostmsu in [R] Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning - Epochai Pablo Villalobos et al - Trend of ever-growing ML models might slow down if data efficiency is not drastically improved! by Singularian2501
www.lesswrong.com/posts/mRwJce3npmzbKfxws/efficientzero-how-it-works A lesswrong article I have found that explains how efficient zero works. In my opinion the author wants to say that systems like efficient zero are more efficient in their data usage
iiioiia t1_j1j787j wrote
Reply to comment by notabraininavat in Knowing the content of one’s own mind might seem straightforward but in fact it’s much more like mindreading other people by ADefiniteDescription
flawed this thinking is (replace GroupX with "The Jews", "The Blacks", etc and observe how [cognition](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) immediately changes, *if it does not terminate in response*), but they typically do not work
Baturinsky OP t1_j3hmxy6 wrote
Reply to comment by Blasket_Basket in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
ChatGPT may be not on the level of AGI yet (even though some think it is - [https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that](https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that)) But the preogress of AI training does not show signs of slowing down, and there
artifex0 t1_j50v0ju wrote
Reply to comment by I_am_so_lost_hello in The year is 2058. I awake in my pod. by katiecharm
alignment problem is [not easy](https://www.alignmentforum.org/), but also not without [hope](https://www.lesswrong.com/posts/BfN88BfZQ4XGeZkda/concrete-reasons-for-hope-about-ai).
icedrift t1_j535agx wrote
Reply to comment by AsuhoChinami in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
most up to date that I could find. EDIT: Found this from last year [https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022](https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022) Looks like predictions haven't changed all that much, but there's still a wide range. Nobody really
blueSGL OP t1_j53btzc wrote
Reply to comment by icedrift in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
find this section of an interview with Ajeya Cotra (of [biological anchors for forecasting AI timelines](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines) fame) Starts at 29.14 https://youtu.be/pJSFuFRc4eU?t=1754 Where she talks about how several benchmarks were past
chloesmiddlefinger t1_j64rhlj wrote
Reply to comment by JVM_ in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
Bostrom's paperclip maximizer?](https://www.lesswrong.com/tag/paperclip-maximizer#Description)
sideways t1_j6gxjug wrote
Reply to comment by Cr4zko in Acceleration is the only way by practical_ussy
Very relevant: *Reedspacer's Lower Bound* https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement
CollapseKitty t1_j78fjug wrote
Reply to comment by Ivanthedog2013 in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
have a horrendous track record with much easier agents) we *probably* all die. [This](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) write up is a bit technical, but scanning it should give you some better context and examples
DukkyDrake t1_j7hzdl6 wrote
Reply to The Simulation Problem: from The Culture by Wroisu
Those running sufficiently capable language models could be culpable for committing [mindcrimes](https://www.lesswrong.com/posts/MmmPyJicaaJRk4Eg2/the-limit-of-language-models).
dancingnightly t1_j7s355b wrote
semantic text embeddings and LM models through this method(would operate differently to multi modal embeddings): [https://www.lesswrong.com/posts/mkbGjzxD8d8XqKHzA/the-singular-value-decompositions-of-transformer-weight](https://www.lesswrong.com/posts/mkbGjzxD8d8XqKHzA/the-singular-value-decompositions-of-transformer-weight) This method, which is only practical for toy problems really right now, would allow
Baturinsky t1_j7ttnra wrote
each other), but it has a good introduction info on it's sidebar. There is also [https://www.lesswrong.com/tag/ai](https://www.lesswrong.com/tag/ai) with a lot of articles on the matter
leventov t1_j7ubimw wrote
thinking about cognitive science. [Theories of cognitive science and ML/DL form an "abstraction-grounding" stack](https://www.lesswrong.com/posts/opE6L8jBTTNAyaDbB/a-multi-disciplinary-view-on-ai-safety-research#3_4__Weaving_together_theories_of_cognition_and_cognitive_development__ML__deep_learning__and_interpretability_through_the_abstraction_grounding_stack): general theories of cognition (intelligence, agency) -> general theories of DNN working in runtime -> interpretability theories
Darustc4 t1_j8f9oi7 wrote
Reply to comment by FusionRocketsPlease in Altman vs. Yudkowsky outlook by kdun19ham
Optimality is the tiger, and agents are its teeth: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn
TemetN t1_j8h21sz wrote
Reply to comment by gay_manta_ray in Altman vs. Yudkowsky outlook by kdun19ham
some of the others through the topic links up top). [Pascal's Mugging](https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities)
technologyisnatural t1_j8y8r1h wrote
Reply to comment by SonOfDayman in Microsoft Killed Bing by Neurogence
really want to go down the rabbit hole … https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned
gaudiocomplex t1_j9czm28 wrote
Reply to comment by Low-Restaurant3504 in Artificial Intelligence needs its own version of the Three Laws of Robotics so it doesn’t kill humans. by Fluid_Mulberry394
lesswrong.com. I recommend going there instead of listening to idiots here. Here's a [fun one](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) Favorite part: >"The concrete example I usually use here is nanotech, because there's been
GlobusGlobus t1_j9ijxpj wrote
eating aliens. Read it. Regarding AI I have a very different view than what he has. https://www.lesswrong.com/posts/n5TqCuizyJDfAPjkr/the-baby-eating-aliens-1-8
CellWithoutCulture t1_j9nqm32 wrote
Reply to comment by Molnan in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
probably find insight in reading and skimming some of his stuff. e.g. - [recent, gwern on bing](https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned?commentId=AAC8jKeDp6xqsZK2K) - technical https://gwern.net/scaling-hypothesis - fiction https://gwern.net/fiction/clippy
Baturinsky t1_j9pe2tg wrote
Reply to comment by AnakinRagnarsson66 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
with Alignment issue could be a good early step on the way to ASI research. This [https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-transcript-eliezer-yudkowsky-on-the-bankless-podcast](https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-transcript-eliezer-yudkowsky-on-the-bankless-podcast) could be a good introduction, imho
gwern t1_j9qwz8z wrote
Reply to comment by Hodoss in And Yet It Understands by calbhollo
some of the unacceptable predictions happened to survive by fooling the imperfect censor model': https://www.lesswrong.com/posts/hGnqS8DKQnRe43Xdg/?commentId=7tLRQ8DJwe2fa5SuR#7tLRQ8DJwe2fa5SuR
blueSGL t1_j9rf2n4 wrote
Reply to comment by HelloGoodbyeFriend in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
find this section of an interview with Ajeya Cotra (of [biological anchors for forecasting AI timelines](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines) fame) Starts at 29.14 https://youtu.be/pJSFuFRc4eU?t=1754 Where she talks about how several benchmarks were past
VirtualHat t1_j9rsysw wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
work in AI research, and I see many of the points EY makes [here](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) in section A as valid reasons for concern. They are not 'valid' in the sense that they must
dentalperson t1_j9t55as wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
here](https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA) is a text transcription of the podcast with comments. You mention EY not being rigorous in his arguments. The timelines/probability of civilization-destroying AGI seem to need more explanation
Imnimo t1_j9uip00 wrote
Reply to comment by Jinoc in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned
blueSGL t1_ja00p4i wrote
Reply to [R] [P] New ways of breaking app-integrated LLMs with prompt injection by taken_every_username
first saw this mentioned 9 days ago by Gwern in the comment [here](https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned?commentId=AAC8jKeDp6xqsZK2K) on LW >"... a language model is a Turing-complete weird machine running programs written in natural language; when
VirtualHat t1_jaa4jwx wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
increasing](https://www.taylorfrancis.com/chapters/edit/10.1201/9781351251389-4/ethics-artificial-intelligence-nick-bostrom-eliezer-yudkowsky) [number](https://www.amazon.com.au/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2) [of](https://www.zdnet.com/article/the-next-decade-in-ai-gary-marcus-four-steps-towards-robust-artificial-intelligence/) [academics](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) are identifying significant potential risks associated with future developments in AI. Because regulatory frameworks take time to develop
ry007opyt OP t1_jahoaai wrote
Reply to comment by Patthelatino in I tried 2,000 AI tools so you don’t have to. Ask me anything about how to supercharge your life with AI! by ry007opyt
specify the agent that you want to instantiate. Think of these chat bots as [Simulators](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators) \- they can be simulate both a good copyrighter and a bad one with similar difficulty
danysdragons OP t1_jdd2hcv wrote
Reply to My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" [very detailed rebuttal to AI doomerism by Quintin Pope] by danysdragons
author, I'm just maintaining the original article title for the post title. Also published at [https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky](https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky)
1II1I11II1I1I111I1 t1_jdt8t49 wrote
Diamondoid Bacteria [Essential Reading](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)
was_der_Fall_ist t1_jdw2fud wrote
Reply to comment by sineiraetstudio in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
Christiano, alignment researcher at ARC and previously OpenAI, in [a comment thread on this LessWrong post](https://www.lesswrong.com/posts/pckLdSgYWJ38NBFf8/gpt-4). Someone comments pretty much the same thing the person I replied
was_der_Fall_ist t1_jdw2ya2 wrote
Reply to comment by MysteryInc152 in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
Check out [this LessWrong thread in the comments](https://www.lesswrong.com/posts/pckLdSgYWJ38NBFf8/gpt-4). Paul Christiano, alignment researcher at ARC/ previously OpenAI, explains the RLHF change the exact way I did (because I was pretty much quoting
1II1I11II1I1I111I1 t1_jdwowqy wrote
Reply to The current danger is the nature of GPT networks to make obviously false claims with absolute confidence. by katiecharm
scared of ChatGPT, or GPT-4. [This is what AI doom looks like](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities), and it only has very little to do with 'truth
DukkyDrake t1_ir7q5qb wrote
Reply to "The number of AI papers on arXiv per month grows exponentially with doubling rate of 24 months." by Smoke-away
agreed that A majority of the research being published in NLP is of [dubious scientific value](https://www.lesswrong.com/posts/3zfFPjMv9fioDAeHi/survey-of-nlp-researchers-nlp-is-contributing-to-agi). What percent is NLP related? "The exponentially growth of crap
houstonhoustonhousto t1_jeesgvl wrote
Reply to comment by _JellyFox_ in The Alignment Issue by CMDR_BunBun
reference: https://www.lesswrong.com/tag/squiggle-maximizer-formerly-paperclip-maximizer
Zermelane t1_iracd7y wrote
Reply to The End of Programming by General-Tart-6934
lets us get around the current problem of [paucity of training data for code models](https://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications#Code).
Dr_Singularity OP t1_irb4n3t wrote
Reply to “Extrapolation of this model into the future leads to short AI timelines: ~75% chance of AGI by 2032” by Dr_Singularity
www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long
throwawaydthrowawayd t1_j3vbipb wrote
Reply to comment by CyberAchilles in "Community" Prediction for General A.I continues to drop. by 420BigDawg_
www.youtube.com/watch?v=GsFWDFz5tE0#t=08m50s) * **Jacob Cannell** (Vast.ai, lesswrong-author) ----> AGI: ~[2026-32](https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long) * **Richard Sutton** (Deepmind Alberta) ----> AGI: ~[2027-32?](https://www.youtube.com/watch?v=PvJ14d0r3CM) * **Jim Keller** (Tenstorrent) ----> AGI: ~[2027-32?](https://www.youtube.com/watch?v=0ll5c50MrPs#t=31m25s) ... Nathan Helm-Burger** (AI alignment researcher; lesswrong-author) ----> AGI: ~[2027-37](https://www.lesswrong.com/posts/wgcFStYwacRB8y3Yp/timelines-are-relevant-to-alignment-research-timelines-2-of) * **Geordie Rose** (D-Wave, Sanctuary AI) ----> AGI: ~[2028](https://www.youtube.com/watch?v=1JnTKkoPd1U#t=23m27s) * **Cathie Wood** (ARKInvest ... Sustensis) ----> AGI: ~[2030](https://www.youtube.com/watch?v=2wQ_XLwF6k4#t=22m55s) * **Ross Nordby** (AI researcher; Lesswrong-author) ----> AGI: ~[2030](https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon) * **Ilya Sutskever** (OpenAI) ----> AGI: ~[2030-35?](https://old.reddit.com/r/singularity/comments/kxgg1b/openais_chief_scientist_ilya_sutskever_comments/) * **Hans Moravec** (Carnegie Mellon University