ReasonablyBadass t1_jedodjw wrote
Reply to [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
Who will administrate access to it?
ReasonablyBadass t1_je0rwr3 wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Is it possible the older questions cover better-known problems, so more training data existed for them, while the newer ones cover newer concepts that aren't really represented on the net yet?
ReasonablyBadass t1_jdyzvv1 wrote
Reply to comment by starfries in [D] FOMO on the rapid pace of LLMs by 00001746
That dude was a researcher before he wrote that though
ReasonablyBadass t1_jdyyyo3 wrote
Reply to [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
Interesting. I can't look at the raw data right now: was memory the problem? Did it ignore clues it got? Or was it more conceptual, did it not figure out properties of objects it asked for?
Could you quickly list the terms it did get right?
ReasonablyBadass t1_jdx7f88 wrote
Reply to [D] Simple Questions Thread by AutoModerator
I still remember the vanishing/exploding gradient problem. It seems to be a complete non-issue now. Was it just ReLUs and skip connections that solved it?
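For what it's worth, the skip-connection part is easy to see in a toy experiment (my own sketch, not from this thread, assuming NumPy is available): backprop a gradient through a deep stack of tanh layers with deliberately small weights, with and without an identity skip path. The identity path gives the gradient a multiplication-free route back through the network, so a long chain of small Jacobians can't drive it to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width = 50, 64
# Deliberately small weights so the plain network's gradient shrinks per layer.
Ws = [rng.normal(0.0, 0.5 / np.sqrt(width), (width, width)) for _ in range(depth)]
x = rng.normal(size=width)

def grad_norm(skip: bool) -> float:
    """Norm of dLoss/dInput for layers y = tanh(W h) (+ h if skip),
    starting from an all-ones gradient at the output."""
    h, cache = x, []
    for W in Ws:                      # forward pass, caching pre-activations
        z = W @ h
        cache.append((W, z))
        h = np.tanh(z) + (h if skip else 0.0)
    g = np.ones(width)                # pretend dLoss/dOutput is all ones
    for W, z in reversed(cache):      # backward pass
        g_branch = W.T @ (g * (1.0 - np.tanh(z) ** 2))  # through tanh(W h)
        g = g_branch + (g if skip else 0.0)             # identity path, if any
    return float(np.linalg.norm(g))

print(grad_norm(skip=False))  # tiny: the gradient has all but vanished
print(grad_norm(skip=True))   # stays at a healthy magnitude
```

Without the skip, every backward step multiplies the gradient by a small Jacobian and it decays geometrically over 50 layers; with the skip, the `+ g` term preserves it. (ReLUs attack the other half of the problem: their derivative is exactly 1 on the active side, instead of tanh's everywhere-below-1 slope.)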
ReasonablyBadass t1_jdqs3dm wrote
Reply to My camera setup on the International Space station. More details in comments. by astro_pettit
Is that disembodied head some necromantic camera thingy?
ReasonablyBadass t1_jdpy0fs wrote
I feel rapid AI development kinda explains the Fermi paradox though: why bother with megastructures or whatever when you can turn a planet into computronium and explore far wilder artificial worlds forever?
ReasonablyBadass t1_jdgsfv5 wrote
Reply to comment by Maleficent_Refuse_11 in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
Auto-regressive and external knowledge hub aren't contradictions though, are they?
Theory of Mind: there is a recent Edan Mayer video about this exact topic
ReasonablyBadass t1_jdgs88s wrote
Reply to comment by visarga in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
Someone trying to get the word out? Or PR stunt?
ReasonablyBadass t1_jd6lnmk wrote
Reply to comment by not_particulary in [D] Running an LLM on "low" compute power machines? by Qwillbehr
Note, Alpaca isn't fully open source. Its legal situation is kinda murky.
ReasonablyBadass t1_jd26flf wrote
What was the hardware this was trained on? BOINC-like distribution?
And what are the hardware requirements for running it locally?
ReasonablyBadass t1_jcsu1yv wrote
Reply to comment by ninjasaid13 in [P] The next generation of Stanford Alpaca by [deleted]
Not sure how much this is established law.
Anyway, Alpaca says so themselves on their website: https://crfm.stanford.edu/2023/03/13/alpaca.html
ReasonablyBadass t1_jcscza5 wrote
The best bet we most likely have is to instantiate as many AGIs as possible at the same time. It will necessitate them developing social skills and values to cooperate.
ReasonablyBadass t1_jcs32ea wrote
Reply to [P] The next generation of Stanford Alpaca by [deleted]
Careful. That MIT license won't work, I think, thanks to ClosedAI's licenses
ReasonablyBadass t1_jae7zhu wrote
Reply to [R] Microsoft introduce Kosmos-1, a Multimodal Large Language Model (MLLM) that can perceive general modalities, learn in context (i.e., few-shot), and follow instructions (i.e., zero-shot) by MysteryInc152
Can't read the paper right now, can someone summarize: is it a new model or "just" the standard transformers but used on multimodal data? If it is new, what are the structural changes?
ReasonablyBadass t1_jabfviu wrote
Reply to Everyone, say hi to Redwood 🌲 by Prosciutto4U
Trees to meet you
ReasonablyBadass t1_ja2cj0j wrote
Reply to YouTube captions sure are something else by jdwill1991
"Tell me when you see one, I haven't got any in a while" - Batman, probably
ReasonablyBadass t1_j9seeio wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
That seems wildly pessimistic.
I would be shocked if one doesn't exist by 2030
ReasonablyBadass t1_j9s3yq1 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I think the basic issue of AI alignment isn't AI. It's trying to figure out what our values are supposed to be and who gets to decide that.
ReasonablyBadass t1_j99eja2 wrote
Reply to comment by Herbert-Quain in Scientists create carbon nanotubes out of plastic waste using an energy-efficient, low-cost, low-emissions process. Compared to commercial methods for carbon nanotube production that are being used right now, ours uses about 90% less energy and generates 90%-94% less carbon dioxide by Wagamaga
Efficiency has nothing to do with how much energy you need in absolute terms. It's about the ratio between resource use and end product.
If other processes need less heat but produce a lot of unusable waste, they are less efficient.
Edit: also, flashing, afaik, means heating for only a very short amount of time. Might not be all that much energy overall, actually
ReasonablyBadass t1_j95c1et wrote
Reply to comment by mikeesfp in Just below the surface, near Praya de Rocha, Portugal [1152x2048][OC] by mikeesfp
True. But it just looks like the place in a movie or game you would expect to see littered with gold coins.
ReasonablyBadass t1_j954jbs wrote
I can a) hear this picture and b) expect to see treasure.
ReasonablyBadass t1_j7jjxzq wrote
The AI wars are heating up rapidly.
The next few years are going to be nuts.
ReasonablyBadass t1_jefxvhx wrote
Reply to Sam Altman's tweet about the pause letter and alignment by yottawa
And we would trust the guys who sold out with alignment because...?