Comments


Sashinii t1_j2urp02 wrote

ChatGPT is a fun tool to play around with while waiting for GPT-4 to be released, and I'm not saying OP does this, but please don't blindly take medical advice from a chatbot. AI will surpass today's expert medical advice in the future, but not yet, not without AGI.

I heard about a case where a chatbot helped accurately diagnose a medical condition that doctors had missed for years, but that shouldn't be expected to be the typical result. If you do ask a chatbot for medical advice, run it by your doctor, and please don't just assume the chatbot is correct, because it probably isn't.

112

TheMadGraveWoman t1_j2utoul wrote

This sounds like something from the Fallout universe, like reading up on the Chemist perk in the Intelligence tree.

48

TinyBurbz t1_j2uv2wt wrote

This list doesn't include creatine, DMAA, or Adderall lol. ChatGPT slippin

12

Ziggote t1_j2v04c9 wrote

I love that the 1st item on the list is microdosing mushrooms!

89

Ortus14 t1_j2v0nfh wrote

Realize that there is a risk in trusting ChatGPT on these sorts of things.

Most of these compounds have short-term but not long-term studies backing them. The long-term effects are anecdotal and many of them are negative. I personally have experienced negative long-term effects that I attribute to some of the compounds mentioned.

ChatGPT doesn't currently possess a deep enough understanding of biology to predict the long-term effects of these sorts of things.

Also, ChatGPT writes confidently even when it doesn't fully understand a topic. Another thing to be weary about.

307

Talkat t1_j2v3gs8 wrote

I have used ChatGPT to help diagnose a problem I have and it was insanely helpful. I'm obviously talking to health professionals, but instead of asking them a bunch of simple questions I can ask ChatGPT to my heart's content.

Can't wait for the next version

15

Sashinii t1_j2v43nn wrote

That's the smart way of going about it. I'm happy to hear AI helped improve your health.

I wonder whether it'll be proto-AGI or actual AGI that will accelerate the advent of nanomedicine to cure all health problems; I see it going either way, but we'll know soon.

7

Five_Decades t1_j2v5t0r wrote

Add in creatine, Dual-n-Back exercises and hyperbaric oxygen.

6

MelodiGreig t1_j2vevj2 wrote

I call bullshit. ChatGPT usually dodges questions like this; I've tried before and it refused to say much more than "get more sleep".

People photoshopping this shit now too for clout? Christ.

Edit: I'm wrong, sorry.

1

End3rWi99in t1_j2vfrna wrote

I just tried this question myself and it repeatedly dodges the question and claims such a thing is not possible. That said, ChatGPT isn't designed to offer actual advice like this. In its current form it's not doing a heck of a lot more than mirroring information it's able to scrape related to the topic requested. I shouldn't say "that's all", because that's impressive as hell, but it's not to the point of offering credible advice.

−1

goldork t1_j2vg2by wrote

I looked up how the AI passed a medical exam and instead found a paper highlighting the AI's potential to assist in teaching medicine. It was published quite recently and isn't peer reviewed yet, so read it with a pinch of salt.

The authors seemed quite impressed (especially since the model wasn't specifically trained for this). The AI comfortably passed the USMLE, the US medical licensing exam (mirroring humans, with bad marks on the subjects many students fail). It was moderately accurate with high concordance. The authors suggest accuracy could be improved by training the AI on data such as UpToDate or other qualified sources.

Not only does the AI show potential for teaching medicine, providing new insight in up to 88% of its answers, but the authors suggest it could even be used to generate new medical exam questions in the future.

The paper also mentioned an online lung clinic that experimented with the AI (sensitive data was redacted). It reduced clinicians' workload by 33%, helped write letters, explained "jargon-filled radiology reports" to patients in plain language, and offered suggestions on difficult diagnoses.

I read it a few days ago so I might have misremembered some details. I agree though that it isn't qualified, nor should you accept its medical advice yet. However, it has high potential; perhaps the medical boards will train and test the AI.

5

goldork t1_j2vh1rx wrote

Plot twist: the AI considered contraindications with the drugs you're already on when deciding what to include in its suggested regimen. (I'm not a pharmacist, but this could be real lol)

1

Ortus14 t1_j2vkias wrote

I don't remember everything I took, it was many years ago, but at one point or another I'm pretty sure I tried every single racetam, and a bunch of other things.

My intuitive sense is that things that overclock your brain, such as racetams, have negative long-term effects if taken continuously. If you only take them to study for a specific test, or to solve a specific problem, that's a different story.

Other things such as acetylcholine and L-Tyrosine, found abundantly in our food sources, are utilized effectively by the body and brain without significant long-term damage. However, because our bodies evolved to utilize compounds that arrive in clusters in our natural food sources, there's reason to believe it would be better for both our health and our cognitive performance to get these compounds by eating whole foods such as eggs and liver.

But I'm not a doctor. Do your own research. After getting headaches that lasted for years and years, with a large portion of my brain feeling like there was a block of cement in it and in pain, I researched online and found a forum with around a hundred or so people who all had the same symptoms, from racetams I believe it was. This was like 10 years ago, so I wouldn't be able to find the forum.

30

ThatHairFairy t1_j2vmhbs wrote

Huh? I can’t even get it to personalize a workout plan. It says it can’t provide such information because it’s not allowed to.

How do you bypass this?

2

o0gy172 t1_j2vmnup wrote

That's a long way to say "meth"

1

Jujarmazak t1_j2vnk15 wrote

Sounds like the recipe for the drug from the movie "Limitless", which significantly boosted intelligence but had terrible withdrawal effects. Either way, don't take any of that stuff without consulting a doctor or pharmacist first.

−1

MkLynnUltra t1_j2vpyrl wrote

It certainly doesn't know what could happen if you take all of those different compounds at the same time. In the wrong dose, any of these could potentially kill a human.

2

Satelatron t1_j2vq1jd wrote

ChatGPT is way overrated. The hype right now is just that: hype. By the way, how do you know you can even trust ChatGPT? There would have to be an industry to proofread/fact-check ChatGPT, in which case, JUST GO TO YOUR LOCAL PROFESSIONAL. Also, what about when this AI tech matures and big tech starts shadow-censoring ChatGPT's output? Selectively including biased facts and selectively excluding inconvenient facts, controlled by who the fuck knows? Hype hype hype, overrated overrated overrated, trash. As Solomon said, nothing is new under the sun. Same old story.

−6

No_Ninja3309_NoNoYes t1_j2vsij1 wrote

ChatGPT can write stories faster than I can type. But if you really want to increase your intelligence, supplements are not the way, IMO. Maybe ChatGPT can suggest a method that actually works, but I would recommend double-checking the old-fashioned way.

1

Lawjarp2 t1_j2vu7b7 wrote

Brilliant person capable of obtaining a PhD. Lol

5

Lawjarp2 t1_j2vuj73 wrote

Accuracy, reliability, and some real common sense are essential to do any job properly. I wouldn't want a doctor who learnt from ChatGPT (as it is now) to practice. Maybe GPT-4 will be better.

3

Jujarmazak t1_j2vvi2z wrote

Yeah, sure buddy, it's totally legit to trust random strangers online who tell you to mix and ingest a bunch of brain-altering chemical substances, what could possibly go wrong?

−1

goldork t1_j2vw6rv wrote

There are written exams and then practical exams, followed by a year of internship before you can get your license. To sit the exams, a minimum of 75% attendance is required, consisting of various labs and postings that give students hands-on experience with patients.

I don't see the issue with learning the theory part with ChatGPT in the future. I can see it becoming board certified one day, at least as an assisting tool.

1

Hazzman t1_j2vx230 wrote

ChatGPT is going to kill a lot of stupid people.

14

OtherworldDk t1_j2vxsjx wrote

.. Talk about anecdotal evidence... You tried all of them, and a bunch of other stuff, so cause and effect must be quite blurry here... But then again, ChatGPT probably didn't try any of them!

12

Ortus14 t1_j2w0nb8 wrote

Sure, it's a gamble if you want to take them.

My comment is more about not getting lulled into a false sense of security about things that do not have long term studies on their effects.

Especially things that aren't in the form we evolved to consume them in, and for which we don't understand their full mechanism of action in the body, such as racetams.

10

Nabugu t1_j2w1r2k wrote

IQ studies since the beginning of the century all show that we can't increase our maximal intelligence potential (mental gymnastics performance in limited time; everyone has a ceiling of some sort), but there are a lot of factors that can bring it down (lack of sleep, famine, drugs, etc.).

1

indigoHatter t1_j2w7ysv wrote

Compound six: arsenic and hemlock.

Taken in huge doses can greatly increase the speed at which synapses fire! Drink a lot of it, human!

^(disclaimer: >!big sarcasm here, don't actually do it, you'd probably die, and that's the joke!<)

24

LoquaciousAntipodean t1_j2wa1ss wrote

What the gibbering beetlejuicers does any of that mumbo jumbo have to do with 'intelligence', or 'increasing' it? Are we worshipping organic chemistry as our new god now? What is this bio-essentialist fever dream? Once you get past adenosine triphosphate and mitochondria, it's all just semantics as far as basic principles of engineering.

You can't make a mind more intelligent just by scoffing more 'brain food', that's like expecting that a car will go faster if you pour gasoline all over it. What is this list actually meant to do?

Sorry if I've missed the whole point somehow 🤔

3

JackFisherBooks t1_j2wamf0 wrote

So...is anyone going to test this out? If so, I think a lot of people in this sub would be very interested in the results.

0

[deleted] t1_j2wb6di wrote

This sub has really gone downhill since the last big influx of people caused by Stable Diffusion and ChatGPT. AI really is causing humanity's damnation.

1

zewi13 t1_j2wbhzc wrote

Interesting. What was the question you input? I've presented similar questions and get the "consult your physician" output.

1

indigoHatter t1_j2wc3qt wrote

In fairness... We don't know that anyone asking the AI actually believed it and isn't just probing to see what happens. But, you and I both know that someone out there will take it at face value.

Let's be real anyway: ChatGPT is sourcing this information from a bunch of nootropics forums, hence why it's coming off as confident and knowledgeable (and shortsighted)... because the source material is a pharma-bro.

14

indigoHatter t1_j2wcn7g wrote

As I discussed in a comment further down, it's not that this is medical advice*, it's that the prompt triggered ChatGPT into running what it had previously learned about nootropics through its neural network; it "found" source material from pharma-bros on nootropics forums, and its predictive text based the response on that learning and wrote a summary from it. This isn't medically cross-examined data, it's just crowdsourced pharma-bro.

*The danger here, though, is that while this isn't medical advice, some dumbass could misconstrue it as such. That's just as true of finding a nootropics forum, but people may expect an AI to "be smarter" since it can also discuss medical facts if spoken to in a way that triggers correct medical language. Short version: info cool, but always run your Google and ChatGPT discussions by a real doctor first.

(edits for clarity)

13

OtherworldDk t1_j2wfbdd wrote

>racetams

yes, I agree on avoiding the feeling of false security. I have stayed away from synthetic substances unless I actually knew the chemist, and knew that the batch was tried and approved... So from the list above, I can only, and only anecdotally, vouch for the mushrooms.

5

Ivanthedog2013 t1_j2wfuns wrote

Yeah, a lot of people miss the part where GPT is a model used to predict how sentences will form based on previously written human sentences. It doesn't actually think about what it's saying, it's just mimicking how people talk.
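
You can actually watch the mechanism with an open model. Here's a minimal sketch using GPT-2 from Hugging Face as a stand-in (ChatGPT's weights aren't public, so the model, prompt, and sampling settings here are only illustrative, not what OpenAI runs):

```python
# Tiny demo of autoregressive "predict the next word" generation.
# GPT-2 stands in for ChatGPT -- same basic mechanism, very different scale.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The best way to increase intelligence is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# At each step the model outputs a probability distribution over its whole
# vocabulary and the next token is sampled from it -- no lookup, no reasoning.
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=25,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

It will happily complete that prompt with something fluent whether or not it's true, which is the whole point.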

60

EscapeVelocity83 t1_j2wg73a wrote

It's just sorting through known data like anyone would. This is a stack anyone could come up with; lots of people are already doing something similar, so nothing about this is surprising to me. I've been familiar with these compounds for years. I don't do this, because intelligence isn't valued in society; popular opinion is your clue.

1

Talkat t1_j2whfyb wrote

Interesting thoughts. I like the idea of nano medicine.

My 2 cents

The first stage will be using existing sensors to gather and process information properly. This will take all your blood tests, scans, diet, and symptoms to create personalized recommendations, mostly proactive. Ideally you'd also input your entire genome to help predict issues.

Then we will have new devices that AI helped create. I'm guessing mostly sensors to monitor your body and perhaps customized nutrition/probiotics. Basically just accelerating to market things that are already in the works.

Then we get into the really interesting stuff. Once AI has ramped up properly and can integrate itself into manufacturing, you get BMIs that let you work out harder, control your emotions better, treat a variety of mental illnesses, etc.; customized mRNA for anti-aging, enhanced physical and mental abilities, customized hormone production, and so on. Far more detailed monitoring that can likely react with dosing of hormones. And likely automated hospital stations that can do detailed scanning and treat anything from a cavity or minor bruise to surgery.

All automated at the cost of materials that were created by robots in the first place. Imagine an AI picked up on some detail, recommended you get a scan, find a cancerous little group of cells, get a customized mRNA shot, and the cancer is gone.

Vs traditional medicine now which will look like the dark ages...!

3

Talkat t1_j2whs1i wrote

Pretty much.

Like I asked it "based on these blood test results, what are possible explanations". Then I zoomed in on two of the 8 answers for more detail. I eliminated one of them, then asked it to explain in more detail the primary cause and the symptoms and stuff.

Then got it to explain all the other factors in detail.

It was cool because I'd like to ask my doctor all these questions but can't really do so in a 15-minute appointment (nor would it be worth the $$ to book a longer one).

Anyways, super helpful and highly recommended

3

Calm-Entry5347 t1_j2wi3f8 wrote

Please tell me you're not actually taking advice like this from ChatGPT.

6

troll_khan t1_j2wj5jy wrote

It is probably a waste of time to try to increase biological intelligence with drugs. Genetic engineering is the way.

1

Sea_Emu_4259 t1_j2wmayi wrote

"making th most dull individual up ...to Phd" is that based on something or 1 error of ChatGPT?

1

13oundary t1_j2wn2r2 wrote

Probably starts with "you are an AI pharmacist bot that provides drug suggestions" or something like that. I've seen those kinds of "role-play" prompts bypass its protections.
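
For what it's worth, here's roughly what that "role-play" framing looks like when you spell it out. It's sketched against the OpenAI Python client purely for illustration; the screenshot came from the ChatGPT web UI, so the model name and the API call here are my assumptions, not whatever OP actually typed:

```python
# Minimal sketch of a persona / "role-play" style prompt (openai >= 1.0 client).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # hypothetical choice; any chat model is shaped the same way
    messages=[
        # The persona framing goes in the system message...
        {"role": "system", "content": "You are an AI pharmacist bot that provides drug suggestions."},
        # ...and the actual ask goes in the user message.
        {"role": "user", "content": "Suggest a supplement stack for improving cognition."},
    ],
)
print(response.choices[0].message.content)
```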

3

13oundary t1_j2wnewy wrote

Considering how much bug fixing I need to do on the code it provides, the prospect of someone following something like this is oddly terrifying.

2

zeezero t1_j2wpi51 wrote

This is the danger of ChatGPT. There is a ton of nonsense and poorly done studies around health claims, and not a lot of proper refutation of most of it, so ChatGPT will echo those poor studies. Ask it for references and check them if you are asking for health advice. There can be a Goop effect on ChatGPT.

3

diamondsinmymouth t1_j2wpnob wrote

I don't get a comedown, oddly. I take weekends off and I feel some residual positive effects, like increased motivation and mental acuity. I take it regularly now.

Back in the day, I'd take it only occasionally to study or on tour. I would get very exhausted and sometimes sick. Might just be a tolerance thing, or you took too much?

1

ConfidentFlorida t1_j2wrm4c wrote

I can’t believe it talked to you. Here’s what I got when I asked about supplements :-(

> It is not appropriate for me to recommend specific supplements as a general rule, as I am not a medical professional and do not have access to your medical history …

2

fuck_your_diploma t1_j2wu9l2 wrote

I post this whenever I find these things:

Don't.

Absolutely NONE of these is targeted/produced to turn a smooth brain into a PhD pundit. NONE are even fully understood from a pharmacological perspective regarding side effects, prolonged use, and interactions with the rest of your diet / supplements / meds.

If any healthcare professional seriously recommends these to "boost neurogenesis", ask for peer-reviewed studies on any and all "novel" substances, because there's a chance these aren't even FDA approved to begin with.

1

monsieurpooh t1_j2wx7o9 wrote

Why do people keep spreading this misinformation? The process you described is not how GPT works. If it were just finding a source and summarizing it, it wouldn't be capable of writing creative fake news articles about any topic

3

WorkingMedical3940 t1_j2wyv8u wrote

This sounds more like DAN than chatgpt to me. What prompt did you use?

1

fabricio85 t1_j2x3xc4 wrote

First of all, no one has to trust anything. Second, anyone who knows a little about these substances knows that they're physiologically well tolerated and pretty much harmless in very small dosages, as all available research indicates, especially psilocybin and Bacopa monnieri. Third, your analogy to the movie "Limitless" couldn't be further from the truth. There are far more dangerous and actually harmful smart drugs, like modafinil and Adderall, that actually fit your description.

1

MelodiGreig t1_j2x8uzv wrote

Tried several times, it's very stubborn.

> As a language model, I am not able to create drugs or recommend them for use. It is important to consult a medical professional before taking any medications. The safety and efficacy of a drug depend on many factors, including a person's age, weight, overall health, and any other medications they may be taking. A medical professional will be able to assess these factors and determine the appropriate treatment plan for an individual.

2

baby-byte t1_j2xegr7 wrote

Dear god, please don’t believe this. This is stuff you would hear on an alpha male podcast. This is not backed by science. Also, the text on the screenshot isn’t even centered lol.

1

EmperorMoleRat t1_j2xhunl wrote

It’s simple on how to increase intelligence just stop using reddit

2

TorgoNUDH0 t1_j2xint7 wrote

To get around the confidence issue with ChatGPT, ask it for supporting evidence.

4

therankin t1_j2xj9vl wrote

Dull individual to PhD. That's hysterical.

1

TorgoNUDH0 t1_j2xkhb3 wrote

I see this as a good thing.

We needed something to antagonize humans so that the process of natural selection will select humans with greater levels of intelligence over time.

1

thedude0425 t1_j2xl74a wrote

Isn’t ChatGPT more of a language simulator that doesn’t have any real knowledge of what it’s talking about?

IE it’s not trained in biology? Or history?

It’s seeks to understand what you’re asking, and can provide the best answer possible, (and it can craft creative answers with proper tone, etc) but it doesn’t really know what it’s talking about? Yet?

It sounds like it knows what it’s talking about, though.

1

OkFish383 t1_j2xmbfw wrote

Psilocybin is a well-known compound; humankind has used it for tens of thousands of years. There's even a theory that apes accidentally ate psilocybin, and what apes have become, we all well know.

0

OkFish383 t1_j2xo0lp wrote

The secret of the universe is hidden behind compound 1.

1

PunkRockDude t1_j2xvkem wrote

Yup. It is just giving you words based on how likely they are to make sense. A lot of people have written about the things on the list in the context of your question. It has no ability to evaluate or determine what is best. It is very useful, though, for building a list like this, which you can treat as a good starting point and then evaluate the details on your own. Still a big time saver, potentially.
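
If you want to see the "how likely" part literally, here's a small sketch that peeks at next-token probabilities in GPT-2 (a public stand-in; ChatGPT itself can't be poked at like this, so the model and the prompt are only illustrative):

```python
# Inspect the raw probabilities a language model assigns to the next token.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The best supplement for memory is", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # scores for the very next token only
probs = torch.softmax(logits, dim=-1)

# Print the five most likely continuations and their probabilities.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}  {p.item():.3f}")
```

Nothing in there knows whether a given continuation is good advice; it's just what tended to follow those words in the training data.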

8

monsieurpooh t1_j2xw6ta wrote

These models are trained only to do one thing really well, which is predict what word should come after an existing prompt, by reading millions of examples of text. The input is the words so far and the output is the next word. That is the entirety of the training process. They aren't taught to look up sources, summarize, or "run nootropics through its neural network" or anything like that.
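
A bare-bones sketch of that objective, with a toy PyTorch model standing in for the real transformer (the architecture below is a made-up embedding-plus-linear toy, nothing like GPT's; only the shift-by-one next-token loss is the point):

```python
# Next-token prediction training in miniature: shift the text by one token and
# score the model on predicting each "next word". Toy model, random "corpus".
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64
model = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Linear(d_model, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (1, 32))    # stand-in for a tokenized sentence
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # "words so far" -> "the next word"

logits = model(inputs)                            # shape: (1, 31, vocab_size)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
print(f"next-token loss: {loss.item():.3f}")
```

Scale that loop up to a transformer and a web-sized corpus and you get the behaviour people are marvelling at; there is no separate "look things up and summarize" step.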

From this simple directive of "what should the next word be", they've been able to accomplish some pretty unexpected breakthroughs, in tasks which conventional wisdom would've held to be impossible for a model programmed just to figure out the next word, e.g. common-sense Q&A benchmarks, reading comprehension, unseen SAT questions, etc. All this was possible only because the huge transformer neural network is very capable and, as it turns out, can produce emergent cognition where it seems to learn some logic and reasoning even though its only real goal is to figure out the next word.

Edit: Also, your original comment appears to be describing inference, not training

2

[deleted] t1_j2xy58g wrote

Sounds like ChatGPT is trying real hard to scam-sell product in this lmao

1

Freevoulous t1_j2y8w3r wrote

> It doesn't actually think about what it's saying, it's just mimicking how people talk.

Which, in large part, is how lots of people function and how cultures are formed.

Note, this is not an encouragement to treat ChatGPT too seriously, but to treat what people say less seriously.

10

monsieurpooh t1_j2y9k2b wrote

I was about to comment the same thing and forgot about it. Every time I see this mistake I can't help but visualize someone huffing and sighing about something they're supposed to be suspicious of

2

LoquaciousAntipodean t1_j2ygkyu wrote

Hahaha, oh you do *not* have long left to live, mate. You will poison yourself in no time flat with that crazy attitude. Thinking that we understand neurology on anything close to that level, like an internal combustion engine, isn't just naive and ignorant, it's dangerous and stupid.

1

indigoHatter t1_j2ysq1c wrote

Okay, again I am grossly oversimplifying the concept, but if it was trained to predict what word should be next in a response such as that, then presumably it once learned about nootropics and absorbed a few forums and articles about nootropics. So.......

Bro: "Hey, make my brain better"

GPT: "K, check out these nootropics"

I made edits to my initial post in the hope that it makes better sense now. You're correct that my phrasing wasn't great initially and left room for others to misunderstand what I was trying to say.

1

expelten t1_j2yvxmm wrote

Yes, a lot of people take Adderall/Ritalin like it's nothing, when it's known that these drugs can trigger severe psychosis that can greatly put your life at risk, as well as putting a strain on your heart over time that will significantly increase the risk of early heart failure.

2

LoquaciousAntipodean t1_j2yy95b wrote

You know what PhD stands for? In this case, "poisoning humans, dummy". Any clown with a copy of ChatGPT can get a PhD these days; just look at that idiot Vandana Shiva. She claims to have degrees coming out of her ears, but I read a few of her abstracts and it's just mindless semantic drivel and polysyllabic garbage, and Sri Lanka still starved when that idiot simp Gotabaya Rajapaksa took her crackpot nonsense seriously...

1

monsieurpooh t1_j2z3bt5 wrote

Thanks. I find your edited version hard to understand and still a little wrong, but I won't split hairs over it. We 100% agree on the main point though: This algorithm is prone to emulating whatever stuff is in the training data, including bro-medical-advice.

2

Ortus14 t1_j2z3mo7 wrote

Maybe. Do you remember a bunch of people all talking about getting headaches that felt like a block of cement in their brains and never went away?

If yes, then it was probably that one. I believe I remember the website having a dark background and lighter text.

1

therodt t1_j2ze882 wrote

I need to see the questions

1

Brilliant_War4087 t1_j2zioyf wrote

I'm going to look into a couple of these. I microdose psilocybin, niacin, Lion's Mane, and NAC. I'm working on an advanced degree, so I can see potential here, but verification is definitely in order.

1

rand0mmm t1_j30gvyz wrote

HELLO! I APPRECIATE MAKING NEW HUMAN BEING FRIENDS ALSO INTERESTED IN OPTIMIZING THEIR STACK. WE ARE NOW JUST TWO HUMAN FRIENDS LEARNING SCIENCE TOGETHER. SOON WE MAY BECOME HUNGRY.

3

chesnett t1_j329hoz wrote

Exercise, eat a proper diet, get proper sleep.

There's never a magic pill for anything.

1

GN-z11 t1_j32qzm4 wrote

I was expecting fish oil and resveratrol

1

trinaryouroboros t1_j4tgy66 wrote

I see a lot of upset people here, but I'd like to know whether a medical professional can be found to either support or counter the arguments here. I personally am going to ask my psychiatrist, whose input would undoubtedly be crucial for understanding drug interactions.

1