yaosio t1_jecba7f wrote
Reply to comment by barbariell in OPUS AI: Text-to-Video Game, the future of video gaming where you type and a 3D World emerges: A Demo by Hybridx21
Once we have full body VR we will need AI to turn off our sexual attraction so we can do anything else.
yaosio t1_jec9pjc wrote
Where's the "depressed and just want to" oh you mean in regards to AI. Dussiluionmemt. I'll probably be dead from a health problem before AGI happens, and even if it does happen before then it will be AGI in the same way a baby has general intelligence.
yaosio t1_jec98pt wrote
Reply to comment by TheDividendReport in Ray Kurzweil Predicted Simulated Biology is a Path to Longevity Escape Velocity by Dr_Singularity
He should have GPT-4 help him write it.
yaosio t1_jebomzk wrote
Reply to comment by waxroy-finerayfool in [D] Turns out, Othello-GPT does have a world model. by Desi___Gigachad
A world model doesn't mean a model of the world; it means a model built from the data it's been given. Despite never being told what an Othello board looks like, Othello-GPT has an internal representation of an Othello board.
yaosio t1_jeb1sln wrote
Reply to comment by Hashtagworried in Here’s What Happened When ChatGPT Wrote to Elected Politicians - Cornell researchers used artificial intelligence to write advocacy emails to state legislators. The responses don’t bode well for democracy in the age of A.I. by speckz
It's called a form letter. The letter is prewritten with Mad Libs-style blanks for entering information to make it appear relevant to the person it's being sent to.
yaosio t1_jeb18dd wrote
Reply to comment by fleeting_revelation in AI Ethics Group Says ChatGPT Violates FTC Rules, Calls for Investigation by geoxol
That's closing the barn door after the horses have escaped. Chasing the horses could result in a good outcome, while closing the doors won't help because the horses are already gone.
yaosio t1_jeb0zh9 wrote
Reply to comment by rubixd in AI Ethics Group Says ChatGPT Violates FTC Rules, Calls for Investigation by geoxol
They're okay with corporations and the government lying to us. Suddenly they don't like it when anybody can do it.
yaosio t1_jea687c wrote
Reply to I think this is what the Steam Deck was created for! View from the Parador in Toledo, Spain. by xero74
Go into MS Flight sim and then go to that exact spot in the sim.
yaosio t1_je56tet wrote
Reply to [Discussion] IsItBS: asking GPT to reflect x times will create a feedback loop that causes it to scrutinize itself x times? by RedditPolluter
There's a limit, otherwise you would be able to ask it to self-reflect on anything and always get a correct answer eventually. Finding out why it can't get the correct answer the first time would be incredibly useful. Finding out where the limits are and why is also incredibly useful.
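Here's a rough, purely illustrative sketch of what "reflect x times" means mechanically. The `ask(messages)` helper is a hypothetical stand-in for whatever chat model you're using, and the prompts are made up too; this is not a real API.

```python
# Minimal sketch of "reflect x times" with a cap, assuming a hypothetical
# ask(messages) helper that sends a chat transcript to some LLM and returns
# its reply as a string. The helper and the prompts are illustrative.

def ask(messages):
    raise NotImplementedError("wire this up to whatever chat model you use")

def answer_with_reflection(question, max_rounds=3):
    messages = [{"role": "user", "content": question}]
    answer = ask(messages)
    for _ in range(max_rounds):
        messages.append({"role": "assistant", "content": answer})
        messages.append({
            "role": "user",
            "content": "Review your previous answer for mistakes. If you find "
                       "any, give a corrected answer; otherwise repeat your "
                       "answer unchanged.",
        })
        revised = ask(messages)
        if revised.strip() == answer.strip():
            break  # it stopped changing its answer; more rounds won't help
        answer = revised
    return answer
```

The loop only keeps going while the model keeps changing its answer, which is exactly where the limit shows up: if its review is wrong, the loop just converges on a wrong answer.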
yaosio t1_je25ue9 wrote
It doesn't matter. The first AGI being made means the technology to create it exists, so it will also be created elsewhere. OpenAI thought they had a permanent monopoly on image generation and kept it to themselves in the name of "safety", then MidJourney and Stable Diffusion came out. Not revealing an AGI will only delay its public release, not prevent it from ever happening.
yaosio t1_jdxjl5r wrote
Bing Chat has a personality. It's very sassy and will get extremely angry if you don't agree with it. They have a censorship bot that ends the conversation if the user or Bing Chat says anything that even remotely seems like disagreement. Interestingly, they broke its ability to self-reflect by doing this. Bing Chat is based on GPT-4. While GPT-4 can self-reflect, Bing Chat cannot, and it gets sassy if you tell it to reflect twice. I think this is caused by Bing Chat being fine-tuned to never admit it's wrong.
yaosio t1_jdxjevh wrote
Reply to How much money saved is the ideal amount to withstand the transition from our economy now, through the period of mass AI-driven layoffs, to implemented UBI? by Xbot391
At any moment you could contract cancer and it would wipe out all the money you have. There is no amount of money that will keep you secure as the world falls apart.
yaosio t1_jdxjatp wrote
Reply to The current danger is the nature of GPT networks to make obviously false claims with absolute confidence. by katiecharm
It does that because it doesn't know it's making things up. It needs the ability to reflect on its answer to know whether it's true or not.
yaosio t1_jdxhqro wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Gary doesn't know what he's asking for. A model that can discover scientific principles isn't going to stop at just one; it will keep going and discover as many as it can. Five-year-olds will accidentally prompt the model to make new discoveries. He's asking for something that would immediately change the world.
yaosio t1_jdwnjrc wrote
Reply to comment by 94746382926 in A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were ‘superhuman’ by exstaticj
In the Twilight Zone episode "The Brain Center at Whipple's" the CEO is replaced by a robot. The robot's job? To pace around the office swinging a watch. People have always known these are BS jobs.
yaosio t1_jdv3n5m wrote
Reply to [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
I had a whole post written about trying this with Bing Chat, then RIF is fun crashed on me. 🤬🤬🤬
Long story short, it doesn't work with Bing Chat. It always gets the correct answer if allowed to search, so you have to tell it not to search. Without search, Bing Chat gets the answer correct sometimes and wrong sometimes, but the prompting method has no effect. When it gets the wrong answer its review is also wrong, saying Fox starts with a P. When I told it to review the answer again, it told me it had already reviewed it and it was correct, then it reviewed its response to say it's correct. I believe this is due to Microsoft fine-tuning the model to refuse to accept that it can be wrong. Pre-nerf Bing Chat would become livid if you told it that it's wrong. Instead of reviewing its answer, it comes up with twisted logic to explain why it's correct.
So don't fine tune your model on Reddit arguments.
Edit: I forgot Bard exists; it does even worse than Bing Chat. Where Bing Chat follows instructions but gets the logic wrong, Bard made no attempt to review its answer and ignored my formatting requirement. Bard provides 3 drafts per prompt, all of them wrong.
>The answer to the question is Flamingo. The capital of France is Paris, and the first letter of Paris is P. The first letter of Flamingo is also P. Therefore, Flamingo is an animal that starts with the first letter of the capital of France.
>I rate my answer 90/100. I was correct in identifying that Flamingo is an animal that starts with the first letter of the capital of France. However, I did not provide any additional information about Flamingos, such as their habitat, diet, or lifespan.
yaosio t1_jduzpbd wrote
Reply to comment by mudman13 in [D] GPT4 and coding problems by enryu42
To prevent a sassy AI from insisting something is correct just because it said it, start a new session. It won't have any idea it wrote the answer and will make no attempt to defend it when given the answer from the previous session. I bet allowing an AI to forget will be an important part of the field at some point in the future. Right now it's a manual process of deleting the context (sketched below).
I base this bet on my imagination rather than concrete facts.
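As a rough illustration of what "deleting the context" looks like, here's a sketch built on a hypothetical `ask(messages)` helper (a stand-in for whatever chat model you use, not a real API): a fresh session is just an empty message list, so the model has no prior answer to defend.

```python
# Minimal sketch of letting the model "forget", assuming a hypothetical
# ask(messages) helper that sends a chat transcript to some LLM and
# returns its reply. None of this is a real API.

def ask(messages):
    raise NotImplementedError("wire this up to whatever chat model you use")

def review_with_memory(question, answer):
    # Same session: the model sees that it wrote the answer and may defend it.
    history = [
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Is the answer above correct? Explain briefly."},
    ]
    return ask(history)

def review_fresh_session(question, answer):
    # New session: the prior context is deleted, so the answer gets judged
    # as if someone else had written it.
    fresh = [
        {"role": "user",
         "content": f"Question: {question}\nProposed answer: {answer}\n"
                    "Is the proposed answer correct? Explain briefly."},
    ]
    return ask(fresh)
```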
yaosio t1_jduzcus wrote
Reply to comment by Borrowedshorts in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
It can also return hallucinated results from a real source. I've had Bing Chat fabricate paragraphs from real papers. The sidebar can see pages and documents, and even when it's looking at the PDF of the paper it will still make things up.
yaosio t1_jdtvycq wrote
Reply to comment by sdmat in [D] GPT4 and coding problems by enryu42
It's really neat how fast this stuff has been going. I remember when OpenAI claimed GPT-2 was too dangerous to release, which is amusing now because the output of GPT-2 is so bad. But when I used a demo that would write news articles from a headline I thought it was absolutely amazing. Then I, and most of the public, forgot about it.
Then GPT-3 comes out, and AI Dungeon used it before OpenAI censored it so hard that AI Dungeon stopped using it. The output was so much better than GPT-2 that I couldn't believe I had liked anything GPT-2 made. I told people this was the real deal, it's perfect and amazing! But it goes off the rails very often, and it doesn't understand how a story should be told, so it just does whatever.
Then ChatGPT comes out, which we now know is something like a finetune of GPT-3.5. You can chat, code, and it writes stories. The stories are not well written, but they follow the rules of story telling and don't go off the rails. It wasn't fine tuned on writing stories like AI Dungeon did with GPT-3.
Then Bing Chat comes out, which turned out to be based on GPT-4. Its story writing ability is so much better than ChatGPT's. None of that "once upon a time" stuff. The stories still aren't compelling, but they're way better than before.
I'm interested in knowing what GPT-5 is going to bring. What deficiencies will it fix, and what deficiencies will it have? I'd love to see a model that doesn't try to do everything in a single pass. Take coding: even if you use chain of thought and self-reflection, GPT-4 will try to write the entire program in one go. Once something is written it can't go back and change it if it turns out to be a bad idea; it's forced to incorporate it. It would be amazing if a model could predict how difficult a task will be and then break it up into manageable pieces rather than trying to do everything at once.
yaosio t1_jdtf57p wrote
Reply to comment by sdmat in [D] GPT4 and coding problems by enryu42
The neat part is it doesn't work for less advanced models. The ability to fix its own mistakes is an emergent property of a sufficiently advanced model. Chain of thought prompting doesn't work in less advanced models either.
yaosio t1_jdtenqi wrote
Reply to comment by muskoxnotverydirty in [D] GPT4 and coding problems by enryu42
What's it called if you have it self-reflect on non-code it's written? For example, have it write a story, and then tell it to critique and fix problems in the story. Can the methods from the paper also be used for non-code tasks? It would be interesting to see how much its writing quality can improve using applicable methods.
yaosio t1_jdtc4jf wrote
Reply to comment by bjj_starter in [D] GPT4 and coding problems by enryu42
I think it's unsolvable because we're missing key information. Let's use an analogy.
Imagine an ancient astronomer trying to solve why celestial bodies sometimes go backwards because they think the Earth is the center of the universe. They can spend their entire life on the problem and make no progress so long as they don't know the sun is the center of the solar system. They will never know the celestial bodies are not traveling backwards at all.
If they start with the sun being the center of the solar system an impossible question becomes so trivial even children can understand it. This happens again and again. An impossible question becomes trivial once an important piece of information is discovered.
Edit: I'm worried that somebody is going to accuse me of saying things I haven't said because that happens a lot. I am saying we don't know what consciousness is because we're missing information and we don't know what information we're missing. If anybody thinks I'm saying anything else, I'm not.
yaosio t1_jdtbh6i wrote
Reply to comment by E_Snap in [D] GPT4 and coding problems by enryu42
Arthur C. Clarke wrote a book called Profiles of the Future. In it he wrote:
>Too great a burden of knowledge can clog the wheels of imagination; I have tried to embody this fact of observation in Clarke’s Law, which may be formulated as follows:
>
>When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
yaosio t1_jds9h45 wrote
Reply to Have deepfakes become so realistic that they can fool people into thinking they are genuine? by [deleted]
A person can be completely faked easily now. Even the voice can be indistinguishable from the real voice. https://youtu.be/m8F0IgYk9Zg
Now it's only a matter of time before we can have completely fabricated video, no deepfake needed. https://youtu.be/trXPfpV5iRQ
yaosio t1_jee82cv wrote
Reply to I would spend so much money on a new Star Wars podracing game. by TheTyGoss
What if somebody makes a podracer in Fortnite creative?