RobleyTheron t1_j4cn410 wrote
Magnificent Desolation by Buzz Aldrin was an excellent read on the whole experience.
As a fun aside, I got his autograph at KSC when the book was released about 15 years ago. As my favorite Apollo astronaut, it was an amazing experience to meet him in person.
RobleyTheron t1_iwggw55 wrote
Reply to comment by GreenWeasel11 in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
That is correct 😀
RobleyTheron t1_iw2yy31 wrote
Reply to comment by ECEngineeringBE in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
I understand where you're coming from, but there is a tipping point where you go from armchair speculation to an expert with an honest understanding of a subject.
Think of climate science, although there is far more consensus in that field than there is around AGI.
That said, people smarter than me do think we're closer to AGI. I'll concede that my opinion is that this article is hype and that, generally speaking, people have nothing to worry about for the next 20-40 years.
RobleyTheron t1_iw2w691 wrote
Reply to comment by DickMan64 in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
There's nothing but hype in this article, and redditors are acting like the sky could fall at any minute.
Work toward AGI is barely at the stage of the Wright brothers trying to get off the ground at Kitty Hawk, and this article is like asking what we'll do when we meet aliens once we land on the Moon.
RobleyTheron t1_iw2uui0 wrote
Reply to comment by Redvolition in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
A simple litmus test is to look at the billions and billions of dollars being spent on self-driving cars. These systems are being built by the largest, most innovative companies, often with the smartest people in the field, and they're all struggling (aside from incremental improvements by Cruise and Waymo).
Argo, with billions of dollars invested, collapsed just last week.
If we were 5 to 10 years away and the current architecture worked, those companies would be able to drive in more than two cities. If you can't reliably pattern-match images in a self-driving car, you are decades away from even contemplating proto-AGI.
RobleyTheron t1_iw2tlg3 wrote
Reply to comment by ECEngineeringBE in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
There is zero consensus. A lot of the smartest people in the field think we're somewhere between 100 years away and never.
My point is that you can't place a date on it right now, because the fundamental architecture for modern machine learning will not get us to AGI. The entire system needs to be rethought and rebuilt, likely with massive amounts of technology that does not yet exist.
RobleyTheron t1_iw2t6ir wrote
Reply to comment by unflappableblatherer in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Fair, I'll grant that human-level intelligence and cognition could be separate. My own entirely unscientific opinion is that consciousness arises from the complex interactions of neurons: the more neurons, the more likely you are to be conscious.
I don't think pattern matching will ever get us to AGI. It's way, way too brittle, and it completely lacks understanding. A lot of learning and intelligence comes from transference: I know the traits of object A, I recognize that the traits of object B are similar, so B will probably behave like A. That jump is not possible with current architectures.
RobleyTheron t1_iw2sfhp wrote
Reply to comment by SurroundSwimming3494 in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
There's an annual AI conference where, every year, the researchers are asked how far we are from AGI; the answers range from 10 years to 100 years to "it's impossible." There is absolutely zero consensus on the timeline among the smartest people in the industry.
RobleyTheron t1_iw2ryvr wrote
Reply to comment by JKJ420 in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Most people in here don't know anything about actual artificial intelligence. They're caught up in completely unrealistic hope and fear bubbles.
2012 was really the breakthrough year, with ImageNet and convolutional neural networks. Self-driving cars, conversational AI, image recognition: it's all based on that architecture.
The only thing that really changed that year is that datasets and servers became big enough to show progress. Most current AI architecture is based on Geoffrey Hinton's work from the 1980s (a rough sketch of that kind of convolutional network is below).
7 years out of 10 isn't nothing.
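For anyone curious what that convolutional lineage looks like in code, here's a minimal, illustrative PyTorch sketch; the TinyConvNet name and layer sizes are made up for illustration, not any specific published model:

```python
# Illustrative sketch only: a tiny convolutional image classifier in the
# 2012 ImageNet/CNN lineage. Names and sizes are arbitrary examples.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)               # extract visual features
        return self.classifier(x.flatten(1))  # map features to class scores

# Example: classify a batch of four 32x32 RGB images.
logits = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

The same pattern-matching recipe, scaled up with far more data and compute, underlies the image-recognition systems mentioned above.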
RobleyTheron t1_iw2qfqz wrote
Reply to comment by havenyahon in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Excellent. Thanks for the recommendation, I'll check it out.
RobleyTheron t1_iw1fix7 wrote
Reply to comment by havenyahon in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Interesting, I don't know much about embodied cognition. Any good papers or books you'd recommend?
RobleyTheron t1_iw1dikj wrote
Reply to comment by havenyahon in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Thanks. Curious about your thoughts on whole-brain emulation. I feel like that will get us closer to human-level AI (some day) than trying to program it from scratch.
RobleyTheron t1_iw1dbmp wrote
Reply to comment by DyingShell in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
There are two points here. First, the attack on the quote: fine. My point is that we can't measure the distance to human-level AI in years; it has to be measured in technological breakthroughs.
Second, yes, AI will replace jobs, though it will happen a lot more slowly than most people predict. And economies are naturally dynamic: a century and a half ago, roughly half of Americans worked in agriculture; today it's under 2%.
Despite that, our economy didn't fall off a cliff. We've hovered near record-low unemployment for several years now. Automation and improvement are a normal part of life.
RobleyTheron t1_iw1cvle wrote
Reply to comment by darkmatter8879 in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
I've been at it for 7 years, and I got involved because I was excited and thought we were a lot closer as a society.
The reality is that ALL artificial intelligence today is pattern matching and nothing more. There is no self-directed reinforcement learning, no true unsupervised learning, no neuroplasticity across divergent subjects, and no basic general comprehension (not even that of an infant).
The closest our supercomputers can muster is a few seconds of mimicking the neural connections of a silkworm.
The entire fundamental architecture of modern AI will need to be rebuilt from scratch if we ever hope to reach self-aware AI.
RobleyTheron t1_ivzsjog wrote
Reply to The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Arguments about the Turing Test aside, GPT-3 is so far from human intelligence that closing that gap with GPT-4 would be like jumping from fighting with sticks and stones straight to the atomic bomb.
This is corporate PR hype, nothing more. I work in AI, and it's insanely stupid once you get past the smoke-and-mirrors screen we set up to make it seem human.
My favorite quote is that we are not 50-100 years away from human-level AI, but 50-100 Nobel prizes away.
RobleyTheron t1_itr7pjp wrote
Reply to comment by lughnasadh in Lyft co-founder says autonomous vehicles won’t replace drivers for at least a decade by lughnasadh
I think there are several factors at play:
First, the roughly $100MM per year Lyft was spending is a drop in the bucket next to the billions of dollars GM and Waymo spend every year. Combining with a large automobile manufacturer may have been its best bet to remain competitive.
Second, although two companies are testing Level 4 technology, we know nothing about how often a safety driver in a remote operations center has to take over. Despite the cars being on the road, for all we know remote operators intervene on the vast majority of trips, which would mean the systems aren't saving money or economical at scale.
Third, SF is definitely hard city driving, but that doesn't mean the technology will adapt well to other locations and climates.
I do think Lyft was likely behind the other companies, but given the slow rate of progress over the last three years, and my own painful experience developing alternative AI systems, large-scale rollout is probably 10 years away.
RobleyTheron t1_j63qlzc wrote
Reply to ⭕ What People Are Missing About Microsoft’s $10B Investment In OpenAI by LesleyFair
This sub has gotten a little oversaturated with impending-AGI-doom predictions, and I really appreciated a genuinely thoughtful and nuanced review of this deal.
I'm the CEO of an AI company, and the point about AI's lower average gross profit was very validating.