justowen4
justowen4 t1_j70hxwf wrote
Reply to [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
Site is down; Microsoft was never expecting more than a few people to read their blog
justowen4 t1_j2ckhjk wrote
Reply to When is GPT4 expected to release? by [deleted]
I wonder if Microsoft planned this from the start? It’s so perfect: GitHub, VS Code, Codex, Copilot.
justowen4 t1_j2b0vpm wrote
Reply to comment by lloesche in OpenAI might have shot themselves in the foot with ChatGPT by Kaarssteun
Ah yes good point, Google ad team is salivating
justowen4 t1_iytg67n wrote
Reply to comment by grbbrt in What if you could use GPT-3 directly from Siri? by Huguini
Or lonely non-elderly people?
justowen4 t1_iyktahl wrote
Reply to Is my career soon to be nonexistent? by apyrexvision
The economy is not the sum of human effort; it’s the volume of active capital. AI doesn’t deplete capital, but it does accelerate capitalism (the rich get richer, and the poor get richer too). Work just evolves, and humans only need to keep investing in themselves to keep up in this marathon. In other words: don’t worry, lizard brain, you are safe.
justowen4 t1_ixakkip wrote
Reply to How much time until it happens? by CookiesDeathCookies
I’m doubtful we will get innovative outputs from the 2023 LLMs. I think better summarized analysis of existing knowledge will be the next step, assisting humans to innovate faster. We have been preparing for a good AI assistant for a long time, from Clippy to the frontline customer support and sales systems of every Fortune 500 company. We’re almost at the point where these systems will have the intelligence to be nearly as useful as trained human agents, and then it’ll pick up steam fast, because there are trillions of dollars in that general workflow.
justowen4 t1_iwxl8ps wrote
Reply to Full Self-Driving Twitter by [deleted]
They have already scaled out. Twitter is refreshingly open about their tech and has historically been a big player in open-sourcing some of its concepts.
justowen4 t1_iwggy30 wrote
Reply to My predictions for the next 30 years by z0rm
I like the conservative approach, and I’d mix in BCI somewhere here which will have a big impact
justowen4 t1_ivahguf wrote
Reply to comment by apple_achia in In the face on the Anthropocene by apple_achia
There is a nearly limitless amount of innovation potential in biochemistry that AIs like AlphaFold are specifically suited to. Ecological problems are biochemical problems, and the reason we can’t figure out bacteria and enzymes to rectify our polluted biological systems (from the boreal forest to gut microbiomes) is that traditional computing can’t run the complex simulations needed to find solutions. The next step is big pharma throwing billions into AI drug simulation, and then we will have built the intelligence needed to design ecological adjuncts that clean up polluted environments. Humans have tried, with mixed success, to adjust biological systems, but it will take a very smart simulator to find solutions that don’t backfire.
justowen4 t1_iv8kur9 wrote
Reply to comment by justowen4 in TSMC approaching 1 nm with 2D materials breakthrough by maxtility
I love Pat, and the CHIPS Act is wise, but Intel has historically been anything but opaque regarding practically anything related to chip marketing.
justowen4 t1_iv8kqea wrote
Reply to comment by iNstein in TSMC approaching 1 nm with 2D materials breakthrough by maxtility
Lol that’s hillllllllaaaaaarious
justowen4 t1_iuw9c84 wrote
Reply to comment by Bakoro in Google’s ‘Democratic AI’ Is Better at Redistributing Wealth Than America by Mynameis__--__
Perhaps your point could be further articulated as: we are not maximizing economic capacity by using historical data directly; we need an AI that can factor bias into the equation. In other words, institutional racism is bad for predictive power because the model will assume certain groups are simply unproductive, so we need an AI smart enough to recognize the dynamics of historical opportunity levels and virtuous cycles. I’m pretty sure this would not be hard for a decent AI to grasp. Interestingly, these AIs give tax breaks to the ultra-wealthy, which I’m personally opposed to, but even with all the dynamics factored into maximizing productivity, the truth might be that rich people are better at productivity. (I’m poor, btw.)
justowen4 t1_itt5mpf wrote
Reply to comment by manOnPavementWaving in Where does the model accuracy increase due to increasing the model's parameters stop? Is AGI possible by just scaling models with the current transformer architecture? by elonmusk12345_
It’s simply going to be both scenarios in 2023, quantity and quality: synthetic data variations from existing corpora with better training distributions (pseudo-sparsity) on optimized hardware. Maybe even some novel chips, like photonic or analog, later next year. It’s like CPUs 20 years ago: optimizations all around!
justowen4 t1_itdur6e wrote
Reply to When do you expect gpt-4 to come out? by hducug
Just imagine what 2023 will bring us in AI advancements. It feels a lot like semiconductor shrinking, but further along the exponential curve. I wouldn’t be surprised if GPT-4 is delayed so they can incorporate all the new training techniques. I think the next step will be big + efficient, to see if we can crack the remaining cognition tests that AI still falls short on.
justowen4 t1_itau79p wrote
Reply to comment by FirstOrderCat in U-PaLM 540B by xutw21
Epic commenting you two. The winner is….. AsthmaBeyondBorders !
justowen4 t1_it64aj5 wrote
Reply to comment by chimgchomg in If you believe you can think exponentially, you might be wrong. Transformative AI is here, and it is going to radically change the world before the Singularity, and before AGI. by AdditionalPizza
Meh, just a bit early. We all make this mistake when we are isolated plutocrats
justowen4 t1_it645n5 wrote
Reply to comment by ftc1234 in If you believe you can think exponentially, you might be wrong. Transformative AI is here, and it is going to radically change the world before the Singularity, and before AGI. by AdditionalPizza
In case you missed it, LLMs surprised us by scaling beyond expectations. The underestimation was because LLMs came from the NLP world of simple word2vec-style word associations. In 2017, the groundbreaking “Attention Is All You Need” paper showed that the simple transformer architecture alone, given lots of GPU time, can outperform other model types. Why? Because it’s not an NLP word-association network anymore; it’s a layered context calculator that uses words as ingredients. They’re barely worth calling LLMs unless you redefine language to be integral to human intelligence.
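The “layered context calculator” here is the scaled dot-product attention from that paper. A minimal numpy sketch (toy shapes, random weights, a single head, no masking — purely illustrative, not any real model’s code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each output row is a mix of the value vectors, weighted by
    # query-key similarity: this is the "context calculation".
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))               # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = attention(X @ Wq, X @ Wk, X @ Wv)   # contextualized token vectors
print(out.shape)                          # (4, 8)
```

Unlike a word2vec lookup, every token’s output vector depends on every other token in the sequence, and stacking these layers is what gives the depth of context.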
justowen4 t1_it621gn wrote
Reply to If you believe you can think exponentially, you might be wrong. Transformative AI is here, and it is going to radically change the world before the Singularity, and before AGI. by AdditionalPizza
It’s actually a really fun time, and I agree that the IT efficiency gains will lift all boats. I would imagine a GUI AI alone would be a 10x efficiency gain for desk jobs.
justowen4 t1_isobs10 wrote
Reply to Meet the Army of Robots Coming to Fill In for Scarce Workers. Robots are spreading at a record pace, from their traditional strongholds like making automobiles into nearly every other human endeavor by Shelfrock77
Pff, it’s just China in that data. We aren’t going to see exponential robot adoption until robots are smart enough to handle edge cases. These are still the same assembly-line robots we have been using for generations.
justowen4 t1_iqwm2lz wrote
OpenAI says you should start a company that handles the middle layer between customers and their next-generation API, which will have deep context or tuning capabilities.
justowen4 t1_je8hj5f wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
It’s also not true. Even Stephen Wolfram, who is a legitimate genius in the technical sense, has to rework the definition of “understand” to avoid applying it to ChatGPT. Understanding, like intelligence, has to be defined in terms of thresholds of geometric associations, because that’s what our brain does. And guess what: that’s what LLMs do. It’s coordinates at the base layer. That doesn’t mean it’s conscious, but it is intelligence and understanding at the fundamental substrate. Redefining these words so that only humans can participate is just egotistical nonsense.