TFenrir
TFenrir t1_j9iaxdg wrote
Reply to comment by turnip_burrito in OpenAI has privately announced a new developer product called Foundry by flowday
I really want to see how coherent and sensible it can be at 32k tokens, and whether it's a fundamentally better model. Could it write a whole short story off a prompt?
TFenrir t1_j9i5653 wrote
Holy shit, 32k token context? That is a complete fucking game changer. That's roughly 24k words. Current context length is about 4k tokens.
A simple example of why that is relevant - it's hard for a model to hold an entire research paper in its context right now - this could now handle probably multiple research papers in context.
Code wise... It's the difference between a 100-line toy app and something like 800 lines.
Context window increasing this much also makes so many more apps easier to write, or fundamentally possible when they weren't before. Chat memory extends, and short story writing basically hits new heights. The average book page has about 500 words, ish - so roughly 6-7 pages of context currently, jumping up to about 50.
1 token ≈ 4 English characters, and the average English word is 4.7 characters.
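To make the page math above concrete, here's a tiny back-of-envelope sketch (the constants are just the rough rules of thumb from this comment, not exact tokenizer figures):

```python
# Back-of-envelope: how many book pages fit in a context window?
# Rules of thumb: 1 token ~ 4 characters, an average word ~ 4.7
# characters (plus a trailing space), a book page ~ 500 words.
CHARS_PER_TOKEN = 4.0
CHARS_PER_WORD = 4.7 + 1.0  # average word plus its trailing space
WORDS_PER_PAGE = 500

def pages_for_context(tokens: int) -> float:
    words = tokens * CHARS_PER_TOKEN / CHARS_PER_WORD
    return words / WORDS_PER_PAGE

for ctx in (4_096, 32_768):
    print(f"{ctx} tokens ~ {pages_for_context(ctx):.0f} pages")
```

With those assumptions, a 4k window is about 6 pages and a 32k window about 46, which is where the 6-7 page to ~50 page jump comes from.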
TFenrir t1_j9glyon wrote
Reply to Pardon my curiosity, but why doesn’t Google utilize its sister company DeepMind to rival Bing’s ChatGPT? by Berke80
First, Google has the best language models we know about if we look at benchmark results, with its PaLM model.
Second, Google has a much higher standard for what they have been willing to release (which seems to be changing because of the competition).
Third, DeepMind will be releasing their own LLM (Sparrow) - which will most likely be quite capable, as well as accurate.
Fourth, Google will be releasing LaMDA (which powers Bard) soon, and there's no data showing it's any less proficient than any other model out there, although there are rumours that the smaller model behind Bard might not be competitive enough to impress, even if it's cheap enough to scale to more users.
Fifth, it's important to remember that both ChatGPT and Sydney make numerous mistakes; they are just in a position where they are much less scrutinized for those mistakes.
TFenrir t1_j9fviko wrote
Reply to comment by Chad_Abraxas in People are Flooding Magazines With AI-Written Fiction Because They Think They’ll Make Money by SnoozeDoggyDog
I have noticed that Bing/Sydney ends up "speaking in threes" when it gets a bit unhinged, but I was really impressed with these demonstrations:
https://twitter.com/emollick/status/1626316207229132800?t=ENGs-wTpqy_tpdyfkfAdKg&s=19
https://twitter.com/emollick/status/1626084142239649792?t=28uBrQrcgQP6pZPHZZBIew&s=19
TFenrir t1_j9cevx1 wrote
Reply to comment by Chad_Abraxas in People are Flooding Magazines With AI-Written Fiction Because They Think They’ll Make Money by SnoozeDoggyDog
Have you seen any of Bing's creative writing? It's not yet good enough for any old schmuck to be able to write a compelling story, but it did seem significantly better than ChatGPT. Was wondering if as a professional writer you've seen any difference between the two and if you had any thoughts?
TFenrir t1_j971ped wrote
Reply to comment by [deleted] in What’s up with DeepMind? by BobbyWOWO
You're not displaying any ability to look at situations like this with nuance. It's extremely simplistic to look at the world like it's composed of good guys and bad guys, and you do yourself a disservice when you fall into that trap.
It's not dick-riding to say "maybe there are more complicated reasons that people want to be cautious about the AI they release other than being power hungry, mustache twirling villains".
As a creative exercise, could you imagine a reason, one you might even begrudgingly agree with, for someone like Demis to hesitate to share their AI? If you can't, don't you think that's telling?
TFenrir t1_j96v7w5 wrote
Reply to comment by [deleted] in What’s up with DeepMind? by BobbyWOWO
It's too easy to look at people who don't give you what you want as monsters, but I think we do ourselves a disservice if we eschew nuance for thoughts that affirm our frustrations.
TFenrir t1_j96l9m3 wrote
Reply to comment by blueSGL in What’s up with DeepMind? by BobbyWOWO
Yeah I think this is already playing out to some degree, with some attrition from Google Brain to OpenAI.
I don't know how much is just... Normal poaching and attrition, and how much is related to different ideologies, but I think Google will have to pivot significantly to prevent something more substantial happening to their greatest asset.
TFenrir t1_j96hi1f wrote
Reply to comment by [deleted] in What’s up with DeepMind? by BobbyWOWO
I generally appreciate what you are saying, and I feel more or less the same way, in the sense that I think that these models should be in our hands sooner, rather than later, so that we can give appropriate large scale feedback... But I also think the reasoning to hold back is more complicated. I get the impression that fear of bad results is a big part of the anxiety people like Demis feel.
TFenrir t1_j968cot wrote
Reply to comment by Redditing-Dutchman in What’s up with DeepMind? by BobbyWOWO
They only really laid off operational staff, and closed their Edmonton office. All the Edmonton engineers were offered roles in Toronto/Montreal offices.
TFenrir t1_j96815w wrote
Reply to comment by lehcarfugu in What’s up with DeepMind? by BobbyWOWO
Ah I get you. Yeah, here's the complicated thing though - Google generally provides the most valuable AI research every year, especially if you include DeepMind.
https://thundermark.medium.com/ai-research-rankings-2022-sputnik-moment-for-china-64b693386a4
If suddenly they decide that it's more important to be... Let's say cautious, about what papers they release, what impact is that going to have? Are other companies going to step up and provide more research, or are they all going to be more cautious about sharing their findings?
TFenrir t1_j960l6m wrote
Reply to comment by lehcarfugu in What’s up with DeepMind? by BobbyWOWO
? Can you clarify what you mean by stifle innovation?
TFenrir t1_j960ilw wrote
Reply to comment by GoldenRain in What’s up with DeepMind? by BobbyWOWO
This is probably likely, and not just for this reason. Demis Hassabis himself said recently in a Time magazine article that he thinks that OpenAI (without naming them) don't contribute to the science, but take a lot from the science out there - which they use to push AI out into the world faster than he would like. So they probably are going to share less going forward.
TFenrir t1_j8pe4w6 wrote
Reply to comment by Proof_Deer8426 in What will the singularity mean? Why are we persuing it? by wastedtime32
>I don't mean to be rude but I think it's naive to imagine that ai will not be used to reinforce the current power structures, or that those structures have benefited humanity.
How does it play out in your brain? Let's say Google is the first to AGI - this AI can do lots of really impressive things that can help humanity; solve medical questions, mathematics questions, can be embodied and do all menial work, and can automate the process of building and farming, finding efficiencies and reducing costs constantly until the cost is a fraction of today's cost.
How does Google use this to prevent people from benefiting? Do they also prevent all other non-Google AGI attempts? Give me an idea of what you are envisioning.
> Jeremy Corbyn said that if he were elected, homelessness within the UK would be ended within weeks, and it is not an exaggeration to say that would be entirely possible. There are far more homes than homeless people, and we have the resources to build many more.
So in this very UK-specific example, you imagine that the roughly 250k homeless could be housed in the roughly 250k empty homes. Would you want the government to just take those houses from whomever owned them and give them to the homeless? I'm not in any way against providing homes for the homeless, but can you see how that could cause many negative side effects?
> We don’t, because it would disrupt the ideology of capitalism, which requires the threat of homelessness and unemployment in order to force people to work for low wages.
Or we don't because no one wants to spend that kind of money for no return. What happens when doing so becomes effectively free, do you think the government and people would like... Ban efforts to build homes for homeless?
> Wages and productivity have been detached for decades now - ie wages have remained stagnant while productivity has increased exponentially. Ai will increase productivity, but without changing the economic system the benefit will not be to ordinary people but to the profits of the rich.
A truly post AGI world would not have any human labour. It likely couldn't in any real way. How do you imagine a post AGI world still having labour?
> The upward momentum of the world you refer to is misleading. People like Bill Gates like to point to the fact that enormous amounts of people have been lifted out of poverty in recent decades, trying to attribute this to the implementation of neoliberal economics. They always neglect to point out that these stats are skewed by the work of the Chinese Communist Party, which has lifted 700 million people out of absolute poverty - more than any government in history. That has nothing to do with the political trajectory that the West has been on.
I'm African, how do you think Africa has fared in the last few decades when it comes to starvation, and economic prosperity? We don't even need to include China, I think basically every developing country in the world is significantly better off today than they ever have been, minus a few outliers.
You think China lifted its people out of poverty without capitalism? Do you think China is not a capitalist country? I'm not a fan of capitalism, but I'm not going to let that blind me from the reality - that the world is better off and continues to improve by almost every measurement of human success. It's not perfect, but so many people have an almost entirely detached view of the world compared to what it was even 30 years ago.
Edit: for some additional data
https://ourworldindata.org/a-history-of-global-living-conditions-in-5-charts
TFenrir t1_j8p7obm wrote
Reply to comment by Proof_Deer8426 in What will the singularity mean? Why are we persuing it? by wastedtime32
>Ai will not solve all problems for us - most of our problems are already solvable. We could end homelessness tomorrow but we don't - because that would contradict our society's ideology.
How could we solve homelessness tomorrow? Not to be rude, but statements like this feel like they are just coming from a place of ignorance and jadedness.
We have many many many problems in this world, and they are often very intertwined - and so far, every problem we have overcome, we have used our intelligence to do so.
> This technology will be owned by people that don’t want to solve the same kinds of problems that most people imagine they want solved.
Again. Jadedness. Do you know who is working towards AGI? What their ideologies are? Do you think the thousands of engineers who are putting their life's work into creating this... Thing, are doing so because they want to serve some nefarious, mustache twirling evil overlord? I think that you are doing yourself a disservice with such a myopic and simplistic outlook on society and humankind.
> Mass production did not lead to the end of scarcity - most of the world still lives in poverty and spend most of their lives working for a pittance.
The world is today, in probably the best state it has ever been in, in most measures of human success. We have fewer people as a percentage of the population in poverty than ever before. We have blown past most goals that we have placed for ourselves to end world hunger. The upward momentum of the world is extremely significant, and you can see this in all corners of the developing world. What are you basing your opinions on?
> If we ask an ai how to end poverty and it answers with economic redistribution and a command economy, that ai will be reprogrammed to give an answer that doesn’t upset the status quo.
Again, myopic.
TFenrir t1_j8o1khg wrote
Reply to comment by wastedtime32 in What will the singularity mean? Why are we persuing it? by wastedtime32
>This is what I keep hearing. Stuff about accepting change. But there is no historical precedent for this. This is the start of the exponential growth. The way I see it, I have every reason to be afraid and not one reason not to be. I am spending my parents' life savings to get a degree that likely will not matter. My big problem is, what exactly are we expected to do once we "solve intelligence"? I LIKE the natural world. That's all there is. It will never make sense to me. I don't want to float around in a computer metaverse and be fed unlimited amounts of serotonin and never question anything or wonder or worry or feel any other emotion. That is all I know. And it is going to be taken away from me without my consent? This future of AI is inevitably totalitarian.
It's really really hard to predict anything, especially the future. I get it. There is a sort of... Logical set of steps you can walk down that leads to your conclusion. But that is only one of many paths that are going to open up to us. You're right that it's all exponential, but I also think that means what the human experience can be is going to expand. Maybe we will diverge as a species. There is a great sci-fi book series (Commonwealth Saga) and in one of its books, they come across a species that seems to have fallen into this divide. Most of the species have left their physical bodies behind, but some of them never strayed from their farming, Amish-like lifestyle. My point is... I can imagine lots of different futures, and lots of them have a world where maybe more people can have the kind of lives they want.
>Brave new world type shit. It’s real. It’s fucking real. And everyone around me is talking about internships and where they want to live and different jobs and stuff. My girlfriend thinks I’m crazy because this fear is all I talk about. She said everything will be okay and I’m just falling for the fear mongering. I don’t know what to do with myself. It is hard to find joy when all I think about is how EVERYTHING that gives me joy will be gone.
I had this talk with my partner literally... Monday, this week. She's had to hear me talk about AI for the entire decade we've been together, and as things get crazier she asks me how I feel about it. If I'm freaking out. I just told her that I'm trying to live my life like the next ten years are the last years that I can even kind of imagine, that there is an event horizon that I can't see beyond, and worrying about what's beyond that line is just a source of infinite loops in my mind.
Instead I'm going to get some friends together and go out dancing. It's been a while since we've had the chance.
TFenrir t1_j8nyno3 wrote
>I am relatively uneducated on AI. I recently became interested by the introduction of chat-GPT and all the new AI art. My first and most significant reaction to all of this, which has taken absolute precedent in the last few months, is fear. Terror rather. What does this all mean?
Honestly? Fair. If this is your first introduction, I can really appreciate the discomfort. If it helps, I find learning more about the technology under the hood removes some of the anxiety for a lot of people, as ignorance of something powerful leaves a lot of room for fear. That's not to say that fear is unwarranted, just that it can be mitigated by exposure.
> I'm currently a college student. Will I spend my entire adult life simply giving prompts to AI? Or will there be prompt AI soon, even before I graduate?
There is a lot of effort being made to remove the need for prompting altogether - to create true, somewhat autonomous agents that are "prompted" not just by a message sent to them, but by changes in the real world in real time.
> I've done a lot of speculation and some research, and I am having a very hard time understanding the practical reasons why we as a species seem to be so adamant on creating this singularity.
Well, the reason is pretty straightforward - we want to "solve" intelligence so that it can solve all other problems for us, quickly and fairly. That's the pie-in-the-sky dream many people are pursuing.
> From what I understand, we have no idea what will happen. I am horrified. I feel as if the future is bleak and dystopian and there is no way to prepare, and everything I do now is useless. This post is somewhat curiosity, and a lot of desperation. Why am I forced to reconcile with the fact that the world will never ever look the same, when the reasons for that entirely elude me? Is it in the pursuit of money? As far as I can see money won't matter once the singularity comes about. I am fucking terrified, confused, and desperate.
Like I said earlier, I appreciate why you are overwhelmed, but the world was never going to stay the same. Becoming comfortable with that... Discomfort, that uncertainty, is going to be a strength unto itself. If you can master that, I think the changing world will be a lot more palatable in the future
TFenrir t1_j8nxpb8 wrote
Reply to comment by icepush in What will the singularity mean? Why are we persuing it? by wastedtime32
That's a very, very specific prediction - maybe not the best thing to introduce a newcomer to.
TFenrir t1_j7ziwlh wrote
Not far - there are a lot of people working on getting the appropriate training data for this right now. One of the most prominent groups is Adept.ai; their v1 model is trained on using browser-based apps, however. You can see examples and sign up for the waitlist on their website.
If I was going to ballpark when a regular Joe will have access to tech like that (without commenting on proficiency, and specifically for Blender)... 50% certain within 1 year, 80% within 3?
TFenrir t1_j7wkzff wrote
Reply to comment by [deleted] in The copium goes both ways by IndependenceRound453
They stated that in cagey, roundabout language.
They further clarified, and I even asked about that here (in this sub full of non-experts) and someone clarified.
Additionally, GPT4 would not be used for search. Anything they are using for search is going to be a tiny model with much faster and cheaper inference, something that scales for a search engine. Hypothetically, even if GPT4 were 500 billion parameters, it would be untenable to use for search.
Edit: here's where someone shared a link and a quote for me
https://www.reddit.com/r/singularity/comments/10w9p6n/-/j7mszte
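As a rough sketch of why parameter count dominates the economics here (this uses the common estimate of ~2 FLOPs per parameter per generated token for a forward pass; the parameter counts are made up for illustration, not actual model sizes):

```python
# Inference cost grows linearly with parameter count: a transformer
# forward pass costs roughly 2 FLOPs per parameter per token.
def flops_per_query(params: float, tokens: int = 500) -> float:
    """Approximate compute for one query generating `tokens` tokens."""
    return 2.0 * params * tokens

small = flops_per_query(1e9)    # hypothetical 1B-parameter model
large = flops_per_query(500e9)  # hypothetical 500B-parameter model
print(large / small)  # the big model costs 500x more compute per query
```

At search-engine volumes (billions of queries a day), that linear multiplier is the difference between a viable product and an untenable one.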
TFenrir t1_j7wjkxz wrote
Reply to comment by [deleted] in The copium goes both ways by IndependenceRound453
Ummm Bing isn't using GPT4? They have even clearly said it's just an evolved version of GPT3
TFenrir t1_j7wjdqv wrote
Reply to comment by petermobeter in The copium goes both ways by IndependenceRound453
That seems like a pretty uninformed take, weirdly. For example, I could share with you a dozen papers from Google alone that highlight their progress in AI outside of just language models - and those old chatbots are so fundamentally different from today's... It's like comparing Google search indexing to a very large if/else statement. Not even in the same ballpark of functionality.
TFenrir t1_j7wibx3 wrote
Reply to The copium goes both ways by IndependenceRound453
I think there's all kinds of people here, but I know the type you are describing. I think to a lot of people, the singularity feels like the most likely way they will get heaven.
I wonder where I stand on the spectrum when I'm trying to be self critical. I have a very good life, I make good money, have lots of social... Extra curriculars and fun hobbies, and I sincerely love life and have always loved it.
Would I love a best case scenario for AI? Absolutely, who wouldn't?
But that's not the reason I think it's inevitable. I've been following lots of people who are really, really smart - Demis Hassabis, Shane Legg, Ilya Sutskever, and more... People who are actually building this stuff. And I see how their language has changed.
I think you'd also be surprised about how many experts are increasingly moving up their timelines. We can look at forecasting platforms for example, and we can see the shift.
Out of curiosity, what experts are you referencing when you say most don't think we'll get anything transformative anytime soon?
TFenrir t1_j7lj00z wrote
Reply to New text to video from Runway by e-scape
Mmmm, is this text-to-video or style transfer onto video? It's a bit of a blurry line, but when I think of text-to-video, I think of Imagen Video. Also, I think Runway is calling this video-to-video.
TFenrir t1_j9p35nk wrote
Reply to Seriously people, please stop by Bakagami-
I generally agree, but sometimes I feel like there are very interesting conversations around improved functionality - I never share any of these myself, but if I saw one I would be interested. Here are a few examples from one particular person on Twitter who likes to put Bing through the wringer:
https://twitter.com/emollick/status/1628605530963845123?t=V63nQ-OLGRhUbaeNTA5hlA&s=19
https://twitter.com/emollick/status/1626084142239649792?t=RLI3NAv6CqjahpbZ3ZMy9g&s=19
There are more from that user that are interesting, and one interesting thing is just how much more sophisticated Bing is at lying/hallucinating.
In general, I don't think these tweets are thread-worthy, but for example there are expected Bing updates today that might improve output quality, or add more options, or maybe in the near future updates that give Bing access to more tools (à la Toolformer), so I wouldn't want a hard and fast "don't share any chatbot outputs" rule.