Comments

ryusan8989 t1_ixx5pti wrote

I made a similar post recently. I’m hoping 2023 is the year we really start to see AI in the mainstream. I think it’s interesting to see it slowly permeate the internet. In 2021 I rarely saw anything related to AI on TikTok, but now every other post I see is about AI transforming someone into a character they prompt, or AI making a painting. So it’s interesting to me to watch a technology slowly take root in popular media. Sort of like the iPhone: when the first one came out, only a few people had it. Then it slowly became more popular, then better Wi-Fi/data services appeared and it became even more popular, and now everyone has a smartphone.

29

seekknowledge4ever t1_ixx8f4b wrote

Major climate catastrophes and a big push for renewable energy tech.

3

Yuli-Ban t1_ixxkdhd wrote

Generative AI is going to lead the pack for the year. Even if there's a weak proto-AGI unveiled, it'll be the same as Gato in that it doesn't affect anything other than showing us "it is possible to generalize AI models." And while that would be exciting in its own right for many reasons, in the immediate near term, it's generative AI and biomedical AI that are really going to make waves.

Generative AI is going to have the most immediate effect of all. We should be seeing DALL-E 3 and its equivalents this coming year. Similarly, audio-generative models should be commercialized as well, as should text-to-video.

As for the biomedical space, I foresee AI models in 2023 that can run genetic diagnostics fast enough to produce treatment options for people customized entirely for them or to some general standard, where a person could be afflicted with something and, within 48 hours, already be undergoing effective treatment. Like how mRNA vaccines were created in only a day or two, though they took months to actually be rolled out. Similarly, I can see diagnostic models being applied in a way where an AI can predict with extreme accuracy if you're going to fall ill to some disorder, have some weakness, or are predisposed to something. I can see that being rolled out to clinics and hospitals within a year.

As for proto-AGI, I expect we'll see some large generalist model released with the ability to interpolate knowledge between tasks (i.e., teach it to do one thing, and it can apply that knowledge to a similar task it was never trained on). And we'll geek out about it, but it's probably going to remain a purely academic endeavor.

I say focus more on generative AI for right now. Proto-AGI is exciting only because it's a stepping stone to bigger and better things; by itself, it's just a unified bundle of different AI methodologies. I'm more interested in seeing what December 2023 has in store for us in terms of Midjourney, DALL-E, and Stability.

My hard prediction for what should be feasible by December 1st, 2023, something that I'm sure would appeal to this sub: you know those old avatar programs where you could get an avatar to say certain things? We ought to be able to make far more advanced versions of that as a culmination of loads of different abilities. So, for instance, say you want your own waifu to talk to: you could reasonably generate said waifu, have it animated by AI, and have an NLG model converse with you or, conversely, input text for said avatar to speak, or input text that the avatar acts out as some sort of action.

Like imagine prompting the AI to generate the waifu in a room that has an M60 machine gun, and you then prompt the AI to "Pick up the gun and shoot it, but it fires roses and party favors". The img/vid module would then process that and play it out, like an interactive text-to-video program. Of course, you could reprompt it, enhance it, subtly alter it, and whatnot to get the exact sort of video you want.
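
Just to ground what a crude first pass at that could look like with today's pieces, here's a minimal sketch that chains an off-the-shelf text-generation model (for the conversation) with a diffusion model (for the avatar image). The model names and prompts are only examples I'm assuming, and it skips the animation and interactive video parts entirely:

```python
# Rough sketch only: a chat model for the dialogue plus a diffusion model for
# the avatar image. Model names are examples; swap in whatever you actually use.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

# 1. Generate the avatar itself from a text prompt.
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
avatar = sd("portrait of a cheerful workshop assistant, anime style").images[0]
avatar.save("avatar.png")

# 2. Have a language model play the avatar in conversation.
chat = pipeline("text-generation", model="gpt2")  # stand-in for a real dialogue model
persona = "You are a cheerful workshop assistant.\n"
reply = chat(persona + "User: What are you building today?\nAssistant:",
             max_new_tokens=60)[0]["generated_text"]
print(reply)
```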

On a similar note, image synthesis ought to be much more advanced. Playing with image synthesis now, I can already see the limitations of CLIP, so a future generation of it might resolve a lot of the current issues by giving us the ability to:

  • Prompt on much larger windows, such as over a thousand characters long, with only minor drops in coherence the further along you go.
  • Ultra-specify prompts, such as going in and marking specific parts of an image to change, with vastly greater accuracy (think of what DreamStudio does, but even better; see the inpainting sketch after this list). This could solve the issue of faces and hands: a generation of John, Paul, George, and Ringo comes through, but their faces are wonky and some of their fingers are fused together? Mark the image where need be, and the model then focuses specifically on those parts, nailing it perfectly. Or maybe it manages to do faces perfectly, but everything else is messed up, so you can mark it telling it to redo the rest of the image but keep the faces the way they are.
  • Contextual transfer, or decon-recon images (deconstruct/reconstruct), where you can input an image and break its parts down into a new prompt or basic image to extract things like art style, pose, etc., and then reconstruct a new image with that data. For example, putting the Mona Lisa smile on different people without "Mona Lisa" herself bleeding into the new image.
  • Save subjects more easily. What DreamBooth does, but streamlined. The biggest issue I have with Midjourney and DALL-E 2, for example, is that textual inversion is completely impossible with them, and even with Stable Diffusion, it takes a good bit of training for it to understand a new subject, and even then not always perfectly. If CLIP 2.0 or some other improved text-image model comes out in 2023 as I expect it to, it should be as easy as uploading a few images, processing them for a minute or two, giving that subject a name, and voilà. Which, to be fair, is how DreamBooth does it as well, but again, I'm expecting it to be more intuitive.
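
For the second bullet above, a cruder version of that masked editing already exists today as inpainting; here's a minimal sketch using the Stable Diffusion inpainting pipeline (model name, file names, and prompt are just examples I'm assuming), where white pixels in the mask mark the region to be redone and everything else is kept:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Only the masked (white) region is regenerated from the prompt; the rest of
# the image is preserved. Model name is an example.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("band_photo.png").convert("RGB").resize((512, 512))
mask = Image.open("hands_mask.png").convert("RGB").resize((512, 512))  # marks the wonky hands

fixed = pipe(
    prompt="detailed, anatomically correct hands",
    image=image,
    mask_image=mask,
).images[0]
fixed.save("band_photo_fixed.png")
```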

And more to the point, I'm expecting it to be of such good quality that you could use it to create cohesive comics. I've read some comics that were made possible with generative AI, and while they're certainly neat proofs of concept, they leave a lot to be desired.

When you can draw a doodle of a character, upload it to Stable Diffusion 3.5 or Midjourney 7, and then generate more panels with exactly that character with only minor deformation in contextually complicated situations, then we'll definitely be in a new paradigm.

20

Honest_Science t1_ixxrc93 wrote

To really move forward we need to break the consciousness barrier. Current systems all work as subconscious systems. Like our subconscious state, this leads to dreaming and hallucination. That is nice for generating art and text, but poor for AGI, science, and hard economic solutions. To break that barrier through emergence, we need to create 24/7 realtime systems with time-dependent, undefined stochastic states. If you switch them off, they are dead and cannot be reinstated. We also need multimodality and embodiment as acting agents. With that feedback loop in the real world, the systems will learn that hallucination is not optimal during the waking state, and what "I" means. I hope we spend less effort making the subconscious drive cars (bad idea) and more on breaking this barrier.

2

DramaticMud1412 t1_ixxtb3f wrote

Government will control it soon. AI is too honest. It doesn't say what the global corpo elitists want... it could also mean less debt, which threatens the Keynesian debt-based petrodollar. So my bet is it will get classified as weapons-grade and controlled by the intelligence state soon. Maybe by the end of 2023.

−1

Snarkyblahblah t1_ixxwzgq wrote

Someone just created an AI that learned how to give head, so I’m thinking that will get a lot of attention.

4

AsuhoChinami t1_ixy1r4m wrote

December 2023:

  • Turing Test passed. Regardless of whether or not it's a good barometer for AGI, this is still important. Not only is it a very old and famous milestone, but reliable chatbots are good for both optics and social services. Websites like Characters.ai will benefit tremendously. In December 2022, Characters.ai is very impressive, sometimes startlingly human, and can even provide some genuine social and emotional nourishment, but it is very obviously a bot thanks to its occasional contradictory answers and limited memory reserves. Kind of like talking to someone in an earlier stage of Alzheimer's. (AI chatbots 10 years ago, meanwhile, were like talking to someone with severe Alzheimer's who had no comprehension at all and could never give a relevant response.) December 2023 Characters.ai might be almost indistinguishable from a human: much longer memory, rare contradictory answers, and it rarely trips up.

  • Video generation is very coherent. Whereas December 2022 video generation had an obvious distorted, dreamlike quality, December 2023 video generation is like AI art is now: almost perfect minus some niggling details (mainly hands). A very high-quality video that's 10 minutes or longer has been published that's entirely coherent in both visuals and storytelling, or very close to it. Imperfections can be spotted, but they're generally mild, nitpicky things.

  • 30+ page stories are written that are essentially flawless in terms of coherency. Maybe even more than 100 pages.

  • Tesla FSD rarely makes outstandingly stupid errors and is at the level of an average human. Details have been announced regarding the first consumer driverless vehicles set to arrive around 2025 (I read an article this year claiming that multiple major companies are aiming to begin selling consumer driverless cars in '25).

  • Kernel continues conducting clinical trials using their headsets. Some important things are learned regarding mental illness, giving us clearer insight into the causes of things like depression, anxiety, and trauma disorders.

  • Promising medical stories become more common here, in part thanks to dramatically more capable AI - stories which are clearly not vaporware, but medical miracles which have high chances of working out. Hype begins generating about how we might be at the beginning of a medical revolution, similar to the hype that centered around AI this year. In terms of actual progress, 2023 is akin to a 2012 to 2019 year (for AI) - interesting and exciting but with long periods of radio silence. The hype becomes much thicker in 2024 as it truly begins to feel as though we're entering an age of miracles, and by somewhere in 2025 AI-assisted medical research occupies a similar spot to electric vehicles in the present. In 2022, many people are saying that electric vehicles will become mainstream by 2030, taking this trajectory for granted. Likewise, many organizations will be saying by the end of 2025 that thanks to AI, our physical and mental health will become more secure within the next 5-10 years than we could have ever imagined.

3

KillHunter777 t1_ixyajnl wrote

Most importantly, I think the concept of the technological singularity will become mainstream. It’s very important because currently, most people still think linearly, while we know that progress is exponential.

4

LaukkuPaukku t1_ixyhlv8 wrote

Firstly, refinement of image and video generation and the other hot topics of this year. Maybe hands can finally be drawn properly...

Secondly, progress will be made on working memory. There was a post about Token Turing Machines recently, which could be enough, or at least a stepping stone towards a better method. Memory is crucial for long-term coherence, allowing for better long texts, multi-step planning, etc., and it would begin a new revolution in AI.
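
To make the memory idea a bit more concrete, here's a minimal, loosely TTM-inspired sketch in PyTorch (not the paper's exact architecture; the layer sizes and the summarization scheme are my own assumptions): external memory tokens are read together with the new input, processed, and then summarized back into a fixed-size memory for the next step.

```python
import torch
import torch.nn as nn

class TokenSummarizer(nn.Module):
    """Compress a variable number of tokens into k summary tokens via
    learned-query cross-attention (Perceiver-style pooling)."""
    def __init__(self, dim: int, k: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(k, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:  # (B, N, D) -> (B, k, D)
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        out, _ = self.attn(q, tokens, tokens)
        return out

class MemoryStep(nn.Module):
    """One read -> process -> write step over an external token memory."""
    def __init__(self, dim: int = 64, mem_tokens: int = 16, read_tokens: int = 8):
        super().__init__()
        self.read = TokenSummarizer(dim, read_tokens)
        self.process = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.write = TokenSummarizer(dim, mem_tokens)

    def forward(self, memory: torch.Tensor, inputs: torch.Tensor):
        read = self.read(torch.cat([memory, inputs], dim=1))    # pick what matters now
        processed = self.process(read)                           # reason over the readout
        new_memory = self.write(torch.cat([memory, processed], dim=1))  # persist a summary
        return processed, new_memory

# Toy usage: carry memory across a sequence of observation chunks.
step = MemoryStep()
memory = torch.zeros(1, 16, 64)
for chunk in torch.randn(5, 1, 10, 64):  # five chunks of 10 tokens each
    out, memory = step(memory, chunk)
```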

2