SleekEagle OP t1_jaszawj wrote
Reply to comment by Zestyclose-Debt-4712 in [R] High-resolution image reconstruction with latent diffusion models from human brain activity by SleekEagle
It looks like, rather than conditioning on text, they condition on the fMRI signal, but it's unclear to me exactly how they map between the two or why this would even work without finetuning. TBH I haven't had time to read the paper, so I don't know the details, but I figured I'd drop it here in case anyone is interested!
SleekEagle t1_j9vl7r3 wrote
Reply to comment by wind_dude in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I don't think anyone believes it will be LLMs that undergo an intelligence explosion, but they could certainly be a piece of the puzzle. Look at how much progress has been made in the past 10 years alone - imo it's not unreasonable to think that the alignment problem will be a serious concern within the next 30 years or so.
In the short term, though, I agree that people doing bad things with AI is much more likely than an intelligence explosion.
Whatever anyone's opinion, the fact that the views of very smart, knowledgeable people run the gamut is a testament to how badly we need to dedicate serious resources to ethical AI - beyond the disclaimers at the end of every paper noting that models may contain biases.
SleekEagle t1_j9tttxr wrote
Reply to comment by wind_dude in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Until the tools start exhibiting behavior you didn't predict, in ways you have no control over. I'm not taking an opinion on which side is "right" - just saying that this is a false equivalence with respect to the arguments being made.
EDIT: Typo
SleekEagle t1_j8y0vuc wrote
Reply to comment by suflaj in How likely is ChatGPT to be weaponized as an information pollution tool? What are the possible implementation paths? How to prevent possible attacks? by zcwang0702
Agreed! I mean, even if the proper resources were dumped into creating such a large detector, it could quickly become obsolete because of adversarial training (AFAIK - not an expert on adv. training).
SleekEagle t1_j8xoiu6 wrote
Reply to comment by suflaj in How likely is ChatGPT to be weaponized as an information pollution tool? What are the possible implementation paths? How to prevent possible attacks? by zcwang0702
Adversarial training will be a huge factor regarding detection models imo
SleekEagle t1_j8ix4fz wrote
Reply to comment by MustBeSomethingThere in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Authors publish papers on research, experiments, findings, etc. They do not always release the code for the models they are studying.
lucidrains' repos implement the models, providing open-source implementations of the research.
The next step would then be to train the model, which requires a lot more than just the code (most notably, money). I assume you're referring to these trained weights when you say "the needed AI model". Training even one of these models, let alone a whole portfolio of them, would require a huge amount of time and money for a team, never mind a single person.
For this reason, it's not very reasonable to expect lucidrains or any other person to train these models - the open-source implementations are a great contribution on their own!
SleekEagle t1_j083jkl wrote
Reply to comment by hx-zero in [Project] Run and fine-tune BLOOM-176B at home using a peer-to-peer network by hx-zero
Got it, thanks for the explanation!
SleekEagle t1_j07bxyi wrote
I thought distributed training over the internet was prohibitively slow due to communication overhead - wouldn't you run into the same issue when fine-tuning? If anyone could ELI5 why/how this works that would be awesome!
SleekEagle t1_iza258i wrote
Reply to comment by vwings in [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years what would they be by versaceblues
Agreed - the last three you listed were the first ones that came to mind for me
SleekEagle t1_iza1xxj wrote
Reply to comment by vzq in [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years what would they be by versaceblues
The mathematician in me will never let SVMs die!
SleekEagle OP t1_iz9nh0v wrote
Reply to comment by jaycrossler in [D] Stable Diffusion 1 vs 2 - What you need to know by SleekEagle
My pleasure :) I'm sure v2 will be awesome in the long run but I think it makes sense to stick with v1 in the short term (although I haven't seen 2.1's performance)
SleekEagle OP t1_iz9ncax wrote
Reply to comment by sfcl33t in [D] Stable Diffusion 1 vs 2 - What you need to know by SleekEagle
My pleasure!
SleekEagle OP t1_iz5pkih wrote
Reply to comment by fastinguy11 in [D] Stable Diffusion 1 vs 2 - What you need to know by SleekEagle
This area moves quite fast doesn't it 🥲😂
SleekEagle t1_iyn7x1m wrote
Reply to comment by throwaway2676 in [D] PyTorch 2.0 Announcement by joshadel
One of my first thoughts as well! Is there any reason PT's speed ceiling would be lower than JAX's? I know PyTorch-XLA is a thing, but I'm not sure about its current status.
SleekEagle t1_iy8baee wrote
Never do work that someone else has already done unless they can't do it as well as you and that discrepancy matters
SleekEagle t1_iy4k5m4 wrote
Reply to comment by koiRitwikHai in [D] What method is state of the art dimensionality reduction by olmec-akeru
It's been a while since I looked at t-SNE and UMAP, but the assumption behind PCA is that the data lives near an affine subspace, and behind VAEs that the data is well modeled by the distribution whose parameters you're learning. Just my thoughts - I'm sure there are other considerations, and I'd love to hear other people chime in!
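A quick toy sketch of what I mean by the PCA assumption (synthetic data I made up, just for illustration): if the data really does live near a low-dimensional affine subspace, a handful of principal components capture essentially all of the variance.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic data living near a 2D affine subspace embedded in 10D:
# random 2D coordinates pushed through a fixed 10x2 basis, plus an
# offset (the "affine" part) and a little noise.
basis = rng.normal(size=(10, 2))
offset = rng.normal(size=10)
coords = rng.normal(size=(1000, 2))
X = coords @ basis.T + offset + 0.01 * rng.normal(size=(1000, 10))

# PCA centers the data (absorbing the offset) and finds the subspace.
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_.sum())  # ~1.0 when the assumption holds
```

When the data violates that assumption (e.g. it lives on a curved manifold), the variance spreads across many components and PCA needs far more dimensions than the data's intrinsic dimensionality.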
SleekEagle t1_iurllof wrote
Reply to comment by dojoteef in [N] Adversarial Policies Beat Professional-Level Go AIs by xutw21
Exactly - if we want robust systems that interact with our lives with any sort of weight (e.g. autonomous vehicles), then we need to know about weird failure modes, how to address them, and, perhaps most importantly, how to find them
SleekEagle OP t1_iuhwyl1 wrote
Reply to comment by cy13erpunk in new physics-inspired Deep Learning method generates images with electrodynamics by SleekEagle
Some might say the coolest ;)
My pleasure!
SleekEagle OP t1_iuedbiw wrote
Reply to comment by HydrousIt in new physics-inspired Deep Learning method generates images with electrodynamics by SleekEagle
Just to add - PFGMs are best in class for flow models. They perform comparably to GANs on the datasets used in the paper, which is pretty exciting.
SleekEagle OP t1_iued3mr wrote
Reply to comment by cy13erpunk in new physics-inspired Deep Learning method generates images with electrodynamics by SleekEagle
To generate data, you need to know the probability distribution of a dataset. This is in general unknown. The method called "normalizing flows" starts with a simple distribution that we do know exactly, and learns how to turn the simple distribution into the data distribution through a series of transformations. If we know these transformations, then we can generate data from the data distribution by sampling from the simple distribution and passing it through the transformations.
Normalizing flows are a general approach to generative AI - how to actually learn the transformations and what they look like depends on the particular method. With PFGMs, the authors find that the laws of physics define these transformations. If we start with a simple distribution, we can transform it into the data distribution by imagining the data points are electrons and moving them according to the electric field they generate.
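Here's a toy 2D sketch of that intuition (my own illustration in numpy, not the paper's actual method): treat a few "data" points as charges, start samples from a simple Gaussian, and step them along the field lines toward the charges.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Data distribution": a few 2D points, imagined as positive charges.
data = np.array([[1.0, 1.0], [-1.0, 1.0], [0.0, -1.0]])

def field_direction(points):
    """Unit direction of the net electric-field-like force at each point."""
    diffs = points[:, None, :] - data[None, :, :]            # (n, charges, 2)
    dists = np.linalg.norm(diffs, axis=-1, keepdims=True) + 1e-6
    f = (diffs / dists**3).sum(axis=1)                       # 1/r^2 falloff per charge
    return f / (np.linalg.norm(f, axis=-1, keepdims=True) + 1e-9)

# Start from a simple, known distribution (a Gaussian)...
samples = rng.normal(scale=2.0, size=(500, 2))

# ...and transport the samples backward along the field, step by step,
# so they drift toward the charges, i.e. toward the data distribution.
for _ in range(400):
    samples -= 0.02 * field_direction(samples)
```

In the actual PFGM the field is learned by a neural network in an augmented space and the transport is done by solving an ODE, but the picture is the same: a simple distribution gets carried along field lines into the data distribution.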
SleekEagle OP t1_iu5zfj5 wrote
Reply to comment by dasnihil in new physics-inspired Deep Learning method generates images with electrodynamics by SleekEagle
👋 hello friend!
SleekEagle OP t1_iu5h5tt wrote
Reply to comment by ebolathrowawayy in new physics-inspired Deep Learning method generates images with electrodynamics by SleekEagle
I'm not sure how the curse of dimensionality would affect PFGMs relative to Diffusion Models, but at the very least PFGMs could be dropped in as the base model in Imagen while diffusion models are kept for the super resolution chain! More info on that here or more info on Imagen here (or how to build your own Imagen here ;) ).
SleekEagle OP t1_iu56cs1 wrote
Reply to comment by Education-Sea in new physics-inspired Deep Learning method generates images with electrodynamics by SleekEagle
Note that PFGMs are not text-conditioned yet! There's still work to be done there :)
SleekEagle OP t1_iu569xx wrote
Reply to comment by blueSGL in new physics-inspired Deep Learning method generates images with electrodynamics by SleekEagle
I don't think the paper explicitly says anything about this, but I would expect them to be similar. If anything I would imagine they would require less memory, but not more. That having been said, if you're thinking of e.g. DALL-E 2 or Stable Diffusion, those models also have other parts that PFGMs don't (like text encoding networks), so it is completely fair that they are larger!
SleekEagle t1_je9jlkg wrote
Reply to [D] What do you think about all this hype for ChatGPT? by Dear-Vehicle-3215
I think hallucination is a serious concern in some fields but for general business-y creative work it's going to be a game changer. Just look at Jasper - a $100M series A.
EDIT: This applies more to GPT-4 than ChatGPT