50 results for towardsdatascience.com:

No-Belt7582 t1_j9j6edy wrote

Foong, “How to Fine-tune Stable Diffusion using Textual Inversion,” Medium, Oct. 24, 2022. https://towardsdatascience.com/how-to-fine-tune-stable-diffusion-using-textual-inversion-b995d7ecc095 (accessed Feb. 09, 2023). [10] N. W. Foong, “How to Fine-tune Stable Diffusion using Dreambooth,” Medium ... towardsdatascience.com/how-to-fine-tune-stable-diffusion-using-dreambooth-dfa6694524ae (accessed Feb. 09, 2023). [11] “The Annotated Diffusion Model.” https://huggingface.co/blog/annotated-diffusion (accessed Jan. 31, 2023). [12] J. Alammar, “The Illustrated Stable Diffusion.” https://jalammar.github.io/illustrated-stable-diffusion/ (accessed Jan. 31, 2023). [13] “Understanding Stable

3

nibbels t1_irvvnpw wrote

with both the field and the models. https://arxiv.org/abs/2011.03395 https://openreview.net/forum?id=xNOVfCCvDpM https://arxiv.org/abs/2110.09485#:~:text=The%20notion%20of%20interpolation%20and,outside%20of%20that%20convex%20hull. https://towardsdatascience.com/the-reproducibility-crisis-and-why-its-bad-for-ai-c8179b0f5d38 https://ai100.stanford.edu/2021-report/standing-questions-and-responses/sq10-what-are-most-pressing-dangers-ai And then, of course, there are the oft-discussed topics like bias

1

build_saas_reddit t1_iry4uhw wrote

Take a look at Pegasus : [https://arxiv.org/abs/1912.08777](https://arxiv.org/abs/1912.08777) [https://towardsdatascience.com/how-to-perform-abstractive-summarization-with-pegasus-3dd74e48bafb](https://towardsdatascience.com/how-to-perform-abstractive-summarization-with-pegasus-3dd74e48bafb)

3

BlazeObsidian t1_iujbbdu wrote

sure your model is running on the GPU? See [https://towardsdatascience.com/pytorch-switching-to-the-gpu-a7c0b21e8a99](https://towardsdatascience.com/pytorch-switching-to-the-gpu-a7c0b21e8a99) or if you can see GPU utilisation it might be simpler to verify. If you are not explicitly moving your model

1
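The device check the comment alludes to can be sketched as follows, assuming PyTorch is installed (the toy `Linear` model is just an illustration; the same check works for any `nn.Module`):

```python
import torch

# a toy model; the check works the same for any nn.Module
model = torch.nn.Linear(4, 2)

# move the model and its inputs to the same device; fall back to CPU if no GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
x = torch.randn(8, 4, device=device)

# verify where the parameters actually live
print(next(model.parameters()).device)  # e.g. "cuda:0" or "cpu"
```

If the printed device is `cpu` while you expected `cuda:0`, the model (or the input batch) was never explicitly moved.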

Heap_Good_Firewater t1_iwld8mc wrote

only be safely used by software engineers seems like trading one set of problems for another. [https://towardsdatascience.com/the-blockchain-scalability-problem-the-race-for-visa-like-transaction-speed-5cce48f9d44](https://towardsdatascience.com/the-blockchain-scalability-problem-the-race-for-visa-like-transaction-speed-5cce48f9d44) >The music industry is a good analog for what's happening in finance. The music

1

I-am_Sleepy t1_iy7xfo0 wrote

genetic algorithm, or Bayesian optimization. For Bayesian inference, if your prior is normal, then its [conjugate prior](https://towardsdatascience.com/conjugate-prior-explained-75957dc80bfb) is also normal. The multivariate case is a bit trickier and depends on your settings (likelihood distribution

1
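The normal–normal conjugacy the comment mentions has a closed-form update. A minimal sketch for the mean of a normal likelihood with known variance (function name and example numbers are illustrative):

```python
def normal_posterior(data, prior_mean, prior_var, lik_var):
    """Posterior over the mean of a normal likelihood (known variance lik_var),
    with a normal prior -- the conjugate-prior update in closed form."""
    n = len(data)
    post_prec = 1.0 / prior_var + n / lik_var       # precisions add
    post_mean = (prior_mean / prior_var + sum(data) / lik_var) / post_prec
    return post_mean, 1.0 / post_prec                # posterior mean and variance

# prior N(0, 1), three observations with likelihood variance 0.25
mean, var = normal_posterior([2.1, 1.9, 2.0], prior_mean=0.0, prior_var=1.0, lik_var=0.25)
```

The posterior mean lands between the prior mean and the data mean, weighted by their precisions, which is why the update is so convenient for Bayesian optimization.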

ResponsibilityNo7189 t1_j02dzwf wrote

your network probabilities to be calibrated. First you might want to read about aleatoric vs. epistemic uncertainty. [https://towardsdatascience.com/aleatoric-and-epistemic-uncertainty-in-deep-learning-77e5c51f9423](https://towardsdatascience.com/aleatoric-and-epistemic-uncertainty-in-deep-learning-77e5c51f9423) Monte Carlo sampling and training have been used to get a sense of uncertainty. Also changing

10
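The Monte Carlo sampling idea the comment refers to is often done with dropout left active at inference time (MC dropout). A minimal sketch, assuming PyTorch; the toy architecture and the 50-pass count are arbitrary choices:

```python
import torch

# a toy network with dropout; any model containing Dropout layers works
model = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.5), torch.nn.Linear(16, 1),
)

x = torch.randn(8, 4)
model.train()  # keep dropout ACTIVE so each forward pass is stochastic
with torch.no_grad():
    preds = torch.stack([model(x) for _ in range(50)])  # 50 stochastic passes

mean = preds.mean(dim=0)        # predictive mean
uncertainty = preds.std(dim=0)  # spread across passes ~ model uncertainty
```

Inputs where the passes disagree (high `uncertainty`) are the ones the network is least sure about, which is a cheap proxy for epistemic uncertainty.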

DreamWatcher_ t1_j03smkx wrote

billion parameters, but its performance is still lower than neural networks with fewer parameters. [https://towardsdatascience.com/gpt-4-is-coming-soon-heres-what-we-know-about-it-64db058cfd45](https://towardsdatascience.com/gpt-4-is-coming-soon-heres-what-we-know-about-it-64db058cfd45)

4

biophysninja t1_j0a4nc2 wrote

approach this depending on the nature of the data, complexity, and compute available. 1- using SMOTE https://towardsdatascience.com/stop-using-smote-to-handle-all-your-imbalanced-data-34403399d3be 2- if your data is sparse you can use PCA or Autoencoders to reduce the dimensionality

−1
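The core SMOTE idea the comment debates — synthesizing minority samples by interpolating between a minority point and one of its nearest minority neighbours — can be sketched without a library (function name, `k`, and the sample points are illustrative):

```python
import random

def smote_sketch(minority, n_new, k=3):
    """Generate n_new synthetic minority samples by interpolating a
    randomly chosen point toward one of its k nearest minority neighbours."""
    synthetic = []
    for _ in range(n_new):
        a = random.choice(minority)
        # k nearest neighbours of a among the other minority points
        neighbours = sorted(
            (p for p in minority if p is not a),
            key=lambda p: sum((u - v) ** 2 for u, v in zip(a, p)),
        )[:k]
        b = random.choice(neighbours)
        t = random.random()  # random point on the segment a -> b
        synthetic.append(tuple(u + t * (v - u) for u, v in zip(a, b)))
    return synthetic

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new = smote_sketch(pts, n_new=10)
```

Because every synthetic point lies on a segment between two real minority points, SMOTE can only densify the region the minority class already occupies — which is exactly the limitation the linked article warns about.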

I-am_Sleepy t1_j0mhitl wrote

starter, look at InfoVAE ([See this blog](https://towardsdatascience.com/with-great-power-comes-poor-latent-codes-representation-learning-in-vaes-pt-2-57403690e92b) for context). Another way is to vector-quantize it (VQ-VAE based models), as the model only needs to learn a small number of latent

2
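The vector-quantization step at the heart of VQ-VAE is just a nearest-codebook lookup. A minimal sketch (the two-entry codebook and 2-D latents are illustrative; real models learn the codebook jointly with the encoder):

```python
def vector_quantize(latents, codebook):
    """Replace each latent vector with the index of its nearest codebook
    entry (squared Euclidean distance) -- the discrete lookup in VQ-VAE."""
    indices = []
    for z in latents:
        idx = min(
            range(len(codebook)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(z, codebook[i])),
        )
        indices.append(idx)
    return indices

codebook = [(0.0, 0.0), (1.0, 1.0)]
codes = vector_quantize([(0.1, -0.2), (0.9, 1.1)], codebook)  # → [0, 1]
```

The downstream model then only has to predict these discrete indices, which is the "small number of latent codes" advantage the comment points to.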

jharel t1_j264g6n wrote

following is my explanation. Perhaps I'll try to find time to post about it. [https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46](https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46)

0

jharel t1_j2bmsex wrote

imagine something happening in the future then it must be inevitable future fact." [https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46](https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46)

2

Scarlet_pot2 OP t1_j39g574 wrote

associations from a large corpus of text." This was the first "guess the next word" model. [https://towardsdatascience.com/attention-is-all-you-need-discovering-the-transformer-paper-73e5ff5e0634](https://towardsdatascience.com/attention-is-all-you-need-discovering-the-transformer-paper-73e5ff5e0634) This next link is the "Attention is all you need" paper that describes how to build

1
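The core operation of the "Attention is all you need" paper mentioned above is scaled dot-product attention, softmax(QKᵀ/√d_k)·V. A minimal sketch written out for plain lists of vectors (the example query/key/value numbers are illustrative):

```python
import math

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors:
    each query is answered with a softmax-weighted mix of the values."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]          # numerically stable softmax
        weights = [e / sum(exps) for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# one query attending over two key/value pairs; it matches the first key best
result = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
```

Since the softmax weights sum to one, each output row is a convex combination of the value vectors, leaning toward the values whose keys best match the query.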

MustachedLobster t1_j45dp6k wrote

performance at tasks in T, as measured by P, improves with experience E.” https://towardsdatascience.com/what-is-machine-learning-and-types-of-machine-learning-andrews-machine-learning-part-1-9cd9755bc647#:~:text=Tom%20Mitchell%20provides%20a%20more,simple%20example%20to%20understand%20better%20.

3

MustachedLobster t1_j47oa1s wrote

performance at tasks in T, as measured by P, improves with experience E. https://towardsdatascience.com/what-is-machine-learning-and-types-of-machine-learning-andrews-machine-learning-part-1-9cd9755bc647#:~:text=Tom%20Mitchell%20provides%20a%20more,simple%20example%20to%20understand%20better%20. Localisation error decreases the more data you have

1

BigZaddyZ3 t1_j554ioa wrote

still capable of exceeding human abilities *already*. [AI is already more accurate than doctors.](https://towardsdatascience.com/ai-diagnoses-disease-better-than-your-doctor-study-finds-a5cc0ffbf32) Thinking AI won’t eventually exceed even the best human minds in pretty much every sector is basically

−2

terath t1_j5kz6tz wrote

tokenizers and many language models are built on them. Here is a quick article on them: https://towardsdatascience.com/byte-pair-encoding-subword-based-tokenization-algorithm-77828a70bee0

−2
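The BPE algorithm the linked article describes — repeatedly merge the most frequent adjacent symbol pair — fits in a short sketch (the toy word counts are illustrative):

```python
from collections import Counter

def learn_bpe(word_counts, num_merges):
    """Learn byte-pair-encoding merges: repeatedly merge the most
    frequent adjacent symbol pair across the corpus vocabulary."""
    vocab = {tuple(word): count for word, count in word_counts.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, count in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = {}
        for symbols, count in vocab.items():   # apply the merge everywhere
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] = count
        vocab = new_vocab
    return merges, vocab

merges, vocab = learn_bpe({"lower": 2, "newest": 6, "widest": 3}, num_merges=3)
```

Frequent endings like "est" quickly become single subword units, which is why BPE vocabularies compress common morphology so well.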

MagicalPeanut t1_j7en0ux wrote

just how badly Zillow's data scientists failed Zillow when it came to pricing the market: [https://towardsdatascience.com/invaluable-data-science-lessons-to-learn-from-the-failure-of-zillows-flipping-business-25fdc218a62](https://towardsdatascience.com/invaluable-data-science-lessons-to-learn-from-the-failure-of-zillows-flipping-business-25fdc218a62) Feel better yet? My hot take is that prices should go down for everyone as rates

4

ImZanga t1_j8gh6ci wrote

reliable: * [Why TikTok made its user so obsessive? The AI Algorithm that got you hooked.](https://towardsdatascience.com/why-tiktok-made-its-user-so-obsessive-the-ai-algorithm-that-got-you-hooked-7895bb1ab423) * [The App That Knows You Better than You Know Yourself: An Analysis of the TikTok Algorithm

18

[deleted] OP t1_j8rdy53 wrote

possible reason [https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e](https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e), i.e. the VGG convolutional model won't be good for bounding boxes, only for the classification task

3

tonicinhibition t1_jb1fgpe wrote

Reply to comment by tripple13 in To RL or Not to RL? [D] by vidul7498

pretty well for anyone else who is curious: [Do GANS really model the true data distribution...](https://towardsdatascience.com/do-gans-really-model-the-true-data-distribution-or-are-they-just-cleverly-fooling-us-d08df69f25eb) For further nuance on this topic, Machine Learning Street Talk discussed interpolation vs extrapolation with Yann

1

lifesthateasy t1_jbim6l5 wrote

else who can explain to you why brain neurons and artificial neurons are fundamentally different: [https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7](https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7) Even this article has some omissions, and I want to highlight how in the past

1

gradientic t1_jc430rz wrote

that you have a significant dataset of images without anomalies - learn the inliers, look for outliers) - check https://towardsdatascience.com/an-effective-approach-for-image-anomaly-detection-7b1d08a9935b for some ideas and pointers (sorry if I'm pointing out obvious things

5
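The "learn the inliers, look for outliers" recipe from the comment, reduced to a minimal stand-in: in practice the score would be something like an autoencoder's reconstruction error per image, but the thresholding logic is the same (function names and numbers here are illustrative):

```python
import statistics

def fit_inlier_model(normal_scores):
    """'Learn the inliers': estimate mean and spread from anomaly-free data."""
    return statistics.mean(normal_scores), statistics.stdev(normal_scores)

def is_anomaly(score, mean, std, threshold=3.0):
    """Flag anything too many standard deviations from the normal data."""
    return abs(score - mean) > threshold * std

# scores computed on known-good images only
mean, std = fit_inlier_model([1.0, 1.1, 0.9, 1.05, 0.95])
flagged = is_anomaly(5.0, mean, std)  # far outside the normal range
```

The key point from the comment survives even in this toy form: the model is fitted only on anomaly-free data, so anything it scores as unusual is, by construction, an outlier relative to the inliers.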

jarmosie t1_jddmvp9 wrote

high quality online content through individual blogs or newsletters. I know there are [Towards Data Science](https://towardsdatascience.com) & [Machine Learning Mastery](https://machinelearningmastery.com/) to name a few, but what other lesser-known

1

1714alpha t1_jdt21l2 wrote

Hell, there are already programs that can [diagnose illnesses better than human doctors](https://towardsdatascience.com/ai-diagnoses-disease-better-than-your-doctor-study-finds-a5cc0ffbf32). To your point, it would indeed be problematic if any single source of information became the unquestioned authority

0