kuchenrolle t1_j6wgu5r wrote
Reply to comment by wanted_to_upvote in What are the effects of adding rock salt to a cooler full of ice? by Ok_Kareem_7223
>No one ever said anything about that.
That's not quite correct. AUniquePerspective, who EmeraldHawk is responding to, introduced that above.
kuchenrolle t1_j07s9z6 wrote
Reply to New research shows why we hear “lemon” and not “melon” in processing incoming sounds: our brains “time-stamp” the order of incoming sounds, allowing us to correctly process the words that we hear by giuliomagnifico
>Having demonstrated that the brain processes multiple speech sounds at the same time, the next question is: How does the brain do this without mixing up the phonetic features of these speech sounds? There are a number of potential computational solutions to this problem. One is position-specific encoding, which posits that phonetic features are represented differently depending on where the phoneme occurs in a word. This coding scheme uses a different neural pattern to encode information about the first phoneme position (P1), second phoneme position (P2), etc., resulting in no representational overlap between neighbouring speech sounds.
>
>To test whether the brain uses this coding scheme, we trained a decoder on the responses to phonemes in first position and evaluated the model’s generalisation to other phoneme positions (Fig. 2C). Contra to the predictions of a position-dependent coding scheme, we found significant generalisation from one phoneme position to another. A classifier trained on P1 significantly generalised to the pattern of responses evoked by P2, P3, P-1 and P-2 from 20 to 270 ms (p < 0.001; t = 3.3), with comparable performance (max variance for P2 = 26%, SEM = 4%; P3 = 32%, SEM = 3%; P-1 = 23%, SEM = 3%, P-2 = 37%, SEM = 4%). This result contradicts a purely position-specific encoding scheme, and instead supports the existence of a position-invariant representation of phonetic features.
>
>Interestingly, training and testing on the same phoneme position (P1) yielded the strongest decodability (max = 71%, SEM = 5%), which was significantly stronger than when generalising across positions (e.g. train P1 test P1 vs. train on P1 test on P2: 110:310 ms, p = 0.006). It is unclear whether this gain in performance is indicative of position-specific encoding in addition to invariant encoding, or whether it reflects bolstered similarity between train and test signals due to matching other distributional features. Future studies could seek to match extraneous sources of variance across phoneme positions to test this explicitly.
That's an original way of interpreting their findings. Could they think any more symbolically?
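The cross-position decoding analysis the quoted passage describes (train a phoneme classifier on responses at position P1, then test whether it generalises to other positions) can be sketched roughly as below. This is not the authors' code or data; it uses synthetic "neural" responses that assume a position-invariant phoneme pattern plus a small position-specific shift and noise, just to illustrate the logic of the generalisation test.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_phonemes, n_channels, n_trials = 4, 30, 50

# Shared ("position-invariant") pattern for each phoneme.
patterns = rng.normal(size=(n_phonemes, n_channels))

def simulate(position_offset):
    """Responses = phoneme pattern + position-specific shift + noise (all hypothetical)."""
    X, y = [], []
    for p in range(n_phonemes):
        X.append(patterns[p]
                 + position_offset
                 + rng.normal(scale=0.5, size=(n_trials, n_channels)))
        y += [p] * n_trials
    return np.vstack(X), np.array(y)

X_p1, y_p1 = simulate(position_offset=0.0)   # first phoneme position
X_p2, y_p2 = simulate(position_offset=0.2)   # a later phoneme position

# Train on P1 only, then test both within-position and across-position.
clf = LogisticRegression(max_iter=1000).fit(X_p1, y_p1)
within = clf.score(X_p1, y_p1)   # train P1, test P1
across = clf.score(X_p2, y_p2)   # train P1, test P2 (generalisation)
print(f"within-position accuracy: {within:.2f}, cross-position: {across:.2f}")
```

Under a purely position-specific code the cross-position score would sit at chance (0.25 here); above-chance generalisation, as in the paper, is what supports a position-invariant representation.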
kuchenrolle t1_irc3kxu wrote
Reply to comment by izumi3682 in White House Releases Blueprint for Artificial Intelligence Bill of Rights by izumi3682
Who exactly are those AI experts who are "feeling substantial unease as to how fast these NLP programs were progressing"? Worrying about unexpected consequences of AI (conscious or not) is fair. But worrying about GPT-3 "getting mad at us" is not, and I'd like to see which experts say otherwise and with what arguments.
kuchenrolle t1_jdxya0n wrote
Reply to comment by apbailey in TIFU by drinking black tea by monkshood_bezoar
Do you have any source on this? I've heard a number of claims about the effects of tea preparation on caffeine content, but very little evidence.