Ulfgardleo t1_jdmw59w wrote
Reply to comment by IronSmithFE in If earth was a smooth sphere, which direction would water flow when placed on the surface? by Axial-Precession
water would eventually evenly distribute around the globe and sit still
Ulfgardleo t1_j97nb2q wrote
Reply to [D] Relu + sigmoid output activation by mrwafflezzz
sigmoid of 0 is 0.5
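To see why this matters, here is a minimal numpy sketch (my own illustration, not from the thread): a ReLU right before a sigmoid output clips every negative pre-activation to 0, and since sigmoid(0) = 0.5, the composed output can never drop below 0.5.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

z = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])   # pre-activations, some negative

# ReLU clips all negatives to 0, and sigmoid(0) = 0.5,
# so the combined output is stuck in [0.5, 1)
print(sigmoid(relu(z)))   # [0.5, 0.5, 0.5, ~0.62, ~0.95]
print(sigmoid(0.0))       # 0.5
```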
Ulfgardleo t1_j976icn wrote
Reply to comment by currentscurrents in [D] what are some open problems in computer vision currently? by Fabulous-Let-822
We can do image segmentation, but segmentation uncertainties are a bit iffy. We can do pixel-wise uncertainties, but that really is not what we want, because neighbouring pixels are not independent. E.g., if you have a detect-and-segment task, then with an uncertain detection your segmentation masks should reflect that sometimes "nothing" is detected and thus there is nothing to segment. I think we have not progressed there beyond Ising-model variations.
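As a toy illustration of the difference (my own sketch, with made-up probabilities): sampling each pixel independently from its marginal can never produce the "nothing was detected, so the mask is empty" outcome, while sampling jointly, conditioned on the detection, can.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 4, 4
p_detect = 0.6                    # hypothetical detection probability
p_pixel = np.full((H, W), 0.9)    # hypothetical per-pixel foreground probabilities

# pixel-wise view: every pixel is an independent Bernoulli draw
independent_masks = rng.random((3, H, W)) < p_pixel

# joint view: first sample whether anything is detected at all,
# then sample the mask only in the "detected" case
joint_masks = []
for _ in range(3):
    if rng.random() < p_detect:
        joint_masks.append(rng.random((H, W)) < p_pixel)
    else:
        joint_masks.append(np.zeros((H, W), dtype=bool))  # no detection -> empty mask

# joint samples are sometimes completely empty; independent per-pixel
# uncertainties can never express that
print([m.any() for m in joint_masks])
print([m.any() for m in independent_masks])
```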
Ulfgardleo t1_j96vdnc wrote
Reply to comment by liquiddandruff in [D] Please stop by [deleted]
but theory of mind is not sentience. it is also not clear whether what we measured here is theory of mind.
Ulfgardleo t1_j92cwoy wrote
Reply to comment by medwatt in [D] Short survey of optimization methods by medwatt
Optimisation is the worst field in which to skip the nitty-gritty details. Optimisation is all about details.
Your question is underspecified: "optimisation" in ML is a very different beast from optimisation in the mathematical sense.
Ulfgardleo t1_j91e648 wrote
Reply to comment by goolulusaurs in [D] Please stop by [deleted]
Due to the way their training works, LLMs cannot be sentient. They lack any way to interact with the real world outside of text prediction, have no way to commit knowledge to memory, and have no sense of time or order of events, because they can't remember anything between sessions.
If something cannot be sentient, one does not need to measure it.
Ulfgardleo t1_j912luv wrote
computer vision is a much broader problem domain than text to image or text to video. AFAIK 3D pose estimation under occlusions is an unsolved problem, still.
Ulfgardleo t1_j8wig60 wrote
Reply to [N] Google is increasing the price of every Colab Pro tier by 10X! Pro is 95 Euro and Pro+ is 433 Euro per month! Without notifying users! by FreePenalties
seems they confused DKK and € symbols.
Ulfgardleo t1_j89nauz wrote
Reply to Is it possible that an earth-like planet is floating independently in our universe somewhere with no sun and whose atmosphere harbors conditions to produce it's own sun-like light and energy? by CevicheCabbage
Given the added condition that the atmosphere should be able to produce the energy: no. The only way you could do this is via geothermal energy, and it would probably not be a lot.
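Rough back-of-the-envelope comparison (approximate textbook numbers, my own addition): the mean geothermal heat flow through Earth's surface is on the order of 0.09 W/m², versus roughly 340 W/m² of sunlight averaged over the sphere.

```python
# approximate values, just to get the order of magnitude
geothermal_flux = 0.09           # W/m^2, mean heat flow through Earth's surface
solar_constant = 1361.0          # W/m^2 at the top of the atmosphere
solar_mean = solar_constant / 4  # ~340 W/m^2 averaged over the whole sphere

print(solar_mean / geothermal_flux)  # geothermal delivers roughly 1/4000 of what sunlight does
```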
Ulfgardleo t1_j87y15c wrote
Reply to comment by BrotherAmazing in [D] Critique of statistics research from machine learning perspectives (and vice versa)? by fromnighttilldawn
Sorry that was a wrong translation from how we say it over here.
Ulfgardleo t1_j84fokp wrote
Reply to comment by cajmorgans in [D] Is it legal to use images or videos with copyright to train a model? by Tlaloc-Es
Legally, the data is not public, and the fact that Facebook is actively trying to prevent scraping makes it very difficult to argue otherwise.
Legally, the data cannot be public. The users give Facebook a non-exclusive license with limited rights to store and process the data. From this it does not follow that anyone who sees the shared images (for example) has a right to process them as well. If that were the case, the terms (https://www.facebook.com/terms.php 3.1) would have to state under which license the works are redistributed by Facebook.
Ulfgardleo t1_j84fdfl wrote
Reply to comment by 2blazen in [D] Is it legal to use images or videos with copyright to train a model? by Tlaloc-Es
if it is illegal now it would be super illegal then, because removing watermarks on its own typically violates the license of the material.
The question is 100% the same as "can I include GPLv3 code in my commercial closed-source repository if I remove the license headers and ensure that the code is never published?"
Ulfgardleo t1_j7yd02x wrote
Reply to comment by I-am_Sleepy in [D] Critique of statistics research from machine learning perspectives (and vice versa)? by fromnighttilldawn
You are right, but the point I was making is that in ML in general those are not of high importance, and this already holds for rather basic questions like:
"For your chosen learning algorithm, under which conditions does it hold that, in expectation over all training datasets of size n, the Bayes risk is not monotonically increasing with n?"
One would think that this question is of rather central importance. Yet no-one cares, and answering it is non-trivial even for linear classification. Stats cares a lot about this question. While the math behind both fields is the same (all applied math is a subset of math, except if you ask people who identify with only one of the two), the communities have different goals.
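In symbols (my rephrasing, in terms of the expected risk of the learned predictor rather than the Bayes risk proper): with $\hat f_{D_n}$ the predictor learned from a dataset $D_n$ of $n$ i.i.d. samples and $R(f) = \mathbb{E}_{(x,y)}[\ell(f(x), y)]$ its risk, the question asks under which conditions

$$\mathbb{E}_{D_n}\big[R(\hat f_{D_n})\big] \;\ge\; \mathbb{E}_{D_{n+1}}\big[R(\hat f_{D_{n+1}})\big] \quad \text{for all } n.$$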
Ulfgardleo t1_j7y8hdg wrote
Reply to [D] Critique of statistics research from machine learning perspectives (and vice versa)? by fromnighttilldawn
The difference between stats and ML is as large as that between math and applied math; they aim to answer vastly different questions. In ML you don't care about identifiability, because you don't care whether there is a gene among 2 million that causes a specific type of cancer. This is not what ML is about. In ML you also very rarely care about tail risk (you should) and almost never about calibration (you really should). And identifiability goes out of the window as soon as you use neural networks, which prevents you from interpreting your models.
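For the calibration point, here is a minimal numpy sketch (my own, not from the post) of what caring about calibration means operationally: bin the predicted probabilities and compare the mean prediction in each bin to the empirical frequency, a simple homemade expected calibration error.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predicted probabilities of the positive class and compare the
    mean prediction in each bin to the empirical frequency of the label."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs >= lo) & ((probs <= hi) if hi == 1.0 else (probs < hi))
        if not in_bin.any():
            continue
        confidence = probs[in_bin].mean()   # what the model claims
        frequency = labels[in_bin].mean()   # what actually happens
        ece += in_bin.mean() * abs(confidence - frequency)
    return ece

# toy example: a model that is systematically overconfident
probs = np.array([0.95, 0.90, 0.90, 0.85, 0.80, 0.20, 0.10, 0.10])
labels = np.array([1, 0, 1, 0, 1, 0, 0, 1])
print(expected_calibration_error(probs, labels))
```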
Ulfgardleo t1_j77rx53 wrote
Reply to comment by spiritus_dei in [D] Are large language models dangerous? by spiritus_dei
How should it plan? It does not have persistent memory, so it cannot have any form of time-consistency. Its memory starts at the beginning of the session and ends with the end of the session; the next session does not know about the previous one.
it lacks everything necessary to have something like a plan.
Ulfgardleo t1_j77ribp wrote
Reply to comment by spiritus_dei in [D] Are large language models dangerous? by spiritus_dei
A virus acts on its own; it has mechanisms to interact with the real world.
Ulfgardleo t1_j6wcfav wrote
Reply to comment by GoofAckYoorsElf in [R] Extracting Training Data from Diffusion Models by pm_me_your_pay_slips
No, you are now just writing what you like.
Is it right to use someone else's work without asking or paying for it?
Ulfgardleo t1_j6w8snb wrote
Reply to comment by GoofAckYoorsElf in [R] Extracting Training Data from Diffusion Models by pm_me_your_pay_slips
"copyright warriors"
do you care about what is right, or what you like?
Ulfgardleo t1_j6vzpgz wrote
Reply to [D] Normalizing Flows in 2023? by wellfriedbeans
There is only very little research on them. They are a nice theoretical idea, but the concept is very constraining, and numerical difficulties make experimenting with them hell.
I am not aware of any active research and I think they never were really big to begin with.
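To make "the concept is very constraining" concrete, here is a minimal 1-D numpy sketch of the change-of-variables requirement (my own toy example, not code from any flow library): every layer must be exactly invertible and you must be able to compute the log-determinant of its Jacobian, which is what rules out most architectures.

```python
import numpy as np

# minimal 1-D "flow": an affine map z = a*x + b with a != 0
a, b = 2.0, -1.0

def forward(x):            # x -> z, must be invertible
    return a * x + b

def inverse(z):            # the exact inverse is needed to sample
    return (z - b) / a

def log_prob_x(x):
    # change of variables: log p_x(x) = log p_z(f(x)) + log |df/dx|
    z = forward(x)
    log_pz = -0.5 * (z**2 + np.log(2.0 * np.pi))   # standard normal base density
    log_det = np.log(np.abs(a))
    return log_pz + log_det

x = np.array([-1.0, 0.0, 1.0])
print(log_prob_x(x))
```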
Ulfgardleo t1_j6tnxul wrote
I'd wager a guess that most DL applications can't really make use of language models, and the cost of said models makes them infeasible for many applications.
Ulfgardleo t1_j603u8t wrote
Reply to [P] EvoTorch 0.4.0 dropped with GPU-accelerated implementations of CMA-ES, MAP-Elites and NSGA-II. by NaturalGradient
In my experience, this is never the bottleneck. Rastrigin does not cost much to evaluate; the real functions you would actually run an evolutionary algorithm on do. I did research on speeding up CMA-ES, and in the end it felt like a useless exercise in matrix algebra for exactly that reason.
Yes, in theory being able to speed up matrix operations is nice, but working in higher dimensions (80 is kinda irrelevant computationally, even on a CPU) always has to fight against the O(1/n) convergence rate of all evolutionary algorithms.
So all this is likely good for is benchmarking these algorithms in a regime that is practically irrelevant for evolutionary computation.
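A rough timing sketch of that point (my own illustration with numpy; numbers vary by machine): at dimension 80, a Rastrigin evaluation and an 80x80 eigendecomposition each take microseconds to a fraction of a millisecond, so neither matters once the actual objective is a simulation or a training run that costs seconds to hours per evaluation.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
dim = 80

def rastrigin(x):
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

x = rng.standard_normal(dim)
C = np.cov(rng.standard_normal((dim, 5 * dim)))   # an 80x80 covariance-like matrix

def avg_time(fn, reps=200):
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps

print("rastrigin eval    :", avg_time(lambda: rastrigin(x)))
print("80x80 eigendecomp :", avg_time(lambda: np.linalg.eigh(C)))
# a "real" objective (a simulation, a training run, ...) easily costs
# seconds to hours per evaluation, dwarfing both numbers above
```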
Ulfgardleo t1_j46aqfw wrote
Reply to comment by chief167 in [D] Has ML become synonymous with AI? by Valachio
you get downvoted, but you are right. There is nothing intelligent about an accurate regression model. It is the application of that regression model to a certain task that we anthropomorphize to "intelligence".
Ulfgardleo t1_j3hf04h wrote
Reply to comment by deepwank in [R] Greg Yang's work on a rigorous mathematical theory for neural networks by IamTimNguyen
Neural tangent kernels as an idea are old; they predate deep learning. To my knowledge, not a single practically useful fact has come out of these analyses yet.
Ulfgardleo t1_j1w3qhc wrote
Reply to comment by KonArtist01 in [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
I am not sure about you, but I can identify a significant fraction of AI paintings with high confidence. There are still significant errors in paper/material texture, like "the model has not understood that canvas threads do not swirl", or "this hand looks off", or "this eye looks wrong".
(All three examples are visible in the painting test above.)
Ulfgardleo t1_jegoe8z wrote
Reply to comment by pier4r in [News] Twitter algorithm now open source by John-The-Bomb-2
This part is not used for recommendations though; it is for analytics, internal testing, and ensuring that different groups (+Elon) don't get disadvantaged.