Search

50 results for openreview.net:

Submitted by Neurosymbolic t3_1027qvv in MachineLearning

[https://preview.aclanthology.org/emnlp-22-ingestion/2022.emnlp-main.255.pdf](https://preview.aclanthology.org/emnlp-22-ingestion/2022.emnlp-main.255.pdf) * Kamienny et al., "End-to-end Symbolic Regression with Transformers," NeurIPS 2022 [https://openreview.net/pdf?id=GoOuIrDHG_Y](https://openreview.net/pdf?id=GoOuIrDHG_Y) * Nandwani et al., "A Solver-Free Framework for Scalable Learning in Neural ILP Architectures," NeurIPS 2022 [https://openreview.net/pdf?id=EqZuN4V_FLF](https://openreview.net/pdf?id=EqZuN4V_FLF) * Shakarian and Simari, "Extensions to Generalized Annotated Logic and an Equivalent Neural Architecture," TransAI 2022 [https://ieeexplore.ieee.org/document/9951514](https://ieeexplore.ieee.org/document/9951514) What interesting papers do you think were overlooked?

3

Rolling_Pig t1_iqy6tyc wrote

I think the algorithms of these two papers may help you: https://arxiv.org/abs/2110.02711 and https://openreview.net/forum?id=pd1P2eUBVfq (I just read it, and I'm so impressed). Actually there is a gap because of one timestep. But if betas

1

nibbels t1_irvvnpw wrote

critiques, but they do discuss issues with both the field and the models. https://arxiv.org/abs/2011.03395 https://openreview.net/forum?id=xNOVfCCvDpM https://arxiv.org/abs/2110.09485#:~:text=The%20notion%20of%20interpolation%20and,outside%20of%20that%20convex%20hull. https://towardsdatascience.com/the-reproducibility-crisis-and-why-its-bad-for-ai-c8179b0f5d38 https://ai100.stanford.edu/2021-report/standing-questions-and-responses/sq10-what-are-most-pressing-dangers-ai And then, of course, there

1
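The third arXiv link in the result above highlights the claim that "interpolation" should mean the test point lies inside the convex hull of the training set, which essentially never happens in high dimension. A minimal sketch of that membership test, phrased as a linear-programming feasibility problem; the function name and the use of SciPy are my own choices for illustration, not code from the cited paper:

```python
# Does a query point q lie in the convex hull of training points X?
# Equivalent to: do coefficients lam >= 0 with sum(lam) = 1 and X.T @ lam = q exist?
# Hypothetical illustration of the "interpolation = inside the convex hull" definition.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(X, q):
    """X: (n, d) training points, q: (d,) query point."""
    n = X.shape[0]
    A_eq = np.vstack([X.T, np.ones((1, n))])   # d equality rows for X.T @ lam = q, plus sum(lam) = 1
    b_eq = np.concatenate([q, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n)      # lam >= 0; feasibility only, objective is zero
    return res.success

# In high dimension with few samples, new points are almost never inside the hull:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
print(in_convex_hull(X, rng.normal(size=50)))  # typically False
print(in_convex_hull(X, X.mean(axis=0)))       # the training mean is always inside -> True
```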

JNmbrs t1_isgqdyr wrote

[https://mlb2251.github.io/stitch_jul11.pdf](https://mlb2251.github.io/stitch_jul11.pdf) and [http://andrewcropper.com/pubs/aaai20-forgetgol.pdf](http://andrewcropper.com/pubs/aaai20-forgetgol.pdf)); (c) optimizing neural guidance (e.g., [https://openreview.net/pdf?id=rCzfIruU5x5](https://openreview.net/pdf?id=rCzfIruU5x5) and [https://arxiv.org/pdf/2206.05922.pdf](https://arxiv.org/pdf/2206.05922.pdf)); and (d) specification (e.g., [https://arxiv.org/pdf/2007.05060.pdf](https://arxiv.org/pdf/2007.05060.pdf) and [https://arxiv.org/pdf/2204.02495.pdf](https://arxiv.org/pdf/2204.02495.pdf)). While obviously

1

ChrisRackauckas OP t1_iswr0wc wrote

your gradient estimate. (2) Unlike other previous algorithms with known exponential cost scaling (for example, see https://openreview.net/forum?id=KAFyFabsK88 for a deep discussion of previous work's performance), this scales linearly. 1024 should be fine

2

shingekichan1996 OP t1_j5wlavz wrote

[github.com/raminnakhli/Decoupled-Contrastive-Learning](https://openreview.net/forum?id=JzdYX8uzT4W) And I also saw that the same paper was [rejected at NeurIPS'21](https://openreview.net/forum?id=JzdYX8uzT4W) because its impact is similar to that of other methods like Barlow Twins, SimSiam, BYOL, etc. However, at first

3
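For context on the method named in the result above, here is a minimal sketch of a decoupled contrastive loss in PyTorch. The defining change relative to the standard NT-Xent/InfoNCE loss is that the positive pair is removed from the denominator; the shapes, temperature, and single-direction formulation below are my own simplifications, not the authors' implementation:

```python
# A minimal decoupled-contrastive-style loss: the positive pair is excluded
# from the log-sum-exp denominator. One direction only (view 1 as anchors).
import torch
import torch.nn.functional as F

def dcl_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (N, N) cross-view similarities
    intra = z1 @ z1.t() / temperature             # (N, N) within-view similarities
    pos = logits.diag()                           # positives: z1_i <-> z2_i
    n = z1.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z1.device)
    # Negatives: cross-view pairs minus the positive, and within-view pairs minus self.
    neg = torch.cat([logits.masked_fill(eye, float('-inf')),
                     intra.masked_fill(eye, float('-inf'))], dim=1)
    return (-pos + torch.logsumexp(neg, dim=1)).mean()
```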

vwvwvvwwvvvwvwwv t1_j6xgaqc wrote

autoencoder might work just as well). This was published yesterday: [Flow Matching for Generative Modeling](https://openreview.net/forum?id=PqvMRDCJT9t) *TL;DR:* We introduce a new simulation-free approach for training Continuous Normalizing Flows, generalizing the probability

4
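For readers skimming the Flow Matching link above, a minimal sketch of what "simulation-free" training looks like in practice: the model regresses a velocity field along an analytically known path between noise and data, so no ODE has to be solved during training. The straight-line (optimal-transport) conditional path, the tiny MLP, and the hyperparameters below are placeholder assumptions, not the paper's setup:

```python
# Flow-matching-style training step, assuming the linear path
# x_t = (1 - t) * x0 + t * x1 with target velocity x1 - x0.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Tiny stand-in for the vector field v_theta(x, t)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(),
                                 nn.Linear(256, dim))
    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

def flow_matching_step(model, optimizer, x1):
    """One training step on a batch of data x1 with shape (N, dim)."""
    x0 = torch.randn_like(x1)                         # noise endpoint
    t = torch.rand(x1.size(0), 1, device=x1.device)   # t ~ U(0, 1)
    xt = (1 - t) * x0 + t * x1                        # point on the straight path
    target = x1 - x0                                  # velocity of that path
    loss = ((model(xt, t) - target) ** 2).mean()      # plain regression, no simulation
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

model = VelocityNet(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
print(flow_matching_step(model, opt, torch.randn(64, 2)))
```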

chrvt t1_j7pizuw wrote

very high-dimensional data where classical nearest neighbor methods fail: [Intrinsic dimensionality estimation using Normalizing Flows](https://openreview.net/pdf?id=wA7vZS-mSxv)

1

afireohno t1_j9230xx wrote

types of invariances (translation, permutation, etc.) that can be encoded in DL architectures. 2. [Algorithmic alignment](https://openreview.net/forum?id=rJxbJeHFPS) studies the relationship between information flow in classical algorithms and DL architectures and how "aligning

6
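As a concrete instance of the first point in the result above (invariances encoded in the architecture), here is a DeepSets-style sketch of permutation invariance: a shared per-element network followed by symmetric sum pooling. The layer sizes and the check at the end are illustrative assumptions, not taken from the linked paper:

```python
# Permutation-invariant set encoder: rho(sum_i phi(x_i)).
import torch
import torch.nn as nn

class PermutationInvariantEncoder(nn.Module):
    def __init__(self, in_dim, hidden=128, out_dim=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))
    def forward(self, x):                            # x: (batch, set_size, in_dim)
        return self.rho(self.phi(x).sum(dim=1))      # sum pooling discards ordering

# Reordering the set elements leaves the output (numerically) unchanged:
enc = PermutationInvariantEncoder(in_dim=3)
x = torch.randn(2, 5, 3)
perm = torch.randperm(5)
with torch.no_grad():
    assert torch.allclose(enc(x), enc(x[:, perm]), atol=1e-5)
```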

velcher t1_j934snb wrote

might be interested in [V-information](https://openreview.net/forum?id=r1eBeyHFDH), which specifically looks at information from a computational efficiency point of view. For example, classical mutual information will say an encrypted version of the message

2

liquiddandruff t1_j98v6ko wrote

pace here - Language Models Can (kind of) Reason: A Systematic Formal Analysis of Chain-of-Thought https://openreview.net/forum?id=qFVVBzXxR2V - Emergent Abilities of Large Language Models https://arxiv.org/abs/2206.07682 A favourite [discussed recently](https://www.reddit.com/r/singularity/comments/10y85f5/theory_of_mind_may_have_spontaneously_emerged_in/):

4

Cryptizard t1_ja69a0b wrote

Reply to comment by Mason-B in So what should we do? by googoobah

Just two weeks ago there was a paper that compresses GPT-3 to [1/4 the size](https://openreview.net/forum?id=tcbBPnfwxS). That’s two orders of magnitude in one paper, let alone 10 years. Your pessimism just

1

Submitted by Ash3nBlue t3_xvw467 in MachineLearning

[https://twitter.com/nearcyan/status/1576620734146756609](https://twitter.com/nearcyan/status/1576620734146756609) Another discussion in r/singularity: [https://www.reddit.com/r/singularity/comments/xtwd7k/selfprogramming_artificial_intelligence_using/](https://www.reddit.com/r/singularity/comments/xtwd7k/selfprogramming_artificial_intelligence_using/) ICLR OpenReview: [https://openreview.net/forum?id=SKat5ZX5RET](https://openreview.net/forum?id=SKat5ZX5RET)

78