Submitted by MurlocXYZ t3_110swn2 in MachineLearning
throwaway2676 t1_j8digqj wrote
Here are the top 10 posts on my front page right now:
>[R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research
>[D] Quality of posts in this sub going down
>[D] Is a non-SOTA paper still good to publish if it has an interesting method that does have strong improvements over baselines (read text for more context)? Are there good examples of this kind of work being published?
>[R] [N] pix2pix-zero - Zero-shot Image-to-Image Translation
>[P] Extracting Causal Chains from Text Using Language Models
>[R] [P] Adding Conditional Control to Text-to-Image Diffusion Models. "This paper presents ControlNet, an end-to-end neural network architecture that controls large image diffusion models (like Stable Diffusion) to learn task-specific input conditions." Example uses the Scribble ControlNet model.
>[R] [P] OpenAssistant is a fully open-source chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
>[D] What ML dev tools do you wish you'd discovered earlier?
>[R] CIFAR10 in <8 seconds on an A100 (new architecture!)
>[D] Engineering interviews at Anthropic AI?
From this list the only non-academic/"low quality" posts are the last one and this one. This is consistent with my normal experience, so I'm not really sure what you are talking about.
MurlocXYZ OP t1_j8dknrw wrote
I have been filtering by Hot, so my experience has been quite different. I guess I should filter by Top more.