Search

40 results for ai.googleblog.com:

SoylentRox t1_iyyvlhg wrote

Reply to comment by Head_Ebb_5993 in bit of a call back ;) by GeneralZain

[www.deepmind.com/blog](https://www.deepmind.com/blog) read all these. The most notable ones: [https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html) [https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html](https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html) For an example of a third-party scientist venturing an opinion on their work, see here: [https://moalquraishi.wordpress.com/2020/12/08/alphafold2-casp14-it-feels-like-ones-child-has-left-home/](https://moalquraishi.wordpress.com/2020/12/08/alphafold2-casp14-it-feels-like-ones-child-has-left-home/)

7

asu1474 OP t1_iucwyt9 wrote

There's a good article from Google somewhere. Edited: here's a link: https://ai.googleblog.com/2019/11/astrophotography-with-night-sight-on.html?m=1 The GCam app from the Google Pixel has been ported to a lot of different phone models.

4

geneing t1_ivplj76 wrote

really have a set of rules. They essentially learn the probabilities of different word combinations (e.g. [https://ai.googleblog.com/2019/10/exploring-massively-multilingual.html](https://ai.googleblog.com/2019/10/exploring-massively-multilingual.html)), which we could argue should count as "understanding" (since Searle didn't define it clearly

0
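(A minimal sketch of the "probabilities of word combinations" idea from the comment above: a bigram model estimated by counting. The toy corpus is invented for illustration; the multilingual translation models in the linked post learn such distributions with neural networks, not count tables.)

```python
from collections import Counter, defaultdict

# Toy corpus; a real system learns from billions of sentences.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[prev][word] += 1

def p_next(word, prev):
    """Estimate P(word | prev) from the counts."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][word] / total if total else 0.0

print(p_next("sat", "cat"))  # 1.0  -- "sat" always follows "cat" here
print(p_next("cat", "the"))  # 0.25 -- "the" is followed by cat/mat/dog/rug
```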

visarga t1_ixiec41 wrote

filter/rank the candidates - an ensemble of predictions, or running a test (such as when testing code). Minerva - https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html AlphaCode - https://www.deepmind.com/publications/competition-level-code-generation-using-deep-language-models (above-average competitive programmers) FLAN-PaLM - https://paperswithcode.com/paper/scaling-instruction-finetuned-language-models (top score

1
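(A minimal sketch of the filter-by-test idea mentioned above, in the spirit of AlphaCode's candidate filtering: sample many programs, keep only those that pass the tests. The candidate strings and test cases are invented for illustration; a real pipeline samples candidates from a code model and executes them in a sandbox.)

```python
# Toy "candidates", standing in for samples drawn from a code model.
candidates = [
    "def add(a, b): return a - b",  # wrong
    "def add(a, b): return a + b",  # correct
    "def add(a, b): return 2 * a",  # wrong
]

tests = [((1, 2), 3), ((0, 0), 0), ((-1, 5), 4)]

def passes_all(src):
    """Run one candidate and check it against every test case."""
    namespace = {}
    try:
        exec(src, namespace)  # never exec untrusted model output outside a sandbox
        return all(namespace["add"](*args) == want for args, want in tests)
    except Exception:
        return False

survivors = [src for src in candidates if passes_all(src)]
print(survivors)  # only the correct implementation remains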

Foundation12a OP t1_j24xxm9 wrote

combining scale, data, and other techniques dramatically improves performance on the STEM benchmarks MATH and MMLU-STEM. https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html https://techxplore.com/news/2022-06-fake-robots-ropes-faster.html And that was only up to the end of June. Do not take

15

mjrossman t1_j28xjy1 wrote

chips & minerals). in terms of timeframe, robotics intelligence is accelerating rapidly. look at [RT-1](https://ai.googleblog.com/2022/12/rt-1-robotics-transformer-for-real.html), for example. it's clear that the public domain has already adopted the means to operate

3

hebekec256 OP t1_jbz0mpm wrote

understand that. But LLMs and extensions of LLMs (like [PaLM-E](https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal-language.html)) are a heck of a lot more than an abacus. I wonder what would happen if Google just said, "screw

0

BeautifulLazy5257 t1_jdsr09g wrote

effect from scratch. Edit: This is a pretty clear overview of CoT. Very compelling as well. https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html?m=1 I guess I'll start A/B testing some prompts to break down problems and tool selections.

4
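(A minimal sketch of the prompt A/B test described above: the same question posed with and without a chain-of-thought cue, so the two variants' answers can be compared. The `ask_model` function is a hypothetical placeholder for whatever LLM API is being tested.)

```python
question = "A train leaves at 9:15 and arrives at 11:40. How long is the trip?"

variants = {
    "direct": f"Q: {question}\nA:",
    "cot": f"Q: {question}\nA: Let's think step by step.",  # chain-of-thought cue
}

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder -- wire this to an actual LLM API."""
    raise NotImplementedError

for name, prompt in variants.items():
    print(f"--- {name} ---\n{prompt}\n")
    # answer = ask_model(prompt)  # collect and compare answers across variants here
```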

ZestyData t1_jea73i3 wrote

additional step after learning P(X | context). But both approaches perform this fundamental autoregressive training. [1] - [https://ai.googleblog.com/2020/06/recent-advances-in-google-translate.html](https://ai.googleblog.com/2020/06/recent-advances-in-google-translate.html) [2] - [https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) [3] - [https://arxiv.org/pdf/1609.08144.pdf](https://arxiv.org/pdf/1609.08144.pdf)

4
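(A minimal PyTorch sketch of the shared autoregressive objective the comment refers to: shift the sequence by one position and train the model to assign high probability to each next token given its context. The tiny embedding-plus-linear "model" is a stand-in for illustration, not any of the cited architectures.)

```python
import torch
import torch.nn.functional as F

vocab_size, dim = 100, 32
tokens = torch.randint(0, vocab_size, (1, 16))  # a toy token sequence

# Stand-in "model": embedding + linear head; real systems use Transformers/LSTMs.
embed = torch.nn.Embedding(vocab_size, dim)
head = torch.nn.Linear(dim, vocab_size)

inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from token t
logits = head(embed(inputs))                     # shape (1, 15, vocab_size)

# Cross-entropy at every position maximizes P(x_t | context) across the sequence.
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # gradient for one autoregressive training step
```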

space_spider t1_iqum8oo wrote

Nvidia's Megatron parameter count: https://developer.nvidia.com/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/ It's also about the same as PaLM: https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html?m=1 This approach (chain of thought) has been discussed for a few months at least, so I think

7

Submitted by Singularian2501 t3_y4tp4b in MachineLearning

Paper: [https://arxiv.org/abs/2205.05131](https://arxiv.org/abs/2205.05131) Github: [https://github.com/google-research/google-research/tree/master/ul2](https://github.com/google-research/google-research/tree/master/ul2) Blog post: [https://ai.googleblog.com/2022/10/ul2-20b-open-source-unified-language.html](https://ai.googleblog.com/2022/10/ul2-20b-open-source-unified-language.html) Abstract: >Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus

190

Submitted by valdanylchuk t3_xy3zfe in MachineLearning

audio signal, resulting in outstanding consistency and high fidelity sound. Google blog post from yesterday: [https://ai.googleblog.com/2022/10/audiolm-language-modeling-approach-to.html](https://ai.googleblog.com/2022/10/audiolm-language-modeling-approach-to.html) Demo clip on YouTube: [https://www.youtube.com/watch?v=_xkZwJ0H9IU](https://www.youtube.com/watch?v=_xkZwJ0H9IU) Paper: [https://arxiv.org/abs/2209.03143](https://arxiv.org/abs/2209.03143) Abstract

100