RomanRiesen t1_j46ixvh wrote
Reply to comment by chimp73 in [D] Bitter lesson 2.0? by Tea_Pearce
Counterpoint: markets that are small and specialised and require tons of domain knowledge, e.g. training the model on Israeli law in Hebrew.
RomanRiesen t1_izxysue wrote
Reply to comment by onformative in [P] AI project using reinforcement learning to 3D sculpt sculptures by onformative
I think you might have misunderstood me.
I do like the sculptures themselves quite a bit!
But with that striking lighting, the renders look like pieces I could see hanging in my room, which is very rare. I like my walls blank and cold.
RomanRiesen t1_iztz82z wrote
Seconds before opening Reddit I had the idea of trying to create impasto depth maps from photos / diffusion model outputs.
What I came to in the few seconds of given thought was pretty close to the additive approach as far as I can tell.
Unfortunately I don't know the first thing about painting so I'll never implement that.
RomanRiesen t1_iztyj2q wrote
Reply to comment by ReginaldIII in [P] AI project using reinforcement learning to 3D sculpt sculptures by onformative
I'm much more impressed by the rendering of the sculptures (lighting & angles & compositions) than by the statues themselves tbh.
Not that the idea itself isn't also really cool though.
RomanRiesen t1_izreqi1 wrote
Reply to comment by Lampshader in [P] I made a command-line tool that explains your errors using ChatGPT (link in comments) by jsonathan
Yeah, true lol
Python errors are mostly good enough.
RomanRiesen t1_izp9ejs wrote
Reply to comment by robot_lives_matter in [P] I made a command-line tool that explains your errors using ChatGPT (link in comments) by jsonathan
If you add "be as concise as possible" it cuts out a lot of the noise, but that is annoying to add every time. Thanks to the great retention, though, you can just say "for all following answers, be as concise as possible". All we need now is a .chatgptrc file to add all the "global" prompts we want lol
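A minimal sketch of that hypothetical `.chatgptrc` idea: the file name, its location, and the `build_prompt` helper are all assumptions for illustration (no such file or API exists); the sketch just prepends each non-comment line of the rc file as a standing instruction before the user's prompt.

```python
from pathlib import Path


def build_prompt(user_prompt: str, rc_path: Path) -> str:
    """Prepend 'global' instructions from a hypothetical rc file to a prompt."""
    if rc_path.exists():
        # Treat each non-empty, non-comment line as a standing instruction.
        rules = [
            line.strip()
            for line in rc_path.read_text().splitlines()
            if line.strip() and not line.lstrip().startswith("#")
        ]
        if rules:
            return "\n".join(rules) + "\n\n" + user_prompt
    return user_prompt
```

With a `~/.chatgptrc` containing `Be as concise as possible.`, every prompt sent through `build_prompt` would carry that instruction automatically, no retyping needed.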
RomanRiesen t1_izp8la9 wrote
Reply to comment by senobrd in [P] I made a command-line tool that explains your errors using ChatGPT (link in comments) by jsonathan
No. The whole ChatGPT/GPT-3.5 family builds on code-davinci-002 (which may be the model tuned for Copilot, though I don't think that has been said publicly).
So any prompt to ChatGPT is a prompt to a differently fine-tuned version of Copilot (or something Copilot-like).
RomanRiesen t1_izkj431 wrote
Reply to comment by Competitive-Rub-1958 in [R] Large language models are not zero-shot communicators by mrx-ai
I was about to write "neither title nor abstract manage to 1-shot communicate their ideas or research to me" but it felt mean so I didn't. Also haven't read the paper yet.
RomanRiesen t1_iyw8l9k wrote
Reply to comment by PromiseChain in [D] OpenAI’s ChatGPT is unbelievable good in telling stories! by Far_Pineapple770
That's quite funny.
RomanRiesen t1_ixpundj wrote
Reply to comment by Amazing-Panda-5323 in Which book installed a new fear in you? by confrita
> It made me fearful of losing my free will
Neuroscience be like: welp, you can't lose what you don't have.
RomanRiesen t1_ixpuhl1 wrote
Reply to comment by lazyprettyart in Which book installed a new fear in you? by confrita
I don't know why exactly, but this made me laugh out loud.
RomanRiesen t1_ityoci9 wrote
Reply to comment by FuturamaMemes in [Image] My Fortune Today by jjwinc68
Fathom by fathom you will not have an anthem
RomanRiesen t1_j6tqunu wrote
Reply to comment by [deleted] in [D] What does a DL role look like in ten years? by PassingTumbleweed
That quote is unreadable.
Bet I could ask chatgpt to improve it though lol