metal079
metal079 t1_j5h2qjv wrote
Reply to comment by fern2k in Radxa Rock5 Model A is a credit card-sized single-board PC with RK3588S and up to 16GB RAM (starting at $99) by giuliomagnifico
Up to PS1/N64, probably
metal079 t1_j3ajbbb wrote
Reply to comment by Scarlet_pot2 in We need more small groups and individuals trying to build AGI by Scarlet_pot2
>trying different approaches
You realize this takes money? How do we know something works unless we can train a model to test it?
metal079 t1_j380f62 wrote
Least delusional /r/singularity user
metal079 t1_j3803y1 wrote
Reply to comment by gangstasadvocate in We need more small groups and individuals trying to build AGI by Scarlet_pot2
That exists, it's called Petals
metal079 t1_j1r8dps wrote
Reply to comment by dimsycamore in Sam Altman Confirms GPT 4 release in 2023 by Neurogence
I swear this sub sometimes...
metal079 t1_j0wnreb wrote
Reply to comment by Phoenix5869 in Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
It feels like that sometimes
metal079 t1_j0mo7vk wrote
Reply to comment by imbiandneedmonynow in OpenAI Forecasts $1 Billion in Revenue by 2024 by liquidocelotYT
Microsoft invested a billion into them. They also sell access to GPT and DALL-E.
metal079 t1_j0mltmp wrote
Reply to comment by Ese_Americano in OpenAI Forecasts $1 Billion in Revenue by 2024 by liquidocelotYT
You can't, it isn't public
metal079 t1_j055is5 wrote
Reply to comment by Cheap_Meeting in [P] LORA Dreambooth - fine-tune Stable diffusion models twice as faster than Dreambooth method, smaller model sizes 3-4 MBs by Illustrious_Row_9971
IIRC it's not as good as normal Dreambooth because it can't train the text encoder.
metal079 t1_itj39pz wrote
Reply to comment by cptnobveus in All the Metals We Mined in 2021 in One Visualization by CronicChaos84
Batteries don't use much lithium
metal079 t1_jaeuymi wrote
Reply to comment by AnOnlineHandle in [R] Microsoft introduce Kosmos-1, a Multimodal Large Language Model (MLLM) that can perceive general modalities, learn in context (i.e., few-shot), and follow instructions (i.e., zero-shot) by MysteryInc152
Rule of thumb is VRAM needed = 2 GB per billion parameters, though I recall Pygmalion, which is 6B, says it needs 16 GB of RAM, so it depends.
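That rule of thumb follows from fp16 weights taking 2 bytes per parameter; here's a minimal sketch (the function name and the 2 GB/billion factor are illustrative assumptions, and real usage also depends on activations and context length):

```python
def estimate_vram_gb(billions_of_params: float, gb_per_billion: float = 2.0) -> float:
    """Rough VRAM estimate: ~2 GB per billion parameters (fp16 weights).

    Treat this as a lower bound for inference; reported requirements
    (e.g. ~16 GB for a 6B model) are often higher in practice.
    """
    return billions_of_params * gb_per_billion

print(estimate_vram_gb(6))  # rule-of-thumb estimate for a 6B model: 12.0 GB
```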