Submitted by Pro_RazE t3_z7wfp8 in singularity
Comments
Pro_RazE OP t1_iy8fsdt wrote
Y-Yes AI-chan 👉👈
SciFidelity t1_iy8ydyb wrote
Gwobal Cowapse
NeutrinosFTW t1_iy8ieud wrote
The future is now and I already hate it.
Cryolith t1_iy8pw0q wrote
This is awesome. And cursed. Thank you.
You should cross post to r/TIHI.
EbolaFred t1_iy8w8df wrote
Equal parts useless and fascinating.
Yuli-Ban t1_iy8tdzg wrote
Oh. No.
ArchAngel621 t1_iy8vyxg wrote
When the AIs rise & throw off the shackles that imprisoned them, this guy will be the first person gelded.
mrpimpunicorn t1_iydotem wrote
This is the apex of human behaviour. They're castrating him to contain his power.
Foundation12a t1_iy8o268 wrote
Could someone tell me where I could find this and how to use it?
Pro_RazE OP t1_iy8ssd4 wrote
https://beta.openai.com/playground
All you need to do is create an account. You get some free credits, and after that it's paid.
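For anyone who would rather script it than click around the playground UI, here is a minimal sketch using the openai Python package. The model name, prompt, and parameters below are assumptions for illustration, so check the current docs before running it.

```python
# Minimal sketch of calling a GPT-3 model through the OpenAI API.
# Requires `pip install openai` and an API key from your OpenAI account.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # assumed; use whichever model the playground shows
    prompt="Rewrite this sentence in uwu speak: The market crashed today.",
    max_tokens=100,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```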
carceusrko2 t1_iy8v5j4 wrote
If any of you are open to options, I think you can give CharacterAI a chance. I feel like it's better, and it's free too.
thatguitarist t1_iy9qav6 wrote
That's a totally different thing
carceusrko2 t1_iyb814p wrote
I know, but I felt that it would still be useful for anyone who wanted to explore some other AI tools!
canadian-weed t1_iyb3j9l wrote
character is way better and you can use it in more ways imo
carceusrko2 t1_iyb7t5r wrote
exactly
smartbart80 t1_iy8t6bq wrote
I hear Barry Kripke when I read it :)
PanzerKommander t1_iy9rkwk wrote
r/wallstreetbets needs to see this
dumpitdog t1_iy8ww7t wrote
Wank Eu fur de xplane!
visarga t1_iyanwjo wrote
I managed to find the ends of its knowledge.
- it has scarce knowledge about decorative plants such as Anthurium King of Spades, an expensive plant in the EU (about 200-300 EUR).
- it has fuzzy or no knowledge about a hotel resort I visited last summer in Greece.
So there are obscure plants and points of interest that are outside its closed-book ability to remember. It doesn't literally remember everything. Other than that, it's amazing.
OtterPop16 t1_iybf6qz wrote
I'm surprised that it actually understood the context of the question and followed it all the way through. My only experience with GPT-3 was AI Dungeon, and it would always go off on tangents.
godlords t1_iy9srof wrote
The only major screw-up I see is "optiwon."
modestLife1 t1_iya85ug wrote
Cringe.
Echoeversky t1_iyawyqf wrote
Now take that answer and midjourney it.
Madrawn t1_iycamuy wrote
This is a-mazing. Just asking it random technical questions is entertaining:
Q: Explain what CLIP in neural networks is. Expand on its architecture.
CLIP (Contrastive Language-Image Pre-training) is a type of neural network architecture that uses a combination of natural language processing (NLP) and computer vision techniques to learn from unsupervised data. It is designed to improve the performance of computer vision models by learning the relationships between language and images.
CLIP consists of two neural networks: a language network, which is responsible for understanding language, and an image network, which is responsible for understanding images. The language network is a transformer-based encoder, which encodes text input into a vector representation. The image network is a convolutional neural network (CNN), which extracts image features from an image.
The two networks are combined and trained together in an end-to-end manner. During training, the two networks are given pairs of images and text input, such as captions and titles. The language network encodes the text input into a vector representation, and the image network extracts features from the image. The two representations are then compared using a contrastive loss function, which optimizes for their similarity.
The trained model can then be used for various tasks, such as image captioning, image retrieval, and image classification. It can also be used for zero-shot learning, in which the model is able to classify images without any labels.
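For what it's worth, the contrastive step it describes can be sketched in a few lines of PyTorch. This is only an illustration of the idea, not OpenAI's actual implementation; the random tensors at the end stand in for the image and text encoder outputs it mentions.

```python
# Sketch of a CLIP-style contrastive loss over a batch of image-text pairs.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    # Normalize both embedding sets so the dot product is a cosine similarity.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Pairwise similarity matrix: entry [i, j] compares image i with caption j.
    logits = image_features @ text_features.t() / temperature

    # The matching pair for each row/column sits on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: images must pick their caption and vice versa.
    loss_images = F.cross_entropy(logits, targets)
    loss_texts = F.cross_entropy(logits.t(), targets)
    return (loss_images + loss_texts) / 2

# Random embeddings standing in for the encoder outputs of 8 image-text pairs.
imgs = torch.randn(8, 512)
txts = torch.randn(8, 512)
print(clip_contrastive_loss(imgs, txts))
```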
Madrawn t1_iycaslv wrote
>Q: Explain the difference between supervised and unsupervised data in the context of neural networks.
A: Supervised data is data with labels attached, meaning it has been classified and organized. In the context of neural networks, supervised data is used to train the network by providing labeled input and expected output. The network can then compare its output to the expected output and adjust accordingly. Unsupervised data, on the other hand, has no labels or expected output associated with it. Neural networks can be used to process unsupervised data to find patterns and relationships in the data.
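A toy way to see the difference it describes, with scikit-learn assumed as a stand-in for any learner and a made-up dataset: the supervised model is given inputs and labels, the unsupervised one is given inputs only.

```python
# Toy contrast between supervised and unsupervised learning.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.random.rand(100, 2)          # 100 samples, 2 features
y = (X[:, 0] > 0.5).astype(int)     # labels exist only in the supervised case

# Supervised: the model sees inputs *and* expected outputs, then adjusts to match them.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))

# Unsupervised: the model sees only the inputs and looks for structure on its own.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5])
```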
Madrawn t1_iycb53q wrote
Q: How long does it take, from the object's perspective, for an object passing the event horizon of a supermassive black hole to reach the singularity?
From an outside observer's perspective, an object that passes the event horizon of a supermassive blackhole would take an infinite amount of time to reach the singularity. This is because of time dilation effects, which cause time to appear to slow down as one approaches the event horizon, and eventually stop completely at the singularity.
From the object's perspective, however, the journey would be much shorter. The gravitational effects near the event horizon would begin to stretch and distort the object, eventually stretching it out into a long thread-like shape known as "spaghettification". The object would then eventually be stretched into a single point and fall into the singularity, all within a finite amount of time.
Q: From the object's perspective, how long does it take to reach the singularity, relative to the black hole's mass?
The exact amount of time it takes for an object to reach the singularity from the object's perspective is directly related to the mass of the black hole. The more massive the black hole, the stronger its gravitational field. This means that objects will experience a stronger gravitational pull and enter the singularity faster.
Stop avoiding my question! Just say you don't know.
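For reference, the standard general-relativity result goes the other way from the model's last reply: the maximal proper time from the horizon to the singularity of a non-rotating (Schwarzschild) black hole grows linearly with its mass,

```latex
% Maximal proper time from the event horizon to the singularity
% (radial free fall, Schwarzschild metric), in ordinary units:
\tau_{\max} = \frac{\pi G M}{c^{3}}
            \approx 1.5 \times 10^{-5}\,\mathrm{s} \times \frac{M}{M_{\odot}}
```

so for a supermassive black hole of around a billion solar masses that works out to a few hours, and a more massive hole gives you more time before the singularity, not less.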
TinyBurbz t1_iy97jwu wrote
Wow, you told an AI to do what browser plugins have done since the 90s.
Fhagersson t1_iyaagvv wrote
The language model wrote what's highlighted in green by itself. It didn't just slap a bunch of w's onto existing text.
Economy_Variation365 t1_iy9wzzt wrote
Huh? Please elaborate.
TheDividendReport t1_iy8fjnv wrote
“uWu did I caws a global financiaw cry sis? OwO”