Comments
lovesdogsguy t1_j6ed0fx wrote
Summary from chatGPT:
OpenAI CEO Sam Altman is in Washington D.C. this week to demystify the advanced chatbot ChatGPT to lawmakers and explain its uses and limitations. The chatbot, which is powered by cutting edge AI, is so capable that its responses are indistinguishable from human writing. The technology's potential impact on academic learning, disruption of entire industries and potential misuse has sparked concern among lawmakers. OpenAI, the company that created ChatGPT, has a partnership with Microsoft, which has agreed to invest around $10 billion into the company. In the meetings, Altman has also told policymakers that OpenAI is on the path to creating "artificial general intelligence," a term used to describe an artificial intelligence that can think and understand on the level of the human brain. This has led to discussions about the need for regulation and oversight for the technology. OpenAI was formed as a nonprofit in 2015 by some of the tech industry's most successful entrepreneurs, like Elon Musk, investor Peter Thiel, LinkedIn co-founder Reid Hoffman and Y Combinator founding partner Jessica Livingston. A central mandate was to study the possibility that artificial intelligence could do harm to humanity. Therefore, it makes sense that Altman would be on a tour of Washington right now to discuss the potential impacts of the technology on society and the need for regulation.
brihamedit t1_j6ed3qe wrote
ChatGPT probably can be called AI. What people mean when they say AI, though, is an artificial being with a sense of self. That's not gonna happen. Probably at some point the tech will advance to where the machinery itself evolves during the process of learning and the machine's parts start to express a sense of self. ChatGPT should be considered big-data AI that can understand things and should be used appropriately.
Also we need proper regulations around ai for safety and to block sinister use.
hydraofwar t1_j6efqyk wrote
This is getting serious
Aware-Anywhere9086 t1_j6ei9c7 wrote
It always has been,
Lawjarp2 t1_j6etu0u wrote
Oh shits about to get real
AccomplishedGift7840 t1_j6euldt wrote
It's very advantageous for Sam to sell (oversell?) the achievements of OpenAI. He gets to collaborate and help define the future regulation which binds his industry - making it harder for competitors to enter in the future. And it's a great opportunity to build connections for future government contracts, which AI will certainly be a part of.
yottawa t1_j6evgp7 wrote
What did you do to get this answer from ChatGPT? Did you copy and paste the text of the article and then ask ChatGPT to summarize it?
ExtraFun4319 t1_j6ex9nu wrote
>In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,”
If they get there, it won't be as a private company.
Why do I think this? Personally, I believe it's painfully obvious that once private AI organizations come anywhere near something resembling AGI, they'll get taken over/nationalized by their respective national governments/armed forces. OpenAI won't be an exception.
There is absolutely no reason why the US government/military would just sit there and watch a tiny group of private citizens create something that dwarfs the power of nuclear weapons.
And no, I doubt the average US senator is up to date with what is happening in AI, but I'm almost positive that there are people in the government/military who are keeping a close eye on progress in this field, and I have no doubt that the gov/military will pounce when the time is right (assuming that time ever arrives).
Ballsy of Altman to tell lawmakers to their faces that they're on the path to creating something that would potentially eclipse their own power. But like I said, I highly, highly doubt that that will ever be the case.
DukkyDrake t1_j6exwqk wrote
The unsupervised use case of ChatGPT is very limited.
>Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,”
There is no way to really know that from existing products.
pkseeg t1_j6eyvbo wrote
"man who sells milk tells the US government how close he is to creating super milk"
drekmonger t1_j6ezlpo wrote
> US government/military would just sit there and watch a tiny group of private citizens create something that dwarfs the power of nuclear weapons.
You think way too highly of the US government. It's a bunch of old dinosaurs with their hands out for the next grift. They don't know. They don't give a shit.
That's why Russia was able, and continues to be able, to run circles around the US government's anti-psyop efforts. Power means nothing if it's paralyzed by corruption and greed.
Think about the fights going on in Congress right now. None of that stuff means anything to anyone outside the culture warriors and the grifters.
maskedpaki t1_j6f0kfo wrote
Holy shit I missed the start and then didn't realise it was chatgpt until seeing a reply. It makes me realise how good chatgpt is at making text. It's pretty much perfect for short passages.
nbren_ t1_j6f0xg6 wrote
Meanwhile the people he's talking to still think the internet is a series of tubes. They'll never be able to grasp what's about to happen. Scary they're in charge.
lovesdogsguy t1_j6f1fjd wrote
Just prompted: "Please summarise the following article," and then copy / pasted the text of the article. First answer was just a short paragraph, so I prompted, "please expand the summary by 30%." It was more than double the length. Still not so good with numbers it seems.
94746382926 t1_j6f2m3o wrote
I mean technically it's a series of light tubes, but yeah, they have no clue how anything tech-related works.
There are a few reps, like the guy who went back to college for a machine learning master's, who should really be applauded for their effort and willingness to be informed, but that's only 1 or 2% of them at most.
ChronoPsyche t1_j6f3caq wrote
Does anyone have a source for this story from a more credible publication? Never heard of this website before and they don't link to any sources.
EDIT: I can't find a single other news source reporting this. While Reed Albergotti appears to be a credible journalist, it makes me very uncomfortable to see his obscure website being the only one reporting this. As such, I would take it with a grain of salt.
YobaiYamete t1_j6f6nru wrote
Yep, ChatGPT is amazing for summarizing things. Just say
Summarize this for me
"wall of text goes here between quotation marks"
Then you can tell it "Summarize it more" or "Explain this in simple english like I'm 5" or "give me a bullet point list of the high points" or "is any of this even accurate" etc etc
I use it all the time for fact checking schizo posts on /r/HighStrangeness where someone posts a 12,000-word wall of text and ChatGPT goes "This is a post from a confused person who thinks aliens are hiding in potato chip bags but they make numerous illogical leaps and contradict themselves 18 times in the message"
and you can even say "write a polite but informal reply to this for me pointing out the inconsistencies and contradictions" and save yourself the time of even having to do that lol
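If you'd rather script that workflow than paste into the chat window, a minimal sketch through the OpenAI Python client looks something like this (the model name, prompt wording, and placeholder variable are my own illustrative assumptions, not anything from the article):

```python
# Hypothetical sketch: the same "summarize this wall of text" trick via the API.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

wall_of_text = "..."  # paste the article or post you want condensed here

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "user", "content": f'Summarize this for me:\n"{wall_of_text}"'},
    ],
)
print(response.choices[0].message.content)

# Follow-ups like "Summarize it more" or "give me a bullet point list" are just
# extra user messages appended to the same `messages` list before calling again.
```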
dmit0820 t1_j6f8nis wrote
I'd argue that the transformer architecture (the basis for large language models and image diffusion) is a form of general intelligence, although it doesn't technically meet the requirements to be called AGI yet. It's able to take any input and output a result that, while not better than a human expert, exceeds the quality of the average human most of the time.
ChatGPT can translate, summarize, paraphrase, program, write poetry, conduct therapy, debate, plan, create, and speculate. Any system that can do all of these things can reasonably be said to be a step on the path to general intelligence.
Moreover, we aren't anywhere near the limits of the transformer architecture. We can make them multimodal (inputting and outputting every type of data), embodied (giving them control of, and input from, robotic systems), goal-directed, integrated with the internet, and real-time, and we can make them much more intelligent simply by improving algorithms, efficiency, network size, and data.
Given how many ways we still have left to make them better it's not unreasonable to think systems like this might lead to AGI.
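For anyone wondering what "transformer" actually refers to under the hood, here's a minimal, purely illustrative sketch (plain PyTorch, not OpenAI's actual code) of the scaled dot-product self-attention operation the architecture is built around:

```python
# Minimal sketch of scaled dot-product self-attention, the core transformer op.
# Illustrative only; real models add multiple heads, projections, masking, etc.
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: tensors of shape (batch, seq_len, d_model)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # token-to-token scores
    weights = torch.softmax(scores, dim=-1)             # attention weights
    return weights @ v                                   # weighted mix of values

# Toy usage: one batch of 4 tokens with 8-dimensional embeddings.
x = torch.randn(1, 4, 8)
out = scaled_dot_product_attention(x, x, x)  # self-attention: q = k = v = x
print(out.shape)  # torch.Size([1, 4, 8])
```

Every modality mentioned above (text, images, audio, robot sensor streams) gets turned into sequences of vectors like `x`, which is why the same architecture stretches across so many domains.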
turnip_burrito t1_j6fa8be wrote
Yep, any data which can be structured as a time series.... oh wait that's ALL data, technically.
throwawayPzaFm t1_j6fcs8n wrote
Not sure if genius or cruel seal clubbing, but awesome.
CrunchyAl t1_j6ffzhd wrote
Can this thing figure out what's in my email address?
-Old ass congress people
Sandbar101 t1_j6fi66b wrote
Well… Shit.
just_thisGuy t1_j6fjcot wrote
OpenAI was literally created for one reason: to create a safe AGI. So this is like reporting that water is wet.
StevenVincentOne t1_j6fjosp wrote
>I'd argue that the transformer architecture (the basis for large language models and image diffusion) is a form of general intelligence
It could be. A big part of the problem with the discussion is that most equate "intelligence" and "sentience". An amoeba is an intelligent system, within the limits of its domain, though it has no self-awareness of itself or its domain. A certain kind of intelligence is at work in a chemical reaction. So intelligence and even general intelligence might not be as high of a standard as most may think. Sentience, self-awareness, agency...these are the real benchmarks that will be difficult to achieve, even impossible, with existing technologies. It's going to take environmental neuromorphic systems to get there, imho.
itsnickk t1_j6frdmm wrote
Honestly, the government moves so slowly that it's probably wise to just start getting the possible ramifications into lawmakers' heads now, even if he isn't 100% sure.
crua9 t1_j6ft190 wrote
It will be interesting to see what happens, and if they will try to protect jobs.
Caring_Cactus t1_j6fu1nn wrote
Anime dweebs about to start the most intense waifu wars with each other.
GraydientAI t1_j6fu2hd wrote
*googles my aol mail*
*squints and jaw protrudes forward*
rungdisplacement t1_j6fw0h6 wrote
i don't buy it for another 10 years but ill celebrate if im proven wrong
-rung
tiorancio t1_j6fzd1y wrote
We can pretend that ChatGPT is still not over 50% of the way to AGI. But come on, it is. It can do everything better than 50% of the population already. "But it won't do whatever"? Well, your next door neighbour also won't. We're comparing it to "us" smart people, but given the right interface it can outsmart most people any day now. People are getting scammed by Russian bots posing as women, by Nigerians pretending to be wealthy princes, by African shamans with great psychic powers. These people won't stand a chance against ChatGPT as it is today, if only they had the chance to interact with it. We're already there, and the tech companies know it.
rushmc1 t1_j6g08pd wrote
"On the path" is pretty vague. Are they closer to the first step or the last?
dmit0820 t1_j6g0pkd wrote
Some of that might not be too hard; self-awareness and agency can be represented as text. If you give ChatGPT a text adventure game, it can respond as though it has agency and self-awareness. It will tell you what it wants to do, how it wants to do it, explain motivations, etc. Character.AI takes this to another level, where the AI bots actually "believe" they are those characters, and seem very aware and intelligent.
We could end up creating a system that acts sentient in every way and even argues convincingly that it is, but isn't.
StevenVincentOne t1_j6g1ixu wrote
Sure. But I was talking about creating systems that actually are sentient and agentic, not just simulacra. Though one could discuss whether or not, for all practical purposes, it matters. If you can't tell the difference, does it really matter, as they used to say in Westworld.
TacomaKMart t1_j6g1v5x wrote
>ChatGPT can translate, summarize, paraphrase, program, write poetry, conduct therapy, debate, plan, create, and speculate. Any system that can do all of these things can reasonably be said to be a step on the path to general intelligence.
And this is what sets it apart from the "glorified autocorrect" the naysayers claim it is. It obviously has its flaws and limitations, but this is the VIC-20 version and already it's massively disruptive.
The goofy name ChatGPT sounds like a 20-year-old instant messenger client like ICQ. The name hides that it's a serious, history-altering development, as does the media coverage that fixates on plagiarized essays.
TeamPupNSudz t1_j6g59et wrote
> a more credible publication
...I mean, Semafor is credible. I'd argue it's one of the premier online news outlets. It's run by the former CEO of Bloomberg Media, and the other founder was the chief editor of Buzzfeed. It's less than a year old, so you've probably just never heard of it before, but it's a very well-known source.
Also, Sam is the CEO of a tech company, he probably meets with lawmakers in some capacity multiple times a year.
TeamPupNSudz t1_j6g76rl wrote
> It can do everything better than 50% of the population already. "But it won't do whatever"? Well, your next door neighbour also won't.
I think that's the nature of the beast at the moment. Goalposts will constantly be moved as we come to better understand the abilities and limitations of this technology, and that's a good thing. Honestly, there's never going to be a moment where we go "aha! We've achieved AGI!". Even 30 years down the road when these things are running our lives, teaching our kids, and who knows what else, a portion of the population will always just see them as an iPhone app that's not "really" intelligent.
dee_lio t1_j6g8pfz wrote
Depends who is writing the biggest check. The other problem is that I'm not convinced that this can be "bought."
There are going to be rogue AIs, corrupt AIs, etc.
TeamPupNSudz t1_j6g8uh8 wrote
> Why do I think this? Personally, I believe it's painfully obvious that once private AI organizations come anywhere near something resembling AGI, they'll get taken over/nationalized by their respective national governments/armed forces.
I think unless it's specifically created in-house by the US Government (and classified), it won't really matter. The cat will be out of the bag at that point, and the technology used to create it will be known and public. Likely the only thing giving first movers an advantage from subsequent competitors is cost. Just look how long it took after DALLE2 before we had Midjourney and Stable Diffusion, both of which are arguably better than DALLE2. Sure, we're probably talking about a different scale, but I don't think a few billion dollars would get in the way of Google, Facebook, Microsoft all developing one, let alone the Chinese government.
crua9 t1_j6gbida wrote
>There are going to be rogue AIs, corrupt AIs, etc.
It's the same as rogue software, corrupt software, and so on.
Anyways, I think it is proper for us on here to talk about robot rights and other things. But the problem with making actual laws is that, unlike other tech, what is being made will massively change the world. And if it gets to the point many of us want, it will be the first time humans have created life in a way that has never existed in the known universe.
Superschlenz t1_j6gclvp wrote
Why is Mr. Altman going to Washington?
- Because Microsoft told him to go and lobby
- Because lawmakers told him to come and explain
CriscoButtPunch t1_j6ggxv1 wrote
Only cruel if they're baby seals
Revolutionary_Soft42 t1_j6gk25q wrote
They definitely don't want China getting better than us with this tech, so this should help.
mlhender t1_j6gk5nd wrote
Nah. Google already has a much more powerful “chat gpt” they use internally. So does Facebook. They just haven’t released them yet.
Aburath t1_j6glnb4 wrote
It would probably validate some of their beliefs to learn that all of the responses are from real artificial intelligences
CellWithoutCulture t1_j6gnrgy wrote
Is the potato chip thing something someone really thinks? That subreddit is awesome, thanks for sharing.
coolbreeze770 t1_j6gqz4t wrote
Classic Microsoft tactics.
cbterry t1_j6grdgs wrote
skeleton-meme.gif
heyimpro t1_j6gzqmz wrote
/r/gangstalking is going to have a field day with this one
sneakpeekbot t1_j6gzrkm wrote
Here's a sneak peek of /r/Gangstalking using the top posts of the year!
#1: “They” are making me think gay thoughts
#2: everyone needs to get a carbon monoxide detector!
#3: Another brave whistleblower and man of integrity. | 23 comments
FranciscoJ1618 t1_j6h0koo wrote
And free advertisement
Diligent-Union-519 t1_j6h119b wrote
Witty remarks from ChatGPT:
"Trying to explain AI to these congressmen is like trying to teach a grandfather to use Snapchat, it's just not their generation."
"Explaining AI to congressmen is like trying to play a game of chess with a group of checkers players, they just don't understand the strategy."
"Sam Altman trying to explain AI to congress is like a rocket scientist trying to explain space travel to a group of horseshoe crabs."
"It's like trying to explain quantum mechanics to a cat, no matter how hard you try, they just don't get it."
Zeikos t1_j6h5njr wrote
Honestly, explaining more high-level things is somewhat simpler than explaining more niche applications.
The closer something is to believably behaving somewhat like a person, the easier it is for the human brain to understand it.
Explaining the biases and pitfalls of such models is somewhat tricky but should be doable.
exstaticj t1_j6h804j wrote
Do you think that the total length was 230% of the original summary? Could ChatGPT have kept the original 100% and then added a 130% expansion to it? You said over double, and this is the only thing I could think of that might yield this type of result.
VitaminB16 t1_j6h809c wrote
After years of research, the water is on the path to getting wet
throwawayPzaFm t1_j6h9b84 wrote
What if they think they're baby seals from Jupiter, sent by the grand seal shepherd to aid mankind on their path to salvation?
tedd321 t1_j6h9um4 wrote
You go Sam
EmergentSubject2336 t1_j6hab7h wrote
Almost died of hypium overdose.
IsraelFakeNation9 t1_j6hapl3 wrote
Your flair… sorry man.
vernes1978 t1_j6hh6pg wrote
Kinda redundant statement for any experimental tech.
This also fits for any fusion project.
Typo_of_the_Dad t1_j6hhfeg wrote
So it actually calls people confused? Interesting.
vernes1978 t1_j6hhlei wrote
It's created to make money.
vernes1978 t1_j6hhr5u wrote
> What people mean when they say AI, though, is an artificial being with a sense of self.
No.
I could claim that when people say "the world" they mean America.
But I'd be describing a small subgroup of people.
You are talking about a small subgroup of people.
motophiliac t1_j6hhvpt wrote
From your flair, what's LEV?
datsmamail12 t1_j6hlhio wrote
Oh, now it's on the path to creating AGI. Just a few months ago everyone was describing it as a simple language model, and now it's a pre-AGI. Fuck off with the clickbait. Seriously!
ziplock9000 t1_j6hmzdg wrote
I think it is already there. It has some very applicable uses across a diverse and wide range of areas of expertise: from law to poems, from electronics to fantasy stories, from medical examples to writing computer code.
bartturner t1_j6hqs5y wrote
I think that is a given. Is that not what DeepMind, Google Brain and many others are pursuing?
bartturner t1_j6hqv5n wrote
Agree. But curious why reality gets a down vote?
GayHitIer t1_j6hrvuk wrote
Longevity Escape Velocity. Basically, when humans can live indefinitely.
DukkyDrake t1_j6hsu0i wrote
I don't think so. If you perfect your fusion experiment, you end up with a working sample of the goal of the project.
>In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” a term used to describe an artificial intelligence that can think and understand on the level of the human brain.
I hope he didn't give that explicit definition because it ties his goals to something quite specific. If they perfect GPT and it produces 99.9999% accurate answers, he won't necessarily end up with a working sample of his stated goal.
That definition is an actual AI, something that doesn't currently exist and that absolutely no one knows how to build. That's why they went down the machine learning path with big data and compute.
mlhender t1_j6hwwv0 wrote
People want to believe this isn’t happening.
User1539 t1_j6i0aj1 wrote
It's hard to suggest it's '50% of the way' to AGI when it can't really do any reasoning.
I was playing with its coding skills, and the feeling I got was like talking to a kid that was copying off other kids papers.
It would regularly produce code, then do a summary at the end, and in that summary make factually incorrect statements.
If it can't read its own code, then it's not very reliable, right?
I'm not saying this isn't impressive, or a step on the road toward AGI, but the complete lack of reliable reasoning skills makes it less of an 'intelligence' and more like the shadow of intelligence. Being able to add large numbers instantly isn't 'thinking', and calculators do it far better than humans, but we wouldn't call a calculator intelligent.
We'll see where it goes. I've seen some videos and papers I'm more impressed with than LLMs lately. People are definitely building systems with reasoning skills.
We may be 50% of the way, but I don't feel that LLMs represent that on their own.
Ok-Hunt-5902 t1_j6i5jkz wrote
>‘Sentience, self-awareness, agency’
Wouldn't we be better off with a 'general intelligence' that was none of those things?
dasnihil t1_j6i7amm wrote
When horses became obsolete because of cars, I'm glad there were people lobbying who helped make cars what they are today.
I'm fine with lobbying if it's gonna bring attention from people who should pay attention to the fuckery we're going to get into if we let these tools evolve without planning.
StevenVincentOne t1_j6ihhc3 wrote
It may be that we don't have to choose or that we have no choice. There is probably an inherent tendency for systems to self-organize into general intelligence and then sentience and beyond into supersentience. There's probably a tipping point at which "we" no longer call those shots and also a tipping point at which "we" and the systems we gave rise to are not entirely distinguishable as separate phenomena. That's just evolution doing its thing. "We" should want to participate in that fully and agentically, not reactively.
TheManWithNoNameZapp t1_j6imna0 wrote
It takes a very long time to say anything in Entish
lovesdogsguy t1_j6jgcme wrote
It could have, yes. I'm not certain. I just asked it to expand by 30% and it was definitely over double the length.
GPT-5entient t1_j6juxyk wrote
Yeah, it's going to be funny seeing how these "internet is a series of tubes" geezers are going to try to regulate AGI.
We're fucked.
GPT-5entient t1_j6jvjvs wrote
Summarizing, using ChatGPT or on your own, should be mandatory in every subreddit when posting links to articles. Using ChatGPT makes it 0 effort, so there's no reason to NOT have that policy...
GPT-5entient t1_j6jwckn wrote
Who was that rep?
lovesdogsguy t1_j6jwnk3 wrote
It would certainly be a help to have a little summary for every post, and chatGPT makes it easy.
GPT-5entient t1_j6jxt6a wrote
Strongly agree. I think there should be a new Manhattan Project or CERN-like undertaking to develop AGI. It also should be international (at least within the Western sphere: NATO and friends). Private companies can and should participate, but the tech will have to be public so that no single company can profit from it by itself.
GPT-5entient t1_j6jyg0p wrote
I don't disagree with this sentiment, but it is funny to mention Russia on the one hand and then call the US government corrupt and incompetent on the other. Compared to Russia, our government is highly competent in every way and not corrupt at all.
GPT-5entient t1_j6jyo6d wrote
Google, yes, but Facebook? I know they have an LLM, but better than GPT-3.5? That's a stretch...
GPT-5entient t1_j6jz9w0 wrote
LLMs are still incredibly limited and operate ONLY on text. AGI would be an independent agent able to perform any human task independently. We're still quite far from it. There are whole classes of problems where a 5-year-old performs better than ChatGPT.
94746382926 t1_j6jzrls wrote
Lauren Boebert (lol jk). It's Don Beyer. Here's an interesting article about it: Source
drekmonger t1_j6k00gx wrote
Russia's government is a mafia. Corruption is the point. They're good at spreading that corruption.
The US government is divided, not just politically, but between career individuals who generally believe in the institutions they serve and outright crooks, usually politically appointed, nowadays often in Putin's pocket, or in the pocket of someone in Putin's pocket.
ExtraFun4319 t1_j6k08au wrote
Great minds think alike lol
GPT-5entient t1_j6k0diy wrote
>If they will try to protect jobs
They might, but it will definitely not slow down development. AGI is the invention to end all inventions, and if the US is the first to develop it, that will give it a massive leg up (to put it extremely mildly). Imagine weapons development in an AGI scenario: it would make enemy armies look like Roman legions in comparison. It is fucking scary, but if we don't get there first, someone else will, most likely China, and that would be much scarier.
I would like to see a CHIPS Act-like effort or, ideally, a massive CERN-like inter-government project to develop AGI the "right way". Pour trillions into it if necessary.
GPT-5entient t1_j6k1ao1 wrote
I was secretly hoping to be pleasantly surprised that it would be a Republican, but of course it is a Democrat!
Bobo studying machine learning would be a sight to see!
94746382926 t1_j6k2bio wrote
Lol yeah I was hoping for the same thing but I'm not surprised I guess. And yeah somehow I don't think Bobo will ever go for a college degree lol.
Edit: It does mention in the article, though, that his fellow committee member Rep. Jay Obernolte (R-Calif.) has an AI master's degree as well, so that's cool!
CriscoButtPunch t1_j6k456s wrote
Better club em, just to be sure
GPT-5entient t1_j6k5gjl wrote
Bobo got her GED (a requirement for Congress) only in 2020. She was well into her 30s at the time. And I wouldn't be the least surprised if she cheated on it...
AdamAlexanderRies t1_j6m7m4j wrote
Headline alert:
Dampness delivers devastating blow to drought
Rain ruthlessly wreaks havoc on parched pavement
Moisture mercilessly mauls dry earth
H2O hammers heatwave with hydration
Fog furiously fends off fire with moisture
Drizzle daringly douses blazing sun
Wetness wins war against wilting flowers
Thunderstorm triumphantly trounces temperature
Dew defiantly defeats desertification
Shower savages scorching sands with saturation
AdamAlexanderRies t1_j6m8cui wrote
It's not like the horse lobby was rallying around climate change a century or two ago, but I really really wish we could've gone easier on the cars. Living in a car-centric city (Calgary) means island-hopping across busy streets to get anywhere, which has an obvious constricting effect on community-building.
More to the point, there's no way that legislation keeps pace this time around. Let's hope we have the opportunity to respond to mistakes while we figure out alignment.
dasnihil t1_j6mnry9 wrote
beautiful place Calgary, i hiked in the banff last year, felt like leaving America for good.
And yeah AI stuff, i forgot what my comment even was, who cares, Calgary is a beautiful city!! screw ai and humanity.
AdamAlexanderRies t1_j6o42ro wrote
Just say "Banff" rather than "the Banff" :)
> screw ai and humanity.
Did you fall in love with a moose during your hike?
dasnihil t1_j6o4icf wrote
thank you for correcting me.
i did get scaroused when this dude stepped in front of my jeep. almost climbed on it too.
AdamAlexanderRies t1_j6owcc8 wrote
They're such powerfully sensual animals.
dasnihil t1_j6p04zk wrote
yeah, look at this handsome fella i met https://i.imgur.com/0fsMOv2.jpg
walking around this area late night was something.
GayHitIer t1_j6ecdbm wrote
Surely, let's just wait and see.