3SquirrelsinaCoat

3SquirrelsinaCoat t1_je4qpxm wrote

There are a few sides to it. Plenty of leading AI people have been increasingly talking about the ethics of AI, not in terms of "should we or shouldn't we use AI," but instead, how do we use it in a way that doesn't lead to a bunch of unintended consequences. That's a very fuzzy, unclear area until you put some concrete stuff around it, which is AI governance. Governance takes AI innovation from the equivalent of three drunk guys flying down the highway in a Porsche at 150 mph and turns it into three drunk guys being driven in an Uber at a safe speed. It puts guardrails around the whole thing, bringing more people to the table, getting more input - it changes things from AI engineers doing their thing in a vacuum to an organization doing something together, and when you take that approach, you are much better positioned to avoid the harms. This was true of just your run-of-the-mill machine learning a couple of years ago. GPT and its friends are different, and what governance looks like for them is new.

So one idea of that letter on GPT-4 is a call for businesses to pump the brakes and ensure all this AI innovation is governed. I don't know that that came through clearly enough, but I imagine part of the audience got it.

The second idea of the letter is a call to governments to set independent guardrails (i.e., regulations) to guide this maturing tech. For that, I believe the scientific term is "absolutely fucking unrealistic" in 6 months. Shit, that won't even happen in 2 years of meetings and rulemaking. Just look at where we were with GPT in January. Government bodies have zero hope of passing regulations in a timeframe where they will be meaningful. It's why it was so fucking reckless for OpenAI and some others to just throw this shit into the wild with their fingers crossed.

Now the cat is out of the bag, governments can't do anything in time (even if the regulators understood this stuff, and they don't), which means the onus to "stop" falls entirely on the shoulders of organizations that lack the governance structures to manage it. It's all fucked, man. AI philosophers don't have much to add here in terms of actually doing something. The problem is immediately action-oriented, not idea-oriented. We've got the ideas; many organizations lack the ability to implement them.

That's my two cents anyway.

8

3SquirrelsinaCoat t1_je0xlzh wrote

If black holes (of any size) are local "objects" (in as much as they are in one spot, not all spots), then logically the unknown energy that causes expansion in all places at the same time at the same rate (to our knowledge) cannot come from an astronomical body that exists in only one place.

Plus, as u/Chadmartigan says, we actually understand Hawking radiation. Dark energy we do not understand at all.

2

3SquirrelsinaCoat t1_jdqqnux wrote

So long as we talk about AI using words and concepts typically applied only to living things, I think there's truth in what you say, but maybe for different reasons.

Of course AI does not experience anything, but the way we talk about it sometimes suggests that it is experiencing. We use words like "think" and "learn." We say "it told me X" or "it discovered X." Then we add conversational AI to give it a personality, and we give it a voice through text-to-audio. Robots are often humanoid. And all of that comes before the people who don't understand this technology at all rush in and perceive an AI self, because they lack the technical knowledge to know it isn't so.

We are definitely on a trajectory to treat AI as if it is autonomous and "deserving" of rights, but that's not because AI is becoming so sophisticated that it justifies that. Instead, because it is becoming so sophisticated and because we talk about it using human-specific verbs, I do think a large portion of end users will simply view AI as human-like, regardless of the truth of it. That is, AI rights will grow out of ignorance and humans anthropomorphizing inanimate computations.

We can change this. If the AI field started purposefully rejecting human-specific verbs, if journalists stopped being so superficial and dumbing it down, if we could improve social media conversations where ignorant people so often proclaim that AI is sentient, and if government bodies codified how the law views AI - that it is neither human nor deserving of any legal status beyond technology regulation - if we did all that, we could get people on the same page about what AI is and how it works. But I'm not holding my breath.

0

3SquirrelsinaCoat t1_jdmpoui wrote

Probably for some things, but currently a family doctor's schedule is insane, just rushing from one room to the next, talking to you for 10 minutes, and then they're out. If at-home diagnostics become widespread (and I agree, I think they will, and it's already happening), then the doctor has less to sort through. Little Jimmy with a cough doesn't need to come in because the at-home diagnostics say, "it's just some allergies." That's one fewer patient for the doctor to see, so they can spend more time dealing with higher-level health problems.

There's a concept in medicine called "operating at the top of your license." That is, the MD should be spending most of their time on the really tough cases and not wasting their deep knowledge on Little Jimmy's cough. It's one of the lines that gets trumpeted a bunch - AI liberates you to focus on more meaningful work. That's true. It's also code for lower-level job replacement. Family doctors are going to need fewer nurses and physician assistants.

6

3SquirrelsinaCoat t1_jdmo0v3 wrote

Aside from task automation and what that means for jobs, the first losers will be people who are just starting their careers. When you're starting out, you don't know shit. Even with a college degree or two, you don't know anything. There's a lot about a career that you can only learn by doing.

So what happens when the lowest-level tasks are taken care of by AI, and all you really need is someone with experience to validate the outputs? Take copywriting. You can easily use prompts to churn out copy, but it won't be perfect. It will miss some key phrasing, it might include points that don't need to be there, and maybe there are additional marketing messages to weave in. But on the whole, the drafting part of the writing is automated.

Now, if I'm the business leader, I don't want some very junior person validating those outputs. They don't know what to look for. They probably couldn't even write it as well as the AI. If I'm a business, I don't need junior people; I just need one or two experts.

The consequence is that getting into a career and earning your place is going to get very difficult. If you're in high school or college right now, the way I started my career and the way you're going to start yours are really different. I don't know yet how we will overcome this as a society. If you remove opportunities to learn, then humans will perpetually lose skills as they are automated. How do you become a copywriter if no one needs you and your newly minted bachelor's in communications? How do you become an expert without experience? That's going to be a huge issue going forward, and I don't know of anyone with a real answer for it.

11

3SquirrelsinaCoat t1_jdjme5l wrote

I'm a huge fan of Rocket Lab. I'll admit it. Some people are SpaceX diehards. I really want to see Rocket Lab thrive, and I expect they will. They can absolutely compete in the small sat market, and their Photon spacecraft is more or less unique in the commercial market. Even SpaceX doesn't have that. Starship can land a lot of mass on orbiting bodies, amazing. Photon can deliver science experiments to other planets at a price far less than a space agency. Also amazing.

What a time to be alive. How cool, how dramatic and enthralling.

13

3SquirrelsinaCoat t1_jd7jwbe wrote

The light sail carrying a wafer-thin sensor package is not meant to come back. One of the biggest challenges is figuring out how to slow it down as it approaches Alpha Centauri. What will most likely happen is the probe just zips through the system, grabs whatever data it can, and sends it back to Earth before flying on into nothingness. The alternative - slowing it down, redirecting it back toward Earth with orbital maneuvers, speeding it back up, and then having it actually reach us (not just zip through our system) - technically, we're not even close to that as a spacefaring species.

19

3SquirrelsinaCoat t1_jd4i7cp wrote

ispace is fuckin awesome. I've had great conversations with some of the people there - they are not fucking around. On their development timeline, this landing is important, but they've been working on tech for future missions for the last couple of years already. Batteries are a big item for their long-term plan, which isn't just landing. Once it can land reliably and its rover batteries can survive lunar night, this becomes a science platform that gives lunar access to any paying customer. The Moon economy is about to break wide open, and ispace will be the ones to cut the ribbon, I'm sure of it.

80

3SquirrelsinaCoat t1_jcv9vwn wrote

To find Neanderthal sites, we imagine their activity, look in logical areas, and then get very lucky. In 60,000 years, if human data is lost at some point (which becomes a greater risk as knowledge is digitized), future anthropologists with slim details about our civilization might only know, "They went to the Moon. We suspect in these areas. Let's see what we can find." And then they get lucky and find this guy with the hieroglyphics plate nearby.

6

3SquirrelsinaCoat t1_jcuzplg wrote

If you think about the artifacts we have from ancient human history, the stuff that survives is small and blunt. An arrowhead, a shell trinket, even a little carved doll. Imagine historians 60,000 years from now. Will they remember what we did, or is this one of the things that will remain and they'll wonder what we meant by it?

8

3SquirrelsinaCoat t1_jc3dd42 wrote

Think tank papers serve a good role, and many of the points the author lays out are valid - most of them, actually. However, she's not really adding anything to what is already an ongoing discussion across many industries on precisely the points she makes. Big picture, yeah, she's bang on. More immediate picture: what does a "reset" look like? How does one concoct a reset? Get all the Fortune 500 together and ask them to sign a pledge? Move a piece of legislation through Congress? A year from now, we're going to have much more powerful versions of what is today already powerful. Laws are not equipped to address it (and won't get passed anyway); industry guidelines are only as good as a company's word.

So part of my reaction to this article, which again is good, is, "yeah, and? What's your plan?" Just repeating what everyone else is already discussing, even if eloquently phrased, comes up a bit short for a think tank.

3

3SquirrelsinaCoat t1_jaeurbg wrote

>Of course I can tell it to say anything— that’s what it does.

No, that's not what it does. I'm leaving it there. I thought you had a better understanding of this stuff.

−7

3SquirrelsinaCoat t1_jaetk90 wrote

There have been plenty of demonstrations of that tool being steered into phrasing that is uniquely human. The NY Mag reporter, or someone like that, duped it into talking relentlessly about how it loved the reporter. Other examples are plentiful, each putting a sense of self in front of the user, because the user, for the most part, does not understand what they are using.

There is a shared sentiment I've seen in the public dialogue, voiced perhaps most famously by that Google guy who was fired for saying he believed a generative chat tool was conscious (that was Google's LaMDA, not ChatGPT) - a narrative that something like ChatGPT is on the verge of AGI, or at least a direct path toward it. And while data scientists or architects or whatever may look at it and think, yeah, I can kind of see that - if it becomes persistent and tailored, that's a kind of AGI - the rest of the world thinks Terminator, HAL, whatever the fuck fiction. And because ChatGPT has this tendency toward humanizing its outputs (which isn't its fault; that's the data it was trained on), there is an implied intellect and existence that the non-technical public perceives as real, and it's not real. It's a byproduct, a fart if you will, that results from other functions that are on their own valuable.

−9

3SquirrelsinaCoat t1_jae3n3f wrote

Arguably, true AGI is a new life form, whether it runs on silicon or meat. I don't believe the current versions of machine learning will lead to AGI for a few reasons, but one of them is energy. If we get better energy efficiency (and maybe it scales, idk), then we can go full steam toward AGI because a huge hurdle is removed. But if we could somehow remove that hurdle and build AGI using our existing tools, I would still class it as closer to life than to machine. The autonomy of thought and a real desire to exist (not a pretend one like what is farted out by the Puppet Known as ChatGPT) is evidence of life - but that's me.

29

3SquirrelsinaCoat t1_jacoxz2 wrote

The difference in energy consumption is a big selling point, if the theories turn into reality. It takes only about 12 watts to power a human brain, which is jaw-dropping efficiency, particularly compared to the energy required for machine learning training. If energy efficiency is an inherent part of OI, this would be a huge step forward and possibly a viable platform for real AGI.
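For a rough sense of the gap, here's a back-of-envelope sketch using commonly cited ballpark numbers: the ~12 W brain figure above and a widely reported ~1,300 MWh estimate for a single GPT-3-scale training run. Both are approximations, not measured values, and the 30-year window is arbitrary - this is just to show the order of magnitude.

```python
# Back-of-envelope only: both figures below are rough public estimates.
BRAIN_WATTS = 12            # often-cited power draw of a human brain
YEARS_OF_THINKING = 30      # arbitrary comparison window
GPT3_TRAIN_MWH = 1287       # widely reported estimate for one GPT-3 training run

# Watt-hours over the window, converted to megawatt-hours.
brain_mwh = BRAIN_WATTS * 24 * 365 * YEARS_OF_THINKING / 1_000_000

print(f"Brain, {YEARS_OF_THINKING} years of running: ~{brain_mwh:.1f} MWh")
print(f"One GPT-3-scale training run: ~{GPT3_TRAIN_MWH} MWh")
print(f"Ratio: roughly {GPT3_TRAIN_MWH / brain_mwh:.0f}x")
```

Even with generous error bars on both numbers, you're looking at a gap of two or three orders of magnitude, which is the point.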

114

3SquirrelsinaCoat t1_ja8wgp0 wrote

The challenge with all colonization is motivation. In theory, your idea makes sense. But that's a long way to go unless colonization is heavily motivated. To build so far away, even with future tech that lets us get there in, say, a couple of months, the builders would need a huge reason to go for it. I'm thinking Earth becoming uninhabitable, or certain groups being under threat if they remain on Earth. I cannot think of a grand reason why we would go so far unless Venus cloud cities are proven useless, Mars isn't workable, and the Moon is for whatever reason off the table.

But if we're building orbital stations, then what does it matter where we put them? And if gravity isn't a factor for an orbital station, why do we care about the surface gravity of the body we are orbiting?

I like your idea a lot - cool premise for a story - but I can't think of a reason it would ever come to pass.

25

3SquirrelsinaCoat t1_j9pkps6 wrote

With enough time and ink and paper, you could write down an AI. Do you give rights to a stack of math problems?

Yeah but the emergence, cause it's emerging, the room knows how to speak Chinese, it told me it loved me, this is the AGI revolution the movies promised us...

Nonsense. It's just fucking math, people.

Edit: Take this gem from the article - and the expert, by the way, is a professor of media studies, not AI.

>These are rights related to these personal delivery robots, giving the robot the rights and responsibilities of a pedestrian when it’s in the crosswalk. Now we’re not giving it the right to vote, we’re not giving it the right to life. We’re just saying when there’s a conflict in a crosswalk between who has the right of way, we recognize the robot functions as a pedestrian. Therefore, the law recognizes that as having the same rights and responsibilities that a human pedestrian would have in the same circumstances.

So stupid. Those are property rights granted to the owner of the robot. The robot itself has no rights. The company has the right of way, like a pedestrian, and that's what the law recognizes. This guy is just going to add more confusion to a topic most people already misunderstand.

3

3SquirrelsinaCoat t1_j9d4l6r wrote

I can imagine scenarios. Say we're building a new jet engine. Prototyping is expensive, so right away we're iterating with a digital twin. Currently that's done through 2D interfaces, maybe augmented reality at best, and nonstop video conferences. That is ripe for improvement. A jet engine means a large engineering team with global assets, depending on which part of the engine is being developed at any one time. And instead of a bunch of engineers standing over an actual piece of machinery, or using computers and talking over the phone, they are in a perfect duplication of a real-world lab, except that when they make a mistake or drop something or whatever, it doesn't matter, and it also doesn't matter where in the world anyone is.

That's still a little bit ahead of us, but not by much. It's a valid and valuable use case for, idk, call it next-gen engineering. That's one hypothetical where a "metaverse" (which is just a 3D environment with extra sensors) is useful, bringing together AI, VR, advanced computing, haptics, all of it, into a new way of working. That makes sense to me.

What doesn't make sense is asking someone to pay for the experience. Large companies can afford this shit, and if there are breakthrough innovations, I think they will come from the industrial space, funded entirely by R&D.

1

3SquirrelsinaCoat t1_j9c79k0 wrote

>I think it's a solution in search of a problem, really.

That's really well put. An industrial metaverse/collection of virtual worlds could be huge for innovation, iteration, safety training, etc. It's not like those things aren't possible now, but if there's an angle worth a damn, it won't be commercializing the experience. The economic benefit should come from whatever happens in the metaverse that gets exported to the real world. The reverse is going to fail. "Come to our metaverse and enjoy our entertainment and blah blah blah" - nobody is paying for that because it is just a novelty. But if you could create something in the metaverse, experiment with it, refine it, meet with others in a 3D space, and then export the final product (whether it's a sales thing, a product, a new service), then you can make money, because it does not require anyone to buy VR headsets and look at shitty avatars.

2