
decrementsf t1_j0l8gz8 wrote

ChatGPT does everything current technology does for us, faster.

You can use technology to learn anything, and turn it into a tool for producing things you already understand 10x faster. With guidance from ChatGPT you can cut down on time spent looking up information you already more or less know and can spot-check easily. Learning a new thing is a process of practice and fast feedback loops. ChatGPT can speed up those loops by checking your work between repetitions. You can learn programming languages faster by getting repetitions in 10x faster on personal projects, shaving off the time burned digging through forums to debug issues.

You may have noticed a limitation of technology: the more specialized the knowledge, the fewer resources are available. Purchase a production-line piece of equipment for your home business and you can find nothing about it online; you have to rely on rare printed documentation from other businesses, or on others with experience in closed networks. ChatGPT is no different. It hits walls when prompted about processes not yet discovered and discussed. You have to guide the tool from first principles, and you can't do that without understanding the tool.

You can also waste time, or over-indulge in fear-and-anger outrage storytelling, consuming sugary junk-food information faster than ever. The resulting algorithm-driven psychosis deserves a deeper discussion. We've never had more tools for spreading bad information at the same time as having better information more accessible than ever. This speeds up divides in humanity, with exterior signals showing in how people spend their time. Can we live with one another? Deep-thought philosophers can stay busy with that.

A nail gun in the hands of someone who doesn't know how to build a house does not create finished homes faster. Using a tool without understanding the fundamentals produces nonsense. Will there be cheating? Yep. Your peers will notice. You will always have peers who do the hard work.

17

iiioiia t1_j0ly9ey wrote

ChatGPT often completely makes shit up though and states it as if it is a fact.

11

Nedink t1_j0m0n5g wrote

Yeah, just like people trying to be helpful on the internet.

5

iiioiia t1_j0m2tjt wrote

Right, but the contents of your message seemed to state only the positive subset of ChatGPT's attributes, and implied that it is good for error checking/etc without acknowledging that the things it says are often completely incorrect or nonsensical.

My hope is that the similarity of its "cognition" to ours may force or encourage us to pay more serious attention to the nature and consequences of our own.

10

decrementsf t1_j0n8cf8 wrote

ChatGPT is rolling over what people have said on the internet. Then regurgitating it using statistics on steroids. Lots and lots of steroids.

You're going to get an amalgam of what people in the training data have said.
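As a toy illustration of "statistics on steroids" (a deliberately simplified sketch; real models like ChatGPT use large neural networks trained on enormous corpora, not bigram counts), here is next-word prediction from raw frequency statistics alone, which already produces an "amalgam" of its training text:

```python
import random
from collections import defaultdict, Counter

# "Train" a toy bigram model: count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    # Pick the next word in proportion to how often it followed `prev`.
    counts = following[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short "amalgam" of the training text.
word = "the"
out = [word]
for _ in range(5):
    if not following[word]:  # dead end: no observed continuation
        break
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Every word it emits was seen in training, and every two-word transition is one it observed; it never produces anything genuinely novel, only recombinations weighted by frequency, which is the point being made above at a vastly smaller scale.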

To add an example: if you ask it to provide a recipe for chocolate chip cookies, it's going to do a pretty good job with common information like this. If you have familiarity with what chocolate chip cookie recipes usually look like, you'll catch the error if it recommends adding large quantities of ginger and cardamom. You need some basic understanding of what results should look like. The credibility of its outputs is greatest for common information, becoming less credible, or simply absent from the underlying training sets, the more novel your request (you're not going to get a great overview of how the Helion nuclear fusion reactor works).

3

iiioiia t1_j0nfq4n wrote

Well, simple math is pretty common, and I've seen several examples online where it gets elementary school math wrong.

Based on what I've read about it, its behavior seems extremely similar to human cognition. I can't even imagine what the next version is going to be like, let alone 2-3 years from now. I think we are in a new era; this might be as disruptive as the internet was, maybe even more.

3

Dismal_Contest_5833 t1_j13wts4 wrote

The answers won't make sense half the time. It would be useless to use ChatGPT to complete a paper for a university course, as depending on the subject you have to cite sources, and the task may ask for one's opinion.

1

Randommaggy t1_j0liml3 wrote

It's not even a practical nail gun; it's an impractical one with a heavy V12 engine that needs specialized skills to wield without taking off a leg or killing your neighbors.

It's also tempting for people who don't understand the subject they're applying it to.

10

decrementsf t1_j0lp3ui wrote

> It's also tempting for people who don't understand the subject they're applying it to.

Oof. Picture bidding a contract from an actuary's view of risk and the relevant parameters, against a financial-industry sales team's lowball bid. The long term goes kaboom and everyone laments that no one could have seen it coming.

The value in skill-stacking is the ability to see more parameters in your analysis. You can have equal credentials in your field as all of the other highly qualified candidates, but the candidate with a complementary skill or two in their back pocket can see around corners the others can't. Useful understanding for personal development, and for recruiting high-functioning teams.

2

JustAPerspective t1_j0mcli8 wrote

By making accomplished bullshit equally available to everyone, this puts the burden onto the people who sniff out the bullshitters & only deal with people who can actually walk the talk.

This will probably be an expensive learning curve for a number of companies.

[[The value in skill-stacking is the ability to see more parameters in your analysis. You can have equal credentials in your field as all of the other highly qualified candidates. The candidate who has a complementary skill or two in their back pocket can see around corners the others can't. Useful understanding for personal development, and recruiting high-function teams.]]

You're talking about diverse perspectives & broad problem solving skills being more effective than specialization of multiple portions - is that correct?

If so, we find this to be true in many capacities beyond capitalist matters, and honestly it's an essential component of the species' evolution: if everyone sees things the same way, they tend to end up with the same blindspots.

So, to answer OP's inquiry, Socrates may have found ChatGPT a most democratic tool, ultimately benevolent if used so, as people learn to look for the meaning in what is said.

1