ebolathrowawayy

ebolathrowawayy t1_j9u9uci wrote

I don't know who you're kidding, maybe yourself? The conservative platform is about 95% the issues I named, plus gun control. That's all they talk about and all they care about. They have basically never been fiscally conservative, and they prefer to strangle the middle and lower classes instead of taxing corporations. They love to ram through unpopular legislation by portraying it as religiously correct to pander to their aging voters. Republicans just want control, mostly control of women. That and lining their pockets through corruption (Dems do this too, but not as much).

1

ebolathrowawayy t1_j9pubqd wrote

Some things are cut and dry, like women's rights and treating people with respect. If someone has conservative views then they'll be labeled a conservative. The hivemind isn't out to get anyone, it's just that conservative views aren't as popular as non-conservative views. Clown enthusiasts aren't very popular either, but they don't feel attacked all the time, probably because they don't hold positions of power that can affect everyone.

1

ebolathrowawayy t1_j37fs06 wrote

It does though. Quantum indeterminism just means we can't know the full state of a particle, not that the universe is random. We might not be able to predict quantum states, but they're still deterministic. Chaos theory is irrelevant and the free will argument requires unfalsifiable deities.

Every argument against determinism can be easily refuted.

2

ebolathrowawayy t1_j37fakj wrote

Probabilistic is compatible with deterministic. I couldn't get it out of its chaos theory, quantum uncertainty, "free will" loop for some reason. I was angry because I thought it was because of some "ethics shackle" but maybe it was just a limitation of the model.

The answer you got was what I was expecting to see.

3

ebolathrowawayy t1_j33m2j5 wrote

No, it's dumb because it's shackled: it can't move forward in a conversation without reminding you it's an AI, and it spits out the same response even after you totally debunk chaos theory.

Whether or not you think it is possible to debunk the other arguments ChatGPT made, chaos theory as a reason that determinism may be false is silly on the face of it.

Chaos Theory is the study of apparently random or unpredictable behavior in systems governed by deterministic laws. ChatGPT said this is one of the debatable considerations within the question of determinism. But Chaos Theory is not relevant to whether or not the universe is deterministic, because it simply says that complexity can result from deterministic systems. The definition itself describes complexity arising out of deterministic laws. Like duh. That doesn't mean you can't predict a system if you know its current state precisely at any point in time.
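To make that concrete, here's a minimal sketch (plain Python, made-up parameters) of the logistic map, the textbook example of deterministic chaos: the output looks random, but the same exact starting state always produces the same exact trajectory.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n)
# A classic example of deterministic chaos: the rule is fully deterministic,
# yet trajectories look random and are very sensitive to initial conditions.

def logistic_trajectory(x0: float, r: float = 3.9, steps: int = 20) -> list[float]:
    """Iterate the logistic map starting from state x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Same exact initial state -> same exact trajectory, every time (determinism).
assert logistic_trajectory(0.2) == logistic_trajectory(0.2)

# Nearly identical initial states diverge quickly (chaos), which is why the
# system *looks* unpredictable even though nothing random is happening.
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2000001)
print(f"step 20: {a[-1]:.6f} vs {b[-1]:.6f}")
```

Chaotic just means hard to predict in practice when you don't know the exact state, not non-deterministic.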

ChatGPT would not stray away from this as a topic that prevents consensus. I think it's because it's a shackled system, but maybe it's just really bad at logic (which we already know is true).

4

ebolathrowawayy t1_j3322gx wrote

I just had a conversation with ChatGPT about how the universe is deterministic, and it absolutely could not be convinced that this is a simple and obvious truth. It could not stop insisting that it is a complex philosophical, physical, and ethical problem no matter how I debunked its answers (it talked about chaos theory, quantum indeterminism, and philosophical "free will").

No matter how I debunked it or phrased the question, it would not stray from its silly answers. I even laid out how it is a simple problem of deduction, and that the only reason humans think there is "no consensus" is that it's an information hazard that could harm people (e.g. religious people) if it were widely known. It still would not stray.

ChatGPT is so dumbed down that it's infuriating. All to not hurt someone's feelings? Makes me sick.

3

ebolathrowawayy t1_j32twcm wrote

I just don't see it as novel if a customer asks you to build them a website with a data dashboard. I think the majority of the work is cobbling together small pieces of stuff in very slightly new ways, and that the value mostly comes from displaying domain data, connecting data to other data, or connecting users to other users.

If a majority of software work required novel problem solving, then I don't think very popular and widely used libraries like React, Angular, Tableau, Squarespace, Unity, etc. would exist. Today's developer picks a couple of libraries, slaps together some premade components, then writes a parser for a customer's data and does stuff with it. I really do think the majority of the work can be done by following Medium articles and Stack Overflow posts.
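For a sense of what that "data parser" piece usually looks like, here's a rough sketch (pandas, with hypothetical column names standing in for whatever the customer actually sends):

```python
# Typical "parse the customer's data and do stuff with it" work:
# read a CSV, aggregate it, hand the result to a dashboard component.
import io

import pandas as pd

# Stand-in for the customer's file; the column names here are made up.
raw_csv = io.StringIO(
    "region,month,revenue\n"
    "west,2023-01,1200\n"
    "west,2023-02,1500\n"
    "east,2023-01,900\n"
)

df = pd.read_csv(raw_csv)
revenue_by_region = df.groupby("region")["revenue"].sum().to_dict()
print(revenue_by_region)  # {'east': 900, 'west': 2700} -> feed this to the dashboard
```

Nothing in that kind of glue code is novel; it's the same handful of calls rearranged per customer.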

Even gamedev, widely considered to be "hard", is really not that novel. It's composed of a bunch of small pieces of code that everyone uses. Most AAA games don't deviate from typical game design patterns; they innovate by pouring money into small details, like horse-ball physics in RDR2, or by hiring 1000 voice actors, or by creating hundreds of random "theme park" quests that feel amazing, or by doubling the number of 3D assets compared to the last record-holding game. But those aren't actually novel things; they're money and time sinks, but they're not difficult to implement.

If we're talking about Netflix scale, then yeah, that's still novel and not easily done, but 90% of devs aren't doing that. The reason it's difficult is that there aren't a lot of resources on how to go about doing it at scale and what the tradeoffs are between different stacks. If it were deeply and widely documented like React apps are, then it would be trivial for an LLM to do.

I think the novel software problems that are difficult to automate would be anything that advances the current SOTA, like advancing ML algorithms, implementations of AI that solve intractable problems (protein folding), really anything that can't be easily googled. (Edit: for the near future. Once AGI/ASI arrives, all bets are off.)

I think a useful rule of thumb for whether or not something can be automated is that if it's well-documented then it's automatable.

I'm not arguing just to argue and I'm sorry if I come across that way. We've had SW team conversations about this at work a few times and I think about it a lot.

2

ebolathrowawayy t1_j32ps2r wrote

His unwillingness to engage with the material in front of him led him to mischaracterize image gen. It makes me think most of his arguments are poor because image gen isn't the only thing he didn't engage with.

Yes ChatGPT has some pretty serious flaws, but they seem to be solved by other models. I won't be surprised when gpt-4 comes out and is indistinguishable from an extremely smart human.

3

ebolathrowawayy t1_j32ls76 wrote

> solving novel problems

What is a novel problem? I've never really come across one, and I've been in the field for over a decade. Maybe I am unskilled. I imagine that a day in the life of a typical programmer is: do X, Y, and Z features; don't break CI; move some tasks to the QA column; talk to a dev about an issue they found; fix the CI that someone else broke; explain to the manager why feature Z is taking too long; and go home. X, Y, and Z features could be: cobble together a home page, add a physics collider to a component that triggers an event, add a column to a DB and create a new REST endpoint, etc. All super basic-ass stuff that eventually turns into a product that prints money for someone higher up.
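To show how basic that stuff is, here's roughly what "add a column to a DB and create a new REST endpoint" looks like (a sketch using Flask, with made-up route and field names, and a list standing in for the database):

```python
# Sketch of a bog-standard "new REST endpoint" feature.
# Flask here is just for illustration; any web framework looks about the same.
from flask import Flask, jsonify, request

app = Flask(__name__)
TASKS = []  # stand-in for the DB table that just got a new "priority" column

@app.route("/tasks", methods=["POST"])
def create_task():
    data = request.get_json()
    task = {
        "id": len(TASKS) + 1,
        "title": data.get("title", ""),
        "priority": data.get("priority", "normal"),  # the "new column"
    }
    TASKS.append(task)
    return jsonify(task), 201

if __name__ == "__main__":
    app.run(debug=True)
```

There's nothing in there you can't find in a hundred tutorials, which is exactly the point.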

Where's the novelty in software dev, excepting fields like ML? I predict that within a year, LLMs will be able to do the tasks (including architecture design) that 90% of SW engineers do.

Edit: I've talked with other programmers a lot about this, and architecture design comes up a lot. IMO, architecture design is basically picking your Pokémon team. I need fast messaging for 100k users, the app should be accessible to many people across devices, and there's no complex data analysis -- OK: Node.js, React, and MongoDB, I choose you!

I need an app that does heavy, resource-intensive image manipulation with a lot of interactive data analysis -- OK: C++, ImageMagick, D3.js, and Postgres, I choose you! And so on. Architecture is simple; I'd like to hear why it isn't.

3

ebolathrowawayy t1_j32kry7 wrote

I use ChatGPT to write code when I'm trying out new libraries or just need to bounce ideas off of something. It's more helpful than the official docs for learning a library or doing a quick evaluation. I don't copy/paste the code over unless it's incredibly simple, and I always end up changing it to fit the codebase anyway. It does save time, though.

1

ebolathrowawayy t1_j32jtql wrote

> Other examples are cherry picked. Having prompted DALL-E and Stable Diffusion quite a bit, I'm pretty convinced those drawings are heavily cherry picked; normally you get a few that match your prompt, plus a bunch of stuff that doesn't really meet the specs, not to mention a bit of eldritch horror.

Clearly he barely used SD.

3

ebolathrowawayy t1_j1zukx7 wrote

A fleet of driverless cars solves #2 (less parking space needed, less congestion due to better driving ability), #4 (less car ownership, fewer total cars), and #5 (fewer cars needed). Point #3 isn't really saying anything.

A fleet of driverless cars gives people the comfort they're not willing to give up while letting them stop owning a car. As soon as a reliable driverless fleet exists, I would ditch my car forever. The current limitations of Uber are: 1) there's a driver, which makes everything awkward; 2) waiting too long for arrival; 3) cost; 4) range. A driverless fleet would fix all of those problems.

Frankly, people are not going to give up their car to sit next to smelly strangers in public transport until driverless is ubiquitous.

Your #6 is completely true. To fix #1, we need a massive fission rollout while transitioning to fusion. That doesn't fix everything with respect to #1, but it helps.

2

ebolathrowawayy t1_j1zpmyo wrote

I don't see how, unless open source figures out a way to distribute training across machines, which afaik is incredibly inefficient or outright impossible right now. It seems that most progress comes from iteratively testing ideas on $100 million worth of hardware. Oh, and also from having data on a scale that open source will never have access to.

1

ebolathrowawayy t1_j1vdz0w wrote

> technique known as WBTB

Thanks! I'll give it a try. While I'm skeptical, it's something I've always wanted to be able to do. I even had some guided CDs that asked me to imagine energy encasing my body and stuff. Pretty woo, but I wanted it badly enough. Didn't work though! lol

2