berdiekin

berdiekin t1_j6f8xt9 wrote

Short answer: A hard fork is when a group of people decide they no longer like the direction a crypto is going (or can't come to an agreement) and announce to the world that, at some specific moment in the future, they'll continue building the chain with their own rules/technology.

This creates 2 chains with identical history up to the moment of the split (aka: hard fork).

Chain1: A -> B -> C -> D ....

Chain2: A -> B -> C -> X -> Y ...

Usually the chain with the most support gets to keep their name and the "loser" changes theirs.

Happened with Ethereum, but also Bitcoin, and probably others.

In the screenshot the dude is asking his GPT bot to look up which specific block was the last one in the shared history before the split.
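If you wanted to find that fork point yourself, it boils down to walking both histories until they stop matching. A toy sketch in Python (the block IDs and function are made up purely to illustrate the idea, not tied to any real chain or API):

```python
def last_common_block(chain1, chain2):
    """Return the last block ID the two histories share before they diverge."""
    last_shared = None
    for b1, b2 in zip(chain1, chain2):
        if b1 != b2:
            break
        last_shared = b1
    return last_shared

chain1 = ["A", "B", "C", "D", "E"]   # chain that kept the old rules
chain2 = ["A", "B", "C", "X", "Y"]   # chain that forked off with new rules

print(last_common_block(chain1, chain2))  # -> "C", the fork point
```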

4

berdiekin t1_j6adymm wrote

I see the 2010s as a decade of maturing the technologies of the late 2000s.

Suddenly humanity had this massive influx of online users, and with them mountains of data; everyone was now taking pictures, filming, streaming, ... and sharing it all online through social media.

Data sets exploded, so much so that a whole new branch of data management came into being: Big Data. When I was in uni around 2010 that was one of the hottest topics, because all these companies now had stupendous amounts of data but were unsure how to process it or even what to do with it.

On the commercial side there was hope it could be used to better target ads, to better predict what customers want.

Whispers were starting to float around that maybe, just maybe, these grand new datasets could help us build better AI systems. Perhaps someday even systems that were better than humans at things like image recognition.

What I mean to say, in short, is this: The 2010s taught us how to process lots of data. And we're now starting to see that bear fruit.

31

berdiekin t1_j5xytk1 wrote

Too many people looking at it from a cartoon villain perspective.

Companies wouldn't actively use it to starve people; they don't care about you. What they will do (or at least try to do) is the same thing they've always been doing.

That is, cut costs and find ways to maximize profits. In this case using AI to automate more people out of jobs. The fact that you might lose your home or go hungry is just a side effect of that effort.

That's why we need a tax on the usage of robots and AI.

1

berdiekin t1_j1x41fr wrote

It's actually a seriously discussed hypothesis.

If we can manage to simulate a full universe down to the same level of precision as our own, then the chances of our own universe being a simulation become practically 1.

Because for every civilization in the "root" universe that is capable of and willing to create these simulations, there is a theoretically unlimited number of simulated sub-universes.

Ergo: the number of simulated universes will always massively outnumber the real ones.

Even in the case of "infinite real universes" in a multiverse: if in every one of these "real universes" there is only 1 civilization running these simulations, then the number of simulations will still outnumber the real ones.

AT BEST our odds would be 50/50 (in the case that in every universe only 1 civilization ever reaches the point of needing/wanting/being able to run simulations, while it remains so resource intensive that they can only ever manage to run 1).
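To put rough numbers on that intuition, here's a toy calculation (all figures are invented, purely to show how the ratio behaves):

```python
# If simulated observers can't tell themselves apart from real ones,
# the odds of being simulated are just simulated / (simulated + real).
def odds_of_being_simulated(real_universes, sims_per_universe):
    simulated = real_universes * sims_per_universe
    return simulated / (simulated + real_universes)

print(odds_of_being_simulated(1, 1))          # 0.5   -> the "at best 50/50" case
print(odds_of_being_simulated(1, 1000))       # ~0.999
print(odds_of_being_simulated(10**6, 10**6))  # still ~1.0; more real universes don't help
```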

Wikipedia:

>The simulation argument
>
>In 2003, philosopher Nick Bostrom proposed a trilemma that he called "the simulation argument". Despite the name, Bostrom's "simulation argument" does not directly argue that humans live in a simulation; instead, Bostrom's trilemma argues that one of three unlikely-seeming propositions is almost certainly true:
>
>"The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero", or
>
>"The fraction of posthuman civilizations that are interested in running simulations of their evolutionary history, or variations thereof, is very close to zero", or
>
>"The fraction of all people with our kind of experiences that are living in a simulation is very close to one."
>
>The trilemma points out that a technologically mature "posthuman" civilization would have enormous computing power; if even a tiny percentage of them were to run "ancestor simulations" (that is, "high-fidelity" simulations of ancestral life that would be indistinguishable from reality to the simulated ancestor), the total number of simulated ancestors, or "Sims", in the universe (or multiverse, if it exists) would greatly exceed the total number of actual ancestors.

https://en.wikipedia.org/wiki/Simulation_hypothesis#:~:text=The%20simulation%20hypothesis%20proposes%20that,current%20form%20by%20Nick%20Bostrom.

4

berdiekin t1_iwpoxte wrote

In very simple terms, the computing power of supercomputers is often measured in FLOPS (floating point operations per second) because a lot of simulations require exactly that.

1 FLOPS is 1 floating point operation per second. And just like with storage you can prefix it with kilo, mega, giga, ...

So 1 kiloFLOPS is 1,000 operations per second, 1 megaFLOPS is 1,000 kiloFLOPS, ... etc.

Then mega -> giga -> tera -> peta -> exa. So yeah, we're finally breaching exaFLOPS of computing power, which is big. Even just a couple of years ago we were barely reaching 100 PFLOPS and now we're already aiming beyond 10 EFLOPS. We're talking absolutely INSANE levels of computing power.
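Just to make the scale concrete, a quick toy conversion (the numbers are only illustrative, not tied to any specific machine):

```python
# Each prefix step is a factor of 1000.
PREFIXES = {"kilo": 1e3, "mega": 1e6, "giga": 1e9,
            "tera": 1e12, "peta": 1e15, "exa": 1e18}

hundred_pflops = 100 * PREFIXES["peta"]   # roughly where top machines were a few years ago
ten_eflops = 10 * PREFIXES["exa"]         # the scale now being aimed for

print(f"{hundred_pflops:.0e} FLOPS")                        # 1e+17 FLOPS
print(f"{ten_eflops:.0e} FLOPS")                            # 1e+19 FLOPS
print(f"That's a {ten_eflops / hundred_pflops:.0f}x jump")  # 100x
```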

https://en.wikipedia.org/wiki/History_of_supercomputing

15

berdiekin t1_iswpk2e wrote

That's a fair question, but so much depends on the how/when/what. Like how fast will these tools appear, how good will they be, how powerful will they be, how easy to use will they be, ...

I personally don't see these tools going from pretty much not existing to writing entire projects from scratch based on a simple description. At least not without some human intervention.

Because code generation is one thing; now tell it to integrate that with other (human-written) APIs and projects that often have lackluster documentation (if there's any in the first place). Not gonna happen.

Unless we hit some kind of AGI breakthrough of course, then all bets are off.

1

berdiekin t1_iss2afs wrote

I guess if we ever get to the point where you can describe an entire app/project with a simple (non-technical) description and get something out of it that does what's expected, then programmers would become obsolete.

Honestly I'm more interested to see if an AI would be able to integrate into a legacy project and take over / improve that.

1