berdiekin t1_j9lky82 wrote
Reply to comment by Standard_Ad_2238 in Microsoft is already undoing some of the limits it placed on Bing AI by YaAbsolyutnoNikto
>Why the hell are people treating AI differently?
I don't think we are; like you said, it's something that seems to occur with every major new technology.
Seems to me that this is just history repeating itself.
berdiekin t1_j8f52rl wrote
Reply to comment by Durabys in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
No, they shouldn't. Stop anthropomorphizing a fucking text generation algorithm.
berdiekin t1_j6f8xt9 wrote
Reply to comment by CrispyScientist in “I’ve tried to give GPT access to the internet and the blockchain. What could possibly go wrong?” by maxtility
Short answer: a hard fork is when a group of people decide they no longer like the direction a crypto project is going (or can't reach an agreement) and announce to the world that, at some specific moment in the future, they'll continue building the chain with their own rules/technology.
This creates two chains with an identical history up to the moment of the split (aka the hard fork).
Chain1: A -> B -> C -> D ....
Chain2: A -> B -> C -> X -> Y ...
Usually the chain with the most support gets to keep its name and the "loser" changes theirs.
Happened with Ethereum, but also Bitcoin, and probably others.
In the screenshot the dude is asking his GPT bot to look up which specific block was the last one in the shared history before a split.
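For anyone curious about the mechanics, here's a minimal sketch in Python (the block IDs are made up to mirror the diagram above, not real block hashes):

```python
# Two chains with an identical history up to the fork point "C".
chain1 = ["A", "B", "C", "D"]        # chain that kept the old rules
chain2 = ["A", "B", "C", "X", "Y"]   # chain that forked off with new rules

def last_shared_block(a, b):
    """Walk both chains in parallel; return the last block they have in common."""
    last = None
    for block_a, block_b in zip(a, b):
        if block_a != block_b:
            break
        last = block_a
    return last

print(last_shared_block(chain1, chain2))  # -> "C", the fork point
```

That last shared block is exactly what the guy in the screenshot was asking his bot to find.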
berdiekin t1_j6adymm wrote
Reply to comment by AsuhoChinami in Why did 2003 to 2013 feel like more progress than 2013 to 2023? by questionasker577
I see the 2010s as a decade of maturing the technologies of the late 2000s.
Suddenly humanity had this massive influx of online users, and with them came mountains of data; everyone was now taking pictures, filming, streaming, ... and sharing it all online through social media.
Data sets exploded, so much so that a whole new branch of data management emerged: Big Data. When I was in uni around 2010 it was one of the hottest topics, because all these companies suddenly had stupendous amounts of data but were unsure how to process it or even what to do with it.
On the commercial side there was hope it could be used to better target ads, to better predict what customers want.
Talk (more like whispers) was starting to float around that maybe, just maybe, these grand new datasets could help us build better AI systems. Perhaps one day we'd have systems that were better than humans at things like image recognition.
What I mean to say, in short, is this: The 2010s taught us how to process lots of data. And we're now starting to see that bear fruit.
berdiekin t1_j5xytk1 wrote
Reply to comment by Smellz_Of_Elderberry in Humanity May Reach Singularity Within Just 7 Years, Trend Shows by Shelfrock77
Too many people looking at it from a cartoon villain perspective.
Companies wouldn't actively use it to starve people; they don't care about you. What they will do (or at least try to do) is the same thing they've always done.
That is, cut costs and find ways to maximize profits. In this case, that means using AI to automate more people out of jobs. The fact that you might lose your home or go hungry is just a side effect of that effort.
That's why we need a tax on the usage of robots and AI.
berdiekin t1_j4s3sje wrote
Reply to comment by Ginkotree48 in Is it wishful thinking that I feel like we’re way closer than we thought? by fignewtgingrich
I've been following it for close to 20 years, and the acceleration we've been seeing in the last couple of years is something else. Hell, even just this last year.
It's fucking mindblowing.
berdiekin t1_j42o0rd wrote
Reply to comment by AsuhoChinami in does character ai have the ability to close the chat by [deleted]
All you gotta do is git gud scrub.
Not like it took me 30-odd tries or anything...
berdiekin t1_j1x41fr wrote
Reply to comment by khanto0 in Considering the recent advancements in AI, is it possible to achieve full-dive in the next 5-10 years? by Burlito2
It's actually a seriously argued position in philosophy.
If we can manage to simulate a full universe down to the same level of precision as our own, then the chances of our own universe being simulated become practically 1.
Because for every civilization in the "root" universe capable of and willing to create these simulations, there is a theoretically infinite number of simulated sub-universes.
Ergo: simulated universes will always massively outnumber real ones.
Even in the case of "infinite real universes" in a multiverse: if every one of these "real universes" contains only one civilization running these simulations, the simulations will still outnumber the real ones.
AT BEST our odds would be 50/50 (in the case where, in every universe, only one civilization ever reaches the point of wanting and being able to run simulations, while they remain so resource-intensive that it can only ever run one).
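To put rough numbers on that counting argument, here's a back-of-the-envelope sketch in Python (the simulation count is a made-up assumption, purely for illustration):

```python
# Assume each real universe runs some number of simulations (made up here).
real_universes = 1
sims_per_real = 1_000

simulated = real_universes * sims_per_real
odds_simulated = simulated / (real_universes + simulated)
print(f"{odds_simulated:.4f}")  # -> 0.9990, i.e. almost certainly simulated

# The 50/50 edge case: every real universe runs exactly one simulation.
print(1 / (1 + 1))  # -> 0.5
```

The exact numbers don't matter; the point is that the odds of being "the real one" only ever get worse as the simulation count grows.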
Wikipedia:
>The simulation argument
>
>In 2003, philosopher Nick Bostrom proposed a trilemma that he called "the simulation argument". Despite the name, Bostrom's "simulation argument" does not directly argue that humans live in a simulation; instead, Bostrom's trilemma argues that one of three unlikely-seeming propositions is almost certainly true:
>
>"The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero", or
>
>"The fraction of posthuman civilizations that are interested in running simulations of their evolutionary history, or variations thereof, is very close to zero", or
>
>"The fraction of all people with our kind of experiences that are living in a simulation is very close to one."
>
>The trilemma points out that a technologically mature "posthuman" civilization would have enormous computing power; if even a tiny percentage of them were to run "ancestor simulations" (that is, "high-fidelity" simulations of ancestral life that would be indistinguishable from reality to the simulated ancestor), the total number of simulated ancestors, or "Sims", in the universe (or multiverse, if it exists) would greatly exceed the total number of actual ancestors.
berdiekin t1_iwz0kvf wrote
Reply to comment by ReadSeparate in US and EU Pushing Ahead With Exascale, China Efforts Remain Shrouded by nick7566
You are correct; I code for a living.
berdiekin t1_iwpoxte wrote
Reply to comment by JuneOnReddit in US and EU Pushing Ahead With Exascale, China Efforts Remain Shrouded by nick7566
In very simple terms, the computing power of supercomputers is often measured in FLOPS (floating point operations per second) because a lot of simulations require exactly that kind of math.
1 FLOPS is 1 floating point operation per second. And just like with storage, you can prepend it with kilo, mega, giga, ...
So 1 kiloFLOPS would be 1,000 operations per second, 1 megaFLOPS would be 1,000 kiloFLOPS, etc.
Then mega -> giga -> tera -> peta -> exa. So yeah, we're finally breaching exaFLOPS of computing power, which is big. Even just a couple of years ago we were barely reaching 100 PFLOPS, and now we're already aiming beyond 10 EFLOPS. We're talking absolutely INSANE levels of computing power.
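For scale, a quick illustrative sketch of those prefixes in Python (round numbers for simplicity):

```python
# SI prefixes applied to FLOPS, for scale.
flops = {
    "kiloFLOPS": 1e3,
    "megaFLOPS": 1e6,
    "gigaFLOPS": 1e9,
    "teraFLOPS": 1e12,
    "petaFLOPS": 1e15,
    "exaFLOPS": 1e18,
}

for name, ops_per_second in flops.items():
    print(f"1 {name} = {ops_per_second:.0e} operations per second")

# Going from 100 PFLOPS to 1 EFLOPS is a 10x jump:
print(1e18 / (100 * flops["petaFLOPS"]))  # -> 10.0
```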
berdiekin t1_iswpk2e wrote
Reply to comment by freeman_joe in Talked to people minimizing/negating potential AI impact in their field? eg: artists, coders... by kmtrp
That's a fair question, but so much depends on the how/when/what. Like how fast will these tools appear, how good will they be, how powerful will they be, how easy to use will they be, ...
I personally don't see these tools going from barely existing to writing entire projects from scratch based on a simple description, at least not without some human intervention.
Code generation is one thing; now tell it to integrate that with other (human-written) APIs and projects with often lackluster documentation (if there's any in the first place). Not gonna happen.
Unless we hit some kind of AGI breakthrough of course, then all bets are off.
berdiekin t1_iss2afs wrote
Reply to comment by Redifyle in Talked to people minimizing/negating potential AI impact in their field? eg: artists, coders... by kmtrp
I guess if we ever get to the point where you can describe an entire app/project with a simple (non-technical) description and get something out of it that does what's expected, then programmers would become obsolete.
Honestly I'm more interested to see if an AI would be able to integrate into a legacy project and take over / improve that.
berdiekin t1_jb0nh02 wrote
Reply to comment by maskedpaki in Security robots patrolling a parking lot at night in California by Dalembert
>Yh and when it's disabled it sends an alert for a human to come in person.
oh no, at that point I only have like 30 minutes to an hour to make a daring escape!
Joking aside, I do see the value, especially when combined with a static camera system.