bortlip
bortlip t1_je1ckgm wrote
Reply to comment by RedSunFox in Curious: How do Pennsylvanians feel about John Fetterman? by deadocmike
No, he's not.
From Feb 2023 to Mar 2023, Fetterman missed 54 of 65 roll call votes, which is 83.1%.
That is not "of roll call votes in the Senate since he took office."
It's always a lie based on a fact. They take a root fact and twist it into something worse - even when they wouldn't really need to, as here, where he really has missed a large percentage of recent votes. Just not the lie told by OP and being pushed by Fox etc.
And the "hey, I'm just neutral here" framing is a lie too. It's the old "I'm just asking questions, I don't have an agenda!", "I'm just stating facts" routine.
It's dishonest BS and should be dismissed and deleted.
bortlip t1_je1486a wrote
>has missed 83% of roll call votes in the Senate since he took office.
That's just a blatant lie.
Your post history doesn't show you as neutral.
bortlip t1_jdztlvl wrote
>But actually trying out these features for yourself—or at least the ones that have already been publicly released—does not come cheap. Unlike ChatGPT, which captivated the world because it was free, GPT-4 is currently only available to non-developers through a premium service that costs $20 a month.
So, you need to pay to access GPT-4. Ok.
I'd love to comment on the rest of the article, but The Atlantic won't let me see it unless I subscribe.
bortlip t1_jdsk31s wrote
Reply to comment by rejectednocomments in A Proof of Free Will by philosopher Michael Huemer (University of Colorado, Boulder) by thenousman
>If you actually read what he says
Nevermind. I don't care what you think.
bortlip t1_jdsc0re wrote
Reply to comment by rejectednocomments in A Proof of Free Will by philosopher Michael Huemer (University of Colorado, Boulder) by thenousman
Enlighten us, please.
bortlip t1_jdrb2ea wrote
Reply to A Proof of Free Will by philosopher Michael Huemer (University of Colorado, Boulder) by thenousman
It seems that even just premises 1 and 2 together are self-defeating.
- We should believe only the truth. (premise)
- If S should do A, then S can do A. (premise)
Can't we then say:
Conclusion: We can only believe the truth. Nothing we believe is false.
EDIT: Instead of downvoting, I'd love to hear why this is wrong.
bortlip t1_jcfgv8y wrote
Reply to comment by JarrickDe in Why We Need to Think Beyond Science to Save the World by derstarkerewille
Really?
The way I read it, I would say the opposite, with a title of
"Why we should stop thinking and just live by embracing our instincts, emotions, and feelings."
bortlip t1_j97z0vw wrote
Reply to Compatibilism is supported by deep intuitions about responsibility and control. It can also feel "obviously" wrong and absurd. Slavoj Žižek's commentary can help us navigate the intuitive standoff. by matthewharlow
IDK, I think things become clearer when you break the definitions down some and address the nuances more, and I think that's what Compatibilism does.
I think it can help to word things without using the actual words we are discussing, thus removing issues around differing definitions. For example, I'll approach this without using the terms "free will" or "determinism".
Can I affect the universe in such a way that it would be unpredictable even to someone with perfect knowledge of the world and the laws of nature? Or, to word it another way: if it were possible to "rewind" the universe to the point where you made a decision, could you decide another way?
No, I don't think you could. I believe (ignoring quantum effects, which I don't think factor into this, but I could be wrong) that you would always choose the same way due to causality. If you could rewind the universe, it would always play out the same way.
Can I evaluate all the options open to me, choose the one I would most like, and then execute that option? Yes, barring some external force preventing you. If I have a glass of milk and a glass of water, I can choose which to drink.
I think this is what Compatibilism is trying to say.
Can I choose how I want to choose? Can I will what my will is? No. But that's just the way things work. That's not really a limitation that makes it so you can't exercise the will you do have.
But the question remains about morality. How can I hold you morally responsible? After all, if you didn't choose to have that will, how is it your fault you have that will?
Here again, I think the "trick" of not using the words can help shed light.
Should I separate a person from society due to what they did? Yes, that seems like a proper thing to do. The person is causing an issue and separation can help with that.
I feel I could go on and potentially explain better and more, but that's already a lot, so I'll leave it there.
bortlip t1_j87qyye wrote
Reply to Are you prepping just in case? by AvgAIbot
"Is It Worth Being a Prepper? A Lighthearted Cost-Benefit Analysis"
In this calculation, we aim to determine if it makes sense to be a prepper by performing a cost-benefit analysis. The calculation takes into account the subjective value of life as a prepper and a non-prepper, as well as the estimated probability of a disaster occurring.
The equation used in the calculation is as follows:
Value of being a Prepper (VP) = (Value of Life as a Prepper if Nothing Happens (VLN) * (1 - Probability of Disaster (PD))) + (Value of Life as a Prepper if Something Happens (VLS) * Probability of Disaster (PD))
Value of Life as a Non-Prepper (VNP) = Value of Life as a Non-Prepper if Nothing Happens (VLNN) * (1 - Probability of Disaster (PD)) + Value of Life as a Non-Prepper if Something Happens (VLNS) * Probability of Disaster (PD)
In this calculation, the probability of disaster (PD) is expressed as a decimal between 0 and 1, representing the estimated chance of a disaster occurring. The value of life as a prepper if no disaster occurs (VLN) and the value of life as a prepper if a disaster does occur (VLS) are rated on a scale of 1-10, taking into account factors such as peace of mind, self-sufficiency, and the satisfaction of being prepared. The value of life as a non-prepper if no disaster occurs (VLNN) and the value of life as a non-prepper if a disaster does occur (VLNS) are also rated on a scale of 1-10.
For the purposes of this calculation, we assigned the following values to the variables:
PD = 0.01 (1% chance of a disaster occurring)
VLN = 5 (value of life as a prepper if no disaster occurs is rated 5 on a scale of 1-10)
VLS = 3 (value of life as a prepper if a disaster does occur is rated 3 on a scale of 1-10)
VLNN = 6 (value of life as a non-prepper if no disaster occurs is rated 6 on a scale of 1-10)
VLNS = 0 (value of life as a non-prepper if a disaster does occur is rated 0 on a scale of 1-10)
The outcome of the calculation is as follows:
Value of being a Prepper (VP) = (5 * (1 - 0.01)) + (3 * 0.01) = 4.95 + 0.03 = 4.98
Value of Life as a Non-Prepper (VNP) = (6 * (1 - 0.01)) + (0 * 0.01) = 5.94 + 0 = 5.94
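The comparison above is just a two-outcome expected value, so it can be sketched in a few lines of Python (the `expected_value` helper is mine, not from the post; the probability and the 1-10 ratings are the same illustrative guesses as above):

```python
def expected_value(value_if_nothing, value_if_disaster, p_disaster):
    """Expected value of a lifestyle given a disaster probability."""
    return value_if_nothing * (1 - p_disaster) + value_if_disaster * p_disaster

PD = 0.01  # 1% chance of a disaster occurring

VP = expected_value(5, 3, PD)   # prepper: VLN=5, VLS=3
VNP = expected_value(6, 0, PD)  # non-prepper: VLNN=6, VLNS=0

print(f"Prepper: {VP:.2f}, Non-prepper: {VNP:.2f}")
# Prepper: 4.98, Non-prepper: 5.94
```

Playing with PD shows the break-even point: with these ratings, prepping only "wins" once you think a disaster is more likely than about 25%.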
Based on these calculations, the value of life as a non-prepper is higher compared to the value of life as a prepper. However, it is important to note that these values are subjective and may vary greatly depending on the individual's personal beliefs and experiences. This calculation is meant to be taken with a grain of salt and serves only as a lighthearted illustration of the cost-benefit analysis of being a prepper. This has been a collaboration between bortlip and chatGPT.
bortlip t1_j7hl3ik wrote
I had chatGPT summarize this:
ChatGPT is eating our lunch. We're announcing that we intend to work on something real soon in an attempt to look proactive and not fall behind.
bortlip t1_j76ny0y wrote
Reply to Possible first look at GPT-4 by tk854
I went through all the links. Did I miss something?
I didn't find anything confirming any release schedule for GPT-4.
I saw some screenshots and talk of Bing getting GPT integration - but they could be doing that with the ChatGPT API that isn't public yet. Or even GPT-3 for now, with a switch to GPT-4 later.
Nothing about the actual GPT-4 timeline, though. It's the "coming weeks" part that I object to. It makes it sound like it's 2 or 3 weeks away. Which it might be - but there's no actual evidence of that provided.
bortlip t1_j6oatpx wrote
Reply to comment by Olive2887 in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
"Consciousness and complex behaviour have no relationship whatsoever" *
* citation needed
bortlip t1_j2f9oev wrote
Reply to comment by SkylorBeck in I Used ChatGPT for a Day and Found It Very Impressive by jormungandrsjig
No, I'm not looking to start an argument with you, that was a legitimate question.
I've played with ChatGPT and the models in the playground so far. I intend to start playing with Copilot soon, but haven't tried it yet.
I really like the ability to go back and forth with the chat aspect and have the AI build the code that way and I am curious how the tab complete will compare to that experience.
bortlip t1_j2f75co wrote
Reply to comment by mrpoops in I Used ChatGPT for a Day and Found It Very Impressive by jormungandrsjig
Agreed. I can also advance pretty quickly by going back and forth with it. It's like a junior developer that I just dictate to and correct here and there, and it does all the typing and puts things together intelligently.
bortlip t1_j2f1ech wrote
Reply to comment by SkylorBeck in I Used ChatGPT for a Day and Found It Very Impressive by jormungandrsjig
I've actually had better success getting good code out of ChatGPT than out of the code-specific models available on OpenAI's playground.
I'm curious which code bots you are using.
bortlip t1_j2f16du wrote
Reply to comment by savage_slurpie in I Used ChatGPT for a Day and Found It Very Impressive by jormungandrsjig
That has not been my experience.
If you expect it to give you 100% working code with minimal instruction, it might not work (although it might). But if you work with it just a little bit, it's pretty amazing.
Example: I was reading this article about text summarization with Python. The article listed this code:
import spacy
from string import punctuation
from heapq import nlargest
from spacy.lang.en.stop_words import STOP_WORDS

def summarize(text, per):
    nlp = spacy.load('en_core_web_sm')
    doc = nlp(text)
    # Count how often each non-stopword, non-punctuation token appears
    word_frequencies = {}
    for word in doc:
        if word.text.lower() not in STOP_WORDS:
            if word.text.lower() not in punctuation:
                if word.text not in word_frequencies:
                    word_frequencies[word.text] = 1
                else:
                    word_frequencies[word.text] += 1
    # Normalize frequencies to [0, 1]
    max_frequency = max(word_frequencies.values())
    for word in word_frequencies:
        word_frequencies[word] = word_frequencies[word] / max_frequency
    # Score each sentence by the normalized frequencies of its words
    sentence_tokens = [sent for sent in doc.sents]
    sentence_scores = {}
    for sent in sentence_tokens:
        for word in sent:
            if word.text.lower() in word_frequencies:
                if sent not in sentence_scores:
                    sentence_scores[sent] = word_frequencies[word.text.lower()]
                else:
                    sentence_scores[sent] += word_frequencies[word.text.lower()]
    # Keep the top `per` fraction of sentences
    select_length = int(len(sentence_tokens) * per)
    summary = nlargest(select_length, sentence_scores, key=sentence_scores.get)
    final_summary = [sent.text for sent in summary]
    return ' '.join(final_summary)  # join with spaces (the article used ''.join)
I asked ChatGPT what it did:
At a high level, this code defines a function that takes in a piece of text and a percentage value, and produces a summary of the text by:
- Tokenizing the text and calculating the frequency of each word in the document
- Normalizing the word frequencies to the range [0, 1]
- Tokenizing the text into sentences and calculating a score for each sentence based on the normalized word frequencies of the words it contains
- Selecting the highest-scoring sentences based on the specified percentage of the total number of sentences
- Returning the selected sentences as a single summary string
I then asked it to rewrite that same code in C#, and it gave me the (nearly: I had to add one cast) perfect LINQ:
var words = Regex.Split(text, @"\s+")
    .Where(x => !stopWords.IsStopWord(x.ToLower()) && !char.IsPunctuation(x[0]))
    .GroupBy(x => x)
    .ToDictionary(g => g.Key, g => (double)g.Count());

var maxCount = words.Max(x => x.Value);
words = words.ToDictionary(x => x.Key, x => (double)x.Value / maxCount);

var sentences = Regex.Split(text, @"(?<=[\.!\?])\s+")
    .Select(x => x.Trim())
    .ToArray();

var scores = sentences
    .Select(x => x.Split(' ')
        .Select(y => words.ContainsKey(y) ? words[y] : 0)
        .Sum())
    .ToArray();

var summary = string.Join(" ", sentences
    .Zip(scores, (s, c) => new { Sentence = s, Score = c })
    .OrderByDescending(x => x.Score)
    .Take((int)(per * sentences.Length))
    .Select(x => x.Sentence));
That's a pretty high level of complexity to me.
bortlip t1_je1dbad wrote
Reply to comment by RedSunFox in Curious: How do Pennsylvanians feel about John Fetterman? by deadocmike
Hey, this you?
>RedSunFox · 1 point · 4 days ago
>
>Unless you know, it’s white people, in which case that’s “systemic racism” and “all white people are racist”