HamiltonBrae

HamiltonBrae t1_jddu4gn wrote

>You can believe that you have JTB knowledge

 

Yes, I just think that under the reasonable belief paradigm this is a contradiction. I think the idea of believing certain things are true has to be given up or replaced with something else, like the belief that something is empirically adequate. The contradiction could just be ignored, I guess, but arguably that also undermines the point of doing this kind of thinking, which is to reduce things like that; after all, why was the reasonable belief paradigm asserted in the first place? I think everyone probably inevitably tolerates some level of contradiction or paradox in their views, though.

 

>The difference between being taught something that's based on cherrypicked evidence and doing the cherrypicking yourself is that in the former case, you don't have the evidence necessary to tell that there's cherrypicking happening.

 

I don't think you have the evidence to tell there is cherrypicking happening when you do it yourself either, though. You think your picking of evidence is completely reasonable and isn't cherrypicked at all. Rather, you will think the opposition is cherrypicking evidence and ignoring your evidence.

 

>That said, if we're aware that evidence and teaching can be flawed then we logically ought to check our sources.

 

Yes, but we have more confidence in some sources or evidence than others, to the point that we don't think we need to check. We would consider this reasonable, yet it's possible the confidence is misplaced (and often is).

 

>and grant credence or disbelief to those sources appropriately.

 

And what is appropriate will seem different to different people.

 

>Different people ought to come to different conclusions about a belief if they start with different evidence or different premises. Conspiratorial thinking is what renders a belief unreasonable, not the conclusions it generates.

 

It's hard to see what separates conspiratorial from reasonable here, because the conspiratorial thinkers are just coming from different evidence and premises too.

1

HamiltonBrae t1_jcpqfxt wrote

Sorry, late reply;

 

>It's a far cry removed from JTB, in any case.

 

Maybe I wasn't clear enough, but my point was that, using that definition of belief, someone should logically believe that they have justified true beliefs if they believe that some fact is true and they think that that belief is justified. If you believe you have justified true beliefs, then surely that undermines the paradigm which wants to get rid of knowledge: the knowledge and non-knowledge views would be indistinguishable from a person's practical perspective. My point is then not so much about whether knowledge actually exists in the JTB sense but whether someone should logically believe they have knowledge in the JTB sense under your scheme. I see you have specified your definition of reasonable, though. I assumed that reasonable was more or less synonymous with justification, since at face value when I think of someone having a reasonable belief I think they are justified in it, but maybe I should have anticipated some difference. Thinking more deeply, though, I guess justification is complicated and I don't think I can even define too well where justification starts and ends.

 

At the same time, I don't think this affects my argument too much; but again, the more I think about this, the more complicated it seems to get. We can talk about someone believing something is true when they have no uncertainty; we can also talk about someone believing their belief is reasonable or justified. Presumably they wouldn't assent to a belief that they didn't think was reasonable, but if they were open to believing that some of those reasonable beliefs were justified, then I think they would again be forced to believe that they had knowledge. Neither do I think this differs from the knowledge position you argue against, since someone working under the assumption that knowledge was possible would also not believe they have knowledge if they didn't believe their belief was totally justified. So as long as a person believes that beliefs can be justified, they should logically believe that they have knowledge.

 

>This applies to most conspiracy theorists: they aren't unreasonable because they've come to false conclusions, they're unreasonable because they've supported their false conclusions on the basis of cherrypicked and/or fabricated evidence that's extensively contradicted. Ignoring those contradictions and ignoring the baseless construction of those beliefs is what renders them unreasonable.

 

>If someone believes the Earth is flat because they're a child in an isolated community that's been told by trusted teachers and parents that the Earth is flat, they're reasonable in holding that belief.

 

If someone holds a belief reasonably because they have been taught it and don't know better, then why can't someone have a reasonable belief from cherrypicked/fabricated evidence? I think these two sources of knowledge are blurry because, on one hand, the taught knowledge in the isolated community is going to be due to error/fabrication/cherrypicking/deception, while on the other hand someone who holds their views despite counter-evidence is going to subjectively feel that they are being reasonable, and they cannot help that. They feel that the counter-evidence they are shown is inadequate, just as the non-conspiratorial person would feel about the evidence they are given by the conspiracy theorist. If the evidence doesn't seem reasonable to them, how can they help that? In their logic, what they have been shown just doesn't count as counter-evidence. In your words, they come to conclusions about the counter-evidence that they feel subjectively to be most logical. These may not actually be logically sound, but they have to make do with the best they're capable of.

 

Now, I do think that some beliefs seem more unreasonable to me than others (like conspiratorial ones), but it doesn't seem straightforward to defeat a skeptic purely with reason. Neither does there seem to be a straightforward divide between reasonable and unreasonable. For instance, some Christians may think their views are totally reasonable and conspiracy theorists' views are totally unreasonable; but then again, I might think believing in God is totally unreasonable. It doesn't seem sufficient to resolve the problem of skeptical hypotheses purely with "reasonable beliefs" if a person, specifically a skeptic, thinks the skeptical hypothesis is reasonable.

1

HamiltonBrae t1_jcni16m wrote

Well, according to those logics and views there are some contradictions that are acceptable. I'm not saying that arbitrary contradictory sentences make sense, and I don't even know too much about those views, but I'm open to the idea that logic can be done in different ways.

 

Even so, I don't think the idea of non-contradiction is enough to pick out truth, because truth depends on the premises, and if these are blurry or underdetermined or context-dependent then it's not straightforward.

1

HamiltonBrae t1_jcgawei wrote

What do you mean by beliefs here? If a belief is "a subjective attitude that something or some proposition is true", then I feel like a reasonable/justified belief that something is true isn't really that different from knowledge here. Obviously, the thing you believe has to be true to count as knowledge, but then you believe it is true by the definition of belief. If your evidence is strong enough, or reasonable enough that you subjectively have no doubt, then to me that says you would logically believe that you have knowledge of it, so is there much practical difference? In cases where you have less confidence or certainty in the evidence, then yes, you may not believe you have knowledge because you are obviously not sure; but then again, I don't think someone who is engaging in the "folly of knowledge" which you are arguing against would say they have knowledge either, because they are unsure: the stances are hard to distinguish. So, even if knowledge here is defined by JTB, I may not practically be able to get rid of the belief in knowledge; I believe I have knowledge in certain circumstances where subjective uncertainty approaches zero (e.g. where my house is). Your article's view ends up with something like a Moorean paradox: claiming to be "discarding knowledge" but still logically ending up believing in it in the same cases someone normally would. Surely then the problems of skepticism about knowledge remain when using the term belief as defined above, if you believe that you have knowledge (regardless of whether you actually do under JTB)?

 

Regarding your skeptical hypotheses: you say we shouldn't believe the strongest skeptical hypotheses because they are "unactionable". I will give you that one, though I think it's maybe conceivable for someone to have weird/incoherent beliefs like that and still function. The unactionable objection doesn't really seem to affect most of the weaker skeptical hypotheses at all, though; just believing (or even just being unsure whether) you live in a simulation, or that an evil demon is deceiving your senses, doesn't seem to contradict "actionable" beliefs at all; it's still possible to have a normal life in a simulation.

 

Also, it seems that what counts as reasonable evidence is subjective. Your examples kind of preach to the choir of someone with relatively normal beliefs, but could you actually convince someone who holds some of these skeptical hypotheses to change their beliefs? Probably not, if their beliefs seem reasonable to them. Their beliefs and what counts as evidence may seem arbitrary and weird, but so might yours to them. They might ask about your "falsifiable hypotheses": why can you be so sure that there are no bees in the suitcase, or how do you know your test to check the broken watch is reliable? I feel like ultimately you would end up resorting to things like "because it happened before" or "because I remember these things tend to happen"; then they might ask how you can show that this memory or knowledge is reliable, and that opens the door for them to say that your beliefs are just coming out of nowhere, or that you haven't shown or justified that they are definitely true and that the skeptic should believe them. I think if you cannot convince the skeptic then you haven't truly solved the problem, unless you are implying in the article that the skeptic should believe in their skeptical hypotheses based on their "reasonable beliefs". I guess that's fine, but it's unintuitive to me to pit these different hypotheses against each other if the message is just, essentially, believe whatever you think is reasonable. Neither would there seem to be much consequence to someone simply entertaining their uncertainty about an evil demon, or even crossing the threshold to belief, if doing so didn't have any effect on their "actionable" living.

 

I think an interesting point also is that these types of skeptical hypotheses are held by real people in some sense. Some people genuinely believe we are in a simulation, some people believe that the universe is purely mental (or physical), and many, many people believe in some kind of God. Is God that much different from a (non-)evil demon? Especially something like a creationist God, where all of the evidence for evolution and for the universe being billions of years old is just wrong.

 

Edit: Following from the last paragraph, it's also interesting to think how a Christian crisis of faith is kind of analogous to the skeptical problems raised by Descartes, but inverted. Christians are faced with the problem that it is conceivable that their world could have been created without the existence of a (non-)evil demon, and so everything that follows in their beliefs would also be false.

1

HamiltonBrae t1_jcdohzh wrote

All I've been talking about is how beliefs are supported by evidence, and I think that's how most people think. They change their minds if they feel that their beliefs are no longer supported by the evidence they see.

As for non-contradiction, I don't know. It seems an obvious part of my general thought the overwhelming majority of the time, but I do understand there are people with views, and who have created logics, that are not so strict about that. I am open to logical pluralism and/or nihilism.

1

HamiltonBrae t1_jc95vmi wrote

I don't know exactly what truth means; probably something similar to what many people think, "what is the case" or "what are the facts", but what does this mean? I don't think it can be specified in a way that reflects some objective standard.

"predictive modeling" maybe is a standard for belief (just in the sense of changing beliefs with regard to evidence), but it is not enough for truth.

>So did you come to this belief via predictive modeling?

Ha, this is almost like asking "did you come up with this belief via thinking?"

1

HamiltonBrae t1_jc7j920 wrote

>That’s a truth claim.

 

Yes, but if you're an anti-realist about truth then I don't think it really matters. I use words like true or false all the time, but that doesn't necessarily mean I am using them in the sense of truth/justification realism.

 

>So what model did you use to construct it?

 

What are you talking about, exactly?

1

HamiltonBrae t1_jc78t6u wrote

Not necessarily. Accuracy can just mean that the model you construct predicts data accurately: the data you see in the world is what the model tells you to expect. That doesn't necessarily mean the model is true. Nor does it necessarily mean there is a single true model that we can construct.

0

HamiltonBrae t1_jc2lvmz wrote

Many people are perfectly happy with anti-realism with regard to truth and justification. They might even say it is the best picture of the world, given philosophy's well-documented difficulties in determining these things.

0

HamiltonBrae t1_jbw1m4p wrote

A person looks at the map, and the map provides them with information that tells them what will happen if they move in a certain direction. A map can tell someone standing on a road whether, if they take the second left-hand turn, they will come across a church or an open field or a roundabout or another street. It's giving them information about something they cannot immediately access and don't know about. That is a form of prediction, made by the person using the information from the map, which is a model of the topographic features of some landscape.

If I have never been somewhere before and have no knowledge of its terrain, then I can think of the map as allowing me to make a prediction of the kind of terrain I might expect to see. It's my personal prediction. Maybe you will see it more easily if I use words like knowledge or expectation instead of prediction, but I would mean the exact same thing. I don't necessarily mean predicting something no one has ever seen before; this is about the personal knowledge of whoever is using the map. They get knowledge from the map and they use that knowledge to act, and that implies prediction. I am not going to embark on a route unless I know what's at the end of it, which means I can predict what will happen if I were to go down that route, which is essentially just equivalent to making factual statements about this route and its endpoint, which I cannot access immediately from my current position.

When I say prediction, I basically just mean the utilization of knowledge: knowing what will happen or what is the case beyond my immediate experience. A map trivially allows this to occur. The photo example works too: if you have never met someone before and you have seen their photo, then you suddenly have information about them which you can use in novel contexts; you might be able to recognize them walking down the street.

>map itself cannot predict because it is an inanimate objects

Well, so are models. No model is useful unless someone is there to initialize it and put in the parameters, the variables, and the initial conditions that need to be used to predict something.

1

HamiltonBrae t1_jbrnwcn wrote

>The person holding the map can use the map to understand what the earth will look like when they get to the portion of the terrain the map is meant to represent.

Yes, and this is prediction. I am using a map to predict what I might find if I walk in a certain direction. This is precisely what a map is used for: allowing us as individuals to predict things we do not have immediate perceptual access to, and it is in the same spirit as what any model is for. Maps and the notion of a "useful representation" are meaningless without this notion of prediction.

>It doesn’t predict where the roads might move to, what the buildings will look like in ten years, or how a new hill might form.

Neither does any other model. Models can be wrong; then you just change the model.

2

HamiltonBrae t1_jbodo49 wrote

Well, okay, now that I've been forced to think about this more deeply, I'll agree with OP that maps are about prediction. Why do you use a map? Because you don't know the area with any great familiarity and need the map to make predictions about what will happen if you walk in one direction or another. Prediction is primarily what the validity of a map relies on.

2

HamiltonBrae t1_jbmkg3d wrote

I just don't really see why unpredictability should be identified with free will. Seems like a very superficial way of thinking about it. I don't really think there is a possible definition of free will that is both coherent and non-trivial.

4

HamiltonBrae t1_jaub7ff wrote

Totally agree. Most of the time I was thinking about this thread, I was thinking about what "faith" actually means in this context. It's such a loaded term when what has been talked about in this thread could use more neutral and straightforward terms. I wonder if part of the use of the word is just to make the discussion seem more exciting.

1

HamiltonBrae t1_jauaoje wrote

I dunno, maybe we (or I) have a different definition of 'leap of faith', but the 'taking for granted' thing almost seems opposite to the idea of a leap of faith to me. This is kind of why I don't like the word faith in this context: it's such a loaded and inflated term when what people mean in this thread could be expressed with much clearer and more neutral words.

1

HamiltonBrae t1_jarc3l7 wrote

>he argues it’s a contradiction of trust our sensory experiences to tell us something about the world in a way we do not trust our moral, or emotional experiences, to reveal something about the world.

What if I have experiences that tell me that my sensory experiences should be treated differently from my emotional ones in how they relate to the world? It seems like the statement about what Goff said oversimplifies things.

 

Obviously, the knowledge we hold and act on doesn't require infallibility, and so, when we think about it, it's hard to actually rule out that any of our beliefs could be contradicted in the future (and this seems more likely for some beliefs than others); however, Rovelli is right that anyone who wants their knowledge to be as accurate as it can be needs to have their ideas open for debate. Neither do I think many everyday acts and things we do are adequately described, psychologically or cognitively, as a leap of faith.

6

HamiltonBrae t1_jacfhxm wrote

I don't see why it's not in principle possible to instill the complexities of human consciousness in an artificial form. All of your arguments are that it's complex, but that doesn't say it's not possible, and, to be honest, some of your examples, like animals dying, are about biology that has little to do with consciousness, so it seems like you're erecting a straw man. On the other hand, many of the things you do mention have been successfully studied and modelled to an extent computationally. There is even neuromorphic engineering geared at designing computational systems, implemented in machines, that are like neural systems.

4

HamiltonBrae t1_ja8djmw wrote

Hope this ramble doesn't seem too incoherent.

 

Yes, this type of example is interesting. It gets at the intuition that what is important for consciousness is relational or functional aspects, which can be reproduced in unintuitive ways. We think of our consciousness as needing to work in a rapid way, where neurons excite each other in succession almost instantly and computations in different parts of the brain happen simultaneously. I always get torn, because as long as the functional relationships between your units are preserved, why shouldn't the drawing example be conscious? It would definitely act like it from some perspective, producing behaviours like any other conscious being, just on a very slow timescale. Moreover, surely it's plausible to suggest that our consciousness is quite slow in the context of the physical mechanisms that must support it: think about all of the chemical processes that have to happen, the travelling that ions and neurotransmitters have to do, the transportation of vesicles and receptors, and the other processes involved in energy metabolism. All of these convoluted processes support our consciousness on a very fast timescale, just like the paper and the hand that is writing out the equations. It seems like, as long as no limitations of fundamental physics have been violated, the temporal scale at which things happen is, to a degree, relative.
 

Then again, because we perceive our consciousness as a kind of integrated, intrinsic whole, it's hard to imagine the drawing example having phenomenal consciousness, with all the implied time lags of writing things out... even though this kind of happens to us on a smaller scale in some sense.

 

What if you did all the equations sequentially, though, so that you just did each calculation and drawing and rubbed it out instantly, then did the next one... instead of having a 2D map out in front of you? It would behave in the same way computationally, but none of the states would actually exist simultaneously... that's a hard one for me.

 

Another interesting point is that that computational drawing, if it is like a human brain, will end up, with the right inputs, professing its own consciousness. This brings up redundancy in dualistic views of consciousness: why do I need to posit a phenomenal consciousness separate from the brain if a person's beliefs about being conscious have nothing to do with some phenomenal consciousness and are causally everything to do with brain computations, so much so that a drawing will profess consciousness by the exact same mechanisms? It would make phenomenal consciousness seem epiphenomenal, which many people find undesirable. It makes it increasingly difficult to distinguish myself from the 2D paper as being somehow more conscious, or to say that there needs to be a unique phenomenal ontology to explain my consciousness as opposed to brain mechanisms or whatever.

1

HamiltonBrae t1_ja7mq2t wrote

I think they are being too strict; a brain in a vat can conceivably be conscious without having a body. I think what better describes the things the author suggests as being needed for consciousness is that AI needs a sense of self, or a separation between things that are it and not it.

1

HamiltonBrae t1_j3egndi wrote

>If you choose not to accept it again then the falsification doesn't occur.

Yes, which is the same problem that induction and verification have. You can infer and verify something and then later find out that you can no longer accept it. It applies just as much to falsification as to verification.

>One experiment! If you do another experiment it's not a re-litigation of the previous falsification at time 1, it's actually just another falsification at time 2

Well, if you are talking about the same type of phenomenon explored several times, I don't see how it is different from the classic example in induction about the sun not rising the next day. In the induction example, the sun rises on day 1 but not day 2; in the falsification example, instance 1 might be the orbit of some planet and instance 2 might be the discovery that the orbit is affected by some other body. In neither example are verification or falsification capable of permanently cementing the status of the theories. The finding of the sun not rising on day 2 might even be reversed if it rises on all the days after that and you find some good explanation of why it did not rise on that particular day. My example with the planet orbits depicts a single incidence of Newtonian mechanics being falsified which can conceivably be reversed, or was actually reversed, depending on how correct my example was.

1

HamiltonBrae t1_j3ctdx5 wrote

It does need to generalize, because if this single counterexample were flawed then it would completely invalidate the whole thing. You need to be sure that this single counterexample is actually valid and that, if you repeated it ad infinitum, you would get the same result again and again, which you can't be sure of. There may be an irrelevant reason why the counterexample occurred. I think there is a very well-known example, which I can't remember specifically, of how the orbit of some planet in the solar system appeared to "falsify" Newtonian mechanics; however, what was not taken into account was another body affecting the orbit of that planet, which skewed the result, so it appeared to falsify the theory when it didn't. Now, surely for every event of falsification, to be one hundred percent sure you are falsifying what you think you are, you need to rule out every single one of these alternative explanations.

I think, ultimately, you have to verify that your falsification is valid.

1