You will obviously appreciate that these ‘hits’ are the tip of an iceberg of ‘misses’, though I am getting about one hit for every seven misses, which feels quite encouraging. ‘Smile’, ‘Locked In’ and ‘Curious Tom’ will appear in published anthologies; I can’t share the others online – because that would disqualify them from being entered in yet more competitions!
However, if you’d like a pdf version of any or all of the above by email for your personal reading, let me know.
Posted on by Peter
Conscious Entities has gone quiet for some while now. Initially this was due to slowly worsening health issues which I won’t relate in detail; both the illnesses and the treatments involved cause serious fatigue. In November I had to spend three weeks in hospital getting serious antiviral treatment.
In early December I was much better and came back to post something. To my surprise I found that part of my mind just wouldn’t co-operate (a disconcerting experience that might well have been the subject of an interesting post in itself!).
No doubt this is partly due to continuing lack of energy, but I must also admit that some of the energy I do have is currently being siphoned off into writing fiction – I recently dusted off some old short stories wot I wrote, and got placed or shortlisted in a number of competitions. I suspect my subconscious wants to do more of that just now.
I don’t think I’ve said my last word here. But things are likely to remain quieter than they have been over the last sixteen years. In the meantime, if you have been – thanks for reading.
Posted on by Peter
Benjamin Libet’s famous experiments have been among the most-discussed topics of neuroscience for many years. Libet’s experiments asked a subject to move their hand at a random moment of their choosing; he showed that the decision to move could be predicted on the basis of a ‘readiness potential’ detectable half a second before the subject reported having made the decision. The result has been confirmed many times since, and even longer gaps between prediction and reported decision have been claimed. The results are controversial because they seem to be strong scientific evidence against free will. If the decision had been made before the decider knew it, how could their conscious thought have been effective? My original account (14 years ago, good grief) is here.
Libet’s findings and their significance have been much disputed philosophically, but a new study reported here credibly suggests that the readiness potential (RP) or Bereitschaftspotential has been misunderstood all along.
That is not to say that the RP was completely irrelevant to the behaviour of Libet’s subjects. In the rather peculiar circumstances of the original experiment, where subjects are asked to pick a time at random, it is likely that a chance peak of activity would tip the balance for the unmotivated decision. But that doesn’t make it either the decision itself or the required cause of a decision. Outside the rather strange conditions of the experiment, it has no particular role. Perhaps most tellingly, when the original experiments were repeated with a second group of subjects who were asked not to move at all, it was impossible to tell the difference between the patterns of neural activity recorded; a real difference appeared only at the time the subjects in the first group reported having made a decision.
This certainly seems to change things, though it should be noted that Libet himself was aware that the RP could be consciously 'over-ruled', a phenomenon he called 'Free Won't'. It can, indeed, be argued that the significance of the results was always slightly overstated. We always knew that there must be neural precursors of any given decision, if not so neatly identifiable as the RP. So long as we believe the world is governed by deterministic physical laws (something I think it's metaphysically difficult to deny) the problem of Free Will still arises; indeed, essentially the same problem was kicked around for centuries in forms that relied on divine predestination or simple logical fatalism rather than science.
Nevertheless, it looks as though our minds work the way they seem to do rather more than we’ve recently believed. I’m not quite sure whether that’s disappointing or comforting.
Posted on by Peter
Tim Bollands recently tweeted his short solution to the Hard Problem (I mean, not literally in a tweet – it’s not that short). You might think that was enough to be going on with, but he also provides an argument for a pretty uncompromising kind of panpsychism. I have to applaud his boldness and ingenuity, but unfortunately I part ways with his argument pretty early on. The original tweet is here.
Bollands’ starting premise is that it’s ‘intuitively clear that combining any two non-conscious material objects results in another non-conscious object’. Not really. Combining a non-conscious Victorian lady and a non-conscious bottle of smelling salts might easily produce a conscious being. More seriously, I think most materialists would assume that conscious human beings can be put together by the gradual addition of neural tissue to a foetus that attains consciousness by a similarly gradual process, from dim sensations to complex self-aware thought. It’s not clear to me that that is intuitively untenable, though you could certainly say that the details are currently mysterious.
Bollands believes there are three conclusions we can draw: humans are not conscious; consciousness miraculously emerges; or consciousness is already present in the matter brains are made from. The first, he says, is evidently false (remember that); the second is impossible, given that putting unconscious stuff together can't produce consciousness; so the third must be true.
That points to some variety of panpsychism, and in fact Bollands goes boldly for the extreme version which attributes to individual particles the same kind of consciousness we have as human beings. On this view, your consciousness is really the consciousness of a single particle within you, which, owing to the complex processing of the body, has come to think of itself as the consciousness of the whole.
Brevity is perhaps the problem here; I don’t think Bollands has enough space to make his answers clear, let alone plausible. Nor is it really clear how all this solves the Hard Problem. Bollands reckons the Hard Problem is analogous to the Combination Problem for panpsychism, which he has solved by denying that any combination occurs (though his particles still somehow benefit from the senses and cognitive apparatus of the whole body). But the Hard Problem isn’t about how particles or nerves come together to create experience, it’s about how phenomenal experience can possibly arise from anything merely physical. That is, to put it no higher, at least as difficult to imagine for a single particle as for a large complex organism.
So I’m not convinced – but I’d welcome more contributions to the debate as bold as this one.
Posted on by Peter
Eric Holloway gives a brisk and entertaining dismissal of materialist theories of consciousness here, boldly claiming that none of them is plausible. I'm not sure his coverage is altogether comprehensive, but let's have a look at his arguments. He starts out by attacking panpsychism…
It’s really a bit of a straw man he’s demolishing here. I’m not sure panpsychists are necessarily committed to the view that particles are conscious (I’m not sure panpsychists are necessarily materialists, either), but I’ve certainly never run across anyone who thinks that the consciousness of a particle and the consciousness of a human being would be the same. It would be more typical to say that particles, or whatever the substrate is, have only a faint glow of awareness, or only a very simple, perhaps binary kind of consciousness. Clearly there’s then a need to explain how the simple kind of consciousness relates or builds up into our kind; not an easy task, but that’s the business panpsychists are in, and they can’t be dismissed without at least looking at their proposals.
'Another solution is that certain structures become conscious. But a structure is an abstract entity and there is an untold infinite number of abstract entities.'
This is perhaps Holloway’s riposte; he considers this another variety of panpsychism, though as stated it seems to me to encompass a lot of non-panpsychist theories, too. I wholeheartedly agree that conscious beings are not abstract entities, an error which is easy to fall into if you are keen on information or computation as the basis of your theory. But it seems to me hard to fight the idea that certain structural (or perhaps I mean functional) properties instantiated in particular physical beings are what amounts to consciousness. On the one hand there’s a vast wealth of evidence that structures in our brains have a very detailed influence on the content of our experiences. On the other, if there are no structural features, broadly described, that all physical instances of conscious entities have in common, it seems to me hard to avoid radical mysterianism. Even dualists don’t usually believe that consciousness can simply be injected randomly into any physical structure whatever (do they?). Of course we can’t yet say authoritatively what those structural features are.
Another option, says Holloway, is illusionism.
'But, if we are allowed to "solve" the problem that way, all problems can be solved by denying them. Again, that is an unsatisfying approach that "explains" by explaining away.'
Empty dismissal of consciousness would indeed not amount to much, but again that isn’t what illusionists actually say; typically they offer detailed ideas about why consciousness must be an illusion and varied proposals about how the illusion arises. I think many would agree with David Chalmers that explaining why people do believe in consciousness is currently where some of the most interesting action is to be found.
Emergence is another of Holloway's targets. I agree that complexity alone is not enough, though some people have been attracted to the idea, suggesting that the Internet, for example, might achieve consciousness. A vastly more sophisticated form of the same kind of thinking perhaps underlies the Integrated Information theory. But emergence can mean more than that; in particular it might say that when systems have enough structural complexity of the right kind (frantic hand-waving), they acquire interesting properties (meaningful, experiential ones) that can only be addressed on a higher level of interpretation. That, I think, is true; it just doesn't help all that much.
‘In 1989 I was invited to go to Los Angeles in response to a request from the Dalai Lama, who wished to learn some basic facts about the brain.’
Besides being my own selection for 'name drop of the year', this remark from Patricia Churchland's new book Conscience perhaps tells us that we are not dealing with someone who suffers much doubt about their own ability to explain things. That's fair enough; if we weren't radically overconfident about our ability to answer difficult questions better than anyone else, it's probable no philosophy would ever get done. And Churchland modestly goes on to admit to asking the Buddhists some dumb questions ('What's your equivalent of the Ten Commandments?'). Alas, I think some of her views on moral philosophy might benefit from further reflection.
Her basic proposition is that human morality is a more complex version of the co-operative and empathetic behaviour shown by various animals. There are some interesting remarks in her account, such as a passage about human scrupulosity, but she doesn’t seem to me to offer anything distinctively new in the way of a bridge between mere co-operation and actual ethics. There is, surely, a gulf between the two which needs bridging if we are to explain one in terms of the other. No doubt it’s true that some of the customs and practices of human beings may have an inherited, instinctive root; and those practices in turn may provide a relevant backdrop to moral behaviour. Not morality itself, though. It’s interesting that a monkey fobbed off with a reward of cucumber instead of a grape displays indignation, but we don’t get into morality until we ask whether the monkey was right to complain – and why.
Another grouping which strikes me as odd is the way Churchland lumps rationalists in with religious believers (they must be puzzled to find themselves together), with neurobiology alone on the other side. I wouldn't be so keen to declare myself the enemy of rational argument; but the rationalists are really the junior partners, it seems, people who hanker after the old religious certainties and deludedly suppose they can run up their own equivalents. Just as people who deny personhood sometimes seem to be motivated mainly by a desire to denounce the soul, I suspect Churchland mainly wants to reject Christian morality, with the baby of reasoned ethics getting thrown out along with the theological bathwater.
She seems to me particularly hard on Kant. She points out, quite rightly, that his principle of acting on rules you would be prepared to have made universal, requires the rules to be stated correctly; a Nazi, she suggests, could claim to be acting according to consistent rules if those rules were drawn up in a particular way. We require the moral act to be given its correct description in order for the principle to apply. Yes; but much the same is true of Aristotle’s Golden Mean, which she approves. ‘Nothing to excess’ is fine if we talk about eating or the pursuit of wealth, but it also, taken literally, means we should commit just the right amount of theft and murder; not too much, but not too little, either. Churchland is prepared to cut Aristotle the slack required to see the truth behind the defective formulation, but Kant doesn’t get the same accommodation. Nor does she address the Categorical Imperative, which is a shame because it might have revealed that Kant understands the kind of practical decision-making she makes central, even though he says there’s more to life than that.
Here’s an analogy. Churchland could have set out to debunk physics in much the way she tackles ethics. She might have noted that beavers build dams and ants create sophisticated nests that embody excellent use of physics. Our human understanding of physics, she might have said, is the same sort of collection of rules of thumb and useful tips; it’s just that we have so many more neurons, our version is more complex. Now some people claim that there are spooky abstract ‘laws’ of physics, like something handed down by God on tablets; invisible entities and forces that underlie the behaviour of material things. But if we look at each of the supposed laws we find that they break down in particular cases. Planes sail through the air, the Earth consistently fails to plummet into the Sun; so much for the ‘law’ of gravity! It’s simply that the physics practices of our own culture come to seem almost magical to us; there’s no underlying truth of physics. And worse, after centuries of experiment and argument, there’s still bitter disagreement about the answers. One prominent physicist not so long ago said his enemies were ‘not even wrong’!
No-one, of course, would be convinced by that, and we really shouldn’t be convinced by a similar case against ethical theory.
That implicit absence of moral truth is perhaps the most troubling thing about Churchland’s outlook. She suggests Kant has nothing to say to a consistent Nazi, but I’m not sure what she can come up with, either, except that her moral feelings are different. Churchland wraps up with a reference to the treatment of asylum seekers at the American border, saying that her conscientious feelings are fired up. But so what? She’s barely finished explaining why these are just feelings generated by training and imitation of her peer group. Surely we want to be able to say that mistreatment of children is really wrong?
Posted on by Peter
An interesting piece by William Lycan gives a brisk treatment of the question of whether consciousness comes in degrees, or is the kind of thing you either have or don't. In essence, Lycan thinks the answer depends on what type of consciousness you're thinking of. He distinguishes three: basic perceptual consciousness, 'state consciousness' where we are aware of our own mental state, and phenomenal consciousness. In passing, he raises interesting questions about perceptual consciousness. We can assume that animals, broadly speaking, probably have perceptual, but not state consciousness, which seems primarily if not exclusively a human matter. So what about pain? If an animal is in pain, but doesn't know it is in pain, does that pain still matter?
Leaving that one aside as an exercise for the reader, Lycan’s answer on degrees is that the first two varieties of consciousness do indeed come in degrees, while the third, phenomenal consciousness, does not. Lycan gives a good ultra-brief summary of the state of play on phenomenal consciousness. Some just deny it (that represents a ‘desperate lunge’ in Lycan’s view); some, finding it undeniable, lunge the other way – or perhaps fall back? – by deciding that materialism is inadequate and that our metaphysics must accommodate irreducibly mental entities. In the middle are all the people who offer some partial or complete explanation of phenomenal consciousness. The leading view, according to Lycan, is something like his own interesting proposal that our introspective categorisation of experience cannot be translated into ordinary language; it’s the untranslatability that gives the appearance of ineffability. There is a fourth position out there beyond the reach of even the most reckless lunge, which is panpsychism; Lycan says he would need stronger arguments for that than he has yet seen.
Getting back to the original question, why does Lycan think the answer is, as it were, 'yes, yes, no'? In the case of perceptual consciousness, he observes that different animals perceive different quantities of information and make greater or lesser numbers of distinctions. In that sense, at least, it seems hard to argue against consciousness occurring in degrees. He also thinks animals with more senses will have higher degrees of perceptual consciousness. He must, I suppose, be thinking here of the animal's overall, global state of consciousness, though I took the question to be about, for example, perception of a single light, in which case the number of senses is irrelevant (though I think the basic answer remains correct).
On state consciousness, Lycan argues that our perception of our mental states can be dim, vivid, or otherwise varied in degree. There’s variation in actual intensity of the state, but what he’s mainly thinking of is the degree of attention we give it. That’s surely true, but it opens up a couple of cans of worms. For one thing, Lycan has already argued that perceptual states come in degrees by virtue of the amount of information they embody; now state consciousness which is consciousness of a perceptual state can also vary in degree because of the level of attention paid to the perceptual state. That in itself is not a problem, but to me it implies that the variability of state consciousness is really at least a two-dimensional matter. The second question is, if we can invoke attention when it comes to state consciousness, should we not also be invoking it in the case of perceptual consciousness? We can surely pay different degrees of attention to our perceptual inputs. More generally, aren’t there other ways in which consciousness can come in degrees? What about, for example, an epistemic criterion, ie how certain we feel about what we perceive? What about the complexity of the percept, or of our conscious response?
Coming to phenomenal consciousness, the brevity of the piece leaves me less clear about why Lycan thinks it alone fails to come in degrees. He asserts that wherever there is some degree of awareness of one’s own mental state, there is something it’s like for the subject to experience that state. But that’s not enough; it shows that you can have no phenomenal consciousness or some, but not that there’s no way the ‘some’ can vary in degree. Maybe sometimes there are two things it’s like? Lycan argued that perceptual consciousness comes in degrees according to the quantity of information; he didn’t argue that we can have some information or none, and that therefore perceptual consciousness is not a matter of degree. He didn’t simply say that wherever there is some quantity of perceptual information, there is perceptual consciousness.
It is unfortunately very difficult to talk about phenomenal experience. Typically, in fact, we address it through a sort of informal twinning. We speak of a red quale, though the red part is really the objective bit that can be explained by science. It seems to me a natural prima facie assumption that phenomenal experience must 'inherit' the variability of its objective counterparts. Lycan might say that, even if that were true, it isn't what we're really talking about. But I remain to be convinced that phenomenal experience cannot be categorised by degree according to some criteria.
Posted on by Peter
Will the mind ever be fully explained by neuroscience? A good discussion from IAI, capably chaired by Barry C. Smith.
Raymond Tallis puts intentionality at the centre of the question of the mind (quite rightly, I think). Neuroscience will never explain meaning or the other forms of intentionality, so it will never tell us about essential aspects of the mind.
Susanna Martinez-Conde says we should not fear reductive explanation. Knowing how an illusion works can enhance our appreciation rather than undermining it. Our brains are designed to find meanings, and will do so even in a chaotic world.
Markus Gabriel says we are not just a pack of neurons – trivially, because we are complete animals, but more interestingly because of the contents of our mind – and he broadly agrees that intentionality is essential. London is not contained in my head, so aliens could not decipher from my neurons that I was thinking I was in London. He adds the concept of Geist – the capacity to live according to a conception of ourselves as a certain kind of being – which is essential to humanity, but relies on our unique mental powers.
Martinez-Conde points out that we can have the experience of being in London without in fact being there; Tallis dismisses such ‘brain in a vat’ ideas; for the brain to do that it must have had real experiences and there must be scientists controlling what happens in the vat. The mind is irreducibly social.
My sympathies are mainly with Tallis, but against him it can be pointed out that while neuroscience has no satisfactory account of intentionality, he hasn’t got one either. While the subject remains a mystery, it remains possible that a remarkable new insight that resolves it all will come out of neuroscience. The case against that possibility, I think, rests mainly on a sense of incredulity: the physical is just not the sort of thing that could ever explain the mental. We find this in Brentano of course, and perhaps as far back as Leibniz’s mill, or in the Cartesian point that mental things have no extension. But we ought to admit that this incredulity is really just an intuition, or if you like, a failure to be able to imagine. It puzzles me sometimes that numbers, those extensionless abstract concepts, can nevertheless drive the behaviour of a computer. But surely it would be weird to say they don’t, or that how computers do arithmetic must remain an unfathomable mystery.
Posted on by Peter
Ian McEwan’s latest book Machines Like Me has a humanoid robot as a central character. Unfortunately I don’t think he’s a terrifically interesting robot; he’s not very different to a naïve human in most respects, except for certain unlikely gifts: an ability to discuss literature impressively and an ability to play the stock market with steady success. No real explanation for these superpowers is given; it’s kind of assumed that direct access to huge volumes of information together with a computational brain just naturally make you able to do these things. I don’t think it’s that easy, though in fairness these feats only resemble the common literary trick where our hero’s facility with languages or amazingly retentive memory somehow makes him able to perform brilliantly at tasks that actually require things like insight and originality.
The robot is called Adam; twenty-five of these robots have been created, twelve Adams and thirteen Eves, on the market for a mere £86,000 each. This doesn’t seem to make much commercial sense; if these are prototypes you wouldn’t sell them; if you’re ready to market them you’d be gearing up to make thousands of them, at least. Surely you’d charge more, too – you could easily spend £86k on a fancy new car. But perhaps prices are misleading, because we are in an alternate world.
Turing appears in the novel, and I hate the way he’s portrayed. One of McEwan’s weaknesses, IMO, is his reverence for the British upper class, and here he makes Sir Alan into the sort of grandee he admires; a lordly fellow with a large house in North London who summons people when he wants information, dismisses them when he’s finished, and hands out moral lectures. Obviously I don’t know what Turing was really like, but to me his papers give the strong impression of an unassuming man of distinctly lower middle class origins; a far more pleasant person than the arrogant one we get in the book.
McEwan doesn’t give us any great insight into how Adam comes to have human-like behaviour (and surely human-like consciousness). His fellow robots are prone to a sort of depression which leads them to a form of suicide; we’re given the suggestion that they all find it hard to deal with human moral ambiguity, though it seems to me that humans in their position (enslaved to morally dubious idiots) might get a bit depressed too. As the novel progresses, Adam’s robotic nature seems to lose McEwan’s interest anyway, as a couple of very human plots increasingly take over the story.
McEwan got into trouble for speaking dismissively of science fiction; is Machines Like Me SF? On a broad reading I’d say why not? – but there is a respectable argument to be made for the narrower view. In my youth the genre was pretty well-defined. There were the great precursors: Jules Verne, H.G. Wells, and perhaps Mary Shelley; but SF was mainly the product of the American pulp magazines of the fifties and sixties, a vigorous tradition that gave rise to Asimov, Clarke, and Heinlein at the head of a host of others. That genre tradition is not extinct, upheld today by, for example, the beautiful stories of Ted Chiang.
At the same time, though, SF concepts have entered mainstream literature in a new way. Some recent novels, for example, obviously make brilliant use of an SF concept, but do so in the service of what is essentially a love story in the literary mainstream of books about people getting married, which goes all the way back to Pamela. There’s a lot to discuss here, but keeping it brief I think the new currency of SF ideas comes from the impact of computer games. The nerdy people who create computer games read SF and use SF concepts; but even non-nerdy people play the games, and in that way they pick up the ideas, so that novelists can now write about, say, a ‘portal’ and feel confident that people will get the idea pretty readily; a novel that has people reliving bits of their lives in an attempt to get them right (like The Seven Deaths Of Evelyn Hardcastle) will not get readers confused the way it once would have done. But that doesn’t really make Evelyn Hardcastle SF.
I think that among other things this wider dispersal of a sort of SF-aware mentality has led to a vast improvement in the robots we see in films and the like. It used to be the case that only one story was allowed: robots take over. Latterly films like Ex Machina or Her have taken a more sophisticated line; the TV series Westworld, though back with the take-over story, explicitly used ideas from Julian Jaynes.
So, I think we can accept that Machines Like Me stands outside the pure genre tradition but benefits from this wider currency of SF ideas. Alas, in spite of that we don’t really get the focus on Adam’s psychology that I should have preferred.
Feeling free
Posted on by Peter
Eddy Nahmias recently reported on a ground-breaking case in Japan where a care-giving robot was held responsible for agreeing to a patient’s request for a lethal dose of drugs. Such a decision surely amounts to a milestone in the recognition of non-human agency; but fittingly for a piece published on 1 April, the case was in fact wholly fictional.
However, the imaginary case serves as an introduction to some interesting results from the experimental philosophy Nahmias has been prominent in developing. The research – and I take it to be genuine – aims not at clarifying the metaphysics or logical arguments around free will and responsibility, but at discovering how people actually think about those concepts.
The results are interesting. Perhaps not surprisingly, people are more inclined to attribute free will to robots when told that the robots are conscious. More unexpectedly, they attach weight primarily to subjective and especially emotional conscious experience. Free will is apparently thought to be more a matter of having feelings than it is of neutral cognitive processing.
Why is that? Nahmias offers the reasonable hypothesis that people think free will involves caring about things. Entities with no emotions, it might be, don’t have the right kind of stake in the decisions they make. Making a free choice, we might say, is deciding what you want to happen; if you don’t have any emotions or feelings you don’t really want anything, and so are radically disqualified from an act of will. Nahmias goes on to suggest, again quite plausibly, that reactive emotions such as pride or guilt might have special relevance to the social circumstances in which most of our decisions are made.
I think there’s probably another factor behind these results; I suspect people see decisions based on imponderable factors as freer than others. The results suggest, let’s say, that the choice of a lover is a clearer example of free will than the choice of an insurance policy; that might be because the latter choice has a lot of clearly calculable factors to do with payments versus benefits. It’s not unreasonable to think that there might be an objectively correct choice of insurance policy for me in my particular circumstances, but you can’t really tell someone their romantic inclinations are based on erroneous calculations.
I think it’s also likely that people focus primarily on interesting cases, which are often instances of moral decisions; those in turn often involve self-control in the face of strong desires or emotions.
Another really interesting result is that while philosophers typically see freedom and responsibility as two sides of the same coin, people’s everyday understanding may separate the two. It looks as though people do not generally distinguish all that sharply between the concepts of being causally responsible (it’s because of you it happened, whatever your intentions) and morally responsible (you are blameworthy and perhaps deserve punishment). So, although people are unwilling to say that corporations or unconscious robots have free will, they are prepared to hold them responsible for their actions. It might be that people generally are happier with concepts such as strict liability than moral philosophers are; or of course, we shouldn’t rule out the possibility that people just tend to suffer some mild confusion over these issues.
Thought-provoking stuff, anyway, and further evidence that experimental philosophy is a tool we shouldn’t reject.