Some guy who thinks about thinking.

A whole bunch of stuff that makes little difference to everyday life.

Neuroscience and Free Will: Part 2: Why Neuroscientific Critiques of Free Will Appear Methodologically Unsound.

by metacognizant

In part two of this series, I will discuss the methodological problems with studies such as Libet’s and show that there are alternative ways to interpret the data. It’ll be part three before I sketch an account of a neuroscientific concept of free will, and part four before I offer any positive data for it.

Studies in the vein of Libet et al. do seem to pose a problem for free will. However, there is good reason to believe that–upon closer inspection–they are not methodologically sound. The first strong hint of this came in 2010, when a study showed that in the Libet paradigm there is not only a readiness potential (RP) before a conscious decision to move, but also an RP before a conscious decision not to move (Trevena & Miller, 2010). This suggests that the RP is not an unconscious neural determinant of a voluntary action–that something in the experimental paradigm, rather than unconscious preparation of the action, may be responsible for the RP. Two further experiments by Miller, Shepherdson, and Trevena (2011) furnished additional support for that idea. In their article, Miller et al. demonstrate through two experiments–each with two conditions–that the cognitive activity of monitoring a clock seems to be responsible for the RP’s occurrence. In each experiment, the condition in which participants had to monitor a clock in order to report the time of their action produced an RP before the action, but the RP was absent in the conditions without a clock–indicating that those actions may have truly been self-chosen. This result–that the cognitive activity of monitoring a clock, or of attending to an action’s timing, drives the RP–finds support from a very similar study showing that actions whose timing was unattended lacked an early-stage RP (Baker, Mattingley, Chambers, & Cunnington, 2011). These studies suggest that cognitive processes heavily influence the RP, and that it likely reflects not the neural preparation of a chosen action prior to conscious awareness of that choice, but other cognitive factors tied to the experiment itself. More research needs to be done in this area, but with three experiments indicating that the Libet study and experiments like it may be methodologically flawed by failing to account for cognitive factors specific to the experimental paradigm, there is enough reason to maintain skepticism toward the claim that these studies show choices arising prior to conscious will.

A quick reference to another two studies similar to Libet-type studies is necessary here, because the methodological issues above do not at first appear to apply to them. Soon, Brass, Heinze, and Haynes (2008) performed an experiment that appeared to show a neural determinant of an action up to 7 to 10 seconds before the action is consciously chosen. This study was also replicated (Bode et al., 2011). These studies analyzed fMRI data with highly complex algorithms, and those algorithms were able to predict from the fMRI, 7 to 10 seconds before the response was consciously chosen, whether it would be a right or a left response, with up to 57% accuracy (in the Bode study) or 60% accuracy (in the Soon study). However, a recent study has shown that these studies may also have been confounded by cognitive factors (Lages & Jaworska, 2012). Lages and Jaworska note that most untrained individuals have an extremely difficult time producing truly random sequences, and the algorithms may simply have picked up neural markers of intending either to repeat or to switch responses. Indeed, using a purely behavioral version of the same experimental paradigm with the same inclusion criteria, Lages and Jaworska were able to predict the actions of participants before they made them with 61.6% accuracy–higher than that of either Soon’s study or Bode’s. The studies of Soon et al. and Bode et al. thus seem to ignore cognitive factors related to whether a participant intended to respond in a certain way.
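To make Lages and Jaworska’s point concrete, here is a minimal sketch–my own illustration, not their actual analysis–of how a purely behavioral predictor can exploit the non-randomness of human choice sequences. The simulated “participant” below simply switches sides more often than chance (a well-documented human tendency), and the predictor guesses each response from the previous one; everything here, including the switch rate, is an assumption for illustration.

```python
import random

def simulate_participant(n_trials, p_switch=0.7):
    """Simulate a participant who tries to be random but switches
    sides more often than chance (a well-documented human bias)."""
    choices = [random.choice("LR")]
    for _ in range(n_trials - 1):
        prev = choices[-1]
        if random.random() < p_switch:
            choices.append("R" if prev == "L" else "L")
        else:
            choices.append(prev)
    return choices

def predict_from_history(choices):
    """Predict each choice from the previous one, using only the
    switch rate observed so far -- no brain data involved."""
    correct = 0
    switches, total = 0, 0
    for i in range(1, len(choices)):
        # Current estimate of the participant's switch probability.
        p_switch_est = switches / total if total else 0.5
        prev = choices[i - 1]
        guess = ("R" if prev == "L" else "L") if p_switch_est >= 0.5 else prev
        if guess == choices[i]:
            correct += 1
        # Update the running estimate after seeing the outcome.
        switches += int(choices[i] != prev)
        total += 1
    return correct / (len(choices) - 1)

if __name__ == "__main__":
    random.seed(1)
    acc = predict_from_history(simulate_participant(1000))
    print(f"Prediction accuracy from behaviour alone: {acc:.1%}")  # well above 50%
```

The point is simply that above-chance prediction does not by itself demonstrate an unconscious neural determinant of choice: a sufficiently non-random sequence can be predicted without looking at the brain at all.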

More research is clearly necessary in this area. However, given the methodological problems that appear to exist in neuroscientific studies purporting to show that free will is an illusion, there is good reason to regard that claim with skepticism–especially when there is strong evidence that we can choose our own actions (in part four, to come).

Updates on the end of the semester.

by metacognizant

As this semester draws to a close, I will start to be on here more and more. Starting at the beginning of next week I’ll have some time to sit down and write some good entries. One of my papers has been accepted for publication and should be published sometime next week. I’ve just finished my paper on neuroscience and free will (totaling 33 pages) and I’ll work on getting that published soon.

Look for more activity starting again soon.

Neuroscience and Free Will: An Update

by metacognizant

Hi everyone, you’ve probably noticed that it’s been ridiculously long since I’ve put up an entry. There are a few reasons why. First, I’ve been very busy with school; I took the GRE on the twentieth (and I did very well, thank goodness), and I hardly had any time to sleep with the workload I had. Second, I realized that I want to bolster my graduate school resume, so I’ve decided to withdraw my multi-part blog series on free will. I got the OK to write up my thoughts on free will for a school assignment. After I finish the paper, I’m going to submit it to an undergraduate neuroscience research journal. Once it gets accepted, I’ll link to it here for everyone to read. There’s another paper I wrote that should be getting accepted pretty quickly, and I’ll link to it here once it does.

With that said, you probably shouldn’t expect too much to be written for this blog until school’s done (at the end of April). I only have this last semester left to write research papers as an undergraduate, and I want to churn out as many as I can to strengthen my graduate school resume. I’ll link to anything I complete as it’s finished, but most of the substantive posts on this blog will be postponed until May.

Thanks for your patience.

Neuroscience and Free Will: Part 1: Why Neuroscience Poses a Threat to Free Will.

by metacognizant

Because I have so much to say on neuroscience and free will, I’ve decided to break this up into a series. I believe it’ll be a two to three part series, but it might be any number of posts. In this first post, I want to face the issue head on, and engage rather than avoid the research that usually allows individuals to declare that we’re not really free at all.

There are few more qualified than Adina Roskies (Ph.D. Neuroscience, Ph.D. Philosophy) to speak on the philosophy of mind, so perhaps I’m getting myself into more than I can handle by beginning this series with a critique of her. She has written a lot on free will, and with a good part of what she’s written, I agree. For instance, in her article “Neuroscientific challenges to free will and responsibility” (Roskies, 2005), she argues that neuroscience cannot adjudicate this debate–that is, it cannot settle the matter by itself. This is because the notion of free will requires some further metaphysical ground (such as indeterminism), and the most that neuroscience can show is that our brains seem to be deterministic–as she puts it, neuroscience provides a mechanism for behavior. At best, neuroscience can suggest that the brain seems deterministic; it cannot verify that assumption on a large scale.

While Roskies has a lot to say regarding free will (Roskies, 2010a), I want to focus specifically on studies that seem to indicate that we don’t have free will, and so I’ll home in on the Libet studies, as I believe those pose the biggest threat to free will (and answering them will also answer other issues surrounding neuroscience and free will).

The neuroscientist Benjamin Libet is famous for two sets of experiments, but only one has direct bearing on our discussion here. The relevant study asked individuals to move their fingers whenever they felt the urge to move and to report the time at which they decided to move, all while their brain activity was recorded with EEG (Libet, Gleason, Wright, & Pearl, 1983). What this study found was that a steady, consistent buildup of neural activity–the readiness potential–preceded an individual’s reported decision to move by hundreds of milliseconds. It seemed, according to this experiment, that the actions of these individuals were unconsciously initiated and that the individuals did not have any control in initiating them. Libet was quick to point out that these individuals could veto their actions within a critical time period (Libet, 1985), but even this concession only leaves us with “free won’t,” rather than free will.
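As an aside for readers unfamiliar with how a readiness potential is actually obtained: it emerges only by “back-averaging”–cutting out EEG segments time-locked to each movement and averaging them, so that the slow pre-movement buildup rises out of the noise. Here is a minimal sketch of that procedure using synthetic data; the sampling rate, window lengths, and signal sizes are assumptions of mine, not Libet’s actual recording parameters.

```python
import numpy as np

FS = 250                      # sampling rate in Hz (assumed)
PRE, POST = 2.0, 0.5          # seconds before/after movement onset to keep

def extract_epochs(eeg, onset_samples):
    """Cut out EEG segments time-locked to each movement onset."""
    pre, post = int(PRE * FS), int(POST * FS)
    epochs = [eeg[t - pre:t + post] for t in onset_samples
              if t - pre >= 0 and t + post <= len(eeg)]
    return np.array(epochs)

def readiness_potential(eeg, onset_samples):
    """Back-average: the mean across epochs reveals the slow
    pre-movement build-up that is invisible in single trials."""
    return extract_epochs(eeg, onset_samples).mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_samples = 600 * FS
    eeg = rng.normal(0, 10.0, n_samples)          # background noise (microvolts)
    onsets = np.arange(10 * FS, n_samples - 5 * FS, 12 * FS)
    ramp = np.linspace(0, -8.0, int(PRE * FS))    # slow negative drift before each movement
    for t in onsets:                              # add a synthetic "RP" before each movement
        eeg[t - len(ramp):t] += ramp
    rp = readiness_potential(eeg, onsets)
    print("Mean amplitude 1.5-0 s before movement:", rp[:int(PRE * FS)][-int(1.5 * FS):].mean())
```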

Roskies (2010b, pp. 14-16), in another article, criticizes this interpretation of Libet’s experiment, arguing that we have no grounds for taking the neural signal that precedes the action and the decision to act to be the neural signal of action initiation rather than the neural signal of volition. However, this interpretation of Libet’s experiment does not work, for two reasons. First, if we needed a neural precursor in order to decide to act, then either that neural precursor was not itself free, in which case we don’t have control over our volition, or it was itself freely chosen, in which case it would in turn need its own neural precursor, and so on ad infinitum. Second, recent evidence from single-neuron recordings has shown that Libet’s neural precursor originates in the supplementary motor area, and that progressive recruitment of neurons starts as early as 1500 ms (one and a half seconds!) before individuals decide to move (Fried, Mukamel, & Kreiman, 2011). There can be little doubt, then, that the neural events preceding one’s decision to move are actually the neural events of action initiation rather than a neural precursor to volition.
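To give a rough feel for what “progressive recruitment” makes possible–and this is my own toy illustration, not Fried et al.’s analysis–if firing rates across a recorded population ramp up before movement, even a very simple detector watching the population average can flag an upcoming movement well before the reported moment of decision:

```python
import numpy as np

def population_rate(spike_counts):
    """Average firing rate across a recorded population, per time bin."""
    return spike_counts.mean(axis=0)

def predict_onset(rate, baseline_bins=50, z_threshold=4.0):
    """Return the first time bin at which the population rate exceeds
    baseline by z_threshold standard deviations."""
    mu, sd = rate[:baseline_bins].mean(), rate[:baseline_bins].std()
    above = np.where(rate > mu + z_threshold * sd)[0]
    return int(above[0]) if above.size else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_neurons, n_bins = 60, 300          # 300 bins of 10 ms = 3 s of data
    counts = rng.poisson(1.0, (n_neurons, n_bins)).astype(float)
    # Progressive recruitment: rates ramp up over the last 150 bins (1.5 s).
    ramp = np.linspace(0, 2.0, 150)
    counts[:, -150:] += ramp
    onset_bin = predict_onset(population_rate(counts))
    print(f"Movement flagged {10 * (n_bins - onset_bin)} ms before movement time")
```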

Libet’s study has been replicated numerous times, and it has withstood every angle of criticism that it has received over the years (Banks & Pockett, 2007). Libet’s study clearly seems to show that we do not consciously control even the most rudimentary aspects of action initiation.

It is for this reason that neuroscience poses a threat to free will.

Before long, your private inner world won’t be private anymore.

by metacognizant

In a phenomenal recent study, scientists mapped and modeled the responses of a small area of the visual cortex while individuals watched movies (Nishimoto, Vu, Naselaris, Benjamini, Yu, & Gallant, 2011). A computer program was then used to decode activity from this brain region and reconstruct what was being seen. Individuals were exposed to new movies, and their brain activity was recorded while they watched. Below is a side-by-side view of the original movie and the reconstruction of it based upon their fMRI activity:

As you can see, this study was able to reconstruct mental images with remarkable accuracy based solely upon fMRI data. The creepiness of this increases when you recall that they only mapped and modeled a small area of the visual cortex; modeling a larger part of it would only increase the accuracy of the reproduced images. The point of this study was to demonstrate that dynamic visual experience can be reconstructed from brain activity measured under relatively normal viewing conditions with current technology. Now that this technology is in place, the researchers have said elsewhere that they intend to develop it further so that we can reconstruct the subjective experiences of coma patients or patients with locked-in syndrome. Personally, I think lie detectors might get a lot more intricate pretty soon.
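The general logic behind this kind of decoding is worth a quick sketch. The actual study fit detailed motion-energy encoding models and reconstructed movies using a large library of natural clips as a prior; the toy version below–all names, sizes, and numbers are my own assumptions–just fits a linear encoding model from stimulus features to voxel responses with ridge regression, then identifies which of several new clips was being watched by matching predicted to observed activity.

```python
import numpy as np

def fit_encoding_model(features, responses, alpha=1.0):
    """Ridge regression from stimulus features to each voxel's response.
    features: (n_timepoints, n_features); responses: (n_timepoints, n_voxels)."""
    n_feat = features.shape[1]
    return np.linalg.solve(features.T @ features + alpha * np.eye(n_feat),
                           features.T @ responses)      # (n_features, n_voxels)

def identify_clip(w, candidate_features, observed_response):
    """Pick the candidate clip whose predicted voxel pattern best
    correlates with the actually observed pattern."""
    scores = []
    for feats in candidate_features:
        predicted = feats @ w
        scores.append(np.corrcoef(predicted.ravel(), observed_response.ravel())[0, 1])
    return int(np.argmax(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_time, n_feat, n_vox = 500, 40, 200
    true_w = rng.normal(size=(n_feat, n_vox))            # "ground truth" brain mapping
    train_feats = rng.normal(size=(n_time, n_feat))
    train_resp = train_feats @ true_w + rng.normal(0, 5.0, (n_time, n_vox))
    w = fit_encoding_model(train_feats, train_resp, alpha=10.0)

    candidates = [rng.normal(size=(30, n_feat)) for _ in range(5)]
    watched = 3                                           # the clip actually "viewed"
    observed = candidates[watched] @ true_w + rng.normal(0, 5.0, (30, n_vox))
    print("Identified clip:", identify_clip(w, candidates, observed))  # expect 3
```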

Now, for any regular readers, my next entry will take a little bit to get up. I’m going to try to write a pretty thorough and helpful entry on neuroscience and free will. It’ll be worth the wait, I promise.

Uploading Memory.

by metacognizant

I remember when I first watched this clip. I thought that, while it was impossible, it was nevertheless one of the coolest ideas ever.

Now is when neuroscience is starting to get scary cool.

Some of you might be thinking that I can’t be hinting that uploading memories into someone’s brain is within the realm of reality, but I am.

Researchers at the University of Southern California recently designed a cortical neural prosthesis (an artificial neural system) for restoring and enhancing memory (Berger, Hampson, Song, Goonawardena, Marmarelis, & Deadwyler, 2011). This isn’t a typical artificial neural network; it is a device capable of recording neural signals during a task and then producing those signals itself. In one experiment, the prosthesis recorded the activity of rats successfully completing a complex, trial-by-trial task that could not be done on the first try:

The behavioral testing apparatus for the DNMS [delayed-nonmatch-to-sample] task is the same as reported in prior studies (Deadwyler and Hampson 1997, 2004, Hampson et al 2008) and consists of a 43 × 43 × 50 cm Plexiglas chamber with two retractable levers (left and right) positioned on either side of a water trough on one wall of the chamber (figure 1). A photocell with a cue light activated nose-poke (NP) device was mounted in the center of the opposite wall. The chamber was housed inside a sound-attenuated cubicle. The DNMS task consisted of three phases: (1) sample phase: in which a single lever was presented randomly in either the left or right position; when the animal pressed the lever, the event was classified as sample response (SR), (2) delay phase: of variable duration (1–30 s) in which a nosepoke (NP) into a photocell was required to advance to the (3) nonmatch phase: in which both levers were presented and a response on the lever opposite to the SR, i.e., a nonmatch response (NR), was required for delivery of a drop of water (0.4 ml) in the trough. A response in the nonmatch phase on the lever in the same position as the SR (match response) constituted an ‘error’ with no water delivery and a turning off of the lights in the chamber for 5.0 s. Following the reward delivery (or an error) the levers were retracted for 10.0 s before the sample lever was presented to begin the next trial. Individual performance was assessed as % correct NRs with respect to the total number of trials (100–150) per daily (1–2 h) session.
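To make the quoted trial structure easier to follow, here is a minimal sketch of the DNMS trial logic in code. It only restates the procedure quoted above (sample press, nose-poke through a variable delay, reward for pressing the opposite lever); the “agent” that responds is a stand-in of my own, not anything from the paper.

```python
import random

def run_dnms_trial(agent, min_delay=1.0, max_delay=30.0):
    """One delayed-nonmatch-to-sample trial, following the procedure
    described above: sample press -> nose-poke through a delay ->
    reward only for pressing the lever OPPOSITE the sample."""
    sample = random.choice(["left", "right"])          # sample phase
    agent.press(sample)                                # sample response (SR)
    delay = random.uniform(min_delay, max_delay)       # delay phase
    agent.nose_poke(delay)
    choice = agent.choose_lever()                      # nonmatch phase
    correct = (choice != sample)                       # nonmatch response (NR)
    return correct                                     # True -> water reward

class PerfectMemoryAgent:
    """Toy agent that remembers the sample across the delay."""
    def press(self, lever):
        self.last_sample = lever
    def nose_poke(self, delay):
        pass                                           # memory survives the delay
    def choose_lever(self):
        return "right" if self.last_sample == "left" else "left"

if __name__ == "__main__":
    agent = PerfectMemoryAgent()
    results = [run_dnms_trial(agent) for _ in range(100)]
    print(f"% correct nonmatch responses: {100 * sum(results) / len(results):.0f}%")
```

The key point is that the correct lever in the nonmatch phase depends entirely on a memory carried across the delay; block that memory, and performance collapses.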

After the neural activity from successful completions of the task had been recorded, some of the rats were given minipumps that chronically infused specific areas of their brain with a drug designed to block memory recall. These rats could not successfully complete the task when the neural prosthesis was inactive, but they could complete it when the prosthesis was activated; and when it was deactivated again, the very rats that had just completed the task couldn’t. The neural prosthesis gave them the mental ability to do the task; it gave them the memory.

Now, you might be thinking, “well, perhaps it works when it records one individual’s neural activity, but it won’t work for another person, because everyone’s neural activity is different.” However, that’s not quite accurate. Yes, neural activity differs from person to person, but those differences are functional rather than anatomical. What I mean is that even though different people may show different activation under an fMRI, the neural basis of memory is still the same–fMRI scanners can actually read your mind (more on this in the next post). Because of that, neural information recorded in the right spots of one person’s brain while doing something would transfer seamlessly into someone else’s brain and allow them the same abilities.

Let me add a quick note on the philosophy of mind, as someone actually brought this article up to me, asking whether it makes a difference for dualism. I’d say that it doesn’t. Even on interactionist dualism, the soul and the brain are still interdependent while embodied. This doesn’t affect the philosophy of mind any more than memory’s having a neural basis does.

So, while we haven’t yet taught someone how to fly a helicopter by uploading it into their brain, we’re not that far off.

The Power of the Individual Neuron.

by metacognizant

A while back, I wrote an entry titled The Power of the Human Brain. I’m still proud of that entry, and I think it does a good job explaining the complexity of neuronal computation in everyday language. However, my entry was posted on a forum as an argument, and the engineer whom the poster was arguing against–who has a fairly sophisticated understanding of the gist of neuroscience–absolutely demolished it. Simply put, the assertion I made (that an individual neuron is more powerful than a slightly dated supercomputer) was not well supported by the evidence I put forward in my blog post. Fair enough; that is true. However, the assertion is supported by the evidence. This entry is going to be an explication of the true power of individual neurons.

This entry will be fairly technical and complex at times, but I assure you, it will be worth the read. Neuronal communication is amazingly complex. I will assume the reader’s familiarity with my previous blog entry (linked above) and go from there. To reiterate briefly: a neuron can have up to 200,000 synaptic connections. Every time an action potential reaches these connections, information is conveyed from one neuron to another in the form of excitatory or inhibitory charges, which travel down the dendrites to the cell body. This information is integrated and computed at the cell body, or soma, and if the excitatory threshold is reached, the neuron initiates an action potential–that is, it conveys the integrated and computed information to another neuron. The engineer who responded to my previous entry did me the favor of a rough calculation: multiplying 10 bits of information by 20,000 synapses and assigning a weighting factor (whose numerical value he did not give) to these 10-bit signals for spatial and temporal priority came out to around 40 kiloFLOPS. Now, admittedly, engineering is not my strong point. I wish I knew enough to properly adjust this calculation using the correct information rate conveyed by individual neural spikes–300 bits per second (Koch 2004, 333)–and the correct density of synaptic connections, mentioned above, to get the maximum computational rate of a neuron even in an incredibly simplistic neural model; but I don’t know how to convert any of this to FLOPS, so I can’t say how different these new figures would make the final product. It seems it would be at least a few orders of magnitude greater, but I can’t say for certain. In addition, the weighting factor given to neural coding even in simplistic models of artificial neural networks is debated, so I don’t know which he used, and the actual computation might require greater power still.
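For comparison, the “summing junction” picture that back-of-the-envelope estimates like this rest on really is this simple. Below is a minimal leaky integrate-and-fire sketch of my own (not the engineer’s actual model): weighted inputs are summed at the soma and a spike is emitted whenever the running total crosses threshold. Everything the rest of this entry describes is what this picture leaves out.

```python
import numpy as np

def leaky_integrate_and_fire(weights, spikes_in, threshold=1.0, leak=0.95):
    """Textbook-simple neuron: each time step, decay the membrane
    potential, add the weighted sum of incoming spikes, and fire
    (then reset) when the potential crosses threshold."""
    v, out = 0.0, []
    for incoming in spikes_in:                  # incoming: 0/1 per synapse
        v = leak * v + float(weights @ incoming)
        if v >= threshold:
            out.append(1)
            v = 0.0                             # reset after a spike
        else:
            out.append(0)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_synapses, n_steps = 200, 1000
    weights = rng.normal(0.0, 0.05, n_synapses)        # mixed excitatory/inhibitory
    spikes_in = rng.random((n_steps, n_synapses)) < 0.05
    out = leaky_integrate_and_fire(weights, spikes_in)
    print("Output spikes:", sum(out), "out of", n_steps, "time steps")
```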

Fortunately, however, I don’t have to be proficient in engineering to demonstrate my claim. About 10^4 parameters need to be specified to model neurons with biophysically realistic spatiotemporal computational abilities (Huys, Ahrens, & Paninski, 2006). These parameters are needed to account for odd aspects of neural computation–such as backpropagation of action potentials (Grewe, Bonnan, & Frick, 2010), which affects first-order neural communication–among other things. In modeling these neurons, realistic synaptic input is prohibitive for a computer’s memory capacity, and much of the synaptic input is set to zero for purposes of calculation. Already we have begun to stress the computational power of computers, and neurons haven’t even begun to show off.

The type of neural model mentioned above is known as a “multicompartmental” model, because it divides the neuron into many compartments, each with its own set of parameters. However, even multicompartmental models–again, our most biophysically realistic models of neurons–cannot account for a variety of computations and behaviors known to occur in neurons (Herz, Gollisch, Machens, & Jaeger, 2006). While they can add, subtract, multiply, and divide inputs, they cannot perform a variety of observed, behaviorally relevant, biological computations that actual neurons can. For a list of these, see Table 1 and Table 2 in the linked article. Some neural models that can perform these behaviorally relevant computations are abstract, pay little attention to biological characteristics, and provide no rationale for how actual neurons carry out their computations. Nonetheless, because these abstract models do capture the function of neurons without explaining how, they have made some testable predictions. One of these is that an individual neuron may perform the function of an entire multicompartmental artificial neural network, which has been shown to be true.

Neurons are nowhere near as simplistic as summing junctions. Take dendrites, for example. Rather than passive conveyors of excitatory or inhibitory charges, they are “the paramount examples of biological nanotechnology,” as Kolb and Whishaw put it in their An Introduction to Brain and Behavior, second edition. Indeed, one article reviewing dendritic spines states:

“Spines appear as one of the ultimate examples of miniaturization in Biology, and could represent a good testing ground for biological nanotechnology (Mehta et al., 1999). Indeed, there is already some evidence for both small number of molecules and extreme precision in their location…. Moreover, even the position of these channels and receptors, particularly with respect to other molecular components of the spine, appears determined with extreme precision… It seems to us that spines are built with great molecular sophistication, and that future studies to understand their structure and function must operate at an equally high level of experimental precision. It does not appear exaggerated to argue that the understanding of the function of the biochemical pathways present in spines may require single-molecule techniques (Mehta et al., 1999), in combination with detailed computational modeling of the spatio-temporal dynamics and kinetics of these molecules (Kennedy, 2000).” (Tashiro & Yuste, 2003)

Kolb and Whishaw state further about dendritic spines,

“to mediate learning, each spine must be able to act independently, undergoing changes that its neighbors do not undergo. Spines may appear and disappear on a dendrite in a matter of seconds, and they may even extend or move along a dendrite to search out and contact a presynaptic axon terminal. When forming a synapse, they can change in size and shape and even divide…. The mechanisms that allow spines to appear and change in shape include a number of different cytoskeletal filaments linked to the membrane receptors…. The activation of receptors can induce mRNA within the spine to produce more of these structural proteins. In addition, second messengers within the spine can carry signals to the cell’s DNA to send more mRNA addressed to just the signaling spine.” (p. 181)

In addition to the information-processing carried out in the soma, then, dendrites are carrying out two different forms of information-processing, all using the same resources as that of the cell.

The computational abilities of neurons do not stop there, however. One article on neuronal computation, titled “Inside the brain of a neuron,” explicates some of the many computations performed by neurons that go undetected by simplistic models of neural computation (Sidiropoulou, Pissadaki, & Poirazi, 2006). To note some of these: dendrites themselves engage in nonlinear summation, amplification, and delay of inputs before conveying that information to the soma. Moreover, the soma itself computationally maintains persistent activity, generating action potentials even in the absence of synaptic input. Finally, axons themselves seem to be involved in a neuron’s information-processing. These are just a few of the extraordinary computational abilities of neurons often neglected in neuron models, and I recommend reading the whole article.
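To illustrate just one of these–nonlinear summation in dendrites–here is a hedged sketch of the “two-layer” view of a neuron, in the spirit of models associated with the article’s authors: each dendritic branch applies its own saturating nonlinearity to its inputs before the soma sums the branch outputs. The specific numbers and function names are illustrative assumptions only.

```python
import math

def branch_output(inputs, gain=2.0):
    """Each dendritic branch sums its own synapses, then applies a
    saturating (sigmoidal) nonlinearity -- two inputs on the SAME
    branch can therefore count for more (or less) than twice one input."""
    total = sum(inputs)
    return 1.0 / (1.0 + math.exp(-gain * (total - 1.0)))

def somatic_sum(branches):
    """The soma linearly combines the nonlinear branch outputs."""
    return sum(branch_output(b) for b in branches)

if __name__ == "__main__":
    # The same total excitatory drive, clustered on one branch vs. scattered:
    clustered = [[0.6, 0.6], [0.0], [0.0]]
    scattered = [[0.6], [0.6], [0.0]]
    print("clustered inputs ->", round(somatic_sum(clustered), 3))
    print("scattered inputs ->", round(somatic_sum(scattered), 3))
    # A purely linear "summing junction" would give identical outputs here.
```

Because the same total input produces different outputs depending on where it lands, even this modest addition gives a single neuron discriminative power that a pure summing junction lacks.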

One more thing that I can’t avoid mentioning is the information-processing in each cell’s DNA. A neuron’s DNA is itself an incredibly complex and computationally powerful information-processor (Shapiro, 2006). Moreover, neural computations regularly initiate epigenetic changes (Borelli, Nestler, Allis, & Sassone-Corsi, 2008). These changes then influence neural computation, resulting in an interactive information-processing loop, all performed with the same resources the cell has for input summation. Indeed, recent, admittedly basic and simplistic experiments have shown that integrating something resembling DNA into artificial neural networks is necessary for function remotely resembling what we see in reality (Qian, Winfree, & Bruck, 2011). The information-processing in DNA alone parallels or rivals computers in many ways, and this comes on top of the capacities of individual neurons already described.

Finally, individual neurons engage in large-scale, long-term computations simultaneously with their short-term computations, and their long-term computations result in a different output pattern (Sheffield, Best, Mensh, Kath, & Spruston, 2010). How neurons do this is currently unknown.

In sum, I can’t do the math to convert a neuron’s computational ability into FLOPS–but I don’t think anyone can. The known computational abilities of our most biophysically realistic neuron models overwhelm most computers even when synaptic input is set to a minimal level. Yet these models cannot come close to performing many biologically relevant computations, and actual neurons can perform computations otherwise possible only for entire networks of multicompartmental models or for abstract models that are not biophysically realistic. Moreover, each neuron possesses computational abilities currently untapped by neuron models, and these abilities greatly enhance its computational power. The interplay between the powerful information-processing capabilities of DNA and neural computation increases the computational power we must ascribe to neurons still further. Finally, neurons engage in separate short-term and long-term computations simultaneously. It is no wonder some claim that each neuron is more powerful than a supercomputer.

And we haven’t even touched mirror neurons–neurons that somehow conceptually understand actions–with a twenty-foot pole.

Sorry for the inactivity (yet again)

by metacognizant

Just a quick comment to let everyone know that I am still planning to write here; I’ve just been way too busy with school. Also, I’ve decided to wait to share my research papers until they’ve been graded; I’d hate for my professors to run a plagiarism search, turn up my own blog, and fail me for plagiarism when it was my own work they found in the first place. Again, though, I will share them once they have been graded.

What time is it?

by metacognizant

Recently, I realized that out of all I’ve written on this blog I have not written a single thing on the nature of time itself. Due to my fascination with the subject, I thought I’d post a brief introduction to the philosophy of time and sketch my own view.

The philosopher Dean Zimmerman has written an essay explicating the two philosophical theories of time–the A-theory and the B-theory–and defending the A-theory. Because I’m an A-theorist, I chose to link to this article in order to give an indirect, brief introduction to my own view. I suggest reading the article in full, but in brief: the A-theory of time views tensed statements (“I am now sitting at my computer”) as reflecting objective reality, since the flow of time is objectively real; by contrast, the B-theory views tensed statements as not reflecting objective reality, since all moments of time timelessly coexist–that is, time does not flow, and time as we experience it is a subjective illusion. A B-theory is required for the concept of time travel to work–if moments in time do not coexist, then it would be impossible to travel from one to another. Traditionally, the special theory of relativity is thought to necessitate a B-theory of time. This is not necessarily true–as addressed by William Lane Craig in Time and the Metaphysics of Relativity–and it seems that the A-theory of time has received support from a recent experiment suggesting that neutrinos can travel faster than light.

There isn’t much of a point to this post except to introduce my readers to the philosophy of time so that I can write more about it later. For those who haven’t thought much about time itself, I encourage you to. It’s fascinating.

Freud strikes again.

by metacognizant

I’ve noticed an increasing fascination with Freud once again, especially in neuropsychology. For instance, the eminent neuroscientist V. S. Ramachandran goes out on uncharacteristically theoretical and non-testable limbs to discuss possible neuroscientific evidence for Freud’s theories in the two books of his that I have. The influence of Freud’s work seems to wax and wane, but there is now some solid evidence behind some of his theories. Some of the more abstract and controversial components of his theory–psychosexual stages, for instance–remain untested, but a growing body of evidence exists for the repression of unwanted memories and its role in our mental health. While writing my paper on psychosis, I came across an article, written in accessible language, that details the neural mechanisms underlying the repression of unwanted memories. This paper, published in the journal Current Directions in Psychological Science, outlines the basic way our brain suppresses memories, and how variations in that process can lead to distress.

The need to keep unwanted thoughts out of our lives to maintain mental health has great intuitive appeal, but it has not gone unchallenged. New neuroscientific research is establishing a hard basis for how this works–and for the very fact that it does. Unfortunately, we may have to tame our unconscious after all.