“begins to see” because so far I have no reason to suspect this process terminates. Neither do wiser and more experienced mathematicians I have talked to. In this spirit, for example, The Princeton Companion to Mathematics [PCM], expressly renounces any tidy answer to the question “What is mathematics?” Instead, the book replies to this question with 1000 pages of expositions of topics within mathematics, all written by top experts in
their own subfields. This is a wise approach: a shorter answer would be not just incomplete, but necessarily misleading. Unfortunately, while mathematicians are often reluctant to define mathematics, others are not. In 1960, despite having made his own mathematically significant contributions, physicist Eugene Wigner defined mathematics as “the science of skillful operations with concepts and rules invented just for this purpose” [W]. This rather negative characterization of mathematics may have been partly tongue-in-cheek, but he took it seriously enough to build upon it an argument that mathematics is “unreasonably effective” in the natural sciences—an argument which has been unreasonably
influential among scientists ever since. What weight we attach to Wigner’s claim, and the view of mathematics it promotes, has both metaphysical and practical implications for the progress of mathematics and physics. If the effectiveness of mathematics in physics is a ‘miracle,’ then this miracle may well run out. In this case, we are justified in keeping the two subjects ‘separate’ and hoping our luck continues. If, on the other hand, they are deeply and rationally related, then this surely has consequences for how we should do research at the interface. In fact, I shall argue that what has so far been unreasonably effective is not mathematics but reductionism—the practice of inferring behavior of a complex problem by isolating and solving manageable ‘subproblems’—and that physics may be reaching the limits of effectiveness of the reductionist approach. In this case, mathematics will remain our best hope for progress in physics, by finding precise ways to go beyond reductionist tactics.
inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
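The model checking the authors emphasize, which falls outside Bayesian confirmation theory, can be illustrated with a minimal posterior predictive check. The data, the Beta-Binomial model, and the test statistic below are invented for illustration, not taken from their work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: successes in four batches of n = 5 binary trials each.
y = np.array([1, 5, 0, 5])
n = 5

# Model: one shared success probability theta with a uniform Beta(1, 1)
# prior, so the posterior is Beta(1 + successes, 1 + failures).
a = 1 + y.sum()
b = 1 + (n * len(y) - y.sum())

# Posterior predictive check: simulate replicated datasets from the
# fitted model and ask whether the observed between-batch spread is
# typical of what the model generates.
T_obs = y.std()
T_rep = np.empty(10_000)
for i in range(T_rep.size):
    theta = rng.beta(a, b)
    T_rep[i] = rng.binomial(n, theta, size=len(y)).std()

p_ppc = (T_rep >= T_obs).mean()
print(f"posterior predictive p-value: {p_ppc:.3f}")
```

Here the check flags the overdispersed batches as something the single-theta model rarely reproduces: the point is that fitting and comparing models is not enough, and the fitted model must itself be confronted with the data.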
By unifying the theory of evolution (which shows how random variation and selection are sufficient to provide incremental adaptation) with learning theories (which show how incremental adaptation is sufficient for a system to exhibit intelligent behaviour), this research shows that it is possible for evolution to exhibit some of the same intelligent behaviours as learning systems (including neural networks). In an opinion paper, published in Trends in Ecology and Evolution, Professors Watson and Eörs Szathmáry, from the Parmenides Foundation in Munich, explain how formal analogies can be used to transfer specific models and results between the two theories to solve several important evolutionary puzzles. Professor Watson says: "Darwin's theory of evolution describes the driving process, but learning theory is not just a different way of describing what Darwin already told us. It expands what we think evolution is capable of. It shows that natural selection is sufficient to produce significant features of intelligent problem-solving." For example, a key feature of intelligence is an ability to anticipate behaviours that will lead to future benefits. Conventionally, evolution, being dependent on random variation, has been considered 'blind' or at least 'myopic' - unable to exhibit such anticipation. But showing that evolving systems can learn from past experience means that evolution has the potential to anticipate what is needed to adapt to future environments in the same way that learning systems do. "When we look at the amazing, apparently intelligent designs that evolution produces, it takes some imagination to understand how random variation and selection produced them. Sure, given suitable variation and suitable selection (and we also need suitable inheritance) then we're fine. But can natural selection explain the suitability of its own processes? That self-referential notion is troubling to conventional evolutionary theory - but easy in learning theory.
"Learning theory enables us to formalise how evolution changes its own processes over evolutionary time. For example, by evolving the organisation of development that controls variation, the organisation of ecological interactions that control selection or the structure of reproductive relationships that control inheritance - natural selection can change its own ability to evolve.
"If evolution can learn from experience, and thus improve its own ability to evolve over time, this can demystify the awesomeness of the designs that evolution produces. Natural selection can accumulate knowledge that enables it to evolve smarter. That's exciting because it explains why biological design appears to be so intelligent."
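The core claim, that random variation plus selection suffices for incremental adaptation of the kind studied in learning theory, can be sketched as a toy (1+1) evolutionary algorithm. The bit-string "environment" and the parameters below are illustrative, not from the paper:

```python
import random

random.seed(1)

# Fitness: how many bits of the genome match a fixed target "environment".
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1] * 3   # 30-bit problem

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

# (1+1) evolution: random variation (point mutation) plus selection
# (keep the mutant only if it is at least as fit as the parent).
# No foresight is built in, yet fitness climbs incrementally,
# the trial-and-error core shared with learning algorithms.
genome = [random.randint(0, 1) for _ in TARGET]
history = [fitness(genome)]
for _ in range(2000):
    rate = 1 / len(genome)
    mutant = [1 - g if random.random() < rate else g for g in genome]
    if fitness(mutant) >= fitness(genome):
        genome = mutant
    history.append(fitness(genome))

print("fitness:", history[0], "->", history[-1])
```

Selection here is purely elitist, so fitness never decreases; the adaptation is "myopic" step by step, which is exactly why the analogy with incremental learning is informative.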
The researchers hope to apply the new understanding of thermodynamics at the quantum level to high-performance quantum technologies in the future. "Any progress towards the management of finite-time thermodynamic processes at the quantum level is a step forward towards the realization of a fully fledged thermo-machine that can exploit the laws of quantum mechanics to overcome the performance limitations of classical devices," Paternostro said. "This work shows the implications for reversibility (or lack thereof) of non-equilibrium quantum dynamics. Once we characterize it, we can harness it at the technological level."
the space of serious possibilities we can rationally inquire about (169). Accordingly, inquiry should not be understood as a process that generates changes in doxastic performances (which would concern our psychological dispositions and states), but rather as a process which results in changes in doxastic commitments (108). Changes in doxastic commitments can concern either the expansion or contraction of our state of full belief. Levi offers us a detailed analysis of the ways in which these changes can be justified. Expansion can be justified by either routine expansion or deliberate expansion, where routine expansion identifies a “program for utilizing inputs to form new full beliefs to be added to X’s state of full belief K” (235). Levi refers here to a “program” because he wants to distinguish this kind of expansion from a conclusion obtained through inference, where, for example, the data would figure as premises of an induction (236). The difference here is that the “program” tells us how to use the data before the data are collected, whereas in inductive inferences there is no such identification in advance. He reads Peirce’s late account of induction as developing some elements along these lines (72-3) and he finds some affinities with Hintikka’s account of induction as a process “allowing nature to answer a question put to it by the inquirer” (204). Our state of full belief can also expand by means of deliberate expansion. In the latter “the answer chosen is justified by showing it is the best option among those available given the cognitive goals of the agent” (236). “The justified change is the one that is best among the available options (relevant alternatives) according to the goal of seeking new error-free and valuable information” (237). However, when we expand our state of full belief we can inadvertently generate inconsistencies among our beliefs.
When we are in this inconsistent state of belief, we cannot but give up some of our beliefs in order to avoid contradictions. In contracting our state of full belief, we have basically three options. We can give up the new belief that generated the inconsistency or we can give up the old belief with which it is in contradiction. Alternatively, we can also suspend judgment between the two. In all these cases we have a contraction of the state of full belief. Levi describes the criterion which should be followed in deciding between these three options as follows: “In contracting a state of belief by giving up information X would prefer, everything else being equal, to minimize the value of loss of the information X is going to incur” (230). In deciding whether to give up either the new or the old belief, X should then take into consideration which retreat would cause the smaller loss of information. If the loss of information would be equal in the two cases, then X should suspend judgment about the two (181, 229-30). This account of inquiry and of the way in which it justifies changes in doxastic commitments is part of an elaborate and original approach to epistemology. It draws its basic insights from Peirce’s and Dewey’s account of inquiry, but it develops their views in an extremely original and detailed way, which constitutes the core of Levi’s philosophy. Levi’s book also contains interesting reflections on the concept of truth. He argues that, from a pragmatist point of view, we should not be interested in giving a definition of this concept that clarifies what we do when we use the predicate “is true” in sentences and propositions. Rather, we should be interested in how the concept of truth is relevant for understanding the way in which we change beliefs through inquiry (124-5). Levi criticizes those accounts of inquiry which claim that inquiry should not aim at truth but at warranted assertibility (e.g. Rorty, Davidson, sometimes Dewey) (ch. 7).
Against these views, he maintains that a concern with truth is essential for understanding at least some of our inquiries, that is, those inquiries which aim to justify changes in full beliefs. It seems essential that these inquiries should try to avoid error (an aim that should be associated with the purpose of attaining new information) and this seems to have an indirect connection with the aim of finding out the truth (135-6). On the other hand, Levi rejects Peirce’s account of truth as the final opinion that we will reach at the end of inquiry. According to Levi, proposing this understanding of truth as the aim of inquiry would result in insoluble inconsistencies with the kind of corrigibilism that Levi endorses and that he also attributes to Peirce (138-40). Levi’s view seems to be the following: if in my current state of belief I believe h is absolutely true, then I should regard it as an essential part of the final opinion I aim to reach “in the long run.” Thus, I should not be prepared to give up h (which would contradict Peirce’s corrigibilism), insofar as at further steps in inquiry I could end up believing the contrary view (which I now believe is false). Levi concludes that at any determinate time in inquiry we should not be concerned with making the best move in order to contribute to the attainment of the truth intended as the final and definitive description of the world. On the contrary, we should just try to obtain new error-free information in the next proximate step of inquiry.
I do not think that this way of presenting Peirce’s views is fair to his actual position, for two main reasons: (1) Peirce’s account of truth as the final opinion can be read as identifying not substantial theses about reality or the ultimate aims of inquiry, but the commitments we make with respect to a proposition when we assert that it is true: that is, we commit ourselves to the view that it will hold in the long run; (2) even if we identify the attainment of truth as the ultimate aim of inquiry, it seems possible, within Peirce’s model, to maintain that we can be corrigibilist about the views we currently consider true. Of course it would be irrational to doubt or give up these views as long as we still believe in them (this is basically what Levi calls Peirce’s principle of doxastic inertia). This does not imply that we cannot consider those views as corrigible, given that we could encounter circumstances (such as new evidence gained through experience, or the identification of inconsistencies in our set of beliefs) that justify the emergence of a doubt about those views. If we were in these circumstances, it would not be problematic to give up those views, insofar as we would no longer be completely certain that they are true. If our aim were thus the attainment of truth in the long run, we would be justified in giving up those views insofar as we would no longer be certain that they contribute to the attainment of the final opinion. Levi’s book also contains important scholarly contributions on Peirce and Dewey. It is undeniable that his approach to the writings of both Peirce and Dewey is strongly influenced by his own views and interests, but Levi is surely distinctive among the central figures in contemporary pragmatism for reading these classics with the attention they deserve.
Chapter 4 “Beware of Syllogism: Statistical Reasoning and Conjecturing According to Peirce” presents a reconstruction of the evolution of Peirce’s account of induction and hypothesis. Levi shows how Peirce later abandons his early attempts to define these kinds of inferences by means of a permutation of the structure of a categorical syllogism. In his later writings Peirce first begins to regard these inferences as permutations of statistical deductions (75), and he then abandons this strategy in favor of a description of deduction, induction and abduction reflecting their roles in inquiry (77-8). Chapters 5 “Dewey’s Logic of Inquiry” and 6 “Wayward Naturalism: Saving Dewey from Himself” contain interesting considerations on Dewey’s theory of inquiry and the kind of naturalism we should associate with it. Insofar as the two articles overlap in many respects (unfortunately the overlap is sometimes not only thematic but textual, which makes one wonder whether it would not have been better to include only one of the two in the collection), I will discuss them together. Compared with chapter 4 on Peirce, these articles are less scholarly and more concerned with correcting Dewey’s views along the lines Levi suggests. In these chapters, Levi discusses a multiplicity of issues, but I will limit myself to his criticism of Dewey’s naturalism (cf. 85-8, 111-16). In particular, Levi claims that “activities like believing, evaluating, inquiring, deliberating, and deciding are resistant to naturalization” (105), if the latter is understood as an explanation of these activities by means of psychological or behavioral dispositions. In his attempt to show continuities between the way in which humans rationally conduct inquiries and the way in which animals respond to the challenges posed by their environment, Dewey commits exactly this naturalistic fallacy (cf. 85, 111).
However, states of full belief, understood as doxastic commitments, involve a normative element that cannot be reduced to dispositions (106). Endorsing an approach to inquiry based on commitments amounts to endorsing a better naturalism, which Levi calls wayward naturalism (cf. 103-4), and which does not substitute old supernatural entities with new ones (according to Levi, the appeal to dispositions as a universal means of explanation in epistemology introduces a new kind of supernaturalism). Levi holds that, if we read Dewey properly, it becomes evident that we cannot but develop his account of inquiry in this way (108-9). To conclude, it is surely good to have these essays collected together, insofar as they offer a new perspective on some of the central insights of Levi’s philosophy thanks to a fruitful discussion with recent developments in epistemology. Even though the overlap between the articles is sometimes so significant (as in the case of chapters 5 and 6) that it would have been advisable to avoid redundancies, the texts presented here are surely of interest for any scholar who believes that the classical pragmatists’ account of inquiry still has a lot to offer to the current philosophical debate.
String theory is a potential "theory of everything", uniting all matter and forces in a single theoretical framework, which describes the fundamental level of the universe in terms of vibrating strings rather than particles. Although the framework can naturally incorporate gravity even on the subatomic level, it implies that the universe has some strange properties, such as nine or ten spatial dimensions. String theorists have approached this problem by finding ways to "compactify" six or seven of these dimensions, or shrink them down so that we wouldn't notice them. Unfortunately, as Jun Nishimura of the High Energy Accelerator Research Organization (KEK) in Tsukuba says, "There are many ways to get four-dimensional space–time, and the different ways lead to different physics." The solution is not unique enough to produce useful predictions. These compactification schemes are studied through perturbation theory, in which all the possible ways that strings could interact are added up to describe the interaction. However, this only works if the interaction is relatively weak, with a distinct hierarchy in the likelihood of each possible interaction. If the interactions between the strings are stronger, with multiple outcomes equally likely, perturbation theory no longer works.
Matrices allow stronger interactions
Weakly interacting strings cannot describe the early universe with its high energies, densities and temperatures, so researchers have sought a way to study strings that strongly affect one another. To this end, some string theorists have tried to reformulate the theory using matrices. "The string picture emerges from matrices in the limit of infinite matrix size," says Nishimura. Five forms of string theory can be described with perturbation theory, but only one has a complete matrix form – Type IIB. Some even speculate that the matrix Type IIB actually describes M-theory, thought to be the fundamental version of string theory that unites all five known types.
The model developed by Sang-Woo Kim of Osaka University, Nishimura, and Asato Tsuchiya of Shizuoka University describes the behaviour of strongly interacting strings in nine spatial dimensions plus time, or 10 dimensions. Unlike perturbation theory, matrix models can be numerically simulated on computers, getting around some of the notorious difficulty of string-theory calculations. Although the matrices would have to be infinitely large for a perfect model, they were restricted to sizes from 8 × 8 to 32 × 32 in the simulation. The calculations using the largest matrices took more than two months on a supercomputer, says Kim. Physical properties of the universe appear in averages taken over hundreds or thousands of matrices. The trends that emerged from increasing the matrix size allowed the team to extrapolate how the model universe would behave if the matrices were infinite. "In our work, we focus on the size of the space as a function of time," says Nishimura.
'Birth of the universe'
The limited sizes of the matrices mean that the team cannot see much beyond the beginning of the universe in their model. From what they can tell, it starts out as a symmetric, nine-dimensional space, with each dimension measuring about 10^-33 cm. This is a fundamental unit of length known as the Planck length. After some passage of time, the string interactions cause the symmetry of the universe to spontaneously break, causing three of the nine dimensions to expand. The other six are left stunted at the Planck length. "The time when the symmetry is broken is the birth of the universe," says Nishimura. "The paper is remarkable because it suggests that there really is a mechanism for dynamically obtaining four dimensions out of a 10-dimensional matrix model," says Harold Steinacker of the University of Vienna in Austria.
Hikaru Kawai of Kyoto University, Japan, who worked with Tsuchiya and others to propose the IIB matrix model in 1997, is also very interested in the "clear signal of four dimensional space–time". "It would be a big step towards understanding the origin of our universe," he says. Although he finds that the evolution of the model universe in time is too simple and different from the general theory of relativity, he says the new direction opened by the work is "worth investigating intensively". Will the Standard Model emerge? The team has yet to prove that the Standard Model of particle physics will show up in its model, at much lower energies than this initial study of the very early universe. If it leaps that hurdle, the team can use it to explore cosmology. Compared with perturbative models, Steinacker says, "this model should be much more predictive". Nishimura hopes that by improving both the model and the simulation software, the team may soon be able to investigate the inflation of the early universe or the density distribution of matter, results which could be evaluated against the density distribution of the real universe. The research will be described in an upcoming paper in Physical Review Letters and a preprint is available at arXiv:1108.1540.
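The team's Lorentzian IIB matrix model is far beyond a short example, but the workflow the article describes, averaging an observable over many finite random matrices and extrapolating the trend to infinite matrix size, can be sketched generically. The Gaussian Hermitian ensemble and the 1/N fit below are stand-ins chosen for illustration, not the team's model:

```python
import numpy as np

rng = np.random.default_rng(42)

def observable(N, samples=200):
    """Ensemble average of a simple observable (the scaled largest
    eigenvalue) over random Hermitian N x N matrices."""
    vals = []
    for _ in range(samples):
        A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        H = (A + A.conj().T) / 2           # Hermitian (GUE-like) matrix
        vals.append(np.linalg.eigvalsh(H).max() / np.sqrt(N))
    return float(np.mean(vals))

sizes = [8, 16, 32]                        # same range as in the article
means = [observable(N) for N in sizes]

# Extrapolate to infinite matrix size by fitting observable ~ a + b / N.
inv_N = np.array([1 / N for N in sizes])
coef = np.polyfit(inv_N, means, 1)         # [slope, intercept]
extrapolated = coef[1]
print("finite-N averages:", [round(m, 3) for m in means])
print("extrapolated N -> infinity:", round(extrapolated, 3))
```

The finite-size averages drift systematically with N, and the intercept of the fit estimates the infinite-N limit; for this ensemble the scaled spectral edge approaches the semicircle-law value of 2.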
In Hilbert's thought experiment, he explained that additional rooms could be created in a hotel that already has an infinite number of rooms because the hotel manager could simply "shift" all of the current guests to a new room according to some rule, such as moving everyone up one room (to leave the first room empty) or moving everyone up to twice their current room number (to create an infinite number of empty rooms by leaving the odd-numbered rooms empty). In their paper, the physicists proposed two ways to model this phenomenon—one theoretical and one experimental—both of which use the infinite number of quantum states of a quantum system to represent the infinite number of hotel rooms in a hotel. The theoretical proposal uses the infinite number of energy levels of a particle in a potential well, and the experimental demonstration uses the infinite number of orbital angular momentum states of light. The scientists showed that, even though there is initially an infinite number of these states (rooms), the states' amplitudes (room numbers) can be remapped to twice their original values, producing an infinite number of additional states. On the one hand, the phenomenon is counterintuitive: by doubling an infinite number of things, you get infinitely many more of them. And yet, as the physicists explain, it still makes sense because the total sum of the values of an infinite number of things can actually be finite. "As far as there being an infinite amount of 'something,' it can make physical sense if the things we can measure are still finite," coauthor Filippo Miatto, at the University of Waterloo and the University of Ottawa, told Phys.org. "For example, a coherent state of a laser mode is made with an infinite set of number states, but as the number of photons in each of the number states increases, the amplitudes decrease so at the end of the day when you sum everything up the total energy is finite.
The same can hold for all of the other quantum properties, so no, it is not surprising to the trained eye." The physicists also showed that the remapping can be done not only by doubling, but also by tripling, quadrupling, etc., the states' values. In the laser experiment, these procedures produce visible "petals" of light that correspond to the number that the states were multiplied by. The ability to remap energy states in this way could also have applications in quantum and classical information processing, where, for example, it could be used to increase the number of states produced or to increase the information capacity of a channel
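Both points, a finite total despite infinitely many occupied states, and the room-doubling remap, are easy to check numerically. The amplitude alpha and the truncation level below are illustrative choices:

```python
import numpy as np
from math import factorial

# Coherent state |alpha>: it occupies every photon-number state n, yet
# its norm and mean photon number are finite (Miatto's point above).
alpha, N = 1.5, 60                     # truncate at N terms, for illustration
n = np.arange(N)
fact = np.array([float(factorial(k)) for k in n])
amps = np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt(fact)

norm = np.sum(np.abs(amps) ** 2)               # ~ 1
mean_photons = np.sum(n * np.abs(amps) ** 2)   # ~ |alpha|^2 = 2.25

# "Hilbert hotel" move: send the amplitude in room n to room 2n.
# Every odd-numbered room is now empty, yet nothing measurable is
# lost: the norm is unchanged.
remapped = np.zeros(2 * N)
remapped[0::2] = amps

print(norm, mean_photons, np.sum(np.abs(remapped) ** 2))
```

The doubling map frees infinitely many "rooms" while every total (normalization, mean photon number) stays finite, which is exactly why the trained eye finds the result unsurprising.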
- Hypertime -- why we need 2 dimensions of time: A Two-Time Universe? Physicist Explores How Second Dimension of Time Could Unify Physics Laws: For a long time, Itzhak Bars has been studying time. More than a decade ago, the physicist began pondering the role time plays in the basic laws of physics — the equations describing matter, gravity and the other forces of nature. Those laws are exquisitely accurate. Einstein mastered gravity with his theory of general relativity, and the equations of quantum theory capture every nuance of matter and other forces, from the attractive power of magnets to the subatomic glue that holds an atom’s nucleus together. But the laws can’t be complete. Einstein’s theory of gravity and quantum theory don’t fit together. Some piece is missing in the picture puzzle of physical reality. Bars thinks one of the missing pieces is a hidden dimension of time. Bizarre is not a powerful enough word to describe this idea, but it is a powerful idea nevertheless. With two times, Bars believes, many of the mysteries of today’s laws of physics may disappear. Of course, it’s not as simple as that. An extra dimension of time is not enough. You also need an additional dimension of space. It sounds like a new episode of “The Twilight Zone,” but it’s a familiar idea to most physicists. In fact, extra dimensions of space have become a popular way of making gravity and quantum theory more compatible. Extra space dimensions aren’t easy to imagine — in everyday life, nobody ever notices more than three. Any move you make can be described as the sum of movements in three directions — up-down, back and forth, or sideways. Similarly, any location can be described by three numbers (on Earth, latitude, longitude and altitude), corresponding to space’s three dimensions. Other dimensions could exist, however, if they were curled up in little balls, too tiny to notice.
If you moved through one of those dimensions, you’d get back to where you started so fast you’d never realize that you had moved. “An extra dimension of space could really be there, it’s just so small that we don’t see it,” said Bars, a professor of physics and astronomy. Something as tiny as a subatomic particle, though, might detect the presence of extra dimensions. In fact, Bars said, certain properties of matter’s basic particles, such as electric charge, may have something to do with how those particles interact with tiny invisible dimensions of space. In this view, the Big Bang that started the baby universe growing 14 billion years ago blew up only three of space’s dimensions, leaving the rest tiny. Many theorists today believe that 6 or 7 such unseen dimensions await discovery. Only a few, though, believe that more than one dimension of time exists. Bars pioneered efforts to discern how a second dimension of time could help physicists better explain nature. “Itzhak Bars has a long history of finding new mathematical symmetries that might be useful in physics,” said Joe Polchinski, a physicist at the Kavli Institute for Theoretical Physics at UC Santa Barbara. “This two-time idea seems to have some interesting mathematical properties.” If Bars is on the right track, some of the most basic processes in physics will need re-examination. Something as simple as how particles move, for example, could be viewed in a new way. In classical physics (before the days of quantum theory), a moving particle was completely described by its momentum (its mass times its velocity) and its position. But quantum physics says you can never know those two properties precisely at the same time. Bars alters the laws describing motion even more, postulating that position and momentum are not distinguishable at a given instant of time.
Technically, they can be related by a mathematical symmetry, meaning that swapping position for momentum leaves the underlying physics unchanged (just as a mirror switching left and right doesn’t change the appearance of a symmetrical face). In ordinary physics, position and momentum differ because the equation for momentum involves velocity. Since velocity is distance divided by time, it requires the notion of a time dimension. If swapping the equations for position and momentum really doesn’t change anything, then position needs a time dimension too. “If I make position and momentum indistinguishable from one another, then something is changing about the notion of time,” said Bars. “If I demand a symmetry like that, I must have an extra time dimension.” Simply adding an extra dimension of time doesn’t solve everything, however. To produce equations that describe the world accurately, an additional dimension of space is needed as well, giving a total of four space dimensions. Then, the math with four space and two time dimensions reproduces the standard equations describing the basic particles and forces, a finding Bars described partially last year in the journal Physical Review D and has expanded upon in his more recent work. Bars’ math suggests that the familiar world of four dimensions — three of space, one of time — is merely a shadow of a richer six-dimensional reality. In this view the ordinary world is like a two-dimensional wall displaying shadows of the objects in a three-dimensional room. In a similar way, the observable universe of ordinary space and time may reflect the physics of a bigger space with an extra dimension of time. In ordinary life nobody notices the second time dimension, just as nobody sees the third dimension of an object’s two-dimensional shadow on a wall. This viewpoint has implications for understanding many problems in physics. 
For one thing, current theory suggests the existence of a lightweight particle called the axion, needed to explain an anomaly in the equations of the standard model of particles and forces. If it exists, the axion could make up the mysterious “dark matter” that astronomers say affects the motions of galaxies. But two decades of searching has failed to find proof that axions exist. Two-time physics removes the original anomaly without the need for an axion, Bars has shown, possibly explaining why it has not been found. On a grander level, two-time physics may assist in the quest to merge quantum theory with Einstein’s relativity in a single unified theory. The most popular approach to that problem today, superstring theory, also invokes extra dimensions of space, but only a single dimension of time. Many believe that a variant on string theory, known as M theory, will be the ultimate winner in the quantum-relativity unification game, and M theory requires 10 dimensions of space and one of time. Efforts to formulate a clear and complete version of M theory have so far failed. “Nobody has yet told us what the fundamental form of M theory is,” Bars said. “We just have clues — we don’t know what it is.” Adopting the more symmetric two-time approach may help. Describing the 11 dimensions of M theory in the language of two-time physics would require adding one time dimension plus one space dimension, giving nature 11 space and two time dimensions. “The two-time version of M theory would have a total of 13 dimensions,” Bars said. For some people, that might be considered unlucky. But for Bars, it’s a reason for optimism. “My hope,” he says, “is that this path that I am following will actually bring me to the right place.”
- You're not irrational, you're just quantum probabilistic: Researchers explain human decision-making with physics theory: The next time someone accuses you of making an irrational decision, just explain that you're obeying the laws of quantum physics. A new trend taking shape in psychological science not only uses quantum physics to explain humans' (sometimes) paradoxical thinking, but may also help researchers resolve certain contradictions among the results of previous psychological studies. According to Zheng Joyce Wang and others who try to model our decision-making processes mathematically, the equations and axioms that most closely match human behavior may be ones that are rooted in quantum physics. "We have accumulated so many paradoxical findings in the field of cognition, and especially in decision-making," said Wang, who is an associate professor of communication and director of the Communication and Psychophysiology Lab at The Ohio State University. "Whenever something comes up that isn't consistent with classical theories, we often label it as 'irrational.' But from the perspective of quantum cognition, some findings aren't irrational anymore. They're consistent with quantum theory—and with how people really behave." In two new review papers in academic journals, Wang and her colleagues spell out their new theoretical approach to psychology. One paper appears in Current Directions in Psychological Science, and the other in Trends in Cognitive Sciences. Their work suggests that thinking in a quantum-like way—essentially not following a conventional approach based on classical probability theory—enables humans to make important decisions in the face of uncertainty, and lets us confront complex questions despite our limited mental resources. When researchers try to study human behavior using only classical mathematical models of rationality, some aspects of human behavior do not compute.
From the classical point of view, those behaviors seem irrational, Wang explained. For instance, scientists have long known that the order in which questions are asked on a survey can change how people respond—an effect previously thought to be due to vaguely labeled effects, such as "carry-over effects" and "anchoring and adjustment," or noise in the data. Survey organizations normally change the order of questions between respondents, hoping to cancel out this effect. But in the Proceedings of the National Academy of Sciences last year, Wang and collaborators demonstrated that the effect can be precisely predicted and explained by a quantum-like aspect of people's behavior. We usually think of quantum physics as describing the behavior of sub-atomic particles, not the behavior of people. But the idea is not so far-fetched, Wang said. She also emphasized that her research program neither assumes nor proposes that our brains are literally quantum computers. Other research groups are working on that idea; Wang and her collaborators are not focusing on the physical aspects of the brain, but rather on how abstract mathematical principles of quantum theory can shed light on human cognition and behaviors. "In the social and behavioral sciences as a whole, we use probability models a lot," she said. "For example, we ask, what is the probability that a person will act a certain way or make a certain decision? Traditionally, those models are all based on classical probability theory—which arose from the classical physics of Newtonian systems. So it's really not so exotic for social scientists to think about quantum systems and their mathematical principles, too." Quantum physics deals with ambiguity in the physical world. The state of a particular particle, the energy it contains, its location—all are uncertain and have to be calculated in terms of probabilities. Quantum cognition is what happens when humans have to deal with ambiguity mentally. 
Sometimes we aren't certain about how we feel, or we feel ambiguous about which option to choose, or we have to make decisions based on limited information. "Our brain can't store everything. We don't always have clear attitudes about things. But when you ask me a question, like 'What do you want for dinner?' I have to think about it and come up with or construct a clear answer right there," Wang said. "That's quantum cognition." "I think the mathematical formalism provided by quantum theory is consistent with what we feel intuitively as psychologists. Quantum theory may not be intuitive at all when it is used to describe the behaviors of a particle, but actually is quite intuitive when it is used to describe our typically uncertain and ambiguous minds." She used the example of Schrödinger's cat—the thought experiment in which a cat inside a box has some probability of being alive or dead. Both possibilities have potential in our minds. In that sense, the cat has a potential to become dead or alive at the same time. The effect is called quantum superposition. When we open the box, both possibilities are no longer superposed, and the cat must be either alive or dead. With quantum cognition, it's as if each decision we make is our own unique Schrödinger's cat. As we mull over our options, we envision them in our mind's eye. For a time, all the options co-exist with different degrees of potential that we will choose them: That's superposition. Then, when we zero in on our preferred option, the other options cease to exist for us. The task of modeling this process mathematically is difficult in part because each possible outcome adds dimensions to the equation. For instance, a Republican who is trying to decide among the candidates for U.S. president in 2016 is currently confronting a high-dimensional problem with almost 20 candidates. Open-ended questions, such as "How do you feel?" have even more possible outcomes and more dimensions.
With the classical approach to psychology, the answers might not make sense, and researchers have to construct new mathematical axioms to explain behavior in that particular instance. The result: There are many classical psychological models, some of which are in conflict, and none of which apply to every situation. With the quantum approach, Wang and her colleagues argued, many different and complex aspects of behavior can be explained with the same limited set of axioms. The same quantum model that explains how question order changes people's survey answers also explains violations of rationality in the prisoner's dilemma paradigm, an effect in which people cooperate even when it's in their best interest not to do so. "The prisoner's dilemma and question order are two completely different effects in classical psychology, but they both can be explained by the same quantum model," Wang said. "The same quantum model has been used to explain many other seemingly unrelated, puzzling findings in psychology. That's elegant."
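The question-order effect described in this piece has a compact mathematical core: if two survey questions are modeled as projections onto different subspaces of a belief state, the projections do not commute, so asking in different orders yields different joint probabilities. Below is a minimal numerical sketch of that idea; the two-dimensional belief space and the particular angles are illustrative assumptions, not the model fitted in the PNAS paper.

```python
import numpy as np

def projector(theta):
    """Projector onto the 'yes' answer of a question, modeled as a
    direction at angle theta in a 2-D belief space."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Initial belief state (unit vector) and two survey questions A and B,
# represented as non-commuting projectors. Angles are illustrative.
psi = np.array([1.0, 0.0])
A = projector(np.pi / 6)   # question A's 'yes' subspace
B = projector(np.pi / 3)   # question B's 'yes' subspace

def p_yes_then_yes(first, second, state):
    """P(yes to first question, then yes to second): sequential
    projection with renormalization after the first answer."""
    after_first = first @ state
    p1 = after_first @ after_first          # probability of first 'yes'
    if p1 == 0:
        return 0.0
    collapsed = after_first / np.sqrt(p1)   # state collapses on answering
    after_second = second @ collapsed
    p2 = after_second @ after_second
    return p1 * p2

p_ab = p_yes_then_yes(A, B, psi)   # ask A first, then B
p_ba = p_yes_then_yes(B, A, psi)   # ask B first, then A
print(p_ab, p_ba)                  # the two orders give different probabilities
```

For these angles the model gives P(yes, yes) = 0.5625 when A is asked first but only 0.1875 when B is asked first: the first answer collapses the belief state, changing the odds for the second question, which is exactly the order dependence that classical probability models treat as noise.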
- Is Nature Unnatural? Decades of confounding experiments have physicists considering a startling possibility: The universe might not make sense. On an overcast afternoon in late April, physics professors and students crowded into a wood-paneled lecture hall at Columbia University for a talk by Nima Arkani-Hamed, a high-profile theorist visiting from the Institute for Advanced Study in nearby Princeton, N.J. With his dark, shoulder-length hair shoved behind his ears, Arkani-Hamed laid out the dual, seemingly contradictory implications of recent experimental results at the Large Hadron Collider in Europe. “The universe is inevitable,” he declared. “The universe is impossible.” The spectacular discovery of the Higgs boson in July 2012 confirmed a nearly 50-year-old theory of how elementary particles acquire mass, which enables them to form big structures such as galaxies and humans. “The fact that it was seen more or less where we expected to find it is a triumph for experiment, it’s a triumph for theory, and it’s an indication that physics works,” Arkani-Hamed told the crowd. However, in order for the Higgs boson to make sense with the mass (or equivalent energy) it was determined to have, the LHC needed to find a swarm of other particles, too. None turned up. With the discovery of only one particle, the LHC experiments deepened a profound problem in physics that had been brewing for decades. Modern equations seem to capture reality with breathtaking accuracy, correctly predicting the values of many constants of nature and the existence of particles like the Higgs. Yet a few constants — including the mass of the Higgs boson — are exponentially different from what these trusted laws indicate they should be, in ways that would rule out any chance of life, unless the universe is shaped by inexplicable fine-tunings and cancellations. 
In peril is the notion of “naturalness,” Albert Einstein’s dream that the laws of nature are sublimely beautiful, inevitable and self-contained. Without it, physicists face the harsh prospect that those laws are just an arbitrary, messy outcome of random fluctuations in the fabric of space and time. The LHC will resume smashing protons in 2015 in a last-ditch search for answers. But in papers, talks and interviews, Arkani-Hamed and many other top physicists are already confronting the possibility that the universe might be unnatural. (There is wide disagreement, however, about what it would take to prove it.) “Ten or 20 years ago, I was a firm believer in naturalness,” said Nathan Seiberg, a theoretical physicist at the Institute, where Einstein taught from 1933 until his death in 1955. “Now I’m not so sure. My hope is there’s still something we haven’t thought about, some other mechanism that would explain all these things. But I don’t see what it could be.” Physicists reason that if the universe is unnatural, with extremely unlikely fundamental constants that make life possible, then an enormous number of universes must exist for our improbable case to have been realized. Otherwise, why should we be so lucky? Unnaturalness would give a huge lift to the multiverse hypothesis, which holds that our universe is one bubble in an infinite and inaccessible foam. According to a popular but polarizing framework called string theory, the number of possible types of universes that can bubble up in a multiverse is around 10^500. In a few of them, chance cancellations would produce the strange constants we observe. In such a picture, not everything about this universe is inevitable, rendering it unpredictable. Edward Witten, a string theorist at the Institute, said by email, “I would be happy personally if the multiverse interpretation is not correct, in part because it potentially limits our ability to understand the laws of physics.
But none of us were consulted when the universe was created.” “Some people hate it,” said Raphael Bousso, a physicist at the University of California at Berkeley who helped develop the multiverse scenario. “But I just don’t think we can analyze it on an emotional basis. It’s a logical possibility that is increasingly favored in the absence of naturalness at the LHC.” What the LHC does or doesn’t discover in its next run is likely to lend support to one of two possibilities: Either we live in an overcomplicated but stand-alone universe, or we inhabit an atypical bubble in a multiverse. “We will be a lot smarter five or 10 years from today because of the LHC,” Seiberg said. “So that’s exciting. This is within reach.” Cosmic Coincidence: Einstein once wrote that for a scientist, “religious feeling takes the form of a rapturous amazement at the harmony of natural law” and that “this feeling is the guiding principle of his life and work.” Indeed, throughout the 20th century, the deep-seated belief that the laws of nature are harmonious — a belief in “naturalness” — has proven a reliable guide for discovering truth. “Naturalness has a track record,” Arkani-Hamed said in an interview. In practice, it is the requirement that the physical constants (particle masses and other fixed properties of the universe) emerge directly from the laws of physics, rather than resulting from improbable cancellations. Time and again, whenever a constant appeared fine-tuned, as if its initial value had been magically dialed to offset other effects, physicists suspected they were missing something. They would seek and inevitably find some particle or feature that materially dialed the constant, obviating a fine-tuned cancellation. This time, the self-healing powers of the universe seem to be failing. The Higgs boson has a mass of 126 giga-electron-volts, but interactions with the other known particles should add about 10,000,000,000,000,000,000 giga-electron-volts to its mass. 
This implies that the Higgs’ “bare mass,” or starting value before other particles affect it, just so happens to be the negative of that astronomical number, resulting in a near-perfect cancellation that leaves just a hint of Higgs behind: 126 giga-electron-volts. Physicists have gone through three generations of particle accelerators searching for new particles, posited by a theory called supersymmetry, that would drive the Higgs mass down exactly as much as the known particles drive it up. But so far they’ve come up empty-handed. The upgraded LHC will explore ever-higher energy scales in its next run, but even if new particles are found, they will almost definitely be too heavy to influence the Higgs mass in quite the right way. The Higgs will still seem at least 10 or 100 times too light. Physicists disagree about whether this is acceptable in a natural, stand-alone universe. “Fine-tuned a little — maybe it just happens,” said Lisa Randall, a professor at Harvard University. But in Arkani-Hamed’s opinion, being “a little bit tuned is like being a little bit pregnant. It just doesn’t exist.” If no new particles appear and the Higgs remains astronomically fine-tuned, then the multiverse hypothesis will stride into the limelight. “It doesn’t mean it’s right,” said Bousso, a longtime supporter of the multiverse picture, “but it does mean it’s the only game in town.” A few physicists — notably Joe Lykken of Fermi National Accelerator Laboratory in Batavia, Ill., and Alessandro Strumia of the University of Pisa in Italy — see a third option. They say that physicists might be misgauging the effects of other particles on the Higgs mass and that when calculated differently, its mass appears natural. This “modified naturalness” falters when additional particles, such as the unknown constituents of dark matter, are included in calculations — but the same unorthodox path could yield other ideas.
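The cancellation described above can be made explicit. Using the figures quoted in the article (the precise statement is usually phrased in terms of the squared mass, so this one-line version is schematic):

```latex
m_H = m_{\text{bare}} + \delta m, \qquad
\delta m \approx 10^{19}\ \text{GeV}, \qquad
m_H \approx 126\ \text{GeV}
\;\Longrightarrow\;
m_{\text{bare}} \approx -\bigl(10^{19} - 126\bigr)\ \text{GeV},
```

so the bare mass must cancel the quantum corrections to roughly one part in \(10^{17}\), since \(126/10^{19} \approx 10^{-17}\). That is the degree of fine-tuning at stake.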
“I don’t want to advocate, but just to discuss the consequences,” Strumia said during a talk earlier this month at Brookhaven National Laboratory. However, modified naturalness cannot fix an even bigger naturalness problem that exists in physics: The fact that the cosmos wasn’t instantly annihilated by its own energy the moment after the Big Bang. Dark Dilemma: The energy built into the vacuum of space (known as vacuum energy, dark energy or the cosmological constant) is a baffling trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion times smaller than what is calculated to be its natural, albeit self-destructive, value. No theory exists about what could naturally fix this gargantuan disparity. But it’s clear that the cosmological constant has to be enormously fine-tuned to prevent the universe from rapidly exploding or collapsing to a point. It has to be fine-tuned in order for life to have a chance. To explain this absurd bit of luck, the multiverse idea has been gaining mainstream acceptance in cosmology circles over the past few decades. It got a credibility boost in 1987 when the Nobel Prize-winning physicist Steven Weinberg, now a professor at the University of Texas at Austin, calculated that the cosmological constant of our universe is expected in the multiverse scenario. Of the possible universes capable of supporting life — the only ones that can be observed and contemplated in the first place — ours is among the least fine-tuned. “If the cosmological constant were much larger than the observed value, say by a factor of 10, then we would have no galaxies,” explained Alexander Vilenkin, a cosmologist and multiverse theorist at Tufts University. “It’s hard to imagine how life might exist in such a universe.” Most particle physicists hoped that a more testable explanation for the cosmological constant problem would be found. None has.
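The string of “trillion”s above decodes compactly: ten factors of a trillion, each \(10^{12}\), multiply to give the famous 120-order-of-magnitude discrepancy between the observed vacuum energy density and its naively calculated value:

```latex
\frac{\rho_\Lambda^{\text{obs}}}{\rho_\Lambda^{\text{calc}}}
\sim \bigl(10^{-12}\bigr)^{10} = 10^{-120}.
```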
Now, physicists say, the unnaturalness of the Higgs makes the unnaturalness of the cosmological constant more significant. Arkani-Hamed thinks the issues may even be related. “We don’t have an understanding of a basic extraordinary fact about our universe,” he said. “It is big and has big things in it.” The multiverse turned into slightly more than just a hand-waving argument in 2000, when Bousso and Joe Polchinski, a professor of theoretical physics at the University of California at Santa Barbara, found a mechanism that could give rise to a panorama of parallel universes. String theory, a hypothetical “theory of everything” that regards particles as invisibly small vibrating lines, posits that space-time is 10-dimensional. At the human scale, we experience just three dimensions of space and one of time, but string theorists argue that six extra dimensions are tightly knotted at every point in the fabric of our 4-D reality. Bousso and Polchinski calculated that there are around 10^500 different ways for those six dimensions to be knotted (all tying up varying amounts of energy), making an inconceivably vast and diverse array of universes possible. In other words, naturalness is not required. There isn’t a single, inevitable, perfect universe. “It was definitely an aha-moment for me,” Bousso said. But the paper sparked outrage. “Particle physicists, especially string theorists, had this dream of predicting uniquely all the constants of nature,” Bousso explained. “Everything would just come out of math and pi and twos. And we came in and said, ‘Look, it’s not going to happen, and there’s a reason it’s not going to happen. We’re thinking about this in totally the wrong way.’ ” Life in a Multiverse: The Big Bang, in the Bousso-Polchinski multiverse scenario, is a fluctuation. A compact, six-dimensional knot that makes up one stitch in the fabric of reality suddenly shape-shifts, releasing energy that forms a bubble of space and time.
The properties of this new universe are determined by chance: the amount of energy unleashed during the fluctuation. The vast majority of universes that burst into being in this way are thick with vacuum energy; they either expand or collapse so quickly that life cannot arise in them. But some atypical universes, in which an improbable cancellation yields a tiny value for the cosmological constant, are much like ours. In a paper posted last month to the physics preprint website arXiv.org, Bousso and a Berkeley colleague, Lawrence Hall, argue that the Higgs mass makes sense in the multiverse scenario, too. They found that bubble universes that contain enough visible matter (compared to dark matter) to support life most often have supersymmetric particles beyond the energy range of the LHC, and a fine-tuned Higgs boson. Similarly, other physicists showed in 1997 that if the Higgs boson were five times heavier than it is, this would suppress the formation of atoms other than hydrogen, resulting, by yet another means, in a lifeless universe. Despite these seemingly successful explanations, many physicists worry that there is little to be gained by adopting the multiverse worldview. Parallel universes cannot be tested for; worse, an unnatural universe resists understanding. “Without naturalness, we will lose the motivation to look for new physics,” said Kfir Blum, a physicist at the Institute for Advanced Study. “We know it’s there, but there is no robust argument for why we should find it.” That sentiment is echoed again and again: “I would prefer the universe to be natural,” Randall said. But theories can grow on physicists. After spending more than a decade acclimating himself to the multiverse, Arkani-Hamed now finds it plausible — and a viable route to understanding the ways of our world. “The wonderful point, as far as I’m concerned, is basically any result at the LHC will steer us with different degrees of force down one of these divergent paths,” he said. 
“This kind of choice is a very, very big deal.” Naturalness could pull through. Or it could be a false hope in a strange but comfortable pocket of the multiverse. As Arkani-Hamed told the audience at Columbia, “stay tuned.” Via Quanta Magazine. This article was reprinted on ScientificAmerican.com.
- New Principle May Help Explain Why Nature is Quantum: Like small children, scientists are always asking the question 'why?'. One question they've yet to answer is why nature picked quantum physics, in all its weird glory, as a sensible way to behave. Researchers Corsin Pfister and Stephanie Wehner at the Centre for Quantum Technologies at the National University of Singapore tackle this perennial question in a paper published today in Nature Communications. We know that things that follow quantum rules, such as atoms, electrons or the photons that make up light, are full of surprises. They can exist in more than one place at once, for instance, or exist in a shared state where the properties of two particles show what Einstein called "spooky action at a distance", no matter what their physical separation. Because such things have been confirmed in experiments, researchers are confident the theory is right. But it would still be easier to swallow if it could be shown that quantum physics itself sprang from intuitive underlying principles. One way to approach this problem is to imagine all the theories one could possibly come up with to describe nature, and then work out what principles help to single out quantum physics. A good start is to assume that information follows Einstein's special relativity and cannot travel faster than light. However, this alone isn't enough to define quantum physics as the only way nature might behave. Corsin and Stephanie think they have come across a new useful principle. "We have found a principle that is very good at ruling out other theories," says Corsin. In short, the principle to be assumed is that if a measurement yields no information, then the system being measured has not been disturbed. Quantum physicists accept that gaining information from quantum systems causes disturbance. Corsin and Stephanie suggest that in a sensible world the reverse should be true, too.
If you learn nothing from measuring a system, then you can't have disturbed it. Consider the famous Schrödinger's cat paradox, a thought experiment in which a cat in a box simultaneously exists in two states (this is known as a 'quantum superposition'). According to quantum theory it is possible that the cat is both dead and alive – until, that is, the cat's state of health is 'measured' by opening the box. When the box is opened, allowing the health of the cat to be measured, the superposition collapses and the cat ends up definitively dead or alive. The measurement has disturbed the cat. This is a property of quantum systems in general. Perform a measurement for which you can't know the outcome in advance, and the system changes to match the outcome you get. What happens if you look a second time? The researchers assume the system is not evolving in time or affected by any outside influence, which means the quantum state stays collapsed. You would then expect the second measurement to yield the same result as the first. After all, "If you look into the box and find a dead cat, you don't expect to look again later and find the cat has been resurrected," says Stephanie. "You could say we've formalised the principle of accepting the facts", says Stephanie. Corsin and Stephanie show that this principle rules out various theories of nature. They note particularly that a class of theories they call 'discrete' are incompatible with the principle. These theories hold that quantum particles can take up only a finite number of states, rather than choose from an infinite, continuous range of possibilities. The possibility of such a discrete 'state space' has been linked to quantum gravitational theories proposing similar discreteness in spacetime, where the fabric of the universe is made up of tiny brick-like elements rather than being a smooth, continuous sheet.
As is often the case in research, Corsin and Stephanie reached this point having set out to solve an entirely different problem. Corsin was trying to find a general way to describe the effects of measurements on states, a problem that he found impossible to solve. In an attempt to make progress, he wrote down features that a 'sensible' answer should have. This property of information gain versus disturbance was on the list. He then noticed that if he imposed the property as a principle, some theories would fail. Corsin and Stephanie are keen to point out it's still not the whole answer to the big 'why' question: theories other than quantum physics, including classical physics, are compatible with the principle. But as researchers compile lists of principles that each rule out some theories to reach a set that singles out quantum physics, the principle of information gain versus disturbance seems like a good one to include.
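The repeatability this principle builds on — look into the box twice, find the same cat — is easy to sketch numerically. In this toy model (the state vector and its amplitudes are illustrative), the first projective measurement collapses a superposed qubit; repeating the measurement then yields no new information and causes no further disturbance:

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit in superposition: amplitudes give outcome probabilities 0.3 and 0.7.
psi = np.array([np.sqrt(0.3), np.sqrt(0.7)])

def measure(state):
    """Projective measurement in the computational basis: returns the
    outcome and the collapsed (post-measurement) state."""
    probs = np.abs(state) ** 2
    outcome = rng.choice(len(state), p=probs)
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0
    return outcome, collapsed

first, psi = measure(psi)        # this measurement disturbs the state...
second, psi = measure(psi)       # ...but repeating it is now deterministic:
third, psi = measure(psi)        # no information gained, no disturbance
print(first == second == third)  # → True
```

After the first measurement the state is an eigenstate of the measurement, so the outcome probabilities are 1 and 0: subsequent looks cannot surprise you, which is the "accepting the facts" intuition the principle formalizes.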
- What if the universe is an illusion? The world around us does a good job of convincing us that it is three-dimensional. The problem is that some pretty useful physics says it's a hologram. Again, this is another result I have derived: the universe is a hologram. However, my proofs are not based on 'utilitarian physics' but on necessary and sufficient conditions that any quantum theory unifying Einstein's Theory of General Relativity with Quantum Field Theory must meet.
- New model describes cognitive decision making as the collapse of a quantum superstate: Quantum physics and the 'mind': is the brain a quantum computer? Decision making in an enormous range of tasks involves the accumulation of evidence in support of different hypotheses. One of the enduring models of evidence accumulation is the Markov random walk (MRW) theory, which assigns a probability to each hypothesis. In an MRW model of decision making, when deciding between two hypotheses, the cumulative evidence for and against each hypothesis reaches different levels at different times, moving particle-like from state to state and only occupying a single definite evidence level at any given point. By contrast with MRW, the new quantum random walk (QRW) theory assumes that evidence develops over time in a superposition state analogous to the wave-like state of a photon, and judgements and decisions are made when this indefinite superposition state "collapses" into a definite state of evidence. In the experiment, nine study participants completed 112 blocks of 24 trials each over five sessions, in which they viewed a random dot motion stimulus on a screen. A percentage of the dots moved coherently in a single direction. The researchers manipulated the difficulty of the test between trials. In the choice condition, participants were asked to decide whether the coherently moving dots were traveling to the left or the right. In the no-choice condition, participants were prompted by an audio tone simply to make a motor response. Then participants were asked to rate their confidence that the coherently moving dots were traveling to the right on a scale ranging from 0 (certain left) to 100 percent (certain right). The researchers report that, on average, confidence ratings were much higher when the trajectories of the dots were highly coherent.
Confidence ratings were lower in the no-choice condition than in the choice condition, providing evidence against the read-out assumption of MRW theory, which holds that confidence in the choice condition should be higher. The QRW theory posits that evidence evolves over time, as in MRW, but that judgments and decisions create a new definite state from an indefinite, superposition-like state. "This quantum perspective reconceptualizes how we model uncertainty and formalizes a long-held hypothesis that judgments and decisions create rather than reveal preferences and beliefs," the authors write. They conclude, "... quantum random walk theory provides a previously unexamined perspective on the nature of the evidence accumulation process that underlies both cognitive and neural theories of decision making."
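The contrast between the two evidence-accumulation models can be sketched numerically. Below, a classical Markov random walk evolves a probability distribution over discrete evidence levels, while a standard Hadamard-coin quantum walk — an illustrative stand-in for the paper's QRW model, not its fitted version — evolves complex amplitudes over the same levels. Until "measured," the quantum walker occupies all levels in superposition, and it spreads much faster than the classical one:

```python
import numpy as np

n, steps = 41, 10   # discrete evidence levels and number of time steps
start = n // 2      # begin at the neutral middle level

# --- Markov random walk (MRW): a probability distribution over levels,
# moving one level up or down with probability 1/2 at each step.
p = np.zeros(n)
p[start] = 1.0
for _ in range(steps):
    p = 0.5 * np.roll(p, 1) + 0.5 * np.roll(p, -1)

# --- Quantum random walk: complex amplitudes over (level, coin) pairs
# evolve unitarily; every evidence level coexists in superposition until
# a measurement "collapses" them into definite probabilities.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard coin
amp = np.zeros((n, 2), dtype=complex)
amp[start] = [1 / np.sqrt(2), 1j / np.sqrt(2)]     # symmetric initial coin
for _ in range(steps):
    amp = amp @ H.T                        # toss the quantum coin
    amp[:, 0] = np.roll(amp[:, 0], -1)     # coin 0: move down a level
    amp[:, 1] = np.roll(amp[:, 1], 1)      # coin 1: move up a level
q = (np.abs(amp) ** 2).sum(axis=1)         # measurement probabilities

levels = np.arange(n)
var_mrw = (p * (levels - start) ** 2).sum()
var_qrw = (q * (levels - start) ** 2).sum()
print(var_mrw, var_qrw)  # the quantum walk's variance grows much faster
```

The classical walk diffuses (variance grows linearly with the number of steps), while the quantum walk spreads ballistically (variance grows roughly quadratically) — one concrete way the wave-like superposition state behaves differently from a particle-like Markov state.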
- Quantum Biology and the Hidden Nature of Nature: Can the spooky world of quantum physics explain bird navigation, photosynthesis and even our delicate sense of smell? Clues are mounting that the rules governing the subatomic realm may play an unexpectedly pivotal role in the visible world. Join leading thinkers in the emerging field of quantum biology as they explore the hidden hand of quantum physics in everyday life and discuss how these insights may one day revolutionize thinking on everything from the energy crisis to quantum computers.
- There Is No Progress in Philosophy Eric Dietrich: Except for a patina of twenty-first century modernity, in the form of logic and language, philosophy is exactly the same now as it ever was; it has made no progress whatsoever. We philosophers wrestle with the exact same problems the Pre-Socratics wrestled with. Even more outrageous than this claim, though, is the blatant denial of its obvious truth by many practicing philosophers. The No-Progress view is explored and argued for here. Its denial is diagnosed as a form of anosognosia, a mental condition where the affected person denies there is any problem. The theories of two eminent philosophers supporting the No-Progress view are also examined. The final section offers an explanation for philosophy’s inability to solve any philosophical problem, ever. The paper closes with some reflections on philosophy’s future.
- Quantum physics just got less complicated: Here's a nice surprise - quantum physics is less complicated than we thought. An international team of researchers has proved that two peculiar features of the quantum world previously considered distinct are different manifestations of the same thing. The result is published 19 December in Nature Communications. Patrick Coles, Jedrzej Kaniewski, and Stephanie Wehner made the breakthrough while at the Centre for Quantum Technologies at the National University of Singapore. They found that 'wave-particle duality' is simply the quantum 'uncertainty principle' in disguise, reducing two mysteries to one. "The connection between uncertainty and wave-particle duality comes out very naturally when you consider them as questions about what information you can gain about a system. Our result highlights the power of thinking about physics from the perspective of information," says Wehner, who is now an Associate Professor at QuTech at the Delft University of Technology in the Netherlands.
- How spacetime is built by quantum entanglement: A collaboration of physicists and a mathematician has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. The paper announcing the discovery by Hirosi Ooguri, a Principal Investigator at the University of Tokyo's Kavli IPMU, with Caltech mathematician Matilde Marcolli and graduate students Jennifer Lin and Bogdan Stoica, will be published in Physical Review Letters as an Editors' Suggestion "for the potential interest in the results presented and on the success of the paper in communicating its message, in particular to readers from other fields." Physicists and mathematicians have long sought a Theory of Everything (ToE) that unifies general relativity and quantum mechanics. General relativity explains gravity and large-scale phenomena such as the dynamics of stars and galaxies in the universe, while quantum mechanics explains microscopic phenomena from the subatomic to molecular scales. The holographic principle is widely regarded as an essential feature of a successful Theory of Everything. The holographic principle states that gravity in a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. In particular, the three dimensions of the volume should emerge from the two dimensions of the surface. However, understanding the precise mechanics for the emergence of the volume from the surface has been elusive. Now, Ooguri and his collaborators have found that quantum entanglement is the key to solving this question. Using a quantum theory (that does not include gravity), they showed how to compute energy density, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. 
This is analogous to diagnosing conditions inside your body by looking at X-ray images on two-dimensional sheets. This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without actually explicitly including gravity in the theory. The importance of quantum entanglement has been suggested before, but its precise role in emergence of spacetime was not clear until the new paper by Ooguri and collaborators. Quantum entanglement is a phenomenon whereby quantum states such as spin or polarization of particles at different locations cannot be described independently. Measuring (and hence acting on) one particle must also act on the other, something that Einstein called "spooky action at a distance." The work of Ooguri and collaborators shows that this quantum entanglement generates the extra dimensions of the gravitational theory. "It was known that quantum entanglement is related to deep issues in the unification of general relativity and quantum mechanics, such as the black hole information paradox and the firewall paradox," says Hirosi Ooguri. "Our paper sheds new light on the relation between quantum entanglement and the microscopic structure of spacetime by explicit calculations. The interface between quantum gravity and information science is becoming increasingly important for both fields. I myself am collaborating with information scientists to pursue this line of research further."
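The raw ingredient of such calculations — "quantum entanglement data" — is quantified by entanglement entropy. The sketch below computes it for a two-qubit Bell state via the reduced density matrix; this is the generic textbook computation, not the holographic field-theory calculation of Ooguri and collaborators:

```python
import numpy as np

# A maximally entangled Bell state of two qubits A and B:
# (|00> + |11>) / sqrt(2), with basis index 2*a + b.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Density matrix, reshaped so its indices read (a, b, a', b').
rho = np.outer(bell, bell.conj()).reshape(2, 2, 2, 2)

# Partial trace over qubit B gives the reduced state of A alone.
rho_A = np.trace(rho, axis1=1, axis2=3)

# Von Neumann entanglement entropy S = -sum(lam * log2(lam)).
lam = np.linalg.eigvalsh(rho_A)
lam = lam[lam > 1e-12]
S = -np.sum(lam * np.log2(lam))
print(S)  # → 1.0: one full bit of entanglement

# By the same recipe, an unentangled product state |00> gives zero.
prod = np.array([1.0, 0.0, 0.0, 0.0])
rho_p = np.outer(prod, prod.conj()).reshape(2, 2, 2, 2)
rho_pA = np.trace(rho_p, axis1=1, axis2=3)
lam_p = np.linalg.eigvalsh(rho_pA)
lam_p = lam_p[lam_p > 1e-12]
S_p = -np.sum(lam_p * np.log2(lam_p))
print(S_p)  # effectively zero: no entanglement
```

In the holographic setting, entropies of exactly this kind, computed region by region on the boundary surface, are the "X-ray images" from which the bulk energy density, and hence the emergent geometry, is reconstructed.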
- Why Isn’t There More Progress in Philosophy? David J. Chalmers: "Is there progress in philosophy? I have two reactions to this question. First, the answer is obviously yes. Second, it is the wrong question. The right question is not “Is there progress?” but “Why isn’t there more?”. We can distinguish three questions about philosophical progress. The Existence Question: is there progress in philosophy? The Comparison Question: is there as much progress in philosophy as in science? The Explanation Question (which tends to presuppose a negative answer to at least one of these two questions): why isn’t there more progress in philosophy? What we might call a glass-half-full view of philosophical progress is that there is some progress in philosophy. The glass-half-empty view is that there is not as much as we would like. In effect, the glass-half-full view consists in a positive answer to the Existence Question, while the glass-half-empty view (or at least one salient version of it) consists in a negative answer to the Comparison Question. These views fall between the extremes of a glass-empty view which answers no to the Existence Question, saying there is no progress in philosophy, and a glass-full view which answers yes to the Comparison Question, saying there is as much progress in philosophy as in science (or as much as we would like). Of course the glass-half-full thesis and the glass-half-empty thesis are consistent with one another. I think for almost anyone deeply involved with the practice of philosophy, both theses will ring true. In discussions of progress in philosophy, my experience is that most people focus on the Existence Question: pessimists about philosophical progress (e.g. Dietrich 2011; Nielsen 1987; McGinn 1993) argue for the glass-empty thesis, and optimists (e.g. Stoljar forthcoming) respond by defending the glass-half-full thesis. I will focus instead on the Comparison and Explanation Questions. 
I will articulate a version of the glass-half-empty thesis, argue for it, and then address the crucial question of what explains it. I should say that this paper is as much an exercise in the sociology of philosophy as in philosophy. For the most part I have abstracted away from my own philosophical and metaphilosophical views in order to take an “outside view” of philosophical progress from a sociological perspective. For much of the paper I am largely saying the obvious, but sometimes the obvious is worth saying so that less obvious things can be said from there. Only toward the end will I bring in my own views, which lean a little more toward the optimistic, and see how the question of philosophical progress stands in light of them."
- Is Time’s Arrow Perspectival? Carlo Rovelli: We observe entropy decrease towards the past. Does this imply that in the past the world was in a non-generic microstate? The author points out an alternative. The subsystem to which we belong interacts with the universe via a relatively small number of quantities, which define a coarse-graining. Entropy happens to depend on coarse-graining. Therefore the entropy we ascribe to the universe depends on the peculiar coupling between us and the rest of the universe. Low past entropy may be due to the fact that this coupling (rather than the microstate of the universe) is non-generic. The author then argues that for any generic microstate of a sufficiently rich system there are always special subsystems defining a coarse-graining for which the entropy of the rest is low in one time direction (the “past”). These are the subsystems allowing creatures that “live in time” —such as those in the biosphere— to exist. He then replies to some objections raised to an earlier presentation of this idea, in particular by Bob Wald, David Albert and Jim Hartle.
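The claim that entropy depends on the coarse-graining, not just on the microstate, can be made concrete with a toy computation (our illustration, not from the paper): the very same microstate of a six-bit system is assigned different entropies by observers using different coarse-grainings.

```python
import math
from itertools import product

# One particular microstate of a 6-bit system.
bits = (1, 0, 1, 1, 0, 1)
microstates = list(product([0, 1], repeat=6))

def entropy(coarse_grain, state):
    """Boltzmann-style entropy: log2 of the number of microstates
    that the coarse-graining lumps together with `state`."""
    macro = coarse_grain(state)
    multiplicity = sum(1 for s in microstates if coarse_grain(s) == macro)
    return math.log2(multiplicity)

# Coarse-graining A: the observer only sees the total number of 1s.
S_A = entropy(sum, bits)
# Coarse-graining B: the observer resolves every bit (no coarse-graining).
S_B = entropy(lambda s: s, bits)

print(S_A)  # log2(C(6,4)) = log2(15) ≈ 3.91
print(S_B)  # 0.0 — a fully resolved state has zero entropy
```

The microstate never changes; only the set of quantities through which the observer couples to the system does, and with it the entropy.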
- Quantum physics in neuroscience and psychology: a neurophysical model of mind–brain interaction Jeffrey M Schwartz, Henry P Stapp, Mario Beauregard: Neuropsychological research on the neural basis of behaviour generally posits that brain mechanisms will ultimately suffice to explain all psychologically described phenomena. This assumption stems from the idea that the brain is made up entirely of material particles and fields, and that all causal mechanisms relevant to neuroscience can therefore be formulated solely in terms of properties of these elements. Thus, terms having intrinsic mentalistic and/or experiential content (e.g. ‘feeling’, ‘knowing’ and ‘effort’) are not included as primary causal factors. This theoretical restriction is motivated primarily by ideas about the natural world that have been known to be fundamentally incorrect for more than three-quarters of a century. Contemporary basic physical theory differs profoundly from classic physics on the important matter of how the consciousness of human agents enters into the structure of empirical phenomena. The new principles contradict the older idea that local mechanical processes alone can account for the structure of all observed empirical data. Contemporary physical theory brings directly and irreducibly into the overall causal structure certain psychologically described choices made by human agents about how they will act. This key development in basic physical theory is applicable to neuroscience, and it provides neuroscientists and psychologists with an alternative conceptual framework for describing neural processes. Indeed, owing to certain structural features of ion channels critical to synaptic function, contemporary physical theory must in principle be used when analysing human brain dynamics. The new framework, unlike its classic-physics-based predecessor, is erected directly upon, and is compatible with, the prevailing principles of physics. 
It is able to represent more adequately than classic concepts the neuroplastic mechanisms relevant to the growing number of empirical studies of the capacity of directed attention and mental effort to systematically alter brain function.
- When causation does not imply correlation: robust violations of the Faithfulness axiom Richard Kennaway: it is demonstrated here that the Faithfulness property that is assumed in much causal analysis is robustly violated for a large class of systems of a type that occurs throughout the life and social sciences: control systems. These systems exhibit correlations indistinguishable from zero between variables that are strongly causally connected, and can show very high correlations between variables that have no direct causal connection, only a connection via causal links between uncorrelated variables. Their patterns of correlation are robust, in that they remain unchanged when their parameters are varied. The violation of Faithfulness is fundamental to what a control system does: hold some variable constant despite the disturbing influences on it. No method of causal analysis that requires Faithfulness is applicable to such systems.
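Kennaway's point can be demonstrated with a minimal simulation (a sketch of ours, not the paper's model): an integral controller holds a perceived variable p near zero. The disturbance d causes p directly, yet their correlation is indistinguishable from zero; d and the control output u are connected only via p, yet they are almost perfectly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 20000
gain, dt = 50.0, 0.01
d = np.cumsum(rng.normal(0, 0.1, T))   # slowly drifting disturbance
u = np.zeros(T)
p = np.zeros(T)
for t in range(1, T):
    p[t] = d[t] + u[t - 1]             # d -> p: direct causal link
    u[t] = u[t - 1] - gain * dt * p[t] # p -> u: integral control action

corr_dp = np.corrcoef(d[100:], p[100:])[0, 1]  # small despite direct causation
corr_du = np.corrcoef(d[100:], u[100:])[0, 1]  # near -1 despite no direct link
print(round(corr_dp, 3), round(corr_du, 3))
```

Varying the gain or the disturbance statistics leaves this pattern intact, which is the robustness Kennaway emphasizes: the vanishing correlation is exactly what the controller is built to produce.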
- Renormalized spacetime is two-dimensional at the Planck scale T. Padmanabhan, Sumanta Chakraborty: Quantum field theory distinguishes between the bare variables – which we introduce in the Lagrangian – and the renormalized variables which incorporate the effects of interactions. This suggests that the renormalized, physical, metric tensor of spacetime (and all the geometrical quantities derived from it) will also be different from the bare, classical, metric tensor in terms of which the bare gravitational Lagrangian is expressed. The authors provide a physical ansatz to relate the renormalized metric tensor to the bare metric tensor such that the spacetime acquires a zero-point length ℓ₀ of the order of the Planck length L_P. This prescription leads to several remarkable consequences. In particular, the Euclidean volume V_D(ℓ, ℓ₀) of a region of size ℓ in a D-dimensional spacetime scales as V_D(ℓ, ℓ₀) ∝ ℓ₀^(D−2) ℓ² when ℓ ∼ ℓ₀, while it reduces to the standard result V_D(ℓ, ℓ₀) ∝ ℓ^D at large scales (ℓ ≫ ℓ₀). The appropriately defined effective dimension, D_eff, decreases continuously from D_eff = D (at ℓ ≫ ℓ₀) to D_eff = 2 (at ℓ ∼ ℓ₀). This suggests that the physical spacetime becomes essentially 2-dimensional near the Planck scale.
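The two quoted limits can be reproduced by a smooth toy interpolation (our assumption, not the authors' ansatz): taking V_D(ℓ) ∝ ℓ² (ℓ² + ℓ₀²)^((D−2)/2) and defining an effective dimension as the logarithmic derivative D_eff = d ln V_D / d ln ℓ gives D_eff → 2 deep below the zero-point length and D_eff → D at large scales (the paper's precise definition of D_eff differs in detail).

```python
# Toy interpolation between the two scaling regimes quoted above.
# For l << l0:  V_D ∝ l0^(D-2) l^2   (effective dimension 2)
# For l >> l0:  V_D ∝ l^D            (effective dimension D)
D, l0 = 4, 1.0

def D_eff(l):
    # Logarithmic derivative of V_D(l) = l^2 (l^2 + l0^2)^((D-2)/2),
    # computed in closed form.
    return 2 + (D - 2) * l**2 / (l**2 + l0**2)

print(D_eff(1e-3))   # ≈ 2: two-dimensional far below the zero-point length
print(D_eff(1e3))    # ≈ 4: the usual dimension at large scales
```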
- CERN's LHCb experiment reports observation of exotic pentaquark particles: "The pentaquark is not just any new particle," said LHCb spokesperson Guy Wilkinson. "It represents a way to aggregate quarks, namely the fundamental constituents of ordinary protons and neutrons, in a pattern that has never been observed before in over fifty years of experimental searches. Studying its properties may allow us to understand better how ordinary matter, the protons and neutrons from which we're all made, is constituted." Our understanding of the structure of matter was revolutionized in 1964 when American physicist Murray Gell-Mann proposed that a category of particles known as baryons, which includes protons and neutrons, are composed of three fractionally charged objects called quarks, and that another category, mesons, are formed of quark-antiquark pairs. Gell-Mann was awarded the Nobel Prize in physics for this work in 1969. This quark model also allows the existence of other quark composite states, such as pentaquarks composed of four quarks and an antiquark. Until now, however, no conclusive evidence for pentaquarks had been seen. LHCb researchers looked for pentaquark states by examining the decay of a baryon known as Λb (Lambda b) into three other particles, a J/ψ (J-psi), a proton and a charged kaon. Studying the spectrum of masses of the J/ψ and the proton revealed that intermediate states were sometimes involved in their production. These have been named Pc(4450)+ and Pc(4380)+, the former being clearly visible as a peak in the data, with the latter being required to describe the data fully. Earlier experiments that have searched for pentaquarks have proved inconclusive. Where the LHCb experiment differs is that it has been able to look for pentaquarks from many perspectives, with all pointing to the same conclusion. 
It's as if the previous searches were looking for silhouettes in the dark, whereas LHCb conducted the search with the lights on, and from all angles. The next step in the analysis will be to study how the quarks are bound together within the pentaquarks.
- Causes and Consequences of Income Inequality: A Global Perspective - INTERNATIONAL MONETARY FUND: Widening income inequality is the defining challenge of our time. In advanced economies, the gap between the rich and poor is at its highest level in decades. Inequality trends have been more mixed in emerging markets and developing countries (EMDCs), with some countries experiencing declining inequality, but pervasive inequities in access to education, health care, and finance remain. Not surprisingly then, the extent of inequality, its drivers, and what to do about it have become some of the most hotly debated issues by policymakers and researchers alike. Against this background, the objective of this paper is two-fold. First, the authors show why policymakers need to focus on the poor and the middle class. Earlier IMF work has shown that income inequality matters for growth and its sustainability. Their analysis suggests that the income distribution itself matters for growth as well. Specifically, if the income share of the top 20 percent (the rich) increases, then GDP growth actually declines over the medium term, suggesting that the benefits do not trickle down. In contrast, an increase in the income share of the bottom 20 percent (the poor) is associated with higher GDP growth. The poor and the middle class matter the most for growth via a number of interrelated economic, social, and political channels. Second, the authors investigate what explains the divergent trends in inequality developments across advanced economies and EMDCs, with a particular focus on the poor and the middle class. While most existing studies have focused on advanced countries and looked at the drivers of the Gini coefficient and the income of the rich, this study explores a more diverse group of countries and pays particular attention to the income shares of the poor and the middle class—the main engines of growth. 
This analysis suggests that technological progress and the resulting rise in the skill premium (positives for growth and productivity) and the decline of some labor market institutions have contributed to inequality in both advanced economies and EMDCs. Globalization has played a smaller but reinforcing role. Interestingly, the authors find that a rising skill premium is associated with widening income disparities in advanced countries, while financial deepening is associated with rising inequality in EMDCs, suggesting scope for policies that promote financial inclusion. Policies that focus on the poor and the middle class can mitigate inequality. Irrespective of the level of economic development, better access to education and health care and well-targeted social policies, while ensuring that labor market institutions do not excessively penalize the poor, can help raise the income share for the poor and the middle class. There is no one-size-fits-all approach to tackling inequality. The nature of appropriate policies depends on the underlying drivers and country-specific policy and institutional settings. In advanced economies, policies should focus on reforms to increase human capital and skills,
coupled with making tax systems more progressive. In EMDCs, ensuring financial deepening is accompanied with greater financial inclusion and creating incentives for lowering informality would be important. More generally, complementarities between growth and income equality objectives suggest that policies aimed at raising average living standards can also influence the distribution of income and ensure a more inclusive prosperity.
- Does time dilation destroy quantum superposition? Why do we not see everyday objects in quantum superpositions? The answer to that long-standing question may partly lie with gravity. So says a group of physicists in Austria, which has shown theoretically that a feature of Einstein's general relativity, known as time dilation, can render quantum states classical. The researchers say that even the Earth's puny gravitational field may be strong enough for the effect to be measurable in a laboratory within a few years. Our daily experience suggests that there exists a fundamental boundary between the quantum and classical worlds. One way physicists explain the transition between the two is to say that quantum superposition states simply break down when a system exceeds a certain size or level of complexity – its wavefunction is said to "collapse" and the system becomes "decoherent". An alternative explanation, in which quantum mechanics holds sway at all scales, posits that interactions with the environment bring different elements of an object's wavefunction out of phase, such that they no longer interfere with one another. Larger objects are subject to this decoherence more quickly than smaller ones because they have more constituent particles and, therefore, more complex wavefunctions. There are already multiple explanations for decoherence, including a particle emitting or absorbing electromagnetic radiation or being buffeted by surrounding air molecules. In the latest work, Časlav Brukner at the University of Vienna and colleagues have put forward a new model that involves time dilation – where the flow of time is affected by mass (gravity). This relativistic effect causes a clock in outer space to tick at a faster rate than one near the surface of the Earth. 
In their work, Brukner and colleagues consider a macroscopic body – whose constituent particles can vibrate at different frequencies – to be in a superposition of two states at very slightly different distances from the surface of a massive object. Time dilation would then dictate that the state closer to the object will vibrate at a lower frequency than the other. They then calculate how much time dilation is needed to differentiate the frequencies so that the two states get so far out of step with one another that they can no longer interfere. With this premise, the team worked out that even the Earth's gravitational field is strong enough to cause decoherence in quite small objects across measurable timescales. The researchers calculated that an object that weighs a gram and exists in two quantum states, separated vertically by a thousandth of a millimetre, should decohere in around a millisecond. Beyond any potential quantum-computing applications that would benefit from the removal of unwanted decoherence, the work challenges physicists' assumption that only gravitational fields generated by neutron stars and other massive astrophysical objects can exert a noticeable influence on quantum phenomena. "The interesting thing about this phenomenon is that both quantum mechanics and general relativity would be needed to explain it," says Brukner.
One way to experimentally test the effect would involve sending a "clock" (such as a beam of caesium atoms) through the two arms of an interferometer. The interferometer would initially be positioned horizontally and the interference pattern recorded. It would then be rotated to the vertical, such that one arm experiences a higher gravitational potential than the other, and its output again observed. In the latter case, the two states vibrate at different frequencies due to time dilation. This different rate of ticking would reveal which state is travelling down each arm, and once this information is revealed, the interference pattern disappears. "People have already measured time dilation due to Earth's gravity," says Brukner, "but they usually use two clocks in two different positions. We are saying, why not use one clock in a superposition?" Carrying out such a test, however, will not be easy. The fact that the effect is far smaller than other potential sources of decoherence would mean cooling the interferometer down to just a few kelvin while enclosing it in a vacuum, says Brukner.
The measurements would still be extremely tricky, according to Markus Arndt, at the University of Vienna, who was not involved in the current work. He says they could require superpositions around a million times bigger and 1000 times longer lasting than is possible with the best equipment today. Nevertheless, Arndt praises the proposal for "directing attention" towards the interface between quantum mechanics and gravity. He also points out that any improvements to interferometers needed for this work could also have practical benefits, such as allowing improved tests of relativity or enhancing tools for geodesy.
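The gram-in-a-millisecond figure quoted above can be checked to order of magnitude. Assuming the model in question is the one published by Pikovski, Zych, Costa and Brukner (Nature Physics, 2015), the decoherence time is τ = √2 ħ c² / (k_B T g Δx √N); the temperature and particle count below are our assumptions, chosen as typical values.

```python
import math

hbar = 1.055e-34      # reduced Planck constant, J s
c    = 3.0e8          # speed of light, m/s
kB   = 1.381e-23      # Boltzmann constant, J/K
g    = 9.81           # Earth's gravitational acceleration, m/s^2
T    = 300.0          # internal temperature, K (assumed: room temperature)
dx   = 1e-6           # vertical separation, m (a thousandth of a millimetre)
N    = 6.0e23         # constituent particles in ~1 gram (assumed nucleon count)

# Decoherence time from gravitational time dilation (Pikovski et al. 2015).
tau = math.sqrt(2) * hbar * c**2 / (kB * T * g * dx * math.sqrt(N))
print(tau)   # ≈ 4e-4 s — around a millisecond, as the article states
```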
- Judgment Aggregation in Science Liam Kofi Bright, Haixin Dang, and Remco Heesen: This paper raises the problem of judgment aggregation in science. The problem has two sides. First, how do scientists decide which propositions to assert in a collaborative document? And second, how should they make such decisions? The literature on judgment aggregation is relevant to the second question. Although little evidence is available regarding the first question, it suggests that current scientific practice is not in line with the most plausible recommendations from the judgment aggregation literature. The authors explore the evidence that is presently available before suggesting a number of avenues for future research on this problem.
- A Stronger Bell Argument for Quantum Non-Locality Paul M. Nager: It is widely accepted that the violation of Bell inequalities excludes local theories of the quantum realm. This paper presents a stronger Bell argument which even forbids certain non-local theories. Among these excluded non-local theories are those whose only non-local connection is a probabilistic (or functional) dependence between the space-like separated measurement outcomes of EPR/B experiments (a subset of outcome dependent theories). In this way, the new argument shows that the result of the received Bell argument, which requires just any kind of non-locality, is inappropriately weak. Positively, the remaining non-local theories, which can violate Bell inequalities (among them quantum theory), are characterized by the fact that at least one of the measurement outcomes in some sense probabilistically depends both on its local as well as on its distant measurement setting (probabilistic Bell contextuality). Whether an additional dependence between the outcomes holds is irrelevant for the question whether a certain theory can violate Bell inequalities. This new concept of quantum non-locality is considerably tighter and more informative than the one following from the usual Bell argument. It is proven that (given usual background assumptions) the result of the stronger Bell argument presented here is the strongest possible consequence from the violation of Bell inequalities on a qualitative probabilistic level.
- General relativity as a two-dimensional CFT Tim Adamo: The tree-level scattering amplitudes of general relativity encode the full non-linearity of the Einstein field equations. Yet remarkably compact expressions for these amplitudes have been found which seem unrelated to a perturbative expansion of the Einstein-Hilbert action. This suggests an entirely different description of GR which makes this on-shell simplicity manifest. Taking his cue from the tree-level amplitudes, the author discusses how such a description can be found. The result is a formulation of GR in terms of a solvable two-dimensional CFT, with the Einstein equations emerging as quantum consistency conditions.
- The Rise and Decline of General Laws of Capitalism Daron Acemoglu, James A. Robinson: Thomas Piketty's (2013) book, Capital in the 21st Century, follows in the tradition of the great classical economists, like Marx and Ricardo, in formulating general laws of capitalism to diagnose and predict the dynamics of inequality. The authors argue that general economic laws are unhelpful as a guide to understand the past or predict the future, because they ignore the central role of political and economic institutions, as well as the endogenous evolution of technology, in shaping the distribution of resources in society. The authors use regression evidence to show that the main economic force emphasized in Piketty's book, the gap between the interest rate and the growth rate, does not appear to explain historical patterns of inequality (especially, the share of income accruing to the upper tail of the distribution). They then use the histories of inequality of South Africa and Sweden to illustrate that inequality dynamics cannot be understood without embedding economic factors in the context of economic and political institutions, and also that the focus on the share of top incomes can give a misleading characterization of the true nature of inequality.
- Strange behavior of quantum particles may indicate the existence of other parallel universes John Davis: It started about five years ago with a practical chemistry question. Little did Bill Poirier realize as he delved into the quantum mechanics of complex molecules that he would fall down the rabbit hole to discover evidence of other parallel worlds that might well be poking through into our own, showing up at the quantum level. The Texas Tech University professor of chemistry and biochemistry said that quantum mechanics is a strange realm of reality. Particles at this atomic and subatomic level can appear to be in two places at once. Because the activity of these particles is so iffy, scientists can only describe what's happening mathematically by "drawing" the tiny landscape as a wave of probability. Chemists like Poirier draw these landscapes to better understand chemical reactions. Despite the "uncertainty" of particle location, quantum wave mechanics allows scientists to make precise predictions. The rules for doing so are well established. At least, they were until Poirier's recent "eureka" moment when he found a completely new way to draw quantum landscapes. Instead of waves, his medium became parallel universes. Though his theory, called "Many Interacting Worlds," sounds like science fiction, it holds up mathematically. Originally published in 2010, it has led to a number of invited presentations, peer-reviewed journal articles and a recent invited commentary in the premier physics journal Physical Review. "This has gotten a lot of attention in the foundational mechanics community as well as the popular press," Poirier said. "At a symposium in Vienna in 2013, standing five feet away from a famous Nobel Laureate in physics, I gave my presentation on this work fully expecting criticism. I was surprised when I received none. Also, I was happy to see that I didn't have anything obviously wrong with my mathematics." 
In his theory, Poirier postulates that small particles from many worlds seep through to interact with our own, and their interaction accounts for the strange phenomena of quantum mechanics. Such phenomena include particles that seem to be in more than one place at a time, or to communicate with each other over great distances without any apparent explanation.
- A statistical method for studying correlated rare events and their risk factors Xiaonan Xue, Mimi Y Kim, Tao Wang, Mark H Kuniholm, Howard D Strickler: Longitudinal studies of rare events such as cervical high-grade lesions or colorectal polyps that can recur often involve correlated binary data. Risk factors for these events cannot be reliably examined using conventional statistical methods. For example, logistic regression models that incorporate generalized estimating equations often fail to converge or provide inaccurate results when analyzing data of this type. Although exact methods have been reported, they are complex and computationally difficult. The current paper proposes a mathematically straightforward and easy-to-use two-step approach involving (i) an additive model to measure associations between a rare or uncommon correlated binary event and potential risk factors and (ii) a permutation test to estimate the statistical significance of these associations. Simulation studies showed that the proposed method reliably tests and accurately estimates the associations of exposure with correlated binary rare events. This method was then applied to a longitudinal study of human leukocyte antigen (HLA) genotype and risk of cervical high-grade squamous intraepithelial lesions (HSIL) among HIV-infected and HIV-uninfected women. Results showed statistically significant associations of two HLA alleles among HIV-negative but not HIV-positive women, suggesting that immune status may modify the HLA and cervical HSIL association. Overall, the proposed method avoids model non-convergence problems and provides a computationally simple, accurate, and powerful approach for the analysis of risk factor associations with rare/uncommon correlated binary events.
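The two-step idea can be sketched schematically (our construction on simulated data, not the authors' exact model): (i) an additive association measure — here, the difference in mean event rate between exposed and unexposed subjects — and (ii) a permutation test that shuffles exposure labels across subjects while keeping each subject's correlated repeat visits together.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated cohort: 200 subjects, 5 visits each; a subject-level frailty
# induces correlation between a subject's repeat binary outcomes.
n_subj, n_visits = 200, 5
exposure = rng.integers(0, 2, n_subj)
base = rng.uniform(0.0, 0.04, n_subj)      # subject-level baseline risk
rate = base + 0.03 * exposure              # additive effect of exposure
events = rng.random((n_subj, n_visits)) < rate[:, None]

def assoc(expo):
    # Step (i): additive association — difference in mean event rate.
    return events[expo == 1].mean() - events[expo == 0].mean()

obs = assoc(exposure)
# Step (ii): permute exposure across subjects (rows), so each subject's
# correlated visits travel together, and compare to the observed statistic.
perm = np.array([assoc(rng.permutation(exposure)) for _ in range(2000)])
p_value = (np.abs(perm) >= abs(obs)).mean()
print(round(obs, 4), p_value)
```

Because the permutation operates at the subject level, the within-subject correlation structure is preserved under the null, which is what makes the test valid for correlated binary outcomes.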
- Why Not Capitalism? Jason Brennan: 'Most economists believe capitalism is a compromise with selfish human nature. As Adam Smith put it, "It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest." Capitalism works better than socialism, according to this thinking, only because we are not kind and generous enough to make socialism work. If we were saints, we would be socialists. In Why Not Capitalism?, Jason Brennan attacks this widely held belief, arguing that capitalism would remain the best system even if we were morally perfect. Even in an ideal world, private property and free markets would be the best way to promote mutual cooperation, social justice, harmony, and prosperity. Socialists seek to capture the moral high ground by showing that ideal socialism is morally superior to realistic capitalism. But, Brennan responds, ideal capitalism is superior to ideal socialism, and so capitalism beats socialism at every level. Clearly, engagingly, and at times provocatively written, Why Not Capitalism? will cause readers of all political persuasions to re-evaluate where they stand vis-à-vis economic priorities and systems—as they exist now and as they might be improved in the future.'
- An argument for ψ-ontology in terms of protective measurements Shan Gao: The ontological model framework provides a rigorous approach to address the question of whether the quantum state is ontic or epistemic. When considering only conventional projective measurements, auxiliary assumptions are always needed to prove the reality of the quantum state in the framework. For example, the Pusey-Barrett-Rudolph theorem is based on an additional preparation independence assumption. In this paper, the author gives a new proof of ψ-ontology in terms of protective measurements in the ontological model framework. It is argued that the proof need not rely on auxiliary assumptions, and also applies to deterministic theories such as the de Broglie-Bohm theory. In addition, the author gives a simpler argument for ψ-ontology beyond the framework, which is only based on protective measurements and a weaker criterion of reality. The argument may also be appealing to those who favor an anti-realist view of quantum mechanics.
- Depth and Explanation in Mathematics Marc Lange: This paper argues that in at least some cases, one proof of a given theorem is deeper than another by virtue of supplying a deeper explanation of the theorem — that is, a deeper account of why the theorem holds. There are cases of scientific depth that also involve a common abstract structure explaining a similarity between two otherwise unrelated phenomena, making their similarity no coincidence and purchasing depth by answering why questions that separate, dissimilar explanations of the two phenomena cannot correctly answer. The connections between explanation, depth, unification, power, and coincidence in mathematics and science are compared.
Joseph Halpern and Judea Pearl draw upon structural equation models to develop an attractive analysis of ‘actual cause’. Their analysis is designed for the case of deterministic causation. It is shown here that their account can be naturally extended to provide an elegant treatment of probabilistic causation.
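A minimal structural-equation sketch (our toy, not Halpern and Pearl's full definition) shows the kind of model the analysis operates on: a disjunctive forest-fire model, evaluated under interventions, with a naive but-for test. The full Halpern-Pearl account of 'actual cause' refines this test with contingencies on other variables, which is what lets it handle preemption and overdetermination.

```python
# Structural equation for the effect: FF = L or M (disjunctive model).
def fire(lightning, match):
    return lightning or match

# The actual context: lightning struck, the match was not lit.
context = {"lightning": True, "match": False}

def but_for(var):
    """Naive but-for test: does intervening to flip `var`,
    holding the other variable at its actual value, change the effect?"""
    actual = fire(**context)
    flipped = dict(context, **{var: not context[var]})
    return actual != fire(**flipped)

print(but_for("lightning"))  # True: lightning is a but-for cause here
print(but_for("match"))      # False: the unlit match makes no difference
```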
given level of confidence, the authors' method is not only faster in many cases but also requires less information about the system, namely, only the minimum transition probability that occurs in the Markov chain. In addition, the method can be generalised to unbounded quantitative properties such as mean-payoff bounds.
discussed how effective information relates to information gain, Shannon and mutual information. The author concludes by discussing some broader implications.
challenges in Kearns and Star 2013). Here they will first provide updated versions of their earlier concerns, since they mostly still seem pertinent. Then the authors will turn to provide a fresh response to his account of reasons that focuses on the notion of a weighing explanation. On Broome’s account, 'pro tanto' reasons are facts cited in weighing explanations of what one ought to do; facts that have weights. It is not clear what the idea that pro tanto reasons have weights really amounts to. While recognizing that a simple analogy with putative non-normative weighing explanations involving physical weights initially seems helpful, the authors argue that the notion of a weighing explanation, especially a normative weighing explanation, does not ultimately stand up to scrutiny.
particular. Rather, it is suggested that both ideas have something to offer a scientific understanding of consciousness, as long as they are not dressed up as solutions to illusory metaphysical problems. As for human-level AI, we must await its development before we can decide whether or not to ascribe consciousness to it.
order unscented transforms and Gauss–Hermite quadrature rule. They compare the performance of the methods in two simulated experiments: a univariate toy model as well as tracking of a maneuvering target. In the experiments, the authors also compare against approximate likelihood estimates obtained by particle filtering and extended Kalman filtering based methods. The experiments suggest that the higher-order unscented transforms may in some cases provide more accurate estimates.
a formal condition to which the separability principle gives rise, with the condition of “outcome independence”. If this proof is sound, then Howard’s claim gains strong support, in that “outcome independence” and “parameter independence”, where the latter arises from Howard’s locality principle, have been shown by [Jarrett, 1984] to conjunctively constitute a necessary condition
for the derivation of the Bell inequalities [Clauser and Horne, 1974]. However, Howard’s proof has been contested in a number of ways. In this essay the author will discuss several criticisms of Howard’s equivalence proof that focus on the sufficiency of the separability principle for outcome independence. Paul will then argue that, while none of these criticisms succeeds, they do constrain the possible form of Howard’s argument. To do so, he will first introduce both the separability principle and outcome independence in the context of EPR-like experiments before discussing the individual arguments.
science and physics, and Pylkkö’s aconceptual view of the mind. Finally, Bohm’s early analogies will be briefly considered in relation to the analogies between quantum processes and the mind he
proposed in his later work.
central control. This paper applies a simple theoretical model to demonstrate that a series of local interactions between individuals is a simple yet robust mechanism for realizing stable proportions. In this study, alternative symmetric interactions between individuals are proposed as a method for fulfilling target proportions. The authors’ results show that asymmetric properties in the local interactions are crucial for adaptive regulation, which depends on group size and overall density. The foremost advantage of this strategy is that no global information is required by any individual.
- The Fine-Tuning Argument - Klaas Landsman: Are the laws of nature and our cosmos delicately fine-tuned for life to emerge, as appears to be the case?
recognized subdiscipline of psychology and histories of psychology serve to inculcate students into psychology as well as to establish and maintain the authority of research programs (Ash 1983; Leahey
1992; Samelson 1997; Samelson 2000). We should not be surprised, therefore, to find evolutionary psychologists appealing to the history of the social sciences when they argue for the necessity and value of their nascent discipline. In this paper the author will examine how evolutionary psychologists use the history of science to create space for their new discipline. In particular, he is interested in how they employ a particular account of the origins of American cultural anthropology at the beginning of the twentieth century. Evolutionary psychologists offer a particular history of cultural anthropology as an argument for why we now need evolutionary psychology. John will show that each discipline (EP and anthropology) attempted to create space for itself by defining a central term, “culture.” In defining “culture,” each discipline also defined its scientific program: defining the nature of scientific inquiry by defining the central object of study. These definitional moves are not necessarily explicit in the argument, however; rather than arguments about definition, these scientists are offering an argument by definition. An argument by definition should not be taken to be an argument about (or from) a definition. In some sense, an argument by definition does not appear to be an argument at all: The key definitional move is simply stipulated, as if it were a natural step along the way of justifying some other claim…. One cannot help noticing an irony here. Definition of terms is a key step in the presentation of argument, and yet this critical step is taken by making moves that are not themselves argumentative at all. They are not claims supported by reasons and intended to justify adherence by critical listeners. Instead they are simply proclaimed as if they were indisputable facts.
incorporation of a particular interpretation in the quantum formalism. It is pointed out here that Crull is mistaken about decoherence and tacitly assumes some kind of interpretation of the quantum formalism.
GR-desideratum is forced upon them. It is shown how the conceptual problems dissolve when such a desideratum is relaxed. In the end, it is suggested that a similar strategy might mitigate some major issues such as the problem of time or the embedding of quantum non-locality into relativistic spacetimes.
words, one wants to emphasize the parameter value p = 1/2. To do so, the concept of weighted differential entropy introduced in [ ] is used when the frequency γ needs to be emphasized. It was found that a weight of the suggested form does not change the asymptotic form of the Shannon, Rényi, Tsallis, and Fisher entropies, but does change the constants. The main term of the weighted Fisher information is changed by a constant which depends on the distance between the true frequency and the value one wants to emphasize.
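The basic notion at work, a weighted entropy in the style of Belis and Guiasu, can be sketched for the discrete case. The weight values below are arbitrary illustrations; the paper's actual weight function is not reproduced here.

```python
import math

def weighted_shannon_entropy(probs, weights):
    """Belis-Guiasu-style weighted entropy: -sum_i w_i * p_i * log(p_i).
    With all weights equal to 1 this reduces to ordinary Shannon entropy."""
    return -sum(w * p * math.log(p) for w, p in zip(weights, probs) if p > 0)

p = [0.5, 0.5]
# Uniform weights recover the ordinary Shannon entropy, log 2:
h_plain = weighted_shannon_entropy(p, [1.0, 1.0])
# A weight emphasizing the first outcome rescales its contribution:
h_weighted = weighted_shannon_entropy(p, [2.0, 1.0])
```

As in the abstract's asymptotic statements, weighting rescales contributions (here by constants) without changing the functional form of the entropy.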
and collaborators. It is then concluded that the complex and often troubled relations between science and society are critical to both parties, and argued that the philosophy and history
of science can help to make this relationship work.
of Bohm’s Theory, and then briefly reviews some of the claims advanced on behalf of the ‘causal’ version by its proponents. A number of ontological or interpretive accounts of the wave function in Bohmian mechanics are then addressed in detail, including i) configuration space, ii) multi-field, iii) nomological, and iv) dispositional approaches. The main objection to each account is reviewed, namely i) the ‘problem of perception’, ii) the ‘problem of communication’, iii) the ‘problem of temporal laws’, and iv) the ‘problem of under-determination’. It is then shown that a version of dispositionalism overcomes the under-determination problem while providing neat solutions to the other three problems. A pragmatic argument is thus furnished for the use of dispositions in the interpretation of the theory more generally. The paper ends on a more speculative note by suggesting ways in which a dispositionalist interpretation of the wave function is additionally able to shed light upon some of the claims of the proponents of the causal version of Bohmian mechanics.
a principal connection on that bundle such that the holonomy map corresponds to the holonomies of that connection. Barrett also provided one sense in which this “recovery theorem” yields a unique bundle, up to isomorphism. Here we show that something stronger is true: with an appropriate definition of isomorphism between generalized holonomy maps, there is an equivalence of categories between the category whose objects are generalized holonomy maps on a smooth, connected manifold and whose arrows are holonomy isomorphisms, and the category whose objects are principal connections on principal bundles over a smooth, connected manifold. This result clarifies, and somewhat improves upon, the sense of “unique recovery” in Barrett’s theorems; it also makes precise a sense in which there is no loss of structure involved in moving from a principal bundle formulation of Yang-Mills theory to a holonomy, or “loop”, formulation.
Cognition” (February 2015), the authors of this discussion felt a collective sense of dismay. Perusing the table of contents, they were struck by the fact that among the 19 authors listed for the 12 articles, only one was female. While the substantive content of the issue may persuade them that the face of cognition is changing, it appears that changes in gender distribution are not to be expected. The face of cognitive science will remain unequivocally male. According to recent statistics (NSF, 2013), more than 50% of doctorates awarded in cognitive psychology and psycholinguistics went to women, and the same holds for neuropsychology and experimental psychology. A clear implication is that women scientists should play a significant role in the future of cognitive science and cognitive neuroscience. The authors ask, then: why would the journal present an image of this science’s future as envisioned largely by male scientists?
on new evidence, the agents begin with no meaningful language or expectations, then come to have expectations conditional on their descriptions as they evolve meaningful descriptions for the purpose of successful prediction. The model, then, provides a simple but concrete example of how the process of evolving a descriptive language suitable for inquiry might also provide agents with effective priors.
algorithms. To address this issue, the author studies the problem from the network designer’s perspective. More specifically, he first proposes a distributed weighted average consensus algorithm that is robust to Byzantine attacks. It is shown that, under reasonable assumptions, the global test statistic for detection can be computed locally at each node using the proposed consensus algorithm. Then, the author exploits the statistical distribution of the nodes’ data to devise techniques for mitigating the influence of data-falsifying Byzantines on the distributed detection system. Since some parameters of the statistical distribution of the nodes’ data might not be known a priori, a learning-based technique is proposed to enable an adaptive design of the local fusion or update rules.
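One standard way to make a consensus step robust to Byzantine values is a trimmed-mean update; the sketch below illustrates that generic idea and is not the author's actual algorithm, whose update rule is not specified in this summary.

```python
def trimmed_mean_update(own, neighbor_values, f):
    """One robust consensus step: discard the f largest and f smallest
    received neighbor values before averaging with one's own value.
    (A standard Byzantine-mitigation idea, shown for illustration.)"""
    kept = sorted(neighbor_values)[f:len(neighbor_values) - f]
    vals = kept + [own]
    return sum(vals) / len(vals)

# Honest values cluster near 1.0; one Byzantine node reports 1000.0.
# A plain average would be pulled to ~250, the trimmed update stays near 1.
robust = trimmed_mean_update(1.0, [0.9, 1.1, 1000.0], f=1)
```

The trim parameter `f` must be at least the number of Byzantine neighbors for the filtering to help, which is why such schemes assume a bound on the attacker count.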
science. It is argued that their purported confirmation largely relies on a methodology that depends on premises inconsistent with the claim that people are Bayesian about learning and inference. Bayesian models in cognitive science derive their appeal from the normative claim that the modeled inference is in some sense rational. Standard accounts of the rationality of Bayesian inference imply that an agent selects the option that maximizes the posterior expected utility. Experimental confirmation of the models, however, has been claimed on the basis of groups of agents that “probability match” the posterior. Probability matching constitutes support for the Bayesian claim only if additional unobvious and untested (but testable) assumptions are invoked. The alternative strategy of weakening the underlying notion of rationality no longer distinguishes the Bayesian model uniquely. A new account of rationality, either for inference or for decision-making, is required to successfully confirm Bayesian models in cognitive science.
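The gap between maximizing and probability matching can be made concrete with a two-option toy posterior (the 0.7/0.3 numbers are an invented illustration, not data from the paper):

```python
import random

random.seed(0)
posterior = {"A": 0.7, "B": 0.3}   # hypothetical posterior over two options

def maximize(post):
    """Posterior-expected-utility maximizer: always pick the modal option."""
    return max(post, key=post.get)

def probability_match(post):
    """Probability matcher: choose options in proportion to the posterior."""
    return "A" if random.random() < post["A"] else "B"

# If "A" is in fact correct with probability 0.7, a maximizer is right
# 70% of the time, while a matcher is right only
# 0.7*0.7 + 0.3*0.3 = 58% of the time under this posterior.
```

This is why observing groups that match the posterior, rather than individuals who maximize it, sits uneasily with the standard rationality story the abstract describes.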
and expressive (non-financial) ones and show how non-financial motivations can influence the reaction to unsatisfactory investment performance
or observer) at the fundamental level, but also that applications of the formalism to concrete situations (e.g., measurements) should not require any input not contained in the description of the situation at hand at the fundamental level. The authors' assertion is that the Consistent Histories formalism does not meet the second criterion. It is also argued that the so-called second measurement problem, i.e., the inability to explain how an experimental result is related to a property possessed by the measured system before the measurement took place, is only a pseudo-problem. As a result, the authors reject
the claim that the capacity of the Consistent Histories formalism to solve it should count as an advantage over other interpretations.
desires. Postselection can be useful in many applications where the cost of getting the wrong event is implicitly high. However, unless this cost is specified exactly, one might conclude that discarding
all data is optimal. Here the authors analyze the optimal decision rules and quantum measurements in a decision theoretic setting where a pre-specified cost is assigned to discarding data. They also relate
their formulation to previous approaches which focus on minimizing the probability of indecision.
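The classical (non-quantum) core of such a decision rule is Chow's reject option: with 0/1 misclassification loss and a fixed cost for abstaining, abstain exactly when no posterior is confident enough. The sketch below is an illustration of that standard rule, not the authors' quantum formulation.

```python
def decide_with_rejection(posteriors, reject_cost):
    """Chow-style reject rule: with 0/1 loss and cost `reject_cost` for
    discarding the data, abstain iff the top posterior is below
    1 - reject_cost; otherwise return the index of the best hypothesis."""
    best = max(posteriors)
    if best < 1.0 - reject_cost:
        return "abstain"
    return posteriors.index(best)

decide_with_rejection([0.55, 0.45], reject_cost=0.3)   # too uncertain: abstain
decide_with_rejection([0.90, 0.10], reject_cost=0.3)   # confident: pick 0
```

Note the behavior at the extreme: with `reject_cost = 0` the rule abstains on any non-certain posterior, which mirrors the abstract's warning that without an explicit cost, discarding all data can look optimal.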
quantities correspond to objective features of the physical world, and are objectively related to measurable quantities like relative frequencies of physical events based on finite samples — no matter whether
the world is objectively deterministic or indeterministic.
in 1974 for significance testing in the simple vs. composite hypotheses case. In this case, classical frequentist and Bayesian hypothesis tests are irreconcilable, as emphasized
by Lindley’s paradox, Berger & Sellke in 1987, and many others. However, Dempster showed that the PLR (with inner threshold 1) is equal to the frequentist p-value in the simple Gaussian case. In 1997, Aitkin extended this result by adding a nuisance parameter and showing its asymptotic validity under more general distributions. Here it is extended to a reconciliation between the PLR
and a frequentist p-value for a finite sample, through a framework analogous to that of Stein’s theorem, in which a credible (Bayesian) domain equals a confidence (frequentist) domain. This general reconciliation result only concerns simple vs. composite hypothesis testing. The measures proposed by Aitkin in 2010 and Evans in 1997 have interesting properties and extend Dempster’s PLR, but only by adding a nuisance parameter. Here, two extensions of the PLR concept to the general composite vs. composite hypothesis test are proposed. The first extension can be defined for improper priors as soon as the posterior is proper. The second extension emerges from a new Bayesian-type Neyman–Pearson lemma and emphasizes, from a Bayesian perspective, the role of the LR as a discrepancy variable for hypothesis testing.
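Lindley's paradox itself is easy to exhibit numerically in the simple Gaussian case. The prior scale and sample sizes below are assumptions chosen for illustration only.

```python
import math

def lindley_demo(n, sigma=1.0, tau=1.0, z=2.0):
    """Two-sided test of H0: theta = 0 from a sample mean held at
    z*sigma/sqrt(n), so the frequentist p-value stays fixed (~0.046)
    while the Bayes factor in favor of H0 grows with n: Lindley's
    paradox. Prior under H1: theta ~ N(0, tau^2)."""
    xbar = z * sigma / math.sqrt(n)
    p_value = math.erfc(z / math.sqrt(2))   # two-sided p for z-score z
    s0 = sigma**2 / n                       # variance of xbar under H0
    s1 = s0 + tau**2                        # marginal variance under H1
    bf01 = math.sqrt(s1 / s0) * math.exp(-0.5 * xbar**2 * (1 / s0 - 1 / s1))
    return p_value, bf01

p_small, bf_small = lindley_demo(10)       # p ~ 0.046, BF01 < 1: leans to H1
p_large, bf_large = lindley_demo(10000)    # same p, BF01 > 10: strongly favors H0
```

The same "significant" z-score thus rejects H0 for the frequentist at every n, while the Bayesian verdict flips toward H0 as n grows, which is the irreconcilability the abstract refers to.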
S-divergence estimators under parameter restrictions imposed by the null hypothesis. An illustration in the context of the normal model is also presented.
reluctant to judge true as to judge false, even while possessing all potentially relevant information. But by the time I became an instructor, I had become well acquainted
with another phenomenon that similarly threatens this truth-conditional definition of meaning: the phenomenon of vagueness. So I added the class of vague sentences to the discussion.
That both vague and presuppositional sentences threaten this fundamental definition shows the importance of their study for the domain of semantics. Under the supervision of Orin Percus, I therefore decided to approach the two phenomena jointly in my M.A. dissertation. By applying the tools developed for analyzing presupposition in truth-conditional semantics to the study of vagueness, I showed that it was possible to give a novel, sensible account of the sorites paradox that has puzzled philosophers since Eubulides first stated it more than 2000 years ago. This result illustrates how the joint study of two phenomena that were previously approached separately can bring new insights to long-discussed problems. This thesis aims at pursuing the joint investigation of the two phenomena, by
focusing on the specific truth-value judgments that they trigger. In particular, the theoretical literature of the last century rehabilitated the study of non-bivalent logical systems that were already prefigured in Antiquity and that have non-trivial consequences for truth-conditional semantics. In parallel, an experimental literature has been growing constantly since the beginning of the new century, collecting truth-value judgments of subjects on a variety of topics. The work presented here features both aspects: it investigates theoretical systems that jointly address issues raised by
vagueness and presupposition, and it presents experimental methods that test the predictions of the systems in regard to truth-value judgments. The next two sections of this chapter are devoted to the presentation of my objects of study, namely vagueness and presupposition; and the last section of this chapter exposes the motivations that underlie my project of jointly approaching the two phenomena from a truth-functional perspective. Because the notion of a truth-value judgment is at the core of the dissertation, I have to make clear what I mean by bivalent and non-bivalent truth-value judgments. When I say that a sentence triggers bivalent truth-value judgments, I mean that in any situation, a sufficiently informed and competent speaker would confidently judge the sentence either “True” or “False”.
When I say that a sentence triggers non-bivalent truth-value judgments, I mean that there are situations where a competent speaker, even perfectly informed, would prefer to judge the sentence with a label different from “True” and “False”. In this chapter, I will remain agnostic as to which labels are actually preferred for each phenomenon, but the next chapters are mostly devoted to this question.
puzzles below. (None of the puzzles has a universally accepted solution, and they are aware of no suggested solutions that apply to all of the puzzles.) The authors will use the puzzles to motivate two
theses concerning infinite decisions. In addition to providing a unified resolution of the puzzles, the theses have important consequences for decision theory wherever infinities arise. Because they show that Dutch book arguments have no force in infinite cases, the theses provide evidence that reasonable utility functions may be unbounded and that reasonable credence functions need not be either countably additive or conglomerable (a term to be explained in section 3). The theses show that when infinitely many decisions are involved, the difference between making the decisions simultaneously and making them sequentially can be the difference between riches and ruin. Finally, the authors reveal a new way in which the ability to make binding commitments can save perfectly rational agents from sure losses.
nonzero probability, even if it is justified by an infinite probabilistic regress. The authors thought this to be an adequate rebuttal of foundationalist claims that probabilistic regresses must lead either to an indeterminate, or to a determinate but zero probability. In a comment, Frederik Herzberg has argued that our counterexamples are of a special kind, being what he calls ‘solvable’. In the present reaction
the authors investigate what Herzberg means by solvability. They discuss the advantages and disadvantages of making solvability a sine qua non, and they air their misgivings about Herzberg’s suggestion that the notion of solvability might help the foundationalist. They further show that the canonical series arising from an infinite chain of conditional probabilities always converges, and also that the sum equals the required unconditional probability if a certain infinite product of conditional probabilities vanishes.
Yet, there is some scepticism in the profession concerning the prospects of GPoS. In a seminal piece, Philip Kitcher (2013) noted that the task of GPoS, as conceived by Carl Hempel and many who followed him, was to offer explications of major metascientific concepts such as confirmation, theory, explanation, simplicity etc. These explications were supposed “to provide general accounts of them by specifying the necessary conditions for their application across the entire range of possible cases” (2013, 187). Yet, Kitcher notes, “Sixty years on, it should be clear that the program
has failed. We have no general accounts of confirmation, theory, explanation, law, reduction, or causation that will apply across the diversity of scientific fields or across different periods of time” (2013, 188). There are two chief reasons for this alleged failure. The first relates to the diversity of scientific practice: the methods employed by the various fields of natural science are very diverse and field-specific. As Kitcher notes, “Perhaps there is a ‘thin’ general conception that picks out what is common to the diversity of fields, but that turns out to be too attenuated to be of any great use”. The second reason relates to the historical record of the sciences: the ‘mechanics’ of major scientific changes in different fields of inquiry is diverse and involves factors that cannot be readily accommodated by a general explication of the major metascientific concepts (cf. 2013, 189). Though Kitcher does not make this suggestion explicitly, the trend seems to be to move from GPoS to the philosophies of the individual sciences and to relocate whatever content GPoS is supposed to have to those philosophies. I think scepticism or pessimism about the prospects of GPoS is unwarranted. And I also think that there can be no philosophies of the various sciences without GPoS.
unclear. It is therefore particularly interesting to study this correlation in analytical models. In previous work the authors investigated the behavior of the Gini inequality index in kinetic models in dependence on several parameters which define the binary interactions and the taxation and redistribution processes: saving propensity, taxation rates gap, tax evasion rate, welfare means-testing, etc. Here, they check the correlation of mobility with inequality by analyzing the dependence of mobility on the same parameters. According to several numerical solutions, the correlation is
confirmed to be negative.
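The Gini index used in such kinetic wealth-exchange models has a simple pairwise definition, sketched below with invented toy wealth vectors (this is the standard index, not the authors' kinetic model itself).

```python
def gini(wealths):
    """Gini index of a wealth distribution: the mean absolute difference
    over all ordered pairs, normalized by twice the mean wealth.
    0 means perfect equality; (n-1)/n is the maximum for n agents."""
    n = len(wealths)
    mean = sum(wealths) / n
    diff_sum = sum(abs(x - y) for x in wealths for y in wealths)
    return diff_sum / (2 * n * n * mean)

g_equal = gini([1, 1, 1, 1])      # 0.0: perfect equality
g_concentrated = gini([0, 0, 0, 4])   # 0.75: one agent holds everything
```

In the models described above, this index is recomputed on the simulated wealth distribution as the interaction and taxation parameters vary, and then compared against the corresponding mobility measure.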