On the Brittleness of Bayesian Inference With the advent of high-performance computing, Bayesian methods are becoming increasingly popular tools for the quantification of uncertainty throughout science and industry. Since these methods can impact the making of sometimes critical decisions in increasingly complicated contexts, the sensitivity of their posterior conclusions with respect to the underlying models and prior beliefs is a pressing question to which there currently exist positive and negative answers. We report new results suggesting that, although Bayesian methods are robust when the number of possible outcomes is finite or when only a finite number of marginals of the data-generating distribution are unknown, they could be generically brittle when applied to continuous systems (and their discretizations) with finite information on the data-generating distribution. If closeness is defined in terms of the total variation (TV) metric or the matching of a finite system of generalized moments, then (1) two practitioners who use arbitrarily close models and observe the same (possibly arbitrarily large amount of) data may reach opposite conclusions; and (2) any given prior and model can be slightly perturbed to achieve any desired posterior conclusion. The mechanism causing brittleness/robustness suggests that learning and robustness are antagonistic requirements, which raises the possibility of a missing stability condition when using Bayesian inference in a continuous world under finite information.
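As an illustration of the mechanism the abstract gestures at, here is a minimal toy sketch of my own (not taken from the paper): a prior perturbation of small total-variation size, placed near the peak of an increasingly concentrated likelihood, eventually dominates the posterior. The Gaussian model, the bump placement and all numbers are invented for illustration, and the narrow bump is treated as a point mass for simplicity.

```python
# Toy illustration of posterior sensitivity to TV-small prior perturbations.
# Two priors within eps of each other in total variation are updated on the
# same data; as the sample size grows, the eps-sized bump placed near (but
# not at) the likelihood peak captures most of the posterior mass.
from math import sqrt, exp, pi

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

eps   = 0.01          # TV distance between the two priors
sigma = 1.0           # observation noise
ybar  = 0.0           # observed sample mean (sufficient statistic)

for n in [10_000, 1_000_000, 100_000_000]:
    se = sigma / sqrt(n)                  # scale of the likelihood in theta
    theta_star = ybar + 2 * se            # where the prior bump is placed

    # Marginal likelihood of ybar under the N(0,1) base prior ...
    m0 = normal_pdf(ybar, 0.0, sqrt(1.0 + se**2))
    # ... and under the (nearly point-mass) bump at theta_star.
    m1 = normal_pdf(ybar, theta_star, se)

    # Posterior weight of the bump component under the perturbed prior.
    w1 = eps * m1 / ((1 - eps) * m0 + eps * m1)

    # Posterior means under the base prior and the perturbed prior.
    mean_base = ybar / (1.0 + se**2)                   # conjugate update
    mean_pert = (1 - w1) * mean_base + w1 * theta_star
    print(f"n={n:>11,}  bump weight={w1:.2f}  "
          f"shift between posterior means={abs(mean_pert - mean_base)/se:.1f} s.e.")
# As n grows, the eps-bump takes over the posterior and pulls the estimate a
# couple of standard errors away: a toy version of the sensitivity mechanism
# described in the abstract, in which more data makes matters worse.
```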
Quantum leap: Physicists can better study the quantum behaviour of objects on the atomic and macro-scale Erwin Schrödinger was an interesting man. Not only did he conceive a most imaginative way to (theoretically) kill a cat, he was in a constant state of superposition between monogamy and not. He shared a household with one wife and one mistress. (Although he got into trouble at Oxford for this unconventional lifestyle, it didn’t pose a problem in largely Catholic Dublin.) Just like the chemist Albert Hofmann, who tried LSD (lysergic acid diethylamide) on himself first, Schrödinger might have pondered how it would feel for a person to be in a genuine state of quantum superposition. Or even how a cat might feel. In principle, quantum mechanics would certainly allow for Schrödinger, or any of us, to enter a state of quantum superposition. That is, according to quantum theory, a large object could be in two quantum states at the same time. It is not just for subatomic particles. Everyday experience, of course, indicates that big objects behave classically. In special labs and with a lot of effort, we can observe the quantum properties of photons or electrons. But even the best labs and greatest efforts are yet to find them in anything approaching the size of a cat. Could they be found? The question is more than head-in-the-clouds philosophy. One of the most important experimental questions in quantum physics is whether or not there is a point or boundary at which the quantum world ends and the classical world begins. A straightforward approach to clarifying this question is to experimentally verify the quantum properties of ever-larger macroscopic objects. Scientists find these properties in subatomic particles when they confirm that the particles sometimes behave as a wave, with characteristic peaks and dips. Likewise, lab set-ups based on the principle of quantum interference, using many mirrors, lasers and lenses, have successfully found wave behaviour in macromolecules that are more than 800 atoms in size. Other techniques could go larger. Called atom interferometers, they probe atomic matter waves in the way that conventional interferometers measure light waves. Specifically, they divide the atomic matter wave into two separate wave packets, and recombine them at the end. The sensitivity of these devices is related to how far apart they can perform this spatial separation. Until now, the best atomic interferometers could put the wave packets about 1 centimetre apart. In this issue, physicists demonstrate an astonishing advance in this regard. They show quantum interference of atomic wave packets that are separated by 54 centimetres. Although this does not mean that we have an actual cat in a state of quantum superposition, at least a cat could now comfortably take a nap between the two branches of a superposed quantum state. (No cats were harmed in the course of these experiments.) Making huge molecules parade their wave nature and constructing atom interferometers that can separate wave packets by half a metre are extraordinary experimental achievements. And the technology coming from these experiments has many practical implications: atom interferometers splendidly measure acceleration, which means that they could find uses in navigation. And they would make excellent detectors for gravitational waves, because they are not sensitive to seismic noise. 
Schrödinger was more of a philosopher than an engineer, so it is plausible that he would not have taken that much interest in the practical ramifications of his theory. However, he would surely have clapped his hands at the prospect that experimenters could one day induce large objects to have quantum properties. And there are plenty of proposals for how to ramp up the size of objects with proven quantum behaviour: a microscopic mirror in a quantum superposition, created through interaction with a photon, would involve about 10^14 atoms. And, letting their imaginations run wild, researchers have proposed a method to do the same with small biological structures such as viruses. To be clear, science is not close to putting a person or a cat into quantum superposition. Many say that, because of the way large objects interact with the environment, we will never be able to measure a person’s quantum behaviour. But it’s Christmas, so indulge us. If we could, and if we could be aware of such a superposition state, then how would we feel? Because ‘feeling’ would amount to measuring the wave function of the object, and because measuring causes the wave function to collapse, it should really feel like, well, nothing — or perhaps just a grin.
Quantum Mechanics and Kant’s Phenomenal World Quantum indeterminism seems incompatible with Kant’s defense of causality in his Second Analogy. The Copenhagen interpretation also takes quantum theory as evidence for anti-realism. This article argues that the law of causality, as transcendental, applies only to the world as observable, not to hypothetical (unobservable) objects such as quarks, detectable only by high energy accelerators. Taking Planck’s constant and the speed of light as the lower and upper bounds of observability provides a way of interpreting the observables of quantum mechanics as empirically real even though they are transcendentally (i.e., preobservationally) ideal.
Do we have free will? Researchers test mechanisms involved in decision-making + New brain research refutes results of earlier studies that cast doubts on free will Our choices seem to be freer than previously thought. Using computer-based brain experiments, researchers from Charité - Universitätsmedizin Berlin studied the decision-making processes involved in voluntary movements. The question was: Is it possible for people to cancel a movement once the brain has started preparing it? The conclusion the researchers reached was: Yes, up to a certain point—the 'point of no return'. The results of this study have been published in the journal PNAS. The background to this new set of experiments lies in the debate regarding conscious will and determinism in human decision-making, which has attracted researchers, psychologists, philosophers and the general public, and which has been ongoing since at least the 1980s. Back then, the American researcher Benjamin Libet studied the nature of cerebral processes of study participants during conscious decision-making. He demonstrated that conscious decisions were initiated by unconscious brain processes, and that a wave of brain activity referred to as a 'readiness potential' could be recorded even before the subject had made a conscious decision. How can the unconscious brain processes possibly know in advance what decision a person is going to make at a time when they are not yet sure themselves? Until now, the existence of such preparatory brain processes has been regarded as evidence of 'determinism', according to which free will is nothing but an illusion, meaning our decisions are initiated by unconscious brain processes, and not by our 'conscious self'. In conjunction with Prof. Dr. Benjamin Blankertz and Matthias Schultze-Kraft from Technische Universität Berlin, a team of researchers from Charité's Bernstein Center for Computational Neuroscience, led by Prof. Dr. John-Dylan Haynes, has now taken a fresh look at this issue. Using state-of-the-art measurement techniques, the researchers tested whether people are able to stop planned movements once the readiness potential for a movement has been triggered. "The aim of our research was to find out whether the presence of early brain waves means that further decision-making is automatic and not under conscious control, or whether the person can still cancel the decision, i.e. use a 'veto'," explains Prof. Haynes. As part of this study, researchers asked participants to enter into a 'duel' with a computer, and then monitored their brain waves throughout the duration of the game using electroencephalography (EEG). A specially-trained computer was then tasked with using these EEG data to predict when a subject would move, the aim being to out-maneuver the player. This was achieved by manipulating the game in favor of the computer as soon as brain wave measurements indicated that the player was about to move. If subjects are able to evade being predicted based on their own brain processes, this would be evidence that control over their actions can be retained for much longer than previously thought, which is exactly what the researchers were able to demonstrate. "A person's decisions are not at the mercy of unconscious and early brain waves. They are able to actively intervene in the decision-making process and interrupt a movement," says Prof. Haynes. "Previously people have used the preparatory brain signals to argue against free will. Our study now shows that the freedom is much less limited than previously thought.
However, there is a 'point of no return' in the decision-making process, after which cancellation of movement is no longer possible." Further studies are planned in which the researchers will investigate more complex decision-making processes. Part 2, second link: When people find themselves having to make a decision, the assumption is that the thoughts, or voice that is the conscious mind at work, deliberate, come to a decision, and then act. This is because for most people, that’s how the whole process feels. But back in the early 1980’s, an experiment conducted by Benjamin Libet, a neuroscientist with the University of California, cast doubt on this idea. He and his colleagues found in watching EEG readings of volunteers who had been asked to make a spontaneous movement (it didn’t matter what kind) that brain activity prior to the movement indicated that the subconscious mind came to a decision about what movement to make before the person experienced the feeling of making the decision themselves. This, Libet argued, showed that people don’t have nearly the degree of free will regarding decision making as has been thought. Since then, no one has really refuted the theory. Now new research by a European team has found evidence that the brain activity recorded by Libet and others is due to something else, and thus, as they write in their paper published in the Proceedings of the National Academy of Sciences, that people really do make decisions in their conscious mind. To come to this conclusion, the team looked at how the brain responds to other decision forcing stimuli, such as what to make of visual input. In such instances, earlier research has shown that the brain amasses neural activity in preparation for a response, giving us something to choose from. Thus the response unfolds as the data is turned into imagery our brains can understand and we then interpret what we see based on what we’ve learned in the past. The researchers suggest that choosing to move an arm or leg or finger, works the same way. Our brain gets a hint that we are contemplating making a movement, so it gets ready. And it’s only when a critical mass occurs that decision making actually takes place. To test this theory, the team built a computer model of what they called a neural accumulator, then watched as it behaved in a way that looked like it was building up to a potential action. Next, they repeated the original experiment conducted by Libet.
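The accumulator idea lends itself to a few lines of code. Below is a minimal illustrative sketch of my own (not the authors' actual model) of a noisy neural accumulator drifting toward a threshold; all parameters are arbitrary.

```python
# Illustrative sketch: spontaneous fluctuations plus a weak drift accumulate
# until a threshold is crossed. The crossing plays the role of the decision
# to move, and the build-up before it resembles a "readiness potential".
import numpy as np

rng = np.random.default_rng(0)

def accumulate(threshold=1.0, drift=0.0005, noise=0.02, dt=1.0, max_steps=20_000):
    """Return the trajectory of a noisy accumulator until it hits threshold."""
    x, trace = 0.0, []
    for _ in range(max_steps):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        trace.append(x)
        if x >= threshold:
            break
    return np.array(trace)

trace = accumulate()
print(f"threshold crossed after {len(trace)} steps")
# Averaging many such traces, time-locked to the crossing, reproduces a slow
# build-up before "movement onset": activity that looks predictive of the
# decision but is partly just noise drifting upward toward the threshold.
```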
Virtue Epistemology, Agency and a Kantian 'Epistemic Categorical Imperative' Virtue epistemologists hold that knowledge results from the display of epistemic virtues – openmindedness, rigor, sensitivity to evidence, and the like. But epistemology cannot rest satisfied with a list of the virtues. What is wanted is a criterion for being an epistemic virtue. An extension of a formulation of Kant’s categorical imperative yields such a criterion. Epistemic agents should think of themselves as, and act as, legislating members of a realm of epistemic ends: they make the rules, devise the methods, and set the standards that bind them. The epistemic virtues are the traits of intellectual character that equip them to do so. Students then not only need to learn the standards, methods, and rules of the various disciplines, they also need to learn to think of themselves as, and how to behave as, legislating members of epistemic realms who are responsible for what they and their fellows believe. This requires teaching them to respect reasons, and to take themselves to be responsible for formulating reasons their peers can respect.
The Ockham Efficiency Theorem for Stochastic Empirical Methods: Does Ockham’s Razor Beg the Question Against Truth? Ockham’s razor is the principle that, all other things being equal, scientists ought to prefer simpler theories. In recent years, philosophers have argued that simpler theories make better predictions, possess theoretical virtues like explanatory power, and have other pragmatic virtues like computational tractability. However, such arguments fail to explain how and why a preference for simplicity can help one find true theories in scientific inquiry, unless one already assumes that the truth is simple. One new solution to that problem is the Ockham efficiency theorem (Kelly 2002, Minds Mach 14:485–505, 2004, Philos Sci 74:561–573, 2007a, b, Theor Comp Sci 383:270–289, c, d; Kelly and Glymour 2004), which states that scientists who heed Ockham’s razor retract their opinions less often and sooner than do their non-Ockham competitors. The theorem neglects, however, to consider competitors following random (“mixed”) strategies, and in many applications random strategies are known to achieve better worst-case loss than deterministic strategies. In this paper, we describe two ways to extend the result to a very general class of random, empirical strategies. The first extension concerns expected retractions, retraction times, and errors and the second extension concerns retractions in chance, times of retractions in chance, and chances of errors.
Is randomness in quantum mechanics “algorithmically random”? Is there any relation between Heisenberg’s uncertainty relation and Gödelian incompleteness? Can quantum randomness be used to trespass the Turing barrier? Can complexity shed more light on incompleteness? Whether a U238 nucleus will emit an alpha particle in a given interval of time is “random”. If we collapse a wave function, what it ends up being is “random”. Which slit the electron went through in the double slit experiment, again, is “random”. Is there any sense to say that “random” in the above sentences means “truly random”? When we flip a coin, whether it’s heads or tails looks random, but it’s not truly random. It’s determined by the way we flip the coin, the force on the coin, the way force is applied, the weight of the coin, air currents acting on it, and many other factors. This means that if we calculated all these values, we would know if it was heads or tails without looking. Without knowing this information—and this is what happens in practice—the result looks as if it’s random, but it’s not truly random. Is quantum randomness “truly random”? Our working model of “truly random” is “algorithmic randomness” in the sense of Algorithmic Information Theory (see, for example, [5]). In this paper we compare quantum randomness with algorithmic randomness in an attempt to obtain partial answers to the following questions: Is randomness in quantum mechanics “algorithmically random”? Is there any relation between Heisenberg’s uncertainty relation and Gödel’s incompleteness? Can quantum randomness be used to trespass the Turing barrier? Can complexity cast more light on incompleteness? Our analysis is tentative and raises more questions than it offers answers.
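As a concrete, if crude, illustration of the "algorithmic randomness" standard invoked here (my own sketch, not from the paper): Kolmogorov complexity is uncomputable, but an off-the-shelf compressor gives a practical one-sided test, since a string that compresses well is certainly not algorithmically random. zlib is used only as an example compressor.

```python
# Compressibility as a crude proxy for (non-)randomness: a pseudo-random
# byte string and a highly regular one of the same length compress very
# differently.
import zlib
import random

random.seed(0)
n = 10_000
pseudo_random = bytes(random.getrandbits(8) for _ in range(n))
regular = bytes([0, 1] * (n // 2))

for name, data in [("pseudo-random", pseudo_random), ("regular", regular)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name:>14}: compressed to {ratio:.2%} of original size")
# A string is (roughly) algorithmically random when no description much
# shorter than the string itself exists; a compressor can never certify
# randomness, it can only expose non-randomness.
```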
Randomness: quantum versus classical The recent tremendous development of quantum information theory has led to a number of quantum technological projects, e.g., quantum random generators. This development stimulates a new wave of interest in quantum foundations. One of the most intriguing problems of quantum foundations is the elaboration of a consistent and commonly accepted interpretation of the quantum state. A closely related problem is the clarification of the notion of quantum randomness and its interrelation with classical randomness. In this short review we shall discuss the basics of the classical theory of randomness (which by itself is very complex and characterized by a diversity of approaches) and compare it with irreducible quantum randomness. The second part of this review is devoted to the information interpretation of quantum mechanics (QM) in the spirit of Zeilinger and Brukner (and QBism of Fuchs et al.) and physics in general (e.g., Wheeler’s “it from bit”) as well as the digital philosophy of Chaitin (with historical coupling to the ideas of Leibniz). Finally, we continue the discussion on the interrelation of quantum and classical randomness and the information interpretation of QM. Recently, interest in quantum foundations was rekindled by the rapid and successful development of quantum information theory. One of the promising quantum information projects which can lead to real technological applications is the project on quantum random generators. Successful realization of this project attracted the attention of the quantum community to the old and complicated problem of the interrelation of quantum and classical randomness. In this short review we shall discuss this interrelation: classical randomness versus irreducible quantum randomness. This review can be useful for researchers working in quantum information theory, both as a review on classical randomness and on the interpretational problems of QM related to the notion of randomness. We emphasize the coupling between information and randomness, both in the classical and quantum frameworks. This approach is very natural in the light of the modern information revolution in QM and physics in general. Moreover, “digital philosophy” (in the spirit of Chaitin) is spreading widely in modern science, i.e., not only in physics, but also in, e.g., computer science, artificial intelligence, biology. Therefore it is natural to discuss randomness and information jointly, including novel possibilities to operate with quantum information and randomness outside of physics, e.g., in biology (molecular biology, genetics) and cognitive science and the general theory of decision making.
New Record Set for Quantum Superposition at Macroscopic Level Abstract: 'The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov–Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity'. A team of researchers working at Stanford University has extended the record for quantum superposition at the macroscopic level, from 1 to 54 centimeters. In their paper published in the journal Nature, the team describes the experiment they conducted and their results, and also discusses what their findings might mean for researchers looking to find the cutoff point between superposition as it applies to macroscopic objects versus those that only exist at the quantum level. Nature has also published an editorial on the work done by the team, describing their experiment and summarizing their results. Scientists have been in the news a lot over the past couple of years for entangling quantum particles and even whole atoms, as experiments have been conducted with the goal of attempting to better understand the strange phenomenon—and much has been learned. But, as scientists figure out how to entangle two particles at ever greater distances apart, questions have arisen about the size of objects that can be entangled. Schrödinger's cat has come up in several such discussions as theorists and those in the applied fields seek to figure out if it might be truly possible to cause a whole cat to actually be in two places at once. In this new work, the team at Stanford has perhaps muddied the water even more as they have extended the record for superposition from a mere one centimeter to just over half a meter. They did it by creating a Bose-Einstein condensate cloud made up of 10,000 rubidium atoms (inside of a super-chilled chamber) all initially in the same state. Next, they used lasers to push the cloud up into the 10-meter-high chamber, which also caused the atoms to enter one or the other of two given states. 
As the cloud reached the top of the chamber, the researchers noted that the wave function was a half-and-half mixture of the given states and represented positions that were 54 centimeters apart. When the cloud was allowed to fall back to the bottom of the chamber, the researchers confirmed that atoms appeared to have fallen from two different heights, proving that the cloud was held in a superposition state. The team acknowledges that while their experiment has led to a new record for superposition at the macroscopic scale, it was still done with individual atoms; thus, it is still not clear if superposition will work with macroscopic-sized objects.
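A back-of-the-envelope sketch (my own, not from the paper) shows how a light-pulse atom interferometer reaches half-metre separations: each photon recoil changes a rubidium-87 atom's velocity by hbar*k/m, and a large-momentum beam splitter transfers many recoils. The recoil number of 90 and the drift time of 1 second are assumptions chosen to roughly match the reported 54 cm.

```python
# Estimate of the wave-packet separation from photon-recoil kicks.
from math import pi

hbar = 1.054_571_8e-34               # J s
m_rb87 = 86.909 * 1.660_539e-27      # kg, rubidium-87 mass
wavelength = 780e-9                  # m, Rb D2 line
k = 2 * pi / wavelength              # photon wavenumber

v_recoil = hbar * k / m_rb87         # single-photon recoil velocity
n_rec, T = 90, 1.0                   # assumed momentum transfer (in units of hbar*k) and drift time (s)
separation = n_rec * v_recoil * T
print(f"recoil velocity ~ {v_recoil*1e3:.2f} mm/s")
print(f"estimated wave-packet separation ~ {separation*100:.0f} cm")
```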
Excellent read on the Philosophy of Science and methodology: 'Induction and Deduction in Bayesian Data Analysis' - Andrew Gelman The classical or frequentist approach to statistics (in which inference is centered on significance testing) is associated with a philosophy in which science is deductive and follows Popper’s doctrine of falsification. In contrast, Bayesian inference is commonly associated with inductive reasoning and the idea that a model can be dethroned by a competing model but can never be directly falsified by a significance test. The purpose of this article is to break these associations, which I think are incorrect and have been detrimental to statistical practice, in that they have steered falsificationists away from the very useful tools of Bayesian inference and have discouraged Bayesians from checking the fit of their models. From my experience using and developing Bayesian methods in social and environmental science, I have found model checking and falsification to be central in the modeling process.
Thomas Piketty’s Puzzle: Globalization, Inequality and Democracy In the absence of more aggressive state intervention to redistribute wealth from rich to poor, Piketty’s economic law means that we will see an acceleration of inequality since the rich will always own a disproportionate share of capital. George Robinson examines what this increase in inequality means for democracy. One of the most striking trends of modern times, the concentration of global wealth in the hands of the very few, has been popularized by Thomas Piketty in his hugely influential Capital in the Twenty-First Century. Piketty argues that the rate of return on capital consistently exceeds the rate of economic growth. In the absence of more aggressive state intervention to redistribute wealth from rich to poor, Piketty’s economic law means that we will see an acceleration of inequality since the rich will always own a disproportionate share of capital. What does this increase in inequality mean for democracy? I sympathize with Piketty’s view that a significant increase in existing inequalities of wealth may be harmful to the quality of existing democracies. The legitimacy of democracy depends, at least in part, on producing outcomes that citizens think are fair. A rapid growth in inequality risks undermining this implicit contract between citizens. However, we also should consider the impact of inequality on the process of democratization. Our globalized world, with its high levels of financial integration and mobility of capital, may have the curious side effect of making transitions from autocracy to democracy more likely. So while it is right to consider how inequality will change established democracies, we should also think about what it means for the possibility of transitions to democratic government in autocratic states. Political scientists often consider the transition from dictatorship to democracy in game theoretical terms. The elites undertake a kind of back-of-the-envelope calculation about the costs and benefits of controlling the apparatus of state. Maintaining an autocracy comes with certain costs, the costs, for instance, of controlling the population. These costs are weighed up against the risks of allowing the general population to set the rate of taxation. This might make one assume that increasing levels of equality will bolster the chances of democracy. Here’s an intuitive theory: as the distribution of income in a society becomes more equal, the pressures to pursue redistributive policies from the disenfranchised in society will diminish, which, in turn, reduces the costs of tolerating democracy for the elites. That’s a complicated way of saying that higher levels of equality make democratization a cheaper deal for elites. So far this all seems to support the commonsense thesis that a growth in inequality is bad for democratization. Large levels of inequality increase the risks associated with democratization for the ruling elite, so the elite are less likely to relinquish control of the state. But this picture is incomplete. We need to think about how easy it is for elites to move their money around in the modern world. Technological change and financial innovation have made assets more mobile, and this has some pretty profound consequences for the risks and rewards associated with allowing a transition to democracy. So how might this increased financial integration change the behavior of autocratic elites? 
A recent paper by John Freeman and Dennis Quinn presents some interesting conclusions about the implications of financial globalization for democratization. Unsurprisingly they suggest that a greater level of financial integration makes it easier for elites to move their assets out of the country and that this is going to reduce the threat of democratization to a ruling elite, as any progressive change to the system of taxation will have less impact on their income and assets. More controversially they argue that this is likely to happen even if the ruling elite is not feeling pressure to democratize. In a financially integrated world an investor naturally seeks an international portfolio of investments, one which diversifies into foreign equities. This kind of international portfolio will reduce risk and increase return because it is less dependent on the performance of the native economy. Innovation in financial products has changed the picture too. Many assets which would have previously been described as fixed – such as land – are now chopped up and traded on global markets. When it was not possible to sell these kinds of fixed assets abroad, the wealth of a ruling group was yoked to a particular country but we now live in a world where these assets can be traded easily. In short, we are living in a world where elites are capable of spreading their wealth across the globe and are strongly incentivized to do so. A corollary of this is an increase in income inequality within the country in question; the domestic elites accrue large benefits from the asset sales and can accrue a high rate of return on their capital if they invest it abroad. All of this might point to some interesting and surprising conclusions about the relationship between the level of inequality in autocratic states and the probability of democratization. In many cases, an autocratic state with high levels of financial integration will produce a domestic elite with an international portfolio of capital investments and this is likely to result in increasing inequality within a society. This international portfolio of investments also increases the probability of a transition to democracy, as the domestic elite have less incentive to maintain the repressive apparatus required to maintain autocratic rule and less to fear from a system of taxation under the ownership of democratic governments. It may be the case, then, that the accelerating level of inequality Piketty has identified–facilitated in part by a globalized economy which allows a high rate of return on capital–will have the perverse effect of making the world a more democratic place by reducing the incentives of elites to maintain control of the apparatus of state in autocratic countries.
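Piketty's "r > g" claim is, at bottom, compound-interest arithmetic. The toy sketch below (my own, with invented round numbers) shows how quickly a fortune reinvested at rate r outpaces an economy growing at a slower rate g.

```python
# Compound-interest arithmetic behind "r > g": a reinvested fortune grows
# relative to national income by a factor (1 + r) / (1 + g) per year.
r, g, years = 0.05, 0.015, 100     # illustrative round numbers

wealth, income = 1.0, 1.0          # both normalised to 1 at the start
for _ in range(years):
    wealth *= 1 + r
    income *= 1 + g

print(f"wealth grows {wealth:.0f}x, income {income:.0f}x over {years} years")
print(f"wealth-to-income ratio rises by a factor of {wealth/income:.1f}")
```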
Quantum Deep Learning - Dispelling the Myth That The Brain is Bayesian and that Bayesian Methods Underlie Artificial Intelligence and Machine Learning 'We present quantum algorithms to perform deep learning that outperform conventional, state-of-the-art classical algorithms in terms of both training efficiency and model quality. Deep learning is a recent technique used in machine learning that has substantially impacted the way in which classification, inference, and artificial intelligence (AI) tasks are modeled [1–4]. It is based on the premise that to perform sophisticated AI tasks, such as speech and visual recognition, it may be necessary to allow a machine to learn a model that contains several layers of abstractions of the raw input data. For example, a model trained to detect a car might first accept a raw image, in pixels, as input. In a subsequent layer, it may abstract the data into simple shapes. In the next layer, the elementary shapes may be abstracted further into aggregate forms, such as bumpers or wheels. At even higher layers, the shapes may be tagged with words like “tire” or “hood”. Deep networks therefore automatically learn a complex, nested representation of raw data similar to layers of neuron processing in our brain, where ideally the learned hierarchy of concepts is (humanly) understandable. In general, deep networks may contain many levels of abstraction encoded into a highly connected, complex graphical network; training such graphical networks falls under the umbrella of deep learning. Boltzmann machines (BMs) are one such class of deep networks, which formally are a class of recurrent neural nets with undirected edges and thus provide a generative model for the data. From a physical perspective, Boltzmann machines model the training data with an Ising model that is in thermal equilibrium. These spins are called units in the machine learning literature and encode features and concepts while the edges in the Ising model’s interaction graph represent the statistical dependencies of the features. The set of nodes that encode the observed data and the output are called the visible units (v), whereas the nodes used to model the latent concept and feature space are called the hidden units (h). Two important classes of BMs are the restricted Boltzmann machine (RBM) which takes the underlying graph to be a complete bipartite graph, and the deep restricted Boltzmann machine which is composed of many layers of RBMs (see Figure 1). For the purposes of discussion, we assume that the visible and hidden units are binary'.
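To make the Boltzmann-machine setup concrete, here is a minimal classical restricted Boltzmann machine trained with one-step contrastive divergence (CD-1). It is an illustrative sketch of the model class the quoted passage describes, not of the paper's quantum algorithms; all sizes, hyperparameters and the toy dataset are arbitrary.

```python
# Minimal binary RBM with CD-1 training.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def energy(self, v, h):
        # Ising-like energy E(v, h) = -v.b - h.c - v.W.h of a configuration.
        return -v @ self.b - h @ self.c - v @ self.W @ h

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.c)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_update(self, v0):
        # Positive phase: hidden activations given the data.
        ph0, h0 = self.sample_h(v0)
        # Negative phase: one step of block Gibbs sampling.
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        self.W += self.lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
        self.b += self.lr * (v0 - pv1)
        self.c += self.lr * (ph0 - ph1)

# Toy usage: learn a trivial two-pattern "dataset".
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
for epoch in range(1000):
    for v in data:
        rbm.cd1_update(v)
```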
Seeing Quantum (Physics) Motion Consider the pendulum of a grandfather clock. If you forget to wind it, you will eventually find the pendulum at rest, unmoving. However, this simple observation is only valid at the level of classical physics—the laws and principles that appear to explain the physics of relatively large objects at human scale. Quantum mechanics, however, the underlying physical rules that govern the fundamental behavior of matter and light at the atomic scale, states that nothing can quite be completely at rest. For the first time, a team of Caltech researchers and collaborators has found a way to observe—and control—this quantum motion of an object that is large enough to see. Their results are published in the August 27 online issue of the journal Science. Researchers have known for years that in classical physics, physical objects indeed can be motionless. Drop a ball into a bowl, and it will roll back and forth a few times. Eventually, however, this motion will be overcome by other forces (such as gravity and friction), and the ball will come to a stop at the bottom of the bowl. "In the past couple of years, my group and a couple of other groups around the world have learned how to cool the motion of a small micrometer-scale object to produce this state at the bottom, or the quantum ground state," says Keith Schwab, a Caltech professor of applied physics, who led the study. "But we know that even at the quantum ground state, at zero-temperature, very small amplitude fluctuations—or noise—remain." Because this quantum motion, or noise, is theoretically an intrinsic part of the motion of all objects, Schwab and his colleagues designed a device that would allow them to observe this noise and then manipulate it. The micrometer-scale device consists of a flexible aluminum plate that sits atop a silicon substrate. The plate is coupled to a superconducting electrical circuit and vibrates at a rate of 3.5 million times per second. According to the laws of classical mechanics, the vibrating structures eventually will come to a complete rest if cooled to the ground state. But that is not what Schwab and his colleagues observed when they actually cooled the spring to the ground state in their experiments. Instead, the residual energy—quantum noise—remained. "This energy is part of the quantum description of nature—you just can't get it out," says Schwab. "We all know quantum mechanics explains precisely why electrons behave weirdly. Here, we're applying quantum physics to something that is relatively big, a device that you can see under an optical microscope, and we're seeing the quantum effects in a trillion atoms instead of just one." Because this noisy quantum motion is always present and cannot be removed, it places a fundamental limit on how precisely one can measure the position of an object. But that limit, Schwab and his colleagues discovered, is not insurmountable. The researchers and collaborators developed a technique to manipulate the inherent quantum noise and found that it is possible to reduce it periodically. Coauthors Aashish Clerk from McGill University and Florian Marquardt from the Max Planck Institute for the Science of Light proposed a novel method to control the quantum noise, which was expected to reduce it periodically. This technique was then implemented on a micron-scale mechanical device in Schwab's low-temperature laboratory at Caltech. "There are two main variables that describe the noise or movement," Schwab explains. 
"We showed that we can actually make the fluctuations of one of the variables smaller—at the expense of making the quantum fluctuations of the other variable larger. That is what's called a quantum squeezed state; we squeezed the noise down in one place, but because of the squeezing, the noise has to squirt out in other places. But as long as those more noisy places aren't where you're obtaining a measurement, it doesn't matter." The ability to control quantum noise could one day be used to improve the precision of very sensitive measurements, such as those obtained by LIGO, the Laser Interferometry Gravitational-wave Observatory, a Caltech-and-MIT-led project searching for signs of gravitational waves, ripples in the fabric of space-time. "We've been thinking a lot about using these methods to detect gravitational waves from pulsars—incredibly dense stars that are the mass of our sun compressed into a 10 km radius and spin at 10 to 100 times a second," Schwab says. "In the 1970s, Kip Thorne [Caltech's Richard P. Feynman Professor of Theoretical Physics, Emeritus] and others wrote papers saying that these pulsars should be emitting gravity waves that are nearly perfectly periodic, so we're thinking hard about how to use these techniques on a gram-scale object to reduce quantum noise in detectors, thus increasing the sensitivity to pick up on those gravity waves," Schwab says. In order to do that, the current device would have to be scaled up. "Our work aims to detect quantum mechanics at bigger and bigger scales, and one day, our hope is that this will eventually start touching on something as big as gravitational waves," he says. These results were published in an article titled, "Quantum squeezing of motion in a mechanical resonator." In addition to Schwab, Clerk, and Marquardt, other coauthors include former graduate student Emma E. Wollman (PhD '15); graduate students Chan U. Lei and Ari J. Weinstein; former postdoctoral scholar Junho Suh; and Andreas Kronwald of Friedrich-Alexander-Universität in Erlangen, Germany. The work was funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency, and the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center that also has support from the Gordon and Betty Moore Foundation.
Keynes and Sraffa on 'Own-Rates': A Present-Day Misunderstanding in Economics Scholars, who in recent years have studied the Sraffa papers held in the Wren Library of Trinity College, Cambridge, have concluded from Sraffa's critical—though unpublished—observations on Chapter 17 of Keynes's General Theory that he rejected Keynes's central proposition that the rate of interest on money may come to ‘rule the roost’, thus dragging the economy into recession. While Sraffa does express dissatisfaction with Chapter 17, the commentators have, we believe, misunderstood his concern: we suggest he was unhappy with Keynes's use of ‘own-rates’ rather than with the substance of the theory developed in Chapter 17. Since the papers of the late Piero Sraffa have become available to scholars, that most intriguing and important component of Keynes's General Theory, Chapter 17 on ‘The Essential Properties of Interest and Money’, has again come under critical fire, specifically from Ranchetti (2000) and Kurz (2010, 2012, 2013). Oka (2010) tries to fend off the criticism. From their study of the Sraffa papers, the critics contend that Keynes, having borrowed Sraffa's concept of ‘commodity rates of interest’ (Keynes preferred the designation ‘own-rates of interest’), failed to make proper use of it, and—as long ago perceived by Sraffa himself, but recorded only in unpublished notes—fell into serious error, calling in question a key proposition of his theory. Keynes famously argued that in conditions of developing recession, a relatively sticky own-rate on money would come to ‘rule the roost’, knocking out investment in other assets, the falling returns on which could not compete with the return on money. Sraffa, it has been discovered, objected privately that, with deflation, the own-rate on money would be lower, not higher, than own-rates on competing assets. On the basis of Sraffa's observations, the critics take it that Keynes was guilty of confusion and error—the implication being that Sraffa had effectively blown the argument of Chapter 17 out of the water. That interpretation needs looking into; the purpose of this paper is to do so.
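For readers unfamiliar with the term, here is the standard textbook rendering of Keynes's Chapter 17 accounting (my gloss, not a formula taken from the Sraffa papers or from this article).

```latex
% Keynes, General Theory ch. 17 (standard rendering): the total expected
% return on holding an asset, measured in terms of itself, is
\[
r_{\text{own}} = q - c + l ,
\qquad
r_{\text{in money terms}} = a + q - c + l ,
\]
% where q is the asset's yield, c its carrying cost, l its liquidity premium
% and a its expected appreciation in terms of money. Keynes's claim is that
% for money itself q - c is negligible but l is large and sticky, which is
% how the money rate can come to "rule the roost".
```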
On the Relation Between Mathematics and Physics: How Not to 'Factor' a Miracle: Mathematics is a bit like Zen, in that its greatest masters are likely to deny there being any succinct expression of what it is. It may seem ironic that the one subject which demands absolute precision in its definitions would itself defy definition, but the truth is, we are still figuring out what mathematics is. And the only real way to figure this out is to do mathematics. Mastering any subject takes years of dedication, but mathematics takes this a step further: it takes years before one even begins to see what it is that one has spent so long mastering. I say
“begins to see” because so far I have no reason to suspect this process terminates. Neither do wiser and more experienced mathematicians I have talked to. In this spirit, for example, The Princeton Companion to Mathematics [PCM], expressly renounces any tidy answer to the question “What is mathematics?” Instead, the book replies to this question with 1000 pages of expositions of topics within mathematics, all written by top experts in
their own subfields. This is a wise approach: a shorter answer would be not just incomplete, but necessarily misleading. Unfortunately, while mathematicians are often reluctant to define mathematics, others are not. In 1960, despite having made his own mathematically significant contributions, physicist Eugene Wigner defined mathematics as “the science of skillful operations with concepts and rules invented just for this purpose” [W]. This rather negative characterization of mathematics may have been partly tongue-in-cheek, but he took it seriously enough to build upon it an argument that mathematics is “unreasonably effective” in the natural sciences—an argument which has been unreasonably
influential among scientists ever since. What weight we attach to Wigner’s claim, and the view of mathematics it promotes, has both metaphysical and practical implications for the progress of mathematics and physics. If the effectiveness of mathematics in physics is a ‘miracle,’ then this miracle may well run out. In this case, we are justified in keeping the two subjects ‘separate’ and hoping our luck continues. If, on the other hand, they are deeply and rationally related, then this surely has consequences for how we should do research at the interface. In fact, I shall argue that what has so far been unreasonably effective is not mathematics but reductionism—the practice of inferring behavior of a complex problem by isolating and solving manageable ‘subproblems’—and that physics may be reaching the limits of effectiveness of the reductionist approach. In this case, mathematics will remain our best hope for progress in physics, by finding precise ways to go beyond reductionist tactics.
Philosophy and the practice of Bayesian statistics - Andrew Gelman and Cosma Rohilla Shalizi A substantial school in the philosophy of science identifies Bayesian inference with
inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
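Since model checking is the crux of Gelman and Shalizi's argument, here is a minimal posterior predictive check of my own (not from the paper); the model, the data-generating process and the test statistic are invented for illustration.

```python
# Posterior predictive check: fit a deliberately too-simple Poisson model to
# overdispersed counts, simulate replicated data from the posterior
# predictive distribution, and compare a test statistic.
import numpy as np

rng = np.random.default_rng(1)

# "Observed" data: overdispersed counts that a Poisson model will fit badly.
y = rng.negative_binomial(n=2, p=0.2, size=100)

# Conjugate Gamma(a, b) prior on the Poisson rate; the posterior is Gamma too.
a, b = 1.0, 1.0
post_a, post_b = a + y.sum(), b + len(y)

# Draw replicated datasets from the posterior predictive distribution.
n_rep = 1000
lam = rng.gamma(post_a, 1.0 / post_b, size=n_rep)
y_rep = rng.poisson(lam[:, None], size=(n_rep, len(y)))

# Test statistic: variance/mean ratio (should be ~1 under a Poisson model).
T_obs = y.var() / y.mean()
T_rep = y_rep.var(axis=1) / y_rep.mean(axis=1)
p_value = (T_rep >= T_obs).mean()
print(f"observed dispersion {T_obs:.2f}, posterior predictive p = {p_value:.3f}")
# A p-value near 0 flags misfit: grounds to revise the model, not merely to
# keep updating within it, which is the hypothetico-deductive point.
```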
Major Step Toward Confirming the Existence of the Majorana Particle: A NIMS MANA group theoretically demonstrated that the results of the experiments on the peculiar superconducting state reported by a Chinese research group in January 2015 prove the existence of the Majorana-type particles. A research group led by NIMS Special Researcher Takuto Kawakami and MANA Principal Investigator Xiao Hu of the International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science (NIMS) theoretically demonstrated that the results of the experiments on the peculiar superconducting state reported by a Chinese research group in January 2015 can be taken as a proof of the existence of Majorana-type particles. The existence of the Majorana particle was predicted in 1937 by the Italian theoretical physicist Ettore Majorana. Though it is a fermion, it is equivalent to its own antiparticle. While its existence as an elementary particle still has not been confirmed today, nearly 80 years after the prediction, it was pointed out theoretically in recent years that quasiparticle excitations in special materials called topological superconductors behave in a similar way as Majorana particles. However, it is difficult to capture these Majorana particles in materials due to their unique properties of being charge neutral and carrying zero energy. There has been intense international competition to confirm their existence. The research group carefully examined the physical conditions of the experiments mentioned above, conducted extensive and highly precise theoretical analysis on superconducting quasiparticle excitations, and demonstrated that Majorana particles are captured inside quantum vortex cores of a topological superconductor by comparing the theoretical analysis with the results of the experiments. In addition, the group suggested a specific method to improve the precision of the experiments by taking advantage of the unique quantum mechanical properties of Majorana particles. The collective behavior of Majorana particles—fermions that are equivalent to their own antiparticles—is different from that of electrons and photons, and it is expected to be useful in the development of powerful quantum computers. Furthermore, their very unique property of carrying zero energy could be exploited for the creation of various new quantum functionalities. As such, confirming the existence of Majorana particles at high precision will have a major ripple effect on new developments in materials science and technology.
A Deep Connection is Drawn Between 'Evidential Probability' Theory and 'Objective Bayesian Epistemology': Evidential probability (EP), developed by Henry Kyburg, offers an account of the impact of statistical evidence on single-case probability. According to this theory, observed frequencies of repeatable outcomes determine a probability interval that can be associated with a proposition. After giving a comprehensive introduction to EP in §2, in §3 we describe a recent variant of this approach, second-order evidential probability (2oEP). This variant, introduced in Haenni et al. (2008), interprets a probability interval of EP as bounds on the sharp probability of the corresponding proposition. In turn, this sharp probability can itself be interpreted as the degree to which one ought to believe the proposition in question. At this stage we introduce objective Bayesian epistemology (OBE), a theory of how evidence helps determine appropriate degrees of belief (§4). OBE might be thought of as a rival to the evidential probability approaches. However, we show in §5 that they can be viewed as complementary: one can use the rules of EP to narrow down the degree to which one should believe a proposition to an interval, and then use the rules of OBE to help determine an appropriate degree of belief from within this interval. Hence bridges can be built between evidential probability and objective Bayesian epistemology.
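The bridge described in the abstract can be caricatured in a few lines. The sketch below is my own stand-in for the real machinery: a simple confidence interval plays the role of the EP-style interval, and picking the most equivocal point within it (closest to 0.5) plays the role of the objective Bayesian step; the data and the interval rule are invented.

```python
# Toy "EP then OBE" pipeline: interval from frequency data, then the most
# equivocal degree of belief consistent with that interval.
from math import sqrt

successes, trials = 78, 100
f = successes / trials
half_width = 1.96 * sqrt(f * (1 - f) / trials)      # normal-approximation CI
lo, hi = max(0.0, f - half_width), min(1.0, f + half_width)

# Objective Bayesian step: the point in [lo, hi] closest to maximal
# equivocation (probability 0.5).
degree_of_belief = min(max(0.5, lo), hi)

print(f"evidential interval: [{lo:.3f}, {hi:.3f}]")
print(f"recommended degree of belief: {degree_of_belief:.3f}")
```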
New Research Shows that Evolution Learns from Previous Experience, Providing an Explanation of How Evolution by Natural Selection Produces Intelligent Designs Without Mystification: Evolution may be more intelligent than we thought, according to a University of Southampton professor. Professor Richard Watson says new research shows that evolution is able to learn from previous experience, which could provide a better explanation of how evolution by natural selection produces such apparently intelligent designs.
By unifying the theory of evolution (which shows how random variation and selection is sufficient to provide incremental adaptation) with learning theories (which show how incremental adaptation is sufficient for a system to exhibit intelligent behaviour), this research shows that it is possible for evolution to exhibit some of the same intelligent behaviours as learning systems (including neural networks). In an opinion paper, published in Trends in Ecology and Evolution, Professors Watson and Eörs Szathmáry, from the Parmenides Foundation in Munich, explain how formal analogies can be used to transfer specific models and results between the two theories to solve several important evolutionary puzzles. Professor Watson says: "Darwin's theory of evolution describes the driving process, but learning theory is not just a different way of describing what Darwin already told us. It expands what we think evolution is capable of. It shows that natural selection is sufficient to produce significant features of intelligent problem-solving." For example, a key feature of intelligence is an ability to anticipate behaviours that will lead to future benefits. Conventionally, evolution, being dependent on random variation, has been considered 'blind' or at least 'myopic' - unable to exhibit such anticipation. But showing that evolving systems can learn from past experience means that evolution has the potential to anticipate what is needed to adapt to future environments in the same way that learning systems do. "When we look at the amazing, apparently intelligent designs that evolution produces, it takes some imagination to understand how random variation and selection produced them. Sure, given suitable variation and suitable selection (and we also need suitable inheritance) then we're fine. But can natural selection explain the suitability of its own processes? That self-referential notion is troubling to conventional evolutionary theory - but easy in learning theory. "Learning theory enables us to formalise how evolution changes its own processes over evolutionary time. For example, by evolving the organisation of development that controls variation, the organisation of ecological interactions that control selection or the structure of reproductive relationships that control inheritance - natural selection can change its own ability to evolve.
"If evolution can learn from experience, and thus improve its own ability to evolve over time, this can demystify the awesomeness of the designs that evolution produces. Natural selection can accumulate knowledge that enables it to evolve smarter. That's exciting because it explains why biological design appears to be so intelligent."
Physics and Biology: Quantum Criticality at the Origin of Life: Why life persists at the edge of chaos is a question at the very heart of evolution. Here it is shown that molecules taking part in biochemical processes from small molecules to proteins are critical quantum mechanically. Electronic Hamiltonians of biomolecules are tuned exactly to the critical point of the metal-insulator transition separating the Anderson localized insulator phase from the conducting disordered metal phase. Using tools from Random Matrix Theory, it is confirmed that the energy level statistics of these biomolecules show the universal transitional distribution of the metal-insulator critical point and the wave functions are multifractals in accordance with the theory of Anderson transitions. The findings point to the existence of a universal mechanism of charge transport in living matter. The revealed bio-conductor material is neither a metal nor an insulator but a new quantum critical material which can exist only in highly evolved systems and has unique material properties
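The Random Matrix Theory diagnostic mentioned here can be demonstrated in a few lines. The sketch below is my own, not the paper's analysis: it contrasts the nearest-neighbour level-spacing statistics of a random symmetric (GOE) matrix with those of independent random levels, the two limiting cases ("metallic" and "insulating") between which the critical distribution sits.

```python
# Level-spacing statistics: GOE (Wigner-Dyson, level repulsion) versus
# Poisson (independent levels, no repulsion).
import numpy as np

rng = np.random.default_rng(2)
N = 400

def spacings(levels):
    s = np.diff(np.sort(levels))
    return s / s.mean()            # crude unfolding: normalise the mean spacing

# GOE: real symmetric random matrix.
A = rng.standard_normal((N, N))
goe_levels = np.linalg.eigvalsh((A + A.T) / 2.0)

# Poisson: independent uniformly distributed levels.
poisson_levels = rng.uniform(0.0, 1.0, size=N)

for name, lv in [("GOE", goe_levels), ("Poisson", poisson_levels)]:
    s = spacings(lv)
    print(f"{name}: fraction of spacings < 0.1 = {(s < 0.1).mean():.3f}")
# Level repulsion shows up as a much smaller fraction of very small spacings
# in the GOE case than in the Poisson case; critical statistics interpolate
# between the two.
```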
Physicists Confirm Thermodynamic Irreversibility in a Quantum System: Deep Result in Light Of T-Invariance of Quantum Laws/Equations: For the first time, physicists have performed an experiment confirming that thermodynamic processes are irreversible in a quantum system—meaning that, even on the quantum level, you can't put a broken egg back into its shell. The results have implications for understanding thermodynamics in quantum systems and, in turn, designing quantum computers and other quantum information technologies. The physicists, Tiago Batalhão at the Federal University of ABC, Brazil, and coauthors, have published their paper on the experimental demonstration of quantum thermodynamic irreversibility in a recent issue of Physical Review Letters. Irreversibility at the quantum level may seem obvious to most people because it matches our observations of the everyday, macroscopic world. However, it is not as straightforward to physicists because the microscopic laws of physics, such as the Schrödinger equation, are "time-symmetric," or reversible. In theory, forward and backward microscopic processes are indistinguishable. In reality, however, we only observe forward processes, not reversible ones like broken egg shells being put back together. It's clear that, at the macroscopic level, the laws run counter to what we observe. Now the new study shows that the laws don't match what happens at the quantum level, either. Observing thermodynamic processes in a quantum system is very difficult and has not been done until now. In their experiment, the scientists measured the entropy change that occurs when applying an oscillating magnetic field to carbon-13 atoms in liquid chloroform. They first applied a magnetic field pulse that causes the atoms' nuclear spins to flip, and then applied the pulse in reverse to make the spins undergo the reversed dynamics. If the procedure were reversible, the spins would have returned to their starting points—but they didn't. Basically, the forward and reverse magnetic pulses were applied so rapidly that the spins' flipping couldn't always keep up, so the spins were driven out of equilibrium. The measurements of the spins indicated that entropy was increasing in the isolated system, showing that the quantum thermodynamic process was irreversible. By demonstrating that thermodynamic irreversibility occurs even at the quantum level, the results reveal that thermodynamic irreversibility emerges at a genuine microscopic scale. This finding makes the question of why the microscopic laws of physics don't match our observations even more pressing. If the laws really are reversible, then what are the physical origins of the time-asymmetric entropy production that we observe? The physicists explain that the answer to this question lies in the choice of the initial conditions. The microscopic laws allow reversible processes only because they begin with "a genuine equilibrium process for which the entropy production vanishes at all times," the scientists write in their paper. Preparing such an ideal initial state in a physical system is extremely complex, and the initial states of all observed processes aren't at "genuine equilibrium," which is why they lead to irreversible processes. "Our experiment shows the irreversible nature of quantum dynamics, but does not pinpoint, experimentally, what causes it at the microscopic level, what determines the onset of the arrow of time," coauthor Mauro Paternostro at Queen's University in Belfast, UK, told Phys.org. 
"Addressing it would clarify the ultimate reason for its emergence."
The researchers hope to apply the new understanding of thermodynamics at the quantum level to high-performance quantum technologies in the future. "Any progress towards the management of finite-time thermodynamic processes at the quantum level is a step forward towards the realization of a fully fledged thermo-machine that can exploit the laws of quantum mechanics to overcome the performance limitations of classical devices," Paternostro said. "This work shows the implications for reversibility (or lack thereof) of non-equilibrium quantum dynamics. Once we characterize it, we can harness it at the technological level."
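For reference, the quantity whose growth signals irreversibility can be written compactly; this is the standard definition used in nonequilibrium quantum thermodynamics, and the precise estimator used by Batalhão and coauthors may differ in detail. For a system prepared in thermal equilibrium at inverse temperature $\beta$ and then driven (here, by the magnetic field pulses), the mean irreversible entropy production is

\[
\langle \Sigma \rangle \;=\; \beta\left(\langle W \rangle - \Delta F\right) \;\geq\; 0,
\]

where $\langle W \rangle$ is the average work done by the driving protocol and $\Delta F$ is the equilibrium free-energy difference between the initial and final Hamiltonians. $\langle \Sigma \rangle$ vanishes only for a quasi-static (reversible) protocol, so a measured nonzero value is precisely the signature of irreversibility described above.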
Haim Gaifman: Non-Standard Models in a Broader Perspective: What Does it Mean For a Theory to Have an Interpretation? Non-standard models were introduced by Skolem, first for set theory, then for Peano arithmetic. In the former, Skolem found support for an anti-realist view of absolutely uncountable sets. But in the latter he saw evidence for the impossibility of capturing the intended interpretation by purely deductive methods. In the history of mathematics the concept of a non-standard model is new. An analysis of some major innovations – the discovery of irrationals, the use of negative and complex numbers, the modern concept of function, and non-Euclidean geometry – reveals them as essentially different from the introduction of non-standard models. Yet non-Euclidean geometry, which is discussed at some length, is relevant to the present concern, for it raises the issue of intended interpretation. The standard model of natural numbers is the best candidate for an intended interpretation that cannot be captured by a deductive system. Next, I suggest, is the concept of a well-ordered set, and then, perhaps, the concept of a constructible set. One may have doubts about a realistic conception of the standard natural numbers, but such doubts cannot gain support from non-standard models. Attempts to utilize non-standard models for an anti-realist position in mathematics, which appeal to meaning-as-use, or to arguments of the kind proposed by Putnam, fail through irrelevance, or lead to incoherence. Robinson’s skepticism, on the other hand, is a coherent position, though one that gives up on providing a detailed philosophical account. The last section enumerates various uses of non-standard models.
Isaac Levi: Pragmatism, Philosophy of Science, Truth and Inquiry Isaac Levi is a central figure in contemporary pragmatism, who, drawing extensively on the philosophy of classical pragmatists like Charles S. Peirce and John Dewey, has been able to successfully develop, correct, and implement their views, thus presenting an innovative and significant approach to various issues in contemporary philosophy, including problems in logic, epistemology, decision theory, etc. His books (just to mention a few of them) Gambling with Truth (Knopf 1967), The Enterprise of Knowledge (MIT Press 1980), Hard Choices (Cambridge University Press 1986), and The Fixation of Belief and Its Undoing (Cambridge University Press 1991) propose a solid and elaborate framework to address various issues in epistemology from an original pragmatist perspective. The essays contained in Pragmatism and Inquiry investigate ideas that constitute the core of Levi’s philosophy (like corrigibilism, his account of inquiry, his distinction between commitment and performance, his account of statistical reasoning, his understanding of credal probabilities, etc.), but they do so by putting these views in dialogue with other important philosophical figures of the last and present century (like Edward Craig, Donald Davidson, Jaakko Hintikka, Frank Ramsey, Richard Rorty, Michael Williams, Timothy Williamson, Crispin Wright, etc.), thus providing a renewed entry-point into his thought. The collection contains 11 essays, all of which have been published previously. It is good to have these articles collected together, because they are closely interrelated and they present a systematic view. However, it would have been useful to have a longer introduction (the one in the book is just 3 pages long) to guide the reader through the conceptual relationships between the chapters and to identify their relevance in the current philosophical context. Insofar as there is much interrelation and overlap among the articles, I will avoid commenting on them one by one. I will rather identify some of the central themes that run through the book and point out where relevant ideas about these themes are presented in the collection. The first topic I wish to focus on is Levi’s original account of the tasks and purposes of epistemology. According to Levi, epistemology should not be understood as a discipline that identifies the principles according to which we can decide whether our beliefs are justified or not. He takes from the classical pragmatists (in particular from Peirce and Dewey) what he calls the principle of doxastic inertia, or doxastic infallibilism (cf. 32, 231), according to which we have no reason to justify the beliefs we are actually certain of. The task of epistemology is thus not that of justifying our current beliefs, but rather that of justifying changes in beliefs (cf. 165-71). In this context, Levi develops an interesting and original perspective which associates infallibilism and corrigibilism. We should be infallibilist about the beliefs we currently hold as true (according to Levi, it would be incoherent to hold them to be true and stress that they could be false as fallibilists do). Nonetheless we can be corrigibilist about our beliefs, because we can hold them to be vulnerable to modification in the course of inquiry. This means that we cannot but regard our current beliefs as true, even though we can consider them as open to correction in the course of future inquiry (cf. 120).
At this point, it is interesting to refer to Levi’s discussion of the claim, advanced for example by Rorty, that we should aim at warranted assertibility and not at truth. According to Levi, this claim could be understood in different ways. On the one hand, it could be read as implying that we should increase the number of beliefs that are acquired through well-conducted inquiry (133). This would be contrary to the principle of doxastic inertia proposed by the classical pragmatists, because it would require a justification for beliefs we already hold. Consequently, if we read Rorty’s claim in this way, his pragmatism would abandon one of the central views of this very tradition. On the other hand, if warranted assertibility is understood as a specific aim of inquiries we actually pursue (where we thus need a justification because we are trying to introduce changes in our beliefs), the simple contention that we should aim at warranted assertibility remains empty if we do not specify the proximate aims of the inquiry in question (and if we do so, it seems that a central aim of at least some inquiries should be the attainment of new error-free information: a goal that Rorty would probably reject as an aim of inquiry) (cf. 130, 133). From this account of the role and purposes of epistemology, it is clear how the analysis of the structure and procedures of inquiry plays an essential role in Levi’s theory of knowledge. This is the second topic I wish to address. Levi often recognizes his debt to Peirce and Dewey in his account of inquiry (cf. 1), but he also insists that we should develop their views further in order to attain a consistent position. He agrees with Peirce that inquiry is the process which allows us to pass from a state of doubt to a state of belief (cf. 83), but, following Dewey, he criticizes Peirce’s psychological description of these states (1-2). However, he does not endorse Dewey’s strategy for avoiding psychologism, that is, his description of inquiry as a process starting with an indeterminate situation and ending with a determinate situation (2, 84-5). Rather, he understands changes in states of belief as changes in doxastic commitments, where states of belief understood as commitments are to be distinguished from states of belief understood as performances. Accordingly, a doxastic commitment identifies the set of beliefs we commit ourselves to in a state of full belief. It could be totally different from the views we consciously endorse, which identify our state of doxastic performance (cf. 106). A state of belief understood as commitment thus has a normative component, because it describes what we should believe and not what we actually believe. Besides identifying the beliefs we are committed to endorse, our state of full belief also determines which views and theories we might rationally have doubts about (48). In other words, a state of full belief (understood as commitment) decides
the space of serious possibilities we can rationally inquire about (169). Accordingly, inquiry should not be understood as a process that generates changes in doxastic performances (which would concern our psychological dispositions and states), but rather as a process which results in changes in doxastic commitments (108). Changes in doxastic commitments can concern either the expansion or the contraction of our state of full belief. Levi offers us a detailed analysis of the ways in which these changes can be justified. Expansion can take the form of either routine expansion or deliberate expansion, where routine expansion identifies a “program for utilizing inputs to form new full beliefs to be added to X’s state of full belief K” (235). Levi refers here to a “program” because he wants to distinguish this kind of expansion from a conclusion obtained through inference, where, for example, the data would figure as premises of an induction (236). The difference here is that the “program” tells us how to use the data before the data are collected, whereas in inductive inferences there is no such identification in advance. He reads Peirce’s late account of induction as developing some elements along these lines (72-3) and he finds some affinities with Hintikka’s account of induction as a process “allowing nature to answer a question put to it by the inquirer” (204). Our state of full belief can also expand by means of deliberate expansion. In the latter “the answer chosen is justified by showing it is the best option among those available given the cognitive goals of the agent” (236). “The justified change is the one that is best among the available options (relevant alternatives) according to the goal of seeking new error-free and valuable information” (237). However, when we expand our state of full belief we can inadvertently generate inconsistencies among our beliefs. When we are in this inconsistent state of belief, we cannot but give up some of our beliefs in order to avoid contradictions. In contracting our state of full belief, we have basically three options. We can give up the new belief that generated the inconsistency, or we can give up the old belief with which it is in contradiction. Alternatively, we can also suspend judgment between the two. In all these cases we have a contraction of the state of full belief. Levi describes the criterion which should be followed in deciding between these three options as follows: “In contracting a state of belief by giving up information X would prefer, everything else being equal, to minimize the value of loss of the information X is going to incur” (230). In deciding whether to give up either the new or the old belief, X should then take into consideration which retreat would cause the smaller loss of information. If the loss of information would be equal in the two cases, then X should suspend judgment about the two (181, 229-30). This account of inquiry and of the way in which it justifies changes in doxastic commitments is part of an elaborate and original approach to epistemology. It draws its basic insights from Peirce’s and Dewey’s account of inquiry, but it develops their views in an extremely original and detailed way, which constitutes the core of Levi’s philosophy. Levi’s book also contains interesting reflections on the concept of truth.
He argues that, from a pragmatist point of view, we should not be interested in giving a definition of this concept that clarifies what we do when we apply the predicate “is true” to sentences and propositions. Rather, we should be interested in how the concept of truth is relevant for understanding the way in which we change beliefs through inquiry (124-5). Levi criticizes those accounts of inquiry which claim that inquiry should not aim at truth but at warranted assertibility (e.g. Rorty, Davidson, sometimes Dewey) (ch. 7). Against these views, he maintains that a concern with truth is essential to understand at least some of our inquiries, that is, those inquiries which aim to justify changes in full beliefs. It seems essential that these inquiries should try to avoid error (an aim that should be associated with the purpose of attaining new information), and this seems to have an indirect connection with the aim of finding out the truth (135-6). On the other hand, Levi rejects Peirce’s account of truth as the final opinion that we will reach at the end of inquiry. According to Levi, proposing this understanding of truth as the aim of inquiry would result in insoluble inconsistencies with the kind of corrigibilism that Levi endorses and that he also attributes to Peirce (138-40). Levi’s view seems to be the following: if in my current state of belief I believe h is absolutely true, then I should regard it as an essential part of the final opinion I aim to reach “in the long run.” Thus, I should not be prepared to give up h (which would contradict Peirce’s corrigibilism), insofar as at further steps in inquiry I could end up believing the contrary view (which I now believe is false). Levi concludes that at any determinate time in inquiry we should not be concerned with making the best move in order to contribute to the attainment of the truth intended as the final and definitive description of the world. On the contrary, we should just try to obtain new error-free information in the next proximate step of inquiry. I do not think that this way of presenting Peirce’s views is fair to his actual position, for two main reasons: (1) Peirce’s account of truth as the final opinion can be read as identifying not substantial theses about reality or the ultimate aims of inquiry, but the commitments we make with respect to a proposition when we assert that it is true: that is, we commit ourselves to the view that it will hold in the long run; (2) even if we identify the attainment of truth as the ultimate aim of inquiry, it seems possible, within Peirce’s model, to maintain that we can be corrigibilist about the views we currently consider true. Of course it would be irrational to doubt or give up these views as long as we still believe in them (this is basically what Levi calls Peirce’s principle of doxastic inertia). This does not imply that we cannot consider those views as corrigible, given that we could encounter circumstances (like new evidence gained through experience, or the identification of inconsistencies in our set of beliefs, etc.) that justify the emergence of a doubt about those views. If we were in these circumstances, it would not be problematic to give up those views, insofar as we would no longer be completely certain that they are true. If our aim were thus the attainment of truth in the long run, we would be justified in giving up those views insofar as we would no longer be certain that they contribute to the attainment of the final opinion.
Levi’s book also contains important scholarly contributions on Peirce and Dewey. It is undeniable that his approach to the writings of both Peirce and Dewey is strongly influenced by his own views and interests, but Levi is surely distinctive among the central figures in contemporary pragmatism for reading these classics with the attention they deserve. Chapter 4, “Beware of Syllogism: Statistical Reasoning and Conjecturing According to Peirce,” presents a reconstruction of the evolution of Peirce’s account of induction and hypothesis. Levi shows how Peirce later abandons his early attempts to define these kinds of inferences by means of a permutation of the structure of a categorical syllogism. In his later writings Peirce first begins to regard these inferences as permutations of statistical deductions (75), and he then abandons this strategy in favor of a description of deduction, induction and abduction reflecting their roles in inquiry (77-8). Chapters 5, “Dewey’s Logic of Inquiry,” and 6, “Wayward Naturalism: Saving Dewey from Himself,” contain interesting considerations on Dewey’s theory of inquiry and the kind of naturalism we should associate with it. Insofar as the two articles overlap in many respects (unfortunately the overlap is sometimes not only thematic but textual, which makes one wonder whether it would not have been better to include only one of the two in the collection), I will discuss them together. Compared with chapter 4 on Peirce, these articles are less scholarly and more concerned with a correction of Dewey’s views along the lines Levi suggests. In these chapters, Levi discusses a multiplicity of issues, but I will limit myself to the consideration of his criticism of Dewey’s naturalism (cf. 85-8, 111-16). Levi claims that “activities like believing, evaluating, inquiring, deliberating, and deciding are resistant to naturalization” (105), if the latter is understood as an explanation of these activities by means of psychological or behavioral dispositions. In his attempt to show continuities between the way in which humans rationally conduct inquiries and the way in which animals respond to the challenges posed by their environment, Dewey commits exactly this naturalistic fallacy (cf. 85, 111). However, states of full belief, understood as doxastic commitments, involve a normative element that cannot be reduced to dispositions (106). Endorsing an approach to inquiry based on commitments amounts to endorsing a better naturalism, which Levi calls wayward naturalism (cf. 103-4), and which does not substitute old supernatural entities with new ones (according to Levi, the appeal to dispositions as a universal means of explanation in epistemology introduces a new kind of supernaturalism). According to Levi, if we read Dewey properly, it becomes evident that we cannot but develop his account of inquiry in this way (108-9). To conclude, it is surely good to have these essays collected together, insofar as they offer a new perspective on some of the central insights of Levi’s philosophy thanks to a fruitful dialogue with recent developments in epistemology. Even though the overlap between the articles is sometimes so significant (as in the case of chapters 5 and 6) that it would have been advisable to avoid redundancies, the texts presented here are surely of interest to any scholar who believes that the classical pragmatists’ account of inquiry still has a lot to offer to the current philosophical debate.
Interference Effects of Choice on Confidence: Quantum Characteristics of Evidence Accumulation Decision-making relies on a process of evidence accumulation which generates support for possible hypotheses. Models of this process derived from classical stochastic theories assume that information accumulates by moving across definite levels of evidence, carving out a single trajectory across these levels over time. In contrast, quantum decision models assume that evidence develops over time in a superposition state analogous to a wavelike pattern and that judgments and decisions are constructed by a measurement process by which a definite state of evidence is created from this indefinite state. This constructive process implies that interference effects should arise when multiple responses (measurements) are elicited over time. We report such an interference effect during a motion direction discrimination task. Decisions during the task interfered with subsequent confidence judgments, resulting in less extreme and more accurate judgments than when no decision was elicited. These results provide qualitative and quantitative support for a quantum random walk model of evidence accumulation over the popular Markov random walk model. We discuss the cognitive and neural implications of modeling evidence accumulation as a quantum dynamic system. Significance: Most cognitive and neural decision-making models—owing to their roots in classical probability theory—assume that decisions are read out of a definite state of accumulated evidence. This assumption contradicts the view held by many behavioral scientists that decisions construct rather than reveal beliefs and preferences. We present a quantum random walk model of decision-making that treats judgments and decisions as a constructive measurement process, and we report the results of an experiment showing that making a decision changes subsequent distributions of confidence relative to when no decision is made. This finding provides strong empirical support for a parameter-free prediction of the quantum model.
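A minimal numerical sketch of the interference mechanism the abstract describes, assuming an idealized continuous-time quantum walk on a line of evidence states rather than the authors' fitted model (the lattice size, coupling and evolution times below are arbitrary choices): for a classical Markov accumulator, an intermediate readout whose outcome is not conditioned on leaves the final distribution unchanged, whereas for a superposed quantum state the intermediate measurement destroys interference and shifts the final distribution.

    # Idealized interference demo (not the paper's fitted model): compare the final
    # distribution of a continuous-time quantum walk with and without an
    # intermediate projective measurement (the analogue of eliciting a decision).
    import numpy as np
    from scipy.linalg import expm

    n = 21                                                          # number of evidence levels (arbitrary)
    H = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)    # nearest-neighbour hopping

    psi0 = np.zeros(n, dtype=complex)
    psi0[n // 2] = 1.0                                              # start at the middle evidence level

    U1 = expm(-1j * 2.0 * H)                                        # evolution up to the first judgment
    U2 = expm(-1j * 2.0 * H)                                        # evolution up to the second judgment

    # Coherent evolution: no decision is elicited in between.
    p_coherent = np.abs(U2 @ (U1 @ psi0)) ** 2

    # Decision elicited midway: projective measurement in the evidence basis,
    # outcome not conditioned on, followed by incoherent propagation.
    p_mid = np.abs(U1 @ psi0) ** 2
    p_measured = (np.abs(U2) ** 2) @ p_mid

    print("largest shift in the final distribution:",
          round(float(np.max(np.abs(p_coherent - p_measured))), 4))
    # A Markov (classical) random walk predicts a zero shift; the quantum model
    # predicts the nonzero interference effect reported in the experiment.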
String-Theory Calculations Describe 'Birth of the Universe' Researchers in Japan have developed what may be the first string-theory model with a natural mechanism for explaining why our universe would seem to exist in three spatial dimensions if it actually has six more. According to their model, only three of the nine dimensions started to grow at the beginning of the universe, accounting both for the universe's continuing expansion and for its apparently three-dimensional nature.
String theory is a potential "theory of everything", uniting all matter and forces in a single theoretical framework, which describes the fundamental level of the universe in terms of vibrating strings rather than particles. Although the framework can naturally incorporate gravity even on the subatomic level, it implies that the universe has some strange properties, such as nine or ten spatial dimensions. String theorists have approached this problem by finding ways to "compactify" six or seven of these dimensions, or shrink them down so that we wouldn't notice them. Unfortunately, as Jun Nishimura of the High Energy Accelerator Research Organization (KEK) in Tsukuba says, "There are many ways to get four-dimensional space–time, and the different ways lead to different physics." The solution is not unique enough to produce useful predictions. These compactification schemes are studied through perturbation theory, in which all the possible ways that strings could interact are added up to describe the interaction. However, this only works if the interaction is relatively weak, with a distinct hierarchy in the likelihood of each possible interaction. If the interactions between the strings are stronger, with multiple outcomes equally likely, perturbation theory no longer works. Weakly interacting strings cannot describe the early universe with its high energies, densities and temperatures, so researchers have sought a way to study strings that strongly affect one another. To this end, some string theorists have tried to reformulate the theory using matrices. "The string picture emerges from matrices in the limit of infinite matrix size," says Nishimura. Five forms of string theory can be described with perturbation theory, but only one has a complete matrix form – Type IIB. Some even speculate that the Type IIB matrix model actually describes M-theory, thought to be the fundamental version of string theory that unites all five known types. The model developed by Sang-Woo Kim of Osaka University, Nishimura, and Asato Tsuchiya of Shizuoka University describes the behaviour of strongly interacting strings in nine spatial dimensions plus time, or 10 dimensions. Unlike perturbation theory, matrix models can be numerically simulated on computers, getting around some of the notorious difficulty of string-theory calculations. Although the matrices would have to be infinitely large for a perfect model, they were restricted to sizes from 8 × 8 to 32 × 32 in the simulation. The calculations using the largest matrices took more than two months on a supercomputer, says Kim. Physical properties of the universe appear in averages taken over hundreds or thousands of matrices. The trends that emerged from increasing the matrix size allowed the team to extrapolate how the model universe would behave if the matrices were infinite. "In our work, we focus on the size of the space as a function of time," says Nishimura. The limited sizes of the matrices mean that the team cannot see much beyond the beginning of the universe in their model. From what they can tell, it starts out as a symmetric, nine-dimensional space, with each dimension measuring about 10⁻³³ cm – a fundamental unit of length known as the Planck length. After some passage of time, the string interactions cause the symmetry of the universe to spontaneously break, causing three of the nine dimensions to expand. The other six are left stunted at the Planck length.
"The time when the symmetry is broken is the birth of the universe," says Nishimura. "The paper is remarkable because it suggests that there really is a mechanism for dynamically obtaining four dimensions out of a 10-dimensional matrix model," says Harold Steinacker of the University of Vienna in Austria.
Hikaru Kawai of Kyoto University, Japan, who worked with Tsuchiya and others to propose the IIB matrix model in 1997, is also very interested in the "clear signal of four dimensional space–time". "It would be a big step towards understanding the origin of our universe," he says. Although he finds that the evolution of the model universe in time is too simple and different from the general theory of relativity, he says the new direction opened by the work is "worth investigating intensively". Will the Standard Model emerge? The team has yet to prove that the Standard Model of particle physics will show up in its model, at much lower energies than this initial study of the very early universe. If it leaps that hurdle, the team can use it to explore cosmology. Compared with perturbative models, Steinacker says, "this model should be much more predictive". Nishimura hopes that by improving both the model and the simulation software, the team may soon be able to investigate the inflation of the early universe or the density distribution of matter, results which could be evaluated against the density distribution of the real universe. The research will be described in an upcoming paper in Physical Review Letters and a preprint is available at arXiv:1108.1540.
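The large-N extrapolation step described above can be illustrated generically; the sketch below uses invented observable values (only the 8-to-32 range of matrix sizes comes from the article) and simply fits the leading finite-size correction, assumed here to be of order 1/N, to estimate the infinite-matrix limit.

    # Generic illustration of extrapolating a simulated observable to infinite
    # matrix size. The observable values are hypothetical, not the team's data.
    import numpy as np

    matrix_sizes = np.array([8.0, 12.0, 16.0, 24.0, 32.0])      # within the 8-32 range reported
    observable   = np.array([1.10, 1.22, 1.28, 1.33, 1.36])     # invented measurements

    # Assume leading corrections of order 1/N and fit a straight line in 1/N.
    slope, intercept = np.polyfit(1.0 / matrix_sizes, observable, deg=1)
    print("extrapolated infinite-N value ≈", round(float(intercept), 3))   # value at 1/N -> 0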
Time-symmetric formulation of quantum theory provides new understanding of causality and free choice: The laws of classical mechanics are independent of the direction of time, but whether the same is true in quantum mechanics has been a subject of debate. While it is agreed that the laws that govern isolated quantum systems are time-symmetric, measurement changes the state of a system according to rules that only appear to hold forward in time, and there is a difference of opinion about the interpretation of this effect. Now theoretical physicists at the Université libre de Bruxelles have developed a fully time-symmetric formulation of quantum theory which establishes an exact link between this asymmetry and the fact that we can remember the past but not the future – a phenomenon that physicist Stephen Hawking has named the "psychological" arrow of time. The study offers new insights into the concepts of free choice and causality, and suggests that causality need not be considered a fundamental principle of physics. It also extends a cornerstone theorem in quantum mechanics due to Eugene Paul Wigner, pointing to new directions in the search for physics beyond the known models. The findings by Ognyan Oreshkov and Nicolas Cerf have been published this week in the journal Nature Physics. The idea that our choices at present can influence events in the future but not in the past is reflected in the rules of standard quantum theory as a principle that quantum theorists call "causality". In order to understand this principle, the authors of the new study analyze what the concept of choice in the context of quantum theory actually means. For example, we think that an experimenter can choose what measurement to perform on a given system, but not the outcome of the measurement. Correspondingly, according to the principle of causality, the choice of measurement can be correlated with outcomes of measurements in the future only, whereas the outcome of a measurement can be correlated with outcomes of both past and future measurements. The researchers argue that the defining property in virtue of which we interpret the variable describing the measurement – but not the outcome – as being up to the experimenter's choice is that it can be known before the actual measurement takes place. From this perspective, the principle of causality can be understood as a constraint on the information available about different variables at different times. This constraint is not time-symmetric, since both the choice of measurement and the outcome of a measurement can be known a posteriori. This, according to the study, is the essence of the asymmetry implicit in the standard formulation of quantum theory. "Quantum theory has been formulated based on asymmetric concepts that reflect the fact that we can know the past and are interested in predicting the future. But the concept of probability is independent of time, and from a physics perspective it makes sense to try to formulate the theory in fundamentally symmetric terms", says Ognyan Oreshkov, the lead author of the study. To this end, the authors propose to adopt a new notion of measurement that is not defined only based on variables in the past, but can depend on variables in the future too.
"In the approach we propose, measurements are not interpreted as up to the 'free choices' of agents, but simply describe information about the possible events in different regions of space-time", says Nicolas Cerf, a co-author of the study and director of the Centre for Quantum Information and Communication at ULB. In the time-symmetric formulation of quantum theory that follows from this approach, the principle of causality and the psychological arrow of time are both shown to arise from what physicists call boundary conditions – parameters based on which the theory makes predictions, but whose values could be arbitrary in principle. Thus, for instance, according to the new formulation, it is conceivable that in some parts of the universe causality may be violated. Another consequence of the time-symmetric formulation is an extension of a fundamental theorem by Wigner, which characterizes the mathematical representation of physical symmetries and is central to the understanding of many phenomena, such as what elementary particles can exist. The study shows that in the new formulation symmetries can be represented in ways not permitted by the standard formulation, which could have far-reaching physical implications. One speculative possibility is that such symmetries may be relevant in a theory of quantum gravity, since they have the form of transformations that have been conjectured to occur in the presence of black holes. "Our work shows that if we believe that time symmetry must be a property of the fundamental laws of physics, we have to consider the possibility for phenomena beyond those conceivable in standard quantum theory. Whether such phenomena exist and where we could search for them is a big open question", explains Oreshkov.
Power decreases trust in social exchange: How does lacking vs. possessing power in a social exchange affect people’s trust in their exchange partner? An answer to this question has broad implications for a number of exchange settings in which dependence plays an important role. Here, we report on a series of experiments in which we manipulated participants’ power position in terms of structural dependence and observed their trust perceptions and behaviors. Over a variety of different experimental paradigms and measures, we find that more powerful actors place less trust in others than less powerful actors do. Our results contradict predictions by rational actor models, which assume that low-power individuals are able to anticipate that a more powerful exchange partner will place little value on the relationship with them, will thus tend to behave opportunistically, and consequently cannot be trusted. Conversely, our results support predictions by motivated cognition theory, which posits that low-power individuals want their exchange partner to be trustworthy and then act according to that desire. Mediation analyses show that, consistent with the motivated cognition account, having low power increases individuals’ hope and, in turn, their perceptions of their exchange partners’ benevolence, which ultimately leads them to trust. Significance: Trust is pivotal to the functioning of society. This work tests competing predictions about how having low vs. high power may impact people’s tendency to place trust in others. Using different experimental paradigms and measures, and confirming predictions based on motivated cognition theory, we show that people low in power are significantly more trusting than more powerful people and that this effect can be explained by the constructs of hope and perceived benevolence. Our findings make important contributions to the literatures on trust, power, and motivated cognition.
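A minimal sketch of the product-of-coefficients logic behind mediation analyses of this kind, using synthetic data with invented effect sizes and a single mediator (the study itself reports a serial mediation through hope and then perceived benevolence): the indirect effect of the power manipulation on trust is estimated as the product of the path from condition to mediator and the path from mediator to trust, controlling for condition.

    # Synthetic single-mediator example (invented data; not the study's estimates).
    import numpy as np

    rng = np.random.default_rng(42)
    n = 5000
    low_power = rng.integers(0, 2, size=n).astype(float)          # 1 = low-power condition
    hope  = 0.5 * low_power + rng.normal(size=n)                  # path a
    trust = 0.4 * hope + 0.1 * low_power + rng.normal(size=n)     # paths b and c'

    def ols_coeffs(y, *predictors):
        """Least-squares slopes (intercept dropped)."""
        X = np.column_stack([np.ones(len(y)), *predictors])
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    (a,) = ols_coeffs(hope, low_power)                  # condition -> mediator
    b, c_prime = ols_coeffs(trust, hope, low_power)     # mediator -> trust, plus direct path
    print("indirect (mediated) effect a*b ≈", round(float(a * b), 3))
    print("direct effect c' ≈", round(float(c_prime), 3))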
Physicists put the arrow of time under a quantum microscope: the arrow of time can arise via quantum fluctuation by Jon Cartwright - Disorder, or entropy, in a microscopic quantum system has been measured by an international group of physicists. The team hopes that the feat will shed light on the "arrow of time": the observation that time always marches towards the future. The experiment involved continually flipping the spin of carbon atoms with an oscillating magnetic field and links the emergence of the arrow of time to quantum fluctuations between one atomic spin state and another. "That is why we remember yesterday and not tomorrow," explains group member Roberto Serra, a physicist specializing in quantum information at the Federal University of ABC in Santo André, Brazil. At the fundamental level, he says, quantum fluctuations are involved in the asymmetry of time. The arrow of time is often taken for granted in the everyday world. We see an egg breaking, for example, yet we never see the yolk, white and shell fragments come back together again to recreate the egg. It seems obvious that the laws of nature should not be reversible, yet there is nothing in the underlying physics to say so. The dynamical equations of an egg breaking run just as well forwards as they do backwards. Entropy, however, provides a window onto the arrow of time. Most eggs look alike, but a broken egg can take on any number of forms: it could be neatly cracked open, scrambled, splattered all over a pavement, and so on. A broken egg is a disordered state – that is, a state of greater entropy – and because there are many more disordered than ordered states, it is more likely for a system to progress towards disorder than order. This probabilistic reasoning is encapsulated in the second law of thermodynamics, which states that the entropy of a closed system always increases over time. According to the second law, time cannot suddenly go backwards because this would require entropy to decrease. It is a convincing argument for a complex system made up of a great many interacting particles, like an egg, but what about a system composed of just one particle? Serra and colleagues have delved into this murky territory with measurements of entropy in an ensemble of carbon-13 atoms contained in a sample of liquid chloroform. Although the sample contained roughly a trillion chloroform molecules, the non-interacting quantum nature of the molecules meant that the experiment was equivalent to performing the same measurement on a single carbon atom, one trillion times. Serra and colleagues applied an oscillating external magnetic field to the sample, which continually flipped the spin state of a carbon atom between up and down. They ramped up the intensity of the field oscillations to increase the frequency of the spin-flipping, and then brought the intensity back down again. Had the system been reversible, the overall distribution of carbon spin states would have been the same at the end as at the start of the process. Using nuclear magnetic resonance and quantum-state tomography, however, Serra and colleagues measured an increase in disorder among the final spins. Because of the quantum nature of the system, this was equivalent to an increase in entropy in a single carbon atom. According to the researchers, entropy rises for a single atom because of the speed with which it is forced to flip its spin. 
Unable to keep up with the field-oscillation intensity, the atom begins to fluctuate randomly, like an inexperienced dancer failing to keep pace with up-tempo music. "It's easier to dance to a slow rhythm than a fast one," says Serra. The group has managed to observe the existence of the arrow of time in a quantum system, says experimentalist Mark Raizen of the University of Texas at Austin in the US, who has also studied irreversibility in quantum systems. But Raizen stresses that the group has not observed the "onset" of the arrow of time. "This [study] does not close the book on our understanding of the arrow of time, and many questions remain," he adds. One of those questions is whether the arrow of time is linked to quantum entanglement – the phenomenon whereby two particles exhibit instantaneous correlations with each other, even when separated by vast distances. This idea is nearly 30 years old and has enjoyed a recent resurgence in popularity. However, this link is less to do with growing entropy and more to do with an unstoppable dispersion of quantum information. Indeed, Serra believes that by harnessing quantum entanglement, it may even be possible to reverse the arrow of time in a microscopic system. "We're working on it," he says. "In the next generation of our experiments on quantum thermodynamics we will explore such aspects." The research is described in Physical Review Letters.
New Derivation of pi Links Quantum Physics and Pure Mathematics: In 1655 the English mathematician John Wallis published a book in which he derived a formula for pi as the product of an infinite series of ratios. Now researchers from the University of Rochester, in a surprise discovery, have found the same formula in quantum mechanical calculations of the energy levels of a hydrogen atom. "We weren't looking for the Wallis formula for pi. It just fell into our laps," said Carl Hagen, a particle physicist at the University of Rochester. Having noticed an intriguing trend in the solutions to a problem set he had developed for students in a class on quantum mechanics, Hagen recruited mathematician Tamar Friedmann and they realized this trend was in fact a manifestation of the Wallis formula for pi. "It was a complete surprise - I jumped up and down when we got the Wallis formula out of equations for the hydrogen atom," said Friedmann. "The special thing is that it brings out a beautiful connection between physics and math. I find it fascinating that a purely mathematical formula from the 17th century characterizes a physical system that was discovered 300 years later."
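For reference, the Wallis product in question is

\[
\frac{\pi}{2} \;=\; \prod_{n=1}^{\infty} \frac{2n}{2n-1}\cdot\frac{2n}{2n+1}
\;=\; \frac{2}{1}\cdot\frac{2}{3}\cdot\frac{4}{3}\cdot\frac{4}{5}\cdot\frac{6}{5}\cdot\frac{6}{7}\cdots
\]

In the Rochester result, as reported, the product emerges from comparing variational estimates of the hydrogen atom's energy levels with the exact values in the limit of large angular momentum, where a ratio of Gamma functions reduces to exactly this infinite product.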
Could All Particles in Physics Be Mini Black Holes?: The idea that all particles are mini black holes has major implications for both particle physics and astrophysics, say scientists. Could it really be possible that all particles are mini black holes? That’s the tantalising suggestion from Donald Coyne from UC Santa Cruz (now deceased) and D C Cheng from the Almaden Research Center near San Jose. Black holes are regions of space in which gravity is so strong that nothing, not even light, can escape. The trouble with gravity is that on anything other than an astrophysical scale, it is so weak that it can safely be ignored. However, many physicists have assumed that on the tiniest scale, the Planck scale, gravity regains its strength. In recent years some evidence to support this contention has emerged from string theory, in which gravity plays a stronger role in higher-dimensional space. It’s only in our four-dimensional space that gravity appears so weak. Since these extra dimensions become important only on the Planck scale, it’s at that level that gravity re-asserts itself. And if that’s the case, then mini black holes become a possibility. Coyne and Cheng ask what properties black holes might have on that scale, and it turns out that they may be far more varied than anyone imagined. The quantisation of space on this level means that mini black holes could turn up at all kinds of energy levels. They predict the existence of huge numbers of black hole particles at different energy levels. So common are these black holes that the authors suggest that "All particles may be varying forms of stabilized black holes". That’s an ambitious claim that’ll need plenty of experimental backing. The authors say this may come from the LHC, which could begin to probe the energies at which these kinds of black holes will be produced. The authors end with the caution that it would be wrong to think of the LHC as a "black hole factory"; not because it won’t produce black holes (it almost certainly will), but because, if they are right, every other particle accelerator in history would have been producing black holes as well. In fact, if this thinking is correct, there’s a very real sense in which we are made from black holes. Curious!
Every Thing Must Go: Metaphysics Naturalized: Every Thing Must Go argues that the only kind of metaphysics that can contribute to objective knowledge is one based specifically on contemporary science as it really is, and not on philosophers' a priori intuitions, common sense, or simplifications of science. In addition to showing how recent metaphysics has drifted away from connection with all other serious scholarly inquiry as a result of not heeding this restriction, authors James Ladyman and Don Ross demonstrate how to build a metaphysics compatible with current fundamental physics ('ontic structural realism'), which, when combined with their metaphysics of the special sciences ('rainforest realism'), can be used to unify physics with the other sciences without reducing these sciences to physics itself. Taking science metaphysically seriously, Ladyman and Ross argue, means that metaphysicians must abandon the picture of the world as composed of self-subsistent individual objects, and the paradigm of causation as the collision of such objects. Every Thing Must Go also assesses the role of information theory and complex systems theory in attempts to explain the relationship between the special sciences and physics, treading a middle road between the grand synthesis of thermodynamics and information, and eliminativism about information. The consequences of the authors' metaphysical theory for central issues in the philosophy of science are explored, including the implications for the realism vs. empiricism debate, the role of causation in scientific explanations, the nature of causation and laws, the status of abstract and virtual objects, and the objective reality of natural kinds.
'Zeno effect' verified—atoms won't move while you watch: The work opens the door to a fundamentally new method to control and manipulate the quantum states of atoms and could lead to new kinds of sensors. The experiments were performed in the Ultracold Lab of Mukund Vengalattore, assistant professor of physics, who has established Cornell's first program to study the physics of materials cooled to temperatures as low as .000000001 degree above absolute zero. The work is described in the Oct. 2 issue of the journal Physical Review Letters. Graduate students Yogesh Patil and Srivatsan K. Chakram created and cooled a gas of about a billion rubidium atoms inside a vacuum chamber and suspended the mass between laser beams. In that state the atoms arrange in an orderly lattice just as they would in a crystalline solid. But at such low temperatures, the atoms can "tunnel" from place to place in the lattice. The famous Heisenberg uncertainty principle says that the position and velocity of a particle cannot both be precisely known. Temperature is a measure of a particle's motion. Under extreme cold, velocity is almost zero, so there is a lot of flexibility in position; when you observe them, atoms are as likely to be in one place in the lattice as another. The researchers demonstrated that they were able to suppress quantum tunneling merely by observing the atoms. This so-called "quantum Zeno effect", named for a Greek philosopher, derives from a proposal in 1977 by E. C. George Sudarshan and Baidyanath Misra at the University of Texas at Austin, who pointed out that the weird nature of quantum measurements allows, in principle, for a quantum system to be "frozen" by repeated measurements. Previous experiments have demonstrated the Zeno effect with the "spins" of subatomic particles. "This is the first observation of the Quantum Zeno effect by real space measurement of atomic motion," Vengalattore said. "Also, due to the high degree of control we've been able to demonstrate in our experiments, we can gradually 'tune' the manner in which we observe these atoms. Using this tuning, we've also been able to demonstrate an effect called 'emergent classicality' in this quantum system." Quantum effects fade, and atoms begin to behave as expected under classical physics. The researchers observed the atoms under a microscope by illuminating them with a separate imaging laser. A light microscope can't see individual atoms, but the imaging laser causes them to fluoresce, and the microscope captured the flashes of light. When the imaging laser was off, or turned on only dimly, the atoms tunneled freely. But as the imaging beam was made brighter and measurements made more frequently, the tunneling was reduced dramatically. "This gives us an unprecedented tool to control a quantum system, perhaps even atom by atom," said Patil, lead author of the paper. Atoms in this state are extremely sensitive to outside forces, he noted, so this work could lead to the development of new kinds of sensors. The experiments were made possible by the group's invention of a novel imaging technique that made it possible to observe ultracold atoms while leaving them in the same quantum state. "It took a lot of dedication from these students and it has been amazing to see these experiments be so successful," Vengalattore said. "We now have the unique ability to control quantum dynamics purely by observation." The popular press has drawn a parallel between this work and the "weeping angels" depicted in the Dr.
Who television series – alien creatures who look like statues and can't move as long as you're looking at them. There may be some sense to that. In the quantum world, the folk wisdom really is true: "A watched pot never boils."
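A textbook illustration of the effect, assuming an idealized two-level system rather than the lattice-tunneling experiment described above: if the undisturbed evolution would carry the system out of its initial state, dividing the same evolution into N intervals punctuated by projective measurements gives a probability of surviving every check of [cos²(θ/2N)]^N, which tends to 1 as the checks become more frequent.

    # Idealized quantum Zeno illustration (two-level system, not the lattice experiment):
    # frequent projective measurements suppress a coherent transition.
    import numpy as np

    total_angle = np.pi                       # undisturbed evolution would fully flip the state
    for n_measurements in (1, 2, 5, 10, 100, 1000):
        # Probability of being found in the initial state at all n equally spaced checks.
        p_survive = np.cos(total_angle / (2 * n_measurements)) ** (2 * n_measurements)
        print(f"{n_measurements:5d} measurements -> survival probability {p_survive:.4f}")
    # One final measurement finds the state flipped (survival probability 0); with ever
    # more frequent observation the survival probability approaches 1, which is the
    # quantitative sense in which "a watched pot never boils".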
Is Philosophy a Grand Waste of Time?: What is philosophy? Is it largely a grand waste of time, as some scientists (like Peter Atkins and Stephen Hawking) suppose? Here's an extract from a forthcoming publication of mine ... On my view, philosophical questions are for the most part conceptual rather than scientific or empirical, and the methods of philosophy are, broadly speaking, conceptual rather than scientific or empirical. Here's a simple conceptual puzzle. At a family get-together the following relations held directly between those present: Son, Daughter, Mother, Father, Aunt, Uncle, Niece, Nephew, and Cousin. Could there have been only four people present at that gathering? At first glance, there might seem to be a conceptual obstacle to there being just four people present - surely, more people are required for all those familial relations to hold between them? But in fact the appearance is deceptive. There could just be four people there. To see that there being just four people present is not conceptually ruled out, we have to unpack, and explore the connections between, the various concepts involved. That is something that can be done from the comfort of your armchair. Many philosophical puzzles have a similar character. Consider for example this puzzle associated with Heraclitus. If you jump into a river and then jump in again, the river will have changed in the interim: the water will have moved, the mud changed position, and so on. So it won't be the same. But if it's not the same river, then the number of rivers that you jump into is two, not one. It seems we're forced to accept the paradoxical - indeed, absurd - conclusion that you can't jump into one and the same river twice. Being forced into such a paradox by a seemingly cogent argument is a common philosophical predicament. This particular puzzle is fairly easily solved: the paradoxical conclusion that the number of rivers jumped into is two, not one, is generated by a faulty inference. Philosophers distinguish at least two kinds of identity or sameness. Numerical identity holds where the number of objects is one, not two (as when we discover that Hesperus, the evening star, is identical with Phosphorus, the morning star). Qualitative identity holds where two objects share the same qualities (e.g. two billiard balls that are molecule-for-molecule duplicates of each other). We use the expression 'the same' to refer to both sorts of identity. Having made this conceptual clarification, we can now see that the argument that generates our paradox trades on an ambiguity. It involves a slide from the true premise that the river jumped in the second time isn't qualitatively 'the same' to the conclusion that it is not numerically 'the same'. We fail to spot the flaw in the reasoning because the words 'the same' are used in each case. But now the paradox is resolved: we don't have to accept that absurd conclusion. This is an example of how, by unpacking and clarifying concepts, it is possible to solve a classic philosophical puzzle. Perhaps not all philosophical puzzles can be solved by such means, but at least one can. So some philosophical puzzles are essentially conceptual in nature, and some (well, one at least) can be solved by armchair, conceptual methods. Still, I have begun with a simple, some might say trivial, philosophical example. What of the so-called 'hard problems' of philosophy, such as the mind-body problem?
The mind-body problem, or at least certain versions of it, also appears to be essentially conceptual in character. On the one hand, there appear to be reasons to think that if the mental is to have causal effects on the physical, then it will have to be identical with the physical. On the other hand, there appear to be conceptual obstacles to identifying the mental with the physical. Of course, scientists might establish various correlations between the mental and the physical. Suppose, for the sake of argument, that science establishes that whenever someone is in pain, their C-fibres are firing, and vice versa. Would scientists have then established that these properties are one and the same property - that pain just is C-fibre firing - in the way they have established that, say, heat just is molecular motion or water just is H2O? Not necessarily. Correlation is not identity. And it strikes many of us as intuitively obvious that pain just couldn't be a physical property like C-fibre firing - that these properties just couldn't be identical in that way. Of course, the intuition that certain things are conceptually ruled out can be deceptive. Earlier, we saw that the appearance that the concepts son, daughter, etc. are such that there just had to be more than four people at that family gathering was mistaken: when we unpack the concepts and explore the connections between them, it turns out there's no such conceptual obstacle. Philosophers have attempted to sharpen up the common intuition that there's a conceptual obstacle to identifying pain with C-fibre firing or some other physical property into a philosophical argument. Consider Kripke's anti-physicalist argument, for example, which turns on the thought that the conceptual impossibility of fool's pain (of something that feels like pain but isn't because the underlying physical essence is absent), combined with the conceptual possibility of pain without C-fibre firing (I can conceive of a situation in which I think I am in pain though my C-fibres are not firing), conceptually rules out pain having C-fibre firing as an underlying physical essence (which it would have if the identity theory were true). Has Kripke here identified a genuine conceptual obstacle? Perhaps. Or perhaps not: perhaps it will turn out, on closer examination, that there is no such obstacle here. The only way to show that, however, will be through logical and conceptual work. Just as in the case of our puzzle about whether only four people might be at the family gathering and the puzzle about jumping into one and the same river twice, a solution will require that we engage, not in an empirical investigation, but in reflective armchair inquiry. Establishing more facts about, and a greater understanding of, what happens in people's brains when they are in various mental states will no doubt be scientifically worthwhile, but it won't, by itself, allow us to answer the question of whether there is such a conceptual obstacle. So, many philosophical problems - from some of the most trivial to some of the hardest - appear to be essentially conceptual in nature, requiring armchair, conceptual work to solve. Some are solvable, and indeed have even been solved (the puzzle about the river). Others aren't solved, though perhaps they might be.
On the other hand, it might turn out that at least some philosophical problems are necessarily insoluble, perhaps because we have certain fundamental conceptual commitments that are either directly irreconcilable or else generate unavoidable paradoxes when combined with certain empirically discovered facts. So there are perfectly good questions that demand answers, and that can in at least some cases be answered, though not by empirical means, let alone by the very specific forms and institutions of that mode of investigation referred to as 'the scientific method'. In order to solve many classic philosophical problems, we'll need to retire not to the lab, but to our armchairs. But is that all there is to philosophy? What of the grander metaphysical vision traditionally associated with academic philosophy? What of plumbing the deep, metaphysical structure of reality? That project is often thought to involve discerning, again by armchair methods, not what is the case (that's the business of empirical enquiry) but what, metaphysically, must be so. But how are philosophers equipped to reveal such hidden metaphysical depths by sitting in their armchairs with their eyes closed and having a good think? I suspect this is the main reason why there's considerable suspicion of philosophy in certain scientific circles. If we want to find out about reality - about how things stand outside our own minds - surely we will need to rely on empirical methods. There is no other sort of window on to reality - no other knowledge-delivery mechanism by which knowledge of the fundamental nature of that reality might be revealed. This is, of course, a traditional empiricist worry. Empiricists insist it's by means of our senses (or our senses enhanced by scientific tools and techniques) that the world is ultimately revealed. There is no mysterious extra sense, faculty, or form of intuition we might employ, while sat in our armchairs, to reveal further, deep, metaphysical facts about external reality. If the above thought is correct, and armchair methods are incapable of revealing anything about the nature of reality outside our own minds, then philosophy, conceived as a grand metaphysical exploration upon which we can embark while never leaving the comfort of our armchairs, is in truth a grand waste of time. I'm broadly sympathetic to this skeptical view about the value of armchair methods in revealing reality. Indeed, I suspect it's correct. So I have a fairly modest conception of the capabilities of philosophy. Yes, I believe we can potentially solve philosophical puzzles by armchair methods, and I believe this can be a valuable exercise. However, I'm suspicious of the suggestion that we should construe what we then achieve as our having made progress in revealing the fundamental nature of reality, a task to which I suspect such reflective, armchair methods are hopelessly inadequate.

Toward a unifying framework for evolutionary processes: the theory of population genetics and evolutionary computation have been evolving separately for nearly 30 years. Many results have been independently obtained in both fields, and many others are unique to their respective fields. We aim to bridge this gap by developing a unifying framework for evolutionary processes that allows both evolutionary algorithms and population genetics models to be cast in the same formal framework. The framework we present here decomposes the evolutionary process into its several components in order to facilitate the identification of similarities between different models. In particular, we propose a classification of evolutionary operators based on the defining properties of the different components. We cast several commonly used operators from both fields into this common framework. Using this, we map different evolutionary and genetic algorithms to different evolutionary regimes and identify candidates with the most potential for the translation of results between the fields. This provides a unified description of evolutionary processes and represents a stepping stone towards new tools and results for both fields.
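A minimal sketch of the kind of operator decomposition the abstract describes, with illustrative names and interfaces of my own rather than the paper's formalism: the same generic loop runs a genetic-algorithm-style process and a Wright-Fisher-style population-genetics process, differing only in which selection component is plugged in.

```python
# Illustrative sketch (not the paper's formalism): one generic evolutionary step
# built from interchangeable operator components, so that a genetic-algorithm-style
# process and a population-genetics-style process differ only in the operators used.
import random

def evolve(population, fitness, select, vary, generations=100):
    """Generic evolutionary process: repeatedly apply selection, then variation."""
    for _ in range(generations):
        parents = select(population, fitness)      # selection component
        population = [vary(p) for p in parents]    # variation component (mutation, etc.)
    return population

# --- example operator components ---

def tournament_selection(population, fitness, k=2):
    """GA-style tournament selection (common in evolutionary computation)."""
    return [max(random.sample(population, k), key=fitness) for _ in population]

def fitness_proportional_selection(population, fitness):
    """Wright-Fisher-style resampling in proportion to fitness (population genetics)."""
    weights = [fitness(x) for x in population]
    return random.choices(population, weights=weights, k=len(population))

def bitflip_mutation(genome, rate=0.01):
    """Per-locus mutation, shared by both traditions."""
    return tuple(g ^ 1 if random.random() < rate else g for g in genome)

if __name__ == "__main__":
    pop = [tuple(random.randint(0, 1) for _ in range(20)) for _ in range(50)]
    ones = lambda g: sum(g) + 1e-9   # toy fitness: count of 1-bits (offset avoids zero weights)
    ga_pop = evolve(pop, ones, tournament_selection, bitflip_mutation)
    wf_pop = evolve(pop, ones, fitness_proportional_selection, bitflip_mutation)
    print(max(map(sum, ga_pop)), max(map(sum, wf_pop)))
```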
Piketty and the limits of marginal productivity theory - Lars Pålsson Syll: "The outstanding faults of the economic society in which we live are its failure to provide for full employment and its arbitrary and inequitable distribution of wealth and incomes … I believe that there is social and psychological justification for significant inequalities of income and wealth, but not for such large disparities as exist today (John Maynard Keynes, General Theory, 1936)." Thomas Piketty’s book Capital in the Twenty-First Century is in many ways an impressive magnum opus. It’s a wide-ranging and weighty book, almost 700 pages thick, containing an enormous amount of empirical material on the distribution of income and wealth for almost all developed countries in the world for the last one and a half centuries. But it does not stop at this massive amount of data. Piketty also theorizes and tries to interpret the trends in the presented historical time series data. One of the more striking – and debated – trends that emerges from the data is a kind of generalized U-shaped Kuznets curve for the shares of the top 10 % and top 1 % of wealth and income, showing extremely high values for the period up to the first world war, and then dropping until the 1970/80s, when they – especially in the top 1% – start to rise sharply. Contrary to Kuznets’s (1955) original hypothesis, there does not seem to be any evidence for the idea that income differences should diminish pari passu with economic development. The gains that the increase in productivity has led to have been far from evenly distributed in society. The optimistic view that there are automatic income and wealth equalizers, commonly held among growth and development economists until a few years ago, has been proven unwarranted. So, then, why have income differences more or less exploded since the 1980s? In an ongoing trend towards increasing inequality in both developing and emerging countries all over the world, wage shares have fallen substantially – and the growth in real wages has lagged far behind the growth in productivity – over the past three decades. As already argued by Karl Marx 150 years ago, the division between profits and wages is ultimately determined by the struggle between classes – something fundamentally different to hypothesized “marginal products” in neoclassical Cobb-Douglas or CES varieties of neoclassical production functions. Compared to Marx’s Capital, the one written by Piketty has a much more fragile foundation when it comes to theory. Where Piketty is concentrating on classifying different income and wealth categories, Marx was focusing on the facedown between different classes, struggling to appropriate as large a portion of the societal net product as possible. Piketty’s painstaking empirical research is, doubtless, very impressive, but his theorizing – although occasionally critical of orthodox economics and giving a rather dismal view of present-day and future capitalism as a rich-get-richer inequality society – is to a large extent shackled by neoclassical economic theory, something that unfortunately makes some of his more central theoretical analyses rather unfruitful from the perspective of realism and relevance. A society where we allow the inequality of incomes and wealth to increase without bounds sooner or later implodes. 
A society that promotes unfettered selfishness as the one and only virtue erodes the cement that keeps us together, and in the end we are only left with people dipped in the ice cold water of egoism and greed. If reading Piketty’s magnum opus gets people thinking about these dangerous trends in modern capitalism, it may – in spite of its theoretical limitations – have a huge positive political impact. And that is not so bad. For, as the author of the original Capital once famously wrote: "The philosophers have only interpreted the world, in various ways. The point, however, is to change it."
Quantum Physics, Interpretations, and Bell’s Theorem: Two Neglected Solutions - Bell’s theorem admits several interpretations or ‘solutions’, the standard interpretation being ‘indeterminism’, another being ‘nonlocality’. In this article two further solutions are investigated, termed here ‘superdeterminism’ and ‘supercorrelation’. The former is especially interesting for philosophical reasons, if only because it is always rejected on the basis of extra-physical arguments. The latter, supercorrelation, will be studied here by investigating model systems that can mimic it, namely spin lattices. It is shown that in these systems the Bell inequality can be violated, even if they are local according to usual definitions. Violation of the Bell inequality is retraced to violation of ‘measurement independence’. These results emphasize the importance of studying the premises of the Bell inequality in realistic systems.
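For reference, the inequality at issue is usually taken in its CHSH form; the following is a standard statement of it and of the 'measurement independence' assumption the abstract mentions (notation mine, not the paper's).

```latex
% CHSH form of the Bell inequality: for ±1-valued outcomes with detector
% settings a, a' and b, b', any local model that also satisfies measurement
% independence obeys
\[
  |S| \;=\; \bigl| E(a,b) + E(a,b') + E(a',b) - E(a',b') \bigr| \;\le\; 2 ,
\]
% whereas quantum mechanics allows |S| up to 2\sqrt{2}. "Measurement
% independence" is the assumption that the hidden-variable distribution does
% not depend on the settings:
\[
  \rho(\lambda \mid a, b) \;=\; \rho(\lambda) .
\]
```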
Softly Fine-Tuned Standard Model and the Scale of Inflation: The direct coupling between the Higgs field and the spacetime curvature, if finely tuned, is known to stabilize the Higgs boson mass. The fine-tuning is soft because the Standard Model (SM) parameters are subject to no fine-tuning thanks to their independence from the Higgs-curvature coupling. This soft fine-tuning leaves behind a large vacuum energy ∝ Λ_UV^4 which inflates the Universe with a Hubble rate ∝ Λ_UV, Λ_UV being the SM ultraviolet boundary. This means that the tensor-to-scalar ratio inferred from cosmic microwave background polarization measurements by BICEP2, Planck and others leads to the determination of Λ_UV. The exit from the inflationary phase, as usual, is accomplished via decays of the vacuum energy. Here we show that identification of Λ_UV with the inflaton, as a sliding UV scale upon the SM, respects the soft fine-tuning constraint and does not disrupt the stability of the SM Higgs boson.
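For orientation, and not taken from the paper itself, these are the standard single-field slow-roll relations usually used to turn a measured tensor-to-scalar ratio r into an inflationary energy scale, which this scenario would identify with Λ_UV.

```latex
% Slow-roll relations (reduced Planck mass M_Pl ≈ 2.4 × 10^18 GeV):
\[
  H^2 \;\simeq\; \frac{V}{3 M_{\mathrm{Pl}}^2},
  \qquad
  V^{1/4} \;\simeq\; 1.06 \times 10^{16}\,\mathrm{GeV}
  \left( \frac{r}{0.01} \right)^{1/4},
\]
% so a measured r fixes the inflationary vacuum energy V, which this scenario
% identifies with \Lambda_{\mathrm{UV}}^4.
```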
Pinpointing punishment: Study identifies how a key brain region orchestrates punitive decisions: It’s a question most attorneys wish they could answer: How and why do judges and juries arrive at their decisions? The answer, according to Joshua Buckholtz, may lie in the way our brains are wired. A new study co-authored by Buckholtz, an assistant professor of psychology at Harvard, René Marois, professor and chair of psychology at Vanderbilt University, and colleagues explains how a brain region called the dorsolateral prefrontal cortex (DLPFC) coordinates third-party punishment decisions of the type made by judges and juries. The study is described in a paper published recently in the journal Neuron. “Third-party punishment is the cornerstone of all modern systems of justice, and this study suggests that our ability to make these types of decisions originates in a very basic form of information processing that is not specific to social decision-making at all,” Buckholtz said. “We think that this low-level, domain-general process of information integration forms a foundation for bootstrapping higher-order cognitive and social processes.” For Buckholtz and Marois, the new paper represents the culmination of more than seven years of work. “We were able to significantly change the chain of decision-making and reduce punishment for crimes without affecting blameworthiness,” said Marois, co-senior author of the study. “This strengthens evidence that the dorsolateral prefrontal cortex integrates information from other parts of the brain to determine punishment and shows a clear neural dissociation between punishment decisions and moral-responsibility judgments.” While still a graduate student at Vanderbilt, Buckholtz and Marois published the first study of the neural mechanisms that underlie such third-party punishment decisions, and continued to explore those mechanisms in later studies. But while those earlier papers showed that dorsolateral prefrontal cortex activity was correlated with punishment behavior, they weren’t able to pin down a causal role, or explain exactly what that brain region did to support these decisions. “It wasn’t entirely clear. Was this region corresponding to an evaluation of the mental state, or blameworthiness, of the perpetrator, or was it performing some other function? Was it assessing causal responsibility in a more general sense?” Buckholtz asked. “In this paper, we tried to develop a way to selectively map the role of this region to a more specific process and exclude alternative hypotheses.” To do that, Buckholtz, Marois, and colleagues turned to transcranial magnetic stimulation (TMS), a non-invasive technique that uses powerful electromagnets to reversibly interrupt brain information processing. As part of the study, Buckholtz and colleagues asked volunteers to read a series of scenarios that described a protagonist committing crimes ranging from simple theft to rape and murder. Each scenario varied by how morally responsible the perpetrator was for his or her actions and the degree of harm caused. In separate sessions, participants estimated perpetrators’ level of blameworthiness for each crime, and decided how much punishment they should face while researchers stimulated the brain region using the transcranial magnetic method. “What we show is that when you disrupt DLPFC activity, it doesn’t change the way they evaluate blameworthiness, but it does reduce the punishments they assign to morally responsible agents,” Buckholtz said. 
The team was able to confirm those findings using functional MRI, and additionally was able to show that the dorsolateral prefrontal cortex was only sensitive to moral responsibility when making punishment (but not blameworthiness) decisions. This supported the idea that the brain region was not simply registering the causal responsibility of an action. Still, it didn’t answer what the region was actually doing during punishment decisions. “There had been some suggestion by others that DLPFC was important for inhibiting self-interested responses during punishment. That idea wasn’t consistent with our prior data, which led us to propose a different model,” Buckholtz said. “What this region is really good at — and it’s good at it regardless of the type of decision being made — is integrating information. In particular, punishment decisions require an integration of the culpability of a perpetrator for a wrongful act with the amount of harm they actually caused by the act.” In a previous study led by co-author Michael Treadway, now at Emory University, the authors showed that other brain regions are principally responsible for representing culpability and harm, and these areas pass this information to the prefrontal cortex when it comes time to make a decision. Using statistical models, the team showed that, under normal conditions, the impact of a perpetrator’s culpability on punishment decisions is negatively correlated with the impact of information about the amount of harm caused. “You can think of it as a zero-sum game,” Marois said. “The more you’re focused on the harm someone causes, the less you’re going to focus on how culpable they are, and the more you’re focused on their culpability, the less you focus on the harm.” Disrupting dorsolateral prefrontal cortex function, however, upends that balance. “It makes people rely more heavily on harm information and less heavily on culpability information,” Buckholtz explained. “Given the fact that, overall, TMS reduces punishment, that seemed counterintuitive to us at first. When we looked at the type of crimes this was affecting, we found it was mostly mid-range harms, like property crime and assaults. In such cases, the harm is relatively mild, but the person committing the crime had the intent to do much worse." As an example, Buckholtz cited the case of an assault that results in a broken arm. If one focuses on the perpetrator’s culpability, it’s easy to imagine that the assailant intended to do much more damage. In such an instance, focusing on the intent will lead to higher punishment than if one gives more weight to the actual amount of harm. The finding that a short dose of magnetic stimulation changes punishment decisions is sure to be of interest to those in the legal field. But not so fast, said Buckholtz. “Any suggestion that there are real-world applications for this work is wildly overblown. The magnitude of the TMS effect is quite modest, and our experiment does not replicate the conditions under which people make decisions in trial courts. The value of this study is in revealing basic mechanisms that the brain uses to render these decisions. TMS has no place in the legal system.” This study was made possible through support from the Research Network on Law and Neuroscience, supported by the John D. and Catherine T. 
MacArthur Foundation, which fosters research collaboration between neuroscientists and legal scholars; the National Institute of Mental Health; the National Institute on Drug Abuse; the Sloan Foundation; the Brain & Behavior Research Foundation; and the Massachusetts General Hospital Center for Law, Brain, and Behavior.
Physicists experimentally realize a quantum Hilbert hotel: (Phys.org)—In 1924, the mathematician David Hilbert described a hotel with an infinite number of rooms that are all occupied. Demonstrating the counterintuitive nature of infinity, he showed that the hotel could still accommodate additional guests. Although clearly no such brick-and-mortar hotel exists, in a new paper published in Physical Review Letters, physicists Václav Potoček, et al., have physically realized a quantum Hilbert hotel by using a beam of light.

In Hilbert's thought experiment, he explained that additional rooms could be created in a hotel that already has an infinite number of rooms because the hotel manager could simply "shift" all of the current guests to a new room according to some rule, such as moving everyone up one room (to leave the first room empty) or moving everyone up to twice their current room number (to create an infinite number of empty rooms by leaving the odd-numbered rooms empty). In their paper, the physicists proposed two ways to model this phenomenon—one theoretical and one experimental—both of which use the infinite number of quantum states of a quantum system to represent the infinite number of hotel rooms in a hotel. The theoretical proposal uses the infinite number of energy levels of a particle in a potential well, and the experimental demonstration uses the infinite number of orbital angular momentum states of light. The scientists showed that, even though there is initially an infinite number of these states (rooms), the states' amplitudes (room numbers) can be remapped to twice their original values, producing an infinite number of additional states. On one hand, the phenomenon is counterintuitive: by doubling an infinite number of things, you get infinitely many more of them. And yet, as the physicists explain, it still makes sense because the total sum of the values of an infinite number of things can actually be finite. "As far as there being an infinite amount of 'something,' it can make physical sense if the things we can measure are still finite," coauthor Filippo Miatto, at the University of Waterloo and the University of Ottawa, told Phys.org. "For example, a coherent state of a laser mode is made with an infinite set of number states, but as the number of photons in each of the number states increases, the amplitudes decrease so at the end of the day when you sum everything up the total energy is finite. The same can hold for all of the other quantum properties, so no, it is not surprising to the trained eye." The physicists also showed that the remapping can be done not only by doubling, but also by tripling, quadrupling, etc., the states' values. In the laser experiment, these procedures produce visible "petals" of light that correspond to the number that the states were multiplied by. The ability to remap energy states in this way could also have applications in quantum and classical information processing, where, for example, it could be used to increase the number of states produced or to increase the information capacity of a channel.
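A toy numerical illustration of the "room doubling" step described above, written in terms of generic basis-state indices (the experiment itself used orbital-angular-momentum modes of light; the coherent-state example is my own choice, echoing Miatto's remark):

```python
# Toy illustration of the quantum Hilbert-hotel remapping |n> -> |2n>:
# amplitudes are moved from state n to state 2n, the odd-numbered "rooms"
# become empty, and the total probability (the measurable quantity) stays finite.
import numpy as np

def coherent_amplitudes(alpha, cutoff):
    """Truncated coherent-state amplitudes c_n = e^{-|a|^2/2} a^n / sqrt(n!)."""
    n = np.arange(cutoff)
    log_fact = np.cumsum(np.log(np.maximum(n, 1)))   # log(n!) for n = 0..cutoff-1
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.exp(0.5 * log_fact)

def hilbert_hotel_shift(c, factor=2):
    """Remap the amplitude of state n to state factor*n (Hilbert-hotel doubling)."""
    out = np.zeros(len(c) * factor, dtype=complex)
    out[::factor] = c
    return out

c = coherent_amplitudes(1.5, cutoff=40)
d = hilbert_hotel_shift(c)
print(np.sum(np.abs(c) ** 2), np.sum(np.abs(d) ** 2))  # both ~1: the norm is preserved
print(np.allclose(d[1::2], 0))                          # odd-numbered rooms are now empty
```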

Quantum physics interpretations feel the heat: they are no longer just a matter of metaphysics: Rolf Landauer never thought his principle would solve the mysteries of quantum mechanics. He did expect, though, that information would play a part in making sense of quantum weirdness. And sure, nobody thinks that all the mysteries surrounding quantum mechanics are solved now — and many wonder whether they ever will be, for that matter. But a new approach to one deep quantum mystery suggests that viewing the world in terms of information, and applying Landauer's principle to it, does answer one question that many people believed to be unanswerable. That question, posed in many forms, boils down to whether quantum math describes something inherent and real about the physical world. Some experts say yes; others believe quantum math is just about what people can find out about the world. Another way of posing the question is to ask whether the quantum description of nature is “ontic” or “epistemic” — about reality, or about knowledge of reality. Most attempts to articulate an interpretation of what quantum math really means (and there are lots of such interpretations) tend to favor either an ontic or epistemic point of view. But even some epistemic interpretations maintain that outcomes of a measurement are determined by some intrinsic property of the system being measured. Those are sometimes lumped with the ontic group as “Type I” interpretations. Some other interpretations (classified as Type II) hold that quantum measurements deal with an observer’s knowledge or belief about an underlying reality, not some inherently fixed property. Arguments about this issue have raged for decades. And you’d think they would continue to rage, as there would seem to be no possible way to determine which view is right. As long as all experiments come out the same way no matter which interpretation you prefer, it seems like the question is meaningless, or at least moot. But now an internationally diverse group of physicists alleges that there is in fact a way to ascertain which view is correct. If you’re a friend of reality — or otherwise in the Type I camp — you’re not going to like it. There’s no way to decide the debate within the confines of quantum mechanics itself, Adán Cabello and collaborators write in a new paper, online at arXiv.org. But if you throw in thermodynamics — the physics of heat — then a bit of logical deduction and a simple thought experiment can clinch the case for Type II. That experiment involves the manipulation of a quantum state, which is described by a mathematical expression called a wave function. A wave function can be used to compute the outcome of measurements on a particle, say a photon or electron. At the root of many quantum mysteries is the slight hitch that the wave function can only tell you the odds of getting different measurement results, not what the result of any specific measurement will be. To dispense with some unnecessary technicalities, let’s just say you can prepare a particle in a quantum state corresponding to its spin pointing up. You can then measure the spin using a detector that can be oriented in either the up-down direction or left-right direction. Any measurement resets a quantum state; sometimes to a new state, but sometimes resetting it to the same state it was originally. So the net effect of each measurement is either to change the quantum state or leave it the same. 
If you set this all up properly, the quantum state will change half the time — on average — if you repeat your measurement many times (randomly choosing which orientation to measure). It would be like flipping a coin and getting a random list of heads and tails. So if you kept a record of that chain of quantum measurements, you would write down a long list of 1s and 0s in random order, corresponding to whether the state changes or not. If the quantum state is Type I — corresponding to an intrinsic reality that you’re trying to find out about — it must already contain the information that you record before you make your measurement. But suppose you keep on making measurements, ad infinitum. Unless this quantum system has an infinitely large memory, it can’t know from the outset the ultimate order of all those 0s and 1s. “The system cannot have stored the values of the intrinsic properties for all possible sequences of measurements that the observer can perform,” write Cabello, of the University of Seville in Spain, and colleagues from China, Germany, Sweden and England. “This implies that the system has to generate new values and store them in its memory. For that reason, the system needs to erase part of the previously existing information.” And erasing is where Landauer’s principle enters the picture. Landauer, during a long career at IBM, was a pioneer in exploring the physics of computing. He was particularly interested in understanding the ultimate physical limits of computational efficiency, much in the way that 19th century physicists had investigated the principles regulating the efficiency of steam engines. Any computational process, Landauer showed, could be conducted without using up energy if performed carefully and slowly enough. (Or at least there was no lower limit to how much energy you needed.) But erasing a bit of information, Landauer demonstrated in a 1961 paper, always required some minimum amount of energy, thereby dissipating waste heat into the environment. A Type I quantum state, Cabello and colleagues argue, needs to erase old information to make room for the new, and therefore a long run of measurements should generate a lot of heat. The longer the list, the more heat is generated, leading to an infinite release of heat for an infinitely long list, the researchers calculated. It’s pretty hard to imagine how a finite quantum system could generate an infinite amount of heat. On the other hand, if your measurements are creating the list on the fly, then the quantum state is merely about your knowledge — and there’s no heat problem. If the quantum state is Type II, it “does not correspond to any intrinsic property of the observed system,” Cabello and coauthors note. “Here, the quantum state corresponds to the knowledge or expectations an external observer has. Therefore, the measurement does not cause heat emission from the observed system.” Fans of Type I interpretations could argue that somehow the quantum system knows in advance what measurement you will perform — in other words, you really can’t orient your detector randomly. That would imply that your behavior and the quantum system are both governed by some larger system observing superdeterministic laws that nobody knows anything about. Bizarre as that sounds, it would still probably be a better defense than attacking Landauer’s principle. “Landauer’s principle has been verified in actual experiments and is considered valid in the quantum domain,” Cabello and coauthors point out. 
“Therefore, whenever the temperature is not zero … the system should dissipate, at least, an amount of heat proportional to the information erased.” If you would rather not take their word for it, you should check out the September issue of Physics Today, in which Eric Lutz and Sergio Ciliberto explain the intimate links between Landauer’s principle, information and the second law of thermodynamics. “Having only recently become an experimental science,” Lutz and Ciliberto write, “the thermodynamics of information has potential to deliver new insights in physics, chemistry and biology.” The new paper by Cabello and colleagues appears to be an example of just such an insight. Nobody should expect this paper to end the quantum interpretation debate, of course. But it surely provides a new point of view for discussing it. “Ultimately, our work indicates that the long-standing question, Do the outcomes of experiments on quantum systems correspond to intrinsic properties? is not purely metaphysical,” Cabello and colleagues write. “Its answer in the affirmative has considerable physical consequences, testable through experimental observation. Its falsification will be equally exciting as it will force us to embrace radically new lines of thought.”
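For reference, the quantitative content of Landauer's principle that the argument leans on, in its standard textbook form rather than as quoted from the paper:

```latex
% Landauer's bound: erasing one bit of information at temperature T dissipates
% at least k_B T ln 2 of heat into the environment,
\[
  Q_{\mathrm{erase}} \;\ge\; k_B T \ln 2 \quad \text{per bit},
\]
% so a "Type I" state that must keep erasing records over an unending sequence
% of measurements implies an unbounded release of heat.
```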
Hawking radiation via tunneling from the spacetime of spinning cosmic string black holes: In this paper, we study Hawking radiation as a tunneling process of massless particles across the event horizon from the Schwarzschild and Reissner-Nordström black holes pierced by an infinitely long spinning cosmic string and a global monopole. Applying the WKB approximation and using a generalized Painlevé line element for stationary axisymmetric spacetimes, and also taking into account that the ADM mass of the black hole decreases due to the presence of topological defects, it is shown that the Hawking temperature remains unchanged for these black holes. The tunneling of charged massive particles from Reissner-Nordström black holes is also studied; in both cases the tunneling rate is related to the change of the Bekenstein-Hawking entropy. The results extend the work of Parikh and Wilczek and are consistent with an underlying unitary theory.
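In the Parikh-Wilczek approach that the abstract extends, the relation between the tunneling rate and the entropy change is usually written as follows (a standard sketch, not quoted from the paper):

```latex
% WKB tunneling rate through the horizon, related to the change in the
% Bekenstein-Hawking entropy of the black hole after the particle is emitted
% (units G = \hbar = c = k_B = 1):
\[
  \Gamma \;\sim\; e^{-2\,\mathrm{Im}\, I} \;=\; e^{\Delta S_{\mathrm{BH}}},
  \qquad
  S_{\mathrm{BH}} = \frac{A}{4} .
\]
```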
- Cosmology from quantum potential in a brane-anti-brane system (Alireza Sepehri): Recently, some mathematical as well as theoretical physicists, including myself, have removed the big-bang singularity and predicted an infinite age of our universe while deriving all of quantum cosmology 'theory'. In this paper, the author shows that the same result can be obtained in string theory and M-theory - a result I have also derived independently. The shape of the universe changes in different epochs. In this mechanism, first, N fundamental strings decay to N D0-anti-D0-brane pairs. Then the D0-branes join each other, grow, and form a six-dimensional brane-antibrane system. This system is unstable and breaks down, producing the present form of our four-dimensional universe, one anti-universe, and one wormhole. Thus, there isn’t any big bang in cosmology, and the universe is a fundamental metaplectic string at the beginning. Also, the total age of the universe contains two parts, one related to the initial age and a second which corresponds to the 'present' age of the universe (t_tot = t_initial + t_present). On the other hand, the 'initial' age of the universe includes two parts, the age of that fundamental string and the time of transition (t_initial = t_transition + t_f-string). It is observed that only in the case t_f-string → ∞ is the scale factor of the universe zero and, as a result, the total age of the universe infinite, as I demonstrated.

  • Hypertime -- why we need 2 dimensions of time: A Two-Time Universe? Physicist Explores How Second Dimension of Time Could Unify Physics Laws: For a long time, Itzhak Bars has been studying time. More than a decade ago, the physicist began pondering the role time plays in the basic laws of physics — the equations describing matter, gravity and the other forces of nature. Those laws are exquisitely accurate. Einstein mastered gravity with his theory of general relativity, and the equations of quantum theory capture every nuance of matter and other forces, from the attractive power of magnets to the subatomic glue that holds an atom’s nucleus together. But the laws can’t be complete. Einstein’s theory of gravity and quantum theory don’t fit together. Some piece is missing in the picture puzzle of physical reality. Bars thinks one of the missing pieces is a hidden dimension of time. Bizarre is not a powerful enough word to describe this idea, but it is a powerful idea nevertheless. With two times, Bars believes, many of the mysteries of today’s laws of physics may disappear. Of course, it’s not as simple as that. An extra dimension of time is not enough. You also need an additional dimension of space. It sounds like a new episode of “The Twilight Zone,” but it’s a familiar idea to most physicists. In fact, extra dimensions of space have become a popular way of making gravity and quantum theory more compatible. Extra space dimensions aren’t easy to imagine — in everyday life, nobody ever notices more than three. Any move you make can be described as the sum of movements in three directions — up-down, back and forth, or sideways. Similarly, any location can be described by three numbers (on Earth, latitude, longitude and altitude), corresponding to space’s three dimensions. Other dimensions could exist, however, if they were curled up in little balls, too tiny to notice. If you moved through one of those dimensions, you’d get back to where you started so fast you’d never realize that you had moved. “An extra dimension of space could really be there, it’s just so small that we don’t see it,” said Bars, a professor of physics and astronomy. Something as tiny as a subatomic particle, though, might detect the presence of extra dimensions. In fact, Bars said, certain properties of matter’s basic particles, such as electric charge, may have something to do with how those particles interact with tiny invisible dimensions of space. In this view, the Big Bang that started the baby universe growing 14 billion years ago blew up only three of space’s dimensions, leaving the rest tiny. Many theorists today believe that 6 or 7 such unseen dimensions await discovery. Only a few, though, believe that more than one dimension of time exists. Bars pioneered efforts to discern how a second dimension of time could help physicists better explain nature. “Itzhak Bars has a long history of finding new mathematical symmetries that might be useful in physics,” said Joe Polchinski, a physicist at the Kavli Institute for Theoretical Physics at UC Santa Barbara. “This two-time idea seems to have some interesting mathematical properties.” If Bars is on the right track, some of the most basic processes in physics will need re-examination. Something as simple as how particles move, for example, could be viewed in a new way. In classical physics (before the days of quantum theory), a moving particle was completely described by its momentum (its mass times its velocity) and its position. 
But quantum physics says you can never know those two properties precisely at the same time. Bars alters the laws describing motion even more, postulating that position and momentum are not distinguishable at a given instant of time. Technically, they can be related by a mathematical symmetry, meaning that swapping position for momentum leaves the underlying physics unchanged (just as a mirror switching left and right doesn’t change the appearance of a symmetrical face). In ordinary physics, position and momentum differ because the equation for momentum involves velocity. Since velocity is distance divided by time, it requires the notion of a time dimension. If swapping the equations for position and momentum really doesn’t change anything, then position needs a time dimension too. “If I make position and momentum indistinguishable from one another, then something is changing about the notion of time,” said Bars. “If I demand a symmetry like that, I must have an extra time dimension.” Simply adding an extra dimension of time doesn’t solve everything, however. To produce equations that describe the world accurately, an additional dimension of space is needed as well, giving a total of four space dimensions. Then, the math with four space and two time dimensions reproduces the standard equations describing the basic particles and forces, a finding Bars described partially last year in the journal Physical Review D and has expanded upon in his more recent work. Bars’ math suggests that the familiar world of four dimensions — three of space, one of time — is merely a shadow of a richer six-dimensional reality. In this view the ordinary world is like a two-dimensional wall displaying shadows of the objects in a three-dimensional room. In a similar way, the observable universe of ordinary space and time may reflect the physics of a bigger space with an extra dimension of time. In ordinary life nobody notices the second time dimension, just as nobody sees the third dimension of an object’s two-dimensional shadow on a wall. This viewpoint has implications for understanding many problems in physics. For one thing, current theory suggests the existence of a lightweight particle called the axion, needed to explain an anomaly in the equations of the standard model of particles and forces. If it exists, the axion could make up the mysterious “dark matter” that astronomers say affects the motions of galaxies. But two decades of searching has failed to find proof that axions exist. Two-time physics removes the original anomaly without the need for an axion, Bars has shown, possibly explaining why it has not been found. On a grander level, two-time physics may assist in the quest to merge quantum theory with Einstein’s relativity in a single unified theory. The most popular approach to that problem today, superstring theory, also invokes extra dimensions of space, but only a single dimension of time. Many believe that a variant on string theory, known as M theory, will be the ultimate winner in the quantum-relativity unification game, and M theory requires 10 dimensions of space and one of time. Efforts to formulate a clear and complete version of M theory have so far failed. “Nobody has yet told us what the fundamental form of M theory is,” Bars said. “We just have clues — we don’t know what it is.” Adopting the more symmetric two-time approach may help. 
Describing the 11 dimensions of M theory in the language of two-time physics would require adding one time dimension plus one space dimension, giving nature 11 space and two time dimensions. “The two-time version of M theory would have a total of 13 dimensions,” Bars said. For some people, that might be considered unlucky. But for Bars, it’s a reason for optimism. “My hope,” he says, “is that this path that I am following will actually bring me to the right place.”
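A compressed sketch, based on Bars' published two-time formulation rather than anything quoted above, of why the position-momentum symmetry forces a second time: the framework gauges the Sp(2,R) symmetry that rotates X and P into each other, and its constraints only admit nontrivial solutions when the flat metric carries two timelike directions.

```latex
% Sp(2,R) doublet (X^M, P^M): promoting the symmetry that mixes X and P to a
% gauge symmetry imposes the first-class constraints
\[
  X \cdot X \;=\; 0, \qquad X \cdot P \;=\; 0, \qquad P \cdot P \;=\; 0 ,
\]
% which have only trivial solutions in a flat metric with a single timelike
% direction; with signature (d, 2) nontrivial solutions exist, and gauge-fixing
% recovers ordinary one-time physics as a "shadow" of the larger space.
```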

 

  • You're not irrational, you're just quantum probabilistic: Researchers explain human decision-making with physics theory: The next time someone accuses you of making an irrational decision, just explain that you're obeying the laws of quantum physics. A new trend taking shape in psychological science not only uses quantum physics to explain humans' (sometimes) paradoxical thinking, but may also help researchers resolve certain contradictions among the results of previous psychological studies. According to Zheng Joyce Wang and others who try to model our decision-making processes mathematically, the equations and axioms that most closely match human behavior may be ones that are rooted in quantum physics. "We have accumulated so many paradoxical findings in the field of cognition, and especially in decision-making," said Wang, who is an associate professor of communication and director of the Communication and Psychophysiology Lab at The Ohio State University. "Whenever something comes up that isn't consistent with classical theories, we often label it as 'irrational.' But from the perspective of quantum cognition, some findings aren't irrational anymore. They're consistent with quantum theory—and with how people really behave." In two new review papers in academic journals, Wang and her colleagues spell out their new theoretical approach to psychology. One paper appears in Current Directions in Psychological Science, and the other in Trends in Cognitive Sciences. Their work suggests that thinking in a quantum-like way—essentially not following a conventional approach based on classical probability theory—enables humans to make important decisions in the face of uncertainty, and lets us confront complex questions despite our limited mental resources. When researchers try to study human behavior using only classical mathematical models of rationality, some aspects of human behavior do not compute. From the classical point of view, those behaviors seem irrational, Wang explained. For instance, scientists have long known that the order in which questions are asked on a survey can change how people respond—an effect previously thought to be due to vaguely labeled effects, such as "carry-over effects" and "anchoring and adjustment," or noise in the data. Survey organizations normally change the order of questions between respondents, hoping to cancel out this effect. But in the Proceedings of the National Academy of Sciences last year, Wang and collaborators demonstrated that the effect can be precisely predicted and explained by a quantum-like aspect of people's behavior. We usually think of quantum physics as describing the behavior of sub-atomic particles, not the behavior of people. But the idea is not so far-fetched, Wang said. She also emphasized that her research program neither assumes nor proposes that our brains are literally quantum computers. Other research groups are working on that idea; Wang and her collaborators are not focusing on the physical aspects of the brain, but rather on how abstract mathematical principles of quantum theory can shed light on human cognition and behaviors. "In the social and behavioral sciences as a whole, we use probability models a lot," she said. "For example, we ask, what is the probability that a person will act a certain way or make a certain decision? Traditionally, those models are all based on classical probability theory—which arose from the classical physics of Newtonian systems. 
So it's really not so exotic for social scientists to think about quantum systems and their mathematical principles, too." Quantum physics deals with ambiguity in the physical world. The state of a particular particle, the energy it contains, its location—all are uncertain and have to be calculated in terms of probabilities. Quantum cognition is what happens when humans have to deal with ambiguity mentally. Sometimes we aren't certain about how we feel, or we feel ambiguous about which option to choose, or we have to make decisions based on limited information. "Our brain can't store everything. We don't always have clear attitudes about things. But when you ask me a question, like 'What do you want for dinner?' I have to think about it and come up with or construct a clear answer right there," Wang said. "That's quantum cognition." "I think the mathematical formalism provided by quantum theory is consistent with what we feel intuitively as psychologists. Quantum theory may not be intuitive at all when it is used to describe the behaviors of a particle, but actually is quite intuitive when it is used to describe our typically uncertain and ambiguous minds." She used the example of Schrödinger's cat—the thought experiment in which a cat inside a box has some probability of being alive or dead. Both possibilities have potential in our minds. In that sense, the cat has a potential to become dead or alive at the same time. The effect is called quantum superposition. When we open the box, both possibilities are no longer superimposed, and the cat must be either alive or dead. With quantum cognition, it's as if each decision we make is our own unique Schrödinger's cat. As we mull over our options, we envision them in our mind's eye. For a time, all the options co-exist with different degrees of potential that we will choose them: That's superposition. Then, when we zero in on our preferred option, the other options cease to exist for us. The task of modeling this process mathematically is difficult in part because each possible outcome adds dimensions to the equation. For instance, a Republican who is trying to decide among the candidates for U.S. president in 2016 is currently confronting a high-dimensional problem with almost 20 candidates. Open-ended questions, such as "How do you feel?" have even more possible outcomes and more dimensions. With the classical approach to psychology, the answers might not make sense, and researchers have to construct new mathematical axioms to explain behavior in that particular instance. The result: There are many classical psychological models, some of which are in conflict, and none of which apply to every situation. With the quantum approach, Wang and her colleagues argued, many different and complex aspects of behavior can be explained with the same limited set of axioms. The same quantum model that explains how question order changes people's survey answers also explains violations of rationality in the prisoner's dilemma paradigm, an effect in which people cooperate even when it's in their best interest not to do so. "The prisoner's dilemma and question order are two completely different effects in classical psychology, but they both can be explained by the same quantum model," Wang said. "The same quantum model has been used to explain many other seemingly unrelated, puzzling findings in psychology. That's elegant."
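A toy numerical illustration, my own construction rather than the authors' model, of the formal point behind the question-order effect: when "yes" answers to two questions are represented by non-commuting projectors, the probability of answering yes to both depends on the order in which they are asked, something a classical (commutative) probability model cannot produce.

```python
# Toy quantum question-order effect: projectors for "yes to A" and "yes to B"
# that do not commute give order-dependent joint probabilities.
import numpy as np

def projector(theta):
    """Rank-1 projector onto direction (cos theta, sin theta) in a 2-D belief space."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

P_A = projector(0.0)          # "yes" to question A
P_B = projector(np.pi / 5)    # "yes" to question B (does not commute with P_A)

psi = np.array([np.cos(1.0), np.sin(1.0)])   # initial belief state (unit vector)

def prob_yes_then_yes(first, second, state):
    """Born rule applied sequentially: answer the first question, collapse, answer the second."""
    amp = second @ first @ state
    return float(amp @ amp)

print(prob_yes_then_yes(P_A, P_B, psi))   # P(yes to A, then yes to B)
print(prob_yes_then_yes(P_B, P_A, psi))   # P(yes to B, then yes to A) -- a different number
print(np.allclose(P_A @ P_B, P_B @ P_A))  # False: the projectors do not commute
```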
  • Is Nature Unnatural? Decades of confounding experiments have physicists considering a startling possibility: The universe might not make sense. On an overcast afternoon in late April, physics professors and students crowded into a wood-paneled lecture hall at Columbia University for a talk by Nima Arkani-Hamed, a high-profile theorist visiting from the Institute for Advanced Study in nearby Princeton, N.J. With his dark, shoulder-length hair shoved behind his ears, Arkani-Hamed laid out the dual, seemingly contradictory implications of recent experimental results at the Large Hadron Collider in Europe. “The universe is inevitable,” he declared. “The universe is impossible.” The spectacular discovery of the Higgs boson in July 2012 confirmed a nearly 50-year-old theory of how elementary particles acquire mass, which enables them to form big structures such as galaxies and humans. “The fact that it was seen more or less where we expected to find it is a triumph for experiment, it’s a triumph for theory, and it’s an indication that physics works,” Arkani-Hamed told the crowd. However, in order for the Higgs boson to make sense with the mass (or equivalent energy) it was determined to have, the LHC needed to find a swarm of other particles, too. None turned up. With the discovery of only one particle, the LHC experiments deepened a profound problem in physics that had been brewing for decades. Modern equations seem to capture reality with breathtaking accuracy, correctly predicting the values of many constants of nature and the existence of particles like the Higgs. Yet a few constants — including the mass of the Higgs boson — are exponentially different from what these trusted laws indicate they should be, in ways that would rule out any chance of life, unless the universe is shaped by inexplicable fine-tunings and cancellations. In peril is the notion of “naturalness,” Albert Einstein’s dream that the laws of nature are sublimely beautiful, inevitable and self-contained. Without it, physicists face the harsh prospect that those laws are just an arbitrary, messy outcome of random fluctuations in the fabric of space and time. The LHC will resume smashing protons in 2015 in a last-ditch search for answers. But in papers, talks and interviews, Arkani-Hamed and many other top physicists are already confronting the possibility that the universe might be unnatural. (There is wide disagreement, however, about what it would take to prove it.) “Ten or 20 years ago, I was a firm believer in naturalness,” said Nathan Seiberg, a theoretical physicist at the Institute, where Einstein taught from 1933 until his death in 1955. “Now I’m not so sure. My hope is there’s still something we haven’t thought about, some other mechanism that would explain all these things. But I don’t see what it could be.” Physicists reason that if the universe is unnatural, with extremely unlikely fundamental constants that make life possible, then an enormous number of universes must exist for our improbable case to have been realized. Otherwise, why should we be so lucky? Unnaturalness would give a huge lift to the multiverse hypothesis, which holds that our universe is one bubble in an infinite and inaccessible foam. According to a popular but polarizing framework called string theory, the number of possible types of universes that can bubble up in a multiverse is around 10^500. In a few of them, chance cancellations would produce the strange constants we observe. 
In such a picture, not everything about this universe is inevitable, rendering it unpredictable. Edward Witten, a string theorist at the Institute, said by email, “I would be happy personally if the multiverse interpretation is not correct, in part because it potentially limits our ability to understand the laws of physics. But none of us were consulted when the universe was created.” “Some people hate it,” said Raphael Bousso, a physicist at the University of California at Berkeley who helped develop the multiverse scenario. “But I just don’t think we can analyze it on an emotional basis. It’s a logical possibility that is increasingly favored in the absence of naturalness at the LHC.” What the LHC does or doesn’t discover in its next run is likely to lend support to one of two possibilities: Either we live in an overcomplicated but stand-alone universe, or we inhabit an atypical bubble in a multiverse. “We will be a lot smarter five or 10 years from today because of the LHC,” Seiberg said. “So that’s exciting. This is within reach.” Cosmic Coincidence: Einstein once wrote that for a scientist, “religious feeling takes the form of a rapturous amazement at the harmony of natural law” and that “this feeling is the guiding principle of his life and work.” Indeed, throughout the 20th century, the deep-seated belief that the laws of nature are harmonious — a belief in “naturalness” — has proven a reliable guide for discovering truth. “Naturalness has a track record,” Arkani-Hamed said in an interview. In practice, it is the requirement that the physical constants (particle masses and other fixed properties of the universe) emerge directly from the laws of physics, rather than resulting from improbable cancellations. Time and again, whenever a constant appeared fine-tuned, as if its initial value had been magically dialed to offset other effects, physicists suspected they were missing something. They would seek and inevitably find some particle or feature that materially dialed the constant, obviating a fine-tuned cancellation. This time, the self-healing powers of the universe seem to be failing. The Higgs boson has a mass of 126 giga-electron-volts, but interactions with the other known particles should add about 10,000,000,000,000,000,000 giga-electron-volts to its mass. This implies that the Higgs’ “bare mass,” or starting value before other particles affect it, just so happens to be the negative of that astronomical number, resulting in a near-perfect cancellation that leaves just a hint of Higgs behind: 126 giga-electron-volts. Physicists have gone through three generations of particle accelerators searching for new particles, posited by a theory called supersymmetry, that would drive the Higgs mass down exactly as much as the known particles drive it up. But so far they’ve come up empty-handed. The upgraded LHC will explore ever-higher energy scales in its next run, but even if new particles are found, they will almost definitely be too heavy to influence the Higgs mass in quite the right way. The Higgs will still seem at least 10 or 100 times too light. Physicists disagree about whether this is acceptable in a natural, stand-alone universe. “Fine-tuned a little — maybe it just happens,” said Lisa Randall, a professor at Harvard University. But in Arkani-Hamed’s opinion, being “a little bit tuned is like being a little bit pregnant. 
It just doesn’t exist.” If no new particles appear and the Higgs remains astronomically fine-tuned, then the multiverse hypothesis will stride into the limelight. “It doesn’t mean it’s right,” said Bousso, a longtime supporter of the multiverse picture, “but it does mean it’s the only game in town.” A few physicists — notably Joe Lykken of Fermi National Accelerator Laboratory in Batavia, Ill., and Alessandro Strumia of the University of Pisa in Italy — see a third option. They say that physicists might be misgauging the effects of other particles on the Higgs mass and that when calculated differently, its mass appears natural. This “modified naturalness” falters when additional particles, such as the unknown constituents of dark matter, are included in calculations — but the same unorthodox path could yield other ideas. “I don’t want to advocate, but just to discuss the consequences,” Strumia said during a talk earlier this month at Brookhaven National Laboratory. However, modified naturalness cannot fix an even bigger naturalness problem that exists in physics: The fact that the cosmos wasn’t instantly annihilated by its own energy the moment after the Big Bang. Dark Dilemma: The energy built into the vacuum of space (known as vacuum energy, dark energy or the cosmological constant) is a baffling trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion times smaller than what is calculated to be its natural, albeit self-destructive, value. No theory exists about what could naturally fix this gargantuan disparity. But it’s clear that the cosmological constant has to be enormously fine-tuned to prevent the universe from rapidly exploding or collapsing to a point. It has to be fine-tuned in order for life to have a chance. To explain this absurd bit of luck, the multiverse idea has been growing mainstream in cosmology circles over the past few decades. It got a credibility boost in 1987 when the Nobel Prize-winning physicist Steven Weinberg, now a professor at the University of Texas at Austin, calculated that the cosmological constant of our universe is expected in the multiverse scenario. Of the possible universes capable of supporting life — the only ones that can be observed and contemplated in the first place — ours is among the least fine-tuned. “If the cosmological constant were much larger than the observed value, say by a factor of 10, then we would have no galaxies,” explained Alexander Vilenkin, a cosmologist and multiverse theorist at Tufts University. “It’s hard to imagine how life might exist in such a universe.” Most particle physicists hoped that a more testable explanation for the cosmological constant problem would be found. None has. Now, physicists say, the unnaturalness of the Higgs makes the unnaturalness of the cosmological constant more significant. Arkani-Hamed thinks the issues may even be related. “We don’t have an understanding of a basic extraordinary fact about our universe,” he said. “It is big and has big things in it.” The multiverse turned into slightly more than just a hand-waving argument in 2000, when Bousso and Joe Polchinski, a professor of theoretical physics at the University of California at Santa Barbara, found a mechanism that could give rise to a panorama of parallel universes. String theory, a hypothetical “theory of everything” that regards particles as invisibly small vibrating lines, posits that space-time is 10-dimensional. 
At the human scale, we experience just three dimensions of space and one of time, but string theorists argue that six extra dimensions are tightly knotted at every point in the fabric of our 4-D reality. Bousso and Polchinski calculated that there are around 10^500 different ways for those six dimensions to be knotted (all tying up varying amounts of energy), making an inconceivably vast and diverse array of universes possible. In other words, naturalness is not required. There isn’t a single, inevitable, perfect universe. “It was definitely an aha-moment for me,” Bousso said. But the paper sparked outrage. “Particle physicists, especially string theorists, had this dream of predicting uniquely all the constants of nature,” Bousso explained. “Everything would just come out of math and pi and twos. And we came in and said, ‘Look, it’s not going to happen, and there’s a reason it’s not going to happen. We’re thinking about this in totally the wrong way.’ ” Life in a Multiverse: The Big Bang, in the Bousso-Polchinski multiverse scenario, is a fluctuation. A compact, six-dimensional knot that makes up one stitch in the fabric of reality suddenly shape-shifts, releasing energy that forms a bubble of space and time. The properties of this new universe are determined by chance: the amount of energy unleashed during the fluctuation. The vast majority of universes that burst into being in this way are thick with vacuum energy; they either expand or collapse so quickly that life cannot arise in them. But some atypical universes, in which an improbable cancellation yields a tiny value for the cosmological constant, are much like ours. In a paper posted last month to the physics preprint website arXiv.org, Bousso and a Berkeley colleague, Lawrence Hall, argue that the Higgs mass makes sense in the multiverse scenario, too. They found that bubble universes that contain enough visible matter (compared to dark matter) to support life most often have supersymmetric particles beyond the energy range of the LHC, and a fine-tuned Higgs boson. Similarly, other physicists showed in 1997 that if the Higgs boson were five times heavier than it is, this would suppress the formation of atoms other than hydrogen, resulting, by yet another means, in a lifeless universe. Despite these seemingly successful explanations, many physicists worry that there is little to be gained by adopting the multiverse worldview. Parallel universes cannot be tested for; worse, an unnatural universe resists understanding. “Without naturalness, we will lose the motivation to look for new physics,” said Kfir Blum, a physicist at the Institute for Advanced Study. “We know it’s there, but there is no robust argument for why we should find it.” That sentiment is echoed again and again: “I would prefer the universe to be natural,” Randall said. But theories can grow on physicists. After spending more than a decade acclimating himself to the multiverse, Arkani-Hamed now finds it plausible — and a viable route to understanding the ways of our world. “The wonderful point, as far as I’m concerned, is basically any result at the LHC will steer us with different degrees of force down one of these divergent paths,” he said. “This kind of choice is a very, very big deal.” Naturalness could pull through. Or it could be a false hope in a strange but comfortable pocket of the multiverse. As Arkani-Hamed told the audience at Columbia, “stay tuned.” Via Quanta Magazine/This article was reprinted on ScientificAmerican.com.
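For readers who want the cancellation described above in symbols, here is the schematic textbook form of the Higgs fine-tuning problem (not taken from the article):

```latex
% Schematic hierarchy problem: the physical Higgs mass-squared is the bare
% value plus loop corrections that grow with the cutoff scale \Lambda,
\[
  m_{H,\mathrm{phys}}^2 \;=\; m_{H,\mathrm{bare}}^2 \;+\; c\,\frac{\Lambda^2}{16\pi^2},
  \qquad c = \mathcal{O}(1),
\]
% so for \Lambda near the Planck scale the two terms on the right must cancel
% to roughly one part in 10^{30} to leave a physical mass near 126 GeV.
```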

 

  • New Principle May Help Explain Why Nature is Quantum: Like small children, scientists are always asking the question 'why?'. One question they've yet to answer is why nature picked quantum physics, in all its weird glory, as a sensible way to behave. Researchers Corsin Pfister and Stephanie Wehner at the Centre for Quantum Technologies at the National University of Singapore tackle this perennial question in a paper published today in Nature Communications. We know that things that follow quantum rules, such as atoms, electrons or the photons that make up light, are full of surprises. They can exist in more than one place at once, for instance, or exist in a shared state where the properties of two particles show what Einstein called "spooky action at a distance", no matter what their physical separation. Because such things have been confirmed in experiments, researchers are confident the theory is right. But it would still be easier to swallow if it could be shown that quantum physics itself sprang from intuitive underlying principles. One way to approach this problem is to imagine all the theories one could possibly come up with to describe nature, and then work out what principles help to single out quantum physics. A good start is to assume that information follows Einstein's special relativity and cannot travel faster than light. However, this alone isn't enough to define quantum physics as the only way nature might behave. Corsin and Stephanie think they have come across a new useful principle. "We have found a principle that is very good at ruling out other theories," says Corsin. In short, the principle to be assumed is that if a measurement yields no information, then the system being measured has not been disturbed. Quantum physicists accept that gaining information from quantum systems causes disturbance. Corsin and Stephanie suggest that in a sensible world the reverse should be true, too. If you learn nothing from measuring a system, then you can't have disturbed it. Consider the famous Schrödinger's cat paradox, a thought experiment in which a cat in a box simultaneously exists in two states (this is known as a 'quantum superposition'). According to quantum theory it is possible that the cat is both dead and alive – until, that is, the cat's state of health is 'measured' by opening the box. When the box is opened, allowing the health of the cat to be measured, the superposition collapses and the cat ends up definitively dead or alive. The measurement has disturbed the cat. This is a property of quantum systems in general. Perform a measurement for which you can't know the outcome in advance, and the system changes to match the outcome you get. What happens if you look a second time? The researchers assume the system is not evolving in time or affected by any outside influence, which means the quantum state stays collapsed. You would then expect the second measurement to yield the same result as the first. After all, "If you look into the box and find a dead cat, you don't expect to look again later and find the cat has been resurrected," says Stephanie. "You could say we've formalised the principle of accepting the facts", says Stephanie. Corsin and Stephanie show that this principle rules out various theories of nature. They note particularly that a class of theories they call 'discrete' are incompatible with the principle. 
These theories hold that quantum particles can take up only a finite number of states, rather than choose from an infinite, continuous range of possibilities. The possibility of such a discrete 'state space' has been linked to quantum gravitational theories proposing similar discreteness in spacetime, where the fabric of the universe is made up of tiny brick-like elements rather than being a smooth, continuous sheet. As is often the case in research, Corsin and Stephanie reached this point having set out to solve an entirely different problem. Corsin was trying to find a general way to describe the effects of measurements on states, a problem that he found impossible to solve. In an attempt to make progress, he wrote down features that a 'sensible' answer should have. This property of information gain versus disturbance was on the list. He then noticed that if he imposed the property as a principle, some theories would fail. Corsin and Stephanie are keen to point out it's still not the whole answer to the big 'why' question: theories other than quantum physics, including classical physics, are compatible with the principle. But as researchers compile lists of principles that each rule out some theories to reach a set that singles out quantum physics, the principle of information gain versus disturbance seems like a good one to include.
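The "look again and nothing changes" point is easy to see in a toy simulation. Below is a minimal Python sketch of an idealized projective measurement on a single qubit (my own illustration, not the authors' formalism): the first look collapses the superposition, and a second look, with no evolution or outside influence in between, necessarily repeats the first outcome.

```python
import numpy as np

rng = np.random.default_rng(42)

# A qubit prepared in an equal superposition of |0> and |1> (the boxed cat, in the toy analogy).
psi = np.array([1.0, 1.0]) / np.sqrt(2)

def measure(state):
    """Idealized projective measurement in the computational basis.
    Returns the outcome and the collapsed post-measurement state."""
    p0 = abs(state[0]) ** 2
    outcome = 0 if rng.random() < p0 else 1
    collapsed = np.zeros(2)
    collapsed[outcome] = 1.0
    return outcome, collapsed

first, psi = measure(psi)    # the first look disturbs the superposition
second, psi = measure(psi)   # nothing new to learn, nothing further disturbed
assert first == second       # the second look always repeats the first outcome
```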
- Round Table Talk: Conversation with Edward Witten: the second greatest scientific mind in history - the first being Newton. Must-read, and Witten's awards are unmatched in intellectual history: The Einstein Medal 1985; The Dirac Prize and Medal of the International Centre for Theoretical Physics 1985; The National Science Foundation Alan T. Waterman Award 1986; The Fields Medal 1990; The American Institute of Physics and American Physical Society Dannie Heineman Prize for Mathematical Physics 1998; The Nemmers Prize in Mathematics 2000; The National Medal of Science 2003; The Henri Poincaré Prize 2006; The Crafoord Prize in Mathematics 2008; The Royal Netherlands Academy of Arts and Sciences Lorentz Medal 2010; The Institute of Physics Isaac Newton Medal 2010; The Fundamental Physics Prize 2012; The Kyoto Prize in Science 2014 ... after further reflection, a mind unlike any other in history!

- What if the universe is an illusion? The world around us does a good job of convincing us that it is three-dimensional. The problem is that some pretty useful physics says it's a hologram. Again, this is another result I have derived - the universe is a hologram - though my proofs are not based on 'utilitarian physics' but on necessary and sufficient conditions that any quantum theory unifying Einstein's Theory of General Relativity with Quantum Field Theory must meet.

- New model describes cognitive decision making as the collapse of a quantum superstate
Quantum physics and the 'mind': is the brain a quantum computer? Decision making in an enormous range of tasks involves the accumulation of evidence in support of different hypotheses. One of the enduring models of evidence accumulation is the Markov random walk (MRW) theory, which assigns a probability to each hypothesis. In an MRW model of decision making, when deciding between two hypotheses, the cumulative evidence for and against each hypothesis reaches different levels at different times, moving particle-like from state to state and only occupying a single definite evidence level at any given point. By contrast with MRW, the new quantum random walk (QRW) theory assumes that evidence develops over time in a superposition state analogous to the wave-like state of a photon, and judgements and decisions are made when this indefinite superposition state "collapses" into a definite state of evidence. In the experiment, nine study participants completed 112 blocks of 24 trials each over five sessions, in which they viewed a random dot motion stimulus on a screen. A percentage of the dots moved coherently in a single direction. The researchers manipulated the difficulty of the test between trials. In the choice condition, participants were asked to decide whether the coherently moving dots were traveling to the left or the right. In the no-choice condition, participants were prompted by an audio tone simply to make a motor response. Then participants were asked to rate their confidence that the coherently moving dots were traveling to the right on a scale ranging from 0 (certain left) to 100 percent (certain right). The researchers report that, on average, confidence ratings were much higher when the trajectories of the dots were highly coherent. Confidence ratings were lower in the no-choice condition than in the choice condition, providing evidence against the read-out assumption of MRW theory, which holds that confidence in the choice condition should be higher. The QRW theory posits that evidence evolves over time, as in MRW, but that judgments and decisions create a new definite state from an indefinite, superposition-like state. "This quantum perspective reconceptualizes how we model uncertainty and formalizes a long-held hypothesis that judgments and decisions create rather than reveal preferences and beliefs," the authors write. They conclude, "... quantum random walk theory provides a previously unexamined perspective on the nature of the evidence accumulation process that underlies both cognitive and neural theories of decision making."
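As a concrete handle on the classical baseline being argued against, here is a toy Markov random walk accumulator of the kind described above; the drift, bound and coherence values are arbitrary choices for illustration, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def mrw_trial(coherence, drift=0.1, bound=20, max_steps=2000):
    """Toy MRW accumulator: evidence occupies one definite level at a time and
    steps up or down until it crosses a decision bound (right vs left)."""
    evidence = 0
    for _ in range(max_steps):
        p_up = 0.5 + drift * coherence          # coherent rightward motion biases the walk upward
        evidence += 1 if rng.random() < p_up else -1
        if abs(evidence) >= bound:
            break
    return "right" if evidence > 0 else "left"

choices = [mrw_trial(coherence=0.5) for _ in range(1000)]
print("P(choose right) at 50% coherence:", choices.count("right") / len(choices))
```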

- Quantum Biology and the Hidden Nature of Nature: Can the spooky world of quantum physics explain bird navigation, photosynthesis and even our delicate sense of smell? Clues are mounting that the rules governing the subatomic realm may play an unexpectedly pivotal role in the visible world. Join leading thinkers in the emerging field of quantum biology as they explore the hidden hand of quantum physics in everyday life and discuss how these insights may one day revolutionize thinking on everything from the energy crisis to quantum computers.

- The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World - Pedro Domingos: Machine learning is the automation of discovery - the scientific method on steroids - that enables intelligent robots and computers to program themselves ... in fact, machine learning will replace the scientific method very soon ... all knowledge can be derived from data by a single ‘master algorithm’: If you wonder how AI will change your life, science, history, and 'everything', read this book.

- There Is No Progress in Philosophy Eric Dietrich: Except for a patina of twenty-first century modernity, in the form of logic and language, philosophy is exactly the same now as it ever was; it has made no progress whatsoever. We philosophers wrestle with the exact same problems the Pre-Socratics wrestled with. Even more outrageous than this claim, though, is the blatant denial of its obvious truth by many practicing philosophers. The No-Progress view is explored and argued for here. Its denial is diagnosed as a form of anosognosia, a mental condition where the affected person denies there is any problem. The theories of two eminent philosophers supporting the No-Progress view are also examined. The final section offers an explanation for philosophy’s inability to solve any philosophical problem, ever. The paper closes with some reflections on philosophy’s future.

  • Quantum physics just got less complicated: Here's a nice surprise - quantum physics is less complicated than we thought. An international team of researchers has proved that two peculiar features of the quantum world previously considered distinct are different manifestations of the same thing. The result is published 19 December in Nature Communications. Patrick Coles, Jedrzej Kaniewski, and Stephanie Wehner made the breakthrough while at the Centre for Quantum Technologies at the National University of Singapore. They found that 'wave-particle duality' is simply the quantum 'uncertainty principle' in disguise, reducing two mysteries to one. "The connection between uncertainty and wave-particle duality comes out very naturally when you consider them as questions about what information you can gain about a system. Our result highlights the power of thinking about physics from the perspective of information," says Wehner, who is now an Associate Professor at QuTech at the Delft University of Technology in the Netherlands.

- How spacetime is built by quantum entanglement: A collaboration of physicists and a mathematician has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. The paper announcing the discovery by Hirosi Ooguri, a Principal Investigator at the University of Tokyo's Kavli IPMU, with Caltech mathematician Matilde Marcolli and graduate students Jennifer Lin and Bogdan Stoica, will be published in Physical Review Letters as an Editors' Suggestion "for the potential interest in the results presented and on the success of the paper in communicating its message, in particular to readers from other fields." Physicists and mathematicians have long sought a Theory of Everything (ToE) that unifies general relativity and quantum mechanics. General relativity explains gravity and large-scale phenomena such as the dynamics of stars and galaxies in the universe, while quantum mechanics explains microscopic phenomena from the subatomic to molecular scales. The holographic principle is widely regarded as an essential feature of a successful Theory of Everything. The holographic principle states that gravity in a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. In particular, the three dimensions of the volume should emerge from the two dimensions of the surface. However, understanding the precise mechanics for the emergence of the volume from the surface has been elusive. Now, Ooguri and his collaborators have found that quantum entanglement is the key to solving this question. Using a quantum theory (that does not include gravity), they showed how to compute energy density, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. This is analogous to diagnosing conditions inside of your body by looking at X-ray images on two-dimensional sheets. This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without actually explicitly including gravity in the theory. The importance of quantum entanglement has been suggested before, but its precise role in the emergence of spacetime was not clear until the new paper by Ooguri and collaborators. Quantum entanglement is a phenomenon whereby quantum states such as spin or polarization of particles at different locations cannot be described independently. Measuring (and hence acting on) one particle must also act on the other, something that Einstein called "spooky action at a distance." The work of Ooguri and collaborators shows that this quantum entanglement generates the extra dimensions of the gravitational theory. "It was known that quantum entanglement is related to deep issues in the unification of general relativity and quantum mechanics, such as the black hole information paradox and the firewall paradox," says Hirosi Ooguri. "Our paper sheds new light on the relation between quantum entanglement and the microscopic structure of spacetime by explicit calculations. The interface between quantum gravity and information science is becoming increasingly important for both fields. I myself am collaborating with information scientists to pursue this line of research further."

- Why Isn’t There More Progress in Philosophy? David J. Chalmers: "Is there progress in philosophy? I have two reactions to this question. First, the answer is obviously yes. Second, it is the wrong question. The right question is not “Is there progress?” but “Why isn’t there more?”. We can distinguish three questions about philosophical progress. The Existence Question: is there progress in philosophy? The Comparison Question: is there as much progress in philosophy as in science? The Explanation Question (which tends to presuppose a negative answer to at least one of these two questions): why isn’t there more progress in philosophy? What we might call a glass-half-full view of philosophical progress is that there is some progress in philosophy. The glass-half-empty view is that there is not as much as we would like. In effect, the glass-half-full view consists in a positive answer to the Existence Question, while the glass-half-empty view (or at least one salient version of it) consists in a negative answer to the Comparison Question. These views fall between the extremes of a glass-empty view which answers no to the Existence Question, saying there is no progress in philosophy, and a glass-full thesis which answers yes to the Comparison Question, saying there is as much progress in philosophy as in science (or as much as we would like). Of course the glass-half-full thesis and the glass-half-empty thesis are consistent with one another. I think for almost anyone deeply involved with the practice of philosophy, both theses will ring true. In discussions of progress in philosophy, my experience is that most people focus on the Existence Question: pessimists about philosophical progress (e.g. Dietrich 2011; Nielsen 1987; McGinn 1993) argue for the glass-empty thesis, and optimists (e.g. Stoljar forthcoming) respond by defending the glass-half-full thesis. I will focus instead on the Comparison and Explanation Questions. I will articulate a version of the glass-half-empty thesis, argue for it, and then address the crucial question of what explains it. I should say that this paper is as much an exercise in the sociology of philosophy as in philosophy. For the most part I have abstracted away from my own philosophical and metaphilosophical views in order to take an “outside view” of philosophical progress from a sociological perspective. For much of the paper I am largely saying the obvious, but sometimes the obvious is worth saying so that less obvious things can be said from there. Only toward the end will I bring in my own views, which lean a little more toward the optimistic, and see how the question of philosophical progress stands in light of them."

- Is Time’s Arrow Perspectival? Carlo Rovelli: We observe entropy decrease towards the past. Does this imply that in the past the world was in a non-generic microstate? The author points out an alternative. The subsystem to which we belong interacts with the universe via a relatively small number of quantities, which define a coarse graining. Entropy happens to depend on coarse-graining. Therefore the entropy we ascribe to the universe depends on the peculiar coupling between us and the rest of the universe. Low past entropy may be due to the fact that this coupling (rather than the microstate of the universe) is non-generic. The author then argues that for any generic microstate of a sufficiently rich system there are always special subsystems defining a coarse graining for which the entropy of the rest is low in one time direction (the “past”). These are the subsystems allowing creatures that “live in time” —such as those in the biosphere— to exist. He then replies to some objections raised to an earlier presentation of this idea, in particular by Bob Wald, David Albert and Jim Hartle.

- Quantum physics in neuroscience and psychology: a neurophysical model of mind–brain interaction Jeffrey M Schwartz, Henry P Stapp, Mario Beauregard: Neuropsychological research on the neural basis of behaviour generally posits that brain mechanisms will ultimately suffice to explain all psychologically described phenomena. This assumption stems from the idea that the brain is made up entirely of material particles and fields, and that all causal mechanisms relevant to neuroscience can therefore be formulated solely in terms of properties of these elements. Thus, terms having intrinsic mentalistic and/or experiential content (e.g. ‘feeling’, ‘knowing’ and ‘effort’) are not included as primary causal factors. This theoretical restriction is motivated primarily by ideas about the natural world that have been known to be fundamentally incorrect for more than three-quarters of a century. Contemporary basic physical theory differs profoundly from classic physics on the important matter of how the consciousness of human agents enters into the structure of empirical phenomena. The new principles contradict the older idea that local mechanical processes alone can account for the structure of all observed empirical data. Contemporary physical theory brings directly and irreducibly into the overall causal structure certain psychologically described choices made by human agents about how they will act. This key development in basic physical theory is applicable to neuroscience, and it provides neuroscientists and psychologists with an alternative conceptual framework for describing neural processes. Indeed, owing to certain structural features of ion channels critical to synaptic function, contemporary physical theory must in principle be used when analysing human brain dynamics. The new framework, unlike its classic-physics-based predecessor, is erected directly upon, and is compatible with, the prevailing principles of physics. It is able to represent more adequately than classic concepts the neuroplastic mechanisms relevant to the growing number of empirical studies of the capacity of directed attention and mental effort to systematically alter brain function.

- When causation does not imply correlation: robust violations of the Faithfulness axiom Richard Kennaway: it is demonstrated here that the Faithfulness property that is assumed in much causal analysis is robustly violated for a large class of systems of a type that occurs throughout the life and social sciences: control systems. These systems exhibit correlations indistinguishable from zero between variables that are strongly causally connected, and can show very high correlations between variables that have no direct causal connection, only a connection via causal links between uncorrelated variables. Their patterns of correlation are robust, in that they remain unchanged when their parameters are varied. The violation of Faithfulness is fundamental to what a control system does: hold some variable constant despite the disturbing influences on it. No method of causal analysis that requires Faithfulness is applicable to such systems.
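The claim is easy to reproduce numerically. The sketch below simulates a generic integral controller (my own toy example, not one from the paper): the disturbance directly causes the controlled variable yet is nearly uncorrelated with it, while being almost perfectly correlated with the control output, to which it has no direct causal link.

```python
import numpy as np

rng = np.random.default_rng(0)

# A simple integral controller holding a variable near zero despite a drifting disturbance.
# Causal structure: d -> y (direct), u -> y (direct), y -> u (feedback); d and u have no direct link.
n, dt, gain = 20000, 0.01, 50.0
d = np.cumsum(rng.normal(0.0, 0.1, n))     # slowly drifting disturbance
y = np.zeros(n)                            # controlled variable
u = np.zeros(n)                            # control output
for t in range(1, n):
    y[t] = d[t] + u[t - 1]                 # y is directly caused by both d and u
    u[t] = u[t - 1] - gain * y[t] * dt     # integral feedback pushes y back toward zero

print("corr(d, y) =", round(np.corrcoef(d, y)[0, 1], 3))   # near 0 despite direct causation
print("corr(d, u) =", round(np.corrcoef(d, u)[0, 1], 3))   # near -1 despite no direct causal link
```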

- Renormalized spacetime is two-dimensional at the Planck scale T. Padmanabhan, Sumanta Chakraborty: Quantum field theory distinguishes between the bare variables – which we introduce in the Lagrangian – and the renormalized variables which incorporate the effects of interactions. This suggests that the renormalized, physical, metric tensor of spacetime (and all the geometrical quantities derived from it) will also be different from the bare, classical, metric tensor in terms of which the bare gravitational Lagrangian is expressed. The authors provide a physical ansatz to relate the renormalized metric tensor to the bare metric tensor such that the spacetime acquires a zero-point length ℓ_0 of the order of the Planck length L_P. This prescription leads to several remarkable consequences. In particular, the Euclidean volume V_D(ℓ, ℓ_0) of a region of size ℓ in a D-dimensional spacetime scales as V_D(ℓ, ℓ_0) ∝ ℓ_0^(D−2) ℓ^2 when ℓ ∼ ℓ_0, while it reduces to the standard result V_D(ℓ, ℓ_0) ∝ ℓ^D at large scales (ℓ ≫ ℓ_0). The appropriately defined effective dimension, D_eff, decreases continuously from D_eff = D (at ℓ ≫ ℓ_0) to D_eff = 2 (at ℓ ∼ ℓ_0). This suggests that the physical spacetime becomes essentially 2-dimensional near the Planck scale.
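A quick numerical illustration of the effective-dimension statement: the interpolating volume formula below is an assumption chosen only to reproduce the two quoted limits (it is not the ansatz of the paper), but it shows how D_eff = d ln V / d ln ℓ falls from D toward 2 as the probe scale approaches the zero-point length.

```python
import numpy as np

D, l0 = 4, 1.0                              # spacetime dimension and zero-point length (in units of l0)
l = np.logspace(-1, 2, 400)                 # probe scale

# Illustrative interpolation (assumed here, not taken from the paper) matching the quoted limits:
# V ~ l0^(D-2) * l^2 for l ~ l0, and V ~ l^D for l >> l0.
V = (l**2 + l0**2) ** ((D - 2) / 2) * l**2

# Effective dimension as the logarithmic derivative of the volume.
D_eff = np.gradient(np.log(V), np.log(l))
print(f"D_eff at l = 100 l0: {D_eff[-1]:.2f}")   # approaches D = 4
print(f"D_eff at l = 0.1 l0: {D_eff[0]:.2f}")    # approaches 2
```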

- CERN's LHCb experiment reports observation of exotic pentaquark particles: "The pentaquark is not just any new particle," said LHCb spokesperson Guy Wilkinson. "It represents a way to aggregate quarks, namely the fundamental constituents of ordinary protons and neutrons, in a pattern that has never been observed before in over fifty years of experimental searches. Studying its properties may allow us to understand better how ordinary matter, the protons and neutrons from which we're all made, is constituted." Our understanding of the structure of matter was revolutionized in 1964 when the American physicist Murray Gell-Mann proposed that a category of particles known as baryons, which includes protons and neutrons, are composed of three fractionally charged objects called quarks, and that another category, mesons, are formed of quark-antiquark pairs. Gell-Mann was awarded the Nobel Prize in physics for this work in 1969. This quark model also allows the existence of other quark composite states, such as pentaquarks composed of four quarks and an antiquark. Until now, however, no conclusive evidence for pentaquarks had been seen. LHCb researchers looked for pentaquark states by examining the decay of a baryon known as Λb (Lambda b) into three other particles, a J/ψ (J-psi), a proton and a charged kaon. Studying the spectrum of masses of the J/ψ and the proton revealed that intermediate states were sometimes involved in their production. These have been named Pc(4450)+ and Pc(4380)+, the former being clearly visible as a peak in the data, with the latter being required to describe the data fully. Earlier experiments that have searched for pentaquarks have proved inconclusive. Where the LHCb experiment differs is that it has been able to look for pentaquarks from many perspectives, with all pointing to the same conclusion. It's as if the previous searches were looking for silhouettes in the dark, whereas LHCb conducted the search with the lights on, and from all angles. The next step in the analysis will be to study how the quarks are bound together within the pentaquarks.

- Causes and Consequences of Income Inequality: A Global Perspective - INTERNATIONAL MONETARY FUND: Widening income inequality is the defining challenge of our time. In advanced economies, the gap between the rich and poor is at its highest level in decades. Inequality trends have been more mixed in emerging markets and developing countries (EMDCs), with some countries experiencing declining inequality, but pervasive inequities in access to education, health care, and finance remain. Not surprisingly then, the extent of inequality, its drivers, and what to do about it have become some of the most hotly debated issues by policymakers and researchers alike. Against this background, the objective of this paper is two-fold. First, the authors show why policymakers need to focus on the poor and the middle class. Earlier IMF work has shown that income inequality matters for growth and its sustainability. Their analysis suggests that the income distribution itself matters for growth as well. Specifically, if the income share of the top 20 percent (the rich) increases, then GDP growth actually declines over the medium term, suggesting that the benefits do not trickle down. In contrast, an increase in the income share of the bottom 20 percent (the poor) is associated with higher GDP growth. The poor and the middle class matter the most for growth via a number of interrelated economic, social, and political channels. Second, the authors investigate what explains the divergent trends in inequality developments across advanced economies and EMDCs, with a particular focus on the poor and the middle class. While most existing studies have focused on advanced countries and looked at the drivers of the Gini coefficient and the income of the rich, this study explores a more diverse group of countries and pays particular attention to the income shares of the poor and the middle class—the main engines of growth. This analysis suggests that technological progress and the resulting rise in the skill premium (positives for growth and productivity) and the decline of some labor market institutions have contributed to inequality in both advanced economies and EMDCs. Globalization has played a smaller but reinforcing role. Interestingly, we find that rising skill premium is associated with widening income disparities in advanced countries, while financial deepening is associated with rising inequality in EMDCs, suggesting scope for policies that promote financial inclusion. Policies that focus on the poor and the middle class can mitigate inequality. Irrespective of the level of economic development, better access to education and health care and well-targeted social policies, while ensuring that labor market institutions do not excessively penalize the poor, can help raise the income share for the poor and the middle class. There is no one-size-fits-all approach to tackling inequality. The nature of appropriate policies depends on the underlying drivers and country-specific policy and institutional settings. In advanced economies, policies should focus on reforms to increase human capital and skills,
coupled with making tax systems more progressive. In EMDCs, ensuring financial deepening is accompanied with greater financial inclusion and creating incentives for lowering informality would be important. More generally, complementarities between growth and income equality objectives suggest that policies aimed at raising average living standards can also influence the distribution of income and ensure a more inclusive prosperity.

- Does time dilation destroy quantum superposition? Why do we not see everyday objects in quantum superpositions? The answer to that long-standing question may partly lie with gravity. So says a group of physicists in Austria, which has shown theoretically that a feature of Einstein's general relativity, known as time dilation, can render quantum states classical. The researchers say that even the Earth's puny gravitational field may be strong enough for the effect to be measurable in a laboratory within a few years. Our daily experience suggests that there exists a fundamental boundary between the quantum and classical worlds. One way that physicists explain the transition between the two is to say that quantum superposition states simply break down when a system exceeds a certain size or level of complexity – its wavefunction is said to "collapse" and the system becomes "decoherent". An alternative explanation, in which quantum mechanics holds sway at all scales, posits that interactions with the environment bring different elements of an object's wavefunction out of phase, such that they no longer interfere with one another. Larger objects are subject to this decoherence more quickly than smaller ones because they have more constituent particles and, therefore, more complex wavefunctions. There are already multiple different explanations for decoherence, including a particle emitting or absorbing electromagnetic radiation or being buffeted by surrounding air molecules. In the latest work, Časlav Brukner at the University of Vienna and colleagues have put forward a new model that involves time dilation – where the flow of time is affected by mass (gravity). This relativistic effect allows for a clock in outer space to tick at a faster rate than one near the surface of the Earth. In their work, Brukner and colleagues consider a macroscopic body – whose constituent particles can vibrate at different frequencies – to be in a superposition of two states at very slightly different distances from the surface of a massive object. Time dilation would then dictate that the state closer to the object will vibrate at a lower frequency than the other. They then calculate how much time dilation is needed to differentiate the frequencies so that the two states get so far out of step with one another that they can no longer interfere. With this premise, the team worked out that even the Earth's gravitational field is strong enough to cause decoherence in quite small objects across measurable timescales. The researchers calculated that an object that weighs a gram and exists in two quantum states, separated vertically by a thousandth of a millimetre, should decohere in around a millisecond. Beyond any potential quantum-computing applications that would benefit from the removal of unwanted decoherence, the work challenges physicists' assumption that only gravitational fields generated by neutron stars and other massive astrophysical objects can exert a noticeable influence on quantum phenomena. "The interesting thing about this phenomenon is that both quantum mechanics and general relativity would be needed to explain it," says Brukner.
Quantum clocks: One way to experimentally test the effect would involve sending a "clock" (such as a beam of caesium atoms) through the two arms of an interferometer. The interferometer would initially be positioned horizontally and the interference pattern recorded. It would then be rotated to the vertical, such that one arm experiences a higher gravitational potential than the other, and its output again observed. In the latter case, the two states vibrate at different frequencies due to time dilation. This different rate of ticking would reveal which state is travelling down each arm, and once this information is revealed, the interference pattern disappears. "People have already measured time dilation due to Earth's gravity," says Brukner, "but they usually use two clocks in two different positions. We are saying, why not use one clock in a superposition?" Carrying out such a test, however, will not be easy. The fact that the effect is far smaller than other potential sources of decoherence would mean cooling the interferometer down to just a few kelvin while enclosing it in a vacuum, says Brukner.
The measurements would still be extremely tricky, according to Markus Arndt, at the University of Vienna, who was not involved in the current work. He says they could require superpositions around a million times bigger and 1000 times longer lasting than is possible with the best equipment today. Nevertheless, Arndt praises the proposal for "directing attention" towards the interface between quantum mechanics and gravity. He also points out that any improvements to interferometers needed for this work could also have practical benefits, such as allowing improved tests of relativity or enhancing tools for geodesy.
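A back-of-the-envelope sketch of the dephasing mechanism, using only the standard weak-field redshift formula Δf/f ≈ gΔx/c²; the internal frequency, the particle number and the √N speed-up of the collective dephasing are rough assumptions made here for illustration, not numbers taken from the article.

```python
import math

g = 9.81                  # m/s^2, Earth's surface gravity
c = 3.0e8                 # m/s, speed of light
dx = 1e-6                 # m, vertical separation ("a thousandth of a millimetre")
f = 1e13                  # Hz, assumed typical thermal vibration frequency (placeholder)
N = 5e22                  # assumed number of atoms in a one-gram object

# Standard weak-field gravitational time dilation between the two heights.
df_over_f = g * dx / c ** 2
print(f"fractional frequency shift: {df_over_f:.1e}")

# Time for a single oscillator's phase difference to slip by about half a cycle.
t_single = 0.5 / (f * df_over_f)
print(f"single-oscillator dephasing time: {t_single:.1e} s")

# Rough collective estimate: with N independent thermal oscillators the interference
# visibility decays roughly sqrt(N) times faster, landing near the quoted millisecond scale.
print(f"rough N-particle estimate: {t_single / math.sqrt(N):.1e} s")
```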

- Judgment Aggregation in Science Liam Kofi Bright, Haixin Dang, and Remco Heesen: This paper raises the problem of judgment aggregation in science. The problem has two sides. First, how do scientists decide which propositions to assert in a collaborative document? And second, how should they make such decisions? The literature on judgment aggregation is relevant to the second question. Although little evidence is available regarding the first question, it suggests that current scientific practice is not in line with the most plausible recommendations from the judgment aggregation literature. The authors explore the evidence that is presently available before suggesting a number of avenues for future research on this problem.

- A Stronger Bell Argument for Quantum Non-Locality Paul M. Nager: It is widely accepted that the violation of Bell inequalities excludes local theories of the quantum realm. This paper presents a stronger Bell argument which even forbids certain non-local theories. Among these excluded non-local theories are those whose only non-local connection is a probabilistic (or functional) dependence between the space-like separated measurement outcomes of EPR/B experiments (a subset of outcome dependent theories). In this way, the new argument shows that the result of the received Bell argument, which requires just any kind of nonlocality, is inappropriately weak. Positively, the remaining non-local theories, which can violate Bell inequalities (among them quantum theory), are characterized by the fact that at least one of the measurement outcomes in some sense probabilistically depends on both its local and its distant measurement setting (probabilistic Bell contextuality). Whether an additional dependence between the outcomes holds is irrelevant to the question of whether a certain theory can violate Bell inequalities. This new concept of quantum non-locality is considerably tighter and more informative than the one following from the usual Bell argument. It is proven that (given usual background assumptions) the result of the stronger Bell argument presented here is the strongest possible consequence from the violation of Bell inequalities on a qualitative probabilistic level.

- General relativity as a two-dimensional CFT Tim Adamo: The tree-level scattering amplitudes of general relativity encode the full non-linearity of the Einstein field equations. Yet remarkably compact expressions for these amplitudes have been found which seem unrelated to a perturbative expansion of the Einstein-Hilbert action. This suggests an entirely different description of GR which makes this on-shell simplicity manifest. Taking a cue from the tree-level amplitudes, the author discusses how such a description can be found. The result is a formulation of GR in terms of a solvable two-dimensional CFT, with the Einstein equations emerging as quantum consistency conditions.

- The Rise and Decline of General Laws of Capitalism Daron Acemoglu, James A. Robinson: Thomas Piketty's (2013) book, Capital in the 21st Century, follows in the tradition of the great classical economists, like Marx and Ricardo, in formulating general laws of capitalism to diagnose and predict the dynamics of inequality. The authors argue that general economic laws are unhelpful as a guide to understand the past or predict the future, because they ignore the central role of political and economic institutions, as well as the endogenous evolution of technology, in shaping the distribution of resources in society. The authors use regression evidence to show that the main economic force emphasized in Piketty's book, the gap between the interest rate and the growth rate, does not appear to explain historical patterns of inequality (especially, the share of income accruing to the upper tail of the distribution). They then use the histories of inequality of South Africa and Sweden to illustrate that inequality dynamics cannot be understood without embedding economic factors in the context of economic and political institutions, and also that the focus on the share of top incomes can give a misleading characterization of the true nature of inequality.

- Strange behavior of quantum particles may indicate the existence of other parallel universes John Davis: It started about five years ago with a practical chemistry question. Little did Bill Poirier realize as he delved into the quantum mechanics of complex molecules that he would fall down the rabbit hole to discover evidence of other parallel worlds that might well be poking through into our own, showing up at the quantum level. The Texas Tech University professor of chemistry and biochemistry said that quantum mechanics is a strange realm of reality. Particles at this atomic and subatomic level can appear to be in two places at once. Because the activity of these particles is so iffy, scientists can only describe what's happening mathematically by "drawing" the tiny landscape as a wave of probability. Chemists like Poirier draw these landscapes to better understand chemical reactions. Despite the "uncertainty" of particle location, quantum wave mechanics allows scientists to make precise predictions. The rules for doing so are well established. At least, they were until Poirier's recent "eureka" moment when he found a completely new way to draw quantum landscapes. Instead of waves, his medium became parallel universes. Though his theory, called "Many Interacting Worlds," sounds like science fiction, it holds up mathematically. Originally published in 2010, it has led to a number of invited presentations, peer-reviewed journal articles and a recent invited commentary in the premier physics journal Physical Review. "This has gotten a lot of attention in the foundational mechanics community as well as the popular press," Poirier said. "At a symposium in Vienna in 2013, standing five feet away from a famous Nobel Laureate in physics, I gave my presentation on this work fully expecting criticism. I was surprised when I received none. Also, I was happy to see that I didn't have anything obviously wrong with my mathematics." In his theory, Poirier postulates that small particles from many worlds seep through to interact with our own, and their interaction accounts for the strange phenomena of quantum mechanics. Such phenomena include particles that seem to be in more than one place at a time, or to communicate with each other over great distances without explanations.

- A statistical method for studying correlated rare events and their risk factors Xiaonan Xue, Mimi Y Kim, Tao Wang, Mark H Kuniholm, Howard D Strickler: Longitudinal studies of rare events such as cervical high-grade lesions or colorectal polyps that can recur often involve correlated binary data. Risk factors for these events cannot be reliably examined using conventional statistical methods. For example, logistic regression models that incorporate generalized estimating equations often fail to converge or provide inaccurate results when analyzing data of this type. Although exact methods have been reported, they are complex and computationally difficult. The current paper proposes a mathematically straightforward and easy-to-use two-step approach involving (i) an additive model to measure associations between a rare or uncommon correlated binary event and potential risk factors and (ii) a permutation test to estimate the statistical significance of these associations. Simulation studies showed that the proposed method reliably tests and accurately estimates the associations of exposure with correlated binary rare events. This method was then applied to a longitudinal study of human leukocyte antigen (HLA) genotype and risk of cervical high-grade squamous intraepithelial lesions (HSIL) among HIV-infected and HIV-uninfected women. Results showed statistically significant associations of two HLA alleles among HIV-negative but not HIV-positive women, suggesting that immune status may modify the HLA and cervical HSIL association. Overall, the proposed method avoids model non-convergence problems and provides a computationally simple, accurate, and powerful approach for the analysis of risk factor associations with rare/uncommon correlated binary events.
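A minimal sketch of the two-step idea on simulated data (an illustration of the general recipe, not the authors' implementation): an additive, risk-difference measure of association on correlated binary outcomes, followed by a subject-level permutation test so that the within-subject correlation structure is preserved under the null. All data and parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy clustered data: n subjects with m visits each, binary rare event y, subject-level exposure x.
n, m = 200, 5
x = rng.integers(0, 2, n)                       # subject-level exposure
base = 0.02 + 0.03 * x                          # additive risk model (assumed, for illustration)
frailty = rng.gamma(2.0, 0.5, n)                # subject frailty induces within-subject correlation
p = np.clip(base[:, None] * frailty[:, None], 0, 1)
y = rng.random((n, m)) < p                      # correlated binary events, shape (n, m)

# Step (i): additive association = difference in per-visit event rates (risk difference).
def risk_diff(y, x):
    return y[x == 1].mean() - y[x == 0].mean()

obs = risk_diff(y, x)

# Step (ii): permutation test, shuffling exposure labels at the *subject* level so the
# correlation structure within subjects is left intact under the null hypothesis.
B = 5000
null = np.array([risk_diff(y, rng.permutation(x)) for _ in range(B)])
p_value = (np.abs(null) >= abs(obs)).mean()
print(f"risk difference = {obs:.4f}, permutation p = {p_value:.4f}")
```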

- Why Not Capitalism? Jason Brennan: 'Most economists believe capitalism is a compromise with selfish human nature. As Adam Smith put it, "It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest." Capitalism works better than socialism, according to this thinking, only because we are not kind and generous enough to make socialism work. If we were saints, we would be socialists. In Why Not Capitalism?, Jason Brennan attacks this widely held belief, arguing that capitalism would remain the best system even if we were morally perfect. Even in an ideal world, private property and free markets would be the best way to promote mutual cooperation, social justice, harmony, and prosperity. Socialists seek to capture the moral high ground by showing that ideal socialism is morally superior to realistic capitalism. But, Brennan responds, ideal capitalism is superior to ideal socialism, and so capitalism beats socialism at every level. Clearly, engagingly, and at times provocatively written, Why Not Capitalism? will cause readers of all political persuasions to re-evaluate where they stand vis-à-vis economic priorities and systems—as they exist now and as they might be improved in the future.'

- An argument for ψ-ontology in terms of protective measurements Shan Gao: The ontological model framework provides a rigorous approach to address the question of whether the quantum state is ontic or epistemic. When considering only conventional projective measurements, auxiliary assumptions are always needed to prove the reality of the quantum state in the framework. For example, the Pusey-Barrett-Rudolph theorem is based on an additional preparation independence assumption. In this paper, the author gives a new proof of ψ-ontology in terms of protective measurements in the ontological model framework. It is argued that the proof need not rely on auxiliary assumptions, and also applies to deterministic theories such as the de Broglie-Bohm theory. In addition, the author gives a simpler argument for ψ-ontology beyond the framework, which is only based on protective measurements and a weaker criterion of reality. The argument may also be appealing to those who favor an anti-realist view of quantum mechanics.

- Depth and Explanation in Mathematics Marc Lange: This paper argues that in at least some cases, one proof of a given theorem is deeper than another by virtue of supplying a deeper explanation of the theorem — that is, a deeper account of why the theorem holds. There are cases of scientific depth that also involve a common abstract structure explaining a similarity between two otherwise unrelated phenomena, making their similarity no coincidence and purchasing depth by answering why questions that separate, dissimilar explanations of the two phenomena cannot correctly answer. The connections between explanation, depth, unification, power, and coincidence in mathematics and science are compared.

- Does Inflation Solve the Hot Big Bang Model’s Fine Tuning Problems? C.D. McCoy: Cosmological inflation is widely considered an integral and empirically successful component of contemporary cosmology. It was originally motivated (and usually still is) by its solution of certain so-called fine-tuning problems of the hot big bang model, particularly what are known as the horizon problem and the flatness problem. Although the physics behind these problems is clear enough, the nature of the problems depends on the sense in which the hot big bang model is fine-tuned and how the alleged fine-tuning is problematic. Without clear explications of these, it remains unclear precisely what problems inflationary theory is meant to be solving and whether it does in fact solve them. The author analyzes here the structure of these problems and considers various interpretations that may substantiate the alleged fine-tuning. On the basis of this analysis he argues that at present there is no unproblematic interpretation available for which it can be said that inflation solves the big bang model’s alleged fine-tuning problems.
- Towards the geometry of the universe from data H.L. Bester, J. Larena, and N.T. Bishop: the authors present an algorithm that can reconstruct the full distributions of metric components within the class of spherically symmetric dust universes that may include a cosmological constant. The algorithm is capable of confronting this class of solutions with arbitrary data. In this work they use luminosity and age data to constrain the geometry of the universe up to a redshift of z = 1.75. They go on to show that the current data are perfectly compatible with homogeneous models of the universe and that these models seem to be favoured at low redshift.
- Truthful Linear Regression Rachel Cummings, Stratis Ioannidis, Katrina Ligett: the authors consider the problem of fitting a linear model to data held by individuals who are concerned about their privacy. Incentivizing most players to truthfully report their data to the analyst constrains their design to mechanisms that provide a privacy guarantee to the participants; the authors use differential privacy to model individuals’ privacy losses. This immediately poses a problem, as differentially private computation of a linear model necessarily produces a biased estimate, and existing approaches to designing mechanisms that elicit data from privacy-sensitive individuals do not generalize well to biased estimators. They overcome this challenge through an appropriate design of the computation and payment scheme.
- Theory of the effectivity of human problem solving Frantisek Duris: The ability to solve problems effectively is one of the hallmarks of human cognition. Yet, in our opinion it gets far less research focus than it rightly deserves. In this paper the author outlines a framework in which this effectivity can be studied; he identifies the possible roots and scope of this effectivity and the cognitive processes directly involved. More particularly, it is observed that people can use cognitive mechanisms to drive problem solving in the same manner as that on which the optimal problem solving strategy suggested by Solomonoff (1986) is based. Furthermore, evidence is provided for the cognitive substrate hypothesis (Cassimatis, 2006), which states that human-level AI in all domains can be achieved by a relatively small set of cognitive mechanisms. The results presented in this paper can serve both cognitive psychology, in better understanding human problem solving processes, and artificial intelligence, in designing more human-like intelligent agents.
- Limit Theorems for Empirical Rényi Entropy and Divergence with Applications to Molecular Diversity Analysis Maciej Pietrzak, Grzegorz A. Rempala, Michał Seweryn, Jacek Wesołowski: Quantitative methods for studying biodiversity have been traditionally rooted in the classical theory of finite frequency tables analysis. However, with the help of modern experimental tools, like high-throughput sequencing, the authors now begin to unlock the outstanding diversity of genomic data in plants and animals reflective of the long evolutionary history of our planet. This molecular data often defies the classical frequency/contingency table assumptions and typically comes in the form of sparse tables with a very large number of categories and highly unbalanced cell counts, e.g., following heavy-tailed distributions (for instance, power laws). Motivated by the molecular diversity studies, the authors propose here a frequency-based framework for biodiversity analysis in the asymptotic regime where the number of categories grows with sample size (an infinite contingency table). The approach is rooted in information theory and based on the Gaussian limit results for the effective number of species (the Hill numbers) and the empirical Rényi entropy and divergence. The authors argue that, when applied to molecular biodiversity analysis, their methods can properly account for the complicated data frequency patterns on one hand and the practical sample size limitations on the other. They illustrate this principle with two specific RNA sequencing examples: a comparative study of T-cell receptor populations and a validation of some preselected molecular hepatocellular carcinoma (HCC) markers.
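For concreteness, the empirical Rényi entropy and the corresponding Hill numbers are straightforward to compute from a raw abundance table. The sketch below just applies the standard definitions (H_α = log(∑_i p_i^α)/(1−α), Hill number = exp(H_α)) to made-up counts; it says nothing about the asymptotic theory developed in the paper.

```python
import numpy as np

def renyi_entropy(counts, alpha):
    """Empirical Renyi entropy of order alpha from raw category counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(alpha, 1.0):                  # alpha -> 1 recovers Shannon entropy
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def hill_number(counts, alpha):
    """Effective number of species (Hill number) of order alpha."""
    return np.exp(renyi_entropy(counts, alpha))

# Toy species-abundance table (assumed data, for illustration only)
counts = [500, 120, 40, 10, 5, 1, 1, 1]
for a in (0.0, 1.0, 2.0):
    print(f"alpha={a}: H={renyi_entropy(counts, a):.3f}, Hill={hill_number(counts, a):.3f}")
```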
- Brane induced gravity: Ghosts and naturalness Ludwig Eglseer, Florian Niedermann, and Robert Schneider: Linear stability of brane induced gravity in two codimensions on a static pure tension background is investigated. By explicitly calculating the vacuum persistence amplitude of the corresponding quantum theory, the authors show that the parameter space is divided into two regions—one corresponding to a stable Minkowski vacuum on the brane and one being plagued by ghost instabilities. This analytical result affirms a recent nonlinear, but mainly numerical analysis. The main result is that the ghost is absent for a sufficiently large brane tension, in perfect agreement with a value expected from a natural effective field theory point of view. Unfortunately, the linearly stable parameter regime is either ruled out phenomenologically or destabilized due to nonlinearities. The authors argue that inflating brane backgrounds constitute the remaining window of opportunity. In the special case of a tensionless brane, they find that the ghost exists for any nonzero value of the induced gravity scale. Regarding this case, there are contradicting results in the literature, and they are able to fully resolve this controversy by explicitly uncovering the errors made in the “no-ghost” analysis. Finally, a Hamiltonian analysis generalizes the ghost result to more than two codimensions.
- How mathematics reveals the nature of the cosmos Joshua Carroll: 'If it were not for mathematics, we would still think we were on one of a few planets orbiting a star amidst the backdrop of seemingly motionless lights. This is a rather bleak outlook today compared to what we now know about the awesomely large universe we reside in. This idea of the universe motivating us to understand more about mathematics can be seen in how Johannes Kepler used what he observed the planets doing, and then applied mathematics to it to develop a fairly accurate model (and method for predicting planetary motion) of the solar system. This is one of many demonstrations that illustrate the importance of mathematics within our history, especially within astronomy and physics. The story of mathematics becomes even more amazing as we push forward to one of the most advanced thinkers humanity has ever known: Sir Isaac Newton, when pondering the motions of Halley's Comet, came to the realization that the math that had been used thus far to describe the physical motion of massive bodies simply would not suffice if we were to ever understand anything beyond that of our seemingly limited celestial nook. In a show of pure brilliance that lends validity to my earlier statement about how we can take what we naturally have and then construct a more complex system upon it, Newton developed the Calculus. With this new way of approaching moving bodies, he was able to accurately model the motion of not only Halley's Comet, but also any other heavenly body that moved across the sky. In one instant, our entire universe opened up before us, unlocking almost unlimited abilities for us to converse with the cosmos as never before. Newton also expanded upon what Kepler started. Newton recognized that Kepler's mathematical equation for planetary motion, Kepler's 3rd Law, was purely based on empirical observation, and was only meant to measure what we observed within our solar system. Newton's mathematical brilliance was in realizing that this basic equation could be made universal by applying a gravitational constant to the equation, which gave birth to perhaps one of the most important equations ever derived by mankind: Newton's version of Kepler's Third Law. What Newton realized was that when things move in non-linear ways, using basic Algebra would not produce the correct answer. Herein lies one of the main differences between Algebra and Calculus. Algebra allows one to find the slope (rate of change) of straight lines (constant rate of change), whereas Calculus allows one to find the slope of curved lines (variable rate of change). There are obviously many more applications of Calculus than just this, but I am merely illustrating a fundamental difference between the two in order to show you just how revolutionary this new concept was. All at once, the motions of planets and other objects that orbit the sun became more accurately measurable, and thus we gained the ability to understand the universe a little deeper. Referring back to Newton's version of Kepler's Third Law, we were now able to apply (and still do) this incredible physics equation to almost anything that is orbiting something else. From this equation, we can determine the mass of either of the objects, the distance apart they are from each other, the force of gravity that is exerted between the two, and other physical qualities built from these simple calculations.
This is the beauty of mathematics writ large: an ongoing conversation with the universe in which more than we may expect is revealed. It fell to the French mathematician Urbain Le Verrier, who sat down and painstakingly worked through the mathematical equations of the orbit of Uranus. What he was doing was using Newton's mathematical equations backwards, realizing that there must be an object out there beyond the orbit of Uranus that was also orbiting the sun, and then looking to apply the right mass and distance that this unseen object required for perturbing the orbit of Uranus in the way we were observing. This was phenomenal, as we were using parchment and ink to find a planet that nobody had ever actually observed. What he found was that an object, soon to be Neptune, had to be orbiting at a specific distance from the sun, with the specific mass that would cause the irregularities in the orbital path of Uranus. Confident of his mathematical calculations, he took his numbers to the New Berlin Observatory, where the astronomer Johann Gottfried Galle looked exactly where Le Verrier's calculations told him to, and there lay the 8th and final planet of our solar system, less than 1 degree off from the predicted position. What had just happened was an incredible confirmation of Newton's gravitational theory and proved that his mathematics was correct.'
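Newton's version of Kepler's third law, T² = 4π²a³/(G(M₁+M₂)), is simple enough to check in a few lines; the sketch below recovers Earth's orbital period from the Sun's mass and the Earth-Sun distance.

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30        # kg
M_earth = 5.972e24      # kg
a = 1.496e11            # semi-major axis of Earth's orbit, m (1 AU)

# Newton's version of Kepler's third law: T^2 = 4*pi^2 * a^3 / (G * (M1 + M2))
T = 2 * math.pi * math.sqrt(a**3 / (G * (M_sun + M_earth)))
print(f"Orbital period: {T / 86400:.1f} days")   # roughly 365 days
```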
- A Bayesian Approach for Detecting Mass-Extinction Events When Rates of Lineage Diversification Vary Michael R. May, Sebastian Höhna, Brian R. Moore: the paleontological record chronicles numerous episodes of mass extinction that severely culled the Tree of Life. Biologists have long sought to assess the extent to which these events may have impacted particular groups. The authors present a novel method for detecting mass-extinction events from phylogenies estimated from molecular sequence data. They develop an approach in a Bayesian statistical framework, which enables them to harness prior information on the frequency and magnitude of mass-extinction events. The approach is based on an episodic stochastic-branching process model in which rates of speciation and extinction are constant between rate-shift events. They then model three types of events: (1) instantaneous tree-wide shifts in speciation rate; (2) instantaneous tree-wide shifts in extinction rate, and; (3) instantaneous tree-wide mass-extinction events. Each of the events is described by a separate compound Poisson process (CPP) model, where the waiting times between each event are exponentially distributed with event-specific rate parameters. The magnitude of each event is drawn from an event-type specific prior distribution. Parameters of the model are then estimated using a reversible-jump Markov chain Monte Carlo (rjMCMC) algorithm. They demonstrate via simulation that this method has substantial power to detect the number of mass-extinction events, provides unbiased estimates of the timing of mass-extinction events, while exhibiting an appropriate (i.e., below 5%) false discovery rate even in the case of background diversification rate variation. Finally, they provide an empirical application of this approach to conifers, which reveals that this group has experienced two major episodes of mass extinction. This new approach - the CPP on Mass Extinction Times (CoMET) model - provides an effective tool for identifying mass-extinction events from molecular phylogenies, even when the history of those groups includes more prosaic temporal variation in diversification rate.
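To make the event process concrete, here is a toy simulation of tree-wide mass-extinction times under a compound Poisson process with exponentially distributed waiting times, as described in the abstract; the rate, the tree age and the Beta prior on survival fractions are placeholder assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Waiting times between tree-wide mass-extinction events are exponential with an
# event-specific rate (a compound Poisson process).
rate = 0.02          # assumed rate: expected mass-extinction events per Myr
tree_age = 300       # assumed age of the clade, in Myr

t, event_times = 0.0, []
while True:
    t += rng.exponential(1.0 / rate)
    if t > tree_age:
        break
    event_times.append(round(t, 1))

# The magnitude of each event is drawn from an event-type-specific prior; a Beta prior
# on the fraction of lineages surviving each event is assumed here purely for illustration.
survival_fractions = rng.beta(2.0, 18.0, size=len(event_times))
print(list(zip(event_times, np.round(survival_fractions, 2))))
```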
- Scientists discover a protein that silences the biological clock A new study led by UC Santa Cruz researchers has found that a protein associated with cancer cells is a powerful suppressor of the biological clock that drives the daily ("circadian") rhythms of cells throughout the body. The discovery, published in the June 4 issue of Molecular Cell, adds to a growing body of evidence suggesting a link between cancer and disruption of circadian rhythms, while offering new insights into the molecular mechanisms of the biological clock. The ticking of the biological clock drives fluctuations in gene activity and protein levels that give rise to daily cycles in virtually every aspect of physiology in humans and other animals. A master clock in the brain, tuned to the daily cycle of light and dark, sends out signals that synchronize the molecular clocks ticking away in almost every cell and tissue of the body. Disruption of the clock has been associated with a variety of health problems, including diabetes, heart disease, and cancer. According to Carrie Partch, a professor of chemistry and biochemistry at UC Santa Cruz and corresponding author of the paper, the connection between clock disruption and cancer is still unclear. "The clock is not always disrupted in cancer cells, but studies have shown that disrupting circadian rhythms in mice causes tumors to grow faster, and one of the things the clock does is set restrictions on when cells can divide," she said. The new study focused on a protein called PASD1 that Partch's collaborators at the University of Oxford had found was expressed in a broad range of cancer cells, including melanoma, lung cancer, and breast cancer. It belongs to a group of proteins known as "cancer/testis antigens," which are normally expressed in the germ line cells that give rise to sperm and eggs, but are also found in some cancer cells. Cancer researchers have been interested in these proteins as markers for cancer and as potential targets for therapeutic cancer vaccines.
- AdS/CFT without holography: A hidden dimension on the CFT side and implications for black-hole entropy Hrvoje Nikolic: the author proposes a new non-holographic formulation of the AdS/CFT correspondence, according to which quantum gravity on AdS and its dual non-gravitational field theory both live in the same number D of dimensions. The field theory, however, appears (D − 1)-dimensional because the interactions do not propagate in one of the dimensions. The D-dimensional action for the field theory can be identified with the sum over (D−1)-dimensional actions with all possible values Λ of the UV cutoff, so that the extra hidden dimension can be identified with Λ. Since there are no interactions in the extra dimension, most of the practical results of the standard holographic AdS/CFT correspondence carry over to non-holographic AdS/CFT without any changes. However, the implications for black-hole entropy change significantly. The maximal black-hole entropy now scales with volume, while the Bekenstein-Hawking entropy is interpreted as the minimal possible black-hole entropy. In this way, the non-holographic AdS/CFT correspondence offers a simple resolution of the black-hole information paradox, consistent with a recently proposed gravitational crystal.
- Sharp minimax tests for large Toeplitz covariance matrices with repeated observations Cristina Butucea, Rania Zgheib: the authors observe a sample of n independent p-dimensional Gaussian vectors with Toeplitz covariance matrix Σ = [σ_{|i−j|}]_{1≤i,j≤p} and σ_0 = 1. They consider the problem of testing the hypothesis that Σ is the identity matrix asymptotically as n → ∞ and p → ∞. They also suppose that the covariances σ_k decrease either polynomially (∑_{k≥1} k^{2α} σ_k^2 ≤ L for α > 1/4 and L > 0) or exponentially (∑_{k≥1} e^{2Ak} σ_k^2 ≤ L for A, L > 0). The authors then consider a test procedure based on a weighted U-statistic of order 2, with optimal weights chosen as the solution of an extremal problem. They establish the asymptotic normality of the test statistic under the null hypothesis for fixed n and p → +∞, together with the asymptotic behavior of the type I error probability of their test procedure. They also show that the maximal type II error probability either tends to 0 or is bounded from above. In the latter case, the upper bound is given using the asymptotic normality of the test statistic under alternatives close to the separation boundary. Their assumptions imply mild conditions: n = o(p^{2α−1/2}) in the polynomial case and n = o(e^p) in the exponential case. The authors prove both rate optimality and sharp optimality of their results, for α > 1 in the polynomial case and for any A > 0 in the exponential case. A simulation study illustrates the good behavior of their procedure, in particular for small n and large p.
- Pignistic Probability Transforms for Mixes of Low-and-High-Probability Events John J. Sudano: In some real-world information fusion situations, time-critical decisions must be made with an incomplete information set. Belief function theories (e.g., the Dempster-Shafer theory of evidence, the Transferable Belief Model) have been shown to provide a reasonable methodology for processing or fusing the quantitative clues or information measurements that form the incomplete information set. For decision making, the pignistic (from the Latin pignus, a bet) probability transform has been shown to be a good method of using Beliefs or basic belief assignments (BBAs) to make decisions. For many systems, one need only address the most-probable elements in the set. For some critical systems, one must evaluate the risk of wrong decisions and establish safe probability thresholds for decision making. This adds a greater complexity to decision making, since one must address all elements in the set that are above the risk decision threshold. The problem is greatly simplified if most of the probabilities fall below this threshold. Finding a probability transform that properly represents mixes of low- and high-probability events is essential. This article introduces four new pignistic probability transforms with an implementation that uses the latest values of Beliefs, Plausibilities, or BBAs to improve the pignistic probability estimates. Some of them assign smaller probabilities to smaller values of Beliefs or BBAs than the Smets pignistic transform does, and higher probabilities to larger values of Beliefs or BBAs. These probability transforms will assign a value of probability that converges faster to the values below the risk threshold. A probability information content (PIC) variable is also introduced that assigns an information content value to any set of probabilities. Four operators are defined to help simplify the derivations. This article outlines a systematic methodology for making better decisions using Belief function theories. This methodology can be used to automate critical decisions in complex systems.
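For reference, the classical Smets pignistic transform that the four new transforms are compared against redistributes each focal element's mass evenly over its members. A minimal sketch (the new transforms themselves are not reproduced here):

```python
# Classical Smets pignistic transform from a basic belief assignment (BBA).
def pignistic(bba):
    """bba: dict mapping frozenset (focal element) -> basic belief mass.
    Returns BetP, a probability distribution over singletons."""
    empty_mass = bba.get(frozenset(), 0.0)
    betp = {}
    for focal, mass in bba.items():
        if not focal:                       # skip the empty set
            continue
        share = mass / (len(focal) * (1.0 - empty_mass))
        for element in focal:               # spread mass evenly over the set
            betp[element] = betp.get(element, 0.0) + share
    return betp

# Example: masses on {a}, {a, b} and {b, c}
bba = {frozenset({'a'}): 0.5, frozenset({'a', 'b'}): 0.3, frozenset({'b', 'c'}): 0.2}
print(pignistic(bba))   # {'a': 0.65, 'b': 0.25, 'c': 0.1}
```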
- On the Computational Complexity of High-Dimensional Bayesian Variable Selection Yun Yang, Martin J. Wainwright, Michael I. Jordan: the authors study the computational complexity of Markov chain Monte Carlo (MCMC) methods for high-dimensional Bayesian linear regression under sparsity constraints. They first show that a Bayesian approach can achieve variable-selection consistency under relatively mild conditions on the design matrix. Furthermore they demonstrate that the statistical criterion of posterior concentration need not imply the computational desideratum of rapid mixing of the MCMC algorithm. By introducing a truncated sparsity prior for variable selection, they provide a set of conditions that guarantee both variable-selection consistency and rapid mixing of a particular Metropolis-Hastings algorithm. The mixing time is linear in the number of covariates up to a logarithmic factor. Their proof controls the spectral gap of the Markov chain by constructing a canonical path ensemble that is inspired by the steps taken by greedy algorithms for variable selection.
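As a rough illustration of the kind of Metropolis-Hastings walk over sparse models analyzed here, the sketch below proposes single-coordinate additions and deletions and scores models by BIC as a convenient stand-in for the marginal likelihood under the authors' truncated sparsity prior; all specifics are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch of Metropolis-Hastings over sparse regression models.
import numpy as np

rng = np.random.default_rng(1)

def bic_score(X, y, support):
    """Log-likelihood minus complexity penalty for the given support set."""
    n = len(y)
    if support:
        Xs = X[:, sorted(support)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ beta
    else:
        resid = y
    rss = float(resid @ resid)
    return -0.5 * n * np.log(rss / n) - 0.5 * len(support) * np.log(n)

def mh_variable_selection(X, y, n_iter=2000, max_size=10):
    p = X.shape[1]
    support, score = set(), bic_score(X, y, set())
    for _ in range(n_iter):
        proposal = set(support)
        j = rng.integers(p)
        if j in proposal:
            proposal.remove(j)              # deletion move
        elif len(proposal) < max_size:
            proposal.add(j)                 # addition move
        new_score = bic_score(X, y, proposal)
        if np.log(rng.uniform()) < new_score - score:   # accept/reject
            support, score = proposal, new_score
    return support

# Toy data: 200 observations, 50 covariates, 3 truly active
X = rng.standard_normal((200, 50))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(200)
print(sorted(mh_variable_selection(X, y)))   # typically recovers [0, 1, 2]
```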
- Schrödinger Equation of a particle in an Uniformly Accelerated Frame and the Possibility of a New kind of Quanta Sanchari De and Somenath Chakrabarty: In this article the authors develop a formalism to obtain the Schrödinger equation for a particle in a frame undergoing uniform acceleration in an otherwise flat Minkowski space-time geometry. They present an exact solution of the equation and obtain the eigenfunctions and the corresponding eigenvalues. It is observed that the Schrödinger equation can be reduced to a one-dimensional hydrogen atom problem, while the quantized energy levels are exactly identical to those of a one-dimensional quantum harmonic oscillator. Hence, considering transitions, the authors predict the existence of a new kind of quanta, which will either be emitted or absorbed as the particles get excited or de-excited, respectively.
- Classical Verification of Quantum Proofs Zhengfeng Ji: the author presents a classical interactive protocol that verifies the validity of a quantum witness state for the local Hamiltonian problem. It follows from this protocol that approximating the non-local value of a multi-player one-round game to inverse polynomial precision is 'QMA'-hard. His work makes an interesting connection between the theory of 'QMA'-completeness and Hamiltonian complexity on one hand and the study of non-local games and Bell inequalities on the other.
- A Defense of Scientific Platonism without Metaphysical Presuppositions Peter Punin: From the Platonistic standpoint, mathematical edifices form an immaterial, unchanging, and eternal world that exists independently of human thought. By extension, "scientific Platonism" says that directly mathematizable physical phenomena – in other terms, the research field of physics – are governed by entities belonging to this objectively existing mathematical world. Platonism is a metaphysical theory. But since metaphysical theories, by definition, are neither provable nor refutable, anti-Platonistic approaches cannot be less metaphysical than Platonism itself. In other words, anti-Platonism is not "more scientific" than Platonism. All we can do is compare Platonism and its negations under epistemological criteria such as simplicity, economy of hypotheses, or consistency with regard to their respective consequences. In this paper the author intends to show that anti-Platonism, which claims in a first approximation (i) that mathematical edifices consist of meaningless signs assembled according to arbitrary rules, and (ii) that the adequacy of mathematical entities to the phenomena covered by physics results from idealization of these phenomena, rests as much as Platonism on metaphysical presuppositions. Thereafter, without directly taking a position, he tries to launch a debate focusing on the following questions: (i) To maintain its coherence, is anti-Platonism not constrained to adopt extremely complex assumptions, difficult to defend, and not always consistent with current realities or practices of scientific knowledge? (ii) Instead of supporting anti-Platonism whatever the cost, in particular by the formulation of implausible hypotheses, would it not be more adequate to accept the idea of a mathematical world existing objectively and governing certain aspects of the material world, just as we note the existence of the material world, which could also not exist?
- Prototypical Reasoning about Species and the Species Problem Yuichi Amitani: The species problem is often described as the abundance of conflicting definitions of species, such as the biological species concept and phylogenetic species concepts. But biologists understand the notion of species in a non-definitional as well as a definitional way. In this article the author argues that when they understand species without a definition in mind, their understanding is often mediated by the notion of good species, or prototypical species, as the idea of "prototype" is explicated in cognitive psychology. This distinction helps us make sense of several puzzling phenomena regarding how biologists deal with species, such as the fact that in everyday research biologists often behave as if the species problem were solved, even though they should be fully aware that it is not. The author then briefly discusses the implications of this finding, including that some extant attempts to answer what the nature of species is rest on an inadequate assumption about how the notion of species is represented in biologists' minds.
- Quantum chemistry may be a shortcut to life-changing compounds Rachel Ehrenberg: Technique could ID materials for better solar cells and batteries or more effective medicines - When Alán Aspuru-Guzik was in college, he really got into SETI, the project that uses home computers to speed the search for extraterrestrial intelligence. He was less interested in finding aliens in outer space, however, than in using fleets of computers to search molecular space. He wanted to find chemical compounds that could do intelligent things here on Earth. SETI is a well-known distributed computing project that allows regular people to volunteer their idle computers to sift through reams of data — in this case, radio signals. Aspuru-Guzik, now a theoretical chemist at Harvard University, hopes to harness thousands of home computers to comb through almost every possible combination of atoms.
- Optimal Bayesian estimation in stochastic block models Debdeep Pati, Anirban Bhattacharya: With the advent of structured data in the form of social networks, genetic circuits and protein interaction networks, statistical analysis of networks has gained popularity over recent years. The stochastic block model constitutes a classical cluster-exhibiting random graph model for networks. There is a substantial amount of literature devoted to proposing strategies for estimating and inferring parameters of the model, from both classical and Bayesian viewpoints. Unlike in the classical setting, however, there is a dearth of theoretical results on the accuracy of estimation in the Bayesian setting. In this article, the authors undertake a theoretical investigation of the posterior distribution of the parameters in a stochastic block model. In particular, they show that one obtains optimal rates of posterior convergence with routinely used multinomial-Dirichlet priors on cluster indicators and uniform priors on the probabilities of the random edge indicators. En route, they also develop geometric embedding techniques to exploit the lower dimensional structure of the parameter space, which may be of independent interest.
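To make the priors concrete: with a multinomial-Dirichlet prior on cluster labels and uniform Beta(1, 1) priors on the block edge probabilities, the edge probabilities have closed-form Beta posteriors once the labels are fixed. The sketch below simulates a small stochastic block model and computes these conditional posteriors; label inference itself is omitted and all parameter values are illustrative.

```python
# Conjugate Beta updates for block edge probabilities in a toy SBM, labels assumed known.
import numpy as np

rng = np.random.default_rng(2)

# Generate a small SBM: 2 clusters, within-prob 0.3, between-prob 0.05
n, z = 60, np.repeat([0, 1], 30)
P = np.array([[0.30, 0.05], [0.05, 0.30]])
A = (rng.uniform(size=(n, n)) < P[z][:, z]).astype(int)
A = np.triu(A, 1); A = A + A.T                      # symmetric, no self-loops

# Beta(1, 1) posterior for each block's edge probability, given the labels z
for a in range(2):
    for b in range(a, 2):
        mask = np.triu(np.outer(z == a, z == b) | np.outer(z == b, z == a), 1)
        edges, pairs = A[mask].sum(), mask.sum()
        post_mean = (1 + edges) / (2 + pairs)       # mean of Beta(1+edges, 1+pairs-edges)
        print(f"block ({a},{b}): posterior mean {post_mean:.3f}")
```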
- Quantum Life Spreads Entanglement Across Generations: The way creatures evolve in a quantum environment throws new light on the nature of life - Computer scientists have long known that evolution is an algorithmic process that has little to do with the nature of the beasts it creates. Instead, evolution is a set of simple steps that, when repeated many times, can solve problems of immense complexity; the problem of creating the human brain, for example, or of building an eye. And, of course, the problem of creating life. Put an evolutionary algorithm to work in a virtual environment and it doesn’t take long to create life-like organisms in silico that live and reproduce entirely within a virtual computer-based environment. This kind of life is not carbon-based or even silicon-based. It is a phenomenon of pure information. But if the nature of information allows the process of evolution to be simulated on an ordinary computer, then why not also on a quantum computer? The resulting life would exist in a virtual quantum environment governed by the bizarre laws of quantum mechanics. As such, it would be utterly unlike anything that biologists have ever encountered or imagined. But what form might quantum life take? In a recent result, we get an insight into this question thanks to the work of Unai Alvarez-Rodriguez and a few pals at the University of the Basque Country in Spain. They have simulated the way life evolves in a quantum environment and use this to propose how it could be done in a real quantum environment for the first time. “We have developed a quantum information model for mimicking the behavior of biological systems inspired by the laws of natural selection,” they say.

- Cosmology from quantum potential: It was shown recently that replacing classical geodesics with quantal (Bohmian) trajectories gives rise to a quantum corrected Raychaudhuri equation (QRE). In this article, a derivation of the second order Friedmann equations from the QRE is carried out, and it is shown that these also contain a couple of quantum correction terms, the first of which can be interpreted as a cosmological constant (and gives a correct estimate of its observed value), while the second acts as a radiation term in the early universe, which gets rid of the big-bang singularity and predicts an infinite age of our universe.
- Improved minimax estimation of a multivariate normal mean under heteroscedasticity Zhiqiang Tan: the author considers the problem of estimating a multivariate normal mean with a known variance matrix, which is not necessarily proportional to the identity matrix. The coordinates are shrunk directly in proportion to their variances in Efron and Morris’ (J. Amer. Statist. Assoc. 68 (1973) 117–130) empirical Bayes approach, whereas they are shrunk inversely in proportion to their variances in Berger’s (Ann. Statist. 4 (1976) 223–226) minimax estimators. The author proposes a new minimax estimator, obtained by approximately minimizing the Bayes risk with a normal prior among a class of minimax estimators where the shrinkage direction is open to specification and the shrinkage magnitude is determined to achieve minimaxity. The proposed estimator has an interestingly simple form such that one group of coordinates is shrunk in the direction of Berger’s estimator and the remaining coordinates are shrunk in the direction of the Bayes rule. Moreover, the proposed estimator is scale adaptive: it can achieve close to the minimum Bayes risk simultaneously over a scale class of normal priors (including the specified prior) and achieve close to the minimax linear risk over a corresponding scale class of hyper-rectangles. For various scenarios in the numerical study, the proposed estimators with extreme priors yield more substantial risk reduction than existing minimax estimators.
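The shrinkage directions at issue can be illustrated with the plain Bayes rule under a normal prior and heteroscedastic observation variances. This is not the author's proposed minimax estimator, only a sketch of coordinate-wise shrinkage that is stronger for noisier coordinates; all values are assumed.

```python
# Bayes shrinkage under a N(0, tau^2) prior with known, unequal variances d_i.
import numpy as np

rng = np.random.default_rng(3)

p, tau2 = 500, 1.0
d = rng.uniform(0.2, 4.0, size=p)            # known, unequal observation variances
theta = rng.normal(0.0, np.sqrt(tau2), size=p)
x = theta + rng.normal(0.0, np.sqrt(d))

bayes = (tau2 / (tau2 + d)) * x              # coordinate-wise posterior mean
print("risk (no shrinkage):", np.mean((x - theta) ** 2))
print("risk (Bayes rule):  ", np.mean((bayes - theta) ** 2))
```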
- Large-scale Machine Learning for Metagenomics Sequence Classification Kévin Vervier, Pierre Mahé, Maud Tournoud, Jean-Baptiste Veyrieras and Jean-Philippe Vert: Metagenomics characterizes the taxonomic diversity of microbial communities by sequencing DNA directly from an environmental sample. One of the main challenges in metagenomics data analysis is the binning step, where each sequenced read is assigned to a taxonomic clade. Due to the large volume of metagenomics datasets, binning methods need fast and accurate algorithms that can operate with reasonable computing requirements. While standard alignment-based methods provide state-of-the-art performance, compositional approaches that assign a taxonomic class to a DNA read based on the k-mers it contains have the potential to provide faster solutions. In this work, the authors investigate the potential of modern, large-scale machine learning implementations for taxonomic assignment of next-generation sequencing reads based on their k-mer profile. They show that machine learning-based compositional approaches benefit from increasing the number of fragments sampled from reference genomes to tune their parameters, up to a coverage of about 10, and from increasing the k-mer size to about 12. Tuning these models involves training a machine learning model on about 10^8 samples in 10^7 dimensions, which is out of reach of standard software packages but can be done efficiently with modern implementations for large-scale machine learning. The resulting models are competitive in terms of accuracy with well-established alignment tools for problems involving a small to moderate number of candidate species and reasonable amounts of sequencing errors. The authors also show, however, that compositional approaches are still limited in their ability to deal with problems involving a greater number of species, and are more sensitive to sequencing errors. They finally confirm that compositional approaches achieve faster prediction times, with a gain of 3 to 15 times with respect to the BWA-MEM short read mapper, depending on the number of candidate species and the level of sequencing noise.
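A hedged sketch of a compositional (k-mer based) read classifier in this spirit, using scikit-learn's hashing trick to keep the feature space tractable; the toy reads, labels and k-mer size are assumptions, not the authors' pipeline.

```python
# Toy compositional classifier: hashed k-mer features + a linear model.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

k = 8                                             # k-mer size (the paper suggests ~12)
vectorizer = HashingVectorizer(analyzer="char", ngram_range=(k, k),
                               n_features=2**18, alternate_sign=False)

# Toy "reads": random fragments drawn from two random reference "genomes"
rng = np.random.default_rng(4)
genomes = ["".join(rng.choice(list("ACGT"), 5000)) for _ in range(2)]

def sample_reads(genome, label, n_reads=300, length=150):
    starts = rng.integers(0, len(genome) - length, size=n_reads)
    return [genome[s:s + length] for s in starts], [label] * n_reads

reads, labels = [], []
for lab, g in enumerate(genomes):
    r, l = sample_reads(g, lab)
    reads += r
    labels += l

X = vectorizer.transform(reads)                   # sparse hashed k-mer counts
clf = SGDClassifier().fit(X, labels)              # linear classifier on k-mer profile
print("training accuracy:", clf.score(X, labels))
```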
- Minimal length effects in black hole thermodynamics from tunneling formalism Sunandan Gangopadhyay: The tunneling formalism in the Hamilton-Jacobi approach is adopted to study Hawking radiation of massless Dirac particles from spherically symmetric black hole spacetimes incorporating the effects of the generalized uncertainty principle (GUP). The Hawking temperature is found to contain corrections from the generalized uncertainty principle. Further, the author shows from this result that the ratio of the GUP-corrected energy of the particle to the GUP-corrected Hawking temperature is equal to the ratio of the corresponding uncorrected quantities. This result is then exploited to compute the Hawking temperature for more general forms of the uncertainty principle having an infinite number of terms. Choosing the coefficients of the terms in the series in a specific way enables one to sum the infinite series exactly. This leads to a Hawking temperature for the Schwarzschild black hole that agrees with the result which accounts for the one-loop back reaction effect. The entropy is finally computed and yields the area theorem up to logarithmic corrections.
- Bayesian inference for higher order ordinary differential equation models Prithwish Bhaumik and Subhashis Ghosal: Often the regression function appearing in fields like economics, engineering, and the biomedical sciences obeys a system of higher order ordinary differential equations (ODEs). The equations are usually not analytically solvable. The authors are interested in inferring the unknown parameters appearing in the equations. A significant amount of work has been done on parameter estimation in first order ODE models. Bhaumik and Ghosal (2014a) considered a two-step Bayesian approach by putting a finite random series prior on the regression function using a B-spline basis. The posterior distribution of the parameter vector is induced from that of the regression function. Although this approach is computationally fast, the Bayes estimator is not asymptotically efficient. Bhaumik and Ghosal (2014b) remedied this by directly considering the distance between the function in the nonparametric model and a Runge-Kutta (RK4) approximate solution of the ODE while inducing the posterior distribution on the parameter. They also studied the direct Bayesian method obtained from the approximate likelihood given by the RK4 method. In this paper the authors extend these ideas to the higher order ODE model and establish Bernstein-von Mises theorems for the posterior distribution of the parameter vector for each method, with n^{-1/2} contraction rate.
- How spacetime is built by quantum entanglement: A collaboration of physicists and a mathematician has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. The paper announcing the discovery by Hirosi Ooguri, a Principal Investigator at the University of Tokyo’s Kavli IPMU, with Caltech mathematician Matilde Marcolli and graduate students Jennifer Lin and Bogdan Stoica, will be published in Physical Review Letters as an Editors’ Suggestion “for the potential interest in the results presented and on the success of the paper in communicating its message, in particular to readers from other fields.” Physicists and mathematicians have long sought a Theory of Everything (ToE) that unifies general relativity and quantum mechanics. General relativity explains gravity and large-scale phenomena such as the dynamics of stars and galaxies in the universe, while quantum mechanics explains microscopic phenomena from the subatomic to molecular scales. The holographic principle is widely regarded as an essential feature of a successful Theory of Everything. The holographic principle states that gravity in a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. In particular, the three dimensions of the volume should emerge from the two dimensions of the surface. However, understanding the precise mechanics for the emergence of the volume from the surface has been elusive. Now, Ooguri and his collaborators have found that quantum entanglement is the key to solving this question. Using a quantum theory (that does not include gravity), they showed how to compute energy density, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. This is analogous to diagnosing conditions inside your body by looking at X-ray images on two-dimensional sheets. This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without actually explicitly including gravity in the theory. The importance of quantum entanglement has been suggested before, but its precise role in the emergence of spacetime was not clear until the new paper by Ooguri and collaborators. Quantum entanglement is a phenomenon whereby quantum states such as spin or polarization of particles at different locations cannot be described independently. Measuring (and hence acting on) one particle must also act on the other, something that Einstein called “spooky action at a distance.” The work of Ooguri and collaborators shows that this quantum entanglement generates the extra dimensions of the gravitational theory.
- A model for gene deregulation detection using expression data Thomas Picchetti, Julien Chiquet, Mohamed Elati, Pierre Neuvial, Rémy Nicolle and Etienne Birmele: In tumoral cells, gene regulation mechanisms are severely altered, and these modifications in the regulations may be characteristic of different subtypes of cancer. However, these alterations do not necessarily induce differential expression between the subtypes. To detect deregulation in this setting, the authors propose a statistical methodology to identify the misregulated genes given a reference network and gene expression data. Their model is based on a regulatory process in which all genes are allowed to be deregulated. They derive an EM algorithm where the hidden variables correspond to the status (under/over/normally expressed) of the genes and where the E-step is solved thanks to a message-passing algorithm. Their procedure provides posterior probabilities of deregulation in a given sample for each gene. They then assess the performance of their method by numerical experiments on simulations and on a bladder cancer data set.
- Probabilistic Knowledge as Objective Knowledge in Quantum Mechanics: Potential Powers Instead of Actual Properties Christian de Ronde: In classical physics, probabilistic or statistical knowledge has always been related to ignorance or inaccurate subjective knowledge about an actual state of affairs. This idea has been extended to quantum mechanics through a completely incoherent interpretation of the Fermi-Dirac and Bose-Einstein statistics in terms of “strange” quantum particles. This interpretation, naturalized through a widespread “way of speaking” in the physics community, contradicts Born’s physical account of Ψ as a “probability wave” which provides statistical information about outcomes that, in fact, cannot be interpreted in terms of ‘ignorance about an actual state of affairs’. In the present paper the author discusses how the metaphysics of actuality has played an essential role in limiting the possibilities of understanding things differently. Instead, a metaphysical scheme in terms of powers with definite potentia is proposed, which allows us to consider quantum probability in a new light, namely, as providing objective knowledge about a potential state of affairs.
- Targeted Diversity Generation by Intraterrestrial Archaea and Archaeal Viruses In the evolutionary arms race between microbes, their parasites, and their neighbours, the capacity for rapid protein diversification is a potent weapon. Diversity-generating retroelements (DGRs) use mutagenic reverse transcription and retrohoming to generate myriad variants of a target gene. Originally discovered in pathogens, these retroelements have been identified in bacteria and their viruses, but never in archaea. Here the authors report the discovery of intact DGRs in two distinct intraterrestrial archaeal systems: a novel virus that appears to infect archaea in the marine subsurface, and, separately, two uncultivated nanoarchaea from the terrestrial subsurface. The viral DGR system targets putative tail fibre ligand-binding domains, potentially generating >10^18 protein variants. The two single-cell nanoarchaeal genomes each possess ≥4 distinct DGRs. Against an expected background of low genome-wide mutation rates, these results demonstrate a previously unsuspected potential for rapid, targeted sequence diversification in intraterrestrial archaea and their viruses.
- Einstein, Bohm, and Leggett–Garg Guido Bacciagaluppi: In a recent paper (Bacciagaluppi 2015), the author analysed and criticised Leggett and Garg’s argument to the effect that macroscopic realism contradicts quantum mechanics, by contrasting their assumptions to the example of Bell’s stochastic pilot-wave theories, and applied Dzhafarov and Kujala’s analysis of contextuality in the presence of signalling to the case of the Leggett–Garg inequalities. In this chapter, he discusses more generally the motivations for macroscopic realism, taking a cue from Einstein’s criticism of the Bohm theory, then goes on to summarise his previous results, with a few additional comments on other recent work on Leggett and Garg. [To appear in: E. Dzhafarov (ed.), Contextuality from Quantum Physics to Psychology (Singapore: World Scientific).]
- Should We Really Use Post-Hoc Tests Based on Mean-Ranks? Alessio Benavoli, Giorgio Corani, Francesca Mangili: The statistical comparison of multiple algorithms over multiple data sets is fundamental in machine learning. This is typically carried out by the Friedman test. When the Friedman test rejects the null hypothesis, multiple comparisons are carried out to establish which are the significant differences among algorithms. The multiple comparisons are usually performed using the mean-ranks test. The aim of this technical note is to discuss the inconsistencies of the mean-ranks post-hoc test, with the goal of discouraging its use in machine learning as well as in medicine, psychology, and related fields. The authors show that the outcome of the mean-ranks test depends on the pool of algorithms originally included in the experiment. In other words, the outcome of the comparison between algorithms A and B depends also on the performance of the other algorithms included in the original experiment. This can lead to paradoxical situations. For instance, the difference between A and B could be declared significant if the pool comprises algorithms C, D, E and not significant if the pool comprises algorithms F, G, H. To overcome these issues, the authors suggest instead performing the multiple comparisons using a test whose outcome depends only on the two algorithms being compared, such as the sign test or the Wilcoxon signed-rank test.
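The recommended workflow is easy to sketch with SciPy: an omnibus Friedman test over all algorithms, followed by a pairwise Wilcoxon signed-rank test whose outcome depends only on the two algorithms being compared (the scores below are made-up numbers).

```python
# Omnibus Friedman test, then a pairwise Wilcoxon signed-rank comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_datasets = 20
scores_A = rng.normal(0.80, 0.05, n_datasets)
scores_B = scores_A + rng.normal(0.02, 0.02, n_datasets)   # B slightly better than A
scores_C = rng.normal(0.75, 0.05, n_datasets)

stat, p = stats.friedmanchisquare(scores_A, scores_B, scores_C)
print(f"Friedman omnibus test: p = {p:.4f}")

# Pairwise comparison of A vs B; adding or removing algorithm C cannot change it.
w_stat, w_p = stats.wilcoxon(scores_A, scores_B)
print(f"Wilcoxon signed-rank A vs B: p = {w_p:.4f}")
```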
- The study of Lorenz and Rössler strange attractors by means of quantum theory Yu I Bogdanov, N A Bogdanova: the authors have developed a method for complementing an arbitrary classical dynamical system to a quantum system using the Lorenz and Rössler systems as examples. The Schrödinger equation for the corresponding quantum statistical ensemble is described in terms of the Hamilton-Jacobi formalism. They then consider both the original dynamical system in the position space and the conjugate dynamical system corresponding to the momentum space. Such simultaneous consideration of mutually complementary position and momentum frameworks provides a deeper understanding of the nature of chaotic behavior in dynamical systems. The authors have shown that the new formalism provides a significant simplification of the Lyapunov exponents calculations. From the point of view of quantum optics, the Lorenz and Rössler systems correspond to three modes of a quantized electromagnetic field in a medium with cubic nonlinearity. From the computational point of view, the new formalism provides a basis for the analysis of complex dynamical systems using quantum computers.
- Achieving Optimal Misclassification Proportion in Stochastic Block Model Chao Gao, Zongming Ma, Anderson Y. Zhang and Harrison H. Zhou: Community detection is a fundamental statistical problem in network data analysis. Many algorithms have been proposed to tackle this problem. Most of these algorithms are not guaranteed to achieve the statistical optimality of the problem, while procedures that achieve information theoretic limits for general parameter spaces are not computationally tractable. In this paper, the authors present a computationally feasible two-stage method that achieves optimal statistical performance in misclassification proportion for the stochastic block model under weak regularity conditions. Their procedure combines an initialization stage, for which a wide range of weakly consistent community detection procedures can be used, with a refinement stage motivated by penalized local maximum likelihood estimation, and it outputs a community assignment that achieves optimal misclassification proportion with high probability. The practical effectiveness of the new algorithm is demonstrated by competitive numerical results.
- Explanation in Biology: An Enquiry into the Diversity of Explanatory Patterns in the Life Sciences Pierre-Alain Braillard and Christophe Malaterre: Despite the philosophical clash between deductive-nomological and mechanistic accounts of explanation, in scientific practice both approaches are required in order to achieve more complete explanations and to guide the discovery process. Here, this thesis is defended by discussing the case of mathematical models in systems biology. Not only do such models complement the mechanistic explanations of molecular biology by accounting for poorly understood aspects of biological phenomena, they can also reveal unsuspected ‘black boxes’ in mechanistic explanations, thus prompting their revision while providing new insights about the causal-mechanistic structure of the world.
- Realism and instrumentalism about the wave function. How should we choose? Mauro Dorato: The main claim of the paper is that one can be a ‘realist’ (in some sense) about quantum mechanics without requiring any form of realism about the wave function. The author begins by discussing various forms of realism about the wave function, namely Albert’s configuration-space realism, Dürr, Zanghì and Goldstein’s nomological realism about Ψ, Esfeld’s dispositional reading of Ψ, and Pusey, Barrett and Rudolph’s realism about the quantum state. By discussing the articulation of these four positions and their interrelations, he concludes that instrumentalism about Ψ is by itself not sufficient to choose one interpretation of quantum mechanics over the others, thereby confirming in a different way the underdetermination of the metaphysical interpretations of quantum mechanics.
- On the sufficiency of pairwise interactions in maximum entropy models of biological networks Lina Merchan, Ilya Nemenman: Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p > 2) interactions, the authors argue that this reduction in complexity can be thought of as a natural property of densely interacting networks in certain regimes, and not necessarily as a special property of living systems. By connecting their analysis to the theory of random constraint satisfaction problems, they suggest a reason for why some biological systems may operate in this regime.
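A rough sketch of the kind of simulation described above: Metropolis sampling of a small random Ising network with three-spin (p = 3) couplings, from which one could then ask how well a pairwise maximum entropy model reproduces the sampled statistics. The fitting step is omitted, and the sizes and coupling scales are illustrative, not the authors' settings.

```python
# Metropolis sampling of a random Ising network with 3-spin couplings (naive but clear).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
N, n_sweeps = 12, 1000
triplets = list(combinations(range(N), 3))
J = {t: rng.normal(0, 1.0 / N) for t in triplets}      # random 3-spin couplings

def energy(s):
    """Full energy of configuration s under the 3-spin Hamiltonian."""
    return -sum(Jt * s[i] * s[j] * s[k] for (i, j, k), Jt in J.items())

s = rng.choice([-1, 1], size=N)
samples = []
for sweep in range(n_sweeps):
    for i in range(N):
        s_new = s.copy()
        s_new[i] *= -1                                   # propose a single spin flip
        if np.log(rng.uniform()) < energy(s) - energy(s_new):   # Metropolis accept rule
            s = s_new
    samples.append(s.copy())

samples = np.array(samples[200:])                        # discard burn-in sweeps
print("pairwise correlation matrix:\n", np.round(np.corrcoef(samples.T), 2))
```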
- A Novel Plasticity Rule Can Explain the Development of Sensorimotor Intelligence Ralf Der and Georg Martius: Grounding autonomous behavior in the nervous system is a fundamental challenge for neuroscience. In particular, self-organized behavioral development raises more questions than it answers. Are there special functional units for curiosity, motivation, and creativity? This paper argues that these features can be grounded in synaptic plasticity itself, without requiring any higher-level constructs. The authors propose differential extrinsic plasticity (DEP) as a new synaptic rule for self-learning systems and apply it to a number of complex robotic systems as a test case. Without specifying any purpose or goal, seemingly purposeful and adaptive behavior is developed, displaying a certain level of sensorimotor intelligence. These surprising results require no system-specific modifications of the DEP rule but arise rather from the underlying mechanism of spontaneous symmetry breaking due to the tight brain-body-environment coupling. The new synaptic rule is biologically plausible and would be an interesting target for neurobiological investigation. The authors also argue that this neuronal mechanism may have been a catalyst in natural evolution.
- From survivors to replicators: Evolution by natural selection revisited Pierrick Bourrat: For evolution by natural selection to occur it is classically admitted that the three ingredients of variation, difference in fitness and heredity are necessary and sufficient. In this paper, the author shows, using simple individual-based models, that evolution by natural selection can occur in populations of entities in which neither heredity nor reproduction is present. Furthermore, he demonstrates by complexifying these models that both reproduction and heredity are predictable Darwinian products (i.e. complex adaptations) of populations initially lacking these two properties but in which new variation is introduced via mutations. Later on, the author shows that replicators are not necessary for evolution by natural selection, but are rather the ultimate product of such processes of adaptation. Finally, he assesses the value of these models in three relevant domains for Darwinian evolution.
- Game theory elucidates the collective behavior of bosons Quantum particles behave in strange ways and are often difficult to study experimentally. Using mathematical methods drawn from game theory, LMU physicists have shown how bosons, which like to enter the same state, can form multiple groups. When scientists explore the mysterious behavior of quantum particles, they soon reach the limits of present-day experimental research. From there on, progress is only possible with the aid of theoretical ideas. NIM investigator Prof. Erwin Frey and his team at the Dept. of Statistical and Biological Physics (LMU Munich) have followed this route to study the behavior of bosons. Bosons are quantum particles that like to cluster together. But by applying methods from the mathematical field of game theory, the Munich physicists were able to explain why and under what conditions bosons form multiple groups.
- Correlation of action potentials in adjacent neurons M. N. Shneider and M. Pekker: A possible mechanism for the synchronization of action potential propagation along a bundle of neurons (ephaptic coupling) is considered. It is shown that this mechanism is similar to the saltatory conduction of the action potential between the nodes of Ranvier in myelinated axons. The proposed model allows the authors to estimate the scale of the correlation, i.e., the distance between neurons in the nervous tissue at which their synchronization becomes possible. The possibility of experimental verification of the proposed model of synchronization is discussed.
- Explaining the Unobserved—Why Quantum Mechanics Ain’t Only About Information Amit Hagar and Meir Hemmo: A remarkable theorem by Clifton, Bub and Halvorson (2003) (CBH) characterizes quantum theory in terms of information–theoretic principles. According to Bub (2004, 2005) the philosophical significance of the theorem is that quantum theory should be regarded as a “principle” theory about (quantum) information rather than a “constructive” theory about the dynamics of quantum systems. Here the authors criticize Bub’s principle approach arguing that if the mathematical formalism of quantum mechanics remains intact then there is no escape route from solving the measurement problem by constructive theories. They further propose a (Wigner–type) thought experiment that they argue demonstrates that quantum mechanics on the information–theoretic approach is incomplete.
- Metareasoning for Planning Under Uncertainty Christopher H. Lin, Andrey Kolobov, Ece Kamar, and Eric Horvitz: The conventional model for online planning under uncertainty assumes that an agent can stop and plan without incurring costs for the time spent planning. However, planning time is not free in most real-world settings. For example, an autonomous drone is subject to nature’s forces, like gravity, even while it thinks, and must either pay a price for counteracting these forces to stay in place, or grapple with the state change caused by acquiescing to them. Policy optimization in these settings requires metareasoning—a process that trades off the cost of planning and the potential policy improvement that can be achieved. The authors formalize and analyze the metareasoning problem for Markov Decision Processes (MDPs). Their work subsumes previously studied special cases of metareasoning and shows that in the general case, metareasoning is at most polynomially harder than solving MDPs with any given algorithm that disregards the cost of thinking. For reasons the authors discuss, optimal general metareasoning turns out to be impractical, motivating approximations. They present approximate metareasoning procedures which rely on special properties of the BRTDP planning algorithm and explore the effectiveness of our methods on a variety of problems.
- Can the brain map 'non-conventional' geometries (and abstract spaces)? Grid cells, space-mapping neurons of the entorhinal cortex of rodents, could also work for hyperbolic surfaces. A SISSA study just published in Interface, the journal of the Royal Society, tests a model (a computer simulation) based on mathematical principles that explains how maps emerge in the brain and shows how these maps adapt to the environment in which the individual develops. "It took human culture millennia to arrive at a mathematical formulation of non-Euclidean spaces", comments SISSA neuroscientist Alessandro Treves, "but it's very likely that our brains could get there long before. In fact, it's likely that the brain of rodents gets there very naturally every day".
- Reconstructing Liberalism: Charles Mills' Unfinished Project Jack Turner: The political theory of Charles W. Mills seeks simultaneously to expose liberalism's complicity with white supremacy and to transform liberalism into a source of antiracist political critique. This article analyzes both Mills' critique of liberalism and his attempt to reconstruct it into a political philosophy capable of adequately addressing racial injustice. The author focuses on the (a) problematization of moral personhood, (b) theorization of white ignorance, and (c) conceptualization of white supremacy. Together these establish the need to integrate a new empirical axiom into liberal political theory: the axiom of the power of white supremacy in modernity, or the axiom of white power, for short. This axiom is analogous to James Madison's axiom of the encroaching spirit of power. Any liberal theorist who fails to take the encroaching spirit of power seriously meets the scorn of his peers. The same should be true for white power, Mills suggests. Turner concludes that Mills' reconstruction of liberalism is incomplete and urges him to develop a fully-fledged liberal theory of racial justice to complete it.
- A Fresh Look at the Structure and Concepts of Quantum Theories; On the Theory of Anharmonic Effects in Quantum Physics.

- Mapping the Mind: Bridge Laws and the Psycho-Neural Interface Marco J. Nathan, Guillermo Del Pinal: Recent advancements in the brain sciences have enabled researchers to determine, with increasing accuracy, patterns and locations of neural activation associated with various psychological functions. These techniques have revived a longstanding debate regarding the relation between the mind and the brain: while many authors claim that neuroscientific data can be employed to advance theories of higher cognition, others defend the so-called ‘autonomy’ of psychology. Settling this significant issue requires understanding the nature of the bridge laws used at the psycho-neural interface. While these laws have been the topic of extensive discussion, such debates have mostly focused on a particular type of link: reductive laws. Reductive laws are problematic: they face notorious philosophical objections and they are too scarce to substantiate current research at the intersection of psychology and neuroscience. The aim of this article is to provide a systematic analysis of a different kind of bridge laws — associative laws — which play a central, albeit overlooked role in scientific practice.
- Deep Boltzmann Machines with Fine Scalability Taichi Kiwaki: This study presents a layered Boltzmann machine (BM) whose generative performance scales with the number of layers. Application of deep BMs (DBMs) is limited due to their poor scalability, where deep stacking of layers does not substantially improve performance. It is widely believed that DBMs have huge representation power, and that their poor empirical scalability is mainly due to inefficiency of optimization algorithms. In this paper, the author theoretically shows that the representation power of DBMs is actually rather limited, and that this limitation of the model can result in the poor scalability. Based on these observations, an alternative BM architecture is proposed, dubbed soft-deep BMs (sDBMs). It is theoretically shown that sDBMs possess much greater representation power than DBMs. Experiments demonstrate that sDBMs with up to 6 layers can be trained without pretraining, and that sDBMs compare favorably with state-of-the-art models on binarized MNIST and Caltech-101 silhouettes.
- Weihrauch-completeness for layerwise computability Arno Pauly, George Davie: the authors introduce the notion of being Weihrauch-complete for layerwise computability and provide several natural examples related to complex oscillations, the law of the iterated logarithm and Birkhoff’s theorem. They also consider the hitting time operators, which share the Weihrauch degree of the former examples, but fail to be layerwise computable.
- Understanding Gauge James Owen Weatherall: the author considers two usages of the expression “gauge theory”. On one, a gauge theory is a theory with excess structure; on the other, a gauge theory is any theory appropriately related to classical electromagnetism. He makes precise one sense in which one formulation of electromagnetism, the paradigmatic gauge theory on both usages, may be understood to have excess structure, and then argues that gauge theories on the second usage, including Yang-Mills theory and general relativity, do not generally have excess structure in this sense.
- On the Threshold of Intractability Pål Grønås Drange, Markus Sortland Dregi and Daniel Lokshtanov: the authors study the computational complexity of the graph modification problems Threshold Editing and Chain Editing, adding and deleting as few edges as possible to transform the input into a threshold (or chain) graph. In this article, they show that both problems are NP-hard, resolving a conjecture by Natanzon, Shamir, and Sharan. On the positive side, they show that the problem admits a quadratic vertex kernel. Furthermore, they give a sub-exponential time parameterized algorithm solving Threshold Editing in 2^{O(√k log k)} + poly(n) time, making it one of relatively few natural problems in this complexity class on general graphs. These results are of broader interest to the field of social network analysis, where recent work of Brandes (ISAAC, 2014) posits that the minimum edit distance to a threshold graph gives a good measure of consistency for node centralities. Finally, the authors show that all their positive results extend to the related problem of Chain Editing, as well as to the completion and deletion variants of both problems.
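For context, threshold graphs themselves are easy to recognize: a graph is a threshold graph exactly when it can be reduced to nothing by repeatedly deleting an isolated or a dominating vertex; Threshold Editing asks for the fewest edge edits needed to reach such a graph, which the paper shows is NP-hard. A small recognition sketch:

```python
# Threshold graph recognition by repeatedly peeling isolated or dominating vertices.
def is_threshold(adj):
    """adj: dict mapping vertex -> set of neighbours (undirected, no self-loops)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}      # work on a copy
    remaining = set(adj)
    while remaining:
        pick = None
        for v in remaining:
            deg = len(adj[v] & remaining)
            if deg == 0 or deg == len(remaining) - 1:    # isolated or dominating
                pick = v
                break
        if pick is None:                                 # no removable vertex: not threshold
            return False
        remaining.remove(pick)
    return True

# A small threshold graph and a 4-cycle (which is not a threshold graph)
print(is_threshold({0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}))        # True
print(is_threshold({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))        # False
```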
- How can we be moral when we are so irrational? Nils-Eric Sahlin and Johan Brännmark: Normative ethics usually presupposes background accounts of human agency, and although different ethical theorists might have different pictures of human agency in mind, there is still something like a standard account that most of mainstream normative ethics can be understood to rest on. Ethical theorists tend to have Rational Man, or at least some close relative of him, in mind when constructing normative theories. It is argued here that empirical findings raise doubts about the accuracy of this kind of account; human beings fall too far short of ideals of rationality for it to be meaningful to devise normative ideals within such a framework. Instead, it is suggested, normative ethics could be conducted more profitably if the idea of unifying all ethical concerns into one theoretical account is abandoned. This disunity of ethical theorizing would then match the disunited and heuristic-oriented nature of our agency.
- Relax, Tensors Are Here: Dependencies in International Processes Shahryar Minhas, Peter D. Hoff, Michael D. Ward: Previous models of international conflict have suffered two shortfalls. They tended not to embody dynamic changes, focusing rather on static slices of behavior over time. These models have also been empirically evaluated in ways that assumed the independence of each country, when in reality what they are searching for is the interdependence among all countries. Here, the authors illustrate a solution to these two hurdles and evaluate this new, dynamic, network-based approach to the dependencies among the ebb and flow of daily international interactions using a newly developed, and openly available, database of events among nations.
- No Big Bang? Quantum equation predicts universe has no beginning: the universe may have existed forever, according to a new model that applies quantum correction terms to complement Einstein's theory of general relativity. The model may also account for dark matter and dark energy, resolving multiple problems at once.
- Dynamical and Hamiltonian formulation of General Relativity Domenico Giulini: This is a substantially expanded version of a chapter-contribution to The Springer Handbook of Spacetime, edited by Abhay Ashtekar and Vesselin Petkov, published by Springer Verlag in 2014. It introduces the reader to the reformulation of Einstein’s field equations of General Relativity as a constrained evolutionary system of Hamiltonian type and discusses some of its uses, together with some technical and conceptual aspects. Attempts were made to keep the presentation self-contained and accessible to first-year graduate students. This implies a certain degree of explicitness and occasional reviews of background material.
- Leibniz’s Theory of Time Soshichi Uchii: the author has developed an informational interpretation of Leibniz’s metaphysics and dynamics, but in this paper he concentrates on Leibniz’s theory of time. According to Uchii’s interpretation, each monad is an incorporeal automaton programmed by God, and likewise each organized group of monads is a cellular automaton (in von Neumann’s sense) governed by a single dominant monad (entelechy). The activities of these produce phenomena, which must be “coded appearances” of these activities; God determines this coding. A crucially important point here is that we have to distinguish the phenomena for a monad from its states (perceptions). Both are a kind of representation: a state represents the whole world of monads, and phenomena for a monad “result” from the activities of monads. But the coding for each must be different; R(W) for the first, Ph(W) for the second, where W is a state of the monadic world. The reason for this is that no monadic state is in space and time, but phenomena occur in space and time. Now, the basis of phenomenal time must lie in the timeless realm of monads. This basis is the order of state-transitions of each monad. All the changes of these states are given at once by God, and they do not presuppose time. The coded appearances (which may well be different for different creatures) of this order occur in time (for any finite creature), and its metric must depend on God’s coding for phenomena. For humans, in particular, this metric time is derived from spatial distance (metric space) via the laws of dynamics. Thus there may well be an interrelation between spatial and temporal metrics. This means that the Leibnizian frame allows a relativistic metric of space-time. Uchii shows this after outlining Leibniz’s scenario.
- Estimation of connectivity measures in gappy time series G. Papadopoulos, D. Kugiumtzis: A new method is proposed to compute connectivity measures on multivariate time series with gaps. Rather than removing or filling the gaps, the rows of the joint data matrix containing empty entries are removed and the calculations are done on the remainder matrix. The method, called measure adapted gap removal (MAGR), can be applied to any connectivity measure that uses a joint data matrix, such as cross correlation, cross mutual information and transfer entropy. Using these three measures, MAGR is compared favorably to a number of known gap-filling techniques, as well as to gap closure. The superiority of MAGR is illustrated on time series from synthetic systems and on financial time series.
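The row-removal idea is simple to sketch for the most basic connectivity measure, zero-lag cross correlation: rows of the joint data matrix that contain gaps (here, NaNs) are dropped and the measure is computed on the remaining rows. Lagged measures such as transfer entropy would use a lag-embedded joint matrix, which is omitted here; the data are synthetic.

```python
# Cross correlation after removing the gappy rows of the joint data matrix.
import numpy as np

rng = np.random.default_rng(7)

# Two coupled toy series with ~15% missing entries each
n = 1000
x = rng.standard_normal(n)
y = 0.7 * x + 0.3 * rng.standard_normal(n)
for s in (x, y):
    s[rng.uniform(size=n) < 0.15] = np.nan

data = np.column_stack([x, y])
complete = data[~np.isnan(data).any(axis=1)]     # keep only gap-free rows
r = np.corrcoef(complete[:, 0], complete[:, 1])[0, 1]
print(f"{complete.shape[0]} complete rows, cross correlation ≈ {r:.3f}")
```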
- The Quantum Fabric of Space-Time: How Quantum Pairs Stitch Space-Time - What do quantum entanglement and gravity have to do with each other? New tools may reveal how quantum information builds the structure of space. Brian Swingle via Jennifer Ouellette from Quanta Magazine.
- On statistical indistinguishability of complete and incomplete discrete time market models Nikolai Dokuchaev: the author investigates the possibility of statistical evaluation of market completeness for discrete time stock market models. It is known that market completeness is not a robust property: small random deviations of the coefficients convert a complete market model into an incomplete one. The paper shows that market incompleteness is also non-robust. It is also shown that, for any incomplete market from a wide class of discrete time models, there exists a complete market model with arbitrarily close stock prices. This means that incomplete markets are indistinguishable from complete markets in terms of market statistics.
- On the Structure, Covering, and Learning of Poisson Multinomial Distributions Constantinos Daskalakis, Gautam Kamath, and Christos Tzamos: An (n, k)-Poisson Multinomial Distribution (PMD) is the distribution of the sum of n independent random vectors supported on the set B_k = {e_1, ..., e_k} of standard basis vectors in R^k. The authors prove a structural characterization of these distributions, showing that, for all ε > 0, any (n, k)-Poisson multinomial random vector is ε-close, in total variation distance, to the sum of a discretized multidimensional Gaussian and an independent (poly(k/ε), k)-Poisson multinomial random vector. Their structural characterization extends the multi-dimensional CLT of [VV11] by simultaneously applying to all approximation requirements ε. In particular, it removes from the distance to a multidimensional Gaussian random variable factors depending on log n and, importantly, on the minimum eigenvalue of the PMD's covariance matrix. They use this structural characterization to obtain an ε-cover, in total variation distance, of the set of all (n, k)-PMDs, significantly improving the cover size of [DP08, DP15], and obtaining the same qualitative dependence of the cover size on n and ε as the k = 2 cover of [DP09, DP14]. The authors then further exploit this structure to show that (n, k)-PMDs can be learned to within ε in total variation distance from Õ_k(1/ε^2) samples, which is near-optimal in terms of dependence on ε and independent of n. In particular, their result generalizes the single-dimensional result of [DDS12] for Poisson Binomials to arbitrary dimension.
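The object itself is easy to simulate: an (n, k)-Poisson multinomial random vector is the sum of n independent, not necessarily identically distributed, random standard basis vectors in R^k. The per-vector category probabilities below are arbitrary choices for illustration.

```python
# Sampling an (n, k)-Poisson multinomial random vector as a sum of random basis vectors.
import numpy as np

rng = np.random.default_rng(8)

def sample_pmd(probs):
    """probs: (n, k) array; row i is the categorical distribution of the i-th
    random basis vector. Returns one draw of their sum, a vector of k counts."""
    n, k = probs.shape
    counts = np.zeros(k, dtype=int)
    for p in probs:
        counts[rng.choice(k, p=p)] += 1      # add one standard basis vector
    return counts

n, k = 100, 4
probs = rng.dirichlet(np.ones(k), size=n)    # a different distribution for each vector
draws = np.array([sample_pmd(probs) for _ in range(3000)])
print("empirical mean:", np.round(draws.mean(axis=0), 2))
print("expected mean: ", np.round(probs.sum(axis=0), 2))
```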
- Fair is Fair: Social Preferences and Reciprocity in International Politics Behavioral economics has shown that many people often divert from classical assumptions about self-interested behavior: they have social preferences, concerned about issues of fairness and reciprocity. Social psychologists show that these preferences vary across actors, with some displaying more prosocial value orientations than others. Integrating a laboratory bargaining experiment with original archival research on Anglo-French and Franco-German diplomacy in the interwar period, it is shown how fairness and reciprocity matter in social interactions. Prosocials do not exploit their bargaining leverage to the degree that proselfs do, helping explain why some pairs of actors are better able to avoid bargaining failure than others. In the face of consistent egoism on the part of negotiating partners, however, prosocials engage in negative reciprocity, leading them to adopt the same behaviors as proselfs.
- High dimensional linear inverse modelling Fenwick C. Cooper: the author introduces and demonstrates two linear inverse modelling methods for systems of stochastic ODEs with accuracy that is independent of the dimensionality (number of elements) of the state vector representing the system in question. Truncation of the state space is not required. Instead he relies on the principle that perturbations decay with distance, or on the fact that for many systems the state of each data point is determined at an instant only by itself and its neighbours. He further shows that all necessary calculations, as well as numerical integration of the resulting linear stochastic system, require computational time and memory proportional to the dimensionality of the state vector.
- On Time in Quantum Physics Jeremy Butterfield: First, the author briefly reviews the different conceptions of time held by three rival interpretations of quantum theory: the collapse of the wave-packet, the pilot-wave interpretation, and the Everett interpretation (Section 2). Then he turns to a much less controversial task: to expound the recent understanding of the time-energy uncertainty principle, and indeed of uncertainty principles in general, that has been established by such authors as Busch, Hilgevoord and Uffink. Although this may at first seem a narrow topic, Jeremy points out connections to other conceptual topics about time in quantum theory: for example, the question under what circumstances there is a time operator.
- Cooperative Intergroup Mating Can Overcome Ethnocentrism in Diverse Populations Caitlin J. Mouri, Thomas R. Shultz: Ethnocentrism is a behavioral strategy seen on every scale of social interaction. Game-theory models demonstrate that evolution selects ethnocentrism because it boosts cooperation, which increases reproductive fitness. However, some believe that inter-ethnic unions have the potential to foster universal cooperation and overcome in-group biases in humans. Here, the authors use agent-based computer simulations to test this hypothesis. Cooperative intergroup mating does lend an advantage to a universal cooperation strategy when the cost/benefit ratio of cooperation is low and local population diversity is high.
- Researchers Discover How lncRNA Silences Entire Chromosome: Scientists at Caltech say they have discovered how long non-coding RNAs (lncRNAs) can regulate critical genes. By studying an lncRNA called Xist, the researchers identified how this RNA recruits a group of proteins and ultimately prevents female embryos from having an extra functional X-chromosome, a condition that leads to death in early development. These findings, the scientists note, mark the first time that researchers have uncovered the detailed mechanism of action of an lncRNA gene.
- Objective Bayesian Inference for Bilateral Data Cyr Emile M’lan and Ming-Hui Chen: this paper presents three objective Bayesian methods for analyzing bilateral data under Dallal’s model and the saturated model. Three parameters are of interest, namely, the risk difference, the risk ratio, and the odds ratio. The authors derive Jeffreys’ prior and Bernardo’s reference prior associated with the three parameters that characterize Dallal’s model. They also derive the functional forms of the posterior distributions of the risk difference and the risk ratio and discuss how to sample from their posterior distributions. The authors demonstrate the use of the proposed methodology with two real data examples. They also investigate small, moderate, and large sample properties of the proposed methodology and the frequentist counterpart via simulations.
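As a much-simplified illustration of objective Bayesian analysis for the same three parameters (not Dallal's model, which accounts for the dependence between the two sides of each subject): with two independent binomial arms and Jeffreys' Beta(1/2, 1/2) prior, the posteriors are Beta distributions and the risk difference, risk ratio and odds ratio can be summarized by Monte Carlo. All counts below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical counts: affected / total in two groups
x1, n1 = 18, 60     # group 1
x2, n2 = 9, 55      # group 2

# Jeffreys' prior Beta(1/2, 1/2) for each binomial proportion is conjugate,
# so the posteriors are Beta distributions
draws = 200_000
p1 = rng.beta(x1 + 0.5, n1 - x1 + 0.5, size=draws)
p2 = rng.beta(x2 + 0.5, n2 - x2 + 0.5, size=draws)

risk_diff = p1 - p2
risk_ratio = p1 / p2
odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))

for name, s in [("risk difference", risk_diff),
                ("risk ratio", risk_ratio),
                ("odds ratio", odds_ratio)]:
    lo, hi = np.percentile(s, [2.5, 97.5])
    print(f"{name:15s} posterior median {np.median(s):.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```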
- Ontology, Matter and Emergence Michel Bitbol: “Ontological emergence” of inherent high-level properties with causal powers is witnessed nowhere. A non-substantialist conception of emergence works much better. It allows downward causation, provided our concept of causality is transformed accordingly.
- Quantum mechanics in terms of realism Arthur Jabs: the author expounds an alternative to the Copenhagen interpretation of the formalism of non-relativistic quantum mechanics. The basic difference is that the new interpretation is formulated in the language of epistemological realism. It involves a change in some basic physical concepts. The ψ function is no longer interpreted as a probability amplitude of the observed behaviour of elementary particles but as an objective physical field representing the particles themselves. The particles are thus extended objects whose extension varies in time according to the variation of ψ. They are considered as fundamental regions of space with some kind of nonlocality. Special consideration is given to the Heisenberg relations, the reduction process, the problem of measurement, Schrödinger's cat, Wigner's friend, the Einstein-Podolsky-Rosen correlations, field quantization and quantum-statistical distributions.
- Between Laws and Models: Some Philosophical Morals of Lagrangian Mechanics J. Butterfield: the author extracts some philosophical morals from some aspects of Lagrangian mechanics. (A companion paper will present similar morals from Hamiltonian mechanics and Hamilton-Jacobi theory.) One main moral concerns methodology: Lagrangian mechanics provides a level of description of phenomena which has been largely ignored by philosophers, since it falls between their accustomed levels of "laws of nature" and "models". Another main moral concerns ontology: the ontology of Lagrangian mechanics is both more subtle and more problematic than philosophers often realize. The treatment provides an introduction to Lagrangian mechanics for philosophers and is technically elementary. In particular, it is confined to systems with a finite number of degrees of freedom and for the most part eschews modern geometry.
- Dirac Processes and Default Risk Chris Kenyon, Andrew Green: the authors introduce Dirac processes, using Dirac delta functions, for short-rate-type pricing of financial derivatives. Dirac processes add spikes to the existing building blocks of diffusions and jumps. Dirac processes are Generalized Processes, which have not been used directly before because the dollar value of non-Real numbers is meaningless. However, short-rate pricing is based on integrals, so Dirac processes are natural. This integration directly implies that jumps are redundant, whilst Dirac processes expand the expressivity of short-rate approaches. Practically, the authors demonstrate that Dirac processes enable the high implied volatility for CDS swaptions that has otherwise been problematic in hazard-rate setups.
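A rough way to see why delta functions are harmless once everything goes through an integral, sketched here on a toy survival-probability calculation rather than on the authors' pricing machinery: a Dirac spike of weight w in the hazard rate at time t_i simply adds w to the integrated hazard for all t >= t_i, producing a finite downward jump exp(-w) in the survival curve. The base hazard, spike dates and weights below are hypothetical.

```python
import numpy as np

def survival_probability(t, base_hazard, spike_times, spike_weights):
    """Survival P(tau > t) = exp(-(integral of base hazard + Dirac masses up to t)).

    base_hazard: constant continuous hazard rate
    spike_times / spike_weights: locations and weights of the Dirac components
    """
    t = np.asarray(t, dtype=float)
    integrated = base_hazard * t
    for ti, wi in zip(spike_times, spike_weights):
        integrated = integrated + wi * (t >= ti)
    return np.exp(-integrated)

# toy example: 2% flat hazard plus Dirac masses at years 1 and 3
for t in [0.999, 1.001, 2.999, 3.001, 5.0]:
    print(f"P(tau > {t:5.3f}) = "
          f"{survival_probability(t, 0.02, [1.0, 3.0], [0.05, 0.10]):.4f}")
```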
- The Knowability Paradox in the light of a Logic for Pragmatics Massimiliano Carrara and Daniele Chiffi: The Knowability Paradox is a logical argument showing that if all truths are knowable in principle, then all truths are, in fact, known. Many strategies have been suggested in order to avoid the paradoxical conclusion. In this paper, the authors focus on the so-called revisionary solutions to the paradox, those that put the blame on the underlying logic and revise it (an intuitionistic revision included). Specifically, they analyse a possible translation of the paradox into a modified intuitionistic fragment of a logic for pragmatics (KILP), inspired by Dalla Pozza and Garola (1995). Their aim is to understand whether KILP is a candidate for the logical revision of the paradox and to compare it with the standard intuitionistic solution.
- A Proposed Probabilistic Extension of the Halpern and Pearl Definition of ‘Actual Cause' Luke Fenton-Glynn: In their article ‘Causes and Explanations: A Structural-Model Approach. Part I: Causes’,
Joseph Halpern and Judea Pearl draw upon structural equation models to develop an attractive analysis of ‘actual cause’. Their analysis is designed for the case of deterministic causation. It is shown here that their account can be naturally extended to provide an elegant treatment of probabilistic causation.
- Faster Statistical Model Checking for Unbounded Temporal Properties Przemysław Daca, Thomas A. Henzinger, Jan Křetínský, Tatjana Petrov: The authors present a new algorithm for the statistical model checking of Markov chains with respect to unbounded temporal properties, such as reachability and full linear temporal logic. The main idea is that they monitor each simulation run on the fly, in order to detect quickly if a bottom strongly connected component is entered with high probability, in which case the simulation run can be terminated early. As a result, the authors' simulation runs are often much shorter than required by termination bounds that are computed a priori for a desired level of confidence and size of the state space. In comparison to previous algorithms for statistical model checking, for a
given level of confidence, the authors' method is not only faster in many cases but also requires less information about the system, namely, only the minimum transition probability that occurs in the Markov chain. In addition, the method can be generalised to unbounded quantitative properties such as mean-payoff bounds.
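A heavily simplified sketch of the on-the-fly idea (not the authors' algorithm or its error bounds): while simulating a run, keep the set of states visited since the last time a genuinely new state was discovered; if that candidate set survives many consecutive steps without being escaped, it is very likely a bottom strongly connected component and the run can be stopped. The patience threshold below is an arbitrary placeholder, whereas the paper derives it rigorously from the confidence level and the minimum transition probability p_min.

```python
import numpy as np

def simulate_until_candidate_bscc(P, start, patience=200, max_steps=100_000, rng=None):
    """Run a Markov chain until the recently visited states look like a BSCC.

    P: transition matrix (rows sum to 1)
    patience: consecutive steps inside the candidate set after which it is
              accepted (a stand-in for the paper's bound from p_min and the
              desired confidence)
    Returns (candidate_set, steps_taken).
    """
    rng = np.random.default_rng(rng)
    state = start
    visited = {state}
    candidate = {state}
    steps_in_candidate = 0
    for step in range(1, max_steps + 1):
        state = int(rng.choice(len(P), p=P[state]))
        if state not in visited:
            visited.add(state)
            candidate = {state}          # new state discovered: restart the candidate
            steps_in_candidate = 0
        elif state in candidate:
            steps_in_candidate += 1
            if steps_in_candidate >= patience:
                return candidate, step   # candidate looks closed: treat as a BSCC
        else:
            candidate.add(state)         # revisited an older state: enlarge candidate
            steps_in_candidate = 0
    return candidate, max_steps

# toy chain: states 0-1 are transient, {2, 3} form the only BSCC
P = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.2, 0.5, 0.0, 0.3],
              [0.0, 0.0, 0.6, 0.4],
              [0.0, 0.0, 0.5, 0.5]])
bscc, steps = simulate_until_candidate_bscc(P, start=0, rng=1)
print("candidate BSCC:", sorted(bscc), "after", steps, "steps")
```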
- Conditions for positioning of nucleosomes on DNA Michael Sheinman, Ho-Ryun Chung (an excellent application of physics to biology): Positioning of nucleosomes along eukaryotic genomes plays an important role in their organization and regulation. There are many different factors affecting the location of nucleosomes. Some can be viewed as preferential binding of a single nucleosome to different locations along the DNA, and some as interactions between neighboring nucleosomes. The authors analyze how well nucleosomes are positioned along the DNA as a function of the strength of the preferential binding, the correlation length of the binding-energy landscape, the interactions between neighboring nucleosomes, and other relevant system properties. They consider different scenarios, designed energy landscapes and generically disordered ones, and derive conditions for good positioning. Using analytic and numerical approaches they find that, even if the binding preferences are very weak, a synergistic interplay between the interactions and the binding preferences is essential for good positioning of nucleosomes, especially on correlated energy landscapes. Analyzing an empirical energy landscape, they discuss the relevance of their theoretical results to the positioning of nucleosomes on DNA in vivo.
- The Delicacy of Counterfactuals in General Relativity Erik Curiel: General relativity poses serious problems for counterfactual propositions peculiar to it as a physical theory, problems that have gone unremarked in both the physics and the philosophy literature. Because these problems arise from the dynamical nature of spacetime geometry, they are shared by all schools of thought on how counterfactuals should be interpreted and understood. Given the role of counterfactuals in the characterization of, inter alia, many accounts of scientific laws, theory-confirmation and causation, general relativity once again presents us with idiosyncratic puzzles that any attempt to analyze and understand the nature of scientific knowledge and of science itself must face.
- Information, learning and falsification David Balduzzi: Broadly speaking, there are two approaches to quantifying information. The first, Shannon information, takes events as belonging to ensembles and quantifies the information resulting from observing the given event in terms of the number of alternate events that have been ruled out. The second, algorithmic information or Kolmogorov complexity, takes events as strings and, given a universal Turing machine, quantifies the information content of a string as the length of the shortest program producing it. Shannon information provides the mathematical foundation for communication and coding theory. Algorithmic information has been applied by Solomonoff and Hutter to prove remarkable results on universal induction. However, both approaches have shortcomings. Algorithmic information is not computable, severely limiting its practical usefulness. Shannon information refers to ensembles rather than actual events: it makes no sense to compute the Shannon information of a single string – or rather, there are many answers to this question depending on how a related ensemble is constructed. Although there are asymptotic results linking algorithmic and Shannon information, it is unsatisfying that there is such a large gap – a difference in kind – between the two measures. This note describes a new method of quantifying information, effective information, that links algorithmic information to Shannon information, and also links both to capacities arising in statistical learning theory. After introducing the measure, the author shows that it provides a non-universal analog of algorithmic information. It is then applied to derive basic capacities in statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. A nice byproduct of this approach is an interpretation of the explanatory power of a learning algorithm in terms of the number of hypotheses it falsifies (counted in two different ways for the two different capacities). It is also discussed how effective information relates to information gain, Shannon information and mutual information. The author concludes by discussing some broader implications.
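One of the capacities mentioned above, empirical Rademacher complexity, is easy to estimate by Monte Carlo for a concrete hypothesis class. The sketch below does so for one-dimensional threshold classifiers, a standard textbook class chosen only for illustration; it is not anything specific to Balduzzi's effective-information measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher_thresholds(x, n_sigma=5_000, rng=rng):
    """Monte Carlo estimate of the empirical Rademacher complexity of the
    one-sided threshold classifiers h_t(x) = +1 if x >= t else -1,
    evaluated on the fixed sample x."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    total = 0.0
    for _ in range(n_sigma):
        sigma = rng.choice([-1.0, 1.0], size=n)
        # correlation of sigma with h_t for every threshold position k
        # (threshold between x[k-1] and x[k]; k = 0 means all points get +1)
        prefix = np.concatenate(([0.0], np.cumsum(sigma)))   # sum of sigma below t
        suffix = prefix[-1] - prefix                          # sum of sigma above t
        correlations = (suffix - prefix) / n
        total += correlations.max()
    return total / n_sigma

x = rng.standard_normal(50)
print("empirical Rademacher complexity:", round(empirical_rademacher_thresholds(x), 3))
```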
- Weighing Explanations Daniel Star, Stephen Kearns: The primary goal of John Broome's new book, Rationality Through Reasoning (2013), is to outline and defend an account of reasoning that makes it clear how it is possible to actively become more rational by means of engaging in reasoning. In the process Broome finds it necessary to also provide his own accounts of ought, reasons, and requirements. The authors focus here on the account of reasons. This is not the first time they have done so. In an earlier paper (Kearns and Star 2008), they contrasted Broome's account with their own favored account of reasons (reasons as evidence). Although there are some differences between the views defended in the relevant chapters of Broome's book (chs. 3 and 4) and the draft manuscript and earlier papers that the authors used as the basis of their discussion in that earlier paper, these do not, for the most part, substantially affect the authors' earlier arguments. In articulating an alternative account of reasons the authors were themselves heavily influenced by Broome, so they are particularly grateful to have this opportunity to contribute a piece to a 'Festschrift' for him. In a response to the authors and some other critics, Broome (2008) presented some challenges for their account of reasons, but did not address their criticisms of his own account (and they responded, in turn, to Broome's challenges in Kearns and Star 2013). Here they first provide updated versions of their earlier concerns, since these mostly still seem pertinent. Then the authors turn to provide a fresh response to his account of reasons that focuses on the notion of a weighing explanation. On Broome's account, 'pro tanto' reasons are facts cited in weighing explanations of what one ought to do; facts that have weights. It is not clear what the idea that pro tanto reasons have weights really amounts to. While recognizing that a simple analogy with putative non-normative weighing explanations involving physical weights initially seems helpful, the authors argue that the notion of a weighing explanation, especially a normative weighing explanation, does not ultimately stand up to scrutiny.
- Ascribing Consciousness to Artificial Intelligence Murray Shanahan: This paper critically assesses the anti-functionalist stance on consciousness adopted by certain advocates of integrated information theory (IIT), a corollary of which is that human-level artificial intelligence implemented on conventional computing hardware is necessarily not conscious. The critique draws on variations of a well-known gradual neuronal replacement thought experiment, as well as bringing out tensions in IIT’s treatment of self-knowledge. The aim, though, is neither to reject IIT outright nor to champion functionalism in
particular. Rather, it is suggested that both ideas have something to offer a scientific understanding of consciousness, as long as they are not dressed up as solutions to illusory metaphysical problems. As for human-level AI, we must await its development before we can decide whether or not to ascribe consciousness to it.
- Why Physics Needs Philosophy Tim Maudlin: "Philosophy cannot be killed by any scientific or logical reasoning: just think about that": Many questions about the nature of reality cannot be properly pursued without contemporary physics. Inquiry into the fundamental structure of space, time and matter must take account of the theory of relativity and quantum theory. Philosophers accept this. In fact, several leading philosophers of physics hold doctorates in physics. Yet they chose to affiliate with philosophy departments rather than physics departments because so many physicists strongly discourage questions about the nature of reality. The reigning attitude in physics has been “shut up and calculate”: solve the equations, and do not ask questions about what they mean. But putting computation ahead of conceptual clarity can lead to confusion. Take, for example, relativity’s iconic “twin paradox.” Identical twins separate from each other and later reunite. When they meet again, one twin is biologically older than the other. (Astronaut twins Scott and Mark Kelly are about to realize this experiment: when Scott returns from a year in orbit in 2016 he will be about 28 microseconds younger than Mark, who is staying on Earth.) No competent physicist would make an error in computing the magnitude of this effect. But even the great Richard Feynman did not always get the explanation right. In “The Feynman Lectures on Physics,” he attributes the difference in ages to the acceleration one twin experiences: the twin who accelerates ends up younger. But it is easy to describe cases where the opposite is true, and even cases where neither twin accelerates but they end up different ages. The calculation can be right and the accompanying explanation wrong. If your goal is only to calculate, this might be sufficient. But understanding existing theories and formulating new ones requires more. Einstein arrived at the theory of relativity by reflecting on conceptual problems rather than on empirical ones. He was primarily bothered by explanatory asymmetries in classical electromagnetic theory. Physicists before Einstein knew, for instance, that moving a magnet in or near a coil of wire would induce an electric current in the coil. But the classical explanation for this effect appeared to be entirely different when the motion was ascribed to the magnet as opposed to the coil; the reality is that the effect depends only on the relative motion of the two. Resolving the explanatory asymmetry required rethinking the notion of simultaneity and rejecting the classical account of space and time. It required the theory of relativity. Comprehending quantum theory is an even deeper challenge. What does quantum theory imply about “the nature of reality?” Scientists do not agree about the answer; they even disagree about whether it is a sensible question. The problems surrounding quantum theory are not mathematical. They stem instead from the unacceptable terminology that appears in presentations of the theory. Physical theories ought to be stated in precise terminology, free of ambiguity and vagueness. What philosophy offers to science, then, is not mystical ideas but meticulous method. Philosophical skepticism focuses attention on the conceptual weak points in theories and in arguments. It encourages exploration of alternative explanations and new theoretical approaches. Philosophers obsess over subtle ambiguities of language and over what follows from what. 
When the foundations of a discipline are secure this may be counter-productive: just get on with the job to be done! But where secure foundations (or new foundations) are needed, critical scrutiny can suggest the way forward. The search for ways to marry quantum theory with general relativity would surely benefit from precisely articulated accounts of the foundational concepts of these theories, even if only to suggest what must be altered or abandoned. Philosophical skepticism arises from the theory of knowledge, the branch of philosophy called “epistemology.” Epistemology studies the grounds for our beliefs and the sources of our concepts. It often reveals tacit presuppositions that may prove wrong, sources of doubt about how much we really know.
- Towards A Mathematical Theory Of Complex Socio-Economical Systems By Functional Subsystems Representation Giulia Ajmone Marsan, Nicola Bellomo, Massimo Egidi, Luiss Guido Carli: This paper deals with the development of a mathematical theory for complex socio-economical systems. The approach is based on the methods of the mathematical kinetic theory for active particles, which describes the evolution of large systems of interacting entities that are carriers of specific functions, in this case economic activities. The method is implemented with the concept of functional subsystems constituted by aggregated entities which have the ability to express socio-economic purposes and functions.
- The generalised quasispecies Raphael Cerf and Joseba Dalmau: a really excellent study in population dynamics and macro-evolution. The authors study Eigen's quasispecies model in the asymptotic regime where the length of the genotypes goes to ∞ and the mutation probability goes to 0. They give several explicit formulas for the stationary solutions of the limiting system of differential equations.
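For orientation, a small numerical sketch of the finite-length Eigen model that the paper studies in the asymptotic limit (long genotypes, small mutation rate): binary genotypes of length L, a single-peak fitness landscape, per-digit mutation probability mu, and iteration of the selection-mutation map until it settles near its stationary distribution. The parameters are illustrative only, not taken from the paper.

```python
import numpy as np
from itertools import product

L, mu = 8, 0.03                      # genotype length, per-digit mutation rate
genotypes = np.array(list(product([0, 1], repeat=L)))
master = np.zeros(L, dtype=int)

# single-peak ("sharp peak") fitness landscape
fitness = np.where((genotypes == master).all(axis=1), 10.0, 1.0)

# mutation matrix: Q[i, j] = probability that genotype j mutates into genotype i
hamming = (genotypes[:, None, :] != genotypes[None, :, :]).sum(axis=2)
Q = mu**hamming * (1 - mu)**(L - hamming)

# iterate the quasispecies map x' = Q diag(f) x / (f . x) towards stationarity
x = np.full(len(genotypes), 1.0 / len(genotypes))
for _ in range(2000):
    y = Q @ (fitness * x)
    x = y / y.sum()

print("stationary frequency of the master sequence:", round(x[0], 3))
print("total frequency of one-mutant neighbours:   ",
      round(x[hamming[0] == 1].sum(), 3))
```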
- Evolutionary Prediction Games Jeffrey A. Barrett, Michael Dickson, Gordon Purves: the authors consider an extension of signaling games to the case of prediction, where one agent (the 'sender') perceives the current state of the world and sends a signal. The second agent (the 'receiver') perceives this signal and makes a prediction about the next state of the world (which evolves according to stochastic, but not entirely random, 'laws'). They suggest that such games may form the basis of a model for the evolution of successful theorizing about the world.
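A toy version of such a prediction game, using simple Roth-Erev (urn) reinforcement of the kind common in the signaling-game literature; the world dynamics, payoffs and learning rule here are minimal choices of my own, not necessarily those in the paper. The sender sees the current state and emits a signal, the receiver sees only the signal and predicts the next state, and both are reinforced when the prediction is correct.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_signals = 3, 3
# stochastic but structured 'law': the world usually cycles 0 -> 1 -> 2 -> 0
world = np.full((n_states, n_states), 0.05)
for s in range(n_states):
    world[s, (s + 1) % n_states] = 0.9

# urn weights: sender maps state -> signal, receiver maps signal -> prediction
sender = np.ones((n_states, n_signals))
receiver = np.ones((n_signals, n_states))

state, correct, n_rounds = 0, 0, 50_000
for t in range(n_rounds):
    signal = rng.choice(n_signals, p=sender[state] / sender[state].sum())
    prediction = rng.choice(n_states, p=receiver[signal] / receiver[signal].sum())
    next_state = rng.choice(n_states, p=world[state])
    if prediction == next_state:            # reinforce a successful prediction
        sender[state, signal] += 1.0
        receiver[signal, prediction] += 1.0
        correct += 1
    state = next_state

print("prediction accuracy over the run:", round(correct / n_rounds, 3))
```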
- Sigma-Point Filtering Based Parameter Estimation in Nonlinear Dynamic System Juho Kokkala, Arno Solin, and Simo Särkkä: the authors consider approximate maximum likelihood parameter estimation in non-linear state-space models. They discuss both direct optimization of the likelihood and expectation–maximization (EM). For EM, they also give closed-form expressions for the maximization step in a class of models that are linear in parameters and have additive noise. To obtain approximations to the filtering and smoothing distributions needed in the likelihood-maximization methods, the authors focus on using Gaussian filtering and smoothing algorithms that employ sigma-points to approximate the required integrals. They discuss different sigma point schemes based on the third, fifth, seventh, and ninth
order unscented transforms and Gauss–Hermite quadrature rule. They compare the performance of the methods in two simulated experiments: a univariate toy model as well as tracking of a maneuvering target. In the experiments, the authors also compare against approximate likelihood estimates obtained by particle filtering and extended Kalman filtering based methods. The experiments suggest that the higher-order unscented transforms may in some cases provide more accurate estimates.
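A compact example of the kind of sigma-point (unscented) filter such likelihood evaluations are built on, for a scalar nonlinear state-space model with additive noise; the model, parameter values and weights follow the standard unscented-transform recipe rather than anything specific to this paper, so treat it as an illustrative sketch only.

```python
import numpy as np

rng = np.random.default_rng(0)

# scalar nonlinear state-space model with additive Gaussian noise
f = lambda x: 0.7 * x + 2.0 * np.sin(x)       # state transition
h = lambda x: 0.5 * x**2                       # measurement function
Q, R = 0.1, 0.5                                # process / measurement variances

# simulate data
T, x_true, ys = 200, 0.0, []
for _ in range(T):
    x_true = f(x_true) + np.sqrt(Q) * rng.standard_normal()
    ys.append(h(x_true) + np.sqrt(R) * rng.standard_normal())

def ukf_loglik(ys, Q, R, alpha=1.0, beta=0.0, kappa=2.0, m0=0.0, P0=1.0):
    """Run a 1-D unscented Kalman filter and return the approximate log-likelihood."""
    n = 1
    lam = alpha**2 * (n + kappa) - n
    Wm = np.array([lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)])
    Wc = Wm.copy()
    Wc[0] += 1.0 - alpha**2 + beta
    m, P, loglik = m0, P0, 0.0
    for y in ys:
        # prediction step: propagate sigma points through f
        s = np.sqrt((n + lam) * P)
        X = np.array([m, m + s, m - s])
        Xf = f(X)
        m_pred = Wm @ Xf
        P_pred = Wc @ (Xf - m_pred)**2 + Q
        # update step: propagate predicted sigma points through h
        s = np.sqrt((n + lam) * P_pred)
        X = np.array([m_pred, m_pred + s, m_pred - s])
        Y = h(X)
        y_pred = Wm @ Y
        S = Wc @ (Y - y_pred)**2 + R
        C = Wc @ ((X - m_pred) * (Y - y_pred))
        K = C / S
        loglik += -0.5 * (np.log(2 * np.pi * S) + (y - y_pred)**2 / S)
        m = m_pred + K * (y - y_pred)
        P = P_pred - K * S * K
    return loglik

# crude grid search over the process-noise parameter Q
for q in [0.05, 0.1, 0.2, 0.4]:
    print(f"Q = {q:4.2f}  approximate log-likelihood = {ukf_loglik(ys, q, R):.1f}")
```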
- Is Howard's Separability Principle a sufficient condition for Outcome Independence? Paul Boes: Howard [1985, 1989, 1992] has argued that the experimentally confirmed violation of the Bell inequalities forces us to reject at least one of two physical principles, which he terms the locality principle and the separability principle. To this end, he provides a proof [Howard, 1992] of the equivalence of the separability condition,
a formal condition to which the separability principle gives rise, with the condition of “outcome independence”. If this proof is sound, then Howard’s claim would gain strong support in that “outcome independence” and “parameter independence”, where the latter arises from Howard’s locality principle, have been shown by [Jarrett, 1984] to conjunctively constitute a necessary condition
for the derivation of the Bell inequalities [Clauser and Horne, 1974]. However, Howard's proof has been contested in a number of ways. In this essay the author discusses several criticisms of Howard's equivalence proof that focus on the sufficiency of the separability principle for outcome independence. Paul then argues that, while none of these criticisms succeeds, they do constrain the possible form of Howard's argument. To do so, he first introduces both the separability principle and outcome independence in the context of EPR-like experiments before discussing the individual arguments.
- Duncan Pritchard on 'The Swamping Problem' in Epistemology: It is argued that the swamping problem is best understood in terms of an inconsistent triad of claims: (i) a general thesis about value, (ii) a more specific thesis about epistemic value, and (iii) a statement of a popular view in epistemology which Duncan calls epistemic value T-monism. With this inconsistent triad clearly set out, it becomes transparent what the dialectical options are for those who wish to respond to the swamping problem. In particular, one is able to map out the various responses to this problem in the literature in terms of which member of this inconsistent triad they deny.
- Reflected Backward Stochastic Differential Equations When The Obstacle Is Not Right-Continuous And Optimal Stopping Miryana Grigorova, Peter Imkeller, Elias Offen, Youssef Ouknine, Marie-Claire Quenez: In the first part of the paper, the authors study reflected backward stochastic differential equations (RBSDEs) with a lower obstacle which is assumed to be right upper-semicontinuous but not necessarily right-continuous. They prove existence and uniqueness of the solutions to such RBSDEs in appropriate Banach spaces. The result is established by using some tools from the general theory of processes, such as the Mertens decomposition of optional strong (but not necessarily right-continuous) supermartingales, some tools from optimal stopping theory, as well as an appropriate generalization of Itô's formula due to Gal'chouk and Lenglart. In the second part of the paper, the authors provide some links between the RBSDE studied in the first part and an optimal stopping problem in which the risk of a financial position ξ is assessed by an f-conditional expectation E^f (where f is a Lipschitz driver). They characterize the "value function" of the problem in terms of the solution to their RBSDE. Under an additional assumption of left upper-semicontinuity on ξ, they show the existence of an optimal stopping time. They also provide a generalization of the Mertens decomposition to the case of strong E^f-supermartingales.
- Generalized Support and Formal Development of Constraint Propagators (On Artificial Intelligence) James Caldwell, Ian P. Gent, Peter Nightingale: The concept of support is pervasive in constraint programming. Traditionally, when a domain value ceases to have support, it may be removed because it takes part in no solutions. Arc-consistency algorithms such as AC2001 make use of support in the form of a single domain value. GAC algorithms such as GAC-Schema use a tuple of values to support each literal. The authors generalize these notions of support in two ways. First, they allow a set of tuples to act as support. Second, the supported object is generalized from a set of literals (GAC-Schema) to an entire constraint or any part of it. They also design a methodology for developing correct propagators using generalized support. A constraint is expressed as a family of support properties, which may be proven correct against the formal semantics of the constraint. Using the Curry-Howard isomorphism to interpret constructive proofs as programs, they show how to derive correct propagators from the constructive proofs of the support properties. The framework is carefully designed to allow efficient algorithms to be produced. Derived algorithms may make use of dynamic literal triggers or watched literals for efficiency. Finally, two case studies of deriving efficient algorithms are given.
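To make the notion of support concrete, here is a minimal sketch of classical support-based arc consistency on a binary constraint (in the spirit of AC2001's "single domain value as support", not the authors' generalized, proof-derived propagators): a value survives only while some value in the neighbouring domain supports it.

```python
def revise(dom_x, dom_y, allowed):
    """Remove values of x that have no supporting value in dom_y.

    allowed: set of (x_value, y_value) pairs permitted by the constraint.
    Returns True if dom_x was changed.
    """
    removed = {a for a in dom_x if not any((a, b) in allowed for b in dom_y)}
    dom_x -= removed
    return bool(removed)

def ac3(domains, constraints):
    """Very small AC-3-style propagation loop over binary constraints.

    domains: dict var -> set of values
    constraints: dict (x, y) -> set of allowed (x_value, y_value) pairs
                 (both orientations supplied)
    """
    queue = list(constraints)
    while queue:
        x, y = queue.pop()
        if revise(domains[x], domains[y], constraints[(x, y)]):
            if not domains[x]:
                return False                      # domain wipe-out: inconsistent
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

# toy CSP: X < Y with domains {1..3}
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
lt = {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}
constraints = {("X", "Y"): lt, ("Y", "X"): {(b, a) for (a, b) in lt}}
print(ac3(domains, constraints), domains)   # X loses 3, Y loses 1
```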
- An Algorithm Set Revolutionizes 3-D Protein Structure Discovery: A new way to determine 3-D structures from 2-D images is set to speed up protein structure discovery by a factor of 100,000. Via 'Emerging Technology From the arXiv' / MIT Technology Review.
- Finite relation algebras and omitting types in modal fragments of first order logic Tarek Sayed Ahmed: Let 2 < n ≤ l < m < ω. Let Ln denote first order logic restricted to the first n variables. It is shown that the omitting types theorem fails dramatically for the n-variable fragments of first order logic with respect to clique guarded semantics, and for its packed n-variable fragments. Both are modal fragments of Ln. As a sample, the author shows that if there exists a finite relation algebra with a so-called strong l-blur and no m-dimensional relational basis, then there exists a countable, atomic and complete Ln theory T and a type Γ, such that Γ is realizable in every so-called m-square model of T, but any witness isolating Γ cannot use fewer than l variables. An m-square model M of T gives a form of clique guarded semantics, where the parameter m measures how locally well behaved M is. Every ordinary model is k-square for any n < k < ω, but the converse is not true. Any model M is ω-square, and the two notions are equivalent if M is countable. Such relation algebras are shown to exist for certain values of l and m, such as n ≤ l < ω and m = ω, and l = n and m ≥ n + 3. The case l = n and m = ω gives that the omitting types theorem fails for Ln with respect to (usual) Tarskian semantics: there is an atomic countable Ln theory T for which the single non-principal type consisting of co-atoms cannot be omitted in any model M of T. For n < ω, positive results on omitting types are obtained for Ln by imposing extra conditions on the theories and/or the types omitted. Positive and negative results on omitting types are obtained for infinitary variants and extensions of Lω,ω.
- Can Quantum Analogies Help Us to Understand the Process of Thought? Paavo Pylkkänen: A number of researchers today make an appeal to quantum physics when trying to develop a satisfactory account of the mind, an appeal still felt to be controversial by many. Often these "quantum approaches" try to explain some well-known features of conscious experience (or mental processes more generally), thus using quantum physics to enrich the explanatory framework or explanans used in consciousness studies and cognitive science. This paper considers the less studied question of whether quantum physical intuitions could help us to draw attention to new or neglected aspects of the mind in introspection, and in this way change our view about what needs explanation in the first place. Although prima facie implausible, it is suggested that this could happen, for example, if there were analogies between quantum processes and mental processes (e.g., the process of thinking). The naive idea is that such analogies would help us to see mental processes and conscious experience in a new way. It has indeed been proposed long ago that such analogies exist, and this paper first focuses at some length on David Bohm's formulation of them from 1951. It then briefly considers these analogies in relation to Smolensky's more recent analogies between cognitive science and physics, and Pylkkö's aconceptual view of the mind. Finally, Bohm's early analogies are briefly considered in relation to the analogies between quantum processes and the mind that he proposed in his later work.
- Collective Intelligence : Self-organized Regulation Resulting from Local Interactions Mayuko Iwamoto and Daishin Ueyama: Proportion regulation in nature is realized through and limited to local interactions. One of the fundamental mysteries in biology concerns the method by which a cluster of organisms can regulate the proportion of individuals that perform various roles without using a
central control. This paper applies a simple theoretical model to demonstrate that a series of local interactions between individuals is a simple yet robust mechanism that realizes stable proportions. In this study, alternative symmetric interactions between individuals are proposed as a proportion fulfillment method. The authors' results show that asymmetric properties in local interactions are crucial for adaptive regulation, which depends on group size and overall density. The foremost advantage of this strategy is that no global information is required for each individual.
- Researchers are demonstrating that, in certain contexts, namely AdS Spaces - AdS/CFT Correspondence 'duality', string theory is the only consistent theory of quantum gravity: Might this make it true? By Natalie Wolchover, via Quanta Magazine: Thirty years have passed since a pair of physicists, working together on a stormy summer night in Aspen, Colo., realized that string theory might have what it takes to be the “theory of everything.” “We must be getting pretty close,” Michael Green recalls telling John Schwarz as the thunder raged and they hammered away at a proof of the theory’s internal consistency, “because the gods are trying to prevent us from completing this calculation.” Their mathematics that night suggested that all phenomena in nature, including the seemingly irreconcilable forces of gravity and quantum mechanics, could arise from the harmonics of tiny, vibrating loops of energy, or “strings.” The work touched off a string theory revolution and spawned a generation of specialists who believed they were banging down the door of the ultimate theory of nature. But today, there’s still no answer. Because the strings that are said to quiver at the core of elementary particles are too small to detect — probably ever — the theory cannot be experimentally confirmed. Nor can it be disproven: Almost any observed feature of the universe jibes with the strings’ endless repertoire of tunes. The publication of Green and Schwarz’s paper “was 30 years ago this month,” the string theorist and popular-science author Brian Greene wrote in Smithsonian Magazine in January, “making the moment ripe for taking stock: Is string theory revealing reality’s deep laws? Or, as some detractors have claimed, is it a mathematical mirage that has sidetracked a generation of physicists?” Greene had no answer, expressing doubt that string theory will “confront data” in his lifetime. Recently, however, some string theorists have started developing a new tactic that gives them hope of someday answering these questions. Lacking traditional tests, they are seeking validation of string theory by a different route. Using a strange mathematical dictionary that translates between laws of gravity and those of quantum mechanics, the researchers have identified properties called “consistency conditions” that they say any theory combining quantum mechanics and gravity must meet. And in certain highly simplified imaginary worlds, they claim to have found evidence that the only consistent theories of “quantum gravity” involve strings.
- A New Physics Theory of Life Jeremy England (via Quanta Magazine): Why does life exist? Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.” From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.
- A Conundrum in Bayesian Epistemology of Disagreement by Tomoji Shogenji (credit for posting here goes to Andrew Teasdale): The proportional weight view in the epistemology of disagreement generalizes the equal weight view and proposes that we assign to the judgments of different people weights that are proportional to their epistemic qualifications. It is known that (under the plausible Context-Free Assumption) if the resulting aggregate degrees of confidence are to constitute a probability function, they must be the weighted arithmetic means of the individual degrees of confidence, but aggregation by weighted arithmetic means violates the Bayesian rule of conditionalization. This double bind entails that the proportional weight view is inconsistent with Bayesianism. The paper explores various ways to respond to this challenge to the proportional weight view.
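The conflict between linear averaging and conditionalization is easy to verify numerically: pooling two agents' credences and then conditioning on evidence generally gives a different result from conditioning each agent first and then pooling. The numbers below are arbitrary.

```python
# two agents' prior credences in hypothesis H, and the likelihoods of
# some evidence E under H and under not-H (shared by both agents)
p_a, p_b = 0.8, 0.2
lik_h, lik_not_h = 0.9, 0.1
w = 0.5                                   # equal weights

def conditionalize(prior):
    return lik_h * prior / (lik_h * prior + lik_not_h * (1 - prior))

# route 1: pool the priors, then conditionalize the pooled credence
pool_then_update = conditionalize(w * p_a + (1 - w) * p_b)

# route 2: conditionalize each agent, then pool the posteriors
update_then_pool = w * conditionalize(p_a) + (1 - w) * conditionalize(p_b)

print(round(pool_then_update, 3), round(update_then_pool, 3))  # 0.9 vs ~0.833
```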

- The Fine-Tuning Argument Klaas Landsman: are the laws of nature and our cosmos delicately fine-tuned for life to emerge, as appears to be the case?

- First Quantum Music Composition Unveiled Physicists have mapped out how to create quantum music, an experience that will be profoundly different for every member of the audience, they say. Via MIT Technology Review.
- Definitional Argument in Evolutionary Psychology and Cultural Anthropology John P. Jackson, Jr: The role of disciplinary history in the creation and maintenance of disciplinary autonomy and authority has been a target of scholarly inquiry at least since Thomas Kuhn’s (1970) claim that such histories were key indicators of a reigning paradigm. In the United States, the history of psychology is a
recognized subdiscipline of psychology and histories of psychology serve to inculcate students into psychology as well as to establish and maintain the authority of research programs (Ash 1983; Leahey
1992; Samelson 1997; Samelson 2000). We should not be surprised, therefore, to find evolutionary psychologists appealing to the history of the social sciences when they make the case for the necessity and value of their nascent discipline. In this paper the author will examine how evolutionary psychologists use the history of science in order to create space for their new discipline. In particular, he is interested in how they employ a particular account of the origins of American cultural anthropology at the beginning of the twentieth century. Evolutionary psychologists offer a particular history of cultural anthropology as an argument for why we now need evolutionary psychology. John will show that each discipline (EP and anthropology) attempted to create space for itself by defining a central term, “culture.” In defining “culture” each discipline also defined its scientific program: defining the nature of scientific inquiry by defining the central object of study. These definitional moves are not necessarily explicit in the argument, however; rather than arguments about definition, these scientists are offering an argument by definition. An argument by definition should not be taken to be an argument about (or from) a definition. In some sense, an argument by definition does not appear to be an argument at all: The key definitional move is simply stipulated, as if it were a natural step along the way of justifying some other claim…. One cannot help noticing an irony here. Definition of terms is a key step in the presentation of argument, and yet this critical step is taken by making moves that are not themselves argumentative at all. They are not claims supported by reasons and intended to justify adherence by critical listeners. Instead they are simply proclaimed as if they were indisputable facts.
- Large Margin Nearest Neighbor Embedding for Knowledge Representation - Miao Fan et al.: on Artificial Intelligence and learning algorithms that efficiently find the optimal solution via iterative Stochastic Gradient Descent.
- A Categorial Semantic Representation of Quantum Event Structures E. Zafiris, V. Karakostas: The overwhelming majority of attempts to explore the problems related to quantum logical structures and their interpretation have been based on an underlying set-theoretic syntactic language: could a shift to a category-theoretic 'mode' do better at explaining the global structure of a quantum algebra of events (or propositions) in terms of sheaves of local Boolean frames?
- For every complex problem, there is an answer that is clear, simple and wrong Joachim Sturmberg, Stefan Topolski: this is an examination of the notions of knowledge, truth and certainty as they apply to medical research and patient care. The human body does not behave in mechanistic but rather in complex adaptive ways; thus, its response to challenges is non-deterministic. This insight has important ramifications for experimental studies in health care and their statistical interrogation, which are described in detail. Four implications are highlighted: one, there is an urgent need to develop a greater awareness of uncertainties and how to respond to them in clinical practice, namely, what is important and what is not in the context of this patient; two, there is an equally urgent need for health professionals to understand some basic statistical terms and their meanings, specifically absolute risk, its reciprocal, numbers needed to treat, and its inverse, the index of therapeutic impotence, as well as seeking out the effect size of an intervention rather than blindly accepting P-values; three, there is an urgent need to accurately present the known in comprehensible ways through the use of visual tools; and four, there is a need to overcome the perception that errors of commission are less troublesome than errors of omission, as neither's consequences are predictable.
- Fair Wages and Foreign Sourcing Gene M. Grossman, Elhanan Helpman: On general equilibrium models for studying the impact of workers’ relative-wage concerns on resource allocation and the organization of production, Pareto efficiency, and the distinction between 'closed' and 'open' economies.
- Black Holes Do Not Erase Information: The "information loss paradox" in black holes, a problem that has plagued physics for nearly 40 years, may not exist, according to a new University at Buffalo study, "Radiation from a Collapsing Object is Manifestly Unitary".
- First passage times in integrate-and-fire neurons with stochastic thresholds Wilhelm Braun, Paul C. Matthews, and Rudiger Thul: The authors consider a leaky integrate-and-fire neuron with deterministic subthreshold dynamics and a firing threshold that evolves as an Ornstein-Uhlenbeck process. The formulation of this minimal model is motivated by the experimentally observed widespread variation of neural firing thresholds. They show numerically that the mean first passage time can depend non-monotonically on the noise amplitude. For sufficiently large values of the correlation time of the stochastic threshold, the mean first passage time is maximal for non-vanishing noise. They then provide an explanation for this effect by analytically transforming the original model into a first passage time problem for Brownian motion. This transformation also allows for a perturbative calculation of the first passage time histograms. In turn this provides quantitative insights into the mechanisms that lead to the non-monotonic behaviour of the mean first passage time. The perturbation expansion is in excellent agreement with direct numerical simulations. The approach developed here can be applied to any deterministic subthreshold dynamics and any Gauss-Markov process for the firing threshold. This opens up the possibility of incorporating biophysically detailed components into the subthreshold dynamics, rendering the approach a powerful framework that sits between traditional integrate-and-fire models and complex mechanistic descriptions of neural dynamics.
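A direct Monte Carlo sketch of the model class described above, with deterministic leaky integration toward a constant input and an Ornstein-Uhlenbeck threshold. The parameter values are arbitrary choices for illustration, so the non-monotonic dependence of the mean first passage time on the noise amplitude reported in the paper may or may not show up for them.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_first_passage_time(sigma, n_trials=500, dt=1e-3, t_max=50.0,
                            tau_m=1.0, mu=1.5, tau_th=2.0, theta0=1.0):
    """Leaky integrate-and-fire voltage (deterministic) crossing an
    Ornstein-Uhlenbeck firing threshold of noise amplitude sigma."""
    times = []
    for _ in range(n_trials):
        v, theta, t = 0.0, theta0, 0.0
        while t < t_max:
            v += dt * (mu - v) / tau_m
            theta += dt * (theta0 - theta) / tau_th \
                     + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
            if v >= theta:
                times.append(t)
                break
    return np.mean(times) if times else np.inf

for sigma in [0.05, 0.2, 0.5, 1.0]:
    print(f"sigma = {sigma:4.2f}  mean first passage time ~ "
          f"{mean_first_passage_time(sigma):.2f}")
```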
- Deterministic Relativistic Quantum Bit Commitment Relativistic quantum cryptography exploits the combined power of Minkowski causality and quantum information theory to control information in order to implement cryptographic tasks: what progress has been made in such methodologies?
- The Probabilistic No Miracles Argument Jan Sprenger: This paper develops a probabilistic reconstruction of the No Miracles Argument (NMA) in the debate between scientific realists and anti-realists. It is demonstrated that the persuasive force of the NMA depends on the particular disciplinary context where it is applied, and the stability of theories in that discipline. Assessments and critiques of “the” NMA, without reference to a particular context, are misleading and should be relinquished. This result has repercussions for recent anti-realist arguments, such as the claim that the NMA commits the base rate fallacy (Howson, 2000; Magnus and Callender, 2004). It also helps to explain the persistent disagreement between realists and anti-realists.
- Cosmic Acceleration in a Model of Fourth Order Gravity Shreya Banerjee, Nilesh Jayswal, and Tejinder P. Singh: A fourth-order model of gravity is investigated which has a free length parameter and no cosmological constant or dark energy. The authors consider cosmological evolution of a flat Friedmann universe in this model for the case that the length parameter is of the order of the present Hubble radius. By making a suitable choice for the present value of the Hubble parameter and for the value of the third derivative of the scale factor (the 'jerk'), the authors find that the model can explain cosmic acceleration to the same degree of accuracy as the standard concordance model. If the free length parameter is assumed to be time-dependent, and of the order of the Hubble parameter of the corresponding epoch, the model can still explain cosmic acceleration, and it provides a possible resolution of the cosmic coincidence problem. The authors also compare redshift drift in this model with that in the standard model.
- Regularities, Natural Patterns and Laws of Nature Stathis Psillos: The goal of this paper is to outline and defend an empiricist metaphysics of laws of nature. The key empiricist idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This outline relies on the concept of a 'natural pattern' and, more significantly, on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern is analysed in terms of mereology. Here is the road map. Section 2 briefly discusses the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. Section 3 offers arguments against stronger metaphysical views of laws. Section 4 motivates nomic objectivism. Section 5 addresses the question 'what is a regularity?' and develops a novel answer to it, based on the notion of a pattern. Section 6 details an analysis of the notion of pattern, and section 7 raises the question 'what is a law of nature?', the answer to which is: a law of nature is a regularity that is characterised by the unity of a natural pattern.
- Arrows without time: a shape-dynamic account of the time-asymmetry of causation Nguyen, D. N.: Contemporary approaches to the rapprochement of the time-asymmetry of causation and the time-symmetry of fundamental physics often appeal to the thermodynamic arrow. Nguyen gives an overview of these approaches and criticisms of them, and argues that appealing to the thermodynamic arrow is a problematic strategy since it requires us to commit to the excess metaphysical baggage of absolute space and absolute time. He then develops a new account drawing on recent work on the theory of shape dynamics, which avoids this difficulty.
- Philosophy, logic, science, history Tim Crane: analytic philosophy is sometimes said to have particularly close connections to logic and to science, and no particularly interesting or close relation to its own history. It is argued here that although the connections to logic and science have been important in the development of analytic philosophy, these connections do not come close to characterizing the nature of analytic philosophy, either as a body of doctrines or as a philosophical method. We will do better to understand analytic philosophy – and its relationship to continental philosophy – if we see it as a historically constructed collection of texts, which define its key problems and concerns. It is true, however, that analytic philosophy has paid little attention to the history of the subject. This is both its strength – since it allows for a distinctive kind of creativity – and its weakness – since ignoring history can encourage a philosophical variety of ‘normal science'.
- Agent Causation as the Solution to All the Compatibilist's Problems Ned Markosian: Ned defends the view that 'agent causation' theorists should be compatibilists. In this paper, he goes on to argue that 'compatibilists' should be agent causation theorists.
- Cheating is evolutionarily assimilated with cooperation in the continuous 'snowdrift game' Tatsuya Sasaki, Isamu Okada: It is well known that, in contrast to the Prisoner's Dilemma, the snowdrift game can lead to a stable coexistence of cooperators and cheaters. Recent theoretical evidence on the snowdrift game suggests that gradual evolution of individuals choosing to contribute in continuous degrees can result in social diversification into 100% contribution and 0% contribution through so-called evolutionary branching. Until now, however, game-theoretical studies have shed little light on the evolutionary dynamics and consequences of the loss of diversity in strategy. Here an analysis of continuous snowdrift games with quadratic payoff functions in dimorphic populations is undertaken. Subsequently, conditions are clarified under which gradual evolution can lead a population consisting of those with 100% contribution and those with 0% contribution to merge into one species with an intermediate contribution level. The key finding is that the continuous snowdrift game is more likely to lead to assimilation of different cooperation levels than to maintenance of diversity. Importantly, this implies that allowing the gradual evolution of cooperative behavior can facilitate social inequity aversion in joint ventures that otherwise could cause conflicts based on commonly accepted notions of fairness.
- Quantum Interference Links The Fate of Two Atoms: For the first time, physicists from the CNRS and Université Paris-Sud at the Laboratoire Charles Fabry (CNRS/Institut d'Optique Graduate School) have achieved interference between two separate atoms: when sent towards the opposite sides of a semi-transparent mirror, the two atoms always emerge together. This type of experiment, which was carried out with photons around thirty years ago, had so far been impossible to perform with matter, due to the extreme difficulty of creating and manipulating pairs of indistinguishable atoms. The work is published in the journal Nature dated 2 April 2015.
- Liability and the Ethics of War: A Response to Strawser and McMahan Seth Lazar: The 'Responsibility Account' of permissible killing in war states that only those responsible for unjustified threats may be intentionally killed in war. In recent papers, Jeff McMahan and B. J. Strawser have defended the Responsibility Account against an objection that it leads either towards pacifism, or towards total war, depending on how much responsibility is required for liability to be killed. In this paper, rebuttals are given to their counterarguments.
- Alienation and Subjectivity in Marx and Foucault Jae Hetterley: In this paper, an assessment is made of the extent to which the Foucauldian critique of the 'subject' problematizes Marxist philosophy, given that a key aspect of Marx's critique of capitalism is the idea that the capitalist mode of production produces alienated 'labour', a concept grounded in objective and 'universalistic' notions of subjectivity.
- Learning about probabilistic inference and forecasting by 'playing' with multivariate normal distributions G. D'Agostini: The properties of the normal distribution under linear transformation, as well as the easy way to compute the covariance matrix of marginals and conditionals, offer a unique opportunity to get insight into several aspects of uncertainties in measurements. The way to build the overall covariance matrix in a few, but conceptually relevant, cases is illustrated: several observations made with (possibly) different instruments measuring the same quantity; the effect of systematics (although limited to offsets, in order to stick to linear models) on the determination of the 'true value', as well as on the prediction of future observations; correlations which arise when different quantities are measured with the same instrument affected by an offset uncertainty; inferences and predictions based on averages; inference about constrained values; fits under some assumptions (linear models with known standard deviations). Many numerical examples are provided, exploiting the ability of the R language to handle large matrices and to produce high quality plots. Some of the results are framed in the general problem of 'propagation of evidence', crucial in analyzing graphical models of knowledge.
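The paper's exercises are written in R; as a quick illustration of the one computation everything else builds on, here is the standard conditioning of a multivariate normal (conditional mean and covariance of the unobserved components given an observed subvector) in a minimal Python sketch with made-up numbers, corresponding to a true value measured twice by instruments that share an offset uncertainty.

```python
import numpy as np

def condition_mvn(mu, Sigma, obs_idx, obs_values):
    """Mean and covariance of the remaining components of a multivariate
    normal, given that the components in obs_idx are observed."""
    idx = np.arange(len(mu))
    keep = np.setdiff1d(idx, obs_idx)
    S11 = Sigma[np.ix_(keep, keep)]
    S12 = Sigma[np.ix_(keep, obs_idx)]
    S22 = Sigma[np.ix_(obs_idx, obs_idx)]
    gain = S12 @ np.linalg.inv(S22)
    cond_mean = mu[keep] + gain @ (obs_values - mu[obs_idx])
    cond_cov = S11 - gain @ S12.T
    return cond_mean, cond_cov

# toy joint model: true value, and two measurements sharing an offset error
mu = np.array([0.0, 0.0, 0.0])
Sigma = np.array([[4.0, 4.0, 4.0],      # true value (variance 4)
                  [4.0, 5.0, 4.5],      # measurement 1 = true + offset + noise
                  [4.0, 4.5, 5.0]])     # measurement 2 = true + offset + noise
mean, cov = condition_mvn(mu, Sigma, obs_idx=np.array([1, 2]),
                          obs_values=np.array([1.2, 0.8]))
print("posterior mean of the true value:", np.round(mean, 3))
print("posterior variance:              ", np.round(cov, 3))
```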
- Replication, Communication, and the Population Dynamics of Scientific Discovery Richard McElreath, Paul Smaldino: Many published research results are false, and controversy continues over the roles of replication and publication policy in improving the reliability of research. A mathematical model of scientific discovery is developed in the context of replication, publication bias, and variation in research quality. This model provides a formal framework for reasoning about the normative structure of science. It is shown that replication may serve as a ratchet that gradually separates true hypotheses from false ones, but the same factors that make initial findings unreliable also make replications unreliable. The most important factors in improving the reliability of research are the rate of false positives and the base rate of true hypotheses, and suggestions are offered here for addressing both. The results also clarify recent debates on the communication of replications. Surprisingly, publication bias is not always an obstacle, but instead may have positive impacts: suppression of negative novel findings is often beneficial. It is also found that communication of negative replications serves the scientific community even when replicated studies have diminished power. The model presented in this paper is only a start, but it speaks directly to ongoing debates about the design and conduct of science.
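A back-of-the-envelope calculation in the same spirit (a textbook positive-predictive-value formula, not the authors' population-dynamic model) shows why the false positive rate and the base rate of true hypotheses dominate reliability, and how a single replication can act as a ratchet. All rates below are illustrative.

```python
def ppv(base_rate, power, alpha):
    """Probability that a 'positive' finding is true, given the base rate of
    true hypotheses, statistical power, and false positive rate alpha."""
    true_pos = power * base_rate
    false_pos = alpha * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

base_rate, power, alpha = 0.1, 0.8, 0.05
first = ppv(base_rate, power, alpha)
# treat the posterior after one positive finding as the new base rate and
# ask what a successful replication (possibly with lower power) adds
replicated = ppv(first, 0.5, alpha)
print(f"after one positive finding:      {first:.2f}")
print(f"after a successful replication:  {replicated:.2f}")
```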
- Is the world made of loops? Alexander Afriat: In discussions of the Aharonov-Bohm effect, Healey and Lyre have attributed reality to loops σ(0) (or hoops [σ(0)]), since the electromagnetic potential A is unmeasurable and can therefore be transformed. It is argued that the gauge class [A] = [A + dλ] (for any λ) and the hoop [σ(0)] are related by a meaningful duality, so that however one feels about [A] (or any potential A ∈ [A]), it is no worse than [σ(0)] (or any loop σ(0) ∈ [σ(0)]): no ontological firmness is gained by retreating to the loops, which are just as flimsy as the potentials. And one wonders how the unmeasurability of one entity can invest another with physical reality; would an eventual observation of A dissolve σ(0), consigning it to a realm of incorporeal mathematical abstractions? The reification of loops rests on the potential's "gauge dependence", which in turn rests on its unmeasurability, which is too shaky and arbitrary a notion to carry so much weight.
- Why Physics Uses Second Derivatives Kenny Easwaran: A defense is offered of a causal, reductionist account of the nature of rates of change such as velocity and acceleration. This account identifies velocity with the past derivative of position, and acceleration with the future derivative of velocity. Unlike most reductionist accounts, this account can preserve the role of velocity as a cause of future positions and of acceleration as the effect of current forces. It is shown that this is possible only if all the fundamental laws are expressed by differential equations of the same order. Consideration of the continuity of time explains why the differential equations are all second-order. This explanation is not available on non-causal or non-reductionist accounts of rates of change. Finally, it is argued that alleged counterexamples to the reductionist account involving physically impossible worlds are irrelevant to an analysis of the properties that play a causal role in the actual world.
- The theory of global imbalances: mainstream economics vs. structural Keynesianism Thomas I. Palley: Prior to the 2008 financial crisis there was much debate about global trade imbalances. Prima facie, the imbalances seem a significant problem. However, acknowledging that would question mainstream economics’ celebratory stance toward globalization. That tension prompted an array of theories which explained the imbalances while retaining the claim that globalization is economically beneficial. This paper surveys those new theories. It contrasts them with the structural Keynesian explanation that views the imbalances as an inevitable consequence of neoliberal globalization. The paper also describes how globalization created a political economy that supported the system despite its proclivity to generate trade imbalances.
- Essentialism and Anti-Essentialism in Feminist Philosophy Alison Stone: This article revisits the ethical and political questions raised by feminist debates over essentialism, the belief that there are properties essential to women and which all women share. Feminists’ widespread rejection of essentialism has threatened to undermine feminist politics. Re-evaluating two responses to this problem — ‘strategic’ essentialism and Iris Marion Young’s idea that women are an internally diverse ‘series’ — it is argued that both unsatisfactorily retain essentialism as a descriptive claim about the social reality of women’s lives. It is also argued that instead women have a ‘genealogy’: women always acquire femininity by appropriating and reworking existing cultural interpretations of femininity, so that all women become situated within a history of overlapping chains of interpretation. Because all women are located within this complex history, they are identifiable as belonging to a determinate social group, despite sharing no common understanding or experience of femininity. The idea that women have a genealogy thus reconciles anti-essentialism with feminist politics.
- Orientalism Edward Said: in the deepest possible academic sense, this book clearly needs no introduction and its intellectual contributions to the history of cultural and civilizational ideas know no limits. Please enjoy when and if you have the time!
- Relatedness and Economies of Scale in the Provision of Different Kinds of Collective Goods Jorge Pena, Georg Noldeke and Laurent Lehmann: Many models proposed to study the evolution of collective action rely on a formalism that represents social interactions as n-player games between individuals adopting discrete actions such as cooperate and defect. Despite the importance of relatedness as a solution to collective action problems in biology and the fact that most social interactions unavoidably occur between relatives, incorporating relatedness into these models has so far proved elusive. The authors address this problem by considering mixed strategies and by integrating discrete-action n-player games into the direct fitness approach of social evolution theory. As an application, the authors use their mathematical framework to investigate the provision of three different kinds of collective goods, paradigmatic of a vast array of helping traits in nature: “public goods” (both providers and shirkers can use the good, e.g., alarm calls), “club goods” (only providers can use the good, e.g., participation in collective hunting), and “charity goods” (only shirkers can use the good, e.g., altruistic sacrifice). It is shown that relatedness relaxes the collective action problems associated with the provision of these goods in different ways depending on the kind of good (public, club, or charity) and on its economies of scale (constant, diminishing, or increasing returns to scale). The authors' findings highlight the importance of explicitly accounting for relatedness, the kind of good, and economies of scale in theoretical and empirical studies of collective action.
- Digging into the “Giant Resonance”, scientists find hints of new quantum physics: A collaboration between theoretical and experimental physicists from European XFEL and the Center for Free-Electron Laser Science (CFEL) at DESY has uncovered previously unknown quantum states inside atoms.
- Dark matter and muons are ruled out as 'DAMA' signal source: A controversial and unconfirmed observation of dark matter made by the DAMA group in Italy may have an even stranger source than previously thought, according to physicists in the UK. Their research suggests that the signal seen by DAMA is neither from dark matter nor from background radiation. Instead, they say that the signal could be the result of a fault in the DAMA detector's data-collecting apparatus.
- Leibniz on the Modal Status of Absolute Space and Time Martin Lin: Leibniz is a relationalist about space and time. He believes that nothing spatial or temporal is more fundamental than the spatial and temporal relations that obtain between things. These relations are direct: they are unmediated by anything spatially or temporally absolute such as points in space or moments in time. Some philosophers, for example, Newton and Clarke, disagree. They think that space and time are absolute. Their absolutism can take different forms. Newton, for example, believes that space is a substance, or more accurately, something substance-like. A substance is not a relation of any kind. Therefore, if space is a substance or substance-like, then it is absolute. Other absolutists, such as Clarke, believe that space is a monadic property of God. A monadic property is not a relation and thus if space is a monadic property, then it is absolute. Leibniz clearly thinks that absolutism is false. What is less clear is his attitude toward its modal status. Are absolute space and time merely contingently non-actual or are they impossible? In his correspondence with Clarke, Leibniz makes a number of claims regarding this issue that appear, on the face of it, to be inconsistent with one another. He argues that the Principle of the Identity of Indiscernibles (the PII) follows from God’s wisdom. God’s wisdom is the basis of only contingent truths, thus it would follow that the PII is a contingent truth. He argues against absolute space and time by way of the PII. This suggests that relationalism is also a contingent truth and so absolute space and time must be merely contingently non-actual. And yet he also appears to claim that absolute space and time are impossible. What justifies his claim that they are impossible? Is Leibniz being inconsistent?
- Reduction, Emergence and Renormalization Jeremy Butterfield: Excellent paper on the interface between philosophy and physics (science). In previous works, the author described several examples combining reduction and emergence, where reduction is understood à la Ernest Nagel and emergence is understood as behaviour or properties that are novel (by some salient standard). Here, the aim is again to reconcile reduction and emergence, for a case which is apparently more problematic than those treated before: renormalization. Renormalization is a vast subject, so the author confines himself to emphasizing how the modern approach to renormalization (initiated by Wilson and others between 1965 and 1975), when applied to quantum field theories, illustrates both Nagelian reduction and emergence. The author's main point is that the modern understanding of how renormalizability is a generic feature of quantum field theories at accessible energies gives us a conceptually unified family of Nagelian reductions. That is worth saying, since philosophers tend to think of scientific explanation as only explaining an individual event, or perhaps a single law, or at most deducing one theory as a special case of another. Here we see a framework in which there is a space of theories endowed with enough structure that it provides a family of reductions.
- Quantum Darwinism and non-Markovian dissipative dynamics from quantum phases of the spin−1/2 XX model Gian Luca Giorgi, Fernando Galve, and Roberta Zambrini: Quantum Darwinism explains the emergence of a classical description of objects in terms of the creation of many redundant registers in an environment containing their classical information. This amplification phenomenon, where only classical information reaches the macroscopic observer and through which different observers can agree on the objective existence of such an object, has been revived lately for several types of situations, successfully explaining classicality. Here quantum Darwinism is explored in the setting of an environment made of two-level systems initially prepared in the ground state of the XX model, which exhibits different phases; it is found that the different phases differ in their ability to redundantly acquire classical information about the system, the “ferromagnetic phase” being the only one able to complete quantum Darwinism. At the same time the authors relate this ability to how non-Markovian the system dynamics is, based on the interpretation that non-Markovian dynamics is associated with a backflow of information from environment to system, which spoils the information transfer needed for Darwinism. Finally, the authors explore the mixing of bath registers by allowing a small interaction among them, finding that this spoils the stored information, as previously found in the literature.
- Blind Rule Following and the 'Antinomy of Pure Reason' Alex Miller: Saul Kripke identifies the 'rule-following problem' as finding an answer to the question: what makes it the case that a speaker means one thing rather than another by a linguistic expression? Crispin Wright and Paul Boghossian have argued on many occasions that this problem could be neutralised via the adoption of a form of non-reductionism about content. In recent work on 'blind rule-following', both now argue that even if a non-reductionist view can be defended in such a way as to neutralise the challenge posed by Kripke's Wittgenstein, a more fundamental problem about rule-following remains unsolved. In this paper, it is argued that, courtesy of a non-reductionist conception of content, we can successfully meet the 'Kripkensteinian' challenge in such a way that the Wright-Boghossian problems are themselves neutralised.
- Spatial Evolutionary Public Goods Game on Complete Graph and Dense Complex Networks Jinho Kim, Huiseung Chae, Soon-Hyung Yook & Yup Kim: The authors study the spatial evolutionary public goods game (SEPGG) with voluntary or optional participation on a complete graph (CG) and on dense networks. Based on analyses of the SEPGG rate equation on a finite CG, they find that SEPGG has two stable states depending on the value of the multiplication factor 'r', illustrating how the “tragedy of the commons” and “an anomalous state without any active participants” occur in real-life situations. When 'r' is low, the state with only loners is stable, and the state with only defectors is stable when 'r' is high. The authors also derive the exact scaling relation for 'r*'. All of the results are confirmed by numerical simulation. Furthermore, they find that a cooperator-dominant state emerges when the number of participants or the mean degree ⟨k⟩ decreases. They also investigate the scaling dependence of the emergence of cooperation on 'r' and ⟨k⟩. These results show how the “tragedy of the commons” disappears when cooperation between egoistic individuals increases without any additional socioeconomic punishment. (A toy payoff sketch of the underlying optional public goods game follows below.)
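As flagged in the entry, here is a toy payoff computation for one group of an optional public goods game with cooperators, defectors, and loners; the payoff structure is the standard one for such games, but the parameter values are illustrative and the paper's rate-equation and network analysis is not reproduced.

```python
def public_goods_payoffs(n_coop, n_defect, r, cost=1.0, loner_payoff=0.5):
    """Payoffs for one group of an optional public goods game (illustrative sketch).

    Cooperators pay `cost` into a pot, the pot is multiplied by `r` and shared
    equally among all participants (cooperators + defectors); loners opt out
    and receive the fixed `loner_payoff`.
    """
    participants = n_coop + n_defect
    if participants < 2:                     # no game without at least two participants
        return loner_payoff, loner_payoff, loner_payoff
    share = r * cost * n_coop / participants
    return share - cost, share, loner_payoff  # (cooperator, defector, loner)

# Low r: cooperating does not pay; the loner payoff beats the cooperator's.
print(public_goods_payoffs(n_coop=3, n_defect=2, r=1.5))
# High r: the pot grows, but defectors in the group still outscore cooperators.
print(public_goods_payoffs(n_coop=3, n_defect=2, r=4.0))
```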
- The Feminine Subject Susan Hekman: I cannot recommend this book highly enough - In 1949 Simone de Beauvoir asked, “What does it mean to be a woman?” Her answer to that question inaugurated a radical transformation of the meaning of “woman” that defined the direction of subsequent feminist theory. What Beauvoir discovered is that it is impossible to define “woman” as an equal human being in our philosophical and political tradition. Her effort to redefine “woman” outside these parameters set feminist theory on a path of radical transformation. The feminist theorists who wrote in the wake of Beauvoir’s work followed that path. Susan Hekman’s original and highly engaging book traces the evolution of “woman” from Beauvoir to the present. In a comprehensive synthesis of a number of feminist theorists, she covers French feminist thinkers Luce Irigaray and Helene Cixous as well as theorists such as Carol Gilligan, Carole Pateman and Judith Butler. The book examines the relational self, feminist liberalism and Marxism, as well as feminist theories of race and ethnicity, radical feminism, postmodern feminism and material feminism. Hekman argues that the effort to redefine “woman” in the course of feminist theory is a cumulative process in which each approach builds on that which has gone before. Although they have approached “woman” from different perspectives, feminist theorists have moved beyond the negative definition of our tradition to a new concept that continues to evolve.
- The Future of Differences: Truth and Method in Feminist Theory Susan Hekman: yet another gem by Susan - 'This is an ambitious book. It seeks to develop a clear theory of difference(s) on which to ground feminist epistemology and practice. Hekman's contention is that feminists must eschew equally both universalism and relativism. Her careful and insightful readings of feminist classics and contemporary scholarship have produced a text that will become a classic in its own right. Hekman's modestly stated ambition is to provide a form of analysis that engages both with differences and with general concepts. Her reading of Weber is truly a tour de force in this regard. This is a book that every feminist scholar will want to read and use.' Henrietta L. Moore, Professor of Social Anthropology and Director of the Gender Institute, London School of Economics.
- Supernumeration: Vagueness and Numbers Peter Simons: There is a notable discrepancy between philosophers and practitioners on approaches to vagueness. Philosophers almost all reject fuzzy logic, and a majority accept some form of supervaluational theory. Practitioners analysing real data, on the other hand, use fuzzy logic, because computer algorithms exist for it, despite its theoretical shortcomings. These two communities should not remain separate. The solution, it is argued, is to put supervaluation and numbers together. After reviewing the principal and well-known defects of fuzzy logic, this paper shows how to use numerical values in conjunction with a supervaluational approach to vagueness. The two principal working ideas are degrees of candidature (of objects and predicates) and expected truth-value. An outline of the theory, which combines vagueness of predicates and vagueness of objects, is then presented, and its pros and cons are discussed in light of the obvious principal objections: that the theory is complex, that there is arbitrariness in the selection of numbers, and that penumbral connections must be accounted for. It is then contended that all these objections can be answered.
- Truth Relativists Can't Trump Moral Progress Annalisa Coliva: "In this paper we raise a new challenge for truth-relativism, when applied to moral discourse. In §1 we set out the main tenets of this doctrine; in §2 we canvass two broad forms a relativist project can take – Descriptive Relativism and Revisionary Relativism; in §3 we briefly consider the prospects of the combination of truth-relativism with either project when dealing with disagreement arising in the relevant areas of discourse. We claim that truth-relativism faces what we dub “the Lost Disagreement Problem”, while leaving its final assessment for another occasion. In §4 we show how there is another – so far unnoticed – challenge truth-relativists must face when dealing with disputes about morals: we call it “the Progress Problem”. In §5 we show how a recent notion proposed in connection with truth-relativism and the problem of future contingents, viz. the idea of trumping, can help relativists make sense of such a problem. Yet, we conclude, in §6, that the appeal to trumping in fact forces a dilemma onto truth-relativists engaged in either a Descriptive or a Revisionary project."
- A Definable Henselian Valuation With High Quantifier Complexity Immanuel Halupczok, Franziska Jahnke: mathematical logic - an example of a parameter-free definable Henselian valuation ring is given which is neither definable by a parameter-free ∀∃-formula nor by a parameter-free ∃∀-formula in the language of rings. This answers a question of Prestel.
- Self-Knowledge for Humans Quassim Cassam: Human beings are not model epistemic citizens. Our reasoning can be careless and uncritical, and our beliefs, desires, and other attitudes aren't always as they ought rationally to be. Our beliefs can be eccentric, our desires irrational and our hopes hopelessly unrealistic. Our attitudes are influenced by a wide range of non-epistemic or non-rational factors, including our character, our emotions and powerful unconscious biases. Yet we are rarely conscious of such influences. Self-ignorance is not something to which human beings are immune. In this book Quassim Cassam develops an account of self-knowledge which tries to do justice to these and other respects in which humans aren't model epistemic citizens. He rejects rationalist and other mainstream philosophical accounts of self-knowledge on the grounds that, in more than one sense, they aren't accounts of self-knowledge for humans. Instead he defends the view that inferences from behavioural and psychological evidence are a basic source of human self-knowledge. On this account, self-knowledge is a genuine cognitive achievement and self-ignorance is almost always on the cards. As well as explaining knowledge of our own states of mind, Cassam also accounts for what he calls 'substantial' self-knowledge, including knowledge of our values, emotions, and character. He criticizes philosophical accounts of self-knowledge for neglecting substantial self-knowledge, and concludes with a discussion of the value of self-knowledge.
- Absence of gravitational-wave signal extends limit on knowable universe: Imagine an instrument that can measure motions a billion times smaller than an atom that last a millionth of a second. Fermilab's Holometer is currently the only machine with the ability to take these very precise measurements of space and time, and recently collected data has improved the limits on theories about exotic objects from the early universe. Our universe is as mysterious as it is vast. According to Albert Einstein's theory of general relativity, anything that accelerates creates gravitational waves, which are disturbances in the fabric of space and time that travel at the speed of light and continue infinitely into space. Scientists are trying to measure these possible sources all the way to the beginning of the universe.
- On the Importance of Interpretation in Quantum Physics - A Reply to Elise Crull Antonio Vassallo and Michael Esfeld: E. Crull claims that by invoking decoherence it is possible (i) to obviate many “fine grained” issues often conflated under the common designation of the 'measurement problem', and (ii) to make substantial progress in the fields of quantum gravity and quantum cosmology, without any early incorporation of a particular interpretation into the quantum formalism. It is pointed out here that Crull is mistaken about decoherence and tacitly assumes some kind of interpretation of the quantum formalism.
- Projective simulation with generalization Alexey A. Melnikov, Adi Makmal, Vedran Dunjko, and Hans J. Briegel: The ability to generalize is an important feature of any intelligent agent. Not only because it may allow the agent to cope with large amounts of data, but also because in some environments, an agent with no generalization ability is simply doomed to fail. In this work we outline several criteria for generalization, and present a dynamic and autonomous machinery that enables projective simulation agents to meaningfully generalize. Projective simulation, a novel, physical approach to artificial intelligence, was recently shown to perform well, in comparison with standard models, on both simple reinforcement learning problems and on more complicated canonical tasks, such as the “grid world” and the “mountain car problem”. Both the basic projective simulation model and the presented generalization machinery are based on very simple principles. This simplicity allows us to provide a full analytical treatment of the agent’s performance and to illustrate the benefit the agent gains by generalizing. Specifically, we show how such an ability allows the agent to learn in rather extreme environments, in which learning is otherwise impossible.
- General Covariance, Diffeomorphism Invariance, and Background Independence in 5 Dimensions Antonio Vassallo: This paper considers the “GR-desideratum”, that is, the way general relativity implements general covariance, diffeomorphism invariance, and background independence. Two cases are discussed where 5-dimensional generalizations of general relativity run into interpretational troubles when the
GR-desideratum is forced upon them. It is shown how the conceptual problems dissolve when such a desideratum is relaxed. In the end, it is suggested that a similar strategy might mitigate some major issues such as the problem of time or the embedding of quantum non-locality into relativistic spacetimes.
- Coherent states, quantum gravity and the Born-Oppenheimer approximation Alexander Stottmeister, Thomas Thiemann: This article aims at establishing the (time-dependent) Born-Oppenheimer approximation, in the sense of space adiabatic perturbation theory, for quantum systems constructed by techniques of the loop quantum gravity framework, especially the canonical formulation of the latter. The analysis presented here fits into a rather general framework, and offers a solution to the problem of applying the usual Born-Oppenheimer ansatz for molecular (or structurally analogous) systems to more general quantum systems (e.g. spin-orbit models) by means of space adiabatic perturbation theory. The proposed solution is applied to a simple, finite dimensional model of interacting spin systems, which serves as a non-trivial, minimal model of the aforesaid problem. Furthermore, it is explained how the content of this article affects the possible extraction of quantum field theory on curved spacetime from loop quantum gravity (including matter fields).
- Dummett on the Relation between Logics and Metalogics Timothy Williamson: This paper takes issue with a claim by Dummett that, in order to aid understanding between proponents and opponents of logical principles, a semantic theory should make the logic of the object-language maximally insensitive to the logic of the metalanguage. The general advantages of something closer to a homophonic semantic theory are sketched. A case study is then made of modal logic, with special reference to disputes over the Brouwerian formula (B) in propositional modal logic and the Barcan formula in quantified modal logic. Semantic theories for modal logic within a possible worlds framework satisfy Dummett’s desideratum, since the non-modal nature of the semantics makes the modal logic of the object-language trivially insensitive to the modal logic of the metalanguage. However, that does not help proponents and opponents of the modal principles at issue understand each other. Rather, it makes the semantic theory virtually irrelevant to the dispute, which is best conducted mainly in the object-language; this applies even to Dummett’s own objection to the B principle. Other forms of semantics for modal languages are shown not to alter the picture radically. It is argued that the semantic and more generally metalinguistic aspect of disputes in logic is much less significant than Dummett takes it to be. The role of (non-causal) abductive considerations in logic and philosophy is emphasized, contrary to Dummett’s view that inference to the best explanation is not a legitimate method of argument in these areas.
- The Foundations of Transcendental Pragmatism Alexander Schmid: Over the course of the last three centuries in America, two particular schools of philosophical, and in one case, literary thought, have captured the American intellectual imagination: transcendentalism and pragmatism. While transcendentalism flourished in the middle of the 19th century and was prominent among litterateurs and essayists, pragmatism was prominent among scholars and philosophers near the end of the 19th century and the beginning of the 20th. At first glance, transcendentalism and pragmatism may seem to have no more in common than each being uniquely American. This may be true on a superficial level, but it does not preclude each movement from being privy to some truth. It is on that belief that this paper is grounded, and after a short exposition of what transcendentalism and pragmatism mean, a new school and way of thinking, transcendental pragmatism, is unveiled, with this paper as its founding document.
- Curve-Fitting For Bayesians? Gordon Belot: Bayesians often assume, suppose, or conjecture that for any reasonable explication of the notion of simplicity a prior can be designed that will enforce a preference for hypotheses simpler in just that sense. Further, it is often claimed that the Bayesian framework automatically implements Occam’s razor — that conditionalizing on data consistent with both a simple theory and a complex theory more or less inevitably favours the simpler theory. But it is shown here that there are simplicity-driven approaches to curve-fitting problems that cannot be captured within the orthodox Bayesian framework and that the automatic razor does not function for such problems.
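For readers who want the target of Belot's criticism in concrete form, one common Bayesian recipe for encoding a simplicity preference in curve-fitting is a prior over model classes (here, polynomial degree) that decays with complexity, combined with the marginal likelihood. The data, noise level, prior variances, and decay rate below are assumptions for illustration; the sketch is not Belot's own construction.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Made-up data from a quadratic with noise (illustrative only).
x = np.linspace(-1, 1, 20)
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(scale=0.1, size=x.size)

sigma2, tau2 = 0.1**2, 1.0**2   # assumed noise variance and coefficient-prior variance

def log_evidence(degree):
    """Marginal likelihood of y under a Gaussian prior on polynomial coefficients."""
    X = np.vander(x, degree + 1, increasing=True)      # design matrix
    C = sigma2 * np.eye(x.size) + tau2 * X @ X.T       # marginal covariance of y
    return multivariate_normal(mean=np.zeros(x.size), cov=C).logpdf(y)

def log_prior(degree):
    return -degree * np.log(2.0)                       # assumed prior decaying with degree

scores = {d: log_evidence(d) + log_prior(d) for d in range(6)}
print({d: round(s, 1) for d, s in scores.items()})
# With these settings the data-generating degree (2) typically gets the highest score.
print("posterior-favoured degree:", max(scores, key=scores.get))
```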
- On the Irrelevance Of General Equilibrium Theory Lars P. Syll: On general equilibrium theory: "The problem with perfect competition is not its 'lack' of realism, but its lack of 'relevancy': it surreptitiously assumes an entity that gives prices (present and future) to price-taking agents, that collects information about supplies and demands, adds these up, moves prices up and down until it finds their equilibrium value. Textbooks do not tell this story; they assume that a deus ex machina called the 'market' does the job. In the real world, people trade with each other, not with 'the market.' And some of them, at least, are price makers. To make things worse, textbooks generally allude to some mysterious 'invisible hand' that allocates goods optimally. They wrongly attribute this idea to Adam Smith and make use of his authority so that students accept this magical way of thinking as a kind of proof. Perfect competition in the general equilibrium mode is perhaps an interesting model for describing a central planner who is trying to find an efficient allocation of resources using prices as signals that guide price-taking households and firms. But students should be told that the course they follow—on 'general competitive analysis'—is irrelevant for understanding market economies." (Emmanuelle Benicourt & Bernard Guerrien). Lars agrees with this statement and adds: "We do know that - under very restrictive assumptions - equilibria do exist, are unique and are Pareto-efficient. One however has to ask oneself — what good does that do? As long as we cannot show, except under exceedingly special assumptions, that there are convincing reasons to suppose there are forces which lead economies to equilibria - the value of general equilibrium theory is negligible. As long as we cannot really demonstrate that there are forces operating - under reasonable, relevant and at least mildly realistic conditions - at moving markets to equilibria, there cannot really be any sustainable reason for anyone to pay any interest or attention to this theory."
- Relativistic Paradoxes and Lack of Relativity in Closed Spaces Moses Fayngold: Some known relativistic paradoxes are reconsidered for closed spaces, using a simple geometric model. For two twins in a closed space, a real paradox seems to emerge when the traveling twin is moving uniformly along a geodesic and returns to the starting point without turning back. Accordingly, the reference frames (RF) of both twins seem to be equivalent, which makes the twin paradox irresolvable: each twin can claim to be at rest and therefore to have aged more than the partner upon their reunion. In reality, the paradox has a resolution in this case as well. Apart from the distinction between the two RFs with respect to the actual forces in play, they can be distinguished by clock synchronization. A closed space singles out a truly stationary RF with single-valued global time; in all other frames, time is not a single-valued parameter. This implies that even uniform motion along a spatial geodesic in a compact space is not truly inertial, and there is an effective force on an object in such motion. Therefore, the traveling twin will age less upon circumnavigation than the stationary one, just as in flat space-time. Ironically, Relativity in this case emerges free of paradoxes at the price of bringing back the pre-Galilean concept of absolute rest. An example showing the absence of paradoxes is also considered for the more realistic case of a time-evolving closed space.
- Collective Belief, Kuhn, and the String Theory Community James Owen Weatherall, Margaret Gilbert: "One of us [Gilbert, M. (2000). “Collective Belief and Scientific Change.” Sociality and Responsibility. Lanham, MD: Rowman & Littlefield. 37-49.] has proposed that ascriptions of beliefs to scientific communities generally involve a common notion of collective belief described by her in numerous places. A given collective belief involves a joint commitment of the parties, who thereby constitute what Gilbert refers to as a plural subject. Assuming that this interpretive hypothesis is correct, and that some of the belief ascriptions in question are true, then the members of some scientific communities have obligations that may act as barriers both to the generation and, hence, the fair evaluation of new ideas and to changes in their community’s beliefs. We argue that this may help to explain Thomas Kuhn’s observations on “normal science”, and go on to develop the relationship between Gilbert's proposal and several features of a group of physicists working on a fundamental physical theory called “string theory”, as described by physicist Lee Smolin [Smolin, L. (2006). The Trouble with Physics. Mariner Books: New York.]. We argue that the features of the string theory community that Smolin cites are well explained by the hypothesis that the community is a plural subject of belief."
- Why Build a Virtual Brain? Large-scale Neural Simulations as Test-bed for Artificial Computing Systems Matteo Colombo: Despite the impressive amount of financial resources invested in carrying out large-scale brain simulations, it is controversial what the payoffs are of pursuing this project. The present paper argues that in some cases, from designing, building, and running a large-scale neural simulation, scientists acquire useful knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. What this means, why it is not a trivial lesson, and how it advances the literature on the epistemology of computer simulation are the three preoccupations addressed by the paper. Keywords: Large-scale neural simulations; epistemology of computer simulation; target-directed modeling; neuromorphic technologies, brain-networking.
- Accelerating universe? Not so fast: A University of Arizona-led team of astronomers found that the type of supernovae commonly used to measure distances in the universe fall into distinct populations not recognized before; the findings have implications for our understanding of how fast the universe has been expanding since the Big Bang. The discovery casts new light on the currently accepted view of the universe expanding at a faster and faster rate, pulled apart by a poorly understood force called dark energy. This view is based on observations that resulted in the 2011 Nobel Prize for Physics awarded to three scientists, including UA alumnus Brian P. Schmidt.
- The Value Of Knowledge Duncan Pritchard: It is widely held that knowledge is of distinctive value. This is the main reason that knowledge and not mere justified true belief has been the central notion in epistemological methodology, teleology and deontology. The 'value-problem' is to explain why this is the case. In this important paper, Duncan argues against the view that knowledge is of particular value, and thus gives a negative answer to the 'value-problem' and follows through with the ramifications of such denial.
- An Algebraic Topological Method for Multimodal Brain Networks Comparisons Tiago Simas, Mario Chavez, Pablo Rodriguez, and Albert Diaz-Guilera: Understanding brain connectivity has become one of the most important issues in neuroscience. But connectivity data can reflect either the functional relationships of the brain activities or the anatomical properties between brain areas. Although one should expect a clear relationship between the two representations, it is not straightforward. Here, a formalism is presented that allows for the comparison of structural and functional networks by embedding both in a common metric space. In this metric space one can then find for which regions the two networks are significantly different. The methodology can be used not only to compare multimodal networks but also to extract statistically significant aggregated networks of a set of subjects. This procedure is in fact used to aggregate a set of functional networks from different subjects into an aggregated network that is compared with the structural connectivity. The comparison of the aggregated network reveals some features that are not observed when the comparison is done with the classical averaged network.
- Mechanisms meet Structural Explanation Laura Felline: This paper investigates the relationship between Structural Explanation (SE) and the New Mechanistic account of explanation (ME). The aim of this paper is twofold: firstly, to argue that some phenomena in the domain of fundamental physics, although mechanically brute, are structurally explained; and secondly, by elaborating on the contrast between SE and ME, to better clarify some features of SE. Finally, this paper will argue that, notwithstanding their apparently antithetical character, SE and ME can be reconciled within a unified account of general scientific explanation.
- Can social interaction constitute social cognition? Hanne De Jaegher, Ezequiel Di Paolo and Shaun Gallagher: An important shift is taking place in social cognition research, away from a focus on the individual mind and toward embodied and participatory aspects of social understanding. Empirical results already imply that social cognition is not reducible to the workings of individual cognitive mechanisms. To galvanize this interactive turn, the authors provide an operational definition of social interaction and distinguish the different explanatory roles - contextual, enabling and constitutive - it can play in social cognition. Then the authors show that interactive processes are more than a context for social cognition: they can complement and even replace individual mechanisms. This new explanatory power of social interaction can push the field forward by expanding the possibilities of scientific explanation beyond the individual.
- Evolving to Generalize - Trading Precision for Speed Cailin O’Connor: Biologists and philosophers of biology have argued that learning rules that do not lead organisms to play evolutionarily stable strategies (ESSes) in games will not be stable and thus not evolutionarily successful. This claim, however, stands at odds with the fact that learning generalization - a behavior that cannot lead to ESSes when modeled in games - is observed throughout the animal kingdom. In this paper, the author uses learning generalization to illustrate how previous analyses of the evolution of learning have gone wrong. It has been widely argued that the function of learning generalization is to allow for swift learning about novel stimuli. It is shown that in evolutionary game theoretic models learning generalization, despite leading to suboptimal behavior, can indeed speed learning. It is further observed that previous analyses of the evolution of learning ignored the short-term success of learning rules. Once this omission is corrected, it is argued, learning generalization can be expected to evolve in these models. This analysis is then used to show how ESS methodology can be misleading, and to reject previous justifications about ESS play derived from analyses of learning.
- Asymptotic behaviour of weighted differential entropies in a Bayesian problem Mark Kelbert and Pavel Mozgunov: Consider the Bayesian problem of estimating the probability of success in a series of trials with binary outcomes. The authors study the asymptotic behaviour of weighted differential entropies for the posterior probability density function (PDF) conditional on x successes after n trials, as n → ∞. In the first part of the work, Shannon’s differential entropy is considered in three particular cases: x is a proportion of n; x ∼ n^β, where 0 < β < 1; either x or n − x is a constant. In the first and second cases the limiting distribution is Gaussian and the differential entropy is asymptotically that of a Gaussian with the corresponding variance. In the third case the limiting distribution is not Gaussian, but the asymptotics of the differential entropy can still be found explicitly. Suppose next that one is interested in whether the coin is fair and, for large n, in the true frequency; in other words, one wants to emphasize the parameter value p = 1/2. To do so, the concept of weighted differential entropy, introduced in earlier work, is used when the frequency γ needs to be emphasized. It was found that a weight of the suggested form does not change the asymptotic form of the Shannon, Renyi, Tsallis and Fisher entropies, but changes the constants. The main term of the weighted Fisher information is changed by a constant which depends on the distance between the true frequency and the value one wants to emphasize. (A numerical sketch of the unweighted entropy asymptotics appears below.)
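With a uniform prior (an illustrative assumption, not necessarily the paper's choice), the posterior in this problem is a Beta distribution, and the Gaussian behaviour of its differential entropy when x is a fixed proportion of n can be checked numerically; the weighted entropies of the paper are not reproduced here.

```python
import numpy as np
from scipy.stats import beta

# Posterior for the success probability after x successes in n trials,
# assuming a uniform prior: Beta(x + 1, n - x + 1).
for n in (10, 100, 1000, 10000):
    x = int(0.3 * n)                       # x is a fixed proportion of n
    post = beta(x + 1, n - x + 1)
    h_exact = post.entropy()               # differential entropy of the Beta posterior
    # Gaussian with the same variance: h = 0.5 * log(2 * pi * e * var)
    h_gauss = 0.5 * np.log(2 * np.pi * np.e * post.var())
    print(f"n={n:6d}  h_Beta={h_exact:8.4f}  h_Gauss={h_gauss:8.4f}")
```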
- Understanding Democracy and Development Traps Using a Data-Driven Approach: Why do some countries seem to develop quickly while others remain poor? This question is at the heart of the so-called poverty or development trap problem. Using mathematics on open data sets, researchers now present new insights into this issue and also suggest which countries can be expected to develop faster. The paper is published in the journal Big Data. Development economists have identified several potential causes of economic development traps, but the issue is complex. Some countries appear to be stuck not only in an economic development trap but also in a political development trap with a lack of democracy.
- Knowledge Representation meets Social Virtual Reality Carlo Bernava, Giacomo Fiumara, Dario Maggiorini, Alessandro Provetti, and Laura Ripamonti: This study designs and implements an application running inside 'Second Life' that supports user annotation of graphical objects and graphical visualization of concept ontologies, thus providing a formal, machine-accessible description of objects. As a result, a platform is offered that combines the graphical knowledge representation that is expected from a MUVE artifact with the semantic structure given by the Resource Description Framework (RDF) representation of information.
- Biohumanities: Rethinking the relationship between biosciences, philosophy and history of science, and society Karola Stotz, Paul E. Griffiths: It is argued that philosophical and historical research can constitute a ‘Biohumanities’ which deepens our understanding of biology itself; engages in constructive 'science criticism'; helps formulate new 'visions of biology'; and facilitates 'critical science communication'. The authors illustrate these ideas with two recent 'experimental philosophy' studies of the concept of the gene and of the concept of innateness, conducted by the authors and their collaborators. It is concluded that the complex and often troubled relations between science and society are critical to both parties, and argued that the philosophy and history of science can help to make this relationship work.
- A conundrum of Denotation Christopher Phelps Cook: This is an excellent paper dealing with the semantic, alethic and truth-theoretic paradoxes, with an emphasis on Curry's paradox.
- Macroscopic Observability of Spinorial Sign Changes: A Reply to Gill Joy Christian: In a recent paper Richard Gill has criticized an experimental proposal which describes how to detect a macroscopic signature of spinorial sign changes under 2π rotations. Here it is pointed out that Gill’s worries stem from his own elementary algebraic and conceptual mistakes. In a recent paper a mechanical experiment was proposed to test the possible macroscopic observability of spinorial sign changes under 2π rotations. The proposed experiment is a variant of the local model for spin-1/2 particles considered by Bell, which was later further developed by Peres, who provided pedagogical details. This experiment differs, however, from the one considered by Bell and Peres in one important respect: it involves measurements of the actual spin angular momenta of two fragments of an exploding bomb rather than their normalized spin values, ±1.
- Bohmian Dispositions Mauricio Suárez: This paper argues for a broadly dispositionalist approach to the ontology of Bohmian mechanics. It first distinguishes the ‘minimal’ and the ‘causal’ versions
of Bohm’s Theory, and then briefly reviews some of the claims advanced on behalf of the ‘causal’ version by its proponents. A number of ontological or interpretive accounts of the wave function in Bohmian mechanics are then addressed in detail, including i) configuration space, ii) multi-field, iii) nomological, and iv) dispositional approaches. The main objection to each account is reviewed, namely i) the ‘problem of perception’, ii) the ‘problem of communication’, iii) the ‘problem of temporal laws’, and iv) the ‘problem of under-determination’. It is then shown that a version of dispositionalism overcomes the under-determination problem while providing neat solutions to the other three problems. A pragmatic argument is thus furnished for the use of dispositions in the interpretation of the theory more generally. The paper ends on a more speculative note by suggesting ways in which a dispositionalist interpretation of the wave function is in addition able to shed light upon some of the claims of the proponents of the causal version of Bohmian mechanics.
- Probability Without Certainty - Foundationalism and the Lewis-Reichenbach Debate David Atkinson and Jeanne Peijnenburg: Like many discussions on the pros and cons of epistemic foundationalism, the debate between C.I. Lewis and H. Reichenbach dealt with three concerns: the existence of basic beliefs, their nature, and the way in which beliefs are related. This paper concentrates on the third matter, especially on Lewis’s assertion that a probability relation must depend on something that is certain, and Reichenbach’s claim that certainty is never needed. It is noted that Lewis’s assertion is prima facie ambiguous, but argued that this ambiguity is only apparent if probability theory is viewed within a modal logic. Although there are empirical situations where Reichenbach is right, and others where Lewis’s reasoning seems to be more appropriate, it will become clear that Reichenbach’s stance is the generic one. This follows simply from the fact that, if P(E|G) > 0 and P(E|¬G) > 0, then P(E) > 0. It is finally concluded that this constitutes a threat to epistemic foundationalism.
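The probabilistic fact the entry leans on is just the law of total probability spelled out (assuming 0 < P(G) < 1 so that both conditional probabilities are defined):

```latex
% If both conditional probabilities are positive, the unconditional probability
% is a convex combination of them and hence positive.
P(E) = P(E \mid G)\,P(G) + P(E \mid \neg G)\,\bigl(1 - P(G)\bigr)
     \;\ge\; \min\bigl\{P(E \mid G),\, P(E \mid \neg G)\bigr\} \;>\; 0.
```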
- Categorical Equivalence between Generalized Holonomy Maps on a Connected Manifold and Principal Connections on Bundles over that Manifold Sarita Rosenstock and James Owen Weatherall: A classic result in the foundations of Yang-Mills theory, due to J. W. Barrett [“Holonomy and Path Structures in General Relativity and Yang-Mills Theory.” Int. J. Th. Phys. 30(9), (1991)], establishes that given a “generalized” holonomy map from the space of piece-wise smooth, closed curves based at some point of a manifold to a Lie group, there exists a principal bundle with that group as structure group and
a principal connection on that bundle such that the holonomy map corresponds to the holonomies of that connection. Barrett also provided one sense in which this “recovery theorem” yields a unique bundle, up to isomorphism. Here we show that something stronger is true: with an appropriate definition of isomorphism between generalized holonomy maps, there is an equivalence of categories between the category whose objects are generalized holonomy maps on a smooth, connected manifold and whose arrows are holonomy isomorphisms, and the category whose objects are principal connections on principal bundles over a smooth, connected manifold. This result clarifies, and somewhat improves upon, the sense of “unique recovery” in Barrett’s theorems; it also makes precise a sense in which there is no loss of structure involved in moving from a principal bundle formulation of Yang-Mills theory to a holonomy, or “loop”, formulation.
- Artificial intelligence and Making Machine Learning Easier: the Role of Probabilistic Programming Larry Hardesty: Most advances in artificial intelligence are the result of machine learning, in which computers are turned loose on huge data sets to look for patterns. To make machine-learning applications easier to build, computer scientists have begun developing so-called probabilistic programming languages, which let researchers mix and match machine-learning techniques that have worked well in other contexts. In 2013, the U.S. Defense Advanced Research Projects Agency, an incubator of cutting-edge technology, launched a four-year program to fund probabilistic-programming research. At the Computer Vision and Pattern Recognition conference in June, MIT researchers will demonstrate that on some standard computer-vision tasks, short programs — less than 50 lines long — written in a probabilistic programming language are competitive with conventional systems with thousands of lines of code. “This is the first time that we’re introducing probabilistic programming in the vision area,” says Tejas Kulkarni, an MIT graduate student in brain and cognitive sciences and first author on the new paper. “The whole hope is to write very flexible models, both generative and discriminative models, as short probabilistic code, and then not do anything else. General-purpose inference schemes solve the problems.” By the standards of conventional computer programs, those “models” can seem absurdly vague. One of the tasks that the researchers investigate, for instance, is constructing a 3-D model of a human face from 2-D images. Their program describes the principal features of the face as being two symmetrically distributed objects (eyes) with two more centrally positioned objects beneath them (the nose and mouth). It requires a little work to translate that description into the syntax of the probabilistic programming language, but at that point, the model is complete. Feed the program enough examples of 2-D images and their corresponding 3-D models, and it will figure out the rest for itself. “When you think about probabilistic programs, you think very intuitively when you’re modeling,” Kulkarni says. “You don’t think mathematically. It’s a very different style of modeling.”
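To make the "short model plus general-purpose inference" idea concrete without tying it to any particular probabilistic programming language, here is a hedged sketch in plain Python: a tiny generative model of noisy linear data plus a generic likelihood-weighting loop stand in for what a PPL would provide. It is not the MIT system described in the article; all names and numbers are illustrative.

```python
import math
import random

# A tiny "probabilistic program": a generative model of noisy linear data.
def model():
    slope = random.gauss(0.0, 2.0)        # prior over the latent slope
    intercept = random.gauss(0.0, 2.0)    # prior over the latent intercept
    return slope, intercept

def log_likelihood(params, data, noise=0.5):
    slope, intercept = params
    return sum(-0.5 * ((y - (slope * x + intercept)) / noise) ** 2 for x, y in data)

# Generic inference (likelihood weighting): run the model many times and
# weight each run by how well it explains the observed data.
def infer(data, n_samples=20000):
    samples = [model() for _ in range(n_samples)]
    weights = [math.exp(log_likelihood(s, data)) for s in samples]
    total = sum(weights)
    mean_slope = sum(w * s[0] for w, s in zip(weights, samples)) / total
    mean_intercept = sum(w * s[1] for w, s in zip(weights, samples)) / total
    return mean_slope, mean_intercept

data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]  # made-up observations
print(infer(data))   # posterior means of slope and intercept
```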

- Poetic Wandering - Walking Tour Highlights The Sights and Sounds of Literary Harvard University Georgia Bellas and Sarah Sweeney: April is National Poetry Month, though at Harvard every month could be. The University’s poetic legacy dates back hundreds of years and has helped shape the world’s literary canon. E.E. Cummings, John Ashbery, and Wallace Stevens are among the University’s well-known poetic alumni, while Maxine Kumin and Adrienne Rich attended Radcliffe. Harvard Gazette invites you to explore Harvard by foot and ear. This walking tour of campus can be completed in a lunch hour or less, and pairs classic Harvard landmarks with a sampling of the poets connected to the University. Using recordings housed at the Woodberry Poetry Room as well as new recordings, the tour also commemorates the April 13 birth of Seamus Heaney, a Nobel Prize winner and Harvard’s one-time Boylston Professor and poet-in-residence. Heaney died on Aug. 30, 2013, but his mark on Harvard is indelible.
- On The Representation of Women in Cognition Roberta Klatzky, Lori Holt, & Marlene Behrmann: Upon reading the recent Cognition special issue, titled “The Changing Face of Cognition” (February 2015), the authors of this discussion felt a collective sense of dismay. Perusing the table of contents, they were struck by the fact that among the 19 authors listed for the 12 articles, only one female author was present. While the substantive content of the issue may persuade them that the face of cognition is changing, it appears that changes in gender distribution are not to be expected. The face of cognitive science will remain unequivocally male. According to recent statistics (NSF, 2013), more than 50% of doctorates awarded in cognitive psychology and psycholinguistics were to women, and the same holds for neuropsychology and experimental psychology. A clear implication is that women scientists should play a significant role in the future of cognitive science and cognitive neuroscience. The authors ask, then, why the journal would present an image of this science’s future as envisioned largely by male scientists.
- Rational theory choice: Arrow undermined, Kuhn vindicated Seamus Bradley: In a recent paper, Samir Okasha presented an argument that suggests that there is no rational way to choose among scientific theories. This would seriously undermine the view that science is a rational enterprise. In this paper the author shows how a suitably nuanced view of what scientific rationality requires allows us to avoid Okasha’s conclusion. The author then goes on to argue that making further assumptions about the space of possible scientific theories allows us to make scientific rationality more contentful. Finally, the author shows how such a view of scientific rationality fits with what Thomas Kuhn thought.
- Quantum Criticality in Life's Proteins Gabor Vattay: Stuart Kauffman, from the University of Calgary, and several of his colleagues have recently published a paper on the arXiv server titled 'Quantum Criticality at the Origins of Life'. The idea of quantum criticality, and more generally of quantum critical states, comes, perhaps not surprisingly, from solid-state physics. It describes unusual electronic states that are balanced somewhere between conduction and insulation. More specifically, under certain conditions, current flow at the critical point becomes unpredictable. When it does flow, it tends to do so in avalanches that vary by several orders of magnitude in size. Ferromagnetic metals, like iron, are one familiar example of a material that has a classical critical point. Above a critical temperature of 1043 K the magnetization of iron is completely lost. In the narrow range approaching this point, however, thermal fluctuations in the electron spins that underlie the magnetic behavior extend over all length scales of the sample—the scale invariance characteristic of criticality. In this case we have a continuous phase transition that is thermally driven, as opposed to being driven by something else such as external pressure, a magnetic field, or some kind of chemical influence. Quantum criticality, on the other hand, is usually associated with stranger electronic behaviors—things like high-temperature superconductivity or so-called heavy fermion metals like CeRhIn5. One strange behavior in the case of heavy fermions, for example, is the observation of large 'effective mass'—mass up to 1000 times normal—for the conduction electrons as a consequence of their narrow electronic bands. These kinds of phenomena can only be explained in terms of the collective behavior of highly correlated electrons, as opposed to more familiar theory based on decoupled electrons.
- Moral Loopholes in the Global Economic Environment: Why Well-Intentioned Organizations Act in Harmful Ways S. L. Reiter: Thomas Pogge’s notion of moral loopholes serves to provide support for two claims: first, that the ethical code of the global economic order contains moral loopholes that allow participants in special social arrangements to reduce their obligations to those outside the social arrangement, which leads to morally objectionable actions for which no party feels responsible and that are also counterproductive to the overall objective of the economic system; and, second, that these moral loopholes are more likely to exist as our economic order becomes more global. It will be shown that attempts to rectify the situation with voluntary corporate codes of conduct are inadequate.
- Description And The Problem Of Priors Jeffrey A. Barrett: Belief-revision models of knowledge describe how to update one’s degrees of belief associated with hypotheses as one considers new evidence, but they typically do not say how probabilities become associated with meaningful hypotheses in the first place. Here the author considers a variety of Skyrms-Lewis signaling game [Lewis (1969); Skyrms (2010)] in which simple descriptive language, predictive practice, and associated basic expectations co-evolve. Rather than assigning prior probabilities to hypotheses in a fixed language and then conditioning on new evidence, the agents begin with no meaningful language or expectations, and they evolve expectations conditional on their descriptions as those descriptions become meaningful for the purpose of successful prediction. The model, then, provides a simple but concrete example of how the process of evolving a descriptive language suitable for inquiry might also provide agents with effective priors. (A toy signaling-game simulation follows below.)
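A minimal simulation of the kind of Skyrms-Lewis signaling dynamics the entry refers to, using simple Roth-Erev ("add one on success") reinforcement with two states, two signals, and two acts; the set-up is illustrative, and the co-evolving descriptive and predictive layers of Barrett's model are not included.

```python
import random

# Roth-Erev reinforcement in a 2x2x2 Lewis signaling game: a sender observes a
# state and picks a signal, a receiver sees the signal and picks an act; both
# are reinforced when the act matches the state.
states, signals = [0, 1], [0, 1]
sender = {s: [1.0, 1.0] for s in states}     # weights over signals, per state
receiver = {m: [1.0, 1.0] for m in signals}  # weights over acts, per signal

def choose(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

successes = 0
rounds = 50000
for _ in range(rounds):
    state = random.choice(states)
    signal = choose(sender[state])
    act = choose(receiver[signal])
    if act == state:                          # successful coordination
        sender[state][signal] += 1.0          # reinforce the signal that was used
        receiver[signal][act] += 1.0          # reinforce the act that was used
        successes += 1

print("success rate:", successes / rounds)
print("sender weights:", sender)
```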
- Consensus based Detection in the Presence of Data Falsification Attacks Bhavya Kailkhura: This paper considers the problem of detection in distributed networks in the presence of data falsification (Byzantine) attacks. Detection approaches considered in the paper are based on fully distributed consensus algorithms, where all of the nodes exchange information only with their neighbors in the absence of a fusion center. In such networks, the author characterizes the negative effect of Byzantines on the steady-state and transient detection performance of conventional consensus-based detection algorithms. To address this issue, the author studies the problem from the network designer’s perspective. More specifically, he first proposes a distributed weighted average consensus algorithm that is robust to Byzantine attacks. It is shown that, under reasonable assumptions, the global test statistic for detection can be computed locally at each node using the proposed consensus algorithm. Then, the author exploits the statistical distribution of the nodes’ data to devise techniques for mitigating the influence of data-falsifying Byzantines on the distributed detection system. Since some parameters of the statistical distribution of the nodes’ data might not be known a priori, a learning-based technique is proposed to enable an adaptive design of the local fusion or update rules. (A minimal consensus-update sketch follows below.)
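As flagged in the entry, here is a minimal sketch of the plain average-consensus update that such detection schemes build on; the ring topology, step size, and the crude Byzantine node are illustrative assumptions, and the paper's robust weighted scheme is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Six nodes on a ring, each holding a noisy local test statistic.
n = 6
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
x = rng.normal(loc=1.0, scale=0.3, size=n)   # honest local statistics
x[0] = 10.0                                  # one node injects a falsified value

eps = 0.3   # step size, kept below 1/(max degree) so the iteration converges
for _ in range(200):
    x = np.array([x[i] + eps * sum(x[j] - x[i] for j in neighbours[i])
                  for i in range(n)])

print("consensus values:", np.round(x, 3))
# Every node converges to the average of the initial values, so a single
# falsified input drags the shared statistic with it; the robust weighted
# variant discussed in the paper is designed to limit exactly this effect.
```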
- Confirmation in the Cognitive Sciences: the Problematic Case of Bayesian Models Frederick Eberhardt and David Danks: Bayesian models of human learning are becoming increasingly popular in cognitive science. It is argued that their purported confirmation largely relies on a methodology that depends on premises that are inconsistent with the claim that people are Bayesian about learning and inference. Bayesian models in cognitive science derive their appeal from their normative claim that the modeled inference is in some sense rational. Standard accounts of the rationality of Bayesian inference imply predictions that an agent selects the option that maximizes the posterior expected utility. Experimental confirmation of the models, however, has been claimed on the basis of groups of agents that “probability match” the posterior. Probability matching only constitutes support for the Bayesian claim if additional unobvious and untested (but testable) assumptions are invoked. The alternative strategy of weakening the underlying notion of rationality no longer distinguishes the Bayesian model uniquely. A new account of rationality — either for inference or for decision-making — is required to successfully confirm Bayesian models in cognitive science. (A small sketch contrasting the two prediction rules follows below.)
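As flagged in the entry, a small sketch making the contrast concrete: always choosing the most probable option under the posterior versus "probability matching" it. The 70/30 posterior and the reward process are illustrative assumptions, not taken from the paper.

```python
import random

posterior = {"A": 0.7, "B": 0.3}   # assumed posterior over two options
# Assume the world actually produces outcome A on 70% of trials and B on 30%.

def maximize(post):
    return max(post, key=post.get)                       # always pick the modal option

def probability_match(post):
    return random.choices(list(post), weights=list(post.values()))[0]

def accuracy(rule, trials=100000):
    hits = 0
    for _ in range(trials):
        outcome = random.choices(["A", "B"], weights=[0.7, 0.3])[0]
        hits += rule(posterior) == outcome
    return hits / trials

print("maximizing:        ", accuracy(maximize))           # about 0.70
print("probability match: ", accuracy(probability_match))  # about 0.58 = 0.7^2 + 0.3^2
```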
- Reason, Value, and Respect: Kantian Themes from the Philosophy of Thomas E. Hill, Jr. Edited by Mark Timmons and Robert N. Johnson: In thirteen specially written essays, leading philosophers explore Kantian themes in moral and political philosophy that are prominent in the work of Thomas E. Hill, Jr. The first three essays focus on respect and self-respect; the second three on practical reason and public reason. The third section covers a set of topics in social and political philosophy, including Kantian perspectives on homicide and animals. The final set of essays discusses duty, volition, and complicity in ethics. In conclusion, Hill offers an overview of his work and responses to the preceding essays.
- Moral investing: Psychological motivations and implications Enrico Rubaltelli, Lorella Lotto, Ilana Ritov, Rino Rumiati: In four experiments the authors showed that investors are not only interested in maximizing returns but have non-financial goals, too. They considered what drives the decision to invest ethically and the impact this strategy has on people’s evaluation of investment performance. In Study 1, participants who chose a moral portfolio (over an immoral one) reported being less interested in maximizing their gains and more interested in being true to their moral values. These participants also reported feeling lower disappointment upon learning that a different decision could have yielded a better outcome. In Studies 2 and 3, the authors replicated these findings when investors decided not to invest in immoral assets, rather than when they chose to invest morally. In Study 4, the authors found similar results using the same industrial sector in both the moral and the immoral conditions and providing participants with information about the expected return of the portfolio they were presented with. These findings lend empirical support to the conclusion that investors have both utilitarian (financial) goals and expressive (non-financial) ones, and show how non-financial motivations can influence the reaction to unsatisfactory investment performance.
- Rethinking Responsibility in Science and Technology Fiorella Battaglia, Nikil Mukerji, and Julian Nida-Rümelin: The idea of responsibility is deeply embedded into the “lifeworld” of human beings and not subject to change. However, the empirical circumstances in which we act and ascribe responsibility to one another are subject to change. Science and technology play a great part in this transformation process. Therefore, it is important for us to rethink the idea, the role and the normative standards behind responsibility in a world that is constantly being transformed under the influence of scientific and technological progress. This volume is a contribution to that joint societal effort.
- The 'Consistent Histories' Formalism and the Measurement Problem Elias Okon and Daniel Sudarsky: The authors defend the claim that the Consistent Histories formulation of quantum mechanics does not solve the measurement problem. In order to do so, they argue that satisfactory solutions to the problem must not only not contain anthropomorphic terms (such as measurement
or observer) at the fundamental level, but also that applications of the formalism to concrete situations (e.g., measurements) should not require any input not contained in the description of the situation at hand at the fundamental level. The authors' assertion is that the Consistent Histories formalism does not meet the second criterion. It is also argued that the so-called second measurement problem, i.e., the inability to explain how an experimental result is related to a property possessed by the measured system before the measurement took place, is only a pseudo-problem. As a result, the authors reject
the claim that the capacity of the Consistent Histories formalism to solve it should count as an advantage over other interpretations.
- 10 Things you might not know about Black Holes: Whether you’re a gravitational guru or an armchair astronomer, you’ll gravitate toward these lesser-known facts about black holes. Believed to be the churning engines of most galaxies, black holes push the known laws of physics to their limits, and inspire some great (and some not-so-great) sci-fi adventures. To borrow a line from Perimeter Associate Faculty member Avery Broderick: “Black holes don’t 'suck' – they’re awesome!” Here are some oft-overlooked nuggets of black hole awesomeness. Feel free to use them at parties to add a little gravitational 'gravitas' to any conversation.
- A New Sociobiology: Immunity, Alterity, and the Social Repertoire Napier, A. David: The relation between biological processes and social practices has given rise to a sociobiology heavily defined through experimental, cause-and-effect theorizing, applying biology to society, culture, and individual action. Human behaviour is largely understood as the outcome of biological processes, with individual autonomy and survival, and social order and stability, prioritized. Building on an argument first made about selfhood in 1986, and about immunology from 1992 onwards, this paper argues that advances in science reframe our understanding of the boundaries between self and other ('non-self'), and thereby also our awareness of the importance of risk and danger, and the social contexts that encourage or discourage social risks. Because the assimilation of difference is not only crucial to survival, but critical for creation, the argument here for 'a new sociobiology' is for a less biologically determined sociobiology. Difference can destroy, but it is necessary for adaptation and creation. A new sociobiology, therefore, must prioritize organic relatedness over organic autonomy, attraction to 'other' over concern with 'self', if the field is to advance our understanding of creation, survival, and growth.
- Fisher information and quantum mechanical models for finance V. A. Nastasiuk: The probability distribution function (PDF) for prices on financial markets is derived by extremization of Fisher information. It is shown how a quantum-like description of financial markets arises on that basis, and how different financial market models are mapped onto quantum mechanical ones.
- The Social Impact of Economic Growth Editors Susanna Price and Kathryn Robinson explore the social aspects of Chinese economic growth in their soon-to-be-published book, Making a Difference? Social Assessment Policy and Praxis and its Emergence in China. In what follows, Susanna Price offers further insight into the book’s origins and the impact the book may have on the field of Asian development studies.
- Pricing postselection: the cost of indecision in decision theory Joshua Combes and Christopher Ferrie: Postselection is the process of discarding outcomes from statistical trials that are not the event one
desires. Postselection can be useful in many applications where the cost of getting the wrong event is implicitly high. However, unless this cost is specified exactly, one might conclude that discarding
all data is optimal. Here the authors analyze the optimal decision rules and quantum measurements in a decision theoretic setting where a pre-specified cost is assigned to discarding data. They also relate
their formulation to previous approaches which focus on minimizing the probability of indecision.
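A minimal decision-theoretic sketch of the idea, with an illustrative 0/1 loss and abstention cost rather than the paper's quantum measurements: once discarding a trial carries an explicit cost, the optimal rule abstains only when the posterior is sufficiently uncertain.

```python
# Reject-option rule: with 0/1 loss for a wrong guess and cost c for abstaining,
# one abstains exactly when c is smaller than the probability of guessing wrong.
import numpy as np

def optimal_action(posterior, cost_abstain):
    expected_loss_guess = 1.0 - posterior.max()   # probability of a wrong guess
    if cost_abstain < expected_loss_guess:
        return "abstain"
    return f"guess hypothesis {posterior.argmax()}"

for p in (0.55, 0.75, 0.95):
    posterior = np.array([p, 1.0 - p])
    for c in (0.1, 0.3, 1.0):
        print(f"max posterior={p:.2f}, abstention cost={c:.1f} -> "
              f"{optimal_action(posterior, c)}")
# With cost 0 one abstains whenever there is any chance of error (discarding
# all data); with a very large cost one never does -- hence the need to price
# postselection explicitly.
```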
- Should Our Brains Count as Courtroom Evidence? Kamala Kelkar: Judges in the future could tap straight into criminal brains and nip second offenders before they’ve had a chance to do it again, says the Obama administration. Incriminating biomarkers could eventually be used by courts to predict recidivism and influence decisions about parole, bail and sentencing, finds the second volume of a report called Gray Matters released late last month by the president’s commission on bioethics. The judicial system already uses questionable methods to adjust sentences based on the defendant’s criminal, psychological and social background, so there’s an allure to using brain scans for possibly more efficient and objective risk assessment profiles. But the prospect of neuroprediction in the courtroom leads to a slew of ethical and moral questions. Should we assign longer sentences to a criminal functioning with what scientists say is a brain ripe for a second offense? Why not just let brain scans identify the most dangerous people and send them straight to jail before they’ve committed a crime? “There’s a lot of motivation to literally get inside the heads of criminals,” said Lisa Lee, the executive director of the commission. “What the commission was really concerned about was the careful and accurate use of neuroscience in the courtroom—given what’s on the line.”
- Mind, Reason, and Being-in-the-World: The McDowell-Dreyfus Debate Joseph K. Schear: John McDowell and Hubert L. Dreyfus are philosophers of world renown, whose work has decisively shaped the fields of analytic philosophy and phenomenology respectively. Mind, Reason, and Being-in-the-World: The McDowell-Dreyfus Debate opens with their debate over one of the most important and controversial subjects of philosophy: is human experience pervaded by conceptual rationality, or does experience mark the limits of reason? Is all intelligibility rational, or is there a form of intelligibility at work in our skilful bodily rapport with the world that eludes our intellectual capacities? McDowell and Dreyfus provide a fascinating insight into some fundamental differences between analytic philosophy and phenomenology, as well as areas where they may have something in common. Fifteen specially commissioned chapters by distinguished international contributors enrich the debate inaugurated by McDowell and Dreyfus, taking it in a number of different and important directions. Fundamental philosophical problems discussed include: the embodied mind, subjectivity and self-consciousness, intentionality, rationality, practical skills, human agency, and the history of philosophy from Kant to Hegel to Heidegger to Merleau-Ponty. With the addition of these outstanding contributions, Mind, Reason, and Being-in-the-World is essential reading for students and scholars of analytic philosophy and phenomenology.
- Objective probability-like things with and without objective indeterminism László E. Szabó: It is argued that there is no such property of an event as its “probability.” This is why standard interpretations cannot give a sound definition in empirical terms of what “probability” is, and this is why empirical sciences like physics can manage without such a definition. “Probability” is a collective term, the meaning of which varies from context to context: it means different, dimensionless, [0, 1]-valued physical quantities characterising the different particular situations. In other words, probability is a reducible concept, supervening on physical quantities characterising the state of affairs corresponding to the event in question. On the other hand, however, these “probability-like” physical
quantities correspond to objective features of the physical world, and are objectively related to measurable quantities like relative frequencies of physical events based on finite samples — no matter whether
the world is objectively deterministic or indeterministic.
- Kant's Deductions of Morality and Freedom Owen Ware: It is commonly held that Kant ventured to derive morality from freedom in Groundwork III. It is also believed that he reversed this strategy in the second Critique, attempting to derive freedom from morality instead. In this paper the author sets out to challenge these familiar assumptions: Kant’s argument in Groundwork III rests on a normative conception of the intelligible world, one that plays the same role as the “fact of reason” in the second Critique. Accordingly, it is argued, there is no reversal in the proof-structures of Kant’s two works.
- Science shows there is more to a Rembrandt than meets the eye: Art historians and scientists use imaging methods to virtually "dig" under or scan various layers of paint and pencil. This is how they decipher how a painter went about producing a masterpiece – without harming the original. A comparative study with a Rembrandt van Rijn painting as its subject found that the combined use of three imaging techniques provides valuable complementary information about what lies behind this artwork's complex step-by-step creation. The study, led by Matthias Alfeld of the University of Antwerp in Belgium, is published in Springer's journal Applied Physics A: Materials Science and Processing. Rembrandt's oil painting Susanna and the Elders is dated and signed 1647. It hangs in the art museum Gemäldegalerie in Berlin, Germany. The painting contains a considerable amount of the artist's changes or so-called pentimenti (from the Italian verb pentire: "to repent") underneath the current composition. This was revealed in the 1930s when the first X-ray radiography (XRR) was done on it. More hidden details about changes made with pigments other than lead white were discovered when the painting was investigated in 1994 using neutron activation autoradiography (NAAR). Alfeld's team chose to investigate Susanna and the Elders not only because of its clearly visible pentimenti, but also because of its smaller size. Macro-X-ray fluorescence (MA-XRF) scans could thus be done in a single day using an in-house scanner at the museum in Berlin. These were then compared to existing radiographic images of the painting.
- Symmetry and the Metaphysics of Physics David John Baker: The widely held picture of dynamical symmetry as surplus structure in a physical theory has many metaphysical applications. Here the author focuses on its relevance to the question of which quantities in a theory represent fundamental natural properties.
- Measuring the Value of Science Rod Lamberts: Reports about the worthy contributions of science to national economies pop up regularly all around the world – from the UK to the US and even the developing world. In Australia, the Office of the Chief Scientist recently released an analysis of science and its contribution to the economy down under, finding it's worth around A$145 billion a year. It's perfectly sensible and understandable that science (and related sectors) would feel the need to account for themselves in financial or economic terms. But in doing this we need to be wary of getting lulled into believing that this is the only – or worse, the best – way of attributing value to science. When it comes to determining the value of science, we should heed the words of the American environmental scientist and thinker, Donella Meadows, on how we think about indicators: Indicators arise from values (we measure what we care about), and they create values (we care about what we measure). Indicators are often poorly chosen. The choice of indicators is a critical determinant of the behaviour of a system. Much public debate about the value of science has been hijacked by the assumption that direct, tangible economic impact is the way to measure scientific worth. We seem now to be in a place where positing non-economic arguments for science benefits runs the risk of being branded quaintly naïve and out-of-touch at best, or worse: insensitive, irrelevant and self-serving. But relegating science to the status of mere servant of the economy does science a dramatic disservice and leaves both science and society the poorer for it. So here are five ways we can acknowledge and appreciate the societal influences and impacts of science that lie well beyond the dreary, soulless, cost-benefit equations of economics.
- Evolution and Normative Scepticism Karl Schafer: It is increasingly common to suggest that the combination of evolutionary theory and normative realism leads inevitably to a general scepticism about our ability to reliably form normative beliefs. In what follows, Karl argues that this is not the case. In particular, he considers several possible arguments from evolutionary theory and normative realism to normative scepticism and explains where they go wrong. He then gives a more general diagnosis of the tendency to accept such arguments and why this tendency should be resisted.
- A Generative Probabilistic Model For Deep Convolutional Learning Yunchen Pu, Xin Yuan, and Lawrence Carin: A generative model is developed for deep (multi-layered) convolutional dictionary learning. A novel probabilistic pooling operation is integrated into the deep model, yielding efficient bottom-up (pretraining) and top-down (refinement) probabilistic learning. Experimental results demonstrate powerful capabilities of the model to learn multi-layer features from images, and excellent classification results are obtained on the MNIST and Caltech 101 datasets.
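The sketch below shows one simple form of probabilistic pooling, selecting an activation within each pooling region with probability proportional to its rectified magnitude; it is meant only to convey the flavour of such an operation, not the authors' exact operator or its role in bottom-up pretraining and top-down refinement.

```python
# Probabilistic (stochastic) pooling over 2x2 regions of a feature map.
import numpy as np

rng = np.random.default_rng(0)

def probabilistic_pool(feature_map, size=2):
    h, w = feature_map.shape
    out = np.zeros((h // size, w // size))
    for i in range(0, h, size):
        for j in range(0, w, size):
            block = feature_map[i:i + size, j:j + size].ravel()
            block = np.maximum(block, 0.0)        # ReLU-style nonnegativity
            if block.sum() == 0:
                continue                          # all-zero region pools to 0
            probs = block / block.sum()
            out[i // size, j // size] = rng.choice(block, p=probs)
    return out

feature_map = rng.normal(size=(4, 4))
print("feature map:\n", np.round(feature_map, 2))
print("pooled (stochastic):\n", np.round(probabilistic_pool(feature_map), 2))
```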
- Faster Algorithms for Testing under Conditional Sampling Moein Falahatgar, Ashkan Jafarpour, Alon Orlitsky: There has been considerable recent interest in distribution tests whose run-time and sample requirements are sublinear in the domain size k. The authors study two of the most important tests under the conditional-sampling model, where each query specifies a subset S of the domain, and the response is a sample drawn from S according to the underlying distribution. For identity testing, they ask whether the underlying distribution equals a specific given distribution or ε-differs from it, and reduce the known time and sample complexities from Õ(ε^-4) to Õ(ε^-2), thereby matching the information-theoretic lower bound. For closeness testing, which asks whether two distributions underlying observed data sets are equal or different, the authors reduce the existing complexity from Õ(ε^-4 log^5 k) to an even sub-logarithmic Õ(ε^-5 log log k), thus providing a better bound for an open problem from the Bertinoro Workshop on Sublinear Algorithms.
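For concreteness, here is a tiny sketch of the conditional-sampling oracle this model assumes; the domain size and distribution below are arbitrary choices for the illustration.

```python
# Conditional-sampling oracle: a query names a subset S of the domain and gets
# back a sample drawn from the unknown distribution restricted and renormalised to S.
import numpy as np

rng = np.random.default_rng(0)
k = 10
p = rng.dirichlet(np.ones(k))          # the unknown underlying distribution

def conditional_sample(subset):
    weights = p[list(subset)]
    weights = weights / weights.sum()
    return subset[rng.choice(len(subset), p=weights)]

print(conditional_sample([2, 7]))      # a pairwise query, the testers' basic primitive
```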
- Objectivity and Conditionality in Frequentist Inference David Cox and Deborah G. Mayo: Statistical methods are used to some extent in virtually all areas of science, technology, public affairs, and private enterprise. The variety of applications makes any single unifying discussion difficult if not impossible. The authors concentrate on the role of statistics in research in the natural and social sciences and the associated technologies. Their aim is to give a relatively nontechnical discussion of some of the conceptual issues involved and to bring out some connections with general epistemological problems of statistical inference in science. In the first part of this chapter (7(I)), they considered how frequentist statistics may serve as an account of inductive inference, but because this depends on being able to apply its methods to appropriately circumscribed contexts, they need to address some of the problems in obtaining methods with the properties they wish them to have. Given the variety of judgments and background information this requires, it may be questioned whether any account of inductive learning can succeed in being “objective.” However, statistical methods do, the authors think, promote the aim of achieving enhanced understanding of the real world, in some broad sense, and in this some notion of objectivity is crucial. They begin by briefly discussing this concept as it arises in statistical inference in science.
- Generalizations Related To Hypothesis Testing With The Posterior Distribution Of The Likelihood Ratio I. Smith, A. Ferrari: The Posterior distribution of the Likelihood Ratio (PLR) was proposed by Dempster in 1974 for significance testing in the simple vs composite hypotheses case. In this case, classical frequentist and Bayesian hypothesis tests are irreconcilable, as emphasized by Lindley’s paradox, by Berger & Sellke in 1987, and by many others. However, Dempster shows that the PLR (with inner threshold 1) is equal to the frequentist p-value in the simple Gaussian case. In 1997, Aitkin extended this result by adding a nuisance parameter and showing its asymptotic validity under more general distributions. Here the result is extended to a reconciliation between the PLR and a frequentist p-value for a finite sample, through a framework analogous to that of Stein’s theorem, in which a credible (Bayesian) domain is equal to a confidence (frequentist) domain. This general reconciliation result only concerns simple vs composite hypotheses testing. The measures proposed by Aitkin in 2010 and Evans in 1997 have interesting properties and extend Dempster’s PLR, but only by adding a nuisance parameter. Here, two extensions of the PLR concept to the general composite vs composite hypotheses test are proposed. The first extension can be defined for improper priors, provided the posterior is proper. The second extension arises from a new Bayesian-type Neyman-Pearson lemma and emphasizes, from a Bayesian perspective, the role of the LR as a discrepancy variable for hypothesis testing.
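The Gaussian special case mentioned above is easy to check numerically. Assuming a flat prior and unit variance (choices made for this illustration only), the posterior probability that the likelihood ratio exceeds 1 reproduces the two-sided p-value:

```python
# Monte Carlo check of Dempster's PLR result in the simple Gaussian case.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta0, x = 0.0, 1.7                                        # null value and observed datum
theta_post = rng.normal(loc=x, scale=1.0, size=200_000)     # posterior draws (flat prior)

# likelihood ratio f(x | theta0) / f(x | theta) as a function of the posterior draw
log_lr = stats.norm.logpdf(x, loc=theta0) - stats.norm.logpdf(x, loc=theta_post)
plr_tail = np.mean(log_lr >= 0.0)                           # P_post(LR >= 1)

p_value = 2 * (1 - stats.norm.cdf(abs(x - theta0)))         # two-sided p-value
print(f"PLR tail probability : {plr_tail:.4f}")
print(f"frequentist p-value  : {p_value:.4f}")
```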
- The Embodied “We”: The Extended Mind as Cognitive Sociology Teed Rockwell: Cognitive Science began with the assumption, sometimes called Cartesian Materialism, that the brain is an autonomous machine that can be studied as a closed system. The challenges of solving the puzzles presupposed by that assumption led to a recognition that mind is both embodied and embedded, i.e., it cannot be separated from either the rest of the organism or from the organism's symbiotic relationship with its environment. The unavoidable (but often ignored) implication of this conclusion is that if our environment includes other minds, our minds must also be embodied by other minds. This means that we are irreducibly social, for the same reasons that we are irreducibly embodied and embedded in an environment. This paper explores and questions the assumptions of Game Theory, the branch of computer science that assumes that society can only be understood as the interaction of isolated rational autonomous agents. If the Game Theory of the future were to follow the lead of cutting-edge cognitive science, it would replace computational models with dynamical ones. Just as Extended Cognition theories recognize that the line between mind and world is a flexible one, dynamic social theories would recognize that the line between mind and mind is equally flexible, and that we must be understood not as autonomous individuals with selfish interests, but rather as fluctuating tribes or families dynamically bonded, and motivated not only by selfishness, but by trust, loyalty and love.
- Neyman: Distinguishing tests of statistical hypotheses and tests of significance might have been a lapse of someone’s pen Deborah G. Mayo: Contrary to ideas suggested by the title of the conference at which the present paper was presented, the author is not aware of a conceptual difference between a “test of a statistical hypothesis” and a “test of significance” and uses these terms interchangeably. A study of any serious substantive problem involves a sequence of incidents at which one is forced to pause and consider what to do next. In an effort to reduce the frequency of misdirected activities one uses statistical tests. The procedure is illustrated on two examples: (i) Le Cam’s (and associates’) study of immunotherapy of cancer and (ii) a socio-economic experiment relating to low-income homeownership problems.
- Testing Composite Null Hypothesis Based on S-Divergences Abhik Ghosh, Ayanendranath Basu: The authors present a robust test for a composite null hypothesis based on the general S-divergence family. This requires a non-trivial extension of the results of Ghosh et al. (2015). They then derive the asymptotic and theoretical robustness properties of the resulting test along with the properties of the minimum
S-divergence estimators under parameter restrictions imposed by the null hypothesis. An illustration in the context of the normal model is also presented.
- Vagueness, Presupposition and Truth-Value Judgments Jeremy Zehr (Quote from the author): "The day I taught my first course of semantics, I presented a definition of meaning along the lines of Heim & Kratzer’s (1998), which I was presented with as an undergraduate student in linguistics: to know what a sentence means is to know in what situations it is true. And very soon I showed that, as I also had come to realize five years before, this definition was unable to capture our intuitions about presuppositional sentences: these are sentences we perfectly understand, but that we are sometimes as
reluctant to judge true as to judge false, even while possessing all potentially relevant information. But by the time I became an instructor, I had become well acquainted
with another phenomenon that similarly threatens this truth-conditional definition of meaning: the phenomenon of vagueness. So I added the class of vague sentences to the discussion.
That both vague and presuppositional sentences threaten this fundamental definition shows the importance of their study for the domain of semantics. Under the supervision of Orin Percus, I therefore decided to approach the two phenomena jointly in my M.A. dissertation. By applying the tools developed for analyzing presupposition in truth-conditional semantics to the study of vagueness, I showed that it was possible to give a novel, sensible account of the sorites paradox that has been puzzling philosophers since Eubulides first stated it more than 2000 years ago. This result illustrates how the joint study of two phenomena that were previously approached separately can bring new insights to long-discussed problems. This thesis aims at pursuing the joint investigation of the two phenomena, by
focusing on the specific truth-value judgments that they trigger. In particular, the theoretical literature of the last century rehabilitated the study of non-bivalent logical systems that were already prefigured during Antiquity and that have non-trivial consequences for truth-conditional semantics. In parallel, an experimental literature has been constantly growing since the beginning of the new century, collecting truth-value judgments of subjects on a variety of topics. The work presented here features both aspects: it investigates theoretical systems that jointly address issues raised by
vagueness and presupposition, and it presents experimental methods that test the predictions of the systems in regard to truth-value judgments. The next two sections of this chapter are devoted to the presentation of my objects of study, namely vagueness and presupposition; and the last section of this chapter sets out the motivations that underlie my project of jointly approaching the two phenomena from a truth-functional perspective. Because the notions of truth-value judgments are at the core of the dissertation, I have to make clear what I mean by bivalent and non-bivalent truth-value judgments. When I say that a sentence triggers bivalent truth-value judgments, I mean that in any situation, a sufficiently informed and competent speaker would confidently judge the sentence either “True” or “False”.
When I say that a sentence triggers non-bivalent truth-value judgments, I mean that there are situations where a competent speaker, even perfectly informed, would prefer to judge the sentence with a label different from “True” and “False”. In this chapter, I will remain agnostic as to what labels are actually preferred for each phenomenon, but the next chapters are mostly devoted to this question."
- A Hierarchy of Bounds on Accessible Information and Informational Power Michele Dall’Arno: Quantum theory imposes fundamental limitations on the amount of information that can be carried by any quantum system. On the one hand, the Holevo bound rules out the possibility of encoding more information in a quantum system than in its classical counterpart, comprised of perfectly distinguishable states. On the other hand, when states are uniformly distributed in the state space, the so-called subentropy lower bound is saturated. How uniform quantum systems are can be naturally quantified by characterizing them as t-designs, with t = ∞ corresponding to the uniform distribution. Here the existence of a trade-off between the uniformity of a quantum system and the amount of information it can carry is shown. To this aim, the author derives a hierarchy of informational bounds as a function of t and proves their tightness for qubits and qutrits. By deriving asymptotic formulae for large dimensions, the author also shows that the statistics generated by any t-design with t > 1 contains no more than a single bit of information, and this amount decreases with t. The Holevo and subentropy bounds are recovered as particular cases for t = 1 and t = ∞, respectively.
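A small numerical sketch of the Holevo quantity the hierarchy builds on, computed for an arbitrary qubit ensemble of my own choosing (not one from the paper):

```python
# Holevo bound chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i) for a qubit ensemble.
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits: -sum_k lambda_k log2 lambda_k over nonzero eigenvalues."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]
    return float(-np.sum(eigs * np.log2(eigs)))

def ket(theta):
    """Pure qubit state cos(theta)|0> + sin(theta)|1> as a density matrix."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

probs = [0.5, 0.5]
states = [ket(0.0), ket(np.pi / 8)]          # two non-orthogonal signal states

rho_avg = sum(p * r for p, r in zip(probs, states))
chi = von_neumann_entropy(rho_avg) - sum(p * von_neumann_entropy(r)
                                         for p, r in zip(probs, states))
print(f"Holevo bound chi = {chi:.4f} bits (at most 1 bit for any qubit ensemble)")
```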
- Kantian Space, Supersubstantivalism, and the Spirit of Spinoza James Messina: In the first edition of Concerning the Doctrine of Spinoza in Letters to Mendelssohn, Jacobi claims that Kant’s account of space is “wholly in the spirit of Spinoza”. In the first part of the paper, the author argues that Jacobi is correct: Spinoza and Kant have surprisingly similar views regarding the unity of space and the metaphysics of spatial properties and laws. Perhaps even more surprisingly, they both are committed to a form of parallelism. In the second part of the paper, James draws on the results of the first part to explain Kant’s oft-repeated claim that if space were transcendentally real, Spinozism would follow, along with Kant’s reasons for thinking transcendental idealism avoids this nefarious result. In the final part of the paper, James sketches a Spinozistic interpretation of Kant’s account of the relation between the empirical world of bodies and (what one might call) the transcendental world consisting of the transcendental subject’s representations of the empirical world and its parts.
- Bayesianism, Infinite Decisions, and Binding Frank Arntzenius, Adam Elga, John Hawthorne: When decision situations involve infinities, vexing puzzles arise. The authors describe six such
puzzles below. (None of the puzzles has a universally accepted solution, and they are aware of no suggested solutions that apply to all of the puzzles.) The authors will use the puzzles to motivate two
theses concerning infinite decisions. In addition to providing a unified resolution of the puzzles, the theses have important consequences for decision theory wherever infinities arise. By showing that Dutch book arguments have no force in infinite cases, the theses provide evidence that reasonable utility functions may be unbounded, and that reasonable credence functions need not be either countably additive or conglomerable (a term to be explained in section 3). The theses show that when infinitely many decisions are involved, the difference between making the decisions simultaneously and making them sequentially can be the difference between riches and ruin. And the authors reveal a new way in which the ability to make binding commitments can save perfectly rational agents from sure losses.
- The Solvability of Probabilistic Regresses. A Reply to Frederik Herzberg David Atkinson and Jeanne Peijnenburg: The authors have earlier shown by construction that a proposition can have a well-defined
nonzero probability, even if it is justified by an infinite probabilistic regress. The authors thought this to be an adequate rebuttal of foundationalist claims that probabilistic regresses must lead either to an indeterminate, or to a determinate but zero probability. In a comment, Frederik Herzberg has argued that their counterexamples are of a special kind, being what he calls ‘solvable’. In the present reply the authors investigate what Herzberg means by solvability. They discuss the advantages and disadvantages of making solvability a sine qua non, and ventilate their misgivings about Herzberg’s suggestion that the notion of solvability might help the foundationalist. They further show that the canonical series arising from an infinite chain of conditional probabilities always converges, and also that the sum is equal to the required unconditional probability if a certain infinite product of conditional probabilities vanishes.
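The convergence claim is easy to see numerically. Writing a_n = P(A_n | A_{n+1}) and b_n = P(A_n | not-A_{n+1}) (notation introduced here only for the illustration), the law of total probability gives P(A_n) = b_n + (a_n - b_n) P(A_{n+1}), and unrolling yields the canonical series sum_n b_n * prod_{i<n} (a_i - b_i). The constant conditional probabilities below are illustrative values, not taken from the paper.

```python
# Partial sums of the canonical series for an infinite probabilistic regress.
a = 0.9          # P(A_n | A_{n+1}), taken constant for simplicity
b = 0.05         # P(A_n | not-A_{n+1})

partial, product = 0.0, 1.0
for n in range(60):
    partial += b * product
    product *= (a - b)                # this product vanishes as n grows
    if n in (0, 4, 9, 29, 59):
        print(f"after {n + 1:2d} terms: {partial:.6f}")

# With constant a and b the regress also has a closed-form fixed point
# P = b + (a - b) P, i.e. P = b / (1 - a + b), which the series approaches.
print("fixed point          :", b / (1 - a + b))
```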
- On Econometric Inference and Multiple Use Of The Same Data Benjamin Holcblat, Steffen Gronneberg: In fields that are mainly nonexperimental, such as economics and finance, it is unavoidable to compute test statistics and confidence regions that are not probabilistically independent of previously examined data. The Bayesian and Neyman-Pearson inference theories are known to be inadequate for such a practice. The authors show that these inadequacies also hold m.a.e. (modulo approximation error). They develop a general econometric theory, called the neoclassical inference theory, that is immune to this inadequacy m.a.e. The neoclassical inference theory appears to nest model calibration and most econometric practices, whether they are labelled Bayesian or à la Neyman-Pearson. The authors then derive a general but simple adjustment to make standard errors account for the approximation error.
- How to Confirm the Disconfirmed - On conjunction fallacies and robust confirmation David Atkinson, Jeanne Peijnenburg and Theo Kuipers: Can some evidence confirm a conjunction of two hypotheses more than it confirms either of the hypotheses separately? The authors show that it can, and moreover under conditions that are the same for nine different measures of confirmation. Further, they demonstrate that it is even possible for the conjunction of two disconfirmed hypotheses to be confirmed by the same evidence.
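A toy probability space (my own, not one of the paper's examples) shows that the phenomenon is possible for the simple difference measure of confirmation c(H, E) = P(H|E) - P(H):

```python
# Evidence that confirms a conjunction more than either conjunct.
from fractions import Fraction as F

outcomes = range(1, 9)                      # uniform distribution on {1, ..., 8}
H1 = {1, 2, 3, 4}
H2 = {1, 5, 6, 7}
E = {1}                                     # the evidence: outcome 1 occurred

def prob(event, given=None):
    space = set(outcomes) if given is None else given
    return F(len(event & space), len(space))

def confirmation(H):
    return prob(H, given=E) - prob(H)

print("c(H1, E)       =", confirmation(H1))          # 1 - 1/2 = 1/2
print("c(H2, E)       =", confirmation(H2))          # 1 - 1/2 = 1/2
print("c(H1 & H2, E)  =", confirmation(H1 & H2))     # 1 - 1/8 = 7/8
# The conjunction is confirmed more strongly than either conjunct.
```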
- Probability Density Functions from the Fisher Information Metric T. Clingman, Jeff Murugan, Jonathan P. Shock: The authors show a general relation between the spatially disjoint product of probability density functions and the sum of their Fisher information metric tensors. They then utilise this result to give a method for constructing the probability density functions for an arbitrary Riemannian Fisher information metric tensor. They note further that this construction is extremely unconstrained, depending only on certain continuity properties of the probability density functions and a select symmetry of their domains.
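For orientation, the sketch below computes the Fisher information metric itself for the Gaussian family and checks it against the closed form; the paper's construction runs in the opposite direction, from a given metric to densities.

```python
# Fisher information metric g_ij = E[ d_i log p * d_j log p ] for p(x; mu, sigma),
# estimated by Monte Carlo and compared with diag(1/sigma^2, 2/sigma^2).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)

# score vector: derivatives of log p with respect to (mu, sigma)
d_mu = (x - mu) / sigma**2
d_sigma = (x - mu) ** 2 / sigma**3 - 1.0 / sigma
score = np.stack([d_mu, d_sigma])

g_numeric = score @ score.T / x.size
g_exact = np.diag([1.0 / sigma**2, 2.0 / sigma**2])
print("Monte Carlo estimate:\n", np.round(g_numeric, 4))
print("closed form:\n", g_exact)
```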
- Having Science in View: General Philosophy of Science and its Significance Stathis Psillos: General philosophy of science (GPoS) is the part of conceptual space where philosophy and science meet and interact. More specifically, it is the space in which the scientific image of the world is synthesised and in which the general and abstract structure of science becomes the object of theoretical investigation.
Yet, there is some scepticism in the profession concerning the prospects of GPoS. In a seminal piece, Philip Kitcher (2013) noted that the task of GPoS, as conceived by Carl Hempel and many who followed him, was to offer explications of major metascientific concepts such as confirmation, theory, explanation, simplicity etc. These explications were supposed “to provide general accounts of them by specifying the necessary conditions for their application across the entire range of possible cases” (2013, 187). Yet, Kitcher notes, “Sixty years on, it should be clear that the program
has failed. We have no general accounts of confirmation, theory, explanation, law, reduction, or causation that will apply across the diversity of scientific fields or across different periods of time" (2013, 188). There are two chief reasons for this alleged failure. The first relates to the diversity of scientific practice: the methods employed by the various fields of natural science are very diverse and field-specific. As Kitcher notes, "Perhaps there is a 'thin' general conception that picks out what is common to the diversity of fields, but that turns out to be too attenuated to be of any great use". The second reason relates to the historical record of the sciences: the 'mechanics' of major scientific changes in different fields of inquiry is diverse and involves factors that cannot be readily accommodated by a general explication of the major metascientific concepts (cf. 2013, 189). Though Kitcher does not make this suggestion explicitly, the trend seems to be to move from GPoS to the philosophies of the individual sciences and to relocate whatever content GPoS is supposed to have to the philosophies of the sciences. I think scepticism or pessimism about the prospects of GPoS is unwarranted. And I also think that there can be no philosophies of the various sciences without GPoS.
- Reducing Computational Complexity of Quantum Correlations Titas Chanda, Tamoghna Das, Debasis Sadhukhan, Amit Kumar Pal, Aditi Sen(De), and Ujjwal Sen: The authors address the issue of reducing the resource required to compute information-theoretic quantum correlation measures like quantum discord and quantum work deficit in two-qubit and higher-dimensional systems. They provide a mathematical description of determining the quantum correlation measure using a restricted set of local measurements. They show that the computational error caused by restricting the complete set of local measurements decreases rapidly as the size of the restricted set increases. They also perform quantitative analysis to investigate how the error scales with the system size, taking into account a set of plausible constructions of the constrained set. Carrying out a comparative study, they show that the resource required to optimize quantum work deficit is usually higher than that required for quantum discord. They also demonstrate that minimization of quantum discord and quantum work deficit is easier in the case of two-qubit mixed states of fixed ranks and with positive partial transpose in comparison to the corresponding states having nonpositive partial transpose. For bound entangled states, the authors show that the error is significantly low when the measurements correspond to the spin observables along the three Cartesian coordinates.
- Nonparametric Nearest Neighbor Random Process Clustering Michael Tschannen and Helmut Bolcskei: The authors consider the problem of clustering noisy finite-length observations of stationary ergodic random processes according to their nonparametric generative models, without prior knowledge of the model statistics and the number of generative models. Two algorithms, both using the L(1) distance between estimated power spectral densities (PSDs) as a measure of dissimilarity, are analyzed. The first algorithm, termed nearest neighbor process clustering (NNPC), is, to the best of the authors' knowledge, new and relies on partitioning the nearest neighbor graph of the observations via spectral clustering. The second algorithm, simply referred to as k-means (KM), consists of a single k-means iteration with farthest point initialization and was considered before in the literature, albeit with a different measure of dissimilarity and with asymptotic performance results only. The authors show that both NNPC and KM succeed with high probability under noise and even when the generative process PSDs overlap significantly, all provided that the observation length is sufficiently large. Their results quantify the tradeoff between the overlap of the generative process PSDs, the noise variance, and the observation length. Finally, they present numerical performance results for synthetic and real data.
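A stripped-down sketch of the KM-style procedure described above, using illustrative AR(1) generative processes and a single assignment step after farthest-point initialization; details such as the PSD estimator and the normalisation are my own choices, not the paper's.

```python
# L1-distance-between-PSDs clustering of random-process observations.
import numpy as np
from scipy.signal import welch, lfilter

rng = np.random.default_rng(0)

def ar1(phi, n=2048):
    """AR(1) process x_t = phi * x_{t-1} + noise."""
    return lfilter([1.0], [1.0, -phi], rng.standard_normal(n))

obs = [ar1(0.8) for _ in range(10)] + [ar1(-0.8) for _ in range(10)]
psds = np.array([welch(x, nperseg=256)[1] for x in obs])
psds /= psds.sum(axis=1, keepdims=True)           # normalise each estimated PSD

D = np.abs(psds[:, None, :] - psds[None, :, :]).sum(axis=2)   # pairwise L1 distances

# farthest-point initialization of 2 centres, then one assignment step
c0 = 0
c1 = int(D[c0].argmax())
labels = (D[:, c1] < D[:, c0]).astype(int)
print("cluster labels:", labels)        # first ten vs last ten should separate
```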
- Economic inequality and mobility in kinetic models for social sciences Maria Letizia Bertotti, Giovanni Modanese: Statistical evaluations of the economic mobility of a society are more difficult than measurements of the income distribution, because they require following the evolution of individuals’ incomes for at least one or two generations. In micro-to-macro theoretical models of economic exchanges based on kinetic equations, the income distribution depends only on the asymptotic equilibrium solutions, while mobility estimates also involve the detailed structure of the transition probabilities of the model, and are thus an important tool for assessing its validity. Empirical data show a remarkably general negative correlation between economic inequality and mobility, whose explanation is still unclear. It is therefore particularly interesting to study this correlation in analytical models. In previous work the authors investigated the behavior of the Gini inequality index in kinetic models as a function of several parameters that define the binary interactions and the taxation and redistribution processes: saving propensity, taxation rates gap, tax evasion rate, welfare means-testing, etc. Here, they check the correlation of mobility with inequality by analyzing the dependence of mobility on the same parameters. According to several numerical solutions, the correlation is
confirmed to be negative.
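For reference, the Gini index that these kinetic models track can be computed directly from a sample of incomes; the two synthetic populations below are purely illustrative, not outputs of the authors' model.

```python
# Gini index G = sum_ij |x_i - x_j| / (2 n^2 mean(x)) for two income samples.
import numpy as np

def gini(incomes):
    x = np.asarray(incomes, dtype=float)
    diffs = np.abs(x[:, None] - x[None, :]).sum()
    return diffs / (2 * len(x) ** 2 * x.mean())

rng = np.random.default_rng(0)
equal_society = rng.uniform(900, 1100, size=2000)      # narrow income spread
unequal_society = rng.pareto(a=1.5, size=2000) * 1000  # heavy-tailed incomes

print("Gini (narrow spread) :", round(gini(equal_society), 3))
print("Gini (heavy tail)    :", round(gini(unequal_society), 3))
```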
- Science and Informed, Counterfactual, Democratic Consent Arnon Keren: On many science-related policy questions, the public is unable to make informed decisions, because of its inability to make use of knowledge and information obtained by scientists. Philip Kitcher and James Fishkin have both suggested therefore that on certain science-related issues, public policy should not be decided upon by actual democratic vote, but should instead conform to the public's Counterfactual Informed Democratic Decision (CIDD). Indeed, this suggestion underlies Kitcher's specification of an ideal of a well-ordered science. The paper argues that this suggestion misconstrues the normative significance of CIDDs. At most, CIDDs might have epistemic significance, but no authority or legitimizing force.
- Proving the Herman-Protocol Conjecture Maria Bruna, Radu Grigore, Stefan Kiefer, Joel Ouaknine, and James Worrell: Herman's self-stabilisation algorithm, introduced 25 years ago, is a well-studied synchronous randomised protocol for enabling a ring of N processes collectively holding any odd number of tokens to reach a stable state in which a single token remains. Determining the worst-case expected time to stabilisation is the central outstanding open problem about this protocol. It is known that there is a constant h such that any initial configuration has expected stabilisation time at most hN². Ten years ago, McIver and Morgan established a lower bound of 4/27 ≈ 0.148 for h, achieved with three equally-spaced tokens, and conjectured this to be the optimal value of h. A series of papers over the last decade gradually reduced the upper bound on h, with the present record (achieved last year) standing at approximately 0.156. In this paper, the authors prove McIver and Morgan's conjecture and establish that h = 4/27 is indeed optimal.
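The conjectured (now proven) constant is easy to probe by simulation. The sketch below runs Herman's protocol from the three-equally-spaced-token configuration; the passing direction and ring size are illustrative choices, and the empirical mean should approach (4/27)N².

```python
# Simulation of Herman's protocol: each token is kept or passed clockwise with
# probability 1/2 per synchronous round; tokens meeting on a process annihilate.
import random

def herman(n, tokens):
    positions = set(tokens)
    rounds = 0
    while len(positions) > 1:
        moved = [((p + 1) % n) if random.random() < 0.5 else p for p in positions]
        # tokens landing on the same process annihilate pairwise
        positions = {p for p in set(moved) if moved.count(p) % 2 == 1}
        rounds += 1
    return rounds

random.seed(0)
N, trials = 27, 2000
start = [0, N // 3, 2 * N // 3]                 # three equally spaced tokens
mean_time = sum(herman(N, start) for _ in range(trials)) / trials
print(f"empirical mean : {mean_time:.1f} rounds")
print(f"4/27 * N^2     : {4 / 27 * N**2:.1f} rounds")
```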