Cosmology with Phase 1 of the Square Kilometre Array: Red Book 2018: Technical specifications and performance forecasts
- Square Kilometre Array Cosmology Science Working Group: David J. Bacon, Richard A. Battye, Philip Bull, Stefano Camera, Pedro G. Ferreira, Ian Harrison, David Parkinson, Alkistis Pourtsidou, Mário G. Santos, Laura Wolz, Filipe Abdalla, Yashar Akrami, David Alonso, Sambatra Andrianomena, Mario Ballardini, José Luis Bernal, Daniele Bertacca, Carlos A. P. Bengaly, Anna Bonaldi, Camille Bonvin, Michael L. Brown, Emma Chapman, Song Chen, Xuelei Chen, Steven Cunnington, Tamara M. Davis, Clive Dickinson, José Fonseca, Keith Grainge, Stuart Harper, Matt J. Jarvis, Roy Maartens, Natasha Maddox, Hamsa Padmanabhan, Jonathan R. Pritchard, Alvise Raccanelli, Marzia Rivi, Sambit Roychowdhury, Martin Sahlén, Dominik J. Schwarz, Thilo M. Siewert, Matteo Viel, Francisco Villaescusa-Navarro, Yidong Xu, Daisuke Yamauchi, Joe Zuntz
- Journal: Publications of the Astronomical Society of Australia, Volume 37, 2020, e007
- Published online by Cambridge University Press: 06 March 2020
We present a detailed overview of the cosmological surveys that we aim to carry out with Phase 1 of the Square Kilometre Array (SKA1) and the science that they will enable. We highlight three main surveys: a medium-deep continuum weak lensing and low-redshift spectroscopic HI galaxy survey over 5 000 deg$^2$; a wide and deep continuum galaxy and HI intensity mapping (IM) survey over 20 000 deg$^2$ from $z = 0.35$ to 3; and a deep, high-redshift HI IM survey over 100 deg$^2$ from $z = 3$ to 6. Taken together, these surveys will achieve an array of important scientific goals: measuring the equation of state of dark energy out to $z \sim 3$ with percent-level precision measurements of the cosmic expansion rate; constraining possible deviations from General Relativity on cosmological scales by measuring the growth rate of structure through multiple independent methods; mapping the structure of the Universe on the largest accessible scales, thus constraining fundamental properties such as isotropy, homogeneity, and non-Gaussianity; and measuring the HI density and bias out to $z = 6$. These surveys will also provide highly complementary clustering and weak lensing measurements with systematic uncertainties independent of those of optical and near-infrared (NIR) surveys like Euclid, LSST, and WFIRST, leading to a multitude of synergies that can improve constraints significantly beyond what optical or radio surveys can achieve on their own. This document, the 2018 Red Book, provides reference technical specifications, cosmological parameter forecasts, and an overview of relevant systematic effects for the three key surveys, and will be regularly updated by the Cosmology Science Working Group in the run-up to the start of operations and the Key Science Programme of SKA1.
On Probability and Cosmology: Inference Beyond Data?
- from Part V - Methodological and Philosophical Issues
-
- By Martin Sahlén, Astronomy, Uppsala University, Sweden
- Edited by Khalil Chamcham, University of Oxford, Joseph Silk, University of Oxford, John D. Barrow, University of Cambridge, Simon Saunders, University of Oxford
-
- Book: The Philosophy of Cosmology
- Published online: 18 April 2017
- Print publication: 13 April 2017, pp. 429-446
Summary
Cosmological Model Inference with Finite Data
In physical cosmology we are faced with an empirical context of gradually diminishing returns from new observations. This is true in a fundamental sense, since the amount of information we can expect to collect through astronomical observations is finite, owing to the fact that we occupy a particular vantage point in the history and spatial extent of the Universe. Arguably, we may approach the observational limit in the foreseeable future, at least in relation to some scientific hypotheses (Ellis, 2014). There is no guarantee that the amount and types of information we are able to collect will be sufficient to statistically test all reasonable hypotheses that may be posed. There is under-determination both in principle and in practice (Butterfield, 2014; Ellis, 2014; Zinkernagel, 2011). These circumstances are not new; indeed, cosmology has had to contend with this problem throughout its history. For example, Whitrow (1949) relates the same concerns, and points back to remarks by Blaise Pascal in the seventeenth century: ‘But if our view be arrested there let our imagination pass beyond; … We may enlarge our conceptions beyond all imaginable space; we only produce atoms in comparison with the reality of things’. Already with Thales, epistemological principles of uniformity and consistency were used to structure the locally imaginable into something considered globally plausible. The primary example in contemporary cosmology is the Cosmological Principle of large-scale isotropy and homogeneity. In the following, the aim will be to apply such epistemological principles to the procedure of cosmological model inference itself.
The state of affairs described above naturally leads to a view of model inference as inference to the best explanation/model (e.g. Lipton, 2004; Maher, 1993), since some degree of explanatory ambiguity appears unavoidable in principle. This is consistent with a Bayesian interpretation of probability, which makes a priori assumptions explicit. As in science generally, inference in cosmology is based on statistical testing of models in light of empirical data. A large body of literature has built up in recent years discussing various aspects of these methods, with Bayesian statistics becoming a standard framework (Hobson, 2010; Jaynes, 2003; von Toussaint, 2011). The necessary foundations of Bayesian inference will be presented in the next section.
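The Bayesian model comparison described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration, not drawn from the chapter itself: two rival "models" for a biased coin are assigned explicit prior probabilities (the a priori assumptions made explicit), and Bayes' theorem converts a binomial likelihood into posterior model probabilities. All names and numbers are illustrative.

```python
# Minimal sketch of Bayesian model comparison via Bayes' theorem.
# The models, data, and priors are invented for illustration only.
from math import comb

def likelihood(p_heads: float, heads: int, flips: int) -> float:
    """Binomial likelihood P(data | model): probability of observing
    `heads` successes in `flips` trials given success probability p_heads."""
    return comb(flips, heads) * p_heads**heads * (1 - p_heads)**(flips - heads)

# Hypothetical data: 8 heads in 10 flips.
heads, flips = 8, 10

# Two rival models, each with an explicit prior probability P(model).
models = {
    "fair (p=0.5)":   (0.5, 0.5),
    "biased (p=0.8)": (0.8, 0.5),
}

# Bayes' theorem: P(model | data) ∝ P(data | model) * P(model),
# normalised so the posteriors sum to one.
unnorm = {name: likelihood(p, heads, flips) * prior
          for name, (p, prior) in models.items()}
evidence = sum(unnorm.values())
posterior = {name: w / evidence for name, w in unnorm.items()}

for name, prob in posterior.items():
    print(f"P({name} | data) = {prob:.3f}")
```

The explicit prior in each entry of `models` is what distinguishes this from a purely likelihood-based test: changing those priors changes the posterior odds, which is precisely the sense in which a priori assumptions enter the inference.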