Speaker: Robert Adler
Title: From mean Euler characteristics to the Gaussian kinematic formula
Abstract: In the early 1970s I discovered an explicit formula for the mean Euler characteristic of the excursion sets of stationary Gaussian random fields over simple Euclidean sets. To my great surprise (and delight) this formula turned out to be extremely useful, and has been applied to analyse data coming from areas as diverse as CMB, galactic surveys, and fMRI brain imaging.
About ten years ago, Jonathan Taylor discovered a beautiful extension of this formula, which led to what is now known as the Gaussian kinematic formula. As well as having deep mathematical elegance, this result allows one to do for many non-stationary, non-Gaussian applications what the original formula allowed one to do in the simpler cases.
The aim of this lecture will be to describe the GKF, and explain how to use it, in the belief that it too will serve as a natural tool for the analysis of cosmological data.
Speaker: Ethan Anderes
Title: Shrinking the quadratic estimator
Abstract: This talk presents a modification of the quadratic estimator of weak lensing by an adaptive Wiener filter which uses the robust Bayesian techniques developed by Berger (1980) and Strawderman (1971). This modification highlights some advantages provided by a Bayesian analysis of the quadratic estimator and requires the user to propose a fiducial model for the spectral density of the unknown lensing potential. The role of the fiducial spectral density is to give the estimator superior statistical performance in a 'neighborhood of the fiducial model' while controlling the statistical errors when the fiducial spectral density is drastically wrong. If time permits we will also discuss and compare some alternatives to the quadratic estimator which utilize local likelihood estimates of weak lensing.
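For readers unfamiliar with the estimator, the plain (non-robust) Wiener-filter shrinkage step looks roughly as follows; this is our toy 1-D sketch, not the speaker's robust Bayesian construction, and all spectra in it are made-up placeholders.

```python
import numpy as np

# Toy illustration: Wiener-filter shrinkage of a noisy quadratic-estimator
# output phi_hat toward zero, using an *assumed* fiducial signal spectrum.
# C_fid and N_qe are illustrative placeholders, not real lensing spectra.
L = np.arange(2, 512)                       # multipoles
C_fid = 1e-7 * (L / 60.0) ** -2.5           # fiducial lensing potential spectrum
N_qe = 5e-8 * np.ones(L.size)               # quadratic-estimator noise level

rng = np.random.default_rng(0)
phi_true = rng.normal(0.0, np.sqrt(C_fid))            # "true" potential modes
phi_hat = phi_true + rng.normal(0.0, np.sqrt(N_qe))   # unbiased QE output

W = C_fid / (C_fid + N_qe)                  # Wiener weight per multipole
phi_wiener = W * phi_hat                    # shrunken estimate

# Shrinkage trades bias for variance; it wins when C_fid is roughly right,
# which is the regime the robust-Bayes modification is built to protect.
for name, est in [("raw QE", phi_hat), ("Wiener", phi_wiener)]:
    print(name, "MSE:", np.mean((est - phi_true) ** 2))
```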
Speaker: Jerome Bobin
Title: Compressed sensing in astrophysics and beyond
Abstract: With the rapidly growing amount of large-scale data in astrophysics, the question of efficient data acquisition and transmission becomes essential. Recently, a novel sampling theory, coined compressed sensing, has asserted that signals or images of interest can be recovered from far fewer measurements or samples than stated by the celebrated Nyquist-Shannon theorem. This theory paves the way for cheaper and faster sampling or imaging. In this talk, we will present the basics of the compressed sensing theory as well as its applications in astrophysics, which range from the compression of the data of the Herschel space telescope to radio interferometry.
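As a concrete illustration of the recovery principle (ours, not from the talk): a sparse vector can be recovered from far fewer random projections than its length by l1 minimisation, here via plain iterative soft-thresholding (ISTA); all sizes are arbitrary.

```python
import numpy as np

# Minimal compressed-sensing demo: recover a k-sparse x (k=8) from
# m=100 random projections of an n=400-dimensional signal.
rng = np.random.default_rng(1)
n, m, k = 400, 100, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))    # random sensing matrix
y = A @ x_true                                 # compressive measurements

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2         # step below 1/Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    r = x + step * (A.T @ (y - A @ x))         # gradient step on the data term
    x = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```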
Speaker: George Bosilca
Title: Performance vs. Productivity: a never-ending story
Abstract: The convergence of several unprecedented challenges -- new design constraints, revolutionary amounts of heterogeneity, and a complete transformation of the programming paradigm -- suggests that the traditional message-passing programming model has become unsuitable for emerging hardware designs and that the "application-architecture performance gap" will become more difficult than ever to close. The quest for an "ideal" programming paradigm runs deep in the computer science community. While many models have been proposed, few have managed to establish themselves as generic, capable, and widely used programming models. With the current drift toward hybrid architectures, even these programming paradigms start to break down and fail to maintain the promised level of performance and/or productivity.
The data flow model has the capability to extract a maximal degree of parallelism from the algorithms, while tolerating memory and synchronization latencies and allowing an automatic overlap of communications and computations. These features make the data flow paradigm an interesting and promising programming approach, as highlighted by the sudden increase in interest from all scientific computing communities. Several programming environments and runtimes based on fine-grained tasks will be explored, showing that this programming approach has the potential to unleash an extreme level of performance compared to legacy programming approaches, and to deliver a predictable level of performance while exhibiting a natural adaptability to a variety of hardware platforms.
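For readers outside HPC, here is a deliberately tiny data-flow sketch of our own (no relation to the speaker's runtimes): each task fires once the tasks it depends on have produced their data, so independent tasks overlap automatically; a real runtime does this asynchronously rather than wave by wave as below.

```python
import concurrent.futures as cf

def task(name, inputs):
    # Stand-in for real work; returns a string recording the data flow.
    return f"{name}({','.join(inputs)})"

# Dependency graph: C needs A; D needs A and B; E needs C and D.
deps = {"A": [], "B": [], "C": ["A"], "D": ["A", "B"], "E": ["C", "D"]}

done = {}
with cf.ThreadPoolExecutor() as pool:
    remaining = dict(deps)
    while remaining:
        # All tasks whose inputs already exist fire together (one "wave").
        ready = [t for t in remaining if all(x in done for x in remaining[t])]
        futs = {t: pool.submit(task, t, [done[x] for x in remaining[t]])
                for t in ready}
        for t, f in futs.items():
            done[t] = f.result()
            del remaining[t]

print(done["E"])   # E(C(A),D(A,B))
```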
Speaker: Carmelita Carbone
Title: CMB lensing and N-body simulations
Abstract: I will provide an overview of the different techniques used in the literature to reconstruct CMB-lensing from N-body simulations. In particular, I will focus on the realisation of large- and all-sky CMB-lensing maps and outline the critical aspects. Finally, I will present results obtained by CMB ray-tracing across the Millennium Simulation in the Born approximation, and discuss effects coming from non-linear mode coupling.
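For reference, in the Born approximation the photon path is left unperturbed and the effective convergence reduces to a weighted line-of-sight integral of the density contrast extracted from the simulation (standard lensing notation, flat universe assumed):

```latex
\kappa(\hat n) \;=\; \frac{3 H_0^2 \Omega_m}{2 c^2}
  \int_0^{\chi_s} \! \mathrm{d}\chi \;
  \frac{\chi\,(\chi_s - \chi)}{\chi_s}\,
  \frac{\delta(\chi \hat n, \chi)}{a(\chi)} .
```

The non-linear mode coupling discussed in the talk is precisely what this first-order expression leaves out.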
Speaker: Jean-Francois Cardoso
Title: Component separation for the Planck mission
Abstract: Component separation and, in particular, CMB extraction, from the nine frequency channels of Planck, is an important step of the Planck data processing pipeline. The Planck collaboration is investigating several approaches to component separation. They all have strong and weak points and it seems that no single approach is uniformly best for all the possible (seen and unforeseen) scientific uses of the component maps. One size may not fit all... In this talk, I will present my personal view about various approaches to component separation for Planck, trying to underline their commonalities and differences.
Speaker: Joanna Dunkley
Title: Sampling for Cosmic Microwave Background analysis
Abstract: Markov Chain Monte Carlo methods have been standard tools in cosmological analysis for the past decade. I outline how both Gibbs sampling and the Metropolis-Hastings algorithm are used for various aspects of Cosmic Microwave Background data analysis and other cosmological analyses. This includes Galactic and extragalactic foreground removal, power spectrum estimation, and cosmological parameter estimation. I will use examples from analysis for the WMAP and Planck satellites, as well as for ground-based CMB experiments.
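For non-specialists, the Metropolis-Hastings core is only a few lines; this sketch of ours uses a toy Gaussian posterior in place of a real CMB likelihood.

```python
import numpy as np

def log_post(theta):
    # Stand-in for log(likelihood x prior); a 2-parameter Gaussian here.
    return -0.5 * np.sum((theta / np.array([1.0, 0.5])) ** 2)

rng = np.random.default_rng(2)
theta, step, chain = np.zeros(2), 0.4, []
for _ in range(20000):
    prop = theta + step * rng.normal(size=2)   # symmetric random-walk proposal
    # Accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

chain = np.array(chain)
print("posterior std estimates:", chain[5000:].std(axis=0))   # ~ [1.0, 0.5]
```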
Speaker: Gersande Fort
Title: Adaptive and Interacting Monte Carlo methods for Bayesian analysis
Abstract: Monte Carlo algorithms are widely used methods for solving problems that are otherwise intractable, e.g., drawing samples from complicated probability distributions, computing complex functionals of a stochastic process, or evaluating high-dimensional integrals. Monte Carlo methods consist in drawing samples under a proposal distribution, and in correcting them in order to approximate the target distribution. There are two main families of Monte Carlo methods: in the first one, the correction step consists in weighting the samples; in the second one, the correction step relies on acceptance-rejection mechanisms. Monte Carlo methods are fairly universal in that they formally apply to almost every setting. However, they depend upon many design parameters that need to be finely tuned in order for the algorithm to approximate the target, and to approximate it efficiently. Over the past decade, proposals have been made towards a partial automation of the choice of the design parameters, yielding the so-called "Adaptive Monte Carlo methods" and "Interacting Monte Carlo methods". Unfortunately, it is known that adaptation and interaction can completely destroy convergence of the Monte Carlo algorithms to the target distribution.
In this talk, I will first outline the role of the design parameters for standard Monte Carlo algorithms, and then introduce Adaptive and Interacting Monte Carlo methods. I will also discuss convergence issues and numerical performance criteria, with an emphasis on adaptive importance sampling algorithms (also called "population Monte Carlo" in the literature) and on adaptive Metropolis algorithms.
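A minimal sketch of the adaptive-Metropolis idea (in the spirit of Haario et al.; our toy, not the speaker's algorithms): the proposal covariance is learned from the past chain, with a vanishing adaptation rate.

```python
import numpy as np

target_cov = np.array([[1.0, 0.9], [0.9, 1.0]])     # correlated toy target
target_prec = np.linalg.inv(target_cov)

def log_target(x):
    return -0.5 * x @ target_prec @ x

rng = np.random.default_rng(3)
d = 2
x, mu, cov, chain = np.zeros(d), np.zeros(d), np.eye(d), []
for n in range(1, 20001):
    scale = 2.38 ** 2 / d                           # classic AM scaling
    prop = x + rng.multivariate_normal(np.zeros(d), scale * cov + 1e-6 * np.eye(d))
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    gamma = 1.0 / n                                 # vanishing adaptation rate
    mu = mu + gamma * (x - mu)                      # running mean
    cov = cov + gamma * (np.outer(x - mu, x - mu) - cov)  # running covariance
    chain.append(x)

print("empirical covariance:\n", np.cov(np.array(chain)[5000:].T))
```

The decaying gamma keeps the adaptation "diminishing", one of the standard conditions under which ergodicity survives.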
Speaker: Laura Grigori
Title: Recent advances in numerical linear algebra
Abstract: Numerical linear algebra operations are ubiquitous in many challenging academic and industrial applications, including advanced CMB data analysis. This talk will give an overview of the evolution of numerical linear algebra algorithms and software, and the major changes such algorithms have undergone following the breakthroughs in the hardware of high performance computers. A specific focus of this talk will be on communication avoiding algorithms. This is a new and particularly promising class of algorithms, introduced in late 2008 as an attempt to address the exponentially increasing gap between computation time and communication time - one of the major challenges faced today by the high performance computing community. I will discuss the theoretical and practical benefits such algorithms can offer and their potential impact on applications.
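The flavour of these algorithms can be conveyed by TSQR, the communication-avoiding QR factorization of a tall skinny matrix; this small sketch (ours) replaces many fine-grained exchanges by independent local QRs plus a single reduction.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(10000, 8))            # tall and skinny
blocks = np.array_split(A, 4)              # row blocks, one per "processor"

Rs = [np.linalg.qr(B)[1] for B in blocks]  # local QRs: no communication
R = np.linalg.qr(np.vstack(Rs))[1]         # one small reduction QR

# R agrees with the R factor of a conventional QR, up to row signs.
R_ref = np.linalg.qr(A)[1]
print(np.allclose(np.abs(R), np.abs(R_ref)))   # True
```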
Speaker: David J. Hand
Title: Opportunities and Challenges in Modelling and Anomaly Detection
Abstract: The dramatic evolution of statistics over the past 50 years has been primarily driven by two things: the requirements of new application areas and progress in computer technology. The former stimulates the development of new tools, and the latter enables this development - to the extent that entirely novel classes of methods have been invented. Once invented, the new tools find application in other domains. In this talk I take a high level view of modelling, explore new strategies for anomaly detection, and look at the complications caused by messy data. In particular, I contrast the physical sciences with the social and behavioural sciences, hoping to stimulate some cross-disciplinary fertilisation.
Speaker: Reijo Keskitalo
Title: Making maps for CMB experiments
Abstract: Since the COBE days, mapmaking for CMB surveys has been driven into an increasingly approximate regime by the computational resource requirements imposed by ever-growing datasets. I will review the spectrum of different mapmaking algorithms available for CMB experiments and discuss the criteria that drive selection between those algorithms. Examples will be drawn from both completed experiments and the body of preparatory work that will serve the upcoming Planck data release.
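To fix notation for the talks that follow: with time-ordered data d = P m + n, the maximum-likelihood map is m = (P^T N^-1 P)^-1 P^T N^-1 d, and for white noise this reduces to simple binning. A toy sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(5)
npix, nsamp = 100, 50000
pix = rng.integers(0, npix, nsamp)          # pointing: sample -> pixel
m_true = rng.normal(size=npix)
d = m_true[pix] + 0.5 * rng.normal(size=nsamp)   # TOD with white noise

hits = np.bincount(pix, minlength=npix)                      # P^T P (diagonal)
m_hat = np.bincount(pix, weights=d, minlength=npix) / hits   # binned ML map

print("map rms error:", np.sqrt(np.mean((m_hat - m_true) ** 2)))
```

Correlated (1/f) noise makes N^-1 dense in the time domain, which is what pushes real codes toward the approximate destriped and filtered methods the talk reviews.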
Speaker: Lloyd Knox
Title: Big overview
Abstract: I will review the problem of analyzing CMB data beginning with an idealized description of the data,
and then introducing the non-idealities that make the problem more challenging and more interesting.
I will give my personal viewpoint as to which problems are sufficiently well solved already and which
are most demanding of new approaches, possibly of an interdisciplinary nature.
Speaker: Domenico Marinucci
Title: On needlet/wavelet non-Gaussianity estimators
Abstract: After providing an overview of the use of needlets for Cosmic Microwave Background radiation data, we shall focus more precisely on the analysis of non-Gaussianity and the estimation of the non-linearity parameter fNL. In particular, we will derive the linear correction term for needlet and wavelet estimators of the bispectrum, and discuss the relationship with related techniques such as the KSW estimator (Komatsu et al. 2005). We will show that on masked data with anisotropic noise, the error bars of the proposed procedures reach the optimal limits. We shall also discuss why, in the limit of full-sky coverage and isotropic noise, the linear correction term vanishes. Applications to WMAP 7-year data are also illustrated.
Research supported by the ERC Grant 277742 Pascal
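Schematically (our notation, which may differ from the speaker's), the linear-term-corrected needlet bispectrum over N_p cubature points ξ_k reads

```latex
\hat I_{j_1 j_2 j_3} = \frac{1}{N_p}\sum_{k}\Big[
  \beta_{j_1 k}\,\beta_{j_2 k}\,\beta_{j_3 k}
  - \big(\langle \beta_{j_1 k}\beta_{j_2 k}\rangle_{\mathrm{MC}}\;\beta_{j_3 k}
  + \mathrm{cyclic}\big)\Big],
```

where the Monte Carlo averages are taken over Gaussian simulations sharing the mask and anisotropic noise. On the full sky with isotropic noise the averages lose their k-dependence and multiply a zero-mean sum, which is the sense in which the correction vanishes.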
Speaker: Hiranya Peiris
Title: Hunting for relics from the early universe in the CMB
Abstract: Fluctuations in the cosmic microwave background (CMB), the radiation left over from the Big Bang, contain information which has been pivotal in establishing the current cosmological model. These data can also be used to test well-motivated additions to this model. These include pre-inflationary relics (signatures of bubble collisions in the context of eternal inflation) as well as topological defects that form after inflation (cosmic strings, textures). These relics typically leave subdominant spatially-localised signals, hidden in the "noise" from the primary CMB, instrumental noise, foreground residuals and other systematics. Standard approaches for searching for such signals involve focusing on statistical anomalies, which carry the danger of extreme a posteriori biases. The self-consistent approach to this problem is Bayesian model comparison; however the full implementation of this approach is computationally intractable with current CMB datasets (and this problem will only become more difficult with Planck data). I will describe a powerful modular algorithm, combining a candidate-detection stage (using wavelets or optimal filters) with a full Bayesian parameter estimation and model selection step performed in pixel space in these candidate regions. The algorithm is designed to fully account for the "look elsewhere" effect, and the use of blind analysis techniques further enhances its robustness to unknown systematics. Finally, I will present the results of applying these techniques to hunt for relic signals from the early universe in WMAP7 data.
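A cartoon of the candidate-detection stage (our 1-D toy with an optimal filter; the real pipeline works on the sphere with wavelets):

```python
import numpy as np

rng = np.random.default_rng(6)
n, sigma = 4096, 1.0
data = rng.normal(0, sigma, n)                   # "noise" sky
profile = np.exp(-0.5 * (np.arange(-15, 16) / 4.0) ** 2)  # assumed source profile
data[800:831] += 3.0 * profile                   # injected localised signal

# Matched-filter SNR for white noise: (f * d) / (sigma * sqrt(sum f^2)).
snr = np.correlate(data, profile, mode="same") / (sigma * np.sqrt(np.sum(profile ** 2)))
candidates = np.flatnonzero(snr > 4.0)           # flag for Bayesian follow-up
print("candidate pixels:", candidates)           # cluster around 815
```

The full Bayesian model-selection step is then run only in the flagged regions, which is what makes the otherwise intractable analysis affordable.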
Speaker: George Smoot
Title: Cosmology and Cosmic Microwave Background
Abstract: The cosmic microwave background (CMB) has played a key role in first establishing the Big Bang Model as the leading model and then in exploring the model and determining its parameters. Using advanced technology we have made great progress in mapping the Universe across very great distances and through many epochs in time. Perhaps the most impressive part of this has been the CMB maps of the early universe.
This talk will show the data acquired and published to date and discuss what improvements we might anticipate in the relatively near term.
These observations are important as more than maps: they provide evidence on which models and parameters may correctly describe our universe. Thus there is much left to be done, with an increasing degree of difficulty.
Nevertheless the progress is breathtaking.
Speaker: Sivan Toledo
Title: Discrete Aspects of High-Performance Iterative Linear Solvers
Abstract: The talk will focus on three aspects of high-performance iterative solvers for sparse linear systems of equations and for sparse least-squares problems. All three aspects are discrete/combinatorial in nature. The first aspect that I will talk about is combinatorial preconditioning. In systems in which the coefficient matrix defines a weighted-graph structure (e.g., some discretizations of scalar elliptic PDEs), one can construct fast solvers using graph algorithms that sparsify the underlying graph. I will explain the theoretical framework for this method and simple constructions that illustrate the type of sparsification algorithms that are used. The second aspect that I will talk about is random sampling techniques in constructing preconditioners and approximate solvers. This is an exciting emerging topic in numerical linear algebra. It is particularly effective when the amount of data is huge, as in CMB observations. Finally, I will describe methods to reduce communication in iterative solvers; interprocessor communication is today slower than computation, so minimizing communication is key to achieving high performance.
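A tiny sketch of the random-sampling idea (ours, in the spirit of Blendenpik-type solvers): QR-factor a random subset of rows and use its R factor to precondition CG on the normal equations. Uniform sampling suffices here only because the random matrix is incoherent; real methods first apply a mixing transform.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(7)
m, n = 20000, 50
A = rng.normal(size=(m, n)) * rng.uniform(0.1, 10, n)   # badly scaled columns
b = rng.normal(size=m)

idx = rng.choice(m, 4 * n, replace=False)               # sampled rows
R = qr(A[idx], mode="economic")[1]                      # small QR of the sample

def Minv(v):                                            # M^{-1} = R^{-1} R^{-T}
    return solve_triangular(R, solve_triangular(R.T, v, lower=True))

# Preconditioned CG on the normal equations (A^T A) x = A^T b.
x, r = np.zeros(n), A.T @ b
z = Minv(r); p = z.copy()
for _ in range(30):
    Ap = A.T @ (A @ p)
    alpha = (r @ z) / (p @ Ap)
    x, r_new = x + alpha * p, r - alpha * Ap
    z_new = Minv(r_new)
    p = z_new + ((r_new @ z_new) / (r @ z)) * p
    r, z = r_new, z_new

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print("error vs. dense lstsq:", np.linalg.norm(x - x_ref))
```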
Speaker: Benjamin Wandelt
Title: Optimal methods for non-Gaussianity
Abstract: Non-Gaussianity has emerged as a powerful probe of primordial physics. I will discuss the motivation for non-Gaussianity searches, and discuss the status and prospects for such searches using upcoming data. Given the interdisciplinary nature of this workshop I will highlight some technical issues of broader interest, including a very general iterative scheme for optimal filtering and signal reconstruction that outperforms multi-grid or preconditioned conjugate gradient solvers for applications to high-resolution data with extreme dynamic range (Elsner and Wandelt 2012).
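The cited scheme introduces an auxiliary "messenger" field, with covariance proportional to the identity, between the pixel-diagonal noise and the harmonic-diagonal signal, so every update is diagonal in its own basis. A 1-D toy of ours in that spirit (the spectra are placeholders):

```python
import numpy as np

rng = np.random.default_rng(8)
npix = 1024
kfreq = np.abs(np.fft.fftfreq(npix) * npix)
S = 1.0 / (1.0 + kfreq) ** 2                   # assumed signal power spectrum
Nvar = rng.uniform(0.05, 0.5, npix)            # inhomogeneous pixel noise variance

# Simulate data d = s + n.
white = np.fft.fft(rng.normal(size=npix), norm="ortho")
s_true = np.real(np.fft.ifft(np.sqrt(S) * white, norm="ortho"))
d = s_true + rng.normal(0.0, np.sqrt(Nvar))

tau = Nvar.min()                               # messenger covariance T = tau*I
Nbar = Nvar - tau                              # remaining noise variance

s, ds = np.zeros(npix), np.inf
for _ in range(200):
    t = (tau * d + Nbar * s) / (Nbar + tau)    # pixel-space update (diagonal)
    tk = np.fft.fft(t, norm="ortho")
    s_new = np.real(np.fft.ifft(S / (S + tau) * tk, norm="ortho"))  # harmonic update
    ds, s = np.max(np.abs(s_new - s)), s_new
print("final update size (convergence check):", ds)
```

No preconditioner and no matrix inversion appear anywhere, which is the point the abstract makes about extreme dynamic range.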
CONTRIBUTED TALKS
Speaker: Soumen Basak
Title: Sparse component separation for accurate CMB map estimation
Abstract: The Cosmic Microwave Background (CMB) is of premier importance for cosmologists studying the birth of our universe. Unfortunately, most CMB experiments such as
COBE, WMAP or Planck do not provide a direct measure of the cosmological signal; CMB is
mixed up with galactic foregrounds and point sources. For the sake of scientific exploitation,
measuring the CMB requires extracting several different astrophysical components (CMB,
Sunyaev-Zel'dovich clusters, galactic dust) from multi-wavelength observations. Mathematically
speaking, the problem of disentangling the CMB map from the galactic foregrounds amounts to
a component or source separation problem. In the field of CMB studies, a very large range of
source separation methods have been applied which all differ from each other in the way they
model the data and the criteria they rely on to separate components. Two main difficulties are i)
the instrument's beam varies across frequencies and ii) the emission laws of most astrophysical
components vary across pixels. I will introduce a very accurate modeling of CMB data, based
on sparsity, accounting for beam variability across frequencies as well as spatial variations
of the components' spectral characteristics. Based on this new sparse modeling of the data,
I will describe a sparsity-based component separation method coined Local-Generalized
Morphological Component Analysis (L-GMCA). Extensive numerical experiments have been
carried out with WMAP data. These experiments show the high efficiency of the proposed component separation method at estimating a clean CMB map with very low foreground contamination, which makes L-GMCA of prime interest for CMB studies.
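A stripped-down GMCA-style iteration (our toy, far simpler than L-GMCA: no wavelets, no local mixtures) alternates least-squares updates of the mixing matrix with thresholded updates of the sources:

```python
import numpy as np

rng = np.random.default_rng(9)
n_src, n_chan, n_samp = 2, 4, 2000
S_true = rng.laplace(0, 1, (n_src, n_samp)) * (rng.uniform(size=(n_src, n_samp)) < 0.1)
A_true = rng.normal(size=(n_chan, n_src))
X = A_true @ S_true + 0.01 * rng.normal(size=(n_chan, n_samp))  # data = A S + noise

A = rng.normal(size=(n_chan, n_src))
for _ in range(50):
    S = np.linalg.pinv(A) @ X                              # LS source update
    thr = 3.0 * np.median(np.abs(S), axis=1, keepdims=True) / 0.6745  # MAD threshold
    S = np.sign(S) * np.maximum(np.abs(S) - thr, 0.0)      # enforce sparsity
    A = X @ np.linalg.pinv(S)                              # LS mixing update
    A /= np.linalg.norm(A, axis=0, keepdims=True)          # fix scale ambiguity

A_true_n = A_true / np.linalg.norm(A_true, axis=0, keepdims=True)
print("|A^T A_true| (~ permutation if recovered):\n", np.abs(A.T @ A_true_n))
```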
Speaker: Yabebal Fantaye
Title: CMB lensing reconstruction in the presence of diffuse polarized foregrounds
Abstract: The measurement and characterization of the lensing of the cosmic microwave
background (CMB) is a key goal of the current and next generation of CMB experiments. We
perform a case study of a three-channel balloon-borne CMB experiment observing the sky at
(l,b)=(250deg,-38deg) and attaining a sensitivity of 5.25 muK-arcmin with 8' angular resolution
at 150 GHz, in order to assess whether the effect of polarized Galactic dust is expected to be
a significant contaminant to the lensing signal reconstructed using the EB quadratic estimator.
We find that for our assumed dust model, polarization fractions as low as a few percent may
lead to a significant dust bias to the lensing convergence power spectrum. We investigate a
parametric component separation method, proposed by Stompor et al. (2009), for mitigating the
effect of this dust bias, based on a `profile likelihood' technique for estimating the dust spectral
index. We find a dust contrast regime in which the accuracy of the profile likelihood spectral
index estimate breaks down, and in which external information on the dust frequency scaling is
needed. We propose a criterion for setting a requirement on the accuracy with which the dust
spectral index must be estimated or constrained, and demonstrate that if this requirement is
met, then the dust bias can be removed.
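The structure of the profile likelihood is easy to convey in a toy of ours: for each trial index beta the component amplitudes are solved analytically, leaving a 1-D function to maximise (white noise and made-up frequency scalings assumed; the real method is that of Stompor et al. 2009).

```python
import numpy as np

rng = np.random.default_rng(10)
freqs = np.array([100.0, 150.0, 250.0])        # GHz, illustrative channels
beta_true, npix, sigma = 1.6, 500, 0.1
cmb, dust = rng.normal(0, 1, npix), rng.normal(0, 1, npix)
d = cmb[None, :] + np.outer((freqs / 150.0) ** beta_true, dust) \
    + sigma * rng.normal(size=(3, npix))

def profile_loglike(beta):
    Ab = np.column_stack([np.ones_like(freqs), (freqs / 150.0) ** beta])
    proj = Ab @ np.linalg.solve(Ab.T @ Ab, Ab.T)   # best-fit amplitudes projected out
    resid = d - proj @ d
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

betas = np.linspace(1.2, 2.0, 81)
print("beta_hat =", betas[np.argmax([profile_loglike(b) for b in betas])])
```

The breakdown regime the abstract identifies is, in this picture, the one where the 1-D profile becomes too flat or multimodal for its maximum to be meaningful.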
Speaker: Stephen Feeney
Title: A robust constraint on cosmic textures from the cosmic microwave background
Abstract: Fluctuations in the cosmic microwave background (CMB) contain information which
has been pivotal in establishing the current cosmological model. These data can also be
used to test well-motivated additions to this model, such as cosmic textures. Textures are
a type of topological defect that can be produced during a cosmological phase transition in
the early universe, and which leave characteristic hot and cold spots in the CMB. We apply
Bayesian methods to carry out an optimal test of the texture hypothesis, using full-sky data
from the Wilkinson Microwave Anisotropy Probe. We conclude that current data do not warrant
augmenting the standard cosmological model with textures. We rule out at 95% confidence
models that predict more than 6 detectable cosmic textures on the full sky. This talk is based on
arXiv:1203.1928.
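The logic of the model comparison can be shown in one dimension (our schematic, not the paper's full pixel-space machinery): integrate the likelihood of a template amplitude over its prior and compare with the null model.

```python
import numpy as np

def loglike(eps, d, f, sigma):
    return -0.5 * np.sum((d - eps * f) ** 2) / sigma ** 2

rng = np.random.default_rng(11)
f = np.exp(-0.5 * np.linspace(-3, 3, 200) ** 2)   # texture-like template
sigma = 1.0
d = rng.normal(0, sigma, 200)                     # data containing NO signal

eps_grid = np.linspace(0.0, 2.0, 400)             # uniform prior on [0, 2]
logL0 = loglike(0.0, d, f, sigma)
L_ratio = np.exp([loglike(e, d, f, sigma) - logL0 for e in eps_grid])
bayes_factor = L_ratio.mean()                     # prior-averaged likelihood ratio
print("B(signal : null) =", bayes_factor)         # < 1: data prefer the null
```

The Occam penalty is automatic: spreading the prior over amplitudes the data disfavour drives the evidence down, which is how a limit on the number of detectable textures is obtained in practice.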
Speaker: Dominic Galliano
Title: Deriving bounds for trispectrum estimators for Planck
Abstract: With Planck's more precise measurements of the CMB anisotropies at small scales,
new independent parameters of non-Gaussianity are being developed which are based on 4
point correlation functions, such as g_NL and tau_NL. However, measuring these parameters brings new challenges in data processing. An extra correlation point gives more degrees of freedom, which together with Planck's precision at small scales means far more data needs reducing. In this talk I will go through the problems with calculating the variance of these parameters for Planck, and talk about how high performance computing is helping us solve them. I will discuss what happens when both isocurvature and adiabatic modes are
present, how this increases the number of possible parameters and therefore the total number
of calculations needed. I will also give the estimates for the variance of different types of
g_NL parameters we can expect when using them on Planck data and how this work is being
extended for tau_NL parameters.
(In collaboration with Dr Robert Crittenden and Dr Kazuya Koyama)
Speaker: Frederic Guilloux
Title: Power spectrum estimation on the sphere using wavelets
Abstract: The angular power spectrum of a stationary random field on the sphere can be
estimated from the needlet coefficients of a single realization.
When increasingly fine resolution is available, one can consider high-frequency asymptotics. In this direction, we proved the consistency of the estimator in a Gaussian case with non-stationary noise and missing data.
Moreover, we show that the method enables a natural and lossless combination of many
different data sets (typically: full sky satellite experiments + ground-based experiment with
better resolution on smaller patches).
References:
Faye et al. (2008), CMB power spectrum estimation using wavelets, Phys. Rev. D.
Faye & Guilloux (2011), Spectral estimation on the sphere using wavelets: high frequency asymptotics, Statist. Infer. for Stoch. Processes.
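In the notation common in the needlet literature (our summary; conventions may differ from the papers'), the coefficients and the band-power estimator are

```latex
\beta_{jk} = \sqrt{\lambda_{jk}}\,\sum_{\ell m}
  b\!\left(\frac{\ell}{B^{j}}\right) a_{\ell m}\, Y_{\ell m}(\xi_{jk}),
\qquad
\mathbb{E}\big[\beta_{jk}^{2}\big] = \lambda_{jk}
  \sum_{\ell} b^{2}\!\left(\frac{\ell}{B^{j}}\right)\frac{2\ell+1}{4\pi}\, C_{\ell},
```

so that, the cubature weights λ_jk summing to 4π, the statistic Σ_k β_jk² estimates the smoothed band power Σ_l b²(l/B^j)(2l+1) C_l; the spatial localisation of the needlets is what makes this robust to missing data and enables combining datasets with different sky coverage.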
Speaker: Michael Hobson
Title: Nets and nests: accelerated Bayesian inference in astrophysics
Abstract: Bayesian inference methods are widely used to analyse observations in astrophysics
and cosmology, but they can be extremely computationally demanding. Recent work in this
area by the Cavendish Astrophysics Group has focussed on developing new methods for
greatly accelerating such analyses, by up to a factor of a million, using neural networks and nested
sampling methods, such as the SkyNet and MultiNest packages respectively. These have
recently been combined into the BAMBI algorithm, which accelerates Bayesian inference still
further. I will give an outline of these approaches, which are generic in nature, and illustrate their
use in a cosmological case study.
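The nested-sampling core is compact enough to sketch (ours; MultiNest adds ellipsoidal decomposition, clustering and much more):

```python
import numpy as np

rng = np.random.default_rng(12)

def loglike(x):                              # toy 2-D Gaussian likelihood
    return -0.5 * np.sum(((x - 0.5) / 0.05) ** 2)

nlive, ndim = 200, 2
live = rng.uniform(0, 1, (nlive, ndim))      # prior = unit cube
live_logL = np.array([loglike(p) for p in live])

logZ, logX = -np.inf, 0.0
for i in range(2000):
    worst = np.argmin(live_logL)
    logX_new = -(i + 1) / nlive              # expected prior-volume shrinkage
    logw = np.log(np.exp(logX) - np.exp(logX_new))
    logZ = np.logaddexp(logZ, logw + live_logL[worst])
    logX = logX_new
    # Replace the worst point: random walk above the likelihood threshold,
    # with steps scaled to the current spread of the live points.
    Lmin, x = live_logL[worst], live[rng.integers(nlive)].copy()
    for _ in range(20):
        x_new = np.clip(x + rng.normal(size=ndim) * live.std(axis=0), 0, 1)
        if loglike(x_new) > Lmin:
            x = x_new
    live[worst], live_logL[worst] = x, loglike(x)

logZ = np.logaddexp(logZ, logX + np.log(np.mean(np.exp(live_logL))))
print("log-evidence:", logZ)                 # analytic: ln(2*pi*0.05^2) ~ -4.15
```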
Speaker: Jaiseung Kim
Title: Restoration of a whole-sky CMB map
Abstract: The presence of astrophysical emissions between the last scattering surface and our vantage point requires us to apply a foreground mask to the CMB sky map, leading to a large cut around the Galactic equator and numerous holes. Since various estimators related to the reported CMB anomalies require a whole-sky CMB map, it is of utmost importance to develop an efficient method to fill in the masked pixels in a way consistent with the expected statistical properties and with the unmasked pixels.
For this purpose, we consider Monte Carlo simulation of a constrained Gaussian field and derive it for the CMB anisotropy in harmonic space, where a feasible implementation is possible with good approximation.
We applied our method to the WMAP foreground-reduced maps masked by the KQ85 mask, and investigated the anomalous multipole vector alignment between the quadrupole and octupole components.
We find the alignment in the foreground-reduced maps is even higher than in the whole-sky Internal Linear Combination (ILC) map, and also find the V band map has higher alignment than the other bands, despite the expectation that the V band map has less foreground contamination than the other bands.
Our method will be complementary to other efforts on restoring or reconstructing the masked
CMB data, and of great use to CMB data analyses.
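A toy of the constrained-realization idea in pixel space (ours; the talk derives the harmonic-space version): the Hoffman-Ribak construction fills the hole with the conditional mean plus a fluctuation drawn with the correct covariance.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 256
x = np.arange(n)
C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 10.0) ** 2) + 1e-8 * np.eye(n)
Lc = np.linalg.cholesky(C)

data = Lc @ rng.normal(size=n)                 # the "observed sky"
mask = np.ones(n, bool)
mask[100:160] = False                          # the hole to fill

r = Lc @ rng.normal(size=n)                    # unconstrained realization
Coo = C[np.ix_(mask, mask)]                    # observed-observed covariance
Cmo = C[np.ix_(~mask, mask)]                   # masked-observed covariance
w = np.linalg.solve(Coo, (data - r)[mask])

filled = data.copy()
filled[~mask] = r[~mask] + Cmo @ w             # constrained fill-in
print("rms deviation from truth inside the hole:",
      np.std(filled[~mask] - data[~mask]))
```

The deviation inside the hole is nonzero by design: the fill-in is one statistically consistent realization, not a deterministic reconstruction.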
Speaker: Eiichiro Komatsu
Title: Simple foreground cleaning algorithm for detecting primordial B-mode polarization of the
cosmic microwave background
Abstract: We reconsider the pixel-based, "template" polarized foreground removal method
within the context of a next-generation, low-noise, low-resolution (0.5 degree FWHM) spaceborne
experiment measuring the cosmological B-mode polarization signal in the cosmic
microwave background (CMB). This method was put forward by the Wilkinson Microwave
Anisotropy Probe (WMAP) team and further studied by Efstathiou et al. We need at least 3
frequency channels: one is used for extracting the CMB signal, whereas the other two are used
to estimate the spatial distribution of the polarized dust and synchrotron emission. No external
template maps are used. We extract the tensor-to-scalar ratio (r) from simulated sky maps
consisting of CMB, noise (2 micro K arcmin), and a foreground model, and find that, even for the
simplest 3-frequency configuration with 60, 100, and 240 GHz, the residual bias in r is as small
as Delta r~0.002. This bias is dominated by the residual synchrotron emission due to spatial
variations of the synchrotron spectral index. With an extended mask with fsky=0.5, the bias is
reduced further down to < 0.001.
Reference:
Katayama and Komatsu, ApJ, 737, 78 (2011)
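The variance-minimising template fit at the heart of the method can be shown in a toy of ours (made-up foreground patterns and scalings; the paper works with realistic sky models):

```python
import numpy as np

rng = np.random.default_rng(14)
npix = 5000
cmb = rng.normal(0, 1, npix)
sync = np.abs(rng.normal(0, 1, npix))          # toy synchrotron pattern
dust = np.abs(rng.normal(0, 1, npix))          # toy dust pattern

m_low = cmb + 3.0 * sync + 0.1 * dust          # 60 GHz-like channel
m_cmb = cmb + 1.0 * sync + 0.3 * dust          # 100 GHz-like (CMB) channel
m_high = cmb + 0.2 * sync + 3.0 * dust         # 240 GHz-like channel

# Difference maps are CMB-free internal templates; fit them to the CMB
# channel and subtract (coefficients minimise the cleaned-map variance).
T = np.column_stack([m_low - m_cmb, m_high - m_cmb])
coef, *_ = np.linalg.lstsq(T, m_cmb, rcond=None)
clean = m_cmb - T @ coef

print("foreground rms before/after:", np.std(m_cmb - cmb), np.std(clean - cmb))
```

Spatial variation of the spectral indices makes the true coefficients position-dependent, which is exactly the origin of the residual bias Delta r quoted in the abstract.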
Speaker: Jason McEwen
Title: Spherical wavelet-Bayesian cosmic string tension estimation
Abstract: We develop a Bayesian framework to estimate the string tension of any cosmic string contribution to full-sky observations of the cosmic microwave background (CMB). We analyse
the CMB signal and any string component in wavelet space, using steerable scale-discretised
wavelets (S2DW) on the sphere. The wavelet transform yields a more sparse representation of
the string signal than the CMB, which we exploit to separate any string contribution. We study
the effectiveness of our framework to estimate the cosmic string tension.
Speaker: Sigurd Kirkevold Næss
Title: Methodology and lessons from the QUIET Maximum Likelihood map-maker
Abstract: I present the methodology and lessons from the maximum likelihood (ML) pipeline
of the QUIET CMB polarization experiment. The QUIET maps are, at ~150,000 pixels, among
the largest ML CMB maps for which the full covariance information has been computed. The
covariance matrix is the most precise representation of the statistical properties of the map, but
represents a significant challenge to compute and store at high resolutions. I will discuss the
implementation of the algorithm, the effect of the choice of filters and data storage format on
performance, and the effect of degenerate modes. I will provide performance numbers for the
QUIET data set, and if time allows present some of the resulting maps as an example.
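In standard notation (pointing matrix P, time-domain noise covariance N, data d), the quantities the abstract refers to are

```latex
\hat m = \big(P^{T} N^{-1} P\big)^{-1} P^{T} N^{-1} d,
\qquad
\mathrm{Cov}(\hat m) = \big(P^{T} N^{-1} P\big)^{-1},
```

and it is the dense inverse on the right, at ~150,000 pixels, that drives the computational and storage challenge discussed in the talk.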
Speaker: Anais Rassat
Title: Removal of two large-scale Cosmic Microwave Background anomalies after subtraction of the Integrated Sachs-Wolfe effect
Abstract: Although there is currently a debate over the significance of the claimed large scale
anomalies in the Cosmic Microwave Background (CMB), their existence is not totally dismissed.
In parallel to the debate over their statistical significance, recent work has also focussed on
masks and secondary anisotropies as potential sources of these anomalies.
In this work we investigate simultaneously the impact of the method used to account for masked
regions as well as the impact of the integrated Sachs-Wolfe (ISW) effect, which is the large-scale secondary anisotropy most likely to affect the CMB anomalies. In this sense, our work is an update of both Francis & Peacock 2010 and Kim et al. 2012. Our aim is to identify trends in CMB data from different years and with different mask treatments. We reconstruct the ISW
signal due to 2 Micron All-Sky Survey (2MASS) and NRAO VLA Sky Survey (NVSS) galaxies,
effectively reconstructing the low-redshift ISW signal out to z ~1. We account for regions of
missing data using the sparse inpainting technique of Abrial et al. 2008 and Starck, Murtagh &
Fadili 2010. We test sparse inpainting of the CMB, Large Scale Structure and ISW and find it
constitutes a bias-free reconstruction method suitable to study large-scale statistical isotropy
and the ISW effect.
We focus on three large-scale CMB anomalies: the low quadrupole, the quadrupole/octopole
alignment and the octopole planarity. After sparse inpainting, the low quadrupole becomes
more anomalous, whilst the quadrupole/octopole alignment becomes less anomalous. However,
after subtraction of the ISW signal, the trend amongst the CMB maps is that both the low
quadrupole and the quadrupole/octopole alignment are no longer statistically significant. Our
results also suggest that both of these previous anomalies may be due to the quadrupole alone.
We do not identify any trends for the octopole planarity, either after sparse inpainting or ISW
subtraction.
Authors: A. Rassat, J.-L. Starck, F.-X. Dupé
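A 1-D caricature of sparse inpainting (ours; the cited works operate on the sphere): alternately enforce the observed pixels and threshold in the transform domain, with a threshold decreasing to zero.

```python
import numpy as np

rng = np.random.default_rng(15)
n = 512
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)
mask = rng.uniform(size=n) > 0.4               # ~60% of pixels observed

x = np.zeros(n)
niter = 100
lam_max = np.abs(np.fft.fft(mask * signal)).max()
for i in range(niter):
    resid = mask * (signal - x)                # push toward the data where observed
    coef = np.fft.fft(x + resid)
    lam = lam_max * (1 - (i + 1) / niter)      # threshold decreasing to zero
    coef[np.abs(coef) < lam] = 0.0             # keep only significant modes
    x = np.real(np.fft.ifft(coef))

print("rms error in the gaps:", np.std((x - signal)[~mask]))
```

Because the toy signal is exactly sparse in Fourier space, the gaps are recovered essentially perfectly; on the sky, approximate sparsity in spherical harmonics plays the same role.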
Speaker: Mathieu Remazeilles
Title: Reconstruction of high-resolution SZ maps from heterogeneous datasets using needlets
Abstract: The aim of this work is to propose a joint exploitation of heterogeneous datasets from
high-resolution/few-channel experiments and low-resolution/many-channel experiments by
using a multiscale needlet Internal Linear Combination (ILC), in order to optimize the thermal
Sunyaev-Zeldovich (SZ) effect reconstruction at high resolution. We highlight that needlet ILC
is a powerful and tunable component separation method which can easily deal with multiple
experiments with various specifications. Such a multiscale analysis renders possible the joint
exploitation of high-resolution and low-resolution data, by performing for each needlet scale a
combination of some specific channels, either from one dataset or both datasets, selected for
their relevance to the angular scale considered, thus allowing us simultaneously to extract the high-resolution SZ signal from compact clusters and to remove Galactic foreground contamination at large scales.
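At each needlet scale the ILC weights have the familiar closed form; a minimal sketch of ours, with illustrative (not physical) SZ scalings:

```python
import numpy as np

def ilc_weights(cov, a):
    # Minimise the output variance subject to unit response to the SZ
    # spectral signature a:  w = C^{-1} a / (a^T C^{-1} a).
    ca = np.linalg.solve(cov, a)
    return ca / (a @ ca)

rng = np.random.default_rng(16)
a = np.array([-1.6, -0.9, 0.3])                # illustrative SZ scalings
maps = rng.normal(size=(3, 10000)) + np.outer(a, rng.normal(size=10000))
w = ilc_weights(np.cov(maps), a)
sz_map = w @ maps                              # unit SZ gain, minimal variance

print("weights:", w, " response to a:", w @ a)   # response = 1
```

The multiscale twist is that cov, a and even the set of channels entering the combination change from one needlet scale to the next, which is how the heterogeneous datasets are merged.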
Speaker: Alessandro Renzi
Title: Primordial non-Gaussianity and diffuse Galactic emissions
Abstract: We analyse the astrophysical foreground emissions in the context of Cosmic
Microwave Background (CMB) anisotropy non-Gaussianity (NG) studies. We parametrize the
NG signal by an equivalent f_NL evaluated through an efficient parametric fitting procedure
(skew-Cl) estimating the bispectrum template on single independent elements in the harmonic
domain. We consider the case of simulated data corresponding to the bands of the Wilkinson Microwave Anisotropy Probe (WMAP). We focus on the characterization of the spurious signal caused by diffuse foregrounds and extra-Galactic emissions.
Speaker: Sandro Scodeller
Title: Detection of Point Sources Using Internal Templates and Needlets
Abstract: I will present results from a recent paper (Scodeller et al 2012) where we have
developed new needlet-based methods which have significantly increased the number of
extragalactic point sources detected in WMAP data. In total we have detected 2102 sources,
1589 of these are not in the WMAP point source catalogues. I will also present recent results
where we have tested the effect of masking all these 2102 sources on the estimated power
spectrum and compared it with the WMAP best fit. Furthermore, I will show a new data analysis method in which, by subtracting the detected extragalactic sources from the WMAP data, one can obtain results without using point source masks. I show that the power spectrum obtained in this
manner is in full agreement with the power spectrum obtained using a point source mask.
Papers: Scodeller, S., Hansen, F.K. and Marinucci, D., 2012, ApJ, 753, 27
Scodeller, S., and Hansen, F.K., 2012, arXiv:1207.2315
Speaker: Meisam Sharify
Title: Avoiding Communication in Generalized Least Square Problems
Abstract: Map-making is one of the crucial steps in the analysis of the CMB data sets which
can be done by applying the maximum likelihood approach. This approach yields a solution
in the form of a generalized least squares problem, which several software packages, such as MADmap, solve to produce an estimate of the sky. The enormous sizes of the data sets anticipated in the next generation of CMB experiments require massively parallel implementations
of the map-making algorithm suitable for high performance computing (HPC) systems. In this
context, communication between computational nodes is quickly becoming one of the major challenges, which needs to be robustly addressed to ensure scalability of the map-making codes.
Indeed, the analysis of the MADmap code, which uses the preconditioned conjugate gradient
(PCG) algorithm to compute the map of the sky, shows that the cost of the communication
required by such an algorithm is usually significant, and on occasion dominant, as
compared to the cost of the entire procedure. For example, Cantalupo et al (2010) find that for a
simulated data set of a Planck-like experiment 24.0% of run time is typically spent on performing
communication. In this talk I present a study of so-called communication-avoiding, iterative
solvers, such as the s-step PCG method, as applied to generalized least squares systems in the
context of generic and CMB map-making applications. This research is a joint work with Laura
Grigori and Radek Stompor.
Speaker: Horst Simon
Title: Barriers to Exascale Computing
Abstract: The development of an exascale computing capability with machines capable of executing O(10^18) operations per second by the end of the decade will be characterized by significant and dramatic changes in computing hardware architecture from current (2012) petascale high-performance computers. From the perspective of computational science, this will be at least as disruptive as the transition from vector supercomputing to parallel supercomputing that occurred in the 1990s. This was one of the findings of a 2010 workshop on cross-cutting technologies for exascale computing. The impact of these architectural changes on future applications development for the computational sciences community can now be anticipated in very general terms. While the community has been investigating the road to exascale worldwide in the last several years, there are still several barriers that need to be overcome to obtain general purpose exascale performance. This talk will summarize the major challenges to exascale, and how much progress has been made in the last five years.
Speaker: Florent Sureau
Title: Component Separation for Full-Sky Cosmic Microwave Background Recovery using
Sparsity
Abstract: Full-sky multi-channel missions such as Planck or WMAP intend to measure the anisotropies of the Cosmic Microwave Background with high accuracy so as to ultimately provide key information about the birth of our universe. This objective is however challenged by the importance of galactic and extra-galactic emissions at the frequencies observed (prevailing in some regions of the sky at all frequencies), which need to be extracted prior to CMB analysis.
In this work, we propose to illustrate how sparsity priors for the foreground emissions in
appropriate dictionaries perform in component separation to better recover the CMB radiation.
We present two separation approaches based on sparsity priors. First we introduce a semi-blind
source separation algorithm, coined Localized Generalized Morphological Component Analysis
(lGMCA), which estimates the parameters of local linear mixture models in wavelet space.
Estimating the sources and their spectral signature at various scales and in regions of various
size is particularly suited to spatially and spectrally varying components such as the emissions
from thermal dust. We show that combined with the robustness of sparse source separation in
noisy situations, this approach leads to better cleaning of the CMB temperature map compared to
state-of-the-art approaches.
We then present an algorithm dedicated to compact source extraction, which cannot be
efficiently achieved with a generic approach based only on linear mixture models due to their
high spectral variability. In this approach we propose to separate compact sources from diffuse
emissions based on the morphology of the sources (given by a point source catalogue and the
model of the instrument point spread function (PSF)) and a sparsity prior in spherical harmonic
space for the diffuse emissions (due to the polynomial decay of the spherical harmonic
coefficient amplitudes). We show that this allows more complex backgrounds to be modeled,
and therefore leads to better estimates of the point sources for subtraction than classically
performed PSF fitting or aperture photometry based on simple models of the background.
Collaborators: F. Sureau, J. Bobin, J.-L. Starck, S. Basak Laboratoire AIM, CEA-IRFU, Saclay,
France
Speaker: Paul Sutter
Title: Accelerating convolutions on the sphere with hybrid GPU/CPU kernel splitting.
Abstract: We present a general method for accelerating by more than an order of magnitude
the convolution of a pixelated function on the sphere with a radially-symmetric kernel. Our
method splits the kernel into a compact real-space and a compact spherical harmonic space
component that can then be convolved in parallel using an inexpensive commodity GPU and
a CPU, respectively. We provide models for the computational cost of both real-space and
Fourier space convolutions and an estimate for the approximation error. Using these models
we can determine the optimum split that minimizes the wall clock time for the convolution while
satisfying the desired error bounds. We apply this technique to the problem of simulating a
cosmic microwave background sky map at the resolution typical of the high resolution maps
of the cosmic microwave background anisotropies produced by the Planck spacecraft. For the
main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an
acceptable fractional rms error of order 10^-5 in the power spectrum of the output map.
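The split exploits linearity of convolution; a 1-D periodic analogue of ours (the paper's version lives on the sphere):

```python
import numpy as np

n = 2048
dist = np.minimum(np.arange(n), n - np.arange(n))    # periodic distance
kernel = np.exp(-0.5 * (dist / 12.0) ** 2)

cut = 30
near = np.where(dist <= cut, kernel, 0.0)    # compact part -> direct sum ("GPU")
far = kernel - near                          # smooth remainder -> FFT ("CPU")

rng = np.random.default_rng(17)
sky = rng.normal(size=n)

conv_near = np.zeros(n)
for j in np.flatnonzero(near):               # direct sum: cheap for compact support
    conv_near += near[j] * np.roll(sky, j)
conv_far = np.real(np.fft.ifft(np.fft.fft(sky) * np.fft.fft(far)))
conv_full = np.real(np.fft.ifft(np.fft.fft(sky) * np.fft.fft(kernel)))

print(np.allclose(conv_near + conv_far, conv_full))  # True: the split is exact
```

The cost model then balances the direct-sum work (which grows with the cut) against the bandlimit needed for the smooth remainder (which shrinks with it), fixing the optimal split.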
Speaker: Mikolaj Szydlarski
Title: Accelerating maximum likelihood Cosmic Microwave Background map-making codes
Abstract: Map-making is one of the key steps in the data analysis pipeline of any CMB experiment. As the observational data sets keep growing at Moore's rate, with their volumes exceeding tens and hundreds of billions of samples, the need for fast and efficiently parallelizable iterative solvers is generally recognized.
In this work we study new iterative algorithms aiming at shortening the time-to-solution of maximum likelihood map-making. The latter results in a generalized least squares system, with the weights assumed to be described by a block-diagonal matrix with Toeplitz blocks, which has to be solved efficiently from the point of view of CPU time, communication volume and time, and memory consumption. Our iterative algorithm is based on a conjugate gradient method for which we construct a novel, parallel, two-level preconditioner (2lvl-PCG). We show experimentally that our parallel implementation of 2lvl-PCG outperforms the standard one-level PCG by as much as a factor of 5 in terms of both convergence rate and time to solution.
Authors: M. Szydlarski, L. Grigori and R. Stompor
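A toy of the two-level preconditioning idea (ours; schematic only, the paper's coarse space is built differently): a coarse solve removes the slow modes that stall one-level PCG, and a diagonal smoother handles the rest.

```python
import numpy as np

rng = np.random.default_rng(18)
n, k = 400, 8
Q = np.linalg.qr(rng.normal(size=(n, n)))[0]
eigs = np.r_[np.linspace(0.001, 0.01, k), np.linspace(1.0, 2.0, n - k)]
A = Q @ np.diag(eigs) @ Q.T                    # SPD with k slow modes
b = rng.normal(size=n)

Z = Q[:, :k]                                   # coarse space: the slow modes
E = Z.T @ A @ Z                                # coarse operator
D = np.diag(A).copy()                          # Jacobi smoother

def coarse(r):
    return Z @ np.linalg.solve(E, Z.T @ r)

def two_level(r):                              # symmetric "balancing" form
    q = coarse(r)
    fine = (r - A @ q) / D
    return q + fine - coarse(A @ fine)

def pcg(apply_M, tol=1e-8, maxit=5000):
    x, r = np.zeros(n), b.copy()
    z = apply_M(r); p = z.copy(); it = 0
    while np.linalg.norm(r) > tol * np.linalg.norm(b) and it < maxit:
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x, r_new = x + alpha * p, r - alpha * Ap
        z_new = apply_M(r_new)
        p = z_new + ((r_new @ z_new) / (r @ z)) * p
        r, z, it = r_new, z_new, it + 1
    return it

print("1-level iterations:", pcg(lambda r: r / D))
print("2-level iterations:", pcg(two_level))
```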