User guide

The pipeline is run by the pipelet framework. Its inputs are Python scripts located in the pipeline subdirectory.

Dependencies

  • python >= 2.6, numpy, scipy, matplotlib, cfitsio

  • pipelet:

    git clone https://gitlab.in2p3.fr/pipelet/pipelet.git
    cd pipelet
    python setup.py install --prefix=$HOME
  • Healpix C++ http://sourceforge.net/projects/healpix/

    ./configure
    make cpp-all
  • healpy:

    pip install --user healpy
  • spherelib:

    git clone https://gitlab.in2p3.fr/spherelib/spherelib.git
    cd spherelib/python
    ./waf configure --healpix_prefix=$HEALPIX/src/cxx/$HEALPIX_TARGET --prefix=$HOME
    ./waf build
    ./waf install
    export PYTHONPATH=$HOME/lib/python2.6/[dist|site]-packages:$PYTHONPATH
    export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH
  • smica:

    git clone https://gitlab.in2p3.fr/smica/smica.git
    cd smica
    python setup.py install --prefix=$HOME
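Once the dependencies are installed, a quick import check can catch missing modules early. This is an illustrative sketch, not part of the pipeline; the import names (in particular spherelib's) are assumptions and may differ on your system.

```python
import importlib

def missing_modules(names):
    """Return the subset of `names` that cannot be imported."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# Modules used by the pipeline (import names assumed from the list above).
required = ["numpy", "scipy", "matplotlib", "healpy", "spherelib", "smica"]
print(missing_modules(required))
```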

SMICA pipeline setup

The repository is hosted by CC-IN2P3 (http://gitlab.in2p3.fr/) and is private:

git clone https://gitlab.in2p3.fr/maudelejeune/planck.git
export PYTHONPATH=planck/src:$PYTHONPATH
  1. Set up a data repository for pipeline instances and products:

    export PLANCK_DB=""   # high-speed mount point where the pipelet database is saved
    export PLANCK_PIPE="" # large storage mount point where the processed data are saved
  2. Run the test pipeline:

    cd pipeline
    python main.py -d test
  3. Add this pipeline to the web interface:

    pipeweb track planck $PLANCK_DB/.sqlstatus
  4. Set up an account in the access control list and launch the web server:

    pipeutils -a username -l 2 $PLANCK_DB/.sqlstatus
    pipeweb start
  5. You should be able to browse the results at http://localhost:8080

  6. Optional: set up an SSH tunnel with port forwarding to browse the pipeline from a remote host.

    File : .ssh/config:

    Host cluster
         HostName 134.158.189.3
         ProxyCommand ssh -W %h:%p lejeune@apcssh.in2p3.fr
         ForwardX11 yes
         Compression yes
         LocalForward 8080 134.158.189.3:8080

    Connect from the remote host to the cluster with ssh user@cluster. You should then be able to browse the pipeline from the remote web browser at http://localhost:8080
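The two environment variables from step 1 must be set before any run; a small helper like the following sketch (hypothetical, not part of the pipeline) can verify this:

```python
import os

REQUIRED_VARS = ["PLANCK_DB", "PLANCK_PIPE"]  # see step 1 above

def unset_vars(names, env=os.environ):
    """Return the variable names that are absent or empty in `env`."""
    return [n for n in names if not env.get(n)]

missing = unset_vars(REQUIRED_VARS)
if missing:
    print("Please export: " + ", ".join(missing))
```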

Pipeline flavors and schemes

All pipelines are controlled from the main script; the pipeline flavor is passed as an argument to the script.
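Based on the invocations shown in this guide (python main.py -d test, python main.py -d -r R2.00 Tcmbmap), the command line of main.py can be sketched as follows. The destination names and the meaning of -d are assumptions, not the actual implementation.

```python
import argparse

# Hypothetical reconstruction of main.py's command line; everything
# beyond the -d/-r flags and the flavor argument is an assumption.
parser = argparse.ArgumentParser(description="Run a SMICA pipeline flavor")
parser.add_argument("flavor", help="pipeline flavor, e.g. Tcmbmap, Pcmbmap, calib, test")
parser.add_argument("-d", dest="debug", action="store_true",
                    help="flag used in the examples of this guide (meaning assumed)")
parser.add_argument("-r", dest="release", default=None,
                    help="dataset release, e.g. R2.00, dx11dr2, ffp8")

args = parser.parse_args(["-d", "-r", "R2.00", "Tcmbmap"])
print(args.flavor, args.debug, args.release)
```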

T or P CMB map reconstruction

CMB map reconstruction (alias Tcmbmap and Pcmbmap flavors) consists of:

  1. Point source subtraction or masking of each frequency map (ps_fit)
  2. Spherical Harmonic Transformation (SHT) of the maps (map2alm)
  3. Fit of the spectral covariance matrices to a model made of CMB, foreground and noise components with the SMICA method (mixmat, powspec)
  4. Linear combination of the alms: the coefficients are computed from the fitted covariance matrices, then an inverse SHT of the combined alms gives the CMB map (alm2map).
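Step 4 can be illustrated with a minimal ILC-style sketch in numpy: for one multipole bin, weights are derived from a fitted covariance matrix R and the CMB column a of the mixing matrix, with unit response to the CMB. This is a simplification for illustration; the actual SMICA combination works per bin with the full fitted model.

```python
import numpy as np

def cmb_weights(R, a):
    """ILC-style weights for one multipole bin (sketch).

    R : (n_freq, n_freq) fitted spectral covariance matrix
    a : (n_freq,) CMB column of the fitted mixing matrix
    The weights satisfy w @ a == 1 (unit response to the CMB).
    """
    x = np.linalg.solve(R, a)
    return x / (a @ x)

def combine(alms, w):
    """Linear combination of per-frequency alms, shape (n_freq, n_alm)."""
    return w @ alms

# Toy check: identical channels with unit mixing recover the input signal.
R = np.eye(3)
a = np.ones(3)
w = cmb_weights(R, a)
signal = np.array([1.0, 2.0, 3.0])
alms = np.tile(signal, (3, 1))
rec = combine(alms, w)
```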
Figure (_images/Tcmbmap.png): Pipeline scheme for the T/P CMB map reconstruction

P dust and synchrotron map reconstruction

T calibration

The relative calibration pipeline (alias calib flavor) starts with the same pre-processing as the temperature CMB map reconstruction, but uses a large galactic mask (40% or 60% sky fraction).

The fit of the spectral covariance matrices is performed in one step (mixmat_calib), where the CMB mixing matrix gives the relative calibration factors for each frequency map. The fit can be performed on different bin ranges in order to assess the stability of the result.
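The extraction of relative calibration factors from the fitted CMB mixing column can be sketched as follows. This uses a plausible convention (first channel as reference); the normalization actually used by mixmat_calib may differ.

```python
import numpy as np

def relative_calibration(a_cmb, ref=0):
    """Relative calibration factors from the CMB mixing column (sketch).

    a_cmb : (n_freq,) CMB column of the fitted mixing matrix
    ref   : index of the reference channel, whose factor is fixed to 1
    """
    a = np.asarray(a_cmb, dtype=float)
    return a / a[ref]

# Example: channel 1 responds 2% high, channel 2 responds 1% low.
factors = relative_calibration([1.00, 1.02, 0.99])
```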

Figure (_images/calib.png): Pipeline scheme for the calibration pipeline

Data management

The pipeline uses a global environment variable named REDTRUCK, which points to a local mirror of the Planck component separation repository.

The rsync tool is used to download the needed data files if they are not present in the local directory.

The targeted dataset is set by the RELEASE environment variable (e.g. R2.00 for public data, or dx11dr2, ffp8 for Planck users).

Before running the pipeline, make sure that these two variables are set:

export REDTRUCK="/data/planck..." # large storage mount point for the local copy of the compsep repo
export RELEASE="R2.00"            # dataset to be processed by the pipeline

The RELEASE can be changed for each pipeline run with the -r (release) option:

python main.py -d -r R2.00 Tcmbmap

Data files management utilities are gathered in the planckdata module.
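A download helper of the kind described above might look like this sketch. The mirror layout ($REDTRUCK/$RELEASE/...) and the remote address are assumptions for illustration only; the actual helpers live in the planckdata module.

```python
import os
import subprocess

def local_path(relpath):
    """Location of a data file in the local mirror (layout assumed)."""
    return os.path.join(os.environ["REDTRUCK"], os.environ["RELEASE"], relpath)

def fetch(relpath, remote="cca.in2p3.fr:/compsep"):  # hypothetical remote
    """rsync `relpath` into the local mirror if it is not already there."""
    dest = local_path(relpath)
    if not os.path.exists(dest):
        destdir = os.path.dirname(dest)
        if not os.path.isdir(destdir):
            os.makedirs(destdir)
        src = remote + "/" + os.environ["RELEASE"] + "/" + relpath
        subprocess.check_call(["rsync", "-a", src, dest])
    return dest
```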

Pipeline tools

Functions which are common to several pipeline scripts are gathered in the pipetools module.

Those which use the smica Model and Component objects are gathered in the smicatools module.

Those which are specific to the pipelet environment are defined in the planckenv extension of the pipelet environment base class.