MIDAPACK - MIcrowave Data Analysis PACKage  1.1b
Parallel software tools for high performance CMB DA analysis
lower internal routines

Functions

int m2m (double *vA1, int *A1, int n1, double *vA2, int *A2, int n2)
int m2m_sum (double *vA1, int *A1, int n1, double *vA2, int *A2, int n2)
int card_or (int *A1, int n1, int *A2, int n2)
int card_and (int *A1, int n1, int *A2, int n2)
int set_or (int *A1, int n1, int *A2, int n2, int *A1orA2)
int set_and (int *A1, int n1, int *A2, int n2, int *A1andA2)
int butterfly_init (int *indices, int count, int **R, int *nR, int **S, int *nS, int **com_indices, int *com_count, int steps, MPI_Comm comm)
 Initialize tables for the butterfly-like communication scheme. This routine sets up the tables needed by the butterfly communication scheme. The sending and receiving tabs must be allocated beforehand (at least to the number of steps in the butterfly scheme). The double pointers are only partially allocated; the last allocation is performed inside the routine. com_indices and com_count are also allocated inside the routine and are therefore passed by reference; they hold the indices that have to be communicated and their number. The algorithm has two parts: the first identifies the intersections between the index sets of the processes, using three successive butterfly communication sweeps (bottom up, top down, and top down again); the second works locally to build the sets of indices to communicate.
int butterfly_reduce (int **R, int *nR, int nRmax, int **S, int *nS, int nSmax, double *val, int steps, MPI_Comm comm)
 Perform a sparse sum reduction (or mapped reduction) using a butterfly-like communication scheme.
int truebutterfly_reduce (int **R, int *nR, int nRmax, int **S, int *nS, int nSmax, double *val, int steps, MPI_Comm comm)
 Perform a sparse sum reduction (or mapped reduction) using a butterfly-like communication scheme (true means pairwise).
int sindex (int *T, int nT, int *A, int nA)
int omp_pindex (int *T, int nT, int *A, int nA)
int ssort (int *indices, int count, int flag)
int omp_psort (int *A, int nA, int flag)
int ring_init (int *indices, int count, int **R, int *nR, int **S, int *nS, int steps, MPI_Comm comm)
 Initialize tables for ring-like communication scheme.
int ring_reduce (int **R, int *nR, int nRmax, int **S, int *nS, int nSmax, double *val, double *res_val, int steps, MPI_Comm comm)
 Perform a sparse sum reduction (or mapped reduction) using a ring-like communication scheme.
int ring_nonblocking_reduce (int **R, int *nR, int **S, int *nS, double *val, double *res_val, int steps, MPI_Comm comm)
 Perform a sparse sum reduction (or mapped reduction) using a ring-like non-blocking communication scheme.
int ring_noempty_reduce (int **R, int *nR, int nneR, int **S, int *nS, int nneS, double *val, double *res_val, int steps, MPI_Comm comm)
 Perform a sparse sum reduction (or mapped reduction) using a ring-like non-blocking communication scheme that exchanges only non-empty messages.
int truebutterfly_init (int *indices, int count, int **R, int *nR, int **S, int *nS, int **com_indices, int *com_count, int steps, MPI_Comm comm)
 Initialize tables for the butterfly-like communication scheme (true means pairwise). This routine sets up the tables needed by the butterfly communication scheme. The sending and receiving tabs must be allocated beforehand (at least to the number of steps in the butterfly scheme). The double pointers are only partially allocated; the last allocation is performed inside the routine. com_indices and com_count are also allocated inside the routine and are therefore passed by reference; they hold the indices that have to be communicated and their number. The algorithm has two parts: the first identifies the intersections between the index sets of the processes, using three successive butterfly communication sweeps (bottom up, top down, and top down again); the second works locally to build the sets of indices to communicate.

Detailed Description

These are low-level internal routines, generally not intended for external users.


Function Documentation

int m2m ( double *  vA1,
int *  A1,
int  n1,
double *  vA2,
int *  A2,
int  n2 
)

Function m2m, for "map to map": extract values from one map (A1, vA1) and, for each pixel shared with another map (A2, vA2), assign the pixel value of vA1 to the corresponding pixel value of vA2.

Returns:
the number of elements shared between A1 and A2
See also:
m2m_sum

Definition at line 93 of file alm.c.
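
The following minimal usage sketch illustrates the documented behaviour with two small maps. The index and value arrays are made-up, and only the signature documented above is relied upon; the expected output is an assumption based on the description.

#include <stdio.h>

int m2m(double *vA1, int *A1, int n1, double *vA2, int *A2, int n2);  /* documented above */

int main(void) {
    int    A1[]  = {2, 5, 7, 9};                 /* source map indices (ascending)  */
    double vA1[] = {0.2, 0.5, 0.7, 0.9};         /* source map values               */
    int    A2[]  = {1, 5, 9};                    /* target map indices (ascending)  */
    double vA2[] = {0.0, 0.0, 0.0};              /* target map values, overwritten  */

    /* pixels 5 and 9 are shared, so two values should be copied into vA2 */
    int shared = m2m(vA1, A1, 4, vA2, A2, 3);
    printf("shared = %d, vA2 = {%g, %g, %g}\n", shared, vA2[0], vA2[1], vA2[2]);
    return 0;
}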

int m2m_sum ( double *  vA1,
int *  A1,
int  n1,
double *  vA2,
int *  A2,
int  n2 
)

Function m2m_sum, for "sum map to map": extract values from one map (A1, vA1) and, for each pixel shared with another map (A2, vA2), add the pixel value of vA1 to the corresponding pixel value of vA2.

Returns:
the number of elements shared between A1 and A2
See also:
m2m

Definition at line 118 of file alm.c.

int card_or ( int *  A1,
int  n1,
int *  A2,
int  n2 
)

Compute $ card(A_1 \cup A_2) $. A1 and A2 should be two monotonic sets in ascending order, of sizes n1 and n2.

Parameters:
n1 : number of elements in A1
A1 : set of indices
n2 : number of elements in A2
A2 : set of indices
Returns:
size of the union

Definition at line 60 of file als.c.

int card_and ( int *  A1,
int  n1,
int *  A2,
int  n2 
)

Compute $ card(A_1 \cap A_2) $. A1 and A2 should be two monotonic sets in ascending order, of sizes n1 and n2.

Parameters:
n1 : number of elements in A1
A1 : set of indices
n2 : number of elements in A2
A2 : set of indices
Returns:
size of the intersection

Definition at line 89 of file als.c.
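
Since both inputs are monotonic, the cardinality of the intersection can be obtained with a single two-pointer pass. The sketch below is an assumed illustration of that technique, not necessarily the implementation in als.c.

int card_and_sketch(int *A1, int n1, int *A2, int n2) {
    int i = 0, j = 0, card = 0;
    while (i < n1 && j < n2) {
        if (A1[i] == A2[j]) { card++; i++; j++; }   /* common element       */
        else if (A1[i] < A2[j]) i++;                /* advance smaller side */
        else j++;
    }
    return card;
}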

int set_or ( int *  A1,
int  n1,
int *  A2,
int  n2,
int *  A1orA2 
)

Compute $ A_1 \cup A_2 $. A1 and A2 should be two monotonic sets in ascending order. It requires the sizes of these two sets, n1 and n2. A1orA2 has to be previously allocated.

Parameters:
n1 : number of elements in A1
A1 : set of indices
n2 : number of elements in A2
A2 : set of indices
A1orA2 : address of the (preallocated) output set
Returns:
number of elements in A1orA2

Definition at line 118 of file als.c.
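
For two ascending, duplicate-free sets the union can likewise be built in one linear pass. The hypothetical sketch below illustrates that technique (it is not claimed to be als.c's code); the output array must already be large enough, e.g. sized with card_or.

int set_or_sketch(int *A1, int n1, int *A2, int n2, int *A1orA2) {
    int i = 0, j = 0, k = 0;
    while (i < n1 && j < n2) {
        if (A1[i] == A2[j])     { A1orA2[k++] = A1[i++]; j++; }  /* shared element, keep once */
        else if (A1[i] < A2[j]) { A1orA2[k++] = A1[i++]; }
        else                    { A1orA2[k++] = A2[j++]; }
    }
    while (i < n1) A1orA2[k++] = A1[i++];    /* copy remaining tail of A1 */
    while (j < n2) A1orA2[k++] = A2[j++];    /* copy remaining tail of A2 */
    return k;                                /* number of elements in the union */
}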

int set_and ( int *  A1,
int  n1,
int *  A2,
int  n2,
int *  A1andA2 
)

Compute $ A_1 \cap A_2 $. A1 and A2 should be two monotonic sets in ascending order. It requires the sizes of these two sets, n1 and n2. A1andA2 has to be previously allocated.

Parameters:
n1 : number of elements in A1
A1 : set of indices
n2 : number of elements in A2
A2 : set of indices
A1andA2 : address of the (preallocated) output set
Returns:
number of elements in A1andA2

Definition at line 162 of file als.c.
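
Because the output set must be allocated by the caller, card_and and set_and are naturally used together, as in the hedged usage sketch below (signatures are taken from this page; error handling is omitted).

#include <stdlib.h>

int card_and(int *A1, int n1, int *A2, int n2);
int set_and(int *A1, int n1, int *A2, int n2, int *A1andA2);

void intersect_example(int *A1, int n1, int *A2, int n2) {
    int n = card_and(A1, n1, A2, n2);           /* size of the intersection   */
    int *A1andA2 = malloc(n * sizeof(int));     /* preallocate the output set */
    set_and(A1, n1, A2, n2, A1andA2);           /* fill it                    */
    /* ... use A1andA2 ... */
    free(A1andA2);
}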

int butterfly_init ( int *  indices,
int  count,
int **  R,
int *  nR,
int **  S,
int *  nS,
int **  com_indices,
int *  com_count,
int  steps,
MPI_Comm  comm 
)

Initialize tables for the butterfly-like communication scheme. This routine sets up the tables needed by the butterfly communication scheme. The sending and receiving tabs must be allocated beforehand (at least to the number of steps in the butterfly scheme). The double pointers are only partially allocated; the last allocation is performed inside the routine. com_indices and com_count are also allocated inside the routine and are therefore passed by reference; they hold the indices that have to be communicated and their number. The algorithm has two parts: the first identifies the intersections between the index sets of the processes, using three successive butterfly communication sweeps (bottom up, top down, and top down again); the second works locally to build the sets of indices to communicate.

Parameters:
indices : set of (monotonic) indices handled by the process
count : number of elements
R : pointer to receiving maps
nR : array of number of elements in each receiving map
S : pointer to sending maps
nS : array of number of elements in each sending map
com_indices : set of (monotonic) indices communicated by the process
com_count : number of elements
steps : number of communication exchanges in the butterfly scheme
comm : MPI communicator
Returns:
0 if no error

Definition at line 37 of file butterfly.c.

int butterfly_reduce ( int **  R,
int *  nR,
int  nRmax,
int **  S,
int *  nS,
int  nSmax,
double *  val,
int  steps,
MPI_Comm  comm 
)

Perform a sparse sum reduction (or mapped reduction) using a butterfly-like communication scheme.

Parameters:
R : pointer to receiving maps
nR : array of number of elements in each receiving map
nRmax : maximum size of a received message
S : pointer to sending maps
nS : array of number of elements in each sending map
nSmax : maximum size of a sent message
val : set of values (typically the values associated with the communicated indices)
steps : number of communication exchanges in the butterfly scheme
comm : MPI communicator
Returns:
0 if no error

Definition at line 209 of file butterfly.c.
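
butterfly_init and butterfly_reduce are typically used as a pair: the former builds the per-step send/receive maps, the latter reduces the values attached to the communicated indices. The sketch below relies only on the signatures documented on this page; the choice steps = log2(communicator size), the use of m2m to fill com_val, and the omission of error handling and deallocation are assumptions made for illustration, not documented requirements.

#include <math.h>
#include <stdlib.h>
#include <mpi.h>

int m2m(double *vA1, int *A1, int n1, double *vA2, int *A2, int n2);
int butterfly_init(int *indices, int count, int **R, int *nR, int **S, int *nS,
                   int **com_indices, int *com_count, int steps, MPI_Comm comm);
int butterfly_reduce(int **R, int *nR, int nRmax, int **S, int *nS, int nSmax,
                     double *val, int steps, MPI_Comm comm);

void butterfly_example(int *indices, double *values, int count, MPI_Comm comm) {
    int size;
    MPI_Comm_size(comm, &size);
    int steps = (int) log2((double) size);      /* assumption: power-of-two communicator */

    int **R  = malloc(steps * sizeof(int *));   /* receiving maps, one per step */
    int **S  = malloc(steps * sizeof(int *));   /* sending maps, one per step   */
    int  *nR = malloc(steps * sizeof(int));
    int  *nS = malloc(steps * sizeof(int));
    int  *com_indices, com_count;               /* allocated inside butterfly_init */

    butterfly_init(indices, count, R, nR, S, nS, &com_indices, &com_count, steps, comm);

    /* attach the local values to the communicated indices (one possible way) */
    double *com_val = calloc(com_count, sizeof(double));
    m2m(values, indices, count, com_val, com_indices, com_count);

    /* nRmax / nSmax: largest receive / send message over all steps */
    int nRmax = 0, nSmax = 0;
    for (int k = 0; k < steps; k++) {
        if (nR[k] > nRmax) nRmax = nR[k];
        if (nS[k] > nSmax) nSmax = nS[k];
    }
    butterfly_reduce(R, nR, nRmax, S, nS, nSmax, com_val, steps, comm);
    /* com_val now holds the reduced values at com_indices */
}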

int truebutterfly_reduce ( int **  R,
int *  nR,
int  nRmax,
int **  S,
int *  nS,
int  nSmax,
double *  val,
int  steps,
MPI_Comm  comm 
)

Perform a sparse sum reduction (or mapped reduction) using a butterfly-like communication scheme (true means pairwise).

Parameters:
R : pointer to receiving maps
nR : array of number of elements in each receiving map
nRmax : maximum size of a received message
S : pointer to sending maps
nS : array of number of elements in each sending map
nSmax : maximum size of a sent message
val : set of values (typically the values associated with the communicated indices)
steps : number of communication exchanges in the butterfly scheme
comm : MPI communicator
Returns:
0 if no error

Definition at line 430 of file butterfly_extra.c.

int sindex ( int *  T,
int  nT,
int *  A,
int  nA 
)

Sequential reindexing

Parameters:
T : monotonic array
nT : number of indices
A : tab to reindex
nA : number of elements to reindex
Returns:
array of indices

Definition at line 18 of file cindex.c.
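
As a rough illustration of what reindexing means here, the hypothetical sketch below replaces each entry of A by its position in the monotonic table T, found by binary search; this is an assumed implementation for illustration, not necessarily cindex.c's actual code.

int sindex_sketch(int *T, int nT, int *A, int nA) {
    for (int i = 0; i < nA; i++) {
        int lo = 0, hi = nT - 1;
        while (lo <= hi) {                      /* binary search of A[i] in T */
            int mid = lo + (hi - lo) / 2;
            if (T[mid] == A[i]) { A[i] = mid; break; }
            else if (T[mid] < A[i]) lo = mid + 1;
            else hi = mid - 1;
        }
    }
    return 0;
}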

int omp_pindex ( int *  T,
int  nT,
int *  A,
int  nA 
)

Multithreaded (OpenMP) reindexing

Parameters:
T : monotonic array
nT : number of indices
A : tab to reindex
nA : number of elements to reindex
Returns:
array of indices

Definition at line 36 of file cindex.c.

int ssort ( int *  indices,
int  count,
int  flag 
)

Sort a set of indices and merge its redundant elements, using a specified method. The indices tab, initially an arbitrary set of integers, becomes a monotonic set. Available methods:

  • quick sort
  • bubble sort
  • insertion sort
  • counting sort
  • shell sort
Parameters:
indices : tab (modified)
count : number of indices
flag : sorting method
Returns:
number of sorted elements

Definition at line 161 of file csort.c.
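
The sketch below illustrates the documented effect (sort, then merge duplicates so the tab becomes a monotonic set and return the remaining count) using the standard-library qsort; it is not csort.c's implementation, and the method selection via flag is left out.

#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int sort_and_merge(int *indices, int count) {
    if (count == 0) return 0;
    qsort(indices, count, sizeof(int), cmp_int);
    int n = 1;                                   /* keep the first element      */
    for (int i = 1; i < count; i++)              /* compact away duplicates     */
        if (indices[i] != indices[n - 1])
            indices[n++] = indices[i];
    return n;                                    /* number of distinct elements */
}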

int omp_psort ( int *  A,
int  nA,
int  flag 
)

Sort a set of indices and merge its redundant elements, using OpenMP. The indices tab, initially an arbitrary set of integers, becomes a monotonic set. The algorithm is divided into two steps:

  • each thread sorts, in parallel, a subpart of the set using a specified method.
  • subsets obtained are merged successively in a binary tree manner.

Available methods for the fully parallel step :

  • quick sort
  • bubble sort
  • insertion sort
  • counting sort
  • shell sort
Parameters:
A : tab (modified)
nA : number of elements to sort
flag : sorting method
Returns:
number of sorted elements

Definition at line 291 of file csort.c.
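
The two-step scheme described above can be sketched as follows: each OpenMP thread sorts a contiguous chunk, the sorted chunks are then merged pairwise in a binary-tree fashion, and a final pass merges duplicates. This is an assumed illustration of the described algorithm, not csort.c's implementation, and the per-method flag is ignored.

#include <stdlib.h>
#include <string.h>
#include <omp.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

static void merge(int *dst, const int *a, int na, const int *b, int nb) {
    int i = 0, j = 0, k = 0;
    while (i < na && j < nb) dst[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    while (i < na) dst[k++] = a[i++];
    while (j < nb) dst[k++] = b[j++];
}

int psort_sketch(int *A, int nA) {
    int nthreads = omp_get_max_threads();
    int chunk = (nA + nthreads - 1) / nthreads;

    /* step 1: each thread sorts its own chunk in parallel */
    #pragma omp parallel for
    for (int t = 0; t < nthreads; t++) {
        int lo = t * chunk;
        int hi = (lo + chunk < nA) ? lo + chunk : nA;
        if (hi > lo) qsort(A + lo, hi - lo, sizeof(int), cmp_int);
    }

    /* step 2: merge neighbouring sorted chunks in a binary-tree manner */
    int *tmp = malloc(nA * sizeof(int));
    for (int width = chunk; width < nA; width *= 2) {
        #pragma omp parallel for
        for (int lo = 0; lo < nA; lo += 2 * width) {
            int mid = (lo + width < nA) ? lo + width : nA;
            int hi  = (lo + 2 * width < nA) ? lo + 2 * width : nA;
            merge(tmp + lo, A + lo, mid - lo, A + mid, hi - mid);
        }
        memcpy(A, tmp, nA * sizeof(int));
    }
    free(tmp);

    /* final pass: merge duplicates so the tab becomes a monotonic set */
    int n = (nA > 0);
    for (int i = 1; i < nA; i++)
        if (A[i] != A[n - 1]) A[n++] = A[i];
    return n;
}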

int ring_init ( int *  indices,
int  count,
int **  R,
int *  nR,
int **  S,
int *  nS,
int  steps,
MPI_Comm  comm 
)

Initialize tables for ring-like communication scheme.


This routine sets up the tables needed by the ring communication scheme. The sending and receiving tabs must be allocated beforehand (at least to the number of steps in the ring scheme). The double pointers are only partially allocated; the last allocation is performed inside the routine (only for R; the S tables are just pointers).

Parameters:
indices : set of (monotonic) indices handled by the process
count : number of elements
R : pointer to receiving maps
nR : array of number of elements in each receiving map
S : pointer to sending maps
nS : array of number of elements in each sending map
steps : number of communication exchanges in the ring scheme
comm : MPI communicator
Todo:
The ring loop and ring tables are indexed from 1 to size; they should be shifted to run from 0 to size-1.
Returns:
0 if no error

Definition at line 32 of file ring.c.

int ring_reduce ( int **  R,
int *  nR,
int  nRmax,
int **  S,
int *  nS,
int  nSmax,
double *  val,
double *  res_val,
int  steps,
MPI_Comm  comm 
)

Perform a sparse sum reduction (or mapped reduction) using a ring-like communication scheme.

Parameters:
R : pointer to receiving maps
nR : array of number of elements in each receiving map
nRmax : maximum size of a received message
S : pointer to sending maps
nS : array of number of elements in each sending map
nSmax : maximum size of a sent message
val : set of values (typically the values associated with the communicated indices)
res_val : array receiving the reduced values
steps : number of communication exchanges in the ring scheme
comm : MPI communicator
Returns:
0 if no error

Definition at line 82 of file ring.c.
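
A usage sketch analogous to the butterfly example above, based only on the signatures documented on this page. The assumption that steps equals the communicator size, the sizing of res_val, and the omission of error handling and deallocation are illustrative choices, not documented requirements.

#include <stdlib.h>
#include <mpi.h>

int ring_init(int *indices, int count, int **R, int *nR, int **S, int *nS,
              int steps, MPI_Comm comm);
int ring_reduce(int **R, int *nR, int nRmax, int **S, int *nS, int nSmax,
                double *val, double *res_val, int steps, MPI_Comm comm);

void ring_example(int *indices, double *val, int count, MPI_Comm comm) {
    int steps;
    MPI_Comm_size(comm, &steps);                /* assumption: one step per process */

    int **R  = malloc(steps * sizeof(int *));   /* receiving maps, one per step */
    int **S  = malloc(steps * sizeof(int *));   /* sending maps, one per step   */
    int  *nR = calloc(steps, sizeof(int));
    int  *nS = calloc(steps, sizeof(int));
    ring_init(indices, count, R, nR, S, nS, steps, comm);

    int nRmax = 0, nSmax = 0;
    for (int k = 0; k < steps; k++) {
        if (nR[k] > nRmax) nRmax = nR[k];
        if (nS[k] > nSmax) nSmax = nS[k];
    }
    double *res_val = calloc(count, sizeof(double));  /* assumed: same length as val */
    ring_reduce(R, nR, nRmax, S, nS, nSmax, val, res_val, steps, comm);
    /* res_val now holds the reduced values for the local indices */
}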

int ring_nonblocking_reduce ( int **  R,
int *  nR,
int **  S,
int *  nS,
double *  val,
double *  res_val,
int  steps,
MPI_Comm  comm 
)

Perform a sparse sum reduction (or mapped reduction) using a ring-like non-blocking communication scheme.

Parameters:
R : pointer to receiving maps
nR : array of number of elements in each receiving map
S : pointer to sending maps
nS : array of number of elements in each sending map
val : set of values (typically the values associated with the communicated indices)
res_val : array receiving the reduced values
steps : number of communication exchanges in the ring scheme
comm : MPI communicator
Returns:
0 if no error

Definition at line 126 of file ring.c.

int ring_noempty_reduce ( int **  R,
int *  nR,
int  nneR,
int **  S,
int *  nS,
int  nneS,
double *  val,
double *  res_val,
int  steps,
MPI_Comm  comm 
)

Perform a sparse sum reduction (or mapped reduction) using a ring-like non-blocking communication scheme that exchanges only non-empty messages.

Parameters:
R : pointer to receiving maps
nR : array of number of elements in each receiving map
nneR : number of non-empty receiving messages
S : pointer to sending maps
nS : array of number of elements in each sending map
nneS : number of non-empty sending messages
val : set of values (typically the values associated with the communicated indices)
res_val : array receiving the reduced values
steps : number of communication exchanges in the ring scheme
comm : MPI communicator
Returns:
0 if no error

Definition at line 185 of file ring.c.

int truebutterfly_init ( int *  indices,
int  count,
int **  R,
int *  nR,
int **  S,
int *  nS,
int **  com_indices,
int *  com_count,
int  steps,
MPI_Comm  comm 
)

Initialize tables for the butterfly-like communication scheme (true means pairwise). This routine sets up the tables needed by the butterfly communication scheme. The sending and receiving tabs must be allocated beforehand (at least to the number of steps in the butterfly scheme). The double pointers are only partially allocated; the last allocation is performed inside the routine. com_indices and com_count are also allocated inside the routine and are therefore passed by reference; they hold the indices that have to be communicated and their number. The algorithm has two parts: the first identifies the intersections between the index sets of the processes, using three successive butterfly communication sweeps (bottom up, top down, and top down again); the second works locally to build the sets of indices to communicate.

Parameters:
indices : set of (monotonic) indices handled by the process
count : number of elements
R : pointer to receiving maps
nR : array of number of elements in each receiving map
S : pointer to sending maps
nS : array of number of elements in each sending map
com_indices : set of (monotonic) indices communicated by the process
com_count : number of elements
steps : number of communication exchanges in the butterfly scheme
comm : MPI communicator
Returns:
0 if no error

Definition at line 37 of file truebutterfly.c.