API¶
A collection of tools widely used in Weak Gravitational Lensing data analyses.
Weak Lensing image manipulation¶
Convergence maps¶
-
class lenstools.image.convergence.ConvergenceMap(data, angle, masked=False, **kwargs)[source]¶
A class that handles 2D convergence maps and allows one to compute their topological descriptors (power spectrum, peak counts, Minkowski functionals)
>>> from lenstools import ConvergenceMap
>>> from lenstools.defaults import load_fits_default_convergence
>>> test_map = ConvergenceMap.load("map.fit",format=load_fits_default_convergence)
>>> imshow(test_map.data)
-
bispectrum
(l_edges, ratio=0.5, configuration='equilateral', scale=None)¶ Calculates the bispectrum of the map in the equilateral or folded configuration
Parameters: - l_edges (array) – Multipole bin edges: these are the side of the triangle in the equilateral configuration or the base of the triangle in the folded configuration
- ratio (float.) – ratio between one of the triangle sides and the base in the folded configuration. Must be between 0 and 1
- configuration (str.) – must be either “equilateral” or “folded”
- scale (callable.) – scaling to apply to the cube of the Fourier pixels before harmonic azimuthal averaging. Must be a function that takes the array of multipole magnitudes as an input and returns an array of real positive numbers
Returns: (multipoles, bispectrum at multipoles)
Return type: tuple.
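A minimal usage sketch; the file name and the multipole binning below are illustrative:
>>> import numpy as np
>>> test_map = ConvergenceMap.load("map.fit")
>>> l_edges = np.arange(200.0,5000.0,200.0)
>>> l,b_eq = test_map.bispectrum(l_edges,configuration="equilateral")
>>> l,b_fold = test_map.bispectrum(l_edges,ratio=0.5,configuration="folded")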
-
boundary
¶ Computes the boundaries of the masked regions, defined as the regions in which the convergence is still well defined but the first and second derivatives are not
Returns: array of bool. of the same shape as the map, with True values along the boundaries
-
countModes
(l_edges)¶ Counts the number of modes in multipole space available to each bin specified in l_edges
Parameters: l_edges (array) – Multipole bin edges
Returns: number of modes available to each bin
Return type: array
Raises: AssertionError if l_edges are not provided
-
cross
(other, statistic='power_spectrum', **kwargs)¶ Measures a cross statistic between maps
Parameters: - other (ConvergenceMap instance) – The other convergence map
- statistic (string or callable) – the cross statistic to measure (default is ‘power_spectrum’)
- kwargs (dict.) – the keyword arguments are passed to the statistic (when callable)
Returns: tuple – (l – array,Pl – array) = (multipole moments, cross power spectrum at multipole moments) if the statistic is the power spectrum, otherwise whatever statistic() returns on call
Raises: AssertionError if the other map has not the same shape as the input one
>>> test_map = ConvergenceMap.load("map.fit",format=load_fits_default_convergence)
>>> other_map = ConvergenceMap.load("map2.fit",format=load_fits_default_convergence)
>>> l_edges = np.arange(200.0,5000.0,200.0)
>>> l,Pl = test_map.cross(other_map,l_edges=l_edges)
-
cutRegion
(extent)¶ Cut a subset of the map, keeping the same resolution (warning: tested on square cuts only)
Parameters: extent (array with units) – region in the map to select (xmin,xmax,ymin,ymax), must have units
Returns: new ConvergenceMap instance that encloses the selected region
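A usage sketch, assuming the map side is at least 1 degree; the file name and the extent are illustrative:
>>> import numpy as np
>>> from astropy import units as u
>>> test_map = ConvergenceMap.load("map.fit")
>>> extent = np.array([0.0,1.0,0.0,1.0]) * u.deg
>>> sub_map = test_map.cutRegion(extent)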
-
gaussianPeakHistogram
(thresholds, norm=False, fig=None, ax=None, **kwargs)¶ Plot the Gaussian field approximation (Bond 1987) to the peak histogram
-
getEll
()¶ Get the values of the multipoles in real FFT space
Returns: ell array with real FFT shape
Return type: array.
-
getValues
(x, y)¶ Extract the map values at the requested (x,y) positions; this is implemented using the numpy fast indexing routines, so the formats of x and y must follow the numpy advanced indexing rules. Periodic boundary conditions are enforced
Parameters: - x (numpy array or quantity) – x coordinates at which to extract the map values (if unitless these are interpreted as radians)
- y (numpy array or quantity) – y coordinates at which to extract the map values (if unitless these are interpreted as radians)
Returns: numpy array with the map values at the specified positions, with the same shape as x and y
Raises: IndexError if the formats of x and y are not the proper ones
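A usage sketch; the file name and the coordinates are illustrative (periodic boundary conditions take care of positions outside the map):
>>> import numpy as np
>>> from astropy import units as u
>>> test_map = ConvergenceMap.load("map.fit")
>>> x = np.array([0.5,1.0,1.5]) * u.deg
>>> y = np.array([0.5,1.0,1.5]) * u.deg
>>> values = test_map.getValues(x,y)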
-
gradLaplacian
(x=None, y=None)¶
-
gradient
(x=None, y=None, save=True)¶ Computes the gradient of the map and sets the gradient_x,gradient_y attributes accordingly
Parameters: - x (array with units) – optional, x positions at which to evaluate the gradient
- y (array with units) – optional, y positions at which to evaluate the gradient
- save (bool.) – if True saves the gradient as attributes
Returns: tuple – (gradient_x,gradient_y)
>>> test_map = ConvergenceMap.load("map.fit")
>>> gx,gy = test_map.gradient()
-
hessian
(x=None, y=None, save=True)¶ Computes the hessian of the map and sets the hessian_xx,hessian_yy,hessian_xy attributes accordingly
Parameters: - x (array with units) – optional, x positions at which to evaluate the hessian
- y (array with units) – optional, y positions at which to evaluate the hessian
- save (bool.) – if True saves the hessian as attributes
Returns: tuple – (hessian_xx,hessian_yy,hessian_xy)
>>> test_map = ConvergenceMap.load("map.fit")
>>> hxx,hyy,hxy = test_map.hessian()
-
info
¶ Displays some of the information stored in the map (mainly resolution)
-
classmethod
load
(filename, format=None, **kwargs)¶ This class method reads the map from a data file, in various formats
Parameters: - filename (str.) – name of the file in which the map is saved
- format (str. or callable) – the format of the file in which the map is saved (can be a callable too); if None, it’s detected automatically from the filename
- kwargs (dict.) – the keyword arguments are passed to the format (if callable)
Returns: Spin0 instance with the loaded map
-
locatePeaks
(thresholds, norm=False)¶ Locate the peaks in the map
Parameters: - thresholds (array) – thresholds extremes that define the binning of the peak histogram
- norm (bool.) – normalization; if set to True, interprets the thresholds array as units of sigma (the map standard deviation)
Returns: tuple – (peak height – array, peak locations – array with units)
Raises: AssertionError if thresholds array is not provided
>>> test_map = ConvergenceMap.load("map.fit")
>>> thresholds = np.arange(test_map.data.min(),test_map.data.max(),0.05)
>>> height,positions = test_map.locatePeaks(thresholds)
-
mask
(mask_profile, inplace=False)¶ Applies a mask to the convergence map: all masked pixels are given a nan value because they cannot be used in the calculations
Parameters: - mask_profile (array or ConvergenceMap instance) – profile of the mask, must be an array of 1-byte integers that are either 0 (if the pixel is masked) or 1 (if the pixel is not masked). Must be of the same shape as the original map
- inplace (bool.) – if True the masking is performed in place and the original map is lost, otherwise a new instance of ConvergenceMap with the masked map is returned
Returns: the masked convergence map if inplace is False, otherwise a float corresponding to the masked fraction of the map
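A sketch that masks one corner of the map; the file name and the masked region are illustrative:
>>> import numpy as np
>>> test_map = ConvergenceMap.load("map.fit")
>>> mask_profile = np.ones(test_map.data.shape,dtype=np.int8)
>>> mask_profile[:128,:128] = 0
>>> masked_map = test_map.mask(mask_profile)
>>> masked_fraction = test_map.mask(mask_profile,inplace=True)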
-
maskBoundaries
()¶ Computes the mask boundaries defined in the following way: a boundary is a region where the convergence value is defined, but the gradients are not defined.
Returns: float. : perimeter/area ratio of the mask boundaries
-
maskedFraction
¶ Computes the masked fraction of the map
Returns: float, the masked fraction of the map
-
mean
()¶ Computes the mean value of the pixels, taking into account any masking
Returns: float.
-
minkowskiFunctionals
(thresholds, norm=False)¶ Measures the three Minkowski functionals (area, perimeter and genus characteristic) of the specified map excursion sets
Parameters: - thresholds (array) – thresholds that define the excursion sets to consider
- norm (bool.) – normalization; if set to True, interprets the thresholds array as units of sigma (the map standard deviation)
Returns: tuple – (nu – array, V0 – array, V1 – array, V2 – array) nu are the bins midpoints and V are the Minkowski functionals
Raises: AssertionError if thresholds array is not provided
>>> test_map = ConvergenceMap.load("map.fit")
>>> thresholds = np.arange(-2.0,2.0,0.2)
>>> nu,V0,V1,V2 = test_map.minkowskiFunctionals(thresholds,norm=True)
-
moments
(connected=False, dimensionless=False)¶ Measures the first nine moments of the convergence map (two quadratic, three cubic and four quartic)
Parameters: - connected (bool.) – if set to True returns only the connected part of the moments
- dimensionless (bool.) – if set to True returns the dimensionless moments, normalized by the appropriate powers of the variance
Returns: array – (sigma0,sigma1,S0,S1,S2,K0,K1,K2,K3)
>>> test_map = ConvergenceMap.load("map.fit")
>>> var0,var1,sk0,sk1,sk2,kur0,kur1,kur2,kur3 = test_map.moments()
>>> sk0,sk1,sk2 = test_map.moments(dimensionless=True)[2:5]
>>> kur0,kur1,kur2,kur3 = test_map.moments(connected=True,dimensionless=True)[5:]
-
pdf
(thresholds, norm=False)¶ Computes the one point probability distribution function of the convergence map
Parameters: - thresholds (array) – thresholds extremes that define the binning of the pdf
- norm (bool.) – normalization; if set to True, interprets the thresholds array as units of sigma (the map standard deviation)
Returns: tuple – (threshold midpoints – array, pdf normalized at the midpoints – array)
Raises: AssertionError if thresholds array is not provided
>>> test_map = ConvergenceMap.load("map.fit")
>>> thresholds = np.arange(test_map.data.min(),test_map.data.max(),0.05)
>>> nu,p = test_map.pdf(thresholds)
-
peakCount
(thresholds, norm=False)¶ Counts the peaks in the map
Parameters: - thresholds (array) – thresholds extremes that define the binning of the peak histogram
- norm (bool.) – normalization; if set to True, interprets the thresholds array as units of sigma (the map standard deviation)
Returns: tuple – (threshold midpoints – array, differential peak counts at the midpoints – array)
Raises: AssertionError if thresholds array is not provided
>>> test_map = ConvergenceMap.load("map.fit")
>>> thresholds = np.arange(test_map.data.min(),test_map.data.max(),0.05)
>>> nu,peaks = test_map.peakCount(thresholds)
-
peakDistances
(thresholds, norm=False)¶ Compute the pairwise distance between local maxima on the map
Parameters: - thresholds (array) – thresholds extremes that define the binning of the peak histogram
- norm (bool.) – normalization; if set to True, interprets the thresholds array as units of sigma (the map standard deviation)
Returns: pairwise distance
Return type: quantity
-
peakHistogram
(thresholds, norm=False, fig=None, ax=None, **kwargs)¶ Plot the peak histogram of the map
-
peakTwoPCF
(thresholds, scales, norm=False)¶ Compute the two point function of the peaks on the map
Parameters: - thresholds (array) – thresholds extremes that define the binning of the peak histogram
- scales (quantity) – radial binning of the 2pcf
- norm (bool.) – normalization; if set to True, interprets the thresholds array as units of sigma (the map standard deviation)
Returns: (bin centers, peak 2pcf)
Return type: tuple
-
plotPDF
(thresholds, norm=False, fig=None, ax=None, **kwargs)¶ Plot the PDF of the map
-
plotPowerSpectrum
(l_edges, show_angle=True, angle_unit=Unit("arcmin"), fig=None, ax=None, **kwargs)¶ Plot the power spectrum of the map
-
powerSpectrum
(l_edges, scale=None)¶ Measures the power spectrum of the convergence map at the multipole moments specified in the input
Parameters: - l_edges (array) – Multipole bin edges
- scale (callable.) – scaling to apply to the square of the Fourier pixels before harmonic azimuthal averaging. Must be a function that takes the array of multipole magnitudes as an input and returns an array of real numbers
Returns: (l – array,Pl – array) = (binned multipole moments, power spectrum at multipole moments)
Return type: tuple.
>>> test_map = ConvergenceMap.load("map.fit")
>>> l_edges = np.arange(200.0,5000.0,200.0)
>>> l,Pl = test_map.powerSpectrum(l_edges)
-
save
(filename, format=None, double_precision=False)¶ Saves the map to an external file in the specified format (only FITS implemented so far)
Parameters: - filename (str.) – name of the file on which to save the map
- format (str.) – format of the file, only FITS implemented so far; if None, it’s detected automatically from the filename
- double_precision (bool.) – if True saves the map in double precision
-
savefig
(filename)¶ Saves the map visualization to an external file
Parameters: filename (str.) – name of the file on which to save the map
-
setAngularUnits
(unit)¶ Convert the angular units of the map to the desired unit
Parameters: unit (astropy units) – astropy unit instance to which to perform the conversion
-
smooth
(scale_angle, kind='gaussian', inplace=False, **kwargs)¶ Performs a smoothing operation on the convergence map
Parameters: - scale_angle (quantity) – size of the smoothing kernel (must have units)
- kind (str.) – type of smoothing to be performed. Select “gaussian” for regular Gaussian smoothing in real space or “gaussianFFT” if you want the smoothing to be performed via FFTs (advised for large scale_angle)
- inplace (bool.) – if set to True performs the smoothing in place overwriting the old convergence map
- kwargs (dict.) – the keyword arguments are passed to the filter function
Returns: ConvergenceMap instance (or None if inplace is True)
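A usage sketch; the file name and the smoothing scales are illustrative:
>>> from astropy import units as u
>>> test_map = ConvergenceMap.load("map.fit")
>>> smoothed_map = test_map.smooth(1.0*u.arcmin,kind="gaussian")
>>> test_map.smooth(5.0*u.arcmin,kind="gaussianFFT",inplace=True)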
-
std
()¶ Computes the standard deviation of the pixels, taking into account any masking
Returns: float.
-
twoPointFunction
(algorithm='FFT', **kwargs)¶ Computes the two point function of the convergence
Parameters: - algorithm (str.) – algorithm used to measure the two point function. Can be “FFT”
- kwargs (dict.) – the accepted keyword argument is “theta”, which specifies the angles at which to measure the 2pcf; if not provided, the angles are computed automatically
Returns: (theta,2pcf(theta))
Return type: tuple.
-
visualize
(fig=None, ax=None, colorbar=False, cmap='viridis', cbar_label=None, **kwargs)¶ Visualize the convergence map; the kwargs are passed to imshow
-
Shear maps and catalogs¶
-
class lenstools.image.shear.ShearMap(data, angle, **kwargs)[source]¶
A class that handles 2D shear maps and allows one to perform a set of operations on them
>>> from lenstools import ShearMap
>>> test = ShearMap.load("shear.fit",format=lenstools.defaults.load_fits_default_shear)
>>> test.side_angle
1.95
>>> test.data #The actual map values
-
addSourceEllipticity
(es, rs_correction=True, inplace=False)[source]¶ Produce a mock shear map that includes the intrinsic source ellipticity
Parameters: - es (array) – array of intrinsic ellipticities, must have the same shape as the shear map
- rs_correction (bool.) – include denominator (1+g*es) in the shear correction
- inplace (bool.) – perform the operation in place
Returns: shear estimate with the added intrinsic ellipticity
Return type:
-
convergence
()[source]¶ Reconstructs the convergence from the E component of the shear (Kaiser Squires method)
Returns: new ConvergenceMap instance
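A usage sketch that reconstructs the convergence and measures its power spectrum; the file name and the binning are illustrative:
>>> import numpy as np
>>> test_map = ShearMap.load("shear.fit")
>>> conv_map = test_map.convergence()
>>> l_edges = np.arange(200.0,5000.0,200.0)
>>> l,Pl = conv_map.powerSpectrum(l_edges)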
-
eb_power_spectrum
(l_edges, scale=None)¶ Decomposes the shear map into its E and B modes components and returns the respective power spectral densities at the specified multipole moments
Parameters: - l_edges (array) – Multipole bin edges
- scale (callable) – scaling to apply to the Fourier coefficients before harmonic azimuthal averaging. Must be a function that takes the array of multipole magnitudes as an input and returns an array of real numbers
Returns: (l – array,P_EE,P_BB,P_EB – arrays) = (multipole moments, EE,BB power spectra and EB cross power)
Return type: tuple.
>>> test_map = ShearMap.load("shear.fit",format=load_fits_default_shear)
>>> l_edges = np.arange(300.0,5000.0,200.0)
>>> l,EE,BB,EB = test_map.eb_power_spectrum(l_edges)
-
fourierEB
()¶ Computes E and B modes of the shear map in Fourier space
Returns: (E,B) map
Return type: tuple.
-
classmethod
fromConvergence
(conv)[source]¶ Construct a shear map from a ConvergenceMap instance using the Kaiser Squires method
Parameters: conv (ConvergenceMap) – input convergence map
Returns: reconstructed shear map
Return type: ShearMap
-
classmethod
fromEBmodes
(fourier_E, fourier_B, angle=<Quantity 3.14 deg>)¶ This class method allows one to build a shear map by specifying its E and B mode components
Parameters: - fourier_E (numpy 2D array, must be of type np.complex128 and must have a shape that is appropriate for a real fourier transform, i.e. (N,N/2 + 1); N should be a power of 2) – E mode of the shear map in fourier space
- fourier_B (numpy 2D array, must be of type np.complex128 and must have a shape that is appropriate for a real fourier transform, i.e. (N,N/2 + 1); N should be a power of 2) – B mode of the shear map in fourier space
- angle (float.) – Side angle of the real space map in degrees
Returns: the corresponding ShearMap instance
Raises: AssertionErrors for inappropriate inputs
-
getEll
()¶ Get the values of the multipoles in real FFT space
Returns: ell array with real FFT shape
Return type: array.
-
getValues
(x, y)¶ Extract the map values at the requested (x,y) positions; this is implemented using the numpy fast indexing routines, so the formats of x and y must follow the numpy advanced indexing rules. Periodic boundary conditions are enforced
Parameters: - x (numpy array or quantity) – x coordinates at which to extract the map values (if unitless these are interpreted as radians)
- y (numpy array or quantity) – y coordinates at which to extract the map values (if unitless these are interpreted as radians)
Returns: numpy array with the map values at the specified positions, with shape (N,shape x) where N is the number of components of the map field
Raises: IndexError if the formats of x and y are not the proper ones
-
gradient
(x=None, y=None)¶ Computes the gradient of the components of the spin1 field at each point
Parameters: - x (array with units) – optional, x positions at which to evaluate the gradient
- y (array with units) – optional, y positions at which to evaluate the gradient
Returns: the gradient of the spin1 field in array form, of shape (4,:,:) where the four components are, respectively, 1x,1y,2x,2y; the units for the finite difference are pixels
-
info
¶ Displays some of the information stored in the map (mainly resolution)
-
classmethod
load
(filename, format=None, **kwargs)¶ This class method reads the map from a data file, in various formats
Parameters: - filename (str.) – name of the file in which the map is saved
- format (str. or callable) – the format of the file in which the map is saved (can be a callable too); if None, it’s detected automatically from the filename
- kwargs (dict.) – the keyword arguments are passed to the format (if callable)
Returns: Spin1 instance with the loaded map
-
save
(filename, format=None, double_precision=False)¶ Saves the map to an external file in the specified format (only FITS implemented so far)
Parameters: - filename (str.) – name of the file on which to save the map
- format (str.) – format of the file, only FITS implemented so far; if None, it’s detected automatically from the filename
- double_precision (bool.) – if True saves the map in double precision
-
savefig
(filename)¶ Saves the map visualization to an external file
Parameters: filename (str.) – name of the file on which to save the map
-
setAngularUnits
(unit)¶ Convert the angular units of the map to the desired unit
Parameters: unit (astropy units) – astropy unit instance to which to perform the conversion
-
sticks
(fig=None, ax=None, pixel_step=10, multiplier=1.0)¶ Draw the ellipticity map using the shear components
Parameters: - ax (matplotlib ax object) – ax on which to draw the ellipticity field
- pixel_step (int.) – One arrow will be drawn every pixel_step pixels to avoid arrow overplotting
- multiplier (float.) – Multiplies the stick length by a factor
Returns: ax – the matplotlib ax object on which the stick field was drawn
>>> import matplotlib.pyplot as plt
>>> test = ShearMap.load("shear.fit",format=load_fits_default_shear)
>>> fig,ax = plt.subplots()
>>> test.sticks(ax=ax,pixel_step=50)
-
visualize
(fig=None, ax=None, component_labels=('$\\gamma_1$', '$\\gamma_2$'), colorbar=False, cmap='viridis', cbar_label=None, **kwargs)¶ Visualize the shear map; the kwargs are passed to imshow
-
visualizeComponents
(fig, ax, components='EE, BB, EB', region=(200, 9000, -9000, 9000))¶ Plots the full 2D E and B mode power spectrum (useful to test statistical isotropy)
Parameters: - fig (matplotlib figure object) – figure on which to draw the ellipticity field
- ax (matplotlib ax object or array of ax objects, can be None in which case new ax are created) – ax on which to draw the ellipticity field
- components – string that contains the components to plot; the format is a sequence of {EE,BB,EB} separated by commas
- region (tuple (lx_min,lx_max,ly_min,ly_max)) – selects the multipole region to visualize
-
-
class lenstools.catalog.shear.Catalog(*args, **kwargs)[source]¶
Class handler of a galaxy catalogue; inherits all the functionality from astropy.table.Table
-
pixelize
(map_size, npixel=256, field_quantity=None, origin=<Quantity [ 0., 0.] deg>, smooth=None, accumulate='average', callback=None, **kwargs)[source]¶ Constructs a two dimensional square pixelized map version of one of the scalar properties in the catalog by assigning its objects on a grid
Parameters: - map_size (quantity) – spatial size of the map
- npixel (int.) – number of pixels on a side
- field_quantity (str.) – name of the catalog quantity to map; if None, 1 is assumed
- origin (array with units) – two dimensional coordinates of the origin of the map
- smooth (quantity) – if not None, the map is smoothed with a gaussian filter of scale smooth
- accumulate (str.) – if “sum”, galaxies that fall in the same pixel have their field_quantity summed; if “average”, the sum is divided by the number of galaxies that fall in the pixel
- callback (callable or None) – user defined function that gets called on field_quantity
- kwargs (dict.) – the keyword arguments are passed to callback
Returns: two dimensional scalar array with the pixelized field (pixels with no objects are treated as NaN)
Return type: array
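A usage sketch, assuming a hypothetical catalog file "galaxies.fits" with position columns "x" and "y" in degrees (read is inherited from astropy.table.Table):
>>> import astropy.units as u
>>> from lenstools.catalog.shear import Catalog
>>> catalog = Catalog.read("galaxies.fits")
>>> catalog.setSpatialInfo(field_x="x",field_y="y",unit=u.deg)
>>> counts = catalog.pixelize(map_size=3.5*u.deg,npixel=256,field_quantity=None,accumulate="sum")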
-
setSpatialInfo
(field_x='x', field_y='y', unit=Unit("deg"))[source]¶ Sets the spatial information in the catalog
Parameters: - field_x (str.) – name of the column that contains the x coordinates of the objects
- field_y (str.) – name of the column that contains the y coordinates of the objects
- unit (astropy.unit) – measure unit of the spatial coordinates
-
visualize
(map_size, npixel, field_quantity=None, origin=<Quantity [ 0., 0.] deg>, smooth=None, fig=None, ax=None, colorbar=False, cmap='jet', **kwargs)[source]¶ Visualize a two dimensional square pixelized map version of one of the scalar properties in the catalog by assigning its objects on a grid (the pixelization is performed using the pixelize routine)
Parameters: - map_size (quantity) – spatial size of the map
- npixel (int.) – number of pixels on a side
- field_quantity (str.) – name of the catalog quantity to map; if None, 1 is assumed
- origin (array with units) – two dimensional coordinates of the origin of the map
- smooth (quantity) – if not None, the map is smoothed with a gaussian filter of scale smooth
- kwargs (dict.) – the additional keyword arguments are passed to pixelize
Returns: two dimensional scalar array with the pixelized field (pixels with no objects are treated as NaN)
Return type: array
-
-
class lenstools.catalog.shear.ShearCatalog(*args, **kwargs)[source]¶
Class handler of a galaxy shear catalog, inherits all the functionality from the Catalog class
-
addSourceEllipticity
(es, es_colnames=('e1', 'e2'), rs_correction=True, inplace=False)[source]¶ Produce a mock shear catalog that includes the intrinsic source ellipticity
Parameters: - es (table) – intrinsic ellipticities; must be a two column catalog with the intrinsic ellipticity components
- es_colnames (tuple.) – column names with intrinsic ellipticities
- rs_correction (bool.) – include denominator (1+g*es) in the shear correction
- inplace (bool.) – perform the operation in place
Returns: shear estimate with the added intrinsic ellipticity
Return type:
-
shapeNoise
(seed=None)[source]¶ Generate a catalog with randomly drawn shape noise for each galaxy
Parameters: seed (int.) – random seed for noise generation
Returns: shape noise catalog
Return type: ShearCatalog
-
toMap
(map_size, npixel, smooth, **kwargs)[source]¶ Convert a shear catalog into a shear map
Parameters: - map_size (quantity) – spatial size of the map
- npixel (int.) – number of pixels on a side
- smooth (quantity) – if not None, the map is smoothed with a gaussian filter of scale smooth
- kwargs (dict.) – additional keyword arguments are passed to pixelize
Returns: shear map
Return type: ShearMap
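A usage sketch, assuming a hypothetical catalog file "shear_catalog.fits" with position columns "x" and "y" in degrees and the two shear component columns expected by the pixelization:
>>> import astropy.units as u
>>> from lenstools.catalog.shear import ShearCatalog
>>> catalog = ShearCatalog.read("shear_catalog.fits")
>>> catalog.setSpatialInfo(field_x="x",field_y="y",unit=u.deg)
>>> shear_map = catalog.toMap(map_size=3.5*u.deg,npixel=512,smooth=0.5*u.arcmin)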
-
CMB temperature maps¶
-
class lenstools.image.convergence.CMBTemperatureMap(data, angle, masked=False, **kwargs)[source]¶
-
estimateKappaQuad
(powerTT=None, callback='camb_dimensionless', noise_keys=None, lmax=3500, filtering=None)[source]¶ Estimate the lensing kappa using a temperature quadratic estimator
Parameters: - powerTT (str.) – name of the file that contains the lensed theory TT power spectrum. If callback is a callable, powerTT is passed to the callback
- callback (str.) – callback function that computes the TT power spectrum. Can be ‘camb_dimensionless’ or ‘camb_uk’ for using camb tabulated power spectra (dimensionless or uK^2 units), None (the identity is used), or callable. If callable, it is called on powerTT and must return (ell,P_TT(ell))
- noise_keys (dict.) – dictionary with noise TT power spectrum specifications
- filtering (str. or callable) – filter the map after reconstruction. Can be ‘wiener’ to apply the wiener filter, or callable. If callable, the function is called on the multipoles and applied to the reconstructed image FFT
Returns: reconstructed convergence map
Return type: ConvergenceMap
-
estimatePhiFFTQuad
(powerTT=None, callback='camb_dimensionless', noise_keys=None, lmax=3500, filtering=None)[source]¶ Estimate the Fourier transform of the lensing potential using a temperature quadratic estimator
Parameters: - powerTT (str.) – name of the file that contains the lensed theory TT power spectrum. If callback is a callable, powerTT is passed to the callback
- callback (str.) – callback function that computes the TT power spectrum. Can be ‘camb_dimensionless’ or ‘camb_uk’ for using camb tabulated power spectra (dimensionless or uK^2 units), None (the identity is used), or callable. If callable, it is called on powerTT and must return (ell,P_TT(ell))
- noise_keys (dict.) – dictionary with noise TT power spectrum specifications
- filtering (str. or callable) – filter the map after reconstruction. Can be ‘wiener’ to apply the wiener filter, or callable. If callable, the function is called on the multipoles and applied to the reconstructed image FFT
Returns: FFT of the lensing potential
Return type: array
-
estimatePhiQuad
(powerTT=None, callback='camb_dimensionless', noise_keys=None, lmax=3500, filtering=None)[source]¶ Estimate the lensing potential using a temperature quadratic estimator
Parameters: - powerTT (str.) – name of the file that contains the lensed theory TT power spectrum. If callback is a callable, powerTT is passed to the callback
- callback (str.) – callback function that computes the TT power spectrum. Can be ‘camb_dimensionless’ or ‘camb_uk’ for using camb tabulated power spectra (dimensionless or uK^2 units), None (the identity is used), or callable. If callable, it is called on powerTT and must return (ell,P_TT(ell))
- noise_keys (dict.) – dictionary with noise TT power spectrum specifications
- filtering (str. or callable) – filter the map after reconstruction. Can be ‘wiener’ to apply the wiener filter, or callable. If callable, the function is called on the multipoles and applied to the reconstructed image FFT
Returns: reconstructed lensing potential map
Return type: Spin0
-
classmethod
from_power
(angle=<Quantity 3.5 deg>, npixel=256, seed=None, space='real', powerTT=None, callback='camb_dimensionless', lmax=3500)[source]¶ Build an unlensed CMB temperature map from a known TT power spectrum
Parameters: - angle (quantity) – angular size of the map
- npixel (int.) – number of pixels on a side
- seed (int.) – random seed for the Fourier coefficients draw
- space (str.) – must be ‘real’ or ‘fourier’
- powerTT (str.) – name of the file that contains the TT power spectrum. If callback is a callable, powerTT is passed to the callback
- callback (str.) – callback function that computes the TT power spectrum. Can be ‘camb_dimensionless’ or ‘camb_uk’ for using camb tabulated power spectra (dimensionless or uK^2 units), None (the identity is used), or callable. If callable, it is called on powerTT and must return (ell,P_TT(ell))
Returns: temperature map
Return type: CMBTemperatureMap
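A usage sketch; "camb_scalCls.dat" is a placeholder for a tabulated CAMB TT power spectrum file:
>>> import astropy.units as u
>>> from lenstools.image.convergence import CMBTemperatureMap
>>> t_map = CMBTemperatureMap.from_power(angle=3.5*u.deg,npixel=256,seed=0,powerTT="camb_scalCls.dat",callback="camb_dimensionless")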
-
lens
(kappa)[source]¶ Lens the CMB temperature map using a kappa map
Parameters: kappa (ConvergenceMap) – convergence map from which the lensing potential is inferred
Returns: lensed temperature map
Return type: CMBTemperatureMap
-
Noise¶
-
class lenstools.image.noise.GaussianNoiseGenerator(shape, side_angle)[source]¶
A class that handles generation of Gaussian simulated noise maps
-
classmethod
forMap
(image)[source]¶ This class method generates a Gaussian noise generator intended to be used on a convergence map: i.e. the outputs of its methods can be added to the convergence map in question to simulate the presence of noise
Parameters: image (ConvergenceMap or CMBTemperatureMap) – the blueprint of the image you want to generate the noise for
Returns: noise generator instance
Return type: GaussianNoiseGenerator
-
fromConvPower
(power_func, seed=0, **kwargs)[source]¶ This method uses a supplied power spectrum to generate correlated noise maps in real space via FFTs
Parameters: - power_func (function with the above specifications, or numpy array (l,Pl) of shape (2,n)) – function that, given a numpy array of l’s, returns a numpy array with the corresponding Pl’s (this is the input power spectrum); alternatively you can pass an array (l,Pl) and the power spectrum will be calculated with scipy’s interpolation routines
- seed (int.) – seed of the random generator
- kwargs – keyword arguments to be passed to power_func, or to the interpolate.interp1d routine
Returns: noise image with the same shape as the one used as blueprint
Return type:
-
getCMBDetectorNoise
(sigmaN=<Quantity 27.0 arcmin uK>, fwhm=<Quantity 7.0 arcmin>, ellmax=None, seed=0)[source]¶ This method produces CMB detector noise temperature maps (white noise + beam deconvolution)
Parameters: - sigmaN (quantity) – rms of the noise. Must be supplied with temperature x angle units
- fwhm (quantity) – full width half maximum of the beam
- ellmax (float.) – if not None, zero out the power spectrum for ell>ellmax
- seed (int.) – seed for the random number generator
Returns: noise temperature map
Return type:
-
getCMBWhiteNoise
(sigmaN=<Quantity 27.0 arcmin uK>, seed=0)[source]¶ This method produces CMB white noise temperature maps
Parameters: - sigmaN (quantity) – rms of the noise. Must be supplied with temperature x angle units
- seed (int.) – seed for the random number generator
Returns: noise temperature map
Return type:
-
getShapeNoise
(z=1.0, ngal=<Quantity 15.0 1 / arcmin2>, seed=0)[source]¶ This method generates a white, gaussian shape noise map for the given redshift of the map
Parameters: - z (float.) – single redshift of the background sources on the map
- ngal (float.) – assumed angular number density of galaxies (must have units of angle^-2)
- seed (int.) – seed of the random generator
Returns: instance of the same exact shape as the one used as blueprint
Return type:
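A sketch that adds shape noise to a convergence map; the file name, redshift and galaxy density are illustrative, and the returned noise image is assumed to expose a data attribute like the blueprint map:
>>> import astropy.units as u
>>> from lenstools import ConvergenceMap
>>> from lenstools.image.noise import GaussianNoiseGenerator
>>> conv_map = ConvergenceMap.load("map.fit")
>>> gen = GaussianNoiseGenerator.forMap(conv_map)
>>> noise_map = gen.getShapeNoise(z=1.0,ngal=15.0*(u.arcmin**-2),seed=1)
>>> noisy_map = ConvergenceMap(conv_map.data + noise_map.data,conv_map.side_angle)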
Statistics¶
-
class lenstools.statistics.ensemble.Ensemble(data=None, file_list=[], metric='chi2', **kwargs)[source]¶
A class that handles statistical operations on weak lensing maps, inherits from pandas DataFrame. The rows in the Ensemble correspond to the different ensemble realizations of the same descriptor
-
bootstrap
(callback, bootstrap_size=10, resample=10, seed=None, assemble=<built-in function array>, pool=None, **kwargs)[source]¶ Computes a custom statistic on the Ensemble using the bootstrap method
Parameters: - callback (callable) – statistic to compute on the ensemble; takes the resampled Ensemble data as an input
- bootstrap_size (int.) – size of the resampled ensembles used in the bootstrapping; must be less than or equal to the number of realizations in the Ensemble
- resample (int.) – number of times the Ensemble is resampled
- seed (int.) – if not None, this is the random seed of the random resamples
- assemble (callable) – method that gets called on the resampled statistic list to make it into an Ensemble
- pool (MPI pool object) – MPI pool for multiprocessing (imported from emcee https://github.com/dfm/emcee)
- kwargs (dict.) – passed to the callback function
Returns: the bootstrapped statistic
Return type: assemble return type
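A toy sketch on synthetic realizations, bootstrapping the mean of each column:
>>> import numpy as np
>>> from lenstools.statistics.ensemble import Ensemble
>>> ens = Ensemble(np.random.randn(100,10))
>>> def column_means(data):
...     return data.mean(axis=0)
>>> boot_means = ens.bootstrap(column_means,bootstrap_size=50,resample=200,seed=0)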
-
combine_columns
(combinations)[source]¶ Combine the hierarchical columns in the Ensemble, according to a dictionary whose keys are the names of the combined features
Parameters: combinations (dict.) – mapping of combined features onto the old ones
Returns: Ensemble with columns combined
Return type: Ensemble
-
classmethod
combine_from_dict
(ensemble_dict)[source]¶ Builds an Ensemble combining the columns of smaller Ensembles; each key in the ensemble_dict dictionary becomes an element in the top level of the resulting Ensemble index
Parameters: ensemble_dict (dict.) – dictionary that contains the Ensembles to combine; the values in the dictionary must be Ensembles
Return type: Ensemble
-
compare
(rhs, **kwargs)[source]¶ Computes a comparison score between two Ensembles: computes a chi2-style difference between two different ensembles to assess how different they are
-
classmethod
compute
(file_list, callback_loader=None, pool=None, index=None, assemble=<built-in function array>, **kwargs)[source]¶ Computes an ensemble, can spread the calculations on multiple processors using a MPI pool
Parameters: - file_list (list.) – list of files that will constitute the ensemble; the callback_loader is called on each of the files to produce the different realizations
- callback_loader (function) – This function gets executed on each of the files in the list and populates the ensemble. If None provided, it performs a numpy.load on the specified files. Must return a numpy array with the loaded data
- pool (MPI pool object) – MPI pool for multiprocessing (imported from emcee https://github.com/dfm/emcee)
- index (pandas Index) – index of the Ensemble
- assemble (callable) – called on the list of features (one feature per file) to assemble them in an array (defaults to np.array)
- kwargs (dict.) – Any additional keyword arguments to be passed to callback_loader
>>> from lenstools import Ensemble
>>> from lenstools.statistics import default_callback_loader
>>> map_list = ["conv1.fit","conv2.fit","conv3.fit"]
>>> l_edges = np.arange(200.0,50000.0,200.0)
>>> conv_ensemble = Ensemble.compute(map_list,callback_loader=default_callback_loader,pool=pool,l_edges=l_edges)
-
covariance
(bootstrap=False, bootstrap_size=10, resample=10, seed=None, pool=None)[source]¶ Computes the ensemble covariance matrix
Parameters: - bootstrap (bool.) – if True the covariance matrix is computed with a bootstrap estimate
- bootstrap_size (int.) – size of the resampled ensembles used in the bootstrapping; must be less than or equal to the number of realizations in the Ensemble
- seed (int.) – if not None, this is the random seed of the random resamples
- resample (int.) – number of times the Ensemble is resampled
- pool (MPI pool object) – MPI pool for multiprocessing (imported from emcee https://github.com/dfm/emcee)
Returns: Covariance matrix, has shape (self.data[1],self.data[1])
Return type:
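A toy sketch on synthetic realizations; the bootstrap settings are illustrative:
>>> import numpy as np
>>> from lenstools.statistics.ensemble import Ensemble
>>> ens = Ensemble(np.random.randn(1000,10))
>>> cov = ens.covariance()
>>> cov_boot = ens.covariance(bootstrap=True,bootstrap_size=500,resample=100,seed=0)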
-
group
(group_size, kind='sparse')[source]¶ Sometimes it happens that different realizations in the ensemble need to be grouped together, for example when they belong to different subfields of the same observation field. With this function you can group different realizations together by taking the mean, and reduce the total number of realizations in the ensemble
Parameters: - group_size (int) – how many realizations to put in a group, must divide exactly the total number of realizations in the ensemble
- kind (str.) – specifies how to do the grouping; if set to “sparse” the groups are formed by taking one realization every nobs/group_size (for example ([1,1001,…,9001],[2,1002,…,9002]) if nobs=10000 and group_size=10). If set to “contiguous” then the realizations are grouped as ([1,2,…,10],[11,12,…,20]). Otherwise you can set kind to your own sparse matrix scheme
Returns: groupby object
-
imshow
(fig=None, ax=None, **kwargs)[source]¶ Visualize a two dimensional map of the Ensemble, with the index as the vertical axis and the columns as the horizontal axis
Parameters: - fig –
- ax –
- kwargs (dict.) –
-
classmethod
meshgrid
(labels, sort=None)[source]¶ Construct an Ensemble whose column values are arranged in a regularly spaced mesh grid
Parameters: - labels (dict.) – dictionary whose keys are the Ensemble columns and whose values are the mesh grid axes values
- sort (dict.) – optional dictionary that tells how the meshgrid columns should be sorted
Return type: Ensemble
-
principalComponents
(location=None, scale=None)[source]¶ Computes the principal components of the Ensemble
Parameters: - location (Series) – compute the principal components with respect to this location; must have the same columns as the Ensemble
- scale (Ensemble) – compute the principal components applying this scaling on the Ensemble columns; must have the same columns as the Ensemble
Returns: pcaHandler instance
-
project
(vectors, names=None)[source]¶ Projects the rows of the Ensemble on the hyperplane defined by N linearly independent vectors
Parameters: - vectors (tuple.) – linearly independent vectors on which to project the rows of the Ensemble
- names (list.) – optional names of the projected components
Returns: projected Ensemble; the new rows contain the components of the old rows along the vectors
Return type: Ensemble
-
classmethod
read
(filename, callback_loader=None, **kwargs)[source]¶ Reads a numpy file into an Ensemble
Parameters: - filename (str.) – name of the file to read
- callback_loader (function) – This function gets executed on each of the files in the list and populates the ensemble. If None provided, it unpickles the specified file. Must return an acceptable input for the Ensemble constructor
- kwargs (dict.) – Any additional keyword arguments to be passed to callback_loader
Returns: Ensemble instance read from the file
-
classmethod
readall
(filelist, callback_loader=None, **kwargs)[source]¶ Reads a list of files into an Ensemble
Parameters: - filelist (list.) – list of files to read
- callback_loader (function) – This function gets executed on each of the files in the list and populates the ensemble. If None provided, it performs a numpy.load on the specified file. Must return a numpy array with the loaded data
- kwargs (dict.) – Any additional keyword arguments to be passed to callback_loader
Returns: Ensemble instance read from the file
-
save
(filename, format=None, **kwargs)[source]¶ Save ensemble data in an external file (in arbitrary format)
Parameters: - filename (str.) – file name of the external file
- format – format in which to save the ensemble; if None the format is auto detected from the filename
- kwargs (dict.) – the keyword arguments are passed to the saver (or to format if callable)
-
selfChi2
()[source]¶ Computes the Ensemble distribution of chi squared values, defined with respect to the same Ensemble mean and covariance
Returns: array with the self chi squared for each realization
-
shuffle
(seed=None)[source]¶ Changes the order of the realizations in the Ensemble
Parameters: seed (int.) – random seed for the random shuffling
Returns: shuffled Ensemble
-
suppress_indices
(by, suppress, columns)[source]¶ Combine multiple rows in the Ensemble into a single row, according to a specific criterion
Parameters: - by (list.) – list of columns that is used to group the Ensemble: the rows in each group are combined
- suppress (list.) – list of columns to suppress: these indices get suppressed when the rows are combined
- columns (list.) – list of columns to keep in the rows that are combined
Returns: suppressed indices lookup table,combined Ensemble
Return type: list.
-
-
class lenstools.statistics.ensemble.SquareMatrix(data=None, file_list=[], metric='chi2', **kwargs)[source]¶
-
invert
()[source]¶ Compute the inverse
Return type: SquareMatrix
-
-
class lenstools.statistics.database.Database(name, connection_string='sqlite:///{0}')[source]¶
-
insert
(df, table_name='data')[source]¶
Parameters: df (Ensemble) – records to insert in the database, in Ensemble (or pandas DataFrame) format
-
-
class lenstools.statistics.database.ScoreDatabase(*args, **kwargs)[source]¶
-
insert
(df, table_name='data')¶
Parameters: df (Ensemble) – records to insert in the database, in Ensemble (or pandas DataFrame) format
-
pull_features
(feature_list, table_name='scores', score_type='likelihood')[source]¶ Pull out the scores for a subset of features
Parameters: - feature_list (list.) – feature list to pull out from the database
- score_type (str.) – name of the column that contains the particular score you are considering
-
query
(sql)¶
Parameters: sql (str.) – sql query string
Returns: Ensemble
-
classmethod
query_all
(db_names, sql)¶ Perform the same SQL query on a list of databases and combine the results
Parameters: - db_names (list.) – list of names of the databases to query
- sql (str.) – sql query string
Returns: Ensemble
-
classmethod
read_table_all
(db_names, table_name)¶ Read the same SQL table from a list of databases and combine the results
Parameters: - db_names (list.) – list of names of the databases to query
- table_name (str.) – table to read
Returns: Ensemble
-
-
lenstools.statistics.database.chi2database(*args, **kwargs)¶
Populate an SQL database with the scores of different parameter sets with respect to the data; supports multiple features
Parameters: - db_name (str.) – name of the database to populate
- parameters (Ensemble) – parameter combinations to score
- specs (dict.) – dictionary that should contain the emulator, data and covariance matrix of each feature to consider; each value in this dictionary must be a dictionary with keys ‘emulator’, ‘data’ and ‘data covariance’
- table_name (str.) – table name to populate in the database
- pool (MPIPool) – MPIPool to spread the calculations over (pass None for automatic pool handling)
- nchunks (int.) – number of chunks to split the parameter score calculations in (one chunk per processor ideally)
Cosmological parameter estimation¶
-
class lenstools.statistics.constraints.Analysis(data=None, file_list=[], metric='chi2', **kwargs)[source]¶
The base class of this module; the idea in weak lensing analysis is that one has a set of simulated data that serves as a training model, and then uses that set to fit the observations for the best model parameters. Inherits from Ensemble
-
add_models
(parameters, feature)[source]¶ Add a model to the training set of the current analysis
Parameters: - parameters (array) – parameter set of the new model
- feature (array) – measured feature of the new model
-
combine_features
(combinations)[source]¶ Combine features in the Analysis, according to a dictionary whose keys are the names of the combined features
Parameters: combinations (dict.) – mapping of combined features onto the old ones
-
find
(parameters, rtol=1e-05)[source]¶ Finds the location in the instance that has the specified combination of parameters
Parameters: - parameters (array.) – the parameters of the model to find
- rtol (float.) – tolerance of the search (must be less than 1)
Returns: array of int. with the indices of the corresponding models
-
refeaturize
(transformation, method='apply_row', **kwargs)[source]¶ Allows a general transformation on the feature set of the analysis by calling an arbitrary transformation function
Parameters: - transformation (callable or dict.) – callback function called on the feature_set; must take in a row of features and return a row of features. If a dictionary is passed, the keys must be the feature names
- kwargs (dict.) – the keyword arguments are passed to the transformation callable
Returns: transformed Analysis
-
reparametrize
(transformation, **kwargs)[source]¶ Reparametrize the parameter set of the analysis by calling the formatter handle on the current parameter set (can be used to enlarge/shrink/relabel the parameter set)
Parameters: - transformation (callable) – transformation function called on the parameters, must take in a row of parameters and return another row of parameters
- kwargs (dict.) – the keyword arguments are passed to the transformation callable
Returns: reparametrized Analysis
-
Fisher matrix calculations¶
-
class lenstools.statistics.constraints.FisherAnalysis(data=None, file_list=[], metric='chi2', **kwargs)[source]¶
-
check
()[source]¶ Asserts that the parameters are varied one at a time, and that a parameter is not varied more than once
Raises: AssertionError
-
chi2
(observed_feature, features_covariance, correct=None)[source]¶ Computes the chi2 between an observed feature and the fiducial feature, using the provided covariance
Parameters: - observed_feature (array) – observed feature to fit, its last dimension must have the same shape as self.feature_set[0]
- features_covariance (2 dimensional array (or 1 dimensional if diagonal)) – covariance matrix of the simulated features, must be provided for a correct fit!
- correct (int.) – if not None, correct for the bias in the inverse covariance estimator assuming the covariance was estimated by ‘correct’ simulations
Returns: chi2 of the comparison
Return type: float.
-
classify
(observed_feature, features_covariance, correct=None, labels=[0, 1], confusion=False)[source]¶ Performs a Fisher classification of the observed feature, choosing the most probable label based on the value of the chi2
Parameters: - observed_feature (array) – observed feature to fit, the last dimension must have the same shape as self.feature_set[0]
- features_covariance (2 dimensional array (or 1 dimensional if assumed diagonal)) – covariance matrix of the simulated features, must be provided for a correct classification!
- correct (int.) – if not None, correct for the bias in the inverse covariance estimator assuming the covariance was estimated by ‘correct’ simulations
- labels (iterable) – labels of the classification, must be the indices of the available classes (from 0 to feature_set.shape[0])
- confusion (bool.) – if True, an array with the label percentage occurrences is returned; if False an array of labels is returned
Returns: array with the labels resulting from the classification
Return type: int.
-
compute_derivatives
()[source]¶ Computes the feature derivatives with respect to the parameter sets using one step finite differences; the derivatives are computed with respect to the fiducial parameter set
Returns: array of shape (p,N), where N is the feature dimension and p is the number of varied parameters
-
confidence_ellipse
(simulated_features_covariance, correct=None, observed_feature=None, observed_features_covariance=None, parameters=['Om', 'w'], p_value=0.684, **kwargs)[source]¶ Draws a confidence ellipse of a specified p-value in parameter space, corresponding to fitting an observed feature for the cosmological parameters
Parameters: - observed_feature (array) – observed feature to fit, the last dimension must have the same shape as self.feature_set[0]
- simulated_features_covariance (2 dimensional array (or 1 dimensional if assumed diagonal)) – covariance matrix of the simulated features, must be provided for a correct fit!
- correct (int.) – if not None, correct for the bias in the inverse covariance estimator assuming the covariance was estimated by ‘correct’ simulations
- observed_features_covariance (2 dimensional array (or 1 dimensional if assumed diagonal)) – covariance matrix of the simulated features, if different from the simulated one; if None the simulated feature covariance is used
- parameters (list.) – parameters to compute the confidence contour of
- p_value (float.) – p-value to calculate
- kwargs (dict.) – the keyword arguments are passed to the matplotlib Ellipse method
Returns: matplotlib ellipse object
Return type: Ellipse
-
static
ellipse
(center, covariance, p_value=0.684, **kwargs)¶ Draws a confidence ellipse using matplotlib Ellipse patch
Parameters: - center (tuple.) – center of the ellipse
- covariance (2D-array.) – parameters covariance matrix
- p_value (float.) – p-value to calculate
- kwargs (dict.) – the keyword arguments are passed to the matplotlib Ellipse method
Returns: matplotlib ellipse object
Return type: Ellipse
-
fisher_matrix
(simulated_features_covariance, correct=None, observed_features_covariance=None)[source]¶ Computes the parameter Fisher matrix using the associated features, which in the end allows one to compute the parameter confidence contours (around the fiducial value)
Parameters: - simulated_features_covariance (2 dimensional array (or 1 dimensional if assumed diagonal)) – covariance matrix of the simulated features, must be provided for a correct fit!
- correct (int.) – if not None, correct for the bias in the inverse covariance estimator assuming the covariance was estimated by ‘correct’ simulations
- observed_features_covariance (2 dimensional array (or 1 dimensional if assumed diagonal)) – covariance matrix of the simulated features, if different from the simulated one; if None the simulated feature covariance is used
Returns: 2 dimensional array with the Fisher matrix of the analysis
-
fit
(observed_feature, features_covariance)[source]¶ Maximizes the gaussian likelihood on which the Fisher matrix formalism is based, and returns the best fit for the parameters given the observed feature
Parameters: - observed_feature (array) – observed feature to fit, must have the same shape as self.feature_set[0]
- features_covariance (2 dimensional array (or 1 dimensional if assumed diagonal)) – covariance matrix of the simulated features, must be provided for a correct fit!
Returns: array with the best fitted parameter values
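A toy sketch of the Fisher workflow on synthetic features; it assumes that FisherAnalysis inherits add_models from Analysis and can be populated starting from an empty instance, and all parameter values, features and covariances below are made up (the first model added acts as the fiducial one):
>>> import numpy as np
>>> from lenstools.statistics.constraints import FisherAnalysis
>>> nbins = 10
>>> fiducial_feature = np.linspace(1.0,2.0,nbins)
>>> fisher = FisherAnalysis()
>>> fisher.add_models(parameters=np.array([0.26,-1.0]),feature=fiducial_feature)
>>> fisher.add_models(parameters=np.array([0.29,-1.0]),feature=fiducial_feature*1.1)
>>> fisher.add_models(parameters=np.array([0.26,-0.8]),feature=fiducial_feature*0.9)
>>> features_covariance = np.eye(nbins)
>>> p_fit = fisher.fit(observed_feature=fiducial_feature*1.05,features_covariance=features_covariance)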
-
parameter_covariance
(simulated_features_covariance, correct=None, observed_features_covariance=None)[source]¶ Computes the parameter covariance matrix using the associated features, which in the end allows one to compute the parameter confidence contours (around the fiducial value)
Parameters: - simulated_features_covariance (2 dimensional array (or 1 dimensional if assumed diagonal)) – covariance matrix of the simulated features, must be provided for a correct fit!
- correct (int.) – if not None, correct for the bias in the inverse covariance estimator assuming the covariance was estimated by ‘correct’ simulations
- observed_features_covariance (2 dimensional array (or 1 dimensional if assumed diagonal)) – covariance matrix of the simulated features, if different from the simulated one; if None the simulated feature covariance is used
Returns: 2 dimensional array with the parameter covariance matrix of the analysis
-
set_fiducial
(n)[source]¶ Sets the fiducial model (with respect to which to compute the derivatives), default is 0 (i.e. self.parameter_set[0])
Parameters: n (int.) – the parameter set you want to use as fiducial
-
variations
¶ Checks the parameter variations with respect to the fiducial cosmology
Returns: iterable with the positions of the variations
-
varied
¶ Returns the indices of the parameters that are varied
Returns: list with the indices of the varied parameters
-
Non linear parameter dependence¶
-
class lenstools.statistics.constraints.Emulator(*args, **kwargs)[source]¶
-
approximate_linear
(center, derivative_precision=0.1)[source]¶ Construct a FisherAnalysis by approximating the Emulator as a linear expansion along a chosen center
Parameters: - center (Series) – center point in parameter space
- derivative_precision (float.) – percentage step for the finite difference derivatives
Returns: linearly approximated Fisher analysis
Return type: FisherAnalysis
-
chi2
(parameters, observed_feature, features_covariance, correct=None, split_chunks=None, pool=None)[source]¶ Computes the chi2 part of the parameter likelihood with the usual sandwich product with the covariance matrix; the model features are computed with the interpolators
Parameters: - parameters ((N,p) array where N is the number of points and p the number of parameters) – new points in parameter space on which to compute the chi2 statistic
- observed_feature (array) – observed feature on which to condition the parameter likelihood
- features_covariance (array) – covariance matrix of the features, must be supplied
- correct (int.) – if not None, correct for the bias in the inverse covariance estimator assuming the covariance was estimated by ‘correct’ simulations
- split_chunks (int.) – if set to an integer bigger than 0, splits the calculation of the chi2 into subsequent chunks, each of which takes care of an equal number of points. Each chunk could be taken care of by a different processor
Returns: array with the chi2 values, with the same shape of the parameters input
-
chi2Contributions
(parameters, observed_feature, features_covariance, correct=None)[source]¶ Computes the individual contributions of each feature bin to the chi2; the model features are computed with the interpolators. The full chi2 is the sum of the individual contributions
Parameters: - parameters ((N,p) array where N is the number of points and p the number of parameters) – new points in parameter space on which to compute the chi2 statistic
- observed_feature (array) – observed feature on which to condition the parameter likelihood
- features_covariance (array) – covariance matrix of the features, must be supplied
- correct (int.) – if not None, correct for the bias in the inverse covariance estimator assuming the covariance was estimated by ‘correct’ simulations
Returns: numpy 2D array with the contributions to the chi2 (off diagonal elements are the contributions of the cross correlation between bins)
-
likelihood
(chi2_value, **kwargs)[source]¶ Computes the likelihood value with the selected likelihood function, given the pre-computed chi2 value
Parameters: - chi2_value (array) – chi squared values
- kwargs – keyword arguments to be passed to your likelihood function
-
predict
(parameters, raw=False)[source]¶ Predicts the feature at a new point in parameter space using the bin interpolators, trained with the simulated features
Parameters: - parameters (array) – new points in parameter space on which to compute the chi2 statistic; it’s an (N,p) array where N is the number of points and p the number of parameters, or an array of size p if there is only one point
- raw (bool.) – if True returns raw numpy arrays
Returns: predicted features
Return type: array or Ensemble
-
sample_posterior
(observed_feature, sample='emcee', **kwargs)[source]¶ Sample the parameter posterior distribution
Parameters: - observed_feature (Series) – observed feature to score
- sample (str. or callable) – posterior sampling method
Returns: samples from the posterior distribution
Return type: dict.
-
score
(parameters, observed_feature, method='chi2', **kwargs)[source]¶ Compute the score for an observed feature for each combination of the proposed parameters
Parameters: - parameters (DataFrame or array) – parameter combinations to score
- observed_feature (Series) – observed feature to score
- method (str. or callable) – scoring method to use (defaults to chi2): if callable, must take in the current instance, the parameter array and the observed feature and return a score for each parameter combination
- kwargs (dict.) – keyword arguments passed to the callable method
Returns: ensemble with the scores, for each feature
Return type: Ensemble
-
set_likelihood
(function=None)[source]¶ Sets the likelihood function to a custom function input by the user: the default is the usual exp(-0.5*chi^2)
-
train
(use_parameters='all', method='Rbf', **kwargs)[source]¶ Builds the interpolators for each of the feature bins using a radial basis function approach
Parameters: - use_parameters (list. or "all") – which parameters actually vary in the supplied parameter set (it doesn’t make sense to interpolate over the constant ones)
- method (str. or callable) – interpolation method; can be ‘Rbf’ or callable. If callable, it must take two arguments, a square distance and a square length smoothing scale
- kwargs – keyword arguments to be passed to the interpolator constructor
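A sketch of training and evaluating an emulator; "emulator.pkl" is a placeholder for a previously pickled emulator, loaded with the read method inherited from Ensemble, and the test parameter values are made up:
>>> import numpy as np
>>> from lenstools.statistics.constraints import Emulator
>>> emulator = Emulator.read("emulator.pkl")
>>> emulator.train(method="Rbf")
>>> test_parameters = np.array([0.26,-1.0,0.8])
>>> predicted_feature = emulator.predict(test_parameters)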
-
Posterior samplers¶
-
lenstools.statistics.samplers.
emcee_sampler
(emulator, observed_feature, features_covariance, correct=None, pslice=None, nwalkers=16, nburn=100, nchain=1000, pool=None)[source]¶ Parameter posterior sampling based on the MCMC algorithm implemented by the emcee package
Parameters: - emulator (Emulator) – feature emulator
- observed_feature (array) – observed feature to condition the parameter estimation
- features_covariance (array) – covariance matrix of the features
- correct (int.) – if not None, correct for the bias in the inverse covariance estimator assuming the covariance was estimated by ‘correct’ simulations
- pslice (dict.) – specify slices of the parameter space in which some parameters are kept as constants
- nwalkers (int.) – number of chains
- nburn (int.) – length of the burn-in chain
- nchain (int.) – length of the MCMC chain
- pool (MPIPool) – MPI Pool for parallelization of computations
Returns: ensemble of samples from the posterior probability distribution
Return type: Ensemble
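A minimal sketch (emulator, obs and covariance are placeholders for a trained Emulator instance, the observed feature and its covariance matrix):
>>> from lenstools.statistics.samplers import emcee_sampler
>>> samples = emcee_sampler(emulator,obs,covariance,nwalkers=16,nburn=100,nchain=1000)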
Confidence contour plotting¶
-
class
lenstools.statistics.contours.
ContourPlot
(fig=None, ax=None)[source]¶ A class handler for contour plots
-
confidenceArea
()[source]¶ Computes the area enclosed by the parameter confidence contours
Returns: area enclosed by the confidence contours Return type: dict.
-
expectationValue
(function, **kwargs)[source]¶ Computes the expectation value of a function of the parameters over the current parameter likelihood
-
classmethod
from_scores
(score_ensemble, parameters, feature_names=None, plot_labels=None, fig=None, ax=None, figsize=(8, 8))[source]¶ Build a ContourPlot instance out of an ensemble of scores
Parameters: - score_ensemble (Ensemble) – ensemble of the scores
- parameters (list.) – column names that contain the parameters
- feature_names (list.) – name of the features to generate the contour plot for
- plot_labels (list.) – plot labels for the parameters
- figsize (tuple.) – size of the plot
Returns: list of ContourPlot
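A minimal sketch of building contour plots from a score Ensemble (scores is a placeholder Ensemble; the column names "Om", "si8" and "power_spectrum" are illustrative, and the plotContours call assumes the likelihood levels can be computed with the default settings):
>>> from lenstools.statistics.contours import ContourPlot
>>> contours = ContourPlot.from_scores(scores,parameters=["Om","si8"],feature_names=["power_spectrum"])
>>> contours[0].getMaximum()
>>> contours[0].plotContours(colors=["red","green","blue"])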
-
getLikelihood
(likelihood_filename, parameter_axes={'Omega_m': 0, 'sigma8': 2, 'w': 1}, parameter_labels={'Omega_m': '$\\Omega_m$', 'sigma8': '$\\sigma_8$', 'w': '$w$'})[source]¶ Load the likelihood function from a numpy file
-
getLikelihoodValues
(levels, precision=0.001)[source]¶ Find the likelihood values that correspond to the selected p_values
-
getMaximum
(which='full')[source]¶ Find the point in parameter space on which the likelihood is maximum
-
marginal
(parameter_name='w', levels=None)[source]¶ Marginalize the likelihood over all parameters but one
-
plotContours
(colors=['red', 'green', 'blue'], display_percentages=True, display_maximum=True, color_maximum='green', fill=False, **kwargs)[source]¶ Display the confidence likelihood contours
-
plotEllipse
(colors=['red'], levels=[0.683], **kwargs)[source]¶ Plot the Gaussian approximation to the confidence contours
Parameters: - colors (list.) – colors of the confidence ellipses
- levels (list.) – p_values of the confidence ellipses
- kwargs (dict.) – additional keyword arguments are passed to the Ellipse patch method
-
plotMarginal
(parameter, levels=[0.684], colors=['red', 'blue', 'green'], alpha=0.5, fill=False)[source]¶ Plot the likelihood function marginalized over all parameters except one
-
point
(coordinate_x, coordinate_y, color='green', marker='o')[source]¶ Draws a point in parameter space at the specified physical coordinates
-
setUnits
(parameter, parameter_min, parameter_max, parameter_unit)[source]¶ Set manually the physical units for each of the likelihood axes
-
slice
(parameter_name='w', parameter_value=-1.0)[source]¶ Slice the likelihood cube by fixing one of the parameters
-
Existing Simulation suites¶
-
class
lenstools.simulations.
IGS1
(H0=72.0, Om0=0.26, w0=-1.0, sigma8=0.798, ns=0.96, root_path=None, name=None)[source]¶ Class handler of the IGS1 simulations set, inherits the cosmological parameters from the astropy.cosmology.FlatwCDM class; the default parameter values are the fiducial ones
-
getNames
(realizations, z=1.0, kind='convergence', big_fiducial_set=False)[source]¶ Get the full name of the IGS1 maps, once a redshift, realization identifier and kind are specified
Parameters: - z (float.) – redshift plane of the maps, must be one of [1.0,1.5,2.0]
- realizations (list. or int.) – list of realizations to get the names of; the elements must be in [1,1000]
- kind (str.) – decide if retrieve convergence or shear maps, must be one of [convergence,shear1,shear2]
- big_fiducial_set (bool.) – set to True if you want to get the names of the bigger fiducial simulation based on 45 N-body simulations
-
load
(realization, **kwargs)[source]¶ Reads in a specific realization of the convergence field (in FITS format) and returns a ConvergenceMap instance with the loaded map
Parameters: - realization (int.) – the specific realization to read
- kwargs – the keyword arguments are passed to the getNames method
Returns: ConvergenceMap instance with the loaded map
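A minimal usage sketch (the root path is a placeholder for a local copy of the IGS1 products):
>>> from lenstools.simulations import IGS1
>>> igs1 = IGS1(root_path="/path/to/IGS1")
>>> conv_map = igs1.load(1,z=1.0,kind="convergence")
>>> imshow(conv_map.data)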
-
-
class
lenstools.simulations.
CFHTemu1
(H0=72.0, Om0=0.26, w0=-1.0, sigma8=0.798, ns=0.96, root_path=None, name=None)[source]¶ Class handler of the weak lensing CFHTemu1 simulations set, inherits from IGS1; this simulation suite contains 91 different cosmological models based on 1 N-body simulation each. Each model has 1000 realizations for each of the 13 CFHT subfields
-
classmethod
getModels
(root_path='/default')[source]¶ This class method uses a dictionary file to read in all the cosmological model parameters and instantiate the corresponding CFHTemu1 objects for each one of them
Parameters: root_path (str.) – path of your CFHT emu1 simulations copy Returns: list of CFHTemu1 instances
-
getNames
(realizations, subfield=1, smoothing=0.5)[source]¶ Get the full name of the CFHT emu1 maps, once a subfield and smoothing scale are specified
Parameters: - subfield (int.) – the specific CFHT subfield you want to retrieve, must be between 1 and 13
- smoothing (float.) – smoothing scale of the maps you wish to retrieve, in arcmin
- realizations (list. or int.) – list of realizations to get the names of; the elements must be in [1,1000]
-
load
(realization, **kwargs)¶ Reads in a specific realization of the convergence field (in FITS format) and returns a ConvergenceMap instance with the loaded map
Parameters: - realization (int.) – the specific realization to read
- kwargs – the keyword arguments are passed to the getNames method
Returns: ConvergenceMap instance with the loaded map
-
squeeze
(with_ns=False)¶ Returns the cosmological parameters of the model in numpy array form
Parameters: with_ns (bool.) – if True returns also ns as the last parameter Returns: numpy array (Om0,w0,si8,ns–optionally)
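A minimal usage sketch (the root path is a placeholder for a local copy of the CFHTemu1 suite):
>>> from lenstools.simulations import CFHTemu1
>>> models = CFHTemu1.getModels(root_path="/path/to/CFHTemu1")
>>> models[0].squeeze(with_ns=True)
>>> conv_map = models[0].load(1,subfield=1,smoothing=0.5)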
-
-
class
lenstools.simulations.
CFHTcov
(H0=72.0, Om0=0.26, w0=-1.0, sigma8=0.798, ns=0.96, root_path=None, name=None)[source]¶ Class handler of the weak lensing CFHTcov simulations set, inherits from CFHTemu1; this simulation suite contains 1000 realizations for each of the 13 CFHT subfields, based on 50 independent N-body simulations of a fiducial LambdaCDM universe. Useful to measure accurately descriptor covariance matrices
-
classmethod
getModels
(root_path='/default')[source]¶ On call, this class method returns a CFHTcov instance initialized with the cosmological parameters of the only available model in the suite
Parameters: root_path (str.) – path of your CFHTcov simulations copy Returns: CFHTcov instance initialized with the fiducial cosmological parameters
-
getNames
(realizations, subfield=1, smoothing=0.5)[source]¶ Get the full name of the CFHTcov maps, once a subfield and smoothing scale are specified
Parameters: - subfield (int.) – the specific CFHT subfield you want to retrieve, must be between 1 and 13
- smoothing (float.) – smoothing scale of the maps you wish to retrieve, in arcmin
- realizations (list. or int.) – list of realizations to get the names of; the elements must be in [1,1000]
-
load
(realization, **kwargs)¶ Reads in a specific realization of the convergence field (in FITS format) and returns a ConvergenceMap instance with the loaded map
Parameters: - realization (int.) – the specific realization to read
- kwargs – the keyword arguments are passed to the getNames method
Returns: ConvergenceMap instance with the loaded map
-
squeeze
(with_ns=False)¶ Returns the cosmological parameters of the model in numpy array form
Parameters: with_ns (bool.) – if True returns also ns as the last parameter Returns: numpy array (Om0,w0,si8,ns–optionally)
-
Simulation design¶
-
class
lenstools.simulations.
Design
(data=None, file_list=[], metric='chi2', **kwargs)[source]¶ A class that proves useful in designing simulation sets: the main functionality provided is the uniform sampling of an arbitrarily high dimensional parameter space. The points in parameter space are chosen to be as spread out as possible by minimizing a cost function, while enforcing a latin hypercube structure, in which each parameter value appears only once
-
cost
(p, Lambda)[source]¶ Computes the cost function of the current configuration given the metric parameters (p,Lambda)
Parameters: - Lambda (float.) – metric parameter of the cost function; if set to 1.0 the cost function corresponds is the Coulomb potential energy
- p (float.) – metric parameter of the cost function; if set to 2.0 the distances between points are the Euclidean ones
Returns: the value of the cost function
-
diagonalCost
(Lambda)[source]¶ Computes the cost function of a diagonal configuration with a specified number of points and a metric parameter lambda; the cost function is calculated on the scaled parameter values, which are always between 0 and 1
Parameters: Lambda (float.) – metric parameter of the cost function; if set to 1.0 the cost function corresponds to the Coulomb potential energy Returns: the value of the cost function for a diagonal configuration
-
classmethod
from_specs
(npoints, parameters)[source]¶ Parameters: - npoints (int.) – number of points in the design
- parameters (list.) – list of tuples (name,label,min,max)
-
sample
(p=2.0, Lambda=1.0, seed=0, maxIterations=10000)[source]¶ Evenly samples the parameter space by minimizing the cost function computed with the metric parameters (p,Lambda); this operation works in place
Parameters: - Lambda (float.) – metric parameter of the cost function; if set to 1.0 the cost function corresponds is the Coulomb potential energy
- p (float.) – metric parameter of the cost function; if set to 2.0 the distances between points are the Euclidean ones
- seed (int.) – random seed with which the sampler random generator will be initialized
- maxIterations (int.) – maximum number of iterations that the sampler can perform before stopping
Returns: the relative change of the cost function the last time it varied during the sampling
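A minimal sketch of building and sampling a design (parameter names, labels and bounds are illustrative):
>>> from lenstools.simulations import Design
>>> design = Design.from_specs(npoints=50,parameters=[("Om",r"$\Omega_m$",0.2,0.5),("w",r"$w$",-1.2,-0.8)])
>>> design.sample(p=2.0,Lambda=1.0,seed=1)
>>> design.visualize()
>>> design.savefig("design.png")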
-
classmethod
sample_ellipse
(npoints, parameters, center, minor, major, angle, radial_power=0.5, **kwargs)[source]¶ Sample points in a two dimensional parameter space whose boundary has the shape of an ellipse
Parameters: - npoints (int.) – number of points in the design
- parameters (list.) – name of the parameters, the list must have 2 elements
- center (tuple.) – center of the ellipse
- minor (float.) – length of the semi-minor axis
- major (float.) – length of the semi-major axis
- angle (float.) – rotation angle of the major axis with respect to the horizontal (degrees counterclockwise)
- radial_power (float.) – this parameter controls the density of points around the center (higher for higher density); 0.5 corresponds to uniform sampling
- kwargs – the keyword arguments are passed to the sample() method
Return type: Design
-
savefig
(filename)[source]¶ Save the visualization to an external file
Parameters: filename (str.) – name of the file on which to save the plot
-
set_title
(title)[source]¶ Give a title to the visualized design
Parameters: title (str.) – title of the figure
-
visualize
(fig=None, ax=None, parameters=None, fontsize=20, **kwargs)[source]¶ Visualize the design configuration using matplotlib
Parameters: parameters (list.) – the parameters to visualize; you can specify two or three of them by name. If None, all parameters are visualized
-
write
(filename=None, max_rows=None, format='ascii.latex', column_format='{0:.3f}', **kwargs)[source]¶ Outputs the points that make up the design in a nicely formatted table
Parameters: - filename (str. or file descriptor) – name of the file to which the table will be saved; if None the contents will be printed
- max_rows (int.) – maximum number of rows in the table, if smaller than the number of points the different chunks are hstacked (useful if there are too many rows for display)
- format (str.) – passed to the Table.write astropy method
- column_format (str.) – format specifier for the numerical values in the Table
- kwargs (dict.) – the keyword arguments are passed to astropy.Table.write method
Returns: the Table instance with the design parameters
-
Nicaea bindings¶
-
class
lenstools.simulations.
NicaeaSettings
[source]¶ Class handler of the code settings (non linear modeling, tomography, transfer function, etc…)
-
knobs
¶ Lists available settings to tune
-
-
class
lenstools.simulations.
Nicaea
(H0=72.0, Om0=0.26, Ode0=0.74, Ob0=0.046, w0=-1.0, wa=0.0, sigma8=0.8, As=None, ns=0.96, m_nu=<Quantity [ 0., 0., 0.] eV>, name=None)[source]¶ Main class handler for the python bindings of the NICAEA cosmological code, written by M. Kilbinger & collaborators
-
convergencePowerSpectrum
(ell, z=2.0, distribution=None, distribution_parameters=None, settings=None, **kwargs)[source]¶ Computes the convergence power spectrum for the given cosmological parameters and redshift distribution using NICAEA
Parameters: - ell (array.) – multipole moments at which to compute the power spectrum
- z (float., array or None) – redshift bins for the sources; if a single float is passed, single redshift is assumed
- distribution (None,callable or list) – redshift distribution of the sources (normalization not necessary); if None a single redshift is expected; if callable, z must be an array, and a single redshift bin is considered, with the galaxy distribution specified by the call of distribution(z); if a list is passed, each element must be a NICAEA type
- distribution_parameters (str. or list.) – relevant only when distribution is a list or callable. When distribution is callable, distribution_parameters must be either “one” or “all”, to decide if one or multiple redshift bins have to be considered. If it is a list, each element in it should be the tuple of parameters expected by the corresponding NICAEA distribution type
- settings (NicaeaSettings instance) – NICAEA code settings
- kwargs (dict.) – the keyword arguments are passed to the distribution, if callable
Returns: ( NlxNz array ) computed power spectrum at the selected multipoles (when computing the cross components these are returned in row major C ordering)
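A minimal sketch of a single source redshift calculation (the cosmological parameters and multipole range are illustrative; assumes the NICAEA backend is available):
>>> import numpy as np
>>> from lenstools.simulations import Nicaea
>>> cosmo = Nicaea(Om0=0.26,Ode0=0.74,w0=-1.0,sigma8=0.8)
>>> ell = np.logspace(2.0,4.0,50)
>>> Pl = cosmo.convergencePowerSpectrum(ell,z=2.0)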
-
classmethod
fromCosmology
(cosmo)[source]¶ Builds a Nicaea instance from one of the astropy.cosmology objects, from which it inherits all the cosmological parameter values
Parameters: cosmo (astropy FLRW) – one of astropy cosmology instances Returns: Nicaea instance with the cosmological parameters inherited from cosmo
-
shearTwoPoint
(theta, z=2.0, distribution=None, distribution_parameters=None, settings=None, kind='+', **kwargs)[source]¶ Computes the shear two point function for the given cosmological parameters and redshift distribution using NICAEA
Parameters: - theta (array. with units) – angles at which to compute the two point function
- z (float., array or None) – redshift bins for the sources; if a single float is passed, single redshift is assumed
- distribution (None,callable or list) – redshift distribution of the sources (normalization not necessary); if None a single redshift is expected; if callable, z must be an array, and a single redshift bin is considered, with the galaxy distribution specified by the call of distribution(z); if a list is passed, each element must be a NICAEA type
- distribution_parameters (str. or list.) – relevant only when distribution is a list or callable. When distribution is callable, distribution_parameters must be either “one” or “all”, to decide if one or multiple redshift bins have to be considered. If it is a list, each element in it should be the tuple of parameters expected by the corresponding NICAEA distribution type
- settings (NicaeaSettings instance) – NICAEA code settings
- kind (str.) – must be “+” or “-“
- kwargs (dict.) – the keyword arguments are passed to the distribution, if callable
Returns: ( NtxNz array ) computed two point function at the selected angles (when computing the cross components these are returned in row major C ordering)
-
CAMB¶
-
lenstools.simulations.camb.
parseLog
(fname)[source]¶ Parse CAMB output log
Parameters: fname (str. or file.) – file name or file descriptor Returns: parsed log Return type: dict.
-
class
lenstools.simulations.camb.
CAMBTransferFunction
(k)[source]¶ -
add
(z, T)¶ Add transfer function information at redshift z
Parameters: - z (float.) – redshift
- T (array) – CDM transfer function from CAMB output
-
classmethod
read
(filename)¶ Load a previously pickled TransferFunction instance
Parameters: filename (str.) – name of the file from which the instance should be read Return type: TransferFunction
-
save
(filename)¶ Pickle the TransferFunction instance
Parameters: filename (str.) – name of the file to save the instance to
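A minimal sketch of storing and re-loading transfer function information (the wavenumber grid, transfer function values and file name are placeholders):
>>> import numpy as np
>>> from lenstools.simulations.camb import CAMBTransferFunction
>>> k = np.logspace(-3.0,1.0,100)
>>> tfr = CAMBTransferFunction(k)
>>> tfr.add(z=2.0,T=np.ones_like(k))
>>> tfr.save("transfer.pkl")
>>> tfr = CAMBTransferFunction.read("transfer.pkl")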
-
-
class
lenstools.simulations.camb.
CAMBTransferFromPower
(k)[source]¶ -
add
(z, T)[source]¶ Add transfer function information at redshift z
Parameters: - z (float.) – redshift
- T (array) – CDM transfer function from CAMB output
-
classmethod
read
(filename)¶ Load a previously pickled TransferFunction instance
Parameters: filename (str.) – name of the file from which the instance should be read Return type: TransferFunction
-
save
(filename)¶ Pickle the TransferFunction instance
Parameters: filename (str.) – name of the file to save the instance to
-
N-body simulation snapshot handling¶
-
class
lenstools.simulations.
Gadget2Snapshot
(fp=None, pool=None, length_unit=<Quantity 1.0 kpc>, mass_unit=<Quantity 10000000000.0 solMass>, velocity_unit=<Quantity 1.0 km / s>, header_kwargs={})[source]¶ A class that handles Gadget2 snapshots, mainly I/O from the binary format and spatial information statistics. Inherits from the abstract NbodySnapshot
-
close
()¶ Closes the snapshot file
-
cutPlaneAdaptive
(normal=2, center=<Quantity 7.0 Mpc>, left_corner=None, plane_resolution=<Quantity 0.1 Mpc>, neighbors=64, neighborDistances=None, kind='density', projectAll=False)¶ Cuts a density (or gravitational potential) plane out of the snapshot by computing the particle number density using an adaptive smoothing scheme; the plane coordinates are cartesian comoving
Parameters: - normal (int. (0,1,2)) – direction of the normal to the plane (0 is x, 1 is y and 2 is z)
- center (float. with units) – location of the plane along the normal direction
- plane_resolution (float. with units (or int.)) – plane resolution (perpendicular to the normal)
- left_corner (tuple of quantities or None) – specify the position of the lower left corner of the box; if None, the minimum of the (x,y,z) of the contained particles is assumed
- neighbors (int.) – number of nearest neighbors to use in the adaptive smoothing procedure
- neighborDistances (array with units) – precomputed distances of each particle to its N-th nearest neighbor; if None these are computed
- kind (str. ("density" or "potential")) – decide if computing a density or gravitational potential plane (this is computed solving the poisson equation)
- projectAll (bool.) – if True, all the snapshot is projected on a single slab perpendicular to the normal, ignoring the position of the center
Returns: tuple(numpy 2D array with the computed particle number density (or lensing potential),bin resolution along the axes,number of particles on the plane)
-
cutPlaneAngular
(normal=2, thickness=<Quantity 0.5 Mpc>, center=<Quantity 7.0 Mpc>, left_corner=None, plane_lower_corner=<Quantity [ 0., 0.] deg>, plane_size=<Quantity 0.15 deg>, plane_resolution=<Quantity 1.0 arcmin>, thickness_resolution=<Quantity 0.1 Mpc>, smooth=None, tomography=False, kind='density', space='real')¶ Same as cutPlaneGaussianGrid(), except that this method will return a lens plane as seen from an observer at z=0; the spatial transverse units are converted in angular units as seen from the observer
Parameters: - normal (int. (0,1,2)) – direction of the normal to the plane (0 is x, 1 is y and 2 is z)
- thickness (float. with units) – thickness of the plane
- center (float. with units) – location of the plane along the normal direction; it is assumed that the center of the plane is seen from an observer with a redshift of self.header[“redshift”]
- left_corner (tuple of quantities or None) – specify the position of the lower left corner of the box; if None, the minimum of the (x,y,z) of the contained particles is assumed
- plane_lower_corner (float with units.) – lower left corner of the plane, as seen from the observer (0,0) corresponds to the lower left corner of the snapshot
- plane_size (float with units) – angular size of the lens plane (angles start from 0 in the lower left corner)
- plane_resolution (float. with units (or int.)) – plane angular resolution (perpendicular to the normal)
- thickness_resolution (float. with units (or int.)) – plane resolution (along the normal)
- smooth (int. or None) – if not None, performs a smoothing of the angular density (or potential) with a gaussian kernel of scale “smooth x the pixel resolution”
- tomography (bool.) – if True returns the lens plane angular density for each slab, otherwise a projected density (or lensing potential) is computed
- kind (str. ("density" or "potential")) – decide if computing an angular density or lensing potential plane (this is computed solving the poisson equation)
- space (str.) – if “real” return the lens plane in real space, if “fourier” the Fourier transform is not inverted
Returns: tuple(numpy 2D or 3D array with the (unsmoothed) particle angular number density,bin angular resolution, total number of particles on the plane); the constant spatial part of the density field is subtracted (we keep the fluctuation only)
-
cutPlaneGaussianGrid
(normal=2, thickness=<Quantity 0.5 Mpc>, center=<Quantity 7.0 Mpc>, plane_resolution=4096, left_corner=None, thickness_resolution=1, smooth=1, kind='density', **kwargs)¶ Cuts a density (or lensing potential) plane out of the snapshot by computing the particle number density on a slab and performing Gaussian smoothing; the plane coordinates are cartesian comoving
Parameters: - normal (int. (0,1,2)) – direction of the normal to the plane (0 is x, 1 is y and 2 is z)
- thickness (float. with units) – thickness of the plane
- center (float. with units) – location of the plane along the normal direction
- plane_resolution (float. with units (or int.)) – plane resolution (perpendicular to the normal)
- left_corner (tuple of quantities or None) – specify the position of the lower left corner of the box; if None, the minimum of the (x,y,z) of the contained particles is assumed
- thickness_resolution (float. with units (or int.)) – plane resolution (along the normal)
- smooth (int. or None) – if not None, performs a smoothing of the density (or potential) with a gaussian kernel of scale “smooth x the pixel resolution”
- kind (str. ("density" or "potential")) – decide if computing a density or gravitational potential plane (this is computed solving the poisson equation)
- kwargs (dict.) – accepted keywords are: ‘density_placeholder’, a pre-allocated numpy array, with a RMA window opened on it; this facilitates the communication with different processors by using a single RMA window during the execution. ‘l_squared’, a pre-computed meshgrid of squared multipoles used for smoothing
Returns: tuple(numpy 2D array with the density (or lensing potential),bin resolution along the axes, number of particles on the plane)
-
getID
(first=None, last=None, save=True)[source]¶ Reads in the particle IDs (4 byte ints; reading in a subset is allowed): when first and last are specified, the numpy array convention is followed (i.e. getID(first=a,last=b)=getID()[a:b])
Parameters: - first (int. or None) – first particle in the file to be read, if None 0 is assumed
- last (int. or None) – last particle in the file to be read, if None the total number of particles is assumed
- save (bool.) – if True saves the particle IDs as an attribute
Returns: numpy array with the particle IDs
-
getPositions
(first=None, last=None, save=True)[source]¶ Reads in the particle positions (reading in a subset is allowed): when first and last are specified, the numpy array convention is followed (i.e. getPositions(first=a,last=b)=getPositions()[a:b])
Parameters: - first (int. or None) – first particle in the file to be read, if None 0 is assumed
- last (int. or None) – last particle in the file to be read, if None the total number of particles is assumed
- save (bool.) – if True saves the particle positions as an attribute
Returns: numpy array with the particle positions
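A minimal sketch of reading particle positions from a snapshot (the snapshot file name is illustrative):
>>> from lenstools.simulations import Gadget2Snapshot
>>> snap = Gadget2Snapshot.open("snapshot_001")
>>> positions = snap.getPositions(first=0,last=1000)
>>> snap.header["redshift"]
>>> snap.close()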
-
getVelocities
(first=None, last=None, save=True)[source]¶ Reads in the particle velocities (reading in a subset is allowed): when first and last are specified, the numpy array convention is followed (i.e. getVelocities(first=a,last=b)=getVelocities()[a:b])
Parameters: - first (int. or None) – first particle in the file to be read, if None 0 is assumed
- last (int. or None) – last particle in the file to be read, if None the total number of particles is assumed
- save (bool.) – if True saves the particle velocities as an attribute
Returns: numpy array with the particle velocities
-
gridID
()¶ Compute an ID for the particles in increasing order according to their position on a Nside x Nside x Nside grid; the ID is computed as x + y*Nside + z*Nside**2
Returns: the gridded IDs Return type: array of float
-
header
¶ Displays the snapshot header information
Returns: the snapshot header information in dictionary form Return type: dict.
-
lensMaxSize
()¶ Computes the maximum observed size of a lens plane cut out of the current snapshot
-
massDensity
(resolution=<Quantity 0.5 Mpc>, smooth=None, left_corner=None, save=False, density_placeholder=None)¶ Uses a C backend gridding function to compute the matter mass density fluctuation for the current snapshot: the density is evaluated using a nearest neighbor search
Parameters: - resolution (float with units or int.) – resolution below which particles are grouped together; if an int is passed, this is the size of the grid
- smooth (int. or None) – if not None, performs a smoothing of the density (or potential) with a gaussian kernel of scale “smooth x the pixel resolution”
- left_corner (tuple of quantities or None) – specify the position of the lower left corner of the box; if None, the minimum of the (x,y,z) of the contained particles is assumed
- save (bool.) – if True saves the density histogram and resolution as instance attributes
- density_placeholder (array) – if not None, it is used as a fixed memory chunk for MPI communications of the density
Returns: tuple(numpy 3D array with the (unsmoothed) matter density fluctuation on a grid,bin resolution along the axes)
-
neighborDistances
(neighbors=64)¶ Find the N-th nearest neighbors to each particle
Parameters: neighbors (int.) – neighbor order Returns: array with units
-
classmethod
open
(filename, pool=None, header_kwargs={}, **kwargs)¶ Opens a snapshot at filename
Parameters: - filename (str. or file.) – file name of the snapshot
- pool (MPIWhirlPool instance) – used to distribute the calculations on different processors; if not None, each processor takes care of one of the snapshot parts, appending “.n” to the filename
- header_kwargs (dict.) – keyword arguments to pass to the getHeader method
- kwargs (dict.) – the keyword arguments are passed to buildFilename
-
pos2R
(filename, variable_name='pos')¶ Saves the positions of the particles in a R readable format, for facilitating visualization with RGL
Parameters: - filename (str.) – name of the file on which to save the particles positions
- variable_name (str.) – name of the variable that contains the (x,y,z) positions in the R environment
-
powerSpectrum
(k_edges, resolution=None, return_num_modes=False, density_placeholder=None)¶ Computes the power spectrum of the relative density fluctuations in the snapshot at the wavenumbers specified by k_edges; a discrete particle number density is computed beforehand to prepare the FFT grid
Parameters: - k_edges (array.) – wavenumbers at which to compute the density power spectrum (must have units)
- resolution (float with units, int. or None) – optional, fix the grid resolution to some value; to be passed to the massDensity method. If none this is computed automatically from the k_edges
- return_num_modes (bool.) – if True returns the mode counting for each k bin as the last element in the return tuple
- density_placeholder (array) – if not None, it is used as a fixed memory chunk for MPI communications in the density calculations
Returns: tuple(k_values(bin centers),power spectrum at the specified k_values)
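A minimal sketch, continuing from the snapshot snap opened in the previous example (the wavenumber bins and grid resolution are illustrative):
>>> import numpy as np
>>> from astropy import units as u
>>> k_edges = np.logspace(-1.0,1.0,20) / u.Mpc
>>> k,Pk = snap.powerSpectrum(k_edges,resolution=128)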
-
reorder
()¶ Sort particles attributes according to their ID
-
savefig
(filename)¶ Save the snapshot visualization to an external file
Parameters: filename (str.) – file name to which the figure will be saved
-
setHeaderInfo
(Om0=0.26, Ode0=0.74, w0=-1.0, wa=0.0, h=0.72, redshift=100.0, box_size=<Quantity 20.833333333333336 Mpc>, flag_cooling=0, flag_sfr=0, flag_feedback=0, flag_stellarage=0, flag_metals=0, flag_entropy_instead_u=0, masses=<Quantity [ 0.00000000e+00, 1.03000000e+10, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00] solMass>, num_particles_file_of_type=None, npartTotalHighWord=array([0, 0, 0, 0, 0, 0], dtype=uint32))[source]¶ Sets the header info in the snapshot to write
-
setPositions
(positions)¶ Sets the positions in the current snapshot (with the intent of writing them to a properly formatted snapshot file)
Parameters: positions ((N,3) array with units) – positions of the particles, must have units
-
setVelocities
(velocities)¶ Sets the velocities in the current snapshot (with the intent of writing them to a properly formatted snapshot file)
Parameters: velocities ((N,3) array with units) – velocities of the particles, must have units
-
visualize
(fig=None, ax=None, scale=False, first=None, last=None, **kwargs)¶ Visualize the particles in the snapshot using the matplotlib 3D plotting engine, the kwargs are passed to the matplotlib scatter method
Parameters: scale (bool.) – if True, multiply all the (comoving) positions by the scale factor
-
write
(filename, files=1)[source]¶ Writes particle information (positions, velocities, etc…) to a properly formatted Gadget snapshot
Parameters: - filename (str.) – name of the file to which to write the snapshot
- files (int.) – number of files on which to split the writing of the snapshot (useful if the number of particles is large); if > 1 the extension “.n” is appended to the filename
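A minimal sketch of writing a snapshot from externally generated positions and velocities (box size, particle number and file name are illustrative):
>>> import numpy as np
>>> from astropy import units as u
>>> from lenstools.simulations import Gadget2Snapshot
>>> snap = Gadget2Snapshot()
>>> snap.setPositions(np.random.uniform(0.0,15.0,size=(32**3,3)) * u.Mpc)
>>> snap.setVelocities(np.zeros((32**3,3)) * u.km / u.s)
>>> snap.setHeaderInfo(box_size=15.0*u.Mpc)
>>> snap.write("gadget_ic")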
-
writeParameterFile
(filename, settings)[source]¶ Writes a Gadget2 parameter file to evolve the current snapshot using Gadget2
Parameters: - filename (str.) – name of the file to which to write the parameters
- settings (Gadget2Settings) – tunable settings of Gadget2 (see Gadget2 manual)
-
-
class
lenstools.simulations.gadget2.
Gadget2SnapshotDE
(fp=None, pool=None, length_unit=<Quantity 1.0 kpc>, mass_unit=<Quantity 10000000000.0 solMass>, velocity_unit=<Quantity 1.0 km / s>, header_kwargs={})[source]¶ A class that handles Gadget2 snapshots, mainly I/O from the binary format and spatial information statistics. Inherits from Gadget2Snapshot; assumes that the header includes Dark Energy information
-
class
lenstools.simulations.gadget2.
Gadget2SnapshotPipe
(*args, **kwargs)[source]¶ Read in the particle positions when calling the constructor, without calling fseek
-
class
lenstools.simulations.amiga.
AmigaHalos
(fp=None, pool=None, length_unit=<Quantity 1.0 kpc>, mass_unit=<Quantity 10000000000.0 solMass>, velocity_unit=<Quantity 1.0 km / s>, header_kwargs={})[source]¶ A class that handles the Amiga Halo Finder (AHF) halo output files. Inherits from the abstract NbodySnapshot
Warning: Not tested yet!
Ray tracing simulations¶
-
class
lenstools.simulations.
Plane
(data, angle, redshift=2.0, cosmology=None, comoving_distance=None, unit=Unit("rad2"), num_particles=None, masked=False, filename=None)[source]¶ -
classmethod
load
(filename, format=None, init_cosmology=True)[source]¶ Loads the Plane from an external file, whose format can be specified (only FITS implemented so far)
Parameters: - filename (str.) – name of the file from which to load the plane
- format (str.) – format of the file, only FITS implemented so far; if None, it’s detected automatically from the filename
- init_cosmology (bool.) – if True, instantiates the cosmology attribute of the PotentialPlane
Returns: PotentialPlane instance that wraps the data contained in the file
-
randomRoll
(seed=None, lmesh=None)[source]¶ Randomly shifts the plane along its axes, enforcing periodic boundary conditions
Parameters: - seed (int.) – random seed with which to initialize the generator
- lmesh (array) – the FFT frequency meshgrid (lx,ly) necessary for the calculations in fourier space; if None, a new one is computed from scratch (must have the appropriate dimensions)
-
save
(filename, format=None, double_precision=False)[source]¶ Saves the Plane to an external file, whose format can be specified (only FITS implemented so far)
Parameters: - filename (str.) – name of the file on which to save the plane
- format (str.) – format of the file, only FITS implemented so far; if None, it’s detected automatically from the filename
- double_precision (bool.) – if True saves the Plane in double precision
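A minimal sketch of loading a potential plane, rolling it and saving it back (file names are illustrative):
>>> from lenstools.simulations import PotentialPlane
>>> plane = PotentialPlane.load("plane.fits")
>>> plane.randomRoll(seed=1)
>>> plane.save("plane_rolled.fits")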
-
scaleWithTransfer
(z, tfr, with_scale_factor=False, kmesh=None, scaling_method='uniform')[source]¶ Scale the pixel values to a different redshift than the one of the plane by applying a suitable transfer function. This operation works in place
Parameters: - z (float.) – new redshift to evaluate the plane at
- tfr (TransferFunction) – CDM transfer function in Fourier space used to scale the plane pixel values
- with_scale_factor (bool.) – if True, multiply the pixel values by an additional (1+znew) / (1+z0) overall factor
- kmesh (quantity) – the comoving wavenumber meshgrid (kx,ky) necessary for the calculations in Fourier space; if None, a new one is computed from scratch (must have the appropriate dimensions)
- scaling_method (str.) – must be either “uniform” (all pixels scaled by the same factor, in which case the transfer function at low k is used) or “FFT”, in which case the full transfer function is used
-
-
class
lenstools.simulations.raytracing.
TransferSpecs
(tfr, cur2target, with_scale_factor, kmesh, scaling_method)[source]¶
-
class
lenstools.simulations.
DensityPlane
(data, angle, redshift=2.0, cosmology=None, comoving_distance=None, unit=Unit("rad2"), num_particles=None, masked=False, filename=None)[source]¶ Class handler of a lens density plane, inherits from the parent Plane class; additionally it defines redshift and comoving distance attributes that are needed for ray tracing operations
-
potential
(lmesh=None)[source]¶ Computes the lensing potential from the density plane solving the Poisson equation via FFTs
Parameters: lmesh (array) – the FFT frequency meshgrid (lx,ly) necessary for the calculations in fourier space; if None, a new one is computed from scratch (must have the appropriate dimensions) Returns: PotentialPlane instance with the computed lensing potential
-
-
class
lenstools.simulations.
PotentialPlane
(data, angle, redshift=2.0, cosmology=None, comoving_distance=None, unit=Unit("rad2"), num_particles=None, masked=False, filename=None)[source]¶ Class handler of a lens potential plane, inherits from the parent Plane class; additionally it defines redshift and comoving distance attributes that are needed for ray tracing operations
-
deflectionAngles
(x=None, y=None, lmesh=None)[source]¶ Computes the deflection angles for the given lensing potential by taking the gradient of the potential map; it is also possible to proceed with FFTs
Parameters: - x (array with units) – optional; if not None, compute the deflection angles only for rays hitting the lens at the particular x positions (mainly for speedup in case there are less light rays than the plane resolution allows; must proceed in real space to allow speedup)
- y (array with units) – optional; if not None, compute the deflection angles only for rays hitting the lens at the particular y positions (mainly for speedup in case there are less light rays than the plane resolution allows; must proceed in real space to allow speedup)
- lmesh (array) – the FFT frequency meshgrid (lx,ly) necessary for the calculations in fourier space; if None, a new one is computed from scratch (must have the appropriate dimensions)
Returns: DeflectionPlane instance, or array with deflections of rays hitting the lens at (x,y)
-
density
(x=None, y=None)[source]¶ Computes the projected density fluctuation by taking the laplacian of the potential; useful to check if the potential is reasonable
Parameters: - x (array with units) – optional; if not None, compute the density only for rays hitting the lens at the particular x positions (mainly for speedup in case there are less light rays than the plane resolution allows; must proceed in real space to allow speedup)
- y (array with units) – optional; if not None, compute the density only for rays hitting the lens at the particular y positions (mainly for speedup in case there are less light rays than the plane resolution allows; must proceed in real space to allow speedup)
Returns: DensityPlane instance with the density fluctuation data (if x and y are None), or numpy array with the same shape as x and y
-
shearMatrix
(x=None, y=None, lmesh=None)[source]¶ Computes the shear matrix for the given lensing potential; it is also possible to proceed with FFTs
Parameters: - x (array with units) – optional; if not None, compute the shear matrix only for rays hitting the lens at the particular x positions (mainly for speedup in case there are less light rays than the plane resolution allows; must proceed in real space to allow speedup)
- y (array with units) – optional; if not None, compute the shear matrix only for rays hitting the lens at the particular y positions (mainly for speedup in case there are less light rays than the plane resolution allows; must proceed in real space to allow speedup)
- lmesh (array) – the FFT frequency meshgrid (lx,ly) necessary for the calculations in fourier space; if None, a new one is computed from scratch (must have the appropriate dimensions)
Returns: ShearTensorPlane instance, or array with deflections of rays hitting the lens at (x,y)
-
-
class
lenstools.simulations.raytracing.
DeflectionPlane
(data, angle, redshift=2.0, comoving_distance=None, cosmology=None, unit=Unit("rad"))[source]¶ Class handler of a lens deflection plane, inherits from the parent Spin1 class and holds the values of the deflection angles of the light rays that cross a potential plane
-
convergence
()[source]¶ Computes the convergence from the deflection angles by taking the appropriate components of the jacobian
Returns: ConvergenceMap instance with the computed convergence values
-
jacobian
()[source]¶ Computes the jacobian of the deflection angles, useful to compute shear and convergence; units are handled properly
Returns: the jacobian of the deflection field in array form, of shape (4,:,:) where the four components are, respectively, xx,xy,yx,yy
-
-
class
lenstools.simulations.raytracing.
ShearTensorPlane
(data, angle, redshift=2.0, comoving_distance=None, cosmology=None, unit=Unit(dimensionless))[source]¶ Class handler of a plane of shear matrices, inherits from the parent Spin2 class and holds the 3 values of the symmetric shear matrix (2 diagonal + 1 off diagonal), for each pixel
-
class
lenstools.simulations.
RayTracer
(lens_mesh_size=None, lens_type=<class 'lenstools.simulations.raytracing.PotentialPlane'>)[source]¶ Class handler of ray tracing operations: it mainly computes the path corrections of light rays that travel through a set of gravitational lenses
-
addLens
(lens_specification)[source]¶ Adds a gravitational lens to the ray tracer, either by putting in a lens plane, or by specifying the name of a file which contains the lens specifications
Parameters: lens_specification – specifications of the lens to add, either as a tuple (filename,distance,redshift) or as a PotentialPlane instance
-
convergenceBorn
(initial_positions, z=2.0, save_intermediate=False, real_trajectory=False)[source]¶ Computes the convergence directly integrating the lensing density along the line of sight (real or unperturbed)
Parameters: - initial_positions (numpy array or quantity) – initial angular positions of the light ray bucket, according to the observer; if unitless, the positions are assumed to be in radians. initial_positions[0] is x, initial_positions[1] is y
- z (float.) – redshift of the sources
- save_intermediate (bool.) – save the intermediate values of the convergence as successive lenses are crossed
- real_trajectory (bool.) – if True, integrate the density on the real light ray trajectory; if False the unperturbed trajectory is used
Returns: convergence values at each of the initial positions
-
convergencePostBorn2
(initial_positions, z=2.0, save_intermediate=False, include_first_order=False, include_ll=True, include_gp=True, transpose_up_to=-1, callback=None, **kwargs)[source]¶ Computes the convergence at second post-born order with a double line of sight integral
Parameters: - initial_positions (numpy array or quantity) – initial angular positions of the light ray bucket, according to the observer; if unitless, the positions are assumed to be in radians. initial_positions[0] is x, initial_positions[1] is y
- z (float.) – redshift of the sources
- save_intermediate (bool.) – save the intermediate values of the convergence as successive lenses are crossed
- include_first_order (bool.) – include the first order contribution to the convergence
- include_ll (bool.) – include lens-lens coupling
- include_gp (bool.) – include geodesic perturbation
- transpose_up_to (int.) – transpose all the lenses before a certain index before integration
- callback (callable.) – this function is called on each contribution to the convergence during the LOS integration. The signature of the callback is callback(array_ov_values,tracer,k,type,**kwargs)
- kwargs (dict.) – additional keyword arguments to be passed to the callback
Returns: convergence values (2-post born) at each of the initial positions
-
displayRays
(initial_positions, z=2.0, projection='2d', fig=None, ax=None, axisbg='grey', raycolor='orange', lenscolor='blue')[source]¶ Uses matplotlib to display a visual of the lens system and of the deflection that the light rays which traverse it undergo
Parameters: - initial_positions (numpy array or quantity) – initial angular positions of the light ray bucket, according to the observer; if unitless, the positions are assumed to be in radians. initial_positions[0] is x, initial_positions[1] is y
- z (float. or array) – redshift of the sources; if an array is passed, a redshift must be specified for each ray, i.e. z.shape==initial_positions.shape[1:]
- projection (str.) – can be “2d” for the projections of the ray positions along x and y or “3d” for the full 3d view
- fig (matplotlib figure) – figure object that owns the plot
-
omegaPostBorn2
(initial_positions, z=2.0, save_intermediate=False)[source]¶ Computes the omega at second post-born order
Parameters: - initial_positions (numpy array or quantity) – initial angular positions of the light ray bucket, according to the observer; if unitless, the positions are assumed to be in radians. initial_positions[0] is x, initial_positions[1] is y
- z (float.) – redshift of the sources
- save_intermediate (bool.) – save the intermediate values of the convergence as successive lenses are crossed
Returns: omega values (2-post born) at each of the initial positions
-
randomRoll
(seed=None)[source]¶ Randomly rolls all the lenses in the system along both axes
Parameters: seed (int.) – random seed with which to initialize the generator
-
reorderLenses
()[source]¶ Reorders the lenses in the ray tracer according to their comoving distance from the observer
-
shoot
(initial_positions, z=2.0, initial_deflection=None, kind='positions', save_intermediate=False, compute_all_deflections=False, callback=None, transfer=None, **kwargs)[source]¶ Shoots a bucket of light rays from the observer to the sources at redshift z (backward ray tracing), through the system of gravitational lenses, and computes the deflection statistics
Parameters: - initial_positions (numpy array or quantity) – initial angular positions of the light ray bucket, according to the observer; if unitless, the positions are assumed to be in radians. initial_positions[0] is x, initial_positions[1] is y
- z (float. or array) – redshift of the sources; if an array is passed, a redshift must be specified for each ray, i.e. z.shape==initial_positions.shape[1:]
- initial_deflection (numpy array or quantity) – if not None, this is the initial deflection light rays undergo with respect to the line of sight (equivalent to specifying the first derivative IC on the lensing ODE); must have the same shape as initial_positions
- kind (str.) – what deflection statistics to compute; “positions” will calculate the ray deflections after they crossed the last lens, “jacobian” will compute the lensing jacobian matrix after the last lens, “shear” and “convergence” will compute the corresponding weak lensing statistics
- save_intermediate (bool.) – save the intermediate positions of the rays too
- compute_all_deflections (bool.) – if True, computes the gradients of the lensing potential at every pixel on the lens (might be overkill if Nrays<<Npixels); must be True if the computation is done with FFTs
- callback (callable) – if not None, this callback function is called on the current ray positions array at each step in the ray tracing; the current raytracing instance and the step number are passed as additional arguments, hence callback must match this signature
- transfer (TransferSpecs) – if not None, scales the fluctuations on each lens plane to a different redshift (before computing the ray deflections) using a provided transfer function
- kwargs (dict.) – the keyword arguments are passed to the callback if not None
Returns: angular positions (or jacobians) of the light rays after the last lens crossing
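A minimal sketch of a backward ray tracing run (plane file names, source redshift and field of view are illustrative; the loaded planes are assumed to carry their redshift and comoving distance information):
>>> import numpy as np
>>> from astropy import units as u
>>> from lenstools.simulations import RayTracer, PotentialPlane
>>> tracer = RayTracer()
>>> tracer.addLens(PotentialPlane.load("plane0.fits"))
>>> tracer.addLens(PotentialPlane.load("plane1.fits"))
>>> tracer.reorderLenses()
>>> b = np.linspace(0.0,3.0,512)
>>> xx,yy = np.meshgrid(b,b)
>>> pos = np.array([xx,yy]) * u.deg
>>> fin = tracer.shoot(pos,z=2.0)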
-
shootForward
(source_positions, z=2.0, save_intermediate=False, grid_resolution=512, interpolation='nearest')[source]¶ Shoots a bucket of light rays from the source at redshift z to the observer at redshift 0 (forward ray tracing) and computes the corresponding deflections using backward ray tracing plus a suitable interpolation scheme (KD Tree based)
Parameters: - source_positions (numpy array or quantity) – angular positions of the unlensed sources
- z (float.) – redshift of the sources
- save_intermediate (bool.) – if True computes and saves the apparent image distortions after each lens is crossed (can be computationally expensive)
- grid_resolution (int.) – the number of points on a side of the interpolation grid (must be chosen large enough, according to the number of sources, to resolve them)
- interpolation (str.) – only “nearest” implemented so far
Returns: apparent positions of the sources as seen from the observer
-
Weak Lensing Simulation Pipeline¶
Directory tree handling¶
-
class
lenstools.pipeline.simulation.
SimulationBatch
(environment, syshandler=<lenstools.pipeline.remote.LocalSystem object>, indicize=False)[source]¶ Class handler of a batch of weak lensing simulations that share the same environment settings
-
archive
(name, pool=None, chunk_size=1, **kwargs)[source]¶ Archives the available batch resources into a tar gzipped archive; the resource file/directory names are retrieved with the list method. The archives will be written to the simulation batch storage directory
Parameters: - name (str.) – name of the archive
- pool (MPIPool) – MPI Pool used to parallelize compression
- kwargs (dict.) – the keyword arguments are passed to the list method
-
available
¶ Lists all currently available models in the home and storage directories
Returns: list. Return type: SimulationModel
-
commit
(message)[source]¶ If the Simulation Batch is put under version control in a git repository, this method commits the newly added models, collections, realizations or map/plane sets
Parameters: message (str.) – commit message
-
copyTree
(path, syshandler=<lenstools.pipeline.remote.LocalSystem object>)[source]¶ Copies the current batch directory tree into a separate path
Parameters: - path (str.) – path into which to copy the current batch directory tree
- syshandler (SystemHandler) – system handler (can be a remote)
-
classmethod
current
(name='environment.ini', syshandler=<lenstools.pipeline.remote.LocalSystem object>, indicize=False)[source]¶ This method looks in the current directory for a configuration file named “environment.ini”; if it finds one, it returns a SimulationBatch instance that corresponds to the one pointed to by “environment.ini” (default)
Parameters: - name (str.) – name of the INI file with the environment settings, defaults to ‘environment.ini’
- syshandler (SystemHandler) – system handler that allows to override the methods used to create directories and do I/O from files, must implement the abstract type SystemHandler
Returns: Simulation batch pointed to by “environment.ini”, or None
Return type: SimulationBatch
-
getModel
(cosmo_id)[source]¶ Instantiate a SimulationModel object corresponding to the cosmo_id provided
Parameters: cosmo_id (str.) – cosmo_id of the model Return type: SimulationModel
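A minimal sketch (assumes an “environment.ini” settings file is present in the current directory):
>>> from lenstools.pipeline.simulation import SimulationBatch
>>> batch = SimulationBatch.current("environment.ini")
>>> batch.available
>>> batch.info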
-
info
¶ Returns summary info of the simulation batch corresponding to the current environment
Returns: info in dictionary format
-
list
(resource=None, which=None, chunk_size=10, **kwargs)[source]¶ Lists the available resources in the simulation batch (collections,mapsets,etc…)
Parameters: - resource (None or callable) – custom function to call on each batch.models element, must return a string. If None the list of Storage model directories is returned
- which (tuple. or callable) – extremes of the model numbers to get (if None all models are processed); if callable, filter(which,self.models) gives the models to archive
- chunk_size (int.) – size of output chunk
- kwargs (dict.) – the keyword arguments are passed to resource
Returns: requested resources
Return type: list.
-
newModel
(cosmology, parameters)[source]¶ Create a new simulation model, given a set of cosmological parameters
Parameters: - cosmology (LensToolsCosmology) – cosmological model to simulate
- parameters (list.) – cosmological parameters to keep track of
Return type: SimulationModel
-
unpack
(where, which=None, pool=None)[source]¶ Unpacks the compressed simulation batch products into the storage directory: the resources of each model must be contained in a file called <cosmo_id>.tar.gz
Parameters: - where (str.) – path of the compressed resources
- which (tuple.) – extremes of the model numbers to unpack (if None all models are unpacked)
- pool (MPIPool) – MPI Pool used to parallelize de-compression
-
writeCAMBSubmission
(realization_list, job_settings, job_handler, config_file='camb.param', chunks=1, **kwargs)[source]¶ Writes CAMB submission script
Parameters: - realization_list (list. of str.) – list of ics to generate in the form “cosmo_id|geometry_id”
- chunks (int.) – number of independent jobs in which to split the submission (one script per job will be written)
- job_settings (JobSettings) – settings for the job (resources, etc…)
- job_handler (JobHandler) – handler of the cluster specific features (job scheduler, architecture, etc…)
- config_file (str.) – name of the CAMB configuration file
- kwargs (dict.) – you can set one_script=True to include all the executables sequentially in a single script
-
writeNGenICSubmission
(realization_list, job_settings, job_handler, config_file='ngenic.param', chunks=1, **kwargs)[source]¶ Writes NGenIC submission script
Parameters: - realization_list (list. of str.) – list of ics to generate in the form “cosmo_id|geometry_id|icN”
- chunks (int.) – number of independent jobs in which to split the submission (one script per job will be written)
- job_settings (JobSettings) – settings for the job (resources, etc…)
- job_handler (JobHandler) – handler of the cluster specific features (job scheduler, architecture, etc…)
- config_file (str.) – name of the NGenIC config file
- kwargs (dict.) – you can set one_script=True to include all the executables sequentially in a single script
-
writeNbodySubmission
(realization_list, job_settings, job_handler, config_file='gadget2.param', chunks=1, **kwargs)[source]¶ Writes N-body simulation submission script
Parameters: - realization_list (list. of str.) – list of ics to generate in the form “cosmo_id|geometry_id|icN”
- chunks (int.) – number of independent jobs in which to split the submission (one script per job will be written)
- job_settings (JobSettings) – settings for the job (resources, etc…)
- job_handler (JobHandler) – handler of the cluster specific features (job scheduler, architecture, etc…)
- config_file (str.) – name of the Nbody configuration file
- kwargs (dict.) – you can set one_script=True to include all the executables sequentially in a single script
-
writePlaneSubmission
(realization_list, job_settings, job_handler, chunks=1, **kwargs)[source]¶ Writes lens plane generation submission script
Parameters: - realization_list (list. of str.) – list of ics to generate in the form “cosmo_id|geometry_id|icN”
- chunks (int.) – number of independent jobs in which to split the submission (one script per job will be written)
- job_settings (JobSettings) – settings for the job (resources, etc…)
- job_handler (JobHandler) – handler of the cluster specific features (job scheduler, architecture, etc…)
- kwargs (dict.) – keyword arguments accepted are “environment_file” to specify the environment settings for the current batch, “plane_config_file” to specify the lensing option for plane generation script. Additionally you can set one_script=True to include all the executables sequentially in a single script
-
writeRaySubmission
(realization_list, job_settings, job_handler, chunks=1, **kwargs)[source]¶ Writes raytracing submission script
Parameters: - realization_list (list. of str.) – list of ics to generate in the form “cosmo_id|geometry_id”
- chunks (int.) – number of independent jobs in which to split the submission (one script per job will be written)
- job_settings (JobSettings) – settings for the job (resources, etc…)
- job_handler (JobHandler) – handler of the cluster specific features (job scheduler, architecture, etc…)
- kwargs (dict.) – keyword arguments accepted are “environment_file” to specify the environment settings for the current batch, “raytracing_config_file” to specify the lensing option for the ray tracing. Additionally you can set one_script=True to include all the executables sequentially in a single script
-
writeSubmission
(realization_list, job_settings, job_handler, job_executable='lenstools.custom', chunks=1, **kwargs)[source]¶ Writes a generic job submission script
Parameters: - realization_list (list. of str.) – list of ics to generate in the form “cosmo_id|geometry_id”
- chunks (int.) – number of independent jobs in which to split the submission (one script per job will be written)
- job_settings (JobSettings) – settings for the job (resources, etc…)
- job_handler (JobHandler) – handler of the cluster specific features (job scheduler, architecture, etc…)
- job_executable (str.) – name of the executable that will be run
- kwargs (dict.) – keyword arguments accepted are “environment_file” to specify the environment settings for the current batch, “config_file” to specify the job specific configuration file. Additionally you can set one_script=True to include all the executables sequentially in a single script
-
-
class
lenstools.pipeline.simulation.
SimulationModel
(cosmology=LensToolsCosmology(H0=72 km / (Mpc s), Om0=0.26, Ode0=0.74, sigma8=0.8, ns=0.96, w0=-1, wa=0, Tcmb0=0 K, Neff=3.04, m_nu=None, Ob0=0.046), environment=None, parameters=['Om', 'Ol', 'w', 'ns', 'si'], **kwargs)[source]¶ Class handler of a weak lensing simulation model, defined by a set of cosmological parameters
-
collections
¶ Lists all the available collections for a model
Returns: list. Return type: SimulationCollection
-
getTelescopicMapSet
(setname)[source]¶ Returns an instance of the telescopic map set with the specified name
Parameters: setname (str.) – name of the map set Return type: SimulationTelescopicMaps
-
mkdir
(directory)¶ Create a sub-directory inside the current instance home and storage paths
Parameters: directory (str.) – name of the directory to create
-
newCollection
(box_size=<Quantity 240.0 Mpc>, nside=512)[source]¶ Instantiate a new simulation collection with the specified settings
-
newTelescopicMapSet
(collections, redshifts, settings)[source]¶ Create a new telescopic map set with the provided settings
Parameters: - collections (list.) – list of the SimulationCollection instances that participate in the telescopic map set
- redshifts (array.) – redshifts that mark the transition from one collection to the other during raytracing
- settings (TelescopicMapSettings) – settings for the new map set
Return type: SimulationTelescopicMaps
-
path
(filename, where='storage_subdir')¶ Returns the complete path to the resource corresponding to filename; returns None if the resource does not exist
Parameters: - filename (str.) – name of the resource
- where (str.) – where to look for the resource, can be “home_subdir” or “storage_subdir”
Returns: full path to the resource
Return type: str.
-
telescopicmapsets
¶ Lists all the available telescopic map sets for a model
Returns: list. Return type: SimulationTelescopicMaps
-
-
class
lenstools.pipeline.simulation.
SimulationCollection
(cosmology, environment, parameters, box_size, nside, **kwargs)[source]¶ Class handler of a collection of simulations that share model parameters
-
camb2ngenic
(z, input_root='camb')[source]¶ Reads a CAMB power spectrum file and converts it into an N-GenIC readable format
Parameters: - z (float.) – redshift of the matter power spectrum file to convert
- input_root (str.) – name root of camb products
-
getMapSet
(setname)[source]¶ Get access to a pre-existing map set
Parameters: setname (str.) – name of the map set to access
-
loadTransferFunction
(name_root='camb_matterpower')[source]¶ Loads the CDM transfer function calculated with CAMB (k, delta(k)/k^2) for a unit super-horizon perturbation
Parameters: name_root (str.) – root of the file names that contain the transfer function Return type: CAMBTransferFromPower
-
newMapSet
(settings)[source]¶ Create a new map set with the provided settings
Parameters: settings (MapSettings) – settings for the new map set Return type: SimulationMaps
-
realizations
¶ List the available realizations (or independent simulations) for the current collection
Returns: list. Return type: SimulationIC
-
writeCAMB
(z, settings, fname='camb.param', output_root='camb')[source]¶ Generates the parameter file that CAMB needs to read in order to compute the matter power spectrum for the current collection
Parameters: - settings (CAMBSettings) – CAMB tunable settings
- z (array.) – redshifts at which CAMB needs to compute the matter power spectrum
- fname (str.) – name of the parameter file to write
- output_root (str.) – output root of camb products
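A minimal sketch of the CAMB workflow at the collection level, assuming collection is a SimulationCollection instance (the file naming follows the defaults above):

>>> import numpy as np
>>> from lenstools.pipeline.settings import CAMBSettings

>>> collection.writeCAMB(z=np.array([0.0]),settings=CAMBSettings())
>>> # run CAMB on the generated parameter file, then convert its output for N-GenIC
>>> collection.camb2ngenic(z=0.0,input_root="camb")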
-
-
class
lenstools.pipeline.simulation.
SimulationIC
(cosmology, environment, parameters, box_size, nside, ic_index, seed, ICFileBase='ics', SnapshotFileBase='snapshot', **kwargs)[source]¶ Class handler of a simulation with a defined initial condition
-
getPlaneSet
(setname)[source]¶ Instantiate a SimulationPlanes object that handles a specific plane set; returns None if the plane set does not exist
Parameters: setname (str.) – name of the plane set Return type: SimulationPlanes
-
writeGadget2
(settings)[source]¶ Generates the parameter file that Gadget2 needs to read to evolve the current initial condition in time
Parameters: settings (Gadget2Settings) – Gadget2 tunable settings
-
writeNGenIC
(settings)[source]¶ Generates the parameter file that NGenIC needs to read to generate the current initial condition
Parameters: settings (NGenICSettings) – NGenIC tunable settings
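A minimal sketch, assuming ic is a SimulationIC instance obtained from its parent collection; the default settings are used purely for illustration:

>>> from lenstools.pipeline.settings import NGenICSettings,Gadget2Settings

>>> ic.writeNGenIC(NGenICSettings())
>>> ic.writeGadget2(Gadget2Settings())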
-
-
class
lenstools.pipeline.simulation.
SimulationPlanes
(cosmology, environment, parameters, box_size, nside, ic_index, seed, ICFileBase, SnapshotFileBase, settings, **kwargs)[source]¶ Class handler of a set of lens planes belonging to a particular simulation
-
class
lenstools.pipeline.simulation.
SimulationMaps
(cosmology, environment, parameters, box_size, nside, settings, **kwargs)[source]¶ Class handler of a set of lensing maps, which are the final products of the lensing pipeline
Tunable settings¶
-
class
lenstools.pipeline.settings.
EnvironmentSettings
(home='SimTest/Home', storage='SimTest/Storage')[source]¶ This class handles the system specific environment settings, such as directory paths, modules, etc…
-
class
lenstools.pipeline.settings.
CAMBSettings
(**kwargs)[source]¶ Class handler of the tunable CAMB settings -
write
(output_root, cosmology, redshifts)[source]¶ Writes a CAMB parameter file
Parameters: - output_root (str.) – output_root for the files that CAMB will produce in output
- cosmology (FLRW) – cosmological model to generate the parameter file for
- redshifts (array.) – redshifts on which to compute the matter power spectrum and transfer function
Returns: string object
Return type: StringIO
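A minimal sketch, assuming cosmo is a LensToolsCosmology (or other compatible FLRW) instance; the output file name is arbitrary:

>>> import numpy as np
>>> from lenstools.pipeline.settings import CAMBSettings

>>> camb_settings = CAMBSettings()
>>> param_file = camb_settings.write(output_root="camb",cosmology=cosmo,redshifts=np.array([0.0]))
>>> with open("camb.param","w") as fp:
...     fp.write(param_file.getvalue())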
-
-
class
lenstools.pipeline.settings.
NGenICSettings
(**kwargs)[source]¶ Class handler of NGenIC settings
-
class
lenstools.pipeline.settings.
Gadget2Settings
(**kwargs)[source]¶ Class handler of the tunable settings in a Gadget2 run
-
class
lenstools.pipeline.settings.
PlaneSettings
(**kwargs)[source]¶ Class handler of plane generation settings from constant time Nbody snapshots
-
class
lenstools.pipeline.settings.
MapSettings
(**kwargs)[source]¶ Class handler of map generation settings
-
class
lenstools.pipeline.settings.
TelescopicMapSettings
(**kwargs)[source]¶ Class handler of telescopic simulation map generation settings
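These settings objects are typically constructed with keyword arguments (or left at their defaults) and handed to the corresponding new*Set methods; a minimal sketch, assuming collection is a SimulationCollection instance:

>>> from lenstools.pipeline.settings import MapSettings

>>> map_settings = MapSettings()
>>> maps = collection.newMapSet(map_settings)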
Cluster deployment¶
-
class
lenstools.pipeline.deploy.
JobHandler
[source]¶ Base class handler of the cluster specific features (job scheduler, architecture, etc…) -
writeExecution
(executables, cores, settings)[source]¶ Write the execution part of the script
Parameters: - executables (list.) – list of executables to run on the compute nodes
- cores (list.) – list of numbers of cores for each executable (must have the same length as executables)
- settings (JobSettings) – job settings
Returns: StringIO object
-
writePreamble
(settings, auto_num_nodes=True)[source]¶ Writes the preamble of the job script (resource requests, job name, etc…)
Parameters: - settings (JobSettings) – job settings
- auto_num_nodes (bool.) – if True, the number of requested nodes is computed automatically from the number of requested cores (knowing the cluster specifications)
Returns: StringIO object
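A minimal sketch of how a handler assembles a job script, using one of the concrete cluster handlers documented below; job_settings is assumed to be a JobSettings instance, and the executable name and core count are hypothetical:

>>> from lenstools.pipeline.cluster import CoriHandler

>>> handler = CoriHandler()
>>> preamble = handler.writePreamble(job_settings)
>>> execution = handler.writeExecution(["Gadget2"],[256],job_settings)
>>> script = preamble.getvalue() + execution.getvalue()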
-
Cluster specific settings¶
-
class
lenstools.pipeline.cluster.
CoriHandler
[source]¶ Job handler for the NERSC Cori Phase 1 cluster
-
writeExecution
(executables, cores, settings)¶ Write the execution part of the script
Parameters: - executables (list.) – list of executables to run on the compute nodes
- cores (list.) – list of numbers of cores for each executable (must have the same length as executables)
- settings (JobSettings) – job settings
Returns: StringIO object
-
writePreamble
(settings, auto_num_nodes=True)¶ Writes the preamble of the job script (resource requests, job name, etc…)
Parameters: - settings (JobSettings) – job settings
- auto_num_nodes (bool.) – if True, the number of requested nodes is computed automatically from the number of requested cores (knowing the cluster specifications)
Returns: StringIO object
-
-
class
lenstools.pipeline.cluster.
EdisonHandler
[source]¶ Job handler for the NERSC Edison cluster
-
writeExecution
(executables, cores, settings)¶ Write the execution part of the script
Parameters: - executables (list.) – list of executables to run on the compute nodes
- cores (list.) – list of numbers of cores for each executable (must have the same length as executables)
- settings (JobSettings) – job settings
Returns: StringIO object
-
writePreamble
(settings, auto_num_nodes=True)¶ Writes the preamble of the job script (resource requests, job name, etc…)
Parameters: - settings (JobSettings) – job settings
- auto_num_nodes (bool.) – if True, the number of requested nodes is computed automatically from the number of requested cores (knowing the cluster specifications)
Returns: StringIO object
-
-
class
lenstools.pipeline.cluster.
StampedeHandler
[source]¶ Job handler for the TACC Stampede cluster
-
writeExecution
(executables, cores, settings)¶ Write the execution part of the script
Parameters: - executables (list.) – list of executables to run on the compute nodes
- cores (list.) – list of numbers of cores for each executable (must have the same length as executables)
- settings (JobSettings) – job settings
Returns: StringIO object
-
writePreamble
(settings, auto_num_nodes=True)¶ Writes the preamble of the job script (resource requests, job name, etc…)
Parameters: - settings (JobSettings) – job settings
- auto_num_nodes (bool.) – if True, the number of requested nodes is computed automatically from the number of requested cores (knowing the cluster specifications)
Returns: StringIO object
-
Real observation sets¶
-
class
lenstools.observations.
CFHTLens
(root_path=None)[source]¶ Class handler of the CFHTLens reduced data set, already split into 13 3x3 deg^2 subfields
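A minimal sketch, assuming the reduced data set has been downloaded locally (the path is hypothetical); the subfield convergence maps can then be loaded as ConvergenceMap instances:

>>> from lenstools.observations import CFHTLens

>>> cfht = CFHTLens(root_path="/path/to/CFHTLens")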
Limber integration¶
-
class
lenstools.simulations.limber.
LimberIntegrator
(cosmoModel=FlatLambdaCDM(name="WMAP9", H0=69.3 km / (Mpc s), Om0=0.286, Tcmb0=2.725 K, Neff=3.04, m_nu=[ 0. 0. 0.] eV, Ob0=0.0463))[source]¶ A 3D power spectrum integrator that will compute the convergence power spectrum using the Limber approximation.
Parameters: cosmoModel (astropy.cosmology) – one of the astropy.cosmology objects (the WMAP9 cosmology is set by default) -
convergencePowerSpectrum
(l)[source]¶ Computes the convergence power spectrum with the Limber integral of the 3D matter power spectrum; this still assumes a single source redshift at z0 = max(z)
Parameters: l (array) – multipole moments at which to compute the convergence power spectrum Returns: array – the convergence power spectrum at the l values specified
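A minimal sketch, assuming the 3D matter power spectrum has already been loaded into the integrator (that loading step is not shown here):

>>> import numpy as np
>>> from lenstools.simulations.limber import LimberIntegrator

>>> integrator = LimberIntegrator()
>>> l = np.arange(200.0,10000.0,200.0)
>>> Cl = integrator.convergencePowerSpectrum(l)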
-
Defaults¶
-
lenstools.utils.defaults.
load_fits_default_convergence
(filename)[source]¶ This is the default convergence FITS file loader; it assumes that the convergence map is stored in a single image FITS file which has an ANGLE keyword in the header
Parameters: filename – Name of the FITS file that contains the convergence map Returns: tuple – (angle,ndarray – kappa; kappa is the convergence map) Raises: IOError if the FITS file cannot be opened or does not exist
-
lenstools.utils.defaults.
load_fits_default_shear
(filename)[source]¶ This is the default shear FITS file loader; it assumes that the two components of the shear are stored in a single image FITS file which has an ANGLE keyword in the header
Parameters: filename – Name of the FITS file that contains the shear map Returns: tuple – (angle,ndarray – gamma; gamma[0] is the gamma1 map, gamma[1] is the gamma2 map); the maps must follow matrix ordering, i.e. the first axis (0) is y and the second axis (1) is x. This matters for the E/B mode decomposition Raises: IOError if the FITS file cannot be opened or does not exist
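For illustration, a usage sketch analogous to the convergence case; it assumes that ShearMap is importable from the lenstools top level and that shear.fit is a FITS file in the expected format:

>>> from lenstools import ShearMap
>>> from lenstools.utils.defaults import load_fits_default_shear

>>> shear_map = ShearMap.load("shear.fit",format=load_fits_default_shear)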
-
lenstools.utils.defaults.
measure_power_spectrum
(filename, l_edges, columns=None)[source]¶ Default ensemble loader: reads a FITS data file containing a convergence map and measures its power spectrum
Parameters: - filename (str.) – name of the FITS file that contains the convergence map
- l_edges (array) – multipole bin edges
- columns – optional column selection (default is None)
Returns: ndarray of the measured statistics
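A minimal sketch, assuming map.fit is a convergence map in the default FITS format:

>>> import numpy as np
>>> from lenstools.utils.defaults import measure_power_spectrum

>>> l_edges = np.arange(200.0,5000.0,200.0)
>>> Pl = measure_power_spectrum("map.fit",l_edges)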
Miscellaneous utilities¶
MPI¶
-
class
lenstools.utils.mpi.
MPIWhirlPool
(comm=None, debug=False, loadbalance=False)[source]¶ MPI class handler; inherits from the MPI pool and adds one-sided communication utilities (using RMA windows)
-
accumulate
(op=None)[source]¶ Accumulates all the window data on the master, performing a custom operation (default is sum)
-
close
()¶ Sends a message to all the pool members containing the special _close_pool_message sentinel.
-
is_master
()¶ Is the current process the master?
-
map
(function, tasks)¶ Like the built-in
map()
function, apply a function to all of the values in a list and return the list of results.
Parameters: - function – The function to apply to the list.
- tasks – The list of elements.
-
openWindow
(memory, window_type='sendrecv')[source]¶ Create an RMA window that looks from the master process onto all the other workers
Parameters: memory (numpy nd array) – memory buffer on which to open the window
-
wait
()¶ If this isn’t the master process, wait for instructions.
-