FRAGMENT-MNP model#

Mechanistic model of Micro and NanoPlastic FRAGMentation in the ENvironmenT.

class fragmentmnp.FragmentMNP(config: dict, data: dict, validate: bool = True)[source]#

The class that controls usage of the FRAGMENT-MNP model

Parameters
  • config (dict) – Model configuration options

  • data (dict) – Model input data

  • validate (bool, default=True) – Should config and data be validated? It is strongly recommended to use validation, but this option is provided if you are certain your config and data are correct and wish to speed up model initialisation

mass_to_particle_number(mass)[source]#

Convert mass (concentration) to particle number (concentration).
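As noted under run(), the conversion assumes spherical particles with the density given in the input data. A minimal sketch of that conversion (function and argument names here are illustrative, not the model's internal API):

```python
import numpy as np

def mass_to_particle_number(mass_conc, diameters, density):
    """Convert mass concentrations (kg/m3) to particle number
    concentrations (m-3), assuming spherical particles.

    `diameters` holds the representative particle diameter (m) for
    each size class; `density` is the polymer density (kg/m3).
    """
    # Mass of one sphere of diameter d: density * (pi / 6) * d^3
    particle_mass = density * (np.pi / 6.0) * np.asarray(diameters) ** 3
    return np.asarray(mass_conc) / particle_mass

# 1 kg/m3 of 1 mm spheres with density 1000 kg/m3
n = mass_to_particle_number([1.0], [1e-3], 1000.0)
```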

run() FMNPOutput[source]#

Run the model with the config and data provided at initialisation.

Return type

fragmentmnp.output.FMNPOutput object containing model output

Notes

The model numerically solves the following differential equation for each size class, to give a time series of mass concentrations \(c\), where \(k\) is the current size class and \(i\) indexes the daughter size classes.

\[\frac{dc_k}{dt} = -k_{\text{frag},k} c_k + \sum_i f_{i,k} k_{\text{frag},i} c_i - k_{\text{diss},k} c_k\]

Here, \(k_{\text{frag},k}\) is the fragmentation rate of size class k, \(f_{i,k}\) is the fraction of daughter fragments produced from a fragmenting particle of size i that are of size k, and \(k_{\text{diss},k}\) is the dissolution rate from size class k.

Mass concentrations are converted to particle number concentrations by assuming spherical particles with the density given in the input data.
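The mass balance above can be sketched as a right-hand-side function for a standard ODE solver. This is an illustration of the equation, not the model's internal implementation; `fsd[i, k]` follows the definition of \(f_{i,k}\) given above:

```python
import numpy as np

def dcdt(t, c, k_frag, k_diss, fsd):
    """Right-hand side of dc_k/dt = -k_frag,k c_k
    + sum_i f_{i,k} k_frag,i c_i - k_diss,k c_k.

    `fsd[i, k]` is the fraction of fragmenting mass from size
    class i that ends up in size class k.
    """
    loss = -(k_frag + k_diss) * c    # fragmentation and dissolution losses
    gain = fsd.T @ (k_frag * c)      # mass gained from fragmenting classes
    return loss + gain
```

A solver such as scipy.integrate.solve_ivp can then integrate this function over the model timesteps. Note that when k_diss is zero and each row of `fsd` sums to 1, total mass is conserved.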

static set_fsd(n: int, psd: ndarray[Any, dtype[float64]], beta: float) ndarray[Any, dtype[float64]][source]#

Set the fragment size distribution matrix, assuming that fragmentation events result in a split in mass between daughter fragments that scales proportionally to \(d^\beta\), where \(d\) is the particle diameter and \(\beta\) is an empirical fragment size distribution parameter. For example, if \(\beta\) is negative, then a larger proportion of the fragmenting mass goes to smaller size classes than larger.

For an equal split between daughter size classes, set \(\beta\) to 0.

Parameters
  • n (int) – Number of particle size classes

  • psd (np.ndarray) – Particle size distribution

  • beta (float) – Fragment size distribution empirical parameter

Returns

Matrix of fragment size distributions for all size classes

Return type

np.ndarray
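A minimal sketch of such a matrix, assuming psd holds size-class diameters in ascending order and that fragments only go to smaller classes (these assumptions, and the function name, are illustrative and may differ from the library's implementation):

```python
import numpy as np

def fsd_matrix(psd, beta):
    """Fragment size distribution sketch: row i gives the fraction of
    mass from a fragmenting particle in size class i that goes to each
    smaller daughter class k, split in proportion to d_k ** beta.
    """
    n = len(psd)
    fsd = np.zeros((n, n))
    for i in range(1, n):  # the smallest class produces no fragments
        weights = np.asarray(psd[:i], dtype=float) ** beta
        fsd[i, :i] = weights / weights.sum()  # each row sums to 1
    return fsd
```

With beta=0 the weights are all equal, reproducing the equal split between daughter classes described above; a negative beta weights the split towards smaller classes.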

static set_k_distribution(dims: dict, k_f: float, k_0: float = 0.0, params: dict = {}, is_compound: bool = True) ndarray[Any, dtype[float64]][source]#

Create a distribution based on the rate constant scaling factor k_f and baseline adjustment factor k_0. The distribution will be a compound combination of power law / polynomial, exponential, logarithmic and logistic regressions, encapsulated in the function \(X(x)\), and have dimensions given by dims. For a distribution with D dimensions:

\[k(\mathbf{x}) = k_f \prod_{d=1}^D X(x_d) + k_0\]

\(X(x)\) is then given either by:

\[X(x) = A_x \hat{x}^{\alpha_x} \cdot B_x e^{-\beta_x \hat{x}} \cdot C_x \ln (\gamma_x \hat{x}) \cdot \frac{D_x}{1 + e^{-\delta_{x,1}(\hat{x} - \delta_{x,2})}}\]

or the user can specify a polynomial instead of the power law term:

\[X(x) = \sum_{n=1}^N A_{x,n} \hat{x}^n \cdot B_x e^{-\beta_x \hat{x}} \cdot C_x \ln (\gamma_x \hat{x}) \cdot \frac{D_x}{1 + e^{-\delta_{x,1}(\hat{x} - \delta_{x,2})}}\]

In the above, the dimension value \(\hat{x}\) is normalised such that the median value is equal to 1: \(\hat{x} = x/\tilde{x}\).

Parameters
  • dims (dict) – A dictionary that maps dimension names to their grids, e.g. to create a distribution of time t and particle surface area s, dims would equal {‘t’: t, ‘s’: s}, where t and s are the timesteps and particle surface area bins over which to create this distribution. The dimension names must correspond to the subscripts used in params. The values are normalised such that the median of each dimension is 1.

  • k_f (float) – Rate constant scaling factor

  • k_0 (float, default=0) – Rate constant baseline adjustment factor

  • params (dict, default={}) – A dictionary of values to parameterise the distribution with. See the notes below.

  • is_compound (bool, default=True) – Whether the regressions for each dimension are combined by multiplication (compound) or addition.

Returns

Distribution array over the dims provided

Return type

np.ndarray

Notes

k is modelled as a function of the dims provided, and the model builds this distribution as a combination of power law / polynomial, exponential, logarithmic and logistic regressions, enabling a broad range of dependencies to be accounted for. This distribution is intended to be applied to rate constants used in the model, such as k_frag and k_diss. The params dict gives the parameters used to construct this distribution using the equation above. That is, \(A_x\) (where x is the dimension), \(\alpha_x\) etc. are given in the params dict as e.g. A_t, alpha_t, where the subscript (t in this case) is the name of the dimension corresponding to the dims dict.

This function does not require any parameters to be present in params. Parameters that are not provided default to values that remove the influence of their corresponding expression, so leaving all parameters at their defaults results in a constant k distribution.

More specifically, the params that can be specified are:

A_x : array-like or float, default=1

Power law coefficient(s) for dim x (e.g. A_t for dim t). If a scalar is provided, it is used as the coefficient of a power law expression with alpha_x as the exponent. If a list is provided, its elements are used as coefficients in a polynomial expression, where the order of the polynomial is given by the length of the list. For example, a length-2 list A_t=[2, 3] gives the polynomial \(3t^2 + 2t\) (note the list is in ascending order of powers).

alpha_x : float, default=0

If A_x is a scalar, alpha_x is the exponent for this power law expression. For example, if A_t=2 and alpha_t=0.5, the resulting power law will be \(2t^{0.5}\).

B_x : float, default=1

Exponential coefficient.

beta_x : float, default=0

Exponential scaling factor.

C_x : float or None, default=None

If a scalar is given, this is the coefficient for the logarithmic expression. If None is given, the logarithmic expression is set to 1 (i.e. it is ignored).

gamma_x : float, default=1

Logarithmic scaling factor.

D_x : float or None, default=None

If a scalar is given, this is the coefficient for the logistic expression. If None is given, the logistic expression is set to 1.

delta1_x : float, default=1

Logistic growth rate (steepness of the logistic curve).

delta2_x : float or None, default=None

The x value at which the logistic curve reaches its midpoint. If None is given, the midpoint is assumed to be at the midpoint of the x range. For example, for the time dimension t, if the model timesteps run from 1 to 100, the default is delta2_t=50.

If any dimension values are equal to 0, the logarithmic term returns 0 rather than being undefined.
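To make the parameterisation concrete, here is a sketch of the per-dimension regression \(X(x)\) for a single dimension, with the logistic term omitted for brevity. The function name and structure are illustrative only; it follows the equations and defaults described above, including normalisation by the median:

```python
import numpy as np

def X(x, A=1.0, alpha=0.0, B=1.0, beta=0.0, C=None, gamma=1.0):
    """Per-dimension regression X(x): power law * exponential *
    logarithmic terms. Defaults reduce each term to 1, so a fully
    defaulted call yields a constant distribution.
    """
    x_hat = np.asarray(x, dtype=float) / np.median(x)  # median -> 1
    power = A * x_hat ** alpha
    expo = B * np.exp(-beta * x_hat)
    log = 1.0 if C is None else C * np.log(gamma * x_hat)
    return power * expo * log

t = np.arange(1, 101)
k = 0.01 * X(t) + 0.0  # k_f * X(t) + k_0; all defaults give constant k
```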

Warning

The parameters used for calculating distributions such as k_frag and k_diss have changed from previous versions, which only allowed a power law relationship. This is a breaking change after v0.1.0.

static surface_area(psd: ndarray[Any, dtype[float64]]) ndarray[Any, dtype[float64]][source]#

Return the surface area of the particles, assuming they are spheres. This function can be overridden in a subclass to account for differently shaped particles.

static volume(psd: ndarray[Any, dtype[float64]]) ndarray[Any, dtype[float64]][source]#

Return the volume of the particles, assuming they are spheres. This function can be overridden in a subclass to account for differently shaped particles.
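Under the spherical-particle assumption, both quantities follow directly from the size-class diameters. A sketch, assuming psd holds representative particle diameters (the names are illustrative):

```python
import numpy as np

def surface_area(psd):
    """Surface area of spheres with diameters psd: pi * d^2."""
    return np.pi * np.asarray(psd) ** 2

def volume(psd):
    """Volume of spheres with diameters psd: (pi / 6) * d^3."""
    return (np.pi / 6.0) * np.asarray(psd) ** 3
```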