This allows LDS to cover the parameter space more evenly than MC and LHS. Each parameter combination sampled by Sobol’s algorithm is unique, so sampling N Sobol points from a hypercube provides N distinct parameter values along each individual parameter direction. Among the most popular methods of sensitivity analysis are averaged local sensitivities (Balsa-Canto et al., 2010, Kim et al., 2010 and Zi et al., 2008), Sobol’s method (Kim et al., 2010, Rodriguez-Fernandez and Banga, 2010 and Zi et al., 2008), Partial Rank Correlation Coefficient (PRCC)
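This non-collapsing property of low-discrepancy sequences can be illustrated with a short sketch. Since Sobol generators require direction-number tables, the example below uses a Halton sequence instead, another LDS built from radical inverses whose one-dimensional projections are likewise all distinct; the function names are illustrative, not taken from any particular library.

```python
def radical_inverse(n, base):
    """Reflect the base-`base` digits of n about the radix point (0 <= result < 1)."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(n_points, bases=(2, 3)):
    """Generate n_points of a Halton low-discrepancy sequence in len(bases) dimensions."""
    return [[radical_inverse(i, b) for b in bases] for i in range(1, n_points + 1)]

points = halton(16)
# Each 1-D projection contains 16 distinct values: no two points
# share a coordinate, unlike plain Monte Carlo binning artefacts.
for dim in range(2):
    assert len({p[dim] for p in points}) == len(points)
```

In contrast, N plain MC points generally do not give N well-spread values per axis, which is why LDS sampling improves coverage for the same sample budget.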
(Marino et al., 2008 and Zi et al., 2008), and Multi-Parametric Sensitivity Analysis (MPSA) (Yoon and Deisboeck, 2009 and Zi et al., 2008). In general, different SA methods are better suited to specific types of analysis. For example, analysis of a distribution of local sensitivities can be very useful for the initial scoring of parameters prior to model calibration, especially if sensitivity coefficients can be derived analytically and will not require
numerical differentiation, which significantly increases the computational cost. The choice of a particular SA method depends largely on the assumed relationship between the input parameters and the model output. If a linear trend can be assumed, methods based on calculation of the Pearson correlation coefficient can be employed. For nonlinear but monotonic dependencies, PRCC and the standardized rank regression coefficient (SRRC) appear to be the best choice (Marino et al., 2008), as they work with rank-transformed values. If no assumption can be made about the relationship between model inputs and outputs, or the dependence is non-monotonic, another group of sensitivity methods can be employed, based on decomposition of the variance of the model output into partial variances that assess the contribution of each
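The rank-based approach can be sketched as follows: rank-transform each parameter column and the output, then, for each parameter, correlate the residuals left after regressing out the other ranked parameters. This is a minimal numpy sketch; the names `prcc` and `rankdata` are our own (for real analyses, `scipy.stats.rankdata` handles ties properly), and the toy model used to exercise it is hypothetical.

```python
import numpy as np

def rankdata(a):
    """Rank transform (1-based; adequate for continuous, tie-free samples)."""
    order = np.argsort(a)
    ranks = np.empty(len(a))
    ranks[order] = np.arange(1, len(a) + 1)
    return ranks

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y.

    Ranks capture any monotonic trend; partialling out the other
    parameters removes their (rank-linear) contribution.
    """
    n, k = X.shape
    R = np.column_stack([rankdata(X[:, j]) for j in range(k)])
    ry = rankdata(y)
    coeffs = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
        # Residuals of ranked x_j and ranked y after regressing on the other ranks
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        coeffs[j] = np.corrcoef(res_x, res_y)[0, 1]
    return coeffs

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = np.exp(2 * X[:, 0]) - 5 * X[:, 1]   # monotonic up in x0, down in x1; x2 inert
print(prcc(X, y))
```

Because the test model is monotonic but nonlinear in `x0`, a plain Pearson-based coefficient would understate its influence, while the rank transform recovers it.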
parameter to the total variance. One of the most powerful variance-based methods is Sobol’s method; however, it is also known to be among the most computationally intensive, with the cost growing exponentially with the dimensionality of the parameter space (Rodriguez-Fernandez and Banga, 2010). Another promising method that makes no assumptions about the dependence between model parameters and outputs is MPSA (Jia et al., 2007 and Yoon and Deisboeck, 2009). In MPSA all outputs are divided into two groups, “acceptable” and “unacceptable”, and the parameter distributions in the two groups are tested against the null hypothesis that they are drawn from the same distribution. The lower the probability of accepting the null hypothesis, the higher the sensitivity of the parameter (Zi et al., 2008). When a binary decomposition of model outputs can be introduced naturally, the results of MPSA can be very useful (Yoon and Deisboeck, 2009). In our GSA implementation we chose to use PRCC as the preferred method for SA, as one of the most efficient and reliable sampling-based techniques (Marino et al.
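The MPSA idea described above can be sketched in a few lines, under stated assumptions: a hypothetical two-parameter toy model, a median split of the outputs as the acceptability criterion, and a hand-rolled two-sample Kolmogorov–Smirnov statistic standing in for the full hypothesis test (in practice `scipy.stats.ks_2samp` would also supply the p-value used to rank parameters).

```python
import random

def ks_statistic(a, b):
    """Two-sample KS statistic: maximum gap between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

random.seed(1)

def model(p):
    # Toy model: output driven by p[0], nearly insensitive to p[1].
    return p[0] ** 2 + 0.01 * p[1]

samples = [[random.random(), random.random()] for _ in range(400)]
outputs = [model(p) for p in samples]
threshold = sorted(outputs)[len(outputs) // 2]   # median split: "acceptable" vs not

sens = []
for dim in range(2):
    acceptable   = [p[dim] for p, y in zip(samples, outputs) if y <= threshold]
    unacceptable = [p[dim] for p, y in zip(samples, outputs) if y > threshold]
    sens.append(ks_statistic(acceptable, unacceptable))
print(sens)   # large statistic for the influential p[0], small for the inert p[1]
```

A large KS statistic means the parameter’s distributions in the two groups differ markedly, i.e. the null hypothesis would be rejected and the parameter flagged as sensitive, matching the decision rule quoted from Zi et al. (2008).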