This way, the generated damage extent and oil outflow calculations are used primarily to learn the parameters in the BBN in realistic areas of the impact scenario space. A direct, uncorrelated sampling of yT, yL, l and θ would lead to a large number of cases in unrealistic areas of the impact scenario space, which is unnecessary in actual applications. The ranges for the impact scenario variables in the MC sampling are shown in Table 2. The resulting data set from which the Bayesian submodel GI(XI, AI) is learned consists of the following variables for all damage cases:

• Vessel particulars: length L, width B, displacement Displ, deadweight DWT, tank type TT, number of side tanks ST and number of center tanks CT, see Fig. 3.
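
Purely as an illustration of bounded Monte Carlo sampling confined to realistic areas of the impact scenario space, a sketch in Python could look as follows; the numeric bounds and the plausibility check are hypothetical placeholders, not the ranges of Table 2 or the sampling scheme actually used.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical bounds for the impact scenario variables; the ranges actually
# used in the MC sampling are those listed in Table 2.
bounds = {"yT": (0.0, 1.0), "yL": (0.0, 1.0), "l": (0.0, 1.0), "theta": (0.0, 180.0)}

def is_realistic(case: dict) -> bool:
    """Hypothetical plausibility check, standing in for whatever constraints
    confine the sampling to realistic areas of the impact scenario space."""
    return case["l"] <= 0.5 * case["yL"] + 0.5   # illustrative constraint only

cases = []
while len(cases) < 10_000:
    case = {v: rng.uniform(lo, hi) for v, (lo, hi) in bounds.items()}
    if is_realistic(case):                       # reject implausible combinations
        cases.append(case)
# `cases` now holds the impact scenarios fed to the damage extent and oil
# outflow calculations from which the learning dataset is built.
```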

Learning a Bayesian network from data is a two-step procedure: structure search and parameter fitting, for which a large number of methods have been proposed (Buntine, 1996 and Daly et al., 2011). In the presented model, use was made of the greedy thick thinning (GTT) algorithm (Dash and Cooper, 2004) implemented in the GeNIe free modeling software. The GTT is a score + search Bayesian learning method, in which a heuristic search algorithm is applied to explore the space of DAGs along with a score function to evaluate the candidate network structures, guiding the search. The GTT algorithm discovers a Bayesian network structure using a two-stage procedure, given an initial graph G(X, A) and a dataset T:

I. Thickening step: while the K2-score function (Eq. (12)) increases, add the arc whose addition maximizes the K2-score.
II. Thinning step: while the K2-score function increases, remove the arc whose removal maximizes the K2-score.

The algorithm starts with an initial empty graph G, to which arcs that maximize the K2-score function are iteratively added in the thickening step. When adding further arcs no longer increases the K2-score, the thinning step is applied. Here, arcs are iteratively deleted until no arc removal results in a K2-score increase, at which point the algorithm stops and the network is returned. The K2-score function is chosen to evaluate the candidate network structures (Cooper and Herskovits, 1992). It measures the logarithm of the joint probability of the Bayesian network structure G and the dataset T, as follows:

equation (12)
K2(G,T) = \log P(G) + \sum_{i=1}^{n} \sum_{j=1}^{q_i} \left[ \log \frac{(r_i - 1)!}{(N_{ij} + r_i - 1)!} + \sum_{k=1}^{r_i} \log\left(N_{ijk}!\right) \right]

where P(G) is the prior probability of the network structure G, r_i the number of distinct values of X_i, q_i the number of possible configurations of Pa(X_i), N_{ij} the number of instances in the data set T where the set of parents Pa(X_i) takes its j-th configuration, and N_{ijk} the number of instances where the variable X_i takes the k-th value x_{ik} and Pa(X_i) takes its j-th configuration:

equation (13)
N_{ij} = \sum_{k=1}^{r_i} N_{ijk}

In the construction of the submodel GI(XI, AI) through Bayesian learning, two preparatory steps are required to transform the oil outflow dataset from Section 4.3.2 into a BN.
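
As a minimal illustration of Eqs. (12) and (13), the following Python sketch computes the K2 score of a candidate structure from a discretized dataset; the function name, the flat structure prior log P(G) = 0 and the pandas-based counting are assumptions made here for illustration only.

```python
import math
import pandas as pd

def k2_score(data: pd.DataFrame, parents: dict, log_prior: float = 0.0) -> float:
    """K2 score of Eq. (12) for a candidate DAG over discrete variables.

    data      : one column per variable X_i, holding its discrete states.
    parents   : maps each variable name to the list of its parents Pa(X_i).
    log_prior : log P(G); zero corresponds to a flat structure prior (assumption).
    """
    score = log_prior
    for xi, pa in parents.items():
        r_i = data[xi].nunique()                     # r_i: distinct values of X_i
        # group the cases by parent configuration j (single dummy group if no parents)
        groups = data.groupby(list(pa)) if pa else [(None, data)]
        for _, rows in groups:
            n_ijk = rows[xi].value_counts()          # N_ijk for this configuration
            n_ij = int(n_ijk.sum())                  # N_ij = sum_k N_ijk  (Eq. (13))
            # log[(r_i - 1)! / (N_ij + r_i - 1)!] via log-gamma
            score += math.lgamma(r_i) - math.lgamma(n_ij + r_i)
            # sum_k log(N_ijk!)
            score += sum(math.lgamma(n + 1) for n in n_ijk)
    return score
```

For instance, k2_score(df, {"TT": [], "CT": ["TT"]}) would score the two-node structure TT → CT on a discretized dataset df; parent configurations that do not occur in the data contribute zero to the score and can safely be skipped, as the grouping above does.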

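The two-stage thickening/thinning search itself can be sketched as a greedy loop around such a score function. The code below is a simplified illustration of the procedure described above, not the GeNIe implementation; score_fn stands for any structure score with the same signature as the K2 sketch, and the cycle check is the minimal bookkeeping needed to keep the graph a DAG.

```python
from itertools import permutations

def greedy_thick_thinning(variables, data, score_fn):
    """Sketch of the two-stage GTT search: start from an empty graph, greedily
    add arcs while the score increases (thickening), then greedily delete arcs
    while the score increases (thinning), and return the resulting arc set."""
    arcs = set()                                    # current arc set A of G(X, A)

    def creates_cycle(current, new_arc):
        # adding a -> b closes a cycle iff b can already reach a
        graph = {}
        for u, v in current | {new_arc}:
            graph.setdefault(u, set()).add(v)
        a, b = new_arc
        stack, seen = [b], set()
        while stack:
            node = stack.pop()
            if node == a:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(graph.get(node, ()))
        return False

    def score(current):
        parents = {x: [u for (u, v) in current if v == x] for x in variables}
        return score_fn(data, parents)

    best = score(arcs)

    # I. Thickening step: add the best arc while the score increases
    improved = True
    while improved:
        improved = False
        candidates = [arc for arc in permutations(variables, 2)
                      if arc not in arcs and not creates_cycle(arcs, arc)]
        if candidates:
            top_score, top_arc = max(((score(arcs | {arc}), arc) for arc in candidates),
                                     key=lambda t: t[0])
            if top_score > best:
                best, improved = top_score, True
                arcs.add(top_arc)

    # II. Thinning step: remove the best arc while the score increases
    improved = True
    while improved:
        improved = False
        if arcs:
            top_score, top_arc = max(((score(arcs - {arc}), arc) for arc in arcs),
                                     key=lambda t: t[0])
            if top_score > best:
                best, improved = top_score, True
                arcs.remove(top_arc)

    return arcs
```

A call such as greedy_thick_thinning(list(df.columns), df, k2_score) would then return the learned arc set for a discretized dataset df.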