**There are many situations where product developers have solid prior information on particular aspects of reliability modelling, based on the physics of failure or previous experience with the same failure mechanism. For example, developers often have useful knowledge about the Weibull shape parameter or the life-stressor slope of a power-law model. In such situations Bayesian methods may be very beneficial.**

From an application point of view there are at least two important reasons for the use of Bayesian methods:

- Bayesian methods allow a developer to incorporate prior information into data analysis, providing important improvements in precision (or cost savings).
- Bayesian methods can handle, with relative ease, complicated data-model combinations (e.g., nonlinear models or random effects) for which no maximum likelihood (ML) software exists or for which implementing ML would be difficult.

**The Relationship between Bayesian Inference and non-Bayesian Likelihood Inference**

The left-hand side of the figure below shows the components of a likelihood-based non-Bayesian inference procedure. Inputs are the data and a model for the data. The inference outputs would be, for example, point estimates and confidence intervals for quantities of interest (e.g., a quantile or a failure probability associated with a failure-time distribution). The right-hand side of the figure is a similar diagram for the Bayesian inference procedure. In addition to the model and the data, one must also specify a joint prior distribution that describes one’s knowledge about the unknown parameters of the model. Bayes’ theorem is used to combine the prior information with the likelihood to produce a posterior distribution. Similar to non-Bayesian inference, the outputs would be point estimates and credible intervals (the name commonly used for the Bayesian analogue of non-Bayesian confidence intervals).

Fig 1: Non-Bayesian vs Bayesian inference

**Bayes rule**

Bayes’ theorem is a well-known probability rule. It allows one to combine the available data y with prior information p(θ) to obtain a posterior (or updated) distribution that can be used for inference:

p(θ|y) = f(y|θ) p(θ) / ∫ f(y|θ) p(θ) dθ ∝ f(y|θ) p(θ)

The likelihood f(y|θ) is a function of the assumed model for the data y and the data itself and must be proportional to the probability of the data. The likelihood quantifies the information in the data. The joint prior distribution p(θ) quantifies the available prior information about the unknown parameters in θ . The output p(θ|y) is the resulting joint posterior distribution for θ , reflecting knowledge of θ after the information in the data and the prior distribution have been combined.
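As a minimal numerical sketch of this combination (hypothetical lifetimes, a flat prior, and numpy assumed), one can discretize θ on a grid, multiply the likelihood by the prior, and normalize:

```python
import numpy as np

# hypothetical exponential lifetimes (hours); theta is the unknown mean life
y = np.array([5045., 8313., 10452., 2504., 7676.])
theta = np.linspace(2000., 30000., 2001)             # grid of candidate theta values

# exponential log-likelihood: -n*log(theta) - sum(y)/theta
loglik = -len(y) * np.log(theta) - y.sum() / theta
prior = np.ones_like(theta)                          # flat (diffuse) prior weights

post = np.exp(loglik - loglik.max()) * prior         # f(y|theta) * p(theta), unnormalized
post /= post.sum()                                   # normalize over the grid

post_mean = float((theta * post).sum())              # posterior point estimate of theta
```

The same three ingredients — likelihood, prior, normalization — appear in every Bayesian analysis; only the computational bookkeeping changes.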

**Prior information**

The use of Bayesian methods for statistical modeling and inference requires one to specify a joint prior distribution p(θ) describing the prior knowledge that is available about the unknown parameters in θ. It is important to note that with limited data, the choice of a prior distribution (even a diffuse prior distribution) can have a strong influence on inferences. Hence, developers are faced with the question of which prior distribution should be used in the analysis. One generally accepted principle for answering this question is that whoever assumes the risks associated with decisions resulting from the Bayesian analysis should be allowed to choose the prior distribution. For an experiment whose results will be used to determine whether a product is safe, customers who will use the product and managers who will benefit from its development and sale face different risks. In such cases it may be necessary to use a diffuse joint prior distribution, or a frequentist method that does not require specification of a prior distribution.

In some applications, solid prior information is available, based on a combination of physics of failure and previous empirical (e.g., accelerated life testing) experience. A second source of prior information is so-called “expert opinion”: individuals or groups with knowledge of reliability in particular situations can provide subjective information from which appropriate prior distributions can be derived.

**Numerical methods/estimation
**To estimate the joint posterior distribution the Monte Carlo Markov Chain (MCMC) method is applied. Basically, it simulates samples from a particular joint posterior distribution (i.e., the joint posterior distribution corresponding to a given model, data, and joint prior distribution).

Easy-to-use procedures for doing a Bayesian analysis are not available yet. For developers or analysts who have sufficient statistical backgrounds OpenBugs, WinBugs, R, and Stata 14 are appropriate software tools.

**An example
**Suppose we are interested in assessing the proportion of failed electronic components used in an outdoor application after six months (=4380 hours). We have the following results from reliability testing in hours: 5045, 8313, 10452, 18431, 21741, 2504, 763, 643, 22812, 3476, 7676, 9498, 8514, 10750, 9030, 2313, 5727, 2605, 11734, 12968. Data obtained like these are assumed to follow a exponential distribution: f(y|q) = 1/q exp(-t/q). We will use the maximum likelihood (1) and Bayesian (2) methods.

- Using the maximum likelihood method the estimate for q equals 8750 with standard error 1957. The 95% confidence interval for q equals

.

Hence, the 95% confidence interval of the survival probability after six months equals (exp(-4380/13561), exp(-4380/5646)) = (0.46, 0.72).

- Based on an expert opinion q is expected to be larger than 6500 with 99% probability. Also the expected value of q is 8000. As a model for the prior p(q), the gamma distribution is assumed. Hence, based on this prior knowledge the shape parameter a equals 112, and the scale b is equal to 113E-06. Combining the data with the prior knowledge gives the posterior distribution p(q|y): in this particular case also a gamma distribution with parameters a=132, and b =9.315E-07. The estimate for q equals 8123, and the corresponding 95%
*credibility*interval equals (6898, 9709). Consequently, the 95%*credibility*interval of the survival probability after six months equals (exp(-4380/9709), exp(-4380/6898)) = (0.53, 0.64).

By considering the prior information we get a different (smaller) point estimate, and a smaller interval for the survival probability.

If you have questions on this subject, please do not hesitate to contact Marc Schuld.