1. Introduction
The stress-strength model is one of the applied models in reliability and has important applications in many fields, especially in engineering. Statisticians treat this model with X as the strength and Y as the stress, and the system works if, and only if, at any time the applied stress is lower than the strength. The model is expressed through the reliability function R = P(Y < X), where X and Y are independent random variables. To estimate the reliability in this model, the parameters of the stress Y and the strength X must first be estimated using different methods of estimation; see Hassan, Muhammed, and Saad (2015) and Nkemnole and Samiyu (2017).
In the geometric distribution, the probability of success is assumed to be the same for each trial. In such a sequence of trials, the geometric distribution is useful for modelling the number of failures before the first success. The distribution gives the probability that there are zero failures before the first success, one failure before the first success, two failures before the first success, and so on; see Pitman (1993) and Walck (2007).
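As a small numerical illustration of this description, the sketch below tabulates the probability of k failures before the first success for a hypothetical success probability p = 0.3 (the support here starts at 0, whereas the model in Section 2 indexes the distribution on {1, 2, 3, …}).

```python
# Illustration only: P(K = k) = (1 - p)^k * p is the probability of exactly k
# failures before the first success, for a hypothetical success probability p.
p = 0.3
for k in range(5):
    print(k, round((1 - p) ** k * p, 4))
```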
Record values are classified into lower and upper records. An observation Xi is called a lower record if it is smaller than all previous observations in the experiment; conversely, if Xj exceeds all previous observations, it is called an upper record.
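As a small illustration of these definitions, the sketch below scans a hypothetical sequence of observations and keeps a value as a lower (or upper) record whenever it is strictly smaller (or larger) than every previous observation.

```python
def lower_records(seq):
    # The first observation is always a record; later values are kept only if
    # they are strictly smaller than every previous observation.
    records = []
    for x in seq:
        if not records or x < records[-1]:
            records.append(x)
    return records

def upper_records(seq):
    # Same idea with "strictly larger".
    records = []
    for x in seq:
        if not records or x > records[-1]:
            records.append(x)
    return records

data = [7, 9, 5, 8, 2, 11]                       # hypothetical observations
print(lower_records(data), upper_records(data))  # [7, 5, 2] and [7, 9, 11]
```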
Record values have been discussed in the statistical literature by many authors, who have explained their importance in many fields. Chandler (1952) showed how record values, record times, and inter-record times are obtained and used to model extremes in a sequence of independent and identically distributed random variables. Record values and their types were further studied by Nagaraja (1988) and Ahsanullah (1995, 2004).
Many authors have focused on the stress-strength model R and applied it in different case studies. Birnbaum (1956), Kundu and Gupta (2005, 2006), Razaei, Tahmasbi, and Mahmoodi (2010), and Hussian (2013) studied the estimation of R for different distributions. Further studies of R based on record values, with different distributions and different methods of estimation, can be found in Baklizi (2008, 2014), Essam (2012), and Tarvirdizade and Kazemzadeh Garehchobogh (2014).
This paper is organized as follows. In Section 2, the maximum likelihood estimate and the exact confidence interval of R are studied, and the asymptotic bootstrap confidence interval of R is established. In Section 3, the Bayes estimates of R under both the squared error and LINEX loss functions are derived, and the Bayes confidence interval is obtained. In Section 4, the steps of the simulation study are described. Results and discussion are presented in Section 5. Finally, conclusions appear in Section 6.
2. Likelihood inferences
In this section, the maximum likelihood estimator (MLE) and the exact confidence interval of R are derived. Also, the asymptotic bootstrap confidence interval of R is obtained.
2.1. Maximum likelihood estimator of R
Following Mohamed (2015), let Y be the stress to which the strength X is subjected in the stress-strength model. Assume X and Y follow geometric distributions with parameters p1 and p2, respectively, with x ∈ {1, 2, 3, …} and 0 < p1, p2 < 1, and let p(·) and F(·) denote the corresponding probability mass function and cumulative distribution function.
Then the reliability function
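A minimal numerical sketch of this reliability, assuming the standard geometric probability mass function p(x) = p(1 − p)^(x−1) on x ∈ {1, 2, 3, …} for both variables; under that assumption the sum for P(Y < X) reduces to R = p2(1 − p1)/(p1 + p2 − p1p2). The parameter values below are hypothetical, and the closed form is tied to this assumed parameterization rather than taken from Eq. 3.

```python
# Sketch: R = P(Y < X) for independent geometric X and Y, assuming the pmf
# p(x) = p * (1 - p)**(x - 1) on x = 1, 2, 3, ... for both variables.
def reliability_closed_form(p1, p2):
    # Closed form obtained by summing the geometric series under the assumed pmf.
    return p2 * (1 - p1) / (p1 + p2 - p1 * p2)

def reliability_by_summation(p1, p2, n_terms=10000):
    # Direct truncated sum of P(X = x) * P(Y < x) as a numerical check.
    total = 0.0
    for x in range(1, n_terms + 1):
        px = p1 * (1 - p1) ** (x - 1)
        fy = 1 - (1 - p2) ** (x - 1)   # P(Y <= x - 1) = P(Y < x)
        total += px * fy
    return total

p1, p2 = 0.2, 0.5   # hypothetical parameter values
print(reliability_closed_form(p1, p2), reliability_by_summation(p1, p2))
```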
Let r = (r0, …, rn) be the first set of independent lower record values, of size (n + 1), from the strength variable with geometric distribution with parameter p1, and let s = (s0, …, sm) be the corresponding set, of size (m + 1), from the stress variable with geometric distribution with parameter p2.
The likelihood functions of r and s are given by Arnold, Balakrishnan, and Nagaraja (1998):
and
The likelihood functions of the observed record values r and s are:
and
Therefore, the joint log-likelihood function of r and s denoted by l is:
The maximum likelihood estimators of p1 and p2 are
and
From (9) and (10),
Hence, the maximum likelihood estimator of R, denoted by
2.2. Exact confidence interval of R
In this subsection, the exact confidence interval of R is derived based on the asymptotic properties and the general regularity conditions of the MLE of
where
and the matrix I(P) is the Fisher information matrix of the parameter vector P = (p1, p2).
The (i, j)th element is defined by the second partial derivatives:
From the asymptotic properties of the MLEs of p1 and p2, one can easily obtain
where
The maximum likelihood (1 − α) 100% confidence interval of R is given by:
where
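A sketch of the Wald-type construction behind such an interval, using hypothetical point estimates of p1 and p2, a hypothetical 2×2 inverse Fisher information matrix, and the geometric closed form of R assumed in the earlier sketch; the delta method approximates Var(R̂) by the quadratic form of the gradient of R with the inverse information matrix, and the interval is R̂ ± z_(α/2)·sqrt(Var(R̂)).

```python
import math
from statistics import NormalDist

def reliability(p1, p2):
    # Assumed closed form of R = P(Y < X) for geometric stress and strength.
    return p2 * (1 - p1) / (p1 + p2 - p1 * p2)

def wald_interval(p1_hat, p2_hat, inv_info, alpha=0.05, eps=1e-6):
    """Delta-method (Wald) interval for R; inv_info is a hypothetical 2x2
    inverse Fisher information matrix for (p1, p2)."""
    # Numerical gradient of R with respect to (p1, p2).
    g1 = (reliability(p1_hat + eps, p2_hat) - reliability(p1_hat - eps, p2_hat)) / (2 * eps)
    g2 = (reliability(p1_hat, p2_hat + eps) - reliability(p1_hat, p2_hat - eps)) / (2 * eps)
    # Var(R_hat) is approximated by the quadratic form g' * inv_info * g.
    var_r = (g1 * g1 * inv_info[0][0] + 2 * g1 * g2 * inv_info[0][1]
             + g2 * g2 * inv_info[1][1])
    z = NormalDist().inv_cdf(1 - alpha / 2)
    r_hat = reliability(p1_hat, p2_hat)
    half = z * math.sqrt(var_r)
    return r_hat - half, r_hat + half

# Hypothetical estimates and inverse information matrix, for illustration only.
print(wald_interval(0.2, 0.5, [[0.004, 0.0], [0.0, 0.006]]))
```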
2.3. Asymptotic bootstrap confidence interval
In this subsection, the asymptotic bootstrap confidence interval of R is derived. Kotz and Pensky (2003) described the bootstrap method as an alternative way to construct a confidence interval.
The algorithm for constructing the (1 − α) 100% confidence interval of R using the bootstrap method consists of the following steps; a generic code sketch is given after the steps:
1- Use the estimators
2- Calculate the bootstrap MSE by:
3- The asymptotic (1 − α) 100% confidence interval is obtained by:
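A generic sketch of these three steps, in which the functions that simulate record data and re-estimate R are hypothetical placeholders rather than the estimators of this paper.

```python
import math
import random
from statistics import NormalDist

def bootstrap_interval(r_hat, estimate_R, simulate_records, n_boot=1000, alpha=0.05, seed=1):
    """Generic parametric-bootstrap sketch around an initial estimate r_hat.
    estimate_R(records_x, records_y) -> an estimate of R   (hypothetical placeholder)
    simulate_records(rng) -> (records_x, records_y)        (hypothetical placeholder)"""
    rng = random.Random(seed)
    # Step 1: re-estimate R on each set of simulated record data.
    boot = [estimate_R(*simulate_records(rng)) for _ in range(n_boot)]
    # Step 2: bootstrap MSE of the replicates about the original estimate.
    boot_mse = sum((r - r_hat) ** 2 for r in boot) / n_boot
    # Step 3: asymptotic (1 - alpha)100% interval.
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * math.sqrt(boot_mse)
    return r_hat - half, r_hat + half

# Toy usage with purely hypothetical stand-ins for the simulator and the estimator.
def toy_simulate(rng):
    return [rng.random()], [rng.random()]

def toy_estimate(records_x, records_y):
    return 0.7 + 0.05 * (records_x[0] - records_y[0])

print(bootstrap_interval(0.7, toy_estimate, toy_simulate))
```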
3. Bayesian inferences
In this section, the Bayes estimators of R under the squared error and LINEX loss functions are derived. Also, the Bayes confidence interval for R is obtained.
3.1. Bayes estimator of R based on the squared error loss function
To get the Bayes estimator of
The non-informative priors of p1 and p2 are obtained by using the Fisher information matrices of p1 and p2, calculated as follows
The resulting non-informative prior distributions of p1 and p2 are
The posterior distributions of p1 and p2, denoted by π* (p1) and π* (p2), are obtained by combining Eq. 6, Eq. 7 and Eq. 18 as follows
The Bayes estimates of p1 and p2 under the squared error loss function, denoted by
and
The Bayes estimator of R under the squared error loss function, based on the lower record data of X and Y, is denoted by
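A minimal Monte Carlo sketch of this estimator, using the general fact that the Bayes estimate under the squared error loss function is the posterior mean; Beta posteriors with hypothetical hyperparameters stand in for the record-based posteriors of p1 and p2, and the geometric closed form of R assumed earlier is reused.

```python
import random

def reliability(p1, p2):
    # Assumed closed form of R = P(Y < X) for geometric stress and strength.
    return p2 * (1 - p1) / (p1 + p2 - p1 * p2)

def bayes_estimate_squared_error(a1, b1, a2, b2, n_draws=50000, seed=1):
    """Posterior mean of R (the Bayes estimate under squared error loss),
    assuming Beta(a1, b1) and Beta(a2, b2) posteriors for p1 and p2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        p1 = rng.betavariate(a1, b1)   # draw from the assumed posterior of p1
        p2 = rng.betavariate(a2, b2)   # draw from the assumed posterior of p2
        total += reliability(p1, p2)
    return total / n_draws

# Hypothetical posterior hyperparameters, for illustration only.
print(bayes_estimate_squared_error(3, 9, 6, 5))
```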
3.2. Bayes estimator of R based on LINEX loss function
In this subsection, the Bayes estimator of R under the LINEX loss function, denoted by
According to Lindley (1980) and after some calculations, the approximate Bayes estimator of R is
where
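A complementary Monte Carlo sketch under the same hypothetical Beta-posterior assumption, using the general form of the Bayes estimate under the LINEX loss function, R̂_L = −(1/c) ln E[e^(−cR) | data], rather than the Lindley approximation itself.

```python
import math
import random

def reliability(p1, p2):
    # Assumed closed form of R = P(Y < X) for geometric stress and strength.
    return p2 * (1 - p1) / (p1 + p2 - p1 * p2)

def bayes_estimate_linex(a1, b1, a2, b2, c=1.0, n_draws=50000, seed=1):
    """Monte Carlo approximation of the LINEX Bayes estimate
    -(1/c) * ln E[exp(-c * R) | data], assuming Beta posteriors for p1 and p2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        r = reliability(rng.betavariate(a1, b1), rng.betavariate(a2, b2))
        total += math.exp(-c * r)
    return -math.log(total / n_draws) / c

# Hypothetical posterior hyperparameters and LINEX shape parameter c.
print(bayes_estimate_linex(3, 9, 6, 5, c=1.0))
```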
3.3. Bayes confidence interval of R
In this subsection, the Bayes confidence interval for R is obtained. To derive the distribution of the stress-strength reliability R under the Bayesian approach, the posterior distributions of p1 and p2 must be found. The conjugate prior density functions of p1 and p2 are proportional to the beta distribution, as follows
After some calculations, the posterior distribution of p1 is
where
The posterior distribution of p2 is
where
The Bayesian (1 − α) 100% confidence intervals for p1 and p2 are
Using Eqs. 25, 26, and 27, the Bayesian confidence intervals (L1, U1) and (L2, U2) for p1 and p2 can be derived by solving the following equations
and
Therefore, the Bayes confidence interval for R is constructed by substituting Eqs. 28, 29, 30, and 31 into Eq. 3.
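A minimal sketch of this construction, with hypothetical Beta posteriors standing in for the record-based posteriors and the geometric closed form of R assumed earlier standing in for Eq. 3; because R is monotone in each parameter, the endpoints are taken as the extreme values of R over the corner combinations of the parameter quantiles.

```python
from scipy.stats import beta

def reliability(p1, p2):
    # Assumed closed form of R = P(Y < X) for geometric stress and strength.
    return p2 * (1 - p1) / (p1 + p2 - p1 * p2)

def bayes_interval_for_R(a1, b1, a2, b2, alpha=0.05):
    """Equal-tail quantiles of hypothetical Beta posteriors for p1 and p2,
    mapped through R; corner combinations are used because R is monotone
    (decreasing in p1, increasing in p2)."""
    l1, u1 = beta.ppf([alpha / 2, 1 - alpha / 2], a1, b1)   # interval for p1
    l2, u2 = beta.ppf([alpha / 2, 1 - alpha / 2], a2, b2)   # interval for p2
    corners = [reliability(q1, q2) for q1 in (l1, u1) for q2 in (l2, u2)]
    return min(corners), max(corners)

# Hypothetical posterior hyperparameters, for illustration only.
print(bayes_interval_for_R(3, 9, 6, 5))
```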
4. Simulation study
In this section, a simulation study is conducted to compare the performance of the MLE and the Bayes estimates (under the squared error and LINEX loss functions). The exact values of R are 0.714 and 0.95. The estimates of R by the MLE and Bayes methods based on lower record values are calculated for different sample sizes, and three different methods of constructing confidence intervals are computed. The simulation study is performed according to the following steps (a sketch of the data-generation steps is given after the list):
1. Generate 10000 samples from the uniform (0, 1) distribution, then obtain a random sample of size 300 from the geometric distribution through the inverse transformation technique.
2. From each vector, select the first (n + 1) lower record values r0, …, rn for the strength random variable X.
3. Repeat the previous two steps to generate 5000 random samples of size 300 from the geometric distribution, and select from each vector the first (m + 1) lower record values s0, …, sm for the stress random variable Y.
4. Obtain the MLEs of p1 and p2 from Eq. 11, then obtain the MLE of R by substituting these estimates into Eq. 12. Calculate the maximum likelihood confidence intervals of p1 and p2 at significance level α = 0.05 using Eq. 15. Obtain the bootstrap confidence intervals from Eq. 17 and the Bayesian confidence intervals from Eqs. 3, 28, and 29.
5. Compute the Bayes estimators of R under the squared error and LINEX loss functions.
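A sketch of the data-generation part of steps 1 to 3: uniform variates are converted to geometric variates by the inverse transformation X = ⌈ln(1 − U)/ln(1 − p)⌉, and the first lower record values are extracted from each replicate. The parameter values and the number of replicates are hypothetical, and the estimation steps 4 and 5 are omitted because they depend on the equations above.

```python
import math
import random

def geometric_sample(p, size, rng):
    # Inverse-transform sampling for the geometric distribution on {1, 2, 3, ...}:
    # X = ceil(ln(1 - U) / ln(1 - p)), with a floor of 1 to guard the U = 0 edge case.
    return [max(1, math.ceil(math.log(1 - rng.random()) / math.log(1 - p)))
            for _ in range(size)]

def first_lower_records(seq, k):
    # Keep at most the first k lower record values (the first observation is a record).
    # A replicate may yield fewer than k records, since the values are bounded below by 1.
    records = []
    for x in seq:
        if not records or x < records[-1]:
            records.append(x)
            if len(records) == k:
                break
    return records

rng = random.Random(2024)
p1, p2, n, m = 0.2, 0.5, 4, 3                   # hypothetical parameters and record counts
for _ in range(5):                              # a handful of replicates, for illustration
    x_sample = geometric_sample(p1, 300, rng)   # strength sample of size 300
    y_sample = geometric_sample(p2, 300, rng)   # stress sample of size 300
    r = first_lower_records(x_sample, n + 1)    # first (n + 1) lower records of X
    s = first_lower_records(y_sample, m + 1)    # first (m + 1) lower records of Y
    print(r, s)
```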
5. Results and discussion
Simulation results are tabulated in Tables 1-6. The following can be observed:
1. The coverage percentage of the MLE is better than that of the Bayes estimator at R = 0.714 and 0.95, according to Tables 1 and 2.
2. The coverage percentage of the Bayes estimator under the LINEX loss function is better than that of the Bayes estimator under the squared error loss function at R = 0.714 and 0.95, according to Tables 3 and 4.
3. The average length of the exact confidence intervals is shorter than that of the bootstrap and Bayes intervals, according to Tables 5 and 6.
4. When n and m increase, the coverage percentages decrease for the different estimators at the different values of p1 and p2, according to Tables 1-6.
Table 1. MLE and Bayes estimates of R, with MSE and coverage, based on lower record values (R = 0.714).

| n | m | MLE estimate | MSE | Coverage | Bayes estimate | MSE | Coverage |
|---|---|---|---|---|---|---|---|
| 2 | 2 | 0.67 | 0.06 | 0.938 | 0.445 | 0.073 | 0.623 |
| 3 | 2 | 0.642 | 0.052 | 0.899 | 0.544 | 0.053 | 0.762 |
| 3 | 3 | 0.658 | 0.025 | 0.922 | 0.599 | 0.101 | 0.839 |
| 4 | 2 | 0.636 | 0.0047 | 0.891 | 0.345 | 0.342 | 0.483 |
| 4 | 3 | 0.602 | 0.013 | 0.843 | 0.268 | 0.435 | 0.375 |
| 5 | 3 | 0.637 | 0.0061 | 0.892 | 0.222 | 0.222 | 0.311 |
| 5 | 4 | 0.6 | 0.03 | 0.84 | 0.677 | 0.814 | 0.948 |
| 6 | 4 | 0.607 | 0.012 | 0.85 | 0.558 | 0.353 | 0.782 |
| 6 | 5 | 0.6674 | 0.02 | 0.935 | 0.454 | 0.111 | 0.636 |
Table 2. MLE and Bayes estimates of R, with MSE and coverage, based on lower record values (R = 0.95).

| n | m | MLE estimate | MSE | Coverage | Bayes estimate | MSE | Coverage |
|---|---|---|---|---|---|---|---|
| 2 | 2 | 0.87 | 0.0145 | 0.916 | 0.179 | 0.818 | 0.188 |
| 3 | 2 | 0.829 | 0.0103 | 0.873 | 0.171 | 0.005 | 0.18 |
| 3 | 3 | 0.867 | 0.0147 | 0.913 | 0.934 | 0.548 | 0.983 |
| 4 | 2 | 0.835 | 0.099 | 0.879 | 0.747 | 0.546 | 0.786 |
| 4 | 3 | 0.894 | 0.0127 | 0.941 | 0.341 | 0.852 | 0.359 |
| 5 | 3 | 0.831 | 0.0102 | 0.875 | 0.946 | 0.681 | 0.996 |
| 5 | 4 | 0.884 | 0.0134 | 0.931 | 0.074 | 0.004 | 0.078 |
| 6 | 4 | 0.901 | 0.0116 | 0.948 | 0.861 | 0.707 | 0.906 |
| 6 | 5 | 0.871 | 0.0143 | 0.917 | 0.334 | 0.012 | 0.352 |
Table 3. Bayes estimates of R under the squared error and LINEX loss functions, with MSE and coverage (R = 0.714).

| n | m | Estimate (squared error) | MSE | Coverage | Estimate (LINEX) | MSE | Coverage |
|---|---|---|---|---|---|---|---|
| 2 | 2 | 0.445 | 0.073 | 0.623 | 0.697 | 0.569 | 0.976 |
| 3 | 2 | 0.544 | 0.053 | 0.762 | 0.566 | 0.288 | 0.793 |
| 3 | 3 | 0.599 | 0.101 | 0.639 | 0.493 | 0.42 | 0.69 |
| 4 | 2 | 0.345 | 0.342 | 0.453 | 0.333 | 0.525 | 0.466 |
| 4 | 3 | 0.268 | 0.435 | 0.375 | 0.465 | 0.453 | 0.651 |
| 5 | 3 | 0.677 | 0.814 | 0.648 | 0.54 | 0.791 | 0.756 |
| 5 | 4 | 0.222 | 0.222 | 0.311 | 0.208 | 0.289 | 0.391 |
| 6 | 4 | 0.558 | 0.353 | 0.782 | 0.21 | 0.246 | 0.294 |
| 6 | 5 | 0.454 | 0.111 | 0.636 | 0.697 | 0.249 | 0.976 |
Table 4. Bayes estimates of R under the squared error and LINEX loss functions, with MSE and coverage (R = 0.95).

| n | m | Estimate (squared error) | MSE | Coverage | Estimate (LINEX) | MSE | Coverage |
|---|---|---|---|---|---|---|---|
| 2 | 2 | 0.179 | 0.818 | 0.188 | 0.772 | 0.229 | 0.813 |
| 3 | 2 | 0.171 | 0.005 | 0.18 | 0.442 | 0.319 | 0.465 |
| 3 | 3 | 0.934 | 0.548 | 0.563 | 0.54 | 0.044 | 0.568 |
| 4 | 2 | 0.747 | 0.546 | 0.786 | 0.811 | 0.256 | 0.854 |
| 4 | 3 | 0.341 | 0.852 | 0.359 | 0.805 | 0.365 | 0.847 |
| 5 | 3 | 0.946 | 0.681 | 0.996 | 0.579 | 0.855 | 0.609 |
| 5 | 4 | 0.074 | 0.004 | 0.078 | 0.27 | 0.068 | 0.284 |
| 6 | 4 | 0.861 | 0.707 | 0.606 | 0.727 | 0.423 | 0.765 |
| 6 | 5 | 0.334 | 0.012 | 0.352 | 0.601 | 0.031 | 0.633 |
Table 5. Average lengths of the 95% exact, bootstrap, and Bayes confidence intervals of R (first simulation setting).

| n | m | Exact CI | Bootstrap CI | Bayes CI |
|---|---|---|---|---|
| 2 | 2 | 1.765×10^-5 | 0.224 | 1.660×10^-4 |
| 3 | 2 | 2.823×10^-5 | 0.479 | 2.623×10^-4 |
| 3 | 3 | 4.741×10^-5 | 0.201 | 4.657×10^-3 |
| 4 | 2 | 1.043×10^-3 | 0.431 | 1.232×10^-5 |
| 4 | 3 | 4.899×10^-6 | 0.692 | 3.099×10^-5 |
| 5 | 3 | 2.839×10^-5 | 0.235 | 3.222×10^-4 |
| 5 | 4 | 2.419×10^-6 | 0.459 | 2.543×10^-6 |
| 6 | 4 | 2.86×10^-7 | 1.397 | 2.543×10^-6 |
| 6 | 5 | 7.696×10^-7 | 0.425 | 6.777×10^-7 |
| 6 | 6 | 3.775×10^-7 | 0.302 | 4.723×10^-7 |
Table 6. Average lengths of the 95% exact, bootstrap, and Bayes confidence intervals of R (second simulation setting).

| n | m | Exact CI | Bootstrap CI | Bayes CI |
|---|---|---|---|---|
| 2 | 2 | 3.025×10^-4 | 0.119 | 2.111×10^-3 |
| 3 | 2 | 6.603×10^-5 | 0.149 | 5.344×10^-4 |
| 3 | 3 | 5.720×10^-7 | 0.137 | 7.502×10^-6 |
| 4 | 2 | 1.31×10^-6 | 0.147 | 2.222×10^-6 |
| 4 | 3 | 5.637×10^-6 | 0.277 | 8.324×10^-6 |
| 5 | 3 | 4.127×10^-8 | 0.116 | 5.1322×10^-7 |
| 5 | 4 | 1.291×10^-7 | 0.159 | 3.2657×10^-7 |
| 6 | 4 | 6.213×10^-8 | 0.186 | 5.121×10^-6 |
| 6 | 5 | 1.917×10^-8 | 0.216 | 3.777×10^-8 |
| 6 | 6 | 2.415×10^-8 | 0.115 | 1.400×10^-8 |
6. Conclusion
In this paper, the MLE and the Bayes estimators of R are derived when the stress and strength variables follow independent geometric distributions, based on lower record values. The exact, bootstrap, and Bayesian confidence intervals are investigated.
Generally, the coverage percentage of the MLE is better than that of the Bayes estimator.
Regarding the numbers of records n and m for the stress and strength variables, it is observed that the coverage percentage increases as n and m increase, and vice versa.
The estimates obtained by the MLE are better than those obtained by the Bayes estimator at R = 0.714 and 0.95.
The average confidence interval lengths of the exact method are shorter than the corresponding average lengths of the bootstrap and Bayes methods.
The MSEs of the Bayes estimator under the squared error loss function are smaller than those under the LINEX loss function.