Hindawi Publishing Corporation
The Scientific World Journal
Volume 2014, Article ID 512039, 6 pages
http://dx.doi.org/10.1155/2014/512039
Research Article

Estimation of the Parameters of Burr Type III Distribution Based on Dual Generalized Order Statistics

Chansoo Kim¹ and Woosuk Kim²

¹ Department of Applied Mathematics, Kongju National University, Gongju 314-701, Republic of Korea
² Department of Statistics and Actuarial Science, University of Iowa, Iowa City, IA 52242-1409, USA
Correspondence should be addressed to Woosuk Kim;
[email protected]

Received 12 June 2014; Accepted 16 September 2014; Published 19 October 2014

Academic Editor: Krzysztof Malarz

Copyright © 2014 C. Kim and W. Kim. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The estimation of the parameters of the Burr type III distribution based on dual generalized order statistics is considered using both the maximum likelihood (ML) approach and the Bayesian approach. An exact expression for the expected Fisher information matrix of the parameters of the distribution is obtained. In addition, Lindley's approximation is used to obtain the Bayes estimators. To compare the maximum likelihood estimators and the Bayes estimators of the parameters, a Monte Carlo simulation study is performed.
1. Introduction

The Burr type III distribution with two parameters was first introduced by Burr [1] for modelling lifetime or survival data. It is flexible and covers a variety of distribution shapes with varying degrees of skewness and kurtosis. The Burr type III distribution with two parameters c and θ, denoted by BurrIII(c, θ), has also been applied in areas of statistical modelling such as forestry (Gove et al. [2]), meteorology (Mielke [3]), and reliability (Mokhlis [4]). The probability density function and the cumulative distribution function of BurrIII(c, θ) are given by, respectively,

  f(x) = cθ x^{-(c+1)} (1 + x^{-c})^{-(θ+1)},  x > 0, c > 0, θ > 0,
  F(x) = (1 + x^{-c})^{-θ}.   (1)

Note that the Burr type XII distribution can be derived from the Burr type III distribution by replacing x with 1/x. The usefulness and properties of the Burr distribution are discussed by Burr and Cislak [5] and Johnson et al. [6]. Abd-Elfattah and Alharbey [7] considered Bayesian estimation for the Burr type III distribution based on double censoring.

Order statistics are widely used in statistical modelling and inference. As a unified approach to a variety of models of ordered random variables such as ordinary order statistics,
upper record values, and sequential order statistics, the concept of generalized order statistics (GOS) was introduced by Kamps [8]. Based on GOS, Burkschat et al. [9] introduced the concept of dual generalized order statistics (dual GOS) as a dual model of GOS and a unification of several models of decreasingly ordered random variables such as reversed order statistics, lower record values, and lower Pfeifer records. Let F(x) denote an absolutely continuous distribution function with corresponding density function f(x), and let X*(r, n, m̃, k), r = 1, 2, …, n, be the corresponding dual GOS. Then, the joint probability density function of the first n dual GOS is the following:

  f_{X*(1,n,m̃,k),…,X*(n,n,m̃,k)}(x_1, …, x_n)
   = k (∏_{j=1}^{n-1} γ_j) [∏_{i=1}^{n-1} F^{m_i}(x_i) f(x_i)] F^{k-1}(x_n) f(x_n),   (2)

for F^{-1}(0) < x_n ≤ ⋯ ≤ x_1 < F^{-1}(1), with parameters n ∈ ℕ, n ≥ 2, k > 0, m̃ = (m_1, m_2, …, m_{n-1}) ∈ ℝ^{n-1}, and M_r = Σ_{j=r}^{n-1} m_j such that γ_r = k + n − r + M_r > 0 for all r ∈ {1, 2, …, n}. For simplicity, we shall assume m_1 = m_2 = ⋯ = m_{n-1} = m. If m = 0 and k = 1, then (2) gives the joint probability density function of the n reversed order statistics
from an independent and identically distributed (iid) random sample with distribution function F(x). If m = −1, then X*(r, n, m, k) reduces to the rth lower k-record value of the iid random variables. Various distributional properties and applications of the related topics are studied by Burkschat et al. [9], Ahsanullah [10], Jaheen [11], Mbah and Ahsanullah [12], Barakat and El-Adll [13], and W. Kim and C. Kim [14].

In this paper, our main objective is to describe the MLE, the exact expression of the expected Fisher information matrix, and Bayes estimation procedures for the parameters of the Burr type III distribution based on dual GOS, assuming conjugate priors. In Section 2, we consider the MLE and obtain an exact expression of the expected Fisher information matrix of the parameters. In Section 3, Lindley's approximation is used to obtain Bayes estimates of the parameters. Finally, in order to compare the MLE with the Bayes estimators, a Monte Carlo simulation study is presented in Section 4.
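The lower-record special case (m = −1, k = 1), used in the simulation study of Section 4, is easy to generate: if R_1 > R_2 > ⋯ are the lower records of an iid sequence with cdf F, then F(R_j) is distributed as a product of j iid uniforms, so the records can be obtained by inverting F along a cumulative product. A minimal sketch of this standard construction (the function names are ours; BurrIII(c, θ) is parameterized as in (1)):

```python
import math
import random

def burr3_inv_cdf(u, c, theta):
    """Invert F(x) = (1 + x**-c)**-theta for u in (0, 1)."""
    return (u ** (-1.0 / theta) - 1.0) ** (-1.0 / c)

def burr3_lower_records(c, theta, n_records, rng=random):
    """First n_records lower record values from an iid BurrIII(c, theta) sequence.

    Uses the fact that F(R_j) equals the product u_1 * ... * u_j of iid
    uniforms; each record is then recovered by inverting F."""
    prod, records = 1.0, []
    for _ in range(n_records):
        prod *= rng.random()
        records.append(burr3_inv_cdf(prod, c, theta))
    return records
```

Since F is strictly increasing and the uniform products decrease, the generated values are strictly decreasing, as lower records must be.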
2. Maximum Likelihood Estimation

For m_1 = m_2 = ⋯ = m_{n-1} = m, suppose that X*(1, n, m, k), X*(2, n, m, k), …, X*(n, n, m, k) (n ≥ 1, m a real number) are n dual generalized order statistics drawn from BurrIII(c, θ). Using (1) and (2), we can get the following likelihood function:

  L(c, θ | x) = k (∏_{j=1}^{n-1} γ_j) [∏_{i=1}^{n-1} cθ x_i^{-(c+1)} (1 + x_i^{-c})^{-(θ+θm+1)}] cθ x_n^{-(c+1)} (1 + x_n^{-c})^{-(θk+1)}
   = c^n θ^n k (∏_{j=1}^{n-1} γ_j) [∏_{i=1}^{n-1} x_i^{cθ+cθm-1} (1 + x_i^c)^{-(θ+θm+1)}] x_n^{cθk-1} (1 + x_n^c)^{-(θk+1)}.   (3)

From (3), the log-likelihood function is proportional to

  ℓ = ln L(c, θ | x) ∝ n ln c + n ln θ + Σ_{i=1}^{n-1} [(cθ + cθm − 1) ln x_i − (θ + θm + 1) ln(1 + x_i^c)] + (cθk − 1) ln x_n − (θk + 1) ln(1 + x_n^c).   (4)

To derive the maximum likelihood estimators (MLEs) θ̂_M and ĉ_M of θ and c, we equate to zero the partial derivatives

  ∂ℓ/∂θ = n/θ + (1 + m) Σ_{i=1}^{n-1} ln t_i + k ln t_n,   (5)

  ∂ℓ/∂c = n/c − Σ_{i=1}^{n} x_i^c w_i + θ [(1 + m) Σ_{i=1}^{n-1} w_i + k w_n],   (6)

where t_i = x_i^c/(1 + x_i^c) and w_i = ln x_i/(1 + x_i^c) for i = 1, 2, …, n. From (5), the MLE of θ is expressed by

  θ̂_M = −n [(1 + m) Σ_{i=1}^{n-1} ln t_i + k ln t_n]^{-1}.   (7)

Substituting (7) in (6), the MLE ĉ_M of c can be written as the solution of

  n/c − Σ_{i=1}^{n} x_i^c w_i − n [(1 + m) Σ_{i=1}^{n-1} w_i + k w_n] / [(1 + m) Σ_{i=1}^{n-1} ln t_i + k ln t_n] = 0.   (8)

The MLE of the parameter c is obtained by solving the nonlinear equation (8); substituting it into (7), we can obtain the MLE of the parameter θ. The asymptotic variance-covariance matrix of the MLEs of the parameters θ and c is given by the elements of the Fisher information matrix

  I_ij = −E(∂²ℓ(θ, c | x)/∂λ_i ∂λ_j),  i, j = 1, 2,   (9)

with (λ_1, λ_2) = (θ, c). From (4), the asymptotic variance-covariance matrix of the MLEs is obtained by the following:

  [ −E(∂²ℓ/∂θ²)   −E(∂²ℓ/∂θ∂c) ]^{-1}        [ Q*_11  Q*_12 ]
  [ −E(∂²ℓ/∂c∂θ)  −E(∂²ℓ/∂c²)  ]        =    [ Q*_12  Q*_22 ] ,  evaluated at (θ, c) = (θ̂_M, ĉ_M),   (10)

with

  ∂²ℓ/∂θ² = −n/θ²,
  ∂²ℓ/∂θ∂c = (1 + m) Σ_{i=1}^{n-1} w_i + k w_n,   (11)
  ∂²ℓ/∂c² = −n/c² − (θ + θm + 1) Σ_{i=1}^{n-1} t_i w_i ln x_i − (θk + 1) t_n w_n ln x_n,

where, as before, t_i = x_i^c/(1 + x_i^c) and w_i = ln x_i/(1 + x_i^c).
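Equations (7) and (8) reduce the two-parameter maximization to a one-dimensional root search in c. A minimal numerical sketch (the function names and the bisection bracket are our own choices; the paper only says the equations are solved iteratively):

```python
import math

def mle_burr3_dgos(x, m, k, c_lo=1e-6, c_hi=20.0, tol=1e-10):
    """MLE of (c, theta) for BurrIII(c, theta) from dual GOS x_1 >= ... >= x_n.

    theta_hat follows from equation (7); c_hat solves the nonlinear
    equation (8), here by simple bisection on [c_lo, c_hi]."""
    n = len(x)

    def theta_from_c(c):
        # equation (7); ln t_i computed stably as c ln x_i - log(1 + x_i^c)
        log_t = [c * math.log(xi) - math.log1p(xi**c) for xi in x]
        s = (1.0 + m) * sum(log_t[:-1]) + k * log_t[-1]
        return -n / s

    def score(c):
        # left-hand side of equation (8)
        log_t = [c * math.log(xi) - math.log1p(xi**c) for xi in x]
        w = [math.log(xi) / (1.0 + xi**c) for xi in x]
        s = (1.0 + m) * sum(log_t[:-1]) + k * log_t[-1]
        num = (1.0 + m) * sum(w[:-1]) + k * w[-1]
        return n / c - sum(xi**c * wi for xi, wi in zip(x, w)) - n * num / s

    lo, hi = c_lo, c_hi
    f_lo, f_hi = score(lo), score(hi)
    if f_lo * f_hi > 0:
        raise ValueError("no sign change: widen the bracket [c_lo, c_hi]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = score(mid)
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    c_hat = 0.5 * (lo + hi)
    return c_hat, theta_from_c(c_hat)
```

For m = 0 and k = 1 the dual GOS likelihood coincides (up to a constant) with that of an iid sample, which provides a convenient correctness check.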
Note that the Fisher information involves the data only through functions of t_i and w_i, and so we need the marginal probability density function of the rth dual GOS based on the distribution function F(x) and the density function f(x). From Burkschat et al. [9], the marginal probability density function of the rth dual GOS is the following:

  f_{X*(r,n,m,k)}(x_r) = (C_{r-1}/Γ(r)) [F(x_r)]^{γ_r - 1} [g_m(F(x_r))]^{r-1} f(x_r),   (12)
where

  C_{r-1} = ∏_{j=1}^{r} γ_j,  r = 1, 2, …, n,

  h_m(x) = −x^{m+1}/(m+1) for m ≠ −1,  h_{-1}(x) = −ln x,   (13)

  g_m(x) = h_m(x) − h_m(1),  x ∈ [0, 1).

Assume that m ≠ −1. For each r = 1, 2, …, n, we can have the expectation of w_r, which is

  E(w_r) = E(ln x_r/(1 + x_r^c)) = (C_{r-1}/(r−1)!) ∫_0^∞ (ln x_r/(1 + x_r^c)) [F(x_r)]^{γ_r - 1} f(x_r) g_m^{r-1}(F(x_r)) dx_r.   (14)

If we use the transformation z = F^{1/θ}(x_r), then the expectation of w_r is given by

  E(w_r) = (θ C_{r-1} / (c (r−1)! (m+1)^{r-1})) Σ_{j=0}^{r-1} \binom{r-1}{j} (−1)^j
    × { 1/[θ(γ_r + j(m+1)) + 1]² − 1/[θ(γ_r + j(m+1))]²
      + Σ_{p=1}^{∞} ( 1/(p[θ(γ_r + j(m+1)) + p]) − 1/(p[θ(γ_r + j(m+1)) + p + 1]) ) }.   (15)

To get the expectation of t_r w_r ln x_r, we need to compute

  E(t_r w_r ln x_r) = E( x_r^c (ln x_r)² / (1 + x_r^c)² )
   = (C_{r-1}/(r−1)!) ∫_0^∞ ( x_r^c (ln x_r)² / (1 + x_r^c)² ) [F(x_r)]^{γ_r - 1} f(x_r) g_m^{r-1}(F(x_r)) dx_r.   (16)

By the same transformation z = F^{1/θ}(x_r), the expectation of t_r w_r ln x_r, for m ≠ −1, is given by

  E(t_r w_r ln x_r) = (2θ C_{r-1} / (c² (r−1)! (m+1)^{r-1})) Σ_{j=0}^{r-1} \binom{r-1}{j} (−1)^j
    × { 1/[θ(γ_r + j(m+1)) + 1]³ − 1/[θ(γ_r + j(m+1)) + 2]³
      + Σ_{p=1}^{∞} ( 1/(p[θ(γ_r + j(m+1)) + p + 2]²) − 1/(p[θ(γ_r + j(m+1)) + p + 1]²) )
      + (1/2) Σ_{p=2}^{∞} Σ_{q=1}^{p-1} (1/(q(p−q))) ( 1/[θ(γ_r + j(m+1)) + p + 1] − 1/[θ(γ_r + j(m+1)) + p + 2] ) }.   (17)

For m = −1, we can compute the expectation of w_r, which is

  E(w_r) = E(ln x_r/(1 + x_r^c)) = (C_{r-1}/(r−1)!) ∫_0^∞ (ln x_r/(1 + x_r^c)) [F(x_r)]^{γ_r - 1} f(x_r) g_{-1}^{r-1}(F(x_r)) dx_r.   (18)

Using the same transformation z = F^{1/θ}(x_r), the expectation of w_r is the following:

  E(w_r) = (θ^r k^r / c) { r/(θk + 1)^{r+1} − r/(θk)^{r+1}
      + Σ_{p=1}^{∞} ( 1/(p(p + θk)^r) − 1/(p(p + θk + 1)^r) ) }.   (19)

To get the expectation of t_r w_r ln x_r, we should compute

  E(t_r w_r ln x_r) = E( x_r^c (ln x_r)² / (1 + x_r^c)² )
   = (C_{r-1}/(r−1)!) ∫_0^∞ ( x_r^c (ln x_r)² / (1 + x_r^c)² ) [F(x_r)]^{γ_r - 1} f(x_r) g_{-1}^{r-1}(F(x_r)) dx_r.   (20)
2 π With the transformation π§ = πΉ1/π (π₯π ) and (ββ π=1 (π§ /π)) = β πβ1 π βπ=2 βπ=1 (π§ /(π β π)π), the expectation of ππ ππ ln ππ , when π = β1, is given by
where π1 (π) =
π
(ππ) π (π + 1) π (π + 1) β = 2 { π+2 π+2 π (ππ + 1) (ππ + 2) β πβ1
π=2 π=1
π (π β π) (π + ππ + 1)
π=1
π
π (π, π | π₯) β ππ+πΎβ1 ππ+πΌβ1 πβπ/π½βπ/πΏ
1 π
π (π β π) (π + ππ + 2)
β
+ 2π β (
where πΌ, π½, πΎ, and πΏ are chosen to reflect prior knowledge about π and π. By combining (3) and (24), the joint posterior density function of π and π can be put as follows:
1
β
1 π(π + ππ + 2)
π+1
β
Γ (β π=1
1 π+1
π(π + ππ + 1)
)} .
π2 π π ) = 2, ππ2 π
πβ1
= β (1 + π) β πΈ (ππ ) β ππΈ (ππ ) , β = βπΈ ( π22
=
(22)
π2 π ) ππ2
. ππ+1 (1 + π₯ππ )
(25)
Μπ΅ = πΈ [π (π, π) | π₯] π β
=
β¬0 π (π, π) πΏ (π, π | π₯) π (π, π) ππ ππ β
β¬0 πΏ (π, π | π₯) π (π, π) ππ ππ
.
(26)
Μπ΅ = π (π 1 , π 2 ) + 1 (π΄ + πβ π΅12 + πβ π΅21 + πβ πΆ12 + πβ πΆ21 ) π 30 03 21 12 2 + π1 π΄ 12 + π2 π΄ 21 , (27)
π + (π + ππ + 1) π2
where
πβ1
2 2
Γ β πΈ (ππ ππ ln ππ ) + (ππ + 1) πΈ (ππ ππ ln ππ ) .
π΄ = ββ πππ πππ ,
π=1
π=1 π=1
β β β Using (15), (17), (19), and (21), all entries π11 , π12 , π22 can be explicitly expressed, depending on π.
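The series above converge quickly (the summands of (15) decay like p^{-3}), so the entries of Q* can be evaluated by truncation. As a sanity check, (15) can be compared against direct numerical integration of (14) after the substitution z = F^{1/θ}(x_r); a sketch under our notation (the helper names are ours, and the check uses the reversed-order-statistics case m = 0, k = 1):

```python
import math

def gamma_j(j, n, m, k):
    # gamma_j = k + n - j + M_j with M_j = (n - j) m when all m_i equal m
    return k + n - j + (n - j) * m

def Ew_series(r, n, m, k, c, theta, terms=100000):
    """Truncated series (15) for E(w_r); valid for m != -1."""
    C = 1.0
    for j in range(1, r + 1):
        C *= gamma_j(j, n, m, k)
    total = 0.0
    for j in range(r):
        A = theta * (gamma_j(r, n, m, k) + j * (m + 1))
        tail = sum(1.0 / (p * (A + p)) - 1.0 / (p * (A + p + 1))
                   for p in range(1, terms))
        total += (math.comb(r - 1, j) * (-1.0) ** j
                  * (1.0 / (A + 1.0) ** 2 - 1.0 / A ** 2 + tail))
    return theta * C * total / (c * math.factorial(r - 1) * (m + 1) ** (r - 1))

def Ew_quad(r, n, m, k, c, theta, N=200000):
    """Midpoint-rule integration of (14) in the variable z = F^(1/theta)(x_r)."""
    C = 1.0
    for j in range(1, r + 1):
        C *= gamma_j(j, n, m, k)
    g_r = gamma_j(r, n, m, k)
    s = 0.0
    for i in range(N):
        z = (i + 0.5) / N
        g = (1.0 - z ** (theta * (m + 1))) / (m + 1)   # g_m(z^theta)
        s += ((1.0 - z) * math.log(z / (1.0 - z))
              * z ** (theta * g_r - 1.0) * g ** (r - 1))
    return C * theta * s / (c * math.factorial(r - 1) * N)
```

For r = n = 1, m = 0, k = 1 and θ = c = 1 the series reduces analytically to −1/2, which the code reproduces.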
3. Bayes Estimation

In this section, we want to estimate the parameters θ and c under the squared error loss (SEL) function, which is defined as L_0(λ, λ̂) = (λ − λ̂)² for a parameter λ. Assuming that the parameters θ and c are unknown, a natural choice for the prior distributions of θ and c would be to assume that the two quantities are independent gamma distributions, as in the following:

  π(θ, c) = π_1(θ) π_2(c),   (23)

where

  π_1(θ) = (β^{-α}/Γ(α)) θ^{α-1} e^{-θ/β},
  π_2(c) = (δ^{-γ}/Γ(γ)) c^{γ-1} e^{-c/δ},

so that

  π(θ, c) ∝ θ^{α-1} c^{γ-1} e^{-θ/β - c/δ},   (24)

where α, β, γ, and δ are chosen to reflect prior knowledge about θ and c. By combining (3) and (24), the joint posterior density function of θ and c can be put as follows:

  π(θ, c | x) ∝ θ^{n+α-1} c^{n+γ-1} e^{-θ/β - c/δ} [∏_{i=1}^{n-1} x_i^{cθ+cθm-1} (1 + x_i^c)^{-(θ+θm+1)}] x_n^{cθk-1} (1 + x_n^c)^{-(θk+1)}.   (25)

Under the SEL function, it is well known that the Bayes estimator of a function u = u(θ, c) is the posterior mean of the function, which is

  û_B = E[u(θ, c) | x] = ∬_0^∞ u(θ, c) L(θ, c | x) π(θ, c) dθ dc / ∬_0^∞ L(θ, c | x) π(θ, c) dθ dc.   (26)

In general, the integral ratio in (26) cannot be expressed in a simple closed form. Hence, we use Lindley's approximation [15] to obtain a numerical approximation. In a two-parameter case, say (λ_1, λ_2) = (θ, c), based on Lindley's approximation, the approximate Bayes estimator of a function u = u(λ_1, λ_2), under the SEL function, leads to

  û_B = u(λ_1, λ_2) + (1/2)(A + l*_30 B_12 + l*_03 B_21 + l*_21 C_12 + l*_12 C_21) + ρ_1 A_12 + ρ_2 A_21,   (27)

where

  A = Σ_{i=1}^{2} Σ_{j=1}^{2} u_ij σ_ij,
  l*_ij = ∂^{i+j} ℓ / ∂λ_1^i ∂λ_2^j,  i, j = 0, 1, 2, 3, with i + j = 3,
  ρ = ln π(λ_1, λ_2),  ρ_i = ∂ρ/∂λ_i,
  u_i = ∂u/∂λ_i,  u_ij = ∂²u/∂λ_i ∂λ_j,  for i, j = 1, 2,   (28)

and, for i ≠ j,

  A_ij = u_i σ_ii + u_j σ_ij,
  B_ij = (u_i σ_ii + u_j σ_ij) σ_ii,
  C_ij = 3 u_i σ_ii σ_ij + u_j (σ_ii σ_jj + 2 σ_ij²).   (29)

Note that σ_ij is the (i, j)th element of the inverse of the matrix (−l_ij), i, j = 1, 2, where l_ij = ∂²ℓ/∂λ_i ∂λ_j. Moreover, (27) is to be evaluated at the MLEs of λ_1 and λ_2.
Now, we apply Lindley's approximation (27) to our case, where (λ_1, λ_2) = (θ, c) and u(λ_1, λ_2) = u(θ, c). The elements σ_ij can be obtained from

  [ −l_11  −l_12 ]^{-1}                              [ −l_22   l_12 ]    [ σ_11  σ_12 ]
  [ −l_21  −l_22 ]       = (1/(l_11 l_22 − l_12²))   [  l_21  −l_11 ]  = [ σ_21  σ_22 ].   (30)

Let H = −l_22, G = −l_11, and I = l_12 = l_21, and write D = GH − I². Then we can rewrite this as

  σ_11 = H/D,  σ_12 = σ_21 = I/D,  σ_22 = G/D,   (31)

where

  G = n/θ²,
  H = n/c² + (θ + θm + 1) Σ_{i=1}^{n-1} t_i w_i ln x_i + (θk + 1) t_n w_n ln x_n,   (32)
  I = (1 + m) Σ_{i=1}^{n-1} w_i + k w_n.

Also, the values of l*_ij can be obtained as follows, for i, j = 0, 1, 2, 3 with i + j = 3:

  l*_30 = ∂³ℓ/∂θ³ = 2n/θ³,
  l*_03 = ∂³ℓ/∂c³ = 2n/c³ − Σ_{i=1}^{n} t_i w_i² (1 − x_i^c) ln x_i − θ(1 + m) Σ_{i=1}^{n-1} t_i w_i² (1 − x_i^c) ln x_i − θk t_n w_n² (1 − x_n^c) ln x_n,   (33)
  l*_21 = ∂³ℓ/∂θ²∂c = 0,
  l*_12 = ∂³ℓ/∂θ∂c² = −(1 + m) Σ_{i=1}^{n-1} t_i w_i ln x_i − k t_n w_n ln x_n.

Note that ρ = ln π(θ, c) ∝ (α − 1) ln θ + (γ − 1) ln c − θ/β − c/δ. Then, we get

  ρ_1 = ∂ρ/∂θ = (α − 1)/θ − 1/β,  ρ_2 = ∂ρ/∂c = (γ − 1)/c − 1/δ.   (34)

Substituting all of the above components into (27), the Bayes estimate of the function u(θ, c), under the SEL function, becomes

  û_B = E[u(θ, c) | x] = u + u_0 + b_1 u_1 + b_2 u_2,   (35)

where

  u_0 = (1/(2D)) (u_11 H + u_12 I + u_21 I + u_22 G),
  b_1 = (l*_30 H² + l*_03 GI + l*_12 (HG + 2I²)) / (2D²) + (ρ_1 H + ρ_2 I)/D,   (36)
  b_2 = (l*_30 HI + l*_03 G² + 3 l*_12 GI) / (2D²) + (ρ_1 I + ρ_2 G)/D.

From (35), we can deduce the values of the Bayes estimates of the parameters θ and c as follows. If u(θ, c) = θ, then u_0 = 0, u_1 = 1, and u_2 = 0. Hence,

  θ̂_B = θ̂_M + b_1.   (37)

If u(θ, c) = c, then u_0 = 0, u_1 = 0, and u_2 = 1. Hence,

  ĉ_B = ĉ_M + b_2.   (38)

Note that (35), (37), and (38) are to be evaluated at the MLEs (θ̂_M, ĉ_M).
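Formulas (31)-(38) are directly computable once the MLEs are in hand. A sketch of the Bayes-correction step (the function name is ours; x must be in the dual GOS order x_1 ≥ ⋯ ≥ x_n):

```python
import math

def lindley_bayes_burr3(x, m, k, th_hat, c_hat, alpha, beta, gamma, delta):
    """Approximate Bayes estimates (37)-(38) under squared error loss via
    Lindley's expansion, evaluated at the MLEs (th_hat, c_hat).
    Priors: theta ~ Gamma(alpha, beta), c ~ Gamma(gamma, delta)."""
    n = len(x)
    t = [xi**c_hat / (1.0 + xi**c_hat) for xi in x]
    w = [math.log(xi) / (1.0 + xi**c_hat) for xi in x]
    twl = [ti * wi * math.log(xi) for ti, wi, xi in zip(t, w, x)]   # t w ln x
    tw2 = [ti * wi * wi * (1.0 - xi**c_hat) * math.log(xi)
           for ti, wi, xi in zip(t, w, x)]                          # t w^2 (1 - x^c) ln x

    # (31)-(32): G, H, I and D = GH - I^2
    G = n / th_hat**2
    H = (n / c_hat**2 + (th_hat + th_hat * m + 1.0) * sum(twl[:-1])
         + (th_hat * k + 1.0) * twl[-1])
    I = (1.0 + m) * sum(w[:-1]) + k * w[-1]
    D = G * H - I * I

    # (33): nonzero third derivatives of the log-likelihood
    l30 = 2.0 * n / th_hat**3
    l03 = (2.0 * n / c_hat**3 - sum(tw2)
           - th_hat * (1.0 + m) * sum(tw2[:-1]) - th_hat * k * tw2[-1])
    l12 = -(1.0 + m) * sum(twl[:-1]) - k * twl[-1]

    # (34): derivatives of the log-prior
    rho1 = (alpha - 1.0) / th_hat - 1.0 / beta
    rho2 = (gamma - 1.0) / c_hat - 1.0 / delta

    # (36)-(38): corrections b_1 and b_2
    b1 = ((l30 * H * H + l03 * G * I + l12 * (H * G + 2.0 * I * I)) / (2.0 * D * D)
          + (rho1 * H + rho2 * I) / D)
    b2 = ((l30 * H * I + l03 * G * G + 3.0 * l12 * G * I) / (2.0 * D * D)
          + (rho1 * I + rho2 * G) / D)
    return th_hat + b1, c_hat + b2
```

Because every term of b_1 and b_2 is O(1/n), the corrections shrink as the sample grows, so for large samples the Bayes estimates stay close to the MLEs.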
4. Simulation Study and Comparisons
In this section, we consider the MLE and the approximate Bayes estimates of the two parameters c and θ of the Burr type III distribution. To assess the performance of these estimates, we conducted a simulation study. Let X_L(1) = x_1, X_L(2) = x_2, …, X_L(n) = x_n be the lower record values of size n, which can be obtained from the dual GOS scheme as a special case by taking m = −1 and k = 1. The MLE and Bayes estimates for the parameters of BurrIII(c, θ) based on lower records are computed and compared through a Monte Carlo simulation study according to the following steps.

(1) For c = 2 and θ = 3, samples of lower record values of size n (n = 4, 6, 8, 10) were generated from the Burr type III distribution. Burr type III lower record values are generated using the inverse cdf, x_i = (u_i^{-1/θ} − 1)^{-1/c}, where u_i is a uniformly distributed random variate.

(2) The MLEs θ̂_M and ĉ_M of the parameters θ and c are calculated by iteratively solving (7) and (8) with m = −1 and k = 1.

(3) For given values of the prior parameters (α, β, γ, δ), the Bayes estimates of θ and c are computed from (37) and (38) with m = −1 and k = 1.

(4) The above steps are repeated 1,000 times to evaluate the root mean squared error (RMSE) of the MLE and Bayes estimates for the different sample sizes n. Note that

  RMSE = √( (1/1000) Σ_{i=1}^{1000} (g(λ_0) − g(λ̂_i))² ),   (39)

where g(λ_0) is the true value of the parameter and g(λ̂_i) is the ith estimate of g(λ) evaluated at λ̂.
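The MLE part of steps (1)-(4) can be sketched as follows (200 replications rather than 1,000, for brevity; the helper names are ours, and samples for which the likelihood equation (8) has no root inside the search bracket are skipped, since for lower records a finite MLE need not exist):

```python
import math
import random

def record_sample(c, theta, n, rng):
    # lower records: F(R_j) = u_1 u_2 ... u_j, inverted through the Burr III cdf
    prod, out = 1.0, []
    for _ in range(n):
        prod *= rng.random()
        out.append((prod ** (-1.0 / theta) - 1.0) ** (-1.0 / c))
    return out

def mle_records(x, c_lo=1e-6, c_hi=20.0, tol=1e-9):
    # equations (7)-(8) with m = -1, k = 1: only the last record enters (7)
    n = len(x)
    def log_t(c, xi):
        return c * math.log(xi) - math.log1p(xi**c)
    def score(c):
        w = [math.log(xi) / (1.0 + xi**c) for xi in x]
        return (n / c - sum(xi**c * wi for xi, wi in zip(x, w))
                - n * w[-1] / log_t(c, x[-1]))
    lo, hi = c_lo, c_hi
    f_lo = score(lo)
    if f_lo * score(hi) > 0:
        return None                      # no root in bracket: skip this sample
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = score(mid)
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    c_hat = 0.5 * (lo + hi)
    return c_hat, -n / log_t(c_hat, x[-1])

rng = random.Random(2014)
c0, th0, n_rec, reps = 2.0, 3.0, 6, 200
errs_c, errs_t = [], []
for _ in range(reps):
    est = mle_records(record_sample(c0, th0, n_rec, rng))
    if est is not None:
        errs_c.append((est[0] - c0) ** 2)
        errs_t.append((est[1] - th0) ** 2)
rmse_c = math.sqrt(sum(errs_c) / len(errs_c))
rmse_t = math.sqrt(sum(errs_t) / len(errs_t))
```

The resulting rmse_c and rmse_t can be compared with the MLE columns of Table 1, keeping in mind the reduced replication count and the skipped samples.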
Table 1: The averaged RMSE for the MLE and Bayes estimates of the parameters c and θ for different n.

                          RMSE
                          ĉ_B      θ̂_B      ĉ_B      θ̂_B
                          (α = 3, β = 2,     (α = 5, β = 1,
  n     ĉ_M     θ̂_M      γ = 2, δ = 3)      γ = 2, δ = 5)
  4     2.0268  2.6734    1.7365   2.3706    1.9826   1.7875
  6     1.6421  2.1182    1.3672   1.9863    1.5851   1.4074
  8     1.4473  1.8153    1.1980   1.7572    1.3504   1.3357
  10    1.3556  1.5863    1.1267   1.5180    1.3264   1.2318

Table 1 provides the averaged RMSE of the MLE and Bayes estimates based on lower record values for two sets of prior parameters (α, β, γ, δ). To show that the results hold up across data sets and differing sample sizes, we simulated under two sets of prior parameters, each prior distribution having large variability. We see that the Bayes estimates are better than the MLEs in the sense of smaller RMSE. As the sample size n increases, the RMSE of the estimates should decrease, and this is the case in our simulation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was funded by the Korea Meteorological Administration Research and Development Program under Grant CATER 2012-3081.

References

[1] I. W. Burr, "Cumulative frequency functions," Annals of Mathematical Statistics, vol. 13, pp. 215-232, 1942.
[2] J. H. Gove, M. J. Ducey, W. B. Leak, and L. Zhang, "Rotated sigmoid structures in managed uneven-aged northern hardwood stands: a look at the Burr Type III distribution," Forestry, vol. 81, no. 2, pp. 161-176, 2008.
[3] P. W. Mielke, "Another family of distributions for describing and analyzing precipitation data," Journal of Applied Meteorology, vol. 12, pp. 275-280, 1973.
[4] N. A. Mokhlis, "Reliability of a stress-strength model with Burr type III distributions," Communications in Statistics. Theory and Methods, vol. 34, no. 7, pp. 1643-1657, 2005.
[5] I. W. Burr and P. J. Cislak, "On a general system of distributions, I: its curve-shape characteristics, II: the sample median," Journal of the American Statistical Association, vol. 63, pp. 627-635, 1968.
[6] N. L. Johnson, S. Kotz, and N. Balakrishnan, Continuous Univariate Distributions, John Wiley & Sons, New York, NY, USA, 2nd edition, 1995.
[7] A. M. Abd-Elfattah and A. H. Alharbey, "Bayesian estimation for Burr distribution type III based on trimmed samples," ISRN Applied Mathematics, vol. 2012, Article ID 250393, 18 pages, 2012.
[8] U. Kamps, "A concept of generalized order statistics," Journal of Statistical Planning and Inference, vol. 48, no. 1, pp. 1-23, 1995.
[9] M. Burkschat, E. Cramer, and U. Kamps, "Dual generalized order statistics," Metron. International Journal of Statistics, vol. 61, no. 1, pp. 13-26, 2003.
[10] M. Ahsanullah, "A characterization of the uniform distribution by dual generalized order statistics," Communications in Statistics. Theory and Methods, vol. 33, no. 11-12, pp. 2921-2928, 2004.
[11] Z. F. Jaheen, "Estimation based on generalized order statistics from the Burr model," Communications in Statistics: Theory and Methods, vol. 34, no. 4, pp. 785-794, 2005.
[12] A. K. Mbah and M. Ahsanullah, "Some characterizations of the power function distribution based on lower generalized order statistics," Pakistan Journal of Statistics, vol. 23, no. 2, pp. 139-146, 2007.
[13] H. M. Barakat and M. E. El-Adll, "Asymptotic theory of extreme dual generalized order statistics," Statistics & Probability Letters, vol. 79, no. 9, pp. 1252-1259, 2009.
[14] W. Kim and C. Kim, "On dual generalized order statistics from Burr Type III distribution and its characterization," Far East Journal of Mathematical Sciences, vol. 81, no. 1, pp. 21-39, 2013.
[15] D. V. Lindley, "Approximate Bayesian methods," Trabajos de Estadistica, vol. 21, pp. 223-237, 1980.