Statistics
Permanent URI for this community: https://repository.ui.edu.ng/handle/123456789/360
Search Results (2 results)
Item: Frequentist and Bayesian estimation of parameters of linear regression model with correlated explanatory variables (2017)
Adepoju, A. A.; Adebajo, E. O.; Ogundunmade, P. T.
This paper addresses the common problem of collinearity among explanatory variables in multiple linear regression analysis and compares parameter estimation under classical and Bayesian methods. Five sample sizes (10, 25, 50, 100 and 500), each replicated 10,000 times, were simulated using the Monte Carlo method. Four levels of correlation, ρ = 0.0, 0.1, 0.5 and 0.9, representing no, weak, moderate and strong correlation, were considered. The estimation techniques considered were Ordinary Least Squares (OLS), Feasible Generalized Least Squares (FGLS) and Bayesian methods. The performance of the estimators was evaluated using the Absolute Bias (ABIAS) and Mean Square Error (MSE) of the estimates. In all cases considered, the Bayesian estimator performed best; it was consistently more efficient than the other estimators, namely OLS and FGLS.

Item: Estimators of linear regression model with autocorrelated error terms and prediction using correlated uniform regressors (2012-11)
Ayinde, K.; Adedayo, D. A.; Adepoju, A. A.
The performance of estimators of the linear regression model with autocorrelated error terms has been attributed to the nature and specification of the explanatory variables. Violation of the assumption of independence of the explanatory variables is not uncommon, especially in business, economics and the social sciences, and has led to the development of many estimators. Moreover, prediction is one of the main purposes of regression analysis. This work therefore examines the parameter estimates of the Ordinary Least Squares (OLS), Cochrane-Orcutt (COR), Maximum Likelihood (ML) and Principal Components (PC) estimators for prediction in the linear regression model with autocorrelated error terms, under violation of the assumption of independent regressors (multicollinearity), using a Monte Carlo experiment. With uniform variables as regressors, it further identifies the best estimator for prediction by averaging the adjusted coefficient of determination of each estimator over the number of trials. Results reveal that the performances of the COR and ML estimators at each level of multicollinearity over the levels of autocorrelation are convex-like, while those of the OLS and PC estimators are concave, and that as the level of multicollinearity increases, the estimators perform much better at all levels of autocorrelation. Except when the sample size is small (n = 10), the performances of the COR and ML estimators are generally best and asymptotically the same. When the sample size is small, the COR estimator is still best except when the autocorrelation level is low; in those instances, the PC estimator is either best or competes with the best estimator. Moreover, at a low level of autocorrelation, the OLS estimator competes with the best estimator at all sample sizes and levels of multicollinearity.
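
The first item above describes a Monte Carlo comparison of classical and Bayesian estimators under correlated regressors. The following is a minimal sketch of that kind of experiment, not the authors' code: it simulates regressors with correlation ρ, fits OLS and a Bayesian posterior-mean estimator under a conjugate normal prior with known error variance, and reports absolute bias and MSE. The true coefficients, prior variance, reduced replication count and the omission of FGLS are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true = np.array([1.0, 0.5, 2.0])   # intercept and two slopes (assumed values)
sigma2, tau2 = 1.0, 10.0                # error variance and prior variance (assumed)

def simulate(n, rho):
    """Draw n observations with corr(x1, x2) = rho and normal errors."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    X = np.column_stack([np.ones(n), rng.multivariate_normal(np.zeros(2), cov, size=n)])
    return X, X @ beta_true + rng.normal(0.0, np.sqrt(sigma2), size=n)

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

def bayes_posterior_mean(X, y):
    # Posterior mean under beta ~ N(0, tau2 * I) and y | beta ~ N(X beta, sigma2 * I)
    A = X.T @ X / sigma2 + np.eye(X.shape[1]) / tau2
    return np.linalg.solve(A, X.T @ y / sigma2)

def monte_carlo(n, rho, reps=2000):     # the paper used 10,000 replications
    draws = {"OLS": [], "Bayes": []}
    for _ in range(reps):
        X, y = simulate(n, rho)
        draws["OLS"].append(ols(X, y))
        draws["Bayes"].append(bayes_posterior_mean(X, y))
    for name, b in draws.items():
        b = np.array(b)
        abias = np.abs(b - beta_true).mean()
        mse = ((b - beta_true) ** 2).mean()
        print(f"n={n:3d} rho={rho:.1f} {name:5s} ABIAS={abias:.4f} MSE={mse:.4f}")

for n in (10, 25, 50):                  # the paper also used n = 100 and 500
    for rho in (0.0, 0.1, 0.5, 0.9):
        monte_carlo(n, rho)
```

The shrinkage toward zero induced by the prior is what typically stabilises the Bayesian estimates when the regressors are strongly correlated; how closely this toy setup matches the paper's findings depends on the assumed prior variance.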
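
The second item above compares estimators for prediction when the errors are AR(1) and the uniform regressors are correlated. Below is an illustrative sketch under assumed settings, not the authors' code: it generates correlated uniform regressors through a Gaussian copula (using scipy.stats.norm), adds AR(1) errors, fits OLS, a Cochrane-Orcutt style correction and a principal-components regression, and averages the adjusted R² over replications. The ML estimator is omitted, and the coefficient values, sample size and grid of autocorrelation and multicollinearity levels are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
beta_true = np.array([1.0, 0.8, 0.5])                # assumed coefficients

def simulate(n, lam, phi):
    """Correlated uniform regressors (Gaussian copula) with AR(1) errors."""
    cov = np.array([[1.0, lam], [lam, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    X = np.column_stack([np.ones(n), norm.cdf(z)])   # uniform(0, 1) regressors
    e = np.zeros(n)
    e[0] = rng.normal()
    for t in range(1, n):                            # AR(1) error process
        e[t] = phi * e[t - 1] + rng.normal()
    return X, X @ beta_true + e

def ols_fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def adj_r2(y, yhat, p):
    n = len(y)
    r2 = 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

def cochrane_orcutt(X, y, iters=20):
    b = ols_fit(X, y)
    for _ in range(iters):                           # alternate rho and beta updates
        e = y - X @ b
        rho = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)
        b = ols_fit(X[1:] - rho * X[:-1], y[1:] - rho * y[:-1])
    return b

def pc_regression(X, y, k=1):
    Z = X[:, 1:] - X[:, 1:].mean(axis=0)             # centred regressors
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    W = Vt[:k].T                                     # loadings of first k components
    g = ols_fit(np.column_stack([np.ones(len(y)), Z @ W]), y)
    slopes = W @ g[1:]                               # map back to original regressors
    return np.concatenate([[y.mean() - X[:, 1:].mean(axis=0) @ slopes], slopes])

for phi in (0.2, 0.9):                               # autocorrelation levels (assumed)
    for lam in (0.0, 0.9):                           # multicollinearity levels (assumed)
        scores = {"OLS": [], "COR": [], "PC": []}
        for _ in range(300):
            X, y = simulate(50, lam, phi)
            for name, fit in (("OLS", ols_fit), ("COR", cochrane_orcutt), ("PC", pc_regression)):
                scores[name].append(adj_r2(y, X @ fit(X, y), p=2))
        summary = "  ".join(f"{k}={np.mean(v):.3f}" for k, v in scores.items())
        print(f"phi={phi:.1f} lambda={lam:.1f}  {summary}")
```

Averaging the adjusted R² across replications, as in the paper, gives a single prediction-oriented score per estimator for each combination of autocorrelation and multicollinearity levels; the exact rankings here depend on the assumed settings.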