In the last ten years, there has been increasing interest and activity in the general area of partially linear regression smoothing in statistics. Many methods and techniques have been proposed and studied. This monograph aims to provide an up-to-date presentation of the state of the art of partially linear regression techniques. The emphasis is on methodology rather than theory, with a particular focus on applications of partially linear regression techniques to various statistical problems. These problems include least squares regression, asymptotically efficient estimation, bootstrap resampling, censored data analysis, linear measurement error models, nonlinear measurement models, and nonlinear and nonparametric time series models.
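The partially linear model underlying these techniques writes the response as a linear part plus an unknown smooth function, y = x'beta + g(z) + eps. As a minimal sketch of the double-residual (Robinson-type) estimation idea -- not the monograph's own code; the kernel, bandwidth, function names, and simulated data are all illustrative assumptions -- the linear coefficients can be recovered by smoothing out the nonparametric part:

```python
import numpy as np

def nw_smooth(z, v, h):
    """Nadaraya-Watson regression of v on a scalar z with a Gaussian kernel."""
    d = (z[:, None] - z[None, :]) / h
    w = np.exp(-0.5 * d**2)
    return (w @ v) / w.sum(axis=1)

def partially_linear_fit(y, x, z, h=0.1):
    """Double-residual estimate of beta in y = x @ beta + g(z) + eps.

    Smooth y and each column of x against z, then run ordinary least
    squares on the residuals; the unknown smooth part g drops out.
    """
    ry = y - nw_smooth(z, y, h)
    rx = x - np.column_stack([nw_smooth(z, x[:, j], h) for j in range(x.shape[1])])
    beta, *_ = np.linalg.lstsq(rx, ry, rcond=None)
    return beta

# Illustrative simulation: two linear regressors plus a smooth g(z) = sin(2*pi*z).
rng = np.random.default_rng(0)
n = 500
z = rng.uniform(0, 1, n)
x = rng.normal(size=(n, 2)) + z[:, None]   # regressors correlated with z
beta_true = np.array([1.5, -2.0])
y = x @ beta_true + np.sin(2 * np.pi * z) + 0.1 * rng.normal(size=n)
beta_hat = partially_linear_fit(y, x, z)   # close to (1.5, -2.0)
```

The point of the double-residual construction is that beta is estimated at the parametric rate even though g is never specified; only conditional expectations given z are smoothed away.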
Specially selected from The New Palgrave Dictionary of Economics, 2nd edition, each article within this compendium covers a fundamental theme within the discipline and is written by a leading practitioner in the field, making the volume a handy reference tool.
Comprehensive model adequacy checking procedures are discussed for general parametric and semiparametric model specifications, with illustrations in a variety of examples involving assumptions on dependence structures, density shapes, functional forms, and other model features. We use the efficient score processes developed by Bickel, Ritov and Stoker (2006) as building blocks, from which many omnibus tests can be constructed. This set of omnibus tests includes Class I tests, whose power decreases along high frequencies, and Class II tests, whose power is approximately equal over a limited range of frequencies. We also give a unified view of a group of asymptotically distribution-free tests from the score perspective. These tests are essentially derived from a family of inefficient scores, which endows the limiting Gaussian processes with a tractable variance-covariance structure. Additionally, we propose data-driven tests in the score and spectral domains, invoking either model selection rules or thresholding methods to choose the scores or spectra on which to focus. Finally, we consider aggregating different types of tests, primarily combining one Class I test and one Class II test, in the hope of achieving a balance between the two classes. Numerical experiments confirm that both Class I and Class II tests have their own strengths and weaknesses, and that the aggregated procedures deliver balanced and stable performance, although signal strength (of departures) is a fundamental limiting factor for all such procedures. In summary, a statistical model is warranted only when it passes various diagnostic checks with different but complementary strengths.
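To give a concrete flavor of a score-based diagnostic check in the spirit of this abstract -- this is not the efficient-score machinery described above, but the classic Jarque-Bera Lagrange-multiplier (score) test of normality, with simulated data chosen purely for illustration -- the skewness and excess-kurtosis scores of a fitted normal model combine into an asymptotically chi-square omnibus statistic:

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera test of normality: a Lagrange-multiplier (score) test
    built from the skewness and excess-kurtosis scores of the fitted normal.

    Under normality the statistic is asymptotically chi-square with 2
    degrees of freedom, whose survival function is exp(-x / 2).
    """
    n = len(x)
    z = (x - x.mean()) / x.std()
    skew = np.mean(z**3)
    ex_kurt = np.mean(z**4) - 3.0
    jb = n * (skew**2 / 6.0 + ex_kurt**2 / 24.0)
    return jb, np.exp(-jb / 2.0)

rng = np.random.default_rng(3)
_, p_normal = jarque_bera(rng.normal(size=5000))       # null holds
_, p_expon = jarque_bera(rng.exponential(size=5000))   # skewed alternative
```

Like the tests discussed above, this check aggregates several directional scores into one omnibus statistic; its power, too, is ultimately limited by the signal strength of the departure.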
This volume contains Raymond J. Carroll's research, along with commentary on its impact by leading statisticians. Each of the seven main parts focuses on a key research area: Measurement Error; Transformation and Weighting; Epidemiology; Nonparametric and Semiparametric Regression for Independent Data; Nonparametric and Semiparametric Regression for Dependent Data; Robustness; and Other Work. The seven subject areas reviewed in this book were chosen by Ray himself, as were the articles representing each area. The commentaries not only review Ray's work but are also filled with history and anecdotes. Raymond J. Carroll's impact on statistics and numerous other fields of science is far-reaching. His vast catalog of work spans fundamental contributions to statistical theory, innovative methodological development, and new insights in disciplinary science. From the outset of his career, rather than taking the "safe" route of pursuing incremental advances, Ray has focused on tackling the most important challenges. In doing so, it is fair to say that he has defined a host of areas within statistics, including weighting and transformation in regression, measurement error modeling, quantitative methods for nutritional epidemiology, and nonparametric and semiparametric regression.
Until now, students and researchers in nonparametric and semiparametric statistics and econometrics have had to turn to the latest journal articles to keep pace with these emerging methods of economic analysis. Nonparametric Econometrics fills a major gap by gathering the most up-to-date theory and techniques and presenting them in a remarkably straightforward and accessible format. The empirical tests, data, and exercises included in this textbook help make it the ideal introduction for graduate students and an indispensable resource for researchers.

Nonparametric and semiparametric methods have attracted a great deal of attention from statisticians in recent decades. While the majority of existing books on the subject operate from the presumption that the underlying data are strictly continuous, social scientists more often than not deal with categorical data--nominal and ordinal--in applied settings. The conventional nonparametric approach to handling discrete variables is widely acknowledged to be unsatisfactory. This book is tailored to the needs of applied econometricians and social scientists. Qi Li and Jeffrey Racine emphasize nonparametric techniques suited to the rich array of data types--continuous, nominal, and ordinal--within one coherent framework. They also emphasize the properties of nonparametric estimators in the presence of potentially irrelevant variables. Nonparametric Econometrics covers all the material necessary to understand and apply nonparametric methods to real-world problems.
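To illustrate the mixed-data kernel idea the book develops -- a hedged sketch, not Li and Racine's implementation; the bandwidths h and lam and the simulated data are illustrative assumptions -- a Nadaraya-Watson estimator with a product kernel can handle a continuous and a nominal regressor in one framework, pairing a Gaussian kernel with an Aitchison-Aitken-style category weight:

```python
import numpy as np

def nw_mixed(y, xc, xd, xc0, xd0, h=0.2, lam=0.1):
    """Nadaraya-Watson estimate of E[y | xc = xc0, xd = xd0] with mixed regressors.

    Product kernel: Gaussian in the continuous regressor xc, and an
    Aitchison-Aitken-style weight in the nominal regressor xd that gives
    mismatching categories weight lam in [0, 1] (lam = 0 splits the sample
    by category; lam = 1 smooths the category out entirely, as one would
    want for an irrelevant variable).
    """
    wc = np.exp(-0.5 * ((xc - xc0) / h) ** 2)
    wd = np.where(xd == xd0, 1.0, lam)
    w = wc * wd
    return (w * y).sum() / w.sum()

# Illustrative simulation: one continuous and one binary nominal regressor.
rng = np.random.default_rng(1)
n = 2000
xc = rng.uniform(-1, 1, n)
xd = rng.integers(0, 2, n)                    # nominal regressor: 0 or 1
y = xc**2 + xd + 0.1 * rng.normal(size=n)     # true E[y | xc=0, xd=1] = 1
m1 = nw_mixed(y, xc, xd, xc0=0.0, xd0=1)
m0 = nw_mixed(y, xc, xd, xc0=0.0, xd0=0)
```

The category bandwidth lam plays the same role for discrete variables that h plays for continuous ones, which is what lets data-driven bandwidth selection smooth out irrelevant regressors of either type.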
Rebecca M. Warner's Applied Statistics: From Bivariate Through Multivariate Techniques, Second Edition provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked to think about the meaning of equations. Each chapter presents a complete empirical research example to illustrate the application of a specific method. Although SPSS examples are used throughout the book, the conceptual material will be helpful for users of different programs. Each chapter has a glossary and comprehension questions.
The application and interpretation of statistics are central to ecological study and practice. Ecologists now ask more sophisticated questions than in the past. These new questions, together with the continued growth of computing power and the availability of new software, have produced a new generation of statistical techniques, which in turn have driven major recent developments in both our understanding and practice of ecological statistics. This book synthesizes a number of these changes, addressing key approaches and issues that tend to be overlooked in other books, such as missing/censored data, correlation structure of data, heterogeneous data, and complex causal relationships. These issues characterize a large proportion of ecological data, but most ecologists' training in traditional statistics simply does not prepare them to handle the associated challenges. Uniquely, Ecological Statistics highlights the underlying links among the many statistical approaches that attempt to tackle these issues. In particular, it introduces readers to approaches to inference, likelihoods, generalized linear (mixed) models, spatially or phylogenetically structured data, and data synthesis, with a strong emphasis on conceptual understanding and subsequent application to data analysis. Written by a team of practicing ecologists, the book keeps mathematical explanations to the minimum necessary. This user-friendly textbook will suit graduate students, researchers, and practitioners in ecology, evolution, environmental studies, and computational biology who are interested in updating their statistical tool kits. A companion web site provides example data sets and commented code in the R language.
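The generalized linear models mentioned above are typically fit by iteratively reweighted least squares (Fisher scoring). The book's companion code is in R, so the Python below is purely an illustrative sketch with simulated data -- a Poisson log-linear regression, the workhorse GLM for ecological count data:

```python
import numpy as np

def poisson_glm_irls(X, y, n_iter=25):
    """Poisson regression with a log link, fit by iteratively reweighted
    least squares (the standard Fisher-scoring algorithm for GLMs)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)              # mean under the log link
        W = mu                        # Poisson variance function: Var(y) = mu
        z = eta + (y - mu) / mu       # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)   # weighted least squares step
    return beta

# Illustrative simulation: counts driven by an intercept and one covariate.
rng = np.random.default_rng(2)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))
beta_hat = poisson_glm_irls(X, y)     # close to (0.5, 0.8)
```

Mixed models add random effects to the linear predictor eta, but the same link-function and weighted-least-squares logic sits at their core.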