Improving the Dependability of Measurements, Calculations, Equipment, and Software
Author: I. R. Walker
Publisher: Cambridge University Press
Covering many techniques widely used in research, this book will help researchers in the physical sciences and engineering solve troublesome - and potentially very time-consuming - problems in their work. The book deals with technical difficulties that often arise unexpectedly during the use of various common experimental methods, as well as with human error. It provides preventive measures and solutions for such problems, thereby saving valuable time for researchers. Some of the topics covered are: sudden leaks in vacuum systems, electromagnetic interference in electronic instruments, vibrations in sensitive equipment, and bugs in computer software. The book also discusses mistakes in mathematical calculations, and pitfalls in designing and carrying out experiments. Each chapter contains a summary of its key points, to give a quick overview of important potential problems and their solutions in a given area.
Author: Jerome Kirk, Marc L. Miller
Kirk and Miller define what is -- and what is not -- qualitative research. They suggest that the use of numbers in the process of recording and analyzing observations is less important than that the research should involve sustained interaction with the people being studied, in their own language and on their own turf. Following a chapter on objectivity, the authors discuss the role of reliability and validity and the problems that arise when these issues are neglected. They present a paradigm for the qualitative research process that makes it possible to pursue validity without neglecting reliability.
How can social scientists assess the reliability of the measures derived from tests and questionnaires? Through an illustrative review of the principles of classical reliability theory, Ross E. Traub explores some general strategies for improving measurement procedures. Beginning with a presentation of random variables and the expected value of a random variable, the book covers such topics as: the definition of reliability as a coefficient and possible uses of a coefficient; the notion of parallel tests so as to make possible the estimation of a reliability coefficient for a set of measurements; what to do when parallel tests are not available; what factors affect the reliability coefficient; and how to estimate the
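The parallel-tests idea mentioned above can be illustrated with a toy calculation: in classical reliability theory, the correlation between scores on two parallel forms of a test estimates the reliability coefficient. The scores and helper function below are hypothetical, not taken from Traub's book.

```python
import statistics

# Hypothetical scores of the same eight examinees on two parallel test forms.
form_a = [12, 15, 11, 18, 14, 16, 10, 17]
form_b = [13, 14, 12, 17, 15, 15, 11, 18]

def pearson(x, y):
    """Pearson correlation; for parallel forms this estimates reliability."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

reliability = pearson(form_a, form_b)
print(round(reliability, 3))  # close to 1: scores on the two forms agree well
```

A coefficient near 1 indicates that most observed-score variance is attributable to true-score variance rather than measurement error, which is exactly the quantity the blurb's "reliability as a coefficient" refers to.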
A review of the empirical evidence shows that unreliability of research findings relating brain images and cognitive processes is widespread in cognitive neuroscience. Cognitive neuroscientists increasingly claim that brain images generated by new brain imaging technologies reflect, correlate, or represent cognitive processes. In this book, William Uttal warns against these claims, arguing that, despite its utility in anatomic and physiological applications, brain imaging research has not provided consistent evidence for correlation with cognition. Uttal bases his argument on an extensive review of the empirical literature, pointing to variability in data not only among subjects within individual experiments but also in the new meta-analytical approach that pools data from different experiments. This inconsistency of results, he argues, has profound implications for the field, suggesting that cognitive neuroscientists have not yet proven their interpretations of the relation between brain activity captured by macroscopic imaging techniques and cognitive processes; what may have appeared to be correlations may have only been illusions of association. He supports the view that the true correlates are located at a much more microscopic level of analysis: the networks of neurons that make up the brain. Uttal carries out comparisons of the empirical data at several levels of data pooling, including the meta-analytical. He argues that although the idea seems straightforward, the task of pooling data from different experiments is extremely complex, leading to uncertain results, and that little is gained by it. Uttal's investigation suggests a need for cognitive neuroscience to reevaluate the entire enterprise of brain imaging-cognition correlational studies.
Author: National Research Council,Division of Behavioral and Social Sciences and Education,Center for Education,Committee on Scientific Principles for Education Research
Publisher: National Academies Press
Researchers, historians, and philosophers of science have debated the nature of scientific research in education for more than 100 years. Recent enthusiasm for "evidence-based" policy and practice in education, now codified in the federal law that authorizes the bulk of elementary and secondary education programs, has brought a new sense of urgency to understanding the ways in which the basic tenets of science manifest in the study of teaching, learning, and schooling. Scientific Research in Education describes the similarities and differences between scientific inquiry in education and scientific inquiry in other fields and disciplines and provides a number of examples to illustrate these ideas. Its main argument is that all scientific endeavors share a common set of principles, and that each field, including education research, develops a specialization that accounts for the particulars of what is being studied. The book also provides suggestions for how the federal government can best support high-quality scientific research in education.
Methods and Critical Appraisal for Evidence-Based Practice
Author: Geri LoBiondo-Wood, Judith Haber
Publisher: Elsevier Health Sciences
Now in full color, this easy-to-understand textbook offers a comprehensive introduction to nursing research concepts and methods. Evidence-based practice is emphasized throughout, with clear guidelines for evaluating research and applying scientific evidence to practice.
This work provides a thought-provoking account of how medical treatments can be tested with unbiased or 'fair' trials and explains how patients can work with doctors to achieve this vital goal. It spans the gamut of therapy from mastectomy to thalidomide and explores a vast range of case studies.
Since the publication of the first edition of Content Analysis: An Introduction to Its Methodology, the textual fabric in which contemporary society functions has undergone a radical transformation: specifically, the ongoing information revolution. Today, content analysis has become an efficient alternative to public opinion research—a method of tracking markets, political leanings, and emerging ideas, a way to settle legal disputes, and an approach to explore individual human minds.
Action research, explored in this book, is a seven-step process for improving teaching and learning in classrooms at all levels. Through practical examples, research tools, and easy-to-follow "implementation strategies," Richard Sagor guides readers through the process from start to finish. Learn how to uncover and use the data that already exist in your classrooms and schools to answer significant questions about your individual or collective concerns and interests. Sagor covers each step in the action research process in detail: selecting a focus, clarifying theories, identifying research questions, collecting data, analyzing data, reporting results, and taking informed action. Drawing from the experience of individual teachers, faculties, and school districts, Sagor describes how action research can enhance teachers' professional standing and efficacy while helping them succeed in settings characterized by increasingly diverse student populations and an emphasis on standards-based reform. The book also demonstrates how administrators and policymakers can use action research to bolster efforts related to accreditation, teacher supervision, and job-embedded staff development. Part how-to guide, part inspirational treatise, Guiding School Improvement with Action Research provides advice, information, and encouragement to anyone interested in reinventing schools as learning communities and restructuring teaching as the true profession it was meant to be.
In the field of social work, qualitative research is gaining prominence, as are mixed methods and various issues regarding race, ethnicity, and gender. These changes in the field are reflected and updated in The Handbook of Social Work Research Methods, Second Edition. This text covers meta-analysis and designs to evaluate treatment, and provides support to help students harness the power of the Internet. This handbook brings together leading scholars in research methods in social work.
Numerical software is used to test scientific theories, design airplanes and bridges, operate manufacturing lines, control power plants and refineries, analyze financial derivatives, identify genomes, and provide the understanding necessary to derive and analyze cancer treatments. Because of the high stakes involved, it is essential that results computed using software be accurate, reliable, and robust. Unfortunately, developing accurate and reliable scientific software is notoriously difficult. This book investigates some of the difficulties related to scientific computing and provides insight into how to overcome them and obtain dependable results. The tools to assess existing scientific applications are described, and a variety of techniques that can improve the accuracy and reliability of newly developed applications is discussed. Accuracy and Reliability in Scientific Computing can be considered a handbook for improving the quality of scientific computing. It will help computer scientists address the problems that affect software in general as well as the particular challenges of numerical computation: approximations occurring at all levels, continuous functions replaced by discretized versions, infinite processes replaced by finite ones, and real numbers replaced by finite precision numbers. Divided into three parts, it starts by illustrating some of the difficulties in producing robust and reliable scientific software. Well-known cases of failure are reviewed and the what and why of numerical computations are considered. The second section describes diagnostic tools that can be used to assess the accuracy and reliability of existing scientific applications. In the last section, the authors describe a variety of techniques that can be employed to improve the accuracy and reliability of newly developed scientific applications. The authors of the individual chapters are international experts, many of them members of the IFIP Working Group on Numerical Software.
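Two of the numerical hazards named above, real numbers replaced by finite-precision numbers and the cancellation errors this causes, can be seen in a few lines of Python. This is an illustrative sketch, not code from the book:

```python
import math

# Binary floating point cannot represent 0.1 or 0.2 exactly,
# so the familiar decimal identity fails.
print(0.1 + 0.2 == 0.3)  # False

# Catastrophic cancellation: subtracting nearly equal numbers destroys
# precision. Analytically, (1 - cos x) / x^2 -> 0.5 as x -> 0.
x = 1e-8
naive = (1 - math.cos(x)) / x**2          # cancellation wipes out the answer
stable = 2 * math.sin(x / 2)**2 / x**2    # algebraically equivalent, but stable
print(naive, stable)                      # only the stable form is near 0.5
```

Rewriting a formula into an algebraically equivalent but numerically stable form, as in the second computation, is one of the accuracy-improving techniques this kind of handbook addresses.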
Essay from the year 2010 in the subject Politics - Methods, Research, grade: 1,3 (77%), University of Warwick (Politics and International Studies), course: Qualitative Research Methods, language: English, abstract: Validity and reliability as quality indicators have an uneasy standing in qualitative research and are subject to numerous debates. Researchers from different paradigmatic backgrounds have expressed a variety of views, the extremes ranging from a complete denial of the possibility of valid and reliable qualitative research on one hand to the rejection of validity and reliability as meaningful quality indicators on the other. The following essay acknowledges the diverging assumptions underlying the different paradigms associated with quantitative and qualitative research. However, it denies that validity and reliability are inherently connected to predetermined ontological or epistemological assumptions and argues for their general use as quality indicators. To clarify this claim, a selection of different paradigms and the development of alternative quality indicators within them are highlighted. Since the usefulness of this multitude of indicators is questionable, reconciliation is attempted by consolidating them. The concepts of “core validity” and “core reliability”, which can be specified according to the researcher’s paradigm, are introduced for this task. These concepts underline the relevance and applicability of validity and reliability as quality indicators in qualitative research. Furthermore, qualitative research has developed strategies and methods that enable the researcher to address negative influences on validity and reliability and to achieve high degrees of both.
Author: William Trochim, James P. Donnelly, Kanika Arora
Publisher: Nelson Education
From an expert team in the research methods field, RESEARCH METHODS: THE ESSENTIAL KNOWLEDGE BASE, 2nd Edition, is written specifically for undergraduates. The book streamlines and clarifies explanations of fundamental, yet difficult, concepts in a familiar, engaging style. Students learn about the relationship between theory and practice, which helps them become better researchers and better consumers of research. Important Notice: Media content referenced within the product description or the product text may not be available in the ebook version.
This work illustrates research conducted over a ten-year timespan and addresses a fundamental issue in reliability theory. Reliability still appears to be an empirically disorganized field, and the book suggests employing a deductive base in order to develop reliability as a science. The study is in line with the fundamental work of Gnedenko. Boris Vladimirovich Gnedenko (1912 – 1995) was a Soviet mathematician who made significant contributions in various scientific areas. His name is especially associated with studies of dependability, for which he is often recognized as the 'father' of reliability theory. In the last few decades, this area has expanded in new directions such as safety, security, risk analysis and other fields, yet the book ‘Mathematical Methods in Reliability Theory’, written by Gnedenko with Alexander Soloviev and Yuri Bélyaev, still towers as a pillar of the reliability sector’s configuration and identity. The present book proceeds in the direction opened by the cultural project of the Russian authors; in particular it identifies different trends in the hazard rate functions by means of deductive logic and demonstrations. Further, it arrives at multiple results by means of the entropy function, an original mathematical tool in the reliability domain. As such, it will greatly benefit all specialists in the field who are interested in unconventional solutions.
"Comprising more than 500 entries, the Encyclopedia of Research Design explains how to make decisions about research design, undertake research projects in an ethical manner, interpret and draw valid inferences from data, and evaluate experiment design strategies and results. Two additional features carry this encyclopedia far above other works in the field: bibliographic entries devoted to significant articles in the history of research design and reviews of contemporary tools, such as software and statistical procedures, used to analyze results. It covers the spectrum of research design strategies, from material presented in introductory classes to topics necessary in graduate research; it addresses cross- and multidisciplinary research needs, with many examples drawn from the social and behavioral sciences, neurosciences, and biomedical and life sciences; it provides summaries of advantages and disadvantages of often-used strategies; and it uses hundreds of sample tables, figures, and equations based on real-life cases."--Publisher's description.
Reliability of Large and Complex Systems, previously titled Reliability of Large Systems, is an innovative guide to the current state and reliability of large and complex systems. In addition to revised and updated content on the complexity and safety of large and complex mechanisms, this new edition looks at the reliability of nanosystems, a key research topic in nanotechnology science. The author discusses the importance of safety investigation of critical infrastructures that have aged or have been exposed to varying operational conditions. This reference provides an asymptotic approach to reliability; its methodology, whilst largely mathematical, is designed to help the reader understand and construct general models of large and complex systems in a wide range of engineering fields. The book offers a complete and innovative guide to the reliability of large and complex systems; provides the reader with a strong foundational knowledge of safety investigation into critical infrastructures, a main research area in safety science; and explains how to construct large, reliable, and safe systems under variable operating conditions.
Analysis of Ordinal Categorical Data. Alan Agresti. Statistical science now has its first coordinated manual of methods for analyzing ordered categorical data. This book discusses specialized models that, unlike standard methods underlying nominal categorical data, efficiently use the information on ordering. It begins with an introduction to basic descriptive and inferential methods for categorical data, and then gives thorough coverage of the most current developments, such as loglinear and logit models for ordinal data. Special emphasis is placed on interpretation and application of methods, and the book contains an integrated comparison of the available strategies for analyzing ordinal data. This is a case-study work with illuminating examples taken from across the wide spectrum of ordinal categorical applications. 1984 (0 471-89055-3) 287 pp.
Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. David A. Belsley, Edwin Kuh, and Roy E. Welsch. This book provides the practicing statistician and econometrician with new tools for assessing the quality and reliability of regression estimates. Diagnostic techniques are developed that aid in the systematic location of data points that are either unusual or inordinately influential; measure the presence and intensity of collinear relations among the regression data and help to identify the variables involved in each; and pinpoint the estimated coefficients that are potentially most adversely affected. The primary emphasis of these contributions is on diagnostics, but suggestions for remedial action are given and illustrated. 1980 (0 471-05856-4) 292 pp.
Applied Regression Analysis, Second Edition. Norman Draper and Harry Smith. Featuring a significant expansion of material reflecting recent advances, here is a complete and up-to-date introduction to the fundamentals of regression analysis, focusing on understanding the latest concepts and applications of these methods. The authors thoroughly explore the fitting and checking of both linear and nonlinear regression models, using small or large data sets and pocket or high-speed computing equipment. Features added to this Second Edition include the practical implications of linear regression; the Durbin-Watson test for serial correlation; families of transformations; inverse, ridge, latent root, and robust regression; and nonlinear growth models. Includes many new exercises and worked examples. 1981 (0 471-02995-5) 709 pp.
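As a small illustration of the kind of diagnostic these books describe, the Durbin-Watson statistic for serial correlation can be computed directly from a regression's residual series. The residuals below are hypothetical, chosen only to show the calculation:

```python
def durbin_watson(residuals):
    """Durbin-Watson statistic: near 2 suggests no first-order serial
    correlation; near 0 or 4 suggests positive or negative autocorrelation."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Hypothetical residuals from a fitted regression, in time order.
resid = [0.5, -0.3, 0.2, -0.4, 0.1, 0.3, -0.2, -0.1]
dw = durbin_watson(resid)
print(round(dw, 3))  # a value fairly close to 2: no strong serial correlation
```

The statistic always lies between 0 and 4; values well below 2 are the classic warning sign that the errors of a time-ordered regression are positively autocorrelated, one of the conditions the Draper and Smith text treats.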
Planning and conducting successful surveys requires a great deal of time, energy, and know-how. While the time and energy components are relatively easy to find, what is often difficult is acquiring the know-how to actually plan, conduct, and analyze a survey. An invaluable resource, The Survey Kit offers all the information necessary for conducting a state-of-the-art survey - from the initial planning stages all the way through analyzing and reporting the data.