**Author**: Rudolf Kruse

**Publisher:** Springer Science & Business Media

**ISBN:**

**Category:** Mathematics

**Page:** 279

This monograph is an attempt to unify existing works in the field of random sets, random variables, and linguistic random variables with respect to statistical analysis. It is intended to be a tutorial research compendium. The material of the work is mainly based on the postdoctoral thesis (Habilitationsschrift) of the first author and on several papers recently published by both authors. The methods form the basis of a user-friendly software tool which supports statistical inference in the presence of vague data. Parts of the manuscript have been used in courses for graduate-level students of mathematics and computer science held by the first author at the Technical University of Braunschweig. The textbook is designed for readers with an advanced knowledge of mathematics. The idea of writing this book came from Professor Dr. H. Skala. Several of our students have significantly contributed to its preparation. We would like to express our gratitude to Reinhard Elsner for his support in typesetting the book, Jörg Gebhardt and Jörg Knop for preparing the drawings, Michael Eike and Jürgen Freckmann for implementing the programming system, and Günter Lehmann and Winfried Boer for proofreading the manuscript. This work was partially supported by the Fraunhofer-Gesellschaft. We are indebted to D. Reidel Publishing Company for making the publication of this book possible, and we would especially like to acknowledge the support which we received from our families on this project.

Over the last forty years there has been growing interest in extending probability theory and statistics to allow for more flexible modelling of imprecision, uncertainty, vagueness and ignorance. The fact that in many real-life situations data uncertainty is present not only in the form of randomness (stochastic uncertainty) but also in the form of imprecision/fuzziness is but one point underlining the need for a widening of statistical tools. Most such extensions originate in a "softening" of classical methods, allowing one, in particular, to work with imprecise or vague data, and to consider imprecise or generalized probabilities and fuzzy events. About ten years ago the idea of establishing a recurrent forum for discussing new trends in the aforementioned context was born, and it resulted in the first International Conference on Soft Methods in Probability and Statistics (SMPS), held in Warsaw in 2002. In the following years the conference took place in Oviedo (2004), Bristol (2006) and Toulouse (2008). In the current edition the conference returns to Oviedo. This edited volume is a collection of papers presented at the SMPS 2010 conference held in Mieres and Oviedo. It gives a comprehensive overview of current research into the fusion of soft methods with probability and statistics.

The contributions in this book demonstrate the complementary rather than competitive relationship between probability and fuzzy set theory, and show how suitable combinations of both theories can solve real-life problems.

Classical probability theory and mathematical statistics sometimes appear too rigid for real-life problems, especially when dealing with vague data or imprecise requirements. These problems have motivated many researchers to "soften" the classical theory. Some "softening" approaches utilize concepts and techniques developed in theories such as fuzzy set theory, rough sets, possibility theory, the theory of belief functions, imprecise probabilities, etc. Since interesting mathematical models and methods have been proposed in the frameworks of various theories, this text brings together experts representing different approaches used in soft probability, statistics and data analysis.

The Probability and Statistics theme is a component of the Encyclopedia of Mathematical Sciences in the global Encyclopedia of Life Support Systems (EOLSS), which is an integrated compendium of twenty-one encyclopedias. The theme, with contributions from distinguished experts in the field, discusses Probability and Statistics. Probability is a standard mathematical concept for describing stochastic uncertainty. Probability and statistics can be considered as two sides of a coin: together they provide methods for modeling uncertainty and measuring real phenomena. Today many important political, health, and economic decisions are based on statistics. The theme is structured in five main topics: Probability and Statistics; Probability Theory; Stochastic Processes and Random Fields; Probabilistic Models and Methods; and Foundations of Statistics, which are then expanded into multiple subtopics, each as a chapter. These three volumes are aimed at five major target audiences: university and college students, educators, professional practitioners, research personnel, and policy analysts, managers, decision makers and NGOs.

Uncertainty has been of concern to engineers, managers and scientists for many centuries. In the management sciences, definitions of uncertainty in a rather narrow sense have existed since the beginning of this century. In engineering and the sciences, however, uncertainty has for a long time been considered synonymous with random, stochastic, statistic, or probabilistic. Only since the early sixties have views on uncertainty become more heterogeneous, and tools other than statistics for modeling uncertainty have been proposed by several scientists. The problem of modeling uncertainty adequately has become more important the more complex systems have become, the faster the scientific and engineering world develops, and the more important, but also more difficult, forecasting of future states of systems has become. The first question one should probably ask is whether uncertainty is a phenomenon, a feature of real-world systems, a state of mind, or a label for a situation in which a human being wants to make statements about phenomena, i.e., reality, models, and theories, respectively. One can also ask whether uncertainty is an objective fact or just a subjective impression which is closely related to individual persons. Whether uncertainty is an objective feature of physical real systems seems to be a philosophical question. This shall not be answered in this volume.

Statistical data are not always precise numbers, vectors, or categories. Real data are frequently what is called fuzzy. Examples where this fuzziness is obvious include quality-of-life data and environmental, biological, medical, sociological and economic data. Moreover, the results of measurements can often best be described using fuzzy numbers and fuzzy vectors, respectively. Statistical analysis methods therefore have to be adapted for the analysis of fuzzy data. In this book, the foundations of the description of fuzzy data are explained, including methods for obtaining the characterizing function of fuzzy measurement results. Furthermore, statistical methods are then generalized to the analysis of fuzzy data and fuzzy a priori information. Key features: Provides basic methods for the mathematical description of fuzzy data, as well as statistical methods that can be used to analyze fuzzy data. Describes methods of increasing importance with applications in areas such as environmental statistics and social science. Complements the theory with exercises and solutions and is illustrated throughout with diagrams and examples. Explores areas such as the quantitative description of data uncertainty and the mathematical description of fuzzy data. This work is aimed at statisticians working with fuzzy logic, engineering statisticians, finance researchers, and environmental statisticians. It is written for readers who are familiar with elementary stochastic models and basic statistical methods.
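The characterizing function mentioned above can be illustrated with a minimal sketch. This is not code from the book: `triangular` and `alpha_cut` are hypothetical helper names, and a triangular membership shape is only one common choice for describing a fuzzy measurement result such as "about 5".

```python
def triangular(m, l, r):
    """Characterizing function of a triangular fuzzy number with
    modal value m, left spread l and right spread r."""
    def xi(x):
        if m - l <= x <= m:
            return (x - (m - l)) / l   # rising left branch
        if m < x <= m + r:
            return ((m + r) - x) / r   # falling right branch
        return 0.0                     # outside the support
    return xi

def alpha_cut(xi, alpha, grid):
    """Approximate the alpha-cut [min, max] of xi on a discrete grid."""
    pts = [x for x in grid if xi(x) >= alpha]
    return (min(pts), max(pts))

# A fuzzy measurement "about 5" with left spread 2 and right spread 1:
xi = triangular(5.0, 2.0, 1.0)
print(xi(5.0))  # membership 1.0 at the modal value
print(xi(4.0))  # membership 0.5 halfway up the left branch
```

The alpha-cuts of such a characterizing function are the nested intervals on which generalized statistical procedures for fuzzy data typically operate.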

In this paper, we develop a control chart methodology for monitoring the mean time between two events using the belief estimator under the neutrosophic gamma distribution. The proposed control chart coefficients and the neutrosophic average run length (NARL) have been determined for different process settings.
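The classical backbone of such a chart can be sketched briefly: for independently plotted points, the average run length is the reciprocal of the probability that a single point signals. The sketch below assumes exponentially distributed times between events and user-supplied control limits; the belief estimator and the neutrosophic gamma machinery of the paper are not reproduced, and the function names are hypothetical.

```python
import math

def signal_probability(mu, lcl, ucl):
    """P(a plotted time between events falls outside [lcl, ucl])
    when the times are exponential with mean mu."""
    below = 1.0 - math.exp(-lcl / mu)  # P(T < LCL)
    above = math.exp(-ucl / mu)        # P(T > UCL)
    return below + above

def average_run_length(mu, lcl, ucl):
    """Classical ARL = 1 / P(signal), valid for independent points."""
    return 1.0 / signal_probability(mu, lcl, ucl)

# In control (mu = 1): signals are rare, so the ARL is large.
print(average_run_length(1.0, lcl=0.001, ucl=7.0))
# Shifted process (mu = 2): signals come sooner, so the ARL drops.
print(average_run_length(2.0, lcl=0.001, ucl=7.0))
```

A chart is tuned by choosing the limits so the in-control ARL is large (few false alarms) while the out-of-control ARL is small; the neutrosophic version of the paper replaces these crisp quantities with interval-valued ones.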

The analysis of experimental data resulting from some underlying random process is a fundamental part of most scientific research. Probability theory and statistics have been developed as flexible tools for this analysis, and have been applied successfully in various fields such as biology, economics, engineering, medicine and psychology. However, traditional techniques in probability and statistics were devised to model only a single source of uncertainty, namely randomness. In many real-life problems randomness arises in conjunction with other sources, making the development of additional "softening" approaches essential. This book is a collection of papers presented at the 2nd International Conference on Soft Methods in Probability and Statistics (SMPS'2004) held in Oviedo, providing a comprehensive overview of the innovative new research taking place within this emerging field.

Under the pressure of harsh environmental conditions and natural hazards, large parts of the world population are struggling to maintain their livelihoods. Population growth, increasing land utilization and shrinking natural resources have led to an increasing demand for improved efficiency of existing technologies and the development of new ones.

This conference will explore the use of computational modelling to understand and emulate inductive processes in science. The problems involved in building and using such computer models reflect methodological and foundational concerns common to a variety of academic disciplines, especially statistics, artificial intelligence (AI) and the philosophy of science. This conference aims to bring together researchers from these and related fields to present new computational technologies for supporting or analysing scientific inference, and to engage in collegial debate over the merits and difficulties underlying the various approaches to automating inductive and statistical inference. The proceedings also include abstracts by the invited speakers (J R Quinlan, J J Rissanen, M Minsky, R J Solomonoff & H Kyburg, Jr.).

Like the preceding volumes, which met with a lively response, the present volume collects contributions stressing methodology or successful industrial applications. The papers are classified under four main headings: sampling inspection, process quality control, data analysis and process capability studies, and finally experimental design.

A Course in Mathematical and Statistical Ecology

This book constitutes the refereed proceedings of the Second International Symposium on Intelligent Data Analysis, IDA-97, held in London, UK, in August 1997. The volume presents 50 revised full papers selected from a total of 107 submissions. Also included is a keynote, Intelligent Data Analysis: Issues and Opportunities, by David J. Hand. The papers are organized in sections on exploratory data analysis, preprocessing and tools; classification and feature selection; medical applications; soft computing; knowledge discovery and data mining; estimation and clustering; data quality; qualitative models.

Supporting responses to ongoing global changes requires solutions to new scientific problems, which in turn require new concepts and tools. A key issue concerns the vast variety of irreducible uncertainties, including extreme events with high multidimensional consequences, e.g., climate change. The dilemma is one of enormous costs versus massive uncertainties of extreme impacts. Traditional scientific approaches rely on real observations and experiments, yet no sufficient observations exist for new problems, and "pure" experiments and learning by doing may be expensive, dangerous, or impossible. In addition, the available historical observations are often contaminated by past actions and policies. Thus, tools are presented for the explicit treatment of uncertainties using "synthetic" information composed of available "hard" data from historical observations, the results of possible experiments, and scientific facts, as well as "soft" data from experts' opinions and scenarios.