Introduction to Bayesian Statistics (PDF)

 






It is a well-written book on elementary Bayesian inference, and the material is easily accessible. It is both concise and timely, and provides a good collection of overviews and reviews of important tools used in Bayesian statistical methods. There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts present only frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this third edition, four newly added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The authors continue to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for binomial proportions, Poisson and normal means, and simple linear regression. In addition, more advanced topics in the field are presented in four new chapters; their inclusion will help readers advance from a minimal understanding of statistics to the ability to tackle topics in more applied, advanced-level books.

Statistical models have a number of parameters that can be modified. For example, a coin can be represented as samples from a Bernoulli distribution, which models two possible outcomes. The Bernoulli distribution has a single parameter equal to the probability of one outcome, which in most cases is the probability of landing on heads.
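For instance, a minimal sketch in R (the true parameter value and the number of flips below are made up for illustration):

```r
# Simulate coin flips as Bernoulli(p) draws and estimate p by the sample proportion.
# The true value of p and the number of flips are illustrative only.
set.seed(1)
p     <- 0.6                                 # probability of heads (the single Bernoulli parameter)
flips <- rbinom(100, size = 1, prob = p)     # 1 = heads, 0 = tails
mean(flips)                                  # sample proportion of heads
```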

Devising a good model for the data is central in Bayesian inference. In most cases, models only approximate the true process, and may not take into account certain factors influencing the data.

Parameters can be represented as random variables. Bayesian inference uses Bayes' theorem to update probabilities after more evidence is obtained or known.

Indeed, parameters of prior distributions may themselves have prior distributions, leading to Bayesian hierarchical modeling [6], or may be interrelated, leading to Bayesian networks. The Bayesian design of experiments includes a concept called the "influence of prior beliefs".

This approach uses sequential analysis techniques to include the outcome of earlier experiments in the design of the next experiment.

Because of the way the universe is organized, this summing is down the column in the reduced universe. The division scales the probabilities up so the conditional probabilities sum to one.

In Chapter 6 this pattern is repeated with the Bayesian universe. The horizontal dimension is the sample space, the set of all possible values of the observable random variable. The vertical dimension is the parameter space, the set of all possible values of the unobservable parameter.

The reduced universe is the vertical slice that we observed. The conditional probabilities, given what we observed, are the unconditional probabilities found by using the multiplication rule (prior x likelihood), divided by their sum over all possible parameter values.

Again, this sum is taken down the column. The division rescales the probabilities so they sum to one. When the parameter is continuous, the rescaling is done by dividing the joint probability-probability density function at the observed value by its integral over all possible parameter values so it integrates to one.
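A minimal sketch of this prior-times-likelihood calculation for a discrete parameter (the candidate parameter values, prior weights, and data below are made up for illustration):

```r
# Discrete prior over a few candidate values of a binomial success probability.
# The candidate values, prior weights, and data are illustrative only.
theta <- c(0.2, 0.4, 0.6, 0.8)           # possible parameter values
prior <- c(0.25, 0.25, 0.25, 0.25)       # prior probabilities (sum to 1)

y <- 7; n <- 10                          # observed 7 successes in 10 trials
likelihood <- dbinom(y, size = n, prob = theta)

joint     <- prior * likelihood          # multiplication rule: prior x likelihood
posterior <- joint / sum(joint)          # divide by the sum so it adds to one

round(cbind(theta, prior, likelihood, posterior), 4)
```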

Again, the joint probability-probability density function is found by the multiplication rule and at the observed value is prior x likelihood. This is done for binomial observations and a continuous beta prior in Chapter 8.

When the observation is also a continuous random variable, the conditional probability density is found by rescaling the joint probability density at the observed value by dividing by its integral over all possible parameter values. Again, the joint probability density is found by the multiplication rule and at the observed value is prior x likelihood.

This is done for normal observations and a continuous normal prior in a later chapter. All these cases follow the same general pattern. There must be a prior belief to start from.
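For the normal case, a minimal sketch of the conjugate update (the prior mean, prior standard deviation, data, and known sampling standard deviation below are illustrative assumptions):

```r
# Conjugate update for a normal mean with known sampling standard deviation.
# Prior: mu ~ Normal(m0, s0^2).  All numbers here are illustrative.
m0 <- 10; s0 <- 3                      # prior mean and standard deviation
sigma <- 2                             # known standard deviation of an observation
y <- c(11.2, 9.8, 12.5, 10.9)          # observed sample
n <- length(y)

# Posterior precision is the sum of the prior and data precisions.
post_prec <- 1 / s0^2 + n / sigma^2
post_var  <- 1 / post_prec
post_mean <- post_var * (m0 / s0^2 + sum(y) / sigma^2)

c(post_mean = post_mean, post_sd = sqrt(post_var))
```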

Conjugate priors are found by matching their first two moments with the prior belief about location and spread. When the conjugate shape does not give a satisfactory representation of prior belief, setting up a discrete prior and interpolating is suggested. Details that I consider beyond the scope of this course are included as footnotes.
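As a sketch of that moment matching for a beta prior (the target prior mean and standard deviation below are assumed values, not ones from the book):

```r
# Choose a Beta(a, b) prior whose mean and standard deviation match prior belief.
# The target mean and standard deviation below are illustrative assumptions.
prior_mean <- 0.3
prior_sd   <- 0.1

v  <- prior_sd^2
ab <- prior_mean * (1 - prior_mean) / v - 1   # a + b from the beta variance formula
a  <- prior_mean * ab
b  <- (1 - prior_mean) * ab

c(a = a, b = b)
# Check: the matched prior reproduces the stated mean and sd.
c(mean = a / (a + b), sd = sqrt(a * b / ((a + b)^2 * (a + b + 1))))
```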

There are many figures that illustrate the main ideas, and there are many fully worked out examples. I have included chapters comparing Bayesian methods with the corresponding frequentist methods. There are exercises at the end of each chapter, some with short answers.

There are computer exercises to be done in Minitab or R using the included macros. Some of these are small-scale Monte Carlo studies that demonstrate the efficiency of the Bayesian methods evaluated according to frequentist criteria.

Advantages of the Bayesian Perspective

Anyone who has taught an Introduction to Statistics class will know that students have a hard time coming to grips with statistical inference.

The concepts of hypothesis testing and confidence intervals are subtle, and students struggle with them. Bayesian reasoning, by contrast, is more like the kind of plausible reasoning that students use in their everyday life, only structured in a formal way. Conceptually, it is a more straightforward method for making inferences. The Bayesian perspective offers a number of advantages over the conventional frequentist perspective. Frequentist methods make no use of prior information, yet in science there usually is some prior knowledge about the process being measured.

Throwing this prior information away is wasteful of information, which often translates to money. Bayesian statistics uses both sources of information: the prior information we have about the process and the information about the process contained in the data. The resulting posterior distribution allows direct probability statements about the parameter, which is much more useful to a scientist than the confidence statements allowed by frequentist statistics. This is a very compelling reason for using Bayesian statistics. Clients will interpret a frequentist confidence interval as a probability interval; why not use a perspective that allows them to make the interpretation that is useful to them?

Bayesian statistics uses a single tool, Bayes' theorem, in every situation. This contrasts with frequentist procedures, which require many different tools. Nuisance parameters are always marginalized out of the joint posterior distribution; this is not always easily done in a frequentist way. However, there were great difficulties in using Bayesian statistics in actual practice.

While it is easy to write down the formula for the posterior distribution, a closed form existed only in a few simple cases, such as for a normal sample with a normal prior.

In other cases the integration required had to be done numerically. This in itself made it more difficult for beginning students. If there were more than a few parameters, it became extremely difficult to perform the numerical integration. In the past few years, computer algorithms (for example, Markov chain Monte Carlo methods such as Gibbs sampling and the Metropolis-Hastings algorithm) have been developed for drawing random samples from the posterior distribution. We can approximate the posterior distribution to any accuracy we wish by taking a large enough random sample from it.
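A minimal sketch of this sampling idea, using a random-walk Metropolis algorithm for a binomial likelihood with a non-conjugate prior (the prior, data, and tuning constant are illustrative assumptions, not the book's example):

```r
# Random-walk Metropolis sampler for a binomial success probability theta.
# Prior, data, and proposal scale are illustrative only.
set.seed(1)
y <- 7; n <- 10                                 # observed data
log_prior <- function(theta) dnorm(theta, mean = 0.5, sd = 0.2, log = TRUE)
log_post  <- function(theta) {
  if (theta <= 0 || theta >= 1) return(-Inf)    # stay inside (0, 1)
  dbinom(y, n, theta, log = TRUE) + log_prior(theta)
}

n_iter <- 10000
draws  <- numeric(n_iter)
theta  <- 0.5                                   # starting value
for (i in 1:n_iter) {
  prop <- theta + rnorm(1, 0, 0.1)              # random-walk proposal
  if (log(runif(1)) < log_post(prop) - log_post(theta)) theta <- prop
  draws[i] <- theta
}

# Posterior summaries approximated from the Monte Carlo sample.
mean(draws); quantile(draws, c(0.025, 0.975))
```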

Being able to sample the posterior in this way removes the practical disadvantage of Bayesian statistics, for now it can be done in practice for problems with many parameters, as well as for general sampling distributions and general prior distributions. Of course these methods are beyond the level of an introductory course. Nevertheless, we should be introducing our students to the approach to statistics that gives the theoretical advantages from the very start.

That is how they will get the maximum benefit. This course consists of 36 one-hour lectures, 12 one-hour tutorial sessions, and several computer assignments. In each tutorial session, the students work through a statistical activity in a hands-on way. Some of the computer assignments involve Monte Carlo studies showing the long-run performance of statistical procedures.

Chapter 1 (one lecture) gives an introduction to the course. Chapter 2 (three lectures) covers scientific data gathering, including random sampling methods and the need for randomized experiments to make inferences on cause-effect relationships.

Chapter 3 (two lectures) is on data analysis, with methods for displaying and summarizing data. If students have already covered this material in a previous statistics course, this could be covered as a reading assignment only.

Chapter 4 (three lectures) introduces the rules of probability, including joint, marginal, and conditional probability, and shows that Bayes' theorem is the best method for dealing with uncertainty. Chapter 5 (two lectures) introduces discrete random variables. Chapter 7 (two lectures) introduces continuous random variables. Chapter 8 (three lectures) shows how inference is done on the population proportion from a binomial sample using either a uniform or a beta prior.

There is discussion on choosing a beta prior that corresponds to your prior belief and then graphing it to confirm that it fits your belief. Chapter 9 (three lectures) compares the Bayesian inferences for the proportion with the corresponding frequentist ones. The Bayesian estimator for the proportion is compared with the corresponding frequentist estimator in terms of mean squared error.

The difference between the interpretations of the Bayesian credible interval and the frequentist confidence interval is discussed. There is considerable discussion on choosing a normal prior and then graphing it to confirm it fits with your belief.

The predictive distribution of the next observation is developed. Chapter 11 (one lecture) compares the Bayesian inferences for the mean with the corresponding frequentist ones. Chapter 12 (three lectures) does Bayesian inference for the difference between two normal means, and for the difference between two binomial proportions using the normal approximation.

Chapter 13 (three lectures) covers the simple linear regression model in a Bayesian manner. Chapter 14 (three lectures) introduces robust Bayesian methods using mixture priors. This chapter shows how to protect against misspecified priors, which is one of the main concerns that many people have against using Bayesian statistics. It is at a higher level than the previous chapters and could be omitted and more lecture time given to the other chapters.

Acknowledgments

I would like to acknowledge the help I have had from many people.

First, my students over the past three years, whose enthusiasm with the early drafts encouraged me to continue writing. My colleague, James Curran, for writing the R macros, writing Appendix D on how to implement them, and giving me access to the glass data. Renate Meyer from the University of Auckland gave me useful comments on the manuscript. John Wilkinson for his comments on the R macros, which resulted in improved code. Finally, last but not least, I wish to thank my wife Sylvie for her constant love and support and for her help in producing some of the figures.

Statistics is the science that relates data to specific questions of interest. This includes devising methods to gather data relevant to the question, methods to summarize and display the data to shed light on the question, and methods that enable us to draw answers to the question that are supported by the data. Data almost always contain uncertainty. This uncertainty may arise from selection of the items to be measured, or it may arise from variability of the measurement process.

Drawing general conclusions from data is the basis for increasing knowledge about the world, and is the basis for all rational scientific inquiry. Statistical inference gives us methods and tools for doing this despite the uncertainty in the data. The methods used for analysis depend on the way the data were gathered.

It is vitally important that there is a probability model explaining how the uncertainty gets into the data.

Suppose variable X appears to have an association with variable Y. If high values of X occur with high values of variable Y and low values of X occur with low values of Y, we say the association is positive. On the other hand, the association could be negative, in which high values of variable X occur with low values of variable Y. In the corresponding figure, the unshaded area indicates that X and Y are observed variables.

The shaded area indicates that there may be additional variables that have not been observed.

There are several possible explanations. The association might be a causal one. For example, X might be the cause of Y. This is shown in Figure 1. On the other hand, there could be an unidentified third variable Z that has a causal effect on both X and Y. They are not related in a direct causal relationship.

The association between them is due to the effect of Z. Z is called a lurking variable, since it is hiding in the background and it affects the data. It is possible that a causal effect and the effect of a lurking variable are both contributing to the association. We say that the causal effect and the effect of the lurking variable are confounded. This means that both effects are included in the association. If we conclude that it is due to a causal effect, then our next goal is to determine the size of the effect.

If we conclude that the association is due to a causal effect confounded with the effect of a lurking variable, then our next goal becomes determining the sizes of both effects.

The idea that scientific theories should be tested against real-world data revolutionized thinking. This way of thinking, known as the scientific method, sparked the Renaissance. The scientific method rests on a set of premises. The last of these, that the simplest adequate explanation should be preferred, was elaborated by William of Ockham in the 13th century; it is now known as "Ockham's razor" and is firmly embedded in science.

It keeps science from developing fanciful, overly elaborate theories. Thus the scientific method directs us through an improving sequence of models, as previous ones get falsified.

The scientific method generally follows this procedure:

1. Ask a question or pose a problem in terms of the current scientific hypothesis.
2. Gather all the relevant information that is currently available. This includes the current knowledge about the parameters of the model.
3. Design an investigation or experiment that addresses the question from step 1. The predicted outcome of the experiment should be one thing if the current hypothesis is true, and something else if the hypothesis is false.
4. Gather data from the experiment.
5. Draw conclusions given the experimental results. Revise the knowledge about the parameters to take the current results into account.

The scientific method searches for cause-and-effect relationships between an experimental variable and an outcome variable. In other words, it asks how changing the experimental variable results in a change to the outcome variable. Scientific modelling develops mathematical models of these relationships. Both of them need to isolate the experiment from outside factors that could affect the experimental results.

All outside factors that can be identified as possibly affecting the results must be controlled. It is no coincidence that the earliest successes for the method were in physics and chemistry, where the few outside factors could be identified and controlled.

Thus there were no lurking variables. All other relevant variables could be identified, and then physically controlled by being held constant. That way they would not affect results of the experiment, and the effect of the experimental variable on the outcome variable could be determined. In biology, medicine, engineering, technology, and the social sciences it isn't that easy to identify the relevant factors that must be controlled. In those fields a different way to control outside factors is needed, because they can't be identified beforehand and physically controlled.

Statistical methods can extend the scientific method into situations where the relevant outside factors cannot even be identified. Since we cannot identify these outside factors, we cannot control them directly.


The lack of direct control means the outside factors will be affecting the data. There is a danger that the wrong conclusions could be drawn from the experiment due to these uncontrolled outside factors. The important statistical idea of randomization has been developed to deal with this possibility.

The unidentified outside factors can be "averaged out" by randomly assigning each unit to either treatment or control group. This contributes variability to the data. Statistical conclusions always have some uncertainty or error due to variability in the data. We can develop a probability model of the data variability based on the randomization used.

Randomization not only reduces this uncertainty due to outside factors, it also allows us to measure the amount of uncertainty that remains using the probability model. Randomization lets us control the outside factors statistically, by averaging out their effects. Underlying this is the idea of a statistical population, consisting of all possible values of the observations that could be made.

The data consists of observations taken from a sample of the population. For valid inferences about the population parameters from the sample statistics, the sample must be "representative" of the population.

Amazingly, choosing the sample randomly is the most effective way to get representative samples! There are two main approaches to statistical inference. The first is often referred to as the frequentist approach. Sometimes it is called the classical approach.

Procedures are developed by looking at how they perform over all possible random samples. The probabilities don't relate to the particular random sample that was obtained. In many ways this indirect method places the "cart before the horse." The second is the Bayesian approach, which applies the laws of probability directly to the problem. This offers many fundamental advantages over the more commonly used frequentist approach.

We will show these advantages over the course of the book.

Frequentist Approach to Statistics

Most introductory statistics books take the frequentist approach to statistics, which is based on the following ideas. The unknown parameters are fixed, not random, so probability statements cannot be made about their value. Instead, a sample is drawn from the population, and a sample statistic is calculated. The probability distribution of the statistic over all possible random samples from the population is determined; it is known as the sampling distribution of the statistic.

The parameter of the population will also be a parameter of the sampling distribution. The probability statement that can be made about the statistic based on its sampling distribution is converted to a confidence statement about the parameter. The confidence is based on the average behavior of the procedure under all possible samples.

The Bayesian approach takes its name from the Reverend Thomas Bayes, whose paper on inverse probability was found after his death by his friend Richard Price, who had it published posthumously in the Philosophical Transactions of the Royal Society in 1763. Bayes showed how inverse probability could be used to calculate the probability of antecedent events from the occurrence of the consequent event.

His methods were adopted by Laplace and other scientists in the 19th century, but had largely fallen from favor by the early 20th century. By the middle of the 20th century, interest in Bayesian methods had been renewed by De Finetti, Jeffreys, Savage, and Lindley, among others. They developed a complete method of statistical inference based on Bayes' theorem.

This book introduces the Bayesian approach to statistics. The ideas that form the basis of this approach are that the parameters are treated as random variables and that our beliefs about them are expressed as probability distributions. Each person can have his/her own prior, which contains the relative weights that person gives to every possible parameter value. It measures how "plausible" the person considers each parameter value to be before observing the data. We revise our beliefs about parameters after getting the data by using Bayes' theorem. This gives our posterior distribution, which gives the relative weights we give to each parameter value after analyzing the data.

The posterior distribution comes from two sources: the prior distribution and the observed data. This has a number of advantages over the conventional frequentist approach. Allowing the parameter to be a random variable lets us make probability statements about it, posterior to the data. This contrasts with the conventional approach, where inference probabilities are based on all possible data sets that could have occurred for the fixed parameter value.

Given the actual data, there is nothing random left with a fixed parameter value, so one can only make confidence statements, based on what could have occurred. Bayesian statistics also has a general way of dealing with nuisance parameters, parameters we are not interested in but cannot avoid including in the model.

Frequentist statistics does not have a general procedure for dealing with them. Bayesian statistics is predictive, unlike conventional frequentist statistics. This means that we can easily find the conditional probability distribution of the next observation given the sample data.

Monte Carlo Studies

In frequentist statistics, the parameter is considered a fixed, but unknown, constant.

A statistical procedure, such as a particular estimator for the parameter, cannot be judged from the value it gives. Instead, statistical procedures are evaluated by looking at how they perform in the long run over all possible samples of data, for fixed parameter values over some range.

For instance, we fix the parameter at some value. The estimator depends on the random sample, so it is considered a random variable having a probability distribution. This distribution is called the sampling distribution of the estimator, since its probability distribution comes from taking all possible random samples.

Then we look at how the estimator is distributed around the parameter value. This is called sample space averaging. Essentially it compares the performance of procedures before we take any data. Bayesian procedures consider the parameter to be a random variable, and its posterior distribution is conditional on the sample data that actually occurred, not all those samples that were possible but did not occur.

However, before the experiment, we might want to know how well the Bayesian procedure works at some specific parameter values in the range. To evaluate the Bayesian procedure using sample space averaging, we have to consider the parameter to be both a random variable and a fixed but unknown value at the same time.

We can get past the apparent contradiction in the nature of the parameter because the probability distribution we put on the parameter measures our uncertainty about the true value.

It shows the relative belief weights we give to the possible values of the unknown parameter! After looking at the data, our belief distribution over the parameter values has changed. This way we can think of the parameter as a fixed, but unknown, value at the same time as we think of it being a random variable.

This is called pre-posterior analysis because it can be done before we obtain the data. In Chapter 4, we will find out that the laws of probability are the best way to model uncertainty.

Because of this, Bayesian procedures will be optimal in the post-data setting, given the data that actually occurred. In Chapters 9 and 11, we will see that Bayesian procedures perform very well in the pre-data setting when evaluated using pre-posterior analysis. In fact, it is often the case that Bayesian procedures outperform the usual frequentist procedures even in the pre-data setting.

Monte Carlo studies are a useful way to perform sample space averaging. We draw a large number of samples randomly using the computer and calculate the statistic (frequentist or Bayesian) for each sample.

The empirical distribution of the statistic over the large number of random samples approximates its sampling distribution over all possible random samples. We can calculate statistics such as mean and standard deviation on this Monte Carlo sample to approximate the mean and standard deviation of the sampling distribution.
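A minimal sketch of such a Monte Carlo study (the true parameter value, sample size, and number of repetitions are illustrative assumptions):

```r
# Approximate the sampling distribution of the sample proportion by simulation.
# True parameter, sample size, and number of Monte Carlo repetitions are illustrative only.
set.seed(1)
true_p <- 0.4; n <- 25; n_rep <- 5000

p_hat <- replicate(n_rep, mean(rbinom(n, 1, true_p)))   # one estimate per simulated sample

# The empirical distribution of p_hat approximates the sampling distribution.
c(mc_mean = mean(p_hat), mc_sd = sd(p_hat),
  theory_sd = sqrt(true_p * (1 - true_p) / n))
```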

Some small-scale Monte Carlo studies are included as exercises.

Almost all introductory statistics courses are based on frequentist ideas. As a statistician, I know that Bayesian methods have great theoretical advantages. Some other texts that take a Bayesian approach include those by Berry, Press, and Lee. This book aims to introduce students with a good mathematics background to Bayesian statistics.

It covers the same topics as a standard introductory statistics text, only from a Bayesian perspective. Students need reasonable algebra skills to follow this book. Bayesian statistics uses the rules of probability, so competence in manipulating mathematical formulas is required. Students will find that general knowledge of calculus is helpful in reading this book. Specifically they need to know that area under a curve is found by integrating, and that a maximum or minimum of a continuous differentiable function is found where the derivative of the function equals zero.

The book is self-contained with a calculus appendix that students can refer to. Chapter 2 introduces some fundamental principles of scientific data gathering to control the effects of unidentified factors. These include the need for drawing samples randomly, along with some random sampling techniques. The reason why there is a difference between the conclusions we can draw from data arising from an observational study and from data arising from a randomized experiment is shown.

Completely randomized designs and randomized block designs are discussed. Chapter 3 covers displaying and summarizing data; often a good data display is all that is necessary, and the principles of designing displays that are true to the data are emphasized. Chapter 4 shows the difference between deduction and induction. Plausible reasoning is shown to be an extension of logic to situations where there is uncertainty. It turns out that plausible reasoning must follow the same rules as probability. Chapter 5 covers discrete random variables, including joint and marginal discrete random variables.

The binomial, hypergeometric, and Poisson distributions are introduced, and the situations where they arise are characterized. Chapter 6 covers Bayesian inference for discrete random variables using Bayes' theorem. We see that two important consequences of the method are that multiplying the prior by a constant, or multiplying the likelihood by a constant, does not affect the resulting posterior distribution. We show that we get the same results when we analyze the observations sequentially, using the posterior after the previous observation as the prior for the next observation, as when we analyze the observations all at once using the joint likelihood and the original prior.
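A minimal sketch of that sequential-versus-all-at-once equivalence, using a discrete prior and made-up Bernoulli observations:

```r
# Sequential updating gives the same posterior as analyzing all data at once.
# Parameter values, prior, and data are illustrative only.
theta <- c(0.2, 0.5, 0.8)
prior <- c(1, 1, 1) / 3
y     <- c(1, 0, 1, 1)                    # Bernoulli observations

# All at once: joint likelihood of the whole sample.
lik_all  <- sapply(theta, function(p) prod(dbinom(y, 1, p)))
post_all <- prior * lik_all / sum(prior * lik_all)

# Sequentially: yesterday's posterior is today's prior.
post_seq <- prior
for (obs in y) {
  lik      <- dbinom(obs, 1, theta)
  post_seq <- post_seq * lik / sum(post_seq * lik)
}

rbind(all_at_once = post_all, sequential = post_seq)   # identical rows
```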

Chapter 7 covers continuous random variables, including joint, marginal, and conditional random variables. The beta, gamma, and normal distributions are introduced in this chapter. Chapter 8 shows how to find the posterior distribution of the population proportion using either a uniform prior or a beta prior. We explain how to choose a suitable prior. We look at ways of summarizing the posterior distribution. Chapter 9 compares the Bayesian inferences with the frequentist inferences.

We show that the Bayesian estimator (the posterior mean using a uniform prior) has better performance than the frequentist estimator (the sample proportion) in terms of mean squared error over most of the range of possible values. This kind of frequentist analysis is useful before we perform our Bayesian analysis. We see that the Bayesian credible interval has a much more useful interpretation than the frequentist confidence interval for the population proportion.
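A minimal sketch of that kind of mean-squared-error comparison (the true proportions, sample size, and number of repetitions are illustrative assumptions):

```r
# Compare the frequentist estimator y/n with the Bayesian posterior mean
# under a uniform prior, (y + 1)/(n + 2), by mean squared error.
# True proportions, sample size, and repetitions are illustrative only.
set.seed(1)
n <- 10; n_rep <- 10000

mse <- function(true_p) {
  y <- rbinom(n_rep, n, true_p)
  c(freq  = mean((y / n - true_p)^2),
    bayes = mean(((y + 1) / (n + 2) - true_p)^2))
}

sapply(c(0.1, 0.3, 0.5), mse)   # columns: different true proportions
```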

One-sided and two-sided hypothesis tests using Bayesian methods are introduced. Chapter 10 covers Bayesian inference for the Poisson parameter; inferences using the resulting posterior include Bayesian credible intervals and two-sided tests of hypothesis, as well as one-sided tests of hypothesis. Chapter 11 covers Bayesian inference for the normal mean. We show how to choose a normal prior. We discuss dealing with nuisance parameters by marginalization. The predictive density of the next observation is found by considering the population mean a nuisance parameter and marginalizing it out.

Chapter 12 compares the Bayesian and frequentist inferences for the mean. These comparisons include point and interval estimation, and hypothesis tests, including both the one-sided and the two-sided cases. Chapter 13 shows how to perform Bayesian inferences for the difference between normal means and how to perform Bayesian inferences for the difference between proportions using the normal approximation.

Chapter 14 introduces the simple linear regression model and shows how to perform Bayesian inferences on the slope of the model. The predictive distribution of the next observation is found by considering both the slope and intercept to be nuisance parameters and marginalizing them out. Chapter 15 introduces Bayesian inference for the standard deviation σ, when we have a random sample of normal observations with known mean μ.

This chapter is at a somewhat higher level than the previous chapters and requires the use of the change-of-variable formula for densities. We discuss how to choose an inverse chi-squared prior that matches our prior belief about the median. Bayesian inferences from the resulting posterior include point estimates, credible intervals, and hypothesis tests including both the one-sided and two-sided cases.
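A minimal sketch of that update (the prior constants, known mean, and data below are illustrative assumptions; the conjugate result used is that an S times inverse chi-squared prior with kappa degrees of freedom leads to an (S + SSD) times inverse chi-squared posterior with kappa + n degrees of freedom, where SSD is the sum of squared deviations from the known mean):

```r
# Posterior for a normal variance with known mean, using an
# S * inverse-chi-squared prior with kappa degrees of freedom.
# Prior constants, known mean, and data are illustrative only.
mu    <- 20                       # known mean
S     <- 12; kappa <- 4           # prior: S * inverse-chi-squared with kappa df
y     <- c(18.3, 22.1, 19.5, 21.0, 20.4)
n     <- length(y)
SSD   <- sum((y - mu)^2)          # sum of squared deviations from the known mean

S_post     <- S + SSD             # posterior: S_post * inverse-chi-squared
kappa_post <- kappa + n           # with kappa_post degrees of freedom

# 95% credible interval for sigma, using chi-squared quantiles.
sigma_ci <- sqrt(S_post / qchisq(c(0.975, 0.025), df = kappa_post))
round(c(S_post = S_post, df = kappa_post, lower = sigma_ci[1], upper = sigma_ci[2]), 3)
```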

Chapter 16 shows how we can make Bayesian inference robust against a misspecified prior by using a mixture prior and marginalizing out the mixture parameter. This chapter is also at a somewhat higher level than the others, but it shows how one of the main dangers of Bayesian analysis can be avoided.

Main Points

An association between two variables does not mean that one causes the other.

It may be due to a causal relationship, it may be due to the effect of a third lurking variable on both the other variables, or it may be due to a combination of a causal relationship and the effect of a lurking variable.

The scientific method is a method for searching for cause-and-effect relationships and measuring their strength. It uses controlled experiments, where outside factors that may affect the measurements are controlled. This isolates the relationship between the two variables from the outside factors, so the relationship can be determined. The principle of randomization is used to statistically control unidentified outside factors by averaging out their effects.

This contributes to variability in the data. We can use the probability model based on the randomization method to measure the uncertainty. In the frequentist approach, the only kind of probability allowed is long-run relative frequency. These probabilities are only for observations and sample statistics, given the unknown parameters. Statistical procedures are judged by how they perform in an infinite number of hypothetical repetitions of the experiment. In the Bayesian approach, probabilities can be calculated for parameters as well as for observations and sample statistics.

The rules of probability are used to revise our beliefs about the parameters, given the data. In a Monte Carlo study, we use the empirical distribution of the statistic over all the samples we actually took instead of its sampling distribution over all possible repetitions.

Statistical science has shown that data should be relevant to the particular questions, yet be gathered using randomization. The development of methods to gather data purposefully, yet using randomization, is one of the greatest contributions the field of statistics has made to the practice of science.

Variability in data solely due to chance can be averaged out by increasing the sample size. Variability due to other causes cannot be.

Statistical methods have been developed for gathering data randomly, yet relevant to a specific question. These methods can be divided into two fields. Sample survey theory is the study of methods for sampling from a finite real population. Experimental design is the study of methods for designing experiments that focus on the desired factors and that are not affected by other, possibly unidentified, ones. Inferences always depend on the assumption that the probability model that we assume generated the observed data is the correct one.

When data are not gathered randomly, there is a risk that the observed pattern is due to lurking variables that were not observed, instead of being a true reflection of the underlying pattern. In a properly designed experiment, treatments are assigned to subjects in such a way as to reduce the effects of any lurking variables that are present, but unknown to us.


When we make inferences from data gathered according to a properly designed random survey or experiment, the probability model for the observations follows from the design of the survey or experiment, and we can be confident that it is correct. This puts our inferences on a solid foundation. When the data were not gathered from a designed survey or experiment, there is the possibility that the assumed probability model for the observations is not correct, and our inferences will be on shaky ground.

Population. The entire group of objects or people the investigator wants information about.

For instance, the population might consist of New Zealand residents over the age of eighteen. Usually we want to know some specific attribute about the population. Each member of the population has a number associated with it, for example, his/her annual income. Then we can consider the model population to be the set of numbers for each individual in the real population.

Our model population would be the set of incomes of all New Zealand residents over the age of eighteen. We want to learn about the distribution of the population.

Specifically, we want information about the population parameters, which are numbers associated with the distribution of the population, such as the population mean, median, and standard deviation. Often it is not feasible to get information about all the units in the population.

The population may be too big, or spread over too large an area, or it may cost too much to obtain data for the complete population.

Sample. A subset of the population. The investigator draws one sample from the population and gets information from the individuals in that sample. Sample statistics are calculated from sample data. They are numerical characteristics that summarize the distribution of the sample, such as the sample mean, median, and standard deviation.

A statistic has a similar relationship to a sample that a parameter has to a population. However, the sample is known, so the statistic can be calculated.

Statistical inference. Making a statement about population parameters on the basis of sample statistics. Good inferences can be made if the sample is representative of the population as a whole! The distribution of the sample must be similar to the distribution of the population from which it came! Sampling bias, a systematic tendency to collect a sample which is not representative of the population, must be avoided. It would cause the distribution of the sample to be dissimilar to that of the population, and thus lead to very poor inferences.

Even if we are aware of something about the population and try to represent it in the sample, there are probably some other factors in the population that we are unaware of, and the sample would end up being nonrepresentative with respect to those factors.

We might decide that our sample should be balanced between males and females, the same as the voting-age population. We might get a sample evenly balanced between males and females, but not be aware that the people we interview during the day are mainly those on the street during working hours. Office workers would be overrepresented, while factory workers would be underrepresented.

There might be other biases inherent in choosing our sample this way, and we might not have a clue as to what these biases are. Some groups would be systematically underrepresented, and others systematically overrepresented. Surprisingly, random samples give more representative samples than any nonrandom method such as quota samples or judgment samples. They not only minimize the amount of error in the inference, they also allow a probabilistic measurement of the error that remains.

Simple Random Sampling without Replacement

Simple random sampling requires a sampling frame, which is a list of the population numbered from 1 to N. A sequence of n random numbers is drawn from the numbers 1 to N.


Each time a number is drawn, it is removed from consideration, so it cannot be drawn again. The items on the list corresponding to the chosen numbers are included in the sample. Thus, at each draw, each item not yet selected has an equal chance of being selected.
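A minimal sketch in R (the population size and sample size are illustrative assumptions):

```r
# Simple random sampling without replacement from a sampling frame of size N.
# Population size and sample size are illustrative only.
set.seed(1)
N <- 100; n <- 20
frame  <- 1:N                                   # the numbered sampling frame
chosen <- sample(frame, size = n, replace = FALSE)
chosen                                          # labels of the sampled items
```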

Every item has equal chance of being in the final sample. Furthermore, every possible sample of the required size is equally likely. Suppose we are sampling from the population of registered voters in a large city.

It is likely that the proportion of males in the sample is close to the proportion of males in the population. Most samples are near the correct proportions; however, we are not certain to get the exact proportion. All possible samples of size n are equally likely, including those that are not representative with respect to sex.

Stratified Random Sampling

Since we know what the proportions of males and females are from the voters list, we should take that information into account in our sampling method. In stratified random sampling, the population is divided into subpopulations called strata.

In our case this would be males and females. The sampling frame would be divided into separate sampling frames for the two strata. A simple random sample is taken from each stratum where each stratum sample size is proportional to stratum size.
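A minimal sketch of proportional allocation (the stratum sizes and total sample size are illustrative assumptions):

```r
# Proportional-allocation stratified random sampling.
# Stratum sizes and total sample size are illustrative only.
set.seed(1)
frame <- data.frame(id = 1:100,
                    stratum = rep(c("male", "female"), times = c(40, 60)))
n_total <- 20

samples <- lapply(split(frame, frame$stratum), function(str) {
  n_str <- round(n_total * nrow(str) / nrow(frame))   # sample size proportional to stratum size
  str[sample(nrow(str), n_str), ]
})
stratified_sample <- do.call(rbind, samples)
table(stratified_sample$stratum)                      # strata in their correct proportions
```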

Every item has an equal chance of being selected. And every possible sample that has each stratum represented in the correct proportions is equally likely. This method will give us samples that are exactly representative with respect to sex. Hence inferences from these types of samples will be more accurate than those from simple random sampling when the variable of interest has different distributions over the strata.

Stratification has no potential downside as far as the accuracy of the inference is concerned. However, it is more costly, as the sampling frame has to be divided into separate sampling frames for each stratum.

Cluster Random Sampling

Sometimes a sampling frame of individuals is not available, or the individuals are scattered across a wide area. In cluster random sampling, we divide that area into neighborhoods called clusters. Then we make a sampling frame for clusters.

A random sample of clusters is selected. All items in the chosen clusters are included in the sample. The drawback is that items in a cluster tend to be more similar than items in different clusters.
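A minimal sketch (the number of clusters, cluster size, and number of clusters sampled are illustrative assumptions):

```r
# Cluster random sampling: sample whole clusters, keep every item in them.
# Number of clusters, cluster size, and clusters sampled are illustrative only.
set.seed(1)
frame <- data.frame(id = 1:100,
                    cluster = rep(1:20, each = 5))     # 20 neighbourhoods of 5 people
chosen_clusters <- sample(unique(frame$cluster), size = 4)
cluster_sample  <- frame[frame$cluster %in% chosen_clusters, ]
nrow(cluster_sample)                                    # 4 clusters x 5 people = 20
```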

For instance, people living in the same neighborhood usually come from the same economic level because the houses were built at the same time and in the same price range. This means that each observation gives less information about the population parameters. It is less efficient in terms of sample size. However, often it is very cost effective, since getting a larger sample is usually cheaper by this method.

Nonsampling Errors in Sample Surveys

Errors can arise in sample surveys or in a complete population census for reasons other than the sampling method used.

These nonsampling errors include response bias: the people who respond may be somewhat different from those who do not respond. They may have different views on the matters surveyed. Since we only get observations from those who respond, this difference would bias the results. Following up the nonrespondents will entail additional costs, but it is important, as we have no reason to believe that nonrespondents have the same views as the respondents.

Errors can also arise from poorly worded questions. Survey questions should be trialed in a pilot study to determine if there is any ambiguity.

Randomized Response Methods

Social science researchers and medical researchers often wish to obtain information about the population as a whole, but the information that they wish to obtain is sensitive to the individuals who are surveyed.

Individuals surveyed may not wish to divulge this sensitive personal information. They might refuse to respond or, even worse, they could give an untruthful answer. Either way, this would threaten the validity of the survey results. Randomized response methods have been developed to get around this problem. Each respondent uses a randomization device to decide which of two questions to answer: the sensitive question or a dummy question. Both questions have the same set of answers.

Some of the answers in the survey data will be to the sensitive question and some will be to the dummy question. The interviewer will not know which is which. However, the answers to the dummy question enter the data with known randomization probabilities. This way, information about the population can be obtained without actually knowing the personal information of the individuals surveyed, since only the individual knows which question he or she answered.
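A minimal sketch of how the known randomization probabilities let us back out the population proportion (all numbers below are illustrative assumptions):

```r
# Unrelated-question randomized response: with probability theta the respondent
# answers the sensitive question, otherwise a dummy question with a known
# proportion of "yes" answers.  All numbers here are illustrative only.
set.seed(1)
n       <- 1000
theta   <- 0.7      # chance the randomization device selects the sensitive question
p_dummy <- 0.5      # known "yes" proportion for the dummy question
p_true  <- 0.2      # sensitive proportion we pretend not to know

answers_sensitive <- rbinom(n, 1, theta)               # 1 = sensitive question selected
yes <- ifelse(answers_sensitive == 1,
              rbinom(n, 1, p_true), rbinom(n, 1, p_dummy))

# Observed P(yes) = theta * p_true + (1 - theta) * p_dummy, so solve for p_true.
p_hat <- (mean(yes) - (1 - theta) * p_dummy) / theta
p_hat
```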

We gather data to help us determine cause-and-effect relationships and to develop mathematical models to explain them. The world is complicated. There are many other factors that may affect the response. We may not even know what these other factors are. Suppose, for example, we want to study a herbal medicine for its effect on weight loss.

Each person in the study is an experimental unit. There is great variability between experimental units, because people are all unique individuals with their own hereditary body chemistry and dietary and exercise habits. The variation among experimental units makes it more difficult to detect the effect of a treatment.


In the corresponding figure, the degree of shading shows that the experimental units are not the same with respect to some unidentified variable. The response variable in the experiment may depend on that unidentified variable, which could be a lurking variable in the experiment.

Observational Study

If we record the data on a group of subjects who decided to take the herbal medicine and compare that with data from a control group who did not, that would be an observational study. The treatments have not been randomly assigned to the treatment and control groups.

Instead they self-select. Even if we observe a substantial difference between the two groups, we cannot conclude that there is a causal relationship from an observational study. In our study, those who took the treatment may have been more highly motivated to lose weight than those who did not.

Or there may be other factors that differed between the two groups. Any inferences we make from an observational study depend on the assumption that there are no differences between the distributions of the units in the treatment groups and the control group.

Designed Experiment

We need to get our data from a designed experiment if we want to be able to make sound inferences about cause-and-effect relationships. The experimenter uses randomization to decide which subjects get into the treatment group(s) and control group, respectively.

We are going to divide the experimental units into four treatment groups, one of which may be a control group. We must ensure that each group gets a similar range of units.

Completely randomized design. We will randomly assign experimental units to groups so that each experimental unit is equally likely to go to any of the groups. Each experimental unit will be assigned nearly independently of other experimental units. The only dependence between assignments is that having assigned one unit to treatment group 1 (for example), the probability of another unit being assigned to group 1 is slightly reduced, because there is one less place in group 1.

This is known as a completely randomized design. Having a large number of nearly independent randomizations ensures that the comparisons between treatment groups and control group are fair since all groups will contain a similar range of experimental units.
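A minimal sketch of such an assignment (the numbers of units and groups are illustrative assumptions):

```r
# Completely randomized design: assign units to groups at random,
# subject only to the groups being the required sizes.
# Number of units and number of groups are illustrative only.
set.seed(1)
n_units  <- 20
n_groups <- 4
group    <- sample(rep(1:n_groups, length.out = n_units))  # random permutation of group labels
data.frame(unit = 1:n_units, group = group)
table(group)                                               # equal-sized treatment groups
```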

In the corresponding figure, units have been randomly assigned to four treatment groups. The randomization averages out the differences between experimental units assigned to the groups. The expected value of the lurking variable is the same for all groups, because of the randomization.

The average value of the lurking variable for each group will be close to its mean value in the population because there are a large number of independent randomizations. The larger the number of units in the experiment, the closer the average values of the lurking variable in each group will be to its mean value in the population. If we find an association between the treatment and the response, it will be unlikely that the association was due to any lurking variable. For a large-scale experiment, we can effectively rule out any lurking variable and conclude that the association was due to the effect of different treatments.

Randomized block design. If we identify a variable, we can control for it directly. It ceases to be a lurking variable. One might think that using judgment about assigning experimental units to the treatment and control groups would lead to a similar range of units being assigned to them. Any prior knowledge we have about the experimental units should be used before the randomization. Units that have similar values of the identified variable should be formed into blocks.

This is shown in Figure 2. The experimental units in each block are similar with respect to that variable. Then the randomization is done within blocks.

One experimental unit in each block is randomly assigned to each treatment group. The blocking controls that particular variable, as we are sure all units in the block are similar, and one goes to each treatment group.
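A minimal sketch of within-block randomization (the numbers of blocks and treatment groups are illustrative assumptions):

```r
# Randomized block design: within each block, randomly assign one unit
# to each treatment group, independently across blocks.
# Numbers of blocks and treatment groups are illustrative only.
set.seed(1)
n_blocks <- 5
groups   <- c("A", "B", "C", "D")

design <- do.call(rbind, lapply(1:n_blocks, function(b) {
  data.frame(block = b,
             unit  = paste0(b, ".", 1:length(groups)),
             group = sample(groups))          # independent randomization in each block
}))
design
```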

By selecting which one goes to each group randomly, we are protecting against any other lurking variable by randomization. It is unlikely that any of the treatment groups was unduly favored or disadvantaged by the lurking variable.

On the average, all groups are treated the same. We see the four treatment groups are even more similar than those from the completely randomized design.

For example, if we wanted to determine which of four varieties of wheat gave better yield, we would divide the field into blocks of four adjacent plots because plots that are adjacent are more similar in their fertility than plots that are distant from each other. Then within each block, one plot would be randomly assigned to each variety. This randomized block design ensures that the four varieties each have been assigned to similar groups of plots.

It protects against any other lurking variable by the within-block randomization. One unit in each block is randomly assigned to each treatment group. Randomizations in different blocks are independent of each other. When the response variable is related to the trait we are blocking on, the blocking will be effective, and the randomized block design will lead to more precise inferences about the yields than a completely randomized design with the same number of plots.

This can be seen by comparing the treatment groups from the completely randomized design with those from the randomized block design: the latter are more similar.

Main Points

Population. The entire set of objects or people that the study is about. Each member of the population has a number associated with it, so we often consider the population as a set of numbers. We want to know about the distribution of these numbers.

Sample. The subset of the population from which we obtain the numbers.

Parameter. A number that is a characteristic of the population distribution, such as the mean, median, standard deviation, and interquartile range of the whole population.

Statistic. A number that is a characteristic of the sample distribution, such as the mean, median, standard deviation, and interquartile range of the sample.

Statistical inference. Making a statement about population parameters on the basis of sample statistics.

Simple random sampling. At each draw, every item that has not already been drawn has an equal chance of being chosen to be included in the sample.

Stratified random sampling. The population is partitioned into subpopulations called strata, and simple random samples are drawn from each stratum, where the stratum sample sizes are proportional to the stratum proportions in the population. The stratum samples are combined to form the sample from the population.

Cluster random sampling. The area the population lies in is partitioned into areas called clusters. A random sample of clusters is drawn, and all members of the population in the chosen clusters are included in the sample.

Randomized response methods. These allow the respondent to randomly determine whether to answer a sensitive question or the dummy question, which both have the same range of answers. Thus the respondent's personal information is not divulged by the answer, since the interviewer does not know which question it applies to.

Observational study. The researcher collects data from a set of experimental units not chosen randomly, or not allocated to experimental or control group by randomization. There may be lurking variables due to the lack of randomization.

Designed experiment. The researcher allocates experimental units to the treatment group(s) and control group by some form of randomization.

Completely randomized design. The researcher randomly assigns the units into the treatment groups nearly independently. The only dependence is the constraint that the treatment groups are the correct size.

Randomized block design. The researcher first groups the units into blocks which contain similar units. Then the units in each block are randomly assigned, one to each group.

The randomizations in separate blocks are performed independently of each other.

Monte Carlo Exercises

We will use a Monte Carlo computer simulation to evaluate the methods of random sampling. Now, if we want to evaluate a method, we need to know how it does in the long run. Then we can see how closely the sampling distribution is centered around the true parameter.

If we use computer simulations to run a large number of hypothetical repetitions of the procedure with known parameters, this is known as a Monte Carlo study named after the famous casino. Instead of having the theoretical sampling distribution, we have the empirical distribution of the sample statistic over those simulated repetitions.

We judge the statistical procedure by seeing how closely the empirical distribution of the estimator is centered around the known parameter. The population. Suppose there is a population made up of individuals, and we want to estimate the mean income of the population from a random sample of size There are twenty neighborhoods, and five individuals live in each one. Now, the income dis- tribution may be different for the three ethnic groups. Also, individuals in the same neighborhood tend to be more similar than individuals in different neighborhoods.

Details about the population are contained in the Minitab worksheet sscsample.mtw. Each row contains the information for an individual.

Column 1 contains the income, column 2 contains the ethnic group, and column 3 con- tains the neighborhood. Compute the mean income for the population. That will be the true parameter value that we are trying to estimate.

In the Monte Carlo study we will approximate the sampling distribution of the sample means for three types of random sampling: simple random sampling, stratified random sampling, and cluster random sampling. We do this by drawing a large number of random samples from the population using each method of sampling and calculating the sample mean of each as our estimate. The empirical distribution of these sample means approximates the sampling distribution of the estimate.

Compute the mean income for the three ethnic groups. Do you see any difference between the income distributions? Details of how to use this macro are in Appendix C.

Answer the following questions from the output: Does simple random sampling always have the strata represented in the correct proportions? On the average, does simple random sampling give the strata in their correct proportions? Does the mean of the sampling distribution of the sample mean for simple random sampling appear to be close enough to the population mean that we can consider the difference to be due to chance alone?

We only took a finite number of samples, not all possible samples. Does stratified random sampling always have the strata represented in the correct proportions? On the average, does stratified random sampling give the strata in their correct proportions? Does the mean of the sampling distribution of the sample mean for stratified random sampling appear to be close enough to the population mean that we can consider the difference to be due to chance alone?

Does cluster random sampling always have the strata represented in the correct proportions? On the average, does cluster random sampling give the strata in their correct proportions? Does the mean of the sampling distribution of the sample mean for cluster random sampling appear to be close enough to the population mean that we can consider the difference to be due to chance alone?

Which method of random sampling seems to be more effective in giving sample means more concentrated about the true mean?

Often we want to set up an experiment to determine the magnitude of several treatment effects. We have a set of experimental units that we are going to divide into treatment groups. There is variation among the experimental units in the underlying response variable that we are going to measure. We will assume that we have an additive model where each of the treatments has a constant effect.

The assignment of experimental units to treatment groups is crucial. There are two things that the assignment of experimental units into treatment groups should deal with. First, there may be a "lurking variable" that is related to the measurement variable, either positively or negatively. If we assign experimental units that have high values of that lurking variable into one treatment group, that group will be either advantaged or disadvantaged, depending on whether there is a positive or negative relationship.


