Random Variables, PDFs, and CDFs
What is the relationship between the pdf/cdf and probability, in terms of integral expressions? For a continuous random variable X with density function f, the cumulative distribution function F is F(x) = P[X ≤ x] = ∫ from −∞ to x of f(t) dt; equivalently, f is the derivative of F wherever F is differentiable. Probability mass functions (pmf) and density functions (pdf) play almost the same role, one for discrete and one for continuous variables, and the interpretation of the CDF is the same in either case.
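To make the relation concrete, here is a minimal Python sketch that approximates F(x) by numerically integrating f and compares the result with the closed-form CDF. The exponential distribution is used purely as an example; the distribution choice and function names are illustrative, not from the source.

```python
import math

def pdf(t, lam=1.0):
    """Density of an exponential distribution: f(t) = lam * exp(-lam * t) for t >= 0."""
    return lam * math.exp(-lam * t) if t >= 0 else 0.0

def cdf_numeric(x, n=100_000):
    """Approximate F(x) = P[X <= x] by a midpoint Riemann sum of the pdf.

    The exponential density is zero for t < 0, so integration starts at 0.
    """
    if x <= 0:
        return 0.0
    dt = x / n
    return sum(pdf((i + 0.5) * dt) * dt for i in range(n))

x = 2.0
approx = cdf_numeric(x)
exact = 1 - math.exp(-x)  # closed-form CDF of the exponential distribution
print(abs(approx - exact) < 1e-6)  # the numerical integral matches the closed form
```

The agreement between the two values is exactly the statement F(x) = ∫ f(t) dt, checked numerically.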
Let me go up here. You'd say it looks like it's about 0.5. And you'd say, I don't know, is it 0.5? And I would say no, it is not 0.5. And before we even think about how we would interpret it visually, let's just think about it logically. What is the probability that tomorrow we have exactly 2 inches of rain? Exactly 2 inches of rain. I mean, there's not a single extra atom, water molecule above the 2 inch mark.
And not a single water molecule below the 2 inch mark.
It's essentially 0, right? It might not be obvious to you, because you've probably heard, oh, we had 2 inches of rain last night. But think about it: exactly 2 inches, right? Normally if it's 2.01 inches we'd just say we got 2 inches of rain. But we're saying no, that doesn't count. It can't be 2.01 inches. We want exactly 2. Normally, with our measurements, we don't even have tools that can tell us whether it is exactly 2 inches.
No ruler you can even say is exactly 2 inches long. At some point, just the way we manufacture things, there's going to be an extra atom on it here or there.
So the odds of anything actually being exactly a certain measurement, down to the exact infinite decimal point, are actually 0. The way you would think about a continuous random variable, you could say, what is the probability that Y is almost 2?
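This vanishing probability can be seen numerically. The sketch below assumes a normal density as a stand-in rainfall model (an illustrative choice, not anything from the source) and shows that P(|Y − 2| < ε) shrinks toward 0 as the tolerance ε shrinks:

```python
import math

def normal_pdf(y, mu=2.0, sigma=1.0):
    """Normal density, used here as a hypothetical rainfall model."""
    return math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def prob_near(target, eps, n=10_000):
    """P(target - eps < Y < target + eps), via a midpoint Riemann sum of the density."""
    a, b = target - eps, target + eps
    dy = (b - a) / n
    return sum(normal_pdf(a + (i + 0.5) * dy) * dy for i in range(n))

# As the tolerance shrinks, so does the probability; in the limit it is 0.
for eps in (0.5, 0.1, 0.01, 0.001):
    print(eps, prob_near(2.0, eps))
```

Each tighter tolerance covers a thinner slice of area under the curve, which is why "exactly 2 inches" carries probability 0.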
So what if we said that the absolute value of Y minus 2 is less than some tolerance, say, less than 0.1? And if that doesn't make sense to you, this is essentially just saying, what is the probability that Y is greater than 1.9 and less than 2.1?
These two statements are equivalent. I'll let you think about it a little bit. But now this starts to make a little bit of sense. Now we have an interval here.
So we want all Y's between 1.9 and 2.1. So we are now talking about this whole area. And area is key. So if you want to know the probability of this occurring, you actually want the area under this curve from this point to this point.
And for those of you who have studied your calculus, that would essentially be the definite integral of this probability density function from this point to this point.
So from-- let me see, I've run out of space down here. So let's say if this graph-- let me draw it in a different color.
Probability density functions (video) | Khan Academy
If this line was defined by, I'll call it f of x. I could call it p of x or something. The probability of this happening would be equal to the integral, for those of you who've studied calculus, from 1.9 to 2.1 of f of x dx. Assuming this is the x-axis. So it's a very important thing to realize. Because when a random variable can take on an infinite number of values, or it can take on any value in an interval, the probability of getting one exact value, exactly 2 inches of rain, is 0.
It's like asking you what is the area under a curve on just this line. Or even more specifically, it's like asking you what's the area of a line?
The area of a line, if you were to just draw a line, you'd say, well, area is height times base. Well, the height has some dimension, but the base, what's the width of a line? As far as the way we've defined a line, a line has no width, and therefore no area. And it should make intuitive sense.
That the probability of a very super-exact thing happening is pretty much 0. That you really have to say, OK, what's the probability that we'll get close to 2? And then you can define an area. And if you said, oh, what's the probability that we get someplace between 1 and 3 inches of rain, then of course the probability is much higher.
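This comparison can be sketched numerically. The density f below is a hypothetical normal-shaped rainfall curve (the mean and spread are invented for illustration); the point is only that the wide interval from 1 to 3 inches captures far more area than the narrow interval from 1.9 to 2.1:

```python
import math

def f(y, mu=2.0, sigma=0.6):
    """Hypothetical rainfall density (a normal curve; purely illustrative)."""
    return math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def prob(a, b, n=20_000):
    """P(a <= Y <= b) = definite integral of f from a to b (midpoint rule)."""
    dy = (b - a) / n
    return sum(f(a + (i + 0.5) * dy) * dy for i in range(n))

narrow = prob(1.9, 2.1)  # "almost exactly 2 inches"
wide = prob(1.0, 3.0)    # "someplace between 1 and 3 inches"
print(narrow, wide)      # the wide interval carries much more probability
```

Both numbers are areas under the same curve; only the width of the interval differs.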
It would be all of this kind of stuff. You could also say, what's the probability that we have less than 0.1 inches of rain?

Think of a nuclear power plant: we care greatly to know what our chances are that we will get whirring turbines instead of a meltdown. To a strict determinist, all such bets were settled long before any coin, metaphorical or not, was ever minted; we simply do not yet know the outcome.
If we only knew the forces applied at a coin's toss, its exact distribution of mass, and the various minute movements of air in the room, then we could, in principle, predict how it would land. But we, of course, are often lacking even a mentionable fraction of such knowledge of the world.
Random variables
Furthermore, it seems on exceedingly small scales that strict determinists are absolutely wrong; there is no way to predict when, for example, a uranium atom will split, and if such an event affects the larger world then that macro event is truly unpredictable. Some outcomes truly are up in the air, unsettled until they are part of the past. In order to cope with this reality and to be able to describe the future states of a system in some useful way, we use random variables.
A random variable is simply a function that relates each possible physical outcome of a system to some unique real number. There are three sorts of random variables: discrete, continuous, and mixed. In the following sections these categories will be briefly discussed and examples will be given. Consider our coin toss again. We could have heads or tails as possible outcomes. If we defined a variable, x, as the number of heads in a single toss, then x could possibly be 1 or 0, nothing else.
Such a function, x, would be an example of a discrete random variable. Such random variables can only take on discrete values. Other examples would be the possible results of a pregnancy test, or the number of students in a classroom.
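As a quick illustration, the fair-coin toss can be simulated in Python. The empirical frequencies of x = 0 and x = 1 approach the pmf values of 0.5 each; the simulation size and seed are arbitrary choices for this sketch:

```python
import random

# Discrete random variable: x = number of heads in a single fair-coin toss.
# Its pmf assigns P(x = 0) = P(x = 1) = 0.5 and zero to everything else.
pmf = {0: 0.5, 1: 0.5}

random.seed(42)
tosses = [random.randint(0, 1) for _ in range(100_000)]
freq = {v: tosses.count(v) / len(tosses) for v in (0, 1)}
print(freq)  # empirical frequencies are close to the pmf values
```

Unlike a density, a pmf assigns a positive probability to each individual value, which is exactly what makes the variable discrete.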
Back to the coin toss: what if we wished to describe the distance between where our coin came to rest and where it first hit the ground? That distance, x, would be a continuous random variable, because it could take on an infinite number of values within a continuous range of real numbers. The coin could travel 1 cm, or 1.1 cm, or 1.11 cm, or any of the infinitely many values in between. Other examples of continuous random variables would be the mass of stars in our galaxy, the pH of ocean waters, or the residence time of some analyte in a gas chromatograph.
Mixed random variables
Mixed random variables have both discrete and continuous components. Such random variables are infrequently encountered. For a possible example, though, you may be measuring a sample's weight and decide that any weight measured as a negative value will be given a value of 0.
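That clipping scheme can be sketched in Python. The "true" weight and noise level below are invented for illustration; the point is that the reported measurement has a discrete lump of probability at exactly 0 plus a continuous part spread over the positive values:

```python
import random

random.seed(0)

# Hypothetical noisy scale: true weight 0.02 g, Gaussian noise,
# and any negative reading is reported as 0 (the clipping rule).
readings = [max(0.0, random.gauss(0.02, 0.05)) for _ in range(100_000)]

# Discrete component: a strictly positive probability of reading exactly 0.
point_mass_at_zero = sum(r == 0.0 for r in readings) / len(readings)

# Continuous component: the strictly positive readings form a smooth spread.
positive = [r for r in readings if r > 0.0]

print(point_mass_at_zero)
print(min(positive), max(positive))
```

A purely continuous variable would hit exactly 0 with probability 0, so the visible lump at 0 is the discrete part of this mixed distribution.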