Notes:Statistical test random variable
From Maths
\newcommand{\B}[0]{ {\mathbb{B} } } \newcommand{\O}[0]{ {\mathcal{O} } }
\newcommand{\P}[2][]{\mathbb{P}#1{\left[{#2}\right]} } \newcommand{\Pcond}[3][]{\mathbb{P}#1{\left[{#2}\!\ \middle\vert\!\ {#3}\right]} } \newcommand{\Plcond}[3][]{\Pcond[#1]{#2}{#3} } \newcommand{\Prcond}[3][]{\Pcond[#1]{#2}{#3} }
\newcommand{\E}[1]{ {\mathbb{E}{\left[{#1}\right]} } } \newcommand{\Mdm}[1]{\text{Mdm}{\left({#1}\right) } } \newcommand{\Var}[1]{\text{Var}{\left({#1}\right) } } \newcommand{\ncr}[2]{ \vphantom{C}^{#1}\!C_{#2} }
Notice
This may simply be the definition of joint probability applied to tests.
Starting point
Let (S,\Omega,\mathbb{P})[Note 1] be the probability space we draw subjects from (so each subject takes a value in S , and so forth); then:
- Let: \mathbb{B}:\eq\{0,\ 1\} (or \mathbb{B}:\eq\{-,\ +\} ) for "negative" and "positive" respectively.
- We imbue \B with the sigma-algebra \mathcal{P}(\mathbb{B}), which is: \big\{\emptyset,\{0\},\{1\},\{0,1\}\big\}
We introduce a random variable, P:S\rightarrow\B ,
- such that P is an "oracle" of sorts; specifically we have:
- P:s\mapsto 1 for a subject with the property being tested for, and,
- P:s\mapsto 0 if the property is absent.
- We assume P is never wrong.
- Note:
- The random variable requirements ensure that P^{-1}(\{i\})\in\Omega for each i\in\B - this suffices, since \big\{A\subseteq\B\ :\ P^{-1}(A)\in\Omega\big\} is itself a sigma-algebra and the singletons \{0\},\{1\} generate \mathcal{P}(\B) ; if not taken as implicit we add:
- P^{-1}(\{0,1\})\eq S\in\Omega and
- P^{-1}(\emptyset)\eq\emptyset\in\Omega too
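A minimal sketch of this setup, assuming a toy finite subject space where \Omega\eq\mathcal{P}(S) (so every map S\rightarrow\B is automatically measurable); the subject names and the choice of who has the property are hypothetical:

```python
from itertools import chain, combinations

# Toy finite subject space with the full power set as its sigma-algebra.
# The names and who has the property are made-up assumptions.
S = frozenset({"alice", "bob", "carol"})
has_property = {"alice"}

def P(s):
    """The 'oracle' random variable: 1 iff the property is present."""
    return 1 if s in has_property else 0

def powerset(xs):
    """All subsets of xs, as frozensets (so they are hashable)."""
    xs = list(xs)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(xs, r)
                                         for r in range(len(xs) + 1))}

Omega = powerset(S)          # sigma-algebra on S (here: all 8 subsets)
sigma_B = powerset({0, 1})   # P(B) = { {}, {0}, {1}, {0,1} }

# Measurability: P^{-1}(A) is in Omega for every A in the sigma-algebra on B.
for A in sigma_B:
    assert frozenset(s for s in S if P(s) in A) in Omega
print("P is measurable with respect to (Omega, P(B))")
```

Because \Omega here is the full power set, every preimage lands in \Omega trivially; the loop just makes the measurability condition explicit.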
Step 1
We now introduce another random variable:
- T:S\rightarrow\B - with the same sigma-algebras as already covered; however T need not have any "oracular" properties - it represents our test.
We now introduce:
- \O:S\rightarrow\B\times\B given by \O:s\mapsto\big(P(s),T(s)\big)
- We claim this is a random variable itself.
- We claim that: \O^{-1}\big(\{(i,j)\}\big)\eq P^{-1}(\{i\})\cap T^{-1}(\{j\})
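The claimed preimage identity can be checked by brute force on a small finite model (the subject space below, pairs of a "property" bit and a "marker" bit, and the particular choices of P and T are illustrative assumptions):

```python
from itertools import product

# Hypothetical finite subject space: each subject is a (property, marker) pair.
S = list(product([0, 1], repeat=2))

def P(s):
    return s[0]          # "oracle": 1 iff the property is present

def T(s):
    return s[1]          # the test just reads the marker bit; it may disagree with P

def O(s):
    return (P(s), T(s))  # the joint random variable O(s) = (P(s), T(s))

def preimage(f, target):
    """Return f^{-1}(target) as a set of subjects."""
    return {s for s in S if f(s) in target}

# Check O^{-1}({(i,j)}) = P^{-1}({i}) ∩ T^{-1}({j}) for every pair (i, j).
for i, j in product([0, 1], repeat=2):
    assert preimage(O, {(i, j)}) == preimage(P, {i}) & preimage(T, {j})
print("preimage identity holds for all four pairs (i, j)")
```

The identity itself is immediate from the definition: \O(s)\eq(i,j) holds exactly when P(s)\eq i and T(s)\eq j simultaneously.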
Finally
We can now talk about \P{P\eq i\text{ and }T\eq j} , and thus about conditional probabilities like \Pcond{P\eq 1}{T\eq 1} - which is the goal.
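As a numeric sketch of the goal quantity: the prevalence, sensitivity and specificity values below are made-up assumptions (not from these notes), used only to show how the joint probabilities \P{P\eq i\text{ and }T\eq j} combine into \Pcond{P\eq 1}{T\eq 1} :

```python
# Hypothetical test characteristics (assumed values for illustration only):
prevalence = 0.01    # P[P = 1]
sensitivity = 0.95   # P[T = 1 | P = 1]
specificity = 0.90   # P[T = 0 | P = 0]

# Joint probabilities P[P = i and T = j] via the product rule:
p_joint = {
    (1, 1): prevalence * sensitivity,
    (1, 0): prevalence * (1 - sensitivity),
    (0, 1): (1 - prevalence) * (1 - specificity),
    (0, 0): (1 - prevalence) * specificity,
}

# Marginal P[T = 1], then the conditional probability of interest:
p_T1 = p_joint[(1, 1)] + p_joint[(0, 1)]
p_P1_given_T1 = p_joint[(1, 1)] / p_T1
print(f"P[P=1 | T=1] = {p_P1_given_T1:.4f}")
```

With these assumed numbers the conditional probability is well under 10%, the familiar point that a positive test on a rare condition is still more likely a false positive than a true one.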
Notes
- ↑ This assumption is critical to talking about "all tests", as we completely sidestep having to define the "space of all subjects", for example S could be:
- S\eq \mathbb{R}^{120}\times\mathbb{N}^4\times\B^6 - 120 factors (each a real dimension), 4 natural-number factors and 6 binary values, or
- S\eq \mathbb{B} - it could take just two values