

Notation and definitions

Throughout this paper, a random variable will be a measurable function from a probability space to some Banach space (often the real line). The norm in the implicit Banach space will always be denoted by $ {\mathopen\vert\ \cdot\ \mathclose\vert}$.

Suppose that $ f:[0,\infty) \to [0,\infty]$ is a non-increasing function. Define the left continuous inverse to be

$\displaystyle f^{-1}(x-) = \sup\{ y: f(y) \ge x \} ,$

and the right continuous inverse to be

$\displaystyle f^{-1}(x+) = \sup\{ y: f(y) > x \} .$
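For instance, if $ f(y) = 1$ for $ 0 \le y < 1$, $ f(y) = \tfrac12$ for $ 1 \le y < 2$, and $ f(y) = 0$ for $ y \ge 2$, then

$\displaystyle f^{-1}(\tfrac12-) = \sup\{ y: f(y) \ge \tfrac12 \} = 2 , \qquad f^{-1}(\tfrac12+) = \sup\{ y: f(y) > \tfrac12 \} = 1 ,$

so the two inverses can differ at a value that $ f$ takes on an interval.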

In describing the tail distribution of a random variable $ X$, instead of considering the function $ t \mapsto \Pr({\mathopen\vert X\mathclose\vert} > t)$, we will consider its right continuous inverse, which we will denote by $ X^*(t)$. In fact, this quantity appears frequently in the literature, and is more commonly referred to as the decreasing rearrangement (or, more correctly, the non-increasing rearrangement) of $ {\mathopen\vert X\mathclose\vert}$. Notice that if one considers $ X^*$ to be a random variable on the probability space $ [0,1]$ (with Lebesgue measure), then $ X^*$ has exactly the same law as $ {\mathopen\vert X\mathclose\vert}$. We might also consider the left continuous inverse $ t \mapsto X^*(t-)$. Notice that $ X^*(t) \le x \le X^*(t-)$ if and only if $ \Pr({\mathopen\vert X\mathclose\vert} > x) \le t \le \Pr({\mathopen\vert X\mathclose\vert} \ge x)$.
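For example, if $ X$ is a real random variable with $ \Pr({\mathopen\vert X\mathclose\vert} > y) = e^{-y}$ for $ y \ge 0$, then

$\displaystyle X^*(t) = \sup\{ y: e^{-y} > t \} = \log(1/t) \qquad (0 < t < 1) ,$

and $ X^*(t) = 0$ for $ t \ge 1$; as a random variable on $ [0,1]$, $ X^*$ indeed satisfies $ \Pr(X^* > x) = e^{-x} = \Pr({\mathopen\vert X\mathclose\vert} > x)$.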

If $ A$ and $ B$ are two quantities (that may depend upon certain parameters), we will write $ A \approx B$ to mean that there exist positive constants $ c_1$ and $ c_2$ such that $ c_1^{-1} A \le B \le c_2 A$. We will call $ c_1$ and $ c_2$ the constants of approximation. If $ f(t)$ and $ g(t)$ are two (usually non-increasing) functions on $ [0,\infty)$, we will write $ f(t) \mathrel{\mathop{\approx}\limits_{t}} g(t)$ if there exist positive constants $ c_1$, $ c_2$, $ c_3$ and $ c_4$ such that $ c_1^{-1} f(c_2 t) \le g(t) \le c_3 f(c_4^{-1} t)$ for all $ t \ge 0$. Again, we will call $ c_1$, $ c_2$, $ c_3$ and $ c_4$ the constants of approximation.
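For example, if $ f(t) = e^{-t}$ and $ g(t) = e^{-2t}$, then $ f(t) \mathrel{\mathop{\approx}\limits_{t}} g(t)$: since $ g(t) = f(2t)$, one may take $ c_1 = c_3 = 1$, $ c_2 = 2$ and $ c_4 = \tfrac12$ in

$\displaystyle c_1^{-1} f(c_2 t) \le g(t) \le c_3 f(c_4^{-1} t) .$

On the other hand, no choice of constants gives $ c_1^{-1} f(t) \le g(t) \le c_2 f(t)$ for all $ t \ge 0$, so the rescaling of the argument is essential to this notion.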

Suppose that $ X$ and $ Y$ are random variables. Then the statement $ \Pr({\mathopen\vert X\mathclose\vert} > t) \mathrel{\mathop{\approx}\limits_{t}} \Pr({\mathopen\vert Y\mathclose\vert} > t)$ is the same as the statement $ X^*(t) \mathrel{\mathop{\approx}\limits_{t}} Y^*(t)$. Since $ X^*(t) = 0$ for $ t \ge 1$, the latter statement is equivalent to the existence of positive constants $ c_1$, $ c_2$, $ c_3$, $ c_4$ and $ c_5$ such that $ c_1^{-1} X^*(c_2 t) \le Y^*(t) \le c_3 X^*(c_4^{-1} t)$ for $ t \le c_5^{-1}$.

To avoid bothersome convergence problems, we will always suppose that our sequence of independent random variables $ (X_n)$ is of finite length. Given a sequence of independent random variables $ (X_n)$, when no confusion will arise, we will use the following notation. If $ A$ is a finite subset of $ {\mathbb{N}}$, we will let $ S_A = \sum_{n \in A} X_n$, and $ M_A = \sup_{n \in A} {\mathopen\vert X_n\mathclose\vert}$. If $ k$ is a positive integer, then $ S_k = S_{\{1,\dots,k\}}$ and $ M_k = M_{\{1,\dots,k\}}$. We will define the maximal function $ U_k = \sup_{1 \le n \le k}{\left\vert S_n\right\vert}$. Furthermore, $ S = S_N$, $ M = M_N$, and $ U = U_N$, where $ N$ is the length of the sequence $ (X_n)$.

If $ s$ is a real number, we will write $ X_n^{(>s)} = X_n I_{{\mathopen\vert X_n\mathclose\vert} > s}$ and $ X_n^{(\le s)} = X_n I_{{\mathopen\vert X_n\mathclose\vert} \le s} = X_n - X_n^{(>s)}$. For $ A \subset {\mathbb{N}}$, we will write $ S^{(\le s)}_A = \sum_{n \in A} X^{(\le s)}_n$. Similarly we define $ S^{(>s)}_A$, $ S^{(\le s)}_k$, etc.

Another quantity that we shall care about is the decreasing rearrangement of the disjoint sum of random variables. This notion was used by Johnson, Maurey, Schechtman and Tzafriri (1979), Carothers and Dilworth (1988), and Johnson and Schechtman (1989), all in the context of sums of independent random variables. The disjoint sum of the sequence $ (X_n)$ is the measurable function on the measure space $ \Omega \times {\mathbb{N}}$ that takes $ (\omega,n)$ to $ X_n(\omega)$. We shall denote the decreasing rearrangement of the disjoint sum by $ \tilde \ell:[0,\infty) \to [0,\infty]$, that is, $ \tilde \ell(t)$ is the least number such that

$\displaystyle \sum_n \Pr({\mathopen\vert X_n\mathclose\vert} > \tilde \ell(t)) \le t .$

Define $ \ell(t)$ to be $ \tilde \ell(t)$ if $ 0 \le t \le 1$, and 0 otherwise. Since $ \ell(t)$ is only non-zero when $ 0 \le t \le 1$, we will think of $ \ell$ as being a random variable on the probability space $ [0,1]$ with Lebesgue measure. The quantity $ \ell$ is effectively $ M$ in disguise. This next result (and its proof) essentially appears in Giné and Zinn (1983).
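For example, if $ X_1,\dots,X_N$ are independent with $ \Pr({\mathopen\vert X_n\mathclose\vert} > s) = e^{-s}$ for every $ n$, then $ \sum_n \Pr({\mathopen\vert X_n\mathclose\vert} > s) = N e^{-s}$, and so

$\displaystyle \ell(t) = \log(N/t) \qquad (0 < t \le 1) .$

Viewed as a random variable on $ [0,1]$, $ \ell$ then has tail $ \Pr(\ell > x) = \min(1, N e^{-x})$, which is comparable, up to universal constants, to $ \Pr(M > x) = 1 - (1-e^{-x})^N$; the next result makes this comparison precise.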

Proposition 2.1   If $ 0 < t < 1$, then

$\displaystyle \ell(2t) \le \ell(t/(1-t)) \le M^*(t) \le\ell(t) .$


Proof: The first inequality follows easily once one notices that both sides of this inequality are zero if $ t > 1/2$, while for $ t \le 1/2$ one has $ t/(1-t) \le 2t$ and $ \ell$ is non-increasing.

To get the remaining two inequalities, note that, by an easy argument, if $ \alpha_1$, $ \alpha_2,\dots \ge 0$ with $ \sum_n \alpha_n \le 1$, then

$\displaystyle 1-\sum_n \alpha_n \le \prod_n (1-\alpha_n) \le 1 - \frac{\sum_n \alpha_n}{1 + \sum_n \alpha_n} .$
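One way to verify these bounds: the left-hand inequality follows by induction on the number of terms, using $ (1-a)(1-b) \ge 1-a-b$ for $ a,b \in [0,1]$, while the right-hand inequality follows by combining $ 1-\alpha \le e^{-\alpha}$ with $ e^{-u} \le 1/(1+u)$ for $ u \ge 0$:

$\displaystyle \prod_n (1-\alpha_n) \le e^{-\sum_n \alpha_n} \le \frac{1}{1+\sum_n \alpha_n} = 1 - \frac{\sum_n \alpha_n}{1+\sum_n \alpha_n} .$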

So, if $ \Pr(\ell > x) = \sum_n \Pr({\mathopen\vert X_n\mathclose\vert} > x) \le 1$, then

$\displaystyle \Pr(M > x) = 1-\prod_n (1-\Pr({\mathopen\vert X_n\mathclose\vert} > x)) ,$

and hence

$\displaystyle \frac{\Pr(\ell > x)}{1+\Pr(\ell>x)} \le \Pr(M>x) \le \Pr(\ell > x) .$

Taking inverses, the result follows.
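To indicate how the inverses are taken in the main case (a sketch): suppose $ 0 < t \le 1/2$ (otherwise $ t/(1-t) > 1$ and $ \ell(t/(1-t)) = 0$, so the second inequality is trivial), and let $ x > M^*(t)$, so that $ \Pr(M > x) \le t$. If $ \Pr(\ell > x) \le 1$, the left-hand bound above gives

$\displaystyle \frac{\Pr(\ell > x)}{1+\Pr(\ell > x)} \le t , \qquad\text{so}\qquad \Pr(\ell > x) \le \frac{t}{1-t} ,$

and hence $ \ell(t/(1-t)) \le x$; letting $ x$ decrease to $ M^*(t)$ yields $ \ell(t/(1-t)) \le M^*(t)$. The bound $ M^*(t) \le \ell(t)$ follows in the same way from $ \Pr(M > x) \le \Pr(\ell > x)$.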

