n$, then
$$ \E(d_k(c_{1,\tau_1},\dots,c_{k,\tau_k}) | \A_{n,t}) = \E(\E(d_k(c_{1,\tau_1},\dots,c_{k,\tau_k}) | \A_{k,0}) | \A_{n,t}) = 0 ,$$
by Lemma (\ref{lemma5.1}) and (\ref{value-at-0}). Similarly, it follows from the same lemma that if $k = n$, then
$$ \E(d_k(c_{1,\tau_1},\dots,c_{k,\tau_k}) | \A_{n,t}) = d_k(c_{1,\tau_1},\dots,c_{k,t \wedge \tau_k}) $$
and hence $\E(F_\infty | \A_{n,t}) = F_{n,t}$. This proves that $(F_{n,t})$\ is a martingale. The rest of the lemma is obvious.\\
%
{\bf Proof of Steps 1, 2, and 4.} Because of Lemma (\ref{lemma5.2}), Step 2 follows from Doob's Maximal Inequality for continuous time martingales (see \cite[Chapter VII, Section 11]{doob}). Step 1 follows from the uniform distribution of Brownian motion over $\T$ (see \cite[Corollary 3.6.2]{peter}). Step 4 is a consequence of the same property of Brownian motion. We give the details. We have
%
\begin{eqnarray*}
\tilde F^* & = & \sup_{(n,t)}| \tilde F_{n,t} | \geq \sup_n | \tilde F_{n,\tau_n} |\\
& = & \sup_n \left| \sum_{m=0}^n H_m (d_m (f)) (c_{1,\tau_1},\ldots,c_{m,\tau_m}) \right|.
\end{eqnarray*}
But since $(c_{1,\tau_1},\ldots,c_{m,\tau_m})$ is equidistributed with $(\theta_1,\ldots,\theta_m)$, the right side of the displayed inequalities is equidistributed with $\sup_n \left| \sum_{m=0}^n H_m (d_m (f)) (\theta_1,\ldots,\theta_m) \right|,$ and Step 4 follows.\\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Proof of Step 3.} The proof may be done as in \cite[Theorem~4]{bgs}. We provide the details to show the role of analyticity on $\T^N$.
Here we call a function $\phi\in L^1(\T^N)$ analytic if its Fourier transform is supported in the half-space
$$ \O=\{0\} \cup \bigcup_{j=1}^N\{(m_1,m_2,\ldots,m_N)\in \Z^N :\ m_j>0,\ m_{j+1}=\cdots=m_N=0\}.$$
The following basic properties of analytic functions on $\T^N$ are easy to prove.
\begin{itemize}
\item A function $\phi\in L^1(\T^N)$ is analytic if and only if each term in its martingale difference decomposition, $d_j(\phi)$ ($j=1,\ldots,N$), is analytic in the $j$-th variable $\theta_j$ and has zero mean, i.e., $d_j(\phi)\in H^1_0(\T)$.
\item If $\phi$ is analytic then $\phi^2$ is also analytic. (This follows from $\O +\O=\O$.)
\item If $\phi$ is a trigonometric polynomial on $\T^N$, then $\phi+i H(\phi)$ is analytic.
\end{itemize}
%
Getting back to the proof of Step~3, let
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
$$g(r_1\theta_1,\dots,r_N\theta_N) = f(r_1\theta_1,\dots,r_N\theta_N) + i H (f)(r_1\theta_1,\dots,r_N\theta_N),$$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
and let
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
$$h = g^2.$$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Both $g$\ and $h$\ are analytic on $\T^N$. Hence the functions $d_m (g)(\theta_1,\ldots , r_m\theta_m)$ and $d_m (h)(\theta_1,\ldots , r_m\theta_m)$ are analytic in the $m$-th variable. Form the functions $G_{n,t}$\ and $H_{n,t}$\ as in (\ref{brownian-martingale}). By Lemma (\ref{lemma5.2}), $G_{n,t}$\ and $H_{n,t}$ are martingales relative to $\A_{n,t}$.
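The following illustration of the half-space $\O$ and of the property $\O+\O=\O$ in the lowest nontrivial dimension is ours and may be skipped; it is included only to make the definition concrete.

```latex
% Illustrative remark (not part of the original argument): the case N = 2.
For $N=2$ the half-space reduces to
$$ \O = \{(0,0)\} \,\cup\, \{(m_1,0) : m_1>0\} \,\cup\, \{(m_1,m_2) : m_2>0\}, $$
that is, $\O$ consists of the lattice points whose last nonzero coordinate
is positive, together with the origin. Sums of such points clearly remain
in $\O$, which is the property $\O+\O=\O$ invoked above. For example, the
trigonometric polynomial
$$ \phi(\theta_1,\theta_2) = e^{i\theta_1} + e^{i(-3\theta_1+\theta_2)} $$
is analytic, and its square has spectrum
$\{(2,0),\,(-2,1),\,(-6,2)\}\subset \O$, so $\phi^2$ is analytic as well.
```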
We claim that, because of analyticity, we have
\begin{equation} H_{n,t} = G_{n,t}^2. \label{square-of-analytic} \end{equation}
To see this, write
$$ g(\theta_1,\dots,\theta_N) = \sum_{k=1}^N d_k(g)(\theta_1,\dots,\theta_k) $$
and
$$ h(\theta_1,\dots,\theta_N) = \sum_{k=1}^N d_k(h)(\theta_1,\dots,\theta_k).$$
Then, since all the exponents of $\theta_n$ are positive, we get
%
$$ \left( \sum_{k=1}^{n-1} d_k(g)(\theta_1,\dots,\theta_k) + d_n(g)(\theta_1,\dots,r_n\theta_n) \right)^2 = \sum_{k=1}^{n-1} d_k(h)(\theta_1,\dots,\theta_k) + d_n(h)(\theta_1,\dots,r_n\theta_n) $$
%
and (\ref{square-of-analytic}) easily follows. Consequently, since the functions $H_{n,t}$ form a martingale relative to the $\sigma$-algebras $\A_{n,t}$, so does $G_{n,t}^2$. With this fact in hand, we can now proceed with the proof of Step 3 in exactly the same way as in \cite[pp.~148--149]{bgs}. We need a lemma.
%%%%%%%%%%
\begin{lemma5.3} Suppose that $\mu$ and $\nu$ are stopping times with $\mu\leq \nu$ a.e. Let $f$ be a real-valued trigonometric polynomial on $\T^N$ with $\int f\, dP = 0$. Then,
$$\| \tilde{F}_\nu -\tilde{F}_\mu \|_2= \| F_\nu -F_\mu \|_2.$$
\label{lemma5.3} \end{lemma5.3}
%
{\bf Proof.} Using the fact that $G_{n,t}^2$ is a martingale, we get
$$ 0=\E(G_0^2)=\E(G_\mu^2).$$
Similarly, $\E(G_{\nu}^2)=0$. Since $G = F + i\tilde{F}$, the real part of $G^2$ is $F^2 - \tilde{F}^2$, and taking real parts in these equalities gives $\E F_\mu^2 = \E \tilde{F}_\mu^2$ and $\E F_\nu^2 = \E \tilde{F}_\nu^2$. Next, we show that $\E(F_\mu F_\nu)= \E(F_\mu^2)$ and $\E(\tilde{F}_\mu \tilde{F}_\nu)= \E(\tilde{F}_\mu^2)$. We start with the first equality.
Using Doob's Optional Sampling Theorem and basic properties of the conditional expectation, we see that
%
$$\E(F_\nu | \A_\mu)=F_\mu ,$$
%
and hence, since $F_\mu$ is $\A_\mu$-measurable,
%
$$F_\mu \E(F_\nu | \A_\mu)=F_\mu^2 ,$$
%
and so
%
$$\E(F_\mu F_\nu | \A_\mu)=F_\mu^2 .$$
%
Integrating both sides of the last equality, we get $\E(F_\mu F_\nu)= \E(F_\mu^2)$. The second equality can be proved similarly. Thus
\begin{eqnarray*}
\E(F_\mu -F_\nu)^2 &=& \E F^2_\mu + \E F^2_\nu -2 \E (F_\mu F_\nu)\\
&=& \E F^2_\mu + \E F^2_\nu -2 \E (F_\mu^2) \\
&=& \E F^2_\nu - \E (F_\mu^2) \\
&=& \E(\tilde{F}_\mu - \tilde{F}_\nu)^2,
\end{eqnarray*}
which completes the proof.\\
%%%%%%%%%%
The above lemma enables us to establish a fundamental inequality. This is our version of the `good $\lambda$' inequality for conjugate functions on $\T^N$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{lemma5.4} With the notation of the previous lemma, let $\alpha \ge 1$\ and $\beta > 1$. Then there is a constant $c$, depending only on $\alpha$\ and $\beta$, such that whenever $\lambda > 0$\ satisfies
$$ P(G^* > \lambda) \le \alpha P(G^* > \beta \lambda) ,$$
then
$$ P(G^* > \lambda) \le c\,P(c\,F^* > \lambda) .$$
\label{lemma5.4} \end{lemma5.4}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\bf Proof.} Define the stopping times
$$ \mu = \inf\{\, (n,t)\in \cT : |G_{n,t}| > \lambda \,\} , \qquad \nu = \inf\{\, (n,t)\in \cT : |G_{n,t}| > \beta \lambda \,\} . $$
If the set $\{\, (n,t) : |G_{n,t}| > \lambda \,\}$\ is empty, then we set $\mu = \infty$. Otherwise $\mu$\ is such that $|G_{n,t}| \le \lambda$\ whenever $(n,t) < \mu$, and $|G_\mu| = \lambda$.
We define $\nu$\ similarly. Also, we have that $\mu \le \nu$, that $|G_\mu| = \lambda$\ on the set $\{\mu\ne\infty\} = \{G^* > \lambda\}$, and that $|G_\nu| = \beta\lambda$\ on the set $\{\nu\ne\infty\} = \{G^* > \beta\lambda\}$. Thus if $\lambda$\ satisfies the hypothesis of the lemma, then
\begin{eqnarray*}
\E(\chi_{G^*>\lambda} (F_\nu-F_\mu)^2) & = & \left\| F_\nu - F_\mu\right\|_2^2 \\
& = & \frac{1}{2} \left\| G_\nu - G_\mu\right\|_2^2 \\
& \ge & \frac{1}{2} (\beta \lambda - \lambda)^2 P(G^*>\beta \lambda) \\
& \ge & c \lambda^2 P(G^* > \lambda ).
\end{eqnarray*}
Also
$$ \E(\chi_{G^*>\lambda} (F_\nu-F_\mu)^4) \le \left\| G_\nu - G_\mu\right\|_4^4 \le c \lambda^4 P(G^* > \lambda ). $$
Thus, by a lemma of Paley and Zygmund \cite[Chapter V, (8.26)]{zyg},
$$ P(G^* > \lambda) \le c P(c|F_\nu - F_\mu| > \lambda) .$$
Since $|F_\nu - F_\mu| \le 2 F^*$, the lemma follows.
\bigskip
Now let us finish by proving Step~3. It is sufficient to show that $\left\| G^* \right\|^*_{1,\infty} \le c \, \left\| F^* \right\|^*_{1,\infty}$. Suppose that
$$ \left\| G^* \right\|^*_{1,\infty} = \sup_{\lambda>0} \lambda P(G^*>\lambda) = A .$$
Pick $\lambda_0$\ such that $2\lambda_0 P(G^*>2\lambda_0) \ge A/2$. Then $\lambda_0 P(G^*>\lambda_0) \le A \le 4\lambda_0 P(G^*>2\lambda_0)$, and thus $\lambda_0$\ satisfies the hypothesis of the lemma with $\alpha = 4$\ and $\beta = 2$. Since also $\lambda_0 P(G^*>\lambda_0) \ge \lambda_0 P(G^*>2\lambda_0) \ge A/4$, it follows from the lemma that
$$ c\,\| F^*\|^*_{1,\infty} \ge \lambda_0 P(c F^* > \lambda_0) \ge \frac{\lambda_0}{c}\, P(G^* > \lambda_0) \ge \frac{A}{4c} ,$$
as desired.
{\bf Acknowledgements.} The research of the authors was supported by grants from the National Science Foundation (U.\ S.\ A.).
\begin{thebibliography}{Dillo 83}
\bibitem{ams} N.\ Asmar and S.\ Montgomery-Smith, {\em Hahn's Embedding Theorem for orders and analysis on groups with ordered dual groups}, Colloq.\ Math., {\bf LXX}\ (1996), 235--252.
\bibitem{bg} D.\ L.\ Burkholder and R.\ F.\ Gundy, {\em Extrapolation and interpolation of quasi-linear operators on martingales}, Acta Math., {\bf 124}\ (1970), 249--304.
\bibitem{bgs} D.\ L.\ Burkholder, R.\ F.\ Gundy, and M.\ L.\ Silverstein, {\em A maximal function characterization of the class $H^p$}, Trans.\ Amer.\ Math.\ Soc., {\bf 157}\ (1971), 137--153.
\bibitem{doob} J.\ L.\ Doob, ``Stochastic Processes,'' Wiley Publications in Statistics, New York, 1953.
\bibitem{doob2} J.\ L.\ Doob, {\em Semimartingales and subharmonic functions}, Trans.\ Amer.\ Math.\ Soc., {\bf 77}\ (1954), 86--121.
\bibitem{hel} H.\ Helson, {\em Conjugate series in several variables}, Pac.\ J.\ Math., {\bf 9}\ (1959), 513--523.
\bibitem{peter} K.\ E.\ Petersen, ``Brownian Motion, Hardy Spaces and Bounded Mean Oscillation,'' London Math.\ Soc.\ Lecture Note Series, No.\ 28, Cambridge University Press, 1977.
\bibitem{zyg} A.\ Zygmund, ``Trigonometric Series,'' 2nd Edition, 2 vols., Cambridge University Press, 1959.
\end{thebibliography}
\end{document}