Polarization of Impulsive Signals at Venus - Appendix

Polarization of the Impulsive Signals Observed in the Nightside Ionosphere of Venus

R. J. Strangeway

Institute of Geophysics and Planetary Physics,
University of California at Los Angeles


J. Geophys. Res., 96, 22,741-22,752, 1991
(Received: June 11, 1991; accepted: October 1, 1991)
Copyright 1991 by the American Geophysical Union.
Paper Number 91JE02506.




Appendix: Statistical Error Analysis

       In order to readily compare phase angle distributions such as those plotted in Figures 6-8, we have elected to fit the histograms with a sinusoid, using a least squares analysis. This allows us to reduce the information in each histogram to a single phase angle. However, the histograms also convey information about the degree of confidence in the determination of the phase angle. For example, the relative phase histogram in Figure 6 is much smoother than that shown in Figure 8. This implies that in a statistical sense we should have more confidence in the phase angle as determined in Figure 6, in the absence of any consideration of other factors, such as contamination by interference. To this end, we carry out a probability and error analysis using methods for curvilinear regression, such as those discussed by Pollard [1977].

       Each histogram is divided into 10° bins covering a range of 180°. We shall denote the bin angles by x_i and the corresponding number of samples in a bin by y_i. We fit a sinusoid to these data of the form

    y = \bar{y} + y_s \sin 2(x - x_1) + y_c \cos 2(x - x_1)                    (A1)

where \bar{y} is the mean of the y_i, and for convenience we use x_1 as a reference angle. Equation (A1) can also be written as

    y = \bar{y} + y_0 \cos 2(x - x_1 - x_0)                                    (A2)

with y_0 = (y_s^2 + y_c^2)^{1/2} and x_0 = (1/2) \tan^{-1}(y_s / y_c). The form given by (A1) is used to perform the regression analysis, while (A2) defines the best fit phase angle (x_0).
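       As an illustration, the conversion from the regression coefficients to the best fit amplitude and phase angle can be sketched in a few lines of Python (the function name is ours, and the two-argument arctangent is used only to resolve the quadrant):

    import numpy as np

    def amplitude_and_phase(y_s, y_c):
        """Convert the sine and cosine coefficients of (A1) to the best fit
        amplitude y0 and phase angle x0 (in degrees) of (A2)."""
        y0 = np.hypot(y_s, y_c)                      # y0 = (ys^2 + yc^2)^(1/2)
        x0 = 0.5 * np.degrees(np.arctan2(y_s, y_c))  # x0 = (1/2) tan^-1(ys/yc)
        return y0, x0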

       The coefficients in (A1) are determined through minimization of the residual sum of squares

    \sum_{i=1}^{18} [ y_i - \bar{y} - y_s \sin 2(x_i - x_1) - y_c \cos 2(x_i - x_1) ]^2

giving

    y_s = (1/9) \sum_{i=1}^{18} y_i \sin 2(x_i - x_1)                          (A3a)

    y_c = (1/9) \sum_{i=1}^{18} y_i \cos 2(x_i - x_1)                          (A3b)

       In order to determine the significance of the coefficients given by (A3a) and (A3b) we compare the ratio of the variance due to regression to the residual variance with that expected for the ratio of two chi-square random variables [Pollard, 1977]. The ratio of two chi-square variables, each divided by its degrees of freedom, follows the F distribution. With the particular functional form we have chosen, the test statistic can be calculated using the relationship

    \sum_{i=1}^{18} [ y_i - \bar{y} - y_s \sin 2(x_i - x_1) - y_c \cos 2(x_i - x_1) ]^2
        = \sum_{i=1}^{18} ( y_i - \bar{y} )^2 - 9 ( y_s^2 + y_c^2 )            (A4)

where the left-hand side of (A4) gives the residual sum of squares, which we denote by S_r^2, and the second term on the right-hand side is the regression sum of squares, which we denote by S_f^2.

       There are 18 degrees of freedom in the data, and we calculate three regression parameters, including the mean. Since we are only testing for the coefficients y_s and y_c being significantly nonzero, there are only 2 degrees of freedom for the variance of the regression, and 15 for the residual. Dividing the regression and residual sums of squares by their respective degrees of freedom gives their variances, s_f^2 = S_f^2 / 2 and s_r^2 = S_r^2 / 15. The variance ratio, F = s_f^2 / s_r^2, is compared to the F distribution for two chi-square random variables with 2 and 15 degrees of freedom, usually denoted F_{2,15}.
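       The regression and the test statistic can be computed along the following lines. This is a sketch only: the function and variable names are ours, the histogram is assumed to be supplied as 18 counts in 10° bins, and a general least squares solver is used in place of the explicit sums of (A3a) and (A3b):

    import numpy as np

    def sinusoid_fit_statistics(counts, bin_angles_deg):
        """Fit y = mean + ys sin 2(x - x1) + yc cos 2(x - x1) to an 18-bin
        histogram and return the coefficients and the variance ratio F."""
        y = np.asarray(counts, dtype=float)
        x = np.radians(np.asarray(bin_angles_deg, dtype=float) - bin_angles_deg[0])
        design = np.column_stack([np.ones_like(x), np.sin(2 * x), np.cos(2 * x)])
        (mean, y_s, y_c), *_ = np.linalg.lstsq(design, y, rcond=None)
        model = design @ np.array([mean, y_s, y_c])
        S_r2 = np.sum((y - model) ** 2)          # residual sum of squares (15 dof)
        S_f2 = np.sum((model - y.mean()) ** 2)   # regression sum of squares (2 dof)
        F = (S_f2 / 2.0) / (S_r2 / 15.0)         # F = sf^2 / sr^2
        return y_s, y_c, F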

       Rather than simply compute the test statistic F and compare it with a specific value of the F_{2,15} probability distribution (e.g., the upper 5%), we calculate the probability that F > F_{2,15}. If we denote the probability that F > F_{2,15} by P and let Q = 1 - P, then

    Q = [ 15 / ( 15 + 2F ) ]^{15/2}                                            (A5)

See, for example, (26.6.4) of Abramowitz and Stegun [1965]. In the figures and tables of the main text we give P as a percentage.
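       If SciPy is available (an assumption on our part, not a tool used in the paper), P can also be evaluated numerically from the cumulative F distribution; a minimal sketch, returning P as a percentage as in the main text:

    from scipy import stats

    def fit_probability(F):
        """Probability P (percent) that the observed variance ratio exceeds a
        random F(2,15) variate; Q = 1 - P is the upper tail area beyond F."""
        # Equivalent closed form for 2 and 15 degrees of freedom:
        # Q = (1 + 2*F/15) ** -7.5
        return 100.0 * stats.f.cdf(F, 2, 15)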

       The residual variance can also be used to give confidence limits on the coefficients y_s,c, denoted by Δy_s,c. Following Pollard [1977], the confidence limits are calculated using the Student's t distribution, giving Δy_s,c = t_{15} s_r / 9^{1/2}. The value of t_{15} used depends on the particular confidence limit required. For example, to obtain the 95% confidence limits, we use the value of t_{15} corresponding to the upper and lower 2.5% of the t distribution.
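       A sketch of this confidence limit using SciPy's t distribution (the function name and the 95% default are ours):

    from scipy import stats

    def coefficient_confidence_limit(s_r, confidence=0.95):
        """Two-sided confidence limit on y_s or y_c: t15 * sr / sqrt(9), where
        t15 is the Student's t value with 15 degrees of freedom cutting off the
        upper and lower tails (2.5% each for 95% limits)."""
        t15 = stats.t.ppf(0.5 + confidence / 2.0, 15)
        return t15 * s_r / 3.0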

       Since we use (A2) to specify the fit, we wish to express the error as an angular measure. Noting that Δy_s,c does not depend on the magnitude of the individual coefficients y_s,c, and that S_f^2 = 9 y_0^2, we define

and from this we calculate an angular error given by

       The functional form used to define the angular error in (A7) is somewhat arbitrary but has the advantage of only depending on the test statistic F, which we use to calculate the probability. The actual error depends on the confidence limit required, as shown in Figure A1. With the form given by (A7), the limiting error is 67.5° when F = 0, independent of the confidence limit. The figure shows that the error is reduced by roughly a factor of 2 if we use a 70% confidence limit, rather than the 95% limit actually employed in the main text. This is to be expected since these confidence limits correspond to roughly one and two standard deviations respectively for a normal probability distribution.

  Figure A1. Probability and angular error as a function of the test statistic. The probability is given by the single curve that approaches 100% for high values of the test statistic. The angular error depends on the degree of confidence desired, as indicated by the percentage labels. The horizontal dashed lines give the angular error for different confidence limits, assuming a test statistic that is 80% probable. In this paper we use 95% confidence limits when determining the error on the fit.

       It is not clear that (A7) is the best form to be used for assigning an error to the fit. For example, Figure A1 shows that the 80% confidence limit is around 34° when the fit is just significant at the 80% level. An error of 45° seems more appropriate. Consequently, we might consider a form such as Δx_0 = (1/2) \sin^{-1} ( F_{2,15} / F )^{1/2}, which does give a 45° error when the fit is just significant at the particular confidence level chosen. There is little difference between the two forms when the fit is moderately or highly significant. However, the main purpose of the error analysis is to allow us to compare the best fit phase angles as determined for different subsets of the data. The error is defined even when F_{2,15} / F > 1 in (A7), while the alternative form is not defined for F < F_{2,15}. As long as the larger errors (> 35°) are mainly used for comparative purposes rather than as absolute error estimates, the derivation given here is probably adequate.
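       The alternative form quoted above can be sketched as follows, with the critical value F_{2,15} at the chosen confidence level taken from the inverse F distribution (the function and parameter names are ours):

    import numpy as np
    from scipy import stats

    def alternative_angular_error(F, confidence=0.95):
        """Alternative angular error 0.5 * asin(sqrt(F_crit / F)), in degrees,
        where F_crit is the F(2,15) value at the chosen confidence level."""
        F_crit = stats.f.ppf(confidence, 2, 15)
        if F < F_crit:
            return float("nan")   # the alternative form is undefined for F < F_crit
        return 0.5 * np.degrees(np.arcsin(np.sqrt(F_crit / F)))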

