
(2)  A_1 cos(ω_1 t_1) + B_1 sin(ω_1 t_1) + ⋯ + A_m cos(ω_m t_1) + B_m sin(ω_m t_1) = S(t_1)
     A_1 cos(ω_1 t_2) + B_1 sin(ω_1 t_2) + ⋯ + A_m cos(ω_m t_2) + B_m sin(ω_m t_2) = S(t_2)
     ⋮
     A_1 cos(ω_1 t_N) + B_1 sin(ω_1 t_N) + ⋯ + A_m cos(ω_m t_N) + B_m sin(ω_m t_N) = S(t_N)

To demonstrate the generalized process of least-square-error curve fitting, we will assume that we know the ω's. (The method of computing them also uses such curve fitting and is described in the next section.) We wish to solve equations (2) for the A's and the B's in such a way that the resulting equation will best fit the data points S(t_1), S(t_2), etc., in the least-square-error sense.

To do so, we must derive from the N equations (2), 2m new equations in the 2m unknowns of interest (the A's and B's). We proceed as follows:

Form a matrix from the coefficients of the A's and B's in equations (2), together with the right-hand equation members, as follows:

(3)  cos(ω_1 t_1)  sin(ω_1 t_1)  ⋯  cos(ω_m t_1)  sin(ω_m t_1)  S(t_1)
     cos(ω_1 t_2)  sin(ω_1 t_2)  ⋯  cos(ω_m t_2)  sin(ω_m t_2)  S(t_2)
     ⋮
     cos(ω_1 t_N)  sin(ω_1 t_N)  ⋯  cos(ω_m t_N)  sin(ω_m t_N)  S(t_N)

From this matrix we now form a second matrix by the following rules:

• Multiply the elements of each row of the first matrix by the element of that row which lies in the first column. Sum the products by columns to get the elements of the first row of the second matrix.

• Multiply the elements of each row of the first matrix by the element of that row which lies in the second column. Sum the products by columns to get the elements of the second row of the second matrix.

• Continue in this manner until you have converted the entire first matrix. The new matrix will consist of 2m + 1 columns and 2m rows.

The second matrix is the set of coefficients and right-hand members of a new group of equations in the unknown A's and B's, located in the same positions in the members of these equations as in equations (2). Simultaneous solution of the new equation set yields the desired A and B coefficients. Substituting these in equation (1) completes the least-square-error data fitting process.
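The procedure above can be sketched in a few lines of pure Python. The example below is hypothetical (the frequency w, the sample times, and the function names second_matrix and solve are all assumptions for illustration, not from the original article): it builds the first matrix for a single known frequency, forms the second matrix by the row-multiply-and-sum rule, and solves the resulting pair of equations by Gaussian elimination.

```python
import math

def second_matrix(first):
    """Apply the article's rule: multiply every row of the first matrix by
    its k-th element and sum the products down each column; the sums form
    row k of the second matrix.  A first matrix of N rows and 2m+1 columns
    (coefficients plus right-hand member) yields 2m rows of 2m+1 columns."""
    n_cols = len(first[0])
    return [[sum(row[k] * row[j] for row in first) for j in range(n_cols)]
            for k in range(n_cols - 1)]

def solve(aug):
    """Gaussian elimination with partial pivoting on an augmented matrix."""
    n = len(aug)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(aug[r][i]))
        aug[i], aug[p] = aug[p], aug[i]
        for r in range(i + 1, n):
            f = aug[r][i] / aug[i][i]
            aug[r] = [a - f * b for a, b in zip(aug[r], aug[i])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (aug[i][-1] - sum(aug[i][j] * x[j]
                                 for j in range(i + 1, n))) / aug[i][i]
    return x

# Hypothetical data: samples of 2*cos(w*t) + 0.5*sin(w*t) at a known w.
w = 1.3
ts = [0.1 * k for k in range(20)]
S = [2.0 * math.cos(w * t) + 0.5 * math.sin(w * t) for t in ts]

# First matrix, one row per data point: [cos(w*t), sin(w*t), S(t)].
first = [[math.cos(w * t), math.sin(w * t), s] for t, s in zip(ts, S)]
A1, B1 = solve(second_matrix(first))
```

The row-by-row rule is simply the normal equations AᵀA x = Aᵀb of ordinary least squares written out element by element, which is why solving the second matrix minimizes the squared error.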

SOLVING FOR FREQUENCY

From equation (1), it can be shown that the equation determining ω is of the form:

(4)  2 cos(mω) − 2a_1 cos[(m − 1)ω] − ⋯ − 2a_{m−1} cos(ω) − a_m = 0

Equation (4) must now be expressed in terms of Chebyshev polynomials and the parameter m. Since cos(kω) = T_k(cos ω), dividing equation (4) through by 2 gives:

(5)  T_m(cos ω) − a_1 T_{m−1}(cos ω) − ⋯ − a_{m−1} T_1(cos ω) − a_m/2 = 0

The Chebyshev polynomials to be used in this expression (T_m(cos ω), T_{m−1}(cos ω), etc.) are listed in equation (6), where x = cos(ω) in this case.

Next, the a's in equation (5) are solved for by a technique known as Prony's method. Application of this method in this situation results in (N − 2m) equations in the unknown a's, where the S's are our filter output data points S_0, S_1, …, S_{N−1}.

We now set up these equations as follows:

(7)  (S_1 + S_{2m−1})a_1 + (S_2 + S_{2m−2})a_2 + ⋯ + (S_{m−1} + S_{m+1})a_{m−1} + S_m a_m = S_0 + S_{2m}
     (S_2 + S_{2m})a_1 + (S_3 + S_{2m−1})a_2 + ⋯ + (S_m + S_{m+2})a_{m−1} + S_{m+1} a_m = S_1 + S_{2m+1}
     ⋮
     (S_{N−2m} + S_{N−2})a_1 + ⋯ + S_{N−m−1} a_m = S_{N−2m−1} + S_{N−1}

As the next step, the generalized least-square-error curve-fit procedure of the preceding section is used to obtain the m values of a from equations (7). The step-by-step process is exactly the same as before, except that the coefficients of the a's in equations (7) are now used to set up the first matrix.

Substituting the derived values of the a's in equation (5) results in an equation of degree m with (cos ω) as the unknown. From the m roots of this equation, the desired values of ω are found.
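For the simplest case, m = 1, the whole chain from equations (7) to ω can be sketched in a few lines of Python. This is a hypothetical illustration (the sample spacing dt, the test frequency, and the function name prony_rows are assumptions): with a single unknown a_1, the least-square solution is a plain ratio, and equation (5) reduces to T_1(cos ω) = a_1/2.

```python
import math

def prony_rows(S, m):
    """Set up equations (7): for n = m .. N-m-1 the coefficients of
    a_1 .. a_{m-1} are (S[n-m+j] + S[n+m-j]), the coefficient of a_m
    is S[n], and the right-hand member is S[n-m] + S[n+m]."""
    N = len(S)
    rows = []
    for n in range(m, N - m):
        row = [S[n - m + j] + S[n + m - j] for j in range(1, m)]
        row.append(S[n])                   # coefficient of a_m
        row.append(S[n - m] + S[n + m])    # right-hand member
        rows.append(row)
    return rows

# Hypothetical single-sinusoid data (m = 1): S_n = sin(w*dt*n + 0.4).
dt, w_true = 0.2, 1.7
S = [math.sin(w_true * dt * n + 0.4) for n in range(50)]

# With m = 1 each row is [S_n, S_{n-1} + S_{n+1}], so the least-square
# value of a_1 is sum(coef * rhs) / sum(coef**2).
rows = prony_rows(S, 1)
a1 = sum(c * r for c, r in rows) / sum(c * c for c, r in rows)

# Equation (5) for m = 1 reads T_1(cos w) - a_1/2 = 0, i.e. cos w = a_1/2.
# The recurrence runs in sample index, so divide by dt to get rad/s.
w_est = math.acos(a1 / 2.0) / dt
```

For m > 1, the same rows feed the first matrix of the general least-square procedure, and the m roots of the degree-m polynomial in cos ω are extracted by any standard root finder.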

COMPUTING AMPLITUDES

The ω's may now be substituted in equations (2), and the generalized least-square-error method of the preceding section used to solve for the amplitude coefficients (the A's and B's) as in the example of the first section.

DETERMINING COMPOSITE AMPLITUDES AND PHASES

The frequency and amplitude of each of the m sinusoid-cosinusoid pairs that best fit the filter output data are now known. Each pair can be further combined into a single sinusoid of the form:

A cos(ωt) + B sin(ωt) = C sin(ωt + φ)

where: C = √(A² + B²)  and:  φ = tan⁻¹(A/B)
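In code the conversion is a one-liner; the sketch below (function name assumed) uses atan2 rather than a bare arctangent so that φ falls in the correct quadrant even when B is negative or zero:

```python
import math

def to_sinusoid(A, B):
    """Combine A*cos(w*t) + B*sin(w*t) into the single term C*sin(w*t + phi).
    Expanding C*sin(w*t + phi) gives B = C*cos(phi) and A = C*sin(phi),
    hence C = sqrt(A**2 + B**2) and phi = atan2(A, B)."""
    return math.hypot(A, B), math.atan2(A, B)

C, phi = to_sinusoid(3.0, 4.0)    # hypothetical amplitude pair
```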

With this step, the task is complete, and the frequency, amplitude, and phase of each of the m sinusoidal components in the data have been identified. The final approximation equation is:

S(t) ≈ C_1 sin(ω_1 t + φ_1) + C_2 sin(ω_2 t + φ_2) + ⋯ + C_m sin(ω_m t + φ_m)

(6)  T_0(x) = 1
     T_1(x) = x
     T_2(x) = 2x^2 − 1
     T_3(x) = 4x^3 − 3x
     T_4(x) = 8x^4 − 8x^2 + 1
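Numerically there is no need to expand these polynomials. A short sketch (function name assumed) evaluates any T_k by the standard recurrence T_{k+1}(x) = 2x·T_k(x) − T_{k−1}(x), which generates exactly the forms listed in equation (6):

```python
import math

def chebyshev_T(k, x):
    """Evaluate T_k(x) via T_{k+1}(x) = 2*x*T_k(x) - T_{k-1}(x),
    starting from T_0(x) = 1 and T_1(x) = x."""
    t_prev, t_cur = 1.0, x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
    return t_cur

# The defining property T_k(cos w) = cos(k*w) is what turns
# equation (4) into equation (5).
```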



BIBLIOGRAPHY

I. DIGITAL FILTERING AND SMOOTHING METHODS

A New Technique for Increasing the Flexibility of Recursive Least-Squares Data Smoothing, Bell Telephone Laboratories Technical Memorandum, No. MM-60-4435-1, September, 1960.

Correlated Noise in Discrete Systems, J. D. Musa, Bell Telephone Laboratories Technical Memorandum, No. MM-62-6421-2, May, 1962.

Design Methods for Sampled Data Filters, J. F. Kaiser, Bell Telephone Laboratories.

Design of Numerical Filters with Applications to Missile Data Processing, Joseph F. A. Ormsby, Space Technology Laboratories, Inc. Technical Memorandum, No. STL/TR-60-0000-09123, March, 1960.

Digital Filters for Data Processing, Marcel A. Martin, General Electric Technical Information Series, No. 62SD484, October, 1962.

Filtering Sampled Functions, E. D. Fullenwider and B. I. McNamee, U. S. Naval Ordnance Laboratory Technical Memorandum, No. 64-104, July, 1956.

Frequency Domain Application in Data Processing, Marcel A. Martin, General Electric Technical Information Series, No. 57SD340, May, 1957.

Practical Aspects of Digital Spectral Analysis, W. Lloyd, Great Britain Royal Aircraft Establishment Technical Note, SPACE 11-AD-284-243, May, 1962.

Recursive Multivariate Differential-Correction Estimation Techniques, V. O. Mowery, Bell Telephone Laboratories Technical Memorandum, No. MM-64-4212-4, February, 1964.

II. NUMERICAL, STATISTICAL, AND SPECTRAL ANALYSIS (TEXTS)

Lanczos, Cornelius. Applied Analysis. Englewood Cliffs: Prentice-Hall, Inc., 1956.

Blackman, R. B. Linear Data Smoothing and Prediction in Theory and Practice. Reading, Mass.: Addison-Wesley Publishing Co., Inc., 1965.

Stuart, R. D. Introduction to Fourier Analysis. New York: Barnes & Noble, Inc., 1961.

Hildebrand, Francis B. Introduction to Numerical Analysis. New York: McGraw-Hill Book Company, 1956.

Ezekiel, Mordecai, and Karl A. Fox. Methods of Correlation and Regression Analysis (3rd ed.). New York: John Wiley & Sons, Inc., 1959.

Siegel, Sidney. Nonparametric Statistics for the Behavioral Sciences. New York: McGraw-Hill Book Company, 1956.


