Career, Family And Living For The Lord
-
A Twenty-Five Year History

by James Thomas Lee, Jr. 12/25/97 Copyrighted 1995 by James Thomas Lee, Jr. Copyright Number: XXx xxx-xxx

Appendices

Appendix A. Least Squares Curve Fitting Technique {771 words}

a. Least Squares Curve Fitting Technique, Part 1

The least squares curve fitting technique is the mathematical method for finding the straight line, "y(i) = mx(i) + b," that best fits a set of observed points. It computes the optimum values for "m" and "b," that is, the values for which the total deviation between the observed and estimated values of "y(i)" is minimized. An important consideration for using this technique is that "x(i)" must be an independent variable, not dependent on anything else for its value, and "y(i)" must have some dependence on "x(i)." This type of dependence relationship is normally referred to as mathematical correlation, and obviously, the higher the correlation, the more reliable the estimate.
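
To make the idea of minimizing the total squared deviation concrete, the short Python sketch below compares two candidate lines against a small set of made-up observed points. The points and both candidate lines are hypothetical and are used only for illustration; they are not data from Figure 1.

    # A minimal sketch of the least squares criterion (hypothetical data).
    def sum_squared_error(points, m, b):
        # Total of (observed y(i) - estimated y(i))**2 over all points.
        return sum((y - (m * x + b)) ** 2 for x, y in points)

    # Hypothetical observed points (x(i), y(i)).
    points = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1)]

    print(sum_squared_error(points, 2.0, 0.0))   # close-fitting line, small total
    print(sum_squared_error(points, 1.0, 3.0))   # poorer line, much larger total

The line whose "m" and "b" give the smallest such total is, by definition, the least squares fit.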


1.  We begin by defining the function f = SUM((y(i) - mx(i) - b)**2),  where
				i => 1 to 5 (based on Figure 1)
				y(i) => the observed value for y
				x(i) => the observed value for x
				m => the slope of the best fit straight line
				b => the y-intercept of the straight line

2. The function, f, sums the squared deviations between the observed and estimated values of "y(i)." Notice that the differences between the observed and estimated values of "y(i)" have been squared in order to nullify the effect of any negative differences. The optimum values for "m" and "b" are found by using the principles of Theorem 1 (see Table 1 in Chapter One). Thus, we continue to steps 3 through 10 below by differentiating "f" with respect to "m" and with respect to "b," and then solving each expression for "m" and "b," respectively.


3.     Partial(f)
      -----------  =  2 SUM ( y(i) - mx(i) - b) ( -x(i) )
       Partial(m)

4.     Partial(f)
      -----------  =  2 SUM (-x(i)y(i) + mx(i)**2 + bx(i)) = 0
       Partial(m)

5.    2mSUM(x(i)**2) - 2SUM(x(i)y(i)) + 2bSUM(x(i)) = 0   5.  Note that the constant "2" cancels.

6.         SUM(x(i)y(i)) - bSUM(x(i))        6.  In steps 11 through 15, we will eliminate "b."
     m  =  ---------------------------
                  SUM(x(i)**2)
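
As a numerical check of steps 3 through 6, the Python sketch below fixes an arbitrary value of "b," computes "m" from the step 6 expression, and confirms that the partial derivative from step 3 is driven to essentially zero. The sample points are the same hypothetical ones used earlier, repeated so the sketch stands alone.

    # Check of step 6 against step 3 (hypothetical data).
    points = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1)]

    def m_given_b(points, b):
        # Step 6: m = (SUM(x(i)y(i)) - b*SUM(x(i))) / SUM(x(i)**2)
        sxy = sum(x * y for x, y in points)
        sx = sum(x for x, _ in points)
        sx2 = sum(x * x for x, _ in points)
        return (sxy - b * sx) / sx2

    def df_dm(points, m, b):
        # Step 3: Partial(f)/Partial(m) = 2*SUM((y(i) - m*x(i) - b)*(-x(i)))
        return 2 * sum((y - m * x - b) * (-x) for x, y in points)

    b = 0.5                        # arbitrary fixed "b" for this check
    m = m_given_b(points, b)
    print(df_dm(points, m, b))     # prints approximately 0.0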

b. Least Squares Curve Fitting Technique, Part 2

Note that "b" is found in the same way as "m." The first partial derivative of function, f, with respect to "b" is found and solved for "b."

7.     Partial(f)
      ------------  =  2 SUM ( y(i) - mx(i) - b) ( -1 )
       Partial(b)

8.     Partial(f)
      ------------  =  2 SUM (-y(i) + mx(i) + b) = 0
       Partial(b)

9.     2mSUM(x(i)) - 2SUM(y(i)) + 2nb = 0          9.  Note that the constant "2" cancels.

10.        SUM(y(i)) - mSUM(x(i))            10.  In steps 11 through 15, we will eliminate "m."
     b  = ------------------------
                    n
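
An analogous check applies to steps 7 through 10: fixing an arbitrary "m," the value of "b" from step 10 drives the partial derivative from step 7 to zero. A minimal Python sketch with the same hypothetical points:

    # Check of step 10 against step 7 (hypothetical data).
    points = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1)]

    def b_given_m(points, m):
        # Step 10: b = (SUM(y(i)) - m*SUM(x(i))) / n
        sy = sum(y for _, y in points)
        sx = sum(x for x, _ in points)
        n = len(points)
        return (sy - m * sx) / n

    def df_db(points, m, b):
        # Step 7: Partial(f)/Partial(b) = 2*SUM((y(i) - m*x(i) - b)*(-1))
        return 2 * sum((y - m * x - b) * (-1) for x, y in points)

    m = 2.0                        # arbitrary fixed "m" for this check
    b = b_given_m(points, m)
    print(df_db(points, m, b))     # prints approximately 0.0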

c. Least Squares Curve Fitting Technique, Part 3

Notice in this section that we begin with the results from the previous two sections and complete the derivation. Leaving "m" as a function of "b," or "b" as a function of "m," is an incomplete, circular solution. The true solution can be found either by directly substituting one of the values into the equation for the other or by using linear algebra. Because the linear algebra method has additional utility in more advanced applications, we will use that one.

11.  SUM(x(i)y(i)) = mSUM(x(i)**2) + bSUM(x(i))            11.  From step 5 (divided by 2)

12.  SUM(y(i)) = mSUM(x(i)) + nb                           12.  From step 9 (divided by 2)

13.  By expressing these equations in matrix form, we have

                    |               |       |                          |    |   |
                    | SUM(x(i)y(i)) |       | SUM(x(i)**2)   SUM(x(i)) |    | m |
                    |               |   =   |                          |    |   |
                    | SUM(y(i))     |       | SUM(x(i))          n     |    | b |
                    |               |       |                          |    |   |

This matrix relationship is of the form, Y = AX. If we find the inverse of matrix A, which is designated as A**-1, then by matrix multiplication, we can find X, which is made up of the constants "m" and "b." Hence, (A**-1)Y = (A**-1)AX = X.

14. We start by finding the determinant of "A" and then use the co-factors of each element of the matrix to find the matrix inverse.

Determinant of A (detA) = (SUM(x(i)**2))(n) - (SUM(x(i)))**2  =  nSUM(x(i)**2) - (SUM(x(i)))**2

                        |                                    |
                        |      n               -SUM(x(i))    |
                        |  ----------          ----------    |
                        |     detA                detA       |
                A**-1 = |                                    |
                        |  -SUM(x(i))           SUM(x(i)**2) |
                        |  ----------          ------------- |
                        |     detA                detA       |
                        |                                    |

15. Multiplying A**-1 by Y yields the following values for X, which in this case are "m" and "b."

        nSUM(x(i)y(i)) - SUM(x(i))SUM(y(i))        SUM(x(i)**2)SUM(y(i)) - SUM(x(i))SUM(x(i)y(i))
   m =  -----------------------------------,  b =  ----------------------------------------------
            nSUM(x(i)**2) - (SUM(x(i)))**2                 nSUM(x(i)**2) - (SUM(x(i)))**2
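
The closed-form results of step 15 can be applied directly. The Python sketch below computes "m" and "b" from the sums in the final formulas and, as a cross-check of the linear algebra route, solves the step 13 system with numpy. The sample points are the same hypothetical ones used in the earlier sketches.

    # Least squares fit from the step 15 formulas (hypothetical data).
    import numpy as np

    points = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1)]
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sx2 = sum(x * x for x, _ in points)

    det_a = n * sx2 - sx ** 2                  # determinant of A from step 14
    m = (n * sxy - sx * sy) / det_a            # step 15
    b = (sx2 * sy - sx * sxy) / det_a          # step 15
    print(m, b)

    # Cross-check: solve the step 13 system A X = Y directly.
    A = np.array([[sx2, sx], [sx, n]], dtype=float)
    Y = np.array([sxy, sy], dtype=float)
    print(np.linalg.solve(A, Y))               # same m and b

Both routes agree, which is expected because the step 15 formulas are simply the explicit inverse of the step 13 system.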


Appendix B. Multiple Linear Regression

