Chapter 3 – Numerical Solution Methods

3.2 Nonlinear equations and equation systems

One of the most basic numerical problems encountered in computational economics is to find the solution to a nonlinear equation or a whole system of nonlinear equations. A nonlinear equation system can usually be defined by a function f(x): R^n → R^n that maps an n-dimensional vector x into the space R^n. We call the solution to the nonlinear equation system f(x) = 0 a root of f. The root of a nonlinear equation system usually has no closed-form solution. Consequently, various numerical methods addressing the issue of root-finding have been developed. In this section, we introduce a very simple one-dimensional method called bisection search. As this method converges quite slowly, we show how to enhance speed by using Newton's method. We then discuss some modifications of Newton's algorithm and introduce a solver for multidimensional nonlinear equation systems. We close the chapter by presenting the very general class of fixed-point iteration methods. As in the previous section, all these methods will be demonstrated with a simple example.

Example

Suppose the demand function in a goods market is given by

    q_d(p) = 0.5 p^{-0.2} + 0.5 p^{-0.5},

where the first term denotes domestic and the second term export demand. Supply is assumed to be inelastic and given by q_s = 2 units. At what price p* does the market clear? Setting supply equal to demand leads to the equation

    0.5 p^{-0.2} + 0.5 p^{-0.5} = 2,

which can be reformulated as

    f(p) = 0.5 p^{-0.2} + 0.5 p^{-0.5} - 2 = 0.     (3.2)

The second of the two equations has exactly the form f(p) = 0 described above. The market-clearing price consequently is the solution to a nonlinear equation. Note that it is not possible to derive an analytical solution to this problem. Hence, we need a numerical method to solve for p*.

3.2.1 Bisection search in one dimension

A very intuitive and ad-hoc approach to solving nonlinear equations in one dimension is the so-called bisection search. The basic idea behind this method is a mathematical theorem called the intermediate value theorem. It states that, if [a, b] is an interval, f is a continuous function and f(a) and f(b) have different signs, i.e. f(a) · f(b) < 0, then f has a root within the interval [a, b]. The bisection algorithm successively bisects the interval [a, b] into intervals of equal size and uses the above theorem to test in which sub-interval the root of f is located.

Specifically, the algorithm proceeds as follows: suppose we had an interval [a_i, b_i] with the property that f(a_i) · f(b_i) < 0, i.e. f has a root within this interval. We now bisect this interval into the sub-intervals [a_i, x_i] and [x_i, b_i] with

    x_i = \frac{a_i + b_i}{2}.

We can now test in which sub-interval the root of f is located and calculate a new interval [a_{i+1}, b_{i+1}] by

    a_{i+1} = a_i  and  b_{i+1} = x_i   if f(a_i) · f(x_i) < 0,
    a_{i+1} = x_i  and  b_{i+1} = b_i   otherwise.

Figure 3.1 demonstrates the approach.

[Figure 3.1: Bisection search for finding the root of a function]

Note that, due to the bisection of [a_i, b_i], we successively halve the interval size with every iteration step. Consequently, x_i converges towards the actual root x* of f. We stop the iteration process as soon as x_i has sufficiently converged, i.e. the difference between two successive values is smaller than an exogenously specified tolerance level ε:

    |x_{i+1} - x_i| = |x_{i+1} - a_{i+1}| = |x_{i+1} - b_{i+1}| = \frac{1}{2^{i+2}} |b_0 - a_0| < ε.
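As a side note, the stopping criterion also tells us roughly in advance how many iterations a given tolerance requires, since

    \frac{1}{2^{i+2}} |b_0 - a_0| < ε   is equivalent to   i > \log_2(|b_0 - a_0| / ε) - 2.

For the initial interval [0.05, 0.25] and the tolerance 10^{-6} used in Program 3.5 below, \log_2(0.2/10^{-6}) ≈ 17.6, so the iteration should terminate after roughly 18 steps, irrespective of the shape of f.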
Program 3.5 shows how to find the equilibrium price in the above example by using bisection search.

Program 3.5: Bisection search in one dimension

program bisection

    ! variable declaration
    implicit none
    integer :: iter
    real*8 :: x, a, b, fx, fa, fb

    ! set initial guesses and function values
    a = 0.05d0
    b = 0.25d0
    fa = 0.5d0*a**(-0.2d0) + 0.5d0*a**(-0.5d0) - 2d0
    fb = 0.5d0*b**(-0.2d0) + 0.5d0*b**(-0.5d0) - 2d0

    ! check whether there is a root in [a,b]
    if(fa*fb >= 0d0)then
        stop 'Error: There is no root in [a,b]'
    endif

    ! start iteration process
    do iter = 1, 200

        ! calculate new bisection point and function value
        x = (a+b)/2d0
        fx = 0.5d0*x**(-0.2d0) + 0.5d0*x**(-0.5d0) - 2d0
        write(*,'(i4,f12.7)')iter, abs(x-a)

        ! check for convergence
        if(abs(x-a) < 1d-6)then
            write(*,'(/a,f12.7,a,f12.9)')' x = ',x,'    f = ',fx
            stop
        endif

        ! calculate new interval
        if(fa*fx < 0d0)then
            b = x
            fb = fx
        else
            a = x
            fa = fx
        endif
    enddo

    write(*,'(a)')'Error: no convergence'

end program

The program proceeds as follows: we first have to make an initial guess of the interval in which the actual root of f is located; we chose a_0 = 0.05 and b_0 = 0.25 in our case. We then calculate the function values at the interval endpoints via (3.2) and check whether our condition for a root of f in [a_0, b_0] is satisfied. If not, we write an error message to the console and abort the program. Having prepared all this, our iteration process starts by calculating x_0. Note that we specify a maximum number of iterations in our do-loop. If there is no convergence after 200 iterations, the program is aborted with an error message. Having calculated the new x and the respective function value, we can check whether two successive iterates satisfy our convergence condition. If this is the case, we stop the program and display our approximation of the root of f. If not, we choose the appropriate sub-interval and begin a new iteration.

Looking at the output of the program, we find that the distance between two successive iterates is halved in every iteration step. This is obvious from

    |x_{i+1} - x_i| = \frac{1}{2^{i+2}} |b_0 - a_0| = \frac{1}{2} \cdot \frac{1}{2^{i+1}} |b_0 - a_0| = \frac{1}{2} |x_i - x_{i-1}|.

In general, we call the convergence speed linear if |x_{i+1} - x_i| < c |x_i - x_{i-1}| with a constant 0 < c < 1, i.e. the difference between the solutions of two successive iteration steps diminishes by a constant factor c. Hence, bisection converges linearly to the actual root of f. In the next section, we show how to enhance this convergence speed by using some more information about the shape of our function.

3.2.2 Newton's method in one dimension

One of the best known and most efficient methods to solve a nonlinear equation is the so-called Newton's method. It is based on the idea of successive linearization, which allows us to replace a nonlinear root-finding problem with a sequence of simpler linear ones. The solutions of the simpler problems finally converge to the solution of the nonlinear problem.

Starting at an arbitrary point x_0 at which our function is defined, we can approximate f by a first-order Taylor expansion, i.e.

    \hat{f}(x) ≈ f(x_0) + f'(x_0)(x - x_0).

As the function value f(x_0) and the derivative of f at x_0 usually are known, we can easily find the solution to the equation \hat{f}(x) = 0. This solution is given by

    x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}.

Figure 3.2 demonstrates the approach: we approximate the function f linearly at the point x_0.
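The excerpt ends at this point, before the corresponding program listing. Purely for illustration, a minimal sketch of such a Newton iteration for the market-clearing example (3.2) might look as follows; the program name, the starting value p = 0.05 and the analytical derivative f'(p) = -0.1 p^{-1.2} - 0.25 p^{-1.5} are choices made here, not taken from the text.

program newton

    ! variable declaration
    implicit none
    integer :: iter
    real*8 :: x, xold, fx, fprime

    ! starting value (hypothetical choice, f just has to be defined there)
    x = 0.05d0

    ! start iteration process
    do iter = 1, 200

        xold = x

        ! function value and analytical derivative of (3.2)
        fx     =  0.5d0*xold**(-0.2d0) + 0.5d0*xold**(-0.5d0) - 2d0
        fprime = -0.1d0*xold**(-1.2d0) - 0.25d0*xold**(-1.5d0)

        ! Newton update: root of the linear approximation at xold
        x = xold - fx/fprime

        write(*,'(i4,f12.7)')iter, abs(x-xold)

        ! check for convergence
        if(abs(x-xold) < 1d-6)then
            write(*,'(/a,f12.7)')' x = ',x
            stop
        endif
    enddo

    write(*,'(a)')'Error: no convergence'

end program

With the same tolerance of 10^{-6}, such a sketch typically needs only a handful of iterations for this example, compared to roughly 18 bisection steps, which illustrates the speed gain from exploiting derivative information.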