Java example source code file (analysis.xml)
The analysis.xml Java example source code

<?xml version="1.0"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="./xdoc.xsl"?>
<document url="analysis.html">

  <properties>
    <title>The Commons Math User Guide - Numerical Analysis</title>
  </properties>

  <body>
    <section name="4 Numerical Analysis">
      <subsection name="4.1 Overview" href="overview">
        <p>
          The analysis package is the parent package for algorithms dealing with
          real-valued functions of one real variable. It contains dedicated sub-packages
          providing numerical root-finding, integration, interpolation and differentiation.
          It also contains a polynomials sub-package that considers polynomials with real
          coefficients as differentiable real functions.
        </p>
        <p>
          Function interfaces are intended to be implemented by user code to represent
          domain problems. The algorithms provided by the library will then operate on
          these functions to find their roots, or integrate them, or ... Functions can
          be multivariate or univariate, real vectorial or matrix valued, and they can
          be differentiable or not.
        </p>
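        <p>
          For instance, a user-defined function representing a domain formula could be
          written as in the following sketch (the class name and the formula are only
          illustrative placeholders, not part of the library):
        </p>
        <source>import org.apache.commons.math3.analysis.UnivariateFunction;

// illustrative domain function: f(x) = x^3 - 2x - 5
public class CubicFunction implements UnivariateFunction {
    public double value(double x) {
        return x * x * x - 2 * x - 5;
    }
}
</source>
        <p>
          An instance of such a class can then be passed to the solvers, integrators,
          interpolators and differentiators described in the following sections.
        </p>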
      </subsection>
      <subsection name="4.2 Error handling" href="errorhandling">
        <p>
          For user-defined functions, when the method encounters an error during
          evaluation, users must use their <em>own</em> unchecked exceptions. The
          following example shows the recommended way to do that, using root solving
          as the example (the same construct should be used for ODE integrators or
          for optimizations).
        </p>
        <source>private static class LocalException extends RuntimeException {

  // the x value that caused the problem
  private final double x;

  public LocalException(double x) {
    this.x = x;
  }

  public double getX() {
    return x;
  }

}

private static class MyFunction implements UnivariateFunction {
  public double value(double x) {
    double y = hugeFormula(x);
    if (somethingBadHappens) {
      throw new LocalException(x);
    }
    return y;
  }
}

public void compute() {
  try {
    solver.solve(maxEval, new MyFunction(), min, max);
  } catch (LocalException le) {
    // retrieve the x value that caused the problem with le.getX()
  }
}
</source>
        <p>
          As shown in this example, the exception is really something local to user
          code and there is a guarantee Apache Commons Math will not mess with it.
          The user is safe.
        </p>
      </subsection>
      <subsection name="4.3 Root-finding" href="rootfinding">
        <p>
          <a href="../apidocs/org/apache/commons/math3/analysis/solvers/UnivariateSolver.html">UnivariateSolver</a>,
          <a href="../apidocs/org/apache/commons/math3/analysis/solvers/UnivariateDifferentiableSolver.html">UnivariateDifferentiableSolver</a> and
          <a href="../apidocs/org/apache/commons/math3/analysis/solvers/PolynomialSolver.html">PolynomialSolver</a>
          provide means to find roots of
          <a href="../apidocs/org/apache/commons/math3/analysis/UnivariateFunction.html">univariate real-valued functions</a>,
          <a href="../apidocs/org/apache/commons/math3/analysis/differentiation/UnivariateDifferentiable.html">differentiable univariate real-valued functions</a>, and
          <a href="../apidocs/org/apache/commons/math3/analysis/polynomials/PolynomialFunction.html">polynomial functions</a> respectively.
          A root is the value where the function takes the value 0. Commons-Math
          includes implementations of the following root-finding algorithms:
        </p>
        <table border="1" align="center">
          <tr BGCOLOR="#CCCCFF">
            <td>Name</td><td>Function type</td><td>Convergence</td><td>Needs initial bracketing</td><td>Bracket side selection</td>
          </tr>
          <tr>
            <td>Bisection</td><td>univariate real-valued functions</td><td>linear, guaranteed</td><td>yes</td><td>yes</td>
          </tr>
          <tr>
            <td>Brent-Dekker</td><td>univariate real-valued functions</td><td>super-linear, guaranteed</td><td>yes</td><td>no</td>
          </tr>
          <tr>
            <td>bracketing nth order Brent</td><td>univariate real-valued functions</td><td>variable order, guaranteed</td><td>yes</td><td>yes</td>
          </tr>
          <tr>
            <td>Illinois Method</td><td>univariate real-valued functions</td><td>super-linear, guaranteed</td><td>yes</td><td>yes</td>
          </tr>
          <tr>
            <td>Laguerre's Method</td><td>polynomial functions</td><td>cubic for simple root, linear for multiple root</td><td>yes</td><td>no</td>
          </tr>
          <tr>
            <td>Muller's Method using bracketing to deal with real-valued functions</td><td>univariate real-valued functions</td><td>quadratic close to roots</td><td>yes</td><td>no</td>
          </tr>
          <tr>
            <td>Muller's Method using modulus to deal with real-valued functions</td><td>univariate real-valued functions</td><td>quadratic close to root</td><td>yes</td><td>no</td>
          </tr>
          <tr>
            <td>Newton-Raphson's Method</td><td>differentiable univariate real-valued functions</td><td>quadratic, non-guaranteed</td><td>no</td><td>no</td>
          </tr>
          <tr>
            <td>Pegasus Method</td><td>univariate real-valued functions</td><td>super-linear, guaranteed</td><td>yes</td><td>yes</td>
          </tr>
          <tr>
            <td>Regula Falsi (false position) Method</td><td>univariate real-valued functions</td><td>linear, guaranteed</td><td>yes</td><td>yes</td>
          </tr>
          <tr>
            <td>Ridder's Method</td><td>univariate real-valued functions</td><td>super-linear</td><td>yes</td><td>no</td>
          </tr>
          <tr>
            <td>Secant Method</td><td>univariate real-valued functions</td><td>super-linear, non-guaranteed</td><td>yes</td><td>no</td>
          </tr>
        </table>
        <p>
          Some algorithms require that the initial search interval brackets the root
          (i.e. the function values at interval end points have opposite signs). Some
          algorithms preserve bracketing throughout computation and allow the user to
          specify which side of the convergence interval to select as the root. It is
          also possible to force a side selection after a root has been found, even
          for algorithms that do not provide this feature by themselves. This is useful
          for example in sequential search, where a new search interval is started
          after a root has been found in order to find the next root. In this case,
          the user must select a side to ensure the loop does not get stuck on one
          root, always returning the same solution without making any progress.
        </p>
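        <p>
          As a sketch of this side-selection mechanism (the function, search interval
          and accuracy settings below are arbitrary placeholders), a bracketing solver
          could be used as follows:
        </p>
        <source>import org.apache.commons.math3.analysis.UnivariateFunction;
import org.apache.commons.math3.analysis.solvers.AllowedSolution;
import org.apache.commons.math3.analysis.solvers.BracketingNthOrderBrentSolver;

public class SideSelectionExample {
  public static void main(String[] args) {
    // placeholder function: f(x) = x * x - 2, with a root near sqrt(2)
    UnivariateFunction f = new UnivariateFunction() {
      public double value(double x) {
        return x * x - 2;
      }
    };

    // relative accuracy, absolute accuracy, maximal order of the method
    BracketingNthOrderBrentSolver solver =
        new BracketingNthOrderBrentSolver(1.0e-12, 1.0e-8, 5);

    // the interval [1, 2] brackets the root; ask for an approximation
    // lying on the left side of the convergence interval
    double root = solver.solve(100, f, 1.0, 2.0, AllowedSolution.LEFT_SIDE);
    System.out.println("root ~ " + root);
  }
}
</source>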
        <p>
          There are numerous non-obvious traps and pitfalls in root finding. First,
          the usual disclaimers due to the way real world computers calculate values
          apply. If the computation of the function provides numerical instabilities,
          for example due to bit cancellation, the root finding algorithms may behave
          badly and fail to converge or even return bogus values. There will not
          necessarily be an indication that the computed root is way off the true
          value. Secondly, the root finding problem itself may be inherently
          ill-conditioned. There is a "domain of indeterminacy", the interval for
          which the function has near zero absolute values around the true root,
          which may be large. Even worse, small problems like roundoff error may
          cause the function value to "numerically oscillate" between negative and
          positive values. This may again result in roots way off the true value,
          without indication. There is not much a generic algorithm can do if
          ill-conditioned problems are met. A way around this is to transform the
          problem in order to get a better conditioned function. Proper selection of
          a root-finding algorithm and its configuration parameters requires
          knowledge of the analytical properties of the function under analysis and
          numerical analysis techniques. Users are encouraged to consult a numerical
          analysis text (or a numerical analyst) when selecting and configuring a
          solver.
        </p>
        <p>
          In order to use the root-finding features, first a solver object must be
          created by calling its constructor, often providing relative and absolute
          accuracy. Using a solver object, roots of functions are easily found using
          the <code>solve</code> methods. These methods take a maximum iteration
          count <code>maxEval</code>, a function, and the <code>min</code> and
          <code>max</code> bounds of the search interval. The solvers can be
          configured with the following accuracy properties:
        </p>
        <table border="1" align="center">
          <tr BGCOLOR="#CCCCFF">
            <td>Property</td><td>Purpose</td>
          </tr>
          <tr>
            <td>Absolute accuracy</td>
            <td>
              The absolute accuracy is the (estimated) maximal difference between the
              computed root and the true root of the function. This is what most
              people think of as "accuracy" intuitively. The default value is chosen
              as a sane value for most real world problems, for roots in the range
              from -100 to +100. For accurate computation of roots near zero, in the
              range from -0.0001 to +0.0001, the value may be decreased. For computing
              roots much larger in absolute value than 100, the default absolute
              accuracy may never be reached because the given relative accuracy is
              reached first.
            </td>
          </tr>
          <tr>
            <td>Relative accuracy</td>
            <td>
              The relative accuracy is the maximal difference between the computed
              root and the true root, divided by the maximum of the absolute values of
              the numbers. This accuracy measurement is better suited for numerical
              calculations with computers, due to the way floating point numbers are
              represented. The default value is chosen so that algorithms will get a
              result even for roots with large absolute values, even while it may be
              impossible to reach the given absolute accuracy.
            </td>
          </tr>
          <tr>
            <td>Function value accuracy</td>
            <td>
              This value is used by some algorithms in order to prevent numerical
              instabilities. If the function is evaluated to an absolute value smaller
              than the function value accuracy, the algorithms assume they hit a root
              and return the value immediately. The default value is a "very small
              value". If the goal is to get a near zero function value rather than an
              accurate root, computation may be sped up by setting this value
              appropriately.
            </td>
          </tr>
        </table>
      </subsection>
      <subsection name="4.4 Interpolation" href="interpolation">
        <p>
          A <a href="../apidocs/org/apache/commons/math3/analysis/interpolation/UnivariateInterpolator.html">UnivariateInterpolator</a>
          is used to find a univariate real-valued function <code>f</code> which, for
          a given set of ordered pairs (<code>xi</code>, <code>yi</code>), yields
          <code>f(xi) = yi</code>.
        </p>
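        <p>
          As a sketch only (the sample points are arbitrary, and
          <code>SplineInterpolator</code> is just one of the implementations available
          in the interpolation package), an interpolator could be used as follows:
        </p>
        <source>import org.apache.commons.math3.analysis.UnivariateFunction;
import org.apache.commons.math3.analysis.interpolation.SplineInterpolator;
import org.apache.commons.math3.analysis.interpolation.UnivariateInterpolator;

public class InterpolationExample {
  public static void main(String[] args) {
    // arbitrary sample points (xi, yi); the xi values must be strictly increasing
    double[] x = { 0.0, 1.0, 2.0, 3.0 };
    double[] y = { 1.0, 0.5, 2.0, 1.5 };

    UnivariateInterpolator interpolator = new SplineInterpolator();
    UnivariateFunction f = interpolator.interpolate(x, y);

    // the interpolated function reproduces the sample points and can be
    // evaluated at intermediate points inside the interpolation range
    System.out.println("f(1.5) = " + f.value(1.5));
  }
}
</source>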