

Numerical methods analysis of fluid flow


Systems of nonlinear partial differential equations (PDEs) are needed to describe realistic multiphase, multidimensional flow in a reservoir. As a rule, these equations cannot be solved analytically; they must be solved with numerical methods. This article provides an overview of these methods.

One-dimensional convection/dispersion equation

To illustrate the mathematics, we discuss the numerical solution of the 1D convection/dispersion (C/D) equation

$$\frac{\partial C}{\partial t} + v\,\frac{\partial C}{\partial x} = D\,\frac{\partial^2 C}{\partial x^2}$$....................(1)

as introduced in vector analysis of fluid flow. As a reminder, v is velocity, D is dispersion, and C is concentration. Eq. 1 is a good example to use because the numerical methods it illustrates can be compared with the analytical solution given by Eq. 2.

$$C(x,t) = \frac{1}{2}\left[\operatorname{erfc}\!\left(\frac{x - vt}{2\sqrt{Dt}}\right) + \exp\!\left(\frac{vx}{D}\right)\operatorname{erfc}\!\left(\frac{x + vt}{2\sqrt{Dt}}\right)\right]$$....................(2)
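Eq. 2, in the form reconstructed above, assumes continuous injection of unit concentration at x = 0 into an initially solute-free medium. As a minimal sketch under that assumption, the illustrative helper cd_analytical below (not part of any library) evaluates the profile directly with the standard-library erfc:

```python
import math

def cd_analytical(x, t, v=1.0, D=0.01):
    """Evaluate Eq. 2 in the form reconstructed above: continuous injection
    (C = 1) at x = 0 into an initially solute-free medium (assumed conditions)."""
    denom = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((x - v * t) / denom)
    term2 = math.exp(v * x / D) * math.erfc((x + v * t) / denom)
    return 0.5 * (term1 + term2)

# Concentration profile after t = 0.5 days with v = 1 ft/day, D = 0.01 ft^2/day
print([round(cd_analytical(x / 10.0, 0.5), 3) for x in range(1, 11)])
```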

We first introduce the concept of finite differences to convert Eq. 1 to an equation that can be solved numerically. We then present a numerical representation of Eq. 1 and illustrate its solution. For more details, you should consult the reservoir simulation page, as well as sources in the literature.[1][2][3][4][5][6][7]

Finite differences

One way to solve a PDE is to convert the PDE to finite-difference form. The finite-difference form is obtained by replacing the derivatives in the PDE with differences that are obtained from Taylor's series. To illustrate the procedure, let us suppose that we know the function f(x) at two discrete points x = $x_i$ and x = $x_i$ + Δx, where Δx is an increment along the x-axis (Fig. 1). We can approximate the derivative df(x)/dx at x = $x_i$ by solving the Taylor series,

$$f(x_i + \Delta x) = f(x_i) + \Delta x \left.\frac{df}{dx}\right|_{x_i} + \frac{(\Delta x)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x_i} + \frac{(\Delta x)^3}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x_i} + \cdots$$....................(3)

for df(x)/dx. The result is

$$\left.\frac{df}{dx}\right|_{x_i} = \frac{f(x_i + \Delta x) - f(x_i)}{\Delta x} + E_T$$....................(4)

where ET is the term

$$E_T = -\frac{\Delta x}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x_i} - \frac{(\Delta x)^2}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x_i} - \cdots$$....................(5)

If we neglect ET, we obtain the finite-difference approximation of the first derivative.

$$\left.\frac{df}{dx}\right|_{x_i} \approx \frac{f(x_i + \Delta x) - f(x_i)}{\Delta x}$$....................(6)

Eq. 6 is an approximation because we are neglecting all of the terms in ET, which is called the truncation error. In the limit as the increment Δx approaches zero, the truncation error approaches zero, and the finite difference approaches the definition of the derivative.

The finite difference in Eq. 6 is called a forward difference. Other differences are possible. Two that we use next are the backward difference,

$$\left.\frac{df}{dx}\right|_{x_i} \approx \frac{f(x_i) - f(x_i - \Delta x)}{\Delta x}$$....................(7)

and the centered difference,

$$\left.\frac{df}{dx}\right|_{x_i} \approx \frac{f(x_i + \Delta x) - f(x_i - \Delta x)}{2\,\Delta x}$$....................(8)

Eqs. 6 through 8 are all derived from Taylor’s series.
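As a quick check of Eqs. 6 through 8, the sketch below compares the three difference approximations against the exact derivative; sin(x) is an arbitrary test function chosen for illustration. It shows the familiar result that, for the same Δx, the centered difference of Eq. 8 is markedly more accurate than the one-sided differences.

```python
import math

def forward(f, x, dx):   # Eq. 6
    return (f(x + dx) - f(x)) / dx

def backward(f, x, dx):  # Eq. 7
    return (f(x) - f(x - dx)) / dx

def centered(f, x, dx):  # Eq. 8
    return (f(x + dx) - f(x - dx)) / (2.0 * dx)

x, dx = 1.0, 0.1
exact = math.cos(x)      # exact derivative of sin(x)
for name, approx in [("forward", forward), ("backward", backward), ("centered", centered)]:
    est = approx(math.sin, x, dx)
    print(f"{name:8s}: {est:.6f}  error = {est - exact:+.2e}")
```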

Numerical solution of the 1D C/D equation

We illustrate the application of finite differences in a fluid flow problem by considering a specific finite-difference representation of the 1D C/D equation. For a more detailed discussion of the numerical analysis of Eq. 1, see Chap. 4 of Peaceman.[1] In our example, we choose a backward difference for the time derivative in Eq. 1, a centered difference for the space derivative in the convection term, and a centered-in-time/centered-in-space difference for the dispersion term. Eq. 1 is converted from a PDE to the difference equation

$$\frac{C_i^{n+1} - C_i^{n}}{\Delta t} + v\,\frac{C_{i+1}^{n+1} - C_{i-1}^{n+1}}{2\,\Delta x} = \frac{D}{2}\left[\frac{C_{i+1}^{n+1} - 2C_i^{n+1} + C_{i-1}^{n+1}}{(\Delta x)^2} + \frac{C_{i+1}^{n} - 2C_i^{n} + C_{i-1}^{n}}{(\Delta x)^2}\right]$$....................(9)

The subscripts of concentration C denote points in space, and the superscripts denote points in time. For example, the present time, $t^n$, is denoted by the superscript n, and the future time, $t^{n+1}$, is denoted by n+1. The time increment is $\Delta t = t^{n+1} - t^n$. Similarly, the space increment is $\Delta x = x_{i+1} - x_i$. The concentration at time $t^{n+1}$ and spatial location $x_i$ is denoted by $C_i^{n+1}$.

The future concentration distribution is found from the current concentration distribution by rearranging Eq. 9. We collect terms in $C^{n+1}$ on the left-hand side and terms in $C^{n}$ on the right-hand side, thus

$$\left(-\frac{v\,\Delta t}{2\,\Delta x} - \frac{D\,\Delta t}{2(\Delta x)^2}\right)C_{i-1}^{n+1} + \left(1 + \frac{D\,\Delta t}{(\Delta x)^2}\right)C_i^{n+1} + \left(\frac{v\,\Delta t}{2\,\Delta x} - \frac{D\,\Delta t}{2(\Delta x)^2}\right)C_{i+1}^{n+1} = \frac{D\,\Delta t}{2(\Delta x)^2}\,C_{i-1}^{n} + \left(1 - \frac{D\,\Delta t}{(\Delta x)^2}\right)C_i^{n} + \frac{D\,\Delta t}{2(\Delta x)^2}\,C_{i+1}^{n}$$....................(10)

Eq. 10 is now written in the form

$$a_i\,C_{i-1}^{n+1} + b_i\,C_i^{n+1} + c_i\,C_{i+1}^{n+1} = d_i$$....................(11)

where the coefficients are

$$\begin{aligned} a_i &= -\frac{v\,\Delta t}{2\,\Delta x} - \frac{D\,\Delta t}{2(\Delta x)^2}, & b_i &= 1 + \frac{D\,\Delta t}{(\Delta x)^2}, \\ c_i &= \frac{v\,\Delta t}{2\,\Delta x} - \frac{D\,\Delta t}{2(\Delta x)^2}, & d_i &= \frac{D\,\Delta t}{2(\Delta x)^2}\left(C_{i-1}^{n} + C_{i+1}^{n}\right) + \left(1 - \frac{D\,\Delta t}{(\Delta x)^2}\right)C_i^{n} \end{aligned}$$....................(12)

All values of the variables in the coefficients are known at time $t^n$. If we assume that the spatial subscript is in the range 1 ≤ i ≤ NX, the system of finite-difference equations becomes

$$\begin{aligned} b_1 C_1^{n+1} + c_1 C_2^{n+1} &= d_1 \\ a_2 C_1^{n+1} + b_2 C_2^{n+1} + c_2 C_3^{n+1} &= d_2 \\ &\;\;\vdots \\ a_{NX-1} C_{NX-2}^{n+1} + b_{NX-1} C_{NX-1}^{n+1} + c_{NX-1} C_{NX}^{n+1} &= d_{NX-1} \\ a_{NX} C_{NX-1}^{n+1} + b_{NX} C_{NX}^{n+1} &= d_{NX} \end{aligned}$$....................(13)

Eq. 13 can be written in matrix form as

$$\underline{\underline{A}}\;\vec{C}^{\,n+1} = \vec{d}$$....................(14)

where $\underline{\underline{A}}$ is the NX × NX matrix of coefficients, $\vec{C}^{\,n+1}$ is the column vector of unknown concentrations at time $t^{n+1}$, and $\vec{d}$ is the column vector of right-hand-side terms that depend on known concentrations at time $t^n$. Both column vectors $\vec{C}^{\,n+1}$ and $\vec{d}$ have NX elements.

The system of equations in Eq. 14 is called a tridiagonal system because it consists of three lines of nonzero diagonal elements centered about the main diagonal; all other elements are zero. Techniques for solving the tridiagonal system of equations with the Thomas algorithm are presented in several sources.[1][2][3][4][8] A solution of the set of equations for the physical parameters v = 1 ft/day and D = 0.01 ft²/day and the finite-difference parameters Δx = 0.1 ft and Δt = 0.1 day is shown in Fig. 2. The difference between the analytical solution and the numerical solution is due to numerical dispersion,[1][9][10] which is beyond the scope of this page. What interests us here is the appearance of matrices in the mathematics of fluid flow. Matrices are the subject of the next section.
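As a minimal sketch of how such a system is marched forward in time, the code below builds the coefficients of Eq. 12 (as reconstructed above) for the quoted parameters and solves the tridiagonal system with a plain Thomas algorithm. The grid length (NX = 100 blocks), the number of time steps, and the boundary treatment (a fixed inlet concentration of 1 at a ghost node upstream of block 1 and zero concentration far downstream) are assumptions for illustration, not conditions stated in the article; a library routine such as scipy.linalg.solve_banded would do the same job.

```python
# Physical and finite-difference parameters quoted in the text
v, D = 1.0, 0.01      # ft/day, ft^2/day
dx, dt = 0.1, 0.1     # ft, day
NX = 100              # number of grid blocks (assumed grid length)

# Constant coefficients of Eq. 12 (as reconstructed above)
alpha = D * dt / (2.0 * dx ** 2)   # dispersion group
beta = v * dt / (2.0 * dx)         # convection group
a = [-beta - alpha] * NX           # sub-diagonal a_i
b = [1.0 + 2.0 * alpha] * NX       # main diagonal b_i
c = [beta - alpha] * NX            # super-diagonal c_i

def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system; a[0] and c[-1] are unused."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Assumed boundary treatment (not spelled out in the article):
# injected concentration C = 1 at a ghost node left of block 1, C = 0 downstream.
C = [0.0] * NX
for _ in range(10):                      # march 10 time steps (t = 1 day)
    d = [0.0] * NX
    for i in range(NX):
        Cm = C[i - 1] if i > 0 else 1.0  # C_{i-1}^n, inlet ghost value
        Cp = C[i + 1] if i < NX - 1 else 0.0
        d[i] = alpha * Cm + (1.0 - 2.0 * alpha) * C[i] + alpha * Cp
    d[0] -= a[0] * 1.0                   # fold the known inlet value into the first equation
    C = thomas(a, b, c, d)

print([round(x, 3) for x in C[:15]])     # concentration near the inlet after 1 day
```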

Matrices and linear algebra

An example of a matrix was introduced earlier for the 1D C/D equation. It is often easier to work with many fluid flow equations when they are expressed in terms of matrices. Our review follows the presentation in Fanchi.[11] We begin our discussion with an example of a matrix that is used in diagonalizing the permeability tensor, namely the matrix associated with the rotation of a coordinate system. We then summarize some important properties of matrices and determinants and review the concepts of eigenvalues and eigenvectors from linear algebra.

Rotation of a Cartesian coordinate system

Fig. 3 illustrates a rotation of Cartesian coordinates from one set of orthogonal coordinates {$x_1$, $x_2$} to another set {$y_1$, $y_2$} by the angle θ. The equations relating the coordinate systems are

$$\begin{aligned} y_1 &= x_1\cos\theta + x_2\sin\theta \\ y_2 &= -x_1\sin\theta + x_2\cos\theta \end{aligned}$$....................(15)

The set of equations in Eq. 15 has the matrix form

$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$....................(16)

which can be written as

$$\vec{y} = \underline{\underline{A}}\;\vec{x}$$....................(17)

The column vectors $\vec{x}$ and $\vec{y}$ are

$$\vec{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \qquad \vec{y} = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$$....................(18)

with two elements each, and the rotation matrix $\underline{\underline{A}}$ is the 2 × 2 square matrix,

$$\underline{\underline{A}} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$$....................(19)
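As a small illustration, the sketch below applies the rotation matrix of Eq. 19, with the sign convention used in the reconstruction of Eq. 15 above, to a point expressed in the original coordinates; rotate is an illustrative helper, not part of any library.

```python
import math

def rotate(x1, x2, theta):
    """Apply the 2 x 2 rotation matrix of Eq. 19 to the column vector (x1, x2)."""
    y1 = x1 * math.cos(theta) + x2 * math.sin(theta)
    y2 = -x1 * math.sin(theta) + x2 * math.cos(theta)
    return y1, y2

# Rotating the axes by 90 degrees expresses the point (1, 0) as (0, -1) in the new frame
print(rotate(1.0, 0.0, math.pi / 2))
```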

Properties of matrices

In general, a matrix with m rows and n columns has the order m × n and is referred to as an m × n matrix. The entry in the ith row and jth column of the matrix is the ijth element of the matrix. If the number of rows equals the number of columns so that m = n, the matrix is called a square matrix. On the other hand, if m ≠ n, the matrix is a rectangular matrix.

If the matrix has a single column so that n = 1, it is a column vector as in Eq. 18. If the matrix has a single row so that m = 1, it is a row vector. A row vector can be created from a column vector by taking the transpose of the column vector. For example, the transpose of the column vector $\vec{x}$ in Eq. 18 may be written as

$$\vec{x}^{\,T} = \begin{pmatrix} x_1 & x_2 \end{pmatrix}$$....................(20)

where the superscript T denotes the transpose of the matrix. In general, we can write an m × n matrix $\underline{\underline{A}}$ with a set of elements {$a_{ij}$: i = 1, 2, ..., m; j = 1, 2, ..., n} as

$$\underline{\underline{A}} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$....................(21)

The transpose of matrix $\underline{\underline{A}}$ is

$$\underline{\underline{A}}^{T} = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{pmatrix}$$....................(22)

The conjugate transpose of matrix $\underline{\underline{A}}$ is obtained by finding the complex conjugate of each element of $\underline{\underline{A}}$ and then taking the transpose of the resulting matrix $\underline{\underline{A}}^{*}$. This operation can be written as

$$\underline{\underline{A}}^{+} = \left(\underline{\underline{A}}^{*}\right)^{T}$$....................(23)

where * denotes complex conjugation. Recall that the conjugate z* of a complex number z is obtained by replacing the imaginary number i with −i wherever it occurs. If all the elements of matrix $\underline{\underline{A}}$ are real, the conjugate transpose of $\underline{\underline{A}}$ is equal to the transpose of $\underline{\underline{A}}$.

If the matrix $\underline{\underline{A}}$ is a square matrix and its elements satisfy the equality $a_{ij} = a_{ji}$, the matrix $\underline{\underline{A}}$ is called a symmetric matrix. A square matrix $\underline{\underline{A}}$ is Hermitian, or self-adjoint, if $\underline{\underline{A}} = \underline{\underline{A}}^{+}$ (i.e., the matrix equals its conjugate transpose).

The set of elements {$a_{ii}$} of a square matrix $\underline{\underline{A}}$ is the principal diagonal of the matrix. The elements $a_{ij}$ with i ≠ j are off-diagonal elements. The matrix $\underline{\underline{A}}$ is a lower triangular matrix if $a_{ij} = 0$ for i < j, and $\underline{\underline{A}}$ is an upper triangular matrix if $a_{ij} = 0$ for i > j. The matrix $\underline{\underline{A}}$ is a diagonal matrix if $a_{ij} = 0$ for i ≠ j.
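These definitions map directly onto array operations in, for example, NumPy. The sketch below builds an arbitrary complex matrix and checks the transpose, conjugate transpose, symmetric, and Hermitian definitions; the specific matrices are made-up examples, not anything from the article.

```python
import numpy as np

A = np.array([[1 + 2j, 3j],
              [4, 5 - 1j]])

print(A.T)          # transpose, Eq. 22
print(A.conj().T)   # conjugate transpose (adjoint), Eq. 23

S = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric: a_ij = a_ji
H = np.array([[2.0, 1j], [-1j, 3.0]])    # Hermitian: equal to its own adjoint
print(np.array_equal(S, S.T), np.array_equal(H, H.conj().T))   # True True
```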

Matrix operations

Suppose the matrices $\underline{\underline{A}}$, $\underline{\underline{B}}$, and $\underline{\underline{C}}$ with elements {$a_{ij}$}, {$b_{ij}$}, and {$c_{ij}$} have the same order m × n. We are using double underlines to denote matrices. Other notations are often used, such as boldface. The addition or subtraction of two matrices may be written as

$$\underline{\underline{A}} \pm \underline{\underline{B}} = \underline{\underline{C}}, \qquad c_{ij} = a_{ij} \pm b_{ij}$$....................(24)

The product of a matrix $\underline{\underline{A}}$ with a number k may be written as

$$k\,\underline{\underline{A}} = \underline{\underline{B}}, \qquad b_{ij} = k\,a_{ij}$$....................(25)

The product of matrix $\underline{\underline{A}}$ with order m × n and matrix $\underline{\underline{B}}$ with order n × p is

$$\underline{\underline{A}}\,\underline{\underline{B}} = \underline{\underline{C}}, \qquad c_{ij} = \sum_{k=1}^{n} a_{ik}\,b_{kj}$$....................(26)

where matrix $\underline{\underline{C}}$ has order m × p. Notice that matrix multiplication is possible only if the number of columns in $\underline{\underline{A}}$ equals the number of rows in $\underline{\underline{B}}$. This requirement is always satisfied for square matrices of the same order.

The transpose of the product of two square matrices $\underline{\underline{A}}$ and $\underline{\underline{B}}$ is

$$\left(\underline{\underline{A}}\,\underline{\underline{B}}\right)^{T} = \underline{\underline{B}}^{T}\,\underline{\underline{A}}^{T}$$....................(27)

and the adjoint of the product of two square matrices is

$$\left(\underline{\underline{A}}\,\underline{\underline{B}}\right)^{+} = \underline{\underline{B}}^{+}\,\underline{\underline{A}}^{+}$$....................(28)

Notice that the product of two matrices may not be commutative (i.e., $\underline{\underline{A}}\,\underline{\underline{B}} \ne \underline{\underline{B}}\,\underline{\underline{A}}$ in general).
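The sketch below exercises Eqs. 24 through 27 on two arbitrary 2 × 2 matrices (chosen only for illustration) and shows that the two products AB and BA differ:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(A + B)        # element-by-element addition, Eq. 24
print(2.0 * A)      # multiplication by a number, Eq. 25
print(A @ B)        # matrix product, Eq. 26
print(B @ A)        # a different matrix: multiplication does not commute
print(np.allclose((A @ B).T, B.T @ A.T))   # True, consistent with Eq. 27
```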

The identity matrix, $\underline{\underline{I}}$, is a square matrix in which all off-diagonal elements are zero and all diagonal elements are equal to one. The identity matrix preserves the identity of a square matrix $\underline{\underline{A}}$ in matrix multiplication, thus

$$\underline{\underline{A}}\,\underline{\underline{I}} = \underline{\underline{I}}\,\underline{\underline{A}} = \underline{\underline{A}}$$....................(29)

By contrast, a null matrix $\underline{\underline{0}}$ is a matrix in which all elements are zero. In this case, the product of the null matrix with a matrix $\underline{\underline{A}}$ is

$$\underline{\underline{A}}\,\underline{\underline{0}} = \underline{\underline{0}}\,\underline{\underline{A}} = \underline{\underline{0}}$$....................(30)

The matrix $\underline{\underline{A}}$ is singular if the product of $\underline{\underline{A}}$ with a column vector $\vec{x}$ that has at least one nonzero element yields the null column vector; that is, $\underline{\underline{A}}$ is singular if

$$\underline{\underline{A}}\,\vec{x} = \vec{0}$$....................(31)
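A concrete example may help here. In the sketch below, the rows of the (made-up) matrix are proportional, so a nonzero vector exists that satisfies Eq. 31 and the matrix has no inverse:

```python
import numpy as np

# The second row is twice the first, so A is singular
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
x = np.array([2.0, -1.0])   # a nonzero column vector

print(A @ x)                # [0. 0.] -- Eq. 31 is satisfied
print(np.linalg.det(A))     # determinant is zero, so no inverse exists
```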

The concepts of identity matrix and matrix singularity are needed to define the inverse matrix. Suppose we have two square matrices $\underline{\underline{A}}$ and $\underline{\underline{B}}$ that satisfy the product

$$\underline{\underline{A}}\,\underline{\underline{B}} = \underline{\underline{B}}\,\underline{\underline{A}} = \underline{\underline{I}}$$....................(32)

Notice that the matrices $\underline{\underline{A}}$ and $\underline{\underline{B}}$ commute. The matrix $\underline{\underline{A}}$ is nonsingular, and the matrix $\underline{\underline{B}}$ is the inverse of $\underline{\underline{A}}$, thus $\underline{\underline{B}} = \underline{\underline{A}}^{-1}$, where $\underline{\underline{A}}^{-1}$ denotes the inverse of $\underline{\underline{A}}$. Eq. 32 can be written in terms of the inverse as

$$\underline{\underline{A}}\,\underline{\underline{A}}^{-1} = \underline{\underline{A}}^{-1}\,\underline{\underline{A}} = \underline{\underline{I}}$$....................(33)

The inverse matrix is useful for solving systems of equations. For example, suppose we have a system of equations that satisfies

$$\underline{\underline{A}}\,\vec{x} = \vec{b}$$....................(34)

where the column vector $\vec{b}$ and the matrix $\underline{\underline{A}}$ are known, and the column vector $\vec{x}$ contains a set of unknowns. Eq. 13 for the 1D C/D equation is an example of such a system. We can solve for $\vec{x}$ in Eq. 34 by premultiplying Eq. 34 by $\underline{\underline{A}}^{-1}$. The result is

$$\vec{x} = \underline{\underline{A}}^{-1}\,\underline{\underline{A}}\,\vec{x} = \underline{\underline{A}}^{-1}\,\vec{b}$$....................(35)

Of course, we have to know $\underline{\underline{A}}^{-1}$ to find $\vec{x}$. This leads us to a discussion of determinants.
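As a minimal sketch of Eqs. 34 and 35 on an arbitrary 2 × 2 system (the numbers are illustrative only), the code below forms the inverse explicitly and then repeats the calculation with a direct solver, which is the usual practice because forming the inverse is more work and less accurate for large systems:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

A_inv = np.linalg.inv(A)
print(A_inv @ b)                # Eq. 35: x = A^-1 b, approximately [2. 3.]

# In practice the system is solved without forming A^-1 explicitly
print(np.linalg.solve(A, b))    # same answer
```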

Determinants, eigenvalues, and eigenvectors

The determinant (det) of a square matrix $\underline{\underline{A}}$ is denoted by det $\underline{\underline{A}}$ or $|\underline{\underline{A}}|$. Two simple examples are the determinants of a 2×2 matrix and a 3×3 matrix. The determinant of a 2×2 matrix is

$$\det\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = a_{11}a_{22} - a_{12}a_{21}$$....................(36)

and the determinant of a 3×3 matrix is

$$\det\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = a_{11}\left(a_{22}a_{33} - a_{23}a_{32}\right) - a_{12}\left(a_{21}a_{33} - a_{23}a_{31}\right) + a_{13}\left(a_{21}a_{32} - a_{22}a_{31}\right)$$....................(37)

Determinants are useful for determining whether an inverse matrix $\underline{\underline{A}}^{-1}$ exists. Inverse matrices are needed to solve finite-difference equations representing fluid flow. The condition det $\underline{\underline{A}} \ne 0$ says that an inverse matrix $\underline{\underline{A}}^{-1}$ exists, even though we may not know the elements of the inverse matrix. Determinants are also useful for finding eigenvalues and eigenvectors.
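The sketch below checks Eqs. 36 and 37 against a library determinant routine for two arbitrary example matrices (the entries are illustrative only):

```python
import numpy as np

A2 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
# Eq. 36: a11*a22 - a12*a21
print(1.0 * 4.0 - 2.0 * 3.0, np.linalg.det(A2))     # both approximately -2

A3 = np.array([[2.0, 0.0, 1.0],
               [1.0, 3.0, 0.0],
               [0.0, 1.0, 4.0]])
# Eq. 37, expanded along the first row
d3 = (2.0 * (3.0 * 4.0 - 0.0 * 1.0)
      - 0.0 * (1.0 * 4.0 - 0.0 * 0.0)
      + 1.0 * (1.0 * 1.0 - 3.0 * 0.0))
print(d3, np.linalg.det(A3))                        # both approximately 25
```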

Eigenvalues and eigenvectors are useful for understanding the behavior of physical quantities that may be represented by a matrix. An example in fluid flow is permeability, which is discussed in more detail in Diagonalizing the permeability tensor. First, we need to define the concepts of eigenvalue and eigenvector.

Eigenvalues are the values of λ in the eigenvalue equation

$$\underline{\underline{A}}\,\vec{x} = \lambda\,\vec{x}$$....................(38)

where $\underline{\underline{A}}$ is an n × n square matrix and $\vec{x}$ is a column vector with n rows. The eigenvalue equation may be written as

$$\left(\underline{\underline{A}} - \lambda\,\underline{\underline{I}}\right)\vec{x} = \vec{0}$$....................(39)

where $\underline{\underline{I}}$ is the n × n identity matrix. Eq. 39 has nonzero solutions, $\vec{x}$, if the eigenvalue, λ, is a characteristic root of $\underline{\underline{A}}$; that is, λ must be a solution of

$$\det\left(\underline{\underline{A}} - \lambda\,\underline{\underline{I}}\right) = 0$$....................(40)

Eq. 40 is the characteristic equation of $\underline{\underline{A}}$, and the n values of λ are the characteristic roots of the characteristic equation. The characteristic roots, λ, are obtained by expanding the determinant in Eq. 40 into an nth-degree polynomial and then solving for the n values of λ. These concepts are illustrated in Diagonalizing the permeability tensor.
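As a minimal sketch of Eqs. 38 through 40, the code below works the characteristic equation of an arbitrary symmetric 2 × 2 matrix (chosen only for illustration) and compares the roots with a library eigenvalue routine:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Characteristic equation, Eq. 40: det(A - lambda*I) = (2 - L)^2 - 1 = L^2 - 4L + 3 = 0
print(np.roots([1.0, -4.0, 3.0]))   # characteristic roots: 3 and 1

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                  # the same roots (order may vary)
print(eigenvectors)                 # columns are the corresponding eigenvectors
```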

Nomenclature

a_i, b_i, c_i, d_i = finite-difference coefficients, Eq. 11
a_ij, b_ij, c_ij = elements of matrices, Eq. 24
$\underline{\underline{A}}$, $\underline{\underline{B}}$, $\underline{\underline{C}}$ = matrices, Eq. 24
$\underline{\underline{A}}$ = rotation matrix, Eq. 17
$\vec{C}^{\,n+1}$ = column vector of unknown concentrations at $t^{n+1}$, Eq. 14
$\vec{d}$ = column vector of terms that depend on known concentrations at $t^n$, Eq. 14
E_T = truncation error, Eq. 5
i = imaginary operator
$\underline{\underline{I}}$ = identity matrix, Eq. 29
J_x, J_y, J_z = fluid flux in x-, y-, z-directions
(J_x)_x = fluid flux in x-direction at location x
(J_y)_y = fluid flux in y-direction at location y
(J_z)_z = fluid flux in z-direction at location z
$\underline{\underline{A}}$ = matrix of coefficients, Eq. 14
n = exponent
R = region
S = surface
t = time
t^n = present time
t^{n+1} = future time
v = velocity, L/t, ft/sec
x, y, z = space dimensions
x_i = discrete point in x-direction, Eq. 3
Δt = time interval
Δx = length
Δy = width
Δz = thickness
θ = angle, Eq. 15
λ = eigenvalues, Eq. 38

References

  1. Peaceman, D.W. 1977. Fundamentals of Numerical Reservoir Simulation. Oxford, UK: Elsevier Publishing.
  2. Aziz, K. and Settari, A. 1979. Petroleum Reservoir Simulation. Essex, UK: Elsevier Applied Science Publishers.
  3. Mattax, C.C. and Dalton, R.L. 1990. Reservoir Simulation, Vol. 13. Richardson, Texas: Monograph Series, SPE.
  4. Ertekin, T., Abou-Kassem, J.H., and King, G.R. 2001. Basic Applied Reservoir Simulation, Vol. 7. Richardson, Texas: Textbook Series, SPE.
  5. Munka, M. and Pápay, J. 2001. 4D Numerical Modeling of Petroleum Reservoir Recovery. Budapest, Hungary: Akadémiai Kiadó.
  6. Fanchi, J.R. 2006. Principles of Applied Reservoir Simulation, third edition. Burlington, Massachusetts: Gulf Professional Publishing/Elsevier.
  7. Fanchi, J.R. 2000. Integrated Flow Modeling, No. 49. Amsterdam, The Netherlands: Developments in Petroleum Science, Elsevier Science B.V.
  8. Chapra, S.C. and Canale, R.P. 2002. Numerical Methods for Engineers, fourth edition. Boston, Massachusetts: McGraw-Hill Book Co.
  9. Fanchi, J.R. 1983. Multidimensional Numerical Dispersion. SPE J. 23 (1): 143–151. SPE-9018-PA. http://dx.doi.org/10.2118/9018-PA
  10. Lantz, R.B. 1971. Quantitative Evaluation of Numerical Diffusion (Truncation Error). SPE J. 11 (3): 315–320. SPE-2811-PA. http://dx.doi.org/10.2118/2811-PA
  11. Fanchi, J.R. 2006. Math Refresher for Scientists and Engineers, third edition. New York: Wiley Interscience.


See also

Mathematics of fluid flow

Vector analysis of fluid flow

Diagonalizing the permeability tensor

PEH:Mathematics of Fluid Flow