Probabilistic versus deterministic in production forecasting


Uncertainty and risk predominate in business and operational decision-making in upstream oil and gas. Uncertainty is inherent in the entire system, from the reservoir through to the delivery point, and beyond if the product price is included. To make valid decisions and plans, the impact of this uncertainty needs to be reflected in a range of possible production outcomes. As highlighted on the topic page Uncertainty analysis in creating production forecasting, many different approaches to handling uncertainty and generating ranges of forecasts are adopted throughout the industry; these can broadly be categorized as ‘probabilistic’ or ‘deterministic’. This chapter provides definitions of these terms, explains the documented methodologies that have been used and makes recommendations on best-practice techniques.

Uncertainty

Uncertainty ranges for many of the inputs to production forecasts, especially in the reservoir, are based on statistically under-sampled datasets and rely on ‘heuristic’ methods to define them. With this context as background, many methodologies and tools have been developed and are available to the forecaster, ranging from purely deterministic processes to fully probabilistic, with hybrid possibilities in between.

Deterministic and probabilistic defined

Deterministic

Deterministic methods are based on one or more user-defined cases, for which the full range of inputs is chosen by the forecaster (supported by the asset team) to represent a fully consistent set (or sets) of input variables. Forecasting may consist of a single best estimate, may incorporate sensitivities around this best case (e.g. low, base and high cases), or may be based on a range of multi-deterministic realisations chosen to represent the full range of potential outcomes.

Probabilistic

Probabilistic forecasts incorporate the stochastic variability of the input variables, combining all the parameters according to their defined probability distributions (and any known correlations between the parameters) to generate cumulative probability distribution curves (‘S curves’) for the quantities of interest. If specific cases are required for planning, representative models or cases at specified points on the S-curve (e.g. P90, P50, P10) are chosen.
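To make this definition concrete, the following minimal Monte Carlo sketch (in Python, using numpy) combines illustrative distributions for volumetric inputs into an oil-in-place S-curve and reads off the P90/P50/P10 points; every distribution and parameter value here is an assumption chosen for demonstration only.

```python
# Minimal probabilistic sketch: Monte Carlo sampling of volumetric inputs
# to build an S-curve for oil in place. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 10_000

area     = rng.triangular(2e6, 4e6, 8e6, n)       # m^2 (low, mode, high)
net_pay  = rng.triangular(10.0, 20.0, 35.0, n)    # m
porosity = rng.normal(0.22, 0.03, n).clip(0.05, 0.35)
sw       = rng.normal(0.35, 0.05, n).clip(0.05, 0.80)
bo       = rng.triangular(1.10, 1.20, 1.35, n)    # rm3/sm3

stoiip = area * net_pay * porosity * (1.0 - sw) / bo   # sm3

# Industry convention: P90 is the LOW case (90% probability of exceedance),
# i.e. the 10th percentile of the sampled distribution.
p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
print(f"P90 {p90:.3e}   P50 {p50:.3e}   P10 {p10:.3e} sm3")
```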

Figure 1 schematically illustrates the range of options available in terms of probability vs. determinism and level of complexity.

INSERT Figure 1 Deterministic and Probabilistic Options versus Complexity (Pending permission approval)

In practice, as we move towards the more complex end of the spectrum, with either multiple scenarios (as opposed to a single best guess), or into a fully-defined probabilistic approach, we have to incorporate elements of both deterministic and probabilistic methodologies:

  • Within a predominantly deterministic framework, we nevertheless usually have to define cases to represent a ‘P90’ or ‘P10’ output. This means that, at some point, we have to assign probabilities to the various outcomes.
  • If we are working in a probabilistic framework, determinism is introduced in various ways:
    • Our input variable ranges are constrained by our deterministic beliefs, or understanding, of the geological (e.g. sand body size) or engineering (e.g. plant availability) limits, and we often employ deterministic (low, mid, high) cases to represent the uncertainty ranges.
    • The interrelations or correlations between variables that are not defined statistically by the data must be specified.
    • Sand body correlations in a reservoir model are defined based on deterministic geological concepts.
    • Alternative structural models, requiring different model grids, cannot be expressed as a continuous variable and are rarely accommodated in a probabilistic approach.

It is important to note, then, that except in some cases with exceptional seismic definition or large numbers of wells, the forecaster is required to work beyond the available data, using concepts and assumptions that are inherently deterministic in nature and cannot be statistically validated. A number of methodologies are used by different companies and forecasters to incorporate this mix of probabilistic and deterministic approaches, as represented in Fig 2. These methodologies may be applied to the reservoir model in any of its guises (simulation, analytical, decline curves) to produce a range of forecasts.

INSERT Figure 2 Deterministic and Probabilistic Methodology Ternary Diagram (Pending permission approval)

At the three extremities of the triangle:

Best Guess.
A preferred or ‘best-guess’ model is used, based on the analyst’s most rational explanation of the available data. Such an approach is prone to a reset every time significant new data become available. A range of uncertainty may be included in the form of a percentage upside or downside to the volumes, moving the methodology from the top apex of the triangle slightly down towards the bottom right. This may be thought of as ‘traditional determinism’ and, whilst still often used, is understandably not a recommended path to reliably capturing the range of uncertainty, or indeed to defining the central outcome.
Multiple Probabilistic.
A suitable (typically large) number of models are generated based on the asset team and forecaster’s beliefs about the input data (conceptual models, parameters and ranges, quality of data and models). These are combined within a probabilistic (Bayesian) framework to generate cumulative probability curves for the quantities of interest (forecasts). Specific models are usually then chosen to represent low, mid and high (P90, P50, P10) cases for use in supporting the objective in question (reserves, field development, incremental projects).

Care needs to be taken that these ‘representative’ models:
  1. Are actually consistent with a ‘real-world outcome’ (as opposed, for instance, to being a case where a correlation coefficient of less than one has allowed a random, unphysical combination to be made)
  2. Encapsulate the range of all outcome quantities (e.g. oil, water, gas) that are pertinent to the use of the forecasts (business decisions).

For example, a low case may result either from a lack of pressure support or from rapid encroachment of aquifer water; these imply very different water production forecasts for a similar oil recovery, which matters where water-handling facilities are a critical part of the engineering design and cost.

Multiple Deterministic.
A suitable (typically small) number of models are generated, with each one explicitly-defined and reflecting a consistent physical representation of the input data and underlying concepts. Geostatistical/probabilistic methods may be used to facilitate the building of a reservoir model (e.g. distribution of properties) but the input is fixed for any given case or ‘scenario’. No ‘base case model’ is chosen but a methodology for defining the probability distribution (e.g. experimental design) is required to identify the low, mid and high cases.

As we tend towards the more probabilistic end of the spectrum, it is worth noting that there is often a false sense of security generated by the production of an ‘S’ curve, which purports to cover the full range of potential outcomes, when there is no real way of validating, prior to actual reservoir performance data, that:

  1. We have fully captured the team’s understanding and beliefs about the reservoir in the form of the input distribution functions
  2. The approach has correctly combined the probabilistic (Bayesian) relationships
  3. We have not simply been wrong several thousand times

Methodologies

The requirement is to define a workflow to express the range of potential production outcomes based on a (generally) under-sampled data set, using heuristic approaches to define the input data and a methodology to appropriately and rigorously encapsulate the uncertainties in the forecasts. The workflow will depend, to a large extent, on the tools being used for forecasting, whether that is full-field simulation modelling or simpler analytical or decline-curve forecasting, and many approaches (especially for the more cumbersome case of full-field simulation modelling) have been adopted and documented (e.g. see the reference list). This is an evolving subject and many tools and applications are emerging to support the process; it is not intended in this chapter to document precise details of the various options but to highlight the generic options that may be used.

Conceptually, the workflow can be differentiated into two approaches, depending on whether there is a tendency towards (1) probabilistic or (2) deterministic methodologies:

  1. Data → Statistical Algorithms → Model Build → Range of production forecasts
  2. Conceptual description → Identify uncertainties → Generate models → Forecasts


At some point in the process, a choice of models is required (even for probabilistic approaches, if history matching is conducted) to eliminate invalid parameter sets. Depending on the complexity of the problem and the models/approach taken, experimental design may be employed to define the uncertainty space and allow appropriate models to be chosen to represent the range of potential outcomes; this technique is increasingly employed.

Input data and heuristics

Where there is sufficient data available, and/or where the impact of the uncertainty is minor, probabilistic approaches to incorporating that data are inherently supportable, and there are highly developed methodologies and tools available to handle (geo)statistics, which can be routinely and rigorously applied. However, these cannot compensate for input bias, nor can they quantify the ‘train-wreck scenario’. We also need to make sure that statistically unimportant but performance-determining features (e.g. faults and fractures) are appropriately included in the analysis. As technical practitioners, we cling tightly to scientifically definable data and procedures, but we are often required to work beyond the data, and this requires ‘heuristic’ approaches to specify the input ranges to our forecasts.

A definition of heuristics is “a simple procedure that helps find adequate, though often imperfect, answers to difficult questions” (Kahneman, “Thinking, Fast and Slow”). In producing production forecasts, we are required to make judgements on uncertainty, incorporating known information and our understanding of the physical concepts to generate meaningful ranges of potential outcomes. We have to employ human cognitive processes that are well documented within the field of economics (e.g. by Kahneman and Tversky) but are equally applicable in the upstream oil and gas industry. We are in fact extremely poor at making judgements relating to uncertainty and are susceptible to many biases, such as:

  • ‘anchoring’ to our best (rational) guess (and adjusting away from that fixed point)
  • being overly influenced by the ‘availability’ of readily accessible information or interpretations that come easily to mind
  • being fooled by ‘representativeness’ (the degree to which one parameter affects one’s perception of the probability of another, i.e. ‘plausibility’ vs. ‘probability’)
  • ‘over-confidence’ in the result we have generated

All is not lost: an awareness of these biases allows us to question whether we have applied appropriate safeguards, critically reviewed the representativeness of our data set, examined all possible alternatives and avoided anchoring on our (prematurely formed) best guess or ‘base case’. Our tendency is to estimate ranges that are too narrow, and (belief in) complex tools can often encourage this; thinking conceptually beyond the data is important, and a deterministic approach encourages this behavior. The overall process is improved by experience and by involving the entire asset team, bearing in mind, of course, that experts and groups can also be subject to bias! It is important to develop a disciplined process, preferably based around an uncertainty-risk matrix, illustrated schematically in Fig 3.

INSERT Figure 3 Uncertainty versus Impact Matrix (Pending permission approval)

The use of the uncertainty-risk matrix allows an interdisciplinary understanding of the key factors affecting the range of outcomes and identification of the most important uncertainties that need to be defined in the analysis. This is then used in defining the models that need to be built, or as input into an experimental design. The matrix is a ‘living tool’ and should be updated as new data or understanding, whether from modelling sensitivities or experimental designs, become available.
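As a simple sketch of the matrix as a ‘living tool’, the register behind it can be kept as a ranked list and re-sorted as understanding changes; the entries, the 1-3 scores and the carry-forward threshold below are purely illustrative assumptions.

```python
# Illustrative uncertainty-impact register behind the matrix of Fig 3:
# rank uncertainties by magnitude x impact to decide which must be
# carried into the models or the experimental design.
uncertainties = [
    # (name, uncertainty magnitude 1-3, impact 1-3) -- all illustrative
    ("aquifer strength",       3, 3),
    ("fault transmissibility", 3, 2),
    ("relative permeability",  2, 3),
    ("porosity",               1, 2),
    ("plant availability",     1, 1),
]

ranked = sorted(uncertainties, key=lambda u: u[1] * u[2], reverse=True)
for name, magnitude, impact in ranked:
    action = "carry into modelling/design" if magnitude * impact >= 6 else "handle simply"
    print(f"{name:24s} magnitude={magnitude} impact={impact} -> {action}")
```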

Probabilistic methodologies

Probabilistic forecasting methods vary from the use of simple spreadsheet add-in applications, useful for handling volumetric or analytical approaches, through to complex algorithms and workflows to handle the optimization and prediction of multiple full-field simulation runs. Forecasting techniques that are easily implemented in spreadsheet formulae (volumetric, analytical or decline curve approaches) are readily adaptable to Monte Carlo spreadsheet add-in applications where the input parameters may be expressed in terms of probability functions. It is easy to combine reservoir / well forecasts with plant capacity and availability (see Fig 4) and potentially include risk factors for eventualities not covered by the technique being employed (e.g. piston-like water breakthrough in decline curves).

INSERT Figure 4 Probabilistic Well Forecasting with System Constraints (Pending permission approval)
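The sketch below illustrates this kind of combination: probabilistic well potential is capped by a sampled plant capacity and scaled by availability, and percentile forecasts are read from the trials. The exponential decline form and every distribution parameter are illustrative assumptions, not recommendations.

```python
# Combining probabilistic well potential with plant capacity and
# availability (cf. Fig 4). All distributions are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_trials, n_years = 5_000, 15
t = np.arange(n_years)                                    # years

qi = rng.lognormal(np.log(8_000.0), 0.3, (n_trials, 1))   # initial rate, stb/d
d  = rng.uniform(0.10, 0.25, (n_trials, 1))               # exponential decline, 1/yr
well_potential = qi * np.exp(-d * t)                      # (n_trials, n_years)

capacity     = rng.triangular(5_000.0, 6_500.0, 8_000.0, (n_trials, 1))  # stb/d
availability = rng.triangular(0.85, 0.92, 0.97, (n_trials, 1))           # uptime fraction

production = np.minimum(well_potential, capacity) * availability

# Yearly percentile forecasts (P90 = low case, P10 = high case)
p90, p50, p10 = np.percentile(production, [10, 50, 90], axis=0)
for yr in (0, 5, 10):
    print(f"year {yr:2d}: P90 {p90[yr]:6.0f}  P50 {p50[yr]:6.0f}  P10 {p10[yr]:6.0f} stb/d")
```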

Where reservoir simulation models are employed for forecasting, simulation software is available (commercially or in-house) to allow input of parameter ranges and, given a relatively simplistic green-field scenario and depending on the parameters to be varied, this may be straightforwardly implemented. Additional steps need to be taken when:

  1. Alternative conceptual geological models or structural models need to be implemented. It is not necessarily easy to assign probability distributions to a series of discrete models (with associated parameter ranges dependent on the particular realisation) that are implementable in the simulation software being used (and indeed weighting the relative probabilities of alternative geological concepts may be somewhat subjective). Output from each model may be combined after running probabilistic forecasts for each, but in this case it is worth considering a multi-deterministic approach, using experimental design to aid definition of probability levels.
  2. History matching is required. In this case, not all combinations of parameter input ranges will provide a good match to the historical data, so the valid model input ranges need to be constrained. Again, software is available to optimize the process, and can speed up the task of achieving a range of (non-unique) history matches, which can be expressed as a probability function. The modified set of history-matched models may then be taken forward to forecasting. Definition of the ‘misfit’ functions used to evaluate the ‘goodness’ of the history match for a range of heterogeneous data (e.g. rates vs. time, water cut, pressure build-up, time-lapse seismic response) is an important step in this process, to which sizeable effort should be devoted. A good description of the process is given by Jorge Landa in “Assessment of Forecast Uncertainty in Mature Reservoirs”, 2007-08 SPE Distinguished Lecturer Program.[1]
  3. The model(s) is/are too large to allow calculation of a full probabilistic range of history matches or forecasts in a reasonable time. In this case, ‘proxy’, ‘surrogate’ or ‘meta’ models (mathematical models of the models) may be used to model the response surface and interpolate or extrapolate the data. This may be a multi-stage process: experimental design is used to choose the models to be run to set up the proxy models, the proxy models are run, and a final experimental design is used to choose the simulation models for history matching/forecasting (at which point the method closely resembles a multi-deterministic methodology). Alternative mathematical approaches to modelling the response surfaces include (from less to more complex) polynomial equations, kriging and neural networks. A summary of approaches to defining proxy models is given by Zubarev.[2]
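To make the proxy idea concrete, the sketch below fits the simplest of the response-surface options named above, a quadratic polynomial, to a handful of ‘simulator’ results at design points, and then samples the cheap proxy in place of the full model. The two uncertainty factors and the stand-in simulator function are hypothetical.

```python
# Polynomial proxy (response-surface) sketch: fit a quadratic in two
# scaled uncertainty factors to a few expensive runs, then Monte Carlo
# the proxy. The 'simulator' is a hypothetical stand-in for a full run.
import numpy as np

def expensive_simulator(kv_kh, aquifer):
    # Placeholder for a full-field simulation result (e.g. recovery, MMstb)
    return 100.0 + 25.0 * kv_kh - 15.0 * aquifer + 10.0 * kv_kh * aquifer

# Design points scaled to [-1, 1] (e.g. from an experimental design)
pts = np.array([(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)], dtype=float)
resp = np.array([expensive_simulator(x, y) for x, y in pts])

# Quadratic basis 1, x, y, x^2, y^2, xy -> least-squares coefficients
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] ** 2, pts[:, 1] ** 2, pts[:, 0] * pts[:, 1]])
coef, *_ = np.linalg.lstsq(X, resp, rcond=None)

def proxy(x, y):
    return coef @ np.array([1.0, x, y, x * x, y * y, x * y])

# Sampling the proxy is effectively free compared with the simulator
rng = np.random.default_rng(1)
samples = rng.uniform(-1.0, 1.0, (20_000, 2))
recovery = np.array([proxy(x, y) for x, y in samples])
print("P90/P50/P10:", np.percentile(recovery, [10, 50, 90]))
```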


Deterministic methodologies

In general, simplistic deterministic approaches, such as a ‘best guess’ or the application of an error range (+/-) to generate low and high forecasts, are discouraged. This applies, to some extent, even when the output is disguised as a probabilistic forecast based on deterministic input (e.g. probabilistic summation of deterministic well decline-curve forecasts with an error range).

It is recognised that, in some jurisdictions, different reserves categories are routinely, and sometimes as a regulatory requirement, based on well/drainage areas in the absence of better information. In these cases, a deterministic approach is inherently applicable but lacks a link to specific probability levels.

In most cases, a deterministic approach will be employed (and is best-suited) as part of multi-deterministic reservoir modelling. Multi-deterministic modelling is especially applicable in the case of relatively large and complex models and with input uncertainties that are not easily represented as a mathematical probability distribution (e.g. alternative geological concepts or structural realisations). Since internally-consistent, ‘real-world case’ data input is used, the approach is especially powerful in developing data-gathering situations (e.g. appraisal) for identifying the critical information required to reduce uncertainty and its associated value.

Key uncertainties will be assessed by the team prior to the start of modelling and may be modified by sensitivity analysis during initial runs of the model(s). A number of discrete models will be built: at least three, with the upper limit dependent on the complexity (run times) of the models and the number of key uncertainties (12-15 would be a typical maximum for a large model, but could be many tens or even hundreds for simple, quick models). It is not necessary to include all uncertainties for the sake of completeness, but it is important to include sensitivities to the most uncertain parameters with the largest impact. It is usual for the overall uncertainty to be dominated by one or two key variables; these should provide the basis for the choice of deterministic cases.

At some point it is necessary to relate the different cases or realisations to particular probability levels. This may be (and often is) carried out subjectively by comparing outputs and assigning particular models to represent low (P90) or high (P10) cases, possibly treating each realisation as equi-probable or else applying some weighting. However, a more meaningful assignment of probability is afforded by employing experimental design in the modelling process.

Experimental Design

Experimental design allows exploration of the entire output uncertainty space without the requirement to run every single permutation of input variables. At one end of the spectrum, the simplest set of models varies one input parameter at a time (low and high sensitivities); at the other end, all permutations of parameters are run (known as a ‘three-level full factorial’ design, requiring 3^k runs, where k is the number of variables).

The next step up from varying each parameter individually, economising on the number of runs required, is the Plackett-Burman approach, known as a ‘Resolution III’ or ‘two-level fractional factorial’ design. This is the most commonly employed design, being the most easily understood and programmable in spreadsheet algorithms. The technique is not, however, necessarily suitable for all simulation experiments. More complex designs are recommended (‘Resolution IV’, for example folded Plackett-Burman or modified two-level fractional factorial designs), especially where there is significant non-linearity in the input-output function and/or there are strong dependencies/interactions between variables. To allow an exhaustive (and uniform) exploration of the uncertainty space, the Latin hypercube technique has been successfully applied and is particularly useful in optimizing a history-matching effort; a sketch is given below. Further reading on a variety of experimental design approaches can be found in (References 9, 11).
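The sketch below generates two of the designs discussed above: a two-level full factorial by simple enumeration, and a Latin hypercube using scipy's quasi-Monte Carlo module (available from scipy 1.7). The factor names and ranges are illustrative assumptions.

```python
# Two experimental-design options: a two-level full factorial (2^k corner
# runs) and a space-filling Latin hypercube. Factors are illustrative.
import itertools
import numpy as np
from scipy.stats import qmc   # requires scipy >= 1.7

factors = ["kv_kh", "aquifer_strength", "fault_seal"]   # k = 3

# Two-level full factorial: every low (-1) / high (+1) combination
full_factorial = list(itertools.product((-1, 1), repeat=len(factors)))
print(f"2^{len(factors)} full factorial -> {len(full_factorial)} runs")

# Latin hypercube: n space-filling points in [0, 1]^k, scaled to ranges
sampler = qmc.LatinHypercube(d=len(factors), seed=11)
unit = sampler.random(n=20)
lows  = np.array([0.01, 0.0, 0.0])   # illustrative parameter ranges
highs = np.array([0.50, 1.0, 1.0])
design = qmc.scale(unit, lows, highs)
print(design[:3])                    # first three of 20 run specifications
```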

It is important to be aware that, in any approach to uncertainty in production forecasting, different forecast outputs (e.g. oil, water and gas rates) may each have an impact on costs and business planning. Any given model may represent different probability levels for different outputs, so, depending on the impacts of the various forecast outputs, the models need to be chosen appropriately (‘P-number alignment’). Techniques have been developed (e.g. Osterloh, SPE 116196[3]) to allow choice of models at different probability levels (P90, P50, P10) for all required outputs (e.g. water, gas and/or oil production and reserves) of interest, as the sketch below illustrates.
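A minimal sketch of that alignment check follows: the run nearest the ensemble P50 for cumulative oil is located, and its probability level for cumulative water is reported, which in general will not be P50. The ensemble here is synthetic and purely illustrative.

```python
# P-number alignment sketch: a run chosen at ~P50 for oil may sit at a
# very different probability level for water. Synthetic ensemble.
import numpy as np

rng = np.random.default_rng(3)
n_runs = 500
cum_oil   = rng.lognormal(np.log(50.0), 0.25, n_runs)        # MMstb, illustrative
cum_water = 120.0 - cum_oil + rng.normal(0.0, 8.0, n_runs)   # loosely anti-correlated

def p_number(values, idx):
    """Exceedance probability: percent of runs with a larger value."""
    return 100.0 * (values > values[idx]).mean()

# Choose the run whose oil output is closest to the ensemble P50
candidate = int(np.argmin(np.abs(cum_oil - np.percentile(cum_oil, 50))))

print(f"run {candidate}: oil at ~P{p_number(cum_oil, candidate):.0f}, "
      f"water at ~P{p_number(cum_water, candidate):.0f}")
```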

Advantages and disadvantages of the different methods

Probabilistic

  • Pro: Uses impartial statistical rules. Con: Dependencies between the variables are not always easily definable or transparent.
  • Pro: Exhaustive cases can be run. Con: Whilst several thousand alternatives can be run, this may reinforce an illusion of rigour; if the input ranges and algorithms are not appropriate, it may just be a case of ‘being wrong’ several thousand times.
  • Pro: Need not be base-case led. Con: Is often base-case led (anchoring to a base case and adjusting away from it).
  • Pro: Appeals to the technologically-minded and can satisfy a desire for mathematical rigour. Con: The link between statistical realisations and the real world is unclear; a ‘black box’.

Multiple deterministic

  • Pro: Generates plausible real-world cases. Con: Pulls away from measured or interpreted data, which can create discomfort.
  • Pro: Doesn’t rely on statistics. Con: Not statistical.
  • Pro: Maintains dependencies. Con: Fewer cases; low and high cases are effectively ‘picked’.
  • Pro: Easy to understand, transparent.

In summary, a probabilistic approach, although purporting to be an unbiased, mathematical presentation of all potential outcomes, is often skewed by adherence, or ‘anchoring’, to the available data set, which may not be truly representative of the range of outcomes, however ostensibly defensible it appears. Furthermore, the input is often centre-weighted (e.g. a normal distribution), which automatically encourages a focus on a best-guess case. And although it is possible to extend the ranges of input variables and to allow for and maintain a realistic relationship between them, it is difficult to verify how reasonable the statistical ranges and the automatically combined output are, as these are defined mathematically rather than as ‘real-world’ representations.

Best practice guidelines

It cannot be stated that either a probabilistic or a deterministic methodology is superior for creating production forecast ranges, and elements of both approaches may often be employed. With imperfect data and incomplete understanding of probability ranges and the dependencies between input variables, neither route is perfect, although there may be a preference for a given reservoir, model type and objective. Usually, a blend of probabilistic (including geostatistical) and deterministic methods is required. Static (geological) reservoir models should typically encapsulate a significant degree of determinism based on (alternative) concepts, whereas pure engineering models (e.g. analytical or decline curves) can more easily be handled using probabilistic methods.

Before “logging on”, it is important to design the methodology and boundaries of the forecasting exercise based on the available data and the ultimate objective(s). As part of this process it is advisable to create an uncertainty/risk matrix in which the identifiable uncertainties are ranked in terms of their magnitude and likely impact on the target outcomes. This up-front analysis will identify the principal, impactful uncertainties (risks) as well as the appropriate method for handling them within the chosen methodology.

The following is a list of best-practice guidelines to follow when formulating a methodology for dealing with uncertainty in forecasting.

  1. Low-magnitude, low-impact uncertainties may be included, if at all, in whatever manner is most convenient to the forecaster, as their impact on the calculations will be insignificant compared to the overall range of uncertainty. Be aware, though, that depending on the approach taken to define probability, adding additional low-uncertainty-range parameters may simply steepen the S-curve (additional central values) unless they are appropriately probabilistically weighted.
  2. High-uncertainty parameters are often associated with a lack of data. Definition of a probability distribution function is then often meaningless and may just represent a means to an end if a probabilistic route has been chosen. Consider whether a deterministic approach is more defensible.
  3. Conceptual uncertainties (either/or situations, or those based on non-numerically-definable realizations), such as those related to environment of deposition or well correlation, are typically better handled as discrete, deterministic events, especially where they represent the high-magnitude, high-impact parameters.
  4. Parameters that can easily be expressed numerically, as continuous variables, lend themselves to inclusion as probability distribution functions, especially where there is sufficient data and/or understanding to make a realistic range.
  5. Geological uncertainties are often best expressed as concepts (rather than, for instance, ranges in volume or porosity) meaning that where geological input is a significant part of the methodology for producing forecasts, determinism should be a strong part of the treatment of uncertainty.
  6. Simpler forecasting methodologies (e.g. decline analysis or analytical models) lend themselves to probabilistic approaches, as they can easily be programmed into spreadsheets with Monte Carlo add-in applications (e.g. Crystal Ball, @RISK). Note that the range of individual inputs is often defined by relatively subjective, deterministic low, mid and high cases using curve fitting, implying that turning these into probabilistic ranges does not in itself add any benefit to the calculations. However, a Monte Carlo method allows appropriate combination of elements and variables (e.g. addition of wells/sectors) and, for instance, combination of well productivity with surface capacity and availability (see the aggregation sketch after this list).
  7. Data-driven, advanced curve-fitting methodologies take out the subjectivity of interpreting the past and give comfort that “everything has been taken into account”; however, they do not capture the conceptual reasons for historical changes, and prediction of potential future events requires the re-introduction of somewhat subjective input. Conceptual input related to, for instance, reservoir drive mechanism or facilities constraints should be included in the analysis in an appropriate manner.
  8. For large models, it is increasingly common to use a proxy-model technique. This can reduce the time taken to evaluate the response of large numbers of models using probabilistic techniques.
  9. When producing forecasts through a static and dynamic reservoir modelling route it is recommended to build up the range of cases through conceptual geological models rather than being purely data-driven. This typically leads to generation of a number of deterministic realizations.
  10. Reservoir simulation software that incorporates optimization as well as probabilistic forecasting can be used to explore the uncertainty space during the history matching process (and can speed up that process) but still allows a deterministic forecasting approach to be used if preferred, employing alternative history-matched models (potentially with different grids) as deterministic cases.
  11. Finally, apply a “sense check” to the range of forecasts derived. Some companies employ both a probabilistic and a deterministic route to ensure that there is consistency in the outcomes. As with all things subsurface, alternative approaches to deriving the same quantity can lend more validity and robustness to an answer than having applied the most advanced techniques and tools. If time and resource are available, generating a probabilistic forecast for comparison with a range of deterministic realisations (even if only a low, mid and high case) will help to give validity to the final outputs. Rules of thumb based on industry or analogue data may also be applied; a low-high range of +/- 20 to 50% of the mid-case value of production or reserves is reasonable, not taking into account any overall bias (see for example SPE 145437[4]).
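The aggregation sketch referenced in guideline 6 follows: per-well decline-based EURs are summed trial by trial, illustrating why percentiles must be combined by simulation rather than by arithmetic (the P90 of the total is higher than the sum of the per-well P90s). All decline parameters are illustrative assumptions.

```python
# Monte Carlo aggregation of per-well exponential-decline EURs
# (EUR = qi/D). Percentiles of a sum must come from the simulated
# totals, not from summing per-well percentiles. Illustrative inputs.
import numpy as np

rng = np.random.default_rng(21)
n_trials, n_wells = 20_000, 8

qi = rng.lognormal(np.log(1_500.0), 0.35, (n_trials, n_wells))  # stb/d
D  = rng.uniform(0.15, 0.35, (n_trials, n_wells))               # 1/yr
eur = qi * 365.25 / D / 1e6                                     # MMstb per well

total = eur.sum(axis=1)
sum_of_p90s = np.percentile(eur, 10, axis=0).sum()   # adding low cases well by well
p90_of_sum  = np.percentile(total, 10)               # proper probabilistic sum

print(f"sum of per-well P90s: {sum_of_p90s:.1f} MMstb")
print(f"P90 of the aggregate: {p90_of_sum:.1f} MMstb (higher, as lows do not coincide)")
```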


As with all subsurface work, the end objective is generally to allow informed and optimal business decisions to be made within a specified timeframe. Models and modelling workflows are required to adapt to these prevailing requirements. Understanding the impact of the input uncertainties is a vital part of the process to generate valid ranges of production forecasts.

Two final comments on modelling workflows, drawing on the discussion on this topic page, are made in this respect:

  1. Our capabilities to quantify the uncertainties in our input data are generally poor and complex modelling routines cannot make up for inadequately defined input ranges. Effort in understanding the potential ranges in the relevant uncertainties and consequences may be more important than that spent on the means to precisely define the output curves, especially when we are, by necessity, working (subjectively) beyond the available data set.
  2. A link to real, deterministic realizations is important in generating believable and auditable outputs that can be used for business planning. As discussed in this chapter, the developing methodologies (including proxy models and experimental design) to handle uncertainty in production forecasting are blurring the distinction between probabilistic and deterministic forecasting and access to appropriate models at the desired levels of probability is increasingly part of the routine of including uncertainty in production forecasting.

References 

  1. Landa, Jorge. 2007. “Assessment of Forecast Uncertainty in Mature Reservoirs”. SPE Distinguished Lecturer Program. http://www.spegcs.org/events/1066/.
  2. Zubarev, D. I. 2009. Pros and Cons of Applying Proxy-models as a Substitute for Full Reservoir Simulations. Society of Petroleum Engineers. http://dx.doi.org/10.2118/124815-MS.
  3. Osterloh, W. T. 2008. Use of Multiple-Response Optimization To Assist Reservoir Simulation Probabilistic Forecasting and History Matching. Society of Petroleum Engineers. http://dx.doi.org/10.2118/116196-MS.
  4. Nandurdikar, N. S., & Wallace, L. 2011. Failure to Produce: An Investigation of Deficiencies in Production Attainment. Society of Petroleum Engineers. http://dx.doi.org/10.2118/145437-MS.

Noteworthy papers in OnePetro

Goodwin, N. 2015. Bridging the Gap Between Deterministic and Probabilistic Uncertainty Quantification Using Advanced Proxy Based Methods. Society of Petroleum Engineers. http://dx.doi.org/10.2118/173301-MS.

Choudhary, M. K., Yoon, S., & Ludvigsen, B. E. 2007. Application of Global Optimization Methods for History Matching and Probabilistic Forecasting - Case Studies. Society of Petroleum Engineers. http://dx.doi.org/10.2118/105208-MS.

Mohaghegh, S. D. 2006. Quantifying Uncertainties Associated With Reservoir Simulation Studies Using a Surrogate Reservoir Model. Society of Petroleum Engineers. http://dx.doi.org/10.2118/102492-MS.

Mohaghegh, S. D., Modavi, C. A., Hafez, H. H., Haajizadeh, M., Kenawy, M. M., & Guruswamy, S. 2006. Development of Surrogate Reservoir Models (SRM) For Fast Track Analysis of Complex Reservoirs. Society of Petroleum Engineers. http://dx.doi.org/10.2118/99667-MS.

Peng, C. Y., & Gupta, R. 2003. Experimental Design in Deterministic Modelling: Assessing Significant Uncertainties. Society of Petroleum Engineers. http://dx.doi.org/10.2118/80537-MS.

Schaaf, T., Coureaud, B., & Labat, N. 2008. Using Experimental Designs, Assisted History Matching Tools and Bayesian Framework to get Probabilistic Production Forecasts. Society of Petroleum Engineers. http://dx.doi.org/10.2118/113498-MS.

Noteworthy books

Society of Petroleum Engineers (U.S.). 2011. Production forecasting. Richardson, Tex: Society of Petroleum Engineers. WorldCat or SPE Bookstore

Ringrose, P., & Bentley, M. 2014. Reservoir model design: A practitioner's guide. http://www.worldcat.org/oclc/892733899.

External links

Production forecasts and reserves estimates in unconventional resources. Society of Petroleum Engineers. http://www.spe.org/training/courses/FPE.php

Production Forecasts and Reserves Estimates in Unconventional Resources. Society of Petroleum Engineers. http://www.spe.org/training/courses/FPE1.php

See also

Production forecasting glossary

Aggregation of forecasts

Challenging the current barriers to forecast improvement

Commercial and economic assumptions in production forecasting

Controllable verses non controllable forecast factors

Discounting and risking in production forecasting

Documentation and reporting in production forecasting

Empirical methods in production forecasting

Establishing input for production forecasting

Integrated asset modelling in production forecasting

Long term verses short term production forecast

Look backs and forecast verification

Material balance models in production forecasting

Probabilistic verses deterministic in production forecasting

Production forecasting activity scheduling

Production forecasting analog methods

Production forecasting building blocks

Production forecasting decline curve analysis

Production forecasting expectations

Production forecasting flowchart

Production forecasting frequently asked questions and examples

Production forecasting in the financial markets

Production forecasting principles and definition

Production forecasting purpose

Production forecasting system constraints

Quality assurance in forecast

Reservoir simulation models in production forecasting

Types of decline analysis in production forecasting

Uncertainty analysis in creating production forecast

Uncertainty range in production forecasting

Using multiple methodologies in production forecasting
