Table of Contents
1. Introduction
2. Non-marine clastic depositional systems
a. Introduction
b. Low-sinuosity fluvial
c. High-sinuosity fluvial
d. Eolian
e. Alluvial fan
3. Shallow and deep marine systems
a. Deltas
b. Shore-zone
c. Deep marine
4. Sequence Stratigraphy
a. Introduction
b. Fundamental concepts and terminology
c. Systems tracts
d. Application
5. Reservoir Geophysics
a. Introduction
b. Terminology and fundamental concepts
c. Seismic acquisition
d. Seismic processing
e. Seismic framework interpretation
f. Acoustic impedance, attribute analysis, direct hydrocarbon indicators, and 4D seismic
6. Fractured Reservoirs
a. Introduction
b. Key concepts
c. Types of fractures
d. Character of the fracture plane
e. Detecting and quantifying fractures
f. Fracture porosity, permeability, and productivity
g. Data gathering and reservoir characterization
7. Capillary Pressure
a. Introduction
b. Buoyancy and capillary forces
c. The capillary pressure equation
d. Determining capillary pressure
e. Displacement pressure and saturation distributions
f. The Leverett J-function
g. Reservoir seals
h. Hydrostatic versus hydrodynamic reservoirs
i. Examples of how capillary pressure controls the location of oil-water contacts
8. Reservoir Heterogeneities and the Use of Geostatistics
a. Introduction
b. Types of heterogeneities
c. Steps to identify heterogeneities
d. Importance of capturing the appropriate heterogeneities
e. Why geostatistics is needed
f. How to calculate a variogram
g. Variogram modeling
h. Kriging versus conditional simulation
i. Object modeling
j. Sequential Gaussian simulation
9. Geocellular Modeling
a. Introduction
b. Project scoping
c. Data import and quality checking
d. Framework construction
e. Three-dimensional gridding
f. Property modeling
g. Volumetrics and net pay
h. Realization assessment
i. Upscaling and export
j. Numerical simulation and reserves
1. Introduction
Production geology is a geological sub-discipline that focuses on identifying and producing hydrocarbons from known accumulations. The responsibilities of the production geologist are to 1) determine development well locations that target remaining hydrocarbons, 2) help explain the performance of existing wells by understanding the reservoir quality and lateral continuity of producing horizons, 3) determine the volume of hydrocarbons-in-place and the uncertainties associated with this value, and 4) look for additional opportunities including missed pay behind pipe in existing wells, shallower and deeper pay, and step-out wells that expand the existing field or discover new fields.
The primary objective of this course is to provide participants with insights and tools to help them become effective production geologists. Production geologists differ from exploration geologists, who often work with little data and generate broad play concepts that are sharpened into prospects. The production geologist typically has more data, generates more detailed descriptions, and must be prepared to answer many specific questions that exploration geologists do not consider. A comprehensive list of these questions is included in Table 1-1. While this class will not address all of these questions, it will provide participants with fundamental skills and a philosophy of how to conduct their work, so that the goal of answering all of these questions can be reached.
A secondary objective of this course is to help participants understand how to build geocellular models, including how to incorporate information from other disciplines, and how long it should take to build these models. Figure 1-1 is a generic geocellular modeling workflow, showing different tasks and how they inter-relate. Although this workflow must be modified for each individual project, it nonetheless provides a basic template for carrying out geocellular modeling work.
Geocellular models have become the primary tool used by geologists to capture their data and interpretations. These models also contain information provided by petrophysicists, geophysicists, and reservoir engineers. The models are used for development planning, reservoir visualization, and geosteering wells. In order to construct these models, geologists must be experts in many critical areas including the six listed below:
• Depositional systems: to understand the likely geometry, lateral continuity and reservoir quality of sandbodies in the reservoir
• Sequence stratigraphy: to understand the nature of key surfaces, how they should be correlated through the reservoir, and their role as baffles and barriers to fluid flow
• Reservoir geophysics: to determine reservoir structure, faulting, and variations in properties in interwell areas
• Geostatistics: to use appropriate stochastic techniques for distributing petrophysical parameters including facies types, porosity, and permeability
• Capillary pressure: to distribute water saturation in the model and relate this to variations in rock quality
• Geocellular modeling: to understand the techniques and workflows used to build models, and how the results are used by reservoir engineers and others
This course focuses on these six areas, explaining the fundamental concepts of each and illustrating these concepts with diagrams, photographs, and exercises.
2. Non-Marine Clastic Depositional Systems
2a. Introduction. Non-marine clastic depositional systems include low-sinuosity fluvial, high-sinuosity fluvial, eolian, and alluvial fan depositional environments (Figure 2a-1). Low-sinuosity fluvial systems make excellent reservoirs due to their high net-to-gross ratios, coarse grain size, and sheet-like distribution. High-sinuosity fluvial systems often have lower net-to-gross ratios than low-sinuosity systems, with sandbodies of limited size and lateral extent. Eolian systems make excellent reservoirs because of their clean, well-sorted sands. In contrast, alluvial fan reservoirs are relatively rare due to extreme variations in grain size, sorting, and clay content.
This chapter explains how sands and shales in each of these environments are deposited and preserved, and how to recognize them from cores and logs. It also discusses the size, shape, and continuity of different sandbody types, and the key heterogeneities contained within them that impact reservoir fluid flow.
2b. Low-sinuosity Fluvial Systems. A low-sinuosity fluvial system is a deposit of sand and gravel, generally with lesser amounts of silt and mud, produced by a series of low to moderately sinuous braided rivers traversing a coastal plain. It differs from a high-sinuosity system in that there are multiple river channels, and it differs from an anastomosing system in that there are no permanent islands between the river channels (Figure 2b-1).
Sand and gravel deposited in this fluvial system are concentrated in bars including longitudinal, lateral, and transverse bars (Figure 2b-2). Longitudinal bars have their long axis oriented parallel to flow whereas transverse bars are oriented transverse to the flow direction and migrate downstream. Lateral bars form along the channel margins and are submerged during flood events when coarse material is deposited on their surface. In addition to these basic types of bars, many others have been recognized and classified (Figure 2b-3). These bars are not stable, but instead migrate, and can be destroyed or enlarged with time (Figure 2b-4). The upstream portions of the bars accumulate coarser-grained sand and gravel with a blocky log signature whereas the downstream portions accumulate finer-grained sand and silt with a fining-upward log signature (Figure 2b-5).
Low-sinuosity fluvial systems are composed primarily of massive-appearing, sheet-like or tabular sandstone bodies of relatively high lateral continuity. These are separated by discontinuous silty sandstone intervals or less commonly by thin and discontinuous shales. The most common classes of shales are floodplain shales, channel-fill shales, and thin shales that drape various bars. The most important of these are floodplain shales, which can extend laterally for hundreds of meters. The major control on shale continuity is subsequent erosion by fluvial processes, which results in laterally discontinuous permeability baffles rather than permeability barriers. Typical reservoir characteristics of low-sinuosity fluvial systems are summarized in Table 2b-1.
A significant portion of the world’s oil reserves are contained within low-sinuosity fluvial sandstone reservoirs. It is estimated that there are at least 30 billion stock tank barrels of remaining proven oil reserves and 40 trillion cubic feet of remaining proven gas reserves. An excellent example of one of these reservoirs is the Prudhoe Bay field on the North Slope of Alaska, which is the largest oilfield in North America (Figure 2b-6). It has 12 billion barrels of recoverable oil reserves and a gas cap containing 47 trillion cubic feet of gas. The lower part of the reservoir contains heterogeneous delta-front and lower delta-plain sandstones whereas the upper part consists of more homogeneous low-sinuosity fluvial sandstones and conglomerates interbedded with floodplain, abandoned channel, and drape shales. An important heterogeneity in addition to shales in this reservoir is open-framework conglomerates. These were deposited as laterally extensive gravel bars and now serve as “thief” zones which receive most of the injected water or gas during secondary recovery operations.
2c. High-sinuosity Fluvial Systems. A high-sinuosity fluvial system is one in which the ratio of the channel length to the down-valley distance exceeds 1.5. Higher sinuosity is favored by relatively low slopes, a high ratio of suspended to bed load sediment, cohesive bank material, and relatively steady discharge. The lateral distance across the active river channel system is referred to as the channel belt width, and the channel belt itself is contained within a larger floodplain (Figure 2c-1). With time, the channel belt migrates across the floodplain, cutting off portions of the active river channel (Figure 2c-2).
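The sinuosity threshold defined above is simple enough to express directly. The sketch below (the channel and valley lengths are invented, purely for illustration) classifies a channel from the ratio of channel length to down-valley distance:

```python
def sinuosity(channel_length_km, valley_length_km):
    """Ratio of along-channel length to straight-line down-valley distance."""
    return channel_length_km / valley_length_km

# A hypothetical channel that travels 18 km to cover 10 km of valley:
ratio = sinuosity(18.0, 10.0)
print(ratio)  # 1.8
print("high-sinuosity" if ratio > 1.5 else "low-sinuosity")  # high-sinuosity
```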
The most important reservoir sandbody in a high-sinuosity channel belt is the point bar (Figure 2c-3). Point bars develop on the inner portion of each meander loop where the flow is slower allowing the sand to drop out. The opposite side, where the water flows faster and causes erosion, is referred to as the cut bank. Point bars are characterized by a sharp base, a fining-upward character, and are often overlain by rooted soil horizons (Figure 2c-4). Within the sandbody itself, there is commonly an upward transition from coarse gravel to trough cross-bedded, parallel-laminated, and rippled sandstone (Figure 2c-5).
A secondary type of sandbody associated with high-sinuosity fluvial systems is the crevasse splay (Figure 2c-6). A crevasse splay is formed when a river breaks through a levee at flood stage and deposits its material on the floodplain. Crevasse splays typically are formed by the deposition of suspended sediment and are therefore finer-grained and siltier than point bars. The overall shape of a crevasse splay is lobate, such that they appear lenticular in sections normal and oblique to the flow direction, but triangular in sections parallel to the flow direction (Figure 2c-7).
In addition to point bars and crevasse splays, other elements of high-sinuosity fluvial systems include levees, swamps, oxbow lakes, and floodplain deposits. Levees are elevated areas adjacent to the river channel containing overbank deposits of siltstone and very fine sand. They grade laterally into finer grained silts and clays of the floodplain and black organic-rich muds characteristic of swamps (Figure 2c-8). Floodplain sediments are distinguished in core by their red, oxidized nature whereas swamps appear as rooted mudstones or coals. Oxbow lakes are crescent-shaped bodies of standing water in the abandoned channel (oxbow) of a meander loop (Figure 2c-9). Fine-grained silts and clays referred to as abandoned channel-fill eventually fill these lakes.
Within a high sinuosity fluvial system, permeabilities are highest in the point bar and channel fill sands (Figure 2c-10). Crevasse splay permeabilities are typically 1-2 orders of magnitude lower, whereas levee and floodplain deposits are considered non-pay. A key element in determining whether these types of reservoirs can be drilled and produced is the degree to which reservoir sandbodies are connected. In a low net-to-gross reservoir, sandbodies may not be connected, or may only be connected in a few places along the length of the channel belt (Figure 2c-11).
As the net-to-gross ratio increases, connectivity will increase such that not only will all the sandbodies be in pressure communication, but it will also be possible to effectively sweep them using secondary recovery processes such as waterflooding. The key is for the system to be sufficiently sand-rich and for there to be enough accommodation space so that sandbodies will erode into each other to form amalgamated channel belts across a broad area (Figure 2c-12). This creates more sand-prone areas that can be imaged from seismic data and targeted with wells (Figure 2c-13). Most high-sinuosity fluvial systems are complex, ranging from amalgamated, areally extensive sandbodies to isolated single sandbodies. A good example of this is the Stratton Field of South Texas, which contains multiple sandbody types that are being delineated with the help of seismic data (Figure 2c-14).
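Connectivity arguments like the one above are framed in terms of the net-to-gross ratio, which is simply net sand thickness divided by gross interval thickness. A minimal sketch (the interval thicknesses and sand/non-sand calls below are invented):

```python
# Hypothetical interpreted column: (thickness_m, is_reservoir_sand)
intervals = [(3.0, True), (1.5, False), (4.0, True), (6.5, False), (2.0, True)]

gross = sum(t for t, _ in intervals)            # total interval thickness
net = sum(t for t, sand in intervals if sand)   # reservoir sand only
ntg = net / gross
print(f"net/gross = {net}/{gross} = {ntg:.2f}")  # net/gross = 9.0/17.0 = 0.53
```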
Table 2c-1 summarizes the typical reservoir characteristics of high-sinuosity fluvial systems and how to recognize them from other depositional systems, including low-sinuosity fluvial systems. It should come as no surprise that low-sinuosity and high-sinuosity systems are end members, and that some fluvial systems show characteristics of each (Figure 2c-15). Only through the integration of sufficient core, log, and seismic data can the true nature of each system be known.
2d. Eolian Systems. Eolian sand dunes produce hydrocarbons from the Permian Rotliegendes Formation of the North Sea, Jurassic strata in the Gulf of Mexico, and numerous other locations. Eolian reservoirs are commonly not as thick as those of other depositional systems, but can be of very high quality due to their clean, well-sorted nature. Deserts cover approximately 20% of the earth's land surface, but eolian dunes (Figure 2d-1) only cover about one-quarter of the deserts while the rest of the area consists of alluvial fans, playas, stony plains, and eroding highlands. These dunes come in many shapes and forms as a function of wind direction and velocity (Figure 2d-2).
In eolian systems, sand is transported by three major processes: saltation, suspension and surface creep (Figure 2d-3). By far, the dominant process is saltation which accounts for approximately 90% of sand transport. Saltation is a complex process of downwind grain movement by collision and bouncing. Saltating grains form a relatively dense layer that moves up the shallow-dipping (stoss) side of each sand dune and is deposited on the steeply-dipping (lee) side of the dune. Saltating grains are deposited by four processes: grainfall, wind ripple migration, avalanching, and adhesion (Figure 2d-4). In general, avalanche and wind ripple deposits are the main dune bedding structures that are preserved.
Most eolian dune stratification is dominated by large-scale cross bedding (Figure 2d-5). A cross section through a series of migrating dunes (Figure 2d-6) shows that the preserved portion only represents a fraction of the original dune height. Planar surfaces separate the preserved dune bedsets, which are generally one to two meters thick. A major control on stratification within dune systems is the level of the water table at the time of deposition (Figure 2d-7). Changes in the height of the water table preserve the dunes and can produce horizontal truncation surfaces of regional extent called supersurfaces. At times when the water table is at or above ground level, fine-grained silts, algal mats, and evaporite deposits may accumulate in lakes and other wet portions of interdune areas (Figure 2d-8).
Depending upon changes in the height of the water table and the amount of sand, wet eolian systems can become dry or vice versa (Figure 2d-9).
Eolian systems demonstrate contrasts in reservoir properties ranging from the core-plug scale to the entire reservoir. At the core plug scale, variations in grain size and sorting can result in large variations in porosity and permeability. Figure 2d-10 shows a permeability value from a core plug compared with mini-permeameter values from the same core. The mini-permeameter values range from less than 0.5 millidarcies to 38.5 millidarcies, showing that a single core plug does a poor job of capturing the permeability variation inherent in these types of rocks. At a bedding scale, grain size variations in dune sheets and interdune deposits create both high quality reservoir intervals and permeability baffles (Figure 2d-11). The presence of these baffles creates complex fluid flow patterns within the reservoir (Figure 2d-12). Table 2d-1 summarizes the typical reservoir characteristics of eolian systems and how to recognize them from other depositional systems.
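The plug-versus-mini-permeameter contrast can be illustrated numerically. In this sketch the individual readings and the plug value are invented, chosen only to span the 0.5-38.5 md range quoted above; the geometric mean is used because permeability is conventionally averaged that way:

```python
import statistics

# Hypothetical mini-permeameter readings (md) spanning the quoted range;
# the single core-plug value is likewise invented.
mini_perms = [0.5, 1.2, 3.8, 7.5, 12.0, 20.1, 38.5]
plug = 12.0

geo_mean = statistics.geometric_mean(mini_perms)
print(f"plug = {plug} md; minis span {min(mini_perms)}-{max(mini_perms)} md")
print(f"geometric mean = {geo_mean:.1f} md")  # ~5.5 md, well below the plug value
```

A single plug can easily sit a factor of two or more away from the effective average of the surrounding rock, which is the point Figure 2d-10 makes graphically.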
2e. Alluvial Fan Systems. Alluvial fans are directional landforms that begin at a point source along a mountain front and spread out downdip with an accompanying decrease in grain size and an increase in sorting. The most distinctive feature of alluvial fans is their form. Their characteristic shape results from sediment-charged flows exiting the mountain front at a point source and spreading out along a wide arc (Figure 2e-1). Their relatively steep slope (typically 2-12 degrees) produces topographic relief of 300 to 2000+ meters across the fan.
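The relief figures quoted above follow directly from slope and fan length: relief = length x tan(slope). As a rough check (the 10 km fan length is an assumed value, not from the text):

```python
import math

def relief_m(slope_deg, fan_length_m):
    """Topographic drop across a fan of the given slope and radial length."""
    return fan_length_m * math.tan(math.radians(slope_deg))

# Assuming a hypothetical 10 km fan:
print(round(relief_m(2, 10_000)))   # 349 m  (gentle end of the 2-12 degree range)
print(round(relief_m(12, 10_000)))  # 2126 m (steep end)
```

These bracket the 300 to 2000+ m relief range stated above.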
Types of fans include fan-deltas which terminate into a standing body of water, terminal fans which form where topographically confined rivers drain into an unconfined lowland (Figure 2e-2), and bajadas which are a series of coalescing alluvial fans along a mountain front. Reservoir quality sand is most commonly located along the margins of these fans, while basin-margin faulting and downdip pinchouts of these sediments provide excellent trapping mechanisms (Figure 2e-3).
Alluvial fans begin to form when steep slopes of bedrock erode to produce loose masses of soil and rock fragments called colluvium. As water from rainfall or snowmelt is added, the masses become unstable and slide downhill. This process mixes the sediment, entraining air and water and transforming the slide into a gravity flow. The flow races down the fan until dewatering and a decrease in slope cause the shear strength of the mixture to exceed the downhill pull of gravity, inducing deposition. The resulting deposit is poorly-sorted and generally massive or planar-stratified due to rapid deceleration of the flow.
Alluvial fans are formed by two major types of gravity flows: debris flows and fluidal flows. Debris flows contain sand to boulder-sized clasts that are supported by a clay + water slurry (Figure 2e-4). Large debris flows can be up to 6 feet thick and may cover the entire fan. Debris flow fans have constant slopes of 5-15 degrees and downlap onto basin floor deposits (Figure 2e-5). Debris flow processes dominate fans in rugged, semi-arid regions containing glacial or volcaniclastic rocks.
Fluidal flows contain sand and gravel carried downslope as bedload or suspended sediment that is deposited as sheetfloods or streamflows (Figure 2e-6). Sheetflood fans are characterized by catastrophic, turbulent, unconfined water flows that expand as they move downfan. Streamflow fans contain channel-fill and associated overbank deposits associated with a semi-permanent channel system. These types of fans typically have slopes of 2-8 degrees that decrease downdip. Their distal portion fines into a sandskirt that interfingers with basin floor deposits (Figure 2e-7). Sheetflood and streamflow processes dominate fans with year-round discharge from highlands containing resistant rocks.
A good example of a modern streamflow alluvial fan is the Scott glacial outwash fan in Alaska which shows a downfan reduction in gradient and an accompanying decrease in grain size as the fan transitions into a low-sinuosity fluvial system (Figure 2e-8). Another good example is the Kosi fan of Nepal and India which shows a downfan progression from gravelly braided, to straight, to slightly sinuous channel geometries (Figure 2e-9).
Debris flow deposits are typically matrix-supported, massive conglomerates. Sheetflood deposits are commonly interbedded gravel and sand deposited in couplets 10-30 cm thick. Streamflow deposits contain coarse-grained channel fill, longitudinal bars, and accretion bedding. As might be expected, most alluvial fans are not easily classified as "end members" but instead contain facies types that represent various types of deposits (Figure 2e-10).
Alluvial fan deposition is characterized by infrequent, catastrophic events separated by long intervals during which secondary processes are active. In a subaerial environment, ponding, rooting, burrowing, and subaerial erosion produce siltstone lenses, soils, bioturbated horizons, and thin lags of winnowed clasts over the surface of the fan. These are preserved if succeeding flows do not completely erode them. Subaqueous fans contain reduced green or black shales, marine trace fossils, and grade distally into turbidites.
The best candidates for hydrocarbon reservoirs are fan-deltas that have been reworked by marine processes to improve their reservoir quality. Figure 2e-11 shows a sequence formed from a progradational fan-delta that includes fine-grained shelf sands grading upwards into sand and gravel of the fan-delta plain. A good example of a modern fan delta is the Copper River delta in Alaska (Figures 2e-12 and 2e-13). The margin of this fan has been extensively modified by both tidal currents and waves. There are numerous examples of fan-delta reservoirs, including the Miocene Potter Sandstone of California (Figure 2e-14). Debris flow and turbidite sands in this reservoir are steamflooded to recover heavy oil. The reservoir also contains numerous baffles and barriers of muddy sandstone and siltstone. These formed over the entire surface of the fan-delta during periods when little sand was being supplied, and then were subsequently eroded by channeling and slumping as shown in the depositional model in Figure 2e-14.
Table 2e-1 summarizes the reservoir characteristics of alluvial fan systems and includes observations regarding their vertical profile, their sandbody geometry and lateral continuity, and their reservoir quality.
3. Shallow and Deep Marine Depositional Systems
3a. Introduction. Shallow marine depositional systems include deltas and shoreface sands whereas deep marine depositional systems are characterized by submarine fan sandbodies. Deltas are deposits of sediment formed where a river empties into a body of water. They form where the rate of sediment supply exceeds the ability of waves and tides to disperse it. Shoreface systems are primarily derived from the reworking and transport of deltaic sand along the shoreline. Shoreface sands are found in the narrow, high energy environment extending from wave base (water depth of about 10 meters) to the landward limit of marine processes. Deep marine systems consist of sands that have been transported beyond the edge of the continental shelf into deep water by sediment gravity flows.
This chapter explains how sands and shales in each of these environments are deposited and preserved, and how to recognize them from cores and logs. It also discusses the size, shape, and continuity of different sandbody types, and the key heterogeneities contained within them that impact reservoir fluid flow.
3b. Deltaic Depositional Systems. A delta forms where a fluvial system enters a standing body of water, producing a bulge in the shoreline (Figure 3b-1). Because fluvial systems carry most of the sediment that is deposited in a basin, deltas are very important locations for the accumulation of thick sediments. The facies types within deltas depend upon the influence of the river, waves, and tides. As such, there are river dominated, wave dominated, and tide dominated deltas, as well as combinations of these (Figure 3b-2). The primary locations of sand deposition in a delta are distributary channels, distributary mouth bars, and crevasse splays (Figure 3b-3). Non-pay facies include silts and muds associated with interdistributary bays and abandoned distributary channels.
A cross-section through a prograding, river dominated delta (Figure 3b-4) shows that the upper delta plain contains point bar and crevasse splay deposits similar to a high-sinuosity fluvial system. In the lower delta plain, where the river is influenced by marine processes, the system is dominated by channel fill and mouth bar deposits. Mouthbars form due to the rapid decrease in water velocity as the river terminates into a lake or ocean, resulting in the deposition of a broad apron of sand. As the delta builds seaward (progrades), the distributary channel commonly incises its associated mouthbar (Cross-section B-B’, Figure 3b-4).
Distributary mouthbars are characterized by a coarsening, thickening upward profile on logs (Figure 3b-5). They are commonly more poorly-sorted than distributary channel sands because the sediment is dumped into the standing body of water. They also can contain significant amounts of organic material that has been transported down the river. Distributary channel-fill deposits are characterized by a sharp, eroded base and a blocky to fining-upward character on the logs (Figure 3b-6). There is typically an upward transition from trough cross-bedded sands to planar laminated and rippled sands at the top. The sands can be interbedded with thin shales that form during periods of low river discharge.
Crevasse splays are similar to those found in high-sinuosity fluvial systems (Figure 3b-7). They consist of finer-grained, siltier, and thinner sands than those found in mouthbars or distributary channels. As a result, their permeabilities are 1-2 orders of magnitude less than mouthbar or distributary channel sands. Crevasse splays form when a river breaches its levee, depositing sand in the adjacent interdistributary bay. Because of this, crevasse splay sands are encased by black or gray interdistributary bay fill shales.
In a wave or tide dominated delta, distributary channel-fill and mouthbar deposits are transformed into shoreface sands or tidal bars. Whereas fluvial-dominated deltas have a “bird’s foot” geometry, wave-dominated deltas are elongated parallel to the shoreline and tidally-dominated deltas contain sandbars oriented perpendicular to the shoreline. Figure 3b-8 is a diagram of the Niger Delta which is influenced by both waves and tides. These processes tend to result in cleaner, better sorted sandstones, but they can also destroy a delta by transporting the sand along the shoreline or seaward into the deep ocean.
As a delta progrades, mouthbar sands are deposited on prodelta siltstones and shelf mudstones (Figure 3b-9). This progradation creates a geometry consisting of topset beds, inclined strata, and bottomset beds (Figure 3b-10). This is referred to as a clinoform geometry and is typical of prograding deltas. This demonstrates that deltas do not consist of flat, sheet-like sandstones, but instead consist of sands that thin seaward onto the underlying shelf deposits. Any attempt to correlate these sands should reflect this geometry. A good example is a correlation framework generated for the Romeo interval of the Prudhoe Bay Field in Alaska (Figure 3b-11). It shows that the sands consist of a series of en echelon, off-lapping, fluvially-dominated deltaic wedges.
Deltaic depositional systems are recognizable by their lobate shape and the inclusion of distributary channel, mouthbar and crevasse splay sandstones. Most of the sand is contained within mouthbars that shingle in a down-dip direction. Reservoir quality is best where these sands are winnowed by wave or tidal action. Distributary channel belts range from narrow ribbon-like sandbodies formed during rapid progradation (when sediment supply is greater than basin subsidence), to more sheet-like sandbodies that form during aggradation (when sediment supply and basin subsidence are balanced).
Distributary channels typically have good lateral continuity along depositional dip, with poorer continuity along depositional strike. However, in areas where distributary channels have incised their associated mouth bars, lateral continuity should be good in both the depositional strike and dip directions. Table 3b-1 summarizes this aspect, and other deltaic reservoir aspects important to the production geologist.
3c. Shore-zone Depositional Systems. Shore-zone depositional systems are found in the narrow, high energy environment extending from wave base to the landward limit of marine processes. They are distinguished from deltas by the absence of deltaic elements including a shoreline bulge, distributary channels, and distributary mouth bars (Figure 3c-1). Shore-zone systems include shoreface, barrier island, and tidal delta environments (Figure 3c-2). Sands are derived from the reworking of other deposits including deltas, channel-mouth deposits of coastal plain streams, and shelf sediments transported landward during storms such as hurricanes.
A cross-section through a shoreline shows that the foreshore (beach) is located between low tide and high tide (Figure 3c-3). The sand-rich upper shoreface is located between fair-weather wave base (water depth of 5-15 meters) and low tide. Shalier sands of the lower shoreface are found between storm wave base and fair-weather wave base. Foreshore deposits are characterized by planar laminae that dip gently seaward (Figure 3c-4). Upper shoreface deposits are dominated by powerful currents that carry sand parallel to the shoreline. This produces trough and planar cross-bedded sandbodies that prograde seaward as the shoreline builds out (Figure 3c-5). Because the lower shoreface is below wave base except during storms, the sediments here are finer-grained, shalier, and are commonly reworked by burrowing organisms.
A very distinctive bedform that allows lower shoreface deposits to be recognized in cores and outcrops is hummocky cross-stratification (Figure 3c-6). It is characterized by an erosional base, low-angle pinch and swell laminae, and a hummocky three-dimensional external geometry. Generally these bedforms are less than a meter thick and about 2 to 10 meters wide. Hummocky cross-stratification is the product of storm-generated waves and currents acting below fair-weather wave base.
The shore-zone stratigraphic cross-section in Figure 3c-7 contains a series of timelines that indicate how the shoreline progrades seaward with time. This progradation is often interrupted by marine transgressions (marine flooding surfaces) as sea level rises, the rate of basin subsidence increases, sediment supply decreases, or some combination of all three occurs. This creates a series of shingled, seaward-dipping sandbodies that become finer-grained and shalier downdip.
A distinctive aspect of shore-zone systems is the presence of barrier islands, which contain an enclosed lagoon or bay behind them, and are cut by tidal inlets that facilitate the exchange of water with the open ocean (Figure 3c-8). The characteristics of barrier islands are largely controlled by the tidal range. In microtidal settings (tidal range of 0-2 meters), barrier islands are long and narrow with few tidal inlets. In mesotidal settings, barrier islands are broader and are cut by numerous tidal inlets. Mesotidal barrier islands typically have a wide updrift end, a narrow midsection, and a recurved spit at the downdrift end which produces a characteristic “drumstick” shape (Figure 3c-9). A cross-section through a modern barrier island shows that it builds seaward with time through the addition of sand.
The tidal inlets associated with barrier islands transfer water at high velocities between the open ocean and bay or lagoon behind the island (Figure 3c-10). As the tidal currents disperse at either end of the inlet, sand is deposited in ebb tidal deltas (seaward side of the island) or flood tidal deltas (landward side of the island). With time, these inlets migrate laterally, and the old tidal channel is filled with sand containing bedforms that dip in opposite directions due to the ebb and flow of the tidal currents.
In general, shore-zone sandstones exhibit a coarsening-upward vertical profile and a sheet-like geometry (Figure 3c-11). This contrasts with fluvial and deltaic systems, which are dominated by distributary channel and point bar sandstones that exhibit a fining-upward vertical profile and a lenticular to ribbon-like geometry. Because of their sheet-like geometry, shore-zone reservoirs commonly have excellent lateral continuity, but this is not always the case. For example, sand transported into the lower shoreface during storms is deposited on top of muddy sands (Figure 3c-12). These soft and unstable sediments can deform, causing breaks in the overlying sand sheet and disrupting its lateral continuity.
A good example of a shore-zone hydrocarbon reservoir is the Oak Field of Texas (Figure 3c-13). This field produces gas from low-permeability (cemented) sands deposited along a wave-dominated shoreline. Productive facies types include hummocky cross-stratified sandstones (lower shoreface) and flat parallel laminated pebbly sandstones (upper shoreface and foreshore). The sandstones are laterally very continuous and are vertically separated by bioturbated shelf and back barrier/lagoonal mudstones.
Table 3c-1 summarizes the typical reservoir characteristics of shore-zone deposits. They are distinguished by their coarsening-upward and thickening-upward profile on logs, hummocky cross-stratification and indications of wave and tidal influence in cores, and their extensive lateral continuity. They are also distinguished by worms, shrimp, and other fauna that leave trace fossils in the rocks. These trace fossils include vertical burrows created by animals in the foreshore and upper shoreface that feed from water moving in and out with the tide, and horizontal burrows generated by creatures burrowing through sand and shale below wave base.
3d. Deep Marine Depositional Systems. Deep marine sandstone deposits are formed when sand is transported beyond the edge of the continental shelf into deep water by sediment gravity flows. The path taken by these flows is through submarine canyons cut into the continental shelf. Most deep marine sandstones are deposited during periods of low eustatic sea level (lowstand) when rivers can directly feed these submarine canyons. During periods of high eustatic sea level (highstand), sediment is trapped on the shelf as deltaic and shoreface sands. These are then eroded and transported into the deep ocean as sediment gravity flows when sea level falls.
Sediment gravity flows range from debris flows, which are dominated by laminar flow, to turbidites, which are deposited by fully-turbulent flows (Figure 3d-1). In fully-turbulent flows, energy, grain size, and sediment concentration decrease upwards, resulting in a single sandstone bed with a fining-upward character. The elements that comprise this bed are referred to as a Bouma sequence (Figure 3d-2). A complete Bouma sequence is generally less than one meter thick and includes massive sandstone, parallel laminated sandstone, rippled sandstone, sandy shale, and shale. Most Bouma sequences do not contain all five elements because successive turbidity currents erode the finer-grained upper portions of previously deposited turbidites. This results in the preservation of only the coarser-grained portions of successive flows, which are stacked or amalgamated into thick sands of high reservoir quality (Figure 3d-3).
A deep marine depositional system can be broken into three depositional provinces that include a submarine canyon, a turbidite valley, and a distributary channel complex (Figure 3d-4). The three most important sandstone facies in these provinces are submarine channel-fill and thin-bedded levee/overbank deposits associated with the turbidite valley, and sheet/lobe sandstones associated with the distributary channel complex (Figure 3d-5).
Channel-fill facies are composed of amalgamated massive and parallel laminated sandstones that are blocky or become finer-grained toward the top (Figure 3d-6). They fill submarine channels that have been eroded into underlying sediments (Figure 3d-7), resulting in isolated sandbodies that are difficult to correlate using well logs. Where channels are stacked and eroded into each other, the net-to-gross ratio and sandstone connectivity will be much greater than in a system containing isolated channel-fill sandbodies. Compared to fluvial channels, submarine channels are very large, with channel widths ranging from hundreds to thousands of meters and channel depths ranging from tens to hundreds of meters.
Thin-bedded levee facies form adjacent to submarine channels when turbidity currents spill over the sides of the channel (Figure 3d-8). The levee deposits consist of thin, ripple-bedded sands that are finely interbedded with silt and mud, making them difficult to resolve or correlate using wireline logs. The sands and shales are often less than 1 centimeter thick (Figure 3d-9), resulting in log responses that can be misinterpreted as thick, low-quality, shaly sandstone. Because of this, levee sandstones are often overlooked as potential reservoirs. This is unfortunate because these thin sandstones can have a high net-to-gross ratio, especially at the base of a levee package, and lateral continuity ranging from hundreds to thousands of meters (Figure 3d-10). This results in a significant volume of hydrocarbons that can be produced at high rates (Figure 3d-11).
Sheet/lobe facies are accumulations of sand that form at the downstream end of submarine channels (Figure 3d-12). Sheet facies consist of layered or amalgamated sheets with a blocky or coarsening upward log signature. Amalgamated sheets have an erosive character, resulting in sand-on-sand contacts and a high net-to-gross ratio (Figure 3d-13). Layered sheets are less erosive, which preserves the upper part of each Bouma sequence and results in a lower net-to-gross ratio. In outcrops, sheet sandstones are laterally continuous over large distances indicating good reservoir connectivity between wells (Figure 3d-14). In the downdip direction, amalgamated sheet sandstones grade into layered sheet sandstones that are thinner-bedded, finer-grained, and have a lower net-to-gross ratio (Figure 3d-15). Lobe facies are characterized by the deposition of sand in discrete, lobate sandbodies that exhibit a compensating relationship whereby the thicker portion of the underlying lobe is overlain by the thinner portion of the overlying lobe (Figure 3d-16). Internally, these sandbodies can contain off-lapping shales at multiple scales which may compartmentalize individual lobes (Figure 3d-17).
It is important to remember that submarine depositional systems represent a continuum from channel-fill sandstones at the toe of the continental shelf to channel-levee complexes and sheet/lobe sandstones on the basin floor (Figure 3d-18). Reservoirs can contain some or all of these elements, and it is critical to recognize them and understand their variations in thickness, lateral continuity and petrophysical properties for the purpose of efficient development (Table 3d-1).
A good example of deep marine sandstone reservoirs can be found in the Long Beach Unit, Wilmington Field, California. Oil is contained in semi-consolidated, turbidite lobe/sheet sandstones with a gross thickness of about 1000 meters (Figure 3d-19). Continuous shales divide the oil column into multiple reservoirs with permeabilities ranging from tens to hundreds of millidarcies depending upon grain size, sorting, and clay content. Water injection must be managed separately in each reservoir, which may be accessed with vertical or horizontal wells depending upon reservoir thickness and permeability. Over the past 15 years, an optimized waterflood has been implemented using 3D seismic data, reservoir simulation, pattern waterflooding, hydraulic fracturing, and other techniques to increase proven reserves by 135 million barrels.
4. Sequence Stratigraphy
4a. Introduction. Sequence stratigraphy is an approach for subdividing a stratigraphic interval into correlatable units bounded by unconformities. In general, these unconformable surfaces extend laterally for tens to hundreds of kilometers and are caused by relative changes in sea level. Sea level changes are caused by global fluctuations in ocean volume (for example, due to glaciation), uplift due to mountain building, or downwarp due to basin subsidence. Sequence stratigraphic surfaces provide a framework for correlation and mapping. Facies types between these surfaces are genetically-related to each other in both time and space. These facies can be assigned to depositional environments, which in turn can be assigned to systems tracts that exist between sequence stratigraphic surfaces.
In this chapter, key stratigraphic principles and terms such as parasequence, sequence boundary, and flooding surface are defined. The structure of the general sequence stratigraphic model and systems tracts are explained, followed by a discussion of how this model can be applied to correlation and interpretation work. Finally, this chapter discusses the relevance of the sequence stratigraphic model to production geology.
4b. Fundamental Concepts. Sequence stratigraphy is the study of rock relationships within a chronostratigraphic framework of repetitive, genetically-related strata bounded by surfaces of erosion or non-deposition. It contrasts sharply with the traditional lithostratigraphic framework, which focuses only on correlating similar rock types using the pattern recognition of logs (Figure 4b-1). Sequence stratigraphy not only uses core and log data, but also uses seismic, biostratigraphic, and sea level information to create an interpretation that honors the geometrical relationships of sandbodies within the depositional system (Figure 4b-2). This can have a profound effect on the interpretations of sandbody quality, sandbody connectivity, and how the reservoir is ultimately developed.
Several stratigraphic principles guide the application of sequence stratigraphy. First, it is assumed that beds at the top of a stratigraphic succession are younger than those at the base, and that these beds continue laterally until they pinch out depositionally. These pinch-outs, and the erosion of deposited strata, result in unconformities that are often characterized by an abrupt change in facies types and their orientation (strike and dip). Unconformities may also represent the omission of large periods of time from the rock record.
Second, the vertical stacking of facies results from the lateral movement of adjacent depositional environments over the same location with time (Walther’s Law). This concept helps us predict the vertical and lateral succession of facies types and recognize when discontinuities are present. For example, a progradational shoreface consists of a coarsening-upward succession of lower shoreface, upper shoreface and foreshore deposits. An upward change from lower shoreface sands to marine shale may indicate an abrupt increase in relative sea level, and a corresponding landward shift of the shoreline. Alternatively, it may indicate an abrupt decrease in sea level, resulting in the erosion of overlying upper shoreface and foreshore sands, followed by an increase in sea level and the deposition of marine shale.
Third, in order to understand the ancient rock record, we must understand those processes active in modern depositional environments and apply them in our interpretations. This is sometimes stated as “the present is the key to the past” (Lyell’s Law). Those same processes that form rocks today were active millions of years ago, although the rates at which they formed and the dominant types of rocks deposited may be different. For example, several periods in geological history, including the Devonian and late Jurassic, were dominated by the deposition of anoxic muds that gave rise to prolific hydrocarbon source rocks.
4c. Terminology. The fundamental building block of any sequence is a parasequence which is defined as a succession of genetically-related beds bounded by marine flooding surfaces and their correlative surfaces. Figure 4c-1 shows an example of a coarsening-upward parasequence formed by a prograding shoreface. A marine flooding surface is a surface separating younger from older strata, across which there is evidence of an abrupt increase in water depth. Figure 4c-2 shows a series of progradational parasequences and the marine flooding surfaces that separate them. Multiple parasequences are referred to as a parasequence set (Figure 4c-3).
A parasequence set that builds seaward progrades as a result of an increase in sediment supply due to uplift or a reduction in accommodation space (Figure 4c-4). A parasequence set that builds landward retrogrades as a result of a decrease in sediment supply or an increase in accommodation space (Figure 4c-5). The point at which a retrogradational parasequence set changes into a progradational parasequence set is delineated by a maximum flooding surface, which marks the most landward extent of the deepest water facies within a sequence (Figure 4c-6). If sediment supply and accommodation space are balanced, parasequences will stack vertically, or aggrade. Figure 4c-7 shows a series of typical parasequence sets representing progradational, retrogradational, and aggradational stacking patterns.
If relative sea level falls significantly, a regional unconformity called a sequence boundary will be created (Figure 4c-8). It is characterized by subaerial erosion and a basinward shift in facies. The previous shoreline is eroded, creating an incised valley, and sediment is transported across the continental shelf to form shelf-edge deltas at the new shoreline. Some of this sediment is also funneled down submarine canyons and deposited in the deep ocean basin as submarine fan debris flows and turbidites.
With a subsequent rise in sea level, the shelf will be transgressed, creating a transgressive surface of erosion (Figure 4c-9) characterized by winnowing of the underlying sediment and the concentration of coarse-grained material into a transgressive lag. The marine shale deposited above this lag is referred to as a condensed section, because it is deposited under a reduced sedimentation rate (suspended silt and clay falling from the water column) compared to its correlative strata which are being formed landward along the new shoreline.
Figure 4c-10 captures the entire process described above in a sequence stratigraphic model. A sea level drop associated with the lower sequence boundary results in the formation of a shelf-edge delta and submarine fan. A subsequent rise in sea level forms a transgressive surface overlain by a retrogradational parasequence set. A maximum flooding surface marks the most landward location of the deepest water facies, followed by a progradational parasequence set terminated by a sequence boundary.
4d. Sequence Stratigraphic Example. The best way to explain sequence stratigraphic principles is by means of an example (Figure 4d-1). Assume that sea level falls by 50 feet along an existing coastline (A). Because water depths are generally shallow on the continental shelf, the new shoreline will be translated tens of miles seaward. The area behind the new shoreline is now subject to erosion, and will be cut to form an incised valley (B). The resulting erosional surface is called a sequence boundary.
As sea level begins to rise, each incised valley along the coastline is drowned forming an estuary (C). Rivers flowing into the estuary bring sand and mud that are often reworked by waves and tides to form the tidal bars and tidal flats shown in Parasequence 1. As sea level continues to rise, the coastline moves tens of miles inland (D). Rivers dump their sand along the new shoreline, leaving only silt and mud to be carried by marine currents into the estuary.
The transition from sandy deposits in diagrams A-C (marginal marine conditions, less than 30’ water depth) to overlying silt and mud in diagram D (fully marine conditions, greater than 100’ water depth) is usually abrupt. The boundary is delineated by a marine flooding surface that is marked locally by a thin layer of coarser-grained sand and shells that are reworked and concentrated by wave action to form a transgressive lag. The marginal marine sediments deposited between the underlying sequence boundary and the overlying marine flooding surface are part of a parasequence. They are genetically-related by being deposited during a single episode of marine transgression.
Returning to our example (Figure 4d-2), assume that sea level falls again, this time by a relatively small amount. Rivers will now bring sand to the coastline and deposit it in a delta (E). As the delta builds out into the sea, sediments deposited atop the marine-flooding surface will become increasingly sandy. This gives rise to an upward-coarsening and thickening sandbody character. Assume that at some point sea level rises once again, drowning the coastline and delta, generating a marine-flooding surface, and depositing mud on top of it (F). The deltaic sediment deposited between the first and second marine-flooding surfaces forms a second parasequence.
Thus, as relative sea level fluctuates, parasequences are deposited one on top of the other, with each one bounded by marine flooding surfaces (G) and stacked into progradational, aggradational, or retrogradational parasequence sets. At some point, there will be another large drop in sea level, creating a sequence boundary (H) and the entire process will be repeated.
4e. Systems Tracts. Systems tracts (Figure 4e-1) are defined as related depositional environments that form during a particular position of relative sea level. They include the highstand systems tract (maximum flooding surface to sequence boundary), the lowstand systems tract (sequence boundary to transgressive surface), and the transgressive systems tract (transgressive surface to maximum flooding surface).
In a highstand systems tract, relative sea level is high and sediments are trapped on the shelf, causing the deep ocean basin to be sediment starved (Figure 4e-2). Fluvial systems aggrade landward of the shoreline and near-shore marine deposits prograde onto the shelf (Figure 4e-3).
In a lowstand systems tract, relative sea level drops and rivers incise valleys into the shelf (Figure 4e-4). These valleys carry eroded sediments to shelf-edge deltas, where they may be deposited as part of a prodelta wedge or carried to the deep basin floor onto submarine fans (Figure 4e-5). Because this action bypasses the shelf as a depositional site, the process is referred to as sediment bypass. Fluvial incision characterizes the early lowstand, but as sea level starts to rise again during the late lowstand, fluvial deposits begin to aggrade on the continental shelf (Figure 4e-6).
In a transgressive systems tract, the shoreline is displaced landward due to a relative rise in sea level, leaving behind a relic shelf margin and relic shelf (Figure 4e-7). Underlying sediments are reworked, leaving behind a transgressive lag. If sediment supply is sufficient, and sea level rise is fast enough, a transgressive sand deposit may be preserved above this lag. Deposition above the transgressive surface typically consists of retrogradational parasequences (Figure 4e-8).
Figure 4e-9 contains a typical set of log signatures and stacking patterns for the three different systems tracts. Highstand systems tracts are characterized by a coarsening-upward succession formed by a prograding shoreline. In a lowstand systems tract, sea level drops, a sequence boundary forms, and a coarsening-upward interval is deposited due to aggradation during late lowstand. In a transgressive systems tract, a fining-upward succession is formed during a marine incursion due to the deposition of progressively finer-grained silts and clays.
4f. Application of Sequence Stratigraphy. The application of sequence stratigraphic principles can be accomplished using a methodology that includes several steps. The first step is to construct a basic framework by recognizing sequence boundaries and other surfaces defined from the geometric relationships of seismic reflections (Figure 4f-1). For example, erosional truncation followed by onlap is characteristic of a sequence boundary, as are horizon slices that show incised valleys feeding shelf-edge deltas (Figure 4f-2). The second step is to identify systems tracts within the sequences, taking into account geometries (onlap, toplap, downlap), seismic facies (continuous reflectors, mounds, etc.), and their position relative to the shoreline (Figures 4f-3 and 4f-4).
The third step is to tie the seismic interpretation to the logs and to correlate marine shale marker beds by recognizing and matching distinct log patterns (Figure 4f-5). It is important to remember that no log pattern is unique to, or diagnostic of, a specific depositional environment (Figure 4f-6). This can only be determined by a fourth step that requires core description (Figure 4f-7) and its integration with logs. Once depositional environments are interpreted, they can be used to construct paleogeographic maps showing the possible locations of thicker, better quality sands (Figure 4f-8).
4g. Relevance to Production Geology.
When sequence stratigraphy was first introduced in the late 1970s as seismic stratigraphy, the concepts and nomenclature were applied to packages of sedimentary rocks that were visible using 2D seismic data. These packages were typically tens to hundreds of meters thick. The techniques were primarily used to make predictions about where sand-rich facies might occur within a given stratigraphic interval in a basin.
Since that time, two things have happened. First, the complexity and diversity of sequence stratigraphic concepts and terminology have greatly increased and, second, the scale at which the interpretations are made has become much finer. This has made it possible to define specific play types within a given basin, including highstand deltas and shoreface sands, and lowstand shelf-edge deltas and turbidites. It is also now possible to combine the principles of sequence stratigraphy with high-resolution 3D seismic data to generate more accurate interpretations at the reservoir scale.
There are numerous ways in which sequence stratigraphic processes influence the geometry, continuity, quality, and location of reservoir sandbodies. For example, sequence boundaries define the geometry and size of incised valleys and estuaries that fill with sand and shale during sea level rise. Transgressive surfaces truncate sandbodies and concentrate shell material that forms diagenetic cements. The tops of parasequences may be reworked by waves and tides below marine flooding surfaces, increasing reservoir quality. Updip and downdip reservoir quality and thickness will be influenced by whether a given parasequence set is retrogradational or progradational.
Sequence stratigraphic correlations can profoundly change reservoir interpretations. Sands that were thought to be continuous for the purpose of waterflooding may actually be discontinuous (Figure 4g-1). Sands that are continuous in the strike direction may pinch-out in both updip and downdip directions (Figure 4g-2). Channels that were thought to be distributaries, with associated crevasse splay and mouthbar deposits, may actually be incised valleys (Figure 4g-3). If so, there are likely to be lowstand fans and turbidites farther downdip that may be prolific reservoirs. To generate the most correct interpretation, all of the data including seismic, cores, logs, and engineering data (rates, pressures, fluid properties) must be used. Production geologists who only use logs and sequence stratigraphic principles may find it impossible to make clear choices with respect to the geometry, continuity, and quality of reservoir sandbodies (Figure 4g-4).
The basic concept of correlating time-lines in the subsurface using sequence stratigraphy is important, valid, and useful for constructing a reservoir zonation. However, it is also important to remember that fluids do not move along time lines but rather follow permeability pathways. It is only when reservoir heterogeneities coincide with time-lines that they become important in understanding reservoir behavior. A good example of this would be a low permeability lag associated with a transgressive surface. When permeability contrasts do not coincide with time-lines, they are irrelevant for modeling fluid flow. This is the basis for recognizing the concept of the flow unit which will be discussed later in this manual.
5. Reservoir Geophysics
5a. Introduction. Reservoir geophysics is a subset of geophysics whose primary goal is to interpret seismic data for use in constructing a geocellular model. This model can then be used for determining hydrocarbons-in-place, conducting numerical simulation, and planning new wells. A secondary goal of reservoir geophysics is to identify other opportunities including extension well prospects in adjacent areas, shallower or deeper prospects, and portions of the reservoir suitable for the implementation of enhanced recovery processes.
The key dataset used in reservoir geophysical work is a 3D seismic volume. With the advent of more sophisticated acquisition, processing, and interpretation techniques, the impact of 3D seismic on field development is increasing. Not only is it being used for structural interpretations, reservoir boundaries, and fluid contacts, but it is also finding increasing use for distributing facies, porosity, fractures, and saturations in geocellular models. Figure 5a-1 is an example of the impact of reservoir geophysics on field development. The graph shows the production increase resulting from the use of 3D seismic to define undrained compartments in an existing field. These compartments were successfully targeted by highly-deviated wells, resulting in a tripling of the annual production rate.
Figure 5a-2 shows a typical reservoir geophysics workflow that begins with the acquisition and processing of seismic data, and continues with a seismic interpretation to generate a basic structural framework. This is followed by seismic inversion and attribute extraction which seeks to obtain information from seismic that will guide the distribution of reservoir properties in the geocellular model. This work must be integrated with core, log, production, and well test data to develop a properly calibrated model. The model can then be used for numerical simulation, well planning, and reservoir development.
In this chapter, seismic terminology and key concepts are presented, followed by the fundamentals of seismic acquisition and processing. Recommended steps are then discussed for interpreting seismic data, followed by insights as to how acoustic impedance and seismic attribute data can be used. Finally, this chapter briefly discusses the use of direct hydrocarbon indicators and 4D seismic.
5b. Terminology and Fundamental Concepts. Before discussing how seismic data is acquired, processed, and used, it is important to become familiar with the key terms used in reservoir geophysics (Figure 5b-1). Among the most important of these are frequency and acoustic impedance (AI). Figure 5b-2 shows that frequency decreases with depth, which decreases the vertical resolution of the seismic data (Figure 5b-3). This is why it is important to conduct forward modeling prior to the acquisition of seismic data to determine which frequencies are needed to image subsurface features of interest. AI is the product of a rock’s velocity and density, and it is the AI contrast between different rock layers that results in the amplitude variations seen in seismic traces.
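To make the AI relationship concrete, here is a minimal sketch of how impedance and the normal-incidence reflection coefficient at a layer boundary are computed. The velocity and density values are hypothetical, chosen only for illustration:

```python
def acoustic_impedance(velocity_mps, density_kgm3):
    """AI is the product of a rock's P-wave velocity and bulk density."""
    return velocity_mps * density_kgm3

def reflection_coefficient(ai_upper, ai_lower):
    """Normal-incidence reflection coefficient at the interface between an
    upper and a lower layer; this AI contrast drives seismic amplitude."""
    return (ai_lower - ai_upper) / (ai_lower + ai_upper)

# Hypothetical shale over gas sand: the shale is faster and denser.
shale_ai = acoustic_impedance(2800.0, 2400.0)   # (m/s) * (kg/m^3)
sand_ai = acoustic_impedance(2400.0, 2100.0)

rc = reflection_coefficient(shale_ai, sand_ai)
print(round(rc, 3))  # negative: impedance drops going into the sand
```

The sign of the coefficient indicates whether impedance increases or decreases across the boundary, which is why gas sands beneath shale often appear as strong negative amplitudes.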
Seismic velocity is affected by numerous factors (Figure 5b-4). Velocity increases as fluid density increases, as denser lithologies are encountered, and as greater depths are penetrated due to more rock compaction and the closing of fractures. Velocity also increases as cementation increases and as shale volume decreases. Velocity decreases as porosity increases, as shale content increases, and as pore pressure increases (because high pore pressure supports more of the overburden, preserves porosity, and results in less compaction). In particular, velocity decreases significantly when even a small amount of gas is present (about 5% gas saturation), which can lead to a small, non-commercial quantity of gas being misidentified as a significant gas accumulation.
The density of a rock is controlled by its porosity, fluid density, and matrix density (Figure 5b-5). Coals, for example, have high microporosity that results not only in low velocity values, but in low density values as well. This makes them easily visible on seismic and therefore extremely useful as correlation markers within either seismic amplitude or AI volumes. The same is true of basalt or other igneous rock horizons, which are very dense and therefore have a large AI contrast with adjacent horizons.
The AI contrast and thickness of a bed greatly impact the ability to resolve it with seismic data (Figure 5b-6). For many beds, detection depends on the quality of the seismic data, including the signal-to-noise ratio and the type of wavelet used. Wavelength also plays an important role: the top and bottom of a bed cannot be resolved as distinct reflectors if the bed thickness is less than one-quarter of the wavelength. This is known as the tuning thickness, and it is characterized by increased amplitudes due to constructive interference (Figure 5b-7). As a bed continues to thin, it eventually becomes undetectable below a thickness equivalent to about one-eighth of the wavelength.
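The quarter-wavelength rule can be turned into a quick numeric estimate. A minimal sketch, assuming a hypothetical interval velocity and dominant frequency:

```python
def wavelength_m(velocity_mps, frequency_hz):
    # Dominant wavelength of the seismic wavelet in the interval of interest.
    return velocity_mps / frequency_hz

def tuning_thickness_m(velocity_mps, frequency_hz):
    # Beds thinner than one-quarter wavelength cannot be resolved as
    # distinct top and base reflectors.
    return wavelength_m(velocity_mps, frequency_hz) / 4.0

# Hypothetical values: 3000 m/s interval velocity, 30 Hz dominant frequency.
v, f = 3000.0, 30.0
lam = wavelength_m(v, f)            # 100 m dominant wavelength
print(tuning_thickness_m(v, f))     # 25.0 m tuning thickness
print(lam / 8.0)                    # 12.5 m approximate detection limit
```

Note how a lower dominant frequency at depth (say 15 Hz instead of 30 Hz) doubles the wavelength and therefore doubles the minimum resolvable bed thickness.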
The lateral resolution of the seismic data depends on the width of the Fresnel Zone (Figure 5b-8). As the width of this zone increases with depth, lateral resolution decreases. Figure 5b-9 summarizes some typical resolution limits for seismic data. Vertically, the resolution is generally 5-10 meters versus 0.1 meters for logs. This means that in the construction of a geocellular model, the log data must be upscaled to the resolution of the seismic data in order to equate these two. The opposite is true in the horizontal direction—seismic resolution is on the order of 25 meters, whereas logs are typically spaced at 500 meters or greater. This makes it essential to use both datasets in building a useful geocellular model.
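The Fresnel zone control on lateral resolution can likewise be estimated. The sketch below uses the common approximation that the radius of the first Fresnel zone (for unmigrated data, at depths much greater than one wavelength) is the square root of half the wavelength times the depth; the numeric inputs are hypothetical:

```python
import math

def fresnel_radius_m(velocity_mps, frequency_hz, depth_m):
    # First Fresnel zone radius, r ~ sqrt(wavelength * depth / 2),
    # valid when depth >> wavelength.
    wavelength = velocity_mps / frequency_hz
    return math.sqrt(wavelength * depth_m / 2.0)

# The zone widens with depth (and with the lower frequencies found there),
# which is why lateral resolution degrades on deeper targets.
print(fresnel_radius_m(2500.0, 40.0, 1000.0))  # shallow target
print(fresnel_radius_m(3500.0, 25.0, 3000.0))  # deeper, lower-frequency target
```

Migration collapses the Fresnel zone and improves lateral resolution, which is one reason migrated 3D volumes are preferred for mapping small compartments.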
5c. Seismic Acquisition. The forward modeling conducted prior to seismic acquisition is critical for determining which energy source should be used. In general, the greater the energy generated by the source, the lower the frequencies produced. Thus, a balance must be struck between the penetration (low frequencies) and resolution (high frequencies) needed. Explosives (Figure 5c-1) are placed in shot holes whose depth depends upon factors such as the need to place the charge beneath the low-velocity soil zone near the surface. Vibrator trucks vibrate up and down on base plates that sweep through a designed range of frequencies in a certain amount of time, for example, increasing from 10 to 100 hertz (Hz) in 10 seconds (Figure 5c-2). Multiple trucks are used to put as much energy into the ground as possible, which maximizes the signal-to-noise ratio. At sea, airguns towed behind ships are used to create an acoustic pulse.
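The 10-100 Hz sweep described above can be sketched numerically. The sample rate and the choice of a pure linear ramp are assumptions made for illustration only; real sweep designs include tapers and may be nonlinear:

```python
import math

def linear_sweep(f0_hz, f1_hz, sweep_s, sample_rate_hz=1000):
    """Generate a linear vibroseis sweep: the instantaneous frequency rises
    from f0 to f1 over the sweep length. Phase is the integral of frequency."""
    n = int(sweep_s * sample_rate_hz)
    rate = (f1_hz - f0_hz) / sweep_s          # sweep rate in Hz per second
    samples = []
    for i in range(n):
        t = i / sample_rate_hz
        phase = 2.0 * math.pi * (f0_hz * t + 0.5 * rate * t * t)
        samples.append(math.sin(phase))
    return samples

# The sweep from the text ramps from 10 to 100 Hz at 9 Hz per second.
sweep = linear_sweep(10.0, 100.0, 10.0)
print(len(sweep))  # 10000 samples at the assumed 1 kHz sample rate
```

The recorded traces are later cross-correlated with this known sweep to compress the long source signal into a short effective wavelet.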
In order to record the energy reflected from the subsurface, hydrophones and geophones (Figure 5c-3) are used. Hydrophones towed behind ships convert pressure changes from the reflected acoustic pulse into electrical signals that are recorded. Geophones are implanted in the ground and convert ground motion into electrical signals. Several receivers are often deployed in a group to boost the signal-to-noise ratio. The impulses are relayed to portable stations and then on to a recording truck (Figure 5c-4).
The reflected energy recorded by the receivers is weak and can be contaminated by various noise sources including wind, waves, automobile traffic, ground roll, etc. In order to improve the signal-to-noise ratio, the common mid-point (CMP) method was developed to increase the fold, which is the number of times a given point in the subsurface is sampled by different source-receiver combinations (Figure 5c-5). These traces are then stacked to reinforce the true signal and reduce the noise. The higher the fold, the better the data quality.
Figure 5c-6 shows how this process works. The diagram on the left has a shot source located at the red arrow with receivers spaced every 200 meters to the right of this location. Energy from the shot expands into the subsurface, is reflected from buried interfaces, and is recorded by the receivers. For horizontal beds, the reflection points will be half the distance, or midpoint, between the source location and the receiver of interest. For example, if the receiver spacing is 200 meters, the subsurface reflection points will be spaced 100 meters apart. The next shot, shown by the diagram on the right, is moved a distance equal to the receiver spacing. This creates a new set of reflections with new midpoints. However, since the shotpoint and receivers were moved a distance equal to the receiver spacing, some of the midpoints from the second shot will correspond to those from the first, which ensures that the same subsurface locations will be sampled many times.
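The midpoint bookkeeping described above can be sketched directly; the shot and receiver coordinates below are hypothetical:

```python
# Sketch of common-midpoint geometry for horizontal beds (hypothetical coordinates).
def midpoints(shot_x, receiver_xs):
    """The reflection point for a horizontal bed is midway between shot and receiver."""
    return [(shot_x + r) / 2.0 for r in receiver_xs]

spacing = 200.0
receivers_1 = [spacing * i for i in range(1, 7)]   # receivers at 200..1200 m
mids_1 = midpoints(0.0, receivers_1)               # midpoints at 100, 200, ..., 600 m

# Second shot: source and spread both move one receiver spacing to the right.
receivers_2 = [r + spacing for r in receivers_1]
mids_2 = midpoints(spacing, receivers_2)           # midpoints at 300, 400, ..., 800 m

shared = sorted(set(mids_1) & set(mids_2))         # subsurface points sampled by both shots
```

Repeating this shot-by-shot, each midpoint accumulates traces from many source-receiver pairs; the number of such traces is the fold that is stacked to suppress noise.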
An appropriate acquisition plan is critical for ensuring the success of the data-gathering effort. Shot and receiver lines need to be planned (Figure 5c-7), and factors such as variations in terrain, weather conditions, and crew availability must be taken into account. For example, most seismic surveys in Siberia are run in the winter because the swampy ground is frozen and therefore more easily traversed.
5d. Seismic Processing. The purpose of seismic processing is to convert field recordings into a useful seismic data volume. The processing workflow is typically very complex and consists of multiple steps that include gain recovery, static corrections, deconvolution, normal move-out corrections, stacking, and migration. It is also common for a seismic volume to be reprocessed several times to enhance features that were poorly-resolved by previous processing, and to take advantage of new seismic processing techniques.
Gain recovery is a technique used to compensate for the decrease in seismic amplitude with depth due to spherical divergence and absorption (Figure 5d-1). The decrease in amplitude depends on the depth, rock types, boundary contrasts, and signal frequency. An increase or gain is added to the amplitude to compensate for this decrease (Figure 5d-2). Static corrections account for variations in surface topography and the thickness of the surface weathered layer that can result in the creation of false structures or the inability to resolve actual structures (Figure 5d-3).
Deconvolution is the process of replacing long and complex wavelets generated by seismic sources with a zero phase wavelet that possesses desirable characteristics for interpretation. These characteristics include wavelet symmetry and the ability to associate each trace with a specific interface (Figure 5d-4). The use of a zero phase wavelet improves the log-to-seismic tie and optimizes the seismic resolution for a given frequency range (Figure 5d-5).
Following deconvolution, the data is stacked to improve the signal-to-noise ratio. The first step in this process is to gather the groupings of traces that share a common midpoint. When this is done, it can be seen that the two-way time (TWT) increases as the distance between the seismic source and each receiver increases (Figure 5d-6). This creates an apparent curvature in the subsurface reflectors that must be removed by applying a normal move-out correction, which restores the horizons to their true subsurface position. The data can then be stacked to create a representative seismic trace (Figure 5d-7).
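The apparent curvature removed by the normal move-out correction follows the standard moveout hyperbola, t(x) = sqrt(t0^2 + (x/v)^2); the zero-offset time and velocity below are illustrative:

```python
import math

def nmo_time(t0_s, offset_m, velocity_m_s):
    """Travel time along the moveout hyperbola: t(x) = sqrt(t0^2 + (x / v)^2)."""
    return math.sqrt(t0_s ** 2 + (offset_m / velocity_m_s) ** 2)

def nmo_correction(t0_s, offset_m, velocity_m_s):
    """Time shift removed by the normal move-out correction at a given offset."""
    return nmo_time(t0_s, offset_m, velocity_m_s) - t0_s

# Illustrative: a reflector at 1.0 s zero-offset time, 2000 m/s velocity.
for offset in (0.0, 500.0, 1000.0):
    print(offset, round(nmo_correction(1.0, offset, 2000.0), 4))
```

After subtracting the correction, every trace in the gather images the reflector at its zero-offset time t0, so the traces line up and can be summed.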
Because subsurface reflectors are rarely horizontal, reflected energy may come from locations on either side of an assumed common midpoint. Migration is a processing technique used to reposition this energy to its true subsurface location and to collapse diffraction patterns (Figure 5d-8). Migration also helps shrink the size of the Fresnel Zone, which improves the lateral resolution of the data (Figure 5d-9). Migration is generally conducted on post-stack data; however, in areas where the velocity fields are complex, such as beneath salt domes or thrust sheets, it can be helpful to conduct pre-stack migration.
Processing results are dependent upon the objectives of the processing, the software and techniques used, and the experience of the interpreter. For this reason, reprocessing will always result in differences, and if properly planned and managed, can result in both improved vertical resolution (Figure 5d-10) and improved lateral resolution (Figure 5d-11) of subsurface features.
5e. Seismic Framework Interpretation. The first step in interpreting a seismic volume is to conduct a preliminary scan to determine the quality and completeness of the data. It is particularly important to understand the acquisition and processing parameters, including the static shifts that have been applied and the polarity of the data. It is also critical to understand which seismic volumes may be available and how they were generated. For example, the seismic volume may have been filtered for use in mapping large-scale features for exploration. It may be advisable to use a volume without these filters for geocellular modeling, because it will contain finer-scale variations that could be used to guide the distribution of facies or porosity values.
The second step is to tie the wells to the seismic data using synthetic seismograms (Figure 5e-1). These are created by combining the density and sonic logs (velocity is the reciprocal of the sonic slowness) to generate an AI curve. This curve is then converted to reflection coefficients and convolved with a seismic wavelet to create reflection pulses. These, in turn, are summed to produce a synthetic seismogram. The resulting seismogram is then compared to the actual seismic data at the well location (Figure 5e-2). If the match is not acceptable, the frequency and phase of the wavelet are adjusted to obtain a better match.
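The synthetic-seismogram recipe in this paragraph (AI to reflection coefficients to convolution with a wavelet) can be sketched as follows; the layer AI values, interface times, and the choice of a 30 Hz Ricker wavelet are assumptions for illustration:

```python
import numpy as np

def reflection_coefficients(ai):
    """RC at each interface: (AI2 - AI1) / (AI2 + AI1)."""
    ai = np.asarray(ai, dtype=float)
    return (ai[1:] - ai[:-1]) / (ai[1:] + ai[:-1])

def ricker(freq_hz, dt_s, length_s=0.128):
    """Zero-phase Ricker wavelet, a common choice after deconvolution."""
    t = np.arange(-length_s / 2.0, length_s / 2.0, dt_s)
    a = (np.pi * freq_hz * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Hypothetical three-layer model: AI = density * velocity for each layer.
ai_layers = [6000.0, 9000.0, 7500.0]
rc = reflection_coefficients(ai_layers)   # first interface: (9000 - 6000) / 15000 = +0.2

# Place the two interfaces at 100 ms and 180 ms on a 2 ms trace, then convolve.
dt = 0.002
reflectivity = np.zeros(200)
reflectivity[50] = rc[0]
reflectivity[90] = rc[1]
synthetic = np.convolve(reflectivity, ricker(30.0, dt), mode="same")
```

The resulting trace is what gets slid against the recorded data at the well; a poor tie sends the interpreter back to adjust the wavelet's frequency and phase, exactly as described above.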
Various techniques exist for determining the velocity field that is needed for converting time to depth. Stacking velocities are sometimes used, but they are only considered accurate to within plus or minus 10 percent. Sonic logs can be integrated to determine velocity, but they are sometimes of poor quality (if hole conditions are bad) and they rarely extend from the reservoir all the way up to the surface. The best velocity values are those that come from vertical seismic profiles (VSPs). In a VSP, a seismic source inputs energy at the surface and geophones lowered into a well record both direct and reflected arrivals (Figure 5e-3). This allows the VSP data to be displayed as seismic traces that can be directly compared to the 3D seismic volume. A less expensive alternative to a VSP is a check-shot survey, which measures only direct arrivals.
After making the ties between the wells and the seismic data, the next step is to use various displays (Figure 5e-4) to correlate key horizons and generate time structure maps (Figure 5e-5). As part of this work, faults need to be identified (Figure 5e-6). A technique that can help to identify subtle faults is to construct slope (first derivative) and curvature (second derivative) maps in multiple directions for a given horizon (Figure 5e-7). Those locations where slope and/or curvature change rapidly are good candidates for possible faults. Another technique that has gained popularity in recent years is the coherency cube which makes trace-by-trace comparisons to help locate discontinuities (Figure 5e-8).
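The slope-and-curvature idea can be sketched on a gridded time-structure horizon; the tiny grid and the 50 ms step offset (a crude stand-in for a fault) are hypothetical:

```python
import numpy as np

def slope_and_curvature(horizon, spacing=25.0):
    """Slope (first derivative) and a simple Laplacian curvature (second derivative)
    of a gridded horizon; abrupt changes in either flag candidate faults."""
    gy, gx = np.gradient(horizon, spacing)              # derivatives along rows, columns
    slope = np.hypot(gx, gy)
    curvature = np.gradient(gy, spacing, axis=0) + np.gradient(gx, spacing, axis=1)
    return slope, curvature

# Tiny synthetic horizon with a step offset along one column:
z = np.zeros((6, 6))
z[:, 3:] = 50.0                                         # 50 ms "throw"
slope, curv = slope_and_curvature(z)
# The slope map peaks along the step, highlighting the discontinuity;
# away from it, both slope and curvature are flat.
```

Real implementations compute directional derivatives in several azimuths, but the principle is the same: the derivative maps amplify subtle breaks that are hard to see on the raw time-structure map.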
Once the major stratigraphic packages have been defined, the internal reflection character within each of these can be examined to identify reflection terminations (Figure 5e-9). These, in turn, can be used to identify sequence boundaries, transgressive surfaces, marine flooding surfaces, and systems tracts. More detailed analysis of these packages will reveal three-dimensional seismic forms generated by reflections (Figure 5e-10). These can be assigned to various facies types and depositional settings, which can, in turn, be validated with core and log data.
Once the seismic has been interpreted in time, it needs to be converted to depth. The simplest way to convert a 3D-dataset from time to depth is to generate two identical grids (in time and depth) tied to an upper and a lower structural surface which are each tied to the wells (Figure 5e-11). Using a geocellular modeling tool such as Roxar’s RMS or Schlumberger’s Petrel, the 3D time grid is simply “snapped” to the depth grid. Intermediate horizons can then be added through structural modeling. Faults can be added through fault modeling and the results can be checked against fault cuts in wells (Figure 5e-12).
The use of 3D seismic data to construct a geocellular model can provide significant improvement relative to using only well data. The seismic data help to delineate field boundaries, identify compartments, recognize key correlation horizons, and image fluid contacts. Figure 5e-13 is an example of how 3D seismic can be used to greatly improve the reservoir description generated by production geologists.
5f. Acoustic Impedance, Attribute Analysis, Direct Hydrocarbon Indicators, and 4D Seismic. Once the basic stratigraphic and structural framework of the reservoir has been constructed, additional work can be conducted to distribute properties within this framework and quantify the distribution of fluids.
One of the most useful techniques for guiding the distribution of facies is to generate an AI volume from the seismic amplitude volume through a process called seismic inversion. This is the inverse of the technique used to create a synthetic seismogram (Figure 5f-1). Seismic inversion is accomplished by first generating AI logs from the density and sonic log data and distributing these values throughout the model volume. The seismic response computed from this model is then compared to the recorded seismic volume, and the difference between the two is minimized by changing the wavelet or other parameters. This is an iterative process that stops when a sufficient correlation coefficient or a maximum number of iterations is reached. In many cases, it is helpful to “hide” some of the wells during this process in order to check the validity of the seismic inversion at various locations in the model.
In essence, an AI inversion is the transformation of seismic data into pseudo-impedance logs at each trace. This process makes the AI data comparable to the log data and therefore easier to relate to the geology than seismic amplitude. In addition, because AI is a rock property (the product of rock density and compressional-wave velocity), direct relationships can be established between AI and other parameters such as facies and porosity. An example of this is shown in Figure 5f-2, a cross-section through an AI volume in which red and yellow colors correspond to higher values of acoustic impedance. Log and core data through this reservoir show that higher values of AI are related to higher-porosity, clean sandstones (Facies 1), while lower-porosity, shalier sandstones (Facies 4) are characterized by lower AI values (Figure 5f-3). Using these relationships, AI can be used to guide the distribution of different facies types within the geocellular model (Figure 5f-4).
AI is one of many different kinds of seismic attributes. These are any characteristics that can be derived from seismic data, but most commonly are measurements of time, amplitude, frequency, phase, and their spatial relationships (Figure 5f-5). Attributes are extracted from seismic traces and compared to various properties (for example, porosity) to find meaningful correlations that will provide information in the interwell areas. If an acceptable correlation exists, then a technique such as multiple regression, geostatistics, or a neural network can be used to distribute the property of interest using the attribute to guide the distribution. Not only are attributes useful for distributing properties in interwell areas, but they can also be used to extend an interpretation beyond the area of well control (Figure 5f-6). It is important to remember that the use of attributes is an empirical process and what works in one reservoir may not work in another. Nonetheless, experience has shown that specific attributes can consistently be used to identify certain features (Figure 5f-7).
3D seismic data can also be used to provide direct evidence of hydrocarbons and fluid contacts. One commonly used technique is amplitude versus offset (AVO). This technique is based on the principle that the amplitude of a reflected seismic signal normally decreases as the angle between the source and the receiver increases. If instead, the amplitude increases with increased offset, this results in a “bright spot” that may indicate a gas-charged sand (Figure 5f-8). Gas anomalies are easier to see than oil anomalies because the effect of gas on acoustic properties is much greater than oil.
Because reservoir fluids impact the seismic response of the subsurface, changes in these fluids as a result of production will change the seismic signature. This is the basis of 4D seismic which consists of comparing 3D seismic surveys taken at different times to understand changes in reservoir properties (Figure 5f-9). This work is often accompanied by forward modeling (using log data) in order to predict the changes in seismic response that should be seen as a result of fluid injection and withdrawal. A common response is to see the reservoir “dim-out” over time as pressure drops and low velocity oil and gas are replaced by higher velocity water. It may also be possible to see bypassed pay (no dimming over time) or secondary gas caps (brightening with time).
6. Fractured Reservoirs
6a. Introduction. Fractured reservoirs are defined as any reservoir in which naturally occurring fractures have, or are predicted to have, a significant effect on flow rates, anisotropy, recovery, or storage. Unfortunately, many companies fail to recognize the fractured nature of their reservoirs, and do a poor job of quantifying the effects of fractures. This, in part, is because fractured reservoirs are more complicated than unfractured reservoirs, requiring more time, effort, and money. It is easy, at least initially, to ignore fracturing, but doing so can be disastrous. For example, tens of millions of dollars were spent to build facilities for injecting gas to recover additional oil in an African field. Fracturing was not expected to impact this project, but when injection started, gas broke through almost immediately from the injectors to the producers without contacting the oil. Afterwards, it was determined that the reservoir was pervasively-fractured, and that the gas injection project should have never been implemented.
The goal of the production geologist is to first decide whether the reservoir of interest is fractured. Table 6a-1 includes a list of questions that should be asked to help determine this. Some characteristics of fractured reservoirs include highly variable well rates, well test permeabilities that greatly exceed matrix permeabilities, and injected fluids unexpectedly affecting wells located a considerable distance away. Once the reservoir is recognized as fractured, work can begin to determine the types of fractures that are present, quantify their intensity and connectivity, and understand the degree of interaction between the fractures and matrix rock. The resulting information can then be used to identify where wells should be placed, and how the wells should be oriented to encounter the best-developed fractures. Insights regarding the distribution of these fractures are also critical for optimizing production from primary and secondary recovery projects.
In this chapter, key concepts will be presented followed by a discussion of the different types of fractures (tectonic, regional, contractional, or induced) and the character of the fracture plane (open, mineral-filled, vuggy, or deformed). Methods to detect fractures including logs and well tests will then be reviewed, and techniques to estimate fracture porosity, permeability, and productivity will be presented. The last portion of this chapter will summarize the steps that should be taken in data gathering and reservoir characterization.
6b. Key Concepts. A fracture is defined as a macroscopic planar discontinuity interpreted to have formed by structural deformation or by physical diagenesis. It may be due to compaction or tensional processes, and thus can have either a positive or negative effect on fluid flow, and its characteristics may have been modified by subsequent deformation or diagenesis. Fractures can be classified based on their propagation mechanism (Figure 6b-1): Mode I tension fractures, Mode II shear fractures in which the shear and propagation directions are parallel, and Mode III shear fractures in which the shear and propagation directions are perpendicular to each other.
Several factors control the intensity of fracturing in rocks. Rocks that are stronger, contain more brittle components, have lower porosities, have finer grain sizes, are contained within thinner beds, or are found within zones of high strain (more structural curvature) tend to be more fractured. It is also important to remember that rocks which are anisotropic and heterogeneous may show substantial variations in fracture intensity.
Rocks in the subsurface are subjected to three principal stress directions: a maximum principal stress (σ1), which is parallel to the maximum compressive direction; a minimum principal stress (σ3), which is the least compressive or tensional stress; and an intermediate stress, which is typically the vertical stress (Figure 6b-2). Fractures propagate parallel to the maximum principal stress and open in a direction parallel to the minimum compressive stress.
It is commonly assumed that fracture orientations depict the state of stress at the time of fracturing. This is an important concept because it allows us to determine the structural history of a given reservoir and place it within a regional tectonic history. It is also commonly assumed that rocks in the laboratory fracture in a manner qualitatively similar to equivalent rocks in nature, which allows us to understand the physics of rock deformation and predict its impact on wellbore stability, hydraulic fracturing, and fluid flow.
Fractured reservoirs can be divided into four different types. In Type 1 reservoirs, the fractures provide both the porosity and the permeability. These reservoirs are characterized by high initial production rates and large drainage areas for each well, and the best wells are often drilled early in field development. In Type 2 reservoirs, the fractures provide the permeability and the rock matrix provides the porosity. In these reservoirs, fractures control the well rates, and these rates may have to be reduced to limit coning. Type 3 reservoirs have good matrix porosity and permeability, and fractures simply increase the permeability. In these reservoirs, it is critical to properly orient well patterns and choose the appropriate pattern geometry in order to take advantage of the fractures. In Type 4 reservoirs, the fractures are barriers to flow, resulting in inefficient drainage and poor sweep efficiency.
Figure 6b-3 shows how reservoir Types 1, 2, and 3 compare with respect to the amount of fracture porosity and fracture permeability found in each reservoir type. This figure also summarizes the critical issues to consider in each reservoir type.
6c. Types of Fractures. Fractures can be divided into four different categories. The first of these, tectonic fractures, are natural fractures that can be subdivided into fault-related and fold-related fractures. They range across 9-10 orders of magnitude in scale, from microscopic fractures to basin-scale features. Fault-related fractures typically form at a 30-60° angle to the maximum principal stress (σ1), which is parallel to the fault strike (Figure 6c-1). There is generally a strong relationship between faulting and fracturing, with fracture intensity decreasing away from a fault (Figure 6c-2). The matrix rock associated with these fractures is commonly crushed, resulting in a damage zone characterized by a permeability decrease of 1-4 orders of magnitude (Figure 6c-3). These damage zones typically have a width equivalent to 2.5 times the throw of the fault.
Fold-related fractures result from the compression and extension of rocks associated with structural folds (Figure 6c-4). The position, orientation, and intensity of these fractures vary with fold shape and origin, and can be very complex (Figure 6c-5). The position of these fractures is primarily controlled by the direction and degree of structural bending (Figure 6c-6). Maps of the dip magnitude (first derivative) and curvature (second derivative) can be constructed to help locate those areas where fractures are likely to be best-developed.
The second category, regional fractures, are natural fractures developed over large areas of the earth’s surface, ranging from cleats observed in coal samples to large-scale fractures extending for hundreds of kilometers (Figure 6c-7). Because they are perpendicular to bedding, they are interpreted to result from a vertical maximum principal stress. Possible origins include inherited flaws in the rock, regional uplift with locked-in pore pressure, and compaction. Regional fractures show relatively little change in strike over long distances and no evidence of offset along their fracture planes, and they typically consist of one systematic (through-going) fracture set and one non-systematic fracture set. They are unrelated to local structure and are often geometrically aligned with the basin shape.
A third category of natural fractures, contractional fractures, include a collection of fractures with various origins. Each fracture is an extension fracture associated with a general bulk volume reduction throughout the rock. Processes generating contractional fractures include desiccation, syneresis, thermal gradients, and mineral phase changes. Chickenwire fractures are closely-spaced contractional fractures with a 3-dimensional polygonal shape (Figure 6c-8). These are known to be productive from gas fields such as the Panoma Field in Kansas. Columnar joints are another common type of contractional fracture commonly found in igneous rocks (Figure 6c-9).
Induced fractures are created from the weight of the drill bit, which concentrates stress on the core. Induced fractures grow vertically in a direction parallel to the maximum principal stress. This produces a centerline fracture (Figure 6c-10) which is a continuous fracture that bisects the core. Often the fracture curves sharply towards the core boundary which is generally a surface free of shear stresses. This forms a petal fracture. A petal fracture that merges with a centerline fracture is called a petal-centerline fracture.
Natural fractures can be distinguished from induced fractures by several criteria. First, natural fractures enter and exit the entire core at an angle (are not parallel to the core axis). The fracture faces have a dirty or “corroded” appearance—they are not “fresh” breaks. If the fracture faces contain euhedral crystals or bitumen staining, this indicates that the fracture will be open in the subsurface and capable of transmitting fluids. The occurrence of a natural fracture will also often coincide with a natural weakness such as a stylolite or bed boundary.
6d. Character of the Fracture Plane. The character of the fracture plane, and particularly the aperture (width of the opening) of the fracture plane, is a critical element for transmitting fluid through a fracture. Fracture permeability is a function of the aperture distribution, and like many other geometrical properties can be described by a log-normal or power law distribution. This means there will be many fractures with small openings, but few fractures with large openings. In general, the longer the fracture, the greater the aperture. The aperture at depth is a function of pore pressure, the preservation of bridging minerals, and surface roughness.
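The aperture-permeability link described above is often approximated with the parallel-plate ("cubic law") model; this idealization, and the unit conversion below, are standard in the fracture literature but are assumptions not stated in the text, since real fracture walls are rough and partially bridged:

```python
# Parallel-plate ("cubic law") idealization: a smooth planar fracture of aperture w
# has intrinsic permeability k = w^2 / 12 (consistent units).
def fracture_perm_darcy(aperture_microns):
    """Permeability of an idealized planar fracture, converted to darcies
    for an aperture given in microns."""
    aperture_cm = aperture_microns * 1.0e-4
    k_cm2 = aperture_cm ** 2 / 12.0
    return k_cm2 / 9.869e-9            # 1 darcy = 9.869e-9 cm^2

# Permeability scales with the square of aperture, so the few large-aperture
# fractures in a log-normal population dominate flow:
for w in (10.0, 50.0, 100.0):
    print(w, round(fracture_perm_darcy(w), 1))
```

The quadratic dependence on aperture is why the log-normal aperture distribution matters so much: doubling the aperture quadruples the permeability of that fracture.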
There are four different types of fracture plane character. The first is open fractures, through which fluid can move without being slowed by deformation or by diagenetic minerals filling the space between the fracture walls (Figure 6d-1). Open fractures are visible in core as openings along the fracture plane and/or oil staining of the adjacent rock. The second type of fracture plane character is mineral-filled fractures, which can be either partially or totally filled by secondary minerals (Figure 6d-2). A third type is vuggy fractures, which contain vuggy porosity along the fracture plane (Figure 6d-3). These vugs form as fluid moving along the fracture plane dissolves the adjacent rock, creating pore space that can be filled with hydrocarbons. This is a very important porosity type in many Middle Eastern carbonate reservoirs.
The fourth type of fracture plane character is deformed fractures and includes both slickensides and stylolites. Slickensides are polished or striated surfaces along a fracture plane (Figure 6d-4). These form as material is pulverized by the motion of fracture faces past each other, or by grain melting which creates glass. In either case, this deformation reduces permeability along the fracture plane. Stylolites are pressure solution features along which rock has dissolved (Figure 6d-5). This creates two distinct interlocking fracture planes separated by insoluble constituents of the host rock. As a result, they don’t add a significant amount of fracture permeability to the reservoir.
Fracture plane character is best resolved by looking at thin sections which can reveal whether a fracture is open and what types of minerals it contains (Figure 6d-6). The succession of minerals deposited in the fractures is helpful for understanding the types of fluids that moved through the reservoir, and what the likely impact of this will be on matrix reservoir quality.
6e. Detecting and Quantifying Fractures. Multiple techniques exist for detecting and quantifying fractures. The most obvious technique is the description of fractures from core and thin sections, but there are many other methods including variations in fluid losses and drilling rates, the use of various open-hole and cased-hole logging tools, well tests, and the analysis of reservoir curvature and borehole breakout patterns.
Fluid losses result when the pressure exerted by the mud column in the wellbore is greater than the reservoir pressure. Normally these losses are minor, but there can be a significant loss of drilling fluid and dramatically increased penetration rates if fractures are penetrated (Figure 6e-1). The depths at which high fluid losses and increased penetration rates are encountered in a well are therefore good indicators of fracturing.
Before the advent of image logs to directly identify fractures, several different types of conventional logs were used for indirect fracture detection. Sonic logs can detect fractures from the energy loss caused by the attenuation of acoustic pulses. Gamma Ray logs identify fractures by responding to water-soluble radioactive salts in the fractures. Density logs and micrologs, which place pads on the borehole wall, respond to the accumulation of mudcake which is caused by the leak-off of mud filtrate into the fractures. This leak-off is seen as deep invasion by the resistivity logs.
One of the best conventional techniques for detecting fracture porosity is the separation between the sonic and density porosity curves (Figure 6e-2). Because the compressional sonic waves travel through the rock matrix, they do not detect fracture and vuggy porosity. However, because the density log detects this porosity, subtracting the sonic from the density porosity curve will provide an indication of the amount of fracture + vuggy porosity. Another useful conventional log curve is the dipmeter log, which detects variations in dip indicating the presence of faults and folds (Figure 6e-3). These are the most common locations in which fractures can be found.
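The sonic-density separation described above amounts to a simple porosity difference; the log readings below are hypothetical:

```python
# Sketch of the sonic/density porosity separation (hypothetical readings).
def secondary_porosity(density_phi, sonic_phi):
    """Fracture + vuggy porosity estimated as density porosity minus sonic porosity
    (both fractional); the sonic tool reads only matrix porosity."""
    return max(density_phi - sonic_phi, 0.0)

# E.g., a density porosity of 0.12 against a sonic porosity of 0.09 suggests
# roughly 3 porosity units of fracture + vuggy porosity.
print(round(secondary_porosity(0.12, 0.09), 3))
```

The estimate is only as good as the two porosity logs; bad hole conditions degrade both curves and can create false separation.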
Over the last 15 years, image logs have become the primary wireline tool for quantifying fractures. Image logs include both the resistivity-based formation micro-imager (FMI) and the acoustic-based ultrasonic borehole imager (UBI). Image logs display fractures as sine waves in which the amplitude of the sine wave is proportional to the dip magnitude and the dip azimuth is equivalent to the minimum of the sine wave (Figure 6e-4). The tool also resolves fracture displacement and different facies types (Figure 6e-5).
Another advantage of image logs is their ability to resolve borehole breakouts (Figure 6e-6). These are enlargements or elongations of the borehole in a direction perpendicular to the maximum compressive stress. Breakout commonly occurs along drilling-induced or natural fractures. Not only can borehole breakouts be used to determine the in-situ stress directions, but they can also be used to locate fracture porosity and permeability, to estimate in-situ stress magnitudes, and to guide directional wells so that they intersect the greatest number of open fractures.
Well tests are the most definitive way of assessing the permeability associated with a fracture system. Well tests sense the matrix + fracture permeability, which allows the fracture permeability to be determined by subtracting the matrix permeability from the combined value. In order to calculate a well test permeability, the reservoir thickness (h) must be estimated and the test must be conducted for a long enough period to reach infinite acting radial flow. In Figure 6e-7, this is the point where the wellbore transition ends and the fracture dominated period begins. If the test is extended for a sufficiently long period, the matrix will begin to respond, indicating dual porosity behavior which is diagnostic of a reservoir with both fracture and matrix contributions.
Production logs are used to determine which wellbore intervals are contributing fluid, and therefore where the fractures are likely located. Production logs include spinner, temperature and noise logs (Figure 6e-8). Production logs generally show a good correlation between fracture aperture size and production, but there is not always a good correlation between fracture intensity/density (number of fractures) and productivity because these fractures may be closed. There are also times when fractures with large apertures may provide little production. This is an indication of formation damage which may require a stimulation treatment in order to restore productivity.
6f. Fracture Porosity, Permeability, and Productivity. Matrix porosity is defined as the ratio of the pore volume to the bulk volume of a rock (Figure 6f-1). By contrast, fracture porosity is a function of the average effective width of the fractures and the average spacing between the fractures. Fracture porosity rarely exceeds about 2%, and is generally less than 0.5%. Although fracture porosity is low, it is highly effective (well-connected) porosity, and therefore the recovery factor for hydrocarbons in the fracture system is generally quite high. In addition, if there is good matrix porosity, communication between the fractures and matrix should be good unless the fracture permeability is inhibited by mineralization or deformation of the fracture planes.
Several techniques exist for estimating fracture porosity. Whole core analysis of a 3 or 4 inch diameter core can be used to measure fracture + matrix porosity, and then a core plug in an unfractured part of the whole core can be used to measure matrix porosity. The difference between these is equivalent to the fracture porosity. This is a reasonable technique for closely-spaced fractures, but would not be useful for capturing the porosity of fractures spaced greater than the core diameter. A second technique, if the matrix porosity is negligible, is to estimate fracture porosity from the fracture spacing and the permeability of the entire system.
A third technique to estimate fracture porosity is to measure the permeability of fractured and unfractured companion samples of reservoir rock under confining pressure in the laboratory. The permeability difference between these two curves is considered to be the fracture permeability. This value, combined with an estimate of the average fracture spacing, can be used to calculate the effective width of the fractures. The fracture spacing and effective width can then be used to calculate fracture porosity. A fourth technique to estimate fracture porosity is to use the effective fracture width and fracture spacing from an image log.
Natural fracture width and spacing are obviously key parameters in calculating both fracture porosity and permeability. Published values of fracture width have a range of 4-5 orders of magnitude depending on the study and the types of rocks analyzed (Figure 6f-2). Using the data from these studies, or laboratory measurements on rocks from the reservoir of interest, a graph like the one in Figure 6f-3 can be used to estimate fracture porosity and the fraction of total pore volume attributable to fractures. Once fracture porosity is estimated, it can be used to estimate the fracture volume in a given reservoir knowing the thickness of the hydrocarbon column and the well drainage area (Figure 6f-4). In a similar fashion, knowledge of the fracture width and spacing can be used to estimate fracture permeability and the fraction of permeability attributable to the fractures (Figure 6f-5).
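As a rough illustration of how width and spacing control both quantities, a parallel-plate idealization (smooth, planar, evenly spaced fractures) can be sketched as follows. Real fracture networks are rougher and less regular, so the results should be treated as order-of-magnitude estimates, not measurements:

```python
MD_PER_M2 = 1.0 / 9.869233e-16  # 1 m^2 expressed in millidarcies

def fracture_props(width_m, spacing_m):
    """Fracture porosity and bulk fracture permeability for an idealized
    set of smooth parallel-plate fractures of uniform width and spacing."""
    phi_f = width_m / (width_m + spacing_m)        # fracture porosity (fraction)
    k_bulk_m2 = width_m ** 3 / (12.0 * spacing_m)  # cubic-law bulk permeability
    return phi_f, k_bulk_m2 * MD_PER_M2

# Example: 100-micron effective width, one fracture every 10 cm.
phi, k_md = fracture_props(width_m=100e-6, spacing_m=0.10)
print(f"fracture porosity ~ {phi:.4%}, bulk fracture permeability ~ {k_md:.0f} md")
```

For these illustrative inputs the fracture porosity is about 0.1% (consistent with the "generally less than 0.5%" statement above), yet the bulk fracture permeability is on the order of hundreds of millidarcies, which is why even tiny fracture porosities can dominate flow.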
In addition to estimating fracture porosity and permeability, a fracture index can be used to estimate the intensity of fracturing. Fracture index is defined as the ratio of the permeability-thickness product (Kh) from a well test to the Kh of the matrix. The well test Kh is determined from a pressure buildup or injection falloff test and measures the contribution of the matrix + fracture permeability. The matrix Kh is determined from the core analysis of unfractured samples. If the fracture index is approximately equal to 1, this indicates matrix-dominated flow as a result of unconnected fractures or connected fractures with low permeability. If the fracture index is significantly greater than 1, this indicates fracture-dominated flow through high-permeability, interconnected fractures in the near-wellbore region. An example fracture index map is shown in Figure 6f-6.
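The fracture index itself is a simple ratio; the example values below are hypothetical:

```python
def fracture_index(kh_welltest_md_ft, kh_matrix_md_ft):
    """Ratio of well-test Kh (matrix + fractures) to matrix Kh from core."""
    return kh_welltest_md_ft / kh_matrix_md_ft

# FI ~ 1 suggests matrix-dominated flow; FI >> 1 suggests
# fracture-dominated flow in the near-wellbore region.
fi = fracture_index(kh_welltest_md_ft=5000.0, kh_matrix_md_ft=100.0)
print(fi)  # 50.0 -> strongly fracture-dominated near this wellbore
```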
It is important to remember that the fracture index estimates the intensity of fracturing and tells us little about the connectivity of fractures. Even though the fracture intensity may be high around a given well, the important thing is how these fractures are connected to drain the entire reservoir volume, and whether these fractures persist to the next well location. Long-term well tests, including interference tests, tracer tests, or gas injectivity tests, are needed to determine the connectivity of the fracture system.
6g. Data Gathering and Reservoir Characterization. The characterization of a fractured reservoir should focus on the determination of the fracture system properties including 1) the fracture types (fault-related, fold-related, regional fractures) and their locations, 2) whether there is a single fracture direction or multiple fracture directions, and 3) fracture properties including fracture porosity, permeability, intensity, and connectivity. Using this information, 1) the appropriate recovery process can be implemented, 2) drill locations and well paths can be optimized, and 3) the reservoir can be effectively managed.
More specifically, there are several opportunities during the life of a reservoir to collect and analyze data that can be used to make good development decisions. The first of these is the evaluation stage during which it is important to 1) obtain cores and/or image logs in early wells, 2) predict the distributions of natural fractures, 3) select optimum well
locations and well paths, 4) determine and map in situ stress from borehole breakouts and other data, 5) determine the type of fractured reservoir, and 6) evaluate reserves, reservoir variability, and risk.
The second stage is the primary recovery stage during which it is important to 1) collect the appropriate static data including reservoir thickness, porosity, and saturation, 2) perform multiple well tests to determine matrix + fracture permeabilities, reservoir pressures, and skin factors, 3) model the fracture system and in situ stress, and correlate these with the dynamic data, 4) determine directional permeability vectors, 5) correlate fracture directions, in situ stress, and directional permeability, and 6) refine reservoir simulations using improved fracture descriptions.
The third stage is the secondary recovery stage during which it is important to 1) re-evaluate flood patterns and adjust them accordingly, 2) evaluate water production in terms of the fracture contribution, 3) model the in situ stress across the field, 4) infer characteristics of the fracture system from dynamic data, 5) re-evaluate reservoir simulations to include fracture anisotropy, and 6) revise the predicted recovery factors.
By conducting this work, it is much more likely that fractures will be identified, appropriately quantified, and honored in the drilling, completion, and production of the reservoir, and that the development will not be plagued by unwelcome surprises or disastrous consequences.
7. Capillary Pressure
7a. Introduction. Capillary pressure is the pressure that resists the buoyant force of oil and gas migrating into reservoir rocks. Capillary pressure controls the trapping of hydrocarbons and the distribution of fluid contacts. In addition, capillary pressure concepts can be used to evaluate reservoir rock quality, understand the expected fluid saturations under original conditions (including the height of the hydrocarbon-water transition zone), evaluate the capacity of the reservoir seal, and estimate the recovery efficiency of a displacement process.
Without a good understanding of capillary pressure, geologists can make fundamental mistakes in reservoir analyses. For example, if reservoir rock quality is poorer (and therefore the capillary pressure is higher) on one limb of an anticline, then the corresponding oil-water contact will be higher. Many geologists incorrectly interpret this as a tilted oil-water contact, when it is simply the consequence of differences in capillary pressure. The limb of the anticline with poorer-quality rock will also have a longer transition zone and therefore a higher average water saturation than the limb of the anticline with better quality rock. Geologists who fail to recognize this will assign a single average value of water saturation to the entire reservoir, instead of honoring those differences in water saturation caused by variations in capillary pressure.
This chapter begins with a discussion of key concepts including buoyancy and capillary forces, and the capillary pressure equation. This is followed by an explanation of how capillary pressure is measured in the laboratory and how the results are converted to the hydrocarbon-water system in the reservoir. The use of this data to determine displacement pressures and saturation distributions is then discussed, as well as how to create a normalized curve (J-function) for distributing saturations in a geocellular model. Other topics regarding capillary pressure data are then reviewed including the determination of seal capacity and the role of hydrodynamic conditions in creating tilted oil-water contacts. Finally, several examples are presented showing how capillary pressure controls the location of oil-water contacts in different types of reservoirs.
7b. Buoyancy and Capillary Forces. Most reservoirs are at hydrostatic conditions which means there are no hydrodynamic factors influencing the position of fluid contacts. Under hydrostatic conditions, the force that pushes hydrocarbons into reservoir rocks is buoyancy, and the amount of buoyant force is a function of the density difference between the water and hydrocarbon phases. The greater the density difference, the greater the buoyant force for a given thickness of rock. The location of zero buoyant force is defined as the free water level, and the difference in pressure between the oil phase and water phase at any point above the free water level is equal to the buoyancy pressure.
The buoyancy gradient is the rate of buoyant pressure increase with height above the free water level. It is calculated by subtracting the oil pressure gradient from the water pressure gradient (Figure 7b-1). Because the buoyancy pressure increases with height, more water can be expelled from the pores and replaced with hydrocarbons. Capillary
pressure is a force that opposes this buoyancy pressure. If the buoyancy pressure does not exceed the capillary pressure in a given pore throat, then the oil cannot move through it and displace the water in the adjacent pore (Figure 7b-2). A greater oil column thickness (or a longer oil droplet in the case of Figure 7b-2) is required to generate a sufficient buoyancy pressure to overcome the capillary pressure.
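A minimal sketch of the buoyancy calculation, assuming field units (a fluid of 1.00 g/cm3 exerts a gradient of about 0.433 psi/ft); the fluid densities are illustrative, not from any particular figure:

```python
PSI_PER_FT_PER_GCC = 0.433  # pressure gradient of a 1.00 g/cm^3 fluid, psi/ft

def buoyancy_pressure_psi(rho_water_gcc, rho_oil_gcc, height_ft):
    """Buoyancy pressure at a given height above the free water level:
    (water gradient - oil gradient) x height."""
    buoyancy_gradient = (rho_water_gcc - rho_oil_gcc) * PSI_PER_FT_PER_GCC
    return buoyancy_gradient * height_ft

# Example: formation water of 1.05 g/cm^3, oil of 0.80 g/cm^3,
# evaluated 100 ft above the free water level.
pb = buoyancy_pressure_psi(1.05, 0.80, 100.0)
print(round(pb, 1))  # ~10.8 psi available to overcome capillary pressure
```

The smaller the density contrast, the flatter the buoyancy gradient, and the taller the column needed to reach any given buoyancy pressure.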
7c. The Capillary Pressure Equation. Capillary pressure results from a combination of forces acting within and between fluids and their bounding solids. Capillary pressure is controlled by the interfacial tension between different fluid types, the contact angle between the solid and fluid interface, and the pore throat radius (Figure 7c-1).
Interfacial tension is the surface free energy between two immiscible fluids. The higher the interfacial tension, the higher the capillary pressure. In an oil-water system, interfacial tension decreases as temperature increases (Figure 7c-2). In a methane-water system, interfacial tension decreases as temperature and pressure increase (Figure 7c-3). Interfacial tension also increases as a function of the density difference between the water and hydrocarbon phases (Figure 7c-4).
The contact angle is the angle between the solid and the fluid-fluid interface as measured through the denser fluid (Figure 7c-5). If the cohesive forces (fluid-fluid attraction) exceed the adhesive forces (fluid-solid attraction), the contact angle is large and the denser liquid “beads up”. This is called non-wetting behavior. If the adhesive forces exceed the cohesive forces, the contact angle is small and the liquid spreads out on the surface. This is called wetting behavior. It is more difficult to force a “bead” of oil through a pore than an oil droplet that spreads out along the pore walls. Therefore, a smaller contact angle (stronger wetting by the denser fluid) results in a higher capillary pressure.
Smaller pore throat radii also result in higher capillary pressures. Figure 7c-6 shows the rise of a wetting phase fluid (water) in three capillary tubes with different diameters and filled with a non-wetting phase fluid (oil). Due to the attraction of the water to the glass surface, there is a greater rise of water in the smaller diameter capillary tubes and a greater pressure difference across the meniscus. This pressure difference is equivalent to the capillary pressure. The greater rise of water in smaller capillary tubes means that at a given height above the free water level in a reservoir, pores connected by smaller pore throats will have a higher water saturation than pores connected by larger pore throats (Figure 7c-7).
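These three controls are commonly combined in the Young-Laplace form Pc = 2(sigma)(cos theta)/r for a cylindrical pore throat. A sketch, with assumed typical fluid properties rather than values from any figure:

```python
import math

def capillary_pressure_pa(ift_n_per_m, contact_angle_deg, throat_radius_m):
    """Capillary pressure across a cylindrical pore throat
    (Young-Laplace form): Pc = 2 * sigma * cos(theta) / r."""
    return (2.0 * ift_n_per_m
            * math.cos(math.radians(contact_angle_deg))
            / throat_radius_m)

# Example: oil-water IFT of 30 dyn/cm (0.030 N/m), strongly water-wet
# (theta = 0), and a pore throat radius of 1 micron.
pc = capillary_pressure_pa(0.030, 0.0, 1.0e-6)
print(f"{pc:.0f} Pa ~ {pc / 6894.76:.1f} psi")
```

Halving the throat radius doubles the capillary pressure, which is the quantitative form of the statement that pores connected by smaller throats hold their water to greater heights above the free water level.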
7d. Determining Capillary Pressure. Capillary pressure measurements are made in a laboratory with either (1) porous plate or centrifuge methods that use either air-water or hydrocarbon-brine fluid pairs, or (2) mercury injection. The first technique is preferable for simple pore systems where permeabilities are high (tens to hundreds of millidarcies). In more complex, low-permeability pore systems, mercury injection is preferred because its high injection pressures can overcome the capillary pressures of even the smallest pore throats, ensuring that essentially every pore is filled with the non-wetting phase fluid at the end of the test.
In a mercury injection capillary pressure measurement, the core is cleaned and placed into a core holder. Mercury (the non-wetting phase fluid) is then injected at increasing pressures that fill the largest pores first, and then the smaller pores at higher pressures (Figure 7d-1). The resulting curves show a step-wise increase in mercury saturation as this fluid enters large, medium and small pores (Figure 7d-2). By taking many small steps in the injection of mercury, a smooth curve can be generated showing the relationship between the injection pressure and the volume of mercury entering the sample as a percentage of the pore space.
Capillary pressure tests that inject non-wetting fluids are referred to as drainage capillary pressure tests because they result in a decrease in the wetting phase saturation. These tests are used to simulate the original reservoir filling conditions as oil moved in to replace water. Sometimes these tests are followed by tests that inject water to simulate the waterflood process. These are called imbibition capillary pressure tests and they result in an increase in the wetting phase saturation.
Once the laboratory measurements have been made, they must be converted to reservoir conditions using the equation shown in Figure 7d-3. This equation requires knowledge of the interfacial tension and contact angle of both the laboratory and reservoir fluids, as well as information regarding the temperature and pressure at reservoir conditions. Once the measurements have been converted to reservoir conditions, capillary pressure can be expressed as a function of height above the free water level knowing the brine density and hydrocarbon density of the reservoir and using these in the equation shown in Figure 7d-4.
Figure 7d-5 is an example showing the conversion of laboratory capillary pressure data to reservoir conditions, and the calculation of the corresponding height above the free water level. The calculated height of 96.1 meters is the result of a density contrast of 0.295 gm/cm3 between the oil and water. If this oil were much heavier and had a density of 0.98 gm/cm3, then the density contrast would only be 0.07 gm/cm3 and the corresponding height would be 405.2 meters. Because the density contrast controls the buoyancy pressure, and therefore the capillary pressure that can be overcome at any given height above the free water level, the distribution of water saturation is strongly controlled by the fluid density contrast. The author has observed, for example, a 50 meter long transition zone in a reservoir with a permeability of 3000 millidarcies and a difference of 0.02 gm/cm3 between the oil and water densities. This observation came as a surprise to geologists and engineers working the field who did not realize it was possible to have such a long transition zone in such a high permeability rock.
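The two conversions described above can be sketched together. The laboratory mercury-air properties (485 dyn/cm, 140 degrees) and reservoir oil-water properties (30 dyn/cm, 0 degrees) used below are assumed typical values, not taken from Figure 7d-5:

```python
import math

def pc_reservoir_psi(pc_lab_psi, ift_lab, theta_lab_deg, ift_res, theta_res_deg):
    """Convert a laboratory capillary pressure to reservoir conditions via
    the ratio of (interfacial tension x |cos(contact angle)|) terms."""
    lab = ift_lab * abs(math.cos(math.radians(theta_lab_deg)))
    res = ift_res * abs(math.cos(math.radians(theta_res_deg)))
    return pc_lab_psi * res / lab

def height_above_fwl_m(pc_res_psi, rho_water_gcc, rho_hc_gcc):
    """Height above the free water level at which this Pc is reached,
    using a gradient of 0.433 psi/ft per g/cm^3 of density contrast."""
    h_ft = pc_res_psi / ((rho_water_gcc - rho_hc_gcc) * 0.433)
    return h_ft * 0.3048  # feet to meters

# Example: 500 psi mercury-air Pc converted to oil-water conditions,
# with a 0.295 g/cm^3 density contrast (as in the discussion above).
pc_res = pc_reservoir_psi(500.0, 485.0, 140.0, 30.0, 0.0)
h = height_above_fwl_m(pc_res, rho_water_gcc=1.0, rho_hc_gcc=0.705)
print(f"Pc(reservoir) ~ {pc_res:.1f} psi, height above FWL ~ {h:.0f} m")
```

Rerunning the same calculation with a 0.07 g/cm3 contrast stretches the height by a factor of about four, which is the effect described in the heavy-oil case above.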
7e. Displacement Pressure and Saturation Distributions. Displacement pressure is the capillary pressure at which a continuous filament of non-wetting fluid (hydrocarbons) connects through the largest pores of a rock. The height at which this pressure is reached corresponds to the oil-water or gas-water contact, which is located above the free water level (the location where the capillary pressure = 0). Figure 7e-1 shows that for three different rock types, three contacts are located at three different locations above the free water level. This is why one limb of an anticline containing rocks with small pore throat sizes will have a higher oil-water contact than the other limb of the anticline containing rocks with large pore throat sizes. The oil-water contact is not “tilted”, it is simply at a different location on each limb of the anticline as a result of differences in capillary properties.
If the pore throat size distribution is homogeneous, Figure 7e-1 shows that the rock will quickly fill with hydrocarbons when the displacement pressure is exceeded, creating an “L” shaped capillary pressure curve. If the pore throat size distribution is heterogeneous, greater capillary pressure will exist in the smaller pore throats resulting in a capillary pressure curve with a more gradual slope. If a rock has homogeneous, small pore throat sizes then the displacement pressure will be high, but the rock will quickly fill with hydrocarbons once this displacement pressure is exceeded, again resulting in an “L-shaped” curve. In general, there is a good relationship between displacement pressure and values of porosity and permeability (Figure 7e-2). This is because rocks with bigger pores have bigger pore throats, and therefore higher porosity, higher permeability, and lower displacement pressures.
Most clastic reservoirs contain multiple rock types such as clean sandstone, cemented sandstone, and shaly sandstone. Each of these rock types can be thought of as a bundle of capillaries, with formation water being the wetting phase and hydrocarbons being the non-wetting phase. As hydrocarbons begin to migrate into a rock and displace the water, the hydrocarbons first enter the pores with the largest pore throats (capillaries), leaving the water in the pores that have smaller associated pore throats. As the hydrocarbon column increases in height, the buoyancy pressure increases, forcing hydrocarbons into pores with smaller and smaller pore throats. This process continues until either (1) the generation and migration of hydrocarbons ceases, (2) the trap containing the reservoir rock becomes filled to the spill point, or (3) the displacement pressure of the seal is exceeded in which case the reservoir thickness becomes fixed (for any additional hydrocarbons that migrate into the trap, there will be an equal amount of hydrocarbons that are “leaked” from the trap).
After a reservoir has been filled with hydrocarbons, the saturation profile at any location will be a function of the rock types encountered, their capillary properties, and the density contrast between water and hydrocarbons. If multiple rock types are encountered by the drill bit, the resulting saturation profile will be a composite of their different capillary pressure curves (Figure 7e-3). This can result in a saturation profile that varies between higher and lower water saturation values with increasing height above the free water level. In some cases a wet sand may be located between two oil productive sands.
Drainage capillary pressure curves can be related to changes in the types of fluids that are produced as shown in Figure 7e-4. In a typical oil reservoir, there will be an interval above the oil-water contact containing an insufficient concentration of oil for any oil to be produced. This is referred to as immobile or residual oil. Above the base of the transition zone, there is sufficient oil saturation for oil movement to take place, and both oil and water are produced. Above the top of the transition zone, there will be water-free oil production. The water saturation corresponding to this interval is often called the irreducible water saturation (Swirr). This term is misleading because this value is not actually irreducible. The true irreducible water saturation is when all pores are filled with 100% non-wetting phase fluid, which will result if a sufficiently high buoyancy pressure and density contrast between the wetting and
non-wetting phases is attained. Practically speaking, this pressure and density contrast are impossible to achieve under reservoir conditions, although some high-permeability gas reservoirs that are hundreds of meters thick may have irreducible water saturations of less than 5%.
The imbibition capillary pressure curve in Figure 7e-4 shows what happens if the wetting phase (water) is injected into the core sample at the conclusion of the drainage capillary pressure test. After water breakthrough, both oil and water will be produced until a water saturation value corresponding to a residual oil saturation (Swro) is obtained. Although capillary pressure curves can be used to provide the endpoint saturation values that control the movement of oil and water, relative permeability tests must be conducted in order to determine the actual fractions of oil and water produced at various saturations.
7f. The Leverett J-Function. The Leverett J-Function is a means to normalize capillary pressure values from samples with different porosities and permeabilities (Figure 7f-1). The equation assumes that there is a meaningful relationship between water saturation and the square root of permeability divided by porosity. A plot of these two parameters should be constructed to ensure this is the case in any reservoir to which the Leverett J-function is being applied.
Using the Leverett J-function, capillary pressure measurements for each core sample in a given reservoir can be converted into J values. The J values from multiple samples can then be plotted versus water saturation and curve-fitted (Figure 7f-2). This equation can then be set equal to the Leverett J-function equation and solved for water saturation as shown in the example in Figure 7f-3. In this example, the capillary pressure term in the Leverett J-function is first replaced with the equivalent expression in terms of fluid densities and height above the free water level. Then the two equations are set equal to each other and solved for water saturation in terms of height above the free water level, porosity, and permeability. After these three parameters have been assigned to each grid block in a geocellular model, this equation can be used to assign water saturation values to each grid block.
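A sketch of this workflow, assuming the common field-unit form of the J-function (constant 0.21645 for Pc in psi, k in millidarcies, and interfacial tension in dyn/cm) and a hypothetical power-law curve fit standing in for the regression in Figure 7f-2:

```python
import math

def leverett_j(pc_psi, ift_dyn_cm, theta_deg, k_md, phi_frac):
    """Leverett J-function in field units; 0.21645 is the
    unit-conversion constant for psi / dyn/cm / millidarcies."""
    return (0.21645 * pc_psi
            / (ift_dyn_cm * math.cos(math.radians(theta_deg)))
            * math.sqrt(k_md / phi_frac))

def sw_from_j(j, a=0.25, b=-1.5):
    """Invert a hypothetical power-law fit J = a * Sw**b for water
    saturation, clipped to a plausible [0.05, 1.0] range."""
    sw = (j / a) ** (1.0 / b)
    return min(max(sw, 0.05), 1.0)

# Example grid block: Pc = 10 psi at its height above the free water
# level, sigma = 30 dyn/cm, water-wet, k = 100 md, phi = 0.20.
j = leverett_j(10.0, 30.0, 0.0, 100.0, 0.20)
print(round(j, 2), round(sw_from_j(j), 3))
```

In a geocellular model, the same two steps are applied cell by cell: Pc is first expressed in terms of height above the free water level and the fluid densities, then the fitted J-Sw relationship is inverted to assign a water saturation to each grid block.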
7g. Reservoir Seals. Any type of rock can serve as a reservoir seal in the subsurface as long as its minimum displacement pressure is greater than the buoyancy pressure exerted by the underlying hydrocarbon column. The capacity of a seal to trap hydrocarbons is a function of the size of the largest interconnected pore throats and the relative densities of the hydrocarbons and formation water. In practice, for a seal to be effective at containing a large hydrocarbon accumulation, the seal needs to be thick, laterally continuous, homogeneous, and unfractured.
If the capillary properties of a seal are known, then the maximum thickness of the underlying hydrocarbon column can be estimated using the equation shown in Figure 7g-1. In order to use this equation, both the displacement pressure of the seal and the displacement pressure of the underlying reservoir rock must be known. The difference between these is a capillary pressure value which can then be converted to a hydrocarbon column height by knowing the density difference between the hydrocarbon and water phases.
Figure 7g-2 is an example showing how to calculate the maximum hydrocarbon column for a seal with an entry pressure of 20 psi which is underlain by a reservoir rock with an entry pressure of 1 psi. In this case, because the pressure is in psi and the height is in feet, a value of 0.443 psi/ft is used as the fresh water gradient instead of the 0.098 psi/meter value used in the previous figure. Figure 7g-2 shows that Well 1 penetrated the seal and the uppermost part of the reservoir hydrocarbon column. Assuming that the seal and reservoir in this well were cored, capillary pressure measurements could have been made and used to calculate a maximum hydrocarbon thickness of 100 feet. This, in turn, could have been used to predict the location of the free water level. Had this been done, the drilling of Well 2, which was too deep to penetrate the hydrocarbon column, could have been avoided.
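The arithmetic of this example can be sketched as follows. The oil gradient of 0.253 psi/ft is an assumed value chosen for illustration (it is the gradient that, together with the stated 0.443 psi/ft water gradient, yields a 100-foot column):

```python
def max_hc_column_ft(pd_seal_psi, pd_reservoir_psi,
                     grad_water_psi_ft, grad_hc_psi_ft):
    """Maximum hydrocarbon column a seal can hold: the difference in
    displacement (entry) pressures divided by the buoyancy gradient."""
    return ((pd_seal_psi - pd_reservoir_psi)
            / (grad_water_psi_ft - grad_hc_psi_ft))

# Seal entry pressure 20 psi, reservoir entry pressure 1 psi,
# water gradient 0.443 psi/ft, assumed oil gradient 0.253 psi/ft.
h_max = max_hc_column_ft(20.0, 1.0, 0.443, 0.253)
print(round(h_max, 1))  # 100.0 ft maximum hydrocarbon column
```

Adding this 100-foot column height to the depth of the top seal in Well 1 would have predicted the free water level, and the wasted Well 2 location could have been recognized in advance.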
7h. Hydrostatic versus Hydrodynamic Reservoirs. The vast majority of reservoirs are at hydrostatic conditions, which means that their free water levels are flat. There may be multiple free water levels if there are separate compartments due to faulting or shale barriers that are continuous through the reservoir and downdip aquifer. The number of hydrocarbon-water contacts and their position depends on the number of free water levels, the number of different rock types that are present, and the filling history of the structure. As shown in Figure 7e-3, it is possible for a well to penetrate several contacts if it is drilled near the free water level and encounters rock types of highly variable reservoir quality.
Along with deep resistivity logs, wireline formation pressure tests can be used to approximate the position of hydrocarbon-water contacts. These tests define both the hydrocarbon and water gradients, and the intersection of these gradients defines the approximate location of the oil-water contact (Figure 7h-1). Individual production or pressure tests in a given well cannot accurately determine the position of a fluid contact. In fact, these individual tests may cause confusion because one well at a given elevation may produce oil while another well at the same elevation a few well spacings away may produce water. This can happen if the wells encounter different rock types with different capillary properties, because each of them will have different relative permeability behavior and different contacts.
A reservoir with a true tilted hydrocarbon-water contact only forms when special conditions are present including (1) a regional structural dip gradient, (2) aquifer continuity over a long distance, and (3) higher water pressure in the updip portion of the reservoir due to a regional flow gradient and aquifer recharge (Figure 7h-2). This results in groundwater movement that raises the oil-water contact on the updip limb of the structure and lowers it on the downdip limb.
Figure 7h-3 is a diagram showing the effect of hydrodynamics on the buoyant force in a reservoir. The higher water pressure on the updip side creates a higher buoyant pressure relative to static reservoir conditions, resulting in an elevated oil-water contact. On the downdip side, the lower water pressure creates a lower buoyant pressure relative to static reservoir conditions. Figure 7h-4 shows the effect that a greater buoyant force in the updip part of the reservoir has on a capillary pressure curve. The higher buoyant force effectively raises the oil-water contact resulting in a shorter oil column. The opposite effect occurs on the downdip side of the structure, resulting in a longer oil column. Depending on the size of a given structure, a small tilt in the oil-water contact can make a big difference. For example, an oil-water contact that is tilted one degree will result in a 100-foot difference in the elevation of this contact over a distance of one mile.
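The tilt geometry can be checked directly: a one-degree tilt over a statute mile works out to just over 90 feet of elevation change, which is commonly rounded to the 100-foot figure quoted above:

```python
import math

def contact_drop_ft(tilt_deg, distance_miles):
    """Elevation change of a planar tilted contact over a
    horizontal distance: tan(tilt) x distance."""
    return math.tan(math.radians(tilt_deg)) * distance_miles * 5280.0

drop = contact_drop_ft(1.0, 1.0)
print(round(drop))  # ~92 ft per mile for a one-degree tilt
```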
Tilted oil-water contacts are common in reservoirs such as the Asmari and Bangestan units of Iran. These oil-productive units are underlain by regional aquifers that are recharged in the Zagros mountain range and flow southwestward toward the Persian Gulf. As a result, the reservoirs in this area contain tilted oil-water contacts that are high on the northeast side of each structure and low on the southwest side. In other places, such as Western Siberia, oil-water contacts are flat, but tilted oil-water contacts are interpreted in many reservoirs primarily as a result of a failure to understand capillary pressure concepts and of poor-quality reservoir data, including very few wireline pressure tests.
7i. Examples of How Capillary Pressure Controls the Location of Oil-Water Contacts. Three examples in this section are used to illustrate the principles that have been reviewed in this chapter. The first example (Figure 7i-1) shows how an oil accumulation forming on one side of a fault can leak across this fault to form a second accumulation. Because both accumulations contain rocks with the same capillary properties, once both accumulations have equilibrated, they share a common contact that moves vertically downward.
The second example (Figure 7i-2) again shows how an oil accumulation forming on one side of a fault can leak across this fault to form a second accumulation. However, unlike the first example, the fault is non-sealing. Instead, the difference in oil-water contacts across the fault is caused by differences in the capillary properties of the different rock types on each side of the fault. This results in a difference in oil-water contacts that is maintained as the accumulation moves vertically downward.
The third example (Figure 7i-3) is similar to the second example except this time, instead of a fault separating the two lithologies with different capillary properties, the lithologies are in contact with each other. The result, however, is the same as Example 2—the difference in oil-water contacts is maintained as the accumulation moves vertically downward.
8. Reservoir Heterogeneities and the Use of Geostatistics
8a. Introduction. Reservoir heterogeneities are those reservoir components that control fluid flow. In order to build a representative geocellular model and numerical simulation model, it is critical to identify and capture these heterogeneities in our modeling efforts. Geostatistics is one of the most popular techniques for quantifying uncertainties in fluid flow caused by the presence of these heterogeneities. Various geostatistical techniques are used to distribute facies and
petrophysical parameters, resulting in equi-probable realizations that span the range of possible geological interpretations.
This chapter begins with a discussion of the different types of heterogeneities, the appropriate steps to identify them, and how to determine flow units. It then focuses on geostatistics, beginning with a discussion of how to calculate a variogram and use it in variogram modeling. Variogram-based mapping techniques are then reviewed including kriging and conditional simulation, followed by a discussion of the variogram-based sequential indicator simulation for distributing facies in geocellular models. This is then compared to object modeling which is an alternative facies modeling technique. Finally, sequential Gaussian simulation and its variants, which are used to distribute various petrophysical properties, are explained.
8b. Types of Heterogeneities. There are five basic types of heterogeneities found in reservoirs. The first are structural heterogeneities that include both faults that compartmentalize the reservoir and fractures that usually increase reservoir permeability. The second are depositional heterogeneities that include various sandbody types (such as channels and splays) and their stacking patterns, shale types (such as floodplain shales and drapes), and internal sandbody features including cross-beds, laminations, and grain size variations.
The third type consists of stratigraphic heterogeneities, including both unconformities, such as sequence boundaries and transgressive surfaces, and various stratal geometries, including onlap and downlap. Figure 8b-1 shows stratigraphic heterogeneities associated with progradational clinoforms in the Western Siberian Basin of Russia. Submarine fan sandbodies at the toe of the continental slope downlap onto the uppermost Jurassic Bazhenov Shale. These sandbodies are correlative with shelf-edge delta and shallow marine shoreface sandstones deposited on the associated continental shelf. These are separated from the submarine sandbodies by thick, continuous shales of the continental slope.
The fourth type consists of diagenetic heterogeneities, including both cements and clays, which reduce permeability. Figure 8b-2 shows an example of an uncemented, fine- to medium-grained, moderately well-sorted sandstone with good porosity (as indicated by the blue color) and a permeability of 62 millidarcies. A rock with similar grain size and sorting that has been cemented by calcite has a permeability of only 0.05 millidarcies. Similarly, clays such as hairy illite, which grows into pore spaces, or smectite, which can swell and block pores, can be important reservoir heterogeneities that reduce permeability. Conversely, the dissolution of unstable grains such as feldspar can increase both reservoir porosity and permeability. This is especially true in carbonate rocks.
The fifth type consists of fluid heterogeneities, which are important because gas reservoirs are much less impacted by reductions in permeability than oil reservoirs. A good rule to remember is that the lower limit of permeability in a reservoir can be conservatively estimated as 0.5 millidarcies per centipoise of hydrocarbon viscosity. So for a 1 centipoise oil, the lower limit of permeability would be 0.5 millidarcies, but for a 0.01 centipoise gas, the lower limit would be 0.005 millidarcies. This is obviously critical in distinguishing between pay and non-pay in any given reservoir. Figure 8b-3 shows that it is also important for characterizing parameters such as the height of the transition zone. In this example, the transition zone height for an oil reservoir with a permeability of 0.1 millidarcies will be about 100 feet, whereas it will only be about 50 feet for a gas reservoir with the same permeability.
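This viscosity-scaled cutoff is simple enough to capture in a few lines. The sketch below is illustrative only; the function name is hypothetical, and the default of 0.5 md per cp simply encodes the rule of thumb quoted above:

```python
def perm_cutoff_md(viscosity_cp, md_per_cp=0.5):
    """Conservative lower limit of reservoir permeability (millidarcies)
    for a hydrocarbon of the given viscosity (centipoise), using the
    0.5 md-per-cp rule of thumb."""
    return md_per_cp * viscosity_cp

print(perm_cutoff_md(1.0))   # 1 cp oil -> 0.5 md
print(perm_cutoff_md(0.01))  # 0.01 cp gas -> 0.005 md
```

The two calls reproduce the worked values in the text.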
It is also important to keep in mind that these different types of reservoir heterogeneities appear at different scales (Figure 8b-4). For example, diagenetic heterogeneities occur at the pore-level and must be identified through the use of techniques such as petrography and X-ray diffraction. Depositional heterogeneities such as grain size and sorting are discernible from core plugs. Structural heterogeneities including faults and fractures are also detectable from cores, but these are best appraised at the interwell scale with logs and well tests, or at the field scale with log correlations and seismic data.
8c. Steps to Identify Heterogeneities. There are four critical steps that the production geologist needs to apply in identifying heterogeneities. These include (1) describing the core to identify lithofacies types and the depositional model, (2) analyzing the core to determine porosity, permeability, and saturations, (3) developing a petrophysical model, and (4) relating the static data to the dynamic data.
The best way to illustrate the process of identifying heterogeneities is with an example, in this case, using the Potter Sandstone in Midway-Sunset Field, California. The work on the Potter began with describing the core and capturing
the characteristics shown in Figure 8c-1 in order to characterize the lithofacies that were present (Figure 8c-2). These included four reservoir quality lithofacies and two non-pay lithofacies (muddy sandstone and muddy siltstone) that serve to impede the vertical movement of steam that is injected to lower the viscosity of the in-situ oil. Using this core description interpretation, a depositional model was proposed that consisted of a prograding fan-delta (Figure 8c-3). This model contains sandy debris flows and turbidites separated by muddy lithologies that are dissected by later flows.
With these lithofacies and the depositional model in mind, the next step was to relate the core descriptions to the core analysis data. As summarized in Figure 8c-4, the strategy for sampling the core consisted of first drilling a plug every one foot for routine core analysis measurements. Then, after the core was slabbed, the original core sampling was supplemented by additional core plug sampling in thin-bedded or rare lithofacies types that were not captured in the original sampling work. Once the core description work was completed, the end pieces from representative core plugs were selected from each lithofacies type for petrographic and X-ray diffraction measurements to quantify bulk rock and clay mineralogy. Companion samples for capillary pressure measurements were then taken next to each sample selected for petrography/X-ray diffraction.
An analysis of the core plug samples showed that the pay quality lithofacies consisted of poorly-sorted, fine to coarse grained sandstone with high porosity and permeability, whereas the non-pay lithofacies had high microporosities and low permeabilities (Figure 8c-5). Only a small amount of clay (10-20%) was needed to substantially reduce permeabilities in these non-pay rocks (Figure 8c-6). The resulting capillary pressure curves reflect the conclusions drawn from the core description and core analysis work (Figure 8c-7). The muddier facies have high capillary pressures and high irreducible water saturations of 55-80%.
Once the core had been properly described and analyzed, a petrophysical model was developed by integrating the log and core data. The results of this work in the Potter show that the log-derived permeability, porosity, and saturation curves approximate the values measured from cores (Figure 8c-8). As is the case with every reservoir, the permeability curve is the most difficult to construct because there are no wireline logging tools that can make continuous measurements of permeability. Instead, combinations of other logs (such as porosity and resistivity), or empirical techniques such as the Timur equation are needed to determine permeability (Figure 8c-9).
If the log-derived permeability curve is matched to the routine core permeability measurements, then the resulting permeability curve values will be absolute permeabilities. In order to use these in a geocellular model, they have to be calibrated to well test permeabilities. This is done by averaging the log-derived net sand permeability values over the tested interval. In this case, net sand refers to sand that is capable of producing reservoir fluid (i.e., that exceeds some permeability cutoff). The type of averaging that is done depends on how fluid flows through the sandstone (Figure 8c-10). In the Potter, as in most sandstones, fluid flow will be both lateral and vertical, and permeabilities should therefore be geometrically averaged.
The resulting average absolute permeability must then be adjusted downward to account for the fact that the well test is measuring an effective permeability (typically the permeability to oil at irreducible water saturation). Core tests can be run to measure this parameter which, based on the author’s experience, is about 75% of the absolute permeability value. As a last step, the resulting log-derived average Kh values for each well can be plotted against the corresponding well test Kh values to see if the comparison is reasonable (Figure 8c-10).
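The averaging and calibration steps described above can be sketched in a few lines of code. The permeability values below are hypothetical, and the 75% effective-to-absolute adjustment is the author's rule of thumb quoted above, not a universal constant:

```python
import math

def arithmetic_mean(perms):
    """Appropriate for flow parallel to continuous layers."""
    return sum(perms) / len(perms)

def harmonic_mean(perms):
    """Appropriate for flow in series across layers."""
    return len(perms) / sum(1.0 / k for k in perms)

def geometric_mean(perms):
    """Appropriate when flow is both lateral and vertical, as in the Potter."""
    return math.exp(sum(math.log(k) for k in perms) / len(perms))

# Hypothetical log-derived net sand permeabilities (md) over a tested interval
k_log = [120.0, 85.0, 10.0, 250.0, 40.0]
k_abs = geometric_mean(k_log)
k_eff = 0.75 * k_abs  # effective ~75% of absolute, per the author's experience
print(round(k_abs, 1), round(k_eff, 1))
```

Note that for any heterogeneous dataset the arithmetic mean exceeds the geometric mean, which in turn exceeds the harmonic mean, so the choice of average materially changes the calibrated value.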
Additional dynamic data can also be compared to the cores and logs to help confirm the location and continuity of reservoir heterogeneities. In the case of the Potter Sandstone, temperature curves show the location of muddy sandstones and shales, which do not accept injected steam and therefore show a much smaller increase in temperature with time than permeable, oil-productive sandstones (Figure 8c-11). A cross-section containing the results from these temperature profiles can be drawn and compared to the distribution of sands and shales to determine where barriers are actually present (Figure 8c-12).
As a final step, the core, log, and dynamic data can be used to determine flow units. A flow unit is a volume of rock with similar geological and petrophysical properties that control the flow of fluid through it. In effect, the flow unit concept is a means to define the permeability stratification of a reservoir by imposing engineering requirements on the identification of geological and petrophysical zones. Within a geocellular model, flow units can be (1) treated as distinct layers, (2) grouped with adjacent flow units that have similar properties into a single layer, or (3) modeled as
discontinuous lenses or layers embedded within another flow unit. Within the Potter Sandstone, three flow units were defined as shown in Figure 8c-13.
There are also more quantitative ways to define the permeability stratification of a reservoir. One approach is the Dykstra-Parsons technique, an empirical way to assess reservoir heterogeneity and waterflood performance. It is based on a correlation between waterflood recovery, mobility ratio, and permeability variation factor (V) derived from hundreds of coreflood experiments. Another approach is to make a Stratigraphic Modified Lorenz (SML) plot, which is a crossplot of cumulative flow capacity (Kh) versus cumulative storage capacity (Φh). Inflection points in the resulting plot correspond to changes in stratification, fracturing, etc.
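As a rough sketch of these two quantitative approaches, the functions below estimate the Dykstra-Parsons variation factor by fitting a log-normal distribution to the permeability data, and assemble the normalized points of an SML plot. Both are simplified illustrations (hypothetical function names and data), not full implementations:

```python
import math

def dykstra_parsons_v(perms):
    """Permeability variation factor V from a log-normal fit:
    V = (k50 - k84.1) / k50, where k84.1 lies one log standard deviation
    below the median, which reduces to V = 1 - exp(-sigma_ln)."""
    logs = [math.log(k) for k in perms]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))
    return 1.0 - math.exp(-sigma)

def sml_points(thickness, perm, phi):
    """Stratigraphic Modified Lorenz plot: cumulative flow capacity (k*h)
    versus cumulative storage capacity (phi*h), accumulated in
    stratigraphic order and normalized to end at (1, 1)."""
    kh = [k * h for k, h in zip(perm, thickness)]
    phih = [p * h for p, h in zip(phi, thickness)]
    total_kh, total_phih = sum(kh), sum(phih)
    points, ckh, cphih = [(0.0, 0.0)], 0.0, 0.0
    for f, s in zip(kh, phih):
        ckh += f / total_kh
        cphih += s / total_phih
        points.append((cphih, ckh))
    return points
```

A homogeneous reservoir gives V = 0, and a straight-line SML plot indicates uniform flow and storage capacity; departures from the 45-degree line flag high-flow-capacity (thief) intervals.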
8d. Importance of Capturing the Appropriate Heterogeneities. It is not enough to simply recognize heterogeneities in a given reservoir and make sure they are part of the geocellular model. Work must be done to identify the heterogeneities that will have the greatest impact on reservoir performance and ensure that they are appropriately captured. The following four figures present a real-life example of what can happen if the production geologist fails to do this.
A sandstone reservoir was discovered containing 3 billion barrels of oil-in-place (Figure 8d-1). Early cores indicated good matrix permeability enhanced by fracturing and no sealing faults in the reservoir. Water injection was rejected as a secondary recovery process primarily because the production of the very high-salinity formation water would cause the plugging of wells by salt precipitation. Instead, high pressure gas injection was proposed to increase the reservoir pressure until the oil and gas became miscible. This means they would combine to form a single-phase fluid with a low viscosity, resulting in a very high recovery factor, perhaps as high as 60% of the original oil-in-place. However, because of concerns that the reservoir may be too fractured to implement miscible gas injection, a decision was postponed.
More than 20 years later, a new interpretation of the reservoir was made using the existing data which included old cores and 2D seismic data. This work concluded that fractures were most intense near the faults, and that all of these trended in a northeast-southwest direction (Figure 8d-2). The fracture description was created by calculating a fracture index value for each reservoir layer in each well and mapping these using a geostatistical technique called kriging (Figure 8d-3). The resulting map was used to create a numerical model that showed miscible gas injection would work in the northern part of the field where fracture index values were low.
Over the next few years, a 3D seismic survey was conducted, wells were drilled, cores were cut, image logs were obtained, and wells were tested. This data was primarily used to refine the oil-in-place volume, interpret new faults, distribute shales, and more accurately determine the matrix porosity, permeability, and water saturation. Little data gathering and analyses were conducted in the northern part of the reservoir to confirm the connectivity of the fractures, which was the key heterogeneity governing the success or failure of the project.
After spending tens of millions of dollars to drill gas injection wells and build the needed facilities, gas injection commenced and injected gas broke through almost immediately to the producers. As a result, miscibility pressure could not be attained without shutting-in the entire field, which was not an economic option. The project was abandoned as a failure. Afterwards, an analysis of the data concluded that the reservoir was pervasively faulted and fractured (Figure 8d-4). It is now clear that if the development team had focused on work to appropriately quantify fracture connectivity and include this in the geocellular and numerical models, gas injection would not have been attempted and this expensive failure could have been avoided.
8e. Why geostatistics is needed. Geostatistics is a very useful set of techniques for capturing the heterogeneities that impact fluid flow and realistically distributing them in geocellular models. Some of the most important uses of geostatistics are to 1) quantify the uncertainty caused by having an incomplete set of data to build models, 2) provide the best estimate of properties in the interwell areas, and 3) quickly generate a variety of equally possible (equi-probable) models or realizations.
Geostatistics uses both deterministic and stochastic techniques. A deterministic technique results in a single outcome and is used when the best estimate of a property value at a particular location is desired. Deterministic techniques work well for smoothly-varying properties such as simple structure contouring or net sand mapping. Examples of deterministic techniques include linear interpolation and kriging. However, these techniques are not very useful for
mapping or distributing properties that are more spatially variable than the control data can capture. This means they do not work very well in heterogeneous reservoirs with little well control.
A stochastic technique is one which incorporates random components and produces equi-probable realizations. It is best applied when you cannot reliably estimate the value of a property at a particular location, but instead want to characterize it with a range of possible values. It is very useful for representing non-mappable heterogeneities such as shale baffles or thief zones.
One of the most important mathematical tools used in geostatistics is the variogram. A variogram expresses the average squared difference between pairs of data points as a function of the distance separating them. Variogram modeling uses variograms to quantify the spatial distribution of data and distribute them in two-dimensional or three-dimensional space. Techniques that use variograms include kriging, which results in a smooth map of average values, and conditional simulation, which results in a noisy map that includes extreme values. Conditional simulation forms the basis for sequential indicator simulation, which is used in the three-dimensional modeling of discrete variables such as facies, and sequential Gaussian simulation, which is used for the three-dimensional modeling of continuous variables such as porosity.
Not all geostatistical techniques are variogram-based. Good examples are object modeling techniques such as simulated annealing, which is commonly used to distribute facies in reservoirs where there is good information about the types of sandbodies that are present and their characteristics, including size, shape, orientation, and abundance.
8f. How to calculate a variogram. Figures 8f-1 to 8f-8 sequentially demonstrate the concept of spatial continuity and how the variogram is used to quantify this parameter. Figures 8f-1 and 8f-2 begin the sequence of figures by analyzing the statistics of a simple “value to the left” estimation technique for determining an unknown porosity value. This technique includes the calculation of three statistical parameters: the arithmetic mean which is a measure of the central value of the distribution, the variance which is a measure of how broad the distribution is, and the standard deviation which is the average amount by which values differ from the mean (Figure 8f-3).
The analysis then proceeds by generating a histogram of porosity values (Figure 8f-4) and determining that the porosity values corresponding to two standard deviations on either side of the mean are 1% and 11%. This indicates there is a 95% chance that the unknown porosity value in this exercise falls between these two values. Figure 8f-5 then changes the order of the data to show the points systematically increasing in value from 5 to 10% porosity. The resulting statistics from this dataset show that there is a 95% chance that the unknown porosity value is between 5% and 7%. This example shows that as the variability of a set of values decreases, the estimation error decreases. This means that the more ordered dataset has greater spatial continuity (Figure 8f-6) and therefore missing values can be estimated with greater confidence.
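The statistics used in this exercise can be reproduced with a short script. The porosity values below are hypothetical stand-ins for the figure data; the two-standard-deviation interval approximates the 95% range for a normal distribution:

```python
import math

def summary_stats(values):
    """Arithmetic mean, variance, and standard deviation of a sample."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    return mean, variance, math.sqrt(variance)

# Hypothetical porosity samples (%), standing in for the figure data
porosity = [5, 7, 3, 9, 6, 8, 4, 6]
mean, variance, std = summary_stats(porosity)
low, high = mean - 2 * std, mean + 2 * std  # ~95% interval if normally distributed
print(mean, variance, round(low, 2), round(high, 2))
```

Re-running the same script on a more ordered dataset would produce a smaller variance and therefore a narrower 95% interval, which is exactly the point of Figures 8f-5 and 8f-6.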
Figure 8f-7 shows how variograms are used to quantify spatial continuity by calculating the variance between data pairs at different distances. Figure 8f-8 shows that a variogram is simply a plot of these variances versus their respective distances (each distance increment is referred to as a lag). The plot contains a sill, which is the point where the variance becomes stable; a range, which is the upper limit of influence for samples in the dataset (there is no spatial correlation between samples separated by distances greater than the range); and a nugget, which, if greater than zero, indicates random variation occurring at a smaller scale than the sampling interval. All variogram plots have these characteristics, and it is necessary to be familiar with them to discuss the use and results of variogram modeling.
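For regularly spaced one-dimensional data such as the example in Figure 8f-7, an experimental variogram can be computed directly. The sketch below uses the conventional semivariance (half the average squared difference between pairs a given lag apart); the sample values are illustrative:

```python
def experimental_variogram(values, max_lag):
    """Semivariance gamma(h) for regularly spaced 1-D samples:
    gamma(h) = sum of (z_i - z_j)^2 over pairs h apart / (2 * number of pairs)."""
    gamma = {}
    for lag in range(1, max_lag + 1):
        pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
        gamma[lag] = sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))
    return gamma

# Systematically increasing porosity values, as in Figure 8f-5
ordered = [5, 6, 7, 8, 9, 10]
print(experimental_variogram(ordered, 3))  # semivariance grows steadily with lag
```

The steadily increasing semivariance with lag reflects the strong spatial continuity of the ordered dataset; a shuffled version of the same values would produce a flat, higher variogram.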
8g. Variogram modeling. Variograms are characterized by the distance and direction between two points in terms of a vector. In the case presented in the previous section (Figure 8f-7), these distances were regularly spaced, with each successive distance increasing by 100 meters. In practice, however, well data is irregularly spaced, which makes it difficult to find enough sample points that are separated by the same distance vector. To overcome this, vectors are partitioned into classes that are constrained by bandwidth, angle tolerance, and distance tolerance (Figure 8g-1).
Three directions are commonly fitted in variogram modeling: the vertical direction, the horizontal direction with the maximum range, and the horizontal direction with the minimum range (Figure 8g-2). This means that three variograms are constructed for each property in each reservoir layer of a geocellular model. This represents a
minimum number of variograms which may increase if additional facies, compartments or zones are needed to ensure stationarity. Stationarity assumes 1) there is no trend or drift in the variable of interest within the volume of interest, and 2) that all samples belong to the same distribution and have the same mean.
Figure 8g-3 shows what can happen if the concept of stationarity is violated. On the left side of the figure is the actual condition of the reservoir showing the presence of thief sands in Zone 2. These sands have very high permeabilities and are conduits for injected water to quickly move from the injector to the producer without sweeping any oil. On the right side is a stochastic 2D reservoir pattern model showing that Zones 1 and 2 have been modeled together, and that thief sands have been placed in both zones. This is incorrect and will result in the predicted oil sweep in Zone 1 being much less than is actually achieved in the reservoir. To correct the problem, Zones 1 and 2 need to be modeled separately to ensure stationarity.
Variogram models can be fit with spherical or exponential curves (Figure 8g-4). Either is acceptable and the choice of which to use is generally guided by the best fit of the data. The value chosen for the range in variogram modeling is extremely important. A short range creates a “salt and pepper” texture with little spatial continuity whereas a long range creates more sheet-like features (Figure 8g-5). If wells are closely-spaced and there is good confidence in the correlation framework and the log data, then the variograms should be relatively easy to fit. However, most of the time the data quality is poor and the data are widely-spaced, requiring good geological judgment in order to pick the range. This means that a working knowledge is needed of the depositional environment and the distribution of critical heterogeneities in order to choose reasonable range values.
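The spherical and exponential model curves can be written down explicitly. In the sketch below, the exponential model uses the common practical-range convention of reaching about 95% of the sill at the range; the parameter names are illustrative:

```python
import math

def spherical(h, sill, rng, nugget=0.0):
    """Spherical variogram model: rises to the sill exactly at the range."""
    if h >= rng:
        return nugget + sill
    r = h / rng
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

def exponential(h, sill, rng, nugget=0.0):
    """Exponential variogram model: approaches the sill asymptotically,
    reaching about 95% of it at the practical range."""
    return nugget + sill * (1.0 - math.exp(-3.0 * h / rng))
```

Either curve is fitted to the experimental variogram points; the spherical model flattens abruptly at the range, while the exponential model rises more gradually.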
Variograms work best when there is plenty of data, and they are popular because they do a good job of modeling complex heterogeneity patterns and reproducing geological trends. Variogram modeling is also very fast, even if there are hundreds of wells. The primary limitation with variograms is the difficulty in accurately modeling them when the data is widely-spaced or of poor quality. In these situations, other data (such as seismic) can be used to help constrain the variogram modeling, or alternative techniques such as object modeling can be used.
8h. Kriging versus Conditional Simulation. Kriging is a weighted-average mapping technique that uses a variogram as the weighting function. To estimate the kriged value for a point on a map, a search is made of the area around the point, and the samples found are assigned weights derived from the variogram model that reflect their spatial correlation with the point being estimated. A weighted average of these samples is then calculated to estimate the value of the point. The kriging technique minimizes the variance of the estimation error and therefore provides the best average map. Like all variogram-based techniques, kriging requires that an appropriate range be selected for the variogram model. Not only will this control whether there are “bulls-eyes” or smooth contours, but by choosing different values for the range in different directions, trends can be shortened or lengthened (Figure 8h-1).
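The weighted-average mechanics described above can be illustrated with a minimal one-dimensional ordinary kriging sketch. The covariance model, sample locations, and values are all hypothetical, and a real implementation would handle many points, anisotropy, and search neighborhoods:

```python
import math

def cov(h, sill=1.0, rng=500.0):
    """Covariance from an exponential variogram model: C(h) = sill - gamma(h)."""
    return sill * math.exp(-3.0 * h / rng)

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small kriging system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ordinary_krige(sample_x, sample_z, x0):
    """Ordinary kriging estimate at x0 from 1-D samples: weights come from
    the variogram-based covariances and are constrained to sum to one."""
    n = len(sample_x)
    A = [[cov(abs(sample_x[i] - sample_x[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(abs(xi - x0)) for xi in sample_x] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * zi for wi, zi in zip(w, sample_z))

# Hypothetical porosity samples at 0 m and 400 m; estimate at 100 m
print(ordinary_krige([0.0, 400.0], [8.0, 12.0], 100.0))
```

As expected of a weighted average, the estimate at 100 m lies between the two sample values but closer to the nearer sample, and the estimator honors the data exactly at the sample locations.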
There are many different types of kriging including simple kriging which assumes a common mean value, ordinary kriging which can have a locally variable mean value, and universal kriging which accounts for the drift in the mean value. There is also block kriging which is used to estimate parameters that can be averaged over a volume or area, and indicator kriging used for distributing discrete parameters such as facies. A commonly used variation of kriging called co-kriging uses a correlation between the wells and other data (such as seismic) to constrain mapping (Figure 8h-2).
Kriging assumes that the data varies smoothly between data points, and therefore it is not a good technique for mapping properties that change abruptly in interwell areas. For example, the variable distribution of cement may abruptly lower porosities or additional fracturing may abruptly increase permeabilities between existing well control points. It is often these extremes that have the greatest influence on fluid flow. To capture these, it is necessary to use a technique called conditional simulation.
In conditional simulation, a random location is chosen and an estimate is made of its value and uncertainty. A histogram is then generated with these parameters and the variogram model is used to help determine values within the variogram range of offset control points (Figure 8h-3). Conditional simulation results in much “noisier” realizations as shown by a comparison of the two conditional simulation results to the control points at the bottom of Figure 8h-4. If a large number of conditional simulations are generated and then averaged, a result that is equivalent to kriging the data will be produced, as shown in the upper right hand corner of Figure 8h-4.
In summary, kriging is a technique whose output is a single deterministic interpretation that honors the well data and the variogram, and minimizes the average error. It creates a smooth map, especially in areas away from well control, and is best used for mapping simple parameters (such as net sand or net-to-gross ratio) and volumetrics. Conditional simulation is a technique whose output is a series of equi-probable realizations that honor the well data, variogram, and histogram. The realizations include extreme values, and are therefore useful for quantifying the range of uncertainty in a reservoir description and the corresponding range of reservoir behavior in numerical simulation.
8i. Object modeling. Object modeling is a simulated annealing process in which multiple pixels in a geocellular model are represented by a single object. The object shape and placement are determined by control data (primarily well logs and seismic) combined with geometry statistics and interaction rules.
Object modeling is used to distribute facies types in geocellular models where the distribution of petrophysical properties is primarily controlled by depositional processes. A good example would be a fluvial system containing point bars with permeabilities of hundreds of millidarcies and crevasse splays with permeabilities of only a few millidarcies. If, however, the permeabilities of the point bars in this reservoir were reduced to a few millidarcies by diagenetic cement, then object modeling would not be very useful and a faster, easier conditional simulation technique (sequential indicator simulation) would be recommended. In sequential indicator simulation, the model is populated one pixel at a time using input statistics (mean and standard deviation) and variograms. Unlike object modeling, pixel-based techniques cannot produce explicit shapes such as channel bodies.
Figure 8i-1 lists some of the input parameters commonly used to constrain the distribution of facies types with object modeling. Reservoir-specific control data, such as logs and seismic, are the primary data needed to determine the different facies types that are present and their abundance. Secondary data include shapes, dimensions, and orientations of the various facies bodies. These data come from (1) an understanding of the regional geology, depositional history, and reservoir stratigraphy, and (2) outcrop analogs and modern analogs appropriate for the depositional environment of interest. It is also necessary to define the interaction of facies bodies prior to the start of object modeling, for example, allowing distributary channels to erode into distributary mouthbars and attaching crevasse splays to distributary channels.
The object modeling process begins by inserting facies objects into the wells, honoring their location based on facies logs (Figure 8i-2). Facies bodies are then added in the interwell areas until the appropriate fraction of each facies type has been distributed. This distribution can be controlled by facies trend maps or seismic attributes if they are available. Object modeling is typically a time-consuming, multi-step process. For example, Step 1 may be to distribute channel belts into a background facies of floodplain shale, followed by Steps 2 and 3 to attach crevasse splays and abandoned channel fill facies to the channel bodies (Figure 8i-3).
A key issue in object modeling is the ability to obtain convergence, which means that the geocellular model is able to honor all of the input data in its construction to generate a successful realization. This can be a problem, especially in reservoirs containing large facies bodies and a large number of closely-spaced wells (Figure 8i-4). Commercially available software has problems placing these large bodies in multiple wells because they conflict with the actual facies types that are present. As a result, smaller facies bodies are placed in and around closely-spaced wells, while larger bodies are placed in areas with little well control such as the aquifer.
Another key issue regards what type of sensitivities should be considered in the object modeling process. Figure 8i-5 shows just two (width-thickness ratio and sinuosity) of the different sensitivities that can be tested. It is important to think about which of these is most likely to have the greatest influence on reservoir performance (through discussions with the numerical simulation engineer) before running a large number of sensitivities. As a rule, those sensitivities that have the greatest impact on sandbody connectivity and quality (permeability) will be of greatest importance.
A third key issue regards what the geocellular model will be used for, and what it can be reasonably expected to tell us. It is important to remember that because object modeling is a stochastic process, a single realization is unlikely to be useful for determining the best well location to drill, unless the object model is very well constrained by seismic data. However, an average of many realizations may help determine which area of the reservoir is likely to be most attractive for future drilling (for example, the area of the reservoir containing the greatest sand thickness or greatest concentration of a high-permeability facies). Similarly, even though individual wells are unlikely to encounter the same sand thickness and petrophysical properties at a specific location as predicted by the geocellular model, the
average sand thickness and petrophysical properties from multiple wells should match the average parameters obtained from actual wells drilled into the reservoir.
While object modeling has historically been used only for facies modeling, it is possible to use it for other applications such as the distribution of sub-seismic faults. Figure 8i-6 shows an interpretation of faulting from seismic data that has been supplemented with stochastic faults. Such realizations require appropriate constraints including the relationship between fault length and displacement, the relationship between fault length and number of faults, and information regarding the spatial distribution of faults. In addition, as is the case with the distribution of any reservoir property, it must be appropriately tied to dynamic data (well tests, production logs).
8j. Sequential Gaussian Simulation. Sequential Gaussian simulation is a conditional simulation technique used to distribute continuous reservoir properties such as porosity and permeability. It is similar to sequential indicator simulation, which is used to distribute discrete reservoir properties such as facies, in that both techniques use variograms to control the spatial distribution of data. Sequential Gaussian simulation is the most common technique used in commercial software packages for distributing continuous variables.
Sequential Gaussian simulation assumes that a parameter is normally distributed, which means the distribution is symmetric with respect to the arithmetic mean. A good example of a normally distributed reservoir property is porosity. Permeability is log-normally distributed, which means the logarithm of permeability is normally distributed. Therefore, sequential Gaussian simulation is used to distribute the log of permeability, which is subsequently transformed back into permeability in the geocellular model.
Figure 8j-1 shows how Sequential Gaussian simulation (SGS) can be used to distribute a parameter such as permeability. The original data is first transformed into a normal score distribution by removing any trends or skewness in the data. The resulting data is then fitted with a variogram model and this is used along with the blocked values of permeability from the log data to distribute permeability. This distribution can then be checked, either visually or by comparing the statistics from the geocellular model realization to the actual data.
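The normal score transform at the heart of this workflow can be sketched as follows: each value is replaced by the standard normal quantile of its rank, after first taking logarithms because permeability is log-normally distributed. The permeability values below are hypothetical:

```python
import math
from statistics import NormalDist

def normal_scores(values):
    """Normal score transform: replace each value by the standard normal
    quantile of its rank so the transformed data follow a normal
    distribution, as required before sequential Gaussian simulation."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    scores = [0.0] * n
    for rank, i in enumerate(order):
        scores[i] = nd.inv_cdf((rank + 0.5) / n)  # plotting position in (0, 1)
    return scores

# Permeability is log-normal, so transform its logarithm (values hypothetical)
perm_md = [0.5, 3.0, 12.0, 45.0, 200.0]
scores = normal_scores([math.log10(k) for k in perm_md])
print([round(s, 2) for s in scores])
```

After simulation, the scores are back-transformed through the same rank mapping (and the antilog) to recover permeability values with the original distribution.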
An alternative technique is to use a secondary property, such as a seismic attribute trend map or volume, to constrain the property distribution in a process called sequential Gaussian co-simulation. This technique can also be used to transform one property into another. Figure 8j-2 shows an example using the equation from a semi-log plot of core permeability versus porosity. The relationship from the equation (shown in green) is used along with an assumed correlation coefficient of 0.9. This creates a permeability distribution in the model that is represented by the “cloud” of data (small red squares) distributed along the green line. By making several iterations, the distribution can be optimized such that the cloud of data from the model closely resembles the actual semi-log plot of permeability versus porosity from the core data.
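The effect of the assumed correlation coefficient on the permeability “cloud” can be illustrated with a short sketch. This mimics only the scatter-around-the-transform aspect of co-simulation, not the spatial, variogram-based part, and the transform coefficients below are hypothetical, not values taken from Figure 8j-2.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical semi-log transform from core data: log10(k) = 20*phi - 2
a, b = 20.0, -2.0
rho = 0.9                                   # assumed correlation coefficient

phi = rng.uniform(0.05, 0.30, 2000)         # modeled porosity values
trend = a * phi + b                         # the "green line" in log10(k)
# scale the noise so the correlation between log10(k) and the trend is ~rho
noise_sd = trend.std() * np.sqrt(1.0 / rho**2 - 1.0)
log10k = trend + rng.normal(0.0, noise_sd, phi.size)
perm_md = 10.0 ** log10k                    # the "cloud" around the line
```

Lowering rho widens the cloud of model permeabilities around the transform line; rho = 1 collapses the cloud onto the line, which is equivalent to applying the transform deterministically.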
9. Geocellular Modeling
9a. Introduction. Geocellular modeling is the process of using all of the components discussed thus far in this manual to build a three-dimensional representation of the reservoir for use in volumetric calculations, reservoir simulation, and reserves estimations. It is critical not only to understand the steps involved in this process, but also to be aware of the possible pitfalls and how to avoid them using the guidelines and recommendations presented. It is also important to realize that most of the time and effort spent in the geocellular modeling process involves collecting, analyzing, quality checking, and importing the data into the geocellular modeling software package. The actual geocellular modeling work is generally about one-quarter of the total effort.
This chapter discusses the geocellular modeling workflow (Figure 9a-1), beginning with project scoping, data import, and the quality checking of this data. The actual modeling process is then reviewed including the framework construction, three-dimensional gridding, and property modeling aspects. This is followed by a discussion of volumetric calculations and the determination of net pay. How to assess the realizations generated by the modeling process is then explained, followed by a discussion of model upscaling and export for numerical simulation and reserves estimation.
9b. Project scoping. The construction of any geocellular model is governed by the purpose of the model. If the model is to be used for numerical simulation, then the engineer must decide whether a reservoir screening, mechanistic, or full-field model is required.
A reservoir screening model provides basic information including hydrocarbons-in-place, and the likely recovery factors under a depletion drive or simple displacement process. The geocellular model for this type of simulation work needs to include the top and base of the reservoir, the net-to-gross ratio, single average values of petrophysical properties, and the hydrocarbon-water contacts.
A mechanistic model is built to understand various reservoir processes, and therefore requires more detail than a reservoir screening model. The corresponding geocellular model needs to include major shale horizons and faults, at least a binary facies distribution (sand and shale), and interpolated petrophysical properties by reservoir layer.
A full-field or reservoir sector model is built to investigate more detailed issues such as the feasibility of enhanced recovery processes or the optimization of an existing process. This type of model requires detailed history matching and forecasting, and therefore the geocellular model needs to be more complex. Components of this geocellular model commonly include all major faults, multiple facies types, petrophysical properties distributed by stochastic methods, and the use of J-functions or similar techniques to distribute water saturations.
9c. Data Import and Quality Checking. A great deal of the complexity associated with geocellular modeling is caused by the many different types of data that can be used for various model building processes (Figure 9c-1). As a result, most of the time spent conducting geocellular modeling work is focused on importing and quality checking the data that is used.
There are two primary aspects to the data checking process. The first is to make sure the data is spatially correct by checking the surface locations and directional surveys. Surface locations are commonly wrong on maps, but can be corrected by obtaining inexpensive satellite images of a given field. Sometimes the corrections simply require a translation, rotation, or scaling of the well location map, while other times it is necessary to add or remove wells from the map. Directional surveys are also commonly in error, and sometimes it is necessary to throw out the data from these wells and only use data from straight holes in the construction of the model. In some cases, wells are mis-located because the units are wrong. For example, if the deviation data are in feet but are misinterpreted as meters, then the well will be deviated 3.281 times as far as it should be.
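One quick sanity check for the feet/meters mix-up described above is to compare the total depth implied by the loaded survey with an independently reported total depth: a ratio near 3.28 (or 0.30) is a strong hint that units were swapped. The function name and the 5% tolerance below are illustrative choices, not an industry standard.

```python
FT_PER_M = 3.28084

def unit_mixup_flag(reported_td, survey_td, tol=0.05):
    """Compare a well's reported total depth with the depth implied by the
    directional survey; a ratio near 3.28 (or 0.30) suggests feet and
    meters were swapped somewhere in the survey loading."""
    ratio = survey_td / reported_td
    if abs(ratio - FT_PER_M) < tol * FT_PER_M:
        return "survey depths look like feet loaded as meters"
    if abs(ratio - 1.0 / FT_PER_M) < tol / FT_PER_M:
        return "survey depths look like meters loaded as feet"
    return "units consistent"

# a survey whose TD comes out ~3.28x too deep trips the first flag
flag = unit_mixup_flag(reported_td=2500.0, survey_td=8202.0)
```

The same ratio test can be applied per survey station to catch files where only part of the survey was converted.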
The second aspect of data checking is to make sure that the input data from the logs and seismic are acceptable. Although the project petrophysicist and geophysicist are primarily responsible for the quality of this information, it is always necessary for the production geologist to check this information and completely understand how it was derived before conducting the geocellular modeling work. One way to check it is to make basic maps of the data and look for anomalies (Figure 9c-2). If “bulls-eyes” are found, they need to be investigated to determine whether they are real, or are caused by data problems in a given well (Figure 9c-3).
Figures 9c-4 through 9c-10 illustrate a few of the problems that the author has seen in his experience reviewing geocellular modeling input data. Figure 9c-4 is an example of how the logs can overpredict the amount of sand due to artifacts of the log response associated with non-sand lithologies such as coal. Figure 9c-5 shows how trends in core porosity and permeability values can be very different in adjacent layers. In this case, the differences in the trends are the result of differences in core handling, measurement inconsistencies, and the application of a different compaction correction to each zone.
Figure 9c-6 shows a very strange relationship between core porosity and water saturation indicating that irreducible water saturations are above 90% for all core plugs with less than about 14% porosity. These water saturations were determined by cleaning core plugs, saturating them with water, and spinning them in a centrifuge at a single speed. Figure 9c-7 shows that because this speed was not very high, it results in a low buoyancy pressure (red line labeled “Sw at PC measurement”). This pressure is sufficient to drive the water saturations in high permeability samples to low values, but the low permeability samples retain their high water saturations (>90%) instead of being driven downward to appropriate irreducible water saturation values of 60-80%.
Figure 9c-8 is a graph of effective porosity versus acoustic impedance (AI) from a Western Siberian reservoir showing that AI is a poor predictor of clay volume. Rocks with different clay volumes and associated effective porosities have the same AI. However, this plot also shows that AI is useful for identifying tight, clean sandstones which have high AI values. This example is typical of the work needed to find relationships between seismic and log data. The production geologist needs to understand how good the correlations are between these data types (are the correlation coefficients high or low?) to determine if and how they should be used to condition the distribution of data in the model.
Figure 9c-9 shows how core data can be used to recognize an additional facies type and Figure 9c-10 shows the result of petrophysical work used to understand the origin of this facies. As a result, the log model was calibrated to recognize this facies which was distributed separately in the geocellular model. If this work had not been done, then the porosity-permeability relationship indicated by the blue line in Figure 9c-9 would have been used to characterize this facies in the model, resulting in permeabilities that would be too high.
9d. Framework Construction. The two elements that comprise the geocellular modeling framework are the reservoir zonation and the fault model. The reservoir zonation consists of a series of surfaces that bound flow barriers and that can be correlated using logs and seismic data. The barriers themselves may divide hydrocarbon bearing intervals from wet intervals, and it is therefore very useful to annotate correlation sections with the fluid types produced from well tests when conducting the correlation work (Figure 9d-1).
Seismic data is now commonly used to guide the correlation of surfaces through interwell areas. In using seismic data for this purpose, it is important to make sure that 1) the seismic event that is correlated corresponds to an appropriate log-based correlation horizon and that the two have been tied with synthetic seismograms, 2) a reasonable velocity model has been used to convert time to depth, and 3) the resulting interpretation has been appropriately related to sequence stratigraphic and depositional models.
Figures 9d-2 through 9d-4 show the value of using seismic data for establishing a geocellular modeling framework in the Western Siberian Basin. Figure 9d-2 shows an interpretation from 20 years ago using only log data. The Achimov interval is shown as an areally extensive submarine sheet sand beneath a progradational shoreface. Subsequently acquired seismic data (Figure 9d-3) showed that the Achimov Sands are located in the distal portion of prograding clinoforms, and are therefore separated by continental slope shales. The revised interpretation (Figure 9d-4) includes the clinoforms and shows that the Achimov consists of isolated toe-of-slope sandbodies correlative with shelf edge deltas and shoreface sands deposited on the corresponding continental shelf. It is also important to note the role of sequence stratigraphy in this interpretation which helps us understand how these clinoforms build into the basin (sediment supply exceeds accommodation space) and how the different depositional environments are inter-related.
Sequence stratigraphy often plays a minor role in the correlation of surfaces in a given reservoir. This is because most oil and gas fields are relatively small and their productive sands are found in a single systems tract between sequence boundaries (Figure 9d-5). Marine flooding surfaces underlying condensed sections (shales) are the most important surfaces to correlate at this scale. However, it is important to remember that the zonation needs to be based on correlating those horizons that are most likely to be continuous barriers to fluid flow whether they are sequence stratigraphic surfaces or not. This means, for example, the correlation of shales that may encase deltaic lobes that have been temporarily abandoned by the lateral migration of distributary channel complexes.
Figure 9d-6 is a good example of how NOT to correlate reservoir horizons. The top and base of each sandbody has been correlated, no matter how thick or thin the sand, and no matter whether the associated shales are likely to be laterally continuous or not. This leads to “forcing” picks through intervals where there is not a clear shale break and results in an unnecessarily complex zonation with too much vertical compartmentalization. The best approach is to only correlate those shales that are present over the entire reservoir and to distribute the intervening sands and shales using stochastic techniques. This method will connect some of the shales and some of the sands, which is what is happening in the reservoir. We cannot precisely know which ones are connected because we don’t have enough data at the appropriate scale to determine this.
As part of correlating the key horizons through the reservoir, structure contour maps must be built to check the correlations in map view. Accurate structure mapping requires the use of all the available data, the application of
industry-accepted mapping techniques, and a good working knowledge of how to build these surfaces using contouring software. Figure 9d-7 shows an acceptable hand-contoured structure map that honors the appropriate data and is geologically feasible. Figure 9d-8 shows an unacceptable hand-contoured structure map with “bubble highs” and mapped elevations that are too optimistic. It is important to be very familiar with these acceptable and unacceptable contouring techniques in order to quality check and control the computer contouring used by geocellular modeling software packages. Based on the abundance of data, quality of data, and the mapping algorithms chosen, computer-generated contours can range from the reasonable (Figure 9d-9) to the ridiculous (Figure 9d-10). It is equally important to understand the principles of isochore mapping and to use these along with the structure mapping to detect incorrect marker picks, wells that are in the wrong location, crossing horizons and other problems (Figure 9d-11).
The other key element of framework construction is fault modeling. In building the geocellular model, faults with significant offset, that form compartments, and/or separate areas with different petrophysical properties must be included. The fault modeling functions vary substantially among the commercially available software packages. All are able to handle vertical faults, most handle normal faults reasonably well, and a few can deal with reverse faults. None of them, however, can easily handle complex fault geometries, including intersecting and truncating faults. The fault model can be constructed given these complexities, but the associated grids commonly contain misshapen cells, twisted cells, or zero-pore-volume cells that are unacceptable to the simulation engineer.
It is possible to spend weeks or months generating a fault model that honors all the data and produces reasonable structure grids. If the main purpose of the geocellular model is to serve as an input to numerical simulation, then a good alternative is to build the model without actually connecting the faults (Figure 9d-12). Transmissibility barriers can be used to join the fault segments later in the numerical simulation model. The alternative to this strategy can be weeks of frustration, or worse, spending a lot of money using specialists from the software companies to build the model for you.
9e. Three-dimensional Gridding. Assuming that the geocellular model that is being constructed will be used for numerical simulation, the simulation engineer should be consulted before decisions are made regarding how to build the three-dimensional grid. The first decision regards the total number of grid cells that should be used. For most processes that are simulated, a reasonable upper limit is a few hundred thousand cells, which is far fewer than the millions of cells that comprise most geocellular models. As a result, the fine-scale geocellular model will either be 1) upscaled in the x, y, and z directions, 2) upscaled in only the z direction, or 3) used “as is” with only a portion of the model extracted for simulation. In some cases, if the well spacing is large and the facies are thick and continuous, the geocellular model can be built at the same scale as the simulation model.
In constructing the 3-D grid, it is preferable to use orthogonal cells and proportional gridding if possible. Proportional gridding keeps the vertical number of cells constant and varies the thickness of the cells as the layer thickness changes (Figure 9e-1). The alternative to proportional gridding is constant thickness gridding which causes cells to truncate against the top and/or bottom of the reservoir as the thickness decreases. This truncation can result in unforeseen and undesirable effects as shown in Figure 9e-2.
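Proportional gridding is easy to express numerically: the layer boundaries sit at fixed fractions of the local top-to-base thickness, so the layer count stays constant while cell thickness follows the reservoir. A sketch with invented depths:

```python
import numpy as np

def proportional_layers(top, base, n_layers):
    """Layer-boundary depths for proportional gridding: every column gets
    the same number of layers, and each cell's thickness scales with the
    local top-to-base thickness."""
    fractions = np.linspace(0.0, 1.0, n_layers + 1)   # 0, 1/n, ..., 1
    return top + fractions[:, None] * (base - top)    # broadcast over columns

# two map locations: full thickness (100 m) and a thinned flank (40 m)
top = np.array([1000.0, 1200.0])
base = np.array([1100.0, 1240.0])
z = proportional_layers(top, base, 4)
# both columns have 4 cells; thickness per cell is 25 m and 10 m respectively
```

Constant-thickness gridding would instead fix the cell thickness and let the layer count vary, which is what produces the truncations against the top and base described above.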
Faults can cause significant problems for gridding algorithms. Most production geologists think that if the modeling software can build a fault model in which the fault geometries are acceptable, then the associated grids will be acceptable. Figure 9e-3, which contains odd-shaped and small pore volume cells along a fault, shows that this is not the case. These cells are troublesome for simulators and significant editing of the grid may be required by the simulation engineer. One alternative to this unacceptable gridding is to use “zig-zag” faults which follow the orthogonal cell boundaries as shown in the lower left of Figure 9e-3. However, if there are many wells in the model, some of these may end up on the wrong side of the fault and will have to be manually moved to the other side. A better alternative may be to use conformable gridding which will rotate the cells and vary their areal dimensions to make them conformable with the faults (Figure 9e-4).
Gridding algorithms can honor sequence stratigraphic surfaces if they are present in the reservoir and are needed to construct an effective model (Figure 9e-5). As stated previously, it is important to include these surfaces if they coincide with baffles or barriers to fluid flow. If they do not, then it is important to think about how these surfaces affect the geometry of the grid (especially in terms of juxtaposing different dip magnitudes and directions) and whether this justifies the additional work to include them. For example, the onlap of high net-to-gross sands onto
underlying impermeable rock may not justify including this relationship if fluid can easily move updip through the sands. However, if the sands contain interbedded impermeable shales, then it may be necessary to honor the onlap geometries so that these flow barriers end up in the right location and properly affect fluid flow.
Another issue to consider in choosing areal grid dimensions is the need to have four or five cells between each well in order to minimize the effects of numerical dispersion. This phenomenon occurs because instead of fluid moving through the reservoir in a consistent fashion, it moves step-wise through a series of cells. The larger these cells are, the greater the chance of premature water breakthrough and the smearing of flood fronts. Figure 9e-6 shows that the “piston-like” displacement of oil by water that occurs in the “actual” reservoir is fairly well duplicated by a simulation model with small and medium grid blocks, but is not properly duplicated in the model with large grid blocks in which the water breakthrough occurs gradually.
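Numerical dispersion can be demonstrated with a toy 1-D model: a sharp water front transported with a first-order upwind scheme smears over a physical distance that grows with cell size. The grid counts and CFL number below are arbitrary illustration choices, not simulation recommendations.

```python
import numpy as np

def waterfront(n_cells, t_end=0.5, cfl=0.25):
    """First-order upwind transport of a water front across a unit-length
    1-D model at unit velocity; the grid-size-dependent smearing of the
    front mimics numerical dispersion in a simulator."""
    dx = 1.0 / n_cells
    dt = cfl * dx                                # unit velocity
    sw = np.zeros(n_cells)
    for _ in range(int(round(t_end / dt))):
        sw[1:] += cfl * (sw[:-1] - sw[1:])       # upwind update
        sw[0] = 1.0                              # injected water at the inlet
    return sw

def smear_width(sw, n_cells):
    """Physical width of the smeared front (cells with 0.05 < Sw < 0.95)."""
    return np.count_nonzero((sw > 0.05) & (sw < 0.95)) / n_cells

coarse = smear_width(waterfront(10), 10)         # large grid blocks
fine = smear_width(waterfront(200), 200)         # small grid blocks
# the coarse grid smears the front over a much wider physical distance,
# which is what causes premature, gradual water breakthrough
```

Refining the grid between wells narrows the smeared zone, which is the rationale for the four-or-five-cells-between-wells guideline.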
9f. Property Modeling. Once the reservoir framework and grids have been constructed, facies and petrophysical properties (porosity, permeability, and water saturation) can be distributed within the model. Figure 9f-1 summarizes the modeling techniques used for facies distributions that were discussed in the previous chapter of this manual. These techniques include interpolation, object modeling, and pixel-based conditional simulation and kriging. Each technique is only as good as the conditioning data used to control the distribution of different facies types, and care must be taken, especially in the use of reservoir analogs, to ensure that the conditioning data is representative (Figure 9f-2).
There are a number of important guidelines to follow in facies modeling work. The first is to ensure that the core descriptions and analyses have been used to properly calibrate the wireline facies log interpretations. This is critical because facies logs are the primary tool used to condition the distribution of facies in the model. The second guideline is that shales or other types of barriers should only be deterministically correlated if they can be reliably traced over multiple well spacings. If they cannot be traced, then they should be distributed stochastically using a conditional simulation or object modeling technique. A third guideline is that object modeling should be used if facies body shape is important and the depositional system exerts the primary control on petrophysical properties (Figure 9f-3). If the facies body shape is unknown or not very important (for example, if the reservoir has been diagenetically overprinted) then sequential indicator simulation is recommended (Figures 9f-4 and 9f-5).
A fourth guideline is to use trend data to help control the distribution of facies. These can include seismically-derived maps, coherency cubes, or acoustic impedance volumes (Figures 9f-6 and 9f-7). It is particularly important before using any of these that the production geologist understands what these data show. For example, if the seismic data indicates channels, are these mud or sand-filled? Are the reservoir sands located in the channels or are they located adjacent to the channels in point bars, mouth bars, or submarine levees? It is also critical to remember that the seismic response is governed by both facies types and fluid types, and that variations in a seismic parameter may be more fluid-based than facies-based.
A fifth guideline is to make sure the relationship between facies and petrophysical properties are well understood. A key reason for conducting facies modeling is to create a template in which to distribute petrophysical properties. If each facies does not have distinct petrophysical properties that make its fluid flow response unique, then it should be combined with similar facies to reduce the model complexity and time needed to complete the work. The number of facies types in a given stratigraphic interval should never be more than five or six, and can often be as little as two (sand and shale).
A sixth guideline is to make sure the resulting facies model looks geologically reasonable. The reason geologists build these models instead of engineers is that geologists can apply their intuition, expertise, and experience to the problem. For example, the realization (using a range of 2,500 meters) in the bottom left hand corner of Figure 9f-5 includes a shale body within a thick sand in an interwell area. The location of this shale is not supported by adjacent well data, indicating that a longer variogram range (3,500 meters) would be preferable as shown in the bottom right hand corner of this figure.
After facies types have been distributed, petrophysical properties can be assigned to each facies type using the techniques summarized in Figure 9f-8. It should be noted that the assignment of facies prior to petrophysical modeling is recommended, but is not required. For example, if the acoustic impedance inversion of a 3D seismic
volume results in a good correlation to porosity irrespective of facies type, then the seismic data can be used to control the distribution of porosity directly without conducting an intermediate facies modeling step.
The most important objectives of the petrophysical modeling work are to calculate a hydrocarbon pore volume and generate a permeability distribution. Accomplishing these accurately and efficiently begins by making sure the porosity and permeability logs in the model have been properly tied to core and well test data, and continues with the generation of porosity and permeability distributions following some important guidelines.
The first guideline is that in a screening level or mechanistic model, or in a reservoir where all the sands have similar properties, it may be sufficient to use an average value of porosity, a simple porosity interpolation, or a kriged porosity map (Figure 9f-9). For more complex reservoirs and simulations, sequential Gaussian simulation should be used to distribute porosity, preferably conditioned to an acoustic impedance inversion or appropriate spectral attribute from seismic (Figure 9f-10). Without this conditioning data, it will be necessary to generate additional realizations to assess the uncertainty associated with the porosity distribution. It is also critical to compare the output from these porosity distributions to the input data to make sure the statistical measures are similar (Figure 9f-11).
With respect to the distribution of permeability, multiple techniques can also be used including the direct interpolation or kriging of wireline log-derived permeability data. More commonly however, permeability distributions are related to porosity using either a porosity-permeability transform or sequential Gaussian co-simulation, as was explained in the previous chapter. The choice of which technique to use in distributing permeability is based on how much confidence can be placed in the log-derived permeability curve. If there is high confidence, then a co-simulation with distributed porosity values is recommended. If, however, there is low confidence in the log-derived permeability curve, then the permeability-porosity transform from the core data should be used to distribute permeability (Figure 8j-2).
For water saturations, there are multiple techniques that can be used depending on the accuracy and time required for the work. The simplest ways to distribute water saturation are to 1) choose a single value, 2) interpolate the log-derived water saturation values, or 3) distribute the log-derived water saturation values using sequential Gaussian simulation or co-simulation (using porosity or permeability). The biggest problem with these techniques is that they all use log-derived water saturation values which 1) do not explicitly consider the existence of a transition zone, 2) are subject to variations in log quality and calibration, and 3) will probably include wells drilled after production began which means they are not representative of initial conditions. As a result of these factors, the distributed water saturations will redistribute (re-equilibrate) when the simulation model is initialized, resulting in a different water saturation distribution and a different oil-in-place volume than in the geocellular model.
To overcome the problems associated with log-derived saturations, it is recommended that J-functions (discussed in the previous chapter and summarized in Figure 9f-12) or similar techniques be used to distribute water saturations. These distributions reflect the presence of a transition zone as a function of the porosity, permeability, and height above the free water level for a given cell. The J-function water saturations can either be distributed in the fine-scale geocellular model and then upscaled for simulation, or the J-functions can be applied directly to the upscaled cells. In any case, whatever technique is chosen to distribute water saturation needs to be agreed upon by the petrophysicist, geologist, and simulation engineer.
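As a sketch of how a J-function assigns saturation cell by cell, the snippet below uses a commonly fitted form, J(Sw) = a * Sw**(-b), and converts height above the free water level to capillary pressure via buoyancy. Every coefficient here (a, b, the irreducible saturation, the density contrast, and the interfacial-tension term) is an invented placeholder for illustration, not a value from this manual.

```python
import numpy as np

PSI_PER_PA = 1.450377e-4

def sw_from_j(height_above_fwl_m, perm_md, phi,
              a=0.20, b=1.8, swirr=0.15,
              delta_rho=300.0, sigma_cos_theta=26.0):
    """Water saturation from a Leverett J-function.
    Assumed fitted form: J(Sw) = a * Sw**(-b), inverted for Sw.
    delta_rho is the water-oil density contrast (kg/m3) and
    sigma_cos_theta is the interfacial tension term (dyn/cm)."""
    h = np.maximum(np.asarray(height_above_fwl_m, dtype=float), 0.0)
    pc_psi = delta_rho * 9.81 * h * PSI_PER_PA          # buoyancy Pc
    j = 0.21645 * pc_psi * np.sqrt(perm_md / phi) / sigma_cos_theta
    with np.errstate(divide="ignore"):
        sw = np.where(j > 0.0, (j / a) ** (-1.0 / b), 1.0)
    return np.clip(sw, swirr, 1.0)                      # bound to [Swirr, 1]

# a cell 30 m above the FWL (100 mD, 20% porosity) is well up the
# transition zone, while a cell 1 m above the FWL is still fully wet
high = float(sw_from_j(30.0, 100.0, 0.20))
low = float(sw_from_j(1.0, 100.0, 0.20))
```

Because the same function is evaluated from each cell's porosity, permeability, and height above the free water level, the transition zone emerges naturally in the model rather than being inherited from log-derived saturations.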
Application of a J-function depends on properly locating the free water level in each reservoir. There are several ways to determine this based on the data that is available. The first technique is to locate an oil-water contact based on decreasing resistivity values in a thick, relatively homogeneous sand. In this case, the oil-water contact is approximately coincident with the free water level. A second technique is to project downward from a well in the transition zone to the free water level using the appropriate capillary pressure curve for the facies type in the transition zone. A third technique, assuming that the reservoir has no wells in the transition zone, is to project downward from the lowest known oil using a representative capillary pressure curve. This will provide a minimum hydrocarbon column thickness.
An additional complexity that may affect the distribution of petrophysical properties is the existence of fractures. As discussed in Chapter 6, fractures almost always increase permeability values and their effects need to be captured, especially if new displacement processes are being considered. The two techniques most commonly used to achieve this are discrete fracture network models and continuous fracture network models.
In a discrete fracture network model, fractures are inserted into the reservoir model to gauge the effect of these on a given recovery process. Figure 9f-13 shows how the number and orientation of individual fractures affects gas saturation and breakthrough for a process in which gas is injected to displace oil. Given a single, continuous fracture, gas breaks through very quickly (15 days). In contrast, more complex orthogonal fractures or directional fractures that rely on gas to move through both the fractures and matrix will require more time for breakthrough. Note that all three of the gas breakthrough times for the fracture models are much shorter than the breakthrough time if there are no fractures (425 days). The utility of these models is that they help us understand how much additional permeability is added by fracturing and how much sweep efficiency should be decreased as a result of the increased fracture permeability.
In a continuous fracture network model (Figure 9f-14), the fracture distribution and intensity depends on fracture drivers such as lithology, the rate of change in dip, and mechanical properties of the rocks. These fracture drivers are calibrated to the actual location of fractures (from cores, image logs, and seismic data) using a neural network. The neural network then distributes fractures throughout the model using a combination of the fracture drivers. Like the discrete fracture network model, the primary function of the continuous fracture network model is to help us understand how much additional permeability has been added by fracturing and how much sweep efficiency should be reduced.
9g. Volumetrics and Net Pay. Using the standard equations to calculate oil and gas volumes (Figure 9g-1) is a relatively easy process in any geocellular modeling software package. Pull-down menus allow the user to select the porosity and water saturation realizations to use in the calculation, and to input formation volume factors for oil or gas. The more challenging work is reviewing the volumetric results to confirm they are reasonable, and checking the results against previous volumetric estimates by fault block and zone to understand the differences and reconcile them.
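The cell-by-cell form of the standard volumetric equation is simple to sketch. The two-cell model and all property values below are invented for illustration:

```python
import numpy as np

def ooip_stm3(bulk_m3, ntg, phi, sw, bo):
    """Stock-tank oil-in-place from the standard volumetric equation,
    summed cell by cell over the grid property arrays."""
    hcpv = bulk_m3 * ntg * phi * (1.0 - sw)   # hydrocarbon pore volume, rm3
    return hcpv.sum() / bo                     # shrink to stock-tank m3

# toy two-cell model
bulk = np.array([1.0e6, 2.0e6])   # bulk rock volume, m3
ntg = np.array([0.80, 0.60])      # net-to-gross
phi = np.array([0.22, 0.18])      # porosity
sw = np.array([0.30, 0.45])       # water saturation
ooip = ooip_stm3(bulk, ntg, phi, sw, bo=1.25)   # about 193,600 stm3
```

Running the same sum over the intermediate arrays (bulk volume, net volume, pore volume, hydrocarbon pore volume) provides exactly the component checks recommended in the next paragraph.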
Checking results means assessing not only the hydrocarbon-in-place numbers, but also their components including bulk volume, net volume, pore volume, and hydrocarbon pore volume. The biggest difference is often found in the hydrocarbon pore volume because it includes the water saturation component. Volumetric estimates done without a geocellular model commonly underestimate water saturations because the transition zone saturations are not properly considered. Conversely, in places like Russia, the oil-in-place only includes oil above the transition zone, which means the geocellular model hydrocarbon pore volumes are always higher. If the geocellular model is being upscaled for simulation, it is also important to check the upscaled in-place volumes against the fine-scale geocellular model volumes.
In-place hydrocarbon values may span a large range due to uncertainties in model input parameters. These include the use of core instead of log porosities, the use of log-derived water saturations instead of J-function saturations, and the use of lowest known oil values instead of highest known water values for estimating free water levels. As an example, Figure 9g-2 lists the range of oil-in-place numbers for a California oil field containing over 500 wells and 40 years of production. Despite all this data, the most recent original-oil-in-place value from geocellular modeling ranges from 89% to 120% of the previously calculated original-oil-in-place value.
In addition to changes in input parameters, hydrocarbon-in-place volumes can also vary by realization with no change in input parameters. Several conditions contribute to these variations including the use of more or less conditioning data, greater variability in the conditioning data, the use of more conditionally-simulated variables, and various mathematical operations on multiple conditionally-simulated variables. As a result, variations between low and high cases can easily be 5-10% or more.
In order to compare the results from the geocellular model to previous work, it is sometimes helpful to create maps such as permeability-thickness (Kh) or net pay. Kh maps (Figure 9g-3) are easy to construct and in most cases a permeability cut-off is applied beforehand, using the rules discussed in the previous chapter and summarized in Figure 9g-4. Net pay is more problematic for several reasons. First, several cutoffs must be applied to estimate net pay including shale volume, permeability, and water saturation (Figure 9g-5). Second, there are various definitions of net pay so it is important to be certain that the net pay map constructed from your geocellular model is compared to a map constructed using a similar definition of net pay.
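A Kh value for a single well or grid column is simply the sum of permeability times thickness over the layers that pass the cut-off. The sketch below uses hypothetical layer values and a hypothetical 1 mD cut-off purely for illustration:

```python
# Sketch of a Kh (permeability-thickness) calculation for one grid column,
# applying a permeability cut-off before summing (hypothetical values).
def kh(perms_md, thicknesses_ft, cutoff_md=1.0):
    """Sum k*h over layers whose permeability meets the cut-off."""
    return sum(k * h for k, h in zip(perms_md, thicknesses_ft) if k >= cutoff_md)

# Three layers; the 0.5 mD layer falls below the cut-off and is excluded
print(kh([150.0, 0.5, 40.0], [10.0, 5.0, 8.0], cutoff_md=1.0))  # 1820.0
```

Mapping this value for every column of the model produces the Kh map; the choice of cut-off follows the rules summarized in Figure 9g-4.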
9h. Realization Assessment. As part of the geocellular modeling process, multiple realizations are created by varying numerous parameters. Given that all of these realizations are equi-probable, they must be assessed to
determine which one most closely resembles the actual reservoir and should therefore be used for history matching and production forecasting.
As a first step in this process, the production geologist needs to consult with the simulation engineer to understand which variables in the geocellular model are likely to have the greatest impact on the simulation results. These variables may include horizontal permeability, the height of the transition zone, variations in areal sweep caused by differences in fracture connectivity, or other parameters. The different realizations should then be reviewed, and a subset of realizations should be chosen that reflects the greatest variation in the critical parameters. To assess these realizations, additional analysis may be needed. For example, if sandbody connectivity is a key parameter, each realization should be analyzed to determine the percentage of sandbodies that are connected. The subset of chosen realizations could be as few as three—one that is likely to be pessimistic in terms of hydrocarbon recovery, one that is likely to be optimistic, and one in the middle.
The realizations can then be upscaled if needed and exported to a single or two-phase streamline simulator, a black oil simulator, or a fully compositional simulator for a screening assessment. This often consists of initializing the model with reasonable fluid properties, conducting a model run, and comparing the results to full-field reservoir performance. The realization that most closely matches this performance can then be carried forward for detailed analysis.
Figures 9h-1 through 9h-4 document an example of this process applied to a fluvially-dominated deltaic reservoir in South America. The geocellular model for this reservoir was created using an object modeling technique and the distributed facies served as a template for the distribution of petrophysical properties. After constructing the geocellular model, ten realizations were generated and imported into a single-phase streamline modeling application (Figure 9h-1). The dominant mechanism in this reservoir is an active waterdrive, and in order to simulate it, a row of injectors was added in the aquifer (Figure 9h-2). The model was then run, resulting in a set of streamlines that were very dense in well-swept areas, and very sparse in poorly-swept areas.
Each of the models was then interrogated to quantify its degree of heterogeneity. This was accomplished by identifying all cells with permeabilities greater than 100 millidarcies and water saturations of less than 60% after more than 30 years of production (Figure 9h-3). The model with the fewest cells meeting these criteria was considered the least heterogeneous and best-swept model, whereas the model with the most cells meeting these criteria was judged the most heterogeneous and least swept. These two models, along with a third model of intermediate heterogeneity, were then upscaled, numerically simulated, and history matched to field-wide performance.
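The screening criterion applied to each realization reduces to a simple count over the cell arrays. The sketch below uses hypothetical per-cell values; the thresholds (100 mD, 60% water saturation) are the ones quoted above:

```python
# Sketch of the heterogeneity screening metric: count high-permeability
# cells that remain poorly swept (low Sw) at the end of the simulated
# production period. Hypothetical four-cell example.
def poorly_swept_count(perm_md, sw, k_min=100.0, sw_max=0.60):
    """Count cells with permeability above k_min and water saturation
    below sw_max; a higher count implies a more heterogeneous,
    less well swept realization."""
    return sum(1 for k, s in zip(perm_md, sw) if k > k_min and s < sw_max)

perm = [250.0, 80.0, 120.0, 300.0]
sw   = [0.55, 0.40, 0.70, 0.50]
print(poorly_swept_count(perm, sw))  # 2
```

Ranking the realizations by this count identifies the least swept, intermediate, and best swept models for upscaling and history matching.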
Figure 9h-4 shows the results of one of these history matches. The oil rate is set equivalent to the actual production, and the quality of the match is assessed by comparing the actual water cut, gas-oil ratio, and pressure performance with time to the simulated results. The realization judged to have the best match to the actual reservoir performance was then carried forward for detailed history matching and forecasting.
9i. Upscaling and Export. The goal of upscaling and exporting the finely-gridded geocellular model is to provide a coarsely-gridded model for numerical simulation that preserves the critical reservoir properties in the original model. As discussed earlier, this upscaling process is needed because geocellular models are typically composed of millions of cells whereas numerical models only contain a few hundred thousand cells (Figure 9i-1).
Model properties including net-to-gross ratio, porosity, and water saturation are upscaled using various techniques and weightings (Figure 9i-2). The net-to-gross ratio can be upscaled as a discrete or continuous volume-weighted variable. The volume weighting is needed to account for variable cell volumes in the model. Porosity is a continuous variable that is both volume and net-to-gross weighted during upscaling. Water saturation is a continuous variable that is volume, net-to-gross and porosity-weighted during upscaling.
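The weighting hierarchy described above can be made concrete with a small sketch. This is a hypothetical two-cell example, not a vendor algorithm; it simply shows that each property is averaged with the weights of the properties "above" it in the chain:

```python
# Sketch of the weighted averages used to upscale fine cells into one
# coarse cell (hypothetical two-cell example with equal bulk volumes).
def upscale(cells):
    """cells: list of (bulk_volume, ntg, phi, sw) tuples for the fine
    cells that fall inside one coarse cell."""
    vol   = sum(v for v, n, p, s in cells)
    net   = sum(v * n for v, n, p, s in cells)
    pore  = sum(v * n * p for v, n, p, s in cells)
    water = sum(v * n * p * s for v, n, p, s in cells)
    ntg = net / vol    # volume-weighted
    phi = pore / net   # volume- and NTG-weighted
    sw  = water / pore # volume-, NTG-, and porosity-weighted
    return ntg, phi, sw

ntg, phi, sw = upscale([(1.0, 0.8, 0.25, 0.3), (1.0, 0.4, 0.15, 0.5)])
print(ntg, phi, sw)
```

Because the weights cascade, the upscaled coarse cell honors the same pore volume and water volume as the fine cells it replaces.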
Permeability is a special parameter that is upscaled differently than the others. Analytically, permeability is upscaled harmonically, arithmetically, or geometrically depending upon whether the dominant reservoir fluid flow is vertical, horizontal, or some combination of the two, respectively (Figure 9i-3). In the simulator, a diagonal permeability tensor method is most commonly used which calculates the effective permeability that will result in the same single-phase flow rate and pressure drop as would flow through the geocellular model cells that occupy the same volume (Figure 9i-4).
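The three analytical averages are straightforward to compute; a useful check is that for any set of positive permeabilities the harmonic mean is never greater than the geometric mean, which is never greater than the arithmetic mean. A minimal sketch with hypothetical layer permeabilities:

```python
# Sketch of the three analytical permeability averages (hypothetical
# three-layer example in millidarcies).
import math

def arithmetic_mean(ks):
    """Layers in parallel: dominant flow is horizontal, along the layers."""
    return sum(ks) / len(ks)

def harmonic_mean(ks):
    """Layers in series: dominant flow is vertical, across the layers."""
    return len(ks) / sum(1.0 / k for k in ks)

def geometric_mean(ks):
    """Mixed or randomly distributed permeability."""
    return math.exp(sum(math.log(k) for k in ks) / len(ks))

ks = [10.0, 100.0, 1000.0]
# harmonic <= geometric <= arithmetic always holds for positive values
print(harmonic_mean(ks), geometric_mean(ks), arithmetic_mean(ks))
```

The wide spread between the three averages for the same layer stack (roughly 27, 100, and 370 mD here) is exactly why the choice of averaging method, and hence the assumed flow direction, matters so much in upscaling.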
Quality control of the upscaled model is extremely important to make sure its properties are similar to the geocellular model, and that heterogeneities that control fluid flow have been preserved during upscaling. Different parameters that should be compared between the two models include hydrocarbons-in-place, petrophysical property statistics, net-to-gross ratios, and sandbody connectivity. More specific checks can be made by drilling a “pseudo-well” in each model and comparing the facies proportions, porosity values, and other parameters encountered.
Another critical aspect of quality-controlling the upscaling is to make sure that those features controlling fluid flow in the geocellular model are captured in the upscaled model. These features include shale barriers and baffles, faults that create compartments, sandbody pinchouts and extreme permeability contrasts (especially “thief” zones). The nature of upscaling is to average and reduce the extremes in the property distributions. However, because these extremes often exert a great deal of control over fluid flow, it is critical to make sure that they are preserved in the upscaling.
For example, one of the key elements to preserve from the geocellular model is the degree of sandbody connectivity. Failure to preserve connectivity may substantially alter model parameters such as tortuosity, coning, sweep efficiency, and vertical permeability. Therefore, a careful validation of the upscaled model is needed to determine that the geological characterization has been accurately captured. There are several ways to do this. One method is to plot upscaled permeability values alongside facies profiles to ensure that both permeable and impermeable bodies have been preserved in the scale-up. A second method is to run single- or multi-phase streamline models on both upscaled and non-upscaled models to see if similar results are obtained.
A third method is to compare connected body distributions before and after upscaling. Connected bodies are defined as volumes of continuous net sand. They are determined by defining those cells containing net sand that are connected to another cell of net sand by at least one edge of a grid cell. Figure 9i-5 contains two cumulative frequency plots showing the connected body distributions for geological and simulation models from two different realizations. The two plots only contain the 10 largest connected bodies in each realization, but these account for about 80% of the total connected volume. The plot on the left shows that the two largest bodies in the upscaled model account for the same connected volume as the seven largest bodies in the geocellular model. This indicates that thin impermeable shales have been removed during scale-up. In contrast, the plot on the right shows that the two curves nearly overlay, indicating that the connected body distribution has been maintained during scale-up. This second realization is clearly preferred for simulation. Another observation is that 80-90% of the total connected volume is contained within only 10 bodies in this model. These are the bodies to understand and characterize accurately. It may also be advisable to simply extract and simulate these.
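The connected-body definition above (net-sand cells joined through at least one shared face) is a standard connected-component search. The sketch below applies it to a hypothetical 2D grid for brevity; real models are 3D, where each cell has six face neighbors instead of four:

```python
# Sketch of connected-body extraction on a small 2D net-sand grid using
# breadth-first search with face connectivity (hypothetical grid; a real
# model would be 3D with six neighbors per cell).
from collections import deque

def connected_bodies(net_sand):
    """Return sizes (cell counts) of connected net-sand bodies,
    largest first."""
    rows, cols = len(net_sand), len(net_sand[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if net_sand[r][c] and not seen[r][c]:
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    i, j = queue.popleft()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and net_sand[ni][nj] and not seen[ni][nj]):
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                sizes.append(size)
    return sorted(sizes, reverse=True)

grid = [[1, 1, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 1]]
print(connected_bodies(grid))  # [4, 3]
```

Running this on the fine and coarse grids and comparing the cumulative size distributions (as in Figure 9i-5) reveals whether scale-up has artificially merged bodies by removing thin shales.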
9j. Numerical Simulation and Reserves. Numerical simulation is a complex mathematical modeling process used to understand the response of the reservoir to different development options. Numerical simulation has become more popular over the past decade as computing power has increased, and it has now become a routine part of any reservoir analysis. While numerical models are very useful, care must be taken to ensure that their output is reasonable. The models contain large grid blocks and there is a limited data set used to assign properties to them. The models are also always less heterogeneous than the reservoir and therefore tend to over-predict hydrocarbon recovery. These uncertainties result in variations in the model results relative to actual performance which must be tuned by history matching.
In the history matching process, model parameters are adjusted until the simulated performance mimics the actual performance of the reservoir. In general, the models must be able to reproduce 1) well pressures and pressure gradients, 2) fluid flow rates and cumulative produced volumes, 3) saturation distributions, and 4) well productivity indices (PI’s). In order to obtain a history match, several key model parameters are adjusted to match reservoir performance including permeability (horizontal, vertical, fracture permeability), relative permeability (shape and endpoints), and pore volume (structure, net thickness, fluid levels). It is particularly important for the production geologist to understand how the simulation engineer has changed these to make sure they are reasonable and to update the geocellular model to reflect these changes.
The primary purpose of the numerical simulation model is development planning and not reserves determination. The reserves classification that most closely corresponds to the simulation model reserves is the “proven plus probable” or median (P50) case. The model must therefore be modified to only include “proven” components, or the model
results must be modified to only include those production streams and facilities that are “proven” if the numerical model is to be used for estimating proven reserves.
If the model is being modified to only include proven components, then hydrocarbon contacts must coincide with the location of the lowest known oil or gas, spacing requirements must be honored, and only those surface facilities and infrastructure that can be economically justified by proven reserves can be part of the development plan. If the simulation output is being modified, then it must only include production from those areas, wells, and facilities that meet the requirements for proven reserves.