In general, any restriction to flow that distorts the flow lines from being perfectly normal to the borehole direction results in positive skin. As a result, the skin factor computed from well testing is a widely used indicator of the overall flow efficiency of a given well. This is theoretically valid, but a high positive skin indicated by a well test is often assumed to be synonymous with a large extent of near-wellbore formation damage and is, therefore, frequently used as a criterion for performing a stimulation treatment meant to enhance well productivity. This latter practice is generally not valid, since the skin factor computed from a well test is, in reality, a composite variable that is not merely a function of near-wellbore damage. It also depends on parameters such as perforation geometry, well deviation, partial completion, and other phase- and rate-dependent effects. Therefore, the skin computed from a well test must first be broken down into its constituent components in order to determine the "true" near-wellbore skin. Moreover, the skin and permeability values computed from a well test are so intimately related that an error in one directly impacts the other. Typically, additional input data must also be integrated into any given permeability-skin model in order to reduce the computational uncertainty. Only after proper modeling can an appropriate stimulation treatment be selected. This is particularly important for high-deliverability gas wells, for which the formation damage-related skin often makes up only a small portion of the total well test-measured skin.
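The decomposition described above can be illustrated with a minimal sketch. It assumes the common additive skin model in which the total well-test skin is the sum of a near-wellbore damage component, geometric components (perforation, deviation, partial completion), and a rate-dependent (non-Darcy) term D·q often significant in gas wells; the component names and the numerical values below are illustrative assumptions, not values from the paper.

```python
def true_damage_skin(s_total, s_perforation, s_deviation,
                     s_partial_completion, d_factor, rate):
    """Estimate the near-wellbore damage skin by subtracting the
    geometric and rate-dependent components from the composite
    well-test skin (additive skin model, an assumed simplification).

    s_total               -- composite skin from the well test
    s_perforation         -- perforation-geometry skin
    s_deviation           -- well-deviation skin (often negative)
    s_partial_completion  -- partial-completion skin
    d_factor              -- non-Darcy coefficient D (per unit rate)
    rate                  -- flow rate q, so d_factor * rate is the
                             rate-dependent (turbulence) skin
    """
    s_rate_dependent = d_factor * rate
    return s_total - (s_perforation + s_deviation
                      + s_partial_completion + s_rate_dependent)

# Hypothetical high-deliverability gas well: a large total skin of 25
# that is dominated by non-damage components, leaving a modest
# "true" damage skin.
s_damage = true_damage_skin(s_total=25.0,
                            s_perforation=3.0,
                            s_deviation=-2.0,
                            s_partial_completion=8.0,
                            d_factor=5e-4,
                            rate=20000.0)  # -> 6.0
```

In this hypothetical case, 19 of the 25 total skin units come from geometry and non-Darcy flow, so treating the full well-test skin as formation damage would greatly overstate the benefit of a stimulation treatment.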

This paper outlines a methodology that can be used to compute the various skin components as well as representative values of the average formation permeability from a well test. Several examples from gas wells are presented to demonstrate the utility of this methodology.
