ABSTRACT

Geosteering and reservoir mapping electromagnetic tools provide complex deep and/or ultradeep azimuthal resistivity (UDAR) measurements while drilling. With interpretation software, these measurements can be used to profile formation structure and reservoir fluid distribution more than 100 ft away from a wellbore. The quality of this interpretation depends strongly on the quality of the raw measurements, so it is often necessary to distinguish measurements by their noise level to avoid biasing, or even degrading, the interpreted results with unreliable data.

Because acquiring and processing UDAR measurements are themselves challenging tasks, measurement uncertainty has never been systematically investigated. Until now, the standard approach to addressing the problem has been for geosteering engineers to rely on their experience to manually remove channels expected to be less reliable. One potential resource would be a UDAR noise model, which could be used to simulate tool responses for a given formation; however, knowledge of the true formation only becomes available after the interpretation process.

This work adopted a different approach to evaluating the quality of UDAR measurements by training a machine-learning (ML) algorithm to estimate channel noise levels directly from the raw measurements, thus avoiding the need for an initial interpretation. To this end, a large dataset was created with raw measurements and noise levels from a wide range of simulated scenarios. An ML algorithm, such as a neural network or a decision forest, was then trained to predict these noise levels directly from the measurements without access to the underlying scenario. The trained model could then be used to evaluate noise levels in unseen scenarios and real-world cases.
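
The Python sketch below illustrates this kind of training setup under stated assumptions; it is not the authors' implementation, and the data shapes, synthetic placeholders, and network sizes are purely illustrative.

```python
# Minimal sketch: learn per-channel noise levels directly from raw UDAR
# measurements. All shapes and hyperparameters below are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 96))          # placeholder for simulated raw measurements
y = np.abs(rng.normal(size=(2_000, 96)))  # placeholder for per-channel noise levels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

# Option 1: a decision forest predicting all channels at once
forest = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

# Option 2: a feedforward neural network predicting all channels in parallel
net = MLPRegressor(hidden_layer_sizes=(128, 128, 128, 128), max_iter=500)
net.fit(X_train, y_train)

y_pred = net.predict(X_test)  # estimated noise levels for held-out scenarios
```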

In a proof of concept, the proposed method was applied to the noise model of a UDAR tool with 96 channels, comprising eight measurement types acquired at six frequencies with two receivers. Scenarios were generated from a formation distribution designed to cover most cases encountered in the field, and noise levels were computed as the standard deviation of each channel's output. The training set contained 100,000 samples and the test set 10,000 samples. Final performance was measured using the relative error, with a goal of less than 10% error for at least 90% of the data in the test set. Among all the approaches tested, feedforward neural networks performed best, particularly when predicting all channels in parallel; attempting to predict the noise in one channel appears to help bootstrap the feature search for the other channels as well. With four hidden layers, the 10% target was reached for 42 of the 96 channels, particularly the low-frequency ones. The worst-performing channel achieved a relative error of 20.6% over the best 90% of its data. Classifying neural networks focused on one channel at a time improved this result, but still not to the point where the 10% goal was met. As an additional test, the trained network was used to make predictions on simulations of actual scenarios and benchmark problems, where the method performed well: in these final real-world scenario tests, the predicted uncertainty closely matched the uncertainty computed directly from the noise model.
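
As a concrete illustration of the acceptance criterion described above (relative error below 10% for at least 90% of the test data, assessed per channel), the following sketch shows one way such a check could be computed; the function and variable names are hypothetical, not taken from the paper.

```python
import numpy as np

def channel_meets_goal(y_true, y_pred, error_goal=0.10, coverage=0.90):
    """Return True if at least `coverage` of a channel's samples have a
    relative error below `error_goal`."""
    rel_err = np.abs(y_pred - y_true) / np.abs(y_true)
    return np.quantile(rel_err, coverage) < error_goal

# Evaluate each of the 96 channels independently (arrays shaped [n_samples, 96]):
# passing = [channel_meets_goal(y_test[:, c], y_pred[:, c]) for c in range(96)]
```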
