The drilling industry continually seeks to reduce operational costs by improving the efficiency of the drilling process. The complexity and uncertainty of the drilling process, together with the limited information available during well construction, make it difficult to apply standard control methods and to detect and monitor the actual state of the ongoing operation. These factors represent challenges to achieving a fully autonomous drilling sequence.
An autonomous decision-making framework is presented for systematically updating the well plan based on the current well construction status and the available surface and downhole information. The primary objective, minimizing the time to reach the total depth (TD) of the well, must be fulfilled over multi-level finite operational horizons, ranging from the next few minutes up to reaching TD, while keeping the associated short- and long-term risks at acceptable levels. The problem is formulated as a Markov Decision Process (MDP), a mathematical framework defining the ongoing rig states, operational actions, and policies.
The methodology has been implemented on a wait-to-slip hole conditioning operation to demonstrate the viability of the approach using surface sensor information. The value of states is estimated by repeatedly evaluating state-action transitions, a Reinforcement Learning method, to prescribe the set of actions that best satisfies the combined reward and penalty objectives. In the context of this research, the environment dynamics are assumed to be fully known, so that Dynamic Programming approaches can be applied. A sensitivity analysis was performed, confirming the selection of model parameters. The results show the potential for eliminating unnecessary operational activities.
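To make the approach concrete, the sketch below shows value iteration, a standard Dynamic Programming method for MDPs with fully known environment dynamics, applied to a toy model. The states, actions, transition probabilities, and rewards are invented for illustration only and do not represent the paper's actual rig-state model; the structure (reward for progress, penalties for lost time and risk) mirrors the combined reward and penalty objectives described above.

```python
# Toy MDP sketch: hypothetical rig states and actions (NOT the paper's
# actual model), solved by value iteration under fully known dynamics.

# Hypothetical states and actions, for illustration only.
STATES = ["clean_hole", "packed_off", "conditioned"]
ACTIONS = ["circulate", "ream", "continue_drilling"]

# P[state][action] -> list of (probability, next_state, reward).
# Rewards penalize lost time and risk; all numbers are invented.
P = {
    "clean_hole": {
        "circulate": [(1.0, "conditioned", -1.0)],
        "ream": [(1.0, "conditioned", -2.0)],
        "continue_drilling": [(0.8, "clean_hole", 1.0),
                              (0.2, "packed_off", -5.0)],
    },
    "packed_off": {
        "circulate": [(0.6, "clean_hole", -3.0), (0.4, "packed_off", -3.0)],
        "ream": [(0.9, "clean_hole", -2.0), (0.1, "packed_off", -4.0)],
        "continue_drilling": [(1.0, "packed_off", -10.0)],
    },
    "conditioned": {
        "circulate": [(1.0, "conditioned", 0.0)],
        "ream": [(1.0, "conditioned", 0.0)],
        "continue_drilling": [(1.0, "clean_hole", 2.0)],
    },
}

GAMMA = 0.95  # discount factor balancing short- and long-term rewards


def q_value(s, a, V):
    """Expected discounted return of taking action a in state s."""
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a])


def value_iteration(tol=1e-8):
    """Return optimal state values and the greedy policy."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            best = max(q_value(s, a, V) for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {s: max(ACTIONS, key=lambda a: q_value(s, a, V))
              for s in STATES}
    return V, policy
```

Under these invented numbers, the computed policy prescribes remedial reaming when the hole is packed off, illustrating how the framework selects actions that trade an immediate time penalty against long-term risk.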
The paper presents a novel method for well construction operation planning that connects decision-making across multiple operational levels. The proper design of such a system is an essential step toward a fully automated well construction decision-making system.