Abstract
Creating an efficient system for simulating giant reservoir models requires addressing a whole range of technical challenges. In this paper we concentrate on the parallel scalability of complex computing systems such as multi-CPU clusters and workstations equipped with GPU processing cards. For multi-CPU distributed-memory computing systems, it is shown that an improvement of about 10 times in parallel performance can be achieved with a new, so-called "hybrid" approach, in which the usual MPI synchronization between cluster nodes is interleaved with shared-memory, system-thread-based synchronization at the node level. It is demonstrated that for some "black oil" models of real oil and gas fields, the parallel acceleration factor can exceed 1300 on 4096 CPU cores. Even for the extreme example of a giant full-field model containing over 14,000 production and injection wells, it is shown that a parallel acceleration of over 350 times can be achieved. For CPU-GPU and CPU-only systems, we compare the parallel performance of simple iterative algorithms and of the realistic preconditioner-based algorithms typically used in oil and gas simulations. Hardware systems equipped with AMD FirePro cards, NVIDIA Tesla cards, and 16-core dual-socket Intel Xeon E2580 processors are compared in this study.
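As an illustration of the hybrid scheme mentioned above, the following is a minimal sketch, not taken from the paper, of combining MPI communication between cluster nodes with OpenMP shared-memory threads inside a node; the array size, the per-cell kernel, and the reduction are illustrative assumptions only.

/* Hybrid pattern sketch: MPI between nodes, OpenMP threads within a node.
 * All names and sizes are illustrative, not from the paper. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request thread support: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "insufficient MPI thread support\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int n_local = 1000000;            /* cells owned by this rank (illustrative) */
    double *x = malloc(n_local * sizeof(double));
    double local_sum = 0.0, global_sum = 0.0;

    /* Node-level parallelism: threads fill and reduce the local array. */
    #pragma omp parallel for reduction(+ : local_sum)
    for (int i = 0; i < n_local; ++i) {
        x[i] = 1.0 / (1.0 + i + rank);      /* stand-in for per-cell work */
        local_sum += x[i];
    }

    /* Cluster-level synchronization: one MPI reduction per rank (node),
     * rather than one per core as in a pure-MPI run. */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads/rank=%d global_sum=%f\n",
               nranks, omp_get_max_threads(), global_sum);

    free(x);
    MPI_Finalize();
    return 0;
}

In this sketch the expensive inner loop is shared among threads on each node, and only one MPI message per node participates in the global reduction, which is the general idea behind the hybrid interleaving of MPI and thread-level synchronization described in the abstract.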