It is useful to frame ML in terms of what we already know, and to explore what we don't know that we don't know, so that we can create breakthroughs. To get started, let's take a look at feature engineering (alias: attributes).

Use case 1) Amplitude versus offset (AVO) – for simplicity, think of the classic two-term model: the reflection amplitude at each time sample is the intercept P plus the gradient G (slope) multiplied by the square of the sine of the reflection angle. The features are P and G. (A small fitting sketch follows at the end of this post.)

Use case 2) Presalt plays – prestack depth imaging led to discoveries of over 4 billion boe in GOM subsalt. The feature here can be the salt boundaries mapped from large-aperture, depth-imaged data. This shows the vital role feature engineering plays; with conventional seismic imaging we would not even see what is tucked under salt overhangs and canopies. We made further strides in subsalt image quality with wide-azimuth (WAZ) acquisition, anisotropic algorithms and full-waveform inversion (FWI) processing. Unlike ML examples in social networks, geoscience feature engineering requires significant investment in capital, time and resources. As in ML, getting feature engineering right dramatically improves model quality and prediction accuracy.

Use case 3) Inversion – when we work with young clastic sediments in a cyclic depositional environment, like the GOM, techniques using a least-squares criterion tend to perform well, whereas in the North Sea, with its distinct unconformities and geology, a sparse layered model works better. The analog in ML? L2 (least-squares) norm regularization tends to distribute weights across features, large and small, while the L1 (absolute-value) norm zeroes some weights and puts more weight on the remaining features – in a nutshell, a smaller model reduced to the few features that matter. (Caution: smaller may not be better without proper calibration; see the second sketch below.)

Use cases 4, 5, ...? Feel free to chime in.
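To make use case 1 concrete, here is a minimal sketch of extracting the P and G features by fitting the two-term model R(theta) ≈ P + G·sin²(theta) to an angle gather with ordinary least squares. The function name, the synthetic gather and the angle range are my own illustrative assumptions, not from the post; a real AVO workflow would also deal with wavelet, tuning and angle-range effects that this sketch ignores.

```python
import numpy as np

def avo_intercept_gradient(amplitudes, angles_deg):
    """Fit the two-term AVO model R(theta) ~= P + G * sin^2(theta), one fit per time sample.

    amplitudes : array of shape (n_samples, n_angles), amplitudes picked from an angle gather
    angles_deg : array of reflection angles in degrees
    Returns (P, G), each of shape (n_samples,).
    """
    sin2 = np.sin(np.radians(angles_deg)) ** 2
    # Design matrix: a column of ones for the intercept P, sin^2(theta) for the gradient G
    A = np.column_stack([np.ones_like(sin2), sin2])
    # Solve A @ [P, G] = amplitudes for every time sample at once
    coeffs, *_ = np.linalg.lstsq(A, amplitudes.T, rcond=None)
    P, G = coeffs
    return P, G

# Toy usage: a one-sample synthetic gather with known P = 0.10, G = -0.25 plus noise
angles = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
true_P, true_G = 0.10, -0.25
amps = true_P + true_G * np.sin(np.radians(angles)) ** 2
amps = amps[np.newaxis, :] + 0.005 * np.random.default_rng(0).normal(size=(1, angles.size))
P_est, G_est = avo_intercept_gradient(amps, angles)
print(P_est, G_est)  # should recover values close to 0.10 and -0.25
```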
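And for the L1-versus-L2 analogy in use case 3, a small sketch with scikit-learn on toy data of my own (not from the post). With the same ten candidate features, ridge (L2) keeps small non-zero weights spread across nearly all of them, while lasso (L1) drives the irrelevant ones to exactly zero, leaving the smaller model reduced to the few features that matter.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

# Toy problem: 10 candidate features, but only 3 actually drive the target
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))
true_w = np.array([3.0, 0.0, 0.0, -2.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=200)

# L2 (ridge) regularization shrinks weights but spreads them across features
ridge = Ridge(alpha=1.0).fit(X, y)
# L1 (lasso) regularization zeroes some weights entirely
lasso = Lasso(alpha=0.1).fit(X, y)

print("ridge:", np.round(ridge.coef_, 3))  # small but non-zero weights on most features
print("lasso:", np.round(lasso.coef_, 3))  # exact zeros on the features that don't matter
```

As the caution in the post says, the sparser model is not automatically the better one; the regularization strength (alpha here) still has to be calibrated, for example by cross-validation.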