Abstract:
We have developed an efficient and very fast equivalent-layer technique for processing gravity and magnetic data. It uses a discrete convolution to modify the forward-problem calculation of two methods: for gravity data, an iterative method grounded on the excess mass constraint that does not require the solution of linear systems; and, for magnetic data, the conjugate gradient least-squares algorithm. Taking advantage of the Block-Toeplitz Toeplitz-Block (BTTB) structure of the sensitivity matrix, which arises when regular grids of observation points and equivalent sources (point masses or dipoles) are used to set up a fictitious equivalent layer, we have developed an algorithm that greatly reduces the number of floating-point operations (flops) and the computer memory needed to estimate a 2D physical-property distribution over the equivalent layer. The BTTB matrix can be written using only the elements of the first column of the sensitivity matrix, which in turn can be embedded into a Block-Circulant Circulant-Block (BCCB) matrix. Likewise, only the first column of the BCCB matrix is needed to reconstruct the full sensitivity matrix. Moreover, the eigenvalues of the BCCB matrix can be computed from its first column with the 2D fast Fourier transform (2D FFT), which allows the matrix-vector product of the forward modeling in the fast equivalent-layer technique to be evaluated readily. As a result, our method can efficiently process very large data sets. Tests with synthetic data demonstrate that the estimated equivalent sources can be used satisfactorily for data processing, for example, for upward-continuing gravity and magnetic data. Our results show much smaller border effects and noise amplification than those produced by the classical approach in the Fourier domain. For the gravity case, our synthetic results show that our method runs in ≈ 30.9 seconds for N = 1,000,000 observations, whereas the iterative method grounded on the excess mass constraint takes ≈ 46.8 seconds for only N = 22,500. A test with field data from the Carajás Province, Brazil, illustrates the low computational cost of our method in processing a large data set of N = 250,000 observations. Synthetic tests for magnetic data on a mid-size 100 × 50 grid of total-field anomaly data show a decrease of ≈ 10⁴× in floating-point operations and of ≈ 25× in runtime compared with the classical approach of solving the least-squares normal equations via Cholesky decomposition. Even faster results are obtained for millions of data, with drastic decreases in computer memory usage and runtime, allowing magnetic data processing of large data sets on regular desktop computers. Our results also show that, compared with the classical Fourier approach, magnetic data processing with our method requires a similar computation time but produces significantly smaller border effects without any padding scheme, and is more robust in dealing with data on irregularly spaced points or on undulating observation surfaces. A test with 1,310,000 irregularly spaced field data over the Carajás Province, Brazil, confirms the efficiency of our method, taking ≈ 385.56 seconds to estimate the physical-property distribution over the equivalent layer and ≈ 2.64 seconds to compute the upward continuation.
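To make the BCCB construction described above concrete, the following minimal Python/NumPy sketch shows how a BTTB matrix-vector product can be evaluated through a BCCB embedding and the 2D FFT: the FFT of the first column of the BCCB extension gives its eigenvalues, so the dense O(N²) product is replaced by an O(N log N) circular convolution. The function name bttb_matvec and the kernel storage layout are illustrative assumptions, not the paper's actual implementation; the sketch demonstrates only the generic technique (zero-padding, Fourier-domain multiplication by the BCCB eigenvalues, and truncation).

import numpy as np

def bttb_matvec(kernel, x, n1, n2):
    """Multiply an (n1*n2) x (n1*n2) BTTB matrix by a vector in O(N log N).

    kernel holds the generating values t[di, dj] of the BTTB matrix for
    lags di = -(n1-1)..(n1-1) and dj = -(n2-1)..(n2-1), stored as a
    (2*n1-1, 2*n2-1) array with lag (0, 0) at index (n1-1, n2-1).
    """
    # First column of the (2*n1, 2*n2) BCCB embedding: non-negative lags
    # in the leading block, negative lags wrapped around circularly.
    c = np.zeros((2 * n1, 2 * n2))
    c[:n1, :n2] = kernel[n1 - 1:, n2 - 1:]          # di >= 0, dj >= 0
    c[:n1, n2 + 1:] = kernel[n1 - 1:, :n2 - 1]      # di >= 0, dj < 0
    c[n1 + 1:, :n2] = kernel[:n1 - 1, n2 - 1:]      # di < 0,  dj >= 0
    c[n1 + 1:, n2 + 1:] = kernel[:n1 - 1, :n2 - 1]  # di < 0,  dj < 0

    # Eigenvalues of the BCCB matrix: the 2D FFT of its first column.
    lam = np.fft.fft2(c)

    # Zero-pad the input grid, multiply in the Fourier domain, truncate.
    xp = np.zeros((2 * n1, 2 * n2))
    xp[:n1, :n2] = x.reshape(n1, n2)
    y = np.fft.ifft2(lam * np.fft.fft2(xp))
    return y[:n1, :n2].real.ravel()

# Sanity check against an explicitly assembled dense BTTB matrix.
n1, n2 = 4, 5
rng = np.random.default_rng(0)
kernel = rng.standard_normal((2 * n1 - 1, 2 * n2 - 1))
x = rng.standard_normal(n1 * n2)
A = np.array([[kernel[n1 - 1 + i1 - j1, n2 - 1 + i2 - j2]
               for j1 in range(n1) for j2 in range(n2)]
              for i1 in range(n1) for i2 in range(n2)])
assert np.allclose(A @ x, bttb_matvec(kernel, x, n1, n2))

In this sketch only the (2*n1 - 1, 2*n2 - 1) kernel values are ever stored, never the full N × N sensitivity matrix, which mirrors the memory savings claimed above; within an iterative solver, each forward calculation would call such a routine in place of a dense matrix-vector product.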