
# Quake Watch 2012


posted on Jan, 1 2012 @ 06:28 AM

APPRAISING EARTHQUAKE HYPOCENTER LOCATION ERRORS — ANALYZING DIFFERENT SOURCES OF HYPOCENTRAL ERROR

Introduction. Having seen the limitations of the approximation given by equation (18), we can approach the practical problem of what to do with this result. The problem is that with real data the vectors e_model, n, and e are fundamentally unknown. Our only information about them is provided by a projection of the residual vector, (I − AA⁺)r̂. Unfortunately, the above analysis shows that r̂ is a sum of all three error terms and is evaluated at the wrong place in space-time. The basic idea here is to use auxiliary information to appraise the relative importance of each term, which provides a more complete and realistic appraisal of location uncertainties. I now consider methods for estimating each of these terms. The order in which they are considered is significant.

Measurement error term. Of the three terms in equation (18), e is unique in that it is the only truly statistical quantity. Thus, although e may be unknowable, we can assume we know something about its statistics [see Freedman (1966) or Leaver (1984) for examples of particularly careful studies]. For impulsive arrivals measured from analog records, the eᵢ are approximately normally distributed with zero mean (Buland, 1976). For emergent arrivals, the eᵢ tend to have a distribution skewed toward positive numbers due to a tendency to pick weak arrivals too late (Anderson, 1982). Finally, with newer computer picking methods, the distribution of the eᵢ may be somewhat complicated, but at least the gross details of the distribution are known (Allen, 1982; Leaver, 1984). In any case, if we know something about the probability density function of e, it is a standard exercise in regression analysis (see Flinn, 1965; Hoel, 1971; Jordan and Sverdrup, 1981; and their associated references) to appraise the errors induced by e by examination of confidence ellipsoids.

For the hypocenter location problem, this amounts to outlining an ellipsoidal region in space-time characterized by (h − ĥ)ᵀ C⁻¹ (h − ĥ).
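The quadratic form quoted above is easy to sketch numerically. A minimal sketch, assuming a hypothetical 2-D (epicenter-only) problem and a made-up diagonal covariance C; 5.991 is the standard chi-square 95 per cent critical value for 2 degrees of freedom:

```python
# Sketch: test whether a trial epicenter offset dh = h - h_hat lies inside
# the 95% confidence ellipsoid defined by (h - h_hat)^T C^-1 (h - h_hat) <= crit.
# The covariance is hypothetical; crit = chi-square(0.95, 2 dof) = 5.991.

def quadratic_form(dh, c_inv):
    """Evaluate dh^T C^-1 dh for small dense matrices stored as nested lists."""
    n = len(dh)
    return sum(dh[i] * c_inv[i][j] * dh[j] for i in range(n) for j in range(n))

def inside_95(dh, c_inv, crit=5.991):
    return quadratic_form(dh, c_inv) <= crit

# Made-up diagonal covariance: 4 km^2 east-west, 1 km^2 north-south.
C_INV = [[0.25, 0.0], [0.0, 1.0]]

print(inside_95((2.0, 1.0), C_INV))  # 0.25*4 + 1*1 = 2.0 <= 5.991 -> True
print(inside_95((6.0, 1.0), C_INV))  # 0.25*36 + 1*1 = 10.0 -> False
```

The ellipsoid is elongated along the axis with the larger variance, which is why error ellipsoids on maps stretch in the poorly constrained direction.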

posted on Jan, 1 2012 @ 06:28 AM

An estimate for ρ that proved adequate, although still not entirely satisfying, was a crude guess one might be able to make with the real data that these synthetic data models represent. That is, all events are offset perpendicular to the "fault" by epicentral distances of ~9 and ~1.5 km for models A and B, respectively. If we presume this is representative for one degree of freedom of the solution, then a reasonable guess for the scale ρ is 9√3 km and 1.5√3 km for models A and B, respectively, since δx has three degrees of freedom. Using these values results in the bounds shown in Figure 7 labeled "b." These are still somewhat overly pessimistic, being approximately an order of magnitude larger than the actual error. This is largely because these estimates of ρ are not perfect either. Ideal values of ρ are ~10 and ~1.5 km for models A and B, respectively. Hence, if we knew ρ a priori, we would be able to reduce the bounds shown in Figure 7 by a factor of 3. In that case, these bounds would do a remarkably good job of measuring the potential impact of nonlinearity in the solution. It is important to recognize that this is a major advantage of the bounding criterion based on equation (28). That is, provided ρ can be estimated accurately by some auxiliary means, the bounds provided by (28) can be expected to be reasonable. This is again in contrast to simpler schemes I first tried using standard bounding methods based on matrix norms (i.e., ‖δhₙ‖ ≤ ‖A⁺‖ ‖n‖ for any pair of consistent matrix and vector norms, provided ‖n‖ is an upper bound on the true norm of n). These always gave terrible results even when ρ was chosen exactly.
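The matrix-norm bound mentioned at the end of that passage can be demonstrated in a few lines. A minimal sketch with a made-up diagonal sensitivity matrix, so that A⁺ and its spectral norm are exact; it shows how the bound ‖δh‖ ≤ ‖A⁺‖·‖n‖ overshoots the actual error:

```python
# Sketch: why the consistent-norm bound ||dh|| <= ||A+|| * ||n|| is often far
# too pessimistic. With a diagonal A the pseudoinverse and its spectral norm
# are trivial, so both sides can be computed exactly. All values are made up.
import math

# Diagonal "sensitivity" matrix: one well-resolved and one poorly resolved axis.
a_diag = [1.0, 0.1]
n = [0.1, 0.01]                      # hypothetical modeling-error vector

# Actual solution error: dh = A^-1 n (diagonal case).
dh = [ni / ai for ni, ai in zip(n, a_diag)]
actual = math.hypot(*dh)

# Norm bound: ||A+||_2 = 1 / (smallest diagonal entry); ||n||_2 as usual.
bound = (1.0 / min(a_diag)) * math.hypot(*n)

print(actual, bound)   # the bound is several times the actual error
```

The bound assumes n points entirely along the worst-resolved direction, which real error vectors rarely do; here it overshoots by roughly a factor of seven.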

DISCUSSION AND CONCLUSIONS. The first major result of this paper is equation (18). It is significant that this was not the result I originally expected when I began this study. My original intent was to consider the impact of the fact that the matrix of partial derivatives [A, defined by equations (7) and (10)] was calculated from a model of the earth's velocity structure in the same way the travel time is. It is well known that errors in calculating coefficients of a matrix cause errors in solutions of linear equations. Such errors are commonly appraised by matrix perturbation analysis methods (for least-squares problems, see, e.g., Lawson and Hanson, 1974, pp. 41-52) to bound the possible influence of computational errors in real computers with a finite precision. The analysis leading to equation (18) shows that even though A is not calculated perfectly, it makes little difference as long as a stable solution can be found. The basic reason is that in a linear problem, A is fixed; here A is variable. We try to minimize ‖r‖ by a sequence of steps given in equation (6). Each step is a linear one designed to minimize the norm ‖r − A δh‖ based on the current values of r and A, which vary from step to step. The net result is that when the solution converges, the only relevant measure of A is the current one. Hence, the fact that it may be wrong due to inadequacies in calculating ray takeoff angles is irrelevant. Equation (18) shows that hypocentral errors are a composite of three terms: (1) measurement error; (2) what I have called modeling errors; and (3) a nonlinear term. Of these, I claim only the first should really be viewed as a statistical quantity. This term can be appraised with confidence ellipsoids, but the scale of the process should be determined from independent studies of measurement errors such as those by Freedman (1966) or Leaver (1984).
Scaling confidence ellipsoids by rms following Flinn (1965) is a dangerous step that can produce misleading results, as shown by the results given in Figure 2. This occurred because the synthetic examples studied here model a feature of most real data: the errors are dominated by modeling errors. In this paper, I introduced an alternative means for appraising the influence of modeling errors using a component-wise bounding criterion based on a theorem proven in the Appendix. These bounds are based on arrival-dependent ray arc lengths and a common scale factor, Δu, which is a bound on the average slowness along a given ray segment. The result is a parallelepiped-shaped bounding region for each earthquake location whose relative dimensions are fixed and whose absolute scale depends linearly on Δu.
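That component-wise bound can be sketched directly. The row-wise form |δh_j| ≤ Δu · Σ_k |A⁺_jk| · s_k is my paraphrase of the criterion described above; the matrix entries, arc lengths s_k, and Δu below are all made-up illustrative numbers:

```python
# Sketch of the component-wise bounding idea: each hypocenter component j is
# bounded by delta_u * sum_k |a_pinv[j][k]| * arc_len[k], where arc_len[k] is
# the ray arc length for arrival k and delta_u bounds the average slowness
# error. Matrix entries and arc lengths are made-up illustrative numbers.

def component_bounds(a_pinv, arc_len, delta_u):
    return [delta_u * sum(abs(g) * s for g, s in zip(row, arc_len))
            for row in a_pinv]

A_PINV = [[0.2, -0.1, 0.05],    # hypothetical generalized-inverse rows
          [0.0,  0.3, -0.2]]
ARC_LEN = [30.0, 45.0, 60.0]    # km, one arc length per arrival
DELTA_U = 0.005                 # s/km, bound on average slowness error

b1 = component_bounds(A_PINV, ARC_LEN, DELTA_U)
b2 = component_bounds(A_PINV, ARC_LEN, 2 * DELTA_U)
print(b1)
# doubling delta_u doubles every bound: the parallelepiped's relative
# dimensions are fixed and its absolute scale is linear in delta_u
```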

posted on Jan, 1 2012 @ 06:39 AM
Happy New Year to you all. I hope and pray for a safe and prosperous year for all.

And may 2012 bring us all lots of good times here at ATS.

posted on Jan, 1 2012 @ 06:48 AM

Don't sell yourself short...of course you deserve the accolades. After all, it is YOUR thread that brings us all together. I did not mean to omit the other knowledgeable contributors (Westcoast, TA, Muzzy, Robin/Eric and many others); I just wanted to show my gratitude to you for providing this forum. Now, on with the show!

posted on Jan, 1 2012 @ 06:59 AM

Why thank you diamondsmith. Very interesting stuff, but I guess aboutface has probably done just that!!

What with answers from myself, TA and you and I think muzzy, I guess we should have about covered the subject!

Personally I like "Um, we don't really know", but I suppose that is because I said it.
It just seems nice and simple.

posted on Jan, 1 2012 @ 07:02 AM

That was for the thread; the other definition I U2U'd her/him 2 days ago.

posted on Jan, 1 2012 @ 07:04 AM

Only pulling your leg my friend!

I did appreciate it. Not being sarcastic.

posted on Jan, 1 2012 @ 07:13 AM

Originally posted by PuterMan

Only pulling your leg my friend!

I did appreciate it. Not being sarcastic.

I know that, and thank you PMan, but that was just an ATS survival kit for 2012.

posted on Jan, 1 2012 @ 11:27 AM
Two good-sized quakes here in Christchurch overnight. The kids are clinging to my leg again. The first one was a 5.1, then at 5.45 am there was a 5.5 that felt like a 6. Notice how they are all out to sea now.
www.canterburyquakelive.co.nz...
www.geonet.org.nz...

posted on Jan, 1 2012 @ 12:32 PM

QW, for me anyway, is about the technical aspects of earthquakes and trying to get an understanding of how and why they occur. Even perhaps working our way round to some sort of predictive capability.

Here some technical mumbojumbo for you that is tied to predictive capability:

In this paper we show, in terms of Fisher information and approximate entropy, that the two strong impulsive kHz electromagnetic (EM) bursts recorded prior to the Athens EQ (7 September 1999, magnitude 5.9) are compatible with the radar interferometry data and the seismic data analysis, which indicate that two fault segments were activated during the Athens EQ. The calculated Fisher information and approximate entropy content ratios closely follow the radar interferometry result that the main fault segment was responsible for 80% of the total energy released, while the secondary fault segment accounted for the remaining 20%. This experimental finding, which appears for the first time in the literature, further enhances the hypothesis of a seismogenic origin for the analyzed kHz EM bursts.
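Approximate entropy, one of the two regularity measures named in that abstract, is straightforward to sketch. This is a minimal version after Pincus's original definition, not the authors' exact pipeline; the embedding length m and tolerance r are conventional illustrative choices:

```python
# Sketch of approximate entropy (ApEn), the regularity measure mentioned in
# the abstract above, following Pincus's definition. Not the authors' exact
# pipeline; m (template length) and r (tolerance) are illustrative defaults.
import math
import random

def apen(u, m=2, r=0.2):
    def phi(m):
        n = len(u) - m + 1
        templates = [u[i:i + m] for i in range(n)]
        # Fraction of templates within tolerance r of each template
        # (self-matches included, so every count is positive).
        counts = [sum(1 for b in templates
                      if max(abs(x - y) for x, y in zip(a, b)) <= r) / n
                  for a in templates]
        return sum(math.log(c) for c in counts) / n
    return phi(m) - phi(m + 1)

random.seed(0)
periodic = [0.0, 1.0] * 60                       # highly regular signal
noisy = [random.random() for _ in range(120)]    # irregular signal
print(apen(periodic), apen(noisy))  # the regular series scores far lower
```

A regular (low-entropy) signal has templates whose continuations are predictable, so phi(m) ≈ phi(m+1) and ApEn ≈ 0; an irregular one scores high. That is the sense in which the 80/20 entropy-content split tracks the two fault segments.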

And a companion paper:

The variation of fractal dimension and entropy during a damage evolution process, especially approaching critical failure, has recently been investigated. A sudden drop of fractal dimension has been proposed as a quantitative indicator of damage localization or a likely precursor of an impending catastrophic failure. In this contribution, electromagnetic emissions recorded prior to a significant earthquake are analysed to investigate whether they also present such sudden fractal dimension and entropy drops as the main catastrophic event approaches. The pre-earthquake electromagnetic time series analysis results reveal good agreement with the theoretically expected ones, indicating that critical fracture is approaching.
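The "sudden fractal dimension drop" idea can be illustrated with any waveform FD estimator. Below is Katz's estimator, a common choice in signal-precursor studies, though not necessarily the one used in the paper quoted; run in a sliding window, a drop in this value is what would be flagged:

```python
# Sketch: Katz's fractal dimension for a waveform, one common estimator in
# EM-emission precursor work (not necessarily the paper's own estimator).
# A sliding-window version of this is what would reveal a sudden FD drop.
import math
import random

def katz_fd(x):
    # Treat the series as a planar curve through the points (i, x_i).
    dists = [math.hypot(1.0, x[i + 1] - x[i]) for i in range(len(x) - 1)]
    curve_len = sum(dists)                           # total path length L
    d = max(math.hypot(i, x[i] - x[0]) for i in range(1, len(x)))
    n = len(x) - 1                                   # number of steps
    return math.log10(n) / (math.log10(n) + math.log10(d / curve_len))

random.seed(1)
line = [0.5 * i for i in range(200)]                 # straight line -> FD = 1
noise = [random.gauss(0.0, 1.0) for _ in range(200)] # rough signal -> FD > 1
print(katz_fd(line), katz_fd(noise))
```

For a straight line the path length equals the end-to-end extent, so FD is exactly 1; roughness inflates the path length and pushes FD above 1. A drop toward 1 in a sliding window would signal the smoothing/localization the abstract describes.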

Wow, that's quite a bit to mull over but it goes along with a lot of what I'd read while rooting around in the rabbit hole opened by that hoax...

posted on Jan, 1 2012 @ 01:46 PM

Dammit

Any chance you can describe the two quakes?
The thing is Geonet do a fine job putting out the Felt Reports, but you don't know what the folks actually put in their reports, unlike EMSC, who have the actual text they sent in.
EMSC example of testimonies for today's Izu Islands Japan quake
We get some good stuff from MoorfNZ, Aoraki and SpaceJockey1 on here, but the more the merrier, makes the Thread more interesting.

edit on 1-1-2012 by muzzy because: fix html in link to EMSC

posted on Jan, 1 2012 @ 01:54 PM
I hope EMSC downgrade that Izu Islands quake like USGS and GFZ (6.7) have; I haven't had time to set up the 2012 Global Mag 7+ map and graph yet.

posted on Jan, 1 2012 @ 01:57 PM

Well, they were both very noisy. With the one at 5.45 am you could hear the roar before it arrived. Damage report from my home: the cracks in the house are a wee bit bigger, and just a few things fell over.
Also, my wife and I have decided to pack up and leave ChCh. We just need to fix this place up and sell. We're thinking of heading up to Blenheim. My mother and sister are already looking at real estate up there.
I really think a lot of people are thinking the same. A year and a half of this crap is just a bit too much.

posted on Jan, 1 2012 @ 02:11 PM

I feel humbled by the replies. Thanks for taking the time to copy all of that and give me a new source of reference.

posted on Jan, 1 2012 @ 02:42 PM

You are welcome. A short definition unit with so many parameters is hard to find, but I will think about it.

posted on Jan, 1 2012 @ 02:54 PM

Thanks for those, j&c; they complement some data I already have but have not got round to looking at yet. The first quick look I had using the Z channel proved inconclusive, but I will delve into those later in the year (once my end-of-year report is done), as it does interest me very much.

posted on Jan, 1 2012 @ 04:29 PM
JUST TO MAKE THE POSTS READABLE WITH PARAGRAPHING:

Originally posted by diamondsmith

APPRAISING EARTHQUAKE HYPOCENTER LOCATION ERRORS — ANALYZING DIFFERENT SOURCES OF HYPOCENTRAL ERROR

Introduction.

Having seen the limitations of the approximation given by equation (18), we can approach the practical problem of what to do with this result. The problem is that with real data the vectors e_model, n, and e are fundamentally unknown.

Our only information about them is provided by a projection of the residual vector, (I − AA⁺)r̂. Unfortunately, the above analysis shows that r̂ is a sum of all three error terms and is evaluated at the wrong place in space-time.

The basic idea here is to use auxiliary information to appraise the relative importance of each term, which provides a more complete and realistic appraisal of location uncertainties. I now consider methods for estimating each of these terms.

The order in which they are considered is significant. Measurement error term. Of the three terms in equation (18), e is unique in that it is the only truly statistical quantity. Thus, although e may be unknowable, we can assume we know something about its statistics [see Freedman (1966) or Leaver (1984) for examples of particularly careful studies].

For impulsive arrivals measured from analog records, the eᵢ are approximately normally distributed with zero mean (Buland, 1976).

For emergent arrivals, the eᵢ tend to have a distribution skewed toward positive numbers due to a tendency to pick weak arrivals too late (Anderson, 1982).

Finally, with newer computer picking methods, the distribution of the eᵢ may be somewhat complicated, but at least the gross details of the distribution are known (Allen, 1982; Leaver, 1984).

In any case, if we know something about the probability density function of e, it is a standard exercise in regression analysis (see Flinn, 1965; Hoel, 1971; Jordan and Sverdrup, 1981; and their associated references) to appraise the errors induced by e by examination of confidence ellipsoids.

For the hypocenter location problem, this amounts to outlining an ellipsoidal region in space-time characterized by (h − ĥ)ᵀ C⁻¹ (h − ĥ).

posted on Jan, 1 2012 @ 04:33 PM

Originally posted by diamondsmith

APPRAISING EARTHQUAKE HYPOCENTER LOCATION ERRORS

A conventional analysis would assume n ≈ 0, yielding classical results involving error ellipsoids (Flinn, 1965). One of the main points of this paper, however, is to consider the importance of n.

To do so, we need a method for calculating it. Application of equation (5.68) of Lee and Stewart (1981) to equation (11) yields δtᵢ = ½ δhᵀ Hᵢ δh + ... (16), where Hᵢ is the Hessian matrix.

Analytic forms for Hᵢ for a constant velocity medium and a two-layer model are given in a recent paper by Thurber (1985). I use Thurber's results below to approximate δtᵢ to second order as nᵢ ≈ ½ δhᵀ Hᵢ δh = (n₂)ᵢ. (17) Equation (15) then becomes δh ≈ A⁺(e_model + n₂ + e). (18)

There are two levels of approximation in equation (18): (1) the convergence criterion e, and (2) the second-order approximation of n. (The latter is emphasized with the 2 subscript on n.) Equation (18) is the focal point of this paper.
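The second-order character of n₂ can be seen numerically: for a constant velocity medium, the error of the linearized travel time shrinks by a factor of about four when the hypocenter perturbation is halved, which is exactly the quadratic behavior ½ δhᵀHδh expresses. A minimal sketch with made-up station and hypocenter coordinates (a 5.6 km/sec velocity is assumed for illustration):

```python
# Sketch: the nonlinear term n2 = 1/2 dh^T H dh is the leading error of the
# linearized travel time. Halving the hypocenter perturbation should roughly
# quarter that error (quadratic scaling). Constant-velocity medium; the
# station/hypocenter geometry is made up.
import math

V = 5.6  # km/s, assumed constant medium velocity

def tt(sta, h):
    return math.dist(sta, h) / V

def tt_linearized(sta, h, dh):
    # First-order expansion: T(h + dh) ~ T(h) + g . dh, g = (h - sta)/(rho*V)
    rho = math.dist(sta, h)
    g = [(hi - si) / (rho * V) for hi, si in zip(h, sta)]
    return tt(sta, h) + sum(gi * di for gi, di in zip(g, dh))

sta, h = (30.0, 0.0, 0.0), (0.0, 0.0, 10.0)   # km; station and hypocenter
dh = (2.0, 1.0, -1.0)                          # km perturbation
err1 = abs(tt(sta, [a + b for a, b in zip(h, dh)]) - tt_linearized(sta, h, dh))
half = [0.5 * d for d in dh]
err2 = abs(tt(sta, [a + b for a, b in zip(h, half)]) - tt_linearized(sta, h, half))
print(err1 / err2)   # close to 4: the leftover error is second order in dh
```

The ratio is near 4 rather than exactly 4 because third-order terms also contribute; that is precisely the "second level of approximation" the passage describes.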

The first step is clearly to investigate the limits of this approximation. This is done in the following section using computer simulations. Computer simulations.

In this study, I chose to consider the location precision of earthquakes in the vicinity of the rupture zone of the 1984 Morgan Hill, California, earthquake (Cockerham and Eaton, 1984).

A crude approximation to the velocity structure in this area was used to generate travel times for a series of synthetic events.

This velocity model consisted of two constant velocity quarter-spaces joined along a vertical plane striking N31.5°W and passing through the point 37°16′N latitude by 121°40′W longitude (Figure 1).

Synthetic arrival times for every station in the U.S. Geological Survey Central California Network (CALNET) and all University of California Berkeley stations within 100 km of this point (121 stations) were calculated from this model for the set of events shown in Figure 1 using two different quarter-space models.

Velocities for these two models are given in Table 1. The measurement error vector e in equation (1) was simulated by using a random number generator to produce random samples from a normal distribution with zero mean and a variance of 0.05 sec. The same e was then added to each synthetic event arrival time vector.

The advantage of this is that it allows comparison of errors induced by e as a function of position. On the other hand, it gives a privileged position to a set of random numbers. However, repetition of these results with different random vectors indicates the results are not very dependent on the exact choice of e, and the one presented here is representative.
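The error simulation described above is only a few lines of code. A minimal sketch; the station count of 121 comes from the passage, the seed is arbitrary, and reading "variance of 0.05 sec" as 0.05 s² is my interpretation:

```python
# Sketch of the measurement-error simulation described above: one zero-mean
# normal sample per station, with the SAME error vector added to every
# synthetic event's arrival times, so that position-dependent effects can be
# compared event to event. Seed is arbitrary; variance read as 0.05 s^2.
import random

random.seed(1986)                 # any fixed seed; reproducibility only
N_STATIONS = 121                  # CALNET + Berkeley stations within 100 km
SIGMA = 0.05 ** 0.5               # std dev from the quoted variance of 0.05

e = [random.gauss(0.0, SIGMA) for _ in range(N_STATIONS)]

def perturb(arrival_times):
    # Same e for every event: errors differ by station, not by event.
    return [t + ei for t, ei in zip(arrival_times, e)]

event_a = perturb([10.0] * N_STATIONS)   # placeholder arrival-time vectors
event_b = perturb([12.5] * N_STATIONS)
# both events receive identical station-by-station offsets
print(all(abs((a - 10.0) - (b - 12.5)) < 1e-12
          for a, b in zip(event_a, event_b)))
```

Reusing one e across events is the design choice the passage defends: it lets location errors be compared as a function of position without the comparison being blurred by a fresh noise draw per event.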

All events were located using a simple, damped least-squares procedure similar to that described by Herrmann (1979), using travel times calculated from a constant velocity medium with a velocity of 5.6 km/sec. Figures 2 and 3 show the resulting location estimates. All location estimates show an eastward bias caused by approximating the quarter-space model with a constant velocity medium. The scale of this bias is ~10 km for model A events and ~2 km for model B events.
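A single iteration of such a damped least-squares procedure can be sketched with a toy two-parameter problem. The step δh = (AᵀA + λ²I)⁻¹Aᵀr is the generic damped (Levenberg-style) form, not necessarily Herrmann's exact formulation; A, r, and λ below are made up:

```python
# Sketch of one damped least-squares step, dh = (A^T A + lam^2 I)^-1 A^T r,
# the kind of iteration used in the location procedure described above.
# Two unknowns only, so the 2x2 normal equations solve in closed form.

def damped_step(A, r, lam):
    # Normal matrix N = A^T A + lam^2 I and right-hand side g = A^T r.
    n11 = sum(row[0] * row[0] for row in A) + lam * lam
    n22 = sum(row[1] * row[1] for row in A) + lam * lam
    n12 = sum(row[0] * row[1] for row in A)
    g1 = sum(row[0] * ri for row, ri in zip(A, r))
    g2 = sum(row[1] * ri for row, ri in zip(A, r))
    det = n11 * n22 - n12 * n12
    return ((n22 * g1 - n12 * g2) / det, (n11 * g2 - n12 * g1) / det)

# Made-up sensitivity matrix (3 arrivals, 2 parameters) and residual vector.
A = [[1.0, 0.2], [0.8, -0.5], [0.3, 1.1]]
r = [0.12, -0.05, 0.20]
dh = damped_step(A, r, lam=0.1)

# The step reduces the residual norm of the linearized problem.
new_r = [ri - (row[0] * dh[0] + row[1] * dh[1]) for row, ri in zip(A, r)]
print(sum(x * x for x in new_r) < sum(x * x for x in r))  # True
```

The damping term λ²I keeps the step stable when A is nearly singular (e.g., poor depth resolution), at the cost of slightly smaller steps; iterating until the residual stops improving gives the converged location, at which point, as the discussion above notes, only the final A matters.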

Figure 4 can be used to examine the validity of the second-order approximation in the context of these two different error scales. This is summarized here by examining only the
source

Estimated locations of synthetic events from model A using a 5.6 km/sec constant velocity medium. For comparison, the map view (A) and cross-section (B) frames are identical to those shown in Figure 1.

Estimated locations are at the centers of crossing lines which are the projections of the major axes of the conventional 95 per cent confidence ellipsoids.

Critical values for these ellipsoids are based on an F statistic as originally advocated by Flinn (1966). Here v₀ is the constant velocity medium velocity; it is used to convert origin time errors into an equivalent length scale so all components of a are at least measured in the same units.
source
