
Quake Watch 2012

page: 3

posted on Jan, 1 2012 @ 06:28 AM
link   
reply to post by aboutface
 

APPRAISING EARTHQUAKE HYPOCENTER LOCATION ERRORS: ANALYZING DIFFERENT SOURCES OF HYPOCENTRAL ERROR

Introduction. Having seen the limitations of the approximation given by equation (18), we can approach the practical problem of what to do with this result. The problem is that with real data the vectors e_model, n, and e are fundamentally unknown. Our only information about them is provided by a projection of the residual vector, (I − AA⁺)r. Unfortunately, the above analysis shows that r is a sum of all three error terms and is evaluated at the wrong place in space-time. The basic idea here is to use auxiliary information to appraise the relative importance of each term. This provides a valuable error appraisal tool to provide a more complete and realistic appraisal of location uncertainties. I now consider methods for estimating each of these terms. The order in which they are considered is significant.

Measurement error term. Of the three terms in equation (18), e is unique in that it is the only truly statistical quantity. Thus, although e may be unknowable, we can assume we know something about its statistics [see Freedman (1966) or Leaver (1984) for examples of particularly careful studies]. For impulsive arrivals measured from analog records, the e_i are approximately normally distributed with zero mean (Buland, 1976). For emergent arrivals, the e_i tend to have a distribution skewed toward positive numbers due to a tendency to pick weak arrivals too late (Anderson, 1982). Finally, with newer computer picking methods, the distribution of e_i may be somewhat complicated, but at least the gross details of the distribution are known (Allen, 1982; Leaver, 1984). In any case, if we know something about the probability density function of e, it is a standard exercise in regression analysis (see Flinn, 1965; Hoel, 1971; Jordan and Sverdrup, 1981; and their associated references) that the errors induced by e can be appraised by examination of confidence ellipsoids. For the hypocenter location problem, this amounts to outlining an ellipsoidal region in space-time characterized by (h − ĥ)ᵀ C⁻¹ (h − ĥ).
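
If anyone wants to see what that last quadratic form actually does, here is a quick numerical sketch (my own toy numbers, nothing from the paper): with Gaussian picking errors of standard deviation sigma, the location covariance is roughly C = sigma² (AᵀA)⁻¹, and (h − ĥ)ᵀ C⁻¹ (h − ĥ) ≤ k² outlines the confidence ellipsoid.

```python
# Toy sketch (my own, not from the paper): 95% confidence ellipsoid for a
# hypocenter estimate, assuming Gaussian picking errors with known sigma.
import numpy as np

rng = np.random.default_rng(0)

n_arrivals, n_params = 12, 4                 # 12 picks; 4 unknowns (x, y, z, origin time)
A = rng.normal(size=(n_arrivals, n_params))  # stand-in for the travel-time partials
sigma = 0.05                                 # picking error std dev, seconds

# Covariance of the least-squares location estimate
C = sigma**2 * np.linalg.inv(A.T @ A)

# Axes of the confidence ellipsoid come from the eigen-decomposition of C
eigvals, eigvecs = np.linalg.eigh(C)

# Critical value: chi-square, 4 degrees of freedom, 95 per cent
# (the paper scales with an F statistic when sigma is estimated from the data)
k2 = 9.488

semi_axes = np.sqrt(k2 * eigvals)            # semi-axis lengths of the ellipsoid
print("semi-axes:", semi_axes)
print("longest axis direction:", eigvecs[:, np.argmax(eigvals)])
```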



posted on Jan, 1 2012 @ 06:28 AM
link   
reply to post by aboutface
 

An estimate for p that proved adequate, although still not entirely satisfying, was a crude guess one might be able to make with the real data that these synthetic data models represent. That is, all events are offset perpendicular to the "fault" by epicentral distances of ~9 and ~1.5 km for models A and B, respectively. If we presume this is representative for one degree of freedom of the solution, then a reasonable guess for the scale p is 9√3 km and 1.5√3 km for models A and B, respectively, since δx has three degrees of freedom. Using these values results in the bounds shown in Figure 7 labeled "b." These are still somewhat overly pessimistic, being approximately an order of magnitude larger than the actual error. This is largely caused by the fact that these estimates of p are not perfect either. Ideal values of p are ~10 and ~1.5 km for models A and B, respectively. Hence, if we knew p a priori, we would be able to reduce the bounds shown in Figure 7 by a factor of 3. In that case, these bounds would do a remarkably good job of measuring the potential impact of nonlinearity in the solution. It is important to recognize that this is a major advantage of the bounding criterion based on equation (28). That is, provided p can be estimated accurately by some auxiliary means, the bounds provided by (28) can be expected to be reasonable. This is again in contrast to simpler schemes I first tried using standard bounding methods based on matrix norms (i.e., ||δh_n|| ≤ ||A⁺|| ||n̄|| for any pair of consistent matrix and vector norms, provided ||n̄|| is an upper bound on the true norm of n). These always gave terrible results even when p was chosen exactly.
source(scholar.google.ro...://citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.127.1365%26rep%3Drep1%26 type%3Dpdf&sa=X&scisig=AAGBfm2DSe3FQf2u0Sejs4A5Bar3a5Vr_A&oi=scholarr
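
To spell out the "9√3" arithmetic in that passage (just my own reading of why the √3 shows up): if each of the three spatial components of the location error is roughly the same size d as the observed fault-perpendicular offset, then

\[ \|\delta x\| = \sqrt{d_x^2 + d_y^2 + d_z^2} \approx \sqrt{3}\,d, \qquad p_A \approx 9\sqrt{3} \approx 15.6\ \text{km}, \qquad p_B \approx 1.5\sqrt{3} \approx 2.6\ \text{km}. \]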


DISCUSSION AND CONCLUSIONS

The first major result of this paper is equation (18). It is significant that this was not the result I originally expected when I began this study. My original intent was to consider the impact of the fact that the matrix of partial derivatives [A defined by equations (7) and (10)] was calculated from a model of the earth's velocity structure in the same way the travel time is. It is well known that errors in calculating coefficients of a matrix cause errors in solutions of linear equations. Such errors are commonly appraised by matrix perturbation analysis methods (for least-squares problems, see, e.g., Lawson and Hanson, 1974, pp. 41-52) to bound the possible influence of computational errors in real computers with a finite precision. The analysis leading to equation (18) shows that even though A is not calculated perfectly, it makes little difference as long as a stable solution can be found. The basic reason is that in a linear problem, A is fixed; here A is variable. We try to minimize ||r|| by a sequence of steps given in equation (6). Each step is a linear one designed to minimize the norm ||r − A δh|| based on the current values of r and A, which vary from step to step. The net result is that when the solution converges, the only relevant measure of A is the current one. Hence, the fact that it may be wrong due to inadequacies in calculating ray takeoff angles is irrelevant.

Equation (18) shows that hypocentral errors are a composite of three terms: (1) measurement error; (2) what I have called modeling errors; and (3) a nonlinear term. Of these, I claim only the first should really be viewed as a statistical quantity. This term can be appraised with confidence ellipsoids, but the scale of the process should be determined from independent studies of measurement errors such as that by Freedman (1966) or Leaver (1984). Scaling confidence ellipsoids by rms following Flinn (1965) is a dangerous step that can produce misleading results as shown by the results given in Figure 2. This occurred because the synthetic examples studied here model a feature of most real data. That is, the errors are dominated by modeling errors.

In this paper, I introduced an alternative means for appraising the influence of modeling errors using a component-wise bounding criterion based on a theorem proven in the Appendix. These bounds are based on arrival-dependent ray arc lengths and a common scale factor, Δu, which is a bound on the average slowness along a given ray segment. The result is a parallelepiped-shaped bounding region for each earthquake location whose relative dimensions are fixed and whose absolute scale depends linearly on Δu.
source(scholar.google.ro...://citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.127.1365%26rep%3Drep1%26type% 3Dpdf&sa=X&scisig=AAGBfm2DSe3FQf2u0Sejs4A5Bar3a5Vr_A&oi=scholarr
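
If I am reading the bounding scheme right, the model-error bound for each location component works out to something like |δh_j| ≤ Δu · Σ_i |A⁺_ji| · s_i, with s_i the ray arc length for arrival i. A rough sketch of how you would compute that (the function name and formula are my reconstruction from the quoted text, not the paper's actual code):

```python
# Rough reconstruction (my own reading of the quoted text, not the paper's code):
# component-wise bound on the location bias caused by velocity-model (slowness) errors.
import numpy as np

def model_error_bounds(A_pinv, arc_lengths, delta_u):
    """
    A_pinv      : generalized inverse A+ (4 x n_arrivals)
    arc_lengths : ray path length source -> station for each arrival (km)
    delta_u     : bound on the average slowness error along a ray (s/km)
    Returns a bound on |delta_h_j| for each of the 4 hypocenter components.
    """
    return delta_u * np.abs(A_pinv) @ np.asarray(arc_lengths)

# toy numbers, just to show the shapes
rng = np.random.default_rng(1)
A = rng.normal(size=(12, 4))                     # stand-in travel-time partials
A_pinv = np.linalg.pinv(A)
arc_lengths = rng.uniform(5.0, 60.0, size=12)    # km, roughly local-network distances
bounds = model_error_bounds(A_pinv, arc_lengths, delta_u=0.005)
print("component bounds (x, y, z, t):", bounds)
```

The handy part is the point made in the quote: if Δu turns out to be wrong, the bounds just rescale linearly, with no need to relocate anything.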




posted on Jan, 1 2012 @ 06:29 AM
link   
reply to post by aboutface
 

The third source of earthquake location errors, nonlinearity, is almost always ignored in conventional location procedures. The analysis presented here indicates nonlinearity can be viewed as an additional component of the systematic location bias superimposed on top of that caused by modeling errors. Evidence from the synthetic examples presented here suggests that such errors can be approximated to adequate precision for any reasonable location estimate using a second-order approximation. Using a second-order approximation and a component-wise bounding procedure similar to that used for modeling errors, the expected size of the nonlinear error can be bounded. These bounds are constructed from the spectral norm of the Hessian for each arrival (Thurber, 1985) and a common scale factor, p, which is a guess of an upper bound on the total hypocentral error for that event. Again, these bounds form a parallelepiped-shaped region of space whose relative dimensions are fixed. In this case, however, the scale of the bounding region is determined by the square of the scale factor p. Consequently, the bounds estimated this way are reasonable only when p is chosen reasonably.

How to best estimate the scale factors Δu and p used in the bounding procedures described here is an open question. Δu is probably best estimated by nonparametric statistical methods (Efron and Gong, 1983) or from a random media viewpoint as advocated by Leaver (1985). The best approach for p in many cases is probably to make a reasonable guess of its size based on other information. For example, if a group of earthquakes appear to be systematically offset a distance d from a mapped fault trace, p could be estimated from d as described above. Lacking such information, p may be estimated from some measure based on rms residuals or the extremal bounds on systematic biases described here. In doing so, one must recognize, however, that the former may underestimate p, and the latter will always overestimate p. In any case, one should recognize the main advantage of a set of bounds based on a common scale factor. If one decides the original scale factors used to calculate the bounds were in error, recalculating them based on the revised scale is trivial.

I would claim that one of the most significant facts about the error appraisal techniques described here is their practicality. The error estimates are easy to calculate, and the results are easy to understand at a glance. Furthermore, implementing them requires only minor modifications to most location programs. Rescaling confidence ellipses to reflect only measurement errors is trivial and is already an option in at least one common location program I am aware of (HYPOINVERSE, Klein, 1978). Calculating unscaled model and nonlinear bounds requires only a minor modification to any program that explicitly calculates A⁺. One then only has to calculate the vector of component bounds defined in equations (24) and (27). The model error bound estimate requires the calculation of ray arc lengths. For local networks, this calculation can probably be approximated adequately as the total source-receiver spatial separation. For teleseismic locations, it would probably require a table. The nonlinear error bound calculation requires calculation of the spectral norm of the Hessian matrix for each arrival. Analytic forms are presently known only for constant velocity media and a layer over a half-space model (Thurber, 1985).

These second-order derivatives could presumably be calculated numerically for any arbitrary model, but I expect that is probably unnecessary in most cases. Thurber points out that the Hessian gives a measure of local wave front curvature. My experience from this work is that this causes the nonlinear errors to be dominated by the one or two nearest stations to the source where the wave front curvature is largest. At nearby stations, the wave fronts will not differ that dramatically from a constant velocity medium. Therefore, I suspect the use of Hessians appropriate to constant velocity media would normally give reasonable results.

Finally, it is important to stress the two limitations of the techniques discussed here. First, the error analysis presented here is based on the fundamental assumption that the location estimate is well constrained. That is, I assumed no auxiliary constraint like fixing the depth is necessary to obtain a stable solution. In the case of poorly constrained locations, one probably should resort to the techniques of Tarantola and Valette (1982) or Rowlett and Forsyth (1984). Second, this error analysis is what might be called a single-event theory.
source(scholar.google.ro...://citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.127.1365%26rep%3Drep1%26type%3Dpdf&sa =X&scisig=AAGBfm2DSe3FQf2u0Sejs4A5Bar3a5Vr_A&oi=scholarr
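
And the companion nonlinear bound, as I read it, swaps the arc lengths for each arrival's Hessian spectral norm and scales with p² instead of linearly: |δh_j| ≤ ½ p² Σ_i |A⁺_ji| ||H_i||₂. Here is a hedged sketch using the constant-velocity Hessian as the stand-in the author suggests (again my own reconstruction, not code from the paper):

```python
# Hedged sketch (my reconstruction, not the paper's code): bound on the
# nonlinear location error using per-arrival Hessian spectral norms.
import numpy as np

def constant_velocity_hessian_norm(source, station, v):
    """Spectral norm of the travel-time Hessian in a constant-velocity medium.
    For t = |station - source| / v the Hessian is (I - u u^T) / (v d),
    so its largest eigenvalue is simply 1 / (v d)."""
    d = np.linalg.norm(np.asarray(station) - np.asarray(source))
    return 1.0 / (v * d)

def nonlinear_error_bounds(A_pinv, hessian_norms, p):
    """|n_i| <= 0.5 * ||H_i||_2 * p**2 when the total location error is below p,
    so the induced bias per component is bounded by |A+| applied to those terms."""
    n_bound = 0.5 * p**2 * np.asarray(hessian_norms)
    return np.abs(A_pinv) @ n_bound

# toy geometry: source at 8 km depth, a handful of nearby surface stations
source = np.array([0.0, 0.0, 8.0])
stations = np.array([[5, 3, 0], [-12, 7, 0], [20, -15, 0], [2, -30, 0]], float)
v = 5.6  # km/s, the constant-velocity model used in the quoted simulations

h_norms = [constant_velocity_hessian_norm(source, s, v) for s in stations]
A = np.random.default_rng(2).normal(size=(4, 4))          # stand-in partials
bounds = nonlinear_error_bounds(np.linalg.pinv(A), h_norms, p=10.0)  # p ~ 10 km (model A)
print(bounds)
```

Because ||H_i||₂ = 1/(v d) here, the nearest stations dominate the sum, which matches the author's remark about wave front curvature.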



posted on Jan, 1 2012 @ 06:39 AM
link   
Happy New Year to you all. I hope and pray for a safe and prosperous year for all.

And may 2012 bring us all lots of good times here at ATS.



posted on Jan, 1 2012 @ 06:48 AM
link   
reply to post by PuterMan
 


Don't sell yourself short...of course you deserve the accolades. After all, it is YOUR thread that brings us all together. I did not mean to omit the other knowledgeable contributors (Westcoast, TA, Muzzy, Robin/Eric and many others); I just wanted to show my gratitude to you for providing this forum. Now, on with the show!



posted on Jan, 1 2012 @ 06:59 AM
link   
reply to post by diamondsmith
 


Why thank you diamondsmith. Very interesting stuff, but I guess aboutface has probably done just that!!


What with answers from myself, TA and you and I think muzzy, I guess we should have about covered the subject!


Personally I like "Um, we don't really know", but I suppose that is because I said it.
It just seems nice and simple.



posted on Jan, 1 2012 @ 07:02 AM
link   
reply to post by PuterMan
 
This was for the thread. The other definition I U2U'd to her/him 2 days ago.




posted on Jan, 1 2012 @ 07:04 AM
link   
reply to post by diamondsmith
 


Only pulling your leg my friend!

I did appreciate it. Not being sarcastic.




posted on Jan, 1 2012 @ 07:13 AM
link   

Originally posted by PuterMan
reply to post by diamondsmith
 


Only pulling your leg my friend!

I did appreciate it. Not being sarcastic.

I know that, and thank you PMan, but that was just a survival ATS kit for 2012.



posted on Jan, 1 2012 @ 11:27 AM
link   
Two good sized quakes here in Christchurch overnight. Kids are clinging to my leg again. The first one was a 5.1, then at 5.45 am there was a 5.5 that felt like a 6. Notice how they are all out to sea now.
www.canterburyquakelive.co.nz...
www.geonet.org.nz...



posted on Jan, 1 2012 @ 12:32 PM
link   
reply to post by PuterMan
 





QW, for me anyway, is about the technical aspects of earthquakes and trying to get an understanding of how and why they occur. Even perhaps working our way round to some sort of predictive capability.


Here's some technical mumbo jumbo for you that is tied to predictive capability:


In this paper we show, in terms of Fisher information and approximate entropy, that the two strong impulsive kHz electromagnetic (EM) bursts recorded prior to the Athens EQ (7 September 1999, magnitude 5.9) present compatibility to the radar interferometry data and the seismic data analysis, which indicate that two fault segments were activated during the Athens EQ. The calculated Fisher information and approximate entropy content ratios closely follow the radar interferometry result that the main fault segment was responsible for 80% of the total energy released, while the secondary fault segment for the remaining 20%. This experimental finding, which appears for the first time in the literature, further enhances the hypothesis for the seismogenic origin of the analyzed kHz EM bursts.
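
For the curious, "approximate entropy" here is just a regularity statistic computed directly on the time series. A bare-bones version of the standard ApEn recipe (my own quick sketch, not anything from the paper) looks like this:

```python
# Quick, bare-bones approximate entropy (ApEn) -- standard textbook definition,
# my own sketch, not code from the paper.
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()                 # tolerance, usually 0.1-0.25 of the std dev

    def phi(m):
        n = len(x) - m + 1
        # embed the series in m-dimensional delay vectors
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev (max) distance between every pair of vectors
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        c = (d <= r).sum(axis=1) / n       # fraction of near neighbours (self included)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

# a regular signal should score lower than an irregular one
t = np.linspace(0, 10, 500)
print(approximate_entropy(np.sin(2 * np.pi * t)))                        # low: very regular
print(approximate_entropy(np.random.default_rng(3).normal(size=500)))    # higher: irregular
```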


And a companion paper:


The variation of fractal dimension and entropy during a damage evolution process, especially approaching critical failure, has been recently investigated. A sudden drop of fractal dimension has been proposed as a quantitative indicator of damage localization or a likely precursor of an impending catastrophic failure. In this contribution, electromagnetic emissions recorded prior to a significant earthquake are analysed to investigate whether they also present such sudden fractal dimension and entropy drops as the main catastrophic event is approaching. The pre-earthquake electromagnetic time series analysis results reveal a good agreement to the theoretically expected ones, indicating that the critical fracture is approaching.
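
Likewise, the fractal dimension they track can be estimated straight from the waveform. Here's a minimal Higuchi-method sketch, a common estimator for this kind of EM time-series work (I'm assuming something like it; the authors may use a different estimator):

```python
# Minimal Higuchi fractal-dimension estimate -- my own sketch of a standard
# method; the paper may well use a different estimator.
import numpy as np

def higuchi_fd(x, k_max=10):
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)              # the m-th decimated sub-series
            if len(idx) < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum()
            # normalisation so curve lengths at different k are comparable
            length *= (N - 1) / ((len(idx) - 1) * k)
            Lk.append(length / k)
        L.append(np.mean(Lk))
    k = np.arange(1, k_max + 1)
    # slope of log L(k) vs log(1/k) is the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / k), np.log(L), 1)
    return slope

rng = np.random.default_rng(4)
print(higuchi_fd(np.cumsum(rng.normal(size=2000))))   # Brownian-like path, expect ~1.5
print(higuchi_fd(rng.normal(size=2000)))              # white noise, expect closer to 2
```

A drop in this number as the main shock approaches is exactly the precursor behaviour the abstract describes.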


Wow, that's quite a bit to mull over but it goes along with a lot of what I'd read while rooting around in the rabbit hole opened by that hoax...



posted on Jan, 1 2012 @ 01:46 PM
link   
reply to post by aarys
 

Dammit


Any chance you can describe the two quakes?
The thing is Geonet do a fine job putting out the Felt Reports, but you don't know what the folks actually put in their report, unlike EMSC who have the actual text they sent in.
EMSC example of testimonies for today's Izu Islands, Japan quake
We get some good stuff from MoorfNZ, Aoraki and SpaceJockey1 on here, but the more the merrier, makes the Thread more interesting.

edit on 1-1-2012 by muzzy because: fix html in link to EMSC



posted on Jan, 1 2012 @ 01:54 PM
link   
I hope EMSC downgrade that Izu Islands quake like USGS and GFZ (6.7) have; I haven't had time to set up the 2012 Global Mag 7+ map and graph yet.



posted on Jan, 1 2012 @ 01:57 PM
link   
reply to post by muzzy
 


Well, they were both very noisy. With the one at 5.45 am you could hear the roar before it arrived. Damage report from my home: the cracks in the house are a wee bit bigger, and just a few things fell over.
Also, my wife and I have decided to pack up and leave CH CH. We just need to fix this place up and sell. We're thinking of heading up to Blenheim. My mother and sister are already looking at real estate up there.
I really think a lot of people are thinking the same. A year and a half of this crap is just a bit too much.



posted on Jan, 1 2012 @ 02:11 PM
link   
reply to post by diamondsmith
 


I feel humbled by the replies. Thanks for taking the time to copy all of that and give me a new source of reference.



posted on Jan, 1 2012 @ 02:42 PM
link   
reply to post by aboutface
 
You are welcome. A short definition for a unit with so many parameters is hard to find, but I will think on it.



posted on Jan, 1 2012 @ 02:54 PM
link   
reply to post by jadedANDcynical
 


Thanks for those j&c, they complement some data I already have but have not got round to looking at yet. The first quick look I had using the Z channel proved inconclusive, but I will delve into those later in the year (once my end of year report is done) as it does interest me very much.



posted on Jan, 1 2012 @ 04:29 PM
link   
JUST TO MAKE THE POSTS READABLE WITH PARAGRAPHING:


Originally posted by diamondsmith
reply to post by aboutface
 

APPRAISING EARTHQUAKE HYPOCENTER LOCATION ERRORS: ANALYZING DIFFERENT SOURCES OF HYPOCENTRAL ERROR. Introduction.

Having seen the limitations of the approximation given by equation (18), we can approach the practical problem of what to do with this result. The problem is that with real data the vectors e_model, n, and e are fundamentally unknown.

Our only information about them is provided by a projection of the residual vector, (I − AA⁺)r. Unfortunately, the above analysis shows that r is a sum of all three error terms and is evaluated at the wrong place in space-time.

The basic idea here is to use auxiliary information to appraise the relative importance of each term. This provides a valuable error appraisal tool to provide a more complete and realistic appraisal of location uncertainties. I now consider methods for estimating each of these terms.


The order in which they are considered is significant. Measurement error term. Of the three terms in equation (18), e is unique in that it is the only truly statistical quantity. Thus, although e may be unknowable, we can assume we know something about its statistics [see Freedman (1966) or Leaver (1984) for examples of particularly careful studies].


For impulsive arrivals measured from analog records, the e_i are approximately normally distributed with zero mean (Buland, 1976).

For emergent arrivals, the e_i tend to have a distribution skewed toward positive numbers due to a tendency to pick weak arrivals too late (Anderson, 1982).

Finally, with newer computer picking methods, the distribution of e_i may be somewhat complicated, but at least the gross details of the distribution are known (Allen, 1982; Leaver, 1984).

In any case, if we know something about the probability density function of e, it is a standard exercise in regression analysis (see Flinn, 1965; Hoel, 1971; Jordan and Sverdrup, 1981; and their associated references) that the errors induced by e can be appraised by examination of confidence ellipsoids.

For the hypocenter location problem, this amounts to outlining an ellipsoidal region in space-time characterized by (h − ĥ)ᵀ C⁻¹ (h − ĥ).



posted on Jan, 1 2012 @ 04:33 PM
link   

Originally posted by diamondsmith
reply to post by aboutface
 



APPRAISING EARTHQUAKE HYPOCENTER LOCATION ERRORS

A conventional analysis would assume n ≈ 0, yielding classical results involving error ellipsoids (Flinn, 1965). One of the main points of this paper, however, is to consider the importance of n.


To do so, we need a method for calculating it. Application of equation (5.68) of Lee and Stewart (1981) to equation (11) yields n_i = ½ δhᵀ H_i δh + … (16), where H_i is the Hessian matrix.


Analytic forms for H_i for a constant velocity medium and a two-layer model are given in a recent paper by Thurber (1985). I use Thurber's results below to approximate n_i to second order as n_i ≈ ½ δhᵀ H_i δh = (n₂)_i (17). Equation (15) then becomes δh ≈ A⁺(e_model + n₂ + e). (18)

There are two levels of approximation in equation (18): (1) the convergence criterion e, and (2) the second-order approximation of n. (The latter is emphasized with the 2 subscript on n.) Equation (18) is the focal point of this paper.


The first step is clearly to investigate the limits of this approximation. This is done in the following section using computer simulations. Computer simulations.


In this study, I chose to consider the location precision of earthquakes in the vicinity of the rupture zone of the 1984 Morgan Hill, California, earthquake (Cockerham and Eaton, 1984).


A crude approximation to the velocity structure in this area was used to generate travel times for a series of synthetic events.


This velocity model consisted of two constant velocity quarter-spaces joined along a vertical plane striking north 31.5° west and passing through the point 37°16′ north latitude by 121°40′ west longitude (Figure 1).


Synthetic arrival times for every station in the U.S. Geological Survey Central California Network (CALNET) and all University of California Berkeley stations within 100 km of this point (121 stations) were calculated from this model for the set of events shown in Figure 1 using two different quarter-space models.

Velocities for these two models are given in Table 1. The measurement error vector e in equation (1) was simulated by using a random number generator to produce random samples from a normal distribution with zero mean and a variance of 0.05 sec. The same e was then added to each synthetic event arrival time vector.


The advantage of this is that it allows comparison of errors induced by e as a function of position. On the other hand, it gives a privileged position to a set of random numbers. However, repetition of these results with different random vectors indicates the results are not very dependent on the exact choice of e, and the one presented here is representative.


All events were located using a simple, damped least-squares procedure similar to that described by Herrmann (1979), using travel times calculated from a constant velocity medium with a velocity of 5.6 km/sec. Figures 2 and 3 show the resulting location estimates. All location estimates show an eastward bias caused by approximating the quarter-space model with a constant velocity medium. The scale of this bias is ~10 km for model A events and ~2 km for model B events.


Figure 4 can be used to examine the validity of the second-order approximation in the context of these two different error scales. This is summarized here by examining only the
source


(scholar.google.ro...://citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.127.1365%26rep%3Drep


Estimated locations of synthetic events from model A using a 5.6 km/sec constant velocity medium. For comparison, the map view (A) and cross-section (B) frames are identical to those shown in Figure 1.


Estimated locations are at the centers of crossing lines which are the projections of the major axes of the conventional 95 per cent confidence ellipsoids.


Critical values for these ellipsoids are based on an F statistic as originally advocated by Flinn (1966). Here v₀ is the constant velocity medium velocity; v₀ is used to convert origin time errors into an equivalent length scale so all components of a are at least measured in the same units.
source


(scholar.google.ro...://citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.127.1365%26rep%3Drep
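
Since the quoted simulations locate everything with "a simple, damped least-squares procedure" in a 5.6 km/sec constant-velocity medium, here is a stripped-down sketch of what one iteration of that kind of location step looks like (my own toy Geiger/Herrmann-style version, not the code used in the paper):

```python
# Toy damped least-squares (Geiger-style) location step in a constant-velocity
# medium -- my own illustration, not the procedure used in the quoted paper.
import numpy as np

V = 5.6  # km/s, as in the quoted simulations

def travel_times(hypo, stations):
    """hypo = (x, y, z, t0); stations = (n, 3). Returns predicted arrival times."""
    d = np.linalg.norm(stations - hypo[:3], axis=1)
    return hypo[3] + d / V

def location_step(hypo, stations, observed, damping=0.01):
    d = np.linalg.norm(stations - hypo[:3], axis=1)
    # partials of arrival time: dt/dx_source = -(sta - src) / (V d), dt/dt0 = 1
    A = np.hstack([-(stations - hypo[:3]) / (V * d[:, None]), np.ones((len(d), 1))])
    r = observed - travel_times(hypo, stations)           # travel-time residuals
    # damped least squares: (A^T A + damping * I) dh = A^T r
    dh = np.linalg.solve(A.T @ A + damping * np.eye(4), A.T @ r)
    return hypo + dh, np.linalg.norm(r)

# synthetic test: true event at (3, -4, 8) km, t0 = 0, with 0.05 s picking noise
rng = np.random.default_rng(5)
stations = rng.uniform(-50, 50, size=(15, 3)); stations[:, 2] = 0.0
truth = np.array([3.0, -4.0, 8.0, 0.0])
obs = travel_times(truth, stations) + rng.normal(0, 0.05, size=15)

hypo = np.array([0.0, 0.0, 5.0, 0.0])                     # crude starting guess
for _ in range(6):
    hypo, rms = location_step(hypo, stations, obs)
print("estimate:", np.round(hypo, 2), " true:", truth)
```

Feed it travel times generated from a different velocity model (say, the quarter-space model above) and you get exactly the sort of systematic location bias the quoted Figures 2 and 3 describe.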



