Originally posted by StealthyKat
I heard that the quake today in Ohio was only about 2 miles from a fracking site....surprise surprise
Here's an article from CNN www.cnn.com...
On Friday -- one day before the latest, stronger quake -- Ohio Department of Natural Resources Director James Zehringer announced that work would be halted on a fluid-injection well in Youngstown.
If they weren't showing up on all three stations, I would say it was something else.
Mount Rainier, at 4392 m the highest peak in the Cascade Range, forms a dramatic backdrop to the Puget Sound region. Large Holocene mudflows from collapse of this massive, heavily glaciated andesitic volcano have reached as far as the Puget Sound lowlands. The present summit was constructed within a large crater breached to the northeast, formed by collapse of the volcano during a major explosive eruption about 5600 years ago that produced the widespread Osceola Mudflow. Rainier has produced eruptions throughout the Holocene, including about a dozen during the past 2600 years; the largest of these occurred about 2200 years ago. The present-day summit cone is capped by two overlapping craters. Extensive hydrothermal alteration of the upper portion of the volcano has contributed to its structural weakness; an active thermal system has caused periodic melting on flank glaciers and produced an elaborate system of steam caves in the summit icecap. Reported 19th-century eruptions have not left identifiable deposits, but a phreatic eruption may have taken place as recently as 1894.
Originally posted by megabogie
Puterman- I've learned so much from you this year I feel I should be paying you tuition! I've spent the last hour exploring links on your page and for the first time learned (from you) how to right click to open a new tab. I don't look forward to devastating quakes this year but I do look forward to attending your seminars! Thanks again and Happy New Year
Ref     | Lat      | Long      | UTC              | Mag   | Depth | Location                                       | Felt | TNT energy (t)
3635799 | -38.7739 | 175.75523 | 2012/1/1 1:13:21 | 3.031 | 131   | 5515A Western Bay Rd, Waihaha, Lake Taupo 3381 | No   | 0.53
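The "TNT energy" column can apparently be reproduced from the magnitude alone. A quick sketch, assuming the listing uses the standard Gutenberg-Richter radiated-energy relation log10(E in joules) = 1.5*M + 4.8 and 4.184e9 J per tonne of TNT (an assumption; the exact formula behind the table isn't stated here):

```python
def tnt_tonnes(mag):
    # Gutenberg-Richter radiated energy: log10(E / J) = 1.5 * M + 4.8,
    # converted at 4.184e9 J per tonne of TNT equivalent.
    return 10 ** (1.5 * mag + 4.8) / 4.184e9

print(round(tnt_tonnes(3.031), 2))  # 0.53, matching the table entry
```

For the M 3.031 event above this gives 0.53 t, which matches the listed value, so the assumed relation is at least consistent with the table.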
I don't look forward to devastating quakes this year but I do look forward to attending your seminars! Thanks again and Happy New Year

I would happily swap you some massage or reflexology (don't think you would like some of the more 'esoteric' stuff I do) for the wealth of knowledge you pass on here. You are a gem sir!
It is too bad the OP didn't wait till the turn of the new year to make this QW 2012 thread, for if he had he could have included the 7.0 earthquake which just now hit Japan in the first post of the thread.
It would have been fitting; it is ominous, after all, for such an already rattled, sinking island country to be hit by another strong one right at the turn of the fateful year.....2012.
APPRAISING EARTHQUAKE HYPOCENTER LOCATION ERRORS

FUNDAMENTAL EQUATIONS AND DEFINITIONS

The data always used for earthquake locations are a set of m arrival times measured from one or more phases recorded by a network of seismometers. I will denote this vector of measured arrival times as t̂ ∈ R^m. It is related to reality as follows

  t̂ = t + τ·1 + ε    (1)

where t = true travel-time vector; τ = origin time (1 ∈ R^m denotes a vector of all ones); and ε = measurement error. With actual data, t is unknowable. Instead, we must always rely on a mathematical model of t calculated from some estimate of the earth's seismic velocity structure. I use t_model(x̂) to symbolize this vector of travel times. t_model is a function of the estimated spatial coordinates of the hypocenter, x̂. A complete specification of the hypocenter, of course, also requires an estimate of τ, τ̂, as well. For convenience, this 4-vector will be symbolized as ĥ. With these definitions, I define the residual function

  r(ĥ) = t̂ − t_model(x̂) − τ̂·1    (2)

Using equation (1), equation (2) becomes

  r(ĥ) = e_model(x̂) + Δτ·1 + ε    (3)

where e_model(x) = t − t_model(x) and Δτ = τ − τ̂.
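The definitions in the quoted passage are easy to sketch numerically. A minimal sketch, assuming a straight-ray constant-velocity travel-time model for t_model (the paper allows any velocity model); the station layout, velocity, and hypocenter here are made-up illustration values:

```python
import numpy as np

def travel_times(x, stations, v=5.6):
    # t_model(x): straight-ray times from hypocenter x (km) to each
    # station, at constant velocity v (km/s); an assumed toy model.
    return np.linalg.norm(stations - x, axis=1) / v

def residual(h, t_hat, stations, v=5.6):
    # Equation (2): r(h) = t_hat - t_model(x) - tau*1, with h = (x, y, z, tau).
    x, tau = h[:3], h[3]
    return t_hat - travel_times(x, stations, v) - tau

# With noise-free data (eps = 0 in eq. 1), the residual vanishes at the truth.
stations = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [5, 5, 0]])
h_true = np.array([3.0, 4.0, 8.0, 1.5])                  # x, y, z (km), tau (s)
t_hat = travel_times(h_true[:3], stations) + h_true[3]   # eq. (1) with eps = 0
print(np.allclose(residual(h_true, t_hat, stations), 0.0))  # True
```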
All computerized earthquake location methods minimize ||r(ĥ)||, where || || denotes a vector norm. The details of how this proceeds vary greatly, but all methods are the same in one respect. All are iterative procedures that calculate a series of corrections, δh_k, to estimate ĥ as

  ĥ = h₀ + Σ_{k=1}^{n_iter} δh_k    (4)

where n_iter is the total number of corrections required to obtain a stable solution from an initial guess, h₀, of the hypocenter. By stable I mean specifically that the sequence δh_k is convergent such that

  ||δh_{n_iter}|| < e    (5)

where e is some appropriately small number. The δh_k are always calculated as

  δh_k = A†_{k−1} r(ĥ_{k−1})    (6)

with ĥ_{k−1} = h₀ + Σ_{l=1}^{k−1} δh_l. A†_{k−1} ∈ R^{4×m} is a generalized inverse. With the exception of the nonlinear method described by Thurber (1985), A†_{k−1} is always calculated directly from the matrix of partial derivatives A with components

  A_ij = ∂(t_model)_i / ∂h_j  (j = 1, 2, 3), and A_i4 = 1 (h₄ = τ).    (7)

There are essentially as many variations in how A† is calculated as there are location programs. Fortunately, we do not have to present a different error analysis for every possibility that exists. The reason is that every existing location method can be equated to a weighted least-squares problem. For our purposes, this means specifically that there exists a positive definite matrix W ∈ R^{m×m} for which the solution to the usual equations of condition
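The iteration of equations (4)-(7) can be sketched directly with a pseudoinverse. A toy implementation, again assuming a constant-velocity straight-ray model and an invented station geometry (the paper's A† can be any generalized inverse; NumPy's `pinv` is one choice):

```python
import numpy as np

def locate(t_hat, stations, h0, v=5.6, eps=1e-10, max_iter=100):
    # Equations (4)-(6): h = h0 + sum of corrections dh_k, each
    # dh_k = pinv(A) @ r(h_{k-1}), iterated until ||dh|| < e (eq. 5).
    # Straight-ray constant-velocity travel times are an assumed toy model.
    h = h0.astype(float).copy()
    for _ in range(max_iter):
        d = np.linalg.norm(stations - h[:3], axis=1)
        r = t_hat - d / v - h[3]                          # residual, eq. (2)
        A = np.empty((len(stations), 4))
        A[:, :3] = (h[:3] - stations) / (v * d[:, None])  # eq. (7), j = 1..3
        A[:, 3] = 1.0                                     # eq. (7), h4 = tau
        dh = np.linalg.pinv(A) @ r                        # generalized inverse, eq. (6)
        h += dh
        if np.linalg.norm(dh) < eps:                      # convergence test, eq. (5)
            break
    return h

stations = np.array([[0.0, 0, 0], [20, 0, 0], [0, 20, 0],
                     [20, 20, 0], [10, 0, 0], [0, 10, 0]])
h_true = np.array([8.0, 9.0, 6.0, 0.7])
t_hat = np.linalg.norm(stations - h_true[:3], axis=1) / 5.6 + h_true[3]
h_est = locate(t_hat, stations, h0=np.array([10.0, 10.0, 4.0, 0.0]))
print(np.allclose(h_est, h_true, atol=1e-6))  # True
```

With noise-free data and a well-constrained geometry the iteration recovers the true hypocenter essentially exactly, which is the "well constrained" case the paper's assumptions describe.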
the correspondence is obvious. With procedures using damped least squares (e.g., Herrmann, 1979) or the recently published application of Newton's method by Thurber (1985), the correspondence is not so obvious. However, equation (9) still holds because the purpose of both algorithms is to promote convergence of the sequence defined in equation (4). Both seek a solution that minimizes ||r|| = [r^T W² r]^{1/2}. As long as the location is well constrained, both methods will converge to the same solution as an equivalent least-squares procedure. Furthermore, even a procedure which seeks to minimize L1 (||r||₁ = Σ_{i=1}^m |r_i|) can be cast in this form (Anderson, 1982). We are also forced, on the other hand, to make three fundamental assumptions.

1. ĥ is well constrained. That is, we assume A† is not singular.
2. ĥ is the global minimum of ||r||, not a local minimum.
3. ĥ is not enormously different from the true hypocenter, h.

The consequences of nos. 2 and 3 are identical, but the way they can arise is different. What I mean by "enormously different" is stated specifically below.

ANALYSIS OF HYPOCENTRAL ERRORS

Second-order theory. ĥ inevitably contains errors that arise from a number of factors. To examine this, expand each component of the residual vector in a Taylor series as follows (Lee and Stewart, 1981, p. 124)

  r̂_i(ĥ + δh) = r̂_i(ĥ) + Σ_{j=1}^4 A_ij δh_j + n_i    (11)

where A_ij is as defined in equation (7), and n_i is the sum of all second and higher order terms of the expansion. The m equations of (11) can be summarized in matrix form as

  r̂(ĥ + δh) = r̂(ĥ) + A δh + n    (12)

where the correspondence of different terms is obvious.

source (Gary L. Pavlis): scholar.google.ro...://citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.127.1365%26rep%3Drep
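Equations (11)-(12) say the residual is linear in δh up to a remainder n. A quick numeric check, again with an assumed constant-velocity toy model, shows that remainder shrinking quadratically as δh shrinks; note that with the sign convention of equation (2) (r = t̂ − t_model − τ), the linear term enters the code as −A δh:

```python
import numpy as np

def tmodel(x, stations, v=5.6):
    return np.linalg.norm(stations - x, axis=1) / v

def resid(h, t_hat, stations, v=5.6):
    return t_hat - tmodel(h[:3], stations, v) - h[3]

def partials(h, stations, v=5.6):
    # Equation (7): A_ij = d(t_model)_i/dh_j for j = 1..3, A_i4 = 1.
    d = np.linalg.norm(stations - h[:3], axis=1)
    A = np.empty((len(stations), 4))
    A[:, :3] = (h[:3] - stations) / (v * d[:, None])
    A[:, 3] = 1.0
    return A

stations = np.array([[0.0, 0, 0], [20, 0, 0], [0, 20, 0], [20, 20, 0]])
h = np.array([8.0, 9.0, 6.0, 0.7])
t_hat = tmodel(h[:3], stations) + h[3]
A = partials(h, stations)

def remainder(dh):
    # n = r(h + dh) - r(h) + A dh  (linear term is -A dh for this residual sign)
    return resid(h + dh, t_hat, stations) - resid(h, t_hat, stations) + A @ dh

dh = np.array([0.4, -0.3, 0.5, 0.05])
ratio = np.linalg.norm(remainder(dh)) / np.linalg.norm(remainder(dh / 10))
print(ratio)  # close to 100: shrinking dh by 10 shrinks n by ~100 (second order)
```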
A conventional analysis would assume n ≈ 0, yielding classical results involving error ellipsoids (Flinn, 1965). One of the main points of this paper, however, is to consider the importance of n. To do so, we need a method for calculating it. Application of equation (5.68) of Lee and Stewart (1981) to equation (11) yields

  n_i = ½ δh^T H_i δh + ...    (16)

where H_i is the Hessian matrix. Analytic forms for H_i for a constant velocity medium and a two-layer model are given in a recent paper by Thurber (1985). I use Thurber's results below to approximate n_i to second order as

  n_i ≈ ½ δh^T H_i δh = (n₂)_i    (17)

Equation (15) then becomes

  δh ≈ A†(e_model + n₂ + ε)    (18)

There are two levels of approximation in equation (18): (1) the convergence criterion e, and (2) the second-order approximation of n. (The latter is emphasized with the 2 subscript on n.) Equation (18) is the focal point of this paper. The first step is clearly to investigate the limits of this approximation. This is done in the following section using computer simulations.

Computer simulations. In this study, I chose to consider the location precision of earthquakes in the vicinity of the rupture zone of the 1984 Morgan Hill, California, earthquake (Cockerham and Eaton, 1984). A crude approximation to the velocity structure in this area was used to generate travel times for a series of synthetic events. This velocity model consisted of two constant velocity quarter-spaces joined along a vertical plane striking north 31.5° west and passing through the point 37°16′ north latitude by 121°40′ west longitude (Figure 1). Synthetic arrival times for every station in the U.S. Geological Survey Central California Network (CALNET) and all University of California Berkeley stations within 100 km of this point (121 stations) were calculated from this model for the set of events shown in Figure 1 using two different quarter-space models.
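The second-order term of equation (17) can be checked against the exact Taylor remainder for a constant-velocity medium, where the travel-time Hessian has the closed form H = (I − u u^T)/(v d), with u the unit vector from station to source and d the distance (this closed form, and the geometry values below, are assumptions of the sketch; the origin-time component is linear so it contributes nothing here):

```python
import numpy as np

V = 5.6  # km/s, assumed constant medium velocity

def n2_terms(x, dx, stations, v=V):
    # (n2)_i = 0.5 * dx^T H_i dx, with H_i the Hessian of t_i = |x - s_i| / v.
    # For constant velocity, H = (I - u u^T) / (v * d) analytically.
    out = np.empty(len(stations))
    for i, s in enumerate(stations):
        d = np.linalg.norm(x - s)
        u = (x - s) / d
        H = (np.eye(3) - np.outer(u, u)) / (v * d)
        out[i] = 0.5 * dx @ H @ dx
    return out

stations = np.array([[0.0, 0, 0], [20, 0, 0], [0, 20, 0], [20, 20, 0]])
x = np.array([8.0, 9.0, 6.0])
dx = np.array([0.05, -0.04, 0.06])   # small spatial perturbation (km)

def tm(x):
    return np.linalg.norm(stations - x, axis=1) / V

grad_dot_dx = np.array([(x - s) / (V * np.linalg.norm(x - s)) @ dx
                        for s in stations])
exact = tm(x + dx) - tm(x) - grad_dot_dx   # exact remainder of the expansion
print(np.allclose(n2_terms(x, dx, stations), exact, atol=1e-5))  # True
```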
Velocities for these two models are given in Table 1. The measurement error vector ε in equation (1) was simulated by using a random number generator to produce random samples from a normal distribution with zero mean and a variance of 0.05 sec. The same ε was then added to each synthetic event arrival time vector. The advantage of this is that it allows comparison of errors induced by ε as a function of position. On the other hand, it gives a privileged position to a set of random numbers. However, repetition of these results with different random vectors indicates the results are not very dependent on the exact choice of ε, and the one presented here is representative. All events were located using a simple, damped least-squares procedure similar to that described by Herrmann (1979), using travel times calculated from a constant velocity medium with a velocity of 5.6 km/sec. Figures 2 and 3 show the resulting location estimates. All location estimates show an eastward bias caused by approximating the quarter-space model with a constant velocity medium. The scale of this bias is ~10 km for model A events and ~2 km for model B events. Figure 4 can be used to examine the validity of the second-order approximation in the context of these two different error scales. This is summarized here by examining only the
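That noise experiment can be mimicked in miniature. A sketch assuming a Levenberg-style damped least-squares update (in the spirit of the Herrmann (1979) procedure the paper cites, not a reproduction of it), a made-up eight-station layout, and Gaussian picking noise at the 0.05 s scale; the damping factor is an arbitrary assumed value:

```python
import numpy as np

V = 5.6  # km/s, assumed constant-velocity locating model

def locate_damped(t_hat, stations, h0, damp=0.01, eps=1e-8, max_iter=100):
    # Damped least squares: dh = (A^T A + damp^2 I)^-1 A^T r,
    # iterated until ||dh|| < eps (eqs. 4-6 with a damped inverse).
    h = h0.astype(float).copy()
    for _ in range(max_iter):
        d = np.linalg.norm(stations - h[:3], axis=1)
        r = t_hat - d / V - h[3]
        A = np.empty((len(stations), 4))
        A[:, :3] = (h[:3] - stations) / (V * d[:, None])
        A[:, 3] = 1.0
        dh = np.linalg.solve(A.T @ A + damp**2 * np.eye(4), A.T @ r)
        h += dh
        if np.linalg.norm(dh) < eps:
            break
    return h

rng = np.random.default_rng(1)
stations = np.array([[0.0, 0, 0], [30, 0, 0], [0, 30, 0], [30, 30, 0],
                     [15, 0, 0], [0, 15, 0], [30, 15, 0], [15, 30, 0]])
h_true = np.array([14.0, 16.0, 8.0, 0.5])
t_clean = np.linalg.norm(stations - h_true[:3], axis=1) / V + h_true[3]
t_hat = t_clean + rng.normal(0.0, 0.05, len(stations))  # eps in eq. (1)

h_est = locate_damped(t_hat, stations, h0=np.array([15.0, 15.0, 5.0, 0.0]))
print(np.linalg.norm(h_est[:3] - h_true[:3]))  # noise-induced mislocation (km)
```

Re-running with different seeds, as the paper does with different random vectors, gives a feel for how much the mislocation depends on the particular noise draw.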
[Figure caption] Estimated locations of synthetic events from model A using a 5.6 km/sec constant velocity medium. For comparison, the map view (A) and cross-section (B) frames are identical to those shown in Figure 1. Estimated locations are at the centers of crossing lines, which are the projections of the major axes of the conventional 95 per cent confidence ellipsoids. Critical values for these ellipsoids are based on an F statistic as originally advocated by Flinn (1966). ... where v₀ is the constant velocity medium velocity. v₀ is used to convert origin time errors into an equivalent length scale so all components of a are at least measured in the same units.