Quality Assurance Practice at IRIS DMC - Finding problems
- Analyst review
- Tracking problems
- Reporting problems
- Strategies for leveraging MUSTANG metrics
- Scripting your own clients
Finding Problems: Automated Text Reports (internal use)
Analyst review - Metrics: dead_channel_exp < 0.3 and pct_below_nlnm > 20
- Review plot using MUSTANG noise-pdf service
Analyst review - Review plot using MUSTANG noise-mode-timeseries service
Analyst review
Analyst review - Example: Channel Orientation Analysis
- The orientation_check metric finds observed channel orientations for shallow M>= 7 events by
- Calculating the Hilbert transform of the Z component (H{Z}) for Rayleigh waves
- Cross-correlating H{Z} with trial radial components calculated at varying azimuths until the correlation coefficient is maximized
- The observed channel orientation is the difference between the calculated event back azimuth and the observed radial azimuth
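The correlation-maximization step above can be sketched in a few lines. This is a toy illustration, not the MUSTANG implementation: the azimuth convention, grid step, and synthetic setup are all assumptions, and a real analysis would first window the Rayleigh-wave arrival.

```python
import numpy as np

def phase_shift_90(x):
    """H{x}: the 90-degree phase-shifted trace, computed as the imaginary
    part of the analytic signal via numpy's FFT (no scipy required)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.imag(np.fft.ifft(np.fft.fft(x) * h))

def estimate_orientation(z, n_tr, e_tr, step_deg=1.0):
    """Return the trial azimuth (degrees, hypothetical clockwise-from-north
    convention) whose radial component best correlates with H{Z}."""
    hz = phase_shift_90(z)
    best_az, best_cc = 0.0, -np.inf
    for az in np.arange(0.0, 360.0, step_deg):
        a = np.radians(az)
        # Rotate the horizontals to a trial radial component at azimuth az
        radial = n_tr * np.cos(a) + e_tr * np.sin(a)
        cc = np.corrcoef(hz, radial)[0, 1]
        if cc > best_cc:
            best_az, best_cc = az, cc
    return best_az, best_cc
```

With synthetic horizontals built by rotating a Rayleigh-like radial/transverse pair to a known azimuth, the grid search recovers that azimuth with a correlation coefficient near 1.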
Analyst review - orientation_check measurements from 2013 and 2014 for CU.ANWB having correlation coefficients > 0.4
You can browse small networks by channel:
Tracking
Reporting
Use Metrics Thresholds - Find problems by retrieving channels that meet a meaningful metrics condition
- Missing data have percent_availability=0
- Channels with masses against the stops have very large absolute_value(sample_mean)
- Channels reporting clock_locked=0 have lost their GPS time reference
Finding Metrics Thresholds - Retrieve measurements for your network
- wget 'http://service.iris.edu/mustang/measurements/1/query?metric=sample_mean&net=IU&cha=BH[12ENZ]&format=csv&timewindow=2015-07-07T00:00:00,2015-07-14T00:00:00'
Finding Metrics Thresholds - Find the range of metrics values for problem channels
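One way to find the range of values is to summarize the downloaded measurements per channel. A minimal sketch, assuming the CSV exposes `value` and `target` columns; check the header of your actual download before relying on these names:

```python
# Summarize min/max metric values per channel (target) from a
# measurements-service CSV download.  Column names "value" and
# "target" are assumptions about the CSV layout.
import csv
import io
from collections import defaultdict

def value_ranges(csv_text):
    """Return {target: [min_value, max_value]} from CSV text."""
    ranges = defaultdict(lambda: [float("inf"), float("-inf")])
    for row in csv.DictReader(io.StringIO(csv_text)):
        v = float(row["value"])
        lo, hi = ranges[row["target"]]
        ranges[row["target"]] = [min(lo, v), max(hi, v)]
    return dict(ranges)
```

Comparing these ranges between known-good and known-bad channels is one practical way to pick a threshold.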
Metrics reported in counts may have different thresholds for different instrumentation - sample_max
- sample_mean
- sample_median
- sample_min
- sample_rms
PSD-based metrics have their instrument responses removed – one threshold works for similar (e.g. broadband) instrumentation - dead_channel_exp
- pct_below_nlnm
- pct_above_nhnm
- transfer_function
PDF – a “heat-density” plot of many Power Spectral Density curves:
Combine metrics - Dead channels have
- almost linear PSDs (dead_channel_exp < 0.3)
- and lie mainly below the NLNM (pct_below_nlnm > 20)
- dead_channel_exp < 0.3 && pct_below_nlnm > 20
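Since the measurements service returns one metric per request, combining metrics is naturally done client-side. A minimal sketch, assuming you have already built one `{target: value}` dict per metric from two downloads (the example targets are hypothetical):

```python
# Intersect two metrics client-side: channels that satisfy BOTH the
# dead_channel_exp and pct_below_nlnm thresholds from the slide above.
def dead_channels(dead_exp, below_nlnm):
    """Targets with dead_channel_exp < 0.3 and pct_below_nlnm > 20.

    dead_exp, below_nlnm: {target: value} dicts, one per metric.
    Targets missing from below_nlnm are treated as 0 (not flagged).
    """
    return sorted(
        t for t, v in dead_exp.items()
        if v < 0.3 and below_nlnm.get(t, 0) > 20
    )
```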
Metrics Arithmetic - Metrics averages
- Metrics differences
- pct_below_nlnm daily difference
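The daily-difference idea can be sketched as a one-liner over a sorted time series: a large day-over-day jump in pct_below_nlnm can flag a channel going dead even before the absolute threshold trips. Dates and values below are illustrative only.

```python
# Day-over-day differences of a metric time series.
def daily_differences(series):
    """series: list of (date, value) tuples sorted by date.
    Returns [(date, value_today - value_yesterday), ...]."""
    return [(d2, v2 - v1)
            for (d1, v1), (d2, v2) in zip(series, series[1:])]
```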
Some favorite metrics tests for GSN data - noData: percent_availability = 0
- gapsGt12: num_gaps > 12
- avgGaps: average gaps/measurement >= 2
- noTime: clock_locked = 0
- dead: dead_channel_exp < 0.3 && pct_below_nlnm > 20
- pegged: abs(sample_rms) > 10e+7
- lowAmp: dead_channel_exp >= 0.3 && pct_below_nlnm > 20
- noise: dead_channel_exp < 0.3 && pct_above_nhnm > 20
- hiAmp: sample_rms > 50000
- avgSpikes: average spikes/measurement >= 100
- dcOffsets: dc_offset > 50
- badRESP: pct_above_nhnm > 90 || pct_below_nlnm > 90
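The tests above translate directly into reusable predicates. A sketch, not any official MUSTANG client; the two averaging tests (avgGaps, avgSpikes) are omitted because they need multi-day aggregation rather than a single row of metrics:

```python
# Threshold tests from the slide, encoded as predicates over a dict of
# metric values for one channel-day.
TESTS = {
    "noData":   lambda m: m["percent_availability"] == 0,
    "gapsGt12": lambda m: m["num_gaps"] > 12,
    "noTime":   lambda m: m["clock_locked"] == 0,
    "dead":     lambda m: m["dead_channel_exp"] < 0.3 and m["pct_below_nlnm"] > 20,
    "pegged":   lambda m: abs(m["sample_rms"]) > 10e+7,
    "lowAmp":   lambda m: m["dead_channel_exp"] >= 0.3 and m["pct_below_nlnm"] > 20,
    "noise":    lambda m: m["dead_channel_exp"] < 0.3 and m["pct_above_nhnm"] > 20,
    "hiAmp":    lambda m: m["sample_rms"] > 50000,
    "dcOffsets": lambda m: m["dc_offset"] > 50,
    "badRESP":  lambda m: m["pct_above_nhnm"] > 90 or m["pct_below_nlnm"] > 90,
}

def flag_channel(metrics):
    """Return the names of every test this channel's metrics trip."""
    flags = []
    for name, test in TESTS.items():
        try:
            if test(metrics):
                flags.append(name)
        except KeyError:
            pass  # that metric was not retrieved for this channel
    return flags
```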
Scripting your own client can take advantage of these strategies:
http://ds.iris.edu/ds/nodes/dmc/quality-assurance/ - Currently has links to
- We hope to add tutorials on MUSTANG’s R-based metrics packages and other ways to script your own clients in the future
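A minimal client needs little more than a URL builder around the measurements service. This sketch reuses the parameter names from the wget example earlier in this document; everything else about it is an illustrative choice:

```python
# Build a MUSTANG measurements-service query URL.
from urllib.parse import urlencode

BASE = "http://service.iris.edu/mustang/measurements/1/query"

def measurements_url(metric, net, cha, start, end, fmt="csv"):
    """Assemble a query URL; parameter names follow the wget example."""
    params = {
        "metric": metric,
        "net": net,
        "cha": cha,
        "format": fmt,
        "timewindow": f"{start},{end}",
    }
    return BASE + "?" + urlencode(params)
```

The resulting URL can then be fetched with, for example, `urllib.request.urlopen(url).read()` and fed to whatever threshold logic you have scripted.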