Andreas Steinberg 2 years ago
committed by GitHub
As researchers, we are obligated to retain open access to all.
To ensure a smooth transition, we will keep a version of the code repository at GitHub until 2020-01-01.
WIP Documentation:
## Installation
### Installation from source
Clone the repository:
```
git clone
```
and then, in the cloned directory:
```
sudo python3 setup.py install
```
Basic dependencies that need to be installed:
```
sudo pip install pyproj scipy matplotlib numpy basemap affine
```
basemap is only necessary for plotting and can be omitted.
Please make sure that current versions of both obspy and pyrocko are installed.
Note: This tool is written for both Python 2 and Python 3. For Python 2, choose the appropriate branch; the master branch is for Python 3 only.
## Quick Manual
This tool is intended to
Before processing, this tool can download waveform data and cluster the stations automatically into virtual arrays.
Purely synthetic tests are also possible, either with synthetics generated for the stations of a real case or
with input from the pyrocko colosseo scenario generator.
There are three configuration files which specify the user input:
First, for general options regarding data and event selection, choose parameters in global.conf, which is in the seismoBat main folder.
These options are used for steps 0-2. After step 2 you will have a folder named after the specific event in the events subfolder. This
event folder will contain all work and data specific to this event.
Secondly, event-dependent options can be changed in the config file (eventname.config) of the event in the event folder, which will be created after step 2.
Please make sure to investigate this configuration file closely before further processing.
And lastly, synthetic sources can be specified in the syn config file (eventname.syn) in the event folder, also created after step 2. A user-defined number of
RectangularSource or DoubleCouple sources can be used to create synthetic data with free user input of the source parameters.
See the pyrocko documentation for details about the sources. This configuration file also contains the paths to the Green's function stores.
Processing steps:
step 0)
python list: - lists all events in the event folder (config and orig file must exist; optional, just to see which events are available to process)
step 1)
python search: - searches for earthquakes to process in global catalogs
               - search parameters are defined in global.conf
               - possible parameters: - date from/to
                                      - magnitude
                                      - catalog
                                      - number of results
step 2)
python create eventid: - creates the event folder in events
                       - use the eventid from the search to start creating the event directory structure
                         (copies example.config from the skeleton folder)
For step 3) three options exist:
a) Download real data with pyrocko (faster, fewer stations):
python pyrocko_download eventname
b) Use synthetics, but with the station distribution from a real data case:
For this you will need a Green's function store that is pre-calculated with the fomosto tool from pyrocko.
Several already pre-calculated stores for download can be found at
This possibility also assumes that you downloaded data with a), as the real station distribution will be used for the synthetics.
Please make sure to set the pyrocko_download option in the event config file to 1 if you downloaded data with this command.
The noise of the real data will also be used to perturb the synthetics if you select that option in the event config file.
c) Use synthetics from a scenario generator:
You can also use the output of the pyrocko scenario generator colosseo.
After you have followed the steps to create a purely synthetic dataset at
you have to give the scenario.yml of the scenario as input in the event configuration file and set the variable colosseo input to
1. Please make sure that you unset other types of input. Also give the Green's function store used in the synthetic input file
(eventname.syn). Disregard all other parameters in the synthetic input file for this case, as the generated event from the scenario
will be used. You will need to merge all mseed files in the scenario's waveform directory into one file called scenario.mseed, located
in the same folder as scenario.yml. This can be done with jackseis or simply by using cat.
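The merge can also be scripted. A minimal sketch (the function name and the byte-level concatenation are illustrative, not part of the tool): MiniSEED records are self-delimiting, so concatenating the files does the same thing as `cat`:

```python
import glob
import os


def merge_mseed(waveform_dir, out_path):
    """Concatenate all .mseed files in waveform_dir into one file.

    MiniSEED records are self-delimiting, so byte-level concatenation
    (what `cat *.mseed > scenario.mseed` does) yields a readable file.
    """
    paths = sorted(glob.glob(os.path.join(waveform_dir, "*.mseed")))
    with open(out_path, "wb") as out:
        for path in paths:
            if os.path.abspath(path) == os.path.abspath(out_path):
                continue  # never read the output file itself
            with open(path, "rb") as src:
                out.write(src.read())
```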
The next steps are based on the input you have chosen before. Be sure not to mix different types of input. Remove or move the folders eventname/cluster and
eventname/work if you want to start over with a different input or array setup.
Again, be careful to check the eventname.config file and adjust it to your liking.
Note that an option for several filters is built in. With this option the processing will be done separately for the different filter setups
and corresponding outputs are generated. This is useful for dividing into high- and low-frequency content.
Also note that several depths can be selected to iterate over. Currently, for one run only planar equi-depth grids are considered for the semblance
calculation. If several depths are chosen, the processing will be repeated for each depth.
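The filter/depth iteration described above can be sketched roughly as follows. The filter dictionaries, depth values, and output file naming are hypothetical; only the nesting (one full pass and one output per filter and per depth) reflects the text:

```python
def run_processing(filters, depths):
    """Hypothetical sketch: one independent pass per (filter, depth) pair."""
    outputs = []
    for i, _flt in enumerate(filters):
        for depth in depths:
            # in the real tool: band-pass with this filter setup, then
            # compute semblance on the planar grid at this depth
            outputs.append("semblance_f%d_z%dm.asc" % (i, int(depth)))
    return outputs
```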
step 4) Cluster the stations into virtual arrays, based on the input in eventname.config. This is handled as an optimization problem and returns a set of virtual arrays.
python cluster eventname: - clustering of stations to automatically build virtual arrays
                          - configuration parameters in the event config file (eventname.config)
                          - possible parameters: - clustermethod (distance zoning / kmeans)
distance zoning:
beginminstationdistance = 1
maxcluster = 2 (number of clusters you would like to have at the end of the clustering)
minstationaroundinitialcluster = 5 (minimum number of stations around an initial cluster center to form one initial cluster)
initialstationdistance = 10 (stations must be within 10 degrees of the initial cluster center)
cutoff = 30 (if k-means does not converge, run only 30 iterations and take the last result)
runs = 5 (number of repetitions of the k-means clustering to get the best result)
centroidmindistance = 20 (minimum distance between initial centroids, in degrees)
comparedelta = 2
stationdistance = 10 (maximum distance from a station to its cluster center)
minclusterstation = 10 (minimum number of stations per cluster)
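For intuition, a toy k-means in the spirit of the `runs` and `cutoff` parameters above might look like this. This is a planar, pure-Python sketch; the tool's actual clustering works on geographic coordinates and enforces the additional distance constraints listed above:

```python
import math
import random


def kmeans(stations, k, runs=5, cutoff=30):
    """Toy k-means over (lat, lon) tuples, treated as planar coordinates.

    `runs` restarts the clustering and keeps the lowest-cost result;
    `cutoff` caps the iterations if a run does not converge.
    """
    best_clusters, best_cost = None, float("inf")
    for _ in range(runs):
        centroids = random.sample(stations, k)
        for _ in range(cutoff):
            clusters = [[] for _ in range(k)]
            for s in stations:
                nearest = min(range(k), key=lambda j: math.dist(s, centroids[j]))
                clusters[nearest].append(s)
            new_centroids = []
            for j, members in enumerate(clusters):
                if members:
                    new_centroids.append(
                        tuple(sum(v) / len(members) for v in zip(*members)))
                else:
                    new_centroids.append(centroids[j])  # keep an empty cluster's seed
            if new_centroids == centroids:
                break  # converged
            centroids = new_centroids
        cost = sum(min(math.dist(s, c) for c in centroids) for s in stations)
        if cost < best_cost:
            best_clusters, best_cost = clusters, cost
    return best_clusters
```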
step 5) The last step is the actual processing.
First, the data of each array will be cross-correlated. Stations below the threshold (xcorrtreshold) given in eventname.config
are disregarded. xcorr = 1 enables a correction of time shifts at each array based on cross-correlations. If autoxcorrcorrectur = 1 is also selected, a manual picking of phase onsets is done for each array before the processing. This will show a reference waveform of one of the stations
in the virtual array in a figure and a snuffler window. Markers for STA/LTA and theoretical phase onsets will be given.
After closing both figures, the user can then input a manual traveltime shift in seconds relative to the xcorr window start (also marked in the snuffler). The traveltimes for this array will then be statically corrected using this manually selected value. Both methods allow for handling of velocity-model inadequacies.
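As a rough illustration of the threshold step (not the tool's actual implementation; the real code correlates windows around phase onsets), stations whose best correlation with an array reference trace falls below `xcorrtreshold` are dropped:

```python
import math


def normalized_xcorr_max(a, b):
    """Best normalized cross-correlation over all integer lags.

    Simplified: normalizes by full-trace energies rather than the
    per-lag overlap, which is good enough for a sketch.
    """
    n = len(a)
    ea = math.sqrt(sum(x * x for x in a))
    eb = math.sqrt(sum(x * x for x in b))
    if ea == 0.0 or eb == 0.0:
        return 0.0
    best = 0.0
    for lag in range(-n + 1, n):
        num = sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
        best = max(best, num / (ea * eb))
    return best


def select_stations(reference, traces, xcorrtreshold=0.6):
    """Keep only stations correlating well enough with the array reference."""
    return {name: tr for name, tr in traces.items()
            if normalized_xcorr_max(reference, tr) >= xcorrtreshold}
```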
Second, the traveltimes from each grid point to each station will be calculated. This can take some time, depending on your grid size. Therefore the traveltime grids are
saved automatically in the folder tttgrids, separately for each array. They will automatically be loaded when starting step 5 again. This is very useful for synthetic
tests as it saves a lot of time. If you change the setup of the arrays, however (with step 4), you will have to delete the saved tttgrid files for the affected arrays.
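The caching pattern described here can be sketched as follows. The file layout and the `pickle` format are assumptions for illustration; only the compute-once-then-reload behavior mirrors the text:

```python
import os
import pickle


def get_tttgrid(array_name, grid_points, stations, compute, cache_dir="tttgrids"):
    """Return the traveltime grid for one array, computing it only once.

    `compute(grid_points, stations)` is the expensive part; its result is
    pickled per array. Delete the cached file whenever the array's station
    set changes (i.e. after re-running step 4).
    """
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, array_name + ".pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)  # fast path: load the saved grid
    grid = compute(grid_points, stations)  # slow path: compute and save
    with open(path, "wb") as f:
        pickle.dump(grid, f)
    return grid
```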
Lastly, the semblance will be calculated, first for each array separately and then combined. The combination can be weighted by the average SNR of the arrays if that option
is chosen in eventname.config. The output is a semblance grid for each time step; these are stored in eventname/work/semblance, in a different folder per array, with the ending
.asc. The combined semblance for all arrays can be found directly in eventname/work/semblance, also with the ending .asc. If you used multiple filters, the files will have a numeral matching the
listing of the filters. A different output will also be generated for each chosen depth.
python process eventname: - array processing of the event
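The SNR-weighted combination of per-array semblance grids amounts to a weighted average per grid point. A minimal sketch, with flat grids as Python lists and a hypothetical function name:

```python
def combine_semblance(array_grids, snrs=None):
    """Average per-array semblance grids, optionally weighted by array SNR.

    array_grids: list of equal-length flat grids (one per virtual array).
    snrs: optional per-array weights; None means an unweighted average.
    """
    n = len(array_grids[0])
    if snrs is None:
        snrs = [1.0] * len(array_grids)  # plain average
    total = sum(snrs)
    return [sum(w * g[i] for w, g in zip(snrs, array_grids)) / total
            for i in range(n)]
```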
## Documentation
Documentation and usage examples will be available online soon.
WIP Documentation:
## Citation