
doc: work on method, spelled all

whitelist
miili 3 years ago
parent
commit
79f40fcbfb
  1. 4
      docs/source/config/analysers/index.rst
  2. 4
      docs/source/config/index.rst
  3. 8
      docs/source/config/optimisers/index.rst
  4. 6
      docs/source/config/problems/index.rst
  5. 10
      docs/source/config/targets/index.rst
  6. 20
      docs/source/examples/satellite_insar/index.rst
  7. 16
      docs/source/examples/waveform_regional/index.rst
  8. 14
      docs/source/examples/waveform_wphase/index.rst
  9. 91
      docs/source/method/index.rst
  10. 4
      docs/source/overview/index.rst
  11. 2
      src/apps/grond.py
  12. 6
      src/optimisers/highscore/optimiser.py

4
docs/source/config/analysers/index.rst

@ -27,7 +27,7 @@ the problem also configures the ``target_balancing``. The only extra parameter
needed is
``niterations``
an integer number that defines how many random waveform predictions the target-balancing weights are based. A meaningful number will depepend on the problem. Large model spaces (loose model parameter bounds) may require a larger number of predicitons for more stable average values. For more tightly constrained problems the signal strength at the targets for the not-so-different source models, which are drawn from a small model space, may not vary much. A smaller number may suffice. Of course, the computional effort increases linearly with the number of ``niterations``.
an integer number that defines how many random waveform predictions the target-balancing weights are based on. A meaningful number will depend on the problem. Large model spaces (loose model parameter bounds) may require a larger number of predictions for more stable average values. For more tightly constrained problems the signal strength at the targets for the not-so-different source models, which are drawn from a small model space, may not vary much; a smaller number may suffice. Of course, the computational effort increases linearly with ``niterations``.
.. code-block :: sh
@ -50,7 +50,7 @@ This analyser is not strongly model-independent. Only the reference time given i
defines how long the pre-event noise trace is in seconds.
``check_events``
is a boolean value. If ``True`` the iris global earthquake catalog is looked up to find if phase arrivals of other events may disturb the statistics of the noise.
is a boolean value. If ``True`` the IRIS global earthquake catalogue is searched to check whether phase arrivals of other events may disturb the statistics of the noise.
``phase_def``
is a string that defines the reference phase for the pre-event time window.

4
docs/source/config/index.rst

@ -26,11 +26,11 @@ Grond is configured by YAML text files. Read on :doc:`structure` to get started
``Problem``
is specific source modell (CMT, rectangular fault), which is optimised.
is the specific source model (CMT, rectangular fault) that is optimised.
``Analysers``
are responisble for data analysis before the optimisation.
are responsible for data analysis before the optimisation.
``Optimiser``
delivers the optimisation strategy.

8
docs/source/config/optimisers/index.rst

@ -70,7 +70,7 @@ At the beginning of the optimisation this sampler phase explores the solution sp
``DirectedSamplerPhase`` configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This sampler is used for the second phase and follows any of starting samplers above: Using existing models of the current `highscore` models the `directed` sampler draws a configurable number of new models. Like this convergence to low-misfit regions is enabled. There are quite some noteworthy configureable details to this sampler phase:
This sampler is used for the second phase and follows any of the starting samplers above: using the existing models of the current `highscore` list, the `directed` sampler draws a configurable number of new models. This enables convergence towards low-misfit regions. There are some noteworthy configurable details to this sampler phase:
.. glossary ::
@ -78,7 +78,7 @@ This sampler is used for the second phase and follows any of starting samplers a
Number of iterations for this phase.
``sampling_distributions``
New models are drawn from normal distribution. The standard deviations are derived from the `highscore` models parameter's standard deviation and scaled by ``scatter_scale`` (see below). Optionally, the covariance of model parameters is taken into account by configuring when ``multivariate_normal`` is enabled (default is ``normal`` distribution). The distribution is centered around
New models are drawn from a normal distribution. The standard deviations are derived from the standard deviations of the `highscore` models' parameters and scaled by ``scatter_scale`` (see below). Optionally, the covariance of the model parameters is taken into account when ``multivariate_normal`` is enabled (default is the ``normal`` distribution). The distribution is centred around
1. ``mean`` of the `highscore` model parameter distributions
2. a ``random`` model from the `highscore` list or
@ -88,11 +88,11 @@ This sampler is used for the second phase and follows any of starting samplers a
This scales search radius around the current `highscore` models. With a scatter scale of 2 the search for new models has a distribution with twice the standard deviation as estimated from the current `highscore` list. It is possible to define a beginning scatter scale and an ending scatter scale. This leads to a confining directed search. In other words, the sampling evolves from being more explorative to being more exploitive in the end.
``starting_point``
This method tunes to the center value of the sampler distribution: This option, will increase the likelihood to draw a `highscore` member model off-center to the mean value. The probability of drawing a model from the `highscore` list is derived from distances the `highscore` models have to other `highscore` models in the model parameter space. Excentricity is therefore compensated, because models with few neighbours at larger distances have an increased likelihood to be drawn.
This method tunes the centre value of the sampler distribution: this option increases the likelihood to draw a `highscore` member model off-centre from the mean value. The probability of drawing a model from the `highscore` list is derived from the distances of the `highscore` models to the other `highscore` models in the model parameter space. Eccentricity is thereby compensated, because models with few neighbours at larger distances have an increased likelihood to be drawn.
What's the use? Convergence is slowed down, yes, but to the benefit of low-misfit regions that are represented by only a few of the models drawn up to the current point.
Let's assume there are two separated groups of low-misfit models in our `highscore` list, with one group forming the 75% majority. In the directed sampler phase the choices of a mean center point for the distribution as well as a random starting point for the sampler distribution would favour new samples in the region of the `highscore` model majority. Models in the low-misfit region may be dying out in the `highscore` list due to favorism and related sparse sampling. `excentricity compensations` can help is these cases and keep models with not significantly higher misfits in the game and in sight.
Let's assume there are two separated groups of low-misfit models in our `highscore` list, with one group forming the 75% majority. In the directed sampler phase the choices of a mean centre point for the distribution as well as a random starting point for the sampler distribution would favour new samples in the region of the `highscore` model majority. Models in the minority low-misfit region may be dying out in the `highscore` list due to favouritism and the related sparse sampling. `Eccentricity compensation` can help in these cases and keep models with not significantly higher misfits in the game and in sight.
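The idea can be illustrated with a minimal numpy sketch. This is not Grond's actual implementation; the mapping from neighbour distances to draw probabilities is an assumption for demonstration only:

.. code-block :: python

    import numpy as np

    def draw_starting_point(models, rng=np.random):
        # models: (nmodels, nparams) array of current highscore models
        diff = models[:, np.newaxis, :] - models[np.newaxis, :, :]
        dists = np.sqrt((diff**2).sum(axis=2))
        # mean distance to the other models as an eccentricity measure
        mean_dist = dists.sum(axis=1) / (len(models) - 1)
        # isolated models get a proportionally higher draw probability
        p = mean_dist / mean_dist.sum()
        return models[rng.choice(len(models), p=p)]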
``InjectionSamplerPhase`` configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

6
docs/source/config/problems/index.rst

@ -139,11 +139,7 @@ particularly for the ``CMTProblem``.
``DoubleDCProblem`` configuration
---------------------------------
This problem has two double-couple point sources (derived from ``DoubleDCSource``). They are
dependent in location and relative timing to avoid overlapping in either space or time. The
mechanisms, the durations and the moments of the two sources are indepedent.
Using this model more complex eartquakes with two prominent rupture phases or with a
change of mechanism along the rupture plane can be studied. Or simply the potenial of a major source
This problem has two double-couple point sources (derived from ``DoubleDCSource``). They are dependent in location and relative timing to avoid overlapping in either space or time. The mechanisms, the durations and the moments of the two sources are independent. Using this model, more complex earthquakes with two prominent rupture phases or with a change of mechanism along the rupture plane can be studied. Or, simply, the potential for major source
complexity of an earthquake can be tested.
Configuration parameters that are common to all problems are listed above; the following are parameters

10
docs/source/config/targets/index.rst

@ -1,7 +1,7 @@
Targets
=======
**Targets** is what Grond tries to match: The misfit between an *observed* target and forward model is minimized. Targets are derived from observabled or synthesised data. A target can be, a filtered waveform, a spectrum, InSAR or GNSS displacement. Each target has properties which can be tuned. These can be frequency filters, selected observed components and essentially a Green's functions store which is responsible for the synthetics at a particular target.
**Targets** are what Grond tries to match: the misfit between an *observed* target and the forward model is minimised. Targets are derived from observables or synthesised data. A target can be a filtered waveform, a spectrum, or InSAR or GNSS displacement. Each target has properties which can be tuned. These can be frequency filters, selected observed components and, essentially, a Green's function store which is responsible for the synthetics at a particular target.
Different ``TargetGroups`` can be combined to solve the inverse problem, leading to a joint optimisation from different data sources and observables.
@ -30,12 +30,12 @@ Parameters valid for all types of ``MisfitTargets`` are:
Green's function store interpolation, choose from:
* ``multilinear``
Performs a linear interpolation between discrete Green's function for improved resolution of synthetic data. *This option is computationaly more expensive.*
Performs a linear interpolation between discrete Green's functions for improved resolution of synthetic data. *This option is computationally more expensive.*
* ``nearest_neighbor``
Uses the Green's function calculation for the forward model.
Choices other than 'nearest_neighbor' may require dense GF stores to avoid aliasing artifacts in the forward modelling.
Choices other than 'nearest_neighbor' may require dense GF stores to avoid aliasing artefacts in the forward modelling.
``store_id``
Name of the GF Store to use.
@ -51,7 +51,7 @@ Waveform targets
:math:`{\bf d}_{raw, synth}` and the restituted observed waveforms. Only these parts are used in the misfit calculation. The taper window duration is configured for each seismic station individually by phase arrivals.
The tapering is source-model dependent, since the tapering time is given with respect to the theoretic phase arrival time. This arrival time depends on the source location, which is often part of the optimisation itself and therefore may change continuously with each iteration. Therefore, restitution, tapering and filtering are done for each misfit calculation anew. Grond uses the pyrocko `CosTaper`_ taper. The ``fade_out`` time can be configured or it is calculated as the inverse of the minimum frequency of the chosen bandpass filter.
The tapering is source-model dependent, since the tapering time is given with respect to the theoretical phase arrival time. This arrival time depends on the source location, which is often part of the optimisation itself and therefore may change continuously with each iteration. Therefore, restitution, tapering and filtering are done anew for each misfit calculation. Grond uses the Pyrocko `CosTaper`_ taper. The ``fade_out`` time can be configured, or it is calculated as the inverse of the minimum frequency of the chosen bandpass filter.
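As a sketch of this rule, a taper window around a phase arrival can be built with Pyrocko's ``CosTaper``; the trace, arrival time and corner frequencies below are made-up example values:

.. code-block :: python

    import numpy as num
    from pyrocko import trace

    tr = trace.Trace(                     # placeholder trace, not real data
        station='STA', channel='Z', deltat=0.5,
        ydata=num.random.normal(size=2000))

    t_arrival = tr.tmin + 300.            # hypothetical phase arrival time
    fmin, fmax = 0.01, 0.1                # bandpass corners [Hz]
    fade = 1.0 / fmin                     # fade time = 1 / minimum frequency

    taper = trace.CosTaper(
        t_arrival - 20. - fade, t_arrival - 20.,
        t_arrival + 60., t_arrival + 60. + fade)

    tr.taper(taper)                       # taper around the arrival
    tr.bandpass(4, fmin, fmax)            # 4th-order Butterworth bandpass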
**Frequency filtering**
``fmin`` and ``fmax`` in Hz define the desired bandpass filter.
@ -159,7 +159,7 @@ Example :class:`~grond.targets.satellite.SatelliteTargetGroup` configuration sec
GNSS campaign targets
---------------------
True 3D surface displacement as measured by GNSS stations can be included in the inversion process by defining a :class:`~grond.targets.gnss_campaign.GNSSCampaignTargetGroup`. The station's displacement data has to be stored according to :mod:`~pyrocko.model.gnss_campaign`. Please refer to pyrocko's documentation of the GNSS model (`See example <https://pyrocko.org/docs/current/library/examples/gnss_data.html>`_)
True 3D surface displacement as measured by GNSS stations can be included in the inversion process by defining a :class:`~grond.targets.gnss_campaign.GNSSCampaignTargetGroup`. The station's displacement data has to be stored according to :mod:`~pyrocko.model.gnss_campaign`. Please refer to Pyrocko's documentation of the GNSS model (`See example <https://pyrocko.org/docs/current/library/examples/gnss_data.html>`_)
.. glossary ::

20
docs/source/examples/satellite_insar/index.rst

@ -1,7 +1,7 @@
Rectangular source plane from InSAR observations
================================================
This step-by-step recipe will guide you to through an earthquake source inversion for a finite rectangular fault plane from InSAR data using Grond. We will excercise the inversion for the 2009 L'Aquila earthquake - A shallow normal faulting Mw 6.3 earthquake - and use unwrapped surface displacement data derived from the Envisat mission.
This step-by-step recipe will guide you through an earthquake source inversion for a finite rectangular fault plane from InSAR data using Grond. We will exercise the inversion for the 2009 L'Aquila earthquake, a shallow normal-faulting Mw 6.3 event, and use unwrapped surface displacement data derived from the Envisat mission.
Setup
-----
@ -21,13 +21,13 @@ exercise project directory from Grond's git repos to a place of your choice.
The project folder
------------------
The project folder now contains a configuration file for Grond and some utility scripts to download precalculated Green's functions and InSAR data:
The project folder now contains a configuration file for Grond and some utility scripts to download pre-calculated Green's functions and InSAR data:
.. code-block :: sh
grond-playground-insar # project folder
├── bin # directory with scripts
│   ├── download_gf_stores.sh # download precalculated Green's functions
│   ├── download_gf_stores.sh # download pre-calculated Green's functions
│   ├── download_insar_data.sh # a simple event-based waveform downloader
└── config # directory for configuration files
   └── insar_rectangular.gronf # Grond configuration file for this exercise
@ -36,7 +36,7 @@ The project folder now contains a configuration file for Grond and some utility
Green's function download
-------------------------
To download the precalculated Green's functions needed in this exercise, run
To download the pre-calculated Green's functions needed in this exercise, run
.. code-block :: sh
@ -69,12 +69,12 @@ The downloaded data has to be prepared for the inversion using the ``kite`` tool
Once the software is installed we need to parametrise the two scenes:
1. The data subsampling quadtree. This efficiently reduces the resolution of the scene, yet conserves the important data information. A reduced number of samples will benefit the forward-modelling computing cost.
1. The data sub-sampling quadtree. This efficiently reduces the resolution of the scene, yet preserves the important information in the data. A reduced number of samples will benefit the forward-modelling computing cost.
2. Estimate the spatial data covariance. By looking at the spatial noise of the scene we can estimate the data covariance. ``kite`` enables us to calculate a covariance matrix for the quadtree, which will be used as a weight matrix in our Grond inversion.
Let's start by parametrizing the quadtree: find a good parameters for the subsampling quadtree by tuning four parameters:
Let's start by parametrising the quadtree: find good parameters for the sub-sampling quadtree by tuning four parameters:
1. ``epsilon``, the variance threshold in each quadtree's tile.
2. ``nan_fraction``, percentage of allowed NaN pixels per tile.
@ -100,7 +100,7 @@ Now we can parametrize the quadtree visually:
.. note ::
Delete unncessary tiles of the quadtree by right-click select, and delete with :kbd:`Del`.
Delete unnecessary tiles of the quadtree by right-click select, and delete with :kbd:`Del`.
Once you are done, click on the Tag :guilabel:`scene.covariance`. Now we will define a window for the data's noise. The window's data will be used to calculate the spatial covariance of the scene (for details see: reference).
@ -116,9 +116,9 @@ On the left hand side of the GUI you find parameters to tune the spatial covaria
**Figure 2**: Data covariance inspection with :command:`spool`.
Once we finished parametrisation of the quadtree and covariance, we have to calculate the full covariance and weight matrix from the complete scene resoltion:
Once we have finished the parametrisation of the quadtree and covariance, we have to calculate the full covariance and weight matrix from the complete scene resolution:
1. Calulate the full covariance: :menuselection:`Tools --> Calculate Full Matrix`
1. Calculate the full covariance: :menuselection:`Tools --> Calculate Full Matrix`
2. Save the parametrised scene: :menuselection:`File --> Save Scene`.
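The same parametrisation can also be scripted. A rough sketch using kite's Python API follows; the filenames and parameter values are made up, and the attribute names are assumptions based on kite's documentation:

.. code-block :: python

    from kite import Scene

    sc = Scene.load('asc_insar')          # hypothetical scene file

    # quadtree parametrisation (illustrative values only)
    sc.quadtree.epsilon = 0.024           # variance threshold per tile
    sc.quadtree.nan_allowed = 0.9         # allowed fraction of NaN pixels
    sc.quadtree.tile_size_min = 250.      # minimum tile size [m]
    sc.quadtree.tile_size_max = 12e3      # maximum tile size [m]

    # accessing the covariance matrix triggers the full calculation
    weights = sc.covariance.covariance_matrix

    sc.save('asc_insar_parametrised')     # save the parametrised scene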
@ -127,7 +127,7 @@ Grond configuration
The project folder already contains a configuration file for rectangular source optimisation with Grond, so let's have a look at it.
It's a `YAML`_ file: This file format has been choosen for the Grond configuration because it can represent arbitrarily nested data structures built from mappings, lists, and scalar values. It also provides an excellent balance between human and machine readability. When working with YAML files, it is good to know that the **indentation is part of the syntax** and that comments can be introduced with the ``#`` symbol. The type markers, like ``!grond.RectangularProblemConfig``, select the Grond object type of the following mapping and it's documentation can likely be found in the :doc:`/library/index`.
It's a `YAML`_ file: This file format has been chosen for the Grond configuration because it can represent arbitrarily nested data structures built from mappings, lists, and scalar values. It also provides an excellent balance between human and machine readability. When working with YAML files, it is good to know that the **indentation is part of the syntax** and that comments can be introduced with the ``#`` symbol. The type markers, like ``!grond.RectangularProblemConfig``, select the Grond object type of the following mapping, and its documentation can likely be found in the :doc:`/library/index`.
.. literalinclude :: ../../../../examples/grond-playground-insar/config/insar_rectangular.gronf

16
docs/source/examples/waveform_regional/index.rst

@ -6,7 +6,7 @@ This step-by-step guide explains how to obtain a probabilistic centroid moment t
Setup
-----
To repeat this exercise on your machine, you should first `install Pyrocko <https://pyrocko.org/docs/current/install/>`_ and Grond (see :doc:`/install/index`), if you have not already done so. Then, copy the exercise project directory from Grond's git repos to a place of your choice:
To repeat this exercise on your machine, you should first `install Pyrocko <https://pyrocko.org/docs/current/install/>`_ and Grond (see :doc:`/install/index`), if you have not already done so. Then, copy the exercise project directory from Grond's git repositories to a place of your choice:
.. code-block :: sh
@ -19,13 +19,13 @@ To repeat this exercise on your machine, you should first `install Pyrocko <http
The project folder
------------------
The project folder now contains a configuration file for Grond, some utility scripts to download precalculated Green's functions and to download seismic waveforms from public datacenters.
The project folder now contains a configuration file for Grond, some utility scripts to download pre-calculated Green's functions and to download seismic waveforms from public data centres.
.. code-block :: sh
grond-playground-regional # project folder
├── bin # directory with scripts
│   ├── download_gf_stores.sh # download precalculated Green's functions
│   ├── download_gf_stores.sh # download pre-calculated Green's functions
│   ├── grondown # a simple event-based waveform downloader
│   └── grondown_regional.sh # downloader configured for this exercise
└── config # directory for configuration files
@ -34,7 +34,7 @@ The project folder now contains a configuration file for Grond, some utility scr
Green's function download
-------------------------
To download the precalculated Green's functions needed in this exercise, run
To download the pre-calculated Green's functions needed in this exercise, run
.. code-block :: sh
@ -52,7 +52,7 @@ It contains a Pyrocko Green's function store, named ``crust2_j3``, which has bee
Seismic waveform data download
------------------------------
A preconfigured script is provided to download seismic waveform recordings via FDSN web services from the `IRIS <http://service.iris.edu/fdsnws/>`_ and `GEOFON <https://geofon.gfz-potsdam.de/waveform/webservices.php>`_ datacenters. Just run it with the GEOFON event ID of the study earthquake. The GEOFON event ID of the Mw 5.9 aftershock is ``gfz2018pmjk`` (you can find the ID in the `GEOFON catalog <https://geofon.gfz-potsdam.de/eqinfo/list.php>`_ event links).
A pre-configured script is provided to download seismic waveform recordings via FDSN web services from the `IRIS <http://service.iris.edu/fdsnws/>`_ and `GEOFON <https://geofon.gfz-potsdam.de/waveform/webservices.php>`_ data centres. Just run it with the GEOFON event ID of the study earthquake. The GEOFON event ID of the Mw 5.9 aftershock is ``gfz2018pmjk`` (you can find the ID in the `GEOFON catalog <https://geofon.gfz-potsdam.de/eqinfo/list.php>`_ event links).
To download the seismic waveform data, now run:
@ -64,11 +64,11 @@ This shell script calls the data downloader :file:`bin/grondown` with parameters
* Query the `GEOFON catalog <https://geofon.gfz-potsdam.de/eqinfo/list.php>`_ for event information about ``gfz2018pmjk``.
* Select time windows based on event origin and time, considering that we want to analyse the signals at low frequencies (0.01 - 0.1 Hz).
* Query datacenters for seismic stations with epicentral distance between 0 and 1000 km.
* Query data centres for seismic stations with epicentral distance between 0 and 1000 km.
* From the available recorder channels select appropriate ones for a target sampling rate of 2 Hz.
* Download raw waveform data for the selected stations and channels.
* Download instrument transfer function meta-information for all successfully downloaded waveform data.
* Calculate displacement seismograms for quality check (Grond will use the raw data). If all went well, the displacement seismograms should be valid in the frequency range 0.01 - 0.05 Hz, sampled at 1 Hz and rotated to radial, transverse, and vertical components. The rotation to radial and transverse components is with respect to the event coordinates from the GEOFON catalog.
* Calculate displacement seismograms for quality check (Grond will use the raw data). If all went well, the displacement seismograms should be valid in the frequency range 0.01 - 0.05 Hz, sampled at 1 Hz and rotated to radial, transverse, and vertical components. The rotation to radial and transverse components is with respect to the event coordinates from the GEOFON catalogue.
After running the download script, the playground directory should contain a new :file:`data` directory with the following content:
@ -77,7 +77,7 @@ After running the download script, the playground directory should contain a new
data
└── events
└── gfz2018pmjk
├── event.txt # catalog information about the event
├── event.txt # catalogue information about the event
└── waveforms
├── grondown.command
├── prepared/... # rotated, displacement waveforms

14
docs/source/examples/waveform_wphase/index.rst

@ -6,7 +6,7 @@ This step-by-step guide explains how to obtain a probabilistic W-phase centroid
Setup
-----
To repeat this exercise on your machine, you should first `install Pyrocko <https://pyrocko.org/docs/current/install/>`_ and Grond (see :doc:`/install/index`), if you have not already done so. Then, copy the exercise project directory from Grond's git repos to a place of your choice:
To repeat this exercise on your machine, you should first `install Pyrocko <https://pyrocko.org/docs/current/install/>`_ and Grond (see :doc:`/install/index`), if you have not already done so. Then, copy the exercise project directory from Grond's git repositories to a place of your choice:
.. code-block :: sh
@ -19,13 +19,13 @@ To repeat this exercise on your machine, you should first `install Pyrocko <http
The project folder
------------------
The project folder now contains a configuration file for Grond, some utility scripts to download precalculated Green's functions and to download seismic waveforms from public datacenters.
The project folder now contains a configuration file for Grond, some utility scripts to download pre-calculated Green's functions and to download seismic waveforms from public data centres.
.. code-block :: sh
grond-playground-wphase # project folder
├── bin # directory with scripts
│   ├── download_gf_stores.sh # download precalculated Green's functions
│   ├── download_gf_stores.sh # download pre-calculated Green's functions
│   ├── grondown # a simple event-based waveform downloader
│   └── grondown_wphase.sh # downloader configured for this exercise
└── config # directory for configuration files
@ -34,7 +34,7 @@ The project folder now contains a configuration file for Grond, some utility scr
Green's function download
-------------------------
To download the precalculated Green's functions needed in this exercise, run
To download the pre-calculated Green's functions needed in this exercise, run
.. code-block :: sh
@ -64,12 +64,12 @@ This shell script calls the data downloader :file:`bin/grondown` with parameters
* Query the `GEOFON catalog <https://geofon.gfz-potsdam.de/eqinfo/list.php>`_ for event information about ``gfz2015sfdd``.
* Select time windows based on event origin and time, considering that we want to analyse the signals at very low frequencies (0.001 - 0.005 Hz).
* Query datacenters for seismic stations with epicentral distance between 3000 and 11000 km.
* Query data centres for seismic stations with epicentral distance between 3000 and 11000 km.
* Select a small set of stations (N=40) providing a good coverage in azimuth and distance.
* From the available recorder channels select appropriate ones for a target sampling rate of 1 Hz.
* Download raw waveform data for the selected stations and channels.
* Download instrument transfer function meta-information for all successfully downloaded waveform data.
* Calculate displacement seismograms for quality check (Grond will use the raw data). If all went well, the displacement seismograms should be valid in the frequency range 0.01 - 0.05 Hz, sampled at 1 Hz and rotated to radial, transverse, and vertical components. The rotation to radial and transverse components is with respect to the event coordinates from the GEOFON catalog.
* Calculate displacement seismograms for quality check (Grond will use the raw data). If all went well, the displacement seismograms should be valid in the frequency range 0.01 - 0.05 Hz, sampled at 1 Hz and rotated to radial, transverse, and vertical components. The rotation to radial and transverse components is with respect to the event coordinates from the GEOFON catalogue.
After running the download script, the playground directory should contain a new :file:`data` directory with the following content:
@ -78,7 +78,7 @@ After running the download script, the playground directory should contain a new
data
└── events
└── gfz2015sfdd
├── event.txt # catalog information about the event
├── event.txt # catalogue information about the event
└── waveforms
├── grondown.command
├── prepared/... # rotated, displacement waveforms

91
docs/source/method/index.rst

@ -2,12 +2,18 @@
Method
******
This document gives a comprehensive overview over Grond's methodical background. It describes how the objective function and data weighting are defined, how the optimisation algorithm works and how model uncertainties are
This document gives a comprehensive overview of Grond's methodical background. It describes how the objective function and data weighting are defined, how the optimisation algorithm works and how model uncertainties are
estimated.
The very core of any optimisation is the evaluation of an objective function or misfit value between observed :math:`{\bf d}_{obs}` and predicted data :math:`{\bf d}_{synth}`. This is most often based on the difference :math:`{\bf d}_{obs} - {\bf d}_{synth}`, but can also be any other comparison, like a correlation measure for example.
The very core of any optimisation is the evaluation of an objective function or misfit value between observed and predicted data. This is most often based on the difference
`Observed data` here means post-processed data (or features) derived from the `raw` measurements. For example, in the context of seismic source inversion, seismic waveform recordings are usually tapered to extract specific seismic phases, restituted to displacement and filtered. `Predicted data` are in this case forward modelled seismograms that are tapered and filtered in the same way as the observed waveforms.
.. math::
{\bf d}_{obs} - {\bf d}_{synth}\mathrm{,}
but can also be any other comparison, like a correlation measure for instance.
`Observed data` :math:`{\bf d}_{obs}` are post-processed data (or features) derived from the `raw` measurements. For example, in the context of seismic source inversion, seismic waveforms are tapered to seismic phases of interest, restituted to displacement and filtered. `Predicted data` :math:`{\bf d}_{synth}` are then modelled seismograms which are tapered and filtered in the same way as the observed waveforms.
.. contents :: Content
:depth: 3
@ -15,9 +21,15 @@ The very core of any optimisation is the evaluation of an objective function or
Forward modelling with pre-calculated Green's functions
=======================================================
The forward modelling of raw synthetic data :math:`{\bf d}_{raw, synth}` for earthquake source models requires the calculation of the Green's function (GF) between all :term:`source` points and :term:`receiver` positions involved, based on a medium model. In the general earthquake :term:`source problem <Problem>`, the positions of the sources change during the optimisation because the misfit is calculated for many different source-receiver configurations. The calculation of the GFs for each specific source-receiver pair is computationally costly and would be a significant contribution to the total computational duration of an optimisation. Therefore, Grond uses pre-calculated GFs, stored in a database called Pyrocko :term:`GF store <Green's Function Store>`. Such GF Stores can be created with the `Fomosto`_ GF management tool of Pyrocko.
The forward modelling of raw synthetic data :math:`{\bf d}_{raw, synth}` for earthquake source models requires the calculation of the Green's function (GF) between all :term:`source` points and :term:`receiver` positions involved, based on a medium (velocity) model. In the general earthquake :term:`source problem <Problem>`, the positions of the sources change during the optimisation because the misfit is calculated for many different source-receiver configurations. The calculation of the GFs for each specific source-receiver pair is computationally costly and would be a significant contribution to the total computational duration of an optimisation. Therefore, Grond uses pre-calculated GFs, stored in a database called Pyrocko :term:`GF store <Green's Function Store>`. Such GF Stores can be created with the `Fomosto`_ GF management tool of Pyrocko.
Different applications need different types of :term:`GF stores <Green's function store>`. For the purpose of forward modelling in Grond, we have to distinguish between
Different applications need different types of :term:`GF stores <Green's function store>`. For the purpose of forward modelling in Grond, we have to distinguish between GFs for dynamic seismic waveforms and GFs for static near-field displacement. Ready to use GF stores can be found in `our online repository <http://kinherd.org/gfs.html>`_. Global GFs for several standard earth models, like e.g. the global 1D `PREM model`_ are available, as well as regional distance GFs for many profiles of the `CRUST 2.0 <https://igppweb.ucsd.edu/~gabi/crust2.html>`_ earth model database. Custom GF stores can be created using the `Fomosto`_ tool and an appropriate choice of the numerical method to calculate the GFs (Fomosto backend).
1. GFs for dynamic seismic waveforms and
2. GFs for static near-field displacement.
Ready-to-use GF stores can be found in `our online repository <http://kinherd.org/gfs.html>`_. Global GFs for several standard earth models, e.g. the global 1D `PREM model`_, are available, as well as regional-distance GFs for many profiles of the `CRUST 2.0 <https://igppweb.ucsd.edu/~gabi/crust2.html>`_ earth model database. Custom GF stores can be created using the `Fomosto`_ tool and an appropriate choice of the numerical method to calculate the GFs (Fomosto back end).
GFs for seismic waveforms
-------------------------
@ -28,17 +40,17 @@ For regional data analyses the `QSEIS <https://pyrocko.org/docs/current/apps/fom
GFs for static near-field displacements (measured by using GNSS or InSAR)
-------------------------------------------------------------------------
For the calculation of purely static coseismic displacements the use of the `PSGRN/PSCMP <https://pyrocko.org/docs/current/apps/fomosto/backends.html#the-psgrn-pscmp-backend>`_ method by `Wang et al.`_ (2006) is suggested for fast forward modelling.
For the calculation of purely static coseismic surface displacements the use of the `PSGRN/PSCMP <https://pyrocko.org/docs/current/apps/fomosto/backends.html#the-psgrn-pscmp-backend>`_ method by `Wang et al.`_ (2006) is suggested for fast forward modelling.
For more details on GF stores, see the `Pyrocko documentation <https://pyrocko.org/docs/current/>`_
For more details on GF stores, see the `Pyrocko documentation <https://pyrocko.org/docs/current/>`_.
Objective function design
=========================
The `objective function` gives a scalar misfit value how well the source model fits the observed data. A smaller misfit value is better than a large one. It is often called `misfit function`. The source model that results in the smallest values of the :term:`objective function` is the global minimum of the misfit function optimum model.
The `objective function` (or `misfit function`) gives a scalar misfit value that quantifies how well a source model fits the observed data. A smaller misfit value is better than a large one. The source model that yields the smallest value of the :term:`objective function` is the optimum model, located at the global minimum of the misfit function.
The objective function defines what a `model fit` is and how `good` or `poor` models are scaled with respect to others. Furthermore, the objective function has rules how different data sets are handled, which `Lp-norm <https://en.wikipedia.org/wiki/Lp_space>`_ is applied and how data errors are considered in optimisations.
The objective function defines what a `model fit` is and how `good` or `poor` models are scaled with respect to each other. Furthermore, the objective function has rules for how different observed data sets are handled, which `Lp-norm <https://en.wikipedia.org/wiki/Lp_space>`_ is applied and how data errors are considered in the optimisation.
.. figure:: ../images/illu_combi_weights.svg
@ -55,7 +67,7 @@ Misfit calculation and objective function
The core of an optimisation is the data-point-wise calculation of the difference between observed and predicted data:
TODO: avoid repetiion
TODO: avoid repetition
.. math ::
@ -74,32 +86,30 @@ Grond supports different seismological observations and a combination of those,
* from pixel offsets
* measured by using GNSS sensors
TODO: Define the word target.
The misfit is based on the configurable :math:`L^x`-norm with :math:`x \,\, \epsilon \,\, [1, 2, 3, ...]`:
The misfit is based on the configurable :math:`L^p`-norm with :math:`p \in \{1, 2, 3, \dots\}`:
.. math::
:label: eq:ms
\lVert e \rVert_x = \lVert {\bf{d}}_{obs} - {{\bf d}}_{synth} \rVert_x = \
\left(\sum{|{ d}_{i, obs} - {d}_{i, synth}|^x}\right)^{\frac{1}{x}}
\lVert e \rVert_p = \lVert {\bf{d}}_{obs} - {{\bf d}}_{synth} \rVert_p = \
\left(\sum{|{ d}_{i, obs} - {d}_{i, synth}|^p}\right)^{\frac{1}{p}}
Further, the misfit normalisation factor :math:`norm` is associated with each target. This measure is used to normalise the misfit values for relative weighting:
.. math::
:label: ns
\lVert e_{\mathrm{0}} \rVert_x = \lVert {\bf{d}}_{obs} \rVert_x = \left(\sum{|{d}_{i, obs}|^x} \right)^{\frac{1}{x}}.
\lVert e_{\mathrm{0}} \rVert_p = \lVert {\bf{d}}_{obs} \rVert_p = \left(\sum{|{d}_{i, obs}|^p} \right)^{\frac{1}{p}}.
The resulting normalised misfit
.. math::
:label: ms_ns
\lVert e_{\mathrm{norm}} \rVert_x = \
\frac{\lVert e \rVert_x}{ \lVert e_{\mathrm{0}} \rVert_x}.
\lVert e_{\mathrm{norm}} \rVert_p = \
\frac{\lVert e \rVert_p}{ \lVert e_{\mathrm{0}} \rVert_p}.
is a useful measure to evaluate the data fit. Model predictions that manage to explain parts of the observed data holds :math:`\lVert e_{\mathrm{norm}} \rVert_x <1`. Furthermore, the data norm :math:`\lVert e_{\mathrm{0}} \rVert_x` is used in the normalisation of data groups.
is a useful measure to evaluate the data fit. For model predictions that manage to explain parts of the observed data, :math:`\lVert e_{\mathrm{norm}} \rVert_p < 1` holds. Furthermore, the data norm :math:`\lVert e_{\mathrm{0}} \rVert_p` is used in the normalisation of data groups.
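As a minimal sketch, the misfit and its normalisation translate directly into numpy (assuming plain data vectors):

.. code-block :: python

    import numpy as np

    def normalised_misfit(d_obs, d_synth, p=2):
        e = np.sum(np.abs(d_obs - d_synth)**p)**(1.0/p)   # misfit ||e||_p
        e0 = np.sum(np.abs(d_obs)**p)**(1.0/p)            # data norm ||e_0||_p
        return e / e0

    d_obs = np.array([1.0, 2.0, -1.5])
    # a prediction explaining part of the data gives a value < 1
    print(normalised_misfit(d_obs, 0.8 * d_obs))          # approx. 0.2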
Waveform misfit
^^^^^^^^^^^^^^^
@ -124,7 +134,7 @@ When measuring waveform data's cross-correlation, the misfit function is based o
Satellite misfit
^^^^^^^^^^^^^^^^
The surface deformation data is pre-processed with kite (:doc:`../examples/satellite_insar/index`) to obtain a subsampled quadtree. The misfit is then calculated for each quadtree tile :math:`d_{i}`.
The surface deformation data is pre-processed with kite (see example project: :doc:`../examples/satellite_insar/index`) to obtain a subsampled quadtree. The misfit is then calculated for each quadtree tile :math:`d_{i}`.
GNSS misfit
@ -178,9 +188,9 @@ With these weights waveform targets are `balanced` with respect to the expected
:alt: alternate text
:figclass: align-center
**Figure 2**: Qualitative sketch how target balancing weight increases with source-receiver distance to balance amplitude inferred by geometrical spreading.
**Figure 2**: Qualitative sketch of how the target-balancing weight increases with source-receiver distance to balance the amplitude decay caused by geometrical spreading. Large triangles indicate larger weights and vice versa.
Signal amplitudes in a trace :math:`|{\bf{d}}_{synth}|` depend on the (1) source-receiver distance, (2) on the phase type and (3) signal procesing applied (taper or bandpass). The problem tackled with this particular weight is that large signal amplitude have higher contributions to the misfit than smaller signals, without providing more information about the source machanism. From synthetic waveforms of `N` forward models that have been randomly drawn from the defined model space the mean signal amplitude of the traces is derived. The weight for each trace is then the inverse of these mean signal amplitudes:
Signal amplitudes in a trace :math:`|{\bf{d}}_{synth}|` depend on (1) the source-receiver distance, (2) the phase type and (3) the signal processing applied (taper or bandpass). The problem tackled with this particular weight is that large signal amplitudes have higher contributions to the misfit than smaller signals, without providing more information about the source mechanism. From synthetic waveforms of `N` forward models that have been randomly drawn from the defined model space, the mean signal amplitude of the traces is derived. The weight for each trace is then the inverse of these mean signal amplitudes:
.. math::
:label: wtba
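A minimal numpy sketch of this balancing scheme; the array layout (models x targets x samples) is an assumption for illustration:

.. code-block :: python

    import numpy as np

    def balancing_weights(synth):
        # synth: (nmodels, ntargets, nsamples) synthetic waveforms from
        # N models randomly drawn from the model space
        mean_amp = np.abs(synth).mean(axis=(0, 2))
        return 1.0 / mean_amp   # weak (distant) targets get larger weights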
@ -270,7 +280,7 @@ The **global misfit** in this example is then:
The bootstrap method
====================
`Bootstrapping` in Grond (see also `Bootstrapping (Wikipedia) <https://en.wikipedia.org/wiki/Bootstrapping_(statistics)>`_) enables to suppress some types of bias in the optimization results. Observations that are affected by other signals or noise often show large misfits. Also insufficient media models for the forward model can result in high misfit values. Already a few high misfit values may pull the optimisation to a biased optimum. With bootstrapping techinques we can better estimate model parameter uncertainties in an efficient way. These include the propagation of the data error, but also the assessment of modelling errors to some extent.
`Bootstrapping` in Grond (see also `Bootstrapping [Wikipedia] <https://en.wikipedia.org/wiki/Bootstrapping_(statistics)>`_) makes it possible to suppress some types of bias in the optimisation results. Observations that are affected by other signals or noise often show large misfits. Insufficient media models for the forward model can also result in high misfit values. Even a few high misfit values may pull the optimisation to a biased optimum. With bootstrapping techniques we can better estimate model parameter uncertainties in an efficient way. These include the propagation of the data error, but also the assessment of modelling errors to some extent.
In Grond the bootstrapping is applied in a number of parallel `bootstrap chains`, where individual bootstrap weights and bootstrap noise are applied to the model misfits. Technically, each bootstrap chain carries out its own optimisation. Find more detail below, at :ref:`babo_optimiser`. (What is an :term:`optimiser`?)
@ -297,7 +307,7 @@ For a `classic` bootstrap realisation we draw :math:`N_{\mathrm{targets}}` rando
:alt: alternate text
:figclass: align-center
**Figure 3**: Formation of `classical` bootstrap weights. Uniformly random samples (left) and the corresponding histogram (right) with the occurence frequencies being used as bootstrap weights.
**Figure 3**: Formation of `classical` bootstrap weights. Uniformly random samples (left) and the corresponding histogram (right) with the occurrence frequencies being used as bootstrap weights.
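A minimal sketch of how such `classical` bootstrap weights can be formed (not Grond's actual code):

.. code-block :: python

    import numpy as np

    def classic_bootstrap_weights(ntargets, rng=np.random):
        # draw ntargets target indices uniformly, with replacement;
        # the occurrence frequency of each index is its weight
        idraws = rng.randint(0, ntargets, size=ntargets)
        return np.bincount(idraws, minlength=ntargets)

    print(classic_bootstrap_weights(10))   # e.g. [0 2 1 0 3 1 0 1 2 0]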
Bayesian weights
^^^^^^^^^^^^^^^^
@ -321,7 +331,9 @@ For a `Bayesian` bootstrap realisation we draw :math:`N_{\mathrm{targets}}` rand
Residual bootstrap
------------------
Residual bootstrapping is a computationally more efficient implementation of the `Randomize-then-optimize`_ approach: with empirical estimates of the data error statistics individual realisations of synthetic correlated random noise are systematically added to the data to obtain perturbed optimisations results (Fig. 5). Earthquake source parameter distributions retrieved with the `Randomize-then-optimize`_ method based on the data error variance-covariance matrices have been shown to match the model parameter distributions obtained through `Marcov Chain Monte Carlo` sampling of the model space (`Jonsson et al.`_,2014). In our `residual bootstrapping` method we add one realisation of synthetic correlated random noise to each bootstrapping chain (Fig. 5C and 1). This saves the calculation of many independent forward models compared to `Randomize-then-optimize`_ approach.
Residual bootstrapping is a computationally more efficient implementation of the `Randomize-then-optimize`_ approach: with empirical estimates of the data error statistics individual realisations of synthetic correlated random noise are systematically added to the data to obtain perturbed optimisations results (Fig. 5).
Earthquake source parameter distributions retrieved with the `Randomize-then-optimize`_ method based on the data error variance-covariance matrices have been shown to match the model parameter distributions obtained through `Markov Chain Monte Carlo` sampling of the model space (`Jonsson et al.`_, 2014). In our `residual bootstrapping` method we add realisations of synthetic correlated random noise to each bootstrap chain (Fig. 5C and 1). This saves the calculation of many independent forward models compared to the `Randomize-then-optimize`_ approach.
To generate random noise we use functions of the `Kite`_ module. From the noise estimation region defined in the `Kite`_ scenes (Fig. 5A), the noise power spectrum is used directly with a randomised phase spectrum to create new random noise with the same spectral characteristics (Fig. 5B). The noise is then subsampled through the same quadtree as defined for the observed data (Fig. 5C).
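The phase-randomisation step can be sketched in a few lines of numpy; this is the generic technique, not Kite's exact implementation:

.. code-block :: python

    import numpy as np

    def synthesise_noise(noise, rng=np.random):
        # keep the amplitude spectrum, randomise the phase spectrum
        spec = np.fft.rfft(noise)
        phase = rng.uniform(0., 2. * np.pi, size=spec.size)
        return np.fft.irfft(np.abs(spec) * np.exp(1j * phase),
                            n=noise.size)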
@ -332,7 +344,7 @@ To generate random noise we use functions of the `Kite`_ module. From the noise
:alt: alternate text
:figclass: align-center
**Figure 5**: Residual bootstrap realisation of InSAR surface displacement data in Grond. (A) From data noise we (B) synthesise random correlated data noise, which is then (C) subsampled exactly as the observed data. These perturbation are then added as bootstrap residuals.
**Figure 5**: Residual bootstrapping of InSAR surface displacement data in Grond. (A) From displacements maps we extract noise and (B) synthesise random correlated data noise. (C) This synthetic noise is then subsampled exactly as the observed data. These random realisations are added to the residuals of each bootstrap chain.
.. _optimisation:
@ -351,8 +363,9 @@ Bayesian bootstrap optimisation `BABO <https://de.wikipedia.org/wiki/Babo_(Jugen
BABO can be configured for a simple Monte-Carlo random direct search. It can also resemble a simulated annealing optimisation approach. Last but not least BABO enables fully probabilistic bootstrapping of the optimisation results. This is realised in parallel with optimisation chains to which bootstrapping weights are applied.
Note:
*Weights* are explained above. The specific weighting is configured with the `targets config`_ used and also with the `problem`_. The *model space* in which the optimisation takes place is defined with the `problem`_. Here described is the sampling and in the context of the multiple objective functions given by the bootstrapping.
.. note ::
*Weights* are explained above. The specific weighting is configured with the `targets config`_ used and also with the `problem`_. The *model space* in which the optimisation takes place is defined with the `problem`_. Described here is the sampling in the context of the multiple objective functions given by the bootstrapping.
Sampling scheme and sampling phases
@ -362,7 +375,7 @@ Like in any `direct search` optimisation models are drawn from the model space.
Highscore list
""""""""""""""
This list contains a defined number of the current best models (lowest misfit). It is continuously updated at runtime. The `highscore` list length :math:`L_{hs}` (i.e. number of member models) is `problem`_ dependend: :math:`L_{hs} = f_{\mathrm{len}} (N_{\mathrm{par}} -1)`, with :math:`N_{\mathrm{par}}` being the number of model paramters. :math:`f_{\mathrm{len}}` is configurable (``chain_length_factor``, default is 8).
This list contains a defined number of the current best models (lowest misfit). It is continuously updated at runtime. The `highscore` list length :math:`L_{hs}` (i.e. the number of member models) is `problem`_ dependent: :math:`L_{hs} = f_{\mathrm{len}} (N_{\mathrm{par}} -1)`, with :math:`N_{\mathrm{par}}` being the number of model parameters. :math:`f_{\mathrm{len}}` is configurable (``chain_length_factor``, default is 8).
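For example (a direct transcription of the formula above):

.. code-block :: python

    def highscore_list_length(nparameters, chain_length_factor=8):
        return chain_length_factor * (nparameters - 1)

    print(highscore_list_length(10))   # -> 72 for a 10-parameter problem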
There are three sampling phases defined, based on which models are drawn from
the model space:
@ -391,11 +404,11 @@ This starting phase allows to inject pre-defined models at the start of the opti
DirectedSamplerPhase
""""""""""""""""""""
This sampler is used for the second phase and follows any of starting samplers above: Using existing models of the current `highscore` models the `directed` sampler draws a configurable number of new models. Like this convergence to low-misfit regions is enabled. There are quite some noteworthy configureable details to this sampler phase:
This sampler is used for the second phase and follows any of the starting samplers above: using the existing models of the current `highscore` list, the `directed` sampler draws a configurable number of new models. This enables convergence towards low-misfit regions. There are some noteworthy configurable details to this sampler phase:
``sampling_distributions``
..........................
New models are drawn from normal distribution. The standard deviations are derived from the `highscore` models parameter's standard deviation and scaled by ``scatter_scale`` (see below). Optionally, the covariance of model parameters is taken into account by configuring when ``multivariate_normal`` is enabled (default is ``normal`` distribution). The distribution is centered around
New models are drawn from a normal distribution. The standard deviations are derived from the standard deviations of the `highscore` models' parameters and scaled by ``scatter_scale`` (see below). Optionally, the covariance of the model parameters is taken into account when ``multivariate_normal`` is enabled (default is the ``normal`` distribution; see the sketch after the list below). The distribution is centred around
1. ``mean`` of the `highscore` model parameter distributions
2. a ``random`` model from the `highscore` list or
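A minimal numpy sketch of such a directed draw; centring on the mean is shown, and the covariance handling is a simplified assumption:

.. code-block :: python

    import numpy as np

    def draw_directed(models, scatter_scale=1.0, multivariate=False,
                      rng=np.random):
        # models: (nmodels, nparams) current highscore models
        centre = models.mean(axis=0)
        if multivariate:
            cov = np.cov(models.T) * scatter_scale**2
            return rng.multivariate_normal(centre, cov)
        return rng.normal(centre, models.std(axis=0) * scatter_scale)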
@ -407,11 +420,11 @@ This scales search radius around the current `highscore` models. With a scatter
``starting_point``
..................
This method tunes to the center value of the sampler distribution: This option, will increase the likelihood to draw a `highscore` member model off-center to the mean value. The probability of drawing a model from the `highscore` list is derived from distances the `highscore` models have to other `highscore` models in the model parameter space. Excentricity is therefore compensated, because models with few neighbours at larger distances have an increased likelihood to be drawn.
This method tunes the centre value of the sampler distribution: this option increases the likelihood to draw a `highscore` member model off-centre from the mean value. The probability of drawing a model from the `highscore` list is derived from the distances of the `highscore` models to the other `highscore` models in the model parameter space. Eccentricity is thereby compensated, because models with few neighbours at larger distances have an increased likelihood to be drawn.
What's the use? Convergence is slowed down, yes, but to the benefit of low-misfit regions that are represented by only a few of the models drawn up to the current point.
Let's assume there are two separated groups of low-misfit models in our `highscore` list, with one group forming the 75% majority. In the directed sampler phase the choices of a mean center point for the distribution as well as a random starting point for the sampler distribution would favour new samples in the region of the `highscore` model majority. Models in the low-misfit region may be dying out in the `highscore` list due to favorism and related sparse sampling. `excentricity compensations` can help is these cases and keep models with not significantly higher misfits in the game and in sight.
Let's assume there are two separated groups of low-misfit models in our `highscore` list, with one group forming the 75% majority. In the directed sampler phase the choices of a mean centre point for the distribution as well as a random starting point for the sampler distribution would favour new samples in the region of the `highscore` model majority. Models in the minority low-misfit region may be dying out in the `highscore` list due to favouritism and the related sparse sampling. `Eccentricity compensation` can help in these cases and keep models with not significantly higher misfits in the game and in sight.
TODO: correct? too many explanations? Sebastian, here is the perfect place for one of your movies.
@ -430,9 +443,9 @@ The highscore list member models in each bootstrap chain (Fig. 7B) will differ t
:alt: alternate text
:figclass: align-center
**Figure 7**: Bootstrap chain graph. (A) Illustration of bootstrap weights, (B) bootstrap chain highscore lists and (C) their influence on the convergence in the model parameter space due to the individual objective function of each bootstrap chain.
**Figure 7**: Bootstrap chain graph. (A) Illustration of bootstrap weights, where larger triangles indicate higher weights. (B) Bootstrap chain highscore lists and (C) their influence on the convergence in the model parameter space due to the individually weighted objective function of each bootstrap chain.
The convergence of model parameters for the models within each bootstrap chain is dependent on the settings of the optimisation, e.g. the setup of parameter bounds, `scatter scale` settings of the `directive sampling phase` and other tuneables. With very `exploitive` settings convergence can be forced. However, if the convergence within each bootstrap chain starts to form individual clusters in the model space, further optimisation will not provide significantly better models. In Fig. 8 the area of the `highscore` models of three bootstrap chains has only little overlap compared to an earlier stage visualised in Fig. 7C.
The convergence of model parameters for the models within each bootstrap chain is dependent on the settings of the optimisation, e.g. the setup of parameter bounds, the `scatter scale` settings of the `directed sampling phase` and other tunables. With very `exploitive` settings convergence can be forced. However, if the convergence within each bootstrap chain starts to form individual clusters in the model space, further optimisation will not provide significantly better models. In Fig. 8 the area of the `highscore` models of three bootstrap chains has only little overlap compared to an earlier stage visualised in Fig. 7C.
.. figure:: ../images/illu_babo_chains.svg
@ -442,7 +455,7 @@ The convergence of model parameters for the models within each bootstrap chain i
:alt: alternate text
:figclass: align-left
**Figure 8**: Drawing new model candidated from the described sampling strategies - the proposal is based on the existing solution space.
**Figure 8**: Drawing new model candidates from the described sampling strategies - the proposal is based on the existing solution space.
Movies: BABO at work
@ -478,7 +491,7 @@ Global + 3 bootstrap chains
Your browser does not support the video tag.
</video>
Illposed problem, no excentricity correction
Ill-posed problem, no eccentricity correction
--------------------------------------------
.. raw:: html
@ -488,7 +501,7 @@ Illposed problem, no excentricity correction
Your browser does not support the video tag.
</video>
Illposed problem, excentricity correction applied
Ill-posed problem, eccentricity correction applied
-------------------------------------------------
.. raw:: html
@ -501,7 +514,7 @@ Illposed problem, excentricity correction applied
Bimodal, standard deviations from high score models
---------------------------------------------------
Fullspace is searched, problem is bimodel due to symmetry
Full space is searched, problem is bimodal due to symmetry
.. raw:: html

4
docs/source/overview/index.rst

@ -362,10 +362,10 @@ strategies.
This refers to the optimisation strategy, i.e. how the model space is sampled to find solutions in a given Grond setup.
Bootstrapping
In statistics, bootstrapping is any test or metric that relies on random sampling with replacement. Bootstrapping allows assigning measures of accuracy (defined in terms of bias, variance, confidence intervals, prediction error or some other such measure) to sample estimates. This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods. Generally, it falls in the broader class of resampling methods. `Wiki <https://en.wikipedia.org/wiki/Bootstrapping_(statistics)>`_
In statistics, bootstrapping is any test or metric that relies on random sampling with replacement. Bootstrapping allows assigning measures of accuracy (defined in terms of bias, variance, confidence intervals, prediction error or some other such measure) to sample estimates. This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods. Generally, it falls in the broader class of re-sampling methods. `Wiki <https://en.wikipedia.org/wiki/Bootstrapping_(statistics)>`_
Green's Function Store
Refers to Green's function databases to be used for the forward modelling. In Grond these stores are adressed with directory paths and an individual ``store_id``.
Refers to Green's function databases to be used for the forward modelling. In Grond these stores are addressed with directory paths and an individual ``store_id``.
Engine
Forward modelling in Grond is done through the Pyrocko GF engine, which allows fast forward modelling for arbitrary source models based on pre-calculated Green's function stores (databases). Its configuration may contain information about where to find the pre-calculated Pyrocko Green's function stores.

2
src/apps/grond.py

@ -340,7 +340,7 @@ def command_scenario(args):
'--problem', dest='problem', default='cmt',
type='choice', choices=['cmt', 'rectangular'],
help='problem to generate: \'cmt\' (centroid moment tensor)'
' or\'rectangular\' (rectangular finite fault)'
' or \'rectangular\' (rectangular finite fault)'
' (default: \'%default\')')
parser.add_option(
'--magnitude-range', dest='magnitude_range', type=str,

6
src/optimisers/highscore/optimiser.py

@ -96,7 +96,9 @@ class SamplerPhase(Object):
class InjectionSamplerPhase(SamplerPhase):
xs_inject = Array.T(dtype=num.float, shape=(None, None))
xs_inject = Array.T(
dtype=num.float, shape=(None, None),
help='Array with the reference model.')
def get_raw_sample(self, problem, iiter, chains):
return self.xs_inject[iiter, :]
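Hypothetical usage sketch (the import path and parameter values are assumptions; each row of ``xs_inject`` is one model in the problem's parameter space):

.. code-block :: python

    import numpy as num
    from grond.optimisers.highscore.optimiser import InjectionSamplerPhase

    phase = InjectionSamplerPhase(
        niterations=2,                      # one injected model per iteration
        xs_inject=num.array([[1.2, 0.4, 9e3, 5e17],
                             [1.1, 0.5, 8e3, 6e17]]))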
@ -624,7 +626,7 @@ class HighScoreOptimiserConfig(OptimiserConfig):
SamplerPhase.T(),
default=[UniformSamplerPhase(niterations=1000),
DirectedSamplerPhase(niterations=5000)],
help='Stages of the sampler. Start with uniform sampling of the model'
help='Stages of the sampler: Start with uniform sampling of the'
' model space and narrow down through directed sampling.')
chain_length_factor = Float.T(
default=8.,
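A sketch of constructing this configuration explicitly in Python, mirroring the defaults shown above (import path assumed):

.. code-block :: python

    from grond.optimisers.highscore.optimiser import (
        HighScoreOptimiserConfig, UniformSamplerPhase, DirectedSamplerPhase)

    config = HighScoreOptimiserConfig(
        sampler_phases=[UniformSamplerPhase(niterations=1000),
                        DirectedSamplerPhase(niterations=5000)],
        chain_length_factor=8.)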
