def calc_evi_inzp_v1(self):
"""Calculate interception evaporation and update the interception
storage accordingly.
Required control parameters:
|NHRU|
|Lnk|
|TRefT|
|TRefN|
Required flux sequence:
|EvPo|
Calculated flux sequence:
|EvI|
Updated state sequence:
|Inzp|
Basic equation:
:math:`EvI = \\Bigl \\lbrace
{
{EvPo \\ | \\ Inzp > 0}
\\atop
{0 \\ | \\ Inzp = 0}
}`
Examples:
Initialize five HRUs with different combinations of land usage
and initial interception storage and apply a value of potential
evaporation of 3 mm on each one:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(5)
>>> lnk(FLUSS, SEE, ACKER, ACKER, ACKER)
>>> states.inzp = 2.0, 2.0, 0.0, 2.0, 4.0
>>> fluxes.evpo = 3.0
>>> model.calc_evi_inzp_v1()
>>> states.inzp
inzp(0.0, 0.0, 0.0, 0.0, 1.0)
>>> fluxes.evi
evi(3.0, 3.0, 0.0, 2.0, 3.0)
For arable land (|ACKER|) and most other land use classes, interception
evaporation (|EvI|) is identical to potential evapotranspiration
(|EvPo|), as long as it is met by available intercepted water
(|Inzp|). For water areas (|FLUSS| and |SEE|) only, |EvI| is
generally equal to |EvPo| (but this might be corrected by a method
called after |calc_evi_inzp_v1| has been applied) and |Inzp| is
set to zero.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
flu.evi[k] = flu.evpo[k]
sta.inzp[k] = 0.
else:
flu.evi[k] = min(flu.evpo[k], sta.inzp[k])
sta.inzp[k] -= flu.evi[k]
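The branching above reduces to a few lines of plain Python. The following single-HRU sketch (the helper name `calc_evi_inzp` is illustrative, and plain floats replace HydPy's fastaccess sequences) mirrors that logic:

```python
def calc_evi_inzp(evpo, inzp, is_water):
    """Return (evi, new_inzp) for a single hydrological response unit."""
    if is_water:
        return evpo, 0.0      # water areas evaporate at the potential rate
    evi = min(evpo, inzp)     # limited by the intercepted water available
    return evi, inzp - evi
```

For the fifth HRU of the example (|Inzp| = 4 mm, |EvPo| = 3 mm), this returns an evaporation of 3 mm and a remaining storage of 1 mm.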
def calc_sbes_v1(self):
"""Calculate the frozen part of stand precipitation.
Required control parameters:
|NHRU|
|TGr|
|TSp|
Required flux sequences:
|TKor|
|NBes|
Calculated flux sequence:
|SBes|
Examples:
In the first example, the threshold temperature of seven hydrological
response units is 0 °C and the corresponding temperature interval of
mixed precipitation 2 °C:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> tgr(0.0)
>>> tsp(2.0)
The value of |SBes| is zero above 1 °C and equal to the value of
|NBes| below -1 °C. Between these temperature values, |SBes|
decreases linearly with rising temperature:
>>> fluxes.nbes = 4.0
>>> fluxes.tkor = -10.0, -1.0, -0.5, 0.0, 0.5, 1.0, 10.0
>>> model.calc_sbes_v1()
>>> fluxes.sbes
sbes(4.0, 4.0, 3.0, 2.0, 1.0, 0.0, 0.0)
Note the special case of a zero temperature interval. With the
actual temperature being equal to the threshold temperature, the
value of |SBes| is zero:
>>> tsp(0.)
>>> model.calc_sbes_v1()
>>> fluxes.sbes
sbes(4.0, 4.0, 4.0, 0.0, 0.0, 0.0, 0.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
if flu.nbes[k] <= 0.:
flu.sbes[k] = 0.
elif flu.tkor[k] >= (con.tgr[k]+con.tsp[k]/2.):
flu.sbes[k] = 0.
elif flu.tkor[k] <= (con.tgr[k]-con.tsp[k]/2.):
flu.sbes[k] = flu.nbes[k]
else:
flu.sbes[k] = ((((con.tgr[k]+con.tsp[k]/2.)-flu.tkor[k]) /
con.tsp[k])*flu.nbes[k])
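The threshold logic can be expressed as a standalone function. This is a simplified single-HRU sketch with plain float arguments (`calc_sbes` is just an illustrative name):

```python
def calc_sbes(nbes, tkor, tgr, tsp):
    """Frozen part of stand precipitation for a single HRU."""
    if nbes <= 0.0:
        return 0.0
    if tkor >= tgr + tsp / 2.0:   # warm enough: entirely liquid
        return 0.0
    if tkor <= tgr - tsp / 2.0:   # cold enough: entirely frozen
        return nbes
    # linear transition within the mixed-precipitation interval
    return ((tgr + tsp / 2.0) - tkor) / tsp * nbes
```

With `tsp = 0.0`, a temperature exactly at the threshold falls into the first branch, reproducing the zero result of the special case above.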
def calc_wgtf_v1(self):
"""Calculate the potential snowmelt.
Required control parameters:
|NHRU|
|Lnk|
|GTF|
|TRefT|
|TRefN|
|RSchmelz|
|CPWasser|
Required flux sequence:
|TKor|
Calculated flux sequence:
|WGTF|
Basic equation:
:math:`WGTF = max(GTF \\cdot (TKor - TRefT), 0) +
max(\\frac{CPWasser}{RSchmelz} \\cdot (TKor - TRefN), 0)`
Examples:
Initialize seven HRUs with identical degree-day factors and
temperature thresholds, but different combinations of land use
and air temperature:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(7)
>>> lnk(ACKER, LAUBW, FLUSS, SEE, ACKER, ACKER, ACKER)
>>> gtf(5.0)
>>> treft(0.0)
>>> trefn(1.0)
>>> fluxes.tkor = 2.0, 2.0, 2.0, 2.0, -1.0, 0.0, 1.0
Compared to most other LARSIM parameters, the specific heat capacity
and melt heat capacity of water can be seen as fixed properties:
>>> cpwasser(4.1868)
>>> rschmelz(334.0)
Note that the values of the degree-day factor are only half
as much as the given value, due to the simulation step size
being only half as long as the parameter step size:
>>> gtf
gtf(5.0)
>>> gtf.values
array([ 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5])
After performing the calculation, one can see that the potential
melting rate is identical for the first two HRUs (|ACKER| and
|LAUBW|). The land use class makes no difference, except for
water areas (third and fourth HRU, |FLUSS| and |SEE|), where no
potential melt needs to be calculated. The last three HRUs (again
|ACKER|) show the usual behaviour of the degree-day method, when the
actual temperature is below (fifth HRU), equal to (sixth HRU), or
above (seventh HRU) the threshold temperature. Additionally, the
first two HRUs show the influence of the additional energy intake
due to "warm" precipitation. Obviously, this additional term is
quite negligible for common parameterizations, even if lower
values for the separate threshold temperature |TRefT| were
taken into account:
>>> model.calc_wgtf_v1()
>>> fluxes.wgtf
wgtf(5.012535, 5.012535, 0.0, 0.0, 0.0, 0.0, 2.5)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
flu.wgtf[k] = 0.
else:
flu.wgtf[k] = (
max(con.gtf[k]*(flu.tkor[k]-con.treft[k]), 0) +
max(con.cpwasser/con.rschmelz*(flu.tkor[k]-con.trefn[k]), 0.))
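The basic equation can be checked independently of the model. A minimal sketch, assuming the already adjusted simulation-step parameter values (e.g. `gtf` = 2.5 as in the example; `calc_wgtf` is an illustrative name):

```python
def calc_wgtf(tkor, gtf, treft, trefn, cpwasser=4.1868, rschmelz=334.0):
    """Potential snowmelt after the degree-day method (single HRU)."""
    melt = max(gtf * (tkor - treft), 0.0)                        # degree-day term
    warm_rain = max(cpwasser / rschmelz * (tkor - trefn), 0.0)   # "warm" precipitation term
    return melt + warm_rain

round(calc_wgtf(2.0, 2.5, 0.0, 1.0), 6)  # -> 5.012535, as in the example
```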
def calc_schm_wats_v1(self):
"""Calculate the actual amount of water melting within the snow cover.
Required control parameters:
|NHRU|
|Lnk|
Required flux sequences:
|SBes|
|WGTF|
Calculated flux sequence:
|Schm|
Updated state sequence:
|WATS|
Basic equations:
:math:`\\frac{dWATS}{dt} = SBes - Schm`
:math:`Schm = \\Bigl \\lbrace
{
{WGTF \\ | \\ WATS > 0}
\\atop
{0 \\ | \\ WATS = 0}
}`
Examples:
Initialize two water (|FLUSS| and |SEE|) and four arable land
(|ACKER|) HRUs. Assume the same values for the initial amount
of frozen water (|WATS|) and the frozen part of stand precipitation
(|SBes|), but different values for potential snowmelt (|WGTF|):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(6)
>>> lnk(FLUSS, SEE, ACKER, ACKER, ACKER, ACKER)
>>> states.wats = 2.0
>>> fluxes.sbes = 1.0
>>> fluxes.wgtf = 1.0, 1.0, 0.0, 1.0, 3.0, 5.0
>>> model.calc_schm_wats_v1()
>>> states.wats
wats(0.0, 0.0, 3.0, 2.0, 0.0, 0.0)
>>> fluxes.schm
schm(0.0, 0.0, 0.0, 1.0, 3.0, 3.0)
For the water areas, both the frozen amount of water and actual melt
are set to zero. For all other land use classes, actual melt
is either limited by potential melt or the available frozen water,
which is the sum of initial frozen water and the frozen part
of stand precipitation.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
sta.wats[k] = 0.
flu.schm[k] = 0.
else:
sta.wats[k] += flu.sbes[k]
flu.schm[k] = min(flu.wgtf[k], sta.wats[k])
sta.wats[k] -= flu.schm[k]
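The update rule for a single HRU can be sketched as follows (plain floats instead of HydPy sequences; the name `calc_schm_wats` is illustrative):

```python
def calc_schm_wats(wats, sbes, wgtf, is_water):
    """Return (schm, new_wats) for a single HRU."""
    if is_water:
        return 0.0, 0.0           # no snow cover on water areas
    wats += sbes                  # add the frozen part of stand precipitation
    schm = min(wgtf, wats)        # melt is limited by the available frozen water
    return schm, wats - schm
```

For the last HRU of the example (|WATS| = 2 mm, |SBes| = 1 mm, |WGTF| = 5 mm), melt is capped at the available 3 mm of frozen water.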
def calc_wada_waes_v1(self):
"""Calculate the actual water release from the snow cover.
Required control parameters:
|NHRU|
|Lnk|
|PWMax|
Required flux sequences:
|NBes|
Calculated flux sequence:
|WaDa|
Updated state sequence:
|WAeS|
Basic equations:
:math:`\\frac{dWAeS}{dt} = NBes - WaDa`
:math:`WAeS \\leq PWMax \\cdot WATS`
Examples:
For simplicity, the threshold parameter |PWMax| is set to a value
of two for each of the six initialized HRUs. Thus, snow cover can
hold as much liquid water as it contains frozen water. Stand
precipitation is also always set to the same value, but the initial
conditions of the snow cover are varied:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(6)
>>> lnk(FLUSS, SEE, ACKER, ACKER, ACKER, ACKER)
>>> pwmax(2.0)
>>> fluxes.nbes = 1.0
>>> states.wats = 0.0, 0.0, 0.0, 1.0, 1.0, 1.0
>>> states.waes = 1.0, 1.0, 0.0, 1.0, 1.5, 2.0
>>> model.calc_wada_waes_v1()
>>> states.waes
waes(0.0, 0.0, 0.0, 2.0, 2.0, 2.0)
>>> fluxes.wada
wada(1.0, 1.0, 1.0, 0.0, 0.5, 1.0)
Note the special cases of the first two HRUs of type |FLUSS| and
|SEE|. For water areas, stand precipitation |NBes| is generally
passed to |WaDa| and |WAeS| is set to zero. For all other land
use classes (of which only |ACKER| is selected), only the amount
of |NBes| exceeding the actual snow holding capacity is passed
to |WaDa|.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
sta.waes[k] = 0.
flu.wada[k] = flu.nbes[k]
else:
sta.waes[k] += flu.nbes[k]
flu.wada[k] = max(sta.waes[k]-con.pwmax[k]*sta.wats[k], 0.)
sta.waes[k] -= flu.wada[k]
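The holding-capacity rule can be stated compactly for a single HRU (a sketch with plain floats; `calc_wada_waes` is an illustrative name):

```python
def calc_wada_waes(wats, waes, nbes, pwmax, is_water):
    """Return (wada, new_waes) for a single HRU."""
    if is_water:
        return nbes, 0.0          # water areas pass stand precipitation through
    waes += nbes
    wada = max(waes - pwmax * wats, 0.0)  # release exceeding the holding capacity
    return wada, waes - wada
```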
def calc_evb_v1(self):
"""Calculate the actual soil evapotranspiration.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|GrasRef_R|
Required state sequence:
|BoWa|
Required flux sequences:
|EvPo|
|EvI|
Calculated flux sequence:
|EvB|
Basic equations:
:math:`temp = exp(-GrasRef_R \\cdot \\frac{BoWa}{NFk})`
:math:`EvB = (EvPo - EvI) \\cdot
\\frac{1 - temp}{1 + temp -2 \\cdot exp(-GrasRef_R)}`
Examples:
Soil evaporation is calculated neither for water nor for sealed
areas (see the first three HRUs of type |FLUSS|, |SEE|, and |VERS|).
All other land use classes are handled in accordance with a
recommendation of the set of codes described in ATV-DVWK-M 504
(arable land |ACKER| has been selected for the last four HRUs
arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER)
>>> grasref_r(5.0)
>>> nfk(100.0, 100.0, 100.0, 0.0, 100.0, 100.0, 100.0)
>>> fluxes.evpo = 5.0
>>> fluxes.evi = 3.0
>>> states.bowa = 50.0, 50.0, 50.0, 0.0, 0.0, 50.0, 100.0
>>> model.calc_evb_v1()
>>> fluxes.evb
evb(0.0, 0.0, 0.0, 0.0, 0.0, 1.717962, 2.0)
In case the usable field capacity (|NFk|) is zero, soil evaporation
(|EvB|) is generally set to zero (see the fourth HRU). The last
three HRUs demonstrate the rise in soil evaporation with increasing
soil moisture, which levels off in the high soil moisture range.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if (con.lnk[k] in (VERS, WASSER, FLUSS, SEE)) or (con.nfk[k] <= 0.):
flu.evb[k] = 0.
else:
d_temp = modelutils.exp(-con.grasref_r *
sta.bowa[k]/con.nfk[k])
flu.evb[k] = ((flu.evpo[k]-flu.evi[k]) * (1.-d_temp) /
(1.+d_temp-2.*modelutils.exp(-con.grasref_r)))
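The basic equations translate into a short standalone function (a sketch using `math.exp` in place of `modelutils.exp`; `calc_evb` is an illustrative name):

```python
import math

def calc_evb(evpo, evi, bowa, nfk, grasref_r):
    """Actual soil evaporation for a single HRU (sketch)."""
    if nfk <= 0.0:
        return 0.0                # no usable field capacity, no soil evaporation
    temp = math.exp(-grasref_r * bowa / nfk)
    return ((evpo - evi) * (1.0 - temp) /
            (1.0 + temp - 2.0 * math.exp(-grasref_r)))
```

For a saturated soil (`bowa == nfk`), the fraction reduces to one, so soil evaporation equals the unmet remainder |EvPo| - |EvI| (2 mm in the example above).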
def calc_qbb_v1(self):
"""Calculate the amount of base flow released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|Beta|
|FBeta|
Required derived parameters:
|WB|
|WZ|
Required state sequence:
|BoWa|
Calculated flux sequence:
|QBB|
Basic equations:
:math:`Beta_{eff} = \\Bigl \\lbrace
{
{Beta \\ | \\ BoWa \\leq WZ}
\\atop
{Beta \\cdot (1+(FBeta-1)\\cdot\\frac{BoWa-WZ}{NFk-WZ}) \\|\\ BoWa > WZ}
}`
:math:`QBB = \\Bigl \\lbrace
{
{0 \\ | \\ BoWa \\leq WB}
\\atop
{Beta_{eff} \\cdot (BoWa - WB) \\|\\ BoWa > WB}
}`
Examples:
For water and sealed areas, no base flow is calculated (see the
first three HRUs of type |VERS|, |FLUSS|, and |SEE|). No principal
distinction is made between the remaining land use classes (arable
land |ACKER| has been selected for the last five HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(8)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> beta(0.04)
>>> fbeta(2.0)
>>> nfk(100.0, 100.0, 100.0, 0.0, 100.0, 100.0, 100.0, 200.0)
>>> derived.wb(10.0)
>>> derived.wz(70.0)
Note the time dependence of parameter |Beta|:
>>> beta
beta(0.04)
>>> beta.values
array([ 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02])
In the first example, the actual soil water content |BoWa| is set
to low values. For values below the threshold |WB|, no percolation
occurs. Above |WB| (but below |WZ|), |QBB| increases linearly at
a rate defined by parameter |Beta|:
>>> states.bowa = 20.0, 20.0, 20.0, 0.0, 0.0, 10.0, 20.0, 20.0
>>> model.calc_qbb_v1()
>>> fluxes.qbb
qbb(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.2)
Note that for the last two HRUs the same amount of
base flow generation is determined, in spite of the fact
that both exhibit different relative soil moistures. It is
common to modify this "pure absolute dependency" to a "mixed
absolute/relative dependency" through defining the values of
parameter |WB| indirectly via parameter |RelWB|.
In the second example, the actual soil water content |BoWa| is set
to high values. For values below threshold |WZ|, the discussion above
remains valid. For values above |WZ|, percolation shows a nonlinear
behaviour when factor |FBeta| is set to values larger than one:
>>> nfk(0.0, 0.0, 0.0, 100.0, 100.0, 100.0, 100.0, 200.0)
>>> states.bowa = 0.0, 0.0, 0.0, 60.0, 70.0, 80.0, 100.0, 200.0
>>> model.calc_qbb_v1()
>>> fluxes.qbb
qbb(0.0, 0.0, 0.0, 1.0, 1.2, 1.866667, 3.6, 7.6)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if ((con.lnk[k] in (VERS, WASSER, FLUSS, SEE)) or
(sta.bowa[k] <= der.wb[k]) or (con.nfk[k] <= 0.)):
flu.qbb[k] = 0.
elif sta.bowa[k] <= der.wz[k]:
flu.qbb[k] = con.beta[k]*(sta.bowa[k]-der.wb[k])
else:
flu.qbb[k] = (con.beta[k]*(sta.bowa[k]-der.wb[k]) *
(1.+(con.fbeta[k]-1.)*((sta.bowa[k]-der.wz[k]) /
(con.nfk[k]-der.wz[k]))))
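The two-regime behaviour can be reproduced with a single-HRU sketch, assuming the adjusted simulation-step value of |Beta| (0.02 in the example; `calc_qbb` is an illustrative name):

```python
def calc_qbb(bowa, nfk, beta, fbeta, wb, wz):
    """Base flow released from the soil for a single HRU (sketch)."""
    if bowa <= wb or nfk <= 0.0:
        return 0.0
    if bowa <= wz:
        return beta * (bowa - wb)                 # linear range below WZ
    return (beta * (bowa - wb) *                  # nonlinear range above WZ
            (1.0 + (fbeta - 1.0) * (bowa - wz) / (nfk - wz)))
```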
def calc_qib1_v1(self):
"""Calculate the first interflow component released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|DMin|
Required derived parameter:
|WB|
Required state sequence:
|BoWa|
Calculated flux sequence:
|QIB1|
Basic equation:
:math:`QIB1 = DMin \\cdot \\frac{BoWa}{NFk}`
Examples:
For water and sealed areas, no interflow is calculated (the first
three HRUs are of type |FLUSS|, |SEE|, and |VERS|, respectively).
No principal distinction is made between the remaining land use
classes (arable land |ACKER| has been selected for the last five
HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(8)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> dmax(10.0)
>>> dmin(4.0)
>>> nfk(101.0, 101.0, 101.0, 0.0, 101.0, 101.0, 101.0, 202.0)
>>> derived.wb(10.0)
>>> states.bowa = 10.1, 10.1, 10.1, 0.0, 0.0, 10.0, 10.1, 10.1
Note the time dependence of parameter |DMin|:
>>> dmin
dmin(4.0)
>>> dmin.values
array([ 2., 2., 2., 2., 2., 2., 2., 2.])
Compared to the calculation of |QBB|, the following results show
some relevant differences:
>>> model.calc_qib1_v1()
>>> fluxes.qib1
qib1(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.1)
Firstly, as demonstrated with the help of the seventh and the
eighth HRU, the generation of the first interflow component |QIB1|
depends on relative soil moisture. Secondly, as demonstrated with
the help of the sixth and seventh HRU, it starts abruptly whenever
the slightest exceedance of the threshold parameter |WB| occurs.
Such sharp discontinuities are a potential source of trouble.
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if ((con.lnk[k] in (VERS, WASSER, FLUSS, SEE)) or
(sta.bowa[k] <= der.wb[k])):
flu.qib1[k] = 0.
else:
flu.qib1[k] = con.dmin[k]*(sta.bowa[k]/con.nfk[k])
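Both points above are easy to see in a single-HRU sketch (plain floats and the adjusted |DMin| value of 2; `calc_qib1` is an illustrative name):

```python
def calc_qib1(bowa, nfk, dmin, wb):
    """First interflow component for a single HRU (sketch)."""
    if bowa <= wb:
        return 0.0                 # sharp threshold at WB
    return dmin * bowa / nfk       # depends on *relative* soil moisture
```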
def calc_qib2_v1(self):
"""Calculate the second interflow component released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|DMin|
|DMax|
Required derived parameter:
|WZ|
Required state sequence:
|BoWa|
Calculated flux sequence:
|QIB2|
Basic equation:
:math:`QIB2 = (DMax-DMin) \\cdot
(\\frac{BoWa-WZ}{NFk-WZ})^\\frac{3}{2}`
Examples:
For water and sealed areas, no interflow is calculated (the first
three HRUs are of type |FLUSS|, |SEE|, and |VERS|, respectively).
No principal distinction is made between the remaining land use
classes (arable land |ACKER| has been selected for the last
five HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(8)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> dmax(10.0)
>>> dmin(4.0)
>>> nfk(100.0, 100.0, 100.0, 50.0, 100.0, 100.0, 100.0, 200.0)
>>> derived.wz(50.0)
>>> states.bowa = 100.0, 100.0, 100.0, 50.1, 50.0, 75.0, 100.0, 100.0
Note the time dependence of parameters |DMin| (see the example above)
and |DMax|:
>>> dmax
dmax(10.0)
>>> dmax.values
array([ 5., 5., 5., 5., 5., 5., 5., 5.])
The following results show that the calculation of |QIB2|
resembles those of |QBB| and |QIB1| in some regards:
>>> model.calc_qib2_v1()
>>> fluxes.qib2
qib2(0.0, 0.0, 0.0, 0.0, 0.0, 1.06066, 3.0, 0.57735)
In the given example, the maximum rate of total interflow
generation is 5 mm/12h (parameter |DMax|). For the seventh zone,
which contains a saturated soil, the value calculated for the
second interflow component (|QIB2|) is 3 mm/12h. The "missing"
value of 2 mm/12h is calculated by method |calc_qib1_v1|.
(The fourth zone, which is slightly oversaturated, is only intended
to demonstrate that zero division due to |NFk| = |WZ| is circumvented.)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if ((con.lnk[k] in (VERS, WASSER, FLUSS, SEE)) or
(sta.bowa[k] <= der.wz[k]) or (con.nfk[k] <= der.wz[k])):
flu.qib2[k] = 0.
else:
flu.qib2[k] = ((con.dmax[k]-con.dmin[k]) *
((sta.bowa[k]-der.wz[k]) /
(con.nfk[k]-der.wz[k]))**1.5)
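The nonlinear basic equation, including the |NFk| = |WZ| guard, can be sketched for a single HRU (adjusted simulation-step values `dmin` = 2 and `dmax` = 5 as in the example; `calc_qib2` is an illustrative name):

```python
def calc_qib2(bowa, nfk, dmin, dmax, wz):
    """Second interflow component for a single HRU (sketch)."""
    if bowa <= wz or nfk <= wz:    # second test also avoids division by zero
        return 0.0
    return (dmax - dmin) * ((bowa - wz) / (nfk - wz)) ** 1.5
```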
def calc_qdb_v1(self):
"""Calculate direct runoff released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|BSf|
Required state sequence:
|BoWa|
Required flux sequence:
|WaDa|
Calculated flux sequence:
|QDB|
Basic equations:
:math:`QDB = \\Bigl \\lbrace
{
{max(Exz, 0) \\ | \\ SfA \\leq 0}
\\atop
{max(Exz + NFk \\cdot SfA^{BSf+1}, 0) \\ | \\ SfA > 0}
}`
:math:`SFA = (1 - \\frac{BoWa}{NFk})^\\frac{1}{BSf+1} -
\\frac{WaDa}{(BSf+1) \\cdot NFk}`
:math:`Exz = (BoWa + WaDa) - NFk`
Examples:
For water areas (|FLUSS| and |SEE|), sealed areas (|VERS|), and
areas without any soil storage capacity, all water is completely
routed as direct runoff |QDB| (see the first four HRUs). No
principal distinction is made between the remaining land use
classes (arable land |ACKER| has been selected for the last five
HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(9)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> bsf(0.4)
>>> nfk(100.0, 100.0, 100.0, 0.0, 100.0, 100.0, 100.0, 100.0, 100.0)
>>> fluxes.wada = 10.0
>>> states.bowa = (
... 100.0, 100.0, 100.0, 0.0, -0.1, 0.0, 50.0, 100.0, 100.1)
>>> model.calc_qdb_v1()
>>> fluxes.qdb
qdb(10.0, 10.0, 10.0, 10.0, 0.142039, 0.144959, 1.993649, 10.0, 10.1)
With the common |BSf| value of 0.4, the discharge coefficient
increases more or less exponentially with soil moisture.
For soil moisture values slightly below zero or above usable
field capacity, plausible amounts of generated direct runoff
are ensured.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
aid = self.sequences.aides.fastaccess
for k in range(con.nhru):
if con.lnk[k] == WASSER:
flu.qdb[k] = 0.
elif ((con.lnk[k] in (VERS, FLUSS, SEE)) or
(con.nfk[k] <= 0.)):
flu.qdb[k] = flu.wada[k]
else:
if sta.bowa[k] < con.nfk[k]:
aid.sfa[k] = (
(1.-sta.bowa[k]/con.nfk[k])**(1./(con.bsf[k]+1.)) -
(flu.wada[k]/((con.bsf[k]+1.)*con.nfk[k])))
else:
aid.sfa[k] = 0.
aid.exz[k] = sta.bowa[k]+flu.wada[k]-con.nfk[k]
flu.qdb[k] = aid.exz[k]
if aid.sfa[k] > 0.:
flu.qdb[k] += aid.sfa[k]**(con.bsf[k]+1.)*con.nfk[k]
flu.qdb[k] = max(flu.qdb[k], 0.)
|
Calculate direct runoff released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|BSf|
Required state sequence:
|BoWa|
Required flux sequence:
|WaDa|
Calculated flux sequence:
|QDB|
Basic equations:
:math:`QDB = \\Bigl \\lbrace
{
{max(Exz, 0) \\ | \\ SfA \\leq 0}
\\atop
{max(Exz + NFk \\cdot SfA^{BSf+1}, 0) \\ | \\ SfA > 0}
}`
:math:`SFA = (1 - \\frac{BoWa}{NFk})^\\frac{1}{BSf+1} -
\\frac{WaDa}{(BSf+1) \\cdot NFk}`
:math:`Exz = (BoWa + WaDa) - NFk`
Examples:
For water areas (|FLUSS| and |SEE|), sealed areas (|VERS|), and
areas without any soil storage capacity, all water is completely
routed as direct runoff |QDB| (see the first four HRUs). No
principal distinction is made between the remaining land use
classes (arable land |ACKER| has been selected for the last five
HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(9)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> bsf(0.4)
>>> nfk(100.0, 100.0, 100.0, 0.0, 100.0, 100.0, 100.0, 100.0, 100.0)
>>> fluxes.wada = 10.0
>>> states.bowa = (
... 100.0, 100.0, 100.0, 0.0, -0.1, 0.0, 50.0, 100.0, 100.1)
>>> model.calc_qdb_v1()
>>> fluxes.qdb
qdb(10.0, 10.0, 10.0, 10.0, 0.142039, 0.144959, 1.993649, 10.0, 10.1)
With the common |BSf| value of 0.4, the discharge coefficient
increases more or less exponentially with soil moisture.
For soil moisture values slightly below zero or above usable
field capacity, plausible amounts of generated direct runoff
are ensured.
|
entailment
|
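The three basic equations of |calc_qdb_v1| can be condensed into one scalar helper (a sketch assuming the caller has already handled water areas, sealed areas, and |NFk| <= 0, as the loop in the method body does):

```python
def calc_qdb(bowa, wada, nfk, bsf):
    """Direct runoff for a single soil HRU, following the basic
    equations of |calc_qdb_v1| (sketch)."""
    if bowa < nfk:
        # Saturated fraction of the subarea, reduced by the inflow term.
        sfa = ((1.0 - bowa / nfk) ** (1.0 / (bsf + 1.0)) -
               wada / ((bsf + 1.0) * nfk))
    else:
        sfa = 0.0
    exz = bowa + wada - nfk  # excess above usable field capacity
    qdb = exz
    if sfa > 0.0:
        qdb += sfa ** (bsf + 1.0) * nfk
    return max(qdb, 0.0)
```

For the example values above, `calc_qdb(50.0, 10.0, 100.0, 0.4)` reproduces the 1.993649 mm/12h of the seventh HRU.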
def calc_bowa_v1(self):
"""Update soil moisture and correct fluxes if necessary.
Required control parameters:
|NHRU|
|Lnk|
Required flux sequence:
|WaDa|
Updated state sequence:
|BoWa|
Required (and eventually corrected) flux sequences:
|EvB|
|QBB|
|QIB1|
|QIB2|
|QDB|
Basic equations:
:math:`\\frac{dBoWa}{dt} = WaDa - EvB - QBB - QIB1 - QIB2 - QDB`
:math:`BoWa \\geq 0`
Examples:
For water areas (|FLUSS| and |SEE|) and sealed areas (|VERS|),
soil moisture |BoWa| is simply set to zero and no flux corrections
are performed (see the first three HRUs). No principal distinction
is made between the remaining land use classes (arable land |ACKER|
has been selected for the last four HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER)
>>> states.bowa = 2.0
>>> fluxes.wada = 1.0
>>> fluxes.evb = 1.0, 1.0, 1.0, 0.0, 0.1, 0.2, 0.3
>>> fluxes.qbb = 1.0, 1.0, 1.0, 0.0, 0.2, 0.4, 0.6
>>> fluxes.qib1 = 1.0, 1.0, 1.0, 0.0, 0.3, 0.6, 0.9
>>> fluxes.qib2 = 1.0, 1.0, 1.0, 0.0, 0.4, 0.8, 1.2
>>> fluxes.qdb = 1.0, 1.0, 1.0, 0.0, 0.5, 1.0, 1.5
>>> model.calc_bowa_v1()
>>> states.bowa
bowa(0.0, 0.0, 0.0, 3.0, 1.5, 0.0, 0.0)
>>> fluxes.evb
evb(1.0, 1.0, 1.0, 0.0, 0.1, 0.2, 0.2)
>>> fluxes.qbb
qbb(1.0, 1.0, 1.0, 0.0, 0.2, 0.4, 0.4)
>>> fluxes.qib1
qib1(1.0, 1.0, 1.0, 0.0, 0.3, 0.6, 0.6)
>>> fluxes.qib2
qib2(1.0, 1.0, 1.0, 0.0, 0.4, 0.8, 0.8)
>>> fluxes.qdb
qdb(1.0, 1.0, 1.0, 0.0, 0.5, 1.0, 1.0)
For the seventh HRU, the original total loss terms would result in a
negative soil moisture value. Hence, it is reduced to the total loss
term of the sixth HRU, which results in a complete emptying
of the soil storage.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
aid = self.sequences.aides.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (VERS, WASSER, FLUSS, SEE):
sta.bowa[k] = 0.
else:
aid.bvl[k] = (
flu.evb[k]+flu.qbb[k]+flu.qib1[k]+flu.qib2[k]+flu.qdb[k])
aid.mvl[k] = sta.bowa[k]+flu.wada[k]
if aid.bvl[k] > aid.mvl[k]:
aid.rvl[k] = aid.mvl[k]/aid.bvl[k]
flu.evb[k] *= aid.rvl[k]
flu.qbb[k] *= aid.rvl[k]
flu.qib1[k] *= aid.rvl[k]
flu.qib2[k] *= aid.rvl[k]
flu.qdb[k] *= aid.rvl[k]
sta.bowa[k] = 0.
else:
sta.bowa[k] = aid.mvl[k]-aid.bvl[k]
|
Update soil moisture and correct fluxes if necessary.
Required control parameters:
|NHRU|
|Lnk|
Required flux sequence:
|WaDa|
Updated state sequence:
|BoWa|
Required (and eventually corrected) flux sequences:
|EvB|
|QBB|
|QIB1|
|QIB2|
|QDB|
Basic equations:
:math:`\\frac{dBoWa}{dt} = WaDa - EvB - QBB - QIB1 - QIB2 - QDB`
:math:`BoWa \\geq 0`
Examples:
For water areas (|FLUSS| and |SEE|) and sealed areas (|VERS|),
soil moisture |BoWa| is simply set to zero and no flux corrections
are performed (see the first three HRUs). No principal distinction
is made between the remaining land use classes (arable land |ACKER|
has been selected for the last four HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER)
>>> states.bowa = 2.0
>>> fluxes.wada = 1.0
>>> fluxes.evb = 1.0, 1.0, 1.0, 0.0, 0.1, 0.2, 0.3
>>> fluxes.qbb = 1.0, 1.0, 1.0, 0.0, 0.2, 0.4, 0.6
>>> fluxes.qib1 = 1.0, 1.0, 1.0, 0.0, 0.3, 0.6, 0.9
>>> fluxes.qib2 = 1.0, 1.0, 1.0, 0.0, 0.4, 0.8, 1.2
>>> fluxes.qdb = 1.0, 1.0, 1.0, 0.0, 0.5, 1.0, 1.5
>>> model.calc_bowa_v1()
>>> states.bowa
bowa(0.0, 0.0, 0.0, 3.0, 1.5, 0.0, 0.0)
>>> fluxes.evb
evb(1.0, 1.0, 1.0, 0.0, 0.1, 0.2, 0.2)
>>> fluxes.qbb
qbb(1.0, 1.0, 1.0, 0.0, 0.2, 0.4, 0.4)
>>> fluxes.qib1
qib1(1.0, 1.0, 1.0, 0.0, 0.3, 0.6, 0.6)
>>> fluxes.qib2
qib2(1.0, 1.0, 1.0, 0.0, 0.4, 0.8, 0.8)
>>> fluxes.qdb
qdb(1.0, 1.0, 1.0, 0.0, 0.5, 1.0, 1.0)
For the seventh HRU, the original total loss terms would result in a
negative soil moisture value. Hence, it is reduced to the total loss
term of the sixth HRU, which results in a complete emptying
of the soil storage.
|
entailment
|
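The proportional flux reduction performed by |calc_bowa_v1| can be isolated into a small helper (a sketch with hypothetical names; the model stores the intermediate sums in aide sequences instead of returning them):

```python
def correct_fluxes(bowa, wada, losses):
    """Scale the loss terms (evb, qbb, qib1, qib2, qdb) down
    proportionally whenever their sum exceeds the available water, so
    that soil moisture never drops below zero (sketch of the water
    balance correction in |calc_bowa_v1|)."""
    total_loss = sum(losses)
    available = bowa + wada
    if total_loss > available:
        factor = available / total_loss
        return 0.0, [loss * factor for loss in losses]
    return available - total_loss, list(losses)
```

Applied to the seventh HRU of the example above, the five loss terms 0.3...1.5 are scaled by 3.0/4.5, reproducing the corrected values 0.2...1.0 and an empty soil storage.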
def calc_qbgz_v1(self):
"""Aggregate the amount of base flow released by all "soil type" HRUs
and the "net precipitation" above water areas of type |SEE|.
Water areas of type |SEE| are assumed to be directly connected with
groundwater, but not with the stream network. This is modelled by
adding their (positive or negative) "net input" (|NKor|-|EvI|) to the
"percolation output" of the soil containing HRUs.
Required control parameters:
|Lnk|
|NHRU|
|FHRU|
Required flux sequences:
|QBB|
|NKor|
|EvI|
Calculated state sequence:
|QBGZ|
Basic equation:
:math:`QBGZ = \\Sigma(FHRU \\cdot QBB) +
\\Sigma(FHRU \\cdot (NKor_{SEE}-EvI_{SEE}))`
Examples:
The first example shows that |QBGZ| is the area weighted sum of
|QBB| from "soil type" HRUs like arable land (|ACKER|) and of
|NKor|-|EvI| from water areas of type |SEE|. All other water
areas (|WASSER| and |FLUSS|) and also sealed surfaces (|VERS|)
have no impact on |QBGZ|:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(6)
>>> lnk(ACKER, ACKER, VERS, WASSER, FLUSS, SEE)
>>> fhru(0.1, 0.2, 0.1, 0.1, 0.1, 0.4)
>>> fluxes.qbb = 2., 4.0, 300.0, 300.0, 300.0, 300.0
>>> fluxes.nkor = 200.0, 200.0, 200.0, 200.0, 200.0, 20.0
>>> fluxes.evi = 100.0, 100.0, 100.0, 100.0, 100.0, 10.0
>>> model.calc_qbgz_v1()
>>> states.qbgz
qbgz(5.0)
The second example shows that large evaporation values above a
HRU of type |SEE| can result in negative values of |QBGZ|:
>>> fluxes.evi[5] = 30
>>> model.calc_qbgz_v1()
>>> states.qbgz
qbgz(-3.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
sta.qbgz = 0.
for k in range(con.nhru):
if con.lnk[k] == SEE:
sta.qbgz += con.fhru[k]*(flu.nkor[k]-flu.evi[k])
elif con.lnk[k] not in (WASSER, FLUSS, VERS):
sta.qbgz += con.fhru[k]*flu.qbb[k]
|
Aggregate the amount of base flow released by all "soil type" HRUs
and the "net precipitation" above water areas of type |SEE|.
Water areas of type |SEE| are assumed to be directly connected with
groundwater, but not with the stream network. This is modelled by
adding their (positive or negative) "net input" (|NKor|-|EvI|) to the
"percolation output" of the soil containing HRUs.
Required control parameters:
|Lnk|
|NHRU|
|FHRU|
Required flux sequences:
|QBB|
|NKor|
|EvI|
Calculated state sequence:
|QBGZ|
Basic equation:
:math:`QBGZ = \\Sigma(FHRU \\cdot QBB) +
\\Sigma(FHRU \\cdot (NKor_{SEE}-EvI_{SEE}))`
Examples:
The first example shows that |QBGZ| is the area weighted sum of
|QBB| from "soil type" HRUs like arable land (|ACKER|) and of
|NKor|-|EvI| from water areas of type |SEE|. All other water
areas (|WASSER| and |FLUSS|) and also sealed surfaces (|VERS|)
have no impact on |QBGZ|:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(6)
>>> lnk(ACKER, ACKER, VERS, WASSER, FLUSS, SEE)
>>> fhru(0.1, 0.2, 0.1, 0.1, 0.1, 0.4)
>>> fluxes.qbb = 2., 4.0, 300.0, 300.0, 300.0, 300.0
>>> fluxes.nkor = 200.0, 200.0, 200.0, 200.0, 200.0, 20.0
>>> fluxes.evi = 100.0, 100.0, 100.0, 100.0, 100.0, 10.0
>>> model.calc_qbgz_v1()
>>> states.qbgz
qbgz(5.0)
The second example shows that large evaporation values above a
HRU of type |SEE| can result in negative values of |QBGZ|:
>>> fluxes.evi[5] = 30
>>> model.calc_qbgz_v1()
>>> states.qbgz
qbgz(-3.0)
|
entailment
|
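The aggregation rule of |calc_qbgz_v1| boils down to an area-weighted sum with two special cases. A minimal sketch (the integer land-use codes are hypothetical placeholders for the lland constants |ACKER|, |VERS|, |WASSER|, |FLUSS|, and |SEE|):

```python
# Hypothetical stand-ins for the lland land-use constants:
ACKER, VERS, WASSER, FLUSS, SEE = 1, 2, 3, 4, 5

def calc_qbgz(lnk, fhru, qbb, nkor, evi):
    """Area-weighted base flow inflow, following the basic equation
    of |calc_qbgz_v1| (sketch)."""
    qbgz = 0.0
    for k, land in enumerate(lnk):
        if land == SEE:
            # |SEE| areas feed their net input directly into groundwater.
            qbgz += fhru[k] * (nkor[k] - evi[k])
        elif land not in (WASSER, FLUSS, VERS):
            qbgz += fhru[k] * qbb[k]
    return qbgz
```

With the input values of the first example above, the sketch reproduces qbgz(5.0); raising the |EvI| value of the |SEE| HRU to 30 reproduces qbgz(-3.0).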
def calc_qigz1_v1(self):
"""Aggregate the amount of the first interflow component released
by all HRUs.
Required control parameters:
|NHRU|
|FHRU|
Required flux sequence:
|QIB1|
Calculated state sequence:
|QIGZ1|
Basic equation:
:math:`QIGZ1 = \\Sigma(FHRU \\cdot QIB1)`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(2)
>>> fhru(0.75, 0.25)
>>> fluxes.qib1 = 1.0, 5.0
>>> model.calc_qigz1_v1()
>>> states.qigz1
qigz1(2.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
sta.qigz1 = 0.
for k in range(con.nhru):
sta.qigz1 += con.fhru[k]*flu.qib1[k]
|
Aggregate the amount of the first interflow component released
by all HRUs.
Required control parameters:
|NHRU|
|FHRU|
Required flux sequence:
|QIB1|
Calculated state sequence:
|QIGZ1|
Basic equation:
:math:`QIGZ1 = \\Sigma(FHRU \\cdot QIB1)`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(2)
>>> fhru(0.75, 0.25)
>>> fluxes.qib1 = 1.0, 5.0
>>> model.calc_qigz1_v1()
>>> states.qigz1
qigz1(2.0)
|
entailment
|
def calc_qigz2_v1(self):
"""Aggregate the amount of the second interflow component released
by all HRUs.
Required control parameters:
|NHRU|
|FHRU|
Required flux sequence:
|QIB2|
Calculated state sequence:
|QIGZ2|
Basic equation:
:math:`QIGZ2 = \\Sigma(FHRU \\cdot QIB2)`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(2)
>>> fhru(0.75, 0.25)
>>> fluxes.qib2 = 1.0, 5.0
>>> model.calc_qigz2_v1()
>>> states.qigz2
qigz2(2.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
sta.qigz2 = 0.
for k in range(con.nhru):
sta.qigz2 += con.fhru[k]*flu.qib2[k]
|
Aggregate the amount of the second interflow component released
by all HRUs.
Required control parameters:
|NHRU|
|FHRU|
Required flux sequence:
|QIB2|
Calculated state sequence:
|QIGZ2|
Basic equation:
:math:`QIGZ2 = \\Sigma(FHRU \\cdot QIB2)`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(2)
>>> fhru(0.75, 0.25)
>>> fluxes.qib2 = 1.0, 5.0
>>> model.calc_qigz2_v1()
>>> states.qigz2
qigz2(2.0)
|
entailment
|
def calc_qdgz_v1(self):
"""Aggregate the amount of total direct flow released by all HRUs.
Required control parameters:
|Lnk|
|NHRU|
|FHRU|
Required flux sequences:
|QDB|
|NKor|
|EvI|
Calculated flux sequence:
|QDGZ|
Basic equation:
:math:`QDGZ = \\Sigma(FHRU \\cdot QDB) +
\\Sigma(FHRU \\cdot (NKor_{FLUSS}-EvI_{FLUSS}))`
Examples:
The first example shows that |QDGZ| is the area weighted sum of
|QDB| from "land type" HRUs like arable land (|ACKER|) and sealed
surfaces (|VERS|) as well as of |NKor|-|EvI| from water areas of
type |FLUSS|. Water areas of type |WASSER| and |SEE| have no
impact on |QDGZ|:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(5)
>>> lnk(ACKER, VERS, WASSER, SEE, FLUSS)
>>> fhru(0.1, 0.2, 0.1, 0.2, 0.4)
>>> fluxes.qdb = 2., 4.0, 300.0, 300.0, 300.0
>>> fluxes.nkor = 200.0, 200.0, 200.0, 200.0, 20.0
>>> fluxes.evi = 100.0, 100.0, 100.0, 100.0, 10.0
>>> model.calc_qdgz_v1()
>>> fluxes.qdgz
qdgz(5.0)
The second example shows that large evaporation values above a
HRU of type |FLUSS| can result in negative values of |QDGZ|:
>>> fluxes.evi[4] = 30
>>> model.calc_qdgz_v1()
>>> fluxes.qdgz
qdgz(-3.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
flu.qdgz = 0.
for k in range(con.nhru):
if con.lnk[k] == FLUSS:
flu.qdgz += con.fhru[k]*(flu.nkor[k]-flu.evi[k])
elif con.lnk[k] not in (WASSER, SEE):
flu.qdgz += con.fhru[k]*flu.qdb[k]
|
Aggregate the amount of total direct flow released by all HRUs.
Required control parameters:
|Lnk|
|NHRU|
|FHRU|
Required flux sequences:
|QDB|
|NKor|
|EvI|
Calculated flux sequence:
|QDGZ|
Basic equation:
:math:`QDGZ = \\Sigma(FHRU \\cdot QDB) +
\\Sigma(FHRU \\cdot (NKor_{FLUSS}-EvI_{FLUSS}))`
Examples:
The first example shows that |QDGZ| is the area weighted sum of
|QDB| from "land type" HRUs like arable land (|ACKER|) and sealed
surfaces (|VERS|) as well as of |NKor|-|EvI| from water areas of
type |FLUSS|. Water areas of type |WASSER| and |SEE| have no
impact on |QDGZ|:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(5)
>>> lnk(ACKER, VERS, WASSER, SEE, FLUSS)
>>> fhru(0.1, 0.2, 0.1, 0.2, 0.4)
>>> fluxes.qdb = 2., 4.0, 300.0, 300.0, 300.0
>>> fluxes.nkor = 200.0, 200.0, 200.0, 200.0, 20.0
>>> fluxes.evi = 100.0, 100.0, 100.0, 100.0, 10.0
>>> model.calc_qdgz_v1()
>>> fluxes.qdgz
qdgz(5.0)
The second example shows that large evaporation values above a
HRU of type |FLUSS| can result in negative values of |QDGZ|:
>>> fluxes.evi[4] = 30
>>> model.calc_qdgz_v1()
>>> fluxes.qdgz
qdgz(-3.0)
|
entailment
|
def calc_qdgz1_qdgz2_v1(self):
"""Seperate total direct flow into a small and a fast component.
Required control parameters:
|A1|
|A2|
Required flux sequence:
|QDGZ|
Calculated state sequences:
|QDGZ1|
|QDGZ2|
Basic equations:
:math:`QDGZ2 = \\frac{(QDGZ-A2)^2}{QDGZ+A1-A2}`
:math:`QDGZ1 = QDGZ - QDGZ2`
Examples:
The formula for calculating the amount of the fast component of
direct flow is borrowed from the famous curve number approach.
Parameter |A2| would be the initial loss and parameter |A1| the
maximum storage, but one should not take this analogy too seriously.
Instead, with the value of parameter |A1| set to zero, parameter
|A2| just defines the maximum amount of "slow" direct runoff per
time step:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> a1(0.0)
Let us set the value of |A2| to 4 mm/d, which is 2 mm/12h with
respect to the selected simulation step size:
>>> a2(4.0)
>>> a2
a2(4.0)
>>> a2.value
2.0
Define a test function and let it calculate |QDGZ1| and |QDGZ2| for
values of |QDGZ| ranging from -10 to 100 mm/12h:
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_qdgz1_qdgz2_v1,
... last_example=6,
... parseqs=(fluxes.qdgz,
... states.qdgz1,
... states.qdgz2))
>>> test.nexts.qdgz = -10.0, 0.0, 1.0, 2.0, 3.0, 100.0
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
-------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 1.0 | 0.0 |
| 4 | 2.0 | 2.0 | 0.0 |
| 5 | 3.0 | 2.0 | 1.0 |
| 6 | 100.0 | 2.0 | 98.0 |
Setting |A2| to zero and |A1| to 4 mm/d (or 2 mm/12h) results in
a smoother transition:
>>> a2(0.0)
>>> a1(4.0)
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
--------------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 0.666667 | 0.333333 |
| 4 | 2.0 | 1.0 | 1.0 |
| 5 | 3.0 | 1.2 | 1.8 |
| 6 | 100.0 | 1.960784 | 98.039216 |
Alternatively, one can mix these two configurations by setting
the values of both parameters to 2 mm/d (or 1 mm/12h):
>>> a2(2.0)
>>> a1(2.0)
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
-------------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 1.0 | 0.0 |
| 4 | 2.0 | 1.5 | 0.5 |
| 5 | 3.0 | 1.666667 | 1.333333 |
| 6 | 100.0 | 1.99 | 98.01 |
Note the similarity of the results for very high values of total
direct flow |QDGZ| in all three examples: |QDGZ1| converges to the
sum of the values of parameters |A1| and |A2|, which represents the
maximum amount of "slow" direct flow generation per simulation step.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
if flu.qdgz > con.a2:
sta.qdgz2 = (flu.qdgz-con.a2)**2/(flu.qdgz+con.a1-con.a2)
sta.qdgz1 = flu.qdgz-sta.qdgz2
else:
sta.qdgz2 = 0.
sta.qdgz1 = flu.qdgz
|
Separate total direct flow into a slow and a fast component.
Required control parameters:
|A1|
|A2|
Required flux sequence:
|QDGZ|
Calculated state sequences:
|QDGZ1|
|QDGZ2|
Basic equations:
:math:`QDGZ2 = \\frac{(QDGZ-A2)^2}{QDGZ+A1-A2}`
:math:`QDGZ1 = QDGZ - QDGZ2`
Examples:
The formula for calculating the amount of the fast component of
direct flow is borrowed from the famous curve number approach.
Parameter |A2| would be the initial loss and parameter |A1| the
maximum storage, but one should not take this analogy too seriously.
Instead, with the value of parameter |A1| set to zero, parameter
|A2| just defines the maximum amount of "slow" direct runoff per
time step:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> a1(0.0)
Let us set the value of |A2| to 4 mm/d, which is 2 mm/12h with
respect to the selected simulation step size:
>>> a2(4.0)
>>> a2
a2(4.0)
>>> a2.value
2.0
Define a test function and let it calculate |QDGZ1| and |QDGZ2| for
values of |QDGZ| ranging from -10 to 100 mm/12h:
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_qdgz1_qdgz2_v1,
... last_example=6,
... parseqs=(fluxes.qdgz,
... states.qdgz1,
... states.qdgz2))
>>> test.nexts.qdgz = -10.0, 0.0, 1.0, 2.0, 3.0, 100.0
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
-------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 1.0 | 0.0 |
| 4 | 2.0 | 2.0 | 0.0 |
| 5 | 3.0 | 2.0 | 1.0 |
| 6 | 100.0 | 2.0 | 98.0 |
Setting |A2| to zero and |A1| to 4 mm/d (or 2 mm/12h) results in
a smoother transition:
>>> a2(0.0)
>>> a1(4.0)
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
--------------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 0.666667 | 0.333333 |
| 4 | 2.0 | 1.0 | 1.0 |
| 5 | 3.0 | 1.2 | 1.8 |
| 6 | 100.0 | 1.960784 | 98.039216 |
Alternatively, one can mix these two configurations by setting
the values of both parameters to 2 mm/d (or 1 mm/12h):
>>> a2(2.0)
>>> a1(2.0)
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
-------------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 1.0 | 0.0 |
| 4 | 2.0 | 1.5 | 0.5 |
| 5 | 3.0 | 1.666667 | 1.333333 |
| 6 | 100.0 | 1.99 | 98.01 |
Note the similarity of the results for very high values of total
direct flow |QDGZ| in all three examples: |QDGZ1| converges to the
sum of the values of parameters |A1| and |A2|, which represents the
maximum amount of "slow" direct flow generation per simulation step.
|
entailment
|
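The separation rule of |calc_qdgz1_qdgz2_v1| is compact enough for a direct sketch (`separate_qdgz` is a hypothetical name; the model writes the results to the state sequences instead of returning them, and it expects the internal per-simulation-step values of |A1| and |A2|):

```python
def separate_qdgz(qdgz, a1, a2):
    """Split total direct flow into its slow (first) and fast (second)
    component, following the curve-number-like basic equations of
    |calc_qdgz1_qdgz2_v1| (sketch)."""
    if qdgz > a2:
        qdgz2 = (qdgz - a2) ** 2 / (qdgz + a1 - a2)
        return qdgz - qdgz2, qdgz2
    # At or below the threshold a2, everything is slow direct flow.
    return qdgz, 0.0
```

With the internal 12h values of the first configuration (a1 = 0.0, a2 = 2.0), `separate_qdgz(3.0, 0.0, 2.0)` reproduces the table row (2.0, 1.0).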
def calc_qbga_v1(self):
"""Perform the runoff concentration calculation for base flow.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KB|
Required flux sequence:
|QBGZ|
Calculated state sequence:
|QBGA|
Basic equation:
:math:`QBGA_{neu} = QBGA_{alt} +
(QBGZ_{alt}-QBGA_{alt}) \\cdot (1-exp(-KB^{-1})) +
(QBGZ_{neu}-QBGZ_{alt}) \\cdot (1-KB\\cdot(1-exp(-KB^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.kb(0.1)
>>> states.qbgz.old = 2.0
>>> states.qbgz.new = 4.0
>>> states.qbga.old = 3.0
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.kb(0.0)
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.kb(1e500)
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(5.0)
"""
der = self.parameters.derived.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
if der.kb <= 0.:
new.qbga = new.qbgz
elif der.kb > 1e200:
new.qbga = old.qbga+new.qbgz-old.qbgz
else:
d_temp = (1.-modelutils.exp(-1./der.kb))
new.qbga = (old.qbga +
(old.qbgz-old.qbga)*d_temp +
(new.qbgz-old.qbgz)*(1.-der.kb*d_temp))
|
Perform the runoff concentration calculation for base flow.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KB|
Required flux sequence:
|QBGZ|
Calculated state sequence:
|QBGA|
Basic equation:
:math:`QBGA_{neu} = QBGA_{alt} +
(QBGZ_{alt}-QBGA_{alt}) \\cdot (1-exp(-KB^{-1})) +
(QBGZ_{neu}-QBGZ_{alt}) \\cdot (1-KB\\cdot(1-exp(-KB^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.kb(0.1)
>>> states.qbgz.old = 2.0
>>> states.qbgz.new = 4.0
>>> states.qbga.old = 3.0
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.kb(0.0)
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.kb(1e500)
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(5.0)
|
entailment
|
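Since |calc_qbga_v1|, |calc_qiga1_v1|, |calc_qiga2_v1|, |calc_qdga1_v1|, and |calc_qdga2_v1| all apply the same working equation, it can be factored into one scalar helper (a sketch with a hypothetical name; the guards reproduce the zero-division and overflow protections of the model code):

```python
from math import exp

def linear_storage_step(k, inflow_old, inflow_new, outflow_old):
    """Analytical solution of the linear storage equation, assuming a
    linear change in inflow during the simulation step (sketch of the
    working equation shared by |calc_qbga_v1| and its siblings)."""
    if k <= 0.0:
        # Zero retention: outflow follows inflow instantaneously.
        return inflow_new
    if k > 1e200:
        # Quasi-infinite retention: avoid numerical overflow in exp().
        return outflow_old + inflow_new - inflow_old
    temp = 1.0 - exp(-1.0 / k)
    return (outflow_old +
            (inflow_old - outflow_old) * temp +
            (inflow_new - inflow_old) * (1.0 - k * temp))
```

The three doctest cases of each method correspond to `linear_storage_step(0.1, 2.0, 4.0, 3.0)`, the zero-retention guard, and the overflow guard.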
def calc_qiga1_v1(self):
"""Perform the runoff concentration calculation for the first
interflow component.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KI1|
Required state sequence:
|QIGZ1|
Calculated state sequence:
|QIGA1|
Basic equation:
:math:`QIGA1_{neu} = QIGA1_{alt} +
(QIGZ1_{alt}-QIGA1_{alt}) \\cdot (1-exp(-KI1^{-1})) +
(QIGZ1_{neu}-QIGZ1_{alt}) \\cdot (1-KI1\\cdot(1-exp(-KI1^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.ki1(0.1)
>>> states.qigz1.old = 2.0
>>> states.qigz1.new = 4.0
>>> states.qiga1.old = 3.0
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.ki1(0.0)
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.ki1(1e500)
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(5.0)
"""
der = self.parameters.derived.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
if der.ki1 <= 0.:
new.qiga1 = new.qigz1
elif der.ki1 > 1e200:
new.qiga1 = old.qiga1+new.qigz1-old.qigz1
else:
d_temp = (1.-modelutils.exp(-1./der.ki1))
new.qiga1 = (old.qiga1 +
(old.qigz1-old.qiga1)*d_temp +
(new.qigz1-old.qigz1)*(1.-der.ki1*d_temp))
|
Perform the runoff concentration calculation for the first
interflow component.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KI1|
Required state sequence:
|QIGZ1|
Calculated state sequence:
|QIGA1|
Basic equation:
:math:`QIGA1_{neu} = QIGA1_{alt} +
(QIGZ1_{alt}-QIGA1_{alt}) \\cdot (1-exp(-KI1^{-1})) +
(QIGZ1_{neu}-QIGZ1_{alt}) \\cdot (1-KI1\\cdot(1-exp(-KI1^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.ki1(0.1)
>>> states.qigz1.old = 2.0
>>> states.qigz1.new = 4.0
>>> states.qiga1.old = 3.0
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.ki1(0.0)
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.ki1(1e500)
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(5.0)
|
entailment
|
def calc_qiga2_v1(self):
"""Perform the runoff concentration calculation for the second
interflow component.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KI2|
Required state sequence:
|QIGZ2|
Calculated state sequence:
|QIGA2|
Basic equation:
:math:`QIGA2_{neu} = QIGA2_{alt} +
(QIGZ2_{alt}-QIGA2_{alt}) \\cdot (1-exp(-KI2^{-1})) +
(QIGZ2_{neu}-QIGZ2_{alt}) \\cdot (1-KI2\\cdot(1-exp(-KI2^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.ki2(0.1)
>>> states.qigz2.old = 2.0
>>> states.qigz2.new = 4.0
>>> states.qiga2.old = 3.0
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.ki2(0.0)
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.ki2(1e500)
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(5.0)
"""
der = self.parameters.derived.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
if der.ki2 <= 0.:
new.qiga2 = new.qigz2
elif der.ki2 > 1e200:
new.qiga2 = old.qiga2+new.qigz2-old.qigz2
else:
d_temp = (1.-modelutils.exp(-1./der.ki2))
new.qiga2 = (old.qiga2 +
(old.qigz2-old.qiga2)*d_temp +
(new.qigz2-old.qigz2)*(1.-der.ki2*d_temp))
|
Perform the runoff concentration calculation for the second
interflow component.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KI2|
Required state sequence:
|QIGZ2|
Calculated state sequence:
|QIGA2|
Basic equation:
:math:`QIGA2_{neu} = QIGA2_{alt} +
(QIGZ2_{alt}-QIGA2_{alt}) \\cdot (1-exp(-KI2^{-1})) +
(QIGZ2_{neu}-QIGZ2_{alt}) \\cdot (1-KI2\\cdot(1-exp(-KI2^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.ki2(0.1)
>>> states.qigz2.old = 2.0
>>> states.qigz2.new = 4.0
>>> states.qiga2.old = 3.0
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.ki2(0.0)
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.ki2(1e500)
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(5.0)
|
entailment
|
def calc_qdga1_v1(self):
"""Perform the runoff concentration calculation for "slow" direct runoff.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KD1|
Required state sequence:
|QDGZ1|
Calculated state sequence:
|QDGA1|
Basic equation:
:math:`QDGA1_{neu} = QDGA1_{alt} +
(QDGZ1_{alt}-QDGA1_{alt}) \\cdot (1-exp(-KD1^{-1})) +
(QDGZ1_{neu}-QDGZ1_{alt}) \\cdot (1-KD1\\cdot(1-exp(-KD1^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.kd1(0.1)
>>> states.qdgz1.old = 2.0
>>> states.qdgz1.new = 4.0
>>> states.qdga1.old = 3.0
>>> model.calc_qdga1_v1()
>>> states.qdga1
qdga1(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.kd1(0.0)
>>> model.calc_qdga1_v1()
>>> states.qdga1
qdga1(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.kd1(1e500)
>>> model.calc_qdga1_v1()
>>> states.qdga1
qdga1(5.0)
"""
der = self.parameters.derived.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
if der.kd1 <= 0.:
new.qdga1 = new.qdgz1
elif der.kd1 > 1e200:
new.qdga1 = old.qdga1+new.qdgz1-old.qdgz1
else:
d_temp = (1.-modelutils.exp(-1./der.kd1))
new.qdga1 = (old.qdga1 +
(old.qdgz1-old.qdga1)*d_temp +
(new.qdgz1-old.qdgz1)*(1.-der.kd1*d_temp))
|
Perform the runoff concentration calculation for "slow" direct runoff.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KD1|
Required state sequence:
|QDGZ1|
Calculated state sequence:
|QDGA1|
Basic equation:
:math:`QDGA1_{neu} = QDGA1_{alt} +
(QDGZ1_{alt}-QDGA1_{alt}) \\cdot (1-exp(-KD1^{-1})) +
(QDGZ1_{neu}-QDGZ1_{alt}) \\cdot (1-KD1\\cdot(1-exp(-KD1^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.kd1(0.1)
>>> states.qdgz1.old = 2.0
>>> states.qdgz1.new = 4.0
>>> states.qdga1.old = 3.0
>>> model.calc_qdga1_v1()
>>> states.qdga1
qdga1(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.kd1(0.0)
>>> model.calc_qdga1_v1()
>>> states.qdga1
qdga1(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.kd1(1e500)
>>> model.calc_qdga1_v1()
>>> states.qdga1
qdga1(5.0)
|
entailment
|
def calc_qdga2_v1(self):
"""Perform the runoff concentration calculation for "fast" direct runoff.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KD2|
Required state sequence:
|QDGZ2|
Calculated state sequence:
|QDGA2|
Basic equation:
:math:`QDGA2_{neu} = QDGA2_{alt} +
(QDGZ2_{alt}-QDGA2_{alt}) \\cdot (1-exp(-KD2^{-1})) +
(QDGZ2_{neu}-QDGZ2_{alt}) \\cdot (1-KD2\\cdot(1-exp(-KD2^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.kd2(0.1)
>>> states.qdgz2.old = 2.0
>>> states.qdgz2.new = 4.0
>>> states.qdga2.old = 3.0
>>> model.calc_qdga2_v1()
>>> states.qdga2
qdga2(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.kd2(0.0)
>>> model.calc_qdga2_v1()
>>> states.qdga2
qdga2(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.kd2(1e500)
>>> model.calc_qdga2_v1()
>>> states.qdga2
qdga2(5.0)
"""
der = self.parameters.derived.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
if der.kd2 <= 0.:
new.qdga2 = new.qdgz2
elif der.kd2 > 1e200:
new.qdga2 = old.qdga2+new.qdgz2-old.qdgz2
else:
d_temp = (1.-modelutils.exp(-1./der.kd2))
new.qdga2 = (old.qdga2 +
(old.qdgz2-old.qdga2)*d_temp +
(new.qdgz2-old.qdgz2)*(1.-der.kd2*d_temp))
|
Perform the runoff concentration calculation for "fast" direct runoff.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KD2|
Required state sequence:
|QDGZ2|
Calculated state sequence:
|QDGA2|
Basic equation:
:math:`QDGA2_{neu} = QDGA2_{alt} +
(QDGZ2_{alt}-QDGA2_{alt}) \\cdot (1-exp(-KD2^{-1})) +
(QDGZ2_{neu}-QDGZ2_{alt}) \\cdot (1-KD2\\cdot(1-exp(-KD2^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.kd2(0.1)
>>> states.qdgz2.old = 2.0
>>> states.qdgz2.new = 4.0
>>> states.qdga2.old = 3.0
>>> model.calc_qdga2_v1()
>>> states.qdga2
qdga2(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.kd2(0.0)
>>> model.calc_qdga2_v1()
>>> states.qdga2
qdga2(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.kd2(1e500)
>>> model.calc_qdga2_v1()
>>> states.qdga2
qdga2(5.0)
|
entailment
|
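The working equation above can be checked independently of HydPy. The following sketch (a minimal stand-alone reproduction under the stated assumptions, not the HydPy implementation itself) applies the analytical solution of the linear storage equation to the values of the "normal test case" and also covers both extreme cases:

```python
import math

def linear_storage_step(k, qz_old, qz_new, qa_old):
    """Analytical solution of the linear storage equation, assuming a
    linear change in inflow during the simulation step (k is the
    retention time, measured in simulation steps)."""
    if k <= 0.0:  # zero retention: outflow equals inflow
        return qz_new
    if k > 1e200:  # quasi-infinite retention: avoid numerical overflow
        return qa_old + qz_new - qz_old
    temp = 1.0 - math.exp(-1.0 / k)
    return (qa_old +
            (qz_old - qa_old) * temp +
            (qz_new - qz_old) * (1.0 - k * temp))

print(round(linear_storage_step(0.1, 2.0, 4.0, 3.0), 6))  # → 3.800054
```

The same function reproduces the results of the zero-division and overflow test cases (4.0 and 5.0, respectively).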
def calc_q_v1(self):
"""Calculate the final runoff.
Note that, in case there are water areas, their |NKor| values are
added and their |EvPo| values are subtracted from the "potential"
runoff value, if possible. This holds true for |WASSER| only and is
due to compatibility with the original LARSIM implementation. Using land
type |WASSER| can result in problematic modifications of simulated
runoff series. It seems advisable to use land type |FLUSS| and/or
land type |SEE| instead.
Required control parameters:
|NHRU|
|FHRU|
|Lnk|
|NegQ|
Required flux sequence:
|NKor|
Updated flux sequence:
|EvI|
Required state sequences:
|QBGA|
|QIGA1|
|QIGA2|
|QDGA1|
|QDGA2|
Calculated flux sequence:
|lland_fluxes.Q|
Basic equations:
:math:`Q = QBGA + QIGA1 + QIGA2 + QDGA1 + QDGA2 +
NKor_{WASSER} - EvI_{WASSER}`
:math:`Q \\geq 0`
Examples:
When there are no water areas in the respective subbasin (we
choose arable land |ACKER| arbitrarily), the different runoff
components are simply summed up:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(3)
>>> lnk(ACKER, ACKER, ACKER)
>>> fhru(0.5, 0.2, 0.3)
>>> negq(False)
>>> states.qbga = 0.1
>>> states.qiga1 = 0.3
>>> states.qiga2 = 0.5
>>> states.qdga1 = 0.7
>>> states.qdga2 = 0.9
>>> fluxes.nkor = 10.0
>>> fluxes.evi = 4.0, 5.0, 3.0
>>> model.calc_q_v1()
>>> fluxes.q
q(2.5)
>>> fluxes.evi
evi(4.0, 5.0, 3.0)
The defined values of interception evaporation do not affect the
result of the given example; the predefined values for sequence
|EvI| remain unchanged. But when the first HRU is assumed to be
a water area (|WASSER|), its adjusted precipitation |NKor| value
and its interception evaporation |EvI| value are added to and
subtracted from |lland_fluxes.Q|, respectively:
>>> control.lnk(WASSER, VERS, NADELW)
>>> model.calc_q_v1()
>>> fluxes.q
q(5.5)
>>> fluxes.evi
evi(4.0, 5.0, 3.0)
Note that only 5 mm are added (instead of the |NKor| value of 10 mm)
and only 2 mm are subtracted (instead of the |EvI| value of 4 mm),
as the first HRU's area accounts for only 50 % of the subbasin area.
Setting also the land use class of the second HRU to land type
|WASSER| and resetting |NKor| to zero would result in overdrying.
To avoid this, both actual water evaporation values stored in
sequence |EvI| are reduced by the same factor:
>>> control.lnk(WASSER, WASSER, NADELW)
>>> fluxes.nkor = 0.0
>>> model.calc_q_v1()
>>> fluxes.q
q(0.0)
>>> fluxes.evi
evi(3.333333, 4.166667, 3.0)
The handling of water areas of type |FLUSS| and |SEE| differs
from that of type |WASSER|, as these receive their net input
before the runoff concentration routines are applied. This
should be more realistic in most cases (especially for type |SEE|,
representing lakes not directly connected to the stream network).
But it could sometimes result in negative outflow values. This
is avoided by simply setting |lland_fluxes.Q| to zero and adding
the truncated negative outflow value to the |EvI| value of all
HRUs of type |FLUSS| and |SEE|:
>>> control.lnk(FLUSS, SEE, NADELW)
>>> states.qbga = -1.0
>>> states.qdga2 = -1.5
>>> fluxes.evi = 4.0, 5.0, 3.0
>>> model.calc_q_v1()
>>> fluxes.q
q(0.0)
>>> fluxes.evi
evi(2.571429, 3.571429, 3.0)
This adjustment of |EvI| is only correct regarding the total
water balance. Neither spatial nor temporal consistency of the
resulting |EvI| values is assured. In the most extreme case,
even negative |EvI| values might occur. This seems acceptable,
as long as the adjustment of |EvI| is rarely triggered. When in
doubt about this, check sequences |EvPo| and |EvI| of HRUs of
types |FLUSS| and |SEE| for possible discrepancies. Also note
that unnecessary corrections of |lland_fluxes.Q| might occur
in case land type |WASSER| is combined with either land type
|SEE| or |FLUSS|.
You might want to avoid correcting |lland_fluxes.Q| altogether.
This can be achieved by setting parameter |NegQ| to `True`:
>>> negq(True)
>>> fluxes.evi = 4.0, 5.0, 3.0
>>> model.calc_q_v1()
>>> fluxes.q
q(-1.0)
>>> fluxes.evi
evi(4.0, 5.0, 3.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
aid = self.sequences.aides.fastaccess
flu.q = sta.qbga+sta.qiga1+sta.qiga2+sta.qdga1+sta.qdga2
if (not con.negq) and (flu.q < 0.):
d_area = 0.
for k in range(con.nhru):
if con.lnk[k] in (FLUSS, SEE):
d_area += con.fhru[k]
if d_area > 0.:
for k in range(con.nhru):
if con.lnk[k] in (FLUSS, SEE):
flu.evi[k] += flu.q/d_area
flu.q = 0.
aid.epw = 0.
for k in range(con.nhru):
if con.lnk[k] == WASSER:
flu.q += con.fhru[k]*flu.nkor[k]
aid.epw += con.fhru[k]*flu.evi[k]
if (flu.q > aid.epw) or con.negq:
flu.q -= aid.epw
elif aid.epw > 0.:
for k in range(con.nhru):
if con.lnk[k] == WASSER:
flu.evi[k] *= flu.q/aid.epw
flu.q = 0.
|
Calculate the final runoff.
Note that, in case there are water areas, their |NKor| values are
added and their |EvPo| values are subtracted from the "potential"
runoff value, if possible. This holds true for |WASSER| only and is
due to compatibility with the original LARSIM implementation. Using land
type |WASSER| can result in problematic modifications of simulated
runoff series. It seems advisable to use land type |FLUSS| and/or
land type |SEE| instead.
Required control parameters:
|NHRU|
|FHRU|
|Lnk|
|NegQ|
Required flux sequence:
|NKor|
Updated flux sequence:
|EvI|
Required state sequences:
|QBGA|
|QIGA1|
|QIGA2|
|QDGA1|
|QDGA2|
Calculated flux sequence:
|lland_fluxes.Q|
Basic equations:
:math:`Q = QBGA + QIGA1 + QIGA2 + QDGA1 + QDGA2 +
NKor_{WASSER} - EvI_{WASSER}`
:math:`Q \\geq 0`
Examples:
When there are no water areas in the respective subbasin (we
choose arable land |ACKER| arbitrarily), the different runoff
components are simply summed up:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(3)
>>> lnk(ACKER, ACKER, ACKER)
>>> fhru(0.5, 0.2, 0.3)
>>> negq(False)
>>> states.qbga = 0.1
>>> states.qiga1 = 0.3
>>> states.qiga2 = 0.5
>>> states.qdga1 = 0.7
>>> states.qdga2 = 0.9
>>> fluxes.nkor = 10.0
>>> fluxes.evi = 4.0, 5.0, 3.0
>>> model.calc_q_v1()
>>> fluxes.q
q(2.5)
>>> fluxes.evi
evi(4.0, 5.0, 3.0)
The defined values of interception evaporation do not affect the
result of the given example; the predefined values for sequence
|EvI| remain unchanged. But when the first HRU is assumed to be
a water area (|WASSER|), its adjusted precipitation |NKor| value
and its interception evaporation |EvI| value are added to and
subtracted from |lland_fluxes.Q|, respectively:
>>> control.lnk(WASSER, VERS, NADELW)
>>> model.calc_q_v1()
>>> fluxes.q
q(5.5)
>>> fluxes.evi
evi(4.0, 5.0, 3.0)
Note that only 5 mm are added (instead of the |NKor| value of 10 mm)
and only 2 mm are subtracted (instead of the |EvI| value of 4 mm),
as the first HRU's area accounts for only 50 % of the subbasin area.
Setting also the land use class of the second HRU to land type
|WASSER| and resetting |NKor| to zero would result in overdrying.
To avoid this, both actual water evaporation values stored in
sequence |EvI| are reduced by the same factor:
>>> control.lnk(WASSER, WASSER, NADELW)
>>> fluxes.nkor = 0.0
>>> model.calc_q_v1()
>>> fluxes.q
q(0.0)
>>> fluxes.evi
evi(3.333333, 4.166667, 3.0)
The handling of water areas of type |FLUSS| and |SEE| differs
from that of type |WASSER|, as these receive their net input
before the runoff concentration routines are applied. This
should be more realistic in most cases (especially for type |SEE|,
representing lakes not directly connected to the stream network).
But it could sometimes result in negative outflow values. This
is avoided by simply setting |lland_fluxes.Q| to zero and adding
the truncated negative outflow value to the |EvI| value of all
HRUs of type |FLUSS| and |SEE|:
>>> control.lnk(FLUSS, SEE, NADELW)
>>> states.qbga = -1.0
>>> states.qdga2 = -1.5
>>> fluxes.evi = 4.0, 5.0, 3.0
>>> model.calc_q_v1()
>>> fluxes.q
q(0.0)
>>> fluxes.evi
evi(2.571429, 3.571429, 3.0)
This adjustment of |EvI| is only correct regarding the total
water balance. Neither spatial nor temporal consistency of the
resulting |EvI| values is assured. In the most extreme case,
even negative |EvI| values might occur. This seems acceptable,
as long as the adjustment of |EvI| is rarely triggered. When in
doubt about this, check sequences |EvPo| and |EvI| of HRUs of
types |FLUSS| and |SEE| for possible discrepancies. Also note
that unnecessary corrections of |lland_fluxes.Q| might occur
in case land type |WASSER| is combined with either land type
|SEE| or |FLUSS|.
You might want to avoid correcting |lland_fluxes.Q| altogether.
This can be achieved by setting parameter |NegQ| to `True`:
>>> negq(True)
>>> fluxes.evi = 4.0, 5.0, 3.0
>>> model.calc_q_v1()
>>> fluxes.q
q(-1.0)
>>> fluxes.evi
evi(4.0, 5.0, 3.0)
|
entailment
|
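The negative-outflow correction described above can be sketched independently of HydPy. The following simplified stand-alone version (not the HydPy implementation; land types, area fractions, and |EvI| values are taken from the |FLUSS|/|SEE| doctest) truncates negative runoff and distributes the deficit:

```python
FLUSS, SEE = 'FLUSS', 'SEE'  # simplified land type markers

def correct_negative_runoff(q, lnk, fhru, evi):
    """Truncate negative runoff to zero and charge the deficit to the
    interception evaporation of all FLUSS and SEE units (each unit
    receives the deficit divided by the total FLUSS/SEE area)."""
    if q >= 0.0:
        return q, evi
    area = sum(f for l, f in zip(lnk, fhru) if l in (FLUSS, SEE))
    if area > 0.0:
        evi = [e + q / area if l in (FLUSS, SEE) else e
               for l, e in zip(lnk, evi)]
        q = 0.0
    return q, evi

q, evi = correct_negative_runoff(
    -1.0, [FLUSS, SEE, 'NADELW'], [0.5, 0.2, 0.3], [4.0, 5.0, 3.0])
```

With these inputs, `q` becomes 0.0 and `evi` becomes approximately `[2.571429, 3.571429, 3.0]`, matching the doctest results.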
def pass_q_v1(self):
"""Update the outlet link sequence.
Required derived parameter:
|QFactor|
Required flux sequences:
|lland_fluxes.Q|
Calculated flux sequence:
|lland_outlets.Q|
Basic equation:
:math:`Q_{outlets} = QFactor \\cdot Q_{fluxes}`
"""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
out = self.sequences.outlets.fastaccess
out.q[0] += der.qfactor*flu.q
|
Update the outlet link sequence.
Required derived parameter:
|QFactor|
Required flux sequences:
|lland_fluxes.Q|
Calculated flux sequence:
|lland_outlets.Q|
Basic equation:
:math:`Q_{outlets} = QFactor \\cdot Q_{fluxes}`
|
entailment
|
def calc_outputs_v1(self):
"""Perform the actual interpolation or extrapolation.
Required control parameters:
|XPoints|
|YPoints|
Required derived parameters:
|NmbPoints|
|NmbBranches|
Required flux sequence:
|Input|
Calculated flux sequence:
|Outputs|
Examples:
As a simple example, assume a weir directing all discharge into
`branch1` until the capacity limit of 2 m³/s is reached. The
discharge exceeding this threshold is directed into `branch2`:
>>> from hydpy.models.hbranch import *
>>> parameterstep()
>>> xpoints(0., 2., 4.)
>>> ypoints(branch1=[0., 2., 2.],
... branch2=[0., 0., 2.])
>>> model.parameters.update()
Low discharge example (linear interpolation between the first two
supporting point pairs):
>>> fluxes.input = 1.
>>> model.calc_outputs_v1()
>>> fluxes.outputs
outputs(branch1=1.0,
branch2=0.0)
Medium discharge example (linear interpolation between the second
two supporting point pairs):
>>> fluxes.input = 3.
>>> model.calc_outputs_v1()
>>> print(fluxes.outputs)
outputs(branch1=2.0,
branch2=1.0)
High discharge example (linear extrapolation beyond the second two
supporting point pairs):
>>> fluxes.input = 5.
>>> model.calc_outputs_v1()
>>> fluxes.outputs
outputs(branch1=2.0,
branch2=3.0)
Non-monotonic relationships and balance violations are allowed,
e.g.:
>>> xpoints(0., 2., 4., 6.)
>>> ypoints(branch1=[0., 2., 0., 0.],
... branch2=[0., 0., 2., 4.])
>>> model.parameters.update()
>>> fluxes.input = 7.
>>> model.calc_outputs_v1()
>>> fluxes.outputs
outputs(branch1=0.0,
branch2=5.0)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
# Search for the index of the two relevant x points...
for pdx in range(1, der.nmbpoints):
if con.xpoints[pdx] > flu.input:
break
# ...and use it for linear interpolation (or extrapolation).
for bdx in range(der.nmbbranches):
flu.outputs[bdx] = (
(flu.input-con.xpoints[pdx-1]) *
(con.ypoints[bdx, pdx]-con.ypoints[bdx, pdx-1]) /
(con.xpoints[pdx]-con.xpoints[pdx-1]) +
con.ypoints[bdx, pdx-1])
|
Perform the actual interpolation or extrapolation.
Required control parameters:
|XPoints|
|YPoints|
Required derived parameters:
|NmbPoints|
|NmbBranches|
Required flux sequence:
|Input|
Calculated flux sequence:
|Outputs|
Examples:
As a simple example, assume a weir directing all discharge into
`branch1` until the capacity limit of 2 m³/s is reached. The
discharge exceeding this threshold is directed into `branch2`:
>>> from hydpy.models.hbranch import *
>>> parameterstep()
>>> xpoints(0., 2., 4.)
>>> ypoints(branch1=[0., 2., 2.],
... branch2=[0., 0., 2.])
>>> model.parameters.update()
Low discharge example (linear interpolation between the first two
supporting point pairs):
>>> fluxes.input = 1.
>>> model.calc_outputs_v1()
>>> fluxes.outputs
outputs(branch1=1.0,
branch2=0.0)
Medium discharge example (linear interpolation between the second
two supporting point pairs):
>>> fluxes.input = 3.
>>> model.calc_outputs_v1()
>>> print(fluxes.outputs)
outputs(branch1=2.0,
branch2=1.0)
High discharge example (linear extrapolation beyond the second two
supporting point pairs):
>>> fluxes.input = 5.
>>> model.calc_outputs_v1()
>>> fluxes.outputs
outputs(branch1=2.0,
branch2=3.0)
Non-monotonic relationships and balance violations are allowed,
e.g.:
>>> xpoints(0., 2., 4., 6.)
>>> ypoints(branch1=[0., 2., 0., 0.],
... branch2=[0., 0., 2., 4.])
>>> model.parameters.update()
>>> fluxes.input = 7.
>>> model.calc_outputs_v1()
>>> fluxes.outputs
outputs(branch1=0.0,
branch2=5.0)
|
entailment
|
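The piecewise-linear branching scheme can be reproduced as a short stand-alone sketch (an illustration of the technique, not the HydPy implementation; the supporting points are those of the weir doctest above):

```python
def branch(xpoints, ypoints, value):
    """Piecewise-linear interpolation (and linear extrapolation beyond
    the outermost supporting points) for each branch."""
    # search for the index of the two relevant x points
    for pdx in range(1, len(xpoints)):
        if xpoints[pdx] > value:
            break
    # linear interpolation (or extrapolation) for every branch
    return [(value - xpoints[pdx - 1]) *
            (ys[pdx] - ys[pdx - 1]) /
            (xpoints[pdx] - xpoints[pdx - 1]) +
            ys[pdx - 1]
            for ys in ypoints]

# capacity limit of 2 m³/s for the first branch (values from the doctest)
print(branch([0., 2., 4.], [[0., 2., 2.], [0., 0., 2.]], 3.0))  # → [2.0, 1.0]
```

Because the search loop simply keeps the last index when `value` exceeds all supporting points, extrapolation falls out of the same formula (e.g. an input of 5.0 yields `[2.0, 3.0]`).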
def pick_input_v1(self):
"""Updates |Input| based on |Total|."""
flu = self.sequences.fluxes.fastaccess
inl = self.sequences.inlets.fastaccess
flu.input = 0.
for idx in range(inl.len_total):
flu.input += inl.total[idx][0]
|
Updates |Input| based on |Total|.
|
entailment
|
def pass_outputs_v1(self):
"""Updates |Branched| based on |Outputs|."""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
out = self.sequences.outlets.fastaccess
for bdx in range(der.nmbbranches):
out.branched[bdx][0] += flu.outputs[bdx]
|
Updates |Branched| based on |Outputs|.
|
entailment
|
def connect(self):
"""Connect the |LinkSequence| instances handled by the actual model
to the |NodeSequence| instances handled by one inlet node and
multiple outlet nodes.
The HydPy-H-Branch model passes multiple output values to different
outlet nodes. This requires additional information regarding the
`direction` of each output value. Therefore, node names are used
as keywords. Assume the discharge values of both nodes `inflow1`
and `inflow2` shall be branched to nodes `outflow1` and `outflow2`
via element `branch`:
>>> from hydpy import *
>>> branch = Element('branch',
... inlets=['inflow1', 'inflow2'],
... outlets=['outflow1', 'outflow2'])
Then parameter |YPoints| relates different supporting points via
its keyword arguments to the respective nodes:
>>> from hydpy.models.hbranch import *
>>> parameterstep()
>>> xpoints(0.0, 3.0)
>>> ypoints(outflow1=[0.0, 1.0], outflow2=[0.0, 2.0])
>>> parameters.update()
After connecting the model with its element the total discharge
value of nodes `inflow1` and `inflow2` can be properly divided:
>>> branch.model = model
>>> branch.inlets.inflow1.sequences.sim = 1.0
>>> branch.inlets.inflow2.sequences.sim = 5.0
>>> model.doit(0)
>>> print(branch.outlets.outflow1.sequences.sim)
sim(2.0)
>>> print(branch.outlets.outflow2.sequences.sim)
sim(4.0)
In case of missing (or misspelled) outlet nodes, the following
error is raised:
>>> branch.outlets.mutable = True
>>> del branch.outlets.outflow1
>>> parameters.update()
>>> model.connect()
Traceback (most recent call last):
...
RuntimeError: Model `hbranch` of element `branch` tried to connect \
to an outlet node named `outflow1`, which is not an available outlet node \
of element `branch`.
"""
nodes = self.element.inlets
total = self.sequences.inlets.total
if total.shape != (len(nodes),):
total.shape = len(nodes)
for idx, node in enumerate(nodes):
double = node.get_double('inlets')
total.set_pointer(double, idx)
for (idx, name) in enumerate(self.nodenames):
try:
outlet = getattr(self.element.outlets, name)
double = outlet.get_double('outlets')
except AttributeError:
raise RuntimeError(
f'Model {objecttools.elementphrase(self)} tried '
f'to connect to an outlet node named `{name}`, '
f'which is not an available outlet node of element '
f'`{self.element.name}`.')
self.sequences.outlets.branched.set_pointer(double, idx)
|
Connect the |LinkSequence| instances handled by the actual model
to the |NodeSequence| instances handled by one inlet node and
multiple outlet nodes.
The HydPy-H-Branch model passes multiple output values to different
outlet nodes. This requires additional information regarding the
`direction` of each output value. Therefore, node names are used
as keywords. Assume the discharge values of both nodes `inflow1`
and `inflow2` shall be branched to nodes `outflow1` and `outflow2`
via element `branch`:
>>> from hydpy import *
>>> branch = Element('branch',
... inlets=['inflow1', 'inflow2'],
... outlets=['outflow1', 'outflow2'])
Then parameter |YPoints| relates different supporting points via
its keyword arguments to the respective nodes:
>>> from hydpy.models.hbranch import *
>>> parameterstep()
>>> xpoints(0.0, 3.0)
>>> ypoints(outflow1=[0.0, 1.0], outflow2=[0.0, 2.0])
>>> parameters.update()
After connecting the model with its element the total discharge
value of nodes `inflow1` and `inflow2` can be properly divided:
>>> branch.model = model
>>> branch.inlets.inflow1.sequences.sim = 1.0
>>> branch.inlets.inflow2.sequences.sim = 5.0
>>> model.doit(0)
>>> print(branch.outlets.outflow1.sequences.sim)
sim(2.0)
>>> print(branch.outlets.outflow2.sequences.sim)
sim(4.0)
In case of missing (or misspelled) outlet nodes, the following
error is raised:
>>> branch.outlets.mutable = True
>>> del branch.outlets.outflow1
>>> parameters.update()
>>> model.connect()
Traceback (most recent call last):
...
RuntimeError: Model `hbranch` of element `branch` tried to connect \
to an outlet node named `outflow1`, which is not an available outlet node \
of element `branch`.
|
entailment
|
def update(self):
"""Determine the number of response functions.
>>> from hydpy.models.arma import *
>>> parameterstep('1d')
>>> responses(((1., 2.), (1.,)), th_3=((1.,), (1., 2., 3.)))
>>> derived.nmb.update()
>>> derived.nmb
nmb(2)
Note that updating parameter `nmb` sets the shape of the flux
sequences |QPIn|, |QPOut|, |QMA|, and |QAR| automatically.
>>> fluxes.qpin
qpin(nan, nan)
>>> fluxes.qpout
qpout(nan, nan)
>>> fluxes.qma
qma(nan, nan)
>>> fluxes.qar
qar(nan, nan)
"""
pars = self.subpars.pars
responses = pars.control.responses
fluxes = pars.model.sequences.fluxes
self(len(responses))
fluxes.qpin.shape = self.value
fluxes.qpout.shape = self.value
fluxes.qma.shape = self.value
fluxes.qar.shape = self.value
|
Determine the number of response functions.
>>> from hydpy.models.arma import *
>>> parameterstep('1d')
>>> responses(((1., 2.), (1.,)), th_3=((1.,), (1., 2., 3.)))
>>> derived.nmb.update()
>>> derived.nmb
nmb(2)
Note that updating parameter `nmb` sets the shape of the flux
sequences |QPIn|, |QPOut|, |QMA|, and |QAR| automatically.
>>> fluxes.qpin
qpin(nan, nan)
>>> fluxes.qpout
qpout(nan, nan)
>>> fluxes.qma
qma(nan, nan)
>>> fluxes.qar
qar(nan, nan)
|
entailment
|
def update(self):
"""Determine the total number of AR coefficients.
>>> from hydpy.models.arma import *
>>> parameterstep('1d')
>>> responses(((1., 2.), (1.,)), th_3=((1.,), (1., 2., 3.)))
>>> derived.ar_order.update()
>>> derived.ar_order
ar_order(2, 1)
"""
responses = self.subpars.pars.control.responses
self.shape = len(responses)
self(responses.ar_orders)
|
Determine the total number of AR coefficients.
>>> from hydpy.models.arma import *
>>> parameterstep('1d')
>>> responses(((1., 2.), (1.,)), th_3=((1.,), (1., 2., 3.)))
>>> derived.ar_order.update()
>>> derived.ar_order
ar_order(2, 1)
|
entailment
|
def update(self):
"""Determine all AR coefficients.
>>> from hydpy.models.arma import *
>>> parameterstep('1d')
>>> responses(((1., 2.), (1.,)), th_3=((1.,), (1., 2., 3.)))
>>> derived.ar_coefs.update()
>>> derived.ar_coefs
ar_coefs([[1.0, 2.0],
[1.0, nan]])
Note that updating parameter `ar_coefs` sets the shape of the log
sequence |LogOut| automatically.
>>> logs.logout
logout([[nan, nan],
[nan, nan]])
"""
pars = self.subpars.pars
coefs = pars.control.responses.ar_coefs
self.shape = coefs.shape
self(coefs)
pars.model.sequences.logs.logout.shape = self.shape
|
Determine all AR coefficients.
>>> from hydpy.models.arma import *
>>> parameterstep('1d')
>>> responses(((1., 2.), (1.,)), th_3=((1.,), (1., 2., 3.)))
>>> derived.ar_coefs.update()
>>> derived.ar_coefs
ar_coefs([[1.0, 2.0],
[1.0, nan]])
Note that updating parameter `ar_coefs` sets the shape of the log
sequence |LogOut| automatically.
>>> logs.logout
logout([[nan, nan],
[nan, nan]])
|
entailment
|
def update(self):
"""Determine all MA coefficients.
>>> from hydpy.models.arma import *
>>> parameterstep('1d')
>>> responses(((1., 2.), (1.,)), th_3=((1.,), (1., 2., 3.)))
>>> derived.ma_coefs.update()
>>> derived.ma_coefs
ma_coefs([[1.0, nan, nan],
[1.0, 2.0, 3.0]])
Note that updating parameter `ma_coefs` sets the shape of the log
sequence |LogIn| automatically.
>>> logs.login
login([[nan, nan, nan],
[nan, nan, nan]])
"""
pars = self.subpars.pars
coefs = pars.control.responses.ma_coefs
self.shape = coefs.shape
self(coefs)
pars.model.sequences.logs.login.shape = self.shape
|
Determine all MA coefficients.
>>> from hydpy.models.arma import *
>>> parameterstep('1d')
>>> responses(((1., 2.), (1.,)), th_3=((1.,), (1., 2., 3.)))
>>> derived.ma_coefs.update()
>>> derived.ma_coefs
ma_coefs([[1.0, nan, nan],
[1.0, 2.0, 3.0]])
Note that updating parameter `ma_coefs` sets the shape of the log
sequence |LogIn| automatically.
>>> logs.login
login([[nan, nan, nan],
[nan, nan, nan]])
|
entailment
|
def _prepare_docstrings():
"""Assign docstrings to the corresponding attributes of class `Options`
to make them available in the interactive mode of Python."""
if config.USEAUTODOC:
source = inspect.getsource(Options)
docstrings = source.split('"""')[3::2]
attributes = [line.strip().split()[0] for line in source.split('\n')
if '_Option(' in line]
for attribute, docstring in zip(attributes, docstrings):
Options.__dict__[attribute].__doc__ = docstring
|
Assign docstrings to the corresponding attributes of class `Options`
to make them available in the interactive mode of Python.
|
entailment
|
def nodes(self) -> devicetools.Nodes:
"""A |set| containing the |Node| objects of all handled
|Selection| objects.
>>> from hydpy import Selection, Selections
>>> selections = Selections(
... Selection('sel1', ['node1', 'node2'], ['element1']),
... Selection('sel2', ['node1', 'node3'], ['element2']))
>>> selections.nodes
Nodes("node1", "node2", "node3")
"""
nodes = devicetools.Nodes()
for selection in self:
nodes += selection.nodes
return nodes
|
A |set| containing the |Node| objects of all handled
|Selection| objects.
>>> from hydpy import Selection, Selections
>>> selections = Selections(
... Selection('sel1', ['node1', 'node2'], ['element1']),
... Selection('sel2', ['node1', 'node3'], ['element2']))
>>> selections.nodes
Nodes("node1", "node2", "node3")
|
entailment
|
def elements(self) -> devicetools.Elements:
"""A |set| containing the |Element| objects of all handled
|Selection| objects.
>>> from hydpy import Selection, Selections
>>> selections = Selections(
... Selection('sel1', ['node1'], ['element1']),
... Selection('sel2', ['node1'], ['element2', 'element3']))
>>> selections.elements
Elements("element1", "element2", "element3")
"""
elements = devicetools.Elements()
for selection in self:
elements += selection.elements
return elements
|
A |set| containing the |Element| objects of all handled
|Selection| objects.
>>> from hydpy import Selection, Selections
>>> selections = Selections(
... Selection('sel1', ['node1'], ['element1']),
... Selection('sel2', ['node1'], ['element2', 'element3']))
>>> selections.elements
Elements("element1", "element2", "element3")
|
entailment
|
def __getiterable(value): # ToDo: refactor
"""Try to convert the given argument to a |list| of |Selection|
objects and return it.
"""
if isinstance(value, Selection):
return [value]
try:
for selection in value:
if not isinstance(selection, Selection):
raise TypeError
return list(value)
except TypeError:
raise TypeError(
f'Binary operations on Selections objects are defined for '
f'other Selections objects, single Selection objects, or '
f'iterables containing `Selection` objects, but the type of '
f'the given argument is `{objecttools.classname(value)}`.')
|
Try to convert the given argument to a |list| of |Selection|
objects and return it.
|
entailment
|
def assignrepr(self, prefix='') -> str:
"""Return a |repr| string with a prefixed assignment."""
with objecttools.repr_.preserve_strings(True):
with hydpy.pub.options.ellipsis(2, optional=True):
prefix += '%s(' % objecttools.classname(self)
repr_ = objecttools.assignrepr_values(
sorted(self.names), prefix, 70)
return repr_ + ')'
|
Return a |repr| string with a prefixed assignment.
|
entailment
|
def search_upstream(self, device: devicetools.Device,
name: str = 'upstream') -> 'Selection':
"""Return the network upstream of the given starting point, including
the starting point itself.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, _ = prepare_full_example_2()
You can pass both |Node| and |Element| objects and, optionally,
the name of the newly created |Selection| object:
>>> test = pub.selections.complete.copy('test')
>>> test.search_upstream(hp.nodes.lahn_2)
Selection("upstream",
nodes=("dill", "lahn_1", "lahn_2"),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"stream_dill_lahn_2", "stream_lahn_1_lahn_2"))
>>> test.search_upstream(
... hp.elements.stream_lahn_1_lahn_2, 'UPSTREAM')
Selection("UPSTREAM",
nodes="lahn_1",
elements=("land_lahn_1", "stream_lahn_1_lahn_2"))
Wrong device specifications result in errors like the following:
>>> test.search_upstream(1)
Traceback (most recent call last):
...
TypeError: While trying to determine the upstream network of \
selection `test`, the following error occurred: Either a `Node` or \
an `Element` object is required as the "outlet device", but the given \
`device` value is of type `int`.
>>> pub.selections.headwaters.search_upstream(hp.nodes.lahn_3)
Traceback (most recent call last):
...
KeyError: "While trying to determine the upstream network of \
selection `headwaters`, the following error occurred: 'No node named \
`lahn_3` available.'"
Method |Selection.select_upstream| restricts the current selection
to the one determined with the method |Selection.search_upstream|:
>>> test.select_upstream(hp.nodes.lahn_2)
Selection("test",
nodes=("dill", "lahn_1", "lahn_2"),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"stream_dill_lahn_2", "stream_lahn_1_lahn_2"))
On the contrary, the method |Selection.deselect_upstream| restricts
the current selection to all devices not determined by method
|Selection.search_upstream|:
>>> complete = pub.selections.complete.deselect_upstream(
... hp.nodes.lahn_2)
>>> complete
Selection("complete",
nodes="lahn_3",
elements=("land_lahn_3", "stream_lahn_2_lahn_3"))
If necessary, include the "outlet device" manually afterwards:
>>> complete.nodes += hp.nodes.lahn_2
>>> complete
Selection("complete",
nodes=("lahn_2", "lahn_3"),
elements=("land_lahn_3", "stream_lahn_2_lahn_3"))
"""
try:
selection = Selection(name)
if isinstance(device, devicetools.Node):
node = self.nodes[device.name]
return self.__get_nextnode(node, selection)
if isinstance(device, devicetools.Element):
element = self.elements[device.name]
return self.__get_nextelement(element, selection)
raise TypeError(
f'Either a `Node` or an `Element` object is required '
f'as the "outlet device", but the given `device` value '
f'is of type `{objecttools.classname(device)}`.')
except BaseException:
objecttools.augment_excmessage(
f'While trying to determine the upstream network of '
f'selection `{self.name}`')
|
Return the network upstream of the given starting point, including
the starting point itself.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, _ = prepare_full_example_2()
You can pass both |Node| and |Element| objects and, optionally,
the name of the newly created |Selection| object:
>>> test = pub.selections.complete.copy('test')
>>> test.search_upstream(hp.nodes.lahn_2)
Selection("upstream",
nodes=("dill", "lahn_1", "lahn_2"),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"stream_dill_lahn_2", "stream_lahn_1_lahn_2"))
>>> test.search_upstream(
... hp.elements.stream_lahn_1_lahn_2, 'UPSTREAM')
Selection("UPSTREAM",
nodes="lahn_1",
elements=("land_lahn_1", "stream_lahn_1_lahn_2"))
Wrong device specifications result in errors like the following:
>>> test.search_upstream(1)
Traceback (most recent call last):
...
TypeError: While trying to determine the upstream network of \
selection `test`, the following error occurred: Either a `Node` or \
an `Element` object is required as the "outlet device", but the given \
`device` value is of type `int`.
>>> pub.selections.headwaters.search_upstream(hp.nodes.lahn_3)
Traceback (most recent call last):
...
KeyError: "While trying to determine the upstream network of \
selection `headwaters`, the following error occurred: 'No node named \
`lahn_3` available.'"
Method |Selection.select_upstream| restricts the current selection
to the one determined with the method |Selection.search_upstream|:
>>> test.select_upstream(hp.nodes.lahn_2)
Selection("test",
nodes=("dill", "lahn_1", "lahn_2"),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"stream_dill_lahn_2", "stream_lahn_1_lahn_2"))
On the contrary, the method |Selection.deselect_upstream| restricts
the current selection to all devices not determined by method
|Selection.search_upstream|:
>>> complete = pub.selections.complete.deselect_upstream(
... hp.nodes.lahn_2)
>>> complete
Selection("complete",
nodes="lahn_3",
elements=("land_lahn_3", "stream_lahn_2_lahn_3"))
If necessary, include the "outlet device" manually afterwards:
>>> complete.nodes += hp.nodes.lahn_2
>>> complete
Selection("complete",
nodes=("lahn_2", "lahn_3"),
elements=("land_lahn_3", "stream_lahn_2_lahn_3"))
|
entailment
|
def select_upstream(self, device: devicetools.Device) -> 'Selection':
"""Restrict the current selection to the network upstream of the given
starting point, including the starting point itself.
See the documentation on method |Selection.search_upstream| for
additional information.
"""
upstream = self.search_upstream(device)
self.nodes = upstream.nodes
self.elements = upstream.elements
return self
|
Restrict the current selection to the network upstream of the given
starting point, including the starting point itself.
See the documentation on method |Selection.search_upstream| for
additional information.
|
entailment
|
def search_modeltypes(self, *models: ModelTypesArg,
name: str = 'modeltypes') -> 'Selection':
"""Return a |Selection| object containing only the elements
currently handling models of the given types.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, _ = prepare_full_example_2()
You can pass both |Model| objects and names and, as a keyword
argument, the name of the newly created |Selection| object:
>>> test = pub.selections.complete.copy('test')
>>> from hydpy import prepare_model
>>> hland_v1 = prepare_model('hland_v1')
>>> test.search_modeltypes(hland_v1)
Selection("modeltypes",
nodes=(),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"land_lahn_3"))
>>> test.search_modeltypes(
... hland_v1, 'hstream_v1', 'lland_v1', name='MODELTYPES')
Selection("MODELTYPES",
nodes=(),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"land_lahn_3", "stream_dill_lahn_2",
"stream_lahn_1_lahn_2", "stream_lahn_2_lahn_3"))
Wrong model specifications result in errors like the following:
>>> test.search_modeltypes('wrong')
Traceback (most recent call last):
...
ModuleNotFoundError: While trying to determine the elements of \
selection `test` handling the model defined by the argument(s) `wrong` \
of type(s) `str`, the following error occurred: \
No module named 'hydpy.models.wrong'
Method |Selection.select_modeltypes| restricts the current selection to
the one determined with the method |Selection.search_modeltypes|:
>>> test.select_modeltypes(hland_v1)
Selection("test",
nodes=(),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"land_lahn_3"))
On the contrary, the method |Selection.deselect_modeltypes| restricts
the current selection to all devices not determined by the method
|Selection.search_modeltypes|:
>>> pub.selections.complete.deselect_modeltypes(hland_v1)
Selection("complete",
nodes=(),
elements=("stream_dill_lahn_2", "stream_lahn_1_lahn_2",
"stream_lahn_2_lahn_3"))
"""
try:
typelist = []
for model in models:
if not isinstance(model, modeltools.Model):
model = importtools.prepare_model(model)
typelist.append(type(model))
typetuple = tuple(typelist)
selection = Selection(name)
for element in self.elements:
if isinstance(element.model, typetuple):
selection.elements += element
return selection
except BaseException:
values = objecttools.enumeration(models)
classes = objecttools.enumeration(
objecttools.classname(model) for model in models)
objecttools.augment_excmessage(
f'While trying to determine the elements of selection '
f'`{self.name}` handling the model defined by the '
f'argument(s) `{values}` of type(s) `{classes}`')
|
Return a |Selection| object containing only the elements
currently handling models of the given types.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, _ = prepare_full_example_2()
You can pass both |Model| objects and names and, as a keyword
argument, the name of the newly created |Selection| object:
>>> test = pub.selections.complete.copy('test')
>>> from hydpy import prepare_model
>>> hland_v1 = prepare_model('hland_v1')
>>> test.search_modeltypes(hland_v1)
Selection("modeltypes",
nodes=(),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"land_lahn_3"))
>>> test.search_modeltypes(
... hland_v1, 'hstream_v1', 'lland_v1', name='MODELTYPES')
Selection("MODELTYPES",
nodes=(),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"land_lahn_3", "stream_dill_lahn_2",
"stream_lahn_1_lahn_2", "stream_lahn_2_lahn_3"))
Wrong model specifications result in errors like the following:
>>> test.search_modeltypes('wrong')
Traceback (most recent call last):
...
ModuleNotFoundError: While trying to determine the elements of \
selection `test` handling the model defined by the argument(s) `wrong` \
of type(s) `str`, the following error occurred: \
No module named 'hydpy.models.wrong'
Method |Selection.select_modeltypes| restricts the current selection to
the one determined with the method |Selection.search_modeltypes|:
>>> test.select_modeltypes(hland_v1)
Selection("test",
nodes=(),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"land_lahn_3"))
On the contrary, the method |Selection.deselect_modeltypes| restricts
the current selection to all devices not determined by the method
|Selection.search_modeltypes|:
>>> pub.selections.complete.deselect_modeltypes(hland_v1)
Selection("complete",
nodes=(),
elements=("stream_dill_lahn_2", "stream_lahn_1_lahn_2",
"stream_lahn_2_lahn_3"))
|
entailment
|
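The element filter at the heart of |Selection.search_modeltypes| is a plain `isinstance` check against a tuple of model types. The following standalone sketch shows the same pattern; `ModelA`, `ModelB`, `FakeElement`, and `filter_by_modeltypes` are hypothetical stand-ins for illustration only, not part of HydPy:

```python
# Hypothetical stand-ins for HydPy's Model and Element classes,
# used only to illustrate the isinstance-based filtering pattern.
class ModelA:
    pass

class ModelB:
    pass

class FakeElement:
    def __init__(self, name, model):
        self.name = name
        self.model = model

def filter_by_modeltypes(elements, *types):
    """Return the names of all elements handling one of the given types."""
    typetuple = tuple(types)
    return [e.name for e in elements if isinstance(e.model, typetuple)]

elements = [
    FakeElement('land_dill', ModelA()),
    FakeElement('stream_dill', ModelB()),
    FakeElement('land_lahn', ModelA()),
]
print(filter_by_modeltypes(elements, ModelA))  # → ['land_dill', 'land_lahn']
```

Because `isinstance` accepts a tuple, passing several model types selects the union of all matching elements, just as the `MODELTYPES` example above does.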
def select_modeltypes(self, *models: ModelTypesArg) -> 'Selection':
"""Restrict the current |Selection| object to all elements
containing the given model types (removes all nodes).
See the documentation on method |Selection.search_modeltypes| for
additional information.
"""
self.nodes = devicetools.Nodes()
self.elements = self.search_modeltypes(*models).elements
return self
|
Restrict the current |Selection| object to all elements
containing the given model types (removes all nodes).
See the documentation on method |Selection.search_modeltypes| for
additional information.
|
entailment
|
def search_nodenames(self, *substrings: str, name: str = 'nodenames') -> \
'Selection':
"""Return a new selection containing all nodes of the current
selection with a name containing at least one of the given substrings.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, _ = prepare_full_example_2()
Pass the (sub)strings as positional arguments and, optionally, the
name of the newly created |Selection| object as a keyword argument:
>>> test = pub.selections.complete.copy('test')
>>> from hydpy import prepare_model
>>> test.search_nodenames('dill', 'lahn_1')
Selection("nodenames",
nodes=("dill", "lahn_1"),
elements=())
Wrong string specifications result in errors like the following:
>>> test.search_nodenames(['dill', 'lahn_1'])
Traceback (most recent call last):
...
TypeError: While trying to determine the nodes of selection \
`test` with names containing at least one of the given substrings \
`['dill', 'lahn_1']`, the following error occurred: 'in <string>' \
requires string as left operand, not list
Method |Selection.select_nodenames| restricts the current selection
to the one determined with the method |Selection.search_nodenames|:
>>> test.select_nodenames('dill', 'lahn_1')
Selection("test",
nodes=("dill", "lahn_1"),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"land_lahn_3", "stream_dill_lahn_2",
"stream_lahn_1_lahn_2", "stream_lahn_2_lahn_3"))
On the contrary, the method |Selection.deselect_nodenames| restricts
the current selection to all devices not determined by the method
|Selection.search_nodenames|:
>>> pub.selections.complete.deselect_nodenames('dill', 'lahn_1')
Selection("complete",
nodes=("lahn_2", "lahn_3"),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"land_lahn_3", "stream_dill_lahn_2",
"stream_lahn_1_lahn_2", "stream_lahn_2_lahn_3"))
"""
try:
selection = Selection(name)
for node in self.nodes:
for substring in substrings:
if substring in node.name:
selection.nodes += node
break
return selection
except BaseException:
values = objecttools.enumeration(substrings)
objecttools.augment_excmessage(
f'While trying to determine the nodes of selection '
f'`{self.name}` with names containing at least one '
f'of the given substrings `{values}`')
|
Return a new selection containing all nodes of the current
selection with a name containing at least one of the given substrings.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, _ = prepare_full_example_2()
Pass the (sub)strings as positional arguments and, optionally, the
name of the newly created |Selection| object as a keyword argument:
>>> test = pub.selections.complete.copy('test')
>>> from hydpy import prepare_model
>>> test.search_nodenames('dill', 'lahn_1')
Selection("nodenames",
nodes=("dill", "lahn_1"),
elements=())
Wrong string specifications result in errors like the following:
>>> test.search_nodenames(['dill', 'lahn_1'])
Traceback (most recent call last):
...
TypeError: While trying to determine the nodes of selection \
`test` with names containing at least one of the given substrings \
`['dill', 'lahn_1']`, the following error occurred: 'in <string>' \
requires string as left operand, not list
Method |Selection.select_nodenames| restricts the current selection
to the one determined with the method |Selection.search_nodenames|:
>>> test.select_nodenames('dill', 'lahn_1')
Selection("test",
nodes=("dill", "lahn_1"),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"land_lahn_3", "stream_dill_lahn_2",
"stream_lahn_1_lahn_2", "stream_lahn_2_lahn_3"))
On the contrary, the method |Selection.deselect_nodenames| restricts
the current selection to all devices not determined by the method
|Selection.search_nodenames|:
>>> pub.selections.complete.deselect_nodenames('dill', 'lahn_1')
Selection("complete",
nodes=("lahn_2", "lahn_3"),
elements=("land_dill", "land_lahn_1", "land_lahn_2",
"land_lahn_3", "stream_dill_lahn_2",
"stream_lahn_1_lahn_2", "stream_lahn_2_lahn_3"))
|
entailment
|
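The name matching of |Selection.search_nodenames| boils down to a loop that keeps a device as soon as one substring matches, using `break` so the same name is never added twice. A minimal sketch of that logic (the function name `search_names` is hypothetical):

```python
def search_names(names, *substrings):
    """Return all names containing at least one of the given substrings."""
    found = []
    for name in names:
        for substring in substrings:
            if substring in name:
                found.append(name)
                break  # one match suffices; avoid adding the name twice
    return found

print(search_names(['dill', 'lahn_1', 'lahn_2', 'lahn_3'], 'dill', 'lahn_1'))
```

Passing a list instead of separate strings triggers the same `TypeError` shown in the doctest above, because the `in` operator on a string requires a string as its left operand.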
def select_nodenames(self, *substrings: str) -> 'Selection':
"""Restrict the current selection to all nodes with a name
containing at least one of the given substrings (does not
affect any elements).
See the documentation on method |Selection.search_nodenames| for
additional information.
"""
self.nodes = self.search_nodenames(*substrings).nodes
return self
|
Restrict the current selection to all nodes with a name
containing at least one of the given substrings (does not
affect any elements).
See the documentation on method |Selection.search_nodenames| for
additional information.
|
entailment
|
def deselect_nodenames(self, *substrings: str) -> 'Selection':
"""Restrict the current selection to all nodes with a name
not containing at least one of the given substrings (does not
affect any elements).
See the documentation on method |Selection.search_nodenames| for
additional information.
"""
self.nodes -= self.search_nodenames(*substrings).nodes
return self
|
Restrict the current selection to all nodes with a name
not containing at least one of the given substrings (does not
affect any elements).
See the documentation on method |Selection.search_nodenames| for
additional information.
|
entailment
|
def search_elementnames(self, *substrings: str,
name: str = 'elementnames') -> 'Selection':
"""Return a new selection containing all elements of the current
selection with a name containing at least one of the given substrings.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, _ = prepare_full_example_2()
Pass the (sub)strings as positional arguments and, optionally, the
name of the newly created |Selection| object as a keyword argument:
>>> test = pub.selections.complete.copy('test')
>>> from hydpy import prepare_model
>>> test.search_elementnames('dill', 'lahn_1')
Selection("elementnames",
nodes=(),
elements=("land_dill", "land_lahn_1", "stream_dill_lahn_2",
"stream_lahn_1_lahn_2"))
Wrong string specifications result in errors like the following:
>>> test.search_elementnames(['dill', 'lahn_1'])
Traceback (most recent call last):
...
TypeError: While trying to determine the elements of selection \
`test` with names containing at least one of the given substrings \
`['dill', 'lahn_1']`, the following error occurred: 'in <string>' \
requires string as left operand, not list
Method |Selection.select_elementnames| restricts the current selection
to the one determined with the method |Selection.search_elementnames|:
>>> test.select_elementnames('dill', 'lahn_1')
Selection("test",
nodes=("dill", "lahn_1", "lahn_2", "lahn_3"),
elements=("land_dill", "land_lahn_1", "stream_dill_lahn_2",
"stream_lahn_1_lahn_2"))
On the contrary, the method |Selection.deselect_elementnames|
restricts the current selection to all devices not determined
by the method |Selection.search_elementnames|:
>>> pub.selections.complete.deselect_elementnames('dill', 'lahn_1')
Selection("complete",
nodes=("dill", "lahn_1", "lahn_2", "lahn_3"),
elements=("land_lahn_2", "land_lahn_3",
"stream_lahn_2_lahn_3"))
"""
try:
selection = Selection(name)
for element in self.elements:
for substring in substrings:
if substring in element.name:
selection.elements += element
break
return selection
except BaseException:
values = objecttools.enumeration(substrings)
objecttools.augment_excmessage(
f'While trying to determine the elements of selection '
f'`{self.name}` with names containing at least one '
f'of the given substrings `{values}`')
|
Return a new selection containing all elements of the current
selection with a name containing at least one of the given substrings.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, _ = prepare_full_example_2()
Pass the (sub)strings as positional arguments and, optionally, the
name of the newly created |Selection| object as a keyword argument:
>>> test = pub.selections.complete.copy('test')
>>> from hydpy import prepare_model
>>> test.search_elementnames('dill', 'lahn_1')
Selection("elementnames",
nodes=(),
elements=("land_dill", "land_lahn_1", "stream_dill_lahn_2",
"stream_lahn_1_lahn_2"))
Wrong string specifications result in errors like the following:
>>> test.search_elementnames(['dill', 'lahn_1'])
Traceback (most recent call last):
...
TypeError: While trying to determine the elements of selection \
`test` with names containing at least one of the given substrings \
`['dill', 'lahn_1']`, the following error occurred: 'in <string>' \
requires string as left operand, not list
Method |Selection.select_elementnames| restricts the current selection
to the one determined with the method |Selection.search_elementnames|:
>>> test.select_elementnames('dill', 'lahn_1')
Selection("test",
nodes=("dill", "lahn_1", "lahn_2", "lahn_3"),
elements=("land_dill", "land_lahn_1", "stream_dill_lahn_2",
"stream_lahn_1_lahn_2"))
On the contrary, the method |Selection.deselect_elementnames|
restricts the current selection to all devices not determined
by the method |Selection.search_elementnames|:
>>> pub.selections.complete.deselect_elementnames('dill', 'lahn_1')
Selection("complete",
nodes=("dill", "lahn_1", "lahn_2", "lahn_3"),
elements=("land_lahn_2", "land_lahn_3",
"stream_lahn_2_lahn_3"))
|
entailment
|
def select_elementnames(self, *substrings: str) -> 'Selection':
"""Restrict the current selection to all elements with a name
containing at least one of the given substrings (does not
affect any nodes).
See the documentation on method |Selection.search_elementnames| for
additional information.
"""
self.elements = self.search_elementnames(*substrings).elements
return self
|
Restrict the current selection to all elements with a name
containing at least one of the given substrings (does not
affect any nodes).
See the documentation on method |Selection.search_elementnames| for
additional information.
|
entailment
|
def deselect_elementnames(self, *substrings: str) -> 'Selection':
"""Restrict the current selection to all elements with a name
not containing at least one of the given substrings (does
not affect any nodes).
See the documentation on method |Selection.search_elementnames| for
additional information.
"""
self.elements -= self.search_elementnames(*substrings).elements
return self
|
Restrict the current selection to all elements with a name
not containing at least one of the given substrings (does
not affect any nodes).
See the documentation on method |Selection.search_elementnames| for
additional information.
|
entailment
|
def copy(self, name: str) -> 'Selection':
"""Return a new |Selection| object with the given name and copies
of the handled |Nodes| and |Elements| objects based on method
|Devices.copy|."""
return type(self)(name, copy.copy(self.nodes), copy.copy(self.elements))
|
Return a new |Selection| object with the given name and copies
of the handled |Nodes| and |Elements| objects based on method
|Devices.copy|.
|
entailment
|
def save_networkfile(self, filepath: Union[str, None] = None,
write_nodes: bool = True) -> None:
"""Save the selection as a network file.
>>> from hydpy.core.examples import prepare_full_example_2
>>> _, pub, TestIO = prepare_full_example_2()
In most cases, one should conveniently write network files via method
|NetworkManager.save_files| of class |NetworkManager|. However,
using the method |Selection.save_networkfile| allows for additional
configuration via the arguments `filepath` and `write_nodes`:
>>> with TestIO():
... pub.selections.headwaters.save_networkfile()
... with open('headwaters.py') as networkfile:
... print(networkfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy import Node, Element
<BLANKLINE>
<BLANKLINE>
Node("dill", variable="Q",
keywords="gauge")
<BLANKLINE>
Node("lahn_1", variable="Q",
keywords="gauge")
<BLANKLINE>
<BLANKLINE>
Element("land_dill",
outlets="dill",
keywords="catchment")
<BLANKLINE>
Element("land_lahn_1",
outlets="lahn_1",
keywords="catchment")
<BLANKLINE>
>>> with TestIO():
... pub.selections.headwaters.save_networkfile('test.py', False)
... with open('test.py') as networkfile:
... print(networkfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy import Node, Element
<BLANKLINE>
<BLANKLINE>
Element("land_dill",
outlets="dill",
keywords="catchment")
<BLANKLINE>
Element("land_lahn_1",
outlets="lahn_1",
keywords="catchment")
<BLANKLINE>
"""
if filepath is None:
filepath = self.name + '.py'
with open(filepath, 'w', encoding="utf-8") as file_:
file_.write('# -*- coding: utf-8 -*-\n')
file_.write('\nfrom hydpy import Node, Element\n\n')
if write_nodes:
for node in self.nodes:
file_.write('\n' + repr(node) + '\n')
file_.write('\n')
for element in self.elements:
file_.write('\n' + repr(element) + '\n')
|
Save the selection as a network file.
>>> from hydpy.core.examples import prepare_full_example_2
>>> _, pub, TestIO = prepare_full_example_2()
In most cases, one should conveniently write network files via method
|NetworkManager.save_files| of class |NetworkManager|. However,
using the method |Selection.save_networkfile| allows for additional
configuration via the arguments `filepath` and `write_nodes`:
>>> with TestIO():
... pub.selections.headwaters.save_networkfile()
... with open('headwaters.py') as networkfile:
... print(networkfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy import Node, Element
<BLANKLINE>
<BLANKLINE>
Node("dill", variable="Q",
keywords="gauge")
<BLANKLINE>
Node("lahn_1", variable="Q",
keywords="gauge")
<BLANKLINE>
<BLANKLINE>
Element("land_dill",
outlets="dill",
keywords="catchment")
<BLANKLINE>
Element("land_lahn_1",
outlets="lahn_1",
keywords="catchment")
<BLANKLINE>
>>> with TestIO():
... pub.selections.headwaters.save_networkfile('test.py', False)
... with open('test.py') as networkfile:
... print(networkfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy import Node, Element
<BLANKLINE>
<BLANKLINE>
Element("land_dill",
outlets="dill",
keywords="catchment")
<BLANKLINE>
Element("land_lahn_1",
outlets="lahn_1",
keywords="catchment")
<BLANKLINE>
|
entailment
|
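|Selection.save_networkfile| relies on each device's `repr` being a valid Python expression that recreates the device, so a network file is just a fixed header plus one `repr` per device. A reduced sketch of the writing logic, using an in-memory buffer and a hypothetical `FakeNode` stand-in (elements are omitted for brevity):

```python
import io

class FakeNode:
    """Hypothetical stand-in for a HydPy Node (illustration only)."""

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return f'Node("{self.name}", variable="Q")'

def write_networkfile(file_, nodes, write_nodes=True):
    # Mirror of the header and per-device repr lines written by
    # Selection.save_networkfile.
    file_.write('# -*- coding: utf-8 -*-\n')
    file_.write('\nfrom hydpy import Node, Element\n\n')
    if write_nodes:
        for node in nodes:
            file_.write('\n' + repr(node) + '\n')

buffer = io.StringIO()
write_networkfile(buffer, [FakeNode('dill'), FakeNode('lahn_1')])
print(buffer.getvalue())
```

Setting `write_nodes=False` skips the node section entirely, matching the second doctest above, where only the elements remain.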
def assignrepr(self, prefix: str) -> str:
"""Return a |repr| string with a prefixed assignment."""
with objecttools.repr_.preserve_strings(True):
with hydpy.pub.options.ellipsis(2, optional=True):
with objecttools.assignrepr_tuple.always_bracketed(False):
classname = objecttools.classname(self)
blanks = ' ' * (len(prefix+classname) + 1)
nodestr = objecttools.assignrepr_tuple(
self.nodes.names, blanks+'nodes=', 70)
elementstr = objecttools.assignrepr_tuple(
self.elements.names, blanks + 'elements=', 70)
return (f'{prefix}{classname}("{self.name}",\n'
f'{nodestr},\n'
f'{elementstr})')
|
Return a |repr| string with a prefixed assignment.
|
entailment
|
def calc_qpin_v1(self):
"""Calculate the input discharge portions of the different response
functions.
Required derived parameters:
|Nmb|
|MaxQ|
|DiffQ|
Required flux sequence:
|QIn|
Calculated flux sequences:
|QPIn|
Examples:
Initialize an arma model with three different response functions:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb = 3
>>> derived.maxq.shape = 3
>>> derived.diffq.shape = 2
>>> fluxes.qpin.shape = 3
Define the maximum discharge value of the respective response
functions and their successive differences:
>>> derived.maxq(0.0, 2.0, 6.0)
>>> derived.diffq(2., 4.)
The first six examples are performed for inflow values ranging from
0 to 12 m³/s:
>>> from hydpy import UnitTest
>>> test = UnitTest(
... model, model.calc_qpin_v1,
... last_example=6,
... parseqs=(fluxes.qin, fluxes.qpin))
>>> test.nexts.qin = 0., 1., 2., 4., 6., 12.
>>> test()
| ex. | qin | qpin |
-------------------------------
| 1 | 0.0 | 0.0 0.0 0.0 |
| 2 | 1.0 | 1.0 0.0 0.0 |
| 3 | 2.0 | 2.0 0.0 0.0 |
| 4 | 4.0 | 2.0 2.0 0.0 |
| 5 | 6.0 | 2.0 4.0 0.0 |
| 6 | 12.0 | 2.0 4.0 6.0 |
The following two additional examples demonstrate that method
|calc_qpin_v1| also functions properly if there is only one
response function, in which case the total discharge does not
need to be divided:
>>> derived.nmb = 1
>>> derived.maxq.shape = 1
>>> derived.diffq.shape = 0
>>> fluxes.qpin.shape = 1
>>> derived.maxq(0.)
>>> test = UnitTest(
... model, model.calc_qpin_v1,
... first_example=7, last_example=8,
... parseqs=(fluxes.qin,
... fluxes.qpin))
>>> test.nexts.qin = 0., 12.
>>> test()
| ex. | qin | qpin |
---------------------
| 7 | 0.0 | 0.0 |
| 8 | 12.0 | 12.0 |
"""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
for idx in range(der.nmb-1):
if flu.qin < der.maxq[idx]:
flu.qpin[idx] = 0.
elif flu.qin < der.maxq[idx+1]:
flu.qpin[idx] = flu.qin-der.maxq[idx]
else:
flu.qpin[idx] = der.diffq[idx]
flu.qpin[der.nmb-1] = max(flu.qin-der.maxq[der.nmb-1], 0.)
|
Calculate the input discharge portions of the different response
functions.
Required derived parameters:
|Nmb|
|MaxQ|
|DiffQ|
Required flux sequence:
|QIn|
Calculated flux sequences:
|QPIn|
Examples:
Initialize an arma model with three different response functions:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb = 3
>>> derived.maxq.shape = 3
>>> derived.diffq.shape = 2
>>> fluxes.qpin.shape = 3
Define the maximum discharge value of the respective response
functions and their successive differences:
>>> derived.maxq(0.0, 2.0, 6.0)
>>> derived.diffq(2., 4.)
The first six examples are performed for inflow values ranging from
0 to 12 m³/s:
>>> from hydpy import UnitTest
>>> test = UnitTest(
... model, model.calc_qpin_v1,
... last_example=6,
... parseqs=(fluxes.qin, fluxes.qpin))
>>> test.nexts.qin = 0., 1., 2., 4., 6., 12.
>>> test()
| ex. | qin | qpin |
-------------------------------
| 1 | 0.0 | 0.0 0.0 0.0 |
| 2 | 1.0 | 1.0 0.0 0.0 |
| 3 | 2.0 | 2.0 0.0 0.0 |
| 4 | 4.0 | 2.0 2.0 0.0 |
| 5 | 6.0 | 2.0 4.0 0.0 |
| 6 | 12.0 | 2.0 4.0 6.0 |
The following two additional examples demonstrate that method
|calc_qpin_v1| also functions properly if there is only one
response function, in which case the total discharge does not
need to be divided:
>>> derived.nmb = 1
>>> derived.maxq.shape = 1
>>> derived.diffq.shape = 0
>>> fluxes.qpin.shape = 1
>>> derived.maxq(0.)
>>> test = UnitTest(
... model, model.calc_qpin_v1,
... first_example=7, last_example=8,
... parseqs=(fluxes.qin,
... fluxes.qpin))
>>> test.nexts.qin = 0., 12.
>>> test()
| ex. | qin | qpin |
---------------------
| 7 | 0.0 | 0.0 |
| 8 | 12.0 | 12.0 |
|
entailment
|
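The threshold logic of |calc_qpin_v1| can be reproduced outside the model framework. The sketch below (the function name `split_inflow` is hypothetical) fills each branch up to its capacity `diffq[idx]` and routes the unbounded remainder into the last branch:

```python
def split_inflow(qin, maxq, diffq):
    """Split total inflow into portions for the response functions.

    `maxq` holds the lower discharge threshold of each branch and
    `diffq` the differences between successive thresholds.
    """
    nmb = len(maxq)
    qpin = [0.0] * nmb
    for idx in range(nmb - 1):
        if qin < maxq[idx]:
            qpin[idx] = 0.0              # branch not yet activated
        elif qin < maxq[idx + 1]:
            qpin[idx] = qin - maxq[idx]  # branch partly filled
        else:
            qpin[idx] = diffq[idx]       # branch completely filled
    # the highest branch takes the unbounded remainder:
    qpin[nmb - 1] = max(qin - maxq[nmb - 1], 0.0)
    return qpin

print(split_inflow(4.0, [0.0, 2.0, 6.0], [2.0, 4.0]))  # → [2.0, 2.0, 0.0]
```

With a single branch (`maxq` of length one, empty `diffq`), the loop body never runs and the whole inflow passes through, matching examples 7 and 8 above.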
def calc_login_v1(self):
"""Refresh the input log sequence for the different MA processes.
Required derived parameters:
|Nmb|
|MA_Order|
Required flux sequence:
|QPIn|
Updated log sequence:
|LogIn|
Example:
Assume there are three response functions, involving one, two and
three MA coefficients respectively:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(3)
>>> derived.ma_order.shape = 3
>>> derived.ma_order = 1, 2, 3
>>> fluxes.qpin.shape = 3
>>> logs.login.shape = (3, 3)
The "memory values" of the different MA processes are defined as
follows (one row for each process):
>>> logs.login = ((1.0, nan, nan),
... (2.0, 3.0, nan),
... (4.0, 5.0, 6.0))
These are the new inflow discharge portions to be included into
the memories of the different processes:
>>> fluxes.qpin = 7.0, 8.0, 9.0
Applying method |calc_login_v1| shifts all existing values to
the right ("into the past"). Values that are no longer required
due to the limited orders of the different MA processes are
discarded. The new values are inserted in the first column:
>>> model.calc_login_v1()
>>> logs.login
login([[7.0, nan, nan],
[8.0, 2.0, nan],
[9.0, 4.0, 5.0]])
"""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
log = self.sequences.logs.fastaccess
for idx in range(der.nmb):
for jdx in range(der.ma_order[idx]-2, -1, -1):
log.login[idx, jdx+1] = log.login[idx, jdx]
for idx in range(der.nmb):
log.login[idx, 0] = flu.qpin[idx]
|
Refresh the input log sequence for the different MA processes.
Required derived parameters:
|Nmb|
|MA_Order|
Required flux sequence:
|QPIn|
Updated log sequence:
|LogIn|
Example:
Assume there are three response functions, involving one, two and
three MA coefficients respectively:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(3)
>>> derived.ma_order.shape = 3
>>> derived.ma_order = 1, 2, 3
>>> fluxes.qpin.shape = 3
>>> logs.login.shape = (3, 3)
The "memory values" of the different MA processes are defined as
follows (one row for each process):
>>> logs.login = ((1.0, nan, nan),
... (2.0, 3.0, nan),
... (4.0, 5.0, 6.0))
These are the new inflow discharge portions to be included into
the memories of the different processes:
>>> fluxes.qpin = 7.0, 8.0, 9.0
Applying method |calc_login_v1| shifts all existing values to
the right ("into the past"). Values that are no longer required
due to the limited orders of the different MA processes are
discarded. The new values are inserted in the first column:
>>> model.calc_login_v1()
>>> logs.login
login([[7.0, nan, nan],
[8.0, 2.0, nan],
[9.0, 4.0, 5.0]])
|
entailment
|
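The in-place shifting of |calc_login_v1| can be sketched with plain nested lists, using `None` in place of the `nan` entries (the function name `shift_log` is hypothetical):

```python
def shift_log(login, orders, new_values):
    """Shift each log row one step into the past and insert new values.

    Each row `idx` keeps only its first `orders[idx]` entries; the
    backwards inner loop moves them one column to the right, so the
    oldest still-required value drops out of the active range.
    """
    for idx, order in enumerate(orders):
        for jdx in range(order - 2, -1, -1):
            login[idx][jdx + 1] = login[idx][jdx]
    for idx, value in enumerate(new_values):
        login[idx][0] = value
    return login

login = [[1.0, None, None],
         [2.0, 3.0, None],
         [4.0, 5.0, 6.0]]
print(shift_log(login, [1, 2, 3], [7.0, 8.0, 9.0]))
```

Iterating the inner loop backwards is essential: shifting from left to right would overwrite values before they are copied.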
def calc_qma_v1(self):
"""Calculate the discharge responses of the different MA processes.
Required derived parameters:
|Nmb|
|MA_Order|
|MA_Coefs|
Required log sequence:
|LogIn|
Calculated flux sequence:
|QMA|
Examples:
Assume there are three response functions, involving one, two and
three MA coefficients respectively:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(3)
>>> derived.ma_order.shape = 3
>>> derived.ma_order = 1, 2, 3
>>> derived.ma_coefs.shape = (3, 3)
>>> logs.login.shape = (3, 3)
>>> fluxes.qma.shape = 3
The coefficients of the different MA processes are stored in
separate rows of the 2-dimensional parameter `ma_coefs`:
>>> derived.ma_coefs = ((1.0, nan, nan),
... (0.8, 0.2, nan),
... (0.5, 0.3, 0.2))
The "memory values" of the different MA processes are defined as
follows (one row for each process). The current values are stored
in the first column, the values of the last time step in the second
column, and so on:
>>> logs.login = ((1.0, nan, nan),
... (2.0, 3.0, nan),
... (4.0, 5.0, 6.0))
Applying method |calc_qma_v1| is equivalent to calculating the
inner product of the different rows of both matrices:
>>> model.calc_qma_v1()
>>> fluxes.qma
qma(1.0, 2.2, 4.7)
"""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
log = self.sequences.logs.fastaccess
for idx in range(der.nmb):
flu.qma[idx] = 0.
for jdx in range(der.ma_order[idx]):
flu.qma[idx] += der.ma_coefs[idx, jdx] * log.login[idx, jdx]
|
Calculate the discharge responses of the different MA processes.
Required derived parameters:
|Nmb|
|MA_Order|
|MA_Coefs|
Required log sequence:
|LogIn|
Calculated flux sequence:
|QMA|
Examples:
Assume there are three response functions, involving one, two and
three MA coefficients respectively:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(3)
>>> derived.ma_order.shape = 3
>>> derived.ma_order = 1, 2, 3
>>> derived.ma_coefs.shape = (3, 3)
>>> logs.login.shape = (3, 3)
>>> fluxes.qma.shape = 3
The coefficients of the different MA processes are stored in
separate rows of the 2-dimensional parameter `ma_coefs`:
>>> derived.ma_coefs = ((1.0, nan, nan),
... (0.8, 0.2, nan),
... (0.5, 0.3, 0.2))
The "memory values" of the different MA processes are defined as
follows (one row for each process). The current values are stored
in the first column, the values of the last time step in the second
column, and so on:
>>> logs.login = ((1.0, nan, nan),
... (2.0, 3.0, nan),
... (4.0, 5.0, 6.0))
Applying method |calc_qma_v1| is equivalent to calculating the
inner product of the different rows of both matrices:
>>> model.calc_qma_v1()
>>> fluxes.qma
qma(1.0, 2.2, 4.7)
|
entailment
|
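The MA response (and likewise the AR response of |calc_qar_v1|) is a row-wise inner product truncated to each process's order. A standalone sketch (the name `weighted_response` is hypothetical):

```python
def weighted_response(coefs, logged, orders):
    """Row-wise inner product over the first `order` entries of each row."""
    return [
        sum(coefs[idx][jdx] * logged[idx][jdx]
            for jdx in range(orders[idx]))
        for idx in range(len(orders))
    ]

coefs = [[1.0, None, None],
         [0.8, 0.2, None],
         [0.5, 0.3, 0.2]]
logged = [[1.0, None, None],
          [2.0, 3.0, None],
          [4.0, 5.0, 6.0]]
print(weighted_response(coefs, logged, [1, 2, 3]))
```

The `None` placeholders are never touched because each inner sum stops at the process's order. With an order of zero, as for the first AR process in |calc_qar_v1|, the inner sum is empty and the response is zero.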
def calc_qar_v1(self):
"""Calculate the discharge responses of the different AR processes.
Required derived parameters:
|Nmb|
|AR_Order|
|AR_Coefs|
Required log sequence:
|LogOut|
Calculated flux sequence:
|QAR|
Examples:
Assume there are four response functions, involving zero, one, two,
and three AR coefficients respectively:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(4)
>>> derived.ar_order.shape = 4
>>> derived.ar_order = 0, 1, 2, 3
>>> derived.ar_coefs.shape = (4, 3)
>>> logs.logout.shape = (4, 3)
>>> fluxes.qar.shape = 4
The coefficients of the different AR processes are stored in
separate rows of the 2-dimensional parameter `ar_coefs`.
Note the special case of the first AR process of zero order
(first row), which involves no autoregressive memory at all:
>>> derived.ar_coefs = ((nan, nan, nan),
... (1.0, nan, nan),
... (0.8, 0.2, nan),
... (0.5, 0.3, 0.2))
The "memory values" of the different AR processes are defined as
follows (one row for each process). The values of the last time
step are stored in the first column, the values of the time step
before in the second column, and so on:
>>> logs.logout = ((nan, nan, nan),
... (1.0, nan, nan),
... (2.0, 3.0, nan),
... (4.0, 5.0, 6.0))
Applying method |calc_qar_v1| is equivalent to calculating the
inner product of the different rows of both matrices:
>>> model.calc_qar_v1()
>>> fluxes.qar
qar(0.0, 1.0, 2.2, 4.7)
"""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
log = self.sequences.logs.fastaccess
for idx in range(der.nmb):
flu.qar[idx] = 0.
for jdx in range(der.ar_order[idx]):
flu.qar[idx] += der.ar_coefs[idx, jdx] * log.logout[idx, jdx]
|
Calculate the discharge responses of the different AR processes.
Required derived parameters:
|Nmb|
|AR_Order|
|AR_Coefs|
Required log sequence:
|LogOut|
Calculated flux sequence:
|QAR|
Examples:
Assume there are four response functions, involving zero, one, two,
and three AR coefficients respectively:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(4)
>>> derived.ar_order.shape = 4
>>> derived.ar_order = 0, 1, 2, 3
>>> derived.ar_coefs.shape = (4, 3)
>>> logs.logout.shape = (4, 3)
>>> fluxes.qar.shape = 4
The coefficients of the different AR processes are stored in
separate rows of the 2-dimensional parameter `ar_coefs`.
Note the special case of the first AR process of zero order
(first row), which involves no autoregressive memory at all:
>>> derived.ar_coefs = ((nan, nan, nan),
... (1.0, nan, nan),
... (0.8, 0.2, nan),
... (0.5, 0.3, 0.2))
The "memory values" of the different AR processes are defined as
follows (one row for each process). The values of the last time
step are stored in the first column, the values of the time step
before in the second column, and so on:
>>> logs.logout = ((nan, nan, nan),
... (1.0, nan, nan),
... (2.0, 3.0, nan),
... (4.0, 5.0, 6.0))
Applying method |calc_qar_v1| is equivalent to calculating the
inner product of the different rows of both matrices:
>>> model.calc_qar_v1()
>>> fluxes.qar
qar(0.0, 1.0, 2.2, 4.7)
|
entailment
|
def calc_qpout_v1(self):
"""Calculate the ARMA results for the different response functions.
Required derived parameter:
|Nmb|
Required flux sequences:
|QMA|
|QAR|
Calculated flux sequence:
|QPOut|
Examples:
Initialize an arma model with three different response functions:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(3)
>>> fluxes.qma.shape = 3
>>> fluxes.qar.shape = 3
>>> fluxes.qpout.shape = 3
Define the output values of the MA and of the AR processes
associated with the three response functions and apply
method |calc_qpout_v1|:
>>> fluxes.qar = 4.0, 5.0, 6.0
>>> fluxes.qma = 1.0, 2.0, 3.0
>>> model.calc_qpout_v1()
>>> fluxes.qpout
qpout(5.0, 7.0, 9.0)
"""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
for idx in range(der.nmb):
flu.qpout[idx] = flu.qma[idx]+flu.qar[idx]
|
Calculate the ARMA results for the different response functions.
Required derived parameter:
|Nmb|
Required flux sequences:
|QMA|
|QAR|
Calculated flux sequence:
|QPOut|
Examples:
Initialize an arma model with three different response functions:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(3)
>>> fluxes.qma.shape = 3
>>> fluxes.qar.shape = 3
>>> fluxes.qpout.shape = 3
Define the output values of the MA and of the AR processes
associated with the three response functions and apply
method |calc_qpout_v1|:
>>> fluxes.qar = 4.0, 5.0, 6.0
>>> fluxes.qma = 1.0, 2.0, 3.0
>>> model.calc_qpout_v1()
>>> fluxes.qpout
qpout(5.0, 7.0, 9.0)
|
entailment
|
def calc_logout_v1(self):
"""Refresh the log sequence for the different AR processes.
Required derived parameters:
|Nmb|
|AR_Order|
Required flux sequence:
|QPOut|
Updated log sequence:
|LogOut|
Example:
Assume there are four response functions, involving zero, one, two
and three AR coefficients respectively:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(4)
>>> derived.ar_order.shape = 4
>>> derived.ar_order = 0, 1, 2, 3
>>> fluxes.qpout.shape = 4
>>> logs.logout.shape = (4, 3)
The "memory values" of the different AR processes are defined as
follows (one row for each process). Note the special case of the
first AR process of zero order (first row), for which no
autoregressive memory values are required:
>>> logs.logout = ((nan, nan, nan),
... (0.0, nan, nan),
... (1.0, 2.0, nan),
... (3.0, 4.0, 5.0))
These are the new outflow discharge portions to be included into
the memories of the different processes:
>>> fluxes.qpout = 6.0, 7.0, 8.0, 9.0
Applying method |calc_logout_v1| shifts all existing values
to the right ("into the past"). Values that are no longer
required due to the limited order of the respective AR process
are discarded. The new values are inserted in the
first column:
>>> model.calc_logout_v1()
>>> logs.logout
logout([[nan, nan, nan],
[7.0, nan, nan],
[8.0, 1.0, nan],
[9.0, 3.0, 4.0]])
"""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
log = self.sequences.logs.fastaccess
for idx in range(der.nmb):
for jdx in range(der.ar_order[idx]-2, -1, -1):
log.logout[idx, jdx+1] = log.logout[idx, jdx]
for idx in range(der.nmb):
if der.ar_order[idx] > 0:
log.logout[idx, 0] = flu.qpout[idx]
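The shifting logic can be sketched in plain NumPy (a standalone illustration with names mirroring the doctest above; not the real model API):

```python
import numpy as np

# Doctest values from above: one memory row per AR process.
ar_order = [0, 1, 2, 3]
logout = np.array([[np.nan, np.nan, np.nan],
                   [0.0, np.nan, np.nan],
                   [1.0, 2.0, np.nan],
                   [3.0, 4.0, 5.0]])
qpout = [6.0, 7.0, 8.0, 9.0]

for i, order in enumerate(ar_order):
    # Shift the existing memory values one column "into the past",
    # discarding the value that falls out of the required order.
    for j in range(order - 2, -1, -1):
        logout[i, j + 1] = logout[i, j]
    # Insert the new outflow portion in the first column.
    if order > 0:
        logout[i, 0] = qpout[i]
```

Afterwards the rows read `[nan, nan, nan]`, `[7, nan, nan]`, `[8, 1, nan]`, and `[9, 3, 4]`, matching the doctest.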
|
Refresh the log sequence for the different AR processes.
Required derived parameters:
|Nmb|
|AR_Order|
Required flux sequence:
|QPOut|
Updated log sequence:
|LogOut|
Example:
Assume there are four response functions, involving zero, one, two
and three AR coefficients respectively:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(4)
>>> derived.ar_order.shape = 4
>>> derived.ar_order = 0, 1, 2, 3
>>> fluxes.qpout.shape = 4
>>> logs.logout.shape = (4, 3)
The "memory values" of the different AR processes are defined as
follows (one row for each process). Note the special case of the
first AR process of zero order (first row), for which no
autoregressive memory values are required:
>>> logs.logout = ((nan, nan, nan),
... (0.0, nan, nan),
... (1.0, 2.0, nan),
... (3.0, 4.0, 5.0))
These are the new outflow discharge portions to be included into
the memories of the different processes:
>>> fluxes.qpout = 6.0, 7.0, 8.0, 9.0
Applying method |calc_logout_v1| shifts all existing values
to the right ("into the past"). Values that are no longer
required due to the limited order of the respective AR process
are discarded. The new values are inserted in the
first column:
>>> model.calc_logout_v1()
>>> logs.logout
logout([[nan, nan, nan],
[7.0, nan, nan],
[8.0, 1.0, nan],
[9.0, 3.0, 4.0]])
|
entailment
|
def calc_qout_v1(self):
"""Sum up the results of the different response functions.
Required derived parameter:
|Nmb|
Required flux sequences:
|QPOut|
Calculated flux sequence:
|QOut|
Examples:
Initialize an arma model with three different response functions:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(3)
>>> fluxes.qpout.shape = 3
Define the output values of the three response functions and
apply method |calc_qout_v1|:
>>> fluxes.qpout = 1.0, 2.0, 3.0
>>> model.calc_qout_v1()
>>> fluxes.qout
qout(6.0)
"""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
flu.qout = 0.
for idx in range(der.nmb):
flu.qout += flu.qpout[idx]
|
Sum up the results of the different response functions.
Required derived parameter:
|Nmb|
Required flux sequences:
|QPOut|
Calculated flux sequence:
|QOut|
Examples:
Initialize an arma model with three different response functions:
>>> from hydpy.models.arma import *
>>> parameterstep()
>>> derived.nmb(3)
>>> fluxes.qpout.shape = 3
Define the output values of the three response functions and
apply method |calc_qout_v1|:
>>> fluxes.qpout = 1.0, 2.0, 3.0
>>> model.calc_qout_v1()
>>> fluxes.qout
qout(6.0)
|
entailment
|
def pick_q_v1(self):
"""Update inflow."""
flu = self.sequences.fluxes.fastaccess
inl = self.sequences.inlets.fastaccess
flu.qin = 0.
for idx in range(inl.len_q):
flu.qin += inl.q[idx][0]
|
Update inflow.
|
entailment
|
def update(self):
"""Determine the number of branches"""
con = self.subpars.pars.control
self(con.ypoints.shape[0])
|
Determine the number of branches
|
entailment
|
def update(self):
"""Update value based on :math:`HV=BBV/BNV`.
Required Parameters:
|BBV|
|BNV|
Examples:
>>> from hydpy.models.lstream import *
>>> parameterstep('1d')
>>> bbv(left=10., right=40.)
>>> bnv(left=10., right=20.)
>>> derived.hv.update()
>>> derived.hv
hv(left=1.0, right=2.0)
>>> bbv(left=10., right=0.)
>>> bnv(left=0., right=20.)
>>> derived.hv.update()
>>> derived.hv
hv(0.0)
"""
con = self.subpars.pars.control
self(0.)
for idx in range(2):
if (con.bbv[idx] > 0.) and (con.bnv[idx] > 0.):
self.values[idx] = con.bbv[idx]/con.bnv[idx]
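The rule can be sketched as a standalone function (hypothetical name `update_hv`; the guard against non-positive values reproduces the fallback to zero shown in the second doctest):

```python
# HV = BBV/BNV for each bank side (left, right), falling back to 0.0
# whenever BBV or BNV is not positive.
def update_hv(bbv, bnv):
    return [b / n if (b > 0.0 and n > 0.0) else 0.0
            for b, n in zip(bbv, bnv)]
```

With the doctest values, `update_hv([10.0, 40.0], [10.0, 20.0])` yields `[1.0, 2.0]`, and any side with a zero input yields `0.0`.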
|
Update value based on :math:`HV=BBV/BNV`.
Required Parameters:
|BBV|
|BNV|
Examples:
>>> from hydpy.models.lstream import *
>>> parameterstep('1d')
>>> bbv(left=10., right=40.)
>>> bnv(left=10., right=20.)
>>> derived.hv.update()
>>> derived.hv
hv(left=1.0, right=2.0)
>>> bbv(left=10., right=0.)
>>> bnv(left=0., right=20.)
>>> derived.hv.update()
>>> derived.hv
hv(0.0)
|
entailment
|
def update(self):
"""Update value based on the actual |calc_qg_v1| method.
Required derived parameter:
|H|
Note that the value of parameter |lstream_derived.QM| is directly
related to the value of parameter |HM| and indirectly related to
all parameter values relevant for method |calc_qg_v1|. Hence the
complete parameter (and sequence) requirements might differ
between application models.
For examples, see the documentation on method ToDo.
"""
mod = self.subpars.pars.model
con = mod.parameters.control
flu = mod.sequences.fluxes
flu.h = con.hm
mod.calc_qg()
self(flu.qg)
|
Update value based on the actual |calc_qg_v1| method.
Required derived parameter:
|H|
Note that the value of parameter |lstream_derived.QM| is directly
related to the value of parameter |HM| and indirectly related to
all parameter values relevant for method |calc_qg_v1|. Hence the
complete parameter (and sequence) requirements might differ
between application models.
For examples, see the documentation on method ToDo.
|
entailment
|
def update(self):
"""Determine in how many segments the whole reach needs to be
divided to approximate the desired lag time via integer rounding.
Adjusts the shape of sequence |QJoints| additionally.
Required control parameters:
|Lag|
Calculated derived parameters:
|NmbSegments|
Prepared state sequence:
|QJoints|
Examples:
Define a lag time of 1.4 days and a simulation step size of 12
hours:
>>> from hydpy.models.hstream import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> lag(1.4)
Then the actual lag value for the simulation step size is 2.8:
>>> lag
lag(1.4)
>>> lag.value
2.8
Through rounding the number of segments is determined:
>>> derived.nmbsegments.update()
>>> derived.nmbsegments
nmbsegments(3)
The number of joints is always the number of segments plus one:
>>> states.qjoints.shape
(4,)
"""
pars = self.subpars.pars
self(int(round(pars.control.lag)))
pars.model.sequences.states.qjoints.shape = self+1
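The rounding rule can be sketched independently of the framework (hypothetical function name; both arguments are assumed to be given in the same time unit):

```python
# The lag time, expressed in simulation steps, is rounded to the
# nearest integer; the number of joints is always one more than the
# number of segments.
def nmb_segments(lag, stepsize):
    """Both arguments in the same time unit, e.g. days."""
    return int(round(lag / stepsize))

segments = nmb_segments(1.4, 0.5)  # 2.8 steps rounds to 3 segments
joints = segments + 1              # hence 4 joints
```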
|
Determine in how many segments the whole reach needs to be
divided to approximate the desired lag time via integer rounding.
Adjusts the shape of sequence |QJoints| additionally.
Required control parameters:
|Lag|
Calculated derived parameters:
|NmbSegments|
Prepared state sequence:
|QJoints|
Examples:
Define a lag time of 1.4 days and a simulation step size of 12
hours:
>>> from hydpy.models.hstream import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> lag(1.4)
Then the actual lag value for the simulation step size is 2.8:
>>> lag
lag(1.4)
>>> lag.value
2.8
Through rounding the number of segments is determined:
>>> derived.nmbsegments.update()
>>> derived.nmbsegments
nmbsegments(3)
The number of joints is always the number of segments plus one:
>>> states.qjoints.shape
(4,)
|
entailment
|
def update(self):
"""Update |C1| based on :math:`c_1 = \\frac{Damp}{1+Damp}`.
Examples:
The first examples show the calculated value of |C1| for
the lowest possible value of |Damp|, the highest possible value,
and an intermediate value:
>>> from hydpy.models.hstream import *
>>> parameterstep('1d')
>>> damp(0.0)
>>> derived.c1.update()
>>> derived.c1
c1(0.0)
>>> damp(1.0)
>>> derived.c1.update()
>>> derived.c1
c1(0.5)
>>> damp(0.25)
>>> derived.c1.update()
>>> derived.c1
c1(0.2)
For too low and too high values of |Damp|, clipping is performed:
>>> damp.value = -0.1
>>> derived.c1.update()
>>> derived.c1
c1(0.0)
>>> damp.value = 1.1
>>> derived.c1.update()
>>> derived.c1
c1(0.5)
"""
damp = self.subpars.pars.control.damp
self(numpy.clip(damp/(1.+damp), 0., .5))
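The rule is easy to reproduce as a standalone function (hypothetical name `calc_c1`):

```python
# c1 = damp/(1 + damp), clipped to the interval [0.0, 0.5].
def calc_c1(damp):
    return min(max(damp / (1.0 + damp), 0.0), 0.5)
```

As in the doctests, `calc_c1(0.25)` gives `0.2`, while the out-of-range values `-0.1` and `1.1` are clipped to `0.0` and `0.5`, respectively.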
|
Update |C1| based on :math:`c_1 = \\frac{Damp}{1+Damp}`.
Examples:
The first examples show the calculated value of |C1| for
the lowest possible value of |Damp|, the highest possible value,
and an intermediate value:
>>> from hydpy.models.hstream import *
>>> parameterstep('1d')
>>> damp(0.0)
>>> derived.c1.update()
>>> derived.c1
c1(0.0)
>>> damp(1.0)
>>> derived.c1.update()
>>> derived.c1
c1(0.5)
>>> damp(0.25)
>>> derived.c1.update()
>>> derived.c1
c1(0.2)
For too low and too high values of |Damp|, clipping is performed:
>>> damp.value = -0.1
>>> derived.c1.update()
>>> derived.c1
c1(0.0)
>>> damp.value = 1.1
>>> derived.c1.update()
>>> derived.c1
c1(0.5)
|
entailment
|
def update(self):
"""Update |C2| based on :math:`c_2 = 1.-c_1-c_3`.
Examples:
The following examples show that the calculated value of |C2| is
clipped when too low or too high:
>>> from hydpy.models.hstream import *
>>> parameterstep('1d')
>>> derived.c1 = 0.6
>>> derived.c3 = 0.1
>>> derived.c2.update()
>>> derived.c2
c2(0.3)
>>> derived.c1 = 1.6
>>> derived.c2.update()
>>> derived.c2
c2(0.0)
>>> derived.c1 = -1.6
>>> derived.c2.update()
>>> derived.c2
c2(1.0)
"""
der = self.subpars
self(numpy.clip(1. - der.c1 - der.c3, 0., 1.))
|
Update |C2| based on :math:`c_2 = 1.-c_1-c_3`.
Examples:
The following examples show that the calculated value of |C2| is
clipped when too low or too high:
>>> from hydpy.models.hstream import *
>>> parameterstep('1d')
>>> derived.c1 = 0.6
>>> derived.c3 = 0.1
>>> derived.c2.update()
>>> derived.c2
c2(0.3)
>>> derived.c1 = 1.6
>>> derived.c2.update()
>>> derived.c2
c2(0.0)
>>> derived.c1 = -1.6
>>> derived.c2.update()
>>> derived.c2
c2(1.0)
|
entailment
|
def view(data, enc=None, start_pos=None, delimiter=None, hdr_rows=None,
idx_cols=None, sheet_index=0, transpose=False, wait=None,
recycle=None, detach=None, metavar=None, title=None):
"""View the supplied data in an interactive, graphical table widget.
data: When a valid path or IO object, read it as a tabular text file. When
a valid URI, a Blaze object is constructed and visualized. Any other
supported datatype is visualized directly and incrementally *without
copying*.
enc: File encoding (such as "utf-8", normally autodetected).
delimiter: Text file delimiter (normally autodetected).
hdr_rows: For files or lists of lists, specify the number of header rows.
For files only, a default of one header line is assumed.
idx_cols: For files or lists of lists, specify the number of index columns.
By default, no index is assumed.
sheet_index: For multi-table files (such as xls[x]), specify the sheet
index to read, starting from 0. Defaults to the first.
start_pos: A tuple of the form (y, x) specifying the initial cursor
position. Negative offsets count from the end of the dataset.
transpose: Transpose the resulting view.
metavar: name of the variable being shown for display purposes (inferred
automatically when possible).
title: title of the data window.
wait: Wait for the user to close the view before returning. By default, try
to match the behavior of ``matplotlib.is_interactive()``. If
matplotlib is not loaded, wait only if ``detach`` is also False. The
default value can also be set through ``gtabview.WAIT``.
recycle: Recycle the previous window instead of creating a new one. The
default is True, and can also be set through ``gtabview.RECYCLE``.
detach: Create a fully detached GUI thread for interactive use (note: this
is *not* necessary if matplotlib is loaded). The default is False,
and can also be set through ``gtabview.DETACH``.
"""
global WAIT, RECYCLE, DETACH, VIEW
model = read_model(data, enc=enc, delimiter=delimiter, hdr_rows=hdr_rows,
idx_cols=idx_cols, sheet_index=sheet_index,
transpose=transpose)
if model is None:
warnings.warn("cannot visualize the supplied data type: {}".format(type(data)),
category=RuntimeWarning)
return None
# setup defaults
if wait is None: wait = WAIT
if recycle is None: recycle = RECYCLE
if detach is None: detach = DETACH
if wait is None:
if 'matplotlib' not in sys.modules:
wait = not bool(detach)
else:
import matplotlib.pyplot as plt
wait = not plt.isinteractive()
# try to fetch the variable name in the upper stack
if metavar is None:
if isinstance(data, basestring):
metavar = data
else:
metavar = _varname_in_stack(data, 1)
# create a view controller
if VIEW is None:
if not detach:
VIEW = ViewController()
else:
VIEW = DetachedViewController()
VIEW.setDaemon(True)
VIEW.start()
if VIEW.is_detached():
atexit.register(VIEW.exit)
else:
VIEW = None
return None
# actually show the data
view_kwargs = {'hdr_rows': hdr_rows, 'idx_cols': idx_cols,
'start_pos': start_pos, 'metavar': metavar, 'title': title}
VIEW.view(model, view_kwargs, wait=wait, recycle=recycle)
return VIEW
|
View the supplied data in an interactive, graphical table widget.
data: When a valid path or IO object, read it as a tabular text file. When
a valid URI, a Blaze object is constructed and visualized. Any other
supported datatype is visualized directly and incrementally *without
copying*.
enc: File encoding (such as "utf-8", normally autodetected).
delimiter: Text file delimiter (normally autodetected).
hdr_rows: For files or lists of lists, specify the number of header rows.
For files only, a default of one header line is assumed.
idx_cols: For files or lists of lists, specify the number of index columns.
By default, no index is assumed.
sheet_index: For multi-table files (such as xls[x]), specify the sheet
index to read, starting from 0. Defaults to the first.
start_pos: A tuple of the form (y, x) specifying the initial cursor
position. Negative offsets count from the end of the dataset.
transpose: Transpose the resulting view.
metavar: name of the variable being shown for display purposes (inferred
automatically when possible).
title: title of the data window.
wait: Wait for the user to close the view before returning. By default, try
to match the behavior of ``matplotlib.is_interactive()``. If
matplotlib is not loaded, wait only if ``detach`` is also False. The
default value can also be set through ``gtabview.WAIT``.
recycle: Recycle the previous window instead of creating a new one. The
default is True, and can also be set through ``gtabview.RECYCLE``.
detach: Create a fully detached GUI thread for interactive use (note: this
is *not* necessary if matplotlib is loaded). The default is False,
and can also be set through ``gtabview.DETACH``.
|
entailment
|
def gather_registries() -> Tuple[Dict, Mapping, Mapping]:
"""Get and clear the current |Node| and |Element| registries.
Function |gather_registries| is thought to be used by class |Tester| only.
"""
id2devices = copy.copy(_id2devices)
registry = copy.copy(_registry)
selection = copy.copy(_selection)
dict_ = globals()
dict_['_id2devices'] = {}
dict_['_registry'] = {Node: {}, Element: {}}
dict_['_selection'] = {Node: {}, Element: {}}
return id2devices, registry, selection
|
Get and clear the current |Node| and |Element| registries.
Function |gather_registries| is thought to be used by class |Tester| only.
|
entailment
|
def reset_registries(dicts: Tuple[Dict, Mapping, Mapping]):
"""Reset the current |Node| and |Element| registries.
Function |reset_registries| is thought to be used by class |Tester| only.
"""
dict_ = globals()
dict_['_id2devices'] = dicts[0]
dict_['_registry'] = dicts[1]
dict_['_selection'] = dicts[2]
|
Reset the current |Node| and |Element| registries.
Function |reset_registries| is thought to be used by class |Tester| only.
|
entailment
|
def _get_pandasindex():
"""
>>> from hydpy import pub
>>> pub.timegrids = '2004.01.01', '2005.01.01', '1d'
>>> from hydpy.core.devicetools import _get_pandasindex
>>> _get_pandasindex() # doctest: +ELLIPSIS
DatetimeIndex(['2004-01-01 12:00:00', '2004-01-02 12:00:00',
...
'2004-12-30 12:00:00', '2004-12-31 12:00:00'],
dtype='datetime64[ns]', length=366, freq=None)
"""
tg = hydpy.pub.timegrids.init
shift = tg.stepsize / 2
index = pandas.date_range(
(tg.firstdate + shift).datetime,
(tg.lastdate - shift).datetime,
(tg.lastdate - tg.firstdate - tg.stepsize) / tg.stepsize + 1)
return index
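The centering idea behind this index can be sketched with plain pandas (hypothetical standalone variables; the real method derives them from `pub.timegrids`):

```python
import pandas as pd

# Each daily simulation interval is represented by its midpoint, so
# both ends of the range are shifted inwards by half a step.
start = pd.Timestamp('2004-01-01')
end = pd.Timestamp('2005-01-01')
step = pd.Timedelta('1d')
index = pd.date_range(start + step / 2, end - step / 2, freq=step)
```

For the leap year 2004 this gives 366 midpoints, from `2004-01-01 12:00` to `2004-12-31 12:00`, matching the doctest.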
|
>>> from hydpy import pub
>>> pub.timegrids = '2004.01.01', '2005.01.01', '1d'
>>> from hydpy.core.devicetools import _get_pandasindex
>>> _get_pandasindex() # doctest: +ELLIPSIS
DatetimeIndex(['2004-01-01 12:00:00', '2004-01-02 12:00:00',
...
'2004-12-30 12:00:00', '2004-12-31 12:00:00'],
dtype='datetime64[ns]', length=366, freq=None)
|
entailment
|
def startswith(self, name: str) -> List[str]:
"""Return a list of all keywords starting with the given string.
>>> from hydpy.core.devicetools import Keywords
>>> keywords = Keywords('first_keyword', 'second_keyword',
... 'keyword_3', 'keyword_4',
... 'keyboard')
>>> keywords.startswith('keyword')
['keyword_3', 'keyword_4']
"""
return sorted(keyword for keyword in self if keyword.startswith(name))
|
Return a list of all keywords starting with the given string.
>>> from hydpy.core.devicetools import Keywords
>>> keywords = Keywords('first_keyword', 'second_keyword',
... 'keyword_3', 'keyword_4',
... 'keyboard')
>>> keywords.startswith('keyword')
['keyword_3', 'keyword_4']
|
entailment
|
def endswith(self, name: str) -> List[str]:
"""Return a list of all keywords ending with the given string.
>>> from hydpy.core.devicetools import Keywords
>>> keywords = Keywords('first_keyword', 'second_keyword',
... 'keyword_3', 'keyword_4',
... 'keyboard')
>>> keywords.endswith('keyword')
['first_keyword', 'second_keyword']
"""
return sorted(keyword for keyword in self if keyword.endswith(name))
|
Return a list of all keywords ending with the given string.
>>> from hydpy.core.devicetools import Keywords
>>> keywords = Keywords('first_keyword', 'second_keyword',
... 'keyword_3', 'keyword_4',
... 'keyboard')
>>> keywords.endswith('keyword')
['first_keyword', 'second_keyword']
|
entailment
|
def contains(self, name: str) -> List[str]:
"""Return a list of all keywords containing the given string.
>>> from hydpy.core.devicetools import Keywords
>>> keywords = Keywords('first_keyword', 'second_keyword',
... 'keyword_3', 'keyword_4',
... 'keyboard')
>>> keywords.contains('keyword')
['first_keyword', 'keyword_3', 'keyword_4', 'second_keyword']
"""
return sorted(keyword for keyword in self if name in keyword)
|
Return a list of all keywords containing the given string.
>>> from hydpy.core.devicetools import Keywords
>>> keywords = Keywords('first_keyword', 'second_keyword',
... 'keyword_3', 'keyword_4',
... 'keyboard')
>>> keywords.contains('keyword')
['first_keyword', 'keyword_3', 'keyword_4', 'second_keyword']
|
entailment
|
def update(self, *names: Any) -> None:
"""Before updating, the given names are checked to be valid
variable identifiers.
>>> from hydpy.core.devicetools import Keywords
>>> keywords = Keywords('first_keyword', 'second_keyword',
... 'keyword_3', 'keyword_4',
... 'keyboard')
>>> keywords.update('test_1', 'test 2') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: While trying to add the keyword `test 2` to device ?, \
the following error occurred: The given name string `test 2` does not \
define a valid variable identifier. ...
Note that even the first string (`test_1`) is not added due to the
second one (`test 2`) being invalid.
>>> keywords
Keywords("first_keyword", "keyboard", "keyword_3", "keyword_4",
"second_keyword")
After correcting the second string, everything works fine:
>>> keywords.update('test_1', 'test_2')
>>> keywords
Keywords("first_keyword", "keyboard", "keyword_3", "keyword_4",
"second_keyword", "test_1", "test_2")
"""
_names = [str(name) for name in names]
self._check_keywords(_names)
super().update(_names)
|
Before updating, the given names are checked to be valid
variable identifiers.
>>> from hydpy.core.devicetools import Keywords
>>> keywords = Keywords('first_keyword', 'second_keyword',
... 'keyword_3', 'keyword_4',
... 'keyboard')
>>> keywords.update('test_1', 'test 2') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: While trying to add the keyword `test 2` to device ?, \
the following error occurred: The given name string `test 2` does not \
define a valid variable identifier. ...
Note that even the first string (`test_1`) is not added due to the
second one (`test 2`) being invalid.
>>> keywords
Keywords("first_keyword", "keyboard", "keyword_3", "keyword_4",
"second_keyword")
After correcting the second string, everything works fine:
>>> keywords.update('test_1', 'test_2')
>>> keywords
Keywords("first_keyword", "keyboard", "keyword_3", "keyword_4",
"second_keyword", "test_1", "test_2")
|
entailment
|
def add(self, name: Any) -> None:
"""Before adding a new name, it is checked to be a valid variable
identifier.
>>> from hydpy.core.devicetools import Keywords
>>> keywords = Keywords('first_keyword', 'second_keyword',
... 'keyword_3', 'keyword_4',
... 'keyboard')
>>> keywords.add('1_test') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: While trying to add the keyword `1_test` to device ?, \
the following error occurred: The given name string `1_test` does not \
define a valid variable identifier. ...
>>> keywords
Keywords("first_keyword", "keyboard", "keyword_3", "keyword_4",
"second_keyword")
After correcting the string, everything works fine:
>>> keywords.add('one_test')
>>> keywords
Keywords("first_keyword", "keyboard", "keyword_3", "keyword_4",
"one_test", "second_keyword")
"""
self._check_keywords([str(name)])
super().add(str(name))
|
Before adding a new name, it is checked to be a valid variable
identifier.
>>> from hydpy.core.devicetools import Keywords
>>> keywords = Keywords('first_keyword', 'second_keyword',
... 'keyword_3', 'keyword_4',
... 'keyboard')
>>> keywords.add('1_test') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: While trying to add the keyword `1_test` to device ?, \
the following error occurred: The given name string `1_test` does not \
define a valid variable identifier. ...
>>> keywords
Keywords("first_keyword", "keyboard", "keyword_3", "keyword_4",
"second_keyword")
After correcting the string, everything works fine:
>>> keywords.add('one_test')
>>> keywords
Keywords("first_keyword", "keyboard", "keyword_3", "keyword_4",
"one_test", "second_keyword")
|
entailment
|
def add_device(self, device: Union[DeviceType, str]) -> None:
"""Add the given |Node| or |Element| object to the actual
|Nodes| or |Elements| object.
You can pass either a string or a device:
>>> from hydpy import Nodes
>>> nodes = Nodes()
>>> nodes.add_device('old_node')
>>> nodes
Nodes("old_node")
>>> nodes.add_device('new_node')
>>> nodes
Nodes("new_node", "old_node")
Method |Devices.add_device| is disabled for immutable |Nodes|
and |Elements| objects:
>>> nodes.mutable = False
>>> nodes.add_device('newest_node')
Traceback (most recent call last):
...
RuntimeError: While trying to add the device `newest_node` to a \
Nodes object, the following error occurred: Adding devices to immutable \
Nodes objects is not allowed.
"""
try:
if self.mutable:
_device = self.get_contentclass()(device)
self._name2device[_device.name] = _device
_id2devices[_device][id(self)] = self
else:
raise RuntimeError(
f'Adding devices to immutable '
f'{objecttools.classname(self)} objects is not allowed.')
except BaseException:
objecttools.augment_excmessage(
f'While trying to add the device `{device}` to a '
f'{objecttools.classname(self)} object')
|
Add the given |Node| or |Element| object to the actual
|Nodes| or |Elements| object.
You can pass either a string or a device:
>>> from hydpy import Nodes
>>> nodes = Nodes()
>>> nodes.add_device('old_node')
>>> nodes
Nodes("old_node")
>>> nodes.add_device('new_node')
>>> nodes
Nodes("new_node", "old_node")
Method |Devices.add_device| is disabled for immutable |Nodes|
and |Elements| objects:
>>> nodes.mutable = False
>>> nodes.add_device('newest_node')
Traceback (most recent call last):
...
RuntimeError: While trying to add the device `newest_node` to a \
Nodes object, the following error occurred: Adding devices to immutable \
Nodes objects is not allowed.
|
entailment
|
def remove_device(self, device: Union[DeviceType, str]) -> None:
"""Remove the given |Node| or |Element| object from the actual
|Nodes| or |Elements| object.
You can pass either a string or a device:
>>> from hydpy import Node, Nodes
>>> nodes = Nodes('node_x', 'node_y')
>>> node_x, node_y = nodes
>>> nodes.remove_device(Node('node_y'))
>>> nodes
Nodes("node_x")
>>> nodes.remove_device(Node('node_x'))
>>> nodes
Nodes()
>>> nodes.remove_device(Node('node_z'))
Traceback (most recent call last):
...
ValueError: While trying to remove the device `node_z` from a \
Nodes object, the following error occurred: The actual Nodes object does \
not handle such a device.
Method |Devices.remove_device| is disabled for immutable |Nodes|
and |Elements| objects:
>>> nodes.mutable = False
>>> nodes.remove_device('node_z')
Traceback (most recent call last):
...
RuntimeError: While trying to remove the device `node_z` from a \
Nodes object, the following error occurred: Removing devices from \
immutable Nodes objects is not allowed.
"""
try:
if self.mutable:
_device = self.get_contentclass()(device)
try:
del self._name2device[_device.name]
except KeyError:
raise ValueError(
f'The actual {objecttools.classname(self)} '
f'object does not handle such a device.')
del _id2devices[_device][id(self)]
else:
raise RuntimeError(
f'Removing devices from immutable '
f'{objecttools.classname(self)} objects is not allowed.')
except BaseException:
objecttools.augment_excmessage(
f'While trying to remove the device `{device}` from a '
f'{objecttools.classname(self)} object')
|
Remove the given |Node| or |Element| object from the actual
|Nodes| or |Elements| object.
You can pass either a string or a device:
>>> from hydpy import Node, Nodes
>>> nodes = Nodes('node_x', 'node_y')
>>> node_x, node_y = nodes
>>> nodes.remove_device(Node('node_y'))
>>> nodes
Nodes("node_x")
>>> nodes.remove_device(Node('node_x'))
>>> nodes
Nodes()
>>> nodes.remove_device(Node('node_z'))
Traceback (most recent call last):
...
ValueError: While trying to remove the device `node_z` from a \
Nodes object, the following error occurred: The actual Nodes object does \
not handle such a device.
Method |Devices.remove_device| is disabled for immutable |Nodes|
and |Elements| objects:
>>> nodes.mutable = False
>>> nodes.remove_device('node_z')
Traceback (most recent call last):
...
RuntimeError: While trying to remove the device `node_z` from a \
Nodes object, the following error occurred: Removing devices from \
immutable Nodes objects is not allowed.
|
entailment
|
def keywords(self) -> Set[str]:
"""A set of all keywords of all handled devices.
In addition to attribute access via device names, |Nodes| and
|Elements| objects allow for attribute access via keywords,
allowing for an efficient search of certain groups of devices.
Let us use the example from above, where the nodes `na` and `nb`
have no keywords, but each of the other three nodes belongs both
to either `group_a` or `group_b` and to either `group_1` or `group_2`:
>>> from hydpy import Node, Nodes
>>> nodes = Nodes('na',
... Node('nb', variable='W'),
... Node('nc', keywords=('group_a', 'group_1')),
... Node('nd', keywords=('group_a', 'group_2')),
... Node('ne', keywords=('group_b', 'group_1')))
>>> nodes
Nodes("na", "nb", "nc", "nd", "ne")
>>> sorted(nodes.keywords)
['group_1', 'group_2', 'group_a', 'group_b']
If you are interested in inspecting all devices belonging to
`group_1`, select them via this keyword:
>>> subgroup = nodes.group_1
>>> subgroup
Nodes("nc", "ne")
You can further restrict the search by also selecting the devices
belonging to `group_b`, which holds only for node "ne" in the given
example:
>>> subsubgroup = subgroup.group_b
>>> subsubgroup
Node("ne", variable="Q",
keywords=["group_1", "group_b"])
Note that the keywords already used for building a device subgroup
are not informative anymore (as they hold for each device) and are
thus not shown anymore:
>>> sorted(subgroup.keywords)
['group_a', 'group_b']
The latter might be confusing if you intend to work with a device
subgroup for a longer time. After copying the subgroup, all
keywords of the contained devices are available again:
>>> from copy import copy
>>> newgroup = copy(subgroup)
>>> sorted(newgroup.keywords)
['group_1', 'group_a', 'group_b']
"""
return set(keyword for device in self
for keyword in device.keywords if
keyword not in self._shadowed_keywords)
|
A set of all keywords of all handled devices.
In addition to attribute access via device names, |Nodes| and
|Elements| objects allow for attribute access via keywords,
allowing for an efficient search of certain groups of devices.
Let us use the example from above, where the nodes `na` and `nb`
have no keywords, but each of the other three nodes belongs both
to either `group_a` or `group_b` and to either `group_1` or `group_2`:
>>> from hydpy import Node, Nodes
>>> nodes = Nodes('na',
... Node('nb', variable='W'),
... Node('nc', keywords=('group_a', 'group_1')),
... Node('nd', keywords=('group_a', 'group_2')),
... Node('ne', keywords=('group_b', 'group_1')))
>>> nodes
Nodes("na", "nb", "nc", "nd", "ne")
>>> sorted(nodes.keywords)
['group_1', 'group_2', 'group_a', 'group_b']
If you are interested in inspecting all devices belonging to
`group_1`, select them via this keyword:
>>> subgroup = nodes.group_1
>>> subgroup
Nodes("nc", "ne")
You can further restrict the search by also selecting the devices
belonging to `group_b`, which holds only for node "ne" in the given
example:
>>> subsubgroup = subgroup.group_b
>>> subsubgroup
Node("ne", variable="Q",
keywords=["group_1", "group_b"])
Note that the keywords already used for building a device subgroup
are not informative anymore (as they hold for each device) and are
thus not shown anymore:
>>> sorted(subgroup.keywords)
['group_a', 'group_b']
The latter might be confusing if you intend to work with a device
subgroup for a longer time. After copying the subgroup, all
keywords of the contained devices are available again:
>>> from copy import copy
>>> newgroup = copy(subgroup)
>>> sorted(newgroup.keywords)
['group_1', 'group_a', 'group_b']
|
entailment
|
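The keyword-shadowing behaviour described above can be sketched independently of HydPy. `MiniDevice` and `MiniDevices` below are hypothetical stand-ins for |Node| and |Nodes|, not library classes:

```python
import copy

class MiniDevice:
    """Hypothetical stand-in for a device with a set of keywords."""
    def __init__(self, name, keywords=()):
        self.name = name
        self.keywords = set(keywords)

class MiniDevices:
    """Hypothetical collection that shadows keywords used for selection."""
    def __init__(self, *devices, shadowed=()):
        self._devices = list(devices)
        self._shadowed_keywords = set(shadowed)

    def __iter__(self):
        return iter(self._devices)

    @property
    def keywords(self):
        # Keywords shared by all members due to selection are hidden.
        return {keyword for device in self
                for keyword in device.keywords
                if keyword not in self._shadowed_keywords}

    def select(self, keyword):
        # Selecting by a keyword shadows it within the subgroup.
        return MiniDevices(
            *(device for device in self if keyword in device.keywords),
            shadowed=self._shadowed_keywords | {keyword})

    def __copy__(self):
        # As with Devices.copy, copying clears the shadowed keywords.
        return MiniDevices(*self._devices)

nodes = MiniDevices(MiniDevice('nc', ('group_a', 'group_1')),
                    MiniDevice('nd', ('group_a', 'group_2')),
                    MiniDevice('ne', ('group_b', 'group_1')))
subgroup = nodes.select('group_1')
print(sorted(subgroup.keywords))             # the shadowed 'group_1' is hidden
print(sorted(copy.copy(subgroup).keywords))  # copying restores it
```

The sketch reproduces the doctest results above: the subgroup hides `group_1`, while its copy reports all three keywords again.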
def copy(self: DevicesTypeBound) -> DevicesTypeBound:
"""Return a shallow copy of the actual |Nodes| or |Elements| object.
Method |Devices.copy| returns a semi-flat copy of |Nodes| or
|Elements| objects, as the handled devices themselves are not copyable:
>>> from hydpy import Nodes
>>> old = Nodes('x', 'y')
>>> import copy
>>> new = copy.copy(old)
>>> new == old
True
>>> new is old
False
>>> new.devices is old.devices
False
>>> new.x is new.x
True
Changing the |Device.name| of a device is recognised both by the
original and the copied collection objects:
>>> new.x.name = 'z'
>>> old.z
Node("z", variable="Q")
>>> new.z
Node("z", variable="Q")
Deep copying is not supported, for the same reason:
>>> copy.deepcopy(old)
Traceback (most recent call last):
...
NotImplementedError: Deep copying of Nodes objects is not supported, \
as it would require to make deep copies of the Node objects themselves, \
which is in conflict with using their names as identifiers.
"""
new = type(self)()
vars(new).update(vars(self))
vars(new)['_name2device'] = copy.copy(self._name2device)
vars(new)['_shadowed_keywords'].clear()
for device in self:
_id2devices[device][id(new)] = new
return new
|
Return a shallow copy of the actual |Nodes| or |Elements| object.
Method |Devices.copy| returns a semi-flat copy of |Nodes| or
|Elements| objects, as the handled devices themselves are not copyable:
>>> from hydpy import Nodes
>>> old = Nodes('x', 'y')
>>> import copy
>>> new = copy.copy(old)
>>> new == old
True
>>> new is old
False
>>> new.devices is old.devices
False
>>> new.x is new.x
True
Changing the |Device.name| of a device is recognised both by the
original and the copied collection objects:
>>> new.x.name = 'z'
>>> old.z
Node("z", variable="Q")
>>> new.z
Node("z", variable="Q")
Deep copying is not supported, for the same reason:
>>> copy.deepcopy(old)
Traceback (most recent call last):
...
NotImplementedError: Deep copying of Nodes objects is not supported, \
as it would require to make deep copies of the Node objects themselves, \
which is in conflict with using their names as identifiers.
|
entailment
|
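The shallow-copy/deep-copy split shown above can be reproduced with plain Python: implementing `__copy__` while letting `__deepcopy__` raise keeps `copy.copy` usable and makes `copy.deepcopy` fail loudly. `Registry` is a hypothetical stand-in, not HydPy code:

```python
import copy

class Registry:
    """Hypothetical container whose members are identified by name."""
    def __init__(self, **members):
        self._members = dict(members)

    def __copy__(self):
        # Semi-flat copy: a new container holding the same member objects.
        new = type(self)()
        new._members = dict(self._members)
        return new

    def __deepcopy__(self, memo):
        raise NotImplementedError(
            'Deep copying of Registry objects is not supported, as the '
            'members serve as identifiers and must not be duplicated.')

old = Registry(x=1, y=2)
new = copy.copy(old)
assert (new is not old) and (new._members == old._members)
try:
    copy.deepcopy(old)
except NotImplementedError as exc:
    print(exc)
```

Raising from `__deepcopy__` mirrors the behaviour documented above: shallow copies succeed, deep copies abort with an explanatory message.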
def prepare_allseries(self, ramflag: bool = True) -> None:
"""Call methods |Nodes.prepare_simseries| and
|Nodes.prepare_obsseries|."""
self.prepare_simseries(ramflag)
self.prepare_obsseries(ramflag)
|
Call methods |Nodes.prepare_simseries| and
|Nodes.prepare_obsseries|.
|
entailment
|
def prepare_simseries(self, ramflag: bool = True) -> None:
"""Call method |Node.prepare_simseries| of all handled
|Node| objects."""
for node in printtools.progressbar(self):
node.prepare_simseries(ramflag)
|
Call method |Node.prepare_simseries| of all handled
|Node| objects.
|
entailment
|
def prepare_obsseries(self, ramflag: bool = True) -> None:
"""Call method |Node.prepare_obsseries| of all handled
|Node| objects."""
for node in printtools.progressbar(self):
node.prepare_obsseries(ramflag)
|
Call method |Node.prepare_obsseries| of all handled
|Node| objects.
|
entailment
|
def init_models(self) -> None:
"""Call method |Element.init_model| of all handled |Element| objects.
We show, based on the `LahnH` example project, that method
|Element.init_model| prepares the |Model| objects of all elements,
including building the required connections and updating the
derived parameters:
>>> from hydpy.core.examples import prepare_full_example_1
>>> prepare_full_example_1()
>>> from hydpy import HydPy, pub, TestIO
>>> with TestIO():
... hp = HydPy('LahnH')
... pub.timegrids = '1996-01-01', '1996-02-01', '1d'
... hp.prepare_network()
... hp.init_models()
>>> hp.elements.land_dill.model.parameters.derived.dt
dt(0.000833)
Wrong control files result in error messages like the following:
>>> with TestIO():
... with open('LahnH/control/default/land_dill.py', 'a') as file_:
... _ = file_.write('zonetype(-1)')
... hp.init_models() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: While trying to initialise the model object of element \
`land_dill`, the following error occurred: While trying to load the control \
file `...land_dill.py`, the following error occurred: At least one value of \
parameter `zonetype` of element `?` is not valid.
By default, missing control files result in exceptions:
>>> del hp.elements.land_dill.model
>>> import os
>>> with TestIO():
... os.remove('LahnH/control/default/land_dill.py')
... hp.init_models() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
FileNotFoundError: While trying to initialise the model object of \
element `land_dill`, the following error occurred: While trying to load the \
control file `...land_dill.py`, the following error occurred: ...
>>> hasattr(hp.elements.land_dill, 'model')
False
When building new, still incomplete *HydPy* projects, this behaviour
can be annoying. After setting the option
|Options.warnmissingcontrolfile| to |True|, missing control files
only result in a warning:
>>> with TestIO():
... with pub.options.warnmissingcontrolfile(True):
... hp.init_models()
Traceback (most recent call last):
...
UserWarning: Due to a missing or no accessible control file, \
no model could be initialised for element `land_dill`
>>> hasattr(hp.elements.land_dill, 'model')
False
"""
try:
for element in printtools.progressbar(self):
element.init_model(clear_registry=False)
finally:
hydpy.pub.controlmanager.clear_registry()
|
Call method |Element.init_model| of all handled |Element| objects.
We show, based on the `LahnH` example project, that method
|Element.init_model| prepares the |Model| objects of all elements,
including building the required connections and updating the
derived parameters:
>>> from hydpy.core.examples import prepare_full_example_1
>>> prepare_full_example_1()
>>> from hydpy import HydPy, pub, TestIO
>>> with TestIO():
... hp = HydPy('LahnH')
... pub.timegrids = '1996-01-01', '1996-02-01', '1d'
... hp.prepare_network()
... hp.init_models()
>>> hp.elements.land_dill.model.parameters.derived.dt
dt(0.000833)
Wrong control files result in error messages like the following:
>>> with TestIO():
... with open('LahnH/control/default/land_dill.py', 'a') as file_:
... _ = file_.write('zonetype(-1)')
... hp.init_models() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: While trying to initialise the model object of element \
`land_dill`, the following error occurred: While trying to load the control \
file `...land_dill.py`, the following error occurred: At least one value of \
parameter `zonetype` of element `?` is not valid.
By default, missing control files result in exceptions:
>>> del hp.elements.land_dill.model
>>> import os
>>> with TestIO():
... os.remove('LahnH/control/default/land_dill.py')
... hp.init_models() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
FileNotFoundError: While trying to initialise the model object of \
element `land_dill`, the following error occurred: While trying to load the \
control file `...land_dill.py`, the following error occurred: ...
>>> hasattr(hp.elements.land_dill, 'model')
False
When building new, still incomplete *HydPy* projects, this behaviour
can be annoying. After setting the option
|Options.warnmissingcontrolfile| to |True|, missing control files
only result in a warning:
>>> with TestIO():
... with pub.options.warnmissingcontrolfile(True):
... hp.init_models()
Traceback (most recent call last):
...
UserWarning: Due to a missing or no accessible control file, \
no model could be initialised for element `land_dill`
>>> hasattr(hp.elements.land_dill, 'model')
False
|
entailment
|
def save_controls(self, parameterstep: 'timetools.PeriodConstrArg' = None,
simulationstep: 'timetools.PeriodConstrArg' = None,
auxfiler: 'Optional[auxfiletools.Auxfiler]' = None):
"""Save the control parameters of the |Model| object handled by
each |Element| object and, optionally, the ones handled by the
given |Auxfiler| object."""
if auxfiler:
auxfiler.save(parameterstep, simulationstep)
for element in printtools.progressbar(self):
element.model.parameters.save_controls(
parameterstep=parameterstep,
simulationstep=simulationstep,
auxfiler=auxfiler)
|
Save the control parameters of the |Model| object handled by
each |Element| object and, optionally, the ones handled by the
given |Auxfiler| object.
|
entailment
|
def load_conditions(self) -> None:
"""Load the initial conditions of the |Model| object handled by
each |Element| object."""
for element in printtools.progressbar(self):
element.model.sequences.load_conditions()
|
Load the initial conditions of the |Model| object handled by
each |Element| object.
|
entailment
|
def save_conditions(self) -> None:
"""Save the calculated conditions of the |Model| object handled by
each |Element| object."""
for element in printtools.progressbar(self):
element.model.sequences.save_conditions()
|
Save the calculated conditions of the |Model| object handled by
each |Element| object.
|
entailment
|
def conditions(self) -> \
Dict[str, Dict[str, Dict[str, Union[float, numpy.ndarray]]]]:
"""A nested dictionary containing the values of all
|ConditionSequence| objects of all currently handled models.
See the documentation on property |HydPy.conditions| for further
information.
"""
return {element.name: element.model.sequences.conditions
for element in self}
|
A nested dictionary containing the values of all
|ConditionSequence| objects of all currently handled models.
See the documentation on property |HydPy.conditions| for further
information.
|
entailment
|
def prepare_allseries(self, ramflag: bool = True) -> None:
"""Call method |Element.prepare_allseries| of all handled
|Element| objects."""
for element in printtools.progressbar(self):
element.prepare_allseries(ramflag)
|
Call method |Element.prepare_allseries| of all handled
|Element| objects.
|
entailment
|
def prepare_inputseries(self, ramflag: bool = True) -> None:
"""Call method |Element.prepare_inputseries| of all handled
|Element| objects."""
for element in printtools.progressbar(self):
element.prepare_inputseries(ramflag)
|
Call method |Element.prepare_inputseries| of all handled
|Element| objects.
|
entailment
|
def prepare_fluxseries(self, ramflag: bool = True) -> None:
"""Call method |Element.prepare_fluxseries| of all handled
|Element| objects."""
for element in printtools.progressbar(self):
element.prepare_fluxseries(ramflag)
|
Call method |Element.prepare_fluxseries| of all handled
|Element| objects.
|
entailment
|
def prepare_stateseries(self, ramflag: bool = True) -> None:
"""Call method |Element.prepare_stateseries| of all handled
|Element| objects."""
for element in printtools.progressbar(self):
element.prepare_stateseries(ramflag)
|
Call method |Element.prepare_stateseries| of all handled
|Element| objects.
|
entailment
|
def extract_new(cls) -> DevicesTypeUnbound:
"""Gather all "new" |Node| or |Element| objects.
See the main documentation on module |devicetools| for further
information.
"""
devices = cls.get_handlerclass()(*_selection[cls])
_selection[cls].clear()
return devices
|
Gather all "new" |Node| or |Element| objects.
See the main documentation on module |devicetools| for further
information.
|
entailment
|
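The `extract_new` idiom — every freshly created object registers itself in a module-level container that is drained on request — can be sketched as follows (`Tracked` and `_selection` are hypothetical names, not HydPy identifiers):

```python
_selection = []

class Tracked:
    """Hypothetical class registering each new instance on creation."""
    def __init__(self, name):
        self.name = name
        _selection.append(self)

    @classmethod
    def extract_new(cls):
        # Hand over everything created since the last call, then reset.
        devices = tuple(_selection)
        _selection.clear()
        return devices

Tracked('a')
Tracked('b')
print([device.name for device in Tracked.extract_new()])
print(Tracked.extract_new())  # the registry is empty after extraction
```

Draining and clearing in one step is what lets wildcard imports gather exactly the devices created since the previous call.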
def get_double(self, group: str) -> pointerutils.Double:
"""Return the |Double| object appropriate for the given |Element|
input or output group and the actual |Node.deploymode|.
Method |Node.get_double| should be of interest for framework
developers only (and possibly for model developers).
Let |Node| object `node1` handle different simulation and
observation values:
>>> from hydpy import Node
>>> node = Node('node1')
>>> node.sequences.sim = 1.0
>>> node.sequences.obs = 2.0
The following `test` function shows for a given |Node.deploymode|
if method |Node.get_double| either returns the |Double| object
handling the simulated value (1.0) or the |Double| object handling
the observed value (2.0):
>>> def test(deploymode):
... node.deploymode = deploymode
... for group in ('inlets', 'receivers', 'outlets', 'senders'):
... print(group, node.get_double(group))
In the default mode, nodes (passively) route simulated values
by offering the |Double| object of sequence |Sim| to all
|Element| input and output groups:
>>> test('newsim')
inlets 1.0
receivers 1.0
outlets 1.0
senders 1.0
Setting |Node.deploymode| to `obs` means that a node receives
simulated values (from group `outlets` or `senders`), but provides
observed values (to group `inlets` or `receivers`):
>>> test('obs')
inlets 2.0
receivers 2.0
outlets 1.0
senders 1.0
With |Node.deploymode| set to `oldsim`, the node provides
(previously) simulated values (to group `inlets` or `receivers`)
but does not receive any values. Method |Node.get_double| just
returns a dummy |Double| object with value 0.0 in this case
(for group `outlets` or `senders`):
>>> test('oldsim')
inlets 1.0
receivers 1.0
outlets 0.0
senders 0.0
Other |Element| input or output groups are not supported:
>>> node.get_double('test')
Traceback (most recent call last):
...
ValueError: Function `get_double` of class `Node` does not support \
the given group name `test`.
"""
if group in ('inlets', 'receivers'):
if self.deploymode != 'obs':
return self.sequences.fastaccess.sim
return self.sequences.fastaccess.obs
if group in ('outlets', 'senders'):
if self.deploymode != 'oldsim':
return self.sequences.fastaccess.sim
return self.__blackhole
raise ValueError(
f'Function `get_double` of class `Node` does not '
f'support the given group name `{group}`.')
|
Return the |Double| object appropriate for the given |Element|
input or output group and the actual |Node.deploymode|.
Method |Node.get_double| should be of interest for framework
developers only (and possibly for model developers).
Let |Node| object `node1` handle different simulation and
observation values:
>>> from hydpy import Node
>>> node = Node('node1')
>>> node.sequences.sim = 1.0
>>> node.sequences.obs = 2.0
The following `test` function shows for a given |Node.deploymode|
if method |Node.get_double| either returns the |Double| object
handling the simulated value (1.0) or the |Double| object handling
the observed value (2.0):
>>> def test(deploymode):
... node.deploymode = deploymode
... for group in ('inlets', 'receivers', 'outlets', 'senders'):
... print(group, node.get_double(group))
In the default mode, nodes (passively) route simulated values
by offering the |Double| object of sequence |Sim| to all
|Element| input and output groups:
>>> test('newsim')
inlets 1.0
receivers 1.0
outlets 1.0
senders 1.0
Setting |Node.deploymode| to `obs` means that a node receives
simulated values (from group `outlets` or `senders`), but provides
observed values (to group `inlets` or `receivers`):
>>> test('obs')
inlets 2.0
receivers 2.0
outlets 1.0
senders 1.0
With |Node.deploymode| set to `oldsim`, the node provides
(previously) simulated values (to group `inlets` or `receivers`)
but does not receive any values. Method |Node.get_double| just
returns a dummy |Double| object with value 0.0 in this case
(for group `outlets` or `senders`):
>>> test('oldsim')
inlets 1.0
receivers 1.0
outlets 0.0
senders 0.0
Other |Element| input or output groups are not supported:
>>> node.get_double('test')
Traceback (most recent call last):
...
ValueError: Function `get_double` of class `Node` does not support \
the given group name `test`.
|
entailment
|
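Stripped of the |Double| machinery, `get_double` reduces to two nested decisions. The following sketch uses plain floats and a hypothetical `get_value` function to replicate the dispatch table from the doctests above:

```python
def get_value(group, deploymode, sim, obs, blackhole=0.0):
    """Return the value a node offers to the given connection group."""
    if group in ('inlets', 'receivers'):
        # Downstream models receive observations only in 'obs' mode.
        return obs if deploymode == 'obs' else sim
    if group in ('outlets', 'senders'):
        # In 'oldsim' mode, incoming values go to a dummy target.
        return blackhole if deploymode == 'oldsim' else sim
    raise ValueError(f'unsupported group name `{group}`')

for mode in ('newsim', 'obs', 'oldsim'):
    values = [get_value(group, mode, 1.0, 2.0)
              for group in ('inlets', 'receivers', 'outlets', 'senders')]
    print(mode, values)
```

With `sim=1.0` and `obs=2.0`, the loop reproduces the three doctest tables above.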
def plot_simseries(self, **kwargs: Any) -> None:
"""Plot the |IOSequence.series| of the |Sim| sequence object.
See method |Node.plot_allseries| for further information.
"""
self.__plot_series([self.sequences.sim], kwargs)
|
Plot the |IOSequence.series| of the |Sim| sequence object.
See method |Node.plot_allseries| for further information.
|
entailment
|
def plot_obsseries(self, **kwargs: Any) -> None:
"""Plot the |IOSequence.series| of the |Obs| sequence object.
See method |Node.plot_allseries| for further information.
"""
self.__plot_series([self.sequences.obs], kwargs)
|
Plot the |IOSequence.series| of the |Obs| sequence object.
See method |Node.plot_allseries| for further information.
|
entailment
|
def assignrepr(self, prefix: str = '') -> str:
"""Return a |repr| string with a prefixed assignment."""
lines = ['%sNode("%s", variable="%s",'
% (prefix, self.name, self.variable)]
if self.keywords:
subprefix = '%skeywords=' % (' '*(len(prefix)+5))
with objecttools.repr_.preserve_strings(True):
with objecttools.assignrepr_tuple.always_bracketed(False):
line = objecttools.assignrepr_list(
sorted(self.keywords), subprefix, width=70)
lines.append(line + ',')
lines[-1] = lines[-1][:-1]+')'
return '\n'.join(lines)
|
Return a |repr| string with a prefixed assignment.
|
entailment
|
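The alignment logic of `assignrepr` (indent continuation lines by the prefix length plus the five characters of `Node(`) can be approximated without the `objecttools` helpers. `mini_assignrepr` is a simplified, hypothetical version that omits the width-based wrapping:

```python
def mini_assignrepr(name, variable, keywords=(), prefix=''):
    """Build a Node-like repr with an optional assignment prefix."""
    lines = [f'{prefix}Node("{name}", variable="{variable}",']
    if keywords:
        subprefix = ' ' * (len(prefix) + 5)  # aligns under 'Node('
        joined = ', '.join(f'"{keyword}"' for keyword in sorted(keywords))
        lines.append(f'{subprefix}keywords=[{joined}],')
    lines[-1] = lines[-1][:-1] + ')'  # replace the trailing comma
    return '\n'.join(lines)

print(mini_assignrepr('ne', 'Q', ('group_1', 'group_b'), prefix='node = '))
```

Replacing the trailing comma of the last line with a closing parenthesis is the same trick the original method uses.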
def model(self) -> 'modeltools.Model':
"""The |Model| object handled by the actual |Element| object.
Directly after their initialisation, elements do not know
which model they require:
>>> from hydpy import Element
>>> hland = Element('hland', outlets='outlet')
>>> hland.model
Traceback (most recent call last):
...
AttributeError: The model object of element `hland` has been \
requested but not been prepared so far.
During scripting and when working interactively in the Python
shell, it is often convenient to assign a |model| directly.
>>> from hydpy.models.hland_v1 import *
>>> parameterstep('1d')
>>> hland.model = model
>>> hland.model.name
'hland_v1'
>>> del hland.model
>>> hasattr(hland, 'model')
False
For the "usual" approach to prepare models, please see the method
|Element.init_model|.
The following examples show that assigning |Model| objects
to property |Element.model| creates some connections required by
the respective model type automatically. These
examples should be relevant for developers only.
The following |hbranch| model branches a single input value
(from node `inp`) to multiple outputs (nodes `out1` and `out2`):
>>> from hydpy import Element, Node, reverse_model_wildcard_import
>>> reverse_model_wildcard_import()
>>> element = Element('a_branch',
... inlets='branch_input',
... outlets=('branch_output_1', 'branch_output_2'))
>>> inp = element.inlets.branch_input
>>> out1, out2 = element.outlets
>>> from hydpy.models.hbranch import *
>>> parameterstep()
>>> xpoints(0.0, 3.0)
>>> ypoints(branch_output_1=[0.0, 1.0], branch_output_2=[0.0, 2.0])
>>> parameters.update()
>>> element.model = model
To show that the inlet and outlet connections are built properly,
we assign a new value to the inlet node `inp` and verify that the
suitable fractions of this value are passed to the outlet nodes
`out1` and `out2` by calling method |Model.doit|:
>>> inp.sequences.sim = 999.0
>>> model.doit(0)
>>> fluxes.input
input(999.0)
>>> out1.sequences.sim
sim(333.0)
>>> out2.sequences.sim
sim(666.0)
"""
model = vars(self).get('model')
if model:
return model
raise AttributeError(
f'The model object of element `{self.name}` has '
f'been requested but not been prepared so far.')
|
The |Model| object handled by the actual |Element| object.
Directly after their initialisation, elements do not know
which model they require:
>>> from hydpy import Element
>>> hland = Element('hland', outlets='outlet')
>>> hland.model
Traceback (most recent call last):
...
AttributeError: The model object of element `hland` has been \
requested but not been prepared so far.
During scripting and when working interactively in the Python
shell, it is often convenient to assign a |model| directly.
>>> from hydpy.models.hland_v1 import *
>>> parameterstep('1d')
>>> hland.model = model
>>> hland.model.name
'hland_v1'
>>> del hland.model
>>> hasattr(hland, 'model')
False
For the "usual" approach to prepare models, please see the method
|Element.init_model|.
The following examples show that assigning |Model| objects
to property |Element.model| creates some connections required by
the respective model type automatically. These
examples should be relevant for developers only.
The following |hbranch| model branches a single input value
(from node `inp`) to multiple outputs (nodes `out1` and `out2`):
>>> from hydpy import Element, Node, reverse_model_wildcard_import
>>> reverse_model_wildcard_import()
>>> element = Element('a_branch',
... inlets='branch_input',
... outlets=('branch_output_1', 'branch_output_2'))
>>> inp = element.inlets.branch_input
>>> out1, out2 = element.outlets
>>> from hydpy.models.hbranch import *
>>> parameterstep()
>>> xpoints(0.0, 3.0)
>>> ypoints(branch_output_1=[0.0, 1.0], branch_output_2=[0.0, 2.0])
>>> parameters.update()
>>> element.model = model
To show that the inlet and outlet connections are built properly,
we assign a new value to the inlet node `inp` and verify that the
suitable fractions of this value are passed to the outlet nodes
`out1` and `out2` by calling method |Model.doit|:
>>> inp.sequences.sim = 999.0
>>> model.doit(0)
>>> fluxes.input
input(999.0)
>>> out1.sequences.sim
sim(333.0)
>>> out2.sequences.sim
sim(666.0)
|
entailment
|
def init_model(self, clear_registry: bool = True) -> None:
"""Load the control file of the actual |Element| object, initialise
its |Model| object, build the required connections via (a possibly
overridden version of) method |Model.connect| of class |Model|, and
update its derived parameter values by calling (a possibly
overridden version of) method |Parameters.update| of class |Parameters|.
See method |HydPy.init_models| of class |HydPy| and property
|model| of class |Element| for further information.
"""
try:
with hydpy.pub.options.warnsimulationstep(False):
info = hydpy.pub.controlmanager.load_file(
element=self, clear_registry=clear_registry)
self.model = info['model']
self.model.parameters.update()
except OSError:
if hydpy.pub.options.warnmissingcontrolfile:
warnings.warn(
f'Due to a missing or no accessible control file, no '
f'model could be initialised for element `{self.name}`')
else:
objecttools.augment_excmessage(
f'While trying to initialise the model '
f'object of element `{self.name}`')
except BaseException:
objecttools.augment_excmessage(
f'While trying to initialise the model '
f'object of element `{self.name}`')
|
Load the control file of the actual |Element| object, initialise
its |Model| object, build the required connections via (a possibly
overridden version of) method |Model.connect| of class |Model|, and
update its derived parameter values by calling (a possibly
overridden version of) method |Parameters.update| of class |Parameters|.
See method |HydPy.init_models| of class |HydPy| and property
|model| of class |Element| for further information.
|
entailment
|
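`init_model` follows a recurring HydPy idiom: catch any exception, prepend context about the attempted operation, and re-raise the same exception type. A self-contained sketch of that idiom, with a hypothetical `augment_excmessage` helper and `broken_loader` callback standing in for the real control-file loader:

```python
import sys

def augment_excmessage(prefix):
    """Re-raise the active exception with `prefix` prepended."""
    exc_type, exc, _ = sys.exc_info()
    raise exc_type(f'{prefix}, the following error occurred: {exc}') from exc

def init_model(name, loader):
    try:
        return loader(name)
    except BaseException:
        augment_excmessage(
            f'While trying to initialise the model object of element `{name}`')

def broken_loader(name):
    raise ValueError('At least one parameter value is not valid.')

try:
    init_model('land_dill', broken_loader)
except ValueError as exc:
    print(exc)
```

Keeping the original exception type (and chaining via `from exc`) is what makes the doctest tracebacks above still report `ValueError` or `FileNotFoundError` rather than a generic wrapper error.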
def variables(self) -> Set[str]:
"""A set of all different |Node.variable| values of the |Node|
objects directly connected to the actual |Element| object.
Suppose there is an element connected to five nodes, which (partly)
represent different variables:
>>> from hydpy import Element, Node
>>> element = Element('Test',
... inlets=(Node('N1', 'X'), Node('N2', 'Y1')),
... outlets=(Node('N3', 'X'), Node('N4', 'Y2')),
... receivers=(Node('N5', 'X'), Node('N6', 'Y3')),
... senders=(Node('N7', 'X'), Node('N8', 'Y4')))
Property |Element.variables| puts all the different variables of
these nodes together:
>>> sorted(element.variables)
['X', 'Y1', 'Y2', 'Y3', 'Y4']
"""
variables: Set[str] = set()
for connection in self.__connections:
variables.update(connection.variables)
return variables
|
A set of all different |Node.variable| values of the |Node|
objects directly connected to the actual |Element| object.
Suppose there is an element connected to five nodes, which (partly)
represent different variables:
>>> from hydpy import Element, Node
>>> element = Element('Test',
... inlets=(Node('N1', 'X'), Node('N2', 'Y1')),
... outlets=(Node('N3', 'X'), Node('N4', 'Y2')),
... receivers=(Node('N5', 'X'), Node('N6', 'Y3')),
... senders=(Node('N7', 'X'), Node('N8', 'Y4')))
Property |Element.variables| puts all the different variables of
these nodes together:
>>> sorted(element.variables)
['X', 'Y1', 'Y2', 'Y3', 'Y4']
|
entailment
|
def prepare_allseries(self, ramflag: bool = True) -> None:
"""Prepare the |IOSequence.series| objects of all `input`, `flux` and
`state` sequences of the model handled by this element.
Call this method before a simulation run, if you need access to
(nearly) all simulated series of the handled model after the
simulation run is finished.
By default, the time series are stored in RAM, which is the faster
option. If your RAM is limited, pass |False| to function argument
`ramflag` to store the series on disk.
"""
self.prepare_inputseries(ramflag)
self.prepare_fluxseries(ramflag)
self.prepare_stateseries(ramflag)
|
Prepare the |IOSequence.series| objects of all `input`, `flux` and
`state` sequences of the model handled by this element.
Call this method before a simulation run, if you need access to
(nearly) all simulated series of the handled model after the
simulation run is finished.
By default, the time series are stored in RAM, which is the faster
option. If your RAM is limited, pass |False| to function argument
`ramflag` to store the series on disk.
|
entailment
|
def plot_inputseries(
self, names: Optional[Iterable[str]] = None,
average: bool = False, **kwargs: Any) \
-> None:
"""Plot (the selected) |InputSequence| |IOSequence.series| values.
We demonstrate the functionalities of method |Element.plot_inputseries|
based on the `Lahn` example project:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, _, _ = prepare_full_example_2(lastdate='1997-01-01')
Without any arguments, |Element.plot_inputseries| plots the
time series of all input sequences handled by its |Model| object
directly to the screen (in the given example, |hland_inputs.P|,
|hland_inputs.T|, |hland_inputs.TN|, and |hland_inputs.EPN| of
application model |hland_v1|):
>>> land = hp.elements.land_dill
>>> land.plot_inputseries()
You can use the `pyplot` API of `matplotlib` to modify the figure
or to save it to disk (or print it to the screen, in case the
interactive mode of `matplotlib` is disabled):
>>> from matplotlib import pyplot
>>> from hydpy.docs import figs
>>> pyplot.savefig(figs.__path__[0] + '/Element_plot_inputseries.png')
>>> pyplot.close()
.. image:: Element_plot_inputseries.png
Methods |Element.plot_fluxseries| and |Element.plot_stateseries|
work in the same manner. Before applying them, one must first
calculate the time series of the |FluxSequence| and
|StateSequence| objects:
>>> hp.doit()
All three methods allow selecting certain sequences by passing their
names (here, flux sequences |hland_fluxes.Q0| and |hland_fluxes.Q1|
of |hland_v1|). Additionally, you can pass the keyword arguments
supported by `matplotlib` for modifying the line style:
>>> land.plot_fluxseries(['q0', 'q1'], linewidth=2)
>>> pyplot.savefig(figs.__path__[0] + '/Element_plot_fluxseries.png')
>>> pyplot.close()
.. image:: Element_plot_fluxseries.png
For 1-dimensional |IOSequence| objects, all three methods plot the
individual time series in the same colour (here, from the state
sequences |hland_states.SP| and |hland_states.WC| of |hland_v1|):
>>> land.plot_stateseries(['sp', 'wc'])
>>> pyplot.savefig(figs.__path__[0] + '/Element_plot_stateseries1.png')
>>> pyplot.close()
.. image:: Element_plot_stateseries1.png
Alternatively, you can plot the averaged time series by
passing |True| to the `average` argument (demonstrated
for the state sequence |hland_states.SM|):
>>> land.plot_stateseries(['sm'], color='grey')
>>> land.plot_stateseries(
... ['sm'], average=True, color='black', linewidth=3)
>>> pyplot.savefig(figs.__path__[0] + '/Element_plot_stateseries2.png')
>>> pyplot.close()
.. image:: Element_plot_stateseries2.png
"""
self.__plot(self.model.sequences.inputs, names, average, kwargs)
|
Plot (the selected) |InputSequence| |IOSequence.series| values.
We demonstrate the functionalities of method |Element.plot_inputseries|
based on the `Lahn` example project:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, _, _ = prepare_full_example_2(lastdate='1997-01-01')
Without any arguments, |Element.plot_inputseries| plots the
time series of all input sequences handled by its |Model| object
directly to the screen (in the given example, |hland_inputs.P|,
|hland_inputs.T|, |hland_inputs.TN|, and |hland_inputs.EPN| of
application model |hland_v1|):
>>> land = hp.elements.land_dill
>>> land.plot_inputseries()
You can use the `pyplot` API of `matplotlib` to modify the figure
or to save it to disk (or print it to the screen, in case the
interactive mode of `matplotlib` is disabled):
>>> from matplotlib import pyplot
>>> from hydpy.docs import figs
>>> pyplot.savefig(figs.__path__[0] + '/Element_plot_inputseries.png')
>>> pyplot.close()
.. image:: Element_plot_inputseries.png
Methods |Element.plot_fluxseries| and |Element.plot_stateseries|
work in the same manner. Before applying them, one must first
calculate the time series of the |FluxSequence| and
|StateSequence| objects:
>>> hp.doit()
All three methods allow selecting certain sequences by passing their
names (here, flux sequences |hland_fluxes.Q0| and |hland_fluxes.Q1|
of |hland_v1|). Additionally, you can pass the keyword arguments
supported by `matplotlib` for modifying the line style:
>>> land.plot_fluxseries(['q0', 'q1'], linewidth=2)
>>> pyplot.savefig(figs.__path__[0] + '/Element_plot_fluxseries.png')
>>> pyplot.close()
.. image:: Element_plot_fluxseries.png
For 1-dimensional |IOSequence| objects, all three methods plot the
individual time series in the same colour (here, from the state
sequences |hland_states.SP| and |hland_states.WC| of |hland_v1|):
>>> land.plot_stateseries(['sp', 'wc'])
>>> pyplot.savefig(figs.__path__[0] + '/Element_plot_stateseries1.png')
>>> pyplot.close()
.. image:: Element_plot_stateseries1.png
Alternatively, you can plot the averaged time series by
passing |True| to the `average` argument (demonstrated
for the state sequence |hland_states.SM|):
>>> land.plot_stateseries(['sm'], color='grey')
>>> land.plot_stateseries(
... ['sm'], average=True, color='black', linewidth=3)
>>> pyplot.savefig(figs.__path__[0] + '/Element_plot_stateseries2.png')
>>> pyplot.close()
.. image:: Element_plot_stateseries2.png
|
entailment
|
def plot_fluxseries(
self, names: Optional[Iterable[str]] = None,
average: bool = False, **kwargs: Any) \
-> None:
"""Plot the `flux` series of the handled model.
See the documentation on method |Element.plot_inputseries| for
additional information.
"""
self.__plot(self.model.sequences.fluxes, names, average, kwargs)
|
Plot the `flux` series of the handled model.
See the documentation on method |Element.plot_inputseries| for
additional information.
|
entailment
|
def plot_stateseries(
self, names: Optional[Iterable[str]] = None,
average: bool = False, **kwargs: Any) \
-> None:
"""Plot the `state` series of the handled model.
See the documentation on method |Element.plot_inputseries| for
additional information.
"""
self.__plot(self.model.sequences.states, names, average, kwargs)
|
Plot the `state` series of the handled model.
See the documentation on method |Element.plot_inputseries| for
additional information.
|
entailment
|