def solve_dv_dt_v1(self):
"""Solve the differential equation of HydPy-L.
At the moment, HydPy-L only implements a simple numerical solution of
its underlying ordinary differential equation. To increase the accuracy
(or sometimes even to prevent instability) of this approximation, one
can set the value of parameter |MaxDT| to a value smaller than the actual
simulation step size. Method |solve_dv_dt_v1| then applies the methods
related to the numerical approximation multiple times and aggregates
the results.
Note that the order of convergence is only one. It is hard to tell how
short the internal simulation step needs to be to ensure a certain degree
of accuracy. In most cases, one hour or often even one day should be
sufficient to gain acceptable results. However, this strongly depends on
the given water stage-volume-discharge relationship. Hence, it seems
advisable to always define a few test waves and to apply the llake model
with different |MaxDT| values. Afterwards, select a |MaxDT| value lower
than the one that results in acceptable approximations for all test
waves. The computation time of the llake model per substep is rather
small, so always include a safety factor.
Of course, an adaptive step size determination would be much more
convenient...
Required derived parameter:
|NmbSubsteps|
Used aide sequence:
|llake_aides.V|
|llake_aides.QA|
Updated state sequence:
|llake_states.V|
Calculated flux sequence:
|llake_fluxes.QA|
Note that method |solve_dv_dt_v1| calls the versions of `calc_vq`,
`interp_qa` and `calc_v_qa` selected by the respective application model.
Hence, their parameter and sequence specifications need to be
considered as well.
Basic equation:
:math:`\\frac{dV}{dt}= QZ - QA(V)`
"""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
aid = self.sequences.aides.fastaccess
flu.qa = 0.
aid.v = old.v
for _ in range(der.nmbsubsteps):
self.calc_vq()
self.interp_qa()
self.calc_v_qa()
flu.qa += aid.qa
flu.qa /= der.nmbsubsteps
new.v = aid.v
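Because the substepping logic above is easy to misread, here is a
standalone sketch (plain Python, not HydPy code; the linear outlet
QA(V) = k*V and all parameter values are made-up illustrations) of
first-order substepping for dV/dt = QZ - QA(V). It shows what an order
of convergence of one means in practice: doubling the number of
substeps roughly halves the error against the analytical solution.

import math

def integrate(v0, qz, k, stepsize, nmbsubsteps):
    """Approximate V after one simulation step via explicit Euler substeps."""
    dt = stepsize/nmbsubsteps
    v = v0
    for _ in range(nmbsubsteps):
        v += dt*(qz-k*v)    # one first-order substep
    return v

v0, qz, k, stepsize = 1e5, 2., 1e-5, 86400.
exact = qz/k+(v0-qz/k)*math.exp(-k*stepsize)    # analytical solution
for nmb in (1, 2, 4, 8):
    print(nmb, abs(integrate(v0, qz, k, stepsize, nmb)-exact))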
def calc_vq_v1(self):
"""Calculate the auxiliary term.
Required derived parameters:
|Seconds|
|NmbSubsteps|
Required flux sequence:
|QZ|
Required aide sequence:
|llake_aides.V|
Calculated aide sequence:
|llake_aides.VQ|
Basic equation:
:math:`VQ = 2 \\cdot V + \\frac{Seconds}{NmbSubsteps} \\cdot QZ`
Example:
The following example shows that the auxiliary term `vq` does not
depend on the (outer) simulation step size but on the (inner)
calculation step size defined by parameter `maxdt`:
>>> from hydpy.models.llake import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> maxdt('6h')
>>> derived.seconds.update()
>>> derived.nmbsubsteps.update()
>>> fluxes.qz = 2.
>>> aides.v = 1e5
>>> model.calc_vq_v1()
>>> aides.vq
vq(243200.0)
"""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
aid = self.sequences.aides.fastaccess
aid.vq = 2.*aid.v+der.seconds/der.nmbsubsteps*flu.qz
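The tabulated value 243200.0 can be traced by hand: a simulation step of
12 hours covers 43200 seconds, and a `maxdt` of 6 hours results in two
substeps:

# Tracing the doctest result above (plain Python, no HydPy required):
seconds, nmbsubsteps, qz, v = 43200., 2, 2., 1e5
print(2.*v+seconds/nmbsubsteps*qz)    # 243200.0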
def interp_qa_v1(self):
"""Calculate the lake outflow based on linear interpolation.
Required control parameters:
|N|
|llake_control.Q|
Required derived parameters:
|llake_derived.TOY|
|llake_derived.VQ|
Required aide sequence:
|llake_aides.VQ|
Calculated aide sequence:
|llake_aides.QA|
Examples:
In preparation for the following examples, define a short simulation
time period with a simulation step size of 12 hours and initialize
the required model object:
>>> from hydpy import pub
>>> pub.timegrids = '2000.01.01','2000.01.04', '12h'
>>> from hydpy.models.llake import *
>>> parameterstep()
Next, for the sake of brevity, define a test function:
>>> def test(*vqs):
... for vq in vqs:
... aides.vq(vq)
... model.interp_qa_v1()
... print(repr(aides.vq), repr(aides.qa))
The following three relationships between the auxiliary term `vq` and
the tabulated discharge `q` are taken as examples. Each one is valid
for one of the first three days in January and is defined via five
nodes:
>>> n(5)
>>> derived.toy.update()
>>> derived.vq(_1_1_6=[0., 1., 2., 2., 3.],
... _1_2_6=[0., 1., 2., 2., 3.],
... _1_3_6=[0., 1., 2., 3., 4.])
>>> q(_1_1_6=[0., 0., 0., 0., 0.],
... _1_2_6=[0., 2., 5., 6., 9.],
... _1_3_6=[0., 2., 1., 3., 2.])
In the first example, discharge does not depend on the actual value
of the auxiliary term and is always zero:
>>> model.idx_sim = pub.timegrids.init['2000.01.01']
>>> test(0., .75, 1., 4./3., 2., 7./3., 3., 10./3.)
vq(0.0) qa(0.0)
vq(0.75) qa(0.0)
vq(1.0) qa(0.0)
vq(1.333333) qa(0.0)
vq(2.0) qa(0.0)
vq(2.333333) qa(0.0)
vq(3.0) qa(0.0)
vq(3.333333) qa(0.0)
The second example demonstrates that relationships are allowed to
contain jumps, which is the case for the (`vq`,`q`) pairs (2,5) and
(2,6). It also demonstrates that, when the highest `vq` value is
exceeded, linear extrapolation based on the two highest (`vq`,`q`)
pairs is performed:
>>> model.idx_sim = pub.timegrids.init['2000.01.02']
>>> test(0., .75, 1., 4./3., 2., 7./3., 3., 10./3.)
vq(0.0) qa(0.0)
vq(0.75) qa(1.5)
vq(1.0) qa(2.0)
vq(1.333333) qa(3.0)
vq(2.0) qa(5.0)
vq(2.333333) qa(7.0)
vq(3.0) qa(9.0)
vq(3.333333) qa(10.0)
The third example shows that the relationships need not increase
monotonically. Particularly in the extrapolation range, this could
result in negative values of `qa`, which is avoided by setting them
to zero in such cases:
>>> model.idx_sim = pub.timegrids.init['2000.01.03']
>>> test(.5, 1.5, 2.5, 3.5, 4.5, 10.)
vq(0.5) qa(1.0)
vq(1.5) qa(1.5)
vq(2.5) qa(2.0)
vq(3.5) qa(2.5)
vq(4.5) qa(1.5)
vq(10.0) qa(0.0)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
aid = self.sequences.aides.fastaccess
idx = der.toy[self.idx_sim]
for jdx in range(1, con.n):
if der.vq[idx, jdx] >= aid.vq:
break
aid.qa = ((aid.vq-der.vq[idx, jdx-1]) *
(con.q[idx, jdx]-con.q[idx, jdx-1]) /
(der.vq[idx, jdx]-der.vq[idx, jdx-1]) +
con.q[idx, jdx-1])
aid.qa = max(aid.qa, 0.)
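The search-and-interpolate logic above is compact enough to restate in
isolation. The following standalone sketch (plain Python, not the HydPy
API) mirrors it: find the first node whose `vq` value is not smaller
than the given one, interpolate linearly between this node and its
predecessor, and clip negative results to zero:

def interp_qa(vq, vqs, qs):
    jdx = len(vqs)-1    # fall back to the two highest nodes
    for j in range(1, len(vqs)):
        if vqs[j] >= vq:
            jdx = j
            break
    qa = ((vq-vqs[jdx-1])*(qs[jdx]-qs[jdx-1]) /
          (vqs[jdx]-vqs[jdx-1])+qs[jdx-1])
    return max(qa, 0.)

vqs, qs = [0., 1., 2., 3., 4.], [0., 2., 1., 3., 2.]    # third example above
print(interp_qa(.5, vqs, qs))     # 1.0, as in the doctest
print(interp_qa(10., vqs, qs))    # extrapolation yields -4.0, clipped to 0.0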
def calc_v_qa_v1(self):
"""Update the stored water volume based on the equation of continuity.
Note that outflow values that are too high, i.e. values that would
overdrain the lake, are trimmed.
Required derived parameters:
|Seconds|
|NmbSubsteps|
Required flux sequence:
|QZ|
Updated aide sequences:
|llake_aides.QA|
|llake_aides.V|
Basic Equation:
:math:`\\frac{dV}{dt}= QZ - QA`
Examples:
Prepare a lake model with an initial storage of 100,000 m³, an
inflow of 2 m³/s, and a (potential) outflow of 6 m³/s:
>>> from hydpy.models.llake import *
>>> parameterstep()
>>> simulationstep('12h')
>>> maxdt('6h')
>>> derived.seconds.update()
>>> derived.nmbsubsteps.update()
>>> aides.v = 1e5
>>> fluxes.qz = 2.
>>> aides.qa = 6.
By calling method `calc_v_qa_v1` three times with the same inflow
and outflow values, the storage is emptied after the second step and
outflow is equal to inflow after the third step:
>>> model.calc_v_qa_v1()
>>> aides.v
v(13600.0)
>>> aides.qa
qa(6.0)
>>> model.new2old()
>>> model.calc_v_qa_v1()
>>> aides.v
v(0.0)
>>> aides.qa
qa(2.62963)
>>> model.new2old()
>>> model.calc_v_qa_v1()
>>> aides.v
v(0.0)
>>> aides.qa
qa(2.0)
Note that the results of method |calc_v_qa_v1| do not depend
on the (outer) simulation step size but on the (inner)
calculation step size defined by parameter `maxdt`.
"""
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
aid = self.sequences.aides.fastaccess
aid.qa = min(aid.qa, flu.qz+der.nmbsubsteps/der.seconds*aid.v)
aid.v = max(aid.v+der.seconds/der.nmbsubsteps*(flu.qz-aid.qa), 0.)
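The three doctest results can be reproduced without HydPy. With 43200
seconds per simulation step and two substeps, each call spans 21600
seconds:

dt, qz, v, qa_pot = 21600., 2., 1e5, 6.
for _ in range(3):
    qa = min(qa_pot, qz+v/dt)    # trim the outflow to the available water
    v = max(v+dt*(qz-qa), 0.)    # equation of continuity
    print(round(v, 1), round(qa, 5))
# 13600.0 6.0
# 0.0 2.62963
# 0.0 2.0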
def interp_w_v1(self):
"""Calculate the actual water stage based on linear interpolation.
Required control parameters:
|N|
|llake_control.V|
|llake_control.W|
Required state sequence:
|llake_states.V|
Calculated state sequence:
|llake_states.W|
Examples:
Prepare a model object:
>>> from hydpy.models.llake import *
>>> parameterstep('1d')
>>> simulationstep('12h')
For the sake of brevity, define a test function:
>>> def test(*vs):
... for v in vs:
... states.v.new = v
... model.interp_w_v1()
... print(repr(states.v), repr(states.w))
Define a simple `w`-`v` relationship consisting of three nodes and
calculate the water stages for different volumes:
>>> n(3)
>>> v(0., 2., 4.)
>>> w(-1., 1., 2.)
Perform the interpolation for a few test points:
>>> test(0., .5, 2., 3., 4., 5.)
v(0.0) w(-1.0)
v(0.5) w(-0.5)
v(2.0) w(1.0)
v(3.0) w(1.5)
v(4.0) w(2.0)
v(5.0) w(2.5)
The reference water stage of the relationship can be selected
arbitrarily. Even negative water stages are returned, as is
demonstrated by the first two calculations. For volumes outside
the range of the (`v`,`w`) pairs, the two nearest pairs are
used for linear extrapolation.
"""
con = self.parameters.control.fastaccess
new = self.sequences.states.fastaccess_new
for jdx in range(1, con.n):
if con.v[jdx] >= new.v:
break
new.w = ((new.v-con.v[jdx-1]) *
(con.w[jdx]-con.w[jdx-1]) /
(con.v[jdx]-con.v[jdx-1]) +
con.w[jdx-1])
def corr_dw_v1(self):
"""Adjust the water stage drop to the highest value allowed and correct
the associated fluxes.
Note that method |corr_dw_v1| calls the method `interp_v` of the
respective application model. Hence the requirements of the actual
`interp_v` need to be considered additionally.
Required control parameter:
|MaxDW|
Required derived parameters:
|llake_derived.TOY|
|Seconds|
Required flux sequence:
|QZ|
Updated flux sequence:
|llake_fluxes.QA|
Updated state sequences:
|llake_states.W|
|llake_states.V|
Basic Restriction:
:math:`W_{old} - W_{new} \\leq MaxDW`
Examples:
In preparation for the following examples, define a short simulation
time period with a simulation step size of 12 hours and initialize
the required model object:
>>> from hydpy import pub
>>> pub.timegrids = '2000.01.01', '2000.01.04', '12h'
>>> from hydpy.models.llake import *
>>> parameterstep('1d')
>>> derived.toy.update()
>>> derived.seconds.update()
Select the first half of the second day of January as the simulation
step relevant for the following examples:
>>> model.idx_sim = pub.timegrids.init['2000.01.02']
The following tests are based on method |interp_v_v1| for the
interpolation of the stored water volume based on the corrected
water stage:
>>> model.interp_v = model.interp_v_v1
For the sake of simplicity, the underlying `w`-`v` relationship is
assumed to be linear:
>>> n(2.)
>>> w(0., 1.)
>>> v(0., 1e6)
The maximum drop in water stage for the first half of the second
day of January is set to 0.4 m/d. Note that, due to the difference
between the parameter step size and the simulation step size, the
actual value used for calculation is 0.2 m/12h:
>>> maxdw(_1_1_18=.1,
... _1_2_6=.4,
... _1_2_18=.1)
>>> maxdw
maxdw(toy_1_1_18_0_0=0.1,
toy_1_2_6_0_0=0.4,
toy_1_2_18_0_0=0.1)
>>> from hydpy import round_
>>> round_(maxdw.value[2])
0.2
Define old and new water stages and volumes in agreement with the
given linear relationship:
>>> states.w.old = 1.
>>> states.v.old = 1e6
>>> states.w.new = .9
>>> states.v.new = 9e5
Also define an inflow and an outflow value. Note that the latter
is set to zero, which is inconsistent with the actual water stage drop
defined above but done for didactic reasons:
>>> fluxes.qz = 1.
>>> fluxes.qa = 0.
Calling the |corr_dw_v1| method does not change the values of
any of the following sequences, as the actual drop (0.1 m/12h) is
smaller than the allowed drop (0.2 m/12h):
>>> model.corr_dw_v1()
>>> states.w
w(0.9)
>>> states.v
v(900000.0)
>>> fluxes.qa
qa(0.0)
Note that the values given above are not recalculated, as can
clearly be seen from the lake outflow, which is still zero.
By setting the new value of the water stage to 0.6 m, the actual
drop (0.4 m/12h) exceeds the allowed drop (0.2 m/12h). Hence, the
water stage is trimmed and the other values are recalculated:
>>> states.w.new = .6
>>> model.corr_dw_v1()
>>> states.w
w(0.8)
>>> states.v
v(800000.0)
>>> fluxes.qa
qa(5.62963)
By setting the maximum water stage drop to zero, method
|corr_dw_v1| is effectively disabled. Regardless of the actual
change in water stage, no trimming or recalculating is performed:
>>> maxdw.toy_01_02_06 = 0.
>>> states.w.new = .6
>>> model.corr_dw_v1()
>>> states.w
w(0.6)
>>> states.v
v(800000.0)
>>> fluxes.qa
qa(5.62963)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
idx = der.toy[self.idx_sim]
if (con.maxdw[idx] > 0.) and ((old.w-new.w) > con.maxdw[idx]):
new.w = old.w-con.maxdw[idx]
self.interp_v()
flu.qa = flu.qz+(old.v-new.v)/der.seconds
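The recalculated outflow of the second doctest follows by hand:
trimming the water stage to 0.8 m implies a new volume of 800000 m³
via the linear `w`-`v` relationship, and continuity gives the outflow:

# Tracing the doctest above (plain Python, no HydPy required):
qz, v_old, v_new, seconds = 1., 1e6, 8e5, 43200.
print(round(qz+(v_old-v_new)/seconds, 5))    # 5.62963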
def modify_qa_v1(self):
"""Add water to or remove water from the calculated lake outflow.
Required control parameter:
|Verzw|
Required derived parameter:
|llake_derived.TOY|
Updated flux sequence:
|llake_fluxes.QA|
Basic Equation:
:math:`QA = QA^* - Verzw`
Examples:
In preparation for the following examples, define a short simulation
time period with a simulation step size of 12 hours and initialize
the required model object:
>>> from hydpy import pub
>>> pub.timegrids = '2000.01.01', '2000.01.04', '12h'
>>> from hydpy.models.llake import *
>>> parameterstep('1d')
>>> derived.toy.update()
Select the first half of the second day of January as the simulation
step relevant for the following examples:
>>> model.idx_sim = pub.timegrids.init['2000.01.02']
Assume that, in accordance with previous calculations, the original
outflow value is 3 m³/s:
>>> fluxes.qa = 3.
Prepare the shape of parameter `verzw` (usually, this is done
automatically when calling parameter `n`):
>>> verzw.shape = (None,)
Set the value of the abstraction on the first half of the second
day of January to 2 m³/s:
>>> verzw(_1_1_18=0.,
... _1_2_6=2.,
... _1_2_18=0.)
In the first example `verzw` is simply subtracted from `qa`:
>>> model.modify_qa_v1()
>>> fluxes.qa
qa(1.0)
In the second example `verzw` exceeds `qa`, resulting in a zero
outflow value:
>>> model.modify_qa_v1()
>>> fluxes.qa
qa(0.0)
The last example demonstrates that "negative abstractions" are
allowed, resulting in an increase in simulated outflow:
>>> verzw.toy_1_2_6 = -2.
>>> model.modify_qa_v1()
>>> fluxes.qa
qa(2.0)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
idx = der.toy[self.idx_sim]
flu.qa = max(flu.qa-con.verzw[idx], 0.)
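All three doctest results follow directly from the single line of
arithmetic, max(qa-verzw, 0.); note that `qa` carries over from one
call to the next:

for qa, verzw in ((3., 2.), (1., 2.), (0., -2.)):
    print(max(qa-verzw, 0.))    # 1.0, then 0.0, then 2.0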
def pass_q_v1(self):
"""Update the outlet link sequence."""
flu = self.sequences.fluxes.fastaccess
out = self.sequences.outlets.fastaccess
out.q[0] += flu.qa
def thresholds(self):
"""Threshold values of the response functions."""
return numpy.array(
sorted(self._key2float(key) for key in self._coefs), dtype=float)
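The helpers `_coefs` and `_key2float` are not shown here. A minimal
sketch of the same pattern, under the purely hypothetical assumption
that threshold keys encode floats with underscores as decimal
separators, could look as follows:

import numpy

def key2float(key):
    # hypothetical stand-in for `_key2float`: 'th_0_5' -> 0.5
    return float(key.split('_', 1)[1].replace('_', '.'))

coefs = {'th_2_0': None, 'th_0_5': None, 'th_1_0': None}
print(numpy.array(sorted(key2float(key) for key in coefs), dtype=float))
# [0.5 1.  2. ]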
def prepare_arrays(sim=None, obs=None, node=None, skip_nan=False):
"""Prepare and return two |numpy| arrays based on the given arguments.
Note that many functions provided by module |statstools| apply function
|prepare_arrays| internally (e.g. |nse|). But you can also apply it
manually, as shown in the following examples.
Function |prepare_arrays| can extract time series data from |Node|
objects. To set up an example for this, we define a initialization
time period and prepare a |Node| object:
>>> from hydpy import pub, Node, round_, nan
>>> pub.timegrids = '01.01.2000', '07.01.2000', '1d'
>>> node = Node('test')
Next, we assign values to the `simulation` and the `observation`
sequences (doing so for the `observation` sequence requires a little
trick, as its values are normally supposed to be read from a file):
>>> node.prepare_simseries()
>>> with pub.options.checkseries(False):
... node.sequences.sim.series = 1.0, nan, nan, nan, 2.0, 3.0
... node.sequences.obs.ramflag = True
... node.sequences.obs.series = 4.0, 5.0, nan, nan, nan, 6.0
Now we can pass the node object to function |prepare_arrays| and
get the (unmodified) time series data:
>>> from hydpy import prepare_arrays
>>> arrays = prepare_arrays(node=node)
>>> round_(arrays[0])
1.0, nan, nan, nan, 2.0, 3.0
>>> round_(arrays[1])
4.0, 5.0, nan, nan, nan, 6.0
Alternatively, we can directly pass any iterables (e.g. |list| and
|tuple| objects) containing the `simulated` and `observed` data:
>>> arrays = prepare_arrays(sim=list(node.sequences.sim.series),
... obs=tuple(node.sequences.obs.series))
>>> round_(arrays[0])
1.0, nan, nan, nan, 2.0, 3.0
>>> round_(arrays[1])
4.0, 5.0, nan, nan, nan, 6.0
The optional `skip_nan` flag allows skipping all values that are
not numbers. Note that only those pairs of `simulated` and `observed`
values which do not contain any `nan` are returned:
>>> arrays = prepare_arrays(node=node, skip_nan=True)
>>> round_(arrays[0])
1.0, 3.0
>>> round_(arrays[1])
4.0, 6.0
The final examples show the error messages raised in case of
invalid combinations of input arguments:
>>> prepare_arrays()
Traceback (most recent call last):
...
ValueError: Neither a `Node` object is passed to argument `node` nor \
are arrays passed to arguments `sim` and `obs`.
>>> prepare_arrays(sim=node.sequences.sim.series, node=node)
Traceback (most recent call last):
...
ValueError: Values are passed to both arguments `sim` and `node`, \
which is not allowed.
>>> prepare_arrays(obs=node.sequences.obs.series, node=node)
Traceback (most recent call last):
...
ValueError: Values are passed to both arguments `obs` and `node`, \
which is not allowed.
>>> prepare_arrays(sim=node.sequences.sim.series)
Traceback (most recent call last):
...
ValueError: A value is passed to argument `sim` but \
no value is passed to argument `obs`.
>>> prepare_arrays(obs=node.sequences.obs.series)
Traceback (most recent call last):
...
ValueError: A value is passed to argument `obs` but \
no value is passed to argument `sim`.
"""
if node:
if sim is not None:
raise ValueError(
'Values are passed to both arguments `sim` and `node`, '
'which is not allowed.')
if obs is not None:
raise ValueError(
'Values are passed to both arguments `obs` and `node`, '
'which is not allowed.')
sim = node.sequences.sim.series
obs = node.sequences.obs.series
elif (sim is not None) and (obs is None):
raise ValueError(
'A value is passed to argument `sim` '
'but no value is passed to argument `obs`.')
elif (obs is not None) and (sim is None):
raise ValueError(
'A value is passed to argument `obs` '
'but no value is passed to argument `sim`.')
elif (sim is None) and (obs is None):
raise ValueError(
'Neither a `Node` object is passed to argument `node` nor '
'are arrays passed to arguments `sim` and `obs`.')
sim = numpy.asarray(sim)
obs = numpy.asarray(obs)
if skip_nan:
idxs = ~numpy.isnan(sim) * ~numpy.isnan(obs)
sim = sim[idxs]
obs = obs[idxs]
return sim, obs
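The `skip_nan` mechanics in isolation: the boolean masks of both arrays
are combined, so a pair of values survives only if neither member is
`nan`:

import numpy
from numpy import nan

sim = numpy.array([1., nan, nan, nan, 2., 3.])
obs = numpy.array([4., 5., nan, nan, nan, 6.])
idxs = ~numpy.isnan(sim)*~numpy.isnan(obs)    # pairwise "both valid" mask
print(sim[idxs], obs[idxs])    # [1. 3.] [4. 6.]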
def nse(sim=None, obs=None, node=None, skip_nan=False):
"""Calculate the efficiency criteria after Nash & Sutcliffe.
If the simulated values predict the observed values as well
as the average observed value (regarding the the mean square
error), the NSE value is zero:
>>> from hydpy import nse
>>> nse(sim=[2.0, 2.0, 2.0], obs=[1.0, 2.0, 3.0])
0.0
>>> nse(sim=[0.0, 2.0, 4.0], obs=[1.0, 2.0, 3.0])
0.0
For worse and better simulated values the NSE is negative
or positive, respectively:
>>> nse(sim=[3.0, 2.0, 1.0], obs=[1.0, 2.0, 3.0])
-3.0
>>> nse(sim=[1.0, 2.0, 2.0], obs=[1.0, 2.0, 3.0])
0.5
The highest possible value is one:
>>> nse(sim=[1.0, 2.0, 3.0], obs=[1.0, 2.0, 3.0])
1.0
See the documentation on function |prepare_arrays| for some
additional instructions for use of function |nse|.
"""
sim, obs = prepare_arrays(sim, obs, node, skip_nan)
return 1.-numpy.sum((sim-obs)**2)/numpy.sum((obs-numpy.mean(obs))**2)
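The 0.5 example can be verified directly against the underlying
formula, :math:`NSE = 1 - \\sum(sim-obs)^2 / \\sum(obs-\\overline{obs})^2`:

import numpy

sim = numpy.array([1.0, 2.0, 2.0])
obs = numpy.array([1.0, 2.0, 3.0])
print(1.-numpy.sum((sim-obs)**2)/numpy.sum((obs-numpy.mean(obs))**2))    # 0.5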
def bias_abs(sim=None, obs=None, node=None, skip_nan=False):
"""Calculate the absolute difference between the means of the simulated
and the observed values.
>>> from hydpy import round_
>>> from hydpy import bias_abs
>>> round_(bias_abs(sim=[2.0, 2.0, 2.0], obs=[1.0, 2.0, 3.0]))
0.0
>>> round_(bias_abs(sim=[5.0, 2.0, 2.0], obs=[1.0, 2.0, 3.0]))
1.0
>>> round_(bias_abs(sim=[1.0, 1.0, 1.0], obs=[1.0, 2.0, 3.0]))
-1.0
See the documentation on function |prepare_arrays| for some
additional instructions for use of function |bias_abs|.
"""
sim, obs = prepare_arrays(sim, obs, node, skip_nan)
return numpy.mean(sim-obs)
def std_ratio(sim=None, obs=None, node=None, skip_nan=False):
"""Calculate the ratio between the standard deviation of the simulated
and the observed values.
>>> from hydpy import round_
>>> from hydpy import std_ratio
>>> round_(std_ratio(sim=[1.0, 2.0, 3.0], obs=[1.0, 2.0, 3.0]))
0.0
>>> round_(std_ratio(sim=[1.0, 1.0, 1.0], obs=[1.0, 2.0, 3.0]))
-1.0
>>> round_(std_ratio(sim=[0.0, 3.0, 6.0], obs=[1.0, 2.0, 3.0]))
2.0
See the documentation on function |prepare_arrays| for some
additional instructions for use of function |std_ratio|.
"""
sim, obs = prepare_arrays(sim, obs, node, skip_nan)
return numpy.std(sim)/numpy.std(obs)-1.
def corr(sim=None, obs=None, node=None, skip_nan=False):
"""Calculate the product-moment correlation coefficient after Pearson.
>>> from hydpy import round_
>>> from hydpy import corr
>>> round_(corr(sim=[0.5, 1.0, 1.5], obs=[1.0, 2.0, 3.0]))
1.0
>>> round_(corr(sim=[4.0, 2.0, 0.0], obs=[1.0, 2.0, 3.0]))
-1.0
>>> round_(corr(sim=[1.0, 2.0, 1.0], obs=[1.0, 2.0, 3.0]))
0.0
See the documentation on function |prepare_arrays| for some
additional instructions for use of function |corr|.
"""
sim, obs = prepare_arrays(sim, obs, node, skip_nan)
return numpy.corrcoef(sim, obs)[0, 1]
def hsepd_pdf(sigma1, sigma2, xi, beta,
sim=None, obs=None, node=None, skip_nan=False):
"""Calculate the probability densities based on the
heteroskedastic skewed exponential power distribution.
For convenience, the required parameters of the probability density
function as well as the simulated and observed values are stored
in a dictionary:
>>> import numpy
>>> from hydpy import round_
>>> from hydpy import hsepd_pdf
>>> general = {'sigma1': 0.2,
... 'sigma2': 0.0,
... 'xi': 1.0,
... 'beta': 0.0,
... 'sim': numpy.arange(10.0, 41.0),
... 'obs': numpy.full(31, 25.0)}
The following test function allows varying one parameter; it prints
some of the resulting probability density values and plots all of
them for the different simulated values:
>>> def test(**kwargs):
... from matplotlib import pyplot
... special = general.copy()
... name, values = list(kwargs.items())[0]
... results = numpy.zeros((len(general['sim']), len(values)+1))
... results[:, 0] = general['sim']
... for jdx, value in enumerate(values):
... special[name] = value
... results[:, jdx+1] = hsepd_pdf(**special)
... pyplot.plot(results[:, 0], results[:, jdx+1],
... label='%s=%.1f' % (name, value))
... pyplot.legend()
... for idx, result in enumerate(results):
... if not (idx % 5):
... round_(result)
When varying parameter `beta`, the resulting probabilities correspond
to the Laplace distribution (1.0), the normal distribution (0.0), and
the uniform distribution (-1.0), respectively. Note that we use -0.99
instead of -1.0 for approximating the uniform distribution, to prevent
numerical problems that have not been solved yet:
>>> test(beta=[1.0, 0.0, -0.99])
10.0, 0.002032, 0.000886, 0.0
15.0, 0.008359, 0.010798, 0.0
20.0, 0.034382, 0.048394, 0.057739
25.0, 0.141421, 0.079788, 0.057739
30.0, 0.034382, 0.048394, 0.057739
35.0, 0.008359, 0.010798, 0.0
40.0, 0.002032, 0.000886, 0.0
.. testsetup::
>>> from matplotlib import pyplot
>>> pyplot.close()
When varying parameter `xi`, the resulting density is negatively
skewed (0.2), symmetric (1.0), and positively skewed (5.0),
respectively:
>>> test(xi=[0.2, 1.0, 5.0])
10.0, 0.0, 0.000886, 0.003175
15.0, 0.0, 0.010798, 0.012957
20.0, 0.092845, 0.048394, 0.036341
25.0, 0.070063, 0.079788, 0.070063
30.0, 0.036341, 0.048394, 0.092845
35.0, 0.012957, 0.010798, 0.0
40.0, 0.003175, 0.000886, 0.0
.. testsetup::
>>> from matplotlib import pyplot
>>> pyplot.close()
In the above examples, the actual `sigma` (5.0) is calculated by
multiplying `sigma1` (0.2) with the mean simulated value (25.0),
internally. This can be done for modelling homoscedastic errors.
Instead, `sigma2` is multiplied with the individual simulated values
to account for heteroscedastic errors. With increasing values of
`sigma2`, the resulting densities are modified as follows:
>>> test(sigma2=[0.0, 0.1, 0.2])
10.0, 0.000886, 0.002921, 0.005737
15.0, 0.010798, 0.018795, 0.022831
20.0, 0.048394, 0.044159, 0.037988
25.0, 0.079788, 0.053192, 0.039894
30.0, 0.048394, 0.04102, 0.032708
35.0, 0.010798, 0.023493, 0.023493
40.0, 0.000886, 0.011053, 0.015771
.. testsetup::
>>> from matplotlib import pyplot
>>> pyplot.close()
"""
sim, obs = prepare_arrays(sim, obs, node, skip_nan)
sigmas = _pars_h(sigma1, sigma2, sim)
mu_xi, sigma_xi, w_beta, c_beta = _pars_sepd(xi, beta)
x, mu = obs, sim
a = (x-mu)/sigmas
a_xi = numpy.empty(a.shape)
idxs = mu_xi+sigma_xi*a < 0.
a_xi[idxs] = numpy.absolute(xi*(mu_xi+sigma_xi*a[idxs]))
a_xi[~idxs] = numpy.absolute(1./xi*(mu_xi+sigma_xi*a[~idxs]))
ps = (2.*sigma_xi/(xi+1./xi)*w_beta *
numpy.exp(-c_beta*a_xi**(2./(1.+beta))))/sigmas
return ps
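A quick plausibility check on the doctest numbers: for `xi` = 1.0 and
`beta` = 0.0 the distribution reduces to the normal distribution, so
the peak value 0.079788 is just the Gaussian density at the mean with
sigma = `sigma1` times the mean simulated value, i.e. 0.2 * 25.0 = 5.0:

import math

sigma = .2*25.
print(round(1./(sigma*math.sqrt(2.*math.pi)), 6))    # 0.079788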
def hsepd_manual(sigma1, sigma2, xi, beta,
sim=None, obs=None, node=None, skip_nan=False):
"""Calculate the mean of the logarithmised probability densities of the
'heteroskedastic skewed exponential power distribution.
The following examples are taken from the documentation of function
|hsepd_pdf|, which is used by function |hsepd_manual|. The first
one deals with a heteroscedastic normal distribution:
>>> import numpy
>>> from hydpy import round_
>>> from hydpy import hsepd_manual
>>> round_(hsepd_manual(sigma1=0.2, sigma2=0.2,
... xi=1.0, beta=0.0,
... sim=numpy.arange(10.0, 41.0),
... obs=numpy.full(31, 25.0)))
-3.682842
The second one is supposed to show that too small or zero probability
density values are set to 1e-200 before calculating their logarithm
(which means that the lowest possible value returned by function
|hsepd_manual| is approximately -460):
>>> round_(hsepd_manual(sigma1=0.2, sigma2=0.0,
... xi=1.0, beta=-0.99,
... sim=numpy.arange(10.0, 41.0),
... obs=numpy.full(31, 25.0)))
-209.539335
"""
sim, obs = prepare_arrays(sim, obs, node, skip_nan)
return _hsepd_manual(sigma1, sigma2, xi, beta, sim, obs)
def hsepd(sim=None, obs=None, node=None, skip_nan=False,
inits=None, return_pars=False, silent=True):
"""Calculate the mean of the logarithmised probability densities of the
'heteroskedastic skewed exponential power distribution.
Function |hsepd| serves the same purpose as function |hsepd_manual|,
but tries to estimate the parameters of the heteroscedastic skewed
exponential power distribution via an optimization algorithm. This
is shown by generating a random sample: 1000 simulated values
are scattered around the observed (true) value of 10.0 with a
standard deviation of 2.0:
>>> import numpy
>>> numpy.random.seed(0)
>>> sim = numpy.random.normal(10.0, 2.0, 1000)
>>> obs = numpy.full(1000, 10.0)
First, as a reference, we calculate the "true" value based on
function |hsepd_manual| and the correct distribution parameters:
>>> from hydpy import round_
>>> from hydpy import hsepd, hsepd_manual
>>> round_(hsepd_manual(sigma1=0.2, sigma2=0.0,
... xi=1.0, beta=0.0,
... sim=sim, obs=obs))
-2.100093
When using function |hsepd|, the returned value is even a little
"better":
>>> round_(hsepd(sim=sim, obs=obs))
-2.09983
This is due to the deviation of the random sample from its
theoretical distribution, which is reflected by small differences
between the estimated values and the theoretical values of
`sigma1` (0.2), `sigma2` (0.0), `xi` (1.0), and `beta` (0.0).
The estimated values are returned in the mentioned order through
enabling the `return_pars` option:
>>> value, pars = hsepd(sim=sim, obs=obs, return_pars=True)
>>> round_(pars, decimals=5)
0.19966, 0.0, 0.96836, 0.0188
There is no guarantee that the numerical optimization
algorithm underlying function |hsepd| will always find the parameters
resulting in the largest value returned by function |hsepd_manual|.
You can increase its robustness (and decrease computation time) by
supplying good initial parameter values:
>>> value, pars = hsepd(sim=sim, obs=obs, return_pars=True,
... inits=(0.2, 0.0, 1.0, 0.0))
>>> round_(pars, decimals=5)
0.19966, 0.0, 0.96836, 0.0188
However, the following example shows a case where this strategy
yields worse results:
>>> value, pars = hsepd(sim=sim, obs=obs, return_pars=True,
... inits=(0.0, 0.2, 1.0, 0.0))
>>> round_(value)
-2.174492
>>> round_(pars)
0.0, 0.213179, 1.705485, 0.505112
"""
def transform(pars):
"""Transform the actual optimization problem into a function to
be minimized and apply parameter constraints."""
sigma1, sigma2, xi, beta = constrain(*pars)
return -_hsepd_manual(sigma1, sigma2, xi, beta, sim, obs)
def constrain(sigma1, sigma2, xi, beta):
"""Apply constrains on the given parameter values."""
sigma1 = numpy.clip(sigma1, 0.0, None)
sigma2 = numpy.clip(sigma2, 0.0, None)
xi = numpy.clip(xi, 0.1, 10.0)
beta = numpy.clip(beta, -0.99, 5.0)
return sigma1, sigma2, xi, beta
sim, obs = prepare_arrays(sim, obs, node, skip_nan)
if not inits:
inits = [0.1, 0.2, 3.0, 1.0]
values = optimize.fmin(transform, inits,
ftol=1e-12, xtol=1e-12,
disp=not silent)
values = constrain(*values)
result = _hsepd_manual(*values, sim=sim, obs=obs)
if return_pars:
return result, values
return result
def calc_mean_time(timepoints, weights):
"""Return the weighted mean of the given timepoints.
With equal given weights, the result is simply the mean of the given
time points:
>>> from hydpy import calc_mean_time
>>> calc_mean_time(timepoints=[3., 7.],
... weights=[2., 2.])
5.0
With different weights, the resulting mean time is shifted to the larger
ones:
>>> calc_mean_time(timepoints=[3., 7.],
... weights=[1., 3.])
6.0
Or, in the most extreme case:
>>> calc_mean_time(timepoints=[3., 7.],
... weights=[0., 4.])
7.0
Some checks for input plausibility are performed, e.g.:
>>> calc_mean_time(timepoints=[3., 7.],
... weights=[-2., 2.])
Traceback (most recent call last):
...
ValueError: While trying to calculate the weighted mean time, \
the following error occurred: For the following objects, at least \
one value is negative: weights.
"""
timepoints = numpy.array(timepoints)
weights = numpy.array(weights)
validtools.test_equal_shape(timepoints=timepoints, weights=weights)
validtools.test_non_negative(weights=weights)
return numpy.dot(timepoints, weights)/numpy.sum(weights)
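For reference, the dot-product formula above is equivalent to numpy's
built-in weighted average:

import numpy

print(numpy.average([3., 7.], weights=[1., 3.]))    # 6.0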
def calc_mean_time_deviation(timepoints, weights, mean_time=None):
"""Return the weighted deviation of the given timepoints from their mean
time.
With equal given weights, the result is simply the standard deviation of the
given time points:
>>> from hydpy import calc_mean_time_deviation
>>> calc_mean_time_deviation(timepoints=[3., 7.],
... weights=[2., 2.])
2.0
One can pass a precalculated or alternate mean time:
>>> from hydpy import round_
>>> round_(calc_mean_time_deviation(timepoints=[3., 7.],
... weights=[2., 2.],
... mean_time=4.))
2.236068
>>> round_(calc_mean_time_deviation(timepoints=[3., 7.],
... weights=[1., 3.]))
1.732051
Or, in the most extreme case:
>>> calc_mean_time_deviation(timepoints=[3., 7.],
... weights=[0., 4.])
0.0
Some checks for input plausibility are performed, e.g.:
>>> calc_mean_time_deviation(timepoints=[3., 7.],
... weights=[-2., 2.])
Traceback (most recent call last):
...
ValueError: While trying to calculate the weighted time deviation \
from mean time, the following error occurred: For the following objects, \
at least one value is negative: weights.
"""
timepoints = numpy.array(timepoints)
weights = numpy.array(weights)
validtools.test_equal_shape(timepoints=timepoints, weights=weights)
validtools.test_non_negative(weights=weights)
if mean_time is None:
mean_time = calc_mean_time(timepoints, weights)
return (numpy.sqrt(numpy.dot(weights, (timepoints-mean_time)**2) /
numpy.sum(weights)))
|
Return the weighted deviation of the given timepoints from their mean
time.
With equal given weights, the is simply the standard deviation of the
given time points:
>>> from hydpy import calc_mean_time_deviation
>>> calc_mean_time_deviation(timepoints=[3., 7.],
... weights=[2., 2.])
2.0
One can pass a precalculated or alternate mean time:
>>> from hydpy import round_
>>> round_(calc_mean_time_deviation(timepoints=[3., 7.],
... weights=[2., 2.],
... mean_time=4.))
2.236068
>>> round_(calc_mean_time_deviation(timepoints=[3., 7.],
... weights=[1., 3.]))
1.732051
Or, in the most extreme case:
>>> calc_mean_time_deviation(timepoints=[3., 7.],
... weights=[0., 4.])
0.0
Some checks for input plausibility are performed, e.g.:
>>> calc_mean_time_deviation(timepoints=[3., 7.],
... weights=[-2., 2.])
Traceback (most recent call last):
...
ValueError: While trying to calculate the weighted time deviation \
from mean time, the following error occurred: For the following objects, \
at least one value is negative: weights.
|
entailment
|
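The deviation is, analogously, the square root of the weighted mean of
the squared distances from the mean time. A quick plain-numpy
cross-check of the `1.732051` value shown above (a sketch, not HydPy
API):
import numpy
timepoints = numpy.array([3., 7.])
weights = numpy.array([1., 3.])
mean_time = numpy.dot(timepoints, weights)/numpy.sum(weights)  # 6.0
# sqrt((1*(3-6)**2 + 3*(7-6)**2)/4) = sqrt(3) ≈ 1.732051
deviation = numpy.sqrt(
    numpy.dot(weights, (timepoints-mean_time)**2)/numpy.sum(weights))
assert abs(deviation-3.**.5) < 1e-12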
def evaluationtable(nodes, criteria, nodenames=None,
critnames=None, skip_nan=False):
"""Return a table containing the results of the given evaluation
criteria for the given |Node| objects.
First, we define two nodes with different simulation and observation
data (see function |prepare_arrays| for some explanations):
>>> from hydpy import pub, Node, nan
>>> pub.timegrids = '01.01.2000', '04.01.2000', '1d'
>>> nodes = Node('test1'), Node('test2')
>>> for node in nodes:
... node.prepare_simseries()
... node.sequences.sim.series = 1.0, 2.0, 3.0
... node.sequences.obs.ramflag = True
... node.sequences.obs.series = 4.0, 5.0, 6.0
>>> nodes[0].sequences.sim.series = 1.0, 2.0, 3.0
>>> nodes[0].sequences.obs.series = 4.0, 5.0, 6.0
>>> nodes[1].sequences.sim.series = 1.0, 2.0, 3.0
>>> with pub.options.checkseries(False):
... nodes[1].sequences.obs.series = 3.0, nan, 1.0
Selecting functions |corr| and |bias_abs| as evaluation criteria,
function |evaluationtable| returns the following table (which is
actually a pandas data frame):
>>> from hydpy import evaluationtable, corr, bias_abs
>>> evaluationtable(nodes, (corr, bias_abs))
corr bias_abs
test1 1.0 -3.0
test2 NaN NaN
One can pass alternative names for both the node objects and the
criteria functions. Also, `nan` values can be skipped:
>>> evaluationtable(nodes, (corr, bias_abs),
... nodenames=('first node', 'second node'),
... critnames=('corrcoef', 'bias'),
... skip_nan=True)
corrcoef bias
first node 1.0 -3.0
second node -1.0 0.0
The number of assigned node objects and criteria functions must
match the number of given alternative names:
>>> evaluationtable(nodes, (corr, bias_abs),
... nodenames=('first node',))
Traceback (most recent call last):
...
ValueError: While trying to evaluate the simulation results of some \
node objects, the following error occurred: 2 node objects are given \
which does not match the number of given alternative names being 1.
>>> evaluationtable(nodes, (corr, bias_abs),
... critnames=('corrcoef',))
Traceback (most recent call last):
...
ValueError: While trying to evaluate the simulation results of some \
node objects, the following error occurred: 2 criteria functions are given \
which does not match the number of given alternative names being 1.
"""
if nodenames:
if len(nodes) != len(nodenames):
raise ValueError(
'%d node objects are given which does not match '
'the number of given alternative names being %s.'
% (len(nodes), len(nodenames)))
else:
nodenames = [node.name for node in nodes]
if critnames:
if len(criteria) != len(critnames):
raise ValueError(
'%d criteria functions are given which does not match '
'the number of given alternative names being %s.'
% (len(criteria), len(critnames)))
else:
critnames = [crit.__name__ for crit in criteria]
data = numpy.empty((len(nodes), len(criteria)), dtype=float)
for idx, node in enumerate(nodes):
sim, obs = prepare_arrays(None, None, node, skip_nan)
for jdx, criterion in enumerate(criteria):
data[idx, jdx] = criterion(sim, obs)
table = pandas.DataFrame(
data=data, index=nodenames, columns=critnames)
return table
|
Return a table containing the results of the given evaluation
criteria for the given |Node| objects.
First, we define two nodes with different simulation and observation
data (see function |prepare_arrays| for some explanations):
>>> from hydpy import pub, Node, nan
>>> pub.timegrids = '01.01.2000', '04.01.2000', '1d'
>>> nodes = Node('test1'), Node('test2')
>>> for node in nodes:
... node.prepare_simseries()
... node.sequences.sim.series = 1.0, 2.0, 3.0
... node.sequences.obs.ramflag = True
... node.sequences.obs.series = 4.0, 5.0, 6.0
>>> nodes[0].sequences.sim.series = 1.0, 2.0, 3.0
>>> nodes[0].sequences.obs.series = 4.0, 5.0, 6.0
>>> nodes[1].sequences.sim.series = 1.0, 2.0, 3.0
>>> with pub.options.checkseries(False):
... nodes[1].sequences.obs.series = 3.0, nan, 1.0
Selecting functions |corr| and |bias_abs| as evaluation criteria,
function |evaluationtable| returns the following table (which is
actually a pandas data frame):
>>> from hydpy import evaluationtable, corr, bias_abs
>>> evaluationtable(nodes, (corr, bias_abs))
corr bias_abs
test1 1.0 -3.0
test2 NaN NaN
One can pass alternative names for both the node objects and the
criteria functions. Also, `nan` values can be skipped:
>>> evaluationtable(nodes, (corr, bias_abs),
... nodenames=('first node', 'second node'),
... critnames=('corrcoef', 'bias'),
... skip_nan=True)
corrcoef bias
first node 1.0 -3.0
second node -1.0 0.0
The number of assigned node objects and criteria functions must
match the number of given alternative names:
>>> evaluationtable(nodes, (corr, bias_abs),
... nodenames=('first node',))
Traceback (most recent call last):
...
ValueError: While trying to evaluate the simulation results of some \
node objects, the following error occurred: 2 node objects are given \
which does not match the number of given alternative names being 1.
>>> evaluationtable(nodes, (corr, bias_abs),
... critnames=('corrcoef',))
Traceback (most recent call last):
...
ValueError: While trying to evaluate the simulation results of some \
node objects, the following error occurred: 2 criteria functions are given \
which does not match the number of given alternative names being 1.
|
entailment
|
def set_primary_parameters(self, **kwargs):
"""Set all primary parameters at once."""
given = sorted(kwargs.keys())
required = sorted(self._PRIMARY_PARAMETERS)
if given == required:
for (key, value) in kwargs.items():
setattr(self, key, value)
else:
raise ValueError(
'When passing primary parameter values as initialization '
'arguments of the instantaneous unit hydrograph class `%s`, '
'or when using method `set_primary_parameters`, one has '
'to define all values at once via keyword arguments. '
'But instead of the primary parameter names `%s` the '
'following keywords were given: %s.'
% (objecttools.classname(self),
', '.join(required), ', '.join(given)))
|
Set all primary parameters at once.
|
entailment
|
def primary_parameters_complete(self):
"""True/False flag that indicates wheter the values of all primary
parameters are defined or not."""
for primpar in self._PRIMARY_PARAMETERS.values():
if primpar.__get__(self) is None:
return False
return True
|
True/False flag that indicates whether the values of all primary
parameters are defined or not.
|
entailment
|
def update(self):
"""Delete the coefficients of the pure MA model and also all MA and
AR coefficients of the ARMA model. Also calculate or delete the values
of all secondary iuh parameters, depending on the completeness of the
values of the primary parameters.
"""
del self.ma.coefs
del self.arma.ma_coefs
del self.arma.ar_coefs
if self.primary_parameters_complete:
self.calc_secondary_parameters()
else:
for secpar in self._SECONDARY_PARAMETERS.values():
secpar.__delete__(self)
|
Delete the coefficients of the pure MA model and also all MA and
AR coefficients of the ARMA model. Also calculate or delete the values
of all secondary iuh parameters, depending on the completeness of the
values of the primary parameters.
|
entailment
|
def delay_response_series(self):
"""A tuple of two numpy arrays, which hold the time delays and the
associated iuh values respectively."""
delays = []
responses = []
sum_responses = 0.
for t in itertools.count(self.dt_response/2., self.dt_response):
delays.append(t)
response = self(t)
responses.append(response)
sum_responses += self.dt_response*response
if (sum_responses > .9) and (response < self.smallest_response):
break
return numpy.array(delays), numpy.array(responses)
|
A tuple of two numpy arrays, which hold the time delays and the
associated iuh values respectively.
|
entailment
|
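The stopping rule of `delay_response_series` (accumulate ordinates until
at least 90 % of the response volume is covered and the current ordinate
falls below `smallest_response`) works for any callable iuh. A sketch
with an exponential response and illustrative step settings:
import itertools
import numpy
dt_response, smallest_response = 0.1, 1e-9   # illustrative values
iuh = lambda t: numpy.exp(-t)                # unit-volume response
delays, responses, volume = [], [], 0.
for t in itertools.count(dt_response/2., dt_response):
    delays.append(t)
    response = iuh(t)
    responses.append(response)
    volume += dt_response*response
    if (volume > .9) and (response < smallest_response):
        break
assert .99 < volume < 1.01    # approximates the total volume of one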
def plot(self, threshold=None, **kwargs):
"""Plot the instanteneous unit hydrograph.
The optional argument allows for defining a threshold of the cumulative
sum of the hydrograph, used to adjust the largest value of the x-axis.
It must be a value between zero and one.
"""
delays, responses = self.delay_response_series
pyplot.plot(delays, responses, **kwargs)
pyplot.xlabel('time')
pyplot.ylabel('response')
if threshold is not None:
threshold = numpy.clip(threshold, 0., 1.)
cumsum = numpy.cumsum(responses)
idx = numpy.where(cumsum >= threshold*cumsum[-1])[0][0]
pyplot.xlim(0., delays[idx])
|
Plot the instantaneous unit hydrograph.
The optional argument allows for defining a threshold of the cumulative
sum of the hydrograph, used to adjust the largest value of the x-axis.
It must be a value between zero and one.
|
entailment
|
def moment1(self):
"""The first time delay weighted statistical moment of the
instantaneous unit hydrograph."""
delays, response = self.delay_response_series
return statstools.calc_mean_time(delays, response)
|
The first time delay weighted statistical moment of the
instantaneous unit hydrograph.
|
entailment
|
def moment2(self):
"""The second time delay weighted statistical momens of the
instantaneous unit hydrograph."""
moment1 = self.moment1
delays, response = self.delay_response_series
return statstools.calc_mean_time_deviation(
delays, response, moment1)
|
The second time delay weighted statistical moment of the
instantaneous unit hydrograph.
|
entailment
|
def calc_secondary_parameters(self):
"""Determine the values of the secondary parameters `a` and `b`."""
self.a = self.x/(2.*self.d**.5)
self.b = self.u/(2.*self.d**.5)
|
Determine the values of the secondary parameters `a` and `b`.
|
entailment
|
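Both secondary parameters only rescale primary ones by the factor
2*sqrt(d). A toy numeric check (x, u, and d stand for the primary
parameters of this iuh; the values are made up):
x, u, d = 10.0, 2.0, 4.0      # illustrative primary parameter values
a = x/(2.*d**.5)              # 10/(2*2) = 2.5
b = u/(2.*d**.5)              # 2/(2*2) = 0.5
assert (a, b) == (2.5, 0.5)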
def calc_secondary_parameters(self):
"""Determine the value of the secondary parameter `c`."""
self.c = 1./(self.k*special.gamma(self.n))
|
Determine the value of the secondary parameter `c`.
|
entailment
|
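Here `c` caches the constant normalisation factor 1/(k*gamma(n)) of a
gamma-distribution-like response. A quick check via scipy (the values
are illustrative):
from scipy import special
k, n = 2.0, 3.0                  # illustrative primary parameter values
c = 1./(k*special.gamma(n))      # gamma(3) = 2! = 2, hence c = 0.25
assert abs(c-0.25) < 1e-12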
def trim(self, lower=None, upper=None):
"""Trim values in accordance with :math:`WAeS \\leq PWMax \\cdot WATS`,
or at least in accordance with :math:`WATS \\geq 0`.
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> pwmax(2.0)
>>> states.waes = -1., 0., 1., -1., 5., 10., 20.
>>> states.wats(-1., 0., 0., 5., 5., 5., 5.)
>>> states.wats
wats(0.0, 0.0, 0.5, 5.0, 5.0, 5.0, 10.0)
"""
pwmax = self.subseqs.seqs.model.parameters.control.pwmax
waes = self.subseqs.waes
if lower is None:
lower = numpy.clip(waes/pwmax, 0., numpy.inf)
lower[numpy.isnan(lower)] = 0.0
lland_sequences.State1DSequence.trim(self, lower, upper)
|
Trim values in accordance with :math:`WAeS \\leq PWMax \\cdot WATS`,
or at least in accordance with :math:`WATS \\geq 0`.
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> pwmax(2.0)
>>> states.waes = -1., 0., 1., -1., 5., 10., 20.
>>> states.wats(-1., 0., 0., 5., 5., 5., 5.)
>>> states.wats
wats(0.0, 0.0, 0.5, 5.0, 5.0, 5.0, 10.0)
|
entailment
|
def trim(self, lower=None, upper=None):
"""Trim values in accordance with :math:`WAeS \\leq PWMax \\cdot WATS`.
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> pwmax(2.)
>>> states.wats = 0., 0., 0., 5., 5., 5., 5.
>>> states.waes(-1., 0., 1., -1., 5., 10., 20.)
>>> states.waes
waes(0.0, 0.0, 0.0, 0.0, 5.0, 10.0, 10.0)
"""
pwmax = self.subseqs.seqs.model.parameters.control.pwmax
wats = self.subseqs.wats
if upper is None:
upper = pwmax*wats
lland_sequences.State1DSequence.trim(self, lower, upper)
|
Trim values in accordance with :math:`WAeS \\leq PWMax \\cdot WATS`.
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> pwmax(2.)
>>> states.wats = 0., 0., 0., 5., 5., 5., 5.
>>> states.waes(-1., 0., 1., -1., 5., 10., 20.)
>>> states.waes
waes(0.0, 0.0, 0.0, 0.0, 5.0, 10.0, 10.0)
|
entailment
|
def trim(self, lower=None, upper=None):
"""Trim values in accordance with :math:`BoWa \\leq NFk`.
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(5)
>>> nfk(200.)
>>> states.bowa(-100.,0., 100., 200., 300.)
>>> states.bowa
bowa(0.0, 0.0, 100.0, 200.0, 200.0)
"""
if upper is None:
upper = self.subseqs.seqs.model.parameters.control.nfk
lland_sequences.State1DSequence.trim(self, lower, upper)
|
Trim values in accordance with :math:`BoWa \\leq NFk`.
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(5)
>>> nfk(200.)
>>> states.bowa(-100.,0., 100., 200., 300.)
>>> states.bowa
bowa(0.0, 0.0, 100.0, 200.0, 200.0)
|
entailment
|
def post(self, request, pk):
""" Clean the data and save opening hours in the database.
Old opening hours are purged before new ones are saved.
"""
location = self.get_object()
# open days, disabled widget data won't make it into request.POST
present_prefixes = [x.split('-')[0] for x in request.POST.keys()]
day_forms = OrderedDict()
for day_no, day_name in WEEKDAYS:
for slot_no in (1, 2):
prefix = self.form_prefix(day_no, slot_no)
# skip closed day as it would be invalid form due to no data
if prefix not in present_prefixes:
continue
day_forms[prefix] = (day_no, Slot(request.POST, prefix=prefix))
if all([day_form[1].is_valid() for pre, day_form in day_forms.items()]):
OpeningHours.objects.filter(company=location).delete()
for prefix, day_form in day_forms.items():
day, form = day_form
opens, shuts = [str_to_time(form.cleaned_data[x])
for x in ('opens', 'shuts')]
if opens != shuts:
OpeningHours(from_hour=opens, to_hour=shuts,
company=location, weekday=day).save()
return redirect(request.path_info)
|
Clean the data and save opening hours in the database.
Old opening hours are purged before new ones are saved.
|
entailment
|
def get(self, request, pk):
""" Initialize the editing form
1. Build opening_hours, a lookup dictionary to populate the form
slots: keys are day numbers, values are lists of opening
hours for that day.
2. Build days, a list of days with 2 slot forms each.
3. Build form initials for the 2 slots padding/trimming
opening_hours to end up with exactly 2 slots even if it's
just None values.
"""
location = self.get_object()
two_sets = False
closed = None
opening_hours = {}
for o in OpeningHours.objects.filter(company=location):
opening_hours.setdefault(o.weekday, []).append(o)
days = []
for day_no, day_name in WEEKDAYS:
if day_no not in opening_hours.keys():
if opening_hours:
closed = True
ini1, ini2 = [None, None]
else:
closed = False
ini = [{'opens': time_to_str(oh.from_hour),
'shuts': time_to_str(oh.to_hour)}
for oh in opening_hours[day_no]]
ini += [None] * (2 - len(ini[:2])) # pad
ini1, ini2 = ini[:2] # trim
if ini2:
two_sets = True
days.append({
'name': day_name,
'number': day_no,
'slot1': Slot(prefix=self.form_prefix(day_no, 1), initial=ini1),
'slot2': Slot(prefix=self.form_prefix(day_no, 2), initial=ini2),
'closed': closed
})
return render(request, self.template_name, {
'days': days,
'two_sets': two_sets,
'location': location,
})
|
Initialize the editing form
1. Build opening_hours, a lookup dictionary to populate the form
slots: keys are day numbers, values are lists of opening
hours for that day.
2. Build days, a list of days with 2 slot forms each.
3. Build form initials for the 2 slots padding/trimming
opening_hours to end up with exactly 2 slots even if it's
just None values.
|
entailment
|
def calc_qjoints_v1(self):
"""Apply the routing equation.
Required derived parameters:
|NmbSegments|
|C1|
|C2|
|C3|
Updated state sequence:
|QJoints|
Basic equation:
:math:`Q_{space+1,time+1} =
c1 \\cdot Q_{space,time+1} +
c2 \\cdot Q_{space,time} +
c3 \\cdot Q_{space+1,time}`
Examples:
Firstly, define a reach divided into four segments:
>>> from hydpy.models.hstream import *
>>> parameterstep('1d')
>>> derived.nmbsegments(4)
>>> states.qjoints.shape = 5
Zero damping is achieved through the following coefficients:
>>> derived.c1(0.0)
>>> derived.c2(1.0)
>>> derived.c3(0.0)
For initialization, assume a base flow of 2 m³/s:
>>> states.qjoints.old = 2.0
>>> states.qjoints.new = 2.0
Through successive assignments of different discharge values
to the upper junction one can see that these discharge values
are simply shifted from each junction to the respective lower
junction at each time step:
>>> states.qjoints[0] = 5.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(5.0, 2.0, 2.0, 2.0, 2.0)
>>> states.qjoints[0] = 8.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(8.0, 5.0, 2.0, 2.0, 2.0)
>>> states.qjoints[0] = 6.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(6.0, 8.0, 5.0, 2.0, 2.0)
With the maximum damping allowed, the values of the derived
parameters are:
>>> derived.c1(0.5)
>>> derived.c2(0.0)
>>> derived.c3(0.5)
Assuming again a base flow of 2 m³/s and the same input values
results in:
>>> states.qjoints.old = 2.0
>>> states.qjoints.new = 2.0
>>> states.qjoints[0] = 5.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(5.0, 3.5, 2.75, 2.375, 2.1875)
>>> states.qjoints[0] = 8.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(8.0, 5.75, 4.25, 3.3125, 2.75)
>>> states.qjoints[0] = 6.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(6.0, 5.875, 5.0625, 4.1875, 3.46875)
"""
der = self.parameters.derived.fastaccess
new = self.sequences.states.fastaccess_new
old = self.sequences.states.fastaccess_old
for j in range(der.nmbsegments):
new.qjoints[j+1] = (der.c1*new.qjoints[j] +
der.c2*old.qjoints[j] +
der.c3*old.qjoints[j+1])
|
Apply the routing equation.
Required derived parameters:
|NmbSegments|
|C1|
|C2|
|C3|
Updated state sequence:
|QJoints|
Basic equation:
:math:`Q_{space+1,time+1} =
c1 \\cdot Q_{space,time+1} +
c2 \\cdot Q_{space,time} +
c3 \\cdot Q_{space+1,time}`
Examples:
Firstly, define a reach divided into four segments:
>>> from hydpy.models.hstream import *
>>> parameterstep('1d')
>>> derived.nmbsegments(4)
>>> states.qjoints.shape = 5
Zero damping is achieved through the following coefficients:
>>> derived.c1(0.0)
>>> derived.c2(1.0)
>>> derived.c3(0.0)
For initialization, assume a base flow of 2 m³/s:
>>> states.qjoints.old = 2.0
>>> states.qjoints.new = 2.0
Through successive assignments of different discharge values
to the upper junction one can see that these discharge values
are simply shifted from each junction to the respective lower
junction at each time step:
>>> states.qjoints[0] = 5.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(5.0, 2.0, 2.0, 2.0, 2.0)
>>> states.qjoints[0] = 8.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(8.0, 5.0, 2.0, 2.0, 2.0)
>>> states.qjoints[0] = 6.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(6.0, 8.0, 5.0, 2.0, 2.0)
With the maximum damping allowed, the values of the derived
parameters are:
>>> derived.c1(0.5)
>>> derived.c2(0.0)
>>> derived.c3(0.5)
Assuming again a base flow of 2 m³/s and the same input values
results in:
>>> states.qjoints.old = 2.0
>>> states.qjoints.new = 2.0
>>> states.qjoints[0] = 5.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(5.0, 3.5, 2.75, 2.375, 2.1875)
>>> states.qjoints[0] = 8.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(8.0, 5.75, 4.25, 3.3125, 2.75)
>>> states.qjoints[0] = 6.0
>>> model.calc_qjoints_v1()
>>> model.new2old()
>>> states.qjoints
qjoints(6.0, 5.875, 5.0625, 4.1875, 3.46875)
|
entailment
|
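The recursion itself is a weighted three-point stencil and easy to
reproduce outside the model. A standalone numpy sketch of the first
damping step above (c1 = c3 = 0.5, c2 = 0.0, base flow 2 m³/s):
import numpy
c1, c2, c3 = 0.5, 0.0, 0.5
old = numpy.full(5, 2.0)     # joints at the previous time step
new = old.copy()
new[0] = 5.0                 # new inflow at the upper joint
for j in range(4):           # four segments, five joints
    new[j+1] = c1*new[j] + c2*old[j] + c3*old[j+1]
assert list(new) == [5.0, 3.5, 2.75, 2.375, 2.1875]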
def pick_q_v1(self):
"""Assign the actual value of the inlet sequence to the upper joint
of the subreach upstream."""
inl = self.sequences.inlets.fastaccess
new = self.sequences.states.fastaccess_new
new.qjoints[0] = 0.
for idx in range(inl.len_q):
new.qjoints[0] += inl.q[idx][0]
|
Assign the actual value of the inlet sequence to the upper joint
of the subreach upstream.
|
entailment
|
def pass_q_v1(self):
"""Assing the actual value of the lower joint of of the subreach
downstream to the outlet sequence."""
der = self.parameters.derived.fastaccess
new = self.sequences.states.fastaccess_new
out = self.sequences.outlets.fastaccess
out.q[0] += new.qjoints[der.nmbsegments]
|
Assign the actual value of the lower joint of the subreach
downstream to the outlet sequence.
|
entailment
|
def _detect_encoding(data=None):
"""Return the default system encoding. If data is passed, try
to decode the data with the default system encoding or from a short
list of encoding types to test.
Args:
data - list of lists
Returns:
enc - system encoding
"""
import locale
enc_list = ['utf-8', 'latin-1', 'iso8859-1', 'iso8859-2',
'utf-16', 'cp720']
code = locale.getpreferredencoding(False)
if data is None:
return code
if code.lower() not in enc_list:
enc_list.insert(0, code.lower())
for c in enc_list:
try:
for line in data:
line.decode(c)
except (UnicodeDecodeError, UnicodeError, AttributeError):
continue
return c
print("Encoding not detected. Please pass encoding value manually")
|
Return the default system encoding. If data is passed, try
to decode the data with the default system encoding or from a short
list of encoding types to test.
Args:
data - list of lists
Returns:
enc - system encoding
|
entailment
|
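For illustration, a byte sequence that is invalid UTF-8 but valid
Latin-1 makes the fallback order visible. The result depends on the
locale's preferred encoding, so treat this as a sketch only:
data = [b'caf\xe9']    # 0xe9 is Latin-1 'é' but invalid as UTF-8
enc = _detect_encoding(data)
# on a UTF-8 locale, 'utf-8' fails to decode and 'latin-1' is returned;
# on a Latin-1 locale, the preferred encoding wins directly
print(enc)             # e.g. 'latin-1'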
def parameterstep(timestep=None):
"""Define a parameter time step size within a parameter control file.
Argument:
* timestep(|Period|): Time step size.
Function parameterstep should usually be applied in a line
immediately behind the model import. Defining the step size of time
dependent parameters is a prerequisite to access any model specific
parameter.
Note that parameterstep implements some namespace magic by
means of the module |inspect|. This makes things a little
complicated for framework developers, but it eases the definition of
parameter control files for framework users.
"""
if timestep is not None:
parametertools.Parameter.parameterstep(timestep)
namespace = inspect.currentframe().f_back.f_locals
model = namespace.get('model')
if model is None:
model = namespace['Model']()
namespace['model'] = model
if hydpy.pub.options.usecython and 'cythonizer' in namespace:
cythonizer = namespace['cythonizer']
namespace['cythonmodule'] = cythonizer.cymodule
model.cymodel = cythonizer.cymodule.Model()
namespace['cymodel'] = model.cymodel
model.cymodel.parameters = cythonizer.cymodule.Parameters()
model.cymodel.sequences = cythonizer.cymodule.Sequences()
for numpars_name in ('NumConsts', 'NumVars'):
if hasattr(cythonizer.cymodule, numpars_name):
numpars_new = getattr(cythonizer.cymodule, numpars_name)()
numpars_old = getattr(model, numpars_name.lower())
for (name_numpar, numpar) in vars(numpars_old).items():
setattr(numpars_new, name_numpar, numpar)
setattr(model.cymodel, numpars_name.lower(), numpars_new)
for name in dir(model.cymodel):
if (not name.startswith('_')) and hasattr(model, name):
setattr(model, name, getattr(model.cymodel, name))
if 'Parameters' not in namespace:
namespace['Parameters'] = parametertools.Parameters
model.parameters = namespace['Parameters'](namespace)
if 'Sequences' not in namespace:
namespace['Sequences'] = sequencetools.Sequences
model.sequences = namespace['Sequences'](**namespace)
namespace['parameters'] = model.parameters
for pars in model.parameters:
namespace[pars.name] = pars
namespace['sequences'] = model.sequences
for seqs in model.sequences:
namespace[seqs.name] = seqs
if 'Masks' in namespace:
model.masks = namespace['Masks'](model)
namespace['masks'] = model.masks
try:
namespace.update(namespace['CONSTANTS'])
except KeyError:
pass
focus = namespace.get('focus')
for par in model.parameters.control:
try:
if (focus is None) or (par is focus):
namespace[par.name] = par
else:
namespace[par.name] = lambda *args, **kwargs: None
except AttributeError:
pass
|
Define a parameter time step size within a parameter control file.
Argument:
* timestep(|Period|): Time step size.
Function parameterstep should usually be applied in a line
immediately behind the model import. Defining the step size of time
dependent parameters is a prerequisite to access any model specific
parameter.
Note that parameterstep implements some namespace magic by
means of the module |inspect|. This makes things a little
complicated for framework developers, but it eases the definition of
parameter control files for framework users.
|
entailment
|
def reverse_model_wildcard_import():
"""Clear the local namespace from a model wildcard import.
Calling this method should remove the critical imports into the local
namespace due to the last wildcard import of a certain application model.
It is intended to secure the successive preparation of different
types of models via wildcard imports. See the following example on
how it can be applied.
>>> from hydpy import reverse_model_wildcard_import
Assume you wildcard import the first version of HydPy-L-Land (|lland_v1|):
>>> from hydpy.models.lland_v1 import *
This for example adds the collection class for handling control
parameters of `lland_v1` into the local namespace:
>>> print(ControlParameters(None).name)
control
Calling function |parameterstep| for example prepares the control
parameter object |lland_control.NHRU|:
>>> parameterstep('1d')
>>> nhru
nhru(?)
Calling function |reverse_model_wildcard_import| removes both
objects (and many more, but not all) from the local namespace:
>>> reverse_model_wildcard_import()
>>> ControlParameters
Traceback (most recent call last):
...
NameError: name 'ControlParameters' is not defined
>>> nhru
Traceback (most recent call last):
...
NameError: name 'nhru' is not defined
"""
namespace = inspect.currentframe().f_back.f_locals
model = namespace.get('model')
if model is not None:
for subpars in model.parameters:
for par in subpars:
namespace.pop(par.name, None)
namespace.pop(objecttools.classname(par), None)
namespace.pop(subpars.name, None)
namespace.pop(objecttools.classname(subpars), None)
for subseqs in model.sequences:
for seq in subseqs:
namespace.pop(seq.name, None)
namespace.pop(objecttools.classname(seq), None)
namespace.pop(subseqs.name, None)
namespace.pop(objecttools.classname(subseqs), None)
for name in ('parameters', 'sequences', 'masks', 'model',
'Parameters', 'Sequences', 'Masks', 'Model',
'cythonizer', 'cymodel', 'cythonmodule'):
namespace.pop(name, None)
for key in list(namespace.keys()):
try:
if namespace[key].__module__ == model.__module__:
del namespace[key]
except AttributeError:
pass
|
Clear the local namespace from a model wildcard import.
Calling this method should remove the critical imports into the local
namespace due to the last wildcard import of a certain application model.
It is intended to secure the successive preparation of different
types of models via wildcard imports. See the following example on
how it can be applied.
>>> from hydpy import reverse_model_wildcard_import
Assume you wildcard import the first version of HydPy-L-Land (|lland_v1|):
>>> from hydpy.models.lland_v1 import *
This for example adds the collection class for handling control
parameters of `lland_v1` into the local namespace:
>>> print(ControlParameters(None).name)
control
Calling function |parameterstep| for example prepares the control
parameter object |lland_control.NHRU|:
>>> parameterstep('1d')
>>> nhru
nhru(?)
Calling function |reverse_model_wildcard_import| removes both
objects (and many more, but not all) from the local namespace:
>>> reverse_model_wildcard_import()
>>> ControlParameters
Traceback (most recent call last):
...
NameError: name 'ControlParameters' is not defined
>>> nhru
Traceback (most recent call last):
...
NameError: name 'nhru' is not defined
|
entailment
|
def prepare_model(module: Union[types.ModuleType, str],
timestep: PeriodABC.ConstrArg = None):
"""Prepare and return the model of the given module.
In usual HydPy projects, each hydrological model instance is prepared
in an individual control file. This allows for "polluting" the
namespace with different model attributes. There is no danger of
name conflicts, as long as no other (wildcard) imports are performed.
However, there are situations when different models are to be loaded
into the same namespace. Then it is advisable to use function
|prepare_model|, which just returns a reference to the model
and nothing else.
See the documentation of |dam_v001| on how to apply function
|prepare_model| properly.
"""
if timestep is not None:
parametertools.Parameter.parameterstep(timetools.Period(timestep))
try:
model = module.Model()
except AttributeError:
module = importlib.import_module(f'hydpy.models.{module}')
model = module.Model()
if hydpy.pub.options.usecython and hasattr(module, 'cythonizer'):
cymodule = module.cythonizer.cymodule
cymodel = cymodule.Model()
cymodel.parameters = cymodule.Parameters()
cymodel.sequences = cymodule.Sequences()
model.cymodel = cymodel
for numpars_name in ('NumConsts', 'NumVars'):
if hasattr(cymodule, numpars_name):
numpars_new = getattr(cymodule, numpars_name)()
numpars_old = getattr(model, numpars_name.lower())
for (name_numpar, numpar) in vars(numpars_old).items():
setattr(numpars_new, name_numpar, numpar)
setattr(cymodel, numpars_name.lower(), numpars_new)
for name in dir(cymodel):
if (not name.startswith('_')) and hasattr(model, name):
setattr(model, name, getattr(cymodel, name))
dict_ = {'cythonmodule': cymodule,
'cymodel': cymodel}
else:
dict_ = {}
dict_.update(vars(module))
dict_['model'] = model
if hasattr(module, 'Parameters'):
model.parameters = module.Parameters(dict_)
else:
model.parameters = parametertools.Parameters(dict_)
if hasattr(module, 'Sequences'):
model.sequences = module.Sequences(**dict_)
else:
model.sequences = sequencetools.Sequences(**dict_)
if hasattr(module, 'Masks'):
model.masks = module.Masks(model)
return model
|
Prepare and return the model of the given module.
In usual HydPy projects, each hydrological model instance is prepared
in an individual control file. This allows for "polluting" the
namespace with different model attributes. There is no danger of
name conflicts, as long as no other (wildcard) imports are performed.
However, there are situations when different models are to be loaded
into the same namespace. Then it is advisable to use function
|prepare_model|, which just returns a reference to the model
and nothing else.
See the documentation of |dam_v001| on how to apply function
|prepare_model| properly.
|
entailment
|
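A minimal usage sketch (assuming the application model `lland_v1` is
installed; passing the module name as a string lets |prepare_model|
perform the import itself):
from hydpy import prepare_model
model = prepare_model('lland_v1', timestep='1d')
model.parameters.control.nhru(2)    # interact via attribute access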
def simulationstep(timestep):
""" Define a simulation time step size for testing purposes within a
parameter control file.
Using |simulationstep| only affects the values of time dependent
parameters, when `pub.timegrids.stepsize` is not defined. It thus has
no influence on usual hydpy simulations at all. Use it just to check
your parameter control files. Write it in a line immediately behind
the one calling |parameterstep|.
To clarify its purpose, executing it from within a control file
raises a warning:
>>> from hydpy import pub
>>> with pub.options.warnsimulationstep(True):
... from hydpy.models.hland_v1 import *
... parameterstep('1d')
... simulationstep('1h')
Traceback (most recent call last):
...
UserWarning: Note that the applied function `simulationstep` is intended \
for testing purposes only. When doing a HydPy simulation, parameter values \
are initialised based on the actual simulation time step as defined under \
`pub.timegrids.stepsize` and the value given to `simulationstep` is ignored.
>>> k4.simulationstep
Period('1h')
"""
if hydpy.pub.options.warnsimulationstep:
warnings.warn(
'Note that the applied function `simulationstep` is intended for '
'testing purposes only. When doing a HydPy simulation, parameter '
'values are initialised based on the actual simulation time step '
'as defined under `pub.timegrids.stepsize` and the value given '
'to `simulationstep` is ignored.')
parametertools.Parameter.simulationstep(timestep)
|
Define a simulation time step size for testing purposes within a
parameter control file.
Using |simulationstep| only affects the values of time dependent
parameters, when `pub.timegrids.stepsize` is not defined. It thus has
no influence on usual hydpy simulations at all. Use it just to check
your parameter control files. Write it in a line immediately behind
the one calling |parameterstep|.
To clarify its purpose, executing it from within a control file
raises a warning:
>>> from hydpy import pub
>>> with pub.options.warnsimulationstep(True):
... from hydpy.models.hland_v1 import *
... parameterstep('1d')
... simulationstep('1h')
Traceback (most recent call last):
...
UserWarning: Note that the applied function `simulationstep` is intended \
for testing purposes only. When doing a HydPy simulation, parameter values \
are initialised based on the actual simulation time step as defined under \
`pub.timegrids.stepsize` and the value given to `simulationstep` is ignored.
>>> k4.simulationstep
Period('1h')
|
entailment
|
def controlcheck(controldir='default', projectdir=None, controlfile=None):
"""Define the corresponding control file within a condition file.
Function |controlcheck| serves similar purposes as function
|parameterstep|. It is the reason why one can interactively
access the state and/or the log sequences within condition files
such as `land_dill.py` of the example project `LahnH`. It is called
`controlcheck` due to its implicit feature of checking, upon the execution
of the condition file, whether the specifications within both files
disagree. The following test, where we write a number of soil moisture
values (|hland_states.SM|) into condition file `land_dill.py` which
does not agree with the number of hydrological response units
(|hland_control.NmbZones|) defined in control file `land_dill.py`,
verifies that this actually works within a new Python process:
>>> from hydpy.core.examples import prepare_full_example_1
>>> prepare_full_example_1()
>>> import os, subprocess
>>> from hydpy import TestIO
>>> cwd = os.path.join('LahnH', 'conditions', 'init_1996_01_01')
>>> with TestIO():
... os.chdir(cwd)
... with open('land_dill.py') as file_:
... lines = file_.readlines()
... lines[10:12] = 'sm(185.13164, 181.18755)', ''
... with open('land_dill.py', 'w') as file_:
... _ = file_.write('\\n'.join(lines))
... result = subprocess.run(
... 'python land_dill.py',
... stdout=subprocess.PIPE,
... stderr=subprocess.PIPE,
... universal_newlines=True,
... shell=True)
>>> print(result.stderr.split('ValueError:')[-1].strip())
While trying to set the value(s) of variable `sm`, the following error \
occurred: While trying to convert the value(s) `(185.13164, 181.18755)` to \
a numpy ndarray with shape `(12,)` and type `float`, the following error \
occurred: could not broadcast input array from shape (2) into shape (12)
With a little trick, we can fake to be "inside" condition file
`land_dill.py`. Calling |controlcheck| then e.g. prepares the shape
of sequence |hland_states.Ic| as specified by the value of parameter
|hland_control.NmbZones| given in the corresponding control file:
>>> from hydpy.models.hland_v1 import *
>>> __file__ = 'land_dill.py' # ToDo: undo?
>>> with TestIO():
... os.chdir(cwd)
... controlcheck()
>>> ic.shape
(12,)
In the above example, the standard names for the project directory
(the one containing the executed condition file) and the control
directory (`default`) are used. The following example shows how
to change them:
>>> del model
>>> with TestIO(): # doctest: +ELLIPSIS
... os.chdir(cwd)
... controlcheck(projectdir='somewhere', controldir='nowhere')
Traceback (most recent call last):
...
FileNotFoundError: While trying to load the control file \
`...hydpy...tests...iotesting...control...nowhere...land_dill.py`, the \
following error occurred: [Errno 2] No such file or directory: '...land_dill.py'
Note that the functionalities of function |controlcheck| are disabled
when there is already a `model` variable in the namespace, which is
the case when a condition file is executed within the context of a
complete HydPy project.
"""
namespace = inspect.currentframe().f_back.f_locals
model = namespace.get('model')
if model is None:
if not controlfile:
controlfile = os.path.split(namespace['__file__'])[-1]
if projectdir is None:
projectdir = (
os.path.split(
os.path.split(
os.path.split(os.getcwd())[0])[0])[-1])
dirpath = os.path.abspath(os.path.join(
'..', '..', '..', projectdir, 'control', controldir))
class CM(filetools.ControlManager):
currentpath = dirpath
model = CM().load_file(filename=controlfile)['model']
model.parameters.update()
namespace['model'] = model
for name in ('states', 'logs'):
subseqs = getattr(model.sequences, name, None)
if subseqs is not None:
for seq in subseqs:
namespace[seq.name] = seq
|
Define the corresponding control file within a condition file.
Function |controlcheck| serves similar purposes as function
|parameterstep|. It is the reason why one can interactively
access the state and/or the log sequences within condition files
such as `land_dill.py` of the example project `LahnH`. It is called
`controlcheck` due to its implicit feature of checking, upon the execution
of the condition file, whether the specifications within both files
disagree. The following test, where we write a number of soil moisture
values (|hland_states.SM|) into condition file `land_dill.py` which
does not agree with the number of hydrological response units
(|hland_control.NmbZones|) defined in control file `land_dill.py`,
verifies that this actually works within a new Python process:
>>> from hydpy.core.examples import prepare_full_example_1
>>> prepare_full_example_1()
>>> import os, subprocess
>>> from hydpy import TestIO
>>> cwd = os.path.join('LahnH', 'conditions', 'init_1996_01_01')
>>> with TestIO():
... os.chdir(cwd)
... with open('land_dill.py') as file_:
... lines = file_.readlines()
... lines[10:12] = 'sm(185.13164, 181.18755)', ''
... with open('land_dill.py', 'w') as file_:
... _ = file_.write('\\n'.join(lines))
... result = subprocess.run(
... 'python land_dill.py',
... stdout=subprocess.PIPE,
... stderr=subprocess.PIPE,
... universal_newlines=True,
... shell=True)
>>> print(result.stderr.split('ValueError:')[-1].strip())
While trying to set the value(s) of variable `sm`, the following error \
occurred: While trying to convert the value(s) `(185.13164, 181.18755)` to \
a numpy ndarray with shape `(12,)` and type `float`, the following error \
occurred: could not broadcast input array from shape (2) into shape (12)
With a little trick, we can fake to be "inside" condition file
`land_dill.py`. Calling |controlcheck| then e.g. prepares the shape
of sequence |hland_states.Ic| as specified by the value of parameter
|hland_control.NmbZones| given in the corresponding control file:
>>> from hydpy.models.hland_v1 import *
>>> __file__ = 'land_dill.py' # ToDo: undo?
>>> with TestIO():
... os.chdir(cwd)
... controlcheck()
>>> ic.shape
(12,)
In the above example, the standard names for the project directory
(the one containing the executed condition file) and the control
directory (`default`) are used. The following example shows how
to change them:
>>> del model
>>> with TestIO(): # doctest: +ELLIPSIS
... os.chdir(cwd)
... controlcheck(projectdir='somewhere', controldir='nowhere')
Traceback (most recent call last):
...
FileNotFoundError: While trying to load the control file \
`...hydpy...tests...iotesting...control...nowhere...land_dill.py`, the \
following error occurred: [Errno 2] No such file or directory: '...land_dill.py'
Note that the functionalities of function |controlcheck| are disabled
when there is already a `model` variable in the namespace, which is
the case when a condition file is executed within the context of a
complete HydPy project.
|
entailment
|
def update(self):
"""Update |RelSoilArea| based on |Area|, |ZoneArea|, and |ZoneType|.
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> nmbzones(4)
>>> zonetype(FIELD, FOREST, GLACIER, ILAKE)
>>> area(100.0)
>>> zonearea(10.0, 20.0, 30.0, 40.0)
>>> derived.relsoilarea.update()
>>> derived.relsoilarea
relsoilarea(0.3)
"""
con = self.subpars.pars.control
temp = con.zonearea.values.copy()
temp[con.zonetype.values == GLACIER] = 0.
temp[con.zonetype.values == ILAKE] = 0.
self(numpy.sum(temp)/con.area)
|
Update |RelSoilArea| based on |Area|, |ZoneArea|, and |ZoneType|.
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> nmbzones(4)
>>> zonetype(FIELD, FOREST, GLACIER, ILAKE)
>>> area(100.0)
>>> zonearea(10.0, 20.0, 30.0, 40.0)
>>> derived.relsoilarea.update()
>>> derived.relsoilarea
relsoilarea(0.3)
|
entailment
|
def update(self):
"""Update |TTM| based on :math:`TTM = TT+DTTM`.
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> nmbzones(1)
>>> zonetype(FIELD)
>>> tt(1.0)
>>> dttm(-2.0)
>>> derived.ttm.update()
>>> derived.ttm
ttm(-1.0)
"""
con = self.subpars.pars.control
self(con.tt+con.dttm)
|
Update |TTM| based on :math:`TTM = TT+DTTM`.
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> nmbzones(1)
>>> zonetype(FIELD)
>>> tt(1.0)
>>> dttm(-2.0)
>>> derived.ttm.update()
>>> derived.ttm
ttm(-1.0)
|
entailment
|
def update(self):
"""Update |UH| based on |MaxBaz|.
.. note::
This method also updates the shape of log sequence |QUH|.
|MaxBaz| determines the end point of the triangle. A value of
|MaxBaz| being not larger than the simulation step size is
identical with applying no unit hydrograph at all:
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> maxbaz(0.0)
>>> derived.uh.update()
>>> logs.quh.shape
(1,)
>>> derived.uh
uh(1.0)
Note that, due to the difference between the parameter and the
simulation step size in the given example, the largest assignment
resulting in an `inactive` unit hydrograph is 1/2:
>>> maxbaz(0.5)
>>> derived.uh.update()
>>> logs.quh.shape
(1,)
>>> derived.uh
uh(1.0)
When |MaxBaz| is in accordance with two simulation steps, both
unit hydrograph ordinates must be 1/2 due to symmetry of the
triangle:
>>> maxbaz(1.0)
>>> derived.uh.update()
>>> logs.quh.shape
(2,)
>>> derived.uh
uh(0.5)
>>> derived.uh.values
array([ 0.5, 0.5])
A |MaxBaz| value in accordance with three simulation steps results
in the ordinate values 2/9, 5/9, and 2/9:
>>> maxbaz(1.5)
>>> derived.uh.update()
>>> logs.quh.shape
(3,)
>>> derived.uh
uh(0.222222, 0.555556, 0.222222)
And a final example, where the end of the triangle lies within
a simulation step, resulting in the fractions 8/49, 23/49, 16/49,
and 2/49:
>>> maxbaz(1.75)
>>> derived.uh.update()
>>> logs.quh.shape
(4,)
>>> derived.uh
uh(0.163265, 0.469388, 0.326531, 0.040816)
"""
maxbaz = self.subpars.pars.control.maxbaz.value
quh = self.subpars.pars.model.sequences.logs.quh
# Determine UH parameters...
if maxbaz <= 1.:
# ...when MaxBaz is smaller than or equal to the simulation time step.
self.shape = 1
self(1.)
quh.shape = 1
else:
# ...when MaxBaz is greater than the simulation time step.
# Define some shortcuts for the following calculations.
full = maxbaz
# Now comes a terrible trick due to rounding problems coming from
# the conversion of the SMHI parameter set to the HydPy
# parameter set. Time to get rid of it...
if (full % 1.) < 1e-4:
full //= 1.
full_f = int(numpy.floor(full))
full_c = int(numpy.ceil(full))
half = full/2.
half_f = int(numpy.floor(half))
half_c = int(numpy.ceil(half))
full_2 = full**2.
# Calculate the triangle ordinate(s)...
self.shape = full_c
uh = self.values
quh.shape = full_c
# ...of the rising limb.
points = numpy.arange(1, half_f+1)
uh[:half_f] = (2.*points-1.)/(2.*full_2)
# ...around the peak (if it exists).
if numpy.mod(half, 1.) != 0.:
uh[half_f] = (
(half_c-half)/full +
(2*half**2.-half_f**2.-half_c**2.)/(2.*full_2))
# ...of the falling limb (possibly except the last one).
points = numpy.arange(half_c+1., full_f+1.)
uh[half_c:full_f] = 1./full-(2.*points-1.)/(2.*full_2)
# ...at the end (if not already done).
if numpy.mod(full, 1.) != 0.:
uh[full_f] = (
(full-full_f)/full-(full_2-full_f**2.)/(2.*full_2))
# Normalize the ordinates.
self(uh/numpy.sum(uh))
|
Update |UH| based on |MaxBaz|.
.. note::
This method also updates the shape of log sequence |QUH|.
|MaxBaz| determines the end point of the triangle. A value of
|MaxBaz| being not larger than the simulation step size is
identical with applying no unit hydrograph at all:
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> maxbaz(0.0)
>>> derived.uh.update()
>>> logs.quh.shape
(1,)
>>> derived.uh
uh(1.0)
Note that, due to the difference between the parameter and the
simulation step size in the given example, the largest assignment
resulting in an `inactive` unit hydrograph is 1/2:
>>> maxbaz(0.5)
>>> derived.uh.update()
>>> logs.quh.shape
(1,)
>>> derived.uh
uh(1.0)
When |MaxBaz| is in accordance with two simulation steps, both
unit hydrograph ordinates must be 1/2 due to symmetry of the
triangle:
>>> maxbaz(1.0)
>>> derived.uh.update()
>>> logs.quh.shape
(2,)
>>> derived.uh
uh(0.5)
>>> derived.uh.values
array([ 0.5, 0.5])
A |MaxBaz| value in accordance with three simulation steps results
in the ordinate values 2/9, 5/9, and 2/9:
>>> maxbaz(1.5)
>>> derived.uh.update()
>>> logs.quh.shape
(3,)
>>> derived.uh
uh(0.222222, 0.555556, 0.222222)
And a final example, where the end of the triangle lies within
a simulation step, resulting in the fractions 8/49, 23/49, 16/49,
and 2/49:
>>> maxbaz(1.75)
>>> derived.uh.update()
>>> logs.quh.shape
(4,)
>>> derived.uh
uh(0.163265, 0.469388, 0.326531, 0.040816)
|
entailment
|
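Each ordinate above is just the area of one simulation-step slice of a
unit triangle. The `maxbaz(1.75)` example (3.5 simulation steps, due to
the 12 h step size) can be cross-checked analytically via the triangle's
cumulative area function, as sketched here:
import numpy
full = 3.5    # MaxBaz measured in simulation steps
def cdf(t):
    """Cumulative area of the unit triangle over the base [0, full]."""
    t = min(max(t, 0.), full)
    if t <= full/2.:
        return 2.*t**2/full**2
    return 1.-2.*(full-t)**2/full**2
uh = [cdf(i+1.)-cdf(i) for i in range(int(numpy.ceil(full)))]
# exact fractions 8/49, 23/49, 16/49, and 2/49:
print(numpy.round(uh, 6))    # 0.163265, 0.469388, 0.326531, 0.040816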
def update(self):
"""Update |QFactor| based on |Area| and the current simulation
step size.
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> area(50.0)
>>> derived.qfactor.update()
>>> derived.qfactor
qfactor(1.157407)
"""
self(self.subpars.pars.control.area*1000. /
self.subpars.qfactor.simulationstep.seconds)
|
Update |QFactor| based on |Area| and the current simulation
step size.
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> area(50.0)
>>> derived.qfactor.update()
>>> derived.qfactor
qfactor(1.157407)
|
entailment
|
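The conversion logic: a runoff height of 1 mm per time step over an
area of 1 km² equals 1000 m³ per step; dividing by the step length in
seconds yields m³/s. The doctest value redone by hand:
area = 50.0                    # km²
seconds = 12.*60.*60.          # 12 h simulation step
qfactor = area*1000./seconds   # 50000/43200 ≈ 1.157407
assert abs(qfactor-1.157407) < 1e-6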
def nmb_neurons(self) -> Tuple[int, ...]:
"""Number of neurons of the hidden layers.
>>> from hydpy import ANN
>>> ann = ANN(None)
>>> ann(nmb_inputs=2, nmb_neurons=(2, 1), nmb_outputs=3)
>>> ann.nmb_neurons
(2, 1)
>>> ann.nmb_neurons = (3,)
>>> ann.nmb_neurons
(3,)
>>> del ann.nmb_neurons
>>> ann.nmb_neurons
Traceback (most recent call last):
...
hydpy.core.exceptiontools.AttributeNotReady: Attribute `nmb_neurons` \
of object `ann` has not been prepared so far.
"""
return tuple(numpy.asarray(self._cann.nmb_neurons))
|
Number of neurons of the hidden layers.
>>> from hydpy import ANN
>>> ann = ANN(None)
>>> ann(nmb_inputs=2, nmb_neurons=(2, 1), nmb_outputs=3)
>>> ann.nmb_neurons
(2, 1)
>>> ann.nmb_neurons = (3,)
>>> ann.nmb_neurons
(3,)
>>> del ann.nmb_neurons
>>> ann.nmb_neurons
Traceback (most recent call last):
...
hydpy.core.exceptiontools.AttributeNotReady: Attribute `nmb_neurons` \
of object `ann` has not been prepared so far.
|
entailment
|
def shape_weights_hidden(self) -> Tuple[int, int, int]:
"""Shape of the array containing the activation of the hidden neurons.
The first integer value is the number of connections between the
hidden layers, the second integer value is the maximum number of
neurons of all hidden layers feeding information into another
hidden layer (all except the last one), and the third integer
value is the maximum number of neurons of all hidden layers
receiving information from another hidden layer (all except the
first one):
>>> from hydpy import ANN
>>> ann = ANN(None)
>>> ann(nmb_inputs=6, nmb_neurons=(4, 3, 2), nmb_outputs=6)
>>> ann.shape_weights_hidden
(2, 4, 3)
>>> ann(nmb_inputs=6, nmb_neurons=(4,), nmb_outputs=6)
>>> ann.shape_weights_hidden
(0, 0, 0)
"""
if self.nmb_layers > 1:
nmb_neurons = self.nmb_neurons
return (self.nmb_layers-1,
max(nmb_neurons[:-1]),
max(nmb_neurons[1:]))
return 0, 0, 0
|
Shape of the array containing the activation of the hidden neurons.
The first integer value is the number of connections between the
hidden layers, the second integer value is the maximum number of
neurons of all hidden layers feeding information into another
hidden layer (all except the last one), and the third integer
value is the maximum number of neurons of all hidden layers
receiving information from another hidden layer (all except the
first one):
>>> from hydpy import ANN
>>> ann = ANN(None)
>>> ann(nmb_inputs=6, nmb_neurons=(4, 3, 2), nmb_outputs=6)
>>> ann.shape_weights_hidden
(2, 4, 3)
>>> ann(nmb_inputs=6, nmb_neurons=(4,), nmb_outputs=6)
>>> ann.shape_weights_hidden
(0, 0, 0)
|
entailment
|
def nmb_weights_hidden(self) -> int:
"""Number of hidden weights.
>>> from hydpy import ANN
>>> ann = ANN(None)
>>> ann(nmb_inputs=2, nmb_neurons=(4, 3, 2), nmb_outputs=3)
>>> ann.nmb_weights_hidden
18
"""
nmb = 0
for idx_layer in range(self.nmb_layers-1):
nmb += self.nmb_neurons[idx_layer] * self.nmb_neurons[idx_layer+1]
return nmb
|
Number of hidden weights.
>>> from hydpy import ANN
>>> ann = ANN(None)
>>> ann(nmb_inputs=2, nmb_neurons=(4, 3, 2), nmb_outputs=3)
>>> ann.nmb_weights_hidden
18
|
entailment
|
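The count equals the sum of products of successive layer sizes, as a
one-liner mirroring the doctest above:
nmb_neurons = (4, 3, 2)
nmb = sum(a*b for a, b in zip(nmb_neurons[:-1], nmb_neurons[1:]))
assert nmb == 4*3 + 3*2 == 18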
def verify(self) -> None:
"""Raise a |RuntimeError| if the network's shape is not defined
completely.
>>> from hydpy import ANN
>>> ANN(None).verify()
Traceback (most recent call last):
...
RuntimeError: The shape of the artificial neural network \
parameter `ann` of element `?` has not been defined so far.
"""
if not self.__protectedproperties.allready(self):
raise RuntimeError(
'The shape of the artificial neural network '
'parameter %s has not been defined so far.'
% objecttools.elementphrase(self))
|
Raise a |RuntimeError| if the network's shape is not defined
completely.
>>> from hydpy import ANN
>>> ANN(None).verify()
Traceback (most recent call last):
...
RuntimeError: The shape of the artificial neural network \
parameter `ann` of element `?` has not been defined so far.
|
entailment
|
def assignrepr(self, prefix) -> str:
"""Return a string representation of the actual |anntools.ANN| object
that is prefixed with the given string."""
prefix = '%s%s(' % (prefix, self.name)
blanks = len(prefix)*' '
lines = [
objecttools.assignrepr_value(
self.nmb_inputs, '%snmb_inputs=' % prefix)+',',
objecttools.assignrepr_tuple(
self.nmb_neurons, '%snmb_neurons=' % blanks)+',',
objecttools.assignrepr_value(
self.nmb_outputs, '%snmb_outputs=' % blanks)+',',
objecttools.assignrepr_list2(
self.weights_input, '%sweights_input=' % blanks)+',']
if self.nmb_layers > 1:
lines.append(objecttools.assignrepr_list3(
self.weights_hidden, '%sweights_hidden=' % blanks)+',')
lines.append(objecttools.assignrepr_list2(
self.weights_output, '%sweights_output=' % blanks)+',')
lines.append(objecttools.assignrepr_list2(
self.intercepts_hidden, '%sintercepts_hidden=' % blanks)+',')
lines.append(objecttools.assignrepr_list(
self.intercepts_output, '%sintercepts_output=' % blanks)+')')
return '\n'.join(lines)
|
Return a string representation of the actual |anntools.ANN| object
that is prefixed with the given string.
|
entailment
|
def plot(self, xmin, xmax, idx_input=0, idx_output=0, points=100,
**kwargs) -> None:
"""Plot the relationship between a certain input (`idx_input`) and a
certain output (`idx_output`) variable described by the actual
|anntools.ANN| object.
Define the lower and the upper bound of the x axis via arguments
`xmin` and `xmax`. The number of plotting points can be modified
by argument `points`. Additional `matplotlib` plotting arguments
can be passed as keyword arguments.
"""
xs_ = numpy.linspace(xmin, xmax, points)
ys_ = numpy.zeros(xs_.shape)
for idx, x__ in enumerate(xs_):
self.inputs[idx_input] = x__
self.process_actual_input()
ys_[idx] = self.outputs[idx_output]
pyplot.plot(xs_, ys_, **kwargs)
|
Plot the relationship between a certain input (`idx_input`) and a
certain output (`idx_output`) variable described by the actual
|anntools.ANN| object.
Define the lower and the upper bound of the x axis via arguments
`xmin` and `xmax`. The number of plotting points can be modified
by argument `points`. Additional `matplotlib` plotting arguments
can be passed as keyword arguments.
|
entailment
|
def refresh(self) -> None:
"""Prepare the actual |anntools.SeasonalANN| object for calculations.
Despite all automated refreshes explained in the general
documentation on class |anntools.SeasonalANN|, it is still possible
to destroy the inner consistency of a |anntools.SeasonalANN| instance,
as it stores its |anntools.ANN| objects by reference. This is shown
by the following example:
>>> from hydpy import SeasonalANN, ann
>>> seasonalann = SeasonalANN(None)
>>> seasonalann.simulationstep = '1d'
>>> jan = ann(nmb_inputs=1, nmb_neurons=(1,), nmb_outputs=1,
... weights_input=0.0, weights_output=0.0,
... intercepts_hidden=0.0, intercepts_output=1.0)
>>> seasonalann(_1_1_12=jan)
>>> jan.nmb_inputs, jan.nmb_outputs = 2, 3
>>> jan.nmb_inputs, jan.nmb_outputs
(2, 3)
>>> seasonalann.nmb_inputs, seasonalann.nmb_outputs
(1, 1)
Due to the C level implementation of the mathematical core of
both |anntools.ANN| and |anntools.SeasonalANN| in module |annutils|,
such an inconsistency might result in a program crash without any
informative error message. Whenever you are afraid some
inconsistency might have crept in, and you want to repair it,
call method |anntools.SeasonalANN.refresh| explicitly:
>>> seasonalann.refresh()
>>> jan.nmb_inputs, jan.nmb_outputs
(2, 3)
>>> seasonalann.nmb_inputs, seasonalann.nmb_outputs
(2, 3)
"""
# pylint: disable=unsupported-assignment-operation
if self._do_refresh:
if self.anns:
self.__sann = annutils.SeasonalANN(self.anns)
setattr(self.fastaccess, self.name, self._sann)
self._set_shape((None, self._sann.nmb_anns))
if self._sann.nmb_anns > 1:
self._interp()
else:
self._sann.ratios[:, 0] = 1.
self.verify()
else:
self.__sann = None
|
Prepare the actual |anntools.SeasonalANN| object for calculations.
Despite all automated refreshes explained in the general
documentation on class |anntools.SeasonalANN|, it is still possible
to destroy the inner consistency of a |anntools.SeasonalANN| instance,
as it stores its |anntools.ANN| objects by reference. This is shown
by the following example:
>>> from hydpy import SeasonalANN, ann
>>> seasonalann = SeasonalANN(None)
>>> seasonalann.simulationstep = '1d'
>>> jan = ann(nmb_inputs=1, nmb_neurons=(1,), nmb_outputs=1,
... weights_input=0.0, weights_output=0.0,
... intercepts_hidden=0.0, intercepts_output=1.0)
>>> seasonalann(_1_1_12=jan)
>>> jan.nmb_inputs, jan.nmb_outputs = 2, 3
>>> jan.nmb_inputs, jan.nmb_outputs
(2, 3)
>>> seasonalann.nmb_inputs, seasonalann.nmb_outputs
(1, 1)
Due to the C level implementation of the mathematical core of
both |anntools.ANN| and |anntools.SeasonalANN| in module |annutils|,
such an inconsistency might result in a program crash without any
informative error message. Whenever you are afraid some
inconsistency might have crept in, and you want to repair it,
call method |anntools.SeasonalANN.refresh| explicitly:
>>> seasonalann.refresh()
>>> jan.nmb_inputs, jan.nmb_outputs
(2, 3)
>>> seasonalann.nmb_inputs, seasonalann.nmb_outputs
(2, 3)
|
entailment
|
def verify(self) -> None:
"""Raise a |RuntimeError| and removes all handled neural networks,
if the they are defined inconsistently.
Dispite all automated safety checks explained in the general
documentation on class |anntools.SeasonalANN|, it is still possible
to destroy the inner consistency of a |anntools.SeasonalANN| instance,
as it stores its |anntools.ANN| objects by reference. This is shown
by the following example:
>>> from hydpy import SeasonalANN, ann
>>> seasonalann = SeasonalANN(None)
>>> seasonalann.simulationstep = '1d'
>>> jan = ann(nmb_inputs=1, nmb_neurons=(1,), nmb_outputs=1,
... weights_input=0.0, weights_output=0.0,
... intercepts_hidden=0.0, intercepts_output=1.0)
>>> seasonalann(_1_1_12=jan)
>>> jan.nmb_inputs, jan.nmb_outputs = 2, 3
>>> jan.nmb_inputs, jan.nmb_outputs
(2, 3)
>>> seasonalann.nmb_inputs, seasonalann.nmb_outputs
(1, 1)
Due to the C level implementation of the mathematical core of both
|anntools.ANN| and |anntools.SeasonalANN| in module |annutils|,
such an inconsistency might result in a program crash without any
informative error message. Whenever you are afraid some
inconsistency might have crept in, and you want to find out if this
is actually the case, call method |anntools.SeasonalANN.verify|
explicitly:
>>> seasonalann.verify()
Traceback (most recent call last):
...
RuntimeError: The number of input and output values of all neural \
networks contained by a seasonal neural network collection must be \
identical and be known by the containing object. But the seasonal \
neural network collection `seasonalann` of element `?` assumes `1` input \
and `1` output values, while the network corresponding to the time of \
year `toy_1_1_12_0_0` requires `2` input and `3` output values.
>>> seasonalann
seasonalann()
>>> seasonalann.verify()
Traceback (most recent call last):
...
RuntimeError: Seasonal artificial neural network collections need \
to handle at least one "normal" single neural network, but for the seasonal \
neural network `seasonalann` of element `?` none has been defined so far.
"""
if not self.anns:
self._toy2ann.clear()
raise RuntimeError(
'Seasonal artificial neural network collections need '
'to handle at least one "normal" single neural network, '
'but for the seasonal neural network `%s` of element '
'`%s` none has been defined so far.'
% (self.name, objecttools.devicename(self)))
for toy, ann_ in self:
ann_.verify()
if ((self.nmb_inputs != ann_.nmb_inputs) or
(self.nmb_outputs != ann_.nmb_outputs)):
self._toy2ann.clear()
raise RuntimeError(
'The number of input and output values of all neural '
'networks contained by a seasonal neural network '
'collection must be identical and be known by the '
'containing object. But the seasonal neural '
'network collection `%s` of element `%s` assumes '
'`%d` input and `%d` output values, while the network '
'corresponding to the time of year `%s` requires '
'`%d` input and `%d` output values.'
% (self.name, objecttools.devicename(self),
self.nmb_inputs, self.nmb_outputs,
toy,
ann_.nmb_inputs, ann_.nmb_outputs))
|
Raise a |RuntimeError| and remove all handled neural networks
if they are defined inconsistently.
Despite all the automated safety checks explained in the general
documentation on class |anntools.SeasonalANN|, it is still possible
to destroy the inner consistency of a |anntools.SeasonalANN| instance,
as it stores its |anntools.ANN| objects by reference. This is shown
by the following example:
>>> from hydpy import SeasonalANN, ann
>>> seasonalann = SeasonalANN(None)
>>> seasonalann.simulationstep = '1d'
>>> jan = ann(nmb_inputs=1, nmb_neurons=(1,), nmb_outputs=1,
... weights_input=0.0, weights_output=0.0,
... intercepts_hidden=0.0, intercepts_output=1.0)
>>> seasonalann(_1_1_12=jan)
>>> jan.nmb_inputs, jan.nmb_outputs = 2, 3
>>> jan.nmb_inputs, jan.nmb_outputs
(2, 3)
>>> seasonalann.nmb_inputs, seasonalann.nmb_outputs
(1, 1)
Due to the C level implementation of the mathematical core of both
|anntools.ANN| and |anntools.SeasonalANN| in module |annutils|,
such an inconsistency might result in a program crash without any
informative error message. Whenever you are afraid some
inconsistency might have crept in, and you want to find out if this
is actually the case, call method |anntools.SeasonalANN.verify|
explicitly:
>>> seasonalann.verify()
Traceback (most recent call last):
...
RuntimeError: The number of input and output values of all neural \
networks contained by a seasonal neural network collection must be \
identical and be known by the containing object. But the seasonal \
neural network collection `seasonalann` of element `?` assumes `1` input \
and `1` output values, while the network corresponding to the time of \
year `toy_1_1_12_0_0` requires `2` input and `3` output values.
>>> seasonalann
seasonalann()
>>> seasonalann.verify()
Traceback (most recent call last):
...
RuntimeError: Seasonal artificial neural network collections need \
to handle at least one "normal" single neural network, but for the seasonal \
neural network `seasonalann` of element `?` none has been defined so far.
|
entailment
|
def shape(self) -> Tuple[int, ...]:
"""The shape of array |anntools.SeasonalANN.ratios|."""
return tuple(int(sub) for sub in self.ratios.shape)
|
The shape of array |anntools.SeasonalANN.ratios|.
|
entailment
|
def _set_shape(self, shape):
"""Private on purpose."""
try:
shape = (int(shape),)
except TypeError:
pass
shp = list(shape)
shp[0] = timetools.Period('366d')/self.simulationstep
shp[0] = int(numpy.ceil(round(shp[0], 10)))
getattr(self.fastaccess, self.name).ratios = numpy.zeros(
shp, dtype=float)
|
Private on purpose.
|
entailment
|
def toys(self) -> Tuple[timetools.TOY, ...]:
"""A sorted |tuple| of all contained |TOY| objects."""
return tuple(toy for (toy, _) in self)
|
A sorted |tuple| of all contained |TOY| objects.
|
entailment
|
def plot(self, xmin, xmax, idx_input=0, idx_output=0, points=100,
**kwargs) -> None:
"""Call method |anntools.ANN.plot| of all |anntools.ANN| objects
handled by the actual |anntools.SeasonalANN| object.
"""
for toy, ann_ in self:
ann_.plot(xmin, xmax,
idx_input=idx_input, idx_output=idx_output,
points=points,
label=str(toy),
**kwargs)
pyplot.legend()
|
Call method |anntools.ANN.plot| of all |anntools.ANN| objects
handled by the actual |anntools.SeasonalANN| object.
|
entailment
|
def specstring(self):
"""The string corresponding to the current values of `subgroup`,
`state`, and `variable`.
>>> from hydpy.core.itemtools import ExchangeSpecification
>>> spec = ExchangeSpecification('hland_v1', 'fluxes.qt')
>>> spec.specstring
'fluxes.qt'
>>> spec.series = True
>>> spec.specstring
'fluxes.qt.series'
>>> spec.subgroup = None
>>> spec.specstring
'qt.series'
"""
if self.subgroup is None:
variable = self.variable
else:
variable = f'{self.subgroup}.{self.variable}'
if self.series:
variable = f'{variable}.series'
return variable
|
The string corresponding to the current values of `subgroup`,
`series`, and `variable`.
>>> from hydpy.core.itemtools import ExchangeSpecification
>>> spec = ExchangeSpecification('hland_v1', 'fluxes.qt')
>>> spec.specstring
'fluxes.qt'
>>> spec.series = True
>>> spec.specstring
'fluxes.qt.series'
>>> spec.subgroup = None
>>> spec.specstring
'qt.series'
|
entailment
|
def collect_variables(self, selections) -> None:
"""Apply method |ExchangeItem.insert_variables| to collect the
relevant target variables handled by the devices of the given
|Selections| object.
We prepare the `LahnH` example project to be able to use its
|Selections| object:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
We change the type of a specific application model to the type
of its base model for reasons explained later:
>>> from hydpy.models.hland import Model
>>> hp.elements.land_lahn_3.model.__class__ = Model
We prepare a |SetItem| as an example, handling all |hland_states.Ic|
sequences corresponding to any application models derived from |hland|:
>>> from hydpy import SetItem
>>> item = SetItem('ic', 'hland', 'states.ic', 0)
>>> item.targetspecs
ExchangeSpecification('hland', 'states.ic')
Applying method |ExchangeItem.collect_variables| connects the |SetItem|
object with all four relevant |hland_states.Ic| objects:
>>> item.collect_variables(pub.selections)
>>> land_dill = hp.elements.land_dill
>>> sequence = land_dill.model.sequences.states.ic
>>> item.device2target[land_dill] is sequence
True
>>> for element in sorted(item.device2target, key=lambda x: x.name):
... print(element)
land_dill
land_lahn_1
land_lahn_2
land_lahn_3
Asking for |hland_states.Ic| objects corresponding to application
model |hland_v1| only, results in skipping the |Element| `land_lahn_3`
(handling the |hland| base model due to the hack above):
>>> item = SetItem('ic', 'hland_v1', 'states.ic', 0)
>>> item.collect_variables(pub.selections)
>>> for element in sorted(item.device2target, key=lambda x: x.name):
... print(element)
land_dill
land_lahn_1
land_lahn_2
Selecting a series of a variable instead of the variable itself
only affects the `targetspec` attribute:
>>> item = SetItem('t', 'hland_v1', 'inputs.t.series', 0)
>>> item.collect_variables(pub.selections)
>>> item.targetspecs
ExchangeSpecification('hland_v1', 'inputs.t.series')
>>> sequence = land_dill.model.sequences.inputs.t
>>> item.device2target[land_dill] is sequence
True
    It is possible to address both the sequences of |Node| objects and
    their time series via the arguments "node" and "nodes":
>>> item = SetItem('sim', 'node', 'sim', 0)
>>> item.collect_variables(pub.selections)
>>> dill = hp.nodes.dill
>>> item.targetspecs
ExchangeSpecification('node', 'sim')
>>> item.device2target[dill] is dill.sequences.sim
True
>>> for node in sorted(item.device2target, key=lambda x: x.name):
... print(node)
dill
lahn_1
lahn_2
lahn_3
>>> item = SetItem('sim', 'nodes', 'sim.series', 0)
>>> item.collect_variables(pub.selections)
>>> item.targetspecs
ExchangeSpecification('nodes', 'sim.series')
>>> for node in sorted(item.device2target, key=lambda x: x.name):
... print(node)
dill
lahn_1
lahn_2
lahn_3
"""
self.insert_variables(self.device2target, self.targetspecs, selections)
|
Apply method |ExchangeItem.insert_variables| to collect the
relevant target variables handled by the devices of the given
|Selections| object.
We prepare the `LahnH` example project to be able to use its
|Selections| object:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
We change the type of a specific application model to the type
of its base model for reasons explained later:
>>> from hydpy.models.hland import Model
>>> hp.elements.land_lahn_3.model.__class__ = Model
We prepare a |SetItem| as an example, handling all |hland_states.Ic|
sequences corresponding to any application models derived from |hland|:
>>> from hydpy import SetItem
>>> item = SetItem('ic', 'hland', 'states.ic', 0)
>>> item.targetspecs
ExchangeSpecification('hland', 'states.ic')
Applying method |ExchangeItem.collect_variables| connects the |SetItem|
object with all four relevant |hland_states.Ic| objects:
>>> item.collect_variables(pub.selections)
>>> land_dill = hp.elements.land_dill
>>> sequence = land_dill.model.sequences.states.ic
>>> item.device2target[land_dill] is sequence
True
>>> for element in sorted(item.device2target, key=lambda x: x.name):
... print(element)
land_dill
land_lahn_1
land_lahn_2
land_lahn_3
Asking for |hland_states.Ic| objects corresponding to application
model |hland_v1| only, results in skipping the |Element| `land_lahn_3`
(handling the |hland| base model due to the hack above):
>>> item = SetItem('ic', 'hland_v1', 'states.ic', 0)
>>> item.collect_variables(pub.selections)
>>> for element in sorted(item.device2target, key=lambda x: x.name):
... print(element)
land_dill
land_lahn_1
land_lahn_2
Selecting a series of a variable instead of the variable itself
only affects the `targetspec` attribute:
>>> item = SetItem('t', 'hland_v1', 'inputs.t.series', 0)
>>> item.collect_variables(pub.selections)
>>> item.targetspecs
ExchangeSpecification('hland_v1', 'inputs.t.series')
>>> sequence = land_dill.model.sequences.inputs.t
>>> item.device2target[land_dill] is sequence
True
It is possible to address both the sequences of |Node| objects and
their time series via the arguments "node" and "nodes":
>>> item = SetItem('sim', 'node', 'sim', 0)
>>> item.collect_variables(pub.selections)
>>> dill = hp.nodes.dill
>>> item.targetspecs
ExchangeSpecification('node', 'sim')
>>> item.device2target[dill] is dill.sequences.sim
True
>>> for node in sorted(item.device2target, key=lambda x: x.name):
... print(node)
dill
lahn_1
lahn_2
lahn_3
>>> item = SetItem('sim', 'nodes', 'sim.series', 0)
>>> item.collect_variables(pub.selections)
>>> item.targetspecs
ExchangeSpecification('nodes', 'sim.series')
>>> for node in sorted(item.device2target, key=lambda x: x.name):
... print(node)
dill
lahn_1
lahn_2
lahn_3
|
entailment
|
def insert_variables(
self, device2variable, exchangespec, selections) -> None:
"""Determine the relevant target or base variables (as defined by
    the given |ExchangeSpecification| object) handled by the given
|Selections| object and insert them into the given `device2variable`
dictionary."""
if self.targetspecs.master in ('node', 'nodes'):
for node in selections.nodes:
variable = self._query_nodevariable(node, exchangespec)
device2variable[node] = variable
else:
for element in self._iter_relevantelements(selections):
variable = self._query_elementvariable(element, exchangespec)
device2variable[element] = variable
|
Determine the relevant target or base variables (as defined by
the given |ExchangeSpecification| object) handled by the given
|Selections| object and insert them into the given `device2variable`
dictionary.
|
entailment
|
def update_variable(self, variable, value) -> None:
"""Assign the given value(s) to the given target or base variable.
If the assignment fails, |ChangeItem.update_variable| raises an
error like the following:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
>>> item = SetItem('alpha', 'hland_v1', 'control.alpha', 0)
>>> item.collect_variables(pub.selections)
>>> item.update_variables() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: When trying to update a target variable of SetItem `alpha` \
with the value(s) `None`, the following error occurred: While trying to set \
the value(s) of variable `alpha` of element `...`, the following error \
occurred: The given value `None` cannot be converted to type `float`.
"""
try:
variable(value)
except BaseException:
objecttools.augment_excmessage(
f'When trying to update a target variable of '
f'{objecttools.classname(self)} `{self.name}` '
f'with the value(s) `{value}`')
|
Assign the given value(s) to the given target or base variable.
If the assignment fails, |ChangeItem.update_variable| raises an
error like the following:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
>>> item = SetItem('alpha', 'hland_v1', 'control.alpha', 0)
>>> item.collect_variables(pub.selections)
>>> item.update_variables() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: When trying to update a target variable of SetItem `alpha` \
with the value(s) `None`, the following error occurred: While trying to set \
the value(s) of variable `alpha` of element `...`, the following error \
occurred: The given value `None` cannot be converted to type `float`.
|
entailment
|
def update_variables(self) -> None:
"""Assign the current objects |ChangeItem.value| to the values
of the target variables.
We use the `LahnH` project in the following:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
In the first example, a 0-dimensional |SetItem| changes the value
of the 0-dimensional parameter |hland_control.Alpha|:
>>> from hydpy.core.itemtools import SetItem
>>> item = SetItem('alpha', 'hland_v1', 'control.alpha', 0)
>>> item
SetItem('alpha', 'hland_v1', 'control.alpha', 0)
>>> item.collect_variables(pub.selections)
>>> item.value is None
True
>>> land_dill = hp.elements.land_dill
>>> land_dill.model.parameters.control.alpha
alpha(1.0)
>>> item.value = 2.0
>>> item.value
array(2.0)
>>> land_dill.model.parameters.control.alpha
alpha(1.0)
>>> item.update_variables()
>>> land_dill.model.parameters.control.alpha
alpha(2.0)
In the second example, a 0-dimensional |SetItem| changes the values
of the 1-dimensional parameter |hland_control.FC|:
>>> item = SetItem('fc', 'hland_v1', 'control.fc', 0)
>>> item.collect_variables(pub.selections)
>>> item.value = 200.0
>>> land_dill.model.parameters.control.fc
fc(278.0)
>>> item.update_variables()
>>> land_dill.model.parameters.control.fc
fc(200.0)
In the third example, a 1-dimensional |SetItem| changes the values
of the 1-dimensional sequence |hland_states.Ic|:
>>> for element in hp.elements.catchment:
... element.model.parameters.control.nmbzones(5)
... element.model.parameters.control.icmax(4.0)
>>> item = SetItem('ic', 'hland_v1', 'states.ic', 1)
>>> item.collect_variables(pub.selections)
>>> land_dill.model.sequences.states.ic
ic(nan, nan, nan, nan, nan)
>>> item.value = 2.0
>>> item.update_variables()
>>> land_dill.model.sequences.states.ic
ic(2.0, 2.0, 2.0, 2.0, 2.0)
>>> item.value = 1.0, 2.0, 3.0, 4.0, 5.0
>>> item.update_variables()
>>> land_dill.model.sequences.states.ic
ic(1.0, 2.0, 3.0, 4.0, 4.0)
"""
value = self.value
for variable in self.device2target.values():
self.update_variable(variable, value)
|
Assign the current object's |ChangeItem.value| to the values
of the target variables.
We use the `LahnH` project in the following:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
In the first example, a 0-dimensional |SetItem| changes the value
of the 0-dimensional parameter |hland_control.Alpha|:
>>> from hydpy.core.itemtools import SetItem
>>> item = SetItem('alpha', 'hland_v1', 'control.alpha', 0)
>>> item
SetItem('alpha', 'hland_v1', 'control.alpha', 0)
>>> item.collect_variables(pub.selections)
>>> item.value is None
True
>>> land_dill = hp.elements.land_dill
>>> land_dill.model.parameters.control.alpha
alpha(1.0)
>>> item.value = 2.0
>>> item.value
array(2.0)
>>> land_dill.model.parameters.control.alpha
alpha(1.0)
>>> item.update_variables()
>>> land_dill.model.parameters.control.alpha
alpha(2.0)
In the second example, a 0-dimensional |SetItem| changes the values
of the 1-dimensional parameter |hland_control.FC|:
>>> item = SetItem('fc', 'hland_v1', 'control.fc', 0)
>>> item.collect_variables(pub.selections)
>>> item.value = 200.0
>>> land_dill.model.parameters.control.fc
fc(278.0)
>>> item.update_variables()
>>> land_dill.model.parameters.control.fc
fc(200.0)
In the third example, a 1-dimensional |SetItem| changes the values
of the 1-dimensional sequence |hland_states.Ic|:
>>> for element in hp.elements.catchment:
... element.model.parameters.control.nmbzones(5)
... element.model.parameters.control.icmax(4.0)
>>> item = SetItem('ic', 'hland_v1', 'states.ic', 1)
>>> item.collect_variables(pub.selections)
>>> land_dill.model.sequences.states.ic
ic(nan, nan, nan, nan, nan)
>>> item.value = 2.0
>>> item.update_variables()
>>> land_dill.model.sequences.states.ic
ic(2.0, 2.0, 2.0, 2.0, 2.0)
>>> item.value = 1.0, 2.0, 3.0, 4.0, 5.0
>>> item.update_variables()
>>> land_dill.model.sequences.states.ic
ic(1.0, 2.0, 3.0, 4.0, 4.0)
|
entailment
|
def collect_variables(self, selections) -> None:
"""Apply method |ChangeItem.collect_variables| of the base class
|ChangeItem| and also apply method |ExchangeItem.insert_variables|
of class |ExchangeItem| to collect the relevant base variables
handled by the devices of the given |Selections| object.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
>>> from hydpy import AddItem
>>> item = AddItem(
... 'alpha', 'hland_v1', 'control.sfcf', 'control.rfcf', 0)
>>> item.collect_variables(pub.selections)
>>> land_dill = hp.elements.land_dill
>>> control = land_dill.model.parameters.control
>>> item.device2target[land_dill] is control.sfcf
True
>>> item.device2base[land_dill] is control.rfcf
True
>>> for device in sorted(item.device2base, key=lambda x: x.name):
... print(device)
land_dill
land_lahn_1
land_lahn_2
land_lahn_3
"""
super().collect_variables(selections)
self.insert_variables(self.device2base, self.basespecs, selections)
|
Apply method |ChangeItem.collect_variables| of the base class
|ChangeItem| and also apply method |ExchangeItem.insert_variables|
of class |ExchangeItem| to collect the relevant base variables
handled by the devices of the given |Selections| object.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
>>> from hydpy import AddItem
>>> item = AddItem(
... 'alpha', 'hland_v1', 'control.sfcf', 'control.rfcf', 0)
>>> item.collect_variables(pub.selections)
>>> land_dill = hp.elements.land_dill
>>> control = land_dill.model.parameters.control
>>> item.device2target[land_dill] is control.sfcf
True
>>> item.device2base[land_dill] is control.rfcf
True
>>> for device in sorted(item.device2base, key=lambda x: x.name):
... print(device)
land_dill
land_lahn_1
land_lahn_2
land_lahn_3
|
entailment
|
def update_variables(self) -> None:
"""Add the general |ChangeItem.value| with the |Device| specific base
variable and assign the result to the respective target variable.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
>>> from hydpy.models.hland_v1 import FIELD
>>> for element in hp.elements.catchment:
... control = element.model.parameters.control
... control.nmbzones(3)
... control.zonetype(FIELD)
... control.rfcf(1.1)
>>> from hydpy.core.itemtools import AddItem
>>> item = AddItem(
... 'sfcf', 'hland_v1', 'control.sfcf', 'control.rfcf', 1)
>>> item.collect_variables(pub.selections)
>>> land_dill = hp.elements.land_dill
>>> land_dill.model.parameters.control.sfcf
sfcf(?)
>>> item.value = -0.1, 0.0, 0.1
>>> item.update_variables()
>>> land_dill.model.parameters.control.sfcf
sfcf(1.0, 1.1, 1.2)
>>> land_dill.model.parameters.control.rfcf.shape = 2
>>> land_dill.model.parameters.control.rfcf = 1.1
>>> item.update_variables() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: When trying to add the value(s) `[-0.1 0. 0.1]` of \
AddItem `sfcf` and the value(s) `[ 1.1 1.1]` of variable `rfcf` of element \
`land_dill`, the following error occurred: operands could not be broadcast \
together with shapes (2,) (3,)...
"""
value = self.value
for device, target in self.device2target.items():
base = self.device2base[device]
try:
result = base.value + value
except BaseException:
            objecttools.augment_excmessage(
f'When trying to add the value(s) `{value}` of '
f'AddItem `{self.name}` and the value(s) `{base.value}` '
f'of variable {objecttools.devicephrase(base)}')
self.update_variable(target, result)
|
Add the general |ChangeItem.value| to the |Device|-specific base
variable and assign the result to the respective target variable.
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
>>> from hydpy.models.hland_v1 import FIELD
>>> for element in hp.elements.catchment:
... control = element.model.parameters.control
... control.nmbzones(3)
... control.zonetype(FIELD)
... control.rfcf(1.1)
>>> from hydpy.core.itemtools import AddItem
>>> item = AddItem(
... 'sfcf', 'hland_v1', 'control.sfcf', 'control.rfcf', 1)
>>> item.collect_variables(pub.selections)
>>> land_dill = hp.elements.land_dill
>>> land_dill.model.parameters.control.sfcf
sfcf(?)
>>> item.value = -0.1, 0.0, 0.1
>>> item.update_variables()
>>> land_dill.model.parameters.control.sfcf
sfcf(1.0, 1.1, 1.2)
>>> land_dill.model.parameters.control.rfcf.shape = 2
>>> land_dill.model.parameters.control.rfcf = 1.1
>>> item.update_variables() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: When trying to add the value(s) `[-0.1 0. 0.1]` of \
AddItem `sfcf` and the value(s) `[ 1.1 1.1]` of variable `rfcf` of element \
`land_dill`, the following error occurred: operands could not be broadcast \
together with shapes (2,) (3,)...
|
entailment
|
def collect_variables(self, selections) -> None:
"""Apply method |ExchangeItem.collect_variables| of the base class
|ExchangeItem| and determine the `ndim` attribute of the current
|ChangeItem| object afterwards.
    The value of `ndim` depends on whether the values of the target
    variable itself or of its time series are of interest:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
    >>> from hydpy.core.itemtools import GetItem
>>> for target in ('states.lz', 'states.lz.series',
... 'states.sm', 'states.sm.series'):
... item = GetItem('hland_v1', target)
... item.collect_variables(pub.selections)
... print(item, item.ndim)
GetItem('hland_v1', 'states.lz') 0
GetItem('hland_v1', 'states.lz.series') 1
GetItem('hland_v1', 'states.sm') 1
GetItem('hland_v1', 'states.sm.series') 2
"""
super().collect_variables(selections)
for device in sorted(self.device2target.keys(), key=lambda x: x.name):
self._device2name[device] = f'{device.name}_{self.target}'
for target in self.device2target.values():
self.ndim = target.NDIM
if self.targetspecs.series:
self.ndim += 1
break
|
Apply method |ExchangeItem.collect_variables| of the base class
|ExchangeItem| and determine the `ndim` attribute of the current
|ChangeItem| object afterwards.
The value of `ndim` depends on whether the values of the target
variable itself or of its time series are of interest:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
>>> from hydpy.core.itemtools import GetItem
>>> for target in ('states.lz', 'states.lz.series',
... 'states.sm', 'states.sm.series'):
... item = GetItem('hland_v1', target)
... item.collect_variables(pub.selections)
... print(item, item.ndim)
GetItem('hland_v1', 'states.lz') 0
GetItem('hland_v1', 'states.lz.series') 1
GetItem('hland_v1', 'states.sm') 1
GetItem('hland_v1', 'states.sm.series') 2
|
entailment
|
def yield_name2value(self, idx1=None, idx2=None) \
-> Iterator[Tuple[str, str]]:
"""Sequentially return name-value-pairs describing the current state
of the target variables.
The names are automatically generated and contain both the name of
the |Device| of the respective |Variable| object and the target
description:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
    >>> from hydpy.core.itemtools import GetItem
>>> item = GetItem('hland_v1', 'states.lz')
>>> item.collect_variables(pub.selections)
>>> hp.elements.land_dill.model.sequences.states.lz = 100.0
>>> for name, value in item.yield_name2value():
... print(name, value)
land_dill_states_lz 100.0
land_lahn_1_states_lz 8.18711
land_lahn_2_states_lz 10.14007
land_lahn_3_states_lz 7.52648
>>> item = GetItem('hland_v1', 'states.sm')
>>> item.collect_variables(pub.selections)
>>> hp.elements.land_dill.model.sequences.states.sm = 2.0
>>> for name, value in item.yield_name2value():
... print(name, value) # doctest: +ELLIPSIS
land_dill_states_sm [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, \
2.0, 2.0, 2.0, 2.0]
land_lahn_1_states_sm [99.27505, ..., 142.84148]
...
When querying time series, one can restrict the span of interest
by passing index values:
>>> item = GetItem('nodes', 'sim.series')
>>> item.collect_variables(pub.selections)
>>> hp.nodes.dill.sequences.sim.series = 1.0, 2.0, 3.0, 4.0
>>> for name, value in item.yield_name2value():
... print(name, value) # doctest: +ELLIPSIS
dill_sim_series [1.0, 2.0, 3.0, 4.0]
lahn_1_sim_series [nan, ...
...
>>> for name, value in item.yield_name2value(2, 3):
... print(name, value) # doctest: +ELLIPSIS
dill_sim_series [3.0]
lahn_1_sim_series [nan]
...
"""
for device, name in self._device2name.items():
target = self.device2target[device]
if self.targetspecs.series:
values = target.series[idx1:idx2]
else:
values = target.values
if self.ndim == 0:
values = objecttools.repr_(float(values))
else:
values = objecttools.repr_list(values.tolist())
yield name, values
|
Sequentially return name-value-pairs describing the current state
of the target variables.
The names are automatically generated and contain both the name of
the |Device| of the respective |Variable| object and the target
description:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
>>> from hydpy.core.itemtools import GetItem
>>> item = GetItem('hland_v1', 'states.lz')
>>> item.collect_variables(pub.selections)
>>> hp.elements.land_dill.model.sequences.states.lz = 100.0
>>> for name, value in item.yield_name2value():
... print(name, value)
land_dill_states_lz 100.0
land_lahn_1_states_lz 8.18711
land_lahn_2_states_lz 10.14007
land_lahn_3_states_lz 7.52648
>>> item = GetItem('hland_v1', 'states.sm')
>>> item.collect_variables(pub.selections)
>>> hp.elements.land_dill.model.sequences.states.sm = 2.0
>>> for name, value in item.yield_name2value():
... print(name, value) # doctest: +ELLIPSIS
land_dill_states_sm [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, \
2.0, 2.0, 2.0, 2.0]
land_lahn_1_states_sm [99.27505, ..., 142.84148]
...
When querying time series, one can restrict the span of interest
by passing index values:
>>> item = GetItem('nodes', 'sim.series')
>>> item.collect_variables(pub.selections)
>>> hp.nodes.dill.sequences.sim.series = 1.0, 2.0, 3.0, 4.0
>>> for name, value in item.yield_name2value():
... print(name, value) # doctest: +ELLIPSIS
dill_sim_series [1.0, 2.0, 3.0, 4.0]
lahn_1_sim_series [nan, ...
...
>>> for name, value in item.yield_name2value(2, 3):
... print(name, value) # doctest: +ELLIPSIS
dill_sim_series [3.0]
lahn_1_sim_series [nan]
...
|
entailment
|
def iso_day_to_weekday(d):
"""
    Returns the weekday's name given an ISO weekday number;
"today" if today is the same weekday.
"""
if int(d) == utils.get_now().isoweekday():
return _("today")
for w in WEEKDAYS:
if w[0] == int(d):
return w[1]
|
Returns the weekday's name given an ISO weekday number;
"today" if today is the same weekday.
|
entailment
|
def is_open(location=None, attr=None):
"""
    Returns False if the location is closed, or the OpeningHours object
    showing that the location is currently open.
"""
obj = utils.is_open(location)
if obj is False:
return False
if attr is not None:
return getattr(obj, attr)
return obj
|
Returns False if the location is closed, or the OpeningHours object
showing that the location is currently open.
|
entailment
|
def is_open_now(location=None, attr=None):
"""
    Returns False if the location is closed, or the OpeningHours object
    showing that the location is currently open.
Same as `is_open` but passes `now` to `utils.is_open` to bypass `get_now()`.
"""
obj = utils.is_open(location, now=datetime.datetime.now())
if obj is False:
return False
if attr is not None:
return getattr(obj, attr)
return obj
|
Returns False if the location is closed, or the OpeningHours object
showing that the location is currently open.
Same as `is_open` but passes `now` to `utils.is_open` to bypass `get_now()`.
|
entailment
|
def opening_hours(location=None, concise=False):
"""
Creates a rendered listing of hours.
"""
template_name = 'openinghours/opening_hours_list.html'
days = [] # [{'hours': '9:00am to 5:00pm', 'name': u'Monday'}, {'hours...
# Without `location`, choose the first company.
if location:
ohrs = OpeningHours.objects.filter(company=location)
else:
try:
Location = utils.get_premises_model()
ohrs = Location.objects.first().openinghours_set.all()
except AttributeError:
raise Exception("You must define some opening hours"
" to use the opening hours tags.")
    ohrs = ohrs.order_by('weekday', 'from_hour')  # order_by returns a new queryset
for o in ohrs:
days.append({
'day_number': o.weekday,
'name': o.get_weekday_display(),
'from_hour': o.from_hour,
'to_hour': o.to_hour,
'hours': '%s%s to %s%s' % (
o.from_hour.strftime('%I:%M').lstrip('0'),
o.from_hour.strftime('%p').lower(),
o.to_hour.strftime('%I:%M').lstrip('0'),
o.to_hour.strftime('%p').lower()
)
})
open_days = [o.weekday for o in ohrs]
for day_number, day_name in WEEKDAYS:
if day_number not in open_days:
days.append({
'day_number': day_number,
'name': day_name,
'hours': 'Closed'
})
days = sorted(days, key=lambda k: k['day_number'])
if concise:
# [{'hours': '9:00am to 5:00pm', 'day_names': u'Monday to Friday'},
# {'hours':...
template_name = 'openinghours/opening_hours_list_concise.html'
concise_days = []
current_set = {}
for day in days:
if 'hours' not in current_set.keys():
current_set = {'day_names': [day['name']],
'hours': day['hours']}
elif day['hours'] != current_set['hours']:
concise_days.append(current_set)
current_set = {'day_names': [day['name']],
'hours': day['hours']}
else:
current_set['day_names'].append(day['name'])
concise_days.append(current_set)
for day_set in concise_days:
if len(day_set['day_names']) > 2:
day_set['day_names'] = '%s to %s' % (day_set['day_names'][0],
day_set['day_names'][-1])
elif len(day_set['day_names']) > 1:
day_set['day_names'] = '%s and %s' % (day_set['day_names'][0],
day_set['day_names'][-1])
else:
day_set['day_names'] = '%s' % day_set['day_names'][0]
days = concise_days
template = get_template(template_name)
return template.render({'days': days})
|
Creates a rendered listing of hours.
|
entailment
|
def prepare_everything(self):
"""Convenience method to make the actual |HydPy| instance runable."""
self.prepare_network()
self.init_models()
self.load_conditions()
with hydpy.pub.options.warnmissingobsfile(False):
self.prepare_nodeseries()
self.prepare_modelseries()
self.load_inputseries()
|
Convenience method to make the actual |HydPy| instance runnable.
|
entailment
|
def prepare_network(self):
"""Load all network files as |Selections| (stored in module |pub|)
and assign the "complete" selection to the |HydPy| object."""
hydpy.pub.selections = selectiontools.Selections()
hydpy.pub.selections += hydpy.pub.networkmanager.load_files()
self.update_devices(hydpy.pub.selections.complete)
|
Load all network files as |Selections| (stored in module |pub|)
and assign the "complete" selection to the |HydPy| object.
|
entailment
|
def save_controls(self, parameterstep=None, simulationstep=None,
auxfiler=None):
"""Call method |Elements.save_controls| of the |Elements| object
currently handled by the |HydPy| object.
    We use the `LahnH` example project to demonstrate how to write
    a complete set of parameter control files. For convenience, we let
function |prepare_full_example_2| prepare a fully functional
|HydPy| object, handling seven |Element| objects controlling
four |hland_v1| and three |hstream_v1| application models:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
At first, there is only one control subfolder named "default",
containing the seven control files used in the step above:
>>> import os
>>> with TestIO():
... os.listdir('LahnH/control')
['default']
    Next, we use the |ControlManager| to create a new directory
    and dump all control files into it:
>>> with TestIO():
... pub.controlmanager.currentdir = 'newdir'
... hp.save_controls()
... sorted(os.listdir('LahnH/control'))
['default', 'newdir']
We focus our examples on the (smaller) control files of
application model |hstream_v1|. The values of parameter
|hstream_control.Lag| and |hstream_control.Damp| for the
river channel connecting the outlets of subcatchment `lahn_1`
and `lahn_2` are 0.583 days and 0.0, respectively:
>>> model = hp.elements.stream_lahn_1_lahn_2.model
>>> model.parameters.control
lag(0.583)
damp(0.0)
The corresponding written control file defines the same values:
>>> dir_ = 'LahnH/control/newdir/'
>>> with TestIO():
... with open(dir_ + 'stream_lahn_1_lahn_2.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1d')
parameterstep('1d')
<BLANKLINE>
lag(0.583)
damp(0.0)
<BLANKLINE>
    Its name equals the element name, and the time step information
    is taken from the |Timegrids| object available via |pub|:
>>> pub.timegrids.stepsize
Period('1d')
    Use the |Auxfiler| class to avoid redefining the same parameter
    values in multiple control files. Here, we prepare an |Auxfiler|
object which handles the two parameters of the model discussed
above:
>>> from hydpy import Auxfiler
>>> aux = Auxfiler()
>>> aux += 'hstream_v1'
>>> aux.hstream_v1.stream = model.parameters.control.damp
>>> aux.hstream_v1.stream = model.parameters.control.lag
    When passing the |Auxfiler| object to |HydPy.save_controls|,
    the control file of element `stream_lahn_1_lahn_2` does not define
    the values of both parameters on its own but references the
    auxiliary file `stream.py` instead:
>>> with TestIO():
... pub.controlmanager.currentdir = 'newdir'
... hp.save_controls(auxfiler=aux)
... with open(dir_ + 'stream_lahn_1_lahn_2.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1d')
parameterstep('1d')
<BLANKLINE>
lag(auxfile='stream')
damp(auxfile='stream')
<BLANKLINE>
`stream.py` contains the actual value definitions:
>>> with TestIO():
... with open(dir_ + 'stream.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1d')
parameterstep('1d')
<BLANKLINE>
damp(0.0)
lag(0.583)
<BLANKLINE>
The |hstream_v1| model of element `stream_lahn_2_lahn_3` defines
the same value for parameter |hstream_control.Damp| but a different
one for parameter |hstream_control.Lag|. Hence, only
|hstream_control.Damp| can reference control file `stream.py`
without distorting data:
>>> with TestIO():
... with open(dir_ + 'stream_lahn_2_lahn_3.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1d')
parameterstep('1d')
<BLANKLINE>
lag(0.417)
damp(auxfile='stream')
<BLANKLINE>
    Another option is to pass alternative step size information.
    The `simulationstep` information, which is not actually required
    in control files but useful for testing them, has no impact on
    the written data. However, passing alternative `parameterstep`
    information changes the written values of time-dependent
    parameters in both the primary and the auxiliary control files,
    as expected:
>>> with TestIO():
... pub.controlmanager.currentdir = 'newdir'
... hp.save_controls(
... auxfiler=aux, parameterstep='2d', simulationstep='1h')
... with open(dir_ + 'stream_lahn_1_lahn_2.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('2d')
<BLANKLINE>
lag(auxfile='stream')
damp(auxfile='stream')
<BLANKLINE>
>>> with TestIO():
... with open(dir_ + 'stream.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('2d')
<BLANKLINE>
damp(0.0)
lag(0.2915)
<BLANKLINE>
>>> with TestIO():
... with open(dir_ + 'stream_lahn_2_lahn_3.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('2d')
<BLANKLINE>
lag(0.2085)
damp(auxfile='stream')
<BLANKLINE>
"""
self.elements.save_controls(parameterstep=parameterstep,
simulationstep=simulationstep,
auxfiler=auxfiler)
|
Call method |Elements.save_controls| of the |Elements| object
currently handled by the |HydPy| object.
We use the `LahnH` example project to demonstrate how to write
a complete set of parameter control files. For convenience, we let
function |prepare_full_example_2| prepare a fully functional
|HydPy| object, handling seven |Element| objects controlling
four |hland_v1| and three |hstream_v1| application models:
>>> from hydpy.core.examples import prepare_full_example_2
>>> hp, pub, TestIO = prepare_full_example_2()
At first, there is only one control subfolder named "default",
containing the seven control files used in the step above:
>>> import os
>>> with TestIO():
... os.listdir('LahnH/control')
['default']
Next, we use the |ControlManager| to create a new directory
and dump all control files into it:
>>> with TestIO():
... pub.controlmanager.currentdir = 'newdir'
... hp.save_controls()
... sorted(os.listdir('LahnH/control'))
['default', 'newdir']
We focus our examples on the (smaller) control files of
application model |hstream_v1|. The values of parameter
|hstream_control.Lag| and |hstream_control.Damp| for the
river channel connecting the outlets of subcatchment `lahn_1`
and `lahn_2` are 0.583 days and 0.0, respectively:
>>> model = hp.elements.stream_lahn_1_lahn_2.model
>>> model.parameters.control
lag(0.583)
damp(0.0)
The corresponding written control file defines the same values:
>>> dir_ = 'LahnH/control/newdir/'
>>> with TestIO():
... with open(dir_ + 'stream_lahn_1_lahn_2.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1d')
parameterstep('1d')
<BLANKLINE>
lag(0.583)
damp(0.0)
<BLANKLINE>
Its name equals the element name, and the time step information
is taken from the |Timegrids| object available via |pub|:
>>> pub.timegrids.stepsize
Period('1d')
Use the |Auxfiler| class to avoid redefining the same parameter
values in multiple control files. Here, we prepare an |Auxfiler|
object which handles the two parameters of the model discussed
above:
>>> from hydpy import Auxfiler
>>> aux = Auxfiler()
>>> aux += 'hstream_v1'
>>> aux.hstream_v1.stream = model.parameters.control.damp
>>> aux.hstream_v1.stream = model.parameters.control.lag
When passing the |Auxfiler| object to |HydPy.save_controls|,
the control file of element `stream_lahn_1_lahn_2` does not define
the values of both parameters on its own but references the
auxiliary file `stream.py` instead:
>>> with TestIO():
... pub.controlmanager.currentdir = 'newdir'
... hp.save_controls(auxfiler=aux)
... with open(dir_ + 'stream_lahn_1_lahn_2.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1d')
parameterstep('1d')
<BLANKLINE>
lag(auxfile='stream')
damp(auxfile='stream')
<BLANKLINE>
`stream.py` contains the actual value definitions:
>>> with TestIO():
... with open(dir_ + 'stream.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1d')
parameterstep('1d')
<BLANKLINE>
damp(0.0)
lag(0.583)
<BLANKLINE>
The |hstream_v1| model of element `stream_lahn_2_lahn_3` defines
the same value for parameter |hstream_control.Damp| but a different
one for parameter |hstream_control.Lag|. Hence, only
|hstream_control.Damp| can reference control file `stream.py`
without distorting data:
>>> with TestIO():
... with open(dir_ + 'stream_lahn_2_lahn_3.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1d')
parameterstep('1d')
<BLANKLINE>
lag(0.417)
damp(auxfile='stream')
<BLANKLINE>
Another option is to pass alternative step size information.
The `simulationstep` information, which is not actually required
in control files but useful for testing them, has no impact on
the written data. However, passing alternative `parameterstep`
information changes the written values of time-dependent
parameters in both the primary and the auxiliary control files,
as expected:
>>> with TestIO():
... pub.controlmanager.currentdir = 'newdir'
... hp.save_controls(
... auxfiler=aux, parameterstep='2d', simulationstep='1h')
... with open(dir_ + 'stream_lahn_1_lahn_2.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('2d')
<BLANKLINE>
lag(auxfile='stream')
damp(auxfile='stream')
<BLANKLINE>
>>> with TestIO():
... with open(dir_ + 'stream.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('2d')
<BLANKLINE>
damp(0.0)
lag(0.2915)
<BLANKLINE>
>>> with TestIO():
... with open(dir_ + 'stream_lahn_2_lahn_3.py') as controlfile:
... print(controlfile.read())
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('2d')
<BLANKLINE>
lag(0.2085)
damp(auxfile='stream')
<BLANKLINE>
|
entailment
|
def networkproperties(self):
"""Print out some properties of the network defined by the |Node| and
|Element| objects currently handled by the |HydPy| object."""
print('Number of nodes: %d' % len(self.nodes))
print('Number of elements: %d' % len(self.elements))
print('Number of end nodes: %d' % len(self.endnodes))
print('Number of distinct networks: %d' % len(self.numberofnetworks))
print('Applied node variables: %s' % ', '.join(self.variables))
|
Print out some properties of the network defined by the |Node| and
|Element| objects currently handled by the |HydPy| object.
|
entailment
|
def numberofnetworks(self):
"""The number of distinct networks defined by the|Node| and
|Element| objects currently handled by the |HydPy| object."""
sels1 = selectiontools.Selections()
sels2 = selectiontools.Selections()
complete = selectiontools.Selection('complete',
self.nodes, self.elements)
for node in self.endnodes:
sel = complete.copy(node.name).select_upstream(node)
sels1 += sel
sels2 += sel.copy(node.name)
for sel1 in sels1:
for sel2 in sels2:
if sel1.name != sel2.name:
sel1 -= sel2
for name in list(sels1.names):
if not sels1[name].elements:
del sels1[name]
return sels1
|
The number of distinct networks defined by the |Node| and
|Element| objects currently handled by the |HydPy| object.
|
entailment
|
def endnodes(self):
"""|Nodes| object containing all |Node| objects currently handled by
the |HydPy| object which define a downstream end point of a network."""
endnodes = devicetools.Nodes()
for node in self.nodes:
for element in node.exits:
if ((element in self.elements) and
(node not in element.receivers)):
break
else:
endnodes += node
return endnodes
|
|Nodes| object containing all |Node| objects currently handled by
the |HydPy| object which define a downstream end point of a network.
|
entailment
|
def variables(self):
"""Sorted list of strings summarizing all variables handled by the
|Node| objects"""
variables = set([])
for node in self.nodes:
variables.add(node.variable)
return sorted(variables)
|
Sorted list of strings summarizing all variables handled by the
|Node| objects
|
entailment
|
def simindices(self):
"""Tuple containing the start and end index of the simulation period
regarding the initialization period defined by the |Timegrids| object
stored in module |pub|."""
return (hydpy.pub.timegrids.init[hydpy.pub.timegrids.sim.firstdate],
hydpy.pub.timegrids.init[hydpy.pub.timegrids.sim.lastdate])
|
Tuple containing the start and end index of the simulation period
regarding the initialization period defined by the |Timegrids| object
stored in module |pub|.
|
entailment
|
def open_files(self, idx=0):
"""Call method |Devices.open_files| of the |Nodes| and |Elements|
objects currently handled by the |HydPy| object."""
self.elements.open_files(idx=idx)
self.nodes.open_files(idx=idx)
|
Call method |Devices.open_files| of the |Nodes| and |Elements|
objects currently handled by the |HydPy| object.
|
entailment
|
def update_devices(self, selection=None):
"""Determines the order, in which the |Node| and |Element| objects
currently handled by the |HydPy| objects need to be processed during
a simulation time step. Optionally, a |Selection| object for defining
new |Node| and |Element| objects can be passed."""
if selection is not None:
self.nodes = selection.nodes
self.elements = selection.elements
self._update_deviceorder()
|
Determine the order in which the |Node| and |Element| objects
currently handled by the |HydPy| object need to be processed during
a simulation time step. Optionally, a |Selection| object for defining
new |Node| and |Element| objects can be passed.
|
entailment
|
def methodorder(self):
"""A list containing all methods of all |Node| and |Element| objects
that need to be processed during a simulation time step in the
order they must be called."""
funcs = []
for node in self.nodes:
if node.deploymode == 'oldsim':
funcs.append(node.sequences.fastaccess.load_simdata)
elif node.deploymode == 'obs':
funcs.append(node.sequences.fastaccess.load_obsdata)
for node in self.nodes:
if node.deploymode != 'oldsim':
funcs.append(node.reset)
for device in self.deviceorder:
if isinstance(device, devicetools.Element):
funcs.append(device.model.doit)
for element in self.elements:
if element.senders:
funcs.append(element.model.update_senders)
for element in self.elements:
if element.receivers:
funcs.append(element.model.update_receivers)
for element in self.elements:
funcs.append(element.model.save_data)
for node in self.nodes:
if node.deploymode != 'oldsim':
funcs.append(node.sequences.fastaccess.save_simdata)
return funcs
|
A list containing all methods of all |Node| and |Element| objects
that need to be processed during a simulation time step in the
order they must be called.
|
entailment
|
def doit(self):
"""Perform a simulation run over the actual simulation time period
defined by the |Timegrids| object stored in module |pub|."""
idx_start, idx_end = self.simindices
self.open_files(idx_start)
methodorder = self.methodorder
for idx in printtools.progressbar(range(idx_start, idx_end)):
for func in methodorder:
func(idx)
self.close_files()
|
Perform a simulation run over the actual simulation time period
defined by the |Timegrids| object stored in module |pub|.
|
entailment
|
def pic_inflow_v1(self):
"""Update the inlet link sequence.
Required inlet sequence:
|dam_inlets.Q|
Calculated flux sequence:
|Inflow|
Basic equation:
:math:`Inflow = Q`
"""
flu = self.sequences.fluxes.fastaccess
inl = self.sequences.inlets.fastaccess
flu.inflow = inl.q[0]
|
Update the inlet link sequence.
Required inlet sequence:
|dam_inlets.Q|
Calculated flux sequence:
|Inflow|
Basic equation:
:math:`Inflow = Q`
|
entailment
|
def pic_inflow_v2(self):
"""Update the inlet link sequences.
Required inlet sequences:
|dam_inlets.Q|
|dam_inlets.S|
|dam_inlets.R|
Calculated flux sequence:
|Inflow|
Basic equation:
:math:`Inflow = Q + S + R`
"""
flu = self.sequences.fluxes.fastaccess
inl = self.sequences.inlets.fastaccess
flu.inflow = inl.q[0]+inl.s[0]+inl.r[0]
|
Update the inlet link sequences.
Required inlet sequences:
|dam_inlets.Q|
|dam_inlets.S|
|dam_inlets.R|
Calculated flux sequence:
|Inflow|
Basic equation:
:math:`Inflow = Q + S + R`
|
entailment
|
def pic_totalremotedischarge_v1(self):
"""Update the receiver link sequence."""
flu = self.sequences.fluxes.fastaccess
rec = self.sequences.receivers.fastaccess
flu.totalremotedischarge = rec.q[0]
|
Update the receiver link sequence.
|
entailment
|
def pic_loggedrequiredremoterelease_v1(self):
"""Update the receiver link sequence."""
log = self.sequences.logs.fastaccess
rec = self.sequences.receivers.fastaccess
log.loggedrequiredremoterelease[0] = rec.d[0]
|
Update the receiver link sequence.
|
entailment
|
def pic_loggedrequiredremoterelease_v2(self):
"""Update the receiver link sequence."""
log = self.sequences.logs.fastaccess
rec = self.sequences.receivers.fastaccess
log.loggedrequiredremoterelease[0] = rec.s[0]
|
Update the receiver link sequence.
|
entailment
|
def pic_loggedallowedremoterelieve_v1(self):
"""Update the receiver link sequence."""
log = self.sequences.logs.fastaccess
rec = self.sequences.receivers.fastaccess
log.loggedallowedremoterelieve[0] = rec.r[0]
|
Update the receiver link sequence.
|
entailment
|
def update_loggedtotalremotedischarge_v1(self):
"""Log a new entry of discharge at a cross section far downstream.
Required control parameter:
|NmbLogEntries|
Required flux sequence:
|TotalRemoteDischarge|
Calculated flux sequence:
|LoggedTotalRemoteDischarge|
Example:
    The following example shows that, with each new method call, the
    three memorized values are successively moved to the right and the
    respective new value is stored in the leftmost position:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> nmblogentries(3)
>>> logs.loggedtotalremotedischarge = 0.0
>>> from hydpy import UnitTest
>>> test = UnitTest(model, model.update_loggedtotalremotedischarge_v1,
... last_example=4,
... parseqs=(fluxes.totalremotedischarge,
... logs.loggedtotalremotedischarge))
>>> test.nexts.totalremotedischarge = [1., 3., 2., 4]
>>> del test.inits.loggedtotalremotedischarge
>>> test()
| ex. | totalremotedischarge | loggedtotalremotedischarge |
---------------------------------------------------------------------
| 1 | 1.0 | 1.0 0.0 0.0 |
| 2 | 3.0 | 3.0 1.0 0.0 |
| 3 | 2.0 | 2.0 3.0 1.0 |
| 4 | 4.0 | 4.0 2.0 3.0 |
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
log = self.sequences.logs.fastaccess
for idx in range(con.nmblogentries-1, 0, -1):
log.loggedtotalremotedischarge[idx] = \
log.loggedtotalremotedischarge[idx-1]
log.loggedtotalremotedischarge[0] = flu.totalremotedischarge
|
Log a new entry of discharge at a cross section far downstream.
Required control parameter:
|NmbLogEntries|
Required flux sequence:
|TotalRemoteDischarge|
Calculated flux sequence:
|LoggedTotalRemoteDischarge|
Example:
The following example shows that, with each new method call, the
three memorized values are successively moved to the right and the
respective new value is stored in the leftmost position:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> nmblogentries(3)
>>> logs.loggedtotalremotedischarge = 0.0
>>> from hydpy import UnitTest
>>> test = UnitTest(model, model.update_loggedtotalremotedischarge_v1,
... last_example=4,
... parseqs=(fluxes.totalremotedischarge,
... logs.loggedtotalremotedischarge))
>>> test.nexts.totalremotedischarge = [1., 3., 2., 4]
>>> del test.inits.loggedtotalremotedischarge
>>> test()
| ex. | totalremotedischarge | loggedtotalremotedischarge |
---------------------------------------------------------------------
| 1 | 1.0 | 1.0 0.0 0.0 |
| 2 | 3.0 | 3.0 1.0 0.0 |
| 3 | 2.0 | 2.0 3.0 1.0 |
| 4 | 4.0 | 4.0 2.0 3.0 |
|
entailment
|
def calc_waterlevel_v1(self):
"""Determine the water level based on an artificial neural network
describing the relationship between water level and water stage.
Required control parameter:
|WaterVolume2WaterLevel|
Required state sequence:
|WaterVolume|
Calculated aide sequence:
|WaterLevel|
Example:
Prepare a dam model:
>>> from hydpy.models.dam import *
>>> parameterstep()
Prepare a very simple relationship based on one single neuron:
>>> watervolume2waterlevel(
... nmb_inputs=1, nmb_neurons=(1,), nmb_outputs=1,
... weights_input=0.5, weights_output=1.0,
... intercepts_hidden=0.0, intercepts_output=-0.5)
At least in the water volume range used in the following examples,
the shape of the relationship looks acceptable:
>>> from hydpy import UnitTest
>>> test = UnitTest(
... model, model.calc_waterlevel_v1,
... last_example=10,
... parseqs=(states.watervolume, aides.waterlevel))
>>> test.nexts.watervolume = range(10)
>>> test()
| ex. | watervolume | waterlevel |
----------------------------------
| 1 | 0.0 | 0.0 |
| 2 | 1.0 | 0.122459 |
| 3 | 2.0 | 0.231059 |
| 4 | 3.0 | 0.317574 |
| 5 | 4.0 | 0.380797 |
| 6 | 5.0 | 0.424142 |
| 7 | 6.0 | 0.452574 |
| 8 | 7.0 | 0.470688 |
| 9 | 8.0 | 0.482014 |
| 10 | 9.0 | 0.489013 |
For more realistic approximations of measured relationships between
water level and volume, larger neural networks are required.
"""
con = self.parameters.control.fastaccess
new = self.sequences.states.fastaccess_new
aid = self.sequences.aides.fastaccess
con.watervolume2waterlevel.inputs[0] = new.watervolume
con.watervolume2waterlevel.process_actual_input()
aid.waterlevel = con.watervolume2waterlevel.outputs[0]
|
Determine the water level based on an artificial neural network
describing the relationship between water volume and water level.
Required control parameter:
|WaterVolume2WaterLevel|
Required state sequence:
|WaterVolume|
Calculated aide sequence:
|WaterLevel|
Example:
Prepare a dam model:
>>> from hydpy.models.dam import *
>>> parameterstep()
Prepare a very simple relationship based on one single neuron:
>>> watervolume2waterlevel(
... nmb_inputs=1, nmb_neurons=(1,), nmb_outputs=1,
... weights_input=0.5, weights_output=1.0,
... intercepts_hidden=0.0, intercepts_output=-0.5)
At least in the water volume range used in the following examples,
the shape of the relationship looks acceptable:
>>> from hydpy import UnitTest
>>> test = UnitTest(
... model, model.calc_waterlevel_v1,
... last_example=10,
... parseqs=(states.watervolume, aides.waterlevel))
>>> test.nexts.watervolume = range(10)
>>> test()
| ex. | watervolume | waterlevel |
----------------------------------
| 1 | 0.0 | 0.0 |
| 2 | 1.0 | 0.122459 |
| 3 | 2.0 | 0.231059 |
| 4 | 3.0 | 0.317574 |
| 5 | 4.0 | 0.380797 |
| 6 | 5.0 | 0.424142 |
| 7 | 6.0 | 0.452574 |
| 8 | 7.0 | 0.470688 |
| 9 | 8.0 | 0.482014 |
| 10 | 9.0 | 0.489013 |
For more realistic approximations of measured relationships between
water level and volume, larger neural networks are required.
|
entailment
|
def calc_allowedremoterelieve_v2(self):
"""Calculate the allowed maximum relieve another location
is allowed to discharge into the dam.
Required control parameters:
|HighestRemoteRelieve|
|WaterLevelRelieveThreshold|
Required derived parameter:
|WaterLevelRelieveSmoothPar|
Required aide sequence:
|WaterLevel|
Calculated flux sequence:
|AllowedRemoteRelieve|
Basic equation:
    :math:`AllowedRemoteRelieve = HighestRemoteRelieve \\cdot
    smooth_{logistic1}(WaterLevelRelieveThreshold-WaterLevel,
    WaterLevelRelieveSmoothPar)`
Used auxiliary method:
|smooth_logistic1|
Examples:
All control parameters that are involved in the calculation of
|AllowedRemoteRelieve| are derived from |SeasonalParameter|.
    This allows simulating seasonal dam control schemes.
To show how this works, we first define a short simulation
time period of only two days:
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
Now we prepare the dam model and define two different control
schemes for the hydrological summer months (April to October) and
the winter months (November to March):
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> highestremoterelieve(_11_1_12=1.0, _03_31_12=1.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> waterlevelrelievethreshold(_11_1_12=3.0, _03_31_12=2.0,
... _04_1_12=4.0, _10_31_12=4.0)
>>> waterlevelrelievetolerance(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=1.0, _10_31_12=1.0)
>>> derived.waterlevelrelievesmoothpar.update()
>>> derived.toy.update()
The following test function is supposed to calculate
|AllowedRemoteRelieve| for values of |WaterLevel| ranging
from 0 to 8 m:
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_allowedremoterelieve_v2,
... last_example=9,
... parseqs=(aides.waterlevel,
... fluxes.allowedremoterelieve))
>>> test.nexts.waterlevel = range(9)
On March 30 (one of the last days of the winter months and the
first day of the simulation period), the value of
|WaterLevelRelieveSmoothPar| is zero. Hence, |AllowedRemoteRelieve|
drops abruptly from 1 m³/s (the value of |HighestRemoteRelieve|) to
0 m³/s as soon as |WaterLevel| exceeds the current value
of |WaterLevelRelieveThreshold|:
>>> model.idx_sim = pub.timegrids.init['2001.03.30']
>>> test(first_example=2, last_example=6)
| ex. | waterlevel | allowedremoterelieve |
-------------------------------------------
| 3 | 1.0 | 1.0 |
| 4 | 2.0 | 1.0 |
| 5 | 3.0 | 0.0 |
| 6 | 4.0 | 0.0 |
On April 1 (the first day of the summer months), all parameter
values are increased. The value of parameter |WaterLevelRelieveSmoothPar|
is 1 m. Hence, loosely speaking, |AllowedRemoteRelieve| approaches
its discontinuous extremes (2 m³/s -- which is the value of
|HighestRemoteRelieve| -- and 0 m³/s) to 99 % within a span of
2 m around the threshold value of 4 m defined by
|WaterLevelRelieveThreshold|:
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | waterlevel | allowedremoterelieve |
-------------------------------------------
| 1 | 0.0 | 2.0 |
| 2 | 1.0 | 1.999998 |
| 3 | 2.0 | 1.999796 |
| 4 | 3.0 | 1.98 |
| 5 | 4.0 | 1.0 |
| 6 | 5.0 | 0.02 |
| 7 | 6.0 | 0.000204 |
| 8 | 7.0 | 0.000002 |
| 9 | 8.0 | 0.0 |
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
aid = self.sequences.aides.fastaccess
toy = der.toy[self.idx_sim]
flu.allowedremoterelieve = (
con.highestremoterelieve[toy] *
smoothutils.smooth_logistic1(
con.waterlevelrelievethreshold[toy]-aid.waterlevel,
der.waterlevelrelievesmoothpar[toy]))
|
Calculate the maximum relieve discharge another location
is allowed to convey into the dam.
Required control parameters:
|HighestRemoteRelieve|
|WaterLevelRelieveThreshold|
Required derived parameter:
|WaterLevelRelieveSmoothPar|
Required aide sequence:
|WaterLevel|
Calculated flux sequence:
|AllowedRemoteRelieve|
Basic equation:
:math:`AllowedRemoteRelieve = HighestRemoteRelieve \\cdot
smooth_{logistic1}(WaterLevelRelieveThreshold-WaterLevel,
WaterLevelRelieveSmoothPar)`
Used auxiliary method:
|smooth_logistic1|
Examples:
All control parameters that are involved in the calculation of
|AllowedRemoteRelieve| are derived from |SeasonalParameter|.
This allows the simulation of seasonal dam control schemes.
To show how this works, we first define a short simulation
time period of only two days:
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
Now we prepare the dam model and define two different control
schemes for the hydrological summer months (April to October) and
the winter months (November to March):
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> highestremoterelieve(_11_1_12=1.0, _03_31_12=1.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> waterlevelrelievethreshold(_11_1_12=3.0, _03_31_12=2.0,
... _04_1_12=4.0, _10_31_12=4.0)
>>> waterlevelrelievetolerance(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=1.0, _10_31_12=1.0)
>>> derived.waterlevelrelievesmoothpar.update()
>>> derived.toy.update()
The following test function is supposed to calculate
|AllowedRemoteRelieve| for values of |WaterLevel| ranging
from 0 to 8 m:
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_allowedremoterelieve_v2,
... last_example=9,
... parseqs=(aides.waterlevel,
... fluxes.allowedremoterelieve))
>>> test.nexts.waterlevel = range(9)
On March 30 (one of the last days of the winter months and the
first day of the simulation period), the value of
|WaterLevelRelieveSmoothPar| is zero. Hence, |AllowedRemoteRelieve|
drops abruptly from 1 m³/s (the value of |HighestRemoteRelieve|) to
0 m³/s as soon as |WaterLevel| exceeds the current value
of |WaterLevelRelieveThreshold|:
>>> model.idx_sim = pub.timegrids.init['2001.03.30']
>>> test(first_example=2, last_example=6)
| ex. | waterlevel | allowedremoterelieve |
-------------------------------------------
| 3 | 1.0 | 1.0 |
| 4 | 2.0 | 1.0 |
| 5 | 3.0 | 0.0 |
| 6 | 4.0 | 0.0 |
On April 1 (the first day of the summer months), all parameter
values are increased. The value of parameter |WaterLevelRelieveSmoothPar|
is 1 m. Hence, loosely speaking, |AllowedRemoteRelieve| approaches
its discontinuous extremes (2 m³/s -- which is the value of
|HighestRemoteRelieve| -- and 0 m³/s) to 99 % within a span of
2 m around the threshold value of 4 m defined by
|WaterLevelRelieveThreshold|:
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | waterlevel | allowedremoterelieve |
-------------------------------------------
| 1 | 0.0 | 2.0 |
| 2 | 1.0 | 1.999998 |
| 3 | 2.0 | 1.999796 |
| 4 | 3.0 | 1.98 |
| 5 | 4.0 | 1.0 |
| 6 | 5.0 | 0.02 |
| 7 | 6.0 | 0.000204 |
| 8 | 7.0 | 0.000002 |
| 9 | 8.0 | 0.0 |
|
entailment
|
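The smoothed transition in the April 1 table can be retraced outside HydPy. The sketch below rests on two illustrative assumptions: that |smooth_logistic1| is the standard logistic function and that |WaterLevelRelieveSmoothPar| is calibrated from the 1 m tolerance so that the function returns 0.99 at a distance of 1 m from the threshold (the actual definitions live in HydPy's smoothing tools):
>>> from math import exp, log
>>> def smooth_logistic1(value, par):
...     # assumed form: standard logistic function scaled by `par`
...     return 1.0 / (1.0 + exp(-value / par))
>>> par = 1.0 / log(99.0)  # so that smooth_logistic1(1.0, par) == 0.99
>>> for waterlevel in (3.0, 4.0, 5.0):
...     print(round(2.0 * smooth_logistic1(4.0 - waterlevel, par), 6))
1.98
1.0
0.02
These values match the corresponding rows of the April 1 table above.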
def calc_requiredremotesupply_v1(self):
"""Calculate the required maximum supply from another location
that can be discharged into the dam.
Required control parameters:
|HighestRemoteSupply|
|WaterLevelSupplyThreshold|
Required derived parameter:
|WaterLevelSupplySmoothPar|
Required aide sequence:
|WaterLevel|
Calculated flux sequence:
|RequiredRemoteSupply|
Basic equation:
:math:`RequiredRemoteSupply = HighestRemoteSupply \\cdot
smooth_{logistic1}(WaterLevelSupplyThreshold-WaterLevel,
WaterLevelSupplySmoothPar)`
Used auxiliary method:
|smooth_logistic1|
Examples:
Method |calc_requiredremotesupply_v1| is functionally identical
to method |calc_allowedremoterelieve_v2|. Hence, the following
examples serve for testing purposes only (see the documentation
on function |calc_allowedremoterelieve_v2| for more detailed
information):
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> highestremotesupply(_11_1_12=1.0, _03_31_12=1.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> waterlevelsupplythreshold(_11_1_12=3.0, _03_31_12=2.0,
... _04_1_12=4.0, _10_31_12=4.0)
>>> waterlevelsupplytolerance(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=1.0, _10_31_12=1.0)
>>> derived.waterlevelsupplysmoothpar.update()
>>> derived.toy.update()
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_requiredremotesupply_v1,
... last_example=9,
... parseqs=(aides.waterlevel,
... fluxes.requiredremotesupply))
>>> test.nexts.waterlevel = range(9)
>>> model.idx_sim = pub.timegrids.init['2001.03.30']
>>> test(first_example=2, last_example=6)
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 3 | 1.0 | 1.0 |
| 4 | 2.0 | 1.0 |
| 5 | 3.0 | 0.0 |
| 6 | 4.0 | 0.0 |
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 1 | 0.0 | 2.0 |
| 2 | 1.0 | 1.999998 |
| 3 | 2.0 | 1.999796 |
| 4 | 3.0 | 1.98 |
| 5 | 4.0 | 1.0 |
| 6 | 5.0 | 0.02 |
| 7 | 6.0 | 0.000204 |
| 8 | 7.0 | 0.000002 |
| 9 | 8.0 | 0.0 |
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
aid = self.sequences.aides.fastaccess
toy = der.toy[self.idx_sim]
flu.requiredremotesupply = (
con.highestremotesupply[toy] *
smoothutils.smooth_logistic1(
con.waterlevelsupplythreshold[toy]-aid.waterlevel,
der.waterlevelsupplysmoothpar[toy]))
|
Calculate the required maximum supply from another location
that can be discharged into the dam.
Required control parameters:
|HighestRemoteSupply|
|WaterLevelSupplyThreshold|
Required derived parameter:
|WaterLevelSupplySmoothPar|
Required aide sequence:
|WaterLevel|
Calculated flux sequence:
|RequiredRemoteSupply|
Basic equation:
:math:`RequiredRemoteSupply = HighestRemoteSupply \\cdot
smooth_{logistic1}(WaterLevelSupplyThreshold-WaterLevel,
WaterLevelSupplySmoothPar)`
Used auxiliary method:
|smooth_logistic1|
Examples:
Method |calc_requiredremotesupply_v1| is functionally identical
to method |calc_allowedremoterelieve_v2|. Hence, the following
examples serve for testing purposes only (see the documentation
on function |calc_allowedremoterelieve_v2| for more detailed
information):
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> highestremotesupply(_11_1_12=1.0, _03_31_12=1.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> waterlevelsupplythreshold(_11_1_12=3.0, _03_31_12=2.0,
... _04_1_12=4.0, _10_31_12=4.0)
>>> waterlevelsupplytolerance(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=1.0, _10_31_12=1.0)
>>> derived.waterlevelsupplysmoothpar.update()
>>> derived.toy.update()
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_requiredremotesupply_v1,
... last_example=9,
... parseqs=(aides.waterlevel,
... fluxes.requiredremotesupply))
>>> test.nexts.waterlevel = range(9)
>>> model.idx_sim = pub.timegrids.init['2001.03.30']
>>> test(first_example=2, last_example=6)
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 3 | 1.0 | 1.0 |
| 4 | 2.0 | 1.0 |
| 5 | 3.0 | 0.0 |
| 6 | 4.0 | 0.0 |
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 1 | 0.0 | 2.0 |
| 2 | 1.0 | 1.999998 |
| 3 | 2.0 | 1.999796 |
| 4 | 3.0 | 1.98 |
| 5 | 4.0 | 1.0 |
| 6 | 5.0 | 0.02 |
| 7 | 6.0 | 0.000204 |
| 8 | 7.0 | 0.000002 |
| 9 | 8.0 | 0.0 |
|
entailment
|
def calc_naturalremotedischarge_v1(self):
"""Try to estimate the natural discharge of a cross section far downstream
based on the last few simulation steps.
Required control parameter:
|NmbLogEntries|
Required log sequences:
|LoggedTotalRemoteDischarge|
|LoggedOutflow|
Calculated flux sequence:
|NaturalRemoteDischarge|
Basic equation:
:math:`NaturalRemoteDischarge =
max(\\frac{\\Sigma(LoggedTotalRemoteDischarge - LoggedOutflow)}
{NmbLogEntries}, 0)`
Examples:
Usually, the mean total remote flow should be larger than the mean
dam outflows. Then the estimated natural remote discharge is simply
the difference of both mean values:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> nmblogentries(3)
>>> logs.loggedtotalremotedischarge(2.5, 2.0, 1.5)
>>> logs.loggedoutflow(2.0, 1.0, 0.0)
>>> model.calc_naturalremotedischarge_v1()
>>> fluxes.naturalremotedischarge
naturalremotedischarge(1.0)
Due to wave travel times, the difference between remote discharge
and dam outflow might sometimes be negative. To avoid negative
estimates of natural discharge, its value is set to zero in
such cases:
>>> logs.loggedoutflow(4.0, 3.0, 5.0)
>>> model.calc_naturalremotedischarge_v1()
>>> fluxes.naturalremotedischarge
naturalremotedischarge(0.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
log = self.sequences.logs.fastaccess
flu.naturalremotedischarge = 0.
for idx in range(con.nmblogentries):
flu.naturalremotedischarge += (
log.loggedtotalremotedischarge[idx] - log.loggedoutflow[idx])
if flu.naturalremotedischarge > 0.:
flu.naturalremotedischarge /= con.nmblogentries
else:
flu.naturalremotedischarge = 0.
|
Try to estimate the natural discharge of a cross section far downstream
based on the last few simulation steps.
Required control parameter:
|NmbLogEntries|
Required log sequences:
|LoggedTotalRemoteDischarge|
|LoggedOutflow|
Calculated flux sequence:
|NaturalRemoteDischarge|
Basic equation:
:math:`NaturalRemoteDischarge =
max(\\frac{\\Sigma(LoggedTotalRemoteDischarge - LoggedOutflow)}
{NmbLogEntries}, 0)`
Examples:
Usually, the mean total remote flow should be larger than the mean
dam outflows. Then the estimated natural remote discharge is simply
the difference of both mean values:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> nmblogentries(3)
>>> logs.loggedtotalremotedischarge(2.5, 2.0, 1.5)
>>> logs.loggedoutflow(2.0, 1.0, 0.0)
>>> model.calc_naturalremotedischarge_v1()
>>> fluxes.naturalremotedischarge
naturalremotedischarge(1.0)
Due to wave travel times, the difference between remote discharge
and dam outflow might sometimes be negative. To avoid negative
estimates of natural discharge, its value is set to zero in
such cases:
>>> logs.loggedoutflow(4.0, 3.0, 5.0)
>>> model.calc_naturalremotedischarge_v1()
>>> fluxes.naturalremotedischarge
naturalremotedischarge(0.0)
|
entailment
|
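Stripped of the fastaccess machinery, method |calc_naturalremotedischarge_v1| computes nothing more than a clipped mean of the logged differences, as the following plain-Python retrace of both examples shows:
>>> totals = [2.5, 2.0, 1.5]
>>> outflows = [2.0, 1.0, 0.0]
>>> diffs = [total - outflow for total, outflow in zip(totals, outflows)]
>>> max(sum(diffs) / len(diffs), 0.0)
1.0
>>> outflows = [4.0, 3.0, 5.0]
>>> diffs = [total - outflow for total, outflow in zip(totals, outflows)]
>>> max(sum(diffs) / len(diffs), 0.0)
0.0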
def calc_remotedemand_v1(self):
"""Estimate the discharge demand of a cross section far downstream.
Required control parameter:
|RemoteDischargeMinimum|
Required derived parameter:
|dam_derived.TOY|
Required flux sequence:
|NaturalRemoteDischarge|
Calculated flux sequence:
|RemoteDemand|
Basic equation:
:math:`RemoteDemand =
max(RemoteDischargeMinimum - NaturalRemoteDischarge, 0)`
Examples:
Low water elevation is often restricted to specific months of the year.
Sometimes the pursued lowest discharge value varies over the year
to allow for a low flow variability that is in some agreement with
the natural flow regime. The HydPy-Dam model supports such
variations. Hence we define a short simulation time period first.
This enables us to show how the related parameter values can be
defined and how the calculation of the `remote` water demand
throughout the year actually works:
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
Prepare the dam model:
>>> from hydpy.models.dam import *
>>> parameterstep()
Assume the required discharge at a gauge downstream to be 2 m³/s
in the hydrological summer half-year (April to October). In the
winter months (November to March), there is no such requirement:
>>> remotedischargeminimum(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> derived.toy.update()
Prepare a test function that calculates the remote discharge demand
based on the parameter values defined above and for natural remote
discharge values ranging between 0 and 3 m³/s:
>>> from hydpy import UnitTest
>>> test = UnitTest(model, model.calc_remotedemand_v1, last_example=4,
... parseqs=(fluxes.naturalremotedischarge,
... fluxes.remotedemand))
>>> test.nexts.naturalremotedischarge = range(4)
On April 1, the required discharge is 2 m³/s:
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | naturalremotedischarge | remotedemand |
-----------------------------------------------
| 1 | 0.0 | 2.0 |
| 2 | 1.0 | 1.0 |
| 3 | 2.0 | 0.0 |
| 4 | 3.0 | 0.0 |
On March 31, the required discharge is 0 m³/s:
>>> model.idx_sim = pub.timegrids.init['2001.03.31']
>>> test()
| ex. | naturalremotedischarge | remotedemand |
-----------------------------------------------
| 1 | 0.0 | 0.0 |
| 2 | 1.0 | 0.0 |
| 3 | 2.0 | 0.0 |
| 4 | 3.0 | 0.0 |
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
flu.remotedemand = max(con.remotedischargeminimum[der.toy[self.idx_sim]] -
flu.naturalremotedischarge, 0.)
|
Estimate the discharge demand of a cross section far downstream.
Required control parameter:
|RemoteDischargeMinimum|
Required derived parameter:
|dam_derived.TOY|
Required flux sequence:
|NaturalRemoteDischarge|
Calculated flux sequence:
|RemoteDemand|
Basic equation:
:math:`RemoteDemand =
max(RemoteDischargeMinimum - NaturalRemoteDischarge, 0)`
Examples:
Low water elevation is often restricted to specific months of the year.
Sometimes the pursued lowest discharge value varies over the year
to allow for a low flow variability that is in some agreement with
the natural flow regime. The HydPy-Dam model supports such
variations. Hence we define a short simulation time period first.
This enables us to show how the related parameter values can be
defined and how the calculation of the `remote` water demand
throughout the year actually works:
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
Prepare the dam model:
>>> from hydpy.models.dam import *
>>> parameterstep()
Assume the required discharge at a gauge downstream to be 2 m³/s
in the hydrological summer half-year (April to October). In the
winter months (November to March), there is no such requirement:
>>> remotedischargeminimum(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> derived.toy.update()
Prepare a test function that calculates the remote discharge demand
based on the parameter values defined above and for natural remote
discharge values ranging between 0 and 3 m³/s:
>>> from hydpy import UnitTest
>>> test = UnitTest(model, model.calc_remotedemand_v1, last_example=4,
... parseqs=(fluxes.naturalremotedischarge,
... fluxes.remotedemand))
>>> test.nexts.naturalremotedischarge = range(4)
On April 1, the required discharge is 2 m³/s:
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | naturalremotedischarge | remotedemand |
-----------------------------------------------
| 1 | 0.0 | 2.0 |
| 2 | 1.0 | 1.0 |
| 3 | 2.0 | 0.0 |
| 4 | 3.0 | 0.0 |
On March 31, the required discharge is 0 m³/s:
>>> model.idx_sim = pub.timegrids.init['2001.03.31']
>>> test()
| ex. | naturalremotedischarge | remotedemand |
-----------------------------------------------
| 1 | 0.0 | 0.0 |
| 2 | 1.0 | 0.0 |
| 3 | 2.0 | 0.0 |
| 4 | 3.0 | 0.0 |
|
entailment
|
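Apart from the seasonal parameter lookup, method |calc_remotedemand_v1| boils down to a single clipped difference. The sketch below emulates the time-of-year indexing with a plain dictionary, which is a deliberate simplification of the |dam_derived.TOY| mechanism:
>>> minimums = {'2001.03.31': 0.0, '2001.04.01': 2.0}
>>> def remotedemand(date, naturalremotedischarge):
...     return max(minimums[date] - naturalremotedischarge, 0.0)
>>> remotedemand('2001.04.01', 1.0)
1.0
>>> remotedemand('2001.03.31', 1.0)
0.0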
def calc_remotefailure_v1(self):
"""Estimate the shortfall of actual discharge under the required discharge
of a cross section far downstream.
Required control parameters:
|NmbLogEntries|
|RemoteDischargeMinimum|
Required derived parameters:
|dam_derived.TOY|
Required log sequence:
|LoggedTotalRemoteDischarge|
Calculated flux sequence:
|RemoteFailure|
Basic equation:
:math:`RemoteFailure = RemoteDischargeMinimum -
\\frac{\\Sigma(LoggedTotalRemoteDischarge)}{NmbLogEntries}`
Examples:
As explained in the documentation on method |calc_remotedemand_v1|,
we have to define a simulation period first:
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
Now we prepare a dam model with log sequences memorizing three values:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> nmblogentries(3)
Again, the required discharge is 2 m³/s in summer and 0 m³/s in winter:
>>> remotedischargeminimum(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> derived.toy.update()
Suppose that the actual discharge at the remote cross section
dropped from 2 m³/s to 0 m³/s over the last three days:
>>> logs.loggedtotalremotedischarge(0.0, 1.0, 2.0)
This means that, for April 1, there would have been an average
shortfall of 1 m³/s:
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> model.calc_remotefailure_v1()
>>> fluxes.remotefailure
remotefailure(1.0)
For March 31, by contrast, there would have been an excess of
1 m³/s, which is interpreted as a "negative failure":
>>> model.idx_sim = pub.timegrids.init['2001.03.31']
>>> model.calc_remotefailure_v1()
>>> fluxes.remotefailure
remotefailure(-1.0)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
log = self.sequences.logs.fastaccess
flu.remotefailure = 0
for idx in range(con.nmblogentries):
flu.remotefailure -= log.loggedtotalremotedischarge[idx]
flu.remotefailure /= con.nmblogentries
flu.remotefailure += con.remotedischargeminimum[der.toy[self.idx_sim]]
|
Estimate the shortfall of actual discharge under the required discharge
of a cross section far downstream.
Required control parameters:
|NmbLogEntries|
|RemoteDischargeMinimum|
Required derived parameters:
|dam_derived.TOY|
Required log sequence:
|LoggedTotalRemoteDischarge|
Calculated flux sequence:
|RemoteFailure|
Basic equation:
:math:`RemoteFailure = RemoteDischargeMinimum -
\\frac{\\Sigma(LoggedTotalRemoteDischarge)}{NmbLogEntries}`
Examples:
As explained in the documentation on method |calc_remotedemand_v1|,
we have to define a simulation period first:
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
Now we prepare a dam model with log sequences memorizing three values:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> nmblogentries(3)
Again, the required discharge is 2 m³/s in summer and 0 m³/s in winter:
>>> remotedischargeminimum(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> derived.toy.update()
Suppose that the actual discharge at the remote cross section
dropped from 2 m³/s to 0 m³/s over the last three days:
>>> logs.loggedtotalremotedischarge(0.0, 1.0, 2.0)
This means that, for April 1, there would have been an average
shortfall of 1 m³/s:
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> model.calc_remotefailure_v1()
>>> fluxes.remotefailure
remotefailure(1.0)
For March 31, by contrast, there would have been an excess of
1 m³/s, which is interpreted as a "negative failure":
>>> model.idx_sim = pub.timegrids.init['2001.03.31']
>>> model.calc_remotefailure_v1()
>>> fluxes.remotefailure
remotefailure(-1.0)
|
entailment
|
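With the basic equation written as the required minimum minus the mean logged discharge, both example results follow directly:
>>> logged = [0.0, 1.0, 2.0]
>>> mean = sum(logged) / len(logged)
>>> 2.0 - mean  # April 1, required minimum of 2 m³/s
1.0
>>> 0.0 - mean  # March 31, required minimum of 0 m³/s
-1.0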
def calc_requiredremoterelease_v1(self):
"""Guess the required release necessary to not fall below the threshold
value at a cross section far downstream with a certain level of certainty.
Required control parameter:
|RemoteDischargeSafety|
Required derived parameters:
|RemoteDischargeSmoothPar|
|dam_derived.TOY|
Required flux sequences:
|RemoteDemand|
|RemoteFailure|
Calculated flux sequence:
|RequiredRemoteRelease|
Basic equation:
:math:`RequiredRemoteRelease = RemoteDemand + RemoteDischargeSafety
\\cdot smooth_{logistic1}(RemoteFailure, RemoteDischargeSmoothPar)`
Used auxiliary method:
|smooth_logistic1|
Examples:
As in the examples above, define a short simulation time period first:
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
Prepare the dam model:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> derived.toy.update()
Define a safety factor of 1 m³/s for the summer months and
no safety factor at all for the winter months:
>>> remotedischargesafety(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=1.0, _10_31_12=1.0)
>>> derived.remotedischargesmoothpar.update()
Assume the actual demand at the cross section downstream has
been estimated to be 2 m³/s:
>>> fluxes.remotedemand = 2.0
Prepare a test function that calculates the required discharge
based on the parameter values defined above for "remote
failure" values ranging between -4 and 4 m³/s:
>>> from hydpy import UnitTest
>>> test = UnitTest(model, model.calc_requiredremoterelease_v1,
... last_example=9,
... parseqs=(fluxes.remotefailure,
... fluxes.requiredremoterelease))
>>> test.nexts.remotefailure = range(-4, 5)
On March 31, the safety factor is 0 m³/s. Hence, no discharge is
added to the estimated remote demand of 2 m³/s:
>>> model.idx_sim = pub.timegrids.init['2001.03.31']
>>> test()
| ex. | remotefailure | requiredremoterelease |
-----------------------------------------------
| 1 | -4.0 | 2.0 |
| 2 | -3.0 | 2.0 |
| 3 | -2.0 | 2.0 |
| 4 | -1.0 | 2.0 |
| 5 | 0.0 | 2.0 |
| 6 | 1.0 | 2.0 |
| 7 | 2.0 | 2.0 |
| 8 | 3.0 | 2.0 |
| 9 | 4.0 | 2.0 |
On April 1, the safety factor is 1 m³/s. If the remote failure was
exactly zero in the past, meaning the control of the dam was perfect,
only 0.5 m³/s is added to the estimated remote demand of 2 m³/s.
If the actual discharge fell below the threshold value,
up to 1 m³/s is added. If the actual discharge exceeded the
threshold value by 2 or 3 m³/s, virtually nothing is added:
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | remotefailure | requiredremoterelease |
-----------------------------------------------
| 1 | -4.0 | 2.0 |
| 2 | -3.0 | 2.000001 |
| 3 | -2.0 | 2.000102 |
| 4 | -1.0 | 2.01 |
| 5 | 0.0 | 2.5 |
| 6 | 1.0 | 2.99 |
| 7 | 2.0 | 2.999898 |
| 8 | 3.0 | 2.999999 |
| 9 | 4.0 | 3.0 |
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
flu.requiredremoterelease = (
flu.remotedemand+con.remotedischargesafety[der.toy[self.idx_sim]] *
smoothutils.smooth_logistic1(
flu.remotefailure,
der.remotedischargesmoothpar[der.toy[self.idx_sim]]))
|
Guess the required release necessary to not fall below the threshold
value at a cross section far downstream with a certain level of certainty.
Required control parameter:
|RemoteDischargeSafety|
Required derived parameters:
|RemoteDischargeSmoothPar|
|dam_derived.TOY|
Required flux sequences:
|RemoteDemand|
|RemoteFailure|
Calculated flux sequence:
|RequiredRemoteRelease|
Basic equation:
:math:`RequiredRemoteRelease = RemoteDemand + RemoteDischargeSafety
\\cdot smooth_{logistic1}(RemoteFailure, RemoteDischargeSmoothPar)`
Used auxiliary method:
|smooth_logistic1|
Examples:
As in the examples above, define a short simulation time period first:
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
Prepare the dam model:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> derived.toy.update()
Define a safety factor of 1 m³/s for the summer months and
no safety factor at all for the winter months:
>>> remotedischargesafety(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=1.0, _10_31_12=1.0)
>>> derived.remotedischargesmoothpar.update()
Assume the actual demand at the cross section downstream has
been estimated to be 2 m³/s:
>>> fluxes.remotedemand = 2.0
Prepare a test function that calculates the required discharge
based on the parameter values defined above for "remote
failure" values ranging between -4 and 4 m³/s:
>>> from hydpy import UnitTest
>>> test = UnitTest(model, model.calc_requiredremoterelease_v1,
... last_example=9,
... parseqs=(fluxes.remotefailure,
... fluxes.requiredremoterelease))
>>> test.nexts.remotefailure = range(-4, 5)
On March 31, the safety factor is 0 m³/s. Hence, no discharge is
added to the estimated remote demand of 2 m³/s:
>>> model.idx_sim = pub.timegrids.init['2001.03.31']
>>> test()
| ex. | remotefailure | requiredremoterelease |
-----------------------------------------------
| 1 | -4.0 | 2.0 |
| 2 | -3.0 | 2.0 |
| 3 | -2.0 | 2.0 |
| 4 | -1.0 | 2.0 |
| 5 | 0.0 | 2.0 |
| 6 | 1.0 | 2.0 |
| 7 | 2.0 | 2.0 |
| 8 | 3.0 | 2.0 |
| 9 | 4.0 | 2.0 |
On April 1, the safety factor is 1 m³/s. If the remote failure was
exactly zero in the past, meaning the control of the dam was perfect,
only 0.5 m³/s is added to the estimated remote demand of 2 m³/s.
If the actual discharge fell below the threshold value,
up to 1 m³/s is added. If the actual discharge exceeded the
threshold value by 2 or 3 m³/s, virtually nothing is added:
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | remotefailure | requiredremoterelease |
-----------------------------------------------
| 1 | -4.0 | 2.0 |
| 2 | -3.0 | 2.000001 |
| 3 | -2.0 | 2.000102 |
| 4 | -1.0 | 2.01 |
| 5 | 0.0 | 2.5 |
| 6 | 1.0 | 2.99 |
| 7 | 2.0 | 2.999898 |
| 8 | 3.0 | 2.999999 |
| 9 | 4.0 | 3.0 |
|
entailment
|
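Under the same illustrative assumption about |smooth_logistic1| as in the sketch following |calc_allowedremoterelieve_v2| above, the April 1 results can be retraced as follows:
>>> from math import exp, log
>>> def smooth_logistic1(value, par):
...     return 1.0 / (1.0 + exp(-value / par))
>>> par = 1.0 / log(99.0)
>>> for failure in (-1.0, 0.0, 1.0):
...     # RequiredRemoteRelease = RemoteDemand + safety * smoothed failure
...     print(round(2.0 + 1.0 * smooth_logistic1(failure, par), 6))
2.01
2.5
2.99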
def calc_requiredremoterelease_v2(self):
"""Get the required remote release of the last simulation step.
Required log sequence:
|LoggedRequiredRemoteRelease|
Calculated flux sequence:
|RequiredRemoteRelease|
Basic equation:
:math:`RequiredRemoteRelease = LoggedRequiredRemoteRelease`
Example:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> logs.loggedrequiredremoterelease = 3.0
>>> model.calc_requiredremoterelease_v2()
>>> fluxes.requiredremoterelease
requiredremoterelease(3.0)
"""
flu = self.sequences.fluxes.fastaccess
log = self.sequences.logs.fastaccess
flu.requiredremoterelease = log.loggedrequiredremoterelease[0]
|
Get the required remote release of the last simulation step.
Required log sequence:
|LoggedRequiredRemoteRelease|
Calculated flux sequence:
|RequiredRemoteRelease|
Basic equation:
:math:`RequiredRemoteRelease = LoggedRequiredRemoteRelease`
Example:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> logs.loggedrequiredremoterelease = 3.0
>>> model.calc_requiredremoterelease_v2()
>>> fluxes.requiredremoterelease
requiredremoterelease(3.0)
|
entailment
|
def calc_allowedremoterelieve_v1(self):
"""Get the allowed remote relieve of the last simulation step.
Required log sequence:
|LoggedAllowedRemoteRelieve|
Calculated flux sequence:
|AllowedRemoteRelieve|
Basic equation:
:math:`AllowedRemoteRelieve = LoggedAllowedRemoteRelieve`
Example:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> logs.loggedallowedremoterelieve = 2.0
>>> model.calc_allowedremoterelieve_v1()
>>> fluxes.allowedremoterelieve
allowedremoterelieve(2.0)
"""
flu = self.sequences.fluxes.fastaccess
log = self.sequences.logs.fastaccess
flu.allowedremoterelieve = log.loggedallowedremoterelieve[0]
|
Get the allowed remote relieve of the last simulation step.
Required log sequence:
|LoggedAllowedRemoteRelieve|
Calculated flux sequence:
|AllowedRemoteRelieve|
Basic equation:
:math:`AllowedRemoteRelieve = LoggedAllowedRemoteRelieve`
Example:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> logs.loggedallowedremoterelieve = 2.0
>>> model.calc_allowedremoterelieve_v1()
>>> fluxes.allowedremoterelieve
allowedremoterelieve(2.0)
|
entailment
|