| code (string) | signature (string) | docstring (string) | loss_without_docstring (float64) | loss_with_docstring (float64) | factor (float64) |
|---|---|---|---|---|---|
orbit_ps = _get_system_ps(b, orbit)
metawargs = orbit_ps.meta
metawargs.pop('qualifier')
esinw_def = FloatParameter(qualifier='esinw', value=0.0, default_unit=u.dimensionless_unscaled, limits=(-1.0,1.0), description='Eccentricity times sin of argument of periastron')
esinw, created = b.get_or_create('esinw', esinw_def, **metawargs)
ecc = b.get_parameter(qualifier='ecc', **metawargs)
per0 = b.get_parameter(qualifier='per0', **metawargs)
if solve_for in [None, esinw]:
lhs = esinw
rhs = ecc * sin(per0)
elif solve_for == ecc:
lhs = ecc
rhs = esinw / sin(per0)
elif solve_for == per0:
lhs = per0
#rhs = arcsin(esinw/ecc)
rhs = esinw2per0(ecc, esinw)
else:
raise NotImplementedError
return lhs, rhs, {'orbit': orbit} | def esinw(b, orbit, solve_for=None, **kwargs) | Create a constraint for esinw in an orbit.
If 'esinw' does not exist in the orbit, it will be created
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str orbit: the label of the orbit in which this
constraint should be built
:parameter str solve_for: if 'esinw' should not be the derived/constrained
parameter, provide which other parameter should be derived
(ie 'ecc', 'per0')
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 4.06002 | 3.610529 | 1.124495 |
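The relation this constraint encodes can be checked numerically. A minimal standalone sketch with plain floats in radians (not PHOEBE Parameter objects; the helper names here are illustrative, not the bundle's actual helpers):

```python
import math

def esinw_from(ecc, per0):
    """Forward relation used by the constraint: esinw = ecc * sin(per0)."""
    return ecc * math.sin(per0)

def ecc_from(esinw, per0):
    """Inverse used when solve_for == ecc (undefined where sin(per0) == 0)."""
    return esinw / math.sin(per0)

ecc, per0 = 0.3, math.radians(45.0)
es = esinw_from(ecc, per0)
# round-tripping recovers the eccentricity
assert abs(ecc_from(es, per0) - ecc) < 1e-12
```

The `per0` inversion in the real constraint goes through `esinw2per0` rather than a bare `arcsin`, which would lose quadrant information.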
orbit_ps = _get_system_ps(b, orbit)
metawargs = orbit_ps.meta
metawargs.pop('qualifier')
ecosw_def = FloatParameter(qualifier='ecosw', value=0.0, default_unit=u.dimensionless_unscaled, limits=(-1.0,1.0), description='Eccentricity times cos of argument of periastron')
ecosw, created = b.get_or_create('ecosw', ecosw_def, **metawargs)
ecc = b.get_parameter(qualifier='ecc', **metawargs)
per0 = b.get_parameter(qualifier='per0', **metawargs)
if solve_for in [None, ecosw]:
lhs = ecosw
rhs = ecc * cos(per0)
elif solve_for == ecc:
lhs = ecc
rhs = ecosw / cos(per0)
elif solve_for == per0:
lhs = per0
#rhs = arccos(ecosw/ecc)
rhs = ecosw2per0(ecc, ecosw)
else:
raise NotImplementedError
return lhs, rhs, {'orbit': orbit} | def ecosw(b, orbit, solve_for=None, **kwargs) | Create a constraint for ecosw in an orbit.
If 'ecosw' does not exist in the orbit, it will be created
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str orbit: the label of the orbit in which this
constraint should be built
:parameter str solve_for: if 'ecosw' should not be the derived/constrained
parameter, provide which other parameter should be derived
(ie 'ecc' or 'per0')
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 4.424966 | 3.907254 | 1.1325 |
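Taken together, `esinw` and `ecosw` determine both `ecc` and `per0` uniquely. A sketch of that joint inversion (the actual `esinw2per0`/`ecosw2per0` helpers take different arguments; this only illustrates the underlying math):

```python
import math

def joint_invert(esinw, ecosw):
    """Recover (ecc, per0) from the pair (e sin(per0), e cos(per0))."""
    ecc = math.hypot(esinw, ecosw)
    # atan2 keeps the correct quadrant, unlike a bare arcsin/arccos
    per0 = math.atan2(esinw, ecosw) % (2 * math.pi)
    return ecc, per0

ecc, per0 = 0.25, math.radians(120.0)
e2, p2 = joint_invert(ecc * math.sin(per0), ecc * math.cos(per0))
assert abs(e2 - ecc) < 1e-12 and abs(p2 - per0) < 1e-12
```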
orbit_ps = _get_system_ps(b, orbit)
metawargs = orbit_ps.meta
metawargs.pop('qualifier')
# by default both t0s exist in an orbit, so we don't have to worry about creating either
t0_perpass = b.get_parameter(qualifier='t0_perpass', **metawargs)
t0_supconj = b.get_parameter(qualifier='t0_supconj', **metawargs)
period = b.get_parameter(qualifier='period', **metawargs)
ecc = b.get_parameter(qualifier='ecc', **metawargs)
per0 = b.get_parameter(qualifier='per0', **metawargs)
if solve_for in [None, t0_perpass]:
lhs = t0_perpass
rhs = t0_supconj_to_perpass(t0_supconj, period, ecc, per0)
elif solve_for == t0_supconj:
lhs = t0_supconj
rhs = t0_perpass_to_supconj(t0_perpass, period, ecc, per0)
else:
raise NotImplementedError
return lhs, rhs, {'orbit': orbit} | def t0_perpass_supconj(b, orbit, solve_for=None, **kwargs) | Create a constraint for t0_perpass in an orbit - allowing translating between
t0_perpass and t0_supconj.
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str orbit: the label of the orbit in which this
constraint should be built
:parameter str solve_for: if 't0_perpass' should not be the derived/constrained
parameter, provide which other parameter should be derived
(ie 't0_supconj')
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 3.084028 | 2.687636 | 1.147488 |
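A standalone sketch of what `t0_supconj_to_perpass` presumably computes, assuming the standard Keplerian conversion (true anomaly at superior conjunction, then eccentric and mean anomaly); angles in radians, and this is not the bundle's actual helper:

```python
import math

def t0_supconj_to_perpass(t0_supconj, period, ecc, per0):
    # true anomaly at superior conjunction
    nu = math.pi / 2 - per0
    # eccentric anomaly, then mean anomaly (Kepler's equation)
    E = 2 * math.atan(math.sqrt((1 - ecc) / (1 + ecc)) * math.tan(nu / 2))
    M = E - ecc * math.sin(E)
    # periastron passage precedes superior conjunction by M/(2 pi) of a period
    return t0_supconj - period * M / (2 * math.pi)

# with per0 = 90 deg, periastron coincides with superior conjunction
assert abs(t0_supconj_to_perpass(2455000.0, 3.2, 0.3, math.pi / 2) - 2455000.0) < 1e-9
```

For a circular orbit with `per0 = 0` the offset reduces to a quarter period, which is a quick sanity check on the sign convention.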
orbit_ps = _get_system_ps(b, orbit)
metawargs = orbit_ps.meta
metawargs.pop('qualifier')
# by default both t0s exist in an orbit, so we don't have to worry about creating either
t0_ref = b.get_parameter(qualifier='t0_ref', **metawargs)
t0_supconj = b.get_parameter(qualifier='t0_supconj', **metawargs)
period = b.get_parameter(qualifier='period', **metawargs)
ecc = b.get_parameter(qualifier='ecc', **metawargs)
per0 = b.get_parameter(qualifier='per0', **metawargs)
if solve_for in [None, t0_ref]:
lhs = t0_ref
rhs = t0_supconj_to_ref(t0_supconj, period, ecc, per0)
elif solve_for == t0_supconj:
lhs = t0_supconj
rhs = t0_ref_to_supconj(t0_ref, period, ecc, per0)
else:
raise NotImplementedError
return lhs, rhs, {'orbit': orbit} | def t0_ref_supconj(b, orbit, solve_for=None, **kwargs) | Create a constraint for t0_ref in an orbit - allowing translating between
t0_ref and t0_supconj.
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str orbit: the label of the orbit in which this
constraint should be built
:parameter str solve_for: if 't0_ref' should not be the derived/constrained
parameter, provide which other parameter should be derived
(ie 't0_supconj')
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 3.038427 | 2.663707 | 1.140676 |
phshift = 0
mean_anom = true_anom - (ecc*sin(true_anom))*u.deg
Phi = (mean_anom + per0) / (360*u.deg) - 1./4
# phase = Phi - (phshift - 0.25 + per0/(360*u.deg)) * period
phase = (Phi*u.d - (phshift - 0.25 + per0/(360*u.deg)) * period)*(u.cycle/u.d)
return phase | def _true_anom_to_phase(true_anom, period, ecc, per0) | TODO: add documentation | 5.590062 | 5.384987 | 1.038083 |
orbit_ps = _get_system_ps(b, orbit)
# metawargs = orbit_ps.meta
#metawargs.pop('qualifier')
# t0_ph0 and phshift both exist by default, so we don't have to worry about creating either
# t0_ph0 = orbit_ps.get_parameter(qualifier='t0_ph0')
# phshift = orbit_ps.get_parameter(qualifier='phshift')
ph_supconj = orbit_ps.get_parameter(qualifier='ph_supconj')
per0 = orbit_ps.get_parameter(qualifier='per0')
ecc = orbit_ps.get_parameter(qualifier='ecc')
period = orbit_ps.get_parameter(qualifier='period')
# true_anom_supconj = pi/2 - per0
# mean_anom_supconj = true_anom_supconj - ecc*sin(true_anom_supconj)
# ph_supconj = (mean_anom_supconj + per0) / (2 * pi) - 1/4
if solve_for in [None, ph_supconj]:
lhs = ph_supconj
#true_anom_supconj = np.pi/2*u.rad - per0
true_anom_supconj = -1*(per0 - 360*u.deg)
rhs = _true_anom_to_phase(true_anom_supconj, period, ecc, per0)
#elif solve_for in [per0]:
# raise NotImplementedError("phshift constraint does not support solving for per0 yet")
else:
raise NotImplementedError
return lhs, rhs, {'orbit': orbit} | def ph_supconj(b, orbit, solve_for=None, **kwargs) | TODO: add documentation | 4.129766 | 4.08948 | 1.009851 |
component_ps = _get_system_ps(b, component)
#metawargs = component_ps.meta
#metawargs.pop('qualifier')
period = component_ps.get_parameter(qualifier='period', check_visible=False)
freq = component_ps.get_parameter(qualifier='freq', check_visible=False)
if solve_for in [None, freq]:
lhs = freq
rhs = 2 * np.pi / period
elif solve_for == period:
lhs = period
rhs = 2 * np.pi / freq
else:
raise NotImplementedError
return lhs, rhs, {'component': component} | def freq(b, component, solve_for=None, **kwargs) | Create a constraint for frequency (either orbital or rotational) given a period.
freq = 2 * pi / period
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the orbit or component in which this
constraint should be built
:parameter str solve_for: if 'freq' should not be the derived/constrained
parameter, provide which other parameter should be derived
(ie 'period')
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 4.933862 | 4.368773 | 1.129347 |
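The constraint is its own inverse, since `freq = 2 pi / period` and `period = 2 pi / freq` have the same form. A plain-float sketch:

```python
import math

def freq_from_period(period):
    """Angular frequency from a period: freq = 2 pi / period."""
    return 2 * math.pi / period

def period_from_freq(freq):
    """Inverse relation, identical in form."""
    return 2 * math.pi / freq

p = 1.5
assert abs(period_from_freq(freq_from_period(p)) - p) < 1e-12
```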
hier = b.hierarchy
orbit1_ps = _get_system_ps(b, orbit1)
orbit2_ps = _get_system_ps(b, orbit2)
sma1 = orbit1_ps.get_parameter(qualifier='sma')
sma2 = orbit2_ps.get_parameter(qualifier='sma')
q1 = orbit1_ps.get_parameter(qualifier='q')
q2 = orbit2_ps.get_parameter(qualifier='q')
period1 = orbit1_ps.get_parameter(qualifier='period')
period2 = orbit2_ps.get_parameter(qualifier='period')
# NOTE: orbit1 is the outer, so we need to check orbit2... which will
# be the OPPOSITE component as that of the mass we're solving for
if hier.get_primary_or_secondary(orbit2_ps.component) == 'primary':
qthing1 = 1.0+q1
else:
qthing1 = 1.0+1./q1
if solve_for in [None, sma1]:
lhs = sma1
rhs = (sma2**3 * qthing1 * period1**2/period2**2)**"(1./3)"
else:
# TODO: add other options to solve_for
raise NotImplementedError
return lhs, rhs, {'orbit1': orbit1, 'orbit2': orbit2} | def keplers_third_law_hierarchical(b, orbit1, orbit2, solve_for=None, **kwargs) | TODO: add documentation | 4.18433 | 4.152259 | 1.007724 |
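The expression above is the ratio form of Kepler's third law between the outer and inner orbit (the string exponent `**"(1./3)"` is part of PHOEBE's constraint expression system, not a typo). A plain-float sketch, with `qthing1` standing in for the mass-ratio factor computed from the hierarchy:

```python
def outer_sma(sma2, qthing1, period1, period2):
    """Outer-orbit sma from the inner orbit via Kepler's third law ratio."""
    return (sma2**3 * qthing1 * period1**2 / period2**2) ** (1.0 / 3)

# with equal periods and qthing1 == 1 the two semi-major axes coincide
assert abs(outer_sma(10.0, 1.0, 5.0, 5.0) - 10.0) < 1e-12
# an eightfold mass factor doubles the sma at equal periods
assert abs(outer_sma(10.0, 8.0, 5.0, 5.0) - 20.0) < 1e-9
```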
comp_ps = b.get_component(component=component)
irrad_frac_refl_bol = comp_ps.get_parameter(qualifier='irrad_frac_refl_bol')
irrad_frac_lost_bol = comp_ps.get_parameter(qualifier='irrad_frac_lost_bol')
if solve_for in [irrad_frac_lost_bol, None]:
lhs = irrad_frac_lost_bol
rhs = 1.0 - irrad_frac_refl_bol
elif solve_for in [irrad_frac_refl_bol]:
lhs = irrad_frac_refl_bol
rhs = 1.0 - irrad_frac_lost_bol
else:
raise NotImplementedError
return lhs, rhs, {'component': component} | def irrad_frac(b, component, solve_for=None, **kwargs) | Create a constraint to ensure that energy is conserved and all incident
light is accounted for. | 2.559845 | 2.571117 | 0.995616 |
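The conservation rule is simply that the reflected and lost bolometric fractions are complements:

```python
def irrad_frac_lost(irrad_frac_refl):
    """Energy conservation: reflected + lost fractions of incident light sum to 1."""
    return 1.0 - irrad_frac_refl

assert abs(irrad_frac_lost(0.6) - 0.4) < 1e-12
assert abs(irrad_frac_lost(0.6) + 0.6 - 1.0) < 1e-12
```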
comp_ps = b.get_component(component=component)
requiv = comp_ps.get_parameter(qualifier='requiv')
requiv_critical = comp_ps.get_parameter(qualifier='requiv_max')
if solve_for in [requiv, None]:
lhs = requiv
rhs = 1.0*requiv_critical
else:
raise NotImplementedError
return lhs, rhs, {'component': component} | def semidetached(b, component, solve_for=None, **kwargs) | Create a constraint to force requiv to its semidetached (critical) value, requiv_max | 7.145957 | 6.272906 | 1.139178 |
# TODO: optimize this - this is currently by far the most expensive constraint (due mostly to the parameter multiplication)
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for mass requires hierarchy")
component_ps = _get_system_ps(b, component)
parentorbit = hier.get_parent_of(component)
parentorbit_ps = _get_system_ps(b, parentorbit)
metawargs = component_ps.meta
metawargs.pop('qualifier')
mass_def = FloatParameter(qualifier='mass', value=1.0, default_unit=u.solMass, description='Mass')
mass, created = b.get_or_create('mass', mass_def, **metawargs)
metawargs = parentorbit_ps.meta
metawargs.pop('qualifier')
sma = b.get_parameter(qualifier='sma', **metawargs)
period = b.get_parameter(qualifier='period', **metawargs)
q = b.get_parameter(qualifier='q', **metawargs)
G = c.G.to('solRad3 / (solMass d2)')
G.keep_in_solar_units = True
if hier.get_primary_or_secondary(component) == 'primary':
qthing = 1.0+q
else:
qthing = 1.0+1./q
if solve_for in [None, mass]:
lhs = mass
rhs = (4*np.pi**2 * sma**3 ) / (period**2 * qthing * G)
elif solve_for==sma:
lhs = sma
rhs = ((mass * period**2 * qthing * G)/(4 * np.pi**2))**"(1./3)"
elif solve_for==period:
lhs = period
rhs = ((4 * np.pi**2 * sma**3)/(mass * qthing * G))**"(1./2)"
elif solve_for==q:
# TODO: implement this so that one mass can be solved for sma and the
# other can be solved for q. The tricky thing is that we actually
# have qthing here... so we'll probably need to handle the primary
# vs secondary case separately.
raise NotImplementedError
else:
# TODO: solve for other options
raise NotImplementedError
return lhs, rhs, {'component': component} | def mass(b, component, solve_for=None, **kwargs) | Create a constraint for the mass of a star based on Kepler's third
law from its parent orbit.
If 'mass' does not exist in the component, it will be created
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the star in which this
constraint should be built
:parameter str solve_for: if 'mass' should not be the derived/constrained
parameter, provide which other parameter should be derived
(ie 'q', sma', 'period')
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function)
:raises NotImplementedError: if the hierarchy is not found
:raises NotImplementedError: if the value of solve_for is not yet implemented | 6.104214 | 5.553452 | 1.099175 |
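A plain-float sketch of the `mass` branch, working in the same solar units as the constraint (`G` in solRad^3 / (solMass day^2); the numeric value below is assumed, derived from astropy's constants, and `qthing` stands in for the `1+q` or `1+1/q` factor chosen from the hierarchy):

```python
import math

G_SOLAR = 2942.0  # assumed value of G in solRad^3 / (solMass day^2)

def mass_from_orbit(sma, period, qthing):
    """Kepler's third law, solved for one component's mass as in the constraint."""
    return 4 * math.pi**2 * sma**3 / (period**2 * qthing * G_SOLAR)

# sanity check: an Earth-like orbit (1 au ~ 215 solRad, P = 365.25 d,
# qthing ~ 1 for a negligible companion) should give ~1 solMass
m = mass_from_orbit(215.03, 365.25, 1.0)
assert 0.99 < m < 1.01
```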
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for comp_sma requires hierarchy")
component_ps = _get_system_ps(b, component)
parentorbit = hier.get_parent_of(component)
parentorbit_ps = _get_system_ps(b, parentorbit)
metawargs = component_ps.meta
metawargs.pop('qualifier')
compsma_def = FloatParameter(qualifier='sma', value=4.0, default_unit=u.solRad, description='Semi-major axis of the component in the orbit')
compsma, created = b.get_or_create('sma', compsma_def, **metawargs)
metawargs = parentorbit_ps.meta
metawargs.pop('qualifier')
sma = b.get_parameter(qualifier='sma', **metawargs)
q = b.get_parameter(qualifier='q', **metawargs)
# NOTE: similar logic is also in dynamics.keplerian.dynamics_from_bundle to
# handle nested hierarchical orbits. If changing any of the logic here,
# it should be changed there as well.
if hier.get_primary_or_secondary(component) == 'primary':
qthing = (1. + 1./q)
else:
qthing = (1. + q)
if solve_for in [None, compsma]:
lhs = compsma
rhs = sma / qthing
elif solve_for == sma:
lhs = sma
rhs = compsma * qthing
else:
raise NotImplementedError
return lhs, rhs, {'component': component} | def comp_sma(b, component, solve_for=None, **kwargs) | Create a constraint for the star's semi-major axis WITHIN its
parent orbit. This is NOT the same as the semi-major axis OF
the parent orbit.
If 'sma' does not exist in the component, it will be created
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the star in which this
constraint should be built
:parameter str solve_for: if 'sma@star' should not be the derived/constrained
parameter, provide which other parameter should be derived
(ie 'sma@orbit', 'q')
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 6.622697 | 6.031233 | 1.098067 |
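A plain-float sketch of how the orbital sma splits into the two component smas about the barycenter (with q = m2/m1, the heavier primary sits closer to the center of mass):

```python
def comp_smas(sma, q):
    """Split an orbit's sma into per-component smas about the center of mass."""
    sma_primary = sma / (1.0 + 1.0 / q)    # qthing for the primary
    sma_secondary = sma / (1.0 + q)        # qthing for the secondary
    return sma_primary, sma_secondary

a1, a2 = comp_smas(10.0, 0.5)
# the two component semi-major axes always sum to the orbital sma
assert abs(a1 + a2 - 10.0) < 1e-12
# with q = 0.5 the primary is heavier, so its sma is the smaller one
assert a1 < a2
```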
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for requiv_detached_max requires hierarchy")
component_ps = _get_system_ps(b, component)
parentorbit = hier.get_parent_of(component)
if parentorbit == 'component':
raise ValueError("cannot constrain requiv_detached_max for single star")
parentorbit_ps = _get_system_ps(b, parentorbit)
requiv_max = component_ps.get_parameter(qualifier='requiv_max')
q = parentorbit_ps.get_parameter(qualifier='q')
syncpar = component_ps.get_parameter(qualifier='syncpar')
ecc = parentorbit_ps.get_parameter(qualifier='ecc')
sma = parentorbit_ps.get_parameter(qualifier='sma')
incl_star = component_ps.get_parameter(qualifier='incl')
long_an_star = component_ps.get_parameter(qualifier='long_an')
incl_orbit = parentorbit_ps.get_parameter(qualifier='incl')
long_an_orbit = parentorbit_ps.get_parameter(qualifier='long_an')
if solve_for in [None, requiv_max]:
lhs = requiv_max
rhs = roche_requiv_L1(q, syncpar, ecc, sma,
incl_star, long_an_star,
incl_orbit, long_an_orbit,
hier.get_primary_or_secondary(component, return_ind=True))
else:
raise NotImplementedError("requiv_detached_max can only be solved for requiv_max")
return lhs, rhs, {'component': component} | def requiv_detached_max(b, component, solve_for=None, **kwargs) | Create a constraint to determine the critical (at L1) value of
requiv.
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the star in which this
constraint should be built
:parameter str solve_for: if 'requiv_max' should not be the derived/constrained
parameter, provide which other parameter should be derived
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 4.376077 | 3.941519 | 1.110252 |
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for potential_contact_min requires hierarchy")
component_ps = _get_system_ps(b, component)
parentorbit = hier.get_parent_of(component)
parentorbit_ps = _get_system_ps(b, parentorbit)
pot_min = component_ps.get_parameter(qualifier='pot_min')
q = parentorbit_ps.get_parameter(qualifier='q')
if solve_for in [None, pot_min]:
lhs = pot_min
rhs = roche_potential_contact_L23(q)
else:
raise NotImplementedError("potential_contact_min can only be solved for pot_min")
return lhs, rhs, {'component': component} | def potential_contact_min(b, component, solve_for=None, **kwargs) | Create a constraint to determine the critical (at L23) value of
potential at which a contact will overflow. This will only be used
for contacts, for pot_min.
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the star in which this
constraint should be built
:parameter str solve_for: if 'pot_min' should not be the derived/constrained
parameter, provide which other parameter should be derived
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 8.559605 | 7.432929 | 1.151579 |
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for potential_contact_max requires hierarchy")
component_ps = _get_system_ps(b, component)
parentorbit = hier.get_parent_of(component)
parentorbit_ps = _get_system_ps(b, parentorbit)
pot_max = component_ps.get_parameter(qualifier='pot_max')
q = parentorbit_ps.get_parameter(qualifier='q')
if solve_for in [None, pot_max]:
lhs = pot_max
rhs = roche_potential_contact_L1(q)
else:
raise NotImplementedError("potential_contact_max can only be solved for requiv_max")
return lhs, rhs, {'component': component} | def potential_contact_max(b, component, solve_for=None, **kwargs) | Create a constraint to determine the critical (at L1) value of
potential at which a contact will underflow. This will only be used
for contacts, for pot_max.
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the star in which this
constraint should be built
:parameter str solve_for: if 'pot_max' should not be the derived/constrained
parameter, provide which other parameter should be derived
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 8.406987 | 7.576692 | 1.109586 |
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for requiv_contact_min requires hierarchy")
component_ps = _get_system_ps(b, component)
parentorbit = hier.get_parent_of(component)
parentorbit_ps = _get_system_ps(b, parentorbit)
requiv_min = component_ps.get_parameter(qualifier='requiv_min')
q = parentorbit_ps.get_parameter(qualifier='q')
sma = parentorbit_ps.get_parameter(qualifier='sma')
if solve_for in [None, requiv_min]:
lhs = requiv_min
rhs = roche_requiv_contact_L1(q, sma, hier.get_primary_or_secondary(component, return_ind=True))
else:
raise NotImplementedError("requiv_contact_min can only be solved for requiv_min")
return lhs, rhs, {'component': component} | def requiv_contact_min(b, component, solve_for=None, **kwargs) | Create a constraint to determine the critical (at L1) value of
requiv at which a contact will underflow. This will only be used
for contacts, for requiv_min.
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the star in which this
constraint should be built
:parameter str solve_for: if 'requiv_min' should not be the derived/constrained
parameter, provide which other parameter should be derived
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 6.710004 | 6.080079 | 1.103605 |
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for requiv_contact_max requires hierarchy")
component_ps = _get_system_ps(b, component)
parentorbit = hier.get_parent_of(component)
parentorbit_ps = _get_system_ps(b, parentorbit)
requiv_max = component_ps.get_parameter(qualifier='requiv_max')
q = parentorbit_ps.get_parameter(qualifier='q')
sma = parentorbit_ps.get_parameter(qualifier='sma')
if solve_for in [None, requiv_max]:
lhs = requiv_max
rhs = roche_requiv_contact_L23(q, sma, hier.get_primary_or_secondary(component, return_ind=True))
else:
raise NotImplementedError("requiv_contact_max can only be solved for requiv_max")
return lhs, rhs, {'component': component} | def requiv_contact_max(b, component, solve_for=None, **kwargs) | Create a constraint to determine the critical (at L2/3) value of
requiv at which a contact will overflow. This will only be used
for contacts, for requiv_max.
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the star in which this
constraint should be built
:parameter str solve_for: if 'requiv_max' should not be the derived/constrained
parameter, provide which other parameter should be derived
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 6.82493 | 6.03827 | 1.130279 |
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for fillout_factor requires hierarchy")
component_ps = _get_system_ps(b, component)
parentorbit = hier.get_parent_of(component)
parentorbit_ps = _get_system_ps(b, parentorbit)
pot = component_ps.get_parameter(qualifier='pot')
fillout_factor = component_ps.get_parameter(qualifier='fillout_factor')
q = parentorbit_ps.get_parameter(qualifier='q')
if solve_for in [None, fillout_factor]:
lhs = fillout_factor
rhs = roche_pot_to_fillout_factor(q, pot)
elif solve_for in [pot]:
lhs = pot
rhs = roche_fillout_factor_to_pot(q, fillout_factor)
else:
raise NotImplementedError("fillout_factor can not be solved for {}".format(solve_for))
return lhs, rhs, {'component': component} | def fillout_factor(b, component, solve_for=None, **kwargs) | Create a constraint to determine the fillout factor of a contact envelope.
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the star in which this
constraint should be built
:parameter str solve_for: if 'fillout_factor' should not be the derived/constrained
parameter, provide which other parameter should be derived
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 5.84677 | 5.218044 | 1.120491 |
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for rotation_period requires hierarchy")
component_ps = _get_system_ps(b, component)
parentorbit = hier.get_parent_of(component)
parentorbit_ps = _get_system_ps(b, parentorbit)
metawargs = component_ps.meta
metawargs.pop('qualifier')
period_star = b.get_parameter(qualifier='period', **metawargs)
syncpar_star = b.get_parameter(qualifier='syncpar', **metawargs)
metawargs = parentorbit_ps.meta
metawargs.pop('qualifier')
period_orbit = b.get_parameter(qualifier='period', **metawargs)
if solve_for in [None, period_star]:
lhs = period_star
rhs = period_orbit / syncpar_star
elif solve_for == syncpar_star:
lhs = syncpar_star
rhs = period_orbit / period_star
elif solve_for == period_orbit:
lhs = period_orbit
rhs = syncpar_star * period_star
else:
raise NotImplementedError
return lhs, rhs, {'component': component} | def rotation_period(b, component, solve_for=None, **kwargs) | Create a constraint for the rotation period of a star given its orbital
period and synchronicity parameters.
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the star in which this
constraint should be built
:parameter str solve_for: if 'period@star' should not be the derived/constrained
parameter, provide which other parameter should be derived
(ie 'syncpar@star', 'period@orbit')
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 4.807487 | 4.184138 | 1.148979 |
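The synchronicity parameter relates the two periods directly. A plain-float sketch (here `syncpar = period_orbit / period_star`):

```python
def rotation_period(period_orbit, syncpar):
    """Rotation period from orbital period and synchronicity."""
    return period_orbit / syncpar

# syncpar = 1 means synchronous rotation
assert abs(rotation_period(2.5, 1.0) - 2.5) < 1e-12
# syncpar = 2 means the star spins twice per orbit
assert abs(rotation_period(2.5, 2.0) - 1.25) < 1e-12
```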
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for pitch requires hierarchy")
component_ps = _get_system_ps(b, component)
parentorbit = hier.get_parent_of(component)
parentorbit_ps = _get_system_ps(b, parentorbit)
incl_comp = component_ps.get_parameter(qualifier='incl')
pitch_comp = component_ps.get_parameter(qualifier='pitch')
incl_orb = parentorbit_ps.get_parameter(qualifier='incl')
if solve_for in [None, incl_comp]:
lhs = incl_comp
rhs = incl_orb + pitch_comp
elif solve_for == incl_orb:
lhs = incl_orb
rhs = incl_comp - pitch_comp
elif solve_for == pitch_comp:
lhs = pitch_comp
rhs = incl_comp - incl_orb
else:
raise NotImplementedError
return lhs, rhs, {'component': component} | def pitch(b, component, solve_for=None, **kwargs) | Create a constraint for the inclination of a star relative to its parent orbit
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the star in which this
constraint should be built
:parameter str solve_for: if 'incl@star' should not be the derived/constrained
parameter, provide which other parameter should be derived
(ie 'incl@orbit', 'pitch@star')
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 4.518762 | 3.918657 | 1.153141 |
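The pitch constraint is a simple additive offset between the star's inclination and its orbit's (degrees here; the yaw constraint below has the identical form with `long_an` and `yaw`):

```python
def star_incl(incl_orbit, pitch):
    """A star's inclination is its orbit's inclination plus its pitch (misalignment)."""
    return incl_orbit + pitch

def pitch_from(incl_star, incl_orbit):
    """Inverse: the misalignment is the difference of the two inclinations."""
    return incl_star - incl_orbit

assert abs(pitch_from(star_incl(90.0, 5.0), 90.0) - 5.0) < 1e-12
```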
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for yaw requires hierarchy")
component_ps = _get_system_ps(b, component)
parentorbit = hier.get_parent_of(component)
parentorbit_ps = _get_system_ps(b, parentorbit)
long_an_comp = component_ps.get_parameter(qualifier='long_an')
yaw_comp = component_ps.get_parameter(qualifier='yaw')
long_an_orb = parentorbit_ps.get_parameter(qualifier='long_an')
if solve_for in [None, long_an_comp]:
lhs = long_an_comp
rhs = long_an_orb + yaw_comp
elif solve_for == long_an_orb:
lhs = long_an_orb
rhs = long_an_comp - yaw_comp
elif solve_for == yaw_comp:
lhs = yaw_comp
rhs = long_an_comp - long_an_orb
else:
raise NotImplementedError
return lhs, rhs, {'component': component} | def yaw(b, component, solve_for=None, **kwargs) | Create a constraint for the longitude of the ascending node of a star relative to its parent orbit
:parameter b: the :class:`phoebe.frontend.bundle.Bundle`
:parameter str component: the label of the star in which this
constraint should be built
:parameter str solve_for: if 'long_an@star' should not be the derived/constrained
parameter, provide which other parameter should be derived
(ie 'long_an@orbit', 'yaw@star')
:returns: lhs (Parameter), rhs (ConstraintParameter), args (list of arguments
that were passed to this function) | 4.241004 | 3.552803 | 1.193706 |
hier = b.get_hierarchy()
if not len(hier.get_value()):
# TODO: change to custom error type to catch in bundle.add_component
# TODO: check whether the problem is 0 hierarchies or more than 1
raise NotImplementedError("constraint for time_ecl requires hierarchy")
if component=='_default':
# need to do this so that the constraint won't fail before being copied
parentorbit = hier.get_top()
else:
parentorbit = hier.get_parent_of(component)
parentorbit_ps = _get_system_ps(b, parentorbit)
filterwargs = {}
if component is not None:
filterwargs['component'] = component
if dataset is not None:
filterwargs['dataset'] = dataset
time_ephem = b.get_parameter(qualifier='time_ephems', **filterwargs)
t0 = parentorbit_ps.get_parameter(qualifier='t0_supconj') # TODO: make sure t0_supconj makes sense here
period = parentorbit_ps.get_parameter(qualifier='period')
phshift = parentorbit_ps.get_parameter(qualifier='phshift')
dpdt = parentorbit_ps.get_parameter(qualifier='dpdt')
esinw_ = parentorbit_ps.get_parameter(qualifier='esinw')
N = b.get_parameter(qualifier='Ns', **filterwargs)
if solve_for in [None, time_ephem]:
# TODO: N is always an int, but we want to include the expected phase of eclipse (ie N+ph_ecl) based on which component and esinw/ecosw
# then we can have bundle.add_component automatically default to add all components instead of just the primary
# same as Bundle.to_time except phase can be > 1
lhs = time_ephem
# we have to do a trick here since dpdt is in sec/yr and floats are
# assumed to have the same unit during subtraction or addition.
one = 1.0*(u.s/u.s)
if component!='_default' and hier.get_primary_or_secondary(component)=='secondary':
# TODO: make sure this constraint updates if the hierarchy changes?
N = N + 0.5 + esinw_ # TODO: check this
rhs = t0 + ((N - phshift) * period) / (-1 * (N - phshift) * dpdt + one)
#rhs = (N-phshift)*period
else:
raise NotImplementedError
return lhs, rhs, {'component': component, 'dataset': dataset} | def time_ephem(b, component, dataset, solve_for=None, **kwargs) | use the ephemeris of component to predict the expected times of eclipse (used
in the ETV dataset) | 8.259723 | 8.130751 | 1.015862 |
time_ephem = b.get_parameter(qualifier='time_ephems', component=component, dataset=dataset, context=['dataset', 'model']) # need to provide context to avoid getting the constraint
time_ecl = b.get_parameter(qualifier='time_ecls', component=component, dataset=dataset)
etv = b.get_parameter(qualifier='etvs', component=component, dataset=dataset)
if solve_for in [None, etv]:
lhs = etv
rhs = time_ecl - time_ephem
else:
raise NotImplementedError
return lhs, rhs, {'component': component, 'dataset': dataset} | def etv(b, component, dataset, solve_for=None, **kwargs) | compute the ETV column from the time_ephem and time_ecl columns (used in the
ETV dataset) | 4.814931 | 3.873454 | 1.243059 |
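The ETV column is just the residual of observed eclipse times against the ephemeris-predicted times:

```python
def etvs(times_ecl, times_ephem):
    """Eclipse timing variations: observed minus ephemeris-predicted eclipse times."""
    return [ecl - eph for ecl, eph in zip(times_ecl, times_ephem)]

res = etvs([100.02, 110.01], [100.0, 110.0])
assert all(abs(r - e) < 1e-9 for r, e in zip(res, [0.02, 0.01]))
```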
if not _has_astropy:
raise ImportError("astropy must be installed for unit support")
if isinstance(value, (units.Unit, units.IrreducibleUnit, units.CompositeUnit)):
return True, value
else:
return False, value | def is_unit(value) | must be an astropy unit | 3.926538 | 3.251413 | 1.207641 |
if is_unit(value)[0]:
return True, value
try:
unit = units.Unit(value)
except:
return False, value
else:
return True, unit | def is_unit_or_unitstring(value) | must be an astropy.unit | 3.848918 | 3.661535 | 1.051176 |
return isinstance(value, float) or isinstance(value, int) or isinstance(value, np.float64), float(value) | def is_float(value) | must be a float | 3.62821 | 3.39231 | 1.06954 |
if is_int_positive(value):
return True, value
elif isinstance(value, tuple) or isinstance(value, list):
for v in value:
if not is_int_positive(v):
return False, value
return True, value
else:
return False, value | def is_valid_shape(value) | must be a positive integer or a tuple/list of positive integers | 2.519523 | 2.024045 | 1.244796 |
return isinstance(value, np.ndarray) or isinstance(value, list) or isinstance(value, tuple), value | def is_iterable(value) | must be an iterable (list, array, tuple) | 4.444094 | 4.018317 | 1.105959 |
if not _has_astropy:
raise ImportError("astropy must be installed for unit/quantity support")
if self.unit is None:
raise ValueError("unit is not set, cannot convert to quantity")
return self.array * self.unit | def quantity(self) | return the underlying astropy quantity (if astropy is installed) | 6.266922 | 4.435885 | 1.412778 |
if not _has_astropy:
raise ImportError("astropy must be installed for unit/quantity support")
if self.unit is None:
raise ValueError("no units currently set")
if not is_unit_or_unitstring(unit)[0]:
raise ValueError("unit not recognized")
mult_factor = self.unit.to(unit)
copy = self.copy() * mult_factor
copy.unit = unit
return copy | def to(self, unit) | convert between units. Returns a new nparray object with the new units | 5.667968 | 5.321558 | 1.065096 |
def _json_safe(v):
if isinstance(v, np.ndarray):
return v.tolist()
elif is_unit(v)[0]:
return v.to_string()
else:
return v
d = {k:_json_safe(v) for k,v in self._descriptors.items()}
d['nparray'] = self.__class__.__name__.lower()
return d | def to_dict(self) | dump a representation of the nparray object to a dictionary. The
nparray object should then be able to be fully restored via
nparray.from_dict | 4.285115 | 3.861347 | 1.109746 |
f = open(filename, 'w')
f.write(self.to_json(**kwargs))
f.close()
return filename | def to_file(self, filename, **kwargs) | dump a representation of the nparray object to a json-formatted file.
The nparray object should then be able to be fully restored via
nparray.from_file
@parameter str filename: path to the file to be created (will overwrite
if already exists)
@rtype: str
@returns: the filename | 2.702095 | 3.940628 | 0.685702 |
return np.arange(self.start, self.stop, self.step) | def array(self) | return the underlying numpy array | 5.424076 | 3.869634 | 1.401703 |
num = int((self.stop-self.start)/(self.step))
return Linspace(self.start, self.stop-self.step, num) | def to_linspace(self) | convert from arange to linspace | 3.941895 | 3.369635 | 1.169829 |
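The arange-to-linspace conversion above works because `np.arange(start, stop, step)` spans the half-open interval while `np.linspace` includes both endpoints, so the last linspace endpoint must be pulled in by one step. A minimal stdlib-only sketch of the equivalence (the helper names here are illustrative, not part of nparray):

```python
def arange_list(start, stop, step):
    # half-open interval [start, stop), like numpy.arange
    out = []
    x = start
    while x < stop - 1e-12:  # small epsilon guards against float drift
        out.append(x)
        x += step
    return out

def linspace_list(start, stop, num):
    # num evenly spaced samples, endpoints inclusive, like numpy.linspace
    if num == 1:
        return [start]
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]

# to_linspace maps Arange(start, stop, step) -> Linspace(start, stop - step, num)
start, stop, step = 0.0, 1.0, 0.25
num = int((stop - start) / step)
a = arange_list(start, stop, step)
l = linspace_list(start, stop - step, num)
```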
return np.linspace(self.start, self.stop, self.num, self.endpoint) | def array(self) | return the underlying numpy array | 5.479082 | 4.292573 | 1.27641 |
arr, step = np.linspace(self.start, self.stop, self.num, self.endpoint, retstep=True)
return Arange(self.start, self.stop+step, step) | def to_arange(self) | convert from linspace to arange | 4.487565 | 4.029819 | 1.11359 |
return np.logspace(self.start, self.stop, self.num, self.endpoint, self.base) | def array(self) | return the underlying numpy array | 4.757774 | 3.574068 | 1.331193 |
return np.geomspace(self.start, self.stop, self.num, self.endpoint) | def array(self) | return the underlying numpy array | 5.446166 | 4.089615 | 1.331706 |
if hasattr(self.shape, '__len__'):
raise NotImplementedError("can only convert flat Full arrays to linspace")
return Linspace(self.fill_value, self.fill_value, self.shape) | def to_linspace(self) | convert from full to linspace | 8.113259 | 6.475705 | 1.252877 |
return np.eye(self.M, self.N, self.k) | def array(self) | return the underlying numpy array | 10.281085 | 7.077321 | 1.45268 |
def projected_separation_sq(time, b, dynamics_method, cind1, cind2, ltte=True):
#print "*** projected_separation_sq", time, dynamics_method, cind1, cind2, ltte
times = np.array([time])
if dynamics_method in ['nbody', 'rebound']:
# TODO: make sure that this takes systemic velocity and corrects positions and velocities (including ltte effects if enabled)
ts, xs, ys, zs, vxs, vys, vzs = dynamics.nbody.dynamics_from_bundle(b, times, compute=None, ltte=ltte)
elif dynamics_method=='bs':
ts, xs, ys, zs, vxs, vys, vzs = dynamics.nbody.dynamics_from_bundle_bs(b, times, compute=None, ltte=ltte)
elif dynamics_method=='keplerian':
# TODO: make sure that this takes systemic velocity and corrects positions and velocities (including ltte effects if enabled)
ts, xs, ys, zs, vxs, vys, vzs = dynamics.keplerian.dynamics_from_bundle(b, times, compute=None, ltte=ltte, return_euler=False)
else:
raise NotImplementedError
return (xs[cind2][0]-xs[cind1][0])**2 + (ys[cind2][0]-ys[cind1][0])**2
# TODO: optimize this by allowing to pass cind1 and cind2 directly (and fallback to this if they aren't)
starrefs = b.hierarchy.get_stars()
cind1 = starrefs.index(component)
cind2 = starrefs.index(b.hierarchy.get_sibling_of(component))
# TODO: provide options for tol and maxiter (in the frontend compute options)?
return newton(projected_separation_sq, x0=time, args=(b, dynamics_method, cind1, cind2, ltte), tol=tol, maxiter=maxiter) | def crossing(b, component, time, dynamics_method='keplerian', ltte=True, tol=1e-4, maxiter=1000) | tol in days | 4.024859 | 4.063904 | 0.990392 |
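`crossing` hands `projected_separation_sq` to scipy's `newton` root finder with the requested `tol` and `maxiter`. A minimal sketch of the underlying Newton-Raphson iteration on a toy function (the function and starting point here are illustrative only, not the actual separation function):

```python
def newton(f, x0, fprime, tol=1e-12, maxiter=100):
    # basic Newton-Raphson iteration, a stand-in for scipy.optimize.newton
    x = x0
    for _ in range(maxiter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# find the root of f(t) = t^2 - 2 starting from t = 1
root = newton(lambda t: t * t - 2.0, 1.0, lambda t: 2.0 * t)
```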
return requiv_L1(q=q, syncpar=1, ecc=0, sma=sma, incl_star=0, long_an_star=0, incl_orb=0, long_an_orb=0, compno=compno, **kwargs) | def requiv_contact_L1(q, sma, compno, **kwargs) | for the contact case we can make the assumption of aligned, synchronous, and circular | 6.05026 | 5.852532 | 1.033785 |
logger.debug("requiv_contact_L23(q={}, sma={}, compno={})".format(q, sma, compno))
crit_pot_L23 = potential_contact_L23(q)
logger.debug("libphoebe.roche_contact_neck_min(phi=pi/2, q={}, d=1., crit_pot_L23={})".format(q, crit_pot_L23))
nekmin = libphoebe.roche_contact_neck_min(np.pi/2., q, 1., crit_pot_L23)['xmin']
# we now have the critical potential and nekmin as if we were the primary star, so now we'll use compno=0 regardless
logger.debug("libphoebe.roche_contact_partial_area_volume(nekmin={}, q={}, d=1, Omega={}, compno=0)".format(nekmin, q, crit_pot_L23))
crit_vol_L23 = libphoebe.roche_contact_partial_area_volume(nekmin, q, 1., crit_pot_L23, compno-1)['lvolume']
logger.debug("resulting vol: {}, requiv: {}".format(crit_vol_L23, (3./4*1./np.pi*crit_vol_L23)**(1./3) * sma))
return (3./4*1./np.pi*crit_vol_L23)**(1./3) * sma | def requiv_contact_L23(q, sma, compno, **kwargs) | for the contact case we can make the assumption of aligned, synchronous, and circular | 4.15801 | 4.105469 | 1.012798 |
ups_sc = np.pi/2-per0
E_sc = 2*np.arctan( np.sqrt((1-ecc)/(1+ecc)) * np.tan(ups_sc/2) )
M_sc = E_sc - ecc*np.sin(E_sc)
return period*(M_sc/2./np.pi) | def _delta_t_supconj_perpass(period, ecc, per0) | time shift between superior conjunction and periastron passage | 3.710613 | 3.518254 | 1.054675 |
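The row above converts the true anomaly at superior conjunction to a mean anomaly via the eccentric anomaly and Kepler's equation, then scales by the period. A self-contained restatement, useful for checking limiting cases (for a circular orbit with per0 = pi/2, conjunction and periastron coincide and the shift is zero):

```python
import math

def delta_t_supconj_perpass(period, ecc, per0):
    # true anomaly at superior conjunction
    ups_sc = math.pi / 2 - per0
    # eccentric anomaly from the true anomaly
    E_sc = 2 * math.atan(math.sqrt((1 - ecc) / (1 + ecc)) * math.tan(ups_sc / 2))
    # mean anomaly via Kepler's equation
    M_sc = E_sc - ecc * math.sin(E_sc)
    return period * M_sc / (2 * math.pi)
```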
logger.debug("requiv_to_pot_contact(requiv={}, q={}, sma={}, compno={})".format(requiv, q, sma, compno))
# since the functions called here work with normalized r, we need to set d=D=sma=1.
# or provide sma as a function parameter and normalize r here as requiv = requiv/sma
requiv = requiv/sma
vequiv = 4./3*np.pi*requiv**3
d = 1.
F = 1.
logger.debug("libphoebe.roche_contact_Omega_at_partial_vol(vol={}, phi=pi/2, q={}, d={}, choice={})".format(vequiv, q, d, compno-1))
return libphoebe.roche_contact_Omega_at_partial_vol(vequiv, np.pi/2, q, d, choice=compno-1) | def requiv_to_pot_contact(requiv, q, sma, compno=1) | :param requiv: user-provided equivalent radius
:param q: mass ratio
:param sma: semi-major axis (d = sma because we explicitly assume circular orbits for contacts)
:param compno: 1 for primary, 2 for secondary
:return: potential and fillout factor | 6.811398 | 6.485942 | 1.050179 |
if isinstance(d, str):
return from_json(d)
if not isinstance(d, dict):
raise TypeError("argument must be of type dict")
if 'nparray' not in d.keys():
raise ValueError("input dictionary missing 'nparray' entry")
classname = d.pop('nparray').title()
return getattr(_wrappers, classname)(**d) | def from_dict(d) | load an nparray object from a dictionary
@parameter str d: dictionary representing the nparray object | 4.347681 | 3.99868 | 1.087279 |
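`from_dict` restores an nparray object by popping the stored `'nparray'` entry, title-casing it into a class name, and passing the remaining keys as constructor kwargs. A minimal sketch of that dispatch pattern (the `Linspace` stand-in and `_classes` registry here are illustrative, not the real `_wrappers` module):

```python
import json

class Linspace:
    # minimal stand-in for the real nparray Linspace class
    def __init__(self, start, stop, num):
        self.start, self.stop, self.num = start, stop, num

_classes = {'Linspace': Linspace}

def from_dict(d):
    # pop the stored class name, title-case it, and dispatch;
    # the remaining keys become constructor keyword arguments
    d = dict(d)  # don't mutate the caller's dict
    classname = d.pop('nparray').title()
    return _classes[classname](**d)

obj = from_dict(json.loads('{"nparray": "linspace", "start": 0.0, "stop": 1.0, "num": 5}'))
```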
if isinstance(j, dict):
return from_dict(j)
if not (isinstance(j, str) or isinstance(j, unicode)):
raise TypeError("argument must be of type str")
return from_dict(json.loads(j)) | def from_json(j) | load an nparray object from a json-formatted string
@parameter str j: json-formatted string | 3.129874 | 3.190969 | 0.980854 |
f = open(filename, 'r')
j = json.load(f)
f.close()
return from_dict(j) | def from_file(filename) | load an nparray object from a json filename
@parameter str filename: path to the file | 2.812844 | 3.619953 | 0.777039 |
np.array = array
np.arange = arange
np.linspace = linspace
np.logspace = logspace
np.geomspace = geomspace
np.full = full
np.full_like = full_like
np.zeros = zeros
np.zeros_like = zeros_like
np.ones = ones
np.ones_like = ones_like
np.eye = eye | def monkeypatch() | monkeypatch built-in numpy functions to call those provided by nparray instead. | 1.950135 | 1.715607 | 1.136703 |
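`monkeypatch` simply rebinds attributes on the imported `numpy` module object, so every later `np.arange(...)` call goes through the nparray wrapper. A tiny illustration of the same rebinding pattern on a throwaway module (the module and function names here are hypothetical):

```python
import types

# a tiny stand-in module (hypothetical; the real code patches numpy itself)
fakemod = types.ModuleType("fakemod")
fakemod.double = lambda x: 2 * x

_orig_double = fakemod.double

def counting_double(x):
    # wrapper that records calls, then delegates to the original
    counting_double.calls += 1
    return _orig_double(x)
counting_double.calls = 0

def monkeypatch():
    # rebind the module attribute, exactly as monkeypatch() above
    # rebinds np.array, np.arange, etc.
    fakemod.double = counting_double

monkeypatch()
result = fakemod.double(21)
```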
global _initialized
if not _initialized or refresh:
# load information from online passbands first so that any that are
# available locally will override
online_passbands = list_online_passbands(full_dict=True, refresh=refresh)
for pb, info in online_passbands.items():
_pbtable[pb] = {'fname': None, 'atms': info['atms'], 'pb': None}
# load global passbands (in install directory) next and then local
# (in .phoebe directory) second so that local passbands override
# global passbands whenever there is a name conflict
for path in [_pbdir_global, _pbdir_local]:
for f in os.listdir(path):
if f=='README':
continue
init_passband(path+f)
#Check if _pbdir_env has been set and load those passbands too
if _pbdir_env is not None:
for path in [_pbdir_env]:
for f in os.listdir(path):
if f=='README':
continue
init_passband(path+f)
_initialized = True | def init_passbands(refresh=False) | This function should be called only once, at import time. It
traverses the passbands directory and builds a lookup table of
passband names qualified as 'pbset:pbname' and corresponding files
and atmosphere content within. | 5.858336 | 5.547919 | 1.055952 |
pbdir = _pbdir_local if local else _pbdir_global
shutil.copy(fname, pbdir)
init_passband(os.path.join(pbdir, os.path.basename(fname)))
install path - but beware that clearing the installation will clear the
passband as well
If local=False, you must have permissions to access the installation directory | 4.950454 | 5.890771 | 0.840375 |
pbdir = _pbdir_local if local else _pbdir_global
for f in os.listdir(pbdir):
pbpath = os.path.join(pbdir, f)
logger.warning("deleting file: {}".format(pbpath))
os.remove(pbpath) | def uninstall_all_passbands(local=True) | Uninstall all passbands, either globally or locally (need to call twice to
delete ALL passbands)
If local=False, you must have permission to access the installation directory | 3.146516 | 3.622504 | 0.868602 |
if passband not in list_online_passbands():
raise ValueError("passband '{}' not available".format(passband))
pbdir = _pbdir_local if local else _pbdir_global
passband_fname = _online_passbands[passband]['fname']
passband_fname_local = os.path.join(pbdir, passband_fname)
url = 'http://github.com/phoebe-project/phoebe2-tables/raw/master/passbands/{}'.format(passband_fname)
logger.info("downloading from {} and installing to {}...".format(url, passband_fname_local))
try:
urllib.urlretrieve(url, passband_fname_local)
except IOError:
raise IOError("unable to download {} passband - check connection".format(passband))
else:
init_passband(passband_fname_local) | def download_passband(passband, local=True) | Download and install a given passband from the repository.
If local=False, you must have permission to access the installation directory | 3.252284 | 3.253057 | 0.999762 |
if atm != 'blackbody':
raise ValueError('atmosphere must be set to blackbody for Inorm_bol_bb.')
if photon_weighted:
factor = 2.6814126821264836e22/Teff
else:
factor = 1.0
# convert scalars to vectors if necessary:
if not hasattr(Teff, '__iter__'):
Teff = np.array((Teff,))
return factor * sigma_sb.value * Teff**4 / np.pi | def Inorm_bol_bb(Teff=5772., logg=4.43, abun=0.0, atm='blackbody', photon_weighted=False) | @Teff: value or array of effective temperatures
@logg: surface gravity; not used, for class compatibility only
@abun: abundances; not used, for class compatibility only
@atm: atmosphere model, must be blackbody, otherwise exception is raised
@photon_weighted: intensity weighting scheme; must be False, otherwise exception is raised
Computes normal bolometric intensity using the Stefan-Boltzmann law,
Inorm_bol_bb = 1/\pi \sigma T^4. If photon-weighted intensity is
requested, Inorm_bol_bb is multiplied by a conversion factor that
comes from integrating lambda/hc P(lambda) over all lambda.
Input parameters mimic the Passband class Inorm method for calling
convenience. | 4.741347 | 4.7904 | 0.98976 |
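The energy-weighted branch above is just the Stefan-Boltzmann law divided by pi. A minimal sketch for a single temperature (the constant value is the CODATA Stefan-Boltzmann constant; the function name is illustrative):

```python
import math

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def inorm_bol_bb(teff):
    # normal bolometric blackbody intensity, I = sigma * T^4 / pi
    return SIGMA_SB * teff**4 / math.pi
```

For the solar effective temperature of 5772 K this gives roughly 2.0e7 W m^-2 sr^-1.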
return 2*self.h*self.c*self.c/lam**5 * 1./(np.exp(self.h*self.c/lam/self.k/Teff)-1) | def _planck(self, lam, Teff) | Computes monochromatic blackbody intensity in W/m^3 using the
Planck function.
@lam: wavelength in m
@Teff: effective temperature in K
Returns: monochromatic blackbody intensity | 4.038856 | 4.609552 | 0.876193 |
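The Planck function in `_planck` can be reproduced with stdlib constants only (CODATA values for h, c, and k; the wrapper name is illustrative):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m s^-1
K = 1.380649e-23     # Boltzmann constant, J K^-1

def planck(lam, teff):
    # monochromatic blackbody intensity, B_lambda = 2hc^2/lam^5 / (exp(hc/(lam k T)) - 1),
    # in W m^-3 sr^-1 (per unit wavelength)
    return 2 * H * C * C / lam**5 / (math.exp(H * C / (lam * K * teff)) - 1)
```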
expterm = np.exp(self.h*self.c/lam/self.k/Teff)
return 2*self.h*self.c*self.c/self.k/Teff/lam**7 * (expterm-1)**-2 * (self.h*self.c*expterm-5*lam*self.k*Teff*(expterm-1)) | def _planck_deriv(self, lam, Teff) | Computes the derivative of the monochromatic blackbody intensity using
the Planck function.
@lam: wavelength in m
@Teff: effective temperature in K
Returns: the derivative of monochromatic blackbody intensity | 4.343646 | 4.484893 | 0.968506 |
hclkt = self.h*self.c/lam/self.k/Teff
expterm = np.exp(hclkt)
return hclkt * expterm/(expterm-1) | def _planck_spi(self, lam, Teff) | Computes the spectral index of the monochromatic blackbody intensity
using the Planck function. The spectral index is defined as:
B(lambda) = 5 + d(log I)/d(log lambda),
where I is the Planck function.
@lam: wavelength in m
@Teff: effective temperature in K
Returns: the spectral index of monochromatic blackbody intensity | 8.040573 | 9.607808 | 0.836879 |
if photon_weighted:
pb = lambda w: w*self._planck(w, Teff)*self.ptf(w)
return integrate.quad(pb, self.wl[0], self.wl[-1])[0]/self.ptf_photon_area
else:
pb = lambda w: self._planck(w, Teff)*self.ptf(w)
return integrate.quad(pb, self.wl[0], self.wl[-1])[0]/self.ptf_area | def _bb_intensity(self, Teff, photon_weighted=False) | Computes mean passband intensity using blackbody atmosphere:
I_pb^E = \int_\lambda I(\lambda) P(\lambda) d\lambda / \int_\lambda P(\lambda) d\lambda
I_pb^P = \int_\lambda \lambda I(\lambda) P(\lambda) d\lambda / \int_\lambda \lambda P(\lambda) d\lambda
Superscripts E and P stand for energy and photon, respectively.
@Teff: effective temperature in K
@photon_weighted: photon/energy switch
Returns: mean passband intensity using blackbody atmosphere. | 2.770447 | 2.926516 | 0.946671 |
if photon_weighted:
num = lambda w: w*self._planck(w, Teff)*self.ptf(w)*self._planck_spi(w, Teff)
denom = lambda w: w*self._planck(w, Teff)*self.ptf(w)
return integrate.quad(num, self.wl[0], self.wl[-1], epsabs=1e10, epsrel=1e-8)[0]/integrate.quad(denom, self.wl[0], self.wl[-1], epsabs=1e10, epsrel=1e-6)[0]
else:
num = lambda w: self._planck(w, Teff)*self.ptf(w)*self._planck_spi(w, Teff)
denom = lambda w: self._planck(w, Teff)*self.ptf(w)
return integrate.quad(num, self.wl[0], self.wl[-1], epsabs=1e10, epsrel=1e-8)[0]/integrate.quad(denom, self.wl[0], self.wl[-1], epsabs=1e10, epsrel=1e-6)[0] | def _bindex_blackbody(self, Teff, photon_weighted=False) | Computes the mean boosting index using blackbody atmosphere:
B_pb^E = \int_\lambda I(\lambda) P(\lambda) B(\lambda) d\lambda / \int_\lambda I(\lambda) P(\lambda) d\lambda
B_pb^P = \int_\lambda \lambda I(\lambda) P(\lambda) B(\lambda) d\lambda / \int_\lambda \lambda I(\lambda) P(\lambda) d\lambda
Superscripts E and P stand for energy and photon, respectively.
@Teff: effective temperature in K
@photon_weighted: photon/energy switch
Returns: mean boosting index using blackbody atmosphere. | 1.740601 | 1.771309 | 0.982664 |
if Teffs is None:
log10Teffs = np.linspace(2.5, 5.7, 97) # this corresponds to the 316K-501187K range.
Teffs = 10**log10Teffs
# Energy-weighted intensities:
log10ints_energy = np.array([np.log10(self._bb_intensity(Teff, photon_weighted=False)) for Teff in Teffs])
self._bb_func_energy = interpolate.splrep(Teffs, log10ints_energy, s=0)
self._log10_Inorm_bb_energy = lambda Teff: interpolate.splev(Teff, self._bb_func_energy)
# Photon-weighted intensities:
log10ints_photon = np.array([np.log10(self._bb_intensity(Teff, photon_weighted=True)) for Teff in Teffs])
self._bb_func_photon = interpolate.splrep(Teffs, log10ints_photon, s=0)
self._log10_Inorm_bb_photon = lambda Teff: interpolate.splev(Teff, self._bb_func_photon)
self.content.append('blackbody')
self.atmlist.append('blackbody') | def compute_blackbody_response(self, Teffs=None) | Computes blackbody intensities across the entire range of
effective temperatures. It does this for two regimes, energy-weighted
and photon-weighted. It then fits a cubic spline to the log(I)-Teff
values and exports the interpolation functions _log10_Inorm_bb_energy
and _log10_Inorm_bb_photon.
@Teffs: an array of effective temperatures. If None, a default
array from ~300K to ~500000K with 97 steps is used. The default
array is uniform in log10 scale.
Returns: n/a | 2.854068 | 2.235555 | 1.276671 |
if photon_weighted:
grid = self._ck2004_ld_photon_grid
else:
grid = self._ck2004_ld_energy_grid
if filename is not None:
import time
f = open(filename, 'w')
f.write('# PASS_SET %s\n' % self.pbset)
f.write('# PASSBAND %s\n' % self.pbname)
f.write('# VERSION 1.0\n\n')
f.write('# Exported from PHOEBE-2 passband on %s\n' % (time.ctime()))
f.write('# The coefficients are computed for the %s-weighted regime.\n\n' % ('photon' if photon_weighted else 'energy'))
mods = np.loadtxt(models)
for mod in mods:
Tindex = np.argwhere(self._ck2004_intensity_axes[0] == mod[0])[0][0]
lindex = np.argwhere(self._ck2004_intensity_axes[1] == mod[1]/10)[0][0]
mindex = np.argwhere(self._ck2004_intensity_axes[2] == mod[2]/10)[0][0]
if filename is None:
print('%6.3f '*11 % tuple(grid[Tindex, lindex, mindex].tolist()))
else:
f.write(('%6.3f '*11+'\n') % tuple(grid[Tindex, lindex, mindex].tolist()))
if filename is not None:
f.close() | def export_legacy_ldcoeffs(self, models, filename=None, photon_weighted=True) | @models: the path (including the filename) of legacy's models.list
@filename: output filename for storing the table
Exports CK2004 limb darkening coefficients to a PHOEBE legacy
compatible format. | 3.270911 | 3.081973 | 1.061304 |
if 'ck2004_all' not in self.content:
print('Castelli & Kurucz (2004) intensities are not computed yet. Please compute those first.')
return None
ldaxes = self._ck2004_intensity_axes
ldtable = self._ck2004_Imu_energy_grid
pldtable = self._ck2004_Imu_photon_grid
self._ck2004_ldint_energy_grid = np.nan*np.ones((len(ldaxes[0]), len(ldaxes[1]), len(ldaxes[2]), 1))
self._ck2004_ldint_photon_grid = np.nan*np.ones((len(ldaxes[0]), len(ldaxes[1]), len(ldaxes[2]), 1))
mu = ldaxes[3]
Imu = 10**ldtable[:,:,:,:]/10**ldtable[:,:,:,-1:]
pImu = 10**pldtable[:,:,:,:]/10**pldtable[:,:,:,-1:]
# To compute the fluxes, we need to evaluate \int_0^1 2pi Imu mu dmu.
for a in range(len(ldaxes[0])):
for b in range(len(ldaxes[1])):
for c in range(len(ldaxes[2])):
ldint = 0.0
pldint = 0.0
for i in range(len(mu)-1):
ki = (Imu[a,b,c,i+1]-Imu[a,b,c,i])/(mu[i+1]-mu[i])
ni = Imu[a,b,c,i]-ki*mu[i]
ldint += ki/3*(mu[i+1]**3-mu[i]**3) + ni/2*(mu[i+1]**2-mu[i]**2)
pki = (pImu[a,b,c,i+1]-pImu[a,b,c,i])/(mu[i+1]-mu[i])
pni = pImu[a,b,c,i]-pki*mu[i]
pldint += pki/3*(mu[i+1]**3-mu[i]**3) + pni/2*(mu[i+1]**2-mu[i]**2)
self._ck2004_ldint_energy_grid[a,b,c] = 2*ldint
self._ck2004_ldint_photon_grid[a,b,c] = 2*pldint
self.content.append('ck2004_ldint') | def compute_ck2004_ldints(self) | Computes integrated limb darkening profiles for ck2004 atmospheres.
These are used for intensity-to-flux transformations. The evaluated
integral is:
ldint = 2 \pi \int_0^1 Imu mu dmu | 2.341752 | 2.206024 | 1.061526 |
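The inner loop above evaluates 2 * \int_0^1 I(mu) mu dmu exactly for an I(mu) that is taken piecewise linear between the tabulated mu nodes: each segment contributes k/3 (mu2^3 - mu1^3) + n/2 (mu2^2 - mu1^2) with slope k and intercept n. A self-contained restatement, checked against a linear limb-darkening law where the integral has the closed form 1 - u/3:

```python
def ldint(mu, Imu):
    # 2 * \int_0^1 I(mu) mu dmu, with I(mu) piecewise linear
    # between the tabulated mu nodes (same scheme as the loop above)
    total = 0.0
    for i in range(len(mu) - 1):
        k = (Imu[i+1] - Imu[i]) / (mu[i+1] - mu[i])
        n = Imu[i] - k * mu[i]
        total += k/3 * (mu[i+1]**3 - mu[i]**3) + n/2 * (mu[i+1]**2 - mu[i]**2)
    return 2.0 * total

# linear limb-darkening law I(mu) = 1 - u*(1 - mu): analytically ldint = 1 - u/3
u = 0.6
mu = [0.0, 0.3, 0.7, 1.0]
Imu = [1 - u * (1 - m) for m in mu]
```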
if 'ck2004_ld' not in self.content:
print('Castelli & Kurucz (2004) limb darkening coefficients are not computed yet. Please compute those first.')
return None
if photon_weighted:
table = self._ck2004_ld_photon_grid
else:
table = self._ck2004_ld_energy_grid
if not hasattr(Teff, '__iter__'):
req = np.array(((Teff, logg, abun),))
ld_coeffs = libphoebe.interp(req, self._ck2004_intensity_axes[0:3], table)[0]
else:
req = np.vstack((Teff, logg, abun)).T
ld_coeffs = libphoebe.interp(req, self._ck2004_intensity_axes[0:3], table).T
if ld_func == 'linear':
return ld_coeffs[0:1]
elif ld_func == 'logarithmic':
return ld_coeffs[1:3]
elif ld_func == 'square_root':
return ld_coeffs[3:5]
elif ld_func == 'quadratic':
return ld_coeffs[5:7]
elif ld_func == 'power':
return ld_coeffs[7:11]
elif ld_func == 'all':
return ld_coeffs
else:
print('ld_func=%s is invalid; please choose from [linear, logarithmic, square_root, quadratic, power, all].' % ld_func)
return None | def interpolate_ck2004_ldcoeffs(self, Teff=5772., logg=4.43, abun=0.0, atm='ck2004', ld_func='power', photon_weighted=False) | Interpolate the passband-stored table of LD model coefficients. | 2.645661 | 2.680213 | 0.987108 |
# Initialize the external atmcof module if necessary:
# PERHAPS WD_DATA SHOULD BE GLOBAL??
self.wd_data = libphoebe.wd_readdata(plfile, atmfile)
# That is all that was necessary for *_extern_planckint() and
# *_extern_atmx() functions. However, we also want to support
# circumventing WD subroutines and use WD tables directly. For
# that, we need to do a bit more work.
# Store the passband index for use in planckint() and atmx():
self.extern_wd_idx = wdidx
# Break up the table along axes and extract a single passband data:
atmtab = np.reshape(self.wd_data['atm_table'], (Nabun, Npb, Nlogg, Nints, -1))
atmtab = atmtab[:, wdidx, :, :, :]
# Finally, reverse the metallicity axis because it is sorted in
# reverse order in atmcof:
self.extern_wd_atmx = atmtab[::-1, :, :, :]
self.content += ['extern_planckint', 'extern_atmx']
self.atmlist += ['extern_planckint', 'extern_atmx'] | def import_wd_atmcof(self, plfile, atmfile, wdidx, Nabun=19, Nlogg=11, Npb=25, Nints=4) | Parses WD's atmcof and reads in all Legendre polynomials for the
given passband.
@plfile: path and filename of atmcofplanck.dat
@atmfile: path and filename of atmcof.dat
@wdidx: WD index of the passed passband. This can be automated
but it's not a high priority.
@Nabun: number of metallicity nodes in atmcof.dat. For the 2003 version
the number of nodes is 19.
@Nlogg: number of logg nodes in atmcof.dat. For the 2003 version
the number of nodes is 11.
@Npb: number of passbands in atmcof.dat. For the 2003 version
the number of passbands is 25.
@Nints: number of temperature intervals (input lines) per entry.
For the 2003 version the number of lines is 4. | 8.548277 | 8.324638 | 1.026865 |
log10_Inorm = libphoebe.wd_planckint(Teff, self.extern_wd_idx, self.wd_data["planck_table"])
return log10_Inorm | def _log10_Inorm_extern_planckint(self, Teff) | Internal function to compute normal passband intensities using
the external WD machinery that employs blackbody approximation.
@Teff: effective temperature in K
Returns: log10(Inorm) | 12.015922 | 11.944935 | 1.005943 |
log10_Inorm = libphoebe.wd_atmint(Teff, logg, abun, self.extern_wd_idx, self.wd_data["planck_table"], self.wd_data["atm_table"])
return log10_Inorm | def _log10_Inorm_extern_atmx(self, Teff, logg, abun) | Internal function to compute normal passband intensities using
the external WD machinery that employs model atmospheres and
ramps.
@Teff: effective temperature in K
@logg: surface gravity in cgs
@abun: metallicity in dex, Solar=0.0
Returns: log10(Inorm) | 9.136605 | 9.443345 | 0.967518 |
self.keyspace = keyspace
dfrds = []
for p in self._protos:
dfrds.append(p.submitRequest(ManagedThriftRequest(
'set_keyspace', keyspace)))
return defer.gatherResults(dfrds) | def set_keyspace(self, keyspace) | switch all connections to another keyspace | 6.200953 | 5.795553 | 1.06995 |
dfrds = []
for p in self._protos:
dfrds.append(p.submitRequest(ManagedThriftRequest('login',
ttypes.AuthenticationRequest(credentials=credentials))))
return defer.gatherResults(dfrds) | def login(self, credentials) | authenticate all connections | 9.726183 | 8.699663 | 1.117995 |
A = csc_matrix(A)
if self.prop == self.SYMMETRIC:
A = (A + A.T) - triu(A)
self.lu = self.umfpack.splu(A) | def factorize(self, A) | Factorizes A.
Parameters
----------
A : matrix
For symmetric systems, should contain only lower diagonal part. | 7.663726 | 8.279189 | 0.925661 |
if self.connector is None:
raise ValueError("No connector to retry")
if self.service is None:
return
self.connector.connect() | def retry(self) | Retry this factory's connection. It is assumed that a previous
connection was attempted and failed- either before or after a
successful connection. | 8.623503 | 7.255579 | 1.188534 |
d = defer.succeed(None)
if creds is not None:
d.addCallback(lambda _: self.my_login(creds))
if keyspace is not None:
d.addCallback(lambda _: self.my_set_keyspace(keyspace))
if node_auto_discovery:
d.addCallback(lambda _: self.my_describe_ring(keyspace))
return d | def prep_connection(self, creds=None, keyspace=None, node_auto_discovery=True) | Do login and set_keyspace tasks as necessary, and also check this
node's idea of the Cassandra ring. Expects that our connection is
alive.
Return a Deferred that will fire with the ring information, or be
errbacked if something goes wrong. | 2.354348 | 2.322767 | 1.013597 |
d = self.my_describe_keyspaces()
def pick_non_system(klist):
for k in klist:
if k.name not in SYSTEM_KEYSPACES:
return k.name
err = NoKeyspacesAvailable("Can't gather information about the "
"Cassandra ring; no non-system "
"keyspaces available")
warn(err)
raise err
d.addCallback(pick_non_system)
return d | def my_pick_non_system_keyspace(self) | Find a keyspace in the cluster which is not 'system', for the purpose
of getting a valid ring view. Can't use 'system' or null. | 5.241589 | 5.023591 | 1.043395 |
self.logstate('finish_and_die')
self.stop_working_on_queue()
if self.jobphase != 'pending_request':
self.stopFactory() | def finish_and_die(self) | If there is a request pending, let it finish and be handled, then
disconnect and die. If not, cancel any pending queue requests and
just die. | 15.65671 | 13.026788 | 1.201886 |
# TODO: this should ideally take node history into account
conntime = node.seconds_until_connect_ok()
if conntime > 0:
self.log("not considering %r for new connection; has %r left on "
"connect blackout" % (node, conntime))
return -conntime
numconns = self.num_connectors_to(node)
if numconns >= self.max_connections_per_node:
return float('-Inf')
return sys.maxint - numconns | def add_connection_score(self, node) | Return a numeric value that determines this node's score for adding
a new connection. A negative value indicates that no connections
should be made to this node for at least that number of seconds.
A value of -inf indicates no connections should be made to this
node for the foreseeable future.
This score should ideally take into account the connectedness of
available nodes, so that those with less current connections will
get more. | 8.342038 | 6.769269 | 1.23234 |
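The scoring rule above encodes three regimes: a node still in connect blackout scores minus its remaining blackout time, a node at its per-node connection cap scores minus infinity, and everything else scores a large constant minus its current connection count, so less-connected nodes win. A standalone sketch of that rule (the parameter names are illustrative; `MAXSCORE` stands in for the Python 2 `sys.maxint`):

```python
import math

MAXSCORE = 2**31 - 1  # stands in for sys.maxint in the Python 2 original

def add_connection_score(blackout_left, numconns, max_per_node):
    # negative -> wait that many seconds; -inf -> skip node entirely;
    # otherwise prefer nodes with fewer existing connections
    if blackout_left > 0:
        return -blackout_left
    if numconns >= max_per_node:
        return -math.inf
    return MAXSCORE - numconns
```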
if newsize < 0:
raise ValueError("pool size must be nonnegative")
self.log("Adjust pool size from %d to %d." % (self.target_pool_size, newsize))
self.target_pool_size = newsize
self.kill_excess_pending_conns()
self.kill_excess_conns()
self.fill_pool() | def adjustPoolSize(self, newsize) | Change the target pool size. If we have too many connections already,
ask some to finish what they're doing and die (preferring to kill
connections to the node that already has the most connections). If
we have too few, create more. | 3.779624 | 3.279344 | 1.152555 |
time_since_last_called = self.fill_pool_throttle
if self.fill_pool_last_called is not None:
time_since_last_called = time() - self.fill_pool_last_called
need = self.target_pool_size - self.num_connectors()
if need <= 0 or (self.throttle_timer is not None and self.throttle_timer.active()):
return
elif time_since_last_called < self.fill_pool_throttle:
self.log("Filling pool too quickly, calling again in %.1f seconds" % self.fill_pool_throttle)
self._set_fill_pool_timer()
return
else:
try:
for num, node in izip(xrange(need), self.choose_nodes_to_connect()):
self.make_conn(node)
self.fill_pool_last_called = time()
except NoNodesAvailable, e:
waittime = e.args[0]
pending_requests = len(self.request_queue.pending)
if self.on_insufficient_nodes:
self.on_insufficient_nodes(self.num_active_conns(),
self.target_pool_size,
pending_requests,
waittime if waittime != float('Inf') else None)
self.schedule_future_fill_pool(e.args[0])
if self.num_connectors() == 0 and pending_requests > 0:
if self.on_insufficient_conns:
self.on_insufficient_conns(self.num_connectors(),
pending_requests) | def fill_pool(self) | Add connections as necessary to meet the target pool size. If there
are no nodes to connect to (because we maxed out connections-per-node
on all active connections and any unconnected nodes have pending
reconnect timers), call the on_insufficient_nodes callback. | 3.398007 | 3.047567 | 1.11499 |
self.log('resubmitting %s request' % (req.method,))
self.pushRequest_really(req, keyspace, req_d, retries)
try:
self.request_queue.pending.remove((req, keyspace, req_d, retries))
except ValueError:
# it's already been scooped up
pass
else:
self.request_queue.pending.insert(0, (req, keyspace, req_d, retries)) | def resubmit(self, req, keyspace, req_d, retries) | Push this request to the front of the line, just to be a jerk. | 4.276563 | 3.840768 | 1.113466 |
# push a real set_keyspace on some (any) connection; the idea is that
# if it succeeds there, it is likely to succeed everywhere, and vice
# versa. don't bother waiting for all connections to change- some of
# them may be doing long blocking tasks and by the time they're done,
# the keyspace might be changed again anyway
d = self.pushRequest(ManagedThriftRequest('set_keyspace', keyspace))
def store_keyspace(_):
self.keyspace = keyspace
d.addCallback(store_keyspace)
return d | def set_keyspace(self, keyspace) | Change the keyspace which will be used for subsequent requests to this
CassandraClusterPool, and return a Deferred that will fire once it can
be verified that connections can successfully use that keyspace.
If something goes wrong trying to change a connection to that keyspace,
the Deferred will errback, and the keyspace to be used for future
requests will not be changed.
Requests made between the time this method is called and the time that
the returned Deferred is fired may be made in either the previous
keyspace or the new keyspace. If you may need to make use of multiple
keyspaces at the same time in the same app, consider using the
specialized CassandraKeyspaceConnection interface provided by the
keyspaceConnection method. | 12.549792 | 11.945269 | 1.050608 |
conn = CassandraKeyspaceConnection(self, keyspace)
return CassandraClient(conn, consistency=consistency) | def keyspaceConnection(self, keyspace, consistency=ConsistencyLevel.ONE) | Return a CassandraClient instance which uses this CassandraClusterPool
by way of a CassandraKeyspaceConnection, so that all requests made
through it are guaranteed to go to the given keyspace, no matter what
other consumers of this pool may do. | 6.544887 | 4.225783 | 1.548799 |
if not maybe_s:
return ()
parts: List[str] = []
split_by_backslash = maybe_s.split(r'\,')
for split_by_backslash_part in split_by_backslash:
splitby_comma = split_by_backslash_part.split(',')
if parts:
parts[-1] += ',' + splitby_comma[0]
else:
parts.append(splitby_comma[0])
parts.extend(splitby_comma[1:])
return tuple(parts) | def split_by_commas(maybe_s: str) -> Tuple[str, ...] | Split a string by commas, but allow escaped commas.
- If maybe_s is falsey, returns an empty tuple
- Ignore backslashed commas | 2.194245 | 2.187408 | 1.003125 |
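Restating the function from this row (type hints dropped) makes the escape behavior easy to check: a backslash-escaped comma stays inside its field, while an ordinary comma splits.

```python
# split_by_commas from this row: split on ',' but keep escaped r'\,'
# sequences inside a single field.
def split_by_commas(maybe_s):
    if not maybe_s:
        return ()
    parts = []
    for chunk in maybe_s.split(r'\,'):
        fields = chunk.split(',')
        if parts:
            # rejoin across the escaped comma boundary
            parts[-1] += ',' + fields[0]
        else:
            parts.append(fields[0])
        parts.extend(fields[1:])
    return tuple(parts)


assert split_by_commas('') == ()
assert split_by_commas('a,b,c') == ('a', 'b', 'c')
assert split_by_commas(r'a\,b,c') == ('a,b', 'c')
```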
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_login(auth_request)
return d | def login(self, auth_request) | Parameters:
- auth_request | 2.823819 | 2.949367 | 0.957432 |
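Every client method in the rows below follows the same correlation pattern: bump a sequence id, park a Deferred under that id in `_reqs`, fire the `send_*` call, and return the Deferred; when the matching reply arrives, the parked Deferred is looked up by seqid and fired. A minimal stand-in (plain callables instead of Twisted Deferreds, with `send_*` simulated) shows the mechanism:

```python
# Sketch of the seqid/Deferred correlation pattern shared by login,
# get, get_slice, etc. A results list stands in for a Deferred.
class MiniClient:
    def __init__(self):
        self._seqid = 0
        self._reqs = {}
        self.sent = []

    def _defer_call(self, method, *args):
        self._seqid += 1
        results = []
        # park the "Deferred" (here: an append callback) under the seqid
        self._reqs[self._seqid] = results.append
        self.sent.append((self._seqid, method, args))
        return self._seqid, results

    def recv(self, seqid, result):
        # reply dispatch: find the parked callback by sequence id
        self._reqs.pop(seqid)(result)


client = MiniClient()
seqid, results = client._defer_call('login', {'user': 'u'})
assert results == []       # nothing fired yet
client.recv(seqid, 'ok')
assert results == ['ok']   # reply correlated back by seqid
```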
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_set_keyspace(keyspace)
return d | def set_keyspace(self, keyspace) | Parameters:
- keyspace | 2.76779 | 2.874882 | 0.962749 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_get(key, column_path, consistency_level)
return d | def get(self, key, column_path, consistency_level) | Get the Column or SuperColumn at the given column_path. If no value is present, NotFoundException is thrown. (This is
the only method that can throw an exception under non-failure conditions.)
Parameters:
- key
- column_path
- consistency_level | 2.351601 | 3.503378 | 0.671238 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_get_slice(key, column_parent, predicate, consistency_level)
return d | def get_slice(self, key, column_parent, predicate, consistency_level) | Get the group of columns contained by column_parent (either a ColumnFamily name or a ColumnFamily/SuperColumn name
pair) specified by the given SlicePredicate. If no matching values are found, an empty list is returned.
Parameters:
- key
- column_parent
- predicate
- consistency_level | 2.12919 | 2.987938 | 0.712595 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_get_count(key, column_parent, predicate, consistency_level)
return d | def get_count(self, key, column_parent, predicate, consistency_level) | returns the number of columns matching <code>predicate</code> for a particular <code>key</code>,
<code>ColumnFamily</code> and optionally <code>SuperColumn</code>.
Parameters:
- key
- column_parent
- predicate
- consistency_level | 2.103675 | 2.815704 | 0.747122 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_multiget_slice(keys, column_parent, predicate, consistency_level)
return d | def multiget_slice(self, keys, column_parent, predicate, consistency_level) | Performs a get_slice for column_parent and predicate for the given keys in parallel.
Parameters:
- keys
- column_parent
- predicate
- consistency_level | 2.063007 | 2.644371 | 0.78015 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_multiget_count(keys, column_parent, predicate, consistency_level)
return d | def multiget_count(self, keys, column_parent, predicate, consistency_level) | Perform a get_count in parallel on the given list<binary> keys. The return value maps keys to the count found.
Parameters:
- keys
- column_parent
- predicate
- consistency_level | 2.026595 | 2.696733 | 0.7515 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_get_range_slices(column_parent, predicate, range, consistency_level)
return d | def get_range_slices(self, column_parent, predicate, range, consistency_level) | returns a subset of columns for a contiguous range of keys.
Parameters:
- column_parent
- predicate
- range
- consistency_level | 2.139051 | 2.670258 | 0.801065 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_get_paged_slice(column_family, range, start_column, consistency_level)
return d | def get_paged_slice(self, column_family, range, start_column, consistency_level) | returns a range of columns, wrapping to the next rows if necessary to collect max_results.
Parameters:
- column_family
- range
- start_column
- consistency_level | 2.03781 | 2.427288 | 0.839542 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_get_indexed_slices(column_parent, index_clause, column_predicate, consistency_level)
return d | def get_indexed_slices(self, column_parent, index_clause, column_predicate, consistency_level) | Returns the subset of columns specified in SlicePredicate for the rows matching the IndexClause
@deprecated use get_range_slices instead with range.row_filter specified
Parameters:
- column_parent
- index_clause
- column_predicate
- consistency_level | 2.017255 | 2.693202 | 0.749017 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_insert(key, column_parent, column, consistency_level)
return d | def insert(self, key, column_parent, column, consistency_level) | Insert a Column at the given column_parent.column_family and optional column_parent.super_column.
Parameters:
- key
- column_parent
- column
- consistency_level | 2.163587 | 2.990235 | 0.723551 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_add(key, column_parent, column, consistency_level)
return d | def add(self, key, column_parent, column, consistency_level) | Increment or decrement a counter.
Parameters:
- key
- column_parent
- column
- consistency_level | 2.140616 | 2.666514 | 0.802777 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_remove(key, column_path, timestamp, consistency_level)
return d | def remove(self, key, column_path, timestamp, consistency_level) | Remove data from the row specified by key at the granularity specified by column_path, and the given timestamp. Note
that all the values in column_path besides column_path.column_family are truly optional: you can remove the entire
row by just specifying the ColumnFamily, or you can remove a SuperColumn or a single Column by specifying those levels too.
Parameters:
- key
- column_path
- timestamp
- consistency_level | 2.201753 | 3.14039 | 0.701108 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_remove_counter(key, path, consistency_level)
return d | def remove_counter(self, key, path, consistency_level) | Remove a counter at the specified location.
Note that counters have limited support for deletes: if you remove a counter, you must wait to issue any following update
until the delete has reached all the nodes and all of them have been fully compacted.
Parameters:
- key
- path
- consistency_level | 2.326221 | 2.966425 | 0.784183 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_batch_mutate(mutation_map, consistency_level)
return d | def batch_mutate(self, mutation_map, consistency_level) | Mutate many columns or super columns for many row keys. See also: Mutation.
mutation_map maps key to column family to a list of Mutation objects to take place at that scope.
Parameters:
- mutation_map
- consistency_level | 2.396668 | 3.062025 | 0.782707 |
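The nesting described above (row key, then column family name, then a list of Mutation objects) is the part that usually trips people up. Plain dicts stand in for the generated Thrift Mutation/Column classes in this sketch; the keys and the 'Users' family name are illustrative, not part of the API.

```python
# Shape of batch_mutate's mutation_map: key -> column family -> [Mutation].
# Dicts stand in for the generated Thrift classes here.
import time

timestamp = int(time.time() * 1e6)  # Cassandra timestamps are microseconds
mutation_map = {
    b'rowkey1': {
        'Users': [
            {'column': {'name': b'email', 'value': b'a@b.c', 'timestamp': timestamp}},
            {'column': {'name': b'age', 'value': b'30', 'timestamp': timestamp}},
        ],
    },
}

assert set(mutation_map) == {b'rowkey1'}
assert len(mutation_map[b'rowkey1']['Users']) == 2
```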
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_atomic_batch_mutate(mutation_map, consistency_level)
return d | def atomic_batch_mutate(self, mutation_map, consistency_level) | Atomically mutate many columns or super columns for many row keys. See also: Mutation.
mutation_map maps key to column family to a list of Mutation objects to take place at that scope.
Parameters:
- mutation_map
- consistency_level | 2.272928 | 2.877088 | 0.79001 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_truncate(cfname)
return d | def truncate(self, cfname) | Truncate will mark and entire column family as deleted.
From the user's perspective, a successful call to truncate will result in complete data deletion from cfname.
Internally, however, disk space will not be immediately released; as with all deletes in cassandra, this one
only marks the data as deleted.
The operation succeeds only if all hosts in the cluster are available, and will throw an UnavailableException if
some hosts are down.
Parameters:
- cfname | 2.970821 | 3.946371 | 0.752798 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_describe_schema_versions()
return d | def describe_schema_versions(self, ) | for each schema version present in the cluster, returns a list of nodes at that version.
hosts that do not respond will be under the key DatabaseDescriptor.INITIAL_VERSION.
the cluster is all on the same version if the size of the map is 1. | 3.06591 | 3.455291 | 0.887309 |
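The agreement check the docstring describes (map size 1, after setting aside the bucket of non-responders) is a one-liner. `INITIAL_VERSION` below is a placeholder constant for the non-responder key, and the version strings and hosts are made-up sample data.

```python
# Interpreting describe_schema_versions: version -> list of hosts.
# The cluster agrees when only one live version remains.
INITIAL_VERSION = 'unknown'  # placeholder for the non-responder bucket


def schema_agrees(versions):
    live = {v for v in versions if v != INITIAL_VERSION}
    return len(live) <= 1


assert schema_agrees({'v1': ['10.0.0.1', '10.0.0.2']})
assert not schema_agrees({'v1': ['10.0.0.1'], 'v2': ['10.0.0.2']})
```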
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_describe_keyspaces()
return d | def describe_keyspaces(self, ) | list the defined keyspaces in this cluster | 2.977355 | 3.352114 | 0.888202 |
self._seqid += 1
d = self._reqs[self._seqid] = defer.Deferred()
self.send_describe_cluster_name()
return d | def describe_cluster_name(self, ) | get the cluster name | 3.122918 | 2.845475 | 1.097503 |