signature | body | docstring | id
|---|---|---|---|
def _uneven_transform_deriv_v(systematic_utilities,
                              alt_IDs,
                              rows_to_alts,
                              shape_params,
                              output_array=None,
                              *args, **kwargs):
|
natural_shapes = np.exp(shape_params)
natural_shapes[np.isposinf(natural_shapes)] = max_comp_value
long_shapes = rows_to_alts.dot(natural_shapes)
exp_neg_utilities = np.exp(-1 * systematic_utilities)
exp_shape_utilities = np.exp(long_shapes * systematic_utilities)
derivs = (1.0 / (1.0 + exp_neg_utilities) +
          long_shapes / (1.0 + exp_shape_utilities))
output_array.data = derivs
return output_array
|
Parameters
----------
systematic_utilities : 1D ndarray.
All elements should be ints, floats, or longs. Should contain the
systematic utilities of each observation per available alternative.
Note that this vector is formed by the dot product of the design matrix
with the vector of utility coefficients.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D scipy sparse matrix.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset. All
elements should be zeros or ones.
shape_params : None or 1D ndarray.
If an array, each element should be an int, float, or long. There
should be one value per shape parameter of the model being used.
output_array : 2D scipy sparse array.
The array should be square and it should have
`systematic_utilities.shape[0]` rows. Its data is to be replaced with
the correct derivatives of the transformation vector with respect to
the vector of systematic utilities. This argument is NOT optional.
Returns
-------
output_array : 2D scipy sparse array.
The shape of the returned array is `(systematic_utilities.shape[0],
systematic_utilities.shape[0])`. The returned array specifies the
derivative of the transformed utilities with respect to the systematic
utilities. All elements are ints, floats, or longs.
|
f7698:m2
|
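A minimal numeric sketch of why the `derivs` formula above holds, assuming (an assumption, not taken from this dump) that the uneven-logit transformation is T(V) = ln(1 + e**V) - ln(1 + e**(-c*V)) with natural shape c = exp(shape_param); differentiating T gives exactly the two terms in `derivs`, so the closed form should match a central finite difference:

    import numpy as np

    v, c, eps = 0.75, np.exp(0.2), 1e-6  # c is the "natural" shape
    T = lambda u: np.log(1 + np.exp(u)) - np.log(1 + np.exp(-c * u))
    analytic = 1.0 / (1 + np.exp(-v)) + c / (1 + np.exp(c * v))
    numeric = (T(v + eps) - T(v - eps)) / (2 * eps)
    assert np.isclose(analytic, numeric)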
def _uneven_transform_deriv_shape(systematic_utilities,
                                  alt_IDs,
                                  rows_to_alts,
                                  shape_params,
                                  output_array=None,
                                  *args, **kwargs):
|
natural_shapes = np.exp(shape_params)
natural_shapes[np.isposinf(natural_shapes)] = max_comp_value
long_shapes = rows_to_alts.dot(natural_shapes)
exp_shape_utilities = np.exp(long_shapes * systematic_utilities)
derivs = (systematic_utilities / (1.0 + exp_shape_utilities))
derivs[np.isposinf(systematic_utilities)] = 0
huge_index = np.isneginf(systematic_utilities)
derivs[huge_index] = -max_comp_value
output_array.data = derivs * long_shapes
return output_array
|
Parameters
----------
systematic_utilities : 1D ndarray.
All elements should be ints, floats, or longs. Should contain the
systematic utilities of each observation per available alternative.
Note that this vector is formed by the dot product of the design matrix
with the vector of utility coefficients.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D scipy sparse matrix.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset. All
elements should be zeros or ones.
shape_params : None or 1D ndarray.
If an array, each element should be an int, float, or long. There
should be one value per shape parameter of the model being used.
output_array : 2D scipy sparse array.
The array should have shape `(systematic_utilities.shape[0],
shape_params.shape[0])`. Its data is to be replaced with the correct
derivatives of the transformation vector with respect to the vector of
shape parameters. This argument is NOT optional.
Returns
-------
output_array : 2D scipy sparse array.
The shape of the returned array is `(systematic_utilities.shape[0],
shape_params.shape[0])`. The returned array specifies the derivative of
the transformed utilities with respect to the shape parameters. All
elements are ints, floats, or longs.
|
f7698:m3
|
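A matching finite-difference check for the shape derivative, under the same assumed transformation: by the chain rule through c = exp(s), dT/ds = c * V / (1 + e**(c*V)), which is the `derivs * long_shapes` product stored above.

    import numpy as np

    v, s, eps = 0.75, 0.2, 1e-6
    T = lambda shape: (np.log(1 + np.exp(v)) -
                       np.log(1 + np.exp(-np.exp(shape) * v)))
    analytic = np.exp(s) * v / (1 + np.exp(np.exp(s) * v))
    numeric = (T(s + eps) - T(s - eps)) / (2 * eps)
    assert np.isclose(analytic, numeric)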
def _uneven_transform_deriv_alpha(systematic_utilities,
                                  alt_IDs,
                                  rows_to_alts,
                                  intercept_params,
                                  output_array=None,
                                  *args, **kwargs):
|
return output_array
|
Parameters
----------
systematic_utilities : 1D ndarray.
All elements should be ints, floats, or longs. Should contain the
systematic utilities of each observation per available alternative.
Note that this vector is formed by the dot product of the design matrix
with the vector of utility coefficients.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D scipy sparse matrix.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset. All
elements should be zeros or ones.
intercept_params : 1D ndarray or None.
If an array, each element should be an int, float, or long. For
identifiability, there should be J - 1 elements where J is the total
number of observed alternatives for this dataset.
output_array : None or 2D scipy sparse array.
If a sparse array is passed, it should contain the derivative of the
vector of transformed utilities with respect to the intercept
parameters outside of the index. This keyword argument will be
returned. If there are no intercept parameters outside of the index,
then `output_array` should equal None. If there are intercept
parameters outside of the index, then `output_array` should be
`rows_to_alts` with the column corresponding to the alternative whose
intercept is not being estimated removed, in order to ensure
identifiability.
Returns
-------
output_array : None or 2D scipy sparse array.
|
f7698:m4
|
def create_calc_dh_dv(estimator):
|
dh_dv = diags(np.ones(estimator.design.shape[0]), 0, format='<STR_LIT>')
calc_dh_dv = partial(_uneven_transform_deriv_v, output_array=dh_dv)
return calc_dh_dv
|
Return the function that can be used in the various gradient and hessian
calculations to calculate the derivative of the transformation with respect
to the index.
Parameters
----------
estimator : an instance of the estimation.LogitTypeEstimator class.
Should contain a `design` attribute that is a 2D ndarray representing
the design matrix for this model and dataset.
Returns
-------
Callable.
Will accept a 1D array of systematic utility values, a 1D array of
alternative IDs, (shape parameters if there are any) and miscellaneous
args and kwargs. Should return a 2D array whose elements contain the
derivative of the transformed utility vector with respect to the vector
of systematic utilities. The dimensions of the returned vector should
be `(design.shape[0], design.shape[0])`.
|
f7698:m5
|
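The factory above follows a pre-allocate-and-bind pattern: build the sparse output container once, then freeze it into the derivative function with functools.partial so repeated gradient calls reuse the same container. A hedged sketch, assuming `_uneven_transform_deriv_v` above is in scope; the 'csr' format and the 5-row size are illustrative assumptions:

    from functools import partial

    import numpy as np
    from scipy.sparse import diags

    n_rows = 5  # stand-in for estimator.design.shape[0]
    dh_dv = diags(np.ones(n_rows), 0, format='csr')
    calc_dh_dv = partial(_uneven_transform_deriv_v, output_array=dh_dv)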
def create_calc_dh_d_shape(estimator):
|
dh_d_shape = estimator.rows_to_alts.copy()
calc_dh_d_shape = partial(_uneven_transform_deriv_shape,
                          output_array=dh_d_shape)
return calc_dh_d_shape
|
Return the function that can be used in the various gradient and hessian
calculations to calculate the derivative of the transformation with respect
to the shape parameters.
Parameters
----------
estimator : an instance of the estimation.LogitTypeEstimator class.
Should contain a `rows_to_alts` attribute that is a 2D scipy sparse
matrix that maps the rows of the `design` matrix to the alternatives
available in this dataset.
Returns
-------
Callable.
Will accept a 1D array of systematic utility values, a 1D array of
alternative IDs, (shape parameters if there are any) and miscellaneous
args and kwargs. Should return a 2D array whose elements contain the
derivative of the transformed utility vector with respect to the vector
of shape parameters. The dimensions of the returned vector should
be `(design.shape[0], num_alternatives)`.
|
f7698:m6
|
def create_calc_dh_d_alpha(estimator):
|
if estimator.intercept_ref_pos is not None:
    # list() is needed so that .remove() works on Python 3, where
    # range() returns a lazy range object rather than a list.
    needed_idxs = list(range(estimator.rows_to_alts.shape[1]))
    needed_idxs.remove(estimator.intercept_ref_pos)
    dh_d_alpha = (estimator.rows_to_alts
                  .copy()
                  .transpose()[needed_idxs, :]
                  .transpose())
else:
    dh_d_alpha = None
calc_dh_d_alpha = partial(_uneven_transform_deriv_alpha,
                          output_array=dh_d_alpha)
return calc_dh_d_alpha
|
Return the function that can be used in the various gradient and hessian
calculations to calculate the derivative of the transformation with respect
to the outside intercept parameters.
Parameters
----------
estimator : an instance of the estimation.LogitTypeEstimator class.
Should contain a `rows_to_alts` attribute that is a 2D scipy sparse
matrix that maps the rows of the `design` matrix to the alternatives
available in this dataset. Should also contain an `intercept_ref_pos`
attribute that is either None or an int. This attribute should denote
which intercept is not being estimated (in the case of outside
intercept parameters) for identification purposes.
Returns
-------
Callable.
Will accept a 1D array of systematic utility values, a 1D array of
alternative IDs, (shape parameters if there are any) and miscellaneous
args and kwargs. Should return a 2D array whose elements contain the
derivative of the transformed utility vector with respect to the vector
of outside intercepts. The dimensions of the returned vector should
be `(design.shape[0], num_alternatives - 1)`.
|
f7698:m7
|
def check_length_of_initial_values(self, init_values):
|
num_alts = self.rows_to_alts.shape[1]
num_index_coefs = self.design.shape[1]
if self.intercept_ref_pos is not None:
    assumed_param_dimensions = num_index_coefs + 2 * num_alts - 1
else:
    assumed_param_dimensions = num_index_coefs + num_alts
if init_values.shape[0] != assumed_param_dimensions:
    msg_1 = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    msg_3 = "<STR_LIT>"
    raise ValueError(msg_1 +
                     msg_2.format(assumed_param_dimensions) +
                     msg_3.format(init_values.shape[0]))
return None
|
Ensures that `init_values` is of the correct length. Raises a helpful
ValueError if otherwise.
Parameters
----------
init_values : 1D ndarray.
The initial values to start the optimization process with. There
should be one value for each index coefficient, outside intercept
parameter, and shape parameter being estimated.
Returns
-------
None.
|
f7698:c0:m1
|
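A worked instance of the dimension bookkeeping above: with 4 alternatives and 3 index coefficients, a model with outside intercepts expects 3 + 2*4 - 1 = 10 parameters (4 shapes, 3 identified intercepts, 3 coefficients); without outside intercepts it expects 3 + 4 = 7 (shapes plus coefficients).

    num_alts, num_index_coefs = 4, 3
    assert num_index_coefs + 2 * num_alts - 1 == 10  # with outside intercepts
    assert num_index_coefs + num_alts == 7           # shapes + coefficients only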
def fit_mle(self,
            init_vals,
            init_shapes=None,
            init_intercepts=None,
            init_coefs=None,
            print_res=True,
            method="bfgs",   # defaults here filled in from the docstring below
            loss_tol=1e-06,
            gradient_tol=1e-06,
            maxiter=1000,
            ridge=None,
            constrained_pos=None,
            just_point=False,
            **kwargs):
|
self.optimization_method = method
self.ridge_param = ridge
if ridge is not None:
    warnings.warn(_ridge_warning_msg)
mapping_res = self.get_mappings_for_fit()
rows_to_alts = mapping_res["rows_to_alts"]
if init_vals is None and all([x is not None for x in [init_shapes,
                                                      init_coefs]]):
    num_alternatives = rows_to_alts.shape[1]
    try:
        assert init_shapes.shape[0] == num_alternatives
    except AssertionError:
        msg = "<STR_LIT>"
        raise ValueError(msg.format(init_shapes.shape,
                                    num_alternatives))
    try:
        assert init_coefs.shape[0] == self.design.shape[1]
    except AssertionError:
        msg = "<STR_LIT>"
        raise ValueError(msg.format(init_coefs.shape,
                                    self.design.shape[1]))
    try:
        if init_intercepts is not None:
            assert init_intercepts.shape[0] == (num_alternatives - 1)
    except AssertionError:
        msg = "<STR_LIT>"
        raise ValueError(msg.format(init_intercepts.shape,
                                    num_alternatives - 1))
    if init_intercepts is not None:
        init_vals = np.concatenate((init_shapes,
                                    init_intercepts,
                                    init_coefs), axis=0)
    else:
        init_vals = np.concatenate((init_shapes, init_coefs), axis=0)
elif init_vals is None:
    msg = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    raise ValueError(msg + msg_2)
zero_vector = np.zeros(init_vals.shape)
uneven_estimator = UnevenEstimator(self,
                                   mapping_res,
                                   ridge,
                                   zero_vector,
                                   split_param_vec,
                                   constrained_pos=constrained_pos)
uneven_estimator.set_derivatives()
uneven_estimator.check_length_of_initial_values(init_vals)
estimation_res = estimate(init_vals,
                          uneven_estimator,
                          method,
                          loss_tol,
                          gradient_tol,
                          maxiter,
                          print_res,
                          just_point=just_point)
if not just_point:
    self.store_fit_results(estimation_res)
    return None
else:
    return estimation_res
|
Parameters
----------
init_vals : 1D ndarray.
The initial values to start the optimization process with. There
should be one value for each index coefficient and shape
parameter being estimated. Shape parameters should come before
intercept parameters, which should come before index coefficients.
One can also pass None, and instead pass `init_shapes`, optionally
`init_intercepts` if `"intercept"` is not in the utility
specification, and `init_coefs`.
init_shapes : 1D ndarray or None, optional.
The initial values of the shape parameters. All elements should be
ints, floats, or longs. There should be one parameter per possible
alternative id in the dataset. This keyword argument will be
ignored if `init_vals` is not None. Default == None.
init_intercepts : 1D ndarray or None, optional.
The initial values of the intercept parameters. There should be one
parameter per possible alternative id in the dataset, minus one.
The passed values for this argument will be ignored if `init_vals`
is not None. This keyword argument should only be used if
`"intercept"` is not in the utility specification. Default == None.
init_coefs : 1D ndarray or None, optional.
The initial values of the index coefficients. There should be one
coefficient per index variable. The passed values for this argument
will be ignored if `init_vals` is not None. Default == None.
print_res : bool, optional.
Determines whether the timing and initial and final log likelihood
results will be printed as they are determined.
Default `== True`.
method : str, optional.
Should be a valid string for scipy.optimize.minimize. Determines
the optimization algorithm that is used for this problem.
Default `== 'bfgs'`.
loss_tol : float, optional.
Determines the tolerance on the difference in objective function
values from one iteration to the next that is needed to determine
convergence. Default `== 1e-06`.
gradient_tol : float, optional.
Determines the tolerance on the difference in gradient values from
one iteration to the next which is needed to determine convergence.
Default `== 1e-06`.
maxiter : int, optional.
Determines the maximum number of iterations used by the optimizer.
Default `== 1000`.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If a
scalar is passed, then that scalar determines the ridge penalty for
the optimization. The scalar should be greater than or equal to
zero. Default `== None`.
constrained_pos : list or None, optional.
Denotes the positions of the array of estimated parameters that are
not to change from their initial values. If a list is passed, the
elements are to be integers where no such integer is greater than
`init_vals.size`. Default == None.
just_point : bool, optional.
Determines whether (True) or not (False) calculations that are non-
critical for obtaining the maximum likelihood point estimate will
be performed. If True, this function will return the results
dictionary from scipy.optimize. Default == False.
Returns
-------
None. Estimation results are saved to the model instance.
|
f7698:c1:m1
|
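Illustrative stacking of the separate initial-value pieces (the sizes are hypothetical): when `init_vals` is None, the method above validates each piece and concatenates them in the documented order, shapes first, then intercepts, then index coefficients.

    import numpy as np

    init_shapes = np.zeros(4)      # one per alternative
    init_intercepts = np.zeros(3)  # one per alternative, minus one
    init_coefs = np.zeros(5)       # one per design-matrix column
    init_vals = np.concatenate((init_shapes, init_intercepts, init_coefs))
    assert init_vals.shape == (4 + 3 + 5,)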
def ensure_valid_nums_in_specification_cols(specification, dataframe):
|
problem_cols = []
for col in specification:
    if dataframe[col].dtype.kind not in ['f', 'i', 'u']:
        problem_cols.append(col)
    elif np.isinf(dataframe[col]).any():
        problem_cols.append(col)
    elif np.isnan(dataframe[col]).any():
        problem_cols.append(col)
if problem_cols != []:
    msg = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    msg_3 = "<STR_LIT>"
    total_msg = msg + msg_2 + msg_3
    raise ValueError(total_msg.format(problem_cols))
return None
|
Checks whether each column in `specification` contains numeric data,
excluding positive or negative infinity and excluding NaN. Raises
ValueError if any of the columns do not meet these requirements.
Parameters
----------
specification : iterable of column headers in `dataframe`.
dataframe : pandas DataFrame.
Dataframe containing the data for the choice model to be estimated.
Returns
-------
None.
|
f7699:m0
|
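A quick sketch of the `dtype.kind` screen used above: float ('f'), signed-integer ('i'), and unsigned-integer ('u') columns pass, while object/string columns are flagged (column names here are hypothetical).

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'x': [1.0, 2.0], 'y': ['a', 'b']})
    assert df['x'].dtype.kind in ['f', 'i', 'u']      # numeric: passes
    assert df['y'].dtype.kind not in ['f', 'i', 'u']  # object: flagged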
def ensure_ref_position_is_valid(ref_position, num_alts, param_title):
|
assert param_title in ['intercept_names', 'shape_names']
try:
    assert ref_position is None or isinstance(ref_position, int)
except AssertionError:
    msg = "<STR_LIT>"
    raise TypeError(msg.format(param_title))
if param_title == "intercept_names":
    try:
        assert ref_position is not None
    except AssertionError:
        raise ValueError("<STR_LIT>")
try:
    if ref_position is not None:
        assert ref_position >= 0 and ref_position <= num_alts - 1
except AssertionError:
    msg = "<STR_LIT>"
    raise ValueError(msg)
return None
|
Ensures that `ref_position` is None or an integer that is in the interval
`[0, num_alts - 1]`. If None, ensures that intercepts are not the
parameters being estimated. Raises a helpful ValueError if otherwise.
Parameters
----------
ref_position : int.
An integer denoting the position in an array of parameters that will
be constrained for identification purposes.
num_alts : int.
An integer denoting the total number of alternatives in one's universal
choice set.
param_title : {'intercept_names', 'shape_names'}.
String denoting the name of the parameters that are being estimated,
with a constraint for identification. E.g. 'intercept_names'.
Returns
-------
None.
|
f7699:m1
|
def check_length_of_shape_or_intercept_names(name_list,
                                             num_alts,
                                             constrained_param,
                                             list_title):
|
if len(name_list) != (num_alts - constrained_param):
    msg_1 = "<STR_LIT>".format(list_title)
    msg_2 = "<STR_LIT>".format(list_title, len(name_list))
    correct_length = num_alts - constrained_param
    msg_3 = "<STR_LIT>".format(correct_length)
    total_msg = "\n".join([msg_1, msg_2, msg_3])
    raise ValueError(total_msg)
return None
|
Ensures that the length of the parameter names matches the number of
parameters that will be estimated. Will raise a ValueError otherwise.
Parameters
----------
name_list : list of strings.
Each element should be the name of a parameter that is to be estimated.
num_alts : int.
Should be the total number of alternatives in the universal choice set
for this dataset.
constrained_param : {0, 1, True, False}
Indicates whether (1 or True) or not (0 or False) one of the type of
parameters being estimated will be constrained. For instance,
constraining one of the intercepts.
list_title : str.
Should specify the type of parameters whose names are being checked.
Examples include 'intercept_params' or 'shape_params'.
Returns
-------
None.
|
f7699:m2
|
def check_type_of_nest_spec_keys_and_values(nest_spec):
|
try:
    assert all([isinstance(k, str) for k in nest_spec])
    assert all([isinstance(nest_spec[k], list) for k in nest_spec])
except AssertionError:
    msg = "<STR_LIT>"
    raise TypeError(msg)
return None
|
Ensures that the keys and values of `nest_spec` are strings and lists,
respectively. Raises a helpful TypeError if they are not.
Parameters
----------
nest_spec : OrderedDict, or None, optional.
Keys are strings that define the name of the nests. Values are lists of
alternative ids, denoting which alternatives belong to which nests.
Each alternative id must only be associated with a single nest!
Default == None.
Returns
-------
None.
|
f7699:m3
|
def check_for_empty_nests_in_nest_spec(nest_spec):
|
empty_nests = []
for k in nest_spec:
    if len(nest_spec[k]) == 0:
        empty_nests.append(k)
if empty_nests != []:
    msg = "<STR_LIT>"
    raise ValueError(msg.format(empty_nests))
return None
|
Ensures that the values of `nest_spec` are not empty lists.
Raises a helpful ValueError if they are.
Parameters
----------
nest_spec : OrderedDict, or None, optional.
Keys are strings that define the name of the nests. Values are lists of
alternative ids, denoting which alternatives belong to which nests.
Each alternative id must only be associated with a single nest!
Default == None.
Returns
-------
None.
|
f7699:m4
|
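An example `nest_spec` (the nest names and alternative ids are hypothetical) that satisfies this family of checks, above and below: string keys, list values, integer alternative ids, no empty nests, and each id in exactly one nest.

    from collections import OrderedDict

    nest_spec = OrderedDict([('motorized', [1, 2]),
                             ('non_motorized', [3, 4])])
    list_elements = [i for ids in nest_spec.values() for i in ids]
    assert len(set(list_elements)) == len(list_elements)  # no id in two nests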
def ensure_alt_ids_in_nest_spec_are_ints(nest_spec, list_elements):
|
try:
    assert all([isinstance(x, int) for x in list_elements])
except AssertionError:
    msg = "<STR_LIT>"
    raise ValueError(msg)
return None
|
Ensures that the alternative id's in `nest_spec` are integers. Raises a
helpful ValueError if they are not.
Parameters
----------
nest_spec : OrderedDict, or None, optional.
Keys are strings that define the name of the nests. Values are lists of
alternative ids, denoting which alternatives belong to which nests.
Each alternative id must only be associated with a single nest!
Default == None.
list_elements : list of lists of ints.
Each element should correspond to one of the alternatives identified as
belonging to a nest.
Returns
-------
None.
|
f7699:m5
|
def ensure_alt_ids_are_only_in_one_nest(nest_spec, list_elements):
|
try:
    assert len(set(list_elements)) == len(list_elements)
except AssertionError:
    msg = "<STR_LIT>"
    raise ValueError(msg)
return None
|
Ensures that the alternative id's in `nest_spec` are only associated with
a single nest. Raises a helpful ValueError if they are not.
Parameters
----------
nest_spec : OrderedDict, or None, optional.
Keys are strings that define the name of the nests. Values are lists of
alternative ids, denoting which alternatives belong to which nests.
Each alternative id must only be associated with a single nest!
Default == None.
list_elements : list of ints.
Each element should correspond to one of the alternatives identified as
belonging to a nest.
Returns
-------
None.
|
f7699:m6
|
def ensure_all_alt_ids_have_a_nest(nest_spec, list_elements, all_ids):
|
unaccounted_alt_ids = []
for alt_id in all_ids:
    if alt_id not in list_elements:
        unaccounted_alt_ids.append(alt_id)
if unaccounted_alt_ids != []:
    msg = "<STR_LIT>"
    raise ValueError(msg.format(unaccounted_alt_ids))
return None
|
Ensures that the alternative id's in `nest_spec` are all associated with
a nest. Raises a helpful ValueError if they are not.
Parameters
----------
nest_spec : OrderedDict, or None, optional.
Keys are strings that define the name of the nests. Values are lists of
alternative ids, denoting which alternatives belong to which nests.
Each alternative id must only be associated with a single nest!
Default == None.
list_elements : list of ints.
Each element should correspond to one of the alternatives identified as
belonging to a nest.
all_ids : list of ints.
Each element should correspond to one of the alternatives that is
present in the universal choice set for this model.
Returns
-------
None.
|
f7699:m7
|
def ensure_nest_alts_are_valid_alts(nest_spec, list_elements, all_ids):
|
invalid_alt_ids = []
for x in list_elements:
    if x not in all_ids:
        invalid_alt_ids.append(x)
if invalid_alt_ids != []:
    msg = "<STR_LIT>"
    raise ValueError(msg.format(invalid_alt_ids))
return None
|
Ensures that the alternative id's in `nest_spec` are all in the universal
choice set for this dataset. Raises a helpful ValueError if they are not.
Parameters
----------
nest_spec : OrderedDict, or None, optional.
Keys are strings that define the name of the nests. Values are lists of
alternative ids, denoting which alternatives belong to which nests.
Each alternative id must only be associated with a single nest!
Default == None.
list_elements : list of ints.
Each element should correspond to one of the alternatives identified as
belonging to a nest.
all_ids : list of ints.
Each element should correspond to one of the alternatives that is
present in the universal choice set for this model.
Returns
-------
None.
|
f7699:m8
|
def add_intercept_to_dataframe(specification, dataframe):
|
if "<STR_LIT>" in specification and "<STR_LIT>" not in dataframe.columns:<EOL><INDENT>dataframe["<STR_LIT>"] = <NUM_LIT:1.0><EOL><DEDENT>return None<EOL>
|
Checks whether `intercept` is in `specification` but not in `dataframe` and
adds the required column to the dataframe. Note that this function is
not pure--it alters its original argument, `dataframe`, in place.
Parameters
----------
specification : an iterable that has a `__contains__` method.
dataframe : pandas DataFrame.
Dataframe containing the data for the choice model to be estimated.
Returns
-------
None.
|
f7699:m9
|
def check_num_rows_of_parameter_array(param_array, correct_num_rows, title):
|
if param_array.shape[0] != correct_num_rows:
    msg = "<STR_LIT>"
    raise ValueError(msg.format(title, correct_num_rows))
return None
|
Ensures that `param_array.shape[0]` has the correct magnitude. Raises a
helpful ValueError if otherwise.
Parameters
----------
param_array : ndarray.
correct_num_rows : int.
The int that `param_array.shape[0]` should equal.
title : str.
The 'name' of the param_array whose shape is being checked.
Returns
-------
None.
|
f7699:m10
|
def check_type_and_size_of_param_list(param_list, expected_length):
|
try:
    assert isinstance(param_list, list)
    assert len(param_list) == expected_length
except AssertionError:
    msg = "<STR_LIT>"
    raise ValueError(msg.format(expected_length))
return None
|
Ensure that param_list is a list with the expected length. Raises a helpful
ValueError if this is not the case.
|
f7699:m11
|
def check_type_of_param_list_elements(param_list):
|
try:
    assert isinstance(param_list[0], np.ndarray)
    assert all([(x is None or isinstance(x, np.ndarray))
                for x in param_list])
except AssertionError:
    msg = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    total_msg = msg + "\n" + msg_2
    raise TypeError(total_msg)
return None
|
Ensures that all elements of param_list are ndarrays or None. Raises a
helpful ValueError if otherwise.
|
f7699:m12
|
def check_num_columns_in_param_list_arrays(param_list):
|
try:
    num_columns = param_list[0].shape[1]
    assert all([x is None or (x.shape[1] == num_columns)
                for x in param_list])
except AssertionError:
    msg = "<STR_LIT>"
    raise ValueError(msg)
return None
|
Ensure that each array in param_list, that is not None, has the same number
of columns. Raises a helpful ValueError if otherwise.
Parameters
----------
param_list : list of ndarrays or None.
Returns
-------
None.
|
f7699:m13
|
def check_dimensional_equality_of_param_list_arrays(param_list):
|
try:
    num_dimensions = len(param_list[0].shape)
    assert num_dimensions in [1, 2]
    assert all([(x is None or (len(x.shape) == num_dimensions))
                for x in param_list])
except AssertionError:
    msg = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    total_msg = msg + "\n" + msg_2
    raise ValueError(total_msg)
return None
|
Ensures that all arrays in param_list have the same dimension, and that
this dimension is either 1 or 2 (i.e. all arrays are 1D arrays or all
arrays are 2D arrays). Raises a helpful ValueError if otherwise.
Parameters
----------
param_list : list of ndarrays or None.
Returns
-------
None.
|
f7699:m14
|
def check_for_choice_col_based_on_return_long_probs(return_long_probs,
                                                    choice_col):
|
if not return_long_probs and choice_col is None:
    msg = "<STR_LIT>"
    raise ValueError(msg)
else:
    return None
|
Ensure that if return_long_probs is False then choice_col is not None.
Raise a helpful ValueError if otherwise.
Parameters
----------
return_long_probs : bool.
Indicates whether or not the long format probabilities (a 1D numpy
array with one element per observation per available alternative)
should be returned.
choice_col : str or None.
Denotes the column in `data` which contains a one if the
alternative pertaining to the given row was the observed outcome
for the observation pertaining to the given row and a zero
otherwise.
Returns
-------
None.
|
f7699:m15
|
def ensure_all_mixing_vars_are_in_the_name_dict(mixing_vars,
                                                name_dict,
                                                ind_var_names):
|
if mixing_vars is None:
    return None
problem_names = [variable_name for variable_name in mixing_vars
                 if variable_name not in ind_var_names]
msg_0 = "<STR_LIT>"
msg_1 = "<STR_LIT>"
msg_with_name_dict = msg_0 + msg_1.format(problem_names)
msg_2 = "<STR_LIT>"
msg_3 = "<STR_LIT>"
msg_4 = "<STR_LIT>"
msg_without_name_dict = (msg_2 +
                         msg_3.format(problem_names) +
                         msg_4.format(ind_var_names))
if problem_names != []:
    if name_dict:
        raise ValueError(msg_with_name_dict)
    else:
        raise ValueError(msg_without_name_dict)
return None
|
Ensures that all of the variables listed in `mixing_vars` are present in
`ind_var_names`. Raises a helpful ValueError if otherwise.
Parameters
----------
mixing_vars : list of strings, or None.
Each string denotes a parameter to be treated as a random variable.
name_dict : OrderedDict or None.
Contains the specification relating column headers in one's data (i.e.
the keys of the OrderedDict) to the index coefficients to be estimated
based on this data (i.e. the values of each key).
ind_var_names : list of strings.
Each string denotes an index coefficient (i.e. a beta) to be estimated.
Returns
-------
None.
|
f7699:m16
|
def ensure_all_alternatives_are_chosen(alt_id_col, choice_col, dataframe):
|
all_ids = set(dataframe[alt_id_col].unique())
chosen_ids = set(dataframe.loc[dataframe[choice_col] == 1,
                               alt_id_col].unique())
non_chosen_ids = all_ids.difference(chosen_ids)
if len(non_chosen_ids) != 0:
    msg = ("<STR_LIT>"
           "<STR_LIT>")
    raise ValueError(msg.format(non_chosen_ids))
return None
|
Ensures that all of the available alternatives in the dataset are chosen at
least once (for model identification). Raises a ValueError otherwise.
Parameters
----------
alt_id_col : str.
Should denote the column in `dataframe` that contains the alternative
identifiers for each row.
choice_col : str.
Should denote the column in `dataframe` that contains the ones and
zeros that denote whether or not the given row corresponds to the
chosen alternative for the given individual.
dataframe : pandas dataframe.
Should contain the data being used to estimate the model, as well as
the headers denoted by `alt_id_col` and `choice_col`.
Returns
-------
None.
|
f7699:m17
|
def compute_aic(model_object):
|
assert isinstance(model_object.params, pd.Series)
assert isinstance(model_object.log_likelihood, Number)
return -2 * model_object.log_likelihood + 2 * model_object.params.size
|
Compute the Akaike Information Criteria for an estimated model.
Parameters
----------
model_object : an MNDC_Model (multinomial discrete choice model) instance.
The model should have already been estimated.
`model_object.log_likelihood` should be a number, and
`model_object.params` should be a pandas Series.
Returns
-------
aic : float.
The AIC for the estimated model.
Notes
-----
aic = -2 * log_likelihood + 2 * num_estimated_parameters
References
----------
Akaike, H. (1974). 'A new look at the statistical identification model',
IEEE Transactions on Automatic Control 19, 6: 716-723.
|
f7699:m18
|
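A worked instance of the Notes formula above, assuming `compute_aic` and its `Number` import (elided in this dump) are in scope; the stub class is a hypothetical stand-in for an estimated model: a log-likelihood of -100.0 with 5 estimated parameters gives -2 * (-100.0) + 2 * 5 = 210.0.

    import pandas as pd

    class _Stub(object):  # hypothetical stand-in for an estimated model
        params = pd.Series([0.0] * 5)  # 5 estimated parameters
        log_likelihood = -100.0

    assert compute_aic(_Stub()) == 210.0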
def compute_bic(model_object):
|
assert isinstance(model_object.params, pd.Series)
assert isinstance(model_object.log_likelihood, Number)
assert isinstance(model_object.nobs, Number)
log_likelihood = model_object.log_likelihood
num_obs = model_object.nobs
num_params = model_object.params.size
return -2 * log_likelihood + np.log(num_obs) * num_params
|
Compute the Bayesian Information Criteria for an estimated model.
Parameters
----------
model_object : an MNDC_Model (multinomial discrete choice model) instance.
The model should have already been estimated.
`model_object.log_likelihood` and `model_object.nobs` should be a
number, and `model_object.params` should be a pandas Series.
Returns
-------
bic : float.
The BIC for the estimated model.
Notes
-----
bic = -2 * log_likelihood + log(num_observations) * num_parameters
The original BIC was introduced as (-1 / 2) times the formula above.
However, for model comparison purposes, it does not matter if the
goodness-of-fit measure is multiplied by a constant across all models being
compared. Moreover, the formula used above allows for a common scale
between measures such as the AIC, BIC, DIC, etc.
References
----------
Schwarz, G. (1978), 'Estimating the dimension of a model', The Annals of
Statistics 6, 2: 461–464.
|
f7699:m19
|
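The BIC counterpart, reusing the same hypothetical stub with 50 observations added: -2 * (-100.0) + ln(50) * 5 ≈ 219.56.

    import numpy as np
    import pandas as pd

    class _Stub(object):  # hypothetical stand-in for an estimated model
        params = pd.Series([0.0] * 5)
        log_likelihood = -100.0
        nobs = 50

    assert np.isclose(compute_bic(_Stub()), 200.0 + np.log(50) * 5)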
def get_mappings_for_fit(self, dense=False):
|
return create_long_form_mappings(self.data,
                                 self.obs_id_col,
                                 self.alt_id_col,
                                 choice_col=self.choice_col,
                                 nest_spec=self.nest_spec,
                                 mix_id_col=self.mixing_id_col,
                                 dense=dense)
|
Parameters
----------
dense : bool, optional.
Dictates if sparse matrices will be returned or dense numpy arrays.
Returns
-------
mapping_dict : OrderedDict.
Keys will be `["rows_to_obs", "rows_to_alts", "chosen_row_to_obs",
"rows_to_nests"]`. The value for `rows_to_obs` will map the rows of
the `long_form` to the unique observations (on the columns) in
their order of appearance. The value for `rows_to_alts` will map
the rows of the `long_form` to the unique alternatives which are
possible in the dataset (on the columns), in sorted order--not
order of appearance. The value for `chosen_row_to_obs`, if not
None, will map the rows of the `long_form` that contain the chosen
alternatives to the specific observations those rows are associated
with (denoted by the columns). The value of `rows_to_nests`, if not
None, will map the rows of the `long_form` to the nest (denoted by
the column) that contains the row's alternative. If `dense==True`,
the returned values will be dense numpy arrays. Otherwise, the
returned values will be scipy sparse arrays.
|
f7699:c0:m1
|
def _store_basic_estimation_results(self, results_dict):
|
self.log_likelihood = results_dict["final_log_likelihood"]
self.fitted_probs = results_dict["chosen_probs"]
self.long_fitted_probs = results_dict["long_probs"]
self.long_residuals = results_dict["residuals"]
self.ind_chi_squareds = results_dict["ind_chi_squareds"]
self.chi_square = self.ind_chi_squareds.sum()
self.estimation_success = results_dict["success"]
self.estimation_message = results_dict["message"]
self.rho_squared = results_dict["rho_squared"]
self.rho_bar_squared = results_dict["rho_bar_squared"]
self.null_log_likelihood = results_dict["log_likelihood_null"]
return None
|
Extracts the basic estimation results (i.e. those that need no further
calculation or logic applied to them) and stores them on the model
object.
Parameters
----------
results_dict : dict.
The estimation result dictionary that is output from
scipy.optimize.minimize. In addition to the standard keys which are
included, it should also contain the following keys:
`["final_log_likelihood", "chosen_probs", "long_probs",
"residuals", "ind_chi_squareds", "sucess", "message",
"rho_squared", "rho_bar_squared", "log_likelihood_null"]`
Returns
-------
None.
|
f7699:c0:m2
|
def _create_results_summary(self):
|
needed_attributes = ["params",
                     "standard_errors",
                     "tvalues",
                     "pvalues",
                     "robust_std_errs",
                     "robust_t_stats",
                     "robust_p_vals"]
try:
    assert all([hasattr(self, attr) for attr in needed_attributes])
    assert all([isinstance(getattr(self, attr), pd.Series)
                for attr in needed_attributes])
except AssertionError:
    msg = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    raise NotImplementedError(msg + msg_2)
self.summary = pd.concat((self.params,
                          self.standard_errors,
                          self.tvalues,
                          self.pvalues,
                          self.robust_std_errs,
                          self.robust_t_stats,
                          self.robust_p_vals), axis=1)
return None
|
Create the dataframe that displays the estimation results, and store
it on the model instance.
Returns
-------
None.
|
f7699:c0:m3
|
def _record_values_for_fit_summary_and_statsmodels(self):
|
needed_attributes = ["fitted_probs",
                     "params",
                     "log_likelihood",
                     "standard_errors"]
try:
    assert all([hasattr(self, attr) for attr in needed_attributes])
    assert all([getattr(self, attr) is not None
                for attr in needed_attributes])
except AssertionError:
    msg = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    raise NotImplementedError(msg + msg_2)
self.nobs = self.fitted_probs.shape[0]
self.df_model = self.params.shape[0]
self.df_resid = self.nobs - self.df_model
self.llf = self.log_likelihood
self.bse = self.standard_errors
self.aic = compute_aic(self)
self.bic = compute_bic(self)
return None
|
Store the various estimation results that are used to describe how well
the estimated model fits the given dataset, and record the values that
are needed for the statsmodels estimation results table. All values are
stored on the model instance.
Returns
-------
None.
|
f7699:c0:m4
|
def _create_fit_summary(self):
|
needed_attributes = ["df_model",
                     "nobs",
                     "null_log_likelihood",
                     "log_likelihood",
                     "rho_squared",
                     "rho_bar_squared",
                     "estimation_message"]
try:
    assert all([hasattr(self, attr) for attr in needed_attributes])
    assert all([getattr(self, attr) is not None
                for attr in needed_attributes])
except AssertionError:
    msg = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    raise NotImplementedError(msg + msg_2)
self.fit_summary = pd.Series([self.df_model,
                              self.nobs,
                              self.null_log_likelihood,
                              self.log_likelihood,
                              self.rho_squared,
                              self.rho_bar_squared,
                              self.estimation_message],
                             index=["<STR_LIT>",
                                    "<STR_LIT>",
                                    "<STR_LIT>",
                                    "<STR_LIT>",
                                    "<STR_LIT>",
                                    "<STR_LIT>",
                                    "<STR_LIT>"])
return None
|
Create and store a pandas series that will display to users the
various statistics/values that indicate how well the estimated model
fit the given dataset.
Returns
-------
None.
|
f7699:c0:m5
|
def _store_inferential_results(self,
                               value_array,
                               index_names,
                               attribute_name,
                               series_name=None,
                               column_names=None):
|
if len(value_array.shape) == 1:
    assert series_name is not None
    new_attribute_value = pd.Series(value_array,
                                    index=index_names,
                                    name=series_name)
elif len(value_array.shape) == 2:
    assert column_names is not None
    new_attribute_value = pd.DataFrame(value_array,
                                       index=index_names,
                                       columns=column_names)
setattr(self, attribute_name, new_attribute_value)
return None
|
Store the estimation results that relate to statistical inference, such
as parameter estimates, standard errors, p-values, etc.
Parameters
----------
value_array : 1D or 2D ndarray.
Contains the values that are to be stored on the model instance.
index_names : list of strings.
Contains the names that are to be displayed on the 'rows' for each
value being stored. There should be one element for each value of
`value_array`.
attribute_name : string.
The attribute name that will be exposed on the model instance and
related to the passed `value_array`.
series_name : string or None, optional.
The name of the pandas series being created for `value_array`. This
kwarg should not be None when `value_array` is a 1D ndarray.
column_names : list of strings, or None, optional.
Same as `index_names` except that it pertains to the columns of a
2D ndarray. When `value_array` is a 2D ndarray, there should be one
element for each column of `value_array`. This kwarg should be None
otherwise.
Returns
-------
None. Stores a pandas series or dataframe on the model instance.
|
f7699:c0:m6
|
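A sketch of the 1D-versus-2D dispatch above: 1D values become a named Series, 2D values a DataFrame with explicit column names (all names here are hypothetical).

    import numpy as np
    import pandas as pd

    names = ['x1', 'x2']
    series = pd.Series(np.array([0.1, -0.2]), index=names, name='parameters')
    frame = pd.DataFrame(np.eye(2), index=names, columns=names)
    assert series.shape == (2,) and frame.shape == (2, 2)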
def _store_generic_inference_results(self,
                                     results_dict,
                                     all_params,
                                     all_names):
|
self._store_inferential_results(results_dict["utility_coefs"],
                                index_names=self.ind_var_names,
                                attribute_name="coefs",
                                series_name="<STR_LIT>")
self._store_inferential_results(results_dict["final_gradient"],
                                index_names=all_names,
                                attribute_name="gradient",
                                series_name="<STR_LIT>")
self._store_inferential_results(results_dict["final_hessian"],
                                index_names=all_names,
                                attribute_name="hessian",
                                column_names=all_names)
self._store_inferential_results(-1 * scipy.linalg.inv(self.hessian),
                                index_names=all_names,
                                attribute_name="cov",
                                column_names=all_names)
self._store_inferential_results(np.concatenate(all_params, axis=0),
                                index_names=all_names,
                                attribute_name="params",
                                series_name="<STR_LIT>")
self._store_inferential_results(np.sqrt(np.diag(self.cov)),
                                index_names=all_names,
                                attribute_name="standard_errors",
                                series_name="<STR_LIT>")
self.tvalues = self.params / self.standard_errors
self.tvalues.name = "<STR_LIT>"
p_vals = 2 * scipy.stats.norm.sf(np.abs(self.tvalues))
self._store_inferential_results(p_vals,
                                index_names=all_names,
                                attribute_name="pvalues",
                                series_name="<STR_LIT>")
self._store_inferential_results(results_dict["fisher_info"],
                                index_names=all_names,
                                attribute_name="fisher_information",
                                column_names=all_names)
robust_covariance = calc_asymptotic_covariance(self.hessian,
                                               self.fisher_information)
self._store_inferential_results(robust_covariance,
                                index_names=all_names,
                                attribute_name="robust_cov",
                                column_names=all_names)
self._store_inferential_results(np.sqrt(np.diag(self.robust_cov)),
                                index_names=all_names,
                                attribute_name="robust_std_errs",
                                series_name="<STR_LIT>")
self.robust_t_stats = self.params / self.robust_std_errs
self.robust_t_stats.name = "<STR_LIT>"
one_sided_p_vals = scipy.stats.norm.sf(np.abs(self.robust_t_stats))
self._store_inferential_results(2 * one_sided_p_vals,
                                index_names=all_names,
                                attribute_name="robust_p_vals",
                                series_name="<STR_LIT>")
return None
|
Store the model inference values that are common to all choice models.
This includes things like index coefficients, gradients, hessians,
asymptotic covariance matrices, t-values, p-values, and robust versions
of these values.
Parameters
----------
results_dict : dict.
The estimation result dictionary that is output from
scipy.optimize.minimize. In addition to the standard keys which are
included, it should also contain the following keys:
`["utility_coefs", "final_gradient", "final_hessian",
"fisher_info"]`.
The "final_gradient", "final_hessian", and "fisher_info" values
should be the gradient, hessian, and Fisher-Information Matrix of
the log likelihood, evaluated at the final parameter vector.
all_params : list of 1D ndarrays.
Should contain the various types of parameters that were actually
estimated.
all_names : list of strings.
Should contain names of each estimated parameter.
Returns
-------
None. Stores all results on the model instance.
|
f7699:c0:m7
|
def _store_optional_parameters(self,
                               optional_params,
                               name_list_attr,
                               default_name_str,
                               all_names,
                               all_params,
                               param_attr_name,
                               series_name):
|
num_elements = optional_params.shape[0]
parameter_names = getattr(self, name_list_attr)
if parameter_names is None:
    parameter_names = [default_name_str.format(x) for x in
                       range(1, num_elements + 1)]
all_names = list(parameter_names) + list(all_names)
all_params.insert(0, optional_params)
self._store_inferential_results(optional_params,
                                index_names=parameter_names,
                                attribute_name=param_attr_name,
                                series_name=series_name)
return all_names, all_params
|
Extract the optional parameters from the `results_dict`, save them
to the model object, and update the list of all parameters and all
parameter names.
Parameters
----------
optional_params : 1D ndarray.
The optional parameters whose values and names should be stored.
name_list_attr : str.
The attribute name on the model object where the names of the
optional estimated parameters will be stored (if they exist).
default_name_str : str.
The name string that will be used to create generic names for the
estimated parameters, in the event that the estimated parameters
do not have names that were specified by the user. Should contain
empty curly braces for use with python string formatting.
all_names : list of strings.
The current list of the names of the estimated parameters. The
names of these optional parameters will be added to the beginning
of this list.
all_params : list of 1D ndarrays.
Each array is a set of estimated parameters. The current optional
parameters will be added to the beginning of this list.
param_attr_name : str.
The attribute name that will be used to store the optional
parameter values on the model object.
series_name : str.
The string that will be used as the name of the series that
contains the optional parameters.
Returns
-------
(all_names, all_params) : tuple.
|
f7699:c0:m8
|
def _adjust_inferential_results_for_parameter_constraints(self,
                                                          constraints):
|
if constraints is not None:
    inferential_attributes = ["standard_errors",
                              "tvalues",
                              "pvalues",
                              "robust_std_errs",
                              "robust_t_stats",
                              "robust_p_vals"]
    assert all([hasattr(self, x) for x in inferential_attributes])
    assert hasattr(self, "params")
    all_names = self.params.index.tolist()
    for series in [getattr(self, x) for x in inferential_attributes]:
        for pos in constraints:
            series.loc[all_names[pos]] = np.nan
return None
|
Ensure that parameters that were constrained during estimation do not
have any values shown for their inferential results. After all, no
inference was performed on them.
Parameters
----------
constraints : list of ints, or None.
If list, should contain the positions in the array of all estimated
parameters that were constrained to their initial values.
Returns
-------
None.
|
f7699:c0:m9
|
def _check_result_dict_for_needed_keys(self, results_dict):
|
missing_cols = [x for x in needed_result_keys if x not in results_dict]
if missing_cols != []:
    msg = "<STR_LIT>"
    raise ValueError(msg.format(missing_cols))
return None
|
Ensure that `results_dict` has the needed keys to store all the
estimation results. Raise a helpful ValueError otherwise.
|
f7699:c0:m10
|
def _add_mixing_variable_names_to_individual_vars(self):
|
assert isinstance(self.ind_var_names, list)
already_included = any(["<STR_LIT>" in x for x in self.ind_var_names])
if self.mixing_vars is not None and not already_included:
    new_ind_var_names = ["<STR_LIT>" + x for x in self.mixing_vars]
    self.ind_var_names += new_ind_var_names
return None
|
Ensure that the model object's mixing variables are added to its list of
individual variables.
|
f7699:c0:m11
|
def store_fit_results(self, results_dict):
|
self._check_result_dict_for_needed_keys(results_dict)
self._store_basic_estimation_results(results_dict)
if not hasattr(self, "design_3d"):
    self.design_3d = None
self._add_mixing_variable_names_to_individual_vars()
all_names = deepcopy(self.ind_var_names)
all_params = [deepcopy(results_dict["utility_coefs"])]
if results_dict["intercept_params"] is not None:
    storage_args = [results_dict["intercept_params"],
                    "intercept_names",
                    "<STR_LIT>",
                    all_names,
                    all_params,
                    "intercepts",
                    "<STR_LIT>"]
    storage_results = self._store_optional_parameters(*storage_args)
    all_names, all_params = storage_results
else:
    self.intercepts = None
if results_dict["shape_params"] is not None:
    storage_args = [results_dict["shape_params"],
                    "shape_names",
                    "<STR_LIT>",
                    all_names,
                    all_params,
                    "shapes",
                    "<STR_LIT>"]
    storage_results = self._store_optional_parameters(*storage_args)
    all_names, all_params = storage_results
else:
    self.shapes = None
if results_dict["nest_params"] is not None:
    storage_args = [results_dict["nest_params"],
                    "nest_names",
                    "<STR_LIT>",
                    all_names,
                    all_params,
                    "nests",
                    "<STR_LIT>"]
    storage_results = self._store_optional_parameters(*storage_args)
    all_names, all_params = storage_results
else:
    self.nests = None
self._store_generic_inference_results(results_dict,
                                      all_params,
                                      all_names)
constraints = results_dict["constrained_pos"]
self._adjust_inferential_results_for_parameter_constraints(constraints)
self._create_results_summary()
self._record_values_for_fit_summary_and_statsmodels()
self._create_fit_summary()
return None
|
Parameters
----------
results_dict : dict.
The estimation result dictionary that is output from
scipy.optimize.minimize. In addition to the standard keys which are
included, it should also contain the following keys:
`["final_gradient", "final_hessian", "fisher_info",
"final_log_likelihood", "chosen_probs", "long_probs", "residuals",
"ind_chi_squareds"]`.
The "final_gradient", "final_hessian", and "fisher_info" values
should be the gradient, hessian, and Fisher-Information Matrix of
the log likelihood, evaluated at the final parameter vector.
Returns
-------
None. Will calculate and store a variety of estimation results and
inferential statistics as attributes of the model instance.
|
f7699:c0:m12
|
def fit_mle(self,
            init_vals,
            print_res=True,
            method="<STR_LIT>",
            loss_tol=1e-06,   # tolerance defaults per the docstring below
            gradient_tol=1e-06,
            maxiter=1000,
            ridge=None,
            *args):
|
msg = "<STR_LIT>"
raise NotImplementedError(msg)
|
Parameters
----------
init_vals : 1D ndarray.
The initial values to start the optimization process with. There
should be one value for each utility coefficient, outside intercept
parameter, shape parameter, and nest parameter being estimated.
print_res : bool, optional.
Determines whether the timing and initial and final log likelihood
results will be printed as they are determined.
method : str, optional.
Should be a valid string which can be passed to
scipy.optimize.minimize. Determines the optimization algorithm
which is used for this problem.
loss_tol : float, optional.
Determines the tolerance on the difference in objective function
values from one iteration to the next which is needed to determine
convergence. Default == 1e-06.
gradient_tol : float, optional.
Determines the tolerance on the difference in gradient values from
one iteration to the next which is needed to determine convergence.
Default == 1e-06.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If an int,
float or long is passed, then that scalar determines the ridge
penalty for the optimization. Default == None.
Returns
-------
None. Saves estimation results to the model instance.
|
f7699:c0:m13
|
def print_summaries(self):
|
if hasattr(self, "fit_summary") and hasattr(self, "summary"):
    print("\n")
    print(self.fit_summary)
    print("=" * 30)
    print(self.summary)
else:
    msg = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    raise NotImplementedError(msg.format(self.model_type) + msg_2)
return None
|
Returns None. Will print the measures of fit and the estimation results
for the model.
|
f7699:c0:m14
|
def conf_int(self, alpha=0.05, coefs=None, return_df=False):  # default per docstring
|
z_critical = scipy.stats.norm.ppf(1.0 - alpha / 2.0,
                                  loc=0, scale=1)
lower = self.params - z_critical * self.standard_errors
upper = self.params + z_critical * self.standard_errors
lower.name = "<STR_LIT>"
upper.name = "<STR_LIT>"
combined = pd.concat((lower, upper), axis=1)
if coefs is not None:
    combined = combined.loc[coefs, :]
if return_df:
    return combined
else:
    return combined.values
|
Creates the dataframe or array of lower and upper bounds for the
100 * (1 - alpha)% confidence intervals of the estimated parameters.
Used when creating the statsmodels summary.
Parameters
----------
alpha : float, optional.
Should be between 0.0 and 1.0. Determines the 100 * (1 - alpha)%
confidence interval that will be reported. Default == 0.05.
coefs : array-like, optional.
Should contain strings that denote the coefficient names that one
wants the confidence intervals for. Default == None because that
will return the confidence interval for all variables.
return_df : bool, optional.
Determines whether the returned value will be a dataframe or a
numpy array. Default = False.
Returns
-------
pandas dataframe or ndarray.
Depends on return_df kwarg. The first column contains the lower
bound to the confidence interval whereas the second column contains
the upper values of the confidence intervals.
|
f7699:c0:m15
|
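The interval above is a normal approximation: for the default alpha = 0.05, z_critical = ppf(1 - 0.05 / 2) ≈ 1.96, so each bound is the estimate plus or minus 1.96 standard errors.

    import scipy.stats

    z_critical = scipy.stats.norm.ppf(1.0 - 0.05 / 2.0)
    assert abs(z_critical - 1.96) < 0.001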
def get_statsmodels_summary(self,
                            title=None,
                            alpha=<NUM_LIT>):
|
try:
    from statsmodels.iolib.summary import Summary
except ImportError:
    print("<STR_LIT>")
    return self.print_summaries()
if not hasattr(self, "<STR_LIT>"):
    msg = "<STR_LIT>"
    raise NotImplementedError(msg)
smry = Summary()
new_yname, new_yname_list = self.choice_col, None
model_name = self.model_type
top_left = [('<STR_LIT>', None),
            ('<STR_LIT>', [model_name]),
            ('<STR_LIT>', ['<STR_LIT>']),
            ('<STR_LIT>', None),
            ('<STR_LIT>', None),
            ('<STR_LIT>', ["<STR_LIT>".format(self.aic)]),
            ('<STR_LIT>', ["<STR_LIT>".format(self.bic)])]
top_right = [('<STR_LIT>', ["<STR_LIT>".format(self.nobs)]),
             ('<STR_LIT>', ["<STR_LIT>".format(self.df_resid)]),
             ('<STR_LIT>', ["<STR_LIT>".format(self.df_model)]),
             ('<STR_LIT>',
              ["<STR_LIT>".format(self.rho_squared)]),
             ('<STR_LIT>',
              ["<STR_LIT>".format(self.rho_bar_squared)]),
             ('<STR_LIT>', ["<STR_LIT>".format(self.llf)]),
             ('<STR_LIT>',
              ["<STR_LIT>".format(self.null_log_likelihood)])]
if title is None:
    title = model_name + ' ' + "<STR_LIT>"
xnames = self.params.index.tolist()
smry.add_table_2cols(self,
                     gleft=top_left,
                     gright=top_right,
                     yname=new_yname,
                     xname=xnames,
                     title=title)
smry.add_table_params(self,
                      yname=[new_yname_list],
                      xname=xnames,
                      alpha=alpha,
                      use_t=False)
return smry
|
Parameters
----------
title : str, or None, optional.
Will be the title of the returned summary. If None, the default
title is used.
alpha : float, optional.
Should be between 0.0 and 1.0. Determines the width of the
displayed 100 * (1 - alpha)% confidence interval.
Returns
-------
statsmodels.summary object or None.
|
f7699:c0:m16
|
def check_param_list_validity(self, param_list):
|
if param_list is None:<EOL><INDENT>return None<EOL><DEDENT>check_type_and_size_of_param_list(param_list, <NUM_LIT:4>)<EOL>check_type_of_param_list_elements(param_list)<EOL>check_dimensional_equality_of_param_list_arrays(param_list)<EOL>if len(param_list[<NUM_LIT:0>].shape) == <NUM_LIT:2>:<EOL><INDENT>check_num_columns_in_param_list_arrays(param_list)<EOL><DEDENT>num_index_coefs = len(self.ind_var_names)<EOL>check_num_rows_of_parameter_array(param_list[<NUM_LIT:0>],<EOL>num_index_coefs,<EOL>'<STR_LIT>')<EOL>if param_list[<NUM_LIT:1>] is not None:<EOL><INDENT>num_intercepts = (<NUM_LIT:0> if self.intercept_names is None else<EOL>len(self.intercept_names))<EOL>check_num_rows_of_parameter_array(param_list[<NUM_LIT:1>],<EOL>num_intercepts,<EOL>'<STR_LIT>')<EOL><DEDENT>if param_list[<NUM_LIT:2>] is not None:<EOL><INDENT>num_shapes = (<NUM_LIT:0> if self.shape_names is None else<EOL>len(self.shape_names))<EOL>check_num_rows_of_parameter_array(param_list[<NUM_LIT:2>],<EOL>num_shapes,<EOL>'<STR_LIT>')<EOL><DEDENT>if param_list[<NUM_LIT:3>] is not None:<EOL><INDENT>num_nests = (<NUM_LIT:0> if self.nest_names is None else<EOL>len(self.nest_names))<EOL>check_num_rows_of_parameter_array(param_list[<NUM_LIT:3>],<EOL>num_nests,<EOL>'<STR_LIT>')<EOL><DEDENT>return None<EOL>
|
Parameters
----------
param_list : list.
Contains four elements, each being a numpy array. Either all of the
arrays should be 1D or all of the arrays should be 2D. If 2D, the
arrays should have the same number of columns. Each column being a
particular set of parameter values that one wants to predict with.
The first element in the list should be the index coefficients. The
second element should contain the 'outside' intercept parameters if
there are any, or None otherwise. The third element should contain
the shape parameters if there are any or None otherwise. The fourth
element should contain the nest coefficients if there are any or
None otherwise. Default == None.
Returns
-------
None. Will check whether `param_list` and its elements meet all
requirements specified above and required for correct calculation of
the probabilities to be predicted.
|
f7699:c0:m17
|
def predict(self,<EOL>data,<EOL>param_list=None,<EOL>return_long_probs=True,<EOL>choice_col=None,<EOL>num_draws=None,<EOL>seed=None):
|
<EOL>dataframe = get_dataframe_from_data(data)<EOL>add_intercept_to_dataframe(self.specification, dataframe)<EOL>for column in [self.alt_id_col,<EOL>self.obs_id_col,<EOL>self.mixing_id_col]:<EOL><INDENT>if column is not None:<EOL><INDENT>ensure_columns_are_in_dataframe([column], dataframe)<EOL><DEDENT><DEDENT>self.check_param_list_validity(param_list)<EOL>check_for_choice_col_based_on_return_long_probs(return_long_probs,<EOL>choice_col)<EOL>new_alt_IDs = dataframe[self.alt_id_col].values<EOL>new_design_res = create_design_matrix(dataframe,<EOL>self.specification,<EOL>self.alt_id_col,<EOL>names=self.name_spec)<EOL>new_design = new_design_res[<NUM_LIT:0>]<EOL>mapping_res = create_long_form_mappings(dataframe,<EOL>self.obs_id_col,<EOL>self.alt_id_col,<EOL>choice_col=choice_col,<EOL>nest_spec=self.nest_spec,<EOL>mix_id_col=self.mixing_id_col)<EOL>new_rows_to_obs = mapping_res["<STR_LIT>"]<EOL>new_rows_to_alts = mapping_res["<STR_LIT>"]<EOL>new_chosen_to_obs = mapping_res["<STR_LIT>"]<EOL>new_rows_to_nests = mapping_res["<STR_LIT>"]<EOL>new_rows_to_mixers = mapping_res["<STR_LIT>"]<EOL>if param_list is None:<EOL><INDENT>new_index_coefs = self.coefs.values<EOL>new_intercepts = (self.intercepts.values if self.intercepts<EOL>is not None else None)<EOL>new_shape_params = (self.shapes.values if self.shapes<EOL>is not None else None)<EOL>new_nest_coefs = (self.nests.values if self.nests<EOL>is not None else None)<EOL><DEDENT>else:<EOL><INDENT>new_index_coefs = param_list[<NUM_LIT:0>]<EOL>new_intercepts = param_list[<NUM_LIT:1>]<EOL>new_shape_params = param_list[<NUM_LIT:2>]<EOL>new_nest_coefs = param_list[<NUM_LIT:3>]<EOL><DEDENT>if self.model_type == "<STR_LIT>":<EOL><INDENT>new_natural_nests = naturalize_nest_coefs(new_nest_coefs)<EOL>if return_long_probs:<EOL><INDENT>return_string = "<STR_LIT>"<EOL><DEDENT>else:<EOL><INDENT>return_string = "<STR_LIT>"<EOL><DEDENT>return calc_nested_probs(new_natural_nests,<EOL>new_index_coefs,<EOL>new_design,<EOL>new_rows_to_obs,<EOL>new_rows_to_nests,<EOL>chosen_row_to_obs=new_chosen_to_obs,<EOL>return_type=return_string)<EOL><DEDENT>elif self.model_type == "<STR_LIT>":<EOL><INDENT>num_mixing_units = new_rows_to_mixers.shape[<NUM_LIT:1>]<EOL>draw_list = mlc.get_normal_draws(num_mixing_units,<EOL>num_draws,<EOL>len(self.mixing_pos),<EOL>seed=seed)<EOL>design_args = (new_design,<EOL>draw_list,<EOL>self.mixing_pos,<EOL>new_rows_to_mixers)<EOL>new_design_3d = mlc.create_expanded_design_for_mixing(*design_args)<EOL>prob_args = (new_index_coefs,<EOL>new_design_3d,<EOL>new_alt_IDs,<EOL>new_rows_to_obs,<EOL>new_rows_to_alts,<EOL>self.utility_transform)<EOL>prob_kwargs = {"<STR_LIT>": new_intercepts,<EOL>"<STR_LIT>": new_shape_params,<EOL>"<STR_LIT>": new_chosen_to_obs,<EOL>"<STR_LIT>": return_long_probs}<EOL>prob_array = calc_probabilities(*prob_args, **prob_kwargs)<EOL>return prob_array.mean(axis=<NUM_LIT:1>)<EOL><DEDENT>else:<EOL><INDENT>return calc_probabilities(new_index_coefs,<EOL>new_design,<EOL>new_alt_IDs,<EOL>new_rows_to_obs,<EOL>new_rows_to_alts,<EOL>self.utility_transform,<EOL>intercept_params=new_intercepts,<EOL>shape_params=new_shape_params,<EOL>chosen_row_to_obs=new_chosen_to_obs,<EOL>return_long_probs=return_long_probs)<EOL><DEDENT>
|
Parameters
----------
data : string or pandas dataframe.
If string, data should be an absolute or relative path to a CSV
file containing the long format data for this choice model. Note that
long format has one row per available alternative for each
observation. If pandas dataframe, the dataframe should be the long
format data for the choice model. The data should include all of
the same columns as the original data used to construct the choice
model, with the sole exception of the "intercept" column. If needed
the "intercept" column will be dynamically created.
param_list : list, optional.
Contains four elements, each being a numpy array or None. Either
all of the arrays should be 1D or all of the arrays should be 2D.
If 2D, the arrays should have the same number of columns. Each
column should be a particular set of parameter values that one
wants to predict with. The first element in the list should
contain the index coefficients. The second element should contain
the 'outside' intercept parameters if there are any, or None
otherwise. The third element should contain the shape parameters
if there are any or None otherwise. The fourth element should
contain the nest coefficients if there are any or None otherwise.
Default == None.
return_long_probs : bool, optional.
Indicates whether or not the long format probabilities (a 1D numpy
array with one element per observation per available alternative)
should be returned. Default == True.
choice_col : str, optional.
Denotes the column in `data` which contains a one if the
alternative pertaining to the given row was the observed outcome
for the observation pertaining to the given row and a zero
otherwise. Default == None.
num_draws : int, or None, optional.
Should be greater than zero. Denotes the number of draws that we
are making from each normal distribution. This kwarg is only used
if self.model_type == "Mixed Logit Model". Default == None.
seed : int, or None, optional.
If an int is passed, it should be greater than zero. Denotes the
value to be used in seeding the random generator used to generate
the draws from the normal distribution. This kwarg is only used if
self.model_type == "Mixed Logit Model". Default == None.
Returns
-------
numpy array or tuple of two numpy arrays.
If `choice_col` is passed AND `return_long_probs is True`, then the
tuple `(chosen_probs, long_probs)` is returned. If
`return_long_probs is True` and `choice_col is None`, then
`long_probs` is returned. If `choice_col` is passed and
`return_long_probs is False`, then `chosen_probs` is returned.
`chosen_probs` is a 1D numpy array of shape (num_observations,).
Each element is the probability of the corresponding observation
being associated with its realized outcome.
`long_probs` is a 1D numpy array with one element per observation
per available alternative for that observation. Each element is the
probability of the corresponding observation being associated with
that row's corresponding alternative.
It is NOT valid to have `choice_col == None` and
`return_long_probs == False`.
|
f7699:c0:m18
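A hypothetical usage sketch for `predict`, assuming `model` is an already-fitted pylogit-style model object, `new_data` is a long-format dataframe with the columns used at estimation time, and `'choice'` is an assumed column name:

    # Long-format probabilities only: one element per row of new_data.
    long_probs = model.predict(new_data)

    # Chosen-alternative probabilities plus long-format probabilities.
    chosen_probs, long_probs = model.predict(new_data,
                                             choice_col='choice',
                                             return_long_probs=True)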
|
def to_pickle(self, filepath):
|
if not isinstance(filepath, str):<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>if not filepath.endswith("<STR_LIT>"):<EOL><INDENT>filepath = filepath + "<STR_LIT>"<EOL><DEDENT>with open(filepath, "<STR_LIT:wb>") as f:<EOL><INDENT>pickle.dump(self, f)<EOL><DEDENT>print("<STR_LIT>".format(filepath))<EOL>return None<EOL>
|
Parameters
----------
filepath : str.
Should end in .pkl. If it does not, ".pkl" will be appended to the
passed string.
Returns
-------
None. Saves the model object to the location specified by `filepath`.
|
f7699:c0:m19
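A generic pickle round-trip showing what `to_pickle` does under the hood; the object and file name here are placeholders, not part of the library:

    import pickle

    obj = {"coefs": [0.5, -1.2]}  # stand-in for a fitted model object
    with open("saved_model.pkl", "wb") as f:
        pickle.dump(obj, f)
    with open("saved_model.pkl", "rb") as f:
        restored = pickle.load(f)
    assert restored == obj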
|
def calc_nested_probs(nest_coefs,<EOL>index_coefs,<EOL>design,<EOL>rows_to_obs,<EOL>rows_to_nests,<EOL>chosen_row_to_obs=None,<EOL>return_type="<STR_LIT>",<EOL>*args,<EOL>**kwargs):
|
<EOL>try:<EOL><INDENT>assert len(index_coefs.shape) <= <NUM_LIT:2><EOL>assert (len(index_coefs.shape) == <NUM_LIT:1>) or (index_coefs.shape[<NUM_LIT:1>] == <NUM_LIT:1>)<EOL>assert len(nest_coefs.shape) <= <NUM_LIT:2><EOL>assert (len(nest_coefs.shape) == <NUM_LIT:1>) or (nest_coefs.shape[<NUM_LIT:1>] == <NUM_LIT:1>)<EOL><DEDENT>except AssertionError:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise NotImplementedError(msg)<EOL><DEDENT>valid_return_types = ['<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>']<EOL>if return_type not in valid_return_types:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg + str(valid_return_types))<EOL><DEDENT>chosen_probs_needed = ['<STR_LIT>', '<STR_LIT>']<EOL>if chosen_row_to_obs is None and return_type in chosen_probs_needed:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg.format(chosen_probs_needed) +<EOL>"<STR_LIT>")<EOL><DEDENT>index_vals = design.dot(index_coefs)<EOL>long_nest_coefs = rows_to_nests.dot(nest_coefs)<EOL>scaled_index = index_vals / long_nest_coefs<EOL>pos_inf_idx = np.isposinf(scaled_index)<EOL>neg_inf_idx = np.isneginf(scaled_index)<EOL>scaled_index[pos_inf_idx] = max_comp_value<EOL>scaled_index[neg_inf_idx] = -<NUM_LIT:1> * max_comp_value<EOL>exp_scaled_index = np.exp(scaled_index)<EOL>inf_idx = np.isposinf(exp_scaled_index)<EOL>exp_scaled_index[inf_idx] = max_comp_value<EOL>zero_idx = (exp_scaled_index == <NUM_LIT:0>)<EOL>exp_scaled_index[zero_idx] = min_comp_value<EOL>ind_exp_sums_per_nest = (rows_to_obs.T *<EOL>rows_to_nests.multiply(exp_scaled_index[:, None]))<EOL>if isinstance(ind_exp_sums_per_nest, np.matrixlib.defmatrix.matrix):<EOL><INDENT>ind_exp_sums_per_nest = np.asarray(ind_exp_sums_per_nest)<EOL><DEDENT>elif issparse(ind_exp_sums_per_nest):<EOL><INDENT>ind_exp_sums_per_nest = ind_exp_sums_per_nest.toarray()<EOL><DEDENT>inf_idx = np.isposinf(ind_exp_sums_per_nest)<EOL>ind_exp_sums_per_nest[inf_idx] = max_comp_value<EOL>long_exp_sums_per_nest = rows_to_obs.dot(ind_exp_sums_per_nest)<EOL>if isinstance(long_exp_sums_per_nest, np.matrixlib.defmatrix.matrix):<EOL><INDENT>long_exp_sums_per_nest = np.asarray(long_exp_sums_per_nest)<EOL><DEDENT>long_exp_sums = (rows_to_nests.multiply(long_exp_sums_per_nest)<EOL>.sum(axis=<NUM_LIT:1>)<EOL>.A).ravel()<EOL>ind_denom = (np.power(ind_exp_sums_per_nest,<EOL>nest_coefs[None, :])<EOL>.sum(axis=<NUM_LIT:1>))<EOL>inf_idx = np.isposinf(ind_denom)<EOL>ind_denom[inf_idx] = max_comp_value<EOL>zero_idx = (ind_denom == <NUM_LIT:0>)<EOL>ind_denom[zero_idx] = min_comp_value<EOL>long_denom = rows_to_obs.dot(ind_denom)<EOL>long_denom = long_denom.ravel()<EOL>long_numerators = (exp_scaled_index *<EOL>np.power(long_exp_sums,<EOL>(long_nest_coefs - <NUM_LIT:1>)))<EOL>inf_idx = np.isposinf(long_numerators)<EOL>long_numerators[inf_idx] = max_comp_value<EOL>zero_idx = (long_numerators == <NUM_LIT:0>)<EOL>long_numerators[zero_idx] = min_comp_value<EOL>long_probs = (long_numerators / long_denom).ravel()<EOL>long_probs[np.where(long_probs == <NUM_LIT:0>)] = min_comp_value<EOL>if chosen_row_to_obs is None:<EOL><INDENT>chosen_probs = None<EOL><DEDENT>else:<EOL><INDENT>chosen_probs = (chosen_row_to_obs.transpose()<EOL>.dot(long_probs))<EOL>chosen_probs = np.asarray(chosen_probs).ravel()<EOL><DEDENT>if return_type == '<STR_LIT>':<EOL><INDENT>return chosen_probs, long_probs<EOL><DEDENT>elif return_type == '<STR_LIT>':<EOL><INDENT>return long_probs<EOL><DEDENT>elif return_type == '<STR_LIT>':<EOL><INDENT>return chosen_probs<EOL><DEDENT>elif return_type == '<STR_LIT>':<EOL><INDENT>prob_dict = {}<EOL>prob_dict["<STR_LIT>"] = long_probs<EOL>prob_dict["<STR_LIT>"] = chosen_probs<EOL>prob_given_nest = exp_scaled_index / long_exp_sums<EOL>zero_idx = (prob_given_nest == <NUM_LIT:0>)<EOL>prob_given_nest[zero_idx] = min_comp_value<EOL>nest_choice_probs = (np.power(ind_exp_sums_per_nest,<EOL>nest_coefs[None, :]) /<EOL>ind_denom[:, None])<EOL>zero_idx = (nest_choice_probs == <NUM_LIT:0>)<EOL>nest_choice_probs[zero_idx] = min_comp_value<EOL>prob_dict["<STR_LIT>"] = prob_given_nest<EOL>prob_dict["<STR_LIT>"] = nest_choice_probs<EOL>prob_dict["<STR_LIT>"] = ind_exp_sums_per_nest<EOL>return prob_dict<EOL><DEDENT>
|
Parameters
----------
nest_coefs : 1D or 2D ndarray.
All elements should be ints, floats, or longs. If 1D, should have 1
element for each nesting coefficient being estimated. If 2D, should
have 1 column for each set of nesting coefficients being used to
predict the probabilities of each alternative being chosen. There
should be one row per nesting coefficient. Elements denote the inverse
of the scale coefficients for each of the lower level nests.
index_coefs : 1D or 2D ndarray.
All elements should be ints, floats, or longs. If 1D, should have 1
element for each utility coefficient being estimated (i.e.
num_features). If 2D, should have 1 column for each set of coefficients
being used to predict the probabilities of each alternative being
chosen. There should be one row per index coefficient.
design : 2D ndarray.
There should be one row per observation per available alternative.
There should be one column per utility coefficient being estimated. All
elements should be ints, floats, or longs.
rows_to_obs : 2D scipy sparse array.
There should be one row per observation per available alternative and
one column per observation. This matrix maps the rows of the design
matrix to the unique observations (on the columns).
rows_to_nests : 2D scipy sparse array.
There should be one row per observation per available alternative and
one column per nest. This matrix maps the rows of the design matrix to
the unique nests (on the columns).
chosen_row_to_obs : 2D scipy sparse array, or None, optional.
There should be one row per observation per available alternative and
one column per observation. This matrix indicates, for each observation
(on the columns), which rows of the design matrix were the realized
outcome. If an array is passed then an array of shape
(num_observations,) can be returned and each element will be the
probability of the realized outcome of the given observation.
Default == None.
return_type : str, optional.
Indicates what object(s) are to be returned from the function. Valid
values are: `['long_probs', 'chosen_probs', 'long_and_chosen_probs',
'all_prob_dict']`. If `long_probs`, the long format probabilities (a 1D
numpy array with one element per observation per available alternative)
will be returned. If `chosen_probs`, a 1D numpy array with one element
per observation will be returned, where the values are the
probabilities of the chosen alternative for the given observation. If
`long_and_chosen_probs`, a tuple of chosen_probs and long_probs will be
returned. If `all_prob_dict`, a dictionary will be returned. The values
will all be 1D numpy arrays of probabilities dictated by the value's
corresponding key. The keys will be `long_probs`, `nest_choice_probs`,
`prob_given_nest`, and `chosen_probs`. If chosen_row_to_obs is None,
then `chosen_probs` will be None. If `chosen_row_to_obs` is passed,
then `chosen_probs` will be a 1D array as described above.
`nest_choice_probs` is of the same shape as `rows_to_nests` and it
denotes the probability of each individual choosing each of the
possible nests. `prob_given_nest` is of the same shape as `long_probs`
and it denotes the probability of the individual associated with a
given row choosing the alternative associated with that row, given that
the individual chooses the nest that contains the given alternative.
Default == `long_probs`.
Returns
-------
See above for documentation of the `return_type` kwarg.
|
f7700:m0
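A tiny worked example of the two-level nested logit formula that `calc_nested_probs` vectorizes with sparse mapping matrices; the utilities, nest structure, and nest coefficients below are made up:

    import numpy as np

    V = np.array([1.0, 0.5, 0.2])      # systematic utilities, 3 alternatives
    nest_of_alt = np.array([0, 0, 1])  # alts 0, 1 in nest 0; alt 2 in nest 1
    lam = np.array([0.8, 1.0])         # nest coefficients (inverse scales)

    # Per-nest sums of exp(V / lambda).
    nest_sums = np.array([np.exp(V[nest_of_alt == m] / lam[m]).sum()
                          for m in range(2)])
    denom = np.power(nest_sums, lam).sum()
    probs = (np.exp(V / lam[nest_of_alt]) *
             np.power(nest_sums[nest_of_alt], lam[nest_of_alt] - 1)) / denom
    print(probs, probs.sum())          # the probabilities sum to 1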
|
def calc_nested_log_likelihood(nest_coefs,<EOL>index_coefs,<EOL>design,<EOL>rows_to_obs,<EOL>rows_to_nests,<EOL>choice_vector,<EOL>ridge=None,<EOL>weights=None,<EOL>*args,<EOL>**kwargs):
|
<EOL>long_probs = calc_nested_probs(nest_coefs,<EOL>index_coefs,<EOL>design,<EOL>rows_to_obs,<EOL>rows_to_nests,<EOL>return_type='<STR_LIT>')<EOL>if weights is None:<EOL><INDENT>weights = <NUM_LIT:1><EOL><DEDENT>log_likelihood = choice_vector.dot(weights * np.log(long_probs))<EOL>if ridge is None:<EOL><INDENT>return log_likelihood<EOL><DEDENT>else:<EOL><INDENT>params = np.concatenate(((nest_coefs - <NUM_LIT:1.0>), index_coefs), axis=<NUM_LIT:0>)<EOL>return log_likelihood - ridge * np.square(params).sum()<EOL><DEDENT>
|
Parameters
----------
nest_coefs : 1D or 2D ndarray.
All elements should be ints, floats, or longs. If 1D, should have 1
element for each nesting coefficient being estimated. If 2D, should
have 1 column for each set of nesting coefficients being used to
predict the probabilities of each alternative being chosen. There
should be one row per nesting coefficient. Elements denote the inverse
of the scale coefficients for each of the lower level nests.
index_coefs : 1D or 2D ndarray.
All elements should be ints, floats, or longs. If 1D, should have 1
element for each utility coefficient being estimated
(i.e. num_features). If 2D, should have 1 column for each set of
coefficients being used to predict the probabilities of choosing each
alternative. There should be one row per index coefficient.
design : 2D ndarray.
There should be one row per observation per available alternative.
There should be one column per utility coefficient being estimated. All
elements should be ints, floats, or longs.
rows_to_obs : 2D scipy sparse array.
There should be one row per observation per available alternative and
one column per observation. This matrix maps the rows of the design
matrix to the unique observations (on the columns).
rows_to_nests : 2D scipy sparse array.
There should be one row per observation per available alternative and
one column per nest. This matrix maps the rows of the design matrix to
the unique nests (on the columns).
choice_vector : 1D ndarray.
All elements should be either ones or zeros. There should be one row
per observation per available alternative for the given observation.
Elements denote the alternative chosen by the given
observation: a one for the chosen alternative and a zero otherwise.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If an int,
float or long is passed, then that scalar determines the ridge penalty
for the optimization. Default = None.
weights : 1D ndarray or None, optional.
Allows for the calculation of weighted log-likelihoods. The weights can
represent various things. In stratified samples, the weights may be
the proportion of the observations in a given strata for a sample in
relation to the proportion of observations in that strata in the
population. In latent class models, the weights may be the probability
of being a particular class.
Returns
-------
log_likelihood : float.
The log likelihood of the nested logit model. Includes ridge penalty if
a penalized regression is being performed.
|
f7700:m1
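A minimal numeric illustration of the (optionally weighted) log-likelihood computation, `sum(choice * weight * ln(prob))`; all inputs below are made up:

    import numpy as np

    long_probs = np.array([0.6, 0.4, 0.7, 0.3])  # hypothetical probabilities
    choices = np.array([1, 0, 0, 1])             # chosen-alternative indicators
    weights = np.array([1.0, 1.0, 2.0, 2.0])     # e.g. stratification weights

    log_likelihood = choices.dot(weights * np.log(long_probs))
    print(log_likelihood)                        # == 1 * ln(0.6) + 2 * ln(0.3)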
|
def prep_vectors_for_gradient(nest_coefs,<EOL>index_coefs,<EOL>design,<EOL>choice_vec,<EOL>rows_to_obs,<EOL>rows_to_nests,<EOL>*args,<EOL>**kwargs):
|
<EOL>long_nest_params = (rows_to_nests.multiply(nest_coefs[None, :])<EOL>.sum(axis=<NUM_LIT:1>)<EOL>.A<EOL>.ravel())<EOL>scaled_y = choice_vec / long_nest_params<EOL>inf_index = np.isinf(scaled_y)<EOL>scaled_y[inf_index] = max_comp_value<EOL>obs_to_chosen_nests = (rows_to_obs.T *<EOL>rows_to_nests.multiply(choice_vec[:, None])).A<EOL>row_to_chosen_nest = rows_to_obs * obs_to_chosen_nests<EOL>long_chosen_nest = (rows_to_nests.multiply(row_to_chosen_nest)<EOL>.sum(axis=<NUM_LIT:1>)<EOL>.A<EOL>.ravel())<EOL>prob_dict = calc_nested_probs(nest_coefs,<EOL>index_coefs,<EOL>design,<EOL>rows_to_obs,<EOL>rows_to_nests,<EOL>return_type='<STR_LIT>')<EOL>p_tilde_row_given_nest = (prob_dict["<STR_LIT>"] *<EOL>long_chosen_nest /<EOL>long_nest_params)<EOL>inf_index = np.isinf(p_tilde_row_given_nest)<EOL>p_tilde_row_given_nest[inf_index] = max_comp_value<EOL>desired_arrays = {}<EOL>desired_arrays["<STR_LIT>"] = long_nest_params.ravel()<EOL>desired_arrays["<STR_LIT>"] = scaled_y.ravel()<EOL>desired_arrays["<STR_LIT>"] = long_chosen_nest<EOL>desired_arrays["<STR_LIT>"] = obs_to_chosen_nests<EOL>desired_arrays["<STR_LIT>"] = p_tilde_row_given_nest<EOL>desired_arrays["<STR_LIT>"] = prob_dict["<STR_LIT>"]<EOL>desired_arrays["<STR_LIT>"] = prob_dict["<STR_LIT>"]<EOL>desired_arrays["<STR_LIT>"] = prob_dict["<STR_LIT>"]<EOL>desired_arrays["<STR_LIT>"] = prob_dict["<STR_LIT>"]<EOL>return desired_arrays<EOL>
|
Parameters
----------
nest_coefs : 1D or 2D ndarray.
All elements should be ints, floats, or longs. If 1D, should have 1
element for each nesting coefficient being estimated. If 2D, should
have 1 column for each set of nesting coefficients being used to
predict the probabilities of each alternative being chosen. There
should be one row per nesting coefficient. Elements denote the inverse
of the scale coefficients for each of the lower level nests. Note, this
is NOT THE LOGIT of the inverse of the scale coefficients.
index_coefs : 1D or 2D ndarray.
All elements should be ints, floats, or longs. If 1D, should have 1
element for each utility coefficient being estimated
(i.e. num_features). If 2D, should have 1 column for each set of
coefficients being used to predict the probabilities of choosing each
alternative. There should be one row per index coefficient.
design : 2D ndarray.
There should be one row per observation per available alternative.
There should be one column per utility coefficient being estimated.
All elements should be ints, floats, or longs.
choice_vec : 1D ndarray.
All elements should be ints, floats, or longs. Each element represents
whether the individual associated with the given row chose the
alternative associated with the given row. Should have the same number
of rows as `design`.
rows_to_obs : 2D scipy sparse array.
There should be one row per observation per available alternative and
one column per observation. This matrix maps the rows of the design
matrix to the unique observations (on the columns).
rows_to_nests : 2D scipy sparse array.
There should be one row per observation per available alternative and
one column per nest. This matrix maps the rows of the design matrix to
the unique nests (on the columns).
Returns
-------
desired_arrays : dict.
Will contain the arrays necessary for calculating the gradient of the
nested logit log-likelihood. The keys will be:
`["long_nest_params", "scaled_y", "long_chosen_nest",
"obs_to_chosen_nests", "p_tilde_given_nest", "long_probs",
"prob_given_nest", "nest_choice_probs", "ind_sums_per_nest"]`
|
f7700:m2
|
def naturalize_nest_coefs(nest_coef_estimates):
|
<EOL>exp_term = np.exp(-<NUM_LIT:1> * nest_coef_estimates)<EOL>inf_idx = np.isinf(exp_term)<EOL>exp_term[inf_idx] = max_comp_value<EOL>nest_coefs = <NUM_LIT:1.0> / (<NUM_LIT:1.0> + exp_term)<EOL>zero_idx = (nest_coefs == <NUM_LIT:0>)<EOL>nest_coefs[zero_idx] = min_comp_value<EOL>return nest_coefs<EOL>
|
Parameters
----------
nest_coef_estimates : 1D ndarray.
Should contain the estimated logits
(`ln[nest_coefs / (1 - nest_coefs)]`) of the true nest coefficients.
All values should be ints, floats, or longs.
Returns
-------
nest_coefs : 1D ndarray.
Will contain the 'natural' nest coefficients:
`1.0 / (1.0 + exp(-nest_coef_estimates))`.
|
f7700:m3
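The logistic ('naturalizing') transform applied by `naturalize_nest_coefs`, shown on made-up logit-scale estimates:

    import numpy as np

    logit_estimates = np.array([-2.0, 0.0, 3.0])
    natural_coefs = 1.0 / (1.0 + np.exp(-1 * logit_estimates))
    print(natural_coefs)  # values lie strictly in (0, 1); 0.5 at a logit of 0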
|
def calc_nested_gradient(orig_nest_coefs,<EOL>index_coefs,<EOL>design,<EOL>choice_vec,<EOL>rows_to_obs,<EOL>rows_to_nests,<EOL>ridge=None,<EOL>weights=None,<EOL>use_jacobian=True,<EOL>*args,<EOL>**kwargs):
|
<EOL>if weights is None:<EOL><INDENT>weights = np.ones(design.shape[<NUM_LIT:0>])<EOL><DEDENT>weights_per_obs = np.max(rows_to_obs.toarray() * weights[:, None], axis=<NUM_LIT:0>)<EOL>nest_coefs = naturalize_nest_coefs(orig_nest_coefs)<EOL>vector_dict = prep_vectors_for_gradient(nest_coefs,<EOL>index_coefs,<EOL>design,<EOL>choice_vec,<EOL>rows_to_obs,<EOL>rows_to_nests)<EOL>sys_utility = design.dot(index_coefs)<EOL>long_w = sys_utility / vector_dict["<STR_LIT>"]<EOL>inf_index = np.isposinf(long_w)<EOL>long_w[inf_index] = max_comp_value<EOL>log_exp_sums = np.log(vector_dict["<STR_LIT>"])<EOL>log_exp_sums[np.isneginf(log_exp_sums)] = -<NUM_LIT:1> * max_comp_value<EOL>nest_gradient_term_1 = ((vector_dict["<STR_LIT>"] -<EOL>vector_dict["<STR_LIT>"]) *<EOL>log_exp_sums *<EOL>weights_per_obs[:, None]).sum(axis=<NUM_LIT:0>)<EOL>half_deriv = ((vector_dict["<STR_LIT>"] -<EOL>vector_dict["<STR_LIT>"] *<EOL>vector_dict["<STR_LIT>"]) *<EOL>long_w *<EOL>weights)<EOL>nest_gradient_term_2 = (rows_to_nests.transpose()<EOL>.dot(half_deriv)[:, None]).ravel()<EOL>nest_gradient_term_3a = (choice_vec -<EOL>vector_dict["<STR_LIT>"] *<EOL>vector_dict["<STR_LIT>"])<EOL>nest_gradient_term_3b = ((-<NUM_LIT:1> * nest_gradient_term_3a * long_w * weights) /<EOL>vector_dict["<STR_LIT>"])<EOL>inf_idx = np.isposinf(nest_gradient_term_3b)<EOL>nest_gradient_term_3b[inf_idx] = max_comp_value<EOL>neg_inf_idx = np.isneginf(nest_gradient_term_3b)<EOL>nest_gradient_term_3b[neg_inf_idx] = -<NUM_LIT:1> * max_comp_value<EOL>nest_gradient_term_3 = (rows_to_nests.transpose()<EOL>.dot(nest_gradient_term_3b)).ravel()<EOL>if use_jacobian:<EOL><INDENT>jacobian = nest_coefs * (<NUM_LIT:1.0> - nest_coefs)<EOL><DEDENT>else:<EOL><INDENT>jacobian = <NUM_LIT:1><EOL><DEDENT>nest_gradient = ((nest_gradient_term_1 +<EOL>nest_gradient_term_2 +<EOL>nest_gradient_term_3) *<EOL>jacobian)[None, :]<EOL>beta_gradient_term_1 = ((vector_dict["<STR_LIT>"] -<EOL>vector_dict["<STR_LIT>"] +<EOL>vector_dict["<STR_LIT>"] *<EOL>vector_dict["<STR_LIT>"] -<EOL>vector_dict["<STR_LIT>"]) *<EOL>weights)[None, :]<EOL>beta_gradient = beta_gradient_term_1.dot(design)<EOL>gradient = np.concatenate((nest_gradient, beta_gradient), axis=<NUM_LIT:1>).ravel()<EOL>if ridge is not None:<EOL><INDENT>params = np.concatenate(((<NUM_LIT:20> - orig_nest_coefs), index_coefs), axis=<NUM_LIT:0>)<EOL>gradient -= <NUM_LIT:2> * ridge * params<EOL><DEDENT>return gradient<EOL>
|
Parameters
----------
orig_nest_coefs : 1D or 2D ndarray.
All elements should be ints, floats, or longs. If 1D, should have 1
element for each nesting coefficient being estimated. If 2D, should
have 1 column for each set of nesting coefficients being used to
predict the probabilities of each alternative being chosen. There
should be one row per nesting coefficient. Elements denote the logit of
the inverse of the scale coefficients for each of the lower level nests.
index_coefs : 1D or 2D ndarray.
All elements should be ints, floats, or longs. If 1D, should have 1
element for each utility coefficient being estimated
(i.e. num_features). If 2D, should have 1 column for each set of
coefficients being used to predict the probabilities of choosing each
alternative. There should be one row per index coefficient.
design : 2D ndarray.
There should be one row per observation per available alternative. There
should be one column per utility coefficient being estimated. All
elements should be ints, floats, or longs.
choice_vec : 1D ndarray.
All elements should be ints, floats, or longs. Each element represents
whether the individual associated with the given row chose the
alternative associated with the given row.
rows_to_obs : 2D scipy sparse array.
There should be one row per observation per available alternative and
one column per observation. This matrix maps the rows of the design
matrix to the unique observations (on the columns).
rows_to_nests : 2D scipy sparse array.
There should be one row per observation per available alternative and
one column per nest. This matrix maps the rows of the design matrix to
the unique nests (on the columns).
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If an int,
float or long is passed, then that scalar determines the ridge penalty
for the optimization. Default `== None`.
weights : 1D ndarray or None.
Allows for the calculation of weighted log-likelihoods. The weights can
represent various things. In stratified samples, the weights may be
the proportion of the observations in a given strata for a sample in
relation to the proportion of observations in that strata in the
population. In latent class models, the weights may be the probability
of being a particular class.
use_jacobian : bool, optional.
Determines whether or not the jacobian will be used when calculating
the gradient. When performing model estimation, `use_jacobian` should
be `True` if the values being estimated are actually the logit of the
nest coefficients. Default `== True`.
Returns
-------
gradient : 1D numpy array.
The gradient of the log-likelihood with respect to the given nest
coefficients and index coefficients.
|
f7700:m4
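The chain-rule factor used when `use_jacobian == True`: if `lam = logistic(theta)`, then `d(lam)/d(theta) = lam * (1 - lam)`, which multiplies the gradient taken with respect to the natural nest coefficients. The values below are made up:

    import numpy as np

    theta = np.array([-1.0, 0.5])        # logit-scale nest parameters
    lam = 1.0 / (1.0 + np.exp(-theta))   # natural nest coefficients
    jacobian = lam * (1.0 - lam)         # derivative of the logistic transform
    print(jacobian)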
|
def calc_bhhh_hessian_approximation(orig_nest_coefs,<EOL>index_coefs,<EOL>design,<EOL>choice_vec,<EOL>rows_to_obs,<EOL>rows_to_nests,<EOL>ridge=None,<EOL>weights=None,<EOL>use_jacobian=True,<EOL>*args,<EOL>**kwargs):
|
<EOL>if weights is None:<EOL><INDENT>weights = np.ones(design.shape[<NUM_LIT:0>])<EOL><DEDENT>weights_per_obs = np.max(rows_to_obs.toarray() * weights[:, None], axis=<NUM_LIT:0>)<EOL>nest_coefs = naturalize_nest_coefs(orig_nest_coefs)<EOL>vector_dict = prep_vectors_for_gradient(nest_coefs,<EOL>index_coefs,<EOL>design,<EOL>choice_vec,<EOL>rows_to_obs,<EOL>rows_to_nests)<EOL>sys_utility = design.dot(index_coefs)<EOL>long_w = sys_utility / vector_dict["<STR_LIT>"]<EOL>inf_index = np.isposinf(long_w)<EOL>long_w[inf_index] = max_comp_value<EOL>log_exp_sums = np.log(vector_dict["<STR_LIT>"])<EOL>log_exp_sums[np.isneginf(log_exp_sums)] = -<NUM_LIT:1> * max_comp_value<EOL>nest_gradient_term_1 = ((vector_dict["<STR_LIT>"] -<EOL>vector_dict["<STR_LIT>"]) *<EOL>log_exp_sums)<EOL>half_deriv = ((vector_dict["<STR_LIT>"] -<EOL>vector_dict["<STR_LIT>"] *<EOL>vector_dict["<STR_LIT>"]) *<EOL>long_w)[:, None]<EOL>spread_half_deriv = rows_to_nests.multiply(half_deriv)<EOL>nest_gradient_term_2 = rows_to_obs.transpose().dot(spread_half_deriv).A<EOL>nest_gradient_term_3a = (choice_vec -<EOL>vector_dict["<STR_LIT>"] *<EOL>vector_dict["<STR_LIT>"])<EOL>nest_gradient_term_3b = ((-<NUM_LIT:1> * nest_gradient_term_3a * long_w) /<EOL>vector_dict["<STR_LIT>"])<EOL>inf_idx = np.isposinf(nest_gradient_term_3b)<EOL>nest_gradient_term_3b[inf_idx] = max_comp_value<EOL>neg_inf_idx = np.isneginf(nest_gradient_term_3b)<EOL>nest_gradient_term_3b[neg_inf_idx] = -<NUM_LIT:1> * max_comp_value<EOL>spread_out_term_3b = rows_to_nests.multiply(nest_gradient_term_3b[:, None])<EOL>nest_gradient_term_3 = rows_to_obs.transpose().dot(spread_out_term_3b).A<EOL>if use_jacobian:<EOL><INDENT>jacobian = (nest_coefs * (<NUM_LIT:1.0> - nest_coefs))[None, :]<EOL><DEDENT>else:<EOL><INDENT>jacobian = <NUM_LIT:1><EOL><DEDENT>nest_gradient = ((nest_gradient_term_1 +<EOL>nest_gradient_term_2 +<EOL>nest_gradient_term_3) *<EOL>jacobian)<EOL>beta_gradient_term_1 = (vector_dict["<STR_LIT>"] -<EOL>vector_dict["<STR_LIT>"] +<EOL>vector_dict["<STR_LIT>"] *<EOL>vector_dict["<STR_LIT>"] -<EOL>vector_dict["<STR_LIT>"])[:, None]<EOL>beta_gradient = rows_to_obs.T.dot(beta_gradient_term_1 * design)<EOL>gradient_matrix = np.concatenate((nest_gradient, beta_gradient), axis=<NUM_LIT:1>)<EOL>bhhh_matrix =gradient_matrix.T.dot(weights_per_obs[:, None] * gradient_matrix)<EOL>if ridge is not None:<EOL><INDENT>bhhh_matrix += <NUM_LIT:2> * ridge<EOL><DEDENT>return -<NUM_LIT:1> * bhhh_matrix<EOL>
|
Parameters
----------
orig_nest_coefs : 1D or 2D ndarray.
All elements should be ints, floats, or longs. If 1D, should have 1
element for each nesting coefficient being estimated. If 2D, should
have 1 column for each set of nesting coefficients being used to
predict the probabilities of each alternative being chosen. There
should be one row per nesting coefficient. Elements denote the logit of
the inverse of the scale coefficients for each of the lower level nests.
index_coefs : 1D or 2D ndarray.
All elements should be ints, floats, or longs. If 1D, should have 1
element for each utility coefficient being estimated
(i.e. num_features). If 2D, should have 1 column for each set of
coefficients being used to predict the probabilities of choosing each
alternative. There should be one row per index coefficient.
design : 2D ndarray.
There should be one row per observation per available alternative.
There should be one column per utility coefficient being estimated. All
elements should be ints, floats, or longs.
choice_vec : 1D ndarray.
All elements should be ints, floats, or longs. Each element represents
whether the individual associated with the given row chose the
alternative associated with the given row.
rows_to_obs : 2D scipy sparse array.
There should be one row per observation per available alternative and
one column per observation. This matrix maps the rows of the design
matrix to the unique observations (on the columns).
rows_to_nests : 2D scipy sparse array.
There should be one row per observation per available alternative and
one column per nest. This matrix maps the rows of the design matrix to
the unique nests (on the columns).
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If an int,
float or long is passed, then that scalar determines the ridge penalty
for the optimization. Default == None. Note that if this parameter is
passed, the values of the BHHH matrix MAY BE INCORRECT since it is not
100% clear how penalization affects the information matrix.
use_jacobian : bool, optional.
Determines whether or not the jacobian will be used when calculating
the gradient. When performing model estimation, `use_jacobian` should
be `True` if the values that are actually being estimated are the
logit of the nest coefficients. Default `== True`.
Returns
-------
bhhh_matrix : 2D ndarray.
The negative of the sum of the outer products of the gradient of the
log-likelihood function for each observation.
|
f7700:m5
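The BHHH idea in miniature: approximate the information matrix by the (weighted) sum of outer products of per-observation gradients, then negate. The gradient matrix below is made up:

    import numpy as np

    grad_matrix = np.array([[0.2, -0.1],   # one row per observation,
                            [-0.3, 0.4],   # one column per parameter
                            [0.1, 0.05]])
    weights_per_obs = np.ones(3)

    bhhh = grad_matrix.T.dot(weights_per_obs[:, None] * grad_matrix)
    print(-1 * bhhh)                       # negated, as in the function above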
|
def calc_individual_chi_squares(residuals,<EOL>long_probabilities,<EOL>rows_to_obs):
|
chi_squared_terms = np.square(residuals) / long_probabilities<EOL>return rows_to_obs.T.dot(chi_squared_terms)<EOL>
|
Calculates individual chi-squared values for each choice situation in the
dataset.
Parameters
----------
residuals : 1D ndarray.
The choice vector minus the predicted probability of each alternative
for each observation.
long_probabilities : 1D ndarray.
The probability of each alternative being chosen in each choice
situation.
rows_to_obs : 2D scipy sparse array.
Should map each row of the long format dataframe to the unique
observations in the dataset.
Returns
-------
ind_chi_squareds : 1D ndarray.
Will have as many elements as there are columns in `rows_to_obs`. Each
element will contain the Pearson chi-squared value for the given choice
situation.
|
f7701:m1
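Per-row Pearson chi-squared terms aggregated to choice situations with a rows-to-obs mapping; the residuals, probabilities, and mapping below are made up (two observations with two alternatives each):

    import numpy as np
    from scipy.sparse import csr_matrix

    residuals = np.array([0.4, -0.4, 0.3, -0.3])
    long_probs = np.array([0.6, 0.4, 0.7, 0.3])
    rows_to_obs = csr_matrix(np.array([[1, 0],   # rows 0-1 -> observation 0
                                       [1, 0],
                                       [0, 1],   # rows 2-3 -> observation 1
                                       [0, 1]]))

    chi_squared_terms = np.square(residuals) / long_probs
    print(rows_to_obs.T.dot(chi_squared_terms))  # one value per observation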
|
def calc_rho_and_rho_bar_squared(final_log_likelihood,<EOL>null_log_likelihood,<EOL>num_est_parameters):
|
rho_squared = <NUM_LIT:1.0> - final_log_likelihood / null_log_likelihood<EOL>rho_bar_squared = <NUM_LIT:1.0> - ((final_log_likelihood - num_est_parameters) /<EOL>null_log_likelihood)<EOL>return rho_squared, rho_bar_squared<EOL>
|
Calculates McFadden's rho-squared and rho-bar squared for the given model.
Parameters
----------
final_log_likelihood : float.
The final log-likelihood of the model for which the rho-squared and
rho-bar-squared are being calculated.
null_log_likelihood : float.
The log-likelihood of the model in question, when all parameters are
zero or their 'base' values.
num_est_parameters : int.
The number of parameters estimated in this model.
Returns
-------
`(rho_squared, rho_bar_squared)` : tuple of floats.
The rho-squared and rho-bar-squared for the model.
|
f7701:m2
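A worked example of both fit measures on made-up log-likelihood values:

    final_ll, null_ll, num_params = -150.0, -200.0, 5

    rho_squared = 1.0 - final_ll / null_ll                     # 0.25
    rho_bar_squared = 1.0 - (final_ll - num_params) / null_ll  # 0.225
    print(rho_squared, rho_bar_squared)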
|
def calc_and_store_post_estimation_results(results_dict,<EOL>estimator):
|
<EOL>final_log_likelihood = -<NUM_LIT:1> * results_dict["<STR_LIT>"]<EOL>results_dict["<STR_LIT>"] = final_log_likelihood<EOL>final_params = results_dict["<STR_LIT:x>"]<EOL>split_res = estimator.convenience_split_params(final_params,<EOL>return_all_types=True)<EOL>results_dict["<STR_LIT>"] = split_res[<NUM_LIT:0>]<EOL>results_dict["<STR_LIT>"] = split_res[<NUM_LIT:1>]<EOL>results_dict["<STR_LIT>"] = split_res[<NUM_LIT:2>]<EOL>results_dict["<STR_LIT>"] = split_res[<NUM_LIT:3>]<EOL>chosen_probs, long_probs = estimator.convenience_calc_probs(final_params)<EOL>results_dict["<STR_LIT>"] = chosen_probs<EOL>results_dict["<STR_LIT>"] = long_probs<EOL>if len(long_probs.shape) == <NUM_LIT:1>:<EOL><INDENT>residuals = estimator.choice_vector - long_probs<EOL><DEDENT>else:<EOL><INDENT>residuals = estimator.choice_vector[:, None] - long_probs<EOL><DEDENT>results_dict["<STR_LIT>"] = residuals<EOL>args = [residuals, long_probs, estimator.rows_to_obs]<EOL>results_dict["<STR_LIT>"] = calc_individual_chi_squares(*args)<EOL>log_likelihood_null = results_dict["<STR_LIT>"]<EOL>rho_results = calc_rho_and_rho_bar_squared(final_log_likelihood,<EOL>log_likelihood_null,<EOL>final_params.shape[<NUM_LIT:0>])<EOL>results_dict["<STR_LIT>"] = rho_results[<NUM_LIT:0>]<EOL>results_dict["<STR_LIT>"] = rho_results[<NUM_LIT:1>]<EOL>results_dict["<STR_LIT>"] =estimator.convenience_calc_gradient(final_params)<EOL>results_dict["<STR_LIT>"] =estimator.convenience_calc_hessian(final_params)<EOL>results_dict["<STR_LIT>"] =estimator.convenience_calc_fisher_approx(final_params)<EOL>results_dict["<STR_LIT>"] = estimator.constrained_pos<EOL>return results_dict<EOL>
|
Calculates and stores post-estimation results that require the use of the
systematic utility transformation functions or the various derivative
functions. Note that this function is only valid for logit-type models.
Parameters
----------
results_dict : dict.
This dictionary should be the dictionary returned from
scipy.optimize.minimize. In particular, it should have the following
keys: `["fun", "x", "log_likelihood_null"]`.
estimator : an instance of the EstimationObj class.
Should contain the following attributes or methods:
- convenience_split_params
- convenience_calc_probs
- convenience_calc_gradient
- convenience_calc_hessian
- convenience_calc_fisher_approx
- choice_vector
- rows_to_obs
Returns
-------
results_dict : dict.
The following keys will have been entered into `results_dict`:
- final_log_likelihood
- utility_coefs
- intercept_params
- shape_params
- nest_params
- chosen_probs
- long_probs
- residuals
- ind_chi_squareds
- rho_squared
- rho_bar_squared
- final_gradient
- final_hessian
- fisher_info
|
f7701:m3
|
def estimate(init_values,<EOL>estimator,<EOL>method,<EOL>loss_tol,<EOL>gradient_tol,<EOL>maxiter,<EOL>print_results,<EOL>use_hessian=True,<EOL>just_point=False,<EOL>**kwargs):
|
if not just_point:<EOL><INDENT>log_likelihood_at_zero =estimator.convenience_calc_log_likelihood(estimator.zero_vector)<EOL>initial_log_likelihood =estimator.convenience_calc_log_likelihood(init_values)<EOL>if print_results:<EOL><INDENT>null_msg = "<STR_LIT>"<EOL>print(null_msg.format(log_likelihood_at_zero))<EOL>init_msg = "<STR_LIT>"<EOL>print(init_msg.format(initial_log_likelihood))<EOL>sys.stdout.flush()<EOL><DEDENT><DEDENT>hess_func = estimator.calc_neg_hessian if use_hessian else None<EOL>start_time = time.time()<EOL>results = minimize(estimator.calc_neg_log_likelihood_and_neg_gradient,<EOL>init_values,<EOL>method=method,<EOL>jac=True,<EOL>hess=hess_func,<EOL>tol=loss_tol,<EOL>options={'<STR_LIT>': gradient_tol,<EOL>"<STR_LIT>": maxiter},<EOL>**kwargs)<EOL>if not just_point:<EOL><INDENT>if print_results:<EOL><INDENT>end_time = time.time()<EOL>elapsed_sec = (end_time - start_time)<EOL>elapsed_min = elapsed_sec / <NUM_LIT><EOL>if elapsed_min > <NUM_LIT:1.0>:<EOL><INDENT>msg = "<STR_LIT>"<EOL>print(msg.format(elapsed_min))<EOL><DEDENT>else:<EOL><INDENT>msg = "<STR_LIT>"<EOL>print(msg.format(elapsed_sec))<EOL><DEDENT>print("<STR_LIT>".format(-<NUM_LIT:1> * results["<STR_LIT>"]))<EOL>sys.stdout.flush()<EOL><DEDENT>results["<STR_LIT>"] = log_likelihood_at_zero<EOL>results = calc_and_store_post_estimation_results(results, estimator)<EOL><DEDENT>return results<EOL>
|
Estimate the given choice model that is defined by `estimator`.
Parameters
----------
init_values : 1D ndarray.
Should contain the initial values to start the optimization process
with.
estimator : an instance of the EstimationObj class.
method : str, optional.
Should be a valid string for scipy.optimize.minimize. Determines
the optimization algorithm that is used for this problem.
Default `== 'bfgs'`.
loss_tol : float, optional.
Determines the tolerance on the difference in objective function
values from one iteration to the next that is needed to determine
convergence. Default `== 1e-06`.
gradient_tol : float, optional.
Determines the tolerance on the difference in gradient values from
one iteration to the next which is needed to determine convergence.
Default `== 1e-06`.
maxiter : int, optional.
Determines the maximum number of iterations used by the optimizer.
Default `== 1000`.
print_results : bool, optional.
Determines whether the timing and initial and final log likelihood
results will be printed as they are determined.
Default `== True`.
use_hessian : bool, optional.
Determines whether the `calc_neg_hessian` method of the `estimator`
object will be used as the hessian function during the estimation. This
kwarg is used since some models (such as the Mixed Logit and Nested
Logit) use a rather crude (i.e. the BHHH) approximation to the Fisher
Information Matrix, and users may prefer to not use this approximation
for the hessian during estimation.
just_point : bool, optional.
Determines whether or not calculations that are non-critical for
obtaining the maximum likelihood point estimate will be performed.
Default == False.
Returns
-------
results : dict.
The dictionary of estimation results that is returned by
scipy.optimize.minimize. It will also have (at minimum) the following
keys:
- "log-likelihood_null"
- "final_log_likelihood"
- "utility_coefs"
- "intercept_params"
- "shape_params"
- "nest_params"
- "chosen_probs"
- "long_probs"
- "residuals"
- "ind_chi_squareds"
- "rho_squared"
- "rho_bar_squared"
- "final_gradient"
- "final_hessian"
- "fisher_info"
|
f7701:m4
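The optimization pattern `estimate` relies on, in isolation: `scipy.optimize.minimize` with an objective that returns the negative log-likelihood and its negative gradient together (`jac=True`). The quadratic objective below is a stand-in for a real model's likelihood:

    import numpy as np
    from scipy.optimize import minimize

    def neg_ll_and_grad(params):
        # Stand-in objective with its minimum at params == [1, 2].
        diff = params - np.array([1.0, 2.0])
        return diff.dot(diff), 2 * diff

    results = minimize(neg_ll_and_grad,
                       np.zeros(2),
                       method='BFGS',
                       jac=True,  # the objective also returns the gradient
                       options={'gtol': 1e-06, 'maxiter': 1000})
    print(results['x'])  # close to [1, 2]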
|
def convenience_split_params(self, params, return_all_types=False):
|
return self.split_params(params,<EOL>self.rows_to_alts,<EOL>self.design,<EOL>return_all_types=return_all_types)<EOL>
|
Splits parameter vector into shape, intercept, and index parameters.
Parameters
----------
params : 1D ndarray.
The array of parameters being estimated or used in calculations.
return_all_types : bool, optional.
Determines whether or not a tuple of 4 elements will be returned
(with one element for the nest, shape, intercept, and index
parameters for this model). If False, a tuple of 3 elements will
be returned with one element for the shape, intercept, and index
parameters.
Returns
-------
tuple. Will have 4 or 3 elements based on `return_all_types`.
|
f7701:c0:m1
|
def convenience_calc_probs(self, params):
|
msg = "<STR_LIT>"<EOL>raise NotImplementedError(msg)<EOL>
|
Calculates the probabilities of the chosen alternative, and the long
format probabilities for this model and dataset.
|
f7701:c0:m2
|
def convenience_calc_log_likelihood(self, params):
|
msg = "<STR_LIT>"<EOL>raise NotImplementedError(msg)<EOL>
|
Calculates the log-likelihood for this model and dataset.
|
f7701:c0:m3
|
def convenience_calc_gradient(self, params):
|
msg = "<STR_LIT>"<EOL>raise NotImplementedError(msg)<EOL>
|
Calculates the gradient of the log-likelihood for this model / dataset.
|
f7701:c0:m4
|
def convenience_calc_hessian(self, params):
|
msg = "<STR_LIT>"<EOL>raise NotImplementedError(msg)<EOL>
|
Calculates the hessian of the log-likelihood for this model / dataset.
|
f7701:c0:m5
|
def convenience_calc_fisher_approx(self, params):
|
msg = "<STR_LIT>"<EOL>raise NotImplementedError(msg)<EOL>
|
Calculates the BHHH approximation of the Fisher Information Matrix for
this model / dataset.
|
f7701:c0:m6
|
def calc_neg_log_likelihood_and_neg_gradient(self, params):
|
neg_log_likelihood = -<NUM_LIT:1> * self.convenience_calc_log_likelihood(params)<EOL>neg_gradient = -<NUM_LIT:1> * self.convenience_calc_gradient(params)<EOL>if self.constrained_pos is not None:<EOL><INDENT>neg_gradient[self.constrained_pos] = <NUM_LIT:0><EOL><DEDENT>return neg_log_likelihood, neg_gradient<EOL>
|
Calculates and returns the negative of the log-likelihood and the
negative of the gradient. This function is used as the objective
function in scipy.optimize.minimize.
|
f7701:c0:m7
|
def calc_neg_hessian(self, params):
|
return -<NUM_LIT:1> * self.convenience_calc_hessian(params)<EOL>
|
Calculate and return the negative of the hessian for this model and
dataset.
|
f7701:c0:m8
|
def convenience_calc_probs(self, params):
|
shapes, intercepts, betas = self.convenience_split_params(params)<EOL>prob_args = [betas,<EOL>self.design,<EOL>self.alt_id_vector,<EOL>self.rows_to_obs,<EOL>self.rows_to_alts,<EOL>self.utility_transform]<EOL>prob_kwargs = {"<STR_LIT>": intercepts,<EOL>"<STR_LIT>": shapes,<EOL>"<STR_LIT>": self.chosen_row_to_obs,<EOL>"<STR_LIT>": True}<EOL>prob_results = cc.calc_probabilities(*prob_args, **prob_kwargs)<EOL>return prob_results<EOL>
|
Calculates the probabilities of the chosen alternative, and the long
format probabilities for this model and dataset.
|
f7701:c1:m1
|
def convenience_calc_log_likelihood(self, params):
|
shapes, intercepts, betas = self.convenience_split_params(params)<EOL>args = [betas,<EOL>self.design,<EOL>self.alt_id_vector,<EOL>self.rows_to_obs,<EOL>self.rows_to_alts,<EOL>self.choice_vector,<EOL>self.utility_transform]<EOL>kwargs = {"<STR_LIT>": intercepts,<EOL>"<STR_LIT>": shapes,<EOL>"<STR_LIT>": self.ridge,<EOL>"<STR_LIT>": self.weights}<EOL>log_likelihood = cc.calc_log_likelihood(*args, **kwargs)<EOL>return log_likelihood<EOL>
|
Calculates the log-likelihood for this model and dataset.
|
f7701:c1:m2
|
def convenience_calc_gradient(self, params):
|
shapes, intercepts, betas = self.convenience_split_params(params)<EOL>args = [betas,<EOL>self.design,<EOL>self.alt_id_vector,<EOL>self.rows_to_obs,<EOL>self.rows_to_alts,<EOL>self.choice_vector,<EOL>self.utility_transform,<EOL>self.calc_dh_d_shape,<EOL>self.calc_dh_dv,<EOL>self.calc_dh_d_alpha,<EOL>intercepts,<EOL>shapes,<EOL>self.ridge,<EOL>self.weights]<EOL>return cc.calc_gradient(*args)<EOL>
|
Calculates the gradient of the log-likelihood for this model / dataset.
|
f7701:c1:m3
|
def convenience_calc_hessian(self, params):
|
shapes, intercepts, betas = self.convenience_split_params(params)<EOL>args = [betas,<EOL>self.design,<EOL>self.alt_id_vector,<EOL>self.rows_to_obs,<EOL>self.rows_to_alts,<EOL>self.utility_transform,<EOL>self.calc_dh_d_shape,<EOL>self.calc_dh_dv,<EOL>self.calc_dh_d_alpha,<EOL>self.block_matrix_idxs,<EOL>intercepts,<EOL>shapes,<EOL>self.ridge,<EOL>self.weights]<EOL>return cc.calc_hessian(*args)<EOL>
|
Calculates the hessian of the log-likelihood for this model / dataset.
|
f7701:c1:m4
|
def convenience_calc_fisher_approx(self, params):
|
shapes, intercepts, betas = self.convenience_split_params(params)<EOL>args = [betas,<EOL>self.design,<EOL>self.alt_id_vector,<EOL>self.rows_to_obs,<EOL>self.rows_to_alts,<EOL>self.choice_vector,<EOL>self.utility_transform,<EOL>self.calc_dh_d_shape,<EOL>self.calc_dh_dv,<EOL>self.calc_dh_d_alpha,<EOL>intercepts,<EOL>shapes,<EOL>self.ridge,<EOL>self.weights]<EOL>return cc.calc_fisher_info_matrix(*args)<EOL>
|
Calculates the BHHH approximation of the Fisher Information Matrix for
this model / dataset.
|
f7701:c1:m5
|
def check_conf_percentage_validity(conf_percentage):
|
msg = "<STR_LIT>"<EOL>condition_1 = isinstance(conf_percentage, Number)<EOL>if not condition_1:<EOL><INDENT>raise ValueError(msg)<EOL><DEDENT>else:<EOL><INDENT>condition_2 = <NUM_LIT:0> < conf_percentage < <NUM_LIT:100><EOL>if not condition_2:<EOL><INDENT>raise ValueError(msg)<EOL><DEDENT><DEDENT>return None<EOL>
|
Ensures that `conf_percentage` is in (0, 100). Raises a helpful ValueError
if otherwise.
|
f7702:m0
|
def ensure_samples_is_ndim_ndarray(samples, name='<STR_LIT>', ndim=<NUM_LIT:2>):
|
assert isinstance(ndim, int)<EOL>assert isinstance(name, str)<EOL>if not isinstance(samples, np.ndarray) or not (samples.ndim == ndim):<EOL><INDENT>sample_name = name + "<STR_LIT>"<EOL>msg = "<STR_LIT>".format(sample_name, ndim)<EOL>raise ValueError(msg)<EOL><DEDENT>return None<EOL>
|
Ensures that `samples` is an `ndim` numpy array. Raises a helpful
ValueError if otherwise.
|
f7702:m1
|
def get_alpha_from_conf_percentage(conf_percentage):
|
return <NUM_LIT> - conf_percentage<EOL>
|
Calculates `100 - conf_percentage`, which is useful for calculating alpha
levels.
|
f7702:m2
|
def combine_conf_endpoints(lower_array, upper_array):
|
return np.concatenate([lower_array[None, :], upper_array[None, :]], axis=<NUM_LIT:0>)<EOL>
|
Concatenates lower and upper endpoint arrays for a given confidence level.
|
f7702:m3
|
def ensure_model_obj_has_mapping_constructor(model_obj):
|
if not hasattr(model_obj, "<STR_LIT>"):<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg)<EOL><DEDENT>return None<EOL>
|
Ensure that `model_obj` has a 'get_mappings_for_fit' method. Raises a
helpful ValueError if otherwise.
|
f7703:m0
|
def ensure_rows_to_obs_validity(rows_to_obs):
|
if rows_to_obs is not None and not isspmatrix_csr(rows_to_obs):<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg)<EOL><DEDENT>return None<EOL>
|
Ensure that `rows_to_obs` is None or a 2D scipy sparse CSR matrix. Raises a
helpful ValueError if otherwise.
|
f7703:m1
|
def ensure_wide_weights_is_1D_or_2D_ndarray(wide_weights):
|
if not isinstance(wide_weights, np.ndarray):<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg)<EOL><DEDENT>ndim = wide_weights.ndim<EOL>if not <NUM_LIT:0> < ndim < <NUM_LIT:3>:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg)<EOL><DEDENT>return None<EOL>
|
Ensures that `wide_weights` is a 1D or 2D ndarray. Raises a helpful
ValueError if otherwise.
|
f7703:m2
|
def check_validity_of_long_form_args(model_obj, wide_weights, rows_to_obs):
|
<EOL>ensure_model_obj_has_mapping_constructor(model_obj)<EOL>ensure_wide_weights_is_1D_or_2D_ndarray(wide_weights)<EOL>ensure_rows_to_obs_validity(rows_to_obs)<EOL>return None<EOL>
|
Ensures the args to `create_long_form_weights` have expected properties.
|
f7703:m3
|
def create_long_form_weights(model_obj, wide_weights, rows_to_obs=None):
|
<EOL>check_validity_of_long_form_args(model_obj, wide_weights, rows_to_obs)<EOL>if rows_to_obs is None:<EOL><INDENT>rows_to_obs = model_obj.get_mappings_for_fit()['<STR_LIT>']<EOL><DEDENT>wide_weights_2d =wide_weights if wide_weights.ndim == <NUM_LIT:2> else wide_weights[:, None]<EOL>long_weights = rows_to_obs.dot(wide_weights_2d)<EOL>if wide_weights.ndim == <NUM_LIT:1>:<EOL><INDENT>long_weights = long_weights.sum(axis=<NUM_LIT:1>)<EOL><DEDENT>return long_weights<EOL>
|
Converts an array of weights with one element per observation (wide-format)
to an array of weights with one element per observation per available
alternative (long-format).
Parameters
----------
model_obj : an instance or subclass of the MNDC class.
Should be the model object that corresponds to the model we are
constructing the bootstrap confidence intervals for.
wide_weights : 1D or 2D ndarray.
Should contain one element (if 1D) or one row (if 2D) per observation
in `model_obj.data`. These elements should be the weights for
optimizing the model's objective function for estimation.
rows_to_obs : 2D scipy sparse array.
A mapping matrix of zeros and ones, where `rows_to_obs[i, j]` is one if
row `i` of the long-format data belongs to observation `j` and zero
otherwise.
Returns
-------
long_weights : 1D or 2D ndarray.
Should contain one element (if 1D) or one row (if 2D) per row of the
long-format data. These elements are the weights from `wide_weights`,
simply mapped so that each observation's weight appears on each of that
observation's rows in the long-format data.
|
f7703:m4
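Wide-to-long weight expansion via a rows-to-obs mapping matrix, on made-up data with two observations of two alternatives each:

    import numpy as np
    from scipy.sparse import csr_matrix

    rows_to_obs = csr_matrix(np.array([[1, 0],   # rows 0-1 -> observation 0
                                       [1, 0],
                                       [0, 1],   # rows 2-3 -> observation 1
                                       [0, 1]]))
    wide_weights = np.array([0.25, 0.75])        # one weight per observation

    long_weights = rows_to_obs.dot(wide_weights)
    print(long_weights)                          # [0.25, 0.25, 0.75, 0.75]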
|
def calc_finite_diff_terms_for_abc(model_obj,<EOL>mle_params,<EOL>init_vals,<EOL>epsilon,<EOL>**fit_kwargs):
|
<EOL>num_obs = model_obj.data[model_obj.obs_id_col].unique().size<EOL>init_weights_wide = np.ones(num_obs, dtype=float) / num_obs<EOL>init_wide_weights_plus = (<NUM_LIT:1> - epsilon) * init_weights_wide<EOL>init_wide_weights_minus = (<NUM_LIT:1> + epsilon) * init_weights_wide<EOL>term_plus = np.empty((num_obs, init_vals.shape[<NUM_LIT:0>]), dtype=float)<EOL>term_minus = np.empty((num_obs, init_vals.shape[<NUM_LIT:0>]), dtype=float)<EOL>rows_to_obs = model_obj.get_mappings_for_fit()['<STR_LIT>']<EOL>new_fit_kwargs = deepcopy(fit_kwargs)<EOL>if fit_kwargs is not None and '<STR_LIT>' in fit_kwargs:<EOL><INDENT>orig_weights = fit_kwargs['<STR_LIT>']<EOL>del new_fit_kwargs['<STR_LIT>']<EOL><DEDENT>else:<EOL><INDENT>orig_weights = <NUM_LIT:1><EOL><DEDENT>new_fit_kwargs['<STR_LIT>'] = True<EOL>for obs in xrange(num_obs):<EOL><INDENT>current_wide_weights_plus = init_wide_weights_plus.copy()<EOL>current_wide_weights_plus[obs] += epsilon<EOL>current_wide_weights_minus = init_wide_weights_minus.copy()<EOL>current_wide_weights_minus[obs] -= epsilon<EOL>long_weights_plus =(create_long_form_weights(model_obj, current_wide_weights_plus,<EOL>rows_to_obs=rows_to_obs) * orig_weights)<EOL>long_weights_minus =(create_long_form_weights(model_obj,<EOL>current_wide_weights_minus,<EOL>rows_to_obs=rows_to_obs) * orig_weights)<EOL>term_plus[obs] = model_obj.fit_mle(init_vals,<EOL>weights=long_weights_plus,<EOL>**new_fit_kwargs)['<STR_LIT:x>']<EOL>term_minus[obs] = model_obj.fit_mle(init_vals,<EOL>weights=long_weights_minus,<EOL>**new_fit_kwargs)['<STR_LIT:x>']<EOL><DEDENT>return term_plus, term_minus<EOL>
|
Calculates the terms needed for the finite difference approximations of
the empirical influence and second order empirical influence functions.
Parameters
----------
model_obj : an instance or subclass of the MNDC class.
Should be the model object that corresponds to the model we are
constructing the bootstrap confidence intervals for.
mle_params : 1D ndarray.
Should contain the desired model's maximum likelihood point estimate.
init_vals : 1D ndarray.
The initial values used to estimate the desired choice model.
epsilon : positive float.
Should denote the 'very small' value being used to calculate the
desired finite difference approximations to the various influence
functions. Should be 'close' to zero.
fit_kwargs : additional keyword arguments, optional.
Should contain any additional kwargs used to alter the default behavior
of `model_obj.fit_mle` and thereby enforce conformity with how the MLE
was obtained. Will be passed directly to `model_obj.fit_mle`.
Returns
-------
term_plus : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the finite difference term that comes from adding a small value
to the observation corresponding to that element's respective row.
term_minus : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the finite difference term that comes from subtracting a small
value from the observation corresponding to that element's respective row.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equations 22.32 and 22.36.
Notes
-----
The returned, symbolic value for `term_minus` does not explicitly appear in
Equations 22.32 or 22.36. However, it is used to compute a midpoint / slope
approximation to the finite difference derivative used to define the
empirical influence function.
|
f7703:m5
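The weight perturbations in the body above follow a simple pattern that is easier to see de-tokenized. A minimal sketch, assuming the masked literal is 1, of how the 'plus' and 'minus' weight vectors are built for one observation:

    import numpy as np

    def perturbed_weights_sketch(num_obs, obs, epsilon):
        # Start from equal weights of 1/n and scale everything by (1 - epsilon),
        # then add the full epsilon back to the single observation of interest.
        base_weights = np.ones(num_obs, dtype=float) / num_obs
        weights_plus = (1 - epsilon) * base_weights
        weights_plus[obs] += epsilon
        # The 'minus' direction scales by (1 + epsilon) and subtracts epsilon.
        weights_minus = (1 + epsilon) * base_weights
        weights_minus[obs] -= epsilon
        return weights_plus, weights_minus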
|
def calc_empirical_influence_abc(term_plus,<EOL>term_minus,<EOL>epsilon):
|
<EOL>denominator = <NUM_LIT:2> * epsilon<EOL>empirical_influence = np.zeros(term_plus.shape)<EOL>diff_idx = ~np.isclose(term_plus, term_minus, atol=<NUM_LIT>, rtol=<NUM_LIT:0>)<EOL>if diff_idx.any():<EOL><INDENT>empirical_influence[diff_idx] =(term_plus[diff_idx] - term_minus[diff_idx]) / denominator<EOL><DEDENT>return empirical_influence<EOL>
|
Calculates the finite difference, midpoint / slope approximation to the
empirical influence array needed to compute the approximate bootstrap
confidence (ABC) intervals.
Parameters
----------
term_plus : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the finite difference term that comes from adding a small value
to the observation corresponding to that element's respective row.
term_minus : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the finite difference term that comes from subtracting a small
value from the observation corresponding to that element's respective row.
epsilon : positive float.
Should denote the 'very small' value being used to calculate the
desired finite difference approximations to the various influence
functions. Should be 'close' to zero.
Returns
-------
empirical_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the empirical influence of the associated observation on the
associated parameter.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equation 22.32.
Notes
-----
This function is based on the code in Efron's original Bootstrap
library, written in S-plus. It is a finite difference, midpoint or slope
approximation of Equation 22.32.
|
f7703:m6
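A minimal, de-tokenized sketch of the midpoint calculation above; the masked absolute tolerance is assumed to be a very small float such as 1e-12:

    import numpy as np

    def empirical_influence_sketch(term_plus, term_minus, epsilon):
        # Midpoint / slope approximation of Equation 22.32:
        # U_i ~= (theta_plus_i - theta_minus_i) / (2 * epsilon), elementwise,
        # computed only where the two terms genuinely differ.
        influence = np.zeros(term_plus.shape)
        diff_idx = ~np.isclose(term_plus, term_minus, atol=1e-12, rtol=0)
        influence[diff_idx] = (term_plus[diff_idx] -
                               term_minus[diff_idx]) / (2 * epsilon)
        return influence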
|
def calc_2nd_order_influence_abc(mle_params,<EOL>term_plus,<EOL>term_minus,<EOL>epsilon):
|
<EOL>denominator = epsilon**<NUM_LIT:2><EOL>term_2 = np.broadcast_to(<NUM_LIT:2> * mle_params, term_plus.shape)<EOL>second_order_influence = np.zeros(term_plus.shape, dtype=float)<EOL>diff_idx = ~np.isclose(term_plus + term_minus, term_2, atol=<NUM_LIT>, rtol=<NUM_LIT:0>)<EOL>if diff_idx.any():<EOL><INDENT>second_order_influence[diff_idx] =((term_plus[diff_idx] - term_2[diff_idx] + term_minus[diff_idx]) /<EOL>denominator)<EOL><DEDENT>return second_order_influence<EOL>
|
Calculates either a 'positive' finite difference approximation or an
approximation of a 'positive' finite difference approximation to the
2nd order empirical influence array needed to compute the approximate
bootstrap confidence (ABC) intervals. See the 'Notes' section for more
information on the ambiguous function description.
Parameters
----------
mle_params : 1D ndarray.
Should contain the desired model's maximum likelihood point estimate.
term_plus : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the finite difference term that comes from adding a small value
to the observation corresponding to that element's respective row.
term_minus : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the finite difference term that comes from subtracting a small
value from the observation corresponding to that element's respective row.
epsilon : positive float.
Should denote the 'very small' value being used to calculate the
desired finite difference approximations to the various influence
functions. Should be 'close' to zero.
Returns
-------
second_order_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the second order empirical influence of the associated
observation on the associated parameter.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equation 22.36.
Notes
-----
This function is based on the code in Efron's original Bootstrap library
written in S-plus. It is not equivalent to the 'positive' finite difference
approximation to Equation 22.36, where epsilon is set to a small float. The
reason for this discrepancy is that `term_minus` is not equal to the
third term in the numerator of Equation 22.36. That term uses
`(1 - epsilon)P^0` whereas `term_minus` uses `(1 + epsilon)P^0`. In the
limit, both terms would be `P^0` and therefore equal. Efron's original
code likely assumes the two terms are approximately equal in order to
conserve computational resources; alternatively, Equation 22.36, as
printed, is incorrect and its third term really should be
`(1 + epsilon)P^0`.
|
f7703:m7
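The same pattern holds for the central second difference above. A sketch under the same assumption of a tiny masked tolerance:

    import numpy as np

    def second_order_influence_sketch(mle_params, term_plus, term_minus, epsilon):
        # Central second difference:
        # V_i ~= (theta_plus_i - 2 * theta_hat + theta_minus_i) / epsilon**2.
        term_2 = np.broadcast_to(2 * mle_params, term_plus.shape)
        influence = np.zeros(term_plus.shape, dtype=float)
        diff_idx = ~np.isclose(term_plus + term_minus, term_2, atol=1e-12, rtol=0)
        influence[diff_idx] = (term_plus[diff_idx] - term_2[diff_idx] +
                               term_minus[diff_idx]) / epsilon**2
        return influence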
|
def calc_influence_arrays_for_abc(model_obj,<EOL>mle_est,<EOL>init_values,<EOL>epsilon,<EOL>**fit_kwargs):
|
<EOL>term_plus, term_minus = calc_finite_diff_terms_for_abc(model_obj,<EOL>mle_est,<EOL>init_values,<EOL>epsilon,<EOL>**fit_kwargs)<EOL>empirical_influence =calc_empirical_influence_abc(term_plus, term_minus, epsilon)<EOL>second_order_influence =calc_2nd_order_influence_abc(mle_est, term_plus, term_minus, epsilon)<EOL>return empirical_influence, second_order_influence<EOL>
|
Calculates the empirical influence array and the 2nd order empirical
influence array needed to compute the approximate bootstrap confidence (ABC)
intervals.
Parameters
----------
model_obj : an instance or subclass of the MNDC class.
Should be the model object that corresponds to the model we are
constructing the bootstrap confidence intervals for.
mle_est : 1D ndarray.
Should contain the desired model's maximum likelihood point estimate.
init_values : 1D ndarray.
The initial values used to estimate the desired choice model.
epsilon : positive float.
Should denote the 'very small' value being used to calculate the
desired finite difference approximations to the various influence
functions. Should be 'close' to zero.
fit_kwargs : additional keyword arguments, optional.
Should contain any additional kwargs used to alter the default behavior
of `model_obj.fit_mle` and thereby enforce conformity with how the MLE
was obtained. Will be passed directly to `model_obj.fit_mle`.
Returns
-------
empirical_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the empirical influence of the associated observation on the
associated parameter.
second_order_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the second order empirical influence of the associated
observation on the associated parameter.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equations 22.32 and 22.36.
|
f7703:m8
|
def calc_std_error_abc(empirical_influence):
|
num_obs = empirical_influence.shape[<NUM_LIT:0>]<EOL>std_error = ((empirical_influence**<NUM_LIT:2>).sum(axis=<NUM_LIT:0>))**<NUM_LIT:0.5> / num_obs<EOL>return std_error<EOL>
|
Calculates the standard error of the MLE estimates for use in calculating
the approximate bootstrap confidence (ABC) intervals.
Parameters
----------
empirical_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the empirical influence of the associated observation on the
associated parameter.
Returns
-------
std_error : 1D ndarray.
Contains the standard error of the MLE estimates for use in the ABC
confidence intervals.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equation 22.31.
|
f7703:m9
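Equation 22.31 reduces to a one-liner. A readable sketch of the body above:

    import numpy as np

    def std_error_sketch(empirical_influence):
        # Equation 22.31: se_hat = sqrt(sum_i U_i**2) / n, per parameter (column).
        num_obs = empirical_influence.shape[0]
        return np.sqrt((empirical_influence**2).sum(axis=0)) / num_obs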
|
def calc_acceleration_abc(empirical_influence):
|
influence_cubed = empirical_influence**<NUM_LIT:3><EOL>influence_squared = empirical_influence**<NUM_LIT:2><EOL>numerator = influence_cubed.sum(axis=<NUM_LIT:0>)<EOL>denominator = <NUM_LIT:6> * (influence_squared.sum(axis=<NUM_LIT:0>))**<NUM_LIT><EOL>acceleration = numerator / denominator<EOL>return acceleration<EOL>
|
Calculates the acceleration constant for the approximate bootstrap
confidence (ABC) intervals.
Parameters
----------
empirical_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the empirical influence of the associated observation on the
associated parameter.
Returns
-------
acceleration : 1D ndarray.
Contains the ABC confidence intervals' estimated acceleration vector.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equation 22.34.
|
f7703:m10
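A sketch of Equation 22.34 as computed above; the masked exponent in the denominator is presumed to be 1.5, matching the printed equation:

    def acceleration_sketch(empirical_influence):
        # Equation 22.34: a_hat = sum_i U_i**3 / (6 * (sum_i U_i**2)**1.5).
        numerator = (empirical_influence**3).sum(axis=0)
        denominator = 6 * ((empirical_influence**2).sum(axis=0))**1.5
        return numerator / denominator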
|
def calc_bias_abc(second_order_influence):
|
num_obs = second_order_influence.shape[<NUM_LIT:0>]<EOL>constant = <NUM_LIT> * num_obs**<NUM_LIT:2><EOL>bias = second_order_influence.sum(axis=<NUM_LIT:0>) / constant<EOL>return bias<EOL>
|
Calculates the approximate bias of the MLE estimates for use in calculating
the approximate bootstrap confidence (ABC) intervals.
Parameters
----------
second_order_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the second order empirical influence of the associated
observation on the associated parameter.
Returns
-------
bias : 1D ndarray.
Contains the approximate bias of the MLE estimates for use in the ABC
confidence intervals.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equation 22.35.
|
f7703:m11
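A sketch of Equation 22.35; the masked constant is presumed to be 2:

    def bias_sketch(second_order_influence):
        # Equation 22.35: b_hat = sum_i V_i / (2 * n**2), per parameter (column).
        num_obs = second_order_influence.shape[0]
        return second_order_influence.sum(axis=0) / (2.0 * num_obs**2)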
|
def calc_quadratic_coef_abc(model_object,<EOL>mle_params,<EOL>init_vals,<EOL>empirical_influence,<EOL>std_error,<EOL>epsilon,<EOL>**fit_kwargs):
|
<EOL>num_obs = float(empirical_influence.shape[<NUM_LIT:0>])<EOL>standardized_influence =empirical_influence / (num_obs**<NUM_LIT:2> * std_error[None, :])<EOL>init_weights_wide = (np.ones(int(num_obs), dtype=float) / num_obs)[:, None]<EOL>term_1_wide_weights =(<NUM_LIT:1> - epsilon) * init_weights_wide + epsilon * standardized_influence<EOL>term_3_wide_weights =(<NUM_LIT:1> - epsilon) * init_weights_wide - epsilon * standardized_influence<EOL>rows_to_obs = model_object.get_mappings_for_fit()['<STR_LIT>']<EOL>expected_term_shape = (init_vals.shape[<NUM_LIT:0>], init_vals.shape[<NUM_LIT:0>])<EOL>term_1_array = np.empty(expected_term_shape, dtype=float)<EOL>term_3_array = np.empty(expected_term_shape, dtype=float)<EOL>new_fit_kwargs = deepcopy(fit_kwargs)<EOL>if fit_kwargs is not None and '<STR_LIT>' in fit_kwargs:<EOL><INDENT>orig_weights = fit_kwargs['<STR_LIT>']<EOL>del new_fit_kwargs['<STR_LIT>']<EOL><DEDENT>else:<EOL><INDENT>orig_weights = <NUM_LIT:1><EOL><DEDENT>new_fit_kwargs['<STR_LIT>'] = True<EOL>for param_id in xrange(expected_term_shape[<NUM_LIT:0>]):<EOL><INDENT>term_1_long_weights =(create_long_form_weights(model_object,<EOL>term_1_wide_weights[:, param_id],<EOL>rows_to_obs=rows_to_obs) * orig_weights)<EOL>term_3_long_weights =(create_long_form_weights(model_object,<EOL>term_3_wide_weights[:, param_id],<EOL>rows_to_obs=rows_to_obs) * orig_weights)<EOL>term_1_array[param_id] =model_object.fit_mle(init_vals,<EOL>weights=term_1_long_weights,<EOL>**new_fit_kwargs)['<STR_LIT:x>']<EOL>term_3_array[param_id] =model_object.fit_mle(init_vals,<EOL>weights=term_3_long_weights,<EOL>**new_fit_kwargs)['<STR_LIT:x>']<EOL><DEDENT>term_1 = np.diag(term_1_array)<EOL>term_3 = np.diag(term_3_array)<EOL>term_2 = <NUM_LIT:2> * mle_params<EOL>quadratic_coef = np.zeros(term_1.shape, dtype=float)<EOL>denominator = epsilon**<NUM_LIT:2><EOL>diff_idx = ~np.isclose(term_1 + term_3, term_2, atol=<NUM_LIT>, rtol=<NUM_LIT:0>)<EOL>if diff_idx.any():<EOL><INDENT>quadratic_coef[diff_idx] =((term_1[diff_idx] - term_2[diff_idx] + term_3[diff_idx]) /<EOL>denominator)<EOL><DEDENT>return quadratic_coef<EOL>
|
Calculates the quadratic coefficient needed to compute the approximate
bootstrap confidence (ABC) intervals.
Parameters
----------
model_object : an instance or subclass of the MNDC class.
Should be the model object that corresponds to the model we are
constructing the bootstrap confidence intervals for.
mle_params : 1D ndarray.
Should contain the desired model's maximum likelihood point estimate.
init_vals : 1D ndarray.
The initial values used to estimate the desired choice model.
empirical_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the empirical influence of the associated observation on the
associated parameter.
std_error : 1D ndarray.
Contains the standard error of the MLE estimates for use in the ABC
confidence intervals.
epsilon : positive float.
Should denote the 'very small' value being used to calculate the
desired finite difference approximations to the various influence
functions. Should be 'close' to zero.
fit_kwargs : additional keyword arguments, optional.
Should contain any additional kwargs used to alter the default behavior
of `model_obj.fit_mle` and thereby enforce conformity with how the MLE
was obtained. Will be passed directly to `model_obj.fit_mle`.
Returns
-------
quadratic_coef : 1D ndarray.
Contains a measure of nonlinearity of the MLE estimation function as
one moves in the 'least favorable direction.'
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equation 22.37.
|
f7703:m12
|
def efron_quadratic_coef_abc(model_object,<EOL>mle_params,<EOL>init_vals,<EOL>empirical_influence,<EOL>std_error,<EOL>epsilon,<EOL>**fit_kwargs):
|
<EOL>num_obs = float(empirical_influence.shape[<NUM_LIT:0>])<EOL>standardized_influence =empirical_influence / (num_obs**<NUM_LIT:2> * std_error[None, :])<EOL>init_weights_wide = (np.ones(int(num_obs), dtype=float) / num_obs)[:, None]<EOL>term_1_wide_weights = init_weights_wide + epsilon * standardized_influence<EOL>term_3_wide_weights = init_weights_wide - epsilon * standardized_influence<EOL>rows_to_obs = model_object.get_mappings_for_fit()['<STR_LIT>']<EOL>expected_term_shape = (init_vals.shape[<NUM_LIT:0>], init_vals.shape[<NUM_LIT:0>])<EOL>term_1_array = np.empty(expected_term_shape, dtype=float)<EOL>term_3_array = np.empty(expected_term_shape, dtype=float)<EOL>new_fit_kwargs = deepcopy(fit_kwargs)<EOL>if fit_kwargs is not None and '<STR_LIT>' in fit_kwargs:<EOL><INDENT>orig_weights = fit_kwargs['<STR_LIT>']<EOL>del new_fit_kwargs['<STR_LIT>']<EOL><DEDENT>else:<EOL><INDENT>orig_weights = <NUM_LIT:1><EOL><DEDENT>new_fit_kwargs['<STR_LIT>'] = True<EOL>for param_id in xrange(expected_term_shape[<NUM_LIT:0>]):<EOL><INDENT>term_1_long_weights =(create_long_form_weights(model_object,<EOL>term_1_wide_weights[:, param_id],<EOL>rows_to_obs=rows_to_obs) * orig_weights)<EOL>term_3_long_weights =(create_long_form_weights(model_object,<EOL>term_3_wide_weights[:, param_id],<EOL>rows_to_obs=rows_to_obs) * orig_weights)<EOL>term_1_array[param_id] =model_object.fit_mle(init_vals,<EOL>weights=term_1_long_weights,<EOL>**new_fit_kwargs)['<STR_LIT:x>']<EOL>term_3_array[param_id] =model_object.fit_mle(init_vals,<EOL>weights=term_3_long_weights,<EOL>**new_fit_kwargs)['<STR_LIT:x>']<EOL><DEDENT>term_1 = np.diag(term_1_array)<EOL>term_3 = np.diag(term_3_array)<EOL>term_2 = <NUM_LIT:2> * mle_params<EOL>quadratic_coef = np.zeros(term_1.shape, dtype=float)<EOL>denominator = <NUM_LIT:2> * std_error * epsilon**<NUM_LIT:2><EOL>diff_idx = ~np.isclose(term_1 + term_3, term_2, atol=<NUM_LIT>, rtol=<NUM_LIT:0>)<EOL>if diff_idx.any():<EOL><INDENT>quadratic_coef[diff_idx] =((term_1[diff_idx] - term_2[diff_idx] + term_3[diff_idx]) /<EOL>denominator)<EOL><DEDENT>return quadratic_coef<EOL>
|
Calculates the quadratic coefficient needed to compute the approximate
bootstrap confidence (ABC) intervals.
Parameters
----------
model_object : an instance or subclass of the MNDC class.
Should be the model object that corresponds to the model we are
constructing the bootstrap confidence intervals for.
mle_params : 1D ndarray.
Should contain the desired model's maximum likelihood point estimate.
init_vals : 1D ndarray.
The initial values used to estimate the desired choice model.
empirical_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the empirical influence of the associated observation on the
associated parameter.
std_error : 1D ndarray.
Contains the standard error of the MLE estimates for use in the ABC
confidence intervals.
epsilon : positive float.
Should denote the 'very small' value being used to calculate the
desired finite difference approximations to the various influence
functions. Should be 'close' to zero.
fit_kwargs : additional keyword arguments, optional.
Should contain any additional kwargs used to alter the default behavior
of `model_obj.fit_mle` and thereby enforce conformity with how the MLE
was obtained. Will be passed directly to `model_obj.fit_mle`.
Returns
-------
quadratic_coef : 1D ndarray.
Contains a measure of nonlinearity of the MLE estimation function as
one moves in the 'least favorable direction.'
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equation 22.37.
Notes
-----
This function does not directly implement Equation 22.37. Instead, it
re-implements the calculations that Efron and Tibshirani use in their
'abcnon.R' file within the 'bootstrap' library.
|
f7703:m13
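Two details distinguish this function from f7703:m12: the base weights here stay at 1/n rather than being scaled by (1 - epsilon), and the denominator of the central second difference changes. A sketch of the denominator contrast, assuming `term_1`, `term_2`, and `term_3` are the arrays computed in the loops above:

    def quadratic_coef_denominators_sketch(term_1, term_2, term_3,
                                           std_error, epsilon):
        # f7703:m12 divides the central second difference by epsilon**2, while
        # this Efron / abcnon.R variant divides by 2 * std_error * epsilon**2.
        second_diff = term_1 - term_2 + term_3
        textbook_coef = second_diff / epsilon**2
        efron_coef = second_diff / (2 * std_error * epsilon**2)
        return textbook_coef, efron_coef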
|
def calc_total_curvature_abc(bias, std_error, quadratic_coef):
|
total_curvature = (bias / std_error) - quadratic_coef<EOL>return total_curvature<EOL>
|
Calculate the total curvature of the level surface of the weight vector,
where the weights in the surface are those for which the weighted MLE
equals the original (i.e. the equal-weighted) MLE.
Parameters
----------
bias : 1D ndarray.
Contains the approximate bias of the MLE estimates for use in the ABC
confidence intervals.
std_error : 1D ndarray.
Contains the standard error of the MLE estimates for use in the ABC
confidence intervals.
quadratic_coef : 1D ndarray.
Contains a measure of nonlinearity of the MLE estimation function as
one moves in the 'least favorable direction.'
Returns
-------
total_curvature : 1D ndarray of scalars.
Denotes the total curvature of the level surface of the weight vector.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6. Equation 22.39.
|
f7703:m14
|
def calc_bias_correction_abc(acceleration, total_curvature):
|
inner_arg = <NUM_LIT:2> * norm.cdf(acceleration) * norm.cdf(-<NUM_LIT:1> * total_curvature)<EOL>bias_correction = norm.ppf(inner_arg)<EOL>return bias_correction<EOL>
|
Calculate the bias correction constant for the approximate bootstrap
confidence (ABC) intervals.
Parameters
----------
acceleration : 1D ndarray of scalars.
Should contain the ABC intervals' estimated acceleration constants.
total_curvature : 1D ndarray of scalars.
Should denote the ABC intervals' computed total curvature values.
Returns
-------
bias_correction : 1D ndarray of scalars.
Contains the computed bias correction for the MLE estimates that the
ABC interval is being computed for.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equation 22.40, line 1.
|
f7703:m15
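Equation 22.40, line 1, written out as a readable one-liner:

    from scipy.stats import norm

    def bias_correction_sketch(acceleration, total_curvature):
        # z0_hat = Phi^{-1}(2 * Phi(a_hat) * Phi(-c_q_hat)).
        return norm.ppf(2 * norm.cdf(acceleration) * norm.cdf(-total_curvature))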
|
def calc_endpoint_from_percentile_abc(model_obj,<EOL>init_vals,<EOL>percentile,<EOL>bias_correction,<EOL>acceleration,<EOL>std_error,<EOL>empirical_influence,<EOL>**fit_kwargs):
|
<EOL>bias_corrected_z = bias_correction + norm.ppf(percentile * <NUM_LIT>)<EOL>lam = bias_corrected_z / (<NUM_LIT:1> - acceleration * bias_corrected_z)**<NUM_LIT:2><EOL>multiplier = lam / std_error<EOL>num_obs = empirical_influence.shape[<NUM_LIT:0>]<EOL>init_weights_wide = np.ones(num_obs, dtype=float)[:, None] / num_obs<EOL>weight_adjustment_wide = (multiplier[None, :] * empirical_influence)<EOL>wide_weights_all_params = init_weights_wide + weight_adjustment_wide<EOL>new_fit_kwargs = deepcopy(fit_kwargs)<EOL>if fit_kwargs is not None and '<STR_LIT>' in fit_kwargs:<EOL><INDENT>orig_weights = fit_kwargs['<STR_LIT>']<EOL>del new_fit_kwargs['<STR_LIT>']<EOL><DEDENT>else:<EOL><INDENT>orig_weights = np.ones(model_obj.data.shape[<NUM_LIT:0>], dtype=float)<EOL><DEDENT>new_fit_kwargs['<STR_LIT>'] = True<EOL>long_weights_all_params =(create_long_form_weights(model_obj, wide_weights_all_params) *<EOL>orig_weights[:, None])<EOL>num_params = init_vals.shape[<NUM_LIT:0>]<EOL>endpoint = np.empty(num_params, dtype=float)<EOL>for param_id in xrange(num_params):<EOL><INDENT>current_weights = long_weights_all_params[:, param_id]<EOL>current_estimate = model_obj.fit_mle(init_vals,<EOL>weights=current_weights,<EOL>**new_fit_kwargs)['<STR_LIT:x>']<EOL>endpoint[param_id] = current_estimate[param_id]<EOL><DEDENT>return endpoint<EOL>
|
Calculates the endpoint of the 1-tailed, (percentile)% confidence interval.
Note this interval spans from negative infinity to the calculated endpoint.
Parameters
----------
model_obj : an instance or subclass of the MNDC class.
Should be the model object that corresponds to the model we are
constructing the bootstrap confidence intervals for.
init_vals : 1D ndarray.
The initial values used to estimate the desired choice model.
percentile : scalar in (0.0, 100.0).
Denotes the percentile of the standard normal distribution at which
we'd like to evaluate the inverse cumulative distribution function and
then convert this standardized value back to our approximate bootstrap
distribution.
bias_correction : 1D ndarray of scalars.
Contains the computed bias correction for the MLE estimates that the
ABC interval is being computed for.
acceleration : 1D ndarray of scalars.
Should contain the ABC intervals' estimated acceleration constants.
std_error : 1D ndarray.
Contains the standard error of the MLE estimates for use in the ABC
confidence intervals.
empirical_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the empirical influence of the associated observation on the
associated parameter.
fit_kwargs : additional keyword arguments, optional.
Should contain any additional kwargs used to alter the default behavior
of `model_obj.fit_mle` and thereby enforce conformity with how the MLE
was obtained. Will be passed directly to `model_obj.fit_mle`.
Returns
-------
endpoint : 1D ndarray.
Contains the endpoint from our approximate bootstrap distribution's
1-tailed, upper `percentile` confidence interval.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equation 22.33.
|
f7703:m16
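The percentile-to-step-size mapping at the top of the body above, sketched with the masked multiplier presumed to be 0.01 (converting a percent into a proportion):

    from scipy.stats import norm

    def abc_step_size_sketch(percentile, bias_correction, acceleration):
        # w = z0_hat + z^(alpha), then lambda = w / (1 - a_hat * w)**2 gives the
        # step taken along the 'least favorable direction' of the weight simplex.
        bias_corrected_z = bias_correction + norm.ppf(percentile * 0.01)
        return bias_corrected_z / (1 - acceleration * bias_corrected_z)**2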
|
def efron_endpoint_from_percentile_abc(model_obj,<EOL>init_vals,<EOL>percentile,<EOL>bias_correction,<EOL>acceleration,<EOL>std_error,<EOL>empirical_influence,<EOL>**fit_kwargs):
|
<EOL>bias_corrected_z = bias_correction + norm.ppf(percentile * <NUM_LIT>)<EOL>num_obs = empirical_influence.shape[<NUM_LIT:0>]<EOL>lam = bias_corrected_z / (<NUM_LIT:1> - acceleration * bias_corrected_z)**<NUM_LIT:2><EOL>multiplier = lam / (std_error * num_obs**<NUM_LIT:2>)<EOL>init_weights_wide = np.ones(num_obs, dtype=float)[:, None] / num_obs<EOL>weight_adjustment_wide = (multiplier[None, :] * empirical_influence)<EOL>wide_weights_all_params = init_weights_wide + weight_adjustment_wide<EOL>new_fit_kwargs = deepcopy(fit_kwargs)<EOL>if fit_kwargs is not None and '<STR_LIT>' in fit_kwargs:<EOL><INDENT>orig_weights = fit_kwargs['<STR_LIT>']<EOL>del new_fit_kwargs['<STR_LIT>']<EOL><DEDENT>else:<EOL><INDENT>orig_weights = np.ones(model_obj.data.shape[<NUM_LIT:0>], dtype=float)<EOL><DEDENT>new_fit_kwargs['<STR_LIT>'] = True<EOL>long_weights_all_params =(create_long_form_weights(model_obj, wide_weights_all_params) *<EOL>orig_weights[:, None])<EOL>num_params = init_vals.shape[<NUM_LIT:0>]<EOL>endpoint = np.empty(num_params, dtype=float)<EOL>for param_id in xrange(num_params):<EOL><INDENT>current_weights = long_weights_all_params[:, param_id]<EOL>current_estimate = model_obj.fit_mle(init_vals,<EOL>weights=current_weights,<EOL>**new_fit_kwargs)['<STR_LIT:x>']<EOL>endpoint[param_id] = current_estimate[param_id]<EOL><DEDENT>return endpoint<EOL>
|
Calculates the endpoint of the 1-tailed, (percentile)% confidence interval.
Note this interval spans from negative infinity to the calculated endpoint.
Parameters
----------
model_obj : an instance or subclass of the MNDC class.
Should be the model object that corresponds to the model we are
constructing the bootstrap confidence intervals for.
init_vals : 1D ndarray.
The initial values used to estimate the desired choice model.
percentile : scalar in (0.0, 100.0).
Denotes the percentile of the standard normal distribution at which
we'd like to evaluate the inverse cumulative distribution function and
then convert this standardized value back to our approximate bootstrap
distribution.
bias_correction : 1D ndarray of scalars.
Contains the computed bias correction for the MLE estimates that the
ABC interval is being computed for.
acceleration : 1D ndarray of scalars.
Should contain the ABC intervals' estimated acceleration constants.
std_error : 1D ndarray.
Contains the standard error of the MLE estimates for use in the ABC
confidence intervals.
empirical_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the empirical influence of the associated observation on the
associated parameter.
fit_kwargs : additional keyword arguments, optional.
Should contain any additional kwargs used to alter the default behavior
of `model_obj.fit_mle` and thereby enforce conformity with how the MLE
was obtained. Will be passed directly to `model_obj.fit_mle`.
Returns
-------
endpoint : 1D ndarray.
Contains the endpoint from our approximate bootstrap distribution's
1-tailed, upper `percentile` confidence interval.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6, Equation 22.33.
Notes
-----
This function does not directly implement Equation 22.33. Instead, it
implements Efron's endpoint calculations from 'abcnon.R' in the 'bootstrap'
library in R. It is not clear where these calculations come from, nor
whether or how they are equivalent to Equation 22.33.
|
f7703:m17
|
def efron_endpoints_for_abc_confidence_interval(conf_percentage,<EOL>model_obj,<EOL>init_vals,<EOL>bias_correction,<EOL>acceleration,<EOL>std_error,<EOL>empirical_influence,<EOL>**fit_kwargs):
|
<EOL>alpha_percent = get_alpha_from_conf_percentage(conf_percentage)<EOL>lower_percentile = alpha_percent / <NUM_LIT><EOL>upper_percentile = <NUM_LIT:100> - lower_percentile<EOL>lower_endpoint = efron_endpoint_from_percentile_abc(model_obj,<EOL>init_vals,<EOL>lower_percentile,<EOL>bias_correction,<EOL>acceleration,<EOL>std_error,<EOL>empirical_influence,<EOL>**fit_kwargs)<EOL>upper_endpoint = efron_endpoint_from_percentile_abc(model_obj,<EOL>init_vals,<EOL>upper_percentile,<EOL>bias_correction,<EOL>acceleration,<EOL>std_error,<EOL>empirical_influence,<EOL>**fit_kwargs)<EOL>return lower_endpoint, upper_endpoint<EOL>
|
Calculates the endpoints of the equal-tailed, `conf_percentage`%
approximate bootstrap confidence (ABC) interval.
Parameters
----------
conf_percentage : scalar in the interval (0.0, 100.0).
Denotes the confidence-level for the returned endpoints. For instance,
to calculate a 95% confidence interval, pass `95`.
model_obj : an instance or subclass of the MNDC class.
Should be the model object that corresponds to the model we are
constructing the bootstrap confidence intervals for.
init_vals : 1D ndarray.
The initial values used to estimate the desired choice model.
bias_correction : 1D ndarray of scalars.
Contains the computed bias correction for the MLE estimates that the
ABC interval is being computed for.
acceleration : 1D ndarray of scalars.
Should contain the ABC intervals' estimated acceleration constants.
std_error : 1D ndarray.
Contains the standard error of the MLE estimates for use in the ABC
confidence intervals.
empirical_influence : 2D ndarray.
Should have one row for each observation. Should have one column for
each parameter in the parameter vector being estimated. Elements should
denote the empirical influence of the associated observation on the
associated parameter.
fit_kwargs : additional keyword arguments, optional.
Should contain any additional kwargs used to alter the default behavior
of `model_obj.fit_mle` and thereby enforce conformity with how the MLE
was obtained. Will be passed directly to `model_obj.fit_mle`.
Returns
-------
lower_endpoint, upper_endpoint : 1D ndarray.
Contains the lower or upper endpoint, respectively, from our
`conf_percentage`% ABC interval.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6.
Notes
-----
This function does not directly implement Equation 22.33. Instead, it
implements Efron's endpoint calculations from 'abcnon.R' in the 'bootstrap'
library in R. It is not clear where these calculations come from, nor
whether or how they are equivalent to Equation 22.33.
|
f7703:m18
|
def calc_abc_interval(model_obj,<EOL>mle_params,<EOL>init_vals,<EOL>conf_percentage,<EOL>epsilon=<NUM_LIT>,<EOL>**fit_kwargs):
|
<EOL>check_conf_percentage_validity(conf_percentage)<EOL>empirical_influence, second_order_influence =calc_influence_arrays_for_abc(model_obj,<EOL>mle_params,<EOL>init_vals,<EOL>epsilon,<EOL>**fit_kwargs)<EOL>acceleration = calc_acceleration_abc(empirical_influence)<EOL>std_error = calc_std_error_abc(empirical_influence)<EOL>bias = calc_bias_abc(second_order_influence)<EOL>quadratic_coef = efron_quadratic_coef_abc(model_obj,<EOL>mle_params,<EOL>init_vals,<EOL>empirical_influence,<EOL>std_error,<EOL>epsilon,<EOL>**fit_kwargs)<EOL>total_curvature = calc_total_curvature_abc(bias, std_error, quadratic_coef)<EOL>bias_correction = calc_bias_correction_abc(acceleration, total_curvature)<EOL>lower_endpoint, upper_endpoint =efron_endpoints_for_abc_confidence_interval(conf_percentage,<EOL>model_obj,<EOL>init_vals,<EOL>bias_correction,<EOL>acceleration,<EOL>std_error,<EOL>empirical_influence,<EOL>**fit_kwargs)<EOL>conf_intervals = combine_conf_endpoints(lower_endpoint, upper_endpoint)<EOL>return conf_intervals<EOL>
|
Calculate 'approximate bootstrap confidence' intervals.
Parameters
----------
model_obj : an instance or subclass of the MNDC class.
Should be the model object that corresponds to the model we are
constructing the bootstrap confidence intervals for.
mle_params : 1D ndarray.
Should contain the desired model's maximum likelihood point estimate.
init_vals : 1D ndarray.
The initial values used to estimate the desired choice model.
conf_percentage : scalar in the interval (0.0, 100.0).
Denotes the confidence-level of the returned confidence interval. For
instance, to calculate a 95% confidence interval, pass `95`.
epsilon : positive float, optional.
Should denote the 'very small' value being used to calculate the
desired finite difference approximations to the various influence
functions. Should be close to zero. Default == sys.float_info.epsilon.
fit_kwargs : additional keyword arguments, optional.
Should contain any additional kwargs used to alter the default behavior
of `model_obj.fit_mle` and thereby enforce conformity with how the MLE
was obtained. Will be passed directly to `model_obj.fit_mle`.
Returns
-------
conf_intervals : 2D ndarray.
The shape of the returned array will be `(2, init_vals.shape[0])`. The
first row will correspond to the lower value in the confidence
interval. The second row will correspond to the upper value in the
confidence interval. There will be one column for each element of the
parameter vector being estimated.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 22.6.
DiCiccio, Thomas J., and Bradley Efron. "Bootstrap confidence intervals."
Statistical science (1996): 189-212.
|
f7703:m19
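Hypothetical usage of the top-level function; `model_obj`, `mle_params`, and `init_vals` are assumed to come from a previously estimated pylogit model, and the epsilon shown is illustrative rather than the masked default:

    # Row 0 holds the lower endpoints; row 1 holds the upper endpoints.
    conf_intervals = calc_abc_interval(model_obj,
                                       mle_params,
                                       init_vals,
                                       95,
                                       epsilon=0.001)
    lower_endpoints, upper_endpoints = conf_intervals[0], conf_intervals[1]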
|
def extract_default_init_vals(orig_model_obj, mnl_point_series, num_params):
|
<EOL>init_vals = np.zeros(num_params, dtype=float)<EOL>no_outside_intercepts = orig_model_obj.intercept_names is None<EOL>if no_outside_intercepts:<EOL><INDENT>init_index_coefs = mnl_point_series.values<EOL>init_intercepts = None<EOL><DEDENT>else:<EOL><INDENT>init_index_coefs =mnl_point_series.loc[orig_model_obj.ind_var_names].values<EOL>init_intercepts =mnl_point_series.loc[orig_model_obj.intercept_names].values<EOL><DEDENT>if orig_model_obj.mixing_vars is not None:<EOL><INDENT>num_mixing_vars = len(orig_model_obj.mixing_vars)<EOL>init_index_coefs = np.concatenate([init_index_coefs,<EOL>np.zeros(num_mixing_vars)],<EOL>axis=<NUM_LIT:0>)<EOL><DEDENT>if orig_model_obj.model_type == model_type_to_display_name["<STR_LIT>"]:<EOL><INDENT>multiplier = np.log(len(np.unique(orig_model_obj.alt_IDs)))<EOL>init_index_coefs = init_index_coefs.astype(float)<EOL>init_index_coefs /= multiplier<EOL><DEDENT>if init_intercepts is not None:<EOL><INDENT>init_index_coefs =np.concatenate([init_intercepts, init_index_coefs], axis=<NUM_LIT:0>)<EOL><DEDENT>num_index = init_index_coefs.shape[<NUM_LIT:0>]<EOL>init_vals[-<NUM_LIT:1> * num_index:] = init_index_coefs<EOL>return init_vals<EOL>
|
Get the default initial values for the desired model type, based on the
point estimate of the MNL model that is 'closest' to the desired model.
Parameters
----------
orig_model_obj : an instance or subclass of the MNDC class.
Should correspond to the actual model that we want to bootstrap.
mnl_point_series : pandas Series.
Should denote the point estimate from the MNL model that is 'closest'
to the desired model.
num_params : int.
Should denote the number of parameters being estimated (including any
parameters that are being constrained during estimation).
Returns
-------
init_vals : 1D ndarray of initial values for the MLE of the desired model.
|
f7704:m0
|
def get_model_abbrev(model_obj):
|
<EOL>model_type = model_obj.model_type<EOL>for key in model_type_to_display_name:<EOL><INDENT>if model_type_to_display_name[key] == model_type:<EOL><INDENT>return key<EOL><DEDENT><DEDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg)<EOL>
|
Extract the string used to specify the model type of this model object in
`pylogit.create_choice_model`.
Parameters
----------
model_obj : An MNDC_Model instance.
Returns
-------
str. The internal abbreviation used for the particular type of MNDC_Model.
|
f7704:m1
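A de-tokenized sketch of the reverse lookup above; the error message is an assumption, since the original string literal is masked:

    def get_model_abbrev_sketch(model_obj, model_type_to_display_name):
        # Invert the abbreviation -> display-name mapping by scanning for the
        # entry whose display name matches this model object's `model_type`.
        for abbrev, display_name in model_type_to_display_name.items():
            if display_name == model_obj.model_type:
                return abbrev
        raise ValueError("Model object has an unknown model type.")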
|
def get_model_creation_kwargs(model_obj):
|
<EOL>model_abbrev = get_model_abbrev(model_obj)<EOL>model_kwargs = {"<STR_LIT>": model_abbrev,<EOL>"<STR_LIT>": model_obj.name_spec,<EOL>"<STR_LIT>": model_obj.intercept_names,<EOL>"<STR_LIT>": model_obj.intercept_ref_position,<EOL>"<STR_LIT>": model_obj.shape_names,<EOL>"<STR_LIT>": model_obj.shape_ref_position,<EOL>"<STR_LIT>": model_obj.nest_spec,<EOL>"<STR_LIT>": model_obj.mixing_vars,<EOL>"<STR_LIT>": model_obj.mixing_id_col}<EOL>return model_kwargs<EOL>
|
Get a dictionary of the keyword arguments needed to create the passed model
object using `pylogit.create_choice_model`.
Parameters
----------
model_obj : An MNDC_Model instance.
Returns
-------
model_kwargs : dict.
Contains the keyword arguments and the required values that are needed
to initialize a replica of `model_obj`.
|
f7704:m2
|
def get_mnl_point_est(orig_model_obj,<EOL>new_df,<EOL>boot_id_col,<EOL>num_params,<EOL>mnl_spec,<EOL>mnl_names,<EOL>mnl_init_vals,<EOL>mnl_fit_kwargs):
|
<EOL>if orig_model_obj.model_type == model_type_to_display_name["<STR_LIT>"]:<EOL><INDENT>mnl_spec = orig_model_obj.specification<EOL>mnl_names = orig_model_obj.name_spec<EOL>if mnl_init_vals is None:<EOL><INDENT>mnl_init_vals = np.zeros(num_params)<EOL><DEDENT>if mnl_fit_kwargs is None:<EOL><INDENT>mnl_fit_kwargs = {}<EOL><DEDENT><DEDENT>mnl_fit_kwargs["<STR_LIT>"] = True<EOL>if "<STR_LIT>" not in mnl_fit_kwargs:<EOL><INDENT>mnl_fit_kwargs["<STR_LIT>"] = "<STR_LIT>"<EOL><DEDENT>mnl_obj = pl.create_choice_model(data=new_df,<EOL>alt_id_col=orig_model_obj.alt_id_col,<EOL>obs_id_col=boot_id_col,<EOL>choice_col=orig_model_obj.choice_col,<EOL>specification=mnl_spec,<EOL>model_type="<STR_LIT>",<EOL>names=mnl_names)<EOL>mnl_point = mnl_obj.fit_mle(mnl_init_vals, **mnl_fit_kwargs)<EOL>return mnl_point, mnl_obj<EOL>
|
Calculates the MLE for the desired MNL model.
Parameters
----------
orig_model_obj : An MNDC_Model instance.
The object corresponding to the desired model being bootstrapped.
new_df : pandas DataFrame.
The pandas dataframe containing the data to be used to estimate the
MLE of the MNL model for the current bootstrap sample.
boot_id_col : str.
Denotes the new column that specifies the bootstrap observation ids for
choice model estimation.
num_params : non-negative int.
The number of parameters in the MLE of the `orig_model_obj`.
mnl_spec : OrderedDict or None.
If `orig_model_obj` is not a MNL model, then `mnl_spec` should be an
OrderedDict that contains the specification dictionary used to estimate
the MNL model that will provide starting values for the final estimated
model. If `orig_model_obj` is a MNL model, then `mnl_spec` may be None.
mnl_names : OrderedDict or None.
If `orig_model_obj` is not a MNL model, then `mnl_names` should be an
OrderedDict that contains the name dictionary used to initialize the
MNL model that will provide starting values for the final estimated
model. If `orig_model_obj` is a MNL, then `mnl_names` may be None.
mnl_init_vals : 1D ndarray or None.
If `orig_model_obj` is not a MNL model, then `mnl_init_vals` should be
a 1D ndarray. `mnl_init_vals` should denote the initial values used to
estimate the MNL model that provides starting values for the final
desired model. If `orig_model_obj` is a MNL model, then `mnl_init_vals`
may be None.
mnl_fit_kwargs : dict or None.
If `orig_model_obj` is not a MNL model, then `mnl_fit_kwargs` should be
a dict. `mnl_fit_kwargs` should denote the keyword arguments used when
calling the `fit_mle` function of the MNL model that will provide
starting values to the desired choice model. If `orig_model_obj` is a
MNL model, then `mnl_fit_kwargs` may be None.
Returns
-------
mnl_point : dict.
The dictionary returned by `scipy.optimize` after estimating the
desired MNL model.
mnl_obj : An MNL model instance.
The model object used to estimate the desired MNL model.
|
f7704:m3
|
def retrieve_point_est(orig_model_obj,<EOL>new_df,<EOL>new_id_col,<EOL>num_params,<EOL>mnl_spec,<EOL>mnl_names,<EOL>mnl_init_vals,<EOL>mnl_fit_kwargs,<EOL>extract_init_vals=None,<EOL>**fit_kwargs):
|
<EOL>mnl_point, mnl_obj = get_mnl_point_est(orig_model_obj,<EOL>new_df,<EOL>new_id_col,<EOL>num_params,<EOL>mnl_spec,<EOL>mnl_names,<EOL>mnl_init_vals,<EOL>mnl_fit_kwargs)<EOL>mnl_point_series = pd.Series(mnl_point["<STR_LIT:x>"], index=mnl_obj.ind_var_names)<EOL>if orig_model_obj.model_type == model_type_to_display_name["<STR_LIT>"]:<EOL><INDENT>final_point = mnl_point<EOL><DEDENT>else:<EOL><INDENT>if extract_init_vals is None:<EOL><INDENT>extraction_func = extract_default_init_vals<EOL><DEDENT>else:<EOL><INDENT>extraction_func = extract_init_vals<EOL><DEDENT>default_init_vals =extraction_func(orig_model_obj, mnl_point_series, num_params)<EOL>model_kwargs = get_model_creation_kwargs(orig_model_obj)<EOL>new_obj =pl.create_choice_model(data=new_df,<EOL>alt_id_col=orig_model_obj.alt_id_col,<EOL>obs_id_col=new_id_col,<EOL>choice_col=orig_model_obj.choice_col,<EOL>specification=orig_model_obj.specification,<EOL>**model_kwargs)<EOL>if '<STR_LIT>' not in fit_kwargs:<EOL><INDENT>fit_kwargs['<STR_LIT>'] = True<EOL><DEDENT>final_point = new_obj.fit_mle(default_init_vals, **fit_kwargs)<EOL><DEDENT>return final_point<EOL>
|
Calculates the MLE for the desired MNL model.
Parameters
----------
orig_model_obj : An MNDC_Model instance.
The object corresponding to the desired model being bootstrapped.
new_df : pandas DataFrame.
The pandas dataframe containing the data to be used to estimate the
MLE of the MNL model for the current bootstrap sample.
new_id_col : str.
Denotes the new column that specifies the bootstrap observation ids for
choice model estimation.
num_params : non-negative int.
The number of parameters in the MLE of the `orig_model_obj`.
mnl_spec : OrderedDict or None.
If `orig_model_obj` is not a MNL model, then `mnl_spec` should be an
OrderedDict that contains the specification dictionary used to estimate
the MNL model that will provide starting values for the final estimated
model. If `orig_model_obj` is a MNL model, then `mnl_spec` may be None.
mnl_names : OrderedDict or None.
If `orig_model_obj` is not a MNL model, then `mnl_names` should be an
OrderedDict that contains the name dictionary used to initialize the
MNL model that will provide starting values for the final estimated
model. If `orig_model_obj` is a MNL, then `mnl_names` may be None.
mnl_init_vals : 1D ndarray or None.
If `orig_model_obj` is not a MNL model, then `mnl_init_vals` should be
a 1D ndarray. `mnl_init_vals` should denote the initial values used to
estimate the MNL model that provides starting values for the final
desired model. If `orig_model_obj` is a MNL model, then `mnl_init_vals`
may be None.
mnl_fit_kwargs : dict or None.
If `orig_model_obj` is not a MNL model, then `mnl_fit_kwargs` should be
a dict. `mnl_fit_kwargs` should denote the keyword arguments used when
calling the `fit_mle` function of the MNL model that will provide
starting values to the desired choice model. If `orig_model_obj` is a
MNL model, then `mnl_fit_kwargs` may be None.
extract_init_vals : callable or None, optional.
Should accept 3 arguments, in the following order. First, it should
accept `orig_model_obj`. Second, it should accept a pandas Series of
the estimated parameters from the MNL model. The index of the Series
will be the names of the coefficients from `mnl_names`. Thirdly, it
should accept an int denoting the number of parameters in the desired
choice model. The callable should return a 1D ndarray of starting
values for the desired choice model. Default == None.
fit_kwargs : dict.
Denotes the keyword arguments to be used when estimating the desired
choice model using the current bootstrap sample (`new_df`). All such
kwargs will be directly passed to the `fit_mle` method of the desired
model object.
Returns
-------
final_point : dict.
The dictionary returned by `scipy.optimize` after estimating the
desired choice model.
|
f7704:m4
|
def relate_obs_ids_to_chosen_alts(obs_id_array,<EOL>alt_id_array,<EOL>choice_array):
|
<EOL>chosen_alts_to_obs_ids = {}<EOL>for alt_id in np.sort(np.unique(alt_id_array)):<EOL><INDENT>selection_condition =np.where((alt_id_array == alt_id) & (choice_array == <NUM_LIT:1>))<EOL>chosen_alts_to_obs_ids[alt_id] =np.sort(np.unique(obs_id_array[selection_condition]))<EOL><DEDENT>return chosen_alts_to_obs_ids<EOL>
|
Creates a dictionary that relates each unique alternative id to the set of
observation ids that chose the given alternative.
Parameters
----------
obs_id_array : 1D ndarray of ints.
Should be a long-format array of observation ids. Each element should
correspond to the unique id of the unit of observation that corresponds
to the given row of the long-format data. Note that each unit of
observation may have more than one associated choice situation.
alt_id_array : 1D ndarray of ints.
Should be a long-format array of alternative ids. Each element should
denote the unique id of the alternative that corresponds to the given
row of the long format data.
choice_array : 1D ndarray of ints.
Each element should be either a one or a zero, indicating whether the
alternative on the given row of the long format data was chosen or not.
Returns
-------
chosen_alts_to_obs_ids : dict.
Each key will be a unique value from `alt_id_array`. Each key's value
will be a 1D ndarray that contains the sorted, unique observation ids
of those observational units that chose the given alternative.
|
f7705:m0
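A toy usage example, assuming two observations that each face the same two alternatives:

    import numpy as np

    obs_ids = np.array([1, 1, 2, 2])
    alt_ids = np.array([10, 20, 10, 20])
    choices = np.array([1, 0, 0, 1])
    # Observation 1 chose alternative 10; observation 2 chose alternative 20.
    result = relate_obs_ids_to_chosen_alts(obs_ids, alt_ids, choices)
    # result == {10: np.array([1]), 20: np.array([2])}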
|