| signature | body | docstring | id |
|---|---|---|---|
def calc_log_likes_for_replicates(self,
                                  replicates='<STR_LIT>',
                                  num_draws=None,
                                  seed=None):
|
ensure_replicates_kwarg_validity(replicates)
replicate_vec = getattr(self, replicates + "_replicates").values
choice_col = self.model_obj.choice_col
current_model_type = self.model_obj.model_type
non_2d_predictions = [model_type_to_display_name["<STR_LIT>"],
                      model_type_to_display_name["<STR_LIT>"]]
if current_model_type not in non_2d_predictions:
    param_list = get_param_list_for_prediction(self.model_obj, replicate_vec)
    chosen_probs = self.model_obj.predict(self.model_obj.data,
                                          param_list=param_list,
                                          return_long_probs=False,
                                          choice_col=choice_col)
else:
    chosen_probs_list = []
    iterable_for_iteration = PROGRESS(xrange(replicate_vec.shape[0]),
                                      desc="<STR_LIT>",
                                      total=replicate_vec.shape[0])
    for idx in iterable_for_iteration:
        param_list = get_param_list_for_prediction(self.model_obj,
                                                   replicate_vec[idx][None, :])
        param_list = [x.ravel() if x is not None else x for x in param_list]
        chosen_probs = self.model_obj.predict(self.model_obj.data,
                                              param_list=param_list,
                                              return_long_probs=False,
                                              choice_col=choice_col,
                                              num_draws=num_draws,
                                              seed=seed)
        chosen_probs_list.append(chosen_probs[:, None])
    chosen_probs = np.concatenate(chosen_probs_list, axis=1)
log_likelihoods = np.log(chosen_probs).sum(axis=0)
attribute_name = replicates + "_log_likelihoods"
log_like_series = pd.Series(log_likelihoods, name=attribute_name)
setattr(self, attribute_name, log_like_series)
return log_likelihoods
|
Calculate the log-likelihood value of one's replicates, given one's
dataset.
Parameters
----------
replicates : str in {'bootstrap', 'jackknife'}.
Denotes which set of replicates should have their log-likelihoods
calculated.
num_draws : int greater than zero or None, optional.
Denotes the number of random draws for mixed logit estimation. If
None, then no random draws will be made. Default == None.
seed : int greater than zero or None, optional.
Denotes the random seed to be used for mixed logit estimation.
If None, then no random seed will be set. Default == None.
Returns
-------
log_likelihoods : 1D ndarray.
Each element stores the log-likelihood of the associated parameter
values on the model object's dataset. The log-likelihoods are also
stored on the `replicates + '_log_likelihoods'` attribute.
|
f7686:c0:m3
|
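A hypothetical usage sketch of the method above; `boot` stands in for a replicate-holding object whose bootstrap replicates have already been generated, and the draw count and seed are illustrative only.

```python
bootstrap_lls = boot.calc_log_likes_for_replicates(replicates='bootstrap',
                                                   num_draws=200,
                                                   seed=601)
# The same values are also stored on the object as a pandas Series.
print(boot.bootstrap_log_likelihoods.describe())
```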
def calc_gradient_norm_for_replicates(self,
                                      replicates='<STR_LIT>',
                                      ridge=None,
                                      constrained_pos=None,
                                      weights=None):
|
ensure_replicates_kwarg_validity(replicates)
estimation_obj = create_estimation_obj(self.model_obj,
                                       self.mle_params.values,
                                       ridge=ridge,
                                       constrained_pos=constrained_pos,
                                       weights=weights)
if hasattr(estimation_obj, "<STR_LIT>"):
    estimation_obj.set_derivatives()
replicate_array = getattr(self, replicates + "_replicates").values
num_reps = replicate_array.shape[0]
gradient_norms = np.empty((num_reps,), dtype=float)
iterable_for_iteration = PROGRESS(xrange(num_reps),
                                  desc="<STR_LIT>",
                                  total=num_reps)
for row in iterable_for_iteration:
    current_params = replicate_array[row]
    gradient = estimation_obj.convenience_calc_gradient(current_params)
    gradient_norms[row] = np.linalg.norm(gradient)
return gradient_norms
|
Calculate the Euclidean norm of the log-likelihood's gradient, evaluated
at each replicate's parameter values, given one's dataset.
Parameters
----------
replicates : str in {'bootstrap', 'jackknife'}.
Denotes which set of replicates should have the norms of their
gradients calculated.
ridge : float or None, optional.
Denotes the ridge penalty used when estimating the replicates, and
to be used when calculating the gradient. If None, no ridge penalty
is used. Default == None.
constrained_pos : list or None, optional.
Denotes the positions of the array of estimated parameters that are
not to change from their initial values. If a list is passed, the
elements are to be integers where no such integer is greater than
`self.mle_params.size`. Default == None.
weights : 1D ndarray or None, optional.
Allows for the calculation of weighted log-likelihoods. The weights
can represent various things. In stratified samples, the weights
may be the proportion of the observations in a given strata for a
sample in relation to the proportion of observations in that strata
in the population. In latent class models, the weights may be the
probability of being a particular class.
Returns
-------
gradient_norms : 1D ndarray.
Each element stores the Euclidean norm of the gradient of the
log-likelihood, evaluated on the model object's dataset at the
corresponding replicate's parameter values.
|
f7686:c0:m4
|
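A hypothetical usage sketch: because each replicate is a re-estimated parameter vector, a large gradient norm flags a replicate whose re-estimation may not have converged. The `boot` object and the tolerance below are assumptions for illustration.

```python
import numpy as np

norms = boot.calc_gradient_norm_for_replicates(replicates='jackknife')
suspect = np.where(norms > 1e-3)[0]  # illustrative tolerance
print("{} replicates have suspiciously large gradient norms".format(suspect.size))
```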
def calc_percentile_interval(self, conf_percentage):
|
alpha = bc.get_alpha_from_conf_percentage(conf_percentage)
single_column_names = ['<STR_LIT>'.format(alpha / <NUM_LIT>),
                       '<STR_LIT>'.format(100 - alpha / <NUM_LIT>)]
conf_intervals = bc.calc_percentile_interval(self.bootstrap_replicates.values,
                                             conf_percentage)
self.percentile_interval = pd.DataFrame(conf_intervals.T,
                                        index=self.mle_params.index,
                                        columns=single_column_names)
return None
|
Calculates percentile bootstrap confidence intervals for one's model.
Parameters
----------
conf_percentage : scalar in the interval (0.0, 100.0).
Denotes the confidence-level for the returned endpoints. For
instance, to calculate a 95% confidence interval, pass `95`.
Returns
-------
None. Will store the percentile intervals as `self.percentile_interval`
Notes
-----
Must have already called `self.generate_bootstrap_replicates`.
|
f7686:c0:m5
|
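A minimal numpy sketch of the percentile method that `bc.calc_percentile_interval` presumably implements; this is a textbook version under that assumption, not pylogit's exact code.

```python
import numpy as np

def percentile_interval_sketch(replicates, conf_percentage):
    # replicates: 2D array with one row per bootstrap replicate and one
    # column per parameter.
    alpha = 100.0 - conf_percentage              # e.g., 95 -> alpha == 5
    lower = np.percentile(replicates, alpha / 2.0, axis=0)
    upper = np.percentile(replicates, 100.0 - alpha / 2.0, axis=0)
    return np.stack([lower, upper])              # shape: (2, num_params)
```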
def calc_bca_interval(self, conf_percentage):
|
alpha = bc.get_alpha_from_conf_percentage(conf_percentage)
single_column_names = ['<STR_LIT>'.format(alpha / <NUM_LIT>),
                       '<STR_LIT>'.format(100 - alpha / <NUM_LIT>)]
args = [self.bootstrap_replicates.values,
        self.jackknife_replicates.values,
        self.mle_params.values,
        conf_percentage]
conf_intervals = bc.calc_bca_interval(*args)
self.bca_interval = pd.DataFrame(conf_intervals.T,
                                 index=self.mle_params.index,
                                 columns=single_column_names)
return None
|
Calculates Bias-Corrected and Accelerated (BCa) Bootstrap Confidence
Intervals for one's model.
Parameters
----------
conf_percentage : scalar in the interval (0.0, 100.0).
Denotes the confidence-level for the returned endpoints. For
instance, to calculate a 95% confidence interval, pass `95`.
Returns
-------
None. Will store the BCa intervals as `self.bca_interval`.
Notes
-----
Must have already called `self.generate_bootstrap_replicates` and
`self.generate_jackknife_replicates`.
|
f7686:c0:m6
|
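For intuition, a textbook sketch of the two constants that the BCa method adds on top of the percentile method; `bc.calc_bca_interval` is assumed to compute something equivalent, and this is not pylogit's exact code.

```python
import numpy as np
from scipy.stats import norm

def bca_constants_sketch(boot_reps, jack_reps, mle_est):
    # Bias-correction constant: normal quantile of the fraction of
    # bootstrap replicates falling below the MLE, column by column.
    z0 = norm.ppf((boot_reps < mle_est[None, :]).mean(axis=0))
    # Acceleration constant: skewness of the jackknife replicates.
    deviations = jack_reps.mean(axis=0)[None, :] - jack_reps
    accel = ((deviations**3).sum(axis=0) /
             (6.0 * ((deviations**2).sum(axis=0))**1.5))
    # The BCa endpoints are the bootstrap percentiles evaluated at
    # norm.cdf(z0 + (z0 + z_alpha) / (1 - accel * (z0 + z_alpha))).
    return z0, accel
```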
def calc_abc_interval(self,
                      conf_percentage,
                      init_vals,
                      epsilon=<NUM_LIT>,
                      **fit_kwargs):
|
print("<STR_LIT>")<EOL>print(time.strftime("<STR_LIT>"))<EOL>sys.stdout.flush()<EOL>alpha = bc.get_alpha_from_conf_percentage(conf_percentage)<EOL>single_column_names =['<STR_LIT>'.format(alpha / <NUM_LIT>),<EOL>'<STR_LIT>'.format(<NUM_LIT:100> - alpha / <NUM_LIT>)]<EOL>conf_intervals =abc.calc_abc_interval(self.model_obj,<EOL>self.mle_params.values,<EOL>init_vals,<EOL>conf_percentage,<EOL>epsilon=epsilon,<EOL>**fit_kwargs)<EOL>self.abc_interval = pd.DataFrame(conf_intervals.T,<EOL>index=self.mle_params.index,<EOL>columns=single_column_names)<EOL>return None<EOL>
|
Calculates Approximate Bootstrap Confidence Intervals for one's model.
Parameters
----------
conf_percentage : scalar in the interval (0.0, 100.0).
Denotes the confidence-level for the returned endpoints. For
instance, to calculate a 95% confidence interval, pass `95`.
init_vals : 1D ndarray.
The initial values used to estimate one's choice model.
epsilon : positive float, optional.
Should denote the 'very small' value being used to calculate the
desired finite difference approximations to the various influence
functions. Should be close to zero.
Default == sys.float_info.epsilon.
fit_kwargs : additional keyword arguments, optional.
Should contain any additional kwargs used to alter the default
behavior of `model_obj.fit_mle` and thereby enforce conformity with
how the MLE was obtained. Will be passed directly to
`model_obj.fit_mle`.
Returns
-------
None. Will store the ABC intervals as `self.abc_interval`.
|
f7686:c0:m7
|
def calc_conf_intervals(self,
                        conf_percentage,
                        interval_type='all',
                        init_vals=None,
                        epsilon=abc.EPSILON,
                        **fit_kwargs):
|
if interval_type == 'pi':
    self.calc_percentile_interval(conf_percentage)
elif interval_type == 'bca':
    self.calc_bca_interval(conf_percentage)
elif interval_type == 'abc':
    self.calc_abc_interval(conf_percentage,
                           init_vals,
                           epsilon=epsilon,
                           **fit_kwargs)
elif interval_type == 'all':
    print("<STR_LIT>")
    sys.stdout.flush()
    self.calc_percentile_interval(conf_percentage)
    print("<STR_LIT>")
    sys.stdout.flush()
    self.calc_bca_interval(conf_percentage)
    self.calc_abc_interval(conf_percentage,
                           init_vals,
                           epsilon=epsilon,
                           **fit_kwargs)
    alpha = bc.get_alpha_from_conf_percentage(conf_percentage)
    interval_type_names = ['<STR_LIT>',
                           '<STR_LIT>',
                           '<STR_LIT>']
    endpoint_names = ['<STR_LIT>'.format(alpha / <NUM_LIT>),
                      '<STR_LIT>'.format(100 - alpha / <NUM_LIT>)]
    multi_index_names = list(itertools.product(interval_type_names,
                                               endpoint_names))
    df_column_index = pd.MultiIndex.from_tuples(multi_index_names)
    self.all_intervals = pd.concat([self.percentile_interval,
                                    self.bca_interval,
                                    self.abc_interval],
                                   axis=1,
                                   ignore_index=True)
    self.all_intervals.columns = df_column_index
    self.all_intervals.index = self.mle_params.index
else:
    msg = "<STR_LIT>"
    raise ValueError(msg)
return None
|
Calculates percentile, bias-corrected and accelerated, and approximate
bootstrap confidence intervals.
Parameters
----------
conf_percentage : scalar in the interval (0.0, 100.0).
Denotes the confidence-level for the returned endpoints. For
instance, to calculate a 95% confidence interval, pass `95`.
interval_type : str in {'all', 'pi', 'bca', 'abc'}, optional.
Denotes the type of confidence intervals that should be calculated.
'all' results in all types of confidence intervals being
calculated. 'pi' means 'percentile intervals', 'bca' means
'bias-corrected and accelerated', and 'abc' means 'approximate
bootstrap confidence' intervals. Default == 'all'.
init_vals : 1D ndarray.
The initial values used to estimate one's choice model.
epsilon : positive float, optional.
Should denote the 'very small' value being used to calculate the
desired finite difference approximations to the various influence
functions for the 'abc' intervals. Should be close to zero.
Default == sys.float_info.epsilon.
fit_kwargs : additional keyword arguments, optional.
Should contain any additional kwargs used to alter the default
behavior of `model_obj.fit_mle` and thereby enforce conformity with
how the MLE was obtained. Will be passed directly to
`model_obj.fit_mle` when calculating the 'abc' intervals.
Returns
-------
None. Will store the confidence intervals on their respective model
objects: `self.percentile_interval`, `self.bca_interval`,
`self.abc_interval`, or all of these objects.
|
f7686:c0:m8
|
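A hypothetical usage sketch; `boot` is assumed to already hold bootstrap and jackknife replicates, and `initial_values` stands for the starting values used in the original estimation (needed by the 'abc' branch).

```python
boot.calc_conf_intervals(95, interval_type='all', init_vals=initial_values)
# MultiIndex columns: one (interval type, endpoint) pair per column.
print(boot.all_intervals)
```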
def split_param_vec(beta, return_all_types=False, *args, **kwargs):
|
if return_all_types:
    return None, None, None, beta
else:
    return None, None, beta
|
Parameters
----------
beta : 1D numpy array.
All elements should be ints, floats, or longs. Should have 1 element
for each utility coefficient being estimated (i.e. num_features).
return_all_types : bool, optional.
Determines whether or not a tuple of 4 elements will be returned (with
one element for the nest, shape, intercept, and index parameters for
this model). If False, a tuple of 3 elements will be returned, as
described below.
Returns
-------
tuple.
`(None, None, beta)`. This function is merely for compatibility with
the other choice model files.
Note
----
If `return_all_types == True` then the function will return a tuple of
`(None, None, None, beta)`. These values represent the nest, shape, outside
intercept, and index coefficients for the mixed logit model.
|
f7687:m0
|
def mnl_utility_transform(sys_utility_array, *args, **kwargs):
|
if len(sys_utility_array.shape) == 1:
    systematic_utilities = sys_utility_array[:, np.newaxis]
else:
    systematic_utilities = sys_utility_array
return systematic_utilities
|
Parameters
----------
sys_utility_array : ndarray.
Should be 1D or 2D. Should have been created by the dot product of a
design matrix and an array of index coefficients.
Returns
-------
systematic_utilities : 2D ndarray.
The input systematic utilities. If `sys_utility_array` is 2D, then
`sys_utility_array` is returned. Else, returns
`sys_utility_array[:, None]`.
|
f7687:m1
|
def check_length_of_init_values(design_3d, init_values):
|
if init_values.shape[0] != design_3d.shape[2]:
    msg_1 = "<STR_LIT>"
    msg_2 = "<STR_LIT>".format(design_3d.shape[2])
    raise ValueError(msg_1 + msg_2)
return None
|
Ensures that the initial values are of the correct length, given the design
matrix that they will be dot-producted with. Raises a ValueError if that is
not the case, and provides a useful error message to users.
Parameters
----------
init_values : 1D ndarray.
1D numpy array of the initial values to start the optimization process
with. There should be one value for each index coefficient being
estimated.
design_3d : 3D ndarray.
3D numpy array with one row per observation per available alternative.
There should be one element along the last axis per index coefficient
being estimated. All elements should be ints, floats, or longs.
Returns
-------
None.
|
f7687:m2
|
def add_mixl_specific_results_to_estimation_res(estimator, results_dict):
|
prob_res = mlc.calc_choice_sequence_probs(results_dict["long_probs"],
                                          estimator.choice_vector,
                                          estimator.rows_to_mixers,
                                          return_type='all')
results_dict["sequence_probs"] = prob_res[0]
results_dict["expanded_sequence_probs"] = prob_res[1]
return results_dict
|
Stores particular items in the results dictionary that are unique to mixed
logit-type models. In particular, this function calculates and adds
`sequence_probs` and `expanded_sequence_probs` to the results dictionary.
Parameters
----------
estimator : an instance of the MixedEstimator class.
Should contain a `choice_vector` attribute that is a 1D ndarray
representing the choices made for this model's dataset. Should also
contain a `rows_to_mixers` attribute that maps each row of the long
format data to a unit of observation that the mixing is being performed
over.
results_dict : dict.
This dictionary should be the dictionary returned from
scipy.optimize.minimize. In particular, it should have a
`long_probs` key.
Returns
-------
results_dict.
|
f7687:m3
|
def convenience_split_params(self, params, return_all_types=False):
|
return self.split_params(params,
                         return_all_types=return_all_types)
|
Splits parameter vector into shape, intercept, and index parameters.
Parameters
----------
params : 1D ndarray.
The array of parameters being estimated or used in calculations.
return_all_types : bool, optional.
Determines whether or not a tuple of 4 elements will be returned
(with one element for the nest, shape, intercept, and index
parameters for this model). If False, a tuple of 3 elements will
be returned with one element for the shape, intercept, and index
parameters.
Returns
-------
tuple. Will have 4 or 3 elements based on `return_all_types`.
|
f7687:c0:m1
|
def check_length_of_initial_values(self, init_values):
|
return check_length_of_init_values(self.design_3d, init_values)
|
Ensures that the initial values are of the correct length.
|
f7687:c0:m2
|
def convenience_calc_probs(self, params):
|
shapes, intercepts, betas = self.convenience_split_params(params)
prob_args = (betas,
             self.design_3d,
             self.alt_id_vector,
             self.rows_to_obs,
             self.rows_to_alts,
             self.utility_transform)
prob_kwargs = {"<STR_LIT>": self.chosen_row_to_obs,
               "<STR_LIT>": True}
probability_results = general_calc_probabilities(*prob_args,
                                                 **prob_kwargs)
return probability_results
|
Calculates the probabilities of the chosen alternative, and the long
format probabilities for this model and dataset.
|
f7687:c0:m3
|
def convenience_calc_log_likelihood(self, params):
|
shapes, intercepts, betas = self.convenience_split_params(params)
args = [betas,
        self.design_3d,
        self.alt_id_vector,
        self.rows_to_obs,
        self.rows_to_alts,
        self.rows_to_mixers,
        self.choice_vector,
        self.utility_transform]
kwargs = {"ridge": self.ridge, "weights": self.weights}
log_likelihood = general_log_likelihood(*args, **kwargs)
return log_likelihood
|
Calculates the log-likelihood for this model and dataset.
|
f7687:c0:m4
|
def convenience_calc_gradient(self, params):
|
shapes, intercepts, betas = self.convenience_split_params(params)
args = [betas,
        self.design_3d,
        self.alt_id_vector,
        self.rows_to_obs,
        self.rows_to_alts,
        self.rows_to_mixers,
        self.choice_vector,
        self.utility_transform]
return general_gradient(*args, ridge=self.ridge, weights=self.weights)
|
Calculates the gradient of the log-likelihood for this model / dataset.
|
f7687:c0:m5
|
def convenience_calc_hessian(self, params):
|
shapes, intercepts, betas = self.convenience_split_params(params)
args = [betas,
        self.design_3d,
        self.alt_id_vector,
        self.rows_to_obs,
        self.rows_to_alts,
        self.rows_to_mixers,
        self.choice_vector,
        self.utility_transform]
approx_hess = general_bhhh(*args, ridge=self.ridge, weights=self.weights)
if self.constrained_pos is not None:
    for idx_val in self.constrained_pos:
        approx_hess[idx_val, :] = 0
        approx_hess[:, idx_val] = 0
        approx_hess[idx_val, idx_val] = -1
return approx_hess
|
Calculates the hessian of the log-likelihood for this model / dataset.
Note that this function name is INCORRECT with regard to the actual
actions performed. The Mixed Logit model uses the BHHH approximation
to the Fisher Information Matrix in place of the actual hessian.
|
f7687:c0:m6
|
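A generic sketch of the BHHH idea that this docstring references: the hessian is replaced by the negative sum of outer products of each observation's score vector. `general_bhhh` is assumed to compute something of this form, plus ridge and weight adjustments.

```python
import numpy as np

def bhhh_approximation_sketch(score_matrix):
    # score_matrix: 2D array with one row per observation and one column
    # per parameter, holding per-observation gradient contributions.
    # BHHH: hessian ~ -(sum over observations of score outer products).
    return -1 * score_matrix.T.dot(score_matrix)
```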
def convenience_calc_fisher_approx(self, params):
|
shapes, intercepts, betas = self.convenience_split_params(params)
placeholder_bhhh = np.diag(-1 * np.ones(betas.shape[0]))
return placeholder_bhhh
|
Calculates the BHHH approximation of the Fisher Information Matrix for
this model / dataset. Note that this function name is INCORRECT with
regard to the actual actions performed. The Mixed Logit model uses a
placeholder for the BHHH approximation of the Fisher Information Matrix
because the BHHH approximation is already being used to approximate the
hessian.
This placeholder allows calculation of a value for the 'robust'
standard errors, even though such a value is not useful since it is not
correct...
|
f7687:c0:m7
|
def fit_mle(self,
            init_vals,
            num_draws,
            seed=None,
            constrained_pos=None,
            print_res=True,
            method="<STR_LIT>",
            loss_tol=<NUM_LIT>,
            gradient_tol=<NUM_LIT>,
            maxiter=1000,
            ridge=None,
            just_point=False,
            **kwargs):
|
kwargs_to_be_ignored = ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>"]
if any([x in kwargs for x in kwargs_to_be_ignored]):
    msg = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    raise ValueError(msg.format(kwargs_to_be_ignored) + msg_2)
self.optimization_method = method
self.ridge_param = ridge
if ridge is not None:
    warnings.warn(_ridge_warning_msg)
mapping_res = self.get_mappings_for_fit()
rows_to_mixers = mapping_res["rows_to_mixers"]
num_mixing_units = rows_to_mixers.shape[1]
draw_list = mlc.get_normal_draws(num_mixing_units,
                                 num_draws,
                                 len(self.mixing_pos),
                                 seed=seed)
self.design_3d = mlc.create_expanded_design_for_mixing(self.design,
                                                       draw_list,
                                                       self.mixing_pos,
                                                       rows_to_mixers)
zero_vector = np.zeros(init_vals.shape)
mixl_estimator = MixedEstimator(self,
                                mapping_res,
                                ridge,
                                zero_vector,
                                split_param_vec,
                                constrained_pos=constrained_pos)
mixl_estimator.check_length_of_initial_values(init_vals)
estimation_res = estimate(init_vals,
                          mixl_estimator,
                          method,
                          loss_tol,
                          gradient_tol,
                          maxiter,
                          print_res,
                          use_hessian=True,
                          just_point=just_point)
if not just_point:
    args = [mixl_estimator, estimation_res]
    estimation_res = add_mixl_specific_results_to_estimation_res(*args)
    self.store_fit_results(estimation_res)
    return None
else:
    return estimation_res
|
Parameters
----------
init_vals : 1D ndarray.
Should contain the initial values to start the optimization process
with. There should be one value for each utility coefficient and
shape parameter being estimated.
num_draws : int.
Should be greater than zero. Denotes the number of draws that we
are making from each normal distribution.
seed : int or None, optional.
If an int is passed, it should be greater than zero. Denotes the
value to be used in seeding the random generator used to generate
the draws from the normal distribution. Default == None.
constrained_pos : list or None, optional.
Denotes the positions of the array of estimated parameters that are
not to change from their initial values. If a list is passed, the
elements are to be integers where no such integer is greater than
`init_vals.size`. Default == None.
print_res : bool, optional.
Determines whether the timing and initial and final log likelihood
results will be printed as they are determined.
method : str, optional.
Should be a valid string which can be passed to
scipy.optimize.minimize. Determines the optimization algorithm
that is used for this problem.
loss_tol : float, optional.
Determines the tolerance on the difference in objective function
values from one iteration to the next which is needed to determine
convergence. Default = 1e-06.
gradient_tol : float, optional.
Determines the tolerance on the difference in gradient values from
one iteration to the next which is needed to determine convergence.
Default = 1e-06.
maxiter : int, optional.
Denotes the maximum number of iterations of the algorithm specified
by `method` that will be used to estimate the parameters of the
given model. Default == 1000.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If a float
is passed, then that float determines the ridge penalty for the
optimization. Default = None.
just_point : bool, optional.
Determines whether (True) or not (False) calculations that are non-
critical for obtaining the maximum likelihood point estimate will
be performed. If True, this function will return the results
dictionary from scipy.optimize. Default == False.
Returns
-------
None. Estimation results are saved to the model instance.
|
f7687:c1:m1
|
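A hypothetical usage sketch for this fit method; `mixl_model` is assumed to have been built through pylogit's model-creation interface with `mixing_vars` set, and the parameter count is illustrative only.

```python
import numpy as np

num_params = 5                     # index coefficients + mixing std. devs.
mixl_model.fit_mle(np.zeros(num_params),
                   num_draws=400,  # draws per normal mixing distribution
                   seed=26)
```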
def __filter_past_mappings(self,
                           past_mappings,
                           long_inclusion_array):
|
new_mappings = {}
for key in past_mappings:
    if past_mappings[key] is None:
        new_mappings[key] = None
    else:
        mask_array = long_inclusion_array[:, None]
        orig_map = past_mappings[key]
        new_map = orig_map.multiply(np.tile(mask_array,
                                            (1, orig_map.shape[1]))).A
        current_filter = (new_map.sum(axis=1) != 0)
        if current_filter.shape[0] > 0:
            current_filter = current_filter.ravel()
            new_map = new_map[current_filter, :]
        current_filter = (new_map.sum(axis=0) != 0)
        if current_filter.shape[0] > 0:
            current_filter = current_filter.ravel()
            new_map = new_map[:, current_filter]
        new_mappings[key] = csr_matrix(new_map)
return new_mappings
|
Parameters
----------
past_mappings : dict.
All elements should be None or compressed sparse row matrices from
scipy.sparse. The following keys should be in past_mappings:
- "rows_to_obs",
- "rows_to_alts",
- "chosen_rows_to_obs",
- "rows_to_nests",
- "rows_to_mixers"
The values that are not None should be 'mapping' matrices that
denote which rows of the past long-format design matrix belong to
which unique object such as unique observations, unique
alternatives, unique nests, unique 'mixing' units etc.
long_inclusion_array : 1D ndarray.
Should denote, via a `1`, the rows of the past mapping matrices
that are to be included in the filtered mapping matrices.
Returns
-------
new_mappings : dict.
The returned dictionary will be the same as `past_mappings` except
that all the mapping matrices will have been filtered according to
`long_inclusion_array`.
|
f7687:c1:m2
|
def panel_predict(self,
                  data,
                  num_draws,
                  return_long_probs=True,
                  choice_col=None,
                  seed=None):
|
if choice_col is None and not return_long_probs:
    msg = "<STR_LIT>"
    raise ValueError(msg)
dataframe = get_dataframe_from_data(data)
condition_1 = "intercept" in self.specification
condition_2 = "intercept" not in dataframe.columns
if condition_1 and condition_2:
    dataframe["intercept"] = 1.0
for column in [self.alt_id_col,
               self.obs_id_col,
               self.mixing_id_col]:
    if column is not None and column not in dataframe.columns:
        msg = "<STR_LIT>".format(column)
        raise ValueError(msg)
new_alt_IDs = dataframe[self.alt_id_col].values
new_design_res = create_design_matrix(dataframe,
                                      self.specification,
                                      self.alt_id_col,
                                      names=self.name_spec)
new_design_2d = new_design_res[0]
mapping_res = create_long_form_mappings(dataframe,
                                        self.obs_id_col,
                                        self.alt_id_col,
                                        choice_col=choice_col,
                                        nest_spec=self.nest_spec,
                                        mix_id_col=self.mixing_id_col)
new_rows_to_obs = mapping_res["rows_to_obs"]
new_rows_to_alts = mapping_res["rows_to_alts"]
new_chosen_to_obs = mapping_res["chosen_row_to_obs"]
new_rows_to_mixers = mapping_res["rows_to_mixers"]
new_index_coefs = self.coefs.values
new_intercepts = (self.intercepts.values if self.intercepts
                  is not None else None)
new_shape_params = (self.shapes.values if self.shapes
                    is not None else None)
num_mixing_units = new_rows_to_mixers.shape[1]
draw_list = mlc.get_normal_draws(num_mixing_units,
                                 num_draws,
                                 len(self.mixing_pos),
                                 seed=seed)
design_args = (new_design_2d,
               draw_list,
               self.mixing_pos,
               new_rows_to_mixers)
new_design_3d = mlc.create_expanded_design_for_mixing(*design_args)
prob_args = (new_index_coefs,
             new_design_3d,
             new_alt_IDs,
             new_rows_to_obs,
             new_rows_to_alts,
             mnl_utility_transform)
prob_kwargs = {"<STR_LIT>": new_intercepts,
               "<STR_LIT>": new_shape_params,
               "<STR_LIT>": True}
new_kernel_probs = general_calc_probabilities(*prob_args,
                                              **prob_kwargs)
weights_per_ind_per_draw = (1.0 / num_draws *
                            np.ones((new_rows_to_mixers.shape[1],
                                     num_draws)))
old_mixing_id_long = self.data[self.mixing_id_col].values
new_mixing_id_long = dataframe[self.mixing_id_col].values
orig_unique_id_idx_old = np.sort(np.unique(old_mixing_id_long,
                                           return_index=True)[1])
orig_unique_id_idx_new = np.sort(np.unique(new_mixing_id_long,
                                           return_index=True)[1])
orig_order_unique_ids_old = old_mixing_id_long[orig_unique_id_idx_old]
orig_order_unique_ids_new = new_mixing_id_long[orig_unique_id_idx_new]
old_repeat_mixing_id_idx = np.in1d(old_mixing_id_long,
                                   orig_order_unique_ids_new)
old_unique_mix_id_repeats = np.in1d(orig_order_unique_ids_old,
                                    orig_order_unique_ids_new)
new_unique_mix_id_repeats = np.in1d(orig_order_unique_ids_new,
                                    orig_order_unique_ids_old)
past_design_2d = self.design[old_repeat_mixing_id_idx, :]
orig_mappings = self.get_mappings_for_fit()
past_mappings = self.__filter_past_mappings(orig_mappings,
                                            old_repeat_mixing_id_idx)
past_draw_list = [x[new_unique_mix_id_repeats, :] for x in draw_list]
design_args = (past_design_2d,
               past_draw_list,
               self.mixing_pos,
               past_mappings["rows_to_mixers"])
past_design_3d = mlc.create_expanded_design_for_mixing(*design_args)
prob_args = (new_index_coefs,
             past_design_3d,
             self.alt_IDs[old_repeat_mixing_id_idx],
             past_mappings["rows_to_obs"],
             past_mappings["rows_to_alts"],
             mnl_utility_transform)
prob_kwargs = {"<STR_LIT>": True}
past_kernel_probs = mlc.general_calc_probabilities(*prob_args,
                                                   **prob_kwargs)
past_choices = self.choices[old_repeat_mixing_id_idx]
sequence_args = (past_kernel_probs,
                 past_choices,
                 past_mappings["rows_to_mixers"])
seq_kwargs = {"return_type": 'all'}
old_sequence_results = mlc.calc_choice_sequence_probs(*sequence_args,
                                                      **seq_kwargs)
past_sequence_probs_per_draw = old_sequence_results[1]
past_weights = (past_sequence_probs_per_draw /
                past_sequence_probs_per_draw.sum(axis=1)[:, None])
rel_new_ids = orig_order_unique_ids_new[new_unique_mix_id_repeats]
num_rel_new_id = rel_new_ids.shape[0]
new_unique_mix_id_repeats_2d = rel_new_ids.reshape((num_rel_new_id, 1))
rel_old_ids = orig_order_unique_ids_old[old_unique_mix_id_repeats]
num_rel_old_id = rel_old_ids.shape[0]
old_unique_mix_id_repeats_2d = rel_old_ids.reshape((1, num_rel_old_id))
new_to_old_repeat_ids = csr_matrix(new_unique_mix_id_repeats_2d ==
                                   old_unique_mix_id_repeats_2d)
past_weights = new_to_old_repeat_ids.dot(past_weights)
weights_per_ind_per_draw[new_unique_mix_id_repeats, :] = past_weights
weights_per_draw = new_rows_to_mixers.dot(weights_per_ind_per_draw)
pred_probs_long = (weights_per_draw * new_kernel_probs).sum(axis=1)
pred_probs_long = pred_probs_long.ravel()
if new_chosen_to_obs is None:
    chosen_probs = None
else:
    chosen_probs = new_chosen_to_obs.transpose().dot(pred_probs_long)
    if len(chosen_probs.shape) > 1 and chosen_probs.shape[1] > 1:
        pass
    else:
        chosen_probs = chosen_probs.ravel()
if return_long_probs and chosen_probs is not None:
    return chosen_probs, pred_probs_long
elif return_long_probs and chosen_probs is None:
    return pred_probs_long
elif chosen_probs is not None:
    return chosen_probs
|
Parameters
----------
data : string or pandas dataframe.
If string, data should be an absolute or relative path to a CSV
file containing the long format data to be predicted with this
choice model. Note long format has one row per available
alternative for each observation. If pandas dataframe, the
dataframe should be in long format.
num_draws : int.
Should be greater than zero. Denotes the number of draws being
made from each mixing distribution for the random coefficients.
return_long_probs : bool, optional.
Indicates whether or not the long format probabilities (a 1D numpy
array with one element per observation per available alternative)
should be returned. Default == True.
choice_col : str or None, optional.
Denotes the column in long_form which contains a one if the
alternative pertaining to the given row was the observed outcome
for the observation pertaining to the given row and a zero
otherwise. If passed, then an array of probabilities of just the
chosen alternative for each observation will be returned.
Default == None.
seed : int or None, optional.
If an int is passed, it should be greater than zero. Denotes the
value to be used in seeding the random generator used to generate
the draws from the mixing distributions of each random coefficient.
Default == None.
Returns
-------
numpy array or tuple of two numpy arrays.
- If `choice_col` is passed AND `return_long_probs` is True, then
the tuple `(chosen_probs, pred_probs_long)` is returned.
- If `return_long_probs` is True and `choice_col` is None, then
only `pred_probs_long` is returned.
- If `choice_col` is passed and `return_long_probs` is False then
`chosen_probs` is returned.
`chosen_probs` is a 1D numpy array of shape (num_observations,).
Each element is the probability of the corresponding observation
being associated with its realized outcome.
`pred_probs_long` is a 1D numpy array with one element per
observation per available alternative for that observation. Each
element is the probability of the corresponding observation being
associated with that row's corresponding alternative.
Notes
-----
It is NOT valid to have `choice_col == None` and
`return_long_probs == False`.
|
f7687:c1:m3
|
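A hypothetical usage sketch; `mixl_model` is a fitted mixed logit, `new_long_df` is a long-format dataframe, and the 'choice' column name is an assumption for illustration.

```python
chosen_probs, long_probs = mixl_model.panel_predict(new_long_df,
                                                    num_draws=400,
                                                    return_long_probs=True,
                                                    choice_col='choice',
                                                    seed=2)
```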
def create_estimation_obj(model_obj,
                          init_vals,
                          mappings=None,
                          ridge=None,
                          constrained_pos=None,
                          weights=None):
|
mapping_matrices = (model_obj.get_mappings_for_fit() if mappings is None
                    else mappings)
zero_vector = np.zeros(init_vals.shape[0])
internal_model_name = display_name_to_model_type[model_obj.model_type]
estimator_class = model_type_to_resources[internal_model_name]['<STR_LIT>']
current_split_func = model_type_to_resources[internal_model_name]['<STR_LIT>']
estimation_obj = estimator_class(model_obj,
                                 mapping_matrices,
                                 ridge,
                                 zero_vector,
                                 current_split_func,
                                 constrained_pos,
                                 weights=weights)
return estimation_obj
|
Should return a model estimation object corresponding to the model type of
the `model_obj`.
Parameters
----------
model_obj : an instance or subclass of the MNDC class.
init_vals : 1D ndarray.
The initial values to start the estimation process with. In the
following order, there should be one value for each nest coefficient,
shape parameter, outside intercept parameter, or index coefficient that
is being estimated.
mappings : OrderedDict or None, optional.
Keys will be `["rows_to_obs", "rows_to_alts", "chosen_row_to_obs",
"rows_to_nests"]`. The value for `rows_to_obs` will map the rows of
the `long_form` to the unique observations (on the columns) in
their order of appearance. The value for `rows_to_alts` will map
the rows of the `long_form` to the unique alternatives which are
possible in the dataset (on the columns), in sorted order--not
order of appearance. The value for `chosen_row_to_obs`, if not
None, will map the rows of the `long_form` that contain the chosen
alternatives to the specific observations those rows are associated
with (denoted by the columns). The value of `rows_to_nests`, if not
None, will map the rows of the `long_form` to the nest (denoted by
the column) that contains the row's alternative. Default == None.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If a
scalar is passed, then that scalar determines the ridge penalty for
the optimization. The scalar should be greater than or equal to
zero. Default `== None`.
constrained_pos : list or None, optional.
Denotes the positions of the array of estimated parameters that are
not to change from their initial values. If a list is passed, the
elements are to be integers where no such integer is greater than
`init_vals.size`. Default == None.
weights : 1D ndarray.
Should contain the weights for each corresponding observation for each
row of the long format data.
|
f7688:m0
|
def split_param_vec(param_vec, rows_to_alts, design, return_all_types=False):
|
num_shapes = rows_to_alts.shape[1]
num_index_coefs = design.shape[1]
shapes = param_vec[:num_shapes]
betas = param_vec[-1 * num_index_coefs:]
remaining_idx = param_vec.shape[0] - (num_shapes + num_index_coefs)
if remaining_idx > 0:
    intercepts = param_vec[num_shapes: num_shapes + remaining_idx]
else:
    intercepts = None
if return_all_types:
    return None, shapes, intercepts, betas
else:
    return shapes, intercepts, betas
|
Parameters
----------
param_vec : 1D ndarray.
Elements should all be ints, floats, or longs. Should have as many
elements as there are parameters being estimated.
rows_to_alts : 2D scipy sparse matrix.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset. All
elements should be zeros or ones.
design : 2D ndarray.
There should be one row per observation per available alternative.
There should be one column per utility coefficient being estimated. All
elements should be ints, floats, or longs.
return_all_types : bool, optional.
Determines whether or not a tuple of 4 elements will be returned (with
one element for the nest, shape, intercept, and index parameters for
this model). If False, a tuple of 3 elements will be returned, as
described below.
Returns
-------
`(shapes, intercepts, betas)` : tuple of 1D ndarrays.
The first element will be an array of the shape parameters for this
model. The second element will either be an array of the "outside"
intercept parameters for this model or None. The third element will be
an array of the index coefficients for this model.
Note
----
If `return_all_types == True` then the function will return a tuple of four
objects. In order, these objects will either be None or the arrays
corresponding to the nest, shape, intercept, and index parameters.
|
f7689:m0
|
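An illustrative check of the parameter ordering this splitter assumes (shapes first, then any outside intercepts, then index coefficients); the array sizes are made up and the splitter above is assumed to be importable as `split_param_vec`.

```python
import numpy as np
from scipy.sparse import csr_matrix

rows_to_alts = csr_matrix(np.ones((6, 3)))  # 3 alternatives
design = np.ones((6, 2))                    # 2 index coefficients
param_vec = np.arange(7.0)                  # 3 shapes + 2 intercepts + 2 betas
shapes, intercepts, betas = split_param_vec(param_vec, rows_to_alts, design)
assert shapes.tolist() == [0.0, 1.0, 2.0]
assert intercepts.tolist() == [3.0, 4.0]
assert betas.tolist() == [5.0, 6.0]
```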
def _scobit_utility_transform(systematic_utilities,
                              alt_IDs,
                              rows_to_alts,
                              shape_params,
                              intercept_params,
                              intercept_ref_pos=None,
                              *args, **kwargs):
|
if intercept_ref_pos is not None and intercept_params is not None:
    needed_idxs = range(intercept_params.shape[0] + 1)
    needed_idxs.remove(intercept_ref_pos)
    if len(intercept_params.shape) > 1 and intercept_params.shape[1] > 1:
        all_intercepts = np.zeros((rows_to_alts.shape[1],
                                   intercept_params.shape[1]))
        all_intercepts[needed_idxs, :] = intercept_params
    else:
        all_intercepts = np.zeros(rows_to_alts.shape[1])
        all_intercepts[needed_idxs] = intercept_params
else:
    all_intercepts = np.zeros(rows_to_alts.shape[1])
long_intercepts = rows_to_alts.dot(all_intercepts)
natural_shapes = np.exp(shape_params)
natural_shapes[np.isposinf(natural_shapes)] = max_comp_value
long_natural_shapes = rows_to_alts.dot(natural_shapes)
exp_neg_v = np.exp(-1 * systematic_utilities)
exp_neg_v[np.isposinf(exp_neg_v)] = max_comp_value
powered_term = np.power(1 + exp_neg_v, long_natural_shapes)
powered_term[np.isposinf(powered_term)] = max_comp_value
term_2 = np.log(powered_term - 1)
too_big_idx = np.isposinf(powered_term)
term_2[too_big_idx] = (-1 * long_natural_shapes[too_big_idx] *
                       systematic_utilities[too_big_idx])
transformations = long_intercepts - term_2
transformations[np.isposinf(transformations)] = max_comp_value
transformations[np.isneginf(transformations)] = -1 * max_comp_value
if len(transformations.shape) == 1:
    transformations = transformations[:, np.newaxis]
return transformations
|
Parameters
----------
systematic_utilities : 1D ndarray.
All elements should be ints, floats, or longs. Should contain the
systematic utilities of each observation per available alternative.
Note that this vector is formed by the dot product of the design matrix
with the vector of utility coefficients.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D scipy sparse matrix.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset. All
elements should be zeros or ones.
shape_params : None or 1D ndarray.
If an array, each element should be an int, float, or long. There
should be one value per shape parameter of the model being used.
intercept_params : None or 1D ndarray.
If an array, each element should be an int, float, or long. If J is the
total number of possible alternatives for the dataset being modeled,
there should be J-1 elements in the array.
intercept_ref_pos : int, or None, optional.
Specifies the index of the alternative, in the ordered array of unique
alternatives, that is not having its intercept parameter estimated (in
order to ensure identifiability). Should only be None if
`intercept_params` is None.
Returns
-------
transformations : 2D ndarray.
Should have shape `(systematic_utilities.shape[0], 1)`. The returned
array contains the transformed utility values for this model. All
elements should be ints, floats, or longs.
|
f7689:m1
|
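Stripped of the overflow guards, the transformation above reduces to the following core, shown here as a sketch for intuition; the `long_*` arguments stand for the row-level intercepts and natural shapes (i.e. `exp(shape_params)` mapped through `rows_to_alts`).

```python
import numpy as np

def scobit_transform_core_sketch(v, long_natural_shapes, long_intercepts):
    # Transformed utility: alpha - ln[(1 + e^(-V))^c - 1], with c the
    # natural shape parameter; no max_comp_value clipping here.
    return long_intercepts - np.log(np.power(1.0 + np.exp(-v),
                                             long_natural_shapes) - 1.0)
```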
def _scobit_transform_deriv_v(systematic_utilities,
                              alt_IDs,
                              rows_to_alts,
                              shape_params,
                              output_array=None,
                              *args, **kwargs):
|
curve_shapes = np.exp(shape_params)
curve_shapes[np.isposinf(curve_shapes)] = max_comp_value
long_curve_shapes = rows_to_alts.dot(curve_shapes)
exp_neg_v = np.exp(-1 * systematic_utilities)
powered_term = np.power(1 + exp_neg_v, long_curve_shapes)
small_powered_term = np.power(1 + exp_neg_v, long_curve_shapes - 1)
derivs = (long_curve_shapes *
          exp_neg_v *
          small_powered_term /
          (powered_term - 1))
too_big_idx = (np.isposinf(derivs) +
               np.isposinf(exp_neg_v) +
               np.isposinf(powered_term) +
               np.isposinf(small_powered_term)).astype(bool)
derivs[too_big_idx] = long_curve_shapes[too_big_idx]
too_small_idx = np.where((exp_neg_v == 0) | (powered_term - 1 == 0))
derivs[too_small_idx] = 1.0
output_array.data = derivs
assert output_array.shape == (systematic_utilities.shape[0],
                              systematic_utilities.shape[0])
return output_array
|
Parameters
----------
systematic_utilities : 1D ndarray.
All elements should be ints, floats, or longs. Should contain the
systematic utilities of each observation per available alternative.
Note that this vector is formed by the dot product of the design matrix
with the vector of utility coefficients.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D scipy sparse matrix.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset. All
elements should be zeros or ones.
shape_params : None or 1D ndarray.
If an array, each element should be an int, float, or long. There
should be one value per shape parameter of the model being used.
output_array : 2D scipy sparse array.
The array should be square and it should have
`systematic_utilities.shape[0]` rows. Its data is to be replaced with
the correct derivatives of the transformation vector with respect to
the vector of systematic utilities. This argument is NOT optional.
Returns
-------
output_array : 2D scipy sparse array.
The shape of the returned array is `(systematic_utilities.shape[0],
systematic_utilities.shape[0])`. The returned array specifies the
derivative of the transformed utilities with respect to the systematic
utilities. All elements are ints, floats, or longs.
|
f7689:m2
|
def _scobit_transform_deriv_shape(systematic_utilities,
                                  alt_IDs,
                                  rows_to_alts,
                                  shape_params,
                                  output_array=None,
                                  *args, **kwargs):
|
curve_shapes = np.exp(shape_params)
curve_shapes[np.isposinf(curve_shapes)] = max_comp_value
long_curve_shapes = rows_to_alts.dot(curve_shapes)
exp_neg_v = np.exp(-1 * systematic_utilities)
powered_term = np.power(1 + exp_neg_v, long_curve_shapes)
curve_derivs = (-1 * np.log1p(exp_neg_v) *
                powered_term / (powered_term - 1)) * long_curve_shapes
too_big_idx = np.where((powered_term - 1) == 0)
curve_derivs[too_big_idx] = -1
too_small_idx = np.isposinf(exp_neg_v)
curve_derivs[too_small_idx] = max_comp_value
shape_too_big_idx = np.where((np.abs(systematic_utilities) <= 10) &
                             np.isposinf(powered_term))
curve_derivs[shape_too_big_idx] = (-1 *
                                   np.log1p(exp_neg_v))[shape_too_big_idx]
output_array.data = curve_derivs
return output_array
|
Parameters
----------
systematic_utilities : 1D ndarray.
All elements should be ints, floats, or longs. Should contain the
systematic utilities of each observation per available alternative.
Note that this vector is formed by the dot product of the design matrix
with the vector of utility coefficients.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D scipy sparse matrix.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset. All
elements should be zeros or ones.
shape_params : None or 1D ndarray.
If an array, each element should be an int, float, or long. There
should be one value per shape parameter of the model being used.
output_array : 2D scipy sparse array.
The array should have shape `(systematic_utilities.shape[0],
shape_params.shape[0])`. Its data is to be replaced with the correct
derivatives of the transformation vector with respect to the vector of
shape parameters. This argument is NOT optional.
Returns
-------
output_array : 2D scipy sparse array.
The shape of the returned array is `(systematic_utilities.shape[0],
shape_params.shape[0])`. The returned array specifies the derivative of
the transformed utilities with respect to the shape parameters. All
elements are ints, floats, or longs.
|
f7689:m3
|
def _scobit_transform_deriv_alpha(systematic_utilities,
                                  alt_IDs,
                                  rows_to_alts,
                                  intercept_params,
                                  output_array=None,
                                  *args, **kwargs):
|
return output_array
|
Parameters
----------
systematic_utilities : 1D ndarray.
All elements should be ints, floats, or longs. Should contain the
systematic utilities of each observation per available alternative.
Note that this vector is formed by the dot product of the design matrix
with the vector of utility coefficients.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D scipy sparse matrix.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset. All
elements should be zeros or ones.
intercept_params : 1D ndarray or None.
If an array, each element should be an int, float, or long. For
identifiability, there should be J - 1 elements where J is the total
number of observed alternatives for this dataset.
output_array : None or 2D scipy sparse array.
If a sparse array is passed, it should contain the derivative of the
vector of transformed utilities with respect to the intercept
parameters outside of the index. This keyword argument will be
returned. If there are no intercept parameters outside of the index,
then `output_array` should equal None. If there are intercept
parameters outside of the index, then `output_array` should be
`rows_to_alts` with all of its columns except the column corresponding
to the alternative whose intercept is not being estimated, in order to
ensure identifiability.
Returns
-------
output_array.
|
f7689:m4
|
def create_calc_dh_dv(estimator):
|
dh_dv = diags(np.ones(estimator.design.shape[0]), 0, format='<STR_LIT>')
calc_dh_dv = partial(_scobit_transform_deriv_v, output_array=dh_dv)
return calc_dh_dv
|
Return the function that can be used in the various gradient and hessian
calculations to calculate the derivative of the transformation with respect
to the index.
Parameters
----------
estimator : an instance of the estimation.LogitTypeEstimator class.
Should contain a `design` attribute that is a 2D ndarray representing
the design matrix for this model and dataset.
Returns
-------
Callable.
Will accept a 1D array of systematic utility values, a 1D array of
alternative IDs, (shape parameters if there are any) and miscellaneous
args and kwargs. Should return a 2D array whose elements contain the
derivative of the transformed utility vector with respect to the vector
of systematic utilities. The dimensions of the returned vector should
be `(design.shape[0], design.shape[0])`.
|
f7689:m5
|
def create_calc_dh_d_shape(estimator):
|
dh_d_shape = estimator.rows_to_alts.copy()
calc_dh_d_shape = partial(_scobit_transform_deriv_shape,
                          output_array=dh_d_shape)
return calc_dh_d_shape
|
Return the function that can be used in the various gradient and hessian
calculations to calculate the derivative of the transformation with respect
to the shape parameters.
Parameters
----------
estimator : an instance of the estimation.LogitTypeEstimator class.
Should contain a `rows_to_alts` attribute that is a 2D scipy sparse
matrix that maps the rows of the `design` matrix to the alternatives
available in this dataset.
Returns
-------
Callable.
Will accept a 1D array of systematic utility values, a 1D array of
alternative IDs, (shape parameters if there are any) and miscellaneous
args and kwargs. Should return a 2D array whose elements contain the
derivative of the transformed utility vector with respect to the vector
of shape parameters. The dimensions of the returned vector should
be `(design.shape[0], num_alternatives)`.
|
f7689:m6
|
def create_calc_dh_d_alpha(estimator):
|
if estimator.intercept_ref_pos is not None:
    needed_idxs = range(estimator.rows_to_alts.shape[1])
    needed_idxs.remove(estimator.intercept_ref_pos)
    dh_d_alpha = (estimator.rows_to_alts
                  .copy()
                  .transpose()[needed_idxs, :]
                  .transpose())
else:
    dh_d_alpha = None
calc_dh_d_alpha = partial(_scobit_transform_deriv_alpha,
                          output_array=dh_d_alpha)
return calc_dh_d_alpha
|
Return the function that can be used in the various gradient and hessian
calculations to calculate the derivative of the transformation with respect
to the outside intercept parameters.
Parameters
----------
estimator : an instance of the estimation.LogitTypeEstimator class.
Should contain a `rows_to_alts` attribute that is a 2D scipy sparse
matrix that maps the rows of the `design` matrix to the alternatives
available in this dataset. Should also contain an `intercept_ref_pos`
attribute that is either None or an int. This attribute should denote
which intercept is not being estimated (in the case of outside
intercept parameters) for identification purposes.
Returns
-------
Callable.
Will accept a 1D array of systematic utility values, a 1D array of
alternative IDs, (shape parameters if there are any) and miscellaneous
args and kwargs. Should return a 2D array whose elements contain the
derivative of the transformed utility vector with respect to the vector
of outside intercepts. The dimensions of the returned vector should
be `(design.shape[0], num_alternatives - 1)`.
|
f7689:m7
|
def check_length_of_initial_values(self, init_values):
|
num_alts = self.rows_to_alts.shape[1]
num_index_coefs = self.design.shape[1]
if self.intercept_ref_pos is not None:
    assumed_param_dimensions = num_index_coefs + 2 * num_alts - 1
else:
    assumed_param_dimensions = num_index_coefs + num_alts
if init_values.shape[0] != assumed_param_dimensions:
    msg_1 = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    msg_3 = "<STR_LIT>"
    raise ValueError(msg_1 +
                     msg_2.format(assumed_param_dimensions) +
                     msg_3.format(init_values.shape[0]))
return None
|
Ensures that `init_values` is of the correct length. Raises a helpful
ValueError if otherwise.
Parameters
----------
init_values : 1D ndarray.
The initial values to start the optimization process with. There
should be one value for each index coefficient, outside intercept
parameter, and shape parameter being estimated.
Returns
-------
None.
|
f7689:c0:m1
|
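For example, with 4 alternatives and 3 index coefficients, the check above expects 3 + 2(4) - 1 = 10 initial values when outside intercepts are being estimated (4 shapes, 3 intercepts, 3 coefficients), and 3 + 4 = 7 otherwise.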
def fit_mle(self,
            init_vals,
            init_shapes=None,
            init_intercepts=None,
            init_coefs=None,
            print_res=True,
            method="<STR_LIT>",
            loss_tol=<NUM_LIT>,
            gradient_tol=<NUM_LIT>,
            maxiter=1000,
            ridge=None,
            constrained_pos=None,
            just_point=False,
            **kwargs):
|
self.optimization_method = method
self.ridge_param = ridge
if ridge is not None:
    warnings.warn(_ridge_warning_msg)
mapping_res = self.get_mappings_for_fit()
rows_to_alts = mapping_res["rows_to_alts"]
if init_vals is None and all([x is not None for x in [init_shapes,
                                                      init_coefs]]):
    num_alternatives = rows_to_alts.shape[1]
    try:
        assert init_shapes.shape[0] == num_alternatives
    except AssertionError:
        msg = "<STR_LIT>"
        raise ValueError(msg.format(init_shapes.shape,
                                    num_alternatives))
    try:
        assert init_coefs.shape[0] == self.design.shape[1]
    except AssertionError:
        msg = "<STR_LIT>"
        raise ValueError(msg.format(init_coefs.shape,
                                    self.design.shape[1]))
    try:
        if init_intercepts is not None:
            assert init_intercepts.shape[0] == (num_alternatives - 1)
    except AssertionError:
        msg = "<STR_LIT>"
        raise ValueError(msg.format(init_intercepts.shape,
                                    num_alternatives - 1))
    if init_intercepts is not None:
        init_vals = np.concatenate((init_shapes,
                                    init_intercepts,
                                    init_coefs), axis=0)
    else:
        init_vals = np.concatenate((init_shapes,
                                    init_coefs), axis=0)
elif init_vals is None:
    msg = "<STR_LIT>"
    msg_2 = "<STR_LIT>"
    raise ValueError(msg + msg_2)
zero_vector = np.zeros(init_vals.shape)
scobit_estimator = ScobitEstimator(self,
                                   mapping_res,
                                   ridge,
                                   zero_vector,
                                   split_param_vec,
                                   constrained_pos=constrained_pos)
scobit_estimator.set_derivatives()
scobit_estimator.check_length_of_initial_values(init_vals)
estimation_res = estimate(init_vals,
                          scobit_estimator,
                          method,
                          loss_tol,
                          gradient_tol,
                          maxiter,
                          print_res,
                          just_point=just_point)
if not just_point:
    self.store_fit_results(estimation_res)
    return None
else:
    return estimation_res
|
Parameters
----------
init_vals : 1D ndarray.
The initial values to start the optimization process with. There
should be one value for each index coefficient and shape
parameter being estimated. Shape parameters should come before
intercept parameters, which should come before index coefficients.
One can also pass None, and instead pass `init_shapes`, optionally
`init_intercepts` if `"intercept"` is not in the utility
specification, and `init_coefs`.
init_shapes : 1D ndarray or None, optional.
The initial values of the shape parameters. All elements should be
ints, floats, or longs. There should be one parameter per possible
alternative id in the dataset. This keyword argument will be
ignored if `init_vals` is not None. Default == None.
init_intercepts : 1D ndarray or None, optional.
The initial values of the intercept parameters. There should be one
parameter per possible alternative id in the dataset, minus one.
The passed values for this argument will be ignored if `init_vals`
is not None. This keyword argument should only be used if
`"intercept"` is not in the utility specification. Default == None.
init_coefs : 1D ndarray or None, optional.
The initial values of the index coefficients. There should be one
coefficient per index variable. The passed values for this argument
will be ignored if `init_vals` is not None. Default == None.
print_res : bool, optional.
Determines whether the timing and initial and final log likelihood
    results will be printed as they are determined.
Default `== True`.
method : str, optional.
Should be a valid string for scipy.optimize.minimize. Determines
the optimization algorithm that is used for this problem.
Default `== 'bfgs'`.
loss_tol : float, optional.
Determines the tolerance on the difference in objective function
values from one iteration to the next that is needed to determine
convergence. Default `== 1e-06`.
gradient_tol : float, optional.
Determines the tolerance on the difference in gradient values from
one iteration to the next which is needed to determine convergence.
Default `== 1e-06`.
maxiter : int, optional.
Determines the maximum number of iterations used by the optimizer.
Default `== 1000`.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If a
scalar is passed, then that scalar determines the ridge penalty for
the optimization. The scalar should be greater than or equal to
zero. Default `== None`.
constrained_pos : list or None, optional.
Denotes the positions of the array of estimated parameters that are
not to change from their initial values. If a list is passed, the
elements are to be integers where no such integer is greater than
    `init_vals.size`. Default == None.
just_point : bool, optional.
    Determines whether (True) or not (False) calculations that are
    non-critical for obtaining the maximum likelihood point estimate
    will be skipped. If True, this function will return the results
    dictionary from scipy.optimize. Default == False.
Returns
-------
None. Estimation results are saved to the model instance.
|
f7689:c1:m1
|
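A minimal usage sketch of `fit_mle` (hedged: it presumes a Scobit-type model object named `scobit_model` with 4 alternatives and 3 index variables; none of these names come from the record above):

    import numpy as np

    num_alts, num_index_coefs = 4, 3
    init_shapes = np.zeros(num_alts)          # one shape per alternative
    init_intercepts = np.zeros(num_alts - 1)  # one fewer than num_alts
    init_coefs = np.zeros(num_index_coefs)    # one per design column

    # Equivalent to the init_shapes/init_intercepts/init_coefs route:
    # concatenate in the required order (shapes, intercepts, coefficients).
    init_vals = np.concatenate((init_shapes, init_intercepts, init_coefs))
    scobit_model.fit_mle(init_vals, method="BFGS", maxiter=1000)
|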
def ensure_valid_model_type(specified_type, model_type_list):
|
if specified_type not in model_type_list:<EOL><INDENT>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>".format(model_type_list)<EOL>msg_3 = "<STR_LIT>".format(specified_type)<EOL>total_msg = "<STR_LIT:\n>".join([msg_1, msg_2, msg_3])<EOL>raise ValueError(total_msg)<EOL><DEDENT>return None<EOL>
|
Checks to make sure that `specified_type` is in `model_type_list` and
raises a helpful error if this is not the case.
Parameters
----------
specified_type : str.
Denotes the user-specified model type that is to be checked.
model_type_list : list of strings.
Contains all of the model types that are acceptable kwarg values.
Returns
-------
None.
|
f7691:m0
|
def create_choice_model(data,<EOL>alt_id_col,<EOL>obs_id_col,<EOL>choice_col,<EOL>specification,<EOL>model_type,<EOL>intercept_ref_pos=None,<EOL>shape_ref_pos=None,<EOL>names=None,<EOL>intercept_names=None,<EOL>shape_names=None,<EOL>nest_spec=None,<EOL>mixing_id_col=None,<EOL>mixing_vars=None):
|
<EOL>ensure_valid_model_type(model_type, valid_model_types)<EOL>model_kwargs = {"<STR_LIT>": intercept_ref_pos,<EOL>"<STR_LIT>": shape_ref_pos,<EOL>"<STR_LIT>": names,<EOL>"<STR_LIT>": intercept_names,<EOL>"<STR_LIT>": shape_names,<EOL>"<STR_LIT>": nest_spec,<EOL>"<STR_LIT>": mixing_id_col,<EOL>"<STR_LIT>": mixing_vars}<EOL>return model_type_to_class[model_type](data,<EOL>alt_id_col,<EOL>obs_id_col,<EOL>choice_col,<EOL>specification,<EOL>**model_kwargs)<EOL>
|
Parameters
----------
data : string or pandas dataframe.
If `data` is a string, it should be an absolute or relative path to
a CSV file containing the long format data for this choice model.
Note long format has one row per available alternative for each
observation. If `data` is a pandas dataframe, `data` should already
be in long format.
alt_id_col : string.
Should denote the column in data that contains the alternative
identifiers for each row.
obs_id_col : string.
Should denote the column in data that contains the observation
identifiers for each row.
choice_col : string.
Should denote the column in data which contains the ones and zeros
that denote whether or not the given row corresponds to the chosen
alternative for the given individual.
specification : OrderedDict.
    Keys are a proper subset of the columns in `data`. Values are
    either a list or a single string, `"all_diff"` or `"all_same"`. If a list,
the elements should be:
1) single objects that are within the alternative ID column of
`long_form_df`
2) lists of objects that are within the alternative ID column of
`long_form_df`. For each single object in the list, a unique
column will be created (i.e. there will be a unique
coefficient for that variable in the corresponding utility
equation of the corresponding alternative). For lists within
the `specification_dict` values, a single column will be
           created for all the alternatives within the iterable (i.e. there
will be one common coefficient for the variables in the
iterable).
model_type : string.
Denotes the model type of the choice_model being instantiated.
Should be one of the following values:
- "MNL"
- "Asym"
- "Cloglog"
- "Scobit"
- "Uneven"
- "Nested Logit"
- "Mixed Logit"
intercept_ref_pos : int, optional.
Valid only when the intercepts being estimated are not part of the
index. Specifies the alternative in the ordered array of unique
alternative ids whose intercept or alternative-specific constant is
not estimated, to ensure model identifiability. Default == None.
shape_ref_pos : int, optional.
Specifies the alternative in the ordered array of unique
alternative ids whose shape parameter is not estimated, to ensure
model identifiability. Default == None.
names : OrderedDict or None, optional.
Should have the same keys as `specification`. For each key:
- if the corresponding value in `specification` is
"all_same", then there should be a single string as the value
in names.
- if the corresponding value in `specification` is "all_diff",
then there should be a list of strings as the value in names.
There should be one string in the value in names for each
possible alternative.
- if the corresponding value in `specification` is a list, then
there should be a list of strings as the value in names.
        There should be one string in the value in names per item in the
value in `specification`.
Default == None.
intercept_names : list of strings or None, optional.
If a list is passed, then the list should have the same number of
elements as there are possible alternatives in data, minus 1. Each
element of the list should be the name of the corresponding
alternative's intercept term, in sorted order of the possible
alternative IDs. If None is passed, the resulting names that are
shown in the estimation results will be
["Outside_ASC_{}".format(x) for x in shape_names]. Default = None.
shape_names : list of strings or None, optional.
If a list is passed, then the list should have the same number of
elements as there are possible alternative IDs in data. Each
element of the list should be a string denoting the name of the
corresponding alternative, in sorted order of the possible
alternative IDs. The resulting names which are shown in the
estimation results will be
["shape_{}".format(x) for x in shape_names]. Default = None.
nest_spec : OrderedDict or None, optional.
Keys are strings that define the name of the nests. Values are
lists of alternative ids, denoting which alternatives belong to
    which nests. Each alternative id may only be associated with a single
nest! Default == None.
mixing_id_col : str, or None, optional.
Should be a column heading in `data`. Should denote the column in
`data` which contains the identifiers of the units of observation
over which the coefficients of the model are thought to be randomly
distributed. If `model_type == "Mixed Logit"`, then `mixing_id_col`
must be passed. Default == None.
mixing_vars : list, or None, optional.
All elements of the list should be strings. Each string should be
    present in the values of `names.values()` and their associated
variables should only be index variables (i.e. part of the design
matrix). If `model_type == "Mixed Logit"`, then `mixing_vars` must
be passed. Default == None.
Returns
-------
model_obj : instantiation of the Choice Model Class corresponding
to the model type passed as the function argument. The returned
object will have been instantiated with the arguments passed to
this function.
|
f7691:m1
|
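A minimal usage sketch of `create_choice_model` (hedged: the dataframe `long_df` and its column names are illustrative assumptions, not part of the record above):

    from collections import OrderedDict

    spec = OrderedDict()
    spec["travel_time"] = "all_same"      # one coefficient for all alts
    names = OrderedDict()
    names["travel_time"] = "Travel Time"

    mnl = create_choice_model(data=long_df,
                              alt_id_col="alt_id",
                              obs_id_col="obs_id",
                              choice_col="choice",
                              specification=spec,
                              model_type="MNL",
                              names=names)
|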
def get_dataframe_from_data(data):
|
if isinstance(data, str):<EOL><INDENT>if data.endswith("<STR_LIT>"):<EOL><INDENT>dataframe = pd.read_csv(data)<EOL><DEDENT>else:<EOL><INDENT>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise ValueError(msg_1.format(data) + msg_2)<EOL><DEDENT><DEDENT>elif isinstance(data, pd.DataFrame):<EOL><INDENT>dataframe = data<EOL><DEDENT>else:<EOL><INDENT>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise TypeError(msg_1.format(type(data)) + msg_2)<EOL><DEDENT>return dataframe<EOL>
|
Parameters
----------
data : string or pandas dataframe.
If string, data should be an absolute or relative path to a CSV file
containing the long format data for this choice model. Note long format
has one row per available alternative for each observation. If pandas
dataframe, the dataframe should be the long format data for the choice
model.
Returns
-------
dataframe : pandas dataframe of the long format data for the choice model.
|
f7692:m0
|
def ensure_object_is_ordered_dict(item, title):
|
assert isinstance(title, str)<EOL>if not isinstance(item, OrderedDict):<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise TypeError(msg.format(title, type(item)))<EOL><DEDENT>return None<EOL>
|
Checks that the item is an OrderedDict. If not, raises TypeError.
|
f7692:m1
|
def ensure_object_is_string(item, title):
|
assert isinstance(title, str)<EOL>if not isinstance(item, str):<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise TypeError(msg.format(title, type(item)))<EOL><DEDENT>return None<EOL>
|
Checks that the item is a string. If not, raises TypeError.
|
f7692:m2
|
def ensure_object_is_ndarray(item, title):
|
assert isinstance(title, str)<EOL>if not isinstance(item, np.ndarray):<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise TypeError(msg.format(title, type(item)))<EOL><DEDENT>return None<EOL>
|
Ensures that the item is a numpy ndarray. Raises a helpful TypeError
otherwise.
|
f7692:m3
|
def ensure_columns_are_in_dataframe(columns,<EOL>dataframe,<EOL>col_title='<STR_LIT>',<EOL>data_title='<STR_LIT:data>'):
|
<EOL>assert isinstance(columns, Iterable)<EOL>assert isinstance(dataframe, pd.DataFrame)<EOL>assert isinstance(col_title, str)<EOL>assert isinstance(data_title, str)<EOL>problem_cols = [col for col in columns if col not in dataframe.columns]<EOL>if problem_cols != []:<EOL><INDENT>if col_title == '<STR_LIT>':<EOL><INDENT>msg = "<STR_LIT>"<EOL>final_msg = msg.format(problem_cols, data_title)<EOL><DEDENT>else:<EOL><INDENT>msg = "<STR_LIT>"<EOL>final_msg = msg.format(col_title, data_title, problem_cols)<EOL><DEDENT>raise ValueError(final_msg)<EOL><DEDENT>return None<EOL>
|
Checks whether each column in `columns` is in `dataframe`. Raises
ValueError if any of the columns are not in the dataframe.
Parameters
----------
columns : list of strings.
Each string should represent a column heading in dataframe.
dataframe : pandas DataFrame.
Dataframe containing the data for the choice model to be estimated.
col_title : str, optional.
Denotes the title of the columns that were passed to the function.
data_title : str, optional.
Denotes the title of the dataframe that is being checked to see whether
it contains the passed columns. Default == 'data'
Returns
-------
None.
|
f7692:m4
|
def check_argument_type(long_form, specification_dict):
|
if not isinstance(long_form, pd.DataFrame):<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise TypeError(msg.format(type(long_form)))<EOL><DEDENT>ensure_object_is_ordered_dict(specification_dict, "<STR_LIT>")<EOL>return None<EOL>
|
Ensures that long_form is a pandas dataframe and that specification_dict
is an OrderedDict, raising a TypeError otherwise.
Parameters
----------
long_form : pandas dataframe.
Contains one row for each available alternative, for each observation.
specification_dict : OrderedDict.
Keys are a proper subset of the columns in `long_form_df`. Values are
either a list or a single string, `"all_diff"` or `"all_same"`. If a
list, the elements should be:
- single objects that are within the alternative ID column of
`long_form_df`
- lists of objects that are within the alternative ID column of
`long_form_df`. For each single object in the list, a unique
column will be created (i.e. there will be a unique coefficient
for that variable in the corresponding utility equation of the
corresponding alternative). For lists within the
`specification_dict` values, a single column will be created for
        all the alternatives within the iterable (i.e. there will be one
common coefficient for the variables in the iterable).
Returns
-------
None.
|
f7692:m5
|
def ensure_alt_id_in_long_form(alt_id_col, long_form):
|
if alt_id_col not in long_form.columns:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg.format(alt_id_col))<EOL><DEDENT>return None<EOL>
|
Ensures alt_id_col is in long_form, and raises a ValueError if not.
Parameters
----------
alt_id_col : str.
Column name which denotes the column in `long_form` that contains the
alternative ID for each row in `long_form`.
long_form : pandas dataframe.
Contains one row for each available alternative, for each observation.
Returns
-------
None.
|
f7692:m6
|
def ensure_specification_cols_are_in_dataframe(specification, dataframe):
|
<EOL>try:<EOL><INDENT>assert isinstance(specification, OrderedDict)<EOL><DEDENT>except AssertionError:<EOL><INDENT>raise TypeError("<STR_LIT>")<EOL><DEDENT>assert isinstance(dataframe, pd.DataFrame)<EOL>problem_cols = []<EOL>dataframe_cols = dataframe.columns<EOL>for key in specification:<EOL><INDENT>if key not in dataframe_cols:<EOL><INDENT>problem_cols.append(key)<EOL><DEDENT><DEDENT>if problem_cols != []:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg.format(problem_cols))<EOL><DEDENT>return None<EOL>
|
Checks whether each column in `specification` is in `dataframe`. Raises
ValueError if any of the columns are not in the dataframe.
Parameters
----------
specification : OrderedDict.
Keys are a proper subset of the columns in `data`. Values are either a
list or a single string, "all_diff" or "all_same". If a list, the
elements should be:
- single objects that are in the alternative ID column of `data`
- lists of objects that are within the alternative ID column of
`data`. For each single object in the list, a unique column will
be created (i.e. there will be a unique coefficient for that
variable in the corresponding utility equation of the
corresponding alternative). For lists within the
`specification` values, a single column will be created for all
the alternatives within the iterable (i.e. there will be one
common coefficient for the variables in the iterable).
dataframe : pandas DataFrame.
Dataframe containing the data for the choice model to be estimated.
Returns
-------
None.
|
f7692:m7
|
def check_type_and_values_of_specification_dict(specification_dict,<EOL>unique_alternatives):
|
for key in specification_dict:<EOL><INDENT>specification = specification_dict[key]<EOL>if isinstance(specification, str):<EOL><INDENT>if specification not in ["<STR_LIT>", "<STR_LIT>"]:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg.format(key))<EOL><DEDENT><DEDENT>elif isinstance(specification, list):<EOL><INDENT>for group in specification:<EOL><INDENT>group_is_list = isinstance(group, list)<EOL>if group_is_list:<EOL><INDENT>for group_item in group:<EOL><INDENT>if isinstance(group_item, list):<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>msg_3 = "<STR_LIT>"<EOL>total_msg = msg.format(key) + msg_2 + msg_3<EOL>raise ValueError(total_msg)<EOL><DEDENT>elif group_item not in unique_alternatives:<EOL><INDENT>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>total_msg = (msg_1.format(group_item, group, key) +<EOL>msg_2)<EOL>raise ValueError(total_msg)<EOL><DEDENT><DEDENT><DEDENT>else:<EOL><INDENT>if group not in unique_alternatives:<EOL><INDENT>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise ValueError(msg_1.format(group, key) + msg_2)<EOL><DEDENT><DEDENT><DEDENT><DEDENT>else:<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise TypeError(msg.format(key) + msg_2)<EOL><DEDENT><DEDENT>return None<EOL>
|
Verifies that the values of specification_dict have the correct type, have
the correct structure, and have valid values (i.e. are actually in the set
of possible alternatives). Will raise various errors if / when appropriate.
Parameters
----------
specification_dict : OrderedDict.
Keys are a proper subset of the columns in `long_form_df`. Values are
either a list or a single string, `"all_diff"` or `"all_same"`. If a
list, the elements should be:
- single objects that are within the alternative ID column of
`long_form_df`
- lists of objects that are within the alternative ID column of
`long_form_df`. For each single object in the list, a unique
column will be created (i.e. there will be a unique coefficient
for that variable in the corresponding utility equation of the
corresponding alternative). For lists within the
`specification_dict` values, a single column will be created for
        all the alternatives within the iterable (i.e. there will be one
common coefficient for the variables in the iterable).
unique_alternatives : 1D ndarray.
Should contain the possible alternative id's for this dataset.
Returns
-------
None.
|
f7692:m8
|
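A minimal sketch of a `specification_dict` that passes these checks, assuming the dataset's alternative ids are 1, 2, and 3 (the column names are illustrative):

    from collections import OrderedDict
    import numpy as np

    unique_alternatives = np.array([1, 2, 3])
    spec = OrderedDict()
    spec["intercept"] = [1, 2]         # separate coefficient for alts 1, 2
    spec["travel_time"] = "all_same"   # one coefficient shared by all alts
    spec["income"] = [[1, 2], 3]       # alts 1 and 2 share one coefficient

    check_type_and_values_of_specification_dict(spec, unique_alternatives)
|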
def check_keys_and_values_of_name_dictionary(names,<EOL>specification_dict,<EOL>num_alts):
|
if names.keys() != specification_dict.keys():<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg)<EOL><DEDENT>for key in names:<EOL><INDENT>specification = specification_dict[key]<EOL>name_object = names[key]<EOL>if isinstance(specification, list):<EOL><INDENT>try:<EOL><INDENT>assert isinstance(name_object, list)<EOL>assert len(name_object) == len(specification)<EOL>assert all([isinstance(x, str) for x in name_object])<EOL><DEDENT>except AssertionError:<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>msg_3 = "<STR_LIT>"<EOL>raise ValueError(msg.format(key) + msg_2 + msg_3)<EOL><DEDENT><DEDENT>else:<EOL><INDENT>if specification == "<STR_LIT>":<EOL><INDENT>if not isinstance(name_object, str):<EOL><INDENT>msg = "<STR_LIT>".format(key)<EOL>raise TypeError(msg)<EOL><DEDENT><DEDENT>else: <EOL><INDENT>try:<EOL><INDENT>assert isinstance(name_object, list)<EOL>assert len(name_object) == num_alts<EOL><DEDENT>except AssertionError:<EOL><INDENT>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>msg = (msg_1.format(key, num_alts) + msg_2)<EOL>raise ValueError(msg)<EOL><DEDENT><DEDENT><DEDENT><DEDENT>return None<EOL>
|
Check the validity of the keys and values in the names dictionary.
Parameters
----------
names : OrderedDict, optional.
Should have the same keys as `specification_dict`. For each key:
- if the corresponding value in `specification_dict` is "all_same",
then there should be a single string as the value in names.
- if the corresponding value in `specification_dict` is "all_diff",
then there should be a list of strings as the value in names.
There should be one string in the value in names for each
possible alternative.
- if the corresponding value in `specification_dict` is a list,
then there should be a list of strings as the value in names.
        There should be one string in the value in names per item in the
value in `specification_dict`.
specification_dict : OrderedDict.
Keys are a proper subset of the columns in `long_form_df`. Values are
either a list or a single string, `"all_diff"` or `"all_same"`. If a
list, the elements should be:
- single objects that are within the alternative ID column of
`long_form_df`
- lists of objects that are within the alternative ID column of
`long_form_df`. For each single object in the list, a unique
column will be created (i.e. there will be a unique coefficient
for that variable in the corresponding utility equation of the
corresponding alternative). For lists within the
`specification_dict` values, a single column will be created for
        all the alternatives within the iterable (i.e. there will be one
common coefficient for the variables in the iterable).
num_alts : int.
The number of alternatives in this dataset's universal choice set.
Returns
-------
None.
|
f7692:m9
|
def ensure_all_columns_are_used(num_vars_accounted_for,<EOL>dataframe,<EOL>data_title='<STR_LIT>'):
|
dataframe_vars = set(dataframe.columns.tolist())<EOL>num_dataframe_vars = len(dataframe_vars)<EOL>if num_vars_accounted_for == num_dataframe_vars:<EOL><INDENT>pass<EOL><DEDENT>elif num_vars_accounted_for < num_dataframe_vars:<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>msg_3 = "<STR_LIT>"<EOL>warnings.warn(msg.format(num_dataframe_vars, data_title) +<EOL>msg_2 + msg_3.format(num_vars_accounted_for))<EOL><DEDENT>else: <EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>msg_3 = "<STR_LIT>"<EOL>warnings.warn(msg +<EOL>msg_2.format(num_vars_accounted_for) +<EOL>msg_3.format(data_title, num_dataframe_vars))<EOL><DEDENT>return None<EOL>
|
Ensure that all of the columns in `dataframe` are accounted for by
`num_vars_accounted_for`. Will issue a helpful UserWarning otherwise.
Parameters
----------
num_vars_accounted_for : int.
Denotes the number of variables used in one's function.
dataframe : pandas dataframe.
Contains all of the data to be converted from one format to another.
data_title : str, optional.
Denotes the title by which `dataframe` should be referred in the
UserWarning.
Returns
-------
None.
|
f7692:m10
|
def check_dataframe_for_duplicate_records(obs_id_col, alt_id_col, df):
|
if df.duplicated(subset=[obs_id_col, alt_id_col]).any():<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg)<EOL><DEDENT>return None<EOL>
|
Checks a cross-sectional dataframe of long-format data for duplicate
observations. Duplicate observations are defined as rows with the same
observation id value and the same alternative id value.
Parameters
----------
obs_id_col : str.
Denotes the column in `df` that contains the observation ID
values for each row.
alt_id_col : str.
Denotes the column in `df` that contains the alternative ID
values for each row.
df : pandas dataframe.
The dataframe of long format data that is to be checked for duplicates.
Returns
-------
None.
|
f7692:m11
|
def ensure_num_chosen_alts_equals_num_obs(obs_id_col, choice_col, df):
|
num_obs = df[obs_id_col].unique().shape[<NUM_LIT:0>]<EOL>num_choices = df[choice_col].sum()<EOL>if num_choices < num_obs:<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise ValueError(msg + msg_2)<EOL><DEDENT>if num_choices > num_obs:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg)<EOL><DEDENT>return None<EOL>
|
Checks that the total number of recorded choices equals the total number of
observations. If this is not the case, raise helpful ValueError messages.
Parameters
----------
obs_id_col : str.
Denotes the column in `df` that contains the observation ID values for
each row.
choice_col : str.
    Denotes the column in `df` that contains a one if the
alternative pertaining to the given row was the observed outcome for
the observation pertaining to the given row and a zero otherwise.
df : pandas dataframe.
The dataframe whose choices and observations will be checked.
Returns
-------
None.
|
f7692:m12
|
def check_type_and_values_of_alt_name_dict(alt_name_dict, alt_id_col, df):
|
if not isinstance(alt_name_dict, dict):<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise TypeError(msg.format(type(alt_name_dict)))<EOL><DEDENT>if not all([x in df[alt_id_col].values for x in alt_name_dict.keys()]):<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise ValueError(msg + msg_2)<EOL><DEDENT>return None<EOL>
|
Ensures that `alt_name_dict` is a dictionary and that its keys are in the
alternative id column of `df`. Raises helpful errors if either condition
is not met.
Parameters
----------
alt_name_dict : dict.
A dictionary whose keys are the possible values in
`df[alt_id_col].unique()`. The values should be the name that one
wants to associate with each alternative id.
alt_id_col : str.
Denotes the column in `df` that contains the alternative ID values for
each row.
df : pandas dataframe.
The dataframe of long format data that contains the alternative IDs.
Returns
-------
None.
|
f7692:m13
|
def ensure_ridge_is_scalar_or_none(ridge):
|
if (ridge is not None) and not isinstance(ridge, Number):<EOL><INDENT>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>".format(type(ridge))<EOL>raise TypeError(msg_1 + msg_2)<EOL><DEDENT>return None<EOL>
|
Ensures that `ridge` is either None or a scalar value. Raises a helpful
TypeError otherwise.
Parameters
----------
ridge : int, float, long, or None.
Scalar value or None, determining the L2-ridge regression penalty.
Returns
-------
None.
|
f7692:m14
|
def create_design_matrix(long_form,<EOL>specification_dict,<EOL>alt_id_col,<EOL>names=None):
|
<EOL>check_argument_type(long_form, specification_dict)<EOL>ensure_alt_id_in_long_form(alt_id_col, long_form)<EOL>ensure_specification_cols_are_in_dataframe(specification_dict, long_form)<EOL>unique_alternatives = np.sort(long_form[alt_id_col].unique())<EOL>num_alternatives = len(unique_alternatives)<EOL>check_type_and_values_of_specification_dict(specification_dict,<EOL>unique_alternatives)<EOL>if names is not None:<EOL><INDENT>ensure_object_is_ordered_dict(names, "<STR_LIT>")<EOL>check_keys_and_values_of_name_dictionary(names,<EOL>specification_dict,<EOL>num_alternatives)<EOL><DEDENT>independent_vars = []<EOL>var_names = []<EOL>for variable in specification_dict:<EOL><INDENT>specification = specification_dict[variable]<EOL>if specification == "<STR_LIT>":<EOL><INDENT>independent_vars.append(long_form[variable].values)<EOL>var_names.append(variable)<EOL><DEDENT>elif specification == "<STR_LIT>":<EOL><INDENT>for alt in unique_alternatives:<EOL><INDENT>independent_vars.append((long_form[alt_id_col] == alt).values *<EOL>long_form[variable].values)<EOL>var_names.append("<STR_LIT>".format(variable, alt))<EOL><DEDENT><DEDENT>else:<EOL><INDENT>for group in specification:<EOL><INDENT>if isinstance(group, list):<EOL><INDENT>independent_vars.append(<EOL>long_form[alt_id_col].isin(group).values *<EOL>long_form[variable].values)<EOL>var_names.append("<STR_LIT>".format(variable, str(group)))<EOL><DEDENT>else: <EOL><INDENT>new_col_vals = ((long_form[alt_id_col] == group).values *<EOL>long_form[variable].values)<EOL>independent_vars.append(new_col_vals)<EOL>var_names.append("<STR_LIT>".format(variable, group))<EOL><DEDENT><DEDENT><DEDENT><DEDENT>design_matrix = np.hstack((x[:, None] for x in independent_vars))<EOL>if names is not None:<EOL><INDENT>var_names = []<EOL>for value in names.values():<EOL><INDENT>if isinstance(value, str):<EOL><INDENT>var_names.append(value)<EOL><DEDENT>else:<EOL><INDENT>for inner_name in value:<EOL><INDENT>var_names.append(inner_name)<EOL><DEDENT><DEDENT><DEDENT><DEDENT>return design_matrix, var_names<EOL>
|
Parameters
----------
long_form : pandas dataframe.
Contains one row for each available alternative, for each observation.
specification_dict : OrderedDict.
Keys are a proper subset of the columns in `long_form_df`. Values are
either a list or a single string, `"all_diff"` or `"all_same"`. If a
list, the elements should be:
- single objects that are within the alternative ID column of
`long_form_df`
- lists of objects that are within the alternative ID column of
`long_form_df`. For each single object in the list, a unique
column will be created (i.e. there will be a unique coefficient
for that variable in the corresponding utility equation of the
corresponding alternative). For lists within the
`specification_dict` values, a single column will be created for
        all the alternatives within the iterable (i.e. there will be one
common coefficient for the variables in the iterable).
alt_id_col : str.
Column name which denotes the column in `long_form` that contains the
alternative ID for each row in `long_form`.
names : OrderedDict, optional.
Should have the same keys as `specification_dict`. For each key:
- if the corresponding value in `specification_dict` is "all_same",
then there should be a single string as the value in names.
- if the corresponding value in `specification_dict` is "all_diff",
then there should be a list of strings as the value in names.
There should be one string in the value in names for each
possible alternative.
- if the corresponding value in `specification_dict` is a list,
then there should be a list of strings as the value in names.
        There should be one string in the value in names per item in the
value in `specification_dict`.
Default == None.
Returns
-------
design_matrix, var_names: tuple with two elements.
First element is the design matrix, a numpy array with some number of
columns and as many rows as are in `long_form`. Each column corresponds
to a coefficient to be estimated. The second element is a list of
strings denoting the names of each coefficient, with one variable name
per column in the design matrix.
|
f7692:m15
|
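A minimal sketch of building a design matrix (hedged: the dataframe and column names are illustrative):

    from collections import OrderedDict
    import pandas as pd

    long_df = pd.DataFrame({"obs_id": [1, 1, 1, 2, 2, 2],
                            "alt_id": [1, 2, 3, 1, 2, 3],
                            "travel_time": [10., 20., 30., 15., 25., 35.]})
    spec = OrderedDict()
    spec["travel_time"] = [[1, 2], 3]  # alts 1 and 2 share a coefficient

    design, names = create_design_matrix(long_df, spec, "alt_id")
    # design has two columns: travel_time zeroed outside alts 1 and 2,
    # and travel_time zeroed outside alt 3.
|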
def get_original_order_unique_ids(id_array):
|
assert isinstance(id_array, np.ndarray)<EOL>assert len(id_array.shape) == <NUM_LIT:1><EOL>original_unique_id_indices =np.sort(np.unique(id_array, return_index=True)[<NUM_LIT:1>])<EOL>original_order_unique_ids = id_array[original_unique_id_indices]<EOL>return original_order_unique_ids<EOL>
|
Get the unique id's of id_array, in their original order of appearance.
Parameters
----------
id_array : 1D ndarray.
Should contain the ids that we want to extract the unique values from.
Returns
-------
original_order_unique_ids : 1D ndarray.
Contains the unique ids from `id_array`, in their original order of
appearance.
|
f7692:m16
|
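A quick illustration of the order-of-appearance behavior:

    import numpy as np

    ids = np.array([4, 4, 2, 7, 2])
    get_original_order_unique_ids(ids)  # array([4, 2, 7])
|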
def create_row_to_some_id_col_mapping(id_array):
|
<EOL>original_order_unique_ids = get_original_order_unique_ids(id_array)<EOL>rows_to_ids = (id_array[:, None] ==<EOL>original_order_unique_ids[None, :]).astype(int)<EOL>return rows_to_ids<EOL>
|
Parameters
----------
id_array : 1D ndarray.
All elements of the array should be ints representing some id related
to the corresponding row.
Returns
-------
rows_to_ids : 2D ndarray of ints.
    Will map each row of id_array to the unique values of `id_array`. The
    columns of the returned array will correspond to the unique
    values of `id_array`, in the order of appearance for each of these
    unique values.
|
f7692:m17
|
def create_sparse_mapping(id_array, unique_ids=None):
|
<EOL>if unique_ids is None:<EOL><INDENT>unique_ids = get_original_order_unique_ids(id_array)<EOL><DEDENT>assert isinstance(unique_ids, np.ndarray)<EOL>assert isinstance(id_array, np.ndarray)<EOL>assert unique_ids.ndim == <NUM_LIT:1><EOL>assert id_array.ndim == <NUM_LIT:1><EOL>represented_ids = np.in1d(id_array, unique_ids)<EOL>num_non_zero_rows = represented_ids.sum()<EOL>num_rows = id_array.size<EOL>num_cols = unique_ids.size<EOL>data = np.ones(num_non_zero_rows, dtype=int)<EOL>row_indices = np.arange(num_rows)[represented_ids]<EOL>unique_id_dict = dict(zip(unique_ids, np.arange(num_cols)))<EOL>col_indices =np.array([unique_id_dict[x] for x in id_array[represented_ids]])<EOL>return csr_matrix((data, (row_indices, col_indices)),<EOL>shape=(num_rows, num_cols))<EOL>
|
Will create a scipy.sparse compressed-sparse-row matrix that maps
each row represented by an element in id_array to the corresponding
value of the unique ids in id_array.
Parameters
----------
id_array : 1D ndarray of ints.
Each element should represent some id related to the corresponding row.
unique_ids : 1D ndarray of ints, or None, optional.
If not None, each element should be present in `id_array`. The elements
in `unique_ids` should be present in the order in which one wishes them
to appear in the columns of the resulting sparse array. For the
`row_to_obs` and `row_to_mixers` mappings, this should be the order of
appearance in `id_array`. If None, then the unique_ids will be created
from `id_array`, in the order of their appearance in `id_array`.
Returns
-------
mapping : 2D scipy.sparse CSR matrix.
Will contain only zeros and ones. `mapping[i, j] == 1` where
`id_array[i] == unique_ids[j]`. The id's corresponding to each column
are given by `unique_ids`. The rows correspond to the elements of
`id_array`.
|
f7692:m18
|
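A minimal sketch of the mapping's structure (the ids are illustrative):

    import numpy as np

    ids = np.array([10, 10, 20, 20, 20])
    mapping = create_sparse_mapping(ids)
    mapping.A
    # array([[1, 0],
    #        [1, 0],
    #        [0, 1],
    #        [0, 1],
    #        [0, 1]])
|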
def create_long_form_mappings(long_form,<EOL>obs_id_col,<EOL>alt_id_col,<EOL>choice_col=None,<EOL>nest_spec=None,<EOL>mix_id_col=None,<EOL>dense=False):
|
<EOL>obs_id_values = long_form[obs_id_col].values<EOL>alt_id_values = long_form[alt_id_col].values<EOL>rows_to_obs = create_sparse_mapping(obs_id_values)<EOL>all_alternatives = np.sort(np.unique(alt_id_values))<EOL>rows_to_alts = create_sparse_mapping(alt_id_values,<EOL>unique_ids=all_alternatives)<EOL>if choice_col is not None:<EOL><INDENT>chosen_row_to_obs = csr_matrix(rows_to_obs.multiply(<EOL>long_form[choice_col].values[:, None]))<EOL><DEDENT>else:<EOL><INDENT>chosen_row_to_obs = None<EOL><DEDENT>if nest_spec is not None:<EOL><INDENT>num_nests = len(nest_spec)<EOL>alt_id_to_nest_name = {}<EOL>for key in nest_spec:<EOL><INDENT>for element in nest_spec[key]:<EOL><INDENT>alt_id_to_nest_name[element] = key<EOL><DEDENT><DEDENT>nest_ids = np.arange(<NUM_LIT:1>, num_nests + <NUM_LIT:1>)<EOL>nest_name_to_nest_id = dict(zip(nest_spec.keys(), nest_ids))<EOL>nest_id_vec = np.array([nest_name_to_nest_id[alt_id_to_nest_name[x]]<EOL>for x in alt_id_values])<EOL>rows_to_nests = create_sparse_mapping(nest_id_vec, unique_ids=nest_ids)<EOL><DEDENT>else:<EOL><INDENT>rows_to_nests = None<EOL><DEDENT>if mix_id_col is not None:<EOL><INDENT>mix_id_array = long_form[mix_id_col].values<EOL>rows_to_mixers = create_sparse_mapping(mix_id_array)<EOL><DEDENT>else:<EOL><INDENT>rows_to_mixers = None<EOL><DEDENT>mapping_dict = OrderedDict()<EOL>mapping_dict["<STR_LIT>"] = rows_to_obs<EOL>mapping_dict["<STR_LIT>"] = rows_to_alts<EOL>mapping_dict["<STR_LIT>"] = chosen_row_to_obs<EOL>mapping_dict["<STR_LIT>"] = rows_to_nests<EOL>mapping_dict["<STR_LIT>"] = rows_to_mixers<EOL>if dense:<EOL><INDENT>for key in mapping_dict:<EOL><INDENT>if mapping_dict[key] is not None:<EOL><INDENT>mapping_dict[key] = mapping_dict[key].A<EOL><DEDENT><DEDENT><DEDENT>return mapping_dict<EOL>
|
Parameters
----------
long_form : pandas dataframe.
Contains one row for each available alternative for each observation.
obs_id_col : str.
Denotes the column in `long_form` which contains the choice situation
observation ID values for each row of `long_form`. Note each value in
this column must be unique (i.e., individuals with repeat observations
have unique `obs_id_col` values for each choice situation, and
`obs_id_col` values are unique across individuals).
alt_id_col : str.
Denotes the column in long_form which contains the alternative ID
values for each row of `long_form`.
choice_col : str, optional.
Denotes the column in long_form which contains a one if the alternative
pertaining to the given row was the observed outcome for the
observation pertaining to the given row and a zero otherwise.
Default == None.
nest_spec : OrderedDict, or None, optional.
Keys are strings that define the name of the nests. Values are lists of
alternative ids, denoting which alternatives belong to which nests.
Each alternative id must only be associated with a single nest!
Default == None.
mix_id_col : str, optional.
Denotes the column in long_form that contains the identification values
used to denote the units of observation over which parameters are
randomly distributed.
dense : bool, optional.
    Determines whether dense numpy arrays (True) or scipy sparse
    matrices (False) will be returned. Default == False.
Returns
-------
mapping_dict : OrderedDict.
    Keys will be `["rows_to_obs", "rows_to_alts", "chosen_row_to_obs",
    "rows_to_nests", "rows_to_mixers"]`. If `choice_col` is None, then the
    value for `chosen_row_to_obs` will be None. Likewise, if `nest_spec`
    is None, then the value for `rows_to_nests` will be None, and if
    `mix_id_col` is None, then the value for `rows_to_mixers` will be
    None. The value for
"rows_to_obs" will map the rows of the `long_form` to the unique
observations (on the columns) in their order of appearance. The value
for `rows_to_alts` will map the rows of the `long_form` to the unique
alternatives which are possible in the dataset (on the columns), in
sorted order--not order of appearance. The value for
`chosen_row_to_obs`, if not None, will map the rows of the `long_form`
that contain the chosen alternatives to the specific observations those
rows are associated with (denoted by the columns). The value of
`rows_to_nests`, if not None, will map the rows of the `long_form` to
the nest (denoted by the column) that contains the row's alternative.
If `dense==True`, the returned values will be dense numpy arrays.
Otherwise, the returned values will be scipy sparse arrays.
|
f7692:m19
|
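A minimal usage sketch (hedged: the dataframe and column names are illustrative):

    import pandas as pd

    long_df = pd.DataFrame({"obs_id": [1, 1, 2, 2],
                            "alt_id": [1, 2, 1, 2],
                            "choice": [1, 0, 0, 1]})
    maps = create_long_form_mappings(long_df, "obs_id", "alt_id",
                                     choice_col="choice", dense=True)
    maps["rows_to_obs"]        # 4 x 2 array grouping rows by observation
    maps["chosen_row_to_obs"]  # nonzero only in rows where choice == 1
|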
def convert_long_to_wide(long_data,<EOL>ind_vars,<EOL>alt_specific_vars,<EOL>subset_specific_vars,<EOL>obs_id_col,<EOL>alt_id_col,<EOL>choice_col,<EOL>alt_name_dict=None,<EOL>null_value=np.nan):
|
<EOL>num_vars_accounted_for = sum([len(x) for x in<EOL>[ind_vars, alt_specific_vars,<EOL>subset_specific_vars,<EOL>[obs_id_col, alt_id_col, choice_col]]])<EOL>ensure_all_columns_are_used(num_vars_accounted_for, long_data)<EOL>ensure_columns_are_in_dataframe(ind_vars,<EOL>long_data,<EOL>col_title="<STR_LIT>",<EOL>data_title='<STR_LIT>')<EOL>ensure_columns_are_in_dataframe(alt_specific_vars,<EOL>long_data,<EOL>col_title="<STR_LIT>",<EOL>data_title='<STR_LIT>')<EOL>ensure_columns_are_in_dataframe(subset_specific_vars.keys(),<EOL>long_data,<EOL>col_title="<STR_LIT>",<EOL>data_title='<STR_LIT>')<EOL>identifying_cols = [choice_col, obs_id_col, alt_id_col]<EOL>identifying_col_string = "<STR_LIT>"<EOL>ensure_columns_are_in_dataframe(identifying_cols,<EOL>long_data,<EOL>col_title=identifying_col_string,<EOL>data_title='<STR_LIT>')<EOL>check_dataframe_for_duplicate_records(obs_id_col, alt_id_col, long_data)<EOL>ensure_num_chosen_alts_equals_num_obs(obs_id_col, choice_col, long_data)<EOL>if alt_name_dict is not None:<EOL><INDENT>check_type_and_values_of_alt_name_dict(alt_name_dict,<EOL>alt_id_col,<EOL>long_data)<EOL><DEDENT>num_obs = long_data[obs_id_col].unique().shape[<NUM_LIT:0>]<EOL>num_alts = long_data[alt_id_col].unique().shape[<NUM_LIT:0>]<EOL>num_cols = <NUM_LIT:1><EOL>num_cols += num_alts<EOL>num_cols += <NUM_LIT:1><EOL>num_cols += len(ind_vars)<EOL>num_cols += len(alt_specific_vars) * num_alts<EOL>for col in subset_specific_vars:<EOL><INDENT>num_cols += len(subset_specific_vars[col])<EOL><DEDENT>new_df = long_data[[obs_id_col] + ind_vars].drop_duplicates()<EOL>new_df.reset_index(inplace=True)<EOL>new_df[choice_col] = long_data.loc[long_data[choice_col] == <NUM_LIT:1>,<EOL>alt_id_col].values<EOL>mapping_res = create_long_form_mappings(long_data,<EOL>obs_id_col,<EOL>alt_id_col)<EOL>row_to_obs = mapping_res["<STR_LIT>"]<EOL>row_to_alt = mapping_res["<STR_LIT>"]<EOL>obs_to_alt = row_to_obs.T.dot(row_to_alt).todense()<EOL>alt_id_values = long_data[alt_id_col].values<EOL>all_alternatives = np.sort(np.unique(alt_id_values))<EOL>if alt_name_dict is None:<EOL><INDENT>availability_col_names = ["<STR_LIT>".format(int(x))<EOL>for x in all_alternatives]<EOL><DEDENT>else:<EOL><INDENT>availability_col_names = ["<STR_LIT>".format(alt_name_dict[x])<EOL>for x in all_alternatives]<EOL><DEDENT>availability_df = pd.DataFrame(obs_to_alt,<EOL>columns=availability_col_names)<EOL>alt_specific_dfs = []<EOL>for col in alt_specific_vars + list(subset_specific_vars.keys()):<EOL><INDENT>relevant_vals = long_data[col].values[:, None]<EOL>obs_to_var = row_to_obs.T.dot(row_to_alt.multiply(relevant_vals))<EOL>if issparse(obs_to_var):<EOL><INDENT>obs_to_var = obs_to_var.toarray()<EOL><DEDENT>obs_to_var = obs_to_var.astype(float)<EOL>if (obs_to_alt == <NUM_LIT:0>).any():<EOL><INDENT>obs_to_var[np.where(obs_to_alt == <NUM_LIT:0>)] = null_value<EOL><DEDENT>if alt_name_dict is None:<EOL><INDENT>obs_to_var_names = ["<STR_LIT>".format(col, int(x))<EOL>for x in all_alternatives]<EOL><DEDENT>else:<EOL><INDENT>obs_to_var_names = ["<STR_LIT>".format(col, alt_name_dict[x])<EOL>for x in all_alternatives]<EOL><DEDENT>if col in subset_specific_vars:<EOL><INDENT>relevant_alt_ids = subset_specific_vars[col]<EOL>relevant_col_idx = np.where(np.in1d(all_alternatives,<EOL>relevant_alt_ids))[<NUM_LIT:0>]<EOL><DEDENT>else:<EOL><INDENT>relevant_col_idx = None<EOL><DEDENT>if relevant_col_idx is None:<EOL><INDENT>obs_to_var_df = pd.DataFrame(obs_to_var,<EOL>columns=obs_to_var_names)<EOL><DEDENT>else:<EOL><INDENT>obs_to_var_df = pd.DataFrame(obs_to_var[:, relevant_col_idx],<EOL>columns=[obs_to_var_names[x] for<EOL>x in relevant_col_idx])<EOL><DEDENT>alt_specific_dfs.append(obs_to_var_df)<EOL><DEDENT>final_alt_specific_df = pd.concat(alt_specific_dfs, axis=<NUM_LIT:1>)<EOL>final_wide_df = pd.concat([new_df[[obs_id_col]],<EOL>new_df[[choice_col]],<EOL>availability_df,<EOL>new_df[ind_vars],<EOL>final_alt_specific_df],<EOL>axis=<NUM_LIT:1>)<EOL>if final_wide_df.shape != (num_obs, num_cols):<EOL><INDENT>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>".format((num_obs,<EOL>num_cols))<EOL>msg_3 = "<STR_LIT>"<EOL>total_msg = msg_1 + msg_2 + msg_3.format(final_wide_df.shape)<EOL>warnings.warn(total_msg)<EOL><DEDENT>return final_wide_df<EOL>
|
Converts a 'long format' dataframe of cross-sectional discrete choice data
into a 'wide format' version of the same data.
Parameters
----------
long_data : pandas dataframe.
Contains one row for each available alternative for each observation.
Should have the specified `[obs_id_col, alt_id_col, choice_col]` column
headings. The dtypes of all columns should be numeric.
ind_vars : list of strings.
Each element should be a column heading in `long_data` that denotes a
variable that varies across observations but not across alternatives.
alt_specific_vars : list of strings.
Each element should be a column heading in `long_data` that denotes a
variable that varies not only across observations but also across all
alternatives.
subset_specific_vars : dict.
Each key should be a string that is a column heading of `long_data`.
Each value should be a list of alternative ids denoting the subset of
    alternatives over which the variable (i.e. the key) actually varies.
These variables should vary across individuals and across some
alternatives.
obs_id_col : str.
Denotes the column in `long_data` that contains the observation ID
values for each row.
alt_id_col : str.
Denotes the column in `long_data` that contains the alternative ID
values for each row.
choice_col : str.
Denotes the column in `long_data` that contains a one if the
alternative pertaining to the given row was the observed outcome for
the observation pertaining to the given row and a zero otherwise.
alt_name_dict : dict or None, optional
If not None, should be a dictionary whose keys are the possible values
in `long_data[alt_id_col].unique()`. The values should be the name
that one wants to associate with each alternative id. Default == None.
null_value : int, float, long, or `np.nan`, optional.
The passed value will be used to fill cells in the wide format
dataframe when that cell is unknown for a given individual. This is
most commonly the case when there is a variable that varies across
alternatives and one of the alternatives is not available for a given
    individual. The `null_value` will be inserted for that individual for
that variable. Default == `np.nan`.
Returns
-------
final_wide_df : pandas dataframe.
Will contain one row per observational unit. Will contain an
observation id column of the same name as `obs_id_col`. Will also
contain a choice column of the same name as `choice_col`. Will contain
one availability column per unique, observed alternative in the
dataset. Will contain one column per variable in `ind_vars`. Will
contain one column per alternative per variable in `alt_specific_vars`.
Will contain one column per specified alternative per variable in
`subset_specific_vars`.
|
f7692:m20
|
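A minimal usage sketch of `convert_long_to_wide` (hedged: the dataframe and column names are illustrative):

    import pandas as pd

    long_df = pd.DataFrame({"obs_id": [1, 1, 2, 2],
                            "alt_id": [1, 2, 1, 2],
                            "choice": [1, 0, 0, 1],
                            "income": [50., 50., 30., 30.],
                            "travel_time": [10., 20., 15., 25.]})
    wide_df = convert_long_to_wide(long_df,
                                   ind_vars=["income"],
                                   alt_specific_vars=["travel_time"],
                                   subset_specific_vars={},
                                   obs_id_col="obs_id",
                                   alt_id_col="alt_id",
                                   choice_col="choice")
    # wide_df has one row per observation: an availability column per
    # alternative and one travel_time column per alternative.
|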
def check_wide_data_for_blank_choices(choice_col, wide_data):
|
if wide_data[choice_col].isnull().any():<EOL><INDENT>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise ValueError(msg_1 + msg_2)<EOL><DEDENT>return None<EOL>
|
Checks `wide_data` for null values in the choice column, and raises a
helpful ValueError if null values are found.
Parameters
----------
choice_col : str.
Denotes the column in `wide_data` that is used to record each
observation's choice.
wide_data : pandas dataframe.
Contains one row for each observation. Should contain `choice_col`.
Returns
-------
None.
|
f7692:m21
|
def ensure_unique_obs_ids_in_wide_data(obs_id_col, wide_data):
|
if len(wide_data[obs_id_col].unique()) != wide_data.shape[<NUM_LIT:0>]:<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise ValueError(msg + msg_2)<EOL><DEDENT>return None<EOL>
|
Ensures that there is one observation per row in wide_data. Raises a
helpful ValueError if otherwise.
Parameters
----------
obs_id_col : str.
Denotes the column in `wide_data` that contains the observation ID
values for each row.
wide_data : pandas dataframe.
Contains one row for each observation. Should contain the specified
`obs_id_col` column.
Returns
-------
None.
|
f7692:m22
|
def ensure_chosen_alternatives_are_in_user_alt_ids(choice_col,<EOL>wide_data,<EOL>availability_vars):
|
if not wide_data[choice_col].isin(availability_vars.keys()).all():<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise ValueError(msg + msg_2)<EOL><DEDENT>return None<EOL>
|
Ensures that all chosen alternatives in `wide_data` are present in the
`availability_vars` dict. Raises a helpful ValueError if not.
Parameters
----------
choice_col : str.
Denotes the column in `wide_data` that contains a one if the
alternative pertaining to the given row was the observed outcome for
the observation pertaining to the given row and a zero otherwise.
wide_data : pandas dataframe.
Contains one row for each observation. Should contain the specified
`choice_col` column.
availability_vars : dict.
There should be one key value pair for each alternative that is
observed in the dataset. Each key should be the alternative id for the
alternative, and the value should be the column heading in `wide_data`
that denotes (using ones and zeros) whether an alternative is
available/unavailable, respectively, for a given observation.
Alternative id's, i.e. the keys, must be integers.
Returns
-------
None.
|
f7692:m23
|
def ensure_each_wide_obs_chose_an_available_alternative(obs_id_col,<EOL>choice_col,<EOL>availability_vars,<EOL>wide_data):
|
<EOL>wide_availability_values = wide_data[list(<EOL>availability_vars.values())].values<EOL>unavailable_condition = ((wide_availability_values == <NUM_LIT:0>).sum(axis=<NUM_LIT:1>)<EOL>.astype(bool))<EOL>problem_obs = []<EOL>for idx, row in wide_data.loc[unavailable_condition].iterrows():<EOL><INDENT>if row.at[availability_vars[row.at[choice_col]]] != <NUM_LIT:1>:<EOL><INDENT>problem_obs.append(row.at[obs_id_col])<EOL><DEDENT><DEDENT>if problem_obs != []:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg.format(problem_obs))<EOL><DEDENT>return None<EOL>
|
Checks whether or not each observation with a restricted choice set chose
an alternative that was personally available to him or her. Will raise a
helpful ValueError if this is not the case.
Parameters
----------
obs_id_col : str.
Denotes the column in `wide_data` that contains the observation ID
values for each row.
choice_col : str.
Denotes the column in `wide_data` that contains a one if the
alternative pertaining to the given row was the observed outcome for
the observation pertaining to the given row and a zero otherwise.
availability_vars : dict.
There should be one key value pair for each alternative that is
observed in the dataset. Each key should be the alternative id for the
alternative, and the value should be the column heading in `wide_data`
that denotes (using ones and zeros) whether an alternative is
available/unavailable, respectively, for a given observation.
Alternative id's, i.e. the keys, must be integers.
wide_data : pandas dataframe.
Contains one row for each observation. Should have the specified
`[obs_id_col, choice_col] + availability_vars.values()` columns.
Returns
-------
None
|
f7692:m24
|
def ensure_all_wide_alt_ids_are_chosen(choice_col,<EOL>alt_specific_vars,<EOL>availability_vars,<EOL>wide_data):
|
sorted_alt_ids = np.sort(wide_data[choice_col].unique())<EOL>try:<EOL><INDENT>problem_ids = [x for x in availability_vars<EOL>if x not in sorted_alt_ids]<EOL>problem_type = "<STR_LIT>"<EOL>assert problem_ids == []<EOL>problem_ids = []<EOL>for new_column in alt_specific_vars:<EOL><INDENT>for alt_id in alt_specific_vars[new_column]:<EOL><INDENT>if alt_id not in sorted_alt_ids and alt_id not in problem_ids:<EOL><INDENT>problem_ids.append(alt_id)<EOL><DEDENT><DEDENT><DEDENT>problem_type = "<STR_LIT>"<EOL>assert problem_ids == []<EOL><DEDENT>except AssertionError:<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise ValueError(msg.format(problem_type) + msg_2.format(problem_ids))<EOL><DEDENT>return None<EOL>
|
Checks to make sure all user-specified alternative id's, both in
`alt_specific_vars` and `availability_vars` are observed in the choice
column of `wide_data`.
|
f7692:m25
|
def ensure_contiguity_in_observation_rows(obs_id_vector):
|
<EOL>contiguity_check_array = (obs_id_vector[<NUM_LIT:1>:] - obs_id_vector[:-<NUM_LIT:1>]) >= <NUM_LIT:0><EOL>if not contiguity_check_array.all():<EOL><INDENT>problem_ids = obs_id_vector[np.where(~contiguity_check_array)]<EOL>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>msg_3 = "<STR_LIT>"<EOL>raise ValueError(msg_1 + msg_2 + msg_3.format(problem_ids.tolist()))<EOL><DEDENT>else:<EOL><INDENT>return None<EOL><DEDENT>
|
Ensures that all rows pertaining to a given choice situation are located
next to one another. Raises a helpful ValueError otherwise. This check is
needed because the hessian calculation function requires the design matrix
to have contiguity in rows with the same observation id.
Parameters
----------
obs_id_vector : 1D ndarray of ints.
Should contain the id (i.e. a unique integer) that corresponds to each
choice situation in the dataset.
Returns
-------
None.
|
f7692:m26
|
def convert_wide_to_long(wide_data,<EOL>ind_vars,<EOL>alt_specific_vars,<EOL>availability_vars,<EOL>obs_id_col,<EOL>choice_col,<EOL>new_alt_id_name=None):
|
<EOL>all_alt_specific_cols = []<EOL>for var_dict in alt_specific_vars.values():<EOL><INDENT>all_alt_specific_cols.extend(var_dict.values())<EOL><DEDENT>vars_accounted_for = set(ind_vars +<EOL>list(availability_vars.values()) +<EOL>[obs_id_col, choice_col] +<EOL>all_alt_specific_cols)<EOL>num_vars_accounted_for = len(vars_accounted_for)<EOL>ensure_all_columns_are_used(num_vars_accounted_for,<EOL>wide_data,<EOL>data_title='<STR_LIT>')<EOL>ensure_columns_are_in_dataframe(ind_vars,<EOL>wide_data,<EOL>col_title='<STR_LIT>',<EOL>data_title='<STR_LIT>')<EOL>ensure_columns_are_in_dataframe(availability_vars.values(),<EOL>wide_data,<EOL>col_title='<STR_LIT>',<EOL>data_title='<STR_LIT>')<EOL>for new_column in alt_specific_vars:<EOL><INDENT>for alt_id in alt_specific_vars[new_column]:<EOL><INDENT>old_column = alt_specific_vars[new_column][alt_id]<EOL>ensure_columns_are_in_dataframe([old_column],<EOL>wide_data,<EOL>col_title="<STR_LIT>",<EOL>data_title='<STR_LIT>')<EOL><DEDENT><DEDENT>ensure_columns_are_in_dataframe([choice_col, obs_id_col],<EOL>wide_data,<EOL>col_title='<STR_LIT>',<EOL>data_title='<STR_LIT>')<EOL>ensure_unique_obs_ids_in_wide_data(obs_id_col, wide_data)<EOL>check_wide_data_for_blank_choices(choice_col, wide_data)<EOL>ensure_all_wide_alt_ids_are_chosen(choice_col,<EOL>alt_specific_vars,<EOL>availability_vars,<EOL>wide_data)<EOL>ensure_chosen_alternatives_are_in_user_alt_ids(choice_col,<EOL>wide_data,<EOL>availability_vars)<EOL>ensure_each_wide_obs_chose_an_available_alternative(obs_id_col,<EOL>choice_col,<EOL>availability_vars,<EOL>wide_data)<EOL>sorted_alt_ids = np.sort(wide_data[choice_col].unique())<EOL>sorted_availability_cols = [availability_vars[x] for x in sorted_alt_ids]<EOL>num_rows = wide_data[sorted_availability_cols].sum(axis=<NUM_LIT:0>).sum()<EOL>num_cols = <NUM_LIT:1><EOL>num_cols += <NUM_LIT:1><EOL>num_cols += <NUM_LIT:1><EOL>num_cols += len(ind_vars)<EOL>num_cols += len(alt_specific_vars.keys())<EOL>wide_availability_values = wide_data[list(<EOL>availability_vars.values())].values<EOL>new_obs_id_col = (wide_availability_values *<EOL>wide_data[obs_id_col].values[:, None]).ravel()<EOL>new_obs_id_col = new_obs_id_col.astype(int)<EOL>new_ind_var_cols = []<EOL>for var in ind_vars:<EOL><INDENT>new_ind_var_cols.append((wide_availability_values *<EOL>wide_data[var].values[:, None]).ravel())<EOL><DEDENT>wide_choice_data = (wide_data[choice_col].values[:, None] ==<EOL>sorted_alt_ids[None, :])<EOL>new_choice_col = wide_choice_data.ravel()<EOL>new_choice_col = new_choice_col.astype(int)<EOL>new_alt_id_col = (wide_availability_values *<EOL>sorted_alt_ids[None, :]).ravel().astype(int)<EOL>new_alt_id_col = new_alt_id_col.astype(int)<EOL>new_alt_specific_cols = []<EOL>for new_col in alt_specific_vars:<EOL><INDENT>new_wide_alt_specific_cols = []<EOL>for alt_id in sorted_alt_ids:<EOL><INDENT>if alt_id in alt_specific_vars[new_col]:<EOL><INDENT>rel_wide_column = alt_specific_vars[new_col][alt_id]<EOL>new_col_vals = wide_data[rel_wide_column].values[:, None]<EOL>new_wide_alt_specific_cols.append(new_col_vals)<EOL><DEDENT>else:<EOL><INDENT>new_wide_alt_specific_cols.append(np.zeros(<EOL>(wide_data.shape[<NUM_LIT:0>], <NUM_LIT:1>)))<EOL><DEDENT><DEDENT>concatenated_long_column = np.concatenate(new_wide_alt_specific_cols,<EOL>axis=<NUM_LIT:1>).ravel()<EOL>new_alt_specific_cols.append(concatenated_long_column)<EOL><DEDENT>availability_condition = wide_availability_values.ravel() != <NUM_LIT:0><EOL>alt_id_column_name = ("<STR_LIT>" if new_alt_id_name is None<EOL>else new_alt_id_name)<EOL>final_long_columns = ([obs_id_col,<EOL>alt_id_column_name,<EOL>choice_col] +<EOL>ind_vars +<EOL>list(alt_specific_vars.keys()))<EOL>all_arrays = ([new_obs_id_col,<EOL>new_alt_id_col,<EOL>new_choice_col] +<EOL>new_ind_var_cols +<EOL>new_alt_specific_cols)<EOL>df_recs = np.rec.fromarrays([all_arrays[pos][availability_condition]<EOL>for pos in range(len(all_arrays))],<EOL>names=final_long_columns)<EOL>final_long_df = pd.DataFrame.from_records(df_recs)<EOL>try:<EOL><INDENT>assert final_long_df.shape == (num_rows, num_cols)<EOL><DEDENT>except AssertionError:<EOL><INDENT>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>".format((num_rows,<EOL>num_cols))<EOL>msg_3 = "<STR_LIT>"<EOL>total_msg = "<STR_LIT:\n>".join([msg_1, msg_2, msg_3])<EOL>warnings.warn(total_msg.format(final_long_df.shape))<EOL><DEDENT>return final_long_df<EOL>
|
Will convert a cross-sectional dataframe of discrete choice data from wide
format to long format.
Parameters
----------
wide_data : pandas dataframe.
Contains one row for each observation. Should have the specified
`[obs_id_col, choice_col] + availability_vars.values()` columns.
ind_vars : list of strings.
Each element should be a column heading in `wide_data` that denotes a
variable that varies across observations but not across alternatives.
alt_specific_vars : dict.
Each key should be a string that will be a column heading of the
returned, long format dataframe. Each value should be a dictionary
where the inner key is the alternative id and the value is the column
heading in wide data that specifies the value of the outer key for the
associated alternative. The variables denoted by the outer key should
vary across individuals and across some or all alternatives.
availability_vars : dict.
There should be one key value pair for each alternative that is
observed in the dataset. Each key should be the alternative id for the
alternative, and the value should be the column heading in `wide_data`
that denotes (using ones and zeros) whether an alternative is
available/unavailable, respectively, for a given observation.
Alternative id's, i.e. the keys, must be integers.
obs_id_col : str.
Denotes the column in `wide_data` that contains the observation ID
values for each row.
choice_col : str.
Denotes the column in `wide_data` that contains the ID of the
alternative that was chosen by the observation pertaining to the
given row.
new_alt_id_name : str, optional.
If not None, should be a string. This string will be used as the column
heading for the alternative id column in the returned 'long' format
dataframe. If not passed, this column will be called `'alt_id'`.
Default == None.
Returns
-------
final_long_df : pandas dataframe.
Will contain one row for each available alternative for each
observation. Will contain an observation id column of the same name as
`obs_id_col`. Will also contain a choice column of the same name as
`choice_col`. Will also contain an alternative id column called
`alt_id` if `new_alt_id_name is None`, or `new_alt_id_name` otherwise. Will
contain one column per variable in `ind_vars`. Will contain one column
per key in `alt_specific_vars`.
|
f7692:m27
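A minimal usage sketch of the wide-to-long conversion above. The function's public name is not shown in this snippet; the sketch assumes it is exposed as pylogit's `convert_wide_to_long` (whose signature matches the docstring above), and all column names are hypothetical.

import pandas as pd
import pylogit as pl  # assumed import path for the converter above

# Two observations choosing between alternatives 1 and 2.
wide = pd.DataFrame({'obs_id': [1, 2],
                     'choice': [1, 2],        # ID of the chosen alternative
                     'income': [30, 55],      # observation-specific variable
                     'tt_1': [10.0, 12.0],    # travel time, alternative 1
                     'tt_2': [20.0, 8.0],     # travel time, alternative 2
                     'avail_1': [1, 1],
                     'avail_2': [1, 1]})

long_df = pl.convert_wide_to_long(wide_data=wide,
                                  ind_vars=['income'],
                                  alt_specific_vars={'travel_time':
                                                     {1: 'tt_1', 2: 'tt_2'}},
                                  availability_vars={1: 'avail_1',
                                                     2: 'avail_2'},
                                  obs_id_col='obs_id',
                                  choice_col='choice',
                                  new_alt_id_name='alt_id')
# long_df has one row per available alternative per observation: four rows
# here, with columns obs_id, alt_id, choice, income, and travel_time.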
|
def convert_mixing_names_to_positions(mixing_names, ind_var_names):
|
return [ind_var_names.index(name) for name in mixing_names]<EOL>
|
Parameters
----------
mixing_names : list.
All elements should be strings. Denotes the names of the index
variables that are being treated as random variables.
ind_var_names : list.
All elements should be strings, representing (in order) the variables
in the index.
Returns
-------
list.
All elements should be ints. Elements will be the position of each of
the elements in mixing name, in the `ind_var_names` list.
|
f7692:m28
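A one-line illustration of the position lookup above (variable names are hypothetical):

# 'travel_time' sits at index 1 of the index variable names.
positions = convert_mixing_names_to_positions(['travel_time'],
                                              ['asc_car', 'travel_time', 'cost'])
assert positions == [1]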
|
def get_normal_draws(num_mixers,<EOL>num_draws,<EOL>num_vars,<EOL>seed=None):
|
<EOL>assert all([isinstance(x, int) for x in [num_mixers, num_draws, num_vars]])<EOL>assert all([x > <NUM_LIT:0> for x in [num_mixers, num_draws, num_vars]])<EOL>if seed is not None:<EOL><INDENT>assert isinstance(seed, int) and seed > <NUM_LIT:0><EOL><DEDENT>normal_dist = scipy.stats.norm(loc=<NUM_LIT:0.0>, scale=<NUM_LIT:1.0>)<EOL>all_draws = []<EOL>if seed:<EOL><INDENT>np.random.seed(seed)<EOL><DEDENT>for i in xrange(num_vars):<EOL><INDENT>all_draws.append(normal_dist.rvs(size=(num_mixers, num_draws)))<EOL><DEDENT>return all_draws<EOL>
|
Parameters
----------
num_mixers : int.
Should be greater than zero. Denotes the number of observations for
which we are making draws from a normal distribution, i.e. the
number of observations with randomly distributed coefficients.
num_draws : int.
Should be greater than zero. Denotes the number of draws that are to be
made from each normal distribution.
num_vars : int.
Should be greater than zero. Denotes the number of variables for which
we need to take draws from the normal distribution.
seed : int or None, optional.
If an int is passed, it should be greater than zero. Denotes the value
to be used in seeding the random generator used to generate the draws
from the normal distribution. Default == None.
Returns
-------
all_draws : list of 2D ndarrays.
The list will have `num_vars` elements. Each element will be a
`num_mixers` by `num_draws` numpy array of draws from a normal
distribution with mean zero and standard deviation of one.
|
f7693:m0
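A short sketch of requesting draws with the function above (the sizes are illustrative; it assumes `get_normal_draws` is importable from the module above):

# 3 decision makers, 5 draws each, for 2 randomly distributed coefficients.
draws = get_normal_draws(num_mixers=3, num_draws=5, num_vars=2, seed=1)
assert len(draws) == 2            # one array per mixed variable
assert draws[0].shape == (3, 5)   # (num_mixers, num_draws)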
|
def convert_mixing_names_to_positions(mixing_names, ind_var_names):
|
return [ind_var_names.index(name) for name in mixing_names]<EOL>
|
Parameters
----------
mixing_names : list of strings.
Denotes the names of the index variables that are being treated as
random variables.
ind_var_names : list of strings.
Each string should represent (in order) the variables in the index.
Returns
-------
list. All elements should be ints. Elements will be the position in
`ind_var_names` of each of the elements in `mixing_names`.
|
f7693:m1
|
def create_expanded_design_for_mixing(design,<EOL>draw_list,<EOL>mixing_pos,<EOL>rows_to_mixers):
|
if len(mixing_pos) != len(draw_list):<EOL><INDENT>msg = "<STR_LIT>".format(mixing_pos)<EOL>msg_2 = "<STR_LIT>".format(len(draw_list))<EOL>raise ValueError(msg + "<STR_LIT:\n>" + msg_2)<EOL><DEDENT>num_draws = draw_list[<NUM_LIT:0>].shape[<NUM_LIT:1>]<EOL>orig_num_vars = design.shape[<NUM_LIT:1>]<EOL>arrays_for_mixing = design[:, mixing_pos]<EOL>expanded_design = np.concatenate((design, arrays_for_mixing),<EOL>axis=<NUM_LIT:1>).copy()<EOL>design_3d = np.repeat(expanded_design[:, None, :],<EOL>repeats=num_draws,<EOL>axis=<NUM_LIT:1>)<EOL>for pos, idx in enumerate(mixing_pos):<EOL><INDENT>rel_draws = draw_list[pos]<EOL>rel_long_draws = rows_to_mixers.dot(rel_draws)<EOL>design_3d[:, :, orig_num_vars + pos] *= rel_long_draws<EOL><DEDENT>return design_3d<EOL>
|
Parameters
----------
design : 2D ndarray.
All elements should be ints, floats, or longs. Each row corresponds to
an available alternative for a given individual. There should be one
column per index coefficient being estimated.
draw_list : list of 2D ndarrays.
All numpy arrays should have the same number of columns (`num_draws`)
and the same number of rows (`num_mixers`). All elements of the numpy
arrays should be ints, floats, or longs. Should have as many elements
as there are elements in `mixing_pos`.
mixing_pos : list of ints.
Each element should denote a column in design whose associated index
coefficient is being treated as a random variable.
rows_to_mixers : 2D scipy sparse array.
All elements should be zeros and ones. Will map the rows of the design
matrix to the particular units that the mixing is being performed over.
Note that in the case of panel data, this matrix will be different from
`rows_to_obs`.
Returns
-------
design_3d : 3D numpy array.
Each slice of the third dimension will contain a copy of the design
matrix corresponding to a given draw of the random variables being
mixed over.
|
f7693:m2
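The shape bookkeeping above can be checked with a toy example. This is a sketch with made-up data; it assumes `create_expanded_design_for_mixing` is importable from the module above.

import numpy as np
from scipy.sparse import csr_matrix

design = np.arange(8.0).reshape((4, 2))   # 4 long-format rows, 2 coefficients
draws = [np.random.randn(2, 3)]           # 2 mixing units, 3 draws, 1 mixed var
rows_to_mixers = csr_matrix(np.array([[1, 0],
                                      [1, 0],
                                      [0, 1],
                                      [0, 1]]))
design_3d = create_expanded_design_for_mixing(design, draws, [1], rows_to_mixers)
# One column is appended per mixed coefficient, and the design is replicated
# once per draw: (num_rows, num_draws, num_orig_vars + num_mixed_vars).
assert design_3d.shape == (4, 3, 3)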
|
def calc_choice_sequence_probs(prob_array,<EOL>choice_vec,<EOL>rows_to_mixers,<EOL>return_type=None):
|
if return_type not in [None, '<STR_LIT:all>']:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>log_chosen_prob_array = choice_vec[:, None] * np.log(prob_array)<EOL>expanded_log_sequence_probs = rows_to_mixers.T.dot(log_chosen_prob_array)<EOL>expanded_sequence_probs = np.exp(expanded_log_sequence_probs)<EOL>zero_idx = np.where(expanded_sequence_probs == <NUM_LIT:0>)<EOL>expanded_sequence_probs[zero_idx] = min_comp_value<EOL>sequence_probs = expanded_sequence_probs.mean(axis=<NUM_LIT:1>)<EOL>if return_type is None:<EOL><INDENT>return sequence_probs<EOL><DEDENT>elif return_type == '<STR_LIT:all>':<EOL><INDENT>return sequence_probs, expanded_sequence_probs<EOL><DEDENT>
|
Parameters
----------
prob_array : 2D ndarray.
All elements should be ints, floats, or longs. All elements should be
between zero and one (exclusive). Each element should represent the
probability of the corresponding alternative being chosen by the
corresponding individual during the given choice situation, given the
particular draw of coefficients being considered. There should be one
column for each draw of the coefficients.
choice_vec : 1D ndarray.
All elements should be zeros or ones. Should denote the rows that were
chosen by the individuals corresponding to those rows.
rows_to_mixers : 2D scipy sparse array.
All elements should be zeros and ones. Will map the rows of the design
matrix to the particular units that the mixing is being performed over.
Note that in the case of panel data, this matrix will be different from
`rows_to_obs`.
return_type : `'all'` or None, optional.
If `'all'` is passed, then a tuple will be returned. The first element
will be a 1D numpy array of shape `(num_mixing_units,)`. Each value
will be the simulated (i.e. draw-averaged) probability of the
associated mixing unit making its observed sequence of choices. The
second element of the tuple will be a 2D numpy array with shape
`(num_mixing_units, num_draws)`, where
`num_draws == prob_array.shape[1]`. Each value will be the probability
of the associated mixing unit making its observed sequence of choices,
given the associated draw of the mixing distribution for that unit. If
None, only the first element of the tuple described above will be
returned. Default == None.
Returns
-------
See `return_type` kwarg.
|
f7693:m3
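In standard notation, the simulated sequence probability computed above for each mixing unit n is (a sketch; the symbols below are mine, not from the source):

P_n \approx \frac{1}{D} \sum_{d=1}^{D} \prod_{t \in T_n} \prod_{j \in C_{nt}} P_{ntj}(\beta_d)^{y_{ntj}}

where D is the number of draws, T_n indexes unit n's choice situations, C_{nt} the available alternatives, y_{ntj} the 0/1 choice indicators, and \beta_d the d-th draw of the mixed coefficients. The `return_type == 'all'` branch additionally exposes the per-draw products before they are averaged.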
|
def calc_mixed_log_likelihood(params,<EOL>design_3d,<EOL>alt_IDs,<EOL>rows_to_obs,<EOL>rows_to_alts,<EOL>rows_to_mixers,<EOL>choice_vector,<EOL>utility_transform,<EOL>ridge=None,<EOL>weights=None):
|
<EOL>if weights is None:<EOL><INDENT>weights = np.ones(design_3d.shape[<NUM_LIT:0>])<EOL><DEDENT>weights_per_obs =np.max(rows_to_mixers.toarray() * weights[:, None], axis=<NUM_LIT:0>)<EOL>prob_array = general_calc_probabilities(params,<EOL>design_3d,<EOL>alt_IDs,<EOL>rows_to_obs,<EOL>rows_to_alts,<EOL>utility_transform,<EOL>return_long_probs=True)<EOL>simulated_sequence_probs = calc_choice_sequence_probs(prob_array,<EOL>choice_vector,<EOL>rows_to_mixers)<EOL>log_likelihood = weights_per_obs.dot(np.log(simulated_sequence_probs))<EOL>if ridge is None:<EOL><INDENT>return log_likelihood<EOL><DEDENT>else:<EOL><INDENT>return log_likelihood - ridge * np.square(params).sum()<EOL><DEDENT>
|
Parameters
----------
params : 1D ndarray.
All elements should be ints, floats, or longs. Should have 1 element
for each utility coefficient being estimated (i.e. num_features +
num_coefs_being_mixed).
design_3d : 3D ndarray.
All elements should be ints, floats, or longs. Should have one row per
observation per available alternative. The second axis should have as
many elements as there are draws from the mixing distributions of the
coefficients. The last axis should have one element per index
coefficient being estimated.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_obs : 2D scipy sparse array.
All elements should be zeros and ones. There should be one row per
observation per available alternative and one column per observation.
This matrix maps the rows of the design matrix to the unique
observations (on the columns).
rows_to_alts : 2D scipy sparse array.
All elements should be zeros and ones. There should be one row per
observation per available alternative and one column per possible
alternative. This matrix maps the rows of the design matrix to the
possible alternatives for this dataset.
rows_to_mixers : 2D scipy sparse array.
All elements should be zeros and ones. Will map the rows of the design
matrix to the particular units that the mixing is being performed over.
Note that in the case of panel data, this matrix will be different from
`rows_to_obs`.
choice_vector : 1D ndarray.
All elements should be either ones or zeros. There should be one row
per observation per available alternative for the given observation.
Elements denote the alternative which is chosen by the given
observation with a 1 and a zero otherwise.
utility_transform : callable.
Should accept a 1D array of systematic utility values, a 1D array of
alternative IDs, and miscellaneous args and kwargs. Should return a 2D
array whose elements contain the appropriately transformed systematic
utility values, based on the current model being evaluated and the
given draw of the random coefficients. There should be one column for
each draw of the random coefficients. It should have one row per
individual per choice situation per available alternative.
ridge : scalar or None, optional.
Determines whether or not ridge regression is performed. If a scalar is
passed, then that scalar determines the ridge penalty for the
optimization. Default = None.
weights : 1D ndarray or None.
Allows for the calculation of weighted log-likelihoods. The weights can
represent various things. In stratified samples, the weights may be
the proportion of the observations in a given strata for a sample in
relation to the proportion of observations in that strata in the
population. In latent class models, the weights may be the probability
of being a particular class.
Returns
-------
log_likelihood: float.
The log-likelihood of the mixed logit model evaluated at the passed
values of `params`.
|
f7693:m4
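As a sketch, the (weighted, optionally ridge-penalized) simulated log-likelihood evaluated above is

\mathcal{LL}(\theta) = \sum_{n} w_n \log \hat{P}_n(\theta) \;-\; \lambda \sum_{k} \theta_k^{2}

with \hat{P}_n the simulated sequence probability from the previous sketch, w_n the per-unit weights, and \lambda the ridge penalty (the penalty term is dropped when `ridge` is None).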
|
def calc_mixed_logit_gradient(params,<EOL>design_3d,<EOL>alt_IDs,<EOL>rows_to_obs,<EOL>rows_to_alts,<EOL>rows_to_mixers,<EOL>choice_vector,<EOL>utility_transform,<EOL>ridge=None,<EOL>weights=None):
|
<EOL>if weights is None:<EOL><INDENT>weights = np.ones(design_3d.shape[<NUM_LIT:0>])<EOL><DEDENT>prob_array = general_calc_probabilities(params,<EOL>design_3d,<EOL>alt_IDs,<EOL>rows_to_obs,<EOL>rows_to_alts,<EOL>utility_transform,<EOL>return_long_probs=True)<EOL>prob_results = calc_choice_sequence_probs(prob_array,<EOL>choice_vector,<EOL>rows_to_mixers,<EOL>return_type="<STR_LIT:all>")<EOL>sequence_prob_array = prob_results[<NUM_LIT:1>]<EOL>simulated_probs = prob_results[<NUM_LIT:0>]<EOL>long_sequence_prob_array = rows_to_mixers.dot(sequence_prob_array)<EOL>long_simulated_probs = rows_to_mixers.dot(simulated_probs)<EOL>scaled_sequence_probs = (long_sequence_prob_array /<EOL>long_simulated_probs[:, None])<EOL>scaled_error = ((choice_vector[:, None] - prob_array) *<EOL>scaled_sequence_probs)<EOL>gradient = (scaled_error[:, :, None] *<EOL>design_3d *<EOL>weights[:, None, None]).sum(axis=<NUM_LIT:0>)<EOL>gradient = gradient.mean(axis=<NUM_LIT:0>)<EOL>if ridge is not None:<EOL><INDENT>gradient -= <NUM_LIT:2> * ridge * params<EOL><DEDENT>return gradient.ravel()<EOL>
|
Parameters
----------
params : 1D ndarray.
All elements should be ints, floats, or longs. Should have 1 element
for each utility coefficient being estimated
(i.e. num_features + num_coefs_being_mixed).
design_3d : 3D ndarray.
All elements should be ints, floats, or longs. Should have one row per
observation per available alternative. The second axis should have as
many elements as there are draws from the mixing distributions of the
coefficients. The last axis should have one element per index
coefficient being estimated.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_obs : 2D scipy sparse array.
All elements should be zeros and ones. Should have one row per
observation per available alternative and one column per observation.
This matrix maps the rows of the design matrix to the unique
observations (on the columns).
rows_to_alts : 2D scipy sparse array.
All elements should be zeros and ones. Should have one row per
observation per available alternative and one column per possible
alternative. This matrix maps the rows of the design matrix to the
possible alternatives for this dataset.
rows_to_mixers : 2D scipy sparse array.
All elements should be zeros and ones. Will map the rows of the design
matrix to the particular units that the mixing is being performed over.
Note that in the case of panel data, this matrix will be different from
`rows_to_obs`.
choice_vector : 1D ndarray.
All elements should be either ones or zeros. There should be one row
per observation per available alternative for the given observation.
Elements denote the alternative which is chosen by the given
observation with a 1 and a zero otherwise.
utility_transform : callable.
Should accept a 1D array of systematic utility values, a 1D array of
alternative IDs, and miscellaneous args and kwargs. Should return a 2D
array whose elements contain the appropriately transformed systematic
utility values, based on the current model being evaluated and the
given draw of the random coefficients. There should be one column for
each draw of the random coefficients. It should have one row per
individual per choice situation per available alternative.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If a float is
passed, then that float determines the ridge penalty for the
optimization. Default = None.
weights : 1D ndarray or None.
Allows for the calculation of weighted log-likelihoods. The weights can
represent various things. In stratified samples, the weights may be
the proportion of the observations in a given strata for a sample in
relation to the proportion of observations in that strata in the
population. In latent class models, the weights may be the probability
of being a particular class.
Returns
-------
gradient : ndarray of shape (design_3d.shape[2],).
The returned array is the gradient of the log-likelihood of the mixed
MNL model with respect to `params`.
|
f7693:m5
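Differentiating the simulated log-likelihood gives the expression the body above evaluates. As a sketch (same notation as before, with S_{nd} denoting unit n's sequence probability under draw d and x_{ntj} the corresponding design row):

\frac{\partial \mathcal{LL}}{\partial \beta} = \sum_{n} \frac{w_n}{D\,\hat{P}_n} \sum_{d=1}^{D} S_{nd} \sum_{t,\,j} \left(y_{ntj} - P_{ntj}(\beta_d)\right) x_{ntj} \;-\; 2\lambda\beta

The `scaled_sequence_probs` term in the code is S_{nd} / \hat{P}_n broadcast to long format, and the final `.mean(axis=0)` over draws supplies the 1/D factor.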
|
def calc_neg_log_likelihood_and_neg_gradient(beta,<EOL>design_3d,<EOL>alt_IDs,<EOL>rows_to_obs,<EOL>rows_to_alts,<EOL>rows_to_mixers,<EOL>choice_vector,<EOL>utility_transform,<EOL>constrained_pos,<EOL>ridge=None,<EOL>weights=None,<EOL>*args):
|
neg_log_likelihood = -<NUM_LIT:1> * calc_mixed_log_likelihood(beta,<EOL>design_3d,<EOL>alt_IDs,<EOL>rows_to_obs,<EOL>rows_to_alts,<EOL>rows_to_mixers,<EOL>choice_vector,<EOL>utility_transform,<EOL>ridge=ridge,<EOL>weights=weights)<EOL>neg_beta_gradient_vec = -<NUM_LIT:1> * calc_mixed_logit_gradient(beta,<EOL>design_3d,<EOL>alt_IDs,<EOL>rows_to_obs,<EOL>rows_to_alts,<EOL>rows_to_mixers,<EOL>choice_vector,<EOL>utility_transform,<EOL>ridge=ridge,<EOL>weights=weights)<EOL>if constrained_pos is not None:<EOL><INDENT>neg_beta_gradient_vec[constrained_pos] = <NUM_LIT:0><EOL><DEDENT>return neg_log_likelihood, neg_beta_gradient_vec<EOL>
|
Parameters
----------
beta : 1D ndarray.
All elements should be ints, floats, or longs. Should have 1 element
for each utility coefficient being estimated (i.e. num_features +
num_coefs_being_mixed).
design_3d : 3D ndarray.
All elements should be ints, floats, or longs. Should have one row per
observation per available alternative. The second axis should have as
many elements as there are draws from the mixing distributions of the
coefficients. The last axis should have one element per index
coefficient being estimated.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_obs : 2D scipy sparse array.
All elements should be zeros and ones. Should have one row per
observation per available alternative and one column per observation.
This matrix maps the rows of the design matrix to the unique
observations (on the columns).
rows_to_alts : 2D scipy sparse array.
All elements should be zeros and ones. Should have one row per
observation per available alternative and one column per possible
alternative. This matrix maps the rows of the design matrix to the
possible alternatives for this dataset.
rows_to_mixers : 2D scipy sparse array.
All elements should be zeros and ones. Will map the rows of the design
matrix to the particular units that the mixing is being performed over.
Note that in the case of panel data, this matrix will be different from
`rows_to_obs`.
choice_vector : 1D ndarray.
All elements should be either ones or zeros. There should be one row
per observation per available alternative for the given observation.
Elements denote the alternative which is chosen by the given
observation with a 1 and a zero otherwise.
utility_transform : callable.
Should accept a 1D array of systematic utility values, a 1D array of
alternative IDs, and miscellaneous args and kwargs. Should return a 2D
array whose elements contain the appropriately transformed systematic
utility values, based on the current model being evaluated and the
given draw of the random coefficients. There should be one column for
each draw of the random coefficients. It should have one row per
individual per choice situation per available alternative.
constrained_pos : list of ints, or None, optional.
Each int denotes a position in the array of estimated parameters that
are not to change from their initial values. None of the integers
should be greater than `beta.size`.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If a float is
passed, then that float determines the ridge penalty for the
optimization. Default = None.
weights : 1D ndarray or None.
Allows for the calculation of weighted log-likelihoods. The weights can
represent various things. In stratified samples, the weights may be
the proportion of the observations in a given strata for a sample in
relation to the proportion of observations in that strata in the
population. In latent class models, the weights may be the probability
of being a particular class.
Returns
-------
tuple. (`neg_log_likelihood`, `neg_beta_gradient_vec`).
The first element is a float. The second element is a 1D numpy array of
shape `(design_3d.shape[2],)`. The first element is the negative
log-likelihood of this model evaluated at the passed values of beta.
The second element is the gradient of the negative log-likelihood with
respect to the vector of utility coefficients.
|
f7693:m6
|
def calc_bhhh_hessian_approximation_mixed_logit(params,<EOL>design_3d,<EOL>alt_IDs,<EOL>rows_to_obs,<EOL>rows_to_alts,<EOL>rows_to_mixers,<EOL>choice_vector,<EOL>utility_transform,<EOL>ridge=None,<EOL>weights=None):
|
<EOL>if weights is None:<EOL><INDENT>weights = np.ones(design_3d.shape[<NUM_LIT:0>])<EOL><DEDENT>weights_per_obs =np.max(rows_to_mixers.toarray() * weights[:, None], axis=<NUM_LIT:0>)<EOL>prob_array = general_calc_probabilities(params,<EOL>design_3d,<EOL>alt_IDs,<EOL>rows_to_obs,<EOL>rows_to_alts,<EOL>utility_transform,<EOL>return_long_probs=True)<EOL>prob_results = calc_choice_sequence_probs(prob_array,<EOL>choice_vector,<EOL>rows_to_mixers,<EOL>return_type="<STR_LIT:all>")<EOL>sequence_prob_array = prob_results[<NUM_LIT:1>]<EOL>simulated_probs = prob_results[<NUM_LIT:0>]<EOL>long_sequence_prob_array = rows_to_mixers.dot(sequence_prob_array)<EOL>long_simulated_probs = rows_to_mixers.dot(simulated_probs)<EOL>scaled_sequence_probs = (long_sequence_prob_array /<EOL>long_simulated_probs[:, None])<EOL>scaled_error = ((choice_vector[:, None] - prob_array) *<EOL>scaled_sequence_probs)<EOL>gradient = (scaled_error[:, :, None] * design_3d).mean(axis=<NUM_LIT:1>)<EOL>gradient_per_obs = rows_to_mixers.T.dot(gradient)<EOL>bhhh_matrix =gradient_per_obs.T.dot(weights_per_obs[:, None] * gradient_per_obs)<EOL>if ridge is not None:<EOL><INDENT>bhhh_matrix -= <NUM_LIT:2> * ridge<EOL><DEDENT>return -<NUM_LIT:1> * bhhh_matrix<EOL>
|
Parameters
----------
params : 1D ndarray.
All elements should be ints, floats, or longs. Should have 1 element
for each utility coefficient being estimated (i.e. num_features +
num_coefs_being_mixed).
design_3d : 3D ndarray.
All elements should be ints, floats, or longs. Should have one row per
observation per available alternative. The second axis should have as
many elements as there are draws from the mixing distributions of the
coefficients. The last axis should have one element per index
coefficient being estimated.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_obs : 2D scipy sparse array.
All elements should be zeros and ones. Should have one row per
observation per available alternative and one column per observation.
This matrix maps the rows of the design matrix to the unique
observations (on the columns).
rows_to_alts : 2D scipy sparse array.
All elements should be zeros and ones. Should have one row per
observation per available alternative and one column per possible
alternative. This matrix maps the rows of the design matrix to the
possible alternatives for this dataset.
rows_to_mixers : 2D scipy sparse array.
All elements should be zeros and ones. Will map the rows of the design
matrix to the particular units that the mixing is being performed over.
Note that in the case of panel data, this matrix will be different from
`rows_to_obs`.
choice_vector : 1D ndarray.
All elements should be either ones or zeros. There should be one row
per observation per available alternative for the given observation.
Elements denote the alternative which is chosen by the given
observation with a 1 and a zero otherwise.
utility_transform : callable.
Should accept a 1D array of systematic utility values, a 1D array of
alternative IDs, and miscellaneous args and kwargs. Should return a 2D
array whose elements contain the appropriately transformed systematic
utility values, based on the current model being evaluated and the
given draw of the random coefficients. There should be one column for
each draw of the random coefficients. It should have one row per
individual per choice situation per available alternative.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If a float is
passed, then that float determines the ridge penalty for the
optimization. Default = None.
weights : 1D ndarray or None, optional.
Allows for the calculation of weighted log-likelihoods. The weights can
represent various things. In stratified samples, the weights may be
the proportion of the observations in a given strata for a sample in
relation to the proportion of observations in that strata in the
population. In latent class models, the weights may be the probability
of being a particular class. Default == None.
Returns
-------
bhhh_matrix : 2D ndarray of shape `(design_3d.shape[2], design_3d.shape[2])`.
The returned array is the BHHH approximation of the Fisher Information
Matrix. I.e., it is the negative of the sum of the outer product of
each individual's gradient with itself.
|
f7693:m7
|
def calc_percentile_interval(bootstrap_replicates, conf_percentage):
|
<EOL>check_conf_percentage_validity(conf_percentage)<EOL>ensure_samples_is_ndim_ndarray(bootstrap_replicates, ndim=<NUM_LIT:2>)<EOL>alpha = get_alpha_from_conf_percentage(conf_percentage)<EOL>lower_percent = alpha / <NUM_LIT><EOL>upper_percent = <NUM_LIT> - lower_percent<EOL>lower_endpoint = np.percentile(bootstrap_replicates,<EOL>lower_percent,<EOL>interpolation='<STR_LIT>',<EOL>axis=<NUM_LIT:0>)<EOL>upper_endpoint = np.percentile(bootstrap_replicates,<EOL>upper_percent,<EOL>interpolation='<STR_LIT>',<EOL>axis=<NUM_LIT:0>)<EOL>conf_intervals = combine_conf_endpoints(lower_endpoint, upper_endpoint)<EOL>return conf_intervals<EOL>
|
Calculate bootstrap confidence intervals based on raw percentiles of the
bootstrap distribution of samples.
Parameters
----------
bootstrap_replicates : 2D ndarray.
Each row should correspond to a different bootstrap parameter sample.
Each column should correspond to an element of the parameter vector
being estimated.
conf_percentage : scalar in the interval (0.0, 100.0).
Denotes the confidence-level of the returned confidence interval. For
instance, to calculate a 95% confidence interval, pass `95`.
Returns
-------
conf_intervals : 2D ndarray.
The shape of the returned array will be `(2, bootstrap_replicates.shape[1])`. The
first row will correspond to the lower value in the confidence
interval. The second row will correspond to the upper value in the
confidence interval. There will be one column for each element of the
parameter vector being estimated.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 12.5 and Section 13.3. See Equation 13.3.
Notes
-----
This function differs slightly from the actual percentile bootstrap
procedure described in Efron and Tibshirani (1994). To ensure that the
returned endpoints of one's bootstrap confidence intervals are actual
values observed in the bootstrap distribution, both Efron and
Tibshirani's procedure and this function produce more conservative
confidence intervals. However, this function uses a simpler (and in
some cases less conservative) correction than that of Efron and
Tibshirani.
|
f7694:m0
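A numpy-only sketch of the percentile interval above. The replicates are random stand-ins, and the body's masked `interpolation` literal is not reproduced here; note that the function above additionally pins the endpoints to values observed in the bootstrap distribution, which plain `np.percentile` with its default linear interpolation does not.

import numpy as np

np.random.seed(0)
boot = np.random.randn(1000, 2)          # stand-in bootstrap replicates
alpha = 100.0 - 95.0                     # alpha for a 95% interval
lower = np.percentile(boot, alpha / 2.0, axis=0)
upper = np.percentile(boot, 100.0 - alpha / 2.0, axis=0)
conf = np.stack([lower, upper])          # (2, num_params): lower row, upper row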
|
def calc_bias_correction_bca(bootstrap_replicates, mle_estimate):
|
numerator = (bootstrap_replicates < mle_estimate[None, :]).sum(axis=<NUM_LIT:0>)<EOL>denominator = float(bootstrap_replicates.shape[<NUM_LIT:0>])<EOL>bias_correction = norm.ppf(numerator / denominator)<EOL>return bias_correction<EOL>
|
Calculate the bias correction for the Bias Corrected and Accelerated (BCa)
bootstrap confidence intervals.
Parameters
----------
bootstrap_replicates : 2D ndarray.
Each row should correspond to a different bootstrap parameter sample.
Each column should correspond to an element of the parameter vector
being estimated.
mle_estimate : 1D ndarray.
The original dataset's maximum likelihood point estimate. Should have
one element for each component of the estimated parameter vector.
Returns
-------
bias_correction : 1D ndarray.
There will be one element for each element in `mle_estimate`. Elements
denote the bias correction factors for each component of the parameter
vector.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 14.3, Equation 14.14.
|
f7694:m1
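In the notation of Efron and Tibshirani's Equation 14.14 (cited in the docstring above), the body computes, per parameter,

\hat{z}_0 = \Phi^{-1}\!\left(\frac{\#\{\hat{\theta}^{*}_{b} < \hat{\theta}\}}{B}\right)

where B is the number of bootstrap replicates, \hat{\theta}^{*}_{b} is the b-th replicate, \hat{\theta} is the MLE, and \Phi^{-1} is the standard normal quantile function (`norm.ppf`).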
|
def calc_acceleration_bca(jackknife_replicates):
|
<EOL>jackknife_mean = jackknife_replicates.mean(axis=<NUM_LIT:0>)[None, :]<EOL>differences = jackknife_mean - jackknife_replicates<EOL>numerator = (differences**<NUM_LIT:3>).sum(axis=<NUM_LIT:0>)<EOL>denominator = <NUM_LIT:6> * ((differences**<NUM_LIT:2>).sum(axis=<NUM_LIT:0>))**<NUM_LIT><EOL>zero_denom = np.where(denominator == <NUM_LIT:0>)<EOL>denominator[zero_denom] = MIN_COMP_VALUE<EOL>acceleration = numerator / denominator<EOL>return acceleration<EOL>
|
Calculate the acceleration constant for the Bias Corrected and Accelerated
(BCa) bootstrap confidence intervals.
Parameters
----------
jackknife_replicates : 2D ndarray.
Each row should correspond to a different jackknife parameter sample,
formed by deleting a particular observation and then re-estimating the
desired model. Each column should correspond to an element of the
parameter vector being estimated.
Returns
-------
acceleration : 1D ndarray.
There will be one element for each element in `mle_estimate`. Elements
denote the acceleration factors for each component of the parameter
vector.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 14.3, Equation 14.15.
|
f7694:m2
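The quantity computed above is the acceleration constant of Efron and Tibshirani's Equation 14.15 (cited in the docstring), applied per parameter:

\hat{a} = \frac{\sum_{i=1}^{n}\left(\hat{\theta}_{(\cdot)} - \hat{\theta}_{(i)}\right)^{3}}{6\left[\sum_{i=1}^{n}\left(\hat{\theta}_{(\cdot)} - \hat{\theta}_{(i)}\right)^{2}\right]^{3/2}}

where \hat{\theta}_{(i)} is the i-th jackknife replicate and \hat{\theta}_{(\cdot)} is the jackknife mean; the code guards against a zero denominator by substituting a minimum comparison value.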
|
def calc_lower_bca_percentile(alpha_percent, bias_correction, acceleration):
|
z_lower = norm.ppf(alpha_percent / (<NUM_LIT> * <NUM_LIT:2>))<EOL>numerator = bias_correction + z_lower<EOL>denominator = <NUM_LIT:1> - acceleration * numerator<EOL>lower_percentile =norm.cdf(bias_correction + numerator / denominator) * <NUM_LIT:100><EOL>return lower_percentile<EOL>
|
Calculate the lower values of the Bias Corrected and Accelerated (BCa)
bootstrap confidence intervals.
Parameters
----------
alpha_percent : float in (0.0, 100.0).
`100 - confidence_percentage`, where `confidence_percentage` is the
confidence level (such as 95%), expressed as a percent.
bias_correction : 1D ndarray.
There will be one element for each element in `mle_estimate`. Elements
denote the bias correction factors for each component of the parameter
vector.
acceleration : 1D ndarray.
There will be one element for each element in `mle_estimate`. Elements
denote the acceleration factors for each component of the parameter
vector.
Returns
-------
lower_percentile : 1D ndarray.
There will be one element for each element in `mle_estimate`. Elements
denote the smaller values in the confidence interval for each component
of the parameter vector.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 14.3, Equation 14.10.
Notes
-----
The `alpha` used in this function is different from the `alpha` used in
Efron and Tibshirani (1994). The `alpha` used in this function must be
converted to a decimal (by dividing by 100) and then divided by 2 (to
account for the equal-tailed nature of the confidence interval) in order to
be made equivalent to the `alpha` in Efron and Tibshirani (1994).
|
f7694:m3
|
def calc_upper_bca_percentile(alpha_percent, bias_correction, acceleration):
|
z_upper = norm.ppf(<NUM_LIT:1> - alpha_percent / (<NUM_LIT> * <NUM_LIT:2>))<EOL>numerator = bias_correction + z_upper<EOL>denominator = <NUM_LIT:1> - acceleration * numerator<EOL>upper_percentile =norm.cdf(bias_correction + numerator / denominator) * <NUM_LIT:100><EOL>return upper_percentile<EOL>
|
Calculate the upper values of the Bias Corrected and Accelerated (BCa)
bootstrap confidence intervals.
Parameters
----------
alpha_percent : float in (0.0, 100.0).
`100 - confidence_percentage`, where `confidence_percentage` is the
confidence level (such as 95%), expressed as a percent.
bias_correction : 1D ndarray.
There will be one element for each element in `mle_estimate`. Elements
denote the bias correction factors for each component of the parameter
vector.
acceleration : 1D ndarray.
There will be one element for each element in `mle_estimate`. Elements
denote the acceleration factors for each component of the parameter
vector.
Returns
-------
upper_percentile : 1D ndarray.
There will be one element for each element in `mle_estimate`. Elements
denote the larger values in the confidence interval for each component
of the parameter vector.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 14.3, Equation 14.10.
Notes
-----
The `alpha` used in this function is different from the `alpha` used in
Efron and Tibshirani (1994). The `alpha` used in this function must be
converted to a decimal (by dividing by 100) and then divided by 2 (to
account for the equal-tailed nature of the confidence interval) in order to
be made equivalent to the `alpha` in Efron and Tibshirani (1994).
|
f7694:m4
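Together, the two helpers above implement Efron and Tibshirani's Equation 14.10 (cited in both docstrings). As a sketch, with z^{(q)} = \Phi^{-1}(q) and \alpha the decimal, one-tail version of `alpha_percent` (e.g. 0.025 for a 95% interval):

\alpha_{1} = \Phi\!\left(\hat{z}_0 + \frac{\hat{z}_0 + z^{(\alpha)}}{1 - \hat{a}\,(\hat{z}_0 + z^{(\alpha)})}\right), \qquad
\alpha_{2} = \Phi\!\left(\hat{z}_0 + \frac{\hat{z}_0 + z^{(1-\alpha)}}{1 - \hat{a}\,(\hat{z}_0 + z^{(1-\alpha)})}\right)

The functions then multiply \alpha_{1} and \alpha_{2} by 100 so the results can be passed directly to `np.percentile`.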
|
def calc_bca_interval(bootstrap_replicates,<EOL>jackknife_replicates,<EOL>mle_params,<EOL>conf_percentage):
|
<EOL>check_conf_percentage_validity(conf_percentage)<EOL>ensure_samples_is_ndim_ndarray(bootstrap_replicates, ndim=<NUM_LIT:2>)<EOL>ensure_samples_is_ndim_ndarray(jackknife_replicates,<EOL>name='<STR_LIT>', ndim=<NUM_LIT:2>)<EOL>alpha_percent = get_alpha_from_conf_percentage(conf_percentage)<EOL>bias_correction =calc_bias_correction_bca(bootstrap_replicates, mle_params)<EOL>acceleration = calc_acceleration_bca(jackknife_replicates)<EOL>lower_percents =calc_lower_bca_percentile(alpha_percent, bias_correction, acceleration)<EOL>upper_percents =calc_upper_bca_percentile(alpha_percent, bias_correction, acceleration)<EOL>lower_endpoints = np.diag(np.percentile(bootstrap_replicates,<EOL>lower_percents,<EOL>interpolation='<STR_LIT>',<EOL>axis=<NUM_LIT:0>))<EOL>upper_endpoints = np.diag(np.percentile(bootstrap_replicates,<EOL>upper_percents,<EOL>interpolation='<STR_LIT>',<EOL>axis=<NUM_LIT:0>))<EOL>conf_intervals = combine_conf_endpoints(lower_endpoints, upper_endpoints)<EOL>return conf_intervals<EOL>
|
Calculate 'bias-corrected and accelerated' bootstrap confidence intervals.
Parameters
----------
bootstrap_replicates : 2D ndarray.
Each row should correspond to a different bootstrap parameter sample.
Each column should correspond to an element of the parameter vector
being estimated.
jackknife_replicates : 2D ndarray.
Each row should correspond to a different jackknife parameter sample,
formed by deleting a particular observation and then re-estimating the
desired model. Each column should correspond to an element of the
parameter vector being estimated.
mle_params : 1D ndarray.
The original dataset's maximum likelihood point estimate. Should have
the same number of elements as `bootstrap_replicates.shape[1]`.
conf_percentage : scalar in the interval (0.0, 100.0).
Denotes the confidence-level of the returned confidence interval. For
instance, to calculate a 95% confidence interval, pass `95`.
Returns
-------
conf_intervals : 2D ndarray.
The shape of the returned array will be `(2, bootstrap_replicates.shape[1])`. The
first row will correspond to the lower value in the confidence
interval. The second row will correspond to the upper value in the
confidence interval. There will be one column for each element of the
parameter vector being estimated.
References
----------
Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap.
CRC press, 1994. Section 14.3.
DiCiccio, Thomas J., and Bradley Efron. "Bootstrap confidence intervals."
Statistical science (1996): 189-212.
|
f7694:m5
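A sketch of the full BCa pipeline, assuming the functions above are importable and that real bootstrap and jackknife replicates have already been computed (random stand-ins are used here):

import numpy as np

np.random.seed(0)
mle = np.array([0.5, -1.2])
boot = mle + 0.1 * np.random.randn(2000, 2)   # stand-in bootstrap replicates
jack = mle + 0.05 * np.random.randn(50, 2)    # stand-in jackknife replicates
conf = calc_bca_interval(boot, jack, mle, conf_percentage=95)
# conf[0] holds the lower endpoints, conf[1] the upper endpoints,
# with one column per estimated parameter.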
|
def identify_degenerate_nests(nest_spec):
|
degenerate_positions = []<EOL>for pos, key in enumerate(nest_spec):<EOL><INDENT>if len(nest_spec[key]) == <NUM_LIT:1>:<EOL><INDENT>degenerate_positions.append(pos)<EOL><DEDENT><DEDENT>return degenerate_positions<EOL>
|
Identify the nests within nest_spec that are degenerate, i.e. those nests
with only a single alternative within the nest.
Parameters
----------
nest_spec : OrderedDict.
Keys are strings that define the name of the nests. Values are lists
of alternative ids, denoting which alternatives belong to which nests.
Each alternative id must only be associated with a single nest!
Returns
-------
list.
Will contain the positions in the list of keys from `nest_spec` that
are degenerate.
|
f7696:m0
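A tiny illustration of the degeneracy check above (nest names and alternative ids are hypothetical):

from collections import OrderedDict

nest_spec = OrderedDict([('motorized', [1, 2, 3]),
                         ('non-motorized', [4])])   # a single-alternative nest
assert identify_degenerate_nests(nest_spec) == [1]  # position of 'non-motorized'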
|
def split_param_vec(all_params, rows_to_nests, return_all_types=False):
|
<EOL>num_nests = rows_to_nests.shape[<NUM_LIT:1>]<EOL>orig_nest_coefs = all_params[:num_nests]<EOL>index_coefs = all_params[num_nests:]<EOL>if return_all_types:<EOL><INDENT>return orig_nest_coefs, None, None, index_coefs<EOL><DEDENT>else:<EOL><INDENT>return orig_nest_coefs, index_coefs<EOL><DEDENT>
|
Parameters
----------
all_params : 1D ndarray.
Should contain all of the parameters being estimated (i.e. all the
nest coefficients and all of the index coefficients). All elements
should be ints, floats, or longs.
rows_to_nests : 2D scipy sparse array.
There should be one row per observation per available alternative and
one column per nest. This matrix maps the rows of the design matrix to
the unique nests (on the columns).
return_all_types : bool, optional.
Determines whether or not a tuple of 4 elements will be returned (with
one element for the nest, shape, intercept, and index parameters for
this model). If False, a tuple of 2 elements will be returned, as
described below. The tuple will contain the nest parameters and the
index coefficients.
Returns
-------
orig_nest_coefs : 1D ndarray.
The nest coefficients being used for estimation. Note that these values
are the logit of the inverse of the scale parameters for each lower
level nest.
index_coefs : 1D ndarray.
The coefficients of the index being used for this nested logit model.
Note
----
If `return_all_types == True` then the function will return a tuple of four
objects. In order, these objects will either be None or the arrays
representing the arrays corresponding to the nest, shape, intercept, and
index parameters.
|
f7696:m1
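A small numeric sketch of the split above (data are made up; only the column count of `rows_to_nests` matters for the split):

import numpy as np
from scipy.sparse import csr_matrix

# 4 long-format rows mapped to 2 nests, plus 3 index coefficients.
rows_to_nests = csr_matrix(np.array([[1, 0], [1, 0], [0, 1], [0, 1]]))
all_params = np.array([0.9, 0.4, 1.0, -2.0, 0.5])
nest_coefs, index_coefs = split_param_vec(all_params, rows_to_nests)
assert np.allclose(nest_coefs, [0.9, 0.4])        # first num_nests entries
assert np.allclose(index_coefs, [1.0, -2.0, 0.5])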
|
def check_length_of_initial_values(self, init_values):
|
<EOL>num_nests = self.rows_to_nests.shape[<NUM_LIT:1>]<EOL>num_index_coefs = self.design.shape[<NUM_LIT:1>]<EOL>assumed_param_dimensions = num_index_coefs + num_nests<EOL>if init_values.shape[<NUM_LIT:0>] != assumed_param_dimensions:<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise ValueError(msg +<EOL>msg_1.format(assumed_param_dimensions) +<EOL>msg_2.format(init_values.shape[<NUM_LIT:0>]))<EOL><DEDENT>return None<EOL>
|
Ensures that the initial values are of the correct length.
|
f7696:c0:m1
|
def convenience_split_params(self, params, return_all_types=False):
|
return split_param_vec(params,<EOL>self.rows_to_nests,<EOL>return_all_types=return_all_types)<EOL>
|
Splits parameter vector into nest parameters and index parameters.
Parameters
----------
all_params : 1D ndarray.
Should contain all of the parameters being estimated (i.e. all the
nest coefficients and all of the index coefficients). All elements
should be ints, floats, or longs.
rows_to_nests : 2D scipy sparse array.
There should be one row per observation per available alternative
and one column per nest. This matrix maps the rows of the design
matrix to the unique nests (on the columns).
return_all_types : bool, optional.
Determines whether or not a tuple of 4 elements will be returned
(with one element for the nest, shape, intercept, and index
parameters for this model). If False, a tuple of 2 elements will
be returned, as described below. The tuple will contain the nest
parameters and the index coefficients.
Returns
-------
orig_nest_coefs : 1D ndarray.
The nest coefficients being used for estimation. Note that these
values are the logit of the inverse of the scale parameters for
each lower level nest.
index_coefs : 1D ndarray.
The index coefficients of this nested logit model.
Note
----
If `return_all_types == True` then the function will return a tuple of
four objects. In order, these objects will either be None or the arrays
representing the arrays corresponding to the nest, shape, intercept,
and index parameters.
|
f7696:c0:m2
|
def convenience_calc_probs(self, params):
|
orig_nest_coefs, betas = self.convenience_split_params(params)<EOL>natural_nest_coefs = nc.naturalize_nest_coefs(orig_nest_coefs)<EOL>args = [natural_nest_coefs,<EOL>betas,<EOL>self.design,<EOL>self.rows_to_obs,<EOL>self.rows_to_nests]<EOL>kwargs = {"<STR_LIT>": self.chosen_row_to_obs,<EOL>"<STR_LIT>": "<STR_LIT>"}<EOL>probability_results = general_calc_probabilities(*args, **kwargs)<EOL>return probability_results<EOL>
|
Calculates the probabilities of the chosen alternative, and the long
format probabilities for this model and dataset.
|
f7696:c0:m3
|
def convenience_calc_log_likelihood(self, params):
|
orig_nest_coefs, betas = self.convenience_split_params(params)<EOL>natural_nest_coefs = nc.naturalize_nest_coefs(orig_nest_coefs)<EOL>args = [natural_nest_coefs,<EOL>betas,<EOL>self.design,<EOL>self.rows_to_obs,<EOL>self.rows_to_nests,<EOL>self.choice_vector]<EOL>kwargs = {"<STR_LIT>": self.ridge, "<STR_LIT>": self.weights}<EOL>log_likelihood = general_log_likelihood(*args, **kwargs)<EOL>return log_likelihood<EOL>
|
Calculates the log-likelihood for this model and dataset.
|
f7696:c0:m4
|
def convenience_calc_gradient(self, params):
|
orig_nest_coefs, betas = self.convenience_split_params(params)<EOL>args = [orig_nest_coefs,<EOL>betas,<EOL>self.design,<EOL>self.choice_vector,<EOL>self.rows_to_obs,<EOL>self.rows_to_nests]<EOL>return general_gradient(*args, ridge=self.ridge, weights=self.weights)<EOL>
|
Calculates the gradient of the log-likelihood for this model / dataset.
|
f7696:c0:m5
|
def convenience_calc_hessian(self, params):
|
orig_nest_coefs, betas = self.convenience_split_params(params)<EOL>args = [orig_nest_coefs,<EOL>betas,<EOL>self.design,<EOL>self.choice_vector,<EOL>self.rows_to_obs,<EOL>self.rows_to_nests]<EOL>approx_hess =bhhh_approx(*args, ridge=self.ridge, weights=self.weights)<EOL>if self.constrained_pos is not None:<EOL><INDENT>for idx_val in self.constrained_pos:<EOL><INDENT>approx_hess[idx_val, :] = <NUM_LIT:0><EOL>approx_hess[:, idx_val] = <NUM_LIT:0><EOL>approx_hess[idx_val, idx_val] = -<NUM_LIT:1><EOL><DEDENT><DEDENT>return approx_hess<EOL>
|
Calculates the hessian of the log-likelihood for this model / dataset.
Note that this function name is INCORRECT with regard to the actual
actions performed. The Nested Logit model uses the BHHH approximation
to the Fisher Information Matrix in place of the actual hessian.
|
f7696:c0:m6
|
def convenience_calc_fisher_approx(self, params):
|
placeholder_bhhh = np.diag(-<NUM_LIT:1> * np.ones(params.shape[<NUM_LIT:0>]))<EOL>return placeholder_bhhh<EOL>
|
Calculates the BHHH approximation of the Fisher Information Matrix for
this model / dataset. Note that this function name is INCORRECT with
regard to the actual actions performed. The Nested Logit model uses a
placeholder for the BHHH approximation of the Fisher Information Matrix
because the BHHH approximation is already being used to approximate the
hessian.
This placeholder allows calculation of a value for the 'robust'
standard errors, even though such a value is not useful, since it is
not correct.
|
f7696:c0:m7
|
def fit_mle(self,<EOL>init_vals,<EOL>constrained_pos=None,<EOL>print_res=True,<EOL>method="<STR_LIT>",<EOL>loss_tol=<NUM_LIT>,<EOL>gradient_tol=<NUM_LIT>,<EOL>maxiter=<NUM_LIT:1000>,<EOL>ridge=None,<EOL>just_point=False,<EOL>**kwargs):
|
<EOL>kwargs_to_be_ignored = ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>"]<EOL>if any([x in kwargs for x in kwargs_to_be_ignored]):<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise ValueError(msg.format(kwargs_to_be_ignored) + msg_2)<EOL><DEDENT>self.optimization_method = method<EOL>self.ridge_param = ridge<EOL>if ridge is not None:<EOL><INDENT>warnings.warn(_ridge_warning_msg)<EOL><DEDENT>mapping_res = self.get_mappings_for_fit()<EOL>fixed_params = identify_degenerate_nests(self.nest_spec)<EOL>if constrained_pos is not None:<EOL><INDENT>fixed_params.extend(constrained_pos)<EOL><DEDENT>final_constrained_pos = sorted(list(set(fixed_params)))<EOL>zero_vector = np.zeros(init_vals.shape)<EOL>estimator_args = [self,<EOL>mapping_res,<EOL>ridge,<EOL>zero_vector,<EOL>split_param_vec]<EOL>estimator_kwargs = {"<STR_LIT>": final_constrained_pos}<EOL>nested_estimator = NestedEstimator(*estimator_args,<EOL>**estimator_kwargs)<EOL>nested_estimator.check_length_of_initial_values(init_vals)<EOL>estimation_res = estimate(init_vals,<EOL>nested_estimator,<EOL>method,<EOL>loss_tol,<EOL>gradient_tol,<EOL>maxiter,<EOL>print_res,<EOL>use_hessian=True,<EOL>just_point=just_point)<EOL>if not just_point:<EOL><INDENT>self.store_fit_results(estimation_res)<EOL>return None<EOL><DEDENT>else:<EOL><INDENT>return estimation_res<EOL><DEDENT>
|
Parameters
----------
init_vals : 1D ndarray.
Should contain the initial values to start the optimization
process with. There should be one value for each nest parameter
and utility coefficient. Nest parameters not being estimated
should still be included. Handle these parameters using the
`constrained_pos` kwarg.
constrained_pos : list, or None, optional.
Denotes the positions of the array of estimated parameters that are
not to change from their initial values. If a list is passed, the
elements are to be integers where no such integer is greater than
`init_vals.size`.
print_res : bool, optional.
Determines whether the timing and initial and final log likelihood
results will be printed as they are determined.
method : str, optional.
Should be a valid string which can be passed to
`scipy.optimize.minimize`. Determines the optimization algorithm
which is used for this problem.
loss_tol : float, optional.
Determines the tolerance on the difference in objective function
values from one iteration to the next which is needed to determine
convergence. Default `== 1e-06`.
gradient_tol : float, optional.
Determines the tolerance on the difference in gradient values from
one iteration to the next which is needed to determine convergence.
Default `== 1e-06`.
maxiter : int, optional.
Denotes the maximum number of iterations that the chosen optimization
algorithm is permitted to perform. Default `== 1000`.
ridge : int, float, long, or None, optional.
Determines whether ridge regression is performed. If a scalar is
passed, then that scalar determines the ridge penalty for the
optimization. Default `== None`.
just_point : bool, optional.
Determines whether (True) or not (False) calculations that are non-
critical for obtaining the maximum likelihood point estimate will
be performed. If True, this function will return the results
dictionary from scipy.optimize. Default == False.
Returns
-------
None. Estimation results are saved to the model instance.
|
f7696:c1:m1
|
def split_param_vec(param_vec, rows_to_alts, design, return_all_types=False):
|
<EOL>num_shapes = rows_to_alts.shape[<NUM_LIT:1>] - <NUM_LIT:1><EOL>num_index_coefs = design.shape[<NUM_LIT:1>]<EOL>shapes = param_vec[:num_shapes]<EOL>betas = param_vec[-<NUM_LIT:1> * num_index_coefs:]<EOL>remaining_idx = param_vec.shape[<NUM_LIT:0>] - (num_shapes + num_index_coefs)<EOL>if remaining_idx > <NUM_LIT:0>:<EOL><INDENT>intercepts = param_vec[num_shapes: num_shapes + remaining_idx]<EOL><DEDENT>else:<EOL><INDENT>intercepts = None<EOL><DEDENT>if return_all_types:<EOL><INDENT>return None, shapes, intercepts, betas<EOL><DEDENT>else:<EOL><INDENT>return shapes, intercepts, betas<EOL><DEDENT>
|
Parameters
----------
param_vec : 1D ndarray.
Should have as many elements as there are parameters being estimated.
rows_to_alts : 2D scipy sparse matrix.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset.
design : 2D ndarray.
There should be one row per observation per available alternative.
There should be one column per utility coefficient being estimated. All
elements should be ints, floats, or longs.
return_all_types : bool, optional.
Determines whether or not a tuple of 4 elements will be returned (with
one element for the nest, shape, intercept, and index parameters for
this model). If False, a tuple of 3 elements will be returned, as
described below.
Returns
-------
tuple of three 1D ndarrays.
The first element will be an array of the shape parameters for this
model. The second element will either be an array of the "outside"
intercept parameters for this model or None. The third element will be
an array of the index coefficients for this model.
Note
----
If `return_all_types == True` then the function will return a tuple of four
objects. In order, these objects will either be None or the arrays
representing the arrays corresponding to the nest, shape, intercept, and
index parameters.
|
f7697:m0
|
def _convert_eta_to_c(eta, ref_position):
|
<EOL>exp_eta = np.exp(eta)<EOL>exp_eta[np.isposinf(exp_eta)] = max_comp_value<EOL>exp_eta[exp_eta == <NUM_LIT:0>] = min_comp_value<EOL>denom = exp_eta.sum(axis=<NUM_LIT:0>) + <NUM_LIT:1><EOL>replace_list = list(range(eta.shape[<NUM_LIT:0>] + <NUM_LIT:1>))<EOL>replace_list.remove(ref_position)<EOL>if len(eta.shape) > <NUM_LIT:1> and eta.shape[<NUM_LIT:1>] > <NUM_LIT:1>:<EOL><INDENT>c_vector = np.zeros((eta.shape[<NUM_LIT:0>] + <NUM_LIT:1>,<EOL>eta.shape[<NUM_LIT:1>]))<EOL>c_vector[replace_list, :] = exp_eta / denom<EOL>c_vector[ref_position, :] = <NUM_LIT:1.0> / denom<EOL><DEDENT>else:<EOL><INDENT>c_vector = np.zeros(eta.shape[<NUM_LIT:0>] + <NUM_LIT:1>)<EOL>c_vector[replace_list] = exp_eta / denom<EOL>c_vector[ref_position] = <NUM_LIT:1.0> / denom<EOL><DEDENT>return c_vector<EOL>
|
Parameters
----------
eta : 1D or 2D ndarray.
The elements of the array should be this model's 'transformed' shape
parameters, i.e. the natural log of (the corresponding shape parameter
divided by the reference shape parameter). This array's elements will
be real valued. If `eta` is 2D, then its shape should be
(num_estimated_shapes, num_parameter_samples).
ref_position : int.
Specifies the position in the resulting array of shape ==
`(eta.shape[0] + 1,)` that should be equal to 1 - the sum of the other
elements in the resulting array.
Returns
-------
c_vector : 1D or 2D ndarray based on `eta`.
If `eta` is 1D then `c_vector` should have shape
`(eta.shape[0] + 1, )`. If `eta` is 2D then `c_vector` should have
shape `(eta.shape[0] + 1, eta.shape[1])`. The returned array will
contain the 'natural' shape parameters that correspond to `eta`.
|
f7697:m1
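The mapping implemented above is a logit (softmax) transform with the reference category's transformed parameter pinned at zero. As a sketch, writing ref for `ref_position`:

c_{j} = \frac{e^{\eta_{j}}}{1 + \sum_{k \neq \mathrm{ref}} e^{\eta_{k}}} \quad (j \neq \mathrm{ref}), \qquad
c_{\mathrm{ref}} = \frac{1}{1 + \sum_{k \neq \mathrm{ref}} e^{\eta_{k}}}

All natural shape parameters are therefore strictly positive and sum to one, which is why the element at `ref_position` equals 1 minus the sum of the other elements.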
|
def _calc_deriv_c_with_respect_to_eta(natural_shapes,<EOL>ref_position,<EOL>output_array=None):
|
<EOL>columns_to_be_kept = range(natural_shapes.shape[<NUM_LIT:0>])<EOL>columns_to_be_kept.remove(ref_position)<EOL>output_array[:, :] = (np.diag(natural_shapes) -<EOL>np.outer(natural_shapes,<EOL>natural_shapes))[:, columns_to_be_kept]<EOL>return output_array<EOL>
|
Parameters
----------
natural_shapes : 1D ndarray.
Should have one element per available alternative in the dataset whose
choice situations are being modeled. Should have at least
`ref_position` elements in it.
ref_position : int.
Specifies the position in the array of natural shape parameters that
should be equal to 1 - the sum of the other elements. Specifies the
alternative in the ordered array of unique alternatives that is not
having its shape parameter estimated (in order to ensure
identifiability).
output_array : 2D ndarray.
This array is to have its data overwritten with the correct derivatives
of the natural shape parameters with respect to transformed shape
parameters. Should have shape ==
`(natural_shapes.shape[0], natural_shapes.shape[0] - 1)`.
Returns
-------
output_array : 2D ndarray.
Has shape == (natural_shapes.shape[0], natural_shapes.shape[0] - 1).
Will contain the derivative of the shape parameters, with
respect to the underlying 'transformed' shape parameters.
|
f7697:m2
|
def _asym_utility_transform(systematic_utilities,<EOL>alt_IDs,<EOL>rows_to_alts,<EOL>eta,<EOL>intercept_params,<EOL>shape_ref_position=None,<EOL>intercept_ref_pos=None,<EOL>*args, **kwargs):
|
<EOL>natural_shape_params = _convert_eta_to_c(eta, shape_ref_position)<EOL>long_shapes = rows_to_alts.dot(natural_shape_params)<EOL>num_alts = rows_to_alts.shape[<NUM_LIT:1>]<EOL>log_long_shapes = np.log(long_shapes)<EOL>log_long_shapes[np.isneginf(log_long_shapes)] = -<NUM_LIT:1> * max_comp_value<EOL>log_1_sub_long_shapes = np.log((<NUM_LIT:1> - long_shapes) / float(num_alts - <NUM_LIT:1>))<EOL>small_idx = np.isneginf(log_1_sub_long_shapes)<EOL>log_1_sub_long_shapes[small_idx] = -<NUM_LIT:1> * max_comp_value<EOL>multiplier = ((systematic_utilities >= <NUM_LIT:0>) * log_long_shapes +<EOL>(systematic_utilities < <NUM_LIT:0>) * log_1_sub_long_shapes)<EOL>transformed_utilities = log_long_shapes - systematic_utilities * multiplier<EOL>weird_case = np.isposinf(systematic_utilities) * (long_shapes == <NUM_LIT:1>)<EOL>transformed_utilities[weird_case] = <NUM_LIT:0><EOL>if intercept_params is not None and intercept_ref_pos is not None:<EOL><INDENT>needed_idxs = range(rows_to_alts.shape[<NUM_LIT:1>])<EOL>needed_idxs.remove(intercept_ref_pos)<EOL>if len(intercept_params.shape) > <NUM_LIT:1> and intercept_params.shape[<NUM_LIT:1>] > <NUM_LIT:1>:<EOL><INDENT>all_intercepts = np.zeros((rows_to_alts.shape[<NUM_LIT:1>],<EOL>intercept_params.shape[<NUM_LIT:1>]))<EOL>all_intercepts[needed_idxs, :] = intercept_params<EOL><DEDENT>else:<EOL><INDENT>all_intercepts = np.zeros(rows_to_alts.shape[<NUM_LIT:1>])<EOL>all_intercepts[needed_idxs] = intercept_params<EOL><DEDENT>transformed_utilities += rows_to_alts.dot(all_intercepts)<EOL><DEDENT>transformed_utilities[np.isposinf(transformed_utilities)] = max_comp_value<EOL>transformed_utilities[np.isneginf(transformed_utilities)] = -max_comp_value<EOL>if len(transformed_utilities.shape) == <NUM_LIT:1>:<EOL><INDENT>transformed_utilities = transformed_utilities[:, np.newaxis]<EOL><DEDENT>return transformed_utilities<EOL>
|
Parameters
----------
systematic_utilities : 1D ndarray.
Contains the systematic utilities for each available alternative
for each observation. All elements should be ints, floats, or longs.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D ndarray.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset.
eta : 1D ndarray.
Each element should be an int, float, or long. There should be one
value per transformed shape parameter. Note that if there are J
possible alternatives in the dataset, then there should be J - 1
elements in `eta`.
intercept_params : 1D ndarray or None.
If an array, each element should be an int, float, or long. For
identifiability, there should be J - 1 elements, where J is the total
number of observed alternatives for this dataset.
shape_ref_position : int.
Specifies the position in the array of natural shape parameters that
should be equal to 1 - the sum of the other elements. Specifies the
alternative in the ordered array of unique alternatives that is not
having its shape parameter estimated (to ensure identifiability).
intercept_ref_pos : int, or None, optional.
Specifies the index of the alternative, in the ordered array of unique
alternatives, that is not having its intercept parameter estimated (in
order to ensure identifiability). Should only be None if
intercept_params is None. Default == None.
Returns
-------
transformed_utilities : 2D ndarray.
Should have shape `(systematic_utilities.shape[0], 1)`. The returned
array contains the values of the transformed index for this model.
|
f7697:m3
|
def _asym_transform_deriv_v(systematic_utilities,<EOL>alt_IDs,<EOL>rows_to_alts,<EOL>eta,<EOL>ref_position=None,<EOL>output_array=None,<EOL>*args, **kwargs):
|
<EOL>natural_shape_params = _convert_eta_to_c(eta, ref_position)<EOL>long_shapes = rows_to_alts.dot(natural_shape_params)<EOL>num_alts = rows_to_alts.shape[<NUM_LIT:1>]<EOL>log_long_shapes = np.log(long_shapes)<EOL>log_long_shapes[np.isneginf(log_long_shapes)] = -<NUM_LIT:1> * max_comp_value<EOL>log_1_sub_long_shapes = np.log((<NUM_LIT:1> - long_shapes) /<EOL>(num_alts - <NUM_LIT:1>))<EOL>small_idx = np.isneginf(log_1_sub_long_shapes)<EOL>log_1_sub_long_shapes[small_idx] = -<NUM_LIT:1> * max_comp_value<EOL>derivs = -<NUM_LIT:1> * ((systematic_utilities >= <NUM_LIT:0>).astype(int) *<EOL>log_long_shapes +<EOL>(systematic_utilities < <NUM_LIT:0>).astype(int) *<EOL>log_1_sub_long_shapes)<EOL>output_array.data = derivs<EOL>return output_array<EOL>
|
Parameters
----------
systematic_utilities : 1D ndarray.
Contains the systematic utilities for each available alternative
for each observation. All elements should be ints, floats, or longs.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D ndarray.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset.
eta : 1D ndarray.
Each element should be an int, float, or long. There should be one
value per transformed shape parameter. Note that if there are J
possible alternatives in the dataset, then there should be J - 1
elements in `eta`.
ref_position : int.
Specifies the position in the array of natural shape parameters that
should be equal to 1 - the sum of the other elements, i.e. the
alternative in the ordered array of unique alternatives that is not
having its shape parameter estimated (to ensure identifiability).
output_array : 2D scipy sparse matrix.
This matrix's data is to be replaced with the correct derivatives of
the transformation vector with respect to the vector of systematic
utilities.
Returns
-------
output_array : 2D scipy sparse matrix.
Will be a square matrix with `systematic_utilities.shape[0]` rows and
columns. `output_array` specifies the derivative of the transformed
utilities with respect to the index, V.
|
f7697:m4
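Because the transformation is elementwise in V, its Jacobian is diagonal, and the body above exploits this by rewriting only the `.data` of a pre-allocated sparse matrix. A self-contained sketch with placeholder values:

import numpy as np
from scipy.sparse import diags

# Illustrative long-format arrays standing in for the values computed in
# the transformation sketch earlier.
sys_utilities = np.array([1.0, -0.5, 0.25])
long_shapes = np.array([0.4, 0.35, 0.25])
log_shapes = np.log(long_shapes)
log_1_sub_shapes = np.log((1.0 - long_shapes) / 2.0)  # J - 1 = 2

# The derivative with respect to V is diagonal, so only the .data of a
# pre-allocated sparse identity is overwritten on each call.
output_array = diags(np.ones(3), 0, format='csr')
output_array.data = -1 * ((sys_utilities >= 0) * log_shapes +
                          (sys_utilities < 0) * log_1_sub_shapes)
print(output_array.toarray())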
|
def _asym_transform_deriv_shape(systematic_utilities,<EOL>alt_IDs,<EOL>rows_to_alts,<EOL>eta,<EOL>ref_position=None,<EOL>dh_dc_array=None,<EOL>fill_dc_d_eta=None,<EOL>output_array=None,<EOL>*args, **kwargs):
|
<EOL>natural_shape_params = _convert_eta_to_c(eta, ref_position)<EOL>long_shapes = rows_to_alts.dot(natural_shape_params)<EOL>d_lnShape_dShape = <NUM_LIT:1.0> / long_shapes<EOL>d_lnShape_dShape[np.isposinf(d_lnShape_dShape)] = max_comp_value<EOL>d_lnShapeComp_dShape = -<NUM_LIT:1.0> / (<NUM_LIT:1> - long_shapes)<EOL>d_lnShapeComp_dShape[np.isneginf(d_lnShapeComp_dShape)] = -max_comp_value<EOL>deriv_multiplier = ((systematic_utilities >= <NUM_LIT:0>) * d_lnShape_dShape +<EOL>(systematic_utilities < <NUM_LIT:0>) * d_lnShapeComp_dShape)<EOL>dh_dc_values = d_lnShape_dShape - systematic_utilities * deriv_multiplier<EOL>dh_dc_values[np.isinf(dh_dc_values)] = -<NUM_LIT:1> * max_comp_value<EOL>dh_dc_array.data = dh_dc_values<EOL>output_array[:, :] = dh_dc_array.dot(fill_dc_d_eta(natural_shape_params,<EOL>ref_position))<EOL>return output_array<EOL>
|
Parameters
----------
systematic_utilities : 1D ndarray.
Contains the systematic utilities for each available alternative
for each observation. All elements should be ints, floats, or longs.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D ndarray.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset.
eta : 1D ndarray.
Each element should be an int, float, or long. There should be one
value per transformed shape parameter. Note that if there are J
possible alternatives in the dataset, then there should be J - 1
elements in `eta`.
ref_position : int.
Specifies the position in the array of natural shape parameters that
should be equal to 1 - the sum of the other elements, i.e. the
alternative in the ordered array of unique alternatives that is not
having its shape parameter estimated (to ensure identifiability).
dh_dc_array : 2D scipy sparse matrix.
Its data is to be replaced with the correct derivatives of the
transformed index vector with respect to the shape parameter vector.
Should have shape
`(systematic_utilities.shape[0], rows_to_alts.shape[1])`.
fill_dc_d_eta : callable.
Should accept `eta` and `ref_position` and return a 2D numpy array
containing the derivatives of the 'natural' shape parameter vector with
respect to the vector of transformed shape parameters.
output_array : 2D numpy matrix.
This matrix's data is to be replaced with the correct derivatives of
the transformed systematic utilities with respect to the vector of
transformed shape parameters. Should have shape
`(systematic_utilities.shape[0], shape_params.shape[0])`.
Returns
-------
output_array : 2D ndarray.
The shape of the returned array will be
`(systematic_utilities.shape[0], shape_params.shape[0])`. The returned
array specifies the derivative of the transformed utilities with
respect to the shape parameters.
|
f7697:m5
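The final line of the body composes two Jacobians by the chain rule: dh/d_eta = (dh/dc) . (dc/d_eta). A shape-only sketch with placeholder numbers (not the model's actual derivative values):

import numpy as np
from scipy.sparse import csr_matrix

# Chain-rule sketch: (n_rows, J) . (J, J - 1) -> (n_rows, J - 1).
n_rows, num_alts = 3, 3
dh_dc = csr_matrix(np.eye(num_alts))      # one nonzero per long-format row
dh_dc.data = np.array([0.5, -1.2, 0.3])   # refilled on every call
dc_d_eta = np.arange(num_alts * (num_alts - 1), dtype=float)
dc_d_eta = dc_d_eta.reshape((num_alts, num_alts - 1))

output_array = np.empty((n_rows, num_alts - 1))
output_array[:, :] = dh_dc.dot(dc_d_eta)
print(output_array)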
|
def _asym_transform_deriv_alpha(systematic_utilities,<EOL>alt_IDs,<EOL>rows_to_alts,<EOL>intercept_params,<EOL>output_array=None,<EOL>*args, **kwargs):
|
return output_array<EOL>
|
Parameters
----------
systematic_utilities : 1D ndarray.
Contains the systematic utilities for each available alternative
for each observation. All elements should be ints, floats, or longs.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D ndarray.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset.
intercept_params : 1D ndarray or None.
If an array, each element should be an int, float, or long. For
identifiability, there should be J - 1 elements where J is the total
number of observed alternatives for this dataset.
output_array : None or 2D scipy sparse matrix.
If `output_array` is a 2D scipy sparse matrix, then it should contain
the derivative of the vector of transformed utilities with respect to
the intercept parameters outside of the index. This keyword argument
will be returned without alteration.
If there are no intercept parameters outside of the index, then
`output_array` should equal None.
If there are intercept parameters outside of the index, then
`output_array` should be `rows_to_alts` without the column corresponding
to the alternative whose intercept is not being estimated in order to
ensure identifiability.
Returns
-------
output_array.
|
f7697:m6
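The body above is just `return output_array` because the transformed utilities are linear in the outside intercepts: the derivative is constant and can be built once. A sketch of the constant it holds:

import numpy as np
from scipy.sparse import csr_matrix

# The derivative with respect to the outside intercepts is rows_to_alts
# with the reference alternative's column removed, built once up front.
rows_to_alts = csr_matrix(np.eye(3))
intercept_ref_pos = 0
needed = [j for j in range(rows_to_alts.shape[1]) if j != intercept_ref_pos]
dh_d_alpha = rows_to_alts[:, needed]
print(dh_d_alpha.toarray())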
|
def create_calc_dh_dv(estimator):
|
dh_dv = diags(np.ones(estimator.design.shape[<NUM_LIT:0>]), <NUM_LIT:0>, format='<STR_LIT>')<EOL>calc_dh_dv = partial(_asym_transform_deriv_v,<EOL>ref_position=estimator.shape_ref_pos,<EOL>output_array=dh_dv)<EOL>return calc_dh_dv<EOL>
|
Return the function that can be used in the various gradient and hessian
calculations to calculate the derivative of the transformation with respect
to the index.
Parameters
----------
estimator : an instance of the estimation.LogitTypeEstimator class.
Should contain a `design` attribute that is a 2D ndarray representing
the design matrix for this model and dataset.
Returns
-------
Callable.
Will accept a 1D array of systematic utility values, a 1D array of
alternative IDs, (shape parameters if there are any) and miscellaneous
args and kwargs. Should return a 2D array whose elements contain the
derivative of the transformed utility vector with respect to the vector
of systematic utilities. The dimensions of the returned array should
be `(design.shape[0], design.shape[0])`.
|
f7697:m7
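All three `create_calc_dh_d*` factories follow the same pattern: allocate the output container once, then bind it (plus any reference positions) into the derivative function with `functools.partial`, so the optimizer-facing callable only receives the arguments that change between iterations and nothing large is reallocated. A stripped-down sketch of the pattern, with `deriv_wrt_v` and `make_calc_dh_dv` as hypothetical names:

import numpy as np
from functools import partial
from scipy.sparse import diags

# Toy derivative function: overwrite the pre-allocated diagonal in place.
def deriv_wrt_v(sys_utilities, output_array=None):
    output_array.data = np.where(sys_utilities >= 0, 1.0, -1.0)
    return output_array

def make_calc_dh_dv(n_rows):
    # Allocate the sparse buffer once, then bind it into the callable.
    dh_dv = diags(np.ones(n_rows), 0, format='csr')
    return partial(deriv_wrt_v, output_array=dh_dv)

calc_dh_dv = make_calc_dh_dv(3)
print(calc_dh_dv(np.array([1.0, -0.5, 0.25])).toarray())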
|
def create_calc_dh_d_shape(estimator):
|
num_alts = estimator.rows_to_alts.shape[<NUM_LIT:1>]<EOL>pre_dc_d_eta = np.zeros((num_alts, num_alts - <NUM_LIT:1>), dtype=float)<EOL>pre_dh_dc = estimator.rows_to_alts.copy()<EOL>pre_dh_d_eta = np.matrix(np.zeros((estimator.design.shape[<NUM_LIT:0>],<EOL>num_alts - <NUM_LIT:1>), dtype=float))<EOL>easy_calc_dc_d_eta = partial(_calc_deriv_c_with_respect_to_eta,<EOL>output_array=pre_dc_d_eta)<EOL>calc_dh_d_eta = partial(_asym_transform_deriv_shape,<EOL>ref_position=estimator.shape_ref_pos,<EOL>dh_dc_array=pre_dh_dc,<EOL>fill_dc_d_eta=easy_calc_dc_d_eta,<EOL>output_array=pre_dh_d_eta)<EOL>return calc_dh_d_eta<EOL>
|
Return the function that can be used in the various gradient and hessian
calculations to calculate the derivative of the transformation with respect
to the shape parameters.
Parameters
----------
estimator : an instance of the estimation.LogitTypeEstimator class.
Should contain a `rows_to_alts` attribute that is a 2D scipy sparse
matrix that maps the rows of the `design` matrix to the alternatives
available in this dataset.
Returns
-------
Callable.
Will accept a 1D array of systematic utility values, a 1D array of
alternative IDs, (shape parameters if there are any) and miscellaneous
args and kwargs. Should return a 2D array whose elements contain the
derivative of the transformed utility vector with respect to the vector
of shape parameters. The dimensions of the returned array should
be `(design.shape[0], num_alternatives)`.
|
f7697:m8
|
def create_calc_dh_d_alpha(estimator):
|
if estimator.intercept_ref_pos is not None:<EOL><INDENT>needed_idxs = range(estimator.rows_to_alts.shape[<NUM_LIT:1>])<EOL>needed_idxs.remove(estimator.intercept_ref_pos)<EOL>dh_d_alpha = (estimator.rows_to_alts<EOL>.copy()<EOL>.transpose()[needed_idxs, :]<EOL>.transpose())<EOL><DEDENT>else:<EOL><INDENT>dh_d_alpha = None<EOL><DEDENT>calc_dh_d_alpha = partial(_asym_transform_deriv_alpha,<EOL>output_array=dh_d_alpha)<EOL>return calc_dh_d_alpha<EOL>
|
Return the function that can be used in the various gradient and hessian
calculations to calculate the derivative of the transformation with respect
to the outside intercept parameters.
Parameters
----------
estimator : an instance of the estimation.LogitTypeEstimator class.
Should contain a `rows_to_alts` attribute that is a 2D scipy sparse
matrix that maps the rows of the `design` matrix to the alternatives
available in this dataset. Should also contain an `intercept_ref_pos`
attribute that is either None or an int. This attribute should denote
which intercept is not being estimated (in the case of outside
intercept parameters) for identification purposes.
Returns
-------
Callable.
Will accept a 1D array of systematic utility values, a 1D array of
alternative IDs, (shape parameters if there are any) and miscellaneous
args and kwargs. Should return a 2D array whose elements contain the
derivative of the transformed utility vector with respect to the vector
of outside intercepts. The dimensions of the returned array should
be `(design.shape[0], num_alternatives - 1)`.
|
f7697:m9
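The double transpose in the body above is one way to drop a column from a sparse matrix by row-slicing its transpose. A quick equivalence check:

import numpy as np
from scipy.sparse import csr_matrix

rows_to_alts = csr_matrix(np.eye(3))
ref_pos = 1
needed = [j for j in range(rows_to_alts.shape[1]) if j != ref_pos]

# Row-slice the transpose, then transpose back: equivalent to dropping the
# reference column directly.
dropped = rows_to_alts.transpose()[needed, :].transpose()
assert (dropped.toarray() == rows_to_alts.toarray()[:, needed]).all()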
|
def check_length_of_initial_values(self, init_values):
|
<EOL>num_alts = self.rows_to_alts.shape[<NUM_LIT:1>]<EOL>num_index_coefs = self.design.shape[<NUM_LIT:1>]<EOL>if self.intercept_ref_pos is not None:<EOL><INDENT>assumed_param_dimensions = num_index_coefs + <NUM_LIT:2> * (num_alts - <NUM_LIT:1>)<EOL><DEDENT>else:<EOL><INDENT>assumed_param_dimensions = num_index_coefs + num_alts - <NUM_LIT:1><EOL><DEDENT>if init_values.shape[<NUM_LIT:0>] != assumed_param_dimensions:<EOL><INDENT>msg_1 = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>msg_3 = "<STR_LIT>"<EOL>raise ValueError(msg_1 +<EOL>msg_2.format(assumed_param_dimensions) +<EOL>msg_3.format(init_values.shape[<NUM_LIT:0>]))<EOL><DEDENT>return None<EOL>
|
Ensures that `init_values` is of the correct length. Raises a helpful
ValueError otherwise.
Parameters
----------
init_values : 1D ndarray.
The initial values to start the optimization process with. There
should be one value for each index coefficient, outside intercept
parameter, and shape parameter being estimated.
Returns
-------
None.
|
f7697:c0:m1
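The length check encodes simple arithmetic: K index coefficients plus J - 1 shape parameters, plus another J - 1 outside intercepts when `intercept_ref_pos` is not None. A worked illustration:

def expected_num_params(num_index_coefs, num_alts, has_outside_intercepts):
    # J - 1 shape parameters, plus J - 1 outside intercepts when present.
    if has_outside_intercepts:
        return num_index_coefs + 2 * (num_alts - 1)
    return num_index_coefs + (num_alts - 1)

# With 4 index coefficients and 3 alternatives:
assert expected_num_params(4, 3, True) == 8    # 4 + 2 * (3 - 1)
assert expected_num_params(4, 3, False) == 6   # 4 + (3 - 1)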
|
def fit_mle(self, init_vals,<EOL>init_shapes=None,<EOL>init_intercepts=None,<EOL>init_coefs=None,<EOL>print_res=True,<EOL>method="<STR_LIT>",<EOL>loss_tol=<NUM_LIT>,<EOL>gradient_tol=<NUM_LIT>,<EOL>maxiter=<NUM_LIT:1000>,<EOL>ridge=None,<EOL>constrained_pos=None,<EOL>just_point=False,<EOL>**kwargs):
|
<EOL>self.optimization_method = method<EOL>self.ridge_param = ridge<EOL>if ridge is not None:<EOL><INDENT>warnings.warn(_ridge_warning_msg)<EOL><DEDENT>mapping_res = self.get_mappings_for_fit()<EOL>rows_to_alts = mapping_res["<STR_LIT>"]<EOL>if init_vals is None and all([x is not None for x in [init_shapes,<EOL>init_coefs]]):<EOL><INDENT>num_alternatives = rows_to_alts.shape[<NUM_LIT:1>]<EOL>try:<EOL><INDENT>assert init_shapes.shape[<NUM_LIT:0>] == num_alternatives - <NUM_LIT:1><EOL><DEDENT>except AssertionError:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg.format(init_shapes.shape,<EOL>num_alternatives - <NUM_LIT:1>))<EOL><DEDENT>try:<EOL><INDENT>assert init_coefs.shape[<NUM_LIT:0>] == self.design.shape[<NUM_LIT:1>]<EOL><DEDENT>except AssertionError:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg.format(init_coefs.shape,<EOL>self.design.shape[<NUM_LIT:1>]))<EOL><DEDENT>try:<EOL><INDENT>if init_intercepts is not None:<EOL><INDENT>assert init_intercepts.shape[<NUM_LIT:0>] == (num_alternatives - <NUM_LIT:1>)<EOL><DEDENT><DEDENT>except AssertionError:<EOL><INDENT>msg = "<STR_LIT>"<EOL>raise ValueError(msg.format(init_intercepts.shape,<EOL>num_alternatives - <NUM_LIT:1>))<EOL><DEDENT>if init_intercepts is not None:<EOL><INDENT>init_vals = np.concatenate((init_shapes,<EOL>init_intercepts,<EOL>init_coefs), axis=<NUM_LIT:0>)<EOL><DEDENT>else:<EOL><INDENT>init_vals = np.concatenate((init_shapes,<EOL>init_coefs), axis=<NUM_LIT:0>)<EOL><DEDENT><DEDENT>elif init_vals is None:<EOL><INDENT>msg = "<STR_LIT>"<EOL>msg_2 = "<STR_LIT>"<EOL>raise ValueError(msg + msg_2)<EOL><DEDENT>zero_vector = np.zeros(init_vals.shape)<EOL>asym_estimator = AsymEstimator(self,<EOL>mapping_res,<EOL>ridge,<EOL>zero_vector,<EOL>split_param_vec,<EOL>constrained_pos=constrained_pos)<EOL>asym_estimator.set_derivatives()<EOL>asym_estimator.check_length_of_initial_values(init_vals)<EOL>estimation_res = estimate(init_vals,<EOL>asym_estimator,<EOL>method,<EOL>loss_tol,<EOL>gradient_tol,<EOL>maxiter,<EOL>print_res,<EOL>just_point=just_point)<EOL>if not just_point:<EOL><INDENT>self.store_fit_results(estimation_res)<EOL>return None<EOL><DEDENT>else:<EOL><INDENT>return estimation_res<EOL><DEDENT>
|
Parameters
----------
init_vals : 1D ndarray.
The initial values to start the optimization process with. There
should be one value for each index coefficient, outside intercept
parameter (if any), and shape parameter being estimated. Shape
parameters should come before
intercept parameters, which should come before index coefficients.
One can also pass None, and instead pass `init_shapes`, optionally
`init_intercepts` if `"intercept"` is not in the utility
specification, and `init_coefs`.
init_shapes : 1D ndarray or None, optional.
The initial values of the shape parameters. All elements should be
ints, floats, or longs. There should be one element less than the
total number of possible alternatives in the dataset. This keyword
argument will be ignored if `init_vals` is not None.
Default == None.
init_intercepts : 1D ndarray or None, optional.
The initial values of the intercept parameters. There should be one
parameter per possible alternative id in the dataset, minus one.
The passed values for this argument will be ignored if `init_vals`
is not None. This keyword argument should only be used if
`"intercept"` is not in the utility specification. Default == None.
init_coefs : 1D ndarray or None, optional.
The initial values of the index coefficients. There should be one
coefficient per index variable. The passed values for this argument
will be ignored if `init_vals` is not None. Default == None.
print_res : bool, optional.
Determines whether the timing and initial and final log likelihood
results will be printed as they are determined.
Default `== True`.
method : str, optional.
Should be a valid string for scipy.optimize.minimize. Determines
the optimization algorithm that is used for this problem.
Default `== 'bfgs'`.
loss_tol : float, optional.
Determines the tolerance on the difference in objective function
values from one iteration to the next that is needed to determine
convergence. Default `== 1e-06`.
gradient_tol : float, optional.
Determines the tolerance on the difference in gradient values from
one iteration to the next which is needed to determine convergence.
Default `== 1e-06`.
maxiter : int, optional.
Determines the maximum number of iterations used by the optimizer.
Default `== 1000`.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If a
scalar is passed, then that scalar determines the ridge penalty for
the optimization. The scalar should be greater than or equal to
zero. Default `== None`.
constrained_pos : list or None, optional.
Denotes the positions of the array of estimated parameters that are
not to change from their initial values. If a list is passed, the
elements are to be integers where no such integer is greater than
`init_vals.size`. Default == None.
just_point : bool, optional.
Determines whether (True) or not (False) calculations that are non-
critical for obtaining the maximum likelihood point estimate will be
skipped. If True, this function will return the results dictionary
from scipy.optimize. Default == False.
Returns
-------
None. Estimation results are saved to the model instance.
|
f7697:c1:m1
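A hedged usage sketch of the two initialization paths, assuming `model` is an already-constructed asymmetric logit model object with J = 3 alternatives and 4 index coefficients (all names and sizes here are illustrative):

import numpy as np

# The full parameter vector is ordered
# [shapes (J - 1), intercepts (J - 1), index coefficients].
init_shapes = np.zeros(2)
init_intercepts = np.zeros(2)
init_coefs = np.zeros(4)

# Either build init_vals yourself in the required order ...
init_vals = np.concatenate((init_shapes, init_intercepts, init_coefs))
# model.fit_mle(init_vals)

# ... or pass None and let fit_mle validate and concatenate the pieces:
# model.fit_mle(None,
#               init_shapes=init_shapes,
#               init_intercepts=init_intercepts,
#               init_coefs=init_coefs)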
|
def split_param_vec(param_vec, rows_to_alts, design, return_all_types=False):
|
<EOL>num_shapes = rows_to_alts.shape[<NUM_LIT:1>]<EOL>num_index_coefs = design.shape[<NUM_LIT:1>]<EOL>shapes = param_vec[:num_shapes]<EOL>betas = param_vec[-<NUM_LIT:1> * num_index_coefs:]<EOL>remaining_idx = param_vec.shape[<NUM_LIT:0>] - (num_shapes + num_index_coefs)<EOL>if remaining_idx > <NUM_LIT:0>:<EOL><INDENT>intercepts = param_vec[num_shapes: num_shapes + remaining_idx]<EOL><DEDENT>else:<EOL><INDENT>intercepts = None<EOL><DEDENT>if return_all_types:<EOL><INDENT>return None, shapes, intercepts, betas<EOL><DEDENT>else:<EOL><INDENT>return shapes, intercepts, betas<EOL><DEDENT>
|
Parameters
----------
param_vec : 1D ndarray.
Elements should all be ints, floats, or longs. Should have as many
elements as there are parameters being estimated.
rows_to_alts : 2D scipy sparse matrix.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset. All
elements should be zeros or ones.
design : 2D ndarray.
There should be one row per observation per available alternative.
There should be one column per utility coefficient being estimated. All
elements should be ints, floats, or longs.
return_all_types : bool, optional.
Determines whether or not a tuple of 4 elements will be returned (with
one element for the nest, shape, intercept, and index parameters for
this model). If False, a tuple of 3 elements will be returned, as
described below.
Returns
-------
`(shapes, intercepts, betas)` : tuple of 1D ndarrays.
The first element will be an array of the shape parameters for this
model. The second element will either be an array of the "outside"
intercept parameters for this model or None. The third element will be
an array of the index coefficients for this model.
Note
----
If `return_all_types == True` then the function will return a tuple of four
objects. In order, these objects will either be None or the arrays
corresponding to the nest, shape, intercept, and index parameters.
|
f7698:m0
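A numeric sketch of the slicing logic, mirroring the body above. Note that for this model there is one shape per alternative, since `num_shapes` comes from `rows_to_alts.shape[1]`:

import numpy as np
from scipy.sparse import csr_matrix

rows_to_alts = csr_matrix(np.eye(3))   # J = 3 alternatives
design = np.ones((3, 2))               # K = 2 index coefficients

# First J entries are shapes, last K are betas; whatever is left in the
# middle is interpreted as outside intercepts (here two of them).
param_vec = np.array([0.1, 0.2, 0.3,   # shapes
                      1.0, -1.0,       # intercepts
                      0.5, -0.5])      # betas

num_shapes = rows_to_alts.shape[1]
num_index_coefs = design.shape[1]
shapes = param_vec[:num_shapes]
betas = param_vec[-num_index_coefs:]
remaining = param_vec.shape[0] - (num_shapes + num_index_coefs)
intercepts = (param_vec[num_shapes: num_shapes + remaining]
              if remaining > 0 else None)
print(shapes, intercepts, betas)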
|
def _uneven_utility_transform(systematic_utilities,<EOL>alt_IDs,<EOL>rows_to_alts,<EOL>shape_params,<EOL>intercept_params,<EOL>intercept_ref_pos=None,<EOL>*args, **kwargs):
|
<EOL>natural_shapes = np.exp(shape_params)<EOL>natural_shapes[np.isposinf(natural_shapes)] = max_comp_value<EOL>long_natural_shapes = rows_to_alts.dot(natural_shapes)<EOL>exp_neg_utilities = np.exp(-<NUM_LIT:1> * systematic_utilities)<EOL>log_1_plus_exp_neg_utilitiles = np.log1p(exp_neg_utilities)<EOL>inf_idx = np.isinf(log_1_plus_exp_neg_utilitiles)<EOL>log_1_plus_exp_neg_utilitiles[inf_idx] = -<NUM_LIT:1> * systematic_utilities[inf_idx]<EOL>exp_neg_shape_utilities = np.exp(-<NUM_LIT:1> *<EOL>long_natural_shapes *<EOL>systematic_utilities)<EOL>log_1_plus_exp_neg_shape_utilities = np.log1p(exp_neg_shape_utilities)<EOL>inf_idx = np.isinf(log_1_plus_exp_neg_shape_utilities)<EOL>if np.any(inf_idx):<EOL><INDENT>log_1_plus_exp_neg_shape_utilities[inf_idx] =-<NUM_LIT:1> * long_natural_shapes[inf_idx] * systematic_utilities[inf_idx]<EOL><DEDENT>transformed_utilities = (systematic_utilities +<EOL>log_1_plus_exp_neg_utilitiles -<EOL>log_1_plus_exp_neg_shape_utilities)<EOL>transformed_utilities[np.isposinf(transformed_utilities)] = max_comp_value<EOL>transformed_utilities[np.isneginf(transformed_utilities)] = -max_comp_value<EOL>transformed_utilities[np.isneginf(systematic_utilities)] = -max_comp_value<EOL>if intercept_params is not None and intercept_ref_pos is not None:<EOL><INDENT>needed_idxs = range(rows_to_alts.shape[<NUM_LIT:1>])<EOL>needed_idxs.remove(intercept_ref_pos)<EOL>if len(intercept_params.shape) > <NUM_LIT:1> and intercept_params.shape[<NUM_LIT:1>] > <NUM_LIT:1>:<EOL><INDENT>all_intercepts = np.zeros((rows_to_alts.shape[<NUM_LIT:1>],<EOL>intercept_params.shape[<NUM_LIT:1>]))<EOL>all_intercepts[needed_idxs, :] = intercept_params<EOL><DEDENT>else:<EOL><INDENT>all_intercepts = np.zeros(rows_to_alts.shape[<NUM_LIT:1>])<EOL>all_intercepts[needed_idxs] = intercept_params<EOL><DEDENT>transformed_utilities += rows_to_alts.dot(all_intercepts)<EOL><DEDENT>if len(transformed_utilities.shape) == <NUM_LIT:1>:<EOL><INDENT>transformed_utilities = transformed_utilities[:, np.newaxis]<EOL><DEDENT>return transformed_utilities<EOL>
|
Parameters
----------
systematic_utilities : 1D ndarray.
All elements should be ints, floats, or longs. Should contain the
systematic utilities of each observation per available alternative.
Note that this vector is formed by the dot product of the design matrix
with the vector of utility coefficients.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_alts : 2D scipy sparse matrix.
There should be one row per observation per available alternative and
one column per possible alternative. This matrix maps the rows of the
design matrix to the possible alternatives for this dataset. All
elements should be zeros or ones.
shape_params : None or 1D ndarray.
If an array, each element should be an int, float, or long. There
should be one value per shape parameter of the model being used.
intercept_params : None or 1D ndarray.
If an array, each element should be an int, float, or long. If J is the
total number of possible alternatives for the dataset being modeled,
there should be J-1 elements in the array.
intercept_ref_pos : int, or None, optional.
Specifies the index of the alternative, in the ordered array of unique
alternatives, that is not having its intercept parameter estimated (in
order to ensure identifiability). Should only be None if
`intercept_params` is None.
Returns
-------
transformed_utilities : 2D ndarray.
Should have shape `(systematic_utilities.shape[0], 1)`. The returned
array contains the transformed utility values for this model. All
elements will be ints, longs, or floats.
|
f7698:m1
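The numerical-stability trick in the body above can be isolated in a few lines. This sketch uses placeholder values and omits the final `max_comp_value` clipping:

import numpy as np

# For very negative v, exp(-v) overflows, log1p returns inf, and the code
# substitutes the asymptote log(1 + exp(-v)) ~ -v (likewise for the
# shape-scaled term).
v = np.array([700.0, 0.0, -800.0])
c = np.array([1.0, 2.0, 0.5])          # long-format natural shapes

with np.errstate(over='ignore'):
    log1p_term = np.log1p(np.exp(-v))
    log1p_shape_term = np.log1p(np.exp(-c * v))

inf_idx = np.isinf(log1p_term)
log1p_term[inf_idx] = -v[inf_idx]

inf_idx = np.isinf(log1p_shape_term)
log1p_shape_term[inf_idx] = -(c * v)[inf_idx]

transformed = v + log1p_term - log1p_shape_term
print(transformed)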
|