Columns: `desc` (docstring, 3–26.7k chars), `decl` (declaration, 11–7.89k chars), `bodies` (function body, 8–553k chars).
'Compute correlation of the model with human similarity judgments. `pairs` is a filename of a dataset where lines are 3-tuples, each consisting of a word pair and a similarity value, separated by `delimiter`. An example dataset is included in Gensim (test/test_data/wordsim353.tsv). More datasets can be found at http://...
def evaluate_word_pairs(self, pairs, delimiter='\t', restrict_vocab=300000, case_insensitive=True, dummy4unknown=False):
    ok_vocab = [(w, self.vocab[w]) for w in self.index2word[:restrict_vocab]]
    ok_vocab = dict((w.upper(), v) for (w, v) in reversed(ok_vocab)) if case_insensitive else dict(ok_vocab)
    similarity_gold = []
    similarity_model = []
    oov = 0
    original_vocab = self.vocab
    self.vocab = ok_vocab
    for ...
'Precompute L2-normalized vectors. If `replace` is set, forget the original vectors and only keep the normalized ones = saves lots of memory! Note that you **cannot continue training** after doing a replace. The model becomes effectively read-only = you can call `most_similar`, `similarity` etc., but not `train`.'
def init_sims(self, replace=False):
    if getattr(self, 'syn0norm', None) is None or replace:
        logger.info('precomputing L2-norms of word weight vectors')
        if replace:
            for i in xrange(self.syn0.shape[0]):
                self.syn0[i, :] /= sqrt((self.syn0[i, :] ** 2).sum(-1))
            self.syn0norm = sel...
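The in-place normalization described above can be sketched with plain NumPy; the array name `syn0` and the toy values are illustrative stand-ins, not the model's actual state:

```python
import numpy as np

# Toy "syn0" matrix: 3 word vectors of dimension 4.
syn0 = np.array([[3.0, 4.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 6.0, 8.0]])

# replace=True variant: normalize each row in place, row by row,
# so no second matrix has to be allocated.
for i in range(syn0.shape[0]):
    syn0[i, :] /= np.sqrt((syn0[i, :] ** 2).sum(-1))

print(np.linalg.norm(syn0, axis=1))  # every row now has unit L2 norm
```

With `replace=False` the normalized rows would instead be stored in a separate matrix, keeping the originals trainable.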
'Return a Keras \'Embedding\' layer with weights set as the Word2Vec model\'s learned word embeddings'
def get_embedding_layer(self, train_embeddings=False):
    if not KERAS_INSTALLED:
        raise ImportError('Please install Keras to use this function')
    weights = self.syn0
    # pass train_embeddings through, so the layer stays frozen unless requested
    layer = Embedding(input_dim=weights.shape[0], output_dim=weights.shape[1], weights=[weights], trainable=train_embeddings)
    return layer
'Compute the \'l1\' or \'l2\' normalization by normalizing separately for each doc in a corpus. Formula for \'l1\' norm for term \'i\' in document \'j\' in a corpus of \'D\' documents is:: norml1_{i, j} = (value_{i, j} / sum(absolute(values in j))) Formula for \'l2\' norm for term \'i\' in document \'j\' in a corpus of \'D\' docu...
def __init__(self, corpus=None, norm='l2'):
    self.norm = norm
    if corpus is not None:
        self.calc_norm(corpus)
'Calculates the norm by calling matutils.unitvec with the norm parameter.'
def calc_norm(self, corpus):
    logger.info('Performing %s normalization...' % self.norm)
    norms = []
    numnnz = 0
    docno = 0
    for bow in corpus:
        docno += 1
        numnnz += len(bow)
        norms.append(matutils.unitvec(bow, self.norm))
    self.num_docs = docno
    self.num_nnz = numnnz
    self.norms = norms
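The per-document l1/l2 formulas above can be sketched on a sparse bag-of-words vector; `unitvec` here is an illustrative stand-in for `matutils.unitvec`, not gensim's implementation:

```python
import math

def unitvec(bow, norm='l2'):
    # Normalize one sparse doc given as (term_id, value) pairs.
    if norm == 'l1':
        length = sum(abs(v) for _, v in bow)
    else:  # 'l2'
        length = math.sqrt(sum(v * v for _, v in bow))
    if length == 0.0:
        return list(bow)
    return [(tid, v / length) for tid, v in bow]

doc = [(0, 3.0), (2, 4.0)]
print(unitvec(doc, 'l2'))  # [(0, 0.6), (2, 0.8)]
print(unitvec(doc, 'l1'))  # l1-normalized values sum to 1
```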
'Prepare the state for a new EM iteration (reset sufficient stats).'
def reset(self):
    self.sstats[:] = 0.0
    self.numdocs = 0
'Merge the result of an E step from one node with that of another node (summing up sufficient statistics). The merging is trivial and after merging all cluster nodes, we have the exact same result as if the computation was run on a single node (no approximation).'
def merge(self, other):
    assert other is not None
    self.sstats += other.sstats
    self.numdocs += other.numdocs
'Given LdaState `other`, merge it with the current state. Stretch both to `targetsize` documents before merging, so that they are of comparable magnitude. Merging is done by average weighting: in the extremes, `rhot=0.0` means `other` is completely ignored; `rhot=1.0` means `self` is completely ignored. This procedure ...
def blend(self, rhot, other, targetsize=None):
    assert other is not None
    if targetsize is None:
        targetsize = self.numdocs
    if self.numdocs == 0 or targetsize == self.numdocs:
        scale = 1.0
    else:
        scale = 1.0 * targetsize / self.numdocs
    self.sstats *= (1.0 - rhot) * scale
    if ((other.numdocs == 0) or (targets...
'Alternative, more simple blend.'
def blend2(self, rhot, other, targetsize=None):
    assert other is not None
    if targetsize is None:
        targetsize = self.numdocs
    self.sstats += other.sstats
    self.numdocs = targetsize
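The `blend` average weighting described above (stretch both states to `targetsize` documents, then interpolate with weight `rhot`) can be checked with toy NumPy arrays; all names and values here are illustrative, not gensim's internal state:

```python
import numpy as np

rhot = 0.25
targetsize = 100

# Sufficient-statistics matrices from two nodes (toy 2x3 arrays)
# and the document counts behind them.
self_sstats, self_numdocs = np.full((2, 3), 10.0), 50
other_sstats, other_numdocs = np.full((2, 3), 4.0), 20

# Stretch each side to `targetsize` docs, then interpolate with weight rhot:
# rhot=0.0 ignores `other` entirely, rhot=1.0 ignores `self` entirely.
blended = ((1.0 - rhot) * self_sstats * (targetsize / self_numdocs)
           + rhot * other_sstats * (targetsize / other_numdocs))
print(blended[0, 0])  # 0.75 * 10 * 2 + 0.25 * 4 * 5 = 20.0
```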
'If given, start training from the iterable `corpus` straight away. If not given, the model is left untrained (presumably because you want to call `update()` manually). `num_topics` is the number of requested latent topics to be extracted from the training corpus. `id2word` is a mapping from word ids (integers) to word...
def __init__(self, corpus=None, num_topics=100, id2word=None, distributed=False, chunksize=2000, passes=1, update_every=1, alpha='symmetric', eta=None, decay=0.5, offset=1.0, eval_every=10, iterations=50, gamma_threshold=0.001, minimum_probability=0.01, random_state=None, ns_conf={}, minimum_phi_value=0.01, per_word_to...
    self.id2word = id2word
    if corpus is None and self.id2word is None:
        raise ValueError('at least one of corpus/id2word must be specified, to establish input space dimensionality')
    if self.id2word is None:
        logger.warning('no word id mapping ...
'Clear model state (free up some memory). Used in the distributed algo.'
def clear(self):
    self.state = None
    self.Elogbeta = None
'Given a chunk of sparse document vectors, estimate gamma (parameters controlling the topic weights) for each document in the chunk. This function does not modify the model (=is read-only aka const). The whole input chunk of documents is assumed to fit in RAM; chunking of a large corpus must be done earlier in the pipel...
def inference(self, chunk, collect_sstats=False):
    try:
        _ = len(chunk)
    except TypeError:
        # convert iterators/generators to a plain list
        chunk = list(chunk)
    if len(chunk) > 1:
        logger.debug('performing inference on a chunk of %i documents', len(chunk))
    gamma = self.random_state.gamma(100.0, 1.0 / 100.0, (len(chunk), self.num_topics))
    Elogtheta = dirichlet_exp...
'Perform inference on a chunk of documents, and accumulate the collected sufficient statistics in `state` (or `self.state` if None).'
def do_estep(self, chunk, state=None):
    if state is None:
        state = self.state
    gamma, sstats = self.inference(chunk, collect_sstats=True)
    state.sstats += sstats
    state.numdocs += gamma.shape[0]
    return gamma
'Update parameters for the Dirichlet prior on the per-document topic weights `alpha` given the last `gammat`.'
def update_alpha(self, gammat, rho):
    N = float(len(gammat))
    logphat = sum(dirichlet_expectation(gamma) for gamma in gammat) / N
    self.alpha = update_dir_prior(self.alpha, N, logphat, rho)
    logger.info('optimized alpha %s', list(self.alpha))
    return self.alpha
'Update parameters for the Dirichlet prior on the per-topic word weights `eta` given the last `lambdat`.'
def update_eta(self, lambdat, rho):
    N = float(lambdat.shape[0])
    logphat = (sum(dirichlet_expectation(lambda_) for lambda_ in lambdat) / N).reshape((self.num_terms,))
    self.eta = update_dir_prior(self.eta, N, logphat, rho)
    return self.eta
'Calculate and return per-word likelihood bound, using the `chunk` of documents as evaluation corpus. Also output the calculated statistics, incl. perplexity=2^(-bound), to the log at INFO level.'
def log_perplexity(self, chunk, total_docs=None):
    if total_docs is None:
        total_docs = len(chunk)
    corpus_words = sum(cnt for document in chunk for _, cnt in document)
    subsample_ratio = 1.0 * total_docs / len(chunk)
    perwordbound = self.bound(chunk, subsample_ratio=subsample_ratio) / (subsample_ratio * corpus_words)
    logger.info(('%...
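The docstring's relationship perplexity = 2^(-bound) can be checked with toy numbers; the values below are made up purely for illustration:

```python
# A total variational bound of -800 over a chunk of 100 tokens gives a
# per-word bound of -8.0 and hence perplexity 2^8 = 256.
bound_total = -800.0      # bound over the whole chunk (toy value)
corpus_words = 100        # token count in the chunk
subsample_ratio = 1.0     # chunk == whole evaluation set here

perwordbound = bound_total / (subsample_ratio * corpus_words)
perplexity = 2 ** (-perwordbound)
print(perwordbound, perplexity)  # -8.0 256.0
```

Lower perplexity (a less negative per-word bound) indicates a better fit to the evaluation chunk.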
'Train the model with new documents, by EM-iterating over `corpus` until the topics converge (or until the maximum number of allowed iterations is reached). `corpus` must be an iterable (repeatable stream of documents). In distributed mode, the E step is distributed over a cluster of machines. This update also supports...
def update(self, corpus, chunksize=None, decay=None, offset=None, passes=None, update_every=None, eval_every=None, iterations=None, gamma_threshold=None, chunks_as_numpy=False):
    if decay is None:
        decay = self.decay
    if offset is None:
        offset = self.offset
    if passes is None:
        passes = self.passes
    if update_every is None:
        update_every = self.update_every
    if eval_every is None:
        eval_every = self.eval_every
    if (iterations is N...
'M step: use linear interpolation between the existing topics and collected sufficient statistics in `other` to update the topics.'
def do_mstep(self, rho, other, extra_pass=False):
    logger.debug('updating topics')
    diff = np.log(self.expElogbeta)
    self.state.blend(rho, other)
    diff -= self.state.get_Elogbeta()
    self.sync_state()
    self.print_topics(5)
    logger.info('topic diff=%f, rho=%f', np.mean(np.abs(diff)), rho)
    if self.optimize_eta:
        self.update_eta(se...
'Estimate the variational bound of documents from `corpus`: E_q[log p(corpus)] - E_q[log q(corpus)] `gamma` are the variational parameters on topic weights for each `corpus` document (=2d matrix=what comes out of `inference()`). If not supplied, will be inferred from the model.'
def bound(self, corpus, gamma=None, subsample_ratio=1.0):
    score = 0.0
    _lambda = self.state.get_lambda()
    Elogbeta = dirichlet_expectation(_lambda)
    for d, doc in enumerate(corpus):
        if d % self.chunksize == 0:
            logger.debug('bound: at document #%i', d)
        if gamma is None:
            (gammad, _) = self.inference([doc])
            ...
'For `num_topics` number of topics, return `num_words` most significant words (10 words per topic, by default). The topics are returned as a list -- a list of strings if `formatted` is True, or a list of `(word, probability)` 2-tuples if False. If `log` is True, also output this result to log. Unlike LSA, there is no n...
def show_topics(self, num_topics=10, num_words=10, log=False, formatted=True):
    if num_topics < 0 or num_topics >= self.num_topics:
        num_topics = self.num_topics
        chosen_topics = range(num_topics)
    else:
        num_topics = min(num_topics, self.num_topics)
        sort_alpha = self.alpha + 0.0001 * self.random_state.rand(len(self.alpha))
        sorted_topics = list...
'Return a list of `(word, probability)` 2-tuples for the most probable words in topic `topicid`. Only return 2-tuples for the topn most probable words (ignore the rest).'
def show_topic(self, topicid, topn=10):
return [(self.id2word[id], value) for (id, value) in self.get_topic_terms(topicid, topn)]
'Return a list of `(word_id, probability)` 2-tuples for the most probable words in topic `topicid`. Only return 2-tuples for the topn most probable words (ignore the rest).'
def get_topic_terms(self, topicid, topn=10):
    topic = self.state.get_lambda()[topicid]
    topic = topic / topic.sum()
    bestn = matutils.argsort(topic, topn, reverse=True)
    return [(id, topic[id]) for id in bestn]
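The normalize-then-argsort step above can be sketched with plain NumPy (`np.argsort` reversed stands in for `matutils.argsort(..., reverse=True)`; the topic row is a toy value):

```python
import numpy as np

# One row of the topic-word matrix lambda: unnormalized pseudo-counts
# for a single topic over a 5-word vocabulary.
topic = np.array([2.0, 8.0, 1.0, 5.0, 4.0])
topic = topic / topic.sum()             # probabilities, sums to 1

topn = 2
bestn = np.argsort(topic)[::-1][:topn]  # indices of the topn most probable words
print([(int(i), float(topic[i])) for i in bestn])  # [(1, 0.4), (3, 0.25)]
```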
'Calculate the UMass topic coherence for each topic. Algorithm from **Mimno, Wallach, Talley, Leenders, McCallum: Optimizing Semantic Coherence in Topic Models, EMNLP 2011.**'
def top_topics(self, corpus, num_words=20):
    is_corpus, corpus = utils.is_corpus(corpus)
    if not is_corpus:
        logger.warning('LdaModel.top_topics() called with an empty corpus')
        return
    topics = []
    str_topics = []
    for topic in self.state.get_lambda():
        topic = topic / topic.sum()
        bestn = matutils...
'Return topic distribution for the given document `bow`, as a list of (topic_id, topic_probability) 2-tuples. Ignore topics with very low probability (below `minimum_probability`). If per_word_topics is True, it also returns a list of topics, sorted in descending order of most likely topics for that word. It also retur...
def get_document_topics(self, bow, minimum_probability=None, minimum_phi_value=None, per_word_topics=False):
    if minimum_probability is None:
        minimum_probability = self.minimum_probability
    minimum_probability = max(minimum_probability, 1e-08)
    if minimum_phi_value is None:
        minimum_phi_value = self.minimum_probability
    minimum_phi_value = max(minimum_phi_value, 1e-08)
    (is_corpus, corpus) = ...
'Returns most likely topics for a particular word in vocab.'
def get_term_topics(self, word_id, minimum_probability=None):
    if minimum_probability is None:
        minimum_probability = self.minimum_probability
    minimum_probability = max(minimum_probability, 1e-08)
    if isinstance(word_id, str):
        word_id = self.id2word.doc2bow([word_id])[0][0]
    values = []
    for topic_id in range(0, self.num_topics):
        if (self....
'Calculate difference topic2topic between two Lda models `other` instances of `LdaMulticore` or `LdaModel` `distance` is function that will be applied to calculate difference between any topic pair. Available values: `kullback_leibler`, `hellinger` and `jaccard` `num_words` is quantity of most relevant words that used ...
def diff(self, other, distance='kullback_leibler', num_words=100, n_ann_terms=10, diagonal=False, annotation=True, normed=True):
    distances = {'kullback_leibler': kullback_leibler, 'hellinger': hellinger, 'jaccard': jaccard_distance}
    if distance not in distances:
        valid_keys = ', '.join('`{}`'.format(x) for x in distances.keys())
        raise ValueError('Incorrect distance, valid only {}'.format(valid_keys))
    ...
'Return topic distribution for the given document `bow`, as a list of (topic_id, topic_probability) 2-tuples. Ignore topics with very low probability (below `eps`).'
def __getitem__(self, bow, eps=None):
return self.get_document_topics(bow, eps, self.minimum_phi_value, self.per_word_topics)
'Save the model to file. Large internal arrays may be stored into separate files, with `fname` as prefix. `separately` can be used to define which arrays should be stored in separate files. `ignore` parameter can be used to define which variables should be ignored, i.e. left out from the pickled lda model. By default t...
def save(self, fname, ignore=['state', 'dispatcher'], separately=None, *args, **kwargs):
    if self.state is not None:
        self.state.save(utils.smart_extension(fname, '.state'), *args, **kwargs)
    if 'id2word' not in ignore:
        utils.pickle(self.id2word, utils.smart_extension(fname, '.id2word'))
    if ignore is not None and ignore:
        if isinstance(ignore, six.string_types):
            ...
'Load a previously saved object from file (also see `save`). Large arrays can be memmap\'ed back as read-only (shared memory) by setting `mmap=\'r\'`: >>> LdaModel.load(fname, mmap=\'r\')'
@classmethod
def load(cls, fname, *args, **kwargs):
    kwargs['mmap'] = kwargs.get('mmap', None)
    result = super(LdaModel, cls).load(fname, *args, **kwargs)
    if not hasattr(result, 'random_state'):
        result.random_state = utils.get_random_state(None)
        logging.warning('random_state not set so using default value')
    state_fname =...
'Note that the constructor does not fully initialize the dispatcher; use the `initialize()` function to populate it with workers etc.'
def __init__(self, maxsize=0):
    self.maxsize = maxsize
    self.workers = {}
    self.callback = None
'`model_params` are parameters used to initialize individual workers (gets handed all the way down to worker.initialize()).'
@Pyro4.expose
def initialize(self, **model_params):
    self.jobs = Queue(maxsize=self.maxsize)
    self.lock_update = threading.Lock()
    self._jobsdone = 0
    self._jobsreceived = 0
    self.workers = {}
    with utils.getNS() as ns:
        self.callback = Pyro4.Proxy('PYRONAME:gensim.lsi_dispatcher')
        for (name, uri) in iteritems(ns.list(prefix='gensim.lsi_...
'Return pyro URIs of all registered workers.'
@Pyro4.expose
def getworkers(self):
    return [worker._pyroUri for worker in itervalues(self.workers)]
'Merge projections from across all workers and return the final projection.'
@Pyro4.expose
def getstate(self):
    logger.info('end of input, assigning all remaining jobs')
    logger.debug('jobs done: %s, jobs received: %s' % (self._jobsdone, self._jobsreceived))
    while self._jobsdone < self._jobsreceived:
        time.sleep(0.5)
    logger.info(('merging states from %i worke...
'Initialize all workers for a new decomposition.'
@Pyro4.expose
def reset(self):
    for workerid, worker in iteritems(self.workers):
        logger.info('resetting worker %s' % workerid)
        worker.reset()
        worker.requestjob()
    self._jobsdone = 0
    self._jobsreceived = 0
'A worker has finished its job. Log this event and then asynchronously transfer control back to the worker. In this way, control flow basically oscillates between dispatcher.jobdone() worker.requestjob().'
@Pyro4.expose
@Pyro4.oneway
@utils.synchronous('lock_update')
def jobdone(self, workerid):
    self._jobsdone += 1
    logger.info('worker #%s finished job #%i' % (workerid, self._jobsdone))
    worker = self.workers[workerid]
    worker.requestjob()
'Wrap self._jobsdone, needed for remote access through proxies'
def jobsdone(self):
return self._jobsdone
'Terminate all registered workers and then the dispatcher.'
@Pyro4.oneway
def exit(self):
    for workerid, worker in iteritems(self.workers):
        logger.info('terminating worker %s' % workerid)
        worker.exit()
    logger.info('terminating dispatcher')
    os._exit(0)
'`normalize` dictates whether the resulting vectors will be set to unit length.'
def __init__(self, corpus, id2word=None, normalize=True):
    self.normalize = normalize
    self.n_docs = 0
    self.n_words = 0
    self.entr = {}
    if corpus is not None:
        self.initialize(corpus)
'Initialize internal statistics based on a training corpus. Called automatically from the constructor.'
def initialize(self, corpus):
    logger.info('calculating counts')
    glob_freq = {}
    glob_num_words, doc_no = 0, -1
    for doc_no, bow in enumerate(corpus):
        if doc_no % 10000 == 0:
            logger.info('PROGRESS: processing document #%i' % doc_no)
        glob_num_words += len(bow)
        for (term_id, t...
'Return log entropy representation of the input vector and/or corpus.'
def __getitem__(self, bow):
    is_corpus, bow = utils.is_corpus(bow)
    if is_corpus:
        return self._apply(bow)
    vector = [
        (term_id, math.log(tf + 1) * self.entr.get(term_id))
        for term_id, tf in bow
        if term_id in self.entr
    ]
    if self.normalize:
        vector = matutils.unitvec(vector)
    return vector
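The log-entropy weighting above (log(tf + 1) scaled by a precomputed per-term entropy weight) can be sketched in a few lines; the `entr` values here are made up for illustration, whereas in the real model they come from `initialize()` over the whole corpus:

```python
import math

# Toy precomputed per-term global entropy weights.
entr = {0: 0.9, 2: 0.4}

bow = [(0, 3), (1, 7), (2, 1)]   # (term_id, term_frequency) pairs

# Terms without an entropy weight (term 1 here) are dropped from the output.
vector = [(tid, math.log(tf + 1) * entr[tid]) for tid, tf in bow if tid in entr]
print(vector)
```

High-entropy (globally informative) terms keep more of their local log-frequency weight; terms unseen during initialization vanish.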
'Note that the constructor does not fully initialize the dispatcher; use the `initialize()` function to populate it with workers etc.'
def __init__(self, maxsize=MAX_JOBS_QUEUE, ns_conf={}):
    self.maxsize = maxsize
    self.callback = None
    self.ns_conf = ns_conf
'`model_params` are parameters used to initialize individual workers (gets handed all the way down to `worker.initialize()`).'
@Pyro4.expose
def initialize(self, **model_params):
    self.jobs = Queue(maxsize=self.maxsize)
    self.lock_update = threading.Lock()
    self._jobsdone = 0
    self._jobsreceived = 0
    self.workers = {}
    with utils.getNS(**self.ns_conf) as ns:
        self.callback = Pyro4.Proxy(ns.list(prefix=LDA_DISPATCHER_PREFIX)[LDA_DISPATCHER_PREFIX])
        for (name, uri...
'Return pyro URIs of all registered workers.'
@Pyro4.expose
def getworkers(self):
    return [worker._pyroUri for worker in itervalues(self.workers)]
'Merge states from across all workers and return the result.'
@Pyro4.expose
def getstate(self):
    logger.info('end of input, assigning all remaining jobs')
    logger.debug('jobs done: %s, jobs received: %s' % (self._jobsdone, self._jobsreceived))
    while self._jobsdone < self._jobsreceived:
        time.sleep(0.5)
    logger.info(('merging states from %i worke...
'Initialize all workers for a new EM iteration.'
@Pyro4.expose
def reset(self, state):
    for workerid, worker in iteritems(self.workers):
        logger.info('resetting worker %s' % workerid)
        worker.reset(state)
        worker.requestjob()
    self._jobsdone = 0
    self._jobsreceived = 0
'A worker has finished its job. Log this event and then asynchronously transfer control back to the worker. In this way, control flow basically oscillates between `dispatcher.jobdone()` and `worker.requestjob()`.'
@Pyro4.expose
@Pyro4.oneway
@utils.synchronous('lock_update')
def jobdone(self, workerid):
    self._jobsdone += 1
    logger.info('worker #%s finished job #%i' % (workerid, self._jobsdone))
    self.workers[workerid].requestjob()
'Wrap self._jobsdone, needed for remote access through Pyro proxies'
def jobsdone(self):
return self._jobsdone
'Terminate all registered workers and then the dispatcher.'
@Pyro4.oneway
def exit(self):
    for workerid, worker in iteritems(self.workers):
        logger.info('terminating worker %s' % workerid)
        worker.exit()
    logger.info('terminating dispatcher')
    os._exit(0)
'Request jobs from the dispatcher, in a perpetual loop until `getstate()` is called.'
@Pyro4.expose
@Pyro4.oneway
def requestjob(self):
    if self.model is None:
        raise RuntimeError('worker must be initialized before receiving jobs')
    job = None
    while job is None and not self.finished:
        try:
            job = self.dispatcher.getjob(self.myid)
        except Queue.Empty:
            continue
    if (job is...
'If the iterable corpus and one of author2doc/doc2author dictionaries are given, start training straight away. If not given, the model is left untrained (presumably because you want to call the `update` method manually). `num_topics` is the number of requested latent topics to be extracted from the training corpus. `id...
def __init__(self, corpus=None, num_topics=100, id2word=None, author2doc=None, doc2author=None, chunksize=2000, passes=1, iterations=50, decay=0.5, offset=1.0, alpha='symmetric', eta='symmetric', update_every=1, eval_every=10, gamma_threshold=0.001, serialized=False, serialization_path=None, minimum_probability=0.01, r...
    distributed = False
    self.dispatcher = None
    self.numworkers = 1
    self.id2word = id2word
    if corpus is None and self.id2word is None:
        raise ValueError('at least one of corpus/id2word must be specified, to establish input space dimensionality')
    if (se...
'Initialize an empty corpus. If the corpora are to be treated as lists, simply initialize an empty list. If serialization is used, initialize an empty corpus of the class `gensim.corpora.MmCorpus`.'
def init_empty_corpus(self):
    if self.serialized:
        MmCorpus.serialize(self.serialization_path, [])
        self.corpus = MmCorpus(self.serialization_path)
    else:
        self.corpus = []
'Add new documents in `corpus` to `self.corpus`. If serialization is used, then the entire corpus (`self.corpus`) is re-serialized and the new documents are added in the process. If serialization is not used, the corpus, as a list of documents, is simply extended.'
def extend_corpus(self, corpus):
    if self.serialized:
        if isinstance(corpus, MmCorpus):
            assert self.corpus.input != corpus.input, 'Input corpus cannot have the same file path as the model corpus (serialization_path).'
        corpus_chain = chain(self.corpus, corpus)
        copyfile(self.ser...
'Efficiently computes the normalizing factor in phi.'
def compute_phinorm(self, ids, authors_d, expElogthetad, expElogbetad):
    # sum the author factors over topics first, then one dot product per word
    expElogtheta_sum = expElogthetad.sum(axis=0)
    phinorm = expElogtheta_sum.dot(expElogbetad) + 1e-100
    return phinorm
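The shape bookkeeping in `compute_phinorm` can be checked with toy NumPy arrays; all dimensions and values below are illustrative, not taken from a trained model:

```python
import numpy as np

num_topics, num_words = 3, 4
# Variational factors for the 2 authors of one document (rows) over topics,
# and the topic-word factors restricted to the document's words.
expElogthetad = np.full((2, num_topics), 0.5)
expElogbetad = np.full((num_topics, num_words), 0.25)

# Sum author factors over topics first, then one dot product per word;
# the tiny constant guards against division by zero downstream.
phinorm = expElogthetad.sum(axis=0).dot(expElogbetad) + 1e-100
print(phinorm.shape)  # (4,) -- one normalizer per word in the document
```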
'Given a chunk of sparse document vectors, update gamma (parameters controlling the topic weights) for each author corresponding to the documents in the chunk. The whole input chunk of documents is assumed to fit in RAM; chunking of a large corpus must be done earlier in the pipeline. If `collect_sstats` is True, also c...
def inference(self, chunk, author2doc, doc2author, rhot, collect_sstats=False, chunk_doc_idx=None):
    try:
        _ = len(chunk)
    except TypeError:
        # convert iterators/generators to a plain list
        chunk = list(chunk)
    if len(chunk) > 1:
        logger.debug('performing inference on a chunk of %i documents', len(chunk))
    if collect_sstats:
        sstats = np.zeros_like(self.expElogbeta)
    else:
        sstats = None
    converged ...
'Perform inference on a chunk of documents, and accumulate the collected sufficient statistics in `state` (or `self.state` if None).'
def do_estep(self, chunk, author2doc, doc2author, rhot, state=None, chunk_doc_idx=None):
    if state is None:
        state = self.state
    gamma, sstats = self.inference(chunk, author2doc, doc2author, rhot, collect_sstats=True, chunk_doc_idx=chunk_doc_idx)
    state.sstats += sstats
    state.numdocs += len(chunk)
    return gamma
'Calculate and return per-word likelihood bound, using the `chunk` of documents as evaluation corpus. Also output the calculated statistics, incl. perplexity=2^(-bound), to the log at INFO level.'
def log_perplexity(self, chunk, chunk_doc_idx=None, total_docs=None):
    if total_docs is None:
        total_docs = len(chunk)
    corpus_words = sum(cnt for document in chunk for _, cnt in document)
    subsample_ratio = 1.0 * total_docs / len(chunk)
    perwordbound = self.bound(chunk, chunk_doc_idx, subsample_ratio=subsample_ratio) / (subsample_ratio * corpus_words)
'Train the model with new documents, by EM-iterating over `corpus` until the topics converge (or until the maximum number of allowed iterations is reached). `corpus` must be an iterable (repeatable stream of documents). This update also supports updating an already trained model (`self`) with new documents from `corpus...
def update(self, corpus=None, author2doc=None, doc2author=None, chunksize=None, decay=None, offset=None, passes=None, update_every=None, eval_every=None, iterations=None, gamma_threshold=None, chunks_as_numpy=False):
    if decay is None:
        decay = self.decay
    if offset is None:
        offset = self.offset
    if passes is None:
        passes = self.passes
    if update_every is None:
        update_every = self.update_every
    if eval_every is None:
        eval_every = self.eval_every
    if (iterations is N...
'Estimate the variational bound of documents from `corpus`: E_q[log p(corpus)] - E_q[log q(corpus)] There are basically two use cases of this method: 1. `chunk` is a subset of the training corpus, and `chunk_doc_idx` is provided, indicating the indexes of the documents in the training corpus. 2. `chunk` is a test set (...
def bound(self, chunk, chunk_doc_idx=None, subsample_ratio=1.0, author2doc=None, doc2author=None):
    _lambda = self.state.get_lambda()
    Elogbeta = dirichlet_expectation(_lambda)
    expElogbeta = np.exp(Elogbeta)
    gamma = self.state.gamma
    if author2doc is None and doc2author is None:
        author2doc = self.author2doc
        doc2author = self.doc2author
    if not chunk_doc_idx:
        ...
'This method overwrites `LdaModel.get_document_topics` and simply raises an exception. `get_document_topics` is not valid for the author-topic model, use `get_author_topics` instead.'
def get_document_topics(self, word_id, minimum_probability=None):
raise NotImplementedError('Method "get_document_topics" is not valid for the author-topic model. Use the "get_author_topics" method.')
'Return topic distribution for the given author, as a list of (topic_id, topic_probability) 2-tuples. Ignore topics with very low probability (below `minimum_probability`). Obtaining topic probabilities of each word, as in LDA (via `per_word_topics`), is not supported.'
def get_author_topics(self, author_name, minimum_probability=None):
    author_id = self.author2id[author_name]
    if minimum_probability is None:
        minimum_probability = self.minimum_probability
    minimum_probability = max(minimum_probability, 1e-08)
    topic_dist = self.state.gamma[author_id, :] / sum(self.state.gamma[author_id, :])
    author_topics = [(topicid, topicv...
'Return topic distribution for the input author as a list of (topic_id, topic_probability) 2-tuples. Ignores topics with probability less than `eps`. Do not call this method directly; instead use `model[author_names]`.'
def __getitem__(self, author_names, eps=None):
    if isinstance(author_names, list):
        items = []
        for a in author_names:
            items.append(self.get_author_topics(a, minimum_probability=eps))
    else:
        items = self.get_author_topics(author_names, minimum_probability=eps)
    return items
'Note a document tag during initial corpus scan, for structure sizing.'
def note_doctag(self, key, document_no, document_length):
    if isinstance(key, integer_types + (integer,)):
        self.max_rawint = max(self.max_rawint, key)
    elif key in self.doctags:
        self.doctags[key] = self.doctags[key].repeat(document_length)
    else:
        self.doctags[key] = Doctag(len(self.offset2doctag), document_length, 1)
        self.offset2do...
'Return indexes and backing-arrays used in training examples.'
def indexed_doctags(self, doctag_tokens):
return ([self._int_index(index) for index in doctag_tokens if (index in self)], self.doctag_syn0, self.doctag_syn0_lockf, doctag_tokens)
'Persist any changes made to the given indexes (matching tuple previously returned by indexed_doctags()); a no-op for this implementation'
def trained_item(self, indexed_tuple):
pass
'Return int index for either string or int index'
def _int_index(self, index):
    if isinstance(index, integer_types + (integer,)):
        return index
    else:
        return self.max_rawint + 1 + self.doctags[index].offset
'Return string index for given int index, if available'
def _key_index(self, i_index, missing=None):
    warnings.warn('use DocvecsArray.index_to_doctag', DeprecationWarning)
    return self.index_to_doctag(i_index)
'Return string key for given i_index, if available. Otherwise return raw int doctag (same int).'
def index_to_doctag(self, i_index):
    candidate_offset = i_index - self.max_rawint - 1
    if 0 <= candidate_offset < len(self.offset2doctag):
        return self.offset2doctag[candidate_offset]
    else:
        return i_index
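The int/string doctag layout behind `_int_index` and `index_to_doctag` can be sketched standalone: raw int tags occupy indexes 0..max_rawint, and string tags are stored after them in offset order. The names and values below are illustrative, not the model's actual attributes:

```python
max_rawint = 4                       # highest raw int tag seen in training
offset2doctag = ['doc_a', 'doc_b']   # string tags, in offset order

def index_to_doctag(i_index):
    # String tags live at indexes max_rawint+1, max_rawint+2, ...
    candidate_offset = i_index - max_rawint - 1
    if 0 <= candidate_offset < len(offset2doctag):
        return offset2doctag[candidate_offset]
    return i_index                   # plain int tag: maps to itself

print(index_to_doctag(2))   # 2 (raw int tag)
print(index_to_doctag(5))   # 'doc_a' (first string tag)
```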
'Accept a single key (int or string tag) or list of keys as input. If a single string or int, return designated tag\'s vector representation, as a 1D numpy array. If a list, return designated tags\' vector representations as a 2D numpy array: #tags x #vector_size.'
def __getitem__(self, index):
    if isinstance(index, string_types + integer_types + (integer,)):
        return self.doctag_syn0[self._int_index(index)]
    return vstack([self[i] for i in index])
'Estimated memory for tag lookup; 0 if using pure int tags.'
def estimated_lookup_memory(self):
return ((60 * len(self.offset2doctag)) + (140 * len(self.doctags)))
'Precompute L2-normalized vectors. If `replace` is set, forget the original vectors and only keep the normalized ones = saves lots of memory! Note that you **cannot continue training or inference** after doing a replace. The model becomes effectively read-only = you can call `most_similar`, `similarity` etc., but not `...
def init_sims(self, replace=False):
    if getattr(self, 'doctag_syn0norm', None) is None or replace:
        logger.info('precomputing L2-norms of doc weight vectors')
        if replace:
            for i in xrange(self.doctag_syn0.shape[0]):
                self.doctag_syn0[i, :] /= sqrt((self.doctag_syn0[i, :] ** 2).sum(-1))
            ...
'Find the top-N most similar docvecs known from training. Positive docs contribute positively towards the similarity, negative docs negatively. This method computes cosine similarity between a simple mean of the projection weight vectors of the given docs. Docs may be specified as vectors, integer indexes of trained do...
def most_similar(self, positive=[], negative=[], topn=10, clip_start=0, clip_end=None, indexer=None):
    self.init_sims()
    clip_end = clip_end or len(self.doctag_syn0norm)
    if isinstance(positive, string_types + integer_types + (integer,)) and not negative:
        positive = [positive]
    positive = [
        (doc, 1.0) if isinstance(doc, string_types + integer_types + (ndarray, integer)) else doc
        for d...
'Which doc from the given list doesn\'t go with the others? (TODO: Accept vectors of out-of-training-set docs, as if from inference.)'
def doesnt_match(self, docs):
    self.init_sims()
    docs = [doc for doc in docs if doc in self.doctags or 0 <= doc < self.count]
    logger.debug('using docs %s' % docs)
    if not docs:
        raise ValueError('cannot select a doc from an empty list')
    vectors = vstack((self.doctag_syn0norm[self._int_ind...
'Compute cosine similarity between two docvecs in the trained set, specified by int index or string tag. (TODO: Accept vectors of out-of-training-set docs, as if from inference.)'
def similarity(self, d1, d2):
return dot(matutils.unitvec(self[d1]), matutils.unitvec(self[d2]))
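The cosine similarity computed above (dot product of unit-length vectors) can be sketched in plain NumPy; `unitvec` here is an illustrative stand-in for `matutils.unitvec`, and the vectors are toy values rather than trained docvecs:

```python
import numpy as np

def unitvec(v):
    # scale to unit length; zero vectors pass through unchanged
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

d1 = np.array([1.0, 0.0, 1.0])
d2 = np.array([0.0, 1.0, 1.0])
print(float(np.dot(unitvec(d1), unitvec(d2))))  # ~0.5 (cosine similarity)
```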
'Compute cosine similarity between two sets of docvecs from the trained set, specified by int index or string tag. (TODO: Accept vectors of out-of-training-set docs, as if from inference.)'
def n_similarity(self, ds1, ds2):
    v1 = [self[doc] for doc in ds1]
    v2 = [self[doc] for doc in ds2]
    return dot(matutils.unitvec(array(v1).mean(axis=0)), matutils.unitvec(array(v2).mean(axis=0)))
'Compute cosine similarity between two post-bulk out of training documents. Document should be a list of (word) tokens.'
def similarity_unseen_docs(self, model, doc_words1, doc_words2, alpha=0.1, min_alpha=0.0001, steps=5):
    d1 = model.infer_vector(doc_words=doc_words1, alpha=alpha, min_alpha=min_alpha, steps=steps)
    d2 = model.infer_vector(doc_words=doc_words2, alpha=alpha, min_alpha=min_alpha, steps=steps)
    return dot(matutils.unitvec(d1), matutils.unitvec(d2))
'Initialize the model from an iterable of `documents`. Each document is a TaggedDocument object that will be used for training. The `documents` iterable can be simply a list of TaggedDocument elements, but for larger corpora, consider an iterable that streams the documents directly from disk/network. If you don\'t supp...
def __init__(self, documents=None, dm_mean=None, dm=1, dbow_words=0, dm_concat=0, dm_tag_count=1, docvecs=None, docvecs_mapfile=None, comment=None, trim_rule=None, **kwargs):
if 'sentences' in kwargs:
    raise DeprecationWarning("'sentences' in doc2vec was renamed to 'documents'. Please use documents parameter.")
super(Doc2Vec, self).__init__(sg=(1 + dm) % 2, null_word=dm_concat, **kwargs)
self.load = call_on_class_only
if (dm_mean is n...
'Reuse shareable structures from other_model.'
def reset_from(self, other_model):
self.docvecs.borrow_from(other_model.docvecs)
super(Doc2Vec, self).reset_from(other_model)
'Return the number of words in a given job.'
def _raw_word_count(self, job):
return sum((len(sentence.words) for sentence in job))
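The per-job word count is just the sum of each tagged document's token count. A minimal sketch with a hypothetical two-document job, using a namedtuple stand-in for gensim's TaggedDocument:

```python
from collections import namedtuple

# Simplified stand-in for gensim's TaggedDocument (words + tags).
TaggedDocument = namedtuple('TaggedDocument', 'words tags')

job = [
    TaggedDocument(['the', 'cat', 'sat'], [0]),
    TaggedDocument(['on', 'the', 'mat'], [1]),
]

# Same generator expression as _raw_word_count above.
raw_words = sum(len(doc.words) for doc in job)
print(raw_words)  # 6
```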
'Infer a vector for a given post-bulk training document. The document should be a list of (word) tokens.'
def infer_vector(self, doc_words, alpha=0.1, min_alpha=0.0001, steps=5):
doctag_vectors = empty((1, self.vector_size), dtype=REAL)
doctag_vectors[0] = self.seeded_vector(' '.join(doc_words))
doctag_locks = ones(1, dtype=REAL)
doctag_indexes = [0]
work = zeros(self.layer1_size, dtype=REAL)
if not self.sg:
    neu1 = matutils.zeros_aligned(self.layer1_size, dt...
'Estimate required memory for a model using current settings.'
def estimate_memory(self, vocab_size=None, report=None):
report = report or {}
report['doctag_lookup'] = self.docvecs.estimated_lookup_memory()
report['doctag_syn0'] = self.docvecs.count * self.vector_size * dtype(REAL).itemsize
return super(Doc2Vec, self).estimate_memory(vocab_size, report=report)
'Abbreviated name reflecting major configuration parameters.'
def __str__(self):
segments = []
if self.comment:
    segments.append('"%s"' % self.comment)
if self.sg:
    if self.dbow_words:
        segments.append('dbow+w')
    else:
        segments.append('dbow')
elif self.dm_concat:
    segments.append('dm/c')
elif self.cbow_mean:
    segments.a...
'Discard parameters that are used in training and scoring. Use this if you\'re sure you\'re done training a model. Set `keep_doctags_vectors` to False if you don\'t want to save the doctag vectors; in that case you can\'t use docvecs\' most_similar, similarity etc. methods. Set `keep_inference` to False if you don\'t want to...
def delete_temporary_training_data(self, keep_doctags_vectors=True, keep_inference=True):
if not keep_inference:
    self._minimize_model(False, False, False)
if self.docvecs and hasattr(self.docvecs, 'doctag_syn0') and not keep_doctags_vectors:
    del self.docvecs.doctag_syn0
if self.docvecs and hasattr(self.docvecs, 'doctag_syn0_lockf'):
    del self.docvecs.doctag_syn0_lo...
'Store the input-hidden weight matrix. `fname` is the file used to save the vectors in. `doctag_vec` is an optional boolean indicating whether to store document vectors. `word_vec` is an optional boolean indicating whether to store word vectors (if both doctag_vec and word_vec are True, then both vectors are stored in th...
def save_word2vec_format(self, fname, doctag_vec=False, word_vec=True, prefix='*dt_', fvocab=None, binary=False):
total_vec = len(self.wv.vocab) + len(self.docvecs)
if word_vec:
    if not doctag_vec:
        total_vec = len(self.wv.vocab)
    KeyedVectors.save_word2vec_format(self.wv, fname, fvocab, binary, total_vec)
if doctag_vec:
    with utils.smart_open(fname, 'ab') as fout:
        if (no...
'`source` can be either a string (filename) or a file object. Example:: documents = TaggedLineDocument(\'myfile.txt\') Or for compressed files:: documents = TaggedLineDocument(\'compressed_text.txt.bz2\') documents = TaggedLineDocument(\'compressed_text.txt.gz\')'
def __init__(self, source):
self.source = source
'Iterate through the lines in the source.'
def __iter__(self):
try:
    self.source.seek(0)
    for item_no, line in enumerate(self.source):
        yield TaggedDocument(utils.to_unicode(line).split(), [item_no])
except AttributeError:
    with utils.smart_open(self.source) as fin:
        for item_no, line in enumerate(fin):
            yiel...
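The file-handle branch of the iterator can be sketched against an in-memory stream. The names mirror the class above, but this is a simplified sketch (plain `str.split`, no `utils.to_unicode`), not the gensim implementation:

```python
import io
from collections import namedtuple

# Simplified stand-in for gensim's TaggedDocument.
TaggedDocument = namedtuple('TaggedDocument', 'words tags')

def iter_tagged_lines(source):
    # One document per line, tagged with its (0-based) line number.
    source.seek(0)
    for item_no, line in enumerate(source):
        yield TaggedDocument(line.split(), [item_no])

docs = list(iter_tagged_lines(io.StringIO('the quick fox\nlazy dog\n')))
```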
'Load a previously saved object from file (also see `save`). If the object was saved with large arrays stored separately, you can load these arrays via mmap (shared memory) using `mmap=\'r\'`. Default: don\'t use mmap, load large arrays as normal objects. If the file being loaded is compressed (either \'.gz\' or \'.bz2...
@classmethod def load(cls, fname, mmap=None):
logger.info('loading %s object from %s', cls.__name__, fname)
compress, subname = SaveLoad._adapt_by_suffix(fname)
obj = unpickle(fname)
obj._load_specials(fname, mmap, compress, subname)
logger.info('loaded %s', fname)
return obj
'Load any attributes that were stored specially, and give the same opportunity to any recursively included SaveLoad instances.'
def _load_specials(self, fname, mmap, compress, subname):
mmap_error = lambda x, y: IOError(
    'Cannot mmap compressed object %s in file %s. ' % (x, y) +
    'Use `load(fname, mmap=None)` or uncompress files manually.')
for attrib in getattr(self, '__recursive_saveloads', []):
    cfname = '.'.join((fname, attrib))
    ...
'Return the appropriate compress setting and filename formula.'
@staticmethod def _adapt_by_suffix(fname):
if fname.endswith('.gz') or fname.endswith('.bz2'):
    compress = True
    subname = lambda *args: '.'.join(list(args) + ['npz'])
else:
    compress = False
    subname = lambda *args: '.'.join(list(args) + ['npy'])
return compress, subname
'Save the object to file (also see `load`). If `separately` is None, automatically detect large numpy/scipy.sparse arrays in the object being stored, and store them into separate files. This avoids pickle memory errors and allows mmap\'ing large arrays back on load efficiently. You can also set `separately` manually, i...
def _smart_save(self, fname, separately=None, sep_limit=(10 * (1024 ** 2)), ignore=frozenset(), pickle_protocol=2):
logger.info('saving %s object under %s, separately %s', self.__class__.__name__, fname, separately)
compress, subname = SaveLoad._adapt_by_suffix(fname)
restores = self._save_specials(fname, separately, sep_limit, ignore, pickle_protocol, compress, subname)
try:
    pickle(s...
'Save aside any attributes that need to be handled separately, including by recursion any attributes that are themselves SaveLoad instances. Returns a list of (obj, {attrib: value, ...}) settings that the caller should use to restore each object\'s attributes that were set aside during the default pickle().'
def _save_specials(self, fname, separately, sep_limit, ignore, pickle_protocol, compress, subname):
asides = {}
sparse_matrices = (scipy.sparse.csr_matrix, scipy.sparse.csc_matrix)
if separately is None:
    separately = []
    for attrib, val in iteritems(self.__dict__):
        if isinstance(val, np.ndarray) and val.size >= sep_limit:
            separately.append(attrib)
            ...
'Save the object to file (also see `load`). `fname_or_handle` is either a string specifying the file name to save to, or an open file-like object which can be written to. If the object is a file handle, no special array handling will be performed; all attributes will be saved to the same file. If `separately` is None, ...
def save(self, fname_or_handle, separately=None, sep_limit=(10 * (1024 ** 2)), ignore=frozenset(), pickle_protocol=2):
try:
    _pickle.dump(self, fname_or_handle, protocol=pickle_protocol)
    logger.info('saved %s object', self.__class__.__name__)
except TypeError:
    self._smart_save(fname_or_handle, separately, sep_limit, ignore, pickle_protocol=pickle_protocol)
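The try/except dispatch above (treat the target as an open handle first, fall back to filename-based saving when that fails) can be sketched with the standard `pickle` module:

```python
import io
import pickle

def save(obj, fname_or_handle, protocol=2):
    # pickle.dump raises TypeError when handed a filename string instead of
    # a file object; that is what triggers the filename-based fallback.
    try:
        pickle.dump(obj, fname_or_handle, protocol=protocol)
    except TypeError:
        with open(fname_or_handle, 'wb') as fout:
            pickle.dump(obj, fout, protocol=protocol)

buf = io.BytesIO()
save({'a': 1}, buf)
restored = pickle.loads(buf.getvalue())
```

This keeps the common case (an already-open handle) on the fast path while still accepting plain filenames.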
'Override the dict.keys() function, which is used to determine the maximum internal id of a corpus, i.e. the vocabulary dimensionality. HACK: to avoid materializing the whole `range(0, self.num_terms)`, this returns only the highest id, `[self.num_terms - 1]`.'
def keys(self):
return [(self.num_terms - 1)]
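The point of the hack is that `max(d.keys())`, and hence the inferred dimensionality, stays correct without building a full range. A minimal illustration with a hypothetical subclass (not the gensim class itself):

```python
class CompactKeysDict(dict):
    # Hypothetical illustration of the keys() override: report only the
    # highest internal id instead of materializing range(num_terms).
    def __init__(self, num_terms):
        super().__init__()
        self.num_terms = num_terms

    def keys(self):
        return [self.num_terms - 1]

d = CompactKeysDict(50000)
dimensionality = max(d.keys()) + 1  # correct, with a single-element list
```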
'Wrap a `corpus` as another corpus of length `reps`. This is achieved by repeating documents from `corpus` over and over again, until the requested length `len(result) == reps` is reached. Repetition is done on the fly and efficiently, via `itertools`. >>> corpus = [[(1, 0.5)], []] # 2 documents >>> list(RepeatCorpus(corpus,...
def __init__(self, corpus, reps):
self.corpus = corpus
self.reps = reps
'Repeat a `corpus` `n` times. >>> corpus = [[(1, 0.5)], []] >>> list(RepeatCorpusNTimes(corpus, 3)) # repeat 3 times [[(1, 0.5)], [], [(1, 0.5)], [], [(1, 0.5)], []]'
def __init__(self, corpus, n):
self.corpus = corpus
self.n = n
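An `__iter__` matching the docstring example would simply replay the corpus `n` times. A sketch; the real class also inherits gensim's SaveLoad machinery:

```python
class RepeatCorpusNTimes(object):
    def __init__(self, corpus, n):
        self.corpus = corpus
        self.n = n

    def __iter__(self):
        # Replay the wrapped corpus n times, document by document.
        for _ in range(self.n):
            for document in self.corpus:
                yield document

corpus = [[(1, 0.5)], []]
repeated = list(RepeatCorpusNTimes(corpus, 3))
# [[(1, 0.5)], [], [(1, 0.5)], [], [(1, 0.5)], []]
```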
'Return a corpus that is the "head" of input iterable `corpus`. Any documents after `max_docs` are ignored. This effectively limits the length of the returned corpus to <= `max_docs`. Set `max_docs=None` for "no limit", effectively wrapping the entire input corpus.'
def __init__(self, corpus, max_docs=None):
self.corpus = corpus
self.max_docs = max_docs
'Return a corpus that is the slice of input iterable `corpus`. Negative slicing can only be used if the corpus is indexable. Otherwise, the corpus will be iterated over. Slice can also be a np.ndarray to support fancy indexing. NOTE: calculating the size of a SlicedCorpus is expensive when using a slice as the corpus h...
def __init__(self, corpus, slice_):
self.corpus = corpus
self.slice_ = slice_
self.length = None
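For a corpus that is merely iterable, applying the slice lazily comes down to `itertools.islice`. A simplified sketch of that branch only; the real class also supports indexable corpora and numpy fancy indexing:

```python
import itertools

def sliced(corpus, slice_):
    # Lazily apply the slice bounds while iterating. Negative bounds are
    # not supported here, matching the docstring's caveat for
    # non-indexable corpora.
    return itertools.islice(iter(corpus), slice_.start, slice_.stop, slice_.step)

corpus = [[(0, 1.0)], [(1, 2.0)], [(2, 3.0)], [(3, 4.0)]]
middle = list(sliced(corpus, slice(1, 3)))
# [[(1, 2.0)], [(2, 3.0)]]
```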
'Return number of docs the word occurs in, once `accumulate` has been called.'
def get_occurrences(self, word_id):
return self._get_occurrences(self.id2contiguous[word_id])
'Return number of docs the words co-occur in, once `accumulate` has been called.'
def get_co_occurrences(self, word_id1, word_id2):
return self._get_co_occurrences(self.id2contiguous[word_id1], self.id2contiguous[word_id2])
'Return number of docs the word occurs in, once `accumulate` has been called.'
def get_occurrences(self, word):
try:
    word_id = self.token2id[word]
except KeyError:
    word_id = word
return self._get_occurrences(self.id2contiguous[word_id])
'Return number of docs the words co-occur in, once `accumulate` has been called.'
def get_co_occurrences(self, word1, word2):
word_id1 = self._word2_contiguous_id(word1)
word_id2 = self._word2_contiguous_id(word2)
return self._get_co_occurrences(word_id1, word_id2)