| id (int64, 0 – 458k) | file_name (string, 4 – 119 chars) | file_path (string, 14 – 227 chars) | content (string, 24 – 9.96M chars) | size (int64, 24 – 9.96M) | language (1 class) | extension (14 classes) | total_lines (int64, 1 – 219k) | avg_line_length (float64, 2.52 – 4.63M) | max_line_length (int64, 5 – 9.91M) | alphanum_fraction (float64, 0 – 1) | repo_name (string, 7 – 101 chars) | repo_stars (int64, 100 – 139k) | repo_forks (int64, 0 – 26.4k) | repo_open_issues (int64, 0 – 2.27k) | repo_license (12 classes) | repo_extraction_date (433 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7,000 | run_similarity_queries.ipynb | piskvorky_gensim/docs/src/auto_examples/core/run_similarity_queries.ipynb |
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\nSimilarity Queries\n==================\n\nDemonstrates querying a corpus for similar documents.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Creating the Corpus\n-------------------\n\nFirst, we need to create a corpus to work with.\nThis step is the same as in the previous tutorial;\nif you completed it, feel free to skip to the next section.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from collections import defaultdict\nfrom gensim import corpora\n\ndocuments = [\n \"Human machine interface for lab abc computer applications\",\n \"A survey of user opinion of computer system response time\",\n \"The EPS user interface management system\",\n \"System and human system engineering testing of EPS\",\n \"Relation of user perceived response time to error measurement\",\n \"The generation of random binary unordered trees\",\n \"The intersection graph of paths in trees\",\n \"Graph minors IV Widths of trees and well quasi ordering\",\n \"Graph minors A survey\",\n]\n\n# remove common words and tokenize\nstoplist = set('for a of the and to in'.split())\ntexts = [\n [word for word in document.lower().split() if word not in stoplist]\n for document in documents\n]\n\n# remove words that appear only once\nfrequency = defaultdict(int)\nfor text in texts:\n for token in text:\n frequency[token] += 1\n\ntexts = [\n [token for token in text if frequency[token] > 1]\n for text in texts\n]\n\ndictionary = corpora.Dictionary(texts)\ncorpus = [dictionary.doc2bow(text) for text in texts]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarity interface\n--------------------\n\nIn the previous tutorials on\n`sphx_glr_auto_examples_core_run_corpora_and_vector_spaces.py`\nand\n`sphx_glr_auto_examples_core_run_topics_and_transformations.py`,\nwe covered what it means to create a corpus in the Vector Space Model and how\nto transform it between different vector spaces. A common reason for such a\ncharade is that we want to determine **similarity between pairs of\ndocuments**, or the **similarity between a specific document and a set of\nother documents** (such as a user query vs. indexed documents).\n\nTo show how this can be done in gensim, let us consider the same corpus as in the\nprevious examples (which really originally comes from Deerwester et al.'s\n`\"Indexing by Latent Semantic Analysis\" <http://www.cs.bham.ac.uk/~pxt/IDA/lsa_ind.pdf>`_\nseminal 1990 article).\nTo follow Deerwester's example, we first use this tiny corpus to define a 2-dimensional\nLSI space:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim import models\nlsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the purposes of this tutorial, there are only two things you need to know about LSI.\nFirst, it's just another transformation: it transforms vectors from one space to another.\nSecond, the benefit of LSI is that enables identifying patterns and relationships between terms (in our case, words in a document) and topics.\nOur LSI space is two-dimensional (`num_topics = 2`) so there are two topics, but this is arbitrary.\nIf you're interested, you can read more about LSI here: `Latent Semantic Indexing <https://en.wikipedia.org/wiki/Latent_semantic_indexing>`_:\n\nNow suppose a user typed in the query `\"Human computer interaction\"`. We would\nlike to sort our nine corpus documents in decreasing order of relevance to this query.\nUnlike modern search engines, here we only concentrate on a single aspect of possible\nsimilarities---on apparent semantic relatedness of their texts (words). No hyperlinks,\nno random-walk static ranks, just a semantic extension over the boolean keyword match:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"doc = \"Human computer interaction\"\nvec_bow = dictionary.doc2bow(doc.lower().split())\nvec_lsi = lsi[vec_bow] # convert the query to LSI space\nprint(vec_lsi)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In addition, we will be considering `cosine similarity <http://en.wikipedia.org/wiki/Cosine_similarity>`_\nto determine the similarity of two vectors. Cosine similarity is a standard measure\nin Vector Space Modeling, but wherever the vectors represent probability distributions,\n`different similarity measures <http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Symmetrised_divergence>`_\nmay be more appropriate.\n\nInitializing query structures\n++++++++++++++++++++++++++++++++\n\nTo prepare for similarity queries, we need to enter all documents which we want\nto compare against subsequent queries. In our case, they are the same nine documents\nused for training LSI, converted to 2-D LSA space. But that's only incidental, we\nmight also be indexing a different corpus altogether.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim import similarities\nindex = similarities.MatrixSimilarity(lsi[corpus]) # transform corpus to LSI space and index it"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-danger\"><h4>Warning</h4><p>The class :class:`similarities.MatrixSimilarity` is only appropriate when the whole\n set of vectors fits into memory. For example, a corpus of one million documents\n would require 2GB of RAM in a 256-dimensional LSI space, when used with this class.\n\n Without 2GB of free RAM, you would need to use the :class:`similarities.Similarity` class.\n This class operates in fixed memory, by splitting the index across multiple files on disk, called shards.\n It uses :class:`similarities.MatrixSimilarity` and :class:`similarities.SparseMatrixSimilarity` internally,\n so it is still fast, although slightly more complex.</p></div>\n\nIndex persistency is handled via the standard :func:`save` and :func:`load` functions:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"index.save('/tmp/deerwester.index')\nindex = similarities.MatrixSimilarity.load('/tmp/deerwester.index')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is true for all similarity indexing classes (:class:`similarities.Similarity`,\n:class:`similarities.MatrixSimilarity` and :class:`similarities.SparseMatrixSimilarity`).\nAlso in the following, `index` can be an object of any of these. When in doubt,\nuse :class:`similarities.Similarity`, as it is the most scalable version, and it also\nsupports adding more documents to the index later.\n\nPerforming queries\n++++++++++++++++++\n\nTo obtain similarities of our query document against the nine indexed documents:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"sims = index[vec_lsi] # perform a similarity query against the corpus\nprint(list(enumerate(sims))) # print (document_number, document_similarity) 2-tuples"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Cosine measure returns similarities in the range `<-1, 1>` (the greater, the more similar),\nso that the first document has a score of 0.99809301 etc.\n\nWith some standard Python magic we sort these similarities into descending\norder, and obtain the final answer to the query `\"Human computer interaction\"`:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"sims = sorted(enumerate(sims), key=lambda item: -item[1])\nfor doc_position, doc_score in sims:\n print(doc_score, documents[doc_position])"
]
},
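{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch of the alternative mentioned in the warning above, the disk-backed :class:`similarities.Similarity` class can answer the very same query. The example below assumes the `lsi`, `corpus` and `vec_lsi` objects defined earlier and stores its index shards under the arbitrarily chosen prefix `/tmp/deerwester_shards`:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim import similarities\n\n# a minimal sketch: build a shard-backed index over the same 2-D LSI vectors\ndisk_index = similarities.Similarity('/tmp/deerwester_shards', lsi[corpus], num_features=2)\nprint(sorted(enumerate(disk_index[vec_lsi]), key=lambda item: -item[1]))"
]
},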
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The thing to note here is that documents no. 2 (``\"The EPS user interface management system\"``)\nand 4 (``\"Relation of user perceived response time to error measurement\"``) would never be returned by\na standard boolean fulltext search, because they do not share any common words with ``\"Human\ncomputer interaction\"``. However, after applying LSI, we can observe that both of\nthem received quite high similarity scores (no. 2 is actually the most similar!),\nwhich corresponds better to our intuition of\nthem sharing a \"computer-human\" related topic with the query. In fact, this semantic\ngeneralization is the reason why we apply transformations and do topic modelling\nin the first place.\n\nWhere next?\n------------\n\nCongratulations, you have finished the tutorials -- now you know how gensim works :-)\nTo delve into more details, you can browse through the `apiref`,\nsee the `wiki` or perhaps check out `distributed` in `gensim`.\n\nGensim is a fairly mature package that has been used successfully by many individuals and companies, both for rapid prototyping and in production.\nThat doesn't mean it's perfect though:\n\n* there are parts that could be implemented more efficiently (in C, for example), or make better use of parallelism (multiple machines cores)\n* new algorithms are published all the time; help gensim keep up by `discussing them <http://groups.google.com/group/gensim>`_ and `contributing code <https://github.com/piskvorky/gensim/wiki/Developer-page>`_\n* your **feedback is most welcome** and appreciated (and it's not just the code!):\n `bug reports <https://github.com/piskvorky/gensim/issues>`_ or\n `user stories and general questions <http://groups.google.com/group/gensim/topics>`_.\n\nGensim has no ambition to become an all-encompassing framework, across all NLP (or even Machine Learning) subfields.\nIts mission is to help NLP practitioners try out popular topic modelling algorithms\non large datasets easily, and to facilitate prototyping of new algorithms for researchers.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimg = mpimg.imread('run_similarity_queries.png')\nimgplot = plt.imshow(img)\n_ = plt.axis('off')"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
| 12,229 | Python | .py | 198 | 55.060606 | 2,046 | 0.667055 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,001 | run_corpora_and_vector_spaces.ipynb | piskvorky_gensim/docs/src/auto_examples/core/run_corpora_and_vector_spaces.ipynb |
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Corpora and Vector Spaces\n\nDemonstrates transforming text into a vector space representation.\n\nAlso introduces corpus streaming and persistence to disk in various formats.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let\u2019s create a small corpus of nine short documents [1]_:\n\n\n## From Strings to Vectors\n\nThis time, let's start from documents represented as strings:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"documents = [\n \"Human machine interface for lab abc computer applications\",\n \"A survey of user opinion of computer system response time\",\n \"The EPS user interface management system\",\n \"System and human system engineering testing of EPS\",\n \"Relation of user perceived response time to error measurement\",\n \"The generation of random binary unordered trees\",\n \"The intersection graph of paths in trees\",\n \"Graph minors IV Widths of trees and well quasi ordering\",\n \"Graph minors A survey\",\n]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is a tiny corpus of nine documents, each consisting of only a single sentence.\n\nFirst, let's tokenize the documents, remove common words (using a toy stoplist)\nas well as words that only appear once in the corpus:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from pprint import pprint # pretty-printer\nfrom collections import defaultdict\n\n# remove common words and tokenize\nstoplist = set('for a of the and to in'.split())\ntexts = [\n [word for word in document.lower().split() if word not in stoplist]\n for document in documents\n]\n\n# remove words that appear only once\nfrequency = defaultdict(int)\nfor text in texts:\n for token in text:\n frequency[token] += 1\n\ntexts = [\n [token for token in text if frequency[token] > 1]\n for text in texts\n]\n\npprint(texts)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Your way of processing the documents will likely vary; here, I only split on whitespace\nto tokenize, followed by lowercasing each word. In fact, I use this particular\n(simplistic and inefficient) setup to mimic the experiment done in Deerwester et al.'s\noriginal LSA article [1]_.\n\nThe ways to process documents are so varied and application- and language-dependent that I\ndecided to *not* constrain them by any interface. Instead, a document is represented\nby the features extracted from it, not by its \"surface\" string form: how you get to\nthe features is up to you. Below I describe one common, general-purpose approach (called\n:dfn:`bag-of-words`), but keep in mind that different application domains call for\ndifferent features, and, as always, it's `garbage in, garbage out <http://en.wikipedia.org/wiki/Garbage_In,_Garbage_Out>`_...\n\nTo convert documents to vectors, we'll use a document representation called\n`bag-of-words <http://en.wikipedia.org/wiki/Bag_of_words>`_. In this representation,\neach document is represented by one vector where each vector element represents\na question-answer pair, in the style of:\n\n- Question: How many times does the word `system` appear in the document?\n- Answer: Once.\n\nIt is advantageous to represent the questions only by their (integer) ids. The mapping\nbetween the questions and ids is called a dictionary:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim import corpora\ndictionary = corpora.Dictionary(texts)\ndictionary.save('/tmp/deerwester.dict') # store the dictionary, for future reference\nprint(dictionary)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we assigned a unique integer id to all words appearing in the corpus with the\n:class:`gensim.corpora.dictionary.Dictionary` class. This sweeps across the texts, collecting word counts\nand relevant statistics. In the end, we see there are twelve distinct words in the\nprocessed corpus, which means each document will be represented by twelve numbers (ie., by a 12-D vector).\nTo see the mapping between words and their ids:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(dictionary.token2id)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To actually convert tokenized documents to vectors:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"new_doc = \"Human computer interaction\"\nnew_vec = dictionary.doc2bow(new_doc.lower().split())\nprint(new_vec) # the word \"interaction\" does not appear in the dictionary and is ignored"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The function :func:`doc2bow` simply counts the number of occurrences of\neach distinct word, converts the word to its integer word id\nand returns the result as a sparse vector. The sparse vector ``[(0, 1), (1, 1)]``\ntherefore reads: in the document `\"Human computer interaction\"`, the words `computer`\n(id 0) and `human` (id 1) appear once; the other ten dictionary words appear (implicitly) zero times.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"corpus = [dictionary.doc2bow(text) for text in texts]\ncorpora.MmCorpus.serialize('/tmp/deerwester.mm', corpus) # store to disk, for later use\nprint(corpus)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By now it should be clear that the vector feature with ``id=10`` stands for the question \"How many\ntimes does the word `graph` appear in the document?\" and that the answer is \"zero\" for\nthe first six documents and \"one\" for the remaining three.\n\n\n## Corpus Streaming -- One Document at a Time\n\nNote that `corpus` above resides fully in memory, as a plain Python list.\nIn this simple example, it doesn't matter much, but just to make things clear,\nlet's assume there are millions of documents in the corpus. Storing all of them in RAM won't do.\nInstead, let's assume the documents are stored in a file on disk, one document per line. Gensim\nonly requires that a corpus must be able to return one document vector at a time:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from smart_open import open # for transparently opening remote files\n\n\nclass MyCorpus:\n def __iter__(self):\n for line in open('https://radimrehurek.com/mycorpus.txt'):\n # assume there's one document per line, tokens separated by whitespace\n yield dictionary.doc2bow(line.lower().split())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The full power of Gensim comes from the fact that a corpus doesn't have to be\na ``list``, or a ``NumPy`` array, or a ``Pandas`` dataframe, or whatever.\nGensim *accepts any object that, when iterated over, successively yields\ndocuments*.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# This flexibility allows you to create your own corpus classes that stream the\n# documents directly from disk, network, database, dataframes... The models\n# in Gensim are implemented such that they don't require all vectors to reside\n# in RAM at once. You can even create the documents on the fly!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Download the sample `mycorpus.txt file here <https://radimrehurek.com/mycorpus.txt>`_. The assumption that\neach document occupies one line in a single file is not important; you can mold\nthe `__iter__` function to fit your input format, whatever it is.\nWalking directories, parsing XML, accessing the network...\nJust parse your input to retrieve a clean list of tokens in each document,\nthen convert the tokens via a dictionary to their ids and yield the resulting sparse vector inside `__iter__`.\n\n"
]
},
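{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, a corpus that walks a local directory of plain-text files, one document per file, might look like the sketch below; the directory name `my_text_dir` and the `.txt` suffix are illustrative assumptions, not anything gensim requires:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\n\n\nclass MyDirectoryCorpus:\n    \"\"\"A sketch: yield one bag-of-words vector per *.txt file under top_dir.\"\"\"\n    def __init__(self, top_dir):\n        self.top_dir = top_dir\n\n    def __iter__(self):\n        for file_name in sorted(os.listdir(self.top_dir)):\n            if file_name.endswith('.txt'):\n                with open(os.path.join(self.top_dir, file_name)) as fin:\n                    yield dictionary.doc2bow(fin.read().lower().split())\n\n# corpus_from_dir = MyDirectoryCorpus('my_text_dir')  # hypothetical directory"
]
},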
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!\nprint(corpus_memory_friendly)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Corpus is now an object. We didn't define any way to print it, so `print` just outputs address\nof the object in memory. Not very useful. To see the constituent vectors, let's\niterate over the corpus and print each document vector (one at a time):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"for vector in corpus_memory_friendly: # load one vector into memory at a time\n print(vector)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Although the output is the same as for the plain Python list, the corpus is now much\nmore memory friendly, because at most one vector resides in RAM at a time. Your\ncorpus can now be as large as you want.\n\nSimilarly, to construct the dictionary without loading all texts into memory:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# collect statistics about all tokens\ndictionary = corpora.Dictionary(line.lower().split() for line in open('https://radimrehurek.com/mycorpus.txt'))\n# remove stop words and words that appear only once\nstop_ids = [\n dictionary.token2id[stopword]\n for stopword in stoplist\n if stopword in dictionary.token2id\n]\nonce_ids = [tokenid for tokenid, docfreq in dictionary.dfs.items() if docfreq == 1]\ndictionary.filter_tokens(stop_ids + once_ids) # remove stop words and words that appear only once\ndictionary.compactify() # remove gaps in id sequence after words that were removed\nprint(dictionary)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And that is all there is to it! At least as far as bag-of-words representation is concerned.\nOf course, what we do with such a corpus is another question; it is not at all clear\nhow counting the frequency of distinct words could be useful. As it turns out, it isn't, and\nwe will need to apply a transformation on this simple representation first, before\nwe can use it to compute any meaningful document vs. document similarities.\nTransformations are covered in the next tutorial\n(`sphx_glr_auto_examples_core_run_topics_and_transformations.py`),\nbut before that, let's briefly turn our attention to *corpus persistency*.\n\n\n## Corpus Formats\n\nThere exist several file formats for serializing a Vector Space corpus (~sequence of vectors) to disk.\n`Gensim` implements them via the *streaming corpus interface* mentioned earlier:\ndocuments are read from (resp. stored to) disk in a lazy fashion, one document at\na time, without the whole corpus being read into main memory at once.\n\nOne of the more notable file formats is the `Market Matrix format <http://math.nist.gov/MatrixMarket/formats.html>`_.\nTo save a corpus in the Matrix Market format:\n\ncreate a toy corpus of 2 documents, as a plain Python list\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"corpus = [[(1, 0.5)], []] # make one document empty, for the heck of it\n\ncorpora.MmCorpus.serialize('/tmp/corpus.mm', corpus)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Other formats include `Joachim's SVMlight format <http://svmlight.joachims.org/>`_,\n`Blei's LDA-C format <https://github.com/blei-lab/lda-c>`_ and\n`GibbsLDA++ format <http://gibbslda.sourceforge.net/>`_.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"corpora.SvmLightCorpus.serialize('/tmp/corpus.svmlight', corpus)\ncorpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)\ncorpora.LowCorpus.serialize('/tmp/corpus.low', corpus)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Conversely, to load a corpus iterator from a Matrix Market file:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"corpus = corpora.MmCorpus('/tmp/corpus.mm')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Corpus objects are streams, so typically you won't be able to print them directly:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(corpus)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Instead, to view the contents of a corpus:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# one way of printing a corpus: load it entirely into memory\nprint(list(corpus)) # calling list() will convert any sequence to a plain Python list"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"or\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# another way of doing it: print one document at a time, making use of the streaming interface\nfor doc in corpus:\n print(doc)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The second way is obviously more memory-friendly, but for testing and development\npurposes, nothing beats the simplicity of calling ``list(corpus)``.\n\nTo save the same Matrix Market document stream in Blei's LDA-C format,\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"corpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this way, `gensim` can also be used as a memory-efficient **I/O format conversion tool**:\njust load a document stream using one format and immediately save it in another format.\nAdding new formats is dead easy, check out the `code for the SVMlight corpus\n<https://github.com/piskvorky/gensim/blob/develop/gensim/corpora/svmlightcorpus.py>`_ for an example.\n\n## Compatibility with NumPy and SciPy\n\nGensim also contains `efficient utility functions <http://radimrehurek.com/gensim/matutils.html>`_\nto help converting from/to numpy matrices\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import gensim\nimport numpy as np\nnumpy_matrix = np.random.randint(10, size=[5, 2]) # random matrix as an example\ncorpus = gensim.matutils.Dense2Corpus(numpy_matrix)\n# numpy_matrix = gensim.matutils.corpus2dense(corpus, num_terms=number_of_corpus_features)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and from/to `scipy.sparse` matrices\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import scipy.sparse\nscipy_sparse_matrix = scipy.sparse.random(5, 2) # random sparse matrix as example\ncorpus = gensim.matutils.Sparse2Corpus(scipy_sparse_matrix)\nscipy_csc_matrix = gensim.matutils.corpus2csc(corpus)"
]
},
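{
"cell_type": "markdown",
"metadata": {},
"source": [
"To close the loop hinted at in the commented-out line of the NumPy cell above, a corpus can also be converted back into a dense NumPy array with :func:`gensim.matutils.corpus2dense`; here `num_terms` is simply taken from the shape of the sparse matrix we just converted:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dense_back = gensim.matutils.corpus2dense(corpus, num_terms=scipy_sparse_matrix.shape[0])\nprint(dense_back.shape)  # terms x documents, here (5, 2)"
]
},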
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What Next\n\nRead about `sphx_glr_auto_examples_core_run_topics_and_transformations.py`.\n\n## References\n\nFor a complete reference (Want to prune the dictionary to a smaller size?\nOptimize converting between corpora and NumPy/SciPy arrays?), see the `apiref`.\n\n.. [1] This is the same corpus as used in\n `Deerwester et al. (1990): Indexing by Latent Semantic Analysis <http://www.cs.bham.ac.uk/~pxt/IDA/lsa_ind.pdf>`_, Table 2.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimg = mpimg.imread('run_corpora_and_vector_spaces.png')\nimgplot = plt.imshow(img)\n_ = plt.axis('off')"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
| 19,367 | Python | .py | 432 | 38.027778 | 1,392 | 0.627271 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,002 | run_similarity_queries.py | piskvorky_gensim/docs/src/auto_examples/core/run_similarity_queries.py |
r"""
Similarity Queries
==================
Demonstrates querying a corpus for similar documents.
"""
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
###############################################################################
#
# Creating the Corpus
# -------------------
#
# First, we need to create a corpus to work with.
# This step is the same as in the previous tutorial;
# if you completed it, feel free to skip to the next section.
from collections import defaultdict
from gensim import corpora
documents = [
"Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey",
]
# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [
[word for word in document.lower().split() if word not in stoplist]
for document in documents
]
# remove words that appear only once
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [
[token for token in text if frequency[token] > 1]
for text in texts
]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
###############################################################################
# Similarity interface
# --------------------
#
# In the previous tutorials on
# :ref:`sphx_glr_auto_examples_core_run_corpora_and_vector_spaces.py`
# and
# :ref:`sphx_glr_auto_examples_core_run_topics_and_transformations.py`,
# we covered what it means to create a corpus in the Vector Space Model and how
# to transform it between different vector spaces. A common reason for such a
# charade is that we want to determine **similarity between pairs of
# documents**, or the **similarity between a specific document and a set of
# other documents** (such as a user query vs. indexed documents).
#
# To show how this can be done in gensim, let us consider the same corpus as in the
# previous examples (which really originally comes from Deerwester et al.'s
# `"Indexing by Latent Semantic Analysis" <http://www.cs.bham.ac.uk/~pxt/IDA/lsa_ind.pdf>`_
# seminal 1990 article).
# To follow Deerwester's example, we first use this tiny corpus to define a 2-dimensional
# LSI space:
from gensim import models
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
###############################################################################
# For the purposes of this tutorial, there are only two things you need to know about LSI.
# First, it's just another transformation: it transforms vectors from one space to another.
# Second, the benefit of LSI is that it enables identifying patterns and relationships between terms (in our case, words in a document) and topics.
# Our LSI space is two-dimensional (`num_topics = 2`) so there are two topics, but this is arbitrary.
# If you're interested, you can read more about LSI here: `Latent Semantic Indexing <https://en.wikipedia.org/wiki/Latent_semantic_indexing>`_.
#
# Now suppose a user typed in the query `"Human computer interaction"`. We would
# like to sort our nine corpus documents in decreasing order of relevance to this query.
# Unlike modern search engines, here we only concentrate on a single aspect of possible
# similarities---on apparent semantic relatedness of their texts (words). No hyperlinks,
# no random-walk static ranks, just a semantic extension over the boolean keyword match:
doc = "Human computer interaction"
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lsi = lsi[vec_bow] # convert the query to LSI space
print(vec_lsi)
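###############################################################################
# As a quick sketch (the exact numbers will differ between runs), you can also
# peek at the two latent topics themselves via ``print_topics``:
for topic_id, topic in lsi.print_topics(num_topics=2):
    print(topic_id, topic)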
###############################################################################
# In addition, we will be considering `cosine similarity <https://en.wikipedia.org/wiki/Cosine_similarity>`_
# to determine the similarity of two vectors. Cosine similarity is a standard measure
# in Vector Space Modeling, but wherever the vectors represent probability distributions,
# `different similarity measures <https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Symmetrised_divergence>`_
# may be more appropriate.
#
# Initializing query structures
# ++++++++++++++++++++++++++++++++
#
# To prepare for similarity queries, we need to enter all documents which we want
# to compare against subsequent queries. In our case, they are the same nine documents
# used for training LSI, converted to 2-D LSA space. But that's only incidental, we
# might also be indexing a different corpus altogether.
from gensim import similarities
index = similarities.MatrixSimilarity(lsi[corpus]) # transform corpus to LSI space and index it
###############################################################################
# .. warning::
# The class :class:`similarities.MatrixSimilarity` is only appropriate when the whole
# set of vectors fits into memory. For example, a corpus of one million documents
# would require 2GB of RAM in a 256-dimensional LSI space, when used with this class.
#
# Without 2GB of free RAM, you would need to use the :class:`similarities.Similarity` class.
# This class operates in fixed memory, by splitting the index across multiple files on disk, called shards.
# It uses :class:`similarities.MatrixSimilarity` and :class:`similarities.SparseMatrixSimilarity` internally,
# so it is still fast, although slightly more complex.
#
# Index persistency is handled via the standard :func:`save` and :func:`load` functions:
index.save('/tmp/deerwester.index')
index = similarities.MatrixSimilarity.load('/tmp/deerwester.index')
###############################################################################
# This is true for all similarity indexing classes (:class:`similarities.Similarity`,
# :class:`similarities.MatrixSimilarity` and :class:`similarities.SparseMatrixSimilarity`).
# Also in the following, `index` can be an object of any of these. When in doubt,
# use :class:`similarities.Similarity`, as it is the most scalable version, and it also
# supports adding more documents to the index later.
#
# Performing queries
# ++++++++++++++++++
#
# To obtain similarities of our query document against the nine indexed documents:
sims = index[vec_lsi] # perform a similarity query against the corpus
print(list(enumerate(sims))) # print (document_number, document_similarity) 2-tuples
###############################################################################
# Cosine measure returns similarities in the range `<-1, 1>` (the greater, the more similar),
# so that the first document has a score of 0.99809301 etc.
#
# With some standard Python magic we sort these similarities into descending
# order, and obtain the final answer to the query `"Human computer interaction"`:
sims = sorted(enumerate(sims), key=lambda item: -item[1])
for doc_position, doc_score in sims:
print(doc_score, documents[doc_position])
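###############################################################################
# As an optional sketch: if you only ever need the top few hits, the index can
# do the sorting for you. Setting the ``num_best`` attribute makes a query
# return ``(document_number, similarity)`` pairs, best first:
index.num_best = 3
print(index[vec_lsi])  # the three most similar documents
index.num_best = None  # back to returning the full similarity array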
###############################################################################
# The thing to note here is that documents no. 2 (``"The EPS user interface management system"``)
# and 4 (``"Relation of user perceived response time to error measurement"``) would never be returned by
# a standard boolean fulltext search, because they do not share any common words with ``"Human
# computer interaction"``. However, after applying LSI, we can observe that both of
# them received quite high similarity scores (no. 2 is actually the most similar!),
# which corresponds better to our intuition of
# them sharing a "computer-human" related topic with the query. In fact, this semantic
# generalization is the reason why we apply transformations and do topic modelling
# in the first place.
#
# Where next?
# ------------
#
# Congratulations, you have finished the tutorials -- now you know how gensim works :-)
# To delve into more details, you can browse through the :ref:`apiref`,
# see the :ref:`wiki` or perhaps check out :ref:`distributed` in `gensim`.
#
# Gensim is a fairly mature package that has been used successfully by many individuals and companies, both for rapid prototyping and in production.
# That doesn't mean it's perfect though:
#
# * there are parts that could be implemented more efficiently (in C, for example), or make better use of parallelism (multiple machines or cores)
# * new algorithms are published all the time; help gensim keep up by `discussing them <https://groups.google.com/g/gensim>`_ and `contributing code <https://github.com/piskvorky/gensim/wiki/Developer-page>`_
# * your **feedback is most welcome** and appreciated (and it's not just the code!):
# `bug reports <https://github.com/piskvorky/gensim/issues>`_ or
# `user stories and general questions <https://groups.google.com/g/gensim>`_.
#
# Gensim has no ambition to become an all-encompassing framework, across all NLP (or even Machine Learning) subfields.
# Its mission is to help NLP practitioners try out popular topic modelling algorithms
# on large datasets easily, and to facilitate prototyping of new algorithms for researchers.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img = mpimg.imread('run_similarity_queries.png')
imgplot = plt.imshow(img)
_ = plt.axis('off')
| 9,565 | Python | .py | 170 | 54.729412 | 208 | 0.711268 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,003 | run_word2vec.py | piskvorky_gensim/docs/src/auto_examples/tutorials/run_word2vec.py |
r"""
Word2Vec Model
==============
Introduces Gensim's Word2Vec model and demonstrates its use on the `Lee Evaluation Corpus
<https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>`_.
"""
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
###############################################################################
# In case you missed the buzz, Word2Vec is a widely used algorithm based on neural
# networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow).
# Using large amounts of unannotated plain text, word2vec learns relationships
# between words automatically. The output is a set of vectors, one vector per word,
# with remarkable linear relationships that allow us to do things like:
#
# * vec("king") - vec("man") + vec("woman") =~ vec("queen")
# * vec("Montreal Canadiens") – vec("Montreal") + vec("Toronto") =~ vec("Toronto Maple Leafs").
#
# Word2vec is very useful in `automatic text tagging
# <https://github.com/RaRe-Technologies/movie-plots-by-genre>`_\ , recommender
# systems and machine translation.
#
# This tutorial:
#
# #. Introduces ``Word2Vec`` as an improvement over traditional bag-of-words
# #. Shows off a demo of ``Word2Vec`` using a pre-trained model
# #. Demonstrates training a new model from your own data
# #. Demonstrates loading and saving models
# #. Introduces several training parameters and demonstrates their effect
# #. Discusses memory requirements
# #. Visualizes Word2Vec embeddings by applying dimensionality reduction
#
# Review: Bag-of-words
# --------------------
#
# .. Note:: Feel free to skip these review sections if you're already familiar with the models.
#
# You may be familiar with the `bag-of-words model
# <https://en.wikipedia.org/wiki/Bag-of-words_model>`_ from the
# :ref:`core_concepts_vector` section.
# This model transforms each document to a fixed-length vector of integers.
# For example, given the sentences:
#
# - ``John likes to watch movies. Mary likes movies too.``
# - ``John also likes to watch football games. Mary hates football.``
#
# The model outputs the vectors:
#
# - ``[1, 2, 1, 1, 2, 1, 1, 0, 0, 0, 0]``
# - ``[1, 1, 1, 1, 0, 1, 0, 1, 2, 1, 1]``
#
# Each vector has 11 elements, where each element counts the number of times a
# particular word occurred in the document.
# The order of elements is arbitrary.
# In the example above, the order of the elements corresponds to the words:
# ``["John", "likes", "to", "watch", "movies", "Mary", "too", "also", "football", "games", "hates"]``.
#
# Bag-of-words models are surprisingly effective, but have several weaknesses.
#
# First, they lose all information about word order: "John likes Mary" and
# "Mary likes John" correspond to identical vectors. There is a solution: bag
# of `n-grams <https://en.wikipedia.org/wiki/N-gram>`__
# models consider word phrases of length n to represent documents as
# fixed-length vectors to capture local word order but suffer from data
# sparsity and high dimensionality.
#
# Second, the model does not attempt to learn the meaning of the underlying
# words, and as a consequence, the distance between vectors doesn't always
# reflect the difference in meaning. The ``Word2Vec`` model addresses this
# second problem.
#
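# As a small worked sketch, the two vectors above can be reproduced with plain
# Python; the vocabulary order is written out by hand to match the list given
# in the previous paragraph:
vocab = ["John", "likes", "to", "watch", "movies", "Mary", "too",
         "also", "football", "games", "hates"]
docs = [
    "John likes to watch movies Mary likes movies too",
    "John also likes to watch football games Mary hates football",
]
for doc in docs:
    tokens = doc.split()
    print([tokens.count(word) for word in vocab])  # counts per vocabulary word
###############################################################################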
# Introducing: the ``Word2Vec`` Model
# -----------------------------------
#
# ``Word2Vec`` is a more recent model that embeds words in a lower-dimensional
# vector space using a shallow neural network. The result is a set of
# word-vectors where vectors close together in vector space have similar
# meanings based on context, and word-vectors distant to each other have
# differing meanings. For example, ``strong`` and ``powerful`` would be close
# together and ``strong`` and ``Paris`` would be relatively far.
#
# There are two versions of this model, and the :py:class:`~gensim.models.word2vec.Word2Vec`
# class implements them both:
#
# 1. Skip-grams (SG)
# 2. Continuous-bag-of-words (CBOW)
#
# .. Important::
# Don't let the implementation details below scare you.
# They're advanced material: if it's too much, then move on to the next section.
#
# The `Word2Vec Skip-gram <http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model>`__
# model, for example, takes in pairs (word1, word2) generated by moving a
# window across text data, and trains a 1-hidden-layer neural network on the
# synthetic task of predicting, for a given input word, a probability
# distribution over the words near it. A virtual `one-hot
# <https://en.wikipedia.org/wiki/One-hot>`__ encoding of words
# goes through a 'projection layer' to the hidden layer; these projection
# weights are later interpreted as the word embeddings. So if the hidden layer
# has 300 neurons, this network will give us 300-dimensional word embeddings.
#
# Continuous-bag-of-words Word2vec is very similar to the skip-gram model. It
# is also a 1-hidden-layer neural network. The synthetic training task now uses
# the average of multiple input context words, rather than a single word as in
# skip-gram, to predict the center word. Again, the projection weights that
# turn one-hot words into averageable vectors, of the same width as the hidden
# layer, are interpreted as the word embeddings.
#
###############################################################################
# Word2Vec Demo
# -------------
#
# To see what ``Word2Vec`` can do, let's download a pre-trained model and play
# around with it. We will fetch the Word2Vec model trained on part of the
# Google News dataset, covering approximately 3 million words and phrases. Such
# a model can take hours to train, but since it's already available,
# downloading and loading it with Gensim takes minutes.
#
# .. Important::
# The model is approximately 2GB, so you'll need a decent network connection
# to proceed. Otherwise, skip ahead to the "Training Your Own Model" section
# below.
#
# You may also check out an `online word2vec demo
# <https://radimrehurek.com/2014/02/word2vec-tutorial/#app>`_ where you can try
# this vector algebra for yourself. That demo runs ``word2vec`` on the
# **entire** Google News dataset, of **about 100 billion words**.
#
import gensim.downloader as api
wv = api.load('word2vec-google-news-300')
###############################################################################
# A common operation is to retrieve the vocabulary of a model. That is trivial:
for index, word in enumerate(wv.index_to_key):
if index == 10:
break
print(f"word #{index}/{len(wv.index_to_key)} is {word}")
###############################################################################
# We can easily obtain vectors for terms the model is familiar with:
#
vec_king = wv['king']
###############################################################################
# Unfortunately, the model is unable to infer vectors for unfamiliar words.
# This is one limitation of Word2Vec: if this limitation matters to you, check
# out the FastText model.
#
try:
vec_cameroon = wv['cameroon']
except KeyError:
print("The word 'cameroon' does not appear in this model")
###############################################################################
# Moving on, ``Word2Vec`` supports several word similarity tasks out of the
# box. You can see how the similarity intuitively decreases as the words get
# less and less similar.
#
pairs = [
('car', 'minivan'), # a minivan is a kind of car
('car', 'bicycle'), # still a wheeled vehicle
('car', 'airplane'), # ok, no wheels, but still a vehicle
('car', 'cereal'), # ... and so on
('car', 'communism'),
]
for w1, w2 in pairs:
print('%r\t%r\t%.2f' % (w1, w2, wv.similarity(w1, w2)))
###############################################################################
# Print the 5 words most similar to the combination of "car" and "minivan"
print(wv.most_similar(positive=['car', 'minivan'], topn=5))
###############################################################################
# Which of the below does not belong in the sequence?
print(wv.doesnt_match(['fire', 'water', 'land', 'sea', 'air', 'car']))
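###############################################################################
# As a brief sketch, the analogy arithmetic from the introduction can be
# reproduced with ``most_similar`` on the same pre-trained vectors:
print(wv.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))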
###############################################################################
# Training Your Own Model
# -----------------------
#
# To start, you'll need some data for training the model. For the following
# examples, we'll use the `Lee Evaluation Corpus
# <https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>`_
# (which you `already have
# <https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/test/test_data/lee_background.cor>`_
# if you've installed Gensim).
#
# This corpus is small enough to fit entirely in memory, but we'll implement a
# memory-friendly iterator that reads it line-by-line to demonstrate how you
# would handle a larger corpus.
#
from gensim.test.utils import datapath
from gensim import utils
class MyCorpus:
"""An iterator that yields sentences (lists of str)."""
def __iter__(self):
corpus_path = datapath('lee_background.cor')
for line in open(corpus_path):
# assume there's one document per line, tokens separated by whitespace
yield utils.simple_preprocess(line)
###############################################################################
# If we wanted to do any custom preprocessing, e.g. decode a non-standard
# encoding, lowercase, remove numbers, extract named entities... All of this can
# be done inside the ``MyCorpus`` iterator and ``word2vec`` doesn’t need to
# know. All that is required is that the input yields one sentence (list of
# utf8 words) after another.
#
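# For instance, a hypothetical variant of ``MyCorpus`` that also strips digits
# before tokenizing could look like this (the regular expression is our own
# choice, not anything word2vec requires):
import re

class MyCleanedCorpus:
    """Like MyCorpus, but drops digits from each line before tokenizing."""
    def __iter__(self):
        corpus_path = datapath('lee_background.cor')
        for line in open(corpus_path):
            line = re.sub(r'\d+', ' ', line)  # remove numbers
            yield utils.simple_preprocess(line)
###############################################################################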
# Let's go ahead and train a model on our corpus. Don't worry about the
# training parameters much for now, we'll revisit them later.
#
import gensim.models
sentences = MyCorpus()
model = gensim.models.Word2Vec(sentences=sentences)
###############################################################################
# Once we have our model, we can use it in the same way as in the demo above.
#
# The main part of the model is ``model.wv``\ , where "wv" stands for "word vectors".
#
vec_king = model.wv['king']
###############################################################################
# Retrieving the vocabulary works the same way:
for index, word in enumerate(model.wv.index_to_key):
    if index == 10:
        break
    print(f"word #{index}/{len(model.wv.index_to_key)} is {word}")
###############################################################################
# Storing and loading models
# --------------------------
#
# You'll notice that training non-trivial models can take time. Once you've
# trained your model and it works as expected, you can save it to disk. That
# way, you don't have to spend time training it all over again later.
#
# You can store/load models using the standard gensim methods:
#
import tempfile
with tempfile.NamedTemporaryFile(prefix='gensim-model-', delete=False) as tmp:
temporary_filepath = tmp.name
model.save(temporary_filepath)
#
# The model is now safely stored in the filepath.
# You can copy it to other machines, share it with others, etc.
#
# To load a saved model:
#
new_model = gensim.models.Word2Vec.load(temporary_filepath)
###############################################################################
# which uses pickle internally, optionally ``mmap``\ ‘ing the model’s internal
# large NumPy matrices into virtual memory directly from disk files, for
# inter-process memory sharing.
#
# In addition, you can load models created by the original C tool, both using
# its text and binary formats::
#
# model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
# # using gzipped/bz2 input works too, no need to unzip
# model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)
#
###############################################################################
# Training Parameters
# -------------------
#
# ``Word2Vec`` accepts several parameters that affect both training speed and quality.
#
# min_count
# ---------
#
# ``min_count`` is for pruning the internal dictionary. Words that appear only
# once or twice in a billion-word corpus are probably uninteresting typos and
# garbage. In addition, there’s not enough data to make any meaningful training
# on those words, so it’s best to ignore them:
#
# default value of min_count=5
model = gensim.models.Word2Vec(sentences, min_count=10)
###############################################################################
#
# vector_size
# -----------
#
# ``vector_size`` is the number of dimensions (N) of the N-dimensional space that
# gensim Word2Vec maps the words onto.
#
# Bigger size values require more training data, but can lead to better (more
# accurate) models. Reasonable values are in the tens to hundreds.
#
# The default value of vector_size is 100.
model = gensim.models.Word2Vec(sentences, vector_size=200)
###############################################################################
# workers
# -------
#
# ``workers`` , the last of the major parameters (full list `here
# <https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec>`_)
# is for training parallelization, to speed up training:
#
# default value of workers=3 (tutorial says 1...)
model = gensim.models.Word2Vec(sentences, workers=4)
###############################################################################
# The ``workers`` parameter only has an effect if you have `Cython
# <http://cython.org/>`_ installed. Without Cython, you’ll only be able to use
# one core because of the `GIL
# <https://wiki.python.org/moin/GlobalInterpreterLock>`_ (and ``word2vec``
# training will be `miserably slow
# <https://rare-technologies.com/word2vec-in-python-part-two-optimizing/>`_\ ).
#
###############################################################################
# Memory
# ------
#
# At its core, ``word2vec`` model parameters are stored as matrices (NumPy
# arrays). Each array is **#vocabulary** (controlled by the ``min_count`` parameter)
# times **vector size** (the ``vector_size`` parameter) of floats (single precision aka 4 bytes).
#
# Three such matrices are held in RAM (work is underway to reduce that number
# to two, or even one). So if your input contains 100,000 unique words, and you
# asked for layer ``vector_size=200``\ , the model will require approx.
# ``100,000*200*4*3 bytes = ~229MB``.
#
# There’s a little extra memory needed for storing the vocabulary tree (100,000 words would
# take a few megabytes), but unless your words are extremely loooong strings, memory
# footprint will be dominated by the three matrices above.
#
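# As a sanity check, the estimate above can be reproduced in a couple of lines:
vocab_size, vector_size, bytes_per_float, matrices = 100_000, 200, 4, 3
print(f"~{vocab_size * vector_size * bytes_per_float * matrices / 1024 ** 2:.0f} MB")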
###############################################################################
# Evaluating
# ----------
#
# ``Word2Vec`` training is an unsupervised task, so there’s no good way to
# objectively evaluate the result. Evaluation depends on your end application.
#
# Google has released their testing set of about 20,000 syntactic and semantic
# test examples, following the “A is to B as C is to D” task. It is provided in
# the 'datasets' folder.
#
# For example, a syntactic analogy of the comparative type is ``bad:worse;good:?``.
# There are a total of 9 types of syntactic comparisons in the dataset, such as
# plural nouns and nouns of opposite meaning.
#
# The semantic questions contain five types of semantic analogies, such as
# capital cities (``Paris:France;Tokyo:?``) or family members
# (``brother:sister;dad:?``).
#
###############################################################################
# Gensim supports the same evaluation set, in exactly the same format:
#
model.wv.evaluate_word_analogies(datapath('questions-words.txt'))
###############################################################################
#
# This ``evaluate_word_analogies`` method takes an `optional parameter
# <https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.evaluate_word_analogies>`_
# ``restrict_vocab`` which limits which test examples are to be considered.
#
###############################################################################
# In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.
#
# By default it uses the academic WS-353 dataset, but you can create a dataset
# specific to your business based on it. The dataset contains word pairs together
# with human-assigned similarity judgments, measuring the relatedness or
# co-occurrence of two words. For example, 'coast' and 'shore' are very similar
# as they appear in the same context. At the same time, 'clothes' and 'closet'
# are less similar because they are related but not interchangeable.
#
model.wv.evaluate_word_pairs(datapath('wordsim353.tsv'))
###############################################################################
# .. Important::
# Good performance on Google's or WS-353 test set doesn’t mean word2vec will
# work well in your application, or vice versa. It’s always best to evaluate
# directly on your intended task. For an example of how to use word2vec in a
# classifier pipeline, see this `tutorial
# <https://github.com/RaRe-Technologies/movie-plots-by-genre>`_.
#
###############################################################################
# Online training / Resuming training
# -----------------------------------
#
# Advanced users can load a model and continue training it with more sentences
# and `new vocabulary words <online_w2v_tutorial.ipynb>`_:
#
model = gensim.models.Word2Vec.load(temporary_filepath)
more_sentences = [
['Advanced', 'users', 'can', 'load', 'a', 'model',
'and', 'continue', 'training', 'it', 'with', 'more', 'sentences'],
]
model.build_vocab(more_sentences, update=True)
model.train(more_sentences, total_examples=model.corpus_count, epochs=model.epochs)
# cleaning up temporary file
import os
os.remove(temporary_filepath)
###############################################################################
# You may need to tweak the ``total_words`` parameter to ``train()``,
# depending on what learning rate decay you want to simulate.
#
# Note that it's not possible to resume training with models generated by the
# original C tool and loaded via ``KeyedVectors.load_word2vec_format()``. You can still use them for
# querying/similarity, but the information vital for training (the vocabulary tree) is
# missing there.
#
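# Below is a minimal sketch of the ``total_words`` tweak mentioned above; the
# 1000000 figure is an arbitrary assumption, used only to illustrate decaying
# the learning rate as if this update were part of a much larger corpus.
model.train(more_sentences, total_words=1000000, epochs=model.epochs)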
###############################################################################
# Training Loss Computation
# -------------------------
#
# The parameter ``compute_loss`` can be used to toggle computation of loss
# while training the Word2Vec model. The computed loss is stored in the model
# attribute ``running_training_loss`` and can be retrieved using the function
# ``get_latest_training_loss``, as follows:
#
# instantiating and training the Word2Vec model
model_with_loss = gensim.models.Word2Vec(
sentences,
min_count=1,
compute_loss=True,
hs=0,
sg=1,
seed=42,
)
# getting the training loss value
training_loss = model_with_loss.get_latest_training_loss()
print(training_loss)
###############################################################################
# Benchmarks
# ----------
#
# Let's run some benchmarks to see the effect of the training loss computation code
# on training time.
#
# We'll use the following data for the benchmarks:
#
# #. Lee Background corpus: included in gensim's test data
# #. Text8 corpus. To demonstrate the effect of corpus size, we'll look at the
# first 1MB, 10MB, 50MB of the corpus, as well as the entire thing.
#
import io
import os
import gensim.models.word2vec
import gensim.downloader as api
import smart_open
def head(path, size):
with smart_open.open(path) as fin:
return io.StringIO(fin.read(size))
def generate_input_data():
lee_path = datapath('lee_background.cor')
ls = gensim.models.word2vec.LineSentence(lee_path)
ls.name = '25kB'
yield ls
text8_path = api.load('text8').fn
labels = ('1MB', '10MB', '50MB', '100MB')
sizes = (1024 ** 2, 10 * 1024 ** 2, 50 * 1024 ** 2, 100 * 1024 ** 2)
for l, s in zip(labels, sizes):
ls = gensim.models.word2vec.LineSentence(head(text8_path, s))
ls.name = l
yield ls
input_data = list(generate_input_data())
###############################################################################
# We now compare the training time taken for different combinations of input
# data and model training parameters like ``hs`` and ``sg``.
#
# For each combination, we repeat the test several times to obtain the mean and
# standard deviation of the test duration.
#
# Temporarily reduce logging verbosity
logging.root.level = logging.ERROR
import time
import numpy as np
import pandas as pd
train_time_values = []
seed_val = 42
sg_values = [0, 1]
hs_values = [0, 1]
fast = True
if fast:
input_data_subset = input_data[:3]
else:
input_data_subset = input_data
for data in input_data_subset:
for sg_val in sg_values:
for hs_val in hs_values:
for loss_flag in [True, False]:
time_taken_list = []
for i in range(3):
start_time = time.time()
w2v_model = gensim.models.Word2Vec(
data,
compute_loss=loss_flag,
sg=sg_val,
hs=hs_val,
seed=seed_val,
)
time_taken_list.append(time.time() - start_time)
time_taken_list = np.array(time_taken_list)
time_mean = np.mean(time_taken_list)
time_std = np.std(time_taken_list)
model_result = {
'train_data': data.name,
'compute_loss': loss_flag,
'sg': sg_val,
'hs': hs_val,
'train_time_mean': time_mean,
'train_time_std': time_std,
}
print("Word2vec model #%i: %s" % (len(train_time_values), model_result))
train_time_values.append(model_result)
train_times_table = pd.DataFrame(train_time_values)
train_times_table = train_times_table.sort_values(
by=['train_data', 'sg', 'hs', 'compute_loss'],
ascending=[False, False, True, False],
)
print(train_times_table)
###############################################################################
#
# Visualising Word Embeddings
# ---------------------------
#
# The word embeddings made by the model can be visualised by reducing
# dimensionality of the words to 2 dimensions using tSNE.
#
# Visualisations can be used to notice semantic and syntactic trends in the data.
#
# Example:
#
# * Semantic: words like cat, dog, cow, etc. tend to lie close together.
# * Syntactic: words like run, running or cut, cutting lie close together.
#
# Vector relations like ``vKing - vMan = vQueen - vWoman`` can also be observed.
#
# .. Important::
# The model used for the visualisation is trained on a small corpus. Thus
# some of the relations might not be so clear.
#
from sklearn.decomposition import IncrementalPCA    # initial reduction
from sklearn.manifold import TSNE # final reduction
import numpy as np # array handling
def reduce_dimensions(model):
num_dimensions = 2 # final num dimensions (2D, 3D, etc)
# extract the words & their vectors, as numpy arrays
vectors = np.asarray(model.wv.vectors)
labels = np.asarray(model.wv.index_to_key) # fixed-width numpy strings
# reduce using t-SNE
tsne = TSNE(n_components=num_dimensions, random_state=0)
vectors = tsne.fit_transform(vectors)
x_vals = [v[0] for v in vectors]
y_vals = [v[1] for v in vectors]
return x_vals, y_vals, labels
x_vals, y_vals, labels = reduce_dimensions(model)
def plot_with_plotly(x_vals, y_vals, labels, plot_in_notebook=True):
from plotly.offline import init_notebook_mode, iplot, plot
import plotly.graph_objs as go
trace = go.Scatter(x=x_vals, y=y_vals, mode='text', text=labels)
data = [trace]
if plot_in_notebook:
init_notebook_mode(connected=True)
iplot(data, filename='word-embedding-plot')
else:
plot(data, filename='word-embedding-plot.html')
def plot_with_matplotlib(x_vals, y_vals, labels):
import matplotlib.pyplot as plt
import random
random.seed(0)
plt.figure(figsize=(12, 12))
plt.scatter(x_vals, y_vals)
#
# Label randomly subsampled 25 data points
#
indices = list(range(len(labels)))
selected_indices = random.sample(indices, 25)
for i in selected_indices:
plt.annotate(labels[i], (x_vals[i], y_vals[i]))
try:
get_ipython()
except Exception:
plot_function = plot_with_matplotlib
else:
plot_function = plot_with_plotly
plot_function(x_vals, y_vals, labels)
###############################################################################
# Conclusion
# ----------
#
# In this tutorial we learned how to train word2vec models on your custom data
# and also how to evaluate them. We hope that you too will find this popular tool
# useful in your machine learning tasks!
#
# Links
# -----
#
# - API docs: :py:mod:`gensim.models.word2vec`
# - `Original C toolkit and word2vec papers by Google <https://code.google.com/archive/p/word2vec/>`_.
#
| 25,529 | Python | .py | 579 | 41.336788 | 126 | 0.642739 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,004 | run_wmd.ipynb | piskvorky_gensim/docs/src/auto_examples/tutorials/run_wmd.ipynb |
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Word Mover's Distance\n\nDemonstrates using Gensim's implemenation of the WMD.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Word Mover's Distance (WMD) is a promising new tool in machine learning that\nallows us to submit a query and return the most relevant documents. This\ntutorial introduces WMD and shows how you can compute the WMD distance\nbetween two documents using ``wmdistance``.\n\n## WMD Basics\n\nWMD enables us to assess the \"distance\" between two documents in a meaningful\nway even when they have no words in common. It uses [word2vec](http://rare-technologies.com/word2vec-tutorial/) [4] vector embeddings of\nwords. It been shown to outperform many of the state-of-the-art methods in\nk-nearest neighbors classification [3].\n\nWMD is illustrated below for two very similar sentences (illustration taken\nfrom [Vlad Niculae's blog](http://vene.ro/blog/word-movers-distance-in-python.html)). The sentences\nhave no words in common, but by matching the relevant words, WMD is able to\naccurately measure the (dis)similarity between the two sentences. The method\nalso uses the bag-of-words representation of the documents (simply put, the\nword's frequencies in the documents), noted as $d$ in the figure below. The\nintuition behind the method is that we find the minimum \"traveling distance\"\nbetween documents, in other words the most efficient way to \"move\" the\ndistribution of document 1 to the distribution of document 2.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Image from https://vene.ro/images/wmd-obama.png\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimg = mpimg.imread('wmd-obama.png')\nimgplot = plt.imshow(img)\nplt.axis('off')\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This method was introduced in the article \"From Word Embeddings To Document\nDistances\" by Matt Kusner et al. (\\ [link to PDF](http://jmlr.org/proceedings/papers/v37/kusnerb15.pdf)\\ ). It is inspired\nby the \"Earth Mover's Distance\", and employs a solver of the \"transportation\nproblem\".\n\nIn this tutorial, we will learn how to use Gensim's WMD functionality, which\nconsists of the ``wmdistance`` method for distance computation, and the\n``WmdSimilarity`` class for corpus based similarity queries.\n\n.. Important::\n If you use Gensim's WMD functionality, please consider citing [1] and [2].\n\n## Computing the Word Mover's Distance\n\nTo use WMD, you need some existing word embeddings.\nYou could train your own Word2Vec model, but that is beyond the scope of this tutorial\n(check out `sphx_glr_auto_examples_tutorials_run_word2vec.py` if you're interested).\nFor this tutorial, we'll be using an existing Word2Vec model.\n\nLet's take some sentences to compute the distance between.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Initialize logging.\nimport logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\n\nsentence_obama = 'Obama speaks to the media in Illinois'\nsentence_president = 'The president greets the press in Chicago'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These sentences have very similar content, and as such the WMD should be low.\nBefore we compute the WMD, we want to remove stopwords (\"the\", \"to\", etc.),\nas these do not contribute a lot to the information in the sentences.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Import and download stopwords from NLTK.\nfrom nltk.corpus import stopwords\nfrom nltk import download\ndownload('stopwords') # Download stopwords list.\nstop_words = stopwords.words('english')\n\ndef preprocess(sentence):\n return [w for w in sentence.lower().split() if w not in stop_words]\n\nsentence_obama = preprocess(sentence_obama)\nsentence_president = preprocess(sentence_president)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, as mentioned earlier, we will be using some downloaded pre-trained\nembeddings. We load these into a Gensim Word2Vec model class.\n\n.. Important::\n The embeddings we have chosen here require a lot of memory.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import gensim.downloader as api\nmodel = api.load('word2vec-google-news-300')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So let's compute WMD using the ``wmdistance`` method.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"distance = model.wmdistance(sentence_obama, sentence_president)\nprint('distance = %.4f' % distance)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's try the same thing with two completely unrelated sentences. Notice that the distance is larger.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"sentence_orange = preprocess('Oranges are my favorite fruit')\ndistance = model.wmdistance(sentence_obama, sentence_orange)\nprint('distance = %.4f' % distance)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n\n1. R\u00e9mi Flamary et al. *POT: Python Optimal Transport*, 2021.\n2. Matt Kusner et al. *From Embeddings To Document Distances*, 2015.\n3. Tom\u00e1\u0161 Mikolov et al. *Efficient Estimation of Word Representations in Vector Space*, 2013.\n\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
| 7,147 | Python | .py | 158 | 38.594937 | 1,344 | 0.636624 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,005 | run_lda.py | piskvorky_gensim/docs/src/auto_examples/tutorials/run_lda.py |
r"""
LDA Model
=========
Introduces Gensim's LDA model and demonstrates its use on the NIPS corpus.
"""
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
###############################################################################
# The purpose of this tutorial is to demonstrate how to train and tune an LDA model.
#
# In this tutorial we will:
#
# * Load input data.
# * Pre-process that data.
# * Transform documents into bag-of-words vectors.
# * Train an LDA model.
#
# This tutorial will **not**:
#
# * Explain how Latent Dirichlet Allocation works
# * Explain how the LDA model performs inference
# * Teach you all the parameters and options for Gensim's LDA implementation
#
# If you are not familiar with the LDA model or how to use it in Gensim, I (Olavur Mortensen)
# suggest you read up on that before continuing with this tutorial. Basic
# understanding of the LDA model should suffice. Examples:
#
# * `Introduction to Latent Dirichlet Allocation <http://blog.echen.me/2011/08/22/introduction-to-latent-dirichlet-allocation>`_
# * Gensim tutorial: :ref:`sphx_glr_auto_examples_core_run_topics_and_transformations.py`
# * Gensim's LDA model API docs: :py:class:`gensim.models.LdaModel`
#
# I would also encourage you to consider each step when applying the model to
# your data, instead of just blindly applying my solution. The different steps
# will depend on your data and possibly your goal with the model.
#
# Data
# ----
#
# I have used a corpus of NIPS papers in this tutorial, but if you're following
# this tutorial just to learn about LDA I encourage you to consider picking a
# corpus on a subject that you are familiar with. Qualitatively evaluating the
# output of an LDA model is challenging and can require you to understand the
# subject matter of your corpus (depending on your goal with the model).
#
# NIPS (Neural Information Processing Systems) is a machine learning conference
# so the subject matter should be well suited for most of the target audience
# of this tutorial. You can download the original data from Sam Roweis'
# `website <http://www.cs.nyu.edu/~roweis/data.html>`_. The code below will
# also do that for you.
#
# .. Important::
# The corpus contains 1740 documents, and not particularly long ones.
# So keep in mind that this tutorial is not geared towards efficiency, and be
# careful before applying the code to a large dataset.
#
import io
import os.path
import re
import tarfile
import smart_open
def extract_documents(url='https://cs.nyu.edu/~roweis/data/nips12raw_str602.tgz'):
with smart_open.open(url, "rb") as file:
with tarfile.open(fileobj=file) as tar:
for member in tar.getmembers():
if member.isfile() and re.search(r'nipstxt/nips\d+/\d+\.txt', member.name):
member_bytes = tar.extractfile(member).read()
yield member_bytes.decode('utf-8', errors='replace')
docs = list(extract_documents())
###############################################################################
# So we have a list of 1740 documents, where each document is a Unicode string.
# If you're thinking about using your own corpus, then you need to make sure
# that it's in the same format (list of Unicode strings) before proceeding
# with the rest of this tutorial.
#
print(len(docs))
print(docs[0][:500])
###############################################################################
# Pre-process and vectorize the documents
# ---------------------------------------
#
# As part of preprocessing, we will:
#
# * Tokenize (split the documents into tokens).
# * Lemmatize the tokens.
# * Compute bigrams.
# * Compute a bag-of-words representation of the data.
#
# First we tokenize the text using a regular expression tokenizer from NLTK. We
# remove numeric tokens and tokens that are only a single character, as they
# don't tend to be useful, and the dataset contains a lot of them.
#
# .. Important::
#
# This tutorial uses the nltk library for preprocessing, although you can
# replace it with something else if you want.
#
# Tokenize the documents.
from nltk.tokenize import RegexpTokenizer
# Split the documents into tokens.
tokenizer = RegexpTokenizer(r'\w+')
for idx in range(len(docs)):
docs[idx] = docs[idx].lower() # Convert to lowercase.
docs[idx] = tokenizer.tokenize(docs[idx]) # Split into words.
# Remove numbers, but not words that contain numbers.
docs = [[token for token in doc if not token.isnumeric()] for doc in docs]
# Remove words that are only one character.
docs = [[token for token in doc if len(token) > 1] for doc in docs]
###############################################################################
# We use the WordNet lemmatizer from NLTK. A lemmatizer is preferred over a
# stemmer in this case because it produces more readable words. Output that is
# easy to read is very desirable in topic modelling.
#
# Download the WordNet data
from nltk import download
download('wordnet')
# Lemmatize the documents.
from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
docs = [[lemmatizer.lemmatize(token) for token in doc] for doc in docs]
###############################################################################
# We find bigrams in the documents. Bigrams are sets of two adjacent words.
# Using bigrams we can get phrases like "machine_learning" in our output
# (spaces are replaced with underscores); without bigrams we would only get
# "machine" and "learning".
#
# Note that in the code below, we find bigrams and then add them to the
# original data, because we would like to keep the words "machine" and
# "learning" as well as the bigram "machine_learning".
#
# .. Important::
#   Computing n-grams of a large dataset can be very computationally
#   and memory intensive.
#
# Compute bigrams.
from gensim.models import Phrases
# Add bigrams and trigrams to docs (only ones that appear 20 times or more).
bigram = Phrases(docs, min_count=20)
for idx in range(len(docs)):
for token in bigram[docs[idx]]:
if '_' in token:
# Token is a bigram, add to document.
docs[idx].append(token)
###############################################################################
# We remove rare words and common words based on their *document frequency*.
# Below we remove words that appear in fewer than 20 documents or in more than
# 50% of the documents. Consider trying to remove words only based on their
# frequency, or maybe combining that with this approach.
#
# Remove rare and common tokens.
from gensim.corpora import Dictionary
# Create a dictionary representation of the documents.
dictionary = Dictionary(docs)
# Filter out words that occur in fewer than 20 documents, or in more than 50% of the documents.
dictionary.filter_extremes(no_below=20, no_above=0.5)
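# A hedged sketch of one variation suggested above: additionally dropping the
# most frequent tokens outright via ``filter_n_most_frequent``. It is applied to
# a throwaway copy so the tutorial's dictionary stays untouched; 50 is arbitrary.
from copy import deepcopy
alt_dictionary = deepcopy(dictionary)
alt_dictionary.filter_n_most_frequent(50)
print('Dictionary size after also dropping the 50 most frequent tokens: %d' % len(alt_dictionary))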
###############################################################################
# Finally, we transform the documents to a vectorized form. We simply compute
# the frequency of each word, including the bigrams.
#
# Bag-of-words representation of the documents.
corpus = [dictionary.doc2bow(doc) for doc in docs]
###############################################################################
# Let's see how many tokens and documents we have to train on.
#
print('Number of unique tokens: %d' % len(dictionary))
print('Number of documents: %d' % len(corpus))
###############################################################################
# Training
# --------
#
# We are ready to train the LDA model. We will first discuss how to set some of
# the training parameters.
#
# First of all, the elephant in the room: how many topics do I need? There is
# really no easy answer for this, it will depend on both your data and your
# application. I have used 10 topics here because I wanted to have a few topics
# that I could interpret and "label", and because that turned out to give me
# reasonably good results. You might not need to interpret all your topics, so
# you could use a large number of topics, for example 100.
#
# ``chunksize`` controls how many documents are processed at a time in the
# training algorithm. Increasing chunksize will speed up training, at least as
# long as the chunk of documents easily fits into memory. I've set ``chunksize =
# 2000``, which is more than the number of documents, so I process all the
# data in one go. Chunksize can however influence the quality of the model, as
# discussed in Hoffman and co-authors [2], but the difference was not
# substantial in this case.
#
# ``passes`` controls how often we train the model on the entire corpus.
# Another word for passes might be "epochs". ``iterations`` is somewhat
# technical, but essentially it controls how often we repeat a particular loop
# over each document. It is important to set the number of "passes" and
# "iterations" high enough.
#
# I suggest the following way to choose iterations and passes. First, enable
# logging (as described in many Gensim tutorials), and set ``eval_every = 1``
# in ``LdaModel``. When training the model look for a line in the log that
# looks something like this::
#
# 2016-06-21 15:40:06,753 - gensim.models.ldamodel - DEBUG - 68/1566 documents converged within 400 iterations
#
# If you set ``passes = 20`` you will see this line 20 times. Make sure that by
# the final passes, most of the documents have converged. So you want to choose
# both passes and iterations to be high enough for this to happen.
#
# We set ``alpha = 'auto'`` and ``eta = 'auto'``. Again this is somewhat
# technical, but essentially we are automatically learning two parameters in
# the model that we usually would have to specify explicitly.
#
# Train LDA model.
from gensim.models import LdaModel
# Set training parameters.
num_topics = 10
chunksize = 2000
passes = 20
iterations = 400
eval_every = None # Don't evaluate model perplexity, takes too much time.
# Make an index to word dictionary.
temp = dictionary[0] # This is only to "load" the dictionary.
id2word = dictionary.id2token
model = LdaModel(
corpus=corpus,
id2word=id2word,
chunksize=chunksize,
alpha='auto',
eta='auto',
iterations=iterations,
num_topics=num_topics,
passes=passes,
eval_every=eval_every
)
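# A hedged diagnostic sketch for the passes/iterations advice above, not part of
# the tuned model: a single quick pass over an arbitrary small slice of the
# corpus with ``eval_every=1``, so that per-update evaluation shows up in the log.
diagnostic_model = LdaModel(
    corpus=corpus[:100],
    id2word=id2word,
    num_topics=num_topics,
    passes=1,
    iterations=50,
    eval_every=1,
)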
###############################################################################
# We can compute the topic coherence of each topic. Below we display the
# average topic coherence and print the topics in order of topic coherence.
#
# Note that we use the "Umass" topic coherence measure here (see
# :py:func:`gensim.models.ldamodel.LdaModel.top_topics`), Gensim has recently
# obtained an implementation of the "AKSW" topic coherence measure (see
# accompanying blog post, https://rare-technologies.com/what-is-topic-coherence/).
#
# If you are familiar with the subject of the articles in this dataset, you can
# see that the topics below make a lot of sense. However, they are not without
# flaws. We can see that there is substantial overlap between some topics,
# others are hard to interpret, and most of them have at least some terms that
# seem out of place. If you were able to do better, feel free to share your
# methods on the blog at https://rare-technologies.com/lda-training-tips/ !
#
top_topics = model.top_topics(corpus)
# Average topic coherence is the sum of topic coherences of all topics, divided by the number of topics.
avg_topic_coherence = sum(t[1] for t in top_topics) / num_topics
print('Average topic coherence: %.4f.' % avg_topic_coherence)
from pprint import pprint
pprint(top_topics)
###############################################################################
# Things to experiment with
# -------------------------
#
# * ``no_above`` and ``no_below`` parameters in ``filter_extremes`` method.
# * Adding trigrams or even higher order n-grams.
# * Consider whether using a hold-out set or cross-validation is the way to go for you.
# * Try other datasets.
#
# Where to go from here
# ---------------------
#
# * Check out a RaRe blog post on the AKSW topic coherence measure (https://rare-technologies.com/what-is-topic-coherence/).
# * pyLDAvis (https://pyldavis.readthedocs.io/en/latest/index.html).
# * Read some more Gensim tutorials (https://github.com/RaRe-Technologies/gensim/blob/develop/tutorials.md#tutorials).
# * If you haven't already, read [1] and [2] (see references).
#
# References
# ----------
#
# 1. "Latent Dirichlet Allocation", Blei et al. 2003.
# 2. "Online Learning for Latent Dirichlet Allocation", Hoffman et al. 2010.
#
| 12,616 | Python | .py | 275 | 44.145455 | 128 | 0.695041 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,006 | run_scm.py | piskvorky_gensim/docs/src/auto_examples/tutorials/run_scm.py |
r"""
Soft Cosine Measure
===================
Demonstrates using Gensim's implementation of the SCM.
"""
###############################################################################
# Soft Cosine Measure (SCM) is a promising new tool in machine learning that
# allows us to submit a query and return the most relevant documents. This
# tutorial introduces SCM and shows how you can compute the SCM similarities
# between two documents using the ``inner_product`` method.
#
# Soft Cosine Measure basics
# --------------------------
#
# Soft Cosine Measure (SCM) is a method that allows us to assess the similarity
# between two documents in a meaningful way, even when they have no words in
# common. It uses a measure of similarity between words, which can be derived
# [2] using word2vec [4] vector embeddings of words. It has been shown to
# outperform many of the state-of-the-art methods in the semantic text
# similarity task in the context of community question answering [2].
#
#
# SCM is illustrated below for two very similar sentences. The sentences have
# no words in common, but by modeling synonymy, SCM is able to accurately
# measure the similarity between the two sentences. The method also uses the
# bag-of-words vector representation of the documents (simply put, the word's
# frequencies in the documents). The intuition behind the method is that we
# compute standard cosine similarity assuming that the document vectors are
# expressed in a non-orthogonal basis, where the angle between two basis
# vectors is derived from the angle between the word2vec embeddings of the
# corresponding words.
#
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img = mpimg.imread('scm-hello.png')
imgplot = plt.imshow(img)
plt.axis('off')
plt.show()
###############################################################################
# This method was perhaps first introduced in the article “Soft Similarity and
# Soft Cosine Measure: Similarity of Features in Vector Space Model” by Grigori
# Sidorov, Alexander Gelbukh, Helena Gomez-Adorno, and David Pinto.
#
# In this tutorial, we will learn how to use Gensim's SCM functionality, which
# consists of the ``inner_product`` method for one-off computation, and the
# ``SoftCosineSimilarity`` class for corpus-based similarity queries.
#
# .. Important::
# If you use Gensim's SCM functionality, please consider citing [1], [2] and [3].
#
# Computing the Soft Cosine Measure
# ---------------------------------
# To use SCM, you need some existing word embeddings.
# You could train your own Word2Vec model, but that is beyond the scope of this tutorial
# (check out :ref:`sphx_glr_auto_examples_tutorials_run_word2vec.py` if you're interested).
# For this tutorial, we'll be using an existing Word2Vec model.
#
# Let's take some sentences to compute the distance between.
#
# Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentence_obama = 'Obama speaks to the media in Illinois'
sentence_president = 'The president greets the press in Chicago'
sentence_orange = 'Oranges are my favorite fruit'
###############################################################################
# The first two sentences have very similar content, and as such the
# SCM should be high. By contrast, the third sentence is unrelated to the first
# two and the SCM should be low.
#
# Before we compute the SCM, we want to remove stopwords ("the", "to", etc.),
# as these do not contribute a lot to the information in the sentences.
#
# Import and download stopwords from NLTK.
from nltk.corpus import stopwords
from nltk import download
download('stopwords') # Download stopwords list.
stop_words = stopwords.words('english')
def preprocess(sentence):
return [w for w in sentence.lower().split() if w not in stop_words]
sentence_obama = preprocess(sentence_obama)
sentence_president = preprocess(sentence_president)
sentence_orange = preprocess(sentence_orange)
###############################################################################
# Next, we will build a dictionary and a TF-IDF model, and we will convert the
# sentences to the bag-of-words format.
#
from gensim.corpora import Dictionary
documents = [sentence_obama, sentence_president, sentence_orange]
dictionary = Dictionary(documents)
sentence_obama = dictionary.doc2bow(sentence_obama)
sentence_president = dictionary.doc2bow(sentence_president)
sentence_orange = dictionary.doc2bow(sentence_orange)
from gensim.models import TfidfModel
documents = [sentence_obama, sentence_president, sentence_orange]
tfidf = TfidfModel(documents)
sentence_obama = tfidf[sentence_obama]
sentence_president = tfidf[sentence_president]
sentence_orange = tfidf[sentence_orange]
###############################################################################
# Now, as mentioned earlier, we will be using some downloaded pre-trained
# embeddings. We load these into a Gensim Word2Vec model class and we build
# a term similarity matrix using the embeddings.
#
# .. Important::
# The embeddings we have chosen here require a lot of memory.
#
import gensim.downloader as api
model = api.load('word2vec-google-news-300')
from gensim.similarities import SparseTermSimilarityMatrix, WordEmbeddingSimilarityIndex
termsim_index = WordEmbeddingSimilarityIndex(model)
termsim_matrix = SparseTermSimilarityMatrix(termsim_index, dictionary, tfidf)
###############################################################################
# So let's compute SCM using the ``inner_product`` method.
#
similarity = termsim_matrix.inner_product(sentence_obama, sentence_president, normalized=(True, True))
print('similarity = %.4f' % similarity)
###############################################################################
# Let's try the same thing with two completely unrelated sentences.
# Notice that the similarity is smaller.
#
similarity = termsim_matrix.inner_product(sentence_obama, sentence_orange, normalized=(True, True))
print('similarity = %.4f' % similarity)
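# For corpus-based queries, the ``SoftCosineSimilarity`` class mentioned earlier
# can index a whole corpus at once. A minimal sketch, reusing the three
# sentences above purely for illustration:
from gensim.similarities import SoftCosineSimilarity
docsim_index = SoftCosineSimilarity(tfidf[documents], termsim_matrix)
print(docsim_index[sentence_obama])  # similarity of the query to each indexed sentence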
###############################################################################
#
# References
# ----------
#
# 1. Grigori Sidorov et al. *Soft Similarity and Soft Cosine Measure: Similarity of Features in Vector Space Model*, 2014.
# 2. Delphine Charlet and Geraldine Damnati. *SimBow at SemEval-2017 Task 3: Soft-Cosine Semantic Similarity between Questions for Community Question Answering*, 2017.
# 3. Vít Novotný. *Implementation Notes for the Soft Cosine Measure*, 2018.
# 4. Tomáš Mikolov et al. *Efficient Estimation of Word Representations in Vector Space*, 2013.
#
| 6,658 | Python | .py | 133 | 48.744361 | 165 | 0.703094 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,007 | run_doc2vec_lee.py | piskvorky_gensim/docs/src/auto_examples/tutorials/run_doc2vec_lee.py |
r"""
Doc2Vec Model
=============
Introduces Gensim's Doc2Vec model and demonstrates its use on the
`Lee Corpus <https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>`__.
"""
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
###############################################################################
# Doc2Vec is a :ref:`core_concepts_model` that represents each
# :ref:`core_concepts_document` as a :ref:`core_concepts_vector`. This
# tutorial introduces the model and demonstrates how to train and assess it.
#
# Here's a list of what we'll be doing:
#
# 0. Review the relevant models: bag-of-words, Word2Vec, Doc2Vec
# 1. Load and preprocess the training and test corpora (see :ref:`core_concepts_corpus`)
# 2. Train a Doc2Vec :ref:`core_concepts_model` model using the training corpus
# 3. Demonstrate how the trained model can be used to infer a :ref:`core_concepts_vector`
# 4. Assess the model
# 5. Test the model on the test corpus
#
# Review: Bag-of-words
# --------------------
#
# .. Note:: Feel free to skip these review sections if you're already familiar with the models.
#
# You may be familiar with the `bag-of-words model
# <https://en.wikipedia.org/wiki/Bag-of-words_model>`_ from the
# :ref:`core_concepts_vector` section.
# This model transforms each document to a fixed-length vector of integers.
# For example, given the sentences:
#
# - ``John likes to watch movies. Mary likes movies too.``
# - ``John also likes to watch football games. Mary hates football.``
#
# The model outputs the vectors:
#
# - ``[1, 2, 1, 1, 2, 1, 1, 0, 0, 0, 0]``
# - ``[1, 1, 1, 1, 0, 1, 0, 1, 2, 1, 1]``
#
# Each vector has 11 elements, where each element counts the number of times a
# particular word occurred in the document.
# The order of elements is arbitrary.
# In the example above, the order of the elements corresponds to the words:
# ``["John", "likes", "to", "watch", "movies", "Mary", "too", "also", "football", "games", "hates"]``.
#
# Bag-of-words models are surprisingly effective, but have several weaknesses.
#
# First, they lose all information about word order: "John likes Mary" and
# "Mary likes John" correspond to identical vectors. There is a solution: bag
# of `n-grams <https://en.wikipedia.org/wiki/N-gram>`__
# models consider word phrases of length n to represent documents as
# fixed-length vectors to capture local word order but suffer from data
# sparsity and high dimensionality.
#
# Second, the model does not attempt to learn the meaning of the underlying
# words, and as a consequence, the distance between vectors doesn't always
# reflect the difference in meaning. The ``Word2Vec`` model addresses this
# second problem.
#
# Review: ``Word2Vec`` Model
# --------------------------
#
# ``Word2Vec`` is a more recent model that embeds words in a lower-dimensional
# vector space using a shallow neural network. The result is a set of
# word-vectors where vectors close together in vector space have similar
# meanings based on context, and word-vectors distant to each other have
# differing meanings. For example, ``strong`` and ``powerful`` would be close
# together and ``strong`` and ``Paris`` would be relatively far.
#
# Gensim's :py:class:`~gensim.models.word2vec.Word2Vec` class implements this model.
#
# With the ``Word2Vec`` model, we can calculate the vectors for each **word** in a document.
# But what if we want to calculate a vector for the **entire document**\ ?
# We could average the vectors for each word in the document - while this is quick and crude, it can often be useful.
# However, there is a better way...
#
# Introducing: Paragraph Vector
# -----------------------------
#
# .. Important:: In Gensim, we refer to the Paragraph Vector model as ``Doc2Vec``.
#
# Le and Mikolov in 2014 introduced the `Doc2Vec algorithm <https://cs.stanford.edu/~quocle/paragraph_vector.pdf>`__,
# which usually outperforms such simple-averaging of ``Word2Vec`` vectors.
#
# The basic idea is: act as if a document has another floating word-like
# vector, which contributes to all training predictions, and is updated like
# other word-vectors, but we will call it a doc-vector. Gensim's
# :py:class:`~gensim.models.doc2vec.Doc2Vec` class implements this algorithm.
#
# There are two implementations:
#
# 1. Paragraph Vector - Distributed Memory (PV-DM)
# 2. Paragraph Vector - Distributed Bag of Words (PV-DBOW)
#
# .. Important::
# Don't let the implementation details below scare you.
# They're advanced material: if it's too much, then move on to the next section.
#
# PV-DM is analogous to Word2Vec CBOW. The doc-vectors are obtained by training
# a neural network on the synthetic task of predicting a center word based on an
# average of both context word-vectors and the full document's doc-vector.
#
# PV-DBOW is analogous to Word2Vec SG. The doc-vectors are obtained by training
# a neural network on the synthetic task of predicting a target word just from
# the full document's doc-vector. (It is also common to combine this with
# skip-gram word-vector training, using both the doc-vector and nearby word-vectors to
# predict a single target word, but only one at a time.)
#
# Prepare the Training and Test Data
# ----------------------------------
#
# For this tutorial, we'll be training our model using the `Lee Background
# Corpus
# <https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>`_
# included in gensim. This corpus contains 314 documents selected from the
# Australian Broadcasting Corporation’s news mail service, which provides text
# e-mails of headline stories and covers a number of broad topics.
#
# And we'll test our model by eye using the much shorter `Lee Corpus
# <https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>`_
# which contains 50 documents.
#
import os
import gensim
# Set file names for train and test data
test_data_dir = os.path.join(gensim.__path__[0], 'test', 'test_data')
lee_train_file = os.path.join(test_data_dir, 'lee_background.cor')
lee_test_file = os.path.join(test_data_dir, 'lee.cor')
###############################################################################
# Define a Function to Read and Preprocess Text
# ---------------------------------------------
#
# Below, we define a function to:
#
# - open the train/test file (with latin encoding)
# - read the file line-by-line
# - pre-process each line (tokenize text into individual words, remove punctuation, set to lowercase, etc)
#
# The file we're reading is a **corpus**.
# Each line of the file is a **document**.
#
# .. Important::
# To train the model, we'll need to associate a tag/number with each document
# of the training corpus. In our case, the tag is simply the zero-based line
# number.
#
import smart_open
def read_corpus(fname, tokens_only=False):
with smart_open.open(fname, encoding="iso-8859-1") as f:
for i, line in enumerate(f):
tokens = gensim.utils.simple_preprocess(line)
if tokens_only:
yield tokens
else:
# For training data, add tags
yield gensim.models.doc2vec.TaggedDocument(tokens, [i])
train_corpus = list(read_corpus(lee_train_file))
test_corpus = list(read_corpus(lee_test_file, tokens_only=True))
###############################################################################
# Let's take a look at the training corpus
#
print(train_corpus[:2])
###############################################################################
# And the testing corpus looks like this:
#
print(test_corpus[:2])
###############################################################################
# Notice that the testing corpus is just a list of lists and does not contain
# any tags.
#
###############################################################################
# Training the Model
# ------------------
#
# Now, we'll instantiate a Doc2Vec model with a vector size of 50 dimensions,
# iterating over the training corpus 40 times. We set the minimum word count to
# 2 in order to discard words with very few occurrences. (Without a variety of
# representative examples, retaining such infrequent words can often make a
# model worse!) Typical iteration counts in the published `Paragraph Vector paper <https://cs.stanford.edu/~quocle/paragraph_vector.pdf>`__
# results, using 10s-of-thousands to millions of docs, are 10-20. More
# iterations take more time and eventually reach a point of diminishing
# returns.
#
# However, this is a very very small dataset (300 documents) with shortish
# documents (a few hundred words). Adding training passes can sometimes help
# with such small datasets.
#
model = gensim.models.doc2vec.Doc2Vec(vector_size=50, min_count=2, epochs=40)
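# A hedged aside on the two Paragraph Vector variants discussed earlier (these
# throwaway models are not trained or used further in this tutorial):
# ``dm=1`` selects PV-DM (the default), ``dm=0`` selects PV-DBOW, and
# ``dbow_words=1`` adds the optional skip-gram word-vector training.
pv_dm_model = gensim.models.doc2vec.Doc2Vec(dm=1, vector_size=50, min_count=2, epochs=40)
pv_dbow_model = gensim.models.doc2vec.Doc2Vec(dm=0, dbow_words=1, vector_size=50, min_count=2, epochs=40)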
###############################################################################
# Build a vocabulary
model.build_vocab(train_corpus)
###############################################################################
# Essentially, the vocabulary is a list (accessible via
# ``model.wv.index_to_key``) of all of the unique words extracted from the training corpus.
# Additional attributes for each word are available using the ``model.wv.get_vecattr()`` method.
# For example, to see how many times ``penalty`` appeared in the training corpus:
#
print(f"Word 'penalty' appeared {model.wv.get_vecattr('penalty', 'count')} times in the training corpus.")
###############################################################################
# Next, train the model on the corpus.
# In the usual case, where Gensim installation found a BLAS library for optimized
# bulk vector operations, this training on this tiny 300 document, ~60k word corpus
# should take just a few seconds. (More realistic datasets of tens-of-millions
# of words or more take proportionately longer.) If for some reason a BLAS library
# isn't available, training uses a fallback approach that takes 60x-120x longer,
# so even this tiny training will take minutes rather than seconds. (And, in that
# case, you should also notice a warning in the logging letting you know there's
# something worth fixing.) So, be sure your installation uses the BLAS-optimized
# Gensim if you value your time.
#
model.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)
###############################################################################
# Now, we can use the trained model to infer a vector for any piece of text
# by passing a list of words to the ``model.infer_vector`` function. This
# vector can then be compared with other vectors via cosine similarity.
#
vector = model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])
print(vector)
###############################################################################
# Note that ``infer_vector()`` does *not* take a string, but rather a list of
# string tokens, which should have already been tokenized the same way as the
# ``words`` property of original training document objects.
#
# Also note that because the underlying training/inference algorithms are an
# iterative approximation problem that makes use of internal randomization,
# repeated inferences of the same text will return slightly different vectors.
#
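# A quick hedged illustration of the note above: inferring a vector for the
# same token list twice yields similar, but not identical, results.
import numpy as np
vector2 = model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])
print(np.allclose(vector, vector2))  # typically False, because of the randomization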
###############################################################################
# Assessing the Model
# -------------------
#
# To assess our new model, we'll first infer new vectors for each document of
# the training corpus, compare the inferred vectors with the training corpus,
# and then return the rank of the document based on self-similarity.
# Basically, we're pretending as if the training corpus is some new unseen data
# and then seeing how they compare with the trained model. The expectation is
# that we've likely overfit our model (i.e., all of the ranks will be less than
# 2) and so we should be able to find similar documents very easily.
# Additionally, we'll keep track of the second ranks for a comparison of less
# similar documents.
#
ranks = []
second_ranks = []
for doc_id in range(len(train_corpus)):
inferred_vector = model.infer_vector(train_corpus[doc_id].words)
sims = model.dv.most_similar([inferred_vector], topn=len(model.dv))
rank = [docid for docid, sim in sims].index(doc_id)
ranks.append(rank)
second_ranks.append(sims[1])
###############################################################################
# Let's count how each document ranks with respect to the training corpus
#
# NB. Results vary between runs due to random seeding and very small corpus
import collections
counter = collections.Counter(ranks)
print(counter)
###############################################################################
# Basically, more than 95% of the inferred documents are found to be most
# similar to themselves, while about 5% of the time a document is mistakenly most
# similar to another document. Checking the inferred-vector against a
# training-vector is a sort of 'sanity check' as to whether the model is
# behaving in a usefully consistent manner, though not a real 'accuracy' value.
#
# This is great and not entirely surprising. We can take a look at an example:
#
print('Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('SECOND-MOST', 1), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
###############################################################################
# Notice above that the most similar document (usually the same text) has a
# similarity score approaching 1.0. However, the similarity score for the
# second-ranked documents should be significantly lower (assuming the documents
# are in fact different) and the reasoning becomes obvious when we examine the
# text itself.
#
# We can run the next cell repeatedly to see a sampling of other target-document
# comparisons.
#
# Pick a random document from the corpus and infer a vector from the model
import random
doc_id = random.randint(0, len(train_corpus) - 1)
# Compare and print the second-most-similar document
print('Train Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
sim_id = second_ranks[doc_id]
print('Similar Document {}: «{}»\n'.format(sim_id, ' '.join(train_corpus[sim_id[0]].words)))
###############################################################################
# Testing the Model
# -----------------
#
# Using the same approach above, we'll infer the vector for a randomly chosen
# test document, and compare the document to our model by eye.
#
# Pick a random document from the test corpus and infer a vector from the model
doc_id = random.randint(0, len(test_corpus) - 1)
inferred_vector = model.infer_vector(test_corpus[doc_id])
sims = model.dv.most_similar([inferred_vector], topn=len(model.dv))
# Compare and print the most/median/least similar documents from the train corpus
print('Test Document ({}): «{}»\n'.format(doc_id, ' '.join(test_corpus[doc_id])))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
###############################################################################
# Conclusion
# ----------
#
# Let's review what we've seen in this tutorial:
#
# 0. Review the relevant models: bag-of-words, Word2Vec, Doc2Vec
# 1. Load and preprocess the training and test corpora (see :ref:`core_concepts_corpus`)
# 2. Train a Doc2Vec :ref:`core_concepts_model` model using the training corpus
# 3. Demonstrate how the trained model can be used to infer a :ref:`core_concepts_vector`
# 4. Assess the model
# 5. Test the model on the test corpus
#
# That's it! Doc2Vec is a great way to explore relationships between documents.
#
# Additional Resources
# --------------------
#
# If you'd like to know more about the subject matter of this tutorial, check out the links below.
#
# * `Word2Vec Paper <https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf>`_
# * `Doc2Vec Paper <https://cs.stanford.edu/~quocle/paragraph_vector.pdf>`_
# * `Dr. Michael D. Lee's Website <http://faculty.sites.uci.edu/mdlee>`_
# * `Lee Corpus <http://faculty.sites.uci.edu/mdlee/similarity-data/>`__
# * `IMDB Doc2Vec Tutorial <doc2vec-IMDB.ipynb>`_
#
| 16,619 | Python | .py | 330 | 48.836364 | 139 | 0.674689 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,008 | run_fasttext.py | piskvorky_gensim/docs/src/auto_examples/tutorials/run_fasttext.py |
r"""
FastText Model
==============
Introduces Gensim's fastText model and demonstrates its use on the Lee Corpus.
"""
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
###############################################################################
# Here, we'll learn to work with the fastText library for training word-embedding
# models, saving & loading them, and performing similarity operations & vector
# lookups analogous to Word2Vec.
###############################################################################
#
# When to use fastText?
# ---------------------
#
# The main principle behind `fastText <https://github.com/facebookresearch/fastText>`_ is that the
# morphological structure of a word carries important information about the meaning of the word.
# Such structure is not taken into account by traditional word embeddings like Word2Vec, which
# train a unique word embedding for every individual word.
# This is especially significant for morphologically rich languages (German, Turkish) in which a
# single word can have a large number of morphological forms, each of which might occur rarely,
# thus making it hard to train good word embeddings.
#
#
# fastText attempts to solve this by treating each word as the aggregation of its subwords.
# For the sake of simplicity and language-independence, subwords are taken to be the character ngrams
# of the word. The vector for a word is simply taken to be the sum of all vectors of its component char-ngrams.
#
#
# According to a detailed comparison of Word2Vec and fastText in
# `this notebook <https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/Word2Vec_FastText_Comparison.ipynb>`__,
# fastText does significantly better on syntactic tasks as compared to the original Word2Vec,
# especially when the size of the training corpus is small. Word2Vec slightly outperforms fastText
# on semantic tasks though. The differences grow smaller as the size of the training corpus increases.
#
#
# fastText can obtain vectors even for out-of-vocabulary (OOV) words, by summing up vectors for its
# component char-ngrams, provided at least one of the char-ngrams was present in the training data.
#
###############################################################################
#
# Training models
# ---------------
#
###############################################################################
#
# For the following examples, we'll use the Lee Corpus (which you already have if you've installed Gensim) for training our model.
#
#
#
from pprint import pprint as print
from gensim.models.fasttext import FastText
from gensim.test.utils import datapath
# Set file names for train and test data
corpus_file = datapath('lee_background.cor')
model = FastText(vector_size=100)
# build the vocabulary
model.build_vocab(corpus_file=corpus_file)
# train the model
model.train(
corpus_file=corpus_file, epochs=model.epochs,
total_examples=model.corpus_count, total_words=model.corpus_total_words,
)
print(model)
###############################################################################
#
# Training hyperparameters
# ^^^^^^^^^^^^^^^^^^^^^^^^
#
###############################################################################
#
# Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the following parameters from the original word2vec:
#
# - model: Training architecture. Allowed values: `cbow`, `skipgram` (Default `cbow`)
# - vector_size: Dimensionality of vector embeddings to be learnt (Default 100)
# - alpha: Initial learning rate (Default 0.025)
# - window: Context window size (Default 5)
# - min_count: Ignore words with number of occurrences below this (Default 5)
# - loss: Training objective. Allowed values: `ns`, `hs`, `softmax` (Default `ns`)
# - sample: Threshold for downsampling higher-frequency words (Default 0.001)
# - negative: Number of negative words to sample, for `ns` (Default 5)
# - epochs: Number of epochs (Default 5)
# - sorted_vocab: Sort vocab by descending frequency (Default 1)
# - threads: Number of threads to use (Default 12)
#
#
# In addition, fastText has three additional parameters:
#
# - min_n: min length of char ngrams (Default 3)
# - max_n: max length of char ngrams (Default 6)
# - bucket: number of buckets used for hashing ngrams (Default 2000000)
#
#
# Parameters ``min_n`` and ``max_n`` control the lengths of character ngrams that each word is broken down into while training and looking up embeddings. If ``max_n`` is set to 0, or to a value smaller than ``min_n``\ , no character ngrams are used, and the model effectively reduces to Word2Vec.
#
#
#
# To bound the memory requirements of the model being trained, a hashing function is used that maps ngrams to integers in 1 to K. For hashing these character sequences, the `Fowler-Noll-Vo hashing function <http://www.isthe.com/chongo/tech/comp/fnv>`_ (FNV-1a variant) is employed.
#
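# A hedged illustration of the subword parameters above (the values are
# arbitrary, for demonstration only): shorter character ngrams and a smaller
# hash bucket table.
model_subwords = FastText(vector_size=100, min_n=2, max_n=5, bucket=100000)
print(model_subwords)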
###############################################################################
#
# **Note:** You can continue to train your model if you are using Gensim's native implementation of fastText.
#
###############################################################################
#
# Saving/loading models
# ---------------------
#
###############################################################################
#
# Models can be saved and loaded via the ``load`` and ``save`` methods, just like
# any other model in Gensim.
#
# Save a model trained via Gensim's fastText implementation to temp.
import tempfile
import os
with tempfile.NamedTemporaryFile(prefix='saved_model_gensim-', delete=False) as tmp:
model.save(tmp.name, separately=[])
# Load back the same model.
loaded_model = FastText.load(tmp.name)
print(loaded_model)
os.unlink(tmp.name) # demonstration complete, don't need the temp file anymore
###############################################################################
#
# The ``save_word2vec_format`` method is also available for fastText models, but it will
# cause all vectors for ngrams to be lost.
# As a result, a model loaded in this way will behave as a regular word2vec model.
#
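# A hedged sketch of the caveat above: exporting to the plain word2vec format
# keeps only full-word vectors, so the subword (ngram) information is lost.
with tempfile.NamedTemporaryFile(prefix='saved_w2v_format-', delete=False) as tmp_w2v:
    model.wv.save_word2vec_format(tmp_w2v.name)
os.unlink(tmp_w2v.name)  # demonstration complete; a reload would behave like a plain word2vec model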
###############################################################################
#
# Word vector lookup
# ------------------
#
#
# All information necessary for looking up fastText words (incl. OOV words) is
# contained in its ``model.wv`` attribute.
#
# If you don't need to continue training your model, you can export & save this `.wv`
# attribute and discard `model`, to save space and RAM.
#
wv = model.wv
print(wv)
#
# FastText models support vector lookups for out-of-vocabulary words by summing up character ngrams belonging to the word.
#
print('night' in wv.key_to_index)
###############################################################################
#
print('nights' in wv.key_to_index)
###############################################################################
#
print(wv['night'])
###############################################################################
#
print(wv['nights'])
###############################################################################
#
# Similarity operations
# ---------------------
#
###############################################################################
#
# Similarity operations work the same way as word2vec. **Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data.**
#
print("nights" in wv.key_to_index)
###############################################################################
#
print("night" in wv.key_to_index)
###############################################################################
#
print(wv.similarity("night", "nights"))
###############################################################################
#
# Syntactically similar words generally have high similarity in fastText models, since a large number of the component char-ngrams will be the same. As a result, fastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided `here <Word2Vec_FastText_Comparison.ipynb>`_.
#
###############################################################################
#
# Other similarity operations
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# The example training corpus is a toy corpus; the results below are for proof of concept only and are not expected to be good.
print(wv.most_similar("nights"))
###############################################################################
#
print(wv.n_similarity(['sushi', 'shop'], ['japanese', 'restaurant']))
###############################################################################
#
print(wv.doesnt_match("breakfast cereal dinner lunch".split()))
###############################################################################
#
print(wv.most_similar(positive=['baghdad', 'england'], negative=['london']))
###############################################################################
#
print(wv.evaluate_word_analogies(datapath('questions-words.txt')))
###############################################################################
# Word Mover's Distance
# ^^^^^^^^^^^^^^^^^^^^^
#
# You'll need the optional ``POT`` library for this section (``pip install POT``).
#
# Let's start with two sentences:
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
###############################################################################
# Remove their stopwords.
#
from gensim.parsing.preprocessing import STOPWORDS
sentence_obama = [w for w in sentence_obama if w not in STOPWORDS]
sentence_president = [w for w in sentence_president if w not in STOPWORDS]
###############################################################################
# Compute the Word Mover's Distance between the two sentences.
distance = wv.wmdistance(sentence_obama, sentence_president)
print(f"Word Movers Distance is {distance} (lower means closer)")
###############################################################################
# That's all! You've made it to the end of this tutorial.
#
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img = mpimg.imread('fasttext-logo-color-web.png')
imgplot = plt.imshow(img)
_ = plt.axis('off')
piskvorky_gensim/docs/src/auto_examples/tutorials/run_lda.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# LDA Model\n\nIntroduces Gensim's LDA model and demonstrates its use on the NIPS corpus.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The purpose of this tutorial is to demonstrate how to train and tune an LDA model.\n\nIn this tutorial we will:\n\n* Load input data.\n* Pre-process that data.\n* Transform documents into bag-of-words vectors.\n* Train an LDA model.\n\nThis tutorial will **not**:\n\n* Explain how Latent Dirichlet Allocation works\n* Explain how the LDA model performs inference\n* Teach you all the parameters and options for Gensim's LDA implementation\n\nIf you are not familiar with the LDA model or how to use it in Gensim, I (Olavur Mortensen)\nsuggest you read up on that before continuing with this tutorial. Basic\nunderstanding of the LDA model should suffice. Examples:\n\n* `Introduction to Latent Dirichlet Allocation <http://blog.echen.me/2011/08/22/introduction-to-latent-dirichlet-allocation>`_\n* Gensim tutorial: `sphx_glr_auto_examples_core_run_topics_and_transformations.py`\n* Gensim's LDA model API docs: :py:class:`gensim.models.LdaModel`\n\nI would also encourage you to consider each step when applying the model to\nyour data, instead of just blindly applying my solution. The different steps\nwill depend on your data and possibly your goal with the model.\n\n## Data\n\nI have used a corpus of NIPS papers in this tutorial, but if you're following\nthis tutorial just to learn about LDA I encourage you to consider picking a\ncorpus on a subject that you are familiar with. Qualitatively evaluating the\noutput of an LDA model is challenging and can require you to understand the\nsubject matter of your corpus (depending on your goal with the model).\n\nNIPS (Neural Information Processing Systems) is a machine learning conference\nso the subject matter should be well suited for most of the target audience\nof this tutorial. You can download the original data from Sam Roweis'\n`website <http://www.cs.nyu.edu/~roweis/data.html>`_. The code below will\nalso do that for you.\n\n.. Important::\n The corpus contains 1740 documents, and not particularly long ones.\n So keep in mind that this tutorial is not geared towards efficiency, and be\n careful before applying the code to a large dataset.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import io\nimport os.path\nimport re\nimport tarfile\n\nimport smart_open\n\ndef extract_documents(url='https://cs.nyu.edu/~roweis/data/nips12raw_str602.tgz'):\n with smart_open.open(url, \"rb\") as file:\n with tarfile.open(fileobj=file) as tar:\n for member in tar.getmembers():\n if member.isfile() and re.search(r'nipstxt/nips\\d+/\\d+\\.txt', member.name):\n member_bytes = tar.extractfile(member).read()\n yield member_bytes.decode('utf-8', errors='replace')\n\ndocs = list(extract_documents())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So we have a list of 1740 documents, where each document is a Unicode string.\nIf you're thinking about using your own corpus, then you need to make sure\nthat it's in the same format (list of Unicode strings) before proceeding\nwith the rest of this tutorial.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(len(docs))\nprint(docs[0][:500])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pre-process and vectorize the documents\n\nAs part of preprocessing, we will:\n\n* Tokenize (split the documents into tokens).\n* Lemmatize the tokens.\n* Compute bigrams.\n* Compute a bag-of-words representation of the data.\n\nFirst we tokenize the text using a regular expression tokenizer from NLTK. We\nremove numeric tokens and tokens that are only a single character, as they\ndon't tend to be useful, and the dataset contains a lot of them.\n\n.. Important::\n\n This tutorial uses the nltk library for preprocessing, although you can\n replace it with something else if you want.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Tokenize the documents.\nfrom nltk.tokenize import RegexpTokenizer\n\n# Split the documents into tokens.\ntokenizer = RegexpTokenizer(r'\\w+')\nfor idx in range(len(docs)):\n docs[idx] = docs[idx].lower() # Convert to lowercase.\n docs[idx] = tokenizer.tokenize(docs[idx]) # Split into words.\n\n# Remove numbers, but not words that contain numbers.\ndocs = [[token for token in doc if not token.isnumeric()] for doc in docs]\n\n# Remove words that are only one character.\ndocs = [[token for token in doc if len(token) > 1] for doc in docs]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We use the WordNet lemmatizer from NLTK. A lemmatizer is preferred over a\nstemmer in this case because it produces more readable words. Output that is\neasy to read is very desirable in topic modelling.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Lemmatize the documents.\nfrom nltk.stem.wordnet import WordNetLemmatizer\n\nlemmatizer = WordNetLemmatizer()\ndocs = [[lemmatizer.lemmatize(token) for token in doc] for doc in docs]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We find bigrams in the documents. Bigrams are sets of two adjacent words.\nUsing bigrams we can get phrases like \"machine_learning\" in our output\n(spaces are replaced with underscores); without bigrams we would only get\n\"machine\" and \"learning\".\n\nNote that in the code below, we find bigrams and then add them to the\noriginal data, because we would like to keep the words \"machine\" and\n\"learning\" as well as the bigram \"machine_learning\".\n\n.. Important::\n Computing n-grams of large dataset can be very computationally\n and memory intensive.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Compute bigrams.\nfrom gensim.models import Phrases\n\n# Add bigrams and trigrams to docs (only ones that appear 20 times or more).\nbigram = Phrases(docs, min_count=20)\nfor idx in range(len(docs)):\n for token in bigram[docs[idx]]:\n if '_' in token:\n # Token is a bigram, add to document.\n docs[idx].append(token)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We remove rare words and common words based on their *document frequency*.\nBelow we remove words that appear in less than 20 documents or in more than\n50% of the documents. Consider trying to remove words only based on their\nfrequency, or maybe combining that with this approach.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Remove rare and common tokens.\nfrom gensim.corpora import Dictionary\n\n# Create a dictionary representation of the documents.\ndictionary = Dictionary(docs)\n\n# Filter out words that occur less than 20 documents, or more than 50% of the documents.\ndictionary.filter_extremes(no_below=20, no_above=0.5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we transform the documents to a vectorized form. We simply compute\nthe frequency of each word, including the bigrams.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Bag-of-words representation of the documents.\ncorpus = [dictionary.doc2bow(doc) for doc in docs]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see how many tokens and documents we have to train on.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print('Number of unique tokens: %d' % len(dictionary))\nprint('Number of documents: %d' % len(corpus))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training\n\nWe are ready to train the LDA model. We will first discuss how to set some of\nthe training parameters.\n\nFirst of all, the elephant in the room: how many topics do I need? There is\nreally no easy answer for this, it will depend on both your data and your\napplication. I have used 10 topics here because I wanted to have a few topics\nthat I could interpret and \"label\", and because that turned out to give me\nreasonably good results. You might not need to interpret all your topics, so\nyou could use a large number of topics, for example 100.\n\n``chunksize`` controls how many documents are processed at a time in the\ntraining algorithm. Increasing chunksize will speed up training, at least as\nlong as the chunk of documents easily fit into memory. I've set ``chunksize =\n2000``, which is more than the amount of documents, so I process all the\ndata in one go. Chunksize can however influence the quality of the model, as\ndiscussed in Hoffman and co-authors [2], but the difference was not\nsubstantial in this case.\n\n``passes`` controls how often we train the model on the entire corpus.\nAnother word for passes might be \"epochs\". ``iterations`` is somewhat\ntechnical, but essentially it controls how often we repeat a particular loop\nover each document. It is important to set the number of \"passes\" and\n\"iterations\" high enough.\n\nI suggest the following way to choose iterations and passes. First, enable\nlogging (as described in many Gensim tutorials), and set ``eval_every = 1``\nin ``LdaModel``. When training the model look for a line in the log that\nlooks something like this::\n\n 2016-06-21 15:40:06,753 - gensim.models.ldamodel - DEBUG - 68/1566 documents converged within 400 iterations\n\nIf you set ``passes = 20`` you will see this line 20 times. Make sure that by\nthe final passes, most of the documents have converged. So you want to choose\nboth passes and iterations to be high enough for this to happen.\n\nWe set ``alpha = 'auto'`` and ``eta = 'auto'``. Again this is somewhat\ntechnical, but essentially we are automatically learning two parameters in\nthe model that we usually would have to specify explicitly.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Train LDA model.\nfrom gensim.models import LdaModel\n\n# Set training parameters.\nnum_topics = 10\nchunksize = 2000\npasses = 20\niterations = 400\neval_every = None # Don't evaluate model perplexity, takes too much time.\n\n# Make an index to word dictionary.\ntemp = dictionary[0] # This is only to \"load\" the dictionary.\nid2word = dictionary.id2token\n\nmodel = LdaModel(\n corpus=corpus,\n id2word=id2word,\n chunksize=chunksize,\n alpha='auto',\n eta='auto',\n iterations=iterations,\n num_topics=num_topics,\n passes=passes,\n eval_every=eval_every\n)"
]
},
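  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hedged aside (not part of the original tutorial), the cell below sketches the tuning advice from above: train a cheap throwaway model with ``eval_every = 1`` and the ``gensim.models.ldamodel`` logger at DEBUG level, then look for the *documents converged* lines in the log output. The parameter values are illustrative only.\n\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Illustrative sketch only: a throwaway model, trained with a single pass,\n# whose DEBUG log shows how many documents converged within the iteration limit.\nlogging.getLogger('gensim.models.ldamodel').setLevel(logging.DEBUG)\n_probe_model = LdaModel(\n    corpus=corpus,\n    id2word=id2word,\n    chunksize=chunksize,\n    alpha='auto',\n    eta='auto',\n    iterations=iterations,\n    num_topics=num_topics,\n    passes=1,\n    eval_every=1,\n)\nlogging.getLogger('gensim.models.ldamodel').setLevel(logging.INFO)"
   ]
  },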
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can compute the topic coherence of each topic. Below we display the\naverage topic coherence and print the topics in order of topic coherence.\n\nNote that we use the \"Umass\" topic coherence measure here (see\n:py:func:`gensim.models.ldamodel.LdaModel.top_topics`), Gensim has recently\nobtained an implementation of the \"AKSW\" topic coherence measure (see\naccompanying blog post, http://rare-technologies.com/what-is-topic-coherence/).\n\nIf you are familiar with the subject of the articles in this dataset, you can\nsee that the topics below make a lot of sense. However, they are not without\nflaws. We can see that there is substantial overlap between some topics,\nothers are hard to interpret, and most of them have at least some terms that\nseem out of place. If you were able to do better, feel free to share your\nmethods on the blog at http://rare-technologies.com/lda-training-tips/ !\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"top_topics = model.top_topics(corpus)\n\n# Average topic coherence is the sum of topic coherences of all topics, divided by the number of topics.\navg_topic_coherence = sum([t[1] for t in top_topics]) / num_topics\nprint('Average topic coherence: %.4f.' % avg_topic_coherence)\n\nfrom pprint import pprint\npprint(top_topics)"
]
},
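  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As another hedged aside (not part of the original tutorial), models can also be compared by their per-word likelihood bound on held-out documents via ``log_perplexity``. For illustration only, the sketch below reuses the last 100 training documents as a stand-in hold-out set; a real evaluation would use documents excluded from training.\n\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Illustrative sketch only: log_perplexity returns a per-word likelihood bound\n# (less negative is better); use genuinely unseen documents in practice.\npseudo_holdout = corpus[-100:]\nprint('Per-word likelihood bound on the pseudo hold-out set: %.4f' % model.log_perplexity(pseudo_holdout))"
   ]
  },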
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Things to experiment with\n\n* ``no_above`` and ``no_below`` parameters in ``filter_extremes`` method.\n* Adding trigrams or even higher order n-grams.\n* Consider whether using a hold-out set or cross-validation is the way to go for you.\n* Try other datasets.\n\n## Where to go from here\n\n* Check out a RaRe blog post on the AKSW topic coherence measure (http://rare-technologies.com/what-is-topic-coherence/).\n* pyLDAvis (https://pyldavis.readthedocs.io/en/latest/index.html).\n* Read some more Gensim tutorials (https://github.com/RaRe-Technologies/gensim/blob/develop/tutorials.md#tutorials).\n* If you haven't already, read [1] and [2] (see references).\n\n## References\n\n1. \"Latent Dirichlet Allocation\", Blei et al. 2003.\n2. \"Online Learning for Latent Dirichlet Allocation\", Hoffman et al. 2010.\n\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
piskvorky_gensim/docs/src/auto_examples/tutorials/run_scm.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\nSoft Cosine Measure\n===================\n\nDemonstrates using Gensim's implemenation of the SCM.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Soft Cosine Measure (SCM) is a promising new tool in machine learning that\nallows us to submit a query and return the most relevant documents. This\ntutorial introduces SCM and shows how you can compute the SCM similarities\nbetween two documents using the ``inner_product`` method.\n\nSoft Cosine Measure basics\n--------------------------\n\nSoft Cosine Measure (SCM) is a method that allows us to assess the similarity\nbetween two documents in a meaningful way, even when they have no words in\ncommon. It uses a measure of similarity between words, which can be derived\n[2] using [word2vec][] [4] vector embeddings of words. It has been shown to\noutperform many of the state-of-the-art methods in the semantic text\nsimilarity task in the context of community question answering [2].\n\n\nSCM is illustrated below for two very similar sentences. The sentences have\nno words in common, but by modeling synonymy, SCM is able to accurately\nmeasure the similarity between the two sentences. The method also uses the\nbag-of-words vector representation of the documents (simply put, the word's\nfrequencies in the documents). The intution behind the method is that we\ncompute standard cosine similarity assuming that the document vectors are\nexpressed in a non-orthogonal basis, where the angle between two basis\nvectors is derived from the angle between the word2vec embeddings of the\ncorresponding words.\n\n\n"
]
},
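  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a brief aside (not part of the original text), the idea above can be written compactly: for bag-of-words vectors $x$ and $y$ and a word-similarity matrix $S$,\n\n$$\\mathrm{softcos}(x, y) = \\frac{x^{T} S y}{\\sqrt{x^{T} S x}\\,\\sqrt{y^{T} S y}},$$\n\nwhich reduces to the ordinary cosine similarity when $S$ is the identity matrix.\n\n\n"
   ]
  },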
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimg = mpimg.imread('scm-hello.png')\nimgplot = plt.imshow(img)\nplt.axis('off')\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This method was perhaps first introduced in the article \u201cSoft Measure and\nSoft Cosine Measure: Measure of Features in Vector Space Model\u201d by Grigori\nSidorov, Alexander Gelbukh, Helena Gomez-Adorno, and David Pinto.\n\nIn this tutorial, we will learn how to use Gensim's SCM functionality, which\nconsists of the ``inner_product`` method for one-off computation, and the\n``SoftCosineSimilarity`` class for corpus-based similarity queries.\n\n.. Important::\n If you use Gensim's SCM functionality, please consider citing [1], [2] and [3].\n\nComputing the Soft Cosine Measure\n---------------------------------\nTo use SCM, you need some existing word embeddings.\nYou could train your own Word2Vec model, but that is beyond the scope of this tutorial\n(check out `sphx_glr_auto_examples_tutorials_run_word2vec.py` if you're interested).\nFor this tutorial, we'll be using an existing Word2Vec model.\n\nLet's take some sentences to compute the distance between.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Initialize logging.\nimport logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\n\nsentence_obama = 'Obama speaks to the media in Illinois'\nsentence_president = 'The president greets the press in Chicago'\nsentence_orange = 'Oranges are my favorite fruit'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first two sentences sentences have very similar content, and as such the\nSCM should be high. By contrast, the third sentence is unrelated to the first\ntwo and the SCM should be low.\n\nBefore we compute the SCM, we want to remove stopwords (\"the\", \"to\", etc.),\nas these do not contribute a lot to the information in the sentences.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Import and download stopwords from NLTK.\nfrom nltk.corpus import stopwords\nfrom nltk import download\ndownload('stopwords') # Download stopwords list.\nstop_words = stopwords.words('english')\n\ndef preprocess(sentence):\n return [w for w in sentence.lower().split() if w not in stop_words]\n\nsentence_obama = preprocess(sentence_obama)\nsentence_president = preprocess(sentence_president)\nsentence_orange = preprocess(sentence_orange)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we will build a dictionary and a TF-IDF model, and we will convert the\nsentences to the bag-of-words format.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim.corpora import Dictionary\ndocuments = [sentence_obama, sentence_president, sentence_orange]\ndictionary = Dictionary(documents)\n\nsentence_obama = dictionary.doc2bow(sentence_obama)\nsentence_president = dictionary.doc2bow(sentence_president)\nsentence_orange = dictionary.doc2bow(sentence_orange)\n\nfrom gensim.models import TfidfModel\ndocuments = [sentence_obama, sentence_president, sentence_orange]\ntfidf = TfidfModel(documents)\n\nsentence_obama = tfidf[sentence_obama]\nsentence_president = tfidf[sentence_president]\nsentence_orange = tfidf[sentence_orange]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, as mentioned earlier, we will be using some downloaded pre-trained\nembeddings. We load these into a Gensim Word2Vec model class and we build\na term similarity mextrix using the embeddings.\n\n.. Important::\n The embeddings we have chosen here require a lot of memory.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import gensim.downloader as api\nmodel = api.load('word2vec-google-news-300')\n\nfrom gensim.similarities import SparseTermSimilarityMatrix, WordEmbeddingSimilarityIndex\ntermsim_index = WordEmbeddingSimilarityIndex(model)\ntermsim_matrix = SparseTermSimilarityMatrix(termsim_index, dictionary, tfidf)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So let's compute SCM using the ``inner_product`` method.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"similarity = termsim_matrix.inner_product(sentence_obama, sentence_president, normalized=(True, True))\nprint('similarity = %.4f' % similarity)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's try the same thing with two completely unrelated sentences.\nNotice that the similarity is smaller.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"similarity = termsim_matrix.inner_product(sentence_obama, sentence_orange, normalized=(True, True))\nprint('similarity = %.4f' % similarity)"
]
},
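  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The introduction also mentioned the ``SoftCosineSimilarity`` class for corpus-based similarity queries. As a hedged sketch (not part of the original tutorial), the index below is built over our three tiny TF-IDF sentences and queried with the first one; with only three documents the scores are for illustration only.\n\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from gensim.similarities import SoftCosineSimilarity\n\n# Index the three TF-IDF sentences and rank them against the first one as a query.\ndocsim_index = SoftCosineSimilarity([sentence_obama, sentence_president, sentence_orange], termsim_matrix)\nprint(docsim_index[sentence_obama])"
   ]
  },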
{
"cell_type": "markdown",
"metadata": {},
"source": [
"References\n----------\n\n1. Grigori Sidorov et al. *Soft Similarity and Soft Cosine Measure: Similarity of Features in Vector Space Model*, 2014.\n2. Delphine Charlet and Geraldine Damnati, SimBow at SemEval-2017 Task 3: Soft-Cosine Semantic Similarity between Questions for Community Question Answering, 2017.\n3. V\u00edt Novotn\u00fd. *Implementation Notes for the Soft Cosine Measure*, 2018.\n4. Tom\u00e1\u0161 Mikolov et al. Efficient Estimation of Word Representations in Vector Space, 2013.\n\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
piskvorky_gensim/docs/src/auto_examples/tutorials/run_fasttext.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# FastText Model\n\nIntroduces Gensim's fastText model and demonstrates its use on the Lee Corpus.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, we'll learn to work with fastText library for training word-embedding\nmodels, saving & loading them and performing similarity operations & vector\nlookups analogous to Word2Vec.\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## When to use fastText?\n\nThe main principle behind [fastText](https://github.com/facebookresearch/fastText) is that the\nmorphological structure of a word carries important information about the meaning of the word.\nSuch structure is not taken into account by traditional word embeddings like Word2Vec, which\ntrain a unique word embedding for every individual word.\nThis is especially significant for morphologically rich languages (German, Turkish) in which a\nsingle word can have a large number of morphological forms, each of which might occur rarely,\nthus making it hard to train good word embeddings.\n\n\nfastText attempts to solve this by treating each word as the aggregation of its subwords.\nFor the sake of simplicity and language-independence, subwords are taken to be the character ngrams\nof the word. The vector for a word is simply taken to be the sum of all vectors of its component char-ngrams.\n\n\nAccording to a detailed comparison of Word2Vec and fastText in\n[this notebook](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/Word2Vec_FastText_Comparison.ipynb)_,\nfastText does significantly better on syntactic tasks as compared to the original Word2Vec,\nespecially when the size of the training corpus is small. Word2Vec slightly outperforms fastText\non semantic tasks though. The differences grow smaller as the size of the training corpus increases.\n\n\nfastText can obtain vectors even for out-of-vocabulary (OOV) words, by summing up vectors for its\ncomponent char-ngrams, provided at least one of the char-ngrams was present in the training data.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training models\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the following examples, we'll use the Lee Corpus (which you already have if you've installed Gensim) for training our model.\n\n\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from pprint import pprint as print\nfrom gensim.models.fasttext import FastText\nfrom gensim.test.utils import datapath\n\n# Set file names for train and test data\ncorpus_file = datapath('lee_background.cor')\n\nmodel = FastText(vector_size=100)\n\n# build the vocabulary\nmodel.build_vocab(corpus_file=corpus_file)\n\n# train the model\nmodel.train(\n corpus_file=corpus_file, epochs=model.epochs,\n total_examples=model.corpus_count, total_words=model.corpus_total_words,\n)\n\nprint(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Training hyperparameters\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the following parameters from the original word2vec:\n\n- model: Training architecture. Allowed values: `cbow`, `skipgram` (Default `cbow`)\n- vector_size: Dimensionality of vector embeddings to be learnt (Default 100)\n- alpha: Initial learning rate (Default 0.025)\n- window: Context window size (Default 5)\n- min_count: Ignore words with number of occurrences below this (Default 5)\n- loss: Training objective. Allowed values: `ns`, `hs`, `softmax` (Default `ns`)\n- sample: Threshold for downsampling higher-frequency words (Default 0.001)\n- negative: Number of negative words to sample, for `ns` (Default 5)\n- epochs: Number of epochs (Default 5)\n- sorted_vocab: Sort vocab by descending frequency (Default 1)\n- threads: Number of threads to use (Default 12)\n\n\nIn addition, fastText has three additional parameters:\n\n- min_n: min length of char ngrams (Default 3)\n- max_n: max length of char ngrams (Default 6)\n- bucket: number of buckets used for hashing ngrams (Default 2000000)\n\n\nParameters ``min_n`` and ``max_n`` control the lengths of character ngrams that each word is broken down into while training and looking up embeddings. If ``max_n`` is set to 0, or to be lesser than ``min_n``\\ , no character ngrams are used, and the model effectively reduces to Word2Vec.\n\n\n\nTo bound the memory requirements of the model being trained, a hashing function is used that maps ngrams to integers in 1 to K. For hashing these character sequences, the [Fowler-Noll-Vo hashing function](http://www.isthe.com/chongo/tech/comp/fnv) (FNV-1a variant) is employed.\n\n\n"
]
},
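  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hedged sketch (not part of the original tutorial), the cell below trains a second, throwaway model with non-default subword settings, reusing the ``corpus_file`` defined earlier; setting ``max_n=0`` instead would disable character ngrams entirely.\n\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Illustrative only: shorter char ngrams and a smaller hashing bucket; reuses corpus_file from above.\nmodel_subword = FastText(vector_size=100, min_n=2, max_n=5, bucket=100000)\nmodel_subword.build_vocab(corpus_file=corpus_file)\nmodel_subword.train(\n    corpus_file=corpus_file, epochs=model_subword.epochs,\n    total_examples=model_subword.corpus_count, total_words=model_subword.corpus_total_words,\n)\nprint(model_subword)"
   ]
  },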
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note:** You can continue to train your model while using Gensim's native implementation of fastText.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Saving/loading models\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Models can be saved and loaded via the ``load`` and ``save`` methods, just like\nany other model in Gensim.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Save a model trained via Gensim's fastText implementation to temp.\nimport tempfile\nimport os\nwith tempfile.NamedTemporaryFile(prefix='saved_model_gensim-', delete=False) as tmp:\n model.save(tmp.name, separately=[])\n\n# Load back the same model.\nloaded_model = FastText.load(tmp.name)\nprint(loaded_model)\n\nos.unlink(tmp.name) # demonstration complete, don't need the temp file anymore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The ``save_word2vec_format`` is also available for fastText models, but will\ncause all vectors for ngrams to be lost.\nAs a result, a model loaded in this way will behave as a regular word2vec model.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Word vector lookup\n\n\nAll information necessary for looking up fastText words (incl. OOV words) is\ncontained in its ``model.wv`` attribute.\n\nIf you don't need to continue training your model, you can export & save this `.wv`\nattribute and discard `model`, to save space and RAM.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"wv = model.wv\nprint(wv)\n\n#\n# FastText models support vector lookups for out-of-vocabulary words by summing up character ngrams belonging to the word.\n#\nprint('night' in wv.key_to_index)"
]
},
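  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hedged sketch (not part of the original tutorial), the cell below saves just the lightweight ``wv`` attribute to a temporary file and loads it back without the full model.\n\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from gensim.models import KeyedVectors\n\nwith tempfile.NamedTemporaryFile(prefix='saved_kv_gensim-', delete=False) as tmp_kv:\n    wv.save(tmp_kv.name, separately=[])\n\nwv_only = KeyedVectors.load(tmp_kv.name)\nprint(wv_only.similarity('night', 'nights'))\n\nos.unlink(tmp_kv.name)  # clean up the temporary file"
   ]
  },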
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print('nights' in wv.key_to_index)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(wv['night'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(wv['nights'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Similarity operations\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarity operations work the same way as word2vec. **Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data.**\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"nights\" in wv.key_to_index)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"night\" in wv.key_to_index)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(wv.similarity(\"night\", \"nights\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Syntactically similar words generally have high similarity in fastText models, since a large number of the component char-ngrams will be the same. As a result, fastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided [here](Word2Vec_FastText_Comparison.ipynb).\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Other similarity operations\n\nThe example training corpus is a toy corpus, results are not expected to be good, for proof-of-concept only\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(wv.most_similar(\"nights\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(wv.n_similarity(['sushi', 'shop'], ['japanese', 'restaurant']))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(wv.doesnt_match(\"breakfast cereal dinner lunch\".split()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(wv.most_similar(positive=['baghdad', 'england'], negative=['london']))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(wv.evaluate_word_analogies(datapath('questions-words.txt')))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Word Movers distance\n\nYou'll need the optional ``POT`` library for this section, ``pip install POT``.\n\nLet's start with two sentences:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()\nsentence_president = 'The president greets the press in Chicago'.lower().split()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Remove their stopwords.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim.parsing.preprocessing import STOPWORDS\nsentence_obama = [w for w in sentence_obama if w not in STOPWORDS]\nsentence_president = [w for w in sentence_president if w not in STOPWORDS]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Compute the Word Movers Distance between the two sentences.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"distance = wv.wmdistance(sentence_obama, sentence_president)\nprint(f\"Word Movers Distance is {distance} (lower means closer)\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That's all! You've made it to the end of this tutorial.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimg = mpimg.imread('fasttext-logo-color-web.png')\nimgplot = plt.imshow(img)\n_ = plt.axis('off')"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
piskvorky_gensim/docs/src/auto_examples/tutorials/run_ensemblelda.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\nEnsemble LDA\n============\n\nIntroduces Gensim's EnsembleLda model\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This tutorial will explain how to use the EnsembleLDA model class.\n\nEnsembleLda is a method of finding and generating stable topics from the results of multiple topic models,\nit can be used to remove topics from your results that are noise and are not reproducible.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Corpus\n------\nWe will use the gensim downloader api to get a small corpus for training our ensemble.\n\nThe preprocessing is similar to `sphx_glr_auto_examples_tutorials_run_word2vec.py`,\nso it won't be explained again in detail.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import gensim.downloader as api\nfrom gensim.corpora import Dictionary\nfrom nltk.stem.wordnet import WordNetLemmatizer\n\nlemmatizer = WordNetLemmatizer()\ndocs = api.load('text8')\n\ndictionary = Dictionary()\nfor doc in docs:\n dictionary.add_documents([[lemmatizer.lemmatize(token) for token in doc]])\ndictionary.filter_extremes(no_below=20, no_above=0.5)\n\ncorpus = [dictionary.doc2bow(doc) for doc in docs]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Training\n--------\n\nTraining the ensemble works very similar to training a single model,\n\nYou can use any model that is based on LdaModel, such as LdaMulticore, to train the Ensemble.\nIn experiments, LdaMulticore showed better results.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim.models import LdaModel\ntopic_model_class = LdaModel"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Any arbitrary number of models can be used, but it should be a multiple of your workers so that the\nload can be distributed properly. In this example, 4 processes will train 8 models each.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"ensemble_workers = 4\nnum_models = 8"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After training all the models, some distance computations are required which can take quite some\ntime as well. You can speed this up by using workers for that as well.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"distance_workers = 4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"All other parameters that are unknown to EnsembleLda are forwarded to each LDA Model, such as\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"num_topics = 20\npasses = 2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now start the training\n\nSince 20 topics were trained on each of the 8 models, we expect there to be 160 different topics.\nThe number of stable topics which are clustered from all those topics is smaller.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim.models import EnsembleLda\nensemble = EnsembleLda(\n corpus=corpus,\n id2word=dictionary,\n num_topics=num_topics,\n passes=passes,\n num_models=num_models,\n topic_model_class=LdaModel,\n ensemble_workers=ensemble_workers,\n distance_workers=distance_workers\n)\n\nprint(len(ensemble.ttda))\nprint(len(ensemble.get_topics()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tuning\n------\n\nDifferent from LdaModel, the number of resulting topics varies greatly depending on the clustering parameters.\n\nYou can provide those in the ``recluster()`` function or the ``EnsembleLda`` constructor.\n\nPlay around until you get as many topics as you desire, which however may reduce their quality.\nIf your ensemble doesn't have enough topics to begin with, you should make sure to make it large enough.\n\nHaving an epsilon that is smaller than the smallest distance doesn't make sense.\nMake sure to chose one that is within the range of values in ``asymmetric_distance_matrix``.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nshape = ensemble.asymmetric_distance_matrix.shape\nwithout_diagonal = ensemble.asymmetric_distance_matrix[~np.eye(shape[0], dtype=bool)].reshape(shape[0], -1)\nprint(without_diagonal.min(), without_diagonal.mean(), without_diagonal.max())\n\nensemble.recluster(eps=0.09, min_samples=2, min_cores=2)\n\nprint(len(ensemble.get_topics()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Increasing the Size\n-------------------\n\nIf you have some models lying around that were trained on a corpus based on the same dictionary,\nthey are compatible and you can add them to the ensemble.\n\nBy setting num_models of the EnsembleLda constructor to 0 you can also create an ensemble that is\nentirely made out of your existing topic models with the following method.\n\nAfterwards the number and quality of stable topics might be different depending on your added topics and parameters.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim.models import LdaMulticore\n\nmodel1 = LdaMulticore(\n corpus=corpus,\n id2word=dictionary,\n num_topics=9,\n passes=4,\n)\n\nmodel2 = LdaModel(\n corpus=corpus,\n id2word=dictionary,\n num_topics=11,\n passes=2,\n)\n\n# add_model supports various types of input, check out its docstring\nensemble.add_model(model1)\nensemble.add_model(model2)\n\nensemble.recluster()\n\nprint(len(ensemble.ttda))\nprint(len(ensemble.get_topics()))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
piskvorky_gensim/docs/src/auto_examples/tutorials/run_doc2vec_lee.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Doc2Vec Model\n\nIntroduces Gensim's Doc2Vec model and demonstrates its use on the\n[Lee Corpus](https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf)_.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Doc2Vec is a `core_concepts_model` that represents each\n`core_concepts_document` as a `core_concepts_vector`. This\ntutorial introduces the model and demonstrates how to train and assess it.\n\nHere's a list of what we'll be doing:\n\n0. Review the relevant models: bag-of-words, Word2Vec, Doc2Vec\n1. Load and preprocess the training and test corpora (see `core_concepts_corpus`)\n2. Train a Doc2Vec `core_concepts_model` model using the training corpus\n3. Demonstrate how the trained model can be used to infer a `core_concepts_vector`\n4. Assess the model\n5. Test the model on the test corpus\n\n## Review: Bag-of-words\n\n.. Note:: Feel free to skip these review sections if you're already familiar with the models.\n\nYou may be familiar with the [bag-of-words model](https://en.wikipedia.org/wiki/Bag-of-words_model) from the\n`core_concepts_vector` section.\nThis model transforms each document to a fixed-length vector of integers.\nFor example, given the sentences:\n\n- ``John likes to watch movies. Mary likes movies too.``\n- ``John also likes to watch football games. Mary hates football.``\n\nThe model outputs the vectors:\n\n- ``[1, 2, 1, 1, 2, 1, 1, 0, 0, 0, 0]``\n- ``[1, 1, 1, 1, 0, 1, 0, 1, 2, 1, 1]``\n\nEach vector has 10 elements, where each element counts the number of times a\nparticular word occurred in the document.\nThe order of elements is arbitrary.\nIn the example above, the order of the elements corresponds to the words:\n``[\"John\", \"likes\", \"to\", \"watch\", \"movies\", \"Mary\", \"too\", \"also\", \"football\", \"games\", \"hates\"]``.\n\nBag-of-words models are surprisingly effective, but have several weaknesses.\n\nFirst, they lose all information about word order: \"John likes Mary\" and\n\"Mary likes John\" correspond to identical vectors. There is a solution: bag\nof [n-grams](https://en.wikipedia.org/wiki/N-gram)_\nmodels consider word phrases of length n to represent documents as\nfixed-length vectors to capture local word order but suffer from data\nsparsity and high dimensionality.\n\nSecond, the model does not attempt to learn the meaning of the underlying\nwords, and as a consequence, the distance between vectors doesn't always\nreflect the difference in meaning. The ``Word2Vec`` model addresses this\nsecond problem.\n\n## Review: ``Word2Vec`` Model\n\n``Word2Vec`` is a more recent model that embeds words in a lower-dimensional\nvector space using a shallow neural network. The result is a set of\nword-vectors where vectors close together in vector space have similar\nmeanings based on context, and word-vectors distant to each other have\ndiffering meanings. For example, ``strong`` and ``powerful`` would be close\ntogether and ``strong`` and ``Paris`` would be relatively far.\n\nGensim's :py:class:`~gensim.models.word2vec.Word2Vec` class implements this model.\n\nWith the ``Word2Vec`` model, we can calculate the vectors for each **word** in a document.\nBut what if we want to calculate a vector for the **entire document**\\ ?\nWe could average the vectors for each word in the document - while this is quick and crude, it can often be useful.\nHowever, there is a better way...\n\n## Introducing: Paragraph Vector\n\n.. 
Important:: In Gensim, we refer to the Paragraph Vector model as ``Doc2Vec``.\n\nLe and Mikolov in 2014 introduced the [Doc2Vec algorithm](https://cs.stanford.edu/~quocle/paragraph_vector.pdf)_,\nwhich usually outperforms such simple-averaging of ``Word2Vec`` vectors.\n\nThe basic idea is: act as if a document has another floating word-like\nvector, which contributes to all training predictions, and is updated like\nother word-vectors, but we will call it a doc-vector. Gensim's\n:py:class:`~gensim.models.doc2vec.Doc2Vec` class implements this algorithm.\n\nThere are two implementations:\n\n1. Paragraph Vector - Distributed Memory (PV-DM)\n2. Paragraph Vector - Distributed Bag of Words (PV-DBOW)\n\n.. Important::\n Don't let the implementation details below scare you.\n They're advanced material: if it's too much, then move on to the next section.\n\nPV-DM is analogous to Word2Vec CBOW. The doc-vectors are obtained by training\na neural network on the synthetic task of predicting a center word based an\naverage of both context word-vectors and the full document's doc-vector.\n\nPV-DBOW is analogous to Word2Vec SG. The doc-vectors are obtained by training\na neural network on the synthetic task of predicting a target word just from\nthe full document's doc-vector. (It is also common to combine this with\nskip-gram testing, using both the doc-vector and nearby word-vectors to\npredict a single target word, but only one at a time.)\n\n## Prepare the Training and Test Data\n\nFor this tutorial, we'll be training our model using the [Lee Background\nCorpus](https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf)\nincluded in gensim. This corpus contains 314 documents selected from the\nAustralian Broadcasting Corporation\u2019s news mail service, which provides text\ne-mails of headline stories and covers a number of broad topics.\n\nAnd we'll test our model by eye using the much shorter [Lee Corpus](https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf)\nwhich contains 50 documents.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\nimport gensim\n# Set file names for train and test data\ntest_data_dir = os.path.join(gensim.__path__[0], 'test', 'test_data')\nlee_train_file = os.path.join(test_data_dir, 'lee_background.cor')\nlee_test_file = os.path.join(test_data_dir, 'lee.cor')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define a Function to Read and Preprocess Text\n\nBelow, we define a function to:\n\n- open the train/test file (with latin encoding)\n- read the file line-by-line\n- pre-process each line (tokenize text into individual words, remove punctuation, set to lowercase, etc)\n\nThe file we're reading is a **corpus**.\nEach line of the file is a **document**.\n\n.. Important::\n To train the model, we'll need to associate a tag/number with each document\n of the training corpus. In our case, the tag is simply the zero-based line\n number.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import smart_open\n\ndef read_corpus(fname, tokens_only=False):\n with smart_open.open(fname, encoding=\"iso-8859-1\") as f:\n for i, line in enumerate(f):\n tokens = gensim.utils.simple_preprocess(line)\n if tokens_only:\n yield tokens\n else:\n # For training data, add tags\n yield gensim.models.doc2vec.TaggedDocument(tokens, [i])\n\ntrain_corpus = list(read_corpus(lee_train_file))\ntest_corpus = list(read_corpus(lee_test_file, tokens_only=True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a look at the training corpus\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(train_corpus[:2])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And the testing corpus looks like this:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(test_corpus[:2])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that the testing corpus is just a list of lists and does not contain\nany tags.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n\nNow, we'll instantiate a Doc2Vec model with a vector size with 50 dimensions and\niterating over the training corpus 40 times. We set the minimum word count to\n2 in order to discard words with very few occurrences. (Without a variety of\nrepresentative examples, retaining such infrequent words can often make a\nmodel worse!) Typical iteration counts in the published [Paragraph Vector paper](https://cs.stanford.edu/~quocle/paragraph_vector.pdf)_\nresults, using 10s-of-thousands to millions of docs, are 10-20. More\niterations take more time and eventually reach a point of diminishing\nreturns.\n\nHowever, this is a very very small dataset (300 documents) with shortish\ndocuments (a few hundred words). Adding training passes can sometimes help\nwith such small datasets.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model = gensim.models.doc2vec.Doc2Vec(vector_size=50, min_count=2, epochs=40)"
]
},
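  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hedged aside (not part of the original tutorial): the two Paragraph Vector variants reviewed earlier are selected in Gensim with the ``dm`` parameter (``dm=1`` for PV-DM, the default; ``dm=0`` for PV-DBOW, optionally with ``dbow_words=1`` to also train word-vectors skip-gram style). The models below are illustrative only and are not used in the rest of this notebook.\n\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Illustrative only: PV-DM vs. PV-DBOW configuration.\npv_dm_model = gensim.models.doc2vec.Doc2Vec(dm=1, vector_size=50, min_count=2, epochs=40)\npv_dbow_model = gensim.models.doc2vec.Doc2Vec(dm=0, dbow_words=1, vector_size=50, min_count=2, epochs=40)\nprint(pv_dm_model)\nprint(pv_dbow_model)"
   ]
  },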
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Build a vocabulary\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model.build_vocab(train_corpus)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Essentially, the vocabulary is a list (accessible via\n``model.wv.index_to_key``) of all of the unique words extracted from the training corpus.\nAdditional attributes for each word are available using the ``model.wv.get_vecattr()`` method,\nFor example, to see how many times ``penalty`` appeared in the training corpus:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(f\"Word 'penalty' appeared {model.wv.get_vecattr('penalty', 'count')} times in the training corpus.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, train the model on the corpus.\nIn the usual case, where Gensim installation found a BLAS library for optimized\nbulk vector operations, this training on this tiny 300 document, ~60k word corpus \nshould take just a few seconds. (More realistic datasets of tens-of-millions\nof words or more take proportionately longer.) If for some reason a BLAS library \nisn't available, training uses a fallback approach that takes 60x-120x longer, \nso even this tiny training will take minutes rather than seconds. (And, in that \ncase, you should also notice a warning in the logging letting you know there's \nsomething worth fixing.) So, be sure your installation uses the BLAS-optimized \nGensim if you value your time.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we can use the trained model to infer a vector for any piece of text\nby passing a list of words to the ``model.infer_vector`` function. This\nvector can then be compared with other vectors via cosine similarity.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"vector = model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])\nprint(vector)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that ``infer_vector()`` does *not* take a string, but rather a list of\nstring tokens, which should have already been tokenized the same way as the\n``words`` property of original training document objects.\n\nAlso note that because the underlying training/inference algorithms are an\niterative approximation problem that makes use of internal randomization,\nrepeated inferences of the same text will return slightly different vectors.\n\n\n"
]
},
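  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, hedged illustration of that last point (not part of the original tutorial), the cell below infers a vector twice for the same tokens and shows that the two results differ slightly.\n\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n\ntokens = ['only', 'you', 'can', 'prevent', 'forest', 'fires']\nvector_a = model.infer_vector(tokens)\nvector_b = model.infer_vector(tokens)\nprint('Identical vectors:', np.allclose(vector_a, vector_b))\nprint('Norm of the difference:', np.linalg.norm(vector_a - vector_b))"
   ]
  },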
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Assessing the Model\n\nTo assess our new model, we'll first infer new vectors for each document of\nthe training corpus, compare the inferred vectors with the training corpus,\nand then returning the rank of the document based on self-similarity.\nBasically, we're pretending as if the training corpus is some new unseen data\nand then seeing how they compare with the trained model. The expectation is\nthat we've likely overfit our model (i.e., all of the ranks will be less than\n2) and so we should be able to find similar documents very easily.\nAdditionally, we'll keep track of the second ranks for a comparison of less\nsimilar documents.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"ranks = []\nsecond_ranks = []\nfor doc_id in range(len(train_corpus)):\n inferred_vector = model.infer_vector(train_corpus[doc_id].words)\n sims = model.dv.most_similar([inferred_vector], topn=len(model.dv))\n rank = [docid for docid, sim in sims].index(doc_id)\n ranks.append(rank)\n\n second_ranks.append(sims[1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's count how each document ranks with respect to the training corpus\n\nNB. Results vary between runs due to random seeding and very small corpus\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import collections\n\ncounter = collections.Counter(ranks)\nprint(counter)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Basically, greater than 95% of the inferred documents are found to be most\nsimilar to itself and about 5% of the time it is mistakenly most similar to\nanother document. Checking the inferred-vector against a\ntraining-vector is a sort of 'sanity check' as to whether the model is\nbehaving in a usefully consistent manner, though not a real 'accuracy' value.\n\nThis is great and not entirely surprising. We can take a look at an example:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print('Document ({}): \u00ab{}\u00bb\\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))\nprint(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\\n' % model)\nfor label, index in [('MOST', 0), ('SECOND-MOST', 1), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:\n print(u'%s %s: \u00ab%s\u00bb\\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice above that the most similar document (usually the same text) is has a\nsimilarity score approaching 1.0. However, the similarity score for the\nsecond-ranked documents should be significantly lower (assuming the documents\nare in fact different) and the reasoning becomes obvious when we examine the\ntext itself.\n\nWe can run the next cell repeatedly to see a sampling other target-document\ncomparisons.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Pick a random document from the corpus and infer a vector from the model\nimport random\ndoc_id = random.randint(0, len(train_corpus) - 1)\n\n# Compare and print the second-most-similar document\nprint('Train Document ({}): \u00ab{}\u00bb\\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))\nsim_id = second_ranks[doc_id]\nprint('Similar Document {}: \u00ab{}\u00bb\\n'.format(sim_id, ' '.join(train_corpus[sim_id[0]].words)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Testing the Model\n\nUsing the same approach above, we'll infer the vector for a randomly chosen\ntest document, and compare the document to our model by eye.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Pick a random document from the test corpus and infer a vector from the model\ndoc_id = random.randint(0, len(test_corpus) - 1)\ninferred_vector = model.infer_vector(test_corpus[doc_id])\nsims = model.dv.most_similar([inferred_vector], topn=len(model.dv))\n\n# Compare and print the most/median/least similar documents from the train corpus\nprint('Test Document ({}): \u00ab{}\u00bb\\n'.format(doc_id, ' '.join(test_corpus[doc_id])))\nprint(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\\n' % model)\nfor label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:\n print(u'%s %s: \u00ab%s\u00bb\\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion\n\nLet's review what we've seen in this tutorial:\n\n0. Review the relevant models: bag-of-words, Word2Vec, Doc2Vec\n1. Load and preprocess the training and test corpora (see `core_concepts_corpus`)\n2. Train a Doc2Vec `core_concepts_model` model using the training corpus\n3. Demonstrate how the trained model can be used to infer a `core_concepts_vector`\n4. Assess the model\n5. Test the model on the test corpus\n\nThat's it! Doc2Vec is a great way to explore relationships between documents.\n\n## Additional Resources\n\nIf you'd like to know more about the subject matter of this tutorial, check out the links below.\n\n* [Word2Vec Paper](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf)\n* [Doc2Vec Paper](https://cs.stanford.edu/~quocle/paragraph_vector.pdf)\n* [Dr. Michael D. Lee's Website](http://faculty.sites.uci.edu/mdlee)\n* [Lee Corpus](http://faculty.sites.uci.edu/mdlee/similarity-data/)_\n* [IMDB Doc2Vec Tutorial](doc2vec-IMDB.ipynb)\n\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
| 20,217 | Python | .py | 327 | 55.061162 | 5,297 | 0.660148 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
7,014 | run_annoy.py | piskvorky_gensim/docs/src/auto_examples/tutorials/run_annoy.py |
r"""
Fast Similarity Queries with Annoy and Word2Vec
===============================================
Introduces the Annoy library for similarity queries on top of vectors learned by Word2Vec.
"""
LOGS = False # Set to True if you want to see progress in logs.
if LOGS:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
###############################################################################
# The `Annoy "Approximate Nearest Neighbors Oh Yeah"
# <https://github.com/spotify/annoy>`_ library enables similarity queries with
# a Word2Vec model. The current implementation for finding k nearest neighbors
# in a vector space in Gensim has linear complexity via brute force in the
# number of indexed documents, although with extremely low constant factors.
# The retrieved results are exact, which is an overkill in many applications:
# approximate results retrieved in sub-linear time may be enough. Annoy can
# find approximate nearest neighbors much faster.
#
# Outline
# -------
#
# 1. Download Text8 Corpus
# 2. Train the Word2Vec model
# 3. Construct AnnoyIndex with model & make a similarity query
# 4. Compare to the traditional indexer
# 5. Persist indices to disk
# 6. Save memory via memory-mapping indices saved to disk
# 7. Evaluate relationship of ``num_trees`` to initialization time and accuracy
# 8. Work with Google's word2vec C formats
#
###############################################################################
# 1. Download Text8 corpus
# ------------------------
import gensim.downloader as api
text8_path = api.load('text8', return_path=True)
print("Using corpus from", text8_path)
###############################################################################
# 2. Train the Word2Vec model
# ---------------------------
#
# For more details, see :ref:`sphx_glr_auto_examples_tutorials_run_word2vec.py`.
from gensim.models import Word2Vec, KeyedVectors
from gensim.models.word2vec import Text8Corpus
# Using params from Word2Vec_FastText_Comparison
params = {
'alpha': 0.05,
'vector_size': 100,
'window': 5,
'epochs': 5,
'min_count': 5,
'sample': 1e-4,
'sg': 1,
'hs': 0,
'negative': 5,
}
model = Word2Vec(Text8Corpus(text8_path), **params)
wv = model.wv
print("Using trained model", wv)
###############################################################################
# 3. Construct AnnoyIndex with model & make a similarity query
# ------------------------------------------------------------
#
# An instance of ``AnnoyIndexer`` needs to be created in order to use Annoy in Gensim.
# The ``AnnoyIndexer`` class is located in ``gensim.similarities.annoy``.
#
# ``AnnoyIndexer()`` takes two parameters:
#
# * **model**: A ``Word2Vec`` or ``Doc2Vec`` model.
# * **num_trees**: A positive integer. ``num_trees`` affects the build
# time and the index size. **A larger value will give more accurate results,
# but larger indexes**. More information on what trees in Annoy do can be found
# `here <https://github.com/spotify/annoy#how-does-it-work>`__. The relationship
# between ``num_trees``\ , build time, and accuracy will be investigated later
# in the tutorial.
#
# Now that we are ready to make a query, let's find the top 5 most similar words
# to "science" in the Text8 corpus. To make a similarity query we call
# ``Word2Vec.most_similar`` like we would traditionally, but with an added
# parameter, ``indexer``.
#
# Apart from Annoy, Gensim also supports the NMSLIB indexer. NMSLIB is a similar library to
# Annoy – both support fast, approximate searches for similar vectors.
#
from gensim.similarities.annoy import AnnoyIndexer
# 100 trees are being used in this example
annoy_index = AnnoyIndexer(model, 100)
# Derive the vector for the word "science" in our model
vector = wv["science"]
# The instance of AnnoyIndexer we just created is passed
approximate_neighbors = wv.most_similar([vector], topn=11, indexer=annoy_index)
# Neatly print the approximate_neighbors and their corresponding cosine similarity values
print("Approximate Neighbors")
for neighbor in approximate_neighbors:
print(neighbor)
normal_neighbors = wv.most_similar([vector], topn=11)
print("\nExact Neighbors")
for neighbor in normal_neighbors:
print(neighbor)
###############################################################################
# The closer the cosine similarity of a vector is to 1, the more similar that
# word is to our query, which was the vector for "science". There are some
# differences in the ranking of similar words and the set of words included
# within the 10 most similar words.
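###############################################################################
# As mentioned above, Gensim also ships an NMSLIB-based indexer that is used in
# the same way. The snippet below is only a minimal sketch: it assumes the
# optional ``nmslib`` package is installed and that
# ``gensim.similarities.nmslib.NmslibIndexer`` can be constructed from a model
# with its default index parameters -- check the API reference for your Gensim
# version.
#
try:
    from gensim.similarities.nmslib import NmslibIndexer

    nmslib_index = NmslibIndexer(model)  # build an NMSLIB index with default parameters
    print("NMSLIB approximate neighbors")
    for neighbor in wv.most_similar([vector], topn=11, indexer=nmslib_index):
        print(neighbor)
except ImportError:
    print("nmslib is not installed; skipping the NMSLIB example")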
###############################################################################
# 4. Compare to the traditional indexer
# -------------------------------------
# Set up the model and vector that we are using in the comparison
annoy_index = AnnoyIndexer(model, 100)
# Dry run to make sure both indexes are fully in RAM
normed_vectors = wv.get_normed_vectors()
vector = normed_vectors[0]
wv.most_similar([vector], topn=5, indexer=annoy_index)
wv.most_similar([vector], topn=5)
import time
import numpy as np
def avg_query_time(annoy_index=None, queries=1000):
"""Average query time of a most_similar method over 1000 random queries."""
total_time = 0
for _ in range(queries):
rand_vec = normed_vectors[np.random.randint(0, len(wv))]
start_time = time.process_time()
wv.most_similar([rand_vec], topn=5, indexer=annoy_index)
total_time += time.process_time() - start_time
return total_time / queries
queries = 1000
gensim_time = avg_query_time(queries=queries)
annoy_time = avg_query_time(annoy_index, queries=queries)
print("Gensim (s/query):\t{0:.5f}".format(gensim_time))
print("Annoy (s/query):\t{0:.5f}".format(annoy_time))
speed_improvement = gensim_time / annoy_time
print ("\nAnnoy is {0:.2f} times faster on average on this particular run".format(speed_improvement))
###############################################################################
# **This speedup factor is by no means constant** and will vary greatly from
# run to run; it is particular to this data set, BLAS setup, Annoy
# parameters (as tree size increases, the speedup factor decreases), machine
# specifications, among other factors.
#
# .. Important::
# Initialization time for the annoy indexer was not included in the times.
# The optimal knn algorithm for you to use will depend on how many queries
# you need to make and the size of the corpus. If you are making very few
# similarity queries, the time taken to initialize the annoy indexer will be
# longer than the time it would take the brute force method to retrieve
# results. If you are making many queries however, the time it takes to
# initialize the annoy indexer will be made up for by the incredibly fast
# retrieval times for queries once the indexer has been initialized
#
# .. Important::
# Gensim's 'most_similar' method uses numpy operations in the form of a
# dot product, whereas Annoy's method doesn't. If 'numpy' on your machine is
# using one of the BLAS libraries like ATLAS or LAPACK, it'll run on
# multiple cores (only if your machine has multicore support). Check `SciPy
# Cookbook
# <http://scipy-cookbook.readthedocs.io/items/ParallelProgramming.html>`_
# for more details.
#
###############################################################################
# 5. Persisting indices to disk
# -----------------------------
#
# You can save and load your indexes from/to disk to prevent having to
# construct them each time. This will create two files on disk, *fname* and
# *fname.d*. Both files are needed to correctly restore all attributes. Before
# loading an index, you will have to create an empty AnnoyIndexer object.
#
fname = '/tmp/mymodel.index'
# Persist index to disk
annoy_index.save(fname)
# Load index back
import os.path
if os.path.exists(fname):
annoy_index2 = AnnoyIndexer()
annoy_index2.load(fname)
annoy_index2.model = model
# Results should be identical to above
vector = wv["science"]
approximate_neighbors2 = wv.most_similar([vector], topn=11, indexer=annoy_index2)
for neighbor in approximate_neighbors2:
print(neighbor)
assert approximate_neighbors == approximate_neighbors2
###############################################################################
# Be sure to use the same model at load that was used originally, otherwise you
# will get unexpected behaviors.
#
###############################################################################
# 6. Save memory via memory-mapping indexes saved to disk
# -------------------------------------------------------
#
# The Annoy library has a useful feature: indices can be memory-mapped from
# disk. This saves memory when the same index is used by several processes.
#
# Below are two snippets of code. The first one builds a separate index for each
# process. The second snippet shares the index between two processes via
# memory-mapping. The second example uses less total RAM because the index is shared.
#
# Remove verbosity from code below (if logging active)
if LOGS:
logging.disable(logging.CRITICAL)
from multiprocessing import Process
import os
import psutil
###############################################################################
# Bad example: two processes load the Word2vec model from disk and create their
# own Annoy index from that model.
#
model.save('/tmp/mymodel.pkl')
def f(process_id):
print('Process Id: {}'.format(os.getpid()))
process = psutil.Process(os.getpid())
new_model = Word2Vec.load('/tmp/mymodel.pkl')
vector = new_model.wv["science"]
annoy_index = AnnoyIndexer(new_model, 100)
approximate_neighbors = new_model.wv.most_similar([vector], topn=5, indexer=annoy_index)
print('\nMemory used by process {}: {}\n---'.format(os.getpid(), process.memory_info()))
# Create and run two parallel processes; each loads the model and builds its own index.
p1 = Process(target=f, args=('1',))
p1.start()
p1.join()
p2 = Process(target=f, args=('2',))
p2.start()
p2.join()
###############################################################################
# Good example: two processes load both the Word2vec model and index from disk
# and memory-map the index.
#
model.save('/tmp/mymodel.pkl')
def f(process_id):
print('Process Id: {}'.format(os.getpid()))
process = psutil.Process(os.getpid())
new_model = Word2Vec.load('/tmp/mymodel.pkl')
vector = new_model.wv["science"]
annoy_index = AnnoyIndexer()
annoy_index.load('/tmp/mymodel.index')
annoy_index.model = new_model
approximate_neighbors = new_model.wv.most_similar([vector], topn=5, indexer=annoy_index)
print('\nMemory used by process {}: {}\n---'.format(os.getpid(), process.memory_info()))
# Create and run two parallel processes that share the same memory-mapped index file.
p1 = Process(target=f, args=('1',))
p1.start()
p1.join()
p2 = Process(target=f, args=('2',))
p2.start()
p2.join()
###############################################################################
# 7. Evaluate relationship of ``num_trees`` to initialization time and accuracy
# -----------------------------------------------------------------------------
#
import matplotlib.pyplot as plt
###############################################################################
# Build dataset of initialization times and accuracy measures:
#
exact_results = [element[0] for element in wv.most_similar([normed_vectors[0]], topn=100)]
x_values = []
y_values_init = []
y_values_accuracy = []
for x in range(1, 300, 10):
x_values.append(x)
start_time = time.time()
annoy_index = AnnoyIndexer(model, x)
y_values_init.append(time.time() - start_time)
approximate_results = wv.most_similar([normed_vectors[0]], topn=100, indexer=annoy_index)
top_words = [result[0] for result in approximate_results]
y_values_accuracy.append(len(set(top_words).intersection(exact_results)))
###############################################################################
# Plot results:
plt.figure(1, figsize=(12, 6))
plt.subplot(121)
plt.plot(x_values, y_values_init)
plt.title("num_trees vs initalization time")
plt.ylabel("Initialization time (s)")
plt.xlabel("num_trees")
plt.subplot(122)
plt.plot(x_values, y_values_accuracy)
plt.title("num_trees vs accuracy")
plt.ylabel("%% accuracy")
plt.xlabel("num_trees")
plt.tight_layout()
plt.show()
###############################################################################
# From the above, we can see that the initialization time of the annoy indexer
# increases in a linear fashion with num_trees. Initialization time will vary
# from corpus to corpus; in the graph above, the model trained on the Text8 corpus earlier in this tutorial was used.
#
# Furthermore, in this dataset, the accuracy seems logarithmically related to
# the number of trees. We see an improvement in accuracy with more trees, but
# the relationship is nonlinear.
#
###############################################################################
# 8. Work with Google's word2vec files
# ------------------------------------
#
# Our model can be exported to a word2vec C format. There is a binary and a
# plain text word2vec format. Both can be read with a variety of other
# software, or imported back into Gensim as a ``KeyedVectors`` object.
#
# To export our model as text
wv.save_word2vec_format('/tmp/vectors.txt', binary=False)
from smart_open import open
# View the first 3 lines of the exported file
# The first line has the total number of entries and the vector dimension count.
# The next lines have a key (a string) followed by its vector.
with open('/tmp/vectors.txt', encoding='utf8') as myfile:
for i in range(3):
print(myfile.readline().strip())
# To import a word2vec text model
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
# To export a model as binary
wv.save_word2vec_format('/tmp/vectors.bin', binary=True)
# To import a word2vec binary model
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)
# To create and save Annoy Index from a loaded `KeyedVectors` object (with 100 trees)
annoy_index = AnnoyIndexer(wv, 100)
annoy_index.save('/tmp/mymodel.index')
# Load and test the saved word vectors and saved Annoy index
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)
annoy_index = AnnoyIndexer()
annoy_index.load('/tmp/mymodel.index')
annoy_index.model = wv
vector = wv["cat"]
approximate_neighbors = wv.most_similar([vector], topn=11, indexer=annoy_index)
# Neatly print the approximate_neighbors and their corresponding cosine similarity values
print("Approximate Neighbors")
for neighbor in approximate_neighbors:
print(neighbor)
normal_neighbors = wv.most_similar([vector], topn=11)
print("\nExact Neighbors")
for neighbor in normal_neighbors:
print(neighbor)
###############################################################################
# Recap
# -----
#
# In this notebook we used the Annoy module to build an indexed approximation
# of our word embeddings. To do so, we did the following steps:
#
# 1. Download Text8 Corpus
# 2. Train Word2Vec Model
# 3. Construct AnnoyIndex with model & make a similarity query
# 4. Persist indices to disk
# 5. Save memory via memory-mapping indices saved to disk
# 6. Evaluate relationship of ``num_trees`` to initialization time and accuracy
# 7. Work with Google's word2vec C formats
#
| 15,445 | Python | .py | 344 | 43.06686 | 101 | 0.661128 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
7,015 | run_word2vec.ipynb | piskvorky_gensim/docs/src/auto_examples/tutorials/run_word2vec.ipynb |
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\nWord2Vec Model\n==============\n\nIntroduces Gensim's Word2Vec model and demonstrates its use on the `Lee Evaluation Corpus\n<https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>`_.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In case you missed the buzz, Word2Vec is a widely used algorithm based on neural\nnetworks, commonly referred to as \"deep learning\" (though word2vec itself is rather shallow).\nUsing large amounts of unannotated plain text, word2vec learns relationships\nbetween words automatically. The output are vectors, one vector per word,\nwith remarkable linear relationships that allow us to do things like:\n\n* vec(\"king\") - vec(\"man\") + vec(\"woman\") =~ vec(\"queen\")\n* vec(\"Montreal Canadiens\") \u2013 vec(\"Montreal\") + vec(\"Toronto\") =~ vec(\"Toronto Maple Leafs\").\n\nWord2vec is very useful in `automatic text tagging\n<https://github.com/RaRe-Technologies/movie-plots-by-genre>`_\\ , recommender\nsystems and machine translation.\n\nThis tutorial:\n\n#. Introduces ``Word2Vec`` as an improvement over traditional bag-of-words\n#. Shows off a demo of ``Word2Vec`` using a pre-trained model\n#. Demonstrates training a new model from your own data\n#. Demonstrates loading and saving models\n#. Introduces several training parameters and demonstrates their effect\n#. Discusses memory requirements\n#. Visualizes Word2Vec embeddings by applying dimensionality reduction\n\nReview: Bag-of-words\n--------------------\n\n.. Note:: Feel free to skip these review sections if you're already familiar with the models.\n\nYou may be familiar with the `bag-of-words model\n<https://en.wikipedia.org/wiki/Bag-of-words_model>`_ from the\n`core_concepts_vector` section.\nThis model transforms each document to a fixed-length vector of integers.\nFor example, given the sentences:\n\n- ``John likes to watch movies. Mary likes movies too.``\n- ``John also likes to watch football games. Mary hates football.``\n\nThe model outputs the vectors:\n\n- ``[1, 2, 1, 1, 2, 1, 1, 0, 0, 0, 0]``\n- ``[1, 1, 1, 1, 0, 1, 0, 1, 2, 1, 1]``\n\nEach vector has 10 elements, where each element counts the number of times a\nparticular word occurred in the document.\nThe order of elements is arbitrary.\nIn the example above, the order of the elements corresponds to the words:\n``[\"John\", \"likes\", \"to\", \"watch\", \"movies\", \"Mary\", \"too\", \"also\", \"football\", \"games\", \"hates\"]``.\n\nBag-of-words models are surprisingly effective, but have several weaknesses.\n\nFirst, they lose all information about word order: \"John likes Mary\" and\n\"Mary likes John\" correspond to identical vectors. There is a solution: bag\nof `n-grams <https://en.wikipedia.org/wiki/N-gram>`__\nmodels consider word phrases of length n to represent documents as\nfixed-length vectors to capture local word order but suffer from data\nsparsity and high dimensionality.\n\nSecond, the model does not attempt to learn the meaning of the underlying\nwords, and as a consequence, the distance between vectors doesn't always\nreflect the difference in meaning. The ``Word2Vec`` model addresses this\nsecond problem.\n\nIntroducing: the ``Word2Vec`` Model\n-----------------------------------\n\n``Word2Vec`` is a more recent model that embeds words in a lower-dimensional\nvector space using a shallow neural network. The result is a set of\nword-vectors where vectors close together in vector space have similar\nmeanings based on context, and word-vectors distant to each other have\ndiffering meanings. For example, ``strong`` and ``powerful`` would be close\ntogether and ``strong`` and ``Paris`` would be relatively far.\n\nThe are two versions of this model and :py:class:`~gensim.models.word2vec.Word2Vec`\nclass implements them both:\n\n1. Skip-grams (SG)\n2. 
Continuous-bag-of-words (CBOW)\n\n.. Important::\n Don't let the implementation details below scare you.\n They're advanced material: if it's too much, then move on to the next section.\n\nThe `Word2Vec Skip-gram <http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model>`__\nmodel, for example, takes in pairs (word1, word2) generated by moving a\nwindow across text data, and trains a 1-hidden-layer neural network based on\nthe synthetic task of given an input word, giving us a predicted probability\ndistribution of nearby words to the input. A virtual `one-hot\n<https://en.wikipedia.org/wiki/One-hot>`__ encoding of words\ngoes through a 'projection layer' to the hidden layer; these projection\nweights are later interpreted as the word embeddings. So if the hidden layer\nhas 300 neurons, this network will give us 300-dimensional word embeddings.\n\nContinuous-bag-of-words Word2vec is very similar to the skip-gram model. It\nis also a 1-hidden-layer neural network. The synthetic training task now uses\nthe average of multiple input context words, rather than a single word as in\nskip-gram, to predict the center word. Again, the projection weights that\nturn one-hot words into averageable vectors, of the same width as the hidden\nlayer, are interpreted as the word embeddings.\n\n\n"
]
},
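  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, hands-on illustration (a small sketch added alongside this review section, not part of the original tutorial flow), the cell below reproduces the two bag-of-words vectors shown above using plain Python:\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from collections import Counter\n\nvocabulary = [\"John\", \"likes\", \"to\", \"watch\", \"movies\", \"Mary\", \"too\", \"also\", \"football\", \"games\", \"hates\"]\nbow_sentences = [\n    \"John likes to watch movies. Mary likes movies too.\",\n    \"John also likes to watch football games. Mary hates football.\",\n]\n\nfor sentence in bow_sentences:\n    # Strip the periods and split on whitespace -- good enough for this tiny example.\n    counts = Counter(sentence.replace('.', ' ').split())\n    print([counts[word] for word in vocabulary])"
   ]
  },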
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Word2Vec Demo\n-------------\n\nTo see what ``Word2Vec`` can do, let's download a pre-trained model and play\naround with it. We will fetch the Word2Vec model trained on part of the\nGoogle News dataset, covering approximately 3 million words and phrases. Such\na model can take hours to train, but since it's already available,\ndownloading and loading it with Gensim takes minutes.\n\n.. Important::\n The model is approximately 2GB, so you'll need a decent network connection\n to proceed. Otherwise, skip ahead to the \"Training Your Own Model\" section\n below.\n\nYou may also check out an `online word2vec demo\n<http://radimrehurek.com/2014/02/word2vec-tutorial/#app>`_ where you can try\nthis vector algebra for yourself. That demo runs ``word2vec`` on the\n**entire** Google News dataset, of **about 100 billion words**.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import gensim.downloader as api\nwv = api.load('word2vec-google-news-300')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A common operation is to retrieve the vocabulary of a model. That is trivial:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"for index, word in enumerate(wv.index_to_key):\n if index == 10:\n break\n print(f\"word #{index}/{len(wv.index_to_key)} is {word}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can easily obtain vectors for terms the model is familiar with:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"vec_king = wv['king']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Unfortunately, the model is unable to infer vectors for unfamiliar words.\nThis is one limitation of Word2Vec: if this limitation matters to you, check\nout the FastText model.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"try:\n vec_cameroon = wv['cameroon']\nexcept KeyError:\n print(\"The word 'cameroon' does not appear in this model\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Moving on, ``Word2Vec`` supports several word similarity tasks out of the\nbox. You can see how the similarity intuitively decreases as the words get\nless and less similar.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"pairs = [\n ('car', 'minivan'), # a minivan is a kind of car\n ('car', 'bicycle'), # still a wheeled vehicle\n ('car', 'airplane'), # ok, no wheels, but still a vehicle\n ('car', 'cereal'), # ... and so on\n ('car', 'communism'),\n]\nfor w1, w2 in pairs:\n print('%r\\t%r\\t%.2f' % (w1, w2, wv.similarity(w1, w2)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Print the 5 most similar words to \"car\" or \"minivan\"\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(wv.most_similar(positive=['car', 'minivan'], topn=5))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Which of the below does not belong in the sequence?\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(wv.doesnt_match(['fire', 'water', 'land', 'sea', 'air', 'car']))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Training Your Own Model\n-----------------------\n\nTo start, you'll need some data for training the model. For the following\nexamples, we'll use the `Lee Evaluation Corpus\n<https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>`_\n(which you `already have\n<https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/test/test_data/lee_background.cor>`_\nif you've installed Gensim).\n\nThis corpus is small enough to fit entirely in memory, but we'll implement a\nmemory-friendly iterator that reads it line-by-line to demonstrate how you\nwould handle a larger corpus.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim.test.utils import datapath\nfrom gensim import utils\n\nclass MyCorpus:\n \"\"\"An iterator that yields sentences (lists of str).\"\"\"\n\n def __iter__(self):\n corpus_path = datapath('lee_background.cor')\n for line in open(corpus_path):\n # assume there's one document per line, tokens separated by whitespace\n yield utils.simple_preprocess(line)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we wanted to do any custom preprocessing, e.g. decode a non-standard\nencoding, lowercase, remove numbers, extract named entities... All of this can\nbe done inside the ``MyCorpus`` iterator and ``word2vec`` doesn\u2019t need to\nknow. All that is required is that the input yields one sentence (list of\nutf8 words) after another.\n\nLet's go ahead and train a model on our corpus. Don't worry about the\ntraining parameters much for now, we'll revisit them later.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import gensim.models\n\nsentences = MyCorpus()\nmodel = gensim.models.Word2Vec(sentences=sentences)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once we have our model, we can use it in the same way as in the demo above.\n\nThe main part of the model is ``model.wv``\\ , where \"wv\" stands for \"word vectors\".\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"vec_king = model.wv['king']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Retrieving the vocabulary works the same way:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"for index, word in enumerate(wv.index_to_key):\n if index == 10:\n break\n print(f\"word #{index}/{len(wv.index_to_key)} is {word}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Storing and loading models\n--------------------------\n\nYou'll notice that training non-trivial models can take time. Once you've\ntrained your model and it works as expected, you can save it to disk. That\nway, you don't have to spend time training it all over again later.\n\nYou can store/load models using the standard gensim methods:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import tempfile\n\nwith tempfile.NamedTemporaryFile(prefix='gensim-model-', delete=False) as tmp:\n temporary_filepath = tmp.name\n model.save(temporary_filepath)\n #\n # The model is now safely stored in the filepath.\n # You can copy it to other machines, share it with others, etc.\n #\n # To load a saved model:\n #\n new_model = gensim.models.Word2Vec.load(temporary_filepath)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"which uses pickle internally, optionally ``mmap``\\ \u2018ing the model\u2019s internal\nlarge NumPy matrices into virtual memory directly from disk files, for\ninter-process memory sharing.\n\nIn addition, you can load models created by the original C tool, both using\nits text and binary formats::\n\n model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)\n # using gzipped/bz2 input works too, no need to unzip\n model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Training Parameters\n-------------------\n\n``Word2Vec`` accepts several parameters that affect both training speed and quality.\n\nmin_count\n---------\n\n``min_count`` is for pruning the internal dictionary. Words that appear only\nonce or twice in a billion-word corpus are probably uninteresting typos and\ngarbage. In addition, there\u2019s not enough data to make any meaningful training\non those words, so it\u2019s best to ignore them:\n\ndefault value of min_count=5\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model = gensim.models.Word2Vec(sentences, min_count=10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"vector_size\n-----------\n\n``vector_size`` is the number of dimensions (N) of the N-dimensional space that\ngensim Word2Vec maps the words onto.\n\nBigger size values require more training data, but can lead to better (more\naccurate) models. Reasonable values are in the tens to hundreds.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# The default value of vector_size is 100.\nmodel = gensim.models.Word2Vec(sentences, vector_size=200)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"workers\n-------\n\n``workers`` , the last of the major parameters (full list `here\n<http://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec>`_)\nis for training parallelization, to speed up training:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# default value of workers=3 (tutorial says 1...)\nmodel = gensim.models.Word2Vec(sentences, workers=4)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The ``workers`` parameter only has an effect if you have `Cython\n<http://cython.org/>`_ installed. Without Cython, you\u2019ll only be able to use\none core because of the `GIL\n<https://wiki.python.org/moin/GlobalInterpreterLock>`_ (and ``word2vec``\ntraining will be `miserably slow\n<http://rare-technologies.com/word2vec-in-python-part-two-optimizing/>`_\\ ).\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Memory\n------\n\nAt its core, ``word2vec`` model parameters are stored as matrices (NumPy\narrays). Each array is **#vocabulary** (controlled by the ``min_count`` parameter)\ntimes **vector size** (the ``vector_size`` parameter) of floats (single precision aka 4 bytes).\n\nThree such matrices are held in RAM (work is underway to reduce that number\nto two, or even one). So if your input contains 100,000 unique words, and you\nasked for layer ``vector_size=200``\\ , the model will require approx.\n``100,000*200*4*3 bytes = ~229MB``.\n\nThere\u2019s a little extra memory needed for storing the vocabulary tree (100,000 words would\ntake a few megabytes), but unless your words are extremely loooong strings, memory\nfootprint will be dominated by the three matrices above.\n\n\n"
]
},
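  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the arithmetic above concrete, the small sketch below (added for illustration) simply evaluates the formula from the text for 100,000 unique words and ``vector_size=200``:\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "vocabulary_size = 100_000\nvector_size = 200\nbytes_per_float = 4   # single precision\nmatrices_in_ram = 3   # as described above\n\napprox_bytes = vocabulary_size * vector_size * bytes_per_float * matrices_in_ram\nprint(f\"approx. {approx_bytes / 1024 ** 2:.0f} MB\")  # roughly 229 MB"
   ]
  },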
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Evaluating\n----------\n\n``Word2Vec`` training is an unsupervised task, there\u2019s no good way to\nobjectively evaluate the result. Evaluation depends on your end application.\n\nGoogle has released their testing set of about 20,000 syntactic and semantic\ntest examples, following the \u201cA is to B as C is to D\u201d task. It is provided in\nthe 'datasets' folder.\n\nFor example a syntactic analogy of comparative type is ``bad:worse;good:?``.\nThere are total of 9 types of syntactic comparisons in the dataset like\nplural nouns and nouns of opposite meaning.\n\nThe semantic questions contain five types of semantic analogies, such as\ncapital cities (``Paris:France;Tokyo:?``) or family members\n(``brother:sister;dad:?``).\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Gensim supports the same evaluation set, in exactly the same format:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model.wv.evaluate_word_analogies(datapath('questions-words.txt'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This ``evaluate_word_analogies`` method takes an `optional parameter\n<http://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.evaluate_word_analogies>`_\n``restrict_vocab`` which limits which test examples are to be considered.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.\n\nBy default it uses an academic dataset WS-353 but one can create a dataset\nspecific to your business based on it. It contains word pairs together with\nhuman-assigned similarity judgments. It measures the relatedness or\nco-occurrence of two words. For example, 'coast' and 'shore' are very similar\nas they appear in the same context. At the same time 'clothes' and 'closet'\nare less similar because they are related but not interchangeable.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model.wv.evaluate_word_pairs(datapath('wordsim353.tsv'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
".. Important::\n Good performance on Google's or WS-353 test set doesn\u2019t mean word2vec will\n work well in your application, or vice versa. It\u2019s always best to evaluate\n directly on your intended task. For an example of how to use word2vec in a\n classifier pipeline, see this `tutorial\n <https://github.com/RaRe-Technologies/movie-plots-by-genre>`_.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Online training / Resuming training\n-----------------------------------\n\nAdvanced users can load a model and continue training it with more sentences\nand `new vocabulary words <online_w2v_tutorial.ipynb>`_:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model = gensim.models.Word2Vec.load(temporary_filepath)\nmore_sentences = [\n ['Advanced', 'users', 'can', 'load', 'a', 'model',\n 'and', 'continue', 'training', 'it', 'with', 'more', 'sentences'],\n]\nmodel.build_vocab(more_sentences, update=True)\nmodel.train(more_sentences, total_examples=model.corpus_count, epochs=model.epochs)\n\n# cleaning up temporary file\nimport os\nos.remove(temporary_filepath)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You may need to tweak the ``total_words`` parameter to ``train()``,\ndepending on what learning rate decay you want to simulate.\n\nNote that it\u2019s not possible to resume training with models generated by the C\ntool, ``KeyedVectors.load_word2vec_format()``. You can still use them for\nquerying/similarity, but information vital for training (the vocab tree) is\nmissing there.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Training Loss Computation\n-------------------------\n\nThe parameter ``compute_loss`` can be used to toggle computation of loss\nwhile training the Word2Vec model. The computed loss is stored in the model\nattribute ``running_training_loss`` and can be retrieved using the function\n``get_latest_training_loss`` as follows :\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# instantiating and training the Word2Vec model\nmodel_with_loss = gensim.models.Word2Vec(\n sentences,\n min_count=1,\n compute_loss=True,\n hs=0,\n sg=1,\n seed=42,\n)\n\n# getting the training loss value\ntraining_loss = model_with_loss.get_latest_training_loss()\nprint(training_loss)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Benchmarks\n----------\n\nLet's run some benchmarks to see effect of the training loss computation code\non training time.\n\nWe'll use the following data for the benchmarks:\n\n#. Lee Background corpus: included in gensim's test data\n#. Text8 corpus. To demonstrate the effect of corpus size, we'll look at the\n first 1MB, 10MB, 50MB of the corpus, as well as the entire thing.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import io\nimport os\n\nimport gensim.models.word2vec\nimport gensim.downloader as api\nimport smart_open\n\n\ndef head(path, size):\n with smart_open.open(path) as fin:\n return io.StringIO(fin.read(size))\n\n\ndef generate_input_data():\n lee_path = datapath('lee_background.cor')\n ls = gensim.models.word2vec.LineSentence(lee_path)\n ls.name = '25kB'\n yield ls\n\n text8_path = api.load('text8').fn\n labels = ('1MB', '10MB', '50MB', '100MB')\n sizes = (1024 ** 2, 10 * 1024 ** 2, 50 * 1024 ** 2, 100 * 1024 ** 2)\n for l, s in zip(labels, sizes):\n ls = gensim.models.word2vec.LineSentence(head(text8_path, s))\n ls.name = l\n yield ls\n\n\ninput_data = list(generate_input_data())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now compare the training time taken for different combinations of input\ndata and model training parameters like ``hs`` and ``sg``.\n\nFor each combination, we repeat the test several times to obtain the mean and\nstandard deviation of the test duration.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Temporarily reduce logging verbosity\nlogging.root.level = logging.ERROR\n\nimport time\nimport numpy as np\nimport pandas as pd\n\ntrain_time_values = []\nseed_val = 42\nsg_values = [0, 1]\nhs_values = [0, 1]\n\nfast = True\nif fast:\n input_data_subset = input_data[:3]\nelse:\n input_data_subset = input_data\n\n\nfor data in input_data_subset:\n for sg_val in sg_values:\n for hs_val in hs_values:\n for loss_flag in [True, False]:\n time_taken_list = []\n for i in range(3):\n start_time = time.time()\n w2v_model = gensim.models.Word2Vec(\n data,\n compute_loss=loss_flag,\n sg=sg_val,\n hs=hs_val,\n seed=seed_val,\n )\n time_taken_list.append(time.time() - start_time)\n\n time_taken_list = np.array(time_taken_list)\n time_mean = np.mean(time_taken_list)\n time_std = np.std(time_taken_list)\n\n model_result = {\n 'train_data': data.name,\n 'compute_loss': loss_flag,\n 'sg': sg_val,\n 'hs': hs_val,\n 'train_time_mean': time_mean,\n 'train_time_std': time_std,\n }\n print(\"Word2vec model #%i: %s\" % (len(train_time_values), model_result))\n train_time_values.append(model_result)\n\ntrain_times_table = pd.DataFrame(train_time_values)\ntrain_times_table = train_times_table.sort_values(\n by=['train_data', 'sg', 'hs', 'compute_loss'],\n ascending=[False, False, True, False],\n)\nprint(train_times_table)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Visualising Word Embeddings\n---------------------------\n\nThe word embeddings made by the model can be visualised by reducing\ndimensionality of the words to 2 dimensions using tSNE.\n\nVisualisations can be used to notice semantic and syntactic trends in the data.\n\nExample:\n\n* Semantic: words like cat, dog, cow, etc. have a tendency to lie close by\n* Syntactic: words like run, running or cut, cutting lie close together.\n\nVector relations like vKing - vMan = vQueen - vWoman can also be noticed.\n\n.. Important::\n The model used for the visualisation is trained on a small corpus. Thus\n some of the relations might not be so clear.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from sklearn.decomposition import IncrementalPCA # inital reduction\nfrom sklearn.manifold import TSNE # final reduction\nimport numpy as np # array handling\n\n\ndef reduce_dimensions(model):\n num_dimensions = 2 # final num dimensions (2D, 3D, etc)\n\n # extract the words & their vectors, as numpy arrays\n vectors = np.asarray(model.wv.vectors)\n labels = np.asarray(model.wv.index_to_key) # fixed-width numpy strings\n\n # reduce using t-SNE\n tsne = TSNE(n_components=num_dimensions, random_state=0)\n vectors = tsne.fit_transform(vectors)\n\n x_vals = [v[0] for v in vectors]\n y_vals = [v[1] for v in vectors]\n return x_vals, y_vals, labels\n\n\nx_vals, y_vals, labels = reduce_dimensions(model)\n\ndef plot_with_plotly(x_vals, y_vals, labels, plot_in_notebook=True):\n from plotly.offline import init_notebook_mode, iplot, plot\n import plotly.graph_objs as go\n\n trace = go.Scatter(x=x_vals, y=y_vals, mode='text', text=labels)\n data = [trace]\n\n if plot_in_notebook:\n init_notebook_mode(connected=True)\n iplot(data, filename='word-embedding-plot')\n else:\n plot(data, filename='word-embedding-plot.html')\n\n\ndef plot_with_matplotlib(x_vals, y_vals, labels):\n import matplotlib.pyplot as plt\n import random\n\n random.seed(0)\n\n plt.figure(figsize=(12, 12))\n plt.scatter(x_vals, y_vals)\n\n #\n # Label randomly subsampled 25 data points\n #\n indices = list(range(len(labels)))\n selected_indices = random.sample(indices, 25)\n for i in selected_indices:\n plt.annotate(labels[i], (x_vals[i], y_vals[i]))\n\ntry:\n get_ipython()\nexcept Exception:\n plot_function = plot_with_matplotlib\nelse:\n plot_function = plot_with_plotly\n\nplot_function(x_vals, y_vals, labels)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Conclusion\n----------\n\nIn this tutorial we learned how to train word2vec models on your custom data\nand also how to evaluate it. Hope that you too will find this popular tool\nuseful in your Machine Learning tasks!\n\nLinks\n-----\n\n- API docs: :py:mod:`gensim.models.word2vec`\n- `Original C toolkit and word2vec papers by Google <https://code.google.com/archive/p/word2vec/>`_.\n\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
| 31,352 | Python | .py | 513 | 54.319688 | 4,878 | 0.628664 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
7,016 | run_wmd.py | piskvorky_gensim/docs/src/auto_examples/tutorials/run_wmd.py |
r"""
Word Mover's Distance
=====================
Demonstrates using Gensim's implementation of the Word Mover's Distance (WMD).
"""
###############################################################################
# Word Mover's Distance (WMD) is a promising new tool in machine learning that
# allows us to submit a query and return the most relevant documents. This
# tutorial introduces WMD and shows how you can compute the WMD distance
# between two documents using ``wmdistance``.
#
# WMD Basics
# ----------
#
# WMD enables us to assess the "distance" between two documents in a meaningful
# way even when they have no words in common. It uses `word2vec
# <https://rare-technologies.com/word2vec-tutorial/>`_ [3] vector embeddings of
# words. It has been shown to outperform many of the state-of-the-art methods in
# k-nearest neighbors classification [2].
#
# WMD is illustrated below for two very similar sentences (illustration taken
# from `Vlad Niculae's blog
# <http://vene.ro/blog/word-movers-distance-in-python.html>`_). The sentences
# have no words in common, but by matching the relevant words, WMD is able to
# accurately measure the (dis)similarity between the two sentences. The method
# also uses the bag-of-words representation of the documents (simply put, the
# word's frequencies in the documents), noted as $d$ in the figure below. The
# intuition behind the method is that we find the minimum "traveling distance"
# between documents, in other words the most efficient way to "move" the
# distribution of document 1 to the distribution of document 2.
#
# Image from https://vene.ro/images/wmd-obama.png
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img = mpimg.imread('wmd-obama.png')
imgplot = plt.imshow(img)
plt.axis('off')
plt.show()
###############################################################################
# This method was introduced in the article "From Word Embeddings To Document
# Distances" by Matt Kusner et al. (\ `link to PDF
# <http://jmlr.org/proceedings/papers/v37/kusnerb15.pdf>`_\ ). It is inspired
# by the "Earth Mover's Distance", and employs a solver of the "transportation
# problem".
#
# In this tutorial, we will learn how to use Gensim's WMD functionality, which
# consists of the ``wmdistance`` method for distance computation, and the
# ``WmdSimilarity`` class for corpus based similarity queries.
#
# .. Important::
# If you use Gensim's WMD functionality, please consider citing [1] and [2].
#
# Computing the Word Mover's Distance
# -----------------------------------
#
# To use WMD, you need some existing word embeddings.
# You could train your own Word2Vec model, but that is beyond the scope of this tutorial
# (check out :ref:`sphx_glr_auto_examples_tutorials_run_word2vec.py` if you're interested).
# For this tutorial, we'll be using an existing Word2Vec model.
#
# Let's take some sentences to compute the distance between.
#
# Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentence_obama = 'Obama speaks to the media in Illinois'
sentence_president = 'The president greets the press in Chicago'
###############################################################################
# These sentences have very similar content, and as such the WMD should be low.
# Before we compute the WMD, we want to remove stopwords ("the", "to", etc.),
# as these do not contribute a lot to the information in the sentences.
#
# Import and download stopwords from NLTK.
from nltk.corpus import stopwords
from nltk import download
download('stopwords') # Download stopwords list.
stop_words = stopwords.words('english')
def preprocess(sentence):
return [w for w in sentence.lower().split() if w not in stop_words]
sentence_obama = preprocess(sentence_obama)
sentence_president = preprocess(sentence_president)
###############################################################################
# Now, as mentioned earlier, we will be using some downloaded pre-trained
# embeddings. We load these into a Gensim Word2Vec model class.
#
# .. Important::
# The embeddings we have chosen here require a lot of memory.
#
import gensim.downloader as api
model = api.load('word2vec-google-news-300')
###############################################################################
# So let's compute WMD using the ``wmdistance`` method.
#
distance = model.wmdistance(sentence_obama, sentence_president)
print('distance = %.4f' % distance)
###############################################################################
# Let's try the same thing with two completely unrelated sentences. Notice that the distance is larger.
#
sentence_orange = preprocess('Oranges are my favorite fruit')
distance = model.wmdistance(sentence_obama, sentence_orange)
print('distance = %.4f' % distance)
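###############################################################################
# The ``WmdSimilarity`` class mentioned in the introduction wraps ``wmdistance``
# for corpus-wide similarity queries. Below is a minimal sketch only: it assumes
# ``gensim.similarities.WmdSimilarity`` accepts a tokenized corpus, the word
# vectors and ``num_best`` -- check the API reference for your Gensim version.
#
from gensim.similarities import WmdSimilarity

wmd_corpus = [sentence_obama, sentence_president, sentence_orange]
wmd_index = WmdSimilarity(wmd_corpus, model, num_best=3)

query = preprocess('The president spoke to the journalists in Chicago')
# Each result is a (document index, similarity) pair; higher similarity means smaller WMD.
print(wmd_index[query])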
###############################################################################
# References
# ----------
#
# 1. Rémi Flamary et al. *POT: Python Optimal Transport*, 2021.
# 2. Matt Kusner et al. *From Word Embeddings To Document Distances*, 2015.
# 3. Tomáš Mikolov et al. *Efficient Estimation of Word Representations in Vector Space*, 2013.
#
| 5,165 | Python | .py | 109 | 46.183486 | 103 | 0.674871 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
7,017 | run_ensemblelda.py | piskvorky_gensim/docs/src/auto_examples/tutorials/run_ensemblelda.py |
r"""
Ensemble LDA
============
Introduces Gensim's EnsembleLda model
"""
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
###############################################################################
# This tutorial will explain how to use the EnsembleLDA model class.
#
# EnsembleLda is a method for finding and generating stable topics from the results of multiple topic models.
# It can be used to remove topics from your results that are noise and are not reproducible.
#
###############################################################################
# Corpus
# ------
# We will use the gensim downloader api to get a small corpus for training our ensemble.
#
# The preprocessing is similar to :ref:`sphx_glr_auto_examples_tutorials_run_word2vec.py`,
# so it won't be explained again in detail.
#
import gensim.downloader as api
from gensim.corpora import Dictionary
from nltk.stem.wordnet import WordNetLemmatizer
from nltk import download
download('wordnet')
lemmatizer = WordNetLemmatizer()
docs = api.load('text8')
dictionary = Dictionary()
for doc in docs:
dictionary.add_documents([[lemmatizer.lemmatize(token) for token in doc]])
dictionary.filter_extremes(no_below=20, no_above=0.5)
corpus = [dictionary.doc2bow(doc) for doc in docs]
###############################################################################
# Training
# --------
#
# Training the ensemble works very similarly to training a single model.
#
# You can use any model that is based on LdaModel, such as LdaMulticore, to train the ensemble.
# In experiments, LdaMulticore showed better results.
#
from gensim.models import LdaModel
topic_model_class = LdaModel
###############################################################################
# Any number of models can be used, but it should be a multiple of your workers so that the
# load can be distributed properly. In this example, 4 worker processes will train 8 models in total (2 each).
#
ensemble_workers = 4
num_models = 8
###############################################################################
# After training all the models, some distance computations are required which can take quite some
# time as well. You can speed this up by using workers for that as well.
#
distance_workers = 4
###############################################################################
# All other parameters that are unknown to EnsembleLda are forwarded to each LDA Model, such as
#
num_topics = 20
passes = 2
###############################################################################
# Now start the training
#
# Since each of the 8 models is trained with 20 topics, we expect 160 topics in total.
# The number of stable topics that are clustered from all those topics is smaller.
#
from gensim.models import EnsembleLda
ensemble = EnsembleLda(
corpus=corpus,
id2word=dictionary,
num_topics=num_topics,
passes=passes,
num_models=num_models,
topic_model_class=LdaModel,
ensemble_workers=ensemble_workers,
distance_workers=distance_workers
)
print(len(ensemble.ttda))
print(len(ensemble.get_topics()))
###############################################################################
# Tuning
# ------
#
# Unlike with LdaModel, the number of resulting topics varies greatly depending on the clustering parameters.
#
# You can provide those in the ``recluster()`` function or in the ``EnsembleLda`` constructor.
#
# Play around until you get as many topics as you desire, though forcing a higher count may reduce their quality.
# If your ensemble doesn't have enough topics to begin with, make sure it was trained with enough models.
#
# An epsilon that is smaller than the smallest distance doesn't make sense.
# Make sure to choose one that lies within the range of values in ``asymmetric_distance_matrix``.
#
import numpy as np
shape = ensemble.asymmetric_distance_matrix.shape
without_diagonal = ensemble.asymmetric_distance_matrix[~np.eye(shape[0], dtype=bool)].reshape(shape[0], -1)
print(without_diagonal.min(), without_diagonal.mean(), without_diagonal.max())
ensemble.recluster(eps=0.09, min_samples=2, min_cores=2)
print(len(ensemble.get_topics()))
###############################################################################
# Increasing the Size
# -------------------
#
# If you have some models lying around that were trained on a corpus based on the same dictionary,
# they are compatible and you can add them to the ensemble.
#
# By setting ``num_models`` of the ``EnsembleLda`` constructor to 0, you can also create an ensemble that is
# made entirely out of your existing topic models, using the same ``add_model`` method shown below
# (a minimal sketch of that variant follows the code).
#
# Afterwards, the number and quality of stable topics may change, depending on the added topics and parameters.
#
from gensim.models import LdaMulticore
model1 = LdaMulticore(
corpus=corpus,
id2word=dictionary,
num_topics=9,
passes=4,
)
model2 = LdaModel(
corpus=corpus,
id2word=dictionary,
num_topics=11,
passes=2,
)
# add_model supports various types of input, check out its docstring
ensemble.add_model(model1)
ensemble.add_model(model2)
ensemble.recluster()
print(len(ensemble.ttda))
print(len(ensemble.get_topics()))
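###############################################################################
# A minimal sketch of the ``num_models=0`` option described above: no new
# models are trained, so the ensemble only contains the topics of the models
# you add yourself. The variable names are illustrative and the clustering
# parameters are left at their defaults.
#
ensemble_from_existing = EnsembleLda(id2word=dictionary, num_models=0, topic_model_class=LdaModel)
ensemble_from_existing.add_model(model1)
ensemble_from_existing.add_model(model2)
ensemble_from_existing.recluster()  # with so few source topics, few (or no) stable topics may be found
print(len(ensemble_from_existing.ttda))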
piskvorky_gensim/docs/src/auto_examples/tutorials/run_annoy.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\nFast Similarity Queries with Annoy and Word2Vec\n===============================================\n\nIntroduces the Annoy library for similarity queries on top of vectors learned by Word2Vec.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"LOGS = False # Set to True if you want to see progress in logs.\nif LOGS:\n import logging\n logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `Annoy \"Approximate Nearest Neighbors Oh Yeah\"\n<https://github.com/spotify/annoy>`_ library enables similarity queries with\na Word2Vec model. The current implementation for finding k nearest neighbors\nin a vector space in Gensim has linear complexity via brute force in the\nnumber of indexed documents, although with extremely low constant factors.\nThe retrieved results are exact, which is an overkill in many applications:\napproximate results retrieved in sub-linear time may be enough. Annoy can\nfind approximate nearest neighbors much faster.\n\nOutline\n-------\n\n1. Download Text8 Corpus\n2. Train the Word2Vec model\n3. Construct AnnoyIndex with model & make a similarity query\n4. Compare to the traditional indexer\n5. Persist indices to disk\n6. Save memory by via memory-mapping indices saved to disk\n7. Evaluate relationship of ``num_trees`` to initialization time and accuracy\n8. Work with Google's word2vec C formats\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Download Text8 corpus\n------------------------\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import gensim.downloader as api\ntext8_path = api.load('text8', return_path=True)\nprint(\"Using corpus from\", text8_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"2. Train the Word2Vec model\n---------------------------\n\nFor more details, see `sphx_glr_auto_examples_tutorials_run_word2vec.py`.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim.models import Word2Vec, KeyedVectors\nfrom gensim.models.word2vec import Text8Corpus\n\n# Using params from Word2Vec_FastText_Comparison\nparams = {\n 'alpha': 0.05,\n 'vector_size': 100,\n 'window': 5,\n 'epochs': 5,\n 'min_count': 5,\n 'sample': 1e-4,\n 'sg': 1,\n 'hs': 0,\n 'negative': 5,\n}\nmodel = Word2Vec(Text8Corpus(text8_path), **params)\nwv = model.wv\nprint(\"Using trained model\", wv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"3. Construct AnnoyIndex with model & make a similarity query\n------------------------------------------------------------\n\nAn instance of ``AnnoyIndexer`` needs to be created in order to use Annoy in Gensim.\nThe ``AnnoyIndexer`` class is located in ``gensim.similarities.annoy``.\n\n``AnnoyIndexer()`` takes two parameters:\n\n* **model**: A ``Word2Vec`` or ``Doc2Vec`` model.\n* **num_trees**: A positive integer. ``num_trees`` effects the build\n time and the index size. **A larger value will give more accurate results,\n but larger indexes**. More information on what trees in Annoy do can be found\n `here <https://github.com/spotify/annoy#how-does-it-work>`__. The relationship\n between ``num_trees``\\ , build time, and accuracy will be investigated later\n in the tutorial.\n\nNow that we are ready to make a query, lets find the top 5 most similar words\nto \"science\" in the Text8 corpus. To make a similarity query we call\n``Word2Vec.most_similar`` like we would traditionally, but with an added\nparameter, ``indexer``.\n\nApart from Annoy, Gensim also supports the NMSLIB indexer. NMSLIB is a similar library to\nAnnoy \u2013 both support fast, approximate searches for similar vectors.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from gensim.similarities.annoy import AnnoyIndexer\n\n# 100 trees are being used in this example\nannoy_index = AnnoyIndexer(model, 100)\n# Derive the vector for the word \"science\" in our model\nvector = wv[\"science\"]\n# The instance of AnnoyIndexer we just created is passed\napproximate_neighbors = wv.most_similar([vector], topn=11, indexer=annoy_index)\n# Neatly print the approximate_neighbors and their corresponding cosine similarity values\nprint(\"Approximate Neighbors\")\nfor neighbor in approximate_neighbors:\n print(neighbor)\n\nnormal_neighbors = wv.most_similar([vector], topn=11)\nprint(\"\\nExact Neighbors\")\nfor neighbor in normal_neighbors:\n print(neighbor)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The closer the cosine similarity of a vector is to 1, the more similar that\nword is to our query, which was the vector for \"science\". There are some\ndifferences in the ranking of similar words and the set of words included\nwithin the 10 most similar words.\n\n"
]
},
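  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, we can count how many of the approximate neighbors also appear among the exact ones.\nThe cell below is a minimal sketch that reuses ``approximate_neighbors`` and ``normal_neighbors`` from the cells above.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Overlap between the approximate and the exact result sets.\n# Both lists contain (word, cosine similarity) tuples.\napprox_words = {word for word, _ in approximate_neighbors}\nexact_words = {word for word, _ in normal_neighbors}\noverlap = approx_words & exact_words\nprint('Overlap: %d of %d words' % (len(overlap), len(exact_words)))"
   ]
  },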
{
"cell_type": "markdown",
"metadata": {},
"source": [
"4. Compare to the traditional indexer\n-------------------------------------\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Set up the model and vector that we are using in the comparison\nannoy_index = AnnoyIndexer(model, 100)\n\n# Dry run to make sure both indexes are fully in RAM\nnormed_vectors = wv.get_normed_vectors()\nvector = normed_vectors[0]\nwv.most_similar([vector], topn=5, indexer=annoy_index)\nwv.most_similar([vector], topn=5)\n\nimport time\nimport numpy as np\n\ndef avg_query_time(annoy_index=None, queries=1000):\n \"\"\"Average query time of a most_similar method over 1000 random queries.\"\"\"\n total_time = 0\n for _ in range(queries):\n rand_vec = normed_vectors[np.random.randint(0, len(wv))]\n start_time = time.process_time()\n wv.most_similar([rand_vec], topn=5, indexer=annoy_index)\n total_time += time.process_time() - start_time\n return total_time / queries\n\nqueries = 1000\n\ngensim_time = avg_query_time(queries=queries)\nannoy_time = avg_query_time(annoy_index, queries=queries)\nprint(\"Gensim (s/query):\\t{0:.5f}\".format(gensim_time))\nprint(\"Annoy (s/query):\\t{0:.5f}\".format(annoy_time))\nspeed_improvement = gensim_time / annoy_time\nprint (\"\\nAnnoy is {0:.2f} times faster on average on this particular run\".format(speed_improvement))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**This speedup factor is by no means constant** and will vary greatly from\nrun to run and is particular to this data set, BLAS setup, Annoy\nparameters(as tree size increases speedup factor decreases), machine\nspecifications, among other factors.\n\n.. Important::\n Initialization time for the annoy indexer was not included in the times.\n The optimal knn algorithm for you to use will depend on how many queries\n you need to make and the size of the corpus. If you are making very few\n similarity queries, the time taken to initialize the annoy indexer will be\n longer than the time it would take the brute force method to retrieve\n results. If you are making many queries however, the time it takes to\n initialize the annoy indexer will be made up for by the incredibly fast\n retrieval times for queries once the indexer has been initialized\n\n.. Important::\n Gensim's 'most_similar' method is using numpy operations in the form of\n dot product whereas Annoy's method isnt. If 'numpy' on your machine is\n using one of the BLAS libraries like ATLAS or LAPACK, it'll run on\n multiple cores (only if your machine has multicore support ). Check `SciPy\n Cookbook\n <http://scipy-cookbook.readthedocs.io/items/ParallelProgramming.html>`_\n for more details.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"5. Persisting indices to disk\n-----------------------------\n\nYou can save and load your indexes from/to disk to prevent having to\nconstruct them each time. This will create two files on disk, *fname* and\n*fname.d*. Both files are needed to correctly restore all attributes. Before\nloading an index, you will have to create an empty AnnoyIndexer object.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fname = '/tmp/mymodel.index'\n\n# Persist index to disk\nannoy_index.save(fname)\n\n# Load index back\nimport os.path\nif os.path.exists(fname):\n annoy_index2 = AnnoyIndexer()\n annoy_index2.load(fname)\n annoy_index2.model = model\n\n# Results should be identical to above\nvector = wv[\"science\"]\napproximate_neighbors2 = wv.most_similar([vector], topn=11, indexer=annoy_index2)\nfor neighbor in approximate_neighbors2:\n print(neighbor)\n\nassert approximate_neighbors == approximate_neighbors2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Be sure to use the same model at load that was used originally, otherwise you\nwill get unexpected behaviors.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"6. Save memory via memory-mapping indexes saved to disk\n-------------------------------------------------------\n\nAnnoy library has a useful feature that indices can be memory-mapped from\ndisk. It saves memory when the same index is used by several processes.\n\nBelow are two snippets of code. First one has a separate index for each\nprocess. The second snipped shares the index between two processes via\nmemory-mapping. The second example uses less total RAM as it is shared.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Remove verbosity from code below (if logging active)\nif LOGS:\n logging.disable(logging.CRITICAL)\n\nfrom multiprocessing import Process\nimport os\nimport psutil"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Bad example: two processes load the Word2vec model from disk and create their\nown Annoy index from that model.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model.save('/tmp/mymodel.pkl')\n\ndef f(process_id):\n print('Process Id: {}'.format(os.getpid()))\n process = psutil.Process(os.getpid())\n new_model = Word2Vec.load('/tmp/mymodel.pkl')\n vector = new_model.wv[\"science\"]\n annoy_index = AnnoyIndexer(new_model, 100)\n approximate_neighbors = new_model.wv.most_similar([vector], topn=5, indexer=annoy_index)\n print('\\nMemory used by process {}: {}\\n---'.format(os.getpid(), process.memory_info()))\n\n# Create and run two parallel processes to share the same index file.\np1 = Process(target=f, args=('1',))\np1.start()\np1.join()\np2 = Process(target=f, args=('2',))\np2.start()\np2.join()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Good example: two processes load both the Word2vec model and index from disk\nand memory-map the index.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model.save('/tmp/mymodel.pkl')\n\ndef f(process_id):\n print('Process Id: {}'.format(os.getpid()))\n process = psutil.Process(os.getpid())\n new_model = Word2Vec.load('/tmp/mymodel.pkl')\n vector = new_model.wv[\"science\"]\n annoy_index = AnnoyIndexer()\n annoy_index.load('/tmp/mymodel.index')\n annoy_index.model = new_model\n approximate_neighbors = new_model.wv.most_similar([vector], topn=5, indexer=annoy_index)\n print('\\nMemory used by process {}: {}\\n---'.format(os.getpid(), process.memory_info()))\n\n# Creating and running two parallel process to share the same index file.\np1 = Process(target=f, args=('1',))\np1.start()\np1.join()\np2 = Process(target=f, args=('2',))\np2.start()\np2.join()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"7. Evaluate relationship of ``num_trees`` to initialization time and accuracy\n-----------------------------------------------------------------------------\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Build dataset of initialization times and accuracy measures:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"exact_results = [element[0] for element in wv.most_similar([normed_vectors[0]], topn=100)]\n\nx_values = []\ny_values_init = []\ny_values_accuracy = []\n\nfor x in range(1, 300, 10):\n x_values.append(x)\n start_time = time.time()\n annoy_index = AnnoyIndexer(model, x)\n y_values_init.append(time.time() - start_time)\n approximate_results = wv.most_similar([normed_vectors[0]], topn=100, indexer=annoy_index)\n top_words = [result[0] for result in approximate_results]\n y_values_accuracy.append(len(set(top_words).intersection(exact_results)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Plot results:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"plt.figure(1, figsize=(12, 6))\nplt.subplot(121)\nplt.plot(x_values, y_values_init)\nplt.title(\"num_trees vs initalization time\")\nplt.ylabel(\"Initialization time (s)\")\nplt.xlabel(\"num_trees\")\nplt.subplot(122)\nplt.plot(x_values, y_values_accuracy)\nplt.title(\"num_trees vs accuracy\")\nplt.ylabel(\"%% accuracy\")\nplt.xlabel(\"num_trees\")\nplt.tight_layout()\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the above, we can see that the initialization time of the annoy indexer\nincreases in a linear fashion with num_trees. Initialization time will vary\nfrom corpus to corpus. In the graph above we used the (tiny) Lee corpus.\n\nFurthermore, in this dataset, the accuracy seems logarithmically related to\nthe number of trees. We see an improvement in accuracy with more trees, but\nthe relationship is nonlinear.\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"7. Work with Google's word2vec files\n------------------------------------\n\nOur model can be exported to a word2vec C format. There is a binary and a\nplain text word2vec format. Both can be read with a variety of other\nsoftware, or imported back into Gensim as a ``KeyedVectors`` object.\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# To export our model as text\nwv.save_word2vec_format('/tmp/vectors.txt', binary=False)\n\nfrom smart_open import open\n# View the first 3 lines of the exported file\n# The first line has the total number of entries and the vector dimension count.\n# The next lines have a key (a string) followed by its vector.\nwith open('/tmp/vectors.txt', encoding='utf8') as myfile:\n for i in range(3):\n print(myfile.readline().strip())\n\n# To import a word2vec text model\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)\n\n# To export a model as binary\nwv.save_word2vec_format('/tmp/vectors.bin', binary=True)\n\n# To import a word2vec binary model\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)\n\n# To create and save Annoy Index from a loaded `KeyedVectors` object (with 100 trees)\nannoy_index = AnnoyIndexer(wv, 100)\nannoy_index.save('/tmp/mymodel.index')\n\n# Load and test the saved word vectors and saved Annoy index\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)\nannoy_index = AnnoyIndexer()\nannoy_index.load('/tmp/mymodel.index')\nannoy_index.model = wv\n\nvector = wv[\"cat\"]\napproximate_neighbors = wv.most_similar([vector], topn=11, indexer=annoy_index)\n# Neatly print the approximate_neighbors and their corresponding cosine similarity values\nprint(\"Approximate Neighbors\")\nfor neighbor in approximate_neighbors:\n print(neighbor)\n\nnormal_neighbors = wv.most_similar([vector], topn=11)\nprint(\"\\nExact Neighbors\")\nfor neighbor in normal_neighbors:\n print(neighbor)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Recap\n-----\n\nIn this notebook we used the Annoy module to build an indexed approximation\nof our word embeddings. To do so, we did the following steps:\n\n1. Download Text8 Corpus\n2. Train Word2Vec Model\n3. Construct AnnoyIndex with model & make a similarity query\n4. Persist indices to disk\n5. Save memory by via memory-mapping indices saved to disk\n6. Evaluate relationship of ``num_trees`` to initialization time and accuracy\n7. Work with Google's word2vec C formats\n\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
piskvorky_gensim/docs/src/models/logentropy_model.rst
:mod:`models.logentropy_model` -- LogEntropy model
======================================================
.. automodule:: gensim.models.logentropy_model
:synopsis: LogEntropy model
:members:
:inherited-members:
:undoc-members:
:show-inheritance:
piskvorky_gensim/docs/src/tools/to_python.py
"""Convert a Jupyter notebook to Python source in Sphinx Gallery format.
How to use:
$ pip install m2r
$ cat tutorial.ipynb | python to_python.py > tutorial.py
That will do the bulk of the conversion for you.
Stuff that you'll need to change yourself:
* Replace the placeholder with a unique RST label,
* Replace the placeholder with a decent tutorial title, and
* Little tweaks to make Sphinx happy.
YMMV ;)
"""
import json
import sys
import m2r
def write_docstring(fout):
fout.write('''r"""
Autogenerated docstring
=======================
Please replace me.
"""
''')
def process_markdown(source, fout):
def gen():
for markdown_line in source:
rst_lines = m2r.convert(markdown_line).split('\n')
skip_flag = True
for line in rst_lines:
if line == '' and skip_flag and False:
#
# Suppress empty lines at the start of each section, they
# are not needed.
#
continue
yield line
skip_flag = bool(line)
for line in gen():
fout.write('# %s\n' % line)
def output_cell(cell, fout):
if cell['cell_type'] == 'code':
for line in cell['source']:
fout.write(line.replace('%time ', ''))
elif cell['cell_type'] == 'markdown':
fout.write('#' * 79 + '\n')
process_markdown(cell['source'], fout)
fout.write('\n\n')
def main():
write_docstring(sys.stdout)
notebook = json.load(sys.stdin)
for cell in notebook['cells']:
output_cell(cell, sys.stdout)
if __name__ == '__main__':
main()
piskvorky_gensim/docs/src/tools/check_gallery.py
"""Check that the gallery output is up to date with the input.
We do this so we can know in advance if "make html" is going to rebuild the
gallery. That's helpful to know because rebuilding usually takes a long time,
so we want to avoid it under some environments (e.g. CI).
The script returns non-zero if there are any problems. At that stage, you may
fail the CI build immediately, as further building will likely take too long.
If you run the script interactively, it will give you tips about what you may
want to do, on standard output.
If you run this script with the --apply option set, it will automatically run
the suggested commands for you.
"""
import argparse
import os
import os.path
import re
import sys
import shlex
import subprocess
def get_friends(py_file):
for ext in ('.py', '.py.md5', '.rst', '.ipynb'):
friend = re.sub(r'\.py$', ext, py_file)
if os.path.isfile(friend):
yield friend
def is_under_version_control(path):
command = ['git', 'ls-files', '--error-unmatch', path]
popen = subprocess.Popen(
command,
cwd=os.path.dirname(path),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
popen.communicate()
popen.wait()
return popen.returncode == 0
def find_unbuilt_examples(gallery_subdir):
"""Returns True if there are any examples that have not been built yet."""
for root, dirs, files in os.walk(gallery_subdir):
in_files = [os.path.join(root, f) for f in files if f.endswith('.py')]
for in_file in in_files:
out_file = in_file.replace('/gallery/', '/auto_examples/')
friends = list(get_friends(out_file))
if any([not os.path.isfile(f) for f in friends]):
yield in_file
def diff(f1, f2):
"""Returns True if the files are different."""
with open(f1) as fin:
f1_contents = fin.read()
with open(f2) as fin:
f2_contents = fin.read()
return f1_contents != f2_contents
def find_py_files(subdir):
for root, dirs, files in os.walk(subdir):
for f in files:
if f.endswith('.py'):
yield os.path.join(root, f)
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
'--apply', action='store_true',
help='Apply any suggestions made by this script',
)
args = parser.parse_args()
curr_dir = os.path.dirname(os.path.abspath(__file__))
output_dir = os.path.abspath(os.path.join(curr_dir, '../auto_examples/'))
retval = 0
rebuild = False
suggestions = []
#
# Check for stale output.
#
for out_file in find_py_files(output_dir):
in_file = out_file.replace('/auto_examples/', '/gallery/')
if not os.path.isfile(in_file):
print('%s is stale, consider removing it and its friends.' % in_file)
for friend in get_friends(out_file):
suggestions.append('git rm -f %s' % friend)
retval = 1
continue
for friend in get_friends(out_file):
if not is_under_version_control(friend):
print('%s is not under version control, consider adding it.' % friend)
suggestions.append('git add %s' % friend)
if diff(in_file, out_file):
print('%s is stale.' % in_file)
rebuild = True
retval = 1
gallery_dir = output_dir.replace('/auto_examples', '/gallery')
unbuilt = list(find_unbuilt_examples(gallery_dir))
if unbuilt:
for u in unbuilt:
print('%s has not been built yet' % u)
rebuild = True
retval = 1
if rebuild:
src_dir = os.path.abspath(os.path.join(gallery_dir, '..'))
print('consider rebuilding the gallery:')
print('\tmake -C %s html' % src_dir)
if suggestions:
print('consider running the following commands (or rerun this script with --apply option):')
for command in suggestions:
print('\t' + command)
if args.apply:
for command in suggestions:
subprocess.check_call(shlex.split(command))
return retval
if __name__ == '__main__':
sys.exit(main())
piskvorky_gensim/docs/src/tools/wordcloud.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Generate wordcloud images for our core tutorials.'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\"\"\"Generate wordcloud images for our core tutorials.\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"scrolled": false
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAUUAAADnCAYAAACJ10QMAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOy9Z5Bm13nf+Ts3vjl0zj09Mz2DSRgMIhGJQEICJYqkRMmWJZmWvVqv7bJd6yqXt3bLta7dT+ty2ZZ3a9e2rOSVSEqitBQDGECCyHkGwOTQPZ1z95vDjefsh9vonp6e1JgBB6D7hw+Y9973nnvu7fv+7znP85znEUopttlmm222idBudwe22WabbT5ObIviNttss80lbIviNttss80lbIviNttss80lbIviNttss80lGNfaKYTYdk1v86Gwsja6bdBcrN/urgCgmTpCF4ROcGvbxSBupDG1GAKBVCGebOKENRRy9VsCS4sT11NowiBUAU5Yw5ONDW1ZWpyYnqIelDCESUxPowmdUPnUgxKh8te+qwuTuJ7B1GwUEjdsXHbO24/RlsXIpfBml5EN98YPFGC0ZjG7WqLPQYg3XyAs1W5Z35RS4mr7rimK23x0aMIgl+onYbeiaTquX6NUm8T1qwgEyXg7mUQvhmZSd1co1acJw+jBStgtxO08TbdINtmHqcdpuCsUqmNIFSKETj7VTzLWjlKScmOWWmPhp/eD0QRthzpJ92U49/WTP51zXofszjxmymLp/XlUcGvug60l6E7soTO+G1PYKBQaOiV/njOlFwmUBwhyVhcDyUOkzVYg+i1W/WXGaseo+surrQnyVi9D6buZqL1Hi91HxmxDFxaSkOOFH1ALCkAknoPJw7TFBtCFCQjcsMZU4xSLzYtIwltyfTdL5pFDZB49xMLvfYfm2akbP1BoxHb3kHv6Pox8Cj2TZOlPnqPyk/c+us5ewrYo3hYEvW1H6Mzvp+kWCUKHuJWj4a7g+lXSiS529TxOEDj4YZPWzC7S8Q6ml44RSJdUvJPetiO4XpUgdBCahtA0irVJBIqu/AG6Ww7RcAvoukV7bg8XZ1+i3Jj5SK/KTFn0PjpIqjcNQuBXIxHPDbfS+9gg0guZfnGc6mQZzdTovLeX1oMdBE2f6RfG8WsePQ8PMPbd88TycTru7WH5+Dxth7pIdqeozVZJ9aSZfW2KyniRtjs76bi7h9ANmPzhKG7ZoefhAeKtCexcjMLZZebfmibRmWL4V/ZjZWxaD7Qz++oU5dHCTV2rLgx6EncwmLqL+eYFlpwJpAowtRi6MNZGdbaWYDj9AJrQGaseoxlWiesZBlKH2JW+l5PF51fFMyKup+hPHmTJGWeueX61jSROuD5K6k3spyu+m9nmOUreHLqw6EvsZzj9APWgRNVfuqlru+1ISeP4RdyJBRIHdtD6y4/9VE+/LYo3gLAs9LY8wdIK+Dc//UrF2uhuOcjk4tssV0ZQSiKERih9hNDoa7+Xplvi4txLhDIgn+pnZ/djVBrzFKpjq220M184wVL5AkpFI59QeqTj3fS2HWF6+SjL5QsINHb3PEFf+z2UJz5CURTQdqiDdH+GiedGGXx6NxAJ5dDnhhl79gJ2LsbAZ3Zy/i9O0bq/g7aDHUw+P4Zf8/AqDnY+Tuv+dsa+ex4jadJyRxuVsSKZwSz1hRqd9/RQPL9MblcLoRvQ//gQM69MkO7PsuOZYS584xTdDw2weHSWwtlluh/sozpdoT5boXB2CSsTY/JHF3EKzZu+3Jiepjexj4I7zVj1GK5cNxMIBIrI8tQWGyBuZLhQeYP55gUUihJzSEJ2px+gPbZjTfwgGgXO+eeZqp8iUO5ai6y2F9NSdMd3s+JOMV57l1BFz6NUIftyj9IZG/rkiyIgGy6y4eK35/hpLzDZFsUbwNrRS+aZJyh89ZuESzc3wgBIJ7pQwFL5PKH0NuzThEEq1s7U0tt4QfRDqzlLeEGDZKyNYm0CADeoU2nMEYTOhuPjdg7bTBMzs7Rn9wKg6xaJWCua0JHqo5laabpGrCVBY6FOdaJM6cIKiY4kic4UoRNSGS8idI2Bz+zEiBkke9I0FusbRmx2fr09oa2bfLyqR3WiTKo3g1NoYqYsUr0Zcnta8evR/atOlQEIHZ/C2SWclSad9/ZgxA1CN8SruICguVRH+jc/fba1JAkjy3jt3Q2CCKwJIkDW6sQNG1T8pbXtCkXRmwUkWatzgyiGKqTkLVwiiNERH5CyWjG0GJow6IztWtse01OAILU2Rb+ykJgdOYz2HEGxit3bTlit444vYPW1YbRl8aaX8WZXQEb3SJg69kAnRkcOoWkEpRru+DyyvvG5E5aBPdiJ2Z5DBWHUhriC2U4IjLYsdn87WsJGNj28qUX8pTJ8TFbXbYvi9RACa6gf9FvpqI/aUld5cKN9V/q0/pBJ6a+NEC9FCIEQGplEF36YAyCUPsvlCzfZ52ujpCJ0AmLdaTRTI9YSB8ArOxhJEz1mYGdihE6ADCRB3SPZlUaPGYRugBCC0AvRbQNhaCQ6U+ix6PFUSkX3Sq3fB6/iUp0oc/arJ/BrHpqlo5saMlBXul2oUKEZ2gaxvRl0YaKUXBupXQ1DmCjkppdRKAPU6v5LkcpHXqNNQ5joQqfF7l21Ua4TSHfDNPtKxPcOkHvmfvz5AtZgJ0JA+fn3SBzYgdXXjje7wuIfPIu/UESLW6QfOUT2ySMIywQV6Vzt6DmK332TsBy9DIRtkn7oIPln7kfETGTDJazUkY2NL3w0QfyOAfKfewCrrx2kRJgG7uQChb96BWd09pp9/2mxNVHUNIz2FhJHDmAO9CIsE+V6eGNT1F59B9WM3h4iZhPbt4vYgb3o2TSyVsc5dYHmqfNr3zE6Wkn/3GM03nofPZ8lvn8PIhEjWCrQeOs9vIlpkOvSoGVSxA/uxR7egZZOofyAsFCi/to7+LML0Q9BgNHRRvJTRzB7OgHwJmdpvHOcYHEFlEJLJ0k/9QjB8gphqULiyEG0TIpgqUD99WP4M3MgFSIeI3F4H/bendh7d6HFY7R+5csoL7IVNd45Tv21o2v907MZ4ncfwN45gIjZKM/Dn5qjcfQEwWWjy4azjEAjnxqgWJ1AoVanXBKFpOEWyMS7WNLOo1RAzMphGUmabvGKQngpjlfB8SrMF09Rqk+DUgihoVT4kY0SIRLFwrllcsMtHPi7dxP6Ic5KE6fQZPn4PPu/chdCCObfnsGveiy+O8/AZxIc+p178CouUy+M0ZirUZutctc/uh+37EYCGipCN0AFisAJkL5EeiG1mQrF8ysc+Lt3I4OQ+denKZxbJmj6qFCCgsAJon8D5bEiLfva2Pdbh5l6fozyxeJNXW+oPBAahmZxrZGZJx0SRm6T+JmajUDgy41e2euNlXzpEiifucY55pqbX3SB9K/bij3YSe2dc1ReO0Xbrz1O7nMPsPQnz6HHbVq//Gns/g785TLJe/fS+qVHqbz0PtU3zqCkIr67h/wXHkZ5IYVvv4Zyf
WK7e2n54sM4o7OUvv8W0vWJ7+6l5UuPoLx1gTe7Wmj7G0+gwpDFP/w+QbGK2Z6l7Vcfp+VLjzD/H7+NrN28aeNmuXFRFAJ7z05yX/gsmAb+xAzhShEtm8bs74Yw+sEJyyT9xIMkH7oHf2aeYGEJPZ8j+4XPYvZ2Uvn+iyjXi4Rz7y7Mrg5UEODPLqIFAfEDw8T2DLH8n/40EjJAz2XI/cozWDsH8KdmCeaWEIkYZk8nwrLWngGzu5P8r/8SwtDxpuZACOJH9mPvGaL0l9/Dn5xFGAZmXxexA8PIpkOwsEywXMTePYg91E/xG8/ijU6AUsimgz89j9nbBVLijowja1EYhb+4vHZrtEya7JeexurvwR2dQJUraJk01q5B3LGpTaJYbUa2wR2dD5JPDeCHDrowWKlepFyfYWb5XXZ2P8runifwgjqZRBfF2iSVxvXfpNXGPAul03S1HCIZa0PKANNIUGsuMF88dcN/7g9DbabC6T9+H2EIVCBRUqGkYvqFcebenF4XqkDSXKoz8len0Sw92t70UaHi7FePo5s6clXMQjegPltBBpLyWDESOSGQXsjF75xDt/Xoe06IDELO/Nf3CN0QpRQjf3UG6UfPZWWyzKk/eg+hCYJbEJbjyjqNoEir3c+SM4ETVtf2aehrHuCiO0tHbIis1UUtKKKQCDRa7D4EgqI3t6XzVv0VfOkQNzJ4sokv16exujBv6MWnvIDmmQnc6SXciQWsnlacs1Po2QRBpY6eS2Hk06Tv34e/UqH042MES5F5wl8sYg10knrgDurvjeBORs4QLWFTfu4dnPPTAHjTSyTvHsbqXh3NahrxOwawelpZ+P3v0Th+EZTCm1rC7u8g++TdxPf0UT+29RmNmTIxYgZOwUHJm5+C37AoaskE6aceQiRiFL/617jnx9amMyIRXxtBmb1dJB+9j+bRk5Sf/Qmq6SAsi/RTD5F85D7cC+M4p6MLF7aFMHSKf/Zt/MlZ0DRSj95H9gtPYw31r4li4t47sfcMUfnBS9RffQflRsNyEbNRq44PYRqknvgUWjLByh/8Gf70PAhB7OAeWn7rl0nceyeV+XUDtNGaY/n3vo57ZgQAe88Qbb/z6yTuPog/PY9yXJrvnwFNw+zvRm/JUXv1nSvaFI22PNZAL41jJ6h878W1F4SWSiLdzfFZUoWMzb9CW3Y3yVg7ph6j6ZZwvAoApdoUF2aepyW9A0O3WSieYaUyih9Gb9GGu8J88RT+ZfbEqO2A6aWjNJwVsoleTCOO61epNOZv9E/94VkVt019CiSy6m3aHrohoXvZtNIJNsUShqv3Mww3bpdeiPQ2Hh8017+zoR2pCBqb+/ZhaYY1Zhpn2JE6wq70fay4U2veZ0NYzDbO4iuXgjdDxV+kP3kAXRg0wwpxPUNvYh9lf5GCO72l83qyEZ03eYThzKcourOEKsDSYiSNFmYap9dCd65G2HSRXgChRHo+QaWBClZH1UGIMDT0ZAyzp5XmmUlkbf05U56PMzJD9om7MFrSeHMrWN2t+IslgkvjCEOJN7WEtRprKAyN2GAnCEHiwA7MjtzaV+3BLrRkDKM1u6V78QHd93WT35Xn9NdO49dv/m9846KYSmDvHaL+0lt4Y1MbjKKqsT7ktYd3gB9umCorz6N54iyJ+w4T2z+8JoqEIe7IBP706ttSSvy5RcJqDT2XibYJQWz/bsJCmcY7x9cEEUA564Kjt+ax+nrwJmcjQQRQCn9uEX92MZrW2vba94PFAt74+gPpzy3hTsxgDfSgJeOEVxCzqyGrNcJKlfiBvfizizhnRlBNB1m7euByKD0WiqevuE8hqTRmrzoyrDUXqTUXr9q2UiErlVFWKqM3fA3bbA2pAmYb5whVQE98L612L0qBIqToza3Zi33pMFJ5i/7kQfqS+9EwkASUvHkmau9vCMe5UeYa5wFBV3wXrXb/Wn8aQRl5I7Go8hL7rGLNqbL2GUDT0CwT5fmoy/Yr10eYBsLQQRMIS48GJ5eN0qR7iUAJgYhZaLZJ/I5+7KGuDd91xucJqx8u0D/ZkcRKWbfMS33DoqinEgjDJFguro0Kr9hgSw7peYSlyobtQaGM8gOMtnUXowpCwmJ5w81UQQhhiNCjaZGIWWipJGGpgqxc3YisZ1JoyTixg3vo/J//0dp2oWtomTRhsbzByB5WqhCu/7FVEBCWKhi7d0RG5S0QrBQpf+s5Mk9/muwvfYb0Zx/FPTNC/a33CBaWPzZetVuFGU9jJtK3uxu3nGZpERXe+NTak02m66dZbI6haya2kaIzvx8zm6U/dj9zhRP4QZNcbggtkaOs6iwUTlFzlshlBsjnd9FtHcH16ywUTxKaUIk3aNH3kFE7mFx6Gyl9BjseZGLxDYTQ6G29i7nCCZb9KRLZLkI7hufXmVx6m6Zf2mSjvDLXfx6V7xNW6+iZJMLQUR8InCbQcylkw0E6HoSSsOZgduQR5kY50VKx9Q9SEZbrBMUaK3/+As2R2Y39UCCbW1j1cgn1xTqp7hSGbRA0bt40csOiqJRCCAHatb2wSkY2H3HZ94SmgeCyt45ChZfbQNQVPq66va5uz47aUopgao7me5ttZ7LRRDZdtGTkFUXXN3gn1/qo5NZFTCq8i1Ms/5evYe/eQfzOfcTv2k/s0F6Kf/rXeONbiOb/mJLpHSbR2gNA576H6Nj/4G3u0a3n7LP/mam3vrulv79CRiE5Elrzw9SDAjPzz6NUiAKyyV4Sdgtnpp4lm+yjJbODpfooOSGIWVlGZ19YswNmkwb1oMSFuefpyO2jM7ef+cIJLDOFEBpC6FhmEiE0LCOJFCFjS69RbS5uCs26WYJynebZaZL37sHqbsUZiWJc9UyC5N27cScW8OZWkK6POzZH6v59WAMdeLPLIBV6OkF8uG8tLEcFIc2zk2SeuAurr4P68YsbhTYZXzOFbZXC+QL9j/Qz/IVhVs6sbAi5qi/Uqc5Ur3H0Zm5YFGWtgXRcjPYWhG1tmMZeir+wTPzQHegtucgr/MGJ2vIIy1yzE94oyvWQ5Rp6JoWey0YjyysQlqrIegNZb1B78c2rN7gqikY+u/oGjDYLy0RvyxOWq5tHwlKthrpcJ5wjCHHPjuJeGMfeNUjrb3+Z+JH9nzhRTLb303PkMxu2te66i0z3rqsc8bPB4INfZOboD5DBh7NLWUaKUn1yQ0iNacRx/Go0vXUL9LQeWttXd5ZXBTEKnVBK4gV1QunjeBVaMxtDbsTqfx8cW6qlyCb7SMU7mSucuDlhvOw9IJse1TdOEb+jn9Yvf5ra0XOoICS+pw+7r52Vv3wZf7EEoaR5egJveomWX3wQM59GOh6x4V60hH1J+wpnZJb6O+fIPH4YLRXDm15CaBpGWxaha6z8fy+jmh7C0LH629Fsi9jObjTLxO7rIL5vENlw8BeLyOa6/uSGciQ7k3Qe6WTw8UHCS+zMF394kXPfOLelW3Hjolit0zx+hsTh/XijkzhnLkSGWU2gp5KExQrK93HPjKAevZ/EfXcSLK0QlqtoiTiJB44gNC1yXmwFpWi8d5rs
5z9D6tMPUHvxTWTTASHQkwmk4yJrdYJCEefMKMn77iRxzyGcs6OoMERoGnpLFum4hCultWb1lhyJe++k/vZxUIrYniGsvm6qP36V8FJboFKEpTL27kGMzjbC1Sm8CkIIoodfz2XQs2mClRJqdZt0nNUBx8dz6ix0A8NOoOkme5/577ASmbV9ZiJDumto0zF+s4aS0QM3/spfUp4d+an196OkY9+nGPzUL910O45fJhlrpdqYR6oQKX1cr0pLageWkSST6KLhrocCXR6nKoRGzIqC75OxVppucXWVk8A2UphGAstMASBX7c5Nr0R3y0ESdp5K4/qebBUEhHUnMh0phXI8ZNON7HGhJGw4kRNGKZzRWRZ//1myT99L9rP3IIQgWKmw/LXnqR27sGZ+cmeWWf7T58j9/P1knjyCbLo0z05SevZNMo/ftbbWPKzUWfnGS6TuXyJ17x6Sh3dBKAkqDepHz8NqpICeTdLxd34eYRpoMQsEJO/dQ3zfAGGlwcpfvYxzfn2gsXJmhaP/99HNFws0l7ce4iOuZZy8PEuO2d9D5pnHsXq7CMsVpOuixeMo12Plj78R2fw0jfjhfWR+/tOrN6KGlkqg2RbVF96g/toxCALMgR7afufXqT7/GrWfvL52DmtnPy2/+SUa75yg8uxPon4kYmR+7nESd+1HOg5hpRotvUsmKH3zhzinzkcxiKkk2S98Fnv34Op02UGLxRAxm+r3fkLj3VPo2Qz53/giRi6DbDrR9F0qjPYWvIkZyt/+McHCxmVS1o4+Wr7yKwAESwWUVDSPnaDx1vsA2Ht3kf3iZxGatupcEej5LGG1Rukbz+JPbS3s4pYjBJme3WiGtbYp07OboUe/DAisRGaTuaNZWsQpbXTmnH/uj2gWIieW79S2ZH/7ONN//y+w7xf/AY3CPK/9X//gQ48UTT1OZ/4AcTuP51dZKJ3BC+p0ZPeSSXTjhy5zheM4XpnW9E4Q4hJnmKAtO0xv62FqzUWkCpleOoofNmnL7qElvQPXr6ILg+nldzGNGN0td6IJnaZXZnblvU2ro65EJDQmYcOFUKLFrSjEqemC0NDiFsoP1uMLRRQlolmrgfSBjGyJ8jKHjhBoMTOyK6pVJ4uUiJgVtX2J/V4YOsI2V/0GChXKyKGzKopoAj2duPIFKIVsuNGgZNPFgW7pkXi7wTXHI9fKkrMlUQTQ0klie3dh9HRG00/HxZucxTkzshaKgqZh9nRg792Jns0gaw3c0XG8iRlYvRg9lyHxqSO4I+N4IxNr7estORL3HMKfnova/KAvlom1cwBrRx9aMgoBCleKNE9eQFbWbQbCjgLHzd4uhG0hG02C+SXckQlkrY6ez5L/jS9CEFD5/ovYw0NoqUTU1vGzmxxEH1yPNdi79l3luDinLkQB5kThSvbwDsyeTrREDBVKwmIZ5/SFLZsLbgUtQ3eS7t659lloBkOP/MpVnSNurcTc+z/ZsG1l9BgrI+9+pP38uHCrRPHmELRldpOKdzC+8OptOP8nGztn0/dgH7ldOYQQVKYqzLwxQ33+yh7tWyqKHwW2kUKqcC0O76PkUlFc+YM/v6Yn/XrYWoK81cO8c+VppC4M2u0dpIwWphunceStywf3AanOIQYf+sKGbbm+vSTb+6977OKZN1g88zp+s8rSubdued8+KXy8RLGT8YVXbsP5P7lYGYsDf+sAXXd3RU4VBem+NKWLJd79z+/iFDbbWj+yfIpC6BiatRpErDD1OKH0kSrE0G0MzYqSbgYNQCHQsIwEQuhI6eOFTXTNpCN7B55fo9SYwQ8bSBWiaxamHkMphRc2UCpEEwaaZqCLqNt+2LwlS9d0YaILY0N2E082EWiYmo2GTqC8tZgyQ1gYwiKmp8iaHcw7IwgEphZHQyNQPoFyCVVI2V+kxezB1OwPLYqaaRO7zOjed+/P0zZ8L7oVJ55r33RMs7S44cfdLM5z7vv/ZcN3vHoZv3GFkfE2twFFsTZBub61YO5toOvuLvI787z1b99a8zTHW+Pc+dt30v9IPxe+tbVVMjcliraRZLDtfiaW3yKUHjvaH2ChfA4vqNOVO4ChWWhCZ6F8lnJzlpbUIG3pXUglabgrLJTPkrTbaEvvJAhdbDPNUvUCjl+lt+UwMSONEBqF2gTL1VHyyX5aUkNI6RNIj8XKOZpe6fodvQ5t1gA5qxNdmPjSwdLinK++Ttpspc3qByHwZJP55giBCtiRvHNNmD/4f87sosXqidpQHrPNc7iyjieb100asAEhaN9zH7oVX9uU7hpix8O/vOl7H3jDqwsT1BbGNuweef5PaRYvWcXyMXb6bBMRSu+m08MKw8AeGEToGs7kJGoLixA+ILZjCD23vuLEm57CX16+xhG3l2RnkupclfJkeS1OMXACSqMl0r1bj6e9KVF0/CpB6JKwWwilTygDXL9GJtFNLtFHsT5Jws6TS/ZTbs7Smb2D6cK7VJrzfBB0WG7OUKxPUnOWWa6OAoqElSdh5Tk/92NsM8Ng230U6pHdUdcMJpbfXEurtVVk04kSOUi5IUayHpTQ0PCVS9psw9YT5MwuCt4cRX+O/vh+smYn9bCMJeKcrb5Ki9VLmx1lP+6M7QQUnnRIGy3E9fSmlFLXItHaS+/dkbOm956fw4wlr3vM5JvfwSkvUZo8Q2nyyqtjtvlvC2Hb5J54As2yWfqLP8P/MKK4axfx4T0YuRxGLsfKt/76Yy2KXs0j0Z7ATJhromjEDFI9KcoTVw7huxY3mTpMsVIboy29i4ZXxPVr+GEDQzNx/DKlxjSl+hRusOqR1S1cv7p27KVcOsHXNYsg9CI7Y9BA16y1+CzHryDlh/d6KseleezyFPmKQHloaIQqQCmJRrSqJVBRP0IVoAsLfbVmhiSqxaGQaOgINMr+AvWwxLI7RSO89h8j1TXE3p/7e2ufzUSGzKpzRMkQ77Jp7eLp15g/+fKGbeWZ84Tu7c8qss3PFpU3Xqf2/nsk9x8g99RTt7s712Xx/UX6HuzjgX/2AMtnl0FBfnceI25w+utbHyzcdD7FcnOW/tYjCKExXzqNVCENr0g61oUmdALprAazKqqNBdrSu6k051eDWaOsIZ5fJ2blSNltNP0STa+EJnRyiT5iZoa6u7xuO1TXzkP44dnYZqBcXNkgZbREtlAtxoo3jRs20DWTnNlFysiv2htdGmFpNceeQhKu1etI6XlMLUZcz+DKRmSr1E0OfvGfkunZveGczfISTmmRZmmR8z/4gw37Qq9J6N3aVQvbbHMlZL2OrNcJyqXNoTcfQ6ozVY79x2MMf36YriPRmurSWImR747cjpEiKCWZK53G1OPU3GiIXW0uoAuLTLwbqUKWq5F3dqZ4nI7MHlrTO2m4RZpeGaUky7WLdGT2RLFcNQfXrzJbOkFbaohQBqtiG9D0y0glUbc4L+AHozqBhiTKO+hLh0VnjFa7n4zZRtGbo+wvAoJ5Z5Sc2UkzrFLwZlAoZprn6bAHabF7ccIablgHoZE222iGFWJakpiWikRRgBGLgnALF49Tnb8IwMrouyxfuHI
Q6jafTBIHDxKWKyglsXt68QsruOPjWN09WD09BIUVnPFxlH9Jpb5sFrt/ACOTQYUh/soy7uQkyrssDlEIzM4u7N5eNMsiqFXxFxavukzRyOexBwbRk8koXd/yEu709OZ2byVC3NiySQF6OonRlgEhCIpVwmLtxo5VUJ2ucuz/ObY+5byJcdMtybwd2QLXkSqkUB+nUB/fsN0PG8wUN1fk8oI604WNMXE1Z5GaszF4uOEWaLg3Xw7gcqrBxljCGuvn2Bxuoyh4MxS8jfVOAuUy65zncqab1x6+L5x+NVpvu83PJJmHHkY5LgpFbGgnQWGF+nvvEd+zB6u3F1mtsfLsd2meOwuAPTBI9tFHsQd3RAmCNQ3pONSPv0/55ZeQzupsQQjiu4fJPfEEVncP0nVRYYg3O4MwNyc0sfr7yT/xFHZ/f7SoUNOQ9TrVd49SffNNZKOx6ZibRcQsMo8eQkHcHLwAACAASURBVPkh9XcvrGXq3oSukbxnD5lPH8Zsy0aiWKrReH+UyovvXzfxbKY/AxpUJiuXZPmBdHcaKSX1ua35H7bLEWyzzUeIMAyM9gzFHz1H9a03af/yr5F+4FOUX3qB0osv0v6rv0Z89y6ci6Po2SwtzzyDnkpT+M63cWdmELpG8tCdpO+7HxWGlF9+CeX7mB2d5J58Ej2dYekbf443P49m22QffQy7fwBvZv2lbXZ20vL0zyFMi8U/+zpBsYhmmaTvf4DcY4+jPI/KG2+sL764RRj5NC1f/jRaLCprMPfvv7FZ4DRB+pFDtP36k9Ea6A8SSChFfN8gVk8ry19/Hlm9ujD2PNCDZmicnT27lgxC0zW67+vGSBic/urW7Iq3svDINttsczlCEJRKNM+fwxkbIygWCQoFmhdG8OZm8ZcW0dMZtFiM+NAQ9sAglddepX7yBMHKMv7iIuWXX8adniZ552Gsru4oC35fH7EdQ1TefIPG2bMEKyt4s7OUXnxxbf09EC27Hd5DbGgnpReexxm5QLCyjDc3R+WN1wlrVZKHDqMnrh/tsFViu7oxWjPoqThBqXrFpXn2YCf5X3oIoy1KMBtW6viLRZQfoidsMo/fRfrhg9c8j27rGLGN4zshBLGWGHbGvspRV+eWjRSFgJ4Bk4P3xOjsNdB1qFcl4xc8jr/j4DTWJ/ltHTqH7ovRPxStxZ2d8jn6apPi8sabtusOi089nuCH36yiJNz7aIKuXgPPU1w46XLiqIPrKHbvs7jrU3Fe+kGd9k6DQ/fFsG3B7KTP2y83KK6svj00OHRvjAc+neBbX60wP7P+8CRTgqc+nyII4aXv16lV1o/Zc8hm/1026axOGCgKSyEjZ1zGL/h47nbs3zbXRvke0nEQpol0XaTTRLqrCZj9AKHpCMPA6uklrFbwlhY3ODhUEOBOjJPYuxcjn8ebn8Nsa4sWNszPbRjhyXoNf+WSUhm2jdXRiTAMkgcPERtaX/6pxWII28aw7E25EG8F9lD36gUomqcnNtRrgagCYOpT+7F6ooUJ7sVZyj86RlCokLxrmPRjh9ASMVIP7Kf6yslNo8zczhw99/fQdU8Xmq6hUKgw+j0acYPOw52Mfm/riZZvyZ0wLXj06SRf+I0sHd0GTlMhpSKZ1pge8zl7fAGnoRAChvfbfOWf5BnaY+E0JAiwY4LHnk7y+/+uwMTIusF5cLfF3/ofcizOBzzw6QSDuyyUgkxO462XG5x6z0FoMLTH4tf/+xz9QyY7hi2SaQ3TFNgxwd0PxfnD3y2yMBOgabD/iM1v/sM8b/yksUEU40mNz34xje8pjr7SpFaRaDr8wq+l+fzfzBCLazQbknhCw7IFJ485/OG/LzIzcTuWhG3ziUGp1UJaav2zlJHoaRpRrlBAaOiJBNL1Ni89VZKwXkfYdpQ9XtPQYjGU525w0ERflVEilFW7otANtEQcpRR2X9/GUSQQViqE9foV8prePB+UIgirjdU8ixs92VZvO4nDuxCmQVCoUvzr16i+cRpCiTs2j55LRqLZmcfe0UXz5MYFCl7NQwYSK21hxk1ahlvWarRIXzLx4gRTL289bd8tEcXhAzZ//1+0srIQ8h/+t2XOHXcJpSKT1Wnv1qmWo5uRyWt85Z/k6R8y+aPfLfDOq01QcOCeGH/3n+b5yj/O8+//12UqpfWbZ5qCX/6tLC98r8bv/ZsCTlOSSmskkhqNulrLedvSrnPkU3F+798UOP5OE10XPPOraX7172SZnwn46n8sIcOtjeq6+0w+8/k09arkX/3jBepViW4IevpN4klBYTmI3rT5LGGlinQ9hGminO3QmW0uZfNzt1p8csMW6boIYzXN/6UIgWbbUaq6IErrpXwfoRubkzkLgTDWf9ZKySjrjeuy/Fd/eeUgbKXWHTi3ED0frSbxlysb6rxEOzVie/uJDXaBVDjnp6i9fW4tm06wUqF5ZoLk4d1oqRhWT+smUWwsNjj/1+dRUmHEDEa/N0ror4fuhV74oWp837QoCgFPfC6FYQj+4g9KvPGTxtpLsVyQTI2tv8mG99scvCfGt79W4SfP1vBXIwFe/kGd4X0Wn/1imsMPxHn5B5d4iwQszAV8+2tVGnW51u7laJrglefqvPVSA6cZdeC7X6/y2NNJ7nogxvPfNpid3PqoTtPXZyfVskRKKCyFaxeffOAgsT27aRw/jTc2Tmz/Xuqvv7Pl82zz3zYqlHhzc6TuOoLR0gpjlxSG03Wsvj78UomgUkYFAUGhgDBNzJZWnPHx9eL1to3RkicsRstfpePgLy8h7MPo6Qzu5ORP7Zo0Oxqtylpz09RZT8ZI3j2MsAxk06X6+ulNI+RgqYxsuuiZBHr2yjZP6Uvmjs6hmzpu2b0l1fxuiaNl+IDN3JTP2HnvmmFFO/dauE3JxIi3JogfcOZ9l2RaY3Dn5nCCM+85eN71FX/yoo/rrHfAdSRj533aOg2yef0aR16ZpbmAN15osHOvxT/739v5zX+YY/d+a70ig6Zh9fbgjo5FadSkwuzs2PJ5ttkGGeJMTBBUq6SO3I3ZvprkYzX0Jr5rF+7EON78AkiJtzCPXyiQOnI3Rj6qeyR0g+SdhzEyl1TFC0OcixcJlpfJPPjQersAq2JrdnSslQ24payKgVJy04ILPZ8mvjfK5OSvVHDObZ7mSseLnDN6VETralQmKhRHirdEEOFWTJ9FZONbWQyp164tXKmMRhCwNuK7lGolxDAgkd6s0/Wq5Do14FEK3KbaIMpSQr0WEotrmPa1/+hCbH4uPE/xzT+pMDHq87kvp3n6i2me+nyaU8ccvvknZUYvhNG0QzfQkwb2YD+yub3sbpsPh7cwT/G5H5J77NN0/K3fJCiXopCeTBZvYYHSiy8g61GmJXd6msprr5F9+GE6f+tvE5TLaJYdOTVGR9CMdRFxJico/vCH5J58io7f+C3CagWkQkvEEZZF5ZVXomm1Ulg9PcSH96Alk9jdPQg7RvLwYYx8Hum6uBMTNEdHbmilS7g6ZdYTsU0mgdR9d6AloyxY9aPnCStXiCX8oC4T4nJbwwZa9raQHcwy9fLUWo
lb3dIZfHKQ8kSZlTNby2l686KooFGT2DFBLK7BNfJ81CoSw4icGpeTSuuEITTrV7C/3OALIJ4UaNr630vTIJHScB2J76kNiWLEZV3QdEEipeFdVoe4XpW88lydN19oMLjL5MlfTPHEL6To32nyb//lMjMnzpB6+H601WlL5eXX2WabD3BGR5D11cBoKXHGLkYv0jBESRWtZgnDyGEShtTffw9vbo7kwYORhzkIqZ84QePkCcLaeuo55ftUXn8Vf2mBxB370GIx3Kkp6sffx+ruxursWq85HobUTxzHW5gneeAgRlsbQmh4iwu4MzM4o6NrPxo9lcbq7Iw85Y5D4/QpUGCsZs0Jy2WEpm0sQHcVvPkV4gd3YHbkMXIp/NlInIx8mtSD+6NbUo1KEVw+vQbQ4nZkH5XymkWt2va10bqvlZnXL1lQIaKciomOxG0QRWD0rMeDTyYY3G0yNe5fdYnNyBkPKyYY3GViWmyYQt9xp02jJpke/3BLjoSAgd0WVkyshf/YMY2hYYvl+ZByMURJ8NxIZLMtOmK1eB9ArkWns9egtHIFUVfge4qRMx6TF4u4ruI3/0GOngGDsR9NU1xYRNg2stFcq9uyzTYAxR/8YO3fKggoPf/jDfvLL76w8QCl8BfmKS3Mc13CkOa5czTPbSzM5C8uUuf9TV/3FxcpLT5/zSab58/RPL+1Qk9Xwx2Zgc/cg55JkHroAN58ARTknrkfq6sFATROjOHNXDkDj5FPIWImKgiRjavrgh7TCRoBMrgkjEkqAifATt+GOEWl4MXv1/j0zyf55b+dpVKSjJ71UFJhxQT9QyYXz3nUq4rRMy6njrk88tkkI6c9jr3eREo4cMTm0aeTXDzn8f5bH84LppTikc8kOPpqg5PvOGi64LNfTNHdZ/Ktr1VYmA2QEhbnAkorAU/+QooLJ13KxZB0VucLv5EhHt84fOzbYTKwy2T0rEe1HEbhQFmd9i6dRl3iumDv2YV74eJVqxtus81PG6OtjdShQ5idXSDAX1ig9u670UqWeJzEvn3Edu1GaBrOxDiNU6cIq1H2qsSBg+jpFEGxSGL/AfR4jMbZs9SOHYvazuVJ3XM3ZmcnyvWoH3+f5sjIFadzzTOTBItFjI48mU8fJjYUBZ5bvW0I2yQoVKm9dfaKy/+EqWN25tFiFmG1QVC6eoJmv+aTOJjASlkETlSbxYgZpLpSVy1HcM37t+UjrsCpd13+6D8U+YVfy/Av/o8OKqUQKaOA6KX5kH/9Py1Sr4ZUSpL/+n8W+Tv/NM/v/PMWauUoTjGV1pibDvjj/1Bc9+xuEddRjJ33+K1/mEcIiMU10lmNo683+N43KrirHulT7zr86Ns1Hns6yb/+w27KxRDTEiwvBBx9tYFurBsvWjt0fvW3s3T0GNSrCt+TJNM6mg7f+XqVkTM+8Yf34F7YeoDoNh9/hKajWTah0+TS6Y/QDYRuID+GWYuMfJ7Wz/8SyvNonD2LCgKEYUSVLU2TzIMPEd+1i9rJEyjfJ3HgAFZHJ8Uf/gDpOBj5PKk778Sdm8W5OIoQGmEtEkwtlSL/zDOowKdx8hRGLkfuqc+gVp05l+Mvlig++xatX34MLRUnfsfA2j7peFRePUn92IUrCqrRlsMa6EQIgaw5UZzjVVg+vczA4wMc+sohpl+ZRklF592dZAezXPz+5n5d9x5u+Ygr4DYV3/pahfffanLXA3E6ew0EUQjL2RMuxdUpqVJw7qTLv/lflrj7oThDw1H5wplxnzdfarA8v1EQp8c8vvknFcYveFzPsWSYgh99u0a5IDnyQAw7rjE15vHajxsbhLawFPJHv1vkzHsuO++w0ARMjfu89WKDfYdjtHboa46gC6ddvvqfSuw5YJNt1dEEVEqS0+85nDzq4LiCTNMhtm9PVPBKRbFmYeHms4Fvc/uxMq1kdx1i+fgrSH89WWss30myZ4jl4x+/WiqxnTvREnGWv/sd/MWNCVX0XI7kwYOUX3+N2ttvAxBWq7R87hcwWlvX1kuLWIzyyy8TXBbTGN+1C7O1hcWvfpWgECVNsXp6SN555xVFUfkBlRfeA6XIPHYnZk8rQtMIilXqb5+j+M1XkI0rv1is/nbsHZ0oBf5C4apTbIDSxRIn/9+T7PnSHu76+3chhKA2X+PsN86ydHLpqsddjVu2tkeGMHbeZ+z89WMBVxZDnvvm9euVnD/lcf7UjRlJP1gc8O7rTd59/doe4GpZ8qNv1eBbG7cvXTbUbtQUb7/c5O2Xr9KeBihF4vChyAiuFMHiMvW3jt1Qn7f5eKPbMWKtPQhto+dUs2ySPbs+lqJotrURVqoEK5t/N5ppIiyLsLT+0g7KZZTnoWcysCqKYbmErG+edhrZHHo2S/7nfn5tZYzd24u3uBiNRq9gT5e1JuXn3qF5dhKzMx+JYrmGe3EOWb/KSFsTKM+n9uZZkJLG8Yso59rmqcX3F6lMVkh2JkFAc6VJY+nDZf75mcmSI9Zyc/8UkYraG+9sWFVwpQdjm08WQtPRYwmMRBrNMDFTWTTzg5rZgkTHwG2q+Hd9pOdHq2IsC3VZeJgKJUpKhLle/1szTdD1DTkVr1hTGVCBj6zXccfG1tZuOyMXCEqla3qjlRfgjs7ijs7e4EVEa6Wd8zOAQl7BM305mqEhdEHohvgNn+ZKEzNuEngBKtha/OLPjCjeLpTnY/Z1o8VihNUa3uwNeA23+VhjJrO0HnqIZM8u7GwrfU/+DZSMhEIIgQpDlo+/fJ1Wbg/u+DipO+8kefgwzXPnUFJGCWirVaTTxJufJ75nD/7yEioIiN9xB8pz8a8wsrwcZ3KSxIGDhI0GzuQEKIWWSES5GG9xhm7lBVcM07kSekxn8PFBhr8wTKYvw+SLk5z44xPs+MwOlk4usXRia1PonylR/KnnqxGC+KF9mJ3tUSiOYaClkzinbk1Iwza3B69aYOm9F3GLS+SG76Jw9u31UhBKEdQrNFfmbm8nr4I7M03t/fdJHjxEfHgPhCFBrUr1jTfwl5aigO+HHqLlmc+hlIwKx7/62pr3+Vp4s7PUjh0lefgwiYMHAYXyA2rHjhKWt572/1bRtr+Nnc/sZOrlKRJtCYy4ESXAyNp0HOr4b08UZQhHX2vyz397lpEzV7c7CE0nme2hVtx61oyrogmsgT4qz72AbDQxO9tJ3HN4WxR/BggaVWqzo2iWTWX89E/N06wZAt2MzDGhJ5GhQugCw7psm7Y6ZdQEQkDgyShtVujjvPcmwdQoyoyhAkVYrxEUi6AU7uQEhXIJI98CmiAsl/GLxbUF/o1TJ3EuXlwP/L4UKam99x7uzAx6Kg0CZKNx2yv95XbmqM3VGPnWCH2P9tFxZwcqVHgVDytrXb+By/jEiyJEjpuVxWs7VxLpTjoHH7i1oqgAP8BoySPjMfRsBuVsvaTkNh9PvEqBwqk3kMFPJwbVShjseaqHjr1ZQl9x/sczzJ8usvPhTvrvbgNNMPPuMhdfXSA/kGLXY93oRlT/+/xPZlk6X2bnw5303tWKYelMHVtm9KU5ZLBx7WtQLEYieQWCY
hGusg+AMMSfn8fn42Mmkr5EN3Q0a922r1kasXwMp7T1l9mWRFE346RyfcQS0QJ0t1mmWpwk9JsIzSCR6SSZiQI0m5UF6uU5pPSxYhkSmU48p0Yi04muWzjNItWVcWQYGayF0EjmekmkOxCaSeA3qBUmcZuRp8yMZci0DGJYCXynRqUwRuA1EJpOpnWIwG9iWknseA63WVrtl4Nhxsm07aSlax+pXB9dOx8CoFGeo1IYB6XQDZt0yyB2IodS4DVLVAsThMF1BE5KGsdPkThyKDI0+z71t9+99jHbfHJQEs20SHQPYcRTCG3dlRc061QnztzS0+UGknQdzPP6752lvuwiNEE8bzP0UCdv/tF5hCa49zd2s3C2jG5o5HoSHPuzUZYuVKK8pGmTg58fZPTleeJZi92PdTP1zhJO5ePpFLpVLJ9eZuCxAfb/zf2YCZNUT4rdn9tNdkeW8d8f33J7NyyKuhmjd/hxUvk+vEYRKSWxZCvN+hJh4JLv3EvnjgcI3DpKSVq7D7I0/S4rsyeIpdrp2/Mkge/iOWU03aQ9eS+zIy9RmItqMLd0H6Rzx/34Xp3Aa6JpGr5TxW2WMKwk/Xufwopn8JoVrM4sqVwvsxdfAaXoGnoQXTdxGgWEZtCeupvl6fdZnHgHoemYdgozlkboJpYdVdFzDRuBQAEdA/eSad+J2yih6SZayyD1yvz1RRHwJqYIlpajZX71xkdbGe1jQvve+2nfe//a57njL1Acv7yW9icf3U7QdvgxUv3DSN/bEGTsFBZuuSiaMQPpSerL0XOnpCKWtvAbAV49IPQkuqmtTaWri811wVMQS5tousBvBnh1n6WRMn7z1iePvVESd+1Ci9v4CyXcyQW4ilf7ahitGYyOHMoL8KaWNiffXaV0scTpr59m7y/vJbcrh6ZrSE8y8t0RChe2XujuhkUx3bKDXMdups89T2VlNdeb0Ah9B9NO0TX0IJWVMRbG30QpSVvfEbqGHqRamACikV5p8R0WJt5BqZD+vZ+hpWsfhbmTWPEsnTvup7w0wuLkO8gwQNN0wjASmLbeQ8SSrYyd+DaeUyaR7mLo0OeplWepLI+iGxa+W2fmwkuEgUNb72Ha+u6ivDRCs7bM4uRRNN0i1w7T56K1n0rJqOi9bpFu3UGzssD8+Jur5zbw3evHUSIEsf17cc6ch0YTLRHH3rUn+vwJRjNt7GQOzTC543N/HzOZ3bDfTuWx0/m1z23D93L0v/5L6ku30DTxMcBMZUl0DrB07Cc0Fqc2iOIH3ujLae+18FxJrRQSXhYKYlqCvuEYe+9OoZuCsVMNxk41aK5ml3LKHmiCrgN5aktN/GZIbamJ0AQtgyl0U8Ot+XiNADNuoGS0vPUD6isu9RWX6nyTykIDFMjw9tVtbv3VxzG7W6m+fJyVP38BuUVRTBzaSf5LjyBrTZb++Ac456ev+t2FdxcojhaxUtGCEL/u4xSdtfIEW+HGRTE/gFsvUlq8gAw3joYMI4YVz1ItTq6JSbUwTkf/3cQSLShAhj7l5Yv4buTlcuorZDuGgcjeZ1gJVmZPrB1/6e1L5wdXv9dBPN2OaSbQDYt4qpXKykWUDGlUF3Ab0VuhXp6lY/A+DCsBKJQMUCpEKYmUG938MvQozJ2mvf8IvcNxqoUJqivjXDdXGURFgfbvwTkVladE14nt3f3JEkWh0brrLnRzfeF8pmc3gw99CQRourlWYe1SSpNnCAOP1p2HiWXb0IytG7Q/7ghNR3oOzaUZ/Oo17GyrpLI6f+9f9aNpgj/73VlGj68HD+uG4OFfzPNr/2MPuXYTATiNkB99fYXv/P4C5ZWA4lSNkRfn2P1YN0opRl6cY+FMiZPfnmDPk72gwYWfzFFfcdEtnZXxKoG7/pwGbsi7f3GRvU/1oJsaM/8/e+8dZcd5nnn+vko3h76dI7qRAQIESJAEE0iRoixKsiwrB3styWFszZ4z9nh31vLs2bW9O3vO2p4Zj+SwM9ZIxyPZli2JsrJEiRJFggRIgCByRgOdw+2bY+XaP6q70Rcd0A2CECnyOYeh61Z9t+6tum+94Xmf90SOy89N3TSdwbVCioZQEhGkkHZDeo2e4yBHQ8jxMFpX87JGsXVHK/HeOJmzGcqjZRzzlXnHqzaKQpJmvasl3nDuAy98knoe/vwJX4rGdSzcBU9XD2+ebC2EjEAsMlhX31tGC8Zpat86v62cG6FWnoHZENhbYMSuPj1XdyEyY8fQKxmSbRtp6d5Fsm0zo2d/OJ/PXB4euC5KSwo7m0dtaX5VZl3cTLRsvptEz5b5vyVJpueed6IGl1Y21ktZxg5/b9H2qVP7kbUQ933qM6/auf6sYetVbL1GuL0Pq1JY1jucQ++WEB39AWRZcM2UAPq3h3jnJ9tItank0halrE33hiCPfrCZ9JjBT76SxbE9Rg7NMHKokUKSGSyTGTzXsK04XqU4vrjrZOZCkZkLPzt6zM2EWzfwbAc5FkJORJfdTwkp9L+1n4FfGCBzJkP6eJqZUzOY5RtLZa3aKNbLaRIt6wnHOqiV5jhaAs9zcWwDx6wTjLRQmvWywtFWhCRj1guowfiKa+u1HK5rEU/1k504NWswha/W67nUylMISWH0/I9nvVS/4mZbOpIkI0kywXATshrEtU0C4SR4Ho59tfLkOjZCUhBCnl131mjjG/xqcYJaeYpgpJkNu95HJNl9faPoetTPDxJ//K2+QEC9Tu01UGiRZBUhK8Q719P/4AcaXou2ryOUbFQH91wX22hsiZo+fYDpM89h6zUKI0vPzY11bri5J/5ag+ehhKK07XkrifU7sfUac/eMWcoxc/Tpht07+gJEkwrjl3RGzl+99wIhib1vT9K1PsjIhTr/8Gfj5NM2+97TxC/+eju7H4rz8tNFspM/3wWRtcIzbXBchCL73uYymH55mvJ4mdTGFF17u9jxqzswSgYThyYYe36MWnpt7X6rNorFmUFSnbfRu/UxSrkhPNdBCInc5GmMeoH06BGau3aiBWO4rk2ieYDsxCmMevG6RrFemSE3dZb2/r2EYu1YRhlZCVDOj1LKDJIZO0443kn3xoeolaf9Yoiskps8jVkv4nkusaZ1dG98GNvWSbZtppC+gFG7atRqpSnaeu+ke/MjWEaZWmmKcn4ELRCjre8uwMMyawRCSRzHQK+uIkHreehnL2COjPm9n6bps/tvIZRghHBzV8O2nj1vp2XTHiRFQ7smHwhQSY9cJSMD9fwU55/8QsM+tlHDMW7tZ3ktwijMYJZ9HcCFKRXPWRzVJFoUAkGJiSs6Rv3qvt0bAux6MI7nevzoHzOceK6M58FPvuqy75dSrNsaItmqvqpGsSUl0ZSUuHj5+l0isgSbNqhEIgLXhTPnLYyfxShfSZpV35YQ8vLjRBzToTxapjxWZuLQBJH2CH1v6WPzezYTSoU4/vnF2pIrYdVG0dSLXDnxTZratxKMtYLnUStNYVt1PNdhZuQIejVHonU9ihRkauhFijOXcB0Ts15syBcC1IpTV1uDPJeJS/upl9NEm3oJRlsw60XMWU9Nr2YZ
PvVdmjq3E4w047o21cI4Zt0PEzzHJp8+j1EvEIq2kBk7Rn7qbIOnWMmPMn7pGeLNAyhqC3rVb2uyrTq18jTRpj7CgSiWUWPk7A+plVbJw3Ic3NL1uwFuCoRE5+0PowTC85uibevoveedyx5SHL9Acawxxzn03BPoxbWrh7zRYFUKTB74tv+HEH4KaYX0iBaQEBIUM1eNm6zAhtsj9G4OMjlk8PJPi/NZplrJYWrEYOPtEYLhmzIuaVm86+1h3vfuCO/91enrduSpmuDd7wjz0H1BdmzT+IX3T3Jx8Nb39KttTUjhgP8bM67/wAgmgzRvbaZlewupLSlqMzUKl9euWLUmnqKpl5gePrTka65rU5y5SHHm4qLX9GqGiUvPNmwrZS9Tyl6VG/Jcm9zkaXKTp5dc36gXmLp8YPEHUEMgBJZRYXroxRXO3iM/dZb8VCONwnWsFd/3Zw0lGGHDI7+CrGogJDp2PNhgFJfD5We/Qj0/TXlykNLEpVtwpj+fkANhYuu2EukcwKoWyZ46iBZP4Rh1zGJjJ4fjeHguKOpVAxeKyuzaF0dRJU7sL1MpLMire2DqHooqkORbLmeyLHTd47/8TZGTZ0z+5A+arn/AqwC1p5Xo3VuQwgHcqo5dWN7xCLeH6b6vm867Ogm3hCmNlhh+epjM6Qzl8bU7LD8XHS0+Xjs31VqghmMEF+T4JFlh6zt/Gy2SnP870to7r8Tj2hb1/HTDGlOnn2Pqg0FDnwAAIABJREFU5DMN26qZ8QYNwIUIhlK+B29WkCSFZNN61ECUYv4Ker3Az6CL/DUJSQvRuvthIl3rsesVgk1t5M8fIdTchRpLMn3oyYb9ixkb03Dp6A+gqALb8ujfFmLH3hilnM3RZ4qY+lU3TUh+vnHOmM5BUaApKWFZEAkL8kWXcEigKoJcwcEw/KgyFhXEohJCQKXqUSq7816gLENTQiIYEhiGh6qIhss693ooJHAcKBRdavWrO1g21OseziooLUpLgsidm5AiwYbtcsx/eAfWddD0rntxl+EZLoQQArkpSvi2ftTOFhACO1fGuLS8wk777nb69vUxcWiCycOT1DN1jLJxw7fx694ouq5DOT86Hw6/3rD+4Q8z0FAMEUiq1kCDMSp5iqN+P3W9kObiU/+jYQ3Psa9bGV2Irr77MOp5xkcOkGrdQm//wziORTy5jqFLP8Qy1y7h/vMILZYk1NbD1Avfx65X6H7ovQBY1SKxvi2L9h8+V6c4Y7FhZ5hHPtBMeszk3b/ZRiAsceS7eUbO1xtEptWAREuXSr3iYC0Y4dvXrfCf/58UR0+aPHRfkB8/U6enW2Fdr8Lnvljmuz+ssXmDyic+GmPTRgUhBCOjNv/wtQqHXzbwPLj/ngCf+vU4sajE+KSNIgvm0nKSBI/sC/I/fThKU0IGPJ49YPC5L5YoltZuSdS2JE3vvg+17RqvUvYf5IH1XQT6O1a/oBAg+cVU1zCpvHTeJ38vg7Hnxhh7fgy7Zt8U+tHr3yg6JuMXnr7+jq9RSLIKcuNMW6OcZ3QBDaaWnVjkCb4SBINJ8tmLyLJGS9sOpieOUiqOsG7Do2ha9E2jOAsh+zL+VrVIo9ux9A9v+HydM4cqPPLBZn710904lkc4LpOZMDn4vTzFbGNerrVTI9miMn5Zp165+lATEqSaZC4OWhimxwd+KcJ/+I8F7r4zwEP3BzlyzOB3PhknHhP82WeK6LrHxz4Y5fc+Fed3P51DluFffSLO5JTDn32lSKpJ4vd+J04o5D9oN65X+d3fTvD0c3V+8FSdtlaZf//7ScYmbP7p62u/9lamSHn/SUJb+9D62lCS0QZeopAESGufu+5UdSoHTlF88jArSe9b1ZtboHrdG8VbCaFISIqEo7+ypLPr2Fx+5stsfefvAHDpJ39PLXs1PLCNGoXhVy/H6eHzTaPxHhQ1RD57weeQeiCkN2+JObimjpAkwp396JlJEAJZCxHp3oheSC/a36y7fPfv0kQSMnc8nEANSKRHTb71uWlOHSwvGkWy7e4ongcj5+sUM433VK3u8fyLOqWyxv13B3n+BZ2mpMTeuwK0NMvs2a3xn/66yIFDfookEKzyZ3+SYmCdghDQ3Snzt39X4thJEyHgrt0BHtkXAmDvngDhsODbP6gxMeUwMmYzeNni4QeCfOUb1TVLI9rpArlvPo/y7AmU5jhaTyuhzT1E925DBDWcfAVrOoe3mu4az8Otm5iTWfQLY9TPDeMUbu1D+s1fwGohCQY+fAdtD6znyB9+G6v4CqSkPI/J4z8ld9mnChiVwpIUj1cLtUqajq67EEKmUprAMMqEQimEJK8pDP95h1nOUxg8QfP2e0H4M1u6970Hx6gzefC7Sx4zOWTw+T8epbVrmlDU9xLzaQvLXOzpTA7rfPWzk5w7UqGUu6bTyvWo1/124XLFzxW6rq8wH9AEqirIF64amULBxTA8Pxc5y1muzM5Q9zzIFRys2bbDVJNEf6/CX/5pM9ask5VMSBx62UBVxQ3RbzzDwprKYU3lqJ8bofzcSbR17QT6O6iduuy3+VVW95vxPBfPcnye4mqHvt9EvGkUV4lAU5jE1g6C7bEGtZQbhec66MWfjQ7d9MRRuvvux/NcpiaO4DomkqRQLo5hWW+GznPwHJv8+ZfQMxNEujcga0GsSoHyyHmsyjJUDw8qBYdKYWUpO4CD31uZLrJcwK4bHvW6S3urjJh9rbVFJhwSzGQcZNkvqiRifk5PCGhvVdBU/76dyThcGbb5v/6sQDpz9SFYrXqYSxjvNcNxcas61niGQH8HnmXjlOu4let/J68FvGkUV4lQR4xQ92Ii9OsRej3H5QvfB5hv26xW0+h6Hst8k7C9ELIaACHQMxPMMRwCyTaUcIx6+mcjgJEvuDz/osF7fzFMJutQq3t87ANRhkZsRsZshIChUZuPvD+CYXqkmiTuvyeAOpu6PnjY4L3vivDQ/UGe/Ik/e723W2Z03GFy2kEICIcEibiEogpSSZlwyEHXvetO1VwIc+L1Wfx8VYyiEg2gRDQ8x8XM1VasCEkBBS3ul/LNko5rLBFGCpBDGnJQQZqtaLmWg61buGvI78khFTmoIGRfsXhORcQ1HRzdwrMX5DwESJqCHJCRNIXEtg7CXQmcukWwNYqkNX51rmljFvVlE8JCFshhDVlTELLAcz1cy8GpWbjWCiGrADUWRA6pWEUdR7f87yOoIodUJFnymy0s/zOsJt8pySquY7PQ/3Ada3UiGG8gqNEknfe9CzWSwLWNhkjOKKRfNaNo25DO+Arbet0jk/XnqFerHtm8Q6ns8jefL/Hh90X4rY/HkGXB2fMWf/3fK2SyLp4Hn/1vJX7r12L8b7+bYGjY5qln6mzZ6LfKDY3Y/If/VOBjH4jyv/8vSQS+Ef3iP/nNFQ/eG+C3fi1OW6uMpgr+j3+XJJd3+R//VObHz6w+bWSMTOPZzmwu8fVD8xLeCjG7EGLNn0QoEn2/vJNNv3EfZr7GkT/4FtXRZcIEIeh8bDNb//U+XMvmxP/9Q/InG/lIclAheVsnHW/ZRPK
2DgLNETzXoz5ZJHt0nPT+QUoX0isaXiWiEdvUStsDAzTd1kWwLYocUnFNByNXpTqSJ33gCpM/ujpGQGsK0f34Npp2dhFZlyLYFkVS/Qqaa9iLUh25o2Oc/s9PY8wslhzTkiGa7+qlfd9GYhtb0OJBHN2iMpwnc3iEmQNXqI7ll7xv5JDK5t++n+7Ht3P2M88w9fQFEts6aN+3ntTuHgLNEVzbQZ8qM/XsIEP/fP3xqj39+0hPHsc0SvPbVC1Cc+t2cpnzDdtXQqxzw7wgxMH/73cpTw6u6rjXGnrveRfbfvFT1HJTHPirT81P6gs2d9Fx7+Okj/yYeuYanpznLcoDB0ISWujGUiu1kjsvNSYEaJqf25Mln7domKDIfhF3TrJTUfD5hwJs28NekIITAvxBfQLX8XBcnyFjLNBI0FRfvQfAdTxMyz9elv33v/aTmKa3JklEpdWn6ugXxqgcPoe3iq6UWwXP85a9UDfdU/Rsl+L5NE7dItSZILmjc1mjqERUmu/oIZAKkz85QWUoe83rGp2PbWHgo3cSaothlXX0bBVJloiubyGxvYPmO3u4+PmDZF9a+qmtJoL0/tJOet61nVB7HDwPq2piV02UsEpsoJnY+hY822XyqfPzhkkOKATbYkiqTH2iiOe4RNelcEybwpmpRq8SqFzOLOnxBdtj9H9wN92Pb0MOa5i5GvV0GTmoktrdTeqOHlJ3dHPpCy9SurC4ookASZVRQiqBlgit9w+w8ZP3Eu5OYBV1XMtGjQYJbAtTGb2+vBVAsmk9uZkLDcZPklRSrVuolMZXbRR/3mHXiui5aZKb7iTU2tNgBO16hdKVRobA7ofj3PXWG0uxfOtz04xe8L0wz2O+2OG4MKfUZzs0aOrZtm8Ml4LnzRnPq69fG0OYFmAtPt5xfOL2K4U9U2DmC99/xevcaqxoFNWgTCAqU82atG2MEogoTJwtNWi4LYX6RIn8yQnaH95I6/3rGf/B2SW9IC0ZpnlPH57nkT44hF2/+iQRskTrvf2s/5W70BJBxp88y+RPLmIV6yAJoutS9L1nJ/Etbaz/2F3o6QrVkUajIGkyXb+wlYEP3YES0Siem2b8B2epjRdwDQcpIBNsjZLY2s70/sGGc9QzVS7/40tIqoyQBF1v28rGT+zFrppc+NsDmMXGpLFr2NjlxtBCiWj0vWcnPe+6Dc92ufz3L5E9MopdM5E1mdimVnrfvYOWu/pwTYdzf/Usenp5cdumnZ10vGUjRq7K0FePUhst4DkuSjRAdKCZ8lJGdQGEkH1SuBBIsoK0gH6jBWIocsBXEHoTgC+2G27tQdICKIEQ3gJpO6OYXWQU+7eF2Pee1LLridl/zRHzPc/DdfxK87P/kps3im/iZ4sVjWLLQIR1u5Oc/WmavR/uo1awCMVVzj+7spiAka+ROzpGy919xDe1ElvfTHlwcdI1taubYFsUI1sl9/Jog/cVaImw7v27CLZFmXzqPBf+9iBm/moRoHxxBn2mwo7/9VGSt3XQ8fBGLn/5SMMa4a4E6963CyUWIHtklHN/+SzVWUMyByEJpp65hHuNMKVnuxgzs5VYSWDOVhM9x6U+VcLMX7+SFtvQQs+7dyBpMpf+4SWGv34Cp3Y1fimeT1MbLbD99x+h5e4+Ot+6haGvHl3khc5/X7t7yB4Z4cx/eQYjU234HNmXRlZMIciyRnvXnSRTG4jFu9m49d04zlwboECWNWrV9LwI8JsASdFw9Brpl3+CnptukJtbavj7xBWDl59eXstQUQXhuEI0IZNoVlBUiZefLnLqYJnRC6+PyuwbASsaRUkWBKIKOx7r4ML+GRCCcFJd6RAfrkf+xCSV4TyxgWZa7u2nfDnb4IkJRaJt33qEEGRfHqM22RiyxTa0kNjWgWvYjH//LGahsSrqOR75Y+OUh3LzYXqwNUp9wTqt9/YTaIlg5msMP3GcynBukcfquR5O7ebnOoQkaL1/AC0epHwly9QzlxoMIviGN3tklPzxcboe30bzXb1M7x+kNrZ0usFzPYa+egx9erHhutaoXwvHMZmePEa1MoUWiJHLnMfQ537AvmxauTSOZa5iDMMbBHPyaqntezHy6QaFHKtaJH/+pYb9938jx/5vrCw5J8nQ1hPg9gdjvP1XWgmEJS4crVLI3HoVmtcEhPiZcBFXwopGsZY3CUQUIk0Sp340RfeOBLa5ugpldSxP6eIM8U2tJLd3ojWFMXNXDVukJ0lsoBm7ZlI4OYFVWhA6CIhvbkVIgvp0GT1bXTL89lwPfaqM57oEmiOo8WCDUYxtakWSJSpXclRHly5kvGqQBPGNrQBUBjPY5eWHYBXPTdP52BYiPUkCqfCyRlGfWZwiWAscW6eYH6KQG2Rm6gT12uuTMnHL4HkYpSxClpG0QMNL7iqGmi0F14GpYYPpUQPH8fi1P+zh4fel+NpfTlEr/5wT52UJrbMZbV07WlczciyMUBVwXVzdxM5XsKZy6BfHcIo/O77sikaxOKXzwpeHwYNy1sQ8lF21UXdNh5mDV+h4eAPxDS0ktrQzc/CK/6KA1vsGUBMhypczFE5PXUNlEYTaYgCE2mPs/qPHl/WEgq1RhCyhhFXkQOPHCTZHQBIY2Wqj0b0FEEIQbJ2dHJiv4SxFNZqFnq6A66ElQyjh5RWGzWx12dB6LRgfOYhtvRmuXQ+2XiV/7vCSr81VqG8UngsnniuTn7bY9WCcZ76eY/jczb0mAoEsVCR8tXnLM7jqGQhulZcgNIXghi4Sb7uL4MZupGgQKRRAKH6O2wNwXTzTxq2b2LkS5RfPUjl4Gmu6wJr7Dl8hrltoSbT7HMJEh//fwoSOXl6dq587Nk5tokRiaxtNt3eSOzqKo9sEmsKkdncjKRLF01N+WLsQwqfi+P8vUGPBZfsmXctBny5jZGu4CwyGkCXELIXGsZzV9V3eTAg/ReCfo7tiiOBaDh4gKTJCXl5s1LUcVqJQrRaWWSUQjBMMpZCVAK5rY+pF6vV8QzHhjQ4t3kzPIx+c/1sICUkLIISgPHKBiee+8YrWN2ou2SmTTbsjRBJrF0xYCTIqzWoXHdoAEakJ06tzqrofw6sSlCLE5VaK9gyG9+p6ZFI0RPyR3TS9616UVNznB18zxEoAyDIiJCOFAshNUQLr2onevZXsV35K7fjgLTWMKxrFUEJl0/0tAKghhURHkJe/MUY5s7rQwalbzLw4RHJ7B6ndPYx99zS1sSLxLW1E+1NYZZ3M4eHF3o8H9myerzZeZOirRxtC7yXfy7Abwk5vlpQNPr1mjmN4q+C5Hs5sNX2OML4c5LCKEOAY1yFy3yREYh30DbzFn7ToOgghcByTqfEjZKZPLTtA7I0Gu1Ymc/yqOLKQJNRIgljfVvTs8vp+q4WQfLXumy0yKyHToQ2wIXTn7EA3gSIUJCGBB6oIsD64iwnzIiPG0vN3bgaEqhB/eBep9+5DSfiD0TzXxcmXsabzOKUanu34bIightwUnVfbFqpCaHMPLR97K+lKDf3i+Kt2ntdiRaNYSuu88OURAGRNYuP9LSjBtRmXzKFhBj
58J7ENLUR6m9BnqiS2tBNoiVA6nyZ/anKJozxqE0U8z0PIguKZKSpDax9qrU+X8FyfrK0lQquqGK8Oq7iBPaiNF4hvaiXUHkMOqtjVpaeLhTsTfpifrmJVbixXtRZ0dO3BdW0Gz30H29aRJIVUy2bau/ZQKo6iv5lrBMAxahQHTzRuFAJbrxFs7nxFawsBnf1BujcEqZZdLP3meUJhKU5XYDMlO8uIcYpmtYcOtX/+9apTwHTrxOXmxnNSVSQtgFOvgesiR6IEunqwS0XM9NSaCyJqZ4rkO/eiJCJ4ros1kaX402PoF8dxChV/Wp/j+kZRlZEiIZSmGNG9W4k+sAMpqBFY10bi8XswRr5zy8jfKxpFWZGINPs5LkkSBKMKjrW2i1efKJE7OkbbA+tpuXsd9ckSyZ2dCEkifeDK0pVfDwonJ3HqFuHuBIlt7YuoNKtB7sQE7W/ZRHxjK8mdXVTHCjeWk/M8PMf1jbQkUMIq5nXqHZ7rkT0ySvu+DSS2thPqimNkF4cqclileU8vkixRHsyip199SkwgmGRq/DClwvD8Nsc2iCf7UOTACke+CTwPx6ihxRbL9MdTCrGm6/RDCFAUQff6AO/69XbCcZlTB8sUMjfvBx+SY4SlOKfrz5KzJ4ldY/xcXExPR5NCDdsDHV3E9+wl/9zTuPUaqbe8jUB3L55lkn3qB+ijQ2s6j8gdm+aFZ/VLE8x84fsYQ1N41uJIxI+PChhXJqlfGEW/PEnq/Q+hpGKEtvQS3NBF/czwouNeDax4BeNtAfZ+qG/+73LG4OzTaxsEY5V1Zl4YovmuPpr39JI/PUlicxtWxWDmhaFlj6sM55jeP0jnY1vo/9AdmIU6uWPj8yEp4Ms5NYUJd8YxcrWGyjNA5vAwlStZkts7GPjwHX64fmi4wRALWaA1hVEiGtXhZSydB1ZJx66ayAGV1O4e6lMlvJWk2j2PzOERSpcyJLa00f/+3VzI1xtCfDUWoPud20lsa8cs6qSfv4yRefWrbtXyJIFQEklSZkNlQSCUxDQq2PabBOI5yIEw0d5NC7YIJEUhsX4nRmExV/fRDzbzzk+0Ldq+EEKAFpIIhiQQUJixeP7bOTITNzajeClISICH5a1tTTkWR8gyrmGidXSjNrcw852vE964hci229ZsFEPb+kCAq5vkv/U8+uD4imKxc3ArdUrPHkfraib5+D0oySiBgc7XhlHMj9f50V/5g6g818PS3TXLfXuOR+niDNXRPOHuBB0PbUCNBUkfuIy+RJ/wHKyyzug3TxHuSpDY1sHW/3kf+RMT1CaKOHULOaCgNoUJd8TQmsKMfOPEIqNo5uoMP3GMYOuDhHuSbPntB2i9x/dW7bqJFFAJNocJtsfQpyuc/ezy6tbV0QKVoRzJ7R2s+8BuAi0R//2EL4BhZCpkDo00GG19psLwE8fY9Bv30vrAAFJQIX98AiNbRQ6pJLe103x3H3JIZfx7Z8i8eGsuum3rdHTfRTTaiWlWkGWNWKIH13Vo794zX2yZmTpJvfazkTd7LUAJR2nZ8cCCLR6e52EWM4s4igDBiEyydRU8Xvxe45FzdX70TxkO/ajIzZTTtD0Lz3MJShGKzuJUT1BECEkxKm5jSkoIgWf7KjuhvgGMqUnM9BRqcwvhjYvHL1wPSpPPILGm85hjmVUZxDl4dRN9cAKnqiNHg8ix0PUPuklYmbytCPpuT9J/VxOyKjF6osjgwQx6ZW1XsDKco3Bqkmh/itZ7+3FMm/SBIeyV8meez98785lnWP+xPaTu6KHzrZvxXP/GFEIgJIHruL6Rqyx+KnqOy/T+y7iWw7r37Sa2oZnOx7b4a7ju7BoSru2Qfu7yEifR+BmGnziGGttLuCvOwEfuxDWvCkNkXhgif3yiwSh6tsv0M5dwdJuBj+6haWcXTTu7cC0HIfl5FLtuMvL1Ewx99egtow1JioZez6NoEWQ1iEDCMqu4nkModLVNTVbe2KG0Wcox+vRXGjd6Lo6hLzkT+9CPCqTHrpMT9sA0XIoZm8yESWbSxLrJM5WrboGCk2Zd8DYcz0IRKiDQRJCAHKI3sI2AFGZQP9pwnF0pI4cjxO64m2BvH/n9P8GzbeRQGM9eu9WeY0o4xSqesXZP2ClV/TxiJHhLOcYrt/n1R9j0YAtnfpzGNhw27/PnPZ9+avkhMkvBqVmkn7uMGg+ihDX0dJn8yYmVw098o1a+OMOpP/8xye0dNO3qJtwRRw5ruKaNkatRHsyQPzG+yEucg2vYTO+/TO7YOM139pLc3kGgJYIUUHANGyNTpXwlS/alkZXPZdbA1UYLtD4wQLQvhRRQcHQLY6ZC7vg4dm3xhXd0m+n9gxTPTtN6Xz/xLW1oMV8lpzqSJ/vyKKVLmaUl0/A97crlLDMvDFG8kL4pPMWxof3MaaAISfJv3iWS6G9WoX2hWVuv4i3gJSrhGEooil1vjHSunKoxdOb6epTzX/er9EPX3SrD+mk2hHazI/IQslCQkNkVeRRF0rBcg1HjLAW7sVfenJqgdvki4Q2bqV06T33Y5xWLQAB9dO1RjJ0twcZuhKbACsPsl4MUUBGyhGtaOKVbR+a+rkpObqzG6Ik8rnOVq3gjyJ1IU8sFqQ8NrjhQfCk4NYvsS6PLKuFcF66HVdSZevoiU0/PzqUWAjkaw6lVfVmQVWAuFVC6uMZB8q6Hni4z+s2Tazxx36gPP3Gc4Sf80QUyCmEpTs19BUo2nksk1kmiaQBFDTE9eQzH1lHVCLXazJsjCWahRhO07HiA3NlD6Lmp+e3R7o2o0QQzR3/asL/ngfca+eqKTprztRdpVdcRk1OoQsPFpW6VydmT5KwJ3Gt0c1zDoHjoAKWjL+FZ5vyDsnz8ZTx97VFM/dwI0b3bUFoTyJEgayolCYHa0YwUCWIXKhjDa3PEXglWNIqe67HxvhY6t8RxbY+mnhDVnEnvriTZkRqHv7oKIyUESBJyOEJk4zb0sWHfKEqS/5q7gNg8uy9wdbskXSVuCj+BPL/vQsztI8k+Y2bueCFmj5vbz79rpWCI2LadlE+fwKlXbzo5tKlDY+veBAe/ubQBDYQldj+aom97lJ/8wyTZ8dVRcTQpRIva84r4ZbFEL/0b3+Z33YRSlIojWGaN9s7djA7tx9CvLabduu6H1xJkLYSWaFn0EHdMnXjrbataQ8ze5nO3q+f53Sy3ot236hapGSdRRQBZKHie39XiLBIRmztZP52EY8/PGQdwq5UbahqoHb+Mlc6jtiQJ7RzwRWfN1UUfameK8M4BhCKjXxpHv7wUde/VwYpGsTBR55nPDc634gjhJ4gBjNXkFYUgtnMPwZ5ePMNADoUBgdbWSXTbTuRgEGNmmsqZE7imQXj9ZsIbNiMkidrgBWpXLpG85wGqF85iF/PEbt+DMT2B1tKG1tI2f7O6ep38wWcIdHYT3bwdKRjEmJqgcu40we4+ott24tZrCEWhcPgAdqVMfOedxHbtQU21oI+PUDl7qiFEmkMoJhOMyEizYp1CQH7KRFYE8RYVRZOolWwqBf/7iCYVIgmFZJtG79YIB
785g6wKEq0aiiqol20qeRtTd7n4comNe+KEomsJLTwUoRKUIjie/56OZ+OyehelvesOCrnLTI4dZvP2XwbAtuoEggkUNYRnmShyCN0q0BwZQAiJfG0Uy3kDtgYKsYiW6ueil79mkgTxZoX2vgAD20O09wYIRmVcB8oFm/FLOmOX6kyPmtRKr4ZrKRAIf2ojHqanr+qZFlq3ntTDb12wjEAoKngelbOnKBxY25hdcyJD7mvP0vzhR0i+Yy9uRad65AJOtb4051CWkIIaakeK1C8/SGh7P8bINPlvPo+n37zq/PWwsqfo+f3PpfSNEYrlYIjwhk2kv/t1gl09xG/fg1A1Ipu3Yk5PULtyicRd9xLu30h95DKx23aR2/9jrFwGJAmhqMihMEKRQRJIwSBCURGqijE1TqCzh9rgBcIbtiBHIiTuvBcjPQXlEqGBTeiT48jhCJ5jk3vux0Q230Z4w2aKhw9QPnUUraWN3LNP+SH0MtjxYBMDu6IEQhLVgk2sWeNrfz7Euu0Rtt6bwHWhXnE4/L0MRs3hbZ/oBs9DViW0oE+72HxXnIHbY2hBCVN3eeFbaXKT/g9iraRdx7NRRZD+wO3zLVp5e5qCvfrwQtNiZKZOYZnl+byh5zkgJASCSKCFWKCNQn2M5uj6WWPokS5fXNO5vt7hmDpCQLR3M3a9gmMZfkfLum2YpaWbCWRFsPvhOL/w0Ra23h0lFFnaeBazFge/V+BHX5656TqKISlKq9pHwZ6m5GRZrZdvFwuUTx2f/1soCkoyRbCnDzOzslbnUhCSoH52mNIzx0k+fg+tn3yc2IM7qJ8fxZrK4dYNX0VXkhCagpKIoPW0Et61AaUlgVOsUjl4BiUZRbl7y+Lo8BrUTly+KcZzRaPY1BWia3uco9+6sZYmKRjCNQw808AuFXFNE0lVEULCqVVxTQOnXkcKhxGyjFBUrPxsN8WicFZcdeldF6dWxanVfO/OdZBDEb8vFXAMg+q5UzjlEl5zK3axgGsYuHoL+DqEAAAgAElEQVQNJRZf02fwPI/JwTqy4nt5vbLv9Q3cHuP8oRKXjpZ46EMdDNweZXqoTjgm88//7xW23JNg50NNBMMyux9JIWRBrWjTuzXMhcMBcpM3dvEczyZtNSa9dXdtSWjTKBNL9lGcJW8LJMKRNp+Y7BioyEiSQmtsE9Ols2hKFEm88WacWdUi5dGLpLbeTaRzANfUUaNJ5ECIiee/teQxux+K8Sv/rouejT6FpJSzyU6ZGDUXSfIjieZOjUSzymMfaSbVofLPfzFxUw1jREqwPriL8/UXZ43i6mDls1hHrtlfkmja9yhaSxu1C2fXdB6pDzxMYF07SlPMn5keUAnvGCC8YwDPdnB1k7k5CUJV/LlHCwyfUGXij97hC0coCizfKQsejP77/471ahtFIQtCCY1QXMGelS13LBd3GQn0a+HUqkiKgppqQWtuRQoEfEOo6yhNKdRqGSUSRZ8Yw7VtnHqVYM86rFwGz3VwTRPPdVHiCT/x2tRCvaEKdvU8nHodu1LEKuTQx0dASLimCR6z/Z+N8DwPz3FQ4kk828Y1l/eG9YqNokpYhj9HQwv6F86sO9iWh2W4aCEJLTC7j+VRKdi4joesCoQMF4+UmLhU82fwTtx4K5+DTdGeQRYqAoHj2ThrS2EzPXmU/o2/QCzRTSTagaKGAEF25gyGXsKTLJLhXhzXpGJkSEpBXO+NV4n2bGu+yBLt2oAcCFEZH6QyfpH6zGJHoa1X452faKNjXZDhczWe+ZccZw9VqJYdHNubnZsiSLSq7N4X5+H3NXPHQ3EyEyZf/ewk1eLNCaVloeDhUXcqvOJcsOvilEsEe9et+dDo3m0EelqXfE0oMnJ0Ze6hHAkhR1bHT/Q8zx9CcxOwolHUSxbJziD7PjlAOWPieR7DL+eZPLe6VjTX0CmfPk5iz71YhRxmehLX9L246LadxHffgzE5Rn3kMp5tUz5+hMiW7QhZoXblErVL56icO0l02048w8CYGsetV7GLCk6tipXP4dRrmNkZXFOneOgA0Z13EFq3HjOTpnLuJHa1PJ/VtqtVhOKHPa5pUBu5TOz2O9HHhqleOHMdLtbVm6tedshNGfRsjRCIyESTCkOnKhTTJmpAYtt9Cdr6Qqia711OXqoTb1YppBVM3b/x1YBE79YIiRaVrg1hKnmbQvr6TzkZhVa1j6TSjiRkak6RKesKurt6cdhScZTB89+huXUrllnDturksxcpFoZwXQvdtRjOvuh/T55DpvL6HEh1M+CaOpXRi9SmRxCygmsZS+aeAbbuidK3JUR61OALfzLGhaPV+WFUCzF+2eDC0SqZSYtf+8Nudj0Y56dPZKkWb07O1vYsbM9EiLUZCTkcQW1d2JEjEKpCZMs29PGxNZ+HNZH1PcFbBO8miamsaBTNusPlQ43u9KoKLAtQGzxPbfB8wzbbKlI49NyiffXxEd/LW/h+E2MYE40XxJjyn9LGpK+cYaZ9uoRl5sjv/3HjmiNXFqw1ijExWzF3HKpnT1I9uzJNZnLQv1ElGWzLQ686lPMWx5/OsfXeJM3dQc69UGTkjB/CvvjdGXq3RMhOGOSmDFwHXvjODLc9kKR3S5j8tEl6WEfVBK09AUbO1QiEZWLN6qqMoiaFiMnNDOknsD2LNq2PZqWTcXMN+T7PpVqepFpeuqKnSBqSpGLa1fm/ASz3jdcCKGlBol0bCHf2I6sBrFrJN5Lp0UUjYVu6NEIRmePPljh7eOWHlG16vPTjAm//1RbaegJE4jdPxaniFCg5WZqUDor2zCLqzXLQOrtofuTtjRuFwExPUTl1bM3nkfmHp3wR2VsEp3BzdANWVt4uWJx/ZgYxK2vkd4LclPd93WD8YiMZd85IArz8w8X5mktHylw60nhx6mWHl36weN8D31gj3xE//+fhYHnGfGN/UIquaY1Esh/LrlOrXC3OBENNhCNtFPNDhNUmQmqC6bL/MIuHOgDIVofWfL6vZwhFpWnLHpKb7sAq53AMnUj7OqLdG5l64XvUphcT/l3XIzu1unSGbXoU0hbtvTe3c8j0aozop+kL3sb64C6y9gS2ZywaSuZ4NnX36r1qTE6QefLbDft4joNdKuJU1m5wzLG139+vBVxXZHbj/S1svK8ZRZMYOVbg7NPT1AqvnfmtbzSYXg3TM9gUugsHBw+PCePCmtZo77qTfO5Sg1GU5QBtnXfg2RYRkSCsNmE6dYSQSIS6Ketrrz6+3qFFm4j1bSFz7Bkq45f81lBZoeX2B0luvnORUcxOmdQrLrEmeVXUTkUTxFLK7HE3j5rTpHSyJXQPISmKUCW6vc1L8gyLzgzHq1cjK7dWRTd0lEgMKRgEScKzTJ/I/QbCikaxeV2Y7tviPPd3VzDrDrt/sYuBu1Oc/tGtY5e/iUbYnsWocZagFEFCpu6WGziKqhwExIqcQlWLLBpQZTsGsqIRUGOERRMhLUkqvA7wqJp5CvW155Re75C0AHhQz4zjGLPfp2VQmxqi+bb7Fu1/7qUK44M663dG6FgXYGpo5YLalj1R2nsD
7P9WjvTYzTM8lmeQs69Pdl7oJQIITSO6dQfxO+9GCkf9mVKuS+3SeQoH99+Qt/h6xHX1FI2KTXFax3M8ilM6amDtuY9IRLBpk8KWLQodHTLRqECWoK57ZDIuExMOFy7YDA87q2oskWVYP6Bw2w6F3l6FeEzguJDPu5w9a3H8uEWxuPqq23vfG2L3LpWpKYdvf1tnbNw3MuGw4LbbFHbt0mhvk5AVqFY8xscdzp23OX/eXtXQcEWBTZsUbr9dpatLJhIRmCbMzDhcvGBz7LhFpbK685VR0KTQ/A2tCA2VAIbnh/nxYAdCSGQqywtcuK5FINg4tF1VQkiSTFmfJl+/jCaHKBtvPO9wITzbQsgKarTpKi9RCIKpDmx9cY/z9KjJD/9xhg/+m04+9LudPP21LBePVdGrC0fqQqpdZfveGO/8eCvD5+s88/UclcLN8xTLTpbz9bULBQfaO4nvuYfalUH0kSGfnRGLE9+zl+j2nRQPHbhp5ygUGRHUELKEZ1g+Pec1gpUHV03XUTSJt/2bzTimi5Dg2HdWz1mMRgVveyzABz4QZt06mWRSIhQSqKpPR3Ic0HWPWs0jn3d55lmDv/iLCuXy0gZClmHdOplf/dUwjz4SoKlJIhKR0DS/wGwYHoWCx8VLNv/y9TpP/VinULi+sXn4IY33vz/E4GWH4ycsJqccbr9d5Td/I8Idd6g0N/vnLQTYNlSr/vn+6Z+W+f4P9GVbpxUFdtym8uGPhLh3r0Zrq0w4LFD8AWboukex6HLunM2X/r7G/v0GxnXYOgEpTLPaw5jhc8YiUpKwFGPS8ivEQsioSghF0nBnE8Cu57Awlivmh2jrvAPHNqmUJ1G1CJ09d2NbdSyzhmlXsF0DVfbpEI5rvSEpOWalQD09Ssfet1NLj+IaOmosRSDZwvThHy7av2sgQDylUi067H17kq17ouTTFsWshVH3kBWIxhViKYWmNoVgWObi8Spv+2gLj/9aK9IS3OTnv5vnpaeWnyV9MyHHEjjVCuWXD2GXrr6nHIkQ6Ox+RWsLTUHtSBG5czPBjd0oqZg/Q0kIyvtPUvjeiw3isyKgIgVUnzqnmzetsrwarGgUK1mTg/84TKo3jKxIZIar6KXr5xOFgL4+md/6zQjvfW+IeFwgSQLP86jXoVj05b8CAUEkIojFJFIpiUOHzWU9JlWF++8P8G9/L8quXSqaJrAs3zjpus8Bi8UlOjslOjs19typsm2bwn/72yrp9OqqQ9GIIB6X2LtX44//OM6WzQqSBJWK/z6yLIhGBcmkwDQFhaK7rEGUZXjb2wJ8+g/irFsnoyhQr0Mu52KYHqrir9PdLdPZKbNtm8rnP1/ly/9Uo1Ra+jvQRIik0kFSacV0/XAuJMUwFpC3DbtMS2wDITWB5fjV4kx5kLp1tZ85PXWCYKiJnoGHkCQVPJd6Lcv4yEFMo4wqB2mLbSYWaAUEFTPDTPkihv3Gmgntmjozx5/FLGWJ9mxGSrRiVQpMHfoBtcmhRfvf/64m3vOvOpAVgaIKWro0mjvVBkUcvxd6tnDpeWzdE2XrHpadcDF0rn7DRlEgCEpRAiI8y110sTwD3a3OTva75vPWKrj1OmLBONe58a5W7sZ1NeWmKIlH7yTx2J3ISZ/IjRDz30P9zDANTwRZIrp3G8nH7wHXpfDkS5T3n1hm9ZuPFY1iIKwQjKlMnC2BB9GWAJHmAJXrDK5qa5P4/X8b5d3vDhEICGzb4/JlmxcPmVy6ZFMo+MYkHhf09MhsWK/Q0yvzg+8bSzbKCwE7d6r84adj7Nih4roe589b/ORpg5MnLfJ5F0UWdHbJ3HefxmNvDRCLSXz84xFMy+Mv/7JKrXZ9jzEaFezepXLXXSrrBxROnrR46SWTK1ccKhUXTfPfY/s2hWLJY2Rk+afXnj0qn/6DGBs3Kti2x8GDJk8/bXD5ik2l4hEKCtavV9i3T+PeewN0d8t86lMRSmWXr32tjrXEs0fMDSBCJjArJV93S+TsqwoulqOTrVxpOM7xGhezrRpDl35EKNyCFojjOCa1ahrb8kPCiNZMSE0ykj+Kh0tHfCuJUBfp8toKOj8PcPQqubOHyJ09dN3B7ZNXDF5++uZ6dROXb4wGJaPQrvXTqW0kLregigAuDnW3TM6aZMw4R/kakVm7UkGOREk99FaMyTE8x0GOxYls2kr1wlnid+0FBE65RPX86gRJlOY4qfftI/7wLqTQKqvsjotTrKKkYijNcWLlOpUXziw5xuDVwMptft0henYmeOkJP8nesSmKFpI585Plc02qCu94R5DHHw+iaX5I+8Mf6nzxSzVOnbIWeUGaBm1tMp2dEqdPL/2ho1HBxz8e5rbbFMDjwgWbP/qjEi8ftRbl9J56SmdoKMwnPxEhkRC8//1hTp60+d73rn9zhcOCD33IN+RPPFHji1+qMThos1A1SZKgvV0iGBBMTS1tFNvaJD75iQj9/QqO4/Hkkzqf+WyFs2fthpypJBk8+aTOpz8d4x3vCNLaKvPxj0d49lmT8fHFaxtejQnzEnl7mvIy7Vu6VUK3SvN6idfSMObgug7Vappa9eq1FELC81xcz8GwShhWCQ8Pwyrjee6syIC/6s8rQi3dJLfsadx4rTEUYBazi6TDjjxd5Myhm+tN30hVWiBo1wbYELwDy9MZNy9guTqSUIjJKdq1fkJylNO15xuiDDkUQigKaiyOmmzyr7mi4lkWwZ4+hPC7WozpqVUZRRFQib9lN7GHbkcENTzHxc6VMIZ9tZzofdvnvcVrYU3nMcczqK1J1PYm1PamW0bxWbnQogrCSQ1ZlXAdl1BCRVZWZsm3tkp87KNholFfvPTAAZP/849KzMy4Sz5kTRPGxhzGxpa/+Lt3qzz6aBBJEtTrHp/5bIUDB80l15uedvnCF6q0tUp86ENhOjskfvk9QY4dM5mYWDmMlmVBW5vMN75Z58//Y4VMZvH+rguTk8uvIwQ89FCA++/XkGU4ftziv/7XKmfO2IvO13VhaNjhr/+mwj33aLS3y2zZrPDoowG+9KWlxUodz2qoGvrzOMR8BVqRArTENpKK9OF5HvnqMDOVQRz3aiI7EGqiq+cewtE2hGgsnA2e+zae69IcXU9TpA8P0OQgpl2jNbaRsp5mNP/yit/j6xme5+I6vmcthEQw1Y4SilLPTOCaBmo0gRZvRs9OLTpWr7oNRZWbBln2E/CShAhoeKa1ogZoRGqiR9tC3p5iUD+K6dZx8R9qslBIKV1sCN1Bp7qeIeNq84IxOc70N76y7LpzWK0eqtqRIvH2u5HDQZxyjcKThyn99DhupY7SmiB63/Zlj7WzJayJLN6uDcjxMFpva4NRDG/ehlMpYUxOzD+whKwQ7B/AqVYxp258BO3KOcWciRKQ2PuRXmzDJdEe5OxPV65I3n2XRn+//0PTdfjbz1VWndNbDo8/HiQW9Z8oR4+aHFzGIM4hl/P47nd1Hn5LgJ5uhXvv1diyRWViFT3HhYLLl75YW9IgrgbJpOD++zRaWmQsy+OFF02On7BWPN+REYejxywef7ufe3zwAW1
ZoxgQYVJqFxPmJcAjKjcRkCLMWD5nLhnuIajGuJR+FknIdCdvR7dLFGpX5+Z2995HJNZBPnMBx2n8TmxLx7B1zkz9YMn3d18rKqqvEvTs5Dz/UIs3037XY0y/9BTV8dlWRyFo3vEAciB8y84ptHUzxpUh1PY2gps2Yk5MUj+1vKcWkqOE5BiX9CMND1AP//pNW1fo0PpJKu2wwCgK1VelsvLZZVtetYiCZTgoQZlYa5BqVsesLX1PRHZvREnF8FyP4k+Oknvi2Xk9Rek6fc+eZWNlS3imjRQKoLQ0siXC6zdiTE9hTE1e9eJlifCGzf5I1ldgFFd0+0rTOqd+OIVZc5Akwfn9M0yeW1nxeedOFVX1DdiVKzYXL76yPEAgAJs3KSiz5vvYcYtq9frh2+kzNvmcv19Tk8RAvzy/xko4e9ZifOLGf/jNzTIbNvhvVK16XLxoX1fY27JgcvY954pUy6m3y0IhJMWYC2FloTZ0tEiSgmlXMO0ahlXGcupI13iDwXCKydEXGBt+jsmxww3/WFYVD4+IlqI1uhHXtf3jPQ/TrmI7b5xWPzkQQg6EscoLRHc9D6M4Q6il65adR3DTBuR4nMD6AZxymdCWTVfFmJeAhISHN6+3uRRsz0a+Rvko0NlD/O57kULLG/ytb+0klFDpv7uFO96/jm1vW74qHdzgz8Z2SlWqh8+vWmB2Dm61jmfZvnhE+Pqq/0KSkcORV6zgex3lbUhfqpC+tPo8ybp1yvwP+uw5a1UFjpWQSskkkpIvdOvB0JCNsYpBP9msSzbn4roekiTo65Pniz4rYXTMofoKQqB4XNDV5X8BkYjgX38qykc+vLJXIUnQ3e0fI4QgEPSr8ktVoR0cZKEQl1uxPJ2YnMJc0JNc0WfoatrJxrZ98yo6tWuGVFtmFSEpLNd2EdVa/n/23jNIkjs97/ylz8ry1V3t3UyP98AMvFkAC2AtuUvyuNJG0N1JFGN5pyAv7i7Eb4y4L5LiTgqdRJ3IiNuTjpRudbtYw/XYxQJYmIEZAGMwg7E97X1Vl6/0mfche3q6p810zwyAXQDPt+7OyszKznzy/b/mecgZ/RhqlkJ9iKTeThiGFOpXtng1fr0R+C6SHiPePUjgOYS+j6hqpPr34q/Rp9i7S+fJf9jK7JjN8R+WKBfuTGEgdD20gT4QRRqnzqB2dmy4vRtGVgJxKUPVLxKy8n7WBIOYmFxlaSHpMURN31CFPr8jxdzlKm27Ulx9fZ5dj3asO70jt0TRnTtbwqtsPdcauD5hsKi3qETPh7FjF8lDd6P39qEPbCe+a8/15bOqISgytdNvb/lYK877tj69BlJpYeklViwGa1ZRt4JkUkBddI0Mw5BaLdxUg3cYQq12PY+Zzoib8s5p1MPbOmdNi9p2IJKJGhyUGRzc2j5EUUDT1r7T7KBJ0Z2kR92FKMhU/DkK3vVpk6azwGjhTQw1Rxj6NJ0y3g1CDrXKBB3dxxAQsKwKLHto6rUZJEnF8ZtInhpVvEXtY2li5VZLVK6coWXvfWR3RdavkqLhuxazb67uU9y2z+CBz2UpF1xO/rJ6x0ixefYc+uD2aMns+9gTkxtGQw2/QsmfpV87QAjU/SJ+6CMgoIkGnep2dNFgxFrZ5uJWy4SugxSP4zfWJjGz4jBwXx6n6TF5eoGdD7evex6CFj24ge0QeltffYmyGGmoBgHBYpRpTYxF9ibJJIFtY89eXz6HjoM5PrpKQGaruKOkKAggSwLXmq58j9suVEbtS9crVFuJjJeTpySu2wq28jO3e74iS8t02w65csWjWt1a5Dk56eM4a59ISEDRm6ToTa7592sN1xUz+rsqGUiiuqLQEjNaUNQ4fYNP4Hv2UpM3wKVz38Zxm0gxhYSepytzEElUma2uVDr6OCDwHIrnXqMxPYye70KUNXyzTmNmBLd+o48NZNsVjJTE2CWT2bFb18y8Ec7oOM7odT+kxpsbR0J22GDMeo8dsbvZZzyAFTTxQhsBiZgYx8dnzHpv1SigX60ShpC5/xGsiTEC+/rL1K2UsSfGOP+zKTr2pZk4uUAYhFx+eXbdZzxoRtdA1LVIPX+LkNIJBFUhdD2CRnQugWXRvHQBJduCVynTuHh+lVrR7eKOkmIYgmle92WOxwU2sLLYFBrNcGnJK4oCui7crF1sCXFDWBLyrdfDD0TazXVDTDPEMKBeD/h//rbBm29sLfR0nHDdJnYBgYzcTkZqi4otgoCIuJRQT+ptCAhLijaZeA+WU6VqXa+Wzs2cpji/toqybVUIfY9ifRhvkUhrjdGP7chfGPiYhUnMwtovoeWQlagRqjTncicNEZXODty5+aWKs9LZjju9sf5AxZ/jovkGrXIPaTmPIuj42Cw4kyx4M5TdmVW+PnI2i5pvR06m0Lp7V+hGNq9cwp4YozTeoDR+vY1n5I3122Tc+TIx+lHaMkhJA5fNjx5KKQO1J4+gSHi1Bu7syhRQ48K5qBj0Psh23fHl8+xcNL8sijAwIKNrwqbnetdCsRhQr18n2p4eabH/cePPJRICuZy4SKAhMzM+rvv+99c1GtE8d0uLhCQJeB5cvnLnlp66GCev9KIIGoqgIYsqcTHNpFNDV1KkY50IgkQQBgiCQErvxPFW5r8atWlkxUCPZVHVJL5vYzaLuE6DMPQREGk6JepOgWsmSB9PP7+toV72sa0AWdnMmmSTkCSMQweoH38Dv9lEEAXidx+h/JOf3XRZU/dLNPwKsqMs9ZgGobeum589NcHcd//rmn8LnOgFue3+POMni3h2REYD9+XXJUbz7AipRw4hpeIk7t+LMz5P0NxEoU4SMY7swDgwAIBXqGIPr4xqvUoZUY8hZ3OrTMT8xcmcW8UdJ8UzZ1y+9JuxaO73gEJnp0iheOts3myGvHfe5cgRBUWBu44oxOMitr3xPvfulWlpiQo09XrApUveTYn0TmB+PjrW7t0KiYTAoUMKf//3Jrdgm7smBMRFZWUXBAEZFWGxiUCVDGJqNhKJFaQoB2vNUrdW3rR6LEvf9sdJpLoJfBdRlPHcJhOjxykVLxFXWzHUzNIES9boJySg1LxF3+2PCYbPNSN9xD6NZFaiVrq9cFHQNLSBPrT+XkLXIbAdBEkChA3fUAICItKitFywaqRPQEQWZLzQW1GECV0Xr1JeOrYgiJFNRxDpHigxiW0P5FkYa2BVHURZZPfjHYy8Ob/m+ZjnRnAm59H62kl/+ih+uUHt5XfxSut3sAgxlfhdO2n53U8h51IEjkv9xAW84srPyOkMmYcfQ+/ujUhx2XuofPwlaqdvvZf2jpPiO+841GoBsZhENivwm78Z49x7tduyVX75ZYff+e0YiiJw110q27dLLCysv0NVhYcf1sjnI7K4cMFjZPSD6a8rlQLOnHH59Kc1DEPknmMqBw4ovPXWndGg9EKHkICs3Ikq6AQEzDkjAFStGUYKbyAg0HDWX6q0th9AknSuXvoJjl1DkjRa2/bR3nUXjfo0iqijydfbfBTZ2FQ+9qOCni6JWj2kspgL7uuRmS/4mNbGkdnIBZNTL1V54A
tZ7n0qw4vfWVjTjmDTCANC2yGwrKhY4TgEQYB5/uKG+SNdjNOl7qTiFSh6E6ummnQxTo+6mwVvelVuWjTiJPYfRO/uQ5BkvFqF+tnTCPVZtt2fp/tADlES8WwfQRCozqwfkXmlKpXn3qHlK48hxnVyv/0I+s4erCuTUQP6IuRsMnLwS8XRtnUQv3sncj4DgD00Rf311T2Zxs49qK15KideW1UUcuZvL9Vzx0lxeNjn+edtfvd3Y4gi/M7vxBgd8/nhD02q1bUrx7IcVW0FgTWX2idOOLz1lsujj6pkswJ/8icJJiYqzM0Fq/anafD4Yxpf/lKMWEyg0Qj4xfM2w8MfTPXUdeEXz9s89ZTOsWMKO3fK/A//fYJ/8S9rDA15a880CxGRx2ICra0So6NrbwfghDbTzhB1v4SITDOoUPevJ/0tt8Ly12Y0urdMkQBIpXuZmz5FuXi9xcb3LLbv/jyyYuC7DpqcIKHlCQKPuJqlYq6e4PgoQhThkft1Ll5xOX3OQRDgi08b/OjnTUbHN76HHCvgJ383TyIj88V/1E4iK3P8RyWaVT9Se9mAH20zWJWHDB0Xe2SUwLJw5wsbTrEsR0xM0a3txlzHt8cNbdJyHllQV5CiaMTJPvwYWmc39vQUoeeituZp+fRnKT3/YybfLdG+u8jQq3O4TY8gCKnNre8pHbo+1ZfOIGUSZJ48ipiMkbh/L/G7dhAs8302Dg+i7+pB1FXEmBbJifkB9tQ8xWdeWpVPhChSNEeGqb976ibeSlvHHSfFRiPkb/+uyb59MgcOKLS1ifzFP0vyyMMqP/mpxeioj21HN4iqCKTSAv39MkePKlSrIX/5l6tD64WFgL/+6zp9fWm2bZN44nEN9V+k+da3TK4MRZqGggDJpMj99yv8d/9tnJ4eCd+PosxvP9P8QJbO13D5ssfXv96gqytJT4/M449rdHSIfPvbJmfe9Wg0IkEMSYpeBq2tIgcPKtx3r4ooCnztT0vrTgFJSGiCQdUr4oTWqh60hN6GKEiUmxOAQM7ox/KqNOzrkWMYhgiisuJzoihf+yN1u4Aup+jPHUMQRIqN0Y+FyKwiw77dKsfu0unqlOntlpFlge0Dm3tMenbo7DgUp1byUDSB3/2nnTz91VZmRm3MRrCm+vU1PPNXM4yeXzvq8gpF5JYcoq5HS9xCkXCDvjF50emxGVTXnH33Qgc7aKKL8RW/1zq7UVvbKP78x5HvUQhIIi2Pf4bY7oMsPP8sb33jKo0Fh3CTbRpBrUnp2y/hFSuknzqG2tkSkUSAF4EAACAASURBVJ+uLm0jxXWk+PXmbL/WpHluhPKPXse8OL5m7tQtLaC1tSPqOn79zs6b33FSDEM4d87lf/vf6/z5n0UyX9msyBe/GONzn9MpFAKq1YgU4/GIEDRNwPdDfvLTtRNvQQBvnnD4V/+6xp//WZLBQYknn9R56CGNkRGPhYVI1quzU6S7R0KWBIIAXnvN5t/+uzqTN5l5vtMIAvjJTy1aWkW+9rU4vT0yBw9Gy+haLaRYDHCcEFUTSKdE0mkBadEH5+JFd0PPb1lQyCt9BPjYgUnFn6fhl5fIMaakl7m4hcTUDCHBClIsl4bp6L4bUZSw7SqypJHL78FxajhOnSD0mKtfYq7+MVPFESCmC8R0gdachGWH+H7ID55tMjN782jk4d/I8dt/urKxOt+jke+5uTrMz7+xvjSXvmsH2uC2iKREAXdqmsbbp9ZdQl8ri21EwgHBqkknSY8RWBZepXw9+vLBnpshNrAdALvhMXBfK8m8jllxGT9ZvKmcYGA5VH72FtblSeJHd6Fv60TpzCGn4wiqAmFIYDl4pTruzALNM0PU37yAX1nfz9wrFUkfu4/cE5/BmZtdPN/o+1rjYzizN1ceXw/vi9WW58FLL9ksLAT80R8ZfO6zOoYRNXW3tYm0ta3cPghCPA8K8+uTl23DT35iUS4F/MmfJHjoIRVdhz17Vn6FMIzyej//ucV//E+RMs+HAd+Hb36zyfx8wD/4SowHH1TRdYFkUiCZXN2nFASRQO4bbzgbqnnbocmofRZDTBGXMnSpO6h488y6kVyYH7gk9TYUSUcQRDQ5Tt1eWWgpzp5D19N09z2IKMmEQUC9NsXU+Ou4TgNRkMkY3aT0dgSicy00rlKzPto2FK4LJ07a1Co2xVLA7PzW8tBjF02O/2jh5huugY0avbXB7TTPnMWbLyAaMVKfepjGO6fXJUVvUSpOFw0q/uq+AQkZTTRwg5XLp8AykQwDKZnCbzai/UsSWkfXUgFm75NdpLsNmgs2qfYYmW6DE//v+irv13ceYg9NYY/OImcSyC0ppLgeuf2FIYHj4tdMvIUqfrlx0547OZUmdJ0ous23Ey7LPQSW+atHihAR46lTLn/5l1X+9m+bHDuqcuiQQmenRDIVvcvqjZCpKZ+hIY+33nLWlQ67BtuGl152uHCxzN13Kzxwv8bOnTLZrIjjhhQLAadOubz8sh01Tdc2zuNcw8xswOXLHqIYWQTcTlFoOUwTnn3W4vXXHQ7sl7n/AZW9eyIlb8MQsO1IvHZkxOf0aZd3z7rMzPjrKo/DNU1FFVlQSYgZVEHHDa9H2BVzEl1JsbfzM4SELNRHV1WfHafG2NUXmBp/A0lSCAIfzzXxPAsISept5OM7WGiOLSluu97aAhUfRVwZdkmnRXq6pKWofWbW52Zyfm/9osyZVzbWBlgPZmN9Ag5MEzGmIxoxpGSC0POQW3KRUX21tmopbQY1an6JPm0fZlCnGVSjFi0i+4oOdTtxMc2Is3KixZ6exJ6bJf+FLy9FX0o2hyCKzP/k7wHI9Sc4/f0xqjMmqiHz0D/auSmTriV4Pl6hgle4Pd3JxoVzNK9GOfHlor3AiqbzW4GwUYgtCMInrWm/YtAEgwH9EG5oUXDHqa6jq7gR9FgW33dXmFfJSgxFTWCZC6S0DlJ6OxPl04QbqOIkOwd54Gv/BwCv/Yc/ozY9tPUv9CuA3nu/wN4vfo3mwgzH/+pr3H1A5AtPGcRiwlLN6t/8dYWZ2Q9HISjx4H3oOwcJHQdR0wgse0m+q/bSK7izq/sE25Vt7IwdQ0Ck5E1jh01EJFJSK0m5hQV3ivear+KEK/OYoqYT37sfvW9bVH0uFamdfge3FN1nD//xLlzLpzzVJJnXad+T4fIvpyGE8VMLmOUPzmtFSiTRe3qRkmmcmSns2WnkVAavWiZ0Nj6PMAzXTVJ9cE7Vn+COwA1tRqx3scO18y2iIJExelDlOHPVS2hyHNe3V8w/t3cdpVGbpjB3bul3up6lvfsok6OvQBjQEt9GTM1EroBhSKEx/JFfPl/D0cMqF4ccXnndWpqCKhQ/PMm0xtunaJ5aW44/sNauIF6TkutSd9Ki9KAKGgEBpl9l0r7EpH1xFSFCFGXVTr1N7dTao4TzV+u0707RMpBE0SVK4w1y/QlAYOZChVtvmd4apHiC7KNPEOvfhhRPUDv1Dl6tSuroPTQunMca3cSSfh38ypFiKtZJf/5+FuojTC6c/LBP51cOAT4OJuutWdKxLjJGN
4aaZb52mYzRS9MpLc1CA8QT7dQqKz2LPd9G1zNIcgyzWWayfGZRmShihY0sUz9qKFUCJDGaRnIWp6BuU43qtiAlE8SPHsGdmsG8cAm1twf76vCGnwnwmXNHqHhzqKKOiETIYuU5bOKFqyMpOZtD7+qhefUKgXk9XRLfvQ+vUceeGOPqq7NMni7iuwGeEyDKIr4bQAj+WuZSm53J3SKM3XuRUynmf/Q94nv2A5H4raioqPm2jxYpKnKMlNGJ6aweuP8EIIkq+dROGvYCNXN1MlmRYphOBUUyEBBRJB1xjQF0QVz5rxcEEUGUEADHN6lYU8SUNCBgu1Wsj5FpVbMZ8ntfifPkp2KYVvRS+Of/pszk9NaixVhCJJtXiKdlZCUybnPskHrJo1xwcW7SDH4N8aN3EToOclsr4dn3MA7uwx4Z3VDiCyIrCitsYPnrV3GXQ823E997IBKDWEaKcjqL3r8Ne2KM9j1p9j3dzeylCud+OsH2+/Oc//nagq6CrmLsH8CdL+POlwnNzS+txaSB2pkDQcCvNHBnVhaw1Fwr9tQU9uQEet8Akm5AEBB6HoKirLPXzeFXjhQ/wcZQZYP+/H2MF95ekxQtt0o23kdCa6UnewRZjuE0VkYVzeY8Lfm9NOuzWGYZWdZoad0NgOdZaHKc9tSepakW17eYq16k6a5uov0o4hcvmbx2YmWyvrwFpSNFFdh1V5y7n0iz52iCjn6NWFwkCKBW9pgasrh0ssHrz5bX7U1cDkGRccbGUbo6EWM6m9LAuwWIqkq4SCzLEVgmUrwHgJ2PtjN7uUosreJZAb13tXD+uak1Cy1qT56OP/sdvEKF2qtnKf3g+KaFZrWBDtr/+PPImSSNk5eZ+avvrTCu8ht11PaO6HpcO/+YgZxKY03e3jjqJ6T4awZNSWJouXV10Or2fDTFQkAY+CxUx1aJzBZmzzK4+wvs2v87i2N+KqqeYnbybRy7SkrrQBRkhguvExLQmdpPQs9/bEhRFOHJx2KIgsAzP6izc7uCZYc0NiGYLMkCD/1Gjt/84zY6B/QVKlESkGtXyLUr7L03wZ5jCb71b6dvanZlj45jHDmE3JJDactHUeL7sCQNXRdJ0xE1bcXonJzJEi5OP4QBNAoWsZRCIq+xUdlZ396JaGho/e04k/PRpMomz8UrVnCLVdTuPGpfG0pHFmf8ekGpeeUSxs495D//ZaR4AkQRpaWF0PexxkZu4dtfx4dKislYOz0tx0jG2gkCj1JjFMtZ3dIgIJKMtdORPUAq1okoKphOibnKBYrVIbwb+q0UKUYusY18ZheGmkNAwAtsSvUxxgsncH2TuNbCgf7fYqZ0ltH511d8vq/1PlpTg1ydfZlKY5IDfV+i0pxEFBXyqZ2U6iNMl87Skd1PS2I7pcY4Y4U3sd3o3EVBIRPvoTN3kISWB0Jq5iyTC6epmtNLSjTdLUfIp3czPPsquUQ/2UQ/iqRjOlWmS2co1obwAzdaMqd30ZrcQSrWiSzpDLZ/it7We5bO+fLUL1ioDyMKCrbXYKp8Fj9w16we16vTXDr3XdLZbVEl2rOplEeo16YIgkgkQAAUSSckXMwtfrS9WZbjc5826OqQ6euR+a/frfO5Txt890cNro5uHOUIAtzzZJqv/FknyazMxBWLd4/XGLto0qh6iKJApk1hxyGDXUfi7Lo7zlf/5y7+7p9Pcunk+ktc6/xFvPkCcjaDX2/gzs29L6ToFOYIg4DMQ49Rf/cUgW2jdfcQ37mHhVeeB+DySzMc/nI/6c4YbbtTnP3RxLq8qG2P7AjCMKR55uqSUOxm4BWquJNFwoPbkZJx1L72FaToFOYo/PT7JA8fRdR18MCemaZ26u2lnspbxYdGioaWY2/PF1DlOAv1YVzPJBnroC29e0ko9RoyiT72dH8GCCk1xvF9h4SeZ3fX00zq7zA8+xrBYsOqIhns7HyCfHo3dWuWcmOcMAzQlRSGll3quxMFGUPLocqrrQJUOUZMzSCJKghgaC3EY3mqzSkcr0F3y91kEn3Ybh3TKdPdchemW2ai8DaiINGVO8RA2wM07QVKjVEEQSJtdHOg70tcmnqOQvUSCBF5Z+O9KF1P4vk2NXMGQZDIxvvY2/N5Lk79nJnSWSDE8yyqzSlCQgwtR7kxTmWZGdW1HGxSz9OW2oXpVGjYRep2YdHEfvmdG2I2C5jNtaco6naBhJanL3eMMAww3fIK46uPOjIZkfcuOHS0SeiaQDIpbmSJsoRch8Jnfz9PPCXx/DcLfP//mqU4vXp44FkB+nbp/P5f9HDggSSPfDnH5JBFo7r2i0dMxHHnC7gzsyAISJk0vn3nW1/cYoHSq78kc/9DtH7+SwiShN9sUjt7iualCwBMnikxd6VKPKdhVVys2vrDEWpbFoDAtLHH59mKoGnouLiFyqJxlYrSkrphgxCvXqN68gTC6ev/HFFTEQ2DoHnrfbUfGil2ZPajq2muzrzE1MJpgtBDV9Ps6nySROb6yIskKvS2RjO4FyaepdQYIQwDNCXJYMen6MweYqE2QqkRVVNzyQHaM/uYq1zg6uzLmItLR1FQUGUDP9j6hIsoiLi+zejca4TAscHfRxIURuaO4/s2SaMDQ80hiQqGmqW39Sg1c5ZLU88tHT8T7+NA35foyh2kakbkCiBLOkHoc3HqWRpWRFL51C4O9n+ZXLyPQvUynm9RqF2BmkBHZj8dmb0Ua1eZKp1eda6l5gSmWyEd6yQZa6clMcBU+eyqqZaN4AcOU+V3UeU4giBgu/VVM9YfZZy/5PDoAzGOHFT5X/5pmjAMN5VT3Hk4Tke/xtSwxQ++PrcmIQIQwthFix98fZZdd8fZfVecXIeyLinG7zpM/c23CBrRgx6/+zDV5196X6JFe2KUwo8LKLkWkGT8Rg13obh0rNbtScqTDcoTTURJIL8jyfyV2pr7EhNRvs+vNglvgcSDemRcJcZURGPlqKTa0UX6ngeQ0xluDFWrb79J4/zZLR/vGj4UUlSkGCmjC8+3mC2/txS9WU6FYm2IluS2pW3jeisJPU+lMUmpPrL0cNpujZnSe2Tj/bSmdi6SokA+tQvbqzFdOrNESABB6C4qyNwabLeK6VSQRBXbq9G0F7DcKqIgYbt1ZElDEmWSsQ4MrYXxwtv4gYMiGUvn27ALJPR2dCW9RIoAs+VzNK3r1bW6NYfplNHUFJKobMlBT0DAD1z8wCWmpFCk2LqG4+tBk5NocpyqNQuEJLRWgjCg6dzaCNuvG946aVOvh4yOe9QbAWfPOyyUbk6KbT0qsaTEOy9WKUzdnATGL1lMD9u092kk0us/ilIqiXDN40IUkNKZ963VBcBvNqIxvzWw9+ku3vrGVTw7MoXb93Q3vxy6sOYS+poFQeh6hLcgex94y4yrbrDiNHbuRlAUyq+9tGqixy3d3n364ZCibKBIMRy3sar/zfbquMu8iHUlgyjINOzVrmSms0AQesT1PBARQlxvXSSwO9vS4wceQeghoRAGPn7gEAQegigSLqpcC4tLclGQ2N7xKANtD6zYxzWvlBtbZBpWYaXY
Zxjg+c6isMPWCC0Za6c7cxDTrTJZOkPTKS3ZCmwWhpohrrUuWRjE1RaAjw0pplPRGOa1hu1t/TJzBZ/6Tax11ZiIJAnUy5vLnfleSKPiocV0pA3Uut35Asbhg7hTM0gtWYJa9QNtnBRUFcKQ0HVRdAk9rWJWXLSEgpZYv/0lsCKyEnX1ljxaRFVGkMSlVpvlcOZmUdKR5mJkS7DsetzmnO6HQooCAoIgrDBMuoYwDFcQhLio+BKuuW0AhEvbRNtHUvy3Y9ouCCsTSGF0sBU/r3VLCkTes37gMVc+T8NenbMLAp+mvbKKu5E/71ZhOhWuzh9fzCPeGsIwQJFikYtf6KPJCSxv7SXSRxFPP2Gwe1BhYspb+reffNeBm5CiWQ/w3JBM2+b65GQ1KryY9WBJ3n8tNE6eIX7XIWL79uA3GtRPnPzASFFQVdKPPYZXKlE/cYKpd0sc/lIfZtlBT6vMXamuW2jxq1G0KaUMpERs7Y3WPbCAnE0iqgqh66+yMQhsC627F72nD69eW0GE1ZMnaJw/d+MeN40PhRT9wMEPPGRJ58bJDElUEJeZdDteM8ohyvFV+1FkA0GQli1FQxyviSzpKFIM213/Qb6mM3c9Grt+DopksKF+17r7DHC8BoIgsFAfYa5yYd1tbyTeOwXXv33hhqZTJmv0sbvjSYQQTK/KfP3Xc675VjAz63Fwr0ImLXLNmVPaxL9rbsKmWfcZ2BOjc0BjemRjEc/BgwZtPRrjl0xq5fVf4qFpUj/+xla+wh2DGIuROHYUe2yM+okTDB2fw254JFp1SuMNxk+uv3pwJgvEj+1G1FT0nT2Y50cJ15p6WQNSJoHam0eQJfy6iVdc+SzHBgZx5maonX6H4Aax1F/L6rPt1WnaC7SmBsnGeyk1xoEQSVRIxjpQlxFgzZzFdmuk493E1MzSslgUZFoS25BEhYX6CBDR2kJ9mP78/eSSAzSdEsGywoooKIv5yxAvsAkCD11JocnxpchKV9IkjY4l35OtIAh8qs1pPN8in9pJuTGBsyxiEwUZQRBX2I1uDSF+6CIIErKksTV5ks3D8RuMl06iygYCAo7f/FiN+XW0y8wWAk6ftSObXqC5gZzbNVx8u8HoeZODDyb5vX/WzQ//7zkmrlg4VkDghyCALAvocZEdh+P8g/+xizAMefuFCvOT698Tak8Xsf17EfUo2goaDSq/ePEDiRYFVWV56d33AuYuVykMRx0Norx+8GBeGCP7xQdAlkg9eojGycvYVzch6SVLxO/aSWzfQDSaWK5jj6xUfrcmRpFTaaSYEYljLL8WtxDQrDj8bX36FhGGAdOld8kl+hnsfJzJ4kk83yIZayef2rnCeN31m0wU32Kw43F2dT3FbPk8XmCTinXSlTtEpTHJfOWaGGrITOkcramd9OcfQJGMpTYWRdLR1TSTxZM4XgPPsyjVR8klt9PXdj/l+hiSqJBP70YWN25K3Qi15jQTxXfoaTlKEPoUa0MEgY8saST0Niy3ytTC6Vuu5pp2Cddr0pHdj+tbuF4TQZCoNidva8l8I7zAwnPukNvWrxnm5n0G+xV2DapL9rpn3nNu2rxdXfD48X+ao71P5diTafbem+Di23UmrtiYdR9BhEyrwrb9Bv27Y4gyvPHTMi9/bwHHWv9+iN9zFOvSFbxClI4JPf8DWz6LmhYZ0i9i8KF2eo+0IEqRglBzweb4f7y85uNiXZnEGpoktrsPpbOF/B98huI3X8AamiK0167MizGN+NFdtPzup5BSBvgB1vkxnKmValByIomSyaDcc/+qY1ffep36ubUFNDaDO06K+c7DJFKdNGqzFOfew/fWXkJUGhOMzL1Gb+sxdnU9RRC61K15phZO05E9sGLbucolJFGjK3eIXV2fBgT8wKFYG2as8OaKKMZyylyeeo6u3GE6MvvpbrkrShITRj2Li1fQCxzGCm8iigpd2UN0ZQ/h+E3K9XGmFk7TmVt5DpuFF9hMFN7G8x3aM3vJp3ddS0piOlUmim+vKRG/WZhOmZG543TljixeNw/Pt7kw8ZM7SoofZ5y/5FKrrSQpa5Nzyu+9WeOZfzfDl/5JO317Yhx9IsPRJ1Zv51gBr/24xPf+Zpa5iY1XDkG9gTdfiLyfNwFBURDjq9NNtwI5l1t0EIzQf7SFC89NU5mO0jSBH66fU6yZVF88jdrVipQ0iO3rp+0ff4HG6SGsyxN4c2UCywEBxHgMtTOHNthF4p49yC0pBEHAKVSovnxm1Uugcf4c5tUrax/Xur1VzR0nxVx+D21dRyjMnqNcHFqXFEMCpktnKDfGUeRoWWC7NRyvQakxiresAh2EHlOlM5TqIyhyfHEJamM51VXLumvk17AK6GoKSdQQiIoZjtvA9cylLSuNCc5P/BhdSSIKEl7gYjllQKBYv0rTXiAMA94b/+HS1Izn21yY/OlS20sQ+lyaeo4w9JfO2fbqjBdOMF+9tLQEDUIf1zOxvdrSdMh06Szl5iQN8wYRWK/BhcmfAiGut7I1wg8cJorvUKxdRV5scg8Cd4XdwCe4PQxuk3n0gajHTpIixfirYx5N8+b5MN+DN35WZuKyyb77kxx6MEn3Dp1YQiLwQypFj6vvNnn3eI3zJ2qU5m5eZBNkmexv/QZ+uUzoB/j1BpVnn1s3WtS3bSP7hc9v7UuvA1HTEY3rAw7NkoMggmtF12JDrxbPp/76eyidLaQ/fffSyJ/a3YpfO0jQsJZadQRVRorHEON6NA4Yhvi1JqXvH1/l+QwQ370XY9fe678QhCiiDQKqp96iefniLX/nD3XMLwj9qEJ7A2/WzNW6fWHoRzO8zubmb13fxL2JIXZIiO1Wl8bzVnx+WUd81Zxa9pmA2jJnuzD0qa+hMxiEHk27SHMDsrLcypq9k0HorzjGjfADd5Wa9q1DIJ7qJPCcdSdcPm64ctXFNENEEXq6ZA7tUxG3kKby3ZCxSxZTwzYvfbeIJIvRDHQIvh/iOSGOvdq9bz3UXz+BePrdJWXpVS0oN0A0DLTe3qjHLwxvb6ktCCtyioEfcv8f7cQsOwR+SKNo8cv/c+0+RYgatxee+SVBwyL12GGUtgyCLCFnk5BNrvmZwHGxR+eo/OwEtePn1izO+GYTr3y9yCMoKmp75+LffsUixU/w6wVZMdh75KtUSyNcPvvdNVufPm6YmPKZmIoeREm02btTIRHfevLec0M8N4TbnAby63Xk9jxyLkvQaGJdXnvZeCPskVGsq0NLKt23AjmXI37o0NLPJ789wum/v67FGQbrL5+vIWhYLHz3FaxL48Tv2RNFix05xGRsqSk7dNxIImyujHlhjNrxs9Gs8zqE3rx8cVU0KKcz5J54GslYPbq7FXxCih9zxJPtyMrt3UQfNXz60RiPPRwtn2O6QMIQN1V9fr8Q27cHOd9KUG8g51uIxw1qr7150wjQvHSRyvMv3JYvsjbQT2zXrqWfwxB2P9FJfjBFbc7k7I82KdPl+zTPXMW6PIHcmkZuTSMZOoIqQxhFh8GicZU7V9p0686KQ5gmvtlEybVu+bPL8QkpfsyRzg4
gyerNN/wY4fKQu+So6HohM3M+M3M3f0hFCTJ5BccKaFZvboAWT0vE4iLNerT9elD7eqi/9ibewgKCqpH53FObGvMLGs0NbU43g8B2VjRGH/piL3bT4+yPxkl3Gxz+cj+vfv3Spps1AtPBGZ9foXhzKxBjMURNX/E7OZ1Ba+/Embl1Jz+4RVJU1DhGvA1FjYMg4DpNmo05XLu22Gqy8RUSBBFNT6MbLchKVCzwPAurWcQyy5Fo2ybPQ4/lULQEoqiAAIHn4DoNbLuKY9dW7EuLZUll+gkCl8rCVTx37dxDKjuApqexzTK16iRh4CHJGsl0LwgClYWraFoaI9kOYUijPottlkAQiSfa0Y0cYRjQqM1Ev18HkqwTi7egailEUSEMPRy7jtmYX/fc9FiWZKYfq1mkXp1CEEWMeD7ah6QSBh6u26BZn1tzH6KkoOlpVC2JoibJte1BFBV0o4V85+EbHqKQRm2GZv3j4c1yDU0r4N3zNrV6lFdsb5OQpci2diO092r84/+1l4VZl+/99QyTQxs3bz/65Ryf+4M2jv+4xN//zQxmfe37PqjV0LYNIKVTSPF4JAyxkaezaeJMTeGWSrc98hbYdpSbXESqy+DNv7tCvWAzd7lK75/vv6393yqSh4+SPHz3it8JooQ1Now5cutWBHALpBhPddHVdx+Z3CBqLBPJ19s1KgtXmZl4K1Lu3eiASox85yFy+b0kUp0oWhxCAddtUK9OUZg5S3H23LqkACCKMtn8Hlra95JIdaPHMohSFO34noVjVWnUZxm59DOs5vVCRzo7wJ4j/xDHrnP2xNepr3OMnm0P09K+n/npMwy99wNcp46mpenb8QSKmuDimW/Ss+1RsvmdEIYszF9kfOgFFDXBwK6nSaS7CQKfhbkLjFz6Kba5usM+nuygo/deUtkBdCOLJGkEgYttlqkUrzI9/gaN2o3FFoF0bju7D3+FuanTjF35Ba0dB2hp24tu5JBkncB3cewqpcIVpkZfxWysLJ4k0730bHsEI9GOqiWRJAUEgUzLIJmWwRXbBoHP2OXnGPuYkeJjD8UYGnaj0T7gC08ZPPt8k/HJjVmxe4dO54BOvkcjlrj5rO/0sE2+W2XfvQle+o6CWV+bRBunz2IcOoCe3U7oezRPv7shKdoTExSe+fYd0V0MbRu/XMFfVOgpjtQ5/OV+KtNNEq06lanNWR3caZhXr6yaXAksE3tuhqBxe+e0JVJMpLoZ2P1ZMi3bCXyXRnUax66gKHGy+T0YyQ4Cf/38hSzH6Bt8gvaeo4iSim2Vqc1HjddGPE+mZZBkuhcj0cb40AtrEqMgSnQNPET3wEMoaoIgcHGdBk59DlGUULUkupFb9DC+89D0NN39DxAzctTKE6QyveQ7DuK5TYx4G6KkUiuNEU910dq+j0Ztmomrv1yxj0S6m+17vkgy00vguzRrszhOHVVNEIvn6ei9h1giz+jl56iWRtY8j1i8lf6dT5LObcf3bWqVCYLAIxZvJRbL0dl3L7Kic/XCj3Ht6yNSvmdTr01jNgsIgkiubR8xI0ejNkOpcJnlrX7icAAAIABJREFUUX4YhlTLY2sc/aON1pzI3LyIIESr1M52CU27eaGltUvFSEmMXTQZu3jzCujMaDQW2NajEk/LrGrDWIRfrVE//gaCqhK67k0LJ0GjgX2bxLC0L9Ok8J3vLClvX3x+ip5DOeI5jYXxBpOnF96PoaqbwinM4RTnV4XvgizftoLQpklRknU6++4j2zKI49QZvfxzijPnIpNtQSSV7aN38HGS6Z5153rbuu+io/cewjBgcuQVpsfeWCI+WdZp7zlG98CDdPU9gNVcYHrs9VX7aG3fT/fAwyiqQa08xuTIcaqlkaUpGElWSaZ78X0Hx7o9w+21r0Mk3X/xzDdx7Do92x6hb8cTdPTcS6U0zIWT/wXXNekdfJyu/gdIpnuRlTieG92kmp6mb/AJUplequUxxodepFaZWFLaiQjz82RaBvE9myvNBRx7dctQPNmBbmSZnXib6bHXce06IaCoBl39D9LZex/Z1l1kWy8zN/nO0ufqtWma9bmoYVaUicXzxIwc9coko5efW6WwHW62b+QjhOExj/uOalGOMCWiKALNTVgRJNISiipQmt2cKZVjBVQKLu19GlpsfdI1Du3HunA5ioBEkfhdh2i8c/qDmWoJAtzplTm64dfnQYgqzxup5NwIQZEQYxqCIhP1OG2+oh80LYLG9UDH2L4TKZGg/u5pwsVATIzFSN11D+boMPZt+LRsmhT1WJbWjgMIosTc1CnmJk+uGMdbmLuAJGkM7vsNVG11/5Gmp2nrvgtRUpmZeIuJqy/hudd7AX3PYmr0VRTVoLPvfjp776U4ey7KCy5C1VK0dR9F1RJUS2MMvfd96tWVTmKe21xzuXqnEAQ+1dJYRCzAwvwFerc/hiSrFGbOYpolCAMqC1fp6DmKohooqrFIigK5tr2kWwZx3SZjV35BubhSaKFcGGJi+GV2H/oKufxuEqkuFuZr3Pg6FkWJcmGMsSu/WBFR+57F5PArZFt3YiTaiSc7EETpOrmFwZI6Ubishy0MfQLf+aQlB3jxFRNNFXjy0RhNM+R7P24wu4lCyzXdSsvc3DUMw6jZW5SEDTUvtYF+rMuL90kQoA700zi5esrjg8D+z/Vw7scTWFUXURI48PkeTnzj6obRohjT0Hf3EtvVg9KZQ4zHImLcAurHz1H5+VtLP+u9fVFla/llC0PkXAua63wwpGgko8KK7zuU5i+tIMRrqJSGcazqmqSYSPeg6Rk8p0GleGUFIV6D55qUi0O0tO9H1dOkMv0UZq8r6MaTHRjxPIHvUpx7j3r19qpMt4Iw9LGWFU8818T3bSRBj/J3i6TiuSZhGCCKMuKinais6CTTPciyTqU6TbU0utYRqFcmce06qp4imemjVLi8pkdKYfY9vDUmhmyrjGPXMRLtKIqBJKl4wcdH0OF2YTvw/Z82+cGz0T26We6xzaghO52TN6XVIclRdOlaAb6//sahbaN2dmBduYrclodgPfG69x/ZnvgSgQd+SLZ343FCKZck85l7ST60H6U9u2KOeiu4capFkBV8c2V1PQxCWPR+vh1snhTj7dHJWRUcZ+0ZW8eq4Tj1NdsAjEQbshLDahaXoqy1YDaKOHaVRLKTeKpzBSnqRg5VT+K55mKu7UO4McJwRWQWhiFB6CP47oqRxqWISxCX0gmKmiAWbwVCHLuGqqfXPISiJvA8C5UUupFbNI5auU0Q+DRq0+tW6gPfBsLIy/l9kin7qCKbEfny5w3uvVunaYY8+3yTF181cW4ibjQ/6dCs+bT1qvTs0Jm4vHFee2CvQapFZnbM3rAlp/HOaRIP3kfykYcILIv6GycWifHWIagqSksL+vZtyK2tiEacoNGg9uabuDNRgU9QFERdJ/T9Jc+T2qzJ4MNtTL1bIjeQwKqub+8hxnVyv/UI6SfuQtSiZXboB5GgxRar4svtTQG8WhWtsxvJiOPXo9WknEyiZLLY07fnJ7RpUlTUxdYZ11xaw6/GNcJY/Q
+TlRiiKBP4Lu4GlWXfM/E9B0GUopaf5fuQdURRIfBr2NbqPNsHgTAMCW7Ms4UsRnKrv/fy6F6UlMVGaYF85yHyHeuJTghL8keyck1zciV8zyLwt+438wlujs9+Ompm/8t/WSKXEfnDrya4NOQyMrZxE/TwuSbzkzYDew0+9/t5vvMfZtb0aREEaO/X+NwftiFJAkPvNlmYW/9/6c7MUvruDxZV3sPbV5bOZkk+cD/Je+9FTCSW7jVvvkDzwoUlUtQG+kk//jhBvUHhmWcIHYd3fzTOkd/qZ9v9eRpFh5PfHlk3NtG3dUYzz5pCGAQ4E/NYVyZxZ8sETWtLq/8bI8Xm0CXiO/fQ+tTnMUeHI4O5wZ0AWONrrcA2jy0s7K8/mBt+l3UiF2Hx8+FN9rD8L6vyLIJwXSrt/cx93TSyWn3+m1nQXFMch8WIexOFILM+v6aqzjXV8U9w55FMCFwacpkv+JTKPs1myGbU9GfHbU6+WKV/j8GDX8wST0u88WyZsUsWzaqHIAikWmS27Te47+kMe47GKRc8Tr1UpVa6ydRJGN7WuN41SKkU6SceJ3HsGIKqEpgmfr2Oks2uevcG9QairqN2daF2dmKPjtJccDj+9cubSg8YhwejiRXAujxB8RvPY54fiyLF24Q7P0fplRdIHrqL5JGoX9GZnaF68i3c4u3N8G+aFK/lriRJRdwgLyBK2pq/9zyLMPARRQlZ1nFYO9KTZBVJUgjDYFVEGfguQeAjiCKyGsfecnV5cyQiLQm43lkEgbe4xA4pFS4xcfWlm37G9x3CNfK3n+D9w9unbf6b34hz/1ENwxCp1QOKmzCuCgN44Zkivbtj3PNkmvs/m2XPsQTVBQ/HChEE0AyRTF4hkZJo1n1eeKbAyRer7+s7fgmShLF/H4m77yb0PKqvvIp54QKBZdH2h3+wanO3WMQrlVA7OtD6erFHl0Vgm3iUtIEOAALLofyjN2ieG7ntZf9ymMND2DPTSIsyaX69TnCbsmGwBVK0FhVUND297qysJGuoWoK1CMWsz+N5JopiEDNa180rXptQ8T1n1Ta2WcZ1GsiyTjLVTeOGyvPNcG3ZK4rSutGgoiVR1PdnFth1m5jNBRLpHhQljtUsfujV3uu36NZaJD7KOPWuw9y8z/YBhUYj4MqwR2UTFqcAC7Mu//lfTDI/bnPkU2ly7Qq9O2Nc8yrz3JBm1Wf0gsmL3yny4jNFzMYHcw+Iuo5x8BCCqlJ59mdUX3mFwIrynqHnrdBNBAgdB69UBlFEbmnZ8vGkdPQcOVNF7NHZO0qIEH0ftaUVOdcCYYhbLOAU5ghvlvy9CTZNivXqNJ5rISsxMi2D1Crjq3rYUpl+VD21ZntBtTyGZZZIpLrItO6gvHAV/4YGa0nSyLRsR9PTmM3iqsblem0as1EgndtGS8cBygtDWM3NO8y5Tp0wDJBkHT2WpV6Z5MZXXqZlEFVLbtkWdDPwnAa18hi5tt0k0t2ksv1UFobv+HE2izDwCQOPMAyRZA1JVjecJPqoI24I/OZn4wjh9YcqlxHp7Zb5xUsm1drmHurClMM3/tUUL35ngV13RV7QmhG9hBsVn6lhi8snG8xPOh9oV42oqqhdXXiVCs3FCPFmCMxopFCKbdF4avk+6iahc2fz36Kmk37gEeK79y2RoKio1M68Q+WdN5eazW8FmyZF2yxRWbhKS/s+2rruolYep1S4tPT3a5MYa7XjQNQmUpg5SzzZSWv7fsxGgZmJE0vFAlGUae08SL7zMIIgUZw9v4rwLHOBhbkLJNM9pHPb6N/5FBPDL9OoTnOd3AQ0PY2RyFOrTK5o/XGsGrZVQY9lae8+Sq0yvqynUSCV6aOj955VBZ47hTAMWJi/SEv7PlLZfvp2fJqJ4ZcpF4dWLJFFSSWR6kI3cizMX8Rz3q9RqhDbKhOGPkayg2S6d9VUy8cJkhTNOfd2arTnJS5dcTEMgZacxBtv21Rrm8+FBQFMDllMDkXEIwhLAuwfHkQRUddwZ8urvJJvhg3FZNeBX41esIIirdBkvBOIbRtE7+6l9OJzOIWomVzv6iF55Cj27Azm1cu3vO9Nk6LrNJkaPY4Wy2Ak2tl58LeplyewrAqKapBIdROGAdWFUVK5/jX3MTvxNjGjhbauI/TvfIrWjoM0azOEhEuNxpKsUpw5y/TYa6t3EIbMjJ8gFm+lresI+Y5DpLMDWOYCjl0HQUDTUqhaEt93uHDq/1tBirZdZXbibXq2PUK2dSf77/4DatVJfM9Bj2WIp7rwPZvKwgip7Nrf4XZhNgqMXv452/d+kVR2gN2JdiyzhG1VCMMAWY6h6Slk1cA2y1RLo+8jKUJh5hy5tn0YRis79n+ZankM16lHlXJZZ37qFMW58+/b8X+VUKuH/Odv1vjqb2m88rrFq29YyLLAP/nDJKp6eyuHD6HPejWCgMC2EXR9c83TooicjtrG/NrWLW7tq1MYRwaR27NI6TjuzJ3zDde6e7Anx2leubgkjebXquh921DzbR8MKUbFgSuE539I98DDJNI95NoiOXDXbVKvTjI99jqKmiSZ6VlzD65TZ+TSs5jNIi3t+zASbaSyfdEX8mys5gIL8xeYGT+xbhHF80yGL/yEZmOelra9xIwWkpk+BCHKhwS+i+vUMRvFxV696wh8h+mx1wnDkNaOA+hGDiPZEZnPO03q1QmmRl9HN1pIZno3f2m2hJDKwjAXz3yLnsXrGIu3kkh3A0Jka+Ca2FaFysLwqhTDnUa1NMLYlV/Q1f8AeixLvvMgICzOlDc/NoQIEXGZVki5ErBvt4rvQywm0JKVbrcLZgmqIdG7P0lx3KQ8c/MlXmtfjM5dCURRYG6kwfSlW39BBq6LOz2NPjiI1teHMz2zofSP2tGB2tND6HnY41ufEKm/c4n008eQs0nid+/CHplZ17Bqqwg9DylmrIhABUlGVNUNWgY3hy2q5ISUi0OYjSLJTC+ankGWBXYNmuw/MEXyviq5llZSuTeYmixwphVOnxGZmb1+R7lOg8nhl1mYu0Aq282OHWnuPqzSnrfx3Rlmpqa5MmTzxgmBSnXt16vnmXiN13jw8Ai9fd38l2/JVGsS6ZTAsSMB/b11Qm+ehXs9Rsd03jnlMjUd/fMdu8bE1RcpFy9HSjFqjIF+gSP7TFLxKXyvguO6TBd/yfjlafxFYnWcOjPjJ1C0BGZjfsX3GR96AQQBZ1nvpG2WGRt6Ad+zsNeYXW7WZhh67wcYybZFGTYDBHFR5aZGsz6H1SzeMDkUUq9OMnzxpwS+s2IEEkAQBWRNxLN8ZidPUi2P06zN4vvrJ57DMGBu8iT1yhSJVAeKGhXK/MX91z6GghAvvGzy+acMjh5WCUL42Qsms5vwUtkMUq0qn/nTbbz2rSlO/vjm6kPxjELfgRT7H2vlvZcK/PBf37r/dmBZmBcvom/bRuqhh/DrdczzF9ZcSsstLaQeeRi1qwt3ZgZ7eOu5b2d0jupLZ8g8fYzUY4dxZ4rUXj1L6Nz+tTRHrpJ74mkyDz4a6ScKAnpvP5Jh3NaIH9yinqJtlXHny
uzaIfN7X43z2MMaHR0SRiyJJDmE4SksK6RSNXj+lxL/01+UVhSewjDAd2d5+pE6v/9Vg55umURcQJLAdeOUqzHeesfh3/9NjdPvumu+zPKtwv/P3puHx3VeZ56/u9aOqkJhXwiCBBdwFSmRkiyJ2mzJlmTLS+KOEydx1k5n68eTdHdmxp2esXsST/d0Op20HceOe9yOEzmObdnWZkvWSkqUKJEiCQLciX1H7dvdv/7jggBBgBS4SLIkvnr0PETVrVv33qp67znne897+Nj9Bqs6B/n2t2eoi8K/+vUYN+3UqU36TfyuG2NyyuP/+6sC//hP82m05zkUsoOoDPPgx8J87MEQnR0qkYiELEWxbJPp6cO8eJ3F335dcOyE31M9Nfb6ouNw7AqjA3sWPW6ZBcYGXrzodXRdk2JumGJu+R9iuTixhKWYj2RHjPUPdHLooePMTPQse59CuJSLY5SLl7aa/25FOuvxD98tEQlLOA4Ypnjb0t/BngJTAxXi9VfBCNhxqBw5QnDVKkLr1pF68EHMG27Anpyak7VENm8itH4dwZUr0Zub/UFQe17ELV36pEjPtMg98QqyrhG7dROpT91NaEMH5ddPYZwewyte3BfyXAjHXeDGbY4MUdi3l5rrdxLduAUE2Olpsrufw5y88Hyj5eCySFGSYPt1Op//XJzt1+kYpqB/wGFk1CGXFyQTMk2NCnUpmcEhZ9FKfDgk8Ru/GuH3fydGMChx/ITN4SM2+YJHa4vC5o06H7onxMoVKn/2n/M8t9u8YJSvqRJ37gpw521BVnWqDA657NlromkSrS1+Sj0+sfjFNTUSv/8vo/zGZ6K4LhzptTl8xMI0Bas6VW7YrvMLPx9mVafC//bvcpw687OtFZRkaN5aR/36BEpgGUrja7goPA9KFUG8MUBQkghGFayKi1l1idcHKOdtcuN+FhEIKyRbg+hBBdtwyY4ZGOXZ75wEsVqdeKOv31UD8oLFFkmCSFIj3hBAViXKWZv8lIlrz24kwLEu3ht9KbBn0mQef4KEZRFau5bwhg2Idet8yy0hiO7cOae8sNNpintfptyz/BvsuQhvXkVgZRNSUEO4HmptjJpdW4neuAFh2XgVc9bZ+43PrfDCIXKPvTL3t3BdSkePUDlzClnT/OYJy8IzjSsu4F4WKSbiMv/692LcfGOA6RmXL3+txEPfKTM5NZ8mx+MSmzZonD6PTCQJbr8twG98JkokLPH1b5b4ytdKjM+m2JIE3es0vvDv49x1R5B//XsxTp1xGBxamhXDYYnP/FKUiUmX//0/5HjmeXNuRm8oJNHRrsylzufizl1Bfu1XfEL86v8o8ZW/K5HL+8egqfDAfSH+/Z/Eed+NAX7tlyN87vP5i17r5q11JFbEOPX0MHZl4Tk3bUmR7Kjh9DPDWGX/OT2i0bQlRawxDBLkR0pM9WWxygtTGUmGeFuMurUJgnEd4QmqOYv0yRyFsTKyJlPXFad2VQ1rP9hBpD5E9wOdGHn/BzvZm2GiZ95oV9FlGrprSa6MIasy5RmDicMzVLPz9S1ZlWjeUoesykz2ZahbmyDRHkOSITdcYrInjWO++y3F9JDCbb/UTrIpgCRLhGpURvqKNHVFqRRsHv6zE7i24NZfbKVrZxLHEsiKxMDBPHseGqFacKip07nvD1eTag9RyljYpkcoNv+za1gVYden20g0Bf2xnpbHvh9McGxPGtd5E8JTIbDHx0l/7/uEt2whtKYLNVnrD3tSZITj4BaLWKNjlA8d8muJl9lJU//rHyLQVr/wQVVBURWIBC84zW8pVHoHCK5chb6M+SvG6BDWFUSLl0WK27Zq3HKTjuMIvv3PFb7+jRLF0sIPMJ8XvLh3cS0rHpf5wN1B2loUDvVYfPmrpQVkKgT0HbP50teK7Nyhs/P6ADftCDAyWlnys9E0CUWFv/5KkSd/auCeUxCvVgXHTiyO8AIBiZ//WJhYVOLHTxn8/UPlOUIEsB146hmDndfr/NavRbljV5CWphKjS5DrWaS64mz/5fWUJisM75uvFQViOps/0UWiI0b/br9RXY9oXP+Zblbd3orn+T6KkgQnfjJEz/dOY+TmCWrFTc1s+/Q6Ys0RrLJv16SFVQ588xh9PzyDFlJo2lJHy9Y6Eh0xVF2h9YYGHMM/b7Nkz5GiFlHpvn8l3R9ZhaxIeK5ACyqM7p9m39d7KU/5EgpFU+h6fzvhVJC6NQlW3dGKpMoEazRmTuaZPpqF9wApAiiqhOfCvu+Ocv9nu1B1maf/boD7/nA1de0hauoDbLqznp/8TT/TAxVq20Lc8y9Xkhmrsv+RSdbflqK+M8wTf3Wa/KRJ964U3bf6Qmg9rLDzo81EkhpPfqWfasFh58eaueUXWhnuK1CYujIR8sXgVSqUXnmFSk8PajyOHA4hyQqe4+CVSjjZ7CXLdt5shNpXEury+5vxBFqqDkmWcXJZkCTUZC1usYD9VA7ealK8/bYgkbDMiVMOj/64uogQL4bmRpkd23UkCZ59wVxAiOei76jNgYMWu24JcseuAI88Xl1yoprnCY702jz7/EJCvBjWdKms7VKpGoK9r5iML0F2pZKgp9emUBTUpWS2bNYuSorDL0+w8aOr6Ly9lZHXJufatlJr4jRsqOX4E4NYJRtJho0fW8WqO1p5/R+OM/raFJIi0XV3O1v+xRrKMwbHHuvHcwSprjg7f2sjVtnmmf+4j+J4xY9YkgEqGQPPEZgFm2OPDXDm2RFu/ex1hGqDvPjfDlGc8Fcpz0atkiyxalcr23+lm94fnObkk8N4jkftqhpu/r0tbP/0el756hGs0vwPoXlrHa7tsecvD1LJmCi6jKLKWJWfrR/Lm43J/jKTZyrMDFWY6q8wdryEXXUJRFXW3JRkarDCiZezeI4gP2Uy3Feka0eSnqdnWLk1zvCRAqdfyyE8UPZm2fGgP584GFVYdX2cY3sylLM2whNMnqmw9QMNxGr1N5UUARACr1zGukou3ecj851nkSOXL/o+F+bABNZwhuJh3zA52NFJZN0G8i/vwSn6C5laqo6a7TuumMwvixTXr1VRVTjd7zDyBnMrzkciLtOxQp2LCC+Ecllw4qTDrltg43oNVQOWaLawbTh52sG4BAH76k6VmhoZx4GWZoX7P7T0B7durYYQfmTZ3HTxOl1+tMxkb4bGDbXUtEbJD5eQNZmmzSkCMY2hlyfwHEGsKczK21rInMlz9Ef9c6LY088M03FLM527Wjjz3AhG3qJzVyuRhhAv/ekhRvfPr3jnR+aL3sITmAULu+LgGC6e5VHNGlRmFkp5gnGdVXe2Uhwvc+zxQYqzszVKUxXq1ydZ96GVnHlhlNHX5lsr9ahG78Onmey9evqydyJc28PzBI7l4VizBr34apBAWMEsu+eY9UK14BBvCKBqEoGwQmHamrtJurbAmjWhlRWJaK3O1nsb6Nzu6wElWSI/ZeJdpRri24niS71ctdbR2evrVf0FUz1VjzUxhjk2OifB8aoV3LXdBBqbMQYvv1PskklRliEakZEkiWLRo1K5NAGXrktEwr4/
YD5/4dc6LhRm+01ramTfvXwJeB6UL7F3tCYmoWs+Qf/ub8f43d+++Palkkcw+MYfbv/uMTpuaaZ5Sx354RLBuE77zkbGD6cpTfofZqwpTDgZRA0o3Pg789ZhgahGuDaIFlSQNRlFl6lpieBZHjMnrtxJXAurJNpiTPZlsIrzEYjnCmZO5Nj+KyGi9QtvDkbeIjd86auO7xUIAdlxk7buKLIi47kesixRtyJEccbCrLgU0xbxxgCKJuHagkBEJpaaHbJmCWaGqgwcyvPqD8cXmEIUZi4/SpQCAdSamis9vQXwTBO3cIl2fW9iC4/wXLS6BuRgELfsf0flcAQtVYc1c2G/1uXgCuY+i9nBPsvwEDoHnufXbWXZv1NeCBLMNdG73sXf4VKFtUL4+yuXvdnm/4vvwDTEogWjpTBzPEslbdC8tY7+F0apaY6Q6krw6td7qc7WCWVNQVYlAjGNuq7Egtdn+wsUJ8q4loesyCiajGO7VyVqkCQJRZdxbXehI4sA13JRVAlZXdiKZVecy2rvei+h5+kpunYkuPPXVzB8pEDzmii1rUF+8sMBXFtwdHeaBz67mls/1cbUQIXu21JoQf86Vwo2PT+dZtPd9eQmTDKjVaK1Oqou0/PTaTzHX5SpqQ8QqvFNWlNtQapFh0r+wt/HYEcHyfvvv6rnWT12jOwTT1zVfV4JKiePk7qni4aP/jx2xs9k9Pp6hOtiDA5c0b4vmRQ9D7I5DyGgNikTi0lkLyGQqRqCbM6jLiXT2HDhfkhNh/o6nxWnp994sPilIJP1ME2wLMG3HqrwxJMXN0EQwteqvRGMgsXA7jG6PtBOcmUNK25qojRRYeJwGjFLbEbBxCrbTB/P8eJfLR4+JDyBVXGQZAmzaKGHNYJxfdGK9tIHygWzFddyKacNwrVBlIAMs2UkSYZIfRizZGMWz49O3tuE6LmC7JhBKWvhOh4zw1VKGb/2N9Vfxig6TJwq8/h/O82OB5tp/2QbpazFU387wJkD/o+i/0CO3d8aofu2FO0bYwwcLPBqaZxKzsa1Bfsfm6CUtVh/a4qNt6eo5G1O7cvhuR6xlM7Nn2ylrTtGMKoSjqt89N+tZfxUiSf++swF7cakYBCtseECJ+VHGJKuIyn+7094nu/VKIQ/LkBR/K+RJOGZJl6lsrAtUJGRNBVhzH5fVAVJli5JlC0FdWRdA0X2vSIdF2Haixy2LwRzfJSZxx4m0r0ZLeW75FROnaDUdxgnf2UD6y4rUjxy1Ob9dwZZt0ZjZYfK0PDy64rpjMuJUzZ1dQGuv07nWw8tntUCEIvKbN7o3x0PH7G5mgthx0845HIeHR0qLc0K5YrAuQoyRNfyGNk/xYaPrqJxU4oVNzUx2Zsmc2b+Q8oPl0ifzlO3NkE4FSQ7UJjjnkCNNuccIFzBxOE0nbe3svbeDg790wmc6qz1mSYjK5Ivi5njLYFZskiuqiEQ05ljvVkYBYuxA9Osv6+Duq4Ew69OgoBgPEDnrhbSp/Nk+t8eN/OfVdiGx/PfnBfWP/WVgbl///A/nZr798ChAgOHlr52VtVj3w/G2feDxfOE9FAcs5zn8FPTHH5qeonXWvzkS5deG7MnJsg99dTiJ2ZTJL21hXB3N65hYg4P4eTyeNUKwnGRVBUlGkFraCDQ1oZTKJD/6dNUT82fr5qKo7c3Utl/zD+P1gbkSBCj7+LHKmkqgY5G9BUN6O0NqMmYP6bA83ArBvZ0Hnsi47tzj6Uvui+EwJqewpp++pKvzxvhskjxhT0Gv/WZCG2tCh//SIhDh60LtuSdj8lJj32vWtx4Q4CbdgZYs1rl5OnztIzAjTforF2FFyQRAAAgAElEQVSjYRgeu180sayrF7UMjTgcOmKxpkvl7juCPPGkwdHjV4d1C6Nlpo5mWXvPCsKpIK//w3Fc6xy5T8Wh9+Ez7Prjbdz+x9sZfX0KI28RSgRIdtZw/IlBBl8cR3iCoVcm6Lilmc2fWE20MUy2P4+sysTbokwcSXPqp8Nz+/ZcwcSRDOs+tJJtv7SO8UMzSLLE9LEM44fTOIbL6WdGaLmujh2/uZGG7lrsqkPDhiR1axLs+1ovhdFr9cO3ClowSkPnTkaOPn3VXeTtqSnyTz+z5HN6exvhjRux0xnyTz+NceaM361ybsaiKGi1tUS2byN6/Q1ojQ2UDvirvlJAQ61Porc3Yp4aAVki0Omvpl+MFNVUDfF7dxDZ1oXeUocU0BbZ8wkh8MoGxpkxinuOUHq5b8FY07cKl0WKPUdsHv5RlV/+VISf+2gYVZX4xt+XGZtwcV2BLEMkIrNmtUoiLvPQP89Hg5Wq4OFHquy8QWfH9QG+8KcJ/vtXipw45bfz6TrsvCHAv/lsDbIMP3zUYN9+86qmz7YNf/eNMtu36ly/Tec//z8Jvvy1IocO2ziuXyvVNImWJoUbtutYluBr31iebKGSrjL40jhbf2EN2f4CI68u7m+d6Enz7J+/xvr7O2nf2YSqK5gli0x/geJYeW7wl5G32Pulw3R9YAUrbmykcUMSzxGUpqsYOWthrVHA4EvjvPb/99FxcxPrH1iJWbQoT8+XBtKn8zz7xf1s+EgnHbc2o6gypckKu//idYZfmcRzzq6gCoy8RWmyOpf2v3cgoc6aDDvW/PdWVjQULYhjlv0Z3bKCqof9uUOeg2NVFviLSrKCooWQFQ0QeK6NY1WRADUQIdG0lljdSoKR2tm2V3PeDUmSUfUQiqL7ZiV2Fc+ZL22ogQie6yBJMooW8IepWZU3nNkjBQLEdu5Eb2lh5vvf9ztVlvphuS729DSFF3aj1qaIbNtG9cQJjFOnURtqiezoRu9oQq2t8ce0FkqUX+ld+k1lieCaNlKfvINQd8f8ACshEK43T8ayhCTLKNEQ4c2rCK5qJtTdQeZ7z2OPL6F+kOcH3ku6BmcHYl0FXBYpFkuCv/6bIuGwxAc/EOIXfi7Mg/eHmJ7xKBQ9wiGJhnqFUEjimeeNBaQI0HvU5ot/UeRP/0TirtsD3LxT5+Rp3924qVGhY4VfS3z0iSp/+aXiBbWMV4JDPRb/95/n+befreGG7Tpf/3KK6bRLNuuhKBKpWplkQkaS4FvfXr6OS3hw9Ef9HH98ACHAsxcfu/AEU31Zpo/nkGRfuC2E//j5JFSaqnLooRMc/s5Jzq5pCSF8QjyPr8yCxcF/PMGhfzo5v89zxJvCE+QGi+z9cg+yLM0NND9/X47hsu/vepFkacnjfzdDkhWaum5BC9UwdOixOZeiVPtWki0b6T/wfVzboG7FNmpb/TnonuuQGe0hPXwYz7WQFZ1U+xaSLRtQ1CDgUclPMn7iBYTwaF5zG/HGNQTCSVZuexCEIDd5gqkzryCER7xhDfWdN6BqIRCCUnaEydN7sap+GaZjy/2Y5SySrBKONyE8h4nTL1GYurhZhBwKEVq3DieXxR4be8MVSq9axRweIty9ntCatRinTmOPTJJ/Yi96az2Vw7MptRAsKRKWJIKrW6j/5Q8QXLcCJHDLBvZkFiddwM2X8EwbSZaRwwGURBQ1VYPemESOhKi5fQu
SBNPffBI3v/A3GFrTgpMpIskyNTetw8mWKLxyAu8quPBc9urz4LDLF76Y59X9Fve+P8iGbo3GRoUV7QqmBem0S0+vy5NPLw5/hYAX95p87vN5PvXzEXbeoLO2SyUQlCgWBUePOTz1jMF3vl9mYPDN6ZxwHHjyaYOpaY+P3B/ifTfqrOrUaGpQ5uRCfcds+o7ZPPbjS3OjFp7AXUa6L9zFJHih/S13FXg52wpXvGEv7XKP7d0G4TkUM0O0rr8LLRTDLRqARLxxDZXcKLZRpKZ+NfWdO5g8vZdydoRosp3G1TdjVnIUpk4RrW2ncfX7yI4eITtxDBDIsupHfK7D2PHnsY0iyZYNnHntuwjP9ecPuTbhRAst6+8gN36M7MQxND1Cy7o7qOu4nrFjfkqsaCFi9QmmzuxjZugAkqxiL2O6paTIKLEY9vT08odgOQ7IMkrNbEueAGc6h5st+rq5i0AOB0jcu4PgunZAYA1Mkn/2INW+QayxmYU2YpKEEguht9UT3dlNzR1bUWJhojd2Uz02TP6p1xbsO9LdRvXkOHpjHFlXCW9op9w3/PaSIsD4hMc3/7HMT581aG5UCEckVEXCdQWlsiCT9Rgdu/CFe+2AxcnTDu2tCsmkjKaCYfqEOjjsUl2ig+UsBoYc/v0X8sSiEmcGLo84Hcc/huMnbNpaFVIphcCsGUm16vvqTU55ZJYxtOhSICkqibYN1DSv9UcAmGUyg4cpTfWDJBGpbaO2cxtaMIpZypIZeJ1qbhIQaOE4dat3UJw8Q7S+g1CiCbOYZubUPoTnUte1A6M4QzjZgh5JUsmMkBk4iF0tEk62klq1jcljL2KVswAk2jcTSjQw3uMXrEPxRmo7t6FHkgjPpZweId2/H8++fHv3dxpKad/YN96wGqM4TbimET2UYPy4P2gs3rgW2yiRGz+G57l4rkt9506iyTaKMwPE6ldhVbJM9e/DNhfXaR2r7I+o9Rxso7hgTk8k0Yosq6RHDmOWM1SBUE0jtS0bmDz14txscaMwTWakZ3a07jIhBMK2UeNxlGgE+42cy2QZta4OSVUXpKaSKhNc34He2YI0O+aw2nMa88RCm7nAigYiO9cjyTLm0BRT3/gx1aNDS0eVQuAWKlT7BjEHJ3FyJWofvAU5GiJ2y0ZK+44uiBaF46E1xgmsaGDmh6+Q+vAOLihmvkRcESmCrzkcHnEZHrl0YhICcjmPXO7SSadUEux77Y0FrpLsm0ZYlsC+wObFkuDocQd4851wJFmhYd0tJNq6yQ33YZVzqIHI3J07nGylbfv9FKf6Kc0MEqlto3XbfYwd/DGV7BiKqpNo30gk1Uph/BS5kV4kScbzXBRVI9G2wU/nBg5ilXMkV25F0YJM9D6HGowQre9k5vT8XTcQSxGpbZs7tpat92BVCmSHelD0EKoeWjSL590O1zbIT54k3rCW6cEDJJrXY5azVAo+i+jBGNFkG2tu+jTga0BVLYgnXCRZQQ9Esc3ygprkcqFoAVzHWlAftI0ikqLNPmciPBfLyF8aIQKeZWONjhJcu5bYTTfjFkvYMzNLptFSIOC76HR3g+tijc4PmNeaUoS3raNy8MTcfBQnvViXF96yGjkcRDguuSdeoXpseGlCPP84ywaFZ19Hb6+nZtcWtKZaAqtbqByYd9MuHeqn5sa1lA7145YNrPHsVYkS4SqQ4s86amtlPvKJEK+8ZNHX8/b37AaiKZLtm5g++QrpM68tej7VeR22UWTs8JMgBIWxE6y8+ZMk2jdRyflN7qoeIn1mP1Mn9i5YuVSitUiKSn7oCNOn9oHw8DyH+jU3M33ylUXvdT4kWUFSNDzHwihMYxZn3nOEeBa5iePUtm4m2bKRWGolM8MH5+bomNUc5dwoQz2PI84hFH+xxcEyCkSSbaiB6AXTWiEE0hITJR2rgqIFkNV5/8RApBbPtXAsY8HrLxVetUr54CH0tnYi265DTcSp9PZhZzJ41Sp4HpKqIkciBFa0E9m8GTWZxBgYoHLs2DnvDc5MDuNo/0W1iYHVLQDY0zmMEyNvmG6fCzdfxjg6RPSGdX5a3Vq3gBTN4Rmmh+fnO2efOngpl+KieNeTYlOLwgMfDdN/2qHv8mzhriq0cA2KHqSSHlniWQktksAspud7aT0Xs5RGC9fM/VAcq4pVzi0p5fAcC9sozj1nVwrIioakLP1RS+e9dvr4S6RWXU/79Q9QzU2SPrOfam6xxu7dDtsoUC1O0dC5A+E6lHPzkVJu/Bg1dauI1a2inBlGUlT0YA2V/DiOVaEwc4ba1s00r7mFzFgvCFD1MKXM8NwKs1UtoGoh4o1rMas5PNvErORmt6nQuHIn6bEj6MEYyeZusuN9eBdxUF8WXJdKXx9qQz01N93kjyVYuRKvUsEzDITnISkKcjiMHAohSRJ2JkP+2edwc+dEgkIQ3NiJ1pzCLZRBQPXwKYzjgwveTo37xrVOpohnXPqxO5kCXtVETcZQwsEFz0U2thPqal7wWPbpw7ilK5fwXHVSlGVI1cu4DuRz3iK7r0AAausUSkWP4jnaRj0AsZiMrksIBNWKoFgULBWoyArEYhKBoISiSHienxqXyx6mMZ8yR6MSm7ZotLYrpOoUWtrmTR1mpl2sc8pksgzRmEQoJCErErYlKBXFnDfjudvVNchUyoJySRCrkQiHJSRJwrQEhZx3USG4cB2E56HogaWexXNsZOVcl2UJRQ34Te+z6ZLwvAtGcJIkI8nz5ykrmk+QYlb+IOELxGehBBbOuciPH6eSHSMYqyOxYhNt2+9jcN/DWKX3limE57pkx/qIN64hPXQQu1pAllRUJYBTKjBxYg+J1m5SbVtQJY1Kfhyj4Auwy5lRJo49T6ptC8mta7HtKsXMIFYxi+wJQKaSGSU73kfrujtBCKaGXiM70otTKTLU8wQtq2+l67pPIFyX9EQv0wP7547NNku49uX9+N1ikfxPn8YaHSW2cydaU5Mv2I7FOCtZEK6Lk05TPXGS0v79mEMLa4VuOk/h8b1+Dc/1ECydPs/dcYW4POPXuQGd0qJ6oVOoYo37lmFaQxy9Ie53x1wFXHVSjEQlPv//JpAV+OL/VaD/HGG2LMOuu4L86Z8n+NJ/KfDdhyo4DqTqZO77SIgPfjhEQ5OM8OD4UYfvPlRm316LamX+ggaCcMuuIB/+eIiutRrBkL9gMjXh8j+/Vub5ZwxiMZmPfDzEbXcFWbNWJRKV+IM/ivGbvxud28+//YMsPYf8dFpRYOfNOj/3qQhru1UCAYlsxuP5p00eebjCyJA795kma2X+9pspHvtBlddeMfnFz0TZsFkjFJQYOOPw+f8zx2D/hdMEozCDUZgm1Xk9tlHBcwxkRcNzXexqnuLEKepW7yCcaseu5AnGGwglGpk+8TKe88bpv6RoxBo6KU0PIlyHWNNqzHIG17H89E4IwslWHKOMFqohkmqbiyplNUAgmsS1DYziNNnBw7Tf8GG0YOw9R4ogyE+epOepv0R4LpKk0JTcSCzYiCdcMqV+Bg78gLp4F8nIClQgqTdjKTlUOUDCq8UeGs
KRRkgXTlM206xu2IXllNCUIIXqJJMnXkJMZZAllXT6IKnoagJahJH0AYzBM2jhCp7nUsmfWVCfHDz0yBW5S3vVKuXXD1Lp7UOrq0Orr0eJRkFRELaFk8tjT0z47XJLrFJ7hoU9mUZJ1mANTiCp8pKKB7fgH7MSjyDN6hMvBUoighzQEbazSMRtDs9gjvhdL5Ii0/ip25B1latR7LnqpGhUBS+/aPLbvx9jzTqVgTPO3OcXDEncdmeAXMbj+FEHx4GauMQv/mqE+z4aYs9zJo/+wCagS9x2Z5A/+Q9x/sufFXj2KWPOROLue4P84R/XMD3l8viPKqSnPWriMitWKlTKHp4Llino7bGZmfZ43206H3owzI++V6W3Zz6EHzmnNXHrdp0/+j9qME145PtV8jmPztUqH/54iJWrFL74+QKZmXln8GhU4qZbdNZtUJkYc9n/ikkoLFHfoJB/g0UjxywxffwlGrpvpf36+/EcC+G55Eb6yA71kBvpI1jTQOvWe3BtA0UNUBg/RX7s+LKuv+eYqMEYTRvvQNVDyKrG5NE9uJaB4c2QG+6lbvUNxJvX4gkXx6oiz0aWajBCw/pbUfQgwnWQVZ3iZD9G/o0HLL07IeYWPAJqlGRkBacmnsO0i0hIqGqIRKiN0ZkDWE6ZFXU3Eg7UYjsVFFlnOttLruxHWZoaRlWCDE7tBUmiPXU92dIgwnMRErPEKyPhRzuqrFOuTpMunKJiZTlXSHpV6rxCIEwTa3R0wSLKcqAkYkRv2YLe2UrmWz9GTcWRAhrVQwvHilqjacLXdaE1JtHb6rFGppc1egBADgUIrG5BjgRx82XsqeyC58PdbYRW++mzHFBR4uEFmtwrwVUnRduGQ/v96G7HTQFefN6kXPYvRG1KZsdNAQ68as1FkFu26Xz4EyEe+0GVb3y1RD7nd5QcPmjzn/4qyX0Phji432Jm2qOjU+HnPhWhWPD4wufynDnpnJVREY1JmLMDhioVwf59FqoGyZTMPfcLDrxm8txPF8tKIhGJT/xCmHBE5gufy3LksI3r+BFvJu3xqV+NcNsdAX743XO0ipJE11qN//4XBR77oUG14h9zNCZRKr7xh16cOoNVzROI1CLJClo0gRZLAuBaVSb6nieUaERRAzh2FccoEV+xgWCiiem+PYzsfwSjOLPkvoXnkRs+QiU3jqIFcV0TrbGR+rUfwq2UyZ05QTkzTM3arah1dVQLacon+gDQUikMpUKkqQlhGqR7XqUyNUSos4tQcxuuaVI4sh+n9N7rkZYkBUmSsWxfYiMQKJLm31g8G8e15h6zAdupYDkLBceeZ+N41tkdIksy8y4eEoo8/3OcyPaSiLTRUnsdmdIAmWI/gp8NIb1SEwYknOksSD6ByTWRRdtV+wZIfGgnsq4Rf//1mKfHfHJ7o5+IqhC+rovI9jUgSzjpPMbphXVtz7Bx8hVAIDyP0sH+2b+vHG/KQsvQoMurL5vcdmeAv/8fMuXZIT7vuy2AosK+vSa5nEcgAFuu06irVxgfdWnvUGnzx0ATjkhMT7us36iRSMrMTHt0b9JY1aXy7W+WOXF0Pi33PCjkLy+dWLlaZe16laO9Nj0H7Tl1Qrkk2P2cyfs/GOT2u4MLSRE4ecJekNoLAVUrjBYNzH/PPQ+rnPML8eEEkixjGyVcs4JZyuC5DooeBFUlHG0FfA2jGgzjmGWM/BSOWQZJQowLoo2dCOFRmh648AlJvqSkPO0XvWs2bEOJRMi9vhfP9gf7iGQdtmYz9dPvEGhoQW9uhME+lHAEwjrDT/6j745iGuj1TURWrqF0qo9AfTM1G64js++Fy7rW72S4nonlVEjFVlO18zhOFcsp43omsVAjrp5ACA/TmSfN86GpQeLhFiRJxnENbNfAdkzi4VpioUZioUaqpl+bUxSdsplGU8ME9RokSV6gZ7wqkGWUSMRfVNFUvxjvuniWhVcu4xlL1y2F7SAcBzkcRG9vRGuuwx5bfJM2To5Q7RsgtKmT8IYOGn7zPrKPvow1Mo1bMhCWPV8GUGTkoO63+V3XRe1Hb0GtSyAMi+KLR3BmFjrfGEPTuMUqSiSAcDzsTHFZcp/l4E0hxWzGY99enxSvu15naKBKIAC33RlkasJj317Ld+sISDS3Kqgq/OG/qcFZYlBPNuPNrQuk6hRiNTKnT149PWFtSiYakxkbthbJtWamXIpFQfuKxa7b6ZmFC0UAiRWbCSYakFUNxyij6CFGXn2EeNt6wrVtgMA2y2T7DyHJCo0bd+FaBkogNFcvjLetJxhvQFZ1XMsgffJVHLOMa1UvK23Sk3UY0+PYufmaoFoTxykVsPNZPMcmtm7e7NacmcAu5Pw7jSShJWrRUw0Ey0UQAnPqygw836mwnCpjmUOkYquIhRrJlocwy8OMpg/RGF+PrCeZKZ6mYmb8mmFlHMddSCq2UyUcqEWSJKbyx7GcCvnKCLoWoTbWSb4yimXPmhGHGgnpSTzPZqZ4Gk9cXQ2tEo8T7u4m1L0evbkZJRZDUlU8w8DJZjEHBqj09mEMDCDMhRmWPZ3DGplGbagltKULa3CC6tHFZhBOrkT20ZdR6+LoTbVEtq8luG4FxqlRzP4JnHTetwqTZZRwAK0hSWBVsz8BUFUQnkf59VMUXji8qIYaXFFPzY1rUSJ+EGJNZMk+24NXvfIRDm8KKQoBRw7bDA243H5XkB99r0rXOo22FQr79lpMnNPlIkkS5ZLga18qMrLExD7LFHPbS3677lWfwStgaR/CiwjkPW+xVkwIj0p6BC2SoJoZo6ZlDXokQaRuBZn+g1ilLA0bbiWUaJjtdXaZOPw0iZWbCSWaUPQQiY7NOEYJ4XlEGjrIDfX60eIyYJtlJvteoJqfJy7XqKDF4kiKOkeqbqWCGo75EWwihVs5p1Pg3DuDELiVMtbMJNnX9uCZpj8K8z0JQcmYomQsvClYTonh9EK9qe1WmSksrK8BeMJlLHsY25lP8yynzGj6wKJtJ7JHrtJxL4bWUE/8zruIbNmMHFrotq5EIiiRCHpLC6H16ynte5X87t0LidF2qPacxhqeRNJUvIqxtAbRE1R6zpB5eA+pT96OmoqjRIJEtq4msnW1X9cUAiRpsWOO4/qv/f4LvuznPEQ2tGOOpCkfGUQO6dR9eAdKNPizS4oA/accDu63uPWOAKu7VHbcpBOJSDz94+ocqVmWYHzUJRCQOHXC4aUXzIsSXibtUSr6iyDLhZgv2SyJmWmPYt6jtV1BlheK++vrFWI1EqPL9YsUAtcyUXQT4boIz0NWtFlHEAfPdXzRrqwiCTG3yOKaFQQCSfbvjvmR49hlP4062463HHi2SW6kb8FjxZO91Gy4joY778cpFygcPYSVnsLKTlN/530I1yXfe2D29ZZ/mc75DMzpccyGZupuuxfh2BSP92CMD3MNlwghsN3q1b+jXyLUVIrEvfcS3rQJr1qleuYM1vAwTi7np8TBEFp9PcGu1Wj19cTvuhPPsijs3j137GpDLfEP3ex3Yc2mrJWDJzCODix6P2HaFHcfxpnJk/zILQRXNyMFd
SRVmZ1iOf/DFJ6HsB3f3OH5QxSfP4Q9nVuyBulVLV9XqSpIqoJn2CjREMLx8KomnnH5jRpvGik6Djz7lMHt7w/ywMdDdHSqDA+6HD0yf7CmAQcPWMxMu9z/YIjTJ20mxmZXeWU/tRWen0ILAUeP2PSfcbj7g0GeedJg4IyD686vCAv8WuDZ753woFLyCOgSDY0KirJYYdB/2uFor83OmwNs3a7Tc9DCmV1oue3OAI1NCg/9z0sp4J51nPEPwjbK2JU8kYYOgolG/7yLGSRJIh7cSE3rOkK1s3Ums4KRmyIQq/VNAhwLkFC0IOG6NtRQjFBtC0K4ONXleR86hRyZfbuR5LPmtS4gyB3a57ssC+YG/5ROH5sNxefvDJ5RJff6Xn9bWL6RwDUsgO1WOT3xwiW35l1VKAqRTZsIb9qEk06TeexxjBMn5ly35yDLKNEoNbtuI3bjjdS872YqfX04M37dUFJl3HIV8+TwbE+08BddLgBhOVQOn8Y4NUpoQwehjSsJtNYjhwNzabKwHdxcGWNggvK+Y1jj6YvWCN2yQWLXRmpu6PLtyIQg9cHtCASFl45TOvQWDq66FBw5ZDMz6XLPfSFkGf7hG2Wq54mhD79u8b1vV/jkpyN8Vqmh56CN4wjiCZnVa1Vee9ni4X+uYFswcMbh4e9U+L3Pxvjcf4zz8h6TbMYjEpVp71DY95LJ0z8x5ly6XRcGBhymplw+8S/CRCL+6nAwKPHjx6pMT3oYVcF3H6qwdr3GH3+uhheeNshl/Wj0jvcHOfCqxe7nlieUrWRG8Rwbu1rEMcoUxk7iWlWy/QeJtaxFCYQoDB/FLM4gSQq5oSMEYimqmTE/tRUeMyf3UdO6llCiAbOU9UsGqoYWqqE02Y8aCKFooWWTIgDeeXNZADxvYaoMswLvpV6/xLbXcMl4WwmRWeuwDd1IkkRh9x6qx44tPejedXHzeQrPv4CWqiO4povQ2rUUZ0kRIVDiUQKdLXOdKl7FhJmLjAEQfk9z+dXjlF89jhQKoERDyLqK8Pzozi1Wl71YUjzYT/no0hnLlUSJ8CaTYrUqeOFZk9/+gyjTkx4H91ucrz8uFgT/9A8Vpqc9PvThEJ/+dR1Z9mU1J487DA24zAYyuC785DEDoyr48MfDPPhzYTQdLBPGRx2efUosWiw5cdThy39Z5OOfjPDpX4tgWf4iyYu7TaYn/Y37emz+6xcL/PwvRnjg42ECOuSyHo/9oMojD/tayLMQwh9kZVtiUSZUzYwt+Pus4NlxbbL9C3szhXAojp2kyMLak2uWyZ55fcFjnmOTOb2fa7iGK4GsaWiNjTj5PObw8NKEeA7cQgFzaJDgmi60hnNmvkgSwrSwpzJzNbxLdcgWVROnevnOS4GWWpxcCSd79WdWv+lV84e+WeaJH1VxHEEmvfRdIJfx+NH3Kjz7pEEggC8rcX0heLWykOiqFcFTTxjs3WMSDErIst/mZ1lQKi5uK6yUBY8+XOX5n5oEAhIyKqbtkMvOb+h58NorFsf6bEJhCUWWsG1BqSQwzrMvy2Y8fuNTM5gmi1afr+EafqYhy8jBIHY+v+yB8W65AkIgB+fbUr2KiT2RRg7MDp8C3JkrH8N7KYh0t1E5PvrOJMVKWVApv3Ha4Ng+4SwHrgv5nCC/zGlztuUv0mhSgKTaTNYZxT0vlRHCJ7k3IjrPg8mJq5NKRuraSK7cfFX29XYgmKh/uw/hqkINhGi9/t53dN104shuHOMCpRXPw6tWkSMRJF1fepvzoMRiSJKEVz3HocdxcDMLBfye9dY6UDm5MsHORsQ5YwisidxVGUnwlukrZBTq9RVElDimV2XaGsIWJrVaCzVKClNUmTR91X693o4mBdGkAHlnmqwzAQjiaj0JtQmAaXuIspsjJMeo11YgSzIZe5yimyGu1hFValEkDcMrMmUNoUgqzYEu6rUVRJ0kOWeSrD2OJgWo09sJShEqXpFpewhXvPkfcCCWovvDv0dt5zuXFN9t0CNxuu//nbf7MK4I0YYVHHviq0uucnu2jT01RbCzk0B7O9bY2EVTaCUeJ7BiBUKAPTnf6inpGlpTyv93UEdrrEXsOYQzeV5/vCwhB3W/3tCfSEgAACAASURBVHiV4VYtottWEWipRdj+Ocw8+qpfl7xCvGWk2KivJCCHmbaGcYWDK2yiSpKU1sq4eYoaNUVjoJMpa4BGfRXT1hCGV6JWa6HqFVAkjQa9gxlrFFNUsLwqEjKtgbUUnBkUSaMpsArTqBJXG9HlIFPWACmtjaRqkXUmKDppapQ60vYIVbeEQBBXGwjIYbL2BKZXxXuLiuF2tUhuqI/azs2+PVgxe1keeT8LGHntx5RnlrJCe2dgsncPNS2rqV217e0+lMuGGgihhaLUrbke6cfSkt8lr1ql0ttLoKOD+B2345kG1WPHEbY9Fx1LkgSqihqPE9+1i9CaLuzpaSpHj87tx80WKTzjazMlRSa0uQs5Elr0fmp9gqZ/9RHM4WmKLx7BODG87N7n4Lp2ah+8BUlXqfT0k/3RSwuIvnpiDHNkYReNe5XI9y0jxZiSYtoepujOz3MNKTEMr0TRTePi0BHcxJQ1gCsccs4krnBIqE3IkoYuhRECMs44Z5dIA3KEGrUORdIQwsMTLkJ4CDwqbp6CkyYsx4kocTLOGKZXwRYmZTePI/wCccnNElNriav1ZJ1JDK/4loyA9xyLas4XApulHK989Y9wzKvTu/lW46ze8p0Kq5zn6KN/gyS/c4Xp7TvvY+09v3bxjVyXSm8fgRUrCG/eTN3HP445Moo5MoJbyCMcFzkYQKurJ7h6FWoyiVepUHj+BdxzBszLIR29Y9bLUJZRG5I404trisHVLYQ2riS0oQO9Kcn4f/3usqNGYdkoySjBNW0osTDFPT046fmU3a2YKPEwWiqGsF3MsczPdpvfUnCERUAOISHNkY7jWWhKLRIyASmE7fkXzMM9p3dU+G3EOMiSjCbp2MJEQsITDqZXZcjoo+oVkJCRkJBRkCX/f03W5/Z7dp8y875rhldmsHqEpNZEs74a2zOoeG+x4YEQuFYV17ry0P8aLg9+m+Xb78x+uViOrRyAk06T++nTCMclsnULoTVdhNZ0LdpOeB725CSFl17yR6GeAzkcJLRh1dktcfOlRQazAMFVLX63iixTPTEyl+YuB/Z0HnssQ2htO0o8gr6iYQEpBjsbqblhNSgykiwT6mom+8zhq5Kqv2WkOGUP0qSvoiZch+GVmbT6KblZEloDa8I3IIBJ8wzAbOO7wKcx/7+Km8fwyqwKXYcjbKasQYpumhl7hPZgNwKPkpNl2h5C4BFX61klbUUAU45v3+QIC0fYdIauY8YeJmOPEVfrqdfakSUF0yvPRZDXcA3vVtgTE2SfeALj1EnC3d1ozc0o8TiSLCNME3tmhuqpU1R7+7AmJhatVDuZArnHX/RT4VlT2qXSYq25FvBb9ozTY4iLuS+fB69sYM/kEK6LHAqg1ScWPB9Z30r19ASlnkHkoEb9x9+HEg68s0ix5GYZNI4go+Dh4QgLgceQ0YuC
hoeHLUxAcLp6YPZ5GDSO4AoHgceIcQxF8iUAZ8lryhoga48jIeHi4AkXT3hk7AmmrIHZ9/IvlC1MBo0eFFQcbF/97qSpukUAXGyct2CR5Rqu4e2Gm89TOvA6ld4+JE0DxW+7E56HcByEaV5QtqMka9Bb66n2nAYh0DuakDQV89TCurKa9MeiOvmyP4nvUupSQuBk/bnQckBDiS+0JvMsBzkcQKuN+R0tnrfsMcBvhLe0iGKLxSzuCBvnvLTl3O3OjdxcHNzz3EIEHpaYTzvPmnS6wl7w+Ln7c5jfp4eDeZUdSK7hGt4RmJXoUL20so1SE0Ff2Uz18Cn/72QNSiK6iBTlWbdtr2L4bjiXCGFY4HgQVOb0kGdRfO0Uybu2EOpsBFmi3Ds85/R9pXjnVpYvAIEg64zjXW3vuWu4hncR5HAYNZlEDgVn+9ovPjPZLRawxidm/3CRQ0GURBSvYqLVJ3DLi4n1rBO2pMiLXHCWd5CztlhLhJhOrszMo6+ixsN4puNLca6SeuNdR4ogKLnLd5a5ZMgygdY2hGNjjb/3ptxdwzsban09sZtuIrR6lW8ue9YK7g1Iq9JzhPTDDwNgT2VxJtLU/uK9SIqMOTCBsf/YotecJUolFkYKLk8sfi6UqG9+KxwX77yWwPC6Vux0EXu6ALJEdEsHlZPj76ya4rsFciBAYtftGMND10jxGt5RkEMhkvfcQ+S6rX5XSr6AWy7PTtu7+Gs9Yz4SFIZFcc9Byvt6QZL8bpYlOknsySxsBjkaQm+rxzg1umzZjBwOoLfWIQU0vFIVJ7+wSyfc3Ub5yBD2TAE8QWRTB+ZY9hopvh2Qw2ECrW2Yo+9csfI1vDchh4KE1q1FOA7Fl16i0nfUJ8VltDUuGE0ggdaQJLiuA0nXME+PYPaPc74bi3lqDN5/PZIkEbt5I+XXjvsLLstAYGUTwXXtSJKEW6pinSfUFqaD3hDHGJj03XaC+tuQPksSWqqO8Lp16E3NSLqOZxjY6RnKPYdxsvMpqxKLEdmwEb21DVnXsNNpKkePYo6Nzl04ORgi+f4PUD19CjuTIbp5M1pdHcJxMIeHKR0+5BeBzz0ERSHQ1k543TrU2hQgcEtljIEBKieOI84Z5KwmkkQ2bkJvbfWHeqdnKB0+jD01OXc+wc5VRDZtpvDKy/65rV+PrGtYk5OUDh7Eyc62LSkKwY6VhFZ3EWhtRUkkiG7bTqCtfe79iq+9SuXYvOpfUlWCq9cQXrMGJRLxZ+nmcxj9/RhDgwjrmvTnGt5iCN8t3s3mKB14/ZKn+J2FUhsncuNG3EIZr2IQ2rIGFAXzxML50NWTI7i5MmoySmh9O/G7ryf72F6EeXGFh1pbQ+JDN6I3pxCewJ7IYA0vdDwv9QyQvHsLNTvXIoRH9eQ4bunq6HyXTYrhDRuoveeDSIqKm8/hmRZqIklwRQfVU6dglhS1hkZS9z+AVlfnu/laFuHuDUQ2byX/wnMUXz8AroukqoS7uwm0tfkEoaoIy0SNJwiv34DW0Ej6sUfmSTQcJn7rbUS3XOfPps3nAQmtvgE5HKJycnYE6CzZJe/+AEokglss+PNgWloId28g8/hjVM+cBiTURILYtm1oiQRyKIxnWUiKTM0sAaYffQRrYhxJkvwBP8HA3GBvYdn+XXYW52qwJFUlceddRK/bjlvI41YqKMEggdY2tFQddjaDk57v7LmGa3gr4FarlA8eJLJ5M3pzE0426wcelxhhKbEwCCjtPuSPzbi+G62xdhEpOlNZii/2kLh3B1JQJ/mR96E1JSk8fwh7Mut7MboeSPgO2kEdvaWO5AM3E960EklVcIsVCrt7FqXF5mia6YdfRgkHEK6HW6xesY/iWSyLFNXaWmo/8EGEbZN+9BGfVFwXSdVQa2txMrNDqVWVxO23ozc0kH7icSp9vQjHQU3Vkbr3gyTuvAtrYgJzxDeHlBQVvbmZ/J495HY/jzAM1GQtdR/9GJENGynsewV7cgJkmUj3BuK33EblaB+ZJ38y955yKIwcCc/NkFCiUZJ33Y0cCDD98PcwhwZ9LVVbG42f/BTxXXdgp9OzpApKJIqarGX6+9/FHB4CSSK6/XrqHvgIsZ07yfzkxwjTpNxzmHLPYUJdawiu7KTc20PuuWcvcL1ShNeuw56aZPIfvzWn91ITCeRgECf31tosXcM1AAjDIP/0MwjLIn7XXQRXrcYaH8czqv5K8UXI8ewwKwDPsPhf7Z1pjF3ned9/79nvNvtGcoa7SFEUF1HUQkm2ZEmWLUe2Y1t2k7hZigAt0AJuki8FUhQwUCB1E6TZDDdQajtNAsORXce2LMWKpEikFnMRhxR3zpAckrPc2e/c/Z7tffvhDGfjNhyOLDk5P2A+zD3LPffgnv99nvfZhGWS2HkHyg8xOprw+oauOkZWPfKvHMZa2UJyxwb0TIL6J3aReWgr7uUR/KHJKICiCbR0AruzFXNF80wqjwpDij87Rfng1UEcFISFKmFh+avAFiWKiY2b0JNJcm+8TvVc78zNU4E/644SzX+wO7twBwdnBBEgmBin+N4RWrq6SG7ZMiOKAN7wMKVjR1HTaxZBbpLapUtY7R2YTU34I8NolkVyy1bCYoHCgf0zggggqxVkdTY/yWxuwVmzltzrr0XvM32t3uAgtcsXSWy4A2u60eYVyqdOzAZNlKJy5gz+Aw9ir+zEqG+Y9xkXw5UCez2VxmxuwRsZjpJRYzGM+YDRHBvNstDTaTIP3A8wM0/oRqJYPvrejCiG41NUT17AXrcSBPiDY9R6r90F2xscZ+L7exGmQeKuNQhdQ0vYJDavJrF59XXfT7oe5UNnyb3wzlWR5/ebRYmi1dGBkhK3/9INb5zV3IJmWlFp0IKSHn9kBOl6WCtWzns9zOcJJue3HJKVqLHllXSByKLswBsZwZ+8sdtptbUjLJPM7vtIbto8Z0vkLgvDQHOc2ZeVwh8fR82pHVWBjzcyQmLjRvRk8pYrYoP8FMVDh2h47GO0/cqv4mazVE4cp3rxYiTgcWv/mA8ALZGg4cknSe7YEf1IFwrIai367t/EhQ4KUd2xlrRB1/EHRqMmEJqIkqyv951WilrPACPP/YT6J+4h/cAW9LoUmmWArs2mAimFCkKUH+KP5si/2k3p4GnCXHE5b8GiWJQoatMNKWXtJoptmtFN8q8OIkg/uvHaguaWKgiuURMZ1T3P5JMKgWbbkQV2oyaSQiBsOwpqTEzgLZhR7A4ORNbtgvW8q5qKKoUKfIRugKZxy0hJ8Wg33nCW9K57sTu7aP7MZ/HHxsi/uY/q+XO3VAcaE7McaI5D4s47QUoKb75F8dAhwsIim59MPyPph3dgdrahp5ORBReEiKRN+Z3jVI72XPtYpfCHxpn47uvkXz2Ms3EVzoZVGM11aI6JUlHViz82hdc3QuXMpShKvUxdb26VRYnilSiwnkrhj914PxVKtFTqqm16Igm6FlmBC7nJr5RSkrBSQXMcNMtEXi+qr9RM3lX55AkK+392/ZOKabETIrIchZi5DiG0mcDLksUrDHEHB3CHBjEam0j
eeSd1DzxI0yefZvR7z+MNLS3yFxOzVJRSSNdFFYuUjxwhGLvBw3wdSgdOYg2M4mxdT+XwGVTNw7lzzaJ6gSo/wM9O4mcnKb453XlnznP3YWFRZpCbHURoGs769Te0nPyREWS1ir1iJZozv+mktWoVmm3jDtx6fp8KArzBQayODszWthtm33sjw6ggxO7sQkteLc7XOtZeuXJee3bh2FgrVhDkcoTl+UmjSkYzVW86FH6OWxBMTlA8eIDSkW6s9g6MhoYbHxsT8z4gy2WKBw4iTBNzRUf0HRZicX9XzlEoI6suQin87ATB+BTS9dHr00u7qA+ZIMIiLcXKmTOkd+6ibvf9EARUenoi99K0MNva8AYH8MfHCQp5SkePUP/wIzQ8/jilo0dRvofVsYL6PQ8RTE1ROnHsli9SeR7FI90469bR9ORTTBnGjGusJZMY9fVUe3uRtWoU1Hn3EOkdOwgLBUrH3ovWTHQdq7UNYVmUjx+bccOVlCTv3II7OEitrw90jbr7H0RPpykefveq4EiQzyNdF2fdepz1G6Lmm5ogLJVmLGpr5UqcNWvxhrOEpVK0bJBKY69eEx1/LWs5JuZ9RklJracHzXFoePJJ0jt34o2OIsvlaGnqBgLlj41T64nc42Aiakjb/BtPQyiRNZfi6/9ypk0uzn2uVJh86Sc0PvFx6vY8TP1HHkWFAcIwkNUaY99/Hn98HJSicPAAwrLI7NhJZueuKKolwMtmyb3+2rwk70WjFLUL55l46UXq7n+Als99YXoOsUJoOtXz56ZzD6PM+6m9byBrNVJ3301m932oMESYBtJ1KXV3zz/1tMhndt1Lw8ceR7NsVOBTPPwuxcPvXpXtHxYK5N9+i/oH99D+b38dVXNRMiT36iuUjkajSYVhktp6N00f/0TUiikMEbpOMDXF1Bv/HCWxx8T8nDHq62j/7d9GS6cQQmB1dJBc5LGl7u4ZUZTFClM/fhMtk4xyCfOlD2z97/1g0cnb3vAw4z/+IdbKVRj19QhdR7luNEN2eLYGWHku+Tf3Ues7j9ncgjAMwmIRd2gwsrqmf42kWyP3yj8RFK4eoF272EfO9/EGZ3OfVBBQPnEcLzuEtWIlejIFKGS1gjfttl8hLOSZ2vs61d4ezJYWhGGifJ+gMIWXzUZ5g9NrisIwqPb2UNj/s8iNNgyCfJ7alUjxApTvUzi4Hy87hNHUhBAaslqldvni7L3KDjH503/EbG2dWa+UtRr+6Mg1I/MxMT8PlB9QPX9uSR1r3EvzE7NRCllY/vGiHwZuqfY5LBapnr1GIuUClO9F5Wx9fTfYx6d4+NA1t3nDw3jDw1dvkBJ/bAx/EQvEyvOoXeyjdvH61wDMrJd4Q4OLDn6oWo1q73UibUSfzR3on5ePGRPzQVNnVnhcvMwjj9i0tWqEIWSzIT98ocqBgx5hAJoOGzcYfOFzSbZsMdAEnDzl8/zhAmV91nFKpwW/+5U0F/pC3u32+OynHe6+y0QqePU1lx/8sIoMFV/+1RTtHRp797l84uMOGzcYlMqKH71Q5ZVXa0wXs/Gppx2e+VSCv/h6ieMnZ5PgPvGUzWc/neC5/1Pm6HvR601NGh971OYjj9i0tmj4gWJwMOSFF2vsP3D75bP/6htCLKHLW0zMLxzplOD3/nOKTz4Fx46XeG9/gGML1qw2SFEhyNeQErbvNPnqf61DhjWOdHtIJdi102T3V23+4Gsehw57KAWGDhs2GGzeZPLQHgvfV5w6HdDeruM4At9XmKags1PnU0873L/b4vyFgJOnfDZtMvlvv1+HacCPf1JDCGhv09m21SSTmf9EtrbobL3LpC4TeXaJhODf/WaSZz+f5OAhj2PHfRobNdauNdi4wYhFMSYmZnE8+qjN0085fOfvK/zVN8sUi9EyVjIhkFIhJdTXC37tV5LUZTS+8rtTHD8RWWZbt5p8/U8b+PKvJTl/IWBicnb9cPt2k//5R0W+/4MKrhvlY+sG+H6UtgzQ1anz7b8u863/W8bzoKVZ47n/3ciXnk1y4KDH+MTi1yNTScF991qcOu3zB18rMDIaHVtXJ1iugZJLyExeBELM5gF+aJkzdOdDmBYQE7NcaBo8tMciDOHvn6/MCCJApaq4UpPR1amz7W6T/Qc9zl+YXffu6wvY+6bLfbstOjrmP9f5vOSFF6tMtx4glLCwAdTYuOTAIW/m9fEJyVvvuHR16axbd2t2Wa2mOHc+YPs2ky99MUlXp46mQaGgKJU/pJ23hWHS9OCj2G0rGN/7U7yJW08Q/bmgFOVTJ7n8x39IWHw/S4mmW6rHwhvzAZFwBK0tOoWiZHjk+lZZXZ1Gc5PG5csBQTD7ffVcRX9/SHubRio1XxRHRuQ8kb0WU1OSanX+PkNDIem0oKH+JsbTgvWtckXxzW+XCQL4zDMOn/20w5GjPj9+ocq73f5V77MUlt+cE6An0xh1UYT6w4xyXYKJife1t6Hd1kFq7R1RyWBMzAeAVAolQdfFDacOhGH0Z5rzdxICTCPatnBinuffXIQMQ1xV82GaAikhnHu+a1ybY8+/ZqWg72LIV/97gf/0lSl++OMaGzcY/MWfNfL5X05cfYIl8GH3cX+hEbpO+o67yGzZEY2RjIn5AKjVIDsSkk4J1qy5vqEyOSnJDods3GDME0bHEazfYDA4FFIs3bol1tKikcnMl5p1aw0KBcXkpEIp8DyFYwssa74yru7SMYxrK3lPb8A3/rLEf/n9PL29Ps9+fnlE8fbNFyHQbCeyhNSV9kPXvnGaZSMME4RAyRDpulxzdVTT0ExrupROgJLIwJ+x6IRholk20q2hwjnNXXUDzbaj+s4wQHMSUeK0piF0A+nVpptSOIAidGvzu3toOpplTX8WhQy8KKfxiusrNHTHQfoeIKLmFkJDyRDlubONJYRAs2yMTD2JrnWEtSpGKkM4XRooa9U4VzHm54ZS8MqrNZ583OY//oc03/jLEuMTEiGgvl7D92B4JORyf8DefS5ffDbJYx+1eftnLii4/z6bxz5q88JPqgwM3no0I50SfOFzCYaGQopFyerVBo89anPylE9Pr08YwsBASCYj2LPH4sxZH9dVbLvbZM+DFnMrapNJwY7tJpcvh5QrEqWidcxQAsv0SN2WKApdJ7FmA433PYLV0ExYrVC5fAF9Qd0zmo7TsYr6ex7AaV+F0HXCSon88W6KZ47NGyOg2Q6p9Zuo27oLs7E5EtDAp9zXy+Q7/4x0a6Q33UXzw08y+tpPqFw4O3Nsav0mmh95krG9L1Pp62HFM1/CHR3GaGgisaKT/InDBFM56nftQQgYf/NVKn1RvqHmJMhs2UFm8zaMTF1UEpXtJ3/0ILXsACiJ2dhMx9Ofp9R7Cs1Jklq7AT2ZIqxUKJzopnCiG+m5GHUNNOy8n+SajTgrOpGBj922YkaAR197gUpf7+3c+piYW2L/AY9v/nWZ3/hyiue+0cjImMQ0oKFB42/+rsJ3n4+ix9/5boW2Np3f+50Mv/nrKaRUtLfr7HvT5W+/U6G0BEuxfyBk7RqDP/+TBlxXsXKlzu
Sk5FvfLpPPR+c7ddrnxZdqPPu5JA89aFGpRpbjqdMB9+yc9bIyGcHvfCVNW6vOxGSI50Frs4ZuCP7Xny5PbOC2RNFZuZq2J54hLJeZPLAPUKTWbcLpXEtQnK0ZdjpW0v7UZ5GeR/7YIaTn4rSvpOUjTyJ0nfx7h0BGnbzrtt1L80OPU+2/SO7dt1GBj5GpR4Vh1H6MyCLUnQRCW+AK6Pq01Rq9rifTpDZspnj2OJqu03T/RylfOEvpzDEyd+2kYdeDVC6dByWp23ZvtL2vh+Lp99CTSTJbdmB//NNkf/Rd/KmJ6bEEGRp27cEdzVI40R116t68jZZHP4k3MUbl0jmkW6N8oQdvcoLmVBp/apL80YPIafF3R6+RmB4T8z4ShvC3f1fh2DGfe3ZYNDVr+J5iKBuy/4DHFcdldEzyP/6wwEc/YrN5k4GmQW9vFH2empoVxJqrePGlGrp+8xhiqST5s68X6ezUWd2ls/dNxd59LqdOz5p22WHJ1/6owCPv2Kxda1CtKrqPeAwNhTzxuEP/QLRvLif586+XuGuLSWOjhgAmJiTdR6OcxeVgyaIoDJPU+s0YmXpGXv4R1ctR7XF14BKrvvhbs/uZFunN29BTacbe+N60CClKlo2WSFK/fTfVgYt4Y8OYDU3Ub78PdzTL2Bsv4eeu9D0UCNO4tqt9w4sUBKUCuQNvklyzAWfVGtyRLLl330ZPpUl0rYusWgH123dTuXyB8Tf+kbBSBk0jKJdpfeyTpNbdwdSR6ZELuo50Xcb3vow3MRq9R7FA+yc+h7Oik0p/H7JWpdrfh5/P0bj7YYJCnnJfz7wxkTExP2+CAA53+xzuvrF4FIuR4L340vX3qdXge/9vcd9nTROMj0vefufGAc3ssLzGOQXf/5skhqgHhvA8ePsdb965HC1Fq7GBTt3iYnDrDWcWsmRR1J0EdmsH3vhoJA7TBIUp3OEBjPpGIHKHk13rqA0P4Y6PzNY+ey7l3tOkn/4CdtuKSBTrG7DbOhh5+R/wp+Z241Yzc05uCSUJigVUGBCUi0i3Fp1XKcJqFSE0hGFgt3agJ1L4+Rx6KoOeykSHBz7S83BWrYYj+6PXwpBatn/2MyuFX5giKBfRU2mEEDcboXsVQtdJtXYReLFoxiwNO9O47Ods7bIxHY3hC9VlSIwWmMJBFzpSSTShEaoAX7noGFha1A3fkzVCAgQCSyTQhUFab0QIDYKh6X0T0/tWCQlwZZVcMMwaZ/vtXiRwW5aigWY7hAva6yulCCqVGVEUuo6eTOGOj1wVXAhKBTTbRred6WavyXmNYpdwVcyN6yvFbCBmegqfnPs/gBAYqTSa7dCwaw/12+6df0q1QJCljCzJuUgV3YMlFNoD2OlGHvj3f7ykY2NiFmInddpWJxjvr1EphaxYnyDwFBNDNVpXJ2hf7RD4kr7jJarFEN0UrLs7TabJpJQPuHyqhJPSefAzbSRSOqf3T3H5dJn8mE9di8nqu1KgYKCnwtSIR7rBoKHdIllnYFoaAz3RvnPRhU6nvQlbpNCERqA8AuVzsXacDmsDST2DQKMUTjHiXSClN7DC2kCoAiwtQSGM5j53WOtxtBSGMCmFU2S9c0gknnJRank69SxZFKOWWMFso8o5aHPzE6VE+h6aYUZqP3c/KxodoMIgKjAJgihSfLMGrldYoEGaaV51LYvR1iiy7VI4eYTKxXNXbQ/KCxZwlykRO99/htylkzSu2bos54uJGex+BU2HXU82c3xfjqHzFXY+3sTAmTJuNeTep5rJZV0a2i0yzSYHXhhnywP1rN+ZYXzAJQwVmi7QdEGyTsdJ6thJHd0QGKZgz2da8WoS3RS0r03wzj+M0txp88jn2xnsrVCc9DHM6Dn3PMXLr9Q4ecpndDQkpSST4RBNxkrG/AFazE4SWpqM3siF2nsIBOucHeS0LBm9iYosMOT20uVsAcAUNu3WWnLBCApFo9HBiNeHZHnbli1ZFCNXNEdq451oToKwEnWo1iwLq7ltdj/PxR0ewm5fiZ7OEFanrSxNw1m1mqAwNe0qK4JSgaBSJrFqDeXzZ5BubfYN57Qtl76HMK3IwryCpmG3dkTCeIu4o8NI30cpSeXS+XlpPmhaZAkuBaVmfjgW/iAAlEYvceIHf4KdaVra+WNiFlDInkf6PsN9VVZuTBD4EjupM9BToX1tgp0fa+Tc0SKJlI5haSBg26ONHHxpnPNHitPVV1Athlw6WcZOaBx9bZLAV7R22dS3Wrz03ABKwS9/ZTWphkhC3GrIybdyjPXPZpIEATMNGjR0kracthA9FBKlFLqwCAmRhCilEGho6GhCJ1RBZAXKGprQ0YVJoAIm/CEUEqlCwuXKw5nDbYli+WIvqTu20Lj7YfLH71TfQgAAA+JJREFUojZgqfWbMRub8Qu5mf2Kp98j0bWWxl17yJ/ojrpxN7eR2bKD0rnTUcoL4E2OUTz9Hpkt2/HzOSqXz4OUaJaNZjtU+y8gPQ9/chyUJHPnNvxiHunWcDo6Sa7ZsKRBU97kGKXeU2Q23U1YKlEd6ItmzdgWVks75fNnCQq3Pp5U+h5ebhxn5WoSq9dF161p+FOTM4JfzQ1TzcXR6JjlpefdAk/9VjQ5szDukRv1aF5lc/lMhZ/+1SC+J2fTiedMHJjWRCBaChOamPHIlJzduHClqFoM8Wq3aLEJqMkSUoU0Gu1oGLiqgq9q1GSZer2VBqOdlN5AVRZxZYWaLGFrCTxZRYoQBZjCIa03YmkOaa2RmiwR3PIMzlluKyWncuEsuQP7aLjnAVLrNyM9F39qglLPCeyOVbP7XT7P+L5/ouHePXT80hej8aWaRvnSOXIH9s2kqshqhdyht1BhSMM991N/zwMgJULTqA5dpjZ4CQBvaoLJ/Xup33EfK575N0ivRlguUz5/hsyWHbf+QaQkd2AvQhPU7dhNw64HUdPvG5RLVC9dWNL9kW6NwvFuzIYmWh9/Bum5KN9j7PWXqPbfpM9jTMxtMJl1cashd9xbx8vfGgIFo5drVIsB932qhWoppP9Mmf7TZY7vzbH14UY61iUo5QLOHspTLYZMZj22P5Zi9ydbOHswT27EY3LY5f5fasW0BCMXq5SngshavIkzpVAUgnFcFQVFarJMzs8SKI8hr4dmsxMUjHh9eKpGLhhGoJHS6pkKRvBkFYWk3z097XZnKIYTVMhjCYekliEfjJHWGwmUT6CWLoriRnMZhBA39RuFaeF0rMJI1yEDH29sGBCYjc3Uhi7PusCahtXchtXYEiVvVyu4o0NXBy2IEqnt1g6MdCbqWu15+LkJvMmxGRdas51on+lEaz83QVDM46zowh3LEhQLJNfeQVir4A4PotkJEp1rcEeHCIoFrOZWjLoGqgOXZkayanYCq6UNI1MXddT2aviFPP7kGCoM0SybRNc6gmIedzQ773qdFV2ElTLu6ND8NUddx2pqxWpsjsY3uC61bP81P3dMzHLStsahsd3iXHeRMFAIDVo6HdpWR8tO2fNVJrMuhilYfVeaVL1OuRAycLaMV5XYSY3OzSlMS5tZL8w0G
XRuTs0cPzXqkao3qG81Get38d1fjLEESqnrRkVvWxRjYmJiftG4kSjGDSFiYmJi5hCLYkxMTMwcYlGMiYmJmcMN1xRjYmJi/rURW4oxMTExc4hFMSYmJmYOsSjGxMTEzCEWxZiYmJg5xKIYExMTM4dYFGNiYmLm8P8BWOQuVtNScx8AAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAUUAAADnCAYAAACJ10QMAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOy9aZAc6Xnn98uzsu6qrr7vRjeARgOYAQaYezgz5AxvipQoUZTWOryxsi3ZlsMbithwbITXxwdH2BuOWMdGOFYSdyVT1HotiZckckjOQXI4NzDA4D4aQN93133l/fpDFqrR6G5c03Nx+j8xga7MN998M6vyyef8P5IQgh3sYAc72EEA+YNewA52sIMdfJiwIxR3sIMd7OAG7AjFHexgBzu4ATtCcQc72MEObsCOUNzBDnawgxug3mqnJEk7oel7gSQTSmSQ9RBurYxTLb5355IVZEXBdxxg5+vaFIqM3ppANjTcch2/bqNl4kiyhL1URNJV1HgYAK9uIesqsq4hhMDJlVHCIdxiDTmsI8kSkqqgREP4poOTLSM8/90tLxRGi6eQdQNJkhG+h2vWcIpZhO9txx3YwU0QQkhb7bulULxzSESJk5AyaJKGLSzyYhmLOjIyKamdqBTHEx4FsUKNMgA6BkkpQ11UScoZBD45fxmTKq1SNw4WERLIkkxRZKmIAgAyCimprTlnXixTp9KcMyG1YIoaSTmDj0++MaeOQVpqQ5fCeMKlKFapUtqeW3AD9FiKzIFHceoVqgsT76lQ1GMpQi0dVGbHEa7znp3no4xQe5LUk2M42TLWfA7Z0AkPtgVCrbsCskR4qIP65DJeqU50fx9+3UJSFJzVEmoiQuH1S4SHO5FDKqGuNF7NxppdxS3V3pVQDKXbaLnvcWL9e9ETaZAVhGNTW5xi5kffwjNr23gndnAn2BahGJFiDMsHsYVFnSohKYwqNCzqZKQuuuQhKiKPIWkk5QyT3gXqVDCIMKLcx7I/i4uDQKCiAhK98gguDhVRQJfCpKQ2rnpnMKmSkTpvMWeUEeV+Vm6YU2lcpkaIuNSCg0VYSpKSWrnin8bG3I7bAIDR0kFm/2MYmS68uXF8O5jbyHSR3HUQ4XkUJ85i5ZfQk62E0u2EEi1Iikpp4jxKKIyR6URSNITnIEkyuQtvoScyJIb2I+sh6kszlKcvooZjZA4E54q091GZuUx1YQItliI1cj+KEaW+Okdp8vzHV2BKElprAjdfIffCaeSwTvqpMcqnp7CXirR/9WHqE8tYc1mKr11CTUcJD3dQPjGBEjNIPrIHZ6V0fSoAzNksoc40wvPxnXvX5GQtRPrAI7QcfAy3WiR//hieVUPWDYTr4NvWdtyBjxwkRUVPtOBUivjO+38PtkUotku9ODhc9c/g4iAhIfCRkOiSB1nyp1kRc6ho7FIO0C73MuVfBEAgyItlCmK18XntrVsWeWb8cTR0RpT7ycidzPvXbjsnW8xZp8KCPwFASArTL48SkeLYYvuEolXKUZ65jO+5FK6ewSnlkHWD9OiDFC6fQAmFSe0+xOrpX6CGYyR3HSR/8Th2KYtn1gi3dqOEIsiqhlsrE2rpRAkZeHad6sIEqhEl0jmAVVrFLqxSXZwASaIwfrKpkcZ6d+N7LpWrp3FrZYT3MTbBhMA3beRICC0TRwiBX7PRWmIgwC3WgjGWC41CBtnQUTMxFEPHXi0hKRJaJo7WlsAr1alenserWhhDHdgrJezFwj0tTTHCJEfuR/gui6/+kPLkBYTvIUkySHxsTedwZz+dj32Bpdefozp79X0//7YIRYMoVVHEIZDq1z1bKjoaISqiiI+Hg6DuVzCkKHIjxmMLi5qo4HPjD0DCw6Pe2O5gYwuTEGFUtMachbU5RRWDyA1zmpvMCS1SB21SL0LykVHQpRAyCoqs4/kusiSj6zEct47n3dsbSrgObr2CZ9dxq0V818Zo6UR4LmZ+CYDU7sPIqh6stZTFyi/h1ivXZ8Cp5JEUDc+uozlpZM3AaOkk0tGH8H20aBJFCyF8D69ew7NNnEoB37EBqM5fI7XnAWK9I9QWp3DrFT7OhUvmbBa9I0Xq8VHqk8tUL80RPzSE3pag8MoFtJY4wl97GUtAdG8PTrZC/qUzREZ7iIx04psOdrZMdLQXNR3FXszj5itbn/g2kLUQWiyBmV/Byi02rYqP8VcFQKx3BDUaR1K2ybt3l9iWs/qSiyb0jdvxEPhoBPskJFRJx8XGb3z1ovHfzZCRUG88DhUbEx+/MWeouU9Dw8G5YU42zCkh0SPvIusvseBPoKKyX3kEgLaWUfLFCSKRVjLJEepmjvmVdxBie97UrllD0ULIqo6iG/iu03wIhe9vuHohBBKi+XTImk6ko4/q4iRWfoW2Bz51w1gfSZJAUprbnGqR1dMvE+/fR3xgH3Yp994Gez7k8Os2hVcurNuW/fE7zb+d1fK6fW6pRvnkBNZCHoDy29dYP2J7ELwYJcQNv4ePPSSJSNfAB7qEbRGKWX+JQWUfXWKQCkUUVCxRo06VvFimSxkCH3RCRKU4M/44t3sfyqhk5E7qfgUdg5AUZcGfwsMlL1boUgYbcxpE7nBOgUCWZAwitErdqJIGQDTSQa2+SiLWQ6E8TUtyF7Kk4N2jUBS+h+84Te3MrZepzF+j/YFPgiRTnR3HM6uIWDLw9d2gxvmeG8whSQjPxXdsfMeinl0k3jdKtHMwOLbxEDmVAiDRcfRZitfOUFucJNY7QqxnN5IsY+YW8eztcw/cLRTdaGrFm0L4OPV717a2G16pTvGNy7iV7b9nkqyQHH2AaNcgajiGGk+BLBNqaaf3M7/d1PQBVo6/SGXq0voJZDnwLQ8fINzRj6KH8Kwa1fkJytfO4VRKIPwN50yNHiG17ygLP/sOdjlPpHuIxNAYWiLwi9qlLIULxzFX5gFIjz1EpGuA7KlXCLf3Et+1H7uUI3/uGE45T3rsQWL9e/DMKtlTr1JfmrnhhBJaNEmka4BI9xB6shVJUfBtE3N1gfLEeczVhXWuAUlRSe49TLitBz2ZIdI9hCTLdD7xK3hHn1l3PbMv/CecYm7DNeot7SR2HSDc1h34ZH0Pt1rCXJ2nMnMFK7fEnZpL2yIU82IZ2VPIyJ1k6MYUFebFZHAR/lW65CF65WE84TLvX6MgVgBwcZtm8M3wcKiLCu1SH6qkMu9fo9jwEc76V5pzusJlbt2czqZzCgTT/mW65V3EpRRFP8uCP4WLjeNWSSWCt1O+OEE6Mfiu7kc9O4+ZX1oLbghBaeIsldnLCCEamoFHfXUOM7eIcN3mseXpGx4EIajOX8P3HJxKsbFPgO83hadTKbJy8iUkWcFvnK8ye4Xa4hRAoJV6a/PfC0LxFiKZ7ns6tvfI58gMH9pyv10rcum5b+B7dxcI8l2H4tzlO/6h3ymE6+Hk3iMhLUmE0m3o6XYkSUKSr6cJS0iygqQoN4xdn0IsKRrJ3ffReuRptGgCt15F+B56Ik2sfw+pPYdZPvYClenLcKPWKUnoy
RZifSMYHX0k9z5Acs8hED5CCGRVI9ozTHVmvCkU9WQL0b7dSIqCnmpDDceID+5DiyZxSjlig6PIqoYaHUZLtDD9D3+BZ9UB0GIpup/+NSLdQ/i2ie9YwXmSLcE6Rx9g6Y2fUBw/BQ3BKCkK8YHRwGSWFSRZadwCef09adyr9fdFIbH7EO0PPotihJsKgyTJGJlOEiP3YbSfZeHn38dvrPG2X9OtWHI+uDxFiYPKoyz4k6yK+ff8bIaeJJnop1iewbYrpBID5EuT22Y+f5Sg6GF6HngWSV57X6b6R+kYe/wDXNVGOPUyk698B99zEZ7L3MkX8Ow7+9F/kJAUtfmgG61d7Pr1/4b66jzzL/0dVn65Oc53nRuEm0S0b4TeT38dWQuRPfUK5YnzeFYdNRInOXIf6f0P4ZTzzPzorzFXF9adr/2hZ2l/+DNUZsaRtRClq2eoLU7j2yZqJE4o1Ubh0gncWuAk6Hj0c2QOP4W5Os/q2z/Dd226nvoKWiyFUy6w9MaPcStFOh79LKFMF7M//o+BMCbwk6ZGH0BLtFBfmMQqZhGehxZPkt73IKnRI9QWJpn+4Tdxq2vpcJKmN14UKsO/9d8hIbPwi+9TmRlfd/982+ZGi1BPZuj59G8R6exn+c3nKU9ewLdNZFVHT2Uw2nuozU8F67tBi34f8hTfG2y56m2GaRcxV880P+eK73/E6/2EFo6z+9O/j6xqG/YpmkHb6MPI2+jkvvLit6gXljfdN/LM7xBOtd/1nNevAQKXQ3rwAJ6z0eSdfPV7VJYm7nr+9wqiIcSBRsI9QfTbsbdMwZFUldTew2jxNLnTr7J68uWm1mMXgiwEJRwlPXqExO77MVcX2cyVZGS6mP/ptyldO7fOetjMXypJEvWlWcqTF5BkherMFTL3P0FpeYbK5EV8x6K2OI3R2o0WTzeP8x2L/IXjIMS6c9iFFYTrEO0dwWjrRtbWu1SEYyMASfGCYyWBfwdpSbIWQosm8B2L4uV3sIurzX1mdoHSxPlGkOHOfbYfWqFoCROPd2f23SnCRgtdbYeoW3mWVs58ZDVFWdEwUm3rtnUeeJLO+55aP05WCKc7bzDf1sO169QbkfLrmD32HKvjx+9pXfXCUtO0vxmFqXObCufbYe/n/wvC6U4AQokMHfs312Rbhu7DvcGneum5P6eeXwTALGU/kDy4u4USChPr2x24UCYvbDAD3VqZ8uQF4gN7SQyNsXLsxU3zUs3VuQ0CcSv4roNTKTTHOpViY45FhB9s88waQggUPbTuWOE6SLKCYkSRVRVkBUmSkLXAB6rFU0jK3X/nm8Eza1i5RWKD+2h78FNkT72CUy7gWWYgCO8hgPUhFYqCy/6J9+1sna33YVklDD2FL1wyqd0UyzP3HGh5vxFrHyDWOUg41cHIM79z014piE7fBLO0Sn7i7KbzlRauMvXa927a+t54UuqFpdsP2gQn/up/4rotMfD4r5Ho3LVhTKxzkHjH4LptD/zu/8L1a5l85TuUlyaorc5Tmh/fcPyHBVo8jawb2MXVIJiyCZxyEbdWQQlH0WIp7MLKhjFmbvmOc1aDYGHjhXGD1ufbJtddbs2I+Y2/L1nGaOkg2rebaM8woXQbSijc9JnKWqiRLXGHF38bOJUiqyd/Efhc9xwmMXyA6tw1KpMXqS5MYeeX7zrf80MqFG8DWUI2Qvh1C4RAScZR0kmcxRWEefdvfkmSqNt5dD2OpobXf8kNqB2tKIk4bq6AlyugJBOED44iPI/aybOI+vsT4Q3FW+h/9CvrtqX6RkkP7N/ymPmTL1JZnlq3rZZbZPnCa+/JGt8/BA/n1Kvf2XRvqn+MVN9o83PH/sdJ9u7l+hM59InfAKA0f5XstXdwamWmXvvuhy5pWlY1kKRAoG2h+QjfbSR+S4F2ttkY9y7r4zcxOW/N1C8Rbu+j64lfIdzZh7kyR2XqEm6tgu9YKEaY9NhD6MnMna/h9oukOnsVt1okvusA8aExYv17iA/so748Q3H8NMXLJ5v+0jvBHQvF8O49RPbsBQkqZ85gz88RGd2H1taOpGk4K8tUz54B3yd63/3oXV3gC8pvvYGTzSKHI8Tuvx8t04pbKlI59Q5+vU5k7yih3l6E41A+eQI3l0Pv6iJ68D5kTcecmqR28cK6CK2SiBF76hGqb76DqJukfuMLyMkY1qUJyi+8grDsW1zJRmQLV+lsPUg00k44lCRfmsQX600Mvb+X6MOHcOaWsKfnAme5IiP5PpH7x6i+sb2araxqKHoYLRxn7+f/AKWR1qKEwiR79mwY75o1fM8he+0dZo//eN2+ytIkTm37a7w/7ChMn6cwfb75efnSm4TiwQPZeeATdIw9hiQrJLqHSXQP4zkWrbuPgPDxHIuLP/yzpqn9QcK3A1NQVjWkLQSerGpIiorwBd42uQTu1jaQNY3M/Y8T6R6kOP4OK8dewi6uNslK9HQ78V0HtlkoBiu18ivYJ1+mdOU0erqdxK79JIcP0vHIZ9BiCZbfev6OyybvWCgaQ0OY01PYi4u4pWIQ6u/uxq9UqJ5+h9jhIxh9/dQnrmEvLeEWCoQHhzB2DeNks8Tuvz8oR3v5ZwjXw7ct9LZ2IntHqZ49g97RQez+QxR++hLG4BBeqURl4hpeeWOZmmQYqB2tiLqJcXAvvmlS/unrxJ99HDkawbtLoVgqz2BaeUJ6EtetY9pFxE1vSWHb1N85h3l5gujDh/HrJs78Im62QOTw1lra3cBItmE0gg6Z4UP0Hv0ckiSjRRLrTGCrnKOWXR+Vn3jl25TmxvFcq5kesYP1qK3OUVudA6A0e5mrL/01kUw3u5/9PQDi3SO0DB0EAo3oYDjO5ef/EoDy/JVtEzZ3C7ucx61VUGNJtFhyfV4gABJaogU1GscuZnG3MLHfa0iKSrR7CM82KV45g7m6/jeq6CHUSOzWkzQ0UUliU7fPLQ/1PexiFruYozZ/jercNbo/+VXSYw+SPf3a9gvF2sULRPbsRW1poX7pIk4uh3BdnHwet1DAq1ZR4nGUaIzYgYO4xQJyLIbc0PC0TCvV82fxymtqrJpKoabT6B0dAU3TQpBKULt0iejBg0T3jWHOTGNOTsLNvhAhkEI6xugw1ddO4C6tIikyknL3FJGhUArbqWDZZUAibKSpm3lufFd6pTJ6bzeRwweQoxGUeBQ8DzkWRbj3Zm7FOgbIDB9ufs4MHw40lZtQyy+yfP715ufi7EWWzr16T+fcQQDPMfEcE7ta4Nh/+B8AGHjs1wjFWwDoGHuMVP8+Hvpn/zsAk69+B6ucozB9nuLs5fd1rb5tUZ68QOsDTxPfFfjMbmTPUWMJEkNjqEaE7DuvfKC17oJAmN0s0CRVIzY4ihZL3vp438d3naCUNXwbAcr1FCe1WSJ5fRW+bWHnl/HqVbRYcsug4ma4Y6FoLy7irGaJHzmCMTCEWywiaxpqPI4UCqFEwrjZVZREAjlsUHr+TZKPP9FMQvWqFdSWDEw1fFtC4FWrOLkcpWNv4Ztm0zTwyiVKb7xOdGyM8NAw9uIifrXaXItfN8EXpL7+
JYRlY0/PIcciCF/cU7lUe2aMpdXTWLbT+LyfmYU38P21CJ4ztxQ4imNR6mcuIOkaWkcbSiSCee7OH5K2vQ/RPvYYANFMD6n+fVuOnX/nJXLXTmGWVsldO3XX17WDu8PUa99t/p2fPMuBr/73aOE4AIOPfxUI/I/F2YuMv/BNXLO66TzbDeG5FC+/Q6x/L8mRg/h2neL4aTyzhhZNkNx7mNjAXszVBUpXTvNBVU8Lz8NcmScxvJ/knsPYxRxOKYcSiZMY3k9631F820IJhW81C9bqAkami9TeB/DqFexiDklVUfQwtaXpdZH1UKaDtsNPU1uawVydD3IfZRk9niK5+xBaPEV9aeauGIfuWCgmHno4EGpA9dTJQKI7DqHePrTWNtx8DnN6CiQZ37Jo+eznA3KEXKN+9ORJ4ocfIPPFL+MWC1ROncReXsKanaHl05/Bdz2q585gTU0RGd2HMTgEEpgTEwhr/QX5pQrF536G3tuJPTWHX60hhw1qb53Cr9w9/5ymhlGUEFBGlhR0LYp0U3hMOA723CJyKKhXpWbiLgeJqVv5MCVFbebgaeE4o1/8Q8KpNvRoqjnGrhRwbni4Jl75O4ozAduPVc7jmh+eEriPE1Yuv8Wbf/onSIpC30NfJDN8GDUUJtE9TLxzCCPZxqUffQOAemH5XVcN3Q71lTkWXv4+bUc+2Yiy3heYmnJQDVObn2Tl+EuY2XuL5m8HfNcme+oV9GSGaO8w4fZehOeACEzb/Llj6MkW0vsfvuU82dOvYrT1EOnsx3j268176zk2U9//Bk453xwrqzrRvhGi/bvB9xC+AImAaUiWqS1OsfLWCzcQrtwed1fRIkmNl5BA0jSSj38Ce3GB2qWLG8utJGnzEqzNtl9XtW/cvtm29wjtLWNEI+3UzCwhLQ6SxOzCm+uCLWpHG9EjB5Eja9HpymvHceY2OuKj7f3EO4IH57q/CoCGWZGfOodZDFIm5t95iezVk2tjPs50Nh9aBCkkyZ499D/yK7QM3YceSze/q/EXvrkhqHMn0BJpOh79PE4pR/b067h3QNqhhCLEh/YR7uxH1kL4Vp3aYlCxsSkhrSyTHD5IYvggpYlzFC+d3DjmJiR330+0d4TilVNUZ66AJBMf2Etq9Ai5s29QnbsKQhAbGCWxaz/lifOUJ68TbkiE0u3Eh/YRagkUAqdcoDx1kfryHPGBvST3HGbptR/glLeiXJPQ023EB4M5ZEXFs+rYxSz588fWmcqSqhHp6CPc0Y+eaEHWDUDg1quYyzNUpsc3FYi3qmi55zI/SdNIPvEJ7IUFahcvbDXsPYEU0lFbW9YLJEVBzaRwcwVwPWRFBwS+FySSGrE2hO9iVlY3zCfLGpnUCBEjg+uarBbGsez1P9Dwof1o7a1U3z7d9G/61RrCWa8hRDI97P/KH5MePLDhPNXVWeZOPM/yhTeoZefe/Y3YwQeCrvs/xb4v/leoRrS5rboyy/l//L/JT5z+AFe2gzvFe1LmJ1yXyjsn8a33PyKnpBJEnzhK4W9/2MzbknSN2Cceovziq3iFEumufciKRnb2FOmuMdr6H0D4HnOXf0YlN71uPt93yObHyUpXgg2bvCiEaQWpBa6L3/A9bua/DKc7SA8eQAiBa1bJXj3J7PEfAeDUSpQXPzwlZzu4Nyye+Rn1/CKyqtF79PNkhg8RaQ1ehpXlaS7+8E8xtyhr/NhgK0txeyZHkpVGPun2n+Pek7eFwM3nbz9umyEZIeR4FCWdQI5FoNEfQ0nGUTvboBGsiSS7MMvL6EaCVMceVmdOoqgh0l37NgjFSLiVluQudC2GQOC6JnOLb+HfUNEiXBdjzy6M4QF8MyhKL7/8Js705hqf8FxO/NW/oroyi2vt9Nn4ZYLw/aapXJobx0h1cPA3/oR4xyDhdAdaOM74839JLTuPXb03Vu6POqKZPurFpfekjFIJhYm29FDLzb0nz9ZHq6JFUTDGdhM+PIY+0EPq1z+/RiEfi+BXaghzfWVJJBnUx5ZWJ4gkOoimejZM29l6H65roqlhSpV5IuFMEDW/QShak7M4f72+csK/ZfWMwK4WdwTiLzlcq0ZlaYIz3/4/6Tn0DO37HiU9MMZDf/B/sHjmZS784N/9UifOS7JKrG0A1Yji2XUqK1Po0TQd+56gtHiNWm6WWnaWaNtAQBYdjlHPL2BXi0QzvWiRBE69THVlCi2SINLSjSQp1PILCOETimdQ1BDCd3HMCmZxmXCqs6Ep+kiyQjjViR5JIskKZnmVen6BcLqbUCyNohmY5VVqqzMbco+3wker77Pv4y6u4EzN4xXK2BPTWNemsa5OUTt2mvKPX8avBonLtcI8LT0HaR98mEp+Bscso4ZiuJswqciSQqkyR90qsLh6ioCn+yaXg+viV2rN/+VYFCUW3TDXDj6eqCxOcOlH3+Dic3/WJM3tPPgku5/9/bvKkXvPIUtIurJttcdqKELL0CGQpIC7U4ig3FBWEb7b5P1Mdo8STnfjOzbC9wjF0iS69+A5NsnuveixFoQQeK6DrIZoGbyfcKqTaKaXWMcQ0Uwf8Y7hgL1e1Yl17Ap6GSkqqd59GKkOJEUl3XcAWQvRMnAfIBHN9GIk2u7KyP5oaYpC4MwvBRqaBJVfHGuazzcjv3gB17WQJIny6gRC+Dj1ErnyxkL5YmUG1zORZZXRXV/GdirrGmjJ0QjCdZFvEILh0RHs2QW8wi+vFrCDu8fKpbd488/+hPZ9j7Drya/TffhTpPr3sXDqJabe+IcPnJUnOtxBbG8X2Z9fwC2/+3p9z65Tmr9MONWJLMlUs7NY5VXsapHK8iRWudE8TnjUsjNUs0E1TrJ3jFj7IEgSim6ghsIoWppIpg9F1dHC8UCjrOTwrBqeY6FFEgDUi8sYqc61Nbg2tdw8ZmmFcLIDRdXxXZtISzdWJU9lefKXgzrsVvByBcovvHpLR67vORSX1tO5F1eubDp2JRfkBdbNPJoWxXYq+A16JGSJ8IG9+LZD7PGjeMWgIkdra8XNvv8+1R18yCEEtewck698B9essfvTv0+svZ+RZ34PSdGYfOXb+O6ty1BlQyO2t4vKpQV8Mwjq6e0J9EyM6vgSQgjCfS0YXUE7gfpMFmuh4buUQM/ECQ+0ooR1vJpF9eoybrFGbF8PLY/tJjrcjvB8nHyVyuUFnFwVSZUJD7YRak8gXI/axCp2o7WrEgsRGWjFzlYI97cGjPBXl3ByVZAkPMeimp0l2buPwtxFPNvFc02irf0gSVillYAw/gZ2dau8SmVlispSoLBYlQKZoUP4joVTK6DoRnA7fR8h/KbpKysa0ZYeQrEWIuluqtmZ5pjr3Zmu5yha5SxOvdTYJ3GnQZmPpFAEkAwdraMNOWLQtAWEwLo6hbAdEq27NvUf1kqLFJfX00TJsko6MUTYaMFxa+SLE9jXq1l8QfX4afS+bkovvIJ9LQjSRA7vx6/t1BjvYCsIZo8/h/BdWnbdT9d9TzP05NfwHYvJLVh9rkNSZDp/7UEWvvMW5dMzSKpC5ql9aOkI1avLRPtb6fzVI4GmJ0m
kHtzF4nePY87n0Vvj9Pz2Y8ghFadQQzgeTqGGW6yht8YIdSZRU1GM7jSyoVGbzgJV4vt7af/iIZx8FVlVSB4ZYuHbx7CXS4Q6kgz80bPkXxtHNlTwwSlUG0JRRjVigCA/+Q6+G2jC+clThNNdKFog3EqLV3DqayW+VjlLaX4cPZoMehF5NqXFq4ST7fieQ27yFE69HJQMKirC87Cr+SCRW5Ko5RoBTiGorEw2OmhaFOcvo4SizYTvaNsAoXiG7NW377jlxUdSKEphg8TnnkTv60ZJxPDKVeR4FGd+CXtmHmE7qHqYUCSoHJEkGS2cIBRpYe7iixvma88cwNAT1K0Cuhalp+MIU/OvrmmLnhfkREoSosGWXL8wviFHcQc7uBlzJ55n5dIxfMem69CnGHrq6wjfY/qtH2xZBeNVLaoX50kd3UX5zCxKWCO2t5OVn5xB2C6Zp/dhLhZZfeEscikSVD8AACAASURBVEij+2sPkXp4mMXvHif18DBq3GDmL17GzleQNRWvoW3mXr2MYmgkDrssfOcYTrEGfqA9tT57gPLZWVZfOo8S0en57UdpeWw3i997GwAlpFGfzlI8OQmA7zRydR2T3MRGhqh6YZF6YS2PuLoyuW6/8D0qy9fWH5Ofp56/ffuR/NT6XFCnvubCqixPoEdTQTuCaBJZUbGrhbuig/tICkUlHkXraqf4Dy8S//QTFL//PHpvF/pwX3NMfvEihaZGKCHLCm39D6CHNxakh0MplrPnqJlZFFljsOdJpJtjUKqC1p5BCoWaFS3O3OLHvkfvDm4Pu1rg4nN/ju979Bx+lpFnfhdFDzP52ne39DEW3r5Gz+88gdGbxuhKgS8onZ0FIDbWg6wqREc6Av7EsE712jJIEO5rpXp1CXM+cO349Ru0I18EfIiikWPbEIhKWCfUFmfp+2/jVUx806Y+kyPc24KkBf1knHyV2sQyXvXDz1RuV4ssnX+5+Zz6rn3HkWd4t0JRkkB+F2GsLYIkt4UiI1wPdyUbkEMIQf38OOHDY0hGCCo1hO+tezv4gFnNkerYyEXo+Q4drfdRN3PoeoxQKElH6wF836NQmsS0i4R29RO5bwxnebWZMO7lCviV94cUYAcfbXh2nUvP/TmSJNF79HOMPPM7CN9j4pW/29Q3bi0WcXIV4mO9RIbaKJ2Zwa8FvkivYrH6+jjZly82x/uOFwg7x0Ux9K1daM1ta8+tED7CF8jhRt8USUIOBRqmaDyjwvOb2uGHH2LTfj13irsTioqM3ttOeLQPrTODHDHuiaoLACFY/dZP8Ip3L1SE7SIcByWdxFvNE33sCM7iShAdbvzAIolOQtE1MktZUWnpGqNaXNgwX7m6QCLahaqEgnaklVlkSUVWVKRGk3m/Wsd3HCRdh+tmz13yve3g4w3ftRl/8a9QdKPpY3StGjNv/WDDWK9mU728SPKBQbSEwfJzayxJpbMzJB8YpDK+iFusoaWjOMUa9lKJyoV5On/1CKkjQ9QmVlCiIbyajb0a+PM800FLRgh1JgGBV3fwTZfq1SUyn9iLk6+iJsJERzrIvnS+qU1+nHDHQlHrbCH9lSeIHNiFnIgghzSQ5bsmgrwO4fnkvv3zexKKfqlM+cXX8LIFasfPkPjiJ9EHe6mfuoBfDuaLpntId65R0Qvfp15eZmVqY/OlYnmaYmk6SA+QNXzfbVazNKNe0XDgW5yZb7KA+/WdQMsO7g5OtcjFH/wprlUPTOlnfw9FN5h+4x/WRaWF51O5tEDyyCC1qVXs5TW/2epPzoDn0/21h5E1GXO+wPKPg26UxZOTqHGD1mcPIIdU7JUyyz863RSK1YvzVHa10/P1R7CzVZZ/fJbalXmW/v4E7V84RN8/fQrhuBTevErhWODzkxQVz1eQburAB0G6ml+9xwIFWQqKJG7mf1QU5JD+gQUy74gQQklEaf/DrxA9srdBy3PvGpIQAmHaOKtF5v+3v8JdvT0zyKZQFeRwOPB5yHKgtfk+Xr7UMG83El0G17rxeluSuyhXFwmFEmSSw1Trq2QL4+v8EPpQH5FD+/GK5aZQNM+P467m1s2VGXmAI7/3v+K7Nq/+2z/a0BVvBzsAQJLZ96U/ovfoZ0EIrv70/2Xy1e9sTNdR5MD6uVljkySkhutK3LxfopkwLmCjm0oOjlXbWlFSacwLl9a2N54Z4fvNR8XYM0Ls8YcxL45Tef2ttXlUlcihg9SO3555ZzOoba0oyQTW5DTc2G4klUTv76N+evPGatuBd00IEXt0P5H9Q2tfgi/wCmWclQK+5SDrKuG9jZyk2RW8YgVJlgI2m0wCNRENIrdC4MytUnz+GNbkIl7x3rgCpXCI2BMPonW1s+7SfEHxez9paIuCcLyDRNsuVD2C55iUVieoFjbWKrckh7GdKunEEL7vkUkNky9O4Im1H6ibzVM/sz7v0Tffn2ZVO/glhPAZf/4vUXWDrvs/ycBjv8rciZ9glde/ZDf1uysKoaEB9O6uwAI6ewGvUECORDD27UUOG9izc9jTs8hGCK23BzlsoESjWNcmcRaXUFpb0QcH8bJr55OjUYy9u1EiEdx8gfr5i+B5mFevoWRa1ikZcixKaNfQ+p4xikL4wBhKLIqk6zizc5jjV1FbMxi7h0FVcebmsSanUeIxYo88iNrSgtbZgXV1AmdhESWdwtgzss5XL2kaoV2DaB1tuPki9fMXkWQZY98eJF1HiUSwpqaxZ2a3xdy/rVBU0nEih3cjhXWEEPiVOsUX36b69iW8cg3hemjtaXr+5e8iqQrll9+h8tYF8AWSqqDEwhh7+og9fpBQXzuSoeNbDuaVWcQ9Om6VeCxoQ/DmO0FFSfM+iGY3v1hLP917nsJzLFynTiiSJtE2zML4y5RW1zPV+MIlFu1AU8PMLLzBYM+TG84p6hb4PmpHa+CjlqQGge5ObfMO7g2uWWXx3Ku0738cRTcY/uQ/4eIP/+z2yd2RMOH9+7CnZvCqFYRtgywROXIIUTfx63XCB8YCs1b4RB+4H2tyGnt+Ab8W/F79Wh1JlgkND2FebhQ1+D5esYQwTfTBftx8Hmd28xQZYTt4xSLRI4eovnk80GaFwF1eQVgW0QeP4MwHxwrfw83lkeMxjD0juIUifq2GVyojhUI4C4t4lUBB8k0T4bgYo3uonwsoCfXebkK7hzEvXkbv7SG8by/WxCSxh49SPXkav1YjfGAMdzV776b8DbitUNS7W9E7g7eEV6mT+94rlF44hl9bC81LsoxwXCRVwa/bOIu5dW84c3yW6onLtPzmJ4kdHaXlN55C2A7l187eUwTar1u4KznU1haE7az1pBCiGfxIde6lmp9laeKtJqdi28AR0l37NwjFldwlMqlhVnOX8HyHan1lXZkfQGhkAGN0BLUlhT27GPw7NYeX+3iyoOxge5C9coIrL36LXU/9Jt2HngEkxl/8Js4tCGeFZePm8ui93ViTUwjhI0cihAb7g6wL01oXBPSrNeypGZyFtbxBv1LBzWZREvHmNiWZxBgeAklCy2RQEnG2SncWto2ztLKuyya+j7uaRe
/rwbw8jjUxFczV1kZo1yCSpqEk4sjhMF4uj7OaRVJV7OnZZv6vqJu4KyuEhgYai1JQWjO42RzWlWvguEQO34c1MYlvWpiXxpFUlURfL5Km3fX93wy3FYpqSxwlEdT8WhMLVN44t04gBjdDBInM4RCSrm6IygrXw55ZJvc3P0XvbCE02EX6y09gXp7BWbr7UjlJVVBa0ygtSdS2lqamKISPPTXXEJQudr2E69SDukcP7HoRLbSRxKFcnadcXXsjzi+/vfGcoRD23BK+ZVN5+Q3Ch/YHLNw72MG7gO/aQd9q32P4md+l9+hnKcxcZP7k81seI2yb6pvHUFIp4k8+jvAF1rVJhGlROfY29uQ0KEpg2WTSCNddL7w2g6IQ2jWIX61RPX6CxGcCAX1XkCVCQwPI0QjVYycQtoMcNgjv20vtzDn8apXYJx5bm1b4QbDlVml9QiBsByUaAVlGjoab7FTCcYMgjSxvG8EF3IFQlCMhpJCG8H2c5fymgREhRNMUlsOhta4FN8FZzFJ+7SyhwS70vnaMvf33JBRRZPxqnfJLr+PML64/WePLrxTmaB84iqzqDYacKKmOvVTyM6S7gmZRjlXdwK24FfxqDdkXiLpJ4gufQg4buMsbWbx3sIN7wdzJFxh4/KuoukHP4WfIXjmBVc5uOlYOh4l94tHAnyeBX60iTJP6+YtE7j9IeGwUN5enfvrclucL799H+OAYaqaF6ENHqJ85j1coEt6/j/iTjyMboebY6AOHCO/bC4BXqVI/fwGtrY3w/lG0ro4gCHP5Cl65QuzJxxC1OtKjD2HPzGJdmcDN54kc3N/o/7wmvbx8EWV0D4lPPkXt7Dmc2XlCI7uIHNyP3tdD9OGj1M9dwJlfQO/tJvm5Z5EUheqJ97aJ222FoqSpSKoCvsCvms3E5XXwffx6IL2VeKTRwW+jv1A4HualGbxKHTlqEN4/SPnle7hA10OOGKR+/bOBQ/Z6gqkvKHz7OfxSBT0UR9WjpDv3IXwXSVaQFZ1Yuo9YuheAWmHhjoWiPTkTpCBpKnpvN75pBQJ5BzvYBrhWnYs/+HeMNVpZHPjqP+fEX/2rTdndfcui9vY7oCgI18UrBuk69YuXsRcWkVQFYTv4polvWZRfeX1Deos1NYOzvAKyjLAsfMvCvDwemNhS4DP0Gw3ZzMtXsKYCdhvfNMHzcXN5aidPUz97Ad+ygtYcrkvh759r+hd90wwY+t86gRKNBMQNrotopLI5S8uUXnoZSVWbPkVnfpFysYj0uoJvWvh1E79ao/zyq8iGgXCc4HolKP7o+aB4w7QovfBzvOr2NHm7ffTZb4T7JalZ8nMzhOfjNXgMtbYUkiJvWf7mVet4hQpy1EDrzGwx6jZLqtUp/+QXmywk0OQAVmZOsDrzzi3nEXdRpCc8H7UlhRw2MK9MrHuT7mAH7xrCZ/nCG4TTHez93B8Qax8g0bOX4swm/Y98Hzeb27jd8/ByGy0vv7xRWPiVCn5l/XZhbz7vdaG7bqxl4W7SisRd3kjNJ+p13M1yen0fr7De8vRrtWYw6OZrWHcdgrVjhcArbR+F322Fom/aCNtBCoeQY2EkTd1AhCBcD68QLFjryiAZOtS3qJH0RTMwokTuUbBIEsJ1cRZWmtHmm2FEW0H4mNUc29HHQetsI3z4AFp7K7n/9H1CI0N4uQL2Fu0IdrCDu4cgP3WO0vwVEt0jDDzyZU5vJhR38J7itkLRK1bxqiZaxEBNxVHScdzl9W8jYTvYizmEECjJKMbuXqpvbf5lyoaOEosEx92QUyTLoOuNxHAp2LdVGqCSjBP9xEOU/v55vC2EYqbnAFY1j1nLr6stVTSDSKY36L0soF5YWKMhugXU1hb8Yhk/HgMhUOKxDf2oP6pQ9PCGRPeO/Y/Tsf+JbT/XxCvfpjy/xmvp2uZdEYD+sqM0N0558RqJ7hEkRUFWtDumvHo/EOmMYWZraPEQyd2tlCfz1Jd+uXqT31YoOst53HwZrS2F1pFG723bRCi62DPLCMtBNnSSzxzBujqPm12vGku6RnjfAEo6BrCuxO/ogxqPPRHCMCQkCVZWfL7xp1uUAEpS4Oe8hQIoyRuj4ACRTC+RdDeqEcWpl9CjqTsSir5poaQSyJEwoaF+lGQce+b2NEcfRsTa+5sknrKisfcL/yV6ZD17kBqKoBqRbT93omc3vrOWh3f5J/+Bej7wzVZX53DNHYKNWm4Rz7Fo3X2E4U/9Z0y8/Dd33OtH0hQkRWk0V7szyOEQakss4CaVQFgObrGKV6xtIKsY+NI+pn5wkY5H+gm3x2g71M25P3uz+SzKegg1HvRL8WoV3GoZSVVR4ykkRcGrVfFqFZBl1GgcSZaRND3YXq+hxhPIigqKglet4NUbnI2xBIph4Ns2TilQdJRILIgVNHyNTqmApCio8SSyquLbFk6pcNddBW8vFJdy2HOrGCM9qJkExkgP9TPX1pvQQmBPLWLPLGHs7iO8f4jW3/0M5V+cxp5fRdgucswgPDZI8vOPBHmNQmBNrAmVz37e4Cc/tliYC0xr27kFq3a5iruwgrF/T5NU9jq8Yhl8n+LyOKmO3cTSvdj1Ete/NVULYxaXCIlWPNtEUe/MhLenZlFiEYRlEz44inVlcn3f6Y8Aou39tAweZPCJXyecar/t+JXLx7a1TDHZs4dk73qWovu+9i+af0+/8Q+Mv/D/4Nkf70qhiZf/BjUUYfDxrzL0id8ge+0dcldv7R+/jsh9u4juH6B2cQZzfA63UNlaeZAlQgMdJJ88QOzoHvSuFkDCLVSpX5ii+Iuz1M5M4NfXBKwSUokPpAklDa595yx7/smh5j5JUYmP3o/e0oawLerz07i1KtGhvRidfSB8fMukdPEUwvPIPPw0Xr2G79jU56aw/CVaH3sWtxIIUq9aoXjmGEo0RvLAUXzHRlJUypdOYy3Pkxg7jBqJ41k13HIRt1ZBCUeIDu5BiUSRNZ3CyddwineX4XJboShsF/PSNLGH96FEDCL7hyj//J0NqTT2QpbauUn0gU7kkEbskf0Yu3txlvJBf5OIgd7Thhxt0IxbNrXTV5vHZ1d9Fhc8pqdvX+UihXSMA3sClpxcoSmghe+T/+Z38IpljGiGTO8hUp1juHaV6zXetdICC1dfxXNtjGT7HTelF5ZN7eRZ6hevBgEd03oP+9puH1r3PEj76MMARNv6SA/s3zCmXljm2s//vw3bs1dOYha3r39xvGuYRPdI83PX/U/TMniw+bn3wc+jReJ4tsnCqZ+Sn9o6peSXHbNv/5iBx351I6/nbZB8+j4Sj46R+MQBcn//BvkfHd+SDDnU00r773+ayFg/srYmCrRMHPWx/RgjPWS/9yrFn55C2MEcpYkcbUd7yJ1ZxKnYeNba3LKuE+0bZvnnP8CtlAKCFSNMuLOPytXzWNll0g88TqitC3NxFpCoz05SnQp4T2UjjKTqlC+fwTNrpB94Ai2ZRk+34lbL5N9+hejgHuK7D2AtBwqVWymSP/na2rOoh/AdC9nRUNOtaOnW7ReKANXjF0l88jCSLFN+4xxee
WMkSVgOpZdOYAz3EB4bRFJktPY0Wnt6/Tgh8C2b4gsnsK6t0XiFDIn/8X+Os7Li47qwvOTxb//N5qaUX61R+O6PN+4Q4DVSD0qrE5hv/+2GIbIWQlF1KiuT1HLz+N6dmRlaVweSpmBPf3hNZklRMeIZeo5+jtbdRwAIxVKE4i3NMXatRD23yMUf/mmznMxzLGrZ9/66ygtXKS+svQhXx4+jN5oRdR58kv6Hf4Wu+54GoHX3ESors5z/3v+FWfr45YMKz8WuFjESGfZ+9p9x+m//NdWVW6ePqS1xjMFOkCXUdDwow72ZgaYBOWaQ+fUniB4InlUhBMIK+BMlTUXSFPSOFG2/9TT2Qo7a6aAKbP6n11h5ew67UMdzfK78zek1TbTRG8UzG/JBCJCV4Jm/nkDu+8iNemnfMnFrNz3jwltLNhciSEKXFYRjB2k+lomsN6w738cpF9cEoiwTGdqLGolSnbyMEokjyZtnzNzyPt7JIK9YZfkb/4iXLeJVzS2Lrp2FLKvf+gmZ33qG8Gg/sqGv8+sJX+Bmi5RfPUPhuTebuY0A3/zLKqFQkPUtWEeasQHCdrCvTDWuQGmkDa131lu1HFYtB0iNHrEeIIi1DaLHWjBLq81+EncCta0FKaR/KIVitLWXaFsf4XQHI8/8HrKirPsxFGYuNokG5k48T/bqyS2p8N9PWKUsVilIUK4sT+HaJomuYcLpThJduwglWjn4tX/B1GvfpTBzEbvy8WkUZhZXOPN3/5qxL/+3xDoG2felP+T4X/zLWx4TGupEiRlIkoS1sIo5tbTlsxo/upfY0T1NPlRrYpHSa+dxc2VC/e3EHxlF72xBSUZp+cJD1M9PI1yPSFccLaYT6WjEBSwPKxcIQd+xsbPLJMcOY5fyuJUSdm4Vt1Qg3D2Anm4FRcHK3Zi2s359kqoRGRhGuC6+a+OWi+AL4rv3E901Sqiti9rU5g3oaDQmljUdNZFCjdxbC+I75lO0J+/Mf2ZNzLPy7/+R6NFRQkNdqC1xJFXFr5vYCznq5yaon5vYUCpo1uHBh3T6+mRWVnx+9tKtBZYci2Ls343e1xV0Jpuex7xwZV2SaiTZTbpzFC0UxbGqFJYugQSJrj1BL1jfo5abo7p6+wRur1zB6B4mtHe4mQvpruY+0OZVvUc/RyjRSnpgPy1DB9ftmzn2w6bAWTz7yh27CT4oCN9nomHCJ7pHaN19hL6Hvkh6YIz0wBgLZ35OfuIss8ef+4BX+v4hP3mWpfOvsevJ30RSbv+o6p1pJD2o/7WmlvEKm1taSjxC4on9TaZtZ7XI8l+/RPVEQ9goMvbcKu3/+WdQogbGri707has6RUSwxmiXfGAgKIjhpWvU7i80mD9timePU6kfxg91RoQVfgepctniA4MI+sG1WsXsbPLSKpKdWoct1petzavXgvkpCRRGT+HWynh1arImo7e0oq9ukhl8jIA9flp/Bv9z75HdeoKkf6gJ3Tx/Amcwib5nLfB9vdoEeAs5Sk89wZKLIIcCQVZ844bqPNbRMU++wWDWExiasqjvUPhS182+NY3Nxc4khEi9uRDaP3duIvLgET4wftQ2tJUfvpGEAxJdNCz5ylcp45jltFCMbr3Ps3y1HGKc+eDZt3CxzHLm55jw2W5Hko8Rnjf7qaPpvr26fddKEbb+tnzmX8KQKp/H1q48cZ2LITvkbt2itm3f0xh6vwdRyw/bCjNX6E0f4XK8jRjX/ljVD1M18GnaN19FFlVmXnrh3fViOiXAZIk3zY9R03HkbTAXLUX83iVzQNWkQMDhAY7goCn61P8+Wlq56bWBng+lZNXiT88TezoHuRICGOkB2t6hcVXJ5GUgFxaS4To/+z6wJlTylM8u57I2TdrlC+dWbdNOA616atshurEJZzimjATnktt5iq1mfXjzcWZDce65QKlcxsbad0N3rvGVb7AK1XxSneWYrF7t8pf/vsqKys+0ajEH//z2JZjlUQMra+L8nM/w1kIAgFaTyeJzz5JLXoaz7KbrQeWJ9/Cc+2gcdXAETI9BykVpzESbTj14pb1pTfDmV+i+IP1nQD99ylPUQ1FiLT2ICsq+770XxPvHArO7zqU5q8EWtYrf0dx9jKebeKavxx5Y8sX36Q4d5mug0/RefBJYu0DjDzzeyh6mKnXv/+BN5Z/P2AWV3HNKomuYYae/BpXf/oftxyrREKBf9D1cAuVTQMscswgengEtZEWZ04tUX7rEsJaL2y9YpX6+DzRI7uRdLURmQY9aaBGAm1UCalo8W2s7PJ9nFL+A3ftfGi6+S0uenzymRDz8x5tbTLLS7fQBBptELxiuZmO4xVKDcaNwEeiaAb18gquEyQHe76Lbf7/7L1nkF33meb3O/ncHPp29+0ckHMiCUYwSCKprNFIkzwzXu/atWtPjavs2vI3V/mD7V27yvbO7pQ8oXZnPTMajTQaSSOJYhAlMYBgAJFTowF0zt03x5P94Vx0ALrRDaApkhKfKhDEveeefN7z/7/v8z5PidbmXvKZa0yefYlQqptwcy/Z8vpDbEFVGs33/shQDAURZBlvg4Wae0G8Zw+R1l7CLd10PfT5xc/Lc6Nkhy9g18sMvfbtjxS5d1PhuRjFDCNvfY+xd37Ezs//SzofeJ5tn/5DAIbf/O6vPPF74uRPaN31CE1bDyHr6+TIpIYCve0sVotvhdbVQnB3d2OU6FC9MIwxcjvtanlgFWQJKepzVluPdhHujOPhIYgCMydGN6NhDADXNMi+84vNWdl94CMTFF95qc6xpzS2b5epVD1e+snaowCvbuA5DqFHD1O7MAiiQGDfTtxafbHLpJKfJtm+G6tewqjmUPQIybY9FDMjKMEY4eYetGgzzganmGpHG4IiUb/i5120/m6cQukDa/OL9+xh9xf/O8ItPSs+rxfmufyjb5DfRLqKpkEqJTI7696xwLVZ6OyU6OuXePONjb9QXMfi2qt/jaQGaNv/JH1PfI1AIs3clRMsDN7uu/PrCM9y/MKKKC6q5C+HIEsEtnegtvqMEDtfoXpxZM0A6tkOnu0gKDKi6oeKsZcGEaSGdajl4pq/emmMj0xQnJ5y+d53a2iagGl4yPLaAmlOsUTpF28TfuwIid/6PAhgzWUov/YObtkPcrmZy8iKTtvWxxAl1bcrnRlgYfIcgUQboaYujHKW4szqeY3lkGIR1O4OBFXBrRkIooja20X96vq/3ShEWUUNx+l7/DeJde5ECUYJxJsxSllc22Lm0nFmLryOY5lUFybuaRuSBJGogKoK2JZHPu8hCNC/RebpZzRefKHO3LxLpeyh6yArwiIjoFh0sSwIBgVCYQHPhULB/0zXIRIREQSoVDyqVY9QWEBTBRzHQ5IETNOjVPLQNJ+5MTmx9DAFAhAMioiiTyLI5Vxcd2m9kuwbKGYyRQZ+8uc4Zo32g5+i88izNG9/gHPf/rfkxy5v1qX4yMGsFPBch7b9T5IdPs/8wLurLneTgiMoMlIs5I8cl4k4S/Ew0cf2+Co6rocxNkfl8uiq6wL83OFNr5dGFVtLBOh4qp9Aaxiz
YDD12hDF4bsvZnyUsaGgqLQ3gQd2prDmW2UzYJlgmR6CCF/5TZ3vfOuWIsZNL1vXw7wxSnZkAlHXQRRwq/UVPB7XNpkdeY/58TPIsobjWDi2gRqKo0ebkbQgKqAG56mZdx4tSokYamcaMRjwhWU9D2tmbtM6WuI9e0j27afv8d9ElJVFOk1pZpgL//h/Uc1O4zn2fRcXtm6T+dKXdWRZYGrK4QffqyGK8OnPaBx7UiMcFnj7bZO33jR58KjKAw+ouC6USy6vvFwnm/X4yld10mkJXRd48w2DN1432b5D5ulP6cSiAsPDDj/+YY2vfDVAV5cEAhh1j0rF4xt/WqG3T+brvxWgXHb5d/93BUmCZz6l8enP6IxPOETCAn//rRpXB2y+9JUAnZ0SO3fJZBZc/o9/UyKfL3LlhT8DoOPws2iRJMn+AxSnrq8r4/9xxdWX/yPJ/gNokQRKILrmcubkAm7dQo4q6FvakBNh7IUl9Zjwka3o/W2AT2srnriEV1vjnAm+ToGgyL6IS+O57/n8TkrDOWbeHiOYjtD7xZ2c/w8nNm0K/VHA+iKz4QDJrzyBnIpjDE1ROT1I7erYvRvZ34LePglVFdi+Q6Y17b+VRAH27FNWBEWpKY7S0YYxcB1EESXdjDkysaYZvRZM+EZalRxmg4+oh1NE0luwjQoL108SSnURTLZTy985uJkjExTqryNI0qa29kXattCy8ygdR55Fj6YAcB2HsRM/zzlODgAAIABJREFUwKpXWLj2PuXZkU3bXjgsoCgCJ94yOPW+Rank38k/eaFOJCLyZ9+oUCz6n0miv/w3/rRCJuNf6527ZD79aZ3XfmHQmhZ54kmNN143KeQ9bly3Sacltm6VSTVLeC68f9Lk0GGVN98wOfaU39c+eNXmp6/UeeLJRoJegGDID9J//o0Kn3lW4/BhhasDNg88oPBv//cyTz6lIslQrfr75jk2g6/8FaKi0n7gGfqP/RaOVWf0re9v2rn6KMG1LW5GnZZdD7Mw+B7mKnYFtetTuJU6XiRAcFcP0cf3Uvj5Wdy6SWB7J4nnH1xUuTYmFqicG1pzm4Kq+HQ6ScQ1bZzyTS6iQ+bCDLW5MrX5Ck17W+/6eBQ9QrRtG7ZRoTQ/sqkFM1FWEQQRx7r3VtENebRofe1ovWn07Z3Y2SK1gbWH3HeLWtXDsmD/AYVLFy0Mw0MUoH/ryl2TU0kC+3ZgXBtGioYJPnLIJ1KvJnoLJNp345g1Fqr5xRa/YDRNsm0P5fI0oiSjhpMIokSy9wD5iSt3HGnY8xkWVYNv5mvuwTlMkGQkRWfHc/+caMc2Iuk+PM/DNmvMXT7BzIU3yI5c/EAqq1cu27hunQMHFfbsVfjW31XJZlZaYy7H5KRDvb70fTgsUCy5DAzYXLnsUSh6BIICn/+iztiYw+SkQ7rNnwbbjkeh4PnTbtPDsT2kNZoLTBPmZl1qNX9EGYv7L8fBqzb/6r8N4Tjw6qv1FflOu15h8OW/As8jvf8p+o/9No5R/5XkMTpmjeE3/oGdn/+XtOw8yrVXY6sGRWsuT/XSKLHWBFJYp+nLjxI+4lPIlJb4oteSa1rkf3bG74teA1JIX6w4e7aDnfFHnKIise+PH8XIVtESAeSAwr4/egSAoe9fojK5jq6hIBJr34GoaNTyM/7sRxD92ZHnLTZZCKIM+DqunusgCKKfd2k8yzeFo29+dnMWFUp2Iik6helBPNfmZvMGAuC6K2yL18L6Hi3N8UVVG3uhiDE+tyk2gjcxO+vv5Isv1Ll8ycKyGjJi2i0+L5aFGNTR+rr8t1gqidLegnfLiNWey4DjoAXilOuVFUKyrmsjCjLVzASObWDVfI6i5zrr9jFLsajfpF4sEdizA89xqA8O4d0FT1HWgvQd+y3a9j+JGk4iShK1/ByV+XEGXvwLzFLuA+UWBgJgmh5XB2wefUylOSWRzdiYht9+uXOnzPXrDtnGyNB1V56WqSmHQt7F8zxyeY9y2UVRIBIRqFQ8AgGBUGhZB5PX+LNsH1LNIm1tEom4SGenxMKC07A1vv38WzbMzDpcumgv3ifLYZZzDLzw52jRFE39B0j272f28ltY1c0THP0owHMdilNrdXEsg+uRffE9Ajs6UTtTyPEQcnxlxdo1bUonLlM+OQj22gFCjofR+9L+9g0LY9zvQhl/aRAlqjUKOoI/tfY8P02Su/OzIIgS4eZeEt37sOoVrFqRemmBZN9B9EgKz7XJT17BqpVo3fUEVqWAKCsUpq4S69iJGohhG/4znRk6RSDRTijhU9UyI2dxbJPUlgeQ1CBapIn56+8QTLQT79iD59qU5oYpzlxb9zSub3EaCSIFfREHeyGPNb0xXt/d4szpJVqJ68IPvrdy+GvPZrAmZwg/8whiQEduaSL2G8+tfGpdj+w3f4BbKGGbVQKRZkRR8fubBZFAuBnPczCrhQ11sSyH2tOJqKvY2QLa1l4808StVDGuDa//Y6Bl1yMk+w/QffQLgN+DPHvxODMX3yA38sGZfi9HMinx2OMasgTnz1sMDflDr7k5h/dPWjz4oIrnmeRzLlOTLoZhY9tL53d2xuUfv1vnkUdVBAFOvGVy7qzFT18xOHxEoVL2+NmrBvm8y+BVm4UFl4sXBObnXM6esbAsjwMHFdJtEtmsywMPKvziFy4jwzaBgIDnwfi4Q6nkkUgIRKMikgSPPKISjgj8h39fppBfGTxto8r8wLvEOraT3vsEnuMw8OKfY1U3Rsr/VYMxMsfsX71C01cfI7C9wxd6aLTaOqUqxXcGyP7w7cWR36oQRbT+NEqzLydn58vUG7Qd13HJDy7g3SGgrgXPdSjN3iCYaKeam6I0ewM91ooWTjJz+TXCzb2EUz0UZ64hijL5qQGM0gJauAnXMqlUxxEVHcesooYSmJUcrmUQ79iFFk2RGz1HfnIAWQsyf+0d/1AkBateol6cp5rbGFNkfY8WWfITTIBTra8qBrEZ6N8iMTricLN/vbdPYmR4qbDgVqqUfnYC6fQl1K42gg8fovjiayunz94Sj7Awd43O3c/StfszPiVHixBJdpObH0TWGj4yd8Nx8zwETSOwezvlE++j9XauNAJfA/HuXbTtf5qW3Y+iheMAjJ98kezQOWYvHd/49jcB167ZXLt2e6HMMOCnrxj89JWlKfvgoA2DK5dzHDh31uLc2ZW8yNOnLE6fWvnZ7Ix/bodu+NdwdNT/+/ibJsffXJmmOJ9f2qeBK/7/Hzig4Dge/+83KkTCAv/6f4qgqjcrbSsx/t4LyFqArZ/6A9oOPMXIW9/7tQ2KuC6V80PY2SLBvb1oXc2IuopTqlG7Nkn1wjB27s7kfkGRCGzvxG7onZZPX/f9mYD2J/sZfeEKVmmTilorZgjCUgC3DBxzKdZ4ruPnVhvfq8EYWmQL1ewEoqwgCuLiKpbrLZTmR7GNKsFkJ8meg8wNnlh3l9aXDrNtXzlDFFcVXtgMSBI897zON/+2SrXiIcvwhS/p/OmfrCyieIaJPTOPZ5q+yOvQ+Jr7U8lPMTnwMxJtewjFO3GsOpODr+E
IDk19h4i2bcfzXIoz1ylODqy7j9bUDGp7GjuTw57PoKSbcVfznWhAkGQO/t7/jKwFCcRb8FyHemGemYtvMvT6tz8RU10HQ0M2jz2u8j/8j2FEEd5526RYWP1ae67D3MB79DzyFZRghO3P/XNO/+3/8qF3RnxocFyM0TmMiQW/giyKvsBC3dqQ3J1n2mS++ybZH/mjLWdZ7lHSJLREEM/2/NSUy6J8mJZoxioX0VNtCJJMdXpk1WvgWHXcxudGOYNRypDe/RSe55CfuLIYEG/m/zzPwbENHMfEa/zbc2z0qEAg0Y7r2FiNLi6znCOc6qV15xPMXXuHcFMX0bbtCIJAJbsxKtv6dgT5Cm65hpiIIOgqYlDDKWzezRYOC+zaLbN3v8Jzn9Wp1z00tcGPW2ufsgVKv3j7jgHa81xKmVHK2YnGm8NPxgqijGNUUQJRbLNKPb8xEVV7LkPh5df8m8p1qZ66cMcbTBBEIq29AJRmRyhOXWfw5f+EXa/82vXt3gtKJY+//IvKzQYl7IbF71ooz40w9Ma32fH8f00wmUYQ1jZP+7WB4y6O8O4Knoc1l1/1q+p0ia2/vZ/yWB7P9bBKBmMv+VOKcO9uajOjhDq3IKk6Rn4eu3x7QSgzdGox1++5DpmRM34hBa/hXugxe/X44nNiVvJkRm4V2fUa+UGhscv+spXMBNXcNDef99L8COXMmC9Y4W3suVs3KJpTC9jzeeREBDkeQU7FVtgI3C88DyTZzyeFQgKKLGDZHj/43jrT9A1SgjzPWTHj0iJJIq39eIAOCIJEZWGD1fTlT+UGRsxGOcfEyReZH3yf4uTgust/gpWw7qZ70fMa1BWQ9TAtux9l5vxrH8h+fdhI7z3GjV9880MROS5PFLBKS2kWu7Y0QBIECHVuoTo9ip5qW3Mdt1WAvdurwrcNHFZJda0e5LxG1XnZuu+SPri+HcH0AtXLI6i9adS2JPq2LoyRmU3jKVYqHu+cMCmXXQYu27e1mcmRGKH+HUgNbbTy4CXMTEMNWhSJHH0Qp1zBGB3zvWPXCVZKIIpVK5GfuEKwqQMtktx4UNwAStM3mDzzKkYxQ+bGWXIjF9b/0SfYFCwMvs/C9dOkth4mvfcJZi+++SszKi8vjDNx6mU6jzxH275jDL32rQ2PfDYTuctziLIIkgAuuPbSPhSHLiNpOvXMDE6tgmt8eLJ694P1c4qWQ+Gl99C3tBPY00fsmcMYozPUr45tKjVncMAmFvNbugBs2yOb8Qj178CpVahN+oHLLi0Nx5XWFiKPP4YcT+CWy+ReepnKufN3DIx+0rWdZO9+JDVILTe95rJrQZQ1ZD2IKGt+rtW7mQg2sM0aV1/8S1zHXsE1lBQdNZLEMeuYFX9qogQiSKqOIEh4ro1t1LCNyroFIFHWkLQAkqwhNMh/nuPgWPXG9Hxj6Q1RVpDUEJKybD2ug2ubOGbdJ8DeYTQiayEkLegbDeH3J9v1Ko5VW38UIwhIagBZCyJKCoIg+ArNjo1r1bHN2l3nBGv5WXIjF0n07KGp/wBdD36OsXd/dFfr+KjCrpUpz27ey/teEe6K0/WZrQTbIpjFOhOvXid3xafrSIEQ9bkJPMdGUJRVjeM+DthQm5+dKbDw16+Q+mfPE9jRTer3nyX3g+PULg/fW85iFRx9ROWBB1WiMQFRhNERh//0l1U810FUNURVbcibL3lWqO3tyNEYUiiIZ1u+ifY6I0WznKU45Veg7dwMVu3uOG3Bpk6SfQeJpLeghZuQVB3Pc3GMGvVShmpmnOzIOSrzK2/gcHoL/U/+PsWpq0ye+gnBpk5SWx8kkGhDUnXsepXKwphPKxi9uKryjSirRNJbibZtJdjUiR5rRlJ99RLHrFHLz1CYuEJu9PydVaoFkVCqi1jnLsKtfQRirYs2p7ZRwyxnqGQmmLv8JvXi7ebmoqIRSW8h2XuIcEsPSiDin9tKgfLcMLnRCxSnBtcMzqKsEevcSaxzN+HmbpRQHFGU/IR5rUS9MEtlYZyFwXfv+voMv/EdWvc8RrRtywfiRvjrjs7PbGX+zBT5b80T6ojS/fx28oNZlEiS2Nb9SKqGa1tEeneRLRdwzcbAQBCQoiFcw1xTU/Wjgg0LQhijMyz85xeJffZhwg/vpvmfPU/t0gj14Wms6QW/DchxN9wD6eFhTczjNYbfjz2uMjLsYFkSly9Z7N3ra7bZxTxauhNNlsHzfPevhgeEHI8jaA314Nk5rNz6cvV6rAVBlCnNDqFHmwk1dVGY3JjheCDRRsfhzxFt24bn2hiVPPXiPKIkowTjhFt6CCbbMMu524LiTajBOM07HiHevRdRUjArOaiK6NEUiZ59hFJdCKJM5vrJ234rSgpt+z9FuKUXz3Uwq0WM7CSCIKCGm4iktxJKdaFFmpg689KaRPB4127a9n+KYLIDBBHXNrGqBQRRRNZCBFPdSGqAzI1Tt++DopHa+iCtu4+hhhLYRqXRJimiRZpIbX2QcEsfU+deITu0itinIJLsPUD7oedQgzEso0o9P4fnWkhqEDWUQI+mCCY7yI9dvOug+OsOMawvijhsGlxvsc1P1mUqEwXsmkVpPL9oZyDKsq+OHU3g2jb1uUkcY2nAJOgqyd99jvqVYUpvnN54r7QkEdi7BX1HN6KqYM3nqJ4dxJ794EQoNhQUQ4e3E9jTi9wUQ+1qQVQVpBYdORUjdHSXb4Fo23eX93VdJv/Xv8ae96eSjgPDQzaaLnB1wOKho41gV8gT6OpDTTZjzE4uvXkAMRhY5Ara2SxOaX1umiirKHoYWQuhBKKL/sfrQRAlkn2HiKT7qeVnmDr7MrX8rN9uJIiIskognkaPtVCcvrbm9DEQT6OFk2SHzzJ/7V0cs4YgCOixFtJ7nyLc0kfrniepZiap5Vb6wThWnfz4ZSrzoxSnr2GUs4vFBSUQIb33aeLde2na8gD50Qv+ftyCcEsv7QefIxBvxShnyVx/n+LUYKNXVEBSNIKpLjzXvs3JTxBE4p1+QBVljfmrb7Fw7SR24xjUUIKWXY8T69xJet8zGKXMbS8HWQ3QsutxlECUzPAZ5q4cx66X8TwXQZRRtBChll48z11MM9wrZD2EKKu/skIRq6H9j76E0pZcf8G7gDVXYOLffAs8mD89xbb/4hBmoYYa18mcm/G7uzIzZC++g1Mr47kunuOsOO+CLBE8uB08j+rZQaBhSuV5eKbl25Pc+syIItHPHCX22UeRExFfwd8wCR3dR/6Hr1M7f/0DoQhuKCiGH9lD5PH9flvPsjyBIIpIQX2x4+Vu4Dnu4lsG4I3XTWZmXPbshT/64zAXLvgPe6Crl+roDYz5GcLb96Amm6ktOoAt7Ytbr6/wf14L9eIcgXgLzdsfwXMd8uMb6yYRFQ01nEAQRKqZCfLjl267iLX8jE8FucOFEmWF8twQ0xdeXdGOZpSyuI5N98NfRY+kSPTu90dgy/KLnuswe+m1xv/71IWbMMtZps//zJ/O6hGCTZ2UZm6sqOpJaoCmrQ/5AbGUZfzkDylOXb2tGFHJTNzc4IrPZT1My87Hkf
UI2eEzTJ376Qq+pVHKYBsV1FCcQKKdZP9hatmpFakAJRBBCURwrDqFiSu3BU2DecoLY74g0j0KyBanrhNp7aP76BfJXD9D5saZe1rPxxFqexNa9/qe3ncDQZG5KVE1d3Kc0kgOvTmEkatRnWkMRDwPSdUJtfch6UE8zyN7/i3fzH4ZlK5WEl99Gm1bF3IqjmdY1G+MU/r5+9QuD8Oywo3a20b0mQeQ4mHM8RnsfBm5KY6+vZvEV57CnstiLxQRNR3PNFFbWrFyWZzy/RH3NzZ9lkS/s+UDxLtv+2+Vb/5tlWhEJJttEDddFzkcxbMtJE3HXBZwvHrN148TRQRJQhAF1ivIWdUi84PvIqkBXMvYsGq1Z1sNk3aBQCJNuLmHysL4yoDieRuoCHrkJ66sQt72KM+NUMtONaaP7SiBCFZ1Jc/rTtVUq1aklp9DaYsi62H/BbYsbuuxFkKpLjzPozA5QHFyYPXAs0Yw0uOtBJLtuLZJbvQ8dv326XktP0MtN0Uw2U4w0eY7JxaWuKA+cddC0UNE0lspzw77KYRbtn8/Jbyh179D2/6n/ALSPVhcfoIluJa9oiUw1B6lOlumOlNCkARCnTHKo/6IPtjei5GdBUFAEMQG93Al9P4O9P6OpcFVBEJNvjRf5m9eoHb+2uI9q2/pRE5GsSbnmPvGd7GmF9C2dpH6L7+AtrUTbWsXgjeLlu7As0y0dDtWPkfx/Xfu65g35vv87hWsmU22l/SW8hQARx9WuHbNFyNYMFwOHVY4fcqiNjpEoKsXLd2JmV3AXGaPaGWyePU6KApiKISoB3Cs9d8SnuvctY+J61gUJq8Q69xFINFOzyNfozh9jdzIBSqZcb+osIH8gWvbmKXsqsHNcyzqxXk8x1kcUd0aFBfRuPH81qjGR6K0WLFdLa+khuKooTiubVKavXHXI7FgsgNBFDErOex6ZU2HOatW8l9mehhFD68IilatTH7sEi27HiPZdwAtnCA/donC5ABmJfcrQ6H5sFB67yq1GxtjVAiCPwoUNQU5FUNpjiFqit/W63pUL46Q//lZzImFxXu74+ktDP3gEq7p4Lkenc9sYeA/nwIPnHoVq5hDS7YiB8Kr3oMe4ORLlN86hzU5j6ApBA/tQN/ZR+TYYczxWZysH4SlWBhBVTCnFrAm/efeuDFJ6fhZmrqfR9/Rgz2aQ45EwfOoDF4h0Lvlvs/hhoJi+dRVhDObTz5eLlj75a8GyGVdXvhRnaEbNk88qXH6jINdLVEZHkQQJdRkCikQXCy0GMMjWJksYjiMmk4jNyU3lFe8VxQnrzJ+8oe07j5GIN5K8/ZHSPYepJafITd2kdLUIEZ59YB3E45tLOYBV4OfX3N82o1yuymQpAUJxFoJpboIJNpRQzEkVfdpLbKK2qgErwZJDSApOrZRwSzdfaJaDcUWc4f9T/7+mkbrkqr7EmmyiiirK77zXJuZS7/AsU2/sNTcS6i5m9Y9xyjPjZAdPktlYawxkv6170m5a2T+8fiStN1GIAgIouD7sMRCBPf2Ent8L1p/Gq2nBSUVpXzy6uLiSkhFCalYRQNZV1AjS6mz8tg1BEnCzC9gFXMrCi034RYrZP/+p1Teu+inuwSB6qkBUv/NV9C2dqK2p6g1gqKg+mIWN/UMAHAczLEZ7GwBtaMF16ghyDLm3AxOuYRTuf/Gko1Nn23nA789Mwu+1/NTz2j09skkWgJozTFEXUeNJ/EcB621ncr1Aaycr9Rj53KU3z+Fmm5FSbcS3LcXc2ra95v9AOC5DvnR85Tnhol37ibSto1QcxeRtm1E27dRzUwye/lNcqMXcO07eMzc4Wx6rtuQkRN80Ypl0CJNpPc+TbLvEKKsYFYKWLUCtlHFdfyCz2qB6CaEm94dnoe7QS7jit9Lfm7J8/zm/LUoN27NxKoVsWolnFWKHFa1yNTZl8gOnyHRs59IegvBRBvJ/sPEe/ZSGL/M3MAJKnMj95xX/FWDGooT79q57nLufdBd7FwZY2SW8slBWv7w00Qe2UXq68ewsiWKr50HID+4QPdnd1Cfr6AlA5TG84vvrmj/bgRZwWu89AVJglve/8bINPWrI8toOR52rkj19ABNv/c8YmS51FlDAOSWRhG3UsMpVpDiYZxalcK7by1+V754azvg3eMj49FSLnsMDNjMzrg8+YxGW5uAaxqoqRZcy8KpVpCjt/cNV86eRe1oJ/LQg4QfOII1PUP51OkPtAXKrpVYuPYu+bGLBFNdRNu2Ee/aQyDZTvuh5/Fcm+zIuVX3QZQURHHt0y4pmv92dGy8ZflOUdZo2fUETVsf8OWyBt+hOH0Nq1pYbLCXtRDdR39jkTd4K1zH9oOuICIpgbs+br/y72GWs8xc+AXGOi6InmNTLy6s8aVHPT/DdGGO3MhZgk1dxDp2EOvaTaL3AGoowfDxb2Gs9ftfMwSTbaT3HQNg4v2XPtCXhTWXI/vjd9D721DTCRLPHqF8chC3Umfm7VFShk2gOURlqsjcyWUiC4LoV50b/Zne8vvf84urnmUt0vBWfGdYIEsriq+3ih4vLm45vsugqiBHYwT7d/o8yGCI2vB1KlfuT4rvIxMU//EfatSqHpWywz99v8Y7JwysnINTr+KZJp5jY8zP3DZlc8sV8i//FK9eJ3zkCInPPY/S2krl7DnsfM6/AHdbtvc8vA3Y2tlGheLkAOXZYbIj5+g++hVCqS7i3fsozQ6tKnYqyipy4GYR5NagKTTEZ2Vso4q1rBgTTLYRbdsGHmRvnGbq/Ku4t0iui6K82F2y6v7Wyli1EpKqE0y2UZkfWfcYl6NemMPzXERZxShnKM9uTEvyjvBc6oU56oV5ilNXKc0O0fnAFwk1dxNt28Z8KfOh9Ph+lDF/7f0P9px4YIzNUzk3hNJ6GLUjRXBPD+X3rmJXLWaOj6z6M8eo4Zr1pWnzLUVIf3QXQYqGcDJLuXJBllBafRqRFI/402ZJQgwFAGFRunDpB/jPj+vhlEuUL5wFAZR4ArUlfd+H/5EJigvzS4GrXPIIhfyLriabsfIZnEoZORzBNQzcZVVPKR5HCoWoXR1EDIYIHTpI7OmnCD/8ENb0DNbcnG99eieJlVvgGXWKb761ocAI4NoG1YUxStPXCSY7kANhJEW7deYA+NPiSHoL+bGLK/TiALRoE8FEGyBglBZWFFlkPYyoari2SS0/fVtABFDDCfTY2nSMenGeWmGOaHor8a695Mcvr13IWQXl+VGsehk1FCeS3kp1YWITPac97HqZ4tQg9fwM4ZZelFCMJbeyT/DLhFup+2rbjouoq+hb2ym/d3WdH7lIehDxJvd3WeXfs22MoQnCTxwi8uRhSo6LU6yAJKL1thF6ZB+4HtHPHPWdAx0XfUcPiAJyPLLCmVAKBZAiQZxyFQQBUWvk3mUF7jAo2Cg+MkFxOUQRnv98kL/7vk5kx17M7Dx2uYje1kV15AZWfkn9O/6pZ9C39PvV5waZWxAE5EgEORIhsH3bXW/fLhQovfPuiqAo62GCyXbMSh6juHDb9EUJxtDjrQiCiF0rN+g7qyPWu
Yvi5FVyo+cWOY2yFiS19SECiTS2UaEwObCC/OqYdTzbQtRCaNFmBEle0RusRZtp2fW4L6C7BoxShsLEFcKpbsItvbQd+DTzAycafMilwCNrIdRIE2aDd3gTZiVHbugsrXuOkdp+FKtaJDd6/pbgLqAEI+ixVux6+bbecj3eiqJHqOVnbmcACAKBWCtq0BfjNct5PgmIHx7cuolnOQia4pOn11veNildO4tr3Z7X9Eyb6qkrhB7YRfTTR9G2dGIv5BFkGW1LB1I4SPXcIKKq0PS7z/k0QEHAzuRR2lNoW7swBscQZAltWzdyKkH19ACCohHobzzjnkdt5P5th+8vKAoCiAKCJCIGNMSA5tuN1k3cmoln2esK0+7eI6NpAg88pNDZKeE1Vptuk/nmd22/uVxWEBWV+vTEkkJOA2pHO2r72jJFmwU1FKd1z5NISgCjtLBIVAY/IAaT7YRb+vwp9dTgiqnvctj1Mq5t0n7wWSLpLdSL8wiCSDDVRSS9BVFWyU9coTi1shullp+hVphDi6Ro2nIESVapZCfB89CjzYRb+1BDCcrzo4RT3asfhOeSHTpDKNVFomc/Tf1HCDV1UsvNYNaKCICsR1DDcURRZuLUC9jzS8fh2iYL199DizUT69hJ+6HniHfvwShlcaw6oqT4VKJgDFkNMHf1xG1BMZTq9v036mXMctb/rW0gSjJapIlwSx9KKEo1O0V5buSTqfOHCEESGw0brMz1rbm8ghSM4JV83uKtHN7awCiFF98m+uxRtL4OtP4O/yvboXbhOrm/fwUEgehzj6D2tWNNLVA7P0jia5+i6Q8+hzE4BpJI8NAOBFmkdmkIO5elcuUCciyOU6lgzm9MH/VOuKegKOgqSmuCwM5u9K1daH1pxKDvvyzgq514lo2dKVC/NkHt4jDGxJyvw3hLJWlszEEU4dHHVb7/j3XqNQ9RhK98LYBdKFE4f8qfQTkOnufi3jICc4pFrMzm+sY4pdKi+fdN3FSOCSbbCSQato5dfEb7AAAgAElEQVSe/x8PwHUxSgvMX32H7PCZNQnQjmUwfeHnjaLCHhKShCBIIAg4lkF25CxTZ16+bXps18tMn/8ZkqITTLaT2v4wTa6D1/hjVvKMn/wn9EiKUFPnmsdm10tMvP9jzEqeRNcetEiKQDy97HA8v0BSun00DFAvzDP+3j9h7Jgn3r2XcEs/kfS2paS45+HaJkY5u2pO1TH8KU8o1UW4pdff6rLz6NomhYkrzF5647Y2w19bCCJ6YnO7VNbdpKagtiX8/J7r4dbWd5cUFYXmQ08uplTm3/8FTm1pNuBV6xRefIvapRsE9m9DaUn4HS3XxqlduI6T9+l0mW++iKgpeKav+i83J4h97jG07sZ96rpU3r9C7fw1pGCY6JGjeI6DqCjUxkZ+yYUWQUBpTxF96iCRR/eitCTuuLja1kRwbz/uZx+mfm2C4s9PUzl1dcUJLje8h1/4UZ3xMd+jRRAg8XN/GSkYItDehRyN49k2pYHzGLNLPcHZH7/gK+hsIjzHuY3WUy/MMfbu9wk39zSmgGEEWQHXwzarGMUFSrNDmOtUZAVJwSguMDJ8lmjHToI3VXKMCuX5Mcpzw2vam1YXxhh+45vEOnehx1oQZQXHrFMvzC1Woq2mTuauvEVlYXRl9W8ZrGqBydM/IXPtPcItvWixZiSlYU5m+MdSWRhbo3LsV58nTv2YzNApwi19qOEkkqLhub4sfL0wR2V+bNV8ZX78MvVShnCqCzWcQNKCDdK5g1UvUc1MUJ67Py/gpi0Hf6U6WWQtyI7n/gUAuZGL2LW7azy4awig97cRPuLL+LuWhTW/fu45P3CaQGsXoqxgZGdxVpkteYaFcW0c49r4muvxagbOshhRfPkd3HLNzzFKIuboNOW3zmPP51BvdrGcfBs5niCy7xD3y1TceFCURIJ7+4h/8TECu3oQVWXDPxV1leC+ftT2FEpbkvyL7+KWVxYZlptUeR68ddwPSlpLGquYR1AU7HL5tpvdXvhg3AVXg10v+z3P45fueR0+/9Bvd8uNnCV3m8z6nWHViixce3fN76uZCaqZiTW/X4TnUS/OryoNtlHUctP3oEfpU3Hq+Zl73u566Hroc4iSzPzVk5Tn78618aOOyTM/w7gH4v1GIagywd09JL/4MGpnCvBzi/UNdMlEenb4AwXPJdK3i/zAaZxVWkHvFm6lRvGn71A+fhZE0Sdz33zhuy5aup3Y0ceQwhHkUITI4YeoDl655x7oDQfFwO5emv/FF1BaE75BtePg1kzcmoE9n8fKFPEMEzwPQVWQIkHkVAwpFkIMaAiKjJSMEP/8owiqQv5Hb+EUVz9hggBPPKnyxmsmTrWKlc+ipVpRk01Y2Xt/iD86+HiKb37cUJgYoJ7/+E/Bt336D1CCUWyj1lAzWhtNv/n4hooit0IQBcSghtIcQ2mOIycji8K/9aFpaoPrv2iVcJzC9fM4Ro3knqMI0sYHTuvCA7d6+7Hb5RK1G4N4nk/PMVzXN72/C7bJrdhQUJRTMZp+51Mo6aRfEcqVqF4covzuZerXJ32hWXeZlmKDRySIImpXC6EHdhA6tA21oxkxqBF79kHsTIHiq6dIxj36+leO/kQRnnxa443XTGoTI3iWSeHiaZR4EjPzqxAUP8En2DiCyXZESWby1CvMXX77jstGH9uD1tV8bxu6WTgVbppBeVizeTI/OOGTq9eBWcwS6dvlq7e7Dt6dJNtEAUFTN1TA8RwXb62cputizM1gZW5J9dxHgW79oCgKRJ86iNbTiiAIWAt5ct9/k+Ib59ZV0PWA+uA49euTlN+5TPLrTxE6tA0pqBP79ANUTw9y8ECNz31BZ2x0iV4iitDa6gfKUO9WRFWjOODbDEjhyG1yRJ/gE/w6wHXWbq28CWGTFK1c06Y2ME7uxfeoXVk7/wcgBcKIkkx9fgqx4Ft0GPmF24qi4BdwtP5O9G1dSE0xBEVeDMJrwZyep/DDN1f9To4n0Du7sRY2b7C0blBUmhME9vQhqApuzaD489MbCogr4LoYQ1Pkvvs6anvK/9PZjNbfwfTUVb7zrSoXzi9raRMF/vC/8vl2gqzgAdFd+0GUqE/f+QJ9gl9vdD7wPIF4qy90+gEIkP6ykd73JOG0L7q7keNxaybOPViEeK6HZ1g4pSrGxDzVy6NUL45iTmfXNalTY03IgRCSHqQ0dAlnjSKZGNSJfOpBIscOo6STvsjsBlC/MrxmUPSMOoIkoaXbcYy639xRvb9B07pBUe1u8afNQH1khvLbl+7ZY8EYm6V0/DxNX38aRJHgvn4G/+oSgnCrnaXH3/x/fr7RrpQwF+YQZJn4waP3xkMSxUUZI6+Rc/hlw7Hq1HIzuLZ5X5XVT7A2lECEpi2HUAJhFq6fZvzkTz7sXbpvhFt70MIJilPXGX7jO+suP/F/fufeRopeQ9TXcXFN2xeW2KBjp2vUCPTuJNTeS6C1azGfN//+z5coOaJA4MA24l9+EjGg4eRK2LminydcxwDPHLtDUU4QkOMJwpEonutizExRuXx/Dprr
BkU5EUGKhPAAayaDOXXv1V7PsqlfHfd9WCURJZ1ctDTVdejqlrnZsVOve5SKDpUbVxc5f9n3Xt9Y650sI8fjyMkkcjyGGAw0WoGERptgHadYws5lsTPZD0xVZznKMzcYeOFPPvDt/Dojve8YrXsewzZqLAyeXEXI9+OFQKKVSLoPAMc2sWrrV1Pt7AcnnbcWjNwcc++8TLhnJ5XJG4sqOcshqArhh/chBXXq18bIfvun1AdG7tsR1MpmyP7sZaRwBM8yfjnSYYIqIygSOK4/LL/PUZZrWLh1EzGkNxq+fTz7vM6uPQrRqO/md2PIZWi0IUGk+pFSb+vCymdx7jA8Vjs7Ce7fh9bdhdKcQo7FFn1cbsJzXZxSCTuTwZiYpDZwlfrgtQ33Om8eBJItOzDrJcrFKTbS0qYF4rT3PIYoisxNnqFUuLUqeHOdxcY6P2gIJFt2YtQLVH4p21sfVrXAxPsvf9i7cd+IdmyjefuDOGadqdM//bB3Z11Up4ZWDYjgy4gp7c041TrFn53clIAIIOoBQjv3IEfj4LlUrl7GnLs/utcGfJ9tPNvx5f43IYG7MhG8dFJ275U5e9qirV3kxHGTL/xmEr0tihyLoyZSvoxYUzPFS6vz+sRQiPDhQ0QeeRi5OYWorE0HEEQRORZDjsXQenoI7dtH5cIFCr94HSef/+W1lgkQDKcREDYcwCyzSnbuCl39TxIMt1AqTLIimAoQjDSUQorTbCTQrgdRUpEkBcus3r4+AUKRtK98zOZs714Qau6i57HfAHx17497e6AWbWLLU78HgFHOMX/1vQ95j9bHaj3PixAERF3FyZd8J75N8oz3LY7DlC6cQY7FCG7d/sEHRbdcx62ZSNEgcjKKGA7cRrzeMEQRuSWOoDXsS3NLQ/1qxSOz4NDTKxFPiGhSnep4Bs1oozpyHadWQU93rjpKlJuSxD/zGUIH9iPo2hKlwHVx6waeZS5RhiTR76XWNT/XKElI8RjRRx9BaW4m9+JLmOMTv5yHyvOYGjne6DzZ2PZcx6RUmMAy1+hq8Dymho83WvQ25xhiiV6CkTQz4+/h2Lck8T2PyeE3N3V7dw1BJN69m2DCfxkM/OQvNlG958OBKKsEm9oBuPrSf8SsfMytXj0Xp1xF1NQV3u2bAUEUEGXZF0HeBJ3JdYOiNZ/HyZWQYyHU7hb0bZ1Uz9xunbkRiEGN8IO7fFKo62EMLY2OXn7RIJ9zaWt3OPakynvvGOA6GDNL00Mrl8F1Vk5xBV0n9szThB88giBJeJ6HXShgjE9gTc9g57K4lSqubQMegqwg6TpSPI7alkbt7vL9o2WZwI7tAGS++z3sZf3UoqgQS/YRDDfjAbXyHIXsCK5rIYgysUQPwUga17Eo5kaplv1ikKpHiUQ7qdfzROPdeK5DLnMdo5ZDUUO0dhxBECVy84Mrps+qFiXW1I+qRXBsk2JuZHGdPlbX7l6+zuz8VSqLI0WBaLybWFP/4rK2VSU7P4hRy6HqUWLJ5dsbplqeA0Egld5HKr0XTYshyxqmUSQ7fxWjlkdRw7R2HF62vaXrKUoKsWQ/wXAzjmWQz96gXs0iCCLJ5h1YVpVgqAVRUigXJijmx7nXoCopGlue+h0A5gffp5q92y6bjx5adz+2aPzkE7bv74Uj6ipyKoqcCCNqyqJDn2fZuIaJnS1jZQp49Q/mZeLZDubINKGje5Fbk3B1ZFNGi3axgF0sEtq5B8+xqV4buO91rhsUzcl5zOkMancrSnOc2KeOYE0sYM3fpZGVJBJ5dC+B3b0AeKZF9cLQ4tdXB/xg9+ILdV77uUC1LviGNMugd/Rg5bMYtaVOmOCuXYQOHvADoutSuzpI8fhbmFPTOIXC2jlQQUCKRFDb0oSOHCF0cD+iqhLYvo3QwQMUXnvdN6MGIvEu0l0PUsyNIooieihFIedbc8YSvaQ7H6BcmkKWA3T0PcHk8JtUy7PoeoKe7c+Smb2MbVXwBBGpwfJ3XZt6LUdb91Fsq06lNL3Yq6yoQULhViyzTCDURDTexcjgy5jGnZPorussrtOyalRKNyXBPGzbwKgVFqe7idQ2Chn//CtKiFC4FbOxvUi8k9HBVzCNMrZZxXNsPM/GNIqYRgm3oX7iH0Oe9u6jWGZ1RRBuatlNU3oP5cIkgVCKSLyL8aHXMY0ibT2P4NgG5eIUkqTS0XcM5/rPqZTuLSfZ/fCXUIJ+9XHu8gnM8iabrP2S0fngZ+k79nUQYPrca5RnRu5tRZKI3pcmfGQrWm8aOR5GigQQFMm3CgA828WzbJxyDTtboj4yS+X0dYzR2dsVsu8DnmlTOXmZ4KEdRI4dxrg6ijWzcN+TC9eoUxm4hKjrvkWGcf/MjvWnz5U6pTfOEtjZjZyIEDq8HUFTyP/4BMbILG617kuErQZRQAxoKM1xwo/uJfrUIcSQjmc7lE5cxBy/vQXLssC2Pb70e2neLRzDzC4x1dWmZooXTy+tPhAg8shDiMEgnuNQvXCR7I9eWDHKWxOeh1MsUisWMcYncMolYk8eQ5AkwkcfpHLmLHbW7zGVVV+0oFSYoF7zpa5cx0QQRFo6DlGtzJPPDCFJGu29j5BIbVs2svPIZ65TzK/0MnZsg4WZi8Sbtt62a7VqlrmpM4CIpkfp6H0MPdi0blB07Pqa66yWZ6iWZ9ECcWKJXiaHj1OtZhrby6yyvSSmUSKfuUEw3ILj2sxPn8e2ards7wKJ1MrtiZJCc9sBZidPk50fQFGC9Gz7DMmWncyMnwSgXJhkcvQtZDlA747nCUfb7ikoqqEYTVsPISkaU2d/zsyl43e9jo8aYh3bUPQQ84MnufLCn92186QgS6idKRLPPUD4yDakaNAvmK5DkvY8j/CRbSSePUz55CC5l9/HnMxsTnD0PIzhSYqvvkf00w/R8se/Ren10xhDyzri1oBrWosOf7dCbW5Fa++kdPZ9pEiUyJGjKzxb7gUbszg9fY3SiYvEn3sIQZEJHdhKYHsXtavj1K9PYM1kcSt1/+R5HsgSoqYixULoW9oJ7O1HaYo2eqZdagNj5H/yNm6lTrpNRJEFurolEkl/uiCK8NhRm5f+tzdXKOJoLW0Nn5DGv3t7UJpbEAQBc26ewmtvbCwg3gK3WqX0xnH0vj70vl7kaBR96xbK7/lBMZ+5QTCYor3nUSyzQmbuMoXMDRAEwpE29ECCYNiXE/Ncd0XwMo0S9VoWz7XXeCnebkmQSG0l2bIL17WRRBlZCd7R12WjULUIHb2PUS5Okp0faORfBBKpbSRbdja2p9yyPW/x77VUd26FrOjIik6tPIfn2thWFaNeQNNjiKKE65hUK3O4joUj+P8W76FPVo81s/Pz/4pk717MSoGF66d9abKPMULNXQSbfJ1Bx6jfU0AMH91J6uvH0Lqb1w2EK34rCAiagqjFiD//AIHd3Sz8w5uU3x2478AoaCqxzz+B1pP21bb7OtD6Ohp5f9N39lvj9jKujTH7//zd6l+KIqKugyAiKiqipq++3F1gY0+a65J/4W1/pPjgTkRFRgzqhA5tI3hwK57l4FZreJY
fFAVZRtBVRF313eMa8FyX+o1Jst99bXGUqGkCqirw/Oc1piYcTNM3sVNFE2N2HkHV0FvaEPUAgihRX5ZjVFqaEQO6/xYaHcWavXeBSadcpjYwgN7bgyDLK4RrHavGxPAbaIEEidR22rsfxqjlqNdymEaZ+elzLMwsEUbdW8Q176ZoIwgirR0PkFsYZG7yDJKis2P/1+/5uG5ClFRa2g/iuQ4LMxcXR6yiKNHacYTc/FXmps4iKwG27//aLb/2EO5CxMJ1bDzPRVaDUPGPSVZ0LNM3HvM8b1O6TaLtW2nZeRTHNrn+879l5vxr973ODxtNWw6S6NmDWSkwff4Xd/17rS9N8+88hdZQuAG/d9heKFIfm8VeKOJU63im44vHKjJSSEdORdF7WpCTUZ8hIgjoPa00/+5TWPMF6hsQhLgTBFUm8vgBpFh45eeiiBTUIbh2MLPCa5usubUqUjBE8tgzCLJCfer+O942PPywF/Is/M3LmGOzRJ8+7KtoyJL/dlFlRHV1ZQ7P85W33ZpJ5f0Bcj8+4QfERpJ1bNQPIK++bHDqfQvD8EVmf/t3/RMR3roTKRhGiSUXzbbtoq/sK4UjCIqC57rYuTxu7R6r4vgeEnYmi2vbfkU6upTPjMZ7kJUAplHCtqoIgoggiHiuS3b+CsmWnZhmBduqoqhBauV5atU7j1glSUNSdCRZQ1YCKGoY26757WmegySpaMEETS27kKQlvUhJ1lDUMJKkIisBVC2MbdVxXWvZOlUUWUdVw9hWDde1STZvJ5roZXrsbSRJRdRlLKvqXx/PRZJX3x6AZVZQtSjReLf/IqgXcBxzaXuL+xLBtmo4tkExN0JrxyEEQUTTY2h6nIXpC3e0d70bhJq72PK0T1kxywXmB9aWU/t4QCDes5vuh78MQC0/y8K1U3e3BkWi6cuPoHY0Af60s35tkvyrZ6mPzOAUq/6ozHaWihyigKDIiLqKFAmg96eJPX2QwLYORE1Ba0+R+sqjTP67763wab9beHWTzN+9hHAXkoM34eTWrrzbxSLF0yeR43E80/wlK297YC8UyP3TcaoXhog8tg99RxdSJIgYCfr6ijdHhQ3lbbdq4BQqGCMzlN+7TO3yiJ8/WL7axrV567jJcrWfb/+9H+AEWcGYmUSQJFzTQFxOxBaFRRkzz7r/qpln235xRZaXzHAAQZT8YCFrWFaVqdETVMvzgMfs5BkcxySV3oMoSFTKc1RK/oWx7Rrl4tQqHssCyZZdNLXsRFJ0IvFOAsEk89PnyC1cY2rkBC0dh+mMtFDMjTE7eQbLqqJqUXq2f4am5p2Yhq8tGYy0Mjd5lkJ2mKbWXSSbG+tMdBMIpZifPkc+cwM9kEQQJdJdDwFgWVVmJ96nlJ9gcvQtWtoP0xlpXdreMt+VfGYIRY3Q0n6QWjXL7OQpnLpNU+tuks07kBSdaKKbYCjF3NQ58plrTI2+Q0v7QdJdD2BbBlOjJ/y8qihRKc1gmZXG9Xeplucw6hs30AqlOtn3tX9NuLWXanaayz/8U4yPeXFF1gLs+fJ/TzCZpjw3xqXv/8ltdr7rQd/aQWCHr7ru2Q7F45dY+PvXsDLFO1Z6PdPm/2fvvYMkOc8zz1/a8qarq333tBlvYQYeBAEQlgQIEiCX5FLLlURZQtRqqdPpNmIvYnWxcbcnd5QhKWm5citSUogOBEGCBt4QZgaD8X6mvS/v0mfeH1lTPT3TPW2mhxhA+0TAdFXll6ay3ny/93ve53GrOna2hDEyS+3gIM2fuJPEXbsQFZnQ5m5Cm7qoHR5e9fl5lk3llQOrU8271EzLc7ELOezC2mlMCpeqEwmCsPiboojcHEftSqO0NyPFwggBxVfqNS3cmn+RzdEZzInMkn2U/QMSoyMOFzaVBNo6cbQa4XUDSNEY1dPHMDP+1Dt53z0k7r8PQZIoPvs8+R88fVn8wtitN9P88Y+BIFA9cIDZv//aqsdaa0hygJ6BuwgEEzi2wezUIYq5s0tud0UgiHXbibVbnVwJtj3yebpveJBqZpxj3/syucGD78hxrCU6r72HLQ/9GnIgzOHv/CkTb6+8gyX1kVtp+dRdiEEV7dQ4k196EmNkdXqSalczHZ//COEtPbiGReYbL5P91sKiDO8kFCVCItaDooQbFCaATPY4prV4PdbzvEXD8+qr966LPVvAni0Ap/3X6qrSq+EfPfBgkK/9zxqVyvxtzy20lE8eJpBuwzXmMk27VMYzTcRIBKUljRSL4ZRWR3IVAgGUjk4/I7Us7OzVlXkEQymisU6GTv0ErZZZUkLqSiKW6EaWgxSyZ37mgbF5w25SA9fgeZ7vt/0eCIjtO+5g432/gKSGmD2xh/zQ6s7pXEnL8zyqh4Ywp1afPVkzBaoHzxLa3I0gS8iplQvXXmmIokJ3x83E4z2YZnleQlQoDl4yKF4Ka2tx6nmLriAtBVGEjk6JmWkHD/AEBc2LIEgyciTqu9Z19qBPjGKX/cBnjo7ilCuI4TDB9esJbd5EZd/bsArV3UBPN5FrdvqnYZroZxfJwgQBKRz1g6dhAB6iGvCNp7Sar/kYjvh/16rguYjBMIIo4ho6nuch1VfI7FoFUVYRAwG/H1urLkBNEAiEkkTjnYiSgiQHUNUIRn2FWxBElEAMSVJ9AyujjOv6pQRJDiCKKp7noqhhPNfGNMoIgoSkBBEFCde1/EYfSfW3dUwkSUVRIwiijOc5mEYF1/FXwIKhJM2t2xBFGcusYlsaplHCdW0EQUJRI0hyAPCwLW3h1sDVQBBIb9jNjse+gBpJUBg5xokf/vXlj/sOQwqESQ1cQyDmK+Ec/s4XFzT8Wg7OdYv4pnGly6sBWg52xh9DUGRE9R1wQ5YlIru3gutS3XP0orclUSEaaWdo5Hlqtcy8erXjrJ6veNX4Pk9PO3z2V8KMDDs4jkexGuC7z8QItHUgBcM4Wg0lkcKYmetWMKemMYaH61lilOR99+KZJrWjx5ZdYxQUmUBPD00PfRA5kfDHHRtHHxxa8PNiIET6tnuwijm0yVGUeBPBDr/90Crk0CZGiPRvQkkkqY2cxTV0wr0bcLQa+tQYciyOmvKd2bTRswTS7cixOGY+Q+XMcVx9/mKRVF81TjT1EQw309V7K3otz9T4XrTqLMn0Rlo7r0MUJTzXo5QfYmZiH7Zt0JTeTGvntVSK44QjaSxLY3L0dULhFtq6d+M6FoIgYhplQpE0MxNvMzt1iFhyXX1MGUGUKOWHmBx9s74yvptUy2YQBNRADK2WZXpsL7qWJ97US3v3jY1pTKkwxNTYW35AvUykN97Ajsd+GzUcozByjIPf+AP04rtbhV1SQ2y89zN03/Agrm2ROfXWqgMigFPVzitTrc2DCME3cnPKP3uqkxQOknj4DjzTXDAoeng4joFlaVj22h3fVRMUjx+zGR+fy5K0moU+kccuF3F0Dc8yUZqa5wc716Xyxh5CWzb7Ag8taZo+/BBqTzf66TMYY+O4lYVTaDEUQu3qJLhxA+Ed21E7fAqOU61Reu11PH0RoU7XwSrmcU0Dp1pGjsTQJ0epnD
lO6qY70WcmcPQaohogkG7DrlXRZyaonj6GFIkS3bgNz3bwHBulKY1VKSGqKna1Mo+DeQ6OYzB69kUqqTE6e2/jzLGnMI2ST3lRwrR1Xk+pMExm6hChSCvdfe9Dq2XJZ04C/rR7enwfk6NvIAgitq01OJXjQ6/Qt+kB8pmTaLUs4Vg74sxRapVpxgZfwjLKxJLraO+5kfzsSWrVGUZOP+evvHswNvgCjmP52bAoE0+uw3VtRk4/C/hZ7Fr0IDdvuJ4tD/0aajhGfvgox5768rs+IAL03/Fxem58CIDRPU8z+NI3Lms8cyKLa5iI4SBSIuJPpVfJLxQUCTkVRVBk3JqBcRmSgauFoCoIkrhoeHddG90osq77dvKFsz57o/5eqTyGba+OjbK6oCiJPjk7HkZpT6F0pJHiYcSA4qfvholbM/zVrLEZ7EwRz7AW73zBD4rhiIBpepgGnFtklkJhPNvCsUysQu6ihRRjZITis8+RfOB+xHAYpbmZxJ3vJ7r7epxyBadcxilXcC3Tt45RFMRIBCkWQ4pFkWIxRMVX93YqVUovv4J2/MSiCzaubVE+dYRQ5zrCPQN4Xr2fOhwB1yXU0QOCiGtovuSZ48yRSgUBR9dwalXM7Ax2uYhr27iG7meb1TLG7AIKH56Lh+vz+zy3wTFU1AiKGqWQPV2f+toYRolwtJVC1q/z6rUs5eLIPAEJz/MwjTLVyjSObVAtTxIMNxMMJetT4Cip1q0Eg0kUNeKTuWW1vq3bELDwXLfRgO+6DsX8ENFEF+s2fIDc7Mn6MVxGxiKIpDfuZuvDnyOYaKEwepzD3/7/0PJXzgnwZwUpEKZ1220IosjY3h9y5vmvL2lKtRRqh4aw82XUSJDQpm7kdBxranW1cSWdILy9z/dkKlSpHRq8rGNbDZbycBEEkWAwQSiYIhCIz2N5GEbxZxQURRE5HSe8fYDIjVsIbuzyPZclsW54A36+Xe9+cH1qjjWZpXrgDLX9pzCGpnz2+gXYfYPCBx8KcvCAxY+eNrjzbpWnv2+gpFr89F2rLRioPNum/OZe8Dxit9+O0tqCoCjITU1IyaT/oQvrdIJP5Zln0DMzS/mVV6ns3bt4lghIwRDRDdsQZRltYhQ5FiPY0o0UCFIZPIEgiIR71/ucymIeY2qcyPrNJHbeiDY+RG34DOHuPoId3Wi2RSDRRKC5DdfQsMrLp6X4pyHWKQ7zF9LOV6xxHHNBaofnuY3reT7ZXFHDrNtwD+XCKCNnnycQiLNuwz0LkLcv/NujVBhGO5Il3tRLutgECwgAACAASURBVG07iaZeRs48N681cCVIb6zXEMNxCqPHOfTNP3pPBMRAPM3mD/4ykeZOjEqB3OChNRHEtWYLFJ47QMsn7yS8rZfEHTvJ/3Dviqe+clOUxL3XEd7Sg2c5FJ/fjzWzzHtTEJDiEYRQACdfmjO8EgTEWHhFHTZycxzhErVMxzE4cfopBIRGyabRRnsZJZtlB0VBkYns3kT8nt2EtvUt6fvcOPWgihQLE9zUQ+z2HRSf20f5xf04hfnT2rvuDnDsqE0iIWIYHrtv9IOiIAjEtl6DXS7ieR7a8Bms4vxVNU/XKb36GubkFLFbbyG0aSNSNDr3BSziBeF5Hk6pjDE4SPn1N/wMcQk4tSr5PS/VT1IgunEb1bMnqZya84LWxuY/VQv75juwGdPjc/8/O0X19LEl97sQTKOCZVRoat6IbWmEoy2ogVi9Y+W8B8gKkjVBkFHVCLXKDI5tEGnpRJLP7zbwcB2TYLgZRY1g21pjihwMN+O5jk8V8jzae270SemrCIrNG65n2yOfRw3HyQ8f4ch3//y9ERBjKbY+/Dlat9yMpVU4+aO/ZurQi2szuAfF5/ajdjSTuGM7qY/eipSIUH7zOMbIDE6xuvi9IAoo6QSBvjZiN20hfvt2PMel/OYJis/tXzbVTU7FSX7sAyjtaapvHKb0I//eFwIKqU/c63egLRNiLIyUiGEv0vd87qSTiT6ikQ6fSleboVAc9NX9V4llB8XITVtJ/9t7fT3Euh/secfFxVdbmJdMCIKA0tFM6tE7kJNR8k++inOenqLtQDbjEumVaGsXG7Jo+tQYrmk2xl+0RuW66KfPYM1mCKzrIbCuB7W7G7W9DTESaahve7aNW6thTc9gjIyiDw1hjo2vjsrjeeiTl9f+tIJd4V+D81bYbI3psb20dl1HPNUHnks+c5Jy8ZwB/GIthnOvN1bs6tm9Y+sUcmfpWHcL6fYd2JaGVp2dt99C7ixd8U4Gtj5MpTjO9MQ+bLNGMtVPsnljfTiX3MxxDL2w4nNtXn8dWx/6dQKxFMWxkxz73leoZX421/lKQlKDbH7wV2jdcjMAp5/9ByYPvLCm+3AqOoVn3kJpSRDZ1U/TA7uJXDuANV3AyhSxc2VczcCzXV+wRfVbduXmOEpzDLW9yW/1EwX0wSm0E6NEdg0gBJWGz9FicC0bY7JI9JadCEEVQRbngqIiE7lhG2I8smbnKggiLc3baGvZiabn8TyH9pZdRMKtjE28vurp87LI28EN3bR9/jHULr+f0rMc7EIZc2wW48wE5kQGu1BupMqCKiNGQyjNCQIDnQTXdyE3xRBCqk/uNkwKT71G7omXGyZYu29Q+PgnQrS1S2SzLk98S+PVV0yUZIropu3Y1QrGzCSeaWAVl66TCIqCEAj4CtySNC8o4jq4poVnGgtO5a9GiKLitxqalXlCmn5fcRhJDuC5DpZZnaPkSCqiFKjXE+e+5zkl7SpqIIZl1RAFGVHyW/8kSUVR/JvXtnUEBGzHaHAj/X1GkGQV1zGxzBqe5yDLIWQl5HcZuTaWWWscy7IgCDSvv54dj32BQDRJYfSEv8pcuPzWrasB2x/9j3TsuhtRkjDKOfb8zX+ill1bC4fO33qU0NYepFgIMRSYN131HLfR4ud5np+3CEJdeNmfTTU+73kNAyuhXh5bqh3FKdcY+/1vELv3FgL9HZRf3k/p6Z8CftbX84e/hQeYQxOL+zifBzEaJrC+G2NwnMn/ejH9SpZDbF7/MBPT+yiVfWHocDhNT9dtjI7/lHJl8Wt7WeRtMRwk+aGbUTpS/olXdSqvH6X44zcxBieXlVZLqTixW7YRv2c3ancLYkAl8cBNVN86gX7KzwDe2mtx7KhNqlmkWHApl/1xw70bMLOzqOk2f8GlWllWUPQsC8+yePebXPpwXQtzAUNyz3OxzMqCStyOYy5YW3Eds0GTOafo4+A0uF2ObeDYi9+0/j7LXKg+b9vaqp/OgM9D/NhvN2qIB//l998Tq8wAsY4B4p0bECWJyuwox5780poHRIDgQDtqW9OC7wmSuCzzef/DAmJA8RdPlwnPtrFzJTJf/c6inzHPjpP52+9hL0OPVe3vpPU3FhdDERAQJQVNyzbuXd0o4rnOZalKLXmF1J5W1HXtPvnYtCi/eojsPz/jq2Yvs87g5EoUfvgG2X95rmFBIIaDhHatb3wm2SRgGB6Fgktvn0Q87gdyx9BRmtKo6Tbfq+Vnbi71v/CzgE+7+XW/hjhylKNPfuk9E
xCjbX1s/fBvEGvrQ8tPc+ypr5AfPrL0hqtBfYHzHflnGeHANcxlz84807pkbdD1HEyzQnvbdUQjHUTCrbSldyBKCpa1et7ikuFUaWtCTsV8GsfwNKVn9160SLIsuB61/aepvHKI5CO3gwDB9V2Nt//NJ8M88W2N99+psnGTQjbr8NW/rKGNnCHQ3uXLg81OYmZX18v5rwWCJPsPsEuZCF1FCKe72frQrxFu7jqPdvNFtPeApQD4aj47P/a/EWvvRy9lOPzEn5IfvDxf4kth4s++2/BA+pnDcXAW82/yPFzNwC1VcZcZFF3DuqRmguOYTEztpavjJjYOPAiCgGlWmJrZj6avvk13yaAoRfzaBB4YY7MYw6uv73iGb0GQfOgWkCTk5Jy2Wjot0tcv0dom8e1vaHyiLh0mR+O+HJAHSqIJMzeLc7X94AVxXp3vnUR8/Q7UWIrsgVdw7avsOl2AcHMXuz7+O8Q61iMIAsWxExz61h+/ZwJiON3Nzo//73U1nymOfvfPyA9duYAIoJ+9Oq+dq5tk/u57OPlyYx1hKXi6iVOsXCKz9ChXJjl19gfIUhBRlDGtGq5j4l1G4Wzpife5+qrr+i5+l7HUDf7FcQ0LMSzN61WcGHe44/0BTp+ymZhwsCz/vWBHD2YhS6inH1fXUJvSaJXzZPlF0ecd1nUbf9YQ1SDhth602XEc/Z1XfTYLWRyttmLZqZ81Yu39bP3wbxDv3EAtO8HMsdcYf/uZ90xAjHWsZ9uHHyfeMUA1M8ax733lPSFesWrYDtqBlRneuZpO8Qev4l1SYMa7qHYei3ZiGKUrJwjh1HTfvD4YQFDWyvfZ361TmCOsPv19ne4eiTOnbUzT44lv+wRq1zKJbtqONnRmQb5h8v77UFpbcPIFSj99bVV2BABIEuHt2wht3IBrmpRfex07s/RYSiRGcvNuzFLuqgiK2szlKw9fSYiywsb7foFkzxYS3Zsxq0WOPfUXZM+8/U4f2pohnOpg2yOfJ9G1EaNS4Oj3vkL+X3NAXC0cF+3Q6RVvlm7eQi5/6soFRWsigz1bQO1tR04nkBIRnwS6Gkgi6ro2n6XugXZ8TrRyZsZlZmYu0zvn7lc5fRRtdBBHqyIo6kUKOKFNGwkO9GNOTFA9cGD1QRHf3iB2+224uo45OYmdySJICqkdNxPp6EeQJMxSjszbL2JViiS37Ca5+TpCrT0osSSOXiN35A0qIycQ1SCdd36U0tnDxHq3IqlB8sffojx0FEFWSG6+nljPJhBAmx4ld3QPjl5FDsVo3nUrgeZ2BEGgNj1K9uCryKEoTVtvRFQCSIEQWmacUEs3emaCzP6XEQSBjjseIdjSSXX0NLP7XsC1/BW5lhvuATyUWBIlHJ93DqIaILXzNkItXcjBMFIoSnnwKJn9L615kFfCcbZ86Fdp2/4+REnGKOc59K0/Inf2vRMwwukudv2b3yXesR6jnOPgN//witYQ/7VBElWakv1UajOYRpl085YLVpoF4rEe8oXVa40uGRSNoWm0Y8Oo3S0E+toJbe+j8vrRVWkmyukEsdt2IAgCVrZIbf/S6bRnmtimnxp7V3Ll2XFwqzW//U9RUNvaqQKBVCuJDbuYfPlJbK2KGm/CrgeLwsm3MYtZ2m/9IBMvfAezlG1MWwVBINTSjVUtkTnwsk8ar9f4Yr1biHZvYGbPT/Ach5Yb76Fp241k9r2AaxtUJ4fIHduLqAToeN/DaLMTWOUc4Y4+ckfeINo1QKSjn/zxvaR23ELx9EGscp6Jl56gedftBFNtcB4/TYnECbV2M/nqUzi1Cq03P0Bi47Vk3n6RaM8mwq3djD37DZRYks73f7QeoNc2IAbiabZ86Fdp3XorllYmc3IvE28/856aUsbaB9jx2BeItvVRnR3l6Pe+Qq04Q7ijD7OQwdZWl7k0cI48vYoykRyIEEy04jk2ejmDEoohByLYRhWjlCGU6qxz+l1cS0cOxRBECatWxHVs/zdbKxFMtmHWigTjLYiihF6axdIu7TJ5IQRVQYyGEYPK3DldAp5hNSg8oqSQSm7AcSwcx6S/924q1Wnmlr4FQsGFKUnLxZJB0TMtis/tI7StD3VdG8kHbsbOltFPjq5I5VpOJ2h6+DaCG7pwDYvSc/uwpq8uIVfXNPFs2zfTifmLQI5Wwa6WSG66jsr4WarjZ/DseuHXdeueyJ5v1uTMD9qupVMZPo6Rnd+eFlu3CSWaILn1RgDUWBOi4tsfeI5v/pXccA2ioiIFQijhGFY5h2uZaFPDqNEEYq2Cnp3Cs63Gto2e8wvgeS7VySFqE377oTY7hhL1ZdJEWfHNpFwHzzJ94zFx+f2py4EcjLL5wV+ibdttuLbFmef/kdE3nlrTfbzT8Gk3jxNr70crzHDsqb8gP3SItpsfoDY9iiBdpiCVIBBMteNaJmYxs+hnYjdtxsqWMMcyuOcWNASRZM82v8e/WkQNJ4i1b8DWK4Sb2ikhkN5wE8XxEzhmlUA0SbSln1phkkiqC0ur4Lk2JfMUTet2Upo8RdO6HVRmhxEqyxeyFYIq4V0bCW7pQ04n/QVcSWQpUrg5PEH2778PgGVVOXX2aTw8FCVEpTrF0RPfatz3giCwvu/+y1LgX9Y3ZQ5Pk/3n52j5hQcJbl5H6689QnXPMWoHzmBOZv3VIfdcC1r9BEWfKa+2pQjtWk9453qC6zvxHJfSSwcov3LQz8qC6pK+DZ5hrSozXSk82/aDYiCAGPJXv61Kkek3fkyka4DE+p3E+7cx/caPsKvz2wIX6nP3HAdbu7jUIKpBzGKW2qRfPqhNDmPX/KdtcvN1RHs2UjxzCEevEWzp8le3oW6T6jWCMK4v6rBkk71HY3x/HKcxZmXsNPH+bfTc/2lcy6B09ghWeeWteQtBlFUkNehPmbfdjmPqnHrmfzK+90drMv7VgnC6m52P/TaxjgGMco4jT/wZhbHjRHu3+FliKYsgSsTX70SNpbAqBcojJ0hu2Y1nmbj1B5sgikhqEPAonjlMrHczUiBEdWIQ1zJJ7bgF19QpnjqIY2rE+raC51EaPIpZzKB2pkh/4v0IqoKdKZL51ivUjgwjKUGUUJzMmbewagUi6XW4tkFh7ChNvbsIRFPgOpQmT+A5NrG2AYxKlsLIYdq33Ylrm9imU7/JBSy9glHOEog1U8sus/1SlojdcR3JR+9CikX8h/8yF209fX4jwbmVZdvWGRx+viECAb6eQbE0grFAM8NysaygqLQmQRTQByeJNidQu1tQ2lMkHrgJVzOxcyXcquYr/Z5z9wsHkJMxxHDA10WTZQQB7JwvG5788G1IoSCCvPSTIvNPz2BNLPJ0XEMIYl05RxAai0GiGsDRaxROHaAyeoqOOz5CuG0dpbOHAfBc17+ZAyEESfHl+ZeY3ujZKdRkM7XJIVzLQJCVxjbhjn6MQobK8EmkUARRVuYC7rznwsIPCeG8f8/DIk9OzzYBgdm9z2KW/Gz0wox3NZDUEP13fJzu3Q8gh2JYWoUzz3+dsbd+tCbjXy04RyuKtvdTy09x9LtfIjd4ADyP6tgZ
4r1bKZ49ghprQg7HKJ45SLR7A+HWHoKpNmbfeh5bq9K09UYcvYqo+K2pgihSmxpGjaUIpjsonTmMNjOGWcw06smCIFKdPNt4QAc3dKK0JJBiYaRo0Cc+e76Ah2tbBONpREn2g7CsEkp2IIoStlHF9Vw/UAEIImo4SbipE9c2sfQqgWgT4aZOJCWA59hUM6OEU12EUl0Yy8gWRVUhduf1SLEI+slhqnuOYmcKc17xl4C7CPfR81xq2sVxIZM7MS9QrhTLCopNj7yP+D2757UICYoMiowUCaGkE8vfYXOc5AM3regg80+9yhXvUBZFpHgCQVXrfZ/+1COYaiO+fpfPQxQlHEPDyM9xNW2tglUp0Lzrdoz8LOWho+jZS6u5FE6+TesNH6Dt5vtwDB3wqIycpDJ2mtrkEPH+baSvv7M+tbWXZUIfSLUT7V5PdN0m5HCc5l23o02PUBm9dN1WkBTkcJR4/3YcS8fWqlRHT2NVVp8tCqJE3/seY+DOT86d88hRRt/8/qrHvBoR6xhgW51WVM2Mc+ypr5A7e6DxvudYeK7jP/hE0ZfAM3wbW0GW8Wwbq1L07y3PxTF1JNPwp8rNHajJFjzbRFSDvvWtY+Pall8XzEwgygqRrvW+etT0CGpHM0LA173Uh6axs6X6cdgURg8TaelDUoJUM6PUchME483opVlquQnkYJS5B62HIEkE4mlK02cwK3kkNYgcCFOZHUKUFELJNjzXWX6mKAqIkRB2vkThuy+iHVz5qvKFEASRULCZmjbX+SQgoipRLHuFfffnYVlBUQhdWuzxvQApESe8Y7t/89o2TtHXjzMKWapjp5GCYTzHxsjPYhTmnk52rcLMnmcJptrxXP+mB59KNLP3Wf+mvwBWKcfM3ucItXQiyiqOXkPP+YG2eOYQVrWEFAxhlQsUTx3AMXRsvUr24Cs4WpXy8HEQhPprP8WqFJCCYWytQvF0faXTc3HqSt6FE/vmEbkrwyd8Xxkg1rcFPT+DY/rBOdLeSyCRZvr1H7FagdjeWz9C722PNv42q0VG9/xgVWNdjRBllY33/TzJnq0kujdhVgsXBcQLYRQyhNv7SG2/Gc9xqE0OEe3euOjnBUlCCUf9h5NXr107DvG+bX7913EINLcjqUGEOlVNjoXrMy+wpnLzukuMSm5eRmdpRSozcxJ3xbHz5P49Dy0/RfbsnO909szeecenl1bWWeZZDvqpEUI7N/iJh8Bl6Q+D753e1XEjpwd/2MgMRVGmtWU7ufxpqrXVdb8tKyg6uTLm+Pw+VDkaQAoouKaNrZkosSCIAnbFANdDjgZ8HyvLAYGG8Y1V0hElASkcAM/DKmlIIaWh0uFoJqIiIwZlPMvFruh4loMYiSyoxSYofkuTIMtIySRybSWrpoJ/87W1Erv1FoID/YC/4KIP+fU+R68HocXguRi5aYzc9AUvO1RGTi66mVXKYZUunna4pk5lZGFdx+q4TzM4f1/VCf81x9AwiwvTkS7kLhp5/2aRgmHi/TvIHHiJ2uQwgigS69tGYsOueiaz/CetIEq07XgfA3d+imA8jSCKDL/2JGN7foDrOmi5d78WoqQGCcRSrP/Az9G27XafVlTJc+ibf7RoQJx96zm/8cHUKZx4C0FW8RwLR9f89+o/5sKp/eA41CTJl+ZzHKrjZ+uLYC6OqVEaPEJ14kxjduFoVV8xXq/Xz2SxUdx2SjW/TW4VOJdJriU8w6Tw5EsIikzyI+9HbkminxjGrelLlps808YpzF/hFgQJWQoSCqaQpECD9SErIaLhNl81Z5VYVlDMfO3HZL7+47mNokE6PrQTz/HQpwoYsxWab12PrZkYM2WM6RItH9hC9ewsZqZCuCeFHA/i2S7lk9OIqkS4O4XrOBQPjhPb0oYUVDAzVYpHJwh1JIgMtCDIIjPPHcfMayTvv5/4HbdfdGzngqKcTtPymZ9bGV1B8IPiOVrAuQULc2IS/dTK2PfvRjimgZadID6wg0CyBVFRCbZ0Ux45saKACNC+8/1se+Q3kRR/+lYcP8WJp//7lTjsdwSSGmT93T9H720fAfzaczUzxtHvffmSPMTzaTiOoYGhLfiee+7188qt53im5/99/msX9re7VcNfvJCluhK+sKpkzHUsWANvnQvhFMpoh86Q+NBtNH/mQz5bwnJ836VLHKh+cpjpP5rzYBcEkWS8l+bUJiLhFvrXfYBzAyhyCFFSMa3VK5kvnydw3kGrTWEAxr7pp9Qtd22mcHCM0uExev/drdglDTNTZvrHRxAViWBbnMK+EYxMma7Hrqc6mGX2pZMIskj6tvV4jkvx8ASlw+MIooCaimBkKoTXpVBTEcycBq7rayQuwmsSRHFJEczlwM7lKL34Em5t9RJY7xq4DrmDPyXSvQElEsNzHArH91KdWJkfR+vWW9n0wGcbAdG1LSb3P3cljvgdgxpJsO7mhxqy9w3azVVEzDan87iGhSRLSLEwQkBZtXHVmkOWiN5+LU2P3Y0Uj/h+Q5bt+zatMHJ7nodpVTGMEo5rYtm1xvRZNwqUyuPoqxA3bhzqajZyLQdBlpDCKp7j4hoWclhFjgVxDRvPcXF021f3VSQESUQKq8jRAHbFwHNdpIiKIIrYNRNRkXA0/6mnpmPEt7RTG8036CaebVN+/Q3MiQnUdesIDvT7Xiyy7AfKc0rg7pzvyLJR5/Z5poUxPEzptdfRzzeuEgSkoIKgiODhC28adbFVWUQM+vt3LcefrtQ3k0IqrmUjBmQEScQ1HVzdQlD96ZHf7iji2S6OZjX2t5oxz5UnREUGAVy7/voyLoVdK1M8uboWu3jnBjbd/4uE092o4QSZU28x+Mq3wHUoji1eOni3QVKDbLj35xEkGcfUOf3sP5AbOkx58gyhQIrWpq1IokKhMkKhMkpb0zZCgRSmVWE6f5iAEqMp1gf4clez+ROkk5v8rEYQyRRPU9UztDVtJxRo8rfLHSagzt9uIvM2iWgPTdFeADLFU5Rrc73i2vERnHINMRwg0NOKnIhgVi/PDGutIKoK0fddixgNoR05S/nFfdgzubr52aW3dS8SpPWo1qYxjCKKEmZk7OW6z9Da0PZWFRTNbBVtPE/nw9egTRWpnJyi+bYNtHVuI7dvGKdmIkfnpgaCLBLf0YVVrDH9k6PIsSCJHV14tkvm5VMkdnb7gQGwyzqObhHqSmLlazg1n1Bs53LY+Ty1I0cRVBU51USwr4/EB+5CSiZxq1W0EydXZitQf1o5lQrm+Djm5JRvcH9uCi4KxLZ20nrvNpQm360v9+Ygs88cQVAkWu7cQuK6XkRVwpgpMf30IWrDGQRZov9zd1M+NklkoAU1HaXw9gjTPzhAy51biGxqx7Mdgh1JnKrB+Lf2UhucRQwqlx7z8Q9QPjpOZH0ranOUwr5hpn9wEDEg03rfdmJbOxECMnZJZ+Jbe9BGlk+sXSliHevZ+fHfIZLuBiBzeh+Hv/1FzOracByvFgRiKTY98Fnatt2OVStz+rmvM77vx3iOjYBIR/oa8qVByrVpXNciGmolqCYYmX6djuadxMNdOK5
JQIkzkdmHYVWRJZVYuI2J2bfrga8XWQoSVGP17XYRj3TiuDYBJcZ45m1Mq4LnuRhmiVzpLIloD4lI97ygaIzOUtlzkqYP3UhwQwfh7b2Y0/nLFnFZE9RFa+1MgeJTL1M7ePqyBVxsR2d49KV5Ln5rgdVliqbN7AvzFwPGv71v3t+14bmiv10xKB2ZoHRs7gssn/f/sy/OjeVoJpNPLdL65XkNRW2zWsUcHSO0dQuhZBKnVKL04ksYI2sniBBoi9P1yZso7h8h94+v+x60nofnuDTd2E/TLeuZ+PZbGDMl2h7cSfsj1zH0V8/juR5yPER8Zxfj/7IHu+o/6TzHRQwoJHZ2M/TVF9Em8nR85Hpa79/B8N+8RPK63vqYezFmynNj/uXzeJ6HHA8S29HNxDfOG9N1cU2b8vEp8nsG8VyPnp+7leQN/ejjhcsy8FkMie4tbP/obxJJd1OeGiR7Zj/Drz3xngqIgiTTee09pDfupm3bbVh6lZM/+Tsm9v1k7jOCiCTIWLaG7fjlFlFScFwL29GwHRNJUnFcC8MqY1gVHNdEkvwuIts1kBzVt3eQAo3tHMeob2djWBXM+nayFKQluQXXtQgFkujmBQmA65H7/huoPS1EdvWTevhmzKk8tcODa5VErRqeaVF984ifLcbCSzZsLHtcXCLhVmQpOG/MSnW6oca9Ulxm79HScE2H4sGxxo94rWFlMoSukGRYeF0zoiySe+Ukxuz5cmUCkYFWtPE8lZNTuLpF7rXT9H72/UiRAHZZB9ejdGic2tDF5NLqUIbKqSnskk5x3zAdj1yLqMpE1reijeWonJyuj3mG3s/e4Y9Z8ccsHxq7aEzP9RAkgfTdW1GbwoR6m7EKtYXbbC4TsfYBtn74caKtvWj5aY4++SWKY0u7IL7b0HfbRxm481P1DhM48/zXmXj7mXmfcT2bQmWE1tQ20s5GitUxqnqGVKyPvo47kMUAE9n9qHL4ovEFUaaj+RpEUaZUHadUHScWbve3kwJMZPajyhEuNCtT5BCup/omYwv86K3ZIrP/+ByefSeRXf20/rsPkH96D+W9J3Er79xU2rNsSs/tAUkk+r5rkdtSWKMzOJWaX1e8xG/Y1Qys8YtV2AVBoq1lF63pHciy/1CRpSC6nufM0I/RrlRQFBSJxAM3IygS2pFBzIksblVb9pPHc1y0iSuXQViT03NM/DWGoEi4tovnzD9ZQRIRVNmnG53zTq5Tj87Bw8MuLbJYc14dxbUdqC8Siars12HPjWnb9THPOe6BtcCYTTcN0Hr/DjLPHSP7yknaP3TNmmcGaiRBONXJ9sf+I5HmLmq5KQ5+4w8ojb93aofg1w/7bn+M3tsfBUGkmhln9M3vM7bn6QXr1ZniaUrVSd803jGwHYPRmT1Ioorr2ZhWFV0oUNFmcc4jE1u2RqZ4CsMqY9karmsxOvPm/O3EIhVturGd7eiMTL/uq8J44LgX/+jFUAA7Wyb7rVdwdZPYTVto++yDNH34FvQzk35PdE1f0QzC1QzKr19IS5trs5IEBQ8P11t8GiuEAqR//mHU/k7kiRg0+wAAIABJREFUZIzglj48w/SPw50f+C+EfmqUmT/5p4telySVVNNGxiffpK11JxNT+1CUMMl477IaHhbD0nYEHWmit24nuLEbz7DIffN5Ct9//apZ1TKnpq6YuKw+UUBUJGJbOyjsH0UQfVKtVaiijWRI3bqBYFcTZrZCYlcPxkzJr4HW4S3yRYe6UwS7mtCnisS2dmLMlHANi9pwhtTN6wl2NmHmFh5zwfG6mnA1k/KxCQRFQmkKY2YqazZFCac62PLw52geuBYEgfL0EEe+8yeUJi6/K+Fqgk+7+TS9tz2K5zqMvP4kJ3/8t5dcvPM8B8OaP4217BoWc3xZ17P9XvVzf7s25doUmpHDOs/o66LtXAv3gl4uy740D7fl395FaEMnUjKK3BRFUCREVSYYDRLsb68f9CWHuAjmZJbyGyfmXYeY2oKHi+nU6Ixtx3I0ZqpnsNyFEwGhrnPglKs45ZXRZdzKwucsICAIIqXKOE3JAWxbp1QeJRHrQVVj6MbqkrGlg2Jr0rcnFQQc3cCaLVw1ARHAzmTQjh3HqVZxtbWdHmgjOTLPHyd91xaSN/TjuR6V45PMPneUwtsjBLtTdD52AzguYlBm+oeHcU17ye4f13JovXcbYkBBSYaZ/O4+XNOhsG+YUFcTnR87N6bC9A8P4ppOo1NhIVROThHb2kn3p2/B0S0QhWWvPi+FQCzFloc/R3rD9f6+Zkc59r2vvOcCIkDvbY+y7taPIAgCY2/9mDPP/9Nlqa0sBtvRmS0cW/NxASI7+wmsa73o9XmiISt9WC5QhokF0tiuRVhJIgsKihIkpMSwjIWDoqubZL/+dKP7ZiW4ePXZh+e52LZGMJCgpmVoSW+lVIoSDCS4nJt/aY+WRBQpWleMmc5jjl15YYaVwKlUyD7xJHgeTnX1hM2F4NkOsy8cp3JqCjkeAtfDmCnhOS5Wrsrkd94i2NmEqEhYhRr6ZKHuqesw+g8/xcwtfDzaSJbZ548jSCJ2WUcby/ndPbkqE9/ZR7AzOX9Mz8OzHUb+4dUFxywfn8QqvoqSDGFXDJ/e5HqXVVYQZRUlFGXbI79JesP12KbOiae/SnHsJJXplfEYr2YIokjb9jvouv4+Et2bwXUZ2ftDzjz7NRzz3cdVdXUTp7a29fuFPFVczyEghYmoTQwV3qItsgnhUuagros1srb+3Y5rMjm9D8MooRtFBnrvYV33+yiWRq+scZUYVBFUv2vEKVYbTeZXDTyv0ad8RYY3bWqDCz8I7JJOpbSAp4jHotuAvzBSG87gVC6+ee2SRmWhWuQlxvQsB20kizay6C5XBDkUZf1dn6Z95x2o4ThGOcepn/wdkwdfvOq9X1YEQaRtxx1se/g3kINhHMtk9M3vc/q5r9V7wd99mP77ZxBD6pqO6enWRRlzUZ+kNbKJnDaG6dTQ7TKW+7O9Zp7nUiyNci4rPHX2aURRxnUsHHf1pm3LXH32d+qZ9qr7Kf8X3h2Q1BAb7/n39Nz0ocZrM8deZ+I91qEC0L7jfWx9+HHkYJjsmf3khg4x9PI339WBXzs6vPSH1mI/donh4pxIxHR1+QtugiIjpZNIkVBdZPbScGs61ujFWaYgiIRDLehGvm5eZeA4BuFwC6ZZwbZXl+kvGRTdmuG3DoUlaHRhrGpf7y3IMqGtm1G7u3BrNbSDR7CLRYLrBwhuGMCp1qi+9TZutYq6rgcpGkXpaMeNhsm9eRop0UT4ho0IkuRzuPa8havryM0pwtfuAkFAP34Sc3wCKR4j0NONGIshhsNox45jTUwS3Oz703iuQ+3gEazJyxddGLjzk3Ttvr/xt1aYeU8GxNatt7Lp/s+iBCNkz+zn2FNfoZZdWxGE9zKCUhS3vtACEJTj2K6OvUSGpq5rJ3bn9ah9nYjhwLLsCMyz48z+1bcvel2SAmwceJBCcYjxqb1YVhUQaE3vIJc/tWpRiKWNq2ZyOLkyUjiIFA8jN8WwJldvDrUYJEUk1R
8ne7bot9PVaTCiIhJrC6GGZEpTNYzy1ZGphrduQeloo7b/IJ5h4FSryOlmQju2UXn9DZS2NiI3XE/5xZf97psNA5RffZ2qruNWqgTW9xNY10Pxx88S3LSB0K7t1A4cJnrrzegnT4HrEdq5DadaRQwECO3cTu3wEfSTp3Cq/o3oFEtoJ0+jdrQT2rIJO5fDM1Y+bRAkhWAiTc+NH6TnpodwHZuhl7/J5OGXfAuE/Ltf4eYc4p0b2faRzxOMp1FCsblOnMrFNah4QiAaExFF0HWPfNbFcaApJeI4HqGwiKxApeRRLPgMCEmGZFIkGPIXJ8oll1JxbuoZCEKySURRBGwbinkXTfPfj8YE4gmxsV255M0/lriIKMw/lncGAolgJ7ZnkquNgCCQDvdT1CcomxfzCc9B7Wkj/dlHCAx0gSjgGRae6yIGAyAIjXtXUGWoJwtOoeIr6Sx0FIKIKCqIokz/ursZm3wDTcuhKOELzKxWhmUYV02hnx1H6WhG7Wgm0NeBNZVdcx5cIKbw/i/sYvDlSRzb5ezLk5Qna6y7oZXrPr0RURaYOZbnra+dpJabq8XJLWlAwM7lLnL6WwnEaBQp7C8oOZqGW760nLnS1oo1PTsvO1NSTbjlMtbkNE65QuqRhzhH+bamprFmZueO0fWwc3msqWnEUIjwzm0Yg758lzk5hacbhHdsQwqHfXHSQhFrYgo7W2/dk2WCWzYhqipSLAaugyBKK/9aBJHuGx5g84O/UlceF5k98Sann//6FVl5fScR79zIrk/8H4Sa2hAEgcyptzj0rT/Gql1cJ29Oi3z28RjpVhFJEsjMOHz1S2Wysy6f+0IMURSQZP9zs9Muf/knJWZnXFpaJT7zy1Fa2vzANzPl8BdfLFPIu4QjAo9+MswNt/iyerWqxxP/UmXv6ybpVpFP/2KUnnUSngfT0w7/+DdVJscd0q0in/1cjOYW/1hmpx2++udlctl3on1PIKqmaQp1Ay5BKYogCESUJgr64pmZoEiEb9xGYNM63KpO7e3j6MeHcTWd5EfvQgyqvsezZaP2tBO+dhOuZpD92g/QTyxeErDsGiNjr5Ju3kJv9/uYmjl4WQERljN9ruqUXthPaFs/cipO9OataMeGcAqX6Ux2IQSBWHuYVH8MBIFATOXNvz5G1/VpBl+ZZOpwjl0fG6B5IE4tN/c0Stx1J0o6jT44RO3IUczx8VXxFoMD/Q1pMmN4lMIPf3RJ90BX15DiUZDlRqBzajXEcBgkCTmZxDlP2/EiGpMoIEUjCIqCFIvhVDVcXUeQJUQ1gCdKvh+L7Vs8XCh2IUXCqJ0d5L/zJMFNGwn0rVvxOQN0XvsBNtzzGcQ6VcLSq0weevE9FxAT3VvY+uHHCafaqUwPM3nwBaYOv7xgQARY1y+zcavM7/+XIvmcS3NabGSDiiIQT4r8yX8r4gGf/504d94X5Jtfr1HMu3z/OzWmpxxCIYHf/S8Jtu5QeO1lg53Xqtx5T5C//kqFU8ctQmGBStkf8677QkSiAl/8byUcB371P8R44OEQf/dXFXr7ZTZslvn935s7llLxnexndn3bjTo8z2WmepqadYkFT1kmuKkXXI/KTw+Q/5dn/CYQIPb+6yAepbrnKE6uBLKEcXqE5EfvJnbn9ZjDk7iXELZwXIupmQPYjk5H23XEoh1Mzywu+LsUlhVStWPD5J98heZP3kPkuk1YswUKT/0Up1Rb0x9P7myJV750GFESeP9vX4MgCSghmfJ0jczZItWsTjAxt7ImJRME1q1D7e4i0N+Pq2mYkxOwivvFKZcJ9PUhKgpyczPlN97Enl18KqAdO0Fk93U0PfwgTqlM7cAhrJkMdqFA08MfBKC2vy4r5Thg28xLrz0PKZkgcd/dCIEgldfewK1U0Y6fJHrbzQiSiDU+iZ0vICcTuJaFd555l1vTcAolEvfcDYKAqxsrYvELokTndfey8d7PIKtBpg6/zNhbP8K1LQojV4ZD905AUoNEWnrY8ehvEWnpoZad4PB3vrgkz3J0yGZsyOGXH4+xb6/Jy8/qnONfOw4cOWAyOuwHhmOHLLZs9xkaagA2b1P48MfChMICHZ0yyZQ/Jd5xrcLQWZs3f2r4P5vzqlA33KLSNyDz2//Zt/bo6pao2wUxMmgzPurwS4/HeHuvyUvP6Jc3dRaY4x563gpnfR4VM8tw8W1foMJZXnIkSCJKSxKnUKb29olGQAS/BRBR8C1OAGyH6p5jqOvaid5xHaEdG6i+cfiiMV3HYjZzrC4b5pHNncSyarS3XoNlrZ5Otbw803YoPf82nmGR/NAtJO69AbW7hfIL+zEnMrhV3XfCW+EFdqv6eS1tLnrJJL0x4afjqSCb7+8h3hkmGFORFQlBFBq1RgClrc3PtgQB1zSwZmdhlcRyp1DEmpom0NONFAoR6Om+ZFC0szlKL77iG1y5bkNdp/Lam3WfFxdX959u2rET/k14fmuV62GOTVB64RXfE0b3r4V2/CTG0DAg4NUtV61MFvuV13wxzjo8y6L47PO+Yrnr+h4ey6wnRtv6SPXtZMO9/x5JCTB99FWOfvfPsY219Xp+pxFMtLD14cdJdG9GjcSpZSc4+M0/XBbxPDPr8md/UGLrDoXb7wpw6+8l+OP/WmR8rO7rfcH6wDl686d+Pkpbh8Q//32FfM7lP/1fyUb8EUUBd5GfiCQKvPqCwdPfnfsOymUPz4PZWZc//X9LbN2pcPudAX739xL80X8tMjm+zHtdEpGiIaR42P9vJODbi+ITo52agVPRcMr+P8v5DRl2GVkMEJRjjbM3nSqut9i2fkeLW9Uvyvpc3USQpMYxgS8goZ8ZI3LrLkI7BhYMio5rMj17kAY7pk7RqVSmLtlyuBSW5+bXnkJpbcIzbfSzE8Ram4jesIXwNRuwZwtYk1mcql73LF7mnj2P7D89g1P0ychWzWZ07yzXfWoDgigw+OokrZuTFMeqdF7TTGogTqQ5QHly7qZRmlMIAf9CWtMzfl1xlXANHXNyikBPN8gyans7S1HBPcPAMy6wX6yr+Mx7bbFpuOPgXmif4LoXC9y6Lp55ccDz97/EQV6Apv6d7Hj0twklWwD8gPi9L7/nAmKoqZ2tH3680YlTnhrk6Pe+TGl8eYrqLW0iobDA0UMmw4M2//n/TtK7XmZ8zEGSYNtOla51Ep4LW7Yr7N/rfxE9vRKDZ/zMrn+9TGub2AiYxw6bfOaXo1xzvcrwWZtAwJ9AZDMuB/YZbN+lYtuQz7vE40JjZtDSKhIKCRw9aDJ81ub//H+S9PbLSwdFQUDtThPZ2U9oczfBgQ7U9qaG9cc5eI6DOV3AGJpGOzFG7cgQ+uD0JctQYSVFOtxHVG3G9kxEQWIovxfNXmwK7eGZNoIqIyjz9++WNQRVRkrGgTner1v1rQrk5uSix+FdFIQ9wuFmdKNUX41eOZYVFON3X0/01u1IkSBCKNC4qKIio3amUTvTK96x57jkn3i5ERQdy+XUM2PMnvT7FfNDZby6MXv37hbatjQx8voUuaE5tRopEm3YEdiFAk5l9R0tnmXjF
Px9C6KIlIiveqzlwJqextV+toGoqW8HWx9+vBEQZ46/wYmn/8c8Wfz3ApRwnK311kTHMjj97NcojBxdkZpPT5/MRz4WJhjys7uRQZsTR/2Hnev6if8vPR6juVmkWHR54Rk/+3n5OYOHPxamd0CmlPeYGHMaFab9e022bDf4xV+PYpoe1YrHD56okc2Y/OQHOi2tEr/xOzFcF0pFl+9/R2Nm2qS3X+bDHwsTDPrHMnTG5uSxJVgYskT8li0kH9hNcH0n0nlZ2IUQJIlAZzOBzmaiuzdiDE9TeHY/xRcP+p7rCyAe8HufXc9mpnKa5nAv0qUWOFwPO1ci0N/p1+LPg5UtIIYCqN0taAdONmaPYijoz8RWaJqXbt5KLneK4pUMinJLArWjeVU7WAlc16OW1REVkXBq7ksce2uWsb2zOPZ8lV4hEGj0UjrVGp6xeka95zhzbYKiiBgKrXqs5cAplXFK5aU/uAZQo01E23rZ8egXCMabKU2e5eiTf45emH1PaSDKgTADd32Kls03EW7uxKqVOf70f2fq0EsrJmQfO2gxO1VGDQi4DuRzLoW8nzm5rsfbe0xe+ImGogoU8y7ZjP/e8z/ROHLIRFX9RRTHAUP3b9pK2eMf/7ZKc4uGqgpYpkdm1t9uetLhf3y5TCotIcs+7SY747935KDF9KR/LI4DhfOOZcHrkIrR9NBNJO66xheFqM/fPddfuJtTpsGv5Umin+jUhWCDG7to6UgR6G0l993XsGYWvkc0q4gihTCcKo5rIQqLhxPPsjHOjBHc2kdgQze1t080yj3G6VHwIHLTDoyhScyRKaRwkPB1mxGjYX/xpQ5RVEgm+qjVZjHMMunUZgRhLvMUBIF4tIt84eziX+4SWFZQtCayaMfWlinvWwDMTStFRWTjB7rYdG83kirOa2B/8YsHyA1eHEAE8bxGddedtxCxigOa1yu8msb1qxGRlh62f+Q/EO/aiCjJlCZOc/Abf0gtO/5OH9qaQg79/+y955cd933m+al8c+jcjU6IjQySADPBJJIiLVGybMler+31jOUse7w7O+fsOfsH7Iud8e6MZz1ea33GGtlyVJYYxEwwgETOqRvdjc755lu5fvuiGt1odkCjAZAUB88LHNx761bVvX3rqd83PU+CLU//a9bd8zSSLGMVp7j40l8zfvY91tI/ZppirpCyFBxn6dcdG4ZWeF+1IqhWln69VBSUiotTLWZ15XO5FnI8Qu0vPULmmXvmIjqvWMUdncEZm8EZy+EXq+EKUAJJ11BSMbTGLHpzFr25BiUZ5h6zT9+DbGhMfPu1sKh6DQr2OBIyAsGG7H1YXhlnBQUf4fmYZ3tJPrkPo6MJOarjz5KiMziO1TNI7J6tNPzR13CGxlEzCbTmegLboXpqPuWhKgZ1NVuZFAF+4LKh83NUqpML7EOi0ZtbwK2KFAuvHqH07jJq2DcBrzAftukxla5n2hg6NsnExfwCDcPyxDLKG44bxjKyjGzoSJq6dhGE2X0AcwrfP++I1a5j2/PfINO+DYDC0CXO/eQvPnOEKCkqGx//NVr3fR6AgQ9/ykzfKSbOvf8Jn9nHj/hdG0g/sQdZUxF+QPXCAMW3TlG9MIQzOr28NYEiozfXEO1qI/3YLmLb25E0ldT+XVRO91F8e6FBV9XNISHj+BWqbp5AeCtXooXAuTJK7ntv4E3lFzRkC9uh+MqHaM116M11qNlk+LznUz1yHvPM5bltHbdM9+UXEAg0LUq5Msa5i9+b67yQJImNnc/cVFfM6nyfC2X826e5AICshJXl7teHKI2trpwemFWE54Wez+k0SjyOZ61NIUTSNNSamvCBEHNTI9eiaUeWzvsbAMGVQ5NUpm26nlqHrEr4bsC5FwdRNJmup9cRSWlM9RS58Mow6ZYYm59swYhrjJyeYeDwJOsfaqRhawa75HD+Z0PIikTXU61EkirT/WUu/GztvrWyqmOkatnxlT8l274d1yxx4YVvUhztoTJ56+waPmlcbStq2rmfTPt2As/lygc/pvetf7ytCjf/8K0K1eotaEWTpFva0iYZGtln9iJH9VDm7vhlJr/zOvbAxHy4vBz8AGdoCmdkGrN7iIZfe5LEfV3Iukr26b2UD10kMOeLfZlICw3xzVy7Ch8snFyh0BLySPHVDxdXtwOBdb6Pqb/+EbG7t6C11BNUbazuAarHLhCUFl6LYrbnzvMs+gfennPygzACLZaGcNy158lvux3BauE5AeVxk44Hmhg8MkHgBnNfd3XGJnAX3+HcySkCy0KORNCbm9EaGvBmcmv6oSnJBEZnJxBWi90l2nHqNiSxig4TlwrkBsokGqKk18V57/89x64vddDQlWbsXJ7hkzMomszuX+zg4usjbHu2jfELecbO5/Asn0R9hM4HG7jwyjDrH2ygaWuGyoxNNK0xdGKamStr/4Pq8Qybn/qfqOu6Fz2ewSpMcf6Fv2Ty4mEQn2TD762FpGi07XuWTU/9JqoRw7NNJk7+ALv3xzQ1u+RnZGJxCSMiEwSCiRGPRFommVJwHcHkmEc0JpOpU/AcwcyUTzItE4vLVMoBhZxPXYOKboS5wfyMP/f19fbcmuH/VP16KvnRW0bgsa42jNawiOaO55j5wbvYV8ZvLHsQCJyBSaZ/8C56Wx3Gujr0dbVENq+jempeMk5TotheiZIzObdKu65KjhDLtvsI18O60I99eQhJVebTaytEfkIEVKoTi56fnD6/gChvFJ8aUkQIjKTKvf+qi82fWxfOOM9+2Qf/6hy5gcVE4QwP45dKKOk0ajZDYt9e7MEhgvINkoosE9+5E72lOTwV18Xu71+02aU3R2jfV0/7ffVEUjrFsSqe7WPmHeyKhx5TadmZpXl3LaXxKtGsgaJKaFEFs+Bg5sI7rbZOJdkYpWFLmsqMTWXaZrqvhKLLNG3PUrchxZHv3LiIqxZL0fULv0fzrkcBMHPjnP/pXzLVfeQ67/z5Q+veZ9jy7NeRFZWJCx+S6z9Ds/4zHvjFCAOXofeSw4NPxrAtgSJLHP/ARJJh41adVEbh3Vcr7Lwngm5ITI379M1u79gCRZU4/E6VL/1aiqE+l95LDqWCjzd7nenxLJoep5IbQpJkUo2bKI73ICkqidp2VCOOXZ6mmh9FBD5GPEs824okK1Tzo1jlKWKZZho3P0xhvAezME4lN4QIfOLZdRjxGly7TGmyDxH4RNNNyIqGEc/iOVVKU/0IfzExRzY2I8cMJCQqJ3ux+m+QEK+BPThJ5XgPRkstclQn+hFS9AOXhF6HpkTmehNNr4i3hEXCqjFrNSycm0tdxaJ1OE7puirly+FTQ4q+G9D9xjC97yzWJzQLSzcl+8USdl8/eksLKAqxHdtxxsYovnVg9TlBWSa2bRvJhx6YK+44o2PhnPJH0LKrhtrOJEZcQ4+FX12yMcreX99EuiXGmZ/kSDfHiGV1zJyNVXTx3YCRUzNs3N9E+756xs7nGD+fZ+j4NKohAxJm3ibREKHtnjpkRULRb7zII8kqm5/+rTlCdKslLrz4zc8kITbveYKNT/56SIjn
D3Lhhb/CKk5h7NQpl3z8AKrlALMiOPxOlURSYftdES5fsPE8iMZkOjfp1DYo/PDvihTzAZt36HRu1ui94FDboBCNSgxfcVE0qJQCruUgRdGpW7+XSn6ESLKedNNmihO9pBs3o0WS+J5NpnlrOPFRzlHXuRfPqeKY81VUEfjIioYIvLnKeDTVQLZ1J5XcMInaNgCK4z2kGzeiR9OUpgZCn+RloNYk5/LqznhuWcXq1SAwHZyxHMIPkBQFNbOwjUaRNGyvTMEemyPF6ynkfFyor9vGTK4Ht/TzTopOQO+BxYSoJzQ8e5kltBCUPjhEdPs2tLo6JMMg/cTj6E1NlA8dwRkbJbAd8P25H5MkSaEEmqaHq8t77ia+ZzdKJmwQDWyb0sEPCMzFIc34+Tz5wQqBLzDzNumWOPmhCj1vjRD4gsqURW6gzGR3gcAL6H5zBBFA/wcTTPYUkRUJu+xil11O/6ifSDIs7FRzNkgSF18NCyB2+cbulGokzpbP/zYtdz2JZ1dxKkUuvvRNJi8dvqH9fLohEUnXUd91Hxuf/HVUI8bo6be5+OL/N9dWNDLg4fsmu/dF6NplEIlKZGsVonEZzYAtOw0Ge13qGhRcVwDh6xC2zUyP+1w4ZWNWLcaHPSrlKpt3GOy8x2Bi1KNcDH9DdiWH71rEs+tI1ndSmuxHVlSSDRswYmlcq4ys6iiqgapHiSRq6TtygMCfJ41qfhS7mqMw1o1TzYMkE0014FTyzAycJFHbTk3bborjPQghMIuT5IYXT3Us+Ia0sK1GuB6B5dy0aEtguQjXQ4rooczXNbD9Mm6QIaLO9/OWnWlWk1iQNBW1LkNkaydacx1KIkpgO5QOHMfpC+XbJFVBjkUQgSCoVOc+iyLrZDKdy+8biWSihXyh/wY/7TxuGSlKmgKqGuaOPT8UQLiZFplZ3PUrG+l/f4yJC0v3Sjnj4+Rfe4OaLzyHkkwiR6PE776L2K6duGPjOCMjeLl8OP0RBEiGjpJIoDc1obe1IkciIEnhqKBtU/rgQ8wLF5fMS5p5BzM//8O2yy7jF/IUhq8xG/I88tWFPw3fDSiOLrxrWQUXq7CQ/PJDN9ZsKqs6tZvupnHHIzTvegzPrtLz+rcZOvrqbHj12RF1qNmwm52/9G/R4xlkRWH09AHO/vDPCdxwNSTL0LFRY/tdEVxXMNDr0rZeZ9e9UWYmPV77UZk990ZobFWZHPMYHXSZnvTZtz9GMe9z/AOTcydttu4xmJn0MSsB+x6OYUQlus/amNX5FVrgu5SnB6ht3w0CcsPnEEIQeC5T/cfJj86OdQY+eiy82UqyAj4LiytChM/PQswqHSFJyIqKuGryLgT+KmZ5g0o4aippKnJED6fv1voTkECOhh0dBCJ0rrwGplvAdAsLwuXljNquhZJJknhoN6lnHkCtS8/NYPu5EubZ3jlS1DuayfziYwhfMP2tn+Dnw5Y8w0jRtel5yuXl5ezi8cYb/rjX4iZExxT05lr01vqQ7ZNRZEMDWUY4LoHp4OWKOEOT2P1jBOXFf1RZlYjXR7EKDp7tk26JIykLTXKynUmGji4/g4zvUz15CiUaJfXofpRMGkmWkXQdo70No73t+p9FCALbpnzoCMW331k8ercMSuMmpfFPrr2l/YHn2fTkbyCrGoHv0Xvgnxg89OIndj63CzUb9rD9+W8QSYX9ZxPnP+DiS9+cI0QIO7POHrc5ezx8rrZBIT/jc/Q9k6H+8Obz+k8X33Qunprfx9TYwr/7j/5+OesNgVWaonHzQxQnenGtMoFnU5rqJ1nXiRHP4lplihOX8ZwK1cI4TVsexnOqVGaGKE+HvhFSpshqAAAgAElEQVRWaZr69XspTw1QnOyjWhgjlm2lafPDqHqM3MhHbUVXhpcvI1wvnA5pyiLHIiuqy6wEOWpgtNQiqQqB7eJNL6wqpyMt+MIlZ66+m0FOREk/9xDJz90brgJNG79QRsmmZslx/toPKiaSoRPd2Ep54zqqR69+F4J84QoXe36y7HE2dj51U+uBGydFVSGyqZXk/t1ENreiZhIoydjiecpAENgOfqGMOzJN9WQPpQ/O4edKc3fKWE2E/X+yi3MvXGHszAyf+9/vQb52pEeCbFuC099buTs9ME2KBw/iTk2SevxxjLbWcDxodgW4FELxirBh2y8UKb3/PuXDR/FLH8+Uyc1AUlTa7vsC6x/9FZAkXKtC/7vfY/DDFz7pU7tlkFWdrmd/h2i2kVhNC9GaJoqjvXS/+i3KEwM45ZUncYp5n4NvVCnkb48Sq13JMXTqZVynSuCFxFoc78Gp5lG0CIFr47sWge8y1XcEI1EDkoRTnSeX6YHjGIk6PKeKCDys8gyTfYfRIgl818YshpXV3Mj5+VXjCrB6R/GrNlJEJ75nA5EDp6meH7hxgpDAaKsnfvdGIBSNMHsWqpIrsoosFGRJmas+i5XkqSSJyNZOkk/uQ1JVSq8donL4LH7ZpO7rX0bNLhyr9WYKuKNTRLo6MDa1zZGibZcYGHoPfwWj+7AlZ+0jv6snRQnU2jTJx+4i/cQ9qLWpRUS4YHNZQokaKFEDraGG6PYO4vu6mPn+AawLA2Fj5ozFO39+GrvkoEZUPNPnrT87uuCYD/zO9lWdnrBsqmfPY18ZJLJ5E7GtW9EaG5Bj0ZAg5XBKRvgBIvARlo2Xy2F292CeP483PbOifuKNQpLkm2oLWGavJJs6qe+6j879X0WWVYaPvUrv2/+IWy0ReJ+ORPetwMYnfo11e59BVsKfaHG0l9Pf/fer7rN0HZgYvX2+GSLwqOSGFz1nFhaHda5dxrUXd0S4VhnXWvi8XZ7GLi9UtneqqxvFNHtGcIanUWuSaA1Zar+6H/9vZ/sUl2va/igUGaOtnrqvPYrWkA3PaWgS8/zC7910i6xL7iCmpfGCUAptqtqL7S9NRpKuEb93O3I0QvG1D8l//038YritWKK3WDge/nQBggCtaX5CxQ8cypUlzOKuwcTU2RULUtfDqklRrc9Q9xufJ7Gva173jFDYwS+bBGUzJBUBqApyREdJRpF0LVR0jhhEd26goS7D9HdepXL0IoHnU5oNWYSAi68NUhhe+KVOXsrjVFf54w4C/GKRytFjVI4eQ47F0OpqkWMxJF1HkhUCJ1S28WZyePn8bRFTlSSZRLaVSmGUwL91kzH1Xfex4yt/ih4L76pDR17m4ovfvKXH+DQg0dBBpn3HHCEWhi5y5of/6TPVeH47IGyX3MuHiWwOBSDiezbQnIhSeOMkZs8wzsg0wTL2p3LMQG+pJbplHenH9xDZ0IwkS/gVi9xLh8PCzTXwApvcR5S2V8opSqqCsX4dfqFM9djFOUJcCX7VQvhBaHC1DGRJJRLJoKoRrg2/K9WJFVeTK2F1pKjIZH7hQRL3bUNSZIQQ4Yc7eRmrdwRvqkBQMcPiigi3lyM6SiKG0dlIbM8m9NaGUGiyqYbsLz+GVyhjXZj35HSrHhdeXOzR2f36EFZxbRd9UK1iD9y8Ek2yphPXLmJVc9Q276SUGyRdtx5ZNciPXyTwXdJ1G1E0g0p
hFFWLUN92D8WpPnITl9CNBPFMC55TJTd+iVTdehTVQAhBaeYKrnV929i6Lfey9Rd+b44QR0+9Rfdr3/7MEWIk08C2579BtmM7TrXI5Te+Q37wApWJW+Tf+hlH5VQfxQOnST95F7KmEt3Ugr6uFmdkBmd0Gm+qhF+uEszqDsi6hpKMotYmwxpBS22oqCNBYLsU3jpJ5cTi9FXVzVF1b8BbWZKQowZBxbzhVqHlVn2SpNBQv4uG+h2oSoQgcFHVCKaVp7f/FczbSYqJe7eSfHgXkiITOC6VIxfJv3AwFJit2ivqrpUP6xReO0p8XxeZZ+9Hrc9gdDSSefZ+Joan5kd4JEg1x6jdkEKPaXCN2MPAB2PchJDuLUBAumEz8swgWjRJUmpHUSPY1Ry1LbuwKtNIikp+ohvPtdAjKaxqjlJuAFlWSWTbKOeHMaIpMg2bSda0k5/oploax7vONIORrCHTto2tX/h99ESWmf4zXHzxr7CK08tK6f884mpbUbZjJ/G6dTiVAmd+8H8zdekon6Uq+u1GULGY+u47CNcn9ejOUNwhFiG6qYXIxmbwg7A7JBChIIQsheInyrwIixACb7pE4Y0T4SpxidWlKunUxzeQjbYjhM9UtZ9p88ry4q5CEJhhvlOOaNf/IJKEmkkiqcqy1ieKolNbs4XRsWM01O9kdPw4uhYjlWy7IRX6RZ/tehvIyRjJR3ajpOMIz6f84Tmm/+5VvJnVXZDCcnBHp8m/cBA/X6but55DTceJ37sV49XDmGf7AYgkdR7/t3fhuwGVGWvBdTB6anpBK8zHDdsskKztpKZpK8XpfuLpZmLJRiRZxjbzKKqOa5exq+Gd01c0fM/GsUpoRhwQ2NUcsqISTdSHfWflSVxr5aJOsmk9u77674jVrkNWVHL9Zzj93X+PXbz1boqfJPRElq5nv07TzkeRZJnqzCgXXvgrprrXTohKKoXW0ICsGwS2HfasViogy+iNjWFfqh/gTIzP62gaEfTGRvxiAa2hAQBndBS/VEJvbUXYNko6jaSquJOTeNPh30FvbcUvlfALBZCk8L1BEI6KyjJ6QyNqJgMSeMUSzvjYrD3F7YE3VWTi717H6hsl89Q9GB2NKFE9JD1VWbYWIITAL5vYfePkXj5M6dDFZXORNbE2DDVBb+4giqTRktyO6RUoO1NL79v3cQbHid+3A2NzO9bFKwtUsj4KrbEGY3MbiFlpsSUgISFLMoXSIJl0J65boVAcIJlYh64nsey1yeJdlxSN9ga0dXVIkoQznqP45vFVE+ICBILK0YvE7t5Mav8eJFUlur1zjhQFgsq0RfcbQ4yfzxF48xeDZ32yRtOeU8V3TKI1HdiDJxAiQJZVXKuEWZ5E1aIksm0oqkG1MIZjFZBllUzDZir5IYLAo6Z5G5KkUM4PYcQy181lplo2se35PyLR0AHATN9pzv3k//nMEeLVFWLz7sdDQYeDP2Sm9+QsIa4NSipF5snPoUSj+JUKwvMILBOnUsFo7yD90EP4VRNJlon52ygceBsvl0PNZqj7xa9QOXsG2TAQhJ0NfqlE5oknELaLXy6hJBKIQJB79Wf4hQLpR/ZTvXCeyokTSIpC4u57CGybwptvoDc2knn8CbxyGUmS8CsV3OmpW1rUWwrCdim8eQqze4T47vVhGN1ah96QDWX/1bDLQ3g+QdXGGcthD0xgXRqicrofdyK34v1IkXRMt4jllZCQcAN7ZT1Fx6N64hKxfdtIPrYXbypP5cOzS1poKNkUqWceILK5HT9XxDzft8Qew2q365lEjDRVc4r62m0USoMYRpqbiS6uS4pqXQY1k0QIgTM4gd27dsPwwHKoHrtE8pHdYdm/o2n+NTfAqbo8/Ic7MQsOwp9P277756cWKG6vCrKMJMth/5MEYZIkWNOPMfBdchOXKOUGcawCrl3Cd00kScW1K1jlaVy7giwreG4V37OZGj6JLKs4Vonc+EV0I0ngu1iVGTzXwnWWznXKqk6sppkdX/mfSTZ2YhUmufjSX1Ma7//MSX7JikbXs79D865HCTyXy2/+PVcO/uimq+jRrq2oqRQzL7+EVyggqeqcnUNy3z7soSFKRw4j6zqZzz1FfOcuCu8cAEDSddzpacyLYQvI1d+LJMl4lTL5t99C1g2yzz5LrKuL0qFDK56LWlODHI1ReuMNgko59DO2b2I++EYgBM7gJM7wFMVEdLZ9LhpaAugaIBC2R+B6+MUqfq6Mf41v0kooO1O0JLcT12vCthwEtreC5kAQYJ69TPXoBeL3bqfmV58mfu92nOFJ1LoskqERu2sLxvoWIlvaMDa2gSRRfP0Q7vjSNiO+7zI2cQLbKWPPXGRDx+foyKynWBzCstYunnxdUlTiEeSoHsqJ50trbgYFwn1MFxGWgxTRUTLJ+RMxFLLtSU78cw+TlwoLBGM/Og2yLCQJJZ1Gb2xAX7cOvakROZEIFbplCbvvCjM/XrrpM9wmvHsGjrNIncMxCzhm2GMm8KkWxxe8bpYWPrbK82GE79kLQmWrvHQzupGsoeu53yPbuQMjkcXMT3Dme/8XuStn+Szl1RTNIN7QTuu+52je8wSebdL79j8yeOjFW9JWZDQ1YY+O4k6EfX5XSUjSDdRsltKhQwTVKoFl4YyNoTc3h1a1gF8u4wwPLxrzFJ43F4IHloU3M4Nau4wNxzW9sfbAAO7GTWSfeQZnZITquXP4xY85FxyIkPSKt87+ouxMcSV/lJheQyB8qm4Ox195/36uSO67rxNYNvF7thK7ZyvR3ZvnVL8T++8CQjsQb7pA6cAxSgeOL6+sI3zyhStcvTa6e19CllV831lz5RlWQYrimn8WeRevASIQiECELovG/OEDT1CdsalZnwor3IGYu2NVc/by88+zkKNRolu7SNx/L9FNm8LexGuPKwSBuQyhSxKJvfegt64Lj3fiJGZ3z21p11kOofPcH1LfdR8A5YkrnP/JX5K7svK8688bJFmh46GvsOlzvwEwO5r4twweunWN58L3kDVtsV6hEBAIZH1WTFiSkDQtXEXOFgtF4C8dTUjS3PskSVqw+hRBMD+uJ0kosdick6NfKjH9wk/RGxqIbd9B9vPPMv2TH83lI3/eICGTNOqxvBK+8Jgxb6ArQIA7Msn0f3sB61wf0bu70OoyyIkYkqaEM9ulKs7QOJWDZ7Au9S8aL1xyp7PwvPBGlog3YTul22dcJUyHwHGRowZKIhaayKy2EXQJyDEjnMuEBVWtwA8ojJRJNsao35KeJcXwteETU5grVP/lRIL0o/tJ3H8vSiq17BTLyucVJfXQg+H/IxGsvv6PTX1bjcTpeu535wjRKkxy7if/hfyVsx/L8T9OtN3/RTof+eW5x71v/xNDR166pcewrgyQ3v8o0S1dOKMjyJEIgW3jFwo4I8PEdu7CK+SRI1GMtjYqJ0+u2EEBoQhxZMNGzN5e5EgUrbGR0ocfAhBUKhhtbVh9vai1dejr1uHOOktqDY1hBbVUwuzpJrJxI0o0hsfPJykqskpDfDN5axg/cBf1Kq4GwnYov3uC6olLqL
Up5Hg4ESc8j6Bs4k7klmzoXi3q67bfXuMqL18iKFVRYhG05hr0dfU4A+PXe9uSkAyN6PbOuV5HZ2g+jHQtn1Pf60WLqsjKwjnIytTyIbsUiZB+4nFSjzwUJsdFaM4zFwLLMko8vvKJCYF1qZvAcZB1ncj6TpRU6mO5m2vRJNu//Cc0bH0A1yrjlAuc+9F//kwRoiTLGMlamnY9xvpHv4asqAwefpGhIy9TmRxa9fSBLF+XuwAwe7pR4nGSe/ciaQ/iFwsUD32IXyhQPPg+yfsfoOUrXyDwfArnLlE9fz58o+/jl0pLno9wXYJqlcyjjyLH4lg9PZjdlwAoHztKev+j1H7py3j5POaFC3MrRTWdJrF3H7KhEzgOlVMnccZWnsiY/dZQFA3fd/k0pU6EAF2J0JzYhhfYJPT5FMJEpWdlS4KPIChXcco3FtLLskYm1U7VnMJ2ytRmNyPL11bTQ5Wc22pc5QxO4IzNoDZkMTqbSD6yi9wPCws8FlaLyMZ1JB/eFT7wA6qn570XJEmidn2KDY+2EKuNhJ3suoJddDnytxfnJl8+itj2baQeeiAkxCDAHZ/A7O7GvjKANz2N3txM3a9+7brn5hVLuKNjGB3tyJEIRnvrbSVFxYhRs34X6+5+ivqu+3GqRbpf+RvGzr5D4H52xvUAGrY+yPYv/wmKHioSDR/5GRde+uaSQqnLwTBg0waNs9ez9iTMIZYOH6J87OhcCH3Vu8fL58m99irrdhq4rmD4ko2YVY91JyeZ/Jd/XrpdJggw+3qpnjkTynP5/hxDO6OjTH3/+8RidThOGdeZX6GYPd1Yfb3z5xEEq2J2RdFIJdsoFgfwg9VHLFpzDfgB7nTxpiI6JRWbkwsLKhZ+KQxNfeHQnz9Ka2oXQbDQlyVY5MF866EqBvV1O5icOjtrXPUUpjk9d9uQgHi0/uaOcb0NvJki5pk+ol3tyBGd1BP3EFRtim8em/Nsvh4kXSWytYOaX3oMtTacyLD7RrG755feelxl9y9vxMzbZNsSXPlwnGRjlEhaWza3J0UjJO7dh2QYIAR2Xz+5l3+G1XN57j2ysbzf7bUQjoM7MYHR0Q6KitZwc/JDK0GSZdbv/xobHg3J2rOqdL/6LUZOvH7bjvlJoW7LvWz9wu+jRUOR0uFjr9L96reuS4jbulQ2bVQpVwTnLrjsvVvniccMfvhjkzNnXe6+S0eSQv46c9Zh21aNhgaFy70e/Vc89j9kEItJnLvgUlerYBgKqiphWYJTZ1w2tAb09HoYmmDXXTqNjTKKLHHwQ5t0WqVri0pdrcIHh2zOX5iX8Lqab1SVCJna9SiySqk8iixrNNXdhWnOMJPrQZIkotHQ86daDYtuiXgTgfDIF/qJReuIRmrwfBvHKaPrcRRFRwhBuTyKrifCx0DEyJBMtCDLKpadp1KdJJPuRNfieJ7J1PTFuabpuq89ipKMYl4YpHz4Ivbg1I3nxhWZ+F0byTyxB4DKuStM/8s7cy9X3RwjpXNzBZaPE45b5lLPT+eMqyrV8U/AuCoQlA6cJHHvVoxZVZzslx4m0tVG5eglrEsDeJOFeRe9ULszTEzHIkQ2riO+dwvRHevRGjIgSXj5MvmfHcK7plNd0WT0mMp7f3Ga+76+jYuvDOJZPnt/fTNGUqM0vnjyw2hpQWuoR5Jl3OlpCm++hXX58pq+EOG64Sw0ICkyajZzw/tYDSRFZcOjv0LHA88T+B6B53Lplb9h9OSbt+V4nxQUzaB20910Pfd76IkMM32n6X/v+xSHL+HZ1w+Z9j9ikMsH9PX7mKbAsgSFgmBsPMBx4PNPR/jxT00mJwOaGhXuvkvn7DmX/Q8bFIoBYxM+He0qD95vUFcrUywJPE9QX69w7LhDfb2C68H4RMC+vTq5XIBlCZ56MgKEmZe2VoWz56/283kLVl66niSTamd6phvPt1GBwHdwnCKeb5HNbCBiZJjJhbYSmXQnvu+gKTGy6Q1UqhO4bpV4vJF0shXbKRKL1WPbJXzfwXWrpBItFIpXQlJMriOXv0wm3YmETCrRguuZKIo+uwoFtS5FdEsreksN8Z2deLky9vA0+Dd4PQQBQcUiuqMDWVNRa5LkX1m4CCo7NxhFKQrx+3dgdDRRPdGNdWlgRf+VlTBvXGVzZfCdRcZVpfLw7VfJ8WaKTP3DazT87hfRmmpRElHi92whtn09ge0QVCy8XClUuRbhylBJxlCSMeSojhwxwjwi4OfL5H76PuVD5xf8yEQgcC0fNapSmTRp3JZl7MwMRlJHjSx9mnpLC0osjhAC63IvVl/fmoVthe/PO/jJMnJs+SH0tUCSVRKN7TTtfJT2B54HoP+97zN0+EXscv6Gzdo/zUg0dLDzl/4XotlGtGiSmb7TnP7uf8Aurf5Cev1Ni6c/F+HuuzSGhj1GRn3Gxn0u94YrIteF4yddLEvwyEM6O7ZpzKrB0dSg8NADOrICiYRMuSwYG/epVgWZtEzVDB9fjZKLpYALF11MS/D05yKMjfus71QZHvE5NxuuT7/w0wWFN9spUiqPhpMUnolt57GdElVzGtetICFhWjOUK6MYeop4rAEhAlyviuuZZNIdqGqUiJEGoFAaRtMS2E4RCQnbKeLPKnULBKY5Q7E0FI6wEaBpcfzAYybXQzArKxbpbESJG2GTuOXgjM6sLYQW4EzkcUZniLQ3oKTjRDY0Uznes3CjG4CkqyQe2k101yZEILB7h9ZuR3z1DIRPubJYlWhy+jzBTVxPq1bJMc9dYfK/vkj2+YeJbOtA1jWkmBHmHbJJ9NbrxPFC4AxMkH/5A0pvn0S4C8Mn1/IZPDKOaij0Hxzn/t/eRtczbbimR3VmGWWPRHxWGTjAm54OZcvXimtCIyCcB72FaN7zBNu++AcomoEIfPrf+wE9r337lh7j04BUyyZ2/OKfkmxaD8BM/xnOfP/PbogQFQWiEYn+Kz6bNqo0NCiUSoLGBoW779K41O0RBIJg9gbY1+9z7oLLufMuuXxAXa2MLMPwcMD69VfneWcDCAka6hW2bFbxPJie8Qn8kDuECIs5miYxMRHQ2+eRScuUy/4i4WFZVnE9E88ziUVrMK3w86WS6/A8CxBctf9zPXP24hVYdgHfd0jEm7CsHIpioKnRcPtZopFkmUS8iUg0SyLejISEuCZfJ0kKQeBRKo+G7UGShBACvbkWyQjniu2BCbyZtWuD+iUTZ2iSSHsDsq5htNV/hBRvDJIsozWFRRl3eAJh33xnhyTJRKO12LPf6VUEq9CeXAmr11MMAqqnevGmiiT37ybxwA70ltoFjapLQQiBnytROdlD6Z1TmOf6l7x7ebbPxVeG8B0fIeDYP3aTaowx2ZOnPLE02UmaFk4IuB7BTTqAIUlhb1t40je/v2vQuHM/m5/+LRQtzG8OHXmZ3gP/dMv2/2lBorGTbc9/g2TTesz8BH3v/Au5/jNYhaXnYa8H2xYc/NDmyoBHEMCBd+dvjj99yZpb6Y2O+Rx416YmG4a6Z8+Ff7uqKbjc6+G4gmIxwPNgYtLH9wVnzroIAeWy4NARh1wuw
PcF5897NDTITE75NDXJtK4z+Pt/Wv5mWzGnKFdG8X2HXL5vbuVXLA3PhXVB4JLL9xKLhrqAjlMml7+Mouhz87mWXcBxyvi+jUCga3Fmcj34vo3rVrHsPEHgMZ3rJh6tnSPZutqtVM0pPM9CycTn5prd8Tx+Ze0qKsJ25khVUhXUbOI677gOJAlJVwmqFv4SKvxrgaLotDbfx8DQ+wtI8WZxY8rbQuAMTzLzw3covXsKY0MLxvpmjM5mlGxitmEWhOvhl03c0Wms7iHMi4N4U/lwGma5fJ8IiTGS1NCiKiPHpxgKBIomI5bJiQjXBd9HUuRVF1SWg6RpKFfziEEwb5MqyeEdX5KQlVD2f7X+yUayhrrNe9n81G+hxZJMnP+Ay2/9A2ZuDN/+RGV/binUaILtz3+DVPMmYrXNmPlJTn/vz9bcVuT7cOacBx+xQXrnvXlSPHL02pUBnDq98CY2Nr50dDE07K/4+uGjNs99Pkpnh4rvw6EjS2/nuhVmct0Lnquak1TNpaeVbKeI7RQXPF60jT2vym1ZOVhioVcuj6DIKtnMBjQtimlOz62MlGiYpgLwihWEtfYbu3A8/PJsh4kiocQja94XMKt1WkGty6ypj3gpSMhoWgLBrU09rcmjRVgOzlA4U1l67/TcjLGkyiDJs2Kz4fRA2IJw/fyDJEu03l3H7q9upGlnDS/8bwep5h06H2yk9+0RypOLW4C8fD7sLYxG0errkBOJG/d8noWSiBNZ3xl+Ps/DHgl7yRJN6zFnRtFiaVItm7EKExSHL12XGNNtW9n11X9HJFmLrGpMnP+AMz/4j3jW2o3uP42IZBrY+tzvUd91H5IsU5ka4uwP/5z8wLlP+tTWhFxe8M/fqyLLhBMYn6wWyZIoFAcplq7OwYv5QoMiz0VuwgtuSj4r7PedJZtZdZ2bgfA87Av9GM8+iNpQc9NDIBB6T5dKQ2RS68kX+wmuaV3yPGvNyvc35+YnBPgCMfvhxE2sYPW4Stfn2xk9PUMkrYeqv7ZP47Ys42dzS5KiOxzOoiqxGEZnB0Zb67JOfCtCkohu3Rr6RxPanNr9VwBItW7Fs6okWzYjgFTbNspjvSvO6GY6drD9+W8Qy4aCF5OXjnD+hb/8zBGinsiy7Qt/QH3XfbhWhcEPX2D68vGfW0K8itssYHMLIBbkGK8isJyQaFQldOJTlblr80YRRl+z6aRArCjztRoI16Ny5ByxvduI37+D6vELeBM3184jIRGN1tHctBfT3Inn21zNy/YPvE3VXFva5lPj+6xoMtGMzqVXB2ncHnpDeE6AJEvI6tLLbWd0FGdkFLW2FjWbJf34Y3i5PO74+KqJUVIUol1bSO1/BEkJTXisnst4s2NaIvCJZptQI3Emzx6gcc9TXDtt81GkW7vY/qU/Jl7XSmVqiEuv/A3l8f7PnOSXrGhs/YXfnxtNlBWVZFMniYZ2Oh788id8dp995IcucOW9HyzoWgjKoXy/pCpodWmUmIG3xoKGHNHneopFENxUfjLcCdh9I+R++BaZ5/eT/epT5H9yAG9s+vorWiGWXFUGwmcm302+sFhazPXWfr6fGlIMPIFTcanbnEaLKETSOs27alA0Gbu89F1KuC7F997HaG9DzWaJbNpI/a//GsV338O+MkBQrYaN3VchS0iGjqSoSIaOmskQ37Ob+O7dKJl0WBTKFyi9f3BOWaUwcI7Uui4KA+cIPAe7MLHkXRrCldOuX/5fidW2UJka4tQ//5+UxtY+bvSphiyTbt0y91DRjDmCvIPbj9qNd+FbVQYPz1va2kOTBLaDbGhENjSjNWbxcmuLTtTaFJGNYeQkHA93dGn5rlVDkTE2toW6rANjxPduJbq9E3cihz9TRHjesk0+7sgUhR8fWPS8ED6TU1ejkqsLlZsfifzUkKJTdbl8YJTtX+ykblOavb/ZReAGXH57hOLo8o2YVm8fhTffJvPMUyiJBEZbK3W/+jX8YhFndGxBUlerqyP77OdDQ6v6OrTmZpTofD+iVyiSf+117IF5pd/q5ADVyXklkKkLB5c9F0mW0eJh9bH7tW9/dgmRcAU93XOM1n3PAjB+9r0F9p13cPvQtHM/WjSJGuuXrOwAACAASURBVFk402/2jOCXLJRkDL0pS2r/TuyBiWXNqpaDpKsk79+K0R622QWmjXnxxoUfroUci9D4b34VJZ1Y8Jxae/0hCfNc75KkCKAoBqlkK9FIlqnpC/iBi67Fse3i8tYI18GqSNHYtA4lEcO6OHDDpjMLDlaXRmsK/W/9YhVnaN56MfAEfe+OUh6vUrM+haIr5AdLjJ/L4VkrVJc8j/KRI6DIpPfvR8lmkGQZNZMJJeBnIUkSWl0d6SceX7QLIQR+qUTh9depHD++oEk31bYNIzU/9B64DjM9Rz5TzdZrgfA9ul/7NsXRkPjHz777mfKM+TQj274DLZpc9Lw3XcS8NITeEo4Xph7egTM0Rf61Y6uQ4JqFLJG8byuZJ++aW1DY/RM4YzeX/xOeT/VUN8oahiKcwcUN2hD2a9bXbqehfieJeCOV6gS2U6KxYQ/jk6cwzbWlrK5PiopM6tE9xO/dhjs+Q/HN45QPnkWsoY9Pa8xS//UvIEcMvKk8Y//xu3jTs6sLCWRFYvx8nrGzN/YHCKompfcOYl8ZJP34o0Q2bECOGKAoy5b/hRBh641t446MUnjzLcxL3YvkwjyrjCSH+zFS9ahG7Lq9mf+9wK0WGbomfLuDTxiBIP/aMRL3bEJNx1FSMep+5TGM9U0U3jqFMzyFsF2E78+JOEuSFLpv6ipaQ4bU/p2kHtoxl08MLIfcq0cXDVvcKITlMPOdl9d07Syn46ooGtnMBoaG32ddc5i68T2biJFCV2OYa5Rnuy4panVp9PZG1NoUaiaBeX5e6fZG4YxMhyFmXRo1m8DobJojRT2useerG7nw8sCyijgrQTgOdm8vkyPDGO3txLZtRWtqQkkmQs/nWYIUfigi6lequBMTmBcuYl3uDScWlkj4VieHQApDB0lWaNn7bGh0v6Zv4A7u4PbCujzKzI8OUvuVh1GSUZR0jMyTd5F6aDtW3xh2/zjuTCmM+IQICyqZBEZHA5GNLSgxY87Zzzdt8q8co3r2ys2f2Kwt8q2EhIyiaFTMqTklIUmSkSTlpq7P63u01GdQMwkkScIrVXGujK+5PC8sB7tvFL2lDiQZY2MLlaMXw+PoMs27ajj/4s39AYRlY13qxrrUjWQYqDVZlFgMSddBlhGOEwqO5vL4peuPQSVbt2Akw/BZ1nSQlRVNv+/gDj5JCNsl/8YJJF0l++y9KOkYkiKjxCPEd3YS39m5ip2ERvSFN04y88KHN2dBchsRCA/bLtLUsAddT5KIN80KZkg47toJ+PoeLekEciLMA3hTBZxlTGRWA+H588KyEuhNtXOv+W5AcbRCrNaYFZWdJ5419mAibBt3dIybGdgLXAd/1pvZtysUB87fdie2O7iDm4FfqDDzwoe4k3lqvvgARlt9ONQtsXI6CUItxqkCuZeOUDhwCj+/drWZ2w3fdxmdOE5L
wz1E9CQtzfdSrU4yOn58wXTQjWJ1xlWRsK3FK1Twp9eeTBe+jzs577KlzOYtIBzxK45Wuf+3tzFyahqr4MzxYu+7o5i5j8kB7SOoTg7gVvLImkHgu7iVIp8mJeQ7uIOlEJQtCgdOUz1zhfieDcTv3ojeUjsbHitz44AEAcIPCEwHdyJP5XQf5SOXcCcKN51HvP0QlMtj9Jqvc2XoHSRJxvcdPH/t0yywClKUdBVJC0d8hO3cVPUZIRYsxa96tQAoukKmLYFjetRtTi94W+jR8smQYqy+jXTbdmRVQwhBdXKQfP/JVUvo38EdfGLwAtyJPPlXj5F/9RhKKhYSYzyCFAmnxgIrvKadsRze1G3sHpBl9PZGhOPhTeZWRbhyMiwW+YUywTIiEoqs4fk2nm+hKAaGniIQ/u118wsVjWZXRrPisbcM1+zLKbu89WcnWGpaJPCWJ6DEffvwZnLYAwNrqohfD6l1XZRGujFnRlGMGPXbH6YwcBYRfLYsA+7gsw+/WMW8hTanNwI5olPzPz6LNzFD/kcH8Cav32ES3bGBxP67qHxwhvI7xxfvU9ZoathDrthPtTpF27oHSSXbyBd6GRk9iuevLRd6XVIMLAfheEhRBTkS6if6ayUfSUZOzvcpiWtXnRLIiryoJ1HRZCSWD1gT+/ah1dXiTExiXbhI9fx5vHwhtJ+8BRalvmOF+RZJQpJlAtdGjSQIfBffNhE3qd12B3fwaYQkhbqWtyx9rsgY7U1IshxqoK4CckTH2NiKny8tSYqKHLbkzBR6SafaSMabGRk9TH3ddiKRLOXKagzCFuP6pFg2CUwbOWqg1qbQGrP4+bVVdiRNwWif9z651o5Aj2vs+GIHJ/7l8pxUmBpRWP9IM2NnpimNLb18llQVtaYGJZslumkjqccfxTx/AfNSN87wCN7MzE0VRnzXpH7bQ/iujaIbCN+ncc8TAExd+ABzevg6e7iDO/j5Q02tzOYtKieOOVifUPE5sEI/biWzuFF9DpKEEAH1tduYyfWQL/RTV9MV2jSsEdclRXcih5crodak0BqyRDa3Yl0egWUaKleCkooT2xEqMiPAvsYqVTUUWvfVc/J7vXOkKCsS7fc2UBqrLkuKwnURQoRVNVVFzWRIPvgA8b334AwNY3Z3Y/X24QwOLVJPXg1yvScpLKML6DufzlaFO/jvF5oGO3Zr5GYCrvT5rGtVaG1TOPSBQ8s6hXsfCA2/jh5yGBzwUVTYtVtj42YVsyo4/KGDbQme/WKEe+/XWb9R5eRxh3NnPDIZifseNIgnJC6cc7l43sMwJHbu0YhEJOrqZS6cczl/9hYtL2UpjNCWWVkGwsd1q3S2PYZhpBgZO4aiaEiyvKw+wWpwfVIcn8Edz2Gsb0GOGiQf2Y15th+7f/TGirCKTPKRXWgtsyNzgY91aRBJlki1xKjdkCaaMVi3pxbfE0hAsilGsjm2Yk4x/8qrxIZHiG7bilZXOzfFctW/2Whvw7snjzM+Hq4gz53DKxRXZyAMRDKNWPlxAveTKfTcwR3cCISA9RtUtmyVGB2psvc+HU2Do4fhC1+O0H3Ro6ZW5rkvRvjbb1XZtl3l4UcNTp90KRYCXFfguFDIBzi2YGTIp1AQaBo8/VwERZGYmfbZ/5iBY0MuF/ALz0e53OPSfcGjXFoFKayiLCFpKnpLPXLEWLa46/sOo+PHqMlsYmr6ApY1g64nKZaGlhTxXS2uHz5XbcoHzxLbuR4lFcfobKbuNz/P9D+/gd03hrCvU3CQJdRsisRDO8k8ez+SroIQmGf6cQYnkGSo6Uyx7bk2sh1J7v/d7XOitL4nGPhwnMLQCoIQl3uxB4coHHgHo3Udsd27iKzvRE4kkA0DSVXR6utQ62qJbtqI/9h+rL5+KqdO446M4pfLi0b7rkWqtQu3UrhDinfwcwHPg55LHo88ZtDeobJ1m8p3vl2lrl7mwYcN6upkFEXCtgWyDFu3a4yO+Lz9hr1gnXC522PTZpWjRxzKJUFdvUxTk8Jrr1j09nis+y2V9RsVckcCyqWA0ydcTh5feB3p7U3h9Q6hEZwiI0cN9PYm5OjySvlSxCC6rZPEo/eALOH0LZeiEhRLw5QrY4jARyCw7SKj4yduyqdlVRnPytGLVB/YTvLhXUiKTHTnepqavkr5vTOY3YN4Ezm8YjUsswehnpscMVDScfT2RhIPbCc6a3YFoTtg4bUj+MUKBKEQxPi5GR7+o5289R9O4HsBCEJjolXceITj4DsO1Xye6tlzIQFu3kxk00aMtlbUmppwzC8SQY5E0BoaSOzbiz04iHn+IvaVK9hDw0uqdrvVEvHGDqzc+Oy8tI9dmuFOr+IdfFrRezkkxYf3G1QqguFBn5pamdERn7/4T2UK+XDl53ngOIJoVEJVQ4dESQqDqLC2KHHVv811BUKArktomoSmgTu7HnJcsKzF10P9H/4yekfzgoZxJRGj8d/8D6v+LHbvMJXDK4kWiwUEKAhuul1uVaQoXI/8Tw+it9ZjdDSFijP1GTJfephkoYw7mccvlBGOhwhESIoxAzWTQGusWdCP6FctCq8fo3qmd4FNgV12OfOjfjzHX/MES3iyAm9yitLkFJWTJzFaWzE62ols3IixvhNZD89FUhQinZ0Y7e14uRz2lQGs7h7M7h686em5ynXgWKTWdRHNtoAI8F2byXPv3ak638GnFpWyYHDA44tfjvJfvxlGWflcwJnTLv/qd+KYVcGlix7vHbA5cdTl81+M8PU/SJCbCXj3gM3QgE8hL1AU+M1/Heedt2xOn3Q5fcpl/+MGjzxqYFmCixdX7kLJ/+htIls60Dua0DuakWORVfmzCCEIyiZ2zwCFVz7EHft4BZqllVRvJUmaf1GWiW7rIPvlR4hu60AytBsyoBFBgF+oUPjZIQqvHMYvLS56SLI0r96hSCi6jO8EyxpXrRaSqiInEuiNDcR2bCeyaSNKJoM8KxTBrEVkYJr4hSJmdze5n7yAcF3USBxZnSd1IYJlp1qMVC0P/fF/QYvEOfGP/wcT596/qfO+gztYCg994y9INHbQ/ep/o++df1lym0RSoqlJYXDQw56tB8YTEs3NCrISkuTkRIAkQV29TDoj47qC8bEAsxoSYkOjTCIpMzUZkJsJ0HVoalHQNYlcLmBmOkBRoLZOppAXS64WpaiBEo+iNdfR8Me/EkaJL72PN5VftO0cAp/AdPCLlVBE4jYMSgghliWvG7I4Nc/1445Nk3riHuJ7t6A11SLHjNC4aukDg+fj5UrYA+MU3zhG9eTlJZusFV2meXctM71FzLzN+keaad5Vy/DRCQaPTOK7a/9ihOfh5/OY+TzW5V6kSIRIZ0foy7KuBTWTQUklUWIxlGgUOWKQf/FlhOvi2SaRSAI1miBwHazCBHdC5zv4tKNcEvSUFkYzlbKgp3vhc0LAxHjAxPjC68v3YXQkAOafdxwY6F9Y1fU8GB9b/toUpo1n2gRVC3d8GuF42JcHcYeXdj38NOCGLU696SIz33ub0sEzRLd2oDfXojXVICeiYc5QkRGOR2A7+IUK7sg
U1uVhrO6hFdU29Jj6/7d3Zs1xXGl6fnKtvQr7TgDERhIUF5FaW2211NE9mu5p79ET4XDEhG98N5eO8L/wL7Ad4ZkYexyeiZhw94x7elGrpyVRlChuIgmCILEQQGEr1F65nzy+SBAERGijQIqaySeCF5VVmZWVSXx5vnO+73059a+PcunP7mBmDc7+dJzSvTonfjJKdbVF9f7hyA7JIEA2m1g3bkbzj20FzKEh0tMnSJ2cRs/nd5vnATI9o+QGJiLTcVXDqW5Qmb/6T15kNibmqyBFiF8soXe1PfNjisezI5ASf7WEv1pCMTS0XAZlxz1MUVVkICKDestBNOwv1VmiqAp6QqdVsjn21jAbMxUu/885vvunp0jmH78Q8zNRVdRUCnNwkOTRUfSuLhT90cuRG5igvjyDXS6iJdL0nnqD6uIncVCMifkKyEBg35zHHO4l/KKKlW+Yr+3RIn1BUP76jeShkPhWwMjLfQyd6+bSn0dWpcrBmfljoRgGWjaL0dNNanqa1NQkWj6Hmnio0i3DkNB2duO4FAGamcJI59HM5E4wfMYfdTExzxpC0Lp4A+vjGULn2S5ve4aMqwKWPljn2FvDFK+VKM3VSHckaG7akYzY46IoaLkcRl8vybEx0ienMfv79o0KpZQgBN7mJt5qEWvmNnKn3qAyf5XOYy+RG5xEUTSqS58QxnqKMTFfGen5T0S05bB5ZoJi6Ifc+dUKK5e3cGoevi2wFbj5s0Wam4/h4aooGH29pKamSI4djRZUOjoeWRQKHQd3eQV75jbOwgL+2jrhnmZPr1lm4/rb6MkswnMQrk08UoyJ+cfLMxMUIVLf3tvj7LUCvNaXH5UphoGayZAcHyM9fRxzcBC9rQ0lkYhSYymRYYj0PIJaDfvWDNbMLMHWFkG9Hi25fYps/wR2uYjXKKMoKrmBSZrr9+I5xZhvHlUlO3GS3PgJ1EQKYbeoXHkPt7RB/sTz6JkcWjKD2dGFVy1RufweQbOOaibJnzhLanAUVdPxypuUr7yPsKLFzERXH4Xp8xhtnchQ0LjzCY0710FRyYxMkpt6Ds1M4GxvUPvkI4JmHRSF3MRJsmPHH57L5fdwtze+4Ed8Cl1D0TUQYSTk8g2MP77hoKigajqh+BpDak1Dbytg9PSQHB8nfXIavbMjSo8VZbeWMvR9gnIFb3UVe/YOzuydqMXvC1Lh/OAUbn0L4UY1irmhKVqbi3FQjPnGMXJttJ95ldqNj3DLmxiFDgIrKtY2snny0+coXfgNzYUZOp5/jcLJ82xf/C0yFATNGtUrF0CBzpffJDt2nNqNS+i5At2vvYVbWmf74tsouk64U+iY7Bui7dSL1G5dJmjWaTv1Iu1nv8PWu7/AyLfTduYVap98iFveis7F3t+eq+UzmEd68TfKBOXavuYNVBVjoIvUcxMY/V2IegNndgl3bhnpPt2U+xsNioaRJt82zPbWzGMfI/fyi6TPnCY5PIya2u8pK6VEtFq495dxFxex78zhLq98JZG4UAQkcp34rRpGpoCiaI99rjExh0nouwR+i9TAMKHvYa8uIvYEIndrjdbiHULPoZ4tUJg+FzUqiAC/USPZO4iWSKEaJka+HYBU3zCKrlO++j6itd/YLXNkHD1XINV3hFAEqMk0yd4htt77JaHvETTrpPpHDjwXVIXU6Uk6/+SPaL5/ncr/+TVh62FWaA710PkffkJyamTXKiEoVSn/1W9ovnvtwCzuSfG1g2I600MiWaCyPUc2P4CmJ7Gam3R0TZJIttFqrLO9NYOZyNHVcxLdSNOor9CsrdI7eI72zilSmS42164hZUhXzzSGmaFRW6WyfYe2jnEMM0sikaPV3KSyPcfeMXX2/HkSY0d302MAwhBRrWHdvo09M4u3tkZQrT7Wha2vzNAx8QKdUy8jkTRWZglFvNAS880jrBbbt35HZmSK/InnyR8/S+nCr/AqJQBCz93XrqrqkfZA+sgYbc+9hL2+jF+vIhwHZafMQ00kCD33wAxKTaaQgY9wLKQQ2CsLNO/diuxLrSbbH/2O7Ogk+RPnyB9/nq33f4lfjVr0FF0nMTW8a4InxcOCb8U0yL35AsljI0g/wJldQctlMAa7afvxaziziwTrj2+Y91X52kHRc+v0DZ6nUVumrX2MRn2VXH6QZLKDSvkeXT3TtJrrdPacxLHLbG/OEAiXUHjUK0ukUp1srl0lCFw6OieRwEbxCn2D57GtEslUB6pmsLVxgyDY7/IH7FoahJ5H2GrhrRZpXb2Gs7SEqDeQ7tdb/rdKK3jNKpqRQIYC3248vr1gTMwhoug6QbNO7eYlWgu36frOH5AZndoNikZbJ1oqjUSS6OyN5v6A9OBRpPCp374GUpCbOBlpFwJ+vYqWSmMU2vG2dxQiFAXpe/i1MmZbJ817t/BbDRT1QdYkUTQ9SslvXEKfv03Xa2+RGZ2ievXCzrlqJI70Ij0f9+7yvr9L80gvmZdOEjoe1b95h+a719DasnT88Q9JHh8hfXqS+vrFp3Zdv3ZQDAKHVmOdvqGXkFLg2BXaOsZJ53oRoYdtbSMBXU/gu8191oN+YCNCH89toKg6mp7E91q4To0wDNCN6Klit0q4To2DZl1Fo4F9exZ3YRF7djZKjw+zV1JKArtBYH+xR3RMzNPEbOuk49x3d14pKLqBW97cfV9PZWg/+x2kDEn1DVG+/B5IiVta35kffAFFM9AzObydEZ1dXMJeXaLr5e/jltYBcLc3ady5TvPuLZLd/XS8+D38Rg1V13E212jcuY7Z3kVu8lQ0MlUVVD1awNlFVdA68ohqk6BS3/ennHn1FFoujXX1Dq0L1xGVOqJSx75xl+TUMImxIeBbFBQBGvVVxo79EZtrV/DcBs16kUSyLfIxES6uU6NZL9LWOUG2MEirsU6tskgofAwjRf+RV9jeuInV2qSz+zjJZAFFUbGtbbK5AT5vCar2zu8QtTpBpXIoniwxMd8WgmYDZ7OIkWtDyhDrynvYq4u779tr93E2V9FSacofL9K6fxeA1uIcUgQY+Q78xgatxVnkztSSDHy2P/496SNjGNkCMhR41WjkGbTqlC6+TXrwKFoqQ9D0cEtrO+dSx9kqYuQKyDCkfPk97NWlPWerRIs2LWtfraLWniM5cQQZBDi3Fwn2WCiLSoPQ89EKmSd0BQ/mUIKibW0zP/u3eF4TKQVWa5Og6KDpZiSxJUMq23NYrS0UVSPwLUDiuXXuz7+DoqgEgY3fsPE9C03T8X0LEThsb83syHQdHPDchcXD+AkxMd86hGNFc3qfQSgCGvdu7TYi7G73XZrztz97P8eiOXfjwPeCRo367asHn8vdg207IiTS81F0fV/jRGp6DL23g6BUw75xb9/ARgYi6mo7oP32SXIo3/YgEO597Tr7LQzDMMC2Sp/aL3xkm2Pv107zvS8SgnigABSPEmNinlWkCPE3yyQnhzF6O3Bml9DasqRfOIGWSWFdvo23sr+mUUkmUDTtS3lEHybPVPH241AojOC5dWzn6a1OxcQ869hr91HN5FMtZfk8pB/gzCySPj
VB7s3zKAkDc7CH1KkJQsuh+f71/XWLgN6RRzF1ROOz7UieBE8pKB40mvu0m/NezcdPj/o+rQe5I0SrqPT2nqa0dQvHre6U5HyZYz747s/7zpiYby/WysI3fQr7CQTW9Tmy3zlNYnIY80gvihFJDTZ+ewlvfr8Pi5pJYfR1gqoSbDzdAc8TD4qKotHffx7DSLOy8j5CeJhmjrGjP6BY/IhGs0ihMExP9ylMM4fj1tjcvE69vgJI0uku+vqeJ53qAaBWX6JY/JB0upvBgZfo7Jwilx1g0G9Rqy2xuvoBYSjoaB+nu/s5DCOFZZfZ2LhKs7mOoiiMjr6JY1dJpTvJpLtpNFZZXrmAEIen3pEsdJPpGT6048XEPOBBveG3DW9pje0//zvyf/gq5kAX0o/kxGr/771HHPv0nnb07jZEuY4983QD/BMPilIKbLtER8erJBIFLGuLQmEYXU/huDXS6W4GBl6iVJqhXlumu+c5jhz5Lnfu/AxFgZHh7+H7FvMLv0RKgaJohGFAo1HkztzPOZX496ysXKBcuQtIpAwp5EcYHHyZ9fUrNJpFenvPMDz8OnNzPycIHEwjSz43xNL9f6BY/BBFUQnDw20lOv6j/3iox4uJeSZQFPTOPDIQiOrB8/1qNoU53EfYsPCWNx7u157DW15n47/8BVohh/R9wgNsSQBErUntFxcgEI+odCsJA70jT1CuP5EWwKeSPrdam4TCI5PpwXXr5HJDNBpFgsCmUBghm+kjFD75/BESZp58fghdT2AaWUwzx8rKBSzrUflyKSNbQynFHvNrhXSmB9+3qFQX8P0WpdIMkxNjmGZ2pwAcavVlarX7X8s0+9METou1q28z/Mo/P7RjxsQcRH1tnu35R1eBnwq69vmzTapKYrQPGcqHQZGogFtKQISIcu2z9wdEuY5VvoWSMMi8eILmu9cfvqkou9oGT4KnEhQ9r0Wtdp+2wiiOUyWZbGOt+BFhGKCpBoFwqdXu4wfRU2Nj4xqe1yCRKCAJCeXnrT7tnxtUFAVdMwnDALnTeSKEv/Oph5/zvRaHPY8oPIe7b/8F6zd+v1tGIEWAaphIIaJyBEUlDILd7YqqRjViqkL6yBhBs45b2ohKEYxIcVx6LqhaZLJFVEsmD2kCXVEgnVbQdAXfk7gepFKRtaXrSAIB6ZSCBIJA4vuQSSt4nsR1JYahkExG17XVkggh+fFPknxy3Wf5vsA0FZIphVZTEoYP7DSjz1uWJJEg+m5fEvgwManT369y4YKH50bnIqXEsiSqqpBOKygqOHZkrmQYCooCti3xPLn7o9REIurECAIQAsU0dzoz/J1ra+zcHxHZ8ibMSO8vCFAMI7p/UhJ6HoQh2fPnsWdnEc3m7vuhG7XRqaYZWVg8KDsxDFCVXZWXxNAQiqbhLC1F55JIgBCEvo+iqiimAShIz4vu8YP77HmfeZ99q06rtPLY9z197hiJsUGk59G6dBtRaZB59SR6Zxui0qDx7jUSo/3R3J+pIyoNmh/cJDkxROrMOPa1u4jtGmo6Qe57z6PlMgTVBo13rhDWW7hL69Gc4A7JqSOkTo9jXd3ZL5Mk9+Z51FQCLZ+h/L9+hd5VIH12EiVh4txewr23SuaVk+ReP4veWcC6PIu/WSH7ynNo7Vka71xBOB7mSB/pM5OgqzgzS/hr22RfO4Vi6iiGTuvDGbzFtS99bZ7SQoukXLlLZ+cUbW1jCOHRbEVPENerEQQOjlulXl9FVVQUVUMIH89rIGVINtOP69aRUqKqGr4faRo+kAIzzDSqaiBlGKXrTplC2wipVAeWVSKXG0AIbzfoPjinJ0HgNKnev0VqaBQ910bz7hyFMy/hbhRJ9Y4iVQVRq2KvrpA7fgqJgru1Ruh4JLImQahjO1VUM0GqqwcZCuylLYz2doy2dkSrgb2ygre9+cUn8yUwDPiTP81i25KZWz4LC4Ifv55EVWF1VTA7E/DH/zbFvbsBS0uCu3cDfvpv0ty85fPrX7qMjet850WT9g6Vv/u5w+xsQMJN4265VO8Ljh3X+Xc/TfPf/muL1RXBue8nmD6pk8+r/Nn/sDhz0mBkVKNSDrl0yefV55L09KpUllyuXvH4/uspdB3+91/aDAyovPWDJIEPC/MB+YLKyIhGEEiuXwt4+zfRvJSaydD2xhuIVgtncZHQssiMPQeKglcsImyb9MQ0QbmMt7aGqNfJv/469ief4NyfJzEyQnJwBCWRoHn5MsH2NnKij+bqLKJWi8SKj5+i8ZvfICyLzOnTmAO9IAT1ixdJT06it7fj7xxf7RwHKbFp4pU2yL/yCkGtRuvqTczBQVJD48gwxN/YQO8ooGUyICXWnTu49xcP5T4/ct/7O/HXSnjFLYJyncRwL4mR/aqOKAAAB09JREFUAZoXb5J54TjmQBdaNoWWTVF/53LkrxSGuPfXMYe6H/YwBwJnbgVUhcIfvEzzg5sHCsm6S+uYg91oO/uFlkPz3Wukz07hrWwSej6iYWHPLKF35Ek/P4V9Yx77+l2SE0PU/v5iVJojQpy5ZTIvHEcxdNR0ktT0KO5CkaBUI/vaKQhDzIFuKn/zO5LHR0gM9+KtbELw5QYST60kx7a3sZ0K3d3TrK99jO9Hy+yNRpFK+S59vc/T2TEVfdYps75+Bdsus7nxCV3dJ8jnhwilwHGqrK9fRggPkFRri3R3nyST7qbeWKVcnqNWu08228/gwMsEwsE0s2xt3cDznt7Svru5RmbsGH69ggwC9FwBPZvDLi6T7O0n9FzCwKd66T0A1EQS6/48ztoyfq1KfvoMzXszhK5D4dQLBK0GXmmD5tzNQ+/cabVC1tZC7i8JJiY0urtVrl/zmTqus1YU2Lbkr//axtuZC79+3ScMo1Gm40jm5wMmNJ2pYzqzs/tH9bO3A5aXg12T9d+945JIKDQaPutrgv4+lVDC+LiGEJKLFz3a2lR++3b0ZR9/7HHqlIGuw4lpg48/8rh6xec//ecc8/cCPr7kMTcn+Bf/MrkbFJGS0HEIyuUoiI2OoiaTuGtrmENDuMvLhM0mjY8+2i1Z8YrFyERdUQgdB29zE7O/H6O7O/IB33tvl5dJDA/vOsdbt26hGAb+1hZhq4VfrSKFQO/sxL57F+fePULHwbl3L9p/aQk1k0ExTczeXuw7dwiqVdp++EP8zU2s2Vmk75McH8ddXDzUe/2A5u+vkZweJXP+ONa1OZRkAq09h9HfiXd/A9G0o3m77XpkRxw86HgRUVH1DsZgD5kXTuAtb6AVMijqZ6S0n9oPTSMx2h95t1ybQ1EVUqfG0XIZQKKmEoAkdH0IJdJ5WID+IDhGx1FBVQkdL1qsURQUQyeoNhBNm9B20bLpSDDmS16bp1qnuLp6kVSqg0ajuLstCByKa5fIZvtJmFlCKbCt7d30d6t0E8veJplsAyS2XSHcY0S/vn6FVmsTTTNxnGrky+y3WF39gGx2AF1P4rl1mq31nflDheLaJYLA2U2vnwSh7xHUa2RGJ3FLG5HySCgIGlUalS0Uw8RQOthbmqQo6o5aiYwCn6KCou6OiEPXO
fSA6Pvw8//r8OJLJi+/atJshvi+ZH1dMD8fIIIozfX2LA7uaASQySi8+KKB50WvH+gDRO8rj3we4OzzBqYJH33oMTSkcfqMwdJ9gb6TBksJexsYFCW6RIoCgZBomoKmKYQCXDdKq31fPsg4o2tv2zSvXiV59CjJsbEo0Pk+QbWKt7aGlstFqe+nUlNFUSKR4qNHEa1WlNruVWrfO4e150cljx5FOg7eygpGby+J/n6CWg1lx/cHePQ4D44lZZR6qyrsCCBL34/S7M+wDj4MUmcn0bsKqJkkiqbhrWziLa2j5dKR6VzTJipw2/P/TYHMuWOkTo/v+DJbKKqC1pFDrTUiKTAJiaMDZF88gZrPRqvHtxZIn5kkdXqC0HYRDYvQdsi/9TL+yiZqOkHroxnUhInekUM07V1feBkIZChp+1ev0/rwFqLWIvfdMyTGBkBTab53Hb9YIv38FIQhwVY18oyScv+/r8BTDYqWtXXggokQLrXa4oH7RCvNKzQaB8+f+H6LcvnOAdstKpW7B+whaTRWD9h+yEiJXVyicOYlvNIGQauB2dlNamgUr1zCWVsm2TtA52vfx9koYi3MEbSaZCan4e4MzuYamaNTICWNmeskunsJ/cN3QctkFd76UZJCQWFuTnDlsk93t8a5cyYLCwHz8wGVSvTwUBQ4elTj3HkTiMzPLQumjum0rBDHkZw7bzA2rqNpCkjo6FQZG9N540348KLH995IEASSP/xRktnZgFRG5cgw1Oshrgvra4LXX0/QaklmbgX8s9cT9PdrVCshd2YDfvDDJGfOGrzzjksyqWBZkiCAcvnhA07L5cicPo1imriLi/ilEplMhuToKN7KSqSoZO9o+SkK5sAAicFB9Pb2KBAIgdnTE9nhCkFychKjq4v0iRM4d+9i9PZidHeTnp7GWVggfeIEQb1OWtPwSyXUTAYdCC2LcEflPTM9jQwCgkqF5Pg4qmkSWhZusUh6chKA1vXrqJlMFBTDkNA6eGX2MLCuzkUK12GIaNogQmp//0E0vykloeNifzIfqec8GOFJsG8t4M6vRnJhLSdaHf7LX0MY0vrgJqLRitRufvZu9DCyPaQXPLqfCCn997/dmXeViIZF891rKKlE9HAIRDQ2cFwqf/U26Bph00IGIfXffozyD1cjP/eGjai1cBeK0YixZSMDQe0XHyBdH/vGAoqqfKWuGEV+ThRVFCWuaI6JiflHh5TyM5eun9z4PCYmJuZbSBwUY2JiYvbwuelzTExMzD814pFiTExMzB7ioBgTExOzhzgoxsTExOwhDooxMTExe4iDYkxMTMwe4qAYExMTs4f/D7wsd3wzCSUqAAAAAElFTkSuQmCC\n",
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAUUAAADnCAYAAACJ10QMAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOy9Z3Bd2X3g+bvx5Qg8PESCIEEwN5vsnNRqdStnj+zyzNrasac843XN2q6t2qr9ujVbrprarakdl2vWI48nOMuSpVZsSZ0DOzKTYAJI5Phyvvnsh0eCfCRIAiAYWsLvC3nvO/eccx/e/d9z/lESQrDBBhtssEET+V5PYIMNNtjgfmJDKG6wwQYbXMWGUNxggw02uIoNobjBBhtscBUbQnGDDTbY4CrUm30oSdKGaXo5FAVcd0VNpYAPSVXxqnXYsPRvsMF9gRBCutFnGyvFVSIH/egDPatoH0CJhUG64d9ggw1ujgSJgSgde9ru9Ux+JbjpSnGDVuRggNAT+9C39lGPhDBHJ/Dv3IocC2NPziFMG32gB0nTMC9O4cxmCOwexC1VsOeyyMEA/r2DKLEI1tg0brmGf+cWkCQaJ87h5kr3+hY3WEe2vrCJ+eMZapnGbfUjyRLtO5KEOoIsnsqt0+w2uBEbQnEVeIaJPZ8FXcU8P47a2Y4cDGAcP0dg/05wHCSfTuPEeQL7tlNdzOOWq8jRMJKqoPV3IYeC1D88hWeY+LZtQg74MM6M4VXq9/r2NlgnFF0m2hth8HObcW0PbbxMo2hglixkVSKQDKD6FRzDoZ4zEK4g2B7Asz30sAYS1LMNHMMFCYJtAYrjZTKn8y3jyKqEP+5HC6oIT9AomNg1G0WX8Sf8qD4Fz/ZoFE2chnOPvg1QFB8dfQ9Rr8xTyo0BTTWSJCvE2wbp7n8cXzBJrTzH/OSHlAvjCOHds/luCMXV4Hl49QbCsPBqDeSAD6/ewMkWkRQFAXi1Bk62gKQ1f9yeYSKFAiBJyD4dr9bALZYBMEcnkXSNwL7tCNvGnlm8t/e3wboQaAuw5zeGSO1IIisSRsni4muTTB6cpefRTrZ/YQtIAtfyGPn5ONPvz/PEH+/HLFuoPoVAMsD0B3Oc+sdzyJrC1k9vYuCTfcwfz/DBnx0DQFIkeh/vYusL/SiajOd4nP/pGHNHFul7spstn9oEgF23GfnZOPPHMvfs+wjHe+gdeJpGLUelOIXn2gDEklvYuvvL+ANJbLtGqmsvkXgvZw7/DbXK/D2b74ZQXCVezUDrSuHfPYiTKxHc1E346QMIw8SzbPQtvYRUFbdUQdZ1/EObkWNh3GwRt1jGv3eI0FP7sWczCMdB9vuaAjPgv9e3tsE6UZ2rcegvThLuCPLRn58ge74AgBbSGPr8AONvTzN5cJbuh9Js/9IW8qNFJFlCeIL3/vQo0e4wj/7+PsbemKI6X+fUt8/hWi7hztDSGOF0kMFP9zN7ZJGx16eQVRnHdJF1ha4HUxTGSgx/5zyqX8G1rqy6AnocTQlQNRbxxBVjoSyptEe2Eg124XkO+eo4xfr0unwfgVAKRfFRKU0vCURF9dHetZdAsJ2FmcMUMudJduwk1bWXRGr7hlD8OOHkilTf/AjPtHCLVeofnUTSNbxqA21TJ858FvPcGG6pirAs6odPgyLjlat4tQZe3UDSVLxqA+F5WJ7AHJvGzRTu9a2tC7LmQ9F8t2znORauZdyFGd0/BBI+/HEfs4cXsKo2mTM5tn1+M9HeCJ7jsXgqh1myyJTz2IZDrDdCdX55tUo4HUJSJOaPZzDL1tJ5WZGYP55l6IsDaIFdjL0+RW6k+dtSZI2e5IOkYzs4M/0z8rXxpeu6kw+wteMZNDWIEB4dsSFG598kW7lw2/et+yJIskKtPHflnD9GW3oX1cosM2MHqZVnscwK4Vg3sbYBpi++edvjrpWPvVDUwwlkRcMo3XrrKSkqwWQPkqIiHJtabhpWq7tw3ZZtrnOVMFMqEWzXw55eWDpnzyy0XO4stuqFvEptdePfYSJdW1F9gTVfn979NJ27n75lu+yFo8wc/vmaxynPjt73QlUAXOt00HJSQrrKK0GSr2p8K2cFaflGniuYeHuG3GiBTU90c+B3d3Pux2OMvzmNIuu0hTcjSwqOZy5dE9QTbGp/BEmSmS8OI0kyqcggPcn9VI0Mhl1e6S0vi6yoIIFjXfmtJ9q3oelBFqY+olFrbu3NRhHbrOLzx29rvNvlYy8U27c9ih5OMv7239+yraL6iPXtJJzqR/VHOPfTP8O11+/BssZn162vu0Gsbwfxvp0t5/qf+Ar+WOqOj9297zm69z235usn3n0Ro3zFEmtVC8ydeGMdZrY+uKaLa7qE00EK4+WmISRvUM8ZdD/UweTBWVI7k3iOR3mmiqzKpPe2MfnuLJGuEKpfoThVuWH/tcU6whOk97RTzzaQNRnhCuy6Q6gjgFmyGHlpnECbn/YdCcbfnEaWVAJ6glJjFsu5IqDSsZ341DCzheOMzr+JJCkofV8mGuwk6Gu7baHouTYIgaI2dxCyrNKW3oXZKFLKjy1tqYXnIoSLJCsr71ySkVUVz2mOIakaeiSJZ5vY1RKXjTqr4WMvFFeDY9aYO/Yyic0P0rHjiXs9nbtCasfjpHctf6+R9ACRri3rPubEuy9Snrt4w88DiTSDn/qfbmuM/ie/1nJs1cq0De6/rp1rW4y88j9wGtXbGm+12HWbyXdnGfriFvo/0cOFX0wyc2iBkZfG2P7lLWx+pgfPFZz/yRj1bAME+KI+Hv/D/QQSfqbenaOeqdM2lGDwM/107+9ADapoAZWRl8bJnMlz8dVJtjzfR98TXbiWy8jPxlkczjHwXB/t2xPN+7dczv2o+beQJBlV8WPZNVyvKYh8aph4qA9PuMwVT+N6NpLkUarP0R7Zgq6sfddwGdMo4XkOkXgfuYUzxNsHCUXSlIuTVEszS+1kRUeWNTzXuklvrWjhKPHtByiPncYqZolve5Dk7kdxGjUW3v85Rm7u1p1cw10VipKiofoCCM9DVjUco4biCyIBdqOC8FyQJFQ9iKL7EULgWo1L26QrZnzVH0ZWVDzHAqnV//zK5xrCc7CNGuLSmwiabyPPtT+2wSWSohGIX7+SG3jmN4j1bb/uvB6MoodiN+zPKOdatqFWvcTZH/9587tdI0Y5e9OtraxqLJx6e8399z36RdoGDywdS5KEP95B94PPX9dWeB5tW/bheVeMCqWps4y9/Z2lY8eoYdXW10dUeHDhFxPMHl5AVmQaBQPhCWaPLFAYL6MGFJyGSz3XFIie6zH13hwLJ7MATVcdD8pTFU5/f5RzP754qV9BPWfgOR6TB2fJnM6hBlWEK2gUDJyGw8hPx5h4uyls7LpNo3B5qyyari5XbdljwR4igTTl+hyVxiVVjxDYbh1Z0la3arsB5fw4Zr1A56bHiCW34AvEEAgWp49gX7Wl9gXiaL4I9erCTXprRQtFCaY3Ub44jBaOE99+gMLZI+ixNuJDDzL/3n0uF
EPtfXTv/yyN/Czh9ADFqWECiS40f5iZIy9Ry0wS6ugnte0xtGAEIZrbosUzB2kU5kCSiG/aQ2r7EwjPxW6UUX1BHLv5R5dkhbatD5HYvA9JkhBCUJo+Q3b0IzzbvMXs7m/CHf2EOzcTiHUw+MJvL9OiVT91NfnxU5il7LKfTbz3IuXZa5Xpd/aN4Tk2tezaLZtnf/qfuVqfJms6Q5/+n9EC0evaJgb2EmzrbjkXau+le/8VAZq7cIzZY6/iWgaZcx+uWzimY7pUZlt1xp4jqC0uYzwR4DQcytOtK1q74WDPLL/K9RxvWcfwRsG8ShBe1V64GHaZoJ5AU5srwFR0Gz4twsjca4jL1mipaZQRwl2X76JRyzI99ja9W57FH2rDdUwy04fJLZxeaiNJMuFYD/5AnPnJD1bctySrCNfFswzCm7bjNKoUzx8l2LWZ5M6H1zTfu7t9liQUzUd5bhSAWM9Opg/9iPahxwml+rEbFdI7n8GqFVkYfgtkmY4dT5La8STTH/0APZQgteNJytNnKU4No4cS9Bz4HK7T1C0F23po2/YombPvUs9N44930rn7E5iVHKXpM3f1Vm8HXyTJpie+2nIu3reDRP/uG17juQ5jB7+P07heD7V45j3q+dW/Me9vrjysnm1y9qffWrZVx64nCSY6l461QIT+p77eVP5fon3wAO2DB7CNKjNHXkZ4Hp5tMfHu93HMu+NUP/qLCUqTN9Yhrgeua1GqT5OO7WRz6nE8zyEVHaJmZFrcbyRkAlocx7OWttm3S3Z+GKNewB9M4th1yoXJVgdtSaZWnmVi5BWy88Mr7tdzLCRFIdjZT2TzTirjZ/As45JxZ22htXddp+iYNWqZCTR/CD0Uo7owRrR7CEXz4wsn0cMJFk6/TT3fXP4Xxo/Tte/TaIEo/ngaWdEoTJzArOQwSosk+vcgX3IBiXYP4dkmlblRhOfSyM/g2gah1CZK02e50yug1SKrGooeYNsL3ySY7Fo6r+gBYr1D17V3jDqea5O7eIzpQ9dYboVHafpcU+G8wRKLp99tOZZVnezo4SW1SzDZxbYXvgmAGgiz+cmvA+B5Lm1b9+E5NlOHfkZu9CiOcef0kpMH77yRzvEs5grDRANddMX3NM+5BpPZjzDsKwJZkTViwR4Mu4zprNM9C49qaZpqafkdgvAcCpnzFDIjrOY5tcp5zEKG1IHnsMo5yhebAlWPt2GV87e4ennuulAUnodwHYTwcO2m3koIgSRLSIoGQuA5V5b+l9/Uiu5H0fwIz8Fzr4QsOVYDTdUBUH0hAolONj/zm0BT16RofjzbQpLlps7yHqL6w4TT/UvHbVsepPeRz6EFIi0rFwCzkqeea31Qxg9+j9L0eVzHxDVvL572VxXPsSiMn1o6Lk4Mkznb3K4NffZ3CcQ7AAin+4lv2gVAtHsbjeIC5176CzzPpbowcUcF5J1DkK+OMzz9Ezqi25AkhVx1jEJ18srWGdDVEJZTo9KYp27e7Vjr1S1cXKPO4qFXyJ/+ENeo4xpNdUV18jyetTaV2f1jfRbg2QYgofqueO7roabPkmPUcM06sqKhaDrN9ZCEFowu6dKsWpFGYZ6pD3/QIgBdq3HPBWLn3mdJDuyl9+HPXfeZYzaYPvSzllVeafosC8MH7+YUfyURnotVKwJw6nv/Yel8z0OfJdTeS6RrC21b9hHpHODh3/kTAKYP/Yxadpry3EUKYyfuybzXisCjVJ+hVJ+5YZuameXY+D8i7rOd1Y3wLBPLag1jNLJrVxfdP0IRMKt5GsU52gYfQQgPSVZp2/ow1cwEtlGjXpjFMeu0Dz1OYewYvmg7gUQXRqn5hZRmzhHr3Um0ZzvVhXEkWUYPxWkU55srTklCUX2ouh9JaVqpL1uj7yRdD3yS7Z//vWWtwLPHXiNz7gMWz7x/zwX3Ble47Fgeau8l1rudrc//FoFL/puXX2zVzBSl6XNcePVvMMrLG7I+rlwtEBUVIjGFatljy04fi7M2xezKf6uyrKH7o5iNYsuKtBUJzRdC00IYjfyqnkk91k540xB6NIkkX/FGMYtZ8iffvcmVy3NXhaLnWNj1MgIP1zKwLxkFHLOOJEnYjQpzx18hufUA6V2fQOBRzUyQu3AY4dpY1SKzx35BaseTdD34GRrFBfJjx1D1IABGaYGZoz+jbetDRLuGEMKjUZijnmu+FRP9e2kbfARF86FqGjs/9+tU81Vmj71OPXeVolmCYExFVSWqJQfXvr03Zqx3O3oohmsZTLz3gxYnY7NS+JhuxX41qGWnqWWnKUwMI6saXQ98kvTup5E1nXCqj3Cqj0T/bvJjJxh/558wyrmPvafDtUTiCo+/ECG/6NDZq7H74SDf/68r19eFol0M7Pw8tco8F4d/dMMMOG0du+gb/CQTI6+yOH14RX1r0QSdT34e1R/CLOXAu9K3rKzNneiuCsV6doqJ7BQAxclTFCebup3MmXeW2piVHHPHXuaKy8XVAklQy0xQy0xc+vwaYSUEtcVxaovjy15fGD9BYfwk0BR6z/92F+++u0g91+pTp2gSD3wywcDeCK/97RzzF9dHf1eYOMXoq3+9Ln1tcHdpFJoJCkZf/WtGX/sbgslutj73z4n1bieY7CKQ6KRn/6cZP/hP5C+eIHfh6D2e8c3xqWF0LYwi3VwE1K08EiadvRrBkMzx92o8+GToptdcSyjaRTDcQa08f5OUYALLrCBJMsnU0MqFYiiGrPuZfetFjNz6JJG4j7bPEuFYD7KiUc5fybl2Y1o/T23y07MtSCSh0qi61MsOYyeqbN0foa3bT37e5Oz7JUIxhe2PxvCHVYIRBYQg3e9n64Eoru0xcrhMfs7i7PslwgltXRNmf1wdxje4BiGo52Y4+d3/h7ZtD5HcvJfNT30dSVYYeObX6dr3KeaOv87C6YOUZ0bu9WxbUGSd9sgg6dh2gr4kiqzdtP3I3OuUquc5e6xBteQyN2kTjKwuNFbTQ0iSckunbNusYNv11cU+C4FnNvDstQcbXMttC8XOzY+TnzuNZd5efKQkyQTC7Siq/5JQXB1t3T5SfX6CEQWz4eHaHooms3l3iBNvFtjzTIJ62aGt20corjIz0mBgTxh/WGHfcwlmRxsk0jq7nojzwU9+ufRDG9w5ciOHKYydJDt6hK4HniW980l84QQDz3yD1I7HOPnd/5t6bva+SF4hIdMR2862zufwqSFc4SCE4GYLEFlWkWTIzjuMnjJQNQlnleokRdGQJAnHurnPp+vaeK6Dqq08jZ5dK+HZFtEteyieP3opzvrSh8Jbk7Bcs1BUVB+haDftXXtxbQOzUcCoF7CMMrKs4Q8lUVQfjt3AqOUQwiMQSuG5FpovAggatRyuY8AlgWiZFaxSq1VMklX8wQSqFkAID6Oex7HqyIqOP5hAUX14ro2qeZSzFvWyjG16hOMqbT0+ynmbieEafdtDpHp9+CMq2WmT6XM16s8mCEZVugdDhGIqri2YHa1fGzm4wQY3xXMsCmMnKM+cZ+yt77D987/XjEBK9fHQN/8dhYlhzv70W5jrbIyJ
JRWEgErRvW4XEgzLtKdVKiWXwiWjiKb66UnsQ1eDzBVPM5M/SsMqtuRVvBbHNYm1y+zcH2D0lIGmS2x/wM+ZIytXKbmujRACVQvetJ2saMiyiuutPEu4rGho4Tihnq3Etx/AtRpLW7JGZpb5gz9ecV+XWbNQVLUgyfQO/MEk8dQ2HLtBfv40tlkj2bmTZHonrmshSwrZuZMUFs/Rv/OzOGYNz3PwBWKU85PMXHwLWVKItW2hrWsP5dw4k+dfBkCSFNq7dpNIbW/GKyPIzByjWpymrWs38fZBXMfC82wCoUmEN4/wxNIPZHHCYNuBCI98vo32Xj+Hf5GlvddP/64QgbCCLyhTzdtcOFrGrLuYDY+ZkTqSBEMPR+kZDGLWXMo5m1rx3qVz3+DjgWsZNKx5jv3dvyPev5udX/oDIul+0rueRFY1zvzo/1tRiruVsv/JIJ4rePeVKtf67EcTCp/6WpS5SZuff6cZ163IOpFAmkpjgYuL76zYB9FzFUIRmcefD6P7ZexVrhRNo4jwHKLJfhamD+EtK/QkQpE0vkCcYnZ0xX27lkHx/PL6W7u+tt3rmoWi2Sgwe/EdQtEupkffpFFt/rFVPUR7114Wp49Qyl0k1raFVO9+auU5JCRsu8b06Fv4g0k27/wcmdnjWEaJ+ckPkWUVRb2ydPaHkrR17iYze4JiZgRJVvBcG1nRiMT7qFfmmZ/4EElWKRdkZMVCliU8T6DpMqWsRbVgE23XOP5GntkLDfLzFvWygyRLvPO9RfLzJkdeztG9NYisSjiWh/CgsGBx7LU8Zs1FuBvKwA1WR3FimDM//DPim3Yx+MI3SQ09gvRVlcL4Kcbe+vZ17VUVHn0uTKpbw6x7+IMyB39ewR+SOfBUECQ48k6dqQsWD38ixMB2Hx09KqcPNwiGFfY/FaSzR2N8xOSD12vMT9mcPdYgFL3aAiuhyBo1M4ftrDx8sVp2OfJOje5+HdeFQ2+uLgdotTiNZdWIt22lvWsP2blT1wnGUCRNuvchND1EMbfyxLZOvULhzEcAKP7QUqIZcRuRXetuaNF9ERRVp1KYxHVMauV5OvoewhdINKMBSrO4jkG9Mo/nOviDSSxj+QwlvkAcz/MuXXPFzUF4LsXMCOm+h9H9MXJzwxQXppZ9A02ebv0D1ooO5z5sfYOYdY9SpnUO5z+6PR3pBhsUp85QmhnBNioMfeZ3aB/cT6J/F6ovwMW3vt0SlSSrEl2bdCzTo61TpZBxefjZEIGQzPh5E8cWPPZcGFmu8uCTQX76D0U++cUoul9maI+PTVt9HHmnxpOfCTN+zmRh5vpnQQgXy64iSyrSCnVEsgyeC2NnTaYuWCgqDOzwszC9cqFTr2VZnDlK39Zn2bLry7R3PkApP4ZlVVFklWAkTTK1HV8wSSl/kULm3Ir7BglfMk1yz+MEOnqRZBnPtihdOEnxzKE1uUfdllC8vH662kDbdECWkKTmG0qSpEsZa5opi+SlVERSsy7FTfQZCNG8/poMw0J45BfOUClMkkzvpHfwEyxOHyU7+/GKLtjglx/hOcwc/gWuZZDe/TTt2x5i89O/huoLMvLyf29JOGHbHpk5B8+FYs6hf5sPzxVk5hwcW7D7IYlUl4ZtCuanbPIZB88VtKU1tuz0YVseuQUH7wYbG8c1yVYvkAwPEPF3kKuOcysvj03bfFSKLvufCtGoe+i6RN+gzskPVpEoQ3jMTbyHouh09B4gmd5JW+eu1rnZBvnFs0xfeB2zUVxx12ooQurAJ5FkmezRN/EsA18iRXzwQTzToHhuZa49LX2u+oqr8Fwbz7PxhZIYjTxCCGyrim1WiacGyS+cIZLox7VNzHoeWVaIJjZTzF4gGO5AkmSM2o31Gka9AAiiyc04dr2ZgFI4eI6FLxDHthvkF84QjKTxB5O3cysbbHBHmT/5Foun32Pr87/FwNP/jL5Hv4Bj1hl55a+ulMQQV/0joFRwMeoejz0XQgjILTqMDhvseSTAZ389xuZtPkZOGVw4Y9DepVLMuxg1j2LWZesuHw88FsLnl5gZsxgdNnBsk5n8CUK+FJs7nkBV/JTqM1hO/YbGFs8F3SfRvVnj7FEDCXDXoF53HZPpC29QKU4SiW8iEGpH1fx4notllKmWZyhkzq9KIALokThKIMjCuy8tJZStTo0gPEG4b/DuC0XXMcnOnKSj90BTjzh1mFJujPnJj0j3PUQyvRPHqjM7/h62VUMIgayoDOz8AqrmZ3H6KLZVIxzvpaNnP+F4H5Iko/sjLEwdolqcZmHqMO3dD5BM78BzbRamPqJWnifZuYtwrAcAy6yQXzh7O7eywQZ3HM+1GXvz20iywqZHv8imx7/cTPn21rexLZv3XqlimQJNl7AtgaJKWIZHR3fTX3ZxzqZS9Hjp2yWicZlzxwwKWYdq2aNeKxMKy9iWwHUFhYzDGz8uI0lNYeq5oMo+2iNbkSWFeLCHsC+FYVfwhH1Dp+qx+fcoGhf58d8UqVddZEVi7Oza3Itc1yK/eJZCdhRVCyDLCuJSAhjHMVlTFitZaaZ6uyYpsmvWkdWb+2DeiNvUKYqmZTk7goTUdK9BUC6MU6/MI8kKwnNxLtVBEZ5LfuEs5fw4wNL5WnmeqfprV7JoC4HjGAjhUcxeoFqcbmYAFgLHMRGey/zEB8hK86Y912nROW6wwf2KY9YZffWvca0Gm5/8Opuf+jUUzcfFN/+BzNzyBoxqufW3vTBts3BNBq7Z8VYdXz7jks+0rv50zU9Pch+ypOJ4JrKsEvQlbjpfTQ2gahJ7HwvSt0XHMgWnPqozM752Q4bwHGxzfXJHOvUqkiQR2byL6uQ5hOug+AJENu+isXjjpBc347YNLUK4LVW6Lp3EsZfzYxJ4ntOSghwufUnWDdbkwlu2L9cxNwThBh9LPNvkwmt/24yAefobzWgYSWb01b/CvYNx07bbYHT+DW5dKvAKpfosobhCV5/Oi/89TySm8IkvRjl9+P5IXWeX8xTPHaFt71PEhx5EeC6yqtPIzlI8f2RNfd7VML/5iQ9o3ESHuMEGv0qMvfUdVF+QTY99ib5Hv4BrNRh97W/u2HiuZzFbOLnq6/xCIZqQeeTZMIGQTHta4/EXwlwYNsjMra//bjQ5QCDYxsL0oRW1F55L+eIwZjGLP5lGUjTcRpX64hRO7S77Ka6FUu7GFd422OBXDddqMPrKXyGrGr0PfZb2oYeZeP+Ha3Y6XhsSsiQDTQ8RwfW6RaMuGDtrEo41PUdOH62DuDN57JOp7SRS21YsFAGE62BkZjAyV22XJRklEMZdQxXHdQ1okxWJ9NYgstK6PPeHFQb2x4im9PUcboMNPvY4Zp38xeOY1QKRrq3s+fof44+n7/i4quwjHuqjJ7GPzakn2NzxBL1t+2mLbMGntRYAE0Igy5BMqUTjCsWswwevVcmu8yoRaFbyVG5fTmihKF1Pf2ltc7jt0a+eiF/mwc928Mb/mMKsXVHyqrrMrmfbGf2wQCWX54bZg+4CPj1GKrkDwyyRK56/SSqj5VFknd7OR1GVZuSN4xosZE9iWOt
bInODXx3mT76FrOrs+MK/JrX9UQCGX/zTpYzg601Aj9PbdoB0bAdBPbHkyC2EwHLqFGoTTGYPUahNABAMKYRjCn/3Z1nCUZnnvx7j6ME7UdBLQlG0lRWckuWW3InXfaz7US7VblotKxKK259MEu/yEYxquI7g/Lt5Nu+P4Q8rTA9XWByvs//z6aZvUFIn3ulj68MJfEGFswfzzJ2vsnCxhmPfQ2l4Cb8epafjYUqVKfKl0VULRSQJVfUT8CWJhrqRJJlieWJDKG5wW8wdfx1Z1dn15T8gtf1RAomOOyIUA3qcrelPkI7twPVsctVxTLuCQKArIcL+FKnoNvxahHNzr1KsTeG6Al9A5qFPhPAHZRr15Z8ZSVbo6H4Q3RdhcfbYks9hLDlAJN53y7lJkkwo0sWtDEHBzn6Sex5n8dBryKpK55NfRFwjIBWfH7EWh0pWKBRjaR9G1SXW4UOyPPvSy0EAACAASURBVLY9lqBRdTj6Uo5Hv9ZFLO3HqDlMniiT7PHTvzeGqsvMjdbY+XSSufO/PJmlXddkbPpNZElmoPc5Uokd93pKG/wSIDyX4sQwjcICgUSaHV/4fT78L//7upaokCWVzvhu0rGd5KoXubhwkLqVX1oYSJKMrgTpbdtPb3I/fW0HqJt5quUa7/2iQrpPo5x3OPXh8um4FEWnd8sn0P1R6rXsklBMpIboGXhmZXOUVRr1mxtjnUaV2uxFPMtAC3WgBkKULw7jOVeEoBaO4kt0rGjMa1nx9tmo2Jg1F9f28IdVigsm1byFqsv4wwqVvEU1b+O5glBCI97pw6g6TJ2+s7Vs7wWeZ+MBntjInLPB+lFdnGDivRfZ8YV/QyCRpm1wP9nzKzc43ApV8ZOO7aBhF7mw8DaVxvWZqh3X4OLiQUK+duLBPsL+FJbcoL1T5f1Xqug+iQNPh8jOX7/Q8VyHzNxJfP5oS6SaJCtIkkR2fhjXvbHLkYRMrG3LLe/DKuWwSs3+9WiSyuR5MkdeR7hXXiC+RIr0E1+4ZV/LsSad4txIle4dERLdforzBrPnqjzw6RSRpI7mU5gazqPqMqou45ge6a1BBvbHSG0O0ig7zJ5rfqG6FqY3/QjZ4gjhYJqgv41s8TyV2hxd7fsI+JPkSxfIly5eFSMtEY/2k4xtQVODOE6DQnmMQnn8uq2wImu0J3YQDfcgSRLl6gymXWU5u5kkKSSi/cQjm9G0ILZdp1Aeo1RdPtHEBhvcCXIXjlOYGCbRv5vufc+vq1BUZJWgniBTGcW0b7xYcT2bQm2StsgAsViELY+G2ftoiHBMwR+UmwLy1WWEomczffFNZFldCsy4jOOYjJ19CesmTtuSJDP0wDeWItVWglnM4J490iIQAZxGjerU+RX3czUrEoqnXs3gOoLpM1UQYBsuU8MVFE2mUXGwGy6ljInw4NTrWWpFm8x4HVmVMKoOniN47S8n8Dyol654wquKn+6OA0TC3UjIhALtxCKbqDcyBANt6FqEtvg2To18h2p9HkXx0Zt+lK7UPjzhYjt1NCVAKrmTxfxppubeX0qJpCg+tvZ+ilRyJ47TwHLqRMO9uK55nXVLljU2dz9Nun0vrmfjOiZqJEC6fS8L2RNMzL170zfcBhusF7XMJMXJM8R6h2gb3E/Pgc8wc+QX1zdUlUsGEoEcCOA1DIR96ygTSVLwPOfmunTRDLKQkLFMmL5o0dmnk7uUgOLw2zdOHea59rKV+FzHwHVNxE0WGIJmUMZqLLHNWs/XG31co07xzOrjnmGFQrFebt6IWb8ijS2jVa9QnGsVGqXF1uOrr22ZgBoAAefGf0w03Muuwa+DEAyPfo9IqIsdA18iHExTa2RIJXfQ1/U4mdxpLk6/geM2UBU/fV1P0NPxCJZVY2bxEEK4pBI76GjbTaZwjotTr+G4Bn5fnKH+zxHwJYErPpMdyZ10ph5kdvEwMwuHcVwDXQuzpfc5+jofp1ybJVtYTTqjDTZYO6Ov/jWp7Y8Q7ugnueUBMuc+wKq1GvJ8PT3IkTCSquLfthXj3Aj1E6du2q8nXAy7TMjXhqYGsd3lo1JkWSESSGO5dUyzwcKwwcKMTSnnLiWrWA218jwgrcjw4brmpRIJK0NSdRRfAKdebimCJGs+JFUFZ/XlCO6LxPuV2hyGVaZh5rGdBqXqJJZdxbLKWHYVTQ2iqUES0S1ISEwvfITt1BDCw3bqzGeO0zALdCR3oql+ZFkjHulHkhVmFw5dauvSMPIs5E61rPoUWac9sQPXNZm5qq1plcgWzuK4Jm2xwXv47Wzwq4bwXOaOv47wPLoe+CThzoHr2ki6hhKLoqVS1I+fRIlFl+mpFcc1KVQniAY66Y7vQVMC17WRJYVUdBvtka1UGgvUzByOA8XspZIHa/DYXpw5wtiZn+C6txZQhcwI81MfrbhvXyJFctcjyGrr7s/f1klix8OrnivcJ9X8HNdACLeZMcNzsJ1G07teCITwkGUFTQ0Q9CepG9dnDbadGvXGIu2JHciyjiap+PQIjtOgYRauaikwLwnay/j0CH5fFL8vzq7BX2vpV1MDqKofn+/WP7j7HUUPIC3j/9X/1K8R69l21+bhORbnfvaXS1EbQohmXY0NWpgfPsjWT/0WEvIlf7vWkr5urUawcw/m+CRew7ipz97SNZ7NfOk08VAffe0PEw12UaxNYzoVhBDoaohIIE0i1IfAY7Zw4q67mhWyI6sqR6DofnyJFJLcur6TfQGCnZvWNIf7Qihe0W8IuCQILx0tIUkKiqxh2bXr9CGe8HA9G1XxIUsygmYyW/dSXZfWti7eVW4OiqwjSyqua2Hb1whbu069kaNaX596sncDPRQjkGiNiJAUjR2f/9foodh17bVgZM1OrmtBCEGsd/uSX5lVK3HupW8t6aFcy6C6OHnX5nO/4jkW9fwc4VQf2z/7ryhNnW3ZQtvzixR/8VpTGAqBk19JcXpBoTbJyPzrbE49QTzYRzLc31x8NJ8aQNCwikxkPmSxdHbZsL87ivBWtBiVVR0tmsAXT6H4gvjbu3CtKzvAcO9WXGNtL9v7Qiguz7XCzMFxTVQ1cF0qdVlSUBU/tmPgCRchPFzPwa/oy7aV5Su37XgmrufQMAucvvD968b9uNC+7WECiTTxTbvoeuDZFV/nuQ6zR1/FuUurtdTQIy1C2x9r55F/9e+XjhvFRU597z9QGL+5fuyXHbOc48Lrf8u+3/g/8EWStA89wuzRV5Y+l0MhAkODyAE/CLBm5zDHxm/ZrxAemfJ5qsYiqegQ0UAaVfE36ye5BjUzR7Y8QsXIcD8/C0ogRHzHQ0T6tqFH2+h6+stXLZYkXKNO5sgba+r7PhaKrdh2nVojQ7ptD7oeadkC61qYUKCDan0e17VxPRPTKhML9xH0JylXrwSK+/QouhZaOjatCoZZJBbuJRRIUWusX7W1O0nn3mdJDuxdOm7buv+6FeLVNIoZLr75D9edF57LwvA7d60ucebch/hjqaXjLc/+JoH4leNAvIOdX/oDipOnseoVLrz2tze1WP4yU54ZJXP+EKmhh+l56DMtQl
HrSKGlUhjj4+AJ3OrqAiQaVpHJ7IcosoYqN7fnzQXC+hWVvxZJkglG0kTjm/CH2lAUHU+4WEaFammacmESbwV6RwC7UiB79E2sYpbolj3kh9/Hu7xSFB5OvYpZWltJ2Y+NUHRcg0z+LPFIPwM9zzI2/Qa2U0NVAmzqegJdCzE19x6O29RH5oqjtMUG6e96mrGZN7CdBgFfku6OA0hXrRQ9z2Yuc4xouJutfc8zOXeQhtn0xNfUILFwL7nSKMalc7KsIUtNdwhF1pEkCU0NoKmBpn7Ms1YfOngTJEnGF2tHQmLzM98g1rsdAH+0rWU7LITAKGXwPJeZQz8jO9LqjuDaFvXc2pJuXot/+zZQFIxzI3CNf5ja0U7o4QPY8wvUjxy/7trcaGuOu8L4KRStqSSXVZ1dX/m3RDoHCHdswnNsUkMPI4RoJmE98/66frf3O43CPJX5i6SGljEYCIFTLmPPziM8D7GGou/Q1DO63toTxq4U3R+jd8sztKd3o+rBlmdQCA/XNqgUp5gZe4dSYXxFbjluo0ZttulFUpu+sKYiVcvxsRGKAIXyRcZm3mBT5xPs2/EvcF27qWd06ozPvM1i4czSQ5MrjjATSNHdsZ8Hd/w2rmfhejal8iSypLT0my9dYGz6DXo7H2f34DeaPlqSjCTJWHaVYuWKjqsrtY9EdABV8REKdqCrYQb6nsO0Kth2nbnMUYqVidu6T9UXJLllH9DU+Q195neRVR1ZUVsUyvmLJ7CN5grBc23O//y/YddLeK67cl8vCdSOjuY2zPNAVnAWFkGWUFPtIEk4mRxetYra3oYSjaL3duNWqkiaitqRQg4GcKs1nMVFnEwWa3IaORy69dhwnaA+8Y//nsEXvglAcuABIpcsr3u+9secC/wl5ZnzVObHVnZvv0TogQjh9GaqC+MAeIaB3tuDlmpHWBbm+ASNM+vnNqbIGmF/Bw2riOWsrqTpdX2pfjYPfZpU94PYVo1yYRKjUcB1TGRZQfdFCYZTJDu2Ewx3cPbY31MtrewFbhWzzeiWVbjx3Ip1F4rxNoVSwV3RM2k7Ncam36BUnQKaW9nJufco12YQomkpnl74iFp9ESE8IjGJoYcniMcsLp5oo1bWcF2Tcm2WSm2upTKgEC5T8+9Trc0RCnYAEtXGApXqLInoAK5nXxVELsiVhpF9GYSVRtfCCOERCJtEU0VcccWCbdk1qvVFJCRKldac8E2959q3H4nNe0hueRBfOE7vw5+77nPPdZh49wc4l3LEzRx5GbOyEgX7TZAVgnt3IQf8SKqKsB2sWARJ15HDYYTjoHelqZ88TejRh3DyBZREHM8wUdvbCO7ZhZMv4N8xROXtd/HKlSXl/1qoZac5/g9/AkDPQ5/FF46T3LKP5MAD7P7q/0ppZoTsyGGmPvwJVrVwi94+/uQuHKNzzzOEUn30HPg05176CwCcTJbKm++gJmK4tTp2Zm1bxRsR0BPs6vk8FxcPslA6c1t9xZKbibdvo1HPMXH+FUq50ZZs+pKsEIn10rPlE7Sld5LqemDFQhFAVjT0WBtKINTiYeEaDRqZ6ZtcuTzrKhRVFZ77UoTXflShWnZxHdD05iRlGWxLIETznCSB6zaYmH0HVWtWDJOkGnPZD3BdUDWQlCrZ0hFsSyArsHN/gL4tGsNHprk4OYplClS12b8kXel7aTzbpty4QMW4gKDp1OC4TQGoaBKaHxy7uahq75TZ/1Sdl188jm02H+hwTKbmKDiXLKOyAqX6WUq1szj2VeNd+jtcvk7TpaV6ubZ9Y+EgyQp9j3yBtsEDAATbugi19wJNAXi5GE9l7iJj73wX4QkK4yevK9JzuwjLxsxkUWMxnFIJLd0BnkfjzDm8eoPQow+h9/WALFE7fLQp9BQFvasTvb8PhECJhFHC4aZQXCdmDv8cgPlTb7Prq39IrHuQWM82Yj3bCHdsYvj7/29LidBfRgpjJ2jk5wkmu1rOy6EQgZ1DSLreTLU1fAZren3UI9BcKQZ9bSjy2oo/XU0w0omi+pgdf5fc/CmuNeAIz6VcmEAaf5dgOEU0uXnl8/QFadv3FJH+nUiKgqzqCNdBeC6l0RP3Vij6AxIPPh7k2S9FiCUVjr1f5/j7DX7z9xOU8i6xpMrrPyxjGILnvhwhGpcp5l1e/qcyn/1GlHBUQdUkzp8yOPjzKl/5rTjJlEq56PLqD8pEEwqf/GKEQFCmXvU4f9LgkWeC7Hiwqcs7+VGDU4eMlvHee6XK/ieDJDsULEOgaBKvvlgmGJZ58IkggZDMsffqjJ4yee4rER56OkgsqfCL75WxTMGnvx4hHFP4p78s0BAeDz8TZMe+S+N92CC36PDFfx6jXPSIJRR+9LdFPA8++40oQsDEiMVrP2wVEr5IEl+0DWgaRwae+XVU3xUn2npuFrtRJXfhKFMfvQQ03TPuZDZmIQS4bjN+1BMI20a4LmpbEi9kguvi5otI21S0jg6UaASv0cAulrBn5jBGL4Ln4WSzyMEgSiyKHAohR8J41dptb23quVmO/8OfEO7oZ+gzv0Mg0UHHjsfga3/E2Z/+59tfLX8MUduSeIZJ/cNDaOk0ev+mdRWKsiSjyOsjHi6H1dbKs9zMom3U81hGBU0PrrhvPZYksmmI/PB7WOUC7fueYfHQqyR3PYKRX1jTfNdNKBoNwQdv1HjsUyG++18LVEsekgT+gMz7x2qMnGoqQaMJmQunTSQZvvibMV59sUIipfL6jyoIAY88GyQYlpFluHDGZHbColzwyC24vPdqDX9A5o2fVIi3KWzb4+fVF8uousQTz4eYvmg3xztaY2TYJBiS2fe4YOSUSSKlkJlz6B/UuXDG5MxRg4EdOtsf8HPorToHf14lFJH5u/905QE7/E6dx54LIcnQllLZtsfPy98v4w9IPPbJEGePG6iqxN/9pzxf/hcxuvo0FmYd6lWPmXGbkVOtFt1AvINdX/m3S4lEL7Mw/A5mtWnImTn887urM/M87Ll5vFodr97AM0y8SgW3WsO3bStKNErj1Gns+UUaZ87hHxzAq9exZuawFzLIfh96bw+eaWLNzKK2JZH8fkCgdaYxxybAuX3rsV0vUxg/yQff+t/o3PsJdnzh35De/RSyqnHmx3+OUfp4eA2sF8JxkMNBtK4u1LYkwmw1MuhqmEggTcPMU7eaagZZUokFu1lJ4apYsOs6d7a1YhqlZs4BzX/TdoqiIysaRn3lLzlZ1bHrVaqTI0iqimebWMUM5YvDRLbspjI2vOr5ruv2WQCK0po417EFhWxT16cosPPBAJuHdC6eNfH5mtvoasnDbAgUDTwHkODVH1YY3OXjqc+EqVXLTF9stZCJyzUipOZ4ktTcxl49HjQLdxt1D6Mm4TqCRFrlsedC5BZdNE1C05pzQGrO74b31hLi1BxUAMVcU39qmQJJlpibtDn4co3+QZ3P/LMYf/Ufr6RQCndsItzR6mU/d+JNzr30retiW+8aQmCOLl87x8m25rUzz49inm+NNmicPN1ybE1NY02tfsuyGuZPvoWiB9j5pf+F1PZHEZ7Lqe//R
xzjlydv561wFjM4sabRy6s3MEZa/y4d0W1s7niSbOUCZ2d+BoCuBtnZ8/kVCbv12DZfppQfw2yUiLdvI79wZtmsU5KkEE324w8mmJv8YMV9N/sSyJqO59iAhBqKIYRA8V0fxrgS1tfQImDsvMVv/F6Cj96sc/KjBpWyh+c0pYknoFHzSHWpeC5Mj9t4LlTLLrbd9KmvVZrGjxe+GiEcVRACLKN5vdHwmsJQNIt9nzth8PxXmyF4Jz9qUMg6VMoernt5PEG96mEYgnpNYDYE5YJLNKGweUhHCJiftBECqiUXSZb47T9M8tNvl9E0iU99JUL3Jo3nvhThzZ9UOXfC4NNfb4534sM6uYXmeNC8L9Pw6OzTeOGrEZAgt9AUznajgmsZWJeC1sfe+SdK02eb91TK3uVCRb8czB57FUmS2PaZf0lq+2Ps+fofMfzin2I3fvnyd15G9QVRdD+uZeA1GtSHzyDrOshSM/nBdQikq1aFsqQQ9rfjeBbOLbI+rdfWGcBqFJmf+pDeLc8yuOfr5BbP0Kgu4ro2sqzgC8RJtA/R3r2XUm6MenkeX+D6etS2VbvOj9GpVbDLBRRfALtewTGqdD3zZSRFoTazth2XdLOMFJIkrVoZpKggyxKuK/Dc5vHVyTEkGVRVWsqE4djNNpcj72Sl2V7Vmqs/IURz9yWaxhOkq9rKoFwytDiOQHjXj6cozVWeJF1a7V1aWcqyBAI8Tyy52qmXVo3OJePI5WPPEzj29eMhrsxXVliqcHbZ+OO5zb4lWWXz07/G9KGXcG2r+Ub7FfK3u2NIMt37nmPbZ/4lejDGxPs/4PzP/vJez2rdGXzhm/Q/+TUkSWb4xT9lcewQnmWhxpt+qmoigRKPUn3vw6VrZEkloMcxnSqO21TjBPUET+34A8YX32My++F1IbBXkwht4sHN3+Dk5A+YLZy4rfl39Oynq/8xQpFuFFVv+lXicdn6KSE1BQPg2g2QZCT5+m3byInvkpm9xvdVkpBVrZl1W3j44h3Ehh5EuDaFM4dw6su/JIUQN9QhrLtLjuuAe3Xg+jUrZeE1rdDXXnPt/x0brgv1u0aOeB54N+kLrvMtvnJ+mR+Ec42l+Np53my8q7PGX3ud8BzG3vrH5SeywdoRHrPHXiW9+2lS2x8hsWk30e5ByrMrTyjwcWD8ne/Rs/8FfJEkkqygdaURlk1w317cUgklEsGtt1rhPeFQM6930xHCxXJqmM7NVQ2mU8Vbpxe37ougaWEs4/Z2RMvlaWyueGT8yTSy5gOJpeSyij90Q6F4Mz5WztsbbLAc4+98l0jnZmK9Q+z+2h9x7O//LxqFtVkePw5Yk1Mo8TjGyCjW9CxqWwK1re2W1wkEttPAdm8d0imEi7dOkS6Ls8cprkPN9+UMMFo4Tscjz+NLpJvhoFetRxrZWeYP/njV42wIxZshScjhIEqsmcxTCA/RMHEyhXX1oL+vUWTU9gTCdXHz5RWlqLrbFCZOc/qHf8aB3/4/iXQOEO0apFFY5H5OaHA7eA0Dz1zEXsw03aUqFazp2VteZ1hlPhj9bzdMLns1jmtRqs+uSIDeCssoYRl3xpCohqJokTiLH76Mcc2L8I5W87sZkk9DDgVx8798JT61vk6in3sa/86tSKqMcD3MC1PkvvUdhHnnAufvNFLQj6zruMVbb2eUeJT23/sGTqlK/q9+gFe6Hy28gmpmaqm2ydBnf4fM+Y/W3cn9fkIJh5B0HSebQ/bpKJFIU0jeBIGHYa9sC1szcxwb/w7uHSrOpmoBZEVDeC6ObbREo60Gp1bCLGQI9WxBCYRaqh86jSr12dUbW25bKPoG+/Hv3krxH39+u13dV0g+ndBjDxB8aBeVV9/HGp8FScIzTMQ6+N3dS4L7dqAkY5R/8uaK2gtoKmfv44WXUVxk/uRbJPp33+up3BXU9naUaAQnm0OJx/EPbrmlUFwdAsdb37pEkqwSjfcRb99GINSOouhNHadZoVyYIL94DsdeXYSSJKv4Eh1o4Tj+9q6W1aGRn78HQlEC/+5BfAMrr771cUEOBdC6UzjZIpXXPsDNrn9h8nuCJBHYv3NFRY4A3GK5uTJ23GZ0yscASVbQglHM8vrGA98v6L3dRJ58DCUaJbB9CCSoD99efPKdRtWC9Aw8Tbr3ITQ9eKnWiwdISJJMe9cDtHftZWr0DSrFKVb6BlYCQYTrMPfOj7CKmRaL+loXL2sSilLAh5ZuR0nG8G8f+P/Ze88gva77zPN345tz54TuRiPnRJAgGERSpIIlKntkj+OMx7Vje8re2tr9sp/2y9aut7xrT9WUXZbkJFtZFkWalEQxggQDQOQMdM7hze97c9gPt9GNRncjEWIw+VQ1we5777nnpuec8w/PHzEaJrJr08J2r1bHvLyoLCOlkygdTVgDY/i+j9LWhBiLBKlh+TLOTGHBTSyEFKRsGjmdQAip+K6HV6tjT83h6/MjlwByYxYpm8YankBKxpAb0kFEu25gT87hVa/7gAWCdhszCKoSpLMZJk6hjFuuguvNK8NkkbIplMYMci6DIIqEN/TgdQa2FWt4YqmpQJZQmnNIqQQoMr5mYM8Ughzga5+rLBNatwY3X8TJl1BagvuHIODVdeyxaXzTQulqDaqp1XWU1sbgekanEFNxlJYGPN3EHpnEt52F88vZFFImiRgOBSrKuok9tfQeiPEocnMOOZtC7WnHzZeWPDMnX8IemVx8Zg1plLYmhPmIdrdcxa3UlrrZr4GUTSE3ZhHDKr7j4MyVgud61fYqCMFzj4SwxqaQcxmkdAIkEa+m40zP4dWvs3VJInJjBimTQlAU8Fw8zcAtlHHLtWV2Xb00jV6aIZxqZP0Tv8fpH/z5in39sMMan6Ty8iHEaBRzaDhQq79l+9nSsgbvBQRBoq37Ptq6D2CbVWYmLlOvTOI4OqIoE47mFkQjFDXOxRPfw9DyN2+YQD7MtQySPZsw8tkl98HWqtSGL9x2f++IFJWWRpJP3I/S1ojcEshLZb62qOpiDoxi9o8G0dpAaP0asr/9JPm//QGhvi4iOzcixiIIqoJ29Cyl7/8MT3MRVIXo/u3ED+5Bbs7NxxQKuNU62lunqf7yjeDDEUWie7YQf/Reqr98g/CWPpTWhoAULBvtnXNUnz8cfJTzCG9aS+Kxe1HWtAXZL5KEb9kYpy9RfvYQ7lwRQZaJ7tpEZPemQOAgF5BW6slHAtIEij/8OXpxvnKYLBG7fxfxB/Yg59IggG+7mJeGqb7wBtbg+MKHK8Ui5H7/i9QPH8cenyHx6H6khgxiSMXJl5n7H9/Bmc6TfPwAUjKOky8R3b0Zt1qn+ovDhNZ3E97ah6cblH7wc/Rj50EUCW/pC55Fa+N8OhH45vx1Pf1yQOCCgNrTTuLR+5Ab0gGJJqJLnln9yGnKo1ML/VXam0k8dh9yNoXcnMM4fZnC3/9rQEbXQhBQ13WRfOw+Qmu7QAwCQe3xGaovvoVx6iK+7SAoMvGH9xHe2Evt0FGi+7YhZxIIkTBeTUM7cobK84eX2Cyj92wj/uBe5MYsgjj/zHQD7dh5
++g25EWJk8Xy8VJqXcDObe/5u0M2t4HT1+TOQucpzvRjimYpuS91/N0dYbYOR0FNhfg29+YZz8wjiqbtQF3U3ajz5Jx7FvbVknFIVQUzsDT/wQAMeq0nb48W3H9Ryb8Zf/DmsXL5JdKSK/4LO7XxRVRwtF7jkunOrk4Hf+ett6IRTiXcMomh9LKj0Xs7Sx5Se3PnmBxQuv1Zdr+TVq+XUUVSfWfgDPNilnFgCJUP3XQrp+72RF9Y/rORZC1eqlt1zHRCgqiqLVf+eaESLePUJm6gKuXUPRAgihbNYDcFD0wOZ1SUBs/gkkEunYvpdbiPq54p0HMQvr1App/xy2VVfxhaKhqP65pWvXr00i8Ryr7j2W0sNzbFzPomJmNrtBe58/BdD1cNayOKvrmJMzKMEgQtepfnwVNZkgcuYUbq6AOTZJYKgPrbWZ2pUx3M0ASzUaQWgq1UubIXqahjWzgN7egpvL8zAFIuxCKFoVl8n3MuRWaggh+PX/naKyccvT5bnyVsyilCyO/QrXeTg2g+riHFossScvdbngEowoPPa9JEZQsHTjwd2Ta5vk5rZWUyksTTL9659uWaeHY4x898f+i6cofjhQ/9Ftx5NS8vX/8be76iM88fo/PvCQoXjXQQbObp8ZfxahqOjBnYWnU6vUS3JZ5Tzjr/w90r0183Mda8fwmWCijdTASVyrikQiECR6DmOVNqjklvFsi1jHAfRghMLKFJHmHl9ICsjNX/WraEeTWMUsxbUbRFr7CDV1EIy34toWiZ5RFEXFMcsUlidpGz1LLb+Ga1UIJvw6hjfDfTJTF4h3HUIPxSmtTuF5Lqm+E9SKaTbmruDaJm2jT7Ix8zHlzAKJnlGC8Vak61DNrxJt6a//H2/MXSbRdQhFC1DLp8kvX8d1bdYK1zHt0hd24knHQagqelsLbq6AdD30rnaUUBAnm0ONRfzlSARnNY2abEKNRdGam3DSWRAKem8nsmbhFkvYS6sgJYHeLuyl1Ydqd9y19zkQUek8FGPpagHb/M/fKzme0vj6dxJ0DQYpbjh8cq7I9JW9iwPba4Si0v/ED9FDsW3b2g4/TqSl+yFc1d5TWJokM7XdQVbLp5l/f+fspLuhaAFS/SeoFdYppWeIdx4k1NTB2th5pHQxYs0YsRZCTR0gPYRQyS+NoxkhjHgrrlVDDQSpZpcoZxdRVI2uk3/AwoV/J9o2gBFrITtzidaDj1HNrZDsO8bChZfwXIu20bPY1QKKFiAQTbExcwkjmiIQbkKoGtmZiyT7TlBcmaJW8Nuetgx/g2p+FadaJNE9Sm7hGnooRmrgERyzQmb6ApHmHiR+iJj0XCqZRar5lT2PZjAODaHEolhTM4hwiEB3B55p4WZzhL92HCebw83mMadnCfR2oXe346xvYE3PoTUn0bo78ApF7KVVjOEBUFXsxRXsxeV9tynuiffZKrvMXvxi9pHfJeIpjXhK59oHvvfMsb/cH4KbgeQ7kb7+AcF4yz2P0XXq2zQPPbLXl7YrVq+eZ+3ab+45rpJZpLC0txXTJb4pAvxZtWubSOmi6AbRtkE81/GtXELgOn57UOl5CKGQXxoj0txDpLUf165tMVHcHCNudndC4rnOba1+JdJ1kIrie4qbe+vN1xShgPRTBbd0eLpN3Qbqx5fS87tnbtofhaKxMXeZaEs/0fZBXLuKVd7b99e8fls5wFwBZ8n3VKutzbjFEtWLV/BKlfrY28db5QrW3K0Gd5UPvjzOwkbplzugqJBq11G1TYeZgLnx/W+psB/kdlnpOTt9Ce0O6ul+Y1eK2NUHX6/Sc23sapF416FNu5+Na23aw11/ORRvx3NNnFoF6drITcHmOhaJzkMEE624toljVUBK7Jp/H5XsEuFkF22jZ7FKWWr5NOFUT/3crlnFdUyklNi1Eo5ZJpYYwjbLOGYF165h18oke4+TVzRAEE52EwglyC1eo1bMkBw4gefY5OYuY0SbkdLFtU1UDZq6jxCIJLCrxQfSOK5+X9kNyu99hKx+uUNv7sSu1eevGp2DBqefTrAya+K5ksyK9TsrFBs0aLCVPVGfv2oUNxxmx6v1DG734bSLaNCgwQOmMVNs0KDBV467zRTvKhQbNGjQ4KuGcu8hDRo0aPDVoSEUGzRo0OA2GkKxQYMGDW6jIRQbNGjQ4DYaQrFBgwYNbqMhFBs0aNDgNv4/AloVDpjmHBQAAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAUUAAADnCAYAAACJ10QMAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOy9d5Qc133n+7mVOk6niRgMMMiZAEhQJEFSoiiJpAKVk+W1bEsOu7alY3t9vOed9/asvHLc9T6HfU7y2rKCJSvSsgJNUpSYI0AQGYM0mBx7Ond15fv+qMEAw5lBHICkNF8eHkxXV917q/reb/3yFVJKlrCEJSxhCSGUV3sAS1jCEpbwWsISKS5hCUtYwnlYIsUlLGEJSzgPS6S4hCUsYQnnYYkUl7CEJSzhPGgX+lIIseSa/gmAqhhE9CZMe+qSzhdCIWqkcb0Gnm9NH1NJRltRVQPLLmG5lUUdo6LoaFoUx6kB8087IRR0PY7vO/i+g6JoRKNZpAxoNAoLXreEJbwSUkqx0HdLkuJPAeLRHN3tt13y+ZoSoatlF03xjpljilDJNnWzrvPNtOe2LvoYVdUgGs0gFHXBc4RQiETSqGpk5ppc8wY6lt2IqhqLPqYl/HTigpLiEn4y0LDLDE/tv+Tz/cBhrHgY26nOOjaU34euxq5uMEKhuWUjiUQ7vm8zPnYAgaCldTNSBtRqY0RjzaTSK4lEmnBdE993KEydIJVeSTzeSj5/DADXNSmX+0mnu2eaT6VXks6sQsqAQv44jUaBtvbtSOkTjWYZH9uPbZev7h6W8BONRSdFJREj88H7qDz8NN54frGbXzQoqSRBtQ4/AcHrilBpz26jObUaRdGoNyYZmNyD59u0ZzbTkduK59scrg8D0JxaS3PTGgw9SdUcIxZJ4/kOg5MvIoTG2s67iBtZTo08hlOrX7BvTY3Skd1CJtmNEAqV+jAjhQP4vsuy5h0oioqhJUhEW5gs9TBRPk463U251IdllfA9G5CYjSky6W4URUU3Euh6FABVi6AoOrqRxDQnSTYtQ9fjWI3CvONx7AqlYi+pVBeZ3FqskRJNTZ1M5Xsol/qn1fPXL5RIDCUSxauWrnLuCrRkEzII8M25z0SNJ1BjCQCk7+GW5n/eiw0lGkMxInjV8qu2NhdffVYU1Gwaob+GhVBVIfOh+xDx6Ks9kkVBItpCe2YTAxMvcmzgQcaKR/B9Byl9xoqH6Rt7FlU5p17qWhxDTzBR6qEts4mpSi+qYpCMtWM5JU4OPYppF2ddsxCCwKNUH+Lk8KP0jT1DJrmCplgHCIgaKVpT65ksHadn4EGmKqfxfYexkZeIxZppa7sBXY8jZYDnNggCf6Zd121gO1Vcp4aUPkIoeJ5N4LsLjkVVIzS3biaTWUUi2Y6mhb+vHzjU6xPYdhkp/QWvfz0gtfVGOt/9MdRo/KraEbpO29veS/NtdyM0ffaXikpy3WY63v5BVn7sV1nxkV8GsaAJblGR3noTne/+GIqxwNpUFIyWdrSm9DUbw6IwlxKLEr9lO2o2hTcxhTj7AIUgum09kXUrCRo2jX1H8SZCY7+SShLfuRmtNUvQsDH3HALAWLsC6+BxAtNCX9mJ1prFL5TRWrKouQzOwAh6W47AcmjsP4b0/LCPVV0ElhX2MVkgum0DINGXtaIkYtinBrB6etE7WonfcgPRjavJvO9tSMel/vwB3MFR1Fya2PaNqLk0QaVO49BxvPFLc068mnC8OrZXY1nuBibLxynXhwmkt+D5UgZYTgXTnqLhlDDtIqmEjaqcvzgu7S0tZYAiVNoymzH0BFEjPdOOQFCqD1FrjBNMk5EQKqn0CqT00fU4iqIRiaRpbtlEMtkxLT1ac/pRhEpz8waaUivQtCiuE0qwzc0biMdbse0ylfIgup7A9yw8t4Hv25f4BF8/aAz34TfqBO7V3Zv0fSqHX8K3LaT/ihdF4FPtOYQ5cIbcrXcRX7H6qvq6HJhDfXj1GtJz5v1ejcTI7rods+8U1eOHrskYFoUU47duJ7K2m8bBHozVXSiJkOWj2zaQuOUGGodOoGZTpN7+RkoPPIIMJKl33gWej907gBKNID0frS1HfMdm7BP9YFronW1EN6zCGRwlunkd7uAo6XfdhXX4JMbqLvxCCRGLEtu+EevoKbTmDKl33kXx6w8S2bCK6IZuas/uR7oeTW/djV8o45erOH0jxLZtwDp6isC08EsVRMQgsftGlFgE+/QAwjBAeX34oWy3Rt/YM+SaVtPZfCPNqXX0jT+L65kLXiOlj5RymjzPEuDlSwOZ5EqWN+9gsnIK054iFe+YlirCtjzfIpDBef0GVCvDCKFQKJzCtisoisZUvodi4RSe28DzGiDOPXuBCD3OgYtpTiIDH9dtgID85FGEouK5DRynxtjIXlQtCjLAdRsEgcfo8F5cd+Fn8apDKCi6jlBDJ5MMAqTnIf1zLzah6SiGgVct45ZLSG/2S09oOggRCiSKinRtQCB0A+l7SHeaZIRA0SMITaUx0k/gunDe73MWgWMTODa+eQHziRAITUeoGkIIpAyQrjtr3AiBYkSQXijhC91ACAUZ+ASOPaMin7u/Em65OJeop5+Rns4SX7kWe3wUNR6q94HrhvcnFNRolMBxZo8BUCJRkDLs8yK4elIUguiWddSe3It19CTu2CSRNSsQmhoS4pGTmHsPgarS/AvvI7JhFdJx0XJpSt98CC9fBEVAEKC15ubvQ0r8YpnGgR70lcuwjp0mFouiplPEdmzEyxeRpoXr5onv2oqxIvSaOkPjmC8eRLoekQ2r0dpbcEcmcPqHCeoN7JP9BLVwsYhoBGHooCh4U2W8fAHZeH1IGorQcH2LseJhatYka5a9kYjedEFSnMFVmm0S0WYCAorVPgw9iSJeoYrNaV9Sr4/POuL7PvXa2EX7aphzf4/aK66zrNKccyyreNG2Xy0ITSexZiPpLTtREymEohA4NpUj+ygffik8SVFIrttC9qbdqIkmpOsw8NW/O7fAFYXsrtvRU1mEqhJp76RydD9CUUiu24ozNc7Ucz/GLRdRYwmyu+4g3r0WPZ2lenQ/k088hAwu36wQaeske9NujFwrQtOQvk/tdA+ll58nsMK5p6dztL3lfqrHD6JG4yTWbkKNxvGqJcYeegC/UQdFoWn9VjI33oaaaCJwbAa+8rczRApgZJtJ33Az8e61GLk2cre+mfQNNwNQObqf4ktPo2dyLH/fz1F6+XlKB144j3A1Ot/zs9gTI0w++fBFbZVXT4qKQInH8ItlCCR+qYp0PUTEQEnE8afC4wQeftVETTcRNGwCs0HQsMIB+gvFpYlQ6pAgbQfpesiGhXQ9kBKhq2jNmbCveOgVtY6fITBD9cvPF8NzfT8ck6HP2w+AtGzqLxwguXsnqXvvwB2ZoP78fvzCa99TGYtkac1sQKCgqREadgnXM9HVGLnUajKJFcQjOVa0voFaYwJFLBz2kk50kU12k4y1EUgfXYuSr5zC0JJkm7pJJ7oIpIvn2
0xVeqlbU2SbulnRejMgQtX59e+7um6ItHbQeue9mENnKB/eF9rMsi34duPcSUFAvbcHe3KUzM5bSa7dNMfGp+gGyfVbKex5EoSg+ba7qZ44TPXEIbI33Y450ItbLuI36hT3PUPt9DHa73kvihG5EgUBADUaw6uWMftPETg2kfblZHfcgl+vUj64BwChKCjRGJntt2BPTVA+uBfp+6jxxLl7DAJqvT1YEyNkb9pNfNX6UPI8ry+vXqVy7AB2foyO+9opH95L/cyJ8LtaGDPrm3XsyTESazZSPXUUf/p4dNkK9HSO4r7nLsl5c/WkKCXSdVGmnRYiYoCqIB0XadsoyWmDsAAlHiUwLaTjokQjoL5CPZVBOGhFhGSbiCIUhXCVTd+MPP90iV+pYe49gvnS4VljOvv9hTF7NngjE5S/9xjGyk6Sb7qZ2I5N1B574fKexwKIJhTe9jOtJNMaj387z1j/4kmhtlulUh/B0BIE0qPWmMB2a2iqgedZlM1hKuYoUvr4gUO5PkS1MY7tVhmeehnbqTJRPIbrN0JSdYoMTLyIROJ5jVDNDjwct8Zo4SCS8LOUPmVzmGDCI2qksZ0K+copLKeCDDzGi0dfoZ4v4ZVQjAhKNIo1OkDtdM85VfYVpBc4Ns7UBF61vOC89upVqj0HcTu6SK7fSu3kEZypSZrWbw1VTSFCrateQ7ougT3Xdns5MAd7MQd6Z8bcGB4guXYzRnPbnHOFblB44Qnc8vxe7MC2cGwLdwGvc2Bb2BMjIAOk7+MW81gjA7PPcSzqfSdoufNejHSWxjQpJlZvIHBsGkNnLum+rp4UA4l9vI/4bTsBQXTjaoSqIl2P+t4jxHZsIjAbaC1ZhK5jnzgDgSR242aSd+zCOnYaJR7DHZkgqDdAEcS2rMMrlIndsBHvQpKalJgvHSF2Y9hHYDuoTXEah05eeMg1k8BsENu+AWdoDC9fAt8nsj5U7UGGNht/rq3lStG1LsZ7fqWDWFJlasxhcjiP7y0OWXi+RaE69wf3fJupau8Fry3XhwCoNs6pobXGxLx9NJz51dByfWimnfNxfpsLIZpuRXml9/MnFFJKrNI4Mjg3r+z8GLWTR8jdchfxleuoHj+ENTaEV69eoKX54Zs1pO8jPRevXg2dKIFP4LoIVZshxcWCGkvQtGk7sY4ulEgUxYhgNLfNISsAa3Tgiu7psiAljeEB3NIUyXWbscaG0ZrSxLtWU+87cckvgUVxtNSe3kvyzbeSuH0n9qkB6s/vJzAbNPYdQSiQuG0ngWlRefAJ/FIVpKTy3R8Rv20nTXffSlBv4OWLuON5qo+/SPzGLWhtzdSfP4D0PIJqfUbdtk8PEDQsnMEx/FIFp38YgoDYrq0IRWD3DoVG9sFRAsueeYs5vQP4+XBRS9uh+sNnib/hBozVXdQefxFvsoDalCCybiUIgd07iPnSkcV4PGH/VoBlBghVYFZDJ8dPO1LLN7D9I/+FWGauZPGTCN+xOfJv/5vxw0+dO1avMfHYgyRWb6Bp03Za734Xvlkj/9QjmAOnL6v9kGxlyHtBcB4BXrkjbSFoiSba3vpujFwrpf3PY09NgJS0v+29857vNxrXJe7QLU1RP3OS1ObtFF9+ntjybvR0ltpjP7jkNhaFFIOaSeX7j837nbnnMOaew3OOe/kSle8/Pue43dOL3bOwdFN7MrRVNM5Tl82XjswhsLMhPjPXPbFndj+n+rFP9c86Vn/uZerPvbxg31eD0T6Lr/zPIYyo4PBzFa7Arv0ThXTXJja/+9eJZzsufvJPCLRIjO7b3jOLFAGk51I7eYR6/yliy7tpvu1ucre8CXPwzLye4dcCjJZ2op0rKe55itL+0MSkxuII7XrEJ1+A3KWk3neS1OYdJNdsxGhdhlPM45Yv3dn2Go6w/smCa0teeOi16wW9nog3L2fbB36LREsX9fwwPQ9+jsBfOK7yJwFdN93Dsh13zzmuGFGUSITAtsL40ZFBnKkJoh1d4do/K1wpCkKIMDdciFAdVtRp0rwcCTC01wtVnQ7fURCqFmou56n15/oL7f5C1UJJ9KzdXwYQBEgpEboOCBJrNl15UPWs/sLx4avn+ptG4HnIwEdNphBGJPwu8GeF8NiTY1gTI6S27UIIQaXnIL516SFZ140Uo3GFXIdBPKUiBFj1gKlRB7Pqk2nVaV1uUCl6jE87IBQV2roiZFp1qiWPkV5r3pdmIq2yfG0U34PRMxZmda4IJgQ0ZTUyrTrRePgjW2ZAadKlUvQW9AMk0iodKyO4jmTgeGPmWK7dCNsR4DQCylMe1aLLK9d1y3KDlmVzs0KGT1tUi5dGApouyLbrJDMauiEIPKhXPYoTLlb9wlKEokIqp5PKaUTiCooiCHyJZQbUSh7VoofnXmc1XiikV2wk0dIFwNHv/RXFM9cmCPe1hOY1O+Y9HuvqpuXOe3DLJXzLRI3FMbKtlA/tDaM2CD3LTZu2o8YTxLvXosXDsBq/UccaH8Yam2vPnQ9CgN7cSmLVetRYHD3TjBKJkb35Trx6FbPvFG65gBKN0bRhG2osQaxrFVqiidwtbwqdFSMDWKND2JPjmP2nyOy4BSPbgqIbaE0pnKm59uiLQTEi4f3FEsRXrkVLJMnefCe+1cAaG6IxMjBD2H7DpN57nPS2XURaOpCeS73vFLUT582hwKd6/DCd938Ut1ygMdw/m/AvgutCiu0rI7zp/c3cdHeGjlURFEUwNeqw/4kyj35tkpvuTvP+X1vGsz8o8I+fCY20kZjKuz7Zzj0/28aLjxT5m989g2XOvbH1O5P85l+uoV72+Jv/0sfRF2YbcyMxhW27m7jlviwbdyXJtYckVZxw6Nlb4+nvFujZW8W155LDhp1JPvGZlZQmXT7zsR5WbY7z5g+2cMOdKXLtBkJAOe+y59ESP/j8GIWxc3FVigJv+XALH/iNzjnt/tlvnOLFR+bG070S6RaN296R4w33ZOhaHyOZUrGtgLE+myPPV3j821OMnLHmJfVIXOGWe7Pc/LYMqzbHybTpaLrAsQJKEy5DpxoceLLC8w8VqJWuny7fvvUONr3jVwGYOr0fMz9y3fp+LcKeGKW0/wX0TA5F1bArJcoH9kzbE6d/WFUNCcyIYOcnsPMTodfaiITOCwmN4X7cSonAdfEqRcqH9+HVKgSOTfXEYdziFFJKlGgMPdMMQPVEaHJSYwkUXceKTEeQqFrYn25gjQ1jjQ2jxsJcaLdUAAF+o07+mUdJbtiKnsri1MoU9z2DlkzNUqF9y6R67ABuaQq5kCng7P3pBvbkOPbk+HSOdxjyc74kHFgmk089THLNRvRMM9Lz8M25DhxrbBDftrDGRi6bqK85KTblND78m53c/LYMuqEwdKpBccKlKaNx14eaybTpRGMKRnTxs0c0XbD7nVne/+udtC43KIw7HNtTRVEFK9bHuPM9zay9IcE3/nKYl35UXtAbHE0qrNuR4EOf6mTTLU2UJ11GzzSIN2k0LwulQceafW0gYd+PyzRqAbEmleZ2nTvenUMzLu0+s+067/pEO2/+UAtGRGG0
z6LviEkyo7JiY5zuzTGWrY7ypT8aZHJobkrU7ffn+NCnO8m16+RHHE6+XMNzJE1ZjWWrI3SsyhD4kpceKwHXhxRbN93Gxvs+iR5LMtV7gGPf/1vs6ms/jfJawqtVZmL6FkJgNcg/9fAFzzkbswfgFCZxCpMzn8sHXkRBpS2ykonhPqzh/vmamIFfr5J/8qFLGntp33OzjjkT4yjnZSP5Zp3Sy8+98tJZCBrmJfU302+lNGPHXAh6Uwbp+5hDZy479Oiak+Ib35Pj1rdncayAb/3vIV54uIjTCFB1wYabknz4NztpXR45P6tr0bDhpiQf/HQnyYzGI1+Z4NF/maRaCtXWbJvBe361g1vvy/LR/7yckV6LoZPzP7xcu8FHf3s5mi74m989w5kjJp4boGqC5mUGBFArv0IdlnDqUJ0zR01UTdC63OCW+7Jol1D2LxJTeOtHW3nbx1op5z2+8JcDHNtTxXUkmi5Yf2OS9/2nDna8Mc17f9Xjy38yiH2eFN2U03jLR1pIpjUe+edJHvnqBPWyRxCApgkSGY112xMUxhxKkwsXWFhMqHqU5jU7iKZbqY73cehb/wuntmRjvdYQKOhKBE3opLVW8vYQ2nRuuhuEpipdGCAEvvTClMrpfzUlgpTBTC0DX7oIFBShEUifQHpoioFAEEgfX/qk9VYiSpy8M4gvPXQlgkDgSQf/Avn4i3OzIiyUoaqkd96KWy5Q7+257GauKSnGkgp3vrcZVRM892CBx7+Vp1Y+J5Xs+WGJdLPOz/5u16LX69ENwb3/oZVsm8ELDxX4t8+NUZw4RwDlvMd3/36MFRtiLF8T5fZ35fjGX8yvyiVSKtG4yj99tp/TB2cbbOeT0mYgwfckviexzeCSIxKWrYrwlo+0oBsK3/uHMZ57sDDLXvnCQ0WSKZWP/W4XN9yRYtOuJAeeOlcJO92sk0xr1CseR/dUGT41m+wL4y6DxxtcL6h6hDVv/hlW3Ho/ge8xdXr/EiFeJzRpOdoja3GlhaZEyBodpPUwW6nqFQikR1ZfhittTL9MTE0x5QwRVRKsiG2l6k0RU5twggZ1v4gmIqhCC6udB1VyxnJsP8yPLjgjtBor0ZQIrnQwvRKrEzsx/TIFZ5SKl+daBvJryRRtb3sPWiyBUDXyT//wwrnbC+CaVjxYvjZGtk3HrPgcf6k2ixABfFfSd9Rkcmjxc4zbuyN0rY9hN3yO7anNIsSzmBp1OHO4jlBg081NC0qrriM59GyFvqPXnkiEAqu3Jci1G+RHHHr2Vuc4cJBw4uU61YJHtl1n5ab4LAdktehimwGJtMb221O0dBqLGaJ22dATaVbeej9CCEZefpTTj33l1RvMTxEECjE1RckdY9Q6BUBcTZO3BxixTtJidJHQ0pTccQYbR6h5Bc6V8gjTbOt+CScwqXp54mqapJbFkw6aEiGiJLD8GgONI/jSJyCg4I4y5Qwy5QziSQfLr+EEFnZgcq0zmwLXoTF4huqJw0w8/iD1/gsncSyEayopLlsdwYgolPIuU6Pzq2nlvEth3KFjVWRR+16+NkYyrWFWfSoFj2Rmbr6vbggatVDtTDVrxJIqZmWufc2q+/QfMxctA+VCUASs3hqmRk4O2wgh5h27awd4nkQ3BNlWHd0QM86iSsFjz6Ml7v+ldnbfn6NrfYwXf1jkpR+VKI67ONalS61XC1WPsO6tH0fRDHzXZvzIM7PzepdwDSEJ8FGFhib08LP00YSBJjw86RJIiSo0VKHBNCVqwiCixGdaCOTZ/328wMH0KlTkJKowSGhpQM6QqUSiCh0FFU+6jNtnSOtt5IxljFtnCK6h/TqwGhT3Pn3V7VxTUkymNRRNYJkBjdr8D8NuBPN6la8WqZyGHhE0ZQ1+5Q+65w09EYQ5yUIIVE0QS8xPip4rqZWuUxydEGTbQpvPxpuS/NcvbZg3mkAISGbCkk2RuIKmK7h2OHYZwMNfHifwJbvfmaN7U4zuzTHu/Q9t9Oyp8uwPCvQeMi85LOhKYSSzbLj3F+nY9kZcq0bvY/9CoW9uIP8SIJUSrFihMjDgU60uzhtLIqm5BTqia9GVCKZXoexN0Kx3kRKCsWnpsdnoYrm6iaI7huXXyBnLCfCpeUXcwMISNdzAouJOoigaTVozjrRo+BXswAzTF4N6aDf0PbJ6OzljOaZfoi2yCoFCwR0h4LUZiP5KXFNSVFQRplsGkmCBJPYgkAQLVMm5pD4WUHlVLawt57kBxTEHq3HhH6Qw5iwsCUquaoyXBRGOHaBR98mPLDyuicHQ7DAxZM8pElAr+fzb50bZ+2iJXW9Js3V3ijXb4tz1gRZ2vTXDs98v8PA/T8yxNy4m2jbfRufOtwJQ6j/KwAvfu2Z9vd6xc6fOf/2/U/z336/wzDMXsFNfJhpBlTPm7P15at7sogz1xrnwsCp5Jp3Z3umSO7vU23wYt89loZ2q7535+5V9vxK5nMKaNSoHDri418fnd1FcU1J0rCD0eOoKemR+9lI1MUMCV4JITJm3UrptBvi+pFby+d4/jDFw4sIqm+fIGc/0qwoZqusAZ46YfPf/jF1USq2VPBxrnkKhPgyeaDB8usGLD5dYtzPBTXen2XlXmjd/sAVVE3zzL0euiQc6mmmjc+dbFr3dJfxkYedOnXe+I8qJExXc651IsACuKSlODjt4dkAyo5Fpnr8SSiKl0ZSdZxhS4k9LZ5ohFnQUtK2IoKhzvxwftGjUfDRD4NiS/mOvDzuWlJKhaekt3qQyOeTMSIRXisCHkTMWY/0WLz9e5i0fbeFDnw5jR5/9fmHRSVHRDG744O+QWbEZp1bCLIxy4uHPL2ofr3dEo5DLKmi6oNGQaPMIBqoK6bRCIiEQQK0uKZcDXlmUWtMgk1GIxQSKANeVVKqSel2iKNDcrOD7UCicsyXHYoJsVjA1FeA40N6mUKtLUimB70OxGJDJKBiGIJ/3sc5TKBJxQToj0HWB60iKJUmjIWfG3Nys0GhIhIB0KhRazIakWDw39qakIJ1RuGO3wdo1Gt3d6ozZYHjYnyU1ppoETSmBpgr8AEwzoFKReNdIhrmmpDhw3KRW9mlfGWHNDXEOPlOeFeQsBHSuidKxau4mNZ4nZzItmjsMIlFlTlpbNKGwcVcSTZ87ofp7GowP2Gx+Q5Jtu5s4/FyFevm1X4Uh8KFnbxWz5tO9Kc6aG+JMDtuLUhcgCKBa9Hj+wSLv/dUOkhltJu1xMZHp3kKiZQUApx/7KoN7H3rNFjZ4NZDLCT7y4TjvfXcUCYyPBfQNeGjn+dMMHd7ylggf+XCM5Z0qIBgY9PjGNxs88YSNM00asRjc/64Y73tvlGxWCUnRg69/3eQb32qQzSr8wWdTTOYD/vhPqjPEc8ftBr/z20l+77NVjh51+Zu/zvDc8w67bzNQFMG/fM3k3rdFWbdO4wtfqvNPXwhD0ZZ3qvz8x+Ps3m1gGOA48PgTNl/7msnIaEBLi8If/n6K4ZEAw4BtW3SiUcFk3udzf1/niSdD08Cb3xzhfe+NsXOHTlOT4H/8cXomyuI3Pl1iYDBcq2vXqPz
yLyXYskVH18JU51OnPf7uc3WO9VwbVrympFgcd+nZW6VzTZRb7s1y5IUqR1+ozqyP5k6DO96dI92szbGbeY5krN/Cqvu0Lo9w45vTPPmdqZnqMtG4wh3351h7QwJFmUuK9bLPcz8osG5Hgl1vzTA57PDjb+RnqaJCQDKr0b0pxshpi8L4a8OoMXjSYv8TZW57R5Z3/mI7taJHz97aLGeRooRScq7d4PTh+qzg7Y5VEZoyGkOnrDkOLkVh+kWiUMm7mAs4wK4UuTU72HL/r2MkUpQGj1Ma7FkixPOgaoK3vCXKz388zte/3mDvPoeVK1R+5qNxYrFz83jDBo3f+s0mTpxw+bO/qCElvOMdUX77N5OMjPgcORrO47veFOHX/mOCp59xeOJJE8uWdC5T6e31ZqQtVRVz1ogQ4fGz2+mkUgqdy1T+8fMmn/5Uko99NMZX/6XB+ITOu98V5cv/bP815IcAACAASURBVCIEfPzn4ty4U+cfP19ncjJg/TqNn//5OL4Pf/9/6jPtfuD9UR54wOL//fMahgGf/ESCT34iwf79LuWKZO9ehzN9Hv/xVxJsWK/z3z9bpTYdCTI+fm5OfvADMXbs0Pmrv65RKEhyOUEqpVCtXcN4x2vWMiGr//gbebbelmLZmii/8P+s4MBTFfIjDsm0yuY3NJFu1ZkYtMPMkFdc23uwzvGXamy7PcUHPtXJ2u0Jhk9bGBGF1dvirNuRYKTXIp6av7z+nh+W6J7OV77/l9q54Y4UgycaNGo+RkQh16HT3GmQyun8w3/rX1RSjCUVUs068aRKJK7Q1hVBnX7aq7clqFd8GnUfuxFQL/uU8u5MGFet5PHvXxwn3ayxfmeSX/psN/3HTCaGbDxHEk+pNHcYNC8zmBp1+MfPDMwixdVb4rzrkx2YNZ/RXov8iI1lBsQSCp1rY2y5tQmAfY+XGT69eI6WdNcmtrznU8SyHVRGTnPkX/+cev7SihX8tCCdFrxpk0HvaY+vfd1kfCLgeRXa21Xe/75zGtO990bxXMnffa7O0WMhAQ4O+vzJH6V5+31RjvXU0DR4//tiDA37/NlfVCmVrpwoXEfy8n6XHzxocd99UdIpwQPfaVCrB+zYoROPC7qWq9x5p8ED/9rgiSdtpISBQZ+bbtJ5050G3/m3BvZ0WNjEeMAXv1Sn90xIcB3tKh/7mRjLu1TKRz1GxwLGJwKmCpK6Kek57lIuzx1/PCGwbejp8Rge8XEWzwe1IK55mt+ZIyZf+qNB3vWJNrrWx7jnY60EgcS2AiaHHH7wj2PccEdqDikCjPbb/OvfjiIU6N4c543va0YGYYiMWfU58nyVx76Z5zf+dDXKPLxYLXo88Ncj5IcdbntHlpUbY6zfkUAo01vDeBKrHlAYdzAriyuKv+1nWrnv420YUQWhCFRVzOR3v+MX2rjnY63TZf7DZ/TnnzqNPe0hlwGcPlDn8783wLs+2c6WW5vYdnsKTQ/f7MH0M6iXPU7ud3Hs2ZJYrezj+5LujTHWbY/PpGlJKfF9qBU9fvSNSf79n8apFBbnvoWqk+neQjy3DN+xOPydv1gixHmQTCh0r9TY+5IzI+34PvT0uDjOOVLctlVnbNzn1Olzv8+ZPo+hYZ8dO3SEgNZWlfZ2lWeesa+KEAG8aTsiQK0a4HsCzwPLCovWGoZgzRqNZR0qP/9zcT7wvnBPJCEgkRCMjQdoGtjT5u+e4x6Fwrl5OTEZoKiCZPLynKrf+U6DTRt0/vIvMrz8sstDD1scOTo/gS4Wrjkp+p7kpR+V6D9WZ+ttKTq6oygq5EcdDj8b2vm23zl/DTYZQM/eGn/7f/WxbXeKju4wGLxe8Th9yOTUgRqBDw99aZxYk0phbO5rpJz3+N4/jPHiI0U23JSkoztCJKbguZLylMvoGZu+o3VK+bnkMDFs8/i38yiqYGrs8qTIgRMNnnuwOK8B/ZWYHHFmnEpnEQRhibHP/94A63Ykwko3rTqaEcZ9FidcBo6b9B0151T4OfRMhcKYw+ptcVqWGcSaVDRNwbEDynmX3kN1Th828RfR29e16x7Wv/Xj4f2c3ItVmrzIFT+dUBTQ9JAIzw+gt+3Znw0DajVmORNcF1xPYkwXFZnefPKKQllUdbbzUkpm5qCEGYfI+TNE10Mnzpe/YnLixOz1YpqS0dGAVCpstFoN8M6zzJytNC8uM7XqwEGP3/ndMvfeG+HWWw3++A9TPPJDm89/oc7o6LUxy1y3eor5EZcnHphbESXdrF20GERx3OWp7yxcTeXBL1y8NND4gM34wOV5cYdPWTxwanTWMUUzyG2+FbdeptJ3eNZ+G+fjwJMVDjxZmfe7s4hk28mu3Unx9H48ZzZBafEUuQ03E8m0UgeefeoIlcsIfB4+bS2qanwxLL/pXhRNZ6LneU489A94Vu269f16guNISqWA1lYFw4DGdFBELqugnqftjIwELF+ukMsq5KfCOZbNKmTSCqOjPlKGkp3jSFauVBfcfiUIQiIz9DBb6ixaWmf3dymYmgowTcnUVMBTT8+vx54lxUt63UrCXTkvXEib4RGfL3zR5Ac/sLj33iif/lSSY8c9Hnjg2kSUvD52e38tQVGIZtvRExmuNqFYNWJEmztR9bne98C1MScHsYpjNK3YSLx1+VX1da2gRuJsuPcTJFpX4DsWhd4DWOX8qz2s1yyKxYB9+1xuucXghm06uaygq0vlTW+KEI2cm08PP2KxfLnGO98Zpa1VobVV4d57Iqxdo/HIo6E9r1yRPP+8w/YbdO69J0JLi0I2K+joUGhpVlAUaJiSkRGfrVt0urs1slnB2jUqd99lzOrvUnDgoMvpXp8PvD/G+nUq2Ywgl1Po7lZZtkxZMJFiIQQSCgVJJi3oXKaSSQtyWTGrnXXrwrYzGYHjSk6e8hACYnOXzKJhaTuCy0TgWAw99QDhZtRXp36a4330PfLFeb2zgWtTGz6JVRgju+Hmq+rnWkGLJll/zy/Qtes+As+h98lvMPDC91/tYb2m0bAk33vcYs0ajc/8txT9/R6aJshPhVLfWTz/gsO3vmXy0Q/HuO/eCFKG2R//9r0Gzzxtz0y9L33ZpL1D5Xf+cxP5SZ9aXZLJKPz4xzZf+FKdWl3y8A9ttm3T+YPfTzE66tPUJJiYCGacIpeKclny//1Vjd/+rST/608zjI756Jogm1X47vcafOWrl17y/yyeetrmttsM/ugP04yMeASB4A/+sML4RIAQ8Ov/KcnaNRqT+TB2sXOZytNP2zz/wrXzuCyR4jRirV0kOlajGjEC38UpT1If68NrhGqgGomRXrUNI90CQH3sDNXB4zPEKBSV7PqbaBTGiLetRAY+1f6jJDrXoCcyVAeOYRXDdCmjKUdm3Y0oeuhcKvS8iFNZyDxwgYkrBJF0G/G2FejJDEII7HKe2shpPPOc6h7JtJPoWEW59wCx1hXE28IYQnO8n/pYH/IKd9GKZdtZ8YZ3AND//Hfpe/rb12XHttc1JJw46fE//7TK7t0GmbRCX7/PgYMOb7wzQn9/aKur1yX/9EWTAwddNm7UEM
DJUx4v7XNn5UZPTIbxh7feorOiS0VRBYVCwL59zoynds8eh8/+foWdOw3iMTjT53PkqMsdt0cYGPSwbcmX/9nk+LSd8Ic/tGdU695eny992aReD/s8dNjlM79X4Q23GLS1KjhOKImeTdOrViXffqBBvS5nkfzJkx5f+EKdwaHZtsgDB13+x59W2blDJxEXlMsB5nQguJTwxS+ZbNumkUkreD78+0MWL7zoMD5+7cK8lkgRSHSupWPXvfiuhd+oo0ZiJJetxmvUZ0hRBgGe0yAiBNn1u1A0nerQiXOkqGqhrbFWQigK0dwy4q1dqJE4RlOGWHMng09+E+m5SN/DdxrEch2kVm2hNnzyAqS4MBQtQsvW3URzHbj1CkJRyazZTqJjFWN7H8G3wlpykUwrLVtvR4slibd24bsWWjSJ9D3qY31X9Mz0WJK1d/8sQFj95vBTV0yuP43oH/DpH5htE/vGN2d/rtclTz3tLGi/O4tCIeDfH1rYXu77cOiwx6HDYfFYgUJAMKu/b37r3N+PP3Gurf5+n/7+c9+dDcMZGJzfnlevS77/g7m27N4zPr1n5l4TBHDggMuBA/N7iw4cdDlw8PrGD7/qpOh5ktEzFif31686ne1K0bR8PYqmM/Djr+I7FmJ6h7PAOTeewLWp9B2lMTFIctmaedsRQuCaVcb3PUrXnR8g3rqCvke+QLJzHS3b7kTVI3iei2tWKBx7Aa9eJtE5f1uXgsBzmDjwODIICDwHIQTZDbto2/Fmpo49P0OKEDpujFSO0RcfxDWrCKEgA/+KiCySamHz/b9G64absWtFTv7wi9Qm5m6AvoTXHnK0kxVt9MseXK5D0N8VQEHBIIrF5avji4GrIsWkEhactOTFqtsKUmqORlDDlSHRRESCtNaCU2ssWPH6YhAI0mobZX8SOV2WSKCgCwNX2shLLGppFcfJrNlOdsMu6iO9NKZG5q/5JwMC311wI3sZBNjlSXyrjlsroWg6TrWIUy+HmwadtxeBDHyCwL+6upsyIHAsos2d6Ik0qh4hmmkPt81UZ+eaB65FdfA49lWGysQy7WyaJkSnVqLnB59j/MjV17BbwvWBgoqOwatadfgiiJGkS6zjuNz3qvR/VaQYEFwy8QRziEQSVRKk1VZK/uVvi3j+GM6HLiK06isYc8/gy0sTu6sDPejxJjLrbiSzdgfmxACF43sxxy+8wc8cSImcTuCU0ifw3JnjILlg7MEVQIslab/pHqK5DpxaCd820WJJhFDmzPnAd/Hm2fXscpFs76Z1w80EnsvJR7+4RIivAyRI0c4KhBCz1qxOhA5WEhdJ6rLCKAP4hHM2RY5W0YmGgUmVYdlLjjYkkgLjSCQddGPTQEEQI4kuIpRknqxopSynKDCOToQ2lhMXTdRkmQmG8HDpYi0WJhnRiiRgXA5So0wry+kQK2kiw2ZxMx4uvfIIPt7MmFR0TFllnIFrIu1elBTjSooOYw2GiFL3y4w6p/HxaNG7aNdXMeKcxPZM4kqaVr2LiJLEDupElQRjTi81v0ib3k2LvoIz1oEZSdGWJhVvilY9PtOXIWJ0GuuIKHE86TBgH8WVNsuMtWjCIKYkMf0qo84pVKHTrneT0lrpMZ/DJ0AXEboiG2nVV5JUsxS8UcreBG16N2U/T80v0KZ3I5FMuOcIz3ca5I88S7nvCPHWLrLrd7F893sY3fMQteFLL2ku4RWOhmvrdEivvoFk13omXv4R1cETBL5LauVmEp1r5x3c1dr8jGSW9ff8IgBWdYr8yZeuqr0lXHtEiLFCrMeSJjVZYrlYi4uNisoysRoFhUk5QoYWVor19Mse4jTRLTYxJUcxGUdDIyCgSWQIkBTlBBJJRjRTo0KMOJJQ7e0SaynJPC2ik4as0S5WIqVkUo6QE+10yXUMcII20UUDkwk5SJwU3WIjPXIfZaaIyBiKUBmUJ5HT1cNjJOgSa5mS4zg0UNAuWSC7XFw0siiupBAIRpxTjLqn8HCQBEy5w9SDMpoIVUJVqKhCp+yNE1OaqPlFokqSAJ8Jtx9bmijiwhzsS4+SP86kO4A6vfsYgCHiaMLgVGMfw84JfDwc2WDU7Q0Nx9PR3660GXPOUPYmOWMdYNIdxJEWkoAmNYshYiTV7PR+EecgFBXpeziVKUqnDzD8zHdQo3ESHauu5JleNxhNOXzbnPaSV5G+RySVQzOuTRBXdtU2YtkOAE48/HnsauEiVyzhUhBpXUasazVqLH7xky+3bWIYRJhgiCnGyMsRBAKdCGlyjMozFBhnlD6ayJAgRTPLMKkywhlKTJJnlAu94AMC6rJMUU7iYjPFGAoKEeI0006Aj45BgE9aNBMlho/PlBxlijHG6UcnQoQoDhYWJh4ONcrUqSDPoz8dAwebIhN4XBsHzEUlxbI/ia4YNOvLSQRpJpxQxJazhhrCkw5OYOFKGx9vJqUnPO/irJ5Us7RqK6kFRVQ01OnhSXzqfmXO/g7zjUFOqwehWh1+V/Im6TBWk1DTKEKj5s/eiD67YRdC1XCrBWQQEG0JA6Wd6rkd5xQ9ghZLosdTKLqBFk0SzXbg2w28xuWppVosiWrEiDTlEKqKkWohUq+EbVl1kAGKEUWLJtCTGRRVR4s1Ec2247s2nllFBj5WYYz0mu2kV23DnBwi1txBatU2gjk7XV09WjfewqZ3/AqqbjDVe4DaeN+i9/HTiuTazaS27cIaGcAc7KUx3IdTzC9KeJOCEqbt4SGRuDhIJNr02jqrfgYESBmgoaMLA0teipPj3PoOpv/z8WfWpIKKQZSIiKNKHSRMMoyHiyTAxpq5XiIRF5DRLOoMyV7aWU5KbKQsC4wzcE2I8aKkGEifvDtCQk3Tpq+kokxRD0qoQkVBQSH89yzm2wlFFRoCBVWoCBQkQXil0BBCRUFFIompSTxc8u4QabVtVivyFbZDgUBFmx6Bio84j3wluojgSw9JgBmUCWRAVuug5hdn7CZnoegR0qtvQNUNgsAnsC0mDz1Npf/odGeCpuXradl2B0LT0eMpVCNKV9P7cM0akwefwCqMErh26DwBpOfN2BRlEBC4TpjSpKi0bLszjImMxBBCoWXr7WTX30hjcojJQ0/hmRXS3VvIbboFoeqokSjJ5euIZNrwzArj+36EVRilOnScaPMyMmt3kF61FbtaoHB8D5k122elH8rAJ3Bt5JWU8BIK7Zt3s+Htv4yRzFDoPcDR7/4VjcLoxa9dwiWhcuxlPLNKcu0Wmm+7G99qYI0MUD15GGtihMBx4ApNHwE+AtDQcbDQCfdpPkuOBlFcnHAtCRVXOrjSJkIMMbOmwi2pAgIUwtqOGvpMW+ev+vOFlAAfkxrjcoAqxTnnLKT+hgQpzus/RJUCJlUysoVOsZqKLMxqd7FwUVJMqjnajG4UBDW/iBXUUFDpNNYTV9JERLi9ZkMxsf0GPg5WUJ+10fbK+DbiWpZWJJpiUFGrZGkhTTOGiNJprGfc7aPqTdEUaaY7uo1GUJ2xPwaagmefJbJw75W4SNFurEIRGp3GBvLeIFW/gBNY1P0yKyNbmHKHmfKGk
UiK3hirotsYtOdujj117HlKp14Ow1Q4j0TOSlxSUh06gTk5OOdaKQN8u4H0PQYe+xqBF755Jw4+iZjOVzLH++n/4ZfxbBNkQP7QkwQTL1Ip+bO2Lw18D99uoGkSpdrD0BOnQULbMo0gkOTHfXw/wLfCt7jXqDG+9xEm9QgIgfRdAtchmDqEaJzznlvjp+h7ZPCKdtFLtq5gy3s/jR5LUjhzkEPf/jPs6uXHVC5hYbjlIuXDL1E9cRgjnSO+aj2J7vW0rXgPfqNOve8kZv9J3FIB32pwObZqiwY2Fu2soEqJnGjHw8XDpSQnWSZWUZSTZGimQoE6VSSwRmyhU66mQR0VlSnGqcsK7WIlLXRgECVGkiILRzM42JTJ0yFWYsgIAgUXhxIXTgO1MVHRaWU5Ng0qFIgQJ0UWDxcdAx/vmu0MeAnq8wTlxmzvsKpFGFdGGbJPEfgOsUQLkWiG8alTqFqEhrRAEWH4Ch4TxiSj/jD1xiiqFqVz5W6qxQHGpp5HUXT0SAIZQN0rc9p+GUU1UBQVx6uiaVEayYBSIw9CEE+2oxsJ6tVR+tyjSMdHBgGKqiOEgi89hpy5xKcpBhVvCkfOJQbpuXjehcXwwHNmCG8hnB8XGLjnAlhl4M0qkBA4JstafWp5B6s+d4JHo4LOTo8TR01kAG94Q5JkSuGbX6xim6+opjPPuFqyFlVVYtZA1WDjFsGpnjreApuHLQShqLRuuhU9lgTO2hGXCPGaIAgIrAaWNYw1Pkz1+CEyN+4mtXE78RVr8G68HXPgFNUTh6n3nUBeZL6ehYPFkDxFK8tJijRjcgCm1egRztAmu8iJttCbKweRBNSp0C+P0yKWEacJU1Zh2uusSJWUyNGQdQblCUxqM1JoQEBZKni4FOUENg2GZC8tLCMrWvHwKMhxQJKXozjT6rOPT16OzqjyJlVGZR8pkcPFpipLBPhoGKRFM770GJa9mFyboiOXHZIjhEpz+1Z8t4FZm6Rh5jGiKfRIAoBsy3pULYIRaaJWGaVaGiASy2JbZQB8z6JRn5oRi41oE4lUJ5FIirHBPaRyq4glWrDMAs7kCTQjTiSWCYVpRSeVWYGmx3DtGlJKNCOGY5WJJVqoFgfw/dkEoWHQqq8gq3XMS5bXA5tuMNiwRcesSV542mJZl8rajTqnTzhs3m7Q2aURbxL4HhzYY7NitUY6o3DymIvtSA6/bLPtxnBf7HRG4a774kRjghefbuDYsOMNEQJfMtjnMTnus31XlP17LCLRgJ1viPD29yd4+QWLMyc9VBWOH3FoaVPJtaoc2LNwwLxQNbp23QfA6IHHaZSuPHRqCZcGPZUlsXoDidUbMXKt2FPjFF56Gul5JNZspPVNb0eoKtXjBy+5zToV6lTmCJg+MEzvPIKnpEKBipzrSJtgiAm5cJ3MOmF66RjngvlH6ZvTxwhnZv4O8Bnm9Hmfgzn9OPiMzDvWxcdlk6KUAZoWRVE0guoYAHajTFO6C4BILINZHceINKEbIVH6noUx/fdsCFTVQNdjpLLdTAy/jBFJ4lgVKsV+QGI3ymh6PJQCfQfbruLYVRrmFEIoZFvXYyoaupHE9+e+PX1cCt4YJX8CK3h1yllt2W5g1iW9J10sM2B8GO66RyUWU+jq1lBV6FqpUS4G5FpU8uM+G7cYGBGB1Zg9C2xbcuKoQ1e3xrabIowN+axYpfHgt+v/P3vvGSTHmZ95/t40lVnete9GN7w3BO2QHJLjZ6gxWkkjs1qZO+lkQlqdLnS6uK8XcXEf9nb3g+5WWu1pNLsysbKj0RiKM8NxdEMLOvgG0N6Xt+kz70MWutHobpgGwAY5fCKA6K7Kyvet6sp//u3zUKt4WBZIMmRyEhOXApYWPOZnXE69aVMpeXz0yRi1is/wToVqeeMco6LH2fvpX0NL5XCtNsULr+O0r02F9gE2Bymio/cPkdx7hGj/MEgSxuwElRMvYlcKuO1mmMIZPUn+wSdIHbqPxuipD2Qe7hA24SlK1MpjJLMjZPK7WJp9E01Po2pJ1EgYZrmu2fHYgtBr1FNIcgRZiSKEQI9mkGWVtlYglR3BdQxsO3TRA9/DdQw81wQhULUEaiSOFstiNJdw7TaxRA+qGsO26thmnVRuB4X5d1jvNhIQYAWtd+UOsxGee8bgsU9EefDDOrWqR7Xsr9KhrpR9umo+9VrIDNKo++vKPQoBO/eofOiJkPXYdQIWZz1KSx4zkyvJyWY9lJb1/fDnVsOntORRq/oszLoMbJPJdcm8/Nz6fIuRRJZ9n/51+o48jmu1ufDdv2T+5HO3+VP5AJeROfYQmXs+hF0pUDt9gualszj16priitdqYMxNofcNcVV94z0LXU0R1XLU27N4/t2hkbQJoyhQ1DBkNdrlzvSEoFWfR1Y06pUpLKPWodp3IQhw7DZCGMiyghASplENG4kDn2Z9joiWolaewPddWo2F5RBYIFAUnUZtBllWAYHRKhDRk+EerDqt5gLx9ABm6+7k8JNl6B+SscyAdFYik5UZGFLYsSfCsQc8snmJerVjIAOIaIKj92ns3BvhyL0uo6ft8Pd9EQ6Ou8RiAlmBSslH00TYlN1ZSwgYHFbYdyhCd69MueBRXPLQdImHHtd5/UcmJ0/YfP7n4sxNu1jm2qtK6fAj9h/7CL7nMvbc3zPz2r+8ex/YjyHMpVkKz3+L9vQYXuva7V1WaZHKW6+EZITvA2QTI2zrup+Tk/+MYd/+SvJmcNNG0fddauVxrrxV1Uor+QCzHSbibWsl1LIWaqvOYc2v5EMce/XcdLu5krcKAp92c5F2c3H5MdcxKC2cBkCLZknndlBaOL25dpN3AX4ApYKHY8OZd2zmpl3yXRL//DdNGnWf0dMB9ZrPzKSLYwc4TkAmJ7M451Etexhtn7dftzh/2qZc9Gk1fYoFD8sMsMyARt1nerLT+hOE+hrPfLOF50K95tNqBnz3my1ULdS/dpyAaFzi1Jvr5xIjiSw9Bx8BYPy5v2f6A37EO4725CVu1O2zi4vYxcXrH/geQbk5QduqYDl3D1P7Lcw+b/2dyjbrFObewfO2hl3nRhD4MDftMTe9EgrNz3rMz64Oja7M71VKqw38hbPOVceuLibVKlc+5695fmo8DK3z3RIf/2yc0dP2mjUuY/9P/CayqmHWS5TH3rluxf0D3DqUZKhR5DZqXH1dSVoUORrDqVU2nUOUhIoeSSJLETzfwXIaaGoCxzNx3DaKpKFFUph2Dc+/HKVJ6JEUIDDt6nJhVJZUNDWFIkXwAx/bbWK7K46NEBJRNY3jW/i+ix4JByZ838Gwq/hB+F1U5RhRLQOA3+knXg/XWy98fwqamkCRNUDg+y6228bxNseys+XUYbeCIPDw3A84/G4UpYLP1/524ztyduQw8e6QOGD2xLepTN64JswH2Dyy9z4CQlB84TtrWm2iQ9vJHn+E+af+Fs+4HhvVWshShIHcUfqzR0NqO9eg2pqmO7WX2fKbzJTeIB0fZM/AJzg38zTVVtiLqyg6u/qeQAiJczNP43gmqhxlMH+c7tQehJARQqJtFZlY
epmGERZdI3KMPQOf6PwuyMS3ocoaQeBzavrrtK2wop2M9jDc/RAxLYsqx3jlwpcx7dWTZqoc66y3e3m9lllksvASDWNx+f31ZQ7RmzmILIVteQLBYu0cU4VXlo3wzeCWjaIey5Pr2oseywMBRqtIuTiKZVTIde8jk99DtXSRciFsh5Flje7+oyTTQ9QqEyzNvbnueZOZYfoG78exm8xNvbwqHL8MIcmkMsOkMiNEtCRBEGAaFaqli50wfH1vNpEeorvvKEarwMLMa+FjqUEy+d1EtCQQYJk16tUpWvU5fN8loqXoGbgHLZqlUhylvHR2w89EjSToHbwPPZqhXBilXNj42LsF2ZFDHPj87xDN9NBYGKdw/tWt3tIHIJyMkqMxbloApYNktJfhrgcp1i+xWDuDIkfpzx5a9tJuFAKJ3swButP7mCm+Ttsqo8gaI90fYqT7IUbnnsF2VzyzvswhCvVRxpdewPc9VCWK5azkS6utGVpWmYHcEQZz915jvb1MF1/HWF7vYYa7H2J07rs4bhs9kmZb132Um5MsVsPrTFOTOF570ym1TRtFSVLo6jvK4MgjRONdgCAIfHzfpWfwXqbHniXfvZ/ugWO4jrFsFCVZJZPfRXf/PSCkDY1iNNZF//BDGK0iS/NvX2UUBbFED4Mjj3b6IvXOGGYoadY//BBLc2+xMP3KmpxleO48vYP30qjNUFw4Rd+2B+kbuh81EkdI4R0pCHwK8yeZIPm2KQAAIABJREFUGP02vt3Acy30WI6+oQfQoxlq5fGwQr4OUpltDIw8jCTk5fd9NyM7cojDX/xDouluGgvjvPMP/zetdaZ33k2o0SQHv/BviWb7WDj5LFOvfPN9F8pLehQhK6EshZBQ4skVujkIGdz7hsJk8SYucCFkcokdOJ7BXOWtZe9KEhKp6MBNnUuPpOhO7aVpLFJvz+MFDpbbpNKaoj97hITeQ7k5sXy8F7jMlt6kvUHxxA9cLKeO7bbXDZ31SJru9F4a7QUa7Tm8wMVym1RbU/RlD5PQu6k0JzvT1+H1Gp6vRa19a3rjmzSKgnR+N9v3fhpNS1GvTlIunMexW2h6ikzXHnbs/TSKoiPETeoo3gD0WI5dBz5HOrcLo7VEcfEU7VYBSSiksiNku/YwsutjIf3Q2A/xNyj1R7QkQzsfp7vvKKZRprR0Bs+1USNxYvFuWo0FXCc0qp5nUS6cJ99zkHiij1R2hErh/JpzSrJKKruj87lMdYpSdy8kJcLBn/y9FYP4j/9+yw0iwK6P/iI9Bx9GCIlE7wiNxQlKF7eGdPROIXP0IeLb96B19yMkidjQ9lUExkJRkBSV2snXwvnnm4QQEjEth+nUVxUyDLuK692c/G1ESRCNZEhEe0jFVgyqEALbaXI1gWfLLGJ7m5cg1dTOelo36fiKkqUQErbTXCabadtVZkonGMgdIx0fpFi/SKU5ScNY3FToDJs0irISob/jMTWqM1w6+3Wa9cvs2YLi0hl27f8c8WQv3m1nbBH09N9DJr+HdnOJsXP/QqV4gcuh8uLcGwzv+igDI4/QN3Q/5cJ5GrX1qfJj8R5kOcLS/FvMT72Mba2494oaCwfSr7hD1ysTGO0SqfQwmexOaqWxNQZX0zOkMiMgBJXi6JoJm7sNvYc+TCSRBWD6tadp3QWyAqnBPWSGDyxTwkmysjxH/n5C89IZ3FaD9JH7kRSV9sw4gbeSIw88D6dWpjV+/obH+q6EIDQiV2eRguD65NBXUvJBaPwQgoXKaYqNi6uuCz/wMKzVHqHnO7fI8hMSQixUT3XWu4Jo4or1fN9htvwmtfYcXanddKX20Jc5xFThFearpzYVQm/KKOrRPKnsCL7vsjh3glbjyhaBgFZjgcLCOyRSAwjp9tZy1Eic7r4jBIFHYf4tauUxrvyre65JYf4dMrldxJN95HsObGgUJVmhUZtmbvIlHHt1AcJ11lauXMegUjhPKj1MKrcdPZZb1S4Egli8m0SqH9c1Ka/jSd5NELJC3+EPo+pxiqOvUxx9fau3RKJ3hEP/6vdJ9m7Hc2wkRQ0vyPch7NISdmkJORpDimiUX3tutfG7ReqwIAiwnAapaB+qHF2u2qpKDFlakavw/NBxufIxWVLRlASWG14Xjmdguy2EENTbc3e80drxDCy3BQhqrXn8a7DoB4FPw5inaS6xVDvH9u6HGeq6j6X66E17xHADJLPrIZEaQJY1LKNKqz5PEFxVAQ58WvV5LLO6/gluAcn0EBE9hWXWaNRmwgbxq2AalXAMUFJIpAc2DOEdu02leHGNQbwWSotnsO0midRgONp4xQUryyrZrr1IcoRa6RKmcXc0o64HRYuz++O/TG7nMTzbpDJ5CrO2tbPNsqpx+Kf+gGTvdozKIme+9v+syrG9X9GeHqM9PRZ6iUGw8u8W4QceleYkUS1LV2oP0UiGmJanK7mLiLIydmvaNYLAJ5fYQTSSIRrJkE/uJKGv0PcZVpVSY4x8chfZxA50NYWmpohrXST03lVe5Y1AklQUWQ8rxkioso4iactE1IZVodwYoyu1m1xyO7qaQl9nPVWOkolvIxrJElFi+L7baSva/I10c55iLBvG9nYTx1k/b+DYLVzn5q309RCNdyNJ4WRMOrcTPZpbc4yQ5JBEQghkWUdRNJx1PD/PNW/acFtmjUrhPP3DD5HvOUBp6Qxu5zNQIwmy3XsIfJfy0jk0TyUvDWMH4edQC0rERQo7sLAxiIsUbuBi0iImkiREmoCAql/AwSZKAkWo6CKGF3jUgzIRoSECQYs6CipJkaUS3JwxkyM6uz72i4w8/JP4nsfEi19h/Pl/vKlz3Ankdx8nmg0vxPHn/4HK1Bnuhn7YOw1rcfYOnTmg2ppmvnKKgdwxulK78X0Hx7NWeVCmU2Oxeob+7BHS8UE838H3HWrGiqCcH7jMlt5CkXR29j623AMoSyq11iyXFp+74VBVCJnhrgeJRtIkoj1ElBg7eh7FdlvU2/Ms1M7g+w4zpTeRZY2dvY+vWq/ammFs4XkCfKJalv2Dn8HzHVzPRJJCXsiZ0onlnsubxeZyinIERCiGFKzjqUE4+eLfAR1gWdFASOjRLMO7Pnrd44UQG4bwQUed72bgeTbl4ijdA/eEBRU9s2wUk5ltRLQU7VaBdmOJAbGDVlAnKbIkpSxNt0aXNEDNL+EEFjmpFyNo4fkOPdI2jKCJToyIpDPrXyIl5ciILmpBCa/DHacTIyN3MeadISlyZKVuKjcp/LXjsS+y7cHPAjDz2r8w/tw/3NTr7wS69j7Avid/EzWapDxxksrk6a3e0l0BoSho3QNYS3Mr/J43AcczmFx6mWpzioiawHFbuJ5NTPvYquOmiydoGItoagLfd2kYi8gdb+5yqGy7TcaXXqDUHENXU+H53TZNc2k5t+54FjOlN/B95xrCcQFmp9jTMBaZ5+TyM5bbXPaSbbfJ+OILlGNjaFes1zCXlsPpllniwvz3iCgJJKHg+TaGXaFhLL67LTmXF7s897whbiEVJEkbVK2DUGbAMmtUiqM49rW71i2jsuHEy+a
EbwJa9Tka1UkyuV3kew7QasyDEOR7DnYIM8bx200U0cu8N05CpIkFyVVnEVf8HxVJuqVB6kEJCRk7WNmvSZslf3pZtbAV1MnQRVwkyUrdlPyFm9q9Ek3Qve8hJFlh5vVvcfH7f41nb75KeDsgKRHyu+4hmumhuTTFqa/8R8xaET3Tc/0Xv88hx5LkHnyCxWe+itfe3Cic47UpNi4u/x7Xu7n64vR8i1LjEteD65mUG2MbPu8HznXPEwQ+C9Ubu+m5nknpGuuF+974+c1gU0bRsdsEQciAI8uRdY+RFW1V4vZKLBMYXMNqqpH1RXxsu0ng+zhWk4Xp12jUrt8+crvnoi2zSqVwgVRmO7meA8xO/ghNT5NIDeBYTarlS3iuBYqgUwO8vBMg7BOTApkIGgZNAnyaQZUx9xRehz7+MtzAXmW8bUyMoEVe6kcREZrBjYf/eqaH/U/+BomeEaxmhfL4yVATZgshKRG2f/hn2PbgZ/F9j8rEScza3UnucVshhZMXge+FjsUGzoWsR1HiyfdtseluxKaMotEqEgQ+mp7pTICshaal1jVsQeAvS21KirbBCoJYom/dZ1r1eTzPJqKn0KIZ6tWb1Ga+DQgCn1plHKNdQtczpLIj6HoGVUvQqs9Tr0ziYeAEJsPynlBFRoQykUbQIit6iEoJNBGFAIyghR2YDMq7cAOHRlChFoTEGmvVsgNaQZ0RaT8lf+GGKdkjiSwHPvvbdO97EID67AUWTj57Oz+WTWH4Q59n5xM/jyQrzL75XS488xdbvaV3BYldB1GTaSpv/oj49r3ERnave5wST6AkUu/y7n68sSmj2GzM49hN9GiWdH4ntcp46Bl1IElKmF/T02te63v2crU3FsujqNHlnNxlRGNdJNPb1l271Vyk3Vwgk99Nvucgtco4trk++enlyZQ7gVZjgWZ9hu6+Y+S693cSvAq1ysTy+5vxLhIVCSQhoxEDAor+HIYIn694i9iBhYvNjHeRmEgAAqsjmVD1C2vEewDswMTDpRlUbzgFEM30LBtEs15k4sV/uj0fxC1AyAr9R55AkhXm33mWC9/5b6GOzY8B1EwOrbsPEMSGd5E+fF9ICHFV1VmoEaTI+tHYZmHaNUbnvotp165/8I8hNhc+Ww0K828ztONx+gYfwGpXKBfO4/sukqySzu2ke+CejlG6SlPEd2nW57CtBtF4F9t2fISF2RN4rokQEnosx+DIhzcMn33PZmbiBeKJPvK9h4CA+elXQ7mDIAARsnnrsRzJ9BCLMycw2rc/HPN9h+LiaXI9B8h17SUIfFzXoLS4kisxaWMGbXTidIl+AFxsasHa/VgYy8bwyseuhopGRuqmHTQxghvLMcW7hjj4+d8FwKgVOPWV/0hlYmsLGWosxZ5P/Arx7m04Zovy+DvYrdvfwnW3ovb2KyFFeuemXTt1gsobLxK4q4spWlcv3Y8/eVvX9nybamvrm/TvVmzKKPq+w8L0a8TiPeS697HzwOfoHXoA12mhRhJoeiY0kp5DLNG75vWV0kWKiyfpH3qQwR2Pke87hGVUkeUI0Xg3nmsyP/UKQzufWHf9WukS4xe+zdD2x+juP0a+N3y95zlIsoqmJZEVHcdudhi57wxqpTGMZoFUdjsApcXTtFtrK8FeECqn+behvSQhUmgiSsGfWRb6uR72fPJXSfbvpLk0xdlv/gmVia1lv1Fjafb/xG/Sf/QJXKvNxe/9FbMnvr2le3q34dsrkZVdKeKZBm6zHtKlXwGnrq0xlB/gzmLT4yamUWHiwrdpt5bI9xwkluiBIMBoF5mdfJFK4Twjez4BrDWKrt1idvx5PMck33sIPZpF17M4TlikKMy/Tbu5RO/Q/euu7fsuhbm3MdsluvqOks7uQI9lkSSVwHexrTq1ygTV0qU70kB+GZ5nUyqcJZ3bQRAElJbOrtuG5GCz6N+eeeJKUKDibSwreTW69txPom9H+NrJ01tuECVZZc8nfpn+o+ENz2k3mH3jmS3d01ajeeFU2Ljtr031eO0mlTd/hGffvZyhVyIqkugiRsV/7xLh3hLJbLu5xPSlH7A4cwJZiRAEIXGCbTU6ZBAbd7mbRoXpsR+yOHsCSY4gREgO6djtzoid4ORrf44AzHYZ0Zl/9Z3QO/J9h1p5nGZ9HjUSR5YjHd3mACTwXAur3ZE9kGTkSKiN7BktKsVRTr725wSBf8tTJ44VVm+NViGshN8tDOBConvfAxz43O+gp/K0ijOMP7/1/YiRZJauvQ8A4BgNzj39/73v2G9uFp6xcR7Vty0aoydvy4TLZQgECmGeMiDAxyMgQEbGxUGEpcHOz+GxQgi8wMXDRSCQUQgJvkJJUx8PCRkIsILV70dBRRIyfuDjdqIbCRlFhBIjbmDfMQ3nzeCWB5M9z14/Z6foXK9RMXztxjrC7eZKD56W70VJpGlNjq4+h2uupvASgmj/CCJQCZrhudVEitS+Y2hd/cw9/Te4jrGmuLMZSJJMrns/QeBTKV7AMmvIQiUmp8ICjxA4voUXOAT4RKQYrm+hywkCAtpujYgURRZh65LXYSC2/TaqFH5+tr+5woOWzHLoC79HJJEh8H0qE6cx7wKJ0r2f+jW0ZBajusT5p/8s5G28jRf8exFCksPEykbDDrf588lKvXTJg3i4RITGojuNi023vI1LzlskpAzd8iBjzknSUjc5uQ9FqJh+i3l3HEWoDCq7l3tni94sdb9EXEozqOzGCtqMO2FEEhVJeuRtqEIjwGfBm6Dl1+hXdpCQsgSBz5I3TdUvcLdML91VzNtCUYkNbieSzuM7No2xM/iWSSTXQ7R/GKtSXDluYAQlnsJtN1BiScziPFZhHr2rH72rD2NhJVx16hVqZ9+k+5HulbUkmdi2XUQyeXzXoXnpDJ7ZJpLvQdFjqJ09tGfGNmQ8TqQGSGWHsa0G1fIlfNcio/aRVfux/DZptZuqs0TLq+D4NgP6bhpumUjH4EWETkrtxg882l4dN7BJKXkWrDFy6gC2b1LepFHsO/wYih7Oty6cfJbR73x5U+e5nchuP0yybztCSBRHX2Pp7EtbvaW7AtHB7Wi9A1TffoXAubNes4RMVu6j5M3RDKrsUA+HU1+B6Hh6oScZ+ooK/coOrMDEDkzSchdlfxEvcFGFxrw71jFmIRp+mYI3Q1rqWj5PVu4lKeeoeQUSUoaM1E3LrxEROm2/Tsmb7xQM7w6DCJskhLhTiKRzxLftwTVaq5LOvm0i6zHiQzsBkCIa8eHdKMk06f3Hw+e27QZJwrNN1EwOvXfommsFgO/aOPUKkVSWWOfcer6P1P7jeEYTt1FbNVolhNT5J6NHcwxs/zCqGqNWHqdemQx1LaQ4NXeRkjOLfwVRhgAUoZFSu9DlJKrQiEhRJCSqziJFe4qqs0gARKUkkpBpuht70dfC4H2fZucTv4CQZObf/gHnv/3lLW/Szgwf5ODnf5dYfpDm0hTTr31rS/dzNyE6tJ349r0I+fZzj14NCRmBwMXBC9zluXxYiesuG0cZBQkZw29Q80pMO6PL2ul2YK6avFoPYeitYv
sGLb/GgjtByZsHYMGdwA0cBtXdZKWeaw5yvNu4qzxFz2jjOxZaVx+tqYsEHaPothpYlQJ6V//ysb5lYixMo0QTWKUF9N4hhCTj1Ks4tfJ115LUCNG+bciRKJFM13KuEsCplmhNXVplENVInJ6B4+jRHLISIZEaJBbvpt0qMDv5Io7dRHTyK5oUx+/kFgN8NClORIrh42F6TRzfpu1VMb0m0WjyCuMZULHnyEUGsHxj1Rf2RqHGkuR33YMaTVAee5uzT/3p1hvEkUMc/eIfoqe7aS5N8s4//HuaixNbuqe7DV67uW6h5boQAjmWwDeNG5qN9nBwMElImfAmLhLUKOEGNoqIkJAypOU8EhIONu2gAQJsv93JP17+Xq/8fxmaiKKJKKrQ0EUMKzBpBw1UoeFghznJzsyyhEwjqKD6EaJSgqq/tMmx29uPu8pTdI0mpRPP0Rw7Q+bQ/Wj5tZXrywiCgMDzCHxvJeVyE6NQkUweLdtD4ZXv0p6bWE1i6dhr8jiSpJLKjNA//BC9A/eh6Wnq1WkmLnyHRjXs+QrwqTtFBBJxOYOPT8MtoUlRFKFQseco2bNIQiKh5JClCA23hHOF8Wt5NTQpRs25+fyfGkuy78nfpO/wY7iWQeH8a1tuEAH2P/kb6OluarMXPjCI68BcmEbSdOT4+tNh14JQVLoe/Ci5ez9MbHg3kh695vEBAQvuBBIyaakLj9BIGUGLir9AjzyM6RtU/SIBPjPOKDIKvcp2UlJXWFgJbOp+CfcqwoeklEMTMbzAJSv3ISFR9hao+0W65EG65IHl/HlKztMjb0MIibK3uGxsLyOdldi2XUHZwG3r6ZN54tMxvvDzCT76mRiDwwo3yV62Ie6Yp+i6JnOTL1JcOEmjdmPUSGoyQ2r/8bDKbJv4TuieJ3YcILnzEEoiRWr/PRjzGzWeClJ7jxIf3kPgB3jtJo3xs+j5PhK7DqB19ZE+dD/t6Ut4poGQZfL3PY6azGDXr12FdpwWc1MvUSmOIoSE6xjhdE1rdXuM5beYNc+hCp1h+RBNt0zTXe25tr2VSQLDW5nG0aQY+cgQhtdcDlNuFEKS2f2xX2bgWMgc5BhNZu6C3r++w4+jp8NcbuH8qx8YxHVgzE2h9w+Tf+gjtMbO47Yay6OwAPg+ZmF+XU8y8Fya4+fRewdJ7jlMcu8RjLkp2tOXcBvrt6NZgcG0ex4ZhSFlb7gEHvPuWukMB4sZd3TN40VvrQ5K0Zul6K291gveDIWrjp93NyZxkBV49GNRPvNTCb711RbPfL2Jc4X9Hdml8su/neaRj0ZJZyUaNZ/Tb1n8539f4eLZW+ffvGNG0fdsStdQvFsPbrtJ49LpsD3HCfN9AObSLE69DELCtwzcdoPa2TfwbQunUcG3TOzyIoFrY8xPYpUXIQDPaoMfYDcq1EdP0hw7h2eZuEaLwHUpvPI9JFnBd5xlMtPW1AUQ0uovJSFNWq081mH6voH3EljMGDcnWuX4FhV7Hie4+WR7JJGhe9/lVpcm57/1Jbw7wGd5M5BVna59DxCJpylPnGL2xHe2dD93K5L7jpLccxglmSa2bReB66yOXGyT2a/+5foFP9+nNXGe9vQllHiCSL6X2NBOUnuPYBUXqI+exCoubFjZ9nBX5b7vBsTjEg9/JMYDj+q89ANjVVCdSEn8/P+Y4lNfiOM6AYVFD10XPPRYFMcJ+D/+lyKmcWth+F2VUwxcB7u0tunTbdZwm6vnNF031FNZ7lvsNLc663h8vmlgm2tbcJzq2kKGt85xm0FAgOXfXOjq42H4jesfeBWiuX4OfuHfoqXyGNUCo9/6UljZ3cJWF0WLseujv0jf4cfwHIvK+DtYjc0Vjt7vMOemKF2jOTvw3FUTMFdDKCpqOouW60Hv24aazmEW5nBbDXL3fZj62TdpTaz19jxcZt2L3E2VX4BoTLD3oEqr4XP2pMWV5Ot7Dqh8+ifjOE7AX/xxjaf+scmu/Sq/+79nOXJc4/hDOi/98Nau4bvKKH6AzaH/yOPkdx7DalQ499SfUjj/ypbuR0gKOx7/OUYe+VcATLz0Ncae/bst3dPdDKu4EHpzm4BQFPIPPBG2ljk27blJKm++GHZvCIHXbhLbtmtdowisKy+6iV2QjvZjOLVlHZhbgRIR5HsU5qYdapXV+/v4T8SJJQTPP2Pw9FeblAoelZLHy88Z/PS/SXLgaOQDo3izEIoEfkDg3113x80iPbSPviNPEAQBZm1pyw0iwI7HfobhD32ewPeYfu1bjD3392vSER/gNiEAp1GnNXkBu1JYPR0TBJhLc1d0VnQU+oLLtWOf1Vyfq38WQkBwpeG83Dgjrnh9+Kjn22u0mgRSR4o94LI3Gj4W6rNvZJAlCXRd0KgH2NbKdZrvljn+IZ1mI+ClZw1KhXA934epMQffh66eWzdpP1ZGUcvHOPR7j1F4fZqZp88ReHfJSN4mkd62n6Nf/N+IZntpFWc48/U/3uotEUlkyO26B1nVmH/7h1x45r9tObP3ewGyHkOOJxGKsqZjLwh8rOLi+oUW38eYHceuXFnwE8ixeJhzr5WXU0p9qQNkooNYbgtViTJRfJnu5G5Mp06lPUVvcj+ub1FsjdOT2E06OoAkZBYbo1TaU+TjIyS0blQ5iutbLNTPYrttepJ76U3tZbz4Mo4XerxJrZee5B4UWaNpFVisn0cSMgOZo+hKEte3mKqcWNezDHywrABNE1xu3RQCHv14lJ5+hUvnbN561VyVJjWNsKEnot96v+OPlVH0bY/quSWMufoaSrP3IvY/+ZtEs73U5y9x+qt/RGPh9tKy3yz0dDf7P/vb5LYfwW7VKF54/QODeAOQ40m6Hvk4iT2HkbVoWGj0XGQtiu85mPPTzP7TX+CZa6ebJE2j5/EnmfnaXy4/JmSZ3L2P0hwfxZgdhyBAkXS6Ejs5t/AMCb2HwcxRhBCoso7jGUBIuRfgoykJ+tIHWayfQ1OS9KX2U2lPIQmVqJrhUvF5nCuEr5Ya50nq3Ugd1UxZqAxlj9GyK5huk2xsmLqxgBe4ROQYxdYYNWOus+5a2FbAwqxL/5BMV6/M9IRL74DMRz8TR1XhjZdNJi+trjLHEhKSAMe+9ev6x8ooOg2Lsb99c6u3cVvQe+jD6JnLrS6vbblBBEj27aRn/0P4rsPF7/018+/8cKu39J5AfGQ3sW27qLz+PJFcD0o0TuPCKdRMnsSu/dTPvrXcnnY1lmUKrujRvdzzKKSVx1RZCxXvfBvLaeCuo3QnddqWI3K0Y+AEltukYa30zLbt8rJO9EaQ5QiyFAECPN+m0LiA6TbwfIdSa4yE1kNS62Gm+vaySt+VaLXCFpvP/1yCL/5KisFhg8P3ahw+HqFc9PnB0+01TnNvv4IkCRq1W4/+brtRVFM6Q5/ZR/7YIL7rU3x9Gt/2SO7Mcelv3gQBu/71vZjFJpNfO41vhR9w/vggQ0/uZ+zv3qJxqQQCor1J+h7bSe5IP0KRaE5WmPv+BeqXStDJCfY8sp2u+4aY/Npp+h7dTuZgKGMw9/0LLLwwRuD46D0JdvzMU
ZI78yixCFNfP8XMM6PL5wBW1nt8F7nDfSvrfe8C9bGV9dSkRv9Hd5M7OoAaj+AaDrXRAvM/uIixePOV482ge/+H2P/Z3yISS7F07hVmXn/6XVn3Woh3D7P7478EgNWs3hW5zfcKItkurNIS1XdeJbX/HiK5bhqjJwl8H89so/cNUT/zZkgvdgUSuw+ROfwA0YERtv30ry0/LiQZu1bGqa10YtheG0nIxCJZYpEsiqSFapa+i6YkiHY0le32FJbbxHJbNK0CtttaVZteG2EJVDmKJDo6zkLF9Uwst4lp16kaMyAErm8jEKH36DToTx8mHslRXYchqN0M+N5TbQ4f1/jwx2Pc97COHg0Jq//lK3XGRlcb9ERSsGOPih8ETE/cZX2KQpbY8cVj9D22g8UXxrGqBrlj/WT29dKaqSIpEkiCxLYMkiKtvpOldJI7QqMFoOVi7PufPkQkE6V4YprA8+m6d4jDv/847/yHH9CcCP/gkUyU7vu2oURVPNOl+MY0ej6OZ7oEbkcqsWIw861zpPZ0s/83PoTWFUewuhFBy8XZ/xsfQk3pFN+YWV7v0O8/zsn/8AOak+F62z53kMFP7GXhuUtYZQO9K46WiyHr747TLUei5HYcRUtkMetFznzt/91yxup49zBHf/YPSfRup12a4/TX/xNW49Yo2X6cEE5nueAH+LaFrMcQiorfbmItzZHccxghr/1+tcbPYVeK9Dz2JMWXv7tyPsfBadbwrZUQ1/Md5utnGMocw/EsLpOGlVoT9KUO0Jc+SMMqYLktbLfFfO00/alDCCEotsYptyawvXDU78pxPEWK0Jc6iCJFyMaGCAKfUmuc6cob9KUOkokN0rLLLNbPo0gRBtNHkSUFy22t8kBXfx7wxksGf/Lv4Au/kKS7V8ZoB7z+I4Ov/GWDq+4NDG1XSaYlpsddTp64dd7J23olxwZSdD80zMIL44z+11fBD4gPZ7j//7x5OvXckX6SO3Kc/qPnKb8TinJXTi9y5A+eoOe+KO4bAAAgAElEQVShkWWjCKFhNBYajP39W3jmWtfedzwa42Vcw8FtrdMYLSB3tJ/ESI7Tf/Qc5ZPh0Hr19CKH/+AJeh4aXjaKye05zEKL8a+8g9vsnEsS70qrl5Akdjz2RUYe/gIAi6dewDE3J3t5OzH80OdI9u2kVZzl7Df+mMr4nWM7fz/CbdSID+9CqCpuo4aazhAf2YMxN4neO4jYYNYt8DycaonqOy9jLqydMLka5dYE5dYEupJiJB82+rfsEpeKL6w5ttKeotJePTlWM9ZOq7i+xWT51TWPm06didLLVx1rcqn4/HX3CeA48ML3DN561STbJdOs+9Qq/rrj4eWix999uY5tBWtyjZvBjRtFWUaJxnGbDdRsDiWRxJyfJbiiszK+LYOsKVROzi+Hm3bFoHahiBq/CfEdAZlDfej5OLt+8TgjP3U43GxURcvGiA+uFsSyqgaV0wvrGsQbW0+QPdSH3hVn17+5lxHDWbVe7Ir1iq9Ps+dXH+DQ7z3G0ksTFE/M4DStd8Uojjzy04w8/JMEvs/sG9/h0g//dlMC6bcTXXvuI7/7OEEQ0FyapPyBQbxpmAvT6APDSLKCWZjDrhTpevQTuO0WaipDY/TUKsKSKxF4Ls2xtZNTSjKDb5n49vpTTUEQvCdoLJuNgGbj2t/xpXmPpfnbV9C7YaOoJFKkj95H7c1Xydz/KL5jo6SzNE6tFC6UqIqQBc4V3ljg+bhN67pGUYgrKGmFIJLSsWsGtXMFXHPF8JbemqMxvnqW2LNc7Prm3WYBRFI6VnWd9d6cozG+Momx8PwYds1k8JN72fkLxxn5ycNMfeMMiz8a37xRvgFE4mnyu48jR3TmTz7L+W9/ORxj3EIkekY4/NN/QCSepjp1lvPf+tKW7ue9Cqu4ROHZfwmnVnyf4gvfJrn/HiLpHLVTr9O6dPameRZTe49izE9izK2VALbcBuOll3D92y9xIJCQhLzMhvNexA0bRSEEkqYT270fc2EGt1pBH1gtQ+q2HfACtMwKU4eQJeTYFQYxCBunhSxWkdqoKT0MQzvH2DUTq2Iw+93ztGauI8UYcEsjbQFg1wzsSpuZZ87Tnt14Pc90Kbw6ReH1aRLDWUY+f4i9v/4Qrumw9KOJTe/hWtBSXRz47G+R33kMu12nOHpiyw2ikCS69t5PJB560Wef+tO7gtn7PYnAx79ivNRtNqi8fu0wU0mmifYPY8xPER0YQVLUVc/HhndhFubWX45gw3YYCRlF0kK2+CBs7g46UgMeLhIKihTB8U0CfBShIiHj4+MGDnE5gypFqDhhv6IqNEDgBCYCgSrpCASWb3C3jRdexg2T7fi2ReD7aD19WPOzIMC/SmWsNV3BtVyyR/oRcnhqNaWT3t21fIxneTgNi9hAGikS2mRJU8ge7gsLMQABVE7OExtIkznUh5CvaDeIyKt+vy3wA8qnFogNpske6r3menJUXX5Nc6LM1FNnUHQFPR+/vXu6Asne7fQceBjfc7n0/f/O/Nvfv2Nr3SgG7/sMO5/4eQAWTj2PVb/9MrIfIKwka119oRzqFZC1KFp3P0osSf6BjxDtH171T02mV9p1bhASMrnIIHl1gAFtD3E5Q1rtRpV0MpE+IlKULm0bObWfrNoPCAb0fWQjA2hSHIFAlxMoInSCdClOtzZCtzZMVEoRlZL0REZIKjlkrk2oK8vhZMtW4IY9Rc9oU33tRZAk3EYdqaFgF1d7Bu35OosvjjPwsT0oUQWz0CKxI4dQVz4At2lRenuO3b94L/t/62Fa01Xi2zJEe1ZzyZXfmWPp5Ql2fPEYqV15rFKbSDpKbCjNhS+/SmPi+kSylxEbSBHfliHWn0JJaKR2ddH72E6scpvGeAm3aVN+e47CK1Od9bqwyp31BtOMfvmVsNAiYP9vPYyQJYyFOvgB6f29tOfqN7Wfm0Gsa2i51cVu1Sice/k6r7jzGLz3k+z5xC8jKRFm33iGC8/8BXbrA2H1OwE5niT3oY+x9P2vh0S0HVjlJZzXnkVIMubCNEvPr2Yyzz/wOP5N5psjUhRNitJwS+QjQ0SkKBE5ium3iMopCAJyaj9+4OIENnW3QFROsGCN4fp2R1/IIKWETlBG7SOj9hAEIQOU4dWJykks37gmoWzvgMz/9Z+6ee1Fk7/58zr16vV7Dz/1hTiPfSrGN/+uwSvP3xo71I0XWjp0XpfdfN8F6ar3FXgBE195B6dmkDs2iBKNUHprFrdhEe0NjV7gB8z/8CKe5dJ9/xBDx7txpsuc//Mz9H1sD04jzHPYNZPRP3+Fg0/uoOvQAO3eJDHJoccvM+2aXO4INAstqmcWcNvr51yEJMgc6KX/I7sRkqAxXkKJqQx+ci9Wqc3k107RaJawqwbnv/QyXQ8O03XPING+JE7Dovz2HFalE6p2PNj88UEy+3vxHY/WVIWLf/U6jbHbzwAT7xriyBf/kGT/Ltrlec5+408wt5hpRo2lyG4/ghpNUp8f4+xT/3nDIsAHuHVImoaaWsfr8/0wBykEhR99F99aHQ43x8/j
1G+uVevyLLQqtE5I7KIQCrGpQqOJR8ut0PDKWH4bN7BxA2dZXE1CJiLpaFLIvu34FjVniaZbod3hDS3aM/RoI9Td4oYsUpomOHRcY2HWQ1FvzNtNJCXufUhjfNR+94yiHI0T27mXxskTAKjpLJGuHlqjZ1Yd57ZsJr92msmvne68TiW1I7/qGM9wmP/+Bea/f4HBfplMSqZ2yaZ0cjVTiF0zsU5cpPT8BWbmXIYGFH72d7NM5j0KnQ6E4mtTFF/biHQ2NMJz37vA3PcuXPc92jWTuWdGmXtmfUYR4IbPdTuw7aHPkRrYTbs8z5mv/zHlsbfelXU3gqzF2PPJ/4GBez6G77ksnn7+A4O4ScR37UfvGbjucUoihRJPbXxAEKwxiADG7MRN78nyDZpumZicRBISptfu5BBVGm6RhltCQkaTQnZtgIq9cs1KQkYgYfotFBGh7hZQpUF0OYHptxBIqEKj4izgboIz9FpoNnwkSZDvvnWdmxsyikJVUTNZ9L5+jMkMCIHWN4iaTHMzREHRqOBnv5gkm5F565TFybMWn/lonELJY3be5TMfixGLScgSNJo+z79i8qmPxHnxVYPZOZfpWZfzF21cL3RRH3lA594jOs22z1efavLEI1EScQnfh+dfMZid39p2lVtBftdxunbf22l1mdpygwiw+2O/xMA9HyfwfSZe+AqTL319q7f0nkVq31ESuw+FBu0a9QahqNcUtBJqhNzxR6idPoHbutWJqoCau0TTKyNEWFgpOav7H8vO7Ia/u4FNwV5d7V6wVo+fmvad6asVUpiD1LR3iRBCiadI7DtEdNtOhBoJWa3bTRpnb64nTdckDuxVefp7LS6OOzSaPrMLLgO9CrouGOhTcByoVD32747wT081abV9MikprFSvnspjes7F80zuP6ZxeH+Eh+7T+fYPWpw+Z9NovncZcMKw+X8lEs9QmznP+af/bKu3hJbMkd91DEmWmXr564w99/cbzuN+gBuBoPzqs9TOvHHNzolIvpfuxz698VkkCb13iMo7mxurFCsNH8vwApdFa3zZG7zboUcFew5EUCOCVutdIoRwqiUqLz2LMTO5Jly+HgLPp3JmkdZsjUrB4p+esvjUR2L0dMl887ttGk0ftyt8I5YVUGv4LBVddoyo2A402z7rUR+m0xJPfixGRBVsG1QZHXPwPBibdClV3rsGEXG51SUDwNmn/gtGZXMEpLcLsa4hDn7ud0j0jGBUC5TG3v7AIN4i7EoxbG27Tt5PKCqBs3HPX+B5GAvTRPu2YcxN4btXhKU3oA44slcjoktcOmWsGp97N/sMFQUO36sRj4fl5u6+0DPu6pV58FF9Y5IHARFNsP9wJJQncAPOn7r1sPzGq8+mQXvs5nNpvu0x990wR5dJS+zfE8f1IJWS6euWeeLhKAN9CtWaj6Jc0QojYM8OlQeP61RrPuWqj+cF3HNIQ1UEP3rNIJ2SMc2AVrsju3gb2p7iUpqM0gf4GH4Lw28Qk1KU3FnScjde4GAGBjm5D02K0fQqVLwFMnLPspZz26thByaK0Kh5S8SlDLJQqHslrtebNXjvJ9n5kX8NwOLpFzFrW9/7l9txhNzOozhGk9Fvf4nCuQ/IHm4VtVMnVs0mbwSv3aR25sQ1c7dqKktq3zHs8hK+43D5O1Y+8QJ2+drfn0MPxIhoEmNnTPC2pm9Qjwp+/X/OsH1P2O4md1rgDhyNsG1HDn8DQmhBaBTjSQlJgrdfs3jtxVufbLnx5m1ZJrZjD3r/UJjjENCeHKN96fwNL9Y2Al55w0SSoFT2aLUDvvJUE0URVGs+r79t4jjgOAFnLzjUGz5f+us6vh9QKIVSpn/0Z1UsK/x9bqFBNCpw3YBy1efcRZul4uZdfhmFbnWYlldFFgp5ZYCiO01G6abkzpKQsziBiR4kiMsZat4SXeoQbb9OQs4hC5mSM4cTmAQE9EV20PCK5JQ+6l6Z6xnEgXs+zp5P/iqyqjH31vcZ/c5/xW5uLdlDemgf2x/9aQDM6hKF0de3dD/vF2yktHc1fMukdvoN1rAgdBD4Pq2JUYy5qTUKv+sVYK5GreyR7RZENIHnXn/0T9JkfGtjFnVZk/E6z8taKDvqGte+Jk0j4L//WZ37H9U5eI/G9l2hcZQkgRoJCPyN84S+D8UljzNvWfzVn9YoLNw6w/uNV5/jSRL7DlE/9SZBR0THbdav86rVsO2AC2Or3fLzF9d306v10PurNVbfIctVe80xl1Gr31rYLAsVSci0/TqyUInLaQKCkEKdsOUgrL7FScldCARuYC+TtLe8Gk2/ymU695ZXo1sdxgvc6wpSqdEk2R1HiMRSNBYnOPONP97yEDU1uIcjP/OHxHL9tArTnPrnP9ryPf3YQUjIeizsUVzPYvkezbGzLA/JXmk/biB8nrpgcfzROJ/9pRzTlyz8DrNUo+Zx7m0TNR7Bd33ctoOsKfTcP8jS67P4roesKQReOJ3mth0iGZ3cgR4WXgq7QXruGyCxLcOFv3sHBMujvk7bgSBA1hUkWcIzXV553uDEyyaqCkfv1/l3/6WHd06Y/MNfNK7Zp2hbAeWiR7XiYbZvj6d7432KQYBbr2HOTRHY7882DDewcXyTvDqIhISEgu1byKj0qTtJyBnK7jwNr4QiIhh+AzdwsIKwT2u1oHdAzSuyXTvMkjOFHWwcKsmRKLs/8SsMHv8Evu+xePqFu8L47Hz854nl+6nPX+LsN/6Exvylrd7Sjx3kWJzcA49TevkH+OswbwNEMnni2/ciadErbGJA7cyb66pbXolkRsZxYWB7hIEdK+O4c5MOC60EiW1p2gsNSqcW0XJRIhkdBCS2ZUiOZNDSOlbFoHRygWhvEiW2Mm5YvVAiuT0LgKTK5A71omejlM8uYVUMhj6xG7PYonxqEbNsYFsBtgVjozaL8y7Vks/JNyzKhXdX3+emqMP0oRH6PvuzYek/CGhPXKR1YbW2syQUetL7yCZGgIBi/RJ+4JGM9jBTehOBYDB/D7bbYqFyBr9T4UrFBuhN72e2/DZtK2xQVmSdruQuMvFhJEmmaSyxVDuH6ax4qNn4COn4IHPlt8knd5GJbwN8CrVRSs0JEno3Q/njzJTeoGGsyKcqksaO3g9TN+ZZrJ4lbF31WHImiclpdBFHSBJuYDHvXERGZd4ew/AbOIGNh4siOjOiBJTc2U61buVuFQQBTmB1vMSN72JqNEH/kccBmPrRPzP54j/fzJ/ljqD30IdJDe4mCAKqU2epzdx4muQD3D7Imk60fxhJlteVeRKKSubYh5AiOnpXL+bSHGoyQ0BA7ez127hG3zaYOL/2hu37kDm2jdSOLI3pMNS3ygbxviSSIhHrTUAAiW1pnKZNtCeBWWrT/8jwuusISaCldfJH+7CbFq7pkhhIsfjyNHZztZNlmSFZ7FZNRt94oaXdpPTsdxCS6CjhBbiNq8NnwVD+Pgbzx6i157CdFgO5Y+iRNJbTYL5yCoEgGx+hbVdYrJ5bthW6miKf3EWhfoG2VUKVo+zse5xktI96ew7PdehJ7yef2sXZmac
x7fAPFdMy9GUOoalJFFnDtOvoahJF1sNBe98loffRnd5HyywtG+FkrJ+u1B5q7VmuNFhWYGC5BjEpjSQkfPxOgWQ1Gt7qsb6rw+OElKUnMkLNLVwzdI4kshz8yd9DjkRDXZOLb265iH1ux1EOfP53UaNxShdPcOkHf7Ol+3k/Qs3kkSIa1tIcSjKNmkyve1wk24WkaRueR8gyairL4ve/Tv6Bj1A7/Tqe2SZ7z8PIegyndu3xU88LiGgyybS8arzaNAIq5wo4bYf+R0aoXSghFIGkSESSGkIWIYlK1cBt20iqhKQIZE1Biat4poea1FBjKkpUJbO3C6FK1McrBF4AfoBVNmjNrU3BGa2A736jjaaDZb77nSQ3bBQDz8NrNVBzXVhz0xAEiKuG1KORNH3ZQxTrl7gw/30gIK53c8/2n8Pi5hpLM/Ft5BLbOT/7HcrN8c5jwxwYepKe1F6m/n/23js6r/u88/zc+vaCXohCAARJsEqkKKrZkixLcY1LxkmcxCnrTCbrFM/uTmZP9uzs/pGcs9nJnt1kEjuTZOKSGceexE4cy5ZtKZJs9UKKnSAJgOj17fX2e/ePC4IE0UmKoGR8zpEOAdz2lvvc5/eU75O+KmwZVOM4RZPB6edxrps9oZl5MqUhGhK9TGVOLniZjYldGFaRXHn5bpiqW6Bq3ng/b9nNUdZXX7qE61rp+8jnqOu5C72Q5sJTf7npRdqSEvBLgsIxHFPn/JNfxKpu9TXfahL7jxBsbGXin75CzaEHqTn8EJ5pLOkJvv4eW4IHnm0hiAJ2pUCgoYXq1AhiMIyormxMr9DRG+Cnf6WOpnaF5jaVmXGTlu0qrz5T5pv/YBNujFC8nEWQRGLtSayqRbQjiZHTsMomhaEsWqqC53rEOmuwKiax9iSVqSLR9gS2ZhNuiaFnq8S6anAtBy1VwdZtCiu0xhqGx1Pf2jzx5HUbRTEYIrb/EIGGZpyq38ei1jVQOnf1Jo4E6pFEhVxljCvel2mVKWrTSKKy3GFXQCAZaUeWgrTU7KM+3gv4w3cUKUgkWL9oa9OqkKuMLTGIAK5nk6+M05TcQyzUjG4VUeUosVATufLYsoNzbhc12/dT13MXll7h0tNfYq7/1U27lit0PvgJOu7zlb2nTj2Ppd2euTM/aZQunqI6NriQDKkM9ftzWa7LMiuJWhL771nxOJ5jUx65BKJEeeQS9UcfI77rAI6uYZfWfpj17g9RzNo8/+08j3wswT/85zSH3hNFAHL9OfIX0wsz0nMXUuQupBbtXx6/eo7icI7Z18cXfp768fDibScKi6JIM6+NcyeyfqOoBhAVFTM9C4KAIMnI0cU9mbIUQBBELPuqoXE9B8vWkNTVjeL1De+qEsXznPnlrv9OWo7OTP48xepinTjHtZadH3uFQnWSsjZDU2I3qeIANdEOFClIqrhyj/PbTbx1B9sf+hkAjEKa1MU3N+1artD5wMfZ/uDPgOcy9tqTDD3/dzjrKOvYYuMYc9ML//Zsi+rE8LJGUa1tJLpjz4rH8Ryb0sBZX4Hd80i98jRy2FfIXyvJAiArAplZm/SMjWl4VEouJ1+p8POfa+B7X8stGMRbwg0cSpL8WkRJFhYLUa+AaXro2s1d84aWz65lIsfiBJtbEdXgkieR4/kfjCJdIzIriIji1dN48wNzrjeCshjk2pdsOwa6WWAs9Saaubimy1su5LxKgZXjmswVLrG98X7i4RZqo9upGjkq+uZoAMZaetj/r36PSF0rlfQEZ7/9J5s+HzkQq6Nm+37kQIj0wHEuPf3lxd0RW7xtFC+cwrOtJQYR/FpDfXZi2b9d4VpVbjMzi5mBUEsHrhXFLq/uLU6PmrRuV7FNF0Nz+dn/sR5BgGxqc5WzBcHvaDl0NMjdR4M0b5MIBMUltZjX8/oLGl/+85sL92wo0aKNDhHp3YNa34yZmaNyXeF2Vc/geDaJSBvp0hDgIUtBYqEmDMtfhrmug+0YhJQkoiDjYCIKEonwNkThiqqkR6EyTkO8l2iokYpxtRNEEKQbeuJkypdprz9Me909hAN1TGZP4Lqb88F3P/xzROq3UZq+zPnvfpHi5O1R3VkJNZJk94f/DY27j2LrVWbPv7xlEG8jZmblrhO7UiL14+9veBZPZPsuqhNDaxrFiyf97HMp7/D8twvc93gM14WXvr+xGuRbTXuXwv/wuwkefiJMKLx+tdnpiZvv115/SY7rok+OYeWziEoAx9DxrisOrZo5P36X2I3r2uhWkWSkHVUOLxhF29UpVmfobDhKZ+NRKnqaaLCRaKhx0UDvXGWMkj5LZ8P9BJU4hlVClcNEgg2Mpd6gYmzMy7Mdg2xljPa6e9CtAqXq9KpCl28XjX33k9jW65e6TFykML506NDtJpiop2nPAwAMPf81pk7Mj8sUBAK9XYT29GKnsiAIVN48SXDPTuyZFFYqQ3B3D26xjDkxTaCnk+DuHjzLpvLGSZxsHrW7AykWRWltxq1qaCfPITf6UnLGwDBSMk5o3y7KLx+7NX2a71gEBFlGVFWWLhI9HM1Z9P4Em7aR3H+UlTyEUGsn2tTImmfVKi7afORp9JLB2ICBIEJt4+0Z2bscgaDAEx8N8+gHwkiSwLmTBhfOmlTL7prL+YvnbmPvsxSNUf/wT+FaFp7ru/LayBCVwat1ip7nMDz7MrajURvb7ic5yuNkipdRlcj8Ni7TuTOAR22sm3iolbI+x+XZF2mtOYAz770ZVomLk0/TlNhNTaQDSQpgOzoVI43lXC1ZsRwDzcwvlNqsjEe6MMD2hvsoVqfQrNufUa3p3Muej/0OSihG9vIphp772m2/husJ1TTT99HfAsAo5cgMnVx42InhEJF776L0/CsorU0E+3qpnjiL0tqEW9UQsjmUpgZsUUSqVAkfPoB26jxyYx2R+w5RfOo55NoaAj2dlH70Cp5u4lSqSFaC0IE+zNEJAr1dvubTT7BBVGrqqTv6yLy+4tL1oWPqTP3zf8W5ZnC8kqhFCoWpjCwfF1fiNcsMrl8bz4NQWOSjv1zHl//v2RW3E2UFz3UXbMGtJJ4Qec8TYSRZ4O+/UuQfvlKimHdW6nRchGPfJpUcHwFHq6JNjOLZfvLDyi+tgTLtMkMzLyz8LIkqO1oeXbSN5VQZS7+xqKwGIF1cvIzUzTyjqdcYTa0swT9XuMBcYX3elihKmHaVbHkE27m9tYCiEqC+9x7UcBzHMjn/nc0fYh9t6mLfJz5PvHUHlfQE/U9+kfLcVT08MR7Fs2ysyRk8xyHQvUxh7vwgDbmuBrWtBVfT/e6nuauevDU563ua88bWTmdxNR21YxuBrg5Kz7389r7QO5zYzn0Em9vIn3odu1RYsgLzVbYXe0BWqUDh/HHKQ4ubJ64gKuqqEwD3HQmTrF/+9g+GRVo6V56+KQXC1O04jCir5IZP3fL5PIGgQEeXwuSozVPfLN+SJfFG2MA0P5BCYdS6BlzDADxcQ2d9UblbPGjqBhAQaEzsompkKFSXn3L2dtJ5/8fofODjAEyf/hFWdfNLXRr77iPeugO9kKb/u3
+xZGazpxsIgQCCqiBFIwiyjOe6CAiIqooQCCDXJbFnU7gVDXs2RemZF3wvUrkmuXadN+FWqji5AsHdvXimhZ3/ya6DtMtFzFyG8lD/mlJiVzBmJzHEle+r8vCFRZ7l9Xzkl2v9trpliqOVgEggtPKxlVCM+p1HCdY0U81O3fqhZYJvbzIpB+0W9TNvhA30PvulA45WwdV1wFvy9LoTCSkJGhK7CaoxaqLbGZ17Df0mirJvhI77fpqu9/wrAMbe+B5Dz/037M0cUSoI1O04xLZDj+N5HpXMJNnLp5Zs5hTLmGMTJD/+AVzd8OsjbAdjeJTQgT2+52g5eLaNnc5gjEyQ+Mj78WwH7dxFjItD/qpCgEWxL8/DmpwhfGg/lTdOgPMO1r+8BVRHBwm3ddH60V/ALhZw7cWuhmeZpF78wSKpMc91WLbvb53kUjbf+1qOUn6pFxaJSXzqN+uX2ctHUgKokSS2Vnpbpjjalkcm5RCJ+eo9t5t1G0XXMjFmp8D15oPBrCqTfgXPc6kaGRzHwH0b4g9rIUkBkpE2PM9lLPU6s4Xz3M55s4FY7XypS5jM0Eku/fBLmy72UNdzN/s+8T8RiCYpTA7Q/50vLL+h41B+6U0ERUZOJoi+517AQ78whDHsdzXheXi242/76vEFD/GKMKp29gL+PNzF77nnuTiFItbkzE90PBEgsn0n4fYerGIO1zaXlN94rrPhr2ysdx/VyZEVZ7V8668z5OasZeN0WtnlzR+t0lEiiAiSjFXOvi1VCuWix/FXdR77cIRd+1TGhi02mHy/KTakkrPEM/TWflS5ns1E/iShxjZcVYDbXI5X1uc4PfqtJb+XQ1Ec08Bz3r6yHCWcYNeHfoOmPfdjGxqz51/ZdIMIsOOxXyIQTZIf6+fcP/8Z1ewq4QTbxrNtvHDoqvFyXTxtmZjs/LaLf3fdXScIKG0thA/uwZqcwSlsbunHnYAYCGJkZpn5wTdxtPVNPVIStYQ7eqiODRHp7EVUFscAI5070WYmVtgb0tMrf+8ty+P8mytfh+tY2HoFQZIXZPNuJZWyy1P/WKG3T+XTn41jmR5vvKSvrMB9i1m/URRFpLCfQRZUFbWuEde2MWan19jRD/rGOnfjGvq6P/S3m1jXHsrjA1iltav+b5RAvJbmvQ8BcPlHX2fy+A/ftnOtl9a7HiNU04zneWSGTlBJrTwJ8VqccoXK6yd8r/Bm8Dw808S4PIo5MYNnbm6R8J2ANjlKqKWdmkMPYObSfiH3tdEG16E6MrCoVlGQZKRQBDEQJGLTVUAAACAASURBVLn/CNXxxQOiRDWwdCzqOgmGRN73iST/+F+W7022tTLVzASx1l7kcOyWJwxFEcoFl+Ov6nz056L89u/XcvGsweSYTTbtsNoAydGh2zji1NU1CifnW9EEgVBbJ0pN3ar7hFu7SezYj2fbKPEaEEAOx0juOoQaq8GqFMmdfwNbKxOsayG5825ENYCemSE/cJJY5y48x6Y4fJ5Y525EScGxdMJNnYiKimvqiGqAuTf+BTkUId6zHyWawCxmKVw6iRgI0nD3w9haBTkcJX/hLapz40Q7dlK79yjh5k601ATFoTO4tk1N3xECyXpc2yJz5hWs4o0PuA8mG9nz0d8GwCjnSA++9baUL2yEpn3vYdcH/zWSGmTm9I8Ze+1JxGDI9wTt1Y2TZ5iYY5OrbrNe7Nk09mwaX5bluolkP4EEmloJNDQTaGxdaNe79h1xDQN9ahTnGgVrM5ci99bLCJJMdXKE1Kv/suiYdZaJe73Xfg01DTIIYOouidrFZiASF+nZF1xxX0srkhk4RjDZSGPfQ0yUvoOt3zpnp6lV5v/5m0aCYYFoVCSRFGjeJmFbYNveqgvUZ56s3D6jKKgq4S5fmEEQBNS6hgVhiOUQFZWaXYfInHkFz3NpOPQIgiASbtmOIAhMv/Qk8e59JHcdIn3yBWr33Udh4BSVqct+zEIUkAIhXwEEAUkNIMoq4OHoFRxDwzU0gvUtyKEo8R37kQJBqjNjRNt6sFo6MUs55HCcmVd/QLC2kWjHLipTlyldPkesvZfM6Vcwcn43gRJNoMZrKI1eQJsdv6kPOdrYyd6Pf574tl6qmSn6v/sXlGdHbvh4twI5FKV+xyGUUBQtn+L8k1/AMTVq7n0v+vQ42vjw2ge5xYQ7ujHmpn1V6Z9gimePU7qwNNG1gAeued2N7nm+R2lbpF95Zkn5Tfly/6rK+I9+LIEoCehVl/f/TJLcNWM8ZIXVl8WeR37sHIIo0nzwMdrv/yS54VOYpSyubawaInaMKrax+r3l2B7Tk/YNLcxnJm9jR4sgyQTqGwE/tORoVSpDKwsqiEoAPBfHqOI5Dna15C/B1SBWtYTnOpilLOGW7SAIyOEoena+WNRzwVucxBGEqz/begVBlHBtC8+xEdUAcjCMIEqIskxlahg9M4ugKFiVAo5ewdarCNI1L/e6D87WKpRGLhBsaCVY10L+4ltYNzgfpWH3URJtOzFKWfq/9xdkhk7c0HFuFYIo0fPIp9l26HEAZs6+QLh7J2IoiFrXgD4zgRxLEOroRlRV9MkxjLlp5FjC305W0CZGcQ0NOZ5EG7tMqKMbV9dRkjVIkSiupoEkURm6iByNEtq2Hc+xqY4NIQgSoY4uv6THsigPnEetrSd56H6M2Um0qTGqwwMEmloJtrSD51IdvbzGq3r34Dn2im18/n3XhJGeWaH/WUCOJgi1dvj33DVL5tLA2RXP+ew/5kGAhz4Q5/lv53n9uaslYpGYxCc+u/IqUIkkqd95FCkQQhAlaroOEm/dgVHK4ljGqomz3PAp0hdXrjsGyKQd/uj3l1+6r0WldPNxxw0tn/PHXp3/YLwFodmVcEy/DVCN1eA6NnIohue6WNUSoYZtyOEYwdom31PzPMxijnBzB9XpEX9/y8S1LeRgFDkSI1DXjJmfly26cur5N99zbKxKETwojVzA8zxc20RN1K+YDHItEzkcxSrn/RIIz8PIzWIWM/4yuq5p40ZREKjtOsC2w0/geR7VzBSZwc01iAA9j/4ibfd8EMcyGXv1n5nsf4HIrj6K504SbG5DVAOEu3oRBAEzkyLauwe7UiZx4Aj63BRGsYBdLqDW1BNq7UAbu0ywpR27VECtrfeLsqNxXF0j1NZJsLkNfXYKJVlDeHsvTrlEqL2b/PFXiO+7G2NuGrtUwDV0jPQcZtYv64js6MMplzDScziGhhSIbvI7t/lIkSi19z7M7LPfWdajFhWVuqOPYhdz2NXKdQZp5fuzkPUN7PEXyti2R3r6qlEOhmzOHVu5ZEyNJGna/zDCvFaB59iIcoBQTcuar6eaXlsuzLZgdGjzYs3rb/MLhQlv30Gp/zR4HmpdA3Kihurl5b1Fz7bI9r9Jsvcgtl7FzKdwTR2tmEOJxKm/+2GscoH8BX86XPbsayR33U20bQd6dpbCpZNUp0dJ7ryb2n33Y5XzWJUirmX4H4Yg4jk2ZjGLY+oUBk+T2HGAhsOPYpULFAZP41oGRt6/4VzLxCxeffoUR/tJdO9DTTZQHDoLA
tTuux9BVnxlktTGC7zVSJK9H/+3hJINFKcvc/47f77hY9xqgolGarbvRVJUpk4+y9CPvk6gtR2nWsWYncIuFRAkCSkYQp+exEhNE+7oQUkkkSJR34AV5pNRyboFT0RUFARBwDV07HIJMRDENQ2UeJJAQxOiGsBzHYxK2R/LkE9jzE7idPmZUqNcwtEqmNnUwvFL508R6dlFZPsOXNMA611cvyiIfjjVdeff0+UXi1IghBxLrpg08fBwtApGNuVLhV1T+uQ3WaxOetamtl6ma3cAUbo6/Kr/rZWNolXJM3Pm+TWPvRyV2dG1N1oBWQFFEfA8f2DVOuZy3dh51ruhGAgSaNlG6bwf+xCDIQKNzSsaRQA9NclMamlwPnf+jSW/Mwtp5t54ZvHv8inm3nh61euqTF1dZmVOvbTk71d+Z5VyZM9cFXGtTg1TnVocR5t97QernmstmvruR434svKXf/wNKumVSyJuB6GaZvo+8jlqOvdilPPMXXgd1zaxS3mk6F6ifQeQEzVoU+NYhTyhbR0oNXV4toWZTWNm5oju6MPRfAPq6P7yOdZ3ELWmHiuf9ftrr/wH2NUy1YkRXMPAsy2M2SmkcATv2gLt+RvcqVaIzBtJfWoMJZHE1aq+PH+iBid9Y0uodwKR7l0o0Tj5028Q7ugh3Na17HZyNI4cja18IM9DEESS++/FKmRxr1mG5956CTO7etJh5/4Q7/3IvC7qNXZ3etRi5MLyRtWs5Jk58cyyf7vVKCrs3Bugt0+hvlEiHBXJphye/V51of0vEBQIR0Qsy6N8kxM9YSN6iq6DKCsoiRrscslv91stN/4TRsvBR+l53y8iKSpzF17fdDkwgEhDG/W9h3Bti4Gnv8zc+VcAsAp5SudPIAaCFM8cx8zM+eMmqmUEWUYbHcTVNUrnTxJoakUQJTzHxi4XKfWfQpAk8qfewC7kEGQZ17IRJGm+y8IvNFbrG0EQcS0LJ5NaaDkrD/YvLANL/adQrvkeOYaOIEpUBvsx0rOo7+Llc7ChhUBjC/nTbxLZvpPkwaPLjjEVZAVBXbkPWRBFpFCY9GvPLpnHYlfWrgFt7lDQqh7P/VMeo3rVoJjm5nvpiRqRT/xijEc+EKajSyEUFhAEgYF+k1NvGgtGsXunwid/KYahe/zpH2ZXLdlZD+vXUyyXqY4NU/fIBxAVBWN2msLJ12/u7O8SGvc8wO4P/QaSGmbm7Iv0f/cvsKqbW5Qca+lh1wf+NQB6MU1m6JrZL66DPrU0tnN9BtrRqlRHBhdvM7a+BMj1mc8rhtBMX1VesQo5rMLVOlFjeoJFvsm72CjmTryKIMsLMe/CmWPkTr66pPg9UNdI/Xt+asXjeK6LY+jUHn4Iu1xaVPaVO/EyZja14r4AIxcMDr03yqd/p55ywV2wydOjJt/+0uZ56o3NEr/2u0me+OmIX7dYcslnPRqal5qsYt6ltV2mo0vhme9UOHXs5hokNqC8bVPuP035whlAWLObRVLDhJMtKMEItl6hWpghnGzBqGQxylmUUJxwspVKZgzb9L0IQZSJ1LYBLpXs5MIHLAcihJOtyIEwnmOjFefQS1c/bFFSiNS1Y1YLuLZJuHYbkhzAMTXK6VEc2yRa34kgSpTmFs8uDkTrCCdbKM4O3tAUPTkYoX7HYZRQDL2Y5vx3/vyW1mzdCPHWXg787L8nVNNMaXaEc//0pxild+9S9J2Ia2hceQJYhRyOXvU9vesCZVcy9ivhOQ7F/hNLWm5FNYhjrP197t4TxDI8XnyqQLV89dzX/vt2o6jw+EcjfOiTUdJzNt/7ZplXf6QxO23zl3+/NJkzPWkzMWqza1+APXcFbp9RBFBq64l070QMBNGnJ6iODPnLpesPqoZp2vkgNe17sQ1twZDFm3uZufACRjlLpKaVtgMfZPjNb2JnqvP7BWna+QCe6zB6/J9xXAclFKdl98NE6zpwHQtRVrGNMtPnf0QpPeLvFwjTsvthKtkJ5ECYQLQeUZIRBJGh176B45gkW3cRa+xh4IWvLDLC9dsPUdO2j0p2YsNGURBEuh/+edru8Z/kUyef80sSNpnWux8jXNtCOTVO/5NfpDi1+Uv5LVamdOm0X9WxTObAqZbJn3nTTzwth+eiTS1NXsR2HkAKhnEqq6sxFbI2ggDbdwWpltyFfHUuZTN2aSPfZV/aZq3aQl/jcfVi/Vhc5IH3hRAE+PsvF/nHr5UWlsTLJVdcB+am/dfR1Lq2HsNarD/7HI4Q33cIM5fGKuYJNG9DEEUqg9drGQpEG7ZT33WY2cFXyU+cR1ICNPYcRQ2tEjBeBkGUaeg+QijZwvip72NqBSRZpXXf+2na9R6Mag7zmvGbddsPMXvpJWYHXsVzXSRZxdKK4Hlkx8+SbN1DvLmX7JifLFLDCWKNXRRmBhYM5UaQQ1Ga9vqK1WOvPcnwi/+wYdn4W4kgiDTtfYjGvvvwXJfSzGXyY+dX3F5V4eMfDvPoe4OMjds4Lpw8bZLNubz3gQBf/XqFTNbl8UeD9GyX+W9/X6G+TuJnPxmme7vM8IjN179ZYWzCYc8uhQ89EWJ41OZ9DweZnLL58tcq/PLPR3jqGY0z53xv51MfDyOK8N//cRNVgu4wVmuCcA2d4vm3ltb+iZLf1WWZvozbdeYo1NLuZ/czKwvFAqSmLMaGDAQBAmHBt1cChCJrjAAQBNRIDTXdB4k0dCKpwXW1FeYunyJ14ZVVtwmGRLp3qoyPWBx7RV9XjLBcdMGDSHT9owtWYkPT/ARZptzvP9VcwyDQtNSVFSWZeGMPRjlLfuL8wjI3F4oTa+ze0MUFo3UkmneilVIIoogaioMgYJRz1LTtIRhrWGQUzWqO7PgZ3xBeh1aYQSvOkmzdTWH6Io5lEIo3EgjXMN3/wobVPoKJenZ/+DcJxusxSjmyI2c2d/KdINC490H2/PRvIwfDZIZOcPH7/2XVXR48GuTDPxXiq39XIaDC730+QaHgAja7ehUCqv8lb2mS2NEtU5MU+aWfi1AouvzpF0s8/miQ3/6NOH/4x3lqakQ+9pEwf/mlEn/2n4uoAYG5lINpwWPvDdJ/0SKgCrz/4SB/9607o//9HcMyxdDBplZq7rqf/KnXaXzvh5YkPQO1jSuqcl/L2KDB1MjifZWAwN57IqvuF65vZ9s9HyLS2Il4pSnC8/xSI/8HQFgwlI6pY+tlBHntUceSBOGowPCAu+7JfJLsn8e2bqPytjsf14jtOYhrGqi1DRjppU8hQRBRI0ksvbTI+zKreZwNGh45GEEJxVCCUcLxJq51u7X8zKLyAwC9lFnRuHmuQ3bsDG37Hydcs41KdpxEyy6qxVm0wsyGriuYbKTvI5+jYec9mJUCF576y4XM7mbR2Hc/ez7yOeRgGICBZ/4Ws7y62MXBfQqXR2xeP2Zg2R4f+qnwqnrA8bjI+x8J8tJrBu9/JEhzk8TePoW2bTKCAKmUww+f1Uilr65xnn5W49/9TpxYVGTnDhnd9DjXvyUCcS1SKIKSqMFIzy70oIe2bSfYtA2rXKA6OrhISxHASM+SfvXZeY3T
...(base64-encoded PNG of the generated word-cloud figure omitted)...\n",
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"import os\n",
"import os.path as P\n",
"import re\n",
"\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import PIL\n",
"import wordcloud\n",
"\n",
"def make_image(data, outputname, size=(1, 1), dpi=80):\n",
" #\n",
" # https://stackoverflow.com/questions/9295026/matplotlib-plots-removing-axis-legends-and-white-spaces\n",
" #\n",
" # Argh, getting the image saved without any borders is such a PITA\n",
" #\n",
" fig = plt.figure()\n",
" fig.set_size_inches(size)\n",
" ax = plt.Axes(fig, [0., 0., 1., 1.])\n",
" ax.set_axis_off()\n",
" fig.add_axes(ax)\n",
" plt.set_cmap('hot')\n",
" ax.imshow(data, aspect='equal')\n",
" plt.savefig(outputname, dpi=dpi)\n",
"\n",
"def make_cloud(mask_path, text, out_path):\n",
" mask = 255 - np.array(PIL.Image.open(mask_path))\n",
" \n",
" cloud = wordcloud.WordCloud(\n",
" mask=mask,\n",
" contour_width=3, \n",
" contour_color='steelblue',\n",
" max_words=50,\n",
" repeat=True,\n",
" ).generate(text)\n",
" \n",
" cloud.to_file(out_path)\n",
" \n",
" plt.axis(\"off\")\n",
" plt.imshow(cloud, interpolation=\"bilinear\")\n",
" plt.show()\n",
"\n",
"def make_clouds(subdir):\n",
" masks = ('one.png', 'two.png', 'three.png', 'four.png')\n",
" py_files = [P.join(subdir, f) for f in os.listdir(subdir) if f.endswith('.py')]\n",
" text = [\n",
" 'core concepts document corpus model vector',\n",
" 'corpora vector spaces corpus streaming corpus formats serialization',\n",
" 'topics transformations model',\n",
" 'similarity queries query similar documents'\n",
" ]\n",
" for m, p, t in zip(masks, py_files, text):\n",
" make_cloud(m, t, re.sub('.py$', '.png', p))\n",
" \n",
"make_clouds('../gallery/core/')"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
| 349,506 | Python | .py | 167 | 2,087.862275 | 90,244 | 0.96066 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,023 | nosy.py | piskvorky_gensim/gensim/nosy.py |
#!/usr/bin/env python
"""
A simple testrunner for nose (or anything else).
Watch for changes in all file types specified in 'EXTENSIONS'.
When a change is detected, run the test executable in 'EXECUTABLE' with the default
arguments 'DEFAULTARGS'.
The --with-color option needs the "rudolf" nose plugin. See:
https://pypi.org/project/rudolf/
Originally by Jeff Winkler, http://jeffwinkler.net
Forked from wkral https://github.com/wkral/Nosy
"""
import os
import stat
import time
import datetime
import sys
import fnmatch
EXTENSIONS = ['*.py']
EXECUTABLE = 'nosetests test/'
DEFAULTARGS = '--with-color -exe'  # -w tests
def check_sum():
"""
Return a long which can be used to know if any .py files have changed.
"""
val = 0
for root, dirs, files in os.walk(os.getcwd()):
for extension in EXTENSIONS:
for f in fnmatch.filter(files, extension):
stats = os.stat(os.path.join(root, f))
val += stats[stat.ST_SIZE] + stats[stat.ST_MTIME]
return val
if __name__ == '__main__':
val = 0
try:
while True:
if check_sum() != val:
val = check_sum()
os.system('%s %s %s' % (EXECUTABLE, DEFAULTARGS, ' '.join(sys.argv[1:])))
print(datetime.datetime.now().__str__())
print('=' * 77)
time.sleep(1)
except KeyboardInterrupt:
print('Goodbye')
| 1,405 | Python | .py | 43 | 26.953488 | 89 | 0.628423 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,024 | utils.py | piskvorky_gensim/gensim/utils.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""Various general utility functions."""
from contextlib import contextmanager
import collections.abc
import logging
import warnings
import numbers
from html.entities import name2codepoint as n2cp
import pickle as _pickle
import re
import unicodedata
import os
import random
import itertools
import tempfile
from functools import wraps
import multiprocessing
import shutil
import sys
import subprocess
import inspect
import heapq
from copy import deepcopy
from datetime import datetime
import platform
import types
import numpy as np
import scipy.sparse
from smart_open import open
from gensim import __version__ as gensim_version
logger = logging.getLogger(__name__)
# When pickling objects for persistence, use this protocol by default.
# Note that users won't be able to load models saved with high protocols on older environments that do
# not support that protocol (e.g. Python 2).
# In the rare cases where this matters, users can explicitly pass `model.save(pickle_protocol=2)`.
# See also https://github.com/RaRe-Technologies/gensim/pull/3065
PICKLE_PROTOCOL = 4
PAT_ALPHABETIC = re.compile(r'(((?![\d])\w)+)', re.UNICODE)
RE_HTML_ENTITY = re.compile(r'&(#?)([xX]?)(\w{1,8});', re.UNICODE)
NO_CYTHON = RuntimeError(
"Compiled extensions are unavailable. "
"If you've installed from a package, ask the package maintainer to include compiled extensions. "
"If you're building Gensim from source yourself, install Cython and a C compiler, and then "
"run `python setup.py build_ext --inplace` to retry. "
)
"""An exception that gensim code raises when Cython extensions are unavailable."""
#: A default, shared numpy-Generator-based PRNG for any/all uses that don't require seeding
default_prng = np.random.default_rng()
def get_random_state(seed):
"""Generate :class:`numpy.random.RandomState` based on input seed.
Parameters
----------
seed : {None, int, array_like}
Seed for random state.
Returns
-------
:class:`numpy.random.RandomState`
Random state.
Raises
------
ValueError
If seed is not {None, int, array_like}.
Notes
-----
Method originally from `maciejkula/glove-python <https://github.com/maciejkula/glove-python>`_
and written by `@joshloyal <https://github.com/joshloyal>`_.
"""
if seed is None or seed is np.random:
return np.random.mtrand._rand
if isinstance(seed, (numbers.Integral, np.integer)):
return np.random.RandomState(seed)
if isinstance(seed, np.random.RandomState):
return seed
raise ValueError('%r cannot be used to seed a np.random.RandomState instance' % seed)
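# Illustrative calls (not taken from the original docstring), showing the three accepted seed forms:
#   get_random_state(None)                       # -> numpy's shared global RandomState
#   get_random_state(42)                         # -> a fresh RandomState seeded with 42
#   get_random_state(np.random.RandomState(7))   # -> returned unchanged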
def synchronous(tlockname):
"""A decorator to place an instance-based lock around a method.
Notes
-----
Adapted from http://code.activestate.com/recipes/577105-synchronization-decorator-for-class-methods/.
"""
def _synched(func):
@wraps(func)
def _synchronizer(self, *args, **kwargs):
tlock = getattr(self, tlockname)
logger.debug("acquiring lock %r for %s", tlockname, func.__name__)
with tlock: # use lock as a context manager to perform safe acquire/release pairs
logger.debug("acquired lock %r for %s", tlockname, func.__name__)
result = func(self, *args, **kwargs)
logger.debug("releasing lock %r for %s", tlockname, func.__name__)
return result
return _synchronizer
return _synched
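# Hedged usage sketch: the class and attribute names below are invented for illustration only.
#   import threading
#
#   class SafeCounter:
#       def __init__(self):
#           self._lock = threading.RLock()
#           self.count = 0
#
#       @synchronous('_lock')
#       def increment(self):
#           self.count += 1   # executes with self._lock held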
def file_or_filename(input):
"""Open a filename for reading with `smart_open`, or seek to the beginning if `input` is an already open file.
Parameters
----------
input : str or file-like
Filename or file-like object.
Returns
-------
file-like object
An open file, positioned at the beginning.
"""
if isinstance(input, str):
# input was a filename: open as file
return open(input, 'rb')
else:
# input already a file-like object; just reset to the beginning
input.seek(0)
return input
@contextmanager
def open_file(input):
"""Provide "with-like" behaviour without closing the file object.
Parameters
----------
input : str or file-like
Filename or file-like object.
Yields
-------
file
File-like object based on input (or input if this already file-like).
"""
mgr = file_or_filename(input)
exc = False
try:
yield mgr
except Exception:
# Handling any unhandled exceptions from the code nested in 'with' statement.
exc = True
if not isinstance(input, str) or not mgr.__exit__(*sys.exc_info()):
raise
# Try to introspect and silence errors.
finally:
if not exc and isinstance(input, str):
mgr.__exit__(None, None, None)
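# Usage sketch (the path below is a made-up example): `open_file` accepts either a filename
# or an already open file object; caller-supplied file objects are not closed on exit.
#   with open_file('mycorpus.txt') as fin:
#       first_line = next(fin)   # bytes, since filenames are opened in 'rb' mode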
def deaccent(text):
"""Remove letter accents from the given string.
Parameters
----------
text : str
Input string.
Returns
-------
str
Unicode string without accents.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.utils import deaccent
>>> deaccent("Šéf chomutovských komunistů dostal poštou bílý prášek")
u'Sef chomutovskych komunistu dostal postou bily prasek'
"""
if not isinstance(text, str):
# assume utf8 for byte strings, use default (strict) error handling
text = text.decode('utf8')
norm = unicodedata.normalize("NFD", text)
result = ''.join(ch for ch in norm if unicodedata.category(ch) != 'Mn')
return unicodedata.normalize("NFC", result)
def copytree_hardlink(source, dest):
"""Recursively copy a directory ala shutils.copytree, but hardlink files instead of copying.
Parameters
----------
source : str
Path to source directory
dest : str
Path to destination directory
Warnings
--------
Available on UNIX systems only.
"""
copy2 = shutil.copy2
try:
shutil.copy2 = os.link
shutil.copytree(source, dest)
finally:
shutil.copy2 = copy2
def tokenize(text, lowercase=False, deacc=False, encoding='utf8', errors="strict", to_lower=False, lower=False):
"""Iteratively yield tokens as unicode strings, optionally removing accent marks and lowercasing it.
Parameters
----------
text : str or bytes
Input string.
deacc : bool, optional
Remove accentuation using :func:`~gensim.utils.deaccent`?
encoding : str, optional
Encoding of input string, used as parameter for :func:`~gensim.utils.to_unicode`.
errors : str, optional
Error handling behaviour, used as parameter for :func:`~gensim.utils.to_unicode`.
lowercase : bool, optional
Lowercase the input string?
to_lower : bool, optional
Same as `lowercase`. Convenience alias.
lower : bool, optional
Same as `lowercase`. Convenience alias.
Yields
------
str
Contiguous sequences of alphabetic characters (no digits!), using :func:`~gensim.utils.simple_tokenize`
Examples
--------
.. sourcecode:: pycon
>>> from gensim.utils import tokenize
>>> list(tokenize('Nic nemůže letět rychlostí vyšší, než 300 tisíc kilometrů za sekundu!', deacc=True))
[u'Nic', u'nemuze', u'letet', u'rychlosti', u'vyssi', u'nez', u'tisic', u'kilometru', u'za', u'sekundu']
"""
lowercase = lowercase or to_lower or lower
text = to_unicode(text, encoding, errors=errors)
if lowercase:
text = text.lower()
if deacc:
text = deaccent(text)
return simple_tokenize(text)
def simple_tokenize(text):
"""Tokenize input test using :const:`gensim.utils.PAT_ALPHABETIC`.
Parameters
----------
text : str
Input text.
Yields
------
str
Tokens from `text`.
"""
for match in PAT_ALPHABETIC.finditer(text):
yield match.group()
def simple_preprocess(doc, deacc=False, min_len=2, max_len=15):
"""Convert a document into a list of lowercase tokens, ignoring tokens that are too short or too long.
Uses :func:`~gensim.utils.tokenize` internally.
Parameters
----------
doc : str
Input document.
deacc : bool, optional
Remove accent marks from tokens using :func:`~gensim.utils.deaccent`?
min_len : int, optional
Minimum length of token (inclusive). Shorter tokens are discarded.
max_len : int, optional
Maximum length of token in result (inclusive). Longer tokens are discarded.
Returns
-------
list of str
Tokens extracted from `doc`.
"""
tokens = [
token for token in tokenize(doc, lower=True, deacc=deacc, errors='ignore')
if min_len <= len(token) <= max_len and not token.startswith('_')
]
return tokens
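# Small worked example (input chosen here for illustration, not from the original docs):
#   simple_preprocess("Šéf má 2 auta!", deacc=True)   # -> ['sef', 'ma', 'auta']
# Digits are dropped by the tokenizer, and tokens shorter than `min_len` are discarded.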
def any2utf8(text, errors='strict', encoding='utf8'):
"""Convert a unicode or bytes string in the given encoding into a utf8 bytestring.
Parameters
----------
text : str
Input text.
errors : str, optional
Error handling behaviour if `text` is a bytestring.
encoding : str, optional
Encoding of `text` if it is a bytestring.
Returns
-------
str
Bytestring in utf8.
"""
if isinstance(text, str):
return text.encode('utf8')
# do bytestring -> unicode -> utf8 full circle, to ensure valid utf8
return str(text, encoding, errors=errors).encode('utf8')
to_utf8 = any2utf8
def any2unicode(text, encoding='utf8', errors='strict'):
"""Convert `text` (bytestring in given encoding or unicode) to unicode.
Parameters
----------
text : str
Input text.
errors : str, optional
Error handling behaviour if `text` is a bytestring.
encoding : str, optional
Encoding of `text` if it is a bytestring.
Returns
-------
str
Unicode version of `text`.
"""
if isinstance(text, str):
return text
return str(text, encoding, errors=errors)
to_unicode = any2unicode
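# Round-trip sketch: `to_utf8` always returns a utf8 bytestring, `to_unicode` always returns str.
#   to_utf8(u'Šéf')                    # -> b'\xc5\xa0\xc3\xa9f'
#   to_unicode(b'\xc5\xa0\xc3\xa9f')   # -> u'Šéf'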
def call_on_class_only(*args, **kwargs):
"""Helper to raise `AttributeError` if a class method is called on an instance. Used internally.
Parameters
----------
*args
Variable length argument list.
**kwargs
Arbitrary keyword arguments.
Raises
------
AttributeError
If a class method is called on an instance.
"""
raise AttributeError('This method should be called on a class object.')
class SaveLoad:
"""Serialize/deserialize objects from disk, by equipping them with the `save()` / `load()` methods.
Warnings
--------
This uses pickle internally (among other techniques), so objects must not contain unpicklable attributes
such as lambda functions etc.
"""
def add_lifecycle_event(self, event_name, log_level=logging.INFO, **event):
"""
Append an event into the `lifecycle_events` attribute of this object, and also
optionally log the event at `log_level`.
Events are important moments during the object's life, such as "model created",
"model saved", "model loaded", etc.
The `lifecycle_events` attribute is persisted across object's :meth:`~gensim.utils.SaveLoad.save`
and :meth:`~gensim.utils.SaveLoad.load` operations. It has no impact on the use of the model,
but is useful during debugging and support.
Set `self.lifecycle_events = None` to disable this behaviour. Calls to `add_lifecycle_event()`
will not record events into `self.lifecycle_events` then.
Parameters
----------
event_name : str
Name of the event. Can be any label, e.g. "created", "stored" etc.
event : dict
Key-value mapping to append to `self.lifecycle_events`. Should be JSON-serializable, so keep it simple.
Can be empty.
This method will automatically add the following key-values to `event`, so you don't have to specify them:
- `datetime`: the current date & time
- `gensim`: the current Gensim version
- `python`: the current Python version
- `platform`: the current platform
- `event`: the name of this event
log_level : int
Also log the complete event dict, at the specified log level. Set to False to not log at all.
"""
# See also https://github.com/RaRe-Technologies/gensim/issues/2863
event_dict = deepcopy(event)
event_dict['datetime'] = datetime.now().isoformat()
event_dict['gensim'] = gensim_version
event_dict['python'] = sys.version
event_dict['platform'] = platform.platform()
event_dict['event'] = event_name
if not hasattr(self, 'lifecycle_events'):
# Avoid calling str(self), the object may not be fully initialized yet at this point.
logger.debug("starting a new internal lifecycle event log for %s", self.__class__.__name__)
self.lifecycle_events = []
if log_level:
logger.log(log_level, "%s lifecycle event %s", self.__class__.__name__, event_dict)
if self.lifecycle_events is not None:
self.lifecycle_events.append(event_dict)
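# Sketch of one recorded event dict (all field values here are purely illustrative):
#   {'event': 'created', 'datetime': '2024-01-01T12:00:00', 'gensim': '<version>',
#    'python': '<python version string>', 'platform': '<platform string>'}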
@classmethod
def load(cls, fname, mmap=None):
"""Load an object previously saved using :meth:`~gensim.utils.SaveLoad.save` from a file.
Parameters
----------
fname : str
Path to file that contains needed object.
mmap : str, optional
Memory-map option. If the object was saved with large arrays stored separately, you can load these arrays
via mmap (shared memory) using `mmap='r'`.
If the file being loaded is compressed (either '.gz' or '.bz2'), then `mmap=None` **must be** set.
See Also
--------
:meth:`~gensim.utils.SaveLoad.save`
Save object to file.
Returns
-------
object
Object loaded from `fname`.
Raises
------
AttributeError
When called on an object instance instead of class (this is a class method).
"""
logger.info("loading %s object from %s", cls.__name__, fname)
compress, subname = SaveLoad._adapt_by_suffix(fname)
obj = unpickle(fname)
obj._load_specials(fname, mmap, compress, subname)
obj.add_lifecycle_event("loaded", fname=fname)
return obj
def _load_specials(self, fname, mmap, compress, subname):
"""Load attributes that were stored separately, and give them the same opportunity
to recursively load using the :class:`~gensim.utils.SaveLoad` interface.
Parameters
----------
fname : str
Input file path.
mmap : {None, ‘r+’, ‘r’, ‘w+’, ‘c’}
Memory-map options. See `numpy.load(mmap_mode)
<https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.load.html>`_.
compress : bool
Is the input file compressed?
subname : callable
Function producing the filename for a separately stored attribute (as returned by :meth:`~gensim.utils.SaveLoad._adapt_by_suffix`). Passed in automatically during recursive processing.
"""
def mmap_error(obj, filename):
return IOError(
'Cannot mmap compressed object %s in file %s. ' % (obj, filename)
+ 'Use `load(fname, mmap=None)` or uncompress files manually.'
)
for attrib in getattr(self, '__recursive_saveloads', []):
cfname = '.'.join((fname, attrib))
logger.info("loading %s recursively from %s.* with mmap=%s", attrib, cfname, mmap)
with ignore_deprecation_warning():
getattr(self, attrib)._load_specials(cfname, mmap, compress, subname)
for attrib in getattr(self, '__numpys', []):
logger.info("loading %s from %s with mmap=%s", attrib, subname(fname, attrib), mmap)
if compress:
if mmap:
raise mmap_error(attrib, subname(fname, attrib))
val = np.load(subname(fname, attrib))['val']
else:
val = np.load(subname(fname, attrib), mmap_mode=mmap)
with ignore_deprecation_warning():
setattr(self, attrib, val)
for attrib in getattr(self, '__scipys', []):
logger.info("loading %s from %s with mmap=%s", attrib, subname(fname, attrib), mmap)
sparse = unpickle(subname(fname, attrib))
if compress:
if mmap:
raise mmap_error(attrib, subname(fname, attrib))
with np.load(subname(fname, attrib, 'sparse')) as f:
sparse.data = f['data']
sparse.indptr = f['indptr']
sparse.indices = f['indices']
else:
sparse.data = np.load(subname(fname, attrib, 'data'), mmap_mode=mmap)
sparse.indptr = np.load(subname(fname, attrib, 'indptr'), mmap_mode=mmap)
sparse.indices = np.load(subname(fname, attrib, 'indices'), mmap_mode=mmap)
with ignore_deprecation_warning():
setattr(self, attrib, sparse)
for attrib in getattr(self, '__ignoreds', []):
logger.info("setting ignored attribute %s to None", attrib)
with ignore_deprecation_warning():
setattr(self, attrib, None)
@staticmethod
def _adapt_by_suffix(fname):
"""Get compress setting and filename for numpy file compression.
Parameters
----------
fname : str
Input filename.
Returns
-------
(bool, function)
The first element is True if `fname` is compressed; the second is a function that builds the filenames used for separately stored arrays.
"""
compress, suffix = (True, 'npz') if fname.endswith('.gz') or fname.endswith('.bz2') else (False, 'npy')
return compress, lambda *args: '.'.join(args + (suffix,))
def _smart_save(
self, fname,
separately=None, sep_limit=10 * 1024**2, ignore=frozenset(), pickle_protocol=PICKLE_PROTOCOL,
):
"""Save the object to a file. Used internally by :meth:`gensim.utils.SaveLoad.save()`.
Parameters
----------
fname : str
Path to file.
separately : list, optional
Iterable of attributes that need to be stored separately.
sep_limit : int, optional
Don't store arrays smaller than this separately. In bytes.
ignore : frozenset, optional
Attributes that shouldn't be stored at all.
pickle_protocol : int, optional
Protocol number for pickle.
Notes
-----
If `separately` is None, automatically detect large numpy/scipy.sparse arrays in the object being stored,
and store them into separate files. This avoids pickle memory errors and allows mmap'ing large arrays back
on load efficiently.
You can also set `separately` manually, in which case it must be a list of attribute names to be stored
in separate files. The automatic check is not performed in this case.
"""
compress, subname = SaveLoad._adapt_by_suffix(fname)
restores = self._save_specials(
fname, separately, sep_limit, ignore, pickle_protocol, compress, subname,
)
try:
pickle(self, fname, protocol=pickle_protocol)
finally:
# restore attribs handled specially
for obj, asides in restores:
for attrib, val in asides.items():
with ignore_deprecation_warning():
setattr(obj, attrib, val)
logger.info("saved %s", fname)
def _save_specials(self, fname, separately, sep_limit, ignore, pickle_protocol, compress, subname):
"""Save aside any attributes that need to be handled separately, including
by recursion any attributes that are themselves :class:`~gensim.utils.SaveLoad` instances.
Parameters
----------
fname : str
Output filename.
separately : list or None
List of attributes to store separately.
sep_limit : int
Don't store arrays smaller than this separately. In bytes.
ignore : iterable of str
Attributes that shouldn't be stored at all.
pickle_protocol : int
Protocol number for pickle.
compress : bool
If True - compress output with :func:`numpy.savez_compressed`.
subname : function
Produced by :meth:`~gensim.utils.SaveLoad._adapt_by_suffix`
Returns
-------
list of (obj, {attrib: value, ...})
Settings that the caller should use to restore each object's attributes that were set aside
during the default :func:`~gensim.utils.pickle`.
"""
asides = {}
sparse_matrices = (scipy.sparse.csr_matrix, scipy.sparse.csc_matrix)
if separately is None:
separately = []
for attrib, val in self.__dict__.items():
if isinstance(val, np.ndarray) and val.size >= sep_limit:
separately.append(attrib)
elif isinstance(val, sparse_matrices) and val.nnz >= sep_limit:
separately.append(attrib)
with ignore_deprecation_warning():
# whatever's in `separately` or `ignore` at this point won't get pickled
for attrib in separately + list(ignore):
if hasattr(self, attrib):
asides[attrib] = getattr(self, attrib)
delattr(self, attrib)
recursive_saveloads = []
restores = []
for attrib, val in self.__dict__.items():
if hasattr(val, '_save_specials'): # better than 'isinstance(val, SaveLoad)' if IPython reloading
recursive_saveloads.append(attrib)
cfname = '.'.join((fname, attrib))
restores.extend(val._save_specials(cfname, None, sep_limit, ignore, pickle_protocol, compress, subname))
try:
numpys, scipys, ignoreds = [], [], []
for attrib, val in asides.items():
if isinstance(val, np.ndarray) and attrib not in ignore:
numpys.append(attrib)
logger.info("storing np array '%s' to %s", attrib, subname(fname, attrib))
if compress:
np.savez_compressed(subname(fname, attrib), val=np.ascontiguousarray(val))
else:
np.save(subname(fname, attrib), np.ascontiguousarray(val))
elif isinstance(val, (scipy.sparse.csr_matrix, scipy.sparse.csc_matrix)) and attrib not in ignore:
scipys.append(attrib)
logger.info("storing scipy.sparse array '%s' under %s", attrib, subname(fname, attrib))
if compress:
np.savez_compressed(
subname(fname, attrib, 'sparse'),
data=val.data,
indptr=val.indptr,
indices=val.indices
)
else:
np.save(subname(fname, attrib, 'data'), val.data)
np.save(subname(fname, attrib, 'indptr'), val.indptr)
np.save(subname(fname, attrib, 'indices'), val.indices)
data, indptr, indices = val.data, val.indptr, val.indices
val.data, val.indptr, val.indices = None, None, None
try:
# store array-less object
pickle(val, subname(fname, attrib), protocol=pickle_protocol)
finally:
val.data, val.indptr, val.indices = data, indptr, indices
else:
logger.info("not storing attribute %s", attrib)
ignoreds.append(attrib)
self.__dict__['__numpys'] = numpys
self.__dict__['__scipys'] = scipys
self.__dict__['__ignoreds'] = ignoreds
self.__dict__['__recursive_saveloads'] = recursive_saveloads
except Exception:
# restore the attributes if exception-interrupted
for attrib, val in asides.items():
setattr(self, attrib, val)
raise
return restores + [(self, asides)]
def save(
self, fname_or_handle,
separately=None, sep_limit=10 * 1024**2, ignore=frozenset(), pickle_protocol=PICKLE_PROTOCOL,
):
"""Save the object to a file.
Parameters
----------
fname_or_handle : str or file-like
Path to output file or already opened file-like object. If the object is a file handle,
no special array handling will be performed, all attributes will be saved to the same file.
separately : list of str or None, optional
If None, automatically detect large numpy/scipy.sparse arrays in the object being stored, and store
them into separate files. This prevents memory errors for large objects, and also allows
`memory-mapping <https://en.wikipedia.org/wiki/Mmap>`_ the large arrays for efficient
loading and sharing the large arrays in RAM between multiple processes.
If list of str: store these attributes into separate files. The automated size check
is not performed in this case.
sep_limit : int, optional
Don't store arrays smaller than this separately. In bytes.
ignore : frozenset of str, optional
Attributes that shouldn't be stored at all.
pickle_protocol : int, optional
Protocol number for pickle.
See Also
--------
:meth:`~gensim.utils.SaveLoad.load`
Load object from file.
"""
self.add_lifecycle_event(
"saving",
fname_or_handle=str(fname_or_handle),
separately=str(separately),
sep_limit=sep_limit,
ignore=ignore,
)
try:
_pickle.dump(self, fname_or_handle, protocol=pickle_protocol)
logger.info("saved %s object", self.__class__.__name__)
except TypeError: # `fname_or_handle` does not have write attribute
self._smart_save(fname_or_handle, separately, sep_limit, ignore, pickle_protocol=pickle_protocol)
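# Hedged round-trip sketch (the paths and the concrete subclass name are placeholders):
#   model.save('/tmp/model.gensim')                        # large arrays may land in separate .npy side files
#   model = MyModel.load('/tmp/model.gensim', mmap='r')    # memory-map those arrays read-only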
def identity(p):
"""Identity fnc, for flows that don't accept lambda (pickling etc).
Parameters
----------
p : object
Input parameter.
Returns
-------
object
Same as `p`.
"""
return p
def get_max_id(corpus):
"""Get the highest feature id that appears in the corpus.
Parameters
----------
corpus : iterable of iterable of (int, numeric)
Collection of texts in BoW format.
Returns
------
int
Highest feature id.
Notes
-----
For empty `corpus` return -1.
"""
maxid = -1
for document in corpus:
if document:
maxid = max(maxid, max(fieldid for fieldid, _ in document))
return maxid
class FakeDict:
"""Objects of this class act as dictionaries that map integer->str(integer), for a specified
range of integers <0, num_terms).
This is meant to avoid allocating real dictionaries when `num_terms` is huge, which is a waste of memory.
"""
def __init__(self, num_terms):
"""
Parameters
----------
num_terms : int
Number of terms.
"""
self.num_terms = num_terms
def __str__(self):
return "%s<num_terms=%s>" % (self.__class__.__name__, self.num_terms)
def __getitem__(self, val):
if 0 <= val < self.num_terms:
return str(val)
raise ValueError("internal id out of bounds (%s, expected <0..%s))" % (val, self.num_terms))
def __contains__(self, val):
return 0 <= val < self.num_terms
def iteritems(self):
"""Iterate over all keys and values.
Yields
------
(int, str)
Pair of (id, token).
"""
for i in range(self.num_terms):
yield i, str(i)
def keys(self):
"""Override the `dict.keys()`, which is used to determine the maximum internal id of a corpus,
i.e. the vocabulary dimensionality.
Returns
-------
list of int
Highest id, packed in list.
Notes
-----
To avoid materializing the whole `range(0, self.num_terms)`,
this returns the highest id = `[self.num_terms - 1]` only.
"""
return [self.num_terms - 1]
def __len__(self):
return self.num_terms
def get(self, val, default=None):
if 0 <= val < self.num_terms:
return str(val)
return default
def dict_from_corpus(corpus):
"""Scan corpus for all word ids that appear in it, then construct a mapping
which maps each `word_id` -> `str(word_id)`.
Parameters
----------
corpus : iterable of iterable of (int, numeric)
Collection of texts in BoW format.
Returns
------
id2word : :class:`~gensim.utils.FakeDict`
"Fake" mapping which maps each `word_id` -> `str(word_id)`.
Warnings
--------
This function is used whenever *words* need to be displayed (as opposed to just their ids)
but no `word_id` -> `word` mapping was provided. The resulting mapping only covers words actually
used in the corpus, up to the highest `word_id` found.
"""
num_terms = 1 + get_max_id(corpus)
id2word = FakeDict(num_terms)
return id2word
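# Worked example (corpus invented for illustration): the highest feature id below is 7,
# so the fake mapping covers ids 0..7.
#   corpus = [[(0, 1.0), (7, 2.0)], [(3, 1.0)]]
#   id2word = dict_from_corpus(corpus)   # FakeDict with num_terms == 8
#   id2word[7]                           # -> '7'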
def is_corpus(obj):
"""Check whether `obj` is a corpus, by peeking at its first element. Works even on streamed generators.
The peeked element is put back into an object returned by this function, so always use
that returned object instead of the original `obj`.
Parameters
----------
obj : object
An `iterable of iterable` that contains (int, numeric).
Returns
-------
(bool, object)
Pair of (is `obj` a corpus, `obj` with peeked element restored)
Examples
--------
.. sourcecode:: pycon
>>> from gensim.utils import is_corpus
>>> corpus = [[(1, 1.0)], [(2, -0.3), (3, 0.12)]]
>>> corpus_or_not, corpus = is_corpus(corpus)
Warnings
--------
An "empty" corpus (empty input sequence) is ambiguous, so in this case
the result is forcefully defined as (False, `obj`).
"""
try:
if 'Corpus' in obj.__class__.__name__: # the most common case, quick hack
return True, obj
except Exception:
pass
try:
if hasattr(obj, 'next') or hasattr(obj, '__next__'):
# the input is an iterator object, meaning once we call next()
# that element could be gone forever. we must be careful to put
# whatever we retrieve back again
doc1 = next(obj)
obj = itertools.chain([doc1], obj)
else:
doc1 = next(iter(obj)) # empty corpus is resolved to False here
if len(doc1) == 0: # sparse documents must have a __len__ function (list, tuple...)
return True, obj # the first document is empty=>assume this is a corpus
# if obj is a 1D numpy array(scalars) instead of 2-tuples, it resolves to False here
id1, val1 = next(iter(doc1))
id1, val1 = int(id1), float(val1) # must be a 2-tuple (integer, float)
except Exception:
return False, obj
return True, obj
def get_my_ip():
"""Try to obtain our external ip (from the Pyro4 nameserver's point of view)
Returns
-------
str
IP address.
Warnings
--------
This tries to sidestep the issue of bogus `/etc/hosts` entries and other local misconfiguration,
which often mess up hostname resolution.
If all else fails, fall back to simple `socket.gethostbyname()` lookup.
"""
import socket
try:
from Pyro4.naming import locateNS
# we know the nameserver must exist, so use it as our anchor point
ns = locateNS()
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((ns._pyroUri.host, ns._pyroUri.port))
result, port = s.getsockname()
except Exception:
try:
# see what ifconfig says about our default interface
# (the Python 2-only `commands` module is gone; `subprocess` is already imported above)
result = subprocess.getoutput("ifconfig").split("\n")[1].split()[1][5:]
if len(result.split('.')) != 4:
raise Exception()
except Exception:
# give up, leave the resolution to gethostbyname
result = socket.gethostbyname(socket.gethostname())
return result
class RepeatCorpus(SaveLoad):
"""Wrap a `corpus` as another corpus of length `reps`. This is achieved by repeating documents from `corpus`
over and over again, until the requested length `len(result) == reps` is reached.
Repetition is done on the fly (efficiently), via `itertools`.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.utils import RepeatCorpus
>>>
>>> corpus = [[(1, 2)], []] # 2 documents
>>> list(RepeatCorpus(corpus, 5)) # repeat 2.5 times to get 5 documents
[[(1, 2)], [], [(1, 2)], [], [(1, 2)]]
"""
def __init__(self, corpus, reps):
"""
Parameters
----------
corpus : iterable of iterable of (int, numeric)
Input corpus.
reps : int
Number of repeats for documents from corpus.
"""
self.corpus = corpus
self.reps = reps
def __iter__(self):
return itertools.islice(itertools.cycle(self.corpus), self.reps)
class RepeatCorpusNTimes(SaveLoad):
"""Wrap a `corpus` and repeat it `n` times.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.utils import RepeatCorpusNTimes
>>>
>>> corpus = [[(1, 0.5)], []]
>>> list(RepeatCorpusNTimes(corpus, 3)) # repeat 3 times
[[(1, 0.5)], [], [(1, 0.5)], [], [(1, 0.5)], []]
"""
def __init__(self, corpus, n):
"""
Parameters
----------
corpus : iterable of iterable of (int, numeric)
Input corpus.
n : int
Number of repeats for corpus.
"""
self.corpus = corpus
self.n = n
def __iter__(self):
for _ in range(self.n):
for document in self.corpus:
yield document
class ClippedCorpus(SaveLoad):
"""Wrap a `corpus` and return `max_doc` element from it."""
def __init__(self, corpus, max_docs=None):
"""
Parameters
----------
corpus : iterable of iterable of (int, numeric)
Input corpus.
max_docs : int
Maximum number of documents in the wrapped corpus.
Warnings
--------
Any documents after `max_docs` are ignored. This effectively limits the length of the returned corpus
to <= `max_docs`. Set `max_docs=None` for "no limit", effectively wrapping the entire input corpus.
"""
self.corpus = corpus
self.max_docs = max_docs
def __iter__(self):
return itertools.islice(self.corpus, self.max_docs)
def __len__(self):
return min(self.max_docs, len(self.corpus))
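# Illustrative sketch (not part of the original module): ClippedCorpus simply islices the
# wrapped corpus, so only the first `max_docs` documents are ever iterated.
#
#     >>> from gensim.utils import ClippedCorpus
#     >>> corpus = [[(0, 1.0)], [(1, 1.0)], [(2, 1.0)]]
#     >>> clipped = ClippedCorpus(corpus, max_docs=2)
#     >>> list(clipped), len(clipped)
#     ([[(0, 1.0)], [(1, 1.0)]], 2)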
class SlicedCorpus(SaveLoad):
"""Wrap `corpus` and return a slice of it."""
def __init__(self, corpus, slice_):
"""
Parameters
----------
corpus : iterable of iterable of (int, numeric)
Input corpus.
slice_ : slice or iterable
Slice for `corpus`.
Notes
-----
Negative slicing can only be used if the corpus is indexable, otherwise, the corpus will be iterated over.
Slice can also be a np.ndarray to support fancy indexing.
Calculating the size of a SlicedCorpus is expensive when using a slice as the corpus has
to be iterated over once. Using a list or np.ndarray does not have this drawback, but consumes more memory.
"""
self.corpus = corpus
self.slice_ = slice_
self.length = None
def __iter__(self):
if hasattr(self.corpus, 'index') and len(self.corpus.index) > 0:
return (self.corpus.docbyoffset(i) for i in self.corpus.index[self.slice_])
return itertools.islice(self.corpus, self.slice_.start, self.slice_.stop, self.slice_.step)
def __len__(self):
# check cached length, calculate if needed
if self.length is None:
if isinstance(self.slice_, (list, np.ndarray)):
self.length = len(self.slice_)
elif isinstance(self.slice_, slice):
(start, end, step) = self.slice_.indices(len(self.corpus.index))
diff = end - start
self.length = diff // step + (diff % step > 0)
else:
self.length = sum(1 for x in self)
return self.length
def safe_unichr(intval):
"""Create a unicode character from its integer value. In case `unichr` fails, render the character
as an escaped `\\U<8-byte hex value of intval>` string.
Parameters
----------
intval : int
Integer code of character
Returns
-------
string
Unicode string of character
"""
try:
return chr(intval)
    except ValueError:
        # ValueError: chr() arg not in range(0x110000): no such Unicode code point exists,
        # so fall back to the escaped `\U<8-byte hex>` form promised in the docstring
        return "\\U%08x" % intval
def decode_htmlentities(text):
"""Decode all HTML entities in text that are encoded as hex, decimal or named entities.
Adapted from `python-twitter-ircbot/html_decode.py
<https://github.com/sku/python-twitter-ircbot/blob/321d94e0e40d0acc92f5bf57d126b57369da70de/html_decode.py>`_.
Parameters
----------
text : str
Input HTML.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.utils import decode_htmlentities
>>>
        >>> u = u'E tu vivrai nel terrore - L&#x27;aldil&#xE0; (1981)'
        >>> print(decode_htmlentities(u).encode('UTF-8'))
        E tu vivrai nel terrore - L'aldilà (1981)
        >>> print(decode_htmlentities("l&#39;eau"))
        l'eau
        >>> print(decode_htmlentities("foo &lt; bar"))
        foo < bar
"""
def substitute_entity(match):
try:
ent = match.group(3)
if match.group(1) == "#":
# decoding by number
if match.group(2) == '':
# number is in decimal
return safe_unichr(int(ent))
elif match.group(2) in ['x', 'X']:
# number is in hex
return safe_unichr(int(ent, 16))
else:
# they were using a name
cp = n2cp.get(ent)
if cp:
return safe_unichr(cp)
else:
return match.group()
except Exception:
# in case of errors, return original input
return match.group()
return RE_HTML_ENTITY.sub(substitute_entity, text)
def chunkize_serial(iterable, chunksize, as_numpy=False, dtype=np.float32):
"""Yield elements from `iterable` in "chunksize"-ed groups.
The last returned element may be smaller if the length of collection is not divisible by `chunksize`.
Parameters
----------
iterable : iterable of object
An iterable.
chunksize : int
Split iterable into chunks of this size.
as_numpy : bool, optional
Yield chunks as `np.ndarray` instead of lists.
Yields
------
list OR np.ndarray
"chunksize"-ed chunks of elements from `iterable`.
Examples
--------
.. sourcecode:: pycon
>>> print(list(grouper(range(10), 3)))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
"""
it = iter(iterable)
while True:
if as_numpy:
# convert each document to a 2d numpy array (~6x faster when transmitting
# chunk data over the wire, in Pyro)
wrapped_chunk = [[np.array(doc, dtype=dtype) for doc in itertools.islice(it, int(chunksize))]]
else:
wrapped_chunk = [list(itertools.islice(it, int(chunksize)))]
if not wrapped_chunk[0]:
break
# memory opt: wrap the chunk and then pop(), to avoid leaving behind a dangling reference
yield wrapped_chunk.pop()
grouper = chunkize_serial
class InputQueue(multiprocessing.Process):
"""Populate a queue of input chunks from a streamed corpus.
Useful for reading and chunking corpora in the background, in a separate process,
so that workers that use the queue are not starved for input chunks.
"""
def __init__(self, q, corpus, chunksize, maxsize, as_numpy):
"""
Parameters
----------
q : multiprocessing.Queue
Enqueue chunks into this queue.
corpus : iterable of iterable of (int, numeric)
Corpus to read and split into "chunksize"-ed groups
chunksize : int
Split `corpus` into chunks of this size.
as_numpy : bool, optional
Enqueue chunks as `numpy.ndarray` instead of lists.
"""
super(InputQueue, self).__init__()
self.q = q
self.maxsize = maxsize
self.corpus = corpus
self.chunksize = chunksize
self.as_numpy = as_numpy
def run(self):
it = iter(self.corpus)
while True:
chunk = itertools.islice(it, self.chunksize)
if self.as_numpy:
# HACK XXX convert documents to numpy arrays, to save memory.
# This also gives a scipy warning at runtime:
# "UserWarning: indices array has non-integer dtype (float64)"
wrapped_chunk = [[np.asarray(doc) for doc in chunk]]
else:
wrapped_chunk = [list(chunk)]
if not wrapped_chunk[0]:
self.q.put(None, block=True)
break
try:
qsize = self.q.qsize()
except NotImplementedError:
qsize = '?'
logger.debug("prepared another chunk of %i documents (qsize=%s)", len(wrapped_chunk[0]), qsize)
self.q.put(wrapped_chunk.pop(), block=True)
# Multiprocessing on Windows (and on OSX with python3.8+) uses "spawn" mode, which
# causes issues with pickling.
# So for these two platforms, use simpler serial processing in `chunkize`.
# See https://github.com/RaRe-Technologies/gensim/pull/2800#discussion_r410890171
if os.name == 'nt' or (sys.platform == "darwin" and sys.version_info >= (3, 8)):
def chunkize(corpus, chunksize, maxsize=0, as_numpy=False):
"""Split `corpus` into fixed-sized chunks, using :func:`~gensim.utils.chunkize_serial`.
Parameters
----------
corpus : iterable of object
An iterable.
chunksize : int
Split `corpus` into chunks of this size.
maxsize : int, optional
Ignored. For interface compatibility only.
as_numpy : bool, optional
Yield chunks as `np.ndarray` s instead of lists?
Yields
------
list OR np.ndarray
"chunksize"-ed chunks of elements from `corpus`.
"""
if maxsize > 0:
entity = "Windows" if os.name == 'nt' else "OSX with python3.8+"
warnings.warn("detected %s; aliasing chunkize to chunkize_serial" % entity)
for chunk in chunkize_serial(corpus, chunksize, as_numpy=as_numpy):
yield chunk
else:
def chunkize(corpus, chunksize, maxsize=0, as_numpy=False):
"""Split `corpus` into fixed-sized chunks, using :func:`~gensim.utils.chunkize_serial`.
Parameters
----------
corpus : iterable of object
An iterable.
chunksize : int
Split `corpus` into chunks of this size.
maxsize : int, optional
If > 0, prepare chunks in a background process, filling a chunk queue of size at most `maxsize`.
as_numpy : bool, optional
Yield chunks as `np.ndarray` instead of lists?
Yields
------
list OR np.ndarray
"chunksize"-ed chunks of elements from `corpus`.
Notes
-----
Each chunk is of length `chunksize`, except the last one which may be smaller.
A once-only input stream (`corpus` from a generator) is ok, chunking is done efficiently via itertools.
If `maxsize > 0`, don't wait idly in between successive chunk `yields`, but rather keep filling a short queue
(of size at most `maxsize`) with forthcoming chunks in advance. This is realized by starting a separate process,
and is meant to reduce I/O delays, which can be significant when `corpus` comes from a slow medium
like HDD, database or network.
        If `maxsize == 0`, don't fool around with parallelism and simply yield chunks one after
        another via :func:`~gensim.utils.chunkize_serial` (no I/O optimizations).
"""
assert chunksize > 0
if maxsize > 0:
q = multiprocessing.Queue(maxsize=maxsize)
worker = InputQueue(q, corpus, chunksize, maxsize=maxsize, as_numpy=as_numpy)
worker.daemon = True
worker.start()
while True:
chunk = [q.get(block=True)]
if chunk[0] is None:
break
yield chunk.pop()
else:
for chunk in chunkize_serial(corpus, chunksize, as_numpy=as_numpy):
yield chunk
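# Illustrative sketch (not part of the original module): with `maxsize > 0` (on platforms that
# support it) chunks are prepared by a background InputQueue process; with `maxsize == 0` the
# call degrades to plain chunkize_serial everywhere.
#
#     >>> from gensim.utils import chunkize
#     >>> list(chunkize(range(7), chunksize=3, maxsize=0))
#     [[0, 1, 2], [3, 4, 5], [6]]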
def smart_extension(fname, ext):
"""Append a file extension `ext` to `fname`, while keeping compressed extensions like `.bz2` or
`.gz` (if any) at the end.
Parameters
----------
fname : str
Filename or full path.
ext : str
Extension to append before any compression extensions.
Returns
-------
str
New path to file with `ext` appended.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.utils import smart_extension
>>> smart_extension("my_file.pkl.gz", ".vectors")
'my_file.pkl.vectors.gz'
"""
fname, oext = os.path.splitext(fname)
if oext.endswith('.bz2'):
fname = fname + oext[:-4] + ext + '.bz2'
elif oext.endswith('.gz'):
fname = fname + oext[:-3] + ext + '.gz'
else:
fname = fname + oext + ext
return fname
def pickle(obj, fname, protocol=PICKLE_PROTOCOL):
"""Pickle object `obj` to file `fname`, using smart_open so that `fname` can be on S3, HDFS, compressed etc.
Parameters
----------
obj : object
Any python object.
fname : str
Path to pickle file.
protocol : int, optional
Pickle protocol number.
"""
with open(fname, 'wb') as fout: # 'b' for binary, needed on Windows
_pickle.dump(obj, fout, protocol=protocol)
def unpickle(fname):
"""Load object from `fname`, using smart_open so that `fname` can be on S3, HDFS, compressed etc.
Parameters
----------
fname : str
Path to pickle file.
Returns
-------
object
Python object loaded from `fname`.
"""
with open(fname, 'rb') as f:
return _pickle.load(f, encoding='latin1') # needed because loading from S3 doesn't support readline()
def revdict(d):
"""Reverse a dictionary mapping, i.e. `{1: 2, 3: 4}` -> `{2: 1, 4: 3}`.
Parameters
----------
d : dict
Input dictionary.
Returns
-------
dict
Reversed dictionary mapping.
Notes
-----
When two keys map to the same value, only one of them will be kept in the result (which one is kept is arbitrary).
Examples
--------
.. sourcecode:: pycon
>>> from gensim.utils import revdict
>>> d = {1: 2, 3: 4}
>>> revdict(d)
{2: 1, 4: 3}
"""
return {v: k for (k, v) in dict(d).items()}
def deprecated(reason):
"""Decorator to mark functions as deprecated.
Calling a decorated function will result in a warning being emitted, using warnings.warn.
Adapted from https://stackoverflow.com/a/40301488/8001386.
Parameters
----------
reason : str
Reason of deprecation.
Returns
-------
function
Decorated function
"""
if isinstance(reason, str):
def decorator(func):
fmt = "Call to deprecated `{name}` ({reason})."
@wraps(func)
def new_func1(*args, **kwargs):
warnings.warn(
fmt.format(name=func.__name__, reason=reason),
category=DeprecationWarning,
stacklevel=2
)
return func(*args, **kwargs)
return new_func1
return decorator
elif inspect.isclass(reason) or inspect.isfunction(reason):
func = reason
fmt = "Call to deprecated `{name}`."
@wraps(func)
def new_func2(*args, **kwargs):
warnings.warn(
fmt.format(name=func.__name__),
category=DeprecationWarning,
stacklevel=2
)
return func(*args, **kwargs)
return new_func2
else:
raise TypeError(repr(type(reason)))
@contextmanager
def ignore_deprecation_warning():
"""Contextmanager for ignoring DeprecationWarning."""
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=DeprecationWarning)
yield
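# Illustrative sketch (not part of the original module; the function names below are made up
# for the example): `deprecated` wraps a callable so that calling it emits a DeprecationWarning,
# and `ignore_deprecation_warning` silences that warning inside a `with` block.
#
#     >>> from gensim.utils import deprecated, ignore_deprecation_warning
#     >>>
#     >>> @deprecated("use new_add() instead")
#     ... def old_add(a, b):
#     ...     return a + b
#     >>> with ignore_deprecation_warning():
#     ...     old_add(1, 2)  # no DeprecationWarning surfaces inside this block
#     3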
@deprecated("Function will be removed in 4.0.0")
def toptexts(query, texts, index, n=10):
"""Debug fnc to help inspect the top `n` most similar documents (according to a similarity index `index`),
to see if they are actually related to the query.
Parameters
----------
query : {list of (int, number), numpy.ndarray}
vector OR BoW (list of tuples)
    texts : object
object that can return something insightful for each document via `texts[docid]`,
such as its fulltext or snippet.
index : any
        An instance from :mod:`gensim.similarities.docsim`.
    Returns
    -------
list
a list of 3-tuples (docid, doc's similarity to the query, texts[docid])
"""
sims = index[query] # perform a similarity query against the corpus
sims = sorted(enumerate(sims), key=lambda item: -item[1])
return [(topid, topcosine, texts[topid]) for topid, topcosine in sims[:n]] # only consider top-n most similar docs
def randfname(prefix='gensim'):
"""Generate a random filename in temp.
Parameters
----------
prefix : str
Prefix of filename.
Returns
-------
str
Full path in the in system's temporary folder, ending in a random filename.
"""
randpart = hex(random.randint(0, 0xffffff))[2:]
return os.path.join(tempfile.gettempdir(), prefix + randpart)
@deprecated("Function will be removed in 4.0.0")
def upload_chunked(server, docs, chunksize=1000, preprocess=None):
"""Memory-friendly upload of documents to a SimServer (or Pyro SimServer proxy).
Notes
-----
    Use this function to train or index large collections -- avoid sending the
entire corpus over the wire as a single Pyro in-memory object. The documents
will be sent in smaller chunks, of `chunksize` documents each.
"""
start = 0
for chunk in grouper(docs, chunksize):
end = start + len(chunk)
logger.info("uploading documents %i-%i", start, end - 1)
if preprocess is not None:
pchunk = []
for doc in chunk:
doc['tokens'] = preprocess(doc['text'])
del doc['text']
pchunk.append(doc)
chunk = pchunk
server.buffer(chunk)
start = end
def getNS(host=None, port=None, broadcast=True, hmac_key=None):
"""Get a Pyro4 name server proxy.
Parameters
----------
host : str, optional
Name server hostname.
port : int, optional
Name server port.
broadcast : bool, optional
Use broadcast mechanism? (i.e. reach out to all Pyro nodes in the network)
hmac_key : str, optional
Private key.
Raises
------
RuntimeError
When Pyro name server is not found.
Returns
-------
:class:`Pyro4.core.Proxy`
Proxy from Pyro4.
"""
import Pyro4
try:
return Pyro4.locateNS(host, port, broadcast, hmac_key)
except Pyro4.errors.NamingError:
raise RuntimeError("Pyro name server not found")
def pyro_daemon(name, obj, random_suffix=False, ip=None, port=None, ns_conf=None):
"""Register an object with the Pyro name server.
Start the name server if not running yet and block until the daemon is terminated.
The object is registered under `name`, or `name`+ some random suffix if `random_suffix` is set.
"""
if ns_conf is None:
ns_conf = {}
if random_suffix:
name += '.' + hex(random.randint(0, 0xffffff))[2:]
import Pyro4
with getNS(**ns_conf) as ns:
with Pyro4.Daemon(ip or get_my_ip(), port or 0) as daemon:
# register server for remote access
uri = daemon.register(obj, name)
ns.remove(name)
ns.register(name, uri)
logger.info("%s registered with nameserver (URI '%s')", name, uri)
daemon.requestLoop()
def mock_data_row(dim=1000, prob_nnz=0.5, lam=1.0):
"""Create a random gensim BoW vector, with the feature counts following the Poisson distribution.
Parameters
----------
dim : int, optional
Dimension of vector.
prob_nnz : float, optional
        Probability that each coordinate will be nonzero; nonzero values are drawn from the Poisson distribution.
lam : float, optional
Lambda parameter for the Poisson distribution.
Returns
-------
list of (int, float)
Vector in BoW format.
"""
nnz = np.random.uniform(size=(dim,))
return [(i, float(np.random.poisson(lam=lam) + 1.0)) for i in range(dim) if nnz[i] < prob_nnz]
def mock_data(n_items=1000, dim=1000, prob_nnz=0.5, lam=1.0):
"""Create a random Gensim-style corpus (BoW), using :func:`~gensim.utils.mock_data_row`.
Parameters
----------
n_items : int
Size of corpus
dim : int
Dimension of vector, used for :func:`~gensim.utils.mock_data_row`.
prob_nnz : float, optional
        Probability that each coordinate will be nonzero; nonzero values are drawn from the Poisson distribution,
used for :func:`~gensim.utils.mock_data_row`.
lam : float, optional
Parameter for Poisson distribution, used for :func:`~gensim.utils.mock_data_row`.
Returns
-------
list of list of (int, float)
Gensim-style corpus.
"""
return [mock_data_row(dim=dim, prob_nnz=prob_nnz, lam=lam) for _ in range(n_items)]
def prune_vocab(vocab, min_reduce, trim_rule=None):
"""Remove all entries from the `vocab` dictionary with count smaller than `min_reduce`.
Modifies `vocab` in place, returns the sum of all counts that were pruned.
Parameters
----------
vocab : dict
Input dictionary.
min_reduce : int
Frequency threshold for tokens in `vocab`.
trim_rule : function, optional
Function for trimming entities from vocab, default behaviour is `vocab[w] <= min_reduce`.
Returns
-------
result : int
Sum of all counts that were pruned.
"""
result = 0
old_len = len(vocab)
for w in list(vocab): # make a copy of dict's keys
if not keep_vocab_item(w, vocab[w], min_reduce, trim_rule): # vocab[w] <= min_reduce:
result += vocab[w]
del vocab[w]
logger.info(
"pruned out %i tokens with count <=%i (before %i, after %i)",
old_len - len(vocab), min_reduce, old_len, len(vocab)
)
return result
def trim_vocab_by_freq(vocab, topk, trim_rule=None):
"""Retain `topk` most frequent words in `vocab`.
If there are more words with the same frequency as `topk`-th one, they will be kept.
Modifies `vocab` in place, returns nothing.
Parameters
----------
vocab : dict
Input dictionary.
topk : int
Number of words with highest frequencies to keep.
trim_rule : function, optional
Function for trimming entities from vocab, default behaviour is `vocab[w] <= min_count`.
"""
if topk >= len(vocab):
return
min_count = heapq.nlargest(topk, vocab.values())[-1]
prune_vocab(vocab, min_count, trim_rule=trim_rule)
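# Illustrative sketch (not part of the original module): ties at the `topk`-th highest frequency
# are kept, so the pruned vocabulary may end up slightly larger than `topk`.
#
#     >>> from gensim.utils import trim_vocab_by_freq
#     >>> vocab = {'a': 10, 'b': 5, 'c': 5, 'd': 1}
#     >>> trim_vocab_by_freq(vocab, topk=2)
#     >>> sorted(vocab)
#     ['a', 'b', 'c']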
def merge_counts(dict1, dict2):
"""Merge `dict1` of (word, freq1) and `dict2` of (word, freq2) into `dict1` of (word, freq1+freq2).
Parameters
----------
dict1 : dict of (str, int)
First dictionary.
dict2 : dict of (str, int)
Second dictionary.
Returns
-------
result : dict
Merged dictionary with sum of frequencies as values.
"""
for word, freq in dict2.items():
if word in dict1:
dict1[word] += freq
else:
dict1[word] = freq
return dict1
def qsize(queue):
"""Get the (approximate) queue size where available.
Parameters
----------
queue : :class:`queue.Queue`
Input queue.
Returns
-------
int
Queue size, -1 if `qsize` method isn't implemented (OS X).
"""
try:
return queue.qsize()
except NotImplementedError:
# OS X doesn't support qsize
return -1
RULE_DEFAULT = 0
RULE_DISCARD = 1
RULE_KEEP = 2
def keep_vocab_item(word, count, min_count, trim_rule=None):
"""Should we keep `word` in the vocab or remove it?
Parameters
----------
word : str
Input word.
count : int
Number of times that word appeared in a corpus.
min_count : int
Discard words with frequency smaller than this.
trim_rule : function, optional
Custom function to decide whether to keep or discard this word.
If a custom `trim_rule` is not specified, the default behaviour is simply `count >= min_count`.
Returns
-------
bool
True if `word` should stay, False otherwise.
"""
default_res = count >= min_count
if trim_rule is None:
return default_res
else:
rule_res = trim_rule(word, count, min_count)
if rule_res == RULE_KEEP:
return True
elif rule_res == RULE_DISCARD:
return False
else:
return default_res
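# Illustrative sketch (not part of the original module; `keep_stopwords` is a hypothetical helper):
# a custom `trim_rule` overrides the default `count >= min_count` check by returning RULE_KEEP,
# RULE_DISCARD or RULE_DEFAULT.
#
#     >>> from gensim.utils import keep_vocab_item, RULE_KEEP, RULE_DEFAULT
#     >>> def keep_stopwords(word, count, min_count):
#     ...     return RULE_KEEP if word in ('the', 'a') else RULE_DEFAULT
#     >>> keep_vocab_item('the', count=1, min_count=5, trim_rule=keep_stopwords)
#     True
#     >>> keep_vocab_item('rare', count=1, min_count=5, trim_rule=keep_stopwords)
#     False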
def check_output(stdout=subprocess.PIPE, *popenargs, **kwargs):
r"""Run OS command with the given arguments and return its output as a byte string.
Backported from Python 2.7 with a few minor modifications. Used in word2vec/glove2word2vec tests.
    Behaves very similarly to https://docs.python.org/2/library/subprocess.html#subprocess.check_output.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.utils import check_output
>>> check_output(args=['echo', '1'])
'1\n'
Raises
------
KeyboardInterrupt
If Ctrl+C pressed.
"""
try:
logger.debug("COMMAND: %s %s", popenargs, kwargs)
process = subprocess.Popen(stdout=stdout, *popenargs, **kwargs)
output, unused_err = process.communicate()
retcode = process.poll()
if retcode:
cmd = kwargs.get("args")
if cmd is None:
cmd = popenargs[0]
error = subprocess.CalledProcessError(retcode, cmd)
error.output = output
raise error
return output
except KeyboardInterrupt:
process.terminate()
raise
def sample_dict(d, n=10, use_random=True):
"""Selected `n` (possibly random) items from the dictionary `d`.
Parameters
----------
d : dict
Input dictionary.
n : int, optional
Number of items to select.
use_random : bool, optional
Select items randomly (without replacement), instead of by the natural dict iteration order?
Returns
-------
list of (object, object)
Selected items from dictionary, as a list.
"""
selected_keys = random.sample(list(d), min(len(d), n)) if use_random else itertools.islice(d.keys(), n)
return [(key, d[key]) for key in selected_keys]
def strided_windows(ndarray, window_size):
"""Produce a numpy.ndarray of windows, as from a sliding window.
Parameters
----------
ndarray : numpy.ndarray
Input array
window_size : int
Sliding window size.
Returns
-------
numpy.ndarray
Subsequences produced by sliding a window of the given size over the `ndarray`.
Since this uses striding, the individual arrays are views rather than copies of `ndarray`.
Changes to one view modifies the others and the original.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.utils import strided_windows
>>> strided_windows(np.arange(5), 2)
array([[0, 1],
[1, 2],
[2, 3],
[3, 4]])
>>> strided_windows(np.arange(10), 5)
array([[0, 1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7],
[4, 5, 6, 7, 8],
[5, 6, 7, 8, 9]])
"""
ndarray = np.asarray(ndarray)
if window_size == ndarray.shape[0]:
return np.array([ndarray])
elif window_size > ndarray.shape[0]:
return np.ndarray((0, 0))
stride = ndarray.strides[0]
return np.lib.stride_tricks.as_strided(
ndarray, shape=(ndarray.shape[0] - window_size + 1, window_size),
strides=(stride, stride))
def iter_windows(texts, window_size, copy=False, ignore_below_size=True, include_doc_num=False):
"""Produce a generator over the given texts using a sliding window of `window_size`.
The windows produced are views of some subsequence of a text.
To use deep copies instead, pass `copy=True`.
Parameters
----------
texts : list of str
List of string sentences.
window_size : int
Size of sliding window.
copy : bool, optional
Produce deep copies.
ignore_below_size : bool, optional
Ignore documents that are not at least `window_size` in length?
include_doc_num : bool, optional
Yield the text position with `texts` along with each window?
"""
for doc_num, document in enumerate(texts):
for window in _iter_windows(document, window_size, copy, ignore_below_size):
if include_doc_num:
yield (doc_num, window)
else:
yield window
def _iter_windows(document, window_size, copy=False, ignore_below_size=True):
doc_windows = strided_windows(document, window_size)
if doc_windows.shape[0] == 0:
if not ignore_below_size:
yield document.copy() if copy else document
else:
for doc_window in doc_windows:
yield doc_window.copy() if copy else doc_window
def flatten(nested_list):
"""Recursively flatten a nested sequence of elements.
Parameters
----------
nested_list : iterable
Possibly nested sequence of elements to flatten.
Returns
-------
list
Flattened version of `nested_list` where any elements that are an iterable (`collections.abc.Iterable`)
have been unpacked into the top-level list, in a recursive fashion.
"""
return list(lazy_flatten(nested_list))
def lazy_flatten(nested_list):
"""Lazy version of :func:`~gensim.utils.flatten`.
Parameters
----------
nested_list : list
Possibly nested list.
Yields
------
object
Element of list
"""
for el in nested_list:
if isinstance(el, collections.abc.Iterable) and not isinstance(el, str):
for sub in flatten(el):
yield sub
else:
yield el
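# Illustrative sketch (not part of the original module): strings are treated as atoms,
# all other iterables are unpacked recursively.
#
#     >>> from gensim.utils import flatten
#     >>> flatten([1, [2, [3, 'abc']], (4,)])
#     [1, 2, 3, 'abc', 4]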
def save_as_line_sentence(corpus, filename):
"""Save the corpus in LineSentence format, i.e. each sentence on a separate line,
tokens are separated by space.
Parameters
----------
corpus : iterable of iterables of strings
"""
    with open(filename, mode='wb') as fout:
        for sentence in corpus:
            line = any2unicode(' '.join(sentence) + '\n')
            fout.write(line.encode('utf8'))
def effective_n_jobs(n_jobs):
"""Determines the number of jobs can run in parallel.
Just like in sklearn, passing n_jobs=-1 means using all available
CPU cores.
Parameters
----------
n_jobs : int
Number of workers requested by caller.
Returns
-------
int
Number of effective jobs.
"""
if n_jobs == 0:
raise ValueError('n_jobs == 0 in Parallel has no meaning')
elif n_jobs is None:
return 1
elif n_jobs < 0:
n_jobs = max(multiprocessing.cpu_count() + 1 + n_jobs, 1)
return n_jobs
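# Illustrative sketch (not part of the original module): `None` maps to a single worker and
# `-1` maps to all available CPU cores.
#
#     >>> import multiprocessing
#     >>> from gensim.utils import effective_n_jobs
#     >>> effective_n_jobs(None), effective_n_jobs(-1) == multiprocessing.cpu_count()
#     (1, True)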
def is_empty(corpus):
"""Is the corpus (an iterable or a scipy.sparse array) empty?"""
if scipy.sparse.issparse(corpus):
return corpus.shape[1] == 0 # by convention, scipy.sparse documents are columns
if isinstance(corpus, types.GeneratorType):
return False # don't try to guess emptiness of generators, may lose elements irretrievably
try:
# list, numpy array etc
first_doc = next(iter(corpus)) # noqa: F841 (ignore unused variable)
return False # first document exists => not empty
except StopIteration:
return True
except Exception:
return False
# ===== file: piskvorky_gensim/gensim/_matutils.pyx =====
#!/usr/bin/env cython
# coding: utf-8
# cython: embedsignature=True
# cython: language_level=3
from __future__ import division
cimport cython
import numpy as np
cimport numpy as np
ctypedef cython.floating DTYPE_t
from libc.math cimport log, exp, fabs
from cython.parallel import prange
def mean_absolute_difference(a, b):
"""Mean absolute difference between two arrays, using :func:`~gensim._matutils._mean_absolute_difference`.
Parameters
----------
a : numpy.ndarray
Input 1d array, supports float16, float32 and float64.
b : numpy.ndarray
Input 1d array, supports float16, float32 and float64.
Returns
-------
float
mean(abs(a - b)).
"""
if a.shape != b.shape:
raise ValueError("a and b must have same shape")
if a.dtype == np.float64:
return _mean_absolute_difference[double](a, b)
elif a.dtype == np.float32:
return _mean_absolute_difference[float](a, b)
elif a.dtype == np.float16:
return _mean_absolute_difference[float](a.astype(np.float32), b.astype(np.float32))
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cdef DTYPE_t _mean_absolute_difference(DTYPE_t[:] a, DTYPE_t[:] b) nogil:
"""Mean absolute difference between two arrays.
Parameters
----------
a : numpy.ndarray
Input 1d array.
b : numpy.ndarray
Input 1d array.
Returns
-------
DTYPE_t
mean(abs(a - b))
"""
cdef DTYPE_t result = 0.0
cdef size_t i
cdef size_t j
cdef size_t I = a.shape[0]
cdef size_t N = I
for i in range(I):
result += fabs(a[i] - b[i])
result /= N
return result
def logsumexp(x):
"""Log of sum of exponentials, using :func:`~gensim._matutils._logsumexp_2d`.
Parameters
----------
x : numpy.ndarray
Input 2d matrix, supports float16, float32 and float64.
Returns
-------
float
log of sum of exponentials of elements in `x`.
Warnings
--------
    For performance reasons, this doesn't support NaNs or 1d, 3d, etc. arrays, unlike :func:`scipy.special.logsumexp`.
"""
if x.dtype == np.float64:
return _logsumexp_2d[double](x)
elif x.dtype == np.float32:
return _logsumexp_2d[float](x)
elif x.dtype == np.float16:
return _logsumexp_2d[float](x.astype(np.float32))
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cdef DTYPE_t _logsumexp_2d(DTYPE_t[:, :] data) nogil:
"""Log of sum of exponentials.
Parameters
----------
    data : numpy.ndarray
Input 2d matrix.
Returns
-------
DTYPE_t
log of sum of exponentials of elements in `data`.
"""
cdef DTYPE_t max_val = data[0, 0]
cdef DTYPE_t result = 0.0
cdef size_t i
cdef size_t j
cdef size_t I = data.shape[0]
cdef size_t J = data.shape[1]
for i in range(I):
for j in range(J):
if data[i, j] > max_val:
max_val = data[i, j]
for i in range(I):
for j in range(J):
result += exp(data[i, j] - max_val)
result = log(result) + max_val
return result
def dirichlet_expectation(alpha):
"""Expected value of log(theta) where theta is drawn from a Dirichlet distribution.
Using :func:`~gensim._matutils.dirichlet_expectation_1d` or :func:`~gensim._matutils.dirichlet_expectation_2d`.
Parameters
----------
alpha : numpy.ndarray
Dirichlet parameter 2d matrix or 1d vector, if 2d - each row is treated as a separate parameter vector,
supports float16, float32 and float64.
Returns
-------
numpy.ndarray
Log of expected values, dimension same as `alpha.ndim`.
"""
if alpha.ndim == 2:
return dirichlet_expectation_2d(alpha)
else:
return dirichlet_expectation_1d(alpha)
def dirichlet_expectation_2d(alpha):
"""Expected value of log(theta) where theta is drawn from a Dirichlet distribution.
Using :func:`~gensim._matutils._dirichlet_expectation_2d`.
Parameters
----------
alpha : numpy.ndarray
Dirichlet parameter 2d matrix, each row is treated as a separate parameter vector,
supports float16, float32 and float64.
Returns
-------
numpy.ndarray
Log of expected values, 2d matrix.
"""
if alpha.dtype == np.float64:
out = np.zeros(alpha.shape, dtype=alpha.dtype)
_dirichlet_expectation_2d[double](alpha, out)
elif alpha.dtype == np.float32:
out = np.zeros(alpha.shape, dtype=alpha.dtype)
_dirichlet_expectation_2d[float](alpha, out)
elif alpha.dtype == np.float16:
out = np.zeros(alpha.shape, dtype=np.float32)
_dirichlet_expectation_2d[float](alpha.astype(np.float32), out)
out = out.astype(np.float16)
return out
def dirichlet_expectation_1d(alpha):
"""Expected value of log(theta) where theta is drawn from a Dirichlet distribution.
Using :func:`~gensim._matutils._dirichlet_expectation_1d`.
Parameters
----------
alpha : numpy.ndarray
Dirichlet parameter 1d vector, supports float16, float32 and float64.
Returns
-------
numpy.ndarray
Log of expected values, 1d vector.
"""
if alpha.dtype == np.float64:
out = np.zeros(alpha.shape, dtype=alpha.dtype)
_dirichlet_expectation_1d[double](alpha, out)
elif alpha.dtype == np.float32:
out = np.zeros(alpha.shape, dtype=alpha.dtype)
_dirichlet_expectation_1d[float](alpha, out)
elif alpha.dtype == np.float16:
out = np.zeros(alpha.shape, dtype=np.float32)
_dirichlet_expectation_1d[float](alpha.astype(np.float32), out)
out = out.astype(np.float16)
return out
@cython.boundscheck(False)
@cython.wraparound(False)
cdef void _dirichlet_expectation_1d(DTYPE_t[:] alpha, DTYPE_t[:] out) nogil:
"""Expected value of log(theta) where theta is drawn from a Dirichlet distribution.
Parameters
----------
alpha : numpy.ndarray
Dirichlet parameter 1d vector.
out : numpy.ndarray
Output array, contains log of expected values.
"""
cdef DTYPE_t sum_alpha = 0.0
cdef DTYPE_t psi_sum_alpha = 0.0
cdef size_t i
cdef size_t I = alpha.shape[0]
for i in range(I):
sum_alpha += alpha[i]
psi_sum_alpha = _digamma(sum_alpha)
for i in range(I):
out[i] = _digamma(alpha[i]) - psi_sum_alpha
@cython.boundscheck(False)
@cython.wraparound(False)
cdef void _dirichlet_expectation_2d(DTYPE_t[:, :] alpha, DTYPE_t[:, :] out) nogil:
"""Expected value of log(theta) where theta is drawn from a Dirichlet distribution.
Parameters
----------
alpha : numpy.ndarray
Dirichlet parameter matrix, each row is treated as a parameter vector for its own Dirichlet.
out : numpy.ndarray
Log of expected values, 2d matrix.
"""
cdef DTYPE_t sum_alpha = 0.0
cdef DTYPE_t psi_sum_alpha = 0.0
cdef size_t i, j
cdef size_t I = alpha.shape[0]
cdef size_t J = alpha.shape[1]
for i in range(I):
sum_alpha = 0.0
for j in range(J):
sum_alpha += alpha[i, j]
psi_sum_alpha = _digamma(sum_alpha)
for j in range(J):
out[i, j] = _digamma(alpha[i, j]) - psi_sum_alpha
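# Illustrative note (not part of the original module): both helpers implement the standard identity
# E[log theta_i] = psi(alpha_i) - psi(sum_j alpha_j) for theta ~ Dirichlet(alpha); the pure-Python
# fallback in gensim.matutils computes the same quantity with scipy's psi, e.g. for the 1d case:
#
#     >>> import numpy as np
#     >>> from scipy.special import psi
#     >>> alpha = np.array([0.5, 1.0, 2.0])
#     >>> reference = psi(alpha) - psi(alpha.sum())  # should match dirichlet_expectation_1d(alpha)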
def digamma(DTYPE_t x):
"""Digamma function for positive floats, using :func:`~gensim._matutils._digamma`.
Parameters
----------
x : float
Positive value.
Returns
-------
float
Digamma(x).
"""
return _digamma(x)
@cython.cdivision(True)
cdef inline DTYPE_t _digamma(DTYPE_t x,) nogil:
"""Digamma function for positive floats.
Parameters
----------
x : float
Positive value.
Notes
-----
Adapted from:
* Authors:
* Original FORTRAN77 version by Jose Bernardo.
* C version by John Burkardt.
* Reference: Jose Bernardo, Algorithm AS 103: Psi (Digamma) Function,
Applied Statistics, Volume 25, Number 3, 1976, pages 315-317.
* Licensing: This code is distributed under the GNU LGPL license.
Returns
-------
float
Digamma(x).
"""
cdef DTYPE_t c = 8.5;
cdef DTYPE_t euler_mascheroni = 0.57721566490153286060;
cdef DTYPE_t r;
cdef DTYPE_t value;
cdef DTYPE_t x2;
if ( x <= 0.000001 ):
value = - euler_mascheroni - 1.0 / x + 1.6449340668482264365 * x;
return value;
# Reduce to DIGAMA(X + N).
value = 0.0;
x2 = x;
while ( x2 < c ):
value = value - 1.0 / x2;
x2 = x2 + 1.0;
# Use Stirling's (actually de Moivre's) expansion.
r = 1.0 / x2;
value = value + log ( x2 ) - 0.5 * r;
r = r * r;
value = value \
- r * ( 1.0 / 12.0 \
- r * ( 1.0 / 120.0 \
- r * ( 1.0 / 252.0 \
- r * ( 1.0 / 240.0 \
- r * ( 1.0 / 132.0 ) ) ) ) )
return value;
# ===== file: piskvorky_gensim/gensim/interfaces.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""Basic interfaces used across the whole Gensim package.
These interfaces are used for building corpora, model transformation and similarity queries.
The interfaces are realized as abstract base classes. This means some functionality is already
provided in the interface itself, and subclasses should inherit from these interfaces
and implement the missing methods.
"""
import logging
from gensim import utils, matutils
logger = logging.getLogger(__name__)
class CorpusABC(utils.SaveLoad):
"""Interface for corpus classes from :mod:`gensim.corpora`.
    A corpus is simply an iterable object, where each iteration step yields one document:
.. sourcecode:: pycon
>>> from gensim.corpora import MmCorpus # inherits from the CorpusABC class
>>> from gensim.test.utils import datapath
>>>
>>> corpus = MmCorpus(datapath("testcorpus.mm"))
>>> for doc in corpus:
... pass # do something with the doc...
    Each document is represented in the bag-of-words (BoW) format, i.e. as a list of (attr_id, attr_value) pairs,
    like ``[(1, 0.2), (4, 0.6), ...]``.
.. sourcecode:: pycon
>>> from gensim.corpora import MmCorpus # inherits from the CorpusABC class
>>> from gensim.test.utils import datapath
>>>
>>> corpus = MmCorpus(datapath("testcorpus.mm"))
>>> doc = next(iter(corpus))
>>> print(doc)
[(0, 1.0), (1, 1.0), (2, 1.0)]
Remember that the save/load methods only pickle the corpus object, not
the (streamed) corpus data itself!
    To save the corpus data, please use this pattern:
.. sourcecode:: pycon
>>> from gensim.corpora import MmCorpus # MmCorpus inherits from CorpusABC
>>> from gensim.test.utils import datapath, get_tmpfile
>>>
>>> corpus = MmCorpus(datapath("testcorpus.mm"))
>>> tmp_path = get_tmpfile("temp_corpus.mm")
>>>
>>> MmCorpus.serialize(tmp_path, corpus) # serialize corpus to disk in the MmCorpus format
>>> loaded_corpus = MmCorpus(tmp_path) # load corpus through constructor
>>> for (doc_1, doc_2) in zip(corpus, loaded_corpus):
... assert doc_1 == doc_2 # no change between the original and loaded corpus
See Also
--------
:mod:`gensim.corpora`
Corpora in different formats.
"""
def __iter__(self):
"""Iterate all over corpus."""
raise NotImplementedError('cannot instantiate abstract base class')
def save(self, *args, **kwargs):
"""Saves the in-memory state of the corpus (pickles the object).
Warnings
--------
This saves only the "internal state" of the corpus object, not the corpus data!
To save the corpus data, use the `serialize` method of your desired output format
instead, e.g. :meth:`gensim.corpora.mmcorpus.MmCorpus.serialize`.
"""
import warnings
warnings.warn(
"corpus.save() stores only the (tiny) iteration object in memory; "
"to serialize the actual corpus content, use e.g. MmCorpus.serialize(corpus)"
)
super(CorpusABC, self).save(*args, **kwargs)
def __len__(self):
"""Get the corpus size = the total number of documents in it."""
raise NotImplementedError("must override __len__() before calling len(corpus)")
@staticmethod
def save_corpus(fname, corpus, id2word=None, metadata=False):
"""Save `corpus` to disk.
Some formats support saving the dictionary (`feature_id -> word` mapping),
which can be provided by the optional `id2word` parameter.
Notes
-----
Some corpora also support random access via document indexing, so that the documents on disk
can be accessed in O(1) time (see the :class:`gensim.corpora.indexedcorpus.IndexedCorpus` base class).
In this case, :meth:`~gensim.interfaces.CorpusABC.save_corpus` is automatically called internally by
:func:`serialize`, which does :meth:`~gensim.interfaces.CorpusABC.save_corpus` plus saves the index
at the same time.
        Calling :func:`serialize` is preferred to calling :meth:`gensim.interfaces.CorpusABC.save_corpus` directly.
Parameters
----------
fname : str
Path to output file.
corpus : iterable of list of (int, number)
Corpus in BoW format.
id2word : :class:`~gensim.corpora.Dictionary`, optional
Dictionary of corpus.
metadata : bool, optional
            Write additional metadata to a separate file too?
"""
raise NotImplementedError('cannot instantiate abstract base class')
class TransformedCorpus(CorpusABC):
"""Interface for corpora that are the result of an online (streamed) transformation."""
def __init__(self, obj, corpus, chunksize=None, **kwargs):
"""
Parameters
----------
obj : object
A transformation :class:`~gensim.interfaces.TransformationABC` object that will be applied
to each document from `corpus` during iteration.
corpus : iterable of list of (int, number)
Corpus in bag-of-words format.
chunksize : int, optional
            If provided, slightly more efficient processing will be performed by grouping documents from `corpus` into chunks.
"""
self.obj, self.corpus, self.chunksize = obj, corpus, chunksize
# add the new parameters like per_word_topics to base class object of LdaModel
for key, value in kwargs.items():
setattr(self.obj, key, value)
self.metadata = False
def __len__(self):
"""Get corpus size."""
return len(self.corpus)
def __iter__(self):
"""Iterate over the corpus, applying the selected transformation.
If `chunksize` was set in the constructor, works in "batch-manner" (more efficient).
Yields
------
list of (int, number)
Documents in the sparse Gensim bag-of-words format.
"""
if self.chunksize:
for chunk in utils.grouper(self.corpus, self.chunksize):
for transformed in self.obj.__getitem__(chunk, chunksize=None):
yield transformed
else:
for doc in self.corpus:
yield self.obj[doc]
def __getitem__(self, docno):
"""Transform the document at position `docno` within `corpus` specified in the constructor.
Parameters
----------
docno : int
Position of the document to transform. Document offset inside `self.corpus`.
Notes
-----
`self.corpus` must support random indexing.
Returns
-------
list of (int, number)
Transformed document in the sparse Gensim bag-of-words format.
Raises
------
RuntimeError
            If the corpus doesn't support index slicing (`__getitem__` doesn't exist).
"""
if hasattr(self.corpus, '__getitem__'):
return self.obj[self.corpus[docno]]
else:
raise RuntimeError('Type {} does not support slicing.'.format(type(self.corpus)))
class TransformationABC(utils.SaveLoad):
"""Transformation interface.
    A 'transformation' is any object which accepts a sparse document in BoW format via `__getitem__` (notation `[]`)
    and returns another sparse document in its stead:
.. sourcecode:: pycon
>>> from gensim.models import LsiModel
>>> from gensim.test.utils import common_dictionary, common_corpus
>>>
>>> model = LsiModel(common_corpus, id2word=common_dictionary)
>>> bow_vector = model[common_corpus[0]] # model applied through __getitem__ on one document from corpus.
>>> bow_corpus = model[common_corpus] # also, we can apply model on the full corpus
"""
def __getitem__(self, vec):
"""Transform a single document, or a whole corpus, from one vector space into another.
Parameters
----------
vec : {list of (int, number), iterable of list of (int, number)}
Document in bag-of-words, or streamed corpus.
"""
raise NotImplementedError('cannot instantiate abstract base class')
def _apply(self, corpus, chunksize=None, **kwargs):
"""Apply the transformation to a whole corpus and get the result as another corpus.
Parameters
----------
corpus : iterable of list of (int, number)
Corpus in sparse Gensim bag-of-words format.
chunksize : int, optional
            If provided, more efficient processing (in batches) will be performed.
Returns
-------
:class:`~gensim.interfaces.TransformedCorpus`
Transformed corpus.
"""
return TransformedCorpus(self, corpus, chunksize, **kwargs)
class SimilarityABC(utils.SaveLoad):
"""Interface for similarity search over a corpus.
In all instances, there is a corpus against which we want to perform the similarity search.
For each similarity search, the input is a document or a corpus, and the output are the similarities
to individual corpus documents.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.similarities import MatrixSimilarity
>>> from gensim.test.utils import common_corpus
>>>
>>> index = MatrixSimilarity(common_corpus)
>>> similarities = index.get_similarities(common_corpus[1]) # get similarities between query and corpus
Notes
-----
There is also a convenience wrapper, where iterating over `self` yields similarities of each document in the corpus
against the whole corpus (i.e. the query is each corpus document in turn).
See Also
--------
:mod:`gensim.similarities`
Different index implementations of this interface.
"""
def __init__(self, corpus):
"""
Parameters
----------
corpus : iterable of list of (int, number)
Corpus in sparse Gensim bag-of-words format.
"""
raise NotImplementedError("cannot instantiate Abstract Base Class")
def get_similarities(self, doc):
"""Get similarities of the given document or corpus against this index.
Parameters
----------
doc : {list of (int, number), iterable of list of (int, number)}
Document in the sparse Gensim bag-of-words format, or a streamed corpus of such documents.
"""
raise NotImplementedError("cannot instantiate Abstract Base Class")
def __getitem__(self, query):
"""Get similarities of the given document or corpus against this index.
Uses :meth:`~gensim.interfaces.SimilarityABC.get_similarities` internally.
Notes
-----
Passing an entire corpus as `query` can be more efficient than passing its documents one after another,
because it will issue queries in batches internally.
Parameters
----------
query : {list of (int, number), iterable of list of (int, number)}
Document in the sparse Gensim bag-of-words format, or a streamed corpus of such documents.
Returns
-------
{`scipy.sparse.csr.csr_matrix`, list of (int, float)}
            Similarities of the given document or corpus against the indexed corpus; the exact return type depends on `query`.
"""
is_corpus, query = utils.is_corpus(query)
if self.normalize:
# self.normalize only works if the input is a plain gensim vector/corpus (as
# advertised in the doc). in fact, input can be a numpy or scipy.sparse matrix
# as well, but in that case assume tricks are happening and don't normalize
# anything (self.normalize has no effect).
if not matutils.ismatrix(query):
if is_corpus:
query = [matutils.unitvec(v) for v in query]
else:
query = matutils.unitvec(query)
result = self.get_similarities(query)
if self.num_best is None:
return result
# if maintain_sparsity is True, result is scipy sparse. Sort, clip the
# topn and return as a scipy sparse matrix.
if getattr(self, 'maintain_sparsity', False):
return matutils.scipy2scipy_clipped(result, self.num_best)
# if the input query was a corpus (=more documents), compute the top-n
# most similar for each document in turn
if matutils.ismatrix(result):
return [matutils.full2sparse_clipped(v, self.num_best) for v in result]
else:
# otherwise, return top-n of the single input document
return matutils.full2sparse_clipped(result, self.num_best)
def __iter__(self):
"""Iterate over all documents, compute similarity of each document against all other documents in the index.
Yields
------
{`scipy.sparse.csr.csr_matrix`, list of (int, float)}
Similarity of the current document and all documents in the corpus.
"""
# turn off query normalization (vectors in the index are assumed to be already normalized)
norm = self.normalize
self.normalize = False
# Try to compute similarities in bigger chunks of documents (not
# one query = a single document after another). The point is, a
# bigger query of N documents is faster than N small queries of one
# document.
#
# After computing similarities of the bigger query in `self[chunk]`,
# yield the resulting similarities one after another, so that it looks
# exactly the same as if they had been computed with many small queries.
try:
chunking = self.chunksize > 1
except AttributeError:
# chunking not supported; fall back to the (slower) mode of 1 query=1 document
chunking = False
if chunking:
# assumes `self.corpus` holds the index as a 2-d numpy array.
# this is true for MatrixSimilarity and SparseMatrixSimilarity, but
# may not be true for other (future) classes..?
for chunk_start in range(0, self.index.shape[0], self.chunksize):
# scipy.sparse doesn't allow slicing beyond real size of the matrix
# (unlike numpy). so, clip the end of the chunk explicitly to make
# scipy.sparse happy
chunk_end = min(self.index.shape[0], chunk_start + self.chunksize)
chunk = self.index[chunk_start: chunk_end]
for sim in self[chunk]:
yield sim
else:
for doc in self.index:
yield self[doc]
# restore old normalization value
self.normalize = norm
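# Illustrative sketch (not part of the original module): concrete subclasses such as
# MatrixSimilarity are normally queried through `__getitem__`; with `num_best` set,
# only the best matches come back as (document position, similarity) pairs.
#
#     >>> from gensim.similarities import MatrixSimilarity
#     >>> from gensim.test.utils import common_corpus
#     >>>
#     >>> index = MatrixSimilarity(common_corpus, num_best=3)
#     >>> top3 = index[common_corpus[0]]  # list of at most 3 (docid, cosine similarity) tuples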
# ===== file: piskvorky_gensim/gensim/__init__.py =====
"""
This package contains functionality to transform documents (strings) into vectors, and calculate
similarities between documents.
"""
__version__ = '4.3.3'
import logging
from gensim import parsing, corpora, matutils, interfaces, models, similarities, utils # noqa:F401
logger = logging.getLogger('gensim')
if not logger.handlers: # To ensure reload() doesn't add another one
logger.addHandler(logging.NullHandler())
# ===== file: piskvorky_gensim/gensim/matutils.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""Math helper functions."""
from __future__ import with_statement
import logging
import math
from gensim import utils
import numpy as np
import scipy.sparse
from scipy.stats import entropy
from scipy.linalg import get_blas_funcs
from scipy.linalg.lapack import get_lapack_funcs
from scipy.special import psi # gamma function utils
try:
from numpy import triu
except ImportError:
from scipy.linalg import triu
logger = logging.getLogger(__name__)
def blas(name, ndarray):
"""Helper for getting the appropriate BLAS function, using :func:`scipy.linalg.get_blas_funcs`.
Parameters
----------
name : str
Name(s) of BLAS functions, without the type prefix.
ndarray : numpy.ndarray
Arrays can be given to determine optimal prefix of BLAS routines.
Returns
-------
object
BLAS function for the needed operation on the given data type.
"""
return get_blas_funcs((name,), (ndarray,))[0]
def argsort(x, topn=None, reverse=False):
"""Efficiently calculate indices of the `topn` smallest elements in array `x`.
Parameters
----------
x : array_like
Array to get the smallest element indices from.
topn : int, optional
Number of indices of the smallest (greatest) elements to be returned.
If not given, indices of all elements will be returned in ascending (descending) order.
reverse : bool, optional
Return the `topn` greatest elements in descending order,
instead of smallest elements in ascending order?
Returns
-------
numpy.ndarray
Array of `topn` indices that sort the array in the requested order.
"""
x = np.asarray(x) # unify code path for when `x` is not a np array (list, tuple...)
if topn is None:
topn = x.size
if topn <= 0:
return []
if reverse:
x = -x
if topn >= x.size or not hasattr(np, 'argpartition'):
return np.argsort(x)[:topn]
# np >= 1.8 has a fast partial argsort, use that!
most_extreme = np.argpartition(x, topn)[:topn]
return most_extreme.take(np.argsort(x.take(most_extreme))) # resort topn into order
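# Illustrative sketch (not part of the original module): with `reverse=True` the indices of the
# `topn` largest values come back, largest first.
#
#     >>> import numpy as np
#     >>> from gensim.matutils import argsort
#     >>> x = np.array([0.1, 0.9, 0.4, 0.7])
#     >>> argsort(x, topn=2, reverse=True)
#     array([1, 3])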
def corpus2csc(corpus, num_terms=None, dtype=np.float64, num_docs=None, num_nnz=None, printprogress=0):
"""Convert a streamed corpus in bag-of-words format into a sparse matrix `scipy.sparse.csc_matrix`,
with documents as columns.
Notes
-----
If the number of terms, documents and non-zero elements is known, you can pass
them here as parameters and a (much) more memory efficient code path will be taken.
Parameters
----------
corpus : iterable of iterable of (int, number)
Input corpus in BoW format
num_terms : int, optional
Number of terms in `corpus`. If provided, the `corpus.num_terms` attribute (if any) will be ignored.
dtype : data-type, optional
Data type of output CSC matrix.
num_docs : int, optional
        Number of documents in `corpus`. If provided, the `corpus.num_docs` attribute (if any) will be ignored.
num_nnz : int, optional
Number of non-zero elements in `corpus`. If provided, the `corpus.num_nnz` attribute (if any) will be ignored.
printprogress : int, optional
Log a progress message at INFO level once every `printprogress` documents. 0 to turn off progress logging.
Returns
-------
scipy.sparse.csc_matrix
`corpus` converted into a sparse CSC matrix.
See Also
--------
:class:`~gensim.matutils.Sparse2Corpus`
Convert sparse format to Gensim corpus format.
"""
try:
# if the input corpus has the `num_nnz`, `num_docs` and `num_terms` attributes
# (as is the case with MmCorpus for example), we can use a more efficient code path
if num_terms is None:
num_terms = corpus.num_terms
if num_docs is None:
num_docs = corpus.num_docs
if num_nnz is None:
num_nnz = corpus.num_nnz
except AttributeError:
pass # not a MmCorpus...
if printprogress:
logger.info("creating sparse matrix from corpus")
if num_terms is not None and num_docs is not None and num_nnz is not None:
# faster and much more memory-friendly version of creating the sparse csc
posnow, indptr = 0, [0]
indices = np.empty((num_nnz,), dtype=np.int32) # HACK assume feature ids fit in 32bit integer
data = np.empty((num_nnz,), dtype=dtype)
for docno, doc in enumerate(corpus):
if printprogress and docno % printprogress == 0:
logger.info("PROGRESS: at document #%i/%i", docno, num_docs)
posnext = posnow + len(doc)
            # zip(*doc) transforms doc to (token_indices, token_counts)
indices[posnow: posnext], data[posnow: posnext] = zip(*doc) if doc else ([], [])
indptr.append(posnext)
posnow = posnext
assert posnow == num_nnz, "mismatch between supplied and computed number of non-zeros"
result = scipy.sparse.csc_matrix((data, indices, indptr), shape=(num_terms, num_docs), dtype=dtype)
else:
# slower version; determine the sparse matrix parameters during iteration
num_nnz, data, indices, indptr = 0, [], [], [0]
for docno, doc in enumerate(corpus):
if printprogress and docno % printprogress == 0:
logger.info("PROGRESS: at document #%i", docno)
            # zip(*doc) transforms doc to (token_indices, token_counts)
doc_indices, doc_data = zip(*doc) if doc else ([], [])
indices.extend(doc_indices)
data.extend(doc_data)
num_nnz += len(doc)
indptr.append(num_nnz)
if num_terms is None:
num_terms = max(indices) + 1 if indices else 0
num_docs = len(indptr) - 1
# now num_docs, num_terms and num_nnz contain the correct values
data = np.asarray(data, dtype=dtype)
indices = np.asarray(indices)
result = scipy.sparse.csc_matrix((data, indices, indptr), shape=(num_terms, num_docs), dtype=dtype)
return result
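# Illustrative sketch (not part of the original module): documents become columns of the resulting
# CSC matrix, so a 2-document corpus over 3 terms yields a (3, 2) matrix.
#
#     >>> from gensim.matutils import corpus2csc
#     >>> bow_corpus = [[(0, 1.0), (2, 2.0)], [(1, 3.0)]]
#     >>> corpus2csc(bow_corpus).shape
#     (3, 2)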
def pad(mat, padrow, padcol):
"""Add additional rows/columns to `mat`. The new rows/columns will be initialized with zeros.
Parameters
----------
mat : numpy.ndarray
Input 2D matrix
padrow : int
Number of additional rows
padcol : int
Number of additional columns
Returns
-------
numpy.matrixlib.defmatrix.matrix
Matrix with needed padding.
"""
if padrow < 0:
padrow = 0
if padcol < 0:
padcol = 0
rows, cols = mat.shape
return np.block([
[mat, np.zeros((rows, padcol))],
[np.zeros((padrow, cols + padcol))],
])
def zeros_aligned(shape, dtype, order='C', align=128):
"""Get array aligned at `align` byte boundary in memory.
Parameters
----------
shape : int or (int, int)
Shape of array.
dtype : data-type
Data type of array.
order : {'C', 'F'}, optional
Whether to store multidimensional data in C- or Fortran-contiguous (row- or column-wise) order in memory.
align : int, optional
Boundary for alignment in bytes.
Returns
-------
numpy.ndarray
Aligned array.
"""
nbytes = np.prod(shape, dtype=np.int64) * np.dtype(dtype).itemsize
buffer = np.zeros(nbytes + align, dtype=np.uint8) # problematic on win64 ("maximum allowed dimension exceeded")
start_index = -buffer.ctypes.data % align
return buffer[start_index: start_index + nbytes].view(dtype).reshape(shape, order=order)
def ismatrix(m):
"""Check whether `m` is a 2D `numpy.ndarray` or `scipy.sparse` matrix.
Parameters
----------
m : object
Object to check.
Returns
-------
bool
Is `m` a 2D `numpy.ndarray` or `scipy.sparse` matrix.
"""
return isinstance(m, np.ndarray) and m.ndim == 2 or scipy.sparse.issparse(m)
def any2sparse(vec, eps=1e-9):
"""Convert a numpy.ndarray or `scipy.sparse` vector into the Gensim bag-of-words format.
Parameters
----------
vec : {`numpy.ndarray`, `scipy.sparse`}
Input vector
eps : float, optional
        Threshold value; coordinates with absolute value smaller than `eps` will not be present in the result.
Returns
-------
list of (int, float)
Vector in BoW format.
"""
if isinstance(vec, np.ndarray):
return dense2vec(vec, eps)
if scipy.sparse.issparse(vec):
return scipy2sparse(vec, eps)
return [(int(fid), float(fw)) for fid, fw in vec if np.abs(fw) > eps]
def scipy2scipy_clipped(matrix, topn, eps=1e-9):
"""Get the 'topn' elements of the greatest magnitude (absolute value) from a `scipy.sparse` vector or matrix.
Parameters
----------
matrix : `scipy.sparse`
Input vector or matrix (1D or 2D sparse array).
topn : int
Number of greatest elements, in absolute value, to return.
eps : float
Ignored.
Returns
-------
`scipy.sparse.csr.csr_matrix`
Clipped matrix.
"""
if not scipy.sparse.issparse(matrix):
raise ValueError("'%s' is not a scipy sparse vector." % matrix)
if topn <= 0:
return scipy.sparse.csr_matrix([])
# Return clipped sparse vector if input is a sparse vector.
if matrix.shape[0] == 1:
# use np.argpartition/argsort and only form tuples that are actually returned.
biggest = argsort(abs(matrix.data), topn, reverse=True)
indices, data = matrix.indices.take(biggest), matrix.data.take(biggest)
return scipy.sparse.csr_matrix((data, indices, [0, len(indices)]))
# Return clipped sparse matrix if input is a matrix, processing row by row.
else:
matrix_indices = []
matrix_data = []
matrix_indptr = [0]
# calling abs() on entire matrix once is faster than calling abs() iteratively for each row
matrix_abs = abs(matrix)
for i in range(matrix.shape[0]):
v = matrix.getrow(i)
v_abs = matrix_abs.getrow(i)
# Sort and clip each row vector first.
biggest = argsort(v_abs.data, topn, reverse=True)
indices, data = v.indices.take(biggest), v.data.take(biggest)
# Store the topn indices and values of each row vector.
matrix_data.append(data)
matrix_indices.append(indices)
matrix_indptr.append(matrix_indptr[-1] + min(len(indices), topn))
matrix_indices = np.concatenate(matrix_indices).ravel()
matrix_data = np.concatenate(matrix_data).ravel()
# Instantiate and return a sparse csr_matrix which preserves the order of indices/data.
return scipy.sparse.csr.csr_matrix(
(matrix_data, matrix_indices, matrix_indptr),
shape=(matrix.shape[0], np.max(matrix_indices) + 1)
)
def scipy2sparse(vec, eps=1e-9):
"""Convert a scipy.sparse vector into the Gensim bag-of-words format.
Parameters
----------
vec : `scipy.sparse`
Sparse vector.
eps : float, optional
Threshold value; coordinates with absolute value smaller than `eps` are omitted from the result.
Returns
-------
list of (int, float)
Vector in Gensim bag-of-words format.
"""
vec = vec.tocsr()
assert vec.shape[0] == 1
return [(int(pos), float(val)) for pos, val in zip(vec.indices, vec.data) if np.abs(val) > eps]
class Scipy2Corpus:
"""Convert a sequence of dense/sparse vectors into a streamed Gensim corpus object.
See Also
--------
:func:`~gensim.matutils.corpus2csc`
Convert corpus in Gensim format to `scipy.sparse.csc` matrix.
"""
def __init__(self, vecs):
"""
Parameters
----------
vecs : iterable of {`numpy.ndarray`, `scipy.sparse`}
Input vectors.
"""
self.vecs = vecs
def __iter__(self):
for vec in self.vecs:
if isinstance(vec, np.ndarray):
yield full2sparse(vec)
else:
yield scipy2sparse(vec)
def __len__(self):
return len(self.vecs)
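# A minimal usage sketch for `Scipy2Corpus` (arbitrary vectors): each input
# vector is streamed back out in the bag-of-words format:
#
#   >>> corpus = Scipy2Corpus([np.array([1.0, 2.0]), scipy.sparse.csr_matrix([[0.0, 3.0]])])
#   >>> list(corpus)     # -> [[(0, 1.0), (1, 2.0)], [(1, 3.0)]]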
def sparse2full(doc, length):
"""Convert a document in Gensim bag-of-words format into a dense numpy array.
Parameters
----------
doc : list of (int, number)
Document in BoW format.
length : int
Vector dimensionality. This cannot be inferred from the BoW, and you must supply it explicitly.
This is typically the vocabulary size or number of topics, depending on how you created `doc`.
Returns
-------
numpy.ndarray
Dense numpy vector for `doc`.
See Also
--------
:func:`~gensim.matutils.full2sparse`
Convert dense array to gensim bag-of-words format.
"""
result = np.zeros(length, dtype=np.float32) # fill with zeroes (default value)
# convert indices to int as numpy 1.12 no longer indexes by floats
doc = ((int(id_), float(val_)) for (id_, val_) in doc)
doc = dict(doc)
# overwrite some of the zeroes with explicit values
result[list(doc)] = list(doc.values())
return result
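# A minimal usage sketch for `sparse2full` (arbitrary values): the BoW document
# is expanded into a dense float32 vector of the requested length:
#
#   >>> sparse2full([(0, 2.0), (3, 1.0)], length=5)
#   array([2., 0., 0., 1., 0.], dtype=float32)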
def full2sparse(vec, eps=1e-9):
"""Convert a dense numpy array into the Gensim bag-of-words format.
Parameters
----------
vec : numpy.ndarray
Dense input vector.
eps : float
Feature weight threshold value. Features with `abs(weight) < eps` are considered sparse and
won't be included in the BOW result.
Returns
-------
list of (int, float)
BoW format of `vec`, with near-zero values omitted (sparse vector).
See Also
--------
:func:`~gensim.matutils.sparse2full`
Convert a document in Gensim bag-of-words format into a dense numpy array.
"""
vec = np.asarray(vec, dtype=float)
nnz = np.nonzero(abs(vec) > eps)[0]
return list(zip(nnz, vec.take(nnz)))
dense2vec = full2sparse
def full2sparse_clipped(vec, topn, eps=1e-9):
"""Like :func:`~gensim.matutils.full2sparse`, but only return the `topn` elements of the greatest magnitude (abs).
This is more efficient than sorting the full vector and then taking the greatest values, especially
when `len(vec) >> topn`.
Parameters
----------
vec : numpy.ndarray
Input dense vector
topn : int
Number of greatest-magnitude (absolute value) elements to keep in the result.
eps : float
Threshold value; coordinates with absolute value smaller than `eps` are omitted from the result.
Returns
-------
list of (int, float)
Clipped vector in BoW format.
See Also
--------
:func:`~gensim.matutils.full2sparse`
Convert dense array to gensim bag-of-words format.
"""
# use np.argpartition/argsort and only form tuples that are actually returned.
# this is about 40x faster than explicitly forming all 2-tuples to run sort() or heapq.nlargest() on.
if topn <= 0:
return []
vec = np.asarray(vec, dtype=float)
nnz = np.nonzero(abs(vec) > eps)[0]
biggest = nnz.take(argsort(abs(vec).take(nnz), topn, reverse=True))
return list(zip(biggest, vec.take(biggest)))
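# A minimal usage sketch for `full2sparse_clipped` (arbitrary values): only the
# `topn` largest-magnitude coordinates survive, largest first (numpy scalar
# types may appear in the repr):
#
#   >>> full2sparse_clipped(np.array([0.1, -3.0, 0.0, 2.0]), topn=2)   # -> [(1, -3.0), (3, 2.0)]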
def corpus2dense(corpus, num_terms, num_docs=None, dtype=np.float32):
"""Convert corpus into a dense numpy 2D array, with documents as columns.
Parameters
----------
corpus : iterable of iterable of (int, number)
Input corpus in the Gensim bag-of-words format.
num_terms : int
Number of terms in the dictionary. X-axis of the resulting matrix.
num_docs : int, optional
Number of documents in the corpus. If provided, a slightly more memory-efficient code path is taken.
Y-axis of the resulting matrix.
dtype : data-type, optional
Data type of the output matrix.
Returns
-------
numpy.ndarray
Dense 2D array that presents `corpus`.
See Also
--------
:class:`~gensim.matutils.Dense2Corpus`
Convert dense matrix to Gensim corpus format.
"""
if num_docs is not None:
# we know the number of documents => don't bother column_stacking
docno, result = -1, np.empty((num_terms, num_docs), dtype=dtype)
for docno, doc in enumerate(corpus):
result[:, docno] = sparse2full(doc, num_terms)
assert docno + 1 == num_docs
else:
# The below used to be a generator, but NumPy deprecated generator as of 1.16 with:
# """
# FutureWarning: arrays to stack must be passed as a "sequence" type such as list or tuple.
# Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error
# in the future.
# """
result = np.column_stack([sparse2full(doc, num_terms) for doc in corpus])
return result.astype(dtype)
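# A minimal usage sketch for `corpus2dense` (arbitrary values): documents become
# columns of the dense matrix, so a 2-document corpus over 3 terms yields a
# 3x2 array:
#
#   >>> bow_corpus = [[(0, 1.0)], [(1, 2.0), (2, 1.0)]]
#   >>> corpus2dense(bow_corpus, num_terms=3).shape
#   (3, 2)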
class Dense2Corpus:
"""Treat dense numpy array as a streamed Gensim corpus in the bag-of-words format.
Notes
-----
No data copy is made (changes to the underlying matrix imply changes in the streamed corpus).
See Also
--------
:func:`~gensim.matutils.corpus2dense`
Convert Gensim corpus to dense matrix.
:class:`~gensim.matutils.Sparse2Corpus`
Convert sparse matrix to Gensim corpus format.
"""
def __init__(self, dense, documents_columns=True):
"""
Parameters
----------
dense : numpy.ndarray
Corpus in dense format.
documents_columns : bool, optional
Documents in `dense` represented as columns, as opposed to rows?
"""
if documents_columns:
self.dense = dense.T
else:
self.dense = dense
def __iter__(self):
"""Iterate over the corpus.
Yields
------
list of (int, float)
Document in BoW format.
"""
for doc in self.dense:
yield full2sparse(doc.flat)
def __len__(self):
return len(self.dense)
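# A minimal usage sketch for `Dense2Corpus` (arbitrary values): with the default
# `documents_columns=True`, each column of the dense matrix is streamed out as
# one BoW document:
#
#   >>> dense = np.array([[1.0, 0.0], [0.0, 2.0]])
#   >>> list(Dense2Corpus(dense))   # -> [[(0, 1.0)], [(1, 2.0)]]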
class Sparse2Corpus:
"""Convert a matrix in scipy.sparse format into a streaming Gensim corpus.
See Also
--------
:func:`~gensim.matutils.corpus2csc`
Convert gensim corpus format to `scipy.sparse.csc` matrix
:class:`~gensim.matutils.Dense2Corpus`
Convert dense matrix to gensim corpus.
"""
def __init__(self, sparse, documents_columns=True):
"""
Parameters
----------
sparse : `scipy.sparse`
Corpus in scipy sparse format.
documents_columns : bool, optional
Are documents represented as columns of `sparse` (as opposed to rows)?
"""
if documents_columns:
self.sparse = sparse.tocsc()
else:
self.sparse = sparse.tocsr().T # make sure shape[1]=number of docs (needed in len())
def __iter__(self):
"""
Yields
------
list of (int, float)
Document in BoW format.
"""
for indprev, indnow in zip(self.sparse.indptr, self.sparse.indptr[1:]):
yield list(zip(self.sparse.indices[indprev:indnow], self.sparse.data[indprev:indnow]))
def __len__(self):
return self.sparse.shape[1]
def __getitem__(self, key):
"""
Retrieve a document vector or subset from the corpus by key.
Parameters
----------
key: int, ellipsis, slice, iterable object
Index of the document to retrieve.
Less commonly, the key can also be a slice, ellipsis, or an iterable
to retrieve multiple documents.
Returns
-------
list of (int, number), Sparse2Corpus
Document in BoW format when `key` is an integer. Otherwise :class:`~gensim.matutils.Sparse2Corpus`.
"""
sparse = self.sparse
if isinstance(key, int):
iprev = self.sparse.indptr[key]
inow = self.sparse.indptr[key + 1]
return list(zip(sparse.indices[iprev:inow], sparse.data[iprev:inow]))
sparse = self.sparse.__getitem__((slice(None, None, None), key))
return Sparse2Corpus(sparse)
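# A minimal usage sketch for `Sparse2Corpus` (arbitrary values): columns of the
# sparse matrix are streamed out as BoW documents, and integer indexing returns
# a single document:
#
#   >>> s = scipy.sparse.csc_matrix(np.array([[1.0, 0.0], [0.0, 2.0]]))
#   >>> corpus = Sparse2Corpus(s)
#   >>> corpus[1]        # -> [(1, 2.0)]
#   >>> len(corpus)      # -> 2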
def veclen(vec):
"""Calculate L2 (euclidean) length of a vector.
Parameters
----------
vec : list of (int, number)
Input vector in sparse bag-of-words format.
Returns
-------
float
Length of `vec`.
"""
if len(vec) == 0:
return 0.0
length = 1.0 * math.sqrt(sum(val**2 for _, val in vec))
assert length > 0.0, "sparse documents must not contain any explicit zero entries"
return length
def ret_normalized_vec(vec, length):
"""Normalize a vector in L2 (Euclidean unit norm).
Parameters
----------
vec : list of (int, number)
Input vector in BoW format.
length : float
Precomputed length (norm) of `vec` to divide by.
Returns
-------
list of (int, number)
L2-normalized vector in BoW format.
"""
if length != 1.0:
return [(termid, val / length) for termid, val in vec]
else:
return list(vec)
def ret_log_normalize_vec(vec, axis=1):
log_max = 100.0
if len(vec.shape) == 1:
max_val = np.max(vec)
log_shift = log_max - np.log(len(vec) + 1.0) - max_val
tot = np.sum(np.exp(vec + log_shift))
log_norm = np.log(tot) - log_shift
vec -= log_norm
else:
if axis == 1: # independently normalize each sample
max_val = np.max(vec, 1)
log_shift = log_max - np.log(vec.shape[1] + 1.0) - max_val
tot = np.sum(np.exp(vec + log_shift[:, np.newaxis]), 1)
log_norm = np.log(tot) - log_shift
vec = vec - log_norm[:, np.newaxis]
elif axis == 0: # normalize each feature
k = ret_log_normalize_vec(vec.T)
return k[0].T, k[1]
else:
raise ValueError("'%s' is not a supported axis" % axis)
return vec, log_norm
blas_nrm2 = blas('nrm2', np.array([], dtype=float))
blas_scal = blas('scal', np.array([], dtype=float))
def unitvec(vec, norm='l2', return_norm=False):
"""Scale a vector to unit length.
Parameters
----------
vec : {numpy.ndarray, scipy.sparse, list of (int, float)}
Input vector in any format
norm : {'l1', 'l2', 'unique'}, optional
Metric to normalize in.
return_norm : bool, optional
Return the length of vector `vec`, in addition to the normalized vector itself?
Returns
-------
{numpy.ndarray, scipy.sparse, list of (int, float)}
Normalized vector in same format as `vec`.
float
Length of `vec` before normalization, if `return_norm` is set.
Notes
-----
Zero-vector will be unchanged.
"""
supported_norms = ('l1', 'l2', 'unique')
if norm not in supported_norms:
raise ValueError("'%s' is not a supported norm. Currently supported norms are %s." % (norm, supported_norms))
if scipy.sparse.issparse(vec):
vec = vec.tocsr()
if norm == 'l1':
veclen = np.sum(np.abs(vec.data))
if norm == 'l2':
veclen = np.sqrt(np.sum(vec.data ** 2))
if norm == 'unique':
veclen = vec.nnz
if veclen > 0.0:
if np.issubdtype(vec.dtype, np.integer):
vec = vec.astype(float)
vec /= veclen
if return_norm:
return vec, veclen
else:
return vec
else:
if return_norm:
return vec, 1.0
else:
return vec
if isinstance(vec, np.ndarray):
if norm == 'l1':
veclen = np.sum(np.abs(vec))
if norm == 'l2':
if vec.size == 0:
veclen = 0.0
else:
veclen = blas_nrm2(vec)
if norm == 'unique':
veclen = np.count_nonzero(vec)
if veclen > 0.0:
if np.issubdtype(vec.dtype, np.integer):
vec = vec.astype(float)
if return_norm:
return blas_scal(1.0 / veclen, vec).astype(vec.dtype), veclen
else:
return blas_scal(1.0 / veclen, vec).astype(vec.dtype)
else:
if return_norm:
return vec, 1.0
else:
return vec
try:
first = next(iter(vec)) # is there at least one element?
except StopIteration:
if return_norm:
return vec, 1.0
else:
return vec
if isinstance(first, (tuple, list)) and len(first) == 2: # gensim sparse format
if norm == 'l1':
length = float(sum(abs(val) for _, val in vec))
if norm == 'l2':
length = 1.0 * math.sqrt(sum(val ** 2 for _, val in vec))
if norm == 'unique':
length = 1.0 * len(vec)
assert length > 0.0, "sparse documents must not contain any explicit zero entries"
if return_norm:
return ret_normalized_vec(vec, length), length
else:
return ret_normalized_vec(vec, length)
else:
raise ValueError("unknown input type")
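# A minimal usage sketch for `unitvec` (arbitrary values): the vector is rescaled
# to unit L2 length, in whichever format it came in:
#
#   >>> unitvec([(0, 3.0), (1, 4.0)])      # -> [(0, 0.6), (1, 0.8)]
#   >>> unitvec(np.array([3.0, 4.0]))      # -> array([0.6, 0.8])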
def cossim(vec1, vec2):
"""Get cosine similarity between two sparse vectors.
Cosine similarity is a number in the range `[-1.0, 1.0]`; a higher value means more similar.
Parameters
----------
vec1 : list of (int, float)
Vector in BoW format.
vec2 : list of (int, float)
Vector in BoW format.
Returns
-------
float
Cosine similarity between `vec1` and `vec2`.
"""
vec1, vec2 = dict(vec1), dict(vec2)
if not vec1 or not vec2:
return 0.0
vec1len = 1.0 * math.sqrt(sum(val * val for val in vec1.values()))
vec2len = 1.0 * math.sqrt(sum(val * val for val in vec2.values()))
assert vec1len > 0.0 and vec2len > 0.0, "sparse documents must not contain any explicit zero entries"
if len(vec2) < len(vec1):
vec1, vec2 = vec2, vec1 # swap references so that we iterate over the shorter vector
result = sum(value * vec2.get(index, 0.0) for index, value in vec1.items())
result /= vec1len * vec2len # rescale by vector lengths
return result
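# A minimal usage sketch for `cossim` (arbitrary values): two BoW vectors that
# share one of their two unit-weight features have cosine similarity
# 1 / (sqrt(2) * sqrt(2)), i.e. approximately 0.5 (up to floating-point rounding):
#
#   >>> cossim([(0, 1.0), (1, 1.0)], [(1, 1.0), (2, 1.0)])   # -> ~0.5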
def isbow(vec):
"""Checks if a vector is in the sparse Gensim bag-of-words format.
Parameters
----------
vec : object
Object to check.
Returns
-------
bool
Is `vec` in BoW format.
"""
if scipy.sparse.issparse(vec):
vec = vec.todense().tolist()
try:
id_, val_ = vec[0] # checking first value to see if it is in bag of words format by unpacking
int(id_), float(val_)
except IndexError:
return True # this is to handle the empty input case
except (ValueError, TypeError):
return False
return True
def _convert_vec(vec1, vec2, num_features=None):
if scipy.sparse.issparse(vec1):
vec1 = vec1.toarray()
if scipy.sparse.issparse(vec2):
vec2 = vec2.toarray() # converted both the vectors to dense in case they were in sparse matrix
if isbow(vec1) and isbow(vec2): # if they are in bag of words format we make it dense
if num_features is not None: # if not None, make as large as the documents drawing from
dense1 = sparse2full(vec1, num_features)
dense2 = sparse2full(vec2, num_features)
return dense1, dense2
else:
max_len = max(len(vec1), len(vec2))
dense1 = sparse2full(vec1, max_len)
dense2 = sparse2full(vec2, max_len)
return dense1, dense2
else:
# this conversion is made because if it is not in bow format, it might be a list within a list after conversion
# the scipy implementation of Kullback fails in such a case so we pick up only the nested list.
if len(vec1) == 1:
vec1 = vec1[0]
if len(vec2) == 1:
vec2 = vec2[0]
return vec1, vec2
def kullback_leibler(vec1, vec2, num_features=None):
"""Calculate Kullback-Leibler distance between two probability distributions using `scipy.stats.entropy`.
Parameters
----------
vec1 : {scipy.sparse, numpy.ndarray, list of (int, float)}
Distribution vector.
vec2 : {scipy.sparse, numpy.ndarray, list of (int, float)}
Distribution vector.
num_features : int, optional
Number of features in the vectors.
Returns
-------
float
Kullback-Leibler distance between `vec1` and `vec2`.
Value in range [0, +∞) where values closer to 0 mean less distance (higher similarity).
"""
vec1, vec2 = _convert_vec(vec1, vec2, num_features=num_features)
return entropy(vec1, vec2)
def jensen_shannon(vec1, vec2, num_features=None):
"""Calculate Jensen-Shannon distance between two probability distributions using `scipy.stats.entropy`.
Parameters
----------
vec1 : {scipy.sparse, numpy.ndarray, list of (int, float)}
Distribution vector.
vec2 : {scipy.sparse, numpy.ndarray, list of (int, float)}
Distribution vector.
num_features : int, optional
Number of features in the vectors.
Returns
-------
float
Jensen-Shannon distance between `vec1` and `vec2`.
Notes
-----
This is a symmetric and finite "version" of :func:`gensim.matutils.kullback_leibler`.
"""
vec1, vec2 = _convert_vec(vec1, vec2, num_features=num_features)
avg_vec = 0.5 * (vec1 + vec2)
return 0.5 * (entropy(vec1, avg_vec) + entropy(vec2, avg_vec))
def hellinger(vec1, vec2):
"""Calculate Hellinger distance between two probability distributions.
Parameters
----------
vec1 : {scipy.sparse, numpy.ndarray, list of (int, float)}
Distribution vector.
vec2 : {scipy.sparse, numpy.ndarray, list of (int, float)}
Distribution vector.
Returns
-------
float
Hellinger distance between `vec1` and `vec2`.
Value in range `[0, 1]`, where 0 is min distance (max similarity) and 1 is max distance (min similarity).
"""
if scipy.sparse.issparse(vec1):
vec1 = vec1.toarray()
if scipy.sparse.issparse(vec2):
vec2 = vec2.toarray()
if isbow(vec1) and isbow(vec2):
# if it is a BoW format, instead of converting to dense we use dictionaries to calculate appropriate distance
vec1, vec2 = dict(vec1), dict(vec2)
indices = set(list(vec1.keys()) + list(vec2.keys()))
sim = np.sqrt(
0.5 * sum((np.sqrt(vec1.get(index, 0.0)) - np.sqrt(vec2.get(index, 0.0)))**2 for index in indices)
)
return sim
else:
sim = np.sqrt(0.5 * ((np.sqrt(vec1) - np.sqrt(vec2))**2).sum())
return sim
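# A minimal usage sketch for `hellinger` (arbitrary values): two distributions
# with disjoint support are at the maximum distance of 1.0, while identical
# distributions are at distance 0.0:
#
#   >>> hellinger([(0, 1.0)], [(1, 1.0)])                         # -> 1.0
#   >>> hellinger([(0, 0.5), (1, 0.5)], [(0, 0.5), (1, 0.5)])     # -> 0.0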
def jaccard(vec1, vec2):
"""Calculate Jaccard distance between two vectors.
Parameters
----------
vec1 : {scipy.sparse, numpy.ndarray, list of (int, float)}
Distribution vector.
vec2 : {scipy.sparse, numpy.ndarray, list of (int, float)}
Distribution vector.
Returns
-------
float
Jaccard distance between `vec1` and `vec2`.
Value in range `[0, 1]`, where 0 is min distance (max similarity) and 1 is max distance (min similarity).
"""
# converting from sparse for easier manipulation
if scipy.sparse.issparse(vec1):
vec1 = vec1.toarray()
if scipy.sparse.issparse(vec2):
vec2 = vec2.toarray()
if isbow(vec1) and isbow(vec2):
# if it's in bow format, we use the following definitions:
# union = sum of the 'weights' of both the bags
# intersection = lowest weight for a particular id; basically the number of common words or items
union = sum(weight for id_, weight in vec1) + sum(weight for id_, weight in vec2)
vec1, vec2 = dict(vec1), dict(vec2)
intersection = 0.0
for feature_id, feature_weight in vec1.items():
intersection += min(feature_weight, vec2.get(feature_id, 0.0))
return 1 - float(intersection) / float(union)
else:
# if it isn't in bag of words format, we can use sets to calculate intersection and union
if isinstance(vec1, np.ndarray):
vec1 = vec1.tolist()
if isinstance(vec2, np.ndarray):
vec2 = vec2.tolist()
vec1 = set(vec1)
vec2 = set(vec2)
intersection = vec1 & vec2
union = vec1 | vec2
return 1 - float(len(intersection)) / float(len(union))
def jaccard_distance(set1, set2):
"""Calculate Jaccard distance between two sets.
Parameters
----------
set1 : set
Input set.
set2 : set
Input set.
Returns
-------
float
Jaccard distance between `set1` and `set2`.
Value in range `[0, 1]`, where 0 is min distance (max similarity) and 1 is max distance (min similarity).
"""
union_cardinality = len(set1 | set2)
if union_cardinality == 0: # Both sets are empty
return 1.
return 1. - float(len(set1 & set2)) / float(union_cardinality)
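# A minimal usage sketch for `jaccard_distance` (arbitrary sets): one shared
# element out of three distinct elements gives a distance of 1 - 1/3:
#
#   >>> jaccard_distance({'cat', 'dog'}, {'dog', 'mouse'})   # -> 0.666...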
try:
# try to load fast, cythonized code if possible
from gensim._matutils import logsumexp, mean_absolute_difference, dirichlet_expectation
except ImportError:
def logsumexp(x):
"""Log of sum of exponentials.
Parameters
----------
x : numpy.ndarray
Input 2d matrix.
Returns
-------
float
log of sum of exponentials of elements in `x`.
Warnings
--------
For performance reasons, doesn't support NaNs or 1d, 3d, etc arrays like :func:`scipy.special.logsumexp`.
"""
x_max = np.max(x)
x = np.log(np.sum(np.exp(x - x_max)))
x += x_max
return x
def mean_absolute_difference(a, b):
"""Mean absolute difference between two arrays.
Parameters
----------
a : numpy.ndarray
Input 1d array.
b : numpy.ndarray
Input 1d array.
Returns
-------
float
mean(abs(a - b)).
"""
return np.mean(np.abs(a - b))
def dirichlet_expectation(alpha):
"""Expected value of log(theta) where theta is drawn from a Dirichlet distribution.
Parameters
----------
alpha : numpy.ndarray
Dirichlet parameter 2d matrix or 1d vector, if 2d - each row is treated as a separate parameter vector.
Returns
-------
numpy.ndarray
Log of expected values, dimension same as `alpha.ndim`.
"""
if len(alpha.shape) == 1:
result = psi(alpha) - psi(np.sum(alpha))
else:
result = psi(alpha) - psi(np.sum(alpha, 1))[:, np.newaxis]
return result.astype(alpha.dtype, copy=False) # keep the same precision as input
def qr_destroy(la):
"""Get QR decomposition of `la[0]`.
Parameters
----------
la : list of numpy.ndarray
Run QR decomposition on the first element of `la`. Must not be empty.
Returns
-------
(numpy.ndarray, numpy.ndarray)
Matrices :math:`Q` and :math:`R`.
Notes
-----
Using this function is less memory intense than calling `scipy.linalg.qr(la[0])`,
because the memory used in `la[0]` is reclaimed earlier. This makes a difference when
decomposing very large arrays, where every memory copy counts.
Warnings
--------
The content of `la`, as well as `la[0]`, gets destroyed in the process. Again, for memory-efficiency reasons.
"""
a = np.asfortranarray(la[0])
del la[0], la # now `a` is the only reference to the input matrix
m, n = a.shape
# perform q, r = QR(a); code hacked out of scipy.linalg.qr
logger.debug("computing QR of %s dense matrix", str(a.shape))
geqrf, = get_lapack_funcs(('geqrf',), (a,))
qr, tau, work, info = geqrf(a, lwork=-1, overwrite_a=True)
qr, tau, work, info = geqrf(a, lwork=work[0], overwrite_a=True)
del a # free up mem
assert info >= 0
r = triu(qr[:n, :n])
if m < n: # rare case, #features < #topics
qr = qr[:, :m] # retains fortran order
gorgqr, = get_lapack_funcs(('orgqr',), (qr,))
q, work, info = gorgqr(qr, tau, lwork=-1, overwrite_a=True)
q, work, info = gorgqr(qr, tau, lwork=work[0], overwrite_a=True)
assert info >= 0, "qr failed"
assert q.flags.f_contiguous
return q, r
class MmWriter:
"""Store a corpus in `Matrix Market format <https://math.nist.gov/MatrixMarket/formats.html>`_,
using :class:`~gensim.corpora.mmcorpus.MmCorpus`.
Notes
-----
The output is written one document at a time, not the whole matrix at once (unlike e.g. `scipy.io.mmread`).
This allows you to write corpora which are larger than the available RAM.
The output file is created in a single pass through the input corpus, so that the input can be
a once-only stream (generator).
To achieve this, a fake MM header is written first, corpus statistics are collected
during the pass (shape of the matrix, number of non-zeroes), followed by a seek back to the beginning of the file,
rewriting the fake header with the final values.
"""
HEADER_LINE = b'%%MatrixMarket matrix coordinate real general\n' # the only supported MM format
def __init__(self, fname):
"""
Parameters
----------
fname : str
Path to output file.
"""
self.fname = fname
if fname.endswith(".gz") or fname.endswith('.bz2'):
raise NotImplementedError("compressed output not supported with MmWriter")
self.fout = utils.open(self.fname, 'wb+') # open for both reading and writing
self.headers_written = False
def write_headers(self, num_docs, num_terms, num_nnz):
"""Write headers to file.
Parameters
----------
num_docs : int
Number of documents in corpus.
num_terms : int
Number of terms in the corpus.
num_nnz : int
Number of non-zero elements in corpus.
"""
self.fout.write(MmWriter.HEADER_LINE)
if num_nnz < 0:
# we don't know the matrix shape/density yet, so only log a general line
logger.info("saving sparse matrix to %s", self.fname)
self.fout.write(utils.to_utf8(' ' * 50 + '\n')) # reserve 50 characters for the final stats line, to be overwritten later
else:
logger.info(
"saving sparse %sx%s matrix with %i non-zero entries to %s",
num_docs, num_terms, num_nnz, self.fname
)
self.fout.write(utils.to_utf8('%s %s %s\n' % (num_docs, num_terms, num_nnz)))
self.last_docno = -1
self.headers_written = True
def fake_headers(self, num_docs, num_terms, num_nnz):
"""Write "fake" headers to file, to be rewritten once we've scanned the entire corpus.
Parameters
----------
num_docs : int
Number of documents in corpus.
num_terms : int
Number of terms in the corpus.
num_nnz : int
Number of non-zero elements in corpus.
"""
stats = '%i %i %i' % (num_docs, num_terms, num_nnz)
if len(stats) > 50:
raise ValueError('Invalid stats: matrix too large!')
self.fout.seek(len(MmWriter.HEADER_LINE))
self.fout.write(utils.to_utf8(stats))
def write_vector(self, docno, vector):
"""Write a single sparse vector to the file.
Parameters
----------
docno : int
Number of document.
vector : list of (int, number)
Document in BoW format.
Returns
-------
(int, int)
Largest word index in the vector and the number of entries written. If the vector is empty, returns (-1, 0).
"""
assert self.headers_written, "must write Matrix Market file headers before writing data!"
assert self.last_docno < docno, "documents %i and %i not in sequential order!" % (self.last_docno, docno)
vector = sorted((i, w) for i, w in vector if abs(w) > 1e-12) # ignore near-zero entries
for termid, weight in vector: # write term ids in sorted order
# +1 because MM format starts counting from 1
self.fout.write(utils.to_utf8("%i %i %s\n" % (docno + 1, termid + 1, weight)))
self.last_docno = docno
return (vector[-1][0], len(vector)) if vector else (-1, 0)
@staticmethod
def write_corpus(fname, corpus, progress_cnt=1000, index=False, num_terms=None, metadata=False):
"""Save the corpus to disk in `Matrix Market format <https://math.nist.gov/MatrixMarket/formats.html>`_.
Parameters
----------
fname : str
Filename of the resulting file.
corpus : iterable of list of (int, number)
Corpus in streamed bag-of-words format.
progress_cnt : int, optional
Print progress for every `progress_cnt` number of documents.
index : bool, optional
Return offsets?
num_terms : int, optional
Number of terms in the corpus. If provided, the `corpus.num_terms` attribute (if any) will be ignored.
metadata : bool, optional
Generate a metadata file?
Returns
-------
offsets : {list of int, None}
List of offsets (if index=True) or nothing.
Notes
-----
Documents are processed one at a time, so the whole corpus is allowed to be larger than the available RAM.
See Also
--------
:func:`gensim.corpora.mmcorpus.MmCorpus.save_corpus`
Save corpus to disk.
"""
mw = MmWriter(fname)
# write empty headers to the file (with enough space to be overwritten later)
mw.write_headers(-1, -1, -1) # will print 50 spaces followed by newline on the stats line
# calculate necessary header info (nnz elements, num terms, num docs) while writing out vectors
_num_terms, num_nnz = 0, 0
docno, poslast = -1, -1
offsets = []
if hasattr(corpus, 'metadata'):
orig_metadata = corpus.metadata
corpus.metadata = metadata
if metadata:
docno2metadata = {}
else:
metadata = False
for docno, doc in enumerate(corpus):
if metadata:
bow, data = doc
docno2metadata[docno] = data
else:
bow = doc
if docno % progress_cnt == 0:
logger.info("PROGRESS: saving document #%i", docno)
if index:
posnow = mw.fout.tell()
if posnow == poslast:
offsets[-1] = -1
offsets.append(posnow)
poslast = posnow
max_id, veclen = mw.write_vector(docno, bow)
_num_terms = max(_num_terms, 1 + max_id)
num_nnz += veclen
if metadata:
utils.pickle(docno2metadata, fname + '.metadata.cpickle')
corpus.metadata = orig_metadata
num_docs = docno + 1
num_terms = num_terms or _num_terms
if num_docs * num_terms != 0:
logger.info(
"saved %ix%i matrix, density=%.3f%% (%i/%i)",
num_docs, num_terms, 100.0 * num_nnz / (num_docs * num_terms), num_nnz, num_docs * num_terms
)
# now write proper headers, by seeking and overwriting the spaces written earlier
mw.fake_headers(num_docs, num_terms, num_nnz)
mw.close()
if index:
return offsets
def __del__(self):
"""Close `self.fout` file. Alias for :meth:`~gensim.matutils.MmWriter.close`.
Warnings
--------
Closing the file explicitly via the close() method is preferred and safer.
"""
self.close() # does nothing if called twice (on an already closed file), so no worries
def close(self):
"""Close `self.fout` file."""
logger.debug("closing %s", self.fname)
if hasattr(self, 'fout'):
self.fout.close()
try:
from gensim.corpora._mmreader import MmReader # noqa: F401
except ImportError:
raise utils.NO_CYTHON
# ---------------------------------------------------------------------------
# piskvorky_gensim/gensim/downloader.py
# ---------------------------------------------------------------------------
"""
This module is an API for downloading, getting information about, and loading datasets/models.
See `RaRe-Technologies/gensim-data <https://github.com/RaRe-Technologies/gensim-data>`_ repo
for more information about models/datasets/how-to-add-new/etc.
Give information about available models/datasets:
.. sourcecode:: pycon
>>> import gensim.downloader as api
>>>
>>> api.info() # return dict with info about available models/datasets
>>> api.info("text8") # return dict with info about "text8" dataset
Model example:
.. sourcecode:: pycon
>>> import gensim.downloader as api
>>>
>>> model = api.load("glove-twitter-25") # load glove vectors
>>> model.most_similar("cat") # show words similar to the word 'cat'
Dataset example:
.. sourcecode:: pycon
>>> import gensim.downloader as api
>>> from gensim.models import Word2Vec
>>>
>>> dataset = api.load("text8") # load dataset as iterable
>>> model = Word2Vec(dataset) # train w2v model
This API is also available via the CLI::
python -m gensim.downloader --info <dataname> # same as api.info(dataname)
python -m gensim.downloader --info name # same as api.info(name_only=True)
python -m gensim.downloader --download <dataname> # same as api.load(dataname, return_path=True)
You may specify the local subdirectory for saving gensim data using the
GENSIM_DATA_DIR environment variable. For example:
$ export GENSIM_DATA_DIR=/tmp/gensim-data
$ python -m gensim.downloader --download <dataname>
By default, this subdirectory is ~/gensim-data.
"""
from __future__ import absolute_import
import argparse
import os
import io
import json
import logging
import sys
import errno
import hashlib
import math
import shutil
import tempfile
from functools import partial
if sys.version_info[0] == 2:
import urllib
from urllib2 import urlopen
else:
import urllib.request as urllib
from urllib.request import urlopen
_DEFAULT_BASE_DIR = os.path.expanduser('~/gensim-data')
BASE_DIR = os.environ.get('GENSIM_DATA_DIR', _DEFAULT_BASE_DIR)
"""The default location to store downloaded data.
You may override this with the GENSIM_DATA_DIR environment variable.
"""
_PARENT_DIR = os.path.abspath(os.path.join(BASE_DIR, '..'))
base_dir = BASE_DIR # for backward compatibility with some of our test data
logger = logging.getLogger(__name__)
DATA_LIST_URL = "https://raw.githubusercontent.com/RaRe-Technologies/gensim-data/master/list.json"
DOWNLOAD_BASE_URL = "https://github.com/RaRe-Technologies/gensim-data/releases/download"
def _progress(chunks_downloaded, chunk_size, total_size, part=1, total_parts=1):
"""Reporthook for :func:`urllib.urlretrieve`, code from [1]_.
Parameters
----------
chunks_downloaded : int
Number of chunks of data that have been downloaded.
chunk_size : int
Size of each chunk of data.
total_size : int
Total size of the dataset/model.
part : int, optional
Number of the current part; used only if `total_parts` > 1.
total_parts : int, optional
Total number of parts.
References
----------
[1] https://gist.github.com/vladignatyev/06860ec2040cb497f0f3
"""
bar_len = 50
size_downloaded = float(chunks_downloaded * chunk_size)
filled_len = int(math.floor((bar_len * size_downloaded) / total_size))
percent_downloaded = round(((size_downloaded * 100) / total_size), 1)
bar = '=' * filled_len + '-' * (bar_len - filled_len)
if total_parts == 1:
sys.stdout.write(
'\r[%s] %s%s %s/%sMB downloaded' % (
bar, percent_downloaded, "%",
round(size_downloaded / (1024 * 1024), 1),
round(float(total_size) / (1024 * 1024), 1))
)
sys.stdout.flush()
else:
sys.stdout.write(
'\r Part %s/%s [%s] %s%s %s/%sMB downloaded' % (
part + 1, total_parts, bar, percent_downloaded, "%",
round(size_downloaded / (1024 * 1024), 1),
round(float(total_size) / (1024 * 1024), 1))
)
sys.stdout.flush()
def _create_base_dir():
Create the gensim-data directory (BASE_DIR) if it does not already exist.
Raises
------
Exception
An exception is raised when read/write permissions are not available or a file named gensim-data
already exists in the home directory.
"""
if not os.path.isdir(BASE_DIR):
try:
logger.info("Creating %s", BASE_DIR)
os.makedirs(BASE_DIR)
except OSError as e:
if e.errno == errno.EEXIST:
raise Exception(
"Not able to create folder gensim-data in {}. File gensim-data "
"exists in the directory already.".format(_PARENT_DIR)
)
else:
raise Exception(
"Can't create {}. Make sure you have the read/write permissions "
"to the directory or you can try creating the folder manually"
.format(BASE_DIR)
)
def _calculate_md5_checksum(fname):
Calculate the MD5 checksum of a file, exactly the same as the `md5sum` Linux utility.
Parameters
----------
fname : str
Path to the file.
Returns
-------
str
MD5 hash of the file named `fname`.
"""
hash_md5 = hashlib.md5()
with open(fname, "rb") as f:
for chunk in iter(lambda: f.read(4096), b""):
hash_md5.update(chunk)
return hash_md5.hexdigest()
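# Illustrative sketch (assumes the "text8" dataset was already downloaded, e.g.
# via `load("text8", return_path=True)`): the same check that `_download` runs
# can be repeated manually to verify that the local copy is intact:
#
#   >>> path = load("text8", return_path=True)
#   >>> _calculate_md5_checksum(path) == _get_checksum("text8")   # expected True if the download is not corrupted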
def _load_info(url=DATA_LIST_URL, encoding='utf-8'):
"""Load dataset information from the network.
If the network access fails, fall back to a local cache. This cache gets
updated each time a network request _succeeds_.
"""
cache_path = os.path.join(BASE_DIR, 'information.json')
_create_base_dir()
try:
info_bytes = urlopen(url).read()
except (OSError, IOError):
#
# The exception raised by urlopen differs between Py2 and Py3.
#
# https://docs.python.org/3/library/urllib.error.html
# https://docs.python.org/2/library/urllib.html
#
logger.exception(
'caught non-fatal exception while trying to update gensim-data cache from %r; '
'using local cache at %r instead', url, cache_path
)
else:
with open(cache_path, 'wb') as fout:
fout.write(info_bytes)
try:
#
# We need io.open here because Py2 open doesn't support encoding keyword
#
with io.open(cache_path, 'r', encoding=encoding) as fin:
return json.load(fin)
except IOError:
raise ValueError(
'unable to read local cache %r during fallback, '
'connect to the Internet and retry' % cache_path
)
def info(name=None, show_only_latest=True, name_only=False):
"""Provide the information related to model/dataset.
Parameters
----------
name : str, optional
Name of the model/dataset. If not set, return information about all available data.
show_only_latest : bool, optional
If the storage contains several versions of one dataset/model, this flag hides the outdated versions.
Only has an effect when `name` is None.
name_only : bool, optional
If True, will return only the names of available models and corpora.
Returns
-------
dict
Detailed information about one or all models/datasets.
If `name` is specified, return full information about that dataset/model;
otherwise, return information about all available datasets/models.
Raises
------
ValueError
If the `name` passed is incorrect (no such model/dataset).
Examples
--------
.. sourcecode:: pycon
>>> import gensim.downloader as api
>>> api.info("text8") # retrieve information about text8 dataset
{u'checksum': u'68799af40b6bda07dfa47a32612e5364',
u'description': u'Cleaned small sample from wikipedia',
u'file_name': u'text8.gz',
u'parts': 1,
u'source': u'https://mattmahoney.net/dc/text8.zip'}
>>>
>>> api.info() # retrieve information about all available datasets and models
"""
information = _load_info()
if name is not None:
corpora = information['corpora']
models = information['models']
if name in corpora:
return information['corpora'][name]
elif name in models:
return information['models'][name]
else:
raise ValueError("Incorrect model/corpus name")
if not show_only_latest:
return information
if name_only:
return {"corpora": list(information['corpora'].keys()), "models": list(information['models'])}
return {
"corpora": {name: data for (name, data) in information['corpora'].items() if data.get("latest", True)},
"models": {name: data for (name, data) in information['models'].items() if data.get("latest", True)}
}
def _get_checksum(name, part=None):
"""Retrieve the checksum of the model/dataset from gensim-data repository.
Parameters
----------
name : str
Dataset/model name.
part : int, optional
Number of part (for multipart data only).
Returns
-------
str
Retrieved checksum of dataset/model.
"""
information = info()
corpora = information['corpora']
models = information['models']
if part is None:
if name in corpora:
return information['corpora'][name]["checksum"]
elif name in models:
return information['models'][name]["checksum"]
else:
if name in corpora:
return information['corpora'][name]["checksum-{}".format(part)]
elif name in models:
return information['models'][name]["checksum-{}".format(part)]
def _get_parts(name):
"""Retrieve the number of parts in which dataset/model has been split.
Parameters
----------
name: str
Dataset/model name.
Returns
-------
int
Number of parts in which dataset/model has been split.
"""
information = info()
corpora = information['corpora']
models = information['models']
if name in corpora:
return information['corpora'][name]["parts"]
elif name in models:
return information['models'][name]["parts"]
def _download(name):
"""Download and extract the dataset/model.
Parameters
----------
name: str
Name of the dataset/model to download.
Raises
------
Exception
If the MD5 checksum of the downloaded file differs from the one stored in the gensim-data repository.
"""
url_load_file = "{base}/{fname}/__init__.py".format(base=DOWNLOAD_BASE_URL, fname=name)
data_folder_dir = os.path.join(BASE_DIR, name)
data_folder_dir_tmp = data_folder_dir + '_tmp'
tmp_dir = tempfile.mkdtemp()
init_path = os.path.join(tmp_dir, "__init__.py")
urllib.urlretrieve(url_load_file, init_path)
total_parts = _get_parts(name)
if total_parts > 1:
concatenated_folder_name = "{fname}.gz".format(fname=name)
concatenated_folder_dir = os.path.join(tmp_dir, concatenated_folder_name)
for part in range(0, total_parts):
url_data = "{base}/{fname}/{fname}.gz_0{part}".format(base=DOWNLOAD_BASE_URL, fname=name, part=part)
fname = "{f}.gz_0{p}".format(f=name, p=part)
dst_path = os.path.join(tmp_dir, fname)
urllib.urlretrieve(
url_data, dst_path,
reporthook=partial(_progress, part=part, total_parts=total_parts)
)
if _calculate_md5_checksum(dst_path) == _get_checksum(name, part):
sys.stdout.write("\n")
sys.stdout.flush()
logger.info("Part %s/%s downloaded", part + 1, total_parts)
else:
shutil.rmtree(tmp_dir)
raise Exception("Checksum comparison failed, try again")
with open(concatenated_folder_dir, 'wb') as wfp:
for part in range(0, total_parts):
part_path = os.path.join(tmp_dir, "{fname}.gz_0{part}".format(fname=name, part=part))
with open(part_path, "rb") as rfp:
shutil.copyfileobj(rfp, wfp)
os.remove(part_path)
else:
url_data = "{base}/{fname}/{fname}.gz".format(base=DOWNLOAD_BASE_URL, fname=name)
fname = "{fname}.gz".format(fname=name)
dst_path = os.path.join(tmp_dir, fname)
urllib.urlretrieve(url_data, dst_path, reporthook=_progress)
if _calculate_md5_checksum(dst_path) == _get_checksum(name):
sys.stdout.write("\n")
sys.stdout.flush()
logger.info("%s downloaded", name)
else:
shutil.rmtree(tmp_dir)
raise Exception("Checksum comparison failed, try again")
if os.path.exists(data_folder_dir_tmp):
os.remove(data_folder_dir_tmp)
shutil.move(tmp_dir, data_folder_dir_tmp)
os.rename(data_folder_dir_tmp, data_folder_dir)
def _get_filename(name):
"""Retrieve the filename of the dataset/model.
Parameters
----------
name: str
Name of dataset/model.
Returns
-------
str:
Filename of the dataset/model.
"""
information = info()
corpora = information['corpora']
models = information['models']
if name in corpora:
return information['corpora'][name]["file_name"]
elif name in models:
return information['models'][name]["file_name"]
def load(name, return_path=False):
"""Download (if needed) dataset/model and load it to memory (unless `return_path` is set).
Parameters
----------
name: str
Name of the model/dataset.
return_path: bool, optional
If True, return full path to file, otherwise, return loaded model / iterable dataset.
Returns
-------
Model
Requested model, if `name` is a model and `return_path` is False.
Dataset (iterable)
Requested dataset, if `name` is a dataset and `return_path` is False.
str
Path to the file with the dataset/model, only when `return_path` is True.
Raises
------
Exception
Raised if `name` is incorrect.
Examples
--------
Model example:
.. sourcecode:: pycon
>>> import gensim.downloader as api
>>>
>>> model = api.load("glove-twitter-25") # load glove vectors
>>> model.most_similar("cat") # show words similar to the word 'cat'
Dataset example:
.. sourcecode:: pycon
>>> import gensim.downloader as api
>>>
>>> wiki = api.load("wiki-en") # load extracted Wikipedia dump, around 6 Gb
>>> for article in wiki: # iterate over all wiki articles
>>> pass
Download only example:
.. sourcecode:: pycon
>>> import gensim.downloader as api
>>>
>>> print(api.load("wiki-en", return_path=True)) # output: /home/user/gensim-data/wiki-en/wiki-en.gz
"""
_create_base_dir()
file_name = _get_filename(name)
if file_name is None:
raise ValueError("Incorrect model/corpus name")
folder_dir = os.path.join(BASE_DIR, name)
path = os.path.join(folder_dir, file_name)
if not os.path.exists(folder_dir):
_download(name)
if return_path:
return path
else:
sys.path.insert(0, BASE_DIR)
module = __import__(name)
return module.load_data()
if __name__ == '__main__':
logging.basicConfig(
format='%(asctime)s : %(name)s : %(levelname)s : %(message)s', stream=sys.stdout, level=logging.INFO
)
parser = argparse.ArgumentParser(
description="Gensim console API",
usage="python -m gensim.api.downloader [-h] [-d data_name | -i data_name]"
)
group = parser.add_mutually_exclusive_group()
group.add_argument(
"-d", "--download", metavar="data_name", nargs=1,
help="To download a corpus/model : python -m gensim.downloader -d <dataname>"
)
full_information = 1
group.add_argument(
"-i", "--info", metavar="data_name", nargs='?', const=full_information,
help="To get information about a corpus/model : python -m gensim.downloader -i <dataname>"
)
args = parser.parse_args()
if args.download is not None:
data_path = load(args.download[0], return_path=True)
logger.info("Data has been installed and data path is %s", data_path)
elif args.info is not None:
if args.info == 'name':
print(json.dumps(info(name_only=True), indent=4))
else:
output = info() if (args.info == full_information) else info(name=args.info)
print(json.dumps(output, indent=4))
# ---------------------------------------------------------------------------
# piskvorky_gensim/gensim/parsing/preprocessing.py
# ---------------------------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""This module contains methods for parsing and preprocessing strings.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import remove_stopwords, preprocess_string
>>> remove_stopwords("Better late than never, but better never late.")
u'Better late never, better late.'
>>>
>>> preprocess_string("<i>Hel 9lo</i> <b>Wo9 rld</b>! Th3 weather_is really g00d today, isn't it?")
[u'hel', u'rld', u'weather', u'todai', u'isn']
"""
import re
import string
import glob
from gensim import utils
from gensim.parsing.porter import PorterStemmer
STOPWORDS = frozenset([
'all', 'six', 'just', 'less', 'being', 'indeed', 'over', 'move', 'anyway', 'four', 'not', 'own', 'through',
'using', 'fifty', 'where', 'mill', 'only', 'find', 'before', 'one', 'whose', 'system', 'how', 'somewhere',
'much', 'thick', 'show', 'had', 'enough', 'should', 'to', 'must', 'whom', 'seeming', 'yourselves', 'under',
'ours', 'two', 'has', 'might', 'thereafter', 'latterly', 'do', 'them', 'his', 'around', 'than', 'get', 'very',
'de', 'none', 'cannot', 'every', 'un', 'they', 'front', 'during', 'thus', 'now', 'him', 'nor', 'name', 'regarding',
'several', 'hereafter', 'did', 'always', 'who', 'didn', 'whither', 'this', 'someone', 'either', 'each', 'become',
'thereupon', 'sometime', 'side', 'towards', 'therein', 'twelve', 'because', 'often', 'ten', 'our', 'doing', 'km',
'eg', 'some', 'back', 'used', 'up', 'go', 'namely', 'computer', 'are', 'further', 'beyond', 'ourselves', 'yet',
'out', 'even', 'will', 'what', 'still', 'for', 'bottom', 'mine', 'since', 'please', 'forty', 'per', 'its',
'everything', 'behind', 'does', 'various', 'above', 'between', 'it', 'neither', 'seemed', 'ever', 'across', 'she',
'somehow', 'be', 'we', 'full', 'never', 'sixty', 'however', 'here', 'otherwise', 'were', 'whereupon', 'nowhere',
'although', 'found', 'alone', 're', 'along', 'quite', 'fifteen', 'by', 'both', 'about', 'last', 'would',
'anything', 'via', 'many', 'could', 'thence', 'put', 'against', 'keep', 'etc', 'amount', 'became', 'ltd', 'hence',
'onto', 'or', 'con', 'among', 'already', 'co', 'afterwards', 'formerly', 'within', 'seems', 'into', 'others',
'while', 'whatever', 'except', 'down', 'hers', 'everyone', 'done', 'least', 'another', 'whoever', 'moreover',
'couldnt', 'throughout', 'anyhow', 'yourself', 'three', 'from', 'her', 'few', 'together', 'top', 'there', 'due',
'been', 'next', 'anyone', 'eleven', 'cry', 'call', 'therefore', 'interest', 'then', 'thru', 'themselves',
'hundred', 'really', 'sincere', 'empty', 'more', 'himself', 'elsewhere', 'mostly', 'on', 'fire', 'am', 'becoming',
'hereby', 'amongst', 'else', 'part', 'everywhere', 'too', 'kg', 'herself', 'former', 'those', 'he', 'me', 'myself',
'made', 'twenty', 'these', 'was', 'bill', 'cant', 'us', 'until', 'besides', 'nevertheless', 'below', 'anywhere',
'nine', 'can', 'whether', 'of', 'your', 'toward', 'my', 'say', 'something', 'and', 'whereafter', 'whenever',
'give', 'almost', 'wherever', 'is', 'describe', 'beforehand', 'herein', 'doesn', 'an', 'as', 'itself', 'at',
'have', 'in', 'seem', 'whence', 'ie', 'any', 'fill', 'again', 'hasnt', 'inc', 'thereby', 'thin', 'no', 'perhaps',
'latter', 'meanwhile', 'when', 'detail', 'same', 'wherein', 'beside', 'also', 'that', 'other', 'take', 'which',
'becomes', 'you', 'if', 'nobody', 'unless', 'whereas', 'see', 'though', 'may', 'after', 'upon', 'most', 'hereupon',
'eight', 'but', 'serious', 'nothing', 'such', 'why', 'off', 'a', 'don', 'whereby', 'third', 'i', 'whole', 'noone',
'sometimes', 'well', 'amoungst', 'yours', 'their', 'rather', 'without', 'so', 'five', 'the', 'first', 'with',
'make', 'once'
])
RE_PUNCT = re.compile(r'([%s])+' % re.escape(string.punctuation), re.UNICODE)
RE_TAGS = re.compile(r"<([^>]+)>", re.UNICODE)
RE_NUMERIC = re.compile(r"[0-9]+", re.UNICODE)
RE_NONALPHA = re.compile(r"\W", re.UNICODE)
RE_AL_NUM = re.compile(r"([a-z]+)([0-9]+)", flags=re.UNICODE)
RE_NUM_AL = re.compile(r"([0-9]+)([a-z]+)", flags=re.UNICODE)
RE_WHITESPACE = re.compile(r"(\s)+", re.UNICODE)
def remove_stopwords(s, stopwords=None):
"""Remove :const:`~gensim.parsing.preprocessing.STOPWORDS` from `s`.
Parameters
----------
s : str
stopwords : iterable of str, optional
Sequence of stopwords
If None - using :const:`~gensim.parsing.preprocessing.STOPWORDS`
Returns
-------
str
Unicode string without `stopwords`.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import remove_stopwords
>>> remove_stopwords("Better late than never, but better never late.")
u'Better late never, better late.'
"""
s = utils.to_unicode(s)
return " ".join(remove_stopword_tokens(s.split(), stopwords))
def remove_stopword_tokens(tokens, stopwords=None):
"""Remove stopword tokens using list `stopwords`.
Parameters
----------
tokens : iterable of str
Sequence of tokens.
stopwords : iterable of str, optional
Sequence of stopwords
If None - using :const:`~gensim.parsing.preprocessing.STOPWORDS`
Returns
-------
list of str
List of tokens without `stopwords`.
"""
if stopwords is None:
stopwords = STOPWORDS
return [token for token in tokens if token not in stopwords]
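# A minimal usage sketch for `remove_stopword_tokens` (arbitrary tokens): passing
# an explicit `stopwords` collection overrides the built-in STOPWORDS set:
#
#   >>> remove_stopword_tokens(["the", "cat", "sat"], stopwords={"the"})
#   ['cat', 'sat']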
def strip_punctuation(s):
"""Replace ASCII punctuation characters with spaces in `s` using :const:`~gensim.parsing.preprocessing.RE_PUNCT`.
Parameters
----------
s : str
Returns
-------
str
Unicode string without punctuation characters.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import strip_punctuation
>>> strip_punctuation("A semicolon is a stronger break than a comma, but not as much as a full stop!")
u'A semicolon is a stronger break than a comma but not as much as a full stop '
"""
s = utils.to_unicode(s)
# For unicode enhancement options see https://github.com/RaRe-Technologies/gensim/issues/2962
return RE_PUNCT.sub(" ", s)
def strip_tags(s):
"""Remove tags from `s` using :const:`~gensim.parsing.preprocessing.RE_TAGS`.
Parameters
----------
s : str
Returns
-------
str
Unicode string without tags.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import strip_tags
>>> strip_tags("<i>Hello</i> <b>World</b>!")
u'Hello World!'
"""
s = utils.to_unicode(s)
return RE_TAGS.sub("", s)
def strip_short(s, minsize=3):
Remove words shorter than `minsize` characters from `s`.
Parameters
----------
s : str
minsize : int, optional
Returns
-------
str
Unicode string without short words.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import strip_short
>>> strip_short("salut les amis du 59")
u'salut les amis'
>>>
>>> strip_short("one two three four five six seven eight nine ten", minsize=5)
u'three seven eight'
"""
s = utils.to_unicode(s)
return " ".join(remove_short_tokens(s.split(), minsize))
def remove_short_tokens(tokens, minsize=3):
"""Remove tokens shorter than `minsize` chars.
Parameters
----------
tokens : iterable of str
Sequence of tokens.
minsize : int, optional
Minimum length of a token (inclusive); shorter tokens are removed.
Returns
-------
list of str
List of tokens without short tokens.
"""
return [token for token in tokens if len(token) >= minsize]
def strip_numeric(s):
"""Remove digits from `s` using :const:`~gensim.parsing.preprocessing.RE_NUMERIC`.
Parameters
----------
s : str
Returns
-------
str
Unicode string without digits.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import strip_numeric
>>> strip_numeric("0text24gensim365test")
u'textgensimtest'
"""
s = utils.to_unicode(s)
return RE_NUMERIC.sub("", s)
def strip_non_alphanum(s):
Remove non-alphanumeric (non-word) characters from `s` using :const:`~gensim.parsing.preprocessing.RE_NONALPHA`.
Parameters
----------
s : str
Returns
-------
str
Unicode string with word characters (alphanumerics and underscore) only; other characters are replaced by spaces.
Notes
-----
Word characters - alphanumeric & underscore.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import strip_non_alphanum
>>> strip_non_alphanum("if-you#can%read$this&then@this#method^works")
u'if you can read this then this method works'
"""
s = utils.to_unicode(s)
return RE_NONALPHA.sub(" ", s)
def strip_multiple_whitespaces(s):
r"""Remove repeating whitespace characters (spaces, tabs, line breaks) from `s`
and turns tabs & line breaks into spaces using :const:`~gensim.parsing.preprocessing.RE_WHITESPACE`.
Parameters
----------
s : str
Returns
-------
str
Unicode string without consecutive repeated whitespace characters.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import strip_multiple_whitespaces
>>> strip_multiple_whitespaces("salut" + '\r' + " les" + '\n' + " loulous!")
u'salut les loulous!'
"""
s = utils.to_unicode(s)
return RE_WHITESPACE.sub(" ", s)
def split_alphanum(s):
"""Add spaces between digits & letters in `s` using :const:`~gensim.parsing.preprocessing.RE_AL_NUM`.
Parameters
----------
s : str
Returns
-------
str
Unicode string with spaces between digits & letters.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import split_alphanum
>>> split_alphanum("24.0hours7 days365 a1b2c3")
u'24.0 hours 7 days 365 a 1 b 2 c 3'
"""
s = utils.to_unicode(s)
s = RE_AL_NUM.sub(r"\1 \2", s)
return RE_NUM_AL.sub(r"\1 \2", s)
def stem_text(text):
"""Transform `s` into lowercase and stem it.
Parameters
----------
text : str
Returns
-------
str
Unicode lowercased and porter-stemmed version of string `text`.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import stem_text
>>> stem_text("While it is quite useful to be able to search a large collection of documents almost instantly.")
u'while it is quit us to be abl to search a larg collect of document almost instantly.'
"""
text = utils.to_unicode(text)
p = PorterStemmer()
return ' '.join(p.stem(word) for word in text.split())
stem = stem_text
def lower_to_unicode(text, encoding='utf8', errors='strict'):
"""Lowercase `text` and convert to unicode, using :func:`gensim.utils.any2unicode`.
Parameters
----------
text : str
Input text.
encoding : str, optional
Encoding that will be used for conversion.
errors : str, optional
Error handling behaviour, used as parameter for `unicode` function (python2 only).
Returns
-------
str
Unicode version of `text`.
See Also
--------
:func:`gensim.utils.any2unicode`
Convert any string to unicode-string.
"""
return utils.to_unicode(text.lower(), encoding, errors)
def split_on_space(s):
"""Split line by spaces, used in :class:`gensim.corpora.lowcorpus.LowCorpus`.
Parameters
----------
s : str
Some line.
Returns
-------
list of str
List of tokens from `s`.
"""
return [word for word in utils.to_unicode(s).strip().split(' ') if word]
DEFAULT_FILTERS = [
lambda x: x.lower(), strip_tags, strip_punctuation,
strip_multiple_whitespaces, strip_numeric,
remove_stopwords, strip_short, stem_text
]
def preprocess_string(s, filters=DEFAULT_FILTERS):
"""Apply list of chosen filters to `s`.
Default list of filters:
* :func:`~gensim.parsing.preprocessing.strip_tags`,
* :func:`~gensim.parsing.preprocessing.strip_punctuation`,
* :func:`~gensim.parsing.preprocessing.strip_multiple_whitespaces`,
* :func:`~gensim.parsing.preprocessing.strip_numeric`,
* :func:`~gensim.parsing.preprocessing.remove_stopwords`,
* :func:`~gensim.parsing.preprocessing.strip_short`,
* :func:`~gensim.parsing.preprocessing.stem_text`.
Parameters
----------
s : str
filters: list of functions, optional
Returns
-------
list of str
List of processed (cleaned) tokens from `s`.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import preprocess_string
>>> preprocess_string("<i>Hel 9lo</i> <b>Wo9 rld</b>! Th3 weather_is really g00d today, isn't it?")
[u'hel', u'rld', u'weather', u'todai', u'isn']
>>>
>>> s = "<i>Hel 9lo</i> <b>Wo9 rld</b>! Th3 weather_is really g00d today, isn't it?"
>>> CUSTOM_FILTERS = [lambda x: x.lower(), strip_tags, strip_punctuation]
>>> preprocess_string(s, CUSTOM_FILTERS)
[u'hel', u'9lo', u'wo9', u'rld', u'th3', u'weather', u'is', u'really', u'g00d', u'today', u'isn', u't', u'it']
"""
s = utils.to_unicode(s)
for f in filters:
s = f(s)
return s.split()
def preprocess_documents(docs):
"""Apply :const:`~gensim.parsing.preprocessing.DEFAULT_FILTERS` to the documents strings.
Parameters
----------
docs : list of str
Returns
-------
list of list of str
Processed documents split by whitespace.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.preprocessing import preprocess_documents
>>> preprocess_documents(["<i>Hel 9lo</i> <b>Wo9 rld</b>!", "Th3 weather_is really g00d today, isn't it?"])
[[u'hel', u'rld'], [u'weather', u'todai', u'isn']]
"""
return [preprocess_string(d) for d in docs]
def read_file(path):
with utils.open(path, 'rb') as fin:
return fin.read()
def read_files(pattern):
return [read_file(fname) for fname in glob.glob(pattern)]
# ---------------------------------------------------------------------------
# piskvorky_gensim/gensim/parsing/__init__.py
# ---------------------------------------------------------------------------
"""This package contains functions to preprocess raw text"""
from .porter import PorterStemmer # noqa:F401
from .preprocessing import ( # noqa:F401
preprocess_documents,
preprocess_string,
read_file,
read_files,
remove_stopwords,
split_alphanum,
stem_text,
strip_multiple_whitespaces,
strip_non_alphanum,
strip_numeric,
strip_punctuation,
strip_short,
strip_tags,
)
# ---------------------------------------------------------------------------
# piskvorky_gensim/gensim/parsing/porter.py
# ---------------------------------------------------------------------------
#!/usr/bin/env python
"""Porter Stemming Algorithm
This is the Porter stemming algorithm, ported to Python from the
version coded up in ANSI C by the author. It may be regarded
as canonical, in that it follows the algorithm presented in [1]_, see also [2]_
Author - Vivake Gupta (v@nano.com), optimizations and cleanup of the code by Lars Buitinck.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.porter import PorterStemmer
>>>
>>> p = PorterStemmer()
>>> p.stem("apple")
'appl'
>>>
>>> p.stem_sentence("Cats and ponies have meeting")
'cat and poni have meet'
>>>
>>> p.stem_documents(["Cats and ponies", "have meeting"])
['cat and poni', 'have meet']
.. [1] Porter, 1980, An algorithm for suffix stripping, http://www.cs.odu.edu/~jbollen/IR04/readings/readings5.pdf
.. [2] http://www.tartarus.org/~martin/PorterStemmer
"""
class PorterStemmer:
"""Class contains implementation of Porter stemming algorithm.
Attributes
----------
b : str
Buffer holding a word to be stemmed. The letters are in b[0], b[1] ... ending at b[`k`].
k : int
Readjusted downwards as the stemming progresses.
j : int
General offset into the string.
"""
def __init__(self):
self.b = "" # buffer for word to be stemmed
self.k = 0
self.j = 0 # j is a general offset into the string
def _cons(self, i):
"""Check if b[i] is a consonant letter.
Parameters
----------
i : int
Index for `b`.
Returns
-------
bool
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.porter import PorterStemmer
>>> p = PorterStemmer()
>>> p.b = "hi"
>>> p._cons(1)
False
>>> p.b = "meow"
>>> p._cons(3)
True
"""
ch = self.b[i]
if ch in "aeiou":
return False
if ch == 'y':
return i == 0 or not self._cons(i - 1)
return True
def _m(self):
"""Calculate the number of consonant sequences between 0 and j.
If c is a consonant sequence and v a vowel sequence, and <..>
indicates arbitrary presence,
<c><v> gives 0
<c>vc<v> gives 1
<c>vcvc<v> gives 2
<c>vcvcvc<v> gives 3
Returns
-------
int
The number of consonant sequences between 0 and j.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.porter import PorterStemmer
>>> p = PorterStemmer()
>>> p.b = "<bm>aobm<ao>"
>>> p.j = 11
>>> p._m()
2
"""
i = 0
while True:
if i > self.j:
return 0
if not self._cons(i):
break
i += 1
i += 1
n = 0
while True:
while True:
if i > self.j:
return n
if self._cons(i):
break
i += 1
i += 1
n += 1
            while True:
if i > self.j:
return n
if not self._cons(i):
break
i += 1
i += 1
def _vowelinstem(self):
"""Check if b[0: j + 1] contains a vowel letter.
Returns
-------
bool
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.porter import PorterStemmer
>>> p = PorterStemmer()
>>> p.b = "gnsm"
>>> p.j = 3
>>> p._vowelinstem()
False
>>> p.b = "gensim"
>>> p.j = 5
>>> p._vowelinstem()
True
"""
return not all(self._cons(i) for i in range(self.j + 1))
def _doublec(self, j):
"""Check if b[j - 1: j + 1] contain a double consonant letter.
Parameters
----------
j : int
Index for `b`
Returns
-------
bool
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.porter import PorterStemmer
>>> p = PorterStemmer()
>>> p.b = "real"
>>> p.j = 3
>>> p._doublec(3)
False
>>> p.b = "really"
>>> p.j = 5
>>> p._doublec(4)
True
"""
return j > 0 and self.b[j] == self.b[j - 1] and self._cons(j)
def _cvc(self, i):
"""Check if b[j - 2: j + 1] makes the (consonant, vowel, consonant) pattern and also
if the second 'c' is not 'w', 'x' or 'y'. This is used when trying to restore an 'e' at the end of a short word,
e.g. cav(e), lov(e), hop(e), crim(e), but snow, box, tray.
Parameters
----------
i : int
Index for `b`
Returns
-------
bool
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.porter import PorterStemmer
>>> p = PorterStemmer()
>>> p.b = "lib"
>>> p.j = 2
>>> p._cvc(2)
True
>>> p.b = "dll"
>>> p.j = 2
>>> p._cvc(2)
False
>>> p.b = "wow"
>>> p.j = 2
>>> p._cvc(2)
False
"""
if i < 2 or not self._cons(i) or self._cons(i - 1) or not self._cons(i - 2):
return False
return self.b[i] not in "wxy"
def _ends(self, s):
"""Check if b[: k + 1] ends with `s`.
Parameters
----------
s : str
Returns
-------
bool
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.porter import PorterStemmer
>>> p = PorterStemmer()
>>> p.b = "cowboy"
>>> p.j = 5
>>> p.k = 2
>>> p._ends("cow")
True
"""
if s[-1] != self.b[self.k]: # tiny speed-up
return False
length = len(s)
if length > (self.k + 1):
return False
if self.b[self.k - length + 1:self.k + 1] != s:
return False
self.j = self.k - length
return True
def _setto(self, s):
"""Append `s` to `b`, adjusting `k`.
Parameters
----------
s : str
"""
self.b = self.b[:self.j + 1] + s
self.k = len(self.b) - 1
    def _r(self, s):
        """Replace the matched suffix with `s`, but only if the preceding stem has a positive measure."""
        if self._m() > 0:
            self._setto(s)
def _step1ab(self):
"""Get rid of plurals and -ed or -ing.
caresses -> caress
ponies -> poni
ties -> ti
caress -> caress
cats -> cat
feed -> feed
agreed -> agree
disabled -> disable
matting -> mat
mating -> mate
meeting -> meet
milling -> mill
messing -> mess
meetings -> meet
"""
if self.b[self.k] == 's':
if self._ends("sses"):
self.k -= 2
elif self._ends("ies"):
self._setto("i")
elif self.b[self.k - 1] != 's':
self.k -= 1
if self._ends("eed"):
if self._m() > 0:
self.k -= 1
elif (self._ends("ed") or self._ends("ing")) and self._vowelinstem():
self.k = self.j
if self._ends("at"):
self._setto("ate")
elif self._ends("bl"):
self._setto("ble")
elif self._ends("iz"):
self._setto("ize")
elif self._doublec(self.k):
if self.b[self.k - 1] not in "lsz":
self.k -= 1
elif self._m() == 1 and self._cvc(self.k):
self._setto("e")
def _step1c(self):
"""Turn terminal 'y' to 'i' when there is another vowel in the stem."""
if self._ends("y") and self._vowelinstem():
self.b = self.b[:self.k] + 'i'
def _step2(self):
"""Map double suffices to single ones.
So, -ization ( = -ize plus -ation) maps to -ize etc. Note that the
string before the suffix must give _m() > 0.
"""
ch = self.b[self.k - 1]
if ch == 'a':
if self._ends("ational"):
self._r("ate")
elif self._ends("tional"):
self._r("tion")
elif ch == 'c':
if self._ends("enci"):
self._r("ence")
elif self._ends("anci"):
self._r("ance")
elif ch == 'e':
if self._ends("izer"):
self._r("ize")
elif ch == 'l':
if self._ends("bli"):
self._r("ble") # --DEPARTURE--
# To match the published algorithm, replace this phrase with
# if self._ends("abli"): self._r("able")
elif self._ends("alli"):
self._r("al")
elif self._ends("entli"):
self._r("ent")
elif self._ends("eli"):
self._r("e")
elif self._ends("ousli"):
self._r("ous")
elif ch == 'o':
if self._ends("ization"):
self._r("ize")
elif self._ends("ation"):
self._r("ate")
elif self._ends("ator"):
self._r("ate")
elif ch == 's':
if self._ends("alism"):
self._r("al")
elif self._ends("iveness"):
self._r("ive")
elif self._ends("fulness"):
self._r("ful")
elif self._ends("ousness"):
self._r("ous")
elif ch == 't':
if self._ends("aliti"):
self._r("al")
elif self._ends("iviti"):
self._r("ive")
elif self._ends("biliti"):
self._r("ble")
elif ch == 'g': # --DEPARTURE--
if self._ends("logi"):
self._r("log")
# To match the published algorithm, delete this phrase
def _step3(self):
"""Deal with -ic-, -full, -ness etc. Similar strategy to _step2."""
ch = self.b[self.k]
if ch == 'e':
if self._ends("icate"):
self._r("ic")
elif self._ends("ative"):
self._r("")
elif self._ends("alize"):
self._r("al")
elif ch == 'i':
if self._ends("iciti"):
self._r("ic")
elif ch == 'l':
if self._ends("ical"):
self._r("ic")
elif self._ends("ful"):
self._r("")
elif ch == 's':
if self._ends("ness"):
self._r("")
def _step4(self):
"""Takes off -ant, -ence etc., in context <c>vcvc<v>."""
ch = self.b[self.k - 1]
if ch == 'a':
if not self._ends("al"):
return
elif ch == 'c':
if not self._ends("ance") and not self._ends("ence"):
return
elif ch == 'e':
if not self._ends("er"):
return
elif ch == 'i':
if not self._ends("ic"):
return
elif ch == 'l':
if not self._ends("able") and not self._ends("ible"):
return
elif ch == 'n':
if self._ends("ant"):
pass
elif self._ends("ement"):
pass
elif self._ends("ment"):
pass
elif self._ends("ent"):
pass
else:
return
elif ch == 'o':
if self._ends("ion") and self.b[self.j] in "st":
pass
elif self._ends("ou"):
pass
# takes care of -ous
else:
return
elif ch == 's':
if not self._ends("ism"):
return
elif ch == 't':
if not self._ends("ate") and not self._ends("iti"):
return
elif ch == 'u':
if not self._ends("ous"):
return
elif ch == 'v':
if not self._ends("ive"):
return
elif ch == 'z':
if not self._ends("ize"):
return
else:
return
if self._m() > 1:
self.k = self.j
def _step5(self):
"""Remove a final -e if _m() > 1, and change -ll to -l if m() > 1."""
k = self.j = self.k
if self.b[k] == 'e':
a = self._m()
if a > 1 or (a == 1 and not self._cvc(k - 1)):
self.k -= 1
if self.b[self.k] == 'l' and self._doublec(self.k) and self._m() > 1:
self.k -= 1
def stem(self, w):
"""Stem the word `w`.
Parameters
----------
w : str
Returns
-------
str
Stemmed version of `w`.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.porter import PorterStemmer
>>> p = PorterStemmer()
>>> p.stem("ponies")
'poni'
"""
w = w.lower()
k = len(w) - 1
if k <= 1:
return w # --DEPARTURE--
# With this line, strings of length 1 or 2 don't go through the
# stemming process, although no mention is made of this in the
# published algorithm. Remove the line to match the published
# algorithm.
self.b = w
self.k = k
self._step1ab()
self._step1c()
self._step2()
self._step3()
self._step4()
self._step5()
return self.b[:self.k + 1]
def stem_sentence(self, txt):
"""Stem the sentence `txt`.
Parameters
----------
txt : str
Input sentence.
Returns
-------
str
Stemmed sentence.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.porter import PorterStemmer
>>> p = PorterStemmer()
>>> p.stem_sentence("Wow very nice woman with apple")
'wow veri nice woman with appl'
"""
return " ".join(self.stem(x) for x in txt.split())
def stem_documents(self, docs):
"""Stem documents.
Parameters
----------
docs : list of str
Input documents
Returns
-------
list of str
Stemmed documents.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.parsing.porter import PorterStemmer
>>> p = PorterStemmer()
>>> p.stem_documents(["Have a very nice weekend", "Have a very nice weekend"])
['have a veri nice weekend', 'have a veri nice weekend']
"""
return [self.stem_sentence(x) for x in docs]
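# A minimal sketch (illustrative only, not part of porter.py): the stemmer above also backs
# the higher-level gensim.parsing.preprocessing.stem_text helper, so the two return values
# below are expected to agree on plain ASCII input.
def _example_stemmer_vs_stem_text(sentence="graphs are generalizations of trees"):
    from gensim.parsing.preprocessing import stem_text
    return PorterStemmer().stem_sentence(sentence), stem_text(sentence)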
if __name__ == '__main__':
import sys
p = PorterStemmer()
for f in sys.argv[1:]:
with open(f) as infile:
for line in infile:
print(p.stem_sentence(line))
# ---------------------------------------------------------------------------
# piskvorky_gensim/gensim/test/test_nmf.py  (piskvorky/gensim, LGPL-2.1)
# ---------------------------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2018 Timofey Yefimov <anotherbugmaster@gmail.com>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
import unittest
import copy
import logging
import numbers
import numpy as np
from gensim import matutils
from gensim.models import nmf
from gensim.test import basetmtests
from gensim.test.utils import datapath, get_tmpfile, common_corpus, common_dictionary
class TestNmf(unittest.TestCase, basetmtests.TestBaseTopicModel):
def setUp(self):
self.model = nmf.Nmf(
common_corpus,
id2word=common_dictionary,
chunksize=1,
num_topics=2,
passes=100,
random_state=42,
)
def test_generator(self):
model_1 = nmf.Nmf(
iter(common_corpus * 100),
id2word=common_dictionary,
chunksize=1,
num_topics=2,
passes=1,
random_state=42,
)
model_2 = nmf.Nmf(
common_corpus * 100,
id2word=common_dictionary,
chunksize=1,
num_topics=2,
passes=1,
random_state=42,
)
self.assertTrue(np.allclose(model_1.get_topics(), model_2.get_topics()))
def test_update(self):
model = copy.deepcopy(self.model)
model.update(common_corpus)
self.assertFalse(np.allclose(self.model.get_topics(), model.get_topics()))
def test_random_state(self):
model_1 = nmf.Nmf(
common_corpus,
id2word=common_dictionary,
chunksize=1,
num_topics=2,
passes=100,
random_state=42,
)
model_2 = nmf.Nmf(
common_corpus,
id2word=common_dictionary,
chunksize=1,
num_topics=2,
passes=100,
random_state=0,
)
self.assertTrue(np.allclose(self.model.get_topics(), model_1.get_topics()))
self.assertFalse(np.allclose(self.model.get_topics(), model_2.get_topics()))
def test_transform(self):
# transform one document
doc = list(common_corpus)[0]
transformed = self.model[doc]
vec = matutils.sparse2full(transformed, 2) # convert to dense vector, for easier equality tests
# The results sometimes differ on Windows, for unknown reasons.
# See https://github.com/RaRe-Technologies/gensim/pull/2481#issuecomment-549456750
expected = [0.03028875, 0.96971124]
# must contain the same values, up to re-ordering
self.assertTrue(np.allclose(sorted(vec), sorted(expected), atol=1e-3))
# transform one word
word = 5
transformed = self.model.get_term_topics(word)
vec = matutils.sparse2full(transformed, 2)
expected = [[0.3076869, 0.69231313]]
# must contain the same values, up to re-ordering
self.assertTrue(np.allclose(sorted(vec), sorted(expected), atol=1e-3))
def test_top_topics(self):
top_topics = self.model.top_topics(common_corpus)
for topic, score in top_topics:
self.assertTrue(isinstance(topic, list))
self.assertTrue(isinstance(score, float))
for v, k in topic:
self.assertTrue(isinstance(k, str))
self.assertTrue(np.issubdtype(v, float))
def test_get_topic_terms(self):
topic_terms = self.model.get_topic_terms(1)
for k, v in topic_terms:
self.assertTrue(isinstance(k, numbers.Integral))
self.assertTrue(np.issubdtype(v, float))
def test_get_document_topics(self):
doc_topics = self.model.get_document_topics(common_corpus)
for topic in doc_topics:
self.assertTrue(isinstance(topic, list))
for k, v in topic:
self.assertTrue(isinstance(k, numbers.Integral))
self.assertTrue(np.issubdtype(v, float))
# Test case to use the get_document_topic function for the corpus
all_topics = self.model.get_document_topics(common_corpus)
print(list(all_topics))
for topic in all_topics:
self.assertTrue(isinstance(topic, list))
for k, v in topic: # list of doc_topics
self.assertTrue(isinstance(k, numbers.Integral))
self.assertTrue(np.issubdtype(v, float))
def test_term_topics(self):
# check with word_type
result = self.model.get_term_topics(2)
for topic_no, probability in result:
self.assertTrue(isinstance(topic_no, int))
self.assertTrue(np.issubdtype(probability, float))
# if user has entered word instead, check with word
result = self.model.get_term_topics(str(self.model.id2word[2]))
for topic_no, probability in result:
self.assertTrue(isinstance(topic_no, int))
self.assertTrue(np.issubdtype(probability, float))
def test_persistence(self):
fname = get_tmpfile('gensim_models_nmf.tst')
self.model.save(fname)
model2 = nmf.Nmf.load(fname)
tstvec = []
self.assertTrue(np.allclose(self.model[tstvec], model2[tstvec])) # try projecting an empty vector
def test_large_mmap(self):
fname = get_tmpfile('gensim_models_nmf.tst')
# simulate storing large arrays separately
self.model.save(fname, sep_limit=0)
# test loading the large model arrays with mmap
model2 = nmf.Nmf.load(fname, mmap='r')
self.assertEqual(self.model.num_topics, model2.num_topics)
tstvec = []
self.assertTrue(np.allclose(self.model[tstvec], model2[tstvec])) # try projecting an empty vector
def test_large_mmap_compressed(self):
fname = get_tmpfile('gensim_models_nmf.tst.gz')
# simulate storing large arrays separately
self.model.save(fname, sep_limit=0)
# test loading the large model arrays with mmap
self.assertRaises(IOError, nmf.Nmf.load, fname, mmap='r')
def test_dtype_backward_compatibility(self):
nmf_fname = datapath('nmf_model')
test_doc = [(0, 1), (1, 1), (2, 1)]
expected_topics = [(1, 1.0)]
# save model to use in test
# self.model.save(nmf_fname)
# load a model saved using the latest version of Gensim
model = nmf.Nmf.load(nmf_fname)
# and test it on a predefined document
topics = model[test_doc]
self.assertTrue(np.allclose(expected_topics, topics))
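# A minimal sketch (illustrative only, not part of the test module): the Nmf training
# pattern exercised by TestNmf.setUp, with the learned topics returned for inspection.
def _example_nmf_quickstart():
    model = nmf.Nmf(common_corpus, id2word=common_dictionary, num_topics=2, passes=100, random_state=42)
    return model.show_topics(num_topics=2, num_words=5)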
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
unittest.main()
# ---------------------------------------------------------------------------
# piskvorky_gensim/gensim/test/test_fasttext.py  (piskvorky/gensim, LGPL-2.1)
# ---------------------------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import division
import gzip
import io
import logging
import unittest
import os
import shutil
import subprocess
import struct
import sys
import numpy as np
import pytest
from gensim import utils
from gensim.models.word2vec import LineSentence
from gensim.models.fasttext import FastText as FT_gensim, FastTextKeyedVectors, _unpack
from gensim.models.keyedvectors import KeyedVectors
from gensim.test.utils import (
datapath, get_tmpfile, temporary_file, common_texts as sentences, lee_corpus_list as list_corpus,
)
from gensim.test.test_word2vec import TestWord2VecModel
import gensim.models._fasttext_bin
from gensim.models.fasttext_inner import compute_ngrams, compute_ngrams_bytes, ft_hash_bytes
import gensim.models.fasttext
try:
from ot import emd2 # noqa:F401
POT_EXT = True
except (ImportError, ValueError):
POT_EXT = False
logger = logging.getLogger(__name__)
IS_WIN32 = (os.name == "nt") and (struct.calcsize('P') * 8 == 32)
MAX_WORDVEC_COMPONENT_DIFFERENCE = 1.0e-10
# Limit the size of FastText ngram buckets, for RAM reasons.
# See https://github.com/RaRe-Technologies/gensim/issues/2790
BUCKET = 10000
FT_HOME = os.environ.get("FT_HOME")
FT_CMD = shutil.which("fasttext", path=FT_HOME) or shutil.which("fasttext")
new_sentences = [
['computer', 'artificial', 'intelligence'],
['artificial', 'trees'],
['human', 'intelligence'],
['artificial', 'graph'],
['intelligence'],
['artificial', 'intelligence', 'system']
]
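# A minimal sketch (illustrative only, not part of the test module): the train-and-query
# pattern the tests below exercise repeatedly, including a lookup for an out-of-vocabulary
# word, whose vector FastText assembles from character n-grams.
def _example_fasttext_quickstart():
    model = FT_gensim(vector_size=12, min_count=1, seed=42, workers=1, bucket=BUCKET)
    model.build_vocab(sentences)
    model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)
    return model.wv['graphs']  # 'graphs' is out-of-vocabulary; built from shared n-grams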
class TestFastTextModel(unittest.TestCase):
def setUp(self):
self.test_model_file = datapath('lee_fasttext.bin')
self.test_model = gensim.models.fasttext.load_facebook_model(self.test_model_file)
self.test_new_model_file = datapath('lee_fasttext_new.bin')
def test_training(self):
model = FT_gensim(vector_size=12, min_count=1, hs=1, negative=0, seed=42, workers=1, bucket=BUCKET)
model.build_vocab(sentences)
self.model_sanity(model)
model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)
sims = model.wv.most_similar('graph', topn=10)
self.assertEqual(model.wv.vectors.shape, (12, 12))
self.assertEqual(len(model.wv), 12)
self.assertEqual(model.wv.vectors_vocab.shape[1], 12)
self.assertEqual(model.wv.vectors_ngrams.shape[1], 12)
self.model_sanity(model)
# test querying for "most similar" by vector
graph_vector = model.wv.get_vector('graph', norm=True)
sims2 = model.wv.most_similar(positive=[graph_vector], topn=11)
sims2 = [(w, sim) for w, sim in sims2 if w != 'graph'] # ignore 'graph' itself
self.assertEqual(sims, sims2)
# build vocab and train in one step; must be the same as above
model2 = FT_gensim(sentences, vector_size=12, min_count=1, hs=1, negative=0, seed=42, workers=1, bucket=BUCKET)
self.models_equal(model, model2)
# verify oov-word vector retrieval
invocab_vec = model.wv['minors'] # invocab word
self.assertEqual(len(invocab_vec), 12)
oov_vec = model.wv['minor'] # oov word
self.assertEqual(len(oov_vec), 12)
def test_fast_text_train_parameters(self):
model = FT_gensim(vector_size=12, min_count=1, hs=1, negative=0, seed=42, workers=1, bucket=BUCKET)
model.build_vocab(corpus_iterable=sentences)
self.assertRaises(TypeError, model.train, corpus_file=11111, total_examples=1, epochs=1)
self.assertRaises(TypeError, model.train, corpus_iterable=11111, total_examples=1, epochs=1)
self.assertRaises(
TypeError, model.train, corpus_iterable=sentences, corpus_file='test', total_examples=1, epochs=1)
self.assertRaises(TypeError, model.train, corpus_iterable=None, corpus_file=None, total_examples=1, epochs=1)
self.assertRaises(TypeError, model.train, corpus_file=sentences, total_examples=1, epochs=1)
def test_training_fromfile(self):
with temporary_file('gensim_fasttext.tst') as corpus_file:
utils.save_as_line_sentence(sentences, corpus_file)
model = FT_gensim(vector_size=12, min_count=1, hs=1, negative=0, seed=42, workers=1, bucket=BUCKET)
model.build_vocab(corpus_file=corpus_file)
self.model_sanity(model)
model.train(corpus_file=corpus_file, total_words=model.corpus_total_words, epochs=model.epochs)
sims = model.wv.most_similar('graph', topn=10)
self.assertEqual(model.wv.vectors.shape, (12, 12))
self.assertEqual(len(model.wv), 12)
self.assertEqual(model.wv.vectors_vocab.shape[1], 12)
self.assertEqual(model.wv.vectors_ngrams.shape[1], 12)
self.model_sanity(model)
# test querying for "most similar" by vector
graph_vector = model.wv.get_vector('graph', norm=True)
sims2 = model.wv.most_similar(positive=[graph_vector], topn=11)
sims2 = [(w, sim) for w, sim in sims2 if w != 'graph'] # ignore 'graph' itself
self.assertEqual(sims, sims2)
# verify oov-word vector retrieval
invocab_vec = model.wv['minors'] # invocab word
self.assertEqual(len(invocab_vec), 12)
oov_vec = model.wv['minor'] # oov word
self.assertEqual(len(oov_vec), 12)
def models_equal(self, model, model2):
self.assertEqual(len(model.wv), len(model2.wv))
self.assertEqual(model.wv.bucket, model2.wv.bucket)
self.assertTrue(np.allclose(model.wv.vectors_vocab, model2.wv.vectors_vocab))
self.assertTrue(np.allclose(model.wv.vectors_ngrams, model2.wv.vectors_ngrams))
self.assertTrue(np.allclose(model.wv.vectors, model2.wv.vectors))
if model.hs:
self.assertTrue(np.allclose(model.syn1, model2.syn1))
if model.negative:
self.assertTrue(np.allclose(model.syn1neg, model2.syn1neg))
        most_common_word = max(model.wv.key_to_index, key=lambda word: model.wv.get_vecattr(word, 'count'))
self.assertTrue(np.allclose(model.wv[most_common_word], model2.wv[most_common_word]))
def test_persistence(self):
tmpf = get_tmpfile('gensim_fasttext.tst')
model = FT_gensim(sentences, min_count=1, bucket=BUCKET)
model.save(tmpf)
self.models_equal(model, FT_gensim.load(tmpf))
# test persistence of the KeyedVectors of a model
wv = model.wv
wv.save(tmpf)
loaded_wv = FastTextKeyedVectors.load(tmpf)
self.assertTrue(np.allclose(wv.vectors_ngrams, loaded_wv.vectors_ngrams))
self.assertEqual(len(wv), len(loaded_wv))
def test_persistence_fromfile(self):
with temporary_file('gensim_fasttext1.tst') as corpus_file:
utils.save_as_line_sentence(sentences, corpus_file)
tmpf = get_tmpfile('gensim_fasttext.tst')
model = FT_gensim(corpus_file=corpus_file, min_count=1, bucket=BUCKET)
model.save(tmpf)
self.models_equal(model, FT_gensim.load(tmpf))
# test persistence of the KeyedVectors of a model
wv = model.wv
wv.save(tmpf)
loaded_wv = FastTextKeyedVectors.load(tmpf)
self.assertTrue(np.allclose(wv.vectors_ngrams, loaded_wv.vectors_ngrams))
self.assertEqual(len(wv), len(loaded_wv))
def model_sanity(self, model):
self.model_structural_sanity(model)
# TODO: add semantic tests, where appropriate
def model_structural_sanity(self, model):
"""Check a model for basic self-consistency, necessary properties & property
correspondences, but no semantic tests."""
self.assertEqual(model.wv.vectors.shape, (len(model.wv), model.vector_size))
self.assertEqual(model.wv.vectors_vocab.shape, (len(model.wv), model.vector_size))
self.assertEqual(model.wv.vectors_ngrams.shape, (model.wv.bucket, model.vector_size))
self.assertLessEqual(len(model.wv.vectors_ngrams_lockf), len(model.wv.vectors_ngrams))
self.assertLessEqual(len(model.wv.vectors_vocab_lockf), len(model.wv.index_to_key))
self.assertTrue(np.isfinite(model.wv.vectors_ngrams).all(), "NaN in ngrams")
self.assertTrue(np.isfinite(model.wv.vectors_vocab).all(), "NaN in vectors_vocab")
if model.negative:
self.assertTrue(np.isfinite(model.syn1neg).all(), "NaN in syn1neg")
if model.hs:
self.assertTrue(np.isfinite(model.syn1).all(), "NaN in syn1neg")
def test_load_fasttext_format(self):
try:
model = gensim.models.fasttext.load_facebook_model(self.test_model_file)
except Exception as exc:
self.fail('Unable to load FastText model from file %s: %s' % (self.test_model_file, exc))
vocab_size, model_size = 1762, 10
self.assertEqual(model.wv.vectors.shape, (vocab_size, model_size))
        self.assertEqual(len(model.wv), vocab_size)
self.assertEqual(model.wv.vectors_ngrams.shape, (model.wv.bucket, model_size))
expected_vec = [
-0.57144,
-0.0085561,
0.15748,
-0.67855,
-0.25459,
-0.58077,
-0.09913,
1.1447,
0.23418,
0.060007
] # obtained using ./fasttext print-word-vectors lee_fasttext_new.bin
actual_vec = model.wv["hundred"]
self.assertTrue(np.allclose(actual_vec, expected_vec, atol=1e-4))
# vector for oov words are slightly different from original FastText due to discarding unused ngrams
# obtained using a modified version of ./fasttext print-word-vectors lee_fasttext_new.bin
expected_vec_oov = [
-0.21929,
-0.53778,
-0.22463,
-0.41735,
0.71737,
-1.59758,
-0.24833,
0.62028,
0.53203,
0.77568
]
actual_vec_oov = model.wv["rejection"]
self.assertTrue(np.allclose(actual_vec_oov, expected_vec_oov, atol=1e-4))
self.assertEqual(model.min_count, 5)
self.assertEqual(model.window, 5)
self.assertEqual(model.epochs, 5)
self.assertEqual(model.negative, 5)
self.assertEqual(model.sample, 0.0001)
self.assertEqual(model.wv.bucket, 1000)
self.assertEqual(model.wv.max_n, 6)
self.assertEqual(model.wv.min_n, 3)
self.assertEqual(model.wv.vectors.shape, (len(model.wv), model.vector_size))
self.assertEqual(model.wv.vectors_ngrams.shape, (model.wv.bucket, model.vector_size))
def test_load_fasttext_new_format(self):
try:
new_model = gensim.models.fasttext.load_facebook_model(self.test_new_model_file)
except Exception as exc:
self.fail('Unable to load FastText model from file %s: %s' % (self.test_new_model_file, exc))
vocab_size, model_size = 1763, 10
self.assertEqual(new_model.wv.vectors.shape, (vocab_size, model_size))
        self.assertEqual(len(new_model.wv), vocab_size)
self.assertEqual(new_model.wv.vectors_ngrams.shape, (new_model.wv.bucket, model_size))
expected_vec = [
-0.025627,
-0.11448,
0.18116,
-0.96779,
0.2532,
-0.93224,
0.3929,
0.12679,
-0.19685,
-0.13179
] # obtained using ./fasttext print-word-vectors lee_fasttext_new.bin
actual_vec = new_model.wv["hundred"]
self.assertTrue(np.allclose(actual_vec, expected_vec, atol=1e-4))
# vector for oov words are slightly different from original FastText due to discarding unused ngrams
# obtained using a modified version of ./fasttext print-word-vectors lee_fasttext_new.bin
expected_vec_oov = [
-0.49111,
-0.13122,
-0.02109,
-0.88769,
-0.20105,
-0.91732,
0.47243,
0.19708,
-0.17856,
0.19815
]
actual_vec_oov = new_model.wv["rejection"]
self.assertTrue(np.allclose(actual_vec_oov, expected_vec_oov, atol=1e-4))
self.assertEqual(new_model.min_count, 5)
self.assertEqual(new_model.window, 5)
self.assertEqual(new_model.epochs, 5)
self.assertEqual(new_model.negative, 5)
self.assertEqual(new_model.sample, 0.0001)
self.assertEqual(new_model.wv.bucket, 1000)
self.assertEqual(new_model.wv.max_n, 6)
self.assertEqual(new_model.wv.min_n, 3)
self.assertEqual(new_model.wv.vectors.shape, (len(new_model.wv), new_model.vector_size))
self.assertEqual(new_model.wv.vectors_ngrams.shape, (new_model.wv.bucket, new_model.vector_size))
def test_load_model_supervised(self):
with self.assertRaises(NotImplementedError):
gensim.models.fasttext.load_facebook_model(datapath('pang_lee_polarity_fasttext.bin'))
def test_load_model_with_non_ascii_vocab(self):
model = gensim.models.fasttext.load_facebook_model(datapath('non_ascii_fasttext.bin'))
        self.assertTrue(u'který' in model.wv)
        try:
            model.wv[u'který']
except UnicodeDecodeError:
self.fail('Unable to access vector for utf8 encoded non-ascii word')
def test_load_model_non_utf8_encoding(self):
model = gensim.models.fasttext.load_facebook_model(datapath('cp852_fasttext.bin'), encoding='cp852')
        self.assertTrue(u'který' in model.wv)
        try:
            model.wv[u'který']
except KeyError:
self.fail('Unable to access vector for cp-852 word')
def test_oov_similarity(self):
word = 'someoovword'
most_similar = self.test_model.wv.most_similar(word)
top_neighbor, top_similarity = most_similar[0]
v1 = self.test_model.wv[word]
v2 = self.test_model.wv[top_neighbor]
top_similarity_direct = self.test_model.wv.cosine_similarities(v1, v2.reshape(1, -1))[0]
self.assertAlmostEqual(top_similarity, top_similarity_direct, places=6)
def test_n_similarity(self):
# In vocab, sanity check
self.assertTrue(np.allclose(self.test_model.wv.n_similarity(['the', 'and'], ['and', 'the']), 1.0))
self.assertEqual(
self.test_model.wv.n_similarity(['the'], ['and']), self.test_model.wv.n_similarity(['and'], ['the']))
# Out of vocab check
self.assertTrue(np.allclose(self.test_model.wv.n_similarity(['night', 'nights'], ['nights', 'night']), 1.0))
self.assertEqual(
self.test_model.wv.n_similarity(['night'], ['nights']),
self.test_model.wv.n_similarity(['nights'], ['night'])
)
def test_similarity(self):
# In vocab, sanity check
self.assertTrue(np.allclose(self.test_model.wv.similarity('the', 'the'), 1.0))
self.assertEqual(self.test_model.wv.similarity('the', 'and'), self.test_model.wv.similarity('and', 'the'))
# Out of vocab check
self.assertTrue(np.allclose(self.test_model.wv.similarity('nights', 'nights'), 1.0))
self.assertEqual(
self.test_model.wv.similarity('night', 'nights'), self.test_model.wv.similarity('nights', 'night'))
def test_most_similar(self):
# In vocab, sanity check
self.assertEqual(len(self.test_model.wv.most_similar(positive=['the', 'and'], topn=5)), 5)
self.assertEqual(self.test_model.wv.most_similar('the'), self.test_model.wv.most_similar(positive=['the']))
# Out of vocab check
self.assertEqual(len(self.test_model.wv.most_similar(['night', 'nights'], topn=5)), 5)
self.assertEqual(
self.test_model.wv.most_similar('nights'), self.test_model.wv.most_similar(positive=['nights']))
def test_most_similar_cosmul(self):
# In vocab, sanity check
self.assertEqual(len(self.test_model.wv.most_similar_cosmul(positive=['the', 'and'], topn=5)), 5)
self.assertEqual(
self.test_model.wv.most_similar_cosmul('the'),
self.test_model.wv.most_similar_cosmul(positive=['the']))
# Out of vocab check
self.assertEqual(len(self.test_model.wv.most_similar_cosmul(['night', 'nights'], topn=5)), 5)
self.assertEqual(
self.test_model.wv.most_similar_cosmul('nights'),
self.test_model.wv.most_similar_cosmul(positive=['nights']))
self.assertEqual(
self.test_model.wv.most_similar_cosmul('the', 'and'),
self.test_model.wv.most_similar_cosmul(positive=['the'], negative=['and']))
def test_lookup(self):
# In vocab, sanity check
self.assertTrue('night' in self.test_model.wv.key_to_index)
self.assertTrue(np.allclose(self.test_model.wv['night'], self.test_model.wv[['night']]))
# Out of vocab check
self.assertFalse('nights' in self.test_model.wv.key_to_index)
self.assertTrue(np.allclose(self.test_model.wv['nights'], self.test_model.wv[['nights']]))
def test_contains(self):
# In vocab, sanity check
self.assertTrue('night' in self.test_model.wv.key_to_index)
self.assertTrue('night' in self.test_model.wv)
# Out of vocab check
self.assertFalse(self.test_model.wv.has_index_for('nights'))
self.assertFalse('nights' in self.test_model.wv.key_to_index)
self.assertTrue('nights' in self.test_model.wv)
@unittest.skipIf(POT_EXT is False, "POT not installed")
def test_wm_distance(self):
doc = ['night', 'payment']
oov_doc = ['nights', 'forests', 'payments']
dist = self.test_model.wv.wmdistance(doc, oov_doc)
self.assertNotEqual(float('inf'), dist)
def test_cbow_neg_training(self):
model_gensim = FT_gensim(
vector_size=48, sg=0, cbow_mean=1, alpha=0.05, window=5, hs=0, negative=5,
min_count=5, epochs=10, batch_words=1000, word_ngrams=1, sample=1e-3, min_n=3, max_n=6,
sorted_vocab=1, workers=1, min_alpha=0.0, bucket=BUCKET)
lee_data = LineSentence(datapath('lee_background.cor'))
model_gensim.build_vocab(lee_data)
orig0 = np.copy(model_gensim.wv.vectors[0])
model_gensim.train(lee_data, total_examples=model_gensim.corpus_count, epochs=model_gensim.epochs)
self.assertFalse((orig0 == model_gensim.wv.vectors[0]).all()) # vector should vary after training
sims_gensim = model_gensim.wv.most_similar('night', topn=10)
sims_gensim_words = [word for (word, distance) in sims_gensim] # get similar words
expected_sims_words = [
u'night.',
u'night,',
u'eight',
u'fight',
u'month',
u'hearings',
u'Washington',
u'remains',
u'overnight',
u'running']
overlaps = set(sims_gensim_words).intersection(expected_sims_words)
overlap_count = len(overlaps)
self.assertGreaterEqual(
overlap_count, 2,
"only %i overlap in expected %s & actual %s" % (overlap_count, expected_sims_words, sims_gensim_words))
def test_cbow_neg_training_fromfile(self):
with temporary_file('gensim_fasttext.tst') as corpus_file:
model_gensim = FT_gensim(
vector_size=48, sg=0, cbow_mean=1, alpha=0.05, window=5, hs=0, negative=5,
min_count=5, epochs=10, batch_words=1000, word_ngrams=1, sample=1e-3, min_n=3, max_n=6,
sorted_vocab=1, workers=1, min_alpha=0.0, bucket=BUCKET)
lee_data = LineSentence(datapath('lee_background.cor'))
utils.save_as_line_sentence(lee_data, corpus_file)
model_gensim.build_vocab(corpus_file=corpus_file)
orig0 = np.copy(model_gensim.wv.vectors[0])
model_gensim.train(corpus_file=corpus_file,
total_words=model_gensim.corpus_total_words,
epochs=model_gensim.epochs)
self.assertFalse((orig0 == model_gensim.wv.vectors[0]).all()) # vector should vary after training
sims_gensim = model_gensim.wv.most_similar('night', topn=10)
sims_gensim_words = [word for (word, distance) in sims_gensim] # get similar words
expected_sims_words = [
u'night.',
u'night,',
u'eight',
u'fight',
u'month',
u'hearings',
u'Washington',
u'remains',
u'overnight',
u'running']
overlaps = set(sims_gensim_words).intersection(expected_sims_words)
overlap_count = len(overlaps)
self.assertGreaterEqual(
overlap_count, 2,
"only %i overlap in expected %s & actual %s" % (overlap_count, expected_sims_words, sims_gensim_words))
def test_sg_neg_training(self):
model_gensim = FT_gensim(
vector_size=48, sg=1, cbow_mean=1, alpha=0.025, window=5, hs=0, negative=5,
min_count=5, epochs=10, batch_words=1000, word_ngrams=1, sample=1e-3, min_n=3, max_n=6,
sorted_vocab=1, workers=1, min_alpha=0.0, bucket=BUCKET * 4)
lee_data = LineSentence(datapath('lee_background.cor'))
model_gensim.build_vocab(lee_data)
orig0 = np.copy(model_gensim.wv.vectors[0])
model_gensim.train(lee_data, total_examples=model_gensim.corpus_count, epochs=model_gensim.epochs)
self.assertFalse((orig0 == model_gensim.wv.vectors[0]).all()) # vector should vary after training
sims_gensim = model_gensim.wv.most_similar('night', topn=10)
sims_gensim_words = [word for (word, distance) in sims_gensim] # get similar words
expected_sims_words = [
u'night.',
u'night,',
u'eight',
u'overnight',
u'overnight.',
u'month',
u'land',
u'firm',
u'singles',
u'death']
overlaps = set(sims_gensim_words).intersection(expected_sims_words)
overlap_count = len(overlaps)
self.assertGreaterEqual(
overlap_count, 2,
"only %i overlap in expected %s & actual %s" % (overlap_count, expected_sims_words, sims_gensim_words))
def test_sg_neg_training_fromfile(self):
with temporary_file('gensim_fasttext.tst') as corpus_file:
model_gensim = FT_gensim(
vector_size=48, sg=1, cbow_mean=1, alpha=0.025, window=5, hs=0, negative=5,
min_count=5, epochs=10, batch_words=1000, word_ngrams=1, sample=1e-3, min_n=3, max_n=6,
sorted_vocab=1, workers=1, min_alpha=0.0, bucket=BUCKET * 4)
lee_data = LineSentence(datapath('lee_background.cor'))
utils.save_as_line_sentence(lee_data, corpus_file)
model_gensim.build_vocab(corpus_file=corpus_file)
orig0 = np.copy(model_gensim.wv.vectors[0])
model_gensim.train(corpus_file=corpus_file,
total_words=model_gensim.corpus_total_words,
epochs=model_gensim.epochs)
self.assertFalse((orig0 == model_gensim.wv.vectors[0]).all()) # vector should vary after training
sims_gensim = model_gensim.wv.most_similar('night', topn=10)
sims_gensim_words = [word for (word, distance) in sims_gensim] # get similar words
expected_sims_words = [
u'night.',
u'night,',
u'eight',
u'overnight',
u'overnight.',
u'month',
u'land',
u'firm',
u'singles',
u'death']
overlaps = set(sims_gensim_words).intersection(expected_sims_words)
overlap_count = len(overlaps)
self.assertGreaterEqual(
overlap_count, 2,
"only %i overlap in expected %s & actual %s" % (overlap_count, expected_sims_words, sims_gensim_words))
def test_online_learning(self):
model_hs = FT_gensim(sentences, vector_size=12, min_count=1, seed=42, hs=1, negative=0, bucket=BUCKET)
self.assertEqual(len(model_hs.wv), 12)
self.assertEqual(model_hs.wv.get_vecattr('graph', 'count'), 3)
model_hs.build_vocab(new_sentences, update=True) # update vocab
self.assertEqual(len(model_hs.wv), 14)
self.assertEqual(model_hs.wv.get_vecattr('graph', 'count'), 4)
self.assertEqual(model_hs.wv.get_vecattr('artificial', 'count'), 4)
def test_online_learning_fromfile(self):
with temporary_file('gensim_fasttext1.tst') as corpus_file, \
temporary_file('gensim_fasttext2.tst') as new_corpus_file:
utils.save_as_line_sentence(sentences, corpus_file)
utils.save_as_line_sentence(new_sentences, new_corpus_file)
model_hs = FT_gensim(
corpus_file=corpus_file, vector_size=12, min_count=1, seed=42, hs=1, negative=0, bucket=BUCKET)
            self.assertEqual(len(model_hs.wv), 12)
            self.assertEqual(model_hs.wv.get_vecattr('graph', 'count'), 3)
model_hs.build_vocab(corpus_file=new_corpus_file, update=True) # update vocab
self.assertEqual(len(model_hs.wv), 14)
            self.assertEqual(model_hs.wv.get_vecattr('graph', 'count'), 4)
            self.assertEqual(model_hs.wv.get_vecattr('artificial', 'count'), 4)
def test_online_learning_after_save(self):
tmpf = get_tmpfile('gensim_fasttext.tst')
model_neg = FT_gensim(sentences, vector_size=12, min_count=0, seed=42, hs=0, negative=5, bucket=BUCKET)
model_neg.save(tmpf)
model_neg = FT_gensim.load(tmpf)
        self.assertEqual(len(model_neg.wv), 12)
model_neg.build_vocab(new_sentences, update=True) # update vocab
model_neg.train(new_sentences, total_examples=model_neg.corpus_count, epochs=model_neg.epochs)
self.assertEqual(len(model_neg.wv), 14)
def test_online_learning_through_ft_format_saves(self):
tmpf = get_tmpfile('gensim_ft_format.tst')
model = FT_gensim(sentences, vector_size=12, min_count=0, seed=42, hs=0, negative=5, bucket=BUCKET)
gensim.models.fasttext.save_facebook_model(model, tmpf)
model_reload = gensim.models.fasttext.load_facebook_model(tmpf)
        self.assertEqual(len(model_reload.wv), 12)
self.assertEqual(len(model_reload.wv), len(model_reload.wv.vectors))
self.assertEqual(len(model_reload.wv), len(model_reload.wv.vectors_vocab))
model_reload.build_vocab(new_sentences, update=True) # update vocab
model_reload.train(new_sentences, total_examples=model_reload.corpus_count, epochs=model_reload.epochs)
self.assertEqual(len(model_reload.wv), 14)
self.assertEqual(len(model_reload.wv), len(model_reload.wv.vectors))
self.assertEqual(len(model_reload.wv), len(model_reload.wv.vectors_vocab))
tmpf2 = get_tmpfile('gensim_ft_format2.tst')
gensim.models.fasttext.save_facebook_model(model_reload, tmpf2)
def test_online_learning_after_save_fromfile(self):
with temporary_file('gensim_fasttext1.tst') as corpus_file, \
temporary_file('gensim_fasttext2.tst') as new_corpus_file:
utils.save_as_line_sentence(sentences, corpus_file)
utils.save_as_line_sentence(new_sentences, new_corpus_file)
tmpf = get_tmpfile('gensim_fasttext.tst')
model_neg = FT_gensim(
corpus_file=corpus_file, vector_size=12, min_count=0, seed=42, hs=0, negative=5, bucket=BUCKET)
model_neg.save(tmpf)
model_neg = FT_gensim.load(tmpf)
            self.assertEqual(len(model_neg.wv), 12)
model_neg.build_vocab(corpus_file=new_corpus_file, update=True) # update vocab
model_neg.train(corpus_file=new_corpus_file, total_words=model_neg.corpus_total_words,
epochs=model_neg.epochs)
self.assertEqual(len(model_neg.wv), 14)
def online_sanity(self, model):
terro, others = [], []
for line in list_corpus:
if 'terrorism' in line:
terro.append(line)
else:
others.append(line)
self.assertTrue(all('terrorism' not in line for line in others))
model.build_vocab(others)
start_vecs = model.wv.vectors_vocab.copy()
model.train(others, total_examples=model.corpus_count, epochs=model.epochs)
# checks that `vectors_vocab` has been changed by training
self.assertFalse(np.all(np.equal(start_vecs, model.wv.vectors_vocab)))
# checks that `vectors` is different from `vectors_vocab`
self.assertFalse(np.all(np.equal(model.wv.vectors, model.wv.vectors_vocab)))
self.assertFalse('terrorism' in model.wv.key_to_index)
model.build_vocab(terro, update=True) # update vocab
self.assertTrue(model.wv.vectors_ngrams.dtype == 'float32')
self.assertTrue('terrorism' in model.wv.key_to_index)
orig0_all = np.copy(model.wv.vectors_ngrams)
model.train(terro, total_examples=len(terro), epochs=model.epochs)
self.assertFalse(np.allclose(model.wv.vectors_ngrams, orig0_all))
sim = model.wv.n_similarity(['war'], ['terrorism'])
assert abs(sim) > 0.6
def test_sg_hs_online(self):
model = FT_gensim(sg=1, window=2, hs=1, negative=0, min_count=3, epochs=1, seed=42, workers=1, bucket=BUCKET)
self.online_sanity(model)
def test_sg_neg_online(self):
model = FT_gensim(sg=1, window=2, hs=0, negative=5, min_count=3, epochs=1, seed=42, workers=1, bucket=BUCKET)
self.online_sanity(model)
def test_cbow_hs_online(self):
model = FT_gensim(
sg=0, cbow_mean=1, alpha=0.05, window=2, hs=1, negative=0, min_count=3, epochs=1, seed=42, workers=1,
bucket=BUCKET,
)
self.online_sanity(model)
def test_cbow_neg_online(self):
model = FT_gensim(
sg=0, cbow_mean=1, alpha=0.05, window=2, hs=0, negative=5,
min_count=5, epochs=1, seed=42, workers=1, sample=0, bucket=BUCKET
)
self.online_sanity(model)
def test_get_vocab_word_vecs(self):
model = FT_gensim(vector_size=12, min_count=1, seed=42, bucket=BUCKET)
model.build_vocab(sentences)
original_syn0_vocab = np.copy(model.wv.vectors_vocab)
model.wv.adjust_vectors()
self.assertTrue(np.all(np.equal(model.wv.vectors_vocab, original_syn0_vocab)))
def test_persistence_word2vec_format(self):
"""Test storing/loading the model in word2vec format."""
tmpf = get_tmpfile('gensim_fasttext_w2v_format.tst')
model = FT_gensim(sentences, min_count=1, vector_size=12, bucket=BUCKET)
model.wv.save_word2vec_format(tmpf, binary=True)
loaded_model_kv = KeyedVectors.load_word2vec_format(tmpf, binary=True)
self.assertEqual(len(model.wv), len(loaded_model_kv))
self.assertTrue(np.allclose(model.wv['human'], loaded_model_kv['human']))
def test_bucket_ngrams(self):
model = FT_gensim(vector_size=12, min_count=1, bucket=20)
model.build_vocab(sentences)
self.assertEqual(model.wv.vectors_ngrams.shape, (20, 12))
model.build_vocab(new_sentences, update=True)
self.assertEqual(model.wv.vectors_ngrams.shape, (20, 12))
def test_estimate_memory(self):
model = FT_gensim(sg=1, hs=1, vector_size=12, negative=5, min_count=3, bucket=BUCKET)
model.build_vocab(sentences)
report = model.estimate_memory()
self.assertEqual(report['vocab'], 2800)
self.assertEqual(report['syn0_vocab'], 192)
self.assertEqual(report['syn1'], 192)
self.assertEqual(report['syn1neg'], 192)
# TODO: these fixed numbers for particular implementation generations encumber changes without real QA
# perhaps instead verify reports' total is within some close factor of a deep-audit of actual memory used?
self.assertEqual(report['syn0_ngrams'], model.vector_size * np.dtype(np.float32).itemsize * BUCKET)
self.assertEqual(report['buckets_word'], 688)
self.assertEqual(report['total'], 484064)
def obsolete_testLoadOldModel(self):
"""Test loading fasttext models from previous version"""
model_file = 'fasttext_old'
model = FT_gensim.load(datapath(model_file))
self.assertTrue(model.wv.vectors.shape == (12, 100))
self.assertTrue(len(model.wv) == 12)
self.assertTrue(len(model.wv.index_to_key) == 12)
self.assertIsNone(model.corpus_total_words)
self.assertTrue(model.syn1neg.shape == (len(model.wv), model.vector_size))
self.assertTrue(model.wv.vectors_lockf.shape == (12, ))
self.assertTrue(model.cum_table.shape == (12, ))
self.assertEqual(model.wv.vectors_vocab.shape, (12, 100))
self.assertEqual(model.wv.vectors_ngrams.shape, (2000000, 100))
# Model stored in multiple files
model_file = 'fasttext_old_sep'
model = FT_gensim.load(datapath(model_file))
self.assertTrue(model.wv.vectors.shape == (12, 100))
self.assertTrue(len(model.wv) == 12)
self.assertTrue(len(model.wv.index_to_key) == 12)
self.assertIsNone(model.corpus_total_words)
self.assertTrue(model.syn1neg.shape == (len(model.wv), model.vector_size))
self.assertTrue(model.wv.vectors_lockf.shape == (12, ))
self.assertTrue(model.cum_table.shape == (12, ))
self.assertEqual(model.wv.vectors_vocab.shape, (12, 100))
self.assertEqual(model.wv.vectors_ngrams.shape, (2000000, 100))
def test_vectors_for_all_with_inference(self):
"""Test vectors_for_all can infer new vectors."""
words = [
'responding',
'approached',
'chairman',
'an out-of-vocabulary word',
'another out-of-vocabulary word',
]
vectors_for_all = self.test_model.wv.vectors_for_all(words)
expected = 5
predicted = len(vectors_for_all)
assert expected == predicted
expected = self.test_model.wv['responding']
predicted = vectors_for_all['responding']
assert np.allclose(expected, predicted)
smaller_distance = np.linalg.norm(
vectors_for_all['an out-of-vocabulary word']
- vectors_for_all['another out-of-vocabulary word']
)
greater_distance = np.linalg.norm(
vectors_for_all['an out-of-vocabulary word']
- vectors_for_all['responding']
)
assert greater_distance > smaller_distance
def test_vectors_for_all_without_inference(self):
"""Test vectors_for_all does not infer new vectors when prohibited."""
words = [
'responding',
'approached',
'chairman',
'an out-of-vocabulary word',
'another out-of-vocabulary word',
]
vectors_for_all = self.test_model.wv.vectors_for_all(words, allow_inference=False)
expected = 3
predicted = len(vectors_for_all)
assert expected == predicted
expected = self.test_model.wv['responding']
predicted = vectors_for_all['responding']
assert np.allclose(expected, predicted)
def test_negative_ns_exp(self):
"""The model should accept a negative ns_exponent as a valid value."""
model = FT_gensim(sentences, ns_exponent=-1, min_count=1, workers=1)
tmpf = get_tmpfile('fasttext_negative_exp.tst')
model.save(tmpf)
loaded_model = FT_gensim.load(tmpf)
loaded_model.train(sentences, total_examples=model.corpus_count, epochs=1)
assert loaded_model.ns_exponent == -1, loaded_model.ns_exponent
@pytest.mark.parametrize('shrink_windows', [True, False])
def test_cbow_hs_training(shrink_windows):
model_gensim = FT_gensim(
vector_size=48, sg=0, cbow_mean=1, alpha=0.05, window=5, hs=1, negative=0,
min_count=5, epochs=10, batch_words=1000, word_ngrams=1, sample=1e-3, min_n=3, max_n=6,
sorted_vocab=1, workers=1, min_alpha=0.0, bucket=BUCKET, shrink_windows=shrink_windows)
lee_data = LineSentence(datapath('lee_background.cor'))
model_gensim.build_vocab(lee_data)
orig0 = np.copy(model_gensim.wv.vectors[0])
model_gensim.train(lee_data, total_examples=model_gensim.corpus_count, epochs=model_gensim.epochs)
assert not (orig0 == model_gensim.wv.vectors[0]).all() # vector should vary after training
sims_gensim = model_gensim.wv.most_similar('night', topn=10)
sims_gensim_words = [word for (word, distance) in sims_gensim] # get similar words
expected_sims_words = [
u'night,',
u'night.',
u'rights',
u'kilometres',
u'in',
u'eight',
u'according',
u'flights',
u'during',
u'comes']
overlaps = set(sims_gensim_words).intersection(expected_sims_words)
overlap_count = len(overlaps)
message = f"only {overlap_count} overlap in expected {expected_sims_words} & actual {sims_gensim_words}"
assert overlap_count >= 2, message
@pytest.mark.parametrize('shrink_windows', [True, False])
def test_cbow_hs_training_fromfile(shrink_windows):
with temporary_file('gensim_fasttext.tst') as corpus_file:
model_gensim = FT_gensim(
vector_size=48, sg=0, cbow_mean=1, alpha=0.05, window=5, hs=1, negative=0,
min_count=5, epochs=10, batch_words=1000, word_ngrams=1, sample=1e-3, min_n=3, max_n=6,
sorted_vocab=1, workers=1, min_alpha=0.0, bucket=BUCKET * 4, shrink_windows=shrink_windows)
lee_data = LineSentence(datapath('lee_background.cor'))
utils.save_as_line_sentence(lee_data, corpus_file)
model_gensim.build_vocab(corpus_file=corpus_file)
orig0 = np.copy(model_gensim.wv.vectors[0])
model_gensim.train(corpus_file=corpus_file,
total_words=model_gensim.corpus_total_words,
epochs=model_gensim.epochs)
assert not (orig0 == model_gensim.wv.vectors[0]).all() # vector should vary after training
sims_gensim = model_gensim.wv.most_similar('night', topn=10)
sims_gensim_words = [word for (word, distance) in sims_gensim] # get similar words
expected_sims_words = [
u'night,',
u'night.',
u'rights',
u'kilometres',
u'in',
u'eight',
u'according',
u'flights',
u'during',
u'comes']
overlaps = set(sims_gensim_words).intersection(expected_sims_words)
overlap_count = len(overlaps)
message = f"only {overlap_count} overlap in expected {expected_sims_words} & actual {sims_gensim_words}"
assert overlap_count >= 2, message
@pytest.mark.parametrize('shrink_windows', [True, False])
def test_sg_hs_training(shrink_windows):
model_gensim = FT_gensim(
vector_size=48, sg=1, cbow_mean=1, alpha=0.025, window=5, hs=1, negative=0,
min_count=5, epochs=10, batch_words=1000, word_ngrams=1, sample=1e-3, min_n=3, max_n=6,
sorted_vocab=1, workers=1, min_alpha=0.0, bucket=BUCKET, shrink_windows=shrink_windows)
lee_data = LineSentence(datapath('lee_background.cor'))
model_gensim.build_vocab(lee_data)
orig0 = np.copy(model_gensim.wv.vectors[0])
model_gensim.train(lee_data, total_examples=model_gensim.corpus_count, epochs=model_gensim.epochs)
assert not (orig0 == model_gensim.wv.vectors[0]).all() # vector should vary after training
sims_gensim = model_gensim.wv.most_similar('night', topn=10)
sims_gensim_words = [word for (word, distance) in sims_gensim] # get similar words
expected_sims_words = [
u'night,',
u'night.',
u'eight',
u'nine',
u'overnight',
u'crew',
u'overnight.',
u'manslaughter',
u'north',
u'flight']
overlaps = set(sims_gensim_words).intersection(expected_sims_words)
overlap_count = len(overlaps)
message = f"only {overlap_count} overlap in expected {expected_sims_words} & actual {sims_gensim_words}"
assert overlap_count >= 2, message
@pytest.mark.parametrize('shrink_windows', [True, False])
def test_sg_hs_training_fromfile(shrink_windows):
with temporary_file('gensim_fasttext.tst') as corpus_file:
model_gensim = FT_gensim(
vector_size=48, sg=1, cbow_mean=1, alpha=0.025, window=5, hs=1, negative=0,
min_count=5, epochs=10, batch_words=1000, word_ngrams=1, sample=1e-3, min_n=3, max_n=6,
sorted_vocab=1, workers=1, min_alpha=0.0, bucket=BUCKET, shrink_windows=shrink_windows)
lee_data = LineSentence(datapath('lee_background.cor'))
utils.save_as_line_sentence(lee_data, corpus_file)
model_gensim.build_vocab(corpus_file=corpus_file)
orig0 = np.copy(model_gensim.wv.vectors[0])
model_gensim.train(corpus_file=corpus_file,
total_words=model_gensim.corpus_total_words,
epochs=model_gensim.epochs)
assert not (orig0 == model_gensim.wv.vectors[0]).all() # vector should vary after training
sims_gensim = model_gensim.wv.most_similar('night', topn=10)
sims_gensim_words = [word for (word, distance) in sims_gensim] # get similar words
expected_sims_words = [
u'night,',
u'night.',
u'eight',
u'nine',
u'overnight',
u'crew',
u'overnight.',
u'manslaughter',
u'north',
u'flight']
overlaps = set(sims_gensim_words).intersection(expected_sims_words)
overlap_count = len(overlaps)
message = f"only {overlap_count} overlap in expected {expected_sims_words} & actual {sims_gensim_words}"
assert overlap_count >= 2, message
with open(datapath('toy-data.txt')) as fin:
TOY_SENTENCES = [fin.read().strip().split(' ')]
def train_gensim(bucket=100, min_count=5):
#
# Set parameters to match those in the load_native function
#
model = FT_gensim(bucket=bucket, vector_size=5, alpha=0.05, workers=1, sample=0.0001, min_count=min_count)
model.build_vocab(TOY_SENTENCES)
model.train(TOY_SENTENCES, total_examples=len(TOY_SENTENCES), epochs=model.epochs)
return model
def load_native():
#
# trained using:
#
# ./fasttext cbow -input toy-data.txt -output toy-model -bucket 100 -dim 5
#
path = datapath('toy-model.bin')
model = gensim.models.fasttext.load_facebook_model(path)
return model
def load_vec(fin):
fin.readline() # array shape
for line in fin:
columns = line.strip().split(u' ')
word = columns.pop(0)
vector = [float(c) for c in columns]
yield word, np.array(vector, dtype=np.float32)
def compare_wv(a, b, t):
a_count = {key: a.get_vecattr(key, 'count') for key in a.key_to_index}
b_count = {key: b.get_vecattr(key, 'count') for key in b.key_to_index}
t.assertEqual(a_count, b_count)
#
# We do not compare most matrices directly, because they will never
# be equal unless many conditions are strictly controlled.
#
t.assertEqual(a.vectors.shape, b.vectors.shape)
# t.assertTrue(np.allclose(a.vectors, b.vectors))
t.assertEqual(a.vectors_vocab.shape, b.vectors_vocab.shape)
# t.assertTrue(np.allclose(a.vectors_vocab, b.vectors_vocab))
def compare_nn(a, b, t):
#
# Ensure the neural networks are identical for both cases.
#
t.assertEqual(a.syn1neg.shape, b.syn1neg.shape)
#
# Only if match_gensim=True in init_post_load
#
# t.assertEqual(a.vectors_ngrams_lockf.shape, b.vectors_ngrams_lockf.shape)
# t.assertTrue(np.allclose(a.vectors_ngrams_lockf, b.vectors_ngrams_lockf))
# t.assertEqual(a.vectors_vocab_lockf.shape, b.vectors_vocab_lockf.shape)
# t.assertTrue(np.allclose(a.vectors_vocab_lockf, b.vectors_vocab_lockf))
def compare_vocabulary(a, b, t):
t.assertEqual(a.max_vocab_size, b.max_vocab_size)
t.assertEqual(a.min_count, b.min_count)
t.assertEqual(a.sample, b.sample)
t.assertEqual(a.sorted_vocab, b.sorted_vocab)
t.assertEqual(a.null_word, b.null_word)
t.assertTrue(np.allclose(a.cum_table, b.cum_table))
t.assertEqual(a.raw_vocab, b.raw_vocab)
t.assertEqual(a.max_final_vocab, b.max_final_vocab)
t.assertEqual(a.ns_exponent, b.ns_exponent)
class NativeTrainingContinuationTest(unittest.TestCase):
maxDiff = None
model_structural_sanity = TestFastTextModel.model_structural_sanity
def setUp(self):
#
# $ echo "quick brown fox jumps over lazy dog" | ./fasttext print-word-vectors gensim/test/test_data/toy-model.bin # noqa: E501
#
expected = {
u"quick": [0.023393, 0.11499, 0.11684, -0.13349, 0.022543],
u"brown": [0.015288, 0.050404, -0.041395, -0.090371, 0.06441],
u"fox": [0.061692, 0.082914, 0.020081, -0.039159, 0.03296],
u"jumps": [0.070107, 0.081465, 0.051763, 0.012084, 0.0050402],
u"over": [0.055023, 0.03465, 0.01648, -0.11129, 0.094555],
u"lazy": [-0.022103, -0.020126, -0.033612, -0.049473, 0.0054174],
u"dog": [0.084983, 0.09216, 0.020204, -0.13616, 0.01118],
}
self.oov_expected = {
word: np.array(arr, dtype=np.float32)
for word, arr in expected.items()
}
def test_in_vocab(self):
"""Test for correct representation of in-vocab words."""
native = load_native()
with utils.open(datapath('toy-model.vec'), 'r', encoding='utf-8') as fin:
expected = dict(load_vec(fin))
for word, expected_vector in expected.items():
actual_vector = native.wv.get_vector(word)
self.assertTrue(np.allclose(expected_vector, actual_vector, atol=1e-5))
self.model_structural_sanity(native)
def test_out_of_vocab(self):
"""Test for correct representation of out-of-vocab words."""
native = load_native()
for word, expected_vector in self.oov_expected.items():
actual_vector = native.wv.get_vector(word)
self.assertTrue(np.allclose(expected_vector, actual_vector, atol=1e-5))
self.model_structural_sanity(native)
def test_sanity(self):
"""Compare models trained on toy data. They should be equal."""
trained = train_gensim()
native = load_native()
self.assertEqual(trained.wv.bucket, native.wv.bucket)
#
# Only if match_gensim=True in init_post_load
#
# self.assertEqual(trained.bucket, native.bucket)
compare_wv(trained.wv, native.wv, self)
compare_vocabulary(trained, native, self)
compare_nn(trained, native, self)
self.model_structural_sanity(trained)
self.model_structural_sanity(native)
def test_continuation_native(self):
"""Ensure that training has had a measurable effect."""
native = load_native()
self.model_structural_sanity(native)
#
# Pick a word that is in both corpuses.
# Its vectors should be different between training runs.
#
word = 'society'
old_vector = native.wv.get_vector(word).tolist()
native.train(list_corpus, total_examples=len(list_corpus), epochs=native.epochs)
new_vector = native.wv.get_vector(word).tolist()
self.assertNotEqual(old_vector, new_vector)
self.model_structural_sanity(native)
def test_continuation_gensim(self):
"""Ensure that continued training has had a measurable effect."""
model = train_gensim(min_count=0)
self.model_structural_sanity(model)
vectors_ngrams_before = np.copy(model.wv.vectors_ngrams)
word = 'human'
old_vector = model.wv.get_vector(word).tolist()
model.train(list_corpus, total_examples=len(list_corpus), epochs=model.epochs)
vectors_ngrams_after = np.copy(model.wv.vectors_ngrams)
self.assertFalse(np.allclose(vectors_ngrams_before, vectors_ngrams_after))
new_vector = model.wv.get_vector(word).tolist()
self.assertNotEqual(old_vector, new_vector)
self.model_structural_sanity(model)
def test_save_load_gensim(self):
"""Test that serialization works end-to-end. Not crashing is a success."""
#
# This is a workaround for a problem with temporary files on AppVeyor:
#
# - https://bugs.python.org/issue14243 (problem discussion)
# - https://github.com/dropbox/pyannotate/pull/48/files (workaround source code)
#
model_name = 'test_ft_saveload_native.model'
with temporary_file(model_name):
train_gensim().save(model_name)
model = FT_gensim.load(model_name)
self.model_structural_sanity(model)
model.train(list_corpus, total_examples=len(list_corpus), epochs=model.epochs)
model.save(model_name)
self.model_structural_sanity(model)
def test_save_load_native(self):
"""Test that serialization works end-to-end. Not crashing is a success."""
model_name = 'test_ft_saveload_fb.model'
with temporary_file(model_name):
load_native().save(model_name)
model = FT_gensim.load(model_name)
self.model_structural_sanity(model)
model.train(list_corpus, total_examples=len(list_corpus), epochs=model.epochs)
model.save(model_name)
self.model_structural_sanity(model)
def test_load_native_pretrained(self):
model = gensim.models.fasttext.load_facebook_model(datapath('toy-model-pretrained.bin'))
actual = model.wv['monarchist']
expected = np.array([0.76222, 1.0669, 0.7055, -0.090969, -0.53508])
self.assertTrue(np.allclose(expected, actual, atol=10e-4))
self.model_structural_sanity(model)
def test_load_native_vectors(self):
cap_path = datapath("crime-and-punishment.bin")
fbkv = gensim.models.fasttext.load_facebook_vectors(cap_path)
self.assertFalse('landlord' in fbkv.key_to_index)
self.assertTrue('landlady' in fbkv.key_to_index)
oov_vector = fbkv['landlord']
iv_vector = fbkv['landlady']
self.assertFalse(np.allclose(oov_vector, iv_vector))
def test_no_ngrams(self):
model = gensim.models.fasttext.load_facebook_model(datapath('crime-and-punishment.bin'))
v1 = model.wv['']
origin = np.zeros(v1.shape, v1.dtype)
self.assertTrue(np.allclose(v1, origin))
self.model_structural_sanity(model)
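# Note: the empty string is out of vocabulary and yields no character ngrams at
# all, so the only consistent representation for it is the all-zero vector,
# which is exactly what the assertion above checks.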
def _train_model_with_pretrained_vectors():
"""Generate toy-model-pretrained.bin for use in test_load_native_pretrained.
Requires https://github.com/facebookresearch/fastText/tree/master/python to be installed.
"""
import fastText
training_text = datapath('toy-data.txt')
pretrained_file = datapath('pretrained.vec')
model = fastText.train_unsupervised(
training_text,
bucket=100, model='skipgram', dim=5, pretrainedVectors=pretrained_file
)
model.save_model(datapath('toy-model-pretrained.bin'))
class HashCompatibilityTest(unittest.TestCase):
def test_compatibility_true(self):
m = FT_gensim.load(datapath('compatible-hash-true.model'))
self.assertTrue(m.wv.compatible_hash)
def test_hash_native(self):
m = load_native()
self.assertTrue(m.wv.compatible_hash)
class FTHashResultsTest(unittest.TestCase):
"""Loosely based on the test described here:
https://github.com/RaRe-Technologies/gensim/issues/2059#issuecomment-432300777
With a broken hash, vectors for non-ASCII keywords don't match when loaded
from a native model.
"""
def setUp(self):
#
# ./fasttext skipgram -minCount 0 -bucket 100 -input crime-and-punishment.txt -output crime-and-punishment -dim 5 # noqa: E501
#
self.model = gensim.models.fasttext.load_facebook_model(datapath('crime-and-punishment.bin'))
with utils.open(datapath('crime-and-punishment.vec'), 'r', encoding='utf-8') as fin:
self.expected = dict(load_vec(fin))
def test_ascii(self):
word = u'landlady'
expected = self.expected[word]
actual = self.model.wv[word]
self.assertTrue(np.allclose(expected, actual, atol=1e-5))
def test_unicode(self):
word = u'хозяйка'
expected = self.expected[word]
actual = self.model.wv[word]
self.assertTrue(np.allclose(expected, actual, atol=1e-5))
def test_out_of_vocab(self):
longword = u'rechtsschutzversicherungsgesellschaften' # many ngrams
expected = {
u'steamtrain': np.array([0.031988, 0.022966, 0.059483, 0.094547, 0.062693]),
u'паровоз': np.array([-0.0033987, 0.056236, 0.036073, 0.094008, 0.00085222]),
longword: np.array([-0.012889, 0.029756, 0.018020, 0.099077, 0.041939]),
}
actual = {w: self.model.wv[w] for w in expected}
self.assertTrue(np.allclose(expected[u'steamtrain'], actual[u'steamtrain'], atol=1e-5))
self.assertTrue(np.allclose(expected[u'паровоз'], actual[u'паровоз'], atol=1e-5))
self.assertTrue(np.allclose(expected[longword], actual[longword], atol=1e-5))
def hash_main(alg):
"""Generate hash values for test from standard input."""
hashmap = {
'cy_bytes': ft_hash_bytes,
}
try:
fun = hashmap[alg]
except KeyError:
raise KeyError('invalid alg: %r expected one of %r' % (alg, sorted(hashmap)))
for line in sys.stdin:
if 'bytes' in alg:
words = line.encode('utf-8').rstrip().split(b' ')
else:
words = line.rstrip().split(' ')
for word in words:
print('u%r: %r,' % (word, fun(word)))
class FTHashFunctionsTest(unittest.TestCase):
def setUp(self):
#
# I obtained these expected values using:
#
# $ echo word1 ... wordN | python -c 'from gensim.test.test_fasttext import hash_main;hash_main("alg")' # noqa: E501
#
# where alg is cy_bytes (previous options included py_bytes, py_broken, cy_bytes and cy_broken).
#
self.expected = {
u'команда': 1725507386,
u'маленьких': 3011324125,
u'друзей': 737001801,
u'возит': 4225261911,
u'грузы': 1301826944,
u'всех': 706328732,
u'быстрей': 1379730754,
u'mysterious': 1903186891,
u'asteroid': 1988297200,
u'odyssey': 310195777,
u'introduction': 2848265721,
u'北海道': 4096045468,
u'札幌': 3909947444,
u'西区': 3653372632,
}
def test_cython(self):
actual = {k: ft_hash_bytes(k.encode('utf-8')) for k in self.expected}
self.assertEqual(self.expected, actual)
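# For reference, the expected values above follow fastText's documented hashing
# scheme: 32-bit FNV-1a over the UTF-8 bytes of each word. The sketch below is
# illustrative only -- it is not gensim's ft_hash_bytes implementation, merely a
# plain-Python function expected to agree with it on bytes input.
def _fnv1a_32_sketch(data):
    h = 2166136261                        # FNV-1a 32-bit offset basis
    for b in data:
        h ^= b                            # fold in the next byte
        h = (h * 16777619) & 0xFFFFFFFF   # multiply by the FNV prime, keep 32 bits
    return h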
#
# Run with:
#
# python -c 'import gensim.test.test_fasttext as t;t.ngram_main()' py_text 3 5
#
def ngram_main():
"""Generate ngrams for tests from standard input."""
alg = sys.argv[1]
minn = int(sys.argv[2])
maxn = int(sys.argv[3])
assert minn <= maxn, 'expected sane command-line parameters'
hashmap = {
'cy_text': compute_ngrams,
'cy_bytes': compute_ngrams_bytes,
}
try:
fun = hashmap[alg]
except KeyError:
raise KeyError('invalid alg: %r expected one of %r' % (alg, sorted(hashmap)))
for line in sys.stdin:
word = line.rstrip('\n')
ngrams = fun(word, minn, maxn)
print("%r: %r," % (word, ngrams))
class NgramsTest(unittest.TestCase):
def setUp(self):
self.expected_text = {
'test': ['<te', 'tes', 'est', 'st>', '<tes', 'test', 'est>', '<test', 'test>'],
'at the': [
'<at', 'at ', 't t', ' th', 'the', 'he>',
'<at ', 'at t', 't th', ' the', 'the>', '<at t', 'at th', 't the', ' the>'
],
'at\nthe': [
'<at', 'at\n', 't\nt', '\nth', 'the', 'he>',
'<at\n', 'at\nt', 't\nth', '\nthe', 'the>', '<at\nt', 'at\nth', 't\nthe', '\nthe>'
],
'тест': ['<те', 'тес', 'ест', 'ст>', '<тес', 'тест', 'ест>', '<тест', 'тест>'],
'テスト': ['<テス', 'テスト', 'スト>', '<テスト', 'テスト>', '<テスト>'],
'試し': ['<試し', '試し>', '<試し>'],
}
self.expected_bytes = {
'test': [b'<te', b'<tes', b'<test', b'tes', b'test', b'test>', b'est', b'est>', b'st>'],
'at the': [
b'<at', b'<at ', b'<at t', b'at ', b'at t', b'at th', b't t',
b't th', b't the', b' th', b' the', b' the>', b'the', b'the>', b'he>'
],
'тест': [
b'<\xd1\x82\xd0\xb5', b'<\xd1\x82\xd0\xb5\xd1\x81', b'<\xd1\x82\xd0\xb5\xd1\x81\xd1\x82',
b'\xd1\x82\xd0\xb5\xd1\x81', b'\xd1\x82\xd0\xb5\xd1\x81\xd1\x82', b'\xd1\x82\xd0\xb5\xd1\x81\xd1\x82>',
b'\xd0\xb5\xd1\x81\xd1\x82', b'\xd0\xb5\xd1\x81\xd1\x82>', b'\xd1\x81\xd1\x82>'
],
'テスト': [
b'<\xe3\x83\x86\xe3\x82\xb9', b'<\xe3\x83\x86\xe3\x82\xb9\xe3\x83\x88',
b'<\xe3\x83\x86\xe3\x82\xb9\xe3\x83\x88>', b'\xe3\x83\x86\xe3\x82\xb9\xe3\x83\x88',
b'\xe3\x83\x86\xe3\x82\xb9\xe3\x83\x88>', b'\xe3\x82\xb9\xe3\x83\x88>'
],
'試し': [b'<\xe8\xa9\xa6\xe3\x81\x97', b'<\xe8\xa9\xa6\xe3\x81\x97>', b'\xe8\xa9\xa6\xe3\x81\x97>'],
}
self.expected_text_wide_unicode = {
'🚑🚒🚓🚕': [
'<🚑🚒', '🚑🚒🚓', '🚒🚓🚕', '🚓🚕>',
'<🚑🚒🚓', '🚑🚒🚓🚕', '🚒🚓🚕>', '<🚑🚒🚓🚕', '🚑🚒🚓🚕>'
],
}
self.expected_bytes_wide_unicode = {
'🚑🚒🚓🚕': [
b'<\xf0\x9f\x9a\x91\xf0\x9f\x9a\x92',
b'<\xf0\x9f\x9a\x91\xf0\x9f\x9a\x92\xf0\x9f\x9a\x93',
b'<\xf0\x9f\x9a\x91\xf0\x9f\x9a\x92\xf0\x9f\x9a\x93\xf0\x9f\x9a\x95',
b'\xf0\x9f\x9a\x91\xf0\x9f\x9a\x92\xf0\x9f\x9a\x93',
b'\xf0\x9f\x9a\x91\xf0\x9f\x9a\x92\xf0\x9f\x9a\x93\xf0\x9f\x9a\x95',
b'\xf0\x9f\x9a\x91\xf0\x9f\x9a\x92\xf0\x9f\x9a\x93\xf0\x9f\x9a\x95>',
b'\xf0\x9f\x9a\x92\xf0\x9f\x9a\x93\xf0\x9f\x9a\x95',
b'\xf0\x9f\x9a\x92\xf0\x9f\x9a\x93\xf0\x9f\x9a\x95>',
b'\xf0\x9f\x9a\x93\xf0\x9f\x9a\x95>'
],
}
def test_text_cy(self):
for word in self.expected_text:
expected = self.expected_text[word]
actual = compute_ngrams(word, 3, 5)
self.assertEqual(expected, actual)
@unittest.skipIf(sys.maxunicode == 0xffff, "Python interpreter doesn't support UCS-4 (wide unicode)")
def test_text_cy_wide_unicode(self):
for word in self.expected_text_wide_unicode:
expected = self.expected_text_wide_unicode[word]
actual = compute_ngrams(word, 3, 5)
self.assertEqual(expected, actual)
def test_bytes_cy(self):
for word in self.expected_bytes:
expected = self.expected_bytes[word]
actual = compute_ngrams_bytes(word, 3, 5)
self.assertEqual(expected, actual)
expected_text = self.expected_text[word]
actual_text = [n.decode('utf-8') for n in actual]
self.assertEqual(sorted(expected_text), sorted(actual_text))
for word in self.expected_bytes_wide_unicode:
expected = self.expected_bytes_wide_unicode[word]
actual = compute_ngrams_bytes(word, 3, 5)
self.assertEqual(expected, actual)
expected_text = self.expected_text_wide_unicode[word]
actual_text = [n.decode('utf-8') for n in actual]
self.assertEqual(sorted(expected_text), sorted(actual_text))
def test_fb(self):
"""Test against results from Facebook's implementation."""
with utils.open(datapath('fb-ngrams.txt'), 'r', encoding='utf-8') as fin:
fb = dict(_read_fb(fin))
for word, expected in fb.items():
#
# The model was trained with minn=3, maxn=6
#
actual = compute_ngrams(word, 3, 6)
self.assertEqual(sorted(expected), sorted(actual))
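# The expectations in NgramsTest encode fastText's character-ngram convention:
# wrap the word in '<' and '>', then emit every substring whose length lies
# between minn and maxn. A minimal sketch of that convention (not gensim's
# compute_ngrams itself, which may differ in ordering and edge cases):
def _char_ngrams_sketch(word, minn, maxn):
    extended = '<%s>' % word
    return [
        extended[i:i + n]
        for n in range(minn, maxn + 1)            # group by ngram length...
        for i in range(len(extended) - n + 1)     # ...then by position in the word
    ]
# e.g. _char_ngrams_sketch('test', 3, 5) produces the nine ngrams listed in expected_text above.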
def _read_fb(fin):
"""Read ngrams from output of the FB utility."""
#
# $ cat words.txt
# test
# at the
# at\nthe
# тест
# テスト
# 試し
# 🚑🚒🚓🚕
# $ while read w;
# do
# echo "<start>";
# echo $w;
# ./fasttext print-ngrams gensim/test/test_data/crime-and-punishment.bin "$w";
# echo "<end>";
# done < words.txt > gensim/test/test_data/fb-ngrams.txt
#
while fin:
line = fin.readline().rstrip()
if not line:
break
assert line == '<start>'
word = fin.readline().rstrip()
fin.readline() # ignore this line, it contains an origin vector for the full term
ngrams = []
while True:
line = fin.readline().rstrip()
if line == '<end>':
break
columns = line.split(' ')
term = ' '.join(columns[:-5])
ngrams.append(term)
yield word, ngrams
class ZeroBucketTest(unittest.TestCase):
"""Test FastText with no buckets / no-ngrams: essentially FastText-as-Word2Vec."""
def test_in_vocab(self):
model = train_gensim(bucket=0)
self.assertIsNotNone(model.wv['anarchist'])
def test_out_of_vocab(self):
model = train_gensim(bucket=0)
with self.assertRaises(KeyError):
model.wv.get_vector('streamtrain')
def test_cbow_neg(self):
"""See `gensim.test.test_word2vec.TestWord2VecModel.test_cbow_neg`."""
model = FT_gensim(
sg=0, cbow_mean=1, alpha=0.05, window=5, hs=0, negative=15,
min_count=5, epochs=10, workers=2, sample=0,
max_n=0 # force no char-ngram buckets
)
TestWord2VecModel.model_sanity(self, model)
class UnicodeVocabTest(unittest.TestCase):
def test_ascii(self):
buf = io.BytesIO()
buf.name = 'dummy name to keep fasttext happy'
buf.write(struct.pack('@3i', 2, -1, -1)) # vocab_size, nwords, nlabels
buf.write(struct.pack('@1q', 10)) # ntokens
buf.write(b'hello')
buf.write(b'\x00')
buf.write(struct.pack('@qb', 1, -1))
buf.write(b'world')
buf.write(b'\x00')
buf.write(struct.pack('@qb', 2, -1))
buf.seek(0)
raw_vocab, vocab_size, nlabels, ntokens = gensim.models._fasttext_bin._load_vocab(buf, False)
expected = {'hello': 1, 'world': 2}
self.assertEqual(expected, dict(raw_vocab))
self.assertEqual(vocab_size, 2)
self.assertEqual(nlabels, -1)
self.assertEqual(ntokens, 10)
def test_bad_unicode(self):
buf = io.BytesIO()
buf.name = 'dummy name to keep fasttext happy'
buf.write(struct.pack('@3i', 2, -1, -1)) # vocab_size, nwords, nlabels
buf.write(struct.pack('@1q', 10)) # ntokens
#
# encountered in https://github.com/RaRe-Technologies/gensim/issues/2378
# The model downloaded from
# https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki-news-300d-1M-subword.bin.zip
# suffers from bad characters in a few of the vocab terms. The native
# fastText utility loads the model fine, but we trip up over the bad
# characters.
#
buf.write(
b'\xe8\x8b\xb1\xe8\xaa\x9e\xe7\x89\x88\xe3\x82\xa6\xe3\x82\xa3\xe3'
b'\x82\xad\xe3\x83\x9a\xe3\x83\x87\xe3\x82\xa3\xe3\x82\xa2\xe3\x81'
b'\xb8\xe3\x81\xae\xe6\x8a\x95\xe7\xa8\xbf\xe3\x81\xaf\xe3\x81\x84'
b'\xe3\x81\xa4\xe3\x81\xa7\xe3\x82\x82\xe6'
)
buf.write(b'\x00')
buf.write(struct.pack('@qb', 1, -1))
buf.write(
b'\xd0\xb0\xd0\xb4\xd0\xbc\xd0\xb8\xd0\xbd\xd0\xb8\xd1\x81\xd1\x82'
b'\xd1\x80\xd0\xb0\xd1\x82\xd0\xb8\xd0\xb2\xd0\xbd\xd0\xbe-\xd1\x82'
b'\xd0\xb5\xd1\x80\xd1\x80\xd0\xb8\xd1\x82\xd0\xbe\xd1\x80\xd0\xb8'
b'\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd1'
)
buf.write(b'\x00')
buf.write(struct.pack('@qb', 2, -1))
buf.seek(0)
raw_vocab, vocab_size, nlabels, ntokens = gensim.models._fasttext_bin._load_vocab(buf, False)
expected = {
u'英語版ウィキペディアへの投稿はいつでも\\xe6': 1,
u'административно-территориальн\\xd1': 2,
}
self.assertEqual(expected, dict(raw_vocab))
self.assertEqual(vocab_size, 2)
self.assertEqual(nlabels, -1)
self.assertEqual(ntokens, 10)
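# Both tests above hand-build the vocab section of a fastText .bin file in the
# layout _load_vocab expects: a '@3i' header (vocab_size, nwords, nlabels), a
# '@1q' token count, then one NUL-terminated UTF-8 word followed by '@qb'
# (count, entry type) per entry. As the expected dict shows, undecodable bytes
# in a word are backslash-escaped rather than causing the load to fail.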
_BYTES = b'the quick brown fox jumps over the lazy dog'
_ARRAY = np.array([0., 1., 2., 3., 4., 5., 6., 7., 8.], dtype=np.dtype('float32'))
class TestFromfile(unittest.TestCase):
def test_decompressed(self):
with open(datapath('reproduce.dat'), 'rb') as fin:
self._run(fin)
def test_compressed(self):
with gzip.GzipFile(datapath('reproduce.dat.gz'), 'rb') as fin:
self._run(fin)
def _run(self, fin):
actual = fin.read(len(_BYTES))
self.assertEqual(_BYTES, actual)
array = gensim.models._fasttext_bin._fromfile(fin, _ARRAY.dtype, _ARRAY.shape[0])
logger.error('array: %r', array)
self.assertTrue(np.allclose(_ARRAY, array))
def _create_and_save_fb_model(fname, model_params):
model = FT_gensim(**model_params)
lee_data = LineSentence(datapath('lee_background.cor'))
model.build_vocab(lee_data)
model.train(lee_data, total_examples=model.corpus_count, epochs=model.epochs)
gensim.models.fasttext.save_facebook_model(model, fname)
return model
def calc_max_diff(v1, v2):
return np.max(np.abs(v1 - v2))
class SaveFacebookFormatModelTest(unittest.TestCase):
def _check_roundtrip(self, sg):
model_params = {
"sg": sg,
"vector_size": 10,
"min_count": 1,
"hs": 1,
"negative": 5,
"seed": 42,
"bucket": BUCKET,
"workers": 1}
with temporary_file("roundtrip_model_to_model.bin") as fpath:
model_trained = _create_and_save_fb_model(fpath, model_params)
model_loaded = gensim.models.fasttext.load_facebook_model(fpath)
self.assertEqual(model_trained.vector_size, model_loaded.vector_size)
self.assertEqual(model_trained.window, model_loaded.window)
self.assertEqual(model_trained.epochs, model_loaded.epochs)
self.assertEqual(model_trained.negative, model_loaded.negative)
self.assertEqual(model_trained.hs, model_loaded.hs)
self.assertEqual(model_trained.sg, model_loaded.sg)
self.assertEqual(model_trained.wv.bucket, model_loaded.wv.bucket)
self.assertEqual(model_trained.wv.min_n, model_loaded.wv.min_n)
self.assertEqual(model_trained.wv.max_n, model_loaded.wv.max_n)
self.assertEqual(model_trained.sample, model_loaded.sample)
self.assertEqual(set(model_trained.wv.index_to_key), set(model_loaded.wv.index_to_key))
for w in model_trained.wv.index_to_key:
v_orig = model_trained.wv[w]
v_loaded = model_loaded.wv[w]
self.assertLess(calc_max_diff(v_orig, v_loaded), MAX_WORDVEC_COMPONENT_DIFFERENCE)
def test_skipgram(self):
self._check_roundtrip(sg=1)
def test_cbow(self):
self._check_roundtrip(sg=0)
def _read_binary_file(fname):
with open(fname, "rb") as f:
data = f.read()
return data
class SaveGensimByteIdentityTest(unittest.TestCase):
"""
This class contains tests that check the following scenario:
+ create binary fastText file model1.bin using gensim
+ load file model1.bin to variable `model`
+ save `model` to model2.bin
+ check if files model1.bin and model2.bin are byte identical
"""
def _check_roundtrip_file_file(self, sg):
model_params = {
"sg": sg,
"vector_size": 10,
"min_count": 1,
"hs": 1,
"negative": 0,
"bucket": BUCKET,
"seed": 42,
"workers": 1}
with temporary_file("roundtrip_file_to_file1.bin") as fpath1, \
temporary_file("roundtrip_file_to_file2.bin") as fpath2:
_create_and_save_fb_model(fpath1, model_params)
model = gensim.models.fasttext.load_facebook_model(fpath1)
gensim.models.fasttext.save_facebook_model(model, fpath2)
bin1 = _read_binary_file(fpath1)
bin2 = _read_binary_file(fpath2)
self.assertEqual(bin1, bin2)
def test_skipgram(self):
self._check_roundtrip_file_file(sg=1)
def test_cbow(self):
self._check_roundtrip_file_file(sg=0)
def _save_test_model(out_base_fname, model_params):
inp_fname = datapath('lee_background.cor')
model_type = "cbow" if model_params["sg"] == 0 else "skipgram"
size = str(model_params["vector_size"])
seed = str(model_params["seed"])
cmd = [
FT_CMD, model_type, "-input", inp_fname, "-output",
out_base_fname, "-dim", size, "-seed", seed]
subprocess.check_call(cmd)
@unittest.skipIf(not FT_CMD, "fasttext not in FT_HOME or PATH, skipping test")
class SaveFacebookByteIdentityTest(unittest.TestCase):
"""
This class contains tests that check the following scenario:
+ create binary fastText file model1.bin using facebook_binary (FT)
+ load file model1.bin to variable `model`
+ save `model` to model2.bin using gensim
+ check if files model1.bin and model2.bin are byte-identical
"""
def _check_roundtrip_file_file(self, sg):
model_params = {"vector_size": 10, "sg": sg, "seed": 42}
# fasttext tool creates both *.vec and *.bin files, so we have to remove both, even though the *.vec file is unused
with temporary_file("m1.bin") as m1, temporary_file("m2.bin") as m2, temporary_file("m1.vec"):
m1_basename = m1[:-4]
_save_test_model(m1_basename, model_params)
model = gensim.models.fasttext.load_facebook_model(m1)
gensim.models.fasttext.save_facebook_model(model, m2)
bin1 = _read_binary_file(m1)
bin2 = _read_binary_file(m2)
self.assertEqual(bin1, bin2)
def test_skipgram(self):
self._check_roundtrip_file_file(sg=1)
def test_cbow(self):
self._check_roundtrip_file_file(sg=0)
def _read_wordvectors_using_fasttext(fasttext_fname, words):
def line_to_array(line):
return np.array([float(s) for s in line.split()[1:]], dtype=np.float32)
cmd = [FT_CMD, "print-word-vectors", fasttext_fname]
process = subprocess.Popen(
cmd, stdin=subprocess.PIPE,
stdout=subprocess.PIPE)
words_str = '\n'.join(words)
out, _ = process.communicate(input=words_str.encode("utf-8"))
return np.array([line_to_array(line) for line in out.splitlines()], dtype=np.float32)
@unittest.skipIf(not FT_CMD, "fasttext not in FT_HOME or PATH, skipping test")
class SaveFacebookFormatReadingTest(unittest.TestCase):
"""
This class contains tests that check the following scenario:
+ create fastText model using gensim
+ save file to model.bin
+ retrieve word vectors from model.bin using fasttext Facebook utility
+ compare vectors retrieved by Facebook utility with those obtained directly from gensim model
"""
def _check_load_fasttext_format(self, sg):
model_params = {
"sg": sg,
"vector_size": 10,
"min_count": 1,
"hs": 1,
"negative": 5,
"bucket": BUCKET,
"seed": 42,
"workers": 1}
with temporary_file("load_fasttext.bin") as fpath:
model = _create_and_save_fb_model(fpath, model_params)
wv = _read_wordvectors_using_fasttext(fpath, model.wv.index_to_key)
for i, w in enumerate(model.wv.index_to_key):
diff = calc_max_diff(wv[i, :], model.wv[w])
# Because fasttext command line prints vectors with limited accuracy
self.assertLess(diff, 1.0e-4)
def test_skipgram(self):
self._check_load_fasttext_format(sg=1)
def test_cbow(self):
self._check_load_fasttext_format(sg=0)
class UnpackTest(unittest.TestCase):
def test_sanity(self):
m = np.array(range(9))
m.shape = (3, 3)
hash2index = {10: 0, 11: 1, 12: 2}
n = _unpack(m, 25, hash2index)
self.assertTrue(np.all(np.array([0, 1, 2]) == n[10]))
self.assertTrue(np.all(np.array([3, 4, 5]) == n[11]))
self.assertTrue(np.all(np.array([6, 7, 8]) == n[12]))
def test_tricky(self):
m = np.array(range(9))
m.shape = (3, 3)
hash2index = {1: 0, 0: 1, 12: 2}
n = _unpack(m, 25, hash2index)
self.assertTrue(np.all(np.array([3, 4, 5]) == n[0]))
self.assertTrue(np.all(np.array([0, 1, 2]) == n[1]))
self.assertTrue(np.all(np.array([6, 7, 8]) == n[12]))
def test_identity(self):
m = np.array(range(9))
m.shape = (3, 3)
hash2index = {0: 0, 1: 1, 2: 2}
n = _unpack(m, 25, hash2index)
self.assertTrue(np.all(np.array([0, 1, 2]) == n[0]))
self.assertTrue(np.all(np.array([3, 4, 5]) == n[1]))
self.assertTrue(np.all(np.array([6, 7, 8]) == n[2]))
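# The three tests above pin down the behaviour expected of _unpack: scatter the
# rows of a packed ngram matrix into a full-size bucket matrix so that packed
# row hash2index[h] lands at row h. A minimal sketch of that reading (zero-filling
# the unmapped rows purely for illustration; gensim initialises them differently):
def _unpack_sketch(packed, num_buckets, hash2index):
    full = np.zeros((num_buckets, packed.shape[1]), dtype=packed.dtype)
    for ngram_hash, packed_row in hash2index.items():
        full[ngram_hash] = packed[packed_row]
    return full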
class FastTextKeyedVectorsTest(unittest.TestCase):
def test_add_vector(self):
wv = FastTextKeyedVectors(vector_size=2, min_n=3, max_n=6, bucket=2000000)
wv.add_vector("test_key", np.array([0, 0]))
self.assertEqual(wv.key_to_index["test_key"], 0)
self.assertEqual(wv.index_to_key[0], "test_key")
self.assertTrue(np.all(wv.vectors[0] == np.array([0, 0])))
def test_add_vectors(self):
wv = FastTextKeyedVectors(vector_size=2, min_n=3, max_n=6, bucket=2000000)
wv.add_vectors(["test_key1", "test_key2"], np.array([[0, 0], [1, 1]]))
self.assertEqual(wv.key_to_index["test_key1"], 0)
self.assertEqual(wv.index_to_key[0], "test_key1")
self.assertTrue(np.all(wv.vectors[0] == np.array([0, 0])))
self.assertEqual(wv.key_to_index["test_key2"], 1)
self.assertEqual(wv.index_to_key[1], "test_key2")
self.assertTrue(np.all(wv.vectors[1] == np.array([1, 1])))
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 75,586 | Python | .py | 1,504 | 40.797872 | 136 | 0.625385 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,035 | test_matutils.py | piskvorky_gensim/gensim/test/test_matutils.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
import logging
import unittest
import numpy as np
from numpy.testing import assert_array_equal
from scipy import sparse
from scipy.sparse import csc_matrix
from scipy.special import psi # gamma function utils
import gensim.matutils as matutils
# we'll define known, good (slow) version of functions here
# and compare results from these functions vs. cython ones
def logsumexp(x):
"""Log of sum of exponentials.
Parameters
----------
x : numpy.ndarray
Input 2d matrix.
Returns
-------
float
log of sum of exponentials of elements in `x`.
Warnings
--------
For performance reasons, doesn't support NaNs or 1d, 3d, etc. arrays, unlike :func:`scipy.special.logsumexp`.
"""
x_max = np.max(x)
x = np.log(np.sum(np.exp(x - x_max)))
x += x_max
return x
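# Worked example: for x = [[0.0], [log(3)]] the maximum is log(3),
# sum(exp(x - x_max)) = 1/3 + 1 = 4/3, so the result is
# log(4/3) + log(3) = log(4) ~= 1.3862944, matching scipy.special.logsumexp.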
def mean_absolute_difference(a, b):
"""Mean absolute difference between two arrays.
Parameters
----------
a : numpy.ndarray
Input 1d array.
b : numpy.ndarray
Input 1d array.
Returns
-------
float
mean(abs(a - b)).
"""
return np.mean(np.abs(a - b))
def dirichlet_expectation(alpha):
r"""For a vector :math:`\theta \sim Dir(\alpha)`, compute :math:`E[log \theta]`.
Parameters
----------
alpha : numpy.ndarray
Dirichlet parameter 2d matrix or 1d vector, if 2d - each row is treated as a separate parameter vector.
Returns
-------
numpy.ndarray:
:math:`E[log \theta]`
"""
if len(alpha.shape) == 1:
result = psi(alpha) - psi(np.sum(alpha))
else:
result = psi(alpha) - psi(np.sum(alpha, 1))[:, np.newaxis]
return result.astype(alpha.dtype, copy=False) # keep the same precision as input
dirichlet_expectation_1d = dirichlet_expectation
dirichlet_expectation_2d = dirichlet_expectation
class TestLdaModelInner(unittest.TestCase):
def setUp(self):
self.random_state = np.random.RandomState()
self.num_runs = 100 # test functions with *num_runs* random inputs
self.num_topics = 100
def test_log_sum_exp(self):
# test logsumexp
rs = self.random_state
for dtype in [np.float16, np.float32, np.float64]:
for i in range(self.num_runs):
input = rs.uniform(-1000, 1000, size=(self.num_topics, 1))
known_good = logsumexp(input)
test_values = matutils.logsumexp(input)
msg = "logsumexp failed for dtype={}".format(dtype)
self.assertTrue(np.allclose(known_good, test_values), msg)
def test_mean_absolute_difference(self):
# test mean_absolute_difference
rs = self.random_state
for dtype in [np.float16, np.float32, np.float64]:
for i in range(self.num_runs):
input1 = rs.uniform(-10000, 10000, size=(self.num_topics,))
input2 = rs.uniform(-10000, 10000, size=(self.num_topics,))
known_good = mean_absolute_difference(input1, input2)
test_values = matutils.mean_absolute_difference(input1, input2)
msg = "mean_absolute_difference failed for dtype={}".format(dtype)
self.assertTrue(np.allclose(known_good, test_values), msg)
def test_dirichlet_expectation(self):
# test dirichlet_expectation
rs = self.random_state
for dtype in [np.float16, np.float32, np.float64]:
for i in range(self.num_runs):
# 1 dimensional case
input_1d = rs.uniform(.01, 10000, size=(self.num_topics,))
known_good = dirichlet_expectation(input_1d)
test_values = matutils.dirichlet_expectation(input_1d)
msg = "dirichlet_expectation_1d failed for dtype={}".format(dtype)
self.assertTrue(np.allclose(known_good, test_values), msg)
# 2 dimensional case
input_2d = rs.uniform(.01, 10000, size=(1, self.num_topics,))
known_good = dirichlet_expectation(input_2d)
test_values = matutils.dirichlet_expectation(input_2d)
msg = "dirichlet_expectation_2d failed for dtype={}".format(dtype)
self.assertTrue(np.allclose(known_good, test_values), msg)
def manual_unitvec(vec):
# manual unit vector calculation for UnitvecTestCase
vec = vec.astype(float)
if sparse.issparse(vec):
vec_sum_of_squares = vec.multiply(vec)
unit = 1. / np.sqrt(vec_sum_of_squares.sum())
return vec.multiply(unit)
elif not sparse.issparse(vec):
sum_vec_squared = np.sum(vec ** 2)
vec /= np.sqrt(sum_vec_squared)
return vec
class UnitvecTestCase(unittest.TestCase):
# test unitvec
def test_sparse_npfloat32(self):
input_vector = sparse.csr_matrix(np.asarray([[1, 0, 0, 0, 3], [0, 0, 4, 3, 0]])).astype(np.float32)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector.data, man_unit_vector.data, atol=1e-3))
self.assertEqual(input_vector.dtype, unit_vector.dtype)
def test_sparse_npfloat64(self):
input_vector = sparse.csr_matrix(np.asarray([[1, 0, 0, 0, 3], [0, 0, 4, 3, 0]])).astype(np.float64)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector.data, man_unit_vector.data, atol=1e-3))
self.assertEqual(input_vector.dtype, unit_vector.dtype)
def test_sparse_npint32(self):
input_vector = sparse.csr_matrix(np.asarray([[1, 0, 0, 0, 3], [0, 0, 4, 3, 0]])).astype(np.int32)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector.data, man_unit_vector.data, atol=1e-3))
self.assertTrue(np.issubdtype(unit_vector.dtype, np.floating))
def test_sparse_npint64(self):
input_vector = sparse.csr_matrix(np.asarray([[1, 0, 0, 0, 3], [0, 0, 4, 3, 0]])).astype(np.int64)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector.data, man_unit_vector.data, atol=1e-3))
self.assertTrue(np.issubdtype(unit_vector.dtype, np.floating))
def test_dense_npfloat32(self):
input_vector = np.random.uniform(size=(5,)).astype(np.float32)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector, man_unit_vector))
self.assertEqual(input_vector.dtype, unit_vector.dtype)
def test_dense_npfloat64(self):
input_vector = np.random.uniform(size=(5,)).astype(np.float64)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector, man_unit_vector))
self.assertEqual(input_vector.dtype, unit_vector.dtype)
def test_dense_npint32(self):
input_vector = np.random.randint(10, size=5).astype(np.int32)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector, man_unit_vector))
self.assertTrue(np.issubdtype(unit_vector.dtype, np.floating))
def test_dense_npint64(self):
input_vector = np.random.randint(10, size=5).astype(np.int64)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector, man_unit_vector))
self.assertTrue(np.issubdtype(unit_vector.dtype, np.floating))
def test_sparse_python_float(self):
input_vector = sparse.csr_matrix(np.asarray([[1, 0, 0, 0, 3], [0, 0, 4, 3, 0]])).astype(float)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector.data, man_unit_vector.data, atol=1e-3))
self.assertEqual(input_vector.dtype, unit_vector.dtype)
def test_sparse_python_int(self):
input_vector = sparse.csr_matrix(np.asarray([[1, 0, 0, 0, 3], [0, 0, 4, 3, 0]])).astype(int)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector.data, man_unit_vector.data, atol=1e-3))
self.assertTrue(np.issubdtype(unit_vector.dtype, np.floating))
def test_dense_python_float(self):
input_vector = np.random.uniform(size=(5,)).astype(float)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector, man_unit_vector))
self.assertEqual(input_vector.dtype, unit_vector.dtype)
def test_dense_python_int(self):
input_vector = np.random.randint(10, size=5).astype(int)
unit_vector = matutils.unitvec(input_vector)
man_unit_vector = manual_unitvec(input_vector)
self.assertTrue(np.allclose(unit_vector, man_unit_vector))
self.assertTrue(np.issubdtype(unit_vector.dtype, np.floating))
def test_return_norm_zero_vector_scipy_sparse(self):
input_vector = sparse.csr_matrix([[]], dtype=np.int32)
return_value = matutils.unitvec(input_vector, return_norm=True)
self.assertTrue(isinstance(return_value, tuple))
norm = return_value[1]
self.assertTrue(isinstance(norm, float))
self.assertEqual(norm, 1.0)
def test_return_norm_zero_vector_numpy(self):
input_vector = np.array([], dtype=np.int32)
return_value = matutils.unitvec(input_vector, return_norm=True)
self.assertTrue(isinstance(return_value, tuple))
norm = return_value[1]
self.assertTrue(isinstance(norm, float))
self.assertEqual(norm, 1.0)
def test_return_norm_zero_vector_gensim_sparse(self):
input_vector = []
return_value = matutils.unitvec(input_vector, return_norm=True)
self.assertTrue(isinstance(return_value, tuple))
norm = return_value[1]
self.assertTrue(isinstance(norm, float))
self.assertEqual(norm, 1.0)
class TestSparse2Corpus(unittest.TestCase):
def setUp(self):
self.orig_array = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
self.s2c = matutils.Sparse2Corpus(csc_matrix(self.orig_array))
def test_getitem_slice(self):
assert_array_equal(self.s2c[:2].sparse.toarray(), self.orig_array[:, :2])
assert_array_equal(self.s2c[1:3].sparse.toarray(), self.orig_array[:, 1:3])
def test_getitem_index(self):
self.assertListEqual(self.s2c[1], [(0, 2), (1, 5), (2, 8)])
def test_getitem_list_of_indices(self):
assert_array_equal(
self.s2c[[1, 2]].sparse.toarray(), self.orig_array[:, [1, 2]]
)
assert_array_equal(self.s2c[[1]].sparse.toarray(), self.orig_array[:, [1]])
def test_getitem_ndarray(self):
assert_array_equal(
self.s2c[np.array([1, 2])].sparse.toarray(), self.orig_array[:, [1, 2]]
)
assert_array_equal(
self.s2c[np.array([1])].sparse.toarray(), self.orig_array[:, [1]]
)
def test_getitem_range(self):
assert_array_equal(
self.s2c[range(1, 3)].sparse.toarray(), self.orig_array[:, [1, 2]]
)
assert_array_equal(
self.s2c[range(1, 2)].sparse.toarray(), self.orig_array[:, [1]]
)
def test_getitem_ellipsis(self):
assert_array_equal(self.s2c[...].sparse.toarray(), self.orig_array)
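# The tests above rely on Sparse2Corpus treating each *column* of the wrapped
# sparse matrix as one document of (feature_id, value) pairs; column 1 of
# orig_array is (2, 5, 8), hence s2c[1] == [(0, 2), (1, 5), (2, 8)].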
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 12,113 | Python | .py | 248 | 40.669355 | 111 | 0.648704 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,036 | test_lee.py | piskvorky_gensim/gensim/test/test_lee.py |
#!/usr/bin/env python
# encoding: utf-8
#
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated test to reproduce the results of Lee et al. (2005)
Lee et al. (2005) compares different models for semantic
similarity and verifies the results with similarity judgements from humans.
As a validation of the gensim implementation we reproduced the results
of Lee et al. (2005) in this test.
Many thanks to Michael D. Lee (michael.lee@adelaide.edu.au) who provided us
with his corpus and similarity data.
If you need to reference this dataset, please cite:
Lee, M., Pincombe, B., & Welsh, M. (2005).
An empirical evaluation of models of text document similarity.
Proceedings of the 27th Annual Conference of the Cognitive Science Society
"""
from __future__ import with_statement
import logging
import unittest
from functools import partial
import numpy as np
from gensim import corpora, models, utils, matutils
from gensim.parsing.preprocessing import preprocess_documents, preprocess_string, DEFAULT_FILTERS
from gensim.test.utils import datapath
bg_corpus = None
corpus = None
human_sim_vector = None
class TestLeeTest(unittest.TestCase):
def setUp(self):
"""setup lee test corpora"""
global bg_corpus, corpus, human_sim_vector, bg_corpus2, corpus2
bg_corpus_file = datapath('lee_background.cor')
corpus_file = datapath('lee.cor')
sim_file = datapath('similarities0-1.txt')
# read in the corpora
latin1 = partial(utils.to_unicode, encoding='latin1')
with utils.open(bg_corpus_file, 'rb') as f:
bg_corpus = preprocess_documents(latin1(line) for line in f)
with utils.open(corpus_file, 'rb') as f:
corpus = preprocess_documents(latin1(line) for line in f)
with utils.open(bg_corpus_file, 'rb') as f:
bg_corpus2 = [preprocess_string(latin1(s), filters=DEFAULT_FILTERS[:-1]) for s in f]
with utils.open(corpus_file, 'rb') as f:
corpus2 = [preprocess_string(latin1(s), filters=DEFAULT_FILTERS[:-1]) for s in f]
# read the human similarity data
sim_matrix = np.loadtxt(sim_file)
sim_m_size = np.shape(sim_matrix)[0]
human_sim_vector = sim_matrix[np.triu_indices(sim_m_size, 1)]
def test_corpus(self):
"""availability and integrity of corpus"""
documents_in_bg_corpus = 300
documents_in_corpus = 50
len_sim_vector = 1225
self.assertEqual(len(bg_corpus), documents_in_bg_corpus)
self.assertEqual(len(corpus), documents_in_corpus)
self.assertEqual(len(human_sim_vector), len_sim_vector)
def test_lee(self):
"""correlation with human data > 0.6
(this is the value which was achieved in the original paper)
"""
global bg_corpus, corpus
# create a dictionary and corpus (bag of words)
dictionary = corpora.Dictionary(bg_corpus)
bg_corpus = [dictionary.doc2bow(text) for text in bg_corpus]
corpus = [dictionary.doc2bow(text) for text in corpus]
# transform the bag of words with log_entropy normalization
log_ent = models.LogEntropyModel(bg_corpus)
bg_corpus_ent = log_ent[bg_corpus]
# initialize an LSI transformation from background corpus
lsi = models.LsiModel(bg_corpus_ent, id2word=dictionary, num_topics=200)
# transform small corpus to lsi bow->log_ent->fold-in-lsi
corpus_lsi = lsi[log_ent[corpus]]
# compute pairwise similarity matrix and extract upper triangular
res = np.zeros((len(corpus), len(corpus)))
for i, par1 in enumerate(corpus_lsi):
for j, par2 in enumerate(corpus_lsi):
res[i, j] = matutils.cossim(par1, par2)
flat = res[np.triu_indices(len(corpus), 1)]
cor = np.corrcoef(flat, human_sim_vector)[0, 1]
logging.info("LSI correlation coefficient is %s", cor)
self.assertTrue(cor > 0.6)
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 4,160 | Python | .py | 85 | 42.352941 | 97 | 0.6875 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,037 | test_direct_confirmation.py | piskvorky_gensim/gensim/test/test_direct_confirmation.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for direct confirmation measures in the direct_confirmation_measure module.
"""
import logging
import unittest
from collections import namedtuple
from gensim.topic_coherence import direct_confirmation_measure
from gensim.topic_coherence import text_analysis
class TestDirectConfirmationMeasure(unittest.TestCase):
def setUp(self):
# Set up toy example for better understanding and testing
# of this module. See the modules for the mathematical formulas
self.segmentation = [[(1, 2)]]
self.posting_list = {1: {2, 3, 4}, 2: {3, 5}}
self.num_docs = 5
id2token = {1: 'test', 2: 'doc'}
token2id = {v: k for k, v in id2token.items()}
dictionary = namedtuple('Dictionary', 'token2id, id2token')(token2id, id2token)
self.accumulator = text_analysis.InvertedIndexAccumulator({1, 2}, dictionary)
self.accumulator._inverted_index = {0: {2, 3, 4}, 1: {3, 5}}
self.accumulator._num_docs = self.num_docs
def test_log_conditional_probability(self):
"""Test log_conditional_probability()"""
obtained = direct_confirmation_measure.log_conditional_probability(
self.segmentation, self.accumulator)[0]
# Answer should be ~ ln(1 / 2) = -0.693147181
expected = -0.693147181
self.assertAlmostEqual(expected, obtained)
mean, std = direct_confirmation_measure.log_conditional_probability(
self.segmentation, self.accumulator, with_std=True)[0]
self.assertAlmostEqual(expected, mean)
self.assertEqual(0.0, std)
def test_log_ratio_measure(self):
"""Test log_ratio_measure()"""
obtained = direct_confirmation_measure.log_ratio_measure(
self.segmentation, self.accumulator)[0]
# Answer should be ~ ln{(1 / 5) / [(3 / 5) * (2 / 5)]} = -0.182321557
expected = -0.182321557
self.assertAlmostEqual(expected, obtained)
mean, std = direct_confirmation_measure.log_ratio_measure(
self.segmentation, self.accumulator, with_std=True)[0]
self.assertAlmostEqual(expected, mean)
self.assertEqual(0.0, std)
def test_normalized_log_ratio_measure(self):
"""Test normalized_log_ratio_measure()"""
obtained = direct_confirmation_measure.log_ratio_measure(
self.segmentation, self.accumulator, normalize=True)[0]
# Answer should be ~ -0.182321557 / -ln(1 / 5) = -0.113282753
expected = -0.113282753
self.assertAlmostEqual(expected, obtained)
mean, std = direct_confirmation_measure.log_ratio_measure(
self.segmentation, self.accumulator, normalize=True, with_std=True)[0]
self.assertAlmostEqual(expected, mean)
self.assertEqual(0.0, std)
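# Worked arithmetic behind the expected values above (an illustrative sketch;
# the gensim implementations also add a small epsilon, ignored here). In the toy
# setup the two words co-occur in 1 of 5 documents, one word appears in 3
# documents and the other in 2:
import math
_co, _w1, _w2, _n = 1, 3, 2, 5
_log_cond = math.log(_co / _w2)                             # ~ -0.693147181
_pmi = math.log((_co / _n) / ((_w1 / _n) * (_w2 / _n)))     # ~ -0.182321557
_npmi = _pmi / -math.log(_co / _n)                          # ~ -0.113282753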
if __name__ == '__main__':
logging.root.setLevel(logging.WARNING)
unittest.main()
| 3,088 | Python | .py | 62 | 42.516129 | 95 | 0.675963 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,038 | test_api.py | piskvorky_gensim/gensim/test/test_api.py |
import logging
import unittest
import os
import gensim.downloader as api
import shutil
import numpy as np
@unittest.skipIf(
os.environ.get("SKIP_NETWORK_TESTS", False) == "1",
"Skip network-related tests (probably SSL problems on this CI/OS)"
)
class TestApi(unittest.TestCase):
def test_base_dir_creation(self):
if os.path.isdir(api.BASE_DIR):
shutil.rmtree(api.BASE_DIR)
api._create_base_dir()
self.assertTrue(os.path.isdir(api.BASE_DIR))
os.rmdir(api.BASE_DIR)
def test_load_dataset(self):
dataset_path = os.path.join(api.BASE_DIR, "__testing_matrix-synopsis", "__testing_matrix-synopsis.gz")
if os.path.isdir(api.BASE_DIR):
shutil.rmtree(api.BASE_DIR)
self.assertEqual(api.load("__testing_matrix-synopsis", return_path=True), dataset_path)
shutil.rmtree(api.BASE_DIR)
self.assertEqual(len(list(api.load("__testing_matrix-synopsis"))), 1)
shutil.rmtree(api.BASE_DIR)
def test_load_model(self):
if os.path.isdir(api.BASE_DIR):
shutil.rmtree(api.BASE_DIR)
vector_dead = np.array([
0.17403787, -0.10167074, -0.00950371, -0.10367849, -0.14034484,
-0.08751217, 0.10030612, 0.07677923, -0.32563496, 0.01929072,
0.20521086, -0.1617067, 0.00475458, 0.21956187, -0.08783089,
-0.05937332, 0.26528183, -0.06771874, -0.12369668, 0.12020949,
0.28731, 0.36735833, 0.28051138, -0.10407482, 0.2496888,
-0.19372769, -0.28719661, 0.11989869, -0.00393865, -0.2431484,
0.02725661, -0.20421691, 0.0328669, -0.26947051, -0.08068217,
-0.10245913, 0.1170633, 0.16583319, 0.1183883, -0.11217165,
0.1261425, -0.0319365, -0.15787181, 0.03753783, 0.14748634,
0.00414471, -0.02296237, 0.18336892, -0.23840059, 0.17924534
])
dataset_path = os.path.join(
api.BASE_DIR, "__testing_word2vec-matrix-synopsis", "__testing_word2vec-matrix-synopsis.gz"
)
model = api.load("__testing_word2vec-matrix-synopsis")
vector_dead_calc = model.wv["dead"]
self.assertTrue(np.allclose(vector_dead, vector_dead_calc))
shutil.rmtree(api.BASE_DIR)
self.assertEqual(api.load("__testing_word2vec-matrix-synopsis", return_path=True), dataset_path)
shutil.rmtree(api.BASE_DIR)
def test_multipart_load(self):
dataset_path = os.path.join(
api.BASE_DIR, '__testing_multipart-matrix-synopsis', '__testing_multipart-matrix-synopsis.gz'
)
if os.path.isdir(api.BASE_DIR):
shutil.rmtree(api.BASE_DIR)
self.assertEqual(dataset_path, api.load("__testing_multipart-matrix-synopsis", return_path=True))
shutil.rmtree(api.BASE_DIR)
dataset = api.load("__testing_multipart-matrix-synopsis")
self.assertEqual(len(list(dataset)), 1)
def test_info(self):
data = api.info("text8")
self.assertEqual(data["parts"], 1)
self.assertEqual(data["file_name"], 'text8.gz')
data = api.info()
self.assertEqual(sorted(data.keys()), sorted(['models', 'corpora']))
self.assertTrue(len(data['models']))
self.assertTrue(len(data['corpora']))
name_only_data = api.info(name_only=True)
self.assertEqual(len(name_only_data.keys()), 2)
self.assertTrue({'models', 'corpora'} == set(name_only_data))
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
unittest.main()
| 3,583 | Python | .py | 73 | 40.684932 | 110 | 0.640206 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,039 | test_normmodel.py | piskvorky_gensim/gensim/test/test_normmodel.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
import logging
import unittest
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse import issparse
from gensim.corpora import mmcorpus
from gensim.models import normmodel
from gensim.test.utils import datapath, get_tmpfile
class TestNormModel(unittest.TestCase):
def setUp(self):
self.corpus = mmcorpus.MmCorpus(datapath('testcorpus.mm'))
# Choose doc to be normalized. [3] chosen to demonstrate different results for l1 and l2 norm.
# doc is [(1, 1.0), (5, 2.0), (8, 1.0)]
self.doc = list(self.corpus)[3]
self.model_l1 = normmodel.NormModel(self.corpus, norm='l1')
self.model_l2 = normmodel.NormModel(self.corpus, norm='l2')
def test_tupleInput_l1(self):
"""Test tuple input for l1 transformation"""
normalized = self.model_l1.normalize(self.doc)
expected = [(1, 0.25), (5, 0.5), (8, 0.25)]
self.assertTrue(np.allclose(normalized, expected))
def test_sparseCSRInput_l1(self):
"""Test sparse csr matrix input for l1 transformation"""
row = np.array([0, 0, 1, 2, 2, 2])
col = np.array([0, 2, 2, 0, 1, 2])
data = np.array([1, 2, 3, 4, 5, 6])
sparse_matrix = csr_matrix((data, (row, col)), shape=(3, 3))
normalized = self.model_l1.normalize(sparse_matrix)
# Check if output is of same type
self.assertTrue(issparse(normalized))
# Check if output is correct
expected = np.array([[0.04761905, 0., 0.0952381],
[0., 0., 0.14285714],
[0.19047619, 0.23809524, 0.28571429]])
self.assertTrue(np.allclose(normalized.toarray(), expected))
def test_numpyndarrayInput_l1(self):
"""Test for np ndarray input for l1 transformation"""
ndarray_matrix = np.array([
[1, 0, 2],
[0, 0, 3],
[4, 5, 6]
])
normalized = self.model_l1.normalize(ndarray_matrix)
# Check if output is of same type
self.assertTrue(isinstance(normalized, np.ndarray))
# Check if output is correct
expected = np.array([
[0.04761905, 0., 0.0952381],
[0., 0., 0.14285714],
[0.19047619, 0.23809524, 0.28571429]
])
self.assertTrue(np.allclose(normalized, expected))
# Test if error is raised on unsupported input type
self.assertRaises(ValueError, lambda model, doc: model.normalize(doc), self.model_l1, [1, 2, 3])
def test_tupleInput_l2(self):
"""Test tuple input for l2 transformation"""
normalized = self.model_l2.normalize(self.doc)
expected = [(1, 0.4082482904638631), (5, 0.8164965809277261), (8, 0.4082482904638631)]
self.assertTrue(np.allclose(normalized, expected))
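# Worked arithmetic for the two tuple-input expectations above: the doc weights
# are (1.0, 2.0, 1.0), so
# l1: divide by 1 + 2 + 1 = 4 -> (0.25, 0.5, 0.25)
# l2: divide by sqrt(1 + 4 + 1) = sqrt(6) -> (0.40824829, 0.81649658, 0.40824829)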
def test_sparseCSRInput_l2(self):
"""Test sparse csr matrix input for l2 transformation"""
row = np.array([0, 0, 1, 2, 2, 2])
col = np.array([0, 2, 2, 0, 1, 2])
data = np.array([1, 2, 3, 4, 5, 6])
sparse_matrix = csr_matrix((data, (row, col)), shape=(3, 3))
normalized = self.model_l2.normalize(sparse_matrix)
# Check if output is of same type
self.assertTrue(issparse(normalized))
# Check if output is correct
expected = np.array([
[0.10482848, 0., 0.20965697],
[0., 0., 0.31448545],
[0.41931393, 0.52414242, 0.6289709]
])
self.assertTrue(np.allclose(normalized.toarray(), expected))
def test_numpyndarrayInput_l2(self):
"""Test for np ndarray input for l2 transformation"""
ndarray_matrix = np.array([
[1, 0, 2],
[0, 0, 3],
[4, 5, 6]
])
normalized = self.model_l2.normalize(ndarray_matrix)
# Check if output is of same type
self.assertTrue(isinstance(normalized, np.ndarray))
# Check if output is correct
expected = np.array([
[0.10482848, 0., 0.20965697],
[0., 0., 0.31448545],
[0.41931393, 0.52414242, 0.6289709]
])
self.assertTrue(np.allclose(normalized, expected))
# Test if error is raised on unsupported input type
self.assertRaises(ValueError, lambda model, doc: model.normalize(doc), self.model_l2, [1, 2, 3])
def test_init(self):
"""Test if error messages raised on unsupported norm"""
self.assertRaises(ValueError, normmodel.NormModel, self.corpus, 'l0')
def test_persistence(self):
fname = get_tmpfile('gensim_models.tst')
model = normmodel.NormModel(self.corpus)
model.save(fname)
model2 = normmodel.NormModel.load(fname)
self.assertTrue(model.norms == model2.norms)
tstvec = []
# try projecting an empty vector
self.assertTrue(np.allclose(model.normalize(tstvec), model2.normalize(tstvec)))
def test_persistence_compressed(self):
fname = get_tmpfile('gensim_models.tst.gz')
model = normmodel.NormModel(self.corpus)
model.save(fname)
model2 = normmodel.NormModel.load(fname, mmap=None)
self.assertTrue(model.norms == model2.norms)
tstvec = []
# try projecting an empty vector
self.assertTrue(np.allclose(model.normalize(tstvec), model2.normalize(tstvec)))
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 5,790 | Python | .py | 126 | 37.388889 | 104 | 0.62378 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,040 | test_keyedvectors.py | piskvorky_gensim/gensim/test/test_keyedvectors.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Author: Jayant Jain <jayantjain1992@gmail.com>
# Copyright (C) 2017 Radim Rehurek <me@radimrehurek.com>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking the poincare module from the models package.
"""
import functools
import logging
import unittest
import numpy as np
from gensim.models.keyedvectors import KeyedVectors, REAL, pseudorandom_weak_vector
from gensim.test.utils import datapath
import gensim.models.keyedvectors
logger = logging.getLogger(__name__)
class TestKeyedVectors(unittest.TestCase):
def setUp(self):
self.vectors = KeyedVectors.load_word2vec_format(datapath('euclidean_vectors.bin'), binary=True)
self.model_path = datapath("w2v_keyedvectors_load_test.modeldata")
self.vocab_path = datapath("w2v_keyedvectors_load_test.vocab")
def test_most_similar(self):
"""Test most_similar returns expected results."""
expected = [
'conflict',
'administration',
'terrorism',
'call',
'israel'
]
predicted = [result[0] for result in self.vectors.most_similar('war', topn=5)]
self.assertEqual(expected, predicted)
def test_most_similar_vector(self):
"""Can we pass vectors to most_similar directly?"""
positive = self.vectors.vectors[0:5]
most_similar = self.vectors.most_similar(positive=positive)
assert most_similar is not None
def test_most_similar_parameter_types(self):
"""Are the positive/negative parameter types are getting interpreted correctly?"""
partial = functools.partial(self.vectors.most_similar, topn=5)
position = partial('war', 'peace')
position_list = partial(['war'], ['peace'])
keyword = partial(positive='war', negative='peace')
keyword_list = partial(positive=['war'], negative=['peace'])
#
# The above calls should all yield identical results.
#
assert position == position_list
assert position == keyword
assert position == keyword_list
def test_most_similar_cosmul_parameter_types(self):
"""Are the positive/negative parameter types are getting interpreted correctly?"""
partial = functools.partial(self.vectors.most_similar_cosmul, topn=5)
position = partial('war', 'peace')
position_list = partial(['war'], ['peace'])
keyword = partial(positive='war', negative='peace')
keyword_list = partial(positive=['war'], negative=['peace'])
#
# The above calls should all yield identical results.
#
assert position == position_list
assert position == keyword
assert position == keyword_list
def test_vectors_for_all_list(self):
"""Test vectors_for_all returns expected results with a list of keys."""
words = [
'conflict',
'administration',
'terrorism',
'an out-of-vocabulary word',
'another out-of-vocabulary word',
]
vectors_for_all = self.vectors.vectors_for_all(words)
expected = 3
predicted = len(vectors_for_all)
assert expected == predicted
expected = self.vectors['conflict']
predicted = vectors_for_all['conflict']
assert np.allclose(expected, predicted)
def test_vectors_for_all_with_copy_vecattrs(self):
"""Test vectors_for_all returns can copy vector attributes."""
words = ['conflict']
vectors_for_all = self.vectors.vectors_for_all(words, copy_vecattrs=True)
expected = self.vectors.get_vecattr('conflict', 'count')
predicted = vectors_for_all.get_vecattr('conflict', 'count')
assert expected == predicted
def test_vectors_for_all_without_copy_vecattrs(self):
"""Test vectors_for_all returns can copy vector attributes."""
words = ['conflict']
vectors_for_all = self.vectors.vectors_for_all(words, copy_vecattrs=False)
not_expected = self.vectors.get_vecattr('conflict', 'count')
predicted = vectors_for_all.get_vecattr('conflict', 'count')
assert not_expected != predicted
def test_most_similar_topn(self):
"""Test most_similar returns correct results when `topn` is specified."""
self.assertEqual(len(self.vectors.most_similar('war', topn=5)), 5)
self.assertEqual(len(self.vectors.most_similar('war', topn=10)), 10)
predicted = self.vectors.most_similar('war', topn=None)
self.assertEqual(len(predicted), len(self.vectors))
predicted = self.vectors.most_similar('war', topn=0)
self.assertEqual(len(predicted), 0)
predicted = self.vectors.most_similar('war', topn=np.uint8(0))
self.assertEqual(len(predicted), 0)
def test_relative_cosine_similarity(self):
"""Test relative_cosine_similarity returns expected results with an input of a word pair and topn"""
wordnet_syn = [
'good', 'goodness', 'commodity', 'trade_good', 'full', 'estimable', 'honorable',
'respectable', 'beneficial', 'just', 'upright', 'adept', 'expert', 'practiced', 'proficient',
'skillful', 'skilful', 'dear', 'near', 'dependable', 'safe', 'secure', 'right', 'ripe', 'well',
'effective', 'in_effect', 'in_force', 'serious', 'sound', 'salutary', 'honest', 'undecomposed',
'unspoiled', 'unspoilt', 'thoroughly', 'soundly',
] # synonyms for "good" as per wordnet
cos_sim = [self.vectors.similarity("good", syn) for syn in wordnet_syn if syn in self.vectors]
cos_sim = sorted(cos_sim, reverse=True) # cosine_similarity of "good" with wordnet_syn in decreasing order
# computing relative_cosine_similarity of two similar words
rcs_wordnet = self.vectors.similarity("good", "nice") / sum(cos_sim[i] for i in range(10))
rcs = self.vectors.relative_cosine_similarity("good", "nice", 10)
self.assertTrue(rcs_wordnet >= rcs)
self.assertTrue(np.allclose(rcs_wordnet, rcs, 0, 0.125))
# computing relative_cosine_similarity for two non-similar words
rcs = self.vectors.relative_cosine_similarity("good", "worst", 10)
self.assertTrue(rcs < 0.10)
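# The quantity tested above can be sketched as cosine(a, b) divided by the sum of
# the cosine similarities of a's topn nearest neighbours, i.e. (illustrative
# sketch, not gensim's relative_cosine_similarity implementation):
#
#     neighbours = self.vectors.most_similar(a, topn=topn)   # [(word, cosine), ...]
#     rcs = self.vectors.similarity(a, b) / sum(sim for _, sim in neighbours)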
def test_most_similar_raises_keyerror(self):
"""Test most_similar raises KeyError when input is out of vocab."""
with self.assertRaises(KeyError):
self.vectors.most_similar('not_in_vocab')
def test_most_similar_restrict_vocab(self):
"""Test most_similar returns handles restrict_vocab correctly."""
expected = set(self.vectors.index_to_key[:5])
predicted = set(result[0] for result in self.vectors.most_similar('war', topn=5, restrict_vocab=5))
self.assertEqual(expected, predicted)
def test_most_similar_with_vector_input(self):
"""Test most_similar returns expected results with an input vector instead of an input word."""
expected = [
'war',
'conflict',
'administration',
'terrorism',
'call',
]
input_vector = self.vectors['war']
predicted = [result[0] for result in self.vectors.most_similar([input_vector], topn=5)]
self.assertEqual(expected, predicted)
def test_most_similar_to_given(self):
"""Test most_similar_to_given returns correct results."""
predicted = self.vectors.most_similar_to_given('war', ['terrorism', 'call', 'waging'])
self.assertEqual(predicted, 'terrorism')
def test_similar_by_word(self):
"""Test similar_by_word returns expected results."""
expected = [
'conflict',
'administration',
'terrorism',
'call',
'israel',
]
predicted = [result[0] for result in self.vectors.similar_by_word('war', topn=5)]
self.assertEqual(expected, predicted)
def test_similar_by_vector(self):
"""Test similar_by_word returns expected results."""
expected = [
'war',
'conflict',
'administration',
'terrorism',
'call',
]
input_vector = self.vectors['war']
predicted = [result[0] for result in self.vectors.similar_by_vector(input_vector, topn=5)]
self.assertEqual(expected, predicted)
def test_distance(self):
"""Test that distance returns expected values."""
self.assertTrue(np.allclose(self.vectors.distance('war', 'conflict'), 0.06694602))
self.assertEqual(self.vectors.distance('war', 'war'), 0)
def test_similarity(self):
"""Test similarity returns expected value for two words, and for identical words."""
self.assertTrue(np.allclose(self.vectors.similarity('war', 'war'), 1))
self.assertTrue(np.allclose(self.vectors.similarity('war', 'conflict'), 0.93305397))
def test_closer_than(self):
"""Test words_closer_than returns expected value for distinct and identical nodes."""
self.assertEqual(self.vectors.closer_than('war', 'war'), [])
expected = set(['conflict', 'administration'])
self.assertEqual(set(self.vectors.closer_than('war', 'terrorism')), expected)
def test_rank(self):
"""Test rank returns expected value for distinct and identical nodes."""
self.assertEqual(self.vectors.rank('war', 'war'), 1)
self.assertEqual(self.vectors.rank('war', 'terrorism'), 3)
def test_add_single(self):
"""Test that adding entity in a manual way works correctly."""
entities = [f'___some_entity{i}_not_present_in_keyed_vectors___' for i in range(5)]
vectors = [np.random.randn(self.vectors.vector_size) for _ in range(5)]
# Test `add` on already filled kv.
for ent, vector in zip(entities, vectors):
self.vectors.add_vectors(ent, vector)
for ent, vector in zip(entities, vectors):
self.assertTrue(np.allclose(self.vectors[ent], vector))
# Test `add` on empty kv.
kv = KeyedVectors(self.vectors.vector_size)
for ent, vector in zip(entities, vectors):
kv.add_vectors(ent, vector)
for ent, vector in zip(entities, vectors):
self.assertTrue(np.allclose(kv[ent], vector))
def test_add_multiple(self):
"""Test that adding a bulk of entities in a manual way works correctly."""
entities = ['___some_entity{}_not_present_in_keyed_vectors___'.format(i) for i in range(5)]
vectors = [np.random.randn(self.vectors.vector_size) for _ in range(5)]
# Test `add` on already filled kv.
vocab_size = len(self.vectors)
self.vectors.add_vectors(entities, vectors, replace=False)
self.assertEqual(vocab_size + len(entities), len(self.vectors))
for ent, vector in zip(entities, vectors):
self.assertTrue(np.allclose(self.vectors[ent], vector))
# Test `add` on empty kv.
kv = KeyedVectors(self.vectors.vector_size)
kv[entities] = vectors
self.assertEqual(len(kv), len(entities))
for ent, vector in zip(entities, vectors):
self.assertTrue(np.allclose(kv[ent], vector))
def test_add_type(self):
kv = KeyedVectors(2)
assert kv.vectors.dtype == REAL
words, vectors = ["a"], np.array([1., 1.], dtype=np.float64).reshape(1, -1)
kv.add_vectors(words, vectors)
assert kv.vectors.dtype == REAL
def test_set_item(self):
"""Test that __setitem__ works correctly."""
vocab_size = len(self.vectors)
# Add new entity.
entity = '___some_new_entity___'
vector = np.random.randn(self.vectors.vector_size)
self.vectors[entity] = vector
self.assertEqual(len(self.vectors), vocab_size + 1)
self.assertTrue(np.allclose(self.vectors[entity], vector))
# Replace vector for entity in vocab.
vocab_size = len(self.vectors)
vector = np.random.randn(self.vectors.vector_size)
self.vectors['war'] = vector
self.assertEqual(len(self.vectors), vocab_size)
self.assertTrue(np.allclose(self.vectors['war'], vector))
# __setitem__ on several entities.
vocab_size = len(self.vectors)
entities = ['war', '___some_new_entity1___', '___some_new_entity2___', 'terrorism', 'conflict']
vectors = [np.random.randn(self.vectors.vector_size) for _ in range(len(entities))]
self.vectors[entities] = vectors
self.assertEqual(len(self.vectors), vocab_size + 2)
for ent, vector in zip(entities, vectors):
self.assertTrue(np.allclose(self.vectors[ent], vector))
def test_load_model_and_vocab_file_strict(self):
"""Test loading model and voacab files which have decoding errors: strict mode"""
with self.assertRaises(UnicodeDecodeError):
gensim.models.KeyedVectors.load_word2vec_format(
self.model_path, fvocab=self.vocab_path, binary=False, unicode_errors="strict")
def test_load_model_and_vocab_file_replace(self):
"""Test loading model and voacab files which have decoding errors: replace mode"""
model = gensim.models.KeyedVectors.load_word2vec_format(
self.model_path, fvocab=self.vocab_path, binary=False, unicode_errors="replace")
self.assertEqual(model.get_vecattr(u'ありがとう�', 'count'), 123)
self.assertEqual(model.get_vecattr(u'どういたしまして�', 'count'), 789)
self.assertEqual(model.key_to_index[u'ありがとう�'], 0)
self.assertEqual(model.key_to_index[u'どういたしまして�'], 1)
self.assertTrue(np.array_equal(
model.get_vector(u'ありがとう�'), np.array([.6, .6, .6], dtype=np.float32)))
self.assertTrue(np.array_equal(
model.get_vector(u'どういたしまして�'), np.array([.1, .2, .3], dtype=np.float32)))
def test_load_model_and_vocab_file_ignore(self):
"""Test loading model and voacab files which have decoding errors: ignore mode"""
model = gensim.models.KeyedVectors.load_word2vec_format(
self.model_path, fvocab=self.vocab_path, binary=False, unicode_errors="ignore")
self.assertEqual(model.get_vecattr(u'ありがとう', 'count'), 123)
self.assertEqual(model.get_vecattr(u'どういたしまして', 'count'), 789)
self.assertEqual(model.key_to_index[u'ありがとう'], 0)
self.assertEqual(model.key_to_index[u'どういたしまして'], 1)
self.assertTrue(np.array_equal(
model.get_vector(u'ありがとう'), np.array([.6, .6, .6], dtype=np.float32)))
self.assertTrue(np.array_equal(
model.get_vector(u'どういたしまして'), np.array([.1, .2, .3], dtype=np.float32)))
def test_save_reload(self):
randkv = KeyedVectors(vector_size=100)
count = 20
keys = [str(i) for i in range(count)]
weights = [pseudorandom_weak_vector(randkv.vector_size) for _ in range(count)]
randkv.add_vectors(keys, weights)
tmpfiletxt = gensim.test.utils.get_tmpfile("tmp_kv.txt")
randkv.save_word2vec_format(tmpfiletxt, binary=False)
reloadtxtkv = KeyedVectors.load_word2vec_format(tmpfiletxt, binary=False)
self.assertEqual(randkv.index_to_key, reloadtxtkv.index_to_key)
self.assertTrue((randkv.vectors == reloadtxtkv.vectors).all())
tmpfilebin = gensim.test.utils.get_tmpfile("tmp_kv.bin")
randkv.save_word2vec_format(tmpfilebin, binary=True)
reloadbinkv = KeyedVectors.load_word2vec_format(tmpfilebin, binary=True)
self.assertEqual(randkv.index_to_key, reloadbinkv.index_to_key)
self.assertTrue((randkv.vectors == reloadbinkv.vectors).all())
def test_no_header(self):
randkv = KeyedVectors(vector_size=100)
count = 20
keys = [str(i) for i in range(count)]
weights = [pseudorandom_weak_vector(randkv.vector_size) for _ in range(count)]
randkv.add_vectors(keys, weights)
tmpfiletxt = gensim.test.utils.get_tmpfile("tmp_kv.txt")
randkv.save_word2vec_format(tmpfiletxt, binary=False, write_header=False)
reloadtxtkv = KeyedVectors.load_word2vec_format(tmpfiletxt, binary=False, no_header=True)
self.assertEqual(randkv.index_to_key, reloadtxtkv.index_to_key)
self.assertTrue((randkv.vectors == reloadtxtkv.vectors).all())
def test_get_mean_vector(self):
"""Test get_mean_vector returns expected results."""
keys = [
'conflict',
'administration',
'terrorism',
'call',
'an out-of-vocabulary word',
]
weights = [1, 2, 3, 1, 2]
expected_result_1 = np.array([
0.02000151, -0.12685453, 0.09196121, 0.25514853, 0.25740655,
-0.11134843, -0.0502661, -0.19278568, -0.83346179, -0.12068878,
], dtype=np.float32)
expected_result_2 = np.array([
-0.0145228, -0.11530358, 0.1169825, 0.22537769, 0.29353586,
-0.10458107, -0.05272481, -0.17547795, -0.84245106, -0.10356515,
], dtype=np.float32)
expected_result_3 = np.array([
0.01343237, -0.47651053, 0.45645328, 0.98304356, 1.1840123,
-0.51647933, -0.25308795, -0.77931081, -3.55954733, -0.55429711,
], dtype=np.float32)
self.assertTrue(np.allclose(self.vectors.get_mean_vector(keys), expected_result_1))
self.assertTrue(np.allclose(self.vectors.get_mean_vector(keys, weights), expected_result_2))
self.assertTrue(np.allclose(
self.vectors.get_mean_vector(keys, pre_normalize=False), expected_result_3)
)
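# --- Illustrative aside, not part of the original gensim test suite ---
# A minimal numpy sketch of the weighted-mean idea exercised by test_get_mean_vector above:
# unit-normalize each input vector, then average them with the given weights. This is a
# hedged approximation of the general technique only; it ignores gensim's handling of
# out-of-vocabulary keys and the pre_normalize / post-normalization options.
def _naive_weighted_mean(vectors, weights=None):
    import numpy as np  # local import keeps this sketch self-contained
    vectors = np.asarray(vectors, dtype=np.float32)
    if weights is None:
        weights = np.ones(len(vectors), dtype=np.float32)
    weights = np.asarray(weights, dtype=np.float32)
    # unit-normalize each vector, then take the weighted average
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return (normed * weights[:, None]).sum(axis=0) / weights.sum()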
class Gensim320Test(unittest.TestCase):
def test(self):
path = datapath('old_keyedvectors_320.dat')
vectors = gensim.models.keyedvectors.KeyedVectors.load(path)
self.assertTrue(vectors.get_vector('computer') is not None)
def save_dict_to_word2vec_formated_file(fname, word2vec_dict):
with gensim.utils.open(fname, "wb") as f:
num_words = len(word2vec_dict)
vector_length = len(list(word2vec_dict.values())[0])
header = "%d %d\n" % (num_words, vector_length)
f.write(header.encode(encoding="ascii"))
for word, vector in word2vec_dict.items():
f.write(word.encode())
f.write(' '.encode())
f.write(np.array(vector).astype(np.float32).tobytes())
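# Illustrative usage sketch, not part of the original tests: round-trip the helper above
# through gensim's public loader. The dictionary below is a made-up example; binary=True
# matches the raw float32 bytes that save_dict_to_word2vec_formated_file writes.
def _example_word2vec_roundtrip(tmp_fname):
    word2vec_dict = {"alpha": [0.1, 0.2, 0.3], "beta": [0.4, 0.5, 0.6]}
    save_dict_to_word2vec_formated_file(tmp_fname, word2vec_dict)
    return gensim.models.KeyedVectors.load_word2vec_format(tmp_fname, binary=True)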
class LoadWord2VecFormatTest(unittest.TestCase):
def assert_dict_equal_to_model(self, d, m):
self.assertEqual(len(d), len(m))
for word in d.keys():
self.assertSequenceEqual(list(d[word]), list(m[word]))
def verify_load2vec_binary_result(self, w2v_dict, binary_chunk_size, limit):
tmpfile = gensim.test.utils.get_tmpfile("tmp_w2v")
save_dict_to_word2vec_formated_file(tmpfile, w2v_dict)
w2v_model = \
gensim.models.keyedvectors._load_word2vec_format(
cls=gensim.models.KeyedVectors,
fname=tmpfile,
binary=True,
limit=limit,
binary_chunk_size=binary_chunk_size)
if limit is None:
limit = len(w2v_dict)
w2v_keys_postprocessed = list(w2v_dict.keys())[:limit]
w2v_dict_postprocessed = {k.lstrip(): w2v_dict[k] for k in w2v_keys_postprocessed}
self.assert_dict_equal_to_model(w2v_dict_postprocessed, w2v_model)
def test_load_word2vec_format_basic(self):
w2v_dict = {"abc": [1, 2, 3],
"cde": [4, 5, 6],
"def": [7, 8, 9]}
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=5, limit=None)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=16, limit=None)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=1024, limit=None)
w2v_dict = {"abc": [1, 2, 3],
"cdefg": [4, 5, 6],
"d": [7, 8, 9]}
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=5, limit=None)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=16, limit=None)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=1024, limit=None)
def test_load_word2vec_format_limit(self):
w2v_dict = {"abc": [1, 2, 3],
"cde": [4, 5, 6],
"def": [7, 8, 9]}
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=5, limit=1)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=16, limit=1)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=1024, limit=1)
w2v_dict = {"abc": [1, 2, 3],
"cde": [4, 5, 6],
"def": [7, 8, 9]}
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=5, limit=2)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=16, limit=2)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=1024, limit=2)
w2v_dict = {"abc": [1, 2, 3],
"cdefg": [4, 5, 6],
"d": [7, 8, 9]}
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=5, limit=1)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=16, limit=1)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=1024, limit=1)
w2v_dict = {"abc": [1, 2, 3],
"cdefg": [4, 5, 6],
"d": [7, 8, 9]}
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=5, limit=2)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=16, limit=2)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=1024, limit=2)
def test_load_word2vec_format_space_stripping(self):
w2v_dict = {"\nabc": [1, 2, 3],
"cdefdg": [4, 5, 6],
"\n\ndef": [7, 8, 9]}
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=5, limit=None)
self.verify_load2vec_binary_result(w2v_dict, binary_chunk_size=5, limit=1)
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 22,477 | Python | .py | 413 | 44.445521 | 115 | 0.637014 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,041 | test_aggregation.py | piskvorky_gensim/gensim/test/test_aggregation.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
import logging
import unittest
from gensim.topic_coherence import aggregation
class TestAggregation(unittest.TestCase):
def setUp(self):
self.confirmed_measures = [1.1, 2.2, 3.3, 4.4]
def test_arithmetic_mean(self):
"""Test arithmetic_mean()"""
obtained = aggregation.arithmetic_mean(self.confirmed_measures)
expected = 2.75
self.assertEqual(obtained, expected)
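        # Worked check for the expected value: (1.1 + 2.2 + 3.3 + 4.4) / 4 = 11.0 / 4 = 2.75.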
if __name__ == '__main__':
logging.root.setLevel(logging.WARNING)
unittest.main()
| 797 | Python | .py | 22 | 32.318182 | 95 | 0.711864 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,042 | test_similarity_metrics.py | piskvorky_gensim/gensim/test/test_similarity_metrics.py |
#!/usr/bin/env python
# encoding: utf-8
#
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated test to check similarity functions and isbow function.
"""
import logging
import unittest
from gensim import matutils
from scipy.sparse import csr_matrix
import numpy as np
import math
from gensim.corpora.mmcorpus import MmCorpus
from gensim.models import ldamodel
from gensim.test.utils import datapath, common_dictionary, common_corpus
class TestIsBow(unittest.TestCase):
def test_None(self):
# test None
result = matutils.isbow(None)
expected = False
self.assertEqual(expected, result)
def test_bow(self):
# test list words
# one bag of words
potentialbow = [(0, 0.4)]
result = matutils.isbow(potentialbow)
expected = True
self.assertEqual(expected, result)
# multiple bags
potentialbow = [(0, 4.), (1, 2.), (2, 5.), (3, 8.)]
result = matutils.isbow(potentialbow)
expected = True
self.assertEqual(expected, result)
# checking empty input
potentialbow = []
result = matutils.isbow(potentialbow)
expected = True
self.assertEqual(expected, result)
# checking corpus; should return false
potentialbow = [[(2, 1), (3, 1), (4, 1), (5, 1), (1, 1), (7, 1)]]
result = matutils.isbow(potentialbow)
expected = False
self.assertEqual(expected, result)
# not a bag of words, should return false
potentialbow = [(1, 3, 6)]
result = matutils.isbow(potentialbow)
expected = False
self.assertEqual(expected, result)
# checking sparse matrix format bag of words
potentialbow = csr_matrix([[1, 0.4], [0, 0.3], [2, 0.1]])
result = matutils.isbow(potentialbow)
expected = True
self.assertEqual(expected, result)
# checking np array format bag of words
potentialbow = np.array([[1, 0.4], [0, 0.2], [2, 0.2]])
result = matutils.isbow(potentialbow)
expected = True
self.assertEqual(expected, result)
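# Illustrative aside, not part of the original tests: the bag-of-words convention that the
# isbow() checks above rely on is "a document is a list of (token_id, weight) 2-tuples".
# The helper below is only a rough approximation of that check for plain Python lists; it
# is not gensim's matutils.isbow and does not cover the sparse/ndarray cases tested above.
def _looks_like_bow(document):
    try:
        return all(len(entry) == 2 for entry in document)
    except TypeError:
        return False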
class TestHellinger(unittest.TestCase):
def setUp(self):
self.corpus = MmCorpus(datapath('testcorpus.mm'))
self.class_ = ldamodel.LdaModel
self.model = self.class_(common_corpus, id2word=common_dictionary, num_topics=2, passes=100)
def test_inputs(self):
# checking empty inputs
vec_1 = []
vec_2 = []
result = matutils.hellinger(vec_1, vec_2)
expected = 0.0
self.assertEqual(expected, result)
# checking np array and list input
vec_1 = np.array([])
vec_2 = []
result = matutils.hellinger(vec_1, vec_2)
expected = 0.0
self.assertEqual(expected, result)
# checking scipy csr matrix and list input
vec_1 = csr_matrix([])
vec_2 = []
result = matutils.hellinger(vec_1, vec_2)
expected = 0.0
self.assertEqual(expected, result)
def test_distributions(self):
# checking different length bag of words as inputs
vec_1 = [(2, 0.1), (3, 0.4), (4, 0.1), (5, 0.1), (1, 0.1), (7, 0.2)]
vec_2 = [(1, 0.1), (3, 0.8), (4, 0.1)]
result = matutils.hellinger(vec_1, vec_2)
expected = 0.484060507634
self.assertAlmostEqual(expected, result)
# checking symmetrical bag of words inputs return same distance
vec_1 = [(2, 0.1), (3, 0.4), (4, 0.1), (5, 0.1), (1, 0.1), (7, 0.2)]
vec_2 = [(1, 0.1), (3, 0.8), (4, 0.1), (8, 0.1), (10, 0.8), (9, 0.1)]
result = matutils.hellinger(vec_1, vec_2)
result_symmetric = matutils.hellinger(vec_2, vec_1)
expected = 0.856921568786
self.assertAlmostEqual(expected, result)
self.assertAlmostEqual(expected, result_symmetric)
# checking ndarray, csr_matrix as inputs
vec_1 = np.array([[1, 0.3], [0, 0.4], [2, 0.3]])
vec_2 = csr_matrix([[1, 0.4], [0, 0.2], [2, 0.2]])
result = matutils.hellinger(vec_1, vec_2)
expected = 0.160618030536
self.assertAlmostEqual(expected, result)
# checking ndarray, list as inputs
vec_1 = np.array([0.6, 0.1, 0.1, 0.2])
vec_2 = [0.2, 0.2, 0.1, 0.5]
result = matutils.hellinger(vec_1, vec_2)
expected = 0.309742984153
self.assertAlmostEqual(expected, result)
# testing LDA distribution vectors
np.random.seed(0)
model = self.class_(self.corpus, id2word=common_dictionary, num_topics=2, passes=100)
lda_vec1 = model[[(1, 2), (2, 3)]]
lda_vec2 = model[[(2, 2), (1, 3)]]
result = matutils.hellinger(lda_vec1, lda_vec2)
expected = 1.0406845281146034e-06
self.assertAlmostEqual(expected, result)
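# Illustrative aside, not part of the original tests: for two dense probability
# distributions p and q, the Hellinger distance checked above is
#     H(p, q) = sqrt(0.5 * sum_i (sqrt(p_i) - sqrt(q_i)) ** 2)
# A minimal sketch for the dense-list case only (gensim's matutils.hellinger also accepts
# sparse and bag-of-words inputs, which this sketch does not handle):
def _hellinger_dense(p, q):
    import numpy as np  # local import keeps this sketch self-contained
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))
# For example, _hellinger_dense([0.6, 0.1, 0.1, 0.2], [0.2, 0.2, 0.1, 0.5]) ~= 0.3097,
# matching the expected value used in test_distributions above.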
class TestKL(unittest.TestCase):
def setUp(self):
self.corpus = MmCorpus(datapath('testcorpus.mm'))
self.class_ = ldamodel.LdaModel
self.model = self.class_(common_corpus, id2word=common_dictionary, num_topics=2, passes=100)
def test_inputs(self):
# checking empty inputs
vec_1 = []
vec_2 = []
result = matutils.kullback_leibler(vec_1, vec_2)
expected = 0.0
self.assertEqual(expected, result)
# checking np array and list input
vec_1 = np.array([])
vec_2 = []
result = matutils.kullback_leibler(vec_1, vec_2)
expected = 0.0
self.assertEqual(expected, result)
# checking scipy csr matrix and list input
vec_1 = csr_matrix([])
vec_2 = []
result = matutils.kullback_leibler(vec_1, vec_2)
expected = 0.0
self.assertEqual(expected, result)
def test_distributions(self):
# checking bag of words as inputs
vec_1 = [(2, 0.1), (3, 0.4), (4, 0.1), (5, 0.1), (1, 0.1), (7, 0.2)]
vec_2 = [(1, 0.1), (3, 0.8), (4, 0.1)]
result = matutils.kullback_leibler(vec_2, vec_1, 8)
expected = 0.55451775
self.assertAlmostEqual(expected, result, places=5)
        # KL is not symmetric; vec_1 compared with vec_2 will contain log of zeros and return infinity
vec_1 = [(2, 0.1), (3, 0.4), (4, 0.1), (5, 0.1), (1, 0.1), (7, 0.2)]
vec_2 = [(1, 0.1), (3, 0.8), (4, 0.1)]
result = matutils.kullback_leibler(vec_1, vec_2, 8)
self.assertTrue(math.isinf(result))
# checking ndarray, csr_matrix as inputs
vec_1 = np.array([[1, 0.3], [0, 0.4], [2, 0.3]])
vec_2 = csr_matrix([[1, 0.4], [0, 0.2], [2, 0.2]])
result = matutils.kullback_leibler(vec_1, vec_2, 3)
expected = 0.0894502
self.assertAlmostEqual(expected, result, places=5)
# checking ndarray, list as inputs
vec_1 = np.array([0.6, 0.1, 0.1, 0.2])
vec_2 = [0.2, 0.2, 0.1, 0.5]
result = matutils.kullback_leibler(vec_1, vec_2)
expected = 0.40659450877
self.assertAlmostEqual(expected, result, places=5)
# testing LDA distribution vectors
np.random.seed(0)
model = self.class_(self.corpus, id2word=common_dictionary, num_topics=2, passes=100)
lda_vec1 = model[[(1, 2), (2, 3)]]
lda_vec2 = model[[(2, 2), (1, 3)]]
result = matutils.kullback_leibler(lda_vec1, lda_vec2)
expected = 4.283407e-12
self.assertAlmostEqual(expected, result, places=5)
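# Illustrative aside, not part of the original tests: for dense distributions p and q, the
# Kullback-Leibler divergence checked above is KL(p || q) = sum_i p_i * ln(p_i / q_i).
# Worked check against test_distributions: with p = [0.6, 0.1, 0.1, 0.2] and
# q = [0.2, 0.2, 0.1, 0.5], the sum 0.6*ln(3) + 0.1*ln(0.5) + 0.1*ln(1) + 0.2*ln(0.4)
# comes to roughly 0.4066, matching the expected 0.40659450877. Minimal dense-only sketch
# (gensim's matutils.kullback_leibler also handles sparse and bag-of-words inputs):
def _kl_dense(p, q):
    import numpy as np  # local import keeps this sketch self-contained
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))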
class TestJaccard(unittest.TestCase):
def test_inputs(self):
# all empty inputs will give a divide by zero exception
vec_1 = []
vec_2 = []
self.assertRaises(ZeroDivisionError, matutils.jaccard, vec_1, vec_2)
def test_distributions(self):
# checking bag of words as inputs
vec_1 = [(2, 1), (3, 4), (4, 1), (5, 1), (1, 1), (7, 2)]
vec_2 = [(1, 1), (3, 8), (4, 1)]
result = matutils.jaccard(vec_2, vec_1)
expected = 1 - 0.3
self.assertAlmostEqual(expected, result)
# checking ndarray, csr_matrix as inputs
vec_1 = np.array([[1, 3], [0, 4], [2, 3]])
vec_2 = csr_matrix([[1, 4], [0, 2], [2, 2]])
result = matutils.jaccard(vec_1, vec_2)
expected = 1 - 0.388888888889
self.assertAlmostEqual(expected, result)
# checking ndarray, list as inputs
vec_1 = np.array([6, 1, 2, 3])
vec_2 = [4, 3, 2, 5]
result = matutils.jaccard(vec_1, vec_2)
expected = 1 - 0.333333333333
self.assertAlmostEqual(expected, result)
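# Illustrative aside, not part of the original tests: the bag-of-words expectation above is
# consistent with a weighted Jaccard distance. Shared ids 1, 3, 4 contribute min weights
# 1 + 4 + 1 = 6, the total weight across both vectors is 10 + 10 = 20, so the similarity is
# 6 / 20 = 0.3 and the distance is 1 - 0.3. A hedged sketch of that computation (this is an
# interpretation of the expected values, not gensim's matutils.jaccard itself):
def _weighted_jaccard_distance_bow(vec1, vec2):
    d1, d2 = dict(vec1), dict(vec2)
    shared = sum(min(d1[key], d2[key]) for key in d1.keys() & d2.keys())
    total = sum(d1.values()) + sum(d2.values())
    return 1.0 - shared / total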
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 8,690 | Python | .py | 202 | 34.876238 | 100 | 0.596684 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,043 | test_rpmodel.py | piskvorky_gensim/gensim/test/test_rpmodel.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
import logging
import unittest
import numpy as np
from gensim.corpora.mmcorpus import MmCorpus
from gensim.models import rpmodel
from gensim import matutils
from gensim.test.utils import datapath, get_tmpfile
class TestRpModel(unittest.TestCase):
def setUp(self):
self.corpus = MmCorpus(datapath('testcorpus.mm'))
def test_transform(self):
# create the transformation model
# HACK; set fixed seed so that we always get the same random matrix (and can compare against expected results)
np.random.seed(13)
model = rpmodel.RpModel(self.corpus, num_topics=2)
# transform one document
doc = list(self.corpus)[0]
transformed = model[doc]
vec = matutils.sparse2full(transformed, 2) # convert to dense vector, for easier equality tests
expected = np.array([-0.70710677, 0.70710677])
self.assertTrue(np.allclose(vec, expected)) # transformed entries must be equal up to sign
def test_persistence(self):
fname = get_tmpfile('gensim_models.tst')
model = rpmodel.RpModel(self.corpus, num_topics=2)
model.save(fname)
model2 = rpmodel.RpModel.load(fname)
self.assertEqual(model.num_topics, model2.num_topics)
self.assertTrue(np.allclose(model.projection, model2.projection))
tstvec = []
self.assertTrue(np.allclose(model[tstvec], model2[tstvec])) # try projecting an empty vector
def test_persistence_compressed(self):
fname = get_tmpfile('gensim_models.tst.gz')
model = rpmodel.RpModel(self.corpus, num_topics=2)
model.save(fname)
model2 = rpmodel.RpModel.load(fname, mmap=None)
self.assertEqual(model.num_topics, model2.num_topics)
self.assertTrue(np.allclose(model.projection, model2.projection))
tstvec = []
self.assertTrue(np.allclose(model[tstvec], model2[tstvec])) # try projecting an empty vector
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 2,374 | Python | .py | 50 | 41.4 | 118 | 0.703463 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,044 | test_tfidfmodel.py | piskvorky_gensim/gensim/test/test_tfidfmodel.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
import logging
import unittest
import numpy as np
from gensim.corpora.mmcorpus import MmCorpus
from gensim.models import tfidfmodel
from gensim.test.utils import datapath, get_tmpfile, common_dictionary, common_corpus
from gensim.corpora import Dictionary
texts = [
['complier', 'system', 'computer'],
['eulerian', 'node', 'cycle', 'graph', 'tree', 'path'],
['graph', 'flow', 'network', 'graph'],
['loading', 'computer', 'system'],
['user', 'server', 'system'],
['tree', 'hamiltonian'],
['graph', 'trees'],
['computer', 'kernel', 'malfunction', 'computer'],
['server', 'system', 'computer'],
]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
class TestTfidfModel(unittest.TestCase):
def setUp(self):
self.corpus = MmCorpus(datapath('testcorpus.mm'))
def test_transform(self):
# create the transformation model
model = tfidfmodel.TfidfModel(self.corpus, normalize=True)
# transform one document
doc = list(self.corpus)[0]
transformed = model[doc]
expected = [(0, 0.57735026918962573), (1, 0.57735026918962573), (2, 0.57735026918962573)]
self.assertTrue(np.allclose(transformed, expected))
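        # Worked check for the expected weights: the transformed document has three entries
        # with equal tf-idf weight w, so length normalization turns each of them into
        # w / sqrt(3 * w**2) = 1 / sqrt(3) ~= 0.57735026918962573.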
def test_init(self):
# create the transformation model by analyzing a corpus
# uses the global `corpus`!
model1 = tfidfmodel.TfidfModel(common_corpus)
dfs = common_dictionary.dfs
# make sure the dfs<->idfs transformation works
self.assertEqual(model1.dfs, dfs)
self.assertEqual(model1.idfs, tfidfmodel.precompute_idfs(model1.wglobal, dfs, len(common_corpus)))
# create the transformation model by directly supplying a term->docfreq
# mapping from the global var `dictionary`.
model2 = tfidfmodel.TfidfModel(dictionary=common_dictionary)
self.assertEqual(model1.idfs, model2.idfs)
def test_persistence(self):
# Test persistence without using `smartirs`
fname = get_tmpfile('gensim_models.tst')
model = tfidfmodel.TfidfModel(self.corpus, normalize=True)
model.save(fname)
model2 = tfidfmodel.TfidfModel.load(fname)
self.assertTrue(model.idfs == model2.idfs)
tstvec = [corpus[1], corpus[2]]
self.assertTrue(np.allclose(model[tstvec[0]], model2[tstvec[0]]))
self.assertTrue(np.allclose(model[tstvec[1]], model2[tstvec[1]]))
self.assertTrue(np.allclose(model[[]], model2[[]])) # try projecting an empty vector
# Test persistence with using `smartirs`
fname = get_tmpfile('gensim_models_smartirs.tst')
model = tfidfmodel.TfidfModel(self.corpus, smartirs="nfc")
model.save(fname)
model2 = tfidfmodel.TfidfModel.load(fname)
self.assertTrue(model.idfs == model2.idfs)
tstvec = [corpus[1], corpus[2]]
self.assertTrue(np.allclose(model[tstvec[0]], model2[tstvec[0]]))
self.assertTrue(np.allclose(model[tstvec[1]], model2[tstvec[1]]))
self.assertTrue(np.allclose(model[[]], model2[[]])) # try projecting an empty vector
# Test persistence between Gensim v3.2.0 and current model.
model3 = tfidfmodel.TfidfModel(self.corpus, smartirs="nfc")
model4 = tfidfmodel.TfidfModel.load(datapath('tfidf_model.tst'))
idfs3 = [model3.idfs[key] for key in sorted(model3.idfs.keys())]
idfs4 = [model4.idfs[key] for key in sorted(model4.idfs.keys())]
self.assertTrue(np.allclose(idfs3, idfs4))
tstvec = [corpus[1], corpus[2]]
self.assertTrue(np.allclose(model3[tstvec[0]], model4[tstvec[0]]))
self.assertTrue(np.allclose(model3[tstvec[1]], model4[tstvec[1]]))
self.assertTrue(np.allclose(model3[[]], model4[[]])) # try projecting an empty vector
# Test persistence with using pivoted normalization
fname = get_tmpfile('gensim_models_smartirs.tst')
model = tfidfmodel.TfidfModel(self.corpus, pivot=0, slope=1)
model.save(fname)
model2 = tfidfmodel.TfidfModel.load(fname, mmap=None)
self.assertTrue(model.idfs == model2.idfs)
tstvec = [corpus[1], corpus[2]]
self.assertTrue(np.allclose(model[tstvec[0]], model2[tstvec[0]]))
self.assertTrue(np.allclose(model[tstvec[1]], model2[tstvec[1]]))
# Test persistence between Gensim v3.2.0 and pivoted normalization compressed model.
model3 = tfidfmodel.TfidfModel(self.corpus, pivot=0, slope=1)
model4 = tfidfmodel.TfidfModel.load(datapath('tfidf_model.tst'))
idfs3 = [model3.idfs[key] for key in sorted(model3.idfs.keys())]
idfs4 = [model4.idfs[key] for key in sorted(model4.idfs.keys())]
self.assertTrue(np.allclose(idfs3, idfs4))
tstvec = [corpus[1], corpus[2]]
self.assertTrue(np.allclose(model3[tstvec[0]], model4[tstvec[0]]))
self.assertTrue(np.allclose(model3[tstvec[1]], model4[tstvec[1]]))
def test_persistence_compressed(self):
# Test persistence without using `smartirs`
fname = get_tmpfile('gensim_models.tst.gz')
model = tfidfmodel.TfidfModel(self.corpus, normalize=True)
model.save(fname)
model2 = tfidfmodel.TfidfModel.load(fname, mmap=None)
self.assertTrue(model.idfs == model2.idfs)
tstvec = [corpus[1], corpus[2]]
self.assertTrue(np.allclose(model[tstvec[0]], model2[tstvec[0]]))
self.assertTrue(np.allclose(model[tstvec[1]], model2[tstvec[1]]))
self.assertTrue(np.allclose(model[[]], model2[[]])) # try projecting an empty vector
# Test persistence with using `smartirs`
fname = get_tmpfile('gensim_models_smartirs.tst.gz')
model = tfidfmodel.TfidfModel(self.corpus, smartirs="nfc")
model.save(fname)
model2 = tfidfmodel.TfidfModel.load(fname, mmap=None)
self.assertTrue(model.idfs == model2.idfs)
tstvec = [corpus[1], corpus[2]]
self.assertTrue(np.allclose(model[tstvec[0]], model2[tstvec[0]]))
self.assertTrue(np.allclose(model[tstvec[1]], model2[tstvec[1]]))
self.assertTrue(np.allclose(model[[]], model2[[]])) # try projecting an empty vector
# Test persistence between Gensim v3.2.0 and current compressed model.
model3 = tfidfmodel.TfidfModel(self.corpus, smartirs="nfc")
model4 = tfidfmodel.TfidfModel.load(datapath('tfidf_model.tst.bz2'))
idfs3 = [model3.idfs[key] for key in sorted(model3.idfs.keys())]
idfs4 = [model4.idfs[key] for key in sorted(model4.idfs.keys())]
self.assertTrue(np.allclose(idfs3, idfs4))
tstvec = [corpus[1], corpus[2]]
self.assertTrue(np.allclose(model3[tstvec[0]], model4[tstvec[0]]))
self.assertTrue(np.allclose(model3[tstvec[1]], model4[tstvec[1]]))
self.assertTrue(np.allclose(model3[[]], model4[[]])) # try projecting an empty vector
# Test persistence with using pivoted normalization
fname = get_tmpfile('gensim_models_smartirs.tst.gz')
model = tfidfmodel.TfidfModel(self.corpus, pivot=0, slope=1)
model.save(fname)
model2 = tfidfmodel.TfidfModel.load(fname, mmap=None)
self.assertTrue(model.idfs == model2.idfs)
tstvec = [corpus[1], corpus[2]]
self.assertTrue(np.allclose(model[tstvec[0]], model2[tstvec[0]]))
self.assertTrue(np.allclose(model[tstvec[1]], model2[tstvec[1]]))
# Test persistence between Gensim v3.2.0 and pivoted normalization compressed model.
model3 = tfidfmodel.TfidfModel(self.corpus, pivot=0, slope=1)
model4 = tfidfmodel.TfidfModel.load(datapath('tfidf_model.tst.bz2'))
idfs3 = [model3.idfs[key] for key in sorted(model3.idfs.keys())]
idfs4 = [model4.idfs[key] for key in sorted(model4.idfs.keys())]
self.assertTrue(np.allclose(idfs3, idfs4))
tstvec = [corpus[1], corpus[2]]
self.assertTrue(np.allclose(model3[tstvec[0]], model4[tstvec[0]]))
self.assertTrue(np.allclose(model3[tstvec[1]], model4[tstvec[1]]))
def test_consistency(self):
docs = [corpus[1], corpus[2]]
# Test if `ntc` yields the default docs.
model = tfidfmodel.TfidfModel(corpus, smartirs='nfc')
transformed_docs = [model[docs[0]], model[docs[1]]]
model = tfidfmodel.TfidfModel(corpus)
expected_docs = [model[docs[0]], model[docs[1]]]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# Testing all the variations of `wlocal`
# tnn
model = tfidfmodel.TfidfModel(corpus, smartirs='tnn')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = docs[:]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# nnn
model = tfidfmodel.TfidfModel(corpus, smartirs='nnn')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = docs[:]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# lnn
model = tfidfmodel.TfidfModel(corpus, smartirs='lnn')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = [
[(3, 1.0), (4, 1.0), (5, 1.0), (6, 1.0), (7, 1.0), (8, 1.0)],
[(5, 2.0), (9, 1.0), (10, 1.0)]
]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# dnn
model = tfidfmodel.TfidfModel(corpus, smartirs='dnn')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = [
[(3, 1.0), (4, 1.0), (5, 1.0), (6, 1.0), (7, 1.0), (8, 1.0)],
[(5, 2.0), (9, 1.0), (10, 1.0)]
]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# ann
model = tfidfmodel.TfidfModel(corpus, smartirs='ann')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = [
[(3, 1.0), (4, 1.0), (5, 1.0), (6, 1.0), (7, 1.0), (8, 1.0)],
[(5, 1.0), (9, 0.75), (10, 0.75)]
]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# bnn
model = tfidfmodel.TfidfModel(corpus, smartirs='bnn')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = [
[(3, 1), (4, 1), (5, 1), (6, 1), (7, 1), (8, 1)],
[(5, 1), (9, 1), (10, 1)]
]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# Lnn
model = tfidfmodel.TfidfModel(corpus, smartirs='Lnn')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = [
[
(3, 1.0), (4, 1.0), (5, 1.0), (6, 1.0),
(7, 1.0), (8, 1.0)
],
[
(5, 1.4133901052), (9, 0.7066950526), (10, 0.7066950526)
]
        ]
        self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
        self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# Testing all the variations of `glocal`
# nxn
model = tfidfmodel.TfidfModel(corpus, smartirs='nxn')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = docs[:]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# nfn
model = tfidfmodel.TfidfModel(corpus, smartirs='nfn')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = [
[
(3, 3.169925001442312), (4, 3.169925001442312), (5, 1.584962500721156), (6, 3.169925001442312),
(7, 3.169925001442312), (8, 2.169925001442312)
],
[
(5, 3.169925001442312), (9, 3.169925001442312), (10, 3.169925001442312)
]
]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# ntn
model = tfidfmodel.TfidfModel(corpus, smartirs='ntn')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = [
[
(3, 3.321928094887362), (4, 3.321928094887362), (5, 1.736965594166206), (6, 3.321928094887362),
(7, 3.321928094887362), (8, 2.321928094887362)
],
[
(5, 3.473931188332412), (9, 3.321928094887362), (10, 3.321928094887362)
]
]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# npn
model = tfidfmodel.TfidfModel(corpus, smartirs='npn')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = [
[
(3, 3.0), (4, 3.0), (5, 1.0), (6, 3.0),
(7, 3.0), (8, 1.8073549220576042)
],
[
(5, 2.0), (9, 3.0), (10, 3.0)
]
]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# Testing all the variations of `normalize`
# nnx
model = tfidfmodel.TfidfModel(corpus, smartirs='nnx')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = docs[:]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# nnc
model = tfidfmodel.TfidfModel(corpus, smartirs='nnc')
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = [
[
(3, 0.4082482905), (4, 0.4082482905), (5, 0.4082482905), (6, 0.4082482905),
(7, 0.4082482905), (8, 0.4082482905)
],
[
(5, 0.81649658092772603), (9, 0.40824829046386302), (10, 0.40824829046386302)
]
]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
model = tfidfmodel.TfidfModel(corpus, wlocal=lambda x: x, wglobal=lambda x, y: x * x, smartirs='nnc')
transformed_docs = [model[docs[0]], model[docs[1]]]
model = tfidfmodel.TfidfModel(corpus, wlocal=lambda x: x * x, wglobal=lambda x, y: x, smartirs='nnc')
expected_docs = [model[docs[0]], model[docs[1]]]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# nnu
slope = 0.2
model = tfidfmodel.TfidfModel(corpus, smartirs='nnu', slope=slope)
transformed_docs = [model[docs[0]], model[docs[1]]]
average_unique_length = 1.0 * sum(len(set(text)) for text in texts) / len(texts)
vector_norms = [
(1.0 - slope) * average_unique_length + slope * 6.0,
(1.0 - slope) * average_unique_length + slope * 3.0,
]
expected_docs = [
[(termid, weight / vector_norms[0]) for termid, weight in docs[0]],
[(termid, weight / vector_norms[1]) for termid, weight in docs[1]],
]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
# nnb
slope = 0.2
model = tfidfmodel.TfidfModel(dictionary=dictionary, smartirs='nnb', slope=slope)
transformed_docs = [model[docs[0]], model[docs[1]]]
average_character_length = sum(len(word) + 1.0 for text in texts for word in text) / len(texts)
vector_norms = [
(1.0 - slope) * average_character_length + slope * 36.0,
(1.0 - slope) * average_character_length + slope * 25.0,
]
expected_docs = [
[(termid, weight / vector_norms[0]) for termid, weight in docs[0]],
[(termid, weight / vector_norms[1]) for termid, weight in docs[1]],
]
self.assertTrue(np.allclose(transformed_docs[0], expected_docs[0]))
self.assertTrue(np.allclose(transformed_docs[1], expected_docs[1]))
def test_pivoted_normalization(self):
docs = [corpus[1], corpus[2]]
# Test if slope=1 yields the default docs for pivoted normalization.
model = tfidfmodel.TfidfModel(self.corpus)
transformed_docs = [model[docs[0]], model[docs[1]]]
model = tfidfmodel.TfidfModel(self.corpus, pivot=0, slope=1)
expected_docs = [model[docs[0]], model[docs[1]]]
self.assertTrue(np.allclose(sorted(transformed_docs[0]), sorted(expected_docs[0])))
self.assertTrue(np.allclose(sorted(transformed_docs[1]), sorted(expected_docs[1])))
# Test if pivoted model is consistent
model = tfidfmodel.TfidfModel(self.corpus, pivot=0, slope=0.5)
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = [
[
(8, 0.8884910505493495), (7, 0.648974041227711), (6, 0.8884910505493495),
(5, 0.648974041227711), (4, 0.8884910505493495), (3, 0.8884910505493495)
],
[
(10, 0.8164965809277263), (9, 0.8164965809277263), (5, 1.6329931618554525)
]
]
self.assertTrue(np.allclose(sorted(transformed_docs[0]), sorted(expected_docs[0])))
self.assertTrue(np.allclose(sorted(transformed_docs[1]), sorted(expected_docs[1])))
def test_wlocal_wglobal(self):
def wlocal(tf):
assert isinstance(tf, np.ndarray)
return iter(tf + 1)
def wglobal(df, total_docs):
return 1
docs = [corpus[1], corpus[2]]
model = tfidfmodel.TfidfModel(corpus, wlocal=wlocal, wglobal=wglobal, normalize=False)
transformed_docs = [model[docs[0]], model[docs[1]]]
expected_docs = [
[(termid, weight + 1) for termid, weight in docs[0]],
[(termid, weight + 1) for termid, weight in docs[1]],
]
self.assertTrue(np.allclose(sorted(transformed_docs[0]), sorted(expected_docs[0])))
self.assertTrue(np.allclose(sorted(transformed_docs[1]), sorted(expected_docs[1])))
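        # Informal note, inferred from the expectations in this test rather than from the
        # TfidfModel source: each weight behaves like wlocal(term_frequency) multiplied by
        # wglobal(doc_frequency, total_docs); with wglobal == 1, wlocal(tf) = tf + 1 and
        # normalize=False, the transformed weights are simply tf + 1, which is exactly what
        # expected_docs encodes above.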
def test_backwards_compatibility(self):
model = tfidfmodel.TfidfModel.load(datapath('tfidf_model_3_2.tst'))
# attrs ensured by load method
attrs = ['pivot', 'slope', 'smartirs']
for a in attrs:
self.assertTrue(hasattr(model, a))
# __getitem__: assumes smartirs attr is present
self.assertEqual(len(model[corpus]), len(corpus))
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 19,533 | Python | .py | 374 | 42.81016 | 111 | 0.620238 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,045 | test_tmdiff.py | piskvorky_gensim/gensim/test/test_tmdiff.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2016 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
import logging
import unittest
import numpy as np
from gensim.models import LdaModel
from gensim.test.utils import common_dictionary, common_corpus
class TestLdaDiff(unittest.TestCase):
def setUp(self):
self.dictionary = common_dictionary
self.corpus = common_corpus
self.num_topics = 5
self.n_ann_terms = 10
self.model = LdaModel(corpus=self.corpus, id2word=self.dictionary, num_topics=self.num_topics, passes=10)
def test_basic(self):
# test for matrix case
mdiff, annotation = self.model.diff(self.model, n_ann_terms=self.n_ann_terms)
self.assertEqual(mdiff.shape, (self.num_topics, self.num_topics))
self.assertEqual(len(annotation), self.num_topics)
self.assertEqual(len(annotation[0]), self.num_topics)
# test for diagonal case
mdiff, annotation = self.model.diff(self.model, n_ann_terms=self.n_ann_terms, diagonal=True)
self.assertEqual(mdiff.shape, (self.num_topics,))
self.assertEqual(len(annotation), self.num_topics)
def test_identity(self):
for dist_name in ["hellinger", "kullback_leibler", "jaccard"]:
# test for matrix case
mdiff, annotation = self.model.diff(self.model, n_ann_terms=self.n_ann_terms, distance=dist_name)
for row in annotation:
for (int_tokens, diff_tokens) in row:
self.assertEqual(diff_tokens, [])
self.assertEqual(len(int_tokens), self.n_ann_terms)
self.assertTrue(np.allclose(np.diag(mdiff), np.zeros(mdiff.shape[0], dtype=mdiff.dtype)))
if dist_name == "jaccard":
self.assertTrue(np.allclose(mdiff, np.zeros(mdiff.shape, dtype=mdiff.dtype)))
# test for diagonal case
mdiff, annotation = \
self.model.diff(self.model, n_ann_terms=self.n_ann_terms, distance=dist_name, diagonal=True)
for (int_tokens, diff_tokens) in annotation:
self.assertEqual(diff_tokens, [])
self.assertEqual(len(int_tokens), self.n_ann_terms)
self.assertTrue(np.allclose(mdiff, np.zeros(mdiff.shape, dtype=mdiff.dtype)))
if dist_name == "jaccard":
self.assertTrue(np.allclose(mdiff, np.zeros(mdiff.shape, dtype=mdiff.dtype)))
def test_input(self):
self.assertRaises(ValueError, self.model.diff, self.model, n_ann_terms=self.n_ann_terms, distance='something')
self.assertRaises(ValueError, self.model.diff, [], n_ann_terms=self.n_ann_terms, distance='something')
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 2,944 | Python | .py | 53 | 46.339623 | 118 | 0.660167 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,046 | test_phrases.py | piskvorky_gensim/gensim/test/test_phrases.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for the phrase detection module.
"""
import logging
import unittest
import numpy as np
from gensim.models.phrases import Phrases, FrozenPhrases, _PhrasesTransformation
from gensim.models.phrases import original_scorer
from gensim.test.utils import common_texts, temporary_file, datapath
class TestPhraseAnalysis(unittest.TestCase):
class AnalysisTester(_PhrasesTransformation):
def __init__(self, scores, threshold):
super().__init__(connector_words={"a", "the", "with", "of"})
self.scores = scores
self.threshold = threshold
def score_candidate(self, word_a, word_b, in_between):
phrase = "_".join([word_a] + in_between + [word_b])
score = self.scores.get(phrase, -1)
if score > self.threshold:
return phrase, score
return None, None
def test_simple_analysis(self):
"""Test transformation with no phrases."""
sentence = ["simple", "sentence", "should", "pass"]
result = self.AnalysisTester({}, threshold=1)[sentence]
self.assertEqual(result, sentence)
sentence = ["a", "simple", "sentence", "with", "no", "bigram", "but", "common", "terms"]
result = self.AnalysisTester({}, threshold=1)[sentence]
self.assertEqual(result, sentence)
def test_analysis_bigrams(self):
scores = {
"simple_sentence": 2, "sentence_many": 2,
"many_possible": 2, "possible_bigrams": 2,
}
sentence = ["simple", "sentence", "many", "possible", "bigrams"]
result = self.AnalysisTester(scores, threshold=1)[sentence]
self.assertEqual(result, ["simple_sentence", "many_possible", "bigrams"])
sentence = ["some", "simple", "sentence", "many", "bigrams"]
result = self.AnalysisTester(scores, threshold=1)[sentence]
self.assertEqual(result, ["some", "simple_sentence", "many", "bigrams"])
sentence = ["some", "unrelated", "simple", "words"]
result = self.AnalysisTester(scores, threshold=1)[sentence]
self.assertEqual(result, sentence)
def test_analysis_connector_words(self):
scores = {
"simple_sentence": 2, "sentence_many": 2,
"many_possible": 2, "possible_bigrams": 2,
}
sentence = ["a", "simple", "sentence", "many", "the", "possible", "bigrams"]
result = self.AnalysisTester(scores, threshold=1)[sentence]
self.assertEqual(result, ["a", "simple_sentence", "many", "the", "possible_bigrams"])
sentence = ["simple", "the", "sentence", "and", "many", "possible", "bigrams", "with", "a"]
result = self.AnalysisTester(scores, threshold=1)[sentence]
self.assertEqual(
result,
["simple", "the", "sentence", "and", "many_possible", "bigrams", "with", "a"],
)
def test_analysis_connector_words_in_between(self):
scores = {
"simple_sentence": 2, "sentence_with_many": 2,
"many_possible": 2, "many_of_the_possible": 2, "possible_bigrams": 2,
}
sentence = ["sentence", "with", "many", "possible", "bigrams"]
result = self.AnalysisTester(scores, threshold=1)[sentence]
self.assertEqual(result, ["sentence_with_many", "possible_bigrams"])
sentence = ["a", "simple", "sentence", "with", "many", "of", "the", "possible", "bigrams", "with"]
result = self.AnalysisTester(scores, threshold=1)[sentence]
self.assertEqual(
result, ["a", "simple_sentence", "with", "many_of_the_possible", "bigrams", "with"])
class PhrasesData:
sentences = common_texts + [
['graph', 'minors', 'survey', 'human', 'interface'],
]
connector_words = frozenset()
bigram1 = u'response_time'
bigram2 = u'graph_minors'
bigram3 = u'human_interface'
def gen_sentences(self):
return ((w for w in sentence) for sentence in self.sentences)
class PhrasesCommon(PhrasesData):
"""Tests for both Phrases and FrozenPhrases classes."""
def setUp(self):
self.bigram = Phrases(self.sentences, min_count=1, threshold=1, connector_words=self.connector_words)
self.bigram_default = Phrases(self.sentences, connector_words=self.connector_words)
def test_empty_phrasified_sentences_iterator(self):
bigram_phrases = Phrases(self.sentences)
bigram_phraser = FrozenPhrases(bigram_phrases)
trigram_phrases = Phrases(bigram_phraser[self.sentences])
trigram_phraser = FrozenPhrases(trigram_phrases)
trigrams = trigram_phraser[bigram_phraser[self.sentences]]
fst, snd = list(trigrams), list(trigrams)
self.assertEqual(fst, snd)
self.assertNotEqual(snd, [])
def test_empty_inputs_on_bigram_construction(self):
"""Test that empty inputs don't throw errors and return the expected result."""
# Empty list -> empty list
self.assertEqual(list(self.bigram_default[[]]), [])
# Empty iterator -> empty list
self.assertEqual(list(self.bigram_default[iter(())]), [])
# List of empty list -> list of empty list
self.assertEqual(list(self.bigram_default[[[], []]]), [[], []])
# Iterator of empty list -> list of empty list
self.assertEqual(list(self.bigram_default[iter([[], []])]), [[], []])
# Iterator of empty iterator -> list of empty list
self.assertEqual(list(self.bigram_default[(iter(()) for i in range(2))]), [[], []])
def test_sentence_generation(self):
"""Test basic bigram using a dummy corpus."""
# test that we generate the same amount of sentences as the input
self.assertEqual(
len(self.sentences),
len(list(self.bigram_default[self.sentences])),
)
def test_sentence_generation_with_generator(self):
"""Test basic bigram production when corpus is a generator."""
self.assertEqual(
len(list(self.gen_sentences())),
len(list(self.bigram_default[self.gen_sentences()])),
)
def test_bigram_construction(self):
"""Test Phrases bigram construction."""
# with this setting we should get response_time and graph_minors
bigram1_seen = False
bigram2_seen = False
for sentence in self.bigram[self.sentences]:
if not bigram1_seen and self.bigram1 in sentence:
bigram1_seen = True
if not bigram2_seen and self.bigram2 in sentence:
bigram2_seen = True
if bigram1_seen and bigram2_seen:
break
self.assertTrue(bigram1_seen and bigram2_seen)
# check the same thing, this time using single doc transformation
# last sentence should contain both graph_minors and human_interface
self.assertTrue(self.bigram1 in self.bigram[self.sentences[1]])
self.assertTrue(self.bigram1 in self.bigram[self.sentences[4]])
self.assertTrue(self.bigram2 in self.bigram[self.sentences[-2]])
self.assertTrue(self.bigram2 in self.bigram[self.sentences[-1]])
self.assertTrue(self.bigram3 in self.bigram[self.sentences[-1]])
def test_bigram_construction_from_generator(self):
"""Test Phrases bigram construction building when corpus is a generator."""
bigram1_seen = False
bigram2_seen = False
for s in self.bigram[self.gen_sentences()]:
if not bigram1_seen and self.bigram1 in s:
bigram1_seen = True
if not bigram2_seen and self.bigram2 in s:
bigram2_seen = True
if bigram1_seen and bigram2_seen:
break
self.assertTrue(bigram1_seen and bigram2_seen)
def test_bigram_construction_from_array(self):
"""Test Phrases bigram construction building when corpus is a numpy array."""
bigram1_seen = False
bigram2_seen = False
for s in self.bigram[np.array(self.sentences, dtype=object)]:
if not bigram1_seen and self.bigram1 in s:
bigram1_seen = True
if not bigram2_seen and self.bigram2 in s:
bigram2_seen = True
if bigram1_seen and bigram2_seen:
break
self.assertTrue(bigram1_seen and bigram2_seen)
# Scorer used by test_custom_scorer.
# The function lives at module level (outside the test class) so that it remains picklable,
# because the Phrases tests check picklability of models that carry a custom scorer.
# It assigns a score of 1 to every candidate bigram.
def dumb_scorer(worda_count, wordb_count, bigram_count, len_vocab, min_count, corpus_word_count):
return 1
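# Illustrative aside, not part of the original tests: the default original_scorer follows
# the Mikolov et al. word2vec phrase formula, which the connector-words tests further down
# spell out explicitly:
#     score(a, b) = (count(a_b) - min_count) / (count(a) * count(b)) * len(vocab)
# A minimal sketch with the same signature as dumb_scorer above:
def mikolov_style_scorer(worda_count, wordb_count, bigram_count, len_vocab, min_count, corpus_word_count):
    return (bigram_count - min_count) / worda_count / wordb_count * len_vocab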
class TestPhrasesModel(PhrasesCommon, unittest.TestCase):
def test_export_phrases(self):
"""Test Phrases bigram and trigram export phrases."""
bigram = Phrases(self.sentences, min_count=1, threshold=1, delimiter=' ')
trigram = Phrases(bigram[self.sentences], min_count=1, threshold=1, delimiter=' ')
seen_bigrams = set(bigram.export_phrases().keys())
seen_trigrams = set(trigram.export_phrases().keys())
assert seen_bigrams == set([
'human interface',
'response time',
'graph minors',
'minors survey',
])
assert seen_trigrams == set([
'human interface',
'graph minors survey',
])
def test_find_phrases(self):
"""Test Phrases bigram find phrases."""
bigram = Phrases(self.sentences, min_count=1, threshold=1, delimiter=' ')
seen_bigrams = set(bigram.find_phrases(self.sentences).keys())
assert seen_bigrams == set([
'response time',
'graph minors',
'human interface',
])
def test_multiple_bigrams_single_entry(self):
"""Test a single entry produces multiple bigrams."""
bigram = Phrases(self.sentences, min_count=1, threshold=1, delimiter=' ')
test_sentences = [['graph', 'minors', 'survey', 'human', 'interface']]
seen_bigrams = set(bigram.find_phrases(test_sentences).keys())
assert seen_bigrams == {'graph minors', 'human interface'}
def test_scoring_default(self):
"""Test the default scoring, from the mikolov word2vec paper."""
bigram = Phrases(self.sentences, min_count=1, threshold=1, delimiter=' ')
test_sentences = [['graph', 'minors', 'survey', 'human', 'interface']]
seen_scores = set(round(score, 3) for score in bigram.find_phrases(test_sentences).values())
assert seen_scores == {
5.167, # score for graph minors
3.444 # score for human interface
}
def test__getitem__(self):
"""Test Phrases[sentences] with a single sentence."""
bigram = Phrases(self.sentences, min_count=1, threshold=1)
test_sentences = [['graph', 'minors', 'survey', 'human', 'interface']]
phrased_sentence = next(bigram[test_sentences].__iter__())
assert phrased_sentence == ['graph_minors', 'survey', 'human_interface']
def test_scoring_npmi(self):
"""Test normalized pointwise mutual information scoring."""
bigram = Phrases(self.sentences, min_count=1, threshold=.5, scoring='npmi')
test_sentences = [['graph', 'minors', 'survey', 'human', 'interface']]
seen_scores = set(round(score, 3) for score in bigram.find_phrases(test_sentences).values())
assert seen_scores == {
.882, # score for graph minors
.714 # score for human interface
}
def test_custom_scorer(self):
"""Test using a custom scoring function."""
bigram = Phrases(self.sentences, min_count=1, threshold=.001, scoring=dumb_scorer)
test_sentences = [['graph', 'minors', 'survey', 'human', 'interface', 'system']]
seen_scores = list(bigram.find_phrases(test_sentences).values())
assert all(score == 1 for score in seen_scores)
assert len(seen_scores) == 3 # 'graph minors' and 'survey human' and 'interface system'
def test_bad_parameters(self):
"""Test the phrases module with bad parameters."""
# should fail with something less or equal than 0
self.assertRaises(ValueError, Phrases, self.sentences, min_count=0)
# threshold should be positive
self.assertRaises(ValueError, Phrases, self.sentences, threshold=-1)
def test_pruning(self):
"""Test that max_vocab_size parameter is respected."""
bigram = Phrases(self.sentences, max_vocab_size=5)
self.assertTrue(len(bigram.vocab) <= 5)
# endclass TestPhrasesModel
class TestPhrasesPersistence(PhrasesData, unittest.TestCase):
def test_save_load_custom_scorer(self):
"""Test saving and loading a Phrases object with a custom scorer."""
bigram = Phrases(self.sentences, min_count=1, threshold=.001, scoring=dumb_scorer)
with temporary_file("test.pkl") as fpath:
bigram.save(fpath)
bigram_loaded = Phrases.load(fpath)
test_sentences = [['graph', 'minors', 'survey', 'human', 'interface', 'system']]
seen_scores = list(bigram_loaded.find_phrases(test_sentences).values())
assert all(score == 1 for score in seen_scores)
assert len(seen_scores) == 3 # 'graph minors' and 'survey human' and 'interface system'
def test_save_load(self):
"""Test saving and loading a Phrases object."""
bigram = Phrases(self.sentences, min_count=1, threshold=1)
with temporary_file("test.pkl") as fpath:
bigram.save(fpath)
bigram_loaded = Phrases.load(fpath)
test_sentences = [['graph', 'minors', 'survey', 'human', 'interface', 'system']]
seen_scores = set(round(score, 3) for score in bigram_loaded.find_phrases(test_sentences).values())
assert seen_scores == set([
5.167, # score for graph minors
3.444 # score for human interface
])
def test_save_load_with_connector_words(self):
"""Test saving and loading a Phrases object."""
connector_words = frozenset({'of'})
bigram = Phrases(self.sentences, min_count=1, threshold=1, connector_words=connector_words)
with temporary_file("test.pkl") as fpath:
bigram.save(fpath)
bigram_loaded = Phrases.load(fpath)
assert bigram_loaded.connector_words == connector_words
def test_save_load_string_scoring(self):
"""Test backwards compatibility with a previous version of Phrases with custom scoring."""
bigram_loaded = Phrases.load(datapath("phrases-scoring-str.pkl"))
test_sentences = [['graph', 'minors', 'survey', 'human', 'interface', 'system']]
seen_scores = set(round(score, 3) for score in bigram_loaded.find_phrases(test_sentences).values())
assert seen_scores == set([
5.167, # score for graph minors
3.444 # score for human interface
])
def test_save_load_no_scoring(self):
"""Test backwards compatibility with old versions of Phrases with no scoring parameter."""
bigram_loaded = Phrases.load(datapath("phrases-no-scoring.pkl"))
test_sentences = [['graph', 'minors', 'survey', 'human', 'interface', 'system']]
seen_scores = set(round(score, 3) for score in bigram_loaded.find_phrases(test_sentences).values())
assert seen_scores == set([
5.167, # score for graph minors
3.444 # score for human interface
])
def test_save_load_no_common_terms(self):
"""Ensure backwards compatibility with old versions of Phrases, before connector_words."""
bigram_loaded = Phrases.load(datapath("phrases-no-common-terms.pkl"))
self.assertEqual(bigram_loaded.connector_words, frozenset())
# can make a phraser, cf #1751
phraser = FrozenPhrases(bigram_loaded) # does not raise
phraser[["human", "interface", "survey"]] # does not raise
class TestFrozenPhrasesPersistence(PhrasesData, unittest.TestCase):
def test_save_load_custom_scorer(self):
"""Test saving and loading a FrozenPhrases object with a custom scorer."""
with temporary_file("test.pkl") as fpath:
bigram = FrozenPhrases(
Phrases(self.sentences, min_count=1, threshold=.001, scoring=dumb_scorer))
bigram.save(fpath)
bigram_loaded = FrozenPhrases.load(fpath)
self.assertEqual(bigram_loaded.scoring, dumb_scorer)
def test_save_load(self):
"""Test saving and loading a FrozenPhrases object."""
with temporary_file("test.pkl") as fpath:
bigram = FrozenPhrases(Phrases(self.sentences, min_count=1, threshold=1))
bigram.save(fpath)
bigram_loaded = FrozenPhrases.load(fpath)
self.assertEqual(
bigram_loaded[['graph', 'minors', 'survey', 'human', 'interface', 'system']],
['graph_minors', 'survey', 'human_interface', 'system'])
def test_save_load_with_connector_words(self):
"""Test saving and loading a FrozenPhrases object."""
connector_words = frozenset({'of'})
with temporary_file("test.pkl") as fpath:
bigram = FrozenPhrases(Phrases(self.sentences, min_count=1, threshold=1, connector_words=connector_words))
bigram.save(fpath)
bigram_loaded = FrozenPhrases.load(fpath)
self.assertEqual(bigram_loaded.connector_words, connector_words)
def test_save_load_string_scoring(self):
"""Test saving and loading a FrozenPhrases object with a string scoring parameter.
This should ensure backwards compatibility with the previous version of FrozenPhrases"""
bigram_loaded = FrozenPhrases.load(datapath("phraser-scoring-str.pkl"))
        # we don't do much with the scoring here, just verify it's the expected one
self.assertEqual(bigram_loaded.scoring, original_scorer)
def test_save_load_no_scoring(self):
"""Test saving and loading a FrozenPhrases object with no scoring parameter.
This should ensure backwards compatibility with old versions of FrozenPhrases"""
bigram_loaded = FrozenPhrases.load(datapath("phraser-no-scoring.pkl"))
        # we don't do much with the scoring here, just verify it's the expected one
self.assertEqual(bigram_loaded.scoring, original_scorer)
def test_save_load_no_common_terms(self):
"""Ensure backwards compatibility with old versions of FrozenPhrases, before connector_words."""
bigram_loaded = FrozenPhrases.load(datapath("phraser-no-common-terms.pkl"))
self.assertEqual(bigram_loaded.connector_words, frozenset())
class TestFrozenPhrasesModel(PhrasesCommon, unittest.TestCase):
"""Test FrozenPhrases models."""
def setUp(self):
"""Set up FrozenPhrases models for the tests."""
bigram_phrases = Phrases(
self.sentences, min_count=1, threshold=1, connector_words=self.connector_words)
self.bigram = FrozenPhrases(bigram_phrases)
bigram_default_phrases = Phrases(self.sentences, connector_words=self.connector_words)
self.bigram_default = FrozenPhrases(bigram_default_phrases)
class CommonTermsPhrasesData:
"""This mixin permits to reuse tests with the connector_words option."""
sentences = [
['human', 'interface', 'with', 'computer'],
['survey', 'of', 'user', 'computer', 'system', 'lack', 'of', 'interest'],
['eps', 'user', 'interface', 'system'],
['system', 'and', 'human', 'system', 'eps'],
['user', 'lack', 'of', 'interest'],
['trees'],
['graph', 'of', 'trees'],
['data', 'and', 'graph', 'of', 'trees'],
['data', 'and', 'graph', 'survey'],
['data', 'and', 'graph', 'survey', 'for', 'human', 'interface'] # test bigrams within same sentence
]
connector_words = ['of', 'and', 'for']
bigram1 = u'lack_of_interest'
bigram2 = u'data_and_graph'
bigram3 = u'human_interface'
expression1 = u'lack of interest'
expression2 = u'data and graph'
expression3 = u'human interface'
def gen_sentences(self):
return ((w for w in sentence) for sentence in self.sentences)
class TestPhrasesModelCommonTerms(CommonTermsPhrasesData, TestPhrasesModel):
"""Test Phrases models with connector words."""
def test_multiple_bigrams_single_entry(self):
"""Test a single entry produces multiple bigrams."""
bigram = Phrases(self.sentences, min_count=1, threshold=1, connector_words=self.connector_words, delimiter=' ')
test_sentences = [['data', 'and', 'graph', 'survey', 'for', 'human', 'interface']]
seen_bigrams = set(bigram.find_phrases(test_sentences).keys())
assert seen_bigrams == set([
'data and graph',
'human interface',
])
def test_find_phrases(self):
"""Test Phrases bigram export phrases."""
bigram = Phrases(self.sentences, min_count=1, threshold=1, connector_words=self.connector_words, delimiter=' ')
seen_bigrams = set(bigram.find_phrases(self.sentences).keys())
assert seen_bigrams == set([
'human interface',
'graph of trees',
'data and graph',
'lack of interest',
])
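# Note on connector words (to my understanding of the Phrases API): words listed in
# connector_words ('of', 'and', 'for' here) may occur *inside* a detected phrase,
# hence 'lack of interest', 'data and graph' and 'graph of trees' above, but a
# phrase never begins or ends with a connector word.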
def test_export_phrases(self):
"""Test Phrases bigram export phrases."""
bigram = Phrases(self.sentences, min_count=1, threshold=1, delimiter=' ')
seen_bigrams = set(bigram.export_phrases().keys())
assert seen_bigrams == set([
'and graph',
'data and',
'graph of',
'graph survey',
'human interface',
'lack of',
'of interest',
'of trees',
])
def test_scoring_default(self):
""" test the default scoring, from the mikolov word2vec paper """
bigram = Phrases(self.sentences, min_count=1, threshold=1, connector_words=self.connector_words)
test_sentences = [['data', 'and', 'graph', 'survey', 'for', 'human', 'interface']]
seen_scores = set(round(score, 3) for score in bigram.find_phrases(test_sentences).values())
min_count = float(bigram.min_count)
len_vocab = float(len(bigram.vocab))
graph = float(bigram.vocab["graph"])
data = float(bigram.vocab["data"])
data_and_graph = float(bigram.vocab["data_and_graph"])
human = float(bigram.vocab["human"])
interface = float(bigram.vocab["interface"])
human_interface = float(bigram.vocab["human_interface"])
assert seen_scores == set([
# score for data and graph
round((data_and_graph - min_count) / data / graph * len_vocab, 3),
# score for human interface
round((human_interface - min_count) / human / interface * len_vocab, 3),
])
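# For reference: the default scorer (original_scorer, from Mikolov et al.'s word2vec
# paper) computes
#     score(a, b) = (count(a_b) - min_count) / (count(a) * count(b)) * len(vocab)
# which is exactly the expression reproduced in the assertion above.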
def test_scoring_npmi(self):
"""Test normalized pointwise mutual information scoring."""
bigram = Phrases(
self.sentences, min_count=1, threshold=.5,
scoring='npmi', connector_words=self.connector_words,
)
test_sentences = [['data', 'and', 'graph', 'survey', 'for', 'human', 'interface']]
seen_scores = set(round(score, 3) for score in bigram.find_phrases(test_sentences).values())
assert seen_scores == set([
.74, # score for data and graph
.894 # score for human interface
])
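# For reference: the 'npmi' scorer computes ln(P(ab) / (P(a) * P(b))) / -ln(P(ab)),
# with probabilities estimated from corpus counts. NPMI is bounded in [-1, 1],
# which is why a threshold of .5 (rather than the count-based default) is used here.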
def test_custom_scorer(self):
"""Test using a custom scoring function."""
bigram = Phrases(
self.sentences, min_count=1, threshold=.001,
scoring=dumb_scorer, connector_words=self.connector_words,
)
test_sentences = [['data', 'and', 'graph', 'survey', 'for', 'human', 'interface']]
seen_scores = list(bigram.find_phrases(test_sentences).values())
assert all(seen_scores) # all scores 1
assert len(seen_scores) == 2 # 'data and graph' 'survey for human'
def test__getitem__(self):
"""Test Phrases[sentences] with a single sentence."""
bigram = Phrases(self.sentences, min_count=1, threshold=1, connector_words=self.connector_words)
test_sentences = [['data', 'and', 'graph', 'survey', 'for', 'human', 'interface']]
phrased_sentence = next(bigram[test_sentences].__iter__())
assert phrased_sentence == ['data_and_graph', 'survey', 'for', 'human_interface']
class TestFrozenPhrasesModelCompatibility(unittest.TestCase):
def test_compatibility(self):
phrases = Phrases.load(datapath("phrases-3.6.0.model"))
phraser = FrozenPhrases.load(datapath("phraser-3.6.0.model"))
test_sentences = ['trees', 'graph', 'minors']
self.assertEqual(phrases[test_sentences], ['trees', 'graph_minors'])
self.assertEqual(phraser[test_sentences], ['trees', 'graph_minors'])
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 25,196 | Python | .py | 468 | 44.685897 | 119 | 0.638542 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,047 | basetmtests.py | piskvorky_gensim/gensim/test/basetmtests.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
import numpy as np
class TestBaseTopicModel:
def test_print_topic(self):
topics = self.model.show_topics(formatted=True)
for topic_no, topic in topics:
self.assertTrue(isinstance(topic_no, int))
self.assertTrue(isinstance(topic, str))
def test_print_topics(self):
topics = self.model.print_topics()
for topic_no, topic in topics:
self.assertTrue(isinstance(topic_no, int))
self.assertTrue(isinstance(topic, str))
def test_show_topic(self):
topic = self.model.show_topic(1)
for k, v in topic:
self.assertTrue(isinstance(k, str))
self.assertTrue(isinstance(v, (np.floating, float)))
def test_show_topics(self):
topics = self.model.show_topics(formatted=False)
for topic_no, topic in topics:
self.assertTrue(isinstance(topic_no, int))
self.assertTrue(isinstance(topic, list))
for k, v in topic:
self.assertTrue(isinstance(k, str))
self.assertTrue(isinstance(v, (np.floating, float)))
def test_get_topics(self):
topics = self.model.get_topics()
vocab_size = len(self.model.id2word)
for topic in topics:
self.assertTrue(isinstance(topic, np.ndarray))
# Note: started moving to np.float32 as default
# self.assertEqual(topic.dtype, np.float64)
self.assertEqual(vocab_size, topic.shape[0])
self.assertAlmostEqual(np.sum(topic), 1.0, 5)
| 1,837 | Python | .py | 42 | 35.142857 | 95 | 0.644619 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,048 | test_utils.py | piskvorky_gensim/gensim/test/test_utils.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking various utils functions.
"""
import logging
import unittest
import numpy as np
from gensim import utils
from gensim.test.utils import datapath, get_tmpfile
class TestIsCorpus(unittest.TestCase):
def test_None(self):
# test None
result = utils.is_corpus(None)
expected = (False, None)
self.assertEqual(expected, result)
def test_simple_lists_of_tuples(self):
# test simple lists of (id, value) tuples
# one document, one word
potentialCorpus = [[(0, 4.)]]
result = utils.is_corpus(potentialCorpus)
expected = (True, potentialCorpus)
self.assertEqual(expected, result)
# one document, several words
potentialCorpus = [[(0, 4.), (1, 2.)]]
result = utils.is_corpus(potentialCorpus)
expected = (True, potentialCorpus)
self.assertEqual(expected, result)
potentialCorpus = [[(0, 4.), (1, 2.), (2, 5.), (3, 8.)]]
result = utils.is_corpus(potentialCorpus)
expected = (True, potentialCorpus)
self.assertEqual(expected, result)
# several documents, one word
potentialCorpus = [[(0, 4.)], [(1, 2.)]]
result = utils.is_corpus(potentialCorpus)
expected = (True, potentialCorpus)
self.assertEqual(expected, result)
potentialCorpus = [[(0, 4.)], [(1, 2.)], [(2, 5.)], [(3, 8.)]]
result = utils.is_corpus(potentialCorpus)
expected = (True, potentialCorpus)
self.assertEqual(expected, result)
def test_int_tuples(self):
potentialCorpus = [[(0, 4)]]
result = utils.is_corpus(potentialCorpus)
expected = (True, potentialCorpus)
self.assertEqual(expected, result)
def test_invalid_formats(self):
# test invalid formats
# these are not corpora, because they do not consist of 2-tuples of
# the form (int, float).
potentials = list()
potentials.append(["human"])
potentials.append("human")
potentials.append(["human", "star"])
potentials.append([1, 2, 3, 4, 5, 5])
potentials.append([[(0, 'string')]])
for noCorpus in potentials:
result = utils.is_corpus(noCorpus)
expected = (False, noCorpus)
self.assertEqual(expected, result)
class TestUtils(unittest.TestCase):
def test_decode_entities(self):
# create a string that fails to decode with unichr on narrow python builds
body = u'It’s the Year of the Horse. YES VIN DIESEL 🙌 💯'
expected = u'It\x92s the Year of the Horse. YES VIN DIESEL \U0001f64c \U0001f4af'
self.assertEqual(utils.decode_htmlentities(body), expected)
def test_open_file_existent_file(self):
number_of_lines_in_file = 30
with utils.open_file(datapath('testcorpus.mm')) as infile:
self.assertEqual(sum(1 for _ in infile), number_of_lines_in_file)
def test_open_file_non_existent_file(self):
with self.assertRaises(Exception):
with utils.open_file('non_existent_file.txt'):
pass
def test_open_file_existent_file_object(self):
number_of_lines_in_file = 30
file_obj = open(datapath('testcorpus.mm'))
with utils.open_file(file_obj) as infile:
self.assertEqual(sum(1 for _ in infile), number_of_lines_in_file)
def test_open_file_non_existent_file_object(self):
file_obj = None
with self.assertRaises(Exception):
with utils.open_file(file_obj):
pass
class TestSampleDict(unittest.TestCase):
def test_sample_dict(self):
d = {1: 2, 2: 3, 3: 4, 4: 5}
expected_dict = [(1, 2), (2, 3)]
expected_dict_random = [(k, v) for k, v in d.items()]
sampled_dict = utils.sample_dict(d, 2, False)
self.assertEqual(sampled_dict, expected_dict)
sampled_dict_random = utils.sample_dict(d, 2)
if sampled_dict_random in expected_dict_random:
self.assertTrue(True)
class TestTrimVocabByFreq(unittest.TestCase):
def test_trim_vocab(self):
d = {"word1": 5, "word2": 1, "word3": 2}
expected_dict = {"word1": 5, "word3": 2}
utils.trim_vocab_by_freq(d, topk=2)
self.assertEqual(d, expected_dict)
d = {"word1": 5, "word2": 2, "word3": 2, "word4": 1}
expected_dict = {"word1": 5, "word2": 2, "word3": 2}
utils.trim_vocab_by_freq(d, topk=2)
self.assertEqual(d, expected_dict)
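# Note: the second case keeps three entries even though topk=2, because
# trim_vocab_by_freq retains ties at the cut-off frequency ('word2' and
# 'word3' both have count 2).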
class TestMergeDicts(unittest.TestCase):
def test_merge_dicts(self):
d1 = {"word1": 5, "word2": 1, "word3": 2}
d2 = {"word1": 2, "word3": 3, "word4": 10}
res_dict = utils.merge_counts(d1, d2)
expected_dict = {"word1": 7, "word2": 1, "word3": 5, "word4": 10}
self.assertEqual(res_dict, expected_dict)
class TestWindowing(unittest.TestCase):
arr10_5 = np.array([
[0, 1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7],
[4, 5, 6, 7, 8],
[5, 6, 7, 8, 9]
])
def _assert_arrays_equal(self, expected, actual):
self.assertEqual(expected.shape, actual.shape)
self.assertTrue((actual == expected).all())
def test_strided_windows1(self):
out = utils.strided_windows(range(5), 2)
expected = np.array([
[0, 1],
[1, 2],
[2, 3],
[3, 4]
])
self._assert_arrays_equal(expected, out)
def test_strided_windows2(self):
input_arr = np.arange(10)
out = utils.strided_windows(input_arr, 5)
expected = self.arr10_5.copy()
self._assert_arrays_equal(expected, out)
out[0, 0] = 10
self.assertEqual(10, input_arr[0], "should make view rather than copy")
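# strided_windows returns a strided *view* over the input array rather than a copy,
# which is why mutating out[0, 0] above is visible through input_arr.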
def test_strided_windows_window_size_exceeds_size(self):
input_arr = np.array(['this', 'is', 'test'], dtype='object')
out = utils.strided_windows(input_arr, 4)
expected = np.ndarray((0, 0))
self._assert_arrays_equal(expected, out)
def test_strided_windows_window_size_equals_size(self):
input_arr = np.array(['this', 'is', 'test'], dtype='object')
out = utils.strided_windows(input_arr, 3)
expected = np.array([input_arr.copy()])
self._assert_arrays_equal(expected, out)
def test_iter_windows_include_below_window_size(self):
texts = [['this', 'is', 'a'], ['test', 'document']]
out = utils.iter_windows(texts, 3, ignore_below_size=False)
windows = [list(w) for w in out]
self.assertEqual(texts, windows)
out = utils.iter_windows(texts, 3)
windows = [list(w) for w in out]
self.assertEqual([texts[0]], windows)
def test_iter_windows_list_texts(self):
texts = [['this', 'is', 'a'], ['test', 'document']]
windows = list(utils.iter_windows(texts, 2))
list_windows = [list(iterable) for iterable in windows]
expected = [['this', 'is'], ['is', 'a'], ['test', 'document']]
self.assertListEqual(list_windows, expected)
def test_iter_windows_uses_views(self):
texts = [np.array(['this', 'is', 'a'], dtype='object'), ['test', 'document']]
windows = list(utils.iter_windows(texts, 2))
list_windows = [list(iterable) for iterable in windows]
expected = [['this', 'is'], ['is', 'a'], ['test', 'document']]
self.assertListEqual(list_windows, expected)
windows[0][0] = 'modified'
self.assertEqual('modified', texts[0][0])
def test_iter_windows_with_copy(self):
texts = [
np.array(['this', 'is', 'a'], dtype='object'),
np.array(['test', 'document'], dtype='object')
]
windows = list(utils.iter_windows(texts, 2, copy=True))
windows[0][0] = 'modified'
self.assertEqual('this', texts[0][0])
windows[2][0] = 'modified'
self.assertEqual('test', texts[1][0])
def test_flatten_nested(self):
nested_list = [[[1, 2, 3], [4, 5]], 6]
expected = [1, 2, 3, 4, 5, 6]
self.assertEqual(utils.flatten(nested_list), expected)
def test_flatten_not_nested(self):
not_nested = [1, 2, 3, 4, 5, 6]
expected = [1, 2, 3, 4, 5, 6]
self.assertEqual(utils.flatten(not_nested), expected)
class TestSaveAsLineSentence(unittest.TestCase):
def test_save_as_line_sentence_en(self):
corpus_file = get_tmpfile('gensim_utils.tst')
ref_sentences = [
line.split()
for line in utils.any2unicode('hello world\nhow are you').split('\n')
]
utils.save_as_line_sentence(ref_sentences, corpus_file)
with utils.open(corpus_file, 'rb', encoding='utf8') as fin:
sentences = [line.strip().split() for line in fin.read().strip().split('\n')]
self.assertEqual(sentences, ref_sentences)
def test_save_as_line_sentence_ru(self):
corpus_file = get_tmpfile('gensim_utils.tst')
ref_sentences = [
line.split()
for line in utils.any2unicode('привет мир\nкак ты поживаешь').split('\n')
]
utils.save_as_line_sentence(ref_sentences, corpus_file)
with utils.open(corpus_file, 'rb', encoding='utf8') as fin:
sentences = [line.strip().split() for line in fin.read().strip().split('\n')]
self.assertEqual(sentences, ref_sentences)
if __name__ == '__main__':
logging.root.setLevel(logging.WARNING)
unittest.main()
| 9,703 | Python | .py | 215 | 36.576744 | 95 | 0.604654 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,049 | test_miislita.py | piskvorky_gensim/gensim/test/test_miislita.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
This module replicates the miislita vector spaces from
"A Linear Algebra Approach to the Vector Space Model -- A Fast Track Tutorial"
by Dr. E. Garcia, admin@miislita.com
See http://www.miislita.com for further details.
"""
from __future__ import division # always use floats
from __future__ import with_statement
import logging
import os
import unittest
from gensim import utils, corpora, models, similarities
from gensim.test.utils import datapath, get_tmpfile
logger = logging.getLogger(__name__)
class CorpusMiislita(corpora.TextCorpus):
stoplist = set('for a of the and to in on'.split())
def get_texts(self):
"""
Parse documents from the .cor file provided in the constructor. Lowercase
each document and ignore some stopwords.
.cor format: one document per line, words separated by whitespace.
"""
for doc in self.getstream():
yield [word for word in utils.to_unicode(doc).lower().split()
if word not in CorpusMiislita.stoplist]
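# Hypothetical example (not a line from the actual corpus): a .cor line such as
# "The Art of Search Engines" would be tokenized to ['art', 'search', 'engines'],
# since 'the' and 'of' are in the stoplist above.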
def __len__(self):
"""Define this so we can use `len(corpus)`"""
if 'length' not in self.__dict__:
logger.info("caching corpus size (calculating number of documents)")
self.length = sum(1 for _ in self.get_texts())
return self.length
class TestMiislita(unittest.TestCase):
def test_textcorpus(self):
"""Make sure TextCorpus can be serialized to disk. """
# construct corpus from file
miislita = CorpusMiislita(datapath('head500.noblanks.cor.bz2'))
# make sure serializing works
ftmp = get_tmpfile('test_textcorpus.mm')
corpora.MmCorpus.save_corpus(ftmp, miislita)
self.assertTrue(os.path.exists(ftmp))
# make sure deserializing gives the same result
miislita2 = corpora.MmCorpus(ftmp)
self.assertEqual(list(miislita), list(miislita2))
def test_save_load_ability(self):
"""
Make sure we can save and load (un/pickle) TextCorpus objects (as long
as the underlying input isn't a file-like object; we cannot pickle those).
"""
# construct corpus from file
corpusname = datapath('miIslita.cor')
miislita = CorpusMiislita(corpusname)
# pickle to disk
tmpf = get_tmpfile('tc_test.cpickle')
miislita.save(tmpf)
miislita2 = CorpusMiislita.load(tmpf)
self.assertEqual(len(miislita), len(miislita2))
self.assertEqual(miislita.dictionary.token2id, miislita2.dictionary.token2id)
def test_miislita_high_level(self):
# construct corpus from file
miislita = CorpusMiislita(datapath('miIslita.cor'))
# initialize tfidf transformation and similarity index
tfidf = models.TfidfModel(miislita, miislita.dictionary, normalize=False)
index = similarities.SparseMatrixSimilarity(tfidf[miislita], num_features=len(miislita.dictionary))
# compare to query
query = 'latent semantic indexing'
vec_bow = miislita.dictionary.doc2bow(query.lower().split())
vec_tfidf = tfidf[vec_bow]
# perform a similarity query against the corpus
sims_tfidf = index[vec_tfidf]
# for the expected results see the article
expected = [0.0, 0.2560, 0.7022, 0.1524, 0.3334]
for i, value in enumerate(expected):
self.assertAlmostEqual(sims_tfidf[i], value, 2)
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
unittest.main()
| 3,675 | Python | .py | 80 | 38.7375 | 107 | 0.680684 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,050 | test_big.py | piskvorky_gensim/gensim/test/test_big.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2014 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking processing/storing large inputs.
"""
import logging
import unittest
import os
import numpy as np
import gensim
from gensim.test.utils import get_tmpfile
class BigCorpus:
"""A corpus of a large number of docs & large vocab"""
def __init__(self, words_only=False, num_terms=200000, num_docs=1000000, doc_len=100):
self.dictionary = gensim.utils.FakeDict(num_terms)
self.words_only = words_only
self.num_docs = num_docs
self.doc_len = doc_len
def __iter__(self):
for _ in range(self.num_docs):
doc_len = np.random.poisson(self.doc_len)
ids = np.random.randint(0, len(self.dictionary), doc_len)
if self.words_only:
yield [str(idx) for idx in ids]
else:
weights = np.random.poisson(3, doc_len)
yield sorted(zip(ids, weights))
if os.environ.get('GENSIM_BIG', False):
class TestLargeData(unittest.TestCase):
"""Try common operations, using large models. You'll need ~8GB RAM to run these tests"""
def test_word2vec(self):
corpus = BigCorpus(words_only=True, num_docs=100000, num_terms=3000000, doc_len=200)
tmpf = get_tmpfile('gensim_big.tst')
model = gensim.models.Word2Vec(corpus, vector_size=300, workers=4)
model.save(tmpf, ignore=['syn1'])
del model
gensim.models.Word2Vec.load(tmpf)
def test_lsi_model(self):
corpus = BigCorpus(num_docs=50000)
tmpf = get_tmpfile('gensim_big.tst')
model = gensim.models.LsiModel(corpus, num_topics=500, id2word=corpus.dictionary)
model.save(tmpf)
del model
gensim.models.LsiModel.load(tmpf)
def test_lda_model(self):
corpus = BigCorpus(num_docs=5000)
tmpf = get_tmpfile('gensim_big.tst')
model = gensim.models.LdaModel(corpus, num_topics=500, id2word=corpus.dictionary)
model.save(tmpf)
del model
gensim.models.LdaModel.load(tmpf)
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 2,460 | Python | .py | 57 | 34.719298 | 96 | 0.633012 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,051 | test_translation_matrix.py | piskvorky_gensim/gensim/test/test_translation_matrix.py |
#!/usr/bin/env python
# encoding: utf-8
from collections import namedtuple
import unittest
import logging
import numpy as np
import pytest
from scipy.spatial.distance import cosine
from gensim.models.doc2vec import Doc2Vec
from gensim import utils
from gensim.models import translation_matrix
from gensim.models import KeyedVectors
from gensim.test.utils import datapath, get_tmpfile
class TestTranslationMatrix(unittest.TestCase):
def setUp(self):
self.source_word_vec_file = datapath("EN.1-10.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt")
self.target_word_vec_file = datapath("IT.1-10.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt")
self.word_pairs = [
("one", "uno"), ("two", "due"), ("three", "tre"),
("four", "quattro"), ("five", "cinque"), ("seven", "sette"), ("eight", "otto"),
("dog", "cane"), ("pig", "maiale"), ("fish", "cavallo"), ("birds", "uccelli"),
("apple", "mela"), ("orange", "arancione"), ("grape", "acino"), ("banana", "banana"),
]
self.test_word_pairs = [("ten", "dieci"), ("cat", "gatto")]
self.source_word_vec = KeyedVectors.load_word2vec_format(self.source_word_vec_file, binary=False)
self.target_word_vec = KeyedVectors.load_word2vec_format(self.target_word_vec_file, binary=False)
def test_translation_matrix(self):
model = translation_matrix.TranslationMatrix(self.source_word_vec, self.target_word_vec, self.word_pairs)
model.train(self.word_pairs)
self.assertEqual(model.translation_matrix.shape, (300, 300))
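# The (300, 300) shape corresponds to a linear map between the 300-dimensional
# EN (source) and IT (target) embedding spaces. To my understanding, TranslationMatrix
# fits this map by least squares over the seed word pairs, following Mikolov et al.,
# "Exploiting Similarities among Languages for Machine Translation".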
def test_persistence(self):
"""Test storing/loading the entire model."""
tmpf = get_tmpfile('transmat-en-it.pkl')
model = translation_matrix.TranslationMatrix(self.source_word_vec, self.target_word_vec, self.word_pairs)
model.train(self.word_pairs)
model.save(tmpf)
loaded_model = translation_matrix.TranslationMatrix.load(tmpf)
self.assertTrue(np.allclose(model.translation_matrix, loaded_model.translation_matrix))
def test_translate_nn(self):
# Test the nearest neighbor retrieval method
model = translation_matrix.TranslationMatrix(self.source_word_vec, self.target_word_vec, self.word_pairs)
model.train(self.word_pairs)
test_source_word, test_target_word = zip(*self.test_word_pairs)
translated_words = model.translate(
test_source_word, topn=5, source_lang_vec=self.source_word_vec, target_lang_vec=self.target_word_vec,
)
for idx, item in enumerate(self.test_word_pairs):
self.assertTrue(item[1] in translated_words[item[0]])
@pytest.mark.xfail(
True,
reason='flaky test, can be related to <https://github.com/RaRe-Technologies/gensim/issues/2977>'
)
def test_translate_gc(self):
# Test globally corrected neighbour retrieval method
model = translation_matrix.TranslationMatrix(self.source_word_vec, self.target_word_vec, self.word_pairs)
model.train(self.word_pairs)
test_source_word, test_target_word = zip(*self.test_word_pairs)
translated_words = model.translate(
test_source_word, topn=5, gc=1, sample_num=3,
source_lang_vec=self.source_word_vec, target_lang_vec=self.target_word_vec
)
for idx, item in enumerate(self.test_word_pairs):
self.assertTrue(item[1] in translated_words[item[0]])
def read_sentiment_docs(filename):
sentiment_document = namedtuple('SentimentDocument', 'words tags')
alldocs = [] # will hold all docs in original order
with utils.open(filename, mode='rb', encoding='utf-8') as alldata:
for line_no, line in enumerate(alldata):
tokens = utils.to_unicode(line).split()
words = tokens
tags = str(line_no)
alldocs.append(sentiment_document(words, tags))
return alldocs
class TestBackMappingTranslationMatrix(unittest.TestCase):
def setUp(self):
filename = datapath("alldata-id-10.txt")
train_docs = read_sentiment_docs(filename)
self.train_docs = train_docs
self.source_doc_vec = Doc2Vec(documents=train_docs[:5], vector_size=8, epochs=50, seed=1)
self.target_doc_vec = Doc2Vec(documents=train_docs, vector_size=8, epochs=50, seed=2)
def test_translation_matrix(self):
model = translation_matrix.BackMappingTranslationMatrix(
self.source_doc_vec, self.target_doc_vec, self.train_docs[:5],
)
transmat = model.train(self.train_docs[:5])
self.assertEqual(transmat.shape, (8, 8))
@unittest.skip(
"flaky test likely to be discarded when <https://github.com/RaRe-Technologies/gensim/issues/2977> "
"is addressed"
)
def test_infer_vector(self):
"""Test that translation gives similar results to traditional inference.
This may not be completely sensible/salient with such tiny data, but
replaces what seemed to me to be an ever-more-nonsensical test.
See <https://github.com/RaRe-Technologies/gensim/issues/2977> for discussion
of whether the class this supposedly tested even survives when the
TranslationMatrix functionality is better documented.
"""
model = translation_matrix.BackMappingTranslationMatrix(
self.source_doc_vec, self.target_doc_vec, self.train_docs[:5],
)
model.train(self.train_docs[:5])
backmapped_vec = model.infer_vector(self.target_doc_vec.dv[self.train_docs[5].tags[0]])
self.assertEqual(backmapped_vec.shape, (8, ))
d2v_inferred_vector = self.source_doc_vec.infer_vector(self.train_docs[5].words)
distance = cosine(backmapped_vec, d2v_inferred_vector)
self.assertLessEqual(distance, 0.1)
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 5,980 | Python | .py | 110 | 46.436364 | 113 | 0.680822 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,052 | test_poincare.py | piskvorky_gensim/gensim/test/test_poincare.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Author: Jayant Jain <jayantjain1992@gmail.com>
# Copyright (C) 2017 Radim Rehurek <me@radimrehurek.com>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking the poincare module from the models package.
"""
import logging
import os
import tempfile
import unittest
from unittest.mock import Mock
import numpy as np
try:
import autograd # noqa:F401
autograd_installed = True
except ImportError:
autograd_installed = False
from gensim.models.poincare import PoincareRelations, PoincareModel, PoincareKeyedVectors
from gensim.test.utils import datapath
logger = logging.getLogger(__name__)
def testfile():
# temporary data will be stored to this file
return os.path.join(tempfile.gettempdir(), 'gensim_word2vec.tst')
class TestPoincareData(unittest.TestCase):
def test_encoding_handling(self):
"""Tests whether utf8 and non-utf8 data loaded correctly."""
non_utf8_file = datapath('poincare_cp852.tsv')
relations = [relation for relation in PoincareRelations(non_utf8_file, encoding='cp852')]
self.assertEqual(len(relations), 2)
self.assertEqual(relations[0], (u'tímto', u'budeš'))
utf8_file = datapath('poincare_utf8.tsv')
relations = [relation for relation in PoincareRelations(utf8_file)]
self.assertEqual(len(relations), 2)
self.assertEqual(relations[0], (u'tímto', u'budeš'))
class TestPoincareModel(unittest.TestCase):
def setUp(self):
self.data = PoincareRelations(datapath('poincare_hypernyms.tsv'))
self.data_large = PoincareRelations(datapath('poincare_hypernyms_large.tsv'))
def models_equal(self, model_1, model_2):
self.assertEqual(len(model_1.kv), len(model_2.kv))
self.assertEqual(set(model_1.kv.index_to_key), set(model_2.kv.index_to_key))
self.assertTrue(np.allclose(model_1.kv.vectors, model_2.kv.vectors))
def test_data_counts(self):
"""Tests whether data has been loaded correctly and completely."""
model = PoincareModel(self.data)
self.assertEqual(len(model.all_relations), 5)
self.assertEqual(len(model.node_relations[model.kv.get_index('kangaroo.n.01')]), 3)
self.assertEqual(len(model.kv), 7)
self.assertTrue('mammal.n.01' not in model.node_relations)
def test_data_counts_with_bytes(self):
"""Tests whether input bytes data is loaded correctly and completely."""
model = PoincareModel([(b'\x80\x01c', b'\x50\x71a'), (b'node.1', b'node.2')])
self.assertEqual(len(model.all_relations), 2)
self.assertEqual(len(model.node_relations[model.kv.get_index(b'\x80\x01c')]), 1)
self.assertEqual(len(model.kv), 4)
self.assertTrue(b'\x50\x71a' not in model.node_relations)
def test_persistence(self):
"""Tests whether the model is saved and loaded correctly."""
model = PoincareModel(self.data, burn_in=0, negative=3)
model.train(epochs=1)
model.save(testfile())
loaded = PoincareModel.load(testfile())
self.models_equal(model, loaded)
def test_persistence_separate_file(self):
"""Tests whether the model is saved and loaded correctly when the arrays are stored separately."""
model = PoincareModel(self.data, burn_in=0, negative=3)
model.train(epochs=1)
model.save(testfile(), sep_limit=1)
loaded = PoincareModel.load(testfile())
self.models_equal(model, loaded)
def test_online_learning(self):
"""Tests whether additional input data is loaded correctly and completely."""
model = PoincareModel(self.data, burn_in=0, negative=3)
self.assertEqual(len(model.kv), 7)
self.assertEqual(model.kv.get_vecattr('kangaroo.n.01', 'count'), 3)
self.assertEqual(model.kv.get_vecattr('cat.n.01', 'count'), 1)
model.build_vocab([('kangaroo.n.01', 'cat.n.01')], update=True) # update vocab
self.assertEqual(model.kv.get_vecattr('kangaroo.n.01', 'count'), 4)
self.assertEqual(model.kv.get_vecattr('cat.n.01', 'count'), 2)
def test_train_after_load(self):
"""Tests whether the model can be trained correctly after loading from disk."""
model = PoincareModel(self.data, burn_in=0, negative=3)
model.train(epochs=1)
model.save(testfile())
loaded = PoincareModel.load(testfile())
model.train(epochs=1)
loaded.train(epochs=1)
self.models_equal(model, loaded)
def test_persistence_old_model(self):
"""Tests whether model from older gensim version is loaded correctly."""
loaded = PoincareModel.load(datapath('poincare_test_3.4.0'))
self.assertEqual(loaded.kv.vectors.shape, (239, 2))
self.assertEqual(len(loaded.kv), 239)
self.assertEqual(loaded.size, 2)
self.assertEqual(len(loaded.all_relations), 200)
def test_train_old_model_after_load(self):
"""Tests whether loaded model from older gensim version can be trained correctly."""
loaded = PoincareModel.load(datapath('poincare_test_3.4.0'))
old_vectors = np.copy(loaded.kv.vectors)
loaded.train(epochs=2)
self.assertFalse(np.allclose(old_vectors, loaded.kv.vectors))
def test_invalid_data_raises_error(self):
"""Tests that error is raised on invalid input data."""
with self.assertRaises(ValueError):
PoincareModel([("a", "b", "c")])
with self.assertRaises(ValueError):
PoincareModel(["a", "b", "c"])
with self.assertRaises(ValueError):
PoincareModel("ab")
def test_vector_shape(self):
"""Tests whether vectors are initialized with the correct size."""
model = PoincareModel(self.data, size=20)
self.assertEqual(model.kv.vectors.shape, (7, 20))
def test_vector_dtype(self):
"""Tests whether vectors have the correct dtype before and after training."""
model = PoincareModel(self.data_large, dtype=np.float32, burn_in=0, negative=3)
self.assertEqual(model.kv.vectors.dtype, np.float32)
model.train(epochs=1)
self.assertEqual(model.kv.vectors.dtype, np.float32)
def test_training(self):
"""Tests that vectors are different before and after training."""
model = PoincareModel(self.data_large, burn_in=0, negative=3)
old_vectors = np.copy(model.kv.vectors)
model.train(epochs=2)
self.assertFalse(np.allclose(old_vectors, model.kv.vectors))
def test_training_multiple(self):
"""Tests that calling train multiple times results in different vectors."""
model = PoincareModel(self.data_large, burn_in=0, negative=3)
model.train(epochs=2)
old_vectors = np.copy(model.kv.vectors)
model.train(epochs=1)
self.assertFalse(np.allclose(old_vectors, model.kv.vectors))
old_vectors = np.copy(model.kv.vectors)
model.train(epochs=0)
self.assertTrue(np.allclose(old_vectors, model.kv.vectors))
def test_gradients_check(self):
"""Tests that the model is trained successfully with gradients check enabled."""
model = PoincareModel(self.data, negative=3)
try:
model.train(epochs=1, batch_size=1, check_gradients_every=1)
except Exception as e:
self.fail('Exception %s raised unexpectedly while training with gradient checking' % repr(e))
@unittest.skipIf(not autograd_installed, 'autograd needs to be installed for this test')
def test_wrong_gradients_raises_assertion(self):
"""Tests that discrepancy in gradients raises an error."""
model = PoincareModel(self.data, negative=3)
model._loss_grad = Mock(return_value=np.zeros((2 + model.negative, model.size)))
with self.assertRaises(AssertionError):
model.train(epochs=1, batch_size=1, check_gradients_every=1)
def test_reproducible(self):
"""Tests that vectors are same for two independent models trained with the same seed."""
model_1 = PoincareModel(self.data_large, seed=1, negative=3, burn_in=1)
model_1.train(epochs=2)
model_2 = PoincareModel(self.data_large, seed=1, negative=3, burn_in=1)
model_2.train(epochs=2)
self.assertTrue(np.allclose(model_1.kv.vectors, model_2.kv.vectors))
def test_burn_in(self):
"""Tests that vectors are different after burn-in."""
model = PoincareModel(self.data, burn_in=1, negative=3)
original_vectors = np.copy(model.kv.vectors)
model.train(epochs=0)
self.assertFalse(np.allclose(model.kv.vectors, original_vectors))
def test_burn_in_only_done_once(self):
"""Tests that burn-in does not happen when train is called a second time."""
model = PoincareModel(self.data, negative=3, burn_in=1)
model.train(epochs=0)
original_vectors = np.copy(model.kv.vectors)
model.train(epochs=0)
self.assertTrue(np.allclose(model.kv.vectors, original_vectors))
def test_negatives(self):
"""Tests that correct number of negatives are sampled."""
model = PoincareModel(self.data, negative=5)
self.assertEqual(len(model._get_candidate_negatives()), 5)
def test_error_if_negative_more_than_population(self):
"""Tests error is rased if number of negatives to sample is more than remaining nodes."""
model = PoincareModel(self.data, negative=5)
with self.assertRaises(ValueError):
model.train(epochs=1)
def test_no_duplicates_and_positives_in_negative_sample(self):
"""Tests that no duplicates or positively related nodes are present in negative samples."""
model = PoincareModel(self.data_large, negative=3)
positive_nodes = model.node_relations[0] # Positive nodes for node 0
num_samples = 100 # Repeat experiment multiple times
for i in range(num_samples):
negatives = model._sample_negatives(0)
self.assertFalse(positive_nodes & set(negatives))
self.assertEqual(len(negatives), len(set(negatives)))
def test_handle_duplicates(self):
"""Tests that correct number of negatives are used."""
vector_updates = np.array([[0.5, 0.5], [0.1, 0.2], [0.3, -0.2]])
node_indices = [0, 1, 0]
PoincareModel._handle_duplicates(vector_updates, node_indices)
vector_updates_expected = np.array([[0.0, 0.0], [0.1, 0.2], [0.8, 0.3]])
self.assertTrue((vector_updates == vector_updates_expected).all())
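# Node 0 appears twice in node_indices, so _handle_duplicates folds its two updates
# into one: [0.5, 0.5] + [0.3, -0.2] = [0.8, 0.3] at the last occurrence, with the
# earlier occurrence zeroed out, matching vector_updates_expected above.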
@classmethod
def tearDownClass(cls):
try:
os.unlink(testfile())
except OSError:
pass
class TestPoincareKeyedVectors(unittest.TestCase):
def setUp(self):
self.vectors = PoincareKeyedVectors.load_word2vec_format(datapath('poincare_vectors.bin'), binary=True)
def test_most_similar(self):
"""Test most_similar returns expected results."""
expected = [
'canine.n.02',
'hunting_dog.n.01',
'carnivore.n.01',
'placental.n.01',
'mammal.n.01'
]
predicted = [result[0] for result in self.vectors.most_similar('dog.n.01', topn=5)]
self.assertEqual(expected, predicted)
def test_most_similar_topn(self):
"""Test most_similar returns correct results when `topn` is specified."""
self.assertEqual(len(self.vectors.most_similar('dog.n.01', topn=5)), 5)
self.assertEqual(len(self.vectors.most_similar('dog.n.01', topn=10)), 10)
predicted = self.vectors.most_similar('dog.n.01', topn=None)
self.assertEqual(len(predicted), len(self.vectors) - 1)
self.assertEqual(predicted[-1][0], 'gallant_fox.n.01')
def test_most_similar_raises_keyerror(self):
"""Test most_similar raises KeyError when input is out of vocab."""
with self.assertRaises(KeyError):
self.vectors.most_similar('not_in_vocab')
def test_most_similar_restrict_vocab(self):
"""Test most_similar returns handles restrict_vocab correctly."""
expected = set(self.vectors.index_to_key[:5])
predicted = set(result[0] for result in self.vectors.most_similar('dog.n.01', topn=5, restrict_vocab=5))
self.assertEqual(expected, predicted)
def test_most_similar_to_given(self):
"""Test most_similar_to_given returns correct results."""
predicted = self.vectors.most_similar_to_given('dog.n.01', ['carnivore.n.01', 'placental.n.01', 'mammal.n.01'])
self.assertEqual(predicted, 'carnivore.n.01')
def test_most_similar_with_vector_input(self):
"""Test most_similar returns expected results with an input vector instead of an input word."""
expected = [
'dog.n.01',
'canine.n.02',
'hunting_dog.n.01',
'carnivore.n.01',
'placental.n.01',
]
input_vector = self.vectors['dog.n.01']
predicted = [result[0] for result in self.vectors.most_similar([input_vector], topn=5)]
self.assertEqual(expected, predicted)
def test_distance(self):
"""Test that distance returns expected values."""
self.assertTrue(np.allclose(self.vectors.distance('dog.n.01', 'mammal.n.01'), 4.5278745))
self.assertEqual(self.vectors.distance('dog.n.01', 'dog.n.01'), 0)
def test_distances(self):
"""Test that distances between one word and multiple other words have expected values."""
distances = self.vectors.distances('dog.n.01', ['mammal.n.01', 'dog.n.01'])
self.assertTrue(np.allclose(distances, [4.5278745, 0]))
distances = self.vectors.distances('dog.n.01')
self.assertEqual(len(distances), len(self.vectors))
self.assertTrue(np.allclose(distances[-1], 10.04756))
def test_distances_with_vector_input(self):
"""Test that distances between input vector and a list of words have expected values."""
input_vector = self.vectors['dog.n.01']
distances = self.vectors.distances(input_vector, ['mammal.n.01', 'dog.n.01'])
self.assertTrue(np.allclose(distances, [4.5278745, 0]))
distances = self.vectors.distances(input_vector)
self.assertEqual(len(distances), len(self.vectors))
self.assertTrue(np.allclose(distances[-1], 10.04756))
def test_poincare_distances_batch(self):
"""Test that poincare_distance_batch returns correct distances."""
vector_1 = self.vectors['dog.n.01']
vectors_2 = self.vectors[['mammal.n.01', 'dog.n.01']]
distances = self.vectors.vector_distance_batch(vector_1, vectors_2)
self.assertTrue(np.allclose(distances, [4.5278745, 0]))
def test_poincare_distance(self):
"""Test that poincare_distance returns correct distance between two input vectors."""
vector_1 = self.vectors['dog.n.01']
vector_2 = self.vectors['mammal.n.01']
distance = self.vectors.vector_distance(vector_1, vector_2)
self.assertTrue(np.allclose(distance, 4.5278745))
distance = self.vectors.vector_distance(vector_1, vector_1)
self.assertTrue(np.allclose(distance, 0))
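# For reference, the Poincaré ball distance exercised above (Nickel & Kiela, 2017) is
#     d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))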
def test_closest_child(self):
"""Test closest_child returns expected value and returns None for lowest node in hierarchy."""
self.assertEqual(self.vectors.closest_child('dog.n.01'), 'terrier.n.01')
self.assertEqual(self.vectors.closest_child('harbor_porpoise.n.01'), None)
def test_closest_parent(self):
"""Test closest_parent returns expected value and returns None for highest node in hierarchy."""
self.assertEqual(self.vectors.closest_parent('dog.n.01'), 'canine.n.02')
self.assertEqual(self.vectors.closest_parent('mammal.n.01'), None)
def test_ancestors(self):
"""Test ancestors returns expected list and returns empty list for highest node in hierarchy."""
expected = ['canine.n.02', 'carnivore.n.01', 'placental.n.01', 'mammal.n.01']
self.assertEqual(self.vectors.ancestors('dog.n.01'), expected)
expected = []
self.assertEqual(self.vectors.ancestors('mammal.n.01'), expected)
def test_descendants(self):
"""Test descendants returns expected list and returns empty list for lowest node in hierarchy."""
expected = [
'terrier.n.01', 'sporting_dog.n.01', 'spaniel.n.01', 'water_spaniel.n.01', 'irish_water_spaniel.n.01'
]
self.assertEqual(self.vectors.descendants('dog.n.01'), expected)
self.assertEqual(self.vectors.descendants('dog.n.01', max_depth=3), expected[:3])
def test_similarity(self):
"""Test similarity returns expected value for two nodes, and for identical nodes."""
self.assertTrue(np.allclose(self.vectors.similarity('dog.n.01', 'dog.n.01'), 1))
self.assertTrue(np.allclose(self.vectors.similarity('dog.n.01', 'mammal.n.01'), 0.180901358))
def test_norm(self):
"""Test norm returns expected value."""
self.assertTrue(np.allclose(self.vectors.norm('dog.n.01'), 0.97757602))
self.assertTrue(np.allclose(self.vectors.norm('mammal.n.01'), 0.03914723))
def test_difference_in_hierarchy(self):
"""Test difference_in_hierarchy returns expected value for two nodes, and for identical nodes."""
self.assertTrue(np.allclose(self.vectors.difference_in_hierarchy('dog.n.01', 'dog.n.01'), 0))
self.assertTrue(np.allclose(self.vectors.difference_in_hierarchy('mammal.n.01', 'dog.n.01'), 0.9384287))
self.assertTrue(np.allclose(self.vectors.difference_in_hierarchy('dog.n.01', 'mammal.n.01'), -0.9384287))
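# Consistent with the norms checked above (||dog.n.01|| ~ 0.9776, ||mammal.n.01|| ~ 0.0391),
# difference_in_hierarchy(a, b) appears to equal norm(b) - norm(a): nodes closer to the
# origin sit higher in the hierarchy, hence the positive value for ('mammal', 'dog').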
def test_closer_than(self):
"""Test closer_than returns expected value for distinct and identical nodes."""
self.assertEqual(self.vectors.closer_than('dog.n.01', 'dog.n.01'), [])
expected = set(['canine.n.02', 'hunting_dog.n.01'])
self.assertEqual(set(self.vectors.closer_than('dog.n.01', 'carnivore.n.01')), expected)
def test_rank(self):
"""Test rank returns expected value for distinct and identical nodes."""
self.assertEqual(self.vectors.rank('dog.n.01', 'dog.n.01'), 1)
self.assertEqual(self.vectors.rank('dog.n.01', 'carnivore.n.01'), 3)
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 18,513 | Python | .py | 328 | 48.219512 | 119 | 0.671985 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,053 | test_corpora_dictionary.py | piskvorky_gensim/gensim/test/test_corpora_dictionary.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Unit tests for the `corpora.Dictionary` class.
"""
from collections.abc import Mapping
from itertools import chain
import logging
import unittest
import codecs
import os
import os.path
import scipy
import gensim
from gensim.corpora import Dictionary
from gensim.utils import to_utf8
from gensim.test.utils import get_tmpfile, common_texts
class TestDictionary(unittest.TestCase):
def setUp(self):
self.texts = common_texts
def test_doc_freq_one_doc(self):
texts = [['human', 'interface', 'computer']]
d = Dictionary(texts)
expected = {0: 1, 1: 1, 2: 1}
self.assertEqual(d.dfs, expected)
def test_doc_freq_and_token2id_for_several_docs_with_one_word(self):
# two docs
texts = [['human'], ['human']]
d = Dictionary(texts)
expected = {0: 2}
self.assertEqual(d.dfs, expected)
# only one token (human) should exist
expected = {'human': 0}
self.assertEqual(d.token2id, expected)
# three docs
texts = [['human'], ['human'], ['human']]
d = Dictionary(texts)
expected = {0: 3}
self.assertEqual(d.dfs, expected)
# only one token (human) should exist
expected = {'human': 0}
self.assertEqual(d.token2id, expected)
# four docs
texts = [['human'], ['human'], ['human'], ['human']]
d = Dictionary(texts)
expected = {0: 4}
self.assertEqual(d.dfs, expected)
# only one token (human) should exist
expected = {'human': 0}
self.assertEqual(d.token2id, expected)
def test_doc_freq_for_one_doc_with_several_word(self):
# two words
texts = [['human', 'cat']]
d = Dictionary(texts)
expected = {0: 1, 1: 1}
self.assertEqual(d.dfs, expected)
# three words
texts = [['human', 'cat', 'minors']]
d = Dictionary(texts)
expected = {0: 1, 1: 1, 2: 1}
self.assertEqual(d.dfs, expected)
def test_doc_freq_and_collection_freq(self):
# one doc
texts = [['human', 'human', 'human']]
d = Dictionary(texts)
self.assertEqual(d.cfs, {0: 3})
self.assertEqual(d.dfs, {0: 1})
# two docs
texts = [['human', 'human'], ['human']]
d = Dictionary(texts)
self.assertEqual(d.cfs, {0: 3})
self.assertEqual(d.dfs, {0: 2})
# three docs
texts = [['human'], ['human'], ['human']]
d = Dictionary(texts)
self.assertEqual(d.cfs, {0: 3})
self.assertEqual(d.dfs, {0: 3})
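# dfs is the *document* frequency (number of documents containing the token),
# while cfs is the *collection* frequency (total number of occurrences across
# all documents), as the three cases above illustrate.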
def test_build(self):
d = Dictionary(self.texts)
# Since we don't specify the order in which dictionaries are built,
# we cannot reliably test for the mapping; only the keys and values.
expected_keys = list(range(12))
expected_values = [2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3]
self.assertEqual(sorted(d.dfs.keys()), expected_keys)
self.assertEqual(sorted(d.dfs.values()), expected_values)
expected_keys = sorted([
'computer', 'eps', 'graph', 'human', 'interface',
'minors', 'response', 'survey', 'system', 'time', 'trees', 'user'
])
expected_values = list(range(12))
self.assertEqual(sorted(d.token2id.keys()), expected_keys)
self.assertEqual(sorted(d.token2id.values()), expected_values)
def test_merge(self):
d = Dictionary(self.texts)
f = Dictionary(self.texts[:3])
g = Dictionary(self.texts[3:])
f.merge_with(g)
self.assertEqual(sorted(d.token2id.keys()), sorted(f.token2id.keys()))
def test_filter(self):
d = Dictionary(self.texts)
d.filter_extremes(no_below=2, no_above=1.0, keep_n=4)
dfs_expected = {0: 3, 1: 3, 2: 3, 3: 3}
cfs_expected = {0: 4, 1: 3, 2: 3, 3: 3}
self.assertEqual(d.dfs, dfs_expected)
self.assertEqual(d.cfs, cfs_expected)
def testFilterKeepTokens_keepTokens(self):
# provide keep_tokens argument, keep the tokens given
d = Dictionary(self.texts)
d.filter_extremes(no_below=3, no_above=1.0, keep_tokens=['human', 'survey'])
expected = {'graph', 'trees', 'human', 'system', 'user', 'survey'}
self.assertEqual(set(d.token2id.keys()), expected)
def testFilterKeepTokens_unchangedFunctionality(self):
# do not provide keep_tokens argument, filter_extremes functionality is unchanged
d = Dictionary(self.texts)
d.filter_extremes(no_below=3, no_above=1.0)
expected = {'graph', 'trees', 'system', 'user'}
self.assertEqual(set(d.token2id.keys()), expected)
def testFilterKeepTokens_unseenToken(self):
# do provide keep_tokens argument with unseen tokens, filter_extremes functionality is unchanged
d = Dictionary(self.texts)
d.filter_extremes(no_below=3, no_above=1.0, keep_tokens=['unknown_token'])
expected = {'graph', 'trees', 'system', 'user'}
self.assertEqual(set(d.token2id.keys()), expected)
def testFilterKeepTokens_keepn(self):
# keep_tokens should also work if the keep_n parameter is used, but only
# to keep a maximum of n (so if keep_n < len(keep_tokens), the tokens to keep
# may still get removed to reduce the size to keep_n!)
d = Dictionary(self.texts)
# Note: there are four tokens with freq 3, all the others have frequency 2
# in self.texts. In order to make the test result deterministic, we add
# 2 tokens of frequency one
d.add_documents([['worda'], ['wordb']])
# this should keep the 3 tokens with freq 3 and the one we want to keep
d.filter_extremes(keep_n=5, no_below=0, no_above=1.0, keep_tokens=['worda'])
expected = {'graph', 'trees', 'system', 'user', 'worda'}
self.assertEqual(set(d.token2id.keys()), expected)
def test_filter_most_frequent(self):
d = Dictionary(self.texts)
d.filter_n_most_frequent(4)
expected = {0: 2, 1: 2, 2: 2, 3: 2, 4: 2, 5: 2, 6: 2, 7: 2}
self.assertEqual(d.dfs, expected)
def test_filter_tokens(self):
self.maxDiff = 10000
d = Dictionary(self.texts)
removed_word = d[0]
d.filter_tokens([0])
expected = {
'computer': 0, 'eps': 8, 'graph': 10, 'human': 1,
'interface': 2, 'minors': 11, 'response': 3, 'survey': 4,
'system': 5, 'time': 6, 'trees': 9, 'user': 7
}
del expected[removed_word]
self.assertEqual(sorted(d.token2id.keys()), sorted(expected.keys()))
expected[removed_word] = len(expected)
d.add_documents([[removed_word]])
self.assertEqual(sorted(d.token2id.keys()), sorted(expected.keys()))
def test_doc2bow(self):
d = Dictionary([["žluťoučký"], ["žluťoučký"]])
# pass a utf8 string
self.assertEqual(d.doc2bow(["žluťoučký"]), [(0, 1)])
# doc2bow must raise a TypeError if passed a string instead of array of strings by accident
self.assertRaises(TypeError, d.doc2bow, "žluťoučký")
# unicode must be converted to utf8
self.assertEqual(d.doc2bow([u'\u017elu\u0165ou\u010dk\xfd']), [(0, 1)])
def test_saveAsText(self):
"""`Dictionary` can be saved as textfile. """
tmpf = get_tmpfile('save_dict_test.txt')
small_text = [
["prvé", "slovo"],
["slovo", "druhé"],
["druhé", "slovo"]
]
d = Dictionary(small_text)
d.save_as_text(tmpf)
with codecs.open(tmpf, 'r', encoding='utf-8') as file:
serialized_lines = file.readlines()
self.assertEqual(serialized_lines[0], u"3\n")
self.assertEqual(len(serialized_lines), 4)
# We do not know which word will have which index
self.assertEqual(serialized_lines[1][1:], u"\tdruhé\t2\n")
self.assertEqual(serialized_lines[2][1:], u"\tprvé\t1\n")
self.assertEqual(serialized_lines[3][1:], u"\tslovo\t3\n")
d.save_as_text(tmpf, sort_by_word=False)
with codecs.open(tmpf, 'r', encoding='utf-8') as file:
serialized_lines = file.readlines()
self.assertEqual(serialized_lines[0], u"3\n")
self.assertEqual(len(serialized_lines), 4)
self.assertEqual(serialized_lines[1][1:], u"\tslovo\t3\n")
self.assertEqual(serialized_lines[2][1:], u"\tdruhé\t2\n")
self.assertEqual(serialized_lines[3][1:], u"\tprvé\t1\n")
def test_loadFromText_legacy(self):
"""
`Dictionary` can be loaded from textfile in legacy format.
Legacy format does not have num_docs on the first line.
"""
tmpf = get_tmpfile('load_dict_test_legacy.txt')
no_num_docs_serialization = to_utf8("1\tprvé\t1\n2\tslovo\t2\n")
with open(tmpf, "wb") as file:
file.write(no_num_docs_serialization)
d = Dictionary.load_from_text(tmpf)
self.assertEqual(d.token2id[u"prvé"], 1)
self.assertEqual(d.token2id[u"slovo"], 2)
self.assertEqual(d.dfs[1], 1)
self.assertEqual(d.dfs[2], 2)
self.assertEqual(d.num_docs, 0)
def test_loadFromText(self):
"""`Dictionary` can be loaded from textfile."""
tmpf = get_tmpfile('load_dict_test.txt')
no_num_docs_serialization = to_utf8("2\n1\tprvé\t1\n2\tslovo\t2\n")
with open(tmpf, "wb") as file:
file.write(no_num_docs_serialization)
d = Dictionary.load_from_text(tmpf)
self.assertEqual(d.token2id[u"prvé"], 1)
self.assertEqual(d.token2id[u"slovo"], 2)
self.assertEqual(d.dfs[1], 1)
self.assertEqual(d.dfs[2], 2)
self.assertEqual(d.num_docs, 2)
def test_saveAsText_and_loadFromText(self):
"""`Dictionary` can be saved as textfile and loaded again from textfile. """
tmpf = get_tmpfile('dict_test.txt')
for sort_by_word in [True, False]:
d = Dictionary(self.texts)
d.save_as_text(tmpf, sort_by_word=sort_by_word)
self.assertTrue(os.path.exists(tmpf))
d_loaded = Dictionary.load_from_text(tmpf)
self.assertNotEqual(d_loaded, None)
self.assertEqual(d_loaded.token2id, d.token2id)
def test_from_corpus(self):
"""build `Dictionary` from an existing corpus"""
documents = [
"Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"
]
stoplist = set('for a of the and to in'.split())
texts = [
[word for word in document.lower().split() if word not in stoplist]
for document in documents]
# remove words that appear only once
all_tokens = list(chain.from_iterable(texts))
tokens_once = set(word for word in set(all_tokens) if all_tokens.count(word) == 1)
texts = [[word for word in text if word not in tokens_once] for text in texts]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
# Create dictionary from corpus without a token map
dictionary_from_corpus = Dictionary.from_corpus(corpus)
dict_token2id_vals = sorted(dictionary.token2id.values())
dict_from_corpus_vals = sorted(dictionary_from_corpus.token2id.values())
self.assertEqual(dict_token2id_vals, dict_from_corpus_vals)
self.assertEqual(dictionary.dfs, dictionary_from_corpus.dfs)
self.assertEqual(dictionary.num_docs, dictionary_from_corpus.num_docs)
self.assertEqual(dictionary.num_pos, dictionary_from_corpus.num_pos)
self.assertEqual(dictionary.num_nnz, dictionary_from_corpus.num_nnz)
# Create dictionary from corpus with an id=>token map
dictionary_from_corpus_2 = Dictionary.from_corpus(corpus, id2word=dictionary)
self.assertEqual(dictionary.token2id, dictionary_from_corpus_2.token2id)
self.assertEqual(dictionary.dfs, dictionary_from_corpus_2.dfs)
self.assertEqual(dictionary.num_docs, dictionary_from_corpus_2.num_docs)
self.assertEqual(dictionary.num_pos, dictionary_from_corpus_2.num_pos)
self.assertEqual(dictionary.num_nnz, dictionary_from_corpus_2.num_nnz)
# Ensure Sparse2Corpus is compatible with from_corpus
bow = gensim.matutils.Sparse2Corpus(scipy.sparse.rand(10, 100))
dictionary = Dictionary.from_corpus(bow)
self.assertEqual(dictionary.num_docs, 100)
def test_dict_interface(self):
"""Test Python 2 dict-like interface in both Python 2 and 3."""
d = Dictionary(self.texts)
self.assertTrue(isinstance(d, Mapping))
self.assertEqual(list(zip(d.keys(), d.values())), list(d.items()))
# Even in Py3, we want the iter* members.
self.assertEqual(list(d.items()), list(d.iteritems()))
self.assertEqual(list(d.keys()), list(d.iterkeys()))
self.assertEqual(list(d.values()), list(d.itervalues()))
def test_patch_with_special_tokens(self):
special_tokens = {'pad': 0, 'space': 1, 'quake': 3}
corpus = [["máma", "mele", "maso"], ["ema", "má", "máma"]]
d = Dictionary(corpus)
self.assertEqual(len(d.token2id), 5)
d.patch_with_special_tokens(special_tokens)
self.assertEqual(d.token2id['pad'], 0)
self.assertEqual(d.token2id['space'], 1)
self.assertEqual(d.token2id['quake'], 3)
self.assertEqual(len(d.token2id), 8)
self.assertNotIn((0, 1), d.doc2bow(corpus[0]))
self.assertIn((0, 1), d.doc2bow(['pad'] + corpus[0]))
corpus_with_special_tokens = [["máma", "mele", "maso"], ["ema", "má", "máma", "space"]]
d = Dictionary(corpus_with_special_tokens)
self.assertEqual(len(d.token2id), 6)
self.assertNotEqual(d.token2id['space'], 1)
d.patch_with_special_tokens(special_tokens)
self.assertEqual(len(d.token2id), 8)
self.assertEqual(max(d.token2id.values()), 7)
self.assertEqual(d.token2id['space'], 1)
self.assertNotIn((1, 1), d.doc2bow(corpus_with_special_tokens[0]))
self.assertIn((1, 1), d.doc2bow(corpus_with_special_tokens[1]))
def test_most_common_with_n(self):
texts = [['human', 'human', 'human', 'computer', 'computer', 'interface', 'interface']]
d = Dictionary(texts)
expected = [('human', 3), ('computer', 2)]
assert d.most_common(n=2) == expected
def test_most_common_without_n(self):
texts = [['human', 'human', 'human', 'computer', 'computer', 'interface', 'interface']]
d = Dictionary(texts)
expected = [('human', 3), ('computer', 2), ('interface', 2)]
assert d.most_common(n=None) == expected
# endclass TestDictionary
if __name__ == '__main__':
logging.basicConfig(level=logging.WARNING)
unittest.main()
| 15,569 | Python | .py | 317 | 40.189274 | 104 | 0.62246 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,054 | test_probability_estimation.py | piskvorky_gensim/gensim/test/test_probability_estimation.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for probability estimation algorithms in the probability_estimation module.
"""
import logging
import unittest
from gensim.corpora.dictionary import Dictionary
from gensim.corpora.hashdictionary import HashDictionary
from gensim.topic_coherence import probability_estimation
class BaseTestCases:
class ProbabilityEstimationBase(unittest.TestCase):
texts = [
['human', 'interface', 'computer'],
['eps', 'user', 'interface', 'system'],
['system', 'human', 'system', 'eps'],
['user', 'response', 'time'],
['trees'],
['graph', 'trees']
]
dictionary = None
def build_segmented_topics(self):
# Suppose the segmented topics from s_one_pre are:
token2id = self.dictionary.token2id
computer_id = token2id['computer']
system_id = token2id['system']
user_id = token2id['user']
graph_id = token2id['graph']
self.segmented_topics = [
[
(system_id, graph_id),
(computer_id, graph_id),
(computer_id, system_id)
], [
(computer_id, graph_id),
(user_id, graph_id),
(user_id, computer_id)
]
]
self.computer_id = computer_id
self.system_id = system_id
self.user_id = user_id
self.graph_id = graph_id
def setup_dictionary(self):
raise NotImplementedError
def setUp(self):
self.setup_dictionary()
self.corpus = [self.dictionary.doc2bow(text) for text in self.texts]
self.build_segmented_topics()
def test_p_boolean_document(self):
"""Test p_boolean_document()"""
accumulator = probability_estimation.p_boolean_document(
self.corpus, self.segmented_topics)
obtained = accumulator.index_to_dict()
expected = {
self.graph_id: {5},
self.user_id: {1, 3},
self.system_id: {1, 2},
self.computer_id: {0}
}
self.assertEqual(expected, obtained)
def test_p_boolean_sliding_window(self):
"""Test p_boolean_sliding_window()"""
# Test with window size as 2. window_id is zero indexed.
accumulator = probability_estimation.p_boolean_sliding_window(
self.texts, self.segmented_topics, self.dictionary, 2)
self.assertEqual(1, accumulator[self.computer_id])
self.assertEqual(3, accumulator[self.user_id])
self.assertEqual(1, accumulator[self.graph_id])
self.assertEqual(4, accumulator[self.system_id])
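# Illustrative sketch (not part of the original test suite): the sliding-window counts
# asserted above can be reproduced by sliding a window of the given size over each text
# and counting the windows that contain a token. This is our own informal mirror of the
# behaviour being tested and may differ from probability_estimation internals in edge cases.
def _count_sliding_windows_with_token(texts, token, window_size=2):
    """Count the size-`window_size` windows (taken per text) that contain `token`."""
    count = 0
    for text in texts:
        n_windows = max(len(text) - window_size + 1, 1)
        for start in range(n_windows):
            if token in text[start:start + window_size]:
                count += 1
    return count  # e.g. 'system' falls into 4 of the size-2 windows, matching the assertion above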
class TestProbabilityEstimation(BaseTestCases.ProbabilityEstimationBase):
def setup_dictionary(self):
self.dictionary = HashDictionary(self.texts)
class TestProbabilityEstimationWithNormalDictionary(BaseTestCases.ProbabilityEstimationBase):
def setup_dictionary(self):
self.dictionary = Dictionary(self.texts)
self.dictionary.id2token = {v: k for k, v in self.dictionary.token2id.items()}
if __name__ == '__main__':
logging.root.setLevel(logging.WARNING)
unittest.main()
| 3,567 | Python | .py | 83 | 32.216867 | 95 | 0.600115 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,055 | test_similarities.py | piskvorky_gensim/gensim/test/test_similarities.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for similarity algorithms (the similarities package).
"""
import logging
import unittest
import math
import os
import numpy
import scipy
from gensim import utils
from gensim.corpora import Dictionary
from gensim.models import word2vec
from gensim.models import doc2vec
from gensim.models import KeyedVectors
from gensim.models import TfidfModel
from gensim import matutils, similarities
from gensim.models import Word2Vec, FastText
from gensim.test.utils import (
datapath, get_tmpfile,
common_texts as TEXTS, common_dictionary as DICTIONARY, common_corpus as CORPUS,
)
from gensim.similarities import UniformTermSimilarityIndex
from gensim.similarities import WordEmbeddingSimilarityIndex
from gensim.similarities import SparseTermSimilarityMatrix
from gensim.similarities import LevenshteinSimilarityIndex
from gensim.similarities.docsim import _nlargest
from gensim.similarities.fastss import editdist
try:
from ot import emd2 # noqa:F401
POT_EXT = True
except (ImportError, ValueError):
POT_EXT = False
SENTENCES = [doc2vec.TaggedDocument(words, [i]) for i, words in enumerate(TEXTS)]
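# Illustrative sketch (not part of the original test suite): the basic workflow these tests
# exercise -- build a similarity index over the shared test corpus and query it with a
# bag-of-words vector. Only public gensim APIs are used; the helper name is our own.
def _example_matrix_similarity_query():
    """Build a dense cosine-similarity index over CORPUS and query it with the first document."""
    index = similarities.MatrixSimilarity(CORPUS, num_features=len(DICTIONARY))
    query = DICTIONARY.doc2bow(TEXTS[0])
    return index[query]  # numpy array of cosine similarities against every indexed document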
@unittest.skip("skipping abstract base class")
class _TestSimilarityABC(unittest.TestCase):
"""
Base class for SparseMatrixSimilarity and MatrixSimilarity unit tests.
"""
def factoryMethod(self):
"""Creates a SimilarityABC instance."""
return self.cls(CORPUS, num_features=len(DICTIONARY))
def test_full(self, num_best=None, shardsize=100):
if self.cls == similarities.Similarity:
index = self.cls(None, CORPUS, num_features=len(DICTIONARY), shardsize=shardsize)
else:
index = self.cls(CORPUS, num_features=len(DICTIONARY))
if isinstance(index, similarities.MatrixSimilarity):
expected = numpy.array([
[0.57735026, 0.57735026, 0.57735026, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.40824831, 0.0, 0.40824831, 0.40824831, 0.40824831, 0.40824831, 0.40824831, 0.0, 0.0, 0.0, 0.0],
[0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.40824831, 0.0, 0.0, 0.0, 0.81649661, 0.0, 0.40824831, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.57735026, 0.57735026, 0.0, 0.0, 0.57735026, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1., 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.70710677, 0.70710677, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.57735026, 0.57735026, 0.57735026],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.57735026, 0.0, 0.0, 0.0, 0.0, 0.57735026, 0.57735026],
], dtype=numpy.float32)
# HACK: dictionary can be in different order, so compare in sorted order
self.assertTrue(numpy.allclose(sorted(expected.flat), sorted(index.index.flat)))
index.num_best = num_best
query = CORPUS[0]
sims = index[query]
expected = [(0, 0.99999994), (2, 0.28867513), (3, 0.23570226), (1, 0.23570226)][: num_best]
# convert sims to full numpy arrays, so we can use allclose() and ignore
# ordering of items with the same similarity value
expected = matutils.sparse2full(expected, len(index))
if num_best is not None: # when num_best is None, sims is already a numpy array
sims = matutils.sparse2full(sims, len(index))
self.assertTrue(numpy.allclose(expected, sims))
if self.cls == similarities.Similarity:
index.destroy()
def test_num_best(self):
if self.cls == similarities.WmdSimilarity and not POT_EXT:
self.skipTest("POT not installed")
for num_best in [None, 0, 1, 9, 1000]:
            self.test_full(num_best=num_best)
def test_full2sparse_clipped(self):
vec = [0.8, 0.2, 0.0, 0.0, -0.1, -0.15]
expected = [(0, 0.80000000000000004), (1, 0.20000000000000001), (5, -0.14999999999999999)]
        self.assertEqual(matutils.full2sparse_clipped(vec, topn=3), expected)
def test_scipy2scipy_clipped(self):
# Test for scipy vector/row
vec = [0.8, 0.2, 0.0, 0.0, -0.1, -0.15]
expected = [(0, 0.80000000000000004), (1, 0.20000000000000001), (5, -0.14999999999999999)]
vec_scipy = scipy.sparse.csr_matrix(vec)
vec_scipy_clipped = matutils.scipy2scipy_clipped(vec_scipy, topn=3)
self.assertTrue(scipy.sparse.issparse(vec_scipy_clipped))
        self.assertEqual(matutils.scipy2sparse(vec_scipy_clipped), expected)
# Test for scipy matrix
vec = [0.8, 0.2, 0.0, 0.0, -0.1, -0.15]
expected = [(0, 0.80000000000000004), (1, 0.20000000000000001), (5, -0.14999999999999999)]
matrix_scipy = scipy.sparse.csr_matrix([vec] * 3)
matrix_scipy_clipped = matutils.scipy2scipy_clipped(matrix_scipy, topn=3)
self.assertTrue(scipy.sparse.issparse(matrix_scipy_clipped))
        self.assertEqual([matutils.scipy2sparse(x) for x in matrix_scipy_clipped], [expected] * 3)
def test_empty_query(self):
index = self.factoryMethod()
if isinstance(index, similarities.WmdSimilarity) and not POT_EXT:
self.skipTest("POT not installed")
query = []
try:
sims = index[query]
self.assertTrue(sims is not None)
except IndexError:
self.assertTrue(False)
def test_chunking(self):
if self.cls == similarities.Similarity:
index = self.cls(None, CORPUS, num_features=len(DICTIONARY), shardsize=5)
else:
index = self.cls(CORPUS, num_features=len(DICTIONARY))
query = CORPUS[:3]
sims = index[query]
expected = numpy.array([
[0.99999994, 0.23570226, 0.28867513, 0.23570226, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.23570226, 1.0, 0.40824831, 0.33333334, 0.70710677, 0.0, 0.0, 0.0, 0.23570226],
[0.28867513, 0.40824831, 1.0, 0.61237246, 0.28867513, 0.0, 0.0, 0.0, 0.0]
], dtype=numpy.float32)
self.assertTrue(numpy.allclose(expected, sims))
# test the same thing but with num_best
index.num_best = 3
sims = index[query]
expected = [
[(0, 0.99999994), (2, 0.28867513), (1, 0.23570226)],
[(1, 1.0), (4, 0.70710677), (2, 0.40824831)],
[(2, 1.0), (3, 0.61237246), (1, 0.40824831)]
]
self.assertTrue(numpy.allclose(expected, sims))
if self.cls == similarities.Similarity:
index.destroy()
def test_iter(self):
if self.cls == similarities.Similarity:
index = self.cls(None, CORPUS, num_features=len(DICTIONARY), shardsize=5)
else:
index = self.cls(CORPUS, num_features=len(DICTIONARY))
sims = [sim for sim in index]
expected = numpy.array([
[0.99999994, 0.23570226, 0.28867513, 0.23570226, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.23570226, 1.0, 0.40824831, 0.33333334, 0.70710677, 0.0, 0.0, 0.0, 0.23570226],
[0.28867513, 0.40824831, 1.0, 0.61237246, 0.28867513, 0.0, 0.0, 0.0, 0.0],
[0.23570226, 0.33333334, 0.61237246, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.70710677, 0.28867513, 0.0, 0.99999994, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.70710677, 0.57735026, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.70710677, 0.99999994, 0.81649655, 0.40824828],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.57735026, 0.81649655, 0.99999994, 0.66666663],
[0.0, 0.23570226, 0.0, 0.0, 0.0, 0.0, 0.40824828, 0.66666663, 0.99999994]
], dtype=numpy.float32)
self.assertTrue(numpy.allclose(expected, sims))
if self.cls == similarities.Similarity:
index.destroy()
def test_persistency(self):
if self.cls == similarities.WmdSimilarity and not POT_EXT:
self.skipTest("POT not installed")
fname = get_tmpfile('gensim_similarities.tst.pkl')
index = self.factoryMethod()
index.save(fname)
index2 = self.cls.load(fname)
if self.cls == similarities.Similarity:
# for Similarity, only do a basic check
self.assertTrue(len(index.shards) == len(index2.shards))
index.destroy()
else:
if isinstance(index, similarities.SparseMatrixSimilarity):
# hack SparseMatrixSim indexes so they're easy to compare
index.index = index.index.todense()
index2.index = index2.index.todense()
self.assertTrue(numpy.allclose(index.index, index2.index))
self.assertEqual(index.num_best, index2.num_best)
def test_persistency_compressed(self):
if self.cls == similarities.WmdSimilarity and not POT_EXT:
self.skipTest("POT not installed")
fname = get_tmpfile('gensim_similarities.tst.pkl.gz')
index = self.factoryMethod()
index.save(fname)
index2 = self.cls.load(fname)
if self.cls == similarities.Similarity:
# for Similarity, only do a basic check
self.assertTrue(len(index.shards) == len(index2.shards))
index.destroy()
else:
if isinstance(index, similarities.SparseMatrixSimilarity):
# hack SparseMatrixSim indexes so they're easy to compare
index.index = index.index.todense()
index2.index = index2.index.todense()
self.assertTrue(numpy.allclose(index.index, index2.index))
self.assertEqual(index.num_best, index2.num_best)
def test_large(self):
if self.cls == similarities.WmdSimilarity and not POT_EXT:
self.skipTest("POT not installed")
fname = get_tmpfile('gensim_similarities.tst.pkl')
index = self.factoryMethod()
# store all arrays separately
index.save(fname, sep_limit=0)
index2 = self.cls.load(fname)
if self.cls == similarities.Similarity:
# for Similarity, only do a basic check
self.assertTrue(len(index.shards) == len(index2.shards))
index.destroy()
else:
if isinstance(index, similarities.SparseMatrixSimilarity):
# hack SparseMatrixSim indexes so they're easy to compare
index.index = index.index.todense()
index2.index = index2.index.todense()
self.assertTrue(numpy.allclose(index.index, index2.index))
self.assertEqual(index.num_best, index2.num_best)
def test_large_compressed(self):
if self.cls == similarities.WmdSimilarity and not POT_EXT:
self.skipTest("POT not installed")
fname = get_tmpfile('gensim_similarities.tst.pkl.gz')
index = self.factoryMethod()
# store all arrays separately
index.save(fname, sep_limit=0)
index2 = self.cls.load(fname, mmap=None)
if self.cls == similarities.Similarity:
# for Similarity, only do a basic check
self.assertTrue(len(index.shards) == len(index2.shards))
index.destroy()
else:
if isinstance(index, similarities.SparseMatrixSimilarity):
# hack SparseMatrixSim indexes so they're easy to compare
index.index = index.index.todense()
index2.index = index2.index.todense()
self.assertTrue(numpy.allclose(index.index, index2.index))
self.assertEqual(index.num_best, index2.num_best)
def test_mmap(self):
if self.cls == similarities.WmdSimilarity and not POT_EXT:
self.skipTest("POT not installed")
fname = get_tmpfile('gensim_similarities.tst.pkl')
index = self.factoryMethod()
# store all arrays separately
index.save(fname, sep_limit=0)
# same thing, but use mmap to load arrays
index2 = self.cls.load(fname, mmap='r')
if self.cls == similarities.Similarity:
# for Similarity, only do a basic check
self.assertTrue(len(index.shards) == len(index2.shards))
index.destroy()
else:
if isinstance(index, similarities.SparseMatrixSimilarity):
# hack SparseMatrixSim indexes so they're easy to compare
index.index = index.index.todense()
index2.index = index2.index.todense()
self.assertTrue(numpy.allclose(index.index, index2.index))
self.assertEqual(index.num_best, index2.num_best)
def test_mmap_compressed(self):
if self.cls == similarities.WmdSimilarity and not POT_EXT:
self.skipTest("POT not installed")
fname = get_tmpfile('gensim_similarities.tst.pkl.gz')
index = self.factoryMethod()
# store all arrays separately
index.save(fname, sep_limit=0)
# same thing, but use mmap to load arrays
self.assertRaises(IOError, self.cls.load, fname, mmap='r')
class TestMatrixSimilarity(_TestSimilarityABC):
def setUp(self):
self.cls = similarities.MatrixSimilarity
class TestWmdSimilarity(_TestSimilarityABC):
def setUp(self):
self.cls = similarities.WmdSimilarity
self.w2v_model = Word2Vec(TEXTS, min_count=1).wv
def factoryMethod(self):
# Override factoryMethod.
return self.cls(TEXTS, self.w2v_model)
@unittest.skipIf(POT_EXT is False, "POT not installed")
def test_full(self, num_best=None):
# Override testFull.
index = self.cls(TEXTS, self.w2v_model)
index.num_best = num_best
query = TEXTS[0]
sims = index[query]
if num_best is not None:
# Sparse array.
for i, sim in sims:
                # Note that similarities are greater than zero, as they are computed as 1 / (1 + distance).
self.assertTrue(numpy.alltrue(sim > 0.0))
else:
            self.assertTrue(sims[0] == 1.0)  # Similarity of a document with itself is 1.0.
self.assertTrue(numpy.alltrue(sims[1:] > 0.0))
self.assertTrue(numpy.alltrue(sims[1:] < 1.0))
@unittest.skipIf(POT_EXT is False, "POT not installed")
def test_non_increasing(self):
        """Check that similarities are non-increasing when `num_best` is not `None`."""
# NOTE: this could be implemented for other similarities as well (i.e.
# in _TestSimilarityABC).
index = self.cls(TEXTS, self.w2v_model, num_best=3)
query = TEXTS[0]
sims = index[query]
sims2 = numpy.asarray(sims)[:, 1] # Just the similarities themselves.
# The difference of adjacent elements should be negative.
cond = sum(numpy.diff(sims2) < 0) == len(sims2) - 1
self.assertTrue(cond)
@unittest.skipIf(POT_EXT is False, "POT not installed")
def test_chunking(self):
# Override testChunking.
index = self.cls(TEXTS, self.w2v_model)
query = TEXTS[:3]
sims = index[query]
for i in range(3):
            self.assertTrue(numpy.alltrue(sims[i, i] == 1.0))  # Similarity of a document with itself is 1.0.
# test the same thing but with num_best
index.num_best = 3
sims = index[query]
for sims_temp in sims:
for i, sim in sims_temp:
self.assertTrue(numpy.alltrue(sim > 0.0))
self.assertTrue(numpy.alltrue(sim <= 1.0))
@unittest.skipIf(POT_EXT is False, "POT not installed")
def test_iter(self):
# Override testIter.
index = self.cls(TEXTS, self.w2v_model)
for sims in index:
self.assertTrue(numpy.alltrue(sims >= 0.0))
self.assertTrue(numpy.alltrue(sims <= 1.0))
@unittest.skipIf(POT_EXT is False, "POT not installed")
def test_str(self):
index = self.cls(TEXTS, self.w2v_model)
self.assertTrue(str(index))
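# Illustrative sketch (not part of the original test suite): WmdSimilarity turns the Word
# Mover's Distance between two documents into a similarity score. The relation assumed by
# the assertions above is similarity = 1 / (1 + distance), so a document compared with
# itself (distance 0) scores 1.0 and everything else falls strictly between 0 and 1.
def _wmd_similarity_from_distance(distance):
    """Map a non-negative WMD distance onto a similarity in (0, 1]."""
    return 1.0 / (1.0 + distance)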
class TestSoftCosineSimilarity(_TestSimilarityABC):
def setUp(self):
self.cls = similarities.SoftCosineSimilarity
self.tfidf = TfidfModel(dictionary=DICTIONARY)
similarity_matrix = scipy.sparse.identity(12, format="lil")
similarity_matrix[DICTIONARY.token2id["user"], DICTIONARY.token2id["human"]] = 0.5
similarity_matrix[DICTIONARY.token2id["human"], DICTIONARY.token2id["user"]] = 0.5
self.similarity_matrix = SparseTermSimilarityMatrix(similarity_matrix)
def factoryMethod(self):
return self.cls(CORPUS, self.similarity_matrix)
def test_full(self, num_best=None):
# Single query
index = self.cls(CORPUS, self.similarity_matrix, num_best=num_best)
query = DICTIONARY.doc2bow(TEXTS[0])
sims = index[query]
if num_best is not None:
# Sparse array.
for i, sim in sims:
self.assertTrue(numpy.alltrue(sim <= 1.0))
self.assertTrue(numpy.alltrue(sim >= 0.0))
else:
self.assertAlmostEqual(1.0, sims[0]) # Similarity of a document with itself is 1.0.
self.assertTrue(numpy.alltrue(sims[1:] >= 0.0))
self.assertTrue(numpy.alltrue(sims[1:] < 1.0))
# Corpora
for query in (
CORPUS, # Basic text corpus.
self.tfidf[CORPUS]): # Transformed corpus without slicing support.
index = self.cls(query, self.similarity_matrix, num_best=num_best)
sims = index[query]
if num_best is not None:
# Sparse array.
for result in sims:
for i, sim in result:
self.assertTrue(numpy.alltrue(sim <= 1.0))
self.assertTrue(numpy.alltrue(sim >= 0.0))
else:
for i, result in enumerate(sims):
self.assertAlmostEqual(1.0, result[i]) # Similarity of a document with itself is 1.0.
self.assertTrue(numpy.alltrue(result[:i] >= 0.0))
self.assertTrue(numpy.alltrue(result[:i] < 1.0))
self.assertTrue(numpy.alltrue(result[i + 1:] >= 0.0))
self.assertTrue(numpy.alltrue(result[i + 1:] < 1.0))
def test_non_increasing(self):
""" Check that similarities are non-increasing when `num_best` is not `None`."""
# NOTE: this could be implemented for other similarities as well (i.e. in _TestSimilarityABC).
index = self.cls(CORPUS, self.similarity_matrix, num_best=5)
query = DICTIONARY.doc2bow(TEXTS[0])
sims = index[query]
sims2 = numpy.asarray(sims)[:, 1] # Just the similarities themselves.
# The difference of adjacent elements should be less than or equal to zero.
cond = sum(numpy.diff(sims2) <= 0) == len(sims2) - 1
self.assertTrue(cond)
def test_chunking(self):
index = self.cls(CORPUS, self.similarity_matrix)
query = [DICTIONARY.doc2bow(document) for document in TEXTS[:3]]
sims = index[query]
for i in range(3):
self.assertTrue(numpy.alltrue(sims[i, i] == 1.0)) # Similarity of a document with itself is 1.0.
# test the same thing but with num_best
index.num_best = 5
sims = index[query]
for i, chunk in enumerate(sims):
expected = i
self.assertAlmostEqual(expected, chunk[0][0], places=2)
expected = 1.0
self.assertAlmostEqual(expected, chunk[0][1], places=2)
def test_iter(self):
index = self.cls(CORPUS, self.similarity_matrix)
for sims in index:
self.assertTrue(numpy.alltrue(sims >= 0.0))
self.assertTrue(numpy.alltrue(sims <= 1.0))
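# Illustrative sketch (not part of the original test suite): soft cosine similarity
# generalizes the cosine with a term-similarity matrix S, i.e.
# softcos(x, y) = x^T S y / (sqrt(x^T S x) * sqrt(y^T S y)).
# The helper below is our own dense NumPy mirror of that formula, not the gensim implementation.
def _soft_cosine(x, y, s):
    """Soft cosine similarity of dense 1-D vectors x and y under term-similarity matrix s."""
    norm = numpy.sqrt(x @ s @ x) * numpy.sqrt(y @ s @ y)
    return (x @ s @ y) / norm if norm else 0.0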
class TestSparseMatrixSimilarity(_TestSimilarityABC):
def setUp(self):
self.cls = similarities.SparseMatrixSimilarity
def test_maintain_sparsity(self):
"""Sparsity is correctly maintained when maintain_sparsity=True"""
num_features = len(DICTIONARY)
index = self.cls(CORPUS, num_features=num_features)
dense_sims = index[CORPUS]
index = self.cls(CORPUS, num_features=num_features, maintain_sparsity=True)
sparse_sims = index[CORPUS]
self.assertFalse(scipy.sparse.issparse(dense_sims))
self.assertTrue(scipy.sparse.issparse(sparse_sims))
numpy.testing.assert_array_equal(dense_sims, sparse_sims.todense())
def test_maintain_sparsity_with_num_best(self):
"""Tests that sparsity is correctly maintained when maintain_sparsity=True and num_best is not None"""
num_features = len(DICTIONARY)
index = self.cls(CORPUS, num_features=num_features, maintain_sparsity=False, num_best=3)
dense_topn_sims = index[CORPUS]
index = self.cls(CORPUS, num_features=num_features, maintain_sparsity=True, num_best=3)
scipy_topn_sims = index[CORPUS]
self.assertFalse(scipy.sparse.issparse(dense_topn_sims))
self.assertTrue(scipy.sparse.issparse(scipy_topn_sims))
self.assertEqual(dense_topn_sims, [matutils.scipy2sparse(v) for v in scipy_topn_sims])
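# Illustrative sketch (not part of the original test suite): the difference the two tests
# above assert -- with maintain_sparsity=True the index returns scipy sparse results,
# otherwise dense numpy arrays. The helper name is our own.
def _example_sparse_query(maintain_sparsity=True):
    """Query a SparseMatrixSimilarity index over CORPUS, optionally keeping the result sparse."""
    index = similarities.SparseMatrixSimilarity(
        CORPUS, num_features=len(DICTIONARY), maintain_sparsity=maintain_sparsity)
    return index[CORPUS]  # scipy.sparse matrix if maintain_sparsity else numpy array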
class TestSimilarity(_TestSimilarityABC):
def setUp(self):
self.cls = similarities.Similarity
def factoryMethod(self):
# Override factoryMethod.
return self.cls(None, CORPUS, num_features=len(DICTIONARY), shardsize=5)
def test_sharding(self):
for num_best in [None, 0, 1, 9, 1000]:
for shardsize in [1, 2, 9, 1000]:
                self.test_full(num_best=num_best, shardsize=shardsize)
def test_reopen(self):
"""test re-opening partially full shards"""
index = similarities.Similarity(None, CORPUS[:5], num_features=len(DICTIONARY), shardsize=9)
_ = index[CORPUS[0]] # noqa:F841 forces shard close
index.add_documents(CORPUS[5:])
query = CORPUS[0]
sims = index[query]
expected = [(0, 0.99999994), (2, 0.28867513), (3, 0.23570226), (1, 0.23570226)]
expected = matutils.sparse2full(expected, len(index))
self.assertTrue(numpy.allclose(expected, sims))
index.destroy()
def test_mmap_compressed(self):
        # Deliberate no-op override: a sharded Similarity index stores no large arrays
        # to be mmapped, so the compressed-mmap check does not apply here.
        pass
def test_chunksize(self):
index = self.cls(None, CORPUS, num_features=len(DICTIONARY), shardsize=5)
expected = [sim for sim in index]
index.chunksize = len(index) - 1
sims = [sim for sim in index]
self.assertTrue(numpy.allclose(expected, sims))
index.destroy()
def test_nlargest(self):
sims = ([(0, 0.8), (1, 0.2), (2, 0.0), (3, 0.0), (4, -0.1), (5, -0.15)],)
expected = [(0, 0.8), (1, 0.2), (5, -0.15)]
        self.assertEqual(_nlargest(3, sims), expected)
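# Illustrative sketch (not part of the original test suite): similarities.Similarity keeps
# its index in disk-backed shards of `shardsize` documents and can grow via add_documents(),
# which the sharding and reopen tests above exercise. The helper name and tmpfile prefix are our own.
def _example_sharded_query(shardsize=5):
    """Build a sharded Similarity index over CORPUS, query it, and clean up the shards."""
    output_prefix = get_tmpfile('example_similarity_shard')
    index = similarities.Similarity(output_prefix, CORPUS, num_features=len(DICTIONARY), shardsize=shardsize)
    sims = index[CORPUS[0]]
    index.destroy()  # remove the shard files created on disk
    return sims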
class TestWord2VecAnnoyIndexer(unittest.TestCase):
def setUp(self):
try:
import annoy # noqa:F401
except ImportError as e:
raise unittest.SkipTest("Annoy library is not available: %s" % e)
from gensim.similarities.annoy import AnnoyIndexer
self.indexer = AnnoyIndexer
def test_word2vec(self):
model = word2vec.Word2Vec(TEXTS, min_count=1)
index = self.indexer(model, 10)
self.assertVectorIsSimilarToItself(model.wv, index)
self.assertApproxNeighborsMatchExact(model.wv, model.wv, index)
self.assertIndexSaved(index)
self.assertLoadedIndexEqual(index, model)
def test_fast_text(self):
class LeeReader:
def __init__(self, fn):
self.fn = fn
def __iter__(self):
with utils.open(self.fn, 'r', encoding="latin_1") as infile:
for line in infile:
yield line.lower().strip().split()
model = FastText(LeeReader(datapath('lee.cor')), bucket=5000)
index = self.indexer(model, 10)
self.assertVectorIsSimilarToItself(model.wv, index)
self.assertApproxNeighborsMatchExact(model.wv, model.wv, index)
self.assertIndexSaved(index)
self.assertLoadedIndexEqual(index, model)
def test_annoy_indexing_of_keyed_vectors(self):
from gensim.similarities.annoy import AnnoyIndexer
keyVectors_file = datapath('lee_fasttext.vec')
model = KeyedVectors.load_word2vec_format(keyVectors_file)
index = AnnoyIndexer(model, 10)
self.assertEqual(index.num_trees, 10)
self.assertVectorIsSimilarToItself(model, index)
self.assertApproxNeighborsMatchExact(model, model, index)
def test_load_missing_raises_error(self):
from gensim.similarities.annoy import AnnoyIndexer
test_index = AnnoyIndexer()
self.assertRaises(IOError, test_index.load, fname='test-index')
def assertVectorIsSimilarToItself(self, wv, index):
vector = wv.get_normed_vectors()[0]
label = wv.index_to_key[0]
approx_neighbors = index.most_similar(vector, 1)
word, similarity = approx_neighbors[0]
self.assertEqual(word, label)
self.assertAlmostEqual(similarity, 1.0, places=2)
def assertApproxNeighborsMatchExact(self, model, wv, index):
vector = wv.get_normed_vectors()[0]
approx_neighbors = model.most_similar([vector], topn=5, indexer=index)
exact_neighbors = model.most_similar(positive=[vector], topn=5)
approx_words = [neighbor[0] for neighbor in approx_neighbors]
exact_words = [neighbor[0] for neighbor in exact_neighbors]
self.assertEqual(approx_words, exact_words)
def assertAllSimilaritiesDisableIndexer(self, model, wv, index):
vector = wv.get_normed_vectors()[0]
approx_similarities = model.most_similar([vector], topn=None, indexer=index)
exact_similarities = model.most_similar(positive=[vector], topn=None)
self.assertEqual(approx_similarities, exact_similarities)
self.assertEqual(len(approx_similarities), len(wv.vectors))
def assertIndexSaved(self, index):
fname = get_tmpfile('gensim_similarities.tst.pkl')
index.save(fname)
self.assertTrue(os.path.exists(fname))
self.assertTrue(os.path.exists(fname + '.d'))
def assertLoadedIndexEqual(self, index, model):
from gensim.similarities.annoy import AnnoyIndexer
fname = get_tmpfile('gensim_similarities.tst.pkl')
index.save(fname)
index2 = AnnoyIndexer()
index2.load(fname)
index2.model = model
self.assertEqual(index.index.f, index2.index.f)
self.assertEqual(index.labels, index2.labels)
self.assertEqual(index.num_trees, index2.num_trees)
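# Illustrative sketch (not part of the original test suite): the AnnoyIndexer workflow
# exercised above -- build an approximate-nearest-neighbour index over a model's vectors
# and pass it to most_similar() through the `indexer` argument. Requires the optional
# `annoy` dependency; the helper name is our own.
def _example_annoy_query(num_trees=10):
    """Train a tiny Word2Vec model and query it through an Annoy index."""
    from gensim.similarities.annoy import AnnoyIndexer
    model = word2vec.Word2Vec(TEXTS, min_count=1)
    indexer = AnnoyIndexer(model, num_trees)
    return model.wv.most_similar(positive=[model.wv.index_to_key[0]], topn=5, indexer=indexer)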
class TestDoc2VecAnnoyIndexer(unittest.TestCase):
def setUp(self):
try:
import annoy # noqa:F401
except ImportError as e:
raise unittest.SkipTest("Annoy library is not available: %s" % e)
from gensim.similarities.annoy import AnnoyIndexer
self.model = doc2vec.Doc2Vec(SENTENCES, min_count=1)
self.index = AnnoyIndexer(self.model, 300)
self.vector = self.model.dv.get_normed_vectors()[0]
def test_document_is_similar_to_itself(self):
approx_neighbors = self.index.most_similar(self.vector, 1)
doc, similarity = approx_neighbors[0]
self.assertEqual(doc, 0)
self.assertAlmostEqual(similarity, 1.0, places=2)
def test_approx_neighbors_match_exact(self):
approx_neighbors = self.model.dv.most_similar([self.vector], topn=5, indexer=self.index)
exact_neighbors = self.model.dv.most_similar([self.vector], topn=5)
approx_words = [neighbor[0] for neighbor in approx_neighbors]
exact_words = [neighbor[0] for neighbor in exact_neighbors]
self.assertEqual(approx_words, exact_words)
def test_save(self):
fname = get_tmpfile('gensim_similarities.tst.pkl')
self.index.save(fname)
self.assertTrue(os.path.exists(fname))
self.assertTrue(os.path.exists(fname + '.d'))
def test_load_not_exist(self):
from gensim.similarities.annoy import AnnoyIndexer
self.test_index = AnnoyIndexer()
self.assertRaises(IOError, self.test_index.load, fname='test-index')
def test_save_load(self):
from gensim.similarities.annoy import AnnoyIndexer
fname = get_tmpfile('gensim_similarities.tst.pkl')
self.index.save(fname)
self.index2 = AnnoyIndexer()
self.index2.load(fname)
self.index2.model = self.model
self.assertEqual(self.index.index.f, self.index2.index.f)
self.assertEqual(self.index.labels, self.index2.labels)
self.assertEqual(self.index.num_trees, self.index2.num_trees)
class TestWord2VecNmslibIndexer(unittest.TestCase):
def setUp(self):
try:
import nmslib # noqa:F401
except ImportError as e:
raise unittest.SkipTest("NMSLIB library is not available: %s" % e)
from gensim.similarities.nmslib import NmslibIndexer
self.indexer = NmslibIndexer
def test_word2vec(self):
model = word2vec.Word2Vec(TEXTS, min_count=1)
index = self.indexer(model)
self.assertVectorIsSimilarToItself(model.wv, index)
self.assertApproxNeighborsMatchExact(model.wv, model.wv, index)
self.assertIndexSaved(index)
self.assertLoadedIndexEqual(index, model)
def test_fasttext(self):
class LeeReader:
def __init__(self, fn):
self.fn = fn
def __iter__(self):
with utils.open(self.fn, 'r', encoding="latin_1") as infile:
for line in infile:
yield line.lower().strip().split()
model = FastText(LeeReader(datapath('lee.cor')), bucket=5000)
index = self.indexer(model)
self.assertVectorIsSimilarToItself(model.wv, index)
self.assertApproxNeighborsMatchExact(model.wv, model.wv, index)
self.assertIndexSaved(index)
self.assertLoadedIndexEqual(index, model)
def test_indexing_keyedvectors(self):
from gensim.similarities.nmslib import NmslibIndexer
keyVectors_file = datapath('lee_fasttext.vec')
model = KeyedVectors.load_word2vec_format(keyVectors_file)
index = NmslibIndexer(model)
self.assertVectorIsSimilarToItself(model, index)
self.assertApproxNeighborsMatchExact(model, model, index)
def test_load_missing_raises_error(self):
from gensim.similarities.nmslib import NmslibIndexer
self.assertRaises(IOError, NmslibIndexer.load, fname='test-index')
def assertVectorIsSimilarToItself(self, wv, index):
vector = wv.get_normed_vectors()[0]
label = wv.index_to_key[0]
approx_neighbors = index.most_similar(vector, 1)
word, similarity = approx_neighbors[0]
self.assertEqual(word, label)
self.assertAlmostEqual(similarity, 1.0, places=2)
def assertApproxNeighborsMatchExact(self, model, wv, index):
vector = wv.get_normed_vectors()[0]
approx_neighbors = model.most_similar([vector], topn=5, indexer=index)
exact_neighbors = model.most_similar([vector], topn=5)
approx_words = [word_id for word_id, similarity in approx_neighbors]
exact_words = [word_id for word_id, similarity in exact_neighbors]
self.assertEqual(approx_words, exact_words)
def assertIndexSaved(self, index):
fname = get_tmpfile('gensim_similarities.tst.pkl')
index.save(fname)
self.assertTrue(os.path.exists(fname))
self.assertTrue(os.path.exists(fname + '.d'))
def assertLoadedIndexEqual(self, index, model):
from gensim.similarities.nmslib import NmslibIndexer
fname = get_tmpfile('gensim_similarities.tst.pkl')
index.save(fname)
index2 = NmslibIndexer.load(fname)
index2.model = model
self.assertEqual(index.labels, index2.labels)
self.assertEqual(index.index_params, index2.index_params)
self.assertEqual(index.query_time_params, index2.query_time_params)
class TestDoc2VecNmslibIndexer(unittest.TestCase):
def setUp(self):
try:
import nmslib # noqa:F401
except ImportError as e:
raise unittest.SkipTest("NMSLIB library is not available: %s" % e)
from gensim.similarities.nmslib import NmslibIndexer
self.model = doc2vec.Doc2Vec(SENTENCES, min_count=1)
self.index = NmslibIndexer(self.model)
self.vector = self.model.dv.get_normed_vectors()[0]
def test_document_is_similar_to_itself(self):
approx_neighbors = self.index.most_similar(self.vector, 1)
doc, similarity = approx_neighbors[0]
self.assertEqual(doc, 0)
self.assertAlmostEqual(similarity, 1.0, places=2)
def test_approx_neighbors_match_exact(self):
approx_neighbors = self.model.dv.most_similar([self.vector], topn=5, indexer=self.index)
exact_neighbors = self.model.dv.most_similar([self.vector], topn=5)
approx_tags = [tag for tag, similarity in approx_neighbors]
exact_tags = [tag for tag, similarity in exact_neighbors]
self.assertEqual(approx_tags, exact_tags)
def test_save(self):
fname = get_tmpfile('gensim_similarities.tst.pkl')
self.index.save(fname)
self.assertTrue(os.path.exists(fname))
self.assertTrue(os.path.exists(fname + '.d'))
def test_load_not_exist(self):
from gensim.similarities.nmslib import NmslibIndexer
self.assertRaises(IOError, NmslibIndexer.load, fname='test-index')
def test_save_load(self):
from gensim.similarities.nmslib import NmslibIndexer
fname = get_tmpfile('gensim_similarities.tst.pkl')
self.index.save(fname)
self.index2 = NmslibIndexer.load(fname)
self.index2.model = self.model
self.assertEqual(self.index.labels, self.index2.labels)
self.assertEqual(self.index.index_params, self.index2.index_params)
self.assertEqual(self.index.query_time_params, self.index2.query_time_params)
class TestUniformTermSimilarityIndex(unittest.TestCase):
def setUp(self):
self.documents = [[u"government", u"denied", u"holiday"], [u"holiday", u"slowing", u"hollingworth"]]
self.dictionary = Dictionary(self.documents)
def test_most_similar(self):
"""Test most_similar returns expected results."""
# check that the topn works as expected
index = UniformTermSimilarityIndex(self.dictionary)
results = list(index.most_similar(u"holiday", topn=1))
self.assertLess(0, len(results))
self.assertGreaterEqual(1, len(results))
results = list(index.most_similar(u"holiday", topn=4))
self.assertLess(1, len(results))
self.assertGreaterEqual(4, len(results))
# check that the term itself is not returned
index = UniformTermSimilarityIndex(self.dictionary)
terms = [term for term, similarity in index.most_similar(u"holiday", topn=len(self.dictionary))]
self.assertFalse(u"holiday" in terms)
# check that the term_similarity works as expected
index = UniformTermSimilarityIndex(self.dictionary, term_similarity=0.2)
similarities = numpy.array([
similarity for term, similarity in index.most_similar(u"holiday", topn=len(self.dictionary))])
self.assertTrue(numpy.all(similarities == 0.2))
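# Illustrative sketch (not part of the original test suite): UniformTermSimilarityIndex
# assigns the same similarity to every pair of distinct terms, which makes it a convenient
# building block for the small, predictable SparseTermSimilarityMatrix instances used below.
# The helper name is our own.
def _example_uniform_similarity_matrix(dictionary, term_similarity=0.5, nonzero_limit=1):
    """Build a SparseTermSimilarityMatrix from a uniform term-similarity index."""
    index = UniformTermSimilarityIndex(dictionary, term_similarity=term_similarity)
    return SparseTermSimilarityMatrix(index, dictionary, nonzero_limit=nonzero_limit)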
class TestSparseTermSimilarityMatrix(unittest.TestCase):
def setUp(self):
self.documents = [
[u"government", u"denied", u"holiday"],
[u"government", u"denied", u"holiday", u"slowing", u"hollingworth"]]
self.dictionary = Dictionary(self.documents)
self.tfidf = TfidfModel(dictionary=self.dictionary)
zero_index = UniformTermSimilarityIndex(self.dictionary, term_similarity=0.0)
self.index = UniformTermSimilarityIndex(self.dictionary, term_similarity=0.5)
self.identity_matrix = SparseTermSimilarityMatrix(zero_index, self.dictionary)
self.uniform_matrix = SparseTermSimilarityMatrix(self.index, self.dictionary)
self.vec1 = self.dictionary.doc2bow([u"government", u"government", u"denied"])
self.vec2 = self.dictionary.doc2bow([u"government", u"holiday"])
def test_empty_dictionary(self):
with self.assertRaises(ValueError):
SparseTermSimilarityMatrix(self.index, [])
def test_type(self):
"""Test the type of the produced matrix."""
matrix = SparseTermSimilarityMatrix(self.index, self.dictionary).matrix
self.assertTrue(isinstance(matrix, scipy.sparse.csc_matrix))
def test_diagonal(self):
"""Test the existence of ones on the main diagonal."""
matrix = SparseTermSimilarityMatrix(self.index, self.dictionary).matrix.todense()
self.assertTrue(numpy.all(numpy.diag(matrix) == numpy.ones(matrix.shape[0])))
def test_order(self):
"""Test the matrix order."""
matrix = SparseTermSimilarityMatrix(self.index, self.dictionary).matrix.todense()
self.assertEqual(matrix.shape[0], len(self.dictionary))
self.assertEqual(matrix.shape[1], len(self.dictionary))
def test_dtype(self):
"""Test the dtype parameter of the matrix constructor."""
matrix = SparseTermSimilarityMatrix(self.index, self.dictionary, dtype=numpy.float32).matrix.todense()
self.assertEqual(numpy.float32, matrix.dtype)
matrix = SparseTermSimilarityMatrix(self.index, self.dictionary, dtype=numpy.float64).matrix.todense()
self.assertEqual(numpy.float64, matrix.dtype)
def test_nonzero_limit(self):
"""Test the nonzero_limit parameter of the matrix constructor."""
matrix = SparseTermSimilarityMatrix(self.index, self.dictionary, nonzero_limit=100).matrix.todense()
self.assertGreaterEqual(101, numpy.max(numpy.sum(matrix != 0, axis=0)))
matrix = SparseTermSimilarityMatrix(self.index, self.dictionary, nonzero_limit=4).matrix.todense()
self.assertGreaterEqual(5, numpy.max(numpy.sum(matrix != 0, axis=0)))
matrix = SparseTermSimilarityMatrix(self.index, self.dictionary, nonzero_limit=1).matrix.todense()
self.assertGreaterEqual(2, numpy.max(numpy.sum(matrix != 0, axis=0)))
matrix = SparseTermSimilarityMatrix(self.index, self.dictionary, nonzero_limit=0).matrix.todense()
self.assertEqual(1, numpy.max(numpy.sum(matrix != 0, axis=0)))
self.assertTrue(numpy.all(matrix == numpy.eye(matrix.shape[0])))
def test_symmetric(self):
"""Test the symmetric parameter of the matrix constructor."""
matrix = SparseTermSimilarityMatrix(self.index, self.dictionary).matrix.todense()
self.assertTrue(numpy.all(matrix == matrix.T))
matrix = SparseTermSimilarityMatrix(
self.index, self.dictionary, nonzero_limit=1).matrix.todense()
expected_matrix = numpy.array([
[1.0, 0.5, 0.0, 0.0, 0.0],
[0.5, 1.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0]])
self.assertTrue(numpy.all(expected_matrix == matrix))
matrix = SparseTermSimilarityMatrix(
self.index, self.dictionary, nonzero_limit=1, symmetric=False).matrix.todense()
expected_matrix = numpy.array([
[1.0, 0.5, 0.5, 0.5, 0.5],
[0.5, 1.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0]])
self.assertTrue(numpy.all(expected_matrix == matrix))
def test_dominant(self):
"""Test the dominant parameter of the matrix constructor."""
negative_index = UniformTermSimilarityIndex(self.dictionary, term_similarity=-0.5)
matrix = SparseTermSimilarityMatrix(
negative_index, self.dictionary, nonzero_limit=2).matrix.todense()
expected_matrix = numpy.array([
[1.0, -.5, -.5, 0.0, 0.0],
[-.5, 1.0, 0.0, -.5, 0.0],
[-.5, 0.0, 1.0, 0.0, 0.0],
[0.0, -.5, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0]])
self.assertTrue(numpy.all(expected_matrix == matrix))
matrix = SparseTermSimilarityMatrix(
negative_index, self.dictionary, nonzero_limit=2, dominant=True).matrix.todense()
expected_matrix = numpy.array([
[1.0, -.5, 0.0, 0.0, 0.0],
[-.5, 1.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0]])
self.assertTrue(numpy.all(expected_matrix == matrix))
def test_tfidf(self):
"""Test the tfidf parameter of the matrix constructor."""
matrix = SparseTermSimilarityMatrix(
self.index, self.dictionary, nonzero_limit=1).matrix.todense()
expected_matrix = numpy.array([
[1.0, 0.5, 0.0, 0.0, 0.0],
[0.5, 1.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0]])
self.assertTrue(numpy.all(expected_matrix == matrix))
matrix = SparseTermSimilarityMatrix(
self.index, self.dictionary, nonzero_limit=1, tfidf=self.tfidf).matrix.todense()
expected_matrix = numpy.array([
[1.0, 0.0, 0.0, 0.5, 0.0],
[0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0, 0.0],
[0.5, 0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0]])
self.assertTrue(numpy.all(expected_matrix == matrix))
def test_encapsulation(self):
"""Test the matrix encapsulation."""
# check that a sparse matrix will be converted to a CSC format
expected_matrix = numpy.array([
[1.0, 2.0, 3.0],
[0.0, 1.0, 4.0],
[0.0, 0.0, 1.0]])
matrix = SparseTermSimilarityMatrix(scipy.sparse.csc_matrix(expected_matrix)).matrix
self.assertTrue(isinstance(matrix, scipy.sparse.csc_matrix))
self.assertTrue(numpy.all(matrix.todense() == expected_matrix))
matrix = SparseTermSimilarityMatrix(scipy.sparse.csr_matrix(expected_matrix)).matrix
self.assertTrue(isinstance(matrix, scipy.sparse.csc_matrix))
self.assertTrue(numpy.all(matrix.todense() == expected_matrix))
def test_inner_product_zerovector_zerovector_default(self):
"""Test the inner product between two zero vectors with the default normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], []))
def test_inner_product_zerovector_zerovector_false_maintain(self):
"""Test the inner product between two zero vectors with the (False, 'maintain') normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], [], normalized=(False, 'maintain')))
def test_inner_product_zerovector_zerovector_false_true(self):
"""Test the inner product between two zero vectors with the (False, True) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], [], normalized=(False, True)))
def test_inner_product_zerovector_zerovector_maintain_false(self):
"""Test the inner product between two zero vectors with the ('maintain', False) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], [], normalized=('maintain', False)))
def test_inner_product_zerovector_zerovector_maintain_maintain(self):
"""Test the inner product between two zero vectors with the ('maintain', 'maintain') normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], [], normalized=('maintain', 'maintain')))
def test_inner_product_zerovector_zerovector_maintain_true(self):
"""Test the inner product between two zero vectors with the ('maintain', True) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], [], normalized=('maintain', True)))
def test_inner_product_zerovector_zerovector_true_false(self):
"""Test the inner product between two zero vectors with the (True, False) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], [], normalized=(True, False)))
def test_inner_product_zerovector_zerovector_true_maintain(self):
"""Test the inner product between two zero vectors with the (True, 'maintain') normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], [], normalized=(True, 'maintain')))
def test_inner_product_zerovector_zerovector_true_true(self):
"""Test the inner product between two zero vectors with the (True, True) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], [], normalized=(True, True)))
def test_inner_product_zerovector_vector_default(self):
"""Test the inner product between a zero vector and a vector with the default normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], self.vec2))
def test_inner_product_zerovector_vector_false_maintain(self):
"""Test the inner product between a zero vector and a vector with the (False, 'maintain') normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], self.vec2, normalized=(False, 'maintain')))
def test_inner_product_zerovector_vector_false_true(self):
"""Test the inner product between a zero vector and a vector with the (False, True) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], self.vec2, normalized=(False, True)))
def test_inner_product_zerovector_vector_maintain_false(self):
"""Test the inner product between a zero vector and a vector with the ('maintain', False) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], self.vec2, normalized=('maintain', False)))
def test_inner_product_zerovector_vector_maintain_maintain(self):
"""Test the inner product between a zero vector and a vector with the ('maintain', 'maintain') normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], self.vec2, normalized=('maintain', 'maintain')))
def test_inner_product_zerovector_vector_maintain_true(self):
"""Test the inner product between a zero vector and a vector with the ('maintain', True) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], self.vec2, normalized=('maintain', True)))
def test_inner_product_zerovector_vector_true_false(self):
"""Test the inner product between a zero vector and a vector with the (True, False) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], self.vec2, normalized=(True, False)))
def test_inner_product_zerovector_vector_true_maintain(self):
"""Test the inner product between a zero vector and a vector with the (True, 'maintain') normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], self.vec2, normalized=(True, 'maintain')))
def test_inner_product_zerovector_vector_true_true(self):
"""Test the inner product between a zero vector and a vector with the (True, True) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product([], self.vec2, normalized=(True, True)))
def test_inner_product_vector_zerovector_default(self):
"""Test the inner product between a vector and a zero vector with the default normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product(self.vec1, []))
def test_inner_product_vector_zerovector_false_maintain(self):
"""Test the inner product between a vector and a zero vector with the (False, 'maintain') normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product(self.vec1, [], normalized=(False, 'maintain')))
def test_inner_product_vector_zerovector_false_true(self):
"""Test the inner product between a vector and a zero vector with the (False, True) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product(self.vec1, [], normalized=(False, True)))
def test_inner_product_vector_zerovector_maintain_false(self):
"""Test the inner product between a vector and a zero vector with the ('maintain', False) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product(self.vec1, [], normalized=('maintain', False)))
def test_inner_product_vector_zerovector_maintain_maintain(self):
"""Test the inner product between a vector and a zero vector with the ('maintain', 'maintain') normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product(self.vec1, [], normalized=('maintain', 'maintain')))
def test_inner_product_vector_zerovector_maintain_true(self):
"""Test the inner product between a vector and a zero vector with the ('maintain', True) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product(self.vec1, [], normalized=('maintain', True)))
def test_inner_product_vector_zerovector_true_false(self):
"""Test the inner product between a vector and a zero vector with the (True, False) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product(self.vec1, [], normalized=(True, False)))
def test_inner_product_vector_zerovector_true_maintain(self):
"""Test the inner product between a vector and a zero vector with the (True, 'maintain') normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product(self.vec1, [], normalized=(True, 'maintain')))
def test_inner_product_vector_zerovector_true_true(self):
"""Test the inner product between a vector and a zero vector with the (True, True) normalization."""
self.assertEqual(0.0, self.uniform_matrix.inner_product(self.vec1, [], normalized=(True, True)))
def test_inner_product_vector_vector_default(self):
"""Test the inner product between two vectors with the default normalization."""
expected_result = 0.0
expected_result += 2 * 1.0 * 1 # government * s_{ij} * government
expected_result += 2 * 0.5 * 1 # government * s_{ij} * holiday
expected_result += 1 * 0.5 * 1 # denied * s_{ij} * government
expected_result += 1 * 0.5 * 1 # denied * s_{ij} * holiday
result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
self.assertAlmostEqual(expected_result, result, places=5)
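    # Illustrative cross-check (not part of the original tests): the unnormalized inner
    # product spelled out term by term above is just <x, y>_S = sum_ij x_i * S_ij * y_j.
    # The helper below is our own dense mirror of that formula for sanity-checking.
    def _dense_inner_product(self, bow1, bow2):
        """Compute <x, y>_S densely from two bag-of-words vectors under the uniform matrix."""
        size = len(self.dictionary)
        s = numpy.asarray(self.uniform_matrix.matrix.todense())
        x = matutils.sparse2full(bow1, size)
        y = matutils.sparse2full(bow2, size)
        return float(x @ s @ y)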
def test_inner_product_vector_vector_false_maintain(self):
"""Test the inner product between two vectors with the (False, 'maintain') normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
result = self.uniform_matrix.inner_product(self.vec1, self.vec2, normalized=(False, 'maintain'))
self.assertAlmostEqual(expected_result, result, places=5)
def test_inner_product_vector_vector_false_true(self):
"""Test the inner product between two vectors with the (False, True) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
result = self.uniform_matrix.inner_product(self.vec1, self.vec2, normalized=(False, True))
self.assertAlmostEqual(expected_result, result, places=5)
def test_inner_product_vector_vector_maintain_false(self):
"""Test the inner product between two vectors with the ('maintain', False) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
result = self.uniform_matrix.inner_product(self.vec1, self.vec2, normalized=('maintain', False))
self.assertAlmostEqual(expected_result, result, places=5)
def test_inner_product_vector_vector_maintain_maintain(self):
"""Test the inner product between two vectors with the ('maintain', 'maintain') normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
result = self.uniform_matrix.inner_product(self.vec1, self.vec2, normalized=('maintain', 'maintain'))
self.assertAlmostEqual(expected_result, result, places=5)
def test_inner_product_vector_vector_maintain_true(self):
"""Test the inner product between two vectors with the ('maintain', True) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
result = self.uniform_matrix.inner_product(self.vec1, self.vec2, normalized=('maintain', True))
self.assertAlmostEqual(expected_result, result, places=5)
def test_inner_product_vector_vector_true_false(self):
"""Test the inner product between two vectors with the (True, False) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
result = self.uniform_matrix.inner_product(self.vec1, self.vec2, normalized=(True, False))
self.assertAlmostEqual(expected_result, result, places=5)
def test_inner_product_vector_vector_true_maintain(self):
"""Test the inner product between two vectors with the (True, 'maintain') normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
result = self.uniform_matrix.inner_product(self.vec1, self.vec2, normalized=(True, 'maintain'))
self.assertAlmostEqual(expected_result, result, places=5)
def test_inner_product_vector_vector_true_true(self):
"""Test the inner product between two vectors with the (True, True) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
result = self.uniform_matrix.inner_product(self.vec1, self.vec2, normalized=(True, True))
self.assertAlmostEqual(expected_result, result, places=5)
def test_inner_product_vector_corpus_default(self):
"""Test the inner product between a vector and a corpus with the default normalization."""
expected_result = 0.0
expected_result += 2 * 1.0 * 1 # government * s_{ij} * government
expected_result += 2 * 0.5 * 1 # government * s_{ij} * holiday
expected_result += 1 * 0.5 * 1 # denied * s_{ij} * government
expected_result += 1 * 0.5 * 1 # denied * s_{ij} * holiday
expected_result = numpy.full((1, 2), expected_result)
result = self.uniform_matrix.inner_product(self.vec1, [self.vec2] * 2)
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_vector_corpus_false_maintain(self):
"""Test the inner product between a vector and a corpus with the (False, 'maintain') normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((1, 2), expected_result)
result = self.uniform_matrix.inner_product(self.vec1, [self.vec2] * 2, normalized=(False, 'maintain'))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_vector_corpus_false_true(self):
"""Test the inner product between a vector and a corpus with the (False, True) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((1, 2), expected_result)
result = self.uniform_matrix.inner_product(self.vec1, [self.vec2] * 2, normalized=(False, True))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_vector_corpus_maintain_false(self):
"""Test the inner product between a vector and a corpus with the ('maintain', False) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
expected_result = numpy.full((1, 2), expected_result)
result = self.uniform_matrix.inner_product(self.vec1, [self.vec2] * 2, normalized=('maintain', False))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_vector_corpus_maintain_maintain(self):
"""Test the inner product between a vector and a corpus with the ('maintain', 'maintain') normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((1, 2), expected_result)
result = self.uniform_matrix.inner_product(self.vec1, [self.vec2] * 2, normalized=('maintain', 'maintain'))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_vector_corpus_maintain_true(self):
"""Test the inner product between a vector and a corpus with the ('maintain', True) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((1, 2), expected_result)
result = self.uniform_matrix.inner_product(self.vec1, [self.vec2] * 2, normalized=('maintain', True))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_vector_corpus_true_false(self):
"""Test the inner product between a vector and a corpus with the (True, False) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result = numpy.full((1, 2), expected_result)
result = self.uniform_matrix.inner_product(self.vec1, [self.vec2] * 2, normalized=(True, False))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_vector_corpus_true_maintain(self):
"""Test the inner product between a vector and a corpus with the (True, 'maintain') normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((1, 2), expected_result)
result = self.uniform_matrix.inner_product(self.vec1, [self.vec2] * 2, normalized=(True, 'maintain'))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_vector_corpus_true_true(self):
"""Test the inner product between a vector and a corpus with the (True, True) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((1, 2), expected_result)
result = self.uniform_matrix.inner_product(self.vec1, [self.vec2] * 2, normalized=(True, True))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_corpus_vector_default(self):
"""Test the inner product between a corpus and a vector with the default normalization."""
expected_result = 0.0
expected_result += 2 * 1.0 * 1 # government * s_{ij} * government
expected_result += 2 * 0.5 * 1 # government * s_{ij} * holiday
expected_result += 1 * 0.5 * 1 # denied * s_{ij} * government
expected_result += 1 * 0.5 * 1 # denied * s_{ij} * holiday
expected_result = numpy.full((3, 1), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, self.vec2)
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_corpus_vector_false_maintain(self):
"""Test the inner product between a corpus and a vector with the (False, 'maintain') normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 1), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, self.vec2, normalized=(False, 'maintain'))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_corpus_vector_false_true(self):
"""Test the inner product between a corpus and a vector with the (False, True) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 1), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, self.vec2, normalized=(False, True))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_corpus_vector_maintain_false(self):
"""Test the inner product between a corpus and a vector with the ('maintain', False) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
expected_result = numpy.full((3, 1), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, self.vec2, normalized=('maintain', False))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_corpus_vector_maintain_maintain(self):
"""Test the inner product between a corpus and a vector with the ('maintain', 'maintain') normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 1), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, self.vec2, normalized=('maintain', 'maintain'))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_corpus_vector_maintain_true(self):
"""Test the inner product between a corpus and a vector with the ('maintain', True) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 1), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, self.vec2, normalized=('maintain', True))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_corpus_vector_true_false(self):
"""Test the inner product between a corpus and a vector with the (True, False) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result = numpy.full((3, 1), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, self.vec2, normalized=(True, False))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_corpus_vector_true_maintain(self):
"""Test the inner product between a corpus and a vector with the (True, 'maintain') normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 1), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, self.vec2, normalized=(True, 'maintain'))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_corpus_vector_true_true(self):
"""Test the inner product between a corpus and a vector with the (True, True) normalization."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 1), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, self.vec2, normalized=(True, True))
self.assertTrue(isinstance(result, numpy.ndarray))
self.assertTrue(numpy.allclose(expected_result, result))
def test_inner_product_corpus_corpus_default(self):
"""Test the inner product between two corpora with the default normalization."""
expected_result = 0.0
expected_result += 2 * 1.0 * 1 # government * s_{ij} * government
expected_result += 2 * 0.5 * 1 # government * s_{ij} * holiday
expected_result += 1 * 0.5 * 1 # denied * s_{ij} * government
expected_result += 1 * 0.5 * 1 # denied * s_{ij} * holiday
expected_result = numpy.full((3, 2), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, [self.vec2] * 2)
self.assertTrue(isinstance(result, scipy.sparse.csr_matrix))
self.assertTrue(numpy.allclose(expected_result, result.todense()))
def test_inner_product_corpus_corpus_false_maintain(self):
"""Test the inner product between two corpora with the (False, 'maintain')."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 2), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, [self.vec2] * 2, normalized=(False, 'maintain'))
self.assertTrue(isinstance(result, scipy.sparse.csr_matrix))
self.assertTrue(numpy.allclose(expected_result, result.todense()))
def test_inner_product_corpus_corpus_false_true(self):
"""Test the inner product between two corpora with the (False, True)."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 2), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, [self.vec2] * 2, normalized=(False, True))
self.assertTrue(isinstance(result, scipy.sparse.csr_matrix))
self.assertTrue(numpy.allclose(expected_result, result.todense()))
def test_inner_product_corpus_corpus_maintain_false(self):
"""Test the inner product between two corpora with the ('maintain', False)."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
expected_result = numpy.full((3, 2), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, [self.vec2] * 2, normalized=('maintain', False))
self.assertTrue(isinstance(result, scipy.sparse.csr_matrix))
self.assertTrue(numpy.allclose(expected_result, result.todense()))
def test_inner_product_corpus_corpus_maintain_maintain(self):
"""Test the inner product between two corpora with the ('maintain', 'maintain')."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 2), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, [self.vec2] * 2,
normalized=('maintain', 'maintain'))
self.assertTrue(isinstance(result, scipy.sparse.csr_matrix))
self.assertTrue(numpy.allclose(expected_result, result.todense()))
def test_inner_product_corpus_corpus_maintain_true(self):
"""Test the inner product between two corpora with the ('maintain', True)."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 2), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, [self.vec2] * 2, normalized=('maintain', True))
self.assertTrue(isinstance(result, scipy.sparse.csr_matrix))
self.assertTrue(numpy.allclose(expected_result, result.todense()))
def test_inner_product_corpus_corpus_true_false(self):
"""Test the inner product between two corpora with the (True, False)."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result = numpy.full((3, 2), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, [self.vec2] * 2, normalized=(True, False))
self.assertTrue(isinstance(result, scipy.sparse.csr_matrix))
self.assertTrue(numpy.allclose(expected_result, result.todense()))
def test_inner_product_corpus_corpus_true_maintain(self):
"""Test the inner product between two corpora with the (True, 'maintain')."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result *= math.sqrt(self.identity_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 2), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, [self.vec2] * 2, normalized=(True, 'maintain'))
self.assertTrue(isinstance(result, scipy.sparse.csr_matrix))
self.assertTrue(numpy.allclose(expected_result, result.todense()))
def test_inner_product_corpus_corpus_true_true(self):
"""Test the inner product between two corpora with the (True, True)."""
expected_result = self.uniform_matrix.inner_product(self.vec1, self.vec2)
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec1, self.vec1))
expected_result /= math.sqrt(self.uniform_matrix.inner_product(self.vec2, self.vec2))
expected_result = numpy.full((3, 2), expected_result)
result = self.uniform_matrix.inner_product([self.vec1] * 3, [self.vec2] * 2, normalized=(True, True))
self.assertTrue(isinstance(result, scipy.sparse.csr_matrix))
self.assertTrue(numpy.allclose(expected_result, result.todense()))
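# --- illustrative sketch, not part of the original test suite ---------------
# The inner_product tests above exercise the `normalized=(row, column)` switch:
# False leaves that side of the product untouched, True divides it by its norm
# under the term-similarity matrix (giving the soft cosine measure when both
# sides are True), and 'maintain' normalizes under the term-similarity matrix
# but rescales so the side keeps its plain dot-product norm, as the
# identity_matrix factors in the expected values above show. A minimal usage
# sketch; the helper name is hypothetical.
def _example_soft_cosine_similarity(bow_x, bow_y, dictionary):
    """Return the soft cosine similarity of two bag-of-words vectors."""
    from gensim.similarities import SparseTermSimilarityMatrix, UniformTermSimilarityIndex
    index = UniformTermSimilarityIndex(dictionary, term_similarity=0.5)
    matrix = SparseTermSimilarityMatrix(index, dictionary)
    # normalized=(True, True) normalizes both sides, i.e. the soft cosine measure.
    return matrix.inner_product(bow_x, bow_y, normalized=(True, True))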
class TestLevenshteinSimilarityIndex(unittest.TestCase):
def setUp(self):
self.documents = [[u"government", u"denied", u"holiday"], [u"holiday", u"slowing", u"hollingworth"]]
self.dictionary = Dictionary(self.documents)
max_distance = max(len(term) for term in self.dictionary.values())
self.index = LevenshteinSimilarityIndex(self.dictionary, max_distance=max_distance)
def test_most_similar_topn(self):
"""Test most_similar returns expected results."""
results = list(self.index.most_similar(u"holiday", topn=0))
self.assertEqual(0, len(results))
results = list(self.index.most_similar(u"holiday", topn=1))
self.assertEqual(1, len(results))
results = list(self.index.most_similar(u"holiday", topn=4))
self.assertEqual(4, len(results))
results = list(self.index.most_similar(u"holiday", topn=len(self.dictionary)))
self.assertEqual(len(self.dictionary) - 1, len(results))
self.assertNotIn(u"holiday", results)
def test_most_similar_result_order(self):
results = self.index.most_similar(u"holiday", topn=4)
terms, _ = zip(*results)
expected_terms = (u"hollingworth", u"denied", u"slowing", u"government")
self.assertEqual(expected_terms, terms)
def test_most_similar_alpha(self):
index = LevenshteinSimilarityIndex(self.dictionary, alpha=1.0)
first_similarities = numpy.array([similarity for term, similarity in index.most_similar(u"holiday", topn=10)])
index = LevenshteinSimilarityIndex(self.dictionary, alpha=2.0)
second_similarities = numpy.array([similarity for term, similarity in index.most_similar(u"holiday", topn=10)])
self.assertTrue(numpy.allclose(2.0 * first_similarities, second_similarities))
def test_most_similar_beta(self):
index = LevenshteinSimilarityIndex(self.dictionary, alpha=1.0, beta=1.0)
first_similarities = numpy.array([similarity for term, similarity in index.most_similar(u"holiday", topn=10)])
index = LevenshteinSimilarityIndex(self.dictionary, alpha=1.0, beta=2.0)
second_similarities = numpy.array([similarity for term, similarity in index.most_similar(u"holiday", topn=10)])
self.assertTrue(numpy.allclose(first_similarities ** 2.0, second_similarities))
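# --- illustrative sketch, not part of the original test suite ---------------
# As the alpha/beta tests above show, LevenshteinSimilarityIndex scales the
# Levenshtein-based similarity linearly by alpha and raises it to the power beta:
# doubling alpha doubles the scores, and with alpha=1 doubling beta squares them.
# A minimal usage sketch; the helper name is hypothetical.
def _example_levenshtein_neighbours(dictionary, term, topn=3):
    """Return the `topn` in-dictionary terms closest to `term` by Levenshtein similarity."""
    from gensim.similarities import LevenshteinSimilarityIndex
    max_distance = max(len(t) for t in dictionary.values())
    index = LevenshteinSimilarityIndex(dictionary, max_distance=max_distance)
    return list(index.most_similar(term, topn=topn))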
class TestWordEmbeddingSimilarityIndex(unittest.TestCase):
def setUp(self):
self.vectors = KeyedVectors.load_word2vec_format(
datapath('euclidean_vectors.bin'), binary=True, datatype=numpy.float64)
def test_most_similar(self):
"""Test most_similar returns expected results."""
# check the handling of out-of-dictionary terms
index = WordEmbeddingSimilarityIndex(self.vectors)
self.assertLess(0, len(list(index.most_similar(u"holiday", topn=10))))
self.assertEqual(0, len(list(index.most_similar(u"out-of-dictionary term", topn=10))))
# check that the topn works as expected
index = WordEmbeddingSimilarityIndex(self.vectors)
results = list(index.most_similar(u"holiday", topn=10))
self.assertLess(0, len(results))
self.assertGreaterEqual(10, len(results))
results = list(index.most_similar(u"holiday", topn=20))
self.assertLess(10, len(results))
self.assertGreaterEqual(20, len(results))
# check that the term itself is not returned
index = WordEmbeddingSimilarityIndex(self.vectors)
terms = [term for term, similarity in index.most_similar(u"holiday", topn=len(self.vectors))]
self.assertFalse(u"holiday" in terms)
# check that the threshold works as expected
index = WordEmbeddingSimilarityIndex(self.vectors, threshold=0.0)
results = list(index.most_similar(u"holiday", topn=10))
self.assertLess(0, len(results))
self.assertGreaterEqual(10, len(results))
index = WordEmbeddingSimilarityIndex(self.vectors, threshold=1.0)
results = list(index.most_similar(u"holiday", topn=10))
self.assertEqual(0, len(results))
# check that the exponent works as expected
index = WordEmbeddingSimilarityIndex(self.vectors, exponent=1.0)
first_similarities = numpy.array([similarity for term, similarity in index.most_similar(u"holiday", topn=10)])
index = WordEmbeddingSimilarityIndex(self.vectors, exponent=2.0)
second_similarities = numpy.array([similarity for term, similarity in index.most_similar(u"holiday", topn=10)])
self.assertTrue(numpy.allclose(first_similarities**2.0, second_similarities))
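# --- illustrative sketch, not part of the original test suite ---------------
# WordEmbeddingSimilarityIndex wraps a KeyedVectors instance: most_similar()
# yields (term, cosine_similarity ** exponent) pairs, silently skips
# out-of-vocabulary query terms, and drops neighbours whose similarity falls
# below `threshold`, which is exactly what the checks above verify. A minimal
# sketch; the vectors path and helper name are hypothetical.
def _example_embedding_neighbours(word2vec_bin_path, term, topn=5):
    """Return up to `topn` embedding neighbours of `term` with similarity >= 0.5."""
    from gensim.models import KeyedVectors
    from gensim.similarities import WordEmbeddingSimilarityIndex
    vectors = KeyedVectors.load_word2vec_format(word2vec_bin_path, binary=True)
    index = WordEmbeddingSimilarityIndex(vectors, threshold=0.5, exponent=2.0)
    return list(index.most_similar(term, topn=topn))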
class TestFastSS(unittest.TestCase):
def test_editdist_same_unicode_kind_latin1(self):
"""Test editdist returns the expected result with two Latin-1 strings."""
expected = 2
actual = editdist('Zizka', 'siska')
assert expected == actual
def test_editdist_same_unicode_kind_ucs2(self):
"""Test editdist returns the expected result with two UCS-2 strings."""
expected = 2
actual = editdist('Žižka', 'šiška')
assert expected == actual
def test_editdist_same_unicode_kind_ucs4(self):
"""Test editdist returns the expected result with two UCS-4 strings."""
expected = 2
actual = editdist('Žižka 😀', 'šiška 😀')
assert expected == actual
def test_editdist_different_unicode_kinds(self):
"""Test editdist returns the expected result with strings of different Unicode kinds."""
expected = 2
actual = editdist('Žižka', 'siska')
assert expected == actual
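# --- illustrative sketch, not part of the original test suite ---------------
# editdist computes the plain Levenshtein distance and, as the tests above show,
# handles all of CPython's internal string representations (Latin-1, UCS-2, UCS-4)
# as well as mixed pairs. A minimal sketch, assuming the same `editdist` already
# imported by this module (gensim.similarities.fastss in recent gensim releases);
# the helper name is hypothetical.
def _example_closest_term(query, candidates):
    """Return the candidate string with the smallest edit distance to `query`."""
    return min(candidates, key=lambda term: editdist(query, term))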
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| size: 82,233 | language: Python | extension: .py | total_lines: 1,318 | avg_line_length: 53.108498 | max_line_length: 120 | alphanum_fraction: 0.671396 | repo: piskvorky/gensim | stars: 15,546 | forks: 4,374 | open_issues: 408 | license: LGPL-2.1 | extracted: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,056 | file_name: utils.py | file_path: piskvorky_gensim/gensim/test/utils.py |
#!/usr/bin/env python
# encoding: utf-8
"""Module contains common utilities used in automated code tests for Gensim modules.
Attributes:
module_path : str
Full path to this module directory.
common_texts : list of list of str
Toy dataset.
common_dictionary : :class:`~gensim.corpora.dictionary.Dictionary`
Dictionary of toy dataset.
common_corpus : list of list of (int, int)
Corpus of toy dataset.
Examples:
It's easy to keep objects in a temporary folder and reuse them if needed:
.. sourcecode:: pycon
>>> from gensim.models import word2vec
>>> from gensim.test.utils import get_tmpfile, common_texts
>>>
>>> model = word2vec.Word2Vec(common_texts, min_count=1)
>>> temp_path = get_tmpfile('toy_w2v')
>>> model.save(temp_path)
>>>
>>> new_model = word2vec.Word2Vec.load(temp_path)
>>> result = new_model.wv.most_similar("human", topn=1)
Let's print the first document in the toy dataset and then recreate it using its corpus and dictionary.
.. sourcecode:: pycon
>>> from gensim.test.utils import common_texts, common_dictionary, common_corpus
>>> print(common_texts[0])
['human', 'interface', 'computer']
>>> assert common_dictionary.doc2bow(common_texts[0]) == common_corpus[0]
We can find our toy dataset in the test data directory.
.. sourcecode:: pycon
>>> from gensim.test.utils import datapath
>>>
>>> with open(datapath("testcorpus.txt")) as f:
... texts = [line.strip().split() for line in f]
>>> print(texts[0])
['computer', 'human', 'interface']
If you don't need to keep temporary objects on disk use :func:`~gensim.test.utils.temporary_file`:
.. sourcecode:: pycon
>>> from gensim.test.utils import temporary_file, common_corpus, common_dictionary
>>> from gensim.models import LdaModel
>>>
>>> with temporary_file("temp.txt") as tf:
... lda = LdaModel(common_corpus, id2word=common_dictionary, num_topics=3)
... lda.save(tf)
"""
import contextlib
import tempfile
import os
import shutil
from gensim.corpora import Dictionary
from gensim.utils import simple_preprocess
module_path = os.path.dirname(__file__) # needed because sample data files are located in the same folder
def datapath(fname):
"""Get full path for file `fname` in test data directory placed in this module directory.
Usually used to place corpus to test_data directory.
Parameters
----------
fname : str
Name of file.
Returns
-------
str
Full path to `fname` in test_data folder.
Example
-------
Let's get the path of a test data file and check that it can be loaded.
.. sourcecode:: pycon
>>> from gensim.corpora import MmCorpus
>>> from gensim.test.utils import datapath
>>>
>>> corpus = MmCorpus(datapath("testcorpus.mm"))
>>> for document in corpus:
... pass
"""
return os.path.join(module_path, 'test_data', fname)
def get_tmpfile(suffix):
"""Get full path to file `suffix` in temporary folder.
This function doesn't create the file, it only generates a unique name.
Note that it may return a different path on each call.
Parameters
----------
suffix : str
Suffix of file.
Returns
-------
str
Path to `suffix` file in temporary folder.
Examples
--------
Using this function, we can get a path to a temporary file and use it, for example, to store a temporary model.
.. sourcecode:: pycon
>>> from gensim.models import LsiModel
>>> from gensim.test.utils import get_tmpfile, common_dictionary, common_corpus
>>>
>>> tmp_f = get_tmpfile("toy_lsi_model")
>>>
>>> model = LsiModel(common_corpus, id2word=common_dictionary)
>>> model.save(tmp_f)
>>>
>>> loaded_model = LsiModel.load(tmp_f)
"""
return os.path.join(tempfile.mkdtemp(), suffix)
@contextlib.contextmanager
def temporary_file(name=""):
"""This context manager creates file `name` in temporary directory and returns its full path.
Temporary directory with included files will deleted at the end of context. Note, it won't create file.
Parameters
----------
name : str
Filename.
Yields
------
str
Path to file `name` in temporary directory.
Examples
--------
This example demonstrates that the created temporary directory (and the files
inside it) is deleted at the end of the context.
.. sourcecode:: pycon
>>> import os
>>> from gensim.test.utils import temporary_file
>>> with temporary_file("temp.txt") as tf, open(tf, 'w') as outfile:
... outfile.write("my extremely useful information")
... print("Is this file exists? {}".format(os.path.exists(tf)))
... print("Is this folder exists? {}".format(os.path.exists(os.path.dirname(tf))))
Is this file exists? True
Is this folder exists? True
>>>
>>> print("Is this file exists? {}".format(os.path.exists(tf)))
Is this file exists? False
>>> print("Is this folder exists? {}".format(os.path.exists(os.path.dirname(tf))))
Is this folder exists? False
"""
# note : when dropping python2.7 support, we can use tempfile.TemporaryDirectory
tmp = tempfile.mkdtemp()
try:
yield os.path.join(tmp, name)
finally:
shutil.rmtree(tmp, ignore_errors=True)
# set up vars used in testing ("Deerwester" from the web tutorial)
common_texts = [
['human', 'interface', 'computer'],
['survey', 'user', 'computer', 'system', 'response', 'time'],
['eps', 'user', 'interface', 'system'],
['system', 'human', 'system', 'eps'],
['user', 'response', 'time'],
['trees'],
['graph', 'trees'],
['graph', 'minors', 'trees'],
['graph', 'minors', 'survey']
]
common_dictionary = Dictionary(common_texts)
common_corpus = [common_dictionary.doc2bow(text) for text in common_texts]
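# --- illustrative sketch, not part of the original module -------------------
# common_dictionary and common_corpus above expose the toy "Deerwester" dataset as
# a ready-made bag-of-words corpus, so any gensim model that consumes BoW input can
# be smoke-tested in a couple of lines. A minimal sketch using TfidfModel; the
# helper name is hypothetical.
def _example_tfidf_of_first_document():
    """Return the TF-IDF weights of the first toy document."""
    from gensim.models import TfidfModel
    tfidf = TfidfModel(common_corpus, id2word=common_dictionary)
    return tfidf[common_corpus[0]]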
class LeeCorpus:
def __iter__(self):
with open(datapath('lee_background.cor')) as f:
for line in f:
yield simple_preprocess(line)
lee_corpus_list = list(LeeCorpus())
| size: 6,211 | language: Python | extension: .py | total_lines: 161 | avg_line_length: 33.074534 | max_line_length: 108 | alphanum_fraction: 0.654096 | repo: piskvorky/gensim | stars: 15,546 | forks: 4,374 | open_issues: 408 | license: LGPL-2.1 | extracted: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,057 | file_name: test_doc2vec.py | file_path: piskvorky_gensim/gensim/test/test_doc2vec.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
from __future__ import with_statement, division
import logging
import unittest
import os
from collections import namedtuple
import numpy as np
from testfixtures import log_capture
from gensim import utils
from gensim.models import doc2vec, keyedvectors
from gensim.test.utils import datapath, get_tmpfile, temporary_file, common_texts as raw_sentences
class DocsLeeCorpus:
def __init__(self, string_tags=False, unicode_tags=False):
self.string_tags = string_tags
self.unicode_tags = unicode_tags
def _tag(self, i):
if self.unicode_tags:
return u'_\xa1_%d' % i
elif self.string_tags:
return '_*%d' % i
return i
def __iter__(self):
with open(datapath('lee_background.cor')) as f:
for i, line in enumerate(f):
yield doc2vec.TaggedDocument(utils.simple_preprocess(line), [self._tag(i)])
list_corpus = list(DocsLeeCorpus())
sentences = [doc2vec.TaggedDocument(words, [i]) for i, words in enumerate(raw_sentences)]
def load_on_instance():
# Save and load a Doc2Vec Model on instance for test
tmpf = get_tmpfile('gensim_doc2vec.tst')
model = doc2vec.Doc2Vec(DocsLeeCorpus(), min_count=1)
model.save(tmpf)
model = doc2vec.Doc2Vec() # should fail at this point
return model.load(tmpf)
def save_lee_corpus_as_line_sentence(corpus_file):
utils.save_as_line_sentence((doc.words for doc in DocsLeeCorpus()), corpus_file)
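# --- illustrative sketch, not part of the original test suite ---------------
# DocsLeeCorpus above yields TaggedDocument objects (a list of words plus a list of
# tags), which is the input shape Doc2Vec expects. A minimal end-to-end sketch of
# the train / infer / query workflow the tests below exercise; the hyperparameters
# here are arbitrary, not the ones the tests use, and the helper name is hypothetical.
def _example_train_and_query():
    """Train a small Doc2Vec model and return the documents most similar to doc 0."""
    model = doc2vec.Doc2Vec(DocsLeeCorpus(), vector_size=50, min_count=2, epochs=10)
    inferred = model.infer_vector(list_corpus[0].words)
    return model.dv.most_similar([inferred], topn=5)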
class TestDoc2VecModel(unittest.TestCase):
def test_persistence(self):
"""Test storing/loading the entire model."""
tmpf = get_tmpfile('gensim_doc2vec.tst')
model = doc2vec.Doc2Vec(DocsLeeCorpus(), min_count=1)
model.save(tmpf)
self.models_equal(model, doc2vec.Doc2Vec.load(tmpf))
def test_persistence_fromfile(self):
"""Test storing/loading the entire model."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
tmpf = get_tmpfile('gensim_doc2vec.tst')
model = doc2vec.Doc2Vec(corpus_file=corpus_file, min_count=1)
model.save(tmpf)
self.models_equal(model, doc2vec.Doc2Vec.load(tmpf))
def test_persistence_word2vec_format(self):
"""Test storing the entire model in word2vec format."""
model = doc2vec.Doc2Vec(DocsLeeCorpus(), min_count=1)
# test saving both document and word embedding
test_doc_word = get_tmpfile('gensim_doc2vec.dw')
model.save_word2vec_format(test_doc_word, doctag_vec=True, word_vec=True, binary=False)
binary_model_dv = keyedvectors.KeyedVectors.load_word2vec_format(test_doc_word, binary=False)
self.assertEqual(len(model.wv) + len(model.dv), len(binary_model_dv))
# test saving document embedding only
test_doc = get_tmpfile('gensim_doc2vec.d')
model.save_word2vec_format(test_doc, doctag_vec=True, word_vec=False, binary=True)
binary_model_dv = keyedvectors.KeyedVectors.load_word2vec_format(test_doc, binary=True)
self.assertEqual(len(model.dv), len(binary_model_dv))
# test saving word embedding only
test_word = get_tmpfile('gensim_doc2vec.w')
model.save_word2vec_format(test_word, doctag_vec=False, word_vec=True, binary=True)
binary_model_dv = keyedvectors.KeyedVectors.load_word2vec_format(test_word, binary=True)
self.assertEqual(len(model.wv), len(binary_model_dv))
def obsolete_testLoadOldModel(self):
"""Test loading an old doc2vec model from indeterminate version"""
model_file = 'doc2vec_old' # which version?!?
model = doc2vec.Doc2Vec.load(datapath(model_file))
self.assertTrue(model.wv.vectors.shape == (3955, 100))
self.assertTrue(len(model.wv) == 3955)
self.assertTrue(len(model.wv.index_to_key) == 3955)
self.assertIsNone(model.corpus_total_words)
self.assertTrue(model.syn1neg.shape == (len(model.wv), model.vector_size))
self.assertTrue(model.wv.vectors_lockf.shape == (3955, ))
self.assertTrue(model.cum_table.shape == (3955, ))
self.assertTrue(model.dv.vectors.shape == (300, 100))
self.assertTrue(model.dv.vectors_lockf.shape == (300, ))
self.assertTrue(len(model.dv) == 300)
self.model_sanity(model)
def obsolete_testLoadOldModelSeparates(self):
"""Test loading an old doc2vec model from indeterminate version"""
# Model stored in multiple files
model_file = 'doc2vec_old_sep'
model = doc2vec.Doc2Vec.load(datapath(model_file))
self.assertTrue(model.wv.vectors.shape == (3955, 100))
self.assertTrue(len(model.wv) == 3955)
self.assertTrue(len(model.wv.index_to_key) == 3955)
self.assertIsNone(model.corpus_total_words)
self.assertTrue(model.syn1neg.shape == (len(model.wv), model.vector_size))
self.assertTrue(model.wv.vectors_lockf.shape == (3955, ))
self.assertTrue(model.cum_table.shape == (3955, ))
self.assertTrue(model.dv.vectors.shape == (300, 100))
self.assertTrue(model.dv.vectors_lockf.shape == (300, ))
self.assertTrue(len(model.dv) == 300)
self.model_sanity(model)
def obsolete_test_load_old_models_pre_1_0(self):
"""Test loading pre-1.0 models"""
model_file = 'd2v-lee-v0.13.0'
model = doc2vec.Doc2Vec.load(datapath(model_file))
self.model_sanity(model)
old_versions = [
'0.12.0', '0.12.1', '0.12.2', '0.12.3', '0.12.4',
'0.13.0', '0.13.1', '0.13.2', '0.13.3', '0.13.4',
]
for old_version in old_versions:
self._check_old_version(old_version)
def obsolete_test_load_old_models_1_x(self):
"""Test loading 1.x models"""
old_versions = [
'1.0.0', '1.0.1',
]
for old_version in old_versions:
self._check_old_version(old_version)
def obsolete_test_load_old_models_2_x(self):
"""Test loading 2.x models"""
old_versions = [
'2.0.0', '2.1.0', '2.2.0', '2.3.0',
]
for old_version in old_versions:
self._check_old_version(old_version)
def obsolete_test_load_old_models_pre_3_3(self):
"""Test loading 3.x models"""
old_versions = [
'3.2.0', '3.1.0', '3.0.0'
]
for old_version in old_versions:
self._check_old_version(old_version)
def obsolete_test_load_old_models_post_3_2(self):
"""Test loading 3.x models"""
old_versions = [
'3.4.0', '3.3.0',
]
for old_version in old_versions:
self._check_old_version(old_version)
def _check_old_version(self, old_version):
logging.info("TESTING LOAD of %s Doc2Vec MODEL", old_version)
saved_models_dir = datapath('old_d2v_models/d2v_{}.mdl')
model = doc2vec.Doc2Vec.load(saved_models_dir.format(old_version))
self.assertTrue(len(model.wv) == 3)
self.assertIsNone(model.corpus_total_words)
self.assertTrue(model.wv.vectors.shape == (3, 4))
self.assertTrue(model.dv.vectors.shape == (2, 4))
self.assertTrue(len(model.dv) == 2)
# check if inferring vectors for new documents and similarity search works.
doc0_inferred = model.infer_vector(list(DocsLeeCorpus())[0].words)
sims_to_infer = model.dv.most_similar([doc0_inferred], topn=len(model.dv))
self.assertTrue(sims_to_infer)
# check if inferring vectors and similarity search works after saving and loading back the model
tmpf = get_tmpfile('gensim_doc2vec.tst')
model.save(tmpf)
loaded_model = doc2vec.Doc2Vec.load(tmpf)
doc0_inferred = loaded_model.infer_vector(list(DocsLeeCorpus())[0].words)
sims_to_infer = loaded_model.dv.most_similar([doc0_inferred], topn=len(loaded_model.dv))
self.assertTrue(sims_to_infer)
def test_doc2vec_train_parameters(self):
model = doc2vec.Doc2Vec(vector_size=50)
model.build_vocab(corpus_iterable=list_corpus)
self.assertRaises(TypeError, model.train, corpus_file=11111)
self.assertRaises(TypeError, model.train, corpus_iterable=11111)
self.assertRaises(TypeError, model.train, corpus_iterable=sentences, corpus_file='test')
self.assertRaises(TypeError, model.train, corpus_iterable=None, corpus_file=None)
self.assertRaises(TypeError, model.train, corpus_file=sentences)
@unittest.skipIf(os.name == 'nt', "See another test for Windows below")
def test_get_offsets_and_start_doctags(self):
# Each line takes 6 bytes (including '\n' character)
lines = ['line1\n', 'line2\n', 'line3\n', 'line4\n', 'line5\n']
tmpf = get_tmpfile('gensim_doc2vec.tst')
with utils.open(tmpf, 'wb', encoding='utf8') as fout:
for line in lines:
fout.write(utils.any2unicode(line))
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 1)
self.assertEqual(offsets, [0])
self.assertEqual(start_doctags, [0])
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 2)
self.assertEqual(offsets, [0, 12])
self.assertEqual(start_doctags, [0, 2])
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 3)
self.assertEqual(offsets, [0, 6, 18])
self.assertEqual(start_doctags, [0, 1, 3])
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 4)
self.assertEqual(offsets, [0, 6, 12, 18])
self.assertEqual(start_doctags, [0, 1, 2, 3])
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 5)
self.assertEqual(offsets, [0, 6, 12, 18, 24])
self.assertEqual(start_doctags, [0, 1, 2, 3, 4])
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 6)
self.assertEqual(offsets, [0, 0, 6, 12, 18, 24])
self.assertEqual(start_doctags, [0, 0, 1, 2, 3, 4])
@unittest.skipIf(os.name != 'nt', "See another test for posix above")
def test_get_offsets_and_start_doctags_win(self):
# Each line takes 7 bytes (including '\n' character which is actually '\r\n' on Windows)
lines = ['line1\n', 'line2\n', 'line3\n', 'line4\n', 'line5\n']
tmpf = get_tmpfile('gensim_doc2vec.tst')
with utils.open(tmpf, 'wb', encoding='utf8') as fout:
for line in lines:
fout.write(utils.any2unicode(line))
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 1)
self.assertEqual(offsets, [0])
self.assertEqual(start_doctags, [0])
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 2)
self.assertEqual(offsets, [0, 14])
self.assertEqual(start_doctags, [0, 2])
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 3)
self.assertEqual(offsets, [0, 7, 21])
self.assertEqual(start_doctags, [0, 1, 3])
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 4)
self.assertEqual(offsets, [0, 7, 14, 21])
self.assertEqual(start_doctags, [0, 1, 2, 3])
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 5)
self.assertEqual(offsets, [0, 7, 14, 21, 28])
self.assertEqual(start_doctags, [0, 1, 2, 3, 4])
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 6)
self.assertEqual(offsets, [0, 0, 7, 14, 14, 21])
self.assertEqual(start_doctags, [0, 0, 1, 2, 2, 3])
def test_cython_linesentence_readline_after_getting_offsets(self):
lines = ['line1\n', 'line2\n', 'line3\n', 'line4\n', 'line5\n']
tmpf = get_tmpfile('gensim_doc2vec.tst')
with utils.open(tmpf, 'wb', encoding='utf8') as fout:
for line in lines:
fout.write(utils.any2unicode(line))
from gensim.models.word2vec_corpusfile import CythonLineSentence
offsets, start_doctags = doc2vec.Doc2Vec._get_offsets_and_start_doctags_for_corpusfile(tmpf, 5)
for offset, line in zip(offsets, lines):
ls = CythonLineSentence(tmpf, offset)
sentence = ls.read_sentence()
self.assertEqual(len(sentence), 1)
self.assertEqual(sentence[0], utils.any2utf8(line.strip()))
def test_unicode_in_doctag(self):
"""Test storing document vectors of a model with unicode titles."""
model = doc2vec.Doc2Vec(DocsLeeCorpus(unicode_tags=True), min_count=1)
tmpf = get_tmpfile('gensim_doc2vec.tst')
try:
model.save_word2vec_format(tmpf, doctag_vec=True, word_vec=True, binary=True)
except UnicodeEncodeError:
self.fail('Failed storing unicode title.')
def test_load_mmap(self):
"""Test storing/loading the entire model."""
model = doc2vec.Doc2Vec(sentences, min_count=1)
tmpf = get_tmpfile('gensim_doc2vec.tst')
# test storing the internal arrays into separate files
model.save(tmpf, sep_limit=0)
self.models_equal(model, doc2vec.Doc2Vec.load(tmpf))
# make sure mmaping the arrays back works, too
self.models_equal(model, doc2vec.Doc2Vec.load(tmpf, mmap='r'))
def test_int_doctags(self):
"""Test doc2vec doctag alternatives"""
corpus = DocsLeeCorpus()
model = doc2vec.Doc2Vec(min_count=1)
model.build_vocab(corpus)
self.assertEqual(len(model.dv.vectors), 300)
self.assertEqual(model.dv[0].shape, (100,))
self.assertEqual(model.dv[np.int64(0)].shape, (100,))
self.assertRaises(KeyError, model.__getitem__, '_*0')
def test_missing_string_doctag(self):
"""Test doc2vec doctag alternatives"""
corpus = list(DocsLeeCorpus(True))
# force duplicated tags
corpus = corpus[0:10] + corpus
model = doc2vec.Doc2Vec(min_count=1)
model.build_vocab(corpus)
self.assertRaises(KeyError, model.dv.__getitem__, 'not_a_tag')
def test_string_doctags(self):
"""Test doc2vec doctag alternatives"""
corpus = list(DocsLeeCorpus(True))
# force duplicated tags
corpus = corpus[0:10] + corpus
model = doc2vec.Doc2Vec(min_count=1)
model.build_vocab(corpus)
self.assertEqual(len(model.dv.vectors), 300)
self.assertEqual(model.dv[0].shape, (100,))
self.assertEqual(model.dv['_*0'].shape, (100,))
self.assertTrue(all(model.dv['_*0'] == model.dv[0]))
self.assertTrue(max(model.dv.key_to_index.values()) < len(model.dv.index_to_key))
self.assertLess(
max(model.dv.get_index(str_key) for str_key in model.dv.key_to_index.keys()),
len(model.dv.vectors)
)
# verify dv.most_similar() returns string doctags rather than indexes
self.assertEqual(model.dv.index_to_key[0], model.dv.most_similar([model.dv[0]])[0][0])
def test_empty_errors(self):
# no input => "RuntimeError: you must first build vocabulary before training the model"
self.assertRaises(RuntimeError, doc2vec.Doc2Vec, [])
# input not empty, but rather completely filtered out
self.assertRaises(RuntimeError, doc2vec.Doc2Vec, list_corpus, min_count=10000)
def test_similarity_unseen_docs(self):
"""Test similarity of out of training sentences"""
rome_words = ['rome', 'italy']
car_words = ['car']
corpus = list(DocsLeeCorpus(True))
model = doc2vec.Doc2Vec(min_count=1)
model.build_vocab(corpus)
self.assertTrue(
model.similarity_unseen_docs(rome_words, rome_words)
> model.similarity_unseen_docs(rome_words, car_words)
)
def model_sanity(self, model, keep_training=True):
"""Any non-trivial model on DocsLeeCorpus can pass these sanity checks"""
fire1 = 0 # doc 0 sydney fires
fire2 = np.int64(8) # doc 8 sydney fires
alt1 = 29 # doc 29 palestine
# inferred vector should be top10 close to bulk-trained one
doc0_inferred = model.infer_vector(list(DocsLeeCorpus())[0].words)
sims_to_infer = model.dv.most_similar([doc0_inferred], topn=len(model.dv))
sims_ids = [docid for docid, sim in sims_to_infer]
self.assertTrue(fire1 in sims_ids, "{0} not found in {1}".format(fire1, sims_to_infer))
f_rank = sims_ids.index(fire1)
self.assertLess(f_rank, 10)
# fire2 should be top30 close to fire1
sims = model.dv.most_similar(fire1, topn=len(model.dv))
f2_rank = [docid for docid, sim in sims].index(fire2)
self.assertLess(f2_rank, 30)
# same sims should appear in lookup by vec as by index
doc0_vec = model.dv[fire1]
sims2 = model.dv.most_similar(positive=[doc0_vec], topn=21)
sims2 = [(id, sim) for id, sim in sims2 if id != fire1] # ignore the doc itself
sims = sims[:20]
self.assertEqual(list(zip(*sims))[0], list(zip(*sims2))[0]) # same doc ids
self.assertTrue(np.allclose(list(zip(*sims))[1], list(zip(*sims2))[1])) # close-enough dists
# sim results should be in clip range if given
clip_sims = \
model.dv.most_similar(fire1, clip_start=len(model.dv) // 2, clip_end=len(model.dv) * 2 // 3)
sims_doc_id = [docid for docid, sim in clip_sims]
for s_id in sims_doc_id:
self.assertTrue(len(model.dv) // 2 <= s_id <= len(model.dv) * 2 // 3)
# fire docs should be closer than fire-alt
self.assertLess(model.dv.similarity(fire1, alt1), model.dv.similarity(fire1, fire2))
self.assertLess(model.dv.similarity(fire2, alt1), model.dv.similarity(fire1, fire2))
# alt doc should be out-of-place among fire news
self.assertEqual(model.dv.doesnt_match([fire1, alt1, fire2]), alt1)
# keep training after save
if keep_training:
tmpf = get_tmpfile('gensim_doc2vec_resave.tst')
model.save(tmpf)
loaded = doc2vec.Doc2Vec.load(tmpf)
loaded.train(corpus_iterable=sentences, total_examples=loaded.corpus_count, epochs=loaded.epochs)
def test_training(self):
"""Test doc2vec training."""
corpus = DocsLeeCorpus()
model = doc2vec.Doc2Vec(vector_size=100, min_count=2, epochs=20, workers=1)
model.build_vocab(corpus)
self.assertEqual(model.dv.vectors.shape, (300, 100))
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)
self.model_sanity(model)
# build vocab and train in one step; must be the same as above
model2 = doc2vec.Doc2Vec(corpus, vector_size=100, min_count=2, epochs=20, workers=1)
self.models_equal(model, model2)
def test_training_fromfile(self):
"""Test doc2vec training."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
model = doc2vec.Doc2Vec(vector_size=100, min_count=2, epochs=20, workers=1)
model.build_vocab(corpus_file=corpus_file)
self.assertEqual(model.dv.vectors.shape, (300, 100))
model.train(corpus_file=corpus_file, total_words=model.corpus_total_words, epochs=model.epochs)
self.model_sanity(model)
model = doc2vec.Doc2Vec(corpus_file=corpus_file, vector_size=100, min_count=2, epochs=20, workers=1)
self.model_sanity(model)
def test_dbow_hs(self):
"""Test DBOW doc2vec training."""
model = doc2vec.Doc2Vec(list_corpus, dm=0, hs=1, negative=0, min_count=2, epochs=20)
self.model_sanity(model)
def test_dbow_hs_fromfile(self):
"""Test DBOW doc2vec training."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
model = doc2vec.Doc2Vec(corpus_file=corpus_file, dm=0, hs=1, negative=0, min_count=2, epochs=20)
self.model_sanity(model)
def test_dmm_hs(self):
"""Test DM/mean doc2vec training."""
model = doc2vec.Doc2Vec(
list_corpus, dm=1, dm_mean=1, vector_size=24, window=4,
hs=1, negative=0, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dmm_hs_fromfile(self):
"""Test DBOW doc2vec training."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
model = doc2vec.Doc2Vec(
corpus_file=corpus_file, dm=1, dm_mean=1, vector_size=24, window=4,
hs=1, negative=0, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dms_hs(self):
"""Test DM/sum doc2vec training."""
model = doc2vec.Doc2Vec(
list_corpus, dm=1, dm_mean=0, vector_size=24, window=4, hs=1,
negative=0, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dms_hs_fromfile(self):
"""Test DBOW doc2vec training."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
model = doc2vec.Doc2Vec(
corpus_file=corpus_file, dm=1, dm_mean=0, vector_size=24, window=4, hs=1,
negative=0, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dmc_hs(self):
"""Test DM/concatenate doc2vec training."""
model = doc2vec.Doc2Vec(
list_corpus, dm=1, dm_concat=1, vector_size=24, window=4,
hs=1, negative=0, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dmc_hs_fromfile(self):
"""Test DBOW doc2vec training."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
model = doc2vec.Doc2Vec(
corpus_file=corpus_file, dm=1, dm_concat=1, vector_size=24, window=4,
hs=1, negative=0, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dbow_neg(self):
"""Test DBOW doc2vec training."""
model = doc2vec.Doc2Vec(list_corpus, vector_size=16, dm=0, hs=0, negative=5, min_count=2, epochs=40)
self.model_sanity(model)
def test_dbow_neg_fromfile(self):
"""Test DBOW doc2vec training."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
model = doc2vec.Doc2Vec(corpus_file=corpus_file, vector_size=16, dm=0, hs=0, negative=5, min_count=2, epochs=40)
self.model_sanity(model)
def test_dmm_neg(self):
"""Test DM/mean doc2vec training."""
model = doc2vec.Doc2Vec(
list_corpus, dm=1, dm_mean=1, vector_size=24, window=4, hs=0,
negative=10, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dmm_neg_fromfile(self):
"""Test DBOW doc2vec training."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
model = doc2vec.Doc2Vec(
corpus_file=corpus_file, dm=1, dm_mean=1, vector_size=24, window=4, hs=0,
negative=10, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dms_neg(self):
"""Test DM/sum doc2vec training."""
model = doc2vec.Doc2Vec(
list_corpus, dm=1, dm_mean=0, vector_size=24, window=4, hs=0,
negative=10, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dms_neg_fromfile(self):
"""Test DBOW doc2vec training."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
model = doc2vec.Doc2Vec(
corpus_file=corpus_file, dm=1, dm_mean=0, vector_size=24, window=4, hs=0,
negative=10, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dmc_neg(self):
"""Test DM/concatenate doc2vec training."""
model = doc2vec.Doc2Vec(
list_corpus, dm=1, dm_concat=1, vector_size=24, window=4, hs=0,
negative=10, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dmc_neg_fromfile(self):
"""Test DBOW doc2vec training."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
model = doc2vec.Doc2Vec(
corpus_file=corpus_file, dm=1, dm_concat=1, vector_size=24, window=4, hs=0,
negative=10, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dmm_fixedwindowsize(self):
"""Test DMM doc2vec training with fixed window size."""
model = doc2vec.Doc2Vec(
list_corpus, vector_size=24,
dm=1, dm_mean=1, window=4, shrink_windows=False,
hs=0, negative=10, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dmm_fixedwindowsize_fromfile(self):
"""Test DMM doc2vec training with fixed window size, from file."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
model = doc2vec.Doc2Vec(
corpus_file=corpus_file, vector_size=24,
dm=1, dm_mean=1, window=4, shrink_windows=False,
hs=0, negative=10, alpha=0.05, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dbow_fixedwindowsize(self):
"""Test DBOW doc2vec training with fixed window size."""
model = doc2vec.Doc2Vec(
list_corpus, vector_size=16, shrink_windows=False,
dm=0, hs=0, negative=5, min_count=2, epochs=20
)
self.model_sanity(model)
def test_dbow_fixedwindowsize_fromfile(self):
"""Test DBOW doc2vec training with fixed window size, from file."""
with temporary_file(get_tmpfile('gensim_doc2vec.tst')) as corpus_file:
save_lee_corpus_as_line_sentence(corpus_file)
model = doc2vec.Doc2Vec(
corpus_file=corpus_file, vector_size=16, shrink_windows=False,
dm=0, hs=0, negative=5, min_count=2, epochs=20
)
self.model_sanity(model)
def test_parallel(self):
"""Test doc2vec parallel training with more than default 3 threads."""
# repeat the ~300 doc (~60000 word) Lee corpus to get 6000 docs (~1.2M words)
corpus = utils.RepeatCorpus(DocsLeeCorpus(), 6000)
# use smaller batches-to-workers for more contention
model = doc2vec.Doc2Vec(corpus, workers=6, batch_words=5000)
self.model_sanity(model)
def test_deterministic_hs(self):
"""Test doc2vec results identical with identical RNG seed."""
# hs
model = doc2vec.Doc2Vec(DocsLeeCorpus(), seed=42, workers=1)
model2 = doc2vec.Doc2Vec(DocsLeeCorpus(), seed=42, workers=1)
self.models_equal(model, model2)
def test_deterministic_neg(self):
"""Test doc2vec results identical with identical RNG seed."""
# neg
model = doc2vec.Doc2Vec(DocsLeeCorpus(), hs=0, negative=3, seed=42, workers=1)
model2 = doc2vec.Doc2Vec(DocsLeeCorpus(), hs=0, negative=3, seed=42, workers=1)
self.models_equal(model, model2)
def test_deterministic_dmc(self):
"""Test doc2vec results identical with identical RNG seed."""
# bigger, dmc
model = doc2vec.Doc2Vec(
DocsLeeCorpus(), dm=1, dm_concat=1, vector_size=24,
window=4, hs=1, negative=3, seed=42, workers=1
)
model2 = doc2vec.Doc2Vec(
DocsLeeCorpus(), dm=1, dm_concat=1, vector_size=24,
window=4, hs=1, negative=3, seed=42, workers=1
)
self.models_equal(model, model2)
def test_mixed_tag_types(self):
"""Ensure alternating int/string tags don't share indexes in vectors"""
mixed_tag_corpus = [doc2vec.TaggedDocument(words, [i, words[0]]) for i, words in enumerate(raw_sentences)]
model = doc2vec.Doc2Vec()
model.build_vocab(mixed_tag_corpus)
expected_length = len(sentences) + len(model.dv.key_to_index) # 9 sentences, 7 unique first tokens
self.assertEqual(len(model.dv.vectors), expected_length)
# TODO: test saving in word2vec format
def models_equal(self, model, model2):
# check words/hidden-weights
self.assertEqual(len(model.wv), len(model2.wv))
self.assertTrue(np.allclose(model.wv.vectors, model2.wv.vectors))
if model.hs:
self.assertTrue(np.allclose(model.syn1, model2.syn1))
if model.negative:
self.assertTrue(np.allclose(model.syn1neg, model2.syn1neg))
# check docvecs
self.assertEqual(len(model.dv), len(model2.dv))
self.assertEqual(len(model.dv.index_to_key), len(model2.dv.index_to_key))
def test_word_vec_non_writeable(self):
model = keyedvectors.KeyedVectors.load_word2vec_format(datapath('word2vec_pre_kv_c'))
vector = model['says']
with self.assertRaises(ValueError):
vector *= 0
@log_capture()
def test_build_vocab_warning(self, loglines):
"""Test if logger warning is raised on non-ideal input to a doc2vec model"""
raw_sentences = ['human', 'machine']
sentences = [doc2vec.TaggedDocument(words, [i]) for i, words in enumerate(raw_sentences)]
model = doc2vec.Doc2Vec()
model.build_vocab(sentences)
warning = "Each 'words' should be a list of words (usually unicode strings)."
self.assertTrue(warning in str(loglines))
@log_capture()
def test_train_warning(self, loglines):
"""Test if warning is raised if alpha rises during subsequent calls to train()"""
raw_sentences = [['human'],
['graph', 'trees']]
sentences = [doc2vec.TaggedDocument(words, [i]) for i, words in enumerate(raw_sentences)]
model = doc2vec.Doc2Vec(alpha=0.025, min_alpha=0.025, min_count=1, workers=8, vector_size=5)
model.build_vocab(sentences)
for epoch in range(10):
model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)
model.alpha -= 0.002
model.min_alpha = model.alpha
if epoch == 5:
model.alpha += 0.05
warning = "Effective 'alpha' higher than previous training cycles"
self.assertTrue(warning in str(loglines))
def test_load_on_class_error(self):
"""Test if exception is raised when loading doc2vec model on instance"""
self.assertRaises(AttributeError, load_on_instance)
def test_negative_ns_exp(self):
"""The model should accept a negative ns_exponent as a valid value."""
model = doc2vec.Doc2Vec(sentences, ns_exponent=-1, min_count=1, workers=1)
tmpf = get_tmpfile('d2v_negative_exp.tst')
model.save(tmpf)
loaded_model = doc2vec.Doc2Vec.load(tmpf)
loaded_model.train(sentences, total_examples=model.corpus_count, epochs=1)
assert loaded_model.ns_exponent == -1, loaded_model.ns_exponent
# endclass TestDoc2VecModel
if not hasattr(TestDoc2VecModel, 'assertLess'):
# workaround for python 2.6
def assertLess(self, a, b, msg=None):
self.assertTrue(a < b, msg="%s is not less than %s" % (a, b))
setattr(TestDoc2VecModel, 'assertLess', assertLess)
# Following code is useful for reproducing paragraph-vectors paper sentiment experiments
class ConcatenatedDoc2Vec:
"""
Concatenation of multiple models for reproducing the Paragraph Vectors paper.
Models must have exactly-matching vocabulary and document IDs. (Models should
be trained separately; this wrapper just returns concatenated results.)
"""
def __init__(self, models):
self.models = models
if hasattr(models[0], 'dv'):
self.dv = ConcatenatedDocvecs([model.dv for model in models])
def __getitem__(self, token):
return np.concatenate([model[token] for model in self.models])
def __str__(self):
"""Abbreviated name, built from submodels' names"""
return "+".join(str(model) for model in self.models)
@property
def epochs(self):
return self.models[0].epochs
def infer_vector(self, document, alpha=None, min_alpha=None, epochs=None):
return np.concatenate([model.infer_vector(document, alpha, min_alpha, epochs) for model in self.models])
def train(self, *ignore_args, **ignore_kwargs):
pass # train subcomponents individually
class ConcatenatedDocvecs:
def __init__(self, models):
self.models = models
def __getitem__(self, token):
return np.concatenate([model[token] for model in self.models])
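# --- illustrative sketch, not part of the original test suite ---------------
# ConcatenatedDoc2Vec above stitches independently trained models together, mirroring
# the PV-DM + PV-DBOW concatenation from the Paragraph Vectors paper: lookups and
# infer_vector() simply concatenate the per-model vectors. A minimal sketch of how it
# might be assembled; hyperparameters are arbitrary and the helper name is hypothetical.
def _example_concatenated_model(tagged_corpus):
    """Train a DBOW and a DM model on `tagged_corpus` and return their concatenation."""
    dbow = doc2vec.Doc2Vec(tagged_corpus, dm=0, vector_size=50, min_count=2, epochs=10)
    dm = doc2vec.Doc2Vec(tagged_corpus, dm=1, vector_size=50, min_count=2, epochs=10)
    return ConcatenatedDoc2Vec([dbow, dm])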
SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')
def read_su_sentiment_rotten_tomatoes(dirname, lowercase=True):
"""
Read and return documents from the Stanford Sentiment Treebank
corpus (Rotten Tomatoes reviews), from http://nlp.Stanford.edu/sentiment/
Initialize the corpus from a given directory, where
http://nlp.stanford.edu/~socherr/stanfordSentimentTreebank.zip
has been expanded. It's not too big, so it is composed entirely in memory.
"""
logging.info("loading corpus from %s", dirname)
# many mangled chars in sentences (datasetSentences.txt)
chars_sst_mangled = [
'à', 'á', 'â', 'ã', 'æ', 'ç', 'è', 'é', 'í',
'í', 'ï', 'ñ', 'ó', 'ô', 'ö', 'û', 'ü'
]
sentence_fixups = [(char.encode('utf-8').decode('latin1'), char) for char in chars_sst_mangled]
# more junk, and the replace necessary for sentence-phrase consistency
sentence_fixups.extend([
('Â', ''),
('\xa0', ' '),
('-LRB-', '('),
('-RRB-', ')'),
])
# only this junk in phrases (dictionary.txt)
phrase_fixups = [('\xa0', ' ')]
# sentence_id and split are only positive for the full sentences
# read sentences into a temporary {sentence -> (id, split)} dict, to correlate with dictionary.txt
info_by_sentence = {}
with open(os.path.join(dirname, 'datasetSentences.txt'), 'r') as sentences:
with open(os.path.join(dirname, 'datasetSplit.txt'), 'r') as splits:
next(sentences) # legend
next(splits) # legend
for sentence_line, split_line in zip(sentences, splits):
id, text = sentence_line.split('\t')
id = int(id)
text = text.rstrip()
for junk, fix in sentence_fixups:
text = text.replace(junk, fix)
(id2, split_i) = split_line.split(',')
assert id == int(id2)
if text not in info_by_sentence: # discard duplicates
info_by_sentence[text] = (id, int(split_i))
# read all phrase text
phrases = [None] * 239232 # known size of phrases
with open(os.path.join(dirname, 'dictionary.txt'), 'r') as phrase_lines:
for line in phrase_lines:
(text, id) = line.split('|')
for junk, fix in phrase_fixups:
text = text.replace(junk, fix)
phrases[int(id)] = text.rstrip() # for 1st pass just string
SentimentPhrase = namedtuple('SentimentPhrase', SentimentDocument._fields + ('sentence_id',))
# add sentiment labels, correlate with sentences
with open(os.path.join(dirname, 'sentiment_labels.txt'), 'r') as sentiments:
next(sentiments) # legend
for line in sentiments:
(id, sentiment) = line.split('|')
id = int(id)
sentiment = float(sentiment)
text = phrases[id]
words = text.split()
if lowercase:
words = [word.lower() for word in words]
(sentence_id, split_i) = info_by_sentence.get(text, (None, 0))
split = [None, 'train', 'test', 'dev'][split_i]
phrases[id] = SentimentPhrase(words, [id], split, sentiment, sentence_id)
assert sum(1 for phrase in phrases if phrase.sentence_id is not None) == len(info_by_sentence) # all
# counts don't match 8544, 2210, 1101 because 13 TRAIN and 1 DEV sentences are duplicates
assert sum(1 for phrase in phrases if phrase.split == 'train') == 8531 # 'train'
assert sum(1 for phrase in phrases if phrase.split == 'test') == 2210 # 'test'
assert sum(1 for phrase in phrases if phrase.split == 'dev') == 1100 # 'dev'
logging.info(
"loaded corpus with %i sentences and %i phrases from %s",
len(info_by_sentence), len(phrases), dirname
)
return phrases
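# --- illustrative sketch, not part of the original test suite ---------------
# The SentimentPhrase records returned above already carry words, tags, a
# train/test/dev split and a sentiment score, which is everything needed for the
# paragraph-vectors sentiment experiments. A minimal sketch of splitting them;
# `dirname` must point at the expanded stanfordSentimentTreebank directory and the
# helper name is hypothetical.
def _example_split_sentiment_corpus(dirname):
    """Return (train, test) lists of phrases from the Stanford Sentiment Treebank."""
    phrases = read_su_sentiment_rotten_tomatoes(dirname)
    train = [phrase for phrase in phrases if phrase.split == 'train']
    test = [phrase for phrase in phrases if phrase.split == 'test']
    return train, test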
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main(module='gensim.test.test_doc2vec')
| size: 38,067 | language: Python | extension: .py | total_lines: 722 | avg_line_length: 43.422438 | max_line_length: 114 | alphanum_fraction: 0.639606 | repo: piskvorky/gensim | stars: 15,546 | forks: 4,374 | open_issues: 408 | license: LGPL-2.1 | extracted: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,058 | file_name: test_atmodel.py | file_path: piskvorky_gensim/gensim/test/test_atmodel.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2016 Radim Rehurek <radimrehurek@seznam.cz>
# Copyright (C) 2016 Olavur Mortensen <olavurmortensen@gmail.com>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for the author-topic model (AuthorTopicModel class). These tests
are based on the unit tests of LDA; the classes are quite similar, and the tests
needed are thus quite similar.
"""
import logging
import unittest
import numbers
from os import remove
import numpy as np
from gensim.corpora import mmcorpus, Dictionary
from gensim.models import atmodel
from gensim import matutils
from gensim.test import basetmtests
from gensim.test.utils import (datapath,
get_tmpfile, common_texts, common_dictionary as dictionary, common_corpus as corpus)
from gensim.matutils import jensen_shannon
# TODO:
# Test that computing the bound on new unseen documents works as expected (this is somewhat different
# in the author-topic model than in LDA).
# Perhaps test that the bound increases, in general (i.e. in several of the tests below where it makes
# sense). This is not tested in LDA either. Tests can also be made to check that automatic prior learning
# increases the bound.
# Test that models are compatible across versions, as done in LdaModel.
# Assign some authors randomly to the documents above.
author2doc = {
'john': [0, 1, 2, 3, 4, 5, 6],
'jane': [2, 3, 4, 5, 6, 7, 8],
'jack': [0, 2, 4, 6, 8],
'jill': [1, 3, 5, 7]
}
doc2author = {
0: ['john', 'jack'],
1: ['john', 'jill'],
2: ['john', 'jane', 'jack'],
3: ['john', 'jane', 'jill'],
4: ['john', 'jane', 'jack'],
5: ['john', 'jane', 'jill'],
6: ['john', 'jane', 'jack'],
7: ['jane', 'jill'],
8: ['jane', 'jack']
}
# More data with new and old authors (to test update method).
# Although the text is just a subset of the previous, the model
# just sees it as completely new data.
texts_new = common_texts[0:3]
author2doc_new = {'jill': [0], 'bob': [0, 1], 'sally': [1, 2]}
dictionary_new = Dictionary(texts_new)
corpus_new = [dictionary_new.doc2bow(text) for text in texts_new]
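# --- illustrative sketch, not part of the original test suite ---------------
# author2doc above maps each author to the indices of the documents they wrote, and
# doc2author is its inverse; AuthorTopicModel accepts either mapping (or both). A
# minimal sketch of the workflow the tests below exercise; num_topics and passes are
# arbitrary and the helper name is hypothetical.
def _example_author_topics(author_name='jill'):
    """Train a small author-topic model and return one author's topic distribution."""
    model = atmodel.AuthorTopicModel(corpus, author2doc=author2doc, id2word=dictionary, num_topics=2, passes=10)
    return model.get_author_topics(author_name)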
class TestAuthorTopicModel(unittest.TestCase, basetmtests.TestBaseTopicModel):
def setUp(self):
self.corpus = mmcorpus.MmCorpus(datapath('testcorpus.mm'))
self.class_ = atmodel.AuthorTopicModel
self.model = self.class_(corpus, id2word=dictionary, author2doc=author2doc, num_topics=2, passes=100)
def test_transform(self):
passed = False
# sometimes, training gets stuck at a local minimum
# in that case try re-training the model from scratch, hoping for a
# better random initialization
for i in range(25):  # restart at most 25 times
# create the transformation model
model = self.class_(id2word=dictionary, num_topics=2, passes=100, random_state=0)
model.update(corpus, author2doc)
jill_topics = model.get_author_topics('jill')
# NOTE: this test may easily fail if the author-topic model is altered in any way. The model's
# output is sensitive to a lot of things, like the scheduling of the updates, or like the
# author2id (because the random initialization changes when author2id changes). If it does
# fail, simply be aware of whether we broke something, or if it just naturally changed the
# output of the model slightly.
vec = matutils.sparse2full(jill_topics, 2) # convert to dense vector, for easier equality tests
expected = [0.91, 0.08]
# must contain the same values, up to re-ordering
passed = np.allclose(sorted(vec), sorted(expected), atol=1e-1)
if passed:
break
logging.warning(
"Author-topic model failed to converge on attempt %i (got %s, expected %s)",
i, sorted(vec), sorted(expected)
)
self.assertTrue(passed)
def test_basic(self):
# Check that training the model produces a positive topic vector for some author
# Otherwise, many of the other tests are invalid.
model = self.class_(corpus, author2doc=author2doc, id2word=dictionary, num_topics=2)
jill_topics = model.get_author_topics('jill')
jill_topics = matutils.sparse2full(jill_topics, model.num_topics)
self.assertTrue(all(jill_topics > 0))
def test_empty_document(self):
local_texts = common_texts + [['only_occurs_once_in_corpus_and_alone_in_doc']]
dictionary = Dictionary(local_texts)
dictionary.filter_extremes(no_below=2)
corpus = [dictionary.doc2bow(text) for text in local_texts]
a2d = author2doc.copy()
a2d['joaquin'] = [len(local_texts) - 1]
self.class_(corpus, author2doc=a2d, id2word=dictionary, num_topics=2)
def test_author2doc_missing(self):
# Check that the results are the same if author2doc is constructed automatically from doc2author.
model = self.class_(
corpus, author2doc=author2doc, doc2author=doc2author,
id2word=dictionary, num_topics=2, random_state=0
)
model2 = self.class_(
corpus, doc2author=doc2author, id2word=dictionary,
num_topics=2, random_state=0
)
        # Compare Jill's topics in both models.
jill_topics = model.get_author_topics('jill')
jill_topics2 = model2.get_author_topics('jill')
jill_topics = matutils.sparse2full(jill_topics, model.num_topics)
jill_topics2 = matutils.sparse2full(jill_topics2, model.num_topics)
self.assertTrue(np.allclose(jill_topics, jill_topics2))
def test_doc2author_missing(self):
# Check that the results are the same if doc2author is constructed automatically from author2doc.
model = self.class_(
corpus, author2doc=author2doc, doc2author=doc2author,
id2word=dictionary, num_topics=2, random_state=0
)
model2 = self.class_(
corpus, author2doc=author2doc, id2word=dictionary,
num_topics=2, random_state=0
)
        # Compare Jill's topics in both models.
jill_topics = model.get_author_topics('jill')
jill_topics2 = model2.get_author_topics('jill')
jill_topics = matutils.sparse2full(jill_topics, model.num_topics)
jill_topics2 = matutils.sparse2full(jill_topics2, model.num_topics)
self.assertTrue(np.allclose(jill_topics, jill_topics2))
def test_update(self):
# Check that calling update after the model already has been trained works.
model = self.class_(corpus, author2doc=author2doc, id2word=dictionary, num_topics=2)
jill_topics = model.get_author_topics('jill')
jill_topics = matutils.sparse2full(jill_topics, model.num_topics)
model.update()
jill_topics2 = model.get_author_topics('jill')
jill_topics2 = matutils.sparse2full(jill_topics2, model.num_topics)
# Did we learn something?
self.assertFalse(all(np.equal(jill_topics, jill_topics2)))
def test_update_new_data_old_author(self):
# Check that calling update with new documents and/or authors after the model already has
# been trained works.
# Test an author that already existed in the old dataset.
model = self.class_(corpus, author2doc=author2doc, id2word=dictionary, num_topics=2)
jill_topics = model.get_author_topics('jill')
jill_topics = matutils.sparse2full(jill_topics, model.num_topics)
model.update(corpus_new, author2doc_new)
jill_topics2 = model.get_author_topics('jill')
jill_topics2 = matutils.sparse2full(jill_topics2, model.num_topics)
# Did we learn more about Jill?
self.assertFalse(all(np.equal(jill_topics, jill_topics2)))
def test_update_new_data_new_author(self):
# Check that calling update with new documents and/or authors after the model already has
# been trained works.
# Test a new author, that didn't exist in the old dataset.
model = self.class_(corpus, author2doc=author2doc, id2word=dictionary, num_topics=2)
model.update(corpus_new, author2doc_new)
# Did we learn something about Sally?
sally_topics = model.get_author_topics('sally')
sally_topics = matutils.sparse2full(sally_topics, model.num_topics)
self.assertTrue(all(sally_topics > 0))
def test_serialized(self):
# Test the model using serialized corpora. Basic tests, plus test of update functionality.
model = self.class_(
self.corpus, author2doc=author2doc, id2word=dictionary, num_topics=2,
serialized=True, serialization_path=datapath('testcorpus_serialization.mm')
)
jill_topics = model.get_author_topics('jill')
jill_topics = matutils.sparse2full(jill_topics, model.num_topics)
self.assertTrue(all(jill_topics > 0))
model.update()
jill_topics2 = model.get_author_topics('jill')
jill_topics2 = matutils.sparse2full(jill_topics2, model.num_topics)
# Did we learn more about Jill?
self.assertFalse(all(np.equal(jill_topics, jill_topics2)))
model.update(corpus_new, author2doc_new)
# Did we learn something about Sally?
sally_topics = model.get_author_topics('sally')
sally_topics = matutils.sparse2full(sally_topics, model.num_topics)
self.assertTrue(all(sally_topics > 0))
# Delete the MmCorpus used for serialization inside the author-topic model.
remove(datapath('testcorpus_serialization.mm'))
def test_transform_serialized(self):
# Same as testTransform, using serialized corpora.
passed = False
# sometimes, training gets stuck at a local minimum
# in that case try re-training the model from scratch, hoping for a
# better random initialization
        for i in range(25):  # restart at most 25 times
# create the transformation model
model = self.class_(
id2word=dictionary, num_topics=2, passes=100, random_state=0,
serialized=True, serialization_path=datapath('testcorpus_serialization.mm')
)
model.update(self.corpus, author2doc)
jill_topics = model.get_author_topics('jill')
# NOTE: this test may easily fail if the author-topic model is altered in any way. The model's
# output is sensitive to a lot of things, like the scheduling of the updates, or like the
            # author2id (because the random initialization changes when author2id changes). If it does
            # fail, check whether we actually broke something, or whether the model's output just
            # changed slightly.
vec = matutils.sparse2full(jill_topics, 2) # convert to dense vector, for easier equality tests
expected = [0.91, 0.08]
# must contain the same values, up to re-ordering
passed = np.allclose(sorted(vec), sorted(expected), atol=1e-1)
# Delete the MmCorpus used for serialization inside the author-topic model.
remove(datapath('testcorpus_serialization.mm'))
if passed:
break
logging.warning(
"Author-topic model failed to converge on attempt %i (got %s, expected %s)",
i, sorted(vec), sorted(expected)
)
self.assertTrue(passed)
def test_alpha_auto(self):
model1 = self.class_(
corpus, author2doc=author2doc, id2word=dictionary,
alpha='symmetric', passes=10, num_topics=2
)
modelauto = self.class_(
corpus, author2doc=author2doc, id2word=dictionary,
alpha='auto', passes=10, num_topics=2
)
# did we learn something?
self.assertFalse(all(np.equal(model1.alpha, modelauto.alpha)))
def test_alpha(self):
kwargs = dict(
author2doc=author2doc,
id2word=dictionary,
num_topics=2,
alpha=None
)
expected_shape = (2,)
# should not raise anything
self.class_(**kwargs)
kwargs['alpha'] = 'symmetric'
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
self.assertTrue(all(model.alpha == np.array([0.5, 0.5])))
kwargs['alpha'] = 'asymmetric'
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
self.assertTrue(np.allclose(model.alpha, [0.630602, 0.369398]))
kwargs['alpha'] = 0.3
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
self.assertTrue(all(model.alpha == np.array([0.3, 0.3])))
kwargs['alpha'] = 3
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
self.assertTrue(all(model.alpha == np.array([3, 3])))
kwargs['alpha'] = [0.3, 0.3]
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
self.assertTrue(all(model.alpha == np.array([0.3, 0.3])))
kwargs['alpha'] = np.array([0.3, 0.3])
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
self.assertTrue(all(model.alpha == np.array([0.3, 0.3])))
        # all of these should raise an exception for being the wrong shape
kwargs['alpha'] = [0.3, 0.3, 0.3]
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['alpha'] = [[0.3], [0.3]]
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['alpha'] = [0.3]
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['alpha'] = "gensim is cool"
self.assertRaises(ValueError, self.class_, **kwargs)
def test_eta_auto(self):
model1 = self.class_(
corpus, author2doc=author2doc, id2word=dictionary,
eta='symmetric', passes=10, num_topics=2
)
modelauto = self.class_(
corpus, author2doc=author2doc, id2word=dictionary,
eta='auto', passes=10, num_topics=2
)
# did we learn something?
self.assertFalse(all(np.equal(model1.eta, modelauto.eta)))
def test_eta(self):
kwargs = dict(
author2doc=author2doc,
id2word=dictionary,
num_topics=2,
eta=None
)
num_terms = len(dictionary)
expected_shape = (num_terms,)
# should not raise anything
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
self.assertTrue(all(model.eta == np.array([0.5] * num_terms)))
kwargs['eta'] = 'symmetric'
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
self.assertTrue(all(model.eta == np.array([0.5] * num_terms)))
kwargs['eta'] = 0.3
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
self.assertTrue(all(model.eta == np.array([0.3] * num_terms)))
kwargs['eta'] = 3
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
self.assertTrue(all(model.eta == np.array([3] * num_terms)))
kwargs['eta'] = [0.3] * num_terms
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
self.assertTrue(all(model.eta == np.array([0.3] * num_terms)))
kwargs['eta'] = np.array([0.3] * num_terms)
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
self.assertTrue(all(model.eta == np.array([0.3] * num_terms)))
# should be ok with num_topics x num_terms
testeta = np.array([[0.5] * len(dictionary)] * 2)
kwargs['eta'] = testeta
self.class_(**kwargs)
        # all of these should raise an exception for being the wrong shape
kwargs['eta'] = testeta.reshape(tuple(reversed(testeta.shape)))
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['eta'] = [0.3]
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['eta'] = [0.3] * (num_terms + 1)
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['eta'] = "gensim is cool"
self.assertRaises(ValueError, self.class_, **kwargs)
kwargs['eta'] = "asymmetric"
self.assertRaises(ValueError, self.class_, **kwargs)
def test_top_topics(self):
top_topics = self.model.top_topics(corpus)
for topic, score in top_topics:
self.assertTrue(isinstance(topic, list))
self.assertTrue(isinstance(score, float))
for v, k in topic:
self.assertTrue(isinstance(k, str))
self.assertTrue(isinstance(v, float))
def test_get_topic_terms(self):
topic_terms = self.model.get_topic_terms(1)
for k, v in topic_terms:
self.assertTrue(isinstance(k, numbers.Integral))
self.assertTrue(isinstance(v, float))
def test_get_author_topics(self):
model = self.class_(
corpus, author2doc=author2doc, id2word=dictionary, num_topics=2,
passes=100, random_state=np.random.seed(0)
)
author_topics = []
for a in model.id2author.values():
author_topics.append(model.get_author_topics(a))
for topic in author_topics:
self.assertTrue(isinstance(topic, list))
for k, v in topic:
self.assertTrue(isinstance(k, int))
self.assertTrue(isinstance(v, float))
def test_term_topics(self):
model = self.class_(
corpus, author2doc=author2doc, id2word=dictionary, num_topics=2,
passes=100, random_state=np.random.seed(0)
)
# check with word_type
result = model.get_term_topics(2)
for topic_no, probability in result:
self.assertTrue(isinstance(topic_no, int))
self.assertTrue(isinstance(probability, float))
# if user has entered word instead, check with word
result = model.get_term_topics(str(model.id2word[2]))
for topic_no, probability in result:
self.assertTrue(isinstance(topic_no, int))
self.assertTrue(isinstance(probability, float))
def test_new_author_topics(self):
model = self.class_(
corpus, author2doc=author2doc, id2word=dictionary, num_topics=2,
passes=100, random_state=np.random.seed(0)
)
author2doc_newauthor = {}
author2doc_newauthor["test"] = [0, 1]
model.update(corpus=corpus[0:2], author2doc=author2doc_newauthor)
# temp save model state vars before get_new_author_topics is called
state_gamma_len = len(model.state.gamma)
author2doc_len = len(model.author2doc)
author2id_len = len(model.author2id)
id2author_len = len(model.id2author)
doc2author_len = len(model.doc2author)
new_author_topics = model.get_new_author_topics(corpus=corpus[0:2])
# sanity check
for k, v in new_author_topics:
self.assertTrue(isinstance(k, int))
self.assertTrue(isinstance(v, float))
# make sure topics are similar enough
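        # jensen_shannon() returns a divergence (0.0 for identical distributions),
        # so 1 / (1 + divergence) maps it onto a similarity in (0, 1], with 1.0 meaning identical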
similarity = 1 / (1 + jensen_shannon(model["test"], new_author_topics))
self.assertTrue(similarity >= 0.9)
# produce an error to test if rollback occurs
with self.assertRaises(TypeError):
model.get_new_author_topics(corpus=corpus[0])
# assure rollback was successful and the model state is as before
self.assertEqual(state_gamma_len, len(model.state.gamma))
self.assertEqual(author2doc_len, len(model.author2doc))
self.assertEqual(author2id_len, len(model.author2id))
self.assertEqual(id2author_len, len(model.id2author))
self.assertEqual(doc2author_len, len(model.doc2author))
def test_passes(self):
# long message includes the original error message with a custom one
self.longMessage = True
# construct what we expect when passes aren't involved
test_rhots = []
model = self.class_(id2word=dictionary, chunksize=1, num_topics=2)
def final_rhot(model):
return pow(model.offset + (1 * model.num_updates) / model.chunksize, -model.decay)
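        # `final_rhot` above mirrors the model's online learning rate:
        # rho_t = (offset + num_updates / chunksize) ** (-decay).
        # With the default offset of 1.0, a freshly constructed model has rho_t == 1.0,
        # which is what the `self.assertEqual(final_rhot(model), 1.0)` check below expects.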
# generate 5 updates to test rhot on
for _ in range(5):
model.update(corpus, author2doc)
test_rhots.append(final_rhot(model))
for passes in [1, 5, 10, 50, 100]:
model = self.class_(id2word=dictionary, chunksize=1, num_topics=2, passes=passes)
self.assertEqual(final_rhot(model), 1.0)
# make sure the rhot matches the test after each update
for test_rhot in test_rhots:
model.update(corpus, author2doc)
msg = "{}, {}, {}".format(passes, model.num_updates, model.state.numdocs)
self.assertAlmostEqual(final_rhot(model), test_rhot, msg=msg)
self.assertEqual(model.state.numdocs, len(corpus) * len(test_rhots))
self.assertEqual(model.num_updates, len(corpus) * len(test_rhots))
def test_persistence(self):
fname = get_tmpfile('gensim_models_atmodel.tst')
model = self.model
model.save(fname)
model2 = self.class_.load(fname)
self.assertEqual(model.num_topics, model2.num_topics)
self.assertTrue(np.allclose(model.expElogbeta, model2.expElogbeta))
self.assertTrue(np.allclose(model.state.gamma, model2.state.gamma))
def test_persistence_ignore(self):
fname = get_tmpfile('gensim_models_atmodel_testPersistenceIgnore.tst')
model = atmodel.AuthorTopicModel(corpus, author2doc=author2doc, num_topics=2)
model.save(fname, ignore='id2word')
model2 = atmodel.AuthorTopicModel.load(fname)
self.assertTrue(model2.id2word is None)
model.save(fname, ignore=['id2word'])
model2 = atmodel.AuthorTopicModel.load(fname)
self.assertTrue(model2.id2word is None)
def test_persistence_compressed(self):
fname = get_tmpfile('gensim_models_atmodel.tst.gz')
model = self.model
model.save(fname)
model2 = self.class_.load(fname, mmap=None)
self.assertEqual(model.num_topics, model2.num_topics)
self.assertTrue(np.allclose(model.expElogbeta, model2.expElogbeta))
# Compare Jill's topics before and after save/load.
jill_topics = model.get_author_topics('jill')
jill_topics2 = model2.get_author_topics('jill')
jill_topics = matutils.sparse2full(jill_topics, model.num_topics)
jill_topics2 = matutils.sparse2full(jill_topics2, model.num_topics)
self.assertTrue(np.allclose(jill_topics, jill_topics2))
def test_large_mmap(self):
fname = get_tmpfile('gensim_models_atmodel.tst')
model = self.model
# simulate storing large arrays separately
model.save(fname, sep_limit=0)
# test loading the large model arrays with mmap
model2 = self.class_.load(fname, mmap='r')
self.assertEqual(model.num_topics, model2.num_topics)
self.assertTrue(isinstance(model2.expElogbeta, np.memmap))
self.assertTrue(np.allclose(model.expElogbeta, model2.expElogbeta))
# Compare Jill's topics before and after save/load.
jill_topics = model.get_author_topics('jill')
jill_topics2 = model2.get_author_topics('jill')
jill_topics = matutils.sparse2full(jill_topics, model.num_topics)
jill_topics2 = matutils.sparse2full(jill_topics2, model.num_topics)
self.assertTrue(np.allclose(jill_topics, jill_topics2))
def test_large_mmap_compressed(self):
fname = get_tmpfile('gensim_models_atmodel.tst.gz')
model = self.model
# simulate storing large arrays separately
model.save(fname, sep_limit=0)
# test loading the large model arrays with mmap
self.assertRaises(IOError, self.class_.load, fname, mmap='r')
def test_dtype_backward_compatibility(self):
atmodel_3_0_1_fname = datapath('atmodel_3_0_1_model')
expected_topics = [(0, 0.068200842977296727), (1, 0.93179915702270333)]
# save model to use in test
# self.model.save(atmodel_3_0_1_fname)
# load a model saved using a 3.0.1 version of Gensim
model = self.class_.load(atmodel_3_0_1_fname)
# and test it on a predefined document
topics = model['jane']
self.assertTrue(np.allclose(expected_topics, topics))
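
# Illustrative sketch, not part of the original test suite: the minimal author-topic
# workflow exercised by the tests above. It reuses the module-level `corpus`,
# `dictionary`, `author2doc`, `corpus_new` and `author2doc_new` fixtures and is
# defined but never called, so it does not change test behaviour.
def _example_author_topic_workflow():
    # train a small model on the toy corpus
    model = atmodel.AuthorTopicModel(
        corpus, author2doc=author2doc, id2word=dictionary, num_topics=2, passes=10,
    )
    # query the topic distribution of a known author as a dense vector
    jill_topics = matutils.sparse2full(model.get_author_topics('jill'), model.num_topics)
    # fold in new documents, including documents by previously unseen authors
    model.update(corpus_new, author2doc_new)
    sally_topics = matutils.sparse2full(model.get_author_topics('sally'), model.num_topics)
    return jill_topics, sally_topics
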
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 25,102 | Python | .py | 491 | 41.94501 | 109 | 0.651545 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,059 | test_scripts.py | piskvorky_gensim/gensim/test/test_scripts.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2018 Vimig Socrates <vimig.socrates@gmail.com> heavily influenced from @AakaashRao
# Copyright (C) 2018 Manos Stergiadis <em.stergiadis@gmail.com>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking the output of gensim.scripts.
"""
from __future__ import unicode_literals
import json
import logging
import os.path
import unittest
import numpy as np
from gensim import utils
from gensim.scripts.segment_wiki import segment_all_articles, segment_and_write_all_articles
from gensim.test.utils import datapath, get_tmpfile
from gensim.scripts.word2vec2tensor import word2vec2tensor
from gensim.models import KeyedVectors
class TestSegmentWiki(unittest.TestCase):
def setUp(self):
self.fname = datapath('enwiki-latest-pages-articles1.xml-p000000010p000030302-shortened.bz2')
self.expected_title = 'Anarchism'
self.expected_section_titles = [
'Introduction',
'Etymology and terminology',
'History',
'Anarchist schools of thought',
'Internal issues and debates',
'Topics of interest',
'Criticisms',
'References',
'Further reading',
'External links'
]
def tearDown(self):
# remove all temporary test files
fname = get_tmpfile('script.tst')
extensions = ['', '.json']
for ext in extensions:
try:
os.remove(fname + ext)
except OSError:
pass
def test_segment_all_articles(self):
title, sections, interlinks = next(segment_all_articles(self.fname, include_interlinks=True))
# Check title
self.assertEqual(title, self.expected_title)
# Check section titles
section_titles = [s[0] for s in sections]
self.assertEqual(section_titles, self.expected_section_titles)
# Check text
first_section_text = sections[0][1]
first_sentence = "'''Anarchism''' is a political philosophy that advocates self-governed societies"
self.assertTrue(first_sentence in first_section_text)
# Check interlinks
self.assertEqual(len(interlinks), 685)
self.assertTrue(interlinks[0] == ("political philosophy", "political philosophy"))
self.assertTrue(interlinks[1] == ("self-governance", "self-governed"))
self.assertTrue(interlinks[2] == ("stateless society", "stateless societies"))
def test_generator_len(self):
expected_num_articles = 106
num_articles = sum(1 for x in segment_all_articles(self.fname))
self.assertEqual(num_articles, expected_num_articles)
def test_json_len(self):
tmpf = get_tmpfile('script.tst.json')
segment_and_write_all_articles(self.fname, tmpf, workers=1)
expected_num_articles = 106
with utils.open(tmpf, 'rb') as f:
num_articles = sum(1 for line in f)
self.assertEqual(num_articles, expected_num_articles)
def test_segment_and_write_all_articles(self):
tmpf = get_tmpfile('script.tst.json')
segment_and_write_all_articles(self.fname, tmpf, workers=1, include_interlinks=True)
# Get the first line from the text file we created.
with open(tmpf) as f:
first = next(f)
# decode JSON line into a Python dictionary object
article = json.loads(first)
title, section_titles, interlinks = article['title'], article['section_titles'], article['interlinks']
self.assertEqual(title, self.expected_title)
self.assertEqual(section_titles, self.expected_section_titles)
# Check interlinks
# JSON has no tuples, only lists. So, we convert lists to tuples explicitly before comparison.
self.assertEqual(len(interlinks), 685)
self.assertEqual(tuple(interlinks[0]), ("political philosophy", "political philosophy"))
self.assertEqual(tuple(interlinks[1]), ("self-governance", "self-governed"))
self.assertEqual(tuple(interlinks[2]), ("stateless society", "stateless societies"))
class TestWord2Vec2Tensor(unittest.TestCase):
def setUp(self):
self.datapath = datapath('word2vec_pre_kv_c')
self.output_folder = get_tmpfile('w2v2t_test')
self.metadata_file = self.output_folder + '_metadata.tsv'
self.tensor_file = self.output_folder + '_tensor.tsv'
self.vector_file = self.output_folder + '_vector.tsv'
def test_conversion(self):
word2vec2tensor(word2vec_model_path=self.datapath, tensor_filename=self.output_folder)
with utils.open(self.metadata_file, 'rb') as f:
metadata = f.readlines()
with utils.open(self.tensor_file, 'rb') as f:
vectors = f.readlines()
# check if number of words and vector size in tensor file line up with word2vec
with utils.open(self.datapath, 'rb') as f:
first_line = f.readline().strip()
number_words, vector_size = map(int, first_line.split(b' '))
self.assertTrue(len(metadata) == len(vectors) == number_words,
('Metadata file %s and tensor file %s imply different number of rows.'
% (self.metadata_file, self.tensor_file)))
# grab metadata and vectors from written file
metadata = [word.strip() for word in metadata]
vectors = [vector.replace(b'\t', b' ') for vector in vectors]
        # get the original word2vec KeyedVectors model
orig_model = KeyedVectors.load_word2vec_format(self.datapath, binary=False)
# check that the KV model and tensor files have the same values key-wise
for word, vector in zip(metadata, vectors):
word_string = word.decode("utf8")
vector_string = vector.decode("utf8")
vector_array = np.array(list(map(float, vector_string.split())))
np.testing.assert_almost_equal(orig_model[word_string], vector_array, decimal=5)
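
# Illustrative sketch, not part of the original tests: converting a word2vec model
# for the TensorFlow Embedding Projector. The script writes `<prefix>_metadata.tsv`
# (one word per line) and `<prefix>_tensor.tsv` (one tab-separated vector per line),
# which is what `TestWord2Vec2Tensor.test_conversion` verifies above.
def _example_word2vec2tensor(output_prefix=None):
    output_prefix = output_prefix or get_tmpfile('w2v2t_example')
    word2vec2tensor(
        word2vec_model_path=datapath('word2vec_pre_kv_c'),  # plain-text word2vec file shipped with gensim's tests
        tensor_filename=output_prefix,
    )
    return output_prefix + '_metadata.tsv', output_prefix + '_tensor.tsv'
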
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
unittest.main()
| 6,158 | Python | .py | 123 | 41.682927 | 110 | 0.667945 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,060 | test_coherencemodel.py | piskvorky_gensim/gensim/test/test_coherencemodel.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for the topic coherence pipeline (the CoherenceModel class in the models package).
"""
import logging
import unittest
import multiprocessing as mp
from functools import partial
import numpy as np
from gensim.matutils import argsort
from gensim.models.coherencemodel import CoherenceModel, BOOLEAN_DOCUMENT_BASED
from gensim.models.ldamodel import LdaModel
from gensim.test.utils import get_tmpfile, common_texts, common_dictionary, common_corpus
class TestCoherenceModel(unittest.TestCase):
# set up vars used in testing ("Deerwester" from the web tutorial)
texts = common_texts
dictionary = common_dictionary
corpus = common_corpus
def setUp(self):
# Suppose given below are the topics which two different LdaModels come up with.
# `topics1` is clearly better as it has a clear distinction between system-human
# interaction and graphs. Hence both the coherence measures for `topics1` should be
# greater.
self.topics1 = [
['human', 'computer', 'system', 'interface'],
['graph', 'minors', 'trees', 'eps']
]
self.topics2 = [
['user', 'graph', 'minors', 'system'],
['time', 'graph', 'survey', 'minors']
]
self.topics3 = [
['token', 'computer', 'system', 'interface'],
['graph', 'minors', 'trees', 'eps']
]
        # using this list, the model should be unable to interpret the topics
        # as either a list of tokens or a list of ids
self.topics4 = [
['not a token', 'not an id', 'tests using', "this list"],
['should raise', 'an error', 'to pass', 'correctly']
]
# list of topics with unseen words in the dictionary
self.topics5 = [
['aaaaa', 'bbbbb', 'ccccc', 'eeeee'],
['ddddd', 'fffff', 'ggggh', 'hhhhh']
]
self.topicIds1 = []
for topic in self.topics1:
self.topicIds1.append([self.dictionary.token2id[token] for token in topic])
self.ldamodel = LdaModel(
corpus=self.corpus, id2word=self.dictionary, num_topics=2,
passes=0, iterations=0
)
def check_coherence_measure(self, coherence):
"""Check provided topic coherence algorithm on given topics"""
if coherence in BOOLEAN_DOCUMENT_BASED:
kwargs = dict(corpus=self.corpus, dictionary=self.dictionary, coherence=coherence)
else:
kwargs = dict(texts=self.texts, dictionary=self.dictionary, coherence=coherence)
cm1 = CoherenceModel(topics=self.topics1, **kwargs)
cm2 = CoherenceModel(topics=self.topics2, **kwargs)
cm3 = CoherenceModel(topics=self.topics3, **kwargs)
cm4 = CoherenceModel(topics=self.topicIds1, **kwargs)
# check if the same topic always returns the same coherence value
cm5 = CoherenceModel(topics=[self.topics1[0]], **kwargs)
self.assertRaises(ValueError, lambda: CoherenceModel(topics=self.topics4, **kwargs))
self.assertRaises(ValueError, lambda: CoherenceModel(topics=self.topics5, **kwargs))
self.assertEqual(cm1.get_coherence(), cm4.get_coherence())
self.assertEqual(cm1.get_coherence_per_topic()[0], cm5.get_coherence())
self.assertIsInstance(cm3.get_coherence(), np.double)
self.assertGreater(cm1.get_coherence(), cm2.get_coherence())
def testUMass(self):
"""Test U_Mass topic coherence algorithm on given topics"""
self.check_coherence_measure('u_mass')
def testCv(self):
"""Test C_v topic coherence algorithm on given topics"""
self.check_coherence_measure('c_v')
def testCuci(self):
"""Test C_uci topic coherence algorithm on given topics"""
self.check_coherence_measure('c_uci')
def testCnpmi(self):
"""Test C_npmi topic coherence algorithm on given topics"""
self.check_coherence_measure('c_npmi')
def testUMassLdaModel(self):
"""Perform sanity check to see if u_mass coherence works with LDA Model"""
# Note that this is just a sanity check because LDA does not guarantee a better coherence
# value on the topics if iterations are increased. This can be seen here:
# https://gist.github.com/dsquareindia/60fd9ab65b673711c3fa00509287ddde
CoherenceModel(model=self.ldamodel, corpus=self.corpus, coherence='u_mass')
def testCvLdaModel(self):
"""Perform sanity check to see if c_v coherence works with LDA Model"""
CoherenceModel(model=self.ldamodel, texts=self.texts, coherence='c_v')
def testCw2vLdaModel(self):
"""Perform sanity check to see if c_w2v coherence works with LDAModel."""
CoherenceModel(model=self.ldamodel, texts=self.texts, coherence='c_w2v')
def testCuciLdaModel(self):
"""Perform sanity check to see if c_uci coherence works with LDA Model"""
CoherenceModel(model=self.ldamodel, texts=self.texts, coherence='c_uci')
def testCnpmiLdaModel(self):
"""Perform sanity check to see if c_npmi coherence works with LDA Model"""
CoherenceModel(model=self.ldamodel, texts=self.texts, coherence='c_npmi')
def testErrors(self):
"""Test if errors are raised on bad input"""
# not providing dictionary
self.assertRaises(
ValueError, CoherenceModel, topics=self.topics1, corpus=self.corpus,
coherence='u_mass'
)
# not providing texts for c_v and instead providing corpus
self.assertRaises(
ValueError, CoherenceModel, topics=self.topics1, corpus=self.corpus,
dictionary=self.dictionary, coherence='c_v'
)
# not providing corpus or texts for u_mass
self.assertRaises(
ValueError, CoherenceModel, topics=self.topics1, dictionary=self.dictionary,
coherence='u_mass'
)
def testProcesses(self):
get_model = partial(CoherenceModel,
topics=self.topics1, corpus=self.corpus, dictionary=self.dictionary, coherence='u_mass'
)
model, used_cpus = get_model(), mp.cpu_count() - 1
self.assertEqual(model.processes, used_cpus)
for p in range(-2, 1):
self.assertEqual(get_model(processes=p).processes, used_cpus)
for p in range(1, 4):
self.assertEqual(get_model(processes=p).processes, p)
def testPersistence(self):
fname = get_tmpfile('gensim_models_coherence.tst')
model = CoherenceModel(
topics=self.topics1, corpus=self.corpus, dictionary=self.dictionary, coherence='u_mass'
)
model.save(fname)
model2 = CoherenceModel.load(fname)
self.assertTrue(model.get_coherence() == model2.get_coherence())
def testPersistenceCompressed(self):
fname = get_tmpfile('gensim_models_coherence.tst.gz')
model = CoherenceModel(
topics=self.topics1, corpus=self.corpus, dictionary=self.dictionary, coherence='u_mass'
)
model.save(fname)
model2 = CoherenceModel.load(fname)
self.assertTrue(model.get_coherence() == model2.get_coherence())
def testPersistenceAfterProbabilityEstimationUsingCorpus(self):
fname = get_tmpfile('gensim_similarities.tst.pkl')
model = CoherenceModel(
topics=self.topics1, corpus=self.corpus, dictionary=self.dictionary, coherence='u_mass'
)
model.estimate_probabilities()
model.save(fname)
model2 = CoherenceModel.load(fname)
self.assertIsNotNone(model2._accumulator)
self.assertTrue(model.get_coherence() == model2.get_coherence())
def testPersistenceAfterProbabilityEstimationUsingTexts(self):
fname = get_tmpfile('gensim_similarities.tst.pkl')
model = CoherenceModel(
topics=self.topics1, texts=self.texts, dictionary=self.dictionary, coherence='c_v'
)
model.estimate_probabilities()
model.save(fname)
model2 = CoherenceModel.load(fname)
self.assertIsNotNone(model2._accumulator)
self.assertTrue(model.get_coherence() == model2.get_coherence())
def testAccumulatorCachingSameSizeTopics(self):
kwargs = dict(corpus=self.corpus, dictionary=self.dictionary, coherence='u_mass')
cm1 = CoherenceModel(topics=self.topics1, **kwargs)
cm1.estimate_probabilities()
accumulator = cm1._accumulator
self.assertIsNotNone(accumulator)
cm1.topics = self.topics1
self.assertEqual(accumulator, cm1._accumulator)
cm1.topics = self.topics2
self.assertEqual(None, cm1._accumulator)
def testAccumulatorCachingTopicSubsets(self):
kwargs = dict(corpus=self.corpus, dictionary=self.dictionary, coherence='u_mass')
cm1 = CoherenceModel(topics=self.topics1, **kwargs)
cm1.estimate_probabilities()
accumulator = cm1._accumulator
self.assertIsNotNone(accumulator)
cm1.topics = [t[:2] for t in self.topics1]
self.assertEqual(accumulator, cm1._accumulator)
cm1.topics = self.topics1
self.assertEqual(accumulator, cm1._accumulator)
def testAccumulatorCachingWithModelSetting(self):
kwargs = dict(corpus=self.corpus, dictionary=self.dictionary, coherence='u_mass')
cm1 = CoherenceModel(topics=self.topics1, **kwargs)
cm1.estimate_probabilities()
self.assertIsNotNone(cm1._accumulator)
cm1.model = self.ldamodel
topics = []
for topic in self.ldamodel.state.get_lambda():
bestn = argsort(topic, topn=cm1.topn, reverse=True)
topics.append(bestn)
self.assertTrue(np.array_equal(topics, cm1.topics))
self.assertIsNone(cm1._accumulator)
def testAccumulatorCachingWithTopnSettingGivenTopics(self):
kwargs = dict(corpus=self.corpus, dictionary=self.dictionary, topn=5, coherence='u_mass')
cm1 = CoherenceModel(topics=self.topics1, **kwargs)
cm1.estimate_probabilities()
self.assertIsNotNone(cm1._accumulator)
accumulator = cm1._accumulator
topics_before = cm1._topics
cm1.topn = 3
self.assertEqual(accumulator, cm1._accumulator)
self.assertEqual(3, len(cm1.topics[0]))
self.assertEqual(topics_before, cm1._topics)
# Topics should not have been truncated, so topn settings below 5 should work
cm1.topn = 4
self.assertEqual(accumulator, cm1._accumulator)
self.assertEqual(4, len(cm1.topics[0]))
self.assertEqual(topics_before, cm1._topics)
with self.assertRaises(ValueError):
cm1.topn = 6 # can't expand topics any further without model
def testAccumulatorCachingWithTopnSettingGivenModel(self):
kwargs = dict(corpus=self.corpus, dictionary=self.dictionary, topn=5, coherence='u_mass')
cm1 = CoherenceModel(model=self.ldamodel, **kwargs)
cm1.estimate_probabilities()
self.assertIsNotNone(cm1._accumulator)
accumulator = cm1._accumulator
topics_before = cm1._topics
cm1.topn = 3
self.assertEqual(accumulator, cm1._accumulator)
self.assertEqual(3, len(cm1.topics[0]))
self.assertEqual(topics_before, cm1._topics)
cm1.topn = 6 # should be able to expand given the model
self.assertEqual(6, len(cm1.topics[0]))
def testCompareCoherenceForTopics(self):
topics = [self.topics1, self.topics2]
cm = CoherenceModel.for_topics(
topics, dictionary=self.dictionary, texts=self.texts, coherence='c_v')
self.assertIsNotNone(cm._accumulator)
# Accumulator should have all relevant IDs.
for topic_list in topics:
cm.topics = topic_list
self.assertIsNotNone(cm._accumulator)
(coherence_topics1, coherence1), (coherence_topics2, coherence2) = \
cm.compare_model_topics(topics)
self.assertAlmostEqual(np.mean(coherence_topics1), coherence1, 4)
self.assertAlmostEqual(np.mean(coherence_topics2), coherence2, 4)
self.assertGreater(coherence1, coherence2)
def testCompareCoherenceForModels(self):
models = [self.ldamodel, self.ldamodel]
cm = CoherenceModel.for_models(
models, dictionary=self.dictionary, texts=self.texts, coherence='c_v')
self.assertIsNotNone(cm._accumulator)
# Accumulator should have all relevant IDs.
for model in models:
cm.model = model
self.assertIsNotNone(cm._accumulator)
(coherence_topics1, coherence1), (coherence_topics2, coherence2) = \
cm.compare_models(models)
self.assertAlmostEqual(np.mean(coherence_topics1), coherence1, 4)
self.assertAlmostEqual(np.mean(coherence_topics2), coherence2, 4)
self.assertAlmostEqual(coherence1, coherence2, places=4)
def testEmptyList(self):
"""Test if CoherenceModel works with document without tokens"""
texts = self.texts + [[]]
cm = CoherenceModel(model=self.ldamodel, texts=texts, coherence="c_v", processes=1)
cm.get_coherence()
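
# Illustrative sketch, not part of the original tests: the comparison that
# `check_coherence_measure` makes explicit. A topic list with a clean split between
# the human/computer and graph themes should score higher than a mixed-up one.
def _example_compare_topic_coherence():
    topics_good = [
        ['human', 'computer', 'system', 'interface'],
        ['graph', 'minors', 'trees', 'eps'],
    ]
    topics_mixed = [
        ['user', 'graph', 'minors', 'system'],
        ['time', 'graph', 'survey', 'minors'],
    ]
    cm_good = CoherenceModel(
        topics=topics_good, texts=common_texts, dictionary=common_dictionary, coherence='c_v')
    cm_mixed = CoherenceModel(
        topics=topics_mixed, texts=common_texts, dictionary=common_dictionary, coherence='c_v')
    return cm_good.get_coherence(), cm_mixed.get_coherence()
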
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 13,506 | Python | .py | 267 | 41.846442 | 99 | 0.674198 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,061 | test_text_analysis.py | piskvorky_gensim/gensim/test/test_text_analysis.py |
import logging
import unittest
from gensim.corpora.dictionary import Dictionary
from gensim.topic_coherence.text_analysis import (
InvertedIndexAccumulator, WordOccurrenceAccumulator, ParallelWordOccurrenceAccumulator,
CorpusAccumulator)
from gensim.test.utils import common_texts
class BaseTestCases:
class TextAnalyzerTestBase(unittest.TestCase):
texts = [
['this', 'is', 'a'],
['test', 'document'],
['this', 'test', 'document'],
['test', 'test', 'this']
]
token2id = {
'this': 10,
'is': 15,
'a': 20,
'test': 21,
'document': 17
}
dictionary = Dictionary(texts)
dictionary.token2id = token2id
dictionary.id2token = {v: k for k, v in token2id.items()}
top_ids = set(token2id.values())
texts2 = common_texts + [['user', 'user']]
dictionary2 = Dictionary(texts2)
dictionary2.id2token = {v: k for k, v in dictionary2.token2id.items()}
top_ids2 = set(dictionary2.token2id.values())
accumulator_cls = None
def init_accumulator(self):
return self.accumulator_cls(self.top_ids, self.dictionary)
def init_accumulator2(self):
return self.accumulator_cls(self.top_ids2, self.dictionary2)
def test_occurrence_counting(self):
accumulator = self.init_accumulator().accumulate(self.texts, 3)
self.assertEqual(3, accumulator.get_occurrences("this"))
self.assertEqual(1, accumulator.get_occurrences("is"))
self.assertEqual(1, accumulator.get_occurrences("a"))
self.assertEqual(2, accumulator.get_co_occurrences("test", "document"))
self.assertEqual(2, accumulator.get_co_occurrences("test", "this"))
self.assertEqual(1, accumulator.get_co_occurrences("is", "a"))
def test_occurrence_counting2(self):
accumulator = self.init_accumulator2().accumulate(self.texts2, 110)
self.assertEqual(2, accumulator.get_occurrences("human"))
self.assertEqual(4, accumulator.get_occurrences("user"))
self.assertEqual(3, accumulator.get_occurrences("graph"))
self.assertEqual(3, accumulator.get_occurrences("trees"))
cases = [
(1, ("human", "interface")),
(2, ("system", "user")),
(2, ("graph", "minors")),
(2, ("graph", "trees")),
(4, ("user", "user")),
(3, ("graph", "graph")),
(0, ("time", "eps"))
]
for expected_count, (word1, word2) in cases:
# Verify co-occurrence counts are correct, regardless of word order.
self.assertEqual(expected_count, accumulator.get_co_occurrences(word1, word2))
self.assertEqual(expected_count, accumulator.get_co_occurrences(word2, word1))
# Also verify that using token ids instead of tokens works the same.
word_id1 = self.dictionary2.token2id[word1]
word_id2 = self.dictionary2.token2id[word2]
self.assertEqual(expected_count, accumulator.get_co_occurrences(word_id1, word_id2))
self.assertEqual(expected_count, accumulator.get_co_occurrences(word_id2, word_id1))
        def test_occurrences_for_irrelevant_words(self):
accumulator = self.init_accumulator().accumulate(self.texts, 2)
with self.assertRaises(KeyError):
accumulator.get_occurrences("irrelevant")
with self.assertRaises(KeyError):
accumulator.get_co_occurrences("test", "irrelevant")
class TestInvertedIndexAccumulator(BaseTestCases.TextAnalyzerTestBase):
accumulator_cls = InvertedIndexAccumulator
def test_accumulate1(self):
accumulator = InvertedIndexAccumulator(self.top_ids, self.dictionary)\
.accumulate(self.texts, 2)
# [['this', 'is'], ['is', 'a'], ['test', 'document'], ['this', 'test'],
# ['test', 'document'], ['test', 'test'], ['test', 'this']]
inverted_index = accumulator.index_to_dict()
expected = {
10: {0, 3, 6},
15: {0, 1},
20: {1},
21: {2, 3, 4, 5, 6},
17: {2, 4}
}
self.assertDictEqual(expected, inverted_index)
def test_accumulate2(self):
accumulator = InvertedIndexAccumulator(self.top_ids, self.dictionary)\
.accumulate(self.texts, 3)
# [['this', 'is', 'a'], ['test', 'document'], ['this', 'test', 'document'],
# ['test', 'test', 'this']
inverted_index = accumulator.index_to_dict()
expected = {
10: {0, 2, 3},
15: {0},
20: {0},
21: {1, 2, 3},
17: {1, 2}
}
self.assertDictEqual(expected, inverted_index)
class TestWordOccurrenceAccumulator(BaseTestCases.TextAnalyzerTestBase):
accumulator_cls = WordOccurrenceAccumulator
class TestParallelWordOccurrenceAccumulator(BaseTestCases.TextAnalyzerTestBase):
accumulator_cls = ParallelWordOccurrenceAccumulator
def init_accumulator(self):
return self.accumulator_cls(2, self.top_ids, self.dictionary)
def init_accumulator2(self):
return self.accumulator_cls(2, self.top_ids2, self.dictionary2)
class TestCorpusAnalyzer(unittest.TestCase):
def setUp(self):
self.dictionary = BaseTestCases.TextAnalyzerTestBase.dictionary
self.top_ids = BaseTestCases.TextAnalyzerTestBase.top_ids
self.corpus = \
[self.dictionary.doc2bow(doc) for doc in BaseTestCases.TextAnalyzerTestBase.texts]
def test_index_accumulation(self):
accumulator = CorpusAccumulator(self.top_ids).accumulate(self.corpus)
inverted_index = accumulator.index_to_dict()
expected = {
10: {0, 2, 3},
15: {0},
20: {0},
21: {1, 2, 3},
17: {1, 2}
}
self.assertDictEqual(expected, inverted_index)
self.assertEqual(3, accumulator.get_occurrences(10))
self.assertEqual(2, accumulator.get_occurrences(17))
self.assertEqual(2, accumulator.get_co_occurrences(10, 21))
self.assertEqual(1, accumulator.get_co_occurrences(10, 17))
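
# Illustrative sketch, not part of the original tests: counting word occurrences and
# co-occurrences over a sliding window with one of the accumulators exercised above.
# It reuses the toy fixtures defined on `BaseTestCases.TextAnalyzerTestBase`.
def _example_word_occurrence_counts(window_size=2):
    base = BaseTestCases.TextAnalyzerTestBase
    accumulator = WordOccurrenceAccumulator(base.top_ids, base.dictionary)
    accumulator.accumulate(base.texts, window_size)
    occurrences = accumulator.get_occurrences("this")
    co_occurrences = accumulator.get_co_occurrences("test", "document")
    return occurrences, co_occurrences
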
if __name__ == '__main__':
logging.root.setLevel(logging.WARNING)
unittest.main()
| 6,474 | Python | .py | 135 | 37.392593 | 100 | 0.613982 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,062 | test_indirect_confirmation.py | piskvorky_gensim/gensim/test/test_indirect_confirmation.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for indirect confirmation measures in the indirect_confirmation_measure module.
"""
import logging
import unittest
import numpy as np
from gensim.corpora.dictionary import Dictionary
from gensim.topic_coherence import indirect_confirmation_measure
from gensim.topic_coherence import text_analysis
class TestIndirectConfirmation(unittest.TestCase):
def setUp(self):
        # Set up a toy example for better understanding and testing of this module.
        # See the module documentation for the mathematical formulas.
self.topics = [np.array([1, 2])]
# Result from s_one_set segmentation:
self.segmentation = [[(1, np.array([1, 2])), (2, np.array([1, 2]))]]
self.gamma = 1
self.measure = 'nlr'
self.dictionary = Dictionary()
self.dictionary.id2token = {1: 'fake', 2: 'tokens'}
def test_cosine_similarity(self):
"""Test cosine_similarity()"""
accumulator = text_analysis.InvertedIndexAccumulator({1, 2}, self.dictionary)
accumulator._inverted_index = {0: {2, 3, 4}, 1: {3, 5}}
accumulator._num_docs = 5
obtained = indirect_confirmation_measure.cosine_similarity(
self.segmentation, accumulator, self.topics, self.measure, self.gamma)
# The steps involved in this calculation are as follows:
# 1. Take (1, array([1, 2]). Take w' which is 1.
# 2. Calculate nlr(1, 1), nlr(1, 2). This is our first vector.
# 3. Take w* which is array([1, 2]).
# 4. Calculate nlr(1, 1) + nlr(2, 1). Calculate nlr(1, 2), nlr(2, 2). This is our second vector.
# 5. Find out cosine similarity between these two vectors.
# 6. Similarly for the second segmentation.
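        # (A standalone numpy version of step 5 is sketched in `_example_cosine_similarity`
        # near the end of this module.)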
expected = (0.6230 + 0.6230) / 2. # To account for EPSILON approximation
self.assertAlmostEqual(expected, obtained[0], 4)
mean, std = indirect_confirmation_measure.cosine_similarity(
self.segmentation, accumulator, self.topics, self.measure, self.gamma,
with_std=True)[0]
self.assertAlmostEqual(expected, mean, 4)
self.assertAlmostEqual(0.0, std, 1)
def test_word2vec_similarity(self):
"""Sanity check word2vec_similarity."""
accumulator = text_analysis.WordVectorsAccumulator({1, 2}, self.dictionary)
accumulator.accumulate([
['fake', 'tokens'],
['tokens', 'fake']
], 5)
mean, std = indirect_confirmation_measure.word2vec_similarity(
self.segmentation, accumulator, with_std=True)[0]
self.assertNotEqual(0.0, mean)
self.assertNotEqual(0.0, std)
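
# Illustrative sketch, not part of the original tests: step 5 of the calculation
# described in `test_cosine_similarity`, i.e. the plain cosine similarity between two
# context vectors, written out with numpy only. The vectors below are made up purely
# for illustration; they are not the NLR vectors used in the test above.
def _example_cosine_similarity():
    first_vector = np.array([0.8, 0.5])
    second_vector = np.array([1.3, 0.9])
    return np.dot(first_vector, second_vector) / (
        np.linalg.norm(first_vector) * np.linalg.norm(second_vector))
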
if __name__ == '__main__':
logging.root.setLevel(logging.WARNING)
unittest.main()
| 2,931 | Python | .py | 60 | 41.683333 | 104 | 0.662583 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,063 | __init__.py | piskvorky_gensim/gensim/test/__init__.py |
"""
This package contains automated code tests for all other gensim packages.
"""
| 82 | Python | .py | 3 | 26.333333 | 73 | 0.78481 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,064 | simspeed.py | piskvorky_gensim/gensim/test/simspeed.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
USAGE: %(program)s CORPUS_DENSE.mm CORPUS_SPARSE.mm [NUMDOCS]
Run speed test of similarity queries. Only use the first NUMDOCS documents of \
each corpus for testing (or use all if no NUMDOCS is given).
The two sample corpora can be downloaded from http://nlp.fi.muni.cz/projekty/gensim/wikismall.tgz
Example: ./simspeed.py wikismall.dense.mm wikismall.sparse.mm 5000
"""
import logging
import sys
import itertools
import os
import math
from time import time
import numpy as np
import gensim
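
# Illustrative sketch, not part of the original benchmark: the basic query pattern
# being timed below. Querying an index with a single document vector returns its
# similarity to every indexed document; setting `num_best` returns only the top hits
# as (document id, similarity) pairs.
def _example_query(index, query_vec, top_n=10):
    index.num_best = None
    all_sims = index[query_vec]   # one similarity score per indexed document
    index.num_best = top_n
    top_sims = index[query_vec]   # [(doc id, similarity), ...] for the top_n most similar
    return all_sims, top_sims
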
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
logging.info("running %s", " ".join(sys.argv))
# check and process cmdline input
program = os.path.basename(sys.argv[0])
if len(sys.argv) < 3:
print(globals()['__doc__'] % locals())
sys.exit(1)
corpus_dense = gensim.corpora.MmCorpus(sys.argv[1])
corpus_sparse = gensim.corpora.MmCorpus(sys.argv[2])
NUMTERMS = corpus_sparse.num_terms
if len(sys.argv) > 3:
NUMDOCS = int(sys.argv[3])
corpus_dense = list(itertools.islice(corpus_dense, NUMDOCS))
corpus_sparse = list(itertools.islice(corpus_sparse, NUMDOCS))
# create the query index to be tested (one for dense input, one for sparse)
index_dense = gensim.similarities.MatrixSimilarity(corpus_dense)
index_sparse = gensim.similarities.SparseMatrixSimilarity(corpus_sparse, num_terms=NUMTERMS)
density = 100.0 * index_sparse.index.nnz / (index_sparse.index.shape[0] * index_sparse.index.shape[1])
# Difference between test #1 and test #3 is that the query in #1 is a gensim iterable
# corpus, while in #3, the index is used directly (np arrays). So #1 is slower,
# because it needs to convert sparse vecs to np arrays and normalize them to
    # unit length (extra work), which #3 avoids.
query = list(itertools.islice(corpus_dense, 1000))
logging.info(
"test 1 (dense): dense corpus of %i docs vs. index (%i documents, %i dense features)",
len(query), len(index_dense), index_dense.num_features
)
for chunksize in [1, 4, 8, 16, 64, 128, 256, 512, 1024]:
start = time()
if chunksize > 1:
sims = []
for chunk in gensim.utils.chunkize_serial(query, chunksize):
sim = index_dense[chunk]
sims.extend(sim)
else:
sims = [index_dense[vec] for vec in query]
assert len(sims) == len(query) # make sure we have one result for each query document
taken = time() - start
queries = math.ceil(1.0 * len(query) / chunksize)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s)",
chunksize, taken, len(query) / taken, queries / taken
)
# Same comment as for test #1 but vs. test #4.
query = list(itertools.islice(corpus_sparse, 1000))
logging.info(
"test 2 (sparse): sparse corpus of %i docs vs. sparse index (%i documents, %i features, %.2f%% density)",
len(query), len(corpus_sparse), index_sparse.index.shape[1], density
)
for chunksize in [1, 5, 10, 100, 500, 1000]:
start = time()
if chunksize > 1:
sims = []
for chunk in gensim.utils.chunkize_serial(query, chunksize):
sim = index_sparse[chunk]
sims.extend(sim)
else:
sims = [index_sparse[vec] for vec in query]
assert len(sims) == len(query) # make sure we have one result for each query document
taken = time() - start
queries = math.ceil(1.0 * len(query) / chunksize)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s)",
chunksize, taken, len(query) / taken, queries / taken
)
logging.info(
"test 3 (dense): similarity of all vs. all (%i documents, %i dense features)",
len(corpus_dense), index_dense.num_features
)
for chunksize in [0, 1, 4, 8, 16, 64, 128, 256, 512, 1024]:
index_dense.chunksize = chunksize
start = time()
# `sims` stores the entire N x N sim matrix in memory!
        # this is not necessary, but I added it to test the accuracy of the result
# (=report mean diff below)
sims = list(index_dense)
taken = time() - start
sims = np.asarray(sims)
if chunksize == 0:
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s)",
chunksize, taken, len(corpus_dense) / taken
)
unchunksizeed = sims
else:
queries = math.ceil(1.0 * len(corpus_dense) / chunksize)
diff = gensim.matutils.mean_absolute_difference(unchunksizeed, sims)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s), meandiff=%.3e",
chunksize, taken, len(corpus_dense) / taken, queries / taken, diff
)
del sims
index_dense.num_best = 10
logging.info("test 4 (dense): as above, but only ask for the top-10 most similar for each document")
for chunksize in [0, 1, 4, 8, 16, 64, 128, 256, 512, 1024]:
index_dense.chunksize = chunksize
start = time()
sims = list(index_dense)
taken = time() - start
if chunksize == 0:
queries = len(corpus_dense)
else:
queries = math.ceil(1.0 * len(corpus_dense) / chunksize)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s)",
chunksize, taken, len(corpus_dense) / taken, queries / taken
)
index_dense.num_best = None
logging.info(
"test 5 (sparse): similarity of all vs. all (%i documents, %i features, %.2f%% density)",
len(corpus_sparse), index_sparse.index.shape[1], density
)
for chunksize in [0, 5, 10, 100, 500, 1000, 5000]:
index_sparse.chunksize = chunksize
start = time()
sims = list(index_sparse)
taken = time() - start
sims = np.asarray(sims)
if chunksize == 0:
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s)",
chunksize, taken, len(corpus_sparse) / taken
)
unchunksizeed = sims
else:
queries = math.ceil(1.0 * len(corpus_sparse) / chunksize)
diff = gensim.matutils.mean_absolute_difference(unchunksizeed, sims)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s), meandiff=%.3e",
chunksize, taken, len(corpus_sparse) / taken, queries / taken, diff
)
del sims
index_sparse.num_best = 10
logging.info("test 6 (sparse): as above, but only ask for the top-10 most similar for each document")
for chunksize in [0, 5, 10, 100, 500, 1000, 5000]:
index_sparse.chunksize = chunksize
start = time()
sims = list(index_sparse)
taken = time() - start
if chunksize == 0:
queries = len(corpus_sparse)
else:
queries = math.ceil(1.0 * len(corpus_sparse) / chunksize)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s)",
chunksize, taken, len(corpus_sparse) / taken, queries / taken
)
index_sparse.num_best = None
logging.info("finished running %s", program)
| 7,650 | Python | .py | 170 | 36.482353 | 113 | 0.608684 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,065 | test_ldaseqmodel.py | piskvorky_gensim/gensim/test/test_ldaseqmodel.py |
"""
Tests to check DTM math functions and Topic-Word, Doc-Topic proportions.
"""
import unittest
import logging
import numpy as np # for arrays, array broadcasting etc.
from gensim.models import ldaseqmodel
from gensim.corpora import Dictionary
from gensim.test.utils import datapath
class TestLdaSeq(unittest.TestCase):
# we are setting up a DTM model and fitting it, and checking topic-word and doc-topic results.
def setUp(self):
texts = [
[u'senior', u'studios', u'studios', u'studios', u'creators', u'award', u'mobile', u'currently',
u'challenges', u'senior', u'summary', u'senior', u'motivated', u'creative', u'senior'],
[u'performs', u'engineering', u'tasks', u'infrastructure', u'focusing', u'primarily',
u'programming', u'interaction', u'designers', u'engineers', u'leadership', u'teams',
u'teams', u'crews', u'responsibilities', u'engineering', u'quality', u'functional',
u'functional', u'teams', u'organizing', u'prioritizing', u'technical', u'decisions',
u'engineering', u'participates', u'participates', u'reviews', u'participates',
u'hiring', u'conducting', u'interviews'],
[u'feedback', u'departments', u'define', u'focusing', u'engineering', u'teams', u'crews',
u'facilitate', u'engineering', u'departments', u'deadlines', u'milestones', u'typically',
u'spends', u'designing', u'developing', u'updating', u'bugs', u'mentoring', u'engineers',
u'define', u'schedules', u'milestones', u'participating'],
[u'reviews', u'interviews', u'sized', u'teams', u'interacts', u'disciplines', u'knowledge',
u'skills', u'knowledge', u'knowledge', u'xcode', u'scripting', u'debugging', u'skills',
u'skills', u'knowledge', u'disciplines', u'animation', u'networking', u'expertise',
u'competencies', u'oral', u'skills', u'management', u'skills', u'proven', u'effectively',
u'teams', u'deadline', u'environment', u'bachelor', u'minimum', u'shipped', u'leadership',
u'teams', u'location', u'resumes', u'jobs', u'candidates', u'openings', u'jobs'],
[u'maryland', u'client', u'producers', u'electricity', u'operates', u'storage', u'utility',
u'retail', u'customers', u'engineering', u'consultant', u'maryland', u'summary', u'technical',
u'technology', u'departments', u'expertise', u'maximizing', u'output', u'reduces', u'operating',
u'participates', u'areas', u'engineering', u'conducts', u'testing', u'solve', u'supports',
u'environmental', u'understands', u'objectives', u'operates', u'responsibilities', u'handles',
u'complex', u'engineering', u'aspects', u'monitors', u'quality', u'proficiency', u'optimization',
u'recommendations', u'supports', u'personnel', u'troubleshooting', u'commissioning', u'startup',
u'shutdown', u'supports', u'procedure', u'operating', u'units', u'develops', u'simulations',
u'troubleshooting', u'tests', u'enhancing', u'solving', u'develops', u'estimates', u'schedules',
u'scopes', u'understands', u'technical', u'management', u'utilize', u'routine', u'conducts',
u'hazards', u'utilizing', u'hazard', u'operability', u'methodologies', u'participates', u'startup',
u'reviews', u'pssr', u'participate', u'teams', u'participate', u'regulatory', u'audits', u'define',
u'scopes', u'budgets', u'schedules', u'technical', u'management', u'environmental', u'awareness',
u'interfacing', u'personnel', u'interacts', u'regulatory', u'departments', u'input', u'objectives',
u'identifying', u'introducing', u'concepts', u'solutions', u'peers', u'customers', u'coworkers',
u'knowledge', u'skills', u'engineering', u'quality', u'engineering'],
[u'commissioning', u'startup', u'knowledge', u'simulators', u'technologies', u'knowledge',
u'engineering', u'techniques', u'disciplines', u'leadership', u'skills', u'proven',
u'engineers', u'oral', u'skills', u'technical', u'skills', u'analytically', u'solve',
u'complex', u'interpret', u'proficiency', u'simulation', u'knowledge', u'applications',
u'manipulate', u'applications', u'engineering'],
[u'calculations', u'programs', u'matlab', u'excel', u'independently', u'environment',
u'proven', u'skills', u'effectively', u'multiple', u'tasks', u'planning', u'organizational',
u'management', u'skills', u'rigzone', u'jobs', u'developer', u'exceptional', u'strategies',
u'junction', u'exceptional', u'strategies', u'solutions', u'solutions', u'biggest',
u'insurers', u'operates', u'investment'],
[u'vegas', u'tasks', u'electrical', u'contracting', u'expertise', u'virtually', u'electrical',
u'developments', u'institutional', u'utilities', u'technical', u'experts', u'relationships',
u'credibility', u'contractors', u'utility', u'customers', u'customer', u'relationships',
u'consistently', u'innovations', u'profile', u'construct', u'envision', u'dynamic', u'complex',
u'electrical', u'management', u'grad', u'internship', u'electrical', u'engineering',
u'infrastructures', u'engineers', u'documented', u'management', u'engineering',
u'quality', u'engineering', u'electrical', u'engineers', u'complex', u'distribution',
u'grounding', u'estimation', u'testing', u'procedures', u'voltage', u'engineering'],
[u'troubleshooting', u'installation', u'documentation', u'bsee', u'certification',
u'electrical', u'voltage', u'cabling', u'electrical', u'engineering', u'candidates',
u'electrical', u'internships', u'oral', u'skills', u'organizational', u'prioritization',
u'skills', u'skills', u'excel', u'cadd', u'calculation', u'autocad', u'mathcad',
u'skills', u'skills', u'customer', u'relationships', u'solving', u'ethic', u'motivation',
u'tasks', u'budget', u'affirmative', u'diversity', u'workforce', u'gender', u'orientation',
u'disability', u'disabled', u'veteran', u'vietnam', u'veteran', u'qualifying', u'veteran',
u'diverse', u'candidates', u'respond', u'developing', u'workplace', u'reflects', u'diversity',
u'communities', u'reviews', u'electrical', u'contracting', u'southwest', u'electrical', u'contractors'],
[u'intern', u'electrical', u'engineering', u'idexx', u'laboratories', u'validating', u'idexx',
u'integrated', u'hardware', u'entails', u'planning', u'debug', u'validation', u'engineers',
u'validation', u'methodologies', u'healthcare', u'platforms', u'brightest', u'solve',
u'challenges', u'innovation', u'technology', u'idexx', u'intern', u'idexx', u'interns',
u'supplement', u'interns', u'teams', u'roles', u'competitive', u'interns', u'idexx',
u'interns', u'participate', u'internships', u'mentors', u'seminars', u'topics', u'leadership',
u'workshops', u'relevant', u'planning', u'topics', u'intern', u'presentations', u'mixers',
u'applicants', u'ineligible', u'laboratory', u'compliant', u'idexx', u'laboratories', u'healthcare',
u'innovation', u'practicing', u'veterinarians', u'diagnostic', u'technology', u'idexx', u'enhance',
u'veterinarians', u'efficiency', u'economically', u'idexx', u'worldwide', u'diagnostic', u'tests',
u'tests', u'quality', u'headquartered', u'idexx', u'laboratories', u'employs', u'customers',
u'qualifications', u'applicants', u'idexx', u'interns', u'potential', u'demonstrated', u'portfolio',
u'recommendation', u'resumes', u'marketing', u'location', u'americas', u'verification', u'validation',
u'schedule', u'overtime', u'idexx', u'laboratories', u'reviews', u'idexx', u'laboratories',
u'nasdaq', u'healthcare', u'innovation', u'practicing', u'veterinarians'],
[u'location', u'duration', u'temp', u'verification', u'validation', u'tester', u'verification',
u'validation', u'middleware', u'specifically', u'testing', u'applications', u'clinical',
u'laboratory', u'regulated', u'environment', u'responsibilities', u'complex', u'hardware',
u'testing', u'clinical', u'analyzers', u'laboratory', u'graphical', u'interfaces', u'complex',
u'sample', u'sequencing', u'protocols', u'developers', u'correction', u'tracking',
u'tool', u'timely', u'troubleshoot', u'testing', u'functional', u'manual',
u'automated', u'participate', u'ongoing'],
[u'testing', u'coverage', u'planning', u'documentation', u'testing', u'validation',
u'corrections', u'monitor', u'implementation', u'recurrence', u'operating', u'statistical',
u'quality', u'testing', u'global', u'multi', u'teams', u'travel', u'skills', u'concepts',
u'waterfall', u'agile', u'methodologies', u'debugging', u'skills', u'complex', u'automated',
u'instrumentation', u'environment', u'hardware', u'mechanical', u'components', u'tracking',
u'lifecycle', u'management', u'quality', u'organize', u'define', u'priorities', u'organize',
u'supervision', u'aggressive', u'deadlines', u'ambiguity', u'analyze', u'complex', u'situations',
u'concepts', u'technologies', u'verbal', u'skills', u'effectively', u'technical', u'clinical',
u'diverse', u'strategy', u'clinical', u'chemistry', u'analyzer', u'laboratory', u'middleware',
u'basic', u'automated', u'testing', u'biomedical', u'engineering', u'technologists',
u'laboratory', u'technology', u'availability', u'click', u'attach'],
[u'scientist', u'linux', u'asrc', u'scientist', u'linux', u'asrc', u'technology',
u'solutions', u'subsidiary', u'asrc', u'engineering', u'technology', u'contracts'],
[u'multiple', u'agencies', u'scientists', u'engineers', u'management', u'personnel',
u'allows', u'solutions', u'complex', u'aeronautics', u'aviation', u'management', u'aviation',
u'engineering', u'hughes', u'technical', u'technical', u'aviation', u'evaluation',
u'engineering', u'management', u'technical', u'terminal', u'surveillance', u'programs',
u'currently', u'scientist', u'travel', u'responsibilities', u'develops', u'technology',
u'modifies', u'technical', u'complex', u'reviews', u'draft', u'conformity', u'completeness',
u'testing', u'interface', u'hardware', u'regression', u'impact', u'reliability',
u'maintainability', u'factors', u'standardization', u'skills', u'travel', u'programming',
u'linux', u'environment', u'cisco', u'knowledge', u'terminal', u'environment', u'clearance',
u'clearance', u'input', u'output', u'digital', u'automatic', u'terminal', u'management',
u'controller', u'termination', u'testing', u'evaluating', u'policies', u'procedure', u'interface',
u'installation', u'verification', u'certification', u'core', u'avionic', u'programs', u'knowledge',
u'procedural', u'testing', u'interfacing', u'hardware', u'regression', u'impact',
u'reliability', u'maintainability', u'factors', u'standardization', u'missions', u'asrc', u'subsidiaries',
u'affirmative', u'employers', u'applicants', u'disability', u'veteran', u'technology', u'location',
u'airport', u'bachelor', u'schedule', u'travel', u'contributor', u'management', u'asrc', u'reviews'],
[u'technical', u'solarcity', u'niche', u'vegas', u'overview', u'resolving', u'customer',
u'clients', u'expanding', u'engineers', u'developers', u'responsibilities', u'knowledge',
u'planning', u'adapt', u'dynamic', u'environment', u'inventive', u'creative', u'solarcity',
u'lifecycle', u'responsibilities', u'technical', u'analyzing', u'diagnosing', u'troubleshooting',
u'customers', u'ticketing', u'console', u'escalate', u'knowledge', u'engineering', u'timely',
u'basic', u'phone', u'functionality', u'customer', u'tracking', u'knowledgebase', u'rotation',
u'configure', u'deployment', u'sccm', u'technical', u'deployment', u'deploy', u'hardware',
u'solarcity', u'bachelor', u'knowledge', u'dell', u'laptops', u'analytical', u'troubleshooting',
u'solving', u'skills', u'knowledge', u'databases', u'preferably', u'server', u'preferably',
u'monitoring', u'suites', u'documentation', u'procedures', u'knowledge', u'entries', u'verbal',
u'skills', u'customer', u'skills', u'competitive', u'solar', u'package', u'insurance', u'vacation',
u'savings', u'referral', u'eligibility', u'equity', u'performers', u'solarcity', u'affirmative',
u'diversity', u'workplace', u'applicants', u'orientation', u'disability', u'veteran', u'careerrookie'],
[u'embedded', u'exelis', u'junction', u'exelis', u'embedded', u'acquisition', u'networking',
u'capabilities', u'classified', u'customer', u'motivated', u'develops', u'tests',
u'innovative', u'solutions', u'minimal', u'supervision', u'paced', u'environment', u'enjoys',
u'assignments', u'interact', u'multi', u'disciplined', u'challenging', u'focused', u'embedded',
u'developments', u'spanning', u'engineering', u'lifecycle', u'specification', u'enhancement',
u'applications', u'embedded', u'freescale', u'applications', u'android', u'platforms',
u'interface', u'customers', u'developers', u'refine', u'specifications', u'architectures'],
[u'java', u'programming', u'scripts', u'python', u'debug', u'debugging', u'emulators',
u'regression', u'revisions', u'specialized', u'setups', u'capabilities', u'subversion',
u'technical', u'documentation', u'multiple', u'engineering', u'techexpousa', u'reviews'],
[u'modeler', u'semantic', u'modeling', u'models', u'skills', u'ontology', u'resource',
u'framework', u'schema', u'technologies', u'hadoop', u'warehouse', u'oracle', u'relational',
u'artifacts', u'models', u'dictionaries', u'models', u'interface', u'specifications',
u'documentation', u'harmonization', u'mappings', u'aligned', u'coordinate', u'technical',
u'peer', u'reviews', u'stakeholder', u'communities', u'impact', u'domains', u'relationships',
u'interdependencies', u'models', u'define', u'analyze', u'legacy', u'models', u'corporate',
u'databases', u'architectural', u'alignment', u'customer', u'expertise', u'harmonization',
u'modeling', u'modeling', u'consulting', u'stakeholders', u'quality', u'models', u'storage',
u'agile', u'specifically', u'focus', u'modeling', u'qualifications', u'bachelors', u'accredited',
u'modeler', u'encompass', u'evaluation', u'skills', u'knowledge', u'modeling', u'techniques',
u'resource', u'framework', u'schema', u'technologies', u'unified', u'modeling', u'technologies',
u'schemas', u'ontologies', u'sybase', u'knowledge', u'skills', u'interpersonal', u'skills',
u'customers', u'clearance', u'applicants', u'eligibility', u'classified', u'clearance',
u'polygraph', u'techexpousa', u'solutions', u'partnership', u'solutions', u'integration'],
[u'technologies', u'junction', u'develops', u'maintains', u'enhances', u'complex', u'diverse',
u'intensive', u'analytics', u'algorithm', u'manipulation', u'management', u'documented',
u'individually', u'reviews', u'tests', u'components', u'adherence', u'resolves', u'utilizes',
u'methodologies', u'environment', u'input', u'components', u'hardware', u'offs', u'reuse', u'cots',
u'gots', u'synthesis', u'components', u'tasks', u'individually', u'analyzes', u'modifies',
u'debugs', u'corrects', u'integrates', u'operating', u'environments', u'develops', u'queries',
u'databases', u'repositories', u'recommendations', u'improving', u'documentation', u'develops',
u'implements', u'algorithms', u'functional', u'assists', u'developing', u'executing', u'procedures',
u'components', u'reviews', u'documentation', u'solutions', u'analyzing', u'conferring',
u'users', u'engineers', u'analyzing', u'investigating', u'areas', u'adapt', u'hardware',
u'mathematical', u'models', u'predict', u'outcome', u'implement', u'complex', u'database',
u'repository', u'interfaces', u'queries', u'bachelors', u'accredited', u'substituted',
u'bachelors', u'firewalls', u'ipsec', u'vpns', u'technology', u'administering', u'servers',
u'apache', u'jboss', u'tomcat', u'developing', u'interfaces', u'firefox', u'internet',
u'explorer', u'operating', u'mainframe', u'linux', u'solaris', u'virtual', u'scripting',
u'programming', u'oriented', u'programming', u'ajax', u'script', u'procedures', u'cobol',
u'cognos', u'fusion', u'focus', u'html', u'java', u'java', u'script', u'jquery', u'perl',
u'visual', u'basic', u'powershell', u'cots', u'cots', u'oracle', u'apex', u'integration',
u'competitive', u'package', u'bonus', u'corporate', u'equity', u'tuition', u'reimbursement',
u'referral', u'bonus', u'holidays', u'insurance', u'flexible', u'disability', u'insurance'],
[u'technologies', u'disability', u'accommodation', u'recruiter', u'techexpousa'],
['bank', 'river', 'shore', 'water'],
['river', 'water', 'flow', 'fast', 'tree'],
['bank', 'water', 'fall', 'flow'],
['bank', 'bank', 'water', 'rain', 'river'],
['river', 'water', 'mud', 'tree'],
['money', 'transaction', 'bank', 'finance'],
['bank', 'borrow', 'money'],
['bank', 'finance'],
['finance', 'money', 'sell', 'bank'],
['borrow', 'sell'],
['bank', 'loan', 'sell']
]
# initializing using own LDA sufficient statistics so that we get same results each time.
sstats = np.loadtxt(datapath('DTM/sstats_test.txt'))
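# initialize='own' seeds the LdaSeqModel with these precomputed sufficient statistics,
# which is what makes the topic-word and doc-topic assertions below reproducible;
# time_slice=[10, 10, 11] splits the corpus into three consecutive time slices.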
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
self.ldaseq = ldaseqmodel.LdaSeqModel(
corpus=corpus, id2word=dictionary, num_topics=2,
time_slice=[10, 10, 11], initialize='own', sstats=sstats,
passes=2, lda_inference_max_iter=10, em_min_iter=1, em_max_iter=4
)
# testing topic word proportions
def test_topic_word(self):
topics = self.ldaseq.print_topics(0)
expected_topic_word = [('skills', 0.035999999999999997)]
self.assertEqual(topics[0][0][0], expected_topic_word[0][0])
self.assertAlmostEqual(topics[0][0][1], expected_topic_word[0][1], delta=0.0012)
# testing document-topic proportions
def test_doc_topic(self):
doc_topic = self.ldaseq.doc_topics(0)
expected_doc_topic = 0.00066577896138482028
self.assertAlmostEqual(doc_topic[0], expected_doc_topic, places=2)
def test_dtype_backward_compatibility(self):
ldaseq_3_0_1_fname = datapath('DTM/ldaseq_3_0_1_model')
test_doc = [(547, 1), (549, 1), (552, 1), (555, 1)]
expected_topics = [0.99751244, 0.00248756]
# save model to use in test
# self.ldaseq.save(ldaseq_3_0_1_fname)
# load a model saved using a 3.0.1 version of Gensim
model = ldaseqmodel.LdaSeqModel.load(ldaseq_3_0_1_fname)
# and test it on a predefined document
topics = model[test_doc]
self.assertTrue(np.allclose(expected_topics, topics))
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 20,273 | Python | .py | 229 | 76.139738 | 119 | 0.623615 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
7,066 | test_sharded_corpus.py | piskvorky_gensim/gensim/test/test_sharded_corpus.py |
"""
Tests for ShardedCorpus.
"""
import os
import unittest
import random
import shutil
import numpy as np
from scipy import sparse
from gensim.utils import is_corpus, mock_data
from gensim.corpora.sharded_corpus import ShardedCorpus
#############################################################################
class TestShardedCorpus(unittest.TestCase):
# @classmethod
# def setUpClass(cls):
# cls.dim = 1000
# cls.data = mock_data(dim=cls.dim)
#
# random_string = ''.join(random.choice('1234567890') for _ in range(8))
#
# cls.tmp_dir = 'test-temp-' + random_string
# os.makedirs(cls.tmp_dir)
#
# cls.tmp_fname = os.path.join(cls.tmp_dir,
# 'shcorp.' + random_string + '.tmp')
# @classmethod
# def tearDownClass(cls):
# shutil.rmtree(cls.tmp_dir)
def setUp(self):
self.dim = 1000
self.random_string = ''.join(random.choice('1234567890') for _ in range(8))
self.tmp_dir = 'test-temp-' + self.random_string
os.makedirs(self.tmp_dir)
self.tmp_fname = os.path.join(self.tmp_dir,
'shcorp.' + self.random_string + '.tmp')
self.data = mock_data(dim=1000)
self.corpus = ShardedCorpus(self.tmp_fname, self.data, dim=self.dim,
shardsize=100)
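# mock_data() generates 1000 random bag-of-words documents, so with shardsize=100
# the corpus is serialized into 10 shard files (test_resize below relies on this).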
def tearDown(self):
shutil.rmtree(self.tmp_dir)
def test_init(self):
# Test that the shards were actually created during setUp
self.assertTrue(os.path.isfile(self.tmp_fname + '.1'))
def test_load(self):
# Test that the shards were actually created
self.assertTrue(os.path.isfile(self.tmp_fname + '.1'))
self.corpus.save()
loaded_corpus = ShardedCorpus.load(self.tmp_fname)
self.assertEqual(loaded_corpus.dim, self.corpus.dim)
self.assertEqual(loaded_corpus.n_shards, self.corpus.n_shards)
def test_getitem(self):
_ = self.corpus[130] # noqa:F841
# Does retrieving the item load the correct shard?
self.assertEqual(self.corpus.current_shard_n, 1)
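# a slice that lies entirely inside shard 2 (rows 200-299) should switch the corpus
# to that shard and come back as a (7, dim) matrix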
item = self.corpus[220:227]
self.assertEqual((7, self.corpus.dim), item.shape)
self.assertEqual(self.corpus.current_shard_n, 2)
for i in range(220, 227):
self.assertTrue(np.array_equal(self.corpus[i], item[i - 220]))
def test_sparse_serialization(self):
no_exception = True
try:
ShardedCorpus(self.tmp_fname, self.data, shardsize=100, dim=self.dim, sparse_serialization=True)
except Exception:
no_exception = False
raise
finally:
self.assertTrue(no_exception)
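# the test_getitem_* variants below cover the combinations of sparse_serialization
# (how shards are stored on disk) and sparse_retrieval (whether __getitem__ returns
# scipy sparse matrices or numpy arrays), plus gensim-style retrieval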
def test_getitem_dense2dense(self):
corpus = ShardedCorpus(
self.tmp_fname, self.data, shardsize=100, dim=self.dim,
sparse_serialization=False, sparse_retrieval=False
)
item = corpus[3]
self.assertTrue(isinstance(item, np.ndarray))
self.assertEqual(item.shape, (corpus.dim,))
dslice = corpus[2:6]
self.assertTrue(isinstance(dslice, np.ndarray))
self.assertEqual(dslice.shape, (4, corpus.dim))
ilist = corpus[[2, 3, 4, 5]]
self.assertTrue(isinstance(ilist, np.ndarray))
self.assertEqual(ilist.shape, (4, corpus.dim))
self.assertEqual(ilist.all(), dslice.all())
def test_getitem_dense2sparse(self):
corpus = ShardedCorpus(
self.tmp_fname, self.data, shardsize=100, dim=self.dim,
sparse_serialization=False, sparse_retrieval=True
)
item = corpus[3]
self.assertTrue(isinstance(item, sparse.csr_matrix))
self.assertEqual(item.shape, (1, corpus.dim))
dslice = corpus[2:6]
self.assertTrue(isinstance(dslice, sparse.csr_matrix))
self.assertEqual(dslice.shape, (4, corpus.dim))
ilist = corpus[[2, 3, 4, 5]]
self.assertTrue(isinstance(ilist, sparse.csr_matrix))
self.assertEqual(ilist.shape, (4, corpus.dim))
self.assertEqual((ilist != dslice).getnnz(), 0)
def test_getitem_sparse2sparse(self):
sp_tmp_fname = self.tmp_fname + '.sparse'
corpus = ShardedCorpus(
sp_tmp_fname, self.data, shardsize=100, dim=self.dim,
sparse_serialization=True, sparse_retrieval=True
)
dense_corpus = ShardedCorpus(
self.tmp_fname, self.data, shardsize=100, dim=self.dim,
sparse_serialization=False, sparse_retrieval=True
)
item = corpus[3]
self.assertTrue(isinstance(item, sparse.csr_matrix))
self.assertEqual(item.shape, (1, corpus.dim))
dslice = corpus[2:6]
self.assertTrue(isinstance(dslice, sparse.csr_matrix))
self.assertEqual(dslice.shape, (4, corpus.dim))
expected_nnz = sum(len(self.data[i]) for i in range(2, 6))
self.assertEqual(dslice.getnnz(), expected_nnz)
ilist = corpus[[2, 3, 4, 5]]
self.assertTrue(isinstance(ilist, sparse.csr_matrix))
self.assertEqual(ilist.shape, (4, corpus.dim))
# Also compare with what the dense dataset is giving us
d_dslice = dense_corpus[2:6]
self.assertEqual((d_dslice != dslice).getnnz(), 0)
self.assertEqual((ilist != dslice).getnnz(), 0)
def test_getitem_sparse2dense(self):
sp_tmp_fname = self.tmp_fname + '.sparse'
corpus = ShardedCorpus(
sp_tmp_fname, self.data, shardsize=100, dim=self.dim,
sparse_serialization=True, sparse_retrieval=False
)
dense_corpus = ShardedCorpus(
self.tmp_fname, self.data, shardsize=100, dim=self.dim,
sparse_serialization=False, sparse_retrieval=False
)
item = corpus[3]
self.assertTrue(isinstance(item, np.ndarray))
self.assertEqual(item.shape, (1, corpus.dim))
dslice = corpus[2:6]
self.assertTrue(isinstance(dslice, np.ndarray))
self.assertEqual(dslice.shape, (4, corpus.dim))
ilist = corpus[[2, 3, 4, 5]]
self.assertTrue(isinstance(ilist, np.ndarray))
self.assertEqual(ilist.shape, (4, corpus.dim))
# Also compare with what the dense dataset is giving us
d_dslice = dense_corpus[2:6]
self.assertEqual(dslice.all(), d_dslice.all())
self.assertEqual(ilist.all(), dslice.all())
def test_getitem_dense2gensim(self):
corpus = ShardedCorpus(
self.tmp_fname, self.data, shardsize=100, dim=self.dim,
sparse_serialization=False, gensim=True
)
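# gensim=True switches retrieval to standard gensim bag-of-words output: a single
# document is a list of (feature_id, value) tuples, and slices / index lists come
# back as generators over such documents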
item = corpus[3]
self.assertTrue(isinstance(item, list))
self.assertTrue(isinstance(item[0], tuple))
dslice = corpus[2:6]
self.assertTrue(next(dslice) == corpus[2])
dslice = list(dslice)
self.assertTrue(isinstance(dslice, list))
self.assertTrue(isinstance(dslice[0], list))
self.assertTrue(isinstance(dslice[0][0], tuple))
iscorp, _ = is_corpus(dslice)
self.assertTrue(iscorp, "Is the object returned by slice notation a gensim corpus?")
ilist = corpus[[2, 3, 4, 5]]
self.assertTrue(next(ilist) == corpus[2])
ilist = list(ilist)
self.assertTrue(isinstance(ilist, list))
self.assertTrue(isinstance(ilist[0], list))
self.assertTrue(isinstance(ilist[0][0], tuple))
# From generators to lists
self.assertEqual(len(ilist), len(dslice))
for i in range(len(ilist)):
self.assertEqual(len(ilist[i]), len(dslice[i]),
"Row %d: dims %d/%d" % (i, len(ilist[i]),
len(dslice[i])))
for j in range(len(ilist[i])):
self.assertEqual(ilist[i][j], dslice[i][j],
"ilist[%d][%d] = %s ,dslice[%d][%d] = %s" % (
i, j, str(ilist[i][j]), i, j,
str(dslice[i][j])))
iscorp, _ = is_corpus(ilist)
self.assertTrue(iscorp, "Is the object returned by list notation a gensim corpus?")
def test_resize(self):
dataset = ShardedCorpus(self.tmp_fname, self.data, shardsize=100,
dim=self.dim)
self.assertEqual(10, dataset.n_shards)
dataset.resize_shards(250)
self.assertEqual(4, dataset.n_shards)
for n in range(dataset.n_shards):
fname = dataset._shard_name(n)
self.assertTrue(os.path.isfile(fname))
def test_init_with_generator(self):
def data_generator():
yield [(0, 1)]
yield [(1, 1)]
gen_tmp_fname = self.tmp_fname + '.generator'
corpus = ShardedCorpus(gen_tmp_fname, data_generator(), dim=2)
self.assertEqual(2, len(corpus))
self.assertEqual(1, corpus[0][0])
if __name__ == '__main__':
suite = unittest.TestSuite()
loader = unittest.TestLoader()
tests = loader.loadTestsFromTestCase(TestShardedCorpus)
suite.addTest(tests)
runner = unittest.TextTestRunner()
runner.run(suite)
| 9,233 | Python | .py | 205 | 35.24878 | 108 | 0.604332 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
7,067 | test_hdpmodel.py | piskvorky_gensim/gensim/test/test_hdpmodel.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
import logging
import unittest
from gensim.corpora import mmcorpus, Dictionary
from gensim.models import hdpmodel
from gensim.test import basetmtests
from gensim.test.utils import datapath, common_texts
import numpy as np
dictionary = Dictionary(common_texts)
corpus = [dictionary.doc2bow(text) for text in common_texts]
class TestHdpModel(unittest.TestCase, basetmtests.TestBaseTopicModel):
def setUp(self):
self.corpus = mmcorpus.MmCorpus(datapath('testcorpus.mm'))
self.class_ = hdpmodel.HdpModel
self.model = self.class_(corpus, id2word=dictionary, random_state=np.random.seed(0))
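# note: np.random.seed(0) returns None, so HdpModel effectively receives
# random_state=None while NumPy's global RNG has been seeded; that seeded global
# state is what makes the expected topic values below deterministic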
def test_topic_values(self):
"""
Check show topics method
"""
results = self.model.show_topics()[0]
expected_prob, expected_word = '0.264', 'trees '
prob, word = results[1].split('+')[0].split('*')
self.assertEqual(results[0], 0)
self.assertEqual(prob, expected_prob)
self.assertEqual(word, expected_word)
return
def test_ldamodel(self):
"""
Create ldamodel object, and check if the corresponding alphas are equal.
"""
ldam = self.model.suggested_lda_model()
self.assertEqual(ldam.alpha[0], self.model.lda_alpha[0])
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 1,703 | Python | .py | 42 | 35.333333 | 96 | 0.695388 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
7,068 | test_word2vec.py | piskvorky_gensim/gensim/test/test_word2vec.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
import logging
import unittest
import os
import bz2
import sys
import tempfile
import subprocess
import numpy as np
from testfixtures import log_capture
try:
from ot import emd2 # noqa:F401
POT_EXT = True
except (ImportError, ValueError):
POT_EXT = False
from gensim import utils
from gensim.models import word2vec, keyedvectors
from gensim.utils import check_output
from gensim.test.utils import (
datapath, get_tmpfile, temporary_file, common_texts as sentences,
LeeCorpus, lee_corpus_list,
)
new_sentences = [
['computer', 'artificial', 'intelligence'],
['artificial', 'trees'],
['human', 'intelligence'],
['artificial', 'graph'],
['intelligence'],
['artificial', 'intelligence', 'system']
]
def _rule(word, count, min_count):
if word == "human":
return utils.RULE_DISCARD # throw out
else:
return utils.RULE_DEFAULT # apply default rule, i.e. min_count
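# a trim_rule callback receives (word, count, min_count) and returns one of
# utils.RULE_DISCARD, utils.RULE_KEEP or utils.RULE_DEFAULT (fall back to min_count)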
def load_on_instance():
# Save and load a Word2Vec Model on instance for test
tmpf = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1)
model.save(tmpf)
model = word2vec.Word2Vec() # should fail at this point
return model.load(tmpf)
class TestWord2VecModel(unittest.TestCase):
def test_build_vocab_from_freq(self):
"""Test that the algorithm is able to build vocabulary from given
frequency table"""
freq_dict = {
'minors': 2, 'graph': 3, 'system': 4,
'trees': 3, 'eps': 2, 'computer': 2,
'survey': 2, 'user': 3, 'human': 2,
'time': 2, 'interface': 2, 'response': 2
}
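# build_vocab_from_freq() builds the vocabulary straight from a word -> count mapping,
# skipping the corpus scan; the hs/negative pair of models exercises both the
# hierarchical-softmax and negative-sampling output layers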
freq_dict_orig = freq_dict.copy()
model_hs = word2vec.Word2Vec(vector_size=10, min_count=0, seed=42, hs=1, negative=0)
model_neg = word2vec.Word2Vec(vector_size=10, min_count=0, seed=42, hs=0, negative=5)
model_hs.build_vocab_from_freq(freq_dict)
model_neg.build_vocab_from_freq(freq_dict)
self.assertEqual(len(model_hs.wv), 12)
self.assertEqual(len(model_neg.wv), 12)
for k in freq_dict_orig.keys():
self.assertEqual(model_hs.wv.get_vecattr(k, 'count'), freq_dict_orig[k])
self.assertEqual(model_neg.wv.get_vecattr(k, 'count'), freq_dict_orig[k])
new_freq_dict = {
'computer': 1, 'artificial': 4, 'human': 1, 'graph': 1, 'intelligence': 4, 'system': 1, 'trees': 1
}
model_hs.build_vocab_from_freq(new_freq_dict, update=True)
model_neg.build_vocab_from_freq(new_freq_dict, update=True)
self.assertEqual(model_hs.wv.get_vecattr('graph', 'count'), 4)
self.assertEqual(model_hs.wv.get_vecattr('artificial', 'count'), 4)
self.assertEqual(len(model_hs.wv), 14)
self.assertEqual(len(model_neg.wv), 14)
def test_prune_vocab(self):
"""Test Prune vocab while scanning sentences"""
sentences = [
["graph", "system"],
["graph", "system"],
["system", "eps"],
["graph", "system"]
]
model = word2vec.Word2Vec(sentences, vector_size=10, min_count=0, max_vocab_size=2, seed=42, hs=1, negative=0)
self.assertEqual(len(model.wv), 2)
self.assertEqual(model.wv.get_vecattr('graph', 'count'), 3)
self.assertEqual(model.wv.get_vecattr('system', 'count'), 4)
sentences = [
["graph", "system"],
["graph", "system"],
["system", "eps"],
["graph", "system"],
["minors", "survey", "minors", "survey", "minors"]
]
model = word2vec.Word2Vec(sentences, vector_size=10, min_count=0, max_vocab_size=2, seed=42, hs=1, negative=0)
self.assertEqual(len(model.wv), 3)
self.assertEqual(model.wv.get_vecattr('graph', 'count'), 3)
self.assertEqual(model.wv.get_vecattr('minors', 'count'), 3)
self.assertEqual(model.wv.get_vecattr('system', 'count'), 4)
def test_total_word_count(self):
model = word2vec.Word2Vec(vector_size=10, min_count=0, seed=42)
total_words = model.scan_vocab(sentences)[0]
self.assertEqual(total_words, 29)
def test_max_final_vocab(self):
# Test for less restricting effect of max_final_vocab
# max_final_vocab is specified but has no effect
model = word2vec.Word2Vec(vector_size=10, max_final_vocab=4, min_count=4, sample=0)
model.scan_vocab(sentences)
reported_values = model.prepare_vocab()
self.assertEqual(reported_values['drop_unique'], 11)
self.assertEqual(reported_values['retain_total'], 4)
self.assertEqual(reported_values['num_retained_words'], 1)
self.assertEqual(model.effective_min_count, 4)
# Test for more restricting effect of max_final_vocab
# results in setting a min_count more restricting than specified min_count
model = word2vec.Word2Vec(vector_size=10, max_final_vocab=4, min_count=2, sample=0)
model.scan_vocab(sentences)
reported_values = model.prepare_vocab()
self.assertEqual(reported_values['drop_unique'], 8)
self.assertEqual(reported_values['retain_total'], 13)
self.assertEqual(reported_values['num_retained_words'], 4)
self.assertEqual(model.effective_min_count, 3)
def test_online_learning(self):
"""Test that the algorithm is able to add new words to the
vocabulary and to a trained model when using a sorted vocabulary"""
model_hs = word2vec.Word2Vec(sentences, vector_size=10, min_count=0, seed=42, hs=1, negative=0)
model_neg = word2vec.Word2Vec(sentences, vector_size=10, min_count=0, seed=42, hs=0, negative=5)
self.assertEqual(len(model_hs.wv), 12)
self.assertEqual(model_hs.wv.get_vecattr('graph', 'count'), 3)
model_hs.build_vocab(new_sentences, update=True)
model_neg.build_vocab(new_sentences, update=True)
self.assertEqual(model_hs.wv.get_vecattr('graph', 'count'), 4)
self.assertEqual(model_hs.wv.get_vecattr('artificial', 'count'), 4)
self.assertEqual(len(model_hs.wv), 14)
self.assertEqual(len(model_neg.wv), 14)
def test_online_learning_after_save(self):
"""Test that the algorithm is able to add new words to the
vocabulary and to a trained model when using a sorted vocabulary"""
tmpf = get_tmpfile('gensim_word2vec.tst')
model_neg = word2vec.Word2Vec(sentences, vector_size=10, min_count=0, seed=42, hs=0, negative=5)
model_neg.save(tmpf)
model_neg = word2vec.Word2Vec.load(tmpf)
self.assertEqual(len(model_neg.wv), 12)
model_neg.build_vocab(new_sentences, update=True)
model_neg.train(new_sentences, total_examples=model_neg.corpus_count, epochs=model_neg.epochs)
self.assertEqual(len(model_neg.wv), 14)
def test_online_learning_from_file(self):
"""Test that the algorithm is able to add new words to the
vocabulary and to a trained model when using a sorted vocabulary"""
with temporary_file(get_tmpfile('gensim_word2vec1.tst')) as corpus_file, \
temporary_file(get_tmpfile('gensim_word2vec2.tst')) as new_corpus_file:
utils.save_as_line_sentence(sentences, corpus_file)
utils.save_as_line_sentence(new_sentences, new_corpus_file)
model_hs = word2vec.Word2Vec(corpus_file=corpus_file, vector_size=10, min_count=0, seed=42,
hs=1, negative=0)
model_neg = word2vec.Word2Vec(corpus_file=corpus_file, vector_size=10, min_count=0, seed=42,
hs=0, negative=5)
self.assertEqual(len(model_hs.wv), 12)
self.assertEqual(model_hs.wv.get_vecattr('graph', 'count'), 3)
model_hs.build_vocab(corpus_file=new_corpus_file, update=True)
model_hs.train(corpus_file=new_corpus_file, total_words=model_hs.corpus_total_words, epochs=model_hs.epochs)
model_neg.build_vocab(corpus_file=new_corpus_file, update=True)
model_neg.train(
corpus_file=new_corpus_file, total_words=model_hs.corpus_total_words, epochs=model_hs.epochs)
self.assertEqual(model_hs.wv.get_vecattr('graph', 'count'), 4)
self.assertEqual(model_hs.wv.get_vecattr('artificial', 'count'), 4)
self.assertEqual(len(model_hs.wv), 14)
self.assertEqual(len(model_neg.wv), 14)
def test_online_learning_after_save_from_file(self):
"""Test that the algorithm is able to add new words to the
vocabulary and to a trained model when using a sorted vocabulary"""
with temporary_file(get_tmpfile('gensim_word2vec1.tst')) as corpus_file, \
temporary_file(get_tmpfile('gensim_word2vec2.tst')) as new_corpus_file:
utils.save_as_line_sentence(sentences, corpus_file)
utils.save_as_line_sentence(new_sentences, new_corpus_file)
tmpf = get_tmpfile('gensim_word2vec.tst')
model_neg = word2vec.Word2Vec(corpus_file=corpus_file, vector_size=10, min_count=0, seed=42,
hs=0, negative=5)
model_neg.save(tmpf)
model_neg = word2vec.Word2Vec.load(tmpf)
self.assertEqual(len(model_neg.wv), 12)
# Check that training works on the same data after load without calling build_vocab
model_neg.train(corpus_file=corpus_file, total_words=model_neg.corpus_total_words, epochs=model_neg.epochs)
# Train on new corpus file
model_neg.build_vocab(corpus_file=new_corpus_file, update=True)
model_neg.train(corpus_file=new_corpus_file, total_words=model_neg.corpus_total_words,
epochs=model_neg.epochs)
self.assertEqual(len(model_neg.wv), 14)
def onlineSanity(self, model, trained_model=False):
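# online-learning sanity check: train on the documents without 'terrorism', then grow
# the vocabulary with update=True, train on the held-out documents, and verify that
# the vectors changed and that 'war'/'terrorism' end up with positive similarity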
terro, others = [], []
for line in lee_corpus_list:
if 'terrorism' in line:
terro.append(line)
else:
others.append(line)
self.assertTrue(all('terrorism' not in line for line in others))
model.build_vocab(others, update=trained_model)
model.train(others, total_examples=model.corpus_count, epochs=model.epochs)
self.assertFalse('terrorism' in model.wv)
model.build_vocab(terro, update=True)
self.assertTrue('terrorism' in model.wv)
orig0 = np.copy(model.wv.vectors)
model.train(terro, total_examples=len(terro), epochs=model.epochs)
self.assertFalse(np.allclose(model.wv.vectors, orig0))
sim = model.wv.n_similarity(['war'], ['terrorism'])
self.assertLess(0., sim)
def test_sg_hs_online(self):
"""Test skipgram w/ hierarchical softmax"""
model = word2vec.Word2Vec(sg=1, window=5, hs=1, negative=0, min_count=3, epochs=10, seed=42, workers=2)
self.onlineSanity(model)
def test_sg_neg_online(self):
"""Test skipgram w/ negative sampling"""
model = word2vec.Word2Vec(sg=1, window=4, hs=0, negative=15, min_count=3, epochs=10, seed=42, workers=2)
self.onlineSanity(model)
def test_cbow_hs_online(self):
"""Test CBOW w/ hierarchical softmax"""
model = word2vec.Word2Vec(
sg=0, cbow_mean=1, alpha=0.05, window=5, hs=1, negative=0,
min_count=3, epochs=20, seed=42, workers=2
)
self.onlineSanity(model)
def test_cbow_neg_online(self):
"""Test CBOW w/ negative sampling"""
model = word2vec.Word2Vec(
sg=0, cbow_mean=1, alpha=0.05, window=5, hs=0, negative=15,
min_count=5, epochs=10, seed=42, workers=2, sample=0
)
self.onlineSanity(model)
def test_persistence(self):
"""Test storing/loading the entire model."""
tmpf = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1)
model.save(tmpf)
self.models_equal(model, word2vec.Word2Vec.load(tmpf))
# test persistence of the KeyedVectors of a model
wv = model.wv
wv.save(tmpf)
loaded_wv = keyedvectors.KeyedVectors.load(tmpf)
self.assertTrue(np.allclose(wv.vectors, loaded_wv.vectors))
self.assertEqual(len(wv), len(loaded_wv))
def test_persistence_backwards_compatible(self):
"""Can we still load a model created with an older gensim version?"""
path = datapath('model-from-gensim-3.8.0.w2v')
model = word2vec.Word2Vec.load(path)
x = model.score(['test'])
assert x is not None
def test_persistence_from_file(self):
"""Test storing/loading the entire model trained with corpus_file argument."""
with temporary_file(get_tmpfile('gensim_word2vec.tst')) as corpus_file:
utils.save_as_line_sentence(sentences, corpus_file)
tmpf = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(corpus_file=corpus_file, min_count=1)
model.save(tmpf)
self.models_equal(model, word2vec.Word2Vec.load(tmpf))
# test persistence of the KeyedVectors of a model
wv = model.wv
wv.save(tmpf)
loaded_wv = keyedvectors.KeyedVectors.load(tmpf)
self.assertTrue(np.allclose(wv.vectors, loaded_wv.vectors))
self.assertEqual(len(wv), len(loaded_wv))
def test_persistence_with_constructor_rule(self):
"""Test storing/loading the entire model with a vocab trimming rule passed in the constructor."""
tmpf = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1, trim_rule=_rule)
model.save(tmpf)
self.models_equal(model, word2vec.Word2Vec.load(tmpf))
def test_rule_with_min_count(self):
"""Test that returning RULE_DEFAULT from trim_rule triggers min_count."""
model = word2vec.Word2Vec(sentences + [["occurs_only_once"]], min_count=2, trim_rule=_rule)
self.assertTrue("human" not in model.wv)
self.assertTrue("occurs_only_once" not in model.wv)
self.assertTrue("interface" in model.wv)
def test_rule(self):
"""Test applying vocab trim_rule to build_vocab instead of constructor."""
model = word2vec.Word2Vec(min_count=1)
model.build_vocab(sentences, trim_rule=_rule)
self.assertTrue("human" not in model.wv)
def test_lambda_rule(self):
"""Test that lambda trim_rule works."""
def rule(word, count, min_count):
return utils.RULE_DISCARD if word == "human" else utils.RULE_DEFAULT
model = word2vec.Word2Vec(sentences, min_count=1, trim_rule=rule)
self.assertTrue("human" not in model.wv)
def obsolete_testLoadPreKeyedVectorModel(self):
"""Test loading pre-KeyedVectors word2vec model"""
if sys.version_info[:2] == (3, 4):
model_file_suffix = '_py3_4'
elif sys.version_info < (3,):
model_file_suffix = '_py2'
else:
model_file_suffix = '_py3'
# Model stored in one file
model_file = 'word2vec_pre_kv%s' % model_file_suffix
model = word2vec.Word2Vec.load(datapath(model_file))
self.assertTrue(model.wv.vectors.shape == (len(model.wv), model.vector_size))
self.assertTrue(model.syn1neg.shape == (len(model.wv), model.vector_size))
# Model stored in multiple files
model_file = 'word2vec_pre_kv_sep%s' % model_file_suffix
model = word2vec.Word2Vec.load(datapath(model_file))
self.assertTrue(model.wv.vectors.shape == (len(model.wv), model.vector_size))
self.assertTrue(model.syn1neg.shape == (len(model.wv), model.vector_size))
def test_load_pre_keyed_vector_model_c_format(self):
"""Test loading pre-KeyedVectors word2vec model saved in word2vec format"""
model = keyedvectors.KeyedVectors.load_word2vec_format(datapath('word2vec_pre_kv_c'))
self.assertTrue(model.vectors.shape[0] == len(model))
def test_persistence_word2vec_format(self):
"""Test storing/loading the entire model in word2vec format."""
tmpf = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1)
model.wv.save_word2vec_format(tmpf, binary=True)
binary_model_kv = keyedvectors.KeyedVectors.load_word2vec_format(tmpf, binary=True)
self.assertTrue(np.allclose(model.wv['human'], binary_model_kv['human']))
norm_only_model = keyedvectors.KeyedVectors.load_word2vec_format(tmpf, binary=True)
norm_only_model.unit_normalize_all()
self.assertFalse(np.allclose(model.wv['human'], norm_only_model['human']))
self.assertTrue(np.allclose(model.wv.get_vector('human', norm=True), norm_only_model['human']))
limited_model_kv = keyedvectors.KeyedVectors.load_word2vec_format(tmpf, binary=True, limit=3)
self.assertEqual(len(limited_model_kv.vectors), 3)
half_precision_model_kv = keyedvectors.KeyedVectors.load_word2vec_format(
tmpf, binary=True, datatype=np.float16
)
self.assertEqual(binary_model_kv.vectors.nbytes, half_precision_model_kv.vectors.nbytes * 2)
def test_no_training_c_format(self):
tmpf = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1)
model.wv.save_word2vec_format(tmpf, binary=True)
kv = keyedvectors.KeyedVectors.load_word2vec_format(tmpf, binary=True)
binary_model = word2vec.Word2Vec()
binary_model.wv = kv
self.assertRaises(ValueError, binary_model.train, sentences)
def test_too_short_binary_word2vec_format(self):
tfile = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1)
model.wv.save_word2vec_format(tfile, binary=True)
f = open(tfile, 'r+b')
f.write(b'13') # write wrong (too-long) vector count
f.close()
self.assertRaises(EOFError, keyedvectors.KeyedVectors.load_word2vec_format, tfile, binary=True)
def test_too_short_text_word2vec_format(self):
tfile = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1)
model.wv.save_word2vec_format(tfile, binary=False)
f = open(tfile, 'r+b')
f.write(b'13') # write wrong (too-long) vector count
f.close()
self.assertRaises(EOFError, keyedvectors.KeyedVectors.load_word2vec_format, tfile, binary=False)
def test_persistence_word2vec_format_non_binary(self):
"""Test storing/loading the entire model in word2vec non-binary format."""
tmpf = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1)
model.wv.save_word2vec_format(tmpf, binary=False)
text_model = keyedvectors.KeyedVectors.load_word2vec_format(tmpf, binary=False)
self.assertTrue(np.allclose(model.wv['human'], text_model['human'], atol=1e-6))
norm_only_model = keyedvectors.KeyedVectors.load_word2vec_format(tmpf, binary=False)
norm_only_model.unit_normalize_all()
self.assertFalse(np.allclose(model.wv['human'], norm_only_model['human'], atol=1e-6))
self.assertTrue(np.allclose(
model.wv.get_vector('human', norm=True), norm_only_model['human'], atol=1e-4
))
def test_persistence_word2vec_format_with_vocab(self):
"""Test storing/loading the entire model and vocabulary in word2vec format."""
tmpf = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1)
testvocab = get_tmpfile('gensim_word2vec.vocab')
model.wv.save_word2vec_format(tmpf, testvocab, binary=True)
binary_model_with_vocab_kv = keyedvectors.KeyedVectors.load_word2vec_format(tmpf, testvocab, binary=True)
self.assertEqual(
model.wv.get_vecattr('human', 'count'),
binary_model_with_vocab_kv.get_vecattr('human', 'count'),
)
def test_persistence_keyed_vectors_format_with_vocab(self):
"""Test storing/loading the entire model and vocabulary in word2vec format."""
tmpf = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1)
testvocab = get_tmpfile('gensim_word2vec.vocab')
model.wv.save_word2vec_format(tmpf, testvocab, binary=True)
kv_binary_model_with_vocab = keyedvectors.KeyedVectors.load_word2vec_format(tmpf, testvocab, binary=True)
self.assertEqual(
model.wv.get_vecattr('human', 'count'),
kv_binary_model_with_vocab.get_vecattr('human', 'count'),
)
def test_persistence_word2vec_format_combination_with_standard_persistence(self):
"""Test storing/loading the entire model and vocabulary in word2vec format chained with
saving and loading via `save` and `load` methods.
It was possible prior to 1.0.0 release, now raises Exception"""
tmpf = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1)
testvocab = get_tmpfile('gensim_word2vec.vocab')
model.wv.save_word2vec_format(tmpf, testvocab, binary=True)
binary_model_with_vocab_kv = keyedvectors.KeyedVectors.load_word2vec_format(tmpf, testvocab, binary=True)
binary_model_with_vocab_kv.save(tmpf)
self.assertRaises(AttributeError, word2vec.Word2Vec.load, tmpf)
def test_large_mmap(self):
"""Test storing/loading the entire model."""
tmpf = get_tmpfile('gensim_word2vec.tst')
model = word2vec.Word2Vec(sentences, min_count=1)
# test storing the internal arrays into separate files
model.save(tmpf, sep_limit=0)
self.models_equal(model, word2vec.Word2Vec.load(tmpf))
# make sure mmaping the arrays back works, too
self.models_equal(model, word2vec.Word2Vec.load(tmpf, mmap='r'))
def test_vocab(self):
"""Test word2vec vocabulary building."""
corpus = LeeCorpus()
total_words = sum(len(sentence) for sentence in corpus)
# try vocab building explicitly, using all words
model = word2vec.Word2Vec(min_count=1, hs=1, negative=0)
model.build_vocab(corpus)
self.assertTrue(len(model.wv) == 6981)
# with min_count=1, we're not throwing away anything,
# so make sure the word counts add up to be the entire corpus
self.assertEqual(sum(model.wv.get_vecattr(k, 'count') for k in model.wv.key_to_index), total_words)
# make sure the binary codes are correct
np.allclose(model.wv.get_vecattr('the', 'code'), [1, 1, 0, 0])
# test building vocab with default params
model = word2vec.Word2Vec(hs=1, negative=0)
model.build_vocab(corpus)
self.assertTrue(len(model.wv) == 1750)
np.allclose(model.wv.get_vecattr('the', 'code'), [1, 1, 1, 0])
# no input => "RuntimeError: you must first build vocabulary before training the model"
self.assertRaises(RuntimeError, word2vec.Word2Vec, [])
# input not empty, but rather completely filtered out
self.assertRaises(RuntimeError, word2vec.Word2Vec, corpus, min_count=total_words + 1)
def test_training(self):
"""Test word2vec training."""
# build vocabulary, don't train yet
model = word2vec.Word2Vec(vector_size=2, min_count=1, hs=1, negative=0)
model.build_vocab(sentences)
self.assertTrue(model.wv.vectors.shape == (len(model.wv), 2))
self.assertTrue(model.syn1.shape == (len(model.wv), 2))
model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)
sims = model.wv.most_similar('graph', topn=10)
# self.assertTrue(sims[0][0] == 'trees', sims) # most similar
# test querying for "most similar" by vector
graph_vector = model.wv.get_vector('graph', norm=True)
sims2 = model.wv.most_similar(positive=[graph_vector], topn=11)
sims2 = [(w, sim) for w, sim in sims2 if w != 'graph'] # ignore 'graph' itself
self.assertEqual(sims, sims2)
# build vocab and train in one step; must be the same as above
model2 = word2vec.Word2Vec(sentences, vector_size=2, min_count=1, hs=1, negative=0)
self.models_equal(model, model2)
def test_training_from_file(self):
"""Test word2vec training with corpus_file argument."""
# build vocabulary, don't train yet
with temporary_file(get_tmpfile('gensim_word2vec.tst')) as tf:
utils.save_as_line_sentence(sentences, tf)
model = word2vec.Word2Vec(vector_size=2, min_count=1, hs=1, negative=0)
model.build_vocab(corpus_file=tf)
self.assertTrue(model.wv.vectors.shape == (len(model.wv), 2))
self.assertTrue(model.syn1.shape == (len(model.wv), 2))
model.train(corpus_file=tf, total_words=model.corpus_total_words, epochs=model.epochs)
sims = model.wv.most_similar('graph', topn=10)
# self.assertTrue(sims[0][0] == 'trees', sims) # most similar
# test querying for "most similar" by vector
graph_vector = model.wv.get_vector('graph', norm=True)
sims2 = model.wv.most_similar(positive=[graph_vector], topn=11)
sims2 = [(w, sim) for w, sim in sims2 if w != 'graph'] # ignore 'graph' itself
self.assertEqual(sims, sims2)
def test_scoring(self):
"""Test word2vec scoring."""
model = word2vec.Word2Vec(sentences, vector_size=2, min_count=1, hs=1, negative=0)
# just score and make sure they exist
scores = model.score(sentences, len(sentences))
self.assertEqual(len(scores), len(sentences))
def test_locking(self):
"""Test word2vec training doesn't change locked vectors."""
corpus = LeeCorpus()
# build vocabulary, don't train yet
for sg in range(2): # test both cbow and sg
model = word2vec.Word2Vec(vector_size=4, hs=1, negative=5, min_count=1, sg=sg, window=5)
model.build_vocab(corpus)
# remember two vectors
locked0 = np.copy(model.wv.vectors[0])
unlocked1 = np.copy(model.wv.vectors[1])
# allocate a full lockf array (not just the default single value for all words)
model.wv.vectors_lockf = np.ones(len(model.wv), dtype=np.float32)
# lock the vector in slot 0 against change
model.wv.vectors_lockf[0] = 0.0
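# vectors_lockf acts as a per-word learning-rate multiplier: 0.0 freezes a vector
# during training, while the default 1.0 leaves updates unchanged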
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)
self.assertFalse((unlocked1 == model.wv.vectors[1]).all()) # unlocked vector should vary
self.assertTrue((locked0 == model.wv.vectors[0]).all()) # locked vector should not vary
def test_evaluate_word_analogies(self):
"""Test that evaluating analogies on KeyedVectors give sane results"""
model = word2vec.Word2Vec(LeeCorpus())
score, sections = model.wv.evaluate_word_analogies(datapath('questions-words.txt'))
score_cosmul, sections_cosmul = model.wv.evaluate_word_analogies(
datapath('questions-words.txt'),
similarity_function='most_similar_cosmul'
)
self.assertEqual(score, score_cosmul)
self.assertEqual(sections, sections_cosmul)
self.assertGreaterEqual(score, 0.0)
self.assertLessEqual(score, 1.0)
self.assertGreater(len(sections), 0)
# Check that dict contains the right keys
first_section = sections[0]
self.assertIn('section', first_section)
self.assertIn('correct', first_section)
self.assertIn('incorrect', first_section)
def test_evaluate_word_pairs(self):
"""Test Spearman and Pearson correlation coefficients give sane results on similarity datasets"""
corpus = word2vec.LineSentence(datapath('head500.noblanks.cor.bz2'))
model = word2vec.Word2Vec(corpus, min_count=3, epochs=20)
correlation = model.wv.evaluate_word_pairs(datapath('wordsim353.tsv'))
pearson = correlation[0][0]
spearman = correlation[1][0]
oov = correlation[2]
self.assertTrue(0.1 < pearson < 1.0, f"pearson {pearson} not between 0.1 & 1.0")
self.assertTrue(0.1 < spearman < 1.0, f"spearman {spearman} not between 0.1 and 1.0")
self.assertTrue(0.0 <= oov < 90.0, f"OOV {oov} not between 0.0 and 90.0")
def test_evaluate_word_pairs_from_file(self):
"""Test Spearman and Pearson correlation coefficients give sane results on similarity datasets"""
with temporary_file(get_tmpfile('gensim_word2vec.tst')) as tf:
utils.save_as_line_sentence(word2vec.LineSentence(datapath('head500.noblanks.cor.bz2')), tf)
model = word2vec.Word2Vec(corpus_file=tf, min_count=3, epochs=20)
correlation = model.wv.evaluate_word_pairs(datapath('wordsim353.tsv'))
pearson = correlation[0][0]
spearman = correlation[1][0]
oov = correlation[2]
self.assertTrue(0.1 < pearson < 1.0, f"pearson {pearson} not between 0.1 & 1.0")
self.assertTrue(0.1 < spearman < 1.0, f"spearman {spearman} not between 0.1 and 1.0")
self.assertTrue(0.0 <= oov < 90.0, f"OOV {oov} not between 0.0 and 90.0")
def model_sanity(self, model, train=True, with_corpus_file=False, ranks=None):
"""Even tiny models trained on LeeCorpus should pass these sanity checks"""
# run extra before/after training tests if train=True
if train:
model.build_vocab(lee_corpus_list)
orig0 = np.copy(model.wv.vectors[0])
if with_corpus_file:
tmpfile = get_tmpfile('gensim_word2vec.tst')
utils.save_as_line_sentence(lee_corpus_list, tmpfile)
model.train(corpus_file=tmpfile, total_words=model.corpus_total_words, epochs=model.epochs)
else:
model.train(lee_corpus_list, total_examples=model.corpus_count, epochs=model.epochs)
self.assertFalse((orig0 == model.wv.vectors[1]).all()) # vector should vary after training
query_word = 'attacks'
expected_word = 'bombings'
sims = model.wv.most_similar(query_word, topn=len(model.wv.index_to_key))
t_rank = [word for word, score in sims].index(expected_word)
# across >200 local calibration runs with these parameters, the expected word ranked within the 50 most-similar results
if ranks is not None:
ranks.append(t_rank) # tabulate trial rank if requested
self.assertLess(t_rank, 50)
query_vec = model.wv[query_word]
sims2 = model.wv.most_similar([query_vec], topn=51)
self.assertTrue(query_word in [word for word, score in sims2])
self.assertTrue(expected_word in [word for word, score in sims2])
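# the tests below run model_sanity over the skip-gram/CBOW x hierarchical-softmax/
# negative-sampling grid, both from an in-memory corpus and via corpus_file, plus
# fixed-window variants with shrink_windows=False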
def test_sg_hs(self):
"""Test skipgram w/ hierarchical softmax"""
model = word2vec.Word2Vec(sg=1, window=4, hs=1, negative=0, min_count=5, epochs=10, workers=2)
self.model_sanity(model)
def test_sg_hs_fromfile(self):
model = word2vec.Word2Vec(sg=1, window=4, hs=1, negative=0, min_count=5, epochs=10, workers=2)
self.model_sanity(model, with_corpus_file=True)
def test_sg_neg(self):
"""Test skipgram w/ negative sampling"""
model = word2vec.Word2Vec(sg=1, window=4, hs=0, negative=15, min_count=5, epochs=10, workers=2)
self.model_sanity(model)
def test_sg_neg_fromfile(self):
model = word2vec.Word2Vec(sg=1, window=4, hs=0, negative=15, min_count=5, epochs=10, workers=2)
self.model_sanity(model, with_corpus_file=True)
@unittest.skipIf('BULK_TEST_REPS' not in os.environ, reason="bulk test only occasionally run locally")
def test_method_in_bulk(self):
"""Not run by default testing, but can be run locally to help tune stochastic aspects of tests
to very-very-rarely fail. EG:
% BULK_TEST_REPS=200 METHOD_NAME=test_cbow_hs pytest test_word2vec.py -k "test_method_in_bulk"
Method must accept `ranks` keyword-argument, empty list into which salient internal result can be reported.
"""
failures = 0
ranks = []
reps = int(os.environ['BULK_TEST_REPS'])
method_name = os.environ.get('METHOD_NAME', 'test_cbow_hs') # by default test that specially-troublesome one
method_fn = getattr(self, method_name)
for i in range(reps):
try:
method_fn(ranks=ranks)
except Exception as ex:
print('%s failed: %s' % (method_name, ex))
failures += 1
print(ranks)
print(np.mean(ranks))
self.assertEqual(failures, 0, "too many failures")
def test_cbow_hs(self, ranks=None):
"""Test CBOW w/ hierarchical softmax"""
model = word2vec.Word2Vec(
sg=0, cbow_mean=1, alpha=0.1, window=2, hs=1, negative=0,
min_count=5, epochs=60, workers=2, batch_words=1000
)
self.model_sanity(model, ranks=ranks)
def test_cbow_hs_fromfile(self):
model = word2vec.Word2Vec(
sg=0, cbow_mean=1, alpha=0.1, window=2, hs=1, negative=0,
min_count=5, epochs=60, workers=2, batch_words=1000
)
self.model_sanity(model, with_corpus_file=True)
def test_cbow_neg(self, ranks=None):
"""Test CBOW w/ negative sampling"""
model = word2vec.Word2Vec(
sg=0, cbow_mean=1, alpha=0.05, window=5, hs=0, negative=15,
min_count=5, epochs=10, workers=2, sample=0
)
self.model_sanity(model, ranks=ranks)
def test_cbow_neg_fromfile(self):
model = word2vec.Word2Vec(
sg=0, cbow_mean=1, alpha=0.05, window=5, hs=0, negative=15,
min_count=5, epochs=10, workers=2, sample=0
)
self.model_sanity(model, with_corpus_file=True)
def test_sg_fixedwindowsize(self):
"""Test skipgram with fixed window size. Use NS."""
model = word2vec.Word2Vec(
sg=1, window=5, shrink_windows=False, hs=0,
negative=15, min_count=5, epochs=10, workers=2
)
self.model_sanity(model)
def test_sg_fixedwindowsize_fromfile(self):
"""Test skipgram with fixed window size. Use HS and train from file."""
model = word2vec.Word2Vec(
sg=1, window=5, shrink_windows=False, hs=1,
negative=0, min_count=5, epochs=10, workers=2
)
self.model_sanity(model, with_corpus_file=True)
def test_cbow_fixedwindowsize(self, ranks=None):
"""Test CBOW with fixed window size. Use HS."""
model = word2vec.Word2Vec(
sg=0, cbow_mean=1, alpha=0.1, window=5, shrink_windows=False,
hs=1, negative=0, min_count=5, epochs=10, workers=2
)
self.model_sanity(model, ranks=ranks)
def test_cbow_fixedwindowsize_fromfile(self):
"""Test CBOW with fixed window size. Use NS and train from file."""
model = word2vec.Word2Vec(
sg=0, cbow_mean=1, alpha=0.1, window=5, shrink_windows=False,
hs=0, negative=15, min_count=5, epochs=10, workers=2
)
self.model_sanity(model, with_corpus_file=True)
def test_cosmul(self):
model = word2vec.Word2Vec(sentences, vector_size=2, min_count=1, hs=1, negative=0)
sims = model.wv.most_similar_cosmul('graph', topn=10)
# self.assertTrue(sims[0][0] == 'trees', sims) # most similar
# test querying for "most similar" by vector
graph_vector = model.wv.get_vector('graph', norm=True)
sims2 = model.wv.most_similar_cosmul(positive=[graph_vector], topn=11)
sims2 = [(w, sim) for w, sim in sims2 if w != 'graph'] # ignore 'graph' itself
self.assertEqual(sims, sims2)
def test_training_cbow(self):
"""Test CBOW word2vec training."""
# to test training, make the corpus larger by repeating its sentences over and over
# build vocabulary, don't train yet
model = word2vec.Word2Vec(vector_size=2, min_count=1, sg=0, hs=1, negative=0)
model.build_vocab(sentences)
self.assertTrue(model.wv.vectors.shape == (len(model.wv), 2))
self.assertTrue(model.syn1.shape == (len(model.wv), 2))
model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)
sims = model.wv.most_similar('graph', topn=10)
# self.assertTrue(sims[0][0] == 'trees', sims) # most similar
# test querying for "most similar" by vector
graph_vector = model.wv.get_vector('graph', norm=True)
sims2 = model.wv.most_similar(positive=[graph_vector], topn=11)
sims2 = [(w, sim) for w, sim in sims2 if w != 'graph'] # ignore 'graph' itself
self.assertEqual(sims, sims2)
# build vocab and train in one step; must be the same as above
model2 = word2vec.Word2Vec(sentences, vector_size=2, min_count=1, sg=0, hs=1, negative=0)
self.models_equal(model, model2)
def test_training_sg_negative(self):
"""Test skip-gram (negative sampling) word2vec training."""
# to test training, make the corpus larger by repeating its sentences over and over
# build vocabulary, don't train yet
model = word2vec.Word2Vec(vector_size=2, min_count=1, sg=1, hs=0, negative=2)
model.build_vocab(sentences)
self.assertTrue(model.wv.vectors.shape == (len(model.wv), 2))
self.assertTrue(model.syn1neg.shape == (len(model.wv), 2))
model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)
sims = model.wv.most_similar('graph', topn=10)
# self.assertTrue(sims[0][0] == 'trees', sims) # most similar
# test querying for "most similar" by vector
graph_vector = model.wv.get_vector('graph', norm=True)
sims2 = model.wv.most_similar(positive=[graph_vector], topn=11)
sims2 = [(w, sim) for w, sim in sims2 if w != 'graph'] # ignore 'graph' itself
self.assertEqual(sims, sims2)
# build vocab and train in one step; must be the same as above
model2 = word2vec.Word2Vec(sentences, vector_size=2, min_count=1, sg=1, hs=0, negative=2)
self.models_equal(model, model2)
def test_training_cbow_negative(self):
"""Test CBOW (negative sampling) word2vec training."""
# to test training, make the corpus larger by repeating its sentences over and over
# build vocabulary, don't train yet
model = word2vec.Word2Vec(vector_size=2, min_count=1, sg=0, hs=0, negative=2)
model.build_vocab(sentences)
self.assertTrue(model.wv.vectors.shape == (len(model.wv), 2))
self.assertTrue(model.syn1neg.shape == (len(model.wv), 2))
model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)
sims = model.wv.most_similar('graph', topn=10)
# self.assertTrue(sims[0][0] == 'trees', sims) # most similar
# test querying for "most similar" by vector
graph_vector = model.wv.get_vector('graph', norm=True)
sims2 = model.wv.most_similar(positive=[graph_vector], topn=11)
sims2 = [(w, sim) for w, sim in sims2 if w != 'graph'] # ignore 'graph' itself
self.assertEqual(sims, sims2)
# build vocab and train in one step; must be the same as above
model2 = word2vec.Word2Vec(sentences, vector_size=2, min_count=1, sg=0, hs=0, negative=2)
self.models_equal(model, model2)
def test_similarities(self):
"""Test similarity and n_similarity methods."""
# The model is trained using CBOW
model = word2vec.Word2Vec(vector_size=2, min_count=1, sg=0, hs=0, negative=2)
model.build_vocab(sentences)
model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)
self.assertTrue(model.wv.n_similarity(['graph', 'trees'], ['trees', 'graph']))
self.assertTrue(model.wv.n_similarity(['graph'], ['trees']) == model.wv.similarity('graph', 'trees'))
self.assertRaises(ZeroDivisionError, model.wv.n_similarity, ['graph', 'trees'], [])
self.assertRaises(ZeroDivisionError, model.wv.n_similarity, [], ['graph', 'trees'])
self.assertRaises(ZeroDivisionError, model.wv.n_similarity, [], [])
def test_similar_by(self):
"""Test word2vec similar_by_word and similar_by_vector."""
model = word2vec.Word2Vec(sentences, vector_size=2, min_count=1, hs=1, negative=0)
wordsims = model.wv.similar_by_word('graph', topn=10)
wordsims2 = model.wv.most_similar(positive='graph', topn=10)
vectorsims = model.wv.similar_by_vector(model.wv['graph'], topn=10)
vectorsims2 = model.wv.most_similar([model.wv['graph']], topn=10)
self.assertEqual(wordsims, wordsims2)
self.assertEqual(vectorsims, vectorsims2)
def test_parallel(self):
"""Test word2vec parallel training."""
corpus = utils.RepeatCorpus(LeeCorpus(), 10000) # repeats about 33 times
for workers in [4, ]: # [4, 2]
model = word2vec.Word2Vec(corpus, vector_size=16, min_count=(10 * 33), workers=workers)
origin_word = 'israeli'
expected_neighbor = 'palestinian'
sims = model.wv.most_similar(origin_word, topn=len(model.wv))
# the exact vectors and therefore similarities may differ, due to different thread collisions/randomization
# so let's test only for topN
neighbor_rank = [word for word, sim in sims].index(expected_neighbor)
self.assertLess(neighbor_rank, 6)
def test_r_n_g(self):
"""Test word2vec results identical with identical RNG seed."""
model = word2vec.Word2Vec(sentences, min_count=2, seed=42, workers=1)
model2 = word2vec.Word2Vec(sentences, min_count=2, seed=42, workers=1)
self.models_equal(model, model2)
def models_equal(self, model, model2):
self.assertEqual(len(model.wv), len(model2.wv))
self.assertTrue(np.allclose(model.wv.vectors, model2.wv.vectors))
if model.hs:
self.assertTrue(np.allclose(model.syn1, model2.syn1))
if model.negative:
self.assertTrue(np.allclose(model.syn1neg, model2.syn1neg))
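# spot-check the most frequent word's vector; per-key counts are stored in
# wv.expandos['count']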
most_common_word_index = np.argsort(model.wv.expandos['count'])[-1]
most_common_word = model.wv.index_to_key[most_common_word_index]
self.assertTrue(np.allclose(model.wv[most_common_word], model2.wv[most_common_word]))
def test_predict_output_word(self):
'''Test word2vec predict_output_word method handling for negative sampling scheme'''
# under normal circumstances
model_with_neg = word2vec.Word2Vec(sentences, min_count=1)
predictions_with_neg = model_with_neg.predict_output_word(['system', 'human'], topn=5)
self.assertTrue(len(predictions_with_neg) == 5)
# out-of-vocabulary scenario
predictions_out_of_vocab = model_with_neg.predict_output_word(['some', 'random', 'words'], topn=5)
self.assertEqual(predictions_out_of_vocab, None)
# when required model parameters have been deleted
tmpf = get_tmpfile('gensim_word2vec.tst')
model_with_neg.wv.save_word2vec_format(tmpf, binary=True)
kv_model_with_neg = keyedvectors.KeyedVectors.load_word2vec_format(tmpf, binary=True)
binary_model_with_neg = word2vec.Word2Vec()
binary_model_with_neg.wv = kv_model_with_neg
self.assertRaises(RuntimeError, binary_model_with_neg.predict_output_word, ['system', 'human'])
# negative sampling scheme not used
model_without_neg = word2vec.Word2Vec(sentences, min_count=1, hs=1, negative=0)
self.assertRaises(RuntimeError, model_without_neg.predict_output_word, ['system', 'human'])
# passing indices instead of words in context
str_context = ['system', 'human']
mixed_context = [model_with_neg.wv.get_index(str_context[0]), str_context[1]]
idx_context = [model_with_neg.wv.get_index(w) for w in str_context]
prediction_from_str = model_with_neg.predict_output_word(str_context, topn=5)
prediction_from_mixed = model_with_neg.predict_output_word(mixed_context, topn=5)
prediction_from_idx = model_with_neg.predict_output_word(idx_context, topn=5)
self.assertEqual(prediction_from_str, prediction_from_mixed)
self.assertEqual(prediction_from_str, prediction_from_idx)
def test_load_old_model(self):
"""Test loading an old word2vec model of indeterminate version"""
model_file = 'word2vec_old' # which version?!?
model = word2vec.Word2Vec.load(datapath(model_file))
self.assertTrue(model.wv.vectors.shape == (12, 100))
self.assertTrue(len(model.wv) == 12)
self.assertTrue(len(model.wv.index_to_key) == 12)
self.assertTrue(model.syn1neg.shape == (len(model.wv), model.wv.vector_size))
self.assertTrue(len(model.wv.vectors_lockf.shape) > 0)
self.assertTrue(model.cum_table.shape == (12,))
self.onlineSanity(model, trained_model=True)
def test_load_old_model_separates(self):
"""Test loading an old word2vec model of indeterminate version"""
# Model stored in multiple files
model_file = 'word2vec_old_sep'
model = word2vec.Word2Vec.load(datapath(model_file))
self.assertTrue(model.wv.vectors.shape == (12, 100))
self.assertTrue(len(model.wv) == 12)
self.assertTrue(len(model.wv.index_to_key) == 12)
self.assertTrue(model.syn1neg.shape == (len(model.wv), model.wv.vector_size))
self.assertTrue(len(model.wv.vectors_lockf.shape) > 0)
self.assertTrue(model.cum_table.shape == (12,))
self.onlineSanity(model, trained_model=True)
def obsolete_test_load_old_models_pre_1_0(self):
"""Test loading pre-1.0 models"""
# load really old model
model_file = 'w2v-lee-v0.12.0'
model = word2vec.Word2Vec.load(datapath(model_file))
self.onlineSanity(model, trained_model=True)
old_versions = [
'0.12.0', '0.12.1', '0.12.2', '0.12.3', '0.12.4',
'0.13.0', '0.13.1', '0.13.2', '0.13.3', '0.13.4',
]
for old_version in old_versions:
self._check_old_version(old_version)
def test_load_old_models_1_x(self):
"""Test loading 1.x models"""
old_versions = [
'1.0.0', '1.0.1',
]
for old_version in old_versions:
self._check_old_version(old_version)
def test_load_old_models_2_x(self):
"""Test loading 2.x models"""
old_versions = [
'2.0.0', '2.1.0', '2.2.0', '2.3.0',
]
for old_version in old_versions:
self._check_old_version(old_version)
def test_load_old_models_3_x(self):
"""Test loading 3.x models"""
# test for max_final_vocab for model saved in 3.3
model_file = 'word2vec_3.3'
model = word2vec.Word2Vec.load(datapath(model_file))
self.assertEqual(model.max_final_vocab, None)
self.assertEqual(model.max_final_vocab, None)
old_versions = [
'3.0.0', '3.1.0', '3.2.0', '3.3.0', '3.4.0'
]
for old_version in old_versions:
self._check_old_version(old_version)
def _check_old_version(self, old_version):
logging.info("TESTING LOAD of %s Word2Vec MODEL", old_version)
saved_models_dir = datapath('old_w2v_models/w2v_{}.mdl')
model = word2vec.Word2Vec.load(saved_models_dir.format(old_version))
self.assertIsNone(model.corpus_total_words)
self.assertTrue(len(model.wv) == 3)
try:
self.assertTrue(model.wv.vectors.shape == (3, 4))
except AttributeError as ae:
print("WV")
print(model.wv)
print(dir(model.wv))
print(model.wv.syn0)
raise ae
# check if similarity search and online training work.
self.assertTrue(len(model.wv.most_similar('sentence')) == 2)
model.build_vocab(lee_corpus_list, update=True)
model.train(lee_corpus_list, total_examples=model.corpus_count, epochs=model.epochs)
# check if similarity search and online training work after saving and loading back the model.
tmpf = get_tmpfile('gensim_word2vec.tst')
model.save(tmpf)
loaded_model = word2vec.Word2Vec.load(tmpf)
loaded_model.build_vocab(lee_corpus_list, update=True)
loaded_model.train(lee_corpus_list, total_examples=model.corpus_count, epochs=model.epochs)
@log_capture()
def test_build_vocab_warning(self, loglines):
"""Test if warning is raised on non-ideal input to a word2vec model"""
sentences = ['human', 'machine']
model = word2vec.Word2Vec()
model.build_vocab(sentences)
warning = "Each 'sentences' item should be a list of words (usually unicode strings)."
self.assertTrue(warning in str(loglines))
@log_capture()
def test_train_warning(self, loglines):
"""Test if warning is raised if alpha rises during subsequent calls to train()"""
sentences = [
['human'],
['graph', 'trees']
]
model = word2vec.Word2Vec(min_count=1)
model.build_vocab(sentences)
for epoch in range(10):
model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)
model.alpha -= 0.002
model.min_alpha = model.alpha
if epoch == 5:
model.alpha += 0.05
warning = "Effective 'alpha' higher than previous training cycles"
self.assertTrue(warning in str(loglines))
@log_capture()
def test_train_hs_and_neg(self, loglines):
"""
Test that ValueError is raised when both hs=0 and negative=0,
and that a warning is raised when both hs and negative are activated.
"""
with self.assertRaises(ValueError):
word2vec.Word2Vec(sentences, min_count=1, hs=0, negative=0)
word2vec.Word2Vec(sentences, min_count=1, hs=1, negative=5)
warning = "Both hierarchical softmax and negative sampling are activated."
self.assertTrue(warning in str(loglines))
def test_train_with_explicit_param(self):
model = word2vec.Word2Vec(vector_size=2, min_count=1, hs=1, negative=0)
model.build_vocab(sentences)
with self.assertRaises(ValueError):
model.train(sentences, total_examples=model.corpus_count)
with self.assertRaises(ValueError):
model.train(sentences, epochs=model.epochs)
with self.assertRaises(ValueError):
model.train(sentences)
def test_sentences_should_not_be_a_generator(self):
"""
Is sentences a generator object?
"""
gen = (s for s in sentences)
self.assertRaises(TypeError, word2vec.Word2Vec, (gen,))
def test_load_on_class_error(self):
"""Test if exception is raised when loading word2vec model on instance"""
self.assertRaises(AttributeError, load_on_instance)
def test_file_should_not_be_compressed(self):
"""
Is corpus_file a compressed file?
"""
with tempfile.NamedTemporaryFile(suffix=".bz2") as fp:
self.assertRaises(TypeError, word2vec.Word2Vec, (None, fp.name))
def test_reset_from(self):
"""Test if reset_from() uses pre-built structures from other model"""
model = word2vec.Word2Vec(sentences, min_count=1)
other_model = word2vec.Word2Vec(new_sentences, min_count=1)
model.reset_from(other_model)
self.assertEqual(model.wv.key_to_index, other_model.wv.key_to_index)
def test_compute_training_loss(self):
model = word2vec.Word2Vec(min_count=1, sg=1, negative=5, hs=1)
model.build_vocab(sentences)
model.train(sentences, compute_loss=True, total_examples=model.corpus_count, epochs=model.epochs)
training_loss_val = model.get_latest_training_loss()
self.assertTrue(training_loss_val > 0.0)
def test_negative_ns_exp(self):
"""The model should accept a negative ns_exponent as a valid value."""
model = word2vec.Word2Vec(sentences, ns_exponent=-1, min_count=1, workers=1)
tmpf = get_tmpfile('w2v_negative_exp.tst')
model.save(tmpf)
loaded_model = word2vec.Word2Vec.load(tmpf)
loaded_model.train(sentences, total_examples=model.corpus_count, epochs=1)
assert loaded_model.ns_exponent == -1, loaded_model.ns_exponent
# endclass TestWord2VecModel
class TestWMD(unittest.TestCase):
@unittest.skipIf(POT_EXT is False, "POT not installed")
def test_nonzero(self):
'''Test basic functionality with a test sentence.'''
model = word2vec.Word2Vec(sentences, min_count=2, seed=42, workers=1)
sentence1 = ['human', 'interface', 'computer']
sentence2 = ['survey', 'user', 'computer', 'system', 'response', 'time']
distance = model.wv.wmdistance(sentence1, sentence2)
# Check that distance is non-zero.
self.assertFalse(distance == 0.0)
@unittest.skipIf(POT_EXT is False, "POT not installed")
def test_symmetry(self):
'''Check that distance is symmetric.'''
model = word2vec.Word2Vec(sentences, min_count=2, seed=42, workers=1)
sentence1 = ['human', 'interface', 'computer']
sentence2 = ['survey', 'user', 'computer', 'system', 'response', 'time']
distance1 = model.wv.wmdistance(sentence1, sentence2)
distance2 = model.wv.wmdistance(sentence2, sentence1)
self.assertTrue(np.allclose(distance1, distance2))
@unittest.skipIf(POT_EXT is False, "POT not installed")
def test_identical_sentences(self):
'''Check that the distance from a sentence to itself is zero.'''
model = word2vec.Word2Vec(sentences, min_count=1)
sentence = ['survey', 'user', 'computer', 'system', 'response', 'time']
distance = model.wv.wmdistance(sentence, sentence)
self.assertEqual(0.0, distance)
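# [Editor's note] wmdistance() computes the Word Mover's Distance, an optimal-transport cost over
# the two documents' word embeddings, which is why these tests are skipped when the POT package is
# not installed. A hedged usage sketch (`trained_model` stands in for any trained Word2Vec model
# and is hypothetical here):
#
#     distance = trained_model.wv.wmdistance(
#         ['human', 'interface', 'computer'],
#         ['survey', 'user', 'computer', 'system', 'response', 'time'],
#     )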
class TestWord2VecSentenceIterators(unittest.TestCase):
def test_line_sentence_works_with_filename(self):
"""Does LineSentence work with a filename argument?"""
with utils.open(datapath('lee_background.cor'), 'rb') as orig:
sentences = word2vec.LineSentence(datapath('lee_background.cor'))
for words in sentences:
self.assertEqual(words, utils.to_unicode(orig.readline()).split())
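# [Editor's note] LineSentence streams one whitespace-tokenized sentence per line, so a large file
# can be fed to Word2Vec without loading it into memory. A hedged usage sketch ('corpus.txt' is a
# hypothetical path):
#
#     sentences = word2vec.LineSentence('corpus.txt')
#     model = word2vec.Word2Vec(sentences, vector_size=100, min_count=5)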
def test_cython_line_sentence_works_with_filename(self):
"""Does CythonLineSentence work with a filename argument?"""
from gensim.models import word2vec_corpusfile
with utils.open(datapath('lee_background.cor'), 'rb') as orig:
sentences = word2vec_corpusfile.CythonLineSentence(datapath('lee_background.cor'))
for words in sentences:
self.assertEqual(words, orig.readline().split())
def test_line_sentence_works_with_compressed_file(self):
"""Does LineSentence work with a compressed file object argument?"""
with utils.open(datapath('head500.noblanks.cor'), 'rb') as orig:
sentences = word2vec.LineSentence(bz2.BZ2File(datapath('head500.noblanks.cor.bz2')))
for words in sentences:
self.assertEqual(words, utils.to_unicode(orig.readline()).split())
def test_line_sentence_works_with_normal_file(self):
"""Does LineSentence work with a file object argument, rather than filename?"""
with utils.open(datapath('head500.noblanks.cor'), 'rb') as orig:
with utils.open(datapath('head500.noblanks.cor'), 'rb') as fin:
sentences = word2vec.LineSentence(fin)
for words in sentences:
self.assertEqual(words, utils.to_unicode(orig.readline()).split())
def test_path_line_sentences(self):
"""Does PathLineSentences work with a path argument?"""
with utils.open(os.path.join(datapath('PathLineSentences'), '1.txt'), 'rb') as orig1:
with utils.open(os.path.join(datapath('PathLineSentences'), '2.txt.bz2'), 'rb') as orig2:
sentences = word2vec.PathLineSentences(datapath('PathLineSentences'))
orig = orig1.readlines() + orig2.readlines()
orig_counter = 0 # to go through orig while matching PathLineSentences
for words in sentences:
self.assertEqual(words, utils.to_unicode(orig[orig_counter]).split())
orig_counter += 1
def test_path_line_sentences_one_file(self):
"""Does PathLineSentences work with a single file argument?"""
test_file = os.path.join(datapath('PathLineSentences'), '1.txt')
with utils.open(test_file, 'rb') as orig:
sentences = word2vec.PathLineSentences(test_file)
for words in sentences:
self.assertEqual(words, utils.to_unicode(orig.readline()).split())
# endclass TestWord2VecSentenceIterators
class TestWord2VecScripts(unittest.TestCase):
def test_word2vec_stand_alone_script(self):
"""Does Word2Vec script launch standalone?"""
cmd = [
sys.executable, '-m', 'gensim.scripts.word2vec_standalone',
'-train', datapath('testcorpus.txt'),
'-output', 'vec.txt', '-size', '200', '-sample', '1e-4',
'-binary', '0', '-iter', '3', '-min_count', '1',
]
output = check_output(args=cmd, stderr=subprocess.PIPE)
self.assertEqual(output, b'')
if not hasattr(TestWord2VecModel, 'assertLess'):
# workaround for python 2.6
def assertLess(self, a, b, msg=None):
self.assertTrue(a < b, msg="%s is not less than %s" % (a, b))
setattr(TestWord2VecModel, 'assertLess', assertLess)
if __name__ == '__main__':
logging.basicConfig(
format='%(asctime)s : %(threadName)s : %(levelname)s : %(message)s',
level=logging.DEBUG
)
unittest.main(module='gensim.test.test_word2vec')
| 58,536 | Python | .py | 1,039 | 46.903754 | 120 | 0.651232 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,069 | test_corpora_hashdictionary.py | piskvorky_gensim/gensim/test/test_corpora_hashdictionary.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Unit tests for the `corpora.HashDictionary` class.
"""
import logging
import unittest
import os
import zlib
from gensim.corpora.hashdictionary import HashDictionary
from gensim.test.utils import get_tmpfile, common_texts
class TestHashDictionary(unittest.TestCase):
def setUp(self):
self.texts = common_texts
def test_doc_freq_one_doc(self):
texts = [['human', 'interface', 'computer']]
d = HashDictionary(texts, myhash=zlib.adler32)
expected = {10608: 1, 12466: 1, 31002: 1}
self.assertEqual(d.dfs, expected)
def test_doc_freq_and_token2id_for_several_docs_with_one_word(self):
# two docs
texts = [['human'], ['human']]
d = HashDictionary(texts, myhash=zlib.adler32)
expected = {31002: 2}
self.assertEqual(d.dfs, expected)
# only one token (human) should exist
expected = {'human': 31002}
self.assertEqual(d.token2id['human'], expected['human'])
self.assertEqual(d.token2id.keys(), expected.keys())
# three docs
texts = [['human'], ['human'], ['human']]
d = HashDictionary(texts, myhash=zlib.adler32)
expected = {31002: 3}
self.assertEqual(d.dfs, expected)
# only one token (human) should exist
expected = {'human': 31002}
self.assertEqual(d.token2id['human'], expected['human'])
self.assertEqual(d.token2id.keys(), expected.keys())
# four docs
texts = [['human'], ['human'], ['human'], ['human']]
d = HashDictionary(texts, myhash=zlib.adler32)
expected = {31002: 4}
self.assertEqual(d.dfs, expected)
# only one token (human) should exist
expected = {'human': 31002}
self.assertEqual(d.token2id['human'], expected['human'])
self.assertEqual(d.token2id.keys(), expected.keys())
def test_doc_freq_for_one_doc_with_several_word(self):
# two words
texts = [['human', 'cat']]
d = HashDictionary(texts, myhash=zlib.adler32)
expected = {9273: 1, 31002: 1}
self.assertEqual(d.dfs, expected)
# three words
texts = [['human', 'cat', 'minors']]
d = HashDictionary(texts, myhash=zlib.adler32)
expected = {9273: 1, 15001: 1, 31002: 1}
self.assertEqual(d.dfs, expected)
def test_debug_mode(self):
# two words
texts = [['human', 'cat']]
d = HashDictionary(texts, debug=True, myhash=zlib.adler32)
expected = {9273: {'cat'}, 31002: {'human'}}
self.assertEqual(d.id2token, expected)
# now the same thing, with debug off
texts = [['human', 'cat']]
d = HashDictionary(texts, debug=False, myhash=zlib.adler32)
expected = {}
self.assertEqual(d.id2token, expected)
def test_range(self):
# all words map to the same id
d = HashDictionary(self.texts, id_range=1, debug=True)
dfs = {0: 9}
id2token = {
0: {
'minors', 'graph', 'system', 'trees', 'eps', 'computer',
'survey', 'user', 'human', 'time', 'interface', 'response'
}
}
token2id = {
'minors': 0, 'graph': 0, 'system': 0, 'trees': 0,
'eps': 0, 'computer': 0, 'survey': 0, 'user': 0,
'human': 0, 'time': 0, 'interface': 0, 'response': 0
}
self.assertEqual(d.dfs, dfs)
self.assertEqual(d.id2token, id2token)
self.assertEqual(d.token2id, token2id)
# 2 ids: 0/1 for even/odd number of bytes in the word
d = HashDictionary(self.texts, id_range=2, myhash=lambda key: len(key))
dfs = {0: 7, 1: 7}
id2token = {
0: {'minors', 'system', 'computer', 'survey', 'user', 'time', 'response'},
1: {'interface', 'graph', 'trees', 'eps', 'human'}
}
token2id = {
'minors': 0, 'graph': 1, 'system': 0, 'trees': 1, 'eps': 1, 'computer': 0,
'survey': 0, 'user': 0, 'human': 1, 'time': 0, 'interface': 1, 'response': 0
}
self.assertEqual(d.dfs, dfs)
self.assertEqual(d.id2token, id2token)
self.assertEqual(d.token2id, token2id)
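# [Editor's note] The test above relies on HashDictionary's "hashing trick": a token's id is,
# as far as I can tell, myhash(token) modulo id_range, with a default id_range of 32000. The
# expected ids used throughout this file are reproducible from that rule alone:
import zlib
assert zlib.adler32(b'human') % 32000 == 31002
assert zlib.adler32(b'cat') % 32000 == 9273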
def test_build(self):
d = HashDictionary(self.texts, myhash=zlib.adler32)
expected = {
5232: 2, 5798: 3, 10608: 2, 12466: 2, 12736: 3, 15001: 2,
18451: 3, 23844: 3, 28591: 2, 29104: 2, 31002: 2, 31049: 2
}
self.assertEqual(d.dfs, expected)
expected = {
'minors': 15001, 'graph': 18451, 'system': 5798, 'trees': 23844,
'eps': 31049, 'computer': 10608, 'survey': 28591, 'user': 12736,
'human': 31002, 'time': 29104, 'interface': 12466, 'response': 5232
}
for ex in expected:
self.assertEqual(d.token2id[ex], expected[ex])
def test_filter(self):
d = HashDictionary(self.texts, myhash=zlib.adler32)
d.filter_extremes()
expected = {}
self.assertEqual(d.dfs, expected)
d = HashDictionary(self.texts, myhash=zlib.adler32)
d.filter_extremes(no_below=0, no_above=0.3)
expected = {
29104: 2, 31049: 2, 28591: 2, 5232: 2,
10608: 2, 12466: 2, 15001: 2, 31002: 2
}
self.assertEqual(d.dfs, expected)
d = HashDictionary(self.texts, myhash=zlib.adler32)
d.filter_extremes(no_below=3, no_above=1.0, keep_n=4)
expected = {5798: 3, 12736: 3, 18451: 3, 23844: 3}
self.assertEqual(d.dfs, expected)
def test_saveAsText(self):
""" `HashDictionary` can be saved as textfile. """
tmpf = get_tmpfile('dict_test.txt')
# use some utf8 strings, to test encoding serialization
d = HashDictionary(['žloťoučký koníček'.split(), 'Малйж обльйквюэ ат эжт'.split()])
d.save_as_text(tmpf)
self.assertTrue(os.path.exists(tmpf))
def test_saveAsTextBz2(self):
""" `HashDictionary` can be saved & loaded as compressed pickle. """
tmpf = get_tmpfile('dict_test.txt.bz2')
# use some utf8 strings, to test encoding serialization
d = HashDictionary(['žloťoučký koníček'.split(), 'Малйж обльйквюэ ат эжт'.split()])
d.save(tmpf)
self.assertTrue(os.path.exists(tmpf))
d2 = d.load(tmpf)
self.assertEqual(len(d), len(d2))
if __name__ == '__main__':
logging.basicConfig(level=logging.WARNING)
unittest.main()
| 6,650 | Python | .py | 152 | 34.802632 | 95 | 0.589692 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,070 | test_bm25model.py | piskvorky_gensim/gensim/test/test_bm25model.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from collections import defaultdict
import math
import unittest
from gensim.models.bm25model import BM25ABC
from gensim.models import OkapiBM25Model, LuceneBM25Model, AtireBM25Model
from gensim.corpora import Dictionary
class BM25Stub(BM25ABC):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def precompute_idfs(self, dfs, num_docs):
return dict()
def get_term_weights(self, num_tokens, term_frequencies, idfs):
return term_frequencies
class BM25ABCTest(unittest.TestCase):
def setUp(self):
self.documents = [['cat', 'dog', 'mouse'], ['cat', 'lion'], ['cat', 'lion']]
self.dictionary = Dictionary(self.documents)
self.expected_avgdl = sum(map(len, self.documents)) / len(self.documents)
def test_avgdl_from_corpus(self):
corpus = list(map(self.dictionary.doc2bow, self.documents))
model = BM25Stub(corpus=corpus)
actual_avgdl = model.avgdl
self.assertAlmostEqual(self.expected_avgdl, actual_avgdl)
def test_avgdl_from_dictionary(self):
model = BM25Stub(dictionary=self.dictionary)
actual_avgdl = model.avgdl
self.assertAlmostEqual(self.expected_avgdl, actual_avgdl)
class OkapiBM25ModelTest(unittest.TestCase):
def setUp(self):
self.documents = [['cat', 'dog', 'mouse'], ['cat', 'lion'], ['cat', 'lion']]
self.dictionary = Dictionary(self.documents)
self.k1, self.b, self.epsilon = 1.5, 0.75, 0.25
def get_idf(word):
frequency = sum(map(lambda document: word in document, self.documents))
return math.log((len(self.documents) - frequency + 0.5) / (frequency + 0.5))
dog_idf = get_idf('dog')
cat_idf = get_idf('cat')
mouse_idf = get_idf('mouse')
lion_idf = get_idf('lion')
average_idf = (dog_idf + cat_idf + mouse_idf + lion_idf) / len(self.dictionary)
eps = self.epsilon * average_idf
self.expected_dog_idf = dog_idf if dog_idf > 0 else eps
self.expected_cat_idf = cat_idf if cat_idf > 0 else eps
self.expected_mouse_idf = mouse_idf if mouse_idf > 0 else eps
self.expected_lion_idf = lion_idf if lion_idf > 0 else eps
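# [Editor's note] The helpers above spell out the Okapi BM25 scheme exercised by these tests:
# idf(w) = log((N - df(w) + 0.5) / (df(w) + 0.5)), with non-positive idfs floored to
# epsilon * average_idf, and (in get_expected_weight in test_score below) a term weight of
# idf * (k1 + 1) / (1 + k1 * (1 - b + b * |d| / avgdl)) for a within-document frequency of 1.
# A self-contained numeric check of the idf formula for 'dog' in the toy corpus (N=3, df=1):
import math
assert abs(math.log((3 - 1 + 0.5) / (1 + 0.5)) - 0.5108256) < 1e-6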
def test_idfs_from_corpus(self):
corpus = list(map(self.dictionary.doc2bow, self.documents))
model = OkapiBM25Model(corpus=corpus, k1=self.k1, b=self.b, epsilon=self.epsilon)
actual_dog_idf = model.idfs[self.dictionary.token2id['dog']]
actual_cat_idf = model.idfs[self.dictionary.token2id['cat']]
actual_mouse_idf = model.idfs[self.dictionary.token2id['mouse']]
actual_lion_idf = model.idfs[self.dictionary.token2id['lion']]
self.assertAlmostEqual(self.expected_dog_idf, actual_dog_idf)
self.assertAlmostEqual(self.expected_cat_idf, actual_cat_idf)
self.assertAlmostEqual(self.expected_mouse_idf, actual_mouse_idf)
self.assertAlmostEqual(self.expected_lion_idf, actual_lion_idf)
def test_idfs_from_dictionary(self):
model = OkapiBM25Model(dictionary=self.dictionary, k1=self.k1, b=self.b, epsilon=self.epsilon)
actual_dog_idf = model.idfs[self.dictionary.token2id['dog']]
actual_cat_idf = model.idfs[self.dictionary.token2id['cat']]
actual_mouse_idf = model.idfs[self.dictionary.token2id['mouse']]
actual_lion_idf = model.idfs[self.dictionary.token2id['lion']]
self.assertAlmostEqual(self.expected_dog_idf, actual_dog_idf)
self.assertAlmostEqual(self.expected_cat_idf, actual_cat_idf)
self.assertAlmostEqual(self.expected_mouse_idf, actual_mouse_idf)
self.assertAlmostEqual(self.expected_lion_idf, actual_lion_idf)
def test_score(self):
model = OkapiBM25Model(dictionary=self.dictionary, k1=self.k1, b=self.b, epsilon=self.epsilon)
first_document = self.documents[0]
first_bow = self.dictionary.doc2bow(first_document)
weights = defaultdict(lambda: 0.0)
weights.update(model[first_bow])
actual_dog_weight = weights[self.dictionary.token2id['dog']]
actual_cat_weight = weights[self.dictionary.token2id['cat']]
actual_mouse_weight = weights[self.dictionary.token2id['mouse']]
actual_lion_weight = weights[self.dictionary.token2id['lion']]
def get_expected_weight(word):
idf = model.idfs[self.dictionary.token2id[word]]
numerator = self.k1 + 1
denominator = 1 + self.k1 * (1 - self.b + self.b * len(first_document) / model.avgdl)
return idf * numerator / denominator
expected_dog_weight = get_expected_weight('dog') if 'dog' in first_document else 0.0
expected_cat_weight = get_expected_weight('cat') if 'cat' in first_document else 0.0
expected_mouse_weight = get_expected_weight('mouse') if 'mouse' in first_document else 0.0
expected_lion_weight = get_expected_weight('lion') if 'lion' in first_document else 0.0
self.assertAlmostEqual(expected_dog_weight, actual_dog_weight)
self.assertAlmostEqual(expected_cat_weight, actual_cat_weight)
self.assertAlmostEqual(expected_mouse_weight, actual_mouse_weight)
self.assertAlmostEqual(expected_lion_weight, actual_lion_weight)
class LuceneBM25ModelTest(unittest.TestCase):
def setUp(self):
self.documents = [['cat', 'dog', 'mouse'], ['cat', 'lion'], ['cat', 'lion']]
self.dictionary = Dictionary(self.documents)
self.k1, self.b = 1.5, 0.75
def get_idf(word):
frequency = sum(map(lambda document: word in document, self.documents))
return math.log(1.0 + (len(self.documents) - frequency + 0.5) / (frequency + 0.5))
self.expected_dog_idf = get_idf('dog')
self.expected_cat_idf = get_idf('cat')
self.expected_mouse_idf = get_idf('mouse')
self.expected_lion_idf = get_idf('lion')
def test_idfs_from_corpus(self):
corpus = list(map(self.dictionary.doc2bow, self.documents))
model = LuceneBM25Model(corpus=corpus, k1=self.k1, b=self.b)
actual_dog_idf = model.idfs[self.dictionary.token2id['dog']]
actual_cat_idf = model.idfs[self.dictionary.token2id['cat']]
actual_mouse_idf = model.idfs[self.dictionary.token2id['mouse']]
actual_lion_idf = model.idfs[self.dictionary.token2id['lion']]
self.assertAlmostEqual(self.expected_dog_idf, actual_dog_idf)
self.assertAlmostEqual(self.expected_cat_idf, actual_cat_idf)
self.assertAlmostEqual(self.expected_mouse_idf, actual_mouse_idf)
self.assertAlmostEqual(self.expected_lion_idf, actual_lion_idf)
def test_idfs_from_dictionary(self):
model = LuceneBM25Model(dictionary=self.dictionary, k1=self.k1, b=self.b)
actual_dog_idf = model.idfs[self.dictionary.token2id['dog']]
actual_cat_idf = model.idfs[self.dictionary.token2id['cat']]
actual_mouse_idf = model.idfs[self.dictionary.token2id['mouse']]
actual_lion_idf = model.idfs[self.dictionary.token2id['lion']]
self.assertAlmostEqual(self.expected_dog_idf, actual_dog_idf)
self.assertAlmostEqual(self.expected_cat_idf, actual_cat_idf)
self.assertAlmostEqual(self.expected_mouse_idf, actual_mouse_idf)
self.assertAlmostEqual(self.expected_lion_idf, actual_lion_idf)
def test_score(self):
model = LuceneBM25Model(dictionary=self.dictionary, k1=self.k1, b=self.b)
first_document = self.documents[0]
first_bow = self.dictionary.doc2bow(first_document)
weights = defaultdict(lambda: 0.0)
weights.update(model[first_bow])
actual_dog_weight = weights[self.dictionary.token2id['dog']]
actual_cat_weight = weights[self.dictionary.token2id['cat']]
actual_mouse_weight = weights[self.dictionary.token2id['mouse']]
actual_lion_weight = weights[self.dictionary.token2id['lion']]
def get_expected_weight(word):
idf = model.idfs[self.dictionary.token2id[word]]
denominator = 1 + self.k1 * (1 - self.b + self.b * len(first_document) / model.avgdl)
return idf / denominator
expected_dog_weight = get_expected_weight('dog') if 'dog' in first_document else 0.0
expected_cat_weight = get_expected_weight('cat') if 'cat' in first_document else 0.0
expected_mouse_weight = get_expected_weight('mouse') if 'mouse' in first_document else 0.0
expected_lion_weight = get_expected_weight('lion') if 'lion' in first_document else 0.0
self.assertAlmostEqual(expected_dog_weight, actual_dog_weight)
self.assertAlmostEqual(expected_cat_weight, actual_cat_weight)
self.assertAlmostEqual(expected_mouse_weight, actual_mouse_weight)
self.assertAlmostEqual(expected_lion_weight, actual_lion_weight)
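# [Editor's note] Compared with the Okapi variant above, the Lucene scheme tested here uses
# idf(w) = log(1 + (N - df(w) + 0.5) / (df(w) + 0.5)), which is always positive (so no epsilon
# floor is needed), and drops the (k1 + 1) factor from the term-weight numerator, as the
# get_expected_weight helper above shows. A trivial self-contained check on the toy corpus:
import math
assert math.log(1.0 + (3 - 3 + 0.5) / (3 + 0.5)) > 0  # even the most frequent toy term ('cat') keeps a positive idf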
class AtireBM25ModelTest(unittest.TestCase):
def setUp(self):
self.documents = [['cat', 'dog', 'mouse'], ['cat', 'lion'], ['cat', 'lion']]
self.dictionary = Dictionary(self.documents)
self.k1, self.b, self.epsilon = 1.5, 0.75, 0.25
def get_idf(word):
frequency = sum(map(lambda document: word in document, self.documents))
return math.log(len(self.documents) / frequency)
self.expected_dog_idf = get_idf('dog')
self.expected_cat_idf = get_idf('cat')
self.expected_mouse_idf = get_idf('mouse')
self.expected_lion_idf = get_idf('lion')
def test_idfs_from_corpus(self):
corpus = list(map(self.dictionary.doc2bow, self.documents))
model = AtireBM25Model(corpus=corpus, k1=self.k1, b=self.b)
actual_dog_idf = model.idfs[self.dictionary.token2id['dog']]
actual_cat_idf = model.idfs[self.dictionary.token2id['cat']]
actual_mouse_idf = model.idfs[self.dictionary.token2id['mouse']]
actual_lion_idf = model.idfs[self.dictionary.token2id['lion']]
self.assertAlmostEqual(self.expected_dog_idf, actual_dog_idf)
self.assertAlmostEqual(self.expected_cat_idf, actual_cat_idf)
self.assertAlmostEqual(self.expected_mouse_idf, actual_mouse_idf)
self.assertAlmostEqual(self.expected_lion_idf, actual_lion_idf)
def test_idfs_from_dictionary(self):
model = AtireBM25Model(dictionary=self.dictionary, k1=self.k1, b=self.b)
actual_dog_idf = model.idfs[self.dictionary.token2id['dog']]
actual_cat_idf = model.idfs[self.dictionary.token2id['cat']]
actual_mouse_idf = model.idfs[self.dictionary.token2id['mouse']]
actual_lion_idf = model.idfs[self.dictionary.token2id['lion']]
self.assertAlmostEqual(self.expected_dog_idf, actual_dog_idf)
self.assertAlmostEqual(self.expected_cat_idf, actual_cat_idf)
self.assertAlmostEqual(self.expected_mouse_idf, actual_mouse_idf)
self.assertAlmostEqual(self.expected_lion_idf, actual_lion_idf)
def test_score(self):
model = AtireBM25Model(dictionary=self.dictionary, k1=self.k1, b=self.b)
first_document = self.documents[0]
first_bow = self.dictionary.doc2bow(first_document)
weights = defaultdict(lambda: 0.0)
weights.update(model[first_bow])
actual_dog_weight = weights[self.dictionary.token2id['dog']]
actual_cat_weight = weights[self.dictionary.token2id['cat']]
actual_mouse_weight = weights[self.dictionary.token2id['mouse']]
actual_lion_weight = weights[self.dictionary.token2id['lion']]
def get_expected_weight(word):
idf = model.idfs[self.dictionary.token2id[word]]
numerator = self.k1 + 1
denominator = 1 + self.k1 * (1 - self.b + self.b * len(first_document) / model.avgdl)
return idf * numerator / denominator
expected_dog_weight = get_expected_weight('dog') if 'dog' in first_document else 0.0
expected_cat_weight = get_expected_weight('cat') if 'cat' in first_document else 0.0
expected_mouse_weight = get_expected_weight('mouse') if 'mouse' in first_document else 0.0
expected_lion_weight = get_expected_weight('lion') if 'lion' in first_document else 0.0
self.assertAlmostEqual(expected_dog_weight, actual_dog_weight)
self.assertAlmostEqual(expected_cat_weight, actual_cat_weight)
self.assertAlmostEqual(expected_mouse_weight, actual_mouse_weight)
self.assertAlmostEqual(expected_lion_weight, actual_lion_weight)
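# [Editor's note] The ATIRE variant tested above combines the plain idf(w) = log(N / df(w)) with
# the Okapi-style (k1 + 1) numerator in the term weight, per the helpers in this class. For the
# toy corpus, idf('dog') = log(3):
import math
assert abs(math.log(3 / 1) - 1.0986123) < 1e-6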
| 12,489 | Python | .py | 202 | 53.054455 | 102 | 0.682209 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,071 | test_lsimodel.py | piskvorky_gensim/gensim/test/test_lsimodel.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
import logging
import unittest
import numpy as np
import scipy.linalg
from gensim import matutils
from gensim.corpora.mmcorpus import MmCorpus
from gensim.models import lsimodel
from gensim.test import basetmtests
from gensim.test.utils import datapath, get_tmpfile
class TestLsiModel(unittest.TestCase, basetmtests.TestBaseTopicModel):
def setUp(self):
self.corpus = MmCorpus(datapath('testcorpus.mm'))
self.model = lsimodel.LsiModel(self.corpus, num_topics=2)
def test_transform(self):
"""Test lsi[vector] transformation."""
# create the transformation model
model = self.model
# make sure the decomposition is accurate enough
u, s, vt = scipy.linalg.svd(matutils.corpus2dense(self.corpus, self.corpus.num_terms), full_matrices=False)
self.assertTrue(np.allclose(s[:2], model.projection.s)) # singular values must match
# transform one document
doc = list(self.corpus)[0]
transformed = model[doc]
vec = matutils.sparse2full(transformed, 2) # convert to dense vector, for easier equality tests
expected = np.array([-0.6594664, 0.142115444]) # scaled LSI version
# expected = np.array([-0.1973928, 0.05591352]) # non-scaled LSI version
self.assertTrue(np.allclose(abs(vec), abs(expected))) # transformed entries must be equal up to sign
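# [Editor's note] The abs() comparison above is needed because an SVD is only unique up to the
# sign of each singular-vector pair: flipping the sign of a column of U together with the matching
# row of V^T yields an equally valid decomposition. A self-contained illustration:
import numpy as np
import scipy.linalg
_a = np.array([[3.0, 1.0, 2.0], [1.0, 3.0, 0.0]])
_u, _s, _vt = scipy.linalg.svd(_a, full_matrices=False)
_flip = np.diag([1.0, -1.0])  # flip the second singular pair
assert np.allclose(_a, (_u @ _flip) @ np.diag(_s) @ (_flip @ _vt))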
def test_transform_float32(self):
"""Test lsi[vector] transformation."""
# create the transformation model
model = lsimodel.LsiModel(self.corpus, num_topics=2, dtype=np.float32)
# make sure the decomposition is accurate enough
u, s, vt = scipy.linalg.svd(matutils.corpus2dense(self.corpus, self.corpus.num_terms), full_matrices=False)
self.assertTrue(np.allclose(s[:2], model.projection.s)) # singular values must match
self.assertEqual(model.projection.u.dtype, np.float32)
self.assertEqual(model.projection.s.dtype, np.float32)
# transform one document
doc = list(self.corpus)[0]
transformed = model[doc]
vec = matutils.sparse2full(transformed, 2) # convert to dense vector, for easier equality tests
expected = np.array([-0.6594664, 0.142115444]) # scaled LSI version
# transformed entries must be equal up to sign
self.assertTrue(np.allclose(abs(vec), abs(expected), atol=1.e-5))
def test_corpus_transform(self):
"""Test lsi[corpus] transformation."""
model = self.model
got = np.vstack([matutils.sparse2full(doc, 2) for doc in model[self.corpus]])
expected = np.array([
[0.65946639, 0.14211544],
[2.02454305, -0.42088759],
[1.54655361, 0.32358921],
[1.81114125, 0.5890525],
[0.9336738, -0.27138939],
[0.01274618, -0.49016181],
[0.04888203, -1.11294699],
[0.08063836, -1.56345594],
[0.27381003, -1.34694159]
])
self.assertTrue(np.allclose(abs(got), abs(expected))) # must equal up to sign
def test_online_transform(self):
corpus = list(self.corpus)
doc = corpus[0] # use the corpus' first document for testing
# create the transformation model
model2 = lsimodel.LsiModel(corpus=corpus, num_topics=5) # compute everything at once
# start with no documents, we will add them later
model = lsimodel.LsiModel(corpus=None, id2word=model2.id2word, num_topics=5)
# train model on a single document
model.add_documents([corpus[0]])
# transform the testing document with this partial transformation
transformed = model[doc]
vec = matutils.sparse2full(transformed, model.num_topics) # convert to dense vector, for easier equality tests
expected = np.array([-1.73205078, 0.0, 0.0, 0.0, 0.0]) # scaled LSI version
self.assertTrue(np.allclose(abs(vec), abs(expected), atol=1e-6)) # transformed entries must be equal up to sign
# train on another 4 documents
model.add_documents(corpus[1:5], chunksize=2) # train on 4 extra docs, in chunks of 2 documents, for the lols
# transform a document with this partial transformation
transformed = model[doc]
vec = matutils.sparse2full(transformed, model.num_topics) # convert to dense vector, for easier equality tests
expected = np.array([-0.66493785, -0.28314203, -1.56376302, 0.05488682, 0.17123269]) # scaled LSI version
self.assertTrue(np.allclose(abs(vec), abs(expected), atol=1e-6)) # transformed entries must be equal up to sign
# train on the rest of documents
model.add_documents(corpus[5:])
# make sure the final transformation is the same as if we had decomposed the whole corpus at once
vec1 = matutils.sparse2full(model[doc], model.num_topics)
vec2 = matutils.sparse2full(model2[doc], model2.num_topics)
# the two LSI representations must equal up to sign
self.assertTrue(np.allclose(abs(vec1), abs(vec2), atol=1e-5))
def test_persistence(self):
fname = get_tmpfile('gensim_models_lsi.tst')
model = self.model
model.save(fname)
model2 = lsimodel.LsiModel.load(fname)
self.assertEqual(model.num_topics, model2.num_topics)
self.assertTrue(np.allclose(model.projection.u, model2.projection.u))
self.assertTrue(np.allclose(model.projection.s, model2.projection.s))
tstvec = []
self.assertTrue(np.allclose(model[tstvec], model2[tstvec])) # try projecting an empty vector
def test_persistence_compressed(self):
fname = get_tmpfile('gensim_models_lsi.tst.gz')
model = self.model
model.save(fname)
model2 = lsimodel.LsiModel.load(fname, mmap=None)
self.assertEqual(model.num_topics, model2.num_topics)
self.assertTrue(np.allclose(model.projection.u, model2.projection.u))
self.assertTrue(np.allclose(model.projection.s, model2.projection.s))
tstvec = []
self.assertTrue(np.allclose(model[tstvec], model2[tstvec])) # try projecting an empty vector
def test_large_mmap(self):
fname = get_tmpfile('gensim_models_lsi.tst')
model = self.model
# test storing the internal arrays into separate files
model.save(fname, sep_limit=0)
# now load the external arrays via mmap
model2 = lsimodel.LsiModel.load(fname, mmap='r')
self.assertEqual(model.num_topics, model2.num_topics)
self.assertTrue(isinstance(model2.projection.u, np.memmap))
self.assertTrue(isinstance(model2.projection.s, np.memmap))
self.assertTrue(np.allclose(model.projection.u, model2.projection.u))
self.assertTrue(np.allclose(model.projection.s, model2.projection.s))
tstvec = []
self.assertTrue(np.allclose(model[tstvec], model2[tstvec])) # try projecting an empty vector
def test_large_mmap_compressed(self):
fname = get_tmpfile('gensim_models_lsi.tst.gz')
model = self.model
# test storing the internal arrays into separate files
model.save(fname, sep_limit=0)
# now load the external arrays via mmap
return
# turns out this test doesn't exercise this because there are no arrays
# to be mmapped!
self.assertRaises(IOError, lsimodel.LsiModel.load, fname, mmap='r')
def test_docs_processed(self):
self.assertEqual(self.model.docs_processed, 9)
self.assertEqual(self.model.docs_processed, self.corpus.num_docs)
def test_get_topics(self):
topics = self.model.get_topics()
vocab_size = len(self.model.id2word)
for topic in topics:
self.assertTrue(isinstance(topic, np.ndarray))
self.assertEqual(topic.dtype, np.float64)
self.assertEqual(vocab_size, topic.shape[0])
# LSI topics are not probability distributions
# self.assertAlmostEqual(np.sum(topic), 1.0, 5)
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
| 8,457 | Python | .py | 154 | 46.649351 | 120 | 0.678882 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,072 | test_ensemblelda.py | piskvorky_gensim/gensim/test/test_ensemblelda.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Author: Tobias B <proxima@sezanzeb.de>
"""
Automated tests for checking the EnsembleLda Class
"""
import os
import logging
import unittest
import numpy as np
from copy import deepcopy
import pytest
from gensim.models import EnsembleLda, LdaMulticore, LdaModel
from gensim.test.utils import datapath, get_tmpfile, common_corpus, common_dictionary
NUM_TOPICS = 2
NUM_MODELS = 4
PASSES = 50
RANDOM_STATE = 0
# windows tests fail due to the required assertion precision being too high
RTOL = 1e-04 if os.name == 'nt' else 1e-05
class TestEnsembleLda(unittest.TestCase):
def get_elda(self):
return EnsembleLda(
corpus=common_corpus, id2word=common_dictionary, num_topics=NUM_TOPICS,
passes=PASSES, num_models=NUM_MODELS, random_state=RANDOM_STATE,
topic_model_class=LdaModel,
)
def get_elda_mem_unfriendly(self):
return EnsembleLda(
corpus=common_corpus, id2word=common_dictionary, num_topics=NUM_TOPICS,
passes=PASSES, num_models=NUM_MODELS, random_state=RANDOM_STATE,
memory_friendly_ttda=False, topic_model_class=LdaModel,
)
def assert_ttda_is_valid(self, elda):
"""Check that ttda has one or more topic and that term probabilities add to one."""
assert len(elda.ttda) > 0
sum_over_terms = elda.ttda.sum(axis=1)
expected_sum_over_terms = np.ones(len(elda.ttda)).astype(np.float32)
np.testing.assert_allclose(sum_over_terms, expected_sum_over_terms, rtol=1e-04)
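# [Editor's note] As exercised by these tests, "ttda" is the ensemble's stacked topic-term
# distribution array: one row per topic gathered from every member model, each row a probability
# distribution over the vocabulary, which is exactly the invariant the helper above verifies.
# A toy illustration of that invariant:
import numpy as np
_toy_ttda = np.array([[0.2, 0.5, 0.3], [0.1, 0.1, 0.8]], dtype=np.float32)
np.testing.assert_allclose(_toy_ttda.sum(axis=1), np.ones(2, dtype=np.float32), rtol=1e-4)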
def test_elda(self):
elda = self.get_elda()
# given that the random_state doesn't change, there should
# always be 2 detected topics in this setup.
assert elda.stable_topics.shape[1] == len(common_dictionary)
assert len(elda.ttda) == NUM_MODELS * NUM_TOPICS
self.assert_ttda_is_valid(elda)
def test_backwards_compatibility_with_persisted_model(self):
elda = self.get_elda()
# compare with a pre-trained reference model
loaded_elda = EnsembleLda.load(datapath('ensemblelda'))
np.testing.assert_allclose(elda.ttda, loaded_elda.ttda, rtol=RTOL)
atol = loaded_elda.asymmetric_distance_matrix.max() * 1e-05
np.testing.assert_allclose(
elda.asymmetric_distance_matrix,
loaded_elda.asymmetric_distance_matrix, atol=atol,
)
def test_recluster(self):
# the following test is quite specific to the current implementation and not part of any api,
# but it makes improving those sections of the code easier as long as sorted_clusters and the
# cluster_model results are supposed to stay the same. This test may eventually be deprecated.
elda = EnsembleLda.load(datapath('ensemblelda'))
loaded_cluster_model_results = deepcopy(elda.cluster_model.results)
loaded_valid_clusters = deepcopy(elda.valid_clusters)
loaded_stable_topics = deepcopy(elda.get_topics())
# continue training with the distance matrix of the pretrained reference and see if
# the generated clusters match.
elda.asymmetric_distance_matrix_outdated = True
elda.recluster()
self.assert_clustering_results_equal(elda.cluster_model.results, loaded_cluster_model_results)
assert elda.valid_clusters == loaded_valid_clusters
np.testing.assert_allclose(elda.get_topics(), loaded_stable_topics, rtol=RTOL)
def test_recluster_does_nothing_when_stable_topics_already_found(self):
elda = self.get_elda()
# reclustering shouldn't change anything without
# added models or different parameters
elda.recluster()
assert elda.stable_topics.shape[1] == len(common_dictionary)
assert len(elda.ttda) == NUM_MODELS * NUM_TOPICS
self.assert_ttda_is_valid(elda)
def test_not_trained_given_zero_passes(self):
elda = EnsembleLda(
corpus=common_corpus, id2word=common_dictionary, num_topics=NUM_TOPICS,
passes=0, num_models=NUM_MODELS, random_state=RANDOM_STATE,
)
assert len(elda.ttda) == 0
def test_not_trained_given_no_corpus(self):
elda = EnsembleLda(
id2word=common_dictionary, num_topics=NUM_TOPICS,
passes=PASSES, num_models=NUM_MODELS, random_state=RANDOM_STATE,
)
assert len(elda.ttda) == 0
def test_not_trained_given_zero_iterations(self):
elda = EnsembleLda(
corpus=common_corpus, id2word=common_dictionary, num_topics=NUM_TOPICS,
iterations=0, num_models=NUM_MODELS, random_state=RANDOM_STATE,
)
assert len(elda.ttda) == 0
def test_not_trained_given_zero_models(self):
elda = EnsembleLda(
corpus=common_corpus, id2word=common_dictionary, num_topics=NUM_TOPICS,
passes=PASSES, num_models=0, random_state=RANDOM_STATE
)
assert len(elda.ttda) == 0
def test_mem_unfriendly(self):
# elda_mem_unfriendly and elda should have topics that are
# the same up to floating point variations caused by the two different
# implementations
elda = self.get_elda()
elda_mem_unfriendly = self.get_elda_mem_unfriendly()
assert len(elda_mem_unfriendly.tms) == NUM_MODELS
np.testing.assert_allclose(elda.ttda, elda_mem_unfriendly.ttda, rtol=RTOL)
np.testing.assert_allclose(elda.get_topics(), elda_mem_unfriendly.get_topics(), rtol=RTOL)
self.assert_ttda_is_valid(elda_mem_unfriendly)
def test_generate_gensim_representation(self):
elda = self.get_elda()
gensim_model = elda.generate_gensim_representation()
topics = gensim_model.get_topics()
np.testing.assert_allclose(elda.get_topics(), topics, rtol=RTOL)
def assert_clustering_results_equal(self, clustering_results_1, clustering_results_2):
"""Assert important attributes of the cluster results"""
np.testing.assert_array_equal(
[element.label for element in clustering_results_1],
[element.label for element in clustering_results_2],
)
np.testing.assert_array_equal(
[element.is_core for element in clustering_results_1],
[element.is_core for element in clustering_results_2],
)
def test_persisting(self):
elda = self.get_elda()
elda_mem_unfriendly = self.get_elda_mem_unfriendly()
fname = get_tmpfile('gensim_models_ensemblelda')
elda.save(fname)
loaded_elda = EnsembleLda.load(fname)
# storing the ensemble without memory_friendly_ttda
elda_mem_unfriendly.save(fname)
loaded_elda_mem_unfriendly = EnsembleLda.load(fname)
# topic_model_class will be lazy loaded and should be None first
assert loaded_elda.topic_model_class is None
# was it stored and loaded correctly?
# memory friendly.
loaded_elda_representation = loaded_elda.generate_gensim_representation()
# generating the representation also lazily loads the topic_model_class
assert loaded_elda.topic_model_class == LdaModel
topics = loaded_elda_representation.get_topics()
ttda = loaded_elda.ttda
amatrix = loaded_elda.asymmetric_distance_matrix
np.testing.assert_allclose(elda.get_topics(), topics, rtol=RTOL)
np.testing.assert_allclose(elda.ttda, ttda, rtol=RTOL)
np.testing.assert_allclose(elda.asymmetric_distance_matrix, amatrix, rtol=RTOL)
expected_clustering_results = elda.cluster_model.results
loaded_clustering_results = loaded_elda.cluster_model.results
self.assert_clustering_results_equal(expected_clustering_results, loaded_clustering_results)
# memory unfriendly
loaded_elda_mem_unfriendly_representation = loaded_elda_mem_unfriendly.generate_gensim_representation()
topics = loaded_elda_mem_unfriendly_representation.get_topics()
np.testing.assert_allclose(elda.get_topics(), topics, rtol=RTOL)
def test_multiprocessing(self):
# same configuration
random_state = RANDOM_STATE
# use 3 processes for the ensemble and the distance,
# so that the 4 models and 8 topics cannot be distributed
# to each worker evenly
workers = 3
# memory friendly. contains List of topic word distributions
elda = self.get_elda()
elda_multiprocessing = EnsembleLda(
corpus=common_corpus, id2word=common_dictionary, topic_model_class=LdaModel,
num_topics=NUM_TOPICS, passes=PASSES, num_models=NUM_MODELS,
random_state=random_state, ensemble_workers=workers, distance_workers=workers,
)
# memory unfriendly. contains List of models
elda_mem_unfriendly = self.get_elda_mem_unfriendly()
elda_multiprocessing_mem_unfriendly = EnsembleLda(
corpus=common_corpus, id2word=common_dictionary, topic_model_class=LdaModel,
num_topics=NUM_TOPICS, passes=PASSES, num_models=NUM_MODELS,
random_state=random_state, ensemble_workers=workers, distance_workers=workers,
memory_friendly_ttda=False,
)
np.testing.assert_allclose(
elda.get_topics(),
elda_multiprocessing.get_topics(),
rtol=RTOL
)
np.testing.assert_allclose(
elda_mem_unfriendly.get_topics(),
elda_multiprocessing_mem_unfriendly.get_topics(),
rtol=RTOL
)
def test_add_models_to_empty(self):
elda = self.get_elda()
ensemble = EnsembleLda(id2word=common_dictionary, num_models=0)
ensemble.add_model(elda.ttda[0:1])
ensemble.add_model(elda.ttda[1:])
ensemble.recluster()
np.testing.assert_allclose(ensemble.get_topics(), elda.get_topics(), rtol=RTOL)
# persisting an ensemble that is entirely built from existing ttdas
fname = get_tmpfile('gensim_models_ensemblelda')
ensemble.save(fname)
loaded_ensemble = EnsembleLda.load(fname)
np.testing.assert_allclose(loaded_ensemble.get_topics(), elda.get_topics(), rtol=RTOL)
self.test_inference(loaded_ensemble)
def test_add_models(self):
# make sure counts and sizes after adding are correct
# create new models and add other models to them.
# there are a ton of configurations for the first parameter possible,
# try them all
# quickly train something that can be used for counting results
num_new_models = 3
num_new_topics = 3
# 1. memory friendly
base_elda = self.get_elda()
cumulative_elda = EnsembleLda(
corpus=common_corpus, id2word=common_dictionary,
num_topics=num_new_topics, passes=1, num_models=num_new_models,
iterations=1, random_state=RANDOM_STATE, topic_model_class=LdaMulticore,
workers=3, ensemble_workers=2,
)
# 1.1 ttda
num_topics_before_add_model = len(cumulative_elda.ttda)
num_models_before_add_model = cumulative_elda.num_models
cumulative_elda.add_model(base_elda.ttda)
assert len(cumulative_elda.ttda) == num_topics_before_add_model + len(base_elda.ttda)
assert cumulative_elda.num_models == num_models_before_add_model + 1 # defaults to 1 for one ttda matrix
# 1.2 an ensemble
num_topics_before_add_model = len(cumulative_elda.ttda)
num_models_before_add_model = cumulative_elda.num_models
cumulative_elda.add_model(base_elda, 5)
assert len(cumulative_elda.ttda) == num_topics_before_add_model + len(base_elda.ttda)
assert cumulative_elda.num_models == num_models_before_add_model + 5
# 1.3 a list of ensembles
num_topics_before_add_model = len(cumulative_elda.ttda)
num_models_before_add_model = cumulative_elda.num_models
# it should be totally legit to add a memory unfriendly object to a memory friendly one
base_elda_mem_unfriendly = self.get_elda_mem_unfriendly()
cumulative_elda.add_model([base_elda, base_elda_mem_unfriendly])
assert len(cumulative_elda.ttda) == num_topics_before_add_model + 2 * len(base_elda.ttda)
assert cumulative_elda.num_models == num_models_before_add_model + 2 * NUM_MODELS
# 1.4 a single gensim model
model = base_elda.classic_model_representation
num_topics_before_add_model = len(cumulative_elda.ttda)
num_models_before_add_model = cumulative_elda.num_models
cumulative_elda.add_model(model)
assert len(cumulative_elda.ttda) == num_topics_before_add_model + len(model.get_topics())
assert cumulative_elda.num_models == num_models_before_add_model + 1
# 1.5 a list of gensim models
num_topics_before_add_model = len(cumulative_elda.ttda)
num_models_before_add_model = cumulative_elda.num_models
cumulative_elda.add_model([model, model])
assert len(cumulative_elda.ttda) == num_topics_before_add_model + 2 * len(model.get_topics())
assert cumulative_elda.num_models == num_models_before_add_model + 2
self.assert_ttda_is_valid(cumulative_elda)
# 2. memory unfriendly
elda_mem_unfriendly = EnsembleLda(
corpus=common_corpus, id2word=common_dictionary,
num_topics=num_new_topics, passes=1, num_models=num_new_models,
iterations=1, random_state=RANDOM_STATE, topic_model_class=LdaMulticore,
workers=3, ensemble_workers=2, memory_friendly_ttda=False,
)
# 2.1 a single ensemble
num_topics_before_add_model = len(elda_mem_unfriendly.tms)
num_models_before_add_model = elda_mem_unfriendly.num_models
elda_mem_unfriendly.add_model(base_elda_mem_unfriendly)
assert len(elda_mem_unfriendly.tms) == num_topics_before_add_model + NUM_MODELS
assert elda_mem_unfriendly.num_models == num_models_before_add_model + NUM_MODELS
# 2.2 a list of ensembles
num_topics_before_add_model = len(elda_mem_unfriendly.tms)
num_models_before_add_model = elda_mem_unfriendly.num_models
elda_mem_unfriendly.add_model([base_elda_mem_unfriendly, base_elda_mem_unfriendly])
assert len(elda_mem_unfriendly.tms) == num_topics_before_add_model + 2 * NUM_MODELS
assert elda_mem_unfriendly.num_models == num_models_before_add_model + 2 * NUM_MODELS
# 2.3 a single gensim model
num_topics_before_add_model = len(elda_mem_unfriendly.tms)
num_models_before_add_model = elda_mem_unfriendly.num_models
elda_mem_unfriendly.add_model(base_elda_mem_unfriendly.tms[0])
assert len(elda_mem_unfriendly.tms) == num_topics_before_add_model + 1
assert elda_mem_unfriendly.num_models == num_models_before_add_model + 1
# 2.4 a list of gensim models
num_topics_before_add_model = len(elda_mem_unfriendly.tms)
num_models_before_add_model = elda_mem_unfriendly.num_models
elda_mem_unfriendly.add_model(base_elda_mem_unfriendly.tms)
assert len(elda_mem_unfriendly.tms) == num_topics_before_add_model + NUM_MODELS
assert elda_mem_unfriendly.num_models == num_models_before_add_model + NUM_MODELS
# 2.5 topic term distributions should throw errors, because the
# actual models are needed for the memory unfriendly ensemble
num_topics_before_add_model = len(elda_mem_unfriendly.tms)
num_models_before_add_model = elda_mem_unfriendly.num_models
with pytest.raises(ValueError):
elda_mem_unfriendly.add_model(base_elda_mem_unfriendly.tms[0].get_topics())
# remains unchanged
assert len(elda_mem_unfriendly.tms) == num_topics_before_add_model
assert elda_mem_unfriendly.num_models == num_models_before_add_model
assert elda_mem_unfriendly.num_models == len(elda_mem_unfriendly.tms)
self.assert_ttda_is_valid(elda_mem_unfriendly)
def test_add_and_recluster(self):
# See if after adding a model, the model still makes sense
num_new_models = 3
num_new_topics = 3
random_state = 1
# train two sets of models (memory friendly and memory unfriendly)
elda_1 = EnsembleLda(
corpus=common_corpus, id2word=common_dictionary,
num_topics=num_new_topics, passes=10, num_models=num_new_models,
iterations=30, random_state=random_state, topic_model_class='lda',
distance_workers=4,
)
elda_mem_unfriendly_1 = EnsembleLda(
corpus=common_corpus, id2word=common_dictionary,
num_topics=num_new_topics, passes=10, num_models=num_new_models,
iterations=30, random_state=random_state, topic_model_class=LdaModel,
distance_workers=4, memory_friendly_ttda=False,
)
elda_2 = self.get_elda()
elda_mem_unfriendly_2 = self.get_elda_mem_unfriendly()
assert elda_1.random_state != elda_2.random_state
assert elda_mem_unfriendly_1.random_state != elda_mem_unfriendly_2.random_state
# both should be similar
np.testing.assert_allclose(elda_1.ttda, elda_mem_unfriendly_1.ttda, rtol=RTOL)
np.testing.assert_allclose(elda_1.get_topics(), elda_mem_unfriendly_1.get_topics(), rtol=RTOL)
# and every next step applied to both should result in similar results
# 1. adding to ttda and tms
elda_1.add_model(elda_2)
elda_mem_unfriendly_1.add_model(elda_mem_unfriendly_2)
np.testing.assert_allclose(elda_1.ttda, elda_mem_unfriendly_1.ttda, rtol=RTOL)
assert len(elda_1.ttda) == len(elda_2.ttda) + num_new_models * num_new_topics
assert len(elda_mem_unfriendly_1.ttda) == len(elda_mem_unfriendly_2.ttda) + num_new_models * num_new_topics
assert len(elda_mem_unfriendly_1.tms) == NUM_MODELS + num_new_models
self.assert_ttda_is_valid(elda_1)
self.assert_ttda_is_valid(elda_mem_unfriendly_1)
# 2. distance matrix
elda_1._generate_asymmetric_distance_matrix()
elda_mem_unfriendly_1._generate_asymmetric_distance_matrix()
np.testing.assert_allclose(
elda_1.asymmetric_distance_matrix,
elda_mem_unfriendly_1.asymmetric_distance_matrix,
)
# 3. CBDBSCAN results
elda_1._generate_topic_clusters()
elda_mem_unfriendly_1._generate_topic_clusters()
clustering_results = elda_1.cluster_model.results
mem_unfriendly_clustering_results = elda_mem_unfriendly_1.cluster_model.results
self.assert_clustering_results_equal(clustering_results, mem_unfriendly_clustering_results)
# 4. finally, the stable topics
elda_1._generate_stable_topics()
elda_mem_unfriendly_1._generate_stable_topics()
np.testing.assert_allclose(
elda_1.get_topics(),
elda_mem_unfriendly_1.get_topics(),
)
elda_1.generate_gensim_representation()
elda_mem_unfriendly_1.generate_gensim_representation()
# same random state, hence topics should be still similar
np.testing.assert_allclose(elda_1.get_topics(), elda_mem_unfriendly_1.get_topics(), rtol=RTOL)
def test_inference(self, elda=None):
if elda is None:
elda = self.get_elda()
# get the most likely token id from topic 0
max_id = np.argmax(elda.get_topics()[0, :])
assert elda.classic_model_representation.iterations > 0
# topic 0 should be dominant in the inference.
# the difference between the probabilities should be significant and larger than 0.3
inferred = elda[[(max_id, 1)]]
assert inferred[0][1] - 0.3 > inferred[1][1]
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.WARN)
unittest.main()
| 20,009 | Python | .py | 367 | 45.441417 | 115 | 0.679873 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| 7,073 | svd_error.py | piskvorky_gensim/gensim/test/svd_error.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011 Radim Rehurek <radimrehurek@seznam.cz>
"""USAGE: %(program)s MATRIX.mm [CLIP_DOCS] [CLIP_TERMS]
Check truncated SVD error for the algo in gensim, using a given corpus. This script
runs the decomposition with several internal parameters (number of requested factors,
iterative chunk size) and reports error for each parameter combination.
The number of input documents is clipped to the first CLIP_DOCS. Similarly,
only the first CLIP_TERMS are considered (features with id >= CLIP_TERMS are
ignored, effectively restricting the vocabulary size). If you don't specify them,
the entire matrix will be used.
Example: ./svd_error.py ~/gensim/results/wiki_en_v10k.mm.bz2 100000 10000
"""
from __future__ import print_function, with_statement
import logging
import os
import sys
import time
import bz2
import itertools
import numpy as np
import scipy.linalg
import gensim
try:
from sparsesvd import sparsesvd
except ImportError:
# no SVDLIBC: install with `easy_install sparsesvd` if you want SVDLIBC results as well
sparsesvd = None
sparsesvd = None # don't use SVDLIBC
FACTORS = [300] # which num_topics to try
CHUNKSIZE = [10000, 1000] # which chunksize to try
POWER_ITERS = [0, 1, 2, 4, 6] # extra power iterations for the randomized algo
# when reporting reconstruction error, also report spectral norm error? (very slow)
COMPUTE_NORM2 = False
def norm2(a):
"""Spectral norm ("norm 2") of a symmetric matrix `a`."""
if COMPUTE_NORM2:
logging.info("computing spectral norm of a %s matrix", str(a.shape))
return scipy.linalg.eigvalsh(a).max() # much faster than np.linalg.norm(2)
else:
return np.nan
def rmse(diff):
return np.sqrt(1.0 * np.multiply(diff, diff).sum() / diff.size)
def print_error(name, aat, u, s, ideal_nf, ideal_n2):
err = -np.dot(u, np.dot(np.diag(s), u.T))
err += aat
nf, n2 = np.linalg.norm(err), norm2(err)
print(
'%s error: norm_frobenius=%f (/ideal=%g), norm2=%f (/ideal=%g), RMSE=%g' %
(name, nf, nf / ideal_nf, n2, n2 / ideal_n2, rmse(err))
)
sys.stdout.flush()
class ClippedCorpus:
def __init__(self, corpus, max_docs, max_terms):
self.corpus = corpus
self.max_docs, self.max_terms = max_docs, max_terms
def __iter__(self):
for doc in itertools.islice(self.corpus, self.max_docs):
yield [(f, w) for f, w in doc if f < self.max_terms]
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
logging.info("running %s", " ".join(sys.argv))
program = os.path.basename(sys.argv[0])
# do we have enough cmd line arguments?
if len(sys.argv) < 2:
print(globals()["__doc__"] % locals())
sys.exit(1)
fname = sys.argv[1]
if fname.endswith('bz2'):
mm = gensim.corpora.MmCorpus(bz2.BZ2File(fname))
else:
mm = gensim.corpora.MmCorpus(fname)
# extra cmd parameters = use a subcorpus (fewer docs, smaller vocab)
if len(sys.argv) > 2:
n = int(sys.argv[2])
else:
n = mm.num_docs
if len(sys.argv) > 3:
m = int(sys.argv[3])
else:
m = mm.num_terms
logging.info("using %i documents and %i features", n, m)
corpus = ClippedCorpus(mm, n, m)
id2word = gensim.utils.FakeDict(m)
logging.info("computing corpus * corpus^T") # eigenvalues of this matrix are singular values of `corpus`, squared
aat = np.zeros((m, m), dtype=np.float64)
for chunk in gensim.utils.grouper(corpus, chunksize=5000):
num_nnz = sum(len(doc) for doc in chunk)
chunk = gensim.matutils.corpus2csc(chunk, num_nnz=num_nnz, num_terms=m, num_docs=len(chunk), dtype=np.float32)
chunk = chunk * chunk.T
chunk = chunk.toarray()
aat += chunk
del chunk
logging.info("computing full decomposition of corpus * corpus^t")
aat = aat.astype(np.float32)
spectrum_s, spectrum_u = scipy.linalg.eigh(aat)
spectrum_s = spectrum_s[::-1] # re-order to descending eigenvalue order
spectrum_u = spectrum_u.T[::-1].T
np.save(fname + '.spectrum.npy', spectrum_s)
for factors in FACTORS:
err = -np.dot(spectrum_u[:, :factors], np.dot(np.diag(spectrum_s[:factors]), spectrum_u[:, :factors].T))
err += aat
ideal_fro = np.linalg.norm(err)
del err
ideal_n2 = spectrum_s[factors + 1]
print('*' * 40, "%i factors, ideal error norm_frobenius=%f, norm_2=%f" % (factors, ideal_fro, ideal_n2))
print("*" * 30, end="")
print_error("baseline", aat,
np.zeros((m, factors)), np.zeros((factors)), ideal_fro, ideal_n2)
if sparsesvd:
logging.info("computing SVDLIBC SVD for %i factors", factors)
taken = time.time()
corpus_ram = gensim.matutils.corpus2csc(corpus, num_terms=m)
ut, s, vt = sparsesvd(corpus_ram, factors)
taken = time.time() - taken
del corpus_ram
del vt
u, s = ut.T.astype(np.float32), s.astype(np.float32)**2 # convert singular values to eigenvalues
del ut
print("SVDLIBC SVD for %i factors took %s s (spectrum %f .. %f)"
% (factors, taken, s[0], s[-1]))
print_error("SVDLIBC", aat, u, s, ideal_fro, ideal_n2)
del u
for power_iters in POWER_ITERS:
for chunksize in CHUNKSIZE:
logging.info(
"computing incremental SVD for %i factors, %i power iterations, chunksize %i",
factors, power_iters, chunksize
)
taken = time.time()
gensim.models.lsimodel.P2_EXTRA_ITERS = power_iters
model = gensim.models.LsiModel(
corpus, id2word=id2word, num_topics=factors,
chunksize=chunksize, power_iters=power_iters
)
taken = time.time() - taken
u, s = model.projection.u.astype(np.float32), model.projection.s.astype(np.float32)**2
del model
print(
"incremental SVD for %i factors, %i power iterations, "
"chunksize %i took %s s (spectrum %f .. %f)" %
(factors, power_iters, chunksize, taken, s[0], s[-1])
)
print_error('incremental SVD', aat, u, s, ideal_fro, ideal_n2)
del u
logging.info("computing multipass SVD for %i factors, %i power iterations", factors, power_iters)
taken = time.time()
model = gensim.models.LsiModel(
corpus, id2word=id2word, num_topics=factors, chunksize=2000,
onepass=False, power_iters=power_iters
)
taken = time.time() - taken
u, s = model.projection.u.astype(np.float32), model.projection.s.astype(np.float32)**2
del model
print(
"multipass SVD for %i factors, "
"%i power iterations took %s s (spectrum %f .. %f)" %
(factors, power_iters, taken, s[0], s[-1])
)
print_error('multipass SVD', aat, u, s, ideal_fro, ideal_n2)
del u
logging.info("finished running %s", program)
| 7,390 | Python | .py | 163 | 36.846626 | 118 | 0.613118 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |

| 7,074 | simspeed2.py | piskvorky_gensim/gensim/test/simspeed2.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
USAGE: %(program)s CORPUS_DENSE.mm CORPUS_SPARSE.mm [NUMDOCS]
Run speed test of similarity queries. Only use the first NUMDOCS documents of \
each corpus for testing (or use all if no NUMDOCS is given).
The two sample corpora can be downloaded from http://nlp.fi.muni.cz/projekty/gensim/wikismall.tgz
Example: ./simspeed2.py wikismall.dense.mm wikismall.sparse.mm
"""
import logging
import sys
import itertools
import os
import math
from time import time
import gensim
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
logging.info("running %s", " ".join(sys.argv))
# check and process cmdline input
program = os.path.basename(sys.argv[0])
if len(sys.argv) < 3:
print(globals()['__doc__'] % locals())
sys.exit(1)
corpus_dense = gensim.corpora.MmCorpus(sys.argv[1])
corpus_sparse = gensim.corpora.MmCorpus(sys.argv[2])
dense_features, sparse_features = corpus_dense.num_terms, corpus_sparse.num_terms
if len(sys.argv) > 3:
NUMDOCS = int(sys.argv[3])
corpus_dense = list(itertools.islice(corpus_dense, NUMDOCS))
corpus_sparse = list(itertools.islice(corpus_sparse, NUMDOCS))
# create the query index to be tested (one for dense input, one for sparse)
index_dense = gensim.similarities.Similarity('/tmp/tstdense', corpus_dense, dense_features)
index_sparse = gensim.similarities.Similarity('/tmp/tstsparse', corpus_sparse, sparse_features)
density = 100.0 * sum(shard.num_nnz for shard in index_sparse.shards) / (len(index_sparse) * sparse_features)
logging.info(
"test 1 (dense): similarity of all vs. all (%i documents, %i dense features)",
len(corpus_dense), index_dense.num_features
)
for chunksize in [1, 8, 32, 64, 128, 256, 512, 1024, index_dense.shardsize]:
index_dense.chunksize = chunksize
start = time()
for sim in index_dense:
pass
taken = time() - start
queries = math.ceil(1.0 * len(corpus_dense) / chunksize)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s)",
chunksize, taken, len(corpus_dense) / taken, queries / taken
)
index_dense.num_best = 10
logging.info("test 2 (dense): as above, but only ask for the top-10 most similar for each document")
for chunksize in [1, 8, 32, 64, 128, 256, 512, 1024, index_dense.shardsize]:
index_dense.chunksize = chunksize
start = time()
sims = [sim for sim in index_dense]
taken = time() - start
queries = math.ceil(1.0 * len(corpus_dense) / chunksize)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s)",
chunksize, taken, len(corpus_dense) / taken, queries / taken
)
index_dense.num_best = None
logging.info(
"test 3 (sparse): similarity of all vs. all (%i documents, %i features, %.2f%% density)",
len(corpus_sparse), index_sparse.num_features, density
)
for chunksize in [1, 5, 10, 100, 256, 500, 1000, index_sparse.shardsize]:
index_sparse.chunksize = chunksize
start = time()
for sim in index_sparse:
pass
taken = time() - start
queries = math.ceil(1.0 * len(corpus_sparse) / chunksize)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s)",
chunksize, taken, len(corpus_sparse) / taken, queries / taken
)
index_sparse.num_best = 10
logging.info("test 4 (sparse): as above, but only ask for the top-10 most similar for each document")
for chunksize in [1, 5, 10, 100, 256, 500, 1000, index_sparse.shardsize]:
index_sparse.chunksize = chunksize
start = time()
for sim in index_sparse:
pass
taken = time() - start
queries = math.ceil(1.0 * len(corpus_sparse) / chunksize)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s)",
chunksize, taken, len(corpus_sparse) / taken, queries / taken
)
index_sparse.num_best = None
# Difference between test #5 and test #1 is that the query in #5 is a gensim iterable
# corpus, while in #1, the index is used directly (numpy arrays). So #5 is slower,
# because it needs to convert sparse vecs to numpy arrays and normalize them to
# unit length (extra work), which #1 avoids.
query = list(itertools.islice(corpus_dense, 1000))
logging.info(
"test 5 (dense): dense corpus of %i docs vs. index (%i documents, %i dense features)",
len(query), len(index_dense), index_dense.num_features
)
for chunksize in [1, 8, 32, 64, 128, 256, 512, 1024]:
start = time()
if chunksize > 1:
sims = []
for chunk in gensim.utils.chunkize_serial(query, chunksize):
_ = index_dense[chunk]
else:
for vec in query:
_ = index_dense[vec]
taken = time() - start
queries = math.ceil(1.0 * len(query) / chunksize)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s)",
chunksize, taken, len(query) / taken, queries / taken
)
# Same comment as for test #5.
query = list(itertools.islice(corpus_dense, 1000))
logging.info(
"test 6 (sparse): sparse corpus of %i docs vs. sparse index (%i documents, %i features, %.2f%% density)",
len(query), len(corpus_sparse), index_sparse.num_features, density
)
for chunksize in [1, 5, 10, 100, 500, 1000]:
start = time()
if chunksize > 1:
sims = []
for chunk in gensim.utils.chunkize_serial(query, chunksize):
_ = index_sparse[chunk]
else:
for vec in query:
_ = index_sparse[vec]
taken = time() - start
queries = math.ceil(1.0 * len(query) / chunksize)
logging.info(
"chunksize=%i, time=%.4fs (%.2f docs/s, %.2f queries/s)",
chunksize, taken, len(query) / taken, queries / taken
)
logging.info("finished running %s", program)
| 6,436 | Python | .py | 141 | 37.93617 | 113 | 0.624184 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |

| 7,075 | test_glove2word2vec.py | piskvorky_gensim/gensim/test/test_glove2word2vec.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2016 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""Test for gensim.scripts.glove2word2vec.py."""
import logging
import unittest
import os
import sys
import numpy
import gensim
from gensim.utils import check_output
from gensim.test.utils import datapath, get_tmpfile
class TestGlove2Word2Vec(unittest.TestCase):
def setUp(self):
self.datapath = datapath('test_glove.txt')
self.output_file = get_tmpfile('glove2word2vec.test')
def test_conversion(self):
check_output(args=[
sys.executable, '-m', 'gensim.scripts.glove2word2vec',
'--input', self.datapath, '--output', self.output_file
])
# test that the converted model loads successfully
try:
self.test_model = gensim.models.KeyedVectors.load_word2vec_format(self.output_file)
self.assertTrue(numpy.allclose(self.test_model.n_similarity(['the', 'and'], ['and', 'the']), 1.0))
except Exception:
if os.path.isfile(os.path.join(self.output_file)):
self.fail('model file %s was created but could not be loaded.' % self.output_file)
else:
self.fail(
'model file %s creation failed, check the parameters and input file format.' % self.output_file
)
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
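A usage sketch (editorial addition) mirroring the conversion the test above drives through check_output. The file names are placeholders, and the words queried at the end are assumed to be in the converted vocabulary, as they are in the bundled test_glove.txt.

# Shell step (same module the test invokes):
#   python -m gensim.scripts.glove2word2vec --input glove.txt --output glove.word2vec.txt
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format('glove.word2vec.txt')  # placeholder output path
print(kv.n_similarity(['the', 'and'], ['and', 'the']))        # ~1.0, as the test asserts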
| 1,604 | Python | .py | 37 | 36.378378 | 115 | 0.659178 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |

| 7,076 | test_lda_callback.py | piskvorky_gensim/gensim/test/test_lda_callback.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2018 Allenyl <allen7575@gmail.com>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking visdom API
"""
import unittest
import subprocess
import time
from gensim.models import LdaModel
from gensim.test.utils import datapath, common_dictionary
from gensim.corpora import MmCorpus
from gensim.models.callbacks import CoherenceMetric
try:
from visdom import Visdom
VISDOM_INSTALLED = True
except ImportError:
VISDOM_INSTALLED = False
@unittest.skipIf(VISDOM_INSTALLED is False, "Visdom not installed")
class TestLdaCallback(unittest.TestCase):
def setUp(self):
self.corpus = MmCorpus(datapath('testcorpus.mm'))
self.ch_umass = CoherenceMetric(corpus=self.corpus, coherence="u_mass", logger="visdom", title="Coherence")
self.callback = [self.ch_umass]
self.model = LdaModel(id2word=common_dictionary, num_topics=2, passes=10, callbacks=self.callback)
self.host = "http://localhost"
self.port = 8097
def test_callback_update_graph(self):
with subprocess.Popen(['python', '-m', 'visdom.server', '-port', str(self.port)]) as proc:
# wait for visdom server startup (any better way?)
viz = Visdom(server=self.host, port=self.port)
for attempt in range(5):
time.sleep(1.0) # seconds
if viz.check_connection():
break
assert viz.check_connection()
viz.close()
self.model.update(self.corpus)
proc.kill()
if __name__ == '__main__':
unittest.main()
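A minimal sketch (editorial addition) of the callback wiring the test above checks. It assumes CoherenceMetric's logger="shell" option, which logs to the console instead of visdom so no visdom server is required; the corpus and dictionary come from gensim's bundled test utilities.

from gensim.corpora import MmCorpus
from gensim.models import LdaModel
from gensim.models.callbacks import CoherenceMetric
from gensim.test.utils import datapath, common_dictionary

corpus = MmCorpus(datapath('testcorpus.mm'))
coherence = CoherenceMetric(corpus=corpus, coherence="u_mass", logger="shell", title="Coherence")
model = LdaModel(corpus=corpus, id2word=common_dictionary, num_topics=2, passes=10, callbacks=[coherence])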
| 1,700 | Python | .py | 43 | 33.27907 | 115 | 0.675167 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |

| 7,077 | test_ldamodel.py | piskvorky_gensim/gensim/test/test_ldamodel.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
import logging
import numbers
import os
import unittest
import copy
import numpy as np
from numpy.testing import assert_allclose
from gensim.corpora import mmcorpus, Dictionary
from gensim.models import ldamodel, ldamulticore
from gensim import matutils, utils
from gensim.test import basetmtests
from gensim.test.utils import datapath, get_tmpfile, common_texts
GITHUB_ACTIONS_WINDOWS = os.environ.get('RUNNER_OS') == 'Windows'
dictionary = Dictionary(common_texts)
corpus = [dictionary.doc2bow(text) for text in common_texts]
def test_random_state():
testcases = [np.random.seed(0), None, np.random.RandomState(0), 0]
for testcase in testcases:
assert isinstance(utils.get_random_state(testcase), np.random.RandomState)
class TestLdaModel(unittest.TestCase, basetmtests.TestBaseTopicModel):
def setUp(self):
self.corpus = mmcorpus.MmCorpus(datapath('testcorpus.mm'))
self.class_ = ldamodel.LdaModel
self.model = self.class_(corpus, id2word=dictionary, num_topics=2, passes=100)
def test_sync_state(self):
model2 = self.class_(corpus=self.corpus, id2word=dictionary, num_topics=2, passes=1)
model2.state = copy.deepcopy(self.model.state)
model2.sync_state()
assert_allclose(self.model.get_term_topics(2), model2.get_term_topics(2), rtol=1e-5)
assert_allclose(self.model.get_topics(), model2.get_topics(), rtol=1e-5)
# properly continues training on the new state
self.model.random_state = np.random.RandomState(0)
model2.random_state = np.random.RandomState(0)
self.model.passes = 1
model2.passes = 1
self.model.update(self.corpus)
model2.update(self.corpus)
assert_allclose(self.model.get_term_topics(2), model2.get_term_topics(2), rtol=1e-5)
assert_allclose(self.model.get_topics(), model2.get_topics(), rtol=1e-5)
def test_transform(self):
passed = False
# sometimes, LDA training gets stuck at a local minimum
# in that case try re-training the model from scratch, hoping for a
# better random initialization
for i in range(25): # restart at most 25 times
# create the transformation model
model = self.class_(id2word=dictionary, num_topics=2, passes=100)
model.update(self.corpus)
# transform one document
doc = list(corpus)[0]
transformed = model[doc]
vec = matutils.sparse2full(transformed, 2) # convert to dense vector, for easier equality tests
expected = [0.13, 0.87]
# must contain the same values, up to re-ordering
passed = np.allclose(sorted(vec), sorted(expected), atol=1e-1)
if passed:
break
logging.warning(
"LDA failed to converge on attempt %i (got %s, expected %s)", i, sorted(vec), sorted(expected)
)
self.assertTrue(passed)
def test_alpha_auto(self):
model1 = self.class_(corpus, id2word=dictionary, alpha='symmetric', passes=10)
modelauto = self.class_(corpus, id2word=dictionary, alpha='auto', passes=10)
# did we learn something?
self.assertFalse(all(np.equal(model1.alpha, modelauto.alpha)))
def test_alpha(self):
kwargs = dict(
id2word=dictionary,
num_topics=2,
alpha=None
)
expected_shape = (2,)
# should not raise anything
self.class_(**kwargs)
kwargs['alpha'] = 'symmetric'
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
assert_allclose(model.alpha, np.array([0.5, 0.5]))
kwargs['alpha'] = 'asymmetric'
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
assert_allclose(model.alpha, [0.630602, 0.369398], rtol=1e-5)
kwargs['alpha'] = 0.3
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
assert_allclose(model.alpha, np.array([0.3, 0.3]))
kwargs['alpha'] = 3
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
assert_allclose(model.alpha, np.array([3, 3]))
kwargs['alpha'] = [0.3, 0.3]
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
assert_allclose(model.alpha, np.array([0.3, 0.3]))
kwargs['alpha'] = np.array([0.3, 0.3])
model = self.class_(**kwargs)
self.assertEqual(model.alpha.shape, expected_shape)
assert_allclose(model.alpha, np.array([0.3, 0.3]))
# all should raise an exception for being wrong shape
kwargs['alpha'] = [0.3, 0.3, 0.3]
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['alpha'] = [[0.3], [0.3]]
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['alpha'] = [0.3]
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['alpha'] = "gensim is cool"
self.assertRaises(ValueError, self.class_, **kwargs)
def test_eta_auto(self):
model1 = self.class_(corpus, id2word=dictionary, eta='symmetric', passes=10)
modelauto = self.class_(corpus, id2word=dictionary, eta='auto', passes=10)
# did we learn something?
self.assertFalse(np.allclose(model1.eta, modelauto.eta))
def test_eta(self):
kwargs = dict(
id2word=dictionary,
num_topics=2,
eta=None
)
num_terms = len(dictionary)
expected_shape = (num_terms,)
# should not raise anything
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
assert_allclose(model.eta, np.array([0.5] * num_terms))
kwargs['eta'] = 'symmetric'
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
assert_allclose(model.eta, np.array([0.5] * num_terms))
kwargs['eta'] = 0.3
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
assert_allclose(model.eta, np.array([0.3] * num_terms))
kwargs['eta'] = 3
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
assert_allclose(model.eta, np.array([3] * num_terms))
kwargs['eta'] = [0.3] * num_terms
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
assert_allclose(model.eta, np.array([0.3] * num_terms))
kwargs['eta'] = np.array([0.3] * num_terms)
model = self.class_(**kwargs)
self.assertEqual(model.eta.shape, expected_shape)
assert_allclose(model.eta, np.array([0.3] * num_terms))
# should be ok with num_topics x num_terms
testeta = np.array([[0.5] * len(dictionary)] * 2)
kwargs['eta'] = testeta
self.class_(**kwargs)
# all should raise an exception for being wrong shape
kwargs['eta'] = testeta.reshape(tuple(reversed(testeta.shape)))
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['eta'] = [0.3]
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['eta'] = [0.3] * (num_terms + 1)
self.assertRaises(AssertionError, self.class_, **kwargs)
kwargs['eta'] = "gensim is cool"
self.assertRaises(ValueError, self.class_, **kwargs)
kwargs['eta'] = "asymmetric"
self.assertRaises(ValueError, self.class_, **kwargs)
def test_top_topics(self):
top_topics = self.model.top_topics(self.corpus)
for topic, score in top_topics:
self.assertTrue(isinstance(topic, list))
self.assertTrue(isinstance(score, float))
for v, k in topic:
self.assertTrue(isinstance(k, str))
self.assertTrue(np.issubdtype(v, np.floating))
def test_get_topic_terms(self):
topic_terms = self.model.get_topic_terms(1)
for k, v in topic_terms:
self.assertTrue(isinstance(k, numbers.Integral))
self.assertTrue(np.issubdtype(v, np.floating))
@unittest.skipIf(GITHUB_ACTIONS_WINDOWS, 'see <https://github.com/RaRe-Technologies/gensim/pull/2836>')
def test_get_document_topics(self):
model = self.class_(
self.corpus, id2word=dictionary, num_topics=2, passes=100, random_state=np.random.seed(0)
)
doc_topics = model.get_document_topics(self.corpus)
for topic in doc_topics:
self.assertTrue(isinstance(topic, list))
for k, v in topic:
self.assertTrue(isinstance(k, numbers.Integral))
self.assertTrue(np.issubdtype(v, np.floating))
# Test case to use the get_document_topics function for the corpus
all_topics = model.get_document_topics(self.corpus, per_word_topics=True)
self.assertEqual(model.state.numdocs, len(corpus))
for topic in all_topics:
self.assertTrue(isinstance(topic, tuple))
for k, v in topic[0]: # list of doc_topics
self.assertTrue(isinstance(k, numbers.Integral))
self.assertTrue(np.issubdtype(v, np.floating))
for w, topic_list in topic[1]: # list of word_topics
self.assertTrue(isinstance(w, numbers.Integral))
self.assertTrue(isinstance(topic_list, list))
for w, phi_values in topic[2]: # list of word_phis
self.assertTrue(isinstance(w, numbers.Integral))
self.assertTrue(isinstance(phi_values, list))
# Test case to check the filtering effect of minimum_probability and minimum_phi_value
doc_topic_count_na = 0
word_phi_count_na = 0
all_topics = model.get_document_topics(
self.corpus, minimum_probability=0.8, minimum_phi_value=1.0, per_word_topics=True
)
self.assertEqual(model.state.numdocs, len(corpus))
for topic in all_topics:
self.assertTrue(isinstance(topic, tuple))
for k, v in topic[0]: # list of doc_topics
self.assertTrue(isinstance(k, numbers.Integral))
self.assertTrue(np.issubdtype(v, np.floating))
if len(topic[0]) != 0:
doc_topic_count_na += 1
for w, topic_list in topic[1]: # list of word_topics
self.assertTrue(isinstance(w, numbers.Integral))
self.assertTrue(isinstance(topic_list, list))
for w, phi_values in topic[2]: # list of word_phis
self.assertTrue(isinstance(w, numbers.Integral))
self.assertTrue(isinstance(phi_values, list))
if len(phi_values) != 0:
word_phi_count_na += 1
self.assertTrue(model.state.numdocs > doc_topic_count_na)
self.assertTrue(sum(len(i) for i in corpus) > word_phi_count_na)
doc_topics, word_topics, word_phis = model.get_document_topics(self.corpus[1], per_word_topics=True)
for k, v in doc_topics:
self.assertTrue(isinstance(k, numbers.Integral))
self.assertTrue(np.issubdtype(v, np.floating))
for w, topic_list in word_topics:
self.assertTrue(isinstance(w, numbers.Integral))
self.assertTrue(isinstance(topic_list, list))
for w, phi_values in word_phis:
self.assertTrue(isinstance(w, numbers.Integral))
self.assertTrue(isinstance(phi_values, list))
# word_topics looks like this: [(word_id, [topic_id_most_probable, topic_id_second_most_probable, ...]), ...].
# we check one case in word_topics, i.e. the first word in the doc and its likely topics.
# FIXME: Fails on osx and win
# expected_word = 0
# self.assertEqual(word_topics[0][0], expected_word)
# self.assertTrue(0 in word_topics[0][1])
def test_term_topics(self):
model = self.class_(
self.corpus, id2word=dictionary, num_topics=2, passes=100, random_state=np.random.seed(0)
)
# check with word_type
result = model.get_term_topics(2)
for topic_no, probability in result:
self.assertTrue(isinstance(topic_no, int))
self.assertTrue(np.issubdtype(probability, np.floating))
# checks if topic '1' is in the result list
# FIXME: Fails on osx and win
# self.assertTrue(1 in result[0])
# if the user passes the word itself instead of its id, check get_term_topics with the word
result = model.get_term_topics(str(model.id2word[2]))
for topic_no, probability in result:
self.assertTrue(isinstance(topic_no, int))
self.assertTrue(np.issubdtype(probability, np.floating))
# checks if topic '1' is in the result list
# FIXME: Fails on osx and win
# self.assertTrue(1 in result[0])
def test_passes(self):
# longMessage=True makes assertion failures show the default message plus our custom one
self.longMessage = True
# construct what we expect when passes aren't involved
test_rhots = list()
model = self.class_(id2word=dictionary, chunksize=1, num_topics=2)
def final_rhot(model):
return pow(model.offset + (1 * model.num_updates) / model.chunksize, -model.decay)
# generate 5 updates to test rhot on
for x in range(5):
model.update(self.corpus)
test_rhots.append(final_rhot(model))
for passes in [1, 5, 10, 50, 100]:
model = self.class_(id2word=dictionary, chunksize=1, num_topics=2, passes=passes)
self.assertEqual(final_rhot(model), 1.0)
# make sure the rhot matches the test after each update
for test_rhot in test_rhots:
model.update(self.corpus)
msg = ", ".join(str(x) for x in [passes, model.num_updates, model.state.numdocs])
self.assertAlmostEqual(final_rhot(model), test_rhot, msg=msg)
self.assertEqual(model.state.numdocs, len(corpus) * len(test_rhots))
self.assertEqual(model.num_updates, len(corpus) * len(test_rhots))
# def test_topic_seeding(self):
# for topic in range(2):
# passed = False
# for i in range(5): # restart at most this many times, to mitigate LDA randomness
# # try seeding it both ways round, check you get the same
# # topics out but with which way round they are depending
# # on the way round they're seeded
# eta = np.ones((2, len(dictionary))) * 0.5
# system = dictionary.token2id[u'system']
# trees = dictionary.token2id[u'trees']
# # aggressively seed the word 'system', in one of the
# # two topics, 10 times higher than the other words
# eta[topic, system] *= 10.0
# model = self.class_(id2word=dictionary, num_topics=2, passes=200, eta=eta)
# model.update(self.corpus)
# topics = [{word: p for p, word in model.show_topic(j, topn=None)} for j in range(2)]
# # check that the word 'system' in the topic we seeded got a high weight,
# # and the word 'trees' (the main word in the other topic) a low weight --
# # and vice versa for the other topic (which we didn't seed with 'system')
# passed = (
# (topics[topic][u'system'] > topics[topic][u'trees'])
# and
# (topics[1 - topic][u'system'] < topics[1 - topic][u'trees'])
# )
# if passed:
# break
# logging.warning("LDA failed to converge on attempt %i (got %s)", i, topics)
# self.assertTrue(passed)
def test_persistence(self):
fname = get_tmpfile('gensim_models_lda.tst')
model = self.model
model.save(fname)
model2 = self.class_.load(fname)
self.assertEqual(model.num_topics, model2.num_topics)
self.assertTrue(np.allclose(model.expElogbeta, model2.expElogbeta))
tstvec = []
self.assertTrue(np.allclose(model[tstvec], model2[tstvec])) # try projecting an empty vector
def test_model_compatibility_with_python_versions(self):
fname_model_2_7 = datapath('ldamodel_python_2_7')
model_2_7 = self.class_.load(fname_model_2_7)
fname_model_3_5 = datapath('ldamodel_python_3_5')
model_3_5 = self.class_.load(fname_model_3_5)
self.assertEqual(model_2_7.num_topics, model_3_5.num_topics)
self.assertTrue(np.allclose(model_2_7.expElogbeta, model_3_5.expElogbeta))
tstvec = []
self.assertTrue(np.allclose(model_2_7[tstvec], model_3_5[tstvec])) # try projecting an empty vector
id2word_2_7 = dict(model_2_7.id2word.iteritems())
id2word_3_5 = dict(model_3_5.id2word.iteritems())
self.assertEqual(set(id2word_2_7.keys()), set(id2word_3_5.keys()))
def test_persistence_ignore(self):
fname = get_tmpfile('gensim_models_lda_testPersistenceIgnore.tst')
model = ldamodel.LdaModel(self.corpus, num_topics=2)
model.save(fname, ignore='id2word')
model2 = ldamodel.LdaModel.load(fname)
self.assertTrue(model2.id2word is None)
model.save(fname, ignore=['id2word'])
model2 = ldamodel.LdaModel.load(fname)
self.assertTrue(model2.id2word is None)
def test_persistence_compressed(self):
fname = get_tmpfile('gensim_models_lda.tst.gz')
model = self.model
model.save(fname)
model2 = self.class_.load(fname, mmap=None)
self.assertEqual(model.num_topics, model2.num_topics)
self.assertTrue(np.allclose(model.expElogbeta, model2.expElogbeta))
tstvec = []
self.assertTrue(np.allclose(model[tstvec], model2[tstvec])) # try projecting an empty vector
def test_large_mmap(self):
fname = get_tmpfile('gensim_models_lda.tst')
model = self.model
# simulate storing large arrays separately
model.save(fname, sep_limit=0)
# test loading the large model arrays with mmap
model2 = self.class_.load(fname, mmap='r')
self.assertEqual(model.num_topics, model2.num_topics)
self.assertTrue(isinstance(model2.expElogbeta, np.memmap))
self.assertTrue(np.allclose(model.expElogbeta, model2.expElogbeta))
tstvec = []
self.assertTrue(np.allclose(model[tstvec], model2[tstvec])) # try projecting an empty vector
def test_large_mmap_compressed(self):
fname = get_tmpfile('gensim_models_lda.tst.gz')
model = self.model
# simulate storing large arrays separately
model.save(fname, sep_limit=0)
# test loading the large model arrays with mmap
self.assertRaises(IOError, self.class_.load, fname, mmap='r')
def test_random_state_backward_compatibility(self):
# load a model saved using a pre-0.13.2 version of Gensim
pre_0_13_2_fname = datapath('pre_0_13_2_model')
model_pre_0_13_2 = self.class_.load(pre_0_13_2_fname)
# set `num_topics` less than `model_pre_0_13_2.num_topics` so that `model_pre_0_13_2.random_state` is used
model_topics = model_pre_0_13_2.print_topics(num_topics=2, num_words=3)
for i in model_topics:
self.assertTrue(isinstance(i[0], int))
self.assertTrue(isinstance(i[1], str))
# save back the loaded model using a post-0.13.2 version of Gensim
post_0_13_2_fname = get_tmpfile('gensim_models_lda_post_0_13_2_model.tst')
model_pre_0_13_2.save(post_0_13_2_fname)
# load a model saved using a post-0.13.2 version of Gensim
model_post_0_13_2 = self.class_.load(post_0_13_2_fname)
model_topics_new = model_post_0_13_2.print_topics(num_topics=2, num_words=3)
for i in model_topics_new:
self.assertTrue(isinstance(i[0], int))
self.assertTrue(isinstance(i[1], str))
def test_dtype_backward_compatibility(self):
lda_3_0_1_fname = datapath('lda_3_0_1_model')
test_doc = [(0, 1), (1, 1), (2, 1)]
expected_topics = [(0, 0.87005886977475178), (1, 0.12994113022524822)]
# save model to use in test
# self.model.save(lda_3_0_1_fname)
# load a model saved using a 3.0.1 version of Gensim
model = self.class_.load(lda_3_0_1_fname)
# and test it on a predefined document
topics = model[test_doc]
self.assertTrue(np.allclose(expected_topics, topics))
# endclass TestLdaModel
class TestLdaMulticore(TestLdaModel):
def setUp(self):
self.corpus = mmcorpus.MmCorpus(datapath('testcorpus.mm'))
self.class_ = ldamulticore.LdaMulticore
self.model = self.class_(corpus, id2word=dictionary, num_topics=2, passes=100)
# override LdaModel because multicore does not allow alpha=auto
def test_alpha_auto(self):
self.assertRaises(RuntimeError, self.class_, alpha='auto')
# endclass TestLdaMulticore
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
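The core train-and-transform pattern that TestLdaModel.test_transform above exercises, extracted into a standalone sketch (editorial addition) using the same helpers the test module imports.

from gensim.corpora import Dictionary
from gensim.models import ldamodel
from gensim import matutils
from gensim.test.utils import common_texts

dictionary = Dictionary(common_texts)
corpus = [dictionary.doc2bow(text) for text in common_texts]
model = ldamodel.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=100)
dense = matutils.sparse2full(model[corpus[0]], 2)  # dense 2-topic distribution of the first document
print(dense)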
| 21,720 | Python | .py | 416 | 42.942308 | 114 | 0.632081 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |

| 7,078 | test_datatype.py | piskvorky_gensim/gensim/test/test_datatype.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking various matutils functions.
"""
import logging
import unittest
import numpy as np
from gensim.test.utils import datapath
from gensim.models.keyedvectors import KeyedVectors
class TestDataType(unittest.TestCase):
def load_model(self, datatype):
path = datapath('high_precision.kv.txt')
kv = KeyedVectors.load_word2vec_format(path, binary=False,
datatype=datatype)
return kv
def test_high_precision(self):
kv = self.load_model(np.float64)
self.assertAlmostEqual(kv['horse.n.01'][0], -0.0008546282343595379)
self.assertEqual(kv['horse.n.01'][0].dtype, np.float64)
def test_medium_precision(self):
kv = self.load_model(np.float32)
self.assertAlmostEqual(kv['horse.n.01'][0], -0.00085462822)
self.assertEqual(kv['horse.n.01'][0].dtype, np.float32)
def test_low_precision(self):
kv = self.load_model(np.float16)
self.assertAlmostEqual(kv['horse.n.01'][0], -0.00085449)
self.assertEqual(kv['horse.n.01'][0].dtype, np.float16)
def test_type_conversion(self):
path = datapath('high_precision.kv.txt')
binary_path = datapath('high_precision.kv.bin')
model1 = KeyedVectors.load_word2vec_format(path, datatype=np.float16)
model1.save_word2vec_format(binary_path, binary=True)
model2 = KeyedVectors.load_word2vec_format(binary_path, datatype=np.float64, binary=True)
self.assertAlmostEqual(model1["horse.n.01"][0], np.float16(model2["horse.n.01"][0]))
self.assertEqual(model1["horse.n.01"][0].dtype, np.float16)
self.assertEqual(model2["horse.n.01"][0].dtype, np.float64)
if __name__ == '__main__':
logging.root.setLevel(logging.WARNING)
unittest.main()
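A standalone version (editorial addition) of the precision checks tested above; it uses the same bundled test vectors and shows that the requested dtype is preserved.

import numpy as np
from gensim.models.keyedvectors import KeyedVectors
from gensim.test.utils import datapath

kv = KeyedVectors.load_word2vec_format(datapath('high_precision.kv.txt'), binary=False, datatype=np.float16)
print(kv['horse.n.01'].dtype)  # float16, matching test_low_precision above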
| 1,965 | Python | .py | 42 | 39.904762 | 97 | 0.677132 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |

| 7,079 | test_logentropy_model.py | piskvorky_gensim/gensim/test/test_logentropy_model.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking transformation algorithms (the models package).
"""
import logging
import unittest
import numpy as np
from gensim.corpora.mmcorpus import MmCorpus
from gensim.models import logentropy_model
from gensim.test.utils import datapath, get_tmpfile
class TestLogEntropyModel(unittest.TestCase):
TEST_CORPUS = [[(1, 1.0)], [], [(0, 0.5), (2, 1.0)], []]
def setUp(self):
self.corpus_small = MmCorpus(datapath('test_corpus_small.mm'))
self.corpus_ok = MmCorpus(datapath('test_corpus_ok.mm'))
self.corpus_empty = []
def test_generator_fail(self):
"""Test creating a model using a generator as input; should fail."""
def get_generator(test_corpus=TestLogEntropyModel.TEST_CORPUS):
for test_doc in test_corpus:
yield test_doc
self.assertRaises(ValueError, logentropy_model.LogEntropyModel, corpus=get_generator())
def test_empty_fail(self):
"""Test creating a model using an empty input; should fail."""
self.assertRaises(ValueError, logentropy_model.LogEntropyModel, corpus=self.corpus_empty)
def test_transform(self):
# create the transformation model
model = logentropy_model.LogEntropyModel(self.corpus_ok, normalize=False)
# transform one document
doc = list(self.corpus_ok)[0]
transformed = model[doc]
expected = [
(0, 0.3748900964125389),
(1, 0.30730215324230725),
(3, 1.20941755462856)
]
self.assertTrue(np.allclose(transformed, expected))
def test_persistence(self):
fname = get_tmpfile('gensim_models_logentry.tst')
model = logentropy_model.LogEntropyModel(self.corpus_ok, normalize=True)
model.save(fname)
model2 = logentropy_model.LogEntropyModel.load(fname)
self.assertTrue(model.entr == model2.entr)
tstvec = []
self.assertTrue(np.allclose(model[tstvec], model2[tstvec]))
def test_persistence_compressed(self):
fname = get_tmpfile('gensim_models_logentry.tst.gz')
model = logentropy_model.LogEntropyModel(self.corpus_ok, normalize=True)
model.save(fname)
model2 = logentropy_model.LogEntropyModel.load(fname, mmap=None)
self.assertTrue(model.entr == model2.entr)
tstvec = []
self.assertTrue(np.allclose(model[tstvec], model2[tstvec]))
if __name__ == '__main__':
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
unittest.main()
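The train-and-transform pattern from test_transform above as a standalone sketch (editorial addition), using the same bundled test corpus.

from gensim.corpora.mmcorpus import MmCorpus
from gensim.models import logentropy_model
from gensim.test.utils import datapath

corpus = MmCorpus(datapath('test_corpus_ok.mm'))
model = logentropy_model.LogEntropyModel(corpus, normalize=False)
print(model[list(corpus)[0]])  # log-entropy weighted form of the first document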
| 2,765 | Python | .py | 60 | 39.15 | 97 | 0.681666 | piskvorky/gensim | 15,546 | 4,374 | 408 | LGPL-2.1 | 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |

| 7,080 | test_corpora.py | piskvorky_gensim/gensim/test/test_corpora.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2010 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for checking corpus I/O formats (the corpora package).
"""
from __future__ import unicode_literals
import codecs
import itertools
import logging
import os
import os.path
import tempfile
import unittest
import numpy as np
from gensim.corpora import (bleicorpus, mmcorpus, lowcorpus, svmlightcorpus,
ucicorpus, malletcorpus, textcorpus, indexedcorpus, wikicorpus)
from gensim.interfaces import TransformedCorpus
from gensim.utils import to_unicode
from gensim.test.utils import datapath, get_tmpfile, common_corpus
GITHUB_ACTIONS_WINDOWS = os.environ.get('RUNNER_OS') == 'Windows'
class DummyTransformer:
def __getitem__(self, bow):
if len(next(iter(bow))) == 2:
# single bag of words
transformed = [(termid, count + 1) for termid, count in bow]
else:
# sliced corpus
transformed = [[(termid, count + 1) for termid, count in doc] for doc in bow]
return transformed
class CorpusTestCase(unittest.TestCase):
TEST_CORPUS = [[(1, 1.0)], [], [(0, 0.5), (2, 1.0)], []]
def setUp(self):
self.corpus_class = None
self.file_extension = None
def run(self, result=None):
if type(self) is not CorpusTestCase:
super(CorpusTestCase, self).run(result)
def tearDown(self):
# remove all temporary test files
fname = get_tmpfile('gensim_corpus.tst')
extensions = ['', '', '.bz2', '.gz', '.index', '.vocab']
for ext in itertools.permutations(extensions, 2):
try:
os.remove(fname + ext[0] + ext[1])
except OSError:
pass
@unittest.skipIf(GITHUB_ACTIONS_WINDOWS, 'see <https://github.com/RaRe-Technologies/gensim/pull/2836>')
def test_load(self):
fname = datapath('testcorpus.' + self.file_extension.lstrip('.'))
corpus = self.corpus_class(fname)
docs = list(corpus)
# the deerwester corpus always has nine documents
self.assertEqual(len(docs), 9)
@unittest.skipIf(GITHUB_ACTIONS_WINDOWS, 'see <https://github.com/RaRe-Technologies/gensim/pull/2836>')
def test_len(self):
fname = datapath('testcorpus.' + self.file_extension.lstrip('.'))
corpus = self.corpus_class(fname)
# make sure corpus.index works, too
corpus = self.corpus_class(fname)
self.assertEqual(len(corpus), 9)
# for subclasses of IndexedCorpus, we need to nuke this so we don't
# test length on the index, but just testcorpus contents
if hasattr(corpus, 'index'):
corpus.index = None
self.assertEqual(len(corpus), 9)
@unittest.skipIf(GITHUB_ACTIONS_WINDOWS, 'see <https://github.com/RaRe-Technologies/gensim/pull/2836>')
def test_empty_input(self):
tmpf = get_tmpfile('gensim_corpus.tst')
with open(tmpf, 'w') as f:
f.write('')
with open(tmpf + '.vocab', 'w') as f:
f.write('')
corpus = self.corpus_class(tmpf)
self.assertEqual(len(corpus), 0)
docs = list(corpus)
self.assertEqual(len(docs), 0)
@unittest.skipIf(GITHUB_ACTIONS_WINDOWS, 'see <https://github.com/RaRe-Technologies/gensim/pull/2836>')
def test_save(self):
corpus = self.TEST_CORPUS
tmpf = get_tmpfile('gensim_corpus.tst')
# make sure the corpus can be saved
self.corpus_class.save_corpus(tmpf, corpus)
# and loaded back, resulting in exactly the same corpus
corpus2 = list(self.corpus_class(tmpf))
self.assertEqual(corpus, corpus2)
@unittest.skipIf(GITHUB_ACTIONS_WINDOWS, 'see <https://github.com/RaRe-Technologies/gensim/pull/2836>')
def test_serialize(self):
corpus = self.TEST_CORPUS
tmpf = get_tmpfile('gensim_corpus.tst')
# make sure the corpus can be saved
self.corpus_class.serialize(tmpf, corpus)
# and loaded back, resulting in exactly the same corpus
corpus2 = self.corpus_class(tmpf)
self.assertEqual(corpus, list(corpus2))
# make sure the indexing corpus[i] works
for i in range(len(corpus)):
self.assertEqual(corpus[i], corpus2[i])
# make sure that subclasses of IndexedCorpus support fancy indexing
# after deserialisation
if isinstance(corpus, indexedcorpus.IndexedCorpus):
idx = [1, 3, 5, 7]
self.assertEqual(corpus[idx], corpus2[idx])
@unittest.skipIf(GITHUB_ACTIONS_WINDOWS, 'see <https://github.com/RaRe-Technologies/gensim/pull/2836>')
def test_serialize_compressed(self):
corpus = self.TEST_CORPUS
tmpf = get_tmpfile('gensim_corpus.tst')
for extension in ['.gz', '.bz2']:
fname = tmpf + extension
# make sure the corpus can be saved
self.corpus_class.serialize(fname, corpus)
# and loaded back, resulting in exactly the same corpus
corpus2 = self.corpus_class(fname)
self.assertEqual(corpus, list(corpus2))
# make sure the indexing `corpus[i]` syntax works
for i in range(len(corpus)):
self.assertEqual(corpus[i], corpus2[i])
@unittest.skipIf(GITHUB_ACTIONS_WINDOWS, 'see <https://github.com/RaRe-Technologies/gensim/pull/2836>')
def test_switch_id2word(self):
fname = datapath('testcorpus.' + self.file_extension.lstrip('.'))
corpus = self.corpus_class(fname)
if hasattr(corpus, 'id2word'):
firstdoc = next(iter(corpus))
testdoc = set((to_unicode(corpus.id2word[x]), y) for x, y in firstdoc)
self.assertEqual(testdoc, {('computer', 1), ('human', 1), ('interface', 1)})
d = corpus.id2word
d[0], d[1] = d[1], d[0]
corpus.id2word = d
firstdoc2 = next(iter(corpus))
testdoc2 = set((to_unicode(corpus.id2word[x]), y) for x, y in firstdoc2)
self.assertEqual(testdoc2, {('computer', 1), ('human', 1), ('interface', 1)})
@unittest.skipIf(GITHUB_ACTIONS_WINDOWS, 'see <https://github.com/RaRe-Technologies/gensim/pull/2836>')
def test_indexing(self):
fname = datapath('testcorpus.' + self.file_extension.lstrip('.'))
corpus = self.corpus_class(fname)
docs = list(corpus)
for idx, doc in enumerate(docs):
self.assertEqual(doc, corpus[idx])
self.assertEqual(doc, corpus[np.int64(idx)])
self.assertEqual(docs, list(corpus[:]))
self.assertEqual(docs[0:], list(corpus[0:]))
self.assertEqual(docs[0:-1], list(corpus[0:-1]))
self.assertEqual(docs[2:4], list(corpus[2:4]))
self.assertEqual(docs[::2], list(corpus[::2]))
self.assertEqual(docs[::-1], list(corpus[::-1]))
# make sure sliced corpora can be iterated over multiple times
c = corpus[:]
self.assertEqual(docs, list(c))
self.assertEqual(docs, list(c))
self.assertEqual(len(docs), len(corpus))
self.assertEqual(len(docs), len(corpus[:]))
self.assertEqual(len(docs[::2]), len(corpus[::2]))
def _get_slice(corpus, slice_):
# assertRaises for python 2.6 takes a callable
return corpus[slice_]
# make sure proper input validation for sliced corpora is done
self.assertRaises(ValueError, _get_slice, corpus, {1})
self.assertRaises(ValueError, _get_slice, corpus, 1.0)
# check sliced corpora that use fancy indexing
c = corpus[[1, 3, 4]]
self.assertEqual([d for i, d in enumerate(docs) if i in [1, 3, 4]], list(c))
self.assertEqual([d for i, d in enumerate(docs) if i in [1, 3, 4]], list(c))
self.assertEqual(len(corpus[[0, 1, -1]]), 3)
self.assertEqual(len(corpus[np.asarray([0, 1, -1])]), 3)
# check that TransformedCorpus supports indexing when the underlying
# corpus does, and throws an error otherwise
corpus_ = TransformedCorpus(DummyTransformer(), corpus)
if hasattr(corpus, 'index') and corpus.index is not None:
self.assertEqual(corpus_[0][0][1], docs[0][0][1] + 1)
self.assertRaises(ValueError, _get_slice, corpus_, {1})
transformed_docs = [val + 1 for i, d in enumerate(docs) for _, val in d if i in [1, 3, 4]]
self.assertEqual(transformed_docs, list(v for doc in corpus_[[1, 3, 4]] for _, v in doc))
self.assertEqual(3, len(corpus_[[1, 3, 4]]))
else:
self.assertRaises(RuntimeError, _get_slice, corpus_, [1, 3, 4])
self.assertRaises(RuntimeError, _get_slice, corpus_, {1})
self.assertRaises(RuntimeError, _get_slice, corpus_, 1.0)
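A compact illustration (editorial addition, not part of the test module) of the serialize/load round trip that CorpusTestCase.test_serialize above verifies, shown concretely for MmCorpus, which the test classes below exercise.

from gensim.corpora import mmcorpus
from gensim.test.utils import get_tmpfile

tmp_path = get_tmpfile('gensim_corpus_roundtrip.mm')
docs = [[(1, 1.0)], [], [(0, 0.5), (2, 1.0)], []]  # same toy corpus as TEST_CORPUS above
mmcorpus.MmCorpus.serialize(tmp_path, docs)
assert list(mmcorpus.MmCorpus(tmp_path)) == docs   # the documents survive the round trip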
class TestMmCorpusWithIndex(CorpusTestCase):
def setUp(self):
self.corpus_class = mmcorpus.MmCorpus
self.corpus = self.corpus_class(datapath('test_mmcorpus_with_index.mm'))
self.file_extension = '.mm'
def test_serialize_compressed(self):
# MmCorpus needs file write with seek => doesn't support compressed output (only input)
pass
def test_closed_file_object(self):
file_obj = open(datapath('testcorpus.mm'))
f = file_obj.closed
mmcorpus.MmCorpus(file_obj)
s = file_obj.closed
self.assertEqual(f, 0)
self.assertEqual(s, 0)
@unittest.skipIf(GITHUB_ACTIONS_WINDOWS, 'see <https://github.com/RaRe-Technologies/gensim/pull/2836>')
def test_load(self):
self.assertEqual(self.corpus.num_docs, 9)
self.assertEqual(self.corpus.num_terms, 12)
self.assertEqual(self.corpus.num_nnz, 28)
# confirm we can iterate and that document values match expected for first three docs
it = iter(self.corpus)
self.assertEqual(next(it), [(0, 1.0), (1, 1.0), (2, 1.0)])
self.assertEqual(next(it), [(0, 1.0), (3, 1.0), (4, 1.0), (5, 1.0), (6, 1.0), (7, 1.0)])
self.assertEqual(next(it), [(2, 1.0), (5, 1.0), (7, 1.0), (8, 1.0)])
# confirm that accessing document by index works
self.assertEqual(self.corpus[3], [(1, 1.0), (5, 2.0), (8, 1.0)])
self.assertEqual(tuple(self.corpus.index), (97, 121, 169, 201, 225, 249, 258, 276, 303))
class TestMmCorpusNoIndex(CorpusTestCase):
def setUp(self):
self.corpus_class = mmcorpus.MmCorpus
self.corpus = self.corpus_class(datapath('test_mmcorpus_no_index.mm'))
self.file_extension = '.mm'
def test_serialize_compressed(self):
# MmCorpus needs file write with seek => doesn't support compressed output (only input)
pass
def test_load(self):
self.assertEqual(self.corpus.num_docs, 9)
self.assertEqual(self.corpus.num_terms, 12)
self.assertEqual(self.corpus.num_nnz, 28)
# confirm we can iterate and that document values match expected for first three docs
it = iter(self.corpus)
self.assertEqual(next(it), [(0, 1.0), (1, 1.0), (2, 1.0)])
self.assertEqual(next(it), [])
self.assertEqual(next(it), [(2, 0.42371910849), (5, 0.6625174), (7, 1.0), (8, 1.0)])
# confirm that accessing document by index fails
self.assertRaises(RuntimeError, lambda: self.corpus[3])
class TestMmCorpusNoIndexGzip(CorpusTestCase):
def setUp(self):
self.corpus_class = mmcorpus.MmCorpus
self.corpus = self.corpus_class(datapath('test_mmcorpus_no_index.mm.gz'))
self.file_extension = '.mm'
def test_serialize_compressed(self):
# MmCorpus needs file write with seek => doesn't support compressed output (only input)
pass
def test_load(self):
self.assertEqual(self.corpus.num_docs, 9)
self.assertEqual(self.corpus.num_terms, 12)
self.assertEqual(self.corpus.num_nnz, 28)
# confirm we can iterate and that document values match expected for first three docs
it = iter(self.corpus)
self.assertEqual(next(it), [(0, 1.0), (1, 1.0), (2, 1.0)])
self.assertEqual(next(it), [])
self.assertEqual(next(it), [(2, 0.42371910849), (5, 0.6625174), (7, 1.0), (8, 1.0)])
# confirm that accessing document by index fails
self.assertRaises(RuntimeError, lambda: self.corpus[3])
class TestMmCorpusNoIndexBzip(CorpusTestCase):
def setUp(self):
self.corpus_class = mmcorpus.MmCorpus
self.corpus = self.corpus_class(datapath('test_mmcorpus_no_index.mm.bz2'))
self.file_extension = '.mm'
def test_serialize_compressed(self):
# MmCorpus needs file write with seek => doesn't support compressed output (only input)
pass
def test_load(self):
self.assertEqual(self.corpus.num_docs, 9)
self.assertEqual(self.corpus.num_terms, 12)
self.assertEqual(self.corpus.num_nnz, 28)
# confirm we can iterate and that document values match expected for first three docs
it = iter(self.corpus)
self.assertEqual(next(it), [(0, 1.0), (1, 1.0), (2, 1.0)])
self.assertEqual(next(it), [])
self.assertEqual(next(it), [(2, 0.42371910849), (5, 0.6625174), (7, 1.0), (8, 1.0)])
# confirm that accessing document by index fails
self.assertRaises(RuntimeError, lambda: self.corpus[3])
class TestMmCorpusCorrupt(CorpusTestCase):
def setUp(self):
self.corpus_class = mmcorpus.MmCorpus
self.corpus = self.corpus_class(datapath('test_mmcorpus_corrupt.mm'))
self.file_extension = '.mm'
def test_serialize_compressed(self):
# MmCorpus needs file write with seek => doesn't support compressed output (only input)
pass
def test_load(self):
self.assertRaises(ValueError, lambda: [doc for doc in self.corpus])
class TestMmCorpusOverflow(CorpusTestCase):
"""
Test to make sure the Cython mmreader doesn't overflow on a large number of docs or terms
"""
def setUp(self):
self.corpus_class = mmcorpus.MmCorpus
self.corpus = self.corpus_class(datapath('test_mmcorpus_overflow.mm'))
self.file_extension = '.mm'
def test_serialize_compressed(self):
# MmCorpus needs file write with seek => doesn't support compressed output (only input)
pass
def test_load(self):
self.assertEqual(self.corpus.num_docs, 44270060)
self.assertEqual(self.corpus.num_terms, 500)
self.assertEqual(self.corpus.num_nnz, 22134988630)
# confirm we can iterate and that document values match expected for first three docs
it = iter(self.corpus)
self.assertEqual(next(it)[:3], [(0, 0.3913027376444812),
(1, -0.07658791716226626),
(2, -0.020870794080588395)])
self.assertEqual(next(it), [])
self.assertEqual(next(it), [])
# confirm count of terms
count = 0
for doc in self.corpus:
for term in doc:
count += 1
self.assertEqual(count, 12)
# confirm that accessing document by index fails
self.assertRaises(RuntimeError, lambda: self.corpus[3])
class TestSvmLightCorpus(CorpusTestCase):
def setUp(self):
self.corpus_class = svmlightcorpus.SvmLightCorpus
self.file_extension = '.svmlight'
def test_serialization(self):
path = get_tmpfile("svml.corpus")
labels = [1] * len(common_corpus)
second_corpus = [(0, 1.0), (3, 1.0), (4, 1.0), (5, 1.0), (6, 1.0), (7, 1.0)]
self.corpus_class.serialize(path, common_corpus, labels=labels)
serialized_corpus = self.corpus_class(path)
self.assertEqual(serialized_corpus[1], second_corpus)
self.corpus_class.serialize(path, common_corpus, labels=np.array(labels))
serialized_corpus = self.corpus_class(path)
self.assertEqual(serialized_corpus[1], second_corpus)
class TestBleiCorpus(CorpusTestCase):
def setUp(self):
self.corpus_class = bleicorpus.BleiCorpus
self.file_extension = '.blei'
def test_save_format_for_dtm(self):
corpus = [[(1, 1.0)], [], [(0, 5.0), (2, 1.0)], []]
test_file = get_tmpfile('gensim_corpus.tst')
self.corpus_class.save_corpus(test_file, corpus)
with open(test_file) as f:
for line in f:
# unique_word_count index1:count1 index2:count2 ... indexn:countn
tokens = line.split()
words_len = int(tokens[0])
if words_len > 0:
tokens = tokens[1:]
else:
tokens = []
self.assertEqual(words_len, len(tokens))
for token in tokens:
word, count = token.split(':')
self.assertEqual(count, str(int(count)))
class TestLowCorpus(CorpusTestCase):
TEST_CORPUS = [[(1, 1)], [], [(0, 2), (2, 1)], []]
CORPUS_LINE = 'mom wash window window was washed'
def setUp(self):
self.corpus_class = lowcorpus.LowCorpus
self.file_extension = '.low'
def test_line2doc(self):
fname = datapath('testcorpus.' + self.file_extension.lstrip('.'))
id2word = {1: 'mom', 2: 'window'}
corpus = self.corpus_class(fname, id2word=id2word)
# should return all words in doc
corpus.use_wordids = False
self.assertEqual(
sorted(corpus.line2doc(self.CORPUS_LINE)),
[('mom', 1), ('was', 1), ('wash', 1), ('washed', 1), ('window', 2)])
# should return words in word2id
corpus.use_wordids = True
self.assertEqual(
sorted(corpus.line2doc(self.CORPUS_LINE)),
[(1, 1), (2, 2)])
class TestUciCorpus(CorpusTestCase):
TEST_CORPUS = [[(1, 1)], [], [(0, 2), (2, 1)], []]
def setUp(self):
self.corpus_class = ucicorpus.UciCorpus
self.file_extension = '.uci'
def test_serialize_compressed(self):
# UciCorpus needs file write with seek => doesn't support compressed output (only input)
pass
class TestMalletCorpus(TestLowCorpus):
TEST_CORPUS = [[(1, 1)], [], [(0, 2), (2, 1)], []]
CORPUS_LINE = '#3 lang mom wash window window was washed'
def setUp(self):
self.corpus_class = malletcorpus.MalletCorpus
self.file_extension = '.mallet'
def test_load_with_metadata(self):
fname = datapath('testcorpus.' + self.file_extension.lstrip('.'))
corpus = self.corpus_class(fname)
corpus.metadata = True
self.assertEqual(len(corpus), 9)
docs = list(corpus)
self.assertEqual(len(docs), 9)
for i, docmeta in enumerate(docs):
doc, metadata = docmeta
self.assertEqual(metadata[0], str(i + 1))
self.assertEqual(metadata[1], 'en')
def test_line2doc(self):
# case with metadata=False (by default)
super(TestMalletCorpus, self).test_line2doc()
# case with metadata=True
fname = datapath('testcorpus.' + self.file_extension.lstrip('.'))
id2word = {1: 'mom', 2: 'window'}
corpus = self.corpus_class(fname, id2word=id2word, metadata=True)
# should return all words in doc
corpus.use_wordids = False
doc, (docid, doclang) = corpus.line2doc(self.CORPUS_LINE)
self.assertEqual(docid, '#3')
self.assertEqual(doclang, 'lang')
self.assertEqual(
sorted(doc),
[('mom', 1), ('was', 1), ('wash', 1), ('washed', 1), ('window', 2)])
# should return words in word2id
corpus.use_wordids = True
doc, (docid, doclang) = corpus.line2doc(self.CORPUS_LINE)
self.assertEqual(docid, '#3')
self.assertEqual(doclang, 'lang')
self.assertEqual(
sorted(doc),
[(1, 1), (2, 2)])
class TestTextCorpus(CorpusTestCase):
def setUp(self):
self.corpus_class = textcorpus.TextCorpus
self.file_extension = '.txt'
def test_load_with_metadata(self):
fname = datapath('testcorpus.' + self.file_extension.lstrip('.'))
corpus = self.corpus_class(fname)
corpus.metadata = True
self.assertEqual(len(corpus), 9)
docs = list(corpus)
self.assertEqual(len(docs), 9)
for i, docmeta in enumerate(docs):
doc, metadata = docmeta
self.assertEqual(metadata[0], i)
def test_default_preprocessing(self):
lines = [
"Šéf chomutovských komunistů dostal poštou bílý prášek",
"this is a test for stopwords",
"zf tooth spaces "
]
expected = [
['Sef', 'chomutovskych', 'komunistu', 'dostal', 'postou', 'bily', 'prasek'],
['test', 'stopwords'],
['tooth', 'spaces']
]
corpus = self.corpus_from_lines(lines)
texts = list(corpus.get_texts())
self.assertEqual(expected, texts)
def corpus_from_lines(self, lines):
fpath = tempfile.mktemp()
with codecs.open(fpath, 'w', encoding='utf8') as f:
f.write('\n'.join(lines))
return self.corpus_class(fpath)
def test_sample_text(self):
lines = ["document%d" % i for i in range(10)]
corpus = self.corpus_from_lines(lines)
corpus.tokenizer = lambda text: text.split()
docs = [doc for doc in corpus.get_texts()]
sample1 = list(corpus.sample_texts(1))
self.assertEqual(len(sample1), 1)
self.assertIn(sample1[0], docs)
sample2 = list(corpus.sample_texts(len(lines)))
self.assertEqual(len(sample2), len(corpus))
for i in range(len(corpus)):
self.assertEqual(sample2[i], ["document%s" % i])
with self.assertRaises(ValueError):
list(corpus.sample_texts(len(corpus) + 1))
with self.assertRaises(ValueError):
list(corpus.sample_texts(-1))
def test_sample_text_length(self):
lines = ["document%d" % i for i in range(10)]
corpus = self.corpus_from_lines(lines)
corpus.tokenizer = lambda text: text.split()
sample1 = list(corpus.sample_texts(1, length=1))
self.assertEqual(sample1[0], ["document0"])
sample2 = list(corpus.sample_texts(2, length=2))
self.assertEqual(sample2[0], ["document0"])
self.assertEqual(sample2[1], ["document1"])
def test_sample_text_seed(self):
lines = ["document%d" % i for i in range(10)]
corpus = self.corpus_from_lines(lines)
sample1 = list(corpus.sample_texts(5, seed=42))
sample2 = list(corpus.sample_texts(5, seed=42))
self.assertEqual(sample1, sample2)
def test_save(self):
pass
def test_serialize(self):
pass
def test_serialize_compressed(self):
pass
def test_indexing(self):
pass
# Needed for test_custom_tokenizer in the TestWikiCorpus class.
# Cannot be nested inside the class, because it must be picklable (serialized).
def custom_tokenizer(content, token_min_len=2, token_max_len=15, lower=True):
return [
to_unicode(token.lower()) if lower else to_unicode(token) for token in content.split()
if token_min_len <= len(token) <= token_max_len and not token.startswith('_')
]
class TestWikiCorpus(TestTextCorpus):
def setUp(self):
self.corpus_class = wikicorpus.WikiCorpus
self.file_extension = '.xml.bz2'
self.fname = datapath('testcorpus.' + self.file_extension.lstrip('.'))
self.enwiki = datapath('enwiki-latest-pages-articles1.xml-p000000010p000030302-shortened.bz2')
def test_default_preprocessing(self):
expected = ['computer', 'human', 'interface']
corpus = self.corpus_class(self.fname, article_min_tokens=0)
first_text = next(corpus.get_texts())
self.assertEqual(expected, first_text)
def test_len(self):
# When there is no min_token limit all 9 articles must be registered.
corpus = self.corpus_class(self.fname, article_min_tokens=0)
all_articles = corpus.get_texts()
assert (len(list(all_articles)) == 9)
# With a huge min_token limit, all articles should be filtered out.
corpus = self.corpus_class(self.fname, article_min_tokens=100000)
all_articles = corpus.get_texts()
assert (len(list(all_articles)) == 0)
def test_load_with_metadata(self):
corpus = self.corpus_class(self.fname, article_min_tokens=0)
corpus.metadata = True
self.assertEqual(len(corpus), 9)
docs = list(corpus)
self.assertEqual(len(docs), 9)
for i, docmeta in enumerate(docs):
doc, metadata = docmeta
article_no = i + 1 # Counting IDs from 1
self.assertEqual(metadata[0], str(article_no))
self.assertEqual(metadata[1], 'Article%d' % article_no)
def test_load(self):
corpus = self.corpus_class(self.fname, article_min_tokens=0)
docs = list(corpus)
# the deerwester corpus always has nine documents
self.assertEqual(len(docs), 9)
def test_first_element(self):
"""
First two articles in this sample are
1) anarchism
2) autism
"""
corpus = self.corpus_class(self.enwiki, processes=1)
texts = corpus.get_texts()
self.assertTrue(u'anarchism' in next(texts))
self.assertTrue(u'autism' in next(texts))
def test_unicode_element(self):
"""
First unicode article in this sample is
1) папа
"""
bgwiki = datapath('bgwiki-latest-pages-articles-shortened.xml.bz2')
corpus = self.corpus_class(bgwiki)
texts = corpus.get_texts()
self.assertTrue(u'папа' in next(texts))
def test_custom_tokenizer(self):
"""
define a custom tokenizer function and use it
"""
wc = self.corpus_class(self.enwiki, processes=1, tokenizer_func=custom_tokenizer,
token_max_len=16, token_min_len=1, lower=False)
row = wc.get_texts()
list_tokens = next(row)
self.assertTrue(u'Anarchism' in list_tokens)
self.assertTrue(u'collectivization' in list_tokens)
self.assertTrue(u'a' in list_tokens)
self.assertTrue(u'i.e.' in list_tokens)
def test_lower_case_set_true(self):
"""
        Set the parameter lower to True and check that the upper case 'Anarchism' token doesn't exist.
"""
corpus = self.corpus_class(self.enwiki, processes=1, lower=True)
row = corpus.get_texts()
list_tokens = next(row)
self.assertTrue(u'Anarchism' not in list_tokens)
self.assertTrue(u'anarchism' in list_tokens)
def test_lower_case_set_false(self):
"""
        Set the parameter lower to False and check that the upper case 'Anarchism' token exists.
"""
corpus = self.corpus_class(self.enwiki, processes=1, lower=False)
row = corpus.get_texts()
list_tokens = next(row)
self.assertTrue(u'Anarchism' in list_tokens)
self.assertTrue(u'anarchism' in list_tokens)
def test_min_token_len_not_set(self):
"""
        Don't set the parameter token_min_len and check that 'a' does not appear as a token.
        Default token_min_len=2.
"""
corpus = self.corpus_class(self.enwiki, processes=1)
self.assertTrue(u'a' not in next(corpus.get_texts()))
def test_min_token_len_set(self):
"""
        Set the parameter token_min_len to 1 and check that 'a' appears as a token.
"""
corpus = self.corpus_class(self.enwiki, processes=1, token_min_len=1)
self.assertTrue(u'a' in next(corpus.get_texts()))
def test_max_token_len_not_set(self):
"""
        Don't set the parameter token_max_len and check that 'collectivization' does not appear as a token.
        Default token_max_len=15.
"""
corpus = self.corpus_class(self.enwiki, processes=1)
self.assertTrue(u'collectivization' not in next(corpus.get_texts()))
def test_max_token_len_set(self):
"""
        Set the parameter token_max_len to 16 and check that 'collectivization' appears as a token.
"""
corpus = self.corpus_class(self.enwiki, processes=1, token_max_len=16)
self.assertTrue(u'collectivization' in next(corpus.get_texts()))
def test_removed_table_markup(self):
"""
Check if all the table markup has been removed.
"""
enwiki_file = datapath('enwiki-table-markup.xml.bz2')
corpus = self.corpus_class(enwiki_file)
texts = corpus.get_texts()
table_markup = ["style", "class", "border", "cellspacing", "cellpadding", "colspan", "rowspan"]
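        # None of these table-markup attribute names should survive the wiki preprocessing.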
for text in texts:
for word in table_markup:
self.assertTrue(word not in text)
def test_get_stream(self):
wiki = self.corpus_class(self.enwiki)
sample_text_wiki = next(wiki.getstream()).decode()[1:14]
self.assertEqual(sample_text_wiki, "mediawiki xml")
    # TODO: sporadic failure to be investigated
# def test_get_texts_returns_generator_of_lists(self):
# corpus = self.corpus_class(self.enwiki)
# l = corpus.get_texts()
# self.assertEqual(type(l), types.GeneratorType)
# first = next(l)
# self.assertEqual(type(first), list)
# self.assertTrue(isinstance(first[0], bytes) or isinstance(first[0], str))
def test_sample_text(self):
# Cannot instantiate WikiCorpus from lines
pass
def test_sample_text_length(self):
# Cannot instantiate WikiCorpus from lines
pass
def test_sample_text_seed(self):
# Cannot instantiate WikiCorpus from lines
pass
def test_empty_input(self):
        # An empty file is not valid XML, so this base-class test is skipped.
pass
def test_custom_filterfunction(self):
def reject_all(elem, *args, **kwargs):
return False
corpus = self.corpus_class(self.enwiki, filter_articles=reject_all)
texts = corpus.get_texts()
self.assertFalse(any(texts))
def keep_some(elem, title, *args, **kwargs):
return title[0] == 'C'
        corpus = self.corpus_class(self.enwiki, filter_articles=keep_some)
corpus.metadata = True
texts = corpus.get_texts()
for text, (pageid, title) in texts:
            self.assertEqual(title[0], 'C')
class TestTextDirectoryCorpus(unittest.TestCase):
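    # The write_* helpers below create temporary directory trees of small text files for the corpus tests to walk.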
def write_one_level(self, *args):
if not args:
args = ('doc1', 'doc2')
dirpath = tempfile.mkdtemp()
self.write_docs_to_directory(dirpath, *args)
return dirpath
def write_docs_to_directory(self, dirpath, *args):
for doc_num, name in enumerate(args):
with open(os.path.join(dirpath, name), 'w') as f:
f.write('document %d content' % doc_num)
def test_one_level_directory(self):
dirpath = self.write_one_level()
corpus = textcorpus.TextDirectoryCorpus(dirpath)
self.assertEqual(len(corpus), 2)
docs = list(corpus)
self.assertEqual(len(docs), 2)
def write_two_levels(self):
dirpath = self.write_one_level()
next_level = os.path.join(dirpath, 'level_two')
os.mkdir(next_level)
self.write_docs_to_directory(next_level, 'doc1', 'doc2')
return dirpath, next_level
def test_two_level_directory(self):
dirpath, next_level = self.write_two_levels()
corpus = textcorpus.TextDirectoryCorpus(dirpath)
self.assertEqual(len(corpus), 4)
docs = list(corpus)
self.assertEqual(len(docs), 4)
corpus = textcorpus.TextDirectoryCorpus(dirpath, min_depth=1)
self.assertEqual(len(corpus), 2)
docs = list(corpus)
self.assertEqual(len(docs), 2)
corpus = textcorpus.TextDirectoryCorpus(dirpath, max_depth=0)
self.assertEqual(len(corpus), 2)
docs = list(corpus)
self.assertEqual(len(docs), 2)
def test_filename_filtering(self):
dirpath = self.write_one_level('test1.log', 'test1.txt', 'test2.log', 'other1.log')
corpus = textcorpus.TextDirectoryCorpus(dirpath, pattern=r"test.*\.log")
filenames = list(corpus.iter_filepaths())
expected = [os.path.join(dirpath, name) for name in ('test1.log', 'test2.log')]
self.assertEqual(sorted(expected), sorted(filenames))
corpus.pattern = ".*.txt"
filenames = list(corpus.iter_filepaths())
expected = [os.path.join(dirpath, 'test1.txt')]
self.assertEqual(expected, filenames)
corpus.pattern = None
corpus.exclude_pattern = ".*.log"
filenames = list(corpus.iter_filepaths())
self.assertEqual(expected, filenames)
def test_lines_are_documents(self):
dirpath = tempfile.mkdtemp()
lines = ['doc%d text' % i for i in range(5)]
fpath = os.path.join(dirpath, 'test_file.txt')
with open(fpath, 'w') as f:
f.write('\n'.join(lines))
corpus = textcorpus.TextDirectoryCorpus(dirpath, lines_are_documents=True)
docs = [doc for doc in corpus.getstream()]
self.assertEqual(len(lines), corpus.length) # should have cached
self.assertEqual(lines, docs)
corpus.lines_are_documents = False
docs = [doc for doc in corpus.getstream()]
self.assertEqual(1, corpus.length)
self.assertEqual('\n'.join(lines), docs[0])
def test_non_trivial_structure(self):
"""Test with non-trivial directory structure, shown below:
.
├── 0.txt
├── a_folder
│ └── 1.txt
└── b_folder
├── 2.txt
├── 3.txt
└── c_folder
└── 4.txt
"""
dirpath = tempfile.mkdtemp()
self.write_docs_to_directory(dirpath, '0.txt')
a_folder = os.path.join(dirpath, 'a_folder')
os.mkdir(a_folder)
self.write_docs_to_directory(a_folder, '1.txt')
b_folder = os.path.join(dirpath, 'b_folder')
os.mkdir(b_folder)
self.write_docs_to_directory(b_folder, '2.txt', '3.txt')
c_folder = os.path.join(b_folder, 'c_folder')
os.mkdir(c_folder)
self.write_docs_to_directory(c_folder, '4.txt')
corpus = textcorpus.TextDirectoryCorpus(dirpath)
filenames = list(corpus.iter_filepaths())
base_names = sorted(name[len(dirpath) + 1:] for name in filenames)
expected = sorted([
'0.txt',
'a_folder/1.txt',
'b_folder/2.txt',
'b_folder/3.txt',
'b_folder/c_folder/4.txt'
])
expected = [os.path.normpath(path) for path in expected]
self.assertEqual(expected, base_names)
corpus.max_depth = 1
self.assertEqual(expected[:-1], base_names[:-1])
corpus.min_depth = 1
self.assertEqual(expected[2:-1], base_names[2:-1])
corpus.max_depth = 0
self.assertEqual(expected[2:], base_names[2:])
corpus.pattern = "4.*"
self.assertEqual(expected[-1], base_names[-1])
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
unittest.main()
| size: 35,490 | language: Python | extension: .py | total_lines: 756 | avg_line_length: 37.832011 | max_line_length: 107 | alphanum_fraction: 0.624532 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,081 | file_name: test_parsing.py | file_path: piskvorky_gensim/gensim/test/test_parsing.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Automated tests for the parsing module.
"""
import logging
import unittest
from unittest import mock
import numpy as np
from gensim.parsing.preprocessing import (
remove_short_tokens,
remove_stopword_tokens,
remove_stopwords,
stem_text,
split_alphanum,
split_on_space,
strip_multiple_whitespaces,
strip_non_alphanum,
strip_numeric,
strip_punctuation,
strip_short,
strip_tags,
)
# several test documents: four French verse stanzas (doc1-doc4) and one English prose paragraph (doc5)
doc1 = """C'est un trou de verdure où chante une rivière,
Accrochant follement aux herbes des haillons
D'argent ; où le soleil, de la montagne fière,
Luit : c'est un petit val qui mousse de rayons."""
doc2 = """Un soldat jeune, bouche ouverte, tête nue,
Et la nuque baignant dans le frais cresson bleu,
Dort ; il est étendu dans l'herbe, sous la nue,
Pâle dans son lit vert où la lumière pleut."""
doc3 = """Les pieds dans les glaïeuls, il dort. Souriant comme
Sourirait un enfant malade, il fait un somme :
Nature, berce-le chaudement : il a froid."""
doc4 = """Les parfums ne font pas frissonner sa narine ;
Il dort dans le soleil, la main sur sa poitrine,
Tranquille. Il a deux trous rouges au côté droit."""
doc5 = """While it is quite useful to be able to search a
large collection of documents almost instantly for a joint
occurrence of a collection of exact words,
for many searching purposes, a little fuzziness would help. """
dataset = [strip_punctuation(x.lower()) for x in [doc1, doc2, doc3, doc4]]
# doc1 and doc2 have class 0, doc3 and doc4 have class 1
classes = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
class TestPreprocessing(unittest.TestCase):
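    # Each test exercises a single helper from gensim.parsing.preprocessing on a small input string or token list.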
def test_strip_numeric(self):
self.assertEqual(strip_numeric("salut les amis du 59"), "salut les amis du ")
def test_strip_short(self):
self.assertEqual(strip_short("salut les amis du 59", 3), "salut les amis")
def test_strip_tags(self):
self.assertEqual(strip_tags("<i>Hello</i> <b>World</b>!"), "Hello World!")
def test_strip_multiple_whitespaces(self):
self.assertEqual(strip_multiple_whitespaces("salut les\r\nloulous!"), "salut les loulous!")
def test_strip_non_alphanum(self):
self.assertEqual(strip_non_alphanum("toto nf-kappa titi"), "toto nf kappa titi")
def test_split_alphanum(self):
self.assertEqual(split_alphanum("toto diet1 titi"), "toto diet 1 titi")
self.assertEqual(split_alphanum("toto 1diet titi"), "toto 1 diet titi")
def test_strip_stopwords(self):
self.assertEqual(remove_stopwords("the world is square"), "world square")
        # confirm that redefining the global `STOPWORDS` works
with mock.patch('gensim.parsing.preprocessing.STOPWORDS', frozenset(["the"])):
self.assertEqual(remove_stopwords("the world is square"), "world is square")
def test_strip_stopword_tokens(self):
self.assertEqual(remove_stopword_tokens(["the", "world", "is", "sphere"]), ["world", "sphere"])
        # confirm that redefining the global `STOPWORDS` works
with mock.patch('gensim.parsing.preprocessing.STOPWORDS', frozenset(["the"])):
self.assertEqual(
remove_stopword_tokens(["the", "world", "is", "sphere"]),
["world", "is", "sphere"]
)
def test_strip_short_tokens(self):
self.assertEqual(remove_short_tokens(["salut", "les", "amis", "du", "59"], 3), ["salut", "les", "amis"])
def test_split_on_space(self):
self.assertEqual(split_on_space(" salut les amis du 59 "), ["salut", "les", "amis", "du", "59"])
def test_stem_text(self):
target = \
"while it is quit us to be abl to search a larg " + \
"collect of document almost instantli for a joint occurr " + \
"of a collect of exact words, for mani search purposes, " + \
"a littl fuzzi would help."
self.assertEqual(stem_text(doc5), target)
if __name__ == "__main__":
logging.basicConfig(level=logging.WARNING)
unittest.main()
| size: 4,071 | language: Python | extension: .py | total_lines: 86 | avg_line_length: 41.77907 | max_line_length: 112 | alphanum_fraction: 0.679087 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,082 | file_name: test_segmentation.py | file_path: piskvorky_gensim/gensim/test/test_segmentation.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html
"""
Automated tests for segmentation algorithms in the segmentation module.
"""
import logging
import unittest
import numpy as np
from gensim.topic_coherence import segmentation
from numpy import array
class TestSegmentation(unittest.TestCase):
def setUp(self):
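        # Three toy topics, each given as an array of its top three term ids.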
self.topics = [
array([9, 4, 6]),
array([9, 10, 7]),
array([5, 2, 7])
]
def test_s_one_pre(self):
"""Test s_one_pre segmentation."""
actual = segmentation.s_one_pre(self.topics)
expected = [
[(4, 9), (6, 9), (6, 4)],
[(10, 9), (7, 9), (7, 10)],
[(2, 5), (7, 5), (7, 2)]
]
self.assertTrue(np.allclose(actual, expected))
def test_s_one_one(self):
"""Test s_one_one segmentation."""
actual = segmentation.s_one_one(self.topics)
expected = [
[(9, 4), (9, 6), (4, 9), (4, 6), (6, 9), (6, 4)],
[(9, 10), (9, 7), (10, 9), (10, 7), (7, 9), (7, 10)],
[(5, 2), (5, 7), (2, 5), (2, 7), (7, 5), (7, 2)]
]
self.assertTrue(np.allclose(actual, expected))
def test_s_one_set(self):
"""Test s_one_set segmentation."""
actual = segmentation.s_one_set(self.topics)
expected = [
[(9, array([9, 4, 6])), (4, array([9, 4, 6])), (6, array([9, 4, 6]))],
[(9, array([9, 10, 7])), (10, array([9, 10, 7])), (7, array([9, 10, 7]))],
[(5, array([5, 2, 7])), (2, array([5, 2, 7])), (7, array([5, 2, 7]))]
]
for s_i in range(len(actual)):
for j in range(len(actual[s_i])):
self.assertEqual(actual[s_i][j][0], expected[s_i][j][0])
self.assertTrue(np.allclose(actual[s_i][j][1], expected[s_i][j][1]))
if __name__ == '__main__':
logging.root.setLevel(logging.WARNING)
unittest.main()
| size: 2,076 | language: Python | extension: .py | total_lines: 53 | avg_line_length: 31.377358 | max_line_length: 95 | alphanum_fraction: 0.520139 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,083 | file_name: lda_3_0_1_model.expElogbeta.npy | file_path: piskvorky_gensim/gensim/test/test_data/lda_3_0_1_model.expElogbeta.npy |
[binary NumPy .npy file: header reports dtype '<f8', fortran_order False, shape (2, 12); raw bytes omitted]
| size: 272 | language: Python | extension: .py | total_lines: 3 | avg_line_length: 87 | max_line_length: 183 | alphanum_fraction: 0.451852 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,084 | file_name: word2vec_pre_kv_sep_py2.syn0.npy | file_path: piskvorky_gensim/gensim/test/test_data/word2vec_pre_kv_sep_py2.syn0.npy |
[binary NumPy .npy file: header reports dtype '<f4', fortran_order False, shape (9, 2); raw bytes omitted]
| size: 152 | language: Python | extension: .py | total_lines: 2 | avg_line_length: 70.5 | max_line_length: 79 | alphanum_fraction: 0.403974 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,085 | file_name: word2vec_pre_kv_sep_py3.syn1neg.npy | file_path: piskvorky_gensim/gensim/test/test_data/word2vec_pre_kv_sep_py3.syn1neg.npy |
[binary NumPy .npy file: header reports dtype '<f4', fortran_order False, shape (9, 2); raw bytes omitted]
| size: 152 | language: Python | extension: .py | total_lines: 2 | avg_line_length: 70.5 | max_line_length: 79 | alphanum_fraction: 0.463576 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,086 | file_name: ldamodel_python_3_5.id2word | file_path: piskvorky_gensim/gensim/test/test_data/ldamodel_python_3_5.id2word |
[binary Python pickle of a gensim.corpora.dictionary.Dictionary (12-token vocabulary: human, interface, computer, survey, system, response, time, user, eps, trees, graph, minors); raw bytes omitted]
| size: 430 | language: Python | extension: .py | total_lines: 8 | avg_line_length: 52.875 | max_line_length: 124 | alphanum_fraction: 0.598109 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,087 | file_name: word2vec_pre_kv_sep_py3.syn0.npy | file_path: piskvorky_gensim/gensim/test/test_data/word2vec_pre_kv_sep_py3.syn0.npy |
[binary NumPy .npy file: header reports dtype '<f4', fortran_order False, shape (9, 2); raw bytes omitted]
| size: 152 | language: Python | extension: .py | total_lines: 2 | avg_line_length: 70.5 | max_line_length: 79 | alphanum_fraction: 0.456954 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,088 | file_name: ldamodel_python_2_7.state | file_path: piskvorky_gensim/gensim/test/test_data/ldamodel_python_2_7.state |
[binary Python pickle of a gensim.models.ldamodel.LdaState object; raw bytes omitted]
| size: 588 | language: Python | extension: .py | total_lines: 15 | avg_line_length: 38.266667 | max_line_length: 184 | alphanum_fraction: 0.559233 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,089 | file_name: ldamodel_python_3_5.state | file_path: piskvorky_gensim/gensim/test/test_data/ldamodel_python_3_5.state |
[binary Python pickle of a gensim.models.ldamodel.LdaState object; raw bytes omitted]
| size: 825 | language: Python | extension: .py | total_lines: 17 | avg_line_length: 47.588235 | max_line_length: 326 | alphanum_fraction: 0.467244 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,090 | file_name: word2vec_pre_kv_sep_py3_4 | file_path: piskvorky_gensim/gensim/test/test_data/word2vec_pre_kv_sep_py3_4 |
[binary Python pickle of a gensim.models.word2vec.Word2Vec model (vocabulary: the, to, of, in, and, for, has, have, are); raw bytes omitted]
| size: 5,269 | language: Python | extension: .py | total_lines: 37 | avg_line_length: 141.432432 | max_line_length: 796 | alphanum_fraction: 0.60042 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,091 | file_name: word2vec_pre_kv_sep_py2.syn1neg.npy | file_path: piskvorky_gensim/gensim/test/test_data/word2vec_pre_kv_sep_py2.syn1neg.npy |
[binary NumPy .npy file: header reports dtype '<f4', fortran_order False, shape (9, 2); raw bytes omitted]
| size: 152 | language: Python | extension: .py | total_lines: 2 | avg_line_length: 70.5 | max_line_length: 79 | alphanum_fraction: 0.509934 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,092 | file_name: word2vec_pre_kv_sep_py2 | file_path: piskvorky_gensim/gensim/test/test_data/word2vec_pre_kv_sep_py2 |
[binary Python pickle of a gensim.models.word2vec.Word2Vec model (vocabulary: the, to, of, in, and, for, has, have, are); raw bytes omitted]
| size: 3,806 | language: Python | extension: .py | total_lines: 42 | avg_line_length: 89.619048 | max_line_length: 455 | alphanum_fraction: 0.610624 | repo_name: piskvorky/gensim | repo_stars: 15,546 | repo_forks: 4,374 | repo_open_issues: 408 | repo_license: LGPL-2.1 | repo_extraction_date: 9/5/2024, 5:10:17 PM (Europe/Amsterdam) |
| id: 7,093 | file_name: ldamodel_python_3_5 | file_path: piskvorky_gensim/gensim/test/test_data/ldamodel_python_3_5 |
[binary Python pickle of a gensim.models.ldamodel.LdaModel object; raw bytes (truncated here) omitted]