Include one python file B inside file A where B uses variables defined in A
39,166,308
<p>I have a Django project with multiple settings files that contain a lot of redundant data. The structure is as follows</p> <p><strong>development.py</strong></p> <pre><code>app_one = 'http://localhost:8000' app_two = 'http://localhost:9999' abc_url = '{0}/some-url/'.format(app_one) xyz_url = '{0}/some-url/'.format(app_two) </code></pre> <p><strong>staging.py</strong></p> <pre><code>app_one = 'http://staging.xyz.abc.com' app_two = 'http://staging.pqr.abc.com' abc_url = '{0}/some-url/'.format(app_one) xyz_url = '{0}/some-url/'.format(app_two) </code></pre> <p><strong>production.py</strong></p> <pre><code>app_one = 'http://production.xyz.abc.com' app_two = 'http://production.pqr.abc.com' abc_url = '{0}/some-url/'.format(app_one) xyz_url = '{0}/some-url/'.format(app_two) </code></pre> <p>In all the files <code>abc_url</code> and <code>xyz_url</code> are basically the same url. The only thing that changes is the domain.</p> <p>What I am looking to do is:</p> <ul> <li>Put all urls in separate files called app_one_urls.py and app_two_urls.py</li> <li>Find a way to include app_one_urls.py and app_two_urls.py in my development, staging and production files</li> </ul> <p>The final outcome can be as follows:</p> <p><strong>app_one_urls.py</strong></p> <pre><code>abc_url = '{0}/some-url/'.format(app_one) </code></pre> <p><strong>app_two_urls.py</strong></p> <pre><code>xyz_url = '{0}/some-url/'.format(app_two) </code></pre> <p>are two separate files</p> <p>In development.py I intend to do the following:</p> <pre><code>app_one = 'http://localhost:8000' app_two = 'http://localhost:9999' somehow get urls from app_one_urls and app_two_urls </code></pre> <p>Is it possible and feasible? If yes, I need help in understanding how.</p>
1
2016-08-26T12:25:03Z
39,166,687
<p>There is no need to maintain separate files; you can define the configuration in a dictionary, keyed by the environment type.</p> <p>Here I am demonstrating this based on <code>hostname</code>, as the hostnames of my servers differ like: <code>my-host-prod</code>, <code>my-host-staging</code>, <code>my-host-dev</code>. <em>You may use any condition which uniquely identifies your server.</em></p> <pre><code>import socket def get_url_conf(my_host=socket.gethostname()): def get_conf_setting(env_type): return {'prod': {'app1': 'app1_prod_url', 'app2': 'app2_prod_url'}, 'staging': {'app1': 'app1_staging_url', 'app2': 'app2_staging_url'}, 'dev': {'app1': 'app1_dev_url', 'app2': 'app2_dev_url'}, 'local': {'app1': 'app1_local_url', 'app2': 'app2_local_url'} }[env_type] if my_host.endswith('-prod'): server_key = 'prod' elif my_host.endswith('-staging'): server_key = 'staging' elif my_host.endswith('-dev'): server_key = 'dev' else: # In case someone is running on a local system server_key = 'local' return get_conf_setting(server_key) </code></pre> <p>Now in your settings file, you may call these as:</p> <pre><code>abc_url = '{0}/some-url/'.format(get_url_conf()['app1']) xyz_url = '{0}/some-url/'.format(get_url_conf()['app2']) </code></pre>
1
2016-08-26T12:44:37Z
[ "python", "django" ]
39,166,701
<p>Yes, this is feasible. You will need to arrange your settings as a module:</p> <pre><code>settings/ /__init__.py /development.py /staging.py /production.py /base.py </code></pre> <p>Contents of base.py:</p> <pre><code>SOME_ENVIRONMENT_INDEPENDENT_SETTING = 1 </code></pre> <p>Contents of development.py:</p> <pre><code>from base import * SOME_DEVELOPMENT_SETTING = 1 app_one = 'http://localhost:8000' app_two = 'http://localhost:9999' </code></pre> <p>Contents of production.py:</p> <pre><code>from base import * SOME_PRODUCTION_SETTING = 1 app_one = 'http://production.xyz.abc.com' app_two = 'http://production.pqr.abc.com' </code></pre> <p>Contents of <code>__init__.py</code>:</p> <pre><code>import os #If you want to pull environment from an environment variable #ENVIRONMENT = os.environ.get('CURR_ENV','PROD') ENVIRONMENT = "DEVELOPMENT" if ENVIRONMENT == "PRODUCTION" : try: from production import * except: pass elif ENVIRONMENT == "DEVELOPMENT" : try: from development import * except: pass elif ENVIRONMENT == "STAGING": try: from staging import * except: pass elif ENVIRONMENT.lower() == "DEVEL_LOCAL".lower(): try: from devel_local import * except: pass elif ENVIRONMENT.lower() == "PROD_PP".lower(): try: from prod_pp import * except: pass #common variables which change based on environment abc_url = '{0}/some-url/'.format(app_one) xyz_url = '{0}/some-url/'.format(app_two) </code></pre>
0
2016-08-26T12:45:48Z
Verify Python Passlib generated PBKDF2 SHA512 Hash in .NET
39,166,372
<p>I am migrating a platform which used <a href="http://pythonhosted.org/passlib/" rel="nofollow">Passlib 1.6.2</a> to generate password hashes. The code to encrypt the password is (hash is called with default value for rounds):</p> <pre><code>from passlib.hash import pbkdf2_sha512 as pb def hash(cleartext, rounds=10001): return pb.encrypt(cleartext, rounds=rounds) </code></pre> <p>The output format looks like (for the password "Patient3" (no quotes)):</p> <pre><code>$pbkdf2-sha512$10001$0dr7v7eWUmptrfW.9z6HkA$w9j9AMVmKAP17OosCqDxDv2hjsvzlLpF8Rra8I7p/b5746rghZ8WrgEjDpvXG5hLz1UeNLzgFa81Drbx2b7.hg </code></pre> <p>And "Testing123"</p> <pre><code>$pbkdf2-sha512$10001$2ZuTslYKAYDQGiPkfA.B8A$ChsEXEjanEToQcPJiuVaKk0Ls3n0YK7gnxsu59rxWOawl/iKgo0XSWyaAfhFV0.Yu3QqfehB4dc7yGGsIW.ARQ </code></pre> <p>I can see that represents:</p> <ul> <li>Algorithm SHA512 </li> <li>Iterations 10001</li> <li>Salt 0dr7v7eWUmptrfW.9z6HkA (possibly)</li> </ul> <p>The Passlib algorithm is defined on <a href="https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html#passlib.hash.pbkdf2_sha512" rel="nofollow">their site</a> and reads:</p> <blockquote> <p>All of the pbkdf2 hashes defined by passlib follow the same format, $pbkdf2-digest$rounds$salt$checksum.</p> <p>$pbkdf2-digest$ is used as the Modular Crypt Format identifier ($pbkdf2-sha256$ in the example). digest - this specifies the particular cryptographic hash used in conjunction with HMAC to form PBKDF2’s pseudorandom function for that particular hash (sha256 in the example). rounds - the number of iterations that should be performed. this is encoded as a positive decimal number with no zero-padding (6400 in the example). salt - this is the adapted base64 encoding of the raw salt bytes passed into the PBKDF2 function. checksum - this is the adapted base64 encoding of the raw derived key bytes returned from the PBKDF2 function. 
Each scheme uses the digest size of its specific hash algorithm (digest) as the size of the raw derived key. This is enlarged by approximately 4/3 by the base64 encoding, resulting in a checksum size of 27, 43, and 86 for each of the respective algorithms listed above.</p> </blockquote> <p>I found <a href="https://www.nuget.org/packages/Passlib.NET/" rel="nofollow">passlib.net</a> which looks a bit like an abandoned beta and it uses '$6$' for the algorithm. I could not get it to verify the password. I tried changing the algorithm to $6$ but I suspect that in effect changes the salt as well.</p> <p>I also tried using <a href="https://sourceforge.net/projects/pwdtknet/" rel="nofollow">PWDTK</a> with various values for salt and hash, but it may have been I was splitting the shadow password incorrectly, or supplying $ in some places where I should not have been.</p> <p>Is there any way to verify a password against this hash value in .NET? Or another solution which does not involve either a Python proxy or getting users to resupply a password?</p>
2
2016-08-26T12:27:59Z
39,169,566
<p>The hash is verified by running the password through PBKDF2 with HMAC-SHA-512 and comparing the result to the saved checksum portion, converted back from its Base64 form:</p> <ol> <li>Split the saved hash string and decode the salt and checksum from Base64 to binary</li> <li>Convert the password to binary using UTF-8 encoding</li> <li>Check that PBKDF2-HMAC-SHA-512(password, salt, 10001) == checksum</li> </ol> <p>For the password "Patient3":</p> <p><code>$pbkdf2-sha512$10001$0dr7v7eWUmptrfW.9z6HkA$w9j9AMVmKAP17OosCqDxDv2hjsvzlLpF8Rra8I7p/b5746rghZ8WrgEjDpvXG5hLz1UeNLzgFa81Drbx2b7.hg</code></p> <p>breaks down to (with the strings converted to standard Base64: change '.' to '+' and add trailing '=' padding):</p> <pre><code>pbkdf2-sha512 10001 0dr7v7eWUmptrfW+9z6HkA== w9j9AMVmKAP17OosCqDxDv2hjsvzlLpF8Rra8I7p/b5746rghZ8WrgEjDpvXG5hLz1UeNLzgFa81Drbx2b7+hg== </code></pre> <p>Decoded to hex:</p> <pre><code>D1DAFBBFB796526A6DADF5BEF73E8790 C3D8FD00C5662803F5ECEA2C0AA0F10EFDA18ECBF394BA45F11ADAF08EE9FDBE7BE3AAE0859F16AE01230E9BD71B984BCF551E34BCE015AF350EB6F1D9BEFE86 </code></pre> <p>Which makes sense: a 16-byte (128-bit) salt and a 64-byte (512-bit) SHA-512 hash.</p> <p>Converting "Patient3" to a binary array using UTF-8, converting the salt from the modified Base64 encoding to a 16-byte binary array, using an iteration count of 10001, and feeding all of this to PBKDF2 using HMAC with SHA-512, I get:</p> <pre><code>C3D8FD00C5662803F5ECEA2C0AA0F10EFDA18ECBF394BA45F11ADAF08EE9FDBE7BE3AAE0859F16AE01230E9BD71B984BCF551E34BCE015AF350EB6F1D9BEFE86 </code></pre> <p>Which, when Base64 encoded, with '+' characters replaced by '.' and the trailing '=' characters stripped, returns: <code>w9j9AMVmKAP17OosCqDxDv2hjsvzlLpF8Rra8I7p/b5746rghZ8WrgEjDpvXG5hLz1UeNLzgFa81Drbx2b7.hg</code></p>
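This check can be reproduced end-to-end in Python with only the standard library, which gives a known-good reference while porting the logic to .NET. The sketch below follows the steps described above; the function name and structure are mine, not from passlib:

```python
import base64
import hashlib
import hmac


def verify_passlib_pbkdf2_sha512(password, modular_hash):
    # Passlib format: $pbkdf2-sha512$rounds$salt$checksum, where salt and
    # checksum use passlib's "adapted base64": '+' replaced by '.' and the
    # trailing '=' padding stripped.
    _, scheme, rounds, salt_b64, checksum_b64 = modular_hash.split('$')
    if scheme != 'pbkdf2-sha512':
        raise ValueError('unexpected scheme: %s' % scheme)

    def ab64_decode(data):
        data = data.replace('.', '+')
        return base64.b64decode(data + '=' * (-len(data) % 4))

    salt = ab64_decode(salt_b64)
    expected = ab64_decode(checksum_b64)
    # hashlib.pbkdf2_hmac (Python 3.4+) defaults the derived key length to
    # the digest size, i.e. 64 bytes for SHA-512, matching passlib
    derived = hashlib.pbkdf2_hmac('sha512', password.encode('utf-8'),
                                  salt, int(rounds))
    return hmac.compare_digest(derived, expected)
```

With the "Patient3" hash from the question this should return True, and False for any other password.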
1
2016-08-26T15:16:39Z
[ "python", ".net", "hash", "cryptography", "pbkdf2" ]
39,258,700
<p>I quickly knocked together a .NET implementation using zaph's logic and using the code from JimmiTh on <a href="http://stackoverflow.com/questions/16341676/how-can-i-hash-passwords-with-salt-and-iterations-using-pbkdf2-hmac-sha-256-or-s">SO answer</a>. I have put the code on <a href="https://github.com/johnmark13/Passlib.PBKDF2WithHmacSHA512.NET" rel="nofollow">GitHub</a> (this is not supposed to be production ready). It appears to work with more than a handful of examples from our user base. </p> <p>As zaph said the logic was:</p> <ol> <li>Split the hash to find the iteration count, salt and hashed password. (I have assumed the algorithm, but you'd verify it). You'll have an array of 5 values containing <code>[0]</code> - Nothing, <code>[1]</code> - Algorithm, <code>[2]</code> - Iterations, <code>[3]</code> - Salt and <code>[4]</code> - Hash</li> <li>Turn the salt into standard Base64 encoding by replacing any '.' characters with '+' characters and appending "==".</li> <li>Pass the password, salt and iteration count to the PBKDF2-HMAC-SHA512 generator.</li> <li>Convert back to the original base64 format by replacing any '+' characters with '.' characters and stripping the trailing "==".</li> <li>Compare to the original hash (element 4 in the split string) to this converted value and if they're equal you've got a match.</li> </ol>
0
2016-08-31T20:45:25Z
Python ocr pdf extraction with multiple languages
39,166,423
<p>Hello, I have a PDF file containing two languages (English and Greek) and I want to extract the text via Python OCR. So far I have the code below, but it works for only one language (Greek).</p> <p>How can I run OCR extraction on a PDF file which has two languages?</p> <pre><code>#!/usr/bin/python # -*- coding: utf-8 -*- import sys from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter from pdfminer.pdfpage import PDFPage from pdfminer.converter import XMLConverter, HTMLConverter, TextConverter from pdfminer.layout import LAParams from cStringIO import StringIO from wand.image import Image from PIL import Image as PI import pyocr import pyocr.builders import io def pdfparser(data): tool = pyocr.get_available_tools()[0] for i in enumerate(tool.get_available_languages()): print(i) lang = tool.get_available_languages()[2] req_image = [] final_text = [] words = [] image_pdf = Image(filename=data, resolution=600) image_jpeg = image_pdf.convert('jpeg') for img in image_jpeg.sequence: img_page = Image(image=img) req_image.append(img_page.make_blob('jpeg')) for img in req_image: txt = tool.image_to_string( PI.open(io.BytesIO(img)), lang=lang, builder=pyocr.builders.TextBuilder() ) final_text.append(txt) #words.extend(u'{}'.format(txt.split())) #print(final_text) #print(words) for x in final_text: ''' for i in x: print(i.replace('|(',u'Κ').replace('|',u'Ι')) ''' try: word = x.encode('utf8') print(word) except UnicodeEncodeError , e: print(e) continue if __name__ == '__main__': pdfparser(sys.argv[1]) </code></pre>
0
2016-08-26T12:30:23Z
39,170,265
<p>I'm venturing an answer here. Try <code>lang = 'eng+ell'</code>. Make sure you have both the <code>eng.traineddata</code> and <code>ell.traineddata</code> files in your <code>tessdata</code> folder.</p> <p><a href="https://github.com/tesseract-ocr/tessdata" rel="nofollow">https://github.com/tesseract-ocr/tessdata</a></p>
0
2016-08-26T15:55:57Z
[ "python", "extract", "ocr" ]
How to find rows that differ by only one column in pandas?
39,166,436
<p>I have a dataframe with three columns. I have grouped the rows based on two of the three columns. Now I need to find only those rows where the two columns <code>word1,word2</code> are the same but the third column, <code>Tag</code>, is different.</p> <p>This is something like: I need to find those rows where, for the same <code>word1 and word2</code>, we have different labels. But I am not able to filter the dataFrame based on the groupby construct shown below</p> <pre><code>newComps.groupby(['word1','word2']).count() </code></pre> <p><a href="http://i.stack.imgur.com/A804L.png" rel="nofollow"><img src="http://i.stack.imgur.com/A804L.png" alt="enter image description here"></a></p> <p>Here it will be helpful if I can see only the ones with the same word1,word2 but with a different Tag, rather than all the entries. I have tried calling the above code inside <code>[]</code>, as we do to filter data, but to no avail.</p> <p>Ideally I should see only</p> <pre><code>A,gawam, A1 A,gawam,BS1 A,gawaH, T1 A, gawaH, T2 </code></pre>
2
2016-08-26T12:31:24Z
39,166,605
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html</a></p> <p>Look at the <code>subset</code> and <code>keep</code> options.</p>
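A sketch of how those options answer the question. The sample frame is reconstructed from the question's screenshot, so the exact values are assumptions:

```python
import pandas as pd

# Hypothetical data mirroring the question: same (word1, word2) pairs
# carrying different Tag values, plus one pair that appears only once
df = pd.DataFrame({
    'word1': ['A', 'A', 'A', 'A', 'B'],
    'word2': ['gawam', 'gawam', 'gawaH', 'gawaH', 'x'],
    'Tag':   ['A1', 'BS1', 'T1', 'T2', 'T1'],
})

# Collapse exact duplicates first, then keep every row whose (word1, word2)
# pair still occurs more than once -- those pairs must carry different Tags
uniq = df.drop_duplicates(subset=['word1', 'word2', 'Tag'])
result = uniq[uniq.duplicated(subset=['word1', 'word2'], keep=False)]
```

`keep=False` marks every member of a duplicated group, not just the repeats, which is what gives you all rows of each conflicting pair.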
2
2016-08-26T12:40:53Z
[ "python", "pandas" ]
Cannot read information from Windows Registry
39,166,487
<p>I would like to print the recent wireless networks I used, by reading from the Windows Registry. I am running Windows 8. I have the following code, but when I run it, it does not do anything! Could you please help with this?</p> <pre><code>from _winreg import * def val2addr(val): addr = '' for ch in val: addr += '%02x '% ord(ch) addr = addr.strip(' ').replace(' ', ':')[0:17] return addr def printNets(): print '[+] ' net = "SOFTWARE\Microsoft\Windows NT\CurrentVersion"+\ "\NetworkList\Signatures\Unmanaged" key = OpenKey(HKEY_LOCAL_MACHINE, net) print '\n[*] Networks You have Joined.' for i in range(100): try: guid = EnumKey(key, i) netKey = OpenKey(key, str(guid)) (n, addr, t) = EnumValue(netKey, 5) (n, name, t) = EnumValue(netKey, 4) macAddr = val2addr(addr) netName = str(name) print '[+] ' + netName + ' ' + macAddr CloseKey(netKey) except: break def main(): printNets() if __name__ == "__main__": main() </code></pre>
1
2016-08-26T12:34:09Z
39,167,392
<p>Have a look here for a more detailed explanation: <a href="http://stackoverflow.com/questions/28128446/how-do-i-use-python-to-retrieve-registry-values">How do I use Python to retrieve registry values</a></p> <pre><code>key = OpenKey(HKEY_LOCAL_MACHINE, net, 0, KEY_READ | KEY_WOW64_64KEY) </code></pre> <p>If my understanding is correct, this happens because you have 32-bit Python on 64-bit Windows.</p>
0
2016-08-26T13:22:23Z
[ "python", "windows", "security", "registry", "wireless" ]
What to do when searching on more than one word with django filter
39,166,490
<p>I made a filter where you can search on different keywords, and it works.</p> <p>My problem is when I try to search on more than one keyword.</p> <p>How do I make the filter separate each word in the search?</p> <p>Here are pictures of how it looks.</p> <p>The first picture shows a search on only one keyword, and the second shows a search on two: <a href="http://i.stack.imgur.com/tGXwu.png" rel="nofollow"><img src="http://i.stack.imgur.com/tGXwu.png" alt="enter image description here"></a> <a href="http://i.stack.imgur.com/JAzmH.png" rel="nofollow"><img src="http://i.stack.imgur.com/JAzmH.png" alt="enter image description here"></a></p> <p>Here is my code for the model class:</p> <pre><code>class Task(managers.Model): keywords = models.ManyToManyField('Keyword', blank=True, related_name='event_set') objects = managers.DefaultSelectOrPrefetchManager.from_queryset(managers.TaskQuerySet)() </code></pre> <p>And here is my filter class:</p> <pre><code>class TaskFilterSet(BaseFilterSet): keywords = django_filters.MethodFilter(action="filter_keywords") class Meta: model = models.Task def filter_keywords(self, queryset, value): from django.db.models import Q return queryset.filter(Q(keywords__word__icontains=value)) </code></pre>
1
2016-08-26T12:34:18Z
39,166,717
<p>Let's suppose you are searching for the following keywords: <code>foo</code> and <code>boo</code>, and you have the following relation:</p> <pre><code>search['foo','boo'] object.keywords['foo','boo','woo'] </code></pre> <p>You can iterate over <code>object.keywords</code> and see if any of the search terms matches one of the <code>keywords</code> in <code>object</code>. If it does, return the <code>filter</code> containing that object.</p>
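The matching the answer describes can be sketched in plain Python; in the filter itself, the ORM equivalent would be OR-ing together one `Q(keywords__word__icontains=word)` lookup per word of `value.split()` (the helper name below is made up for the sketch):

```python
def matches(search_terms, object_keywords):
    # True if any search term is a case-insensitive substring of any of the
    # object's keywords -- the same semantics as OR-ing together one
    # Q(keywords__word__icontains=term) lookup per term
    return any(term.lower() in kw.lower()
               for term in search_terms
               for kw in object_keywords)
```

Filtering in Python like this only works for small result sets; for real queries you would build the combined `Q` object and let the database do the work, remembering `.distinct()` since OR-ed many-to-many lookups can return duplicates.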
1
2016-08-26T12:46:44Z
[ "python", "regex", "django", "django-filters" ]
How to replace the for loop by faster option in Python
39,166,546
<p>I have two data frames, each with approximately 10 000 rows. They are similar to a and b below, but with more rows.</p> <pre><code> a Out[9]: end start 0 4.0 3 1 5.5 5 2 7.5 7 3 9.5 9 4 11.5 11 5 15.0 14 6 18.0 17 7 21.0 20 8 26.0 25 9 31.0 30 b Out[10]: status moment 8.0 o 10.0 o 14.5 o 16.0 o 19.0 o 27.0 o 28.0 o 30.5 o 35.0 o 40.0 o 50.0 o </code></pre> <p>I have to find all moments in dataframe b which fall between start and end in dataframe a.</p> <p>I developed a for loop for that, and it works well with small dataframes.</p> <pre><code> for r in a.index: for k in b.index: if a.ix[r,'start'] &lt;k and k &lt;a.ix[r,'end']: b.ix[k,'status']='m' # replaces 'o' with 'm' if moment is between start and end </code></pre> <p>Below you can see how the for loop has replaced o -> m when the moment is between start and end.</p> <pre><code> In [12]: b Out[12]: status moment 8.0 o 10.0 o 14.5 m 16.0 o 19.0 o 27.0 o 28.0 o 30.5 m 35.0 o 40.0 o 50.0 o </code></pre> <p>When I try to use it with huge dataframes (more than 10 000 rows in each dataframe) it can no longer produce results within a reasonable time.</p> <p>Do you have any ideas how to make my for loop faster, or how to replace it with something suitable for longer dataframes?</p>
2
2016-08-26T12:37:22Z
39,166,693
<p>Your solution is <code>O(n^2)</code> in running time. As far as I can see, all of the dataframes are sorted; if this is the case for the whole <code>DF</code>, you can use a divide-and-conquer method to make it <code>O(nlogn)</code>. However, coding it is not easy: you will have to look it up, understand the D&amp;C methods and write it as a recursive function. I think you can use a so-called <code>BinarySearch</code> algorithm for this problem, which is O(logn) for each element, so O(nlogn) overall.</p> <p>If I am wrong and the <code>DF</code> is not sorted, but you have to do this kind of search multiple times, I would advise sorting it first. Sorting can also be done with D&amp;C; it's usually called <code>MergeSort</code> and is O(nlogn) as well.</p>
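The binary-search idea can be sketched with numpy's `searchsorted`, which does the O(log n) lookup per moment without hand-writing the recursion. This assumes the intervals in `a` are sorted by start and non-overlapping, as in the question's sample data:

```python
import numpy as np

# Sorted, non-overlapping intervals from the question's dataframe `a`
starts = np.array([3, 5, 7, 9, 11, 14, 17, 20, 25, 30], dtype=float)
ends = np.array([4.0, 5.5, 7.5, 9.5, 11.5, 15.0, 18.0, 21.0, 26.0, 31.0])
# The index of dataframe `b`
moments = np.array([8.0, 10.0, 14.5, 16.0, 19.0, 27.0, 28.0, 30.5,
                    35.0, 40.0, 50.0])

# For each moment, binary-search the position of the largest start <= moment
idx = np.searchsorted(starts, moments, side='right') - 1
idx_clipped = np.clip(idx, 0, len(starts) - 1)

# A moment is inside an interval iff start < moment < end for that interval
inside = ((idx >= 0)
          & (moments > starts[idx_clipped])
          & (moments < ends[idx_clipped]))
```

`inside` is then a boolean mask you can use to set the status column in one shot, e.g. `b.loc[inside, 'status'] = 'm'`.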
0
2016-08-26T12:44:57Z
[ "python", "pandas" ]
39,167,350
<p>Here is an option without an explicit <code>for</code> loop. It still compares each moment against every interval, but the comparisons themselves are vectorized:</p> <pre><code>b[b.index.map(lambda m: ((m &gt; a.start) &amp; (m &lt; a.end)).any())] = "m" b # status # moment # 8.0 o # 10.0 o # 14.5 m # 16.0 o # 19.0 o # 27.0 o # 28.0 o # 30.5 m # 35.0 o # 40.0 o # 50.0 o </code></pre>
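In newer pandas versions (0.20+, after this answer was written) the same membership test can be done with an `IntervalIndex`; a sketch on the question's data, assuming the intervals do not overlap:

```python
import pandas as pd

a = pd.DataFrame({'start': [3, 5, 7, 9, 11, 14, 17, 20, 25, 30],
                  'end': [4.0, 5.5, 7.5, 9.5, 11.5, 15.0, 18.0,
                          21.0, 26.0, 31.0]})
b = pd.DataFrame({'status': ['o'] * 11},
                 index=[8.0, 10.0, 14.5, 16.0, 19.0, 27.0, 28.0,
                        30.5, 35.0, 40.0, 50.0])

# One interval per row of `a`, open on both ends to match the strict
# start < moment < end test; requires non-overlapping intervals
intervals = pd.IntervalIndex.from_arrays(a['start'], a['end'],
                                         closed='neither')

# get_indexer returns the containing interval's position, or -1 if none
b.loc[intervals.get_indexer(b.index) >= 0, 'status'] = 'm'
```

This replaces the per-moment scan over all intervals with an indexed lookup, so it scales much better than the lambda version.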
1
2016-08-26T13:20:12Z
Display 2 decimal places, and use commas to separate thousands, in Jupyter/pandas?
39,166,689
<p>I'm working with pandas 0.18 in Jupyter. </p> <p>I'd like to configure Jupyter/pandas to display 2 decimal places throughout, and to use comma separators in thousands. </p> <p>How can I do this?</p>
1
2016-08-26T12:44:40Z
39,167,299
<p>Configure the following <a href="http://pandas.pydata.org/pandas-docs/stable/options.html#available-options" rel="nofollow">option</a> in any cell:</p> <pre><code>pandas.options.display.float_format = '{:,.2f}'.format </code></pre> <p>You can also control how bare floats are displayed as cell output throughout the notebook with this <a href="http://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-precision" rel="nofollow">magic command</a>:</p> <pre><code>%precision %.2f </code></pre>
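For example, once the option is set, floats in any frame render with a thousands comma and two decimals (the column name and values here are just illustrative):

```python
import pandas as pd

# Applies to the whole session: every DataFrame/Series float display
pd.options.display.float_format = '{:,.2f}'.format

df = pd.DataFrame({'revenue': [1234.5678, 9876543.21012]})
print(df)
# The values display as 1,234.57 and 9,876,543.21
```

To go back to the default rendering later, set `pd.options.display.float_format = None`.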
2
2016-08-26T13:18:02Z
[ "python", "pandas", "jupyter" ]
Switch from linux distro package manager to Anaconda
39,166,725
<p>I am using openSUSE Leap 42.1 and do some data analysis work in python. Most of the python packages I use are available in the standard openSUSE repositories (e.g. obs://build.opensuse.org/devel:languages:python); however sometimes they aren't, whereas they are available in Anaconda. I would like to replace all of the python packages installed on my computer with those available through Anaconda.</p> <p>Is it possible to just install Anaconda in parallel with the normal openSUSE packages or should I manually delete the packages I've installed? I know python is used heavily throughout the operating system so I probably don't want to deep clean the system of python before going the Anaconda route.</p> <p>Has anyone done this before? I was unable to find any info on this on the Anaconda site, and I'm curious if there is a clean way to do this.</p>
1
2016-08-26T12:47:06Z
39,167,113
<p>I read the Anaconda documentation, and there is no evidence of Anaconda packages replacing your openSUSE packages; there isn't a reason for it to do so. If I got it right, Conda is very similar to Ruby's gem and similar tools, which definitely don't replace the installed system packages. I think you can feel free to install it next to your current packages. Also, you can specify the Python version and package versions in Anaconda environments, which is another thing it allows you to do, so you can decide what you will use there. Note: I'm not a conda user; this is how I understood the docs. Hope this helps.</p>
1
2016-08-26T13:08:57Z
[ "python", "anaconda", "opensuse" ]
Converting WindowsError to OSError in python
39,166,817
<p>In the (legacy) code I maintain people are using <code>WindowsError</code>. I could go ahead and replace all occurrences with <code>OSError</code> but alas the <code>winerror</code> attribute is used, happily only in three cases - namely 123:</p> <pre><code>try: mtime = int(os.path.getmtime(self._s)) except WindowsError, werr: if werr.winerror != 123: raise deprint(u'Unable to determine modified time of %s - probably a unicode error' % self._s) </code></pre> <p>740:</p> <pre><code>try: popen = subprocess.Popen(args, close_fds=bolt.close_fds) if wait: popen.wait() except UnicodeError: self._showUnicodeError() except WindowsError as werr: if werr.winerror != 740: self.ShowError(werr) </code></pre> <p>and 32:</p> <pre><code>try: patchName.untemp() # calls shutil.move() and os.remove() except WindowsError, werr: while werr.winerror == 32 and self._retry(patchName.temp.s, patchName.s): try: patchName.untemp() except WindowsError, werr: continue break else: raise </code></pre> <p>How am I going to translate these codes to <code>OSError</code> ?</p> <p>I am in python 2.7 so I can't use the nice exceptions introduced in <a href="https://www.python.org/dev/peps/pep-3151/#appendix-a-survey-of-common-errnos" rel="nofollow">pep-3151</a></p> <p>Here is a <a href="http://www.gossamer-threads.com/lists/python/python/920347" rel="nofollow">discussion</a> on mapping winerror to the errno module</p>
0
2016-08-26T12:52:10Z
39,337,429
<p>It turns out <code>winerror</code> and the <code>errno</code> attribute have different values. Following good practice, I did not use the magic numbers but the constants from the errno module. So for 32:</p> <pre><code>- except WindowsError as werr: - if werr.winerror == 32: + except OSError as werr: + if werr.errno == errno.EACCES: # 13 </code></pre> <p>For 123 (<a href="http://stackoverflow.com/q/21115580/281545">see also</a>):</p> <pre><code>with open('file', 'w'): pass newFileName = 'illegal characters: /\\:*?"&lt;&gt;|' try: os.rename('file', newFileName) except OSError as e: # winerror = 123, errno = 22 print e </code></pre> <p>so <code>errno.EINVAL</code>.</p> <p>740 was in Windows-specific code, so I left it alone.</p>
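A small helper along these lines keeps the check portable. The winerror-32-to-EACCES mapping is the one described above; the function name is mine:

```python
import errno


def is_sharing_violation(exc):
    # On Windows an OSError carries a `winerror` attribute alongside
    # `errno`; elsewhere only `errno` exists, so getattr with a default
    # keeps this cross-platform. ERROR_SHARING_VIOLATION (winerror 32)
    # surfaces as errno.EACCES (13) in CPython's mapping.
    return getattr(exc, 'winerror', None) == 32 or exc.errno == errno.EACCES
```

Callers can then write `except OSError as werr: if is_sharing_violation(werr): ...` without caring which platform raised the error.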
0
2016-09-05T20:39:25Z
[ "python", "python-2.7", "exception-handling", "cross-platform", "windowserror" ]
Real time Moving Averages in Python
39,166,941
<p>I need to calculate, in Python, a moving average of sensor data coming in on the serial port. All the samples I can find about numpy use data from a file, or from an array hard-coded before the program starts.</p> <p>In my case I do not have any data when the program starts. The data comes in over time, in real time, every second. I want to smooth the data as it arrives on the serial port.</p> <p>I have this working on the Arduino but also need it in Python. Can somebody please point me to a real-time (single value over time) sample, not a batch sample?</p>
-6
2016-08-26T12:59:34Z
39,167,754
<p>Here's how you would add one reading at a time to a running collection of readings and return the average. I prepopulated the readings list to show it in action, but in your program, you'd just start off with an empty list: <code>readings = []</code></p> <p>I made the assumption that you want to include the last x readings in your average rather than including all of the readings. That is what the <code>max_samples</code> parameter is for.</p> <p>without numpy:</p> <pre><code>readings = [1, 2, 3, 4, 5, 6, 7, 8, 9] reading = 10 max_samples = 10 def mean(nums): return float(sum(nums)) / max(len(nums), 1) readings.append(reading) avg = mean(readings) print 'current average =', avg print 'readings used for average:', readings if len(readings) == max_samples: readings.pop(0) print 'readings saved for next time:', readings </code></pre> <p>result:</p> <pre><code>current average = 5.5 readings used for average: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] readings saved for next time: [2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>with numpy:</p> <pre><code>import numpy as np readings = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]) reading = 10 max_samples = 10 readings = np.append(readings, reading) avg = np.mean(readings) print 'current average =', avg print 'readings used for average:', readings if len(readings) == max_samples: readings = np.delete(readings, 0) print 'readings saved for next time:', readings </code></pre> <p>result:</p> <pre><code>current average = 5.5 readings used for average: [ 1 2 3 4 5 6 7 8 9 10] readings saved for next time: [ 2 3 4 5 6 7 8 9 10] </code></pre>
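A variant of the same idea using `collections.deque` with a `maxlen` (a sketch: the oldest reading falls off automatically, so no manual `pop` bookkeeping is needed):

```python
from collections import deque

max_samples = 10
readings = deque(maxlen=max_samples)  # oldest reading is dropped automatically

def add_reading(reading):
    # Append the new sample and return the moving average of what's kept.
    readings.append(reading)
    return sum(readings) / len(readings)

# Stand-in for values arriving one at a time from the serial port:
for value in range(1, 11):
    avg = add_reading(value)
print(avg)  # 5.5 -- average of readings 1..10
```

In your program you would call `add_reading()` once per sample received from the serial port.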
0
2016-08-26T13:40:00Z
[ "python", "average" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,167,147
<p>I don't see anything wrong with your implementation at all. But you could perhaps do a simple swap instead.</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] for i in range(0, len(l), 2): old = l[i] l[i] = l[i+1] l[i+1] = old </code></pre> <p><strong>EDIT</strong> Apparently, Python has a nicer way to do a swap which would make the code like this</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] for i in range(0, len(l), 2): l[i], l[i+1] = l[i+1], l[i] </code></pre>
-1
2016-08-26T13:10:53Z
[ "python" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,167,227
<p>Here a single list comprehension that does the trick:</p> <pre><code>In [1]: l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] In [2]: [l[i^1] for i in range(len(l))] Out[2]: [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The key to understanding it is the following demonstration of how it permutes the list indices:</p> <pre><code>In [3]: [i^1 for i in range(10)] Out[3]: [1, 0, 3, 2, 5, 4, 7, 6, 9, 8] </code></pre> <p>The <code>^</code> is the <a href="https://en.wikipedia.org/wiki/Exclusive_or">exclusive or</a> operator. All that <code>i^1</code> does is flip the least-significant bit of <code>i</code>, effectively swapping 0 with 1, 2 with 3 and so on.</p>
29
2016-08-26T13:13:46Z
[ "python" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,167,243
<p>You can use the <a href="http://stackoverflow.com/questions/5389507/iterating-over-every-two-elements-in-a-list">pairwise iteration</a> and chaining to <a href="http://stackoverflow.com/questions/952914/making-a-flat-list-out-of-list-of-lists-in-python">flatten the list</a>:</p> <pre><code>&gt;&gt;&gt; from itertools import chain &gt;&gt;&gt; &gt;&gt;&gt; list(chain(*zip(l[1::2], l[0::2]))) [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>Or, you can use the <a href="https://docs.python.org/2/library/itertools.html#itertools.chain.from_iterable"><code>itertools.chain.from_iterable()</code></a> to avoid the extra unpacking:</p> <pre><code>&gt;&gt;&gt; list(chain.from_iterable(zip(l[1::2], l[0::2]))) [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre>
19
2016-08-26T13:14:37Z
[ "python" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,167,384
<pre><code>newList = [(x[2*i+1], x[2*i]) for i in range(0, len(x)/2)] </code></pre> <p>Now find a way to unzip the tuples. I won't do all of your homework.</p>
-3
2016-08-26T13:21:54Z
[ "python" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,167,400
<p>Another way, create nested lists with pairs reversing their order, then flatten the lists with <code>itertools.chain.from_iterable</code></p> <pre><code>&gt;&gt;&gt; from itertools import chain &gt;&gt;&gt; l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] &gt;&gt;&gt; list(chain.from_iterable([[l[i+1],l[i]] for i in range(0,(len(l)-1),2)])) [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre>
4
2016-08-26T13:22:36Z
[ "python" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,167,486
<p>Here is a solution based on the <code>modulo</code> operator (note the <code>itertools</code> import):</p> <pre><code>import itertools l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] even = [] uneven = [] for i, item in enumerate(l): if i % 2 == 0: even.append(item) else: uneven.append(item) list(itertools.chain.from_iterable(zip(uneven, even))) </code></pre>
-3
2016-08-26T13:26:57Z
[ "python" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,167,545
<p>No need for complicated logic, simply rearrange the list with slicing and step:</p> <pre><code>In [1]: l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] In [2]: l[::2], l[1::2] = l[1::2], l[::2] In [3]: l Out[3]: [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <hr> <h2> TLDR;</h2> <p><strong>Edited with explanation</strong></p> <p>I believe most viewers are already familiar with list slicing and multiple assignment. In case you don't I will try my best to explain what's going on (hope I do not make it worse).</p> <p>To understand list slicing, <a href="http://stackoverflow.com/questions/509211/explain-pythons-slice-notation">here</a> already has an excellent answer and explanation of list slice notation. Simply put:</p> <pre><code>a[start:end] # items start through end-1 a[start:] # items start through the rest of the array a[:end] # items from the beginning through end-1 a[:] # a copy of the whole array There is also the step value, which can be used with any of the above: a[start:end:step] # start through not past end, by step </code></pre> <p>Let's look at OP's requirements: </p> <pre><code> [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # list l ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ 0 1 2 3 4 5 6 7 8 9 # respective index of the elements l[0] l[2] l[4] l[6] l[8] # first tier : start=0, step=2 l[1] l[3] l[5] l[7] l[9] # second tier: start=1, step=2 ----------------------------------------------------------------------- l[1] l[3] l[5] l[7] l[9] l[0] l[2] l[4] l[6] l[8] # desired output </code></pre> <p>First tier will be: <code>l[::2] = [1, 3, 5, 7, 9]</code> Second tier will be: <code>l[1::2] = [2, 4, 6, 8, 10]</code></p> <p>As we want to re-assign <code>first = second</code> &amp; <code>second = first</code>, we can use multiple assignment, and update the original list in place:</p> <pre><code>first , second = second , first </code></pre> <p>that is:</p> <pre><code>l[::2], l[1::2] = l[1::2], l[::2] </code></pre> <p>As a side note, to get a new list but not altering original <code>l</code>, we can assign a 
new list from <code>l</code> and perform the swap above, that is:</p> <pre><code>n = l[:] # assign n as a copy of l (without [:], n still points to l) n[::2], n[1::2] = n[1::2], n[::2] </code></pre> <p>Hopefully this added explanation doesn't confuse anyone. If it does, please help me update it and make it better :-) </p>
93
2016-08-26T13:29:52Z
[ "python" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,167,669
<p>One of the possible answers using <code>chain</code> and a <code>list comprehension</code>:</p> <pre><code>&gt;&gt;&gt; l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] &gt;&gt;&gt; list(chain([(l[2*i+1], l[2*i]) for i in range(0, len(l)/2)])) [(2, 1), (4, 3), (6, 5), (8, 7), (10, 9)] </code></pre>
3
2016-08-26T13:35:30Z
[ "python" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,167,859
<p>Another approach, simply re-assigning slices in place:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] for a in range(0,len(l),2): l[a:a+2] = l[a-len(l)+1:a-1-len(l):-1] print l </code></pre> <p>Output:</p> <pre><code>[2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre>
2
2016-08-26T13:45:14Z
[ "python" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,168,029
<h2>A benchmark between top answers:</h2> <p>Python 2.7:</p> <pre><code>('inp1 -&gt;', 15.302665948867798) # NPE's answer ('inp2a -&gt;', 10.626379013061523) # alecxe's answer with chain ('inp2b -&gt;', 9.739919185638428) # alecxe's answer with chain.from_iterable ('inp3 -&gt;', 2.6654279232025146) # Anzel's answer </code></pre> <p>Python 3.4:</p> <pre><code>inp1 -&gt; 7.913498195000102 inp2a -&gt; 9.680125927000518 inp2b -&gt; 4.728151862000232 inp3 -&gt; 3.1804273489997286 </code></pre> <p>If you are curious about the performance differences between python 2 and 3, here are the reasons:</p> <p>As you can see, @NPE's answer (<code>inp1</code>) performs much better in python3.4; the reason is that in python3.X <code>range()</code> is a smart object and doesn't keep all the items of the range in memory like a list.</p> <blockquote> <p>In many ways the object returned by <code>range()</code> behaves as if it is a list, but in fact it isn’t. It is an object which returns the successive items of the desired sequence when you iterate over it, but it doesn’t really make the list, thus saving space.</p> </blockquote> <p>That's also why in python 3 slicing a range object doesn't return a list.</p> <pre><code># python2.7 &gt;&gt;&gt; range(10)[2:5] [2, 3, 4] # python 3.X &gt;&gt;&gt; range(10)[2:5] range(2, 5) </code></pre> <p>The second significant change is the performance improvement of the <code>chain.from_iterable()</code> approach (<code>inp2b</code>). As you can see, the gap between it and the fastest solution (<code>inp3</code>) has decreased to ~2sec (from ~7sec). The reason is the <code>zip()</code> function, which in python3.X returns an iterator that produces the items on demand. And since <code>chain.from_iterable()</code> iterates over the items anyway, materialising them all up front (which is what <code>zip</code> does in python 2) is completely redundant. 
</p> <p>Code:</p> <pre><code>from timeit import timeit inp1 = """ [l[i^1] for i in range(len(l))] """ inp2a = """ list(chain(*zip(l[1::2], l[0::2]))) """ inp2b = """ list(chain.from_iterable(zip(l[1::2], l[0::2]))) """ inp3 = """ l[::2], l[1::2] = l[1::2], l[::2] """ lst = list(range(100000)) print('inp1 -&gt;', timeit(stmt=inp1, number=1000, setup="l={}".format(lst))) print('inp2a -&gt;', timeit(stmt=inp2a, number=1000, setup="l={}; from itertools import chain".format(lst))) print('inp2b -&gt;', timeit(stmt=inp2b, number=1000, setup="l={}; from itertools import chain".format(lst))) print('inp3 -&gt;', timeit(stmt=inp3, number=1000, setup="l={}".format(lst))) </code></pre>
7
2016-08-26T13:54:30Z
[ "python" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,170,408
<p>For fun, if we interpret "swap" to mean "reverse" in a more general scope, the <code>itertools.chain.from_iterable</code> approach can be used for subsequences of longer lengths.</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] def chunk(list_, n): return (list_[i:i+n] for i in range(0, len(list_), n)) list(chain.from_iterable(reversed(c) for c in chunk(l, 4))) # [4, 3, 2, 1, 8, 7, 6, 5, 10, 9] </code></pre>
2
2016-08-26T16:03:44Z
[ "python" ]
Better way to swap elements in a list?
39,167,057
<p>I have a bunch of lists that look like this one:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] </code></pre> <p>I want to swap elements as follows:</p> <pre><code>final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9] </code></pre> <p>The size of the lists may vary, but they will always contain an even number of elements.</p> <p>I'm fairly new to Python and am currently doing it like this:</p> <pre><code>l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] final_l = [] for i in range(0, len(l)/2): final_l.append(l[2*i+1]) final_l.append(l[2*i]) </code></pre> <p>I know this isn't really <a href="https://en.wiktionary.org/wiki/Pythonic#Adjective">Pythonic</a> and would like to use something more efficient. Maybe a list comprehension?</p>
47
2016-08-26T13:05:46Z
39,170,492
<p>An(other) alternative:</p> <pre><code>final_l = list() # make an empty list for i in range(len(l)): # for as many items there are in the original list if i % 2 == 0: # if the item is even final_l.append(l[i+1]) # make this item in the new list equal to the next in the original list else: # else, so when the item is uneven final_l.append(l[i-1]) # make this item in the new list equal to the previous in the original list </code></pre> <p>This assumes that the original list has an even number of items. If not, a <a href="https://docs.python.org/3.4/tutorial/errors.html" rel="nofollow">try-except</a> can be added:</p> <pre><code>final_l = list() for i in range(len(l)): if i % 2 == 0: try: # try if we can add the next item final_l.append(l[i+1]) except: # if we can't (because i+1 doesnt exist), add the current item final_l.append(l[i]) else: final_l.append(l[i-1]) </code></pre>
2
2016-08-26T16:09:06Z
[ "python" ]
Python decorators: how to list wrapped functions by decorator
39,167,171
<p>Is that possible to use python decorators to mark a method, and get it for the later use, if I don't know the name of the wrapped function?</p> <p>Here is the example, and I don't know the name of method_with_custom_name:</p> <pre><code>@run_this_method def method_with_custom_name(my_arg): return "The args is: " + my_arg def _init_and_run(): # Here, I want to get and call method_with_custom_name # but I don't know it's name, # so the next line isn't valid. return run_this_method()(my_arg_value) def run_this_method(m): def w(my_arg): _do_some_magic(my_arg, m) return w def _do_some_magic(callback_arg, callback): if some_checks(): callback(callback_arg) </code></pre> <p>So how can I get a list of methods wrapped with <code>@run_this_method</code></p>
1
2016-08-26T13:11:44Z
39,167,274
<p>If I understand your question correctly (how to decorate a method with an unknown name?) then it is totally possible.</p> <pre><code>@decorator def foo(bar): pass </code></pre> <p>is syntastic sugar for</p> <pre><code>def foo(bar): pass foo = decorator(foo) </code></pre> <p>So in your case you should just do:</p> <pre><code>method_with_custom_name = run_this_method(method_with_custom_name) </code></pre> <p>The example you provided is confusing, though. Why don't you know the name of method_with_custom_name? It is right there. It is called method_with_custom_name. To use the decorated version later, you just call method_with_custom_name.</p>
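The equivalence can be demonstrated with a small runnable sketch (the `shout` decorator and the function names here are illustrative, not from the question):

```python
def shout(func):
    # A trivial decorator: uppercase whatever func returns.
    def wrapper(arg):
        return func(arg).upper()
    return wrapper

@shout
def greet(name):
    return "hello " + name

# The same thing without the @ syntax -- plain reassignment:
def greet2(name):
    return "hello " + name

greet2 = shout(greet2)

print(greet("world"))   # HELLO WORLD
print(greet2("world"))  # HELLO WORLD
```

Both names end up bound to the wrapped function, which is all the `@` syntax does.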
1
2016-08-26T13:16:17Z
[ "python", "python-decorators" ]
Python decorators: how to list wrapped functions by decorator
39,167,171
<p>Is that possible to use python decorators to mark a method, and get it for the later use, if I don't know the name of the wrapped function?</p> <p>Here is the example, and I don't know the name of method_with_custom_name:</p> <pre><code>@run_this_method def method_with_custom_name(my_arg): return "The args is: " + my_arg def _init_and_run(): # Here, I want to get and call method_with_custom_name # but I don't know it's name, # so the next line isn't valid. return run_this_method()(my_arg_value) def run_this_method(m): def w(my_arg): _do_some_magic(my_arg, m) return w def _do_some_magic(callback_arg, callback): if some_checks(): callback(callback_arg) </code></pre> <p>So how can I get a list of methods wrapped with <code>@run_this_method</code></p>
1
2016-08-26T13:11:44Z
39,168,080
<p>If you need to track all functions and methods decorated with your decorator, you need to create a global variable to register them. I've modified your code:</p> <pre><code>funcs_registry = [] #List of all functions decorated with @run_this_method def run_this_method(m): funcs_registry.append(m) #Add function/method to the registry def w(my_arg): _do_some_magic(my_arg, m) return w def _do_some_magic(callback_arg, callback): if some_checks(): callback(callback_arg) @run_this_method def method_with_custom_name(my_arg): return "The args is: " + my_arg def _init_and_run(): # Here you can iterate over "funcs_registry" # and do something with each function/method in it for m in funcs_registry: print(m.__name__) </code></pre> <p>Instead of using the global variable <code>funcs_registry</code> you can create a class to be used as the decorator and register functions in an instance field. Something like this:</p> <pre><code>class FunDecorator: def __init__(self): self.registry = [] def __call__(self, m): "This method is called when some method is decorated" self.registry.append(m) #Add function/method to the registry def w(my_arg): _do_some_magic(my_arg, m) return w run_this_method = FunDecorator() #Create class instance to be used as decorator @run_this_method def method_with_custom_name(my_arg): return "The args is: " + my_arg #do some magic with each decorated method: for m in run_this_method.registry: print(m.__name__) </code></pre>
1
2016-08-26T13:58:06Z
[ "python", "python-decorators" ]
YAML file update and delete using python?
39,167,440
<p>I have a YAML file that looks like this:</p> <pre><code>test: - exam.com - exam1.com - exam2.com test2: - examp.com - examp1.com - examp2.com </code></pre> <p>I'd like to manage this file using Python. The task is: add an entry under "test2" and delete an entry from "test".</p>
1
2016-08-26T13:24:38Z
39,168,727
<p>You first have to load the data, which will give you a top-level dict (in a variable called <code>data</code> in the following example), the values for the keys will be lists. On those lists you can do the <code>del</code> resp. <code>insert()</code> (or <code>append()</code>)</p> <pre><code>import sys import ruamel.yaml yaml_str = """\ test: - exam.com - exam1.com - exam2.com test2: - examp.com - examp1.com # want to insert after this - examp2.com """ data = ruamel.yaml.round_trip_load(yaml_str) del data['test'][1] data['test2'].insert(2, 'examp1.5') ruamel.yaml.round_trip_dump(data, sys.stdout, block_seq_indent=1) </code></pre> <p>gives:</p> <pre><code>test: - exam.com - exam2.com test2: - examp.com - examp1.com # want to insert after this - examp1.5 - examp2.com </code></pre> <p>The <code>block_seq_indent=1</code> is necessary as by default <code>ruamel.yaml</code> will left align a sequence value with the key.¹</p> <p>If you want to get rid of the comment in the output you can do:</p> <pre><code>data['test2']._yaml_comment = None </code></pre> <hr> <p>¹ <sub>This was done using <a href="https://pypi.python.org/pypi/ruamel.yaml" rel="nofollow">ruamel.yaml</a> a YAML 1.2 parser, of which I am the author.</sub></p>
0
2016-08-26T14:31:29Z
[ "python", "yaml", "pyyaml" ]
Python Pandas Custom DateTime Index
39,167,518
<p>I'm looking to reindex my data using a custom DateTime index. I would like the index to be: Sun 5PM-Mon 4PM; Mon 5PM-Tues 4PM; Tues 5PM-Wed 4PM; Wed 5PM-Thurs 4PM; Thurs 5PM-Fri 4PM, in a 1-minute interval. I have been playing around with the code below but I can't seem to get any data to populate in time_stamps. It seems like my issue might be with when the business day starts and ends and I am not sure how to get around that. Any help is appreciated.</p> <pre><code>import pandas as pd from pandas.tseries.holiday import USFederalHolidayCalendar from pandas.tseries.offsets import CustomBusinessDay import datetime as dt BDAY_US=CustomBusinessDay(calendar=USFederalHolidayCalendar()) sample_freq= '1min' dates= pd.date_range(start='2016-07-11',end='2016-07-21', freq=BDAY_US ).date times = pd.date_range(start='17:00:00', end='16:00:00', freq=sample_freq).time[1:] time_stamps = [dt.datetime.combine(date, time) for date in dates for time in times] </code></pre>
1
2016-08-26T13:28:27Z
39,168,714
<p>Similar to my answer <a href="http://stackoverflow.com/a/36794030/5276797">here</a>, you could generate the full range of timestamps, then remove those you are not interested in (the hours are integers, so drop the 16:01-16:59 gap by testing the minute):</p> <pre><code>time_stamps = pd.date_range('2016-07-11', '2016-07-21', freq='1min') mask = ~((time_stamps.hour == 16) &amp; (time_stamps.minute &gt; 0)) time_stamps[mask] </code></pre> <p>There are two complications:</p> <p><strong>First</strong>, you need to remove Fri 5pm - Sun 4pm</p> <pre><code>weekend_mask = ~( ((time_stamps.dayofweek == 4) &amp; (time_stamps.hour &gt;= 17)) | (time_stamps.dayofweek == 5) | ((time_stamps.dayofweek == 6) &amp; (time_stamps.hour &lt;= 16)) ) mask = mask &amp; weekend_mask </code></pre> <p><strong>Second</strong>, you want to remove holidays. This part of my linked <a href="http://stackoverflow.com/a/36794030/5276797">answer</a> may help:</p> <p>You can include a calendar by adding a condition to the mask:</p> <pre><code>import numpy as np np.in1d(index.date, calendar) </code></pre> <p>where calendar would be a numpy array of datetime objects.</p>
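For reference, the hour/weekday conditions above can be sketched with the standard library alone (illustrative only, not a drop-in replacement for the vectorised pandas masks; `datetime.weekday()` uses the same Mon=0 … Sun=6 convention as `dayofweek`):

```python
from datetime import datetime, timedelta

def keep(ts):
    # Keep Sun 17:00 through Fri 16:00, minus the daily 16:01-16:59 gap.
    if ts.weekday() == 5:                       # all of Saturday
        return False
    if ts.weekday() == 4 and ts.hour >= 17:     # Friday after 5pm
        return False
    if ts.weekday() == 6 and ts.hour <= 16:     # Sunday before 5pm
        return False
    if ts.hour == 16 and ts.minute > 0:         # daily 16:01-16:59 gap
        return False
    return True

start = datetime(2016, 7, 11)  # a Monday, as in the question
stamps = [start + timedelta(minutes=i) for i in range(7 * 24 * 60)]
kept = [ts for ts in stamps if keep(ts)]
```

This does not handle holidays; the calendar condition above covers that separately.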
2
2016-08-26T14:30:56Z
[ "python", "datetime", "pandas" ]
scikit very low accuracy on classifiers (Naive Bayes, DecisionTreeClassifier)
39,167,586
<p>I am using this dataset <a href="http://archive.ics.uci.edu/ml/datasets/Adult" rel="nofollow">Wealth Based on age</a> and the documentation states that the accuracy should be around <code>84%</code>. Unfortunately, the accuracy of my program is at <code>25%</code></p> <p>To process the data I did the following:</p> <pre><code>1. Loaded the .txt data file and converted it to a .csv 2. Removed data with missing values 3. Extracted the class values: &lt;=50K &gt;50 and convert it to 0 and 1 respectively 4. For each attribute and for each string value of that attribute I mapped it to an integer value. Example att1{'cs':0, 'cs2':1}, att2{'usa':0, 'greece':1} ... and so on 5. Called naive bayes on the new integer data set </code></pre> <p>Python code:</p> <pre><code>import load_csv as load #my functions to do [1..5] of the list import numpy as np my_data = np.genfromtxt('out.csv', dtype = dt, delimiter = ',', skip_header = 1) data = np.array(load.remove_missing_values(my_data)) #this function removes the missing data features_train = np.array(load.remove_field_num(data, len(data[0]) - 1)) #this function extracts the data, e.g removes the class in the end of the data label_train = np.array(load.create_labels(data)) features_train = np.array(load.convert_to_int(features_train)) my_data = np.genfromtxt('test.csv', dtype = dt, delimiter = ',', skip_header = 1) data = np.array(load.remove_missing_values(my_data)) features_test = np.array(load.remove_field_num(data, len(data[0]) - 1)) label_test = np.array(load.create_labels(data)) #extracts the labels from the .csv data file features_test = np.array(load.convert_to_int(features_test)) #converts the strings to ints (each unique string of an attribute is assigned a unique integer value) from sklearn import tree from sklearn.naive_bayes import GaussianNB from sklearn import tree from sklearn.metrics import accuracy_score clf = tree.DecisionTreeClassifier() clf.fit(features_train, label_train) predict = clf.predict(features_test) 
score = accuracy_score(predict, label_test) #Low accuracy score </code></pre> <p>load_csv module:</p> <pre><code>import numpy as np attributes = { 'Private':0, 'Self-emp-not-inc':1, 'Self-emp-inc':2, 'Federal-gov':3, 'Local-gov':4, 'State-gov':5, 'Without-pay':6, 'Never-worked':7, 'Bachelors':0, 'Some-college':1, '11th':2, 'HS-grad':3, 'Prof-school':4, 'Assoc-acdm':5, 'Assoc-voc':6, '9th':7, '7th-8th':8, '12th':9, 'Masters':10, '1st-4th':11, '10th':12, 'Doctorate':13, '5th-6th':14, 'Preschool':15, 'Married-civ-spouse':0, 'Divorced':1, 'Never-married':2, 'Separated':3, 'Widowed':4, 'Married-spouse-absent':5, 'Married-AF-spouse':6, 'Tech-support':0, 'Craft-repair':1, 'Other-service':2, 'Sales':3, 'Exec-managerial':4, 'Prof-specialty':5, 'Handlers-cleaners':6, 'Machine-op-inspct':7, 'Adm-clerical':8, 'Farming-fishing':9, 'Transport-moving':10, 'Priv-house-serv':11, 'Protective-serv':12, 'Armed-Forces':13, 'Wife':0, 'Own-child':1, 'Husband':2, 'Not-in-family':4, 'Other-relative':5, 'Unmarried':5, 'White':0, 'Asian-Pac-Islander':1, 'Amer-Indian-Eskimo':2, 'Other':3, 'Black':4, 'Female':0, 'Male':1, 'United-States':0, 'Cambodia':1, 'England':2, 'Puerto-Rico':3, 'Canada':4, 'Germany':5, 'Outlying-US(Guam-USVI-etc)':6, 'India':7, 'Japan':8, 'Greece':9, 'South':10, 'China':11, 'Cuba':12, 'Iran':13, 'Honduras':14, 'Philippines':15, 'Italy':16, 'Poland':17, 'Jamaica':18, 'Vietnam':19, 'Mexico':20, 'Portugal':21, 'Ireland':22, 'France':23, 'Dominican-Republic':24, 'Laos':25, 'Ecuador':26, 'Taiwan':27, 'Haiti':28, 'Columbia':29, 'Hungary':30, 'Guatemala':31, 'Nicaragua':32, 'Scotland':33, 'Thailand':34, 'Yugoslavia':35, 'El-Salvador':36, 'Trinadad&amp;Tobago':37, 'Peru':38, 'Hong':39, 'Holand-Netherlands':40 } def remove_field_num(a, i): #function to strip values names = list(a.dtype.names) new_names = names[:i] + names[i + 1:] b = a[new_names] return b def remove_missing_values(data): temp = [] for i in range(len(data)): for j in range(len(data[i])): if data[i][j] == '?': #If 
a missing value '?' is encountered do not append the line to temp break; if j == (len(data[i]) - 1) and len(data[i]) == 15: temp.append(data[i]) #Append the lines that do not contain '?' return temp def create_labels(data): temp = [] for i in range(len(data)): #Iterate through the data j = len(data[i]) - 1 #Extract the labels if data[i][j] == '&lt;=50K': temp.append(0) else: temp.append(1) return temp def convert_to_int(data): my_lst = [] for i in range(len(data)): lst = [] for j in range(len(data[i])): key = data[i][j] if j in (1, 3, 5, 6, 7, 8, 9, 13, 14): lst.append(int(attributes[key])) else: lst.append(int(key)) my_lst.append(lst) temp = np.array(my_lst) return temp </code></pre> <p>I have tried to use both <code>tree</code> and <code>NaiveBayes</code> but the accuracy is very low. Any suggestions of what am I missing?</p>
0
2016-08-26T13:31:58Z
39,173,259
<p>I guess the problem is in preprocessing. It is better to encode the categorical variables as one-hot vectors (vectors of zeros and ones, where the single one marks the desired value for that class) instead of raw numbers. Sklearn's <a href="https://github.com/zygmuntz/kaggle-happiness/blob/master/vectorize_validation.py" rel="nofollow">DictVectorizer</a> can help you with that. You can also do the preprocessing much more efficiently with the <code>pandas</code> library. </p> <p>The following shows how easily you can achieve that with the help of the <code>pandas</code> library. It works very well alongside scikit-learn. This achieves an accuracy of 81.6% on a test set that is 20% of the entire data.</p> <pre><code>from __future__ import division from sklearn.cross_validation import train_test_split from sklearn.feature_extraction.dict_vectorizer import DictVectorizer from sklearn.linear_model.logistic import LogisticRegression from sklearn.metrics.classification import classification_report, accuracy_score from sklearn.naive_bayes import GaussianNB from sklearn.tree.tree import DecisionTreeClassifier import numpy as np import pandas as pd # Read the data into a pandas dataframe df = pd.read_csv('adult.data.csv') # Column names cols = np.array(['age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'target']) # numeric columns numeric_cols = ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week'] # assign names to the columns in the dataframe df.columns = cols # replace the target variable to 0 and 1 for &lt;50K and &gt;50k df1 = df.copy() df1.loc[df1['target'] == ' &lt;=50K', 'target'] = 0 df1.loc[df1['target'] == ' &gt;50K', 'target'] = 1 # split the data into train and test X_train, X_test, y_train, y_test = train_test_split( df1.drop('target', axis=1), df1['target'], test_size=0.2) # numeric attributes
x_num_train = X_train[numeric_cols].as_matrix() x_num_test = X_test[numeric_cols].as_matrix() # scale to &lt;0,1&gt; max_train = np.amax(x_num_train, 0) max_test = np.amax(x_num_test, 0) # not really needed x_num_train = x_num_train / max_train x_num_test = x_num_test / max_train # scale test by max_train # labels or target attribute y_train = y_train.astype(int) y_test = y_test.astype(int) # categorical attributes cat_train = X_train.drop(numeric_cols, axis=1) cat_test = X_test.drop(numeric_cols, axis=1) cat_train.fillna('NA', inplace=True) cat_test.fillna('NA', inplace=True) x_cat_train = cat_train.T.to_dict().values() x_cat_test = cat_test.T.to_dict().values() # vectorize (encode as one hot) vectorizer = DictVectorizer(sparse=False) vec_x_cat_train = vectorizer.fit_transform(x_cat_train) vec_x_cat_test = vectorizer.transform(x_cat_test) # build the feature vector x_train = np.hstack((x_num_train, vec_x_cat_train)) x_test = np.hstack((x_num_test, vec_x_cat_test)) clf = LogisticRegression().fit(x_train, y_train.values) pred = clf.predict(x_test) print classification_report(y_test.values, pred, digits=4) print accuracy_score(y_test.values, pred) clf = DecisionTreeClassifier().fit(x_train, y_train) pred = clf.predict(x_test) print classification_report(y_test.values, pred, digits=4) print accuracy_score(y_test.values, pred) clf = GaussianNB().fit(x_train, y_train) pred = clf.predict(x_test) print classification_report(y_test.values, pred, digits=4) print accuracy_score(y_test.values, pred) </code></pre>
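The one-hot idea itself is independent of scikit-learn. As a minimal sketch (using a few made-up category names, not the real fitted vocabulary), each categorical value becomes a 0/1 vector with a single 1 at that category's index; this is essentially what <code>DictVectorizer</code> does for every string-valued column at once:

```python
# Minimal one-hot sketch: each category maps to a 0/1 vector with a single 1.
# The category list here is illustrative; DictVectorizer builds it from the data.
categories = ['Private', 'Self-emp-inc', 'State-gov']
index = {c: i for i, c in enumerate(categories)}

def one_hot(value):
    vec = [0] * len(categories)
    vec[index[value]] = 1
    return vec

print(one_hot('Self-emp-inc'))  # [0, 1, 0]
```

The point of this encoding is that the classifier no longer sees an artificial ordering (such as 'Private' &lt; 'State-gov') in the raw category codes.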
1
2016-08-26T19:11:41Z
[ "python", "machine-learning", "scipy", "scikit-learn" ]
Ruby pass numbers as key (key-value) to **kwargs
39,167,635
<p>How to pass numbers as key (key-value) to **kwargs? In other words, why can't numbers be passed as key (key-value) to **kwargs? Here's the error:</p> <p><strong>Edit:</strong></p> <pre><code>def func(**kwargs) puts kwargs.class p kwargs end func(a={:alpha =&gt; 1, :beta =&gt; 2, :gamma =&gt; 3}) # wrong number of arguments (1 for 0) (ArgumentError) func(n = {1 =&gt; 'alpha', 2 =&gt; 'beta', 3 =&gt; 'gamma'}) </code></pre> <p>In Python, this would work:</p> <pre><code>def func(**kwargs): print(type(kwargs).__name__) print(kwargs) func(a={'alpha': 1, 'beta': 2, 'gamma': 3}) func(n = {1: 'alpha', 2: 'beta', 3: 'gamma'}) </code></pre> <p>Please help. </p>
-1
2016-08-26T13:34:23Z
39,167,965
<p>kwargs means: <strong>k</strong>ey<strong>w</strong>ord <strong>arg</strong>uments</p> <p>In Ruby's case you can only pass a symbol as the key. (Correct me if there's anything else)</p> <p>So things like these won't work:</p> <pre><code>func("a" =&gt; "foo") func(1 =&gt; "bar") func(Object.new =&gt; "baz") </code></pre> <p>These will all create a hash and pass it to the function as the first positional argument. But your method doesn't have any positional parameters. Hence the error.</p> <p>You can only use:</p> <pre><code>func(a: "this") func(whatever: Object.new) </code></pre>
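For comparison, the Python version in the question only works because <code>a=</code> is the keyword; the numeric keys stay inside an ordinary dict value and never become keyword names. CPython enforces a restriction similar to Ruby's if you try to unpack non-string keys. A quick sketch:

```python
def func(**kwargs):
    return kwargs

# `a` is the keyword argument; the dict with integer keys is just its value.
result = func(a={1: 'alpha', 2: 'beta'})
print(result)  # {'a': {1: 'alpha', 2: 'beta'}}

# Trying to make the integers themselves keyword names fails,
# much like Ruby's symbol-only rule:
try:
    func(**{1: 'alpha'})
except TypeError as exc:
    print('rejected:', exc)  # CPython rejects non-string keyword names
```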
1
2016-08-26T13:51:12Z
[ "python", "ruby" ]
NLTK BigramTagger does not tag half of the sentence
39,167,671
<p>Can someone please explain the behaviour of NLTK's BigramTagger in these examples?</p> <p>I instantiated the tagger by </p> <pre><code>bi= BigramTagger(brown.tagged_sents(categories='news')[:500]) </code></pre> <p>Now, I want to use this on one specific sentence.</p> <pre><code>&gt;&gt;&gt; bi.tag(brown_sents[2]) [(u'The', u'AT'), (u'September-October', u'NP'), (u'term', u'NN'), (u'jury', u'NN'), (u'had', u'HVD'), (u'been', u'BEN'), (u'charged', u'VBN'), (u'by', u'IN'), (u'Fulton', u'NP-TL'), (u'Superior', u'JJ-TL'), (u'Court', u'NN-TL'), (u'Judge', u'NN-TL'), (u'Durwood', u'NP'), (u'Pye', u'NP'), (u'to', u'TO'), (u'investigate', u'VB'), (u'reports', u'NNS'), (u'of', u'IN'), (u'possible', u'JJ'), (u'``', u'``'), (u'irregularities', u'NNS'), (u"''", u"''"), (u'in', u'IN'), (u'the', u'AT'), (u'hard-fought', u'JJ'), (u'primary', u'NN'), (u'which', u'WDT'), (u'was', u'BEDZ'), (u'won', u'VBN'), (u'by', u'IN'), (u'Mayor-nominate', u'NN-TL'), (u'Ivan', u'NP'), (u'Allen', u'NP'), (u'Jr.', u'NP'), (u'.', u'.')] </code></pre> <p>Works well, but hey, it's all known data. 
Let me change one word and see if it sets something off.</p> <pre><code>&gt;&gt;&gt; sent=brown_sents[2] &gt;&gt;&gt; sent[5] u'been' &gt;&gt;&gt; sent[5] = u'was' &gt;&gt;&gt; bi.tag(sent) [(u'The', u'AT'), (u'September-October', u'NP'), (u'term', u'NN'), (u'jury', u'NN'), (u'had', u'HVD'), (u'was', None), (u'charged', None), (u'by', None), (u'Fulton', None), (u'Superior', None), (u'Court', None), (u'Judge', None), (u'Durwood', None), (u'Pye', None), (u'to', None), (u'investigate', None), (u'reports', None), (u'of', None), (u'possible', None), (u'``', None), (u'irregularities', None), (u"''", None), (u'in', None), (u'the', None), (u'hard-fought', None), (u'primary', None), (u'which', None), (u'was', None), (u'won', None), (u'by', None), (u'Mayor-nominate', None), (u'Ivan', None), (u'Allen', None), (u'Jr.', None), (u'.', None)] </code></pre> <p>Now I expected to see changed tuple, <code>(u'been', u'BEN')</code> to now be (u'been', None). Why is everything after it in the sentence now not tagged? Those words were tagged in connection to another ones, not 'been'.</p> <p>Any recommendation on use of tagged sentences would be appreciated as well.</p>
2
2016-08-26T13:35:40Z
39,170,704
<p>You have to set a backoff tagger when using *gramTagger so that if the specific ngram is not seen in the training data, it will backoff to a tagger trained on a lower order ngram. See "Combining Taggers" section in <a href="http://www.nltk.org/book/ch05.html" rel="nofollow">http://www.nltk.org/book/ch05.html</a></p> <pre><code>&gt;&gt;&gt; from nltk import DefaultTagger, UnigramTagger, BigramTagger &gt;&gt;&gt; from nltk.corpus import brown &gt;&gt;&gt; text = brown.tagged_sents(categories='news')[:500] &gt;&gt;&gt; t0 = DefaultTagger('NN') &gt;&gt;&gt; t1 = UnigramTagger(text, backoff=t0) &gt;&gt;&gt; t2 = BigramTagger(text, backoff=t1) &gt;&gt;&gt; test_sent = brown.sents()[502] &gt;&gt;&gt; test_sent [u'Noting', u'that', u'Plainfield', u'last', u'year', u'had', u'lost', u'the', u'Mack', u'Truck', u'Co.', u'plant', u',', u'he', u'said', u'industry', u'will', u'not', u'come', u'into', u'this', u'state', u'until', u'there', u'is', u'tax', u'reform', u'.'] &gt;&gt;&gt; t2.tag(test_sent) [(u'Noting', u'VBG'), (u'that', u'CS'), (u'Plainfield', u'NP-HL'), (u'last', u'AP'), (u'year', u'NN'), (u'had', u'HVD'), (u'lost', u'VBD'), (u'the', u'AT'), (u'Mack', 'NN'), (u'Truck', 'NN'), (u'Co.', u'NN-TL'), (u'plant', 'NN'), (u',', u','), (u'he', u'PPS'), (u'said', u'VBD'), (u'industry', 'NN'), (u'will', u'MD'), (u'not', u'*'), (u'come', u'VB'), (u'into', u'IN'), (u'this', u'DT'), (u'state', u'NN'), (u'until', 'NN'), (u'there', u'EX'), (u'is', u'BEZ'), (u'tax', 'NN'), (u'reform', 'NN'), (u'.', u'.')] </code></pre> <p>And to show that it works with your example in the question ;P</p> <pre><code>&gt;&gt;&gt; test_sent = brown.sents()[2] &gt;&gt;&gt; test_sent [u'The', u'September-October', u'term', u'jury', u'had', u'been', u'charged', u'by', u'Fulton', u'Superior', u'Court', u'Judge', u'Durwood', u'Pye', u'to', u'investigate', u'reports', u'of', u'possible', u'``', u'irregularities', u"''", u'in', u'the', u'hard-fought', u'primary', u'which', u'was', u'won', u'by', 
u'Mayor-nominate', u'Ivan', u'Allen', u'Jr.', u'.'] &gt;&gt;&gt; t2.tag(test_sent) [(u'The', u'AT'), (u'September-October', u'NP'), (u'term', 'NN'), (u'jury', u'NN'), (u'had', u'HVD'), (u'been', u'BEN'), (u'charged', u'VBN'), (u'by', u'IN'), (u'Fulton', u'NP-TL'), (u'Superior', u'JJ-TL'), (u'Court', u'NN-TL'), (u'Judge', u'NN-TL'), (u'Durwood', u'NP'), (u'Pye', u'NP'), (u'to', u'TO'), (u'investigate', u'VB'), (u'reports', u'NNS'), (u'of', u'IN'), (u'possible', u'JJ'), (u'``', u'``'), (u'irregularities', u'NNS'), (u"''", u"''"), (u'in', u'IN'), (u'the', u'AT'), (u'hard-fought', u'JJ'), (u'primary', 'NN'), (u'which', u'WDT'), (u'was', u'BEDZ'), (u'won', u'VBN'), (u'by', u'IN'), (u'Mayor-nominate', u'NN-TL'), (u'Ivan', u'NP'), (u'Allen', u'NP'), (u'Jr.', u'NP'), (u'.', u'.')] </code></pre> <p>At some point, you might realize that <a href="http://stackoverflow.com/questions/30821188/python-nltk-pos-tag-not-returning-the-correct-part-of-speech-tag">Python NLTK pos_tag not returning the correct part-of-speech tag</a> </p>
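The backoff chain itself is just "try the most specific model first, then fall back". A toy dictionary-based sketch of that idea (made-up lookup tables, not real NLTK internals):

```python
# Toy backoff chain: bigram lookup -> unigram lookup -> default tag.
bigram = {('had', 'been'): 'BEN'}       # context-sensitive model
unigram = {'had': 'HVD', 'jury': 'NN'}  # context-free fallback
DEFAULT_TAG = 'NN'                      # last-resort fallback

def tag_word(prev, word):
    if (prev, word) in bigram:
        return bigram[(prev, word)]
    return unigram.get(word, DEFAULT_TAG)

print(tag_word('had', 'been'))  # BEN (bigram hit)
print(tag_word('the', 'jury'))  # NN  (unigram fallback)
print(tag_word('had', 'was'))   # NN  (default fallback)
```

Note that the real BigramTagger conditions on the previous tag rather than the previous word, which is why a single unseen word makes every following context unseen too; without a backoff, the whole tail of the sentence comes back as None.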
2
2016-08-26T16:21:11Z
[ "python", "nlp", "nltk", "n-gram", "pos-tagger" ]
How to get all data from qt model
39,167,722
<p>I have created a QIdentityProxyModel called proxymodel that extends a QSqlTableModel called sourcemodel by adding 3 calculated columns. The calculated columns are made by traversing the sourcemodel and storing the data in a list that is mapped by the proxymodel. The proxymodel is displayed in a TableView.</p> <p>The problem I have is that unless I interact with the TableView, the models only load the first 256 records of the 5426 total, so initially I can only perform calculations on these first 256 rows.</p> <p>I wish to fill the list with the calculations on all 5426 rows. Please help me get this done; any ideas would be helpful.</p> <p>The project is written in PyQt, so feel free to answer however you want!</p>
1
2016-08-26T13:38:43Z
39,171,449
<p>SQL models use progressive fetching. The source model returns <code>true</code> from <code>canFetchMore</code>. The view calls <code>fetchMore</code>, and then more rows are added to the source model by fetching them from the database - only when the view needs them.</p> <p>Since your proxy needs all the data, it should invoke <code>fetchMore</code> on the source model during idle time (using a zero-duration timer). It should also properly track the source getting more rows inserted into it!</p> <pre><code>class MyProxy : public QIdentityProxyModel { Q_OBJECT QMetaObject::Connection m_onRowsInserted; ... /// Update the computed results based on data in rows first through last /// in the source model. void calculate(int first, int last); void onRowsInserted(const QModelIndex &amp; parent, int first, int last) { calculate(first, last); } void onSourceModelChanged() { disconnect(m_onRowsInserted); m_onRowsInserted = connect(sourceModel(), &amp;QAbstractItemModel::rowsInserted, this, &amp;MyProxy::onRowsInserted); fetch(); } void fetch() { if (!sourceModel()-&gt;canFetchMore(QModelIndex{})) return; QTimer::singleShot(0, this, [this]{ if (!sourceModel()-&gt;canFetchMore(QModelIndex{})) return; sourceModel()-&gt;fetchMore(QModelIndex{}); fetch(); }); } public: MyProxy(QObject * parent = nullptr) : QIdentityProxyModel{parent} { connect(this, &amp;QAbstractProxyModel::sourceModelChanged, this, &amp;MyProxy::onSourceModelChanged); } }; </code></pre>
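In Python terms (the question is PyQt), the core of the approach is just "keep calling fetchMore until canFetchMore is false". A sketch against a hypothetical model stub; in real code each step would be scheduled with a zero-duration QTimer instead of a blocking loop, so the UI stays responsive:

```python
class FakeModel:
    """Hypothetical stand-in for a QSqlTableModel that loads 256 rows at a time."""

    def __init__(self, total_rows):
        self.total_rows = total_rows
        self.loaded = 0

    def canFetchMore(self):
        return self.loaded < self.total_rows

    def fetchMore(self):
        self.loaded = min(self.loaded + 256, self.total_rows)


def fetch_all(model):
    # Real Qt code: QTimer.singleShot(0, ...) per step, so the event loop runs.
    while model.canFetchMore():
        model.fetchMore()


model = FakeModel(5426)
fetch_all(model)
print(model.loaded)  # 5426
```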
0
2016-08-26T17:09:02Z
[ "python", "c++", "qt", "model" ]
Python/Django convert/humanize specific dictionary items on django view?
39,167,753
<p>I am creating a variable queryset and then passing the values into a dictionary context variable. I am able to convert dates, but I'm not sure how to convert specific fields to currency (example: $1,000 instead of 1000) or even just humanize specific fields with commas (1,000 instead of 1000). Below is the code from my view; the two methods are part of a class:</p> <pre><code>from datetime import date def get_context_data(self): context = super(MyView, self).get_context_data() context['myview_items'] = self.get_mymethod_items_context() return context def get_mymethod_items_context(self): context = {} items = table.objects.values('date_begun', 'price', 'item_number') context['items'] = items context['headers'] = ['Date', 'Price', 'Item'] context['fields'] = ['date_begun_date', 'price', 'item_number'] return context </code></pre> <p>I only want to convert one field, which is what I was trying to do for the price field:</p> <pre><code>from django.contrib.humanize.templatetags.humanize import intcomma intcomma('price') </code></pre> <p>How I'm creating the tables (template tag in my template):</p> <pre><code>{% simple_table_print 'tableid1' 'Price Information' myview_items.items myview_items.fields myview_items.headers %} </code></pre>
-1
2016-08-26T13:40:00Z
39,169,675
<p>First and foremost, you should use a <a href="https://docs.djangoproject.com/en/1.10/ref/models/fields/#decimalfield" rel="nofollow">decimal field</a> for your currency values if you can. This is not a django convention, <a href="http://stackoverflow.com/questions/1165761/decimal-vs-double-which-one-should-i-use-and-when#1165788">it's a general programming one</a>; decimal types can hold the necessary precision for a valid "money" representation.</p> <p>As pointed by <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/humanize/" rel="nofollow">the documentation</a>, you have to install the app <code>humanize</code>, and in your template, load it up with <code>{% load humanize %}</code>. Keep in mind that it <strong>will</strong> use the appropriate locale provided in your <code>settings.py</code>.</p> <p><a href="https://docs.djangoproject.com/en/1.10/ref/templates/language/#custom-tag-and-filter-libraries" rel="nofollow">Here</a> you have a very simple example using <code>{% load humanize %}</code></p>
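For reference, if you ever need the comma grouping outside a template, the standard format specification produces the same kind of output that <code>intcomma</code> renders (without the locale handling, which is what humanize adds on top):

```python
# Comma grouping with the stdlib format spec; no Django required.
grouped = "{:,}".format(1000)
dollars = "${:,.2f}".format(1234.5)
print(grouped)  # 1,000
print(dollars)  # $1,234.50
```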
1
2016-08-26T15:23:03Z
[ "python", "django", "dictionary", "type-conversion" ]
Python/Django convert/humanize specific dictionary items on django view?
39,167,753
<p>I am creating a variable queryset and then passing the values into a dictionary context variable. I am able to convert dates, but I'm not sure how to convert specific fields to currency (example: $1,000 instead of 1000) or even just humanize specific fields with commas (1,000 instead of 1000). Below is the code from my view; the two methods are part of a class:</p> <pre><code>from datetime import date def get_context_data(self): context = super(MyView, self).get_context_data() context['myview_items'] = self.get_mymethod_items_context() return context def get_mymethod_items_context(self): context = {} items = table.objects.values('date_begun', 'price', 'item_number') context['items'] = items context['headers'] = ['Date', 'Price', 'Item'] context['fields'] = ['date_begun_date', 'price', 'item_number'] return context </code></pre> <p>I only want to convert one field, which is what I was trying to do for the price field:</p> <pre><code>from django.contrib.humanize.templatetags.humanize import intcomma intcomma('price') </code></pre> <p>How I'm creating the tables (template tag in my template):</p> <pre><code>{% simple_table_print 'tableid1' 'Price Information' myview_items.items myview_items.fields myview_items.headers %} </code></pre>
-1
2016-08-26T13:40:00Z
39,170,137
<p>I would install <code>humanize</code> app and then extend its <code>intcomma</code> method to create my own templatetag:</p> <pre><code>from django import template from django.contrib.humanize.templatetags.humanize import intcomma register = template.Library() @register.filter def prepend_dollars(dollars): if dollars: dollars = round(float(dollars), 2) return "$%s%s" % (intcomma(int(dollars)), ("%0.2f" % dollars)[-3:]) else: return '' </code></pre> <p>First load your customtags.py file in your template:</p> <pre><code>{% load customtags %} </code></pre> <p>You can then use it in your template:</p> <pre><code>{{ my_decimal_val | prepend_dollars }} </code></pre> <p>Hope it helps.</p>
0
2016-08-26T15:49:08Z
[ "python", "django", "dictionary", "type-conversion" ]
Python/Django convert/humanize specific dictionary items on django view?
39,167,753
<p>I am creating a variable queryset and then passing the values into a dictionary context variable. I am able to convert dates, but I'm not sure how to convert specific fields to currency (example: $1,000 instead of 1000) or even just humanize specific fields with commas (1,000 instead of 1000). Below is the code from my view; the two methods are part of a class:</p> <pre><code>from datetime import date def get_context_data(self): context = super(MyView, self).get_context_data() context['myview_items'] = self.get_mymethod_items_context() return context def get_mymethod_items_context(self): context = {} items = table.objects.values('date_begun', 'price', 'item_number') context['items'] = items context['headers'] = ['Date', 'Price', 'Item'] context['fields'] = ['date_begun_date', 'price', 'item_number'] return context </code></pre> <p>I only want to convert one field, which is what I was trying to do for the price field:</p> <pre><code>from django.contrib.humanize.templatetags.humanize import intcomma intcomma('price') </code></pre> <p>How I'm creating the tables (template tag in my template):</p> <pre><code>{% simple_table_print 'tableid1' 'Price Information' myview_items.items myview_items.fields myview_items.headers %} </code></pre>
-1
2016-08-26T13:40:00Z
39,170,847
<p>Usually it's a payment processor headache. But if you want to localize your currency value, I'd suggest something like this:</p> <p>Use the <a href="http://babel.pocoo.org/en/latest/index.html" rel="nofollow">'babel'</a> package as a base &amp; implement the following filter:</p> <pre><code>from decimal import Decimal from decimal import InvalidOperation from babel.numbers import format_currency from django import template from django.conf import settings from django.utils.translation import get_language, to_locale register = template.Library() @register.filter def currency_format(value, currency='USD'): try: value = Decimal(value) except (TypeError, InvalidOperation): return u'' kwargs = { 'currency': currency, 'locale': to_locale(get_language() or settings.LANGUAGE_CODE) } return format_currency(value, **kwargs) </code></pre> <p>You can use it as a filter or in your views by calling <code>currency_format()</code> directly.</p>
0
2016-08-26T16:29:22Z
[ "python", "django", "dictionary", "type-conversion" ]
How to specify which python to use when writing vim script in python
39,167,814
<p>I started to learn how to write Vim scripts in Python.</p> <p>I came across this tutorial:</p> <p><a href="https://dzone.com/articles/how-write-vim-plugins-python" rel="nofollow">https://dzone.com/articles/how-write-vim-plugins-python</a></p> <p>and my first Vim script looks like the following:</p> <pre><code>function! Reddit() python &lt;&lt; EOF import sys print sys.executable print "hello Reddit" EOF endfunction </code></pre> <p>As you can see, the <code>print sys.executable</code> prints out the Python on the system path.</p> <p>As I use pyenv a lot, how can I set Vim to recognize my Python according to my working environment?</p>
0
2016-08-26T13:42:57Z
39,259,443
<p>If you don’t mind using Neovim instead of vim, you would be able to select which Python interpreter to use via:</p> <pre><code>let g:python_host_prog = '/path/to/your/interpreter/python' let g:python3_host_prog = '/path/to/your/interpreter/python3' </code></pre>
0
2016-08-31T21:40:37Z
[ "python", "vim" ]
Cannot parse table using BeautifulSoup
39,168,000
<p>I have been trying to parse the table <a href="http://podaac.jpl.nasa.gov/ws/search/granule/index.html" rel="nofollow">here</a> with table id = "tblDataset2" and trying to access the rows in the table, but I only get a single row when I parse the webpage using beautifulsoup. Here's my code : </p> <pre><code>from bs4 import BeautifulSoup import requests URL = 'http://podaac.jpl.nasa.gov/ws/' dataset_ids = [] html = requests.get(URL + 'search/granule/index.html') soup = BeautifulSoup(html.text, 'html.parser') table = soup.find("table", {"id": "tblDataset2"}) rows = table.find_all('tr') rows.remove(rows[0]) print table for row in rows: x = row.find_all('td') dataset_ids.append(x[0].text.encode('utf-8')) print dataset_ids </code></pre> <p>I want to access all the rows of the table. Please help me with this. Thanks.</p>
1
2016-08-26T13:52:52Z
39,168,175
<p>This particular dataset is being asynchronously loaded by the browser from a different endpoint which returns a JSON response. Make the request directly to that endpoint:</p> <pre><code>import requests URL = 'http://podaac.jpl.nasa.gov/l2ssIngest/datasets' response = requests.get(URL) data = response.json() for item in data["datasets"]: print(item["persistentId"], item["shortName"]) </code></pre> <p>Prints:</p> <pre><code>(u'PODAAC-AQR40-2SOCS', u'AQUARIUS_L2_SSS_V4') (u'PODAAC-QSX12-L2B01', u'QSCAT_LEVEL_2B_OWV_COMP_12') (u'PODAAC-ASOP2-12C01', u'ASCATA-L2-Coastal') (u'PODAAC-ASOP2-25X01', u'ASCATA-L2-25km') (u'PODAAC-ASOP2-25B01', u'ASCATB-L2-25km') (u'PODAAC-ASOP2-COB01', u'ASCATB-L2-Coastal') (u'PODAAC-J2ODR-GPS00', u'OSTM_L2_OST_OGDR_GPS') (u'PODAAC-OSCT2-L2BV2', u'OS2_OSCAT_LEVEL_2B_OWV_COMP_12_V2') (u'PODAAC-RSX12-L2B11', u'RSCAT_LEVEL_2B_OWV_COMP_12_V1.1') (u'PODAAC-AKASA-XOGD1', u'ALTIKA_SARAL_L2_OST_XOGDR') (u'PODAAC-GHAM2-2PR72', u'AMSR2-REMSS-L2P-v7.2') (u'PODAAC-GHVRS-2PN01', u'VIIRS_NPP-NAVO-L2P-v2.0') (u'PODAAC-RSX12-L2B12', u'RSCAT_LEVEL_2B_OWV_COMP_12_V1.2') </code></pre> <p>As for the <em>first dataset</em>, you need to make a GET request to the "search" endpoint:</p> <pre><code>from operator import itemgetter import requests URL = 'http://podaac.jpl.nasa.gov/dmasSolr/solr/dataset/select/' response = requests.get(URL, params={ 'q': '*:*', 'fl': 'Dataset-PersistentId,Dataset-ShortName-Full', 'rows': '2147483647', 'fq': 'DatasetPolicy-AccessType-Full:(OPEN OR PREVIEW OR SIMULATED OR REMOTE) AND DatasetPolicy-ViewOnline:Y', 'wt': 'json' }) data = response.json() for doc in sorted(data['response']['docs'], key=itemgetter('Dataset-ShortName-Full')): print(doc['Dataset-PersistentId'], doc['Dataset-ShortName-Full']) </code></pre> <p>Prints:</p> <pre><code>(u'PODAAC-GHRAM-4FA01', u'ABOM-L4HRfnd-AUS-RAMSSA_09km') (u'PODAAC-GHGAM-4FA01', u'ABOM-L4LRfnd-GLOB-GAMSSA_28km') (u'PODAAC-AKASA-XOGD1', u'ALTIKA_SARAL_L2_OST_XOGDR') (u'PODAAC-USWCO-ALT01', 
u'ALT_TIDE_GAUGE_L4_OST_SLA_US_WEST_COAST') ... (u'PODAAC-SASSX-L2WAF', u'WAF_DEALIASED_SASS_L2') (u'PODAAC-SMMRN-2WAF0', u'WENTZ_NIMBUS-7_SMMR_L2') (u'PODAAC-SASSX-L2SN0', u'WENTZ_SASS_SIGMA0_L2') </code></pre> <hr> <p>If you prefer not to dive into how the page is loaded and formed, you can use a real browser automated by <a href="http://selenium-python.readthedocs.io/" rel="nofollow"><code>selenium</code></a>.</p>
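Once the endpoint returns JSON, no HTML parsing is involved at all; it is plain dictionary access. A sketch against a hypothetical sample payload mirroring the endpoint's shape (stdlib only, no network call):

```python
import json

# Hypothetical sample mirroring the datasets endpoint's JSON shape.
payload = ('{"datasets": [{"persistentId": "PODAAC-AQR40-2SOCS",'
           ' "shortName": "AQUARIUS_L2_SSS_V4"}]}')

data = json.loads(payload)
pairs = [(d["persistentId"], d["shortName"]) for d in data["datasets"]]
print(pairs)  # [('PODAAC-AQR40-2SOCS', 'AQUARIUS_L2_SSS_V4')]
```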
1
2016-08-26T14:02:43Z
[ "python", "html", "parsing", "beautifulsoup", "inspect-element" ]
Tensorflow: show or save forget gate values in LSTM
39,168,025
<p>I am using the LSTM model that comes by default in TensorFlow. I would like to know how to save or show the values of the forget gate at each step; has anyone done this before, or at least something similar?</p> <p>So far I have tried tf.Print, but many values appear (even more than the ones I was expecting). I would try plotting something with TensorBoard, but I think those gates are just variables and not extra layers that I can print (also because they are inside the TF script).</p> <p>Any help will be well received.</p>
2
2016-08-26T13:54:13Z
39,177,157
<p>If you are using <code>tf.rnn_cell.BasicLSTMCell</code> , the variable you are looking for will have the following suffix in its name : <code>&lt;parent_variable_scope&gt;/BasicLSTMCell/Linear/Matrix</code> . This is a concatenated matrix for all the four gates. Its first dimension matches the sum of the second dimensions of the input matrix and the state matrix (or output of the cell to be exact). The second dimension is 4 times the number of cell size.</p> <p>The other complementary variable is <code>&lt;parent_variable_scope&gt;/BasicLSTMCell/Linear/Bias</code> that is a vector of the same size as the second dimension of the abovementioned tensor (for obvious reasons).</p> <p>You can retrieve the parameters for the four gates by using <code>tf.split()</code> along dimension 1. The split matrices would be in the order <code>[input], [new input], [forget], [output]</code>. I am referring to the code here form <code>rnn_cell.py</code>.</p> <p>Keep in mind that the variable represents the parameters of the Cell and not the output of the respective gates. But with the above info, I am sure you can get that too, if you so desire.</p> <p>Edit:<br> Added more specific information about the actual tensors <code>Matrix</code> and <code>Bias</code></p>
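The split itself is plain column slicing. A pure-Python sketch of carving one row of the concatenated <code>Matrix</code> tensor into the four per-gate blocks, in the [input], [new input], [forget], [output] order described above (stand-in numbers, not real weights):

```python
# One row of a hypothetical concatenated LSTM weight matrix:
# second dimension = 4 * cell_size, gate order [i, j, f, o].
cell_size = 2
row = [0, 1, 2, 3, 4, 5, 6, 7]  # stand-in values

gates = [row[k * cell_size:(k + 1) * cell_size] for k in range(4)]
i_gate, new_input, f_gate, o_gate = gates
print(f_gate)  # [4, 5] -- the forget-gate slice of this row
```

In real code the same carving is what `tf.split()` along dimension 1 does on the full tensor.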
0
2016-08-27T03:25:44Z
[ "python", "neural-network", "tensorflow", "lstm" ]
Generating 2d numpy arrays from random columns
39,168,050
<p>I need to generate a 3xn matrix with random columns, ensuring that no column contains the same number more than once. I am currently using the code below:</p> <pre><code>n=10 set = np.arange(0, 10) matrix = np.random.choice(set, size=3, replace=False)[:, None] for i in range(n): column = np.random.choice(set, size=3, replace=False)[:, None] matrix = np.concatenate((matrix, column),axis=1) print matrix </code></pre> <p>which gives the output I expected:</p> <pre><code>[[2 1 7 2 1 9 7 4 5 2 7] [4 6 3 5 9 8 1 3 8 4 0] [3 5 0 0 4 5 4 0 2 5 3]] </code></pre> <p>However, it seems that the code does not work fast enough. I am aware that implementing the for loop using Cython might help, but I want to know whether there is a more performant way to write this code solely in Python.</p>
0
2016-08-26T13:56:21Z
39,168,933
<p>As was already mentioned in the comments, concatenating repeatedly to a <code>numpy</code> array is a bad idea, as you will have to reallocate memory a lot. As you already know the final size of your result array, you could simply allocate it in the begin and then just iterate over the columns:</p> <pre><code>matrix = np.empty((3, n), dtype=np.int) for i in range(n): matrix[:, i] = np.random.choice(10, size=3, replace=False) </code></pre> <p>At least on my machine, this is already 6 times faster, than your version.</p>
0
2016-08-26T14:41:36Z
[ "python", "numpy" ]
Generating 2d numpy arrays from random columns
39,168,050
<p>I need to generate an 3xn matrix having random columns ensuring that each column does not contain the same number more than once. I am currently using the below code:</p> <pre><code>n=10 set = np.arange(0, 10) matrix = np.random.choice(set, size=3, replace=False)[:, None] for i in range(n): column = np.random.choice(set, size=3, replace=False)[:, None] matrix = np.concatenate((matrix, column),axis=1) print matrix </code></pre> <p>which gives the output I expected:</p> <pre><code>[[2 1 7 2 1 9 7 4 5 2 7] [4 6 3 5 9 8 1 3 8 4 0] [3 5 0 0 4 5 4 0 2 5 3]] </code></pre> <p>However, it seems that the code does not work fast enough. I am aware that implementing the for loop using cython might help, but I want to know that is there any more performant way to write this code solely in python.</p>
0
2016-08-26T13:56:21Z
39,173,381
<p>You can speed it up further with Python's random module (probably due to this <a href="https://github.com/numpy/numpy/issues/2764" rel="nofollow">issue</a>):</p> <pre><code>import random np.array([random.sample(range(10), 3) for _ in range(n)]).T </code></pre> <hr> <pre><code>n = 10**6 %timeit t = np.array([random.sample(range(10), 3) for _ in range(n)]).T 1 loop, best of 3: 6.25 s per loop %%timeit matrix = np.empty((3, n), dtype=np.int) for i in range(n): matrix[:, i] = np.random.choice(10, size=3, replace=False) 1 loop, best of 3: 19.3 s per loop </code></pre>
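Whichever version you use, the "no repeated number within a column" requirement is easy to sanity-check, since <code>random.sample</code> draws without replacement. A quick stdlib check (no numpy needed):

```python
import random

random.seed(0)  # reproducible for the check
n = 1000
columns = [random.sample(range(10), 3) for _ in range(n)]

# every column must contain three distinct values
ok = all(len(set(col)) == 3 for col in columns)
print(ok)  # True
```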
0
2016-08-26T19:21:25Z
[ "python", "numpy" ]
Python regex parsing of syslog
39,168,177
<p>I have a syslog file with this format.</p> <pre><code>Mar 7 13:44:55 host.domain.example.net/10.10.10.10 Application: Info: MODULE: Startup MESSAGE: Application Version: 8.44.0 Mar 7 13:44:55 host.domain.example.net/10.10.10.10 Application: Info: MODULE: Startup MESSAGE: Run on system: host Mar 7 13:44:55 host.domain.example.net/10.10.10.10 Application: Info: MODULE: Startup MESSAGE: Running as user: SYSTEM Mar 7 13:44:55 host.domain.example.net/10.10.10.10 Application: Info: MODULE: Startup MESSAGE: User has admin rights: yes Mar 7 13:44:55 host.domain.example.net/10.10.10.10 Application: Info: MODULE: Startup MESSAGE: Start Time: 2016-03-07 13:44:55 Mar 7 13:44:55 host.domain.example.net/10.10.10.10 Application: Info: MODULE: Startup MESSAGE: IP Address: 10.10.10.10 Mar 7 13:44:55 host.domain.example.net/10.10.10.10 Application: Info: MODULE: Startup MESSAGE: CPU Count: 1 Mar 7 13:44:55 host.domain.example.net/10.10.10.10 Application: Info: MODULE: Startup MESSAGE: System Type: Server Mar 7 13:44:55 host.domain.example.net/10.10.10.10 Application: Info: MODULE: Startup MESSAGE: System Uptime: 18.10 days Mar 7 13:44:55 host.domain.example.net/10.10.10.10 Application: MODULE: InitHead MESSAGE: =&gt; Reading signature and hash files ... Mar 7 13:44:55 host.domain.example.net/10.10.10.10 Application: Notice: MODULE: Init MESSAGE: file-type-signatures.cfg initialized with 80 values. Mar 7 13:44:56 host.domain.example.net/10.10.10.10 Application: Notice: MODULE: Init MESSAGE: signatures/filename-characteristics.dat initialized with 2778 values. Mar 7 13:44:56 host.domain.example.net/10.10.10.10 Application: Notice: MODULE: Init MESSAGE: signatures/keywords.dat initialized with 63 values. Some logs ... 
Mar 7 17:42:08 host.domain.example.net/10.10.10.10 Application: Results: MODULE: Report MESSAGE: Results: 0 Alarms, 0 Warnings, 131 Notices, 2 Errors Mar 7 17:42:08 host.domain.example.net/10.10.10.10 Application: End: MODULE: Report MESSAGE: Begin Time: 2016-03-07 13:44:55 Mar 7 17:42:08 host.domain.example.net/10.10.10.10 Application: End: MODULE: Report MESSAGE: End Time: 2016-03-07 17:42:07 Mar 7 17:42:08 host.domain.example.net/10.10.10.10 Application: End: MODULE: Report MESSAGE: Scan took 3 hours 57 mins 11 secs </code></pre> <p>How can I extract the "Application Version", "Run on system", "User has admin rights", "Start Time", "IP Address", "CPU Count", "System Type", "System Uptime", "End Time", and the counts of "Alarms", "Warnings", "Notices", and "Errors" using Python?</p> <p>I am new to Python, so I don't really know how to do it, but I managed to write a function named finder():</p> <pre><code>def finder(fname,str): with open(fname, "r") as hand: for line in hand: line = line.rstrip() if re.search(str, line): return line </code></pre> <p>To get the line with the IP address I call it with</p> <pre><code> finder("file path","MESSAGE: IP Address") </code></pre> <p>This returns the full line; I need help extracting only the IP address part, and the rest of the information from the other lines as well.</p>
-4
2016-08-26T14:02:47Z
39,182,376
<p>Please check the links below before going through the code; they will help greatly.</p> <ol> <li><a href="https://pymotw.com/2/re/" rel="nofollow">re module</a> - The module used. The link gives a great explanation along with examples.</li> <li><a href="http://www.pyregex.com/" rel="nofollow">Python Regex Tester</a> - Here you can test your regex and the regex-related functions available in Python. I used it to test the regex below.</li> </ol> <p><strong>Code with comments inline</strong></p> <pre><code>import re fo = open("out.txt", "r") #The information we need to collect. info_list =["Application Version", "Run on system", "User has admin rights", "Start Time", "IP Address", "CPU Count", "System Type", "System Uptime", "End Time", "Results","Begin Time"] for line in fo: for srch_pat in info_list: #First will search if the information we need is present in line or not. if srch_pat in line: #This will get the exact information. For e.g, version number in case of Application Version regex = re.compile(r'MESSAGE:\s+%s:\s+(.*)'%srch_pat) m = regex.search(line) if "Results" in srch_pat: #For result, this regex will get the required info result_regex = re.search(r'(\d+)\s+Alarms,\s+(\d+)\s+Warnings,\s+(\d+)\s+Notices,\s+(\d+)\s+Errors',m.group(1)) print 'Alarms - ',result_regex.group(1) print 'Warnings - ',result_regex.group(2) print 'Notices - ',result_regex.group(3) print 'Errors - ',result_regex.group(4) else: print srch_pat,'-',m.group(1) fo.close() </code></pre> <p><strong>Output</strong></p> <pre><code>C:\Users\dinesh_pundkar\Desktop&gt;python a.py Application Version - 8.44.0 Run on system - host User has admin rights - yes Start Time - 2016-03-07 13:44:55 IP Address - 10.10.10.10 CPU Count - 1 System Type - Server System Uptime - 18.10 days Alarms - 0 Warnings - 0 Notices - 131 Errors - 2 Begin Time - 2016-03-07 13:44:55 End Time - 2016-03-07 17:42:07 </code></pre>
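A Python 3 variant of the same idea, collecting everything into a dict in one pass. This is illustrative: a few sample lines are inlined in place of the file, and the variable names are made up.

```python
import re

log = """Mar  7 13:44:55 host Application: Info: MODULE: Startup MESSAGE: Application Version: 8.44.0
Mar  7 13:44:55 host Application: Info: MODULE: Startup MESSAGE: IP Address: 10.10.10.10
Mar  7 17:42:08 host Application: Results: MODULE: Report MESSAGE: Results: 0 Alarms, 0 Warnings, 131 Notices, 2 Errors"""

info = {}
for line in log.splitlines():
    # generic "MESSAGE: <field>: <value>" lines
    m = re.search(r'MESSAGE:\s+([^:]+):\s+(.*)', line)
    if m and m.group(1) != 'Results':
        info[m.group(1)] = m.group(2).strip()
    # the summary line with the four counters
    r = re.search(r'(\d+) Alarms, (\d+) Warnings, (\d+) Notices, (\d+) Errors', line)
    if r:
        info.update(zip(('Alarms', 'Warnings', 'Notices', 'Errors'),
                        (int(n) for n in r.groups())))

print(info['IP Address'])  # 10.10.10.10
print(info['Errors'])      # 2
```

With a real file, replace the `log.splitlines()` loop with `for line in open("out.txt"):`.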
0
2016-08-27T14:45:54Z
[ "python", "regex", "syslog" ]
Infinite recursion on adding days to a SimpleDate python object created from UTC
39,168,209
<p>I am using the python library simple-date. I created a SimpleDate object by initializing it with a string representing a UTC date. When I try to add days to it, using timedelta, it seems to work fine, but then when I try to print it, it recurses infinitely. I inspected the object resulting from the addition with p in the debugger and it displays nothing. The type is SimpleDate but it seems empty somehow. If I don't use a UTC string, it works fine.</p> <p>Am I doing something wrong?</p> <p>My code:</p> <pre><code>from simpledate import SimpleDate from datetime import timedelta # This works day = '2016-06-01 00:00:00' later = SimpleDate(day) + timedelta(days=10) print(later) # The print statement will cause infinite recursion day = '2016-06-01 00:00:00' later = SimpleDate(day, tz='UTC') + timedelta(days=10) print(later) # The print statement will cause infinite recursion day = '2016-06-01 00:00:00UTC' later = SimpleDate(day) + timedelta(days=10) print(later) </code></pre>
1
2016-08-26T14:04:30Z
39,260,310
<p>So, there were two issues here.</p> <p>The first is easy to explain. Generating a message for a certain error caused the same error it was trying to report. That caused a new message to be generated, which caused a new error, which ... eventually exhausted the stack. That is now fixed.</p> <p>The second is harder to explain, because timezones are complicated. So I will start by giving an example that is easier to understand. Consider this date in PDT (Pacific Daylight Time):</p> <pre> >>> SimpleDate('2016-08-28', tz='PDT') SimpleDate('2016-08-28') </pre> <p>if we add 6 months to that, we will be in the middle of winter. PDT doesn't even <strong>exist</strong> then (it's winter)! So we get an error:</p> <pre> >>> SimpleDate('2016-08-28', tz='PDT') + timedelta(days=180) simpledate.SingleInstantTzError: Attempted to use PDT, defined only for 2016-08-28 07:00:00+00:00 </pre> <p>Now you could argue that SimpleDate should be smart enough to know when PDT ends. But it isn't (and afaik it just doesn't have the data available, but I may be wrong). Instead, SimpleDate <strong>refuses to modify dates associated with timezones that may have limited validity</strong>. That's what "single instant" means.</p> <p>For more on this see <a href="https://github.com/andrewcooke/simple-date#why-did-i-get-the-error-singleinstanttzerror-" rel="nofollow">the documentation</a>.</p> <p>But in this case, the timezone was UTC! We know that is always valid. So I have added a special case that avoids this restriction when parsing UTC:</p> <pre> >>> SimpleDate('2016-08-28', tz='UTC') + timedelta(days=180) SimpleDate('2017-02-24', tz='UTC') </pre> <p>This is now in PyPI as release 0.5.0. Sorry for the delay - some tests were failing and I found a new bug, so I needed to do some extra work.</p>
1
2016-08-31T23:01:25Z
[ "python", "recursion", "timedelta" ]
Sybase numeric datatype and Python
39,168,251
<p>I have encountered a problem that I cannot figure out.</p> <p>I'm working on an application written in Python with a Sybase ASE database, using sybpydb to communicate with the database. Now I need to update a row where one of the columns in the WHERE clause is of the numeric(10) data type. When selecting the row, Python treats the data as a float; no problem there. But when I try to update the row using the numeric value I just got from the select, I get an "Invalid data type" error.</p> <p>My first thought was to try to convert the float to an integer, but that still gives the same error.</p>
0
2016-08-26T14:06:32Z
39,202,310
<p>You need to capture the actual SQL query text which is sent to the ASE server before conclusions can be drawn. </p>
0
2016-08-29T09:04:05Z
[ "python", "sybase-ase" ]
Python encode an image in base 64 after a html upload
39,168,318
<p>I have an input for image uploads on my page. When you click on "send image", the POST request is received by my back-end.</p> <p><strong>I don't, and I don't want to, save the image.</strong></p> <p>What I do from the back-end is: I send the image to an API which returns the image's tags, and then I display the tags and the uploaded image itself in my HTML page.</p> <pre><code>if request.method == "POST": form = ImageForm(request.POST, request.FILES) if form.is_valid(): imageUploaded = request.FILES['image_file'] try: c = Client(cId, sId) c.get_token() tags = c.image_lookup(imageUploaded) urlImage = base64.b64encode(imageUploaded.read()) context.update({ 'image_path': urlImage, 'tags': tags.json, 'btn_visible': True, }) except ValueError as e: logging.info(e) context.update({ 'btn_visible': False, 'error_message': 'A problem occured, we will fix it as soon as possible. We apologise for the inconvenience.' }) </code></pre> <p>in my HTML:</p> <p><code>&lt;img id="cv-image" src="data:image/png;base64,{{ image_path }}"&gt;</code></p> <p>But my problem is that my image_path is desperately empty.</p> <p>What's the problem?</p> <p>EDIT: <strong>It's super weird: if I comment out the code calling the Client class, which does a GET and a POST on an API, it works. I still don't get why, nor how to make it work.</strong></p>
0
2016-08-26T14:09:53Z
39,169,817
<p>Ok: since I'm not saving the image, once I send it to the API the object itself isn't available anymore.</p> <p>I'm not sure I understand why I could still print it, but I had to make a copy of it (this needs <code>import copy</code> at the top of the module):</p> <pre><code>import copy if form.is_valid(): imageUploaded = request.FILES['image_file'] imageCopy = copy.deepcopy(imageUploaded) try: c = Client(cId, sId) c.get_token() tags = c.image_lookup(imageUploaded) urlImage = base64.b64encode(imageCopy.read()) </code></pre> <p>And now it works perfectly fine!</p> <p>As asked, here is the code of image_lookup:</p> <pre><code>def image_lookup(self, imageUploaded, image_type=None): '''POST /v1/imageLookup''' param_files = [ ('input_image', ('image.jpg', imageUploaded, image_type or 'image/png')) ] return self.post('someAPIurl/imageLookup', files=param_files) </code></pre>
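An alternative worth knowing: Django's uploaded files are file-like objects, so instead of deep-copying you can usually rewind the stream after the API call has consumed it. A sketch with an in-memory stand-in for the upload:

```python
import base64
import io

# stand-in for request.FILES['image_file']
upload = io.BytesIO(b'\x89PNG fake image bytes')

_ = upload.read()   # the API client exhausts the stream here
upload.seek(0)      # rewind to the start before re-reading
url_image = base64.b64encode(upload.read()).decode('ascii')
print(url_image)
```

In the view, that would mean calling `imageUploaded.seek(0)` after `c.image_lookup(imageUploaded)` and before the `b64encode` call; whether it works depends on the upload handler keeping the stream seekable, so test with your setup.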
0
2016-08-26T15:30:18Z
[ "python", "django", "base64" ]
Pygame for Python 3 - "Setup.py build" command error
39,168,330
<p>I am following these directions: <a href="http://www.pygame.org/wiki/CompileUbuntu?parent=Compilation" rel="nofollow">http://www.pygame.org/wiki/CompileUbuntu?parent=Compilation</a></p> <p>The instructions give the steps for installing Pygame for Python 3 on Ubuntu.</p> <p>I am having no problems with it until I reach the <code>python3 setup.py build</code> step. This is what the command outputs:</p> <pre><code>Traceback (most recent call last): File "setup.py", line 109, in &lt;module&gt; from setuptools import setup, find_packages ImportError: No module named 'setuptools' </code></pre> <p>If I simply run <code>import pygame</code> in both Python 2 and Python 3, it reports that there is no module called pygame.</p> <p>Is there anything special that needs to be done? Thanks!</p> <p><strong>EDIT:</strong> Followed @docmarvin 's directions and installed the module setuptools. Still the same error.</p>
0
2016-08-26T14:10:50Z
39,168,546
<p>Try installing setuptools, e.g. with pip from the command line: <code>pip install setuptools</code>. Since the build step uses <code>python3</code>, make sure it lands in the Python 3 environment, e.g. <code>pip3 install setuptools</code>.</p>
-1
2016-08-26T14:22:45Z
[ "python", "python-3.x", "pygame" ]
Pygame for Python 3 - "Setup.py build" command error
39,168,330
<p>I am following these directions: <a href="http://www.pygame.org/wiki/CompileUbuntu?parent=Compilation" rel="nofollow">http://www.pygame.org/wiki/CompileUbuntu?parent=Compilation</a></p> <p>The instructions give the steps for installing Pygame for Python 3 on Ubuntu.</p> <p>I am having no problems with it until I reach the <code>python3 setup.py build</code> step. This is what the command outputs:</p> <pre><code>Traceback (most recent call last): File "setup.py", line 109, in &lt;module&gt; from setuptools import setup, find_packages ImportError: No module named 'setuptools' </code></pre> <p>If I simply run <code>import pygame</code> in both Python 2 and Python 3, it reports that there is no module called pygame.</p> <p>Is there anything special that needs to be done? Thanks!</p> <p><strong>EDIT:</strong> Followed @docmarvin 's directions and installed the module setuptools. Still the same error.</p>
0
2016-08-26T14:10:50Z
39,170,260
<pre><code>sudo apt install python3-setuptools ^ separate from Python 2 setuptools. </code></pre> <p>Per <a href="http://stackoverflow.com/a/14426553/2877364">this answer</a>.</p>
0
2016-08-26T15:55:46Z
[ "python", "python-3.x", "pygame" ]
AWS ElasticBeanstalk update without modifing Django wsgi.conf
39,168,351
<p>I have a Django app deployed on AWS EB using autoscaling. This app uses Django REST with token authentication. In order for this to work, I have to add the following lines to the etc/httpd/conf.d/wsgi.conf file:</p> <pre><code>RewriteEngine on RewriteCond %{HTTP:Authorization} ^(.*) RewriteRule .* - [e=HTTP_AUTHORIZATION:%1] WSGIPassAuthorization On </code></pre> <p>The problem is: when AWS performs an autoscale or an Elastic Beanstalk environment upgrade, the wsgi.conf file is regenerated and the custom settings are deleted.</p> <p>How can I avoid that?</p> <p>Thanks in advance</p>
0
2016-08-26T14:12:00Z
39,193,069
<p>To avoid Elastic Beanstalk erasing your custom settings while autoscaling or re-initializing any instance in your environment, you should use your <code>.ebextensions</code> scripts to make any durable modification to your EC2 instance's config files.</p> <p>(After having tested these modifications as you did 'manually' using <code>eb ssh</code>.)</p> <p>In this case you could use, for example, a <code>sed</code> command to edit your <code>wsgi.conf</code> file.</p> <p>Add the following container_command to one of your YAML Elastic Beanstalk configuration files (e.g. <code>.ebextensions/yourapp.config</code>):</p> <p><code>03_wsgipass: command: 'sed -i -f .ebextensions/wsgi_update.sed ../wsgi.conf' </code></p> <p>It should then look like this:</p> <pre><code>container_commands: 01_migrate: command: "django-admin.py migrate --noinput" leader_only: true 02_collectstatic: command: "django-admin.py collectstatic --noinput" 03_wsgipass: command: 'sed -i -f .ebextensions/wsgi_update.sed ../wsgi.conf' </code></pre> <p>Create a new file <code>wsgi_update.sed</code> in the <code>.ebextensions</code> folder with the following content:</p> <pre><code>/&lt;Virtual/ a\ RewriteEngine On\ RewriteCond %{HTTP:Authorization} ^(.*)\ RewriteRule .* - [e=HTTP_AUTHORIZATION:%1] /&lt;\/Virtual/ a\ WSGIPassAuthorization On </code></pre> <p>This is a small <a href="https://www.gnu.org/software/sed/manual/sed.html#sed-Programs" rel="nofollow">sed program</a> that will add the Apache <code>mod_rewrite</code> rules inside your <code>&lt;VirtualHost&gt;</code> block and the <code>WSGIPassAuthorization</code> line after the closing tag <code>&lt;/VirtualHost&gt;</code> in your <code>wsgi.conf</code> file.</p> <p>It will be executed on each application deployment to any existing or new instances created by autoscaling in your environment.</p> <p>see <a
href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-container.html#create-deploy-python-custom-container" rel="nofollow">Using the AWS Elastic Beanstalk Python Platform</a> for more info</p>
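To sanity-check the sed program locally before wiring it into <code>.ebextensions</code>, you can run it against a minimal stand-in wsgi.conf (GNU sed; the file names here are just for the demo):

```shell
# minimal stand-in for /etc/httpd/conf.d/wsgi.conf
cat > wsgi.conf <<'EOF'
<VirtualHost *:80>
WSGIScriptAlias / /opt/python/current/app/app.wsgi
</VirtualHost>
EOF

# the sed program described above
cat > wsgi_update.sed <<'EOF'
/<Virtual/ a\
RewriteEngine On\
RewriteCond %{HTTP:Authorization} ^(.*)\
RewriteRule .* - [e=HTTP_AUTHORIZATION:%1]
/<\/Virtual/ a\
WSGIPassAuthorization On
EOF

sed -i -f wsgi_update.sed wsgi.conf
cat wsgi.conf
```

The rewrite rules end up just after the opening `<VirtualHost>` line, and `WSGIPassAuthorization On` just after the closing tag.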
2
2016-08-28T15:52:14Z
[ "python", "django", "apache", "amazon-web-services", "mod-wsgi" ]
Python Regex stop on string
39,168,468
<p>So I want to search, using regex, for seasons which are not followed by an episode number, and I have the following list:</p> <pre><code>string = ['Fear the walking dead Season 2 Episode 9', 'Veep Season 5', 'Martine Season 2 (unknown number of episodes)', 'New Girl Season 5 Episode 16'] </code></pre> <p>I've written this code <code>re.search('.+? Season [0-9]{1,2}', string, re.I)</code> but it also matches the series with an episode number. I want it to return True only on <code>Veep Season 5</code>.</p>
1
2016-08-26T14:18:30Z
39,168,744
<p>I would recommend using <code>^</code> and <code>$</code> to match from the beginning of a line to the end. So you can change your regex to:</p> <pre><code>re.search('^(.+?Season\s[0-9]{1,2})$', string, re.I | re.M) </code></pre>
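Since the question's input is a list, the pattern is applied per element; a quick sketch (variable names illustrative):

```python
import re

titles = ['Fear the walking dead Season 2 Episode 9',
          'Veep Season 5',
          'Martine Season 2 (unknown number of episodes)',
          'New Girl Season 5 Episode 16']

# anchors: the title must END right after the season number
pattern = re.compile(r'^(.+?Season\s[0-9]{1,2})$', re.I)
no_episode = [t for t in titles if pattern.search(t)]
print(no_episode)  # ['Veep Season 5']
```

Note that with one title per list element, `re.M` isn't needed; it matters only if all titles live in one multi-line string.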
3
2016-08-26T14:32:22Z
[ "python", "regex" ]
Python Regex stop on string
39,168,468
<p>So I want to search, using regex, for seasons which are not followed by an episode number, and I have the following list:</p> <pre><code>string = ['Fear the walking dead Season 2 Episode 9', 'Veep Season 5', 'Martine Season 2 (unknown number of episodes)', 'New Girl Season 5 Episode 16'] </code></pre> <p>I've written this code <code>re.search('.+? Season [0-9]{1,2}', string, re.I)</code> but it also matches the series with an episode number. I want it to return True only on <code>Veep Season 5</code>.</p>
1
2016-08-26T14:18:30Z
39,168,934
<p>From previous experience, I'd suggest not doing this solely with regex, but I've quickly thrown together the following snippet (after which no_episode_string will contain all of the titles without episodes).</p> <p>For each season we match against <code>.*?[0-9]+(.*)</code>, which simply grabs everything up to and including the first number we encounter, and then takes the rest of the string, which will either be empty (if there is no episode number) or non-empty (if there is one).</p> <p>So we just check whether it is empty, and if it is, we add the whole thing to no_episode_string.</p> <pre><code>import re string = ['Fear the walking dead Season 2 Episode 9', 'Veep Season 5', 'Martine Season 2 (unknown number of episodes)', 'New Girl Season 5 Episode 16'] no_episode_string = [] for season in string: m = re.search('.*?[0-9]+(.*)', season) if m.group(1) == "": no_episode_string.append(m.group(0)) </code></pre>
2
2016-08-26T14:41:40Z
[ "python", "regex" ]
How can I generate a list and flatten it the same time in python?
39,168,499
<p>This is more of a refactoring question, as the code works as is. But since I am still learning <code>Python</code>, I thought there would be a better way to do this, and I spent a few hours now digging into the other possibilities, but can't get anywhere.</p> <p>So I have the following statement:</p> <p><code>numbers = [re.split(' ?- ?', ticket.text.strip()) for ticket in tickets]</code></p> <p>which obviously generates a list of lists. However, I want to have just a single list of the numbers taken out from that regex.</p> <p>So this is the second line of code that flattens the above list (I found this solution here, on <code>StackOverflow</code> btw):</p> <p><code>flat = [item for setlist in numbers for item in setlist]</code></p> <p>Main thing I am trying to achieve is to have this on 1 single line. Otherwise, I could of course have a normal <code>for .. in</code> loop, that would append each number to numbers list, but I like keeping it on 1 line. </p> <p>If this is the best it can get, I would also love to know that please.. :)</p>
-2
2016-08-26T14:19:49Z
39,168,544
<p>Just substitute the first expression for <code>numbers</code> in the second expression:</p> <pre><code>flat = [item for setlist in [re.split(' ?- ?', ticket.text.strip()) for ticket in tickets] for item in setlist] </code></pre>
-2
2016-08-26T14:22:41Z
[ "python" ]
How can I generate a list and flatten it the same time in python?
39,168,499
<p>This is more of a refactoring question, as the code works as is. But since I am still learning <code>Python</code>, I thought there would be a better way to do this, and I spent a few hours now digging into the other possibilities, but can't get anywhere.</p> <p>So I have the following statement:</p> <p><code>numbers = [re.split(' ?- ?', ticket.text.strip()) for ticket in tickets]</code></p> <p>which obviously generates a list of lists. However, I want to have just a single list of the numbers taken out from that regex.</p> <p>So this is the second line of code that flattens the above list (I found this solution here, on <code>StackOverflow</code> btw):</p> <p><code>flat = [item for setlist in numbers for item in setlist]</code></p> <p>Main thing I am trying to achieve is to have this on 1 single line. Otherwise, I could of course have a normal <code>for .. in</code> loop, that would append each number to numbers list, but I like keeping it on 1 line. </p> <p>If this is the best it can get, I would also love to know that please.. :)</p>
-2
2016-08-26T14:19:49Z
39,168,566
<p>You can achieve it using <code>chain</code> and <code>map</code> in a single line as:</p> <pre><code>from itertools import chain import re list(chain(*map(lambda x: re.split(' ?- ?', x.text.strip()), tickets))) </code></pre> <p><strong>Suggestion:</strong></p> <p>There is no need to use <code>regex</code> here, because you can achieve the same using Python's <code>split</code> function. Hence, your answer becomes:</p> <pre><code>list(chain(*map(lambda x: x.text.replace(' ', '').split('-'), tickets))) </code></pre> <p><strong>Explanation:</strong></p> <p>The <code>chain</code> function from the <a href="https://docs.python.org/2/library/itertools.html#itertools.chain" rel="nofollow"><code>itertools</code></a> library is used to unwrap the nested list. Below is a sample example:</p> <pre><code>&gt;&gt;&gt; from itertools import chain &gt;&gt;&gt; my_nested_list = [[1,2,3], [4,5,6]] &gt;&gt;&gt; list(chain(*my_nested_list)) [1, 2, 3, 4, 5, 6] </code></pre> <p>The <code>map</code> function is used to call the passed <code>function</code> (in this case a <code>lambda</code> function) on each item of the <code>list</code>:</p> <pre><code>&gt;&gt;&gt; my_nested_list = [[1,2,3], [4,5,6]] &gt;&gt;&gt; map(lambda x: x[0], my_nested_list) [1, 4] </code></pre> <p>And <code>split</code> is used to split the content of a string on a substring. For example:</p> <pre><code>&gt;&gt;&gt; x = 'hey you - i am here' &gt;&gt;&gt; x.split('-') ['hey you ', ' i am here'] # Note: unlike the regex, the spaces around the dash are kept </code></pre>
-1
2016-08-26T14:23:43Z
[ "python" ]
How can I generate a list and flatten it the same time in python?
39,168,499
<p>This is more of a refactoring question, as the code works as is. But since I am still learning <code>Python</code>, I thought there would be a better way to do this, and I spent a few hours now digging into the other possibilities, but can't get anywhere.</p> <p>So I have the following statement:</p> <p><code>numbers = [re.split(' ?- ?', ticket.text.strip()) for ticket in tickets]</code></p> <p>which obviously generates a list of lists. However, I want to have just a single list of the numbers taken out from that regex.</p> <p>So this is the second line of code that flattens the above list (I found this solution here, on <code>StackOverflow</code> btw):</p> <p><code>flat = [item for setlist in numbers for item in setlist]</code></p> <p>Main thing I am trying to achieve is to have this on 1 single line. Otherwise, I could of course have a normal <code>for .. in</code> loop, that would append each number to numbers list, but I like keeping it on 1 line. </p> <p>If this is the best it can get, I would also love to know that please.. :)</p>
-2
2016-08-26T14:19:49Z
39,168,595
<pre><code>sum([re.split(' ?- ?', ticket.text.strip()) for ticket in tickets], []) </code></pre>
-1
2016-08-26T14:24:44Z
[ "python" ]
How can I generate a list and flatten it the same time in python?
39,168,499
<p>This is more of a refactoring question, as the code works as is. But since I am still learning <code>Python</code>, I thought there would be a better way to do this, and I spent a few hours now digging into the other possibilities, but can't get anywhere.</p> <p>So I have the following statement:</p> <p><code>numbers = [re.split(' ?- ?', ticket.text.strip()) for ticket in tickets]</code></p> <p>which obviously generates a list of lists. However, I want to have just a single list of the numbers taken out from that regex.</p> <p>So this is the second line of code that flattens the above list (I found this solution here, on <code>StackOverflow</code> btw):</p> <p><code>flat = [item for setlist in numbers for item in setlist]</code></p> <p>Main thing I am trying to achieve is to have this on 1 single line. Otherwise, I could of course have a normal <code>for .. in</code> loop, that would append each number to numbers list, but I like keeping it on 1 line. </p> <p>If this is the best it can get, I would also love to know that please.. :)</p>
-2
2016-08-26T14:19:49Z
39,168,608
<p>A better idea is to add another loop over <code>re.split(' ?- ?', ticket.text.strip())</code> in the list comprehension:</p> <pre><code>flat = [x for ticket in tickets for x in re.split(' ?- ?', ticket.text.strip())] </code></pre> <p>It's also more efficient and cleaner.</p> <p>By the way, you should use string methods instead of regex:</p> <pre><code>flat = [x.strip() for ticket in tickets for x in ticket.split('-')] </code></pre> <p>If you need to convert <code>x</code> to <code>int</code>, you may drop <code>strip()</code>, since <code>int</code> ignores leading and trailing whitespace.</p> <pre><code>flat = [int(x) for ticket in tickets for x in ticket.split('-')] </code></pre>
1
2016-08-26T14:25:23Z
[ "python" ]
How can I generate a list and flatten it the same time in python?
39,168,499
<p>This is more of a refactoring question, as the code works as is. But since I am still learning <code>Python</code>, I thought there would be a better way to do this, and I spent a few hours now digging into the other possibilities, but can't get anywhere.</p> <p>So I have the following statement:</p> <p><code>numbers = [re.split(' ?- ?', ticket.text.strip()) for ticket in tickets]</code></p> <p>which obviously generates a list of lists. However, I want to have just a single list of the numbers taken out from that regex.</p> <p>So this is the second line of code that flattens the above list (I found this solution here, on <code>StackOverflow</code> btw):</p> <p><code>flat = [item for setlist in numbers for item in setlist]</code></p> <p>Main thing I am trying to achieve is to have this on 1 single line. Otherwise, I could of course have a normal <code>for .. in</code> loop, that would append each number to numbers list, but I like keeping it on 1 line. </p> <p>If this is the best it can get, I would also love to know that please.. :)</p>
-2
2016-08-26T14:19:49Z
39,169,729
<p>Well, let's work through this one step at a time. As a set of partially-nested for-loops, your code would be:</p> <pre><code>numbers = [] for ticket in tickets: numbers.append(re.split(' ?- ?', ticket.text.strip())) flat = [] for setlist in numbers: for item in setlist: flat.append(item) </code></pre> <p>Talking through it: You have a list of tickets. Each ticket becomes one setlist when you apply the regex split to it. You then want to grab all the items in the setlist and put them in a single list. You don't actually need to have a list of all the setlists (what you called <code>numbers</code>) at any point - that's just an intermediate stage.</p> <p>Refactoring this to be completely nested:</p> <pre><code>flat = [] for ticket in tickets: for item in re.split(' ?- ?', ticket.text.strip()): flat.append(item) </code></pre> <p>Now that we have a set of completely-nested for loops, it's trivial to refactor into a list or generator comprehension:</p> <pre><code>flat = [item for ticket in tickets for item in re.split(' ?- ?', ticket.text.strip())] </code></pre> <p>It's a fairly long single line, but it is a single line.</p> <p>Incidentally, a regex might not be the best way to parse out numbers like that - especially if you want the actual numbers rather than strings. <code>re.split()</code> is slower than <code>str.split()</code>, and this split is simple enough that it can be done by the latter. If the numbers are integers, try:</p> <pre><code>flat = [int(item) for ticket in tickets for item in ticket.text.split('-')] </code></pre> <p>And if they're floats, try:</p> <pre><code>flat = [float(item) for ticket in tickets for item in ticket.text.split('-')] </code></pre> <p>This works because the <code>int(str)</code> and <code>float(str)</code> builtins automatically ignore whitespace at the start and end of a given string, so you don't need a regex to conditionally match that whitespace. The resulting numbers can still be inserted into strings if you need to do that, and should also take up somewhat less space in memory. If the numbers are integers, you lose nothing. If they're floats, you lose very little - you lose the original precision of the number, and you might run into the limits on float size if you're working with really big or really tiny stuff (but that's unlikely - see <code>sys.float_info</code> for what those limits are).</p>
0
2016-08-26T15:26:06Z
[ "python" ]
index 1 / 2nd list item being skipped for no apparent reason
39,168,652
<p>I have a function that takes a list and removes (one instance of) the smallest and largest elements, then takes the average of the remaining elements. Running it doesn't bring up any errors, although on checking the results I realised they were incorrect. Here's the program:</p> <pre><code>def centered_average(x): x.sort() y = 0 for i in x: if x.index(i) == 0 or x.index(i) == (len(x)-1): print(i, "is being removed") x.remove(i) i +=1 else: y += i print(i, "is being added") return (y / len(x)) def average(x): return sum(x)/len(x) </code></pre> <p>(the print functions were put in for checking)</p> <p>On putting through a list of</p> <pre><code>x = [1,2,3,4,5] </code></pre> <p>the result was:</p> <pre><code>1 is being removed 3 is being added 4 is being added 5 is being removed 2.3333333333333335 </code></pre> <p>therefore, we can assume x[1] is not being used in the function, and I would like to know why.</p> <p>Thanks in advance for any help.</p>
1
2016-08-26T14:27:38Z
39,168,848
<p>You need to take special care when removing elements of a list that you are already iterating over in Python, as you are doing in your function. Python is looking at the indices of the list and using those, but when you remove the first element and the list is updated in place, the previous second element (<code>x[1]</code>) is now the first (<code>x[0]</code>) and will thus be skipped.</p> <p>There is an easier way to do this, though, that doesn't require such a loop and the extra conditionals, if all you want to do is take the average of the elements that aren't the first or last:</p> <pre><code>def centered_average(x): x.sort() if len(x) &lt;= 2: print "Cannot run with 2 or fewer elements..." return 0 else: return sum(x[1:-1])/(len(x)-2.00) </code></pre> <p>There are other ways to do this, but this one should be fast enough and allow for any case you provide as long as <strong>x</strong> is a list. Hope this helps.</p>
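A minimal demonstration of the index-shifting behaviour described above: removing the current element makes the loop skip its successor.

```python
x = [1, 2, 3, 4, 5]
seen = []
for i in x:
    seen.append(i)
    if i == 1:
        x.remove(i)   # shifts every later element one index left

print(seen)  # [1, 3, 4, 5] -- 2 was skipped
```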
3
2016-08-26T14:37:37Z
[ "python", "list" ]
index 1 / 2nd list item being skipped for no apparent reason
39,168,652
<p>I have a function that takes a list and removes (one instance of) the smallest and largest elements, then takes the average of the remaining elements. Running it doesnt bring up any errors, although on checking the results I realised they were incorrect. Here's the program:</p> <pre><code>def centered_average(x): x.sort() y = 0 for i in x: if x.index(i) == 0 or x.index(i) == (len(x)-1): print(i, "is being removed") x.remove(i) i +=1 else: y += i print(i, "is being added") return (y / len(x)) def average(x): return sum(x)/len(x) </code></pre> <p>(the print functions were put in for checking)</p> <p>On putting through a list of</p> <pre><code>x = [1,2,3,4,5] </code></pre> <p>the result was:</p> <pre><code>1 is being removed 3 is being added 4 is being added 5 is being removed 2.3333333333333335 </code></pre> <p>therefore, we can assume x[1] is not being used in the function, and I would like to know why.</p> <p>Thanks in advance for any help.</p>
1
2016-08-26T14:27:38Z
39,168,984
<p>The issue comes from the <code>for i in x:</code>. When you remove the first element, the second element (2) becomes the first element of the list. This means that on the next iteration, you will look for the second element, and find 3, because the list is now [2, 3, 4, 5].</p> <p>Instead, you could start by removing the first and last element with <code>x=x[1:-1]</code>. If you've never seen the <code>[i:-j]</code> syntax before, this tells Python to return a list starting at index i, and ending j indices from the end. In other words, this will produce [2, 3, 4] in your example. Afterwards, you could return <code>sum(x)/len(x)</code>.</p> <p>Note that neither this code, or your original approach, will work on lists of less than 3 elements. When dividing by <code>len(x)</code> in either solution, you will end up dividing by 0.</p>
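Putting that together (a sketch; like the loop version, it assumes at least three elements):

```python
def centered_average(x):
    trimmed = sorted(x)[1:-1]             # drop one smallest and one largest
    return sum(trimmed) / float(len(trimmed))

print(centered_average([1, 2, 3, 4, 5]))  # 3.0
```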
1
2016-08-26T14:44:10Z
[ "python", "list" ]
SQL double join in python
39,168,703
<p>I have two tables, a and b, each with many rows. I have an SQL query saying</p> <pre><code>SELECT * FROM a INNER JOIN b ON (a.names = b.names) and (a.age = b.age) </code></pre> <p>I'm trying to do this in Python using the .merge() function, by making a mid-table from the first join and then joining that table on the next condition, but Python pulls up an error:</p> <pre><code>join3 = a.merge(b,how='inner',left_on = 'name', right_on = 'name') join4 = join3.merge(b,how='inner',left_on='age',right_on='age') </code></pre> <p>This gives a memory error. I've tried to replicate this using various <code>&amp;</code> methods. I've also tried this:</p> <pre><code>merge = a.merge(b[b.age==a.age],left_on= 'name', right_on='name') </code></pre> <p>I'm at a loss as to what to do.</p>
1
2016-08-26T14:30:08Z
39,169,705
<pre><code>a.merge(b, how='inner', on=['name','age']) </code></pre> <p>And if, say, your age column in b had a different name, say years (which mine did), I did this:</p> <pre><code>b['age'] = b['years'] </code></pre> <p>and then I could use the above. I could also have renamed that column, but I wanted to keep all the original columns there.</p>
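A self-contained illustration with made-up frames; only rows that match on both keys survive the inner join:

```python
import pandas as pd

a = pd.DataFrame({'name': ['ann', 'bob'],
                  'age':  [30, 40],
                  'city': ['x', 'y']})
b = pd.DataFrame({'name': ['ann', 'bob'],
                  'age':  [30, 99],
                  'job':  ['dev', 'ops']})

# bob's ages differ (40 vs 99), so only ann's row matches on both keys
merged = a.merge(b, how='inner', on=['name', 'age'])
print(merged)
```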
0
2016-08-26T15:24:53Z
[ "python", "mysql", "join" ]
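The accepted fix is the single pandas call above (`a.merge(b, how='inner', on=['name','age'])`). As a library-free illustration of what that two-key inner join computes, here is a plain-Python sketch with made-up sample rows:

```python
a = [{"name": "ann", "age": 30}, {"name": "bob", "age": 25}]
b = [{"name": "ann", "age": 30}, {"name": "bob", "age": 40}]

# Keep only the row pairs where both join keys match, mirroring
# SQL's INNER JOIN ... ON a.name = b.name AND a.age = b.age.
joined = [
    {**ra, **rb}
    for ra in a
    for rb in b
    if ra["name"] == rb["name"] and ra["age"] == rb["age"]
]
print(joined)  # only the ("ann", 30) pair matches on both keys
```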
Where to catch a KeyboardInterrupt in unittests?
39,168,757
<p>I'm UI testing on mobile devices using an Appium server. I want to be able to cancel the testing process while developing the tests. Currently, when I <code>CTRL-C</code> out of the process (the python unittest), I have to restart the Appium server, since the session was not shut down properly (this would have been done in the <code>tearDown()</code> method of the test, but since I press CTRL-C, that won't be executed). Instead I want the <code>tearDown()</code> to fire every time the test gets canceled by a <code>KeyboardInterrupt</code>.</p> <p>My question: Where do I put the try-catch block to achieve this? Is there a best practice for handling this in Python unittests? I need to access a class variable (<code>self.driver.quit()</code>) right after the <code>KeyboardInterrupt</code> fires. The class variable is inside the class that was put into the <code>unittest.TestSuite</code> that the <code>unittest.TextTestRunner</code> is running. </p> <pre><code>try: self.test_something() except KeyboardInterrupt: self.driver.quit() </code></pre> <p>I've looked a bit into <code>unittest.TestResult</code> and its <code>stop()</code> method but haven't found practical examples explaining its usage properly.</p>
0
2016-08-26T14:33:01Z
39,168,986
<p>You can change the default behaviour of Python regarding SIGINT to suit your needs:</p> <pre><code>import signal def my_ctrlc_handler(signum, frame): driver_class.quit() raise KeyboardInterrupt signal.signal(signal.SIGINT, my_ctrlc_handler) </code></pre> <p>(Here <code>driver_class</code> stands for whatever object holds your Appium driver session; naming the handler's first parameter <code>signum</code> avoids shadowing the <code>signal</code> module.)</p>
2
2016-08-26T14:44:17Z
[ "python", "unit-testing", "python-unittest" ]
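Besides the signal handler above, a `try`/`finally` around the session guarantees the cleanup runs when Ctrl-C interrupts the test. A minimal self-contained demonstration, with a stand-in for `self.driver.quit()`:

```python
cleanup_ran = []

def fake_quit():
    # Stand-in for self.driver.quit() from the question.
    cleanup_ran.append(True)

try:
    try:
        raise KeyboardInterrupt  # simulate the user pressing Ctrl-C
    finally:
        fake_quit()              # runs even though the test was aborted
except KeyboardInterrupt:
    pass

print(cleanup_ran)  # [True]
```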
'[08001][TPT] [ODBC SQL Server Wire Protocol driver] Invalid connection Data
39,168,850
<p>I have a python program that implies connecting to a teradata database. The server name is defaulted. Two people can succesfully use the python program but one person can't and gets the following error message:</p> <pre><code>'[08001][TPT] [ODBC SQL Server Wire Protocol driver] Invalid connection Data ., [TPT][ODBC SQL Server Wire Protocol driver ]Invalid attribute in connection string : DBCNAME.' </code></pre> <p>The person who gets the error message has access to that server and uses Teradata.</p> <p>Python code:</p> <pre><code>import teradata udaExec = teradata.UdaExec (appName="test", version="1.0", logConsole=False) session = udaExec.connect(method="odbc", system=servername,username=user1, password=passw) </code></pre>
0
2016-08-26T14:37:43Z
39,245,228
<p>If you check the log, you can see that you probably have more than one Teradata driver set in your ODBC configuration.</p> <p>To select the correct Teradata driver, you can add the driver property to the connect method:</p> <pre><code>session = udaExec.connect(method="odbc", system="servername", username=user1, password=passw, driver="Teradata"); </code></pre> <p>A different way to connect to Teradata is to use a DSN defined by the user in the ODBC settings:</p> <pre><code>import teradata udaExec = teradata.UdaExec (appName="test", version="1.0", logConsole=False) session = udaExec.connect(method="odbc", dsn="&lt;dsn-defined-by-user&gt;", username=user1, password=passw) </code></pre>
0
2016-08-31T09:04:51Z
[ "python", "teradata" ]
Python Regular Expression Index Numbers
39,168,866
<p>Just looking for some confirmation on this, but it appears that the index/position numbers for regular expressions do not follow the same rules used in the rest of python.</p> <p>Example:</p> <pre><code>pattern=re.compile('&lt;HTML&gt;') pattern.search("&lt;HTML&gt;") </code></pre> <p>output:</p> <pre><code>&lt;_sre.SRE_Match object; span=(0, 6), match='&lt;HTML&gt;'&gt; </code></pre> <p>Why is "span=(0, 6)"?</p> <p>In python, the string <code>"&lt;HTML&gt;"</code> is only 6 characters in length and therefore would return an index error when attempting to do something like:</p> <pre><code>"&lt;HTML&gt;"[6] File "&lt;stdin&gt;", line 1, in &lt;module&gt; IndexError: string index out of range </code></pre> <p>So I'm fairly certain the answer is that this span value for match objects is inherently different than index values for python data structures. While the span value for matched objects starts at 0 for the first character(like with all python data structures) the last character is always endpos-1.</p> <p>If anyone can confirm my assumption and maybe explain why this difference exists I would greatly appreciate it.</p>
1
2016-08-26T14:38:41Z
39,168,972
<p>A span follows the same convention as a Python slice: it is half-open, so the end index is exclusive. <code>"&lt;HTML&gt;and much more"[0:6]</code> returns <code>"&lt;HTML&gt;"</code>, and <code>"&lt;HTML&gt;"[0:6]</code> is valid even though index 6 alone would be out of range.</p>
2
2016-08-26T14:43:47Z
[ "python", "regex" ]
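A short demonstration that the span behaves exactly like a slice, so its end index never has to be a valid character index on its own:

```python
import re

m = re.search("<HTML>", "<HTML>")
start, end = m.span()
print((start, end))         # (0, 6): the end is exclusive

# The span slices the matched text back out of the original string:
print("<HTML>"[start:end])  # '<HTML>'
```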
Why do Python and wc disagree on byte count?
39,169,080
<p>Python and <code>wc</code> disagree drastically on the byte count (length) of a given string:</p> <pre><code>with open("commedia.pfc", "w") as f: t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8)) print(len(t)) f.write(t) Output : 318885 </code></pre> <hr> <pre><code>$&gt; wc commedia.pfc 2181 12282 461491 commedia.pfc </code></pre> <p>The file is mostly made of unreadable chars so I will provide an hexdump:</p> <p><a href="http://www.filedropper.com/dump_2" rel="nofollow">http://www.filedropper.com/dump_2</a></p> <p>The file is the result of a prefix free compression, if you ask I can provide the full code that generates it along with the input text.</p> <p>Why aren't both byte counts equal?</p> <hr> <p>I add the full code of the compression algorithm, it looks long but is full of documentation and tests, so should be easy to understand:</p> <pre><code>""" Implementation of prefix-free compression and decompression. """ import doctest from itertools import islice from collections import Counter import random import json def binary_strings(s): """ Given an initial list of binary strings `s`, yield all binary strings ending in one of `s` strings. &gt;&gt;&gt; take(9, binary_strings(["010", "111"])) ['010', '111', '0010', '1010', '0111', '1111', '00010', '10010', '01010'] """ yield from s while True: s = [b + x for x in s for b in "01"] yield from s def take(n, iterable): """ Return first n items of the iterable as a list. """ return list(islice(iterable, n)) def chunks(xs, n, pad='0'): """ Yield successive n-sized chunks from xs. """ for i in range(0, len(xs), n): yield xs[i:i + n] def reverse_dict(dictionary): """ &gt;&gt;&gt; sorted(reverse_dict({1:"a",2:"b"}).items()) [('a', 1), ('b', 2)] """ return {value : key for key, value in dictionary.items()} def prefix_free(generator): """ Given a `generator`, yield all the items from it that do not start with any preceding element. 
&gt;&gt;&gt; take(6, prefix_free(binary_strings(["00", "01"]))) ['00', '01', '100', '101', '1100', '1101'] """ seen = [] for x in generator: if not any(x.startswith(i) for i in seen): yield x seen.append(x) def build_translation_dict(text, starting_binary_codes=["000", "100","111"]): """ Builds a dict for `prefix_free_compression` where More common char -&gt; More short binary strings This is compression as the shorter binary strings will be seen more times than the long ones. Univocity in decoding is given by the binary_strings being prefix free. &gt;&gt;&gt; sorted(build_translation_dict("aaaaa bbbb ccc dd e", ["01", "11"]).items()) [(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')] """ binaries = sorted(list(take(len(set(text)), prefix_free(binary_strings(starting_binary_codes)))), key=len) frequencies = Counter(text) # char value tiebreaker to avoid non-determinism v alphabet = sorted(list(set(text)), key=(lambda ch: (frequencies[ch], ch)), reverse=True) return dict(zip(alphabet, binaries)) def prefix_free_compression(text, starting_binary_codes=["000", "100","111"]): """ Implements `prefix_free_compression`, simply uses the dict made with `build_translation_dict`. Returns a tuple (compressed_message, tranlation_dict) as the dict is needed for decompression. &gt;&gt;&gt; prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"])[0] '010101010100111111111001101101101001000100010011001' """ translate = build_translation_dict(text, starting_binary_codes) # print(translate) return ''.join(translate[i] for i in text), translate def prefix_free_decompression(compressed, translation_dict): """ Decompresses a prefix free `compressed` message in the form of a string composed only of '0' and '1'. Being the binary codes prefix free, the decompression is allowed to take the earliest match it finds. 
&gt;&gt;&gt; message, d = prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"]) &gt;&gt;&gt; message '010101010100111111111001101101101001000100010011001' &gt;&gt;&gt; sorted(d.items()) [(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')] &gt;&gt;&gt; ''.join(prefix_free_decompression(message, d)) 'aaaaa bbbb ccc dd e' """ decoding_translate = reverse_dict(translation_dict) # print(decoding_translate) word = '' for bit in compressed: # print(word, "-", bit) if word in decoding_translate: yield decoding_translate[word] word = '' word += bit yield decoding_translate[word] if __name__ == "__main__": doctest.testmod() with open("commedia.txt") as f: text = f.read() compressed, d = prefix_free_compression(text) with open("commedia.pfc", "w") as f: t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8)) print(len(t)) f.write(t) with open("commedia.pfcd", "w") as f: f.write(json.dumps(d)) # dividing by 8 goes from bit length to byte length print("Compressed / uncompressed ratio is {}".format((len(compressed)//8) / len(text))) original = ''.join(prefix_free_decompression(compressed, d)) assert original == text </code></pre> <p><code>commedia.txt</code> is filedropper.com/commedia</p>
0
2016-08-26T14:49:06Z
39,169,892
<p>You are using Python3 and an <code>str</code> object - that means the count you see in <code>len(t)</code> is the number of <em>characters</em> in the string. Now, characters are not bytes - <a href="http://www.joelonsoftware.com/articles/Unicode.html" rel="nofollow">and it has been so since the 90's</a>. </p> <p>Since you did not declare an explicit text encoding, the file writing encodes your text using the system default encoding - which on Linux or Mac OS X will be utf-8 - an encoding in which any character that falls outside the ASCII range (ord(ch) > 127) uses more than one byte on disk.</p> <p>So, your program is basically wrong. First, decide if you are dealing with <em>text</em> or <em>bytes</em>. If you are dealing with bytes, open the file for writing in binary mode (<code>wb</code>, not <code>w</code>) and change this line:</p> <pre><code>t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8)) </code></pre> <p>to</p> <pre><code>t = bytes(int(b, base=2) for b in chunks(compressed, 8)) </code></pre> <p>That way it is clear that you are working with the bytes themselves, and not mangling characters and bytes. </p> <p>Of course there is an ugly workaround to do a "transparent encoding" of the text you had to a bytes object (if your original string had all character codepoints in the 0-255 range, that is): you could encode your previous <code>t</code> with <code>latin1</code> encoding before writing it to a file. But that would have been just wrong semantically. </p> <p>You can also experiment with Python's little-known <code>bytearray</code> object: it gives one the ability to deal with elements that are 8-bit numbers, and has the convenience of being mutable and extendable (just like a C "string" that has enough memory space preallocated)</p>
4
2016-08-26T15:35:14Z
[ "python", "file", "python-3.x", "byte", "wc" ]
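The character-count vs byte-count gap the answer describes can be seen without any file at all:

```python
s = "abc\u00e9"              # 4 characters; 'é' is outside ASCII
encoded = s.encode("utf-8")  # what actually lands on disk

print(len(s))        # 4 characters
print(len(encoded))  # 5 bytes: 'é' encodes to two bytes in UTF-8
```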
Why do Python and wc disagree on byte count?
39,169,080
<p>Python and <code>wc</code> disagree drastically on the byte count (length) of a given string:</p> <pre><code>with open("commedia.pfc", "w") as f: t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8)) print(len(t)) f.write(t) Output : 318885 </code></pre> <hr> <pre><code>$&gt; wc commedia.pfc 2181 12282 461491 commedia.pfc </code></pre> <p>The file is mostly made of unreadable chars so I will provide an hexdump:</p> <p><a href="http://www.filedropper.com/dump_2" rel="nofollow">http://www.filedropper.com/dump_2</a></p> <p>The file is the result of a prefix free compression, if you ask I can provide the full code that generates it along with the input text.</p> <p>Why aren't both byte counts equal?</p> <hr> <p>I add the full code of the compression algorithm, it looks long but is full of documentation and tests, so should be easy to understand:</p> <pre><code>""" Implementation of prefix-free compression and decompression. """ import doctest from itertools import islice from collections import Counter import random import json def binary_strings(s): """ Given an initial list of binary strings `s`, yield all binary strings ending in one of `s` strings. &gt;&gt;&gt; take(9, binary_strings(["010", "111"])) ['010', '111', '0010', '1010', '0111', '1111', '00010', '10010', '01010'] """ yield from s while True: s = [b + x for x in s for b in "01"] yield from s def take(n, iterable): """ Return first n items of the iterable as a list. """ return list(islice(iterable, n)) def chunks(xs, n, pad='0'): """ Yield successive n-sized chunks from xs. """ for i in range(0, len(xs), n): yield xs[i:i + n] def reverse_dict(dictionary): """ &gt;&gt;&gt; sorted(reverse_dict({1:"a",2:"b"}).items()) [('a', 1), ('b', 2)] """ return {value : key for key, value in dictionary.items()} def prefix_free(generator): """ Given a `generator`, yield all the items from it that do not start with any preceding element. 
&gt;&gt;&gt; take(6, prefix_free(binary_strings(["00", "01"]))) ['00', '01', '100', '101', '1100', '1101'] """ seen = [] for x in generator: if not any(x.startswith(i) for i in seen): yield x seen.append(x) def build_translation_dict(text, starting_binary_codes=["000", "100","111"]): """ Builds a dict for `prefix_free_compression` where More common char -&gt; More short binary strings This is compression as the shorter binary strings will be seen more times than the long ones. Univocity in decoding is given by the binary_strings being prefix free. &gt;&gt;&gt; sorted(build_translation_dict("aaaaa bbbb ccc dd e", ["01", "11"]).items()) [(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')] """ binaries = sorted(list(take(len(set(text)), prefix_free(binary_strings(starting_binary_codes)))), key=len) frequencies = Counter(text) # char value tiebreaker to avoid non-determinism v alphabet = sorted(list(set(text)), key=(lambda ch: (frequencies[ch], ch)), reverse=True) return dict(zip(alphabet, binaries)) def prefix_free_compression(text, starting_binary_codes=["000", "100","111"]): """ Implements `prefix_free_compression`, simply uses the dict made with `build_translation_dict`. Returns a tuple (compressed_message, tranlation_dict) as the dict is needed for decompression. &gt;&gt;&gt; prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"])[0] '010101010100111111111001101101101001000100010011001' """ translate = build_translation_dict(text, starting_binary_codes) # print(translate) return ''.join(translate[i] for i in text), translate def prefix_free_decompression(compressed, translation_dict): """ Decompresses a prefix free `compressed` message in the form of a string composed only of '0' and '1'. Being the binary codes prefix free, the decompression is allowed to take the earliest match it finds. 
&gt;&gt;&gt; message, d = prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"]) &gt;&gt;&gt; message '010101010100111111111001101101101001000100010011001' &gt;&gt;&gt; sorted(d.items()) [(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')] &gt;&gt;&gt; ''.join(prefix_free_decompression(message, d)) 'aaaaa bbbb ccc dd e' """ decoding_translate = reverse_dict(translation_dict) # print(decoding_translate) word = '' for bit in compressed: # print(word, "-", bit) if word in decoding_translate: yield decoding_translate[word] word = '' word += bit yield decoding_translate[word] if __name__ == "__main__": doctest.testmod() with open("commedia.txt") as f: text = f.read() compressed, d = prefix_free_compression(text) with open("commedia.pfc", "w") as f: t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8)) print(len(t)) f.write(t) with open("commedia.pfcd", "w") as f: f.write(json.dumps(d)) # dividing by 8 goes from bit length to byte length print("Compressed / uncompressed ratio is {}".format((len(compressed)//8) / len(text))) original = ''.join(prefix_free_decompression(compressed, d)) assert original == text </code></pre> <p><code>commedia.txt</code> is filedropper.com/commedia</p>
0
2016-08-26T14:49:06Z
39,169,943
<p>@jsbueno is right. Moreover, if you open the resulting file in <em>binary read</em> mode, you get the good result:</p> <pre><code>&gt;&gt;&gt; with open('commedia.pfc', 'rb') as f: &gt;&gt;&gt; print(len(f.read())) 461491 </code></pre>
0
2016-08-26T15:38:17Z
[ "python", "file", "python-3.x", "byte", "wc" ]
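Putting both answers together: write the text with an explicit encoding, then reopen it in binary mode, and the byte count matches what `wc` reports rather than `len()` of the string. A small self-contained check using a temporary file:

```python
import os
import tempfile

text = "caf\u00e9"  # 4 characters, 5 bytes in UTF-8
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w", encoding="utf-8") as f:
    f.write(text)

# Binary mode counts raw bytes, which is what wc counts too.
with open(path, "rb") as f:
    data = f.read()

print(len(text), len(data))  # 4 5
```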
Running a part of python code in background
39,169,120
<p>I am writing a python script which provides two options to the user. In one option, the user input is used to run a function in background. In other, the user input is used to run a function in foreground. How can I achieve both? I don't want to use the "nohup" command to run the full script in background. I only want a certain function to run in background.</p> <p>I also want the background process to stop on user's will.</p> <p>Here is a small sample of what I want to do:</p> <pre><code>def display(): cnt = 1 a = [] if len(live_matches) == 0: print "sorry, no live matches currently" else: for match in live_matches: print str(cnt) + "." + match['mchdesc'] + "," + match['mnum'] a[cnt] = match cnt = cnt + 1 choice = raw_input("Enter the match numbers for live updates separated by spaces") for c in choice.split(' '): update_matches.append(a[int(c)]) if len(update_matches) &gt; 0: #call some function and run in background cnt = 1 for match in completed_matches: print str(cnt) + "." + match['mchdesc'] + "," + match['mnum'] cnt = cnt + 1 choice = raw_input("enter the match number for scorecard") #call some function again but run it in foreground </code></pre>
0
2016-08-26T14:51:49Z
39,172,011
<p>1. <code>threading.Thread</code> may help you, and <code>threading.Lock()</code> can protect your shared data. </p> <p>One idea: keep a <code>global</code> input flag that both threads check to decide which one may lock the output data; the main thread prints it, and the user's input can also end the background threads (for example, by breaking their <code>while</code> loops).</p> <p>2. await/async is a good way to do asynchronous IO; you can use the <code>send</code> method to run a <em>native coroutine</em> until the next <code>yield</code>. Maybe it can do this too.</p> <p>Hope this helps you.</p>
1
2016-08-26T17:46:35Z
[ "python", "nohup" ]
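A minimal sketch of the threading suggestion: run the updater in a background thread and let the user's choice stop it via an `Event`. The function names here are placeholders, not part of any real score API:

```python
import threading
import time

stop_event = threading.Event()

def background_updates():
    # Placeholder for the live-match updater; it polls until the
    # main thread signals it to stop.
    while not stop_event.is_set():
        time.sleep(0.01)

worker = threading.Thread(target=background_updates)
worker.start()

# ... foreground work (e.g. the scorecard prompt) would go here ...

stop_event.set()  # the user chose to stop the background updates
worker.join()
print(worker.is_alive())  # False
```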
Python bokeh: Multiple color segments on same line
39,169,197
<p>I am using python 3.5 and bokeh 0.12.1, and I am trying to plot a simple line with multiple colors on separate segments. Basically I want the line to have different colors based on a column value. Here is a simplified version of my code:</p> <pre><code>import numpy as np from numpy import vectorize import pandas as pd from bokeh.plotting import figure, show, output_file def f(x): return 2 * x def color(x): if x &lt; 20: return 0 if 20 &lt;= x &lt; 60: return 1 if 60 &lt;= x &lt; 80: return 0 else: return 1 v_color = vectorize(color) x = np.arange(0, 100, 1) data = {'x': x, 'y': f(x), 'colors': v_color(x)} df = pd.DataFrame(data=data) # print(df) p = figure(title="Line example") p.line(df['x'], df['y'], legend="y=f(x)", # line_color="tomato", line_color="olivedrab", line_width=2) p.legend.location = "top_left" output_file("basic_line_test.html", title="line plot example") show(p) # open a browser </code></pre> <p>Basically the line should have one color, let's say 'olivedrab' when the column 'colors' is 0 and 'tomato' when the value is 1. How can I do that?</p>
1
2016-08-26T14:55:36Z
39,169,669
<p>As of Bokeh <code>0.12.1</code> this is not currently supported. Lines can only have one color at a time. Your next best bet is to try the <code>multi_line</code> or <code>segment</code> glyph functions, but doing that will be a bit more verbose (you will have to compute and provide the start/end points of every individual segment). </p> <p>It's possible this could be added as a feature in some future release; I'd encourage you to submit a feature request on the project GitHub <a href="https://github.com/bokeh/bokeh/issues" rel="nofollow">issue tracker</a>.</p>
0
2016-08-26T15:22:53Z
[ "python", "bokeh" ]
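The bookkeeping the answer mentions — computing per-segment start/end points for `multi_line` — boils down to splitting the series wherever the color value changes. A pure-Python sketch of that step (no bokeh required for this part):

```python
def split_runs(xs, colors):
    """Split xs into runs of constant color, overlapping by one
    point so that adjacent segments join up visually."""
    runs = []
    start = 0
    for i in range(1, len(xs)):
        if colors[i] != colors[i - 1]:
            runs.append((xs[start:i + 1], colors[start]))
            start = i
    runs.append((xs[start:], colors[start]))
    return runs

print(split_runs([0, 1, 2, 3, 4], [0, 0, 1, 1, 0]))
# [([0, 1, 2], 0), ([2, 3, 4], 1), ([4], 0)]
```

Each run's x-list (with the matching slice of y-values) would then become one entry passed to `multi_line`, colored by its run value.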
Python List Iterate Trouble
39,169,214
<p>I'm having a very hard time figuring out I'm doing wrong running my python script in Windows to get the expected result.</p> <p>I have a directory with <strong>list1.txt, list2.txt, list3.txt, list4.txt</strong>, and <strong>list5.txt</strong>. Each list contains separate line strings that are unique such as list1.txt will have <strong>item1, item2, item3, item4,</strong> and <strong>item5</strong> each as values on separate lines. Then list2.txt will have item6-item10 on separate lines and so on.</p> <p>What I need to do is to say, for each text file in this directory, list each value in list1 until done, then list each value in list2, then list3, and so on until you finish the last list.</p> <p>Here is a link to the image of my results with notes: <a href="https://i.imgur.com/YBxQUqi.png" rel="nofollow">https://i.imgur.com/YBxQUqi.png</a></p> <p>The code I have it below but the results are not what I'm expecting and I'm having an extremly hard time determining what I'm doing wrong here.</p> <pre><code>def my_range(start, end, step): while start &lt;= end: yield start start += step for x in my_range(1, 5, 1): import os rootdir = os.getcwd() fis = rootdir + "\list\list" + str(x) + ".txt" files = open(fis,'rU') lines = files.readlines() print(lines) print(fis) for line in lines: print("Item = " + line) </code></pre> <p>I need the results to read from every file in the lists.txt file and from every value in each file rather than just the last file. I think I'm not doing the for loop correctly nesting wise and I just cannot figure it out. I also tested with passing arguments to a function and defining a function to do this and I totally hosed up the script trying that.</p> <p>Please anyone help me when you can with this problem I cannot figure out and just pulling my hair out of head.</p>
1
2016-08-26T14:56:39Z
39,169,340
<p>Your second <code>for</code> loop needs to be a sub-loop of the primary one. Also, do not <code>import os</code> each time you loop through, just do it once. Your code should look like this:</p> <pre><code>def my_range(start, end, step): while start &lt;= end: yield start start += step import os for x in my_range(1, 5, 1): rootdir = os.getcwd() fis = rootdir + "\list\list" + str(x) + ".txt" files = open(fis,'rU') lines = files.readlines() print(lines) print(fis) for line in lines: print("Item = " + line) </code></pre> <p>But, I would comment that you should instead use a <code>with open(fid,'rU') as f:</code> approach as that will release the file from being locked if the code errors out or crashes. Then you could just do something like this:</p> <pre><code>def my_range(start, end, step): while start &lt;= end: yield start start += step import os for x in my_range(1, 5, 1): rootdir = os.getcwd() fis = rootdir + "\list\list" + str(x) + ".txt" with open(fis,'rU') as files: print(fis) for line in files: print("Item = " + line) </code></pre>
4
2016-08-26T15:04:19Z
[ "python", "windows" ]
Python List Iterate Trouble
39,169,214
<p>I'm having a very hard time figuring out I'm doing wrong running my python script in Windows to get the expected result.</p> <p>I have a directory with <strong>list1.txt, list2.txt, list3.txt, list4.txt</strong>, and <strong>list5.txt</strong>. Each list contains separate line strings that are unique such as list1.txt will have <strong>item1, item2, item3, item4,</strong> and <strong>item5</strong> each as values on separate lines. Then list2.txt will have item6-item10 on separate lines and so on.</p> <p>What I need to do is to say, for each text file in this directory, list each value in list1 until done, then list each value in list2, then list3, and so on until you finish the last list.</p> <p>Here is a link to the image of my results with notes: <a href="https://i.imgur.com/YBxQUqi.png" rel="nofollow">https://i.imgur.com/YBxQUqi.png</a></p> <p>The code I have it below but the results are not what I'm expecting and I'm having an extremly hard time determining what I'm doing wrong here.</p> <pre><code>def my_range(start, end, step): while start &lt;= end: yield start start += step for x in my_range(1, 5, 1): import os rootdir = os.getcwd() fis = rootdir + "\list\list" + str(x) + ".txt" files = open(fis,'rU') lines = files.readlines() print(lines) print(fis) for line in lines: print("Item = " + line) </code></pre> <p>I need the results to read from every file in the lists.txt file and from every value in each file rather than just the last file. I think I'm not doing the for loop correctly nesting wise and I just cannot figure it out. I also tested with passing arguments to a function and defining a function to do this and I totally hosed up the script trying that.</p> <p>Please anyone help me when you can with this problem I cannot figure out and just pulling my hair out of head.</p>
1
2016-08-26T14:56:39Z
39,169,365
<p>You're looping through all your files but then only displaying the results of the last file. You need to indent the second for loop:</p> <pre><code>for x in my_range(1, 5, 1): import os rootdir = os.getcwd() fis = rootdir + "\list\list" + str(x) + ".txt" files = open(fis,'rU') lines = files.readlines() print(lines) print(fis) for line in lines: #INDENT LIKE SO print("Item = " + line) </code></pre>
0
2016-08-26T15:05:37Z
[ "python", "windows" ]
Python List Iterate Trouble
39,169,214
<p>I'm having a very hard time figuring out I'm doing wrong running my python script in Windows to get the expected result.</p> <p>I have a directory with <strong>list1.txt, list2.txt, list3.txt, list4.txt</strong>, and <strong>list5.txt</strong>. Each list contains separate line strings that are unique such as list1.txt will have <strong>item1, item2, item3, item4,</strong> and <strong>item5</strong> each as values on separate lines. Then list2.txt will have item6-item10 on separate lines and so on.</p> <p>What I need to do is to say, for each text file in this directory, list each value in list1 until done, then list each value in list2, then list3, and so on until you finish the last list.</p> <p>Here is a link to the image of my results with notes: <a href="https://i.imgur.com/YBxQUqi.png" rel="nofollow">https://i.imgur.com/YBxQUqi.png</a></p> <p>The code I have it below but the results are not what I'm expecting and I'm having an extremly hard time determining what I'm doing wrong here.</p> <pre><code>def my_range(start, end, step): while start &lt;= end: yield start start += step for x in my_range(1, 5, 1): import os rootdir = os.getcwd() fis = rootdir + "\list\list" + str(x) + ".txt" files = open(fis,'rU') lines = files.readlines() print(lines) print(fis) for line in lines: print("Item = " + line) </code></pre> <p>I need the results to read from every file in the lists.txt file and from every value in each file rather than just the last file. I think I'm not doing the for loop correctly nesting wise and I just cannot figure it out. I also tested with passing arguments to a function and defining a function to do this and I totally hosed up the script trying that.</p> <p>Please anyone help me when you can with this problem I cannot figure out and just pulling my hair out of head.</p>
1
2016-08-26T14:56:39Z
39,169,399
<p>It looks like your indentation is off:</p> <pre><code>for x in my_range(1, 5, 1): import os ... for line in lines: # this should be inside the loop print("Item = " + line) </code></pre> <p>However, you are going about this in a very roundabout way; I would recommend something like this:</p> <pre><code>for root, dirs, files in os.walk(starting_dir): # iterate over directories for f in files: # iterate over files with open(os.path.join(root, f)) as in_file: # open with the full path for line in in_file: # iterate over lines print(line) # print each line (or do something else) </code></pre>
1
2016-08-26T15:07:30Z
[ "python", "windows" ]
Python List Iterate Trouble
39,169,214
<p>I'm having a very hard time figuring out I'm doing wrong running my python script in Windows to get the expected result.</p> <p>I have a directory with <strong>list1.txt, list2.txt, list3.txt, list4.txt</strong>, and <strong>list5.txt</strong>. Each list contains separate line strings that are unique such as list1.txt will have <strong>item1, item2, item3, item4,</strong> and <strong>item5</strong> each as values on separate lines. Then list2.txt will have item6-item10 on separate lines and so on.</p> <p>What I need to do is to say, for each text file in this directory, list each value in list1 until done, then list each value in list2, then list3, and so on until you finish the last list.</p> <p>Here is a link to the image of my results with notes: <a href="https://i.imgur.com/YBxQUqi.png" rel="nofollow">https://i.imgur.com/YBxQUqi.png</a></p> <p>The code I have it below but the results are not what I'm expecting and I'm having an extremly hard time determining what I'm doing wrong here.</p> <pre><code>def my_range(start, end, step): while start &lt;= end: yield start start += step for x in my_range(1, 5, 1): import os rootdir = os.getcwd() fis = rootdir + "\list\list" + str(x) + ".txt" files = open(fis,'rU') lines = files.readlines() print(lines) print(fis) for line in lines: print("Item = " + line) </code></pre> <p>I need the results to read from every file in the lists.txt file and from every value in each file rather than just the last file. I think I'm not doing the for loop correctly nesting wise and I just cannot figure it out. I also tested with passing arguments to a function and defining a function to do this and I totally hosed up the script trying that.</p> <p>Please anyone help me when you can with this problem I cannot figure out and just pulling my hair out of head.</p>
1
2016-08-26T14:56:39Z
39,169,472
<p>Use <code>os.path.join</code> to create the path to a file. I've made some improvements I'll talk about below.</p> <pre><code>#!/usr/bin/env python import os rootdir = os.getcwd() for x in range(1, 6): filename = 'list' + str(x) + '.txt' fis = os.path.join(rootdir, 'list', filename) files = open(fis,'rU') lines = files.readlines() print(lines) print(fis) for line in lines: print("Item = " + line) </code></pre> <p>I also don't see any point in making your own iterable. Why not simply use <code>range(1, 6)</code> at the beginning? Next: make names descriptive. It may not seem important now, but after a week you'll be asking yourself "What the hell did I mean by this fis?". Any IDE will make working with long names easier and, believe me, writing <code>data_file_name</code> or something akin to it is more pleasant than <code>fis</code> etc. Don't import anything in a loop; it reduces efficiency. The <code>rootdir</code> variable can also be declared once, and ideally you would open the files with a <code>with</code> statement (as in the first answer) so they are closed again.</p>
1
2016-08-26T15:11:11Z
[ "python", "windows" ]
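Pulling the suggestions from the answers together — `os.path.join` for paths and `with open(...)` for safe closing — here is a self-contained miniature of the question's setup, built in a temporary directory:

```python
import os
import tempfile

# Build a tiny stand-in for the question's list/listN.txt layout.
rootdir = tempfile.mkdtemp()
os.mkdir(os.path.join(rootdir, "list"))
for n in range(1, 3):
    with open(os.path.join(rootdir, "list", "list%d.txt" % n), "w") as f:
        f.write("item%d\nitem%d\n" % (2 * n - 1, 2 * n))

# Read every value from every file, in order.
items = []
for n in range(1, 3):
    path = os.path.join(rootdir, "list", "list%d.txt" % n)
    with open(path) as f:  # closed automatically, even on errors
        for line in f:
            items.append(line.strip())

print(items)  # ['item1', 'item2', 'item3', 'item4']
```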
Happybase filtering using rows function
39,169,216
<p>I would like to perform a <code>rows</code> query with Happybase for some known row keys and add a value filter so that only rows matching the filter are returned.</p> <p>In HBase shell you can supply a filter to a get command, like so:</p> <pre><code>get 'meta', 'someuser', {FILTER =&gt; "SingleColumnValueFilter ('cf','gender',=,'regexstring:^male$')"} </code></pre> <p>In Happybase you can add a filter to a <code>scan</code> command but I don't see the option on a <code>rows</code> query. Here is how it works for <code>scan</code>:</p> <pre><code>rows = tab.scan(filter="SingleColumnValueFilter('cf','gender',=,'regexstring:^male$')") </code></pre> <p>Is there a way to perform a filtered <code>rows</code> query (for potentially random ordered row keys) using Happybase (or any other Python HBase client library)? </p> <p>I imagined it would look like this (but there is no filter argument):</p> <pre><code>rows = tab.rows(rows=['h_key', 'a_key', 'z_key'], filter="SingleColumnValueFilter('cf','gender',=,'regexstring:^male$')") </code></pre>
1
2016-08-26T14:56:45Z
39,204,042
<p><strong>Get with a filter is equivalent to Scan with start/stop rows.</strong></p> <pre><code>rows = tab.scan(filter="SingleColumnValueFilter('cf','gender',=,'regexstring:^male$')", row_start="someuser", row_stop="someuser") </code></pre> <p>In Java, a <code>FilterList</code> combining <code>MultiRowRangeFilter</code> and <code>SingleColumnValueFilter</code> would perfectly satisfy your demand, and there is an <a href="https://github.com/sel-fish/hbase-experiments/blob/master/src/test/java/com/mogujie/mst/hbase/filters/RowFilterTest.java" rel="nofollow">example</a> of that. </p> <p>However, as <code>happybase</code> uses the HBase Thrift service, which it seems <a href="http://hbase.apache.org/0.94/book/thrift.html" rel="nofollow">doesn't support</a> <code>FilterList</code>, I think the best you can do is to call the above procedure for each key in your example.</p>
0
2016-08-29T10:30:15Z
[ "python", "hbase", "happybase" ]
In Python 3, how to swap two sub-arrays in a 2-dimensional numpy array?
39,169,268
<p>Suppose there is a 2-dimensional numpy array (or matrix in maths) named "A", and I want to swap its 1st row with another row "n". n can be any natural number. The following code does not work:</p> <pre><code>A = np.eye(3) n = 2 A[0], A[n] = A[n], A[0] print(A) </code></pre> <p>you can see it gives (I show the matrix form of A for simplicity)</p> <pre><code>A = 0, 0, 1 0, 1, 0 0, 0, 1 </code></pre> <p>But what I want is</p> <pre><code>A = 0, 0, 1 0, 1, 0 1, 0, 0 </code></pre> <p>One solution I thought about is introducing another matrix "B" which is equal to "A", but "A" and "B" are different objects. Then do this:</p> <pre><code>A = np.eye(3) B = np.eye(3) A[0], A[n] = B[n], B[0] </code></pre> <p>This gives the correct swap on A, but it needs an additional matrix "B", and I don't know if it is computationally efficient. Or maybe you have a better idea? Thanks :)</p>
0
2016-08-26T15:00:20Z
39,169,495
<p>Try this:</p> <pre><code>import numpy as np A = np.eye(3) A[[0, 2]] = A[[2, 0]] print(A) </code></pre>
1
2016-08-26T15:12:29Z
[ "python", "arrays", "numpy", "matrix" ]
In Python 3, how to swap two sub-arrays in a 2-dimensional numpy array?
39,169,268
<p>Suppose there is a 2-dimensional numpy array (or matrix in maths) named "A", and I want to swap its 1st row with another row "n". n can be any natural number. The following code does not work:</p> <pre><code>A = np.eye(3) n = 2 A[0], A[n] = A[n], A[0] print(A) </code></pre> <p>you can see it gives (I show the matrix form of A for simplicity)</p> <pre><code>A = 0, 0, 1 0, 1, 0 0, 0, 1 </code></pre> <p>But what I want is</p> <pre><code>A = 0, 0, 1 0, 1, 0 1, 0, 0 </code></pre> <p>One solution I thought about is introducing another matrix "B" which is equal to "A", but "A" and "B" are different objects. Then do this:</p> <pre><code>A = np.eye(3) B = np.eye(3) A[0], A[n] = B[n], B[0] </code></pre> <p>This gives the correct swap on A, but it needs an additional matrix "B", and I don't know if it is computationally efficient. Or maybe you have a better idea? Thanks :)</p>
0
2016-08-26T15:00:20Z
39,169,536
<p>NumPy rows are views into the array's underlying data, so the traditional swap-by-value idiom <code>a, b = b, a</code> overwrites one row before the other has been read.</p> <p>Instead, you can do the swap with a single fancy-indexed assignment. The right-hand side is copied before the assignment happens, which eliminates the overwriting problem.</p> <p><strong>Code:</strong></p> <pre><code>import numpy as np A = np.eye(3) n = 2 A[[0,n]] = A[[n,0]] print(A) </code></pre> <p><strong>Output:</strong></p> <pre><code>[[ 0. 0. 1.] [ 0. 1. 0.] [ 1. 0. 0.]] </code></pre>
3
2016-08-26T15:14:11Z
[ "python", "arrays", "numpy", "matrix" ]
In Python 3, how to swap two sub-arrays in a 2-dimensional numpy array?
39,169,268
<p>Suppose there is a 2-dimensional numpy array (or matrix in maths) named "A", and I want to swap its 1st row with another row "n". n can be any natural number. The following code does not work:</p> <pre><code>A = np.eye(3) n = 2 A[0], A[n] = A[n], A[0] print(A) </code></pre> <p>you can see it gives (I show the matrix form of A for simplicity)</p> <pre><code>A = 0, 0, 1 0, 1, 0 0, 0, 1 </code></pre> <p>But what I want is</p> <pre><code>A = 0, 0, 1 0, 1, 0 1, 0, 0 </code></pre> <p>One solution I thought about is introducing another matrix "B" which is equal to "A", but "A" and "B" are different objects. Then do this:</p> <pre><code>A = np.eye(3) B = np.eye(3) A[0], A[n] = B[n], B[0] </code></pre> <p>This gives the correct swap on A, but it needs an additional matrix "B", and I don't know if it is computationally efficient. Or maybe you have a better idea? Thanks :)</p>
0
2016-08-26T15:00:20Z
39,169,730
<p>Sub-arrays or slices of numpy arrays create views of the data rather than copies. Sometimes numpy can detect this and prevent corruption of data. However, numpy is unable to detect it when the python idiom for swapping is used, so you need to copy at least one of the views before swapping. </p> <pre><code>tmp = A[0].copy() A[0] = A[n] A[n] = tmp </code></pre> <p>Considering you are changing most of the data in the array, it may just be easier to create a new array entirely. </p> <pre><code>indices = np.arange(A.shape[0]) indices[0], indices[n] = indices[n], indices[0] # this is okay as indices is a 1-d array A = A[indices] </code></pre>
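To compare the approaches from the answers above, here is a minimal runnable check (assuming NumPy is installed; the helper name is invented for illustration):

```python
import numpy as np

def swap_rows(A, i, j):
    """Swap rows i and j of a 2-D array in place.

    Fancy indexing on the right-hand side produces a copy first,
    so this avoids the view-overwriting problem described above.
    """
    A[[i, j]] = A[[j, i]]
    return A

A = np.eye(3)
swap_rows(A, 0, 2)
print(A)
```

The same helper works for any pair of rows, and no explicit temporary array is needed because the fancy-indexed read `A[[j, i]]` materializes a copy before the assignment touches `A`.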
0
2016-08-26T15:26:06Z
[ "python", "arrays", "numpy", "matrix" ]
Collections Counter module issue with most_common class
39,169,274
<p>I have a dictionary made from a List called list3_upper, and now I am trying to use the method most_common to get only the top 10 key:values with the below code:</p> <pre><code>result3 = Counter(list3_upper).most_common(10) sort_result3 = OrderedDict(sorted(result3.items(), key=operator.itemgetter(1), reverse=True)) </code></pre> <p>I am looping this code over several Lists which are 'organized' with Counter, and some of them have more than 10 keys:values, so I want to trim the Dict.</p> <p>Ps. as you can see, besides taking the top 10 values I am also ordering them from highest to smallest, but I don't think the problem is there.</p> <p>The error is:</p> <pre><code>AttributeError: 'list' object has no attribute 'items' </code></pre> <p>Is this happening because some Lists do not have 10 keys:values?</p> <p>Thanks a lot for any input you might have.</p>
-1
2016-08-26T15:00:51Z
39,169,516
<p><a href="https://docs.python.org/2/library/collections.html#collections.Counter.most_common" rel="nofollow">As the documentation states</a>, <code>most_common</code> returns a list of the most common elements. <code>.items</code> is a <code>dict</code> method - lists don't have items. If you wanted to do something to all members of the list, you'd iterate over them:</p> <pre><code>for result in result3: o = OrderedDict(sorted(result.items(), key=operator.itemgetter(1), reverse=True)) </code></pre> <p><strong>but</strong> this won't work either - the individual members of the list are <code>tuple</code>, not <code>dict</code> - and <code>tuple</code> objects don't have an <code>items</code> method. Note also that <code>most_common</code> already returns the pairs sorted from highest to lowest count, so there is no need to sort again. Instead, just create the <code>OrderedDict</code> directly from <code>result3</code>:</p> <pre><code>from collections import Counter, OrderedDict import operator list3_upper = ['a', 'e', 'a'] result3 = Counter(list3_upper).most_common(10) result_dict = OrderedDict(result3) print(result_dict) &gt;&gt;&gt; OrderedDict([('a', 2), ('e', 1)]) </code></pre>
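A short self-contained illustration of the point above: most_common already returns (key, count) pairs sorted by descending count, so a list of those pairs can feed OrderedDict directly.

```python
from collections import Counter, OrderedDict

words = ['B', 'A', 'A', 'C', 'A', 'B']
top = Counter(words).most_common(2)   # list of (key, count) tuples, highest first
print(top)                            # [('A', 3), ('B', 2)]

ordered = OrderedDict(top)            # a list of pairs builds the mapping directly
print(list(ordered.items()))          # [('A', 3), ('B', 2)]
```

Because `OrderedDict` preserves insertion order, the highest-to-lowest ordering produced by `most_common` is kept without any extra `sorted` call.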
1
2016-08-26T15:13:08Z
[ "python" ]
PyQt4 how does model gets executed?
39,169,332
<p>I am trying to understand PyQt4 model views. I have built simple list model view. Then I used “step” variable to see how the model gets executed.</p> <p>What I can’t understand is: why every time the new loop gets executed, rowCount method gets called 5 times, and from then every 2 times? It is independent from how many items I have in the list.</p> <p>For data method it is clear; it checks every time the role state and there are 8-15 different roles. </p> <pre><code>from PyQt4 import QtGui, QtCore, uic import sys step = 0 class ModelOne(QtCore.QAbstractListModel): global step step += 1 print(step, 'init') def __init__(self, colors = [], parent = None): QtCore.QAbstractListModel.__init__(self, parent) self.__colors = colors def rowCount(self, parent): global step step += 1 print(step, 'rowCount') return len(self.__colors) def data(self, index, role): global step step += 1 print(step, 'data') if role == QtCore.Qt.DisplayRole: row = index.row() value = self.__colors[row] return value if __name__ == '__main__': app = QtGui.QApplication(sys.argv) listView = QtGui.QListView() listView.show() model = ModelOne(['black', 'white']) listView.setModel(model) sys.exit(app.exec_()) OUTPUT loop 1 1 init 2-6 rowCount (5 steps) 7-14 data (8 steps) 15 rowCount 16 rowCount 17-24 data (8 steps) 25 rowCount 26 rowCount 27-34 data (8 steps) loop 2 35-40 rowCount (5 step) 41-55 data (15 step) 56 rowCount 57 rowCount 58-72 data (15 step) </code></pre>
0
2016-08-26T15:03:48Z
39,170,884
<p>The only way to answer this question would be to get the Qt source code for <code>QAbstractItemModel</code> and <code>QAbstractListModel</code>, and create a call graph for the <code>rowCount</code> function. I imagine it would be quite extensive, because Qt will call <code>rowCount</code> every time it needs to do any kind of bounds-checking operation. Needless to say, this means there is <strong>not</strong> going to be a simple explanation for how <code>rowCount</code> will be used in any particular program.</p> <p>But in any case, I don't think tracing the execution of a model is a good way to try to understand it. Models need to be understood as <em>abstractions</em>. If you want to learn how they really work, you should read the <a href="http://doc.qt.io/qt-4.8/model-view-programming.html" rel="nofollow">Model/View Programming Overview</a> (particularly the <a href="http://doc.qt.io/qt-4.8/model-view-programming.html#model-subclassing-reference" rel="nofollow">Model Subclassing Reference</a>).</p>
0
2016-08-26T16:32:26Z
[ "python", "pyqt4", "model-view" ]
PyCharm: How do I call frameworkpython so that it will use the framework python instead of the virtualenv python?
39,169,359
<p>I'm using matplotlib library with my virtualenv. If you use it this way, matplotlib will not plot the graph because it is not the python framework, but the python as per the virtualenv. This problem has been documented on the matplotlib website: <a href="http://matplotlib.org/faq/virtualenv_faq.html" rel="nofollow">http://matplotlib.org/faq/virtualenv_faq.html</a></p> <p>You will end up with this error if you try running it with just the virtualenv python:</p> <blockquote> <p>Python is not installed as a framework</p> </blockquote> <p><a href="http://i.stack.imgur.com/t5aco.png" rel="nofollow"><img src="http://i.stack.imgur.com/t5aco.png" alt="enter image description here"></a></p> <p>I decided to use their second workaround which is to include the function PYTHONHOME into my bashrc. file. </p> <p>I have included the function below into bashrc taken off the matplotlib website:</p> <pre><code>function frameworkpython { if [[ ! -z "$VIRTUAL_ENV" ]]; then PYTHONHOME=$VIRTUAL_ENV /usr/local/bin/python "$@" else /usr/local/bin/python "$@" fi } </code></pre> <p>Now, to run matplotlib successfully, I need to call <code>frameworkpython</code> instead of <code>python</code> to draw the graph. This is all good in my Terminal where I just type in the commands but I would rather use PyCharm to run my python code.</p> <p>My question is, how do I get PyCharm to ran <code>frameworkpython</code> each time I press the green play button? The green play button just calls <code>python</code>. </p> <p>I clicked on "Edit Configurations..." but cannot see how to change this. You can change the interpreters but frameworkpython is not an interpreter but rather a function within the bashrc file.</p> <p><a href="http://i.stack.imgur.com/3TOIq.png" rel="nofollow"><img src="http://i.stack.imgur.com/3TOIq.png" alt="enter image description here"></a></p>
0
2016-08-26T15:05:20Z
39,170,013
<p>You can set your Python interpreter in Pycharm. See: <a href="https://www.jetbrains.com/help/pycharm/2016.1/configuring-available-python-interpreters.html" rel="nofollow">https://www.jetbrains.com/help/pycharm/2016.1/configuring-available-python-interpreters.html</a></p>
0
2016-08-26T15:42:02Z
[ "python", "osx", "python-2.7", "matplotlib", "pycharm" ]
Python unformatted save
39,169,362
<p>I am using python to create the input for a program. This program takes an unformatted binary file as input. If I were using fortran I would create this file with</p> <pre><code> open (10,file=outfile,status='unknown',form='unformatted') write(10) int1,int2,int3,int4,list0 write (10) list1 write (10) list2 write (10) list3 write (10) list4 close (10) </code></pre> <p>Is there a way to create the same kind of file in python? My first guess would be to create a subroutine in fortran which can save files given some inputs and then implement this in my python code using f2py, but I don't really know how one would go about doing this. The lists that I am writing to file are very large and the exact structure is very important. This means that answers such as <a href="http://stackoverflow.com/questions/14985311/writing-fortran-unformatted-files-with-python">Writing Fortran unformatted files with Python</a> seem to be unsatisfactory as they don't adequately deal with headers in the file/endianess and so on.</p> <p>In my python code a have a 2-d array, each row containing the x,y,z coordinates and the mass of the particle. 
This data needs to be split among a number of files.</p> <p>For the particle load the structure of the files is:</p> <p>BLOCK-1 - body is 48 bytes long:</p> <pre><code> nparticles_this_file - integer*4 (nlist) nparticles_total - integer*8 number of this file - integer*4 total number of files - integer*4 Not used - 7*integer*4 </code></pre> <hr> <p>BLOCK-2</p> <pre><code> A list of nlist x-coordinates (real*8) </code></pre> <p>(the x-coordinate is in units of the periodic box size 0&lt;=x&lt;1)</p> <p>BLOCK-3</p> <pre><code> A list of nlist y-coordinates (real*8) </code></pre> <p>(the y-coordinate is in units of the periodic box size 0&lt;=y&lt;1)</p> <p>BLOCK-4</p> <pre><code> A list of nlist z-coordinates (real*8) </code></pre> <p>(the z-coordinate is in units of the periodic box size 0&lt;=z&lt;1)</p> <p>BLOCK-5</p> <pre><code>A list of nlist particle masses (real*4) </code></pre> <p>in units of the total mass in the periodic volume</p>
0
2016-08-26T15:05:27Z
39,171,956
<p>Code like the following should be a good starting point for what you are trying to do. The structure of your data is not as complicated as I expected from your explanation. I wrote a small function to write one list, as the pattern is pretty repetitive. The most important thing to notice is that Fortran unformatted files write the size of each record along with the record (before and after it). That helps Fortran check for basic errors when reading the file back later. Using Fortran stream files would spare you from writing the record sizes.</p> <pre><code>import numpy as np def writeBloc(dList, fId): """Write a single list of data values as float64, i.e. fortran real*8""" np.array([len(dList)*8],np.int32).tofile(fId) # record size np.array([dList],np.float64).tofile(fId) np.array([len(dList)*8],np.int32).tofile(fId) # record size int1,int2,int3,int4 = 4, 100, 25, 25 # f = open("python.dat", "wb") # Block 1 np.array([48],np.int32).tofile(f) # record size np.array([int1],np.int32).tofile(f) np.array([int2],np.int64).tofile(f) np.array([int3],np.int32).tofile(f) np.array([int4],np.int32).tofile(f) np.zeros((7),np.int32).tofile(f) # list0 np.array([48],np.int32).tofile(f) # record size # list1=[10.0, 11.0, 12.0, 13.0] list2=[20.0, 21.0, 22.0, 23.0] list3=[30.0, 31.0, 32.0, 33.0] list4=[40.0, 41.0, 42.0, 43.0] # data writeBloc(list1, f) # Block 2 writeBloc(list2, f) # Block 3 writeBloc(list3, f) # Block 4 writeBloc(list4, f) # Block 5 f.close() </code></pre>
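One way to sanity-check the record framing described above is to write a single record and read it back, comparing the leading and trailing markers with the payload size. The sketch below writes to a temporary file and assumes the same native-endian layout as the code above (NumPy is assumed to be installed):

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'record.dat')
payload = np.array([10.0, 11.0, 12.0, 13.0], np.float64)

with open(path, 'wb') as f:
    np.array([payload.nbytes], np.int32).tofile(f)  # leading record marker
    payload.tofile(f)
    np.array([payload.nbytes], np.int32).tofile(f)  # trailing record marker

with open(path, 'rb') as f:
    head = int(np.fromfile(f, np.int32, 1)[0])      # 4 doubles -> 32 bytes
    data = np.fromfile(f, np.float64, head // 8)
    tail = int(np.fromfile(f, np.int32, 1)[0])

print(head, data.tolist(), tail)
```

If the two markers disagree with the payload size, a Fortran `read` of the file would fail, so this round trip is a cheap way to catch framing mistakes before handing the file to the target program.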
1
2016-08-26T17:43:24Z
[ "python", "io", "binary", "fortran", "read-write" ]
What should I do about object from modules not imported in the current module?
39,169,368
<p>I do not know if this is really a technical question but maybe more a question about good practices.</p> <p>Suppose you write a module with several functions which work with Figure object of matplotlib. The functions get the fig object as arguments and return this fig object. For example :</p> <pre><code>def do_smth(fig, args): """ do something on fig""" fig.suptitle("plop") # more stuff return fig </code></pre> <p>The above function does not need the matplotlib module to be imported.</p> <p>I am in trouble about that. Is it ok to write a complete module with functions which work on objects coming from another module without importing this module ? Is it enough to mention this in the doc ? Is there some recommandations about this kind of cases ? And of courses have I obtained this situation because the feelings of the module is wrong ?</p>
1
2016-08-26T15:05:45Z
39,170,198
<p>If the module is expected to work with objects from another module, I would say you should import that other module for the sake of clarity.</p> <p>After all, Python caches imports. If the other module was already imported, attempting to import it again has no cost. If the other module was not already imported... this module, or at least the functions that work with objects from the other, is pretty much useless (because you won't be able to get objects from the other module to pass to the functions in this one).</p> <p>The extra import exists purely for clarity - but it does provide a lot of clarity. A line in a docstring can be missed more easily than a line in the imports, if someone's checking for requirements. If you don't include the import, and someone imports your module without having the other available (having misunderstood your purpose), it'll load fine - but will probably do something they weren't expecting, if they call those functions. If you do include the import, and they try to import your module without having the other available, they will end up with an error to clue them in as to what your module is for.</p>
0
2016-08-26T15:52:10Z
[ "python", "matplotlib", "module" ]
ImproperlyConfigured: The included urlconf does not appear to have any patterns in it
39,169,395
<p>I am working on a django project and facing an issue. The name of my main project is testproject. In <strong>settings.py</strong> I have :</p> <pre><code>ROOT_URLCONF = 'testproject.urls' </code></pre> <p>There are two apps in the project too. One is <strong>example</strong>, other is <strong>new</strong>.</p> <p><em>urls.py</em> for testproject:</p> <pre><code>from django.conf.urls import include, url urlpatterns = [ url(r'', include('new.urls')), url(r'', include('example.urls')), ] </code></pre> <p>This is the <strong>urls.py</strong> of <em>example</em>:</p> <pre><code>from django.conf.urls import url from rest_framework.urlpatterns import format_suffix_patterns from example import views urlpatterns = [ url('example/$', views.Example.as_view()), ] urlpatterns = format_suffix_patterns(urlpatterns) </code></pre> <p>This is the <strong>urls.py</strong> of <em>new:</em></p> <pre><code>from django.conf.urls import url from rest_framework.urlpatterns import format_suffix_patterns from new import views urlpatterns = [ url('new/$', views.New.as_view()), ] urlpatterns = format_suffix_patterns(urlpatterns) </code></pre> <p>On hitting url "127.0.0.1:8000/example" I am getting the error:</p> <pre><code>ImproperlyConfigured at /example/ The included urlconf '&lt;module 'new.urls' from '/Users/testproject/name/urls.pyc'&gt;' does not appear to have any patterns in it. If you see valid patterns in the file then the issue is probably caused by a circular import. </code></pre> <p>But I don't see any circular imports here. I'm stuck. Please help me.</p>
0
2016-08-26T15:07:26Z
39,169,642
<p>In the project's urls.py, try the following. Note that a pattern passed to <code>include()</code> should not end with <code>$</code>, because the included urlconf still has to match the remainder of the URL:</p> <pre><code>urlpatterns = [ url(r'^new/', include('new.urls')), url(r'^example/', include('example.urls')), ] </code></pre> <p>In example's urls.py:</p> <pre><code>urlpatterns = [ url(r'^$', views.Example.as_view()), ] </code></pre> <p>And in new's urls.py:</p> <pre><code>urlpatterns = [ url(r'^$', views.New.as_view()), ] </code></pre>
0
2016-08-26T15:21:18Z
[ "python", "django", "url" ]
Scraping JS-fueled webpages with python 3.x on windows computer
39,169,411
<p>This is my first post here, so I hope you'll be kind enough to point out my mistakes if ever I crossed any rules of this website.</p> <p>First off, I'm quite "self-taught" in both english and python, so I apologize in advance if I make any language mistakes.</p> <p>So, I'm learning Python as I said, and I was trying to write a script able to scrape a webpage to get an element of it so that it continues to the next link, and so on. On my different attempts, I sometimes stumbled on a webpage whose interesting link is generated by a script (most certainly JavaScript), and so, when the webpage is retrieved by requests.get(url) doesn't contain the link I'm interested in (while I see it in my web browser while Inspecting the page or viewing source code.</p> <p>I KNOW there is the Selenium solution, but I was wondering if there was ANOTHER way. I found several, but none I actually got to make work. I've tried with dryscrape, which I found out, isn't supported on Windows computers.</p> <p>Any hint on what direction I should direct my research at? Again, I'm hoping for a solution without using selenium, that works on Windows computers.</p> <p><strong>EDIT:</strong> Oh, seeing as the answers suggested that already, I probably should have mentionned that my code uses requests and BeautifulSoup already. Problem is, neither deals with javascript that modifies the source code directly in the client. When I try to scrape the webpage in question with BeautifulSoup, many tags (including the one I'm interested in) don't appear in the whole page. It appears JavaScript injects some code when the page is loaded within the browser. In any case, there is no occurence of the link I'm after in the webpage I point requests.get at, nor in the requests.get(url).text I am looking in with BS4.</p> <p>Thanks folks :)</p>
0
2016-08-26T15:08:04Z
39,169,584
<p>There are already full solutions out there like <a href="http://scrapy.org/" rel="nofollow">scrapy</a>.</p> <p>Instead of going that route, I'd recommend giving libraries like <a href="http://lxml.de/" rel="nofollow">lxml</a> and <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a> a shot.</p>
0
2016-08-26T15:17:32Z
[ "javascript", "python" ]
Scraping JS-fueled webpages with python 3.x on windows computer
39,169,411
<p>This is my first post here, so I hope you'll be kind enough to point out my mistakes if ever I crossed any rules of this website.</p> <p>First off, I'm quite "self-taught" in both english and python, so I apologize in advance if I make any language mistakes.</p> <p>So, I'm learning Python as I said, and I was trying to write a script able to scrape a webpage to get an element of it so that it continues to the next link, and so on. On my different attempts, I sometimes stumbled on a webpage whose interesting link is generated by a script (most certainly JavaScript), and so, when the webpage is retrieved by requests.get(url) doesn't contain the link I'm interested in (while I see it in my web browser while Inspecting the page or viewing source code.</p> <p>I KNOW there is the Selenium solution, but I was wondering if there was ANOTHER way. I found several, but none I actually got to make work. I've tried with dryscrape, which I found out, isn't supported on Windows computers.</p> <p>Any hint on what direction I should direct my research at? Again, I'm hoping for a solution without using selenium, that works on Windows computers.</p> <p><strong>EDIT:</strong> Oh, seeing as the answers suggested that already, I probably should have mentionned that my code uses requests and BeautifulSoup already. Problem is, neither deals with javascript that modifies the source code directly in the client. When I try to scrape the webpage in question with BeautifulSoup, many tags (including the one I'm interested in) don't appear in the whole page. It appears JavaScript injects some code when the page is loaded within the browser. In any case, there is no occurence of the link I'm after in the webpage I point requests.get at, nor in the requests.get(url).text I am looking in with BS4.</p> <p>Thanks folks :)</p>
0
2016-08-26T15:08:04Z
39,169,925
<p>I would suggest you try <a href="https://www.crummy.com/software/BeautifulSoup/" rel="nofollow">Beautiful Soup</a></p>
0
2016-08-26T15:37:23Z
[ "javascript", "python" ]
Pytest works with old mock, but not unittest.mock
39,169,563
<p>I'm porting some code from Python 2 to 3, and <code>py.test</code> isn't playing well with the <code>patch</code> decorator from <code>unittest.mock</code>. When I use the <code>patch</code> decorator to pass a mock into the arguments of a test function, <code>py.test</code> instead interprets that argument to be a fixture, and is unable to set up the test.</p> <p>Here's a contrived example that hopefully illuminates the problem:</p> <pre><code>@patch('my_module.my_func') def test_my_func(mock_func): mock_func() mock_func.assert_called_once_with() </code></pre> <p>After running <code>py.test</code>, the error message would look like:</p> <pre><code>E fixture 'my_func' not found &gt; available fixtures: cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory &gt; use 'pytest --fixtures [testpath]' for help on them. </code></pre> <p>This is the only scenario under which this failure occurs. If I explicitly call the test (i.e. run <code>test_my_func()</code>), no error. If I patch <code>my_func</code> using either of the other patching techniques, no error. If I import patch from <code>mock</code> instead of <code>unittest.mock</code>, no error.</p> <p>It's only while running my tests using <code>py.test</code>, using <code>unittest.mock</code>, and patching using the decorator when this occurs.</p> <p>I'm running Python 3.4.5.</p>
1
2016-08-26T15:16:21Z
39,177,118
<p>Yes, mock decorators are not supported. That's not so bad -- changing a function's signature by applying a decorator is considered a bad idea anyway. But you can still use the <code>with mock.patch(...)</code> syntax.</p> <p>Also, as an option, there is the <a href="https://pypi.python.org/pypi/pytest-mock" rel="nofollow">pytest-mock</a> plugin with a pretty clean API for mocking:</p> <pre><code>def test_foo(mocker): # all valid calls mocker.patch('os.remove') mocker.patch.object(os, 'listdir', autospec=True) mocked_isfile = mocker.patch('os.path.isfile') </code></pre>
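For reference, the with mock.patch(...) form keeps the test function's signature untouched, so py.test never mistakes the mock for a fixture. A stdlib-only sketch (the patched target and file name are just illustrations):

```python
import os
from unittest import mock

def delete_file(path):
    """Tiny function under test; it calls os.remove via the module attribute."""
    os.remove(path)

def test_delete_file():
    # Patching inside the test body leaves the parameter list empty,
    # so py.test has no argument to mis-interpret as a fixture request.
    with mock.patch('os.remove') as mocked_remove:
        delete_file('/tmp/does-not-matter')
        mocked_remove.assert_called_once_with('/tmp/does-not-matter')

test_delete_file()
print('ok')
```

The context manager also guarantees the patch is undone when the block exits, even if the assertion fails.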
1
2016-08-27T03:17:49Z
[ "python", "unit-testing", "python-3.x", "mocking", "py.test" ]
class attribute lookup rule?
39,169,600
<pre><code>&gt;&gt;&gt; class D: ... __class__ = 1 ... __name__ = 2 ... &gt;&gt;&gt; D.__class__ &lt;class 'type'&gt; &gt;&gt;&gt; D().__class__ 1 &gt;&gt;&gt; D.__name__ 'D' &gt;&gt;&gt; D().__name__ 2 </code></pre> <p><strong>Why does <code>D.__class__</code> return the name of the class, while <code>D().__class__</code> returns the defined attribute in class D?</strong> </p> <p><strong>And from where do builtin attributes such as <code>__class__</code> and <code>__name__</code> come from?</strong> </p> <p>I suspected <code>__name__</code> or <code>__class__</code> to be simple descriptors that live either in <code>object</code> class or somewhere, but this can't be seen.</p> <p>In my understanding, the attribute lookup rule as follows in Python, omitting the conditions for descriptors etc..: </p> <p><code>Instance --&gt; Class --&gt; Class.__bases__ and the bases of the other classes as well</code></p> <p>Given the fact that a class is an instance of a metaclass, <code>type</code> in this case, why <code>D.__class__</code> doesn't look for <code>__class__</code> in <code>D.__dict__</code>?</p>
4
2016-08-26T15:18:39Z
39,169,745
<p>The names <code>__class__</code> and <code>__name__</code> are special. Both are <em>data descriptors</em>. <code>__name__</code> is defined on the <code>type</code> object, <code>__class__</code> is defined on <code>object</code> (a base-class of all new-style classes):</p> <pre><code>&gt;&gt;&gt; type.__dict__['__name__'] &lt;attribute '__name__' of 'type' objects&gt; &gt;&gt;&gt; type.__dict__['__name__'].__get__ &lt;method-wrapper '__get__' of getset_descriptor object at 0x1059ea870&gt; &gt;&gt;&gt; type.__dict__['__name__'].__set__ &lt;method-wrapper '__set__' of getset_descriptor object at 0x1059ea870&gt; &gt;&gt;&gt; object.__dict__['__class__'] &lt;attribute '__class__' of 'object' objects&gt; &gt;&gt;&gt; object.__dict__['__class__'].__get__ &lt;method-wrapper '__get__' of getset_descriptor object at 0x1059ea2d0&gt; &gt;&gt;&gt; object.__dict__['__class__'].__set__ &lt;method-wrapper '__set__' of getset_descriptor object at 0x1059ea2d0&gt; </code></pre> <p>Because they are data descriptors, the <a href="https://docs.python.org/3/reference/datamodel.html#object.__getattribute__" rel="nofollow"><code>type.__getattribute__</code> method</a> (used for attribute access on a class) will ignore any attributes set in the class <code>__dict__</code> and only use the descriptors themselves:</p> <pre><code>&gt;&gt;&gt; type.__getattribute__(Foo, '__class__') &lt;class 'type'&gt; &gt;&gt;&gt; type.__getattribute__(Foo, '__name__') 'Foo' </code></pre> <p>Fun fact: <code>type</code> derives from <code>object</code> (<em>everything</em> in Python is an object) which is why <code>__class__</code> is found on <code>type</code> when checking for data descriptors:</p> <pre><code>&gt;&gt;&gt; type.__mro__ (&lt;class 'type'&gt;, &lt;class 'object'&gt;) </code></pre> <p>(<code>type.__getattribute__(D, ...)</code> is used directly as an unbound method, not <code>D.__getattribute__()</code>, because <a href="https://docs.python.org/3/reference/datamodel.html#special-lookup" 
rel="nofollow">all special method access goes to the type</a>).</p> <p>See the <a href="https://docs.python.org/3/howto/descriptor.html" rel="nofollow"><em>Descriptor Howto</em></a> on what constitutes a data descriptor and why that matters:</p> <blockquote> <p>If an object defines both <code>__get__()</code> and <code>__set__()</code>, it is considered a data descriptor. Descriptors that only define <code>__get__()</code> are called non-data descriptors (they are typically used for methods but other uses are possible).</p> <p>Data and non-data descriptors differ in how overrides are calculated with respect to entries in an instance’s dictionary. If an instance’s dictionary has an entry with the same name as a data descriptor, the data descriptor takes precedence. If an instance’s dictionary has an entry with the same name as a non-data descriptor, the dictionary entry takes precedence.</p> </blockquote> <p>For data descriptors on <code>type</code>, a class is just another instance.</p> <p>So when looking up the <code>__class__</code> or <code>__name__</code> attributes, it doesn't matter what is defined in the <code>D.__dict__</code> namespace, because for either name a data descriptor is found in the namespace formed by <code>type</code> and its MRO.</p> <p>These descriptors are defined in the <a href="https://hg.python.org/cpython/file/v3.5.1/Objects/typeobject.c" rel="nofollow"><code>typeobject.c</code> C code</a>:</p> <pre class="lang-c prettyprint-override"><code>static PyGetSetDef type_getsets[] = { {"__name__", (getter)type_name, (setter)type_set_name, NULL}, /* ... several more ... */ } /* ... */ PyTypeObject PyType_Type = { PyVarObject_HEAD_INIT(&amp;PyType_Type, 0) "type", /* tp_name */ /* ... many type definition entries ... */ type_getsets, /* tp_getset */ /* ... many type definition entries ... */ } /* ... 
*/ static PyGetSetDef object_getsets[] = { {"__class__", object_get_class, object_set_class, PyDoc_STR("the object's class")}, {0} }; PyTypeObject PyBaseObject_Type = { PyVarObject_HEAD_INIT(&amp;PyType_Type, 0) "object", /* tp_name */ /* ... many type definition entries ... */ object_getsets, /* tp_getset */ /* ... many type definition entries ... */ } </code></pre> <p>On instances, <code>object.__getattribute__</code> is used, and it'll find the <code>__name__</code> and <code>__class__</code> entries in the <code>D.__dict__</code> mapping before it'll find the data descriptors on <code>object</code> or <code>type</code>.</p> <p>If you omit either, however, then looking up the names on <code>D()</code> will only find <code>__class__</code> as a data descriptor in the MRO of <code>D</code> (so, on <code>object</code>). <code>__name__</code> is not found as the metatypes are not considered when resolving instance attributes.</p> <p>As such you can set <code>__name__</code> on an instance, but not <code>__class__</code>:</p> <pre><code>&gt;&gt;&gt; class E: pass ... &gt;&gt;&gt; e = E() &gt;&gt;&gt; e.__class__ &lt;class '__main__.E'&gt; &gt;&gt;&gt; e.__name__ Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; AttributeError: 'E' object has no attribute '__name__' &gt;&gt;&gt; e.__dict__['__class__'] = 'ignored' &gt;&gt;&gt; e.__class__ &lt;class '__main__.E'&gt; &gt;&gt;&gt; e.__name__ = 'this just works' &gt;&gt;&gt; e.__name__ 'this just works' </code></pre>
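<p>The same precedence can be demonstrated with a hand-rolled data descriptor on a metaclass. This is only a sketch with made-up names (the real <code>__name__</code> is implemented in C, as shown above), but it reproduces the lookup behaviour:</p>

```python
class NameDescriptor:
    # Defines both __get__ and __set__, so it is a *data* descriptor.
    def __get__(self, obj, objtype=None):
        return "from descriptor"
    def __set__(self, obj, value):
        raise AttributeError("read-only")

class Meta(type):
    name = NameDescriptor()       # lives on the metaclass, like __name__

class D(metaclass=Meta):
    name = "from class dict"      # shadowing attempt in D.__dict__

# type.__getattribute__ prefers the data descriptor on the metaclass:
print(D.name)      # from descriptor
# object.__getattribute__ never consults the metaclass, so the
# class-dict entry wins for instance lookups:
print(D().name)    # from class dict
```

<p>Drop the <code>__set__</code> method from <code>NameDescriptor</code> and the class-dict entry would win both lookups, which is exactly the data/non-data distinction quoted above.</p>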
6
2016-08-26T15:26:55Z
[ "python", "python-internals" ]
Convert string to dict, then access key:values??? How to access data in a <class 'dict'> for Python?
39,169,718
<p>I am having issues accessing data inside a dictionary. </p> <blockquote> <p>Sys: Macbook 2012 <br> Python: Python 3.5.1 :: Continuum Analytics, Inc.</p> </blockquote> <p>I am working with a <a href="http://dask.pydata.org/en/latest/dataframe-create.html" rel="nofollow">dask.dataframe</a> created from a csv. </p> <h2>Edit Question</h2> <h1>How I got to this point</h1> <p>Assume I start out with a Pandas Series:</p> <pre><code>df.Coordinates 130 {u'type': u'Point', u'coordinates': [-43.30175... 278 {u'type': u'Point', u'coordinates': [-51.17913... 425 {u'type': u'Point', u'coordinates': [-43.17986... 440 {u'type': u'Point', u'coordinates': [-51.16376... 877 {u'type': u'Point', u'coordinates': [-43.17986... 1313 {u'type': u'Point', u'coordinates': [-49.72688... 1734 {u'type': u'Point', u'coordinates': [-43.57405... 1817 {u'type': u'Point', u'coordinates': [-43.77649... 1835 {u'type': u'Point', u'coordinates': [-43.17132... 2739 {u'type': u'Point', u'coordinates': [-43.19583... 2915 {u'type': u'Point', u'coordinates': [-43.17986... 3035 {u'type': u'Point', u'coordinates': [-51.01583... 3097 {u'type': u'Point', u'coordinates': [-43.17891... 3974 {u'type': u'Point', u'coordinates': [-8.633880... 3983 {u'type': u'Point', u'coordinates': [-46.64960... 4424 {u'type': u'Point', u'coordinates': [-43.17986... </code></pre> <p>The problem is, this is not a true dataframe of dictionaries. Instead, it's a column full of strings that LOOK like dictionaries. 
Running this shows it:</p> <pre><code>df.Coordinates.apply(type) 130 &lt;class 'str'&gt; 278 &lt;class 'str'&gt; 425 &lt;class 'str'&gt; 440 &lt;class 'str'&gt; 877 &lt;class 'str'&gt; 1313 &lt;class 'str'&gt; 1734 &lt;class 'str'&gt; 1817 &lt;class 'str'&gt; 1835 &lt;class 'str'&gt; 2739 &lt;class 'str'&gt; 2915 &lt;class 'str'&gt; 3035 &lt;class 'str'&gt; 3097 &lt;class 'str'&gt; 3974 &lt;class 'str'&gt; 3983 &lt;class 'str'&gt; 4424 &lt;class 'str'&gt; </code></pre> <p><strong>My Goal</strong>: Access the <code>coordinates</code> key and value in the dictionary. That's it. But it's a <code>str</code>. </p> <p>I converted the strings to dictionaries using <code>eval</code>.</p> <pre><code>new = df.Coordinates.apply(eval) 130 {'coordinates': [-43.301755, -22.990065], 'typ... 278 {'coordinates': [-51.17913026, -30.01201896], ... 425 {'coordinates': [-43.17986794, -22.91000096], ... 440 {'coordinates': [-51.16376782, -29.95488677], ... 877 {'coordinates': [-43.17986794, -22.91000096], ... 1313 {'coordinates': [-49.72688407, -29.33757253], ... 1734 {'coordinates': [-43.574057, -22.928059], 'typ... 1817 {'coordinates': [-43.77649254, -22.86940539], ... 1835 {'coordinates': [-43.17132318, -22.90895217], ... 2739 {'coordinates': [-43.1958313, -22.98755333], '... 2915 {'coordinates': [-43.17986794, -22.91000096], ... 3035 {'coordinates': [-51.01583481, -29.63593292], ... 3097 {'coordinates': [-43.17891379, -22.96476163], ... 3974 {'coordinates': [-8.63388008, 41.14594453], 't... 3983 {'coordinates': [-46.64960938, -23.55902666], ... 4424 {'coordinates': [-43.17986794, -22.91000096], ...
</code></pre> <p>Next I test the type of object and get:</p> <pre><code>130 &lt;class 'dict'&gt; 278 &lt;class 'dict'&gt; 425 &lt;class 'dict'&gt; 440 &lt;class 'dict'&gt; 877 &lt;class 'dict'&gt; 1313 &lt;class 'dict'&gt; 1734 &lt;class 'dict'&gt; 1817 &lt;class 'dict'&gt; 1835 &lt;class 'dict'&gt; 2739 &lt;class 'dict'&gt; 2915 &lt;class 'dict'&gt; 3035 &lt;class 'dict'&gt; 3097 &lt;class 'dict'&gt; 3974 &lt;class 'dict'&gt; 3983 &lt;class 'dict'&gt; 4424 &lt;class 'dict'&gt; </code></pre> <p>If I try to access my dictionaries: new.apply(lambda x: x['coordinates'])</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-71-c0ad459ed1cc&gt; in &lt;module&gt;() ----&gt; 1 dfCombined.Coordinates.apply(coord_getter) /Users/linwood/anaconda/envs/dataAnalysisWithPython/lib/python3.5/site-packages/pandas/core/series.py in apply(self, func, convert_dtype, args, **kwds) 2218 else: 2219 values = self.asobject -&gt; 2220 mapped = lib.map_infer(values, f, convert=convert_dtype) 2221 2222 if len(mapped) and isinstance(mapped[0], Series): pandas/src/inference.pyx in pandas.lib.map_infer (pandas/lib.c:62658)() &lt;ipython-input-68-748ce2d8529e&gt; in coord_getter(row) 1 import ast 2 def coord_getter(row): ----&gt; 3 return (ast.literal_eval(row))['coordinates'] TypeError: 'bool' object is not subscriptable </code></pre> <p>It's some type of class, because when I run <code>dir</code> I get this for one object:</p> <pre><code>new.apply(lambda x: dir(x))[130] 130 __class__ 130 __contains__ 130 __delattr__ 130 __delitem__ 130 __dir__ 130 __doc__ 130 __eq__ 130 __format__ 130 __ge__ 130 __getattribute__ 130 __getitem__ 130 __gt__ 130 __hash__ 130 __init__ 130 __iter__ 130 __le__ 130 __len__ 130 __lt__ 130 __ne__ 130 __new__ 130 __reduce__ 130 __reduce_ex__ 130 __repr__ 130 __setattr__ 130 __setitem__ 130 __sizeof__ 130 __str__ 130 __subclasshook__ 130 clear 130 copy 130 fromkeys 130 get
130 items 130 keys 130 pop 130 popitem 130 setdefault 130 update 130 values Name: Coordinates, dtype: object </code></pre> <p><strong>My Problem</strong>: I just want to access the dictionary. But, the object is <code>&lt;class 'dict'&gt;</code>. How do I convert this to a regular dict or just access the key:value pairs?</p> <p>Any ideas??</p>
3
2016-08-26T15:25:32Z
39,169,871
<p>It looks like you end up with something like this</p> <pre><code>s = pd.Series([ dict(type='Point', coordinates=[1, 1]), dict(type='Point', coordinates=[1, 2]), dict(type='Point', coordinates=[1, 3]), dict(type='Point', coordinates=[1, 4]), dict(type='Point', coordinates=[1, 5]), dict(type='Point', coordinates=[2, 1]), dict(type='Point', coordinates=[2, 2]), dict(type='Point', coordinates=[2, 3]), ]) s 0 {u'type': u'Point', u'coordinates': [1, 1]} 1 {u'type': u'Point', u'coordinates': [1, 2]} 2 {u'type': u'Point', u'coordinates': [1, 3]} 3 {u'type': u'Point', u'coordinates': [1, 4]} 4 {u'type': u'Point', u'coordinates': [1, 5]} 5 {u'type': u'Point', u'coordinates': [2, 1]} 6 {u'type': u'Point', u'coordinates': [2, 2]} 7 {u'type': u'Point', u'coordinates': [2, 3]} dtype: object </code></pre> <h3>Solution</h3> <pre><code>df = s.apply(pd.Series) df </code></pre> <p><a href="http://i.stack.imgur.com/loUCd.png" rel="nofollow"><img src="http://i.stack.imgur.com/loUCd.png" alt="enter image description here"></a></p> <p>then access coordinates</p> <pre><code>df.coordinates 0 [1, 1] 1 [1, 2] 2 [1, 3] 3 [1, 4] 4 [1, 5] 5 [2, 1] 6 [2, 2] 7 [2, 3] Name: coordinates, dtype: object </code></pre> <p>Or even</p> <pre><code>df.coordinates.apply(pd.Series) </code></pre> <p><a href="http://i.stack.imgur.com/JMyIe.png" rel="nofollow"><img src="http://i.stack.imgur.com/JMyIe.png" alt="enter image description here"></a></p>
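<p>Note that in the question the column initially holds <em>strings</em>, not dicts. A safer first step than <code>eval</code> is <code>ast.literal_eval</code>, which only accepts Python literal syntax. A sketch on a single sample value (the value here is made up to match the question's shape):</p>

```python
import ast

raw = "{u'type': u'Point', u'coordinates': [-43.301755, -22.990065]}"
# literal_eval evaluates only literals (dicts, lists, strings, numbers),
# so unlike eval it cannot execute arbitrary code.  The u'' prefix is
# still valid syntax in Python 3, so these strings parse unchanged.
record = ast.literal_eval(raw)
print(record['coordinates'])   # [-43.301755, -22.990065]
```

<p>The same callable can be handed to <code>Series.apply</code> before the <code>apply(pd.Series)</code> expansion shown above.</p>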
1
2016-08-26T15:33:44Z
[ "python", "pandas", "dictionary", "data-manipulation", "dask" ]
Convert string to dict, then access key:values??? How to access data in a <class 'dict'> for Python?
39,169,718
<p>I am having issues accessing data inside a dictionary. </p> <blockquote> <p>Sys: Macbook 2012 <br> Python: Python 3.5.1 :: Continuum Analytics, Inc.</p> </blockquote> <p>I am working with a <a href="http://dask.pydata.org/en/latest/dataframe-create.html" rel="nofollow">dask.dataframe</a> created from a csv. </p> <h2>Edit Question</h2> <h1>How I got to this point</h1> <p>Assume I start out with a Pandas Series:</p> <pre><code>df.Coordinates 130 {u'type': u'Point', u'coordinates': [-43.30175... 278 {u'type': u'Point', u'coordinates': [-51.17913... 425 {u'type': u'Point', u'coordinates': [-43.17986... 440 {u'type': u'Point', u'coordinates': [-51.16376... 877 {u'type': u'Point', u'coordinates': [-43.17986... 1313 {u'type': u'Point', u'coordinates': [-49.72688... 1734 {u'type': u'Point', u'coordinates': [-43.57405... 1817 {u'type': u'Point', u'coordinates': [-43.77649... 1835 {u'type': u'Point', u'coordinates': [-43.17132... 2739 {u'type': u'Point', u'coordinates': [-43.19583... 2915 {u'type': u'Point', u'coordinates': [-43.17986... 3035 {u'type': u'Point', u'coordinates': [-51.01583... 3097 {u'type': u'Point', u'coordinates': [-43.17891... 3974 {u'type': u'Point', u'coordinates': [-8.633880... 3983 {u'type': u'Point', u'coordinates': [-46.64960... 4424 {u'type': u'Point', u'coordinates': [-43.17986... </code></pre> <p>The problem is, this is not a true dataframe of dictionaries. Instead, it's a column full of strings that LOOK like dictionaries. 
Running this shows it:</p> <pre><code>df.Coordinates.apply(type) 130 &lt;class 'str'&gt; 278 &lt;class 'str'&gt; 425 &lt;class 'str'&gt; 440 &lt;class 'str'&gt; 877 &lt;class 'str'&gt; 1313 &lt;class 'str'&gt; 1734 &lt;class 'str'&gt; 1817 &lt;class 'str'&gt; 1835 &lt;class 'str'&gt; 2739 &lt;class 'str'&gt; 2915 &lt;class 'str'&gt; 3035 &lt;class 'str'&gt; 3097 &lt;class 'str'&gt; 3974 &lt;class 'str'&gt; 3983 &lt;class 'str'&gt; 4424 &lt;class 'str'&gt; </code></pre> <p><strong>My Goal</strong>: Access the <code>coordinates</code> key and value in the dictionary. That's it. But it's a <code>str</code>. </p> <p>I converted the strings to dictionaries using <code>eval</code>.</p> <pre><code>new = df.Coordinates.apply(eval) 130 {'coordinates': [-43.301755, -22.990065], 'typ... 278 {'coordinates': [-51.17913026, -30.01201896], ... 425 {'coordinates': [-43.17986794, -22.91000096], ... 440 {'coordinates': [-51.16376782, -29.95488677], ... 877 {'coordinates': [-43.17986794, -22.91000096], ... 1313 {'coordinates': [-49.72688407, -29.33757253], ... 1734 {'coordinates': [-43.574057, -22.928059], 'typ... 1817 {'coordinates': [-43.77649254, -22.86940539], ... 1835 {'coordinates': [-43.17132318, -22.90895217], ... 2739 {'coordinates': [-43.1958313, -22.98755333], '... 2915 {'coordinates': [-43.17986794, -22.91000096], ... 3035 {'coordinates': [-51.01583481, -29.63593292], ... 3097 {'coordinates': [-43.17891379, -22.96476163], ... 3974 {'coordinates': [-8.63388008, 41.14594453], 't... 3983 {'coordinates': [-46.64960938, -23.55902666], ... 4424 {'coordinates': [-43.17986794, -22.91000096], ...
</code></pre> <p>Next I test the type of object and get:</p> <pre><code>130 &lt;class 'dict'&gt; 278 &lt;class 'dict'&gt; 425 &lt;class 'dict'&gt; 440 &lt;class 'dict'&gt; 877 &lt;class 'dict'&gt; 1313 &lt;class 'dict'&gt; 1734 &lt;class 'dict'&gt; 1817 &lt;class 'dict'&gt; 1835 &lt;class 'dict'&gt; 2739 &lt;class 'dict'&gt; 2915 &lt;class 'dict'&gt; 3035 &lt;class 'dict'&gt; 3097 &lt;class 'dict'&gt; 3974 &lt;class 'dict'&gt; 3983 &lt;class 'dict'&gt; 4424 &lt;class 'dict'&gt; </code></pre> <p>If I try to access my dictionaries: new.apply(lambda x: x['coordinates'])</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-71-c0ad459ed1cc&gt; in &lt;module&gt;() ----&gt; 1 dfCombined.Coordinates.apply(coord_getter) /Users/linwood/anaconda/envs/dataAnalysisWithPython/lib/python3.5/site-packages/pandas/core/series.py in apply(self, func, convert_dtype, args, **kwds) 2218 else: 2219 values = self.asobject -&gt; 2220 mapped = lib.map_infer(values, f, convert=convert_dtype) 2221 2222 if len(mapped) and isinstance(mapped[0], Series): pandas/src/inference.pyx in pandas.lib.map_infer (pandas/lib.c:62658)() &lt;ipython-input-68-748ce2d8529e&gt; in coord_getter(row) 1 import ast 2 def coord_getter(row): ----&gt; 3 return (ast.literal_eval(row))['coordinates'] TypeError: 'bool' object is not subscriptable </code></pre> <p>It's some type of class, because when I run <code>dir</code> I get this for one object:</p> <pre><code>new.apply(lambda x: dir(x))[130] 130 __class__ 130 __contains__ 130 __delattr__ 130 __delitem__ 130 __dir__ 130 __doc__ 130 __eq__ 130 __format__ 130 __ge__ 130 __getattribute__ 130 __getitem__ 130 __gt__ 130 __hash__ 130 __init__ 130 __iter__ 130 __le__ 130 __len__ 130 __lt__ 130 __ne__ 130 __new__ 130 __reduce__ 130 __reduce_ex__ 130 __repr__ 130 __setattr__ 130 __setitem__ 130 __sizeof__ 130 __str__ 130 __subclasshook__ 130 clear 130 copy 130 fromkeys 130 get
130 items 130 keys 130 pop 130 popitem 130 setdefault 130 update 130 values Name: Coordinates, dtype: object </code></pre> <p><strong>My Problem</strong>: I just want to access the dictionary. But, the object is <code>&lt;class 'dict'&gt;</code>. How do I convert this to a regular dict or just access the key:value pairs?</p> <p>Any ideas??</p>
3
2016-08-26T15:25:32Z
39,177,065
<p>My first instinct is to use <code>json.loads</code> to cast the strings into dicts. But the example you've posted does not follow the JSON standard since it uses single instead of double quotes. So you have to convert the strings first. </p> <p>A second option is to just use regex to parse the strings. If the dict strings in your actual DataFrame do not exactly match my examples, I expect the regex method to be more robust since lat/long coords are fairly standard.</p> <pre><code>import re import json import pandas as pd df = pd.DataFrame(data={'Coordinates':["{u'type': u'Point', u'coordinates': [-43.30175, 123.45]}", "{u'type': u'Point', u'coordinates': [-51.17913, 123.45]}"], 'idx': [130, 278]}) ## # Solution 1 - use json.loads ## def string_to_dict(dict_string): # Convert to proper json format dict_string = dict_string.replace("'", '"').replace('u"', '"') return json.loads(dict_string) df['CoordDicts'] = df.Coordinates.apply(string_to_dict) df.CoordDicts[0]['coordinates'] #&gt;&gt;&gt; [-43.30175, 123.45] ## # Solution 2 - use regex ## def get_lat_lon(dict_string): # Get the coordinates string with regex rs = re.search("(\-?\d+(\.\d+)?),\s*(\-?\d+(\.\d+)?)", dict_string).group() # Cast to floats coords = [float(x) for x in rs.split(',')] return coords df['Coords'] = df.Coordinates.apply(get_lat_lon) df.Coords[0] #&gt;&gt;&gt; [-43.30175, 123.45] </code></pre>
1
2016-08-27T03:05:37Z
[ "python", "pandas", "dictionary", "data-manipulation", "dask" ]
Update root in XML after the childs got processed
39,169,735
<p>Related to <a href="http://stackoverflow.com/q/38974656/3316077">this SO question</a> I managed to accomplish what's been asked with the following snippet:</p> <pre><code>import xml.etree.ElementTree as ET def read_xml(): with open('test.xml') as xml_file: return xml_file.read() xml_file = read_xml() root = ET.fromstring(xml_file) pmt_infs = root.find('.//CstmrCdtTrfInitn').findall('PmtInf') print(pmt_infs) nodes = [] for node in pmt_infs: children = list(node) nodes.append(children) xml_stuff = [None] * len(nodes) to_remove = [] for first, *column in zip(*nodes): for index, item in enumerate(column, 1): if 'CdtTrfTxInf' in item.tag: xml_stuff[index] = item continue if first.tag == item.tag and first.text == item.text and index not in to_remove: to_remove.append(index) for index in to_remove: pmt_infs[0].append(xml_stuff[index]) for index in to_remove[::-1]: pmt_infs.pop(index) print(pmt_infs) </code></pre> <p>Now, what the above piece of code does is exactly what I asked in the previous question:</p> <blockquote> <p>I would like to move the whole <code>&lt;CdtTrfTxInf&gt;&lt;/CdtTrfTxInf&gt;</code> to the first <code>&lt;PmtInf&gt;&lt;/PmtInf&gt;</code> and remove the whole <code>&lt;PmtInf&gt;&lt;/PmtInf&gt;</code> that I've taken <code>&lt;CdtTrfTxInf&gt;&lt;/CdtTrfTxInf&gt;</code> from.</p> </blockquote> <p>The above has been done, but I have a small problem. Initially, I get the <code>root</code> from the file. And now, I want to update it with the new data.
The problem is that I don't know how to add the first part of the XML in the new file and then, append the <code>pmt_infs</code> to it:</p> <pre><code>&lt;?xml version="1.0" encoding="utf-8" ?&gt; &lt;Document&gt; &lt;CstmrCdtTrfInitn&gt; &lt;GrpHdr&gt; &lt;other_tags&gt;a&lt;/other_tags&gt; &lt;!--here there might be other nested tags inside &lt;other_tags&gt;&lt;/other_tags&gt;--&gt; &lt;other_tags&gt;b&lt;/other_tags&gt; &lt;!--here there might be other nested tags inside &lt;other_tags&gt;&lt;/other_tags&gt;--&gt; &lt;other_tags&gt;c&lt;/other_tags&gt; &lt;!--here there might be other nested tags inside &lt;other_tags&gt;&lt;/other_tags&gt;--&gt; &lt;/GrpHdr&gt; &lt;!-- here should be the &lt;PmtInf&gt; that's been processed above --&gt; &lt;/CstmrCdtTrfInitn&gt; &lt;/Document&gt; </code></pre> <p>Can somebody give me some hints ? </p> <hr> <p>LE: As requested, I'll add here the desired results:</p> <pre><code>&lt;?xml version="1.0" encoding="utf-8" ?&gt; &lt;Document&gt; &lt;CstmrCdtTrfInitn&gt; &lt;GrpHdr&gt; &lt;other_tags&gt;a&lt;/other_tags&gt; &lt;other_tags&gt;b&lt;/other_tags&gt; &lt;other_tags&gt;c&lt;/other_tags&gt; &lt;/GrpHdr&gt; &lt;PmtInf&gt; &lt;things&gt;d&lt;/things&gt; &lt;things&gt;e&lt;/things&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;PmtInf&gt; &lt;things&gt;f&lt;/things&gt; &lt;things&gt;g&lt;/things&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;/CstmrCdtTrfInitn&gt; &lt;/Document&gt; </code></pre> <p>Now the output looks like that because:</p> <ul> <li>looking at the <code>&lt;PmtInf&gt;&lt;/PmtInf&gt;</code> sections (which are three), we can see that: <ol> <li>if we compare the <code>&lt;things&gt;</code> from first <code>&lt;pmtinf&gt;</code> and things from the second <code>&lt;pmtinf&gt;</code> we can see they are not the same (<code>d != 
f</code>, <code>e != g</code>) so we move on to the next <code>&lt;pmtinf&gt;</code>; If we compare the first <code>&lt;pmtinf&gt;</code> <code>&lt;things&gt;</code> with the third ones, they are also not the same, so we leave the first <code>&lt;pmtinf&gt;</code> as it is.</li> <li>we go to the second <code>pmtinf</code> section and compare <code>things</code> from it with <code>things</code> from the third <code>pmtinf</code> (they are the same). That said, we take the <code>CdtTrfTxInf</code> part from the third <code>pmtinf</code>, add it to the second <code>pmtinf</code> at the end and remove the third <code>pmtinf</code> completely.</li> </ol></li> </ul> <p>Imagine this as a list of lists (which in fact, that's what they are):</p> <pre><code>[[a1, b1, c1], [a2, b2, c2], [a3, b3, c3]] </code></pre> <p><em>Where</em>: a = first <code>&lt;things&gt;</code> tag from a <code>&lt;PmtInf&gt;</code> b = second <code>&lt;things&gt;</code> tag from a <code>&lt;PmtInf&gt;</code> c = <code>&lt;CdtTrfTxInf&gt;</code> tag from a <code>&lt;PmtInf&gt;</code></p> <p>In my example:</p> <p><code>a1!=a2</code> and <code>b1!=b2</code> =&gt; we can move to the next sublist (if they had been the same, the list would look like this):</p> <pre><code>[[a1, b1, c1, c2],[a3, b3, c3]] </code></pre> <p><code>a1!=a3</code> and <code>b1!=b3</code> =&gt; we can go to the second sublist and compare it with all the sublists after it</p> <p><code>a2==a3</code> and <code>b2==b3</code> =&gt; they are the same so we will now have:</p> <pre><code>[[a1, b1, c1], [a2, b2, c2, c3]] </code></pre> <p>As it is, my result will only be:</p> <pre><code>&lt;PmtInf&gt; &lt;things&gt;d&lt;/things&gt; &lt;things&gt;e&lt;/things&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;PmtInf&gt; &lt;things&gt;f&lt;/things&gt; &lt;things&gt;g&lt;/things&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags
here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; </code></pre> <p>But I need it to be:</p> <pre><code>&lt;?xml version="1.0" encoding="utf-8" ?&gt; &lt;Document&gt; &lt;CstmrCdtTrfInitn&gt; &lt;GrpHdr&gt; &lt;other_tags&gt;a&lt;/other_tags&gt; &lt;other_tags&gt;b&lt;/other_tags&gt; &lt;other_tags&gt;c&lt;/other_tags&gt; &lt;/GrpHdr&gt; &lt;PmtInf&gt; &lt;things&gt;d&lt;/things&gt; &lt;things&gt;e&lt;/things&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;PmtInf&gt; &lt;things&gt;f&lt;/things&gt; &lt;things&gt;g&lt;/things&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;/CstmrCdtTrfInitn&gt; &lt;/Document&gt; </code></pre>
1
2016-08-26T15:26:18Z
39,192,877
<p>Consider <a href="https://www.w3.org/Style/XSL/" rel="nofollow">XSLT</a>, the transformation language used to manipulate XML documents. Specifically, your re-ordering actually requires the <a href="http://www.jenitennison.com/xslt/grouping/muenchian.html" rel="nofollow">Muenchian Method</a>, a 1.0 procedure to index the XML document with a certain key and group child data accordingly (in 2.0 an easier <code>&lt;xsl:for-each-group&gt;</code> can be used). Here, the key used is the concatenation of the <code>&lt;things&gt;</code> nodes under <code>&lt;PmtInf&gt;</code>.</p> <p>Python's third-party module, <code>lxml</code>, can run XSLT 1.0 scripts using the <a href="http://xmlsoft.org/libxslt/" rel="nofollow">libxslt</a> processor. Of course, Python can also call external processors like <a href="http://stackoverflow.com/tags/xslt/info">Saxon and Xalan</a>, which can run 2.0 and even newer 3.0 scripts. In this solution, no <code>for</code> looping or <code>if</code> logic is needed.
Also, use of <code>&lt;xsl:key&gt;</code> is more efficient as you create a hash table on the document content.</p> <p><strong>Input XML</strong> </p> <pre><code>&lt;?xml version="1.0" encoding="utf-8" ?&gt; &lt;Document&gt; &lt;CstmrCdtTrfInitn&gt; &lt;GrpHdr&gt; &lt;other_tags&gt;a&lt;/other_tags&gt; &lt;other_tags&gt;b&lt;/other_tags&gt; &lt;other_tags&gt;c&lt;/other_tags&gt; &lt;/GrpHdr&gt; &lt;PmtInf&gt; &lt;things&gt;d&lt;/things&gt; &lt;things&gt;e&lt;/things&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;PmtInf&gt; &lt;things&gt;f&lt;/things&gt; &lt;things&gt;g&lt;/things&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;PmtInf&gt; &lt;things&gt;f&lt;/things&gt; &lt;things&gt;g&lt;/things&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;/CstmrCdtTrfInitn&gt; &lt;/Document&gt; </code></pre> <p><strong>XSLT</strong> Script <em>(save as a separate .xsl or .xslt file; adjust key @use and its later references to actual)</em></p> <pre><code>&lt;xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"&gt; &lt;xsl:output version="1.0" encoding="UTF-8" indent="yes" /&gt; &lt;xsl:strip-space elements="*"/&gt; &lt;xsl:key name="pkey" match="PmtInf" use="concat(things[1], things[2])" /&gt; &lt;xsl:template match="/Document"&gt; &lt;xsl:copy&gt; &lt;xsl:apply-templates select="CstmrCdtTrfInitn"/&gt; &lt;/xsl:copy&gt; &lt;/xsl:template&gt; &lt;xsl:template match="CstmrCdtTrfInitn"&gt; &lt;xsl:copy&gt; &lt;xsl:copy-of select="GrpHdr"/&gt; &lt;xsl:for-each select="PmtInf[count(. 
| key('pkey', concat(things[1], things[2])))]"&gt; &lt;xsl:copy&gt; &lt;xsl:copy-of select="things"/&gt; &lt;xsl:for-each select="key('pkey', concat(things[1], things[2]))"&gt; &lt;xsl:copy-of select="CdtTrfTxInf"/&gt; &lt;/xsl:for-each&gt; &lt;/xsl:copy&gt; &lt;/xsl:for-each&gt; &lt;/xsl:copy&gt; &lt;/xsl:template&gt; &lt;/xsl:transform&gt; </code></pre> <p><strong>Python</strong> Script</p> <pre><code>import lxml.etree as ET # LOAD XML AND XSL SOURCES dom = ET.parse('Input.xml') xslt = ET.parse('XSLTScript.xsl') # TRANSFORM SOURCE DOCUMENT transform = ET.XSLT(xslt) newdom = transform(dom) # OUTPUT TO FILE (serialize the result tree to bytes first) xmlfile = open('Output.xml', 'wb') xmlfile.write(ET.tostring(newdom, pretty_print=True)) xmlfile.close() </code></pre> <p><strong>Output</strong> XML</p> <pre><code>&lt;?xml version='1.0' encoding='UTF-8'?&gt; &lt;Document&gt; &lt;CstmrCdtTrfInitn&gt; &lt;GrpHdr&gt; &lt;other_tags&gt;a&lt;/other_tags&gt; &lt;other_tags&gt;b&lt;/other_tags&gt; &lt;other_tags&gt;c&lt;/other_tags&gt; &lt;/GrpHdr&gt; &lt;PmtInf&gt; &lt;things&gt;d&lt;/things&gt; &lt;things&gt;e&lt;/things&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;PmtInf&gt; &lt;things&gt;f&lt;/things&gt; &lt;things&gt;g&lt;/things&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;/CstmrCdtTrfInitn&gt; &lt;/Document&gt; </code></pre>
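<p>For comparison, the same key-based grouping can be sketched with only the standard library's <code>xml.etree.ElementTree</code>, using a dictionary keyed on the <code>&lt;things&gt;</code> texts in place of <code>&lt;xsl:key&gt;</code> (the sample document below is simplified from the question):</p>

```python
import xml.etree.ElementTree as ET

doc = """<Document><CstmrCdtTrfInitn>
<GrpHdr><other_tags>a</other_tags></GrpHdr>
<PmtInf><things>d</things><things>e</things><CdtTrfTxInf/></PmtInf>
<PmtInf><things>f</things><things>g</things><CdtTrfTxInf/></PmtInf>
<PmtInf><things>f</things><things>g</things><CdtTrfTxInf/></PmtInf>
</CstmrCdtTrfInitn></Document>"""

root = ET.fromstring(doc)
parent = root.find('CstmrCdtTrfInitn')
seen = {}  # maps the tuple of <things> texts to the first PmtInf with that key
for pmt in list(parent.findall('PmtInf')):
    key = tuple(t.text for t in pmt.findall('things'))
    if key in seen:
        # Duplicate key: move the payment details, then drop the element.
        for tx in pmt.findall('CdtTrfTxInf'):
            seen[key].append(tx)
        parent.remove(pmt)
    else:
        seen[key] = pmt

print(len(parent.findall('PmtInf')))                 # 2
print(len(seen[('f', 'g')].findall('CdtTrfTxInf')))  # 2
```

<p>The XSLT route scales better and keeps the transformation declarative, but the dictionary version shows the same idea with no third-party dependency.</p>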
2
2016-08-28T15:29:27Z
[ "python", "xml", "python-3.x" ]
How to get input of multiple lines
39,169,764
<p><strong>I have written code to take input as a list of strings</strong></p> <pre><code>d=["","","","","","","","","",""] i=0 while(True): s=input() d[i]=s i=i+1 if s=="": break </code></pre> <p>But I am not able to process the list <code>d</code> to obtain the required output. <code>d[0]</code> is storing <code>Djokovic:Murray:2-6,6-7,7-6,6-3,6-1</code>. Now I want to process this string (or convert it into a dictionary).</p> <p>Then I would be able to write code (by using the <code>str.split(",")</code> function) to calculate: </p> <ul> <li>Number of best-of-5 set matches won</li> <li>Number of best-of-3 set matches won</li> <li>Number of sets won</li> <li>Number of games won</li> <li>Number of sets lost</li> <li>Number of games lost</li> </ul>
-4
2016-08-26T15:27:53Z
39,170,392
<p>Is this what you are after?</p> <pre><code>&gt;&gt;&gt; d=[] &gt;&gt;&gt; while(True): ... s=raw_input() ... if s=="":break ... temp = [s] ... d.append(temp) ... a,b,7-6,7-6,6-3 c,d,7-4,7-6,6-2 e,f,6-4,7-6,6-2 &gt;&gt;&gt; d [['a,b,7-6,7-6,6-3'], ['c,d,7-4,7-6,6-2'], ['e,f,6-4,7-6,6-2']] </code></pre> <p>This makes a list item out of the input and then appends that list to your main list <code>d</code>.<br> You should now be able to process <code>d</code>. </p> <p>Edit: </p> <p>If you persist in using 2 delimiters, both <code>:</code> and <code>,</code>, you are making life more difficult for yourself. Stick with one!<br> Revising the simple code above: </p> <pre><code>d=[] while(True): s=raw_input() if s=="":break temp = [s] d.append(temp) #d becomes a list of lists for item in d: #process individual lists in d x=item[0].split(",") # break up the list using the delimiter comma for i in range(0,len(x)): #access each item in x print x[i] </code></pre> <p>Input:<br> Djokovic,Murray,2-6,6-7,7-6,6-3,6-1<br> Bloggs,Smith,2-6,6-7,7-6,6-3,6-3<br> Jones,Abernathy,6-3,6-3,6-3 </p> <p>Output:</p> <pre><code>Djokovic Murray 2-6 6-7 7-6 6-3 6-1 Bloggs Smith 2-6 6-7 7-6 6-3 6-3 Jones Abernathy 6-3 6-3 6-3 </code></pre>
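<p>In Python 3, where <code>raw_input</code> was renamed to <code>input</code>, the same collect-until-blank loop can be written with <code>iter</code> and a sentinel. A sketch that fakes the input source so it runs non-interactively (pass the builtin <code>input</code> in a real script):</p>

```python
def read_until_blank(readline):
    # iter(callable, sentinel) calls readline() repeatedly and stops
    # as soon as it returns the sentinel "" (a blank line).
    return list(iter(readline, ""))

# Simulated keyboard input for demonstration purposes only:
fake = iter(["a,b,7-6,7-6,6-3", "c,d,7-4,7-6,6-2", ""])
lines = read_until_blank(lambda: next(fake))
print(lines)   # ['a,b,7-6,7-6,6-3', 'c,d,7-4,7-6,6-2']
```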
0
2016-08-26T16:02:56Z
[ "python", "python-2.7", "python-3.x" ]
How to get input of multiple lines
39,169,764
<p><strong>I have written code to take input as a list of strings</strong></p> <pre><code>d=["","","","","","","","","",""] i=0 while(True): s=input() d[i]=s i=i+1 if s=="": break </code></pre> <p>But I am not able to process the list <code>d</code> to obtain the required output. <code>d[0]</code> is storing <code>Djokovic:Murray:2-6,6-7,7-6,6-3,6-1</code>. Now I want to process this string (or convert it into a dictionary).</p> <p>Then I would be able to write code (by using the <code>str.split(",")</code> function) to calculate: </p> <ul> <li>Number of best-of-5 set matches won</li> <li>Number of best-of-3 set matches won</li> <li>Number of sets won</li> <li>Number of games won</li> <li>Number of sets lost</li> <li>Number of games lost</li> </ul>
-4
2016-08-26T15:27:53Z
39,170,655
<p>Since your stats seem to be separated by a whitespace delimiter, you can use <code>str.split()</code> to separate each of your stats into a list. here is a demo:</p> <pre><code>stats = "Djokovic:Murray:2-6,6-7,7-6,6-3,6-1 Murray:Djokovic:6-3,4-6,6-4,6-3" def compile_stats(stats): stats_lst = list(stats.split(" ")) # using str.split to # split the string every time whitespace is found. return stats_lst print(compile_stats(stats)) # output:['Djokovic:Murray:2-6,6-7,7-6,6-3,6-1', 'Murray:Djokovic:6-3,4-6,6-4,6-3'] </code></pre> <p>It takes each stat, and makes it its own separate list item.</p>
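<p>Once each match is its own string, the statistics listed in the question reduce to splitting again on <code>:</code> and <code>-</code>. A sketch that assumes the <code>Winner:Loser:sets</code> format shown in the question:</p>

```python
def parse_match(line):
    # Assumed format from the question: "PlayerA:PlayerB:2-6,6-7,7-6,6-3,6-1"
    player_a, player_b, scores = line.split(":")
    sets = [tuple(int(games) for games in s.split("-"))
            for s in scores.split(",")]
    return player_a, player_b, sets

a, b, sets = parse_match("Djokovic:Murray:2-6,6-7,7-6,6-3,6-1")
sets_won_by_a = sum(x > y for x, y in sets)
print(a, b, len(sets), sets_won_by_a)   # Djokovic Murray 5 3
```

<p>From <code>sets</code> you can also classify best-of-5 vs best-of-3 matches via <code>len(sets)</code> and total games won or lost by summing over the tuples.</p>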
0
2016-08-26T16:17:42Z
[ "python", "python-2.7", "python-3.x" ]
Dynamically Splitting Rows in dataframe
39,169,867
<p>I need to take a CSV file, split the rows, and have them cascade. The input CSV can have a varying number of columns (always even), but will always be split the same way. I decided to use Pandas because with some files the output will be 500,000 rows and I thought it would speed things up.</p> <p>Input:</p> <pre><code>h1 h2 h3 h4 h5 h6 A1 A2 A3 A4 A5 A6 B1 B2 B3 B4 B5 B6 </code></pre> <p>Expected Output:</p> <pre><code>h1 h2 h3 h4 h5 h6 A1 A2 A1 A2 A3 A4 A1 A2 A3 A4 A5 A6 B1 B2 B1 B2 B3 B4 B1 B2 B3 B4 B5 B6 </code></pre> <p>I tried using the code below (cobbled together from some searching and my own edits); as you can see, it is close, but not quite what I need.</p> <pre><code>importFile = pd.read_csv('file.csv') df = df_importFile = pd.DataFrame(importFile) index_cols = ['h1'] cols = [c for c in df if c not in index_cols] df2 = df.set_index(index_cols).stack().reset_index(level=1, drop=True).to_frame('Value') df2 = pd.concat([pd.Series([v if i % len(cols) == n else '' for i, v in enumerate(df2.Value)], name=col) for n, col in enumerate(cols)], axis=1).set_index(df2.index) df2.to_csv('output.csv') </code></pre> <p>That gives the following:</p> <pre><code>h1 h2 h3 h4 h5 h6 A1 A2 A1 A3 A1 A4 A1 A5 A1 A6 </code></pre>
3
2016-08-26T15:33:29Z
39,170,633
<pre><code># take number of columns and divide by 2 # this is the number of pairs pairs = df.shape[1] // 2 # np.repeat takes the number of rows and returns an object to slice # the dataframe array df.values... then slice... result should be # of length pairs * len(df) a = df.values[np.repeat(np.arange(df.shape[0]), pairs)] # row values to condition with as column vector dim0 = (np.arange(a.shape[0]) % (pairs))[:, None ] # column values to condition with as row vector dim1 = np.repeat(np.arange(pairs), 2) # boolean mask to use in np.where generated # via the magic of numpy broadcasting mask = dim0 &gt;= dim1 # QED pd.DataFrame(np.where(mask, a, ''), columns=df.columns) </code></pre> <p><a href="http://i.stack.imgur.com/PazS7.png" rel="nofollow"><img src="http://i.stack.imgur.com/PazS7.png" alt="enter image description here"></a></p>
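<p>What the broadcasted mask does can be seen in a plain-Python sketch of the same cascade: one input row expands into <code>pairs</code> output rows, each revealing two more cells (no numpy needed to follow the logic):</p>

```python
def cascade(row):
    # Output row i keeps the first 2*(i+1) cells and blanks the rest.
    pairs = len(row) // 2
    return [row[:2 * (i + 1)] + [""] * (len(row) - 2 * (i + 1))
            for i in range(pairs)]

for out in cascade(["A1", "A2", "A3", "A4", "A5", "A6"]):
    print(out)
# ['A1', 'A2', '', '', '', '']
# ['A1', 'A2', 'A3', 'A4', '', '']
# ['A1', 'A2', 'A3', 'A4', 'A5', 'A6']
```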
4
2016-08-26T16:16:32Z
[ "python", "csv", "pandas" ]
Dynamically Splitting Rows in dataframe
39,169,867
<p>I need to take a CSV file and split the rows and have them cascade. The input CSV can have a varying amount of columns(always even), but will always be split the same way. I decided to use Pandas because with some files the output will be 500,000 rows and I thought it would speed things up.</p> <p>Input:</p> <pre><code>h1 h2 h3 h4 h5 h6 A1 A2 A3 A4 A5 A6 B1 B2 B3 B4 B5 B6 </code></pre> <p>Expected Output</p> <pre><code>h1 h2 h3 h4 h5 h6 A1 A2 A1 A2 A3 A4 A1 A2 A3 A4 A5 A6 B1 B2 B1 B2 B3 B4 B1 B2 B3 B4 B5 B6 </code></pre> <p>I tried using the code below (cobbled together from some searching and my own edits) as you can see it is close, but not quite what I need.</p> <pre><code>importFile = pd.read_csv('file.csv') df = df_importFile = pd.DataFrame(importFile) index_cols = ['h1'] cols = [c for c in df if c not in index_cols] df2 = df.set_index(index_cols).stack().reset_index(level=1, drop=True).to_frame('Value') df2 = pd.concat([pd.Series([v if i % len(cols) == n else '' for i, v in enumerate(df2.Value)], name=col) for n, col in enumerate(cols)], axis=1).set_index(df2.index) df2.to_csv('output.csv') </code></pre> <p>That gives the following</p> <pre><code>h1 h2 h3 h4 h5 h6 A1 A2 A1 A3 A1 A4 A1 A5 A1 A6 </code></pre>
3
2016-08-26T15:33:29Z
39,170,943
<p>Try this (note that <code>sort_values</code> returns a new frame, so its result has to be assigned back before printing):</p> <pre><code>dfNew = pd.DataFrame() ct = 1 while ct &lt;= df.shape[1]/2 : dfNew = dfNew.append(df[df.columns[:2*ct]]) ct +=1 dfNew = dfNew.sort_values(['h1'], ascending=[True]).reset_index(drop=True).fillna("") print dfNew h1 h2 h3 h4 h5 h6 0 A1 A2 1 A1 A2 A3 A4 2 A1 A2 A3 A4 A5 A6 3 B1 B2 4 B1 B2 B3 B4 5 B1 B2 B3 B4 B5 B6 </code></pre>
3
2016-08-26T16:35:48Z
[ "python", "csv", "pandas" ]
Remove empty spaces or NaNs from lists in column of lists in Python/Pandas Dataframe
39,169,905
<p>I have a Pandas dataframe <code>df</code> that looks like this:</p> <pre><code> A B 1 1 [a,b,d,d] 2 6 [,1,4,d,g] 3 a [w,1,NaN,x,y,2] </code></pre> <p>I need to remove the blank in row 2, and the NaN in row 3 to get:</p> <pre><code> A B 1 1 [a,b,d,d] 2 6 [1,4,d,g] 3 a [w,1,x,y,2] </code></pre> <p>I think applying some kind of lambda list comprehension? </p> <pre><code>df.B=df.B.apply(lambda x: x if x not in ['',np.NaN]) </code></pre> <p>but not working...</p>
1
2016-08-26T15:35:56Z
39,170,109
<p>You need to work on your comprehension skills :)</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame([ [1, ['a','b','d','d']], [6, ['',1,4,'d','g']], ['a', ['w',1,np.nan,'x','y',2]] ], columns=['A', 'B']) df.B.apply(lambda l: [x for x in l if x not in ['', np.nan]]) </code></pre> <p>where <code>l</code> is the current list and <code>x</code> ranges over the elements of <code>l</code>.</p>
3
2016-08-26T15:47:32Z
[ "python", "pandas", null ]
Compare two distributions with different sizes using Python
39,169,913
<p>I want to compare two different distributions, where one has 100 data points, the other 150 data points.</p> <p>In <code>seaborn</code> I am able to do it using <code>lmplot</code> in this way:</p> <pre><code>import pandas as pd import seaborn as sns df = pd.DataFrame(data) sns.lmplot(x="dist1", y="dist2", data=df) </code></pre> <p>considering the input <code>pandas</code> DataFrame as composed by two columns <code>dist1</code> and <code>dist2</code>, each one having the same number of data points.</p> <p>However, this only works with distribution of the same size. Therefore I was thinking about taking percentiles of each distribution. Is there already an implementation of such plot (e.g. in matplotlib, seaborn, statsmodels, plotly..)?</p> <h3>Edit</h3> <p>About closing votes: this question does not belong to <a href="http://stats.stackexchange.com/">CrossValidated</a> SE because I am clearly asking about code or libraries API to compare two distributions, not theoretical questions about distributions or statistical methodologies to analyse them. Here for distribution I only meant: set of data points.</p>
1
2016-08-26T15:36:17Z
39,171,822
<p>Assuming that you want the two data sets on the same axis, see <a href="http://stackoverflow.com/questions/4270301/matplotlib-multiple-datasets-on-the-same-scatter-plot">this</a>. You need a reference to the axis on which you want to draw.</p> <p>sample:</p> <pre><code>import matplotlib.pyplot as plt a = [1.1, 2.8, 14, 21, 23] b = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] fig, ax1 = plt.subplots() ax1.scatter(range(len(a)), a) ax1.scatter(range(len(b)), b) </code></pre>
0
2016-08-26T17:33:07Z
[ "python", "matplotlib", "distribution", "seaborn", "percentile" ]
pandas: all NaNs when subtracting two dataframes
39,169,923
<p>I have two series. I want to subtract one dataframe from another dataframe, even though they have a different number of columns.</p> <pre><code>&gt;df1 index 0 1 2 3 4 5 TOTAL 5 46 56 110 185 629 &gt;df2 index 1 2 3 4 5 Use 25 37 86 151 512 </code></pre> <p>I would assume that subtracting two dataframes with different dimensions would <em>only</em> result in NaNs in the mismatched columns (in this case, Column 0). The remaining columns would be the result of df1[1]-df2[1], df1[2]-df2[2], etc.</p> <pre><code>&gt;df1 - df2 index 0 1 2 3 4 5 TOTAL NaN 21 19 24 34 117 </code></pre> <p>But this is not the case. This is what happens when I subtract the dataframes?</p> <pre><code>&gt;df1 - df2 index 0 1 2 3 4 5 Use NaN NaN NaN NaN NaN NaN TOTAL NaN NaN NaN NaN NaN NaN </code></pre> <p>I also tried just subtracting the values:</p> <pre><code>&gt;df1.values - df2.values Traceback (most recent call last): File "&lt;ipython-input-376-1dc5b3b4ad3e&gt;", line 1, in &lt;module&gt; total_drugs.values-(restraints_drugs.values+norestraints_drugs.values) ValueError: operands could not be broadcast together with shapes (1,6) (1,5) </code></pre> <p>What am I doing wrong? I'm using pandas 0.18.</p>
3
2016-08-26T15:37:15Z
39,170,178
<p>You are subtracting two dataframes. <strong><em>Both</em></strong> column and row indices must match. In your case, the row indices <code>TOTAL</code> and <code>Use</code> do not match.</p> <p>To get what you're looking for, you want to subtract the series <code>df2.ix['Use']</code> from <code>df1</code> </p> <pre><code>df1.sub(df2.squeeze()) </code></pre> <p><a href="http://i.stack.imgur.com/oZWHC.png" rel="nofollow"><img src="http://i.stack.imgur.com/oZWHC.png" alt="enter image description here"></a></p> <p>Or:</p> <pre><code>df1.sub(df2.ix['Use']) </code></pre> <p>Or:</p> <pre><code>df1.sub(df2.loc['Use']) </code></pre> <p>Or:</p> <pre><code>df1 - df2.ix['Use'] </code></pre> <p>Or:</p> <pre><code>df1 - df2.loc['Use'] </code></pre>
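For reference, the alignment behaviour can be reproduced with the sample frames from the question (a minimal sketch, with the frames built inline):

```python
import pandas as pd

df1 = pd.DataFrame([[5, 46, 56, 110, 185, 629]],
                   columns=['0', '1', '2', '3', '4', '5'], index=['TOTAL'])
df2 = pd.DataFrame([[25, 37, 86, 151, 512]],
                   columns=['1', '2', '3', '4', '5'], index=['Use'])

# df1 - df2 aligns on BOTH axes; the row labels 'TOTAL' and 'Use' never
# match, so every cell becomes NaN.  Squeezing df2 down to a Series
# aligns on columns only, leaving NaN just in the unmatched column '0'.
result = df1.sub(df2.squeeze())
print(result)
```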
3
2016-08-26T15:51:19Z
[ "python", "pandas" ]
How do I produce a sound signal from an array?
39,169,933
<p>I'm working on a project which requires some sound processing. I know how to record the sound and convert the signal into a float in order to process it. The problem is, that I don't know how to convert those numbers back to bytes in order to play the final processed sound.</p> <p>Imagine an array like this one:</p> <pre><code>[-954.04373976038096, -289.02199657142637, 603.07726299005469, 558.24833180011706, -252.49007227640698, -884.07367717525278, -754.89044791362232] </code></pre> <p>And I need to convert it to something similar to this, in order to play the sound:</p> <pre><code>[b'\x92\xffQ\xffO\xff\xad\xff\x12\x00\xfc\xfff\xff\xe4\xfe\xee\xfeC\xffA'] </code></pre> <p>If I convert each number to bytes using <code>bytes()</code> and play, it I just get noise. When I convert it back to a float in order to see what's happening, it has a different value than the original float.</p>
0
2016-08-26T15:37:42Z
39,202,731
<p>First, you'll need to know the maximum range of your values. Given your values, that might be from <code>-2000.0</code> to <code>2000.0</code>, but I have no idea, so my guess is most likely wrong. Typically, the numbers in a floating point audio signal range from <code>-1.0</code> to <code>1.0</code>. Obviously, you have much larger values. If you have a reason for that, it's OK, but if not, you should probably scale your signals to the range from <code>-1.0</code> to <code>1.0</code>. Many applications and libraries use this convention.</p> <p>Then, you'll need to know the proper target format. There is no way to know that from your question. For example, your target format might be signed 16-bit integers in "little endian" byte order.</p> <p>To convert the values, you'll first have to divide all input values by the maximum possible (absolute) value. If your data ranges from <code>-1.0</code> to <code>1.0</code>, this is a no-op. Then, multiply those values with the maximum number of your target format. If your target format is 16-bit integers, that's <code>2**15</code>, or <code>32768</code>. Actually, the largest signed 16-bit integer is one less (because <code>0</code> needs to be stored, too), namely <code>32767</code>, so you should probably use this value to avoid overflow. The resulting values are already correct, but they are still floating point values. So you should convert them to <code>int</code>. Finally, you can convert those integer values to <code>bytes</code>, e.g. using the <code>struct</code> module. There you'll have to make sure to specify the correct <a href="https://docs.python.org/3/library/struct.html#format-strings" rel="nofollow">format string</a>, e.g. <code>'&lt;h'</code> for little-endian signed 16-bit numbers.</p> <p>Having said all that, it's probably much easier to use a sound I/O library that does the necessary conversions for you, e.g. 
the <a href="http://python-sounddevice.readthedocs.io/" rel="nofollow">sounddevice</a> module. You'll still need to scale your floating point values to the range from <code>-1.0</code> to <code>1.0</code>, but the rest can be done automatically. If you are using NumPy arrays it's even simpler, but it also works with plain Python buffers.</p>
0
2016-08-29T09:25:25Z
[ "python", "audio", "byte" ]
Change an XML Value in Python
39,169,961
<p>If I have an XML file such as below, how would I be able to change the version from 50 to 51?</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;config&gt; &lt;updateCheck seconds="2" /&gt; &lt;unturnedVersion version="50" /&gt; &lt;unturnedFolder recoveryBundlesAfterUpdates="false" /&gt; &lt;rocket useRocket="true" apikey=""/&gt; &lt;steam username="" password="" /&gt; &lt;steamUpdates validate="true" /&gt; &lt;servers rconEnabled="false"&gt; &lt;server name="server1" rconPort="27013" rconPassword="pass" /&gt; &lt;server name="server2" rconPort="27014" rconPassword="pass" /&gt; &lt;/servers&gt; &lt;notifyBefore seconds="60" /&gt; &lt;/config&gt; </code></pre> <p>I've tried multiple methods to do it, and some don't do anything or it just creates a new version of the unturnedVersion with 51 at the bottom of the code. I want to simply change 50 to 51 or any other value I set.</p> <p>Thanks!</p>
1
2016-08-26T15:39:31Z
39,170,010
<p>Use <a href="https://docs.python.org/2/library/xml.etree.elementtree.html" rel="nofollow"><code>xml.etree.ElementTree</code></a>. Locate the element via, for example, <code>find()</code>, update the <code>version</code> attribute through the <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.attrib" rel="nofollow"><code>.attrib</code> dictionary</a> of an element:</p> <pre><code>import xml.etree.ElementTree as ET data = """&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;config&gt; &lt;updateCheck seconds="2" /&gt; &lt;unturnedVersion version="50" /&gt; &lt;unturnedFolder recoveryBundlesAfterUpdates="false" /&gt; &lt;rocket useRocket="true" apikey=""/&gt; &lt;steam username="" password="" /&gt; &lt;steamUpdates validate="true" /&gt; &lt;servers rconEnabled="false"&gt; &lt;server name="server1" rconPort="27013" rconPassword="pass" /&gt; &lt;server name="server2" rconPort="27014" rconPassword="pass" /&gt; &lt;/servers&gt; &lt;notifyBefore seconds="60" /&gt; &lt;/config&gt;""" root = ET.fromstring(data) unturned_version = root.find("unturnedVersion") unturned_version.attrib["version"] = "51" print(ET.tostring(root)) </code></pre> <p>Prints:</p> <pre><code>&lt;config&gt; &lt;updateCheck seconds="2" /&gt; &lt;unturnedVersion version="51" /&gt; &lt;unturnedFolder recoveryBundlesAfterUpdates="false" /&gt; &lt;rocket apikey="" useRocket="true" /&gt; &lt;steam password="" username="" /&gt; &lt;steamUpdates validate="true" /&gt; &lt;servers rconEnabled="false"&gt; &lt;server name="server1" rconPassword="pass" rconPort="27013" /&gt; &lt;server name="server2" rconPassword="pass" rconPort="27014" /&gt; &lt;/servers&gt; &lt;notifyBefore seconds="60" /&gt; &lt;/config&gt; </code></pre> <p>Note that, if you want to increment the existing version, use:</p> <pre><code>unturned_version.attrib["version"] = str(int(unturned_version.attrib["version"]) + 1) </code></pre> <hr> <p>And, if you read an XML from file, use 
<code>ET.parse()</code>:</p> <pre><code>import xml.etree.ElementTree as ET tree = ET.parse("input.xml") root = tree.getroot() unturned_version = root.find("unturnedVersion") unturned_version.attrib["version"] = str(int(unturned_version.attrib["version"]) + 1) print(ET.tostring(root)) </code></pre>
4
2016-08-26T15:41:59Z
[ "python", "xml" ]
Unable to append data to array
39,169,980
<p>I am retrieving a record set from a database.<br> Then using a <strong>for</strong> statement I am trying to construct my data to match a 3rd party API.</p> <p>But I get this error and can't figure it out: </p> <blockquote> <p>"errorType": "TypeError", "errorMessage": "list indices must be integers, not str"<br> "messages['english']['merge_vars']['vars'].append({"</p> </blockquote> <p>Below is my code:</p> <pre><code>cursor = connect_to_database() records = get_records(cursor) template = dict() messages = dict() template['english'] = "SOME_TEMPLATE reminder-to-user-english" messages['english'] = { 'subject': "Reminder (#*|code|*)", 'from_email': 'mail@mail.com', 'from_name': 'Notifier', 'to': [], 'merge_vars': [], 'track_opens': True, 'track_clicks': True, 'important': True } for record in records: record = dict(record) if record['lang'] == 'english': messages['english']['to'].append({ 'email': record['email'], 'type': 'to' }) messages['english']['merge_vars'].append({ 'rcpt': record['email'] }) for (key, value) in record.iteritems(): messages['english']['merge_vars']['vars'].append({ 'name': key, 'content': value }) else: template['other'] = "SOME_TEMPLATE reminder-to-user-other" close_database_connection() return messages </code></pre> <p>The goal is to get something like this below: </p> <pre><code>messages = { 'subject': "...", 'from_email': "...", 'from_name': "...", 'to': [ { 'email': '...', 'type': 'to', }, { 'email': '...', 'type': 'to', } ], 'merge_vars': [ { 'rcpt': '...', 'vars': [ { 'content': '...', 'name': '...' }, { 'content': '...', 'name': '...' } ] }, { 'rcpt': '...', 'vars': [ { 'content': '...', 'name': '...' }, { 'content': '...', 'name': '...' } ] } ] } </code></pre>
1
2016-08-26T15:40:30Z
39,170,069
<p>What the error is saying is that you are trying to index a list with a string rather than an integer.</p> <p>I believe your mistake is in this line:</p> <pre><code>messages['english']['merge_vars']['vars'].append({..}) </code></pre> <p>You declared <code>merge_vars</code> as a list like so:</p> <pre><code>'merge_vars': [] </code></pre> <p>So, you either make it a <code>dict</code> like this:</p> <pre><code>'merge_vars': {} </code></pre> <p>Or, use it as a list:</p> <pre><code>messages['english']['merge_vars'].append({..}) </code></pre> <p>Hope it helps</p>
0
2016-08-26T15:45:08Z
[ "python", "python-2.7" ]
Unable to append data to array
39,169,980
<p>I am retrieving a record set from a database.<br> Then using a <strong>for</strong> statement I am trying to construct my data to match a 3rd party API.</p> <p>But I get this error and can't figure it out: </p> <blockquote> <p>"errorType": "TypeError", "errorMessage": "list indices must be integers, not str"<br> "messages['english']['merge_vars']['vars'].append({"</p> </blockquote> <p>Below is my code:</p> <pre><code>cursor = connect_to_database() records = get_records(cursor) template = dict() messages = dict() template['english'] = "SOME_TEMPLATE reminder-to-user-english" messages['english'] = { 'subject': "Reminder (#*|code|*)", 'from_email': 'mail@mail.com', 'from_name': 'Notifier', 'to': [], 'merge_vars': [], 'track_opens': True, 'track_clicks': True, 'important': True } for record in records: record = dict(record) if record['lang'] == 'english': messages['english']['to'].append({ 'email': record['email'], 'type': 'to' }) messages['english']['merge_vars'].append({ 'rcpt': record['email'] }) for (key, value) in record.iteritems(): messages['english']['merge_vars']['vars'].append({ 'name': key, 'content': value }) else: template['other'] = "SOME_TEMPLATE reminder-to-user-other" close_database_connection() return messages </code></pre> <p>The goal is to get something like this below: </p> <pre><code>messages = { 'subject': "...", 'from_email': "...", 'from_name': "...", 'to': [ { 'email': '...', 'type': 'to', }, { 'email': '...', 'type': 'to', } ], 'merge_vars': [ { 'rcpt': '...', 'vars': [ { 'content': '...', 'name': '...' }, { 'content': '...', 'name': '...' } ] }, { 'rcpt': '...', 'vars': [ { 'content': '...', 'name': '...' }, { 'content': '...', 'name': '...' } ] } ] } </code></pre>
1
2016-08-26T15:40:30Z
39,170,078
<p>This code seems to indicate that <code>messages['english']['merge_vars']</code> is a list, since you initialize it as such:</p> <pre><code>messages['english'] = { ... 'merge_vars': [], ... } </code></pre> <p>And call <code>append</code> on it:</p> <pre><code>messages['english']['merge_vars'].append({ 'rcpt': record['email'] }) </code></pre> <p>However later, you treat it as a dictionary when you call:</p> <pre><code>messages['english']['merge_vars']['vars'] </code></pre> <p>It seems what you want is something more like:</p> <pre><code>vars = [{'name': key, 'content': value} for key, value in record.iteritems()] messages['english']['merge_vars'].append({ 'rcpt': record['email'], 'vars': vars, }) </code></pre> <p>Then, the <code>for</code> loop is unnecessary.</p>
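A minimal self-contained sketch of that restructuring, using two hypothetical records in place of the database rows:

```python
records = [
    {'lang': 'english', 'email': 'a@example.com', 'code': 'X1'},
    {'lang': 'english', 'email': 'b@example.com', 'code': 'X2'},
]

message = {'to': [], 'merge_vars': []}
for record in records:
    message['to'].append({'email': record['email'], 'type': 'to'})
    # each merge_vars entry carries its own 'vars' list, so no string
    # index into the outer list is ever needed
    message['merge_vars'].append({
        'rcpt': record['email'],
        'vars': [{'name': k, 'content': v} for k, v in record.items()],
    })
```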
2
2016-08-26T15:45:56Z
[ "python", "python-2.7" ]
Unable to append data to array
39,169,980
<p>I am retrieving a record set from a database.<br> Then using a <strong>for</strong> statement I am trying to construct my data to match a 3rd party API.</p> <p>But I get this error and can't figure it out: </p> <blockquote> <p>"errorType": "TypeError", "errorMessage": "list indices must be integers, not str"<br> "messages['english']['merge_vars']['vars'].append({"</p> </blockquote> <p>Below is my code:</p> <pre><code>cursor = connect_to_database() records = get_records(cursor) template = dict() messages = dict() template['english'] = "SOME_TEMPLATE reminder-to-user-english" messages['english'] = { 'subject': "Reminder (#*|code|*)", 'from_email': 'mail@mail.com', 'from_name': 'Notifier', 'to': [], 'merge_vars': [], 'track_opens': True, 'track_clicks': True, 'important': True } for record in records: record = dict(record) if record['lang'] == 'english': messages['english']['to'].append({ 'email': record['email'], 'type': 'to' }) messages['english']['merge_vars'].append({ 'rcpt': record['email'] }) for (key, value) in record.iteritems(): messages['english']['merge_vars']['vars'].append({ 'name': key, 'content': value }) else: template['other'] = "SOME_TEMPLATE reminder-to-user-other" close_database_connection() return messages </code></pre> <p>The goal is to get something like this below: </p> <pre><code>messages = { 'subject': "...", 'from_email': "...", 'from_name': "...", 'to': [ { 'email': '...', 'type': 'to', }, { 'email': '...', 'type': 'to', } ], 'merge_vars': [ { 'rcpt': '...', 'vars': [ { 'content': '...', 'name': '...' }, { 'content': '...', 'name': '...' } ] }, { 'rcpt': '...', 'vars': [ { 'content': '...', 'name': '...' }, { 'content': '...', 'name': '...' } ] } ] } </code></pre>
1
2016-08-26T15:40:30Z
39,170,114
<p>Your issue, as the error message says, is here: <code>messages['english']['merge_vars']['vars'].append({'name': key,'content': value})</code></p> <p>The item <code>messages['english']['merge_vars']</code> is a <code>list</code>, so you are accessing an element as in <code>list[i]</code>, where <code>i</code> cannot be a string, as is the case with <code>'vars'</code>. You probably either need to drop the <code>['vars']</code> part or set <code>messages['english']['merge_vars']</code> to be a <code>dict</code> so that it allows for additional indexing.</p>
0
2016-08-26T15:47:51Z
[ "python", "python-2.7" ]
regular expression unicode character does not match
39,170,123
<p>I am trying to use regular expression over a text that contains some special character like à,è,ù etc. </p> <pre><code>filter_2 = ur'(?:^\|\s+)?(?:(?:main_interests)|(?:influenced)|(?:influences))\s+?=[\s\W]+?(?:[\w}])*?([\d\w\s\-()*–&amp;;\[\]|.&lt;&gt;:/",\']*)(?=\n)' compiled = re.compile(filter_2, flags=re.U | re.M) filter_list = re.findall(compiled, information) </code></pre> <p>The text below is the result of the evaluation of the expression.</p> <blockquote> <p>[[Pedro Calderón de la Barca|Calderón]], [[Christian Fürchtegott Gellert|Gellert]], [[Oliver Goldsmith|Goldsmith]], [[Hafez]], [[Johann Gottfried Herder|Herder]], [[Homer]], [[Kālidāsa]], [[Kant]], [[Friedrich Gottlieb Klopstock|Klopstock]], [[Gotthold Ephraim Lessing|Lessing]], [[Carl Linnaeus|Linnaeus]], [[James Macpherson|Macpherson]], [[Jean-Jacques Rousseau|Rousseau]], [[Friedrich Schiller|Schiller]], [[William Shakespeare|Shakespeare]], [[Spinoza]], [[Emanuel Swedenborg|Swedenborg]],[[Karl Robert Mandelkow]], Bodo Morawe: Goethes Briefe. 2. edition. Vol. 1: Briefe der Jahre 1764–1786. ''Christian Wegner'', Hamburg 1968, p.&nbsp;709 [[Johann Joachim Winckelmann|Winckelmann]]`</p> </blockquote> <p>Now, when i try to use another regular expression over the above text in order to extrapolate the words in the square brackets, the result is wrong. 
All the words that represent a special character, like à ù or è, are removed and the result is not the one expected.</p> <pre><code>filter_6 = ur'(?&lt;=\[\[)([\w\s.-]+)((?=]])|(?=|))' another_compiled = re.compile(filter_6, flags=re.U | re.M) another_filtered_list = re.findall(another_compiled, (str(filter_list))) </code></pre> <p>These are my results:</p> <blockquote> <p>[('Pedro Calder', ''), ('Christian F', ''), ('Oliver Goldsmith', ''), ('Hafez', ''), ('Johann Gottfried Herder', ''), ('Homer', ''), ('K', ''), ('Kant', ''), ('Friedrich Gottlieb Klopstock', ''), ('Gotthold Ephraim Lessing', ''), ('Carl Linnaeus', ''), ('James Macpherson', ''), ('Jean-Jacques Rousseau', ''), ('Friedrich Schiller', ''), ('William Shakespeare', ''), ('Spinoza', ''), ('Emanuel Swedenborg', ''), ('Karl Robert Mandelkow', ''), ('Johann Joachim Winckelmann', ''), ('Thomas Carlyle', ''), ('Ernst Cassirer', ''), ('Charles Darwin', ''), ('Sigmund Freud', ''), ('G', ''), ('Andr', ''), ('Hermann Hesse', ''), ('G.W.F. Hegel', ''), ('Muhammad Iqbal', ''), ('Daisaku Ikeda', ''), ('Carl Gustav Jung', ''), ('Milan Kundera', ''), ('S', ''), ('Jean-Baptiste Lamarck', ''), ('Joaquim Maria Machado de Assis', ''), ('Thomas Mann', ''), ('Friedrich Nietzsche', ''), ('France Pre', ''), ('Grigol Robakidze', ''), ('Friedrich Schiller', ''), ('Oswald Spengler', ''), ('Max Stirner', ''), ('Friedrich Wilhelm Joseph Schelling', ''), ('Arthur Schopenhauer', ''), ('Oswald Spengler', ''), ('Rudolf Steiner', ''), ('Henry David Thoreau', ''), ('Nikola Tesla', ''), ('Ivan Turgenev', ''), ('Ludwig Wittgenstein', ''), ('Richard Wagner', ''), ('Leopold von Ranke', '')]</p> </blockquote> <p>These are the results i would like to achieve</p> <blockquote> <p>MATCH 1 1. [2-28] <code>Pedro Calderón de la Barca</code> MATCH 2 1. [43-72] <code>Christian Fürchtegott Gellert</code> MATCH 3 1. [86-102] <code>Oliver Goldsmith</code> MATCH 4 1. [118-123] <code>Hafez</code> MATCH 5 1. 
[129-152] <code>Johann Gottfried Herder</code> MATCH 6 1. [165-170] <code>Homer</code> MATCH 7 1. [176-184] <code>Kālidāsa</code> MATCH 8 1. [190-194] <code>Kant</code> MATCH 9 1. [200-228] <code>Friedrich Gottlieb Klopstock</code> MATCH 10 1. [244-268] <code>Gotthold Ephraim Lessing</code> MATCH 11 1. [282-295] <code>Carl Linnaeus</code> MATCH 12 1. [310-326] <code>James Macpherson</code> MATCH 13 1. [343-364] <code>Jean-Jacques Rousseau</code> MATCH 14 1. [379-397] <code>Friedrich Schiller</code> MATCH 15 1. [412-431] <code>William Shakespeare</code> MATCH 16 1. [449-456] <code>Spinoza</code> MATCH 17 1. [462-480] <code>Emanuel Swedenborg</code> MATCH 18 1. [501-522] <code>Karl Robert Mandelkow</code> MATCH 19 1. [659-685] <code>Johann Joachim Winckelmann</code></p> </blockquote> <p>All the regular expressions were tested online and they work perfectly. Is there a way to actually include these special characters?</p>
1
2016-08-26T15:48:26Z
39,170,277
<p>In <strong>Python 3</strong>, the regex doesn't compile. This seemed to work for me when I changed:</p> <pre><code>filter_6 = ur'(?&lt;=\[\[)([\w\s.-]+)((?=]])|(?=|))' </code></pre> <p>to just a unicode (not raw) string:</p> <pre><code>filter_6 = u'(?&lt;=\[\[)([\w\s.-]+)((?=]])|(?=|))' </code></pre> <p>In <strong>Python 2</strong>, I believe the issue is the casting of the list to a string. Changing <code>str(filter_list)</code> to <code>' '.join(filter_list)</code> seemed to work for me.</p>
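For reference, the extraction also runs cleanly as a plain Python 3 raw string, since <code>\w</code> matches Unicode word characters by default there; a minimal sketch (the <code>|</code> inside the lookahead is escaped here, an adjustment to the original pattern):

```python
import re

text = '[[Pedro Calderón de la Barca|Calderón]], [[Kālidāsa]], [[Kant]]'
# lookbehind anchors at '[['; the lookahead stops at ']]' or the '|' alias separator
pattern = re.compile(r'(?<=\[\[)([\w\s.-]+)(?=]]|\|)')
names = pattern.findall(text)
print(names)
```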
2
2016-08-26T15:56:56Z
[ "python", "regex", "unicode" ]