Columns: title (string, 10-172 chars); question_id (int64, 469-40.1M); question_body (string, 22-48.2k chars); question_score (int64, -44 to 5.52k); question_date (string, 20 chars); answer_id (int64, 497-40.1M); answer_body (string, 18-33.9k chars); answer_score (int64, -38 to 8.38k); answer_date (string, 20 chars); tags (list)
python process takes time to start in django project running on nginx and uwsgi
38,891,879
<p>I am starting a process using Python's multiprocessing module. The process is invoked by a POST request sent in a Django project. When I use the development server (python manage.py runserver), the POST request starts the process and finishes immediately.</p> <p>I deployed the project to production using nginx and uWSGI.</p> <p>Now when I send the same POST request, it takes around 5-7 minutes to complete. It only happens with those POST requests where I am starting a process; other POST requests work fine.</p> <p>What could be the reason for this delay, and how can I solve it?</p>
4
2016-08-11T09:04:21Z
39,093,474
<p>I figured out a workaround (I don't know if it qualifies as an answer). I recorded the background work as a job in the database and used a cronjob to check whether any job is pending; if there is, the cron starts a background process for that job and exits.</p> <p>The cron runs every minute, so there is not much delay. This improved performance by letting heavy tasks like this run separately from the main application.</p>
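A minimal sketch of the pattern this answer describes. The job store here is a plain dict standing in for a database table, and all names are illustrative (the original post gives no code): the web request only records a job as pending, and a cron-launched worker like this processes whatever is pending and exits.

```python
def fetch_pending(job_store):
    """Return the ids of jobs still waiting to be processed."""
    return [job_id for job_id, status in job_store.items() if status == "pending"]

def run_worker(job_store, handler):
    """One cron invocation: process everything pending, mark it done, exit.

    In the real deployment `handler` would start the heavy work (e.g.
    video encoding) as a separate process, outside the uwsgi worker.
    """
    for job_id in fetch_pending(job_store):
        handler(job_id)
        job_store[job_id] = "done"

jobs = {1: "pending", 2: "done", 3: "pending"}
run_worker(jobs, handler=lambda job_id: None)
print(jobs)  # {1: 'done', 2: 'done', 3: 'done'}
```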
0
2016-08-23T06:04:35Z
[ "python", "django", "nginx", "process", "uwsgi" ]
Reading/Writing python Dictionary to Redis cache on aws crashes server
38,891,893
<p>I am developing a computation-intensive Django application, using Celery to perform time-consuming tasks and Redis both as a broker and for caching.</p> <p>The Redis cache is used to share a large dictionary structure across Celery tasks.</p> <p>I have a REST API that frequently writes/updates a Python dictionary in the Redis cache (about once per second). Each API call initiates a new task.</p> <p>On localhost it all works well, but on AWS the Elastic Beanstalk app crashes after running for some time.</p> <p>It does not crash when the dictionary structure is empty. Here is how I update the cache:</p> <pre><code>r = redis.StrictRedis(host=Constants.REDIS_CACHE_ADDRESS, port=6379, db=0) mydict_obj = r.get("mydict") if mydict_obj: mydict = eval(str(mydict_obj)) else: mydict = {} for hash_instance in all_hashes: if hash_instance[1] in mydict: mydict[hash_instance[1]].append((str(hash_instance[0]), str(data.recordId))) else: mydict[hash_instance[1]] = [(str(hash_instance[0]), str(data.recordId))] r.set("mydict", mydict) </code></pre> <p>I have been stuck here for more than a month and can't find a solution: why does the Elastic Beanstalk app crash on AWS when it works fine on localhost?</p>
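As an aside, the `eval(str(...))` round-trip in the question's snippet is a fragile way to serialize the dictionary. A sketch of the same update loop using `json` instead, with a plain dict standing in for the Redis connection (names and sample data are illustrative, not from the original project; note that JSON turns tuples into lists):

```python
import json

store = {}  # stands in for the Redis connection in this sketch

def update_mydict(all_hashes, record_id):
    raw = store.get("mydict")
    mydict = json.loads(raw) if raw else {}
    for hash_instance in all_hashes:
        # hash_instance[1] is the key and hash_instance[0] the value,
        # mirroring the original loop; setdefault replaces the if/else.
        mydict.setdefault(hash_instance[1], []).append(
            [str(hash_instance[0]), str(record_id)])
    store["mydict"] = json.dumps(mydict)

update_mydict([("h1", "k1"), ("h2", "k1")], record_id=7)
print(json.loads(store["mydict"]))  # {'k1': [['h1', '7'], ['h2', '7']]}
```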
0
2016-08-11T09:05:00Z
38,906,242
<p>From the Redis documentation on <code>maxmemory</code>:</p> <blockquote> <p>Don't use more memory than the specified amount of bytes. When the memory limit is reached Redis will try to remove keys according to the eviction policy selected (see maxmemory-policy).</p> <p>If Redis can't remove keys according to the policy, or if the policy is set to 'noeviction', Redis will start to reply with errors to commands that would use more memory, like SET, LPUSH, and so on, and will continue to reply to read-only commands like GET.</p> <p>This option is usually useful when using Redis as an LRU cache, or to set a hard memory limit for an instance (using the 'noeviction' policy).</p> </blockquote>
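The settings this answer quotes live in redis.conf. A typical cache-style configuration looks like the fragment below (the values are illustrative, not taken from the original post):

```
# redis.conf: cap memory use and evict least-recently-used keys
# instead of erroring out on writes once the cap is reached.
maxmemory 256mb
maxmemory-policy allkeys-lru
```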
0
2016-08-11T21:12:07Z
[ "python", "django", "amazon-web-services", "redis", "celery" ]
Python script is called but not executed from batch file
38,891,928
<p>I have a Python script that, when run from Eclipse, does what I want without any errors. I now want to create a batch file that will run my script in a loop (infinitely). The first problem is that when I run the bat file, I get a second cmd window that shows the logging from my Python script (which shows me that it is running), but when the main process of the script starts (which can take from 1 minute to some hours) it exits within a few seconds without actually running the whole script. I have used start /wait but it doesn't seem to work. Here is the simple batch file I have created:</p> <pre><code>@echo off :start start /wait C:\Python32\python.exe C:\Users\some_user\workspace\DMS_GUI\CheckCreateAdtf\NewTest.py goto start </code></pre> <p>So I want the bat file to run my script, wait for it to finish (even if it takes some hours) and then run it again. I have also tried creating a bat file that calls the bat file shown above with start /wait, with no success. Optimally I would like it to keep the window open with all the logging that I have in my script, but that is another issue that can be solved later.</p> <pre><code>def _open_read_file(self): logging.debug("Checking txt file with OLD DB-folder sizes") content = [] with open(self._pathToFileWithDBsize) as f: content = f.read().splitlines() for p in content: name,size = (p.split(",")) self._folder_sizes_dic[name] = size def _check_DB(self): logging.debug("Checking current DB size") skippaths = ['OtherData','Aa','Sss','asss','dss','dddd'] dirlist = [ item for item in os.listdir(self._pathToDBparentFolder) if os.path.isdir(os.path.join(self._pathToDBparentFolder, item)) ] for skip in skippaths: if skip in dirlist: dirlist.remove(skip) MB=1024*1024.0 for dir in dirlist: folderPath = self._pathToDBparentFolder +"\\"+str(dir) fso = com.Dispatch("Scripting.FileSystemObject") folder = fso.GetFolder(folderPath) size = str("%.5f"%(folder.Size/MB)) self._DB_folder_sizes_dic[dir] = size def _compare_fsizes(self): logging.debug("Comparing sizes between DB and txt file") for (key, value) in self._DB_folder_sizes_dic.items(): if key in self._folder_sizes_dic: if (float(self._DB_folder_sizes_dic.get(key)) - float(self._folder_sizes_dic.get(key)) &lt; 100.0 and float(self._DB_folder_sizes_dic.get(key)) - float(self._folder_sizes_dic.get(key)) &gt; -30.0): pass else: self._changed_folders.append(key) else: self._changed_folders.append(key) def _update_file_with_new_folder_sizes(self): logging.debug("Updating txt file with new DB sizes") file = open(self._pathToFileWithDBsize,'w') for key,value in self._DB_folder_sizes_dic.items(): file.write(str(key)+","+str(value)+"\n") def _create_paths_for_changed_folders(self): logging.debug("Creating paths to parse for the changed folders") full_changed_folder_parent_paths = [] for folder in self._changed_folders: full_changed_folder_parent_paths.append(self._pathToDBparentFolder +"\\"+str(folder)) for p in full_changed_folder_parent_paths: for path, dirs, files in os.walk(p): if not dirs: self._full_paths_to_check_for_adtfs.append(path) def _find_dat_files_with_no_adtf(self): logging.debug("Finding files with no adtf txt") for path in self._full_paths_to_check_for_adtfs: for path, dirs, files in os.walk(path): for f in files: if f.endswith('_AdtfInfo.txt'): hasAdtfFilename = f.replace('_AdtfInfo.txt', '.dat') self.hasADTFinfos.add(path + "\\" + hasAdtfFilename) self.adtf_files = self.adtf_files + 1 elif f.endswith('.dat'): self.dat_files = self.dat_files + 1 self._dat_file_paths.append(path + "\\" + f) logging.debug("Checking which files have AdtfInfo.txt, This will take some time depending on the number of .dat files ") for file in self._dat_file_paths: if file not in self.hasADTFinfos: self._dat_with_no_adtf.append(file) self.files_with_no_adtf = len(self._dat_with_no_adtf) #self.unique_paths_from_log = set(full_paths_to_check_for_adtfs) logging.debug("Files found with no adtf " + str(self.files_with_no_adtf)) def _create_adtf_info(self): logging.debug("Creating Adtf txt for dat files") files_numbering = 0 for file in self._dat_with_no_adtf: file_name = str(file) adtf_file_name_path = file.replace('.dat','_AdtfInfo.txt') exe_path = r"C:\Users\some_user\Desktop\some.exe " path_to_dat_file = file_name path_to_adtf_file = adtf_file_name_path command_to_subprocess = exe_path + path_to_dat_file + " -d "+ path_to_adtf_file #Call VisionAdtfInfoToCsv subprocess.Popen(command_to_subprocess,stdout=subprocess.PIPE,stderr=subprocess.PIPE) process_response = subprocess.check_output(command_to_subprocess) #if index0 in response, adtf could not be created because .dat file is probably corrupted if "index0" in str(process_response): self._corrupted_files_paths.append(path_to_dat_file) self._files_corrupted = self._files_corrupted + 1 self._corrupted_file_exist_flag = True else: self._files_processed_successfully = self._files_processed_successfully + 1 files_numbering = files_numbering + 1 </code></pre> <p>The functions are called in this order:</p> <pre><code> self._open_read_file() self._check_DB() self._compare_fsizes() self._create_paths_for_changed_folders() self._find_dat_files_with_no_adtf() self._create_adtf_info() self._check_DB() self._update_file_with_new_folder_sizes() </code></pre>
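The run-wait-rerun loop the batch file attempts can also be written as a small Python supervisor, since `subprocess.run()` blocks until the child exits however long that takes. This is a sketch, not part of the original question; the `max_runs` parameter is added purely so the loop can be exercised finitely:

```python
import subprocess
import sys

def run_on_repeat(cmd, max_runs=None):
    """Run cmd to completion, then start it again.

    subprocess.run() blocks until the child exits, which is what the
    batch file's start /wait was meant to achieve.  With the default
    max_runs=None this loops forever, like the batch file's goto.
    """
    runs = 0
    while max_runs is None or runs < max_runs:
        subprocess.run(cmd)
        runs += 1
    return runs

# e.g. (paths from the question):
# run_on_repeat([r"C:\Python32\python.exe",
#                r"C:\Users\some_user\workspace\DMS_GUI\CheckCreateAdtf\NewTest.py"])
```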
0
2016-08-11T09:06:08Z
38,896,865
<p>OK, it seems that the .exe called in the script was returning an error, and that is why the script was finishing so fast. I thought that the bat file did not wait. I should have placed the .bat file in the .exe's folder; now the whole thing runs perfectly.</p>
0
2016-08-11T12:48:54Z
[ "python", "batch-file" ]
Creating a Cumulative Frequency Column in a Dataframe Python
38,891,974
<p>I am trying to create a new column named 'Cumulative Frequency' in a data frame, where each row contains the sum of all previous frequencies plus the frequency of the current row, as shown here.</p> <p><a href="http://i.stack.imgur.com/jQ39A.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/jQ39A.jpg" alt="enter image description here"></a></p> <p>What is the way to do this?</p>
0
2016-08-11T09:08:22Z
38,892,108
<p>You want <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cumsum.html" rel="nofollow"><code>cumsum</code></a>:</p> <pre><code>df['Cumulative Frequency'] = df['Frequency'].cumsum() </code></pre> <p>Example:</p> <pre><code>In [23]: df = pd.DataFrame({'Frequency':np.arange(10)}) df Out[23]: Frequency 0 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 In [24]: df['Cumulative Frequency'] = df['Frequency'].cumsum() df Out[24]: Frequency Cumulative Frequency 0 0 0 1 1 1 2 2 3 3 3 6 4 4 10 5 5 15 6 6 21 7 7 28 8 8 36 9 9 45 </code></pre>
2
2016-08-11T09:13:54Z
[ "python", "pandas", "dataframe" ]
How to convert id into referenced field in Web2py?
38,892,120
<p>Consider the 3 tables below (<code>A</code>, <code>B</code> &amp; <code>C</code>), where table <code>C</code> has two fields referencing tables <code>A</code> and <code>B</code>.</p> <p>Model:</p> <pre><code>db.define_table('A', Field('A1', 'string', required =True), Field('A2', 'string', required =True), Field('A3', 'string', required =True), format=lambda r: '%s, %s' % (r.A.A1, r.A.A2)) db.define_table('B', Field('B1', 'string', required=True), Field('B2', 'string', required=True), Field('B3', 'string', required=True), format=lambda r: '%s, %s' % (r.B.B1, r.B.B2)) db.define_table('C', Field('C1', db.A), Field('C2', db.B), Field('C3', 'string', required=True), format=lambda r: '%s, %s - %s' % (r.C.C1, r.C.C2)) </code></pre> <p>Controller:</p> <pre><code>def C_view(): if request.args(0) is None: rows = db(db.C).select(orderby=db.C.C1|db.C.C2) else: letter = request.args(0) rows = db(db.C.C1.startswith(letter)).select(orderby=db.C.C1|db.C.C2) return locals() </code></pre> <p>In the corresponding view below I display the 3 fields of table C:</p> <pre><code>... {{ for x in rows:}} &lt;tr&gt; &lt;td&gt;{{=x.C1}}&lt;/td&gt; &lt;td&gt;{{=x.C2}}&lt;/td&gt; &lt;td&gt;{{=x.C3}}&lt;/td&gt; &lt;/tr&gt; {{pass}} ... </code></pre> <p>With this setup, the view displays the foreign ids of C1 &amp; C2. How would I have to modify the model, controller and/or the view to display the corresponding referenced fields rather than the ids? In other words: for <code>C1</code> the view should display <code>r.A.A1, r.A.A2</code>, and for <code>C2</code> the view should display <code>r.B.B1, r.B.B2</code>.</p> <p>Thank you.</p>
0
2016-08-11T09:14:26Z
38,908,849
<p>The web2py syntax to define reference fields is the following, as far as I know:</p> <pre><code>Field('C1', 'reference A'), Field('C2', 'reference B'), </code></pre> <p>Then, in your view, <code>x.C1</code> will be a <code>Row</code> object from the <code>A</code> table, so:</p> <pre><code>&lt;td&gt;{{=x.C1.A1}}, {{=x.C1.A2}}, {{=x.C1.A3}}&lt;/td&gt; &lt;td&gt;{{=x.C2.B1}}, {{=x.C2.B2}}, {{=x.C2.B3}}&lt;/td&gt; </code></pre> <p>Hope it helps!</p>
0
2016-08-12T02:17:04Z
[ "python", "web2py" ]
How to convert id into referenced field in Web2py?
38,892,120
<p>Consider the 3 tables below (<code>A</code>, <code>B</code> &amp; <code>C</code>), where table <code>C</code> has two fields referencing tables <code>A</code> and <code>B</code>.</p> <p>Model:</p> <pre><code>db.define_table('A', Field('A1', 'string', required =True), Field('A2', 'string', required =True), Field('A3', 'string', required =True), format=lambda r: '%s, %s' % (r.A.A1, r.A.A2)) db.define_table('B', Field('B1', 'string', required=True), Field('B2', 'string', required=True), Field('B3', 'string', required=True), format=lambda r: '%s, %s' % (r.B.B1, r.B.B2)) db.define_table('C', Field('C1', db.A), Field('C2', db.B), Field('C3', 'string', required=True), format=lambda r: '%s, %s - %s' % (r.C.C1, r.C.C2)) </code></pre> <p>Controller:</p> <pre><code>def C_view(): if request.args(0) is None: rows = db(db.C).select(orderby=db.C.C1|db.C.C2) else: letter = request.args(0) rows = db(db.C.C1.startswith(letter)).select(orderby=db.C.C1|db.C.C2) return locals() </code></pre> <p>In the corresponding view below I display the 3 fields of table C:</p> <pre><code>... {{ for x in rows:}} &lt;tr&gt; &lt;td&gt;{{=x.C1}}&lt;/td&gt; &lt;td&gt;{{=x.C2}}&lt;/td&gt; &lt;td&gt;{{=x.C3}}&lt;/td&gt; &lt;/tr&gt; {{pass}} ... </code></pre> <p>With this setup, the view displays the foreign ids of C1 &amp; C2. How would I have to modify the model, controller and/or the view to display the corresponding referenced fields rather than the ids? In other words: for <code>C1</code> the view should display <code>r.A.A1, r.A.A2</code>, and for <code>C2</code> the view should display <code>r.B.B1, r.B.B2</code>.</p> <p>Thank you.</p>
0
2016-08-11T09:14:26Z
38,912,459
<p>You need to use <em><code>format: Record representation</code></em> correctly and use <code>render()</code> to convert ids into their respective representations.</p> <p>Your model will look like this:</p> <pre><code>db.define_table('A', Field('A1', 'string', required=True), Field('A2', 'string', required=True), Field('A3', 'string', required=True), format='%(A1)s, %(A2)s') db.define_table('B', Field('B1', 'string', required=True), Field('B2', 'string', required=True), Field('B3', 'string', required=True), format='%(B1)s, %(B2)s') db.define_table('C', Field('C1', db.A), Field('C2', db.B), Field('C3', 'string', required=True)) </code></pre> <p>Then update the controller to use <code>render()</code>, which returns a generator to iterate over all rows.</p> <pre><code>def C_view(): if request.args(0) is None: rows = db(db.C).select(orderby=db.C.C1|db.C.C2).render() else: letter = request.args(0) rows = db(db.C.C1.startswith(letter)).select(orderby=db.C.C1|db.C.C2).render() return locals() </code></pre> <p>Reference:</p> <p><a href="http://www.web2py.com/books/default/chapter/29/06/the-database-abstraction-layer#Rendering-rows-using-represent" rel="nofollow">Rendering rows using represent</a></p> <p><a href="http://www.web2py.com/books/default/chapter/29/06/the-database-abstraction-layer#format--Record-representation" rel="nofollow">Format: Record representation</a></p>
0
2016-08-12T07:37:07Z
[ "python", "web2py" ]
What is the best practice for nested dicts with the MongoEngine ODM?
38,892,192
<p>I have some data like this in my MongoDB:</p> <pre><code> "FetchedZipSize" : { "a" : 1, "b" : 2, "c" : 3, "d" : 4, "e" : 5 }, </code></pre> <p>How do I model it with the MongoEngine ODM?</p>
0
2016-08-11T09:18:01Z
38,892,310
<p>You can use an <a href="http://docs.mongoengine.org/guide/defining-documents.html#embedded-documents" rel="nofollow" title="Embedded documents">embedded document</a> field to create a nested structure of dictionaries. Something like this should work:</p> <pre><code>from mongoengine import Document, EmbeddedDocument, EmbeddedDocumentField, IntField class FetchedZipSize(EmbeddedDocument): a=IntField() b=IntField() class CollectionName(Document): fetchedZipSize = EmbeddedDocumentField(FetchedZipSize) </code></pre>
0
2016-08-11T09:23:23Z
[ "python", "mongodb", "dictionary", "mongoengine" ]
Python oneliner with if statement
38,892,336
<p>I am wondering if it is possible to write the following python if statement in one line. I would also like to know why I am getting the error below:</p> <pre><code>python -c 'a=1; if True: print a; else: a=a+1' File "&lt;string&gt;", line 1 a=1; if True: print a; else: a=a+1 ^ SyntaxError: invalid syntax </code></pre>
1
2016-08-11T09:24:58Z
38,892,554
<p>Only <a href="https://docs.python.org/3/reference/compound_stmts.html#grammar-token-stmt_list" rel="nofollow">simple statements can appear in a semicolon-separated statement list</a>:</p> <pre><code>stmt_list ::= simple_stmt (";" simple_stmt)* [";"] </code></pre> <p>An <code>if</code> statement is a compound statement, so it's invalid syntax to include it.</p> <p>Allowing compound statements in a semicolon-separated list would lead to ambiguity. This is valid syntax:</p> <pre><code>if condition: a = 1; b = 1 </code></pre> <p>Both assignments are only executed if the <code>condition</code> is true, and this is how most people would intuitively read the statement. If we allowed</p> <pre><code>c = 1; if condition: a = 1; b = 1 </code></pre> <p>it would become unclear for readers of the code whether the <code>b = 1</code> is part of the <code>if</code> statement or not.</p> <p>Python uses indentation to delimit code suites, and you can't use indentation in a semicolon-separated statement list.</p>
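A conditional *expression*, unlike the compound `if` statement, is allowed inside a semicolon-separated list, so the intent of such one-liners can often be rewritten that way. A sketch (note an expression branch cannot contain an assignment like the original `else: a=a+1` did, prior to Python 3.8's walrus operator):

```python
# A conditional expression is a simple expression, so it may appear
# where the compound if statement may not:
a = 1
result = a if True else a + 1
print(result)  # 1

# As a shell one-liner (Python 3 print syntax):
#   python -c 'a = 1; print(a if True else a + 1)'
```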
4
2016-08-11T09:35:12Z
[ "python", "if-statement" ]
Sort each row individually between two columns
38,892,353
<p>I have the following pandas dataframe:</p> <pre><code>column_01 column_02 value ccc aaa 1 bbb ddd 34 ddd aaa 98 </code></pre> <p>I need to re-organise the dataframe such that <code>column_01</code> contains which ever value comes first alphabetically between <code>column_01</code> and <code>column_02</code>. The output of the above example would be:</p> <pre><code>column_01 column_02 value aaa ccc 1 bbb ddd 34 aaa ddd 98 </code></pre> <p>I could obviously do this by iterating over the dataframe one row at a time, comparing <code>column_01</code> to <code>column_02</code> to see which comes first alphabetically and swapping them if necessary. The only problem with this is that the dataframe is quite big (over <strong>1 million</strong> rows), so this isn't a very efficient way to do this.</p> <p>Is there a way to do this without iterating over every row individually?</p>
1
2016-08-11T09:25:22Z
38,892,408
<p>You can use:</p> <pre><code>df[['column_01','column_02']] = df[['column_01','column_02']].apply(lambda x: sorted(x.values), axis=1) print (df) column_01 column_02 value 0 aaa ccc 1 1 bbb ddd 34 2 aaa ddd 98 </code></pre> <p>Other solutions:</p> <pre><code>df[['column_01','column_02']] = pd.DataFrame(np.sort(df[['column_01','column_02']].values), index=df.index, columns=['column_01','column_02']) </code></pre> <p>or only with a NumPy array:</p> <pre><code>df[['column_01','column_02']] = np.sort(df[['column_01','column_02']].values) print (df) column_01 column_02 value 0 aaa ccc 1 1 bbb ddd 34 2 aaa ddd 98 </code></pre> <p>The second solution is faster, because <code>apply</code> uses loops:</p> <pre><code>df = pd.concat([df]*1000).reset_index(drop=True) In [177]: %timeit df[['column_01','column_02']] = pd.DataFrame(np.sort(df[['column_01','column_02']].values), index=df.index, columns=['column_01','column_02']) 1000 loops, best of 3: 1.36 ms per loop In [182]: %timeit df[['column_01','column_02']] = np.sort(df[['column_01','column_02']].values) 1000 loops, best of 3: 1.54 ms per loop In [178]: %timeit df[['column_01','column_02']] = (df[['column_01','column_02']].apply(lambda x: sorted(x.values), axis=1)) 1 loop, best of 3: 291 ms per loop </code></pre>
2
2016-08-11T09:27:53Z
[ "python", "sorting", "pandas", "dataframe", "multiple-columns" ]
Python 3: mktime()
38,892,363
<p>Rookie question:</p> <p>The following works:</p> <pre><code>import time # create time dztupel = 1971, 1, 1, 0, 0, 1, 0, 0, 0 print(time.strftime("%d.%m.%Y %H:%M:%S", dztupel)) damals = time.mktime(dztupel) # output lt = time.localtime(damals) wtage = ["Montag", "Dienstag", "Mittwoch","Donnerstag","Freitag","Samstag", "Sonntag"] wtagnr = lt[6] print("Das ist ein", wtage[wtagnr]) tag_des_jahres = lt[7] print("Der {0:d}. Tag des Jahres".format(tag_des_jahres)) </code></pre> <p>but:</p> <pre><code>dztupel = 1970, 1, 1, 0, 0, 1, 0, 0, 0 </code></pre> <p>does not work, at least not on Windows 10. Edit: I get an out-of-range error. But time should start at January 1st 1970 at 0 hours, 0 minutes and 0 seconds, shouldn't it?</p>
1
2016-08-11T09:25:55Z
38,892,537
<p>In your second snippet, check out what the <code>time.mktime()</code> function returns, given that <code>dztupel</code> represents 00:00:01 local time on 1/1/1970, which on my system is 23:00:01 UTC on 31/12/1969, because BST (i.e., UTC+0100) is one hour ahead:</p> <pre><code>&gt;&gt;&gt; import time &gt;&gt;&gt; dztupel = 1970, 1, 1, 0, 0, 1, 0, 0, 0 # In BST locally for me, remember, so one hour less seconds than printed EPOCH seconds &gt;&gt;&gt; time.mktime(dztupel) # This command -3599.0 # seconds after (i.e., before as is negative) 1/1/1970 UTC0000 </code></pre> <p>It's negative because EPOCH time (which <code>time.mktime</code> is printing, in seconds) starts at UTC midnight on 1/1/1970:</p> <pre><code>&gt;&gt;&gt; dztupel = 1970, 1, 1, 1, 0, 0, 0, 0, 0 # 1/1/1970 BST0100 == 1/1/1970 UTC0000 &gt;&gt;&gt; time.mktime(dztupel) 0.0 # seconds after 1/1/1970 UTC0000 </code></pre> <p>Hence <code>0.0</code>: <code>dztupel = 1970, 1, 1, 1, 0, 0, 0, 0, 0</code> is BST 01:00 on 1/1/1970, i.e. UTC midnight on 1/1/1970.</p> <hr> <p>Really, we want to print as UTC, so instead of <code>time.localtime()</code>, use <code>time.gmtime()</code>:</p> <pre><code>&gt;&gt;&gt; dztupel = 1970, 1, 1, 0, 0, 1, 0, 0, 0 &gt;&gt;&gt; time.gmtime(time.mktime(dztupel)) time.struct_time(tm_year=1969, tm_mon=12, tm_mday=31, tm_hour=23, tm_min=0, tm_sec=1, tm_wday=2, tm_yday=365, tm_isdst=0) </code></pre> <p>Then use <code>strftime()</code> to format it:</p> <pre><code>&gt;&gt;&gt; gmt = time.gmtime(time.mktime(dztupel)) &gt;&gt;&gt; time.strftime('%Y-%m-%d %H:%M:%S', gmt) '1969-12-31 23:00:01' </code></pre>
1
2016-08-11T09:34:39Z
[ "python" ]
SciPy: Wilcoxon test: Compare distribution with a single value
38,892,385
<p>I was applying the Student's t-test in order to evaluate whether or not a value belongs to a given sample. Since I cannot assume normality, I now want to apply the <code>Wilcoxon</code> test from <code>scipy</code>.</p> <p>Is it possible to compare a sample with a single value?</p> <p>If I do: <code>stats.wilcoxon(sample_array , single_value)</code></p> <p>The code complains that the two arrays don't have the same length. I found in a forum that the one-sample counterpart of the t-test using <code>wilcoxon</code> would be:</p> <p><code>stats.wilcoxon(sample_array - single_value)</code></p> <p>Is this correct? If not, do you know any alternative non-parametric test to evaluate whether a given value belongs to a sample distribution?</p>
0
2016-08-11T09:26:41Z
38,892,824
<p>You can't use <code>wilcoxon</code> for that; those tests compare distributions.</p> <p>You can't know for certain whether a value belongs to a given sample. If your sample is large enough, you can say that the probability of seeing a larger value is <code>sum(sample_array&gt;single_value)/len(sample_array)</code> (or <code>&lt;</code> for the other extreme).</p> <p>You can compare the value to the MEAN or the MEDIAN of the population using bootstrapping:</p> <pre><code>import scikits.bootstrap as bootstrap CIs = bootstrap.ci(sample_array, statfunction=np.mean,n_samples=100000) print(CIs) </code></pre> <p>If the value is within the CIs, then you can't reject that your value is the real mean (or median).</p>
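The empirical tail probability mentioned in this answer can be computed directly; a sketch with made-up sample data:

```python
def tail_probability(sample, value):
    """Fraction of the sample strictly greater than value, i.e. the
    answer's sum(sample_array > single_value) / len(sample_array)
    (use < instead of > for the other tail)."""
    return sum(x > value for x in sample) / len(sample)

sample = [1.2, 3.4, 2.2, 5.1, 4.0, 2.9, 3.3, 1.8]
print(tail_probability(sample, 4.5))  # 0.125, since 1 of 8 values exceeds 4.5
```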
0
2016-08-11T09:46:34Z
[ "python", "statistics" ]
Calling PHP script from inside a Django project: Could not open input file
38,892,437
<p>I wrote a PHP script that processes some tasks for me (specifically, encoding videos on a media server). I tested calling this script via Python like so:</p> <pre><code>proc = subprocess.Popen("php vodworkflow_drm_playready_widevine.php videos " +str(video), shell=True, stdout=subprocess.PIPE) </code></pre> <p>This worked correctly (i.e. the video name passed to the script got correctly encoded and such). However, when I call <code>Popen</code> from inside my Django project, I instead get the output: <code>Could not open input file: my_script.php</code>.</p> <p>I'm stumped! Can anyone help with what could be going on, and what I can do to rectify this? Thanks, I've been stuck for a while now. Do let me know if you need to see the PHP script and more detailed code.</p>
0
2016-08-11T09:29:09Z
38,893,658
<p>Refer to the PHP file by its full path, so the process can find it regardless of its working directory:</p> <pre><code>proc = subprocess.Popen("php /root/www/bla_bla/vodworkflow_drm_playready_widevine.php videos " +str(video), shell=True, stdout=subprocess.PIPE) </code></pre> <p>Alternatively, <code>Popen</code> takes a <code>cwd</code> argument; specify the script's directory there.</p> <p><a href="https://docs.python.org/3/library/subprocess.html#using-the-subprocess-module" rel="nofollow">https://docs.python.org/3/library/subprocess.html#using-the-subprocess-module</a></p> <p><a href="http://sharats.me/the-ever-useful-and-neat-subprocess-module.html#execute-in-a-different-working-directory" rel="nofollow">http://sharats.me/the-ever-useful-and-neat-subprocess-module.html#execute-in-a-different-working-directory</a></p>
1
2016-08-11T10:22:49Z
[ "php", "python", "django" ]
how to install wordcloud package in python?
38,892,509
<pre><code>pip install wordcloud File "&lt;ipython-input-130-12ee30540bab&gt;", line 1 pip install wordcloud ^ SyntaxError: invalid syntax </code></pre> <p>This is the problem I am facing while using <code>pip install wordcloud</code>.</p>
0
2016-08-11T09:33:03Z
38,892,671
<p><code>pip</code> is a tool for installing Python packages. You should not use this command inside the Python interactive shell.</p> <p>Instead, exit out of it and run <code>pip install wordcloud</code> in the system shell.</p>
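The traceback in the question suggests the command was typed into an IPython/Jupyter session. From inside a running interpreter, one common alternative (a sketch, not from the original answer) is to invoke pip through the interpreter itself, which guarantees the package lands in the same environment:

```python
import subprocess
import sys

def pip_install(package):
    """Run `python -m pip install <package>` with the current interpreter."""
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

# pip_install("wordcloud")  # uncomment to actually install

# Sanity check that pip is reachable this way:
version_line = subprocess.check_output([sys.executable, "-m", "pip", "--version"])
print(version_line.decode().split()[0])  # pip
```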
0
2016-08-11T09:40:31Z
[ "python", "word-cloud" ]
Sorting a python dictionary by one value, then two value in list values
38,892,697
<p>struct product: sku, quantity, price, group</p> <p>key: list(values)</p> <pre><code>dic = {'product1': ['m1', 2, 101, 'g500'], 'product4': ['m112', 2, 101, 'g700'], 'product5': ['m343', 2, 101, 'g500'], 'product2': ['m765', 2, 101, 'g500'], 'product3': ['m4346', 2, 101, 'g700']} </code></pre> <p>Order by key (or one value t[1])</p> <pre><code>OrderedDict(sorted(dic.items(), key=lambda t: t[0])) </code></pre> <p>How to sort the dictionary by group, then sku?</p> <p>Need return data:</p> <pre><code>{'product1': ['m1', 2, 101, 'g500'], 'product5': ['m343', 2, 101, 'g500'], 'product2': ['m765', 2, 101, 'g500'],'product4': ['m112', 2, 101, 'g700'], 'product3': ['m4346', 2, 101, 'g700']} </code></pre>
-1
2016-08-11T09:41:46Z
38,892,858
<pre><code>OrderedDict(sorted(dic.items(), key=lambda t: (t[1][3], t[1][0]),)) </code></pre>
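Applied to the question's data, the key function sorts by group (index 3) and breaks ties by SKU (index 0); a runnable check:

```python
from collections import OrderedDict

dic = {'product1': ['m1', 2, 101, 'g500'], 'product4': ['m112', 2, 101, 'g700'],
       'product5': ['m343', 2, 101, 'g500'], 'product2': ['m765', 2, 101, 'g500'],
       'product3': ['m4346', 2, 101, 'g700']}

# t[1][3] is the group, t[1][0] the sku.
ordered = OrderedDict(sorted(dic.items(), key=lambda t: (t[1][3], t[1][0])))
print(list(ordered))  # ['product1', 'product5', 'product2', 'product4', 'product3']
```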
1
2016-08-11T09:47:51Z
[ "python", "list", "sorting", "dictionary" ]
Why is there a gap at the side of my graph? (matplotlib/py)
38,892,982
<p>Got a list of 5 elements which represent sequences of objects, i.e. element 0 represents the number 1, and so on. Consequently I had to do this to have these numbers represented properly on the graph:</p> <pre><code>higher_chances.insert(0, 0) plt.xlim(1,len(higher_chances)) plt.xticks(range(1, len(higher_chances))) </code></pre> <p>So basically, I inserted a 0 at the start of the list, then ignored it. Let me know if there's a better way to do this.</p> <p>Consequently I now have a gap on the right-hand side of my graph and can't figure out how to remove it. Any help would be appreciated.</p> <p><a href="http://i.stack.imgur.com/KZ824.png" rel="nofollow"><img src="http://i.stack.imgur.com/KZ824.png" alt="enter image description here"></a></p> <p>List: [0, 47.0, 46.0, 45.0, 45.0, 43.0]</p> <p>All code below:</p> <pre><code># Insert 0 at first index so index 1 reflects a sequence of 1 candle higher_chances.insert(0, 0) # Convert to actual percentages for presentation higher_chances = [ elem*100 for elem in higher_chances] plt.plot(higher_chances) plt.ylim(min(higher_chances[1:]), max(higher_chances[1:])) plt.xlim(1,len(higher_chances)) plt.xticks(range(1, len(higher_chances))) plt.xlabel('Number of consecutive bull candles', fontsize=15) plt.ylabel('% chance of another bull candle following', fontsize=15) plt.show() </code></pre> <h1>Edit:</h1> <p>If I add <code>plt.axis('tight')</code> without changing other code the graph changes to:</p> <p><a href="http://i.stack.imgur.com/li4Tr.png" rel="nofollow"><img src="http://i.stack.imgur.com/li4Tr.png" alt="enter image description here"></a></p> <p>If I take out <code>xlim/ylim</code> and add <code>plt.axis('tight')</code> the graph does not change from that above.</p>
-1
2016-08-11T09:53:18Z
38,894,180
<p>Change: </p> <pre><code>plt.xlim(1,len(higher_chances)) </code></pre> <p>To:</p> <pre><code>plt.xlim(1,len(higher_chances)-1) </code></pre> <p><a href="http://i.stack.imgur.com/6ftMt.png" rel="nofollow"><img src="http://i.stack.imgur.com/6ftMt.png" alt="enter image description here"></a></p> <p>But leave the following as it is, if I add -1 as per above it removes the last label:</p> <pre><code>plt.xticks(range(1, len(higher_chances))) </code></pre> <p><a href="http://i.stack.imgur.com/J7dK0.png" rel="nofollow"><img src="http://i.stack.imgur.com/J7dK0.png" alt="enter image description here"></a></p>
0
2016-08-11T10:44:04Z
[ "python", "matplotlib" ]
Error "Your requirements.txt is invalid" when following tutorial AWS Elastic Beanstalk's Flask tutorial
38,893,004
<p>I'm following AWS Elastic Beanstalk's <a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-flask.html" rel="nofollow">Flask tutorial</a> to deploy the sample application.</p> <p>However, I get the error <strong>Your requirements.txt is invalid</strong> (in full below). I <a href="https://www.google.com.vn/?ion=1&amp;espv=2#newwindow=1&amp;q=%22AWS%20Elastic%20Beanstalk%22%20%2B%20flask%20%2B%20tutorial%20%2B%20error%20%2B%20%22Your%20requirements.txt%20is%20invalid%22" rel="nofollow">googled</a> this error but found no solution that helps in my case.</p> <p>My <strong>requirements.txt</strong> file is pasted in the sections below, and I'm using <em>Ubuntu Desktop 16.04</em>.</p> <p>If you have succeeded in deploying by following the tutorial, please share. Thank you.</p>
0
2016-08-11T09:54:13Z
38,927,554
<p>I tested locally and was able to replicate your problem. To get more information about what went wrong on instance I used <code>eb logs</code> to view the logs currently on the EC2 instances. From that I was able to see the full stack in the <code>eb-activity.log</code></p> <pre><code> Collecting pkg-resources==0.0.0 (from -r /opt/python/ondeck/app/requirements.txt (line 4)) Could not find a version that satisfies the requirement pkg-resources==0.0.0 (from -r /opt/python/ondeck/app/requirements.txt (line 4)) (from versions: ) No matching distribution found for pkg-resources==0.0.0 (from -r /opt/python/ondeck/app/requirements.txt (line 4)) </code></pre> <p>I'm not sure where the <code>pkg-resources=0.0.0</code> came from but it's not a valid package in pip. I was able to delete that line and deploy successfully. </p> <p>You may want to verify the output of your <code>pip freeze</code> and see if that library is actually there. </p>
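The cleanup step above can be sketched as a small filter — the pinned lines mirror the question's requirements.txt, and `pkg-resources==0.0.0` is commonly an artifact of Ubuntu's Debian-patched pip rather than a real PyPI package:

```python
# Sketch: strip the bogus "pkg-resources==0.0.0" pin before deploying.
# (The package list here mirrors the question's requirements.txt.)
requirements = [
    "click==6.6",
    "Flask==0.11.1",
    "pkg-resources==0.0.0",
    "Werkzeug==0.11.10",
]
cleaned = [r for r in requirements if not r.startswith("pkg-resources==")]
```

Writing `cleaned` back out (one entry per line) gives a requirements.txt that installs on Elastic Beanstalk.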
2
2016-08-12T23:05:54Z
[ "python", "elastic-beanstalk" ]
How to name a file using a value in a list in a dictionary?
38,893,080
<p>For example I have a list inside a dictionary:</p> <pre><code>{'results': [{'name': 'sarah', 'age': '18'}]} </code></pre> <p>I want make a text file and the name of file must be the value of dict <code>'name'</code> ex: <code>sarah.txt</code>.</p> <p>How should I code it in python?</p>
-5
2016-08-11T09:57:26Z
39,112,093
<p>Something like this:</p> <pre><code>dictionary = {'results': [{'name': 'sarah', 'age': '18'}]} name = dictionary['results'][0]['name'] + '.txt' open(name, "w").close() </code></pre> <p>Opening a file in the <code>w</code>rite mode, if it doesn't exist already, will create a new file with that name. Note the key is <code>'name'</code>, matching the dictionary in the question.</p>
0
2016-08-23T23:14:27Z
[ "python", "list", "file", "dictionary", "filenames" ]
Password system. How do I allow 3 incorrect guesses while allowing the program to continue if the guess is correct
38,893,099
<p>The program needs to allow the user 3 incorrect attempts to write the password. However if they get the password attempt right, I want the program to print 'nice'.</p> <p><strong>Here is what I have so far:</strong></p> <pre><code>def main(): n = 2 for i in range(0, 3, 1): attempt = input('Enter password: ') if attempt != 'password': print('Incorrect. ' + str(n) + ' attempts left') n = int(n) - 1 else: print('nice') </code></pre>
0
2016-08-11T09:58:15Z
38,893,256
<pre><code>n = 2 for i in range(0, 3, 1): attempt = raw_input('Enter password: ') if attempt == 'password': print('nice') break print('Incorrect. ' + str(n) + ' attempts left') n = n - 1 else: print('no attempts left') </code></pre>
-1
2016-08-11T10:05:00Z
[ "python", "loops", "passwords" ]
Password system. How do I allow 3 incorrect guesses while allowing the program to continue if the guess is correct
38,893,099
<p>The program needs to allow the user 3 incorrect attempts to write the password. However if they get the password attempt right, I want the program to print 'nice'.</p> <p><strong>Here is what I have so far:</strong></p> <pre><code>def main(): n = 2 for i in range(0, 3, 1): attempt = input('Enter password: ') if attempt != 'password': print('Incorrect. ' + str(n) + ' attempts left') n = int(n) - 1 else: print('nice') </code></pre>
0
2016-08-11T09:58:15Z
38,893,540
<pre><code>for x in range(3): pass_correct=False password = "password" attempt=raw_input("Enter password: ") if attempt == password: pass_correct=True break else: print "incorrect, " + str(2-x) + " attempts left" if pass_correct: print "nice" else: print "no attempts left" </code></pre> <p>succinct</p>
-1
2016-08-11T10:17:02Z
[ "python", "loops", "passwords" ]
Password system. How do I allow 3 incorrect guesses while allowing the program to continue if the guess is correct
38,893,099
<p>The program needs to allow the user 3 incorrect attempts to write the password. However if they get the password attempt right, I want the program to print 'nice'.</p> <p><strong>Here is what I have so far:</strong></p> <pre><code>def main(): n = 2 for i in range(0, 3, 1): attempt = input('Enter password: ') if attempt != 'password': print('Incorrect. ' + str(n) + ' attempts left') n = int(n) - 1 else: print('nice') </code></pre>
0
2016-08-11T09:58:15Z
38,893,551
<pre><code>max_retries = 3 for i in range(max_retries): passwd = input('\nEnter password: ') if passwd == 'password': print('\nnice') break print('Incorrect. %d attempts left' % (max_retries-i-1)) </code></pre>
1
2016-08-11T10:17:42Z
[ "python", "loops", "passwords" ]
python matplotlib edit histogram
38,893,125
<p>I used matplotlib to create a histogram. There are still problems I couldn't solve on my own or with the help of the internet.</p> <ol> <li><p>How can I change the color of certain bins? In detail I want to change the color of bins with: a.) value bin &lt; 1.15 red, b.) value 1.15 &lt; bin &lt; 1.25 c.) value > 1.25 red?</p></li> <li><p>How can I label the X-Axis not only with numbers with 1 decimal but also with 2 decimals (right now simply not plotted)?</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import csv thickness = [] #gets thickness from list bins = [1.00,1.01,1.02,1.03,1.04,1.05,1.06,1.07,1.08,1.09,1.10,1.11,1.12,1.13,1.14,1.15,1.16,1.17,1.18,1.19,1.20,1.21,1.22,1.23,1.24,1.25,1.26,1.27,1.28,1.29,1.30,1.31,1.32,1.33,1.34,1.35,1.36,1.37,1.38,1.39,1.40,1.41,1.42,1.43,1.44,1.45,1.46,1.47,1.48,1.49,1.50 ] #set bins manuelly with open('control.txt','r') as csvfile: plots = csv.reader(csvfile, delimiter=',') for row in plots: #x.append(float(row[0])) thickness.append(float(row[1])) plt.hist(thickness, bins, align='left', histtype='bar', rwidth=0.8, color='green') plt.xlabel('thickness [mm]') plt.ylabel('frequency') plt.title('Histogram') plt.show() </code></pre></li> </ol> <p><strong>See the plotted histogram below:</strong></p> <p><a href="http://i.stack.imgur.com/0dp67.jpg" rel="nofollow">plt.histogram so far</a></p>
0
2016-08-11T09:59:49Z
38,893,261
<p>Something like this (there may be some bugs):</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import csv from matplotlib.ticker import FormatStrFormatter thickness = [] #gets thickness from list bins = [1.00,1.01,1.02,1.03,1.04,1.05,1.06,1.07,1.08,1.09,1.10,1.11,1.12,1.13,1.14,1.15,1.16,1.17,1.18,1.19,1.20,1.21,1.22,1.23,1.24,1.25,1.26,1.27,1.28,1.29,1.30,1.31,1.32,1.33,1.34,1.35,1.36,1.37,1.38,1.39,1.40,1.41,1.42,1.43,1.44,1.45,1.46,1.47,1.48,1.49,1.50 ] #set bins manually with open('control.txt','r') as csvfile: plots = csv.reader(csvfile, delimiter=',') for row in plots: #x.append(float(row[0])) thickness.append(float(row[1])) h, bins, _ = plt.hist(thickness, bins) plt.clf() fig, ax = plt.subplots() left = np.asarray(bins)[:-1] ax.bar(left[left&lt;1.2], h[left&lt;1.2], width=0.008, color='red') ax.bar(left[np.logical_and(left&gt;=1.2,left&lt;1.5)], h[np.logical_and(left&gt;=1.2,left&lt;1.5)], width=0.008, color='green') ax.bar(left[left&gt;=1.5], h[left&gt;=1.5], width=0.008, color='red') ax.set_xlabel('thickness [mm]') ax.set_ylabel('frequency') ax.set_title('Histogram') ax.xaxis.set_major_formatter(FormatStrFormatter('%.2f')) plt.show() </code></pre> <p>Note that <code>plt.hist</code> returns three values (counts, bin edges, patches), <code>ax.bar</code> takes <code>width</code> rather than <code>rwidth</code>, and the left bin edges (one fewer than the edges array) must be used so the masks match the length of <code>h</code>.</p>
0
2016-08-11T10:05:15Z
[ "python", "matplotlib", "colors", "decimal", "bins" ]
Rounding up a column
38,893,170
<p>I am new to pandas python and I am having difficulties trying to round up all the values in the column. For example,</p> <pre><code>Example 88.9 88.1 90.2 45.1 </code></pre> <p>I tried using my current code below, but it gave me:</p> <blockquote> <p>AttributeError: 'str' object has no attribute 'rint'</p> </blockquote> <pre><code> df.Example = df.Example.round() </code></pre>
1
2016-08-11T10:01:39Z
38,893,248
<p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ceil.html#numpy.ceil" rel="nofollow"><code>numpy.ceil</code></a>:</p> <pre><code>In [80]: import numpy as np In [81]: np.ceil(df.Example) Out[81]: 0 89.0 1 89.0 2 91.0 3 46.0 Name: Example, dtype: float64 </code></pre> <p>depending on what you like, you could also change the type:</p> <pre><code>In [82]: np.ceil(df.Example).astype(int) Out[82]: 0 89 1 89 2 91 3 46 Name: Example, dtype: int64 </code></pre> <hr> <p><strong>Edit</strong></p> <p>Your error message indicates you're trying just to round (not necessarily up), but are having a type problem. You can solve it like so:</p> <pre><code>In [84]: df.Example.astype(float).round() Out[84]: 0 89.0 1 88.0 2 90.0 3 45.0 Name: Example, dtype: float64 </code></pre> <p>Here, too, you can cast at the end to an integer type:</p> <pre><code>In [85]: df.Example.astype(float).round().astype(int) Out[85]: 0 89 1 88 2 90 3 45 Name: Example, dtype: int64 </code></pre>
1
2016-08-11T10:04:39Z
[ "python", "pandas" ]
I can't upgrade pip on my raspberry pi
38,893,188
<p>I want to install simpleaudio but can't upgrade or install anything with pip, here's what it gives me:</p> <pre><code>Downloading/unpacking pip from https://pypi.python.org/packages/9c/32/004ce0852e0a127f07f358b715015763273799b d798956fa930814b60f39/pip-8.1.2-py2.py3-none-any.whl#md5=0570520434c5b600d89ec95393b2650b Downloading pip-8.1.2-py2.py3-none-any.whl (1.2MB): 1.2MB downloaded Installing collected packages: pip Found existing installation: pip 1.5.6 Not uninstalling pip at /usr/lib/python2.7/dist-packages, owned by OS Can't roll back pip; was not uninstalled Cleaning up... Exception: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 295, in run requirement_set.install(install_options, global_options, root=options.root_path) File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1436, in install requirement.install(install_options, global_options, *args, **kwargs) File "/usr/lib/python2.7/dist-packages/pip/req.py", line 672, in install self.move_wheel_files(self.source_dir, root=root) File "/usr/lib/python2.7/dist-packages/pip/req.py", line 902, in move_wheel_files pycompile=self.pycompile, File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 214, in move_wheel_files clobber(source, lib_dir, True) File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 204, in clobber os.makedirs(destdir) File "/usr/lib/python2.7/os.py", line 157, in makedirs mkdir(name, mode) OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/pip-8.1.2.dist-info' Storing debug log for failure in /home/pi/.pip/pip.log </code></pre> <p>I'm using Rasbian with NOOBS if that help figuring this out. I've just started studying programming so I have no clue what I need to do since using the guides results to above, so please explain this to an idiot.</p>
0
2016-08-11T10:02:09Z
38,893,393
<p><code>pip</code> fails to access some system paths and shows clearly that the user used to run <code>pip</code> <em>does not have enough privileges</em> to access them.</p> <p>To solve this, try to run these commands as root using <code>sudo</code>:</p> <pre><code>sudo pip &lt;options&gt; </code></pre> <p>Otherwise, if you don't have <em>sudo</em> installed (this is almost impossible) or if you're just curious and know the <em>root</em>'s password, simply log in as root and execute your commands without <code>sudo</code>:</p> <pre><code>su # or su root # type the root's password pip &lt;options&gt; </code></pre> <p>If you use this option, <em>don't forget to log out after you've installed everything!</em></p>
1
2016-08-11T10:10:05Z
[ "python", "pip" ]
Resetting Python to system Python in Ubuntu 14.04 LTS
38,893,325
<p>I've been trying to get <code>mysql-workbench</code> to work, and having a ton of issues. Running it from terminal gives me the following:</p> <pre><code>File "/home/{My_username}/.linuxbrew/Cellar/python/2.7.12_1/lib/python2.7/hmac.py", line 8, in &lt;module&gt; from operator import _compare_digest as compare_digest ImportError: cannot import name _compare_digest Warning! Can't use connect with timeout in paramiko None </code></pre> <p>And when I try to connect via ssh to a database:</p> <pre><code>File "/home/{My_username}/.linuxbrew/Cellar/python/2.7.12_1/lib/python2.7/site-packages/paramiko/transport.py", line 36, in &lt;module&gt; from paramiko import util ImportError: cannot import name util </code></pre> <p>running <code>which python</code> gives me:</p> <pre><code>/home/{My_username}/.linuxbrew/bin/python </code></pre> <p>I would like to go back to the default <code>/usr/bin/python/</code>, but can not figure out what to change. I think this is causing the <code>mysql-workbench</code> problems, or at least it will make it easier to solve them. I've installed <code>paramiko</code> several times via <code>pip</code>, rebooted, and reinstalled <code>mysql-workbench</code>. Yes, I'm new to Ubuntu, sorry.</p>
0
2016-08-11T10:07:36Z
38,893,856
<p>It turns out that when I installed linuxbrew I had to add '/home/{username}/.linuxbrew/bin' to my PATH in '~/.profile' to get brew to work, but added it in front:</p> <pre><code>PATH="$HOME/.linuxbrew/bin:$PATH" </code></pre> <p>This meant that the linuxbrew version of python became the default, which caused lots of weird issues. Changing the order around helped fix this:</p> <pre><code>PATH="$PATH:$HOME/.linuxbrew/bin" </code></pre> <p>Now, the system default loads first, then linuxbrew stuff. If you are a newb like me, you can edit this in Ubuntu 14.04 LTS by using the following command:</p> <pre><code>sudo nano ~/.profile </code></pre> <p>Make your edits, hit <code>ctrl+o</code>, <code>enter</code>, then <code>ctrl+x</code>; reboot the whole OS, and you are good to go.</p>
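The fix works because executable lookup scans PATH left to right and stops at the first match. That first-match-wins behaviour can be checked without touching `.profile`; a small sketch with throwaway directories (names are made up):

```python
import os
import shutil
import stat
import tempfile

# Two directories each providing an executable called "hello";
# lookup should return the one from the directory listed first.
root = tempfile.mkdtemp()
paths = []
for d in ("first", "second"):
    os.makedirs(os.path.join(root, d))
    exe = os.path.join(root, d, "hello")
    with open(exe, "w") as f:
        f.write("#!/bin/sh\necho %s\n" % d)
    # mark the file executable so the PATH search accepts it
    os.chmod(exe, os.stat(exe).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    paths.append(os.path.join(root, d))

search_path = os.pathsep.join(paths)
found = shutil.which("hello", path=search_path)
```

With the linuxbrew directory appended instead of prepended, `/usr/bin/python` wins the search exactly like `first/hello` does here.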
0
2016-08-11T10:32:06Z
[ "python", "mysql", "ubuntu" ]
Resetting Python to system Python in Ubuntu 14.04 LTS
38,893,325
<p>I've been trying to get <code>mysql-workbench</code> to work, and having a ton of issues. Running it from terminal gives me the following:</p> <pre><code>File "/home/{My_username}/.linuxbrew/Cellar/python/2.7.12_1/lib/python2.7/hmac.py", line 8, in &lt;module&gt; from operator import _compare_digest as compare_digest ImportError: cannot import name _compare_digest Warning! Can't use connect with timeout in paramiko None </code></pre> <p>And when I try to connect via ssh to a database:</p> <pre><code>File "/home/{My_username}/.linuxbrew/Cellar/python/2.7.12_1/lib/python2.7/site-packages/paramiko/transport.py", line 36, in &lt;module&gt; from paramiko import util ImportError: cannot import name util </code></pre> <p>running <code>which python</code> gives me:</p> <pre><code>/home/{My_username}/.linuxbrew/bin/python </code></pre> <p>I would like to go back to the default <code>/usr/bin/python/</code>, but can not figure out what to change. I think this is causing the <code>mysql-workbench</code> problems, or at least it will make it easier to solve them. I've installed <code>paramiko</code> several times via <code>pip</code>, rebooted, and reinstalled <code>mysql-workbench</code>. Yes, I'm new to Ubuntu, sorry.</p>
0
2016-08-11T10:07:36Z
38,894,050
<p>To confirm which python location is in use you can use: <code>which python</code></p> <p>To change the version per user:</p> <pre><code>alias python='/usr/bin/python3.4' </code></pre> <p>Once you make the above change, re-login or source your .bashrc file:</p> <pre><code>$ . ~/.bashrc </code></pre> <p>Check your default python version:</p> <pre><code>$ python --version </code></pre> <p>To make the change system-wide:</p> <ol> <li><p>Log in as root</p></li> <li><p>Find all alternatives:</p> <pre><code># update-alternatives --list python update-alternatives: error: no alternatives for python </code></pre></li> <li><p>If you get the above error, you need to register the alternatives (the trailing number is the priority):</p> <pre><code># update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1 # update-alternatives --install /usr/bin/python python /usr/bin/python3.4 2 update-alternatives: using /usr/bin/python3.4 to provide /usr/bin/python (python) in auto mode </code></pre></li> <li><p>Check the version:</p> <pre><code># python --version Python 3.4.2 </code></pre></li> <li><p>List the alternatives again:</p> <pre><code># update-alternatives --list python /usr/bin/python2.7 /usr/bin/python3.4 </code></pre></li> </ol> <p>You can update the alternatives anytime:</p> <pre><code># update-alternatives --config python </code></pre>
0
2016-08-11T10:39:43Z
[ "python", "mysql", "ubuntu" ]
LabelEncoder specify classes in DataFrame
38,893,374
<p>I’m applying a LabelEncoder to a pandas DataFrame, <code>df</code></p> <pre><code>Feat1 Feat2 Feat3 Feat4 Feat5 A A A A E B B C C E C D C C E D A C D E </code></pre> <p>I'm applying a label encoder to a dataframe like this - </p> <pre><code>from sklearn import preprocessing le = preprocessing.LabelEncoder() intIndexed = df.apply(le.fit_transform) </code></pre> <p>This is how the labels are mapped</p> <pre><code>A = 0 B = 1 C = 2 D = 3 E = 0 </code></pre> <p>I'm guessing that <code>E</code> isn't given the value of <code>4</code> as it doesn't appear in any other column other than <code>Feat 5</code> .</p> <p>I want <code>E</code> to be given the value of <code>4</code> - but don't know how to do this in a DataFrame.</p>
2
2016-08-11T10:09:17Z
38,894,053
<p>You could <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html#sklearn.preprocessing.LabelEncoder.fit" rel="nofollow"><code>fit</code></a> the label encoder and later <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html#sklearn.preprocessing.LabelEncoder.transform" rel="nofollow"><code>transform</code></a> the labels to their normalized encoding as follows:</p> <pre><code>In [4]: from sklearn import preprocessing ...: import numpy as np In [5]: le = preprocessing.LabelEncoder() In [6]: le.fit(np.unique(df.values)) Out[6]: LabelEncoder() In [7]: list(le.classes_) Out[7]: ['A', 'B', 'C', 'D', 'E'] In [8]: df.apply(le.transform) Out[8]: Feat1 Feat2 Feat3 Feat4 Feat5 0 0 0 0 0 4 1 1 1 2 2 4 2 2 3 2 2 4 3 3 0 2 3 4 </code></pre> <hr> <p>One way to specify labels by default would be:</p> <pre><code>In [9]: labels = ['A', 'B', 'C', 'D', 'E'] In [10]: enc = le.fit(labels) In [11]: enc.classes_ # sorts the labels in alphabetical order Out[11]: array(['A', 'B', 'C', 'D', 'E'], dtype='&lt;U1') In [12]: enc.transform('E') Out[12]: 4 </code></pre>
4
2016-08-11T10:39:45Z
[ "python", "pandas", "machine-learning", "scikit-learn" ]
Bokeh + interactive widgets + PythonAnywhere
38,893,389
<p>I was unable to find a minimal working example for an interactive web app using bokeh and bokeh widgets that runs on PythonAnywhere.</p> <p>Ideally, I would like to have a simple plot of a relatively complicated function (which I do not know analytically, but I have SymPy compute that for me) which should be replotted when a parameter changes.</p> <p>All code that I have found so far does not do that, e.g. <a href="https://github.com/bokeh/bokeh/tree/master/examples" rel="nofollow">https://github.com/bokeh/bokeh/tree/master/examples</a>, or refers to obsolete versions of bokeh.</p> <p>Most of the documentation deals with running a bokeh-server, but there is no indication on how to have this running with WSGI (which is how PythonAnywhere handles the requests). For this reasone I have tried embedding a Bokeh plot within a Flask app. However, as far as I understand, in order to have interactive Bokeh widgets (which should trigger some computation in Python) do require a bokeh-server. I am not particularly attached to using either Flask or Bokeh, if I can achive a similar result with some other simpler tools. Unfortunately, a Jupyter notebook with interactive widgets does not seems to be an option in PythonAnywhere.</p> <p>I have installed bokeh 0.12 on Python 3.5.</p> <p>I have managed to run a simple bokeh plot within a flask app, but I am unable to use the Bokeh widgets.</p>
1
2016-08-11T10:09:57Z
38,895,793
<p>Here is a working example of Jupyter notebook with interactive widgets on pythonanywhere:</p> <pre><code>%pylab inline import matplotlib.pyplot as plt from ipywidgets import interact def plot_power_function(k): xs = range(50) dynamic_ys = [x ** k for x in xs] plt.plot(xs, dynamic_ys) interact(plot_power_function, k=[1, 5, 0.5]) </code></pre> <p>PythonAnywhere does have the ipywidgets module pre-installed. But if you are not seeing the interactive widgets, make sure that you have run <code>jupyter nbextension enable --py widgetsnbextension</code> from a bash console to get it enabled for your notebooks. You will have to restart the jupyter server after enabling this extension (by killing the relevant jupyter processes from the consoles running processes list on the pythonanywhere dashboard).</p>
2
2016-08-11T12:00:15Z
[ "python", "bokeh", "pythonanywhere" ]
How to install python cgi on apache2?
38,893,438
<p>I want to call a Python script using javascript. For this I need to install cgi on my apache server to call python script. (Maybe I wrong, I'm beginner and I'm little lost...) But I don't know how to do this... </p> <p>I have this in my <code>000-default.conf</code> : </p> <pre><code>&lt;VirtualHost *:80&gt; DocumentRoot /var/www/html php_value include_path ".:/var/www/html:/usr/share/php:/usr/share/pear" php_value post_max_size 30M php_value upload_max_filesize 30M &lt;Directory /var/www/html/&gt; Options All -Indexes AllowOverride All Allow from all &lt;/Directory&gt; ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined </code></pre> <p>So I don't have cgi, any idea to install this?</p>
1
2016-08-11T10:12:36Z
38,976,058
<p>You can follow <a href="https://www.linux.com/blog/configuring-apache2-run-python-scripts" rel="nofollow">this link</a>. But in short you have to add this:</p> <pre><code>ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ &lt;Directory "/usr/lib/cgi-bin"&gt; Options +ExecCGI AddHandler cgi-script .py Options FollowSymLinks Require all granted &lt;/Directory&gt; </code></pre> <p>Now you can place all your Python scripts in that cgi-bin directory and execute them as <code>http://localhost/cgi-bin/myscript.py</code> in your browser.</p> <p>A simpler but less production-friendly way is installing <a href="https://www.apachefriends.org/index.html" rel="nofollow">XAMPP</a>, putting the scripts in the already-created <code>/opt/cgi-bin/</code> directory, and running the above-mentioned URL in your browser.</p> <p>Are you trying to do AJAX? That is, calling CGI scripts from JavaScript to draw dynamic websites? If so, I would strongly advise against it: it would draw a huge amount of resources from your server. If you are using only Python with a framework like Django or Flask you would have no problems, but deploying them alongside normal PHP sites is a lot of headache and not worth it.</p> <p>Cheers</p>
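For completeness, here is what a minimal script dropped into that cgi-bin directory could look like (the filename `hello.py` is hypothetical). A CGI response is just a header block, a blank line, then the body:

```python
#!/usr/bin/env python
# Hypothetical /usr/lib/cgi-bin/hello.py -- a minimal CGI response.
# The blank line is what separates the headers from the body.
header = "Content-Type: text/plain"
body = "hello from mod_cgi"
response = header + "\r\n\r\n" + body
print(response)
```

Remember to make the script executable (`chmod +x`) or Apache will refuse to run it.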
0
2016-08-16T13:17:46Z
[ "python", "apache", "install", "cgi" ]
Python ,TypeError: not all arguments converted during string formatting, SMB mount command
38,893,494
<p>I am trying to smb mount using below script but facing "TypeError" issue can someone please help me to solve this. the actual command i want to execute is mount -t cifs //111.11.111.111/SMBShare /mnt -o username=admin,password=admin,vers=3.0 </p> <p>python code:</p> <p>#!/usr/bin/env python</p> <pre><code> def setup_env(self, get_xyz_share): share = get_xyz_share.name dx_ip = co.data_sols[0].address co.clients[0].execute(['mount' ,'-t' ,'cifs' ,'//%s','/','%s' ,'/mnt', '-o' ,'username=admin,password=admin,vers=3.0' %(dx_ip, share)]) </code></pre> <p>The script output looks like :---</p> <blockquote> <pre><code> co.clients[0].execute(['mount' ,'-t' ,'cifs' ,'//%s','/','%s' ,'/mnt', '-o' ,'username=admin,password=admin,vers=3.0' %(dx_ip, share)]) TypeError: not all arguments converted during string formatting dx_ip = '111.11.111.111' get_xyz_share = &lt;cx.models.Share.Shareobject at 0x4d53248 | name SMBShare&gt;) self = TestMySMB share = 'SMBShare' </code></pre> </blockquote>
0
2016-08-11T10:14:45Z
38,893,623
<p>You do the conversion on the last element in the list:</p> <pre><code>'username=admin,password=admin,vers=3.0' %(dx_ip, share) </code></pre> <p>Which has no %s at all.</p> <p>you probably want to do something like:</p> <pre><code>co.clients[0].execute(['mount' ,'-t' ,'cifs' ,'//%s/%s' % (dx_ip, share) ,'/mnt', '-o' ,'username=admin,password=admin,vers=3.0']) </code></pre>
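The error is reproducible in isolation — the `%` operator raises exactly this TypeError when the string has fewer placeholders than supplied arguments:

```python
dx_ip, share = "111.11.111.111", "SMBShare"

# Broken: zero %s placeholders in the string, two arguments supplied.
try:
    "username=admin,password=admin,vers=3.0" % (dx_ip, share)
    err = None
except TypeError as exc:
    err = str(exc)

# Fixed: interpolate into the string that actually has the placeholders.
target = "//%s/%s" % (dx_ip, share)
```

This is why moving the `% (dx_ip, share)` onto the `'//%s/%s'` element fixes the mount command.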
3
2016-08-11T10:21:19Z
[ "python", "python-2.7", "mount", "smb", "libsmbclient" ]
What's the meaning of tf.nn.embedding_lookup_sparse in TensorFlow?
38,893,526
<p>We spend a lot of time reading the API document of <a href="https://www.tensorflow.org/versions/master/api_docs/python/nn.html#embedding_lookup_sparse" rel="nofollow">tf.nn.embedding_lookup_sparse</a>. The meaning of <code>embedding_lookup_sparse</code> is confusing and it seems quite different from <code>embedding_lookup</code>.</p> <p>Here's what I think; please correct me if I'm wrong. The example of the wide and deep model uses <code>contrib.layers</code> APIs and calls <code>embedding_lookup_sparse</code> for sparse feature columns. If it gets a SparseTensor (for example, country, which is sparse), it creates the embedding, which is actually for one-hot encoding. Then it calls <code>to_weights_sum</code> to return the result of <code>embedding_lookup_sparse</code> as <code>prediction</code> and the embedding as <code>variable</code>.</p> <p>Then the result of <code>embedding_lookup_sparse</code> adds <code>bias</code> and becomes the <code>logits</code> for the loss function and training operation. That means the <code>embedding_lookup_sparse</code> does something like <code>w * x</code> (part of <code>y = w * x + b</code>) for a dense tensor.</p> <p>Maybe for one-hot encoding or SparseTensor, the <code>weight</code> from <code>embedding_lookup_sparse</code> is actually the value of <code>w * x</code> because the look-up data is always <code>1</code> and there is no need to add other <code>0</code>s.</p> <p>What I said is also confusing. Can anyone help to explain this in detail?</p>
1
2016-08-11T10:16:30Z
38,900,894
<p>The main difference between embedding lookup and embedding lookup sparse is that the sparse version expects the ids and weights to be of type SparseTensor.</p> <p>Do you also not understand how embedding lookup works? </p> <p>You pass in a tensor of some size, the slices of the tensor are multiplied by some weight (also passed in), and then you are returned the new slices. </p> <p>There is no bias term. You can add the slices of the tensor together by referencing more than one to be included as an element in your output.</p>
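A rough NumPy analogy (explicitly not the TensorFlow API itself) may make the gather-then-combine idea concrete: plain lookup is row indexing into the embedding table, and the sparse variant additionally weights the gathered rows before combining them:

```python
import numpy as np

# Embedding table: 4 ids, embedding dimension 3.
params = np.array([[0.0, 0.0, 0.0],
                   [1.0, 1.0, 1.0],
                   [2.0, 2.0, 2.0],
                   [3.0, 3.0, 3.0]])

# embedding_lookup analogue: gather rows by id.
ids = np.array([3, 1])
gathered = params[ids]

# embedding_lookup_sparse analogue with a sum combiner:
# weight each gathered row, then reduce over the lookups.
weights = np.array([0.5, 2.0])
combined = (gathered * weights[:, None]).sum(axis=0)
```

For one-hot inputs the weights play the role of the feature values `x`, which is why the combined result behaves like the `w * x` term the question describes.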
0
2016-08-11T15:45:42Z
[ "python", "tensorflow", "embedding" ]
ndarray(numpy,python) assignment assigns array of 0s : why?
38,893,592
<p>So in this simple code the line</p> <pre><code>result[columnNumber] = column #this assignment fails for some reason! </code></pre> <p>fails, and specifically it simply assigns array of zeroes instead of what it is supposed to assign, and I have no idea why! So this is the full code:</p> <pre><code> """Softmax.""" scores = [3.0, 1.0, 0.2] import numpy as np def softmax(x): """Compute softmax values for each sets of scores in x.""" data=np.array(x) columnNumber=0 result=data.copy() result=result.T for column in data.T: sumCurrentColumn=0 try: #Since 'column' can potentially be just a double,and sum needs some iterable object sumCurrentColumn=sum(np.exp(column)) except TypeError: sumCurrentColumn=np.exp(column) column=np.divide(np.exp(column),sumCurrentColumn) print(column) print('before assignment:'+str(result[columnNumber])) result[columnNumber] = column #this assignment fails for some reason! print('after assignment:'+str(result[columnNumber])) columnNumber+=1 result=result.T return result scores = np.array([[1, 2, 3, 6], [2, 4, 5, 6], [3, 8, 7, 6]]) print(softmax(scores)) </code></pre> <p>and this is its output:</p> <pre><code> [ 0.09003057 0.24472847 0.66524096] before assignment:[1 2 3] after assignment:[0 0 0] [ 0.00242826 0.01794253 0.97962921] before assignment:[2 4 8] after assignment:[0 0 0] [ 0.01587624 0.11731043 0.86681333] before assignment:[3 5 7] after assignment:[0 0 0] [ 0.33333333 0.33333333 0.33333333] before assignment:[6 6 6] after assignment:[0 0 0] [[0 0 0 0] [0 0 0 0] [0 0 0 0]] </code></pre>
1
2016-08-11T10:19:15Z
38,893,751
<p>In your example, the input <code>scores</code> is all integers, so the data type of the <code>data</code> array is integer. Therefore <code>result</code> is also an integer array. You can't assign a floating point value into an integer array--numpy arrays have homogeneous data types which can not be dynamically changed. The line <code>result[columnNumber] = column</code> is truncating the values in <code>column</code> to integers, and since they are all between 0 and 1, the assigned values are all 0.</p> <p>Try changing the creation of <code>result</code> to:</p> <pre><code>result = data.astype(float) </code></pre> <p>(By default, the <code>astype</code> method creates a copy even if <code>data</code> already has the specified type.)</p>
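The truncation is easy to see in isolation:

```python
import numpy as np

a = np.array([1, 2, 3])      # dtype inferred as integer from the literals
a[0] = 0.9                   # a float assigned into an int array is truncated to 0

b = np.array([1, 2, 3]).astype(float)
b[0] = 0.9                   # a float array keeps the fractional part
```

This is exactly what happens on `result[columnNumber] = column` in the question: every softmax value lies between 0 and 1, so the integer array stores 0s.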
2
2016-08-11T10:27:33Z
[ "python", "numpy" ]
ndarray(numpy,python) assignment assigns array of 0s : why?
38,893,592
<p>So in this simple code the line</p> <pre><code>result[columnNumber] = column #this assignment fails for some reason! </code></pre> <p>fails, and specifically it simply assigns array of zeroes instead of what it is supposed to assign, and I have no idea why! So this is the full code:</p> <pre><code> """Softmax.""" scores = [3.0, 1.0, 0.2] import numpy as np def softmax(x): """Compute softmax values for each sets of scores in x.""" data=np.array(x) columnNumber=0 result=data.copy() result=result.T for column in data.T: sumCurrentColumn=0 try: #Since 'column' can potentially be just a double,and sum needs some iterable object sumCurrentColumn=sum(np.exp(column)) except TypeError: sumCurrentColumn=np.exp(column) column=np.divide(np.exp(column),sumCurrentColumn) print(column) print('before assignment:'+str(result[columnNumber])) result[columnNumber] = column #this assignment fails for some reason! print('after assignment:'+str(result[columnNumber])) columnNumber+=1 result=result.T return result scores = np.array([[1, 2, 3, 6], [2, 4, 5, 6], [3, 8, 7, 6]]) print(softmax(scores)) </code></pre> <p>and this is its output:</p> <pre><code> [ 0.09003057 0.24472847 0.66524096] before assignment:[1 2 3] after assignment:[0 0 0] [ 0.00242826 0.01794253 0.97962921] before assignment:[2 4 8] after assignment:[0 0 0] [ 0.01587624 0.11731043 0.86681333] before assignment:[3 5 7] after assignment:[0 0 0] [ 0.33333333 0.33333333 0.33333333] before assignment:[6 6 6] after assignment:[0 0 0] [[0 0 0 0] [0 0 0 0] [0 0 0 0]] </code></pre>
1
2016-08-11T10:19:15Z
38,893,771
<p>Your array <code>result</code> has type <code>int</code> so all your floats are automatically converted to <code>int</code>'s, in this case <code>0</code>. Use this <code>result = data.astype(float)</code>.</p>
0
2016-08-11T10:28:22Z
[ "python", "numpy" ]
scraping craigslist results "Page not found" using python
38,893,777
<p>I tried to fetch the contents of CL page and display it as a separate html page using python's BeautifulSoup.</p> <p>But it results in "page not found".</p> <pre><code>d={'query':'doctor'} encoder=urllib.urlencode(d).encode('ASCII') page1=urllib.urlopen('chennai.craigslist.co.in/search',encoder) content=page1.read() file1=open("r.html",'w') file1.write(content.decode()) </code></pre>
-4
2016-08-11T10:28:35Z
38,896,657
<p>The code works in Python version 3. Thanks for the contributions.</p>
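Presumably the relevant change is that the Python 2 `urllib` calls in the question moved in Python 3: `urlencode` lives in `urllib.parse` and `urlopen` in `urllib.request` (and `urlopen` needs a full URL including the scheme). A sketch of the Python 3 spelling:

```python
from urllib.parse import urlencode

# Python 3 equivalent of Python 2's urllib.urlencode(...)
params = urlencode({'query': 'doctor'}).encode('ascii')

# The request itself would then be (network call, left commented out):
# from urllib.request import urlopen
# page = urlopen('http://chennai.craigslist.co.in/search', params)
# content = page.read()
```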
-1
2016-08-11T12:38:41Z
[ "python", "web-scraping" ]
Why does not threading make it faster to iterate on a numpy array?
38,894,032
<p>My question is about multi-threading in python. The problem I am working on is finding 80 percent similar arrays (length 64) to a given array with the same length from 10 million arrays. The problem is that although my code executes in 12.148 seconds when I iterate linearly inside a while loop, it takes at least 28-30 seconds when I use multi-threading. Both implementations are below. Any advice is appreciated; please enlighten me: why is it slower to multi-thread in this case? First code:</p> <pre><code>import timeit import numpy as np ph = np.load('newDataPhoto.npy') myPhoto1 = np.array([ 1. , 1. , 0. , 1. , 0. , 0. , 1. , 0. , 1. , 0. , 0. , 1. , 0. , 1. , 1. , 1. , 0. , 0. , 0. , 1. , 1. , 0. , 1. , 1. , 0. , 0. , 1. , 1. , 1. , 0. , 0. , 1. , 0. , 0. , 1. , 1. , 1. , 0. , 0. , 1. , 0. , 0. , 1. , 0. , 0. , 0. , 1. , 0. , 0. , 0. , 1. , 0. , 0. , 1. , 1. , 0. , 1. , 0. , 1. , 0. , 0. , 1. , 1. , 0. ]) start = timeit.default_timer() kk=0 i=0 while i&lt; 10000000: u = np.count_nonzero(ph[i] != myPhoto1) if u &lt;= 14: kk+=1 i+=1 print(kk) stop = timeit.default_timer() print stop-start </code></pre> <p>Second one (multi-threaded):</p> <pre><code>from threading import Thread import numpy as np import timeit start = timeit.default_timer() ph = np.load('newDataPhoto.npy') pc = np.load('newDataPopCount.npy') myPhoto1 = np.array([ 1. , 1. , 0. , 1. , 0. , 0. , 1. , 0. , 1. , 0. , 0. , 1. , 0. , 1. , 1. , 1. , 0. , 0. , 0. , 1. , 1. , 0. , 1. , 1. , 0. , 0. , 1. , 1. , 1. , 0. , 0. , 1. , 0. , 0. , 1. , 1. , 1. , 0. , 0. , 1. , 0. , 0. , 1. , 0. , 0. , 0. , 1. , 0. , 0. , 0. , 1. , 0. , 0. , 1. , 1. , 0. , 1. , 0. , 1. , 0. , 0. , 1. , 1. , 0. ]) def hamming_dist(left, right, name): global kk start = timeit.default_timer() while left&lt;=right: if(np.count_nonzero(ph[left] != myPhoto1)&lt;=14): kk+=1 left+=1 stop=timeit.default_timer() print name print stop-start def Main(): global kk kk=0 t1 = Thread(target=hamming_dist, args=(0,2500000, 't1')) t2 = Thread(target=hamming_dist, args=(2500001, 5000000, 't2')) t3 = Thread(target=hamming_dist, args=(5000001, 7500000,'t3')) t4 = Thread(target=hamming_dist, args=(7500001, 9999999, 't4')) t1.start() t2.start() t3.start() t4.start() print ('main done') if __name__ == "__main__": Main() </code></pre> <p>And their outputs in order:</p> <pre><code>38 12.148679018 ##### main done t4 26.4695241451 t2 27.4959039688 t3 27.5113890171 t1 27.5896160603 </code></pre>
0
2016-08-11T10:38:58Z
38,894,938
<p>I solved the problem. I found out that threading is blocked by GIL which never allows to use more that the current processor. However using multiprocessing module worked. Here is the modification I made:</p> <pre><code>import numpy as np import multiprocessing import timeit start = timeit.default_timer() ph = np.load('newDataPhoto.npy') pc = np.load('newDataPopCount.npy') myPhoto1 = np.array([ 1. , 1. , 0. , 1. , 0. , 0. , 1. , 0. , 1. , 0. , 0. , 1. , 0. , 1. , 1. , 1. , 0. , 0. , 0. , 1. , 1. , 0. , 1. , 1. , 0. , 0. , 1. , 1. , 1. , 0. , 0. , 1. , 0. , 0. , 1. , 1. , 1. , 0. , 0. , 1. , 0. , 0. , 1. , 0. , 0. , 0. , 1. , 0. , 0. , 0. , 1. , 0. , 0. , 1. , 1. , 0. , 1. , 0. , 1. , 0. , 0. , 1. , 1. , 0. ]) def hamming_dist(left, right, name): global kk start = timeit.default_timer() while left&lt;=right: if(np.count_nonzero(ph[left] != myPhoto1)&lt;=14): kk+=1 left+=1 stop=timeit.default_timer() print name print stop-start def Main(): global kk kk=0 t1 = multiprocessing.Process(target=hamming_dist, args=(0,2500000, 't1')) t2 = multiprocessing.Process(target=hamming_dist, args=(2500001, 5000000, 't2')) t3 = multiprocessing.Process(target=hamming_dist, args=(5000001, 7500000,'t3')) t4 = multiprocessing.Process(target=hamming_dist, args=(7500001, 9999999, 't4')) t1.start() t2.start() t3.start() t4.start() print ('main done') if __name__ == "__main__": Main() </code></pre>
0
2016-08-11T11:19:26Z
[ "python", "multithreading", "performance", "gil", "hamming-distance" ]
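Independent of the threads-versus-processes question, the per-row Python loop in both versions can be removed entirely: NumPy can compare every row against the target vector in one vectorized pass. A hedged sketch, using synthetic data as a stand-in for `newDataPhoto.npy`:

```python
import numpy as np

# Synthetic stand-in for the 10M x 64 array loaded from newDataPhoto.npy.
rng = np.random.default_rng(0)
ph = rng.integers(0, 2, size=(1000, 64)).astype(float)
myPhoto1 = ph[0]  # use the first row as the query vector

# Count mismatching positions per row, then count rows within the threshold.
mismatches = np.count_nonzero(ph != myPhoto1, axis=1)
kk = int(np.sum(mismatches <= 14))
print(kk)
```

On large inputs this avoids millions of Python-level iterations; if memory is a concern, the same comparison can be done over chunks of rows.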
Python subprocess.Popen() - subprocess causes sockets to remain open
38,894,081
<p>I have a Python2.7 script that does some parallelism magic and finally enters Flask gui_loop. At some point a thread creates a background process with subprocess.Popen. This works. </p> <p>When my script exits and if the subprocess is still running, I can't run my script again, as flask gui_loop fails with:</p> <pre><code>socket.error: [Errno 98] Address already in use </code></pre> <p>With netstat -peanut I can see the ownership of the socket transfers to the child process when the python script exits. This is how it looks when both python script and subprocess are running:</p> <pre><code>root@test:/tmp# netstat -peanut | grep 5000 tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 1000 840210 21458/python </code></pre> <p>After terminating the Python script, socket does not close but its ownership is passed to the child process: </p> <pre><code>root@test:~/PycharmProjects/foo/gui# netstat -peanut | grep 5000 tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 1000 763103 19559/my-subprocess </code></pre> <p>Is there any way around this? The subprocess (written in C) is not doing anything on that socket and doesn't need it. Can I somehow create a subprocess without passing the gui loop socket resource to it?</p> <p>I can of course terminate the process but this is not ideal as the purpose of this is to build a simple gui around some calculations and not lose the progress if the gui script happens to exit. I would have a mechanism to reattach connection to the subprocess if I just could get the gui script up and running again. </p> <p>R</p>
0
2016-08-11T10:41:00Z
38,899,156
<p>You could try using the <strong><em>with</em></strong> statement. Some documentation here: <a href="http://preshing.com/20110920/the-python-with-statement-by-example/" rel="nofollow">The Python "with" Statement by Example</a> and <a href="https://www.python.org/dev/peps/pep-0343/" rel="nofollow">PEP 343</a>.</p> <p>This does open/close cleanup for you.</p>
1
2016-08-11T14:27:16Z
[ "python", "python-2.7", "sockets", "subprocess" ]
Python subprocess.Popen() - subprocess causes sockets to remain open
38,894,081
<p>I have a Python2.7 script that does some parallelism magic and finally enters Flask gui_loop. At some point a thread creates a background process with subprocess.Popen. This works. </p> <p>When my script exits and if the subprocess is still running, I can't run my script again, as flask gui_loop fails with:</p> <pre><code>socket.error: [Errno 98] Address already in use </code></pre> <p>With netstat -peanut I can see the ownership of the socket transfers to the child process when the python script exits. This is how it looks when both python script and subprocess are running:</p> <pre><code>root@test:/tmp# netstat -peanut | grep 5000 tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 1000 840210 21458/python </code></pre> <p>After terminating the Python script, socket does not close but its ownership is passed to the child process: </p> <pre><code>root@test:~/PycharmProjects/foo/gui# netstat -peanut | grep 5000 tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 1000 763103 19559/my-subprocess </code></pre> <p>Is there any way around this? The subprocess (written in C) is not doing anything on that socket and doesn't need it. Can I somehow create a subprocess without passing the gui loop socket resource to it?</p> <p>I can of course terminate the process but this is not ideal as the purpose of this is to build a simple gui around some calculations and not lose the progress if the gui script happens to exit. I would have a mechanism to reattach connection to the subprocess if I just could get the gui script up and running again. </p> <p>R</p>
0
2016-08-11T10:41:00Z
38,899,412
<p>You should use <a href="https://docs.python.org/2/library/subprocess.html#popen-constructor" rel="nofollow"><code>close_fds=True</code></a> when creating the subproces, which will cause all file descriptors (and therfore open sockets) to be closed in the child process (except for stdin/stdout/stderr).</p> <p>In newer versions (python 3.2+) <code>close_fds</code> already defaults to <code>True</code>, as in most cases you don't want to inherit all open file descriptors in a child process, but in python2.7 you still need to specify it explicitly.</p>
2
2016-08-11T14:37:53Z
[ "python", "python-2.7", "sockets", "subprocess" ]
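A small self-contained sketch of the `close_fds` idea from the answers (the child here is a trivial Python one-liner standing in for the real worker binary):

```python
import sys
import subprocess

# Start a child process without inheriting the parent's open file
# descriptors (such as Flask's listening socket on port 5000).
# On Python 2.7 close_fds=True must be passed explicitly; from
# Python 3.2 onwards it is already the default.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('child running')"],
    close_fds=True,
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()
print(out.decode().strip())
```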
Can't access dataframe columns
38,894,098
<p>I'm importing a dataframe from a csv file, but cannot access some of its columns by name. What's going on?</p> <p>In more concrete terms:</p> <pre><code>&gt; import pandas &gt; jobNames = pandas.read_csv("job_names.csv") &gt; print(jobNames) job_id job_name num_judgements 0 933985 Foo 180 1 933130 Moo 175 2 933123 Goo 150 3 933094 Flue 120 4 933088 Tru 120 </code></pre> <p>When I try to access the second column, I get an error:</p> <pre><code>&gt; jobNames.job_name </code></pre> <blockquote> <p>AttributeError: 'DataFrame' object has no attribute 'job_name'</p> </blockquote> <p>Strangely, I can access the job_id column thus:</p> <pre><code>&gt; print(jobNames.job_id) 0 933985 1 933130 2 933123 3 933094 4 933088 Name: job_id, dtype: int64 </code></pre> <p><strong>Edit (to put the accepted answer in context):</strong></p> <p>It turns out that the first row of the csv file (with the column names) looks like this:</p> <pre><code>job_id, job_name, num_judgements </code></pre> <p>Note the spaces after each comma! Those spaces are retained in the column names:</p> <pre><code>&gt; jobNames.columns[1] ' job_name' </code></pre> <p>which don't form valid python identifiers, so those columns aren't available as dataframe attributes. I can still access them dict-style:</p> <pre><code>&gt; jobNames[' job_name'] </code></pre>
1
2016-08-11T10:41:53Z
38,894,099
<p>Another (perhaps inferior) approach is to remove the spaces from the column names:</p> <pre><code>&gt; jobNames.columns = map(lambda s:s.strip(), jobNames.columns) &gt; jobNames.job_name 0 Foo 1 Moo 2 Goo 3 Flue 4 Tru Name: job_name, dtype: object </code></pre>
0
2016-08-11T10:41:53Z
[ "python", "csv", "pandas", "dataframe", "removing-whitespace" ]
Can't access dataframe columns
38,894,098
<p>I'm importing a dataframe from a csv file, but cannot access some of its columns by name. What's going on?</p> <p>In more concrete terms:</p> <pre><code>&gt; import pandas &gt; jobNames = pandas.read_csv("job_names.csv") &gt; print(jobNames) job_id job_name num_judgements 0 933985 Foo 180 1 933130 Moo 175 2 933123 Goo 150 3 933094 Flue 120 4 933088 Tru 120 </code></pre> <p>When I try to access the second column, I get an error:</p> <pre><code>&gt; jobNames.job_name </code></pre> <blockquote> <p>AttributeError: 'DataFrame' object has no attribute 'job_name'</p> </blockquote> <p>Strangely, I can access the job_id column thus:</p> <pre><code>&gt; print(jobNames.job_id) 0 933985 1 933130 2 933123 3 933094 4 933088 Name: job_id, dtype: int64 </code></pre> <p><strong>Edit (to put the accepted answer in context):</strong></p> <p>It turns out that the first row of the csv file (with the column names) looks like this:</p> <pre><code>job_id, job_name, num_judgements </code></pre> <p>Note the spaces after each comma! Those spaces are retained in the column names:</p> <pre><code>&gt; jobNames.columns[1] ' job_name' </code></pre> <p>which don't form valid python identifiers, so those columns aren't available as dataframe attributes. I can still access them dict-style:</p> <pre><code>&gt; jobNames[' job_name'] </code></pre>
1
2016-08-11T10:41:53Z
38,894,182
<p>Another solution for removing whitespaces from column names is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow"><code>str.strip</code></a>:</p> <pre><code>jobNames.columns = jobNames.columns.str.strip() print (jobNames.job_name) 0 Foo 1 Moo 2 Goo 3 Flue 4 Tru </code></pre>
1
2016-08-11T10:44:10Z
[ "python", "csv", "pandas", "dataframe", "removing-whitespace" ]
Can't access dataframe columns
38,894,098
<p>I'm importing a dataframe from a csv file, but cannot access some of its columns by name. What's going on?</p> <p>In more concrete terms:</p> <pre><code>&gt; import pandas &gt; jobNames = pandas.read_csv("job_names.csv") &gt; print(jobNames) job_id job_name num_judgements 0 933985 Foo 180 1 933130 Moo 175 2 933123 Goo 150 3 933094 Flue 120 4 933088 Tru 120 </code></pre> <p>When I try to access the second column, I get an error:</p> <pre><code>&gt; jobNames.job_name </code></pre> <blockquote> <p>AttributeError: 'DataFrame' object has no attribute 'job_name'</p> </blockquote> <p>Strangely, I can access the job_id column thus:</p> <pre><code>&gt; print(jobNames.job_id) 0 933985 1 933130 2 933123 3 933094 4 933088 Name: job_id, dtype: int64 </code></pre> <p><strong>Edit (to put the accepted answer in context):</strong></p> <p>It turns out that the first row of the csv file (with the column names) looks like this:</p> <pre><code>job_id, job_name, num_judgements </code></pre> <p>Note the spaces after each comma! Those spaces are retained in the column names:</p> <pre><code>&gt; jobNames.columns[1] ' job_name' </code></pre> <p>which don't form valid python identifiers, so those columns aren't available as dataframe attributes. I can still access them dict-style:</p> <pre><code>&gt; jobNames[' job_name'] </code></pre>
1
2016-08-11T10:41:53Z
38,894,205
<p>When using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>pandas.read_csv</code></a> pass in <code>skipinitialspace=True</code> flag to remove whitespace after CSV delimiters.</p>
2
2016-08-11T10:45:25Z
[ "python", "csv", "pandas", "dataframe", "removing-whitespace" ]
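A runnable sketch of the `skipinitialspace` fix (the CSV text below is an in-memory stand-in for the real `job_names.csv`):

```python
import io
import pandas as pd

# Header with spaces after the delimiters, as in the question.
csv_text = "job_id, job_name, num_judgements\n933985, Foo, 180\n"

# skipinitialspace=True strips the whitespace that follows each comma,
# so the column names become valid attribute names.
df = pd.read_csv(io.StringIO(csv_text), skipinitialspace=True)
print(list(df.columns))
print(df.job_name[0])
```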
Operation on numpy arrays contain rows with different size
38,894,291
<p>I have two lists, looking like this:</p> <pre><code>a= [[1,2,3,4], [2,3,4,5],[3,4,5,6,7]], b= [[5,6,7,8], [9,1,2,3], [4,5,6,7,8]] </code></pre> <p>which I want to subtract from each other element by element for an Output like this:</p> <pre><code>a-b= [[-4,-4,-4,-4],[7,2,2,2],[-1,-1,-1,-1,-1]] </code></pre> <p>In order to do so I convert each of <code>a</code> and <code>b</code> to arrays and subtract them I use:</p> <pre><code>np.array(a)-np.array(b) </code></pre> <p>The Output just gives me the error: </p> <blockquote> <p>Unsupported Operand type for-: 'list' and 'list'</p> </blockquote> <p>What am I doing wrong? Shouldn't the <code>np.array</code> command ensure the conversion to the array?</p>
8
2016-08-11T10:49:20Z
38,894,480
<p>The dimensions of your two arrays don't match, i.e. the first two sublists of <code>a</code> have 4 elements, but the third has 5 and ditto with <code>b</code>. If you convert the lists to <code>numpy arrays</code>, <code>numpy</code> silently gives you something like this:</p> <pre><code>In [346]: aa = np.array(a) In [347]: bb = np.array(b) In [348]: aa Out[348]: array([[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6, 7]], dtype=object) In [349]: bb Out[349]: array([[5, 6, 7, 8], [9, 1, 2, 3], [4, 5, 6, 7, 8]], dtype=object) </code></pre> <p>You need to make sure that all your sublists have the same number of elements, then your code will work:</p> <pre><code>In [350]: a = [[1,2,3,4], [2,3,4,5],[3,4,5,6]]; b = [[5,6,7,8], [9,1,2,3], [4,5,6,7]] # I removed the last element of third sublist in a and b In [351]: np.array(a) - np.array(b) Out[351]: array([[-4, -4, -4, -4], [-7, 2, 2, 2], [-1, -1, -1, -1]]) </code></pre>
1
2016-08-11T10:59:04Z
[ "python", "arrays", "numpy" ]
Operation on numpy arrays contain rows with different size
38,894,291
<p>I have two lists, looking like this:</p> <pre><code>a= [[1,2,3,4], [2,3,4,5],[3,4,5,6,7]], b= [[5,6,7,8], [9,1,2,3], [4,5,6,7,8]] </code></pre> <p>which I want to subtract from each other element by element for an Output like this:</p> <pre><code>a-b= [[-4,-4,-4,-4],[7,2,2,2],[-1,-1,-1,-1,-1]] </code></pre> <p>In order to do so I convert each of <code>a</code> and <code>b</code> to arrays and subtract them I use:</p> <pre><code>np.array(a)-np.array(b) </code></pre> <p>The Output just gives me the error: </p> <blockquote> <p>Unsupported Operand type for-: 'list' and 'list'</p> </blockquote> <p>What am I doing wrong? Shouldn't the <code>np.array</code> command ensure the conversion to the array?</p>
8
2016-08-11T10:49:20Z
38,894,750
<p>You can try:</p> <pre><code>&gt;&gt;&gt; a= [[1,2,3,4], [2,3,4,5],[3,4,5,6,7]] &gt;&gt;&gt; b= [[5,6,7,8], [9,1,2,3], [4,5,6,7,8]] &gt;&gt;&gt; &gt;&gt;&gt; c =[] &gt;&gt;&gt; for i in range(len(a)): c.append([A - B for A, B in zip(a[i], b[i])]) &gt;&gt;&gt; print c [[-4, -4, -4, -4], [-7, 2, 2, 2], [-1, -1, -1, -1, -1]] </code></pre> <p>Or 2nd method is using <strong>map</strong>:</p> <pre><code>from operator import sub a= [[1,2,3,4], [2,3,4,5],[3,4,5,6,7]] b= [[5,6,7,8], [9,1,2,3], [4,5,6,7,8]] c =[] for i in range(len(a)): c.append(map(sub, a[i], b[i])) print c [[-4, -4, -4, -4], [-7, 2, 2, 2], [-1, -1, -1, -1, -1]] </code></pre>
1
2016-08-11T11:11:04Z
[ "python", "arrays", "numpy" ]
Operation on numpy arrays contain rows with different size
38,894,291
<p>I have two lists, looking like this:</p> <pre><code>a= [[1,2,3,4], [2,3,4,5],[3,4,5,6,7]], b= [[5,6,7,8], [9,1,2,3], [4,5,6,7,8]] </code></pre> <p>which I want to subtract from each other element by element for an Output like this:</p> <pre><code>a-b= [[-4,-4,-4,-4],[7,2,2,2],[-1,-1,-1,-1,-1]] </code></pre> <p>In order to do so I convert each of <code>a</code> and <code>b</code> to arrays and subtract them I use:</p> <pre><code>np.array(a)-np.array(b) </code></pre> <p>The Output just gives me the error: </p> <blockquote> <p>Unsupported Operand type for-: 'list' and 'list'</p> </blockquote> <p>What am I doing wrong? Shouldn't the <code>np.array</code> command ensure the conversion to the array?</p>
8
2016-08-11T10:49:20Z
38,894,774
<p>Without NumPy:</p> <pre><code>result = [] for (m, n) in (zip(a, b)): result.append([i - j for i, j in zip(m, n)]) </code></pre> <p>See also <a href="http://stackoverflow.com/questions/534855">this question</a> and <a href="http://stackoverflow.com/questions/1663807">this one</a>.</p>
0
2016-08-11T11:12:16Z
[ "python", "arrays", "numpy" ]
Operation on numpy arrays contain rows with different size
38,894,291
<p>I have two lists, looking like this:</p> <pre><code>a= [[1,2,3,4], [2,3,4,5],[3,4,5,6,7]], b= [[5,6,7,8], [9,1,2,3], [4,5,6,7,8]] </code></pre> <p>which I want to subtract from each other element by element for an Output like this:</p> <pre><code>a-b= [[-4,-4,-4,-4],[7,2,2,2],[-1,-1,-1,-1,-1]] </code></pre> <p>In order to do so I convert each of <code>a</code> and <code>b</code> to arrays and subtract them I use:</p> <pre><code>np.array(a)-np.array(b) </code></pre> <p>The Output just gives me the error: </p> <blockquote> <p>Unsupported Operand type for-: 'list' and 'list'</p> </blockquote> <p>What am I doing wrong? Shouldn't the <code>np.array</code> command ensure the conversion to the array?</p>
8
2016-08-11T10:49:20Z
38,894,828
<p>What about a custom function such as:</p> <pre><code>import numpy as np a = [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6, 7]] b = [[5, 6, 7, 8], [9, 1, 2, 3], [4, 5, 6, 7, 8]] def np_substract(l1, l2): return np.array([np.array(l1[i]) - np.array(l2[i]) for i in range(len(l1))]) print np_substract(a, b) </code></pre>
0
2016-08-11T11:14:44Z
[ "python", "arrays", "numpy" ]
Operation on numpy arrays contain rows with different size
38,894,291
<p>I have two lists, looking like this:</p> <pre><code>a= [[1,2,3,4], [2,3,4,5],[3,4,5,6,7]], b= [[5,6,7,8], [9,1,2,3], [4,5,6,7,8]] </code></pre> <p>which I want to subtract from each other element by element for an Output like this:</p> <pre><code>a-b= [[-4,-4,-4,-4],[7,2,2,2],[-1,-1,-1,-1,-1]] </code></pre> <p>In order to do so I convert each of <code>a</code> and <code>b</code> to arrays and subtract them I use:</p> <pre><code>np.array(a)-np.array(b) </code></pre> <p>The Output just gives me the error: </p> <blockquote> <p>Unsupported Operand type for-: 'list' and 'list'</p> </blockquote> <p>What am I doing wrong? Shouldn't the <code>np.array</code> command ensure the conversion to the array?</p>
8
2016-08-11T10:49:20Z
38,894,930
<p>Here is a Numpythonic way:</p> <pre><code>&gt;&gt;&gt; y = map(len, a) &gt;&gt;&gt; a = np.hstack(np.array(a)) &gt;&gt;&gt; b = np.hstack(np.array(b)) &gt;&gt;&gt; np.split(a-b, np.cumsum(y)) [array([-4, -4, -4, -4]), array([-7, 2, 2, 2]), array([-1, -1, -1, -1, -1]), array([], dtype=float64)] &gt;&gt;&gt; </code></pre> <p>Since you cannot subtract the arrays with different shapes, you can flatten your arrays using <code>np.hstack()</code> then subtract your flattened arrays then reshape based on the previous shape.</p>
5
2016-08-11T11:19:04Z
[ "python", "arrays", "numpy" ]
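The flatten, subtract, and split idea above can also be written without calling `np.array` on a ragged list (which newer NumPy versions reject); a sketch:

```python
import numpy as np

a = [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6, 7]]
b = [[5, 6, 7, 8], [9, 1, 2, 3], [4, 5, 6, 7, 8]]

# Flatten both ragged lists, subtract once, then split the result back
# at the cumulative sublist lengths.
lengths = [len(row) for row in a]
flat_a = np.hstack([np.asarray(row) for row in a])
flat_b = np.hstack([np.asarray(row) for row in b])
diff = flat_a - flat_b
result = [chunk.tolist() for chunk in np.split(diff, np.cumsum(lengths)[:-1])]
print(result)
```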
Operation on numpy arrays contain rows with different size
38,894,291
<p>I have two lists, looking like this:</p> <pre><code>a= [[1,2,3,4], [2,3,4,5],[3,4,5,6,7]], b= [[5,6,7,8], [9,1,2,3], [4,5,6,7,8]] </code></pre> <p>which I want to subtract from each other element by element for an Output like this:</p> <pre><code>a-b= [[-4,-4,-4,-4],[7,2,2,2],[-1,-1,-1,-1,-1]] </code></pre> <p>In order to do so I convert each of <code>a</code> and <code>b</code> to arrays and subtract them I use:</p> <pre><code>np.array(a)-np.array(b) </code></pre> <p>The Output just gives me the error: </p> <blockquote> <p>Unsupported Operand type for-: 'list' and 'list'</p> </blockquote> <p>What am I doing wrong? Shouldn't the <code>np.array</code> command ensure the conversion to the array?</p>
8
2016-08-11T10:49:20Z
38,894,934
<p>You are getting the error, because your code is trying to subtract sublist from sublist, if you want to make it work, you can do the same in following manner:</p> <pre><code>import numpy as np a= [[1,2,3,4], [2,3,4,5],[3,4,5,6,7]] b= [[5,6,7,8], [9,1,2,3], [4,5,6,7,8]] #You can apply different condition here, like (if (len(a) == len(b)), then only run the following code for each in range(len(a)): list = np.array(a[each])-np.array(b[each]) #for converting the output array in to list subList[each] = list.tolist() print subList </code></pre>
0
2016-08-11T11:19:12Z
[ "python", "arrays", "numpy" ]
Operation on numpy arrays contain rows with different size
38,894,291
<p>I have two lists, looking like this:</p> <pre><code>a= [[1,2,3,4], [2,3,4,5],[3,4,5,6,7]], b= [[5,6,7,8], [9,1,2,3], [4,5,6,7,8]] </code></pre> <p>which I want to subtract from each other element by element for an Output like this:</p> <pre><code>a-b= [[-4,-4,-4,-4],[7,2,2,2],[-1,-1,-1,-1,-1]] </code></pre> <p>In order to do so I convert each of <code>a</code> and <code>b</code> to arrays and subtract them I use:</p> <pre><code>np.array(a)-np.array(b) </code></pre> <p>The Output just gives me the error: </p> <blockquote> <p>Unsupported Operand type for-: 'list' and 'list'</p> </blockquote> <p>What am I doing wrong? Shouldn't the <code>np.array</code> command ensure the conversion to the array?</p>
8
2016-08-11T10:49:20Z
38,901,165
<p>A nested list comprehension will do the job:</p> <pre><code>In [102]: [[i2-j2 for i2,j2 in zip(i1,j1)] for i1,j1 in zip(a,b)] Out[102]: [[-4, -4, -4, -4], [-7, 2, 2, 2], [-1, -1, -1, -1, -1]] </code></pre> <p>The problem with <code>np.array(a)-np.array(b)</code> is that the sublists differ in length, so the resulting arrays are object type - arrays of lists</p> <pre><code>In [104]: np.array(a) Out[104]: array([[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6, 7]], dtype=object) </code></pre> <p>Subtraction is iterating over the outer array just fine, but hitting a problem when subtracting one sublist from another - hence the error message.</p> <p>If I made the inputs arrays of arrays, the subtraction will work</p> <pre><code>In [106]: np.array([np.array(a1) for a1 in a]) Out[106]: array([array([1, 2, 3, 4]), array([2, 3, 4, 5]), array([3, 4, 5, 6, 7])], dtype=object) In [107]: aa=np.array([np.array(a1) for a1 in a]) In [108]: bb=np.array([np.array(a1) for a1 in b]) In [109]: aa-bb Out[109]: array([array([-4, -4, -4, -4]), array([-7, 2, 2, 2]), array([-1, -1, -1, -1, -1])], dtype=object) </code></pre> <p>You can't count of array operations working on object dtype arrays. But in this case, <code>subtraction</code> is defined for the subarrays, so it can handle the nesting.</p> <p>Another way to do the nesting is use <code>np.subtract</code>. This is a <code>ufunc</code> version of <code>-</code> and will apply <code>np.asarray</code> to its inputs as needed:</p> <pre><code>In [103]: [np.subtract(i1,j1) for i1,j1 in zip(a,b)] Out[103]: [array([-4, -4, -4, -4]), array([-7, 2, 2, 2]), array([-1, -1, -1, -1, -1])] </code></pre> <p>Notice that these array calculations return arrays or a list of arrays. Turning the inner arrays back to lists requires iteration.</p> <p>If you are starting with lists, converting to arrays often does not save time. 
Array calculations can be faster, but that doesn't compensate for the overhead in creating the arrays in the first place.</p> <p>If I pad the inputs to equal length, then the simple array subtraction works, creating a 2d array.</p> <pre><code>In [116]: ao= [[1,2,3,4,0], [2,3,4,5,0],[3,4,5,6,7]]; bo= [[5,6,7,8,0], [9,1,2,3,0], [4,5,6,7,8]] In [117]: np.array(ao)-np.array(bo) Out[117]: array([[-4, -4, -4, -4, 0], [-7, 2, 2, 2, 0], [-1, -1, -1, -1, -1]]) </code></pre>
0
2016-08-11T15:59:23Z
[ "python", "arrays", "numpy" ]
Error while trying to add column in inherited view Odoo
38,894,397
<p>I'm trying to add a column to an existing view. I'm new to Odoo; this is my XML code:</p> <pre><code> &lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;openerp&gt; &lt;data&gt; &lt;record model="ir.ui.view" id="mrp_form_view"&gt; &lt;field name="name"&gt; mrp.fleuret.form&lt;/field&gt; &lt;field name="model"&gt; mrp.bom&lt;/field&gt; &lt;field name="type"&gt;form&lt;/field&gt; &lt;field name="inherit_id" ref="mrp.mrp_bom_form_view" /&gt; &lt;field name="arch" type="xml"&gt; &lt;xpath expr="page[@string='Components']/field[@name='bom_line_ids']/tree[@string='Components'/field[@name='date_stop']" position="after"&gt; &lt;field name="unit_price"/&gt; &lt;/xpath&gt; &lt;/field&gt; &lt;/record&gt; &lt;/data&gt; &lt;/openerp&gt; </code></pre> <p>and this is my Python code:</p> <pre><code>from openerp.osv import osv, fields class fleuret(osv.Model): _inherit = "mrp.bom.line" _columns = { 'unit_price' : fields.float('unit price'), } </code></pre> <p><a href="http://i.stack.imgur.com/8QT3N.png" rel="nofollow"><img src="http://i.stack.imgur.com/8QT3N.png" alt="parent form view"></a></p>
1
2016-08-11T10:54:34Z
38,895,616
<p>You just need to update your XML code; the issue was in the xpath.</p> <p>You should try the following, </p> <pre><code>&lt;record model="ir.ui.view" id="mrp_form_view"&gt; &lt;field name="name"&gt; mrp.fleuret.form&lt;/field&gt; &lt;field name="model"&gt; mrp.bom&lt;/field&gt; &lt;field name="type"&gt;form&lt;/field&gt; &lt;field name="inherit_id" ref="mrp.mrp_bom_form_view" /&gt; &lt;field name="arch" type="xml"&gt; &lt;xpath expr="//field[@name='bom_line_ids']/tree/field[@name='date_stop']" position="after"&gt; &lt;field name="unit_price"/&gt; &lt;/xpath&gt; &lt;/field&gt; &lt;/record&gt; </code></pre> <p>Or you can also write the xpath like this,</p> <pre><code>&lt;xpath expr="//page[@string='Components']/field[@name='bom_line_ids']/tree[@string='Components']/field[@name='date_stop']" position="after"&gt; </code></pre>
1
2016-08-11T11:52:05Z
[ "python", "xml", "xpath", "openerp" ]
How to name a dataframe column filled by numpy array?
38,894,418
<p>I am filling a DataFrame by transposing some numpy array :</p> <pre><code> for symbol in syms[:5]: price_p = Share(symbol) closes_p = [c['Close'] for c in price_p.get_historical(startdate_s, enddate_s)] dump = np.array(closes_p) na_price_ar.append(dump) print symbol df = pd.DataFrame(na_price_ar).transpose() </code></pre> <p>df, the DataFrame is well filled however, the column name are 0,1,2...,5 I would like to rename them with the value of the element syms[:5]. I googled it and I found this:</p> <pre><code> for symbol in syms[:5]: df.rename(columns={ ''+ str(i) + '' : symbol}, inplace=True) i = i+1 </code></pre> <p>But if I check the variabke df I still have the same column name. Any ideas ?</p>
1
2016-08-11T10:55:26Z
38,894,526
<p>You can use:</p> <pre><code>na_price_ar = [['A','B','C'],[0,2,3],[1,2,4],[5,2,3],[8,2,3]] syms = ['q','w','e','r','t','y','u'] df = pd.DataFrame(na_price_ar, index=syms[:5]).transpose() print (df) q w e r t 0 A 0 1 5 8 1 B 2 2 2 2 2 C 3 4 3 3 </code></pre>
0
2016-08-11T11:01:31Z
[ "python", "arrays", "pandas", "numpy", "dataframe" ]
How to name a dataframe column filled by numpy array?
38,894,418
<p>I am filling a DataFrame by transposing some numpy array :</p> <pre><code> for symbol in syms[:5]: price_p = Share(symbol) closes_p = [c['Close'] for c in price_p.get_historical(startdate_s, enddate_s)] dump = np.array(closes_p) na_price_ar.append(dump) print symbol df = pd.DataFrame(na_price_ar).transpose() </code></pre> <p>df, the DataFrame is well filled however, the column name are 0,1,2...,5 I would like to rename them with the value of the element syms[:5]. I googled it and I found this:</p> <pre><code> for symbol in syms[:5]: df.rename(columns={ ''+ str(i) + '' : symbol}, inplace=True) i = i+1 </code></pre> <p>But if I check the variabke df I still have the same column name. Any ideas ?</p>
1
2016-08-11T10:55:26Z
38,894,588
<p>Instead of using a list of arrays and transposing, you could build the DataFrame from a dict whose keys are symbols and whose values are arrays of <em>column</em> values:</p> <pre><code>import numpy as np import pandas as pd np.random.seed(2016) syms = 'abcde' na_price_ar = {} for symbol in syms[:5]: # price_p = Share(symbol) # closes_p = [c['Close'] for c in price_p.get_historical(startdate_s, enddate_s)] # dump = np.array(closes_p) dump = np.random.randint(10, size=3) na_price_ar[symbol] = dump print(symbol) df = pd.DataFrame(na_price_ar) print(df) </code></pre> <p>yields</p> <pre><code> a b c d e 0 3 3 8 2 4 1 7 8 7 6 1 2 2 4 9 3 9 </code></pre>
3
2016-08-11T11:04:17Z
[ "python", "arrays", "pandas", "numpy", "dataframe" ]
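A compact variant of the dict-based construction above, with hypothetical ticker symbols and synthetic values in place of the Yahoo data:

```python
import numpy as np
import pandas as pd

syms = ["AAPL", "GOOG", "MSFT"]  # hypothetical ticker symbols

# Keying the dict by symbol names the columns at construction time,
# so no rename step is needed afterwards.
data = {sym: np.arange(3) * (i + 1) for i, sym in enumerate(syms)}
df = pd.DataFrame(data)
print(df)
```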
How to name a dataframe column filled by numpy array?
38,894,418
<p>I am filling a DataFrame by transposing some numpy array :</p> <pre><code> for symbol in syms[:5]: price_p = Share(symbol) closes_p = [c['Close'] for c in price_p.get_historical(startdate_s, enddate_s)] dump = np.array(closes_p) na_price_ar.append(dump) print symbol df = pd.DataFrame(na_price_ar).transpose() </code></pre> <p>df, the DataFrame is well filled however, the column name are 0,1,2...,5 I would like to rename them with the value of the element syms[:5]. I googled it and I found this:</p> <pre><code> for symbol in syms[:5]: df.rename(columns={ ''+ str(i) + '' : symbol}, inplace=True) i = i+1 </code></pre> <p>But if I check the variabke df I still have the same column name. Any ideas ?</p>
1
2016-08-11T10:55:26Z
38,918,996
<p>You may use the <code>df.columns[number]</code> statement as the dictionary key in the <code>.rename()</code> method:</p> <pre><code>dic = {'a': [4, 1, 3, 1], 'b': [4, 2, 1, 4], 'c': [5, 7, 9, 1], 'd': [4, 1, 3, 1], 'e': [5, 2, 6, 0]} df = pd.DataFrame(dic) number = 0 for symbol in syms[:5]: df.rename( columns = { df.columns[number]: symbol}, inplace = True) number = number + 1 </code></pre> <p>and the result is</p> <pre><code> i f g h i 0 4 4 5 4 5 1 1 2 7 1 2 2 3 1 9 3 6 3 1 4 1 1 0 </code></pre>
0
2016-08-12T13:18:35Z
[ "python", "arrays", "pandas", "numpy", "dataframe" ]
Apply alignments on Reportlab SimpleDocTemplate to append multiple barcodes in number of rows
38,894,523
<p>I am using Reportlab <code>SimpleDocTemplate</code> to create a PDF file. I have to write (draw) multiple images row-wise so that I can fit many images in the file.</p> <pre><code>class PrintBarCodes(View): def get(self, request, format=None): response = HttpResponse(content_type='application/pdf') response['Content-Disposition'] = 'attachment;\ filename="barcodes.pdf"' # Close the PDF object cleanly, and we're done. ean = barcode.get('ean13', '123456789102', writer=ImageWriter()) filename = ean.save('ean13') doc = SimpleDocTemplate(response, pagesize=A4) parts = [] parts.append(Image(filename)) doc.build(parts) return response </code></pre> <p>In the code, I have printed a single barcode to the file; the output can be seen in the image below.</p> <p>But I need to draw a number of barcodes. How can I reduce the image size before drawing to the PDF file and arrange the barcodes in rows?</p> <p><a href="http://i.stack.imgur.com/NqAeb.png" rel="nofollow"><img src="http://i.stack.imgur.com/NqAeb.png" alt="enter image description here"></a></p>
1
2016-08-11T11:01:28Z
38,913,180
<p>As your question suggest that you need flexibility I think the most sensible approach is using <code>Flowable</code>'s. The barcode normally isn't one but we can <a href="http://stackoverflow.com/questions/18569682/use-qrcodewidget-or-plotarea-with-platypus">make it one</a> pretty easily. By doing so we can let <a href="/questions/tagged/platypus" class="post-tag" title="show questions tagged &#39;platypus&#39;" rel="tag">platypus</a> decide how much space there is in your layout for each barcode.</p> <p>So step one the <code>Barcode</code> <code>Flowable</code> which looks like this:</p> <pre><code>from reportlab.graphics import renderPDF from reportlab.graphics.barcode.eanbc import Ean13BarcodeWidget from reportlab.graphics.shapes import Drawing from reportlab.platypus import Flowable class BarCode(Flowable): # Based on http://stackoverflow.com/questions/18569682/use-qrcodewidget-or-plotarea-with-platypus def __init__(self, value="1234567890", ratio=0.5): # init and store rendering value Flowable.__init__(self) self.value = value self.ratio = ratio def wrap(self, availWidth, availHeight): # Make the barcode fill the width while maintaining the ratio self.width = availWidth self.height = self.ratio * availWidth return self.width, self.height def draw(self): # Flowable canvas bar_code = Ean13BarcodeWidget(value=self.value) bounds = bar_code.getBounds() bar_width = bounds[2] - bounds[0] bar_height = bounds[3] - bounds[1] w = float(self.width) h = float(self.height) d = Drawing(w, h, transform=[w / bar_width, 0, 0, h / bar_height, 0, 0]) d.add(bar_code) renderPDF.draw(d, self.canv, 0, 0) </code></pre> <p>Then to answer your question, the easiest way to now put multiple barcodes on one page would be using a <code>Table</code> like so:</p> <pre><code>from reportlab.platypus import SimpleDocTemplate, Table from reportlab.lib.pagesizes import A4 doc = SimpleDocTemplate("test.pdf", pagesize=A4) table_data = [[BarCode(value='123'), BarCode(value='456')], 
[BarCode(value='789'), BarCode(value='012')]] barcode_table = Table(table_data) parts = [] parts.append(barcode_table) doc.build(parts) </code></pre> <p>Which outputs:</p> <p><a href="http://i.stack.imgur.com/ljJ1K.png" rel="nofollow"><img src="http://i.stack.imgur.com/ljJ1K.png" alt="Example of barcode table"></a></p>
0
2016-08-12T08:17:42Z
[ "python", "django", "barcode", "reportlab" ]
How to update model object in django?
38,894,573
<p>I am using the code below to update the status.</p> <pre><code>current_challenge = UserChallengeSummary.objects.filter(user_challenge_id=user_challenge_id).latest('id')
current_challenge.update(status=str(request.data['status']))
</code></pre> <p>I am getting the error below:</p> <blockquote> <p>'UserChallengeSummary' object has no attribute 'update'</p> </blockquote> <p>To solve this error, I found this solution:</p> <pre><code>current_challenge.status = str(request.data['status'])
current_challenge.save()
</code></pre> <p>Is there another way to update the record?</p>
0
2016-08-11T11:03:52Z
38,894,777
<p><a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#latest" rel="nofollow">latest()</a> method returns the latest object which is an instance of <code>UserChallengeSummary</code>, which doesn't have an update method.</p> <p>For updating single objects, your method is standard.</p> <p><a href="https://docs.djangoproject.com/en/1.10/topics/db/queries/#updating-multiple-objects-at-once" rel="nofollow">update()</a> method is used for updating multiple objects at once, so it works on <code>QuerySet</code> instances.</p>
0
2016-08-11T11:12:18Z
[ "python", "django", "django-models" ]
How to update model object in django?
38,894,573
<p>I am using the code below to update the status.</p> <pre><code>current_challenge = UserChallengeSummary.objects.filter(user_challenge_id=user_challenge_id).latest('id')
current_challenge.update(status=str(request.data['status']))
</code></pre> <p>I am getting the error below:</p> <blockquote> <p>'UserChallengeSummary' object has no attribute 'update'</p> </blockquote> <p>To solve this error, I found this solution:</p> <pre><code>current_challenge.status = str(request.data['status'])
current_challenge.save()
</code></pre> <p>Is there another way to update the record?</p>
0
2016-08-11T11:03:52Z
38,894,937
<p>Your working solution is the way usually used in Django, as @Compadre already said.</p> <p>But sometimes (for example, in tests) it's useful to be able to update multiple fields at once. For such cases I've written a simple helper:</p> <pre><code>def update_attrs(instance, **kwargs):
    """
    Updates model instance attributes and saves the instance
    :param instance: any Model instance
    :param kwargs: dict with attributes
    :return: updated instance, reloaded from database
    """
    instance_pk = instance.pk
    for key, value in kwargs.items():
        if hasattr(instance, key):
            setattr(instance, key, value)
        else:
            raise KeyError("Failed to update non existing attribute {}.{}".format(
                instance.__class__.__name__, key
            ))
    instance.save(force_update=True)
    return instance.__class__.objects.get(pk=instance_pk)
</code></pre> <p>Usage example:</p> <pre><code>current_challenge = update_attrs(current_challenge,
                                 status=str(request.data['status']),
                                 other_field=other_value)
# ... etc.
</code></pre> <p>If you wish, you can remove <code>instance.save()</code> from the function (to call it explicitly after the function call).</p>
1
2016-08-11T11:19:20Z
[ "python", "django", "django-models" ]
Sklearn Fit model multiple times
38,894,576
<p>The origin of the problem is common:</p> <p>there is a lot of training data, which is read in chunks. The goal is to fit the desired model sequentially on the chunked data sets, keeping the state from previous fits.</p> <p>Are there any methods besides <code>partial_fit()</code> to fit a model on different data with sklearn? Are there any tricks to rewrite the <code>fit()</code> function to customize it for this problem? Or is it somehow possible to realize this with <code>pickle</code>?</p>
0
2016-08-11T11:03:55Z
38,922,782
<p>There is a reason why some models expose <code>partial_fit()</code> and others don't. Every model is a different machine learning algorithm, and for many of these algorithms there is just no way to add an element without recalculating the model from scratch.</p> <p>So, if you have to fit the models incrementally, pick an incremental model that has <code>partial_fit()</code>. You can find a full list on <a href="http://scikit-learn.org/stable/modules/scaling_strategies.html#incremental-learning" rel="nofollow">this documentation page</a>.</p> <p>Alternatively, you can build an ensemble model. Create a separate <code>Classifier()</code> or <code>Regression()</code> for every chunk of data you have. Then, when you need to predict something, you can just</p> <pre><code>for classifier in classifiers:
    votes[classifier.predict(X)] += 1
prediction = numpy.argmax(votes)
</code></pre> <p>or, for regressors</p> <pre><code>prediction = numpy.mean([regressor.predict(X) for regressor in regressors])
</code></pre>
0
2016-08-12T16:37:32Z
[ "python", "scikit-learn" ]
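The ensemble voting idea in the answer above can be sketched with the standard library alone; the `StubClassifier` stand-ins below are assumptions used in place of real fitted sklearn models:

```python
from collections import Counter

class StubClassifier(object):
    """Stand-in for a fitted sklearn classifier (an assumption for this demo)."""
    def __init__(self, label):
        self.label = label
    def predict(self, X):
        return self.label

# One "model" per data chunk, as the answer suggests.
classifiers = [StubClassifier("spam"), StubClassifier("ham"), StubClassifier("spam")]

votes = Counter()                         # a Counter replaces the bare votes dict
for classifier in classifiers:
    votes[classifier.predict(None)] += 1

prediction = votes.most_common(1)[0][0]   # majority vote, like numpy.argmax(votes)
print(prediction)                         # spam
```

With real models, each `predict()` call would receive the feature matrix `X`; the voting logic stays the same.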
unable to access spreadsheet data
38,894,636
<p>Running my script produces the following output:</p> <pre><code>=== RESTART: C:\Users\sandesh patil\Desktop\pythan_practice\quickstart.py ===
username, password:
Traceback (most recent call last):
  File "C:\Users\sandesh patil\Desktop\pythan_practice\quickstart.py", line 81, in &lt;module&gt;
    main()
  File "C:\Users\sandesh patil\Desktop\pythan_practice\quickstart.py", line 77, in main
    print('%s, %s' % (row[0], row[2]))
IndexError: list index out of range
</code></pre>
-2
2016-08-11T11:06:23Z
38,894,738
<p>You can use pandas in Python to access a spreadsheet:</p> <pre><code>import pandas

dataframe = pandas.read_csv('/path/to/file.csv', engine='python')
dataset = dataframe.values
</code></pre>
0
2016-08-11T11:10:33Z
[ "python" ]
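The `IndexError` in the question comes from indexing `row[2]` on rows that have fewer than three fields; whichever reader is used, a length guard avoids it. A minimal sketch with the stdlib `csv` module (the sample data and column layout are assumptions, since the real sheet isn't shown):

```python
import csv
import io

# Hypothetical sheet contents: the second row is deliberately too short.
data = io.StringIO("alice,pw1,alice@example.com\nbob,pw2\n")

pairs = []
for row in csv.reader(data):
    if len(row) > 2:                      # guard against short rows -> no IndexError
        pairs.append((row[0], row[2]))
        print('%s, %s' % (row[0], row[2]))
```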
How to make log file copy in python?
38,894,722
<p>I want to make a log file in Python the same way as in log4j: as soon as the <strong>logger.log</strong> file gets to a size of 1K, make a copy of this file and call it <strong>logger(1).log</strong>. In case <strong>logger(1).log</strong> already exists, create <strong>logger(2).log</strong>, and of course delete logger.log so that next time it runs it will start a clean log.</p> <p>This is my code, but it is good only for the first backup of the logger file:</p> <pre><code>b = os.path.getsize('logger.log')
print b
if b &gt;= 1000:
    shutil.copy2('logger.log', 'logger(1).log')
</code></pre> <p>This is my log.py file so it can be used globally:</p> <pre><code>import os
import logging
from logging.config import fileConfig
from logging import handlers

def setup_custom_logger():
    configFolder = os.getcwd() + os.sep + 'Conf'
    fileConfig(configFolder + os.sep + 'logging_config.ini')
    logger = logging.getLogger()
    # create a file handler
    handler = logging.handlers.RotatingFileHandler('logger.log', maxBytes=1024, encoding="UTF-8")
    handler.doRollover()
    # create a logging format
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger
</code></pre>
0
2016-08-11T11:09:51Z
38,894,890
<p>You need to set up a <a href="https://docs.python.org/2/library/logging.handlers.html#rotatingfilehandler" rel="nofollow">RotatingFileHandler</a>:</p> <pre><code>import logging
from logging import handlers

logger = logging.getLogger(__name__)
handler = handlers.RotatingFileHandler('logger.log', maxBytes=1000, backupCount=10, encoding="UTF-8")
handler.doRollover()
logger.addHandler(handler)
</code></pre> <p>From the documentation:</p> <blockquote> <p>You can use the maxBytes and backupCount values to allow the file to rollover at a predetermined size. When the size is about to be exceeded, the file is closed and a new file is silently opened for output. Rollover occurs whenever the current log file is nearly maxBytes in length.</p> </blockquote>
3
2016-08-11T11:17:11Z
[ "python", "logging" ]
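The numbered-backup behaviour the question asks for can be observed by exercising the handler directly; this sketch (file names and sizes are arbitrary choices) writes to a temporary directory and forces a few rollovers:

```python
import logging
import logging.handlers
import os
import tempfile

logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, "logger.log")

logger = logging.getLogger("rollover_demo")
logger.setLevel(logging.INFO)
handler = logging.handlers.RotatingFileHandler(logfile, maxBytes=50, backupCount=3)
logger.addHandler(handler)

for i in range(10):
    logger.info("message number %d", i)   # ~17 bytes each, so rollovers happen

handler.close()
# Rolled-over files are named logger.log.1, logger.log.2, ...
backups = sorted(f for f in os.listdir(logdir) if f.startswith("logger.log."))
print(backups)
```

The handler renames on rollover rather than copying, which achieves the same numbered-history effect as the manual `shutil.copy2` approach in the question.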
How to make log file copy in python?
38,894,722
<p>I want to make log file in python same as in log4j, meaning as soon the <strong>logger.log</strong> file get's to a size of 1K make a copy of this file and call it <strong>logger(1).log</strong> , In case <strong>logger(1)</strong>,log already exists create <strong>logger(2).log</strong> and of course delete logger.log so next time it will run it will start a clean log.</p> <p>This is my code but it is good only for first creation of logger file bakup:</p> <pre><code>b = os.path.getsize('logger.log') print b if b &gt;= 1000: shutil.copy2('logger.log', 'logger(1).log') </code></pre> <p>This is my log.py file so it can be used globally:</p> <pre><code>import os import logging from logging.config import fileConfig from logging import handlers def setup_custom_logger(): configFolder = os.getcwd() + os.sep + 'Conf' fileConfig(configFolder + os.sep + 'logging_config.ini') logger = logging.getLogger() # create a file handler handler = logging.handlers.RotatingFileHandler('logger.log', maxBytes=1024, encoding="UTF-8") handler.doRollover() # create a logging format formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') handler.setFormatter(formatter) logger.addHandler(handler) return logger </code></pre>
0
2016-08-11T11:09:51Z
38,894,904
<p>Try using the Python logging module with the <a href="https://docs.python.org/2/library/logging.handlers.html#timedrotatingfilehandler" rel="nofollow">TimedRotatingFileHandler</a> handler, which rotates the log file at timed intervals.</p>
0
2016-08-11T11:17:34Z
[ "python", "logging" ]
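A minimal sketch of the time-based variant mentioned in the answer above; the file name, schedule, and backup count are arbitrary choices:

```python
import logging
import logging.handlers
import os
import tempfile

logfile = os.path.join(tempfile.mkdtemp(), "logger.log")

# Rotate at midnight, keeping a week of daily backups.
handler = logging.handlers.TimedRotatingFileHandler(
    logfile, when="midnight", backupCount=7)
logger = logging.getLogger("timed_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("first entry")
handler.close()
print(handler.when, os.path.exists(logfile))
```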
How to make log file copy in python?
38,894,722
<p>I want to make log file in python same as in log4j, meaning as soon the <strong>logger.log</strong> file get's to a size of 1K make a copy of this file and call it <strong>logger(1).log</strong> , In case <strong>logger(1)</strong>,log already exists create <strong>logger(2).log</strong> and of course delete logger.log so next time it will run it will start a clean log.</p> <p>This is my code but it is good only for first creation of logger file bakup:</p> <pre><code>b = os.path.getsize('logger.log') print b if b &gt;= 1000: shutil.copy2('logger.log', 'logger(1).log') </code></pre> <p>This is my log.py file so it can be used globally:</p> <pre><code>import os import logging from logging.config import fileConfig from logging import handlers def setup_custom_logger(): configFolder = os.getcwd() + os.sep + 'Conf' fileConfig(configFolder + os.sep + 'logging_config.ini') logger = logging.getLogger() # create a file handler handler = logging.handlers.RotatingFileHandler('logger.log', maxBytes=1024, encoding="UTF-8") handler.doRollover() # create a logging format formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') handler.setFormatter(formatter) logger.addHandler(handler) return logger </code></pre>
0
2016-08-11T11:09:51Z
38,894,941
<p>You can use a <a href="https://docs.python.org/2/library/logging.handlers.html#rotatingfilehandler" rel="nofollow">RotatingFileHandler</a>.</p> <p>Such a handler can be added by doing something like this:</p> <pre><code>import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger(__name__)
logger.addHandler(RotatingFileHandler(filename, maxBytes=1024, backupCount=10))
</code></pre> <p>Once the log file reaches this size, a rollover will be done and the old log file will be saved with a name <code>filename.log.1</code>, <code>filename.log.2</code>, etc., up to <code>filename.log.10</code>.</p>
1
2016-08-11T11:19:29Z
[ "python", "logging" ]
Transform x,y numpy matrixes into a list of (x,y) points
38,895,040
<p>I have</p> <pre><code>x=np.array([[1,2,3],[3,4,5],[5,7,8]]) y=np.array([[5,2,5],[1,1,1],[2,2,2]]) </code></pre> <p>I'd like to get</p> <pre><code>xy=[(1,5),(2,2),(3,5),(3,1),(4,1),(5,1),(5,2),(7,2),(8,2)] </code></pre> <p>What's the best (quick and clean) way to do so? Also, once I have <code>xy</code>, how do I relate the indexes of <code>xy</code> to the original numpy matrixes <code>x</code>,<code>y</code> ?</p>
0
2016-08-11T11:24:58Z
38,895,081
<p>Transpose or use <code>np.dstack()</code>:</p> <pre><code>&gt;&gt;&gt; np.dstack((x.ravel(), y.ravel()))[0]
array([[1, 5],
       [2, 2],
       [3, 5],
       [3, 1],
       [4, 1],
       [5, 1],
       [5, 2],
       [7, 2],
       [8, 2]])
&gt;&gt;&gt;
&gt;&gt;&gt;
&gt;&gt;&gt; np.array((x.ravel(), y.ravel())).T
array([[1, 5],
       [2, 2],
       [3, 5],
       [3, 1],
       [4, 1],
       [5, 1],
       [5, 2],
       [7, 2],
       [8, 2]])
</code></pre>
0
2016-08-11T11:26:36Z
[ "python", "numpy", "matrix" ]
Transform x,y numpy matrixes into a list of (x,y) points
38,895,040
<p>I have</p> <pre><code>x=np.array([[1,2,3],[3,4,5],[5,7,8]]) y=np.array([[5,2,5],[1,1,1],[2,2,2]]) </code></pre> <p>I'd like to get</p> <pre><code>xy=[(1,5),(2,2),(3,5),(3,1),(4,1),(5,1),(5,2),(7,2),(8,2)] </code></pre> <p>What's the best (quick and clean) way to do so? Also, once I have <code>xy</code>, how do I relate the indexes of <code>xy</code> to the original numpy matrixes <code>x</code>,<code>y</code> ?</p>
0
2016-08-11T11:24:58Z
38,895,120
<p>You can use this approach to get <code>xy</code>:</p> <pre><code>zip(x.flatten(),y.flatten()) #[(1, 5), (2, 2), (3, 5), (3, 1), (4, 1), (5, 1), (5, 2), (7, 2), (8, 2)] </code></pre>
2
2016-08-11T11:28:14Z
[ "python", "numpy", "matrix" ]
Transform x,y numpy matrixes into a list of (x,y) points
38,895,040
<p>I have</p> <pre><code>x=np.array([[1,2,3],[3,4,5],[5,7,8]]) y=np.array([[5,2,5],[1,1,1],[2,2,2]]) </code></pre> <p>I'd like to get</p> <pre><code>xy=[(1,5),(2,2),(3,5),(3,1),(4,1),(5,1),(5,2),(7,2),(8,2)] </code></pre> <p>What's the best (quick and clean) way to do so? Also, once I have <code>xy</code>, how do I relate the indexes of <code>xy</code> to the original numpy matrixes <code>x</code>,<code>y</code> ?</p>
0
2016-08-11T11:24:58Z
38,895,273
<p>You can try:</p> <pre><code>&gt;&gt;&gt; x = [[1,2,3],[3,4,5],[5,7,8]]
&gt;&gt;&gt; y = [[5,2,5],[1,1,1],[2,2,2]]
&gt;&gt;&gt; x1 = [val for x_data in x for val in x_data]
&gt;&gt;&gt; y1 = [val for y_data in y for val in y_data]
&gt;&gt;&gt; final = [(a,b) for a, b in zip(x1, y1)]
&gt;&gt;&gt; print final
[(1, 5), (2, 2), (3, 5), (3, 1), (4, 1), (5, 1), (5, 2), (7, 2), (8, 2)]
</code></pre>
1
2016-08-11T11:36:18Z
[ "python", "numpy", "matrix" ]
django post request data
38,895,105
<p>I have the following POST request</p> <pre><code>curl -v --dump-header --H "Content-Type: application/json" -X POST --data '{"name": "John", "age": 27}' "http://localhost:8300/api/v1/create_contact?username=gegham&amp;api_key=3efc6df6023534279d2183a696044a8cfec964a9" </code></pre> <p>Result after I'm printing request.POST is</p> <pre><code>POST:&lt;QueryDict: {u'{"name": "John", "age": 27}': [u'']}&gt; </code></pre> <p>but not</p> <pre><code>POST: &lt;QueryDict: {u'name': [u'John'], u'age': [u'27']}&gt; </code></pre> <p>So,I can't use this as dict and get values by keys. Why POST data format is difference from usual??</p>
0
2016-08-11T11:27:36Z
38,895,250
<p>Because you're sending JSON, not form data. Use <code>request.body</code> and deserialize it with <code>json.loads</code>.</p>
3
2016-08-11T11:35:29Z
[ "python", "django", "curl", "django-models" ]
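A sketch of the view-side fix from the answer above; since the real view isn't shown, a stand-in object plays the role of Django's `HttpRequest`:

```python
import json

class FakeRequest(object):
    """Stand-in for django.http.HttpRequest in this demo."""
    body = b'{"name": "John", "age": 27}'   # the raw bytes of the curl payload

def create_contact(request):
    # request.POST only parses form-encoded data; JSON lives in request.body.
    data = json.loads(request.body)
    return data

data = create_contact(FakeRequest())
print(data["name"], data["age"])   # John 27
```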
pandas reshape date sequence
38,895,121
<p>I have table with factor and time interval. What I want to do is to get long table with each date in interval between <code>START_DATE</code> and <code>END_DATE</code>.</p> <pre><code>dt_in = pd.DataFrame({'factor':['A','B'], 'START_DATE':[pd.Timestamp('2015-01-01'),pd.Timestamp('2016-02-05')], 'END_DATE':[pd.Timestamp('2015-01-04'),pd.Timestamp('2016-02-07')]}) END_DATE START_DATE factor 0 2015-01-04 2015-01-01 A 1 2016-02-07 2016-02-05 B </code></pre> <p>I want to have output table like this one:</p> <pre><code>dt_out = pd.DataFrame({'factor': ['A','A','A','A','B','B','B'], 'DATE': ['2015-01-01', '2015-01-02', '2015-01-03', '2015-01-04', '2016-02-05', '2016-02-06', '2016-02-07']}) DATE factor 0 2015-01-01 A 1 2015-01-02 A 2 2015-01-03 A 3 2015-01-04 A 4 2016-02-05 B 5 2016-02-06 B 6 2016-02-07 B </code></pre> <p>How can I do this?</p>
1
2016-08-11T11:28:24Z
38,895,277
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html" rel="nofollow"><code>melt</code></a> for reshaping and then <code>groupby</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.resample.html" rel="nofollow"><code>resample</code></a> for filling <code>dates</code>:</p> <pre><code>df = pd.melt(dt_in, id_vars='factor', value_name='DATE') .set_index('DATE') .drop('variable',axis=1) print (df) factor DATE 2015-01-04 A 2016-02-07 B 2015-01-01 A 2016-02-05 B print (df.groupby('factor') .resample('1D') .ffill() .reset_index(drop=True, level=0) .reset_index()) DATE factor 0 2015-01-01 A 1 2015-01-02 A 2 2015-01-03 A 3 2015-01-04 A 4 2016-02-05 B 5 2016-02-06 B 6 2016-02-07 B </code></pre> <p>Notice:</p> <p><em>This funcionality is new in <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#whatsnew-0181-deferred-ops" rel="nofollow">pandas 0.18.1</a>.</em></p>
2
2016-08-11T11:36:31Z
[ "python", "datetime", "pandas", "reshape", "melt" ]
Change tkinter canvas's text outside the class
38,895,188
<pre><code> from tkinter import * class MainBattle(Frame): def __init__(self, parent): Frame.__init__(self, parent) self.parent = parent self.initUI() def initUI(self): global canvas self.parent.title('Python') self.pack(fill = BOTH, expand = 1) canvas = Canvas(self) self.Label_My = Label(self, text = 'MyObject') self.Label_My.place(x = 470, y = 35) canvas.pack(fill = BOTH, expand = 1) canvas.update() def aa(self): self.Label_My['text'] = 'HisObject' def Change(): Label_My['text'] = 'HisObject' root = Tk() ex = MainBattle(root) root.geometry('700x500') </code></pre> <p>it should use global method? I would defind labels inside the class and change it's text outside class if possible.</p>
0
2016-08-11T11:32:22Z
38,895,546
<p>If your question is "can I use global to set variable values from outside the class", then yes. Whenever you want to change the value of a global variable inside a function, you need to declare it <code>global</code> first:</p> <pre><code>def changetext():
    global label_text
    label_text = "new text"
</code></pre>
1
2016-08-11T11:48:59Z
[ "python", "class", "tkinter", "label", "self" ]
Change tkinter canvas's text outside the class
38,895,188
<pre><code> from tkinter import * class MainBattle(Frame): def __init__(self, parent): Frame.__init__(self, parent) self.parent = parent self.initUI() def initUI(self): global canvas self.parent.title('Python') self.pack(fill = BOTH, expand = 1) canvas = Canvas(self) self.Label_My = Label(self, text = 'MyObject') self.Label_My.place(x = 470, y = 35) canvas.pack(fill = BOTH, expand = 1) canvas.update() def aa(self): self.Label_My['text'] = 'HisObject' def Change(): Label_My['text'] = 'HisObject' root = Tk() ex = MainBattle(root) root.geometry('700x500') </code></pre> <p>it should use global method? I would defind labels inside the class and change it's text outside class if possible.</p>
0
2016-08-11T11:32:22Z
38,895,599
<p>You don't need global variables. You have a reference to the instance, which allows you to access all instance variables:</p> <pre><code>ex.Label_My["text"] = "HisObject" </code></pre>
2
2016-08-11T11:51:13Z
[ "python", "class", "tkinter", "label", "self" ]
SIGKILL adding column in a simple python script
38,895,211
<p>I have a file that has a oneline header and a long column with values. I want to add a second column with values since 10981 (step = 1) until the end of the file (ommiting the header, of course). The problem is that the script needs a lot of memory and my pc crashes, probably due to the script is not well made (sorry, I am new programming!). The script that I have done is this:</p> <pre><code>with open ('chr1.phyloP46way.placental2.wigFix', 'w') as file_open: num = 10981 text = file_open.readlines() next (text) for line in text: num = num + 1 print line.strip() + '\t' + str(num) </code></pre> <p>As my PC crashes when I run it, I tried to test it in pycharm with the following error, what I have seen is probably due to lack of memory:</p> <pre><code>Process finished with exit code 137 (interrupted by signal 9: SIGKILL) </code></pre> <p>Any idea to solve this?</p> <p>Thank you very much!</p>
1
2016-08-11T11:33:22Z
38,895,373
<p>It is difficult to verify if it works without the .txt, but give a try to that one</p> <pre><code>f = open(os.path.join(data_path, 'chr1.phyloP46way.placental2.wigFix'), 'r') lines = f.readlines() num = 10981 for line_num in range(len(lines)): line_in = lines[line_num] num = num + 1 print line_in.strip() + '\t' + str(num) </code></pre> <p>---- Update: following Rory Daulton comment</p> <p>I had some time to do a small test. Maybe this one will help: save the following code in a file named converter.py</p> <pre><code>import os def add_enumeration(data_path, filename_in, filename_out, num=10981): # compose the filenames: path_to_file_in = os.path.join(data_path, filename_in) path_to_file_out = os.path.join(data_path, filename_out) # check if the input file exists: if not os.path.isfile(path_to_file_in): raise IOError('Input file does not exists.') # open the files: # if f_out does not exists it will be created. # if f_out is not empty, content will be deleted f_in = open(path_to_file_in, 'r') f_out = open(path_to_file_out, 'w+') # write the first line of the file in: f_out.write(f_in.readline()) for line_in in f_in: f_out.write(line_in.strip() + ' ' + str(num) + '\n') num = num + 1 f_in.close() f_out.close() </code></pre> <p>then from an ipython terminal:</p> <pre><code>In: run -i converter.py In: add_enumeration('/Users/user/Desktop', 'test_in.txt', 'test_out.txt') </code></pre> <p>Note that if test_out is not empty, its content will be deleted. This should avoid importing all the lines in a list with readlines(). Let me know if the memory problem is still there.</p>
1
2016-08-11T11:40:45Z
[ "python", "file-processing" ]
SIGKILL adding column in a simple python script
38,895,211
<p>I have a file that has a oneline header and a long column with values. I want to add a second column with values since 10981 (step = 1) until the end of the file (ommiting the header, of course). The problem is that the script needs a lot of memory and my pc crashes, probably due to the script is not well made (sorry, I am new programming!). The script that I have done is this:</p> <pre><code>with open ('chr1.phyloP46way.placental2.wigFix', 'w') as file_open: num = 10981 text = file_open.readlines() next (text) for line in text: num = num + 1 print line.strip() + '\t' + str(num) </code></pre> <p>As my PC crashes when I run it, I tried to test it in pycharm with the following error, what I have seen is probably due to lack of memory:</p> <pre><code>Process finished with exit code 137 (interrupted by signal 9: SIGKILL) </code></pre> <p>Any idea to solve this?</p> <p>Thank you very much!</p>
1
2016-08-11T11:33:22Z
38,896,307
<p>If your system is running out of resources, the likely culprit is the <code>readlines()</code> call, which causes Python to try to load the entire file into memory. There's no need to do this... a file object can itself be used as an iterator to read the file line by line (note the mode should be <code>'r'</code> for reading, not <code>'w'</code>):</p> <pre><code>with open('chr1.phyloP46way.placental2.wigFix', 'r') as file_open:
    num = 10981
    next(file_open)
    for line in file_open:
        num = num + 1
        print line.strip() + '\t' + str(num)
</code></pre>
2
2016-08-11T12:23:17Z
[ "python", "file-processing" ]
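The memory-friendly pattern from the answer above can be checked without a large file by iterating an in-memory stream the same way (the sample lines are made up):

```python
import io

# Stand-in for the real 'chr1.phyloP46way.placental2.wigFix' file.
src = io.StringIO("fixedStep chrom=chr1\n0.1\n0.2\n0.3\n")

num = 10981
out = []
next(src)                 # skip the one-line header
for line in src:          # lazy iteration: no readlines(), no big list in memory
    num += 1
    out.append(line.strip() + "\t" + str(num))

print(out)   # ['0.1\t10982', '0.2\t10983', '0.3\t10984']
```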
How to add my added environment variable?
38,895,282
<p>I am trying to add an environmental variable. When I write <code>set</code> in cmd I want it to be there. How do I add it there?</p> <p>I tried doing like this:</p> <pre><code>def read_json_dump(): with open("apa.json") as f: a = json.load(f) for key, value in a.items(): project_recipient_list = value['EMAIL'] os.environ["$DEFAULT_RECIPIENTS"] = project_recipient_list </code></pre> <p>When I write <code>print os.environ</code> I am able to find the environmental variable but not when I write <code>set</code> in cmd.</p>
0
2016-08-11T11:36:40Z
38,895,464
<p><code>os.environ</code> sets an environment variable only for the duration of the Python program's execution.</p>
0
2016-08-11T11:45:05Z
[ "python", "windows" ]
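The "sub-shells only" behaviour described above is easy to demonstrate: a variable set through `os.environ` is inherited by child processes started afterwards, but it never reaches the parent shell (the variable name here is hypothetical):

```python
import os
import subprocess
import sys

os.environ["DEMO_RECIPIENTS"] = "foo@example.com"   # hypothetical variable name

# A child process started from here inherits the modified environment...
child_value = subprocess.check_output(
    [sys.executable, "-c",
     "import os; print(os.environ.get('DEMO_RECIPIENTS', 'missing'))"]
).decode().strip()

print(child_value)   # foo@example.com
```

...but running `set` in the cmd window that launched the script would not show the variable, because the parent shell's environment was never modified.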
How to add my added environment variable?
38,895,282
<p>I am trying to add an environmental variable. When I write <code>set</code> in cmd I want it to be there. How do I add it there?</p> <p>I tried doing like this:</p> <pre><code>def read_json_dump(): with open("apa.json") as f: a = json.load(f) for key, value in a.items(): project_recipient_list = value['EMAIL'] os.environ["$DEFAULT_RECIPIENTS"] = project_recipient_list </code></pre> <p>When I write <code>print os.environ</code> I am able to find the environmental variable but not when I write <code>set</code> in cmd.</p>
0
2016-08-11T11:36:40Z
38,895,484
<p>Using <code>os.environ</code> won't be enough; those env vars won't persist. On Windows, if you want to persist variables, you can use the <a href="http://ss64.com/nt/setx.html" rel="nofollow">setx</a> command, for example:</p> <pre><code>import os
os.system("setx $DEFAULT_RECIPIENTS foo@foo.com")
</code></pre>
3
2016-08-11T11:46:12Z
[ "python", "windows" ]
How to add my added environment variable?
38,895,282
<p>I am trying to add an environmental variable. When I write <code>set</code> in cmd I want it to be there. How do I add it there?</p> <p>I tried doing like this:</p> <pre><code>def read_json_dump(): with open("apa.json") as f: a = json.load(f) for key, value in a.items(): project_recipient_list = value['EMAIL'] os.environ["$DEFAULT_RECIPIENTS"] = project_recipient_list </code></pre> <p>When I write <code>print os.environ</code> I am able to find the environmental variable but not when I write <code>set</code> in cmd.</p>
0
2016-08-11T11:36:40Z
38,899,299
<p>Per <a href="http://code.activestate.com/recipes/159462-how-to-set-environment-variables/" rel="nofollow">http://code.activestate.com/recipes/159462-how-to-set-environment-variables/</a> it looks like you can't modify the current program's environment variables from python and have them persist, but there is a work around by running your python code from a bat file that also sets the desired variables.</p> <blockquote> <blockquote> <p>Environment variables can be read with os.environ. They can be written (for sub-shells only) using os.putenv(key, value). However, there is no direct way to modify the global environment that the python script is running in. The indirect method shown [at the link] above writes a set command to a temporary batch file which is in the enclosing environment by another batch file used to launch the python script.</p> <p>...arbitrary expressions can be evaluated and the result assigned to an environment variable. For security, the eval() function can be replaced with str().</p> <p>Usually, writing to an environment variable should be avoided in favor of sharing values through a pipe or a common data file. However, when it can't be avoided, the above technique is an effective, though hackish, work-around.</p> </blockquote> </blockquote>
2
2016-08-11T14:33:11Z
[ "python", "windows" ]
Python, subtract two different times in the format (HH:MM:SS - HH:MM:SS)
38,895,336
<p>I need to subtract two different times, to get the difference. For example say I have message.start and message.end, both of which are a date and time (but they are of type 'time'. I checked this using <code>type(message.start)</code> in python) and are in the following format: 'month/day/year HH:MM:SS' so for example (08/11/16 13:32:00)</p> <p>In the code below I convert them to strings as I need to split up the date and time, and further split the date for other parts of the code which I don't need to show here. I tried something along message.end - message.start but that didn't work either.</p> <p>I just need the time difference, so say I have 08/11/16 18:30:00 - 08/11/16 12:00:00, this should result in 6.5 or 6hrs30mins </p> <p>What can I do? </p> <p>This is my code:</p> <pre><code>startDateTime = str(message.start).split() startTime = startDateTime[1] startDate = startDateTime[0].rsplit("/") endDateTime = str(message.end).split() endTime = endDateTime[1] endDate = endDateTime[0].rsplit("/") </code></pre>
0
2016-08-11T11:39:24Z
38,895,789
<p>You need the <code>strptime</code> method from <code>datetime</code>.</p> <pre><code>import datetime

format = '%m/%d/%y %H:%M:%S'
startDateTime = datetime.datetime.strptime(message.start, format)
endDateTime = datetime.datetime.strptime(message.end, format)
diff = endDateTime - startDateTime
</code></pre> <p><b>Output:</b></p> <pre><code>&gt;&gt;&gt; start='08/11/16 12:00:00'
&gt;&gt;&gt; format = '%m/%d/%y %H:%M:%S'
&gt;&gt;&gt; startDateTime = datetime.datetime.strptime(start, format)
&gt;&gt;&gt; end='08/11/16 18:30:00'
&gt;&gt;&gt; endDateTime = datetime.datetime.strptime(end, format)
&gt;&gt;&gt; diff = endDateTime - startDateTime
&gt;&gt;&gt; print diff
6:30:00
</code></pre>
2
2016-08-11T12:00:11Z
[ "python", "date", "datetime", "time" ]
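Since the question asks for the difference as "6.5 or 6hrs30mins", the `timedelta` from the answer above can be converted either way:

```python
import datetime

fmt = '%m/%d/%y %H:%M:%S'
start = datetime.datetime.strptime('08/11/16 12:00:00', fmt)
end = datetime.datetime.strptime('08/11/16 18:30:00', fmt)

diff = end - start
hours = diff.total_seconds() / 3600              # fractional hours
hrs, rem = divmod(int(diff.total_seconds()), 3600)

print(hours)                                 # 6.5
print('%dhrs%02dmins' % (hrs, rem // 60))    # 6hrs30mins
```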
Display count variable from background task in main task (Tkinter GUI)
38,895,552
<p>I am trying to display a count variable from a background task in the main task which is my tkinter GUI. Why? I want to display that the long time taking background task is performing and later use this count variable to visualize it with a progress bar.</p> <p>My problem is, that even when using a <code>Queue</code>, I am not able to display the count variable. Maybe I've got problems in understanding python and its behaviour with objects and/or threads.</p> <pre><code>import threading import time import Queue import Tkinter as Tk import Tkconstants as TkConst from ScrolledText import ScrolledText from tkFont import Font import loop_simulation as loop def on_after_elapsed(): while True: try: v = dataQ.get(timeout=0.1) except: break scrText.insert(TkConst.END, str(v)) # "value=%d\n" % v scrText.see(TkConst.END) scrText.update() top.after(100, on_after_elapsed) def thread_proc1(): x = -1 dataQ.put(x) x = loop.loop_simulation().start_counting() # th_proc = threading.Thread(target=x.start_counting()) # th_proc.start() for i in range(5): for j in range(20): dataQ.put(x.get_i()) time.sleep(0.1) # x += 1 time.sleep(0.5) dataQ.put(x.get_i()) top = Tk.Tk() dataQ = Queue.Queue(maxsize=0) f = Font(family='Courier New', size=12) scrText = ScrolledText(master=top, height=20, width=120, font=f) scrText.pack(fill=TkConst.BOTH, side=TkConst.LEFT, padx=15, pady=15, expand=True) th = threading.Thread(target=thread_proc1) th.start() top.after(100, on_after_elapsed) top.mainloop() th.join() </code></pre> <p>In <code>thread_proc1()</code> I want to get the value of the counter of background task. This is the background task:</p> <pre><code>import time class loop_simulation: def __init__(self): self.j = 0 # self.start_counting() def start_counting(self): for i in range(0, 1000000): self.j = i time.sleep(0.5) def get_i(self): return str(self.j) </code></pre>
2
2016-08-11T11:49:16Z
38,897,027
<p>The reason the count variable isn't being displayed is due to the </p> <pre><code> x = loop.loop_simulation().start_counting() </code></pre> <p>statement in <code>thread_proc1()</code>. This creates a <code>loop_simulation</code> instance and calls its <code>start_counting()</code> method. However, other than already inserting a <code>-1</code> into the <code>dataQ</code>, <code>thread_proc1()</code> doesn't do anything else until <code>start_counting()</code> returns, which won't be for a long time (500K seconds).</p> <p>Meanwhile, the rest of your script is running and displaying only that initial <code>-1</code> that was put in.</p> <p>Also note that if <code>start_counting()</code> ever did return, its value of <code>None</code> is going to be assigned to <code>x</code> which later code attempts to use with: <code>x.get_i()</code>.</p> <p>Below is reworking of your code that fixes these issues and also follows the <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP 8 - Style Guide for Python Code</a> more closely. 
To avoid the main problem of calling <code>start_counting()</code>, I changed your <code>loop_simulation</code> class into a subclass of <code>threading.Thread</code> and renamed it <code>LoopSimulation</code>, and create an instance of it in <code>thread_proc1</code>, so there are now two background threads in addition to the main one handling the tkinter-based GUI.</p> <pre><code>import loop_simulation as loop
from ScrolledText import ScrolledText
import threading
import Tkinter as Tk
import Tkconstants as TkConst
from tkFont import Font
import time
import Queue

def on_after_elapsed():
    # removes all data currently in the queue and displays it in the text box
    while True:
        try:
            v = dataQ.get_nowait()
            scrText.insert(TkConst.END, str(v)+'\n')
            scrText.see(TkConst.END)
        except Queue.Empty:
            top.after(100, on_after_elapsed)
            break

def thread_proc1():
    dataQ.put(-1)
    ls = loop.LoopSimulation()  # slow background task Thread
    ls.start_counting()
    while ls.is_alive():  # background task still running?
        for i in range(5):
            for j in range(20):
                dataQ.put(ls.get_i())
                time.sleep(0.1)
            time.sleep(0.5)
    dataQ.put('background task finished')

top = Tk.Tk()
dataQ = Queue.Queue(maxsize=0)
font = Font(family='Courier New', size=12)
scrText = ScrolledText(top, height=20, width=120, font=font)
scrText.pack(fill=TkConst.BOTH, side=TkConst.LEFT, padx=15, pady=15,
             expand=TkConst.YES)
th = threading.Thread(target=thread_proc1)
th.daemon = True  # OK for main to exit even if thread is still running
th.start()
top.after(100, on_after_elapsed)
top.mainloop()
</code></pre> <p><strong><code>loop_simulation.py</code></strong> module:</p> <pre><code>import threading
import time

class LoopSimulation(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True  # OK for main to exit even if instance still running
        self.lock = threading.Lock()
        self.j = 0

    start_counting = threading.Thread.start  # an alias for starting thread

    def run(self):
        for i in range(1000000):
            with self.lock:
                self.j = i
            time.sleep(0.5)

    def get_i(self):
        with self.lock:
            return self.j
</code></pre>
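The queue-based hand-off this answer relies on can be stripped to its essentials without any Tk code: one thread only puts, the other only gets. A minimal sketch (Python 3 module names here; the answer's code uses the Python 2 `Queue`/`Tkinter` spellings):

```python
import queue
import threading
import time

data_q = queue.Queue()

def worker():
    # Background thread: never touches the GUI, only feeds the queue.
    for i in range(5):
        data_q.put(i)
        time.sleep(0.01)

t = threading.Thread(target=worker)
t.start()
t.join()  # here we wait; the GUI version polls with top.after() instead

received = []
while True:
    try:
        # "GUI" side: drain whatever is available without blocking.
        received.append(data_q.get_nowait())
    except queue.Empty:
        break

print(received)  # → [0, 1, 2, 3, 4]
```

In the GUI version the draining loop runs inside the function scheduled with `top.after(100, ...)` rather than after a `join()`, so the main loop is never blocked.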
0
2016-08-11T12:55:45Z
[ "python", "multithreading", "python-2.7", "user-interface", "tkinter" ]
Error reading csv file unicodeescape
38,895,583
<p>I have this program</p> <pre><code>import csv with open("C:\Users\frederic\Desktop\WinPython-64bit-3.4.4.3Qt5\notebooks\scores.txt","r") as scoreFile: # write = w, read = r, append = a scoreFileReader = csv.reader(scoreFile) scoreList = [] for row in scoreFileReader: if len (row) != 0: scoreList = scoreList + [row] scoreFile.close() print(scoreList) </code></pre> <hr> <p><strong>Why do I get this Error ?</strong></p> <blockquote> <p>with open("C:\Users\frederic\Desktop\WinPython-64bit-3.4.4.3Qt5\notebooks\scores.txt","r") as scoreFile: ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape.</p> </blockquote>
0
2016-08-11T11:50:36Z
38,902,527
<p>You may try this:</p> <pre><code>import csv
import io
import locale

def guess_encoding(csv_file):
    """guess the encoding of the given file"""
    with io.open(csv_file, "rb") as f:
        data = f.read(5)
    if data.startswith(b"\xEF\xBB\xBF"):  # UTF-8 with a "BOM"
        return ["utf-8-sig"]
    elif data.startswith(b"\xFF\xFE") or data.startswith(b"\xFE\xFF"):
        return ["utf-16"]
    else:  # in Windows, guessing utf-8 doesn't work, so we have to try
        try:
            with io.open(csv_file, encoding="utf-8") as f:
                preview = f.read(222222)
            return ["utf-8"]
        except:
            return [locale.getdefaultlocale()[1], "utf-8"]

encodings = guess_encoding(r"C:\Users\frederic\Desktop\WinPython-64bit-3.4.4.3Qt5\notebooks\scores.txt")

# then your code with
with open(r"C:\Users\frederic\Desktop\WinPython-64bit-3.4.4.3Qt5\notebooks\scores.txt", "r",
          encoding=encodings[0]) as scoreFile:
</code></pre>
0
2016-08-11T17:20:21Z
[ "python", "csv", "unicode-escapes" ]
Error reading csv file unicodeescape
38,895,583
<p>I have this program</p> <pre><code>import csv with open("C:\Users\frederic\Desktop\WinPython-64bit-3.4.4.3Qt5\notebooks\scores.txt","r") as scoreFile: # write = w, read = r, append = a scoreFileReader = csv.reader(scoreFile) scoreList = [] for row in scoreFileReader: if len (row) != 0: scoreList = scoreList + [row] scoreFile.close() print(scoreList) </code></pre> <hr> <p><strong>Why do I get this Error ?</strong></p> <blockquote> <p>with open("C:\Users\frederic\Desktop\WinPython-64bit-3.4.4.3Qt5\notebooks\scores.txt","r") as scoreFile: ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape.</p> </blockquote>
0
2016-08-11T11:50:36Z
38,903,169
<p>Answer from <a href="http://stackoverflow.com/questions/20588840/unicode-error-opening-txt-files-with-python">@Tim Pietzcker</a>:</p> <p>You need to use raw strings with Windows-style filenames:</p> <pre><code>with open(r"C:\Users\frederic\Desktop\WinPython-64bit-3.4.4.3Qt5\notebooks\scores.txt", 'r') as scoreFile:
          ^^
</code></pre> <p>Otherwise, Python's string engine thinks that \U is the start of a Unicode escape sequence - which of course it isn't in this case.</p> <hr> <p>Also, be careful: your <code>scoreFile.close()</code> and <code>print(scoreList)</code> will crash because they are outside the <a href="https://docs.python.org/3/reference/compound_stmts.html#with" rel="nofollow"><code>with</code></a> block.</p> <p>The <a href="https://docs.python.org/3/reference/compound_stmts.html#with" rel="nofollow"><code>with</code></a> statement replaces a try/finally. It also automatically closes the file after the block. That means you can delete the <code>scoreFile.close()</code> line.</p> <p>Also, you can change the line <code>if len(row) != 0</code>, according to <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP8</a>:</p> <blockquote> <p>For sequences, (strings, lists, tuples), use the fact that empty sequences are false.</p> <p>Yes: if not seq: if seq:</p> <p>No: if len(seq): if not len(seq):</p> </blockquote> <p>One last thing: your <code>for</code> loop isn't ideal either; to read <code>csv</code> you'd better start by copying an example from the <a href="https://docs.python.org/2/library/csv.html#module-contents" rel="nofollow">doc</a> first!</p> <pre><code>with open('eggs.csv', 'rb') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
    for row in spamreader:
        print ', '.join(row)
</code></pre> <p>If you are in too much trouble I could make you a code sample that will work for you, but it's better if you try first.</p> <p>Good luck</p>
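As a quick check of why the raw-string prefix matters (the path is shortened here; any Windows-style path behaves the same):

```python
# "\U" in a normal string literal starts a \UXXXXXXXX Unicode escape,
# which is exactly why "C:\Users\..." raises the truncated-escape error.
# A raw string, doubled backslashes, or forward slashes all avoid it:
raw_path = r"C:\Users\frederic\scores.txt"
escaped_path = "C:\\Users\\frederic\\scores.txt"
forward_path = "C:/Users/frederic/scores.txt"  # Windows APIs accept "/" too

print(raw_path == escaped_path)  # → True
```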
0
2016-08-11T17:58:54Z
[ "python", "csv", "unicode-escapes" ]
Python tkinter NameError when using lambda command
38,895,697
<p>I am new to Python's <code>tkinter</code> module so I would be grateful for any help, even an explanation as to why this is simply not possible (if that is the case).</p> <p>I have a list of 4-tuples of the form <code>(foo, bar, x, y)</code>; something like this:</p> <pre><code> TUPLE_LIST = [('Hello', 'World', 0, 0), ('Hovercraft', 'Eels', 50, 100), etc.] </code></pre> <p>and a <code>for</code> loop later which ideally instantiates the variable <code>foo</code> as a button with the text <code>bar</code>, with each button having a function to add its respective <code>bar</code> to an already defined <code>Entry</code> widget, then places it at the co-ordinates <code>x, y</code>:</p> <pre><code> for foo, bar, x, y in TUPLE_LIST: exec("{0} = Button(self, text='{1}', command=lambda: self.update_text('{1}'))".format(foo, bar)) eval(name).place(x=x, y=y) </code></pre> <p>The buttons place perfectly, but if I go to click on one of the them, I get the following error:</p> <pre><code> Traceback (most recent call last): File "...\lib\tkinter\__init__.py", line 1550, in __call__ return self.func(*args) File "&lt;string&gt;", line 1, in &lt;lambda&gt; NameError: name 'self' is not defined </code></pre> <p>I'm guessing this has something to do with the fact that the command is defined with a lambda, and thus isn't a defined 'function', per se, and yet I have seen other people define their buttons' commands with lambdas. So is it something to do with using <code>exec</code> as well? Also, here's the code for <code>update_text</code> (if it matters):</p> <pre><code> def update_text(self, new_char): old_str = self.text_box_str.get() # Above is a StringVar() defined in __init__ and used as textvariable in Entry widget creation. self.text_box_str.set(old_str + new_char) </code></pre> <p>Any help would be much appreciated. Thanks in advance.</p>
0
2016-08-11T11:55:59Z
38,896,077
<p>This is a very bad idea. There's simply no reason to try to dynamically create variable names. It makes your code more complex, harder to maintain, and harder to read.</p> <p>If you want to reference widgets by name, use a dictionary:</p> <pre><code>buttons = {}
for foo, bar, x, y in TUPLE_LIST:
    buttons[foo] = Button(self, text=bar, ...)
    buttons[foo].place(...)
</code></pre> <p>As for the problem with <code>self</code>, there's not enough code to say what the problem is. If you're using classes, the code looks ok. If you aren't, you shouldn't be using <code>self</code>. </p> <p>To create a function that updates a widget, you can simply pass that widget or that widget's name to the function. It's not clear what widget you're wanting to update. It appears to be an entry widget. I can't tell if you have one entry widget that all buttons must update, or one entry per button. I'll assume the former.</p> <p>In the following example I'll show how to pass to a function the variable to be changed along with the text to add. This solution doesn't use the <code>textvariable</code> attribute, though you can if you want. Just pass it rather than the widget.</p> <pre><code>buttons[foo] = Button(..., command=lambda widget=self.text_box, value=bar:
                      self.update_text(widget, value))
...

def update_text(self, widget, value):
    old_value = widget.get()
    new_value = old_value + value
    widget.delete(0, "end")
    widget.insert(0, new_value)
</code></pre> <p>Of course, if all you're doing is appending the string, you can replace those four lines with <code>widget.insert("end", value)</code></p>
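tkinter aside, the reason the answer writes `command=lambda widget=..., value=bar: ...` with default arguments can be shown with plain callables (the names and values below are made up):

```python
TUPLE_LIST = [("hello", "World"), ("eels", "Hovercraft")]

late = {}
frozen = {}
for foo, bar in TUPLE_LIST:
    # Bug: a bare lambda looks bar up at *call* time, so every callback
    # sees the last value the loop assigned to bar.
    late[foo] = lambda: bar.upper()
    # Fix: a default argument freezes the *current* value of bar.
    frozen[foo] = lambda value=bar: value.upper()

print(late["hello"]())    # → "HOVERCRAFT" (last loop value, not "WORLD")
print(frozen["hello"]())  # → "WORLD"
```

The `value=bar` default in the answer's `Button(...)` call is what keeps each button bound to its own `bar`.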
1
2016-08-11T12:13:06Z
[ "python", "lambda", "tkinter", "nameerror" ]
python pandas dataframe, is it pass-by-value or pass-by-reference
38,895,768
<p>If I pass a dataframe to a function and modify it inside the function, is it pass-by-value or pass-by-reference?</p> <p>I run the following code</p> <pre><code>a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) def letgo(df): df = df.drop('b',axis=1) letgo(a) </code></pre> <p>the value of <code>a</code> does not change after the function call. Does it mean it is pass-by-value?</p> <p>I also tried the following</p> <pre><code>xx = np.array([[1,2], [3,4]]) def letgo2(x): x[1,1] = 100 def letgo3(x): x = np.array([[3,3],[3,3]]) </code></pre> <p>It turns out <code>letgo2()</code> does change <code>xx</code> and <code>letgo3()</code> does not. Why is it like this?</p>
2
2016-08-11T11:59:13Z
38,895,844
<p>Here is the doc for drop:</p> <blockquote> <p>Return new object with labels in requested axis removed.</p> </blockquote> <p>So a new dataframe is created. The original has not changed.</p> <p>But as for all objects in python, the data frame is passed to the function by reference.</p>
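The same new-object-versus-in-place distinction can be reproduced with plain lists, no pandas required: `sorted()` returns a new object (like `df.drop(...)` without `inplace=True`), while `list.sort()` changes the caller's object in place. A rough analogy only, not pandas internals:

```python
def letgo_rebind(seq):
    seq = sorted(seq)   # new object; only the local name is rebound
    return seq

def letgo_mutate(seq):
    seq.sort()          # same object, mutated in place

a = [3, 1, 2]
letgo_rebind(a)
print(a)  # → [3, 1, 2]  -- unchanged, the new list was thrown away

letgo_mutate(a)
print(a)  # → [1, 2, 3]
```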
0
2016-08-11T12:02:25Z
[ "python", "pandas", "pass-by-reference", "pass-by-value" ]
python pandas dataframe, is it pass-by-value or pass-by-reference
38,895,768
<p>If I pass a dataframe to a function and modify it inside the function, is it pass-by-value or pass-by-reference?</p> <p>I run the following code</p> <pre><code>a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) def letgo(df): df = df.drop('b',axis=1) letgo(a) </code></pre> <p>the value of <code>a</code> does not change after the function call. Does it mean it is pass-by-value?</p> <p>I also tried the following</p> <pre><code>xx = np.array([[1,2], [3,4]]) def letgo2(x): x[1,1] = 100 def letgo3(x): x = np.array([[3,3],[3,3]]) </code></pre> <p>It turns out <code>letgo2()</code> does change <code>xx</code> and <code>letgo3()</code> does not. Why is it like this?</p>
2
2016-08-11T11:59:13Z
38,895,926
<p>You need to make <code>a</code> global at the start of the function; otherwise it is a local variable and does not change the <code>a</code> in the main code.</p>
0
2016-08-11T12:06:22Z
[ "python", "pandas", "pass-by-reference", "pass-by-value" ]
python pandas dataframe, is it pass-by-value or pass-by-reference
38,895,768
<p>If I pass a dataframe to a function and modify it inside the function, is it pass-by-value or pass-by-reference?</p> <p>I run the following code</p> <pre><code>a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) def letgo(df): df = df.drop('b',axis=1) letgo(a) </code></pre> <p>the value of <code>a</code> does not change after the function call. Does it mean it is pass-by-value?</p> <p>I also tried the following</p> <pre><code>xx = np.array([[1,2], [3,4]]) def letgo2(x): x[1,1] = 100 def letgo3(x): x = np.array([[3,3],[3,3]]) </code></pre> <p>It turns out <code>letgo2()</code> does change <code>xx</code> and <code>letgo3()</code> does not. Why is it like this?</p>
2
2016-08-11T11:59:13Z
38,896,256
<p>The question isn't PBV vs. PBR. These names only cause confusion in a language like Python; they were invented for languages that work like C or like Fortran (as the quintessential PBV and PBR languages). It is true, but not enlightening, that Python always passes by value. The question here is whether the value itself is mutated or whether you get a new value. Pandas usually errs on the side of the latter.</p> <p><a href="http://nedbatchelder.com/text/names.html" rel="nofollow">http://nedbatchelder.com/text/names.html</a> explains very well what Python's system of names is.</p>
3
2016-08-11T12:21:29Z
[ "python", "pandas", "pass-by-reference", "pass-by-value" ]
python pandas dataframe, is it pass-by-value or pass-by-reference
38,895,768
<p>If I pass a dataframe to a function and modify it inside the function, is it pass-by-value or pass-by-reference?</p> <p>I run the following code</p> <pre><code>a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) def letgo(df): df = df.drop('b',axis=1) letgo(a) </code></pre> <p>the value of <code>a</code> does not change after the function call. Does it mean it is pass-by-value?</p> <p>I also tried the following</p> <pre><code>xx = np.array([[1,2], [3,4]]) def letgo2(x): x[1,1] = 100 def letgo3(x): x = np.array([[3,3],[3,3]]) </code></pre> <p>It turns out <code>letgo2()</code> does change <code>xx</code> and <code>letgo3()</code> does not. Why is it like this?</p>
2
2016-08-11T11:59:13Z
38,924,624
<p>To add to @Mike Graham's answer, who pointed to a very good read:</p> <p>In your case, what is important to remember is the difference between <em>names</em> and <em>values</em>. <code>a</code>, <code>df</code>, <code>xx</code>, <code>x</code>, are all <em>names</em>, but they refer to the same or different <em>values</em> at different points of your examples:</p> <ul> <li><p>In the first example, <code>letgo</code> <strong>rebinds</strong> <code>df</code> to another value, because <code>df.drop</code> returns a new <code>DataFrame</code> unless you set the argument <code>inplace = True</code> (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow">see doc</a>). That means that the name <code>df</code> (local to the <code>letgo</code> function), which was referring to the value of <code>a</code>, is now referring to a new value, here the <code>df.drop</code> return value. The value <code>a</code> is referring to still exists and hasn't changed.</p></li> <li><p>In the second example, <code>letgo2</code> <strong>mutates</strong> <code>x</code>, without rebinding it, which is why <code>xx</code> is modified by <code>letgo2</code>. Unlike the previous example, here the local name <code>x</code> always refers to the value the name <code>xx</code> is referring to, and changes that value <em>in place</em>, which is why the value <code>xx</code> is referring to has changed.</p></li> <li><p>In the third example, <code>letgo3</code> <strong>rebinds</strong> <code>x</code> to a new <code>np.array</code>. That causes the name <code>x</code>, local to <code>letgo3</code> and previously referring to the value of <code>xx</code>, to now refer to another value, the new <code>np.array</code>. The value <code>xx</code> is referring to hasn't changed.</p></li> </ul>
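The three cases above can be checked directly with `id()`, which shows whether a function rebound its local name to a new value or mutated the value the outer name refers to:

```python
def letgo2_style(x):
    x[1] = 100          # mutates the value both names refer to
    return id(x)

def letgo3_style(x):
    x = [3, 3]          # rebinds the local name to a brand-new value
    return id(x)

xx = [1, 2]
outside = id(xx)

assert letgo2_style(xx) == outside   # same object, so xx changed
assert letgo3_style(xx) != outside   # new object, so xx is untouched
print(xx)  # → [1, 100]
```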
1
2016-08-12T18:48:00Z
[ "python", "pandas", "pass-by-reference", "pass-by-value" ]
python pandas dataframe, is it pass-by-value or pass-by-reference
38,895,768
<p>If I pass a dataframe to a function and modify it inside the function, is it pass-by-value or pass-by-reference?</p> <p>I run the following code</p> <pre><code>a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) def letgo(df): df = df.drop('b',axis=1) letgo(a) </code></pre> <p>the value of <code>a</code> does not change after the function call. Does it mean it is pass-by-value?</p> <p>I also tried the following</p> <pre><code>xx = np.array([[1,2], [3,4]]) def letgo2(x): x[1,1] = 100 def letgo3(x): x = np.array([[3,3],[3,3]]) </code></pre> <p>It turns out <code>letgo2()</code> does change <code>xx</code> and <code>letgo3()</code> does not. Why is it like this?</p>
2
2016-08-11T11:59:13Z
38,925,257
<p>The short answer is, Python always does pass-by-value, but every Python variable is actually a pointer to some object, so sometimes it looks like pass-by-reference.</p> <p>In Python every object is either mutable or non-mutable. e.g., lists, dicts, modules and Pandas data frames are mutable, and ints, strings and tuples are non-mutable. Mutable objects can be changed internally (e.g., add an element to a list), but non-mutable objects cannot. </p> <p>As I said at the start, you can think of every Python variable as a pointer to an object. When you pass a variable to a function, the variable (pointer) within the function is always a copy of the variable (pointer) that was passed in. So if you assign something new to the internal variable, all you are doing is changing the local variable to point to a different object. This doesn't alter (mutate) the original object that the variable pointed to, nor does it make the external variable point to the new object. At this point, the external variable still points to the original object, but the internal variable points to a new object. </p> <p>If you want to alter the original object (only possible with mutable data types), you have to do something that alters the object <em>without</em> assigning a completely new value to the local variable. This is why <code>letgo()</code> and <code>letgo3()</code> leave the external item unaltered, but <code>letgo2()</code> alters it. </p> <p>As @ursan pointed out, if <code>letgo()</code> used something like this instead, then it would alter (mutate) the original object that <code>df</code> points to, which would change the value seen via the global <code>a</code> variable:</p> <pre><code>def letgo(df): df.drop('b', axis=1, inplace=True) a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) letgo(a) # will alter a </code></pre> <p>In some cases, you can completely hollow out the original variable and refill it with new data, without actually doing a direct assignment, e.g. 
this will alter the original object that <code>v</code> points to, which will change the data seen when you use <code>v</code> later:</p> <pre><code>def letgo3(x): x[:] = np.array([[3,3],[3,3]]) v = np.empty((2, 2)) letgo3(v) # will alter v </code></pre> <p>Notice that I'm not assigning something directly to <code>x</code>; I'm assigning something to the entire internal range of <code>x</code>.</p> <p>If you absolutely must create a completely new object and make it visible externally (which is sometimes the case with pandas), you have two options. The 'clean' option would be just to return the new object, e.g., </p> <pre><code>def letgo(df): df = df.drop('b',axis=1) return df a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) a = letgo(a) </code></pre> <p>Another option would be to reach outside your function and directly alter a global variable. This changes <code>a</code> to point to a new object, and any function that refers to <code>a</code> afterward will see that new object:</p> <pre><code>def letgo(): global a a = a.drop('b',axis=1) a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) letgo() # will alter a! </code></pre> <p>Directly altering global variables is usually a bad idea, because anyone who reads your code will have a hard time figuring out how <code>a</code> got changed. (I generally use global variables for shared parameters used by many functions in a script, but I don't let them alter those global variables.)</p>
3
2016-08-12T19:30:41Z
[ "python", "pandas", "pass-by-reference", "pass-by-value" ]
Multiple Slices With Python
38,895,781
<p>I need to extract data from multiple positions in an array.</p> <p>A simple array would be:-</p> <pre><code>listing = (4, 22, 24, 34, 46, 56) </code></pre> <p>I am familiar with slicing. For instance:-</p> <pre><code>listing[0:3] </code></pre> <p>would give me:-</p> <pre><code>(4, 22, 24) </code></pre> <p>However I am unable to get out multiple slices. For instance:-</p> <pre><code>listing[0:3, 4:5] </code></pre> <p>gives me</p> <pre><code>TypeError: tuple indices must be integers not tuples </code></pre> <p>Despite Searching two Python books and the Internet I cannot work out the syntax to use.</p>
0
2016-08-11T11:59:56Z
38,895,812
<p>You can slice twice and join them.</p> <pre><code>listing[0:3] + listing[4:5] </code></pre>
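With the tuple from the question this concatenation gives `(4, 22, 24, 46)`. For many non-contiguous picks, the standard library's `itertools.chain` and `operator.itemgetter` express the same idea:

```python
from itertools import chain
from operator import itemgetter

listing = (4, 22, 24, 34, 46, 56)

print(listing[0:3] + listing[4:5])               # → (4, 22, 24, 46)
print(tuple(chain(listing[0:3], listing[4:5])))  # → (4, 22, 24, 46)
print(itemgetter(0, 1, 2, 4)(listing))           # → (4, 22, 24, 46)
```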
3
2016-08-11T12:01:33Z
[ "python" ]
Multiple Slices With Python
38,895,781
<p>I need to extract data from multiple positions in an array.</p> <p>A simple array would be:-</p> <pre><code>listing = (4, 22, 24, 34, 46, 56) </code></pre> <p>I am familiar with slicing. For instance:-</p> <pre><code>listing[0:3] </code></pre> <p>would give me:-</p> <pre><code>(4, 22, 24) </code></pre> <p>However I am unable to get out multiple slices. For instance:-</p> <pre><code>listing[0:3, 4:5] </code></pre> <p>gives me</p> <pre><code>TypeError: tuple indices must be integers not tuples </code></pre> <p>Despite Searching two Python books and the Internet I cannot work out the syntax to use.</p>
0
2016-08-11T11:59:56Z
38,895,829
<p>Try:</p> <pre><code>&gt;&gt;&gt; listing = (4, 22, 24, 34, 46, 56) &gt;&gt;&gt; listing[0:3], listing[4:5] ((4, 22, 24), (46,)) </code></pre>
0
2016-08-11T12:01:57Z
[ "python" ]
Web scraping with lxml and requests
38,895,790
<p>I have a <a href="http://www.booking.com/searchresults.et.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaEKIAQGYAQu4AQbIAQzYAQHoAQH4AQuoAgM&amp;sid=1bc09296ee139ec3cb0edce87d7fb20a&amp;dcid=1&amp;class_interval=1&amp;dest_id=67&amp;dest_type=country&amp;dtdisc=0&amp;group_adults=2&amp;group_children=0&amp;hlrd=0&amp;hyb_red=0&amp;inac=0&amp;label_click=undef&amp;nha_red=0&amp;no_rooms=1&amp;postcard=0&amp;redirected_from_city=0&amp;redirected_from_landmark=0&amp;redirected_from_region=0&amp;review_score_group=empty&amp;room1=A%2CA&amp;sb_price_type=total&amp;score_min=0&amp;src=index&amp;ss=Eesti&amp;ss_all=0&amp;ss_raw=Eesti&amp;ssb=empty&amp;sshis=0&amp;traveller=other&amp;nflt=ht_id%3D204%3B&amp;lsf=ht_id%7C204%7C221&amp;unchecked_filter=hoteltype" rel="nofollow">web page</a> with hotels, where i want to get all the hotel names. I made a code following instructions from <a href="http://docs.python-guide.org/en/latest/scenarios/scrape/" rel="nofollow">this page</a>, but no success. My code is here:</p> <pre><code>from lxml import html import requests page = requests.get('web page url') tree = html.fromstring(page.content) hotel_name = tree.xpath('//span[@title="sr-hotel__name"]/text()') print(hotel_name) </code></pre> <p>All I get is an empty list. Any help?</p>
0
2016-08-11T12:00:11Z
38,896,962
<p>You need to add a user-agent:</p> <pre><code>from lxml import html import requests headers= {"User-Agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36"} page = requests.get('http://www.booking.com/searchresults.et.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaEKIAQGYAQu4AQbIAQzYAQHoAQH4AQuoAgM&amp;sid=1bc09296ee139ec3cb0edce87d7fb20a&amp;dcid=1&amp;class_interval=1&amp;dest_id=67&amp;dest_type=country&amp;dtdisc=0&amp;group_adults=2&amp;group_children=0&amp;hlrd=0&amp;hyb_red=0&amp;inac=0&amp;label_click=undef&amp;nha_red=0&amp;no_rooms=1&amp;postcard=0&amp;redirected_from_city=0&amp;redirected_from_landmark=0&amp;redirected_from_region=0&amp;review_score_group=empty&amp;room1=A%2CA&amp;sb_price_type=total&amp;score_min=0&amp;src=index&amp;ss=Eesti&amp;ss_all=0&amp;ss_raw=Eesti&amp;ssb=empty&amp;sshis=0&amp;traveller=other&amp;nflt=ht_id%3D204%3B&amp;lsf=ht_id%7C204%7C221&amp;unchecked_filter=hoteltype' , headers=headers) tree = html.fromstring(page.content) print(page.text) hotel_name = tree.xpath('//span[@class="sr-hotel__name"]/text()') print(hotel_name) </code></pre> <p>Which will give you:</p> <pre><code>['\nHotel Telegraaf\n', '\nRadisson Blu Hotel Olümpia\n', '\nRadisson Blu Sky Hotel\n', '\nPark Inn by Radisson Central Tallinn\n', '\nPark Inn by Radisson Meriton Conference &amp; Spa Hotel Tallinn\n', '\nMerchants House Hotel\n', '\nSwissotel Tallinn\n', '\nMy City Hotel\n', '\nNordic Hotel Forum\n', '\nHotel Palace by TallinnHotels\n', '\nHotel Ülemiste\n', '\nTallink City Hotel\n', '\nHotel London by Tartuhotels\n', '\nJohan Design &amp; SPA Hotel\n', '\nThe von Stackelberg Hotel Tallinn\n'] </code></pre> <p>But you should read their <a href="http://www.booking.com/content/terms.et.html?label=gen173nr-1DCAEoggJCAlhYSDNiBW5vcmVmaGmIAQGYAQu4AQrIAQzYAQPoAQGoAgM;sid=52cedf70cb08d87dee23f69f173a8650;dcid=12" rel="nofollow">TOS</a>:</p> <p><em>Our services are only for personal and non-commercial 
use. Consequently, you will not be allowed on our web site available through the content, information, software, products or services of a commercial or compete for the purpose of reselling link (deep link) to use, copy, monitor (eg, spiders, scrape), display, download download or reproduce.</em></p>
0
2016-08-11T12:52:57Z
[ "python", "web", "request", "lxml", "screen-scraping" ]
Web scraping with lxml and requests
38,895,790
<p>I have a <a href="http://www.booking.com/searchresults.et.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaEKIAQGYAQu4AQbIAQzYAQHoAQH4AQuoAgM&amp;sid=1bc09296ee139ec3cb0edce87d7fb20a&amp;dcid=1&amp;class_interval=1&amp;dest_id=67&amp;dest_type=country&amp;dtdisc=0&amp;group_adults=2&amp;group_children=0&amp;hlrd=0&amp;hyb_red=0&amp;inac=0&amp;label_click=undef&amp;nha_red=0&amp;no_rooms=1&amp;postcard=0&amp;redirected_from_city=0&amp;redirected_from_landmark=0&amp;redirected_from_region=0&amp;review_score_group=empty&amp;room1=A%2CA&amp;sb_price_type=total&amp;score_min=0&amp;src=index&amp;ss=Eesti&amp;ss_all=0&amp;ss_raw=Eesti&amp;ssb=empty&amp;sshis=0&amp;traveller=other&amp;nflt=ht_id%3D204%3B&amp;lsf=ht_id%7C204%7C221&amp;unchecked_filter=hoteltype" rel="nofollow">web page</a> with hotels, where i want to get all the hotel names. I made a code following instructions from <a href="http://docs.python-guide.org/en/latest/scenarios/scrape/" rel="nofollow">this page</a>, but no success. My code is here:</p> <pre><code>from lxml import html import requests page = requests.get('web page url') tree = html.fromstring(page.content) hotel_name = tree.xpath('//span[@title="sr-hotel__name"]/text()') print(hotel_name) </code></pre> <p>All I get is an empty list. Any help?</p>
0
2016-08-11T12:00:11Z
38,897,472
<p>You are retrieving an empty list because the html is generated by JavaScript.</p> <p>So to proceed, a bit of a roadmap:</p> <ol> <li>Use Google's inspector to find the XHR requests;</li> <li>Determine the parameters &amp; headers needed to execute the JavaScript;</li> <li>Passing <a href="http://docs.python-requests.org/en/master/user/quickstart/#passing-parameters-in-urls" rel="nofollow">parameters</a></li> <li>Passing <a href="http://docs.python-requests.org/en/master/user/quickstart/#custom-headers" rel="nofollow">custom headers</a></li> <li>and combine it like: requests.get(url, params=payload, headers=headers)</li> </ol> <p>That said, you should take into consideration and respect the website owner's rules and <a href="http://www.booking.com/robots.txt" rel="nofollow">robots.txt</a>. </p> <p>In addition to Arbaz Siddiqui's answer: there are also more complete packages, like Beautiful Soup or Scrapy/Selenium.</p>
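Steps 3 and 4 of the roadmap — passing parameters and headers — can be sketched offline with the standard library. The URL and parameter names below are invented; the real ones come from the XHR request found in step 1:

```python
from urllib.parse import urlencode
from urllib.request import Request

payload = {"ss": "Eesti", "rows": 15}      # invented query parameters
headers = {"User-Agent": "Mozilla/5.0"}    # pretend to be a browser

url = "http://example.com/searchresults?" + urlencode(payload)
req = Request(url, headers=headers)        # built here, never actually sent

print(req.full_url)                  # → http://example.com/searchresults?ss=Eesti&rows=15
print(req.get_header("User-agent"))  # → Mozilla/5.0
```

With `requests`, the equivalent would be `requests.get(url, params=payload, headers=headers)`.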
0
2016-08-11T13:15:26Z
[ "python", "web", "request", "lxml", "screen-scraping" ]
Running python scripts with microsoft azure machine learning
38,895,822
<p>I'm working on a microsoft azure machine learning web service. I made a first version that takes a data set as an input and returns JSON that contains predictions. I want to add some python scripts that can process the predictions to change the output of the algorithm and replace it by another JSON file. I want also to add a script that treats other inputs that will not be used in the machine learning algorithm but will be treated in the output.</p> <p>To be more clear : i have 5 attributes : x1, x2, x3, x4, x5 x1, x2 and x3 will be treated in the ML algorithm and return y : prediction the output that i want to have is some tips : if y meet some condition then output 1 (some string) but i want to process x4 x5 as well : if x4 and x5 meet some condition then output 2 </p> <p>the output will be : { output 1 : output 2 : } instead of { prediction : y }</p> <p>I looked at the documentation of Azure but all i found is how to use python scripts to manipulate data frames. if some one has an idea on how to combine the ML Microsoft Azure Web Service and some python scripts to create a unique Cloud Based web service that would be great. Thank you</p>
1
2016-08-11T12:01:46Z
38,898,871
<p>If I understood you right, you want to do predictions on x1, x2 and x3. You also want to load x4 and x5.</p> <p>First, you can do your predictions on x1, x2 and x3 and generate y.</p> <p>After that you only have to add another "Execute Python Script", input x4, x5, insert your condition on them and then send either output 2 or y to the "Web service output" stage:</p> <p><a href="http://i.stack.imgur.com/0vo59.png" rel="nofollow"><img src="http://i.stack.imgur.com/0vo59.png" alt="enter image description here"></a></p>
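The second script's job is just the conditional mapping the question describes. Its core could look like the sketch below — the thresholds, messages and field names are invented; in Azure ML Studio the logic would sit inside the module's `azureml_main` entry point and read/write DataFrame columns instead of scalars:

```python
def make_outputs(y, x4, x5):
    # Invented example conditions -- replace with the real business rules.
    output1 = "high risk" if y > 0.5 else "low risk"
    output2 = "check x4/x5" if (x4 > 10 and x5 > 10) else "ok"
    return {"output 1": output1, "output 2": output2}

print(make_outputs(0.7, 12, 15))
# → {'output 1': 'high risk', 'output 2': 'check x4/x5'}
```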
2
2016-08-11T14:15:08Z
[ "python", "json", "web-services", "azure", "machine-learning" ]
Python Pandas : How to compile all lists in a column into one unique list
38,895,856
<p>I have a pandas dataframe as below:</p> <p><a href="http://i.stack.imgur.com/ZbPhn.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZbPhn.png" alt="enter image description here"></a></p> <p>How can I combine all the lists (in the 'val' column) into a unique list (set), e.g. <code>[val1, val2, val33, val9, val6, val7]</code>?</p> <p>I can solve this with the following code. I wonder if there is an easier way to get all unique values from a column without iterating the dataframe rows?</p> <pre><code>def_contributors=[] for index, row in df.iterrows(): contri = ast.literal_eval(row['val']) def_contributors.extend(contri) def_contributors = list(set(def_contributors)) </code></pre>
3
2016-08-11T12:03:07Z
38,896,038
<p>Convert that column into a DataFrame with <code>.apply(pd.Series)</code>. If you stack the columns, you can call the <code>unique</code> method on the returned Series.</p> <pre><code>df Out[123]: val 0 [v1, v2] 1 [v3, v2] 2 [v4, v3, v2] </code></pre> <hr> <pre><code>df['val'].apply(pd.Series).stack().unique() Out[124]: array(['v1', 'v2', 'v3', 'v4'], dtype=object) </code></pre>
2
2016-08-11T12:11:16Z
[ "python", "list", "pandas", "merge", "unique" ]
Python Pandas : How to compile all lists in a column into one unique list
38,895,856
<p>I have a pandas dataframe as below:</p> <p><a href="http://i.stack.imgur.com/ZbPhn.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZbPhn.png" alt="enter image description here"></a></p> <p>How can I combine all the lists (in the 'val' column) into a unique list (set), e.g. <code>[val1, val2, val33, val9, val6, val7]</code>?</p> <p>I can solve this with the following code. I wonder if there is an easier way to get all unique values from a column without iterating the dataframe rows?</p> <pre><code>def_contributors=[] for index, row in df.iterrows(): contri = ast.literal_eval(row['val']) def_contributors.extend(contri) def_contributors = list(set(def_contributors)) </code></pre>
3
2016-08-11T12:03:07Z
38,896,116
<p>Another solution: export the <code>Series</code> to a nested <code>list</code> and then apply <code>set</code> to flatten it:</p> <pre><code>df = pd.DataFrame({'id':['a','b', 'c'], 'val':[['val1','val2'], ['val33','val9','val6'], ['val2','val6','val7']]}) print (df) id val 0 a [val1, val2] 1 b [val33, val9, val6] 2 c [val2, val6, val7] print (type(df.val.ix[0])) &lt;class 'list'&gt; print (df.val.tolist()) [['val1', 'val2'], ['val33', 'val9', 'val6'], ['val2', 'val6', 'val7']] print (list(set([a for b in df.val.tolist() for a in b]))) ['val7', 'val1', 'val6', 'val33', 'val2', 'val9'] </code></pre> <p><strong>Timings</strong>:</p> <pre><code>df = pd.concat([df]*1000).reset_index(drop=True) In [307]: %timeit (df['val'].apply(pd.Series).stack().unique()).tolist() 1 loop, best of 3: 410 ms per loop In [355]: %timeit (pd.Series(sum(df.val.tolist(),[])).unique().tolist()) 10 loops, best of 3: 31.9 ms per loop In [308]: %timeit np.unique(np.hstack(df.val)).tolist() 100 loops, best of 3: 10.7 ms per loop In [309]: %timeit (list(set([a for b in df.val.tolist() for a in b]))) 1000 loops, best of 3: 558 µs per loop </code></pre> <hr> <p>If the values are not <code>list</code>s but <code>string</code>s, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow"><code>str.strip</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>str.split</code></a>:</p> <pre><code>df = pd.DataFrame({'id':['a','b', 'c'], 'val':["[val1,val2]", "[val33,val9,val6]", "[val2,val6,val7]"]}) print (df) id val 0 a [val1,val2] 1 b [val33,val9,val6] 2 c [val2,val6,val7] print (type(df.val.ix[0])) &lt;class 'str'&gt; print (df.val.str.strip('[]').str.split(',')) 0 [val1, val2] 1 [val33, val9, val6] 2 [val2, val6, val7] Name: val, dtype: object print (list(set([a for b in df.val.str.strip('[]').str.split(',') for a in b]))) ['val7', 'val1', 'val6', 'val33', 'val2', 'val9'] 
</code></pre>
2
2016-08-11T12:14:46Z
[ "python", "list", "pandas", "merge", "unique" ]
Python Pandas : How to compile all lists in a column into one unique list
38,895,856
<p>I have a pandas dataframe as below:</p> <p><a href="http://i.stack.imgur.com/ZbPhn.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZbPhn.png" alt="enter image description here"></a></p> <p>How can I combine all the lists (in the 'val' column) into a unique list (set), e.g. <code>[val1, val2, val33, val9, val6, val7]</code>?</p> <p>I can solve this with the following code. I wonder if there is an easier way to get all unique values from a column without iterating the dataframe rows?</p> <pre><code>def_contributors=[] for index, row in df.iterrows(): contri = ast.literal_eval(row['val']) def_contributors.extend(contri) def_contributors = list(set(def_contributors)) </code></pre>
3
2016-08-11T12:03:07Z
38,896,618
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.cat.html" rel="nofollow"><code>str.cat</code></a> followed by some <code>string</code> manipulations to obtain the desired <code>list</code>.</p> <pre><code>In [60]: import re ...: from collections import OrderedDict In [62]: s = df['val'].str.cat() In [63]: L = re.sub('[[]|[]]',' ', s).strip().replace(" ",',').split(',') In [64]: list(OrderedDict.fromkeys(L)) Out[64]: ['val1', 'val2', 'val33', 'val9', 'val6', 'val7'] </code></pre>
1
2016-08-11T12:37:12Z
[ "python", "list", "pandas", "merge", "unique" ]
Python Pandas : How to compile all lists in a column into one unique list
38,895,856
<p>I have a pandas dataframe as below:</p> <p><a href="http://i.stack.imgur.com/ZbPhn.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZbPhn.png" alt="enter image description here"></a></p> <p>How can I combine all the lists (in the 'val' column) into a unique list (set), e.g. <code>[val1, val2, val33, val9, val6, val7]</code>?</p> <p>I can solve this with the following code. I wonder if there is an easier way to get all unique values from a column without iterating the dataframe rows?</p> <pre><code>def_contributors=[] for index, row in df.iterrows(): contri = ast.literal_eval(row['val']) def_contributors.extend(contri) def_contributors = list(set(def_contributors)) </code></pre>
3
2016-08-11T12:03:07Z
38,897,193
<p>One way would be to extract those elements into an array using <code>np.hstack</code> and then use <code>np.unique</code> to get an array of the unique elements, like so -</p> <pre><code>np.unique(np.hstack(df.val)) </code></pre> <p>If you want a list as output, append with <code>.tolist()</code> -</p> <pre><code>np.unique(np.hstack(df.val)).tolist() </code></pre>
0
2016-08-11T13:02:32Z
[ "python", "list", "pandas", "merge", "unique" ]
Asyncio and rabbitmq (asynqp): how to consume from multiple queues concurrently
38,895,872
<p>I'm trying to consume multiple queues concurrently using Python, asyncio and <a href="http://asynqp.readthedocs.io/en/v0.4/" rel="nofollow">asynqp</a>.</p> <p>I don't understand why my <code>asyncio.sleep()</code> function call does not have any effect. The code doesn't pause there. To be fair, I actually don't understand in which context the callback is executed, and whether I can yield control back to the event loop at all (so that the <code>asyncio.sleep()</code> call would make sense).</p> <p>What if I had to use an <code>aiohttp.ClientSession.get()</code> function call in my <code>process_msg</code> callback function? I'm not able to do it since it's not a coroutine. There has to be a way which is beyond my current understanding of asyncio.</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 import asyncio import asynqp USERS = {'betty', 'bob', 'luis', 'tony'} def process_msg(msg): asyncio.sleep(10) print('&gt;&gt; {}'.format(msg.body)) msg.ack() async def connect(): connection = await asynqp.connect(host='dev_queue', virtual_host='asynqp_test') channel = await connection.open_channel() exchange = await channel.declare_exchange('inboxes', 'direct') # we have 10 users. Set up a queue for each of them # use different channels to avoid any interference # during message consumption, just in case. for username in USERS: user_channel = await connection.open_channel() queue = await user_channel.declare_queue('Inbox_{}'.format(username)) await queue.bind(exchange, routing_key=username) await queue.consume(process_msg) # deliver 10 messages to each user for username in USERS: for msg_idx in range(10): msg = asynqp.Message('Msg #{} for {}'.format(msg_idx, username)) exchange.publish(msg, routing_key=username) loop = asyncio.get_event_loop() loop.run_until_complete(connect()) loop.run_forever() </code></pre>
0
2016-08-11T12:04:06Z
38,898,514
<p>See <a href="https://github.com/benjamin-hodgson/asynqp/issues/84" rel="nofollow">Drizzt1991's response</a> on github for a solution.</p>
1
2016-08-11T14:00:04Z
[ "python", "python-3.x", "rabbitmq", "python-asyncio", "aiohttp" ]
Asyncio and rabbitmq (asynqp): how to consume from multiple queues concurrently
38,895,872
<p>I'm trying to consume multiple queues concurrently using Python, asyncio and <a href="http://asynqp.readthedocs.io/en/v0.4/" rel="nofollow">asynqp</a>.</p> <p>I don't understand why my <code>asyncio.sleep()</code> function call does not have any effect. The code doesn't pause there. To be fair, I actually don't understand in which context the callback is executed, and whether I can yield control back to the event loop at all (so that the <code>asyncio.sleep()</code> call would make sense).</p> <p>What if I had to use an <code>aiohttp.ClientSession.get()</code> function call in my <code>process_msg</code> callback function? I'm not able to do it since it's not a coroutine. There has to be a way which is beyond my current understanding of asyncio.</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 import asyncio import asynqp USERS = {'betty', 'bob', 'luis', 'tony'} def process_msg(msg): asyncio.sleep(10) print('&gt;&gt; {}'.format(msg.body)) msg.ack() async def connect(): connection = await asynqp.connect(host='dev_queue', virtual_host='asynqp_test') channel = await connection.open_channel() exchange = await channel.declare_exchange('inboxes', 'direct') # we have 10 users. Set up a queue for each of them # use different channels to avoid any interference # during message consumption, just in case. for username in USERS: user_channel = await connection.open_channel() queue = await user_channel.declare_queue('Inbox_{}'.format(username)) await queue.bind(exchange, routing_key=username) await queue.consume(process_msg) # deliver 10 messages to each user for username in USERS: for msg_idx in range(10): msg = asynqp.Message('Msg #{} for {}'.format(msg_idx, username)) exchange.publish(msg, routing_key=username) loop = asyncio.get_event_loop() loop.run_until_complete(connect()) loop.run_forever() </code></pre>
0
2016-08-11T12:04:06Z
38,899,329
<blockquote> <p>I don't understand why my asyncio.sleep() function call does not have any effect.</p> </blockquote> <p>Because <code>asyncio.sleep()</code> returns a future object that has to be used in combination with an event loop (or <code>async/await</code> semantics).</p> <p>You can't use <code>await</code> in a plain <code>def</code> function because the callback is called outside of an <code>async/await</code> context, which is attached to some event loop under the hood. In other words, mixing callback style with <code>async/await</code> style is quite tricky.</p> <p>The simple solution, though, is to schedule the work back onto the event loop:</p> <pre><code>async def process_msg(msg): await asyncio.sleep(10) print('&gt;&gt; {}'.format(msg.body)) msg.ack() def _process_msg(msg): loop = asyncio.get_event_loop() loop.create_task(process_msg(msg)) # or if loop is always the same one single line is enough # asyncio.ensure_future(process_msg(msg)) # some code await queue.consume(_process_msg) </code></pre> <p>Note that there is no recursion in the <code>_process_msg</code> function, i.e. the body of <code>process_msg</code> is not executed while in <code>_process_msg</code>. The inner <code>process_msg</code> function will be called once control goes back to the event loop.</p> <p>This can be generalized with the following code:</p> <pre><code>def async_to_callback(coro): def callback(*args, **kwargs): asyncio.ensure_future(coro(*args, **kwargs)) return callback async def process_msg(msg): # the body # some code await queue.consume(async_to_callback(process_msg)) </code></pre>
0
2016-08-11T14:34:21Z
[ "python", "python-3.x", "rabbitmq", "python-asyncio", "aiohttp" ]
The R equivalent of the Python plotly.tools.FigureFactory.create_scatterplotmatrix?
38,896,011
<p>I'm trying to recreate in R the scatterplotmatrix from the Python plotly tutorial:</p> <p><a href="http://moderndata.plot.ly/using-plotly-in-jupyter-notebooks-on-microsoft-azure/" rel="nofollow">http://moderndata.plot.ly/using-plotly-in-jupyter-notebooks-on-microsoft-azure/</a></p> <p>I have the dataframe in R, but what is the equivalent plotly code in R to recreate the below Python call? </p> <pre><code>fig = FigureFactory.create_scatterplotmatrix( df, diag='histogram', index='manufacturer', height=800, width=800, title='Motor Trend Car Road Tests Scatterplot Matrix' ) </code></pre>
1
2016-08-11T12:10:02Z
38,938,038
<p>The <code>ggpairs()</code> function in the GGally package makes it easy to create generalized pairs plots (e.g. a scatterplot matrix) via ggplot2, and <code>ggplotly()</code> knows how to translate them to plotly.</p> <pre><code>library(GGally) library(plotly) ggplotly(ggpairs(iris)) </code></pre>
1
2016-08-14T00:07:00Z
[ "python", "plotly" ]
Read Authenticated User Object in ViewSet (Django Rest-Framework)
38,896,056
<p>What I want to do is filter the queryset for every action on the resource by the authenticated user's ID, which is a foreign key on the model. How can I read it when defining the queryset?</p> <pre><code>class RunSessionViewSet(viewsets.ModelViewSet): """API endpoint for listing and creating sprints.""" queryset = RunningSession.objects.order_by('createdDate') serializer_class = RunSessionSerializer </code></pre>
0
2016-08-11T12:12:07Z
38,896,292
<p>You should override the <code>get_queryset()</code> method as described in the <a href="http://www.django-rest-framework.org/api-guide/filtering/#filtering-against-the-current-user" rel="nofollow">docs</a>:</p> <pre><code>def get_queryset(self): user = self.request.user return RunningSession.objects.filter(foreignkey_field=user).order_by('createdDate') </code></pre>
1
2016-08-11T12:22:53Z
[ "python", "django", "django-rest-framework" ]
Counting holes in a mesh
38,896,066
<p>I have a special kind of triangular mesh (i.e. I haven't used a regular algorithm to triangulate the set of points I have, but followed a special algorithm suited to chemical data). The result is a complicated 3D shape consisting of a lot of triangles and tetrahedrons.</p> <p>Before I can proceed with my task, I need to count the number of holes in the surfaces (holes between the triangles) and the 'voids' (empty volumes) between the tetrahedrons.</p> <p>Example of holes in a simple shape from my data:</p> <p><a href="http://i.stack.imgur.com/0pw43.png" rel="nofollow"><img src="http://i.stack.imgur.com/0pw43.png" alt="enter image description here"></a></p> <p><a href="http://i.stack.imgur.com/goYvt.png" rel="nofollow"><img src="http://i.stack.imgur.com/goYvt.png" alt="enter image description here"></a></p> <p>Is there any known algorithm to achieve this, or any Python library that helps in doing this?</p> <p>Thank you very much.</p>
1
2016-08-11T12:12:26Z
39,012,836
<p>It seems that the quantities you are trying to compute are the first and second <a href="https://en.wikipedia.org/wiki/Betti_number" rel="nofollow">Betti numbers</a> of a simplicial complex. If you do a google search, you will find some literature on various ways to compute such things. A lot of them seem to be matrix-based (see e.g. <a href="https://jeremykun.com/2013/04/10/computing-homology/" rel="nofollow">https://jeremykun.com/2013/04/10/computing-homology/</a>). There is also a direct way to do it for your problem based on collapsing edges. I'll see if I can write a quick implementation of it (I'm not sure if it will be simpler than the matrices).</p>
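The matrix-based route mentioned in the answer can be sketched concretely. The following is an illustrative implementation (not from the linked post), assuming homology over the rationals so that real matrix ranks suffice (torsion is ignored): the zeroth Betti number counts connected components and the first counts independent 1-cycles, i.e. surface holes.

```python
import numpy as np

def betti_numbers(vertices, edges, triangles):
    # b0 = number of connected components, b1 = number of independent
    # 1-cycles ("holes"), via ranks of the simplicial boundary matrices:
    #   b0 = |V| - rank(d1),  b1 = (|E| - rank(d1)) - rank(d2)
    v_index = {v: i for i, v in enumerate(vertices)}
    e_index = {tuple(sorted(e)): j for j, e in enumerate(edges)}

    # d1 maps edges to vertices: boundary of (a, b) is b - a
    d1 = np.zeros((len(vertices), len(edges)))
    for j, e in enumerate(edges):
        a, b = sorted(e)
        d1[v_index[a], j] = -1.0
        d1[v_index[b], j] = 1.0

    # d2 maps triangles to edges: boundary of (a, b, c) is (b,c) - (a,c) + (a,b)
    d2 = np.zeros((len(edges), len(triangles)))
    for j, t in enumerate(triangles):
        a, b, c = sorted(t)
        d2[e_index[(b, c)], j] = 1.0
        d2[e_index[(a, c)], j] = -1.0
        d2[e_index[(a, b)], j] = 1.0

    r1 = np.linalg.matrix_rank(d1) if len(edges) else 0
    r2 = np.linalg.matrix_rank(d2) if len(triangles) else 0
    return len(vertices) - r1, (len(edges) - r1) - r2
```

For a hollow triangle (three vertices, three edges, no face) this gives (1, 1) — one component, one hole; filling in the face drops the second number to 0. For large meshes one would use sparse matrices and rank computation over a finite field instead of dense real ranks.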
1
2016-08-18T08:02:08Z
[ "python", "geometry", "triangulation", "triangular" ]
trying to get tweets from the full day on a specific date but it wont write out to the CSV file
38,896,121
<p>I'm trying to get Tweepy to fetch all the tweets published by certain accounts and write them to a CSV file.</p> <p>The program downloads the tweets but will not write them to the CSV file.</p> <p>How can I get the tweets written to the CSV file?</p> <pre><code>d1 = datetime.date(2016, 8, 4) for tweet in alltweets: #if (datetime.datetime.now() - tweet.created_at).days &lt; 1: #for single_date in daterange(d1, d2): while True: if (tweet.created_at == d1): # transform the tweepy tweets into a 2D array that will populate the csv #outtweets.append([tweet.user.name, tweet.created_at, tweet.text.encode('UTF-8')]) outtweets.append(list(itertools.chain([tweet.user.name, tweet.created_at],tweet.text.split(' ')))) else: deadend = True return if not deadend: page += 1 break #todaysDate = datetime.datetime.now().date() # write the csv with open('%s_%s.csv' % (screen_name, d1), 'w', encoding='UTF-8') as f: writer = csv.writer(f) writer.writerow(["Username", "Tweeted at", "Text"]) writer.writerows(outtweets) pass print ("CSV written") </code></pre> <p>** EDIT 1 **</p> <pre><code>todaysDate = date(2016,8,4) class listener(tweepy.StreamListener): def on_data(self,data): print (data) with open('%s_.csv' % (todaysDate), 'w', encoding='UTF-8') as f: writer = csv.writer(f) writer.writerow(["Username", "Tweeted at", "Text"]) writer.writerows(data) pass print("CSV Written") #with open('tweets_file.txt','a') as tf: #tf.write(data) #tf.close() return True def on_error(self, status): print (status) auth = tweepy.OAuthHandler(consumer_token, consumer_secret) auth.set_access_token(access_token, access_secret) twitterStream=tweepy.streaming.Stream(auth, listener()) while (todaysDate == date(2016, 8, 4)): twitterStream.filter() todaysDate = date.now() print("CSV Written") </code></pre>
0
2016-08-11T12:14:52Z
38,896,905
<p>Try the following (in this example, to get tweets with the word 'Barry'):</p> <pre><code>from tweepy.streaming import Stream from tweepy import OAuthHandler from tweepy import StreamListener ckey='yourCkey' csecret='yourCsecret' atoken='yourAtoken' asecret='yourAsecret' class listener(StreamListener): def on_data(self,data): print(data) with open('tweets_file.txt','a') as tf: tf.write(data) tf.close() return True def on_error(self, status): print(status) auth=OAuthHandler(ckey,csecret) auth.set_access_token(atoken, asecret) twitterStream=Stream(auth, listener()) twitterStream.filter(track=['Barry']) </code></pre>
0
2016-08-11T12:50:47Z
[ "python", "python-3.x", "csv", "twitter", "tweepy" ]
Is there a way to classify a batch of images with the pretrained Inception-v3 network?
38,896,331
<p>I'm trying to classify images with TensorFlow.</p> <p>The <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py#L172" rel="nofollow">example code on GitHub</a> contains something like this:</p> <pre><code>predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data}) </code></pre> <p>Now, I'm looking for a solution for classifying multiple images in one go, because I'd like to compute the classification on my GPU, and I don't want to move the images to the GPU one by one, as that decreases performance.</p> <p>A loop over all images around the <code>sess.run(...)</code> didn't do what I wanted: every image was still sent to the GPU individually.</p> <pre><code>with tf.Session() as sess: softmax_tensor = sess.graph.get_tensor_by_name('final_result:0') for image in images: predictions = sess.run(softmax_tensor, {'DecodeJpeg:0': image}) </code></pre>
1
2016-08-11T12:24:20Z
38,916,792
<p>So after a lot of trial and error I found a solution that has the right behavior for me, but I'm not sure if it's the most elegant one.</p> <pre><code>from multiprocessing.pool import ThreadPool import tensorflow as tf pool = ThreadPool() def operation(sess, softmax, image, image_number): prediction = sess.run(softmax, {'DecodeJpeg:0': image}) return prediction, image_number with tf.Graph().as_default() as imported_graph: tf.import_graph_def(graph_def, name='') with tf.Session(graph=imported_graph) as sess: with tf.device("/gpu:0"): softmax_tensor = sess.graph.get_tensor_by_name('final_result:0') threads = [pool.apply_async(operation, args=(sess, softmax_tensor, np_images[image_number], image_number,)) for image_number in range(len(np_images))] result = [] for thread in threads: result.append(thread.get()) </code></pre> <p>The key was to use a multithreading solution.</p>
0
2016-08-12T11:22:43Z
[ "python", "tensorflow", "gpgpu", "image-recognition" ]
Is there a way to classify a batch of images with the pretrained Inception-v3 network?
38,896,331
<p>I'm trying to classify images with TensorFlow.</p> <p>The <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py#L172" rel="nofollow">example code on GitHub</a> contains something like this:</p> <pre><code>predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data}) </code></pre> <p>Now, I'm looking for a solution for classifying multiple images in one go, because I'd like to compute the classification on my GPU, and I don't want to move the images to the GPU one by one, as that decreases performance.</p> <p>A loop over all images around the <code>sess.run(...)</code> didn't do what I wanted: every image was still sent to the GPU individually.</p> <pre><code>with tf.Session() as sess: softmax_tensor = sess.graph.get_tensor_by_name('final_result:0') for image in images: predictions = sess.run(softmax_tensor, {'DecodeJpeg:0': image}) </code></pre>
1
2016-08-11T12:24:20Z
38,928,840
<p>Take a look at Google's GitHub repository for their <a href="https://github.com/tensorflow/models/tree/master/inception" rel="nofollow">Inception</a> deep CNN classifier.</p> <p>By following their guide, I was able to fine-tune the network to classify wine bottle labels. You can classify many images in one run by just setting a larger batch size.</p> <p>The whole guide is helpful, but you'd probably be especially interested in <a href="https://github.com/tensorflow/models/tree/master/inception#how-to-fine-tune-a-pre-trained-model-on-a-new-task" rel="nofollow">Fine-Tuning a Pre-Trained Model</a>.</p>
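As a rough sketch of what "setting a larger batch size" means in practice — an illustrative helper, not code from the Inception repo — already-preprocessed, equally-shaped image arrays can be grouped into batches so the network does one forward pass per batch instead of one per image:

```python
import numpy as np

def batch_images(images, batch_size):
    # Group preprocessed HxWxC images into (N, H, W, C) batches;
    # the final batch may be smaller if the count is not a multiple
    # of batch_size.
    for start in range(0, len(images), batch_size):
        yield np.stack(images[start:start + batch_size])
```

Each yielded array would then be fed to the network's batched input tensor in a single run call, which keeps the GPU busy and amortizes the host-to-device transfer.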
1
2016-08-13T03:15:15Z
[ "python", "tensorflow", "gpgpu", "image-recognition" ]
AttributeError: _fields_ is final (in python)
38,896,335
<p>While writing a Python script using ctypes, I'm getting the error <code>AttributeError: _fields_ is final</code>.</p> <pre><code>//demo.h typedef struct data { char * status; } data; //python script import ctypes import sys from ctypes import * class data(Structure):pass data._fields_ = [('Status',POINTER(c_char))] </code></pre> <p>Above I have shown the structure in the .h file and the way I'm defining the same structure in Python using ctypes. Can anyone suggest a solution to the problem?</p>
0
2016-08-11T12:24:33Z
38,896,463
<p>You can't change the <code>_fields_</code> attribute of a <code>Structure</code> after the class has been created. The correct declaration would be:</p> <pre><code>class data(Structure): _fields_ = [('Status',POINTER(c_char))] </code></pre>
0
2016-08-11T12:29:52Z
[ "python", "ctypes" ]
tensorflow not found in pip
38,896,424
<p>I'm trying to install TensorFlow:</p> <pre><code> pip install tensorflow --user Collecting tensorflow Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow </code></pre> <p>What am I doing wrong? So far I've used Python and pip with no issues.</p>
0
2016-08-11T12:28:24Z
38,900,276
<p>TensorFlow is not yet in the <a href="https://pypi.python.org/pypi" rel="nofollow">PyPI</a> repository, so you have to specify the URL to the appropriate "wheel file" for your operating system and Python version.</p> <p>The full list of supported configurations is listed on the <a href="https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html" rel="nofollow">TensorFlow website</a>, but for example, to install version 0.10 for Python 2.7 on Linux, using CPU only, you would type the following command:</p> <pre><code>$ pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl </code></pre>
2
2016-08-11T15:17:20Z
[ "python", "machine-learning", "pip", "tensorflow" ]
Python's smtplib module violating Gmail's terms of use
38,896,488
<p><strong>How can I avoid violating Gmail's terms of use when using Python's <code>smtplib</code>?</strong></p> <p>I would like to send internal emails using Python. After 2 days of fighting with the Gmail API, I gave up. Then I found the <code>smtplib</code> module, which looked simple and promising.</p> <p>Following the example <a href="https://www.mkyong.com/python/how-do-send-email-in-python-via-smtplib/" rel="nofollow">here</a>, I wrote this small block of Python code:</p> <pre><code>import smtplib # Details of where to send FROM emailUser = 'an.account.I.made.just.for this.test@gmail.com' emailPassword = 'password123' # Send the following message to an address... message = '[ I am email content ]' toAddress = 'test.victim@gmail.com' # Define the HEADER (to, from, and subject) header = """ To: %s From: %s Subject: Python SMTPLIB Test """ header = header % (toAddress, emailUser) # [ Don't know what this does ] smtpserver = smtplib.SMTP("smtp.gmail.com",587) smtpserver.ehlo() smtpserver.starttls() smtpserver.ehlo() # Use User/Password credentials to send email smtpserver.login(emailUser, emailPassword) smtpserver.sendmail(emailUser, toAddress, message) smtpserver.close() </code></pre> <p>I executed this script. It looks like Google's algorithms must have interpreted this as having been sent with bad intentions, and promptly closed my account! That's fair enough, as I'm sure <code>smtplib</code> can be abused quite readily. However, I have honest intentions and do not know how to get around this issue: <strong>how can I avoid violating Gmail's terms of use when using Python's <code>smtplib</code>?</strong></p>
0
2016-08-11T12:31:31Z
38,896,782
<p>It could be because you are indiscriminately making connections with EHLO; you should be using HELO.</p> <p>If the server supports EHLO, the client can use EHLO instead of HELO as its first request. On the other hand, if the server does not support EHLO, and the client sends EHLO, the server will reject EHLO. The client then has to fall back to HELO.</p> <p>There are a few servers that disconnect when they see EHLO. If the client finds that neither EHLO nor HELO was accepted, for example because the connection was closed, then it has to make a new connection and start with HELO.</p>
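The EHLO-to-HELO fallback described here could be sketched roughly like this (an illustrative helper; note that smtplib already performs this fallback internally via SMTP.ehlo_or_helo_if_needed(), which sendmail() calls for you):

```python
import smtplib

def greet(server):
    # Try the extended EHLO greeting first; if the server rejects it,
    # fall back to the older HELO (a 2xx reply code means success).
    code, _reply = server.ehlo()
    if not 200 <= code < 300:
        code, _reply = server.helo()
        if not 200 <= code < 300:
            raise smtplib.SMTPHeloError(code, "server refused both EHLO and HELO")
    return code
```

Usage would be something like `greet(smtplib.SMTP("smtp.example.com", 587))`, with the host a placeholder; handling a dropped connection (reconnect and start with HELO, as the answer notes) would need an extra retry around the constructor.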
0
2016-08-11T12:44:44Z
[ "python", "email", "gmail", "policy", "smtplib" ]