Dataset schema: title (string, 10–172 chars), question_id (int64, 469–40.1M), question_body (string, 22–48.2k chars), question_score (int64, −44–5.52k), question_date (string, 20 chars), answer_id (int64, 497–40.1M), answer_body (string, 18–33.9k chars), answer_score (int64, −38–8.38k), answer_date (string, 20 chars), tags (list)
pass array of dictionaries to python
38,970,840
<p>I have a bash script that builds a dictionary kind of structure for multiple iterations as shown below:</p> <pre><code>{ "a":"b", "c":"d", "e":"f"} { "a1":"b1", "c1":"d1", "e1":"f1", "g1":"h1" } </code></pre> <p>I have appended all of them to an array in the shell script and they are fed as input to a Python script, where I want the above data to be parsed as a list of dictionaries. </p> <p>I tried something like this and it didn't work.</p> <pre><code>var=({ "a":"b", "c":"d", "e":"f"} { "a1":"b1", "c1":"d1", "e1":"f1", "g1":"h1" }) function plot_graph { RESULT="$1" python - &lt;&lt;END from __future__ import print_function import pygal import os import sys def main(): result = os.getenv('RESULT') print(result) if __name__ == "__main__": main() END } plot_graph ${var[@]} </code></pre> <p>Arguments are being split and they are not being treated as a single variable.</p> <pre><code>Output will be: [ {"a":"b", ] </code></pre> <p>whereas I want the entire var value to be read as a string so that I can then split it into multiple dictionaries.</p> <p>Please help me get over this.</p>
-1
2016-08-16T09:10:08Z
38,971,826
<p>If I understand correctly, you want a concatenated dictionary with all keys. See <a href="http://stackoverflow.com/questions/1781571/how-to-concatenate-two-dictionaries-to-create-a-new-one-in-python">how to concatenate two dictionaries to create a new one in Python?</a></p>
0
2016-08-16T09:55:35Z
[ "python", "arrays", "dictionary" ]
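A minimal sketch of the parsing side the question is after, assuming the bash side exports the whole structure as one quoted JSON string (the variable name `RESULT` matches the question; the sample data here is illustrative):

```python
import json
import os

# Assumes the bash side exported the whole array as one quoted JSON value,
# e.g.:  RESULT='[{"a":"b","c":"d","e":"f"}, {"a1":"b1"}]' python script.py
raw = os.getenv('RESULT', '[{"a": "b", "c": "d", "e": "f"}, {"a1": "b1"}]')
dicts = json.loads(raw)  # a list of dictionaries
print(dicts[0]['a'])
```

Quoting the value on the bash side keeps word splitting from breaking it into separate arguments, which is the behaviour the question observes.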
Run Python script, build with pycharm, with shell
38,970,960
<p>I've written a simple script in Python with PyCharm. In this script I use tkinter. If I run the script in PyCharm everything is OK, but if I run the script from the shell with the command <code>python script.py</code> I receive the error </p> <blockquote> <p>ImportError: No module named tkinter</p> </blockquote>
-1
2016-08-16T09:15:37Z
38,971,062
<p>PyCharm and your system Python interpreters may have different versions and different modules installed. Check the PyCharm Python version and use it by default, or just write the full version name. For example:</p> <pre><code>python3.4 script.py </code></pre>
0
2016-08-16T09:20:42Z
[ "python", "python-3.x", "pycharm", "python-3.4" ]
Run Python script, build with pycharm, with shell
38,970,960
<p>I've written a simple script in Python with PyCharm. In this script I use tkinter. If I run the script in PyCharm everything is OK, but if I run the script from the shell with the command <code>python script.py</code> I receive the error </p> <blockquote> <p>ImportError: No module named tkinter</p> </blockquote>
-1
2016-08-16T09:15:37Z
38,972,142
<blockquote> <p>ImportError: No module named tkinter</p> </blockquote> <p>Simply put, the module named <code>tkinter</code> is not visible to the Python binary that you are using to run the script. Possible areas to investigate may be: </p> <ol> <li>Check whether you are using the same Python executable as PyCharm is using.</li> <li>If using virtual environments, make sure to activate the environment and that all dependencies are met.</li> <li>Your PyCharm might reference the <code>tkinter</code> library from an already-configured library path, in which case you may need to install it for your system interpreter.</li> <li>By default <code>tkinter</code> comes packaged with Python binaries; to check, run <code>import tkinter</code> in the shell. If you end up with the error <code>import _tkinter # If this fails your Python may not be configured for Tk</code>, then you need to re-build your Python. Check <a href="http://stackoverflow.com/questions/5459444/tkinter-python-may-not-be-configured-for-tk">this post here</a> for more details.</li> </ol>
0
2016-08-16T10:11:00Z
[ "python", "python-3.x", "pycharm", "python-3.4" ]
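A quick diagnostic sketch for point 1 of the answer: print which interpreter the shell is actually running and whether that interpreter can import tkinter, then compare against the interpreter configured in PyCharm:

```python
import sys

# Which interpreter is running this script? Compare with PyCharm's
# project interpreter setting.
print(sys.executable, sys.version_info[:2])

try:
    import tkinter  # the module is named Tkinter on Python 2
    have_tk = True
except ImportError:
    have_tk = False
print("tkinter available:", have_tk)
```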
Accessing youtube video stream map (python)
38,970,994
<p>I'm trying to write a python script that will download a youtube video, using this line of code for getting the download url:</p> <pre><code>download_url = "http://www.youtube.com/get_video?video_id={0}&amp;t={1}&amp;fmt=22&amp;asv=2".format(video_id, token_value) </code></pre> <p>(video_id and token_value being info I've got from parsing the youtube video url) but this keeps downloading an empty file. </p> <p>Since this method is old, is there now some other way of getting the download url for youtube videos?</p>
0
2016-08-16T09:17:30Z
39,099,544
<p>I solved my problem (to some extent) in the meantime. I just had to access youtube video stream map and grab one of the URLs stored there and download the file. However, this works only on videos that don't have their signature encrypted. I'm still working on a solution for videos with encrypted signatures.</p>
0
2016-08-23T11:09:38Z
[ "python", "url", "download", "youtube" ]
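A sketch of the stream-map parsing step the answer describes, using a made-up `url_encoded_fmt_stream_map`-style value (the real format and field names can differ per video, so treat this only as an illustration of the splitting/decoding):

```python
try:
    from urllib.parse import parse_qs  # Python 3
except ImportError:
    from urlparse import parse_qs      # Python 2

# Illustrative, made-up stream map: comma-separated streams, each stream
# a urlencoded set of key=value pairs.
stream_map = ("url=http%3A%2F%2Fexample.com%2Fa&itag=22,"
              "url=http%3A%2F%2Fexample.com%2Fb&itag=18")
streams = [parse_qs(chunk) for chunk in stream_map.split(',')]
print(streams[0]['url'], streams[0]['itag'])
```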
__init__() takes exactly 4 arguments (1 given)
38,971,022
<p>form.py</p> <pre><code>class InvoiceForm(ModelForm,): def __init__(self,em,first,last): self.email=em self.first=first self.last=last super(InvoiceForm,self).__init__(self,em,first,last) self.fields['email']=forms.ChoiceField(choices=[x.email for x in AuthUser.objects.filter(email=em)]) self.fields['first']=forms.ChoiceField(choices=[x.first_name for x in AuthUser.objects.filter(first_name=first)]) self.fields['last']=forms.ChoiceField(choices=[x.last_name for x in AuthUser.objects.filter(last_name=last)]) total_credits_ordered=forms.IntegerField(label=mark_safe('&lt;br/&gt; total_credits_ordered')) total_mobile_cr_ordered=forms.IntegerField(label=mark_safe('&lt;br/&gt; total_mobile_cr_ordered')) total_cloud_cr_ordered=forms.IntegerField(label=mark_safe('&lt;br/&gt; total_cloud_cr_ordered')) invoice_currency=forms.CharField(label=mark_safe('&lt;br/&gt; invoice_currency'),max_length=100) invoice_currency_code=forms.IntegerField(label=mark_safe('&lt;br/&gt;invoice_currency_code ')) invoice_country=forms.CharField(label=mark_safe('&lt;br/&gt; invoice_country'),max_length=100) invoice_note=forms.CharField(label=mark_safe('&lt;br/&gt; invoice_note'),max_length=100) class Meta: model=Invoices fields=['total_credits_ordered','total_mobile_cr_ordered','total_cloud_cr_ordered','invoice_currency','invoice_currency_code','invoice_country','invoice_note'] </code></pre> <p>views.py</p> <pre><code>def test(request): from app.tests import model_tests m = model_tests() print "assf" try: if request.method=="POST": print "sff" m.create_user_types() cform=CustomerForm(request.POST) if cform.is_valid(): em=cform.cleaned_data['email'] username=email password = cform.cleaned_data['password'] first=cform.cleaned_data['first'] last=cform.cleaned_data['last'] companyname=cform.cleaned_data['company_name'] companyaddr=cform.cleaned_data['company_addr'] companystate=cform.cleaned_data['company_state'] companycountry=cform.cleaned_data['company_country'] 
id=m.create_customer(username,email,password,first,last,companyname,companyaddr,companystate,companycountry) print "SFsfg" iform=InvoiceForm(email,first,last) print "ggg" if iform.is_valid(): tco=iform.cleaned_data['total_credits_ordered'] tmco=iform.cleaned_data['total_mobile_cr_ordered'] tcco=iform.cleaned_data['total_cloud_cr_ordered'] ic=iform.cleaned_data['invoice_currency'] icc=iform.cleaned_data['invoice_currency_code'] c=iform.cleaned_data['invoice_country'] inote=iform.cleaned_data['invoice_note'] id_i=m.create_invoices(id,tco,tmco,tcco,ic,icc,c,inote) pform=PaymentForm() print "dsf" pform=PaymentForm(request.POST) if pform.is_valid(): tpm=pform.cleaned_data['total_payment_made'] ps=pform.cleaned_data['payment_status'] pt=pform.cleaned_data['payment_type'] m.create_payment(id_i,tpm,ps,pt) return HttpResponse("test successful") else: print "d" cform=CustomerForm() iform=InvoiceForm() pform=PaymentForm() return render(request, 'test.html', {'cform': cform,'iform':iform,'pform':pform}) except Exception as e: return HttpResponse("Exception : %s" %(e)) return HttpResponse("Tests Successfull...") </code></pre> <p>It is showing: <code>Exception : __init__() takes exactly 4 arguments (1 given)</code></p> <p>but I have passed parameters to the form.</p>
-1
2016-08-16T09:18:41Z
38,971,076
<p>We don't have the stacktrace in the question, but the problem is probably here:</p> <pre><code>else: print "d" cform=CustomerForm() iform=InvoiceForm() pform=PaymentForm() </code></pre> <p>Here you are creating objects without passing any parameters. Since the instance itself is always passed, the message says that it is missing the other parameters, which are <code>em, first, last</code>.</p> <p>I suggest that you either remove everything after the <code>else</code> part, since it does nothing useful, or add a warning like this to avoid silent errors:</p> <pre><code>else: print("Unsupported method "+request.method) </code></pre>
1
2016-08-16T09:21:33Z
[ "python", "django" ]
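The error the answer points at comes from ordinary Python argument rules; here is a framework-free sketch (no Django, names mirror the question, everything else simplified) showing why giving the extra arguments defaults makes the bare `InvoiceForm()` call work in both branches:

```python
class InvoiceForm(object):
    # Defaults let the class be instantiated both with and without data,
    # as the two branches of the view require.
    def __init__(self, em=None, first=None, last=None):
        self.email = em
        self.first = first
        self.last = last

bare = InvoiceForm()                        # no longer raises TypeError
bound = InvoiceForm('a@b.c', 'Ann', 'Lee')  # explicit values still work
print(bare.email, bound.email)
```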
Getting all existing fields of a received FIX message with QuickFIX
38,971,133
<p>Does QuickFIX provide the possibility of getting ALL existing fields of an incoming FIX message in a single step? (I use version 1.14.3 for Python.)</p> <p>According to QuickFIX documentation, it's possible to get a field value in a certain way:</p> <pre><code>price = quickfix.Price() field = message.getField(price) field.getValue() </code></pre> <p>Various message types contain different fields, so doing that for every field would be awkward. What is more, sometimes it's unknown whether some fields exist in a message. How do I get all fields of a message not knowing what fields it contains?</p>
1
2016-08-16T09:24:15Z
38,978,514
<p>I'm not aware of a method. This is what I do, with <code>message</code> the incoming FIX message:</p> <p><code>tags = re.findall(r'(?&lt;=\x01).*?(?==)', str(message))</code></p> <p>Then, where <code>FIX = {'1':fix.Account(), '2':fix.AdvId(), ...}</code>, you can get all values by doing </p> <pre><code>for tag in set(tags)&amp;set(FIX.keys()): message.getField(FIX[tag]) </code></pre> <p>Obviously you must import the <code>re</code> module. </p>
0
2016-08-16T15:11:59Z
[ "python", "quickfix", "fix" ]
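The answer's regex can be tried on any raw FIX-style string; a self-contained sketch with a made-up message (note the lookbehind requires a preceding SOH, so the very first tag in the message is not matched):

```python
import re

# Fields in a raw FIX message are separated by the SOH byte (\x01) and
# look like tag=value; this made-up message mimics that layout.
message = "8=FIX.4.2\x019=65\x0135=A\x0149=CLIENT\x01"

# Same pattern as in the answer: anything between a SOH and the next '='.
tags = re.findall(r'(?<=\x01).*?(?==)', message)
print(tags)  # the leading tag '8' has no SOH before it, so it is skipped
```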
Python pandas selection of rows on unknown number of parameters
38,971,158
<p>I'm having an issue that I can simplify to the following situation, in which I introduce a dataframe, make a selection in a loop, and build a new dataframe containing the subset of the old one that satisfies the conditions:</p> <pre><code>import pandas as pd import itertools g = ['M', 'M', 'F', 'F'] a = [20, 33, 20, 50] Zip = [21202, 21018, 21202, 22222] d = [0, -3, 8, 2] parameters = (g, a) names = ['gender', 'age'] df = pd.DataFrame({'age':a, 'gender':g, 'd':d, 'Zip':Zip}) for values in itertools.product(*parameters): thesevalues = ((df[names[0]] == values[0]) &amp; (df[names[1]] == values[1])) subdf = df[thesevalues] </code></pre> <p>This works just fine, but what if I want to also include the zip codes in the parameters, with the names? I would then have to manually introduce a third selection criterion in "thesevalues". I am probably overlooking the functionality that would make the selection criterion adapt to the list of parameters. A loop seems like a bad option... Is there another way? Thanks! </p>
-3
2016-08-16T09:25:34Z
38,971,557
<p>IIUC you need <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_and.html" rel="nofollow"><code>numpy.logical_and</code></a>:</p> <pre><code>import numpy as np from itertools import product parameters = (g, a, Zip) names = ['gender', 'age', 'Zip'] df = pd.DataFrame({'age':a, 'gender':g, 'd':d, 'Zip':Zip}) print (df) for values in product(*parameters): #http://stackoverflow.com/a/20528566/2901002 thesevalues = np.logical_and.reduce([df[names[x]] == values[x] for x in range(len(parameters))]) subdf = df[thesevalues] print (subdf) </code></pre>
0
2016-08-16T09:44:13Z
[ "python", "pandas", "indexing", "itertools" ]
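A runnable version of the answer's idea, with sample data taken from the question (the mismatched `d` column dropped for brevity): `np.logical_and.reduce` folds any number of boolean Series into a single mask, so adding a parameter means adding one list element, not another hand-written `&amp;` clause:

```python
import itertools

import numpy as np
import pandas as pd

g = ['M', 'M', 'F', 'F']
a = [20, 33, 20, 50]
Zip = [21202, 21018, 21202, 22222]
df = pd.DataFrame({'gender': g, 'age': a, 'Zip': Zip})

parameters = (g, a, Zip)
names = ['gender', 'age', 'Zip']
for values in itertools.product(*parameters):
    # One equality condition per parameter, combined in a single step.
    mask = np.logical_and.reduce(
        [df[name] == value for name, value in zip(names, values)])
    subdf = df[mask]
```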
flask argument in render_template
38,971,263
<p>The <code>name</code> argument of the <code>hello</code> function defaults to <code>None</code>, but the <code>name</code> in <code>render_template</code> will take the value of the name if one is provided in the URL. Basically my question is: how does Python know when <code>name</code> is <code>None</code> and when it is given in the URL?</p> <pre><code> from flask import render_template @app.route('/hello') @app.route('/hello/&lt;name&gt;') def hello(name=None): return render_template('hello.html', name=name) </code></pre>
0
2016-08-16T09:30:07Z
38,975,826
<p>Flask is a framework and there is a lot of code behind the scenes (especially <code>werkzeug</code>) which does all the request processing, then it calls your view function and then it prepares a complete response.</p> <p>So the answer is that Python does not know the URL, but Flask does, and calls your view function either with the <code>name</code> (overriding the default <code>None</code>) or without it.</p> <p>The <code>name</code> variable of the view function is passed to the template under the same name. Those are the two <code>name</code>s in this line:</p> <pre><code>return render_template('hello.html', name=name) </code></pre>
1
2016-08-16T13:07:06Z
[ "python", "methods", "flask" ]
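The routing behaviour reduces to ordinary default arguments; a Flask-free sketch of what happens at call time:

```python
def hello(name=None):
    # Flask calls the view with name=<value> when the URL supplies one,
    # and with no argument (so the default None applies) otherwise.
    return "Hello, World!" if name is None else "Hello, %s!" % name

print(hello())       # what the /hello route renders
print(hello("Ada"))  # what the /hello/Ada route renders
```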
Get class labels from Keras functional model
38,971,293
<p>I have a functional model in Keras (Resnet50 from repo examples). I trained it with <code>ImageDataGenerator</code> and <code>flow_from_directory</code> data and saved model to <code>.h5</code> file. When I call <code>model.predict</code> I get an array of class probabilities. But I want to associate them with class labels (in my case - folder names). How can I get them? I found that I could use <code>model.predict_classes</code> and <code>model.predict_proba</code>, but I don't have these functions in Functional model, only in Sequential.</p>
0
2016-08-16T09:31:18Z
40,051,843
<p>The functional API models have just the <code>predict()</code> function, which for classification returns the class probabilities. You can then select the most probable classes using the <code>probas_to_classes()</code> utility function (available in Keras 1.x). Example:</p> <pre><code>y_proba = model.predict(x) y_classes = keras.utils.np_utils.probas_to_classes(y_proba) </code></pre> <p>This is equivalent to <code>model.predict_classes(x)</code> on the Sequential model.</p> <p>The reason for this is that the functional API supports a more general class of tasks where <code>predict_classes()</code> would not make sense.</p> <p>More info: <a href="https://github.com/fchollet/keras/issues/2524" rel="nofollow">https://github.com/fchollet/keras/issues/2524</a></p>
0
2016-10-14T20:55:44Z
[ "python", "keras" ]
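Since `probas_to_classes` was removed in later Keras versions, the same step can be done with a plain argmax over the probability axis; a framework-free sketch with made-up probabilities:

```python
import numpy as np

# Each row is one sample's class probabilities, as model.predict returns.
y_proba = np.array([[0.1, 0.7, 0.2],
                    [0.8, 0.1, 0.1]])

# Most probable class per sample; for multi-class outputs this matches
# what predict_classes does on a Sequential model.
y_classes = np.argmax(y_proba, axis=1)
print(y_classes)  # [1 0]
```

To map these indices back to folder names, use the `class_indices` dictionary that `flow_from_directory` exposes on its generator.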
python stack method for reverse string code
38,971,465
<p>I want to use the stack method to reverse a string in this reversal exercise:<br> "Write a function revstring(mystr) that uses a stack to reverse the characters in a string."<br> This is my code.<br></p> <pre><code>from pythonds.basic.stack import Stack def revstring(mystr): myStack = Stack() //this is how i have myStack for ch in mystr: //looping through characters in my string myStack.push(ch) //push the characters to form a stack revstr = '' //form an empty reverse string while not myStack.isEmpty(): revstr = revstr + myStack.pop() //adding my characters to the empty reverse string in reverse order return revstr print revstring("martin") </code></pre> <p><strong>The output seems to print out only the first letter of mystr, that is "m".</strong> Why is this?</p>
2
2016-08-16T09:39:27Z
38,971,622
<ul> <li>the <code>while</code> should not be in the <code>for</code></li> <li>the <code>return</code> should be outside, not in the <code>while</code></li> </ul> <p><strong>code:</strong></p> <pre><code>from pythonds.basic.stack import Stack def revstring(mystr): myStack = Stack() # this is how i have myStack for ch in mystr: # looping through characters in my string myStack.push(ch) # push the characters to form a stack revstr = '' # form an empty reverse string while not myStack.isEmpty(): # adding my characters to the empty reverse string in reverse order revstr = revstr + myStack.pop() return revstr print revstring("martin") </code></pre>
0
2016-08-16T09:46:46Z
[ "python", "python-2.7" ]
python stack method for reverse string code
38,971,465
<p>I want to use the stack method to reverse a string in this reversal exercise:<br> "Write a function revstring(mystr) that uses a stack to reverse the characters in a string."<br> This is my code.<br></p> <pre><code>from pythonds.basic.stack import Stack def revstring(mystr): myStack = Stack() //this is how i have myStack for ch in mystr: //looping through characters in my string myStack.push(ch) //push the characters to form a stack revstr = '' //form an empty reverse string while not myStack.isEmpty(): revstr = revstr + myStack.pop() //adding my characters to the empty reverse string in reverse order return revstr print revstring("martin") </code></pre> <p><strong>The output seems to print out only the first letter of mystr, that is "m".</strong> Why is this?</p>
2
2016-08-16T09:39:27Z
38,971,811
<p>Here are 3 solutions to the same problem, just pick one:</p> <p><strong>1ST SOLUTION</strong></p> <p>Fixing your solution: you almost got it, you just need to indent your blocks properly, like this:</p> <pre><code>from pythonds.basic.stack import Stack def revstring(mystr): myStack = Stack() # this is how i have myStack for ch in mystr: myStack.push(ch) # push the characters to form a stack revstr = '' # form an empty reverse string while not myStack.isEmpty(): # adding my characters to the empty reverse string in reverse order revstr = revstr + myStack.pop() return revstr print revstring("martin") </code></pre> <p><strong>2ND SOLUTION</strong></p> <p>This one is structurally the same as yours, but instead of using a custom stack it just uses the built-in Python list:</p> <pre><code>def revstring(mystr): myStack = [] # this is how i have myStack for ch in mystr: myStack.append(ch) # push the characters to form a stack revstr = '' # form an empty reverse string while len(myStack): # adding my characters to the empty reverse string in reverse order revstr = revstr + myStack.pop() return revstr print revstring("martin") </code></pre> <p><strong>3RD SOLUTION</strong></p> <p>To reverse strings, just use this Pythonic way :)</p> <pre><code>print "martin"[::-1] </code></pre>
1
2016-08-16T09:54:59Z
[ "python", "python-2.7" ]
Insert large amount of data to BigQuery via bigquery-python library
38,971,523
<p>I have large csv files and excel files where I read them and create the needed create table script dynamically depending on the fields and types it has. Then insert the data to the created table.</p> <p>I have read <a href="http://stackoverflow.com/questions/23770799/loading-a-lot-of-data-into-google-bigquery-from-python">this</a> and understood that I should send them with <code>jobs.insert()</code> instead of <code>tabledata.insertAll()</code> for large amount of data.</p> <p>This is how I call it (Works for smaller files not large ones).</p> <pre><code>result = client.push_rows(datasetname,table_name,insertObject) # insertObject is a list of dictionaries </code></pre> <p>When I use library's <a href="https://github.com/tylertreat/BigQuery-Python/blob/master/bigquery/client.py#L1209">push_rows</a> it gives this error in windows.</p> <pre><code>[Errno 10054] An existing connection was forcibly closed by the remote host </code></pre> <p>and this in ubuntu.</p> <pre><code>[Errno 32] Broken pipe </code></pre> <p>So when I went through <a href="https://github.com/tylertreat/BigQuery-Python/blob/master/bigquery/client.py#L1264">BigQuery-Python</a> code it uses <code>table_data.insertAll()</code>. </p> <p>How can I do this with this library? I know we can upload through Google storage but I need direct upload method with this.</p>
5
2016-08-16T09:42:17Z
39,111,381
<p>When handling large files don't use streaming, but batch load: Streaming will easily handle up to 100,000 rows per second. That's pretty good for streaming, but not for loading large files.</p> <p>The sample code linked is doing the right thing (batch instead of streaming), so what we see is a different problem: This sample code is trying to load all this data straight into BigQuery, but the uploading through POST part fails. <code>gsutil</code> has a more robust uploading algorithm than just a plain POST.</p> <p>Solution: Instead of loading big chunks of data through POST, stage them in Google Cloud Storage first, then tell BigQuery to read files from GCS.</p> <p>See also <a href="http://stackoverflow.com/q/39101602/132438">BigQuery script failing for large file</a></p>
1
2016-08-23T22:03:31Z
[ "python", "python-2.7", "google-bigquery", "large-data" ]
How to get image from dynamic url using urllib2?
38,971,571
<p>I have generated a url from a product code, like:</p> <pre><code>code: 2555-525 url : www.example.com/2555-525.png </code></pre> <p>But when fetching the url, it might have a different name format on the server, like </p> <pre><code>www.example.com/2555-525.png www.example.com/2555-525_TEXT.png www.example.com/2555-525_TEXT_TEXT.png </code></pre> <p>Sample code:</p> <pre><code>urllib2.urlopen(URL).read() </code></pre> <p>Could we pass the url like <code>www.example.com/2555-525*.png</code> ?</p>
0
2016-08-16T09:44:50Z
38,972,779
<p>Using wildcards in URLs is useless in most cases because </p> <ul> <li><p>the interpretation of the part of the URL after <code>http://www.example.com/</code> is totally up to the server, so <code>http://www.example.com/2555-525*.png</code> might have a meaning to the server but probably does not</p></li> <li><p>normally (exceptions like WebDAV exist) there is no way of listing resources in a collection or existing URLs in general, apart from trying them one by one (which is impractical) or scraping a known site for URLs (which might be incomplete)</p></li> </ul> <p>For finding and downloading URLs automatically you can use a <a href="https://en.wikipedia.org/wiki/Web_crawler" rel="nofollow">Web Crawler</a> or Spider.</p>
0
2016-08-16T10:42:31Z
[ "python", "urllib2", "urllib3" ]
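Since a wildcard cannot be sent to the server, one pragmatic fallback is to generate the known name variants and try them one by one with `urllib2.urlopen` until one succeeds. This sketch only builds the candidate list (the suffixes are the ones shown in the question; no network access is attempted):

```python
code = "2555-525"
suffixes = ["", "_TEXT", "_TEXT_TEXT"]

# Candidate URLs to try one by one, keeping the first that answers.
candidates = ["http://www.example.com/%s%s.png" % (code, s) for s in suffixes]
print(candidates)
```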
Getting value from list in Python
38,971,627
<p>I'm writing a simple script in Python that creates a list of 10000 instances of the <code>test</code> class. Then I loop through every element in the list and change the value of the variable <code>x</code> to a random string generated using the <code>id_generator</code> function.</p> <pre><code>import string import random def id_generator(size=6, chars=string.ascii_uppercase + string.digits): return ''.join(random.choice(chars) for _ in range(size)) class test: x = None y = None d = test lista = [d] * 10000 w = 0 while (w &lt; 10000): lista[w].x = id_generator() w = w + 1 print(lista[3].x) print(lista[40].x) print(lista[1999].x) </code></pre> <p>Why do I get the same value 3 times in the output? Shouldn't I get 3 different values generated using <code>id_generator()</code>?</p>
0
2016-08-16T09:47:05Z
38,971,763
<p>Because you are not creating an instance of your class and you are directly referencing the class attribute <code>x</code> of the <code>test</code> class. You also have to declare your attributes as instance attributes, hence defining them inside <code>__init__()</code>:</p> <pre><code>import string import random def id_generator(size=6, chars=string.ascii_uppercase + string.digits): return ''.join(random.choice(chars) for _ in range(size)) class test: def __init__(self): self.x = None self.y = None # Create 10000 instances of the test class lista = [test() for _ in range(10000)] w = 0 while (w &lt; 10000): lista[w].x = id_generator() w = w + 1 print(lista[3].x) print(lista[40].x) print(lista[1999].x) </code></pre>
6
2016-08-16T09:52:55Z
[ "python" ]
Getting value from list in Python
38,971,627
<p>I'm writing a simple script in Python that creates a list of 10000 instances of the <code>test</code> class. Then I loop through every element in the list and change the value of the variable <code>x</code> to a random string generated using the <code>id_generator</code> function.</p> <pre><code>import string import random def id_generator(size=6, chars=string.ascii_uppercase + string.digits): return ''.join(random.choice(chars) for _ in range(size)) class test: x = None y = None d = test lista = [d] * 10000 w = 0 while (w &lt; 10000): lista[w].x = id_generator() w = w + 1 print(lista[3].x) print(lista[40].x) print(lista[1999].x) </code></pre> <p>Why do I get the same value 3 times in the output? Shouldn't I get 3 different values generated using <code>id_generator()</code>?</p>
0
2016-08-16T09:47:05Z
38,971,796
<p>You are making three different mistakes:</p> <ol> <li><p>You should use instance attributes, and not class attributes:</p> <pre><code>class test: def __init__(self): self.x = None self.y = None </code></pre></li> <li><p>You should instantiate the <code>test</code> class. You should write:</p> <pre><code> d = test() </code></pre></li> <li><p>By writing <code>[d]*10000</code> you are actually storing 10000 copies of the same object. Write instead:</p> <pre><code># notice that you can get rid of the 'd' object lista = [test() for i in range(0, 10000)] </code></pre></li> </ol>
1
2016-08-16T09:54:13Z
[ "python" ]
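The list-multiplication pitfall from the question can be seen directly with the `is` operator; a minimal sketch:

```python
class Test(object):
    pass

shared = [Test()] * 3               # one instance, three references to it
fresh = [Test() for _ in range(3)]  # three distinct instances

print(shared[0] is shared[1])  # True: mutating one "element" affects all
print(fresh[0] is fresh[1])    # False: each element is independent
```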
add students in loop
38,971,652
<pre><code>void addStudent(char* lastName, char* firstName, char* studentId, smartresponse_classV1_t* signInClass) { sr_student_t *student = sr_student_create( lastName, sizeof lastName + 1, firstName, sizeof firstName + 1, studentId, sizeof studentId); sr_class_addstudent(signInClass, student); sr_student_release(student); } // add student char *firstName = "first"; char *lastName = "last"; char *studentId = "1"; char *id =""; int i; for ( i = 1; i &lt; 10; i++) { id = _itoa(i, studentId, 10); addStudent(lastName, firstName, id, signInClass); } </code></pre> <p>I am trying to convert an int to a string so that I can assign a new id to each new student. I don't know what I'm doing wrong: when I call the test dll function from Python it gives me the error <code>WindowsError: exception: access violation writing .....</code> in <code>print dll.test()</code>. Is there a problem in the for-loop when I call the function and assign the id?</p> <pre><code>def test(x): ''' Just runs the main test. &gt;&gt;&gt; test(1) 1 ''' if x == 1: print dll.test() if __name__ == '__main__': ''' Testing the library. ''' import doctest if doctest.testmod()[0] &gt; 0: raise Exception('Unit tests have errors') print 'Unit tests OK' </code></pre>
0
2016-08-16T09:48:14Z
38,971,978
<p>You assigned too few bytes to your <code>id</code> and <code>studentId</code> pointers, and they point at string literals, which must not be written to:</p> <pre><code> char *id =""; //1 byte assigned: the 0x00 at the end of string "" char *studentId ="1"; // 2 bytes assigned, but the code will need 3 ("10"+null) </code></pre> <p>Use a writable buffer that is large enough for <code>_itoa</code> to write into:</p> <pre><code>void addStudent(char* lastName, char* firstName, char* studentId, smartresponse_classV1_t* signInClass){ sr_student_t *student = sr_student_create(lastName, sizeof lastName + 1, firstName, sizeof firstName + 1, studentId, sizeof studentId); sr_class_addstudent(signInClass, student); sr_student_release(student); } char *firstName = "first"; char *lastName = "last"; char studentId[12]; /* writable buffer, not a string literal */ char *id; int i; for ( i = 1; i &lt; 10; i++){ id = _itoa(i, studentId, 10); addStudent(lastName, firstName, id, signInClass); } </code></pre>
0
2016-08-16T10:03:09Z
[ "python", "c" ]
SWIG Cmake inconsistent DLL linkage
38,971,681
<p>I am using CMake to build a C++ project into a DLL on Windows. I then wish to wrap this for Python using SWIG, but in doing so I am receiving warnings about 'Inconsistent DLL linkage'. I gather this refers to incorrect usage of dllexport/dllimport and I need to specify a #define for SWIG? How can I do this in CMake?</p> <p>My C++ library is built like so in CMake:</p> <pre><code># glob all the sources file(GLOB SOURCES "src/core/*.cpp") add_library(galgcore SHARED ${SOURCES}) target_link_libraries(galgcore ${GDAL_LIBRARY}) GENERATE_EXPORT_HEADER( galgcore BASE_NAME GeoAlg EXPORT_MACRO_NAME GALGCORE_DLL EXPORT_FILE_NAME ${PROJECT_SOURCE_DIR}/src/core/core_exp.h STATIC_DEFINE GeoAlg_BUILT_AS_STATIC ) </code></pre> <p>(It is using CMake to generate the export header.) </p> <p>I am using this library to build a test executable which works well:</p> <pre><code>include(FindGTest) enable_testing() find_package(GTest REQUIRED) include_directories(${GTEST_INCLUDE_DIRS}) # If *nix, pthread must be specified *after* the googletest libs if(WIN32) set (PTHREAD "") else(WIN32) set (PTHREAD pthread) endif(WIN32) add_executable(galgtest test/galg_unittest.cpp) target_link_libraries(galgtest ${GTEST_BOTH_LIBRARIES} galgcore galgfunc ${PTHREAD}) add_test(AllTestsInGalg galgtest "${CMAKE_CURRENT_LIST_DIR}/test/10_12_1.tif") </code></pre> <p>Finally, the section dealing with swig:</p> <pre><code>### SWIG # This generates the python bindings find_package(SWIG REQUIRED) include(${SWIG_USE_FILE}) find_package(PythonLibs) include_directories(${PYTHON_INCLUDE_PATH}) set(CMAKE_SWIG_FLAGS "-Wall") set_source_files_properties("${PROJECT_SOURCE_DIR}/python/galg.i" PROPERTIES CPLUSPLUS ON) set_property(SOURCE "${PROJECT_SOURCE_DIR}/python/galg.i" PROPERTY SWIG_FLAGS "-builtin") SWIG_ADD_MODULE(galg python "${PROJECT_SOURCE_DIR}/python/galg.i" ${SOURCES}) SWIG_LINK_LIBRARIES(galg ${PYTHON_LIBRARIES} galgcore) </code></pre>
0
2016-08-16T09:49:31Z
38,976,793
<p>Here is the full recipe to what I do to completely avoid warnings on Windows:</p> <pre><code># We don't have Python with debug information installed if (MSVC) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4127") add_definitions(-DSWIG_PYTHON_INTERPRETER_NO_DEBUG) endif() find_package(SWIG REQUIRED) include(${SWIG_USE_FILE}) find_package(PythonLibs REQUIRED) include_directories(${PYTHON_INCLUDE_PATH}) include_directories(${CMAKE_CURRENT_SOURCE_DIR}) include_directories(${CMAKE_CURRENT_BINARY_DIR}) # generated files if (MSVC) set(CMAKE_SWIG_FLAGS "-D_SWIG_WIN32") endif() set_source_files_properties(swig_project.i PROPERTIES CPLUSPLUS ON) swig_add_module(swig_project python swig_project.i ${swig_project_HEADERS}) if (MSVC) # Potential uninitialized variable in SWIG_AsVal_ set_source_files_properties( ${swig_generated_file_fullname} PROPERTIES COMPILE_FLAGS "/wd4701") endif() if (WIN32) # Allow to debug under windows, if debug versions of Python are missing string(REPLACE "_d" "" PYTHON_LIBRARIES "${PYTHON_LIBRARIES}") endif() swig_link_libraries(swig_project project ${PYTHON_LIBRARIES}) if (WIN32) # pyconfig.h is not autogenerated on Windows. To avoid warnings, we # add a compiler directive get_directory_property(DirDefs COMPILE_DEFINITIONS ) set_target_properties(_swig_project PROPERTIES COMPILE_DEFINITIONS "${DirDefs};HAVE_ROUND") endif() </code></pre>
1
2016-08-16T13:52:59Z
[ "python", "c++", "dll", "cmake", "swig" ]
How to partially override/extend a Declarative class's constructor?
38,971,742
<p>These are the relevant bits of what I have:</p> <pre><code>import sqlalchemy as db import sqlalchemy.ext.declarative Base = db.ext.declarative.declarative_base() class Product(Base): __tablename__ = 'product' id = db.Column(db.Integer, primary_key=True) class Bin(Base): __tablename__ = 'bin' id = db.Column(db.Integer, primary_key=True) product_id = db.Column(db.Integer, db.ForeignKey('product.id'), nullable=False) product = db.orm.relationship('Product') class PurchaseItem(Base): __tablename__ = 'purchase_item' id = db.Column(db.Integer, primary_key=True) bin_id = db.Column(db.Integer, db.ForeignKey('bin.id'), nullable=False) bin = db.orm.relationship('Bin') </code></pre> <p>What I'd like is to have the <code>PurchaseItem</code> constructor automatically construct and use a <code>Bin</code> object if it's passed a <code>Product</code>. I'd normally do:</p> <pre><code>def __init__(self, product=None, **kwargs): if product is not None: kwargs['bin'] = Bin(product=product) super(PurchaseItem, self).__init__(self, **kwargs) </code></pre> <p>, but I get this error:</p> <pre><code>&gt;&gt;&gt; p = Product() &gt;&gt;&gt; pi = PurchaseItem(product=p) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "&lt;string&gt;", line 4, in __init__ File "/Users/andrea/src/ifs/src/venv/lib/python2.7/site-packages/sqlalchemy/orm/state.py", line 306, in _initialize_instance manager.dispatch.init_failure(self, args, kwargs) File "/Users/andrea/src/ifs/src/venv/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__ compat.reraise(exc_type, exc_value, exc_tb) File "/Users/andrea/src/ifs/src/venv/lib/python2.7/site-packages/sqlalchemy/orm/state.py", line 303, in _initialize_instance return manager.original_init(*mixed[1:], **kwargs) File "&lt;stdin&gt;", line 11, in __init__ TypeError: _declarative_constructor() takes exactly 1 argument (3 given) </code></pre> <p>presumably because <code>Base</code> is a metaclass and 
dynamically creates the subclass constructor.</p> <p>I'm able to get what I want by storing the old constructor and creating a new one that then calls the old one:</p> <pre><code>_old_init = PurchaseItem.__init__ def _new_init(self, product=None, init=_old_init, **kwargs): if product is not None: kwargs['bin'] = Bin(product=product) init(self, **kwargs) PurchaseItem.__init__ = _new_init </code></pre> <p>Is there a way to do this in the <code>PurchaseItem</code> class definition? Failing that, is there a way that doesn't involve temporary variables ala emacs's <code>defadvice</code>?</p>
0
2016-08-16T09:51:56Z
38,981,953
<p>A superclass call should be invoked like this:</p> <pre><code>super(PurchaseItem, self).__init__(**kwargs) </code></pre>
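To see why the original call failed, here is a minimal stand-in that runs without SQLAlchemy. Like SQLAlchemy's `_declarative_constructor`, the `Base.__init__` here accepts keyword arguments only; the `'bin-for-...'` string is a hypothetical placeholder for `Bin(product=product)`:

```python
# Minimal stand-in for the declarative machinery: like SQLAlchemy's
# _declarative_constructor, Base.__init__ accepts keyword arguments only.
class Base(object):
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

class PurchaseItem(Base):
    def __init__(self, product=None, **kwargs):
        if product is not None:
            # hypothetical placeholder for Bin(product=product)
            kwargs['bin'] = 'bin-for-%s' % product
        # Wrong: super(PurchaseItem, self).__init__(self, **kwargs)
        # passes self twice, handing the base constructor an extra
        # positional argument, which is what triggered the TypeError.
        super(PurchaseItem, self).__init__(**kwargs)

pi = PurchaseItem(product='widget')
print(pi.bin)  # bin-for-widget
```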
1
2016-08-16T18:22:25Z
[ "python", "sqlalchemy" ]
Understanding evaluation order of nested list comprehensions
38,971,774
<p>I am preparing for my exam and I decided to start solving past exams. One of the requirements is to understand what a piece of code does, but I am having trouble with this snippet.</p> <p>I do not understand the structure of this nested loop and which loop is executed first.</p> <pre><code>n = 10 p = [q for q in range(2, n) if q not in [r for i in range(2, int(n**0.5)) for r in range(i * 2, n, i)]] print(p) </code></pre> <p>Can someone help me understand, please?</p>
1
2016-08-16T09:53:21Z
38,971,975
<p>It starts by evaluating:</p> <pre><code>[r for i in range(2, int(n**0.5)) for r in range(i * 2, n, i)] </code></pre> <p>which boils down to:</p> <pre><code>[r for r in range(4, 10, 2)] </code></pre> <p>since <code>range(2, int(n ** 0.5))</code> reduces to a list with a single element <code>[2]</code> that is used as the value of <code>i</code> in the <code>for r in range(i * 2, n, i)</code> statement. So the inner list comprehension evaluates to <code>[4, 6, 8]</code>.</p> <p>Then, the outer loop <code>for q in range(2, n)</code> is executed and it returns those elements from the list <code>[2, 3, ..., 9]</code> that do not belong in the previously constructed list, i.e. <code>[4, 6, 8]</code>, with:</p> <pre><code># range(2, n) -&gt; [2, 3, ..., 9] q for q in range(2, n) if q not in [..previously constructed list] </code></pre>
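The evaluation above can be checked directly:

```python
n = 10
inner = [r for i in range(2, int(n ** 0.5)) for r in range(i * 2, n, i)]
print(inner)  # [4, 6, 8] -- i only ever takes the value 2
p = [q for q in range(2, n) if q not in inner]
print(p)  # [2, 3, 5, 7, 9]
```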
1
2016-08-16T10:03:06Z
[ "python", "list", "python-3.x", "for-loop", "nested-loops" ]
Understanding evaluation order of nested list comprehensions
38,971,774
<p>I am preparing for my exam and I decided to start solving past exams. One of the requirements is to understand what a piece of code does, but I am having trouble with this snippet.</p> <p>I do not understand the structure of this nested loop and which loop is executed first.</p> <pre><code>n = 10 p = [q for q in range(2, n) if q not in [r for i in range(2, int(n**0.5)) for r in range(i * 2, n, i)]] print(p) </code></pre> <p>Can someone help me understand, please?</p>
1
2016-08-16T09:53:21Z
38,972,090
<p>As a rule of thumb, the innermost loops are going to be executed first.</p> <p>With this in mind, let's break the problem down:</p> <pre><code>[r for i in range(2, int(n**0.5)) for r in range(i * 2, n, i)] </code></pre> <p><code>n**0.5</code> is 3.xxx, so <code>range(2, int(n**0.5))</code> is in fact range(2, 3), which yields only the value 2 (see <a href="https://docs.python.org/3/library/stdtypes.html#typesseq-range" rel="nofollow">range</a> for more information).</p> <p>So <code>i</code> is going to be 2, no matter what.</p> <p><code>r in range(i * 2, n, i)</code> looks pretty simple now: r will be between 4 and 10 (excluded), using a step of 2. The possible values are 4, 6 and 8.</p> <p>The problem becomes:</p> <pre><code>p = [q for q in range(2, n) if q not in [4, 6, 8]] </code></pre> <p>which is basically all odd numbers between 2 and 10 (excluded), plus the number 2.</p>
0
2016-08-16T10:08:15Z
[ "python", "list", "python-3.x", "for-loop", "nested-loops" ]
Understanding evaluation order of nested list comprehensions
38,971,774
<p>I am preparing for my exam and I decided to start solving past exams. One of the requirements is to understand what a piece of code does, but I am having trouble with this snippet.</p> <p>I do not understand the structure of this nested loop and which loop is executed first.</p> <pre><code>n = 10 p = [q for q in range(2, n) if q not in [r for i in range(2, int(n**0.5)) for r in range(i * 2, n, i)]] print(p) </code></pre> <p>Can someone help me understand, please?</p>
1
2016-08-16T09:53:21Z
38,972,313
<p>I'm gonna give you a little tip which will help you for sure: if you're having a hard time understanding nested loops, run this code:</p> <pre><code>resultA = [] for x in ['x1', 'x2', 'x3']: for y in ['y1', 'y2', 'y3']: for z in ['z1', 'z2', 'z3']: resultA.append(''.join([x, y, z])) print resultA resultB = [''.join([x, y, z]) for x in ['x1', 'x2', 'x3'] for y in ['y1', 'y2', 'y3'] for z in ['z1', 'z2', 'z3'] ] print resultB print resultA == resultB </code></pre> <p>Once you've understood this code, list comprehensions become second nature; then just come back to your original code and you won't have any problems with it :)</p>
-1
2016-08-16T10:19:22Z
[ "python", "list", "python-3.x", "for-loop", "nested-loops" ]
Understanding evaluation order of nested list comprehensions
38,971,774
<p>I am preparing for my exam and I decided to start solving past exams. One of the requirements is to understand what a piece of code does, but I am having trouble with this snippet.</p> <p>I do not understand the structure of this nested loop and which loop is executed first.</p> <pre><code>n = 10 p = [q for q in range(2, n) if q not in [r for i in range(2, int(n**0.5)) for r in range(i * 2, n, i)]] print(p) </code></pre> <p>Can someone help me understand, please?</p>
1
2016-08-16T09:53:21Z
38,972,717
<p>This is equivalent to:</p> <pre><code>list_i=[] for i in range(2, int(n**0.5)): for r in range(i*2, n, i): list_i.append(r) res=[] for q in range(2, n): if q not in list_i: res.append(q) print res </code></pre>
0
2016-08-16T10:39:16Z
[ "python", "list", "python-3.x", "for-loop", "nested-loops" ]
Draw a rectangle with SimpleDocTemplate (ReportLab)
38,971,898
<p>I have used <code>SimpleDocTemplate</code> for making a table and now I want to draw a <code>rect</code> on this same page, but I don't know how to do it.</p> <p>I have tried this:</p> <pre><code>draw = Drawing(100, 1) draw.add(Rect(0, 100, 500, 100)) </code></pre> <p>But it doesn't work...</p>
0
2016-08-16T10:00:12Z
39,016,545
<p>The reason your code is not working is most likely that you are creating a <code>Drawing</code> just 1 pixel high and 100 pixels wide, which could never fit a <code>Rect</code> of 500 by 100 pixels.</p> <p>So your code should be something like this:</p> <pre><code>draw = Drawing(500, 200) draw.add(Rect(0, 100, 500, 100)) </code></pre>
0
2016-08-18T11:02:28Z
[ "python", "python-3.x", "reportlab" ]
How to adjust a color to a preferred visual range?
38,971,926
<p>What I'm trying to do is apply a dynamic background color to black text. The color is calculated as a result of a hash of the text.</p> <p>The problem is, all too often the color comes out too dark to be able to read the text.</p> <p>How can I lighten the color to keep it <strong>in a decent visual range</strong> (not too dark, not too light)?</p> <p>The color can't be brighter than beige or darker than teal.<br/> (Keep in mind that Blue at 255 is darker than Green at 255, because the human eye is most sensitive to Green and the least sensitive to Blue)</p>
3
2016-08-16T10:01:15Z
38,983,335
<p><code>QColor</code> supports the HSL representation. You want to limit the range of lightness (note that <code>qBound</code> takes its arguments as min, value, max):</p> <pre><code>QColor limitLightness(const QColor &amp; color) { auto hsl = color.toHsl(); auto h = hsl.hslHueF(); auto s = hsl.hslSaturationF(); auto l = hsl.lightnessF(); qreal const lMin = 0.25; qreal const lMax = 0.75; return QColor::fromHslF(h, s, qBound(lMin, l, lMax)); } </code></pre>
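Since the question is tagged python, here is a pure-Python sketch of the same lightness-clamping idea using the standard library's `colorsys` module, with RGB channels in the 0.0 to 1.0 range. Note that `colorsys` uses HLS ordering, not HSL:

```python
import colorsys

def limit_lightness(r, g, b, l_min=0.25, l_max=0.75):
    """Clamp the lightness of an RGB color (channels in 0.0-1.0)."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)  # note the H, L, S return order
    l = max(l_min, min(l, l_max))
    return colorsys.hls_to_rgb(h, l, s)

print(limit_lightness(0.0, 0.0, 0.1))  # (0.0, 0.0, 0.5): too-dark blue, lifted
print(limit_lightness(1.0, 1.0, 1.0))  # (0.75, 0.75, 0.75): white, dimmed
```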
2
2016-08-16T19:52:01Z
[ "python" ]
How to adjust a color to a preferred visual range?
38,971,926
<p>What I'm trying to do is apply a dynamic background color to black text. The color is calculated as a result of a hash of the text.</p> <p>The problem is, all too often the color comes out too dark to be able to read the text.</p> <p>How can I lighten the color to keep it <strong>in a decent visual range</strong> (not too dark, not too light)?</p> <p>The color can't be brighter than beige or darker than teal.<br/> (Keep in mind that Blue at 255 is darker than Green at 255, because the human eye is most sensitive to Green and the least sensitive to Blue)</p>
3
2016-08-16T10:01:15Z
38,985,310
<p>You can pick the color in the HSL space:</p> <pre><code>import random def color(Hmin=0.0, Hmax=360.0, Smin=0.0, Smax=1.0, Lmin=0.0, Lmax=1.0): H = (Hmin + random.random()*(Hmax - Hmin)) % 360.0 S = Smin + random.random()*(Smax - Smin) L = Lmin + random.random()*(Lmax - Lmin) # Compute full-brightness, full-saturation color wheel point if 0.0 &lt;= H &lt; 60.0: R, G, B = (1.0, H/60.0, 0.0) # R -&gt; Y elif 60.0 &lt;= H &lt; 120.0: R, G, B = (1-(H-60.0)/60.0, 1.0, 0.0) # Y -&gt; G elif 120.0 &lt;= H &lt; 180.0: R, G, B = (0.0, 1.0, (H-120.0)/60.0) # G -&gt; C elif 180.0 &lt;= H &lt; 240.0: R, G, B = (0.0, 1.0-(H-180.0)/60.0, 1.0) # C -&gt; B elif 240.0 &lt;= H &lt; 300.0: R, G, B = ((H-240.0)/60.0, 0.0, 1.0) # B -&gt; M else: R, G, B = (1.0, 0.0, 1.0-(H-300.0)/60.0) # M -&gt; R # Compute amount of gray k = (1.0 - S) * L # Return final RGB return (k + R*(L-k), k + G*(L-k), k + B*(L-k)) </code></pre>
2
2016-08-16T22:13:23Z
[ "python" ]
How can I share a global dictionary with tuple key between various Python muliprocessing cores?
38,972,011
<p>I have the following code: (simplified)</p> <pre><code>def main_func(): anotherDic = {} dic = {(1,2):44, (4,6):33, (1,1):4, (2,3):4} ks = dic.keys() for i in ks: func_A(anotherDic, i[0], i[1], dic[i], 5) </code></pre> <p>The main dictionary (dic) is quite big, and the for loop runs for 500 million iterations. I want to use multiprocessing to parallelize the loop on a multi-core machine. I have read several SO questions and the multiprocessing lib documentation, and this very helpful <a href="https://www.youtube.com/watch?v=s1SkCYMnfbY" rel="nofollow">video</a>, and still cannot figure it out. I want the program to fork into several processes when it reaches this loop, run them in parallel, and then, after all processes have finished, continue the program on a single process from the line after the loop section. func_A receives the dictionary value and key from dic, performs some simple calculations, and updates the anotherDic data. This is an independent process, as long as all keys with the same i[0] are handled by the same process. So, I cannot use the pool map function, which automatically divides the data between cores. I am going to sort the keys by the first element of the key tuple, and then divide them manually between the processes.</p> <p>How can I pass/share the very big dictionary (dic) between the processes? Different processes will read and write to different keys (i.e. the keys each process deals with are different from those of the rest). If I cannot find an answer to this, I will just use a smaller temporary dic for each process and join the dics at the end.</p> <p>Then the question is: how can I force the process to fork and go multiprocess just for the loop section, and after the loop have all the processes join before continuing with the rest of the code on a single thread?</p>
2
2016-08-16T10:04:37Z
38,972,273
<p>A general answer involves using a <code>Manager</code> object. Adapted from the docs:</p> <pre><code>from multiprocessing import Process, Manager def f(d): d[1] += '1' d['2'] += 2 if __name__ == '__main__': manager = Manager() d = manager.dict() d[1] = '1' d['2'] = 2 p1 = Process(target=f, args=(d,)) p2 = Process(target=f, args=(d,)) p1.start() p2.start() p1.join() p2.join() print d </code></pre> <p>Output:</p> <pre><code>$ python mul.py {1: '111', '2': 6} </code></pre> <p>Original answer: <a href="http://stackoverflow.com/questions/6832554/python-multiprocessing-how-do-i-share-a-dict-among-multiple-processes">Python multiprocessing: How do I share a dict among multiple processes?</a></p>
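Since the question mentions sorting the keys by the first element of the tuple and dividing them manually, here is a sketch of that partitioning step in pure Python, before the chunks are handed to `Process` objects or a pool:

```python
from itertools import groupby
from operator import itemgetter

dic = {(1, 2): 44, (4, 6): 33, (1, 1): 4, (2, 3): 4}

# Sort the keys (tuples sort lexicographically), then group on the first
# element: each group can go to one worker, so no two workers ever touch
# keys with the same first element.
ks = sorted(dic.keys())
chunks = {first: list(group) for first, group in groupby(ks, key=itemgetter(0))}
print(chunks)  # {1: [(1, 1), (1, 2)], 2: [(2, 3)], 4: [(4, 6)]}
```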
2
2016-08-16T10:17:26Z
[ "python", "multithreading", "dictionary", "multiprocessing", "threadpool" ]
How to start jupyter in an environment created by conda?
38,972,209
<p>I used <code>conda</code> to create an environment called <code>testEnv</code> and activated it; after that I used the command <code>jupyter notebook</code> to start the Jupyter editor. It works, but the problem is that I can only create files in the root environment. How can I create files in the <code>testEnv</code> environment?</p> <p>Here are the steps I have done:</p> <pre><code>$ conda create -n testEnv python=3.5 # create environment $ source activate testEnv # activate the environment (testEnv)$ jupyter notebook # start the jupyter notebook </code></pre> <p>Here is the result, which shows I can only create files in "root" but not in "testEnv" (there is only <code>Root</code>, but no <code>testEnv</code>):</p> <p><a href="http://i.stack.imgur.com/2FOZz.png" rel="nofollow"><img src="http://i.stack.imgur.com/2FOZz.png" alt="enter image description here"></a></p> <p>In the <code>Conda</code> tab, I can see <code>testEnv</code>, but how can I switch to it?</p> <p><a href="http://i.stack.imgur.com/rrSwl.png" rel="nofollow"><img src="http://i.stack.imgur.com/rrSwl.png" alt="enter image description here"></a></p>
2
2016-08-16T10:14:18Z
38,972,519
<p>The answer is that you probably shouldn't do this. Python virtualenvs and Conda environments are intended to determine the resources available to the Python system, which are completely independent of your working directory.</p> <p>You can use the same environment to work on multiple projects, as long as they have the same dependencies. The minute you start tweaking the environment you begin messing with something that is normally automatically maintained.</p> <p>So perhaps the real question you should ask yourself is "why do I think it's a good idea to store my notebooks inside the environment used to execute them."</p>
1
2016-08-16T10:29:12Z
[ "python", "anaconda", "jupyter", "conda" ]
How to start jupyter in an environment created by conda?
38,972,209
<p>I used <code>conda</code> to create an environment called <code>testEnv</code> and activated it; after that I used the command <code>jupyter notebook</code> to start the Jupyter editor. It works, but the problem is that I can only create files in the root environment. How can I create files in the <code>testEnv</code> environment?</p> <p>Here are the steps I have done:</p> <pre><code>$ conda create -n testEnv python=3.5 # create environment $ source activate testEnv # activate the environment (testEnv)$ jupyter notebook # start the jupyter notebook </code></pre> <p>Here is the result, which shows I can only create files in "root" but not in "testEnv" (there is only <code>Root</code>, but no <code>testEnv</code>):</p> <p><a href="http://i.stack.imgur.com/2FOZz.png" rel="nofollow"><img src="http://i.stack.imgur.com/2FOZz.png" alt="enter image description here"></a></p> <p>In the <code>Conda</code> tab, I can see <code>testEnv</code>, but how can I switch to it?</p> <p><a href="http://i.stack.imgur.com/rrSwl.png" rel="nofollow"><img src="http://i.stack.imgur.com/rrSwl.png" alt="enter image description here"></a></p>
2
2016-08-16T10:14:18Z
38,982,381
<p>You have two options. You can install the Jupyter Notebook into each environment, and run the Notebook from that environment:</p> <pre><code>conda create -n testEnv python=3.5 notebook source activate testEnv jupyter notebook </code></pre> <p>or you need to install the IPython kernel from <code>testEnv</code> into the environment from which you want to run Jupyter Notebook. Instructions are here: <a href="http://ipython.readthedocs.io/en/stable/install/kernel_install.html#kernels-for-different-environments" rel="nofollow">http://ipython.readthedocs.io/en/stable/install/kernel_install.html#kernels-for-different-environments</a> To summarize:</p> <pre><code>conda create -n testEnv python=3.5 source activate testEnv python -m ipykernel install --user --name testEnv --display-name "Python (testEnv)" source deactivate jupyter notebook </code></pre>
2
2016-08-16T18:48:33Z
[ "python", "anaconda", "jupyter", "conda" ]
3D Harmonics library on windows - pyshtools failed building wheel + fatal error with fftw3.lib
38,972,245
<p>We need a tool to work with 3D-Harmonics and we've come across <a href="https://github.com/SHTOOLS/SHTOOLS" rel="nofollow">https://github.com/SHTOOLS/SHTOOLS</a> - which fits all of our needs, but could not be installed properly on our windows computers (as it's intended for linux\osx).</p> <p>When we tried to run <code>pip install .</code> in the directory SHTOOLS-3.3 (we use anaconda for managing packages and it includes pip), we at first got an error saying that we need a Fortran compiler (gfortran) - which we fixed by installing gcc with <code>conda install -c r gcc</code>. Afterwards, we got an error saying we need to install visual C++ compiler - which we downloaded as suggested from <a href="https://www.microsoft.com/en-gb/download/details.aspx?id=44266" rel="nofollow">https://www.microsoft.com/en-gb/download/details.aspx?id=44266</a>.</p> <p>Alas, running the command again, this time from the visual C++ 2008 command prompt, we still get a fatal error and are still stuck with installing the library.</p> <p>Some of the errors we get: </p> <pre><code>could not find library 'fftw3' in directories ['build\\temp.win-amd64-2.7'] could not find library 'm' in directories ['build\\temp.win-amd64-2.7'] could not find library 'lapack' in directories ['build\\temp.win-amd64-2.7'] could not find library 'blas' in directories ['build\\temp.win-amd64-2.7'] </code></pre> <p>Followed by</p> <pre><code>LINK: fatal error LNK1181: cannot open input file 'fftw3.lib' </code></pre> <p>and </p> <pre><code>Failed building wheel for pyshtools </code></pre> <p>The full output of the installation attempt can be found <a href="http://pasteboard.co/9hXwqA8Ut.png" rel="nofollow">here</a> and <a href="http://pasteboard.co/9hYPi8BYM.png" rel="nofollow">here</a>.</p> <p>We've tried to download the lib files of the FFTW3, LAPACK and BLAS libraries but couldn't build them properly.</p> <p>We would appreciate any help (suggesting a similar library that is compatible with windows \ helping 
with the install of SHTOOLS).</p>
0
2016-08-16T10:16:04Z
38,972,647
<p>It's a shame when you find something that looks ready to go but is very time-consuming to make work on Windows. My advice would be to avoid the hassle of installing that not-ready-for-Windows library and just look for an alternative; there are a few that deal with spherical harmonics. What about this one? <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyspharm" rel="nofollow">pyspharm</a></p> <p>Also, posting an issue in the library's <a href="https://github.com/SHTOOLS/SHTOOLS/issues" rel="nofollow">github issues</a> could speed things up.</p>
1
2016-08-16T10:35:08Z
[ "python", "windows", "python-2.7", "fortran", "gfortran" ]
python scripts showing different result( with one error ) in two similar input files
38,972,419
<p>The script, originally taken and modified from (<a href="http://globplot.embl.de/" rel="nofollow">http://globplot.embl.de/</a>):</p> <pre><code>#!/usr/bin/env python # Copyright (C) 2003 Rune Linding - EMBL # GlobPlot TM # GlobPlot is licensed under the Academic Free license from string import * from sys import argv from Bio import File from Bio import SeqIO import fpformat import sys import tempfile import os from os import system,popen3 import math # Russell/Linding RL = {'N':0.229885057471264,'P':0.552316012226663,'Q':-0.187676577424997,'A':-0.261538461538462,'R':-0.176592654077609, \ 'S':0.142883029808825,'C':-0.0151515151515152,'T':0.00887797506611258,'D':0.227629796839729,'E':-0.204684629516228, \ 'V':-0.386174834235195,'F':-0.225572305974316,'W':-0.243375458622095,'G':0.433225711769886,'H':-0.00121743364986608, \ 'Y':-0.20750516775322,'I':-0.422234699606962,'K':-0.100092289621613,'L':-0.337933495925287,'M':-0.225903614457831} def Sum(seq,par_dict): sum = 0 results = [] raws = [] sums = [] p = 1 for residue in seq: try: parameter = par_dict[residue] except: parameter = 0 if p == 1: sum = parameter else: sum = sum + parameter#*math.log10(p) ssum = float(fpformat.fix(sum,10)) sums.append(ssum) p +=1 return sums def getSlices(dydx_data, DOM_join_frame, DOM_peak_frame, DIS_join_frame, DIS_peak_frame): DOMslices = [] DISslices = [] in_DOMslice = 0 in_DISslice = 0 beginDOMslice = 0 endDOMslice = 0 beginDISslice = 0 endDISslice = 0 for i in range( len(dydx_data) ): #close dom slice if in_DOMslice and dydx_data[i] &gt; 0: DOMslices.append([beginDOMslice, endDOMslice]) in_DOMslice = 0 #close dis slice elif in_DISslice and dydx_data[i] &lt; 0: DISslices.append([beginDISslice, endDISslice]) in_DISslice = 0 # elseif inSlice expandslice elif in_DOMslice: endDOMslice += 1 elif in_DISslice: endDISslice += 1 # if not in slice and dydx !== 0 start slice if dydx_data[i] &gt; 0 and not in_DISslice: beginDISslice = i endDISslice = i in_DISslice = 1 elif dydx_data[i] &lt; 0 
and not in_DOMslice: beginDOMslice = i endDOMslice = i in_DOMslice = 1 #last slice if in_DOMslice: DOMslices.append([beginDOMslice, endDOMslice]) if in_DISslice: DISslices.append([beginDISslice,endDISslice]) k = 0 l = 0 while k &lt; len(DOMslices): if k+1 &lt; len(DOMslices) and DOMslices[k+1][0]-DOMslices[k][1] &lt; DOM_join_frame: DOMslices[k] = [ DOMslices[k][0], DOMslices[k+1][1] ] del DOMslices[k+1] elif DOMslices[k][1]-DOMslices[k][0]+1 &lt; DOM_peak_frame: del DOMslices[k] else: k += 1 while l &lt; len(DISslices): if l+1 &lt; len(DISslices) and DISslices[l+1][0]-DISslices[l][1] &lt; DIS_join_frame: DISslices[l] = [ DISslices[l][0], DISslices[l+1][1] ] del DISslices[l+1] elif DISslices[l][1]-DISslices[l][0]+1 &lt; DIS_peak_frame: del DISslices[l] else: l += 1 return DOMslices, DISslices def SavitzkyGolay(window,derivative,datalist): SG_bin = 'sav_gol' stdin, stdout, stderr = popen3(SG_bin + '-D' + str(derivative) + ' -n' + str(window)+','+str(window)) for data in datalist: stdin.write(`data`+'\n') try: stdin.close() except: print stderr.readlines() results = stdout.readlines() stdout.close() SG_results = [] for result in results: SG_results.append(float(fpformat.fix(result,6))) return SG_results def reportSlicesTXT(slices, sequence, maskFlag): if maskFlag == 'DOM': coordstr = '|GlobDoms:' elif maskFlag == 'DIS': coordstr = '|Disorder:' else: raise SystemExit if slices == []: #by default the sequence is in uppercase which is our search space s = sequence else: # insert seq before first slide if slices[0][0] &gt; 0: s = sequence[0:slices[0][0]] else: s = '' for i in range(len(slices)): #skip first slice if i &gt; 0: coordstr = coordstr + ', ' coordstr = coordstr + str(slices[i][0]+1) + '-' + str(slices[i][1]+1) #insert the actual slice if maskFlag == 'DOM': s = s + lower(sequence[slices[i][0]:(slices[i][1]+1)]) if i &lt; len(slices)-1: s = s + upper(sequence[(slices[i][1]+1):(slices[i+1][0])]) #last slice elif slices[i][1] &lt; len(sequence)-1: s = s + 
lower(sequence[(slices[i][1]+1):(len(sequence))]) elif maskFlag == 'DIS': s = s + upper(sequence[slices[i][0]:(slices[i][1]+1)]) #insert untouched seq between disorder segments, 2-run labelling if i &lt; len(slices)-1: s = s + sequence[(slices[i][1]+1):(slices[i+1][0])] #last slice elif slices[i][1] &lt; len(sequence)-1: s = s + sequence[(slices[i][1]+1):(len(sequence))] return s,coordstr def runGlobPlot(): try: smoothFrame = int(sys.argv[1]) DOM_joinFrame = int(sys.argv[2]) DOM_peakFrame = int(sys.argv[3]) DIS_joinFrame = int(sys.argv[4]) DIS_peakFrame = int(sys.argv[5]) file = str(sys.argv[6]) db = open(file,'r') except: print 'Usage:' print ' ./GlobPipe.py SmoothFrame DOMjoinFrame DOMpeakFrame DISjoinFrame DISpeakFrame FASTAfile' print ' Optimised for ELM: ./GlobPlot.py 10 8 75 8 8 sequence_file' print ' Webserver settings: ./GlobPlot.py 10 15 74 4 5 sequence_file' raise SystemExit for cur_record in SeqIO.parse(db, "fasta"): #uppercase is searchspace seq = upper(str(cur_record.seq)) # sum function sum_vector = Sum(seq,RL) # Run Savitzky-Golay smooth = SavitzkyGolay('smoothFrame',0, sum_vector) dydx_vector = SavitzkyGolay('smoothFrame',1, sum_vector) #test sumHEAD = sum_vector[:smoothFrame] sumTAIL = sum_vector[len(sum_vector)-smoothFrame:] newHEAD = [] newTAIL = [] for i in range(len(sumHEAD)): try: dHEAD = (sumHEAD[i+1]-sumHEAD[i])/2 except: dHEAD = (sumHEAD[i]-sumHEAD[i-1])/2 try: dTAIL = (sumTAIL[i+1]-sumTAIL[i])/2 except: dTAIL = (sumTAIL[i]-sumTAIL[i-1])/2 newHEAD.append(dHEAD) newTAIL.append(dTAIL) dydx_vector[:smoothFrame] = newHEAD dydx_vector[len(dydx_vector)-smoothFrame:] = newTAIL globdoms, globdis = getSlices(dydx_vector, DOM_joinFrame, DOM_peakFrame, DIS_joinFrame, DIS_peakFrame) s_domMask, coordstrDOM = reportSlicesTXT(globdoms, seq, 'DOM') s_final, coordstrDIS = reportSlicesTXT(globdis, s_domMask, 'DIS') sys.stdout.write('&gt;'+cur_record.id+coordstrDOM+coordstrDIS+'\n') print s_final print '\n' return runGlobPlot() </code></pre> <p>My input and 
output files are here: <a href="https://sites.google.com/site/iicbbioinformatics/share" rel="nofollow">link</a></p> <p>This script takes a input (input1.fa) and gives following output output1.txt </p> <p>But when I try to run this script with similar type but larger input file (input2.fa) .. It shows following error:</p> <pre><code>Traceback (most recent call last): File "final_script_globpipe.py", line 207, in &lt;module&gt; runGlobPlot() File "final_script_globpipe.py", line 179, in runGlobPlot smooth = SavitzkyGolay('smoothFrame',0, sum_vector) File "final_script_globpipe.py", line 105, in SavitzkyGolay stdin.write(`data`+'\n') IOError: [Errno 22] Invalid argument </code></pre> <p>I have no idea where the problem is. Any type of suggestion is appriciated.</p> <p>I am using python 2.7 in windows 7 machine. I have also attached the Savitzky Golay module which is needed to run the script.</p> <p>Thanks</p>
0
2016-08-16T10:24:27Z
38,972,588
<p>UPDATE: After trying to reproduce the error on Linux, it shows similar behavior: it works fine with the first file, but the second returns Errno 32. Traceback:</p> <pre><code>Traceback (most recent call last): File "Glob.py", line 207, in &lt;module&gt; runGlobPlot() File "Glob.py", line 179, in runGlobPlot smooth = SavitzkyGolay('smoothFrame',0, sum_vector) File "Glob.py", line 105, in SavitzkyGolay stdin.write(`data`+'\n') IOError: [Errno 32] Broken pipe </code></pre> <p>Update:</p> <p>Some calls to the <strong>SG_bin</strong> binary report that the <strong>-n parameter</strong> is the wrong type.</p> <pre><code>Wrong type of parameter for flag -n. Has to be unsigned,unsigned </code></pre> <p>This parameter comes from the <strong>window</strong> variable that is passed to the <strong>SavitzkyGolay</strong> function.</p> <p>Surrounding the stdin.write with a try/except block reveals that it breaks a handful of times.</p> <pre><code>try: for data in datalist: stdin.write(repr(data)+'\n') except: print "It broke" </code></pre>
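One likely cause of that wrong-type error: in the question's script, `runGlobPlot` calls `SavitzkyGolay('smoothFrame', 0, sum_vector)` with the quoted name rather than the `smoothFrame` variable, so the literal text ends up on the command line (there is also no space before `-D`). Rebuilding the string the same way shows it:

```python
SG_bin = 'sav_gol'

def build_command(window, derivative):
    # Same concatenation as in the SavitzkyGolay() function above.
    return SG_bin + '-D' + str(derivative) + ' -n' + str(window) + ',' + str(window)

print(build_command('smoothFrame', 0))  # sav_gol-D0 -nsmoothFrame,smoothFrame
print(build_command(10, 0))             # sav_gol-D0 -n10,10
```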
0
2016-08-16T10:32:43Z
[ "python" ]
python ) google spread sheet : update api does not work with 403
38,972,452
<p>I'm following the tutorial from this official link: <a href="https://developers.google.com/sheets/quickstart/python" rel="nofollow">https://developers.google.com/sheets/quickstart/python</a></p> <p>I executed 'quickstart.py' to authenticate. After that, I ran 'quickstart.py' again and saw the data from '<a href="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit#gid=0" rel="nofollow">https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit#gid=0</a>' just as the tutorial does.</p> <p>I changed the spreadsheet ID to my own ID and made it get the data from my spreadsheet with the method <code>service.spreadsheets().values().get().execute()</code>.</p> <p>But my goal is to add data to my spreadsheet, so I used the method 'update' as below:</p> <pre><code>rangeName = 'A2:D' body['range'] = rangeName body['majorDimension'] = 'ROWS' body['values'] = ['test','test','test','test'] result = service.spreadsheets().values().update( spreadsheetId=spreadsheetId, range=rangeName, body=body).execute() print('result:'+str(result)) </code></pre> <p>Then I got an error:</p> <blockquote> <p>googleapiclient.errors.HttpError: https://sheets.googleapis.com/v4/spreadsheets/MY_SPREADSHEET_ID/values/A2%3AD?alt=json returned "Request had insufficient authentication scopes."></p> </blockquote> <p>I don't know why this error occurs when trying to update my sheet but not when trying to get data from my sheet. (If it were caused by authentication, the 'get' method should cause it too!)</p> <p>Thank you.</p>
0
2016-08-16T10:26:00Z
38,986,144
<p>The quickstart.py example sets the scope to:</p> <pre><code>https://www.googleapis.com/auth/spreadsheets.readonly </code></pre> <p>To update the spreadsheet you need to set the scope to:</p> <pre><code>https://www.googleapis.com/auth/spreadsheets </code></pre> <p>You can do this by first deleting the existing authentication file in ~/.credentials (that is the location on a Raspberry Pi). It will likely be called "sheets.googleapis.com-python-quickstart.json".</p> <p>After you remove it you will need to re-authenticate, which should happen automatically when you re-run the script.</p>
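Separately from the scope, note that the question's payload sets `body['values']` to a flat list; the v4 `values.update` endpoint expects `values` to be a list of rows (a list of lists) and also takes a `valueInputOption`. A sketch of the body follows; the actual service call is commented out so this runs without the Google client, and `service` plus the IDs are assumed to be set up as in quickstart.py:

```python
row = ['test', 'test', 'test', 'test']

# values must be a list of rows; a single row is still wrapped in a list.
body = {'values': [row]}

# The call would then look roughly like this:
#
# service.spreadsheets().values().update(
#     spreadsheetId=spreadsheet_id, range='A2:D',
#     valueInputOption='RAW', body=body).execute()

print(body)  # {'values': [['test', 'test', 'test', 'test']]}
```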
0
2016-08-16T23:50:27Z
[ "python", "excel", "google-spreadsheet", "google-spreadsheet-api" ]
How to insert data in flex table in vertica using python 3
38,972,493
<p>I am planning to save my unstructured data in flex tables in Vertica. I am receiving lists of data (the type of data in the list may vary on every call) from a client, and I want to save this in a Vertica flex table using Python 3. How can this be done?</p> <p>I found stuff on Google, but there the data is loaded into the flex table directly from a CSV or JSON file, not programmatically. I want to save it programmatically using Python.</p> <p>Thanks in advance for help -:)</p>
0
2016-08-16T10:28:08Z
38,989,666
<p>Vertica-Python supports INSERT INTO.</p> <p>Unless you need frequent and very small inserts, writing your data to a file and using COPY would most likely give better performance. If you do it through Python, does that still not meet your idea of 'programmatically'?</p> <ul> <li><a href="https://github.com/uber/vertica-python" rel="nofollow">https://github.com/uber/vertica-python</a></li> <li><a href="https://pypi.python.org/pypi/vertica-python/" rel="nofollow">https://pypi.python.org/pypi/vertica-python/</a></li> </ul>
1
2016-08-17T06:37:58Z
[ "python", "vertica" ]
How to insert data in flex table in vertica using python 3
38,972,493
<p>I am planning to save my unstructured data in flex tables in Vertica. I am receiving lists of data (the type of data in the list may vary on every call) from a client, and I want to save this in a Vertica flex table using Python 3. How can this be done?</p> <p>I found stuff on Google, but there the data is loaded into the flex table directly from a CSV or JSON file, not programmatically. I want to save it programmatically using Python.</p> <p>Thanks in advance for help -:)</p>
0
2016-08-16T10:28:08Z
38,995,575
<p>I found a way to copy/insert data from list to flex table (in vertica) using python:</p> <h1>For list</h1> <pre><code># for python list tempList = list() tempList.append('{ "_id" : "01011", "city" : "CHESTER-APL21", "loc" : [ -72.988761, 42.279421 ], "pop" : 1688, "state" : "MA" }') tempList.append('{ "_id" : "01011", "city" : "CHESTER-APL21", "loc" : [ -72.988761, 42.279421 ], "pop" : 1688, "state" : "MA" }') cur.copy( "COPY STG.unstruc_data FROM STDIN parser fjsonparser() ", ''.join(tempList)) connection.commit() </code></pre> <h1>For JSON</h1> <pre><code># for json file with open("D:/SampleCSVFile_2kb/tweets.json", "rb") as fs: my_file = fs.read().decode('utf-8') cur.copy( "COPY STG.unstruc_data FROM STDIN parser fjsonparser()", my_file) connection.commit() </code></pre> <h1>For CSV</h1> <pre><code># for csv file with open("D:/SampleCSVFile_2kb/SampleCSVFile_2kb.csv", "rb") as fs: my_file = fs.read().decode('utf-8','ignore') cur.copy( "COPY STG.unstruc_data FROM STDIN PARSER FDELIMITEDPARSER (delimiter=',', header='false') ", my_file) # buffer_size=65536 connection.commit() </code></pre>
2
2016-08-17T11:37:52Z
[ "python", "vertica" ]
Best way to upload large csv files using python flask
38,972,562
<p><strong>Requirement</strong>: To upload files using the Flask framework. Once a file is uploaded to the server, the user should be able to see it in the UI.</p> <p><strong>Current code</strong>: In order to meet the above requirement I wrote the code to upload sufficiently large files, and it's working fine with a ~30 MB file (yes, of course not that fast). But when I am trying to upload a ~100 MB file, it takes too long and the process never completes.</p> <p>This is what I am currently doing:</p> <p>UPLOAD_FOLDER = '/tmp'</p> <pre><code> file = request.files['filename'] description = request.form['desc'] filename = secure_filename(file.filename) try: file.save(os.path.join(UPLOAD_FOLDER, filename)) filepath = os.path.join(UPLOAD_FOLDER, filename) except Exception as e: return e data = None try: with open(filepath) as file: data = file.read() except Exception as e: log.exception(e) </code></pre> <p>So what I am doing is first saving the file to a temporary location on the server and then reading the data from there and putting it into our database. I think this is where I am struggling; I am not sure what the best approach is.</p> <p>Should I take the input from the user and return a success message (<em>obviously the user won't be able to access the file immediately then</em>) and make putting the data into the database a background process, using some kind of queue system? Or what else should be done to optimize the code?</p>
0
2016-08-16T10:31:23Z
39,009,730
<p>On the flask side make sure you have the MAX_CONTENT_LENGTH config value set high enough:</p> <pre><code>app.config['MAX_CONTENT_LENGTH'] = 100 * 1024 * 1024 # 100MB limit </code></pre> <p>Also you may want to look into the <a href="http://pythonhosted.org/Flask-Uploads/" rel="nofollow" title="Flask-Upload extension">Flask-Upload extension</a>.</p> <p>There is another SO post similar to this one: <a href="http://stackoverflow.com/questions/38048355/large-file-upload-in-flask">Large file upload in Flask</a>.</p> <p>Other than that, your problem may be a timeout somewhere along the line. What does the rest of your stack look like? Apache? Nginx and Gunicorn? Are you getting a <code>Connection reset</code> error, <code>Connection timed out</code> error or does it just hang?</p> <p>If you are using Nginx try setting <code>proxy_read_timeout</code> to a value high enough for the upload to finish. Apache may also have a default setting causing you trouble if that is what you are using. It's hard to tell without knowing more about your stack and what the error is that you are getting and what the logs are showing. </p>
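Beyond config and timeouts, one memory-side improvement suggested by the question's `file.read()` is to stream the upload to disk in fixed-size chunks rather than reading the whole file at once. This is a hedged sketch, not part of any official Flask API: the helper name and chunk size are illustrative, and it works with any file-like object (including werkzeug's `FileStorage`, which is what `request.files['filename']` returns).

```python
import io
import os
import tempfile

def save_in_chunks(file_obj, dest_path, chunk_size=1 << 20):
    """Stream a file-like object to disk in fixed-size chunks (default 1 MB)."""
    with open(dest_path, "wb") as out:
        while True:
            chunk = file_obj.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)

# Demo with an in-memory "upload" standing in for request.files['filename'].
payload = b"x" * (3 * 1024 * 1024)  # 3 MB of dummy data
dest = os.path.join(tempfile.mkdtemp(), "upload.bin")
save_in_chunks(io.BytesIO(payload), dest, chunk_size=64 * 1024)
print(os.path.getsize(dest))  # 3145728
```

The same chunked loop can later feed rows into the database, so neither the save nor the load ever needs the full file in memory.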
0
2016-08-18T03:52:53Z
[ "python", "csv", "flask", "large-files" ]
Pandas: get json from data frame
38,972,609
<p>I have data frame</p> <pre><code>member_id,2015-05-01,2015-05-02,2015-05-03,2015-05-04,2015-05-05,2015-05-06,2015-05-07,2015-05-08,2015-05-09,2015-05-10,2015-05-11,2015-05-12,2015-05-13,2015-05-14,2015-05-15,2015-05-16,2015-05-17,2015-05-18,2015-05-19,2015-05-20,2015-05-21,2015-05-22,2015-05-23,2015-05-24,2015-05-25,2015-05-26,2015-05-27,2015-05-28,2015-05-29,2015-05-30,2015-05-31,2015-06-01,2015-06-02,2015-06-03,2015-06-04,2015-06-05,2015-06-06,2015-06-07,2015-06-08,2015-06-09,2015-06-10,2015-06-11,2015-06-12,2015-06-13,2015-06-14,2015-06-15,2015-06-16,2015-06-17,2015-06-18,2015-06-19,2015-06-20,2015-06-21,2015-06-22,2015-06-23,2015-06-24,2015-06-25,2015-06-26,2015-06-27,2015-06-28,2015-06-29,2015-06-30,2015-07-01,2015-07-02,2015-07-03,2015-07-04,2015-07-05,2015-07-06,2015-07-07,2015-07-08,2015-07-09,2015-07-10,2015-07-11,2015-07-12,2015-07-13,2015-07-14,2015-07-15,2015-07-16,2015-07-17,2015-07-18,2015-07-19,2015-07-20,2015-07-21,2015-07-22,2015-07-23,2015-07-24,2015-07-25,2015-07-26,2015-07-27,2015-07-28,2015-07-29,2015-07-30,2015-07-31,2015-08-01,2015-08-02,2015-08-03,2015-08-04,2015-08-05,2015-08-06,2015-08-07,2015-08-08,2015-08-09,2015-08-10,2015-08-11,2015-08-12,2015-08-13,2015-08-14,2015-08-15,2015-08-16,2015-08-17,2015-08-18,2015-08-19,2015-08-20,2015-08-21,2015-08-22,2015-08-23,2015-08-24,2015-08-25,2015-08-26,2015-08-27,2015-08-28,2015-08-29,2015-08-30,2015-08-31,2015-09-01,2015-09-02,2015-09-03,2015-09-04,2015-09-05,2015-09-06,2015-09-07,2015-09-08,2015-09-09,2015-09-10,2015-09-11,2015-09-12,2015-09-13,2015-09-14,2015-09-15,2015-09-16,2015-09-17,2015-09-18,2015-09-19,2015-09-20,2015-09-21,2015-09-22,2015-09-23,2015-09-24,2015-09-25,2015-09-26,2015-09-27,2015-09-28,2015-09-29,2015-09-30,2015-10-01,2015-10-02,2015-10-03,2015-10-04,2015-10-05,2015-10-06,2015-10-07,2015-10-08,2015-10-09,2015-10-10,2015-10-11,2015-10-12,2015-10-13,2015-10-14,2015-10-15,2015-10-16,2015-10-17,2015-10-18,2015-10-19,2015-10-20,2015-10-21,2015-10-22,2015-10-23,2015-10-24,2015-10
-25,2015-10-26,2015-10-27,2015-10-28,2015-10-29,2015-10-30,2015-10-31,2015-11-01,2015-11-02,2015-11-03,2015-11-04,2015-11-05,2015-11-06,2015-11-07,2015-11-08,2015-11-09,2015-11-10,2015-11-11,2015-11-12,2015-11-13,2015-11-14,2015-11-15,2015-11-16,2015-11-17,2015-11-18,2015-11-19,2015-11-20,2015-11-21,2015-11-22,2015-11-23,2015-11-24,2015-11-25,2015-11-26,2015-11-27,2015-11-28,2015-11-29,2015-11-30,2015-12-01,2015-12-02,2015-12-03,2015-12-04,2015-12-05,2015-12-06,2015-12-07,2015-12-08,2015-12-09,2015-12-10,2015-12-11,2015-12-12,2015-12-13,2015-12-14,2015-12-15,2015-12-16,2015-12-17,2015-12-18,2015-12-19,2015-12-20,2015-12-21,2015-12-22,2015-12-23,2015-12-24,2015-12-25,2015-12-26,2015-12-27,2015-12-28,2015-12-29,2015-12-30,2015-12-31,2016-01-01,2016-01-02,2016-01-03,2016-01-04,2016-01-05,2016-01-06,2016-01-07,2016-01-08,2016-01-09,2016-01-10,2016-01-11,2016-01-12,2016-01-13,2016-01-14,2016-01-15,2016-01-16,2016-01-17,2016-01-18,2016-01-19,2016-01-20,2016-01-21,2016-01-22,2016-01-23,2016-01-24,2016-01-25,2016-01-26,2016-01-27,2016-01-28,2016-01-29,2016-01-30,2016-01-31,2016-02-01,2016-02-02,2016-02-03,2016-02-04,2016-02-05,2016-02-06,2016-02-07,2016-02-08,2016-02-09,2016-02-10,2016-02-11,2016-02-12,2016-02-13,2016-02-14,2016-02-15,2016-02-16,2016-02-17,2016-02-18,2016-02-19,2016-02-20,2016-02-21,2016-02-22,2016-02-23,2016-02-24,2016-02-25,2016-02-26,2016-02-27,2016-02-28,2016-02-29,2016-03-01,2016-03-02,2016-03-03,2016-03-04,2016-03-05,2016-03-06,2016-03-07,2016-03-08,2016-03-09,2016-03-10,2016-03-11,2016-03-12,2016-03-13,2016-03-14,2016-03-15,2016-03-16,2016-03-17,2016-03-18,2016-03-19,2016-03-20,2016-03-21,2016-03-22,2016-03-23,2016-03-24,2016-03-25,2016-03-26,2016-03-27,2016-03-28,2016-03-29,2016-03-30,2016-03-31,2016-04-01,2016-04-02,2016-04-03,2016-04-04,2016-04-05,2016-04-06,2016-04-07,2016-04-08,2016-04-09,2016-04-10,2016-04-11,2016-04-12,2016-04-13,2016-04-14,2016-04-15,2016-04-16,2016-04-17,2016-04-18,2016-04-19,2016-04-20,2016-04-21,2016-04-22,2016-04-23,2016-
04-24,2016-04-25,2016-04-26,2016-04-27,2016-04-28,2016-04-29,2016-04-30,2016-05-01,2016-05-02,2016-05-03,2016-05-04,2016-05-05,2016-05-06,2016-05-07,2016-05-08,2016-05-09,2016-05-10,2016-05-11,2016-05-12,2016-05-13,2016-05-14,2016-05-15,2016-05-16,2016-05-17,2016-05-18,2016-05-19,2016-05-20,2016-05-21,2016-05-22,2016-05-23,2016-05-24,2016-05-25,2016-05-26,2016-05-27,2016-05-28,2016-05-29,2016-05-30,2016-05-31,2016-06-01,2016-06-02,2016-06-03,2016-06-04,2016-06-05,2016-06-06,2016-06-07,2016-06-08,2016-06-09,2016-06-10,2016-06-11,2016-06-12,2016-06-13,2016-06-14,2016-06-15,2016-06-16,2016-06-17,2016-06-18,2016-06-19,2016-06-20,2016-06-21,2016-06-22,2016-06-23,2016-06-24,2016-06-25,2016-06-26,2016-06-27,2016-06-28,2016-06-29,2016-06-30,2016-07-01,2016-07-02,2016-07-03,2016-07-04,2016-07-05,2016-07-06,2016-07-07,2016-07-08,2016-07-09,2016-07-10,2016-07-11,2016-07-12,2016-07-13,2016-07-14,2016-07-15,2016-07-16,2016-07-17,2016-07-18,2016-07-19,2016-07-20,2016-07-21,2016-07-22,2016-07-23,2016-07-24,2016-07-25,2016-07-26,2016-07-27,2016-07-28,2016-07-29,2016-07-30,2016-07-31,2016-08-01,2016-08-02,2016-08-03,2016-08-04,2016-08-05,2016-08-06,2016-08-07,2016-08-08,2016-08-09,2016-08-10,2016-08-11,2016-08-12,2016-08-13,2016-08-14,2016-08-15 
19205,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 19276,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 
19404,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,3,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 </code></pre> <p>I need if value in column != 0, replace it to 1. (3 replace to 1, 7 replace to 1 etc.)I need to get json like this format. (In df there are all days, but I need do it to months)</p> <pre><code>{ "19205": { "2015-05": 0, "2015-06": 0, "2015-07": 1, "2015-08": 0, "2015-09": 0, "2015-10": 0, "2015-11": 0, "2015-12": 0, "2016-01": 0, "2016-02": 0, "2016-03": 0, "2016-04": 0, "2016-05": 1, "2016-06": 0 }, "19276": { ... } } </code></pre>
-2
2016-08-16T10:33:40Z
38,973,890
<p>You can use:</p> <pre><code>#convert int xolum to string df['member_id'] = df.member_id.astype(str) #reshaping and convert to months period df.set_index('member_id', inplace=True) df = df.unstack().reset_index(name='val').rename(columns={'level_0':'date'}) df['date'] = pd.to_datetime(df.date).dt.to_period('m').dt.strftime('%Y-%m') #groupby by date and member_id and aggregate sum df = df.groupby(['date','member_id'])['val'].sum() #convert all values !=0 to 1 df = (df != 0).astype(int).reset_index() </code></pre> <pre><code>#working in pandas 0.18.1 d = df.groupby('member_id')['date', 'val'].apply(lambda x: pd.Series(x.set_index('date')['val'].to_dict())).to_json(orient='index') print (d) {'19404': {'2016-07': 1, '2015-12': 0, '2015-09': 0, '2015-08': 0, '2015-11': 0, '2015-10': 0, '2015-05': 0, '2016-06': 1, '2015-06': 0, '2016-04': 0, '2016-05': 0, '2015-07': 0, '2016-03': 0, '2016-01': 0, '2016-08': 0, '2016-02': 0}, '19276': {'2016-07': 0, '2015-12': 0, '2015-09': 0, '2015-08': 0, '2015-11': 0, '2015-10': 0, '2015-05': 0, '2016-06': 0, '2015-06': 0, '2016-04': 0, '2016-05': 0, '2015-07': 0, '2016-03': 0, '2016-01': 1, '2016-08': 0, '2016-02': 0}, '19205': {'2016-07': 0, '2015-12': 0, '2015-09': 0, '2015-08': 0, '2015-11': 0, '2015-10': 0, '2015-05': 0, '2016-06': 0, '2015-06': 0, '2016-04': 0, '2016-05': 0, '2015-07': 0, '2016-03': 1, '2016-01': 0, '2016-08': 0, '2016-02': 0}} </code></pre>
1
2016-08-16T11:36:18Z
[ "python", "json", "pandas" ]
Python: Comparing empty string to False is False, why?
38,972,645
<p>If <code>not ''</code> evaluates to <code>True</code>, why does <code>'' == False</code> evaluate to <code>False</code>?</p> <p>For example, the "voids" of the other types (e.g. 0, 0.0) will return <code>True</code> when compared to <code>False</code>:</p> <pre><code>&gt;&gt;&gt; 0 == False True &gt;&gt;&gt; 0.0 == False True </code></pre> <p>Thanks</p>
1
2016-08-16T10:34:59Z
38,972,724
<p>Because <code>int(False) == 0</code> and <code>int(True) == 1</code>. This is what Python is doing when it tries to evaluate <code>0 == False</code>. </p> <p>On the other hand, <code>bool('') == False</code>. The same goes for <code>bool([])</code> and <code>bool({})</code>.</p> <p>The fact that <code>x</code> evaluates to <code>True</code> doesn't necessarily mean that <code>x == True</code>.</p>
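A quick interactive check of the equalities stated above:

```python
print(int(False), int(True))         # 0 1
print(0 == False, 1 == True)         # True True
print(bool(''), bool([]), bool({}))  # False False False
print('' == False)                   # False: a str is falsey but never equals a bool
```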
0
2016-08-16T10:39:51Z
[ "python", "string", "boolean", "logic" ]
Python: Comparing empty string to False is False, why?
38,972,645
<p>If <code>not ''</code> evaluates to <code>True</code>, why does <code>'' == False</code> evaluate to <code>False</code>?</p> <p>For example, the "voids" of the other types (e.g. 0, 0.0) will return <code>True</code> when compared to <code>False</code>:</p> <pre><code>&gt;&gt;&gt; 0 == False True &gt;&gt;&gt; 0.0 == False True </code></pre> <p>Thanks</p>
1
2016-08-16T10:34:59Z
38,972,758
<p>It doesn't make sense for <code>''</code> and <code>[]</code> to actually equal <code>False</code>, because they are clearly different values: a string and a list. If they both equalled <code>False</code> they would have to be equal to each other*. They are just "falsey", which means they come out as false when they are converted to a boolean.</p> <p>(* in any sensibly constructed language)</p> <p><code>not</code> is an operation that returns a boolean. Which boolean it returns depends on whether the operand is falsey or not. So <code>not x</code> is not equivalent to <code>x==False</code>; it is equivalent to <code>bool(x)==False</code>.</p>
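A quick check of that distinction: all of these values are falsey (`not x` is `True`), but only the numeric ones compare equal to `False`.

```python
for x in ('', [], {}, 0, 0.0):
    print(repr(x), not x, x == False)
```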
5
2016-08-16T10:41:01Z
[ "python", "string", "boolean", "logic" ]
Python: Comparing empty string to False is False, why?
38,972,645
<p>If <code>not ''</code> evaluates to <code>True</code>, why does <code>'' == False</code> evaluate to <code>False</code>?</p> <p>For example, the "voids" of the other types (e.g. 0, 0.0) will return <code>True</code> when compared to <code>False</code>:</p> <pre><code>&gt;&gt;&gt; 0 == False True &gt;&gt;&gt; 0.0 == False True </code></pre> <p>Thanks</p>
1
2016-08-16T10:34:59Z
38,972,878
<p>If you want to check the <a href="https://docs.python.org/3/library/stdtypes.html#truth-value-testing" rel="nofollow">official explanation</a>, just cast your values like this:</p> <pre><code>print(bool(None) == False) print(False == False) print(bool(0) == False) print(bool(0.0) == False) print(bool(0j) == False) print(bool('') == False) print(bool(()) == False) print(bool([]) == False) print(bool({}) == False) </code></pre>
0
2016-08-16T10:47:01Z
[ "python", "string", "boolean", "logic" ]
Python: Comparing empty string to False is False, why?
38,972,645
<p>If <code>not ''</code> evaluates to <code>True</code>, why does <code>'' == False</code> evaluate to <code>False</code>?</p> <p>For example, the "voids" of the other types (e.g. 0, 0.0) will return <code>True</code> when compared to <code>False</code>:</p> <pre><code>&gt;&gt;&gt; 0 == False True &gt;&gt;&gt; 0.0 == False True </code></pre> <p>Thanks</p>
1
2016-08-16T10:34:59Z
38,973,019
<p>Such a comparison isn't "Pythonic" (<em>i.e.</em> it isn't what an experienced Python programmer would naturally do).</p> <p>In some respects unfortunately, Python's designer decided that <code>True</code> and <code>False</code> would be instances of <code>bool</code>, and that <code>bool</code> would be a subclass of <code>int</code>. As a result of this <code>True</code> compares equal to <code>1</code> and <code>False</code> compares equal to <code>0</code>. Numerical conversion accounts for the floating-point (and, for that matter, complex) result.</p> <p>But just because <code>bool(x) == True</code> doesn't mean <code>x == True</code>, any more than <code>bool(x) == False</code> implies <code>x == False</code>. There are many values that evaluate false, the best-known being</p> <ul> <li>Numeric zeroes</li> <li><code>None</code></li> <li>The empty string</li> <li>Empty containers (list, tuple, dict)</li> </ul> <p>There's no way they can all be equal to each other!</p>
0
2016-08-16T10:54:07Z
[ "python", "string", "boolean", "logic" ]
Python: Comparing empty string to False is False, why?
38,972,645
<p>If <code>not ''</code> evaluates to <code>True</code>, why does <code>'' == False</code> evaluate to <code>False</code>?</p> <p>For example, the "voids" of the other types (e.g. 0, 0.0) will return <code>True</code> when compared to <code>False</code>:</p> <pre><code>&gt;&gt;&gt; 0 == False True &gt;&gt;&gt; 0.0 == False True </code></pre> <p>Thanks</p>
1
2016-08-16T10:34:59Z
38,973,058
<blockquote> <p>In the context of Boolean operations, and also when expressions are used by control flow statements, the following values are interpreted as false: <code>False</code>, <code>None</code>, numeric zero of all types, and empty strings and containers (including strings, tuples, lists, dictionaries, sets and frozensets). All other values are interpreted as true. User-defined objects can customize their truth value by providing a <code>__bool__()</code> method.</p> <p>The operator <code>not</code> yields <code>True</code> if its argument is false, <code>False</code> otherwise.</p> <p><a href="https://docs.python.org/3/reference/expressions.html#boolean-operations" rel="nofollow">https://docs.python.org/3/reference/expressions.html#boolean-operations</a></p> </blockquote> <p>But:</p> <blockquote> <p>The operators <code>&lt;</code>, <code>&gt;</code>, <code>==</code>, <code>&gt;=</code>, <code>&lt;=</code>, and <code>!=</code> compare the values of two objects. The objects do not need to have the same type.</p> <p>...</p> <p>Because all types are (direct or indirect) subtypes of <code>object</code>, they inherit the default comparison behavior from <code>object</code>. Types can customize their comparison behavior by implementing rich comparison methods like <code>__lt__()</code> ...</p> <p><a href="https://docs.python.org/3/reference/expressions.html#comparisons" rel="nofollow">https://docs.python.org/3/reference/expressions.html#comparisons</a></p> </blockquote> <p>So, the technical implementation answer is that it behaves the way it does because <code>not</code> and <code>==</code> use different comparisons. <code>not</code> uses <code>__bool__</code>, the "truth value" of an object, while <code>==</code> uses <code>__eq__</code>, the direct comparison of one object to another. 
So it's possible to ask an object whether it considers itself to be <em>truthy</em> or <em>falsey</em>, and separately from that ask it whether it considers itself to be equal to another object or not. The default implementations for that are arranged in a way that two objects can both consider themselves <em>falsey</em> yet not consider themselves equal to one another.</p>
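To illustrate the two separate hooks, a small class (hypothetical, written here only for demonstration) can be falsey via `__bool__` while still not comparing equal to `False`, because it inherits `object`'s default `__eq__`:

```python
class AlwaysFalsey:
    def __bool__(self):        # controls truthiness: "not AlwaysFalsey()" is True
        return False
    # __eq__ is inherited from object, so == falls back to identity comparison

a = AlwaysFalsey()
print(not a)       # True  -- uses __bool__
print(a == False)  # False -- default __eq__ does not consider it equal to False
```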
2
2016-08-16T10:56:09Z
[ "python", "string", "boolean", "logic" ]
Identify binary state of data set (frequency on/off)
38,972,690
<p>I have a large data set that has values ranging from [-3,3] and I'm using a hard limit at 0 as the boundary.</p> <p>The data has a binary value of 1 when it's oscillating from -3,3 at a 56kHz frequency. What this means is that the data will be changing from -3 to 3 and back every N data values where N is typically &lt; 20.</p> <p>The data has a binary value of 0 when the data is 3 constantly (this can typically last 400+ samples long)</p> <p>I can't seem to group the data into their binary categories and also know how many samples wide the group is. </p> <p>Example data:</p> <pre><code>1.84 | 2.96 | 2.8 | 3.12 | . | I want this to be grouped as a 0 . | 3.11 |_____ -3.42 | -2.45 | -1.49 | 3.12 | 2.99 | I want this to be grouped as a 1 1.97 | -1.11 | -2.33 | . | . | Keeps going until for N cycles </code></pre> <p>The cycles in-between the logic HIGH state are typically small (&lt;20 samples).</p> <p>The code I have so far:</p> <pre><code>state = "X" for i in range(0, len(data['input'])): currentBinaryState = inputBinaryState(data['input'][i]); # Returns -3 or +3 appropriately if(currentBinaryState != previousBinaryState): # A cycle is very unlikely to last more than 250 samples if y &gt; 250 and currentBinaryState == "LOW": # Been low for a long time if state == "_high": groupedData['input'].append( ("HIGH", x) ) x = 0 state = "_low" else: # Is on carrier wave (logic 1) if state == "_low": # Just finished low groupedData['input'].append( ("LOW", x) ) x = 0 state = "_high" y = 0 </code></pre> <p>Obviously, the result isn't as I should expect as the LOW group is very small.</p> <pre><code>[('HIGH', 600), ('LOW', 8), ('HIGH', 1168), ('LOW', 9), ('HIGH', 1168), ('LOW', 8), ('HIGH', 1168), ('LOW', 8), ('HIGH', 1168), ('LOW', 9), ('HIGH', 1168), ('LOW', 8), ('HIGH', 1168), ('LOW', 8), ('HIGH', 1168), ('LOW', 9)] </code></pre> <p>I understand I could have asked this on the signal processing SE but I deemed this problem to be more programming oriented. 
I hope I explained the problem sufficiently, if there's any questions just ask. Thanks.</p> <hr> <p>Here is a link to the actual sample data:</p> <p><a href="https://drive.google.com/folderview?id=0ByJDNIfaTeEfemVjSU9hNkNpQ3c&amp;usp=sharing" rel="nofollow">https://drive.google.com/folderview?id=0ByJDNIfaTeEfemVjSU9hNkNpQ3c&amp;usp=sharing</a></p> <p>Visually, it is very clear where the boundaries of the data lie. <a href="http://i.stack.imgur.com/6gVH9.png" rel="nofollow"><img src="http://i.stack.imgur.com/6gVH9.png" alt="Plot of sample data"></a></p> <hr> <p><strong>Update 1</strong></p> <p>I've updated my code to be more legible as single letter variables isn't helping with my sanity.</p> <pre><code>previousBinaryState = "X" x = 0 sinceLastChange = 0 previousGroup = inputBinaryState(data['input'][0]) lengthAssert = 0 for i in range(0, len(data['input'])): currentBinaryState = inputBinaryState(data['input'][i]); if(currentBinaryState != previousBinaryState): # Changed from -3 -&gt; +3 or +3 -&gt; -3 #print sinceLastChange if sinceLastChange &gt; 250 and previousGroup == "HIGH" and currentBinaryState == "LOW": # Finished LOW group groupedData['input'].append( ("LOW", x) ) lengthAssert += x x = 0 previousGroup = "LOW" elif sinceLastChange &gt; 20 and previousGroup == "LOW": # Finished HIGH group groupedData['input'].append( ("HIGH", x) ) lengthAssert += x x = 0 previousGroup = "HIGH" sinceLastChange = 0 else: sinceLastChange += 1 previousBinaryState = currentBinaryState x += 1 </code></pre> <p>Which, for the sample data, outputs:</p> <pre><code>8 7 8 7 7 596 &lt;- Clearly a LOW group 7 8 7 8 7 7 8 7 8 7 7 8 7 8 7 7 8 7 8 . . . </code></pre> <p>Problem is the HIGH group is lasting longer than it should be:</p> <pre><code>[('HIGH', 600), ('LOW', 1176), ('HIGH', 1177), ('LOW', 1176), ('HIGH', 1176), ('LOW', 1177), ('HIGH', 1176), ('LOW', 1176)] </code></pre> <ul> <li>There are only 8 groups made but the plot clearly shows a lot more. 
The groups appear to be twice the size of what they should be.</li> </ul>
0
2016-08-16T10:37:30Z
38,977,801
<p>I've finally found a solution. I spent far too long getting my head around, what appears to be, a fairly simple problem but it works now.</p> <p>It won't pick up the last group in the data set but that's fine.</p> <pre><code>previousBinaryState = "X" x = 0 sinceLastChange = 0 previousGroup = inputBinaryState(data['input'][0]) lengthAssert = 0 for i in range(0, len(data['input'])): currentBinaryState = inputBinaryState(data['input'][i]); if(currentBinaryState != previousBinaryState): # Changed from -3 -&gt; +3 or +3 -&gt; -3 #print sinceLastChange if sinceLastChange &gt; 250 and previousGroup == "HIGH" and currentBinaryState == "LOW": # Finished LOW group groupedData['input'].append( ("LOW", x) ) lengthAssert += x x = 0 previousGroup = "LOW" sinceLastChange = 0 else: if sinceLastChange &gt; 20 and previousGroup == "LOW": groupedData['input'].append( ("HIGH", x) ) lengthAssert += x x = 0 previousGroup = "HIGH" sinceLastChange = 0 sinceLastChange += 1 previousBinaryState = currentBinaryState x += 1 </code></pre> <p>20 is the maximum number of cycles in the HIGH state and 250 is the maximum number of samples for which the group is in the LOW state.</p> <pre><code>[('HIGH', 25), ('LOW', 575), ('HIGH', 602), ('LOW', 574), ('HIGH', 602), ('LOW', 575), ('HIGH', 601), ('LOW', 575), ('HIGH', 602), ('LOW', 574), ('HIGH', 602), ('LOW', 575), ('HIGH', 601), ('LOW', 575), ('HIGH', 602), ('LOW', 574)] </code></pre> <p>When comparing that to the graph and the actual data, it appears to be correct.</p>
0
2016-08-16T14:37:33Z
[ "python", "frequency", "modulation" ]
py2exe and cx_Oracle failed loading cx_Oracle.pyd
38,972,714
<p>Given the following python program</p> <p>testconnect.py</p> <pre><code>from sqlalchemy.dialects import oracle from sqlalchemy import create_engine url = 'oracle+cx_oracle://user:password@oracle-rds-01.....amazonaws.com:1521/orcl' e = create_engine(url) e.connect() print('Connected') </code></pre> <p>setup.py</p> <pre><code>setup( options={ 'py2exe': { 'bundle_files': 1, 'compressed': True, 'dll_excludes': ['OCI.dll'], 'includes':['cx_Oracle'] } }, console=["testconnect.py"], zipfile=None ) </code></pre> <p>I get the following traceback</p> <pre><code>Traceback (most recent call last): File "testconnect.py", line 7, in &lt;module&gt; e = create_engine(url) File "c:\Python34\lib\site-packages\sqlalchemy\engine\__init__.py", line 386, in create_engine return strategy.create(*args, **kwargs) File "c:\Python34\lib\site-packages\sqlalchemy\engine\strategies.py", line 75, in create dbapi = dialect_cls.dbapi(**dbapi_args) File "c:\Python34\lib\site-packages\sqlalchemy\dialects\oracle\cx_oracle.py", line 769, in dbapi import cx_Oracle File "c:\Python34\lib\site-packages\zipextimporter.py", line 109, in load_module self.get_data) ImportError: MemoryLoadLibrary failed loading cx_Oracle.pyd: The specified module could not be found. 
(126) </code></pre> <p>I have tried using 'includes' in the setup.py, importing cx_Oracle but to no avail.</p> <p>I've tried bundle_files=3 and using 'data_files=' to copy the cx_Oracle.pyd file into the dist directory and I still get the same issue</p> <p>What changes to my setup.py do I need to make to be able to capture the cx_Oracle.pyd file so that it will load</p> <p>Update:</p> <p>The problem was I was using a cmd console that was open prior to installing cx_Oracle and instant client to build the exe with py2exe</p> <p>I closed the console down, re-opened it, and Windows was able to find the appropriate files</p> <p>This now runs OK on my Windows 10 laptop (64 bit)</p> <p>But when I try to deploy this EXE to my client's machine (64 bit Windows 2008) I get the following still</p>
0
2016-08-16T10:39:06Z
39,019,551
<p>The Oracle instant client (or equivalent) needs to be installed on the target machine. cx_Oracle will not work without it. The DLL it is (likely) trying to find is OCI.DLL.</p>
0
2016-08-18T13:30:41Z
[ "python", "py2exe", "cx-oracle" ]
py2exe and cx_Oracle failed loading cx_Oracle.pyd
38,972,714
<p>Given the following python program</p> <p>testconnect.py</p> <pre><code>from sqlalchemy.dialects import oracle from sqlalchemy import create_engine url = 'oracle+cx_oracle://user:password@oracle-rds-01.....amazonaws.com:1521/orcl' e = create_engine(url) e.connect() print('Connected') </code></pre> <p>setup.py</p> <pre><code>setup( options={ 'py2exe': { 'bundle_files': 1, 'compressed': True, 'dll_excludes': ['OCI.dll'], 'includes':['cx_Oracle'] } }, console=["testconnect.py"], zipfile=None ) </code></pre> <p>I get the following traceback</p> <pre><code>Traceback (most recent call last): File "testconnect.py", line 7, in &lt;module&gt; e = create_engine(url) File "c:\Python34\lib\site-packages\sqlalchemy\engine\__init__.py", line 386, in create_engine return strategy.create(*args, **kwargs) File "c:\Python34\lib\site-packages\sqlalchemy\engine\strategies.py", line 75, in create dbapi = dialect_cls.dbapi(**dbapi_args) File "c:\Python34\lib\site-packages\sqlalchemy\dialects\oracle\cx_oracle.py", line 769, in dbapi import cx_Oracle File "c:\Python34\lib\site-packages\zipextimporter.py", line 109, in load_module self.get_data) ImportError: MemoryLoadLibrary failed loading cx_Oracle.pyd: The specified module could not be found. 
(126) </code></pre> <p>I have tried using 'includes' in the setup.py, importing cx_Oracle but to no avail.</p> <p>I've tried bundle_files=3 and using 'data_files=' to copy the cx_Oracle.pyd file into the dist directory and I still get the same issue</p> <p>What changes to my setup.py do I need to make to be able to capture the cx_Oracle.pyd file so that it will load</p> <p>Update:</p> <p>The problem was I was using a cmd console that was open prior to installing cx_Oracle and instant client to build the exe with py2exe</p> <p>I closed the console down, re-opened it, and Windows was able to find the appropriate files</p> <p>This now runs OK on my Windows 10 laptop (64 bit)</p> <p>But when I try to deploy this EXE to my client's machine (64 bit Windows 2008) I get the following still</p>
0
2016-08-16T10:39:06Z
39,077,450
<p>I finally figured out what was going on</p> <p>The machine we were deploying to was windows 2008 Server 64 Bit But It had Oracle 32 bit client on it</p> <p>I was trying to deploy py2exe python 3.4 app 64 bit with Cx_Oracle 64bit this was finding the 32 bit OCI.dll and failing to load</p> <p>My solution was to package the 64 bit Instant Client in the data_files. Then in my app modify the path within the app</p> <pre><code>import os if os.path.exists('./instant_client'): pth = os.environ.get('path') pth = '{0};{1}'.format('./instant_client' ,pth) os.environ['path'] = pth </code></pre> <p>This way I could guarantee that the cx_Oracle would find the correct OCI.dll without interfering with global paths and the already installed oracle on that machine</p>
0
2016-08-22T10:41:41Z
[ "python", "py2exe", "cx-oracle" ]
slicing data in pandas
38,972,790
<p>This is likely a really simple question, but it's one I've been confused about and stuck on for a while, so I'm hoping I might get some help. </p> <p>I'm using cross validation to test my data set, but I'm finding that indexing the pandas df is not working as I'm expecting. Specifically, when I print out x_test, I find that there are no data points for x_test. In fact, there are indexes but no columns. </p> <pre><code>k = 10 N = len(df) n = N/k + 1 for i in range(k): print i*n, i*n+n x_train = df.iloc[i*n: i*n+n] y_train = df.iloc[i*n: i*n+n] x_test = df.iloc[0:i*n, i*n+n:-1] print x_test </code></pre> <p>Typical output: </p> <pre><code>0 751 Empty DataFrame Columns: [] Index: [] 751 1502 Empty DataFrame Columns: [] Index: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, ...] </code></pre> <p>I'm trying to work out how to get the data to show up. Any thoughts?</p>
-1
2016-08-16T10:43:00Z
38,972,958
<p>Why don't you use <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.KFold.html#sklearn.cross_validation.KFold" rel="nofollow">sklearn.cross_validation.KFold</a>? There is a clear example on this site...</p> <hr> <p><strong>UPDATE:</strong></p> <p>For all subsets you have to specify the columns as well: for <code>x_train</code> and <code>x_test</code> you have to exclude the target column; for <code>y_train</code> only the target column should be present. See <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="nofollow">slicing and indexing</a> for more details.</p> <pre><code>target = 'target' # name of target column list_features = df.columns.tolist() # use all columns for model training list_features.remove(target) # excluding "target" column k = 10 N = len(df) n = int(N/k) + 1 # 'int()' is needed in Python 3 for i in range(k): print i*n, i*n+n x_train = df.loc[i*n: i*n+n-1, list_features] # '.loc[]' is inclusive, that's why "-1" is present y_train = df.loc[i*n: i*n+n-1, target] # specify columns after "," x_test = df.loc[~df.index.isin(range(int(i*n), int(i*n+n))), list_features] print x_test </code></pre>
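As an aside, the index bookkeeping that KFold automates can be sketched in a few lines of plain Python (a simplified illustration of the idea, not sklearn's actual implementation):

```python
def kfold_indices(n, k):
    """Yield (train, test) index lists for k roughly equal, contiguous folds."""
    # Spread the remainder over the first n % k folds so sizes differ by at most 1.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        # The training set is everything outside the current test fold.
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

for train, test in kfold_indices(10, 3):
    print(test)  # [0, 1, 2, 3] then [4, 5, 6] then [7, 8, 9]
```

Every index lands in exactly one test fold, which is the property the manual slicing is trying to achieve.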
0
2016-08-16T10:51:31Z
[ "python", "pandas", "cross-validation" ]
How to convert cumulative data to daily one?
38,972,799
<p>I have a list of cumulative transferred data per link, and the periods when each data transfer started and ended. In other words, the elements of <code>times</code> are periods of days; the elements of <code>data</code> are the sum of <strong>all</strong> data transferred by this link up to the end of the current transfer:</p> <pre><code>data = [0.85, 1.6, 1.85, 2.89, 3.56, 4.05, 5.56, 7.89] times = [[0.5, 1.3], [1.8, 2.1], [2.9, 2.99], [3.5, 3.59], [3.6, 4.1], [4.2, 4.35], [4.65, 4.76], [4.85, 5.5]] </code></pre> <p>Is there any Python or numpy way to convert the cumulative data to daily (<code>[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6]</code>) statistics of transferred data?</p> <p><strong>P.S.</strong> Daily data means how much data was transferred specifically between day <code>0</code> and day <code>1</code> (day <code>1</code> and <code>2</code>, and so on).</p> <p>For example, I want to find the data transferred between day <code>0</code> and day <code>1</code>. In the period <code>[0.5, 1.3]</code>, <code>0.85 GB</code> of data was transferred. So I have to find the share of that <code>0.85 GB</code> which was transferred between <code>[0, 1]</code>:<br> <code>0.85 GB * (1-0.5) days / (1.3-0.5) days = 0.53 GB</code> And so on. </p>
1
2016-08-16T10:43:27Z
38,973,072
<p>IIUC you can do soomething like this -</p> <pre><code>lims = np.arange(data.size)+1 col0 = lims - times[:,0] col1 = times[:,1] - lims lens = times[:,1] - times[:,0] out = data*col0/lens shares = data*(col1/lens) out[1:] += shares.cumsum()[:-1] </code></pre> <p>Sample run -</p> <pre><code>In [144]: data Out[144]: array([ 0.85, 1.6 , 1.85, 2.89, 3.56, 4.05, 5.56, 7.89]) In [145]: times Out[145]: array([[ 0.5 , 1.3 ], [ 1.8 , 2.1 ], [ 2.9 , 2.99], [ 3.5 , 3.59], [ 3.6 , 4.1 ], [ 4.2 , 4.35], [ 4.65, 4.76], [ 4.85, 5.5 ]]) In [146]: out Out[146]: array([ 0.53125 , 1.38541667, 2.90763889, 16.70208333, -2.55102778, 29.67297222, 55.3047904 , -138.46269211]) </code></pre>
1
2016-08-16T10:56:52Z
[ "python", "numpy", "inverse", "cumulative-sum" ]
How to convert cumulative data to daily one?
38,972,799
<p>I have a list of cumulative transferred data per link, and the periods when each data transfer started and ended. In other words, the elements of <code>times</code> are periods of days; the elements of <code>data</code> are the sum of <strong>all</strong> data transferred by this link up to the end of the current transfer:</p> <pre><code>data = [0.85, 1.6, 1.85, 2.89, 3.56, 4.05, 5.56, 7.89] times = [[0.5, 1.3], [1.8, 2.1], [2.9, 2.99], [3.5, 3.59], [3.6, 4.1], [4.2, 4.35], [4.65, 4.76], [4.85, 5.5]] </code></pre> <p>Is there any Python or numpy way to convert the cumulative data to daily (<code>[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6]</code>) statistics of transferred data?</p> <p><strong>P.S.</strong> Daily data means how much data was transferred specifically between day <code>0</code> and day <code>1</code> (day <code>1</code> and <code>2</code>, and so on).</p> <p>For example, I want to find the data transferred between day <code>0</code> and day <code>1</code>. In the period <code>[0.5, 1.3]</code>, <code>0.85 GB</code> of data was transferred. So I have to find the share of that <code>0.85 GB</code> which was transferred between <code>[0, 1]</code>:<br> <code>0.85 GB * (1-0.5) days / (1.3-0.5) days = 0.53 GB</code> And so on. </p>
1
2016-08-16T10:43:27Z
38,973,148
<p>You can use <code>np.split</code> to chunk the data into daily arrays. First you need the indices that define the edges of each day; for this you can use <code>np.histogram</code> where you define the bins that represent the edges of your days. Then cumsum to get indices of the edges of each day.</p> <pre><code>hist, bins = np.histogram(times, bins=range(5)) # 5 is number of days chunked = np.split(data, hist.cumsum()) </code></pre> <p>Chunked should now be a list of arrays where each array contains the value for each day. You can apply whatever reducing function you want to this.</p> <pre><code>print(chunked) # [array([0.85]), array([1.6, 1.85]), ...] map(np.sum, chunked) </code></pre> <p>Note the times/values arrays must be sorted for split to work.</p> <p>...</p> <p>More readable but much slower, you can select data for each day.</p> <pre><code>days = np.floor(times) chunked = [data[days == day] for day in range(5)] </code></pre>
1
2016-08-16T10:59:53Z
[ "python", "numpy", "inverse", "cumulative-sum" ]
How to convert cumulative data to daily one?
38,972,799
<p>I have a list of cumulative transferred data per link, and the periods when each data transfer started and ended. In other words, the elements of <code>times</code> are periods of days; the elements of <code>data</code> are the sum of <strong>all</strong> data transferred by this link up to the end of the current transfer:</p> <pre><code>data = [0.85, 1.6, 1.85, 2.89, 3.56, 4.05, 5.56, 7.89] times = [[0.5, 1.3], [1.8, 2.1], [2.9, 2.99], [3.5, 3.59], [3.6, 4.1], [4.2, 4.35], [4.65, 4.76], [4.85, 5.5]] </code></pre> <p>Is there any Python or numpy way to convert the cumulative data to daily (<code>[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6]</code>) statistics of transferred data?</p> <p><strong>P.S.</strong> Daily data means how much data was transferred specifically between day <code>0</code> and day <code>1</code> (day <code>1</code> and <code>2</code>, and so on).</p> <p>For example, I want to find the data transferred between day <code>0</code> and day <code>1</code>. In the period <code>[0.5, 1.3]</code>, <code>0.85 GB</code> of data was transferred. So I have to find the share of that <code>0.85 GB</code> which was transferred between <code>[0, 1]</code>:<br> <code>0.85 GB * (1-0.5) days / (1.3-0.5) days = 0.53 GB</code> And so on. </p>
1
2016-08-16T10:43:27Z
38,973,157
<p>@Divakar has already posted the correct numpy solution, here's a simple python one:</p> <pre><code>import math data = [0.85, 1.6, 1.85, 2.89, 3.56, 4.05, 5.56, 7.89] times = [0.5, 1.3, 1.8, 2.9, 3.5, 3.6, 4.2, 4.65] daily = [0] * 7 for i, t in enumerate(times): daily[int(math.floor(t))] += data[i] print daily </code></pre>
1
2016-08-16T11:00:14Z
[ "python", "numpy", "inverse", "cumulative-sum" ]
Broadcasting logical operators along a different axis
38,972,934
<p>I have a DataFrame and a Series:</p> <pre><code>np.random.seed(0) df = pd.DataFrame(np.random.binomial(1, 0.3, (5, 4)).astype(bool)) ser = pd.Series(np.random.binomial(1, 0.3, 5).astype(bool)) </code></pre> <pre>df Out: 0 1 2 3 0 False True False False 1 False False False True 2 True False True False 3 False True False False 4 False True True True ser Out: 0 True 1 True 2 False 3 True 4 False dtype: bool</pre> <p>I want to compare each column against the Series row by row to see if both elements are True. The result should be:</p> <pre> 0 1 2 3 0 False True False False 1 False False False True 2 False False False False 3 False True False False 4 False False False False</pre> <p>I can do this with <code>df.mul(ser, axis=0)</code> but it raises a warning:</p> <blockquote> <p>UserWarning: evaluating in Python space because the '*' operator is not supported by numexpr for the bool dtype, use '&amp;' instead </p> </blockquote> <p>I am guessing this would slow down the operation. Are there any alternatives?</p>
1
2016-08-16T10:50:07Z
38,974,330
<p>Since this computation is array-based (no alignment of index labels necessary), you could compute this with NumPy arrays and NumPy broadcasting:</p> <pre><code>np.logical_and(df.values, ser.values[:, None]) </code></pre> <hr> <p>Here is a speed comparison of a few alternatives:</p> <pre><code>import numpy as np import pandas as pd N, M = 500, 400 np.random.seed(0) df = pd.DataFrame(np.random.binomial(1, 0.3, (N, M)).astype(bool)) ser = pd.Series(np.random.binomial(1, 0.3, N).astype(bool)) </code></pre> <hr> <pre><code>In [60]: %timeit pd.DataFrame(np.logical_and(df.values, ser.values[:, None]), columns=df.columns, index=df.index) 10000 loops, best of 3: 93.4 µs per loop In [51]: %timeit pd.DataFrame(df.values * ser.values[:,None], columns=df.columns, index=df.index) 10000 loops, best of 3: 94.4 µs per loop In [47]: %timeit df.mul(ser, axis=0) UserWarning: evaluating in Python space because the '*' operator is not supported by numexpr for the bool dtype, use '&amp;' instead 10000 loops, best of 3: 166 µs per loop In [46]: %timeit df.apply(lambda x: x &amp; ser) 10 loops, best of 3: 135 ms per loop </code></pre>
2
2016-08-16T11:57:23Z
[ "python", "pandas" ]
Need advice, remove widget for kivy
38,973,030
<p>New to the world of programming, for some months now.</p> <p>I need some help regarding remove_widget.</p> <p>Please find my simple code for troubleshooting below. I've been working on this for about 2 weeks and I can't find a workaround.</p> <p>Basically this code adds a button object and should also remove it, but when I click the delete button it doesn't remove the button that was added. Instead, it's as if a new button is created and then removed.</p> <p>Thanks.</p> <pre><code>from kivy.uix.widget import Widget from kivy.app import App from kivy.uix.screenmanager import ScreenManager, Screen from kivy.lang import Builder from kivy.uix.button import Button kv_string = Builder.load_string(''' &lt;MScreen&gt;: FloatLayout: Button: pos: root.x,root.top-self.height id: main_add text: 'Add' size_hint: .1,.05 on_release: root.add_item(1) Button: pos: root.x,root.top-main_add.height-self.height id: main_del text: 'Delete' size_hint: .1,.05 on_release: root.rem_item() ''') count = 0 class AddRem(Widget): def addrem(self,add): global count self.wid = Widget() self.list_btn = [] if add == 1: count +=1 for self.index in range(count): self.list_btn.append(Button(text=str(self.index), size_hint= (None,None), width=120, height=50, pos=(200,50+(self.index*10)))) self.add_widget(self.wid) class MScreen(Screen,AddRem): def add_item(self,add): self.addrem(add) for index in range(count): self.wid.add_widget(self.list_btn[index]) def rem_item(self): self.wid.remove_widget(self.list_btn.pop()) class myApp1(App): def build(self): return SManage SManage = ScreenManager() SManage.add_widget(MScreen()) if __name__ == "__main__": myApp1().run() </code></pre>
0
2016-08-16T10:54:56Z
38,973,792
<p>A simple example of add remove buttons I can think of right now, could look like this.</p> <pre class="lang-py prettyprint-override"><code>from kivy.app import App from kivy.uix.button import Button from kivy.uix.boxlayout import BoxLayout class MyBox(BoxLayout): def __init__(self,**kwargs): super(MyBox,self).__init__(**kwargs) self.orientation = "vertical" self.count = 0 self.controlbuttons = BoxLayout(orientation="horizontal") self.controlbuttons.add_widget(Button(on_press=self.add_button,text="Add Button")) self.controlbuttons.add_widget(Button(on_press=self.remove_button,text="Remove Button")) self.buttons = BoxLayout(orientation="vertical") self.add_widget(self.controlbuttons) self.add_widget(self.buttons) def add_button(self,*args): self.count += 1 self.buttons.add_widget(Button(text=str(self.count))) def remove_button(self,*args): if len(self.buttons.children): self.count -= 1 self.buttons.remove_widget(self.buttons.children[0]) class MyApp(App): def build(self): return MyBox() if __name__ == "__main__": MyApp().run() </code></pre>
0
2016-08-16T11:31:48Z
[ "python", "kivy" ]
Keeping a Selenium Driver alive across multiple Python scripts
38,973,052
<p>I'm working on a series of tests using Fitnesse. Due to Fitnesse only allowing me to return one value at a time, I have a series of assertion tests e.g. check to see if a particular element exists on the page. Each test in fitnesse runs one after the other, which means my driver instance gets destroyed after each test.</p> <p>While functional - this approach is becoming less than adequate as such simple checks spend most of their time opening and closing the browser. </p> <p>I've tried to pickle the driver - but haven't had much success in doing so. I get:</p> <pre><code>TypeError: can't pickle file objects </code></pre> <p>I've also tried running a separate python script endlessly and accessing the driver from there, but any scripts that then include this also get caught in an endless loop.</p>
0
2016-08-16T10:55:47Z
38,974,256
<p>My recommendation would be to organise the tests better and adopt the practice that each test case has its own instance. If you have really small &amp; fast cases, I would recommend nesting them. </p> <p>Otherwise I recommend reading about the Singleton pattern, which allows us to have a single webdriver instance. You can find some examples of the implementation here; just search for selenium singleton. Good luck!</p>
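A minimal sketch of the Singleton idea (all names here are illustrative; in real use `_create()` would return e.g. `webdriver.Firefox()` — a plain object stands in below so the sketch runs without a browser):

```python
class DriverSingleton:
    """Creates the shared driver lazily and hands every caller the same one."""
    _instance = None

    @classmethod
    def get(cls):
        # Build the driver only on first use; afterwards, reuse it.
        if cls._instance is None:
            cls._instance = cls._create()
        return cls._instance

    @classmethod
    def _create(cls):
        # Real code would do: from selenium import webdriver; return webdriver.Firefox()
        return object()

a = DriverSingleton.get()
b = DriverSingleton.get()
print(a is b)  # True
```

Because `_create()` runs only once per process, every test that asks `DriverSingleton.get()` for the driver reuses one browser instead of opening and closing it per check.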
0
2016-08-16T11:54:04Z
[ "python", "selenium" ]
Keeping a Selenium Driver alive across multiple Python scripts
38,973,052
<p>I'm working on a series of tests using Fitnesse. Due to Fitnesse only allowing me to return one value at a time, I have a series of assertion tests e.g. check to see if a particular element exists on the page. Each test in fitnesse runs one after the other, which means my driver instance gets destroyed after each test.</p> <p>While functional - this approach is becoming less than adequate as such simple checks spend most of their time opening and closing the browser. </p> <p>I've tried to pickle the driver - but haven't had much success in doing so. I get:</p> <pre><code>TypeError: can't pickle file objects </code></pre> <p>I've also tried running a separate python script endlessly and accessing the driver from there, but any scripts that then include this also get caught in an endless loop.</p>
0
2016-08-16T10:55:47Z
38,986,467
<p>Your framework should have something like the @BeforeSuite annotation in TestNG. If it does, you can leverage it to instantiate the driver once, to be used by all tests.</p> <p>We have successfully implemented this using a BaseTestClass which holds the driver variable; we set this driver in a @BeforeSuite method and it is then shared by all the tests. This approach has some challenges, though: for example, you cannot run the tests in parallel because the driver is shared.</p>
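In Python's unittest, the closest analogue to the TestNG setup described above is `setUpClass` (a sketch under the assumption that a base class owns the driver; a stub object replaces the real webdriver so it runs without a browser, and the class names are illustrative):

```python
import unittest

class DriverStub:
    """Stands in for a selenium webdriver in this sketch."""
    def quit(self):
        pass

class BaseBrowserTest(unittest.TestCase):
    """Analogue of a BaseTestClass + @BeforeSuite: one driver per test class."""
    @classmethod
    def setUpClass(cls):
        cls.driver = DriverStub()  # real code: webdriver.Firefox()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

class ExampleChecks(BaseBrowserTest):
    def test_check_one(self):
        self.assertIsInstance(self.driver, DriverStub)

    def test_check_two(self):
        # Both tests see the same driver, created once in setUpClass.
        self.assertIs(self.driver, ExampleChecks.driver)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleChecks)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

As with the shared-driver approach above, this trades per-test isolation for speed, and it makes parallel runs harder.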
0
2016-08-17T00:36:59Z
[ "python", "selenium" ]
Pandas: convert days to months
38,973,116
<p>I have dataframe</p> <pre><code>member_id,2015-05-01,2015-05-02,2015-05-03,2015-05-04,2015-05-05,2015-05-06,2015-05-07,2015-05-08,2015-05-09,2015-05-10,2015-05-11,2015-05-12,2015-05-13,2015-05-14,2015-05-15,2015-05-16,2015-05-17,2015-05-18,2015-05-19,2015-05-20,2015-05-21,2015-05-22,2015-05-23,2015-05-24,2015-05-25,2015-05-26,2015-05-27,2015-05-28,2015-05-29,2015-05-30,2015-05-31,2015-06-01,2015-06-02,2015-06-03,2015-06-04,2015-06-05,2015-06-06,2015-06-07,2015-06-08,2015-06-09,2015-06-10,2015-06-11,2015-06-12,2015-06-13,2015-06-14,2015-06-15,2015-06-16,2015-06-17,2015-06-18,2015-06-19,2015-06-20,2015-06-21,2015-06-22,2015-06-23,2015-06-24,2015-06-25,2015-06-26,2015-06-27,2015-06-28,2015-06-29,2015-06-30,2015-07-01,2015-07-02,2015-07-03,2015-07-04,2015-07-05,2015-07-06,2015-07-07,2015-07-08,2015-07-09,2015-07-10,2015-07-11,2015-07-12,2015-07-13,2015-07-14,2015-07-15,2015-07-16,2015-07-17,2015-07-18,2015-07-19,2015-07-20,2015-07-21,2015-07-22,2015-07-23,2015-07-24,2015-07-25,2015-07-26,2015-07-27,2015-07-28,2015-07-29,2015-07-30,2015-07-31,2015-08-01,2015-08-02,2015-08-03,2015-08-04,2015-08-05,2015-08-06,2015-08-07,2015-08-08,2015-08-09,2015-08-10,2015-08-11,2015-08-12,2015-08-13,2015-08-14,2015-08-15,2015-08-16,2015-08-17,2015-08-18,2015-08-19,2015-08-20,2015-08-21,2015-08-22,2015-08-23,2015-08-24,2015-08-25,2015-08-26,2015-08-27,2015-08-28,2015-08-29,2015-08-30,2015-08-31,2015-09-01,2015-09-02,2015-09-03,2015-09-04,2015-09-05,2015-09-06,2015-09-07,2015-09-08,2015-09-09,2015-09-10,2015-09-11,2015-09-12,2015-09-13,2015-09-14,2015-09-15,2015-09-16,2015-09-17,2015-09-18,2015-09-19,2015-09-20,2015-09-21,2015-09-22,2015-09-23,2015-09-24,2015-09-25,2015-09-26,2015-09-27,2015-09-28,2015-09-29,2015-09-30,2015-10-01,2015-10-02,2015-10-03,2015-10-04,2015-10-05,2015-10-06,2015-10-07,2015-10-08,2015-10-09,2015-10-10,2015-10-11,2015-10-12,2015-10-13,2015-10-14,2015-10-15,2015-10-16,2015-10-17,2015-10-18,2015-10-19,2015-10-20,2015-10-21,2015-10-22,2015-10-23,2015-10-24,2015-10-
25,2015-10-26,2015-10-27,2015-10-28,2015-10-29,2015-10-30,2015-10-31,2015-11-01,2015-11-02,2015-11-03,2015-11-04,2015-11-05,2015-11-06,2015-11-07,2015-11-08,2015-11-09,2015-11-10,2015-11-11,2015-11-12,2015-11-13,2015-11-14,2015-11-15,2015-11-16,2015-11-17,2015-11-18,2015-11-19,2015-11-20,2015-11-21,2015-11-22,2015-11-23,2015-11-24,2015-11-25,2015-11-26,2015-11-27,2015-11-28,2015-11-29,2015-11-30,2015-12-01,2015-12-02,2015-12-03,2015-12-04,2015-12-05,2015-12-06,2015-12-07,2015-12-08,2015-12-09,2015-12-10,2015-12-11,2015-12-12,2015-12-13,2015-12-14,2015-12-15,2015-12-16,2015-12-17,2015-12-18,2015-12-19,2015-12-20,2015-12-21,2015-12-22,2015-12-23,2015-12-24,2015-12-25,2015-12-26,2015-12-27,2015-12-28,2015-12-29,2015-12-30,2015-12-31,2016-01-01,2016-01-02,2016-01-03,2016-01-04,2016-01-05,2016-01-06,2016-01-07,2016-01-08,2016-01-09,2016-01-10,2016-01-11,2016-01-12,2016-01-13,2016-01-14,2016-01-15,2016-01-16,2016-01-17,2016-01-18,2016-01-19,2016-01-20,2016-01-21,2016-01-22,2016-01-23,2016-01-24,2016-01-25,2016-01-26,2016-01-27,2016-01-28,2016-01-29,2016-01-30,2016-01-31,2016-02-01,2016-02-02,2016-02-03,2016-02-04,2016-02-05,2016-02-06,2016-02-07,2016-02-08,2016-02-09,2016-02-10,2016-02-11,2016-02-12,2016-02-13,2016-02-14,2016-02-15,2016-02-16,2016-02-17,2016-02-18,2016-02-19,2016-02-20,2016-02-21,2016-02-22,2016-02-23,2016-02-24,2016-02-25,2016-02-26,2016-02-27,2016-02-28,2016-02-29,2016-03-01,2016-03-02,2016-03-03,2016-03-04,2016-03-05,2016-03-06,2016-03-07,2016-03-08,2016-03-09,2016-03-10,2016-03-11,2016-03-12,2016-03-13,2016-03-14,2016-03-15,2016-03-16,2016-03-17,2016-03-18,2016-03-19,2016-03-20,2016-03-21,2016-03-22,2016-03-23,2016-03-24,2016-03-25,2016-03-26,2016-03-27,2016-03-28,2016-03-29,2016-03-30,2016-03-31,2016-04-01,2016-04-02,2016-04-03,2016-04-04,2016-04-05,2016-04-06,2016-04-07,2016-04-08,2016-04-09,2016-04-10,2016-04-11,2016-04-12,2016-04-13,2016-04-14,2016-04-15,2016-04-16,2016-04-17,2016-04-18,2016-04-19,2016-04-20,2016-04-21,2016-04-22,2016-04-23,2016-0
4-24,2016-04-25,2016-04-26,2016-04-27,2016-04-28,2016-04-29,2016-04-30,2016-05-01,2016-05-02,2016-05-03,2016-05-04,2016-05-05,2016-05-06,2016-05-07,2016-05-08,2016-05-09,2016-05-10,2016-05-11,2016-05-12,2016-05-13,2016-05-14,2016-05-15,2016-05-16,2016-05-17,2016-05-18,2016-05-19,2016-05-20,2016-05-21,2016-05-22,2016-05-23,2016-05-24,2016-05-25,2016-05-26,2016-05-27,2016-05-28,2016-05-29,2016-05-30,2016-05-31,2016-06-01,2016-06-02,2016-06-03,2016-06-04,2016-06-05,2016-06-06,2016-06-07,2016-06-08,2016-06-09,2016-06-10,2016-06-11,2016-06-12,2016-06-13,2016-06-14,2016-06-15,2016-06-16,2016-06-17,2016-06-18,2016-06-19,2016-06-20,2016-06-21,2016-06-22,2016-06-23,2016-06-24,2016-06-25,2016-06-26,2016-06-27,2016-06-28,2016-06-29,2016-06-30,2016-07-01,2016-07-02,2016-07-03,2016-07-04,2016-07-05,2016-07-06,2016-07-07,2016-07-08,2016-07-09,2016-07-10,2016-07-11,2016-07-12,2016-07-13,2016-07-14,2016-07-15,2016-07-16,2016-07-17,2016-07-18,2016-07-19,2016-07-20,2016-07-21,2016-07-22,2016-07-23,2016-07-24,2016-07-25,2016-07-26,2016-07-27,2016-07-28,2016-07-29,2016-07-30,2016-07-31,2016-08-01,2016-08-02,2016-08-03,2016-08-04,2016-08-05,2016-08-06,2016-08-07,2016-08-08,2016-08-09,2016-08-10,2016-08-11,2016-08-12,2016-08-13,2016-08-14,2016-08-15 
19205,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 19276,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 </code></pre> <p>There are all infornation to every day. 
I need to get this information per month, and if any day in a month is != 0, print 1 for that month. I need to get a table like this, but where the column names are months and the values can be 0 or 1</p>
0
2016-08-16T10:58:31Z
38,973,152
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.to_period.html" rel="nofollow"><code>to_period</code></a>:</p> <pre><code>df['member_id'] = df.member_id.astype(str) df.set_index('member_id', inplace=True) df = df.unstack().reset_index(name='val').rename(columns={'level_0':'date'}) df['date'] = pd.to_datetime(df.date).dt.to_period('m').dt.strftime('%Y-%m') #df = (df != 0).astype(int) df = df.groupby(['date','member_id'])['val'].sum() df = (df != 0).astype(int).unstack(0) print (df) date 2015-05 2015-06 2015-07 2015-08 2015-09 2015-10 2015-11 \ member_id 19205 0 0 0 0 0 0 0 19276 0 0 0 0 0 0 0 19404 0 0 0 0 0 0 0 date 2015-12 2016-01 2016-02 2016-03 2016-04 2016-05 2016-06 \ member_id 19205 0 0 0 1 0 0 0 19276 0 1 0 0 0 0 0 19404 0 0 0 0 0 0 1 date 2016-07 2016-08 member_id 19205 0 0 19276 0 0 19404 1 0 </code></pre>
1
2016-08-16T11:00:02Z
[ "python", "pandas" ]
Why does RobotParser block this result?
38,973,168
<p>I have been using Python's robotparser for a while now and it's working fine. This morning I ran across a website with a very permissive looking robots.txt file:</p> <pre><code>User-agent: * Disallow: /wp-admin/ Allow: /wp-admin/admin-ajax.php </code></pre> <p>However, for some reason, the parser thinks all URLs are blocked.</p> <pre><code>import robotparser rp = robotparser.RobotFileParser("http://newenglandreptileshop.com/robots.txt") rp.read() # Try any URL rp.can_fetch("*", "http://www.newenglandreptileshop.com") False </code></pre> <p>My assumption is that crawling all paths is permissible unless denied. I used another robots.txt parser to check my assumption and it agreed that I should be able to access most URLs on this server. And Google has them indexed too.</p> <p>Seems like a bug in the Python library. What's going on?</p>
0
2016-08-16T11:00:41Z
38,974,257
<p>According to the Robot Exclusion Standard found at <a href="https://www.w3.org/TR/html4/appendix/notes.html#h-B.4.1.1" rel="nofollow">https://www.w3.org/TR/html4/appendix/notes.html#h-B.4.1.1</a> and <a href="https://en.wikipedia.org/wiki/Robots_exclusion_standard" rel="nofollow">https://en.wikipedia.org/wiki/Robots_exclusion_standard</a>, there is no such thing as an Allow record in the original standard. To allow access you must add an empty Disallow record. Try hosting the robots.txt on a domain you control, remove the explicit Allow record, and see if RobotParser returns True for can_fetch.</p>
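One way to test a rule set in isolation, without hosting anything, is to feed the lines straight to the parser with `parse()` (Python 3 shown, where the module lives at `urllib.robotparser`; the domain is a placeholder):

```python
import urllib.robotparser

rules = """\
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())  # no network fetch involved

print(rp.can_fetch("*", "http://example.com/"))           # True
print(rp.can_fetch("*", "http://example.com/wp-admin/"))  # False
```

If the same rules permit a URL when parsed locally but the fetched copy blocks everything, the HTTP retrieval (redirects, error pages, user-agent blocking) is the more likely culprit than the rules themselves.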
0
2016-08-16T11:54:05Z
[ "python", "web-crawler", "robots.txt" ]
Change picture when tkinter button clicked
38,973,173
<p>I'm using tkinter in Python 3 to produce some buttons and some labels which change colour when the button is clicked (and turn on a GPIO pin on the Raspberry Pi too).</p> <p>Is it possible to change the .gif that the button uses when the button is clicked? I want it to say ON when the GPIO pin is off, and OFF when the GPIO pin is on.</p> <p>At the moment I have:</p> <pre><code>#BCM17 GPIO.setup(17,GPIO.OUT) colour17=StringVar() pinstate17=GPIO.input(17) if pinstate17==1: colour17.set('red') else: colour17.set('green') BCM17Bimage=tk.PhotoImage(file='on.gif') BCM17B = Button(clock, text="GPIO 0\nBCM 17", image=BCM17Bimage, width=78, height=100, bg="grey", command=BCM17f).grid(column=2, row=1) BCM17L = Label(clock, text="GPIO 0\nBCM 17", font=(fontname,12), fg='white', bg=colour17.get(), width=10, height=2) BCM17L.grid(column=0, row=1) </code></pre> <p>And, the def for the button is:</p> <pre><code>def BCM17f(): pinstate17=GPIO.input(17) colour17.set('red' if pinstate17==0 else 'green') BCM17L.configure(bg=colour17.get()) if pinstate17==0: GPIO.output(17,True) else: GPIO.output(17,False) print(pinstate17) </code></pre> <p>A random aside too - is it possible to get an email when people reply to a post on here? I had a good look, but can't see an option for it anywhere.</p>
-2
2016-08-16T11:00:50Z
39,024,699
<p>Solved it:</p> <pre><code>#BCM17 GPIO.setup(17,GPIO.OUT) colour17=StringVar() pinstate17=GPIO.input(17) if pinstate17==1: colour17.set('red') else: colour17.set('green') BCM17L = Label(clock, text="GPIO 0\nBCM 17", font=(fontname,12), fg='white', bg=colour17.get(), width=10, height=2) BCM17L.grid(column=0, row=1) image17on=tk.PhotoImage(file="on.gif") image17off=tk.PhotoImage(file="off.gif") if pinstate17==1: image17=image17on else: image17=image17off BCM17B = Button(clock, text="GPIO 0\nBCM 17", image=image17, width=75, height=75, bg="grey", command=BCM17f) BCM17B.grid(column=2, row=1) </code></pre> <p>with def:</p> <pre><code>def BCM17f(): pinstate17=GPIO.input(17) colour17.set('red' if pinstate17==0 else 'green') BCM17L.configure(bg=colour17.get()) global toggle17 if toggle17 and pinstate17==1: GPIO.output(17,False) BCM17B.config(image=image17off) toggle17 = not toggle17 else: GPIO.output(17,True) BCM17B.config(image=image17on) toggle17 = not toggle17 </code></pre>
0
2016-08-18T17:58:43Z
[ "python", "tkinter", "raspberry-pi3" ]
OpenCV python environment set up on Mac
38,973,217
<p>I am trying to set up an OpenCV environment on my Mac (El Capitan 10.11.5) using this tutorial: <a href="http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/" rel="nofollow">http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/</a> </p> <p>I constantly get an error: The source directory "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/BUILD_EXAMPLES=ON" does not exist. </p> <p>My cmake command is:</p> <p><code>cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D PYTHON3_PACKAGES_PATH=~/.virtualenvs/cv/lib/python3.5/site-packages -D PYTHON3_LIBRARY=/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/bin -D PYTHON3_INCLUDE_DIR=/usr/local/Frameworks/Python.framework/Headers -D INSTALL_C_EXAMPLES=OFF -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON </code> Any suggestions on what I am doing wrong?</p> <p>UPDATE: I found another way to get started here: <a href="http://blogs.wcode.org/2014/10/howto-install-build-and-use-opencv-macosx-10-10/" rel="nofollow">http://blogs.wcode.org/2014/10/howto-install-build-and-use-opencv-macosx-10-10/</a> It seems to be working, but I am still not sure why the previous one would not go through.</p>
0
2016-08-16T11:03:04Z
38,988,042
<p>I had that problem too, but in the end </p> <p><code>brew install opencv3 --with-python3</code></p> <p>did the trick for me. </p>
0
2016-08-17T04:16:46Z
[ "python", "osx", "opencv", "curl" ]
clustering of tweets using k means algorithm as positive or negative
38,973,218
<p>I have some movie reviews; I need to cluster them into positive or negative clusters. Using K-means is possible. Can anyone give me a basic outline of how to start with it? Python is preferable.</p>
-3
2016-08-16T11:03:07Z
38,973,436
<p>You can start with the sklearn package, a well-known machine learning package. There you can use sklearn.cluster.KMeans.</p> <p>Here is an example from the <a href="http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html" rel="nofollow">scikit-learn website</a>. </p> <p>Though you prefer Python, R is also a good statistical tool that can do this. There is a function <a href="https://stat.ethz.ch/R-manual/R-devel/library/stats/html/kmeans.html" rel="nofollow">kmeans(x, centers)</a>. It is a builtin function, hence you do not need to import any package. What you need to do is read the data and run it:</p> <p>x = read.table(file,sep='\t')</p> <p>y = kmeans(x, centers=2)</p>
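To make the mechanics concrete, here is a toy pure-Python version of the k-means loop for k=2 on one-dimensional scores (the data is made up for illustration; real movie reviews would first need to be turned into numeric vectors, e.g. with TF-IDF, before sklearn's KMeans can cluster them):

```python
def two_means(points, iters=20):
    """Tiny Lloyd's algorithm: split 1-D points into two clusters."""
    # Start the two centroids at the extremes so the clusters can separate.
    centroids = [min(points), max(points)]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            # Assign each point to the nearer centroid.
            nearer = 0 if abs(p - centroids[0]) <= abs(p - centroids[1]) else 1
            clusters[nearer].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical per-review scores: values near 0 "negative", near 1 "positive".
scores = [0.1, 0.2, 0.15, 0.9, 0.8, 0.95]
centroids, clusters = two_means(scores)
print(sorted(clusters[0]))  # [0.1, 0.15, 0.2]
print(sorted(clusters[1]))  # [0.8, 0.9, 0.95]
```

Note that k-means only finds some two-way split; there is no guarantee the split corresponds to sentiment unless the features already encode it.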
-1
2016-08-16T11:13:31Z
[ "python", "twitter", "machine-learning", "cluster-analysis", "k-means" ]
clustering of tweets using k means algorithm as positive or negative
38,973,218
<p>I have some movie reviews; I need to cluster them into positive or negative clusters. Using K-means is possible. Can anyone give me a basic outline of how to start with it? Python is preferable.</p>
-3
2016-08-16T11:03:07Z
38,973,790
<h2>You cannot cluster "as positive or negative"</h2> <p>You have labels. Use <strong>classification</strong>.</p> <p>k-means will not be able to identify what is "positive". It may find any pattern, e.g. short vs. long or English vs. Spanish tweets; if you are lucky you can identify what it did.</p>
0
2016-08-16T11:31:47Z
[ "python", "twitter", "machine-learning", "cluster-analysis", "k-means" ]
Better alternative to lots of IF statements? Table of values
38,973,433
<p>I have a table of moves that decide whether or not the player wins based on their selection against the AI. Think Rock, Paper, Scissors with a lot more moves. I'll be eventually coding it in Python, but before I begin, I want to know if there is a better way of doing this rather than LOTS and LOTS of IF statements?</p> <p>The table looks like this:</p> <p><a href="http://i.stack.imgur.com/iVldr.png" rel="nofollow"><img src="http://i.stack.imgur.com/iVldr.png" alt="enter image description here"></a></p> <p>I'm thinking that the moves will need to be assigned numbers, or something like that? I don't know where to start...</p>
7
2016-08-16T11:13:22Z
38,973,575
<p>You could use a dict? Something like this: </p> <pre><code>#dict of winning outcomes, the first layer represents the AI moves, and the inner #layer represent the player move and the outcome ai = { 'punch' : { 'punch' : 'tie', 'kick' : 'wins', }, 'stab' : { 'punch' : 'loses', 'kick' : 'loses' } } ai_move = 'punch' player_move = 'kick' print ai[ai_move][player_move] #output: wins ai_move = 'stab' player_move = 'punch' print ai[ai_move][player_move] #output: loses </code></pre> <p>I didn't map out all the moves, but you get the gist</p>
8
2016-08-16T11:21:39Z
[ "python", "algorithm", "if-statement", "dictionary" ]
Better alternative to lots of IF statements? Table of values
38,973,433
<p>I have a table of moves that decide whether or not the player wins based on their selection against the AI. Think Rock, Paper, Scissors with a lot more moves. I'll be eventually coding it in Python, but before I begin, I want to know if there is a better way of doing this rather than LOTS and LOTS of IF statements?</p> <p>The table looks like this:</p> <p><a href="http://i.stack.imgur.com/iVldr.png" rel="nofollow"><img src="http://i.stack.imgur.com/iVldr.png" alt="enter image description here"></a></p> <p>I'm thinking that the moves will need to be assigned numbers, or something like that? I don't know where to start...</p>
7
2016-08-16T11:13:22Z
38,973,624
<p>Yes, store the decisions as key/value pairs in a dictionary with all the possible combinations as the key and decision as result. Basically make a lookup table for the possible moves. </p> <p>This speeds up decision making at the expense of having to store all possible combinations.</p> <pre><code>def tie(): print("It's a tie!") def lose(): print("You lose") def win(): print("You win") moves = { # player_ai action 'punch_punch': tie, 'punch_kick': lose, 'punch_stab': lose, &lt;..&gt; } player_move = &lt;move&gt; ai_move = &lt;move&gt; key = "_".join([player_move, ai_move]) if key in moves: # Execute appropriate function return moves[key]() raise Exception("Invalid move") </code></pre>
0
2016-08-16T11:23:35Z
[ "python", "algorithm", "if-statement", "dictionary" ]
Better alternative to lots of IF statements? Table of values
38,973,433
<p>I have a table of moves that decide whether or not the player wins based on their selection against the AI. Think Rock, Paper, Scissors with a lot more moves. I'll be eventually coding it in Python, but before I begin, I want to know if there is a better way of doing this rather than LOTS and LOTS of IF statements?</p> <p>The table looks like this:</p> <p><a href="http://i.stack.imgur.com/iVldr.png" rel="nofollow"><img src="http://i.stack.imgur.com/iVldr.png" alt="enter image description here"></a></p> <p>I'm thinking that the moves will need to be assigned numbers, or something like that? I don't know where to start...</p>
7
2016-08-16T11:13:22Z
38,973,682
<p>You could try a dictionary of dictionaries (nested dictionary). Keep values and keys in text form, rather than map to numbers, to improve readability.</p> <pre><code>outcome = {} outcome['punch'] = {} outcome['punch']['punch'] = 'Tie' outcome['punch']['kick'] = 'Lose' ... outcome['kick'] = {} outcome['kick']['punch'] = 'Win' outcome['kick']['kick'] = 'Tie' ... i_do = 'punch' he_does = 'fling' ... if outcome[i_do][he_does] == 'Win': print("Woohoo!") </code></pre>
0
2016-08-16T11:26:22Z
[ "python", "algorithm", "if-statement", "dictionary" ]
Better alternative to lots of IF statements? Table of values
38,973,433
<p>I have a table of moves that decide whether or not the player wins based on their selection against the AI. Think Rock, Paper, Scissors with a lot more moves. I'll be eventually coding it in Python, but before I begin, I want to know if there is a better way of doing this rather than LOTS and LOTS of IF statements?</p> <p>The table looks like this:</p> <p><a href="http://i.stack.imgur.com/iVldr.png" rel="nofollow"><img src="http://i.stack.imgur.com/iVldr.png" alt="enter image description here"></a></p> <p>I'm thinking that the moves will need to be assigned numbers, or something like that? I don't know where to start...</p>
7
2016-08-16T11:13:22Z
38,973,828
<p>You could use a python dictionary to map moves to numbers:</p> <pre><code>move = {'Punch': 0, 'Kick': 1} </code></pre> <p>And then use a matrix to determine the outcome. Numpy can be used for that</p> <pre><code>import numpy move = {'Punch': 0, 'Kick': 1} outcome = numpy.matrix([['Tie','Loses'],['Wins','Tie']]) # Punch vs Punch =&gt; Tie print outcome[move['Punch'], move['Punch']] # Punch vs Kick =&gt; Punch loses print outcome[move['Punch'], move['Kick']] </code></pre>
0
2016-08-16T11:33:29Z
[ "python", "algorithm", "if-statement", "dictionary" ]
Better alternative to lots of IF statements? Table of values
38,973,433
<p>I have a table of moves that decide whether or not the player wins based on their selection against the AI. Think Rock, Paper, Scissors with a lot more moves. I'll be eventually coding it in Python, but before I begin, I want to know if there is a better way of doing this rather than LOTS and LOTS of IF statements?</p> <p>The table looks like this:</p> <p><a href="http://i.stack.imgur.com/iVldr.png" rel="nofollow"><img src="http://i.stack.imgur.com/iVldr.png" alt="enter image description here"></a></p> <p>I'm thinking that the moves will need to be assigned numbers, or something like that? I don't know where to start...</p>
7
2016-08-16T11:13:22Z
38,973,896
<p>I'd use a 2-dimensional list for this. Each attack is mapped to an index 0 to 5, and win, tie and loss are encoded as 1, 0 and -1.</p> <p>So the list will look something like this (not based on your example, I just put some random numbers):</p> <pre><code>table = [[1,0,-1,0,1,-1],[1,1,0,1,0,-1],...,etc.] </code></pre> <p>And you will retrieve it like this:</p> <pre><code>table[0][1] </code></pre>
1
2016-08-16T11:36:27Z
[ "python", "algorithm", "if-statement", "dictionary" ]
Better alternative to lots of IF statements? Table of values
38,973,433
<p>I have a table of moves that decide whether or not the player wins based on their selection against the AI. Think Rock, Paper, Scissors with a lot more moves. I'll be eventually coding it in Python, but before I begin, I want to know if there is a better way of doing this rather than LOTS and LOTS of IF statements?</p> <p>The table looks like this:</p> <p><a href="http://i.stack.imgur.com/iVldr.png" rel="nofollow"><img src="http://i.stack.imgur.com/iVldr.png" alt="enter image description here"></a></p> <p>I'm thinking that the moves will need to be assigned numbers, or something like that? I don't know where to start...</p>
7
2016-08-16T11:13:22Z
38,973,897
<p>You can create a map of attacks similar to your table above like this</p> <pre><code>map = [ [0,-1,-1,1,1,-1], [1,0,-1,-1,1,-1], [1,1,0,-1,-1,1], [-1,1,1,0,-1,1], [-1,-1,1,1,0,-1], [1,1,-1,-1,1,0] ] </code></pre> <p>Here, 0 is a draw, 1 is a win and -1 is a loss.</p> <p>Now create an array of attacks where the places of the attacks corresponds with the map above.</p> <pre><code>attacks = ["Punch", "Kick", "Stab", "Throw", "Fling", "Uppercut"] </code></pre> <p>Now you can easily find out if one attack beats another</p> <pre><code>map[attacks.index("Stab")][attacks.index("Punch")] &gt;&gt;&gt; 1 </code></pre> <p>Stab wins over punch</p>
5
2016-08-16T11:36:29Z
[ "python", "algorithm", "if-statement", "dictionary" ]
How to get the output from a keyword using robot.api?
38,973,452
<p>I hope you can help me, I am quite stuck with this issue :(</p> <p>I am trying to create all the tests using the robot api with Python. I followed the example in the documentation, but I need to capture the output from a keyword and I can't find how to do it.</p> <p>I tried as usual in rf-ride syntax:</p> <pre><code> test.keywords.create('${greps}= grep file', args=['log.txt', 'url:', 'encoding_errors=ignore']) </code></pre> <p>It says: No keyword with name '${grep}= grep file' found.</p> <p>I tried: </p> <pre><code>output = test.keywords.create('grep file', args=['log.txt', 'url:', 'encoding_errors=ignore']) </code></pre> <p>but the variable <em>output</em> has just the keyword name, not the output from the keyword.</p> <p>I don't know where to look for more info; all the examples create keywords which don't return any value...</p>
0
2016-08-16T11:14:19Z
38,974,250
<p>The call to <code>test.keywords.create(...)</code> doesn't <em>call</em> the keyword, it merely creates one to be called later. If you want the results to be assigned to a variable, use the <code>assign</code> attribute when calling <code>create</code>. This argument takes a list of variable names. </p> <p>For example, given this line in plain text format:</p> <pre><code>${greps}= grep file log.txt url: encoding_errors=ignore </code></pre> <p>... you would create it like this using the API:</p> <pre><code>test.keywords.create('grep file', args=['log.txt', 'url:', 'encoding_errors=ignore'], assign=['${greps}']) </code></pre>
1
2016-08-16T11:53:55Z
[ "python", "robotframework" ]
Save .csv file in the same directory as .py file
38,973,487
<p>I am trying to save my .csv file, which is a result of some queries, in the same location as the .py file. </p> <pre><code>import os with open(os.path.dirname(os.path.abspath(__file__))+'MyCSVFile.csv','wb') as output_file: dict_writer = csv.DictWriter(output_file, keys) dict_writer.writeheader() dict_writer.writerows(myList) </code></pre> <p>I always seem to get my csv file one directory up. When I print os.path.dirname(os.path.abspath(__file__)) it gives me the proper path, but the output MyCSVFile is saved one level above. What is the problem here?</p>
0
2016-08-16T11:16:42Z
38,973,524
<p>Remove the call to <code>os.path.dirname</code> since you are already calling <code>os.path.abspath</code>. Calling <code>dirname</code> returns the directory component, and concatenating the filename onto it without a separator is why you end up one directory up in the hierarchy. BTW: use <code>os.path.join</code> to join parts of a path.</p>
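The concatenation problem is easy to see in isolation; the directory path below is made up for illustration (POSIX-style separators shown):

```python
import os

# Pretend this is os.path.dirname(os.path.abspath(__file__))
d = "/home/user/project"

bad = d + 'MyCSVFile.csv'                # filename glued onto the path, no separator
good = os.path.join(d, 'MyCSVFile.csv')  # separator inserted for you

print(bad)   # /home/user/projectMyCSVFile.csv -- the file lands in /home/user
print(good)  # /home/user/project/MyCSVFile.csv
```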
1
2016-08-16T11:19:03Z
[ "python", "csv" ]
Save .csv file in the same directory as .py file
38,973,487
<p>I am trying to save my .csv file, which is a result of some queries, in the same location as the .py file. </p> <pre><code>import os with open(os.path.dirname(os.path.abspath(__file__))+'MyCSVFile.csv','wb') as output_file: dict_writer = csv.DictWriter(output_file, keys) dict_writer.writeheader() dict_writer.writerows(myList) </code></pre> <p>I always seem to get my csv file one directory up. When I print os.path.dirname(os.path.abspath(__file__)) it gives me the proper path, but the output MyCSVFile is saved one level above. What is the problem here?</p>
0
2016-08-16T11:16:42Z
38,973,614
<p>You have to use os.path.join to save the csv file in the same directory:</p> <pre><code>import os dirname = os.path.dirname(os.path.abspath(__file__)) csvfilename = os.path.join(dirname, 'MyCSVFile.csv') with open(csvfilename, 'wb') as output_file: dict_writer = csv.DictWriter(output_file, keys) dict_writer.writeheader() dict_writer.writerows(myList) </code></pre> <p>This should work as expected.</p>
2
2016-08-16T11:23:17Z
[ "python", "csv" ]
£ Sign on python SQLAlchemy query
38,973,489
<p>I am trying to inject into a query a string in the following way:</p> <pre><code>value = '£' query = """select * from table where condition = '{0}';""".format(value) </code></pre> <p>The problem is that the result of this formatting is:</p> <blockquote> <p>select * from table where condition = '\xc2\xa3';</p> </blockquote> <p>Since I will use this query with sqlalchemy, I need the encoded characters to appear as the actual pound sign.</p> <p>Any ideas on how I can make this query look like this?:</p> <blockquote> <p>select * from table where condition = '£';</p> </blockquote> <p>I am using postgres 3.5 and python 2.7.6</p> <p>Also if I try the following:</p> <pre><code>'\xc2\xa3'.decode('utf-8') </code></pre> <p>the result is:</p> <blockquote> <p>u'\xa3'</p> </blockquote>
0
2016-08-16T11:16:45Z
38,973,997
<p>The Unicode code point for a pound sign is 163 (decimal) or A3 in hex, so the following should work.</p> <pre><code>print u"\xA3" </code></pre> <p>Original answer: <a href="http://stackoverflow.com/questions/705434/what-encoding-do-i-need-to-display-a-gbp-sign-pound-sign-using-python-on-cygwi">What encoding do I need to display a GBP sign (pound sign) using python on cygwin in Windows XP?</a></p>
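As a quick sanity check (a hypothetical snippet, shown with a u'' literal so it behaves the same on Python 2 and 3), formatting with a real Unicode string carries the literal pound sign into the query rather than the UTF-8 byte pair:

```python
value = u'\xa3'  # the pound sign, code point 163 / 0xA3 -- same as u'£'
query = u"select * from table where condition = '{0}';".format(value)
print(query)  # select * from table where condition = '£';
```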
0
2016-08-16T11:41:54Z
[ "python", "postgresql", "sqlalchemy", "encode" ]
Process finished with exit code -1073741819 (0xC0000005) (Cython, TeamSpeak3)
38,973,604
<p>My aim for now is creating team speak 3 manager service (that switches the users by channels an others): So I created wrapper of TS3 SDK lib (wrapped with Cython for Python): <a href="https://mega.nz/#!pQdFjIwD!1vg8DPsFtYR4icVqWXzvpdbAQ47-n-aPz2niRkTU4fY" rel="nofollow">https://mega.nz/#!pQdFjIwD!1vg8DPsFtYR4icVqWXzvpdbAQ47-n-aPz2niRkTU4fY</a> (main module: <a href="http://pastebin.com/PywhH4bf" rel="nofollow">http://pastebin.com/PywhH4bf</a> ) In this wrapper I used test connection in module. To test this module just import this.</p> <p>And I got message in python console (after call the <code>ts3client_startConnection</code>): <code>Process finished with exit code -1073741819 (0xC0000005)</code> (access violation)</p> <p>Also as I see the TS3 callbacks are called from non-main thread.</p> <p>With this log:</p> <pre><code>2016-08-16 10:14:20.862577|INFO | | |TeamSpeak 3 Client 3.0.3 (2015-03-30 11:30:36) SDK 2016-08-16 10:14:20.863574|INFO | | |SystemInformation: Windows 9 8664 {6} {3} {9600} (9600) x64 (AMD or Intel) Binary: 32bit 2016-08-16 10:14:20.863574|INFO | | |Using hardware aes 2016-08-16 10:14:20.876587|DEBUG |Direct Sound | |setting timer resolution to 1ms 2016-08-16 10:14:20.892602|ERROR |SoundBckndIntf| |Could not load "ts3soundbackend_isSupported" from backend dynamic library spawn connection handler mode Default capture mode: b'Windows Audio Session' ('Default capture device: %s %s\n', b'\xd0\x9c\xd0\xb8\xd0\xba\xd1\x80\xd0\xbe\xd1\x84\xd0\xbe\xd0\xbd (\xd0\xa3\xd1\x81\xd1\x82\xd1\x80\xd0\xbe\xd0\xb9\xd1\x81\xd1\x82\xd0\xb2\xd0\xbe \xd1\x81 \xd0\xbf\xd0\xbe\xd0\xb4\xd0\xb4\xd0\xb5\xd1\x80\xd0\xb6\xd0\xba\xd0\xbe\xd0\xb9 High Definition Audio)', b'{0.0.1.00000000}.{c28d826f-9cd5-414b-a018-bbfc0cbc1298}') 2016-08-16 10:14:20.905616|DEBUG |Windows Audio Session| |WAS::openDevice-enter 2016-08-16 10:14:20.912622|DEBUG |Windows Audio Session| |WAS Buffer size: 896 2016-08-16 10:14:20.912622|DEBUG |Windows Audio Session| |WAS::openDevice-leave 
2016-08-16 10:14:20.912622|INFO |PreProSpeex |1 |Speex version: speex-1.2beta3 2016-08-16 10:14:20.912622|DEBUG |Windows Audio Session| |WAS::startDevice-enter 2016-08-16 10:14:20.913622|DEBUG |Windows Audio Session| |WAS::startDevice-leave Default playback mode: b'Windows Audio Session' ('Default playback device: %s %s\n', b'\xd0\x94\xd0\xb8\xd0\xbd\xd0\xb0\xd0\xbc\xd0\xb8\xd0\xba\xd0\xb8 (\xd0\xa3\xd1\x81\xd1\x82\xd1\x80\xd0\xbe\xd0\xb9\xd1\x81\xd1\x82\xd0\xb2\xd0\xbe \xd1\x81 \xd0\xbf\xd0\xbe\xd0\xb4\xd0\xb4\xd0\xb5\xd1\x80\xd0\xb6\xd0\xba\xd0\xbe\xd0\xb9 High Definition Audio)', b'{0.0.0.00000000}.{cb324415-bf79-473b-9a59-69a1ca4bfe56}') 2016-08-16 10:14:20.913622|DEBUG |Windows Audio Session| |WAS::openDevice-enter 2016-08-16 10:14:20.918627|DEBUG |Windows Audio Session| |WAS Buffer size: 896 2016-08-16 10:14:20.918627|DEBUG |Windows Audio Session| |WAS::openDevice-leave 2016-08-16 10:14:20.918627|DEBUG |Windows Audio Session| |WAS::startDevice-enter 2016-08-16 10:14:20.918627|DEBUG |Windows Audio Session| |WAS::startDevice-leave creating identity Using identity: b'295V/MObSjZ2wIe+dMWhUoLET/UpS6ENHlhWSVdYYSZ5UnQTU3dneUFQLAF/FDVRBXkaFVFeAH10V1cDQn0Gd3BVYgFgXwV4IgxjOVB2DCwtET9TfgcaA31GE1MBZFxLBXtgDHd9WHFka0NJUUQweGprQnp2cjNrSkxBMXJaazRWeDJMTkRUOUlXcVVyZ0p0WnpDU0lDOVlRPT0=' Client lib initialized and running Connect status changed: 1 1 0 2016-08-16 10:14:20.926635|DEVELOP |PktHandler | |Puzzle solve time: 7 Connect status changed: 1 2 0 Connect status changed: 1 3 0 Connect status changed: 1 4 0 </code></pre> <p>Also I got not repeatable random errors:</p> <pre><code>Fatal Python error: GC object already tracked </code></pre> <p>and</p> <pre><code>Fatal Python error: PyThreadState_Get: no current thread </code></pre>
1
2016-08-16T11:22:43Z
38,984,545
<p>Due to lack of a verifiable example, this is a little bit of guessing.</p> <p>If those handler functions are called from a thread that is created by the C library (ts3client_*), the <a href="https://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow">GIL</a> has not been properly acquired by the time python functions are called. </p> <p>Adding <code>with gil</code> like</p> <pre><code>cdef void onConnectStatusChangeEvent(uint64 serverConnectionHandlerID, int newStatus, unsigned int errorNumber) with gil: </code></pre> <p>may help.</p> <p>There is also no code at the end of <code>__main__()</code>, it might be better that the main thread is at least idling.</p> <pre><code>if (error != ERROR_ok): print("Error connecting to server: %d\n"% error) print("Client lib initialized and running\n") </code></pre> <p>The following lines could be added after the print call for a quick test</p> <pre><code>import time while True: time.sleep(1.0) </code></pre>
0
2016-08-16T21:09:33Z
[ "python", "sdk", "cython", "access-violation", "teamspeak" ]
Adding Columns from different dataframes (with matching date) stored in a python dictionary
38,973,635
<p>I have columns with the same name in several dataframes. These dataframes are stored in a dictionary. Now I want to add the columns with the same name (and the respective date) across those dataframes in the dictionary and store the result in a new dataframe. </p> <p>The code I have so far doesn't get me very far...</p> <pre><code>#create a new empty dataframe sum_df = pd.DataFrame() # my dataframes are stored in the dictionary frames_dict for tables in frames_dict: df = frames_dict[tables] df = df[(df['date'] &gt;= '01.01.2010') &amp; (df['date'] &lt;'01.01.2011')] #here I filter for all columns starting with "a4_" filter_col = [col for col in list(df) if col.startswith('a4_')] df2 = df[["date","filter_col"]] sum_df = sum_df + df2 </code></pre> <p>Any suggestions on how to tackle such a problem?</p>
1
2016-08-16T11:24:02Z
38,974,298
<pre><code># Initialize 'sum_df' sum_df = pd.DataFrame(columns=['date']) # Iterate over dataframes of dictionary for i, tables in enumerate(frames_dict): # Create dataframe df = frames_dict[tables] # Filter rows by 'date' df = df[(df['date'] &gt;= '01.01.2010') &amp; (df['date'] &lt;'01.01.2011')] # Filter for all columns starting with "a4_" filter_col = [col for col in list(df) if col.startswith('a4_')] # Keep only proper cols df2 = df[['date'] + filter_col] # Join new columns from dictionary to old 'sum_df' dataframe if i == 0: sum_df = df2.rename(columns={i:'{}_{}'.format(i, tables) for i in filter_col}).copy() else: df2 = df2.rename(columns={i:'{}_{}'.format(i, tables) for i in filter_col}) sum_df = df2.merge(sum_df, how='outer', on=['date']) #, suffixes=('_{}'.format(tables), '_y')) # Use either 'suffix' for renaming or 'df2 = df2.rename()' or both... </code></pre>
0
2016-08-16T11:56:13Z
[ "python", "pandas", "dataframe" ]
Pip3 not working on OS X
38,973,737
<p>I recently redownloaded Python 3 to update it to the latest (3.5.2?) and I must have done something wrong. Pip3 in my terminal is not responding anymore. When I type "pip3 install <em>some module</em>" it gives me back:</p> <pre><code>-bash pip3: command not found </code></pre> <p>I went into /usr/local/bin/ and saw that pip3 and pip3.5 are 0 bytes. I have rerun the Python package installer in hopes of the installation fixing it.</p> <p>My pip for 2.7 works correctly.</p> <p>Edit: Checked and pip3 is right where it should be (AKA /Library/Frameworks/Python.framework/Versions/3.4/bin/pip3)</p> <p>Edit2: Fixed it! First I ran </p> <pre><code>python3 -m pip install *ModuleName* </code></pre> <p>and it ran correctly. It told me I needed to update my pip. So I did </p> <pre><code>python3 -m pip install --upgrade pip </code></pre> <p>and after that pip3 was back to working.</p>
2
2016-08-16T11:29:02Z
38,973,787
<p>This is only a guess, but <code>/usr/local/bin/pip3</code> etc. are probably only symbolic links to the real binary. You could run <code>ls -l /usr/local/bin/pip3</code> to see where the symbolic link points to.</p>
1
2016-08-16T11:31:27Z
[ "python", "python-3.x", "pip" ]
Pip3 not working on OS X
38,973,737
<p>I recently redownloaded Python 3 to update it to the latest (3.5.2?) and I must have done something wrong. Pip3 in my terminal is not responding anymore. When I type "pip3 install <em>some module</em>" it gives me back:</p> <pre><code>-bash pip3: command not found </code></pre> <p>I went into /usr/local/bin/ and saw that pip3 and pip3.5 are 0 bytes. I have rerun the Python package installer in hopes of the installation fixing it.</p> <p>My pip for 2.7 works correctly.</p> <p>Edit: Checked and pip3 is right where it should be (AKA /Library/Frameworks/Python.framework/Versions/3.4/bin/pip3)</p> <p>Edit2: Fixed it! First I ran </p> <pre><code>python3 -m pip install *ModuleName* </code></pre> <p>and it ran correctly. It told me I needed to update my pip. So I did </p> <pre><code>python3 -m pip install --upgrade pip </code></pre> <p>and after that pip3 was back to working.</p>
2
2016-08-16T11:29:02Z
38,974,012
<p>You can try out the <code>ensurepip</code> module.</p> <pre><code>python -m ensurepip --upgrade </code></pre>
0
2016-08-16T11:42:35Z
[ "python", "python-3.x", "pip" ]
Put a variable in a specific array position Python
38,973,742
<p>I am trying to solve a problem and I am not able to do it. So the problem is that I have a function in Python. And I have 3 cases and I want my function to be general for all these cases. So my function needs an argument which is an array like this <code>a = [a1, a2, a3]</code>, and for each case 2 of the elements in the array are constant and the other is variable. I explain this in more detail with this example code:</p> <pre><code># I have my function: def example(r,a): # r is an array. Inside this function I have a for loop for i in xrange(len(r)): #inside this loop is where I have the 3 cases: a = [r[z], a[1], a[2]] a = [a[0], r[z], a[2]] a = [a[0], a[1], r[z]] (rest of the code) ... </code></pre> <p>So I want to find a way to declare <code>a</code> so my function knows where to place the variable <code>r[z]</code>. I know that it is possible to do it with if statements, but I am looking for a one-line solution, something like a way to put the variable r[z] in a given position of the array a. Any other efficient way is appreciated.</p> <p>Thank you in advance.</p> <p>Regards</p>
0
2016-08-16T11:29:23Z
38,973,825
<p>Use <code>insert()</code> to insert an element before a given position.</p> <p>For instance, with</p> <pre><code>arr = ['A','B','C'] arr.insert(0,'D') arr becomes ['D','A','B','C'] </code></pre> <p>because 'D' is inserted before the element at index 0.</p> <p>Now, for</p> <pre><code>arr = ['A','B','C'] arr.insert(4,'D') arr becomes ['A','B','C','D'] </code></pre> <p>because 'D' is inserted before the element at index 4 (which is 1 beyond the end of the array).</p> <p>Original answer: <a href="http://stackoverflow.com/questions/2218238/inserting-values-into-specific-locations-in-a-list-in-python">Inserting values into specific locations in a list in Python</a></p>
0
2016-08-16T11:33:22Z
[ "python", "arrays", "function" ]
Put a variable in a specific array position Python
38,973,742
<p>I am trying to solve a problem and I am not able to do it. So the problem is that I have a function in Python. And I have 3 cases and I want my function to be general for all these cases. So my function needs an argument which is an array like this <code>a = [a1, a2, a3]</code>, and for each case 2 of the elements in the array are constant and the other is variable. I explain this in more detail with this example code:</p> <pre><code># I have my function: def example(r,a): # r is an array. Inside this function I have a for loop for i in xrange(len(r)): #inside this loop is where I have the 3 cases: a = [r[z], a[1], a[2]] a = [a[0], r[z], a[2]] a = [a[0], a[1], r[z]] (rest of the code) ... </code></pre> <p>So I want to find a way to declare <code>a</code> so my function knows where to place the variable <code>r[z]</code>. I know that it is possible to do it with if statements, but I am looking for a one-line solution, something like a way to put the variable r[z] in a given position of the array a. Any other efficient way is appreciated.</p> <p>Thank you in advance.</p> <p>Regards</p>
0
2016-08-16T11:29:23Z
38,974,692
<p>I have found a way of doing it:</p> <pre><code> #if I define a as: a = [a1, True, a3] # I can use np.place(a, a==True, r[z]) </code></pre> <p>In this way it will replace the value where a[i] is True with r[z].</p>
0
2016-08-16T12:13:32Z
[ "python", "arrays", "function" ]
How can I remove a list element from a list which is inside a dictionary?
38,973,796
<p>I have following dictionary of lists:-</p> <pre><code>myDict = {"A": [4, 8, 10, 9], "B": [6, 9, 10]} </code></pre> <p>I want to remove value 10 from the list that corresponds to key 'A' of myDict.</p> <p>Please suggest how can I achieve this in python?</p>
-1
2016-08-16T11:32:01Z
38,973,839
<pre><code>myDict['A'].remove(10) </code></pre> <p>because the list is modified in place</p>
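Applied to the dictionary from the question, this would look like:

```python
myDict = {"A": [4, 8, 10, 9], "B": [6, 9, 10]}
myDict["A"].remove(10)  # removes the first occurrence of 10, in place
print(myDict)  # {'A': [4, 8, 9], 'B': [6, 9, 10]}
```

Note that `remove` raises a `ValueError` if the value is not present in the list.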
4
2016-08-16T11:34:05Z
[ "python", "list", "dictionary" ]
How can I remove a list element from a list which is inside a dictionary?
38,973,796
<p>I have following dictionary of lists:-</p> <pre><code>myDict = {"A": [4, 8, 10, 9], "B": [6, 9, 10]} </code></pre> <p>I want to remove value 10 from the list that corresponds to key 'A' of myDict.</p> <p>Please suggest how can I achieve this in python?</p>
-1
2016-08-16T11:32:01Z
38,973,899
<p>You can access your list as follows:</p> <pre><code>myDict['A'] [4, 8, 10, 9] </code></pre> <p>then remove it or whatever you need to do with the list:</p> <pre><code>myDict['A'].remove(10) </code></pre>
0
2016-08-16T11:36:34Z
[ "python", "list", "dictionary" ]
Adjusting gridlines and ticks in matplotlib imshow
38,973,868
<p>I'm trying to plot a matrix of values and would like to add gridlines to make the boundary between values clearer. Unfortunately, imshow decided to locate the tick marks in the middle of each voxel. Is it possible to </p> <p>a) remove the ticks but leave the label in the same location and<br> b) add gridlines between the pixel boundaries?</p> <pre><code>import matplotlib.pyplot as plt import numpy as np im = plt.imshow(np.reshape(np.random.rand(100), newshape=(10,10)), interpolation='none', vmin=0, vmax=1, aspect='equal'); ax = plt.gca(); ax.set_xticks(np.arange(0, 10, 1)); ax.set_yticks(np.arange(0, 10, 1)); ax.set_xticklabels(np.arange(1, 11, 1)); ax.set_yticklabels(np.arange(1, 11, 1)); </code></pre> <p>Image without the gridline and with tick marks in the wrong location <a href="http://i.stack.imgur.com/axeto.png" rel="nofollow"><img src="http://i.stack.imgur.com/axeto.png" alt="enter image description here"></a></p> <pre><code>ax.grid(color='w', linestyle='-', linewidth=2) </code></pre> <p>Image with gridlines in the wrong location:</p> <p><a href="http://i.stack.imgur.com/gPh7d.png" rel="nofollow"><img src="http://i.stack.imgur.com/gPh7d.png" alt="enter image description here"></a></p>
1
2016-08-16T11:35:17Z
38,974,577
<p>Try shifting the axes' ticks:</p> <pre><code>ax.set_xticks(np.arange(-.5, 10, 1)); ax.set_yticks(np.arange(-.5, 10, 1)); ax.set_xticklabels(np.arange(1, 12, 1)); ax.set_yticklabels(np.arange(1, 12, 1)); </code></pre> <p><a href="http://i.stack.imgur.com/ffHrp.png" rel="nofollow"><img src="http://i.stack.imgur.com/ffHrp.png" alt="enter image description here"></a></p>
2
2016-08-16T12:08:26Z
[ "python", "matplotlib", "imshow" ]
Adjusting gridlines and ticks in matplotlib imshow
38,973,868
<p>I'm trying to plot a matrix of values and would like to add gridlines to make the boundary between values clearer. Unfortunately, imshow decided to locate the tick marks in the middle of each voxel. Is it possible to </p> <p>a) remove the ticks but leave the label in the same location and<br> b) add gridlines between the pixel boundaries?</p> <pre><code>import matplotlib.pyplot as plt import numpy as np im = plt.imshow(np.reshape(np.random.rand(100), newshape=(10,10)), interpolation='none', vmin=0, vmax=1, aspect='equal'); ax = plt.gca(); ax.set_xticks(np.arange(0, 10, 1)); ax.set_yticks(np.arange(0, 10, 1)); ax.set_xticklabels(np.arange(1, 11, 1)); ax.set_yticklabels(np.arange(1, 11, 1)); </code></pre> <p>Image without the gridline and with tick marks in the wrong location <a href="http://i.stack.imgur.com/axeto.png" rel="nofollow"><img src="http://i.stack.imgur.com/axeto.png" alt="enter image description here"></a></p> <pre><code>ax.grid(color='w', linestyle='-', linewidth=2) </code></pre> <p>Image with gridlines in the wrong location:</p> <p><a href="http://i.stack.imgur.com/gPh7d.png" rel="nofollow"><img src="http://i.stack.imgur.com/gPh7d.png" alt="enter image description here"></a></p>
1
2016-08-16T11:35:17Z
38,994,970
<p>Code for solution as suggested by Serenity: </p> <pre><code>plt.figure() im = plt.imshow(np.reshape(np.random.rand(100), newshape=(10,10)), interpolation='none', vmin=0, vmax=1, aspect='equal'); ax = plt.gca(); # Major ticks ax.set_xticks(np.arange(0, 10, 1)); ax.set_yticks(np.arange(0, 10, 1)); # Labels for major ticks ax.set_xticklabels(np.arange(1, 11, 1)); ax.set_yticklabels(np.arange(1, 11, 1)); # Minor ticks ax.set_xticks(np.arange(-.5, 10, 1), minor=True); ax.set_yticks(np.arange(-.5, 10, 1), minor=True); # Gridlines based on minor ticks ax.grid(which='minor', color='w', linestyle='-', linewidth=2) </code></pre> <p>Resulting image: <a href="http://i.stack.imgur.com/oa299.png" rel="nofollow"><img src="http://i.stack.imgur.com/oa299.png" alt="enter image description here"></a></p>
1
2016-08-17T11:09:30Z
[ "python", "matplotlib", "imshow" ]
How to make pricelist_id.id return the specific id
38,973,888
<p>Hi, I am trying to return a specific pricelist ID in an Odoo sale order:</p> <pre><code>@api.onchange('product_uom', 'product_uom_qty')
def product_uom_change(self):
    if not self.product_uom:
        self.price_unit = 0.0
        return
    if self.order_id.pricelist_id and self.order_id.partner_id:
        product = self.product_id.with_context(
            lang=self.order_id.partner_id.lang,
            partner=self.order_id.partner_id.id,
            quantity=self.product_uom_qty,
            date_order=self.order_id.date_order,
            pricelist=self.order_id.pricelist_id.id,  # if eqp this pricelist_id."id" should point to the last list
            uom=self.product_uom.id,
            fiscal_position=self.env.context.get('fiscal_position')
        )
        self.price_unit = self.env['account.tax']._fix_tax_included_price(product.price, product.taxes_id, self.tax_id)
</code></pre> <p>This is the function that gets the ID in sale.order.line,</p> <p>and this is what gets called (I am not sure) in product.pricelist:</p> <pre><code>def _get_item_ids(self, cr, uid, ctx):
    ProductPricelistItem = self.pool.get('product.pricelist.item')
    fields_list = ProductPricelistItem._defaults.keys()
    vals = ProductPricelistItem.default_get(cr, uid, fields_list, context=ctx)
    vals['compute_price'] = 'formula'
    return [[0, False, vals]]

def _price_rule_get_multi(self, cr, uid, pricelist, products_by_qty_by_partner, context=None):
    context = context or {}
    date = context.get('date') and context['date'][0:10] or time.strftime(DEFAULT_SERVER_DATE_FORMAT)
    products = map(lambda x: x[0], products_by_qty_by_partner)
    product_uom_obj = self.pool.get('product.uom')
    if not products:
        return {}

    categ_ids = {}
    for p in products:
        categ = p.categ_id
        while categ:
            categ_ids[categ.id] = True
            categ = categ.parent_id
    categ_ids = categ_ids.keys()

    is_product_template = products[0]._name == "product.template"
    if is_product_template:
        prod_tmpl_ids = [tmpl.id for tmpl in products]
        # all variants of all products
        prod_ids = [p.id for p in list(chain.from_iterable([t.product_variant_ids for t in products]))]
    else:
        prod_ids = [product.id for product in products]
        prod_tmpl_ids = [product.product_tmpl_id.id for product in products]

    # Load all rules
    cr.execute(
        'SELECT i.id '
        'FROM product_pricelist_item AS i '
        'LEFT JOIN product_category AS c '
        'ON i.categ_id = c.id '
        'WHERE (product_tmpl_id IS NULL OR product_tmpl_id = any(%s))'
        'AND (product_id IS NULL OR product_id = any(%s))'
        'AND (categ_id IS NULL OR categ_id = any(%s)) '
        'AND (pricelist_id = %s) '
        'AND ((i.date_start IS NULL OR i.date_start&lt;=%s) AND (i.date_end IS NULL OR i.date_end&gt;=%s))'
        'ORDER BY applied_on, min_quantity desc, c.parent_left desc',
        (prod_tmpl_ids, prod_ids, categ_ids, pricelist.id, date, date))
    item_ids = [x[0] for x in cr.fetchall()]
    items = self.pool.get('product.pricelist.item').browse(cr, uid, item_ids, context=context)
    results = {}
    for product, qty, partner in products_by_qty_by_partner:
        results[product.id] = 0.0
        suitable_rule = False

        # Final unit price is computed according to `qty` in the `qty_uom_id` UoM.
        # An intermediary unit price may be computed according to a different UoM, in
        # which case the price_uom_id contains that UoM.
        # The final price will be converted to match `qty_uom_id`.
        qty_uom_id = context.get('uom') or product.uom_id.id
        price_uom_id = product.uom_id.id
        qty_in_product_uom = qty
        if qty_uom_id != product.uom_id.id:
            try:
                qty_in_product_uom = product_uom_obj._compute_qty(
                    cr, uid, context['uom'], qty, product.uom_id.id)
            except UserError:
                # Ignored - incompatible UoM in context, use default product UoM
                pass

        # if Public user try to access standard price from website sale, need to call _price_get.
        price = self.pool['product.template']._price_get(cr, uid, [product], 'list_price', context=context)[product.id]
        price_uom_id = qty_uom_id
        for rule in items:
            if rule.min_quantity and qty_in_product_uom &lt; rule.min_quantity:
                continue
            if is_product_template:
                if rule.product_tmpl_id and product.id != rule.product_tmpl_id.id:
                    continue
                if rule.product_id and not (product.product_variant_count == 1 and product.product_variant_ids[0].id == rule.product_id.id):
                    # product rule acceptable on template if has only one variant
                    continue
            else:
                if rule.product_tmpl_id and product.product_tmpl_id.id != rule.product_tmpl_id.id:
                    continue
                if rule.product_id and product.id != rule.product_id.id:
                    continue

            if rule.categ_id:
                cat = product.categ_id
                while cat:
                    if cat.id == rule.categ_id.id:
                        break
                    cat = cat.parent_id
                if not cat:
                    continue

            if rule.base == 'pricelist' and rule.base_pricelist_id:
                price_tmp = self._price_get_multi(cr, uid, rule.base_pricelist_id, [(product, qty, partner)], context=context)[product.id]
                ptype_src = rule.base_pricelist_id.currency_id.id
                price = self.pool['res.currency'].compute(cr, uid, ptype_src, pricelist.currency_id.id, price_tmp, round=False, context=context)
            else:
                # if base option is public price take sale price else cost price of product
                # price_get returns the price in the context UoM, i.e. qty_uom_id
                price = self.pool['product.template']._price_get(cr, uid, [product], rule.base, context=context)[product.id]

            convert_to_price_uom = (lambda price: product_uom_obj._compute_price(
                cr, uid, product.uom_id.id, price, price_uom_id))

            if price is not False:
                if rule.compute_price == 'fixed':
                    price = convert_to_price_uom(rule.fixed_price)
                elif rule.compute_price == 'percentage':
                    price = (price - (price * (rule.percent_price / 100))) or 0.0
                else:
                    # complete formula
                    price_limit = price
                    price = (price - (price * (rule.price_discount / 100))) or 0.0
                    if rule.price_round:
                        price = tools.float_round(price, precision_rounding=rule.price_round)
                    if rule.price_surcharge:
                        price_surcharge = convert_to_price_uom(rule.price_surcharge)
                        price += price_surcharge
                    if rule.price_min_margin:
                        price_min_margin = convert_to_price_uom(rule.price_min_margin)
                        price = max(price, price_limit + price_min_margin)
                    if rule.price_max_margin:
                        price_max_margin = convert_to_price_uom(rule.price_max_margin)
                        price = min(price, price_limit + price_max_margin)
                suitable_rule = rule
            break

        # Final price conversion into pricelist currency
        if suitable_rule and suitable_rule.compute_price != 'fixed' and suitable_rule.base != 'pricelist':
            price = self.pool['res.currency'].compute(cr, uid, product.currency_id.id, pricelist.currency_id.id, price, round=False, context=context)

        results[product.id] = (price, suitable_rule and suitable_rule.id or False)
    return results
</code></pre> <p>Now what change do I make so that it returns a specific ID, say the last ID in the list?</p>
0
2016-08-16T11:36:12Z
38,991,666
<p>There is a flaw in logic here. The <code>self.order_id.pricelist_id</code> points to a <em>single</em> pricelist and there are no first or last options. </p> <p>If you would like to get a list of pricelists and take the last of them, you could do something like:</p> <pre><code>domain = []  # returns all records in the table
domain = [('name', '=', 'A certain pricelist')]  # returns only the records matching this domain

ProductPricelist = self.env['product.pricelist']
pricelist_id = ProductPricelist.search(domain, limit=1, order='id desc').id
</code></pre> <p>and then use it in the <code>product = {..</code>.</p> <p>The <code>limit=1</code> limits to a single result, while <code>order='id desc'</code> ensures a last available record by <code>id</code>.</p>
1
2016-08-17T08:29:02Z
[ "python", "openerp", "odoo-9" ]
Permutations in list of lists
38,974,000
<p>So, say I have a list of lists like</p> <pre><code>l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] </code></pre> <p>How do I get all possible permutations with the restriction that I can only pick 1 item per list? Meaning that 147 or 269 would be possible permutations, whereas 145 would be wrong since 4 and 5 are in the same list. Also, how does this work for a list containing any number of lists?</p>
0
2016-08-16T11:42:08Z
38,974,244
<p>This worked in Python 3; see the comment on the last line for Python 2.</p> <pre><code>l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
row1, row2, row3 = l

# get all the permutations as lists [1,4,7], etc.
permutations = ([x, y, z] for x in row1 for y in row2 for z in row3)
# get strings to have a more easily readable output
permutation_strings = (''.join(map(str, permutation)) for permutation in permutations)

print(*permutation_strings)
# in python2 you can use: print list(permutation_strings)
</code></pre>
0
2016-08-16T11:53:46Z
[ "python", "list", "permutation" ]
Permutations in list of lists
38,974,000
<p>So, say I have a list of lists like</p> <pre><code>l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] </code></pre> <p>How do I get all possible permutations with the restriction that I can only pick 1 item per list? Meaning that 147 or 269 would be possible permutations, whereas 145 would be wrong since 4 and 5 are in the same list. Also, how does this work for a list containing any number of lists?</p>
0
2016-08-16T11:42:08Z
38,974,424
<p>This works for me in Python 2.7 and 3.5</p> <pre><code>import itertools

l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(list(itertools.product(*l)))
</code></pre> <p>it returns</p> <pre><code>[(1, 4, 7), (1, 4, 8), (1, 4, 9), (1, 5, 7), (1, 5, 8), (1, 5, 9), (1, 6, 7),
 (1, 6, 8), (1, 6, 9), (2, 4, 7), (2, 4, 8), (2, 4, 9), (2, 5, 7), (2, 5, 8),
 (2, 5, 9), (2, 6, 7), (2, 6, 8), (2, 6, 9), (3, 4, 7), (3, 4, 8), (3, 4, 9),
 (3, 5, 7), (3, 5, 8), (3, 5, 9), (3, 6, 7), (3, 6, 8), (3, 6, 9)]
</code></pre>
2
2016-08-16T12:01:41Z
[ "python", "list", "permutation" ]
Permutations in list of lists
38,974,000
<p>So, say I have a list of lists like</p> <pre><code>l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] </code></pre> <p>How do I get all possible permutations with the restriction that I can only pick 1 item per list? Meaning that 147 or 269 would be possible permutations, whereas 145 would be wrong since 4 and 5 are in the same list. Also, how does this work for a list containing any number of lists?</p>
0
2016-08-16T11:42:08Z
38,974,439
<p>I wouldn't call what you are looking for permutations, but the following recursive algorithm should return what I assume you would like to see</p> <pre><code>def get_all_possibilities(S, P=[]):
    if S == []:
        return P
    s = S[0]
    if P == []:
        for x in s:
            P.append(str(x))
        return get_all_possibilities(S[1:], P)
    else:
        new_P = []
        for x in s:
            for p in P:
                new_P.append(p + str(x))
        return get_all_possibilities(S[1:], new_P)

print get_all_possibilities([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
</code></pre> <p>My output was the following 27 items which could later be converted back to integers if you like;</p> <p>['147', '247', '347', '157', '257', '357', '167', '267', '367', '148', '248', '348', '158', '258', '358', '168', '268', '368', '149', '249', '349', '159', '259', '359', '169', '269', '369']</p>
0
2016-08-16T12:02:11Z
[ "python", "list", "permutation" ]
Permutations in list of lists
38,974,000
<p>So, say I have a list of lists like</p> <pre><code>l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] </code></pre> <p>How do I get all possible permutations with the restriction that I can only pick 1 item per list? Meaning that 147 or 269 would be possible permutations, whereas 145 would be wrong since 4 and 5 are in the same list. Also, how does this work for a list containing any number of lists?</p>
0
2016-08-16T11:42:08Z
38,974,470
<p>You could use recursion.</p> <pre><code>l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

def permutate(w, l):
    for x in l[0]:
        if len(l) &gt; 1:
            permutate(w + str(x), l[1:])
        else:
            print w + str(x)

permutate("", l)
</code></pre>
0
2016-08-16T12:03:39Z
[ "python", "list", "permutation" ]
Permutations in list of lists
38,974,000
<p>So, say I have a list of lists like</p> <pre><code>l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] </code></pre> <p>How do I get all possible permutations with the restriction that I can only pick 1 item per list? Meaning that 147 or 269 would be possible permutations, whereas 145 would be wrong since 4 and 5 are in the same list. Also, how does this work for a list containing any number of lists?</p>
0
2016-08-16T11:42:08Z
38,974,523
<p>You can use <a href="https://docs.python.org/2.7/library/itertools.html" rel="nofollow">itertools</a> for this!</p> <pre><code>from itertools import product

l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(list(product(*l)))
</code></pre> <p>A few things to note:</p> <ul> <li><p>I'm passing <code>*l</code> instead of simply <code>l</code> as <code>product</code> expects iterables as arguments, not a list of iterables; another way to write this would have been:</p> <pre><code>product([1, 2, 3], [4, 5, 6], [7, 8, 9])
</code></pre> <p>i.e., passing every list as a single argument. <code>*l</code> unpacks the elements of <code>l</code> into arguments for you.</p></li> <li><p><code>product</code> does not return a list, but a generator. You can pass that to anything that expects an iterable. "Printing" the generator would not be helpful (you would not see the content of the resulting list, but <code>&lt;itertools.product object...&gt;</code> which is only mildly interesting); this is why I'm forcing a conversion to a list using <code>list()</code></p></li> <li>using <code>print()</code> with parentheses allows this code to be compatible with Python 2 &amp; 3.</li> </ul>
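The unpacking point above generalizes to any number of sub-lists; a small sketch (the helper name <code>combos</code> is mine, not from the answer):

```python
from itertools import product

def combos(lists):
    # one pick per sub-list, joined into a readable string
    return [''.join(map(str, p)) for p in product(*lists)]

print(combos([[1, 2], [3, 4]]))                        # ['13', '14', '23', '24']
print(len(combos([[1, 2, 3], [4, 5, 6], [7, 8, 9]])))  # 27
```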
0
2016-08-16T12:05:59Z
[ "python", "list", "permutation" ]
Rename many files sequentially Python
38,974,166
<p>My training on Python is ongoing and I'm currently trying to rename sequentially many files that have this kind of root and extension:</p> <p>Ite_1_0001.eps</p> <p>Ite_2_0001.eps</p> <p>Ite_3_0001.eps</p> <p>Ite_4_0001.eps</p> <p>However, I'm trying to rename all these files as follows:</p> <p>Ite_0001.eps</p> <p>Ite_0002.eps</p> <p>Ite_0003.eps</p> <p>Ite_0004.eps</p> <p>So I'm proceeding in this way:</p> <pre><code>for path, subdirs, files in os.walk(newpath):
    num = len(os.listdir(newpath))
    for filename in files:
        basename, extension = os.path.splitext(filename)
        for x in range(1, num+1):
            new_filename = '_%04d' % x + extension
            os.rename(os.path.join(newpath, filename), os.path.join(newpath, new_filename))
</code></pre> <p>It's not working at all because all the files are erased from the directory and when running the script once at a time I have this:</p> <p>First run: _00004</p> <p>Second run: _00005</p> <p>.... and so on.</p> <p>Could any one have some tips that could help me to achieve this task :).</p> <p>Thank you very much for your help.</p>
3
2016-08-16T11:50:20Z
38,974,203
<p>You can dynamically change the thing you're substituting in within your loop, like so</p> <pre><code>import os, re

n = 1
for i in sorted(os.listdir('.')):
    if re.match(r'Ite_\d+_\d{4}\.eps$', i):
        os.rename(i, re.sub(r'\d+_\d{4}', '%04d' % n, i))
        n += 1
</code></pre>
2
2016-08-16T11:52:06Z
[ "python" ]
Rename many files sequentially Python
38,974,166
<p>My training on Python is ongoing and I'm currently trying to rename sequentially many files that have this kind of root and extension:</p> <p>Ite_1_0001.eps</p> <p>Ite_2_0001.eps</p> <p>Ite_3_0001.eps</p> <p>Ite_4_0001.eps</p> <p>However, I'm trying to rename all these files as follows:</p> <p>Ite_0001.eps</p> <p>Ite_0002.eps</p> <p>Ite_0003.eps</p> <p>Ite_0004.eps</p> <p>So I'm proceeding in this way:</p> <pre><code>for path, subdirs, files in os.walk(newpath):
    num = len(os.listdir(newpath))
    for filename in files:
        basename, extension = os.path.splitext(filename)
        for x in range(1, num+1):
            new_filename = '_%04d' % x + extension
            os.rename(os.path.join(newpath, filename), os.path.join(newpath, new_filename))
</code></pre> <p>It's not working at all because all the files are erased from the directory and when running the script once at a time I have this:</p> <p>First run: _00004</p> <p>Second run: _00005</p> <p>.... and so on.</p> <p>Could any one have some tips that could help me to achieve this task :).</p> <p>Thank you very much for your help.</p>
3
2016-08-16T11:50:20Z
38,974,531
<p>I wrote a function that, given your base name as input, returns the correct name.</p> <pre><code>def newname(old_name):
    # 'Ite_1_0001.eps' splits into ('Ite', '1', '0001.eps')
    prefix, num, rest = old_name.split('_')
    return '%s_%04d%s' % (prefix, int(num), rest[4:])
</code></pre>
0
2016-08-16T12:06:17Z
[ "python" ]
Rename many files sequentially Python
38,974,166
<p>My training on Python is ongoing and I'm currently trying to rename sequentially many files that have this kind of root and extension:</p> <p>Ite_1_0001.eps</p> <p>Ite_2_0001.eps</p> <p>Ite_3_0001.eps</p> <p>Ite_4_0001.eps</p> <p>However, I'm trying to rename all these files as follows:</p> <p>Ite_0001.eps</p> <p>Ite_0002.eps</p> <p>Ite_0003.eps</p> <p>Ite_0004.eps</p> <p>So I'm proceeding in this way:</p> <pre><code>for path, subdirs, files in os.walk(newpath):
    num = len(os.listdir(newpath))
    for filename in files:
        basename, extension = os.path.splitext(filename)
        for x in range(1, num+1):
            new_filename = '_%04d' % x + extension
            os.rename(os.path.join(newpath, filename), os.path.join(newpath, new_filename))
</code></pre> <p>It's not working at all because all the files are erased from the directory and when running the script once at a time I have this:</p> <p>First run: _00004</p> <p>Second run: _00005</p> <p>.... and so on.</p> <p>Could any one have some tips that could help me to achieve this task :).</p> <p>Thank you very much for your help.</p>
3
2016-08-16T11:50:20Z
38,974,729
<p>You could test the approach with a list of strings. So you do not run the risk of deleting the files. ;-)</p> <pre><code>files = ["Ite_1_0001.eps", "Ite_2_0001.eps", "Ite_3_0001.eps", "Ite_4_0001.eps",]

for f in files:
    # Get the value between underscores. This is the index.
    index = int(f[4:f.index('_', 4)])
    new_name = '_%04d' % index
    # Join the prefix, index and suffix of the file
    print ''.join([f[:3], new_name, f[-4:]])
</code></pre> <p>Ite_0001.eps</p> <p>Ite_0002.eps</p> <p>Ite_0003.eps</p> <p>Ite_0004.eps</p>
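Putting the pieces together, a sketch that actually renames files, using the value between the underscores as the new 4-digit counter (the temporary directory only simulates the folder from the question):

```python
import os
import re
import tempfile

# build a fake folder with the question's file names, in scrambled order
d = tempfile.mkdtemp()
for i in (3, 1, 2):
    open(os.path.join(d, 'Ite_%d_0001.eps' % i), 'w').close()

# renumber by the value between the underscores, zero-padded to 4 digits
pat = re.compile(r'^Ite_(\d+)_\d{4}(\.eps)$')
for name in os.listdir(d):
    m = pat.match(name)
    if m:
        os.rename(os.path.join(d, name),
                  os.path.join(d, 'Ite_%04d%s' % (int(m.group(1)), m.group(2))))

print(sorted(os.listdir(d)))   # ['Ite_0001.eps', 'Ite_0002.eps', 'Ite_0003.eps']
```

Because the new index comes from the captured number rather than a running counter, the result does not depend on the order in which `os.listdir` returns the files.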
2
2016-08-16T12:14:50Z
[ "python" ]
How to insert/append unstructured data to bigquery table
38,974,176
<p><strong>Background</strong></p> <p>I want to insert/append newline formatted JSON into <code>bigquery</code> table through python client API.</p> <p>Eg:</p> <pre><code>{"name":"xyz",mobile:xxx,location:"abc"}
{"name":"xyz",mobile:xxx,age:22}
</code></pre> <p>Issue is, all fields in a row are optional and there is no fixed defined schema for the data.</p> <p><strong>Query</strong></p> <p>I have read that we can use Federated tables, which supports autoschema detection.</p> <p>However, I am looking for a feature, that would automatically detect schema from data, create tables accordingly and even adjust the table schema if any extra columns/keys appear in data instead of creating new table.</p> <p>Would this be possible using python client API.</p>
1
2016-08-16T11:50:39Z
38,978,809
<p>You can use autodetect with BigQuery load API, i.e. your example using bq cli tool will look like following:</p> <pre><code>~$ cat /tmp/x.json
{"name":"xyz","mobile":"xxx","location":"abc"}
{"name":"xyz","mobile":"xxx","age":"22"}

~$ bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON tmp.x /tmp/x.json
Upload complete.

~$ bq show tmp.x
Table tmp.x

   Last modified          Schema          Total Rows   Total Bytes   Expiration
 ----------------- --------------------- ------------ ------------- ------------
  16 Aug 08:23:35   |- age: integer       2            33
                    |- location: string
                    |- mobile: string
                    |- name: string

~$ bq query "select * from tmp.x"
+------+----------+--------+------+
| age  | location | mobile | name |
+------+----------+--------+------+
| NULL | abc      | xxx    | xyz  |
| 22   | NULL     | xxx    | xyz  |
+------+----------+--------+------+
</code></pre> <p><strong>Update:</strong> If later you need to add additional fields, you can use schema_update_option to allow new fields. Alas it doesn't yet work with autodetect, so you need to provide new schema explicitly to the load API:</p> <pre><code>~$ cat /tmp/x1.json
{"name":"abc","mobile":"yyy","age":"25","gender":"male"}

~$ bq load --schema=name:STRING,age:INTEGER,location:STRING,mobile:STRING,gender:STRING --schema_update_option=ALLOW_FIELD_ADDITION --source_format=NEWLINE_DELIMITED_JSON tmp.x /tmp/x1.json
Upload complete.

~$ bq show tmp.x
Table tmp.x

   Last modified          Schema          Total Rows   Total Bytes   Expiration
 ----------------- --------------------- ------------ ------------- -----------
  19 Aug 10:43:09   |- name: string       3            57
                    |- age: integer
                    |- location: string
                    |- mobile: string
                    |- gender: string

~$ bq query "select * from tmp.x"
status: DONE
+------+------+----------+--------+--------+
| name | age  | location | mobile | gender |
+------+------+----------+--------+--------+
| abc  | 25   | NULL     | yyy    | male   |
| xyz  | NULL | abc      | xxx    | NULL   |
| xyz  | 22   | NULL     | xxx    | NULL   |
+------+------+----------+--------+--------+
</code></pre>
2
2016-08-16T15:26:39Z
[ "python", "python-2.7", "api", "google-app-engine", "google-bigquery" ]
File seperation
38,974,207
<pre><code>with open('C:\Users\craig\Downloads\folder\test.txt', 'r') as myfile:
    test = myfile.read().replace('', '')
</code></pre> <p>test.txt is:</p> <pre><code>hugh:ted
mark:mike
ethan:jay
</code></pre> <p>how would I get python to remove the : and everything past :? For example, how would I remove :ted, :mike, :jay, without having to manually write it in the replace part?</p>
-3
2016-08-16T11:52:13Z
38,974,697
<pre><code>import fileinput

# raw string so that \t in the path is not read as a tab;
# lines without ':' are printed back unchanged
for line in fileinput.input(r'C:\Users\craig\Downloads\folder\test.txt', inplace=True):
    print line.rstrip('\n').split(':', 1)[0]
</code></pre>
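If rewriting the file in place is not required, reading it once and splitting each line on the first colon is enough; the sample data is recreated in a temporary folder here so the sketch is self-contained:

```python
import os
import tempfile

# recreate the question's test.txt
path = os.path.join(tempfile.mkdtemp(), 'test.txt')
with open(path, 'w') as f:
    f.write('hugh:ted\nmark:mike\nethan:jay\n')

# keep only what comes before the first ':' on each line
with open(path) as f:
    names = [line.split(':', 1)[0] for line in f]

print(names)   # ['hugh', 'mark', 'ethan']
```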
0
2016-08-16T12:13:51Z
[ "python", "file" ]
File seperation
38,974,207
<pre><code>with open('C:\Users\craig\Downloads\folder\test.txt', 'r') as myfile:
    test = myfile.read().replace('', '')
</code></pre> <p>test.txt is:</p> <pre><code>hugh:ted
mark:mike
ethan:jay
</code></pre> <p>how would I get python to remove the : and everything past :? For example, how would I remove :ted, :mike, :jay, without having to manually write it in the replace part?</p>
-3
2016-08-16T11:52:13Z
38,975,089
<p>You can try the following code snippet.</p> <pre><code>import re
import fileinput

for line in fileinput.FileInput("/home/dma3node/test.txt", inplace=1):
    line = re.sub(r":.*", "", line.rstrip("\n"))
    print line
</code></pre>
0
2016-08-16T12:34:01Z
[ "python", "file" ]
WSGI application raised an exception
38,974,253
<p>I am facing a wsgi exception while running my flask application code on the server. </p> <p>Here is my manage.py </p> <pre><code>from app import app1, db
from flask_script import Manager, Shell
from flask_migrate import Migrate, MigrateCommand

migrate = Migrate(app1, db)
manager = Manager(app1)
manager.add_command('db', MigrateCommand)

def make_shell_context():
    return dict(app=app1, db=db)

manager.add_command("shell", Shell(make_context=make_shell_context))

if __name__ == '__main__':
    manager.run()
</code></pre> <p>and app.py </p> <pre><code>app1 = Flask(__name__)
app1.config.from_object(config['default'])
rest_api = Api(app1)
db = SQLAlchemy(app1)
bcrypt = Bcrypt(app1)

from app import routes

Compress(app1)
assets = Environment(app1)
define_assets(assets)
cache = Cache(app1, config={'CACHE_TYPE': 'simple'})
</code></pre> <p>In my local environment, there is no error. I run my application with this <code>python manage.py runserver</code> command.</p> <p>Now, on the server I successfully ran the steps <code>python manage.py db init</code>, <code>python manage.py db migrate</code>, <code>python manage.py db upgrade</code>, and they created and updated the database successfully. I have installed <code>passenger</code> to serve the application.</p> <p>My <code>passenger_wsgi.py</code> looks like this</p> <pre><code>from manage import manager as application
</code></pre> <p>Now, when I run <code>passenger start --port 3003 -a '0.0.0.0'</code>, it throws me this error</p> <pre><code>[ 2016-08-16 07:44:15.0758 30180/7f90226f8700 age/Cor/Con/InternalUtils.cpp:112 ]: [Client 2-1] Sending 502 response: application did not send a complete response
App 30251 stderr: Traceback (most recent call last):
App 30251 stderr:   File "/usr/share/passenger/helper-scripts/wsgi-loader.py", line 163, in main_loop
App 30251 stderr:     socket_hijacked = self.process_request(env, input_stream, client)
App 30251 stderr:   File "/usr/share/passenger/helper-scripts/wsgi-loader.py", line 297, in process_request
App 30251 stderr:     result = self.app(env, start_response)
App 30251 stderr: TypeError: __call__() takes at most 2 arguments (3 given)
</code></pre>
0
2016-08-16T11:54:01Z
38,977,526
<p>Import the Flask application as the application to run, not the Flask-Script manager.</p> <pre><code>from app import app1 as application </code></pre> <p>A Flask-Script manager is for running commands on the command line. It is not a WSGI application. You can still use it to run other commands, but the WSGI server needs the Flask application.</p>
2
2016-08-16T14:25:12Z
[ "python", "python-2.7", "flask", "passenger", "flask-sqlalchemy" ]
How reverse works in python
38,974,416
<p>I have an array which contains some elements. </p> <p>How does doing <code>arr[::-1]</code> sort the entire array? </p> <p>What is the logic behind that?</p>
0
2016-08-16T12:01:18Z
38,974,477
<p>This is <a href="https://docs.python.org/2/whatsnew/2.3.html#extended-slices" rel="nofollow">extended slice</a> syntax. It works by doing [begin:end:step] - by leaving begin and end off and specifying a step of -1, it reverses a string. Example: </p> <pre><code>&gt;&gt;&gt; 'hello world'[::-1]
'dlrow olleh'
</code></pre> <p><strong>See also</strong></p> <ul> <li><a href="http://stackoverflow.com/a/6238928/42223">alternate simple approach</a></li> <li><a href="http://stackoverflow.com/a/766291/42223">alternate simple approach</a></li> <li><a href="http://stackoverflow.com/questions/509211/explain-pythons-slice-notation">alternate explanation of slice notation</a></li> </ul>
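Worth stressing that <code>arr[::-1]</code> reverses rather than sorts (it only looks sorted when the data was already ordered the other way), and that the slice builds a new list; a quick sketch of the copy vs. in-place behaviour:

```python
arr = [3, 1, 4, 1, 5]

rev = arr[::-1]   # a new, reversed list; note: reversed, not sorted
print(rev)        # [5, 1, 4, 1, 3]
print(arr)        # original untouched: [3, 1, 4, 1, 5]

arr.reverse()     # in-place alternative, mutates arr
print(arr)        # [5, 1, 4, 1, 3]
```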
5
2016-08-16T12:04:00Z
[ "python", "reverse" ]
Python generator to list
38,974,429
<p>I have a Python generator <code>lexg</code> which produces a list at each iteration. The code seems to work in the traditional <code>for</code>-loop sense, that is, </p> <pre><code>for i in lexg(2,2):
    print(i)
</code></pre> <p>produces:</p> <pre><code>[2, 0]
[1, 1]
[1, 0]
[0, 2]
[0, 1]
[0, 0]
</code></pre> <p>but seems to break in list comprehension, that is, both </p> <pre><code>list(lexg(2,2))
</code></pre> <p>and </p> <pre><code>[i for i in lexg(2,2)]
</code></pre> <p>produce</p> <pre><code>[[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]]
</code></pre> <p>whereas, I expect <code>list(lexg(2,2))</code> to produce</p> <pre><code>[[2, 0]
 [1, 1]
 [1, 0]
 [0, 2]
 [0, 1]
 [0, 0]]
</code></pre> <p>The code for <code>lexg</code> is:</p> <pre><code>def lexg( n, d ):
    exponent = [0] * n;
    def looper( m, totalDegree ):
        r = reversed( range( 0, d - totalDegree + 1 ) );
        for j in r:
            exponent[n-m] = j;
            if m == 1:
                yield exponent;
            else:
                for x in looper( m-1, totalDegree+j ):
                    yield x
    return looper( n, 0 );
</code></pre> <p><strong>Edit/Solution</strong></p> <p>The problem, as suggested below, is the fact that the same list is returned at every step of the generator. One solution is therefore to copy the list before returning. For example, I have changed the <code>yield exponent;</code> line of <code>lexg</code> to <code>yield list(exponent);</code>, which resolves the problem. </p>
0
2016-08-16T12:01:44Z
38,974,585
<p>As pointed out by <a href="http://stackoverflow.com/users/476/deceze">deceze</a>, you essentially end up with a list of lists pointing to the same instance.</p> <p>To make it more clear, try that </p> <pre><code>a = list(lexg(2,2))
a[0][0] = 3
print(a)
</code></pre> <p>which results in</p> <pre><code>[[3, 0], [3, 0], [3, 0], [3, 0], [3, 0], [3, 0]]
</code></pre>
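The fix the question's edit describes is to yield a copy of the shared list (<code>exponent[:]</code> or <code>list(exponent)</code>) so each produced item is independent; a condensed sketch of the repaired generator:

```python
def lexg(n, d):
    exponent = [0] * n

    def looper(m, total_degree):
        for j in reversed(range(d - total_degree + 1)):
            exponent[n - m] = j
            if m == 1:
                yield exponent[:]   # yield a copy, not the shared list
            else:
                for x in looper(m - 1, total_degree + j):
                    yield x

    return looper(n, 0)

print(list(lexg(2, 2)))   # [[2, 0], [1, 1], [1, 0], [0, 2], [0, 1], [0, 0]]
```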
4
2016-08-16T12:08:58Z
[ "python", "recursion", "generator", "yield-keyword" ]
Pandas: read csv file with UCS-2 LE coding
38,974,446
<p>I would like to import 10K csv files generated by 3rd party app with UCS-2 LE coding. I wouldn't like to use csv reader as in example <a href="http://stackoverflow.com/questions/9177820/python-utf-16-csv-reader">Python UTF-16</a> as there are so many files.</p> <p>Below you can find my code, where I'm trying to read just one. I'm using Python 3.4 and Pandas 0.18.1</p> <p><a href="http://www.wikiupload.com/BW08IJR2AHKJ5PQ" rel="nofollow">Sample file</a> to download.</p> <p><strong>MWE:</strong></p> <pre><code>import pandas as pd

df = pd.read_csv('1.csv', encoding="mbcs", skip_blank_lines=True,
                 error_bad_lines=False, decimal=',', sep='\s+')
</code></pre> <p>I got an error:</p> <blockquote> <p>CParserError: Error tokenizing data. C error: EOF inside string starting at line 17</p> </blockquote>
1
2016-08-16T12:02:31Z
38,977,201
<p>Actually I don't know what your expected output should look like, but I'm reading your file with:</p> <pre><code>df = pd.read_csv('1.csv', encoding="utf-16", skip_blank_lines=True,
                 error_bad_lines=False, decimal=',', sep='\s+', skiprows=5)
</code></pre> <p>obtaining something like:</p> <pre><code>In [17]: df.head()
Out[17]:
  Oznaczenie techniczne  Wartość Jednostka                Opis obiektu  \
0  PPHS:LPlt'Ahu'CumEg1    488.0        GJ  Energia skumulowana chłodu
1  PPHS:LPlt'Ahu'CumVlm  57263.0        m3        Objętość skumulowana
2      PPHS:LPlt'Ahu'Fl     31.6      m3/h                    Przepływ
3     PPHS:LPlt'Ahu'Pwr    111.0        kW                         Moc
4     PPHS:LPlt'Ahu'TFl     12.7        °C       Temperatura zasilania

  Parameter Value         Timestamp
0           PrVal  2016-07-27 19:55
1           PrVal  2016-07-27 19:55
2           PrVal  2016-07-27 19:55
3           PrVal  2016-07-27 19:55
4           PrVal  2016-07-27 19:55
</code></pre> <p>Basically I'm skipping the first 5 rows (related to the report of the file, which actually messes up the file formatting). Hope that helps.</p>
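The decisive part is <code>encoding="utf-16"</code>: with a byte-order mark present, Python's codec picks the endianness by itself, which is why it copes with a UCS-2 LE export. A stdlib-only sketch of that round trip (file contents invented for illustration, not taken from the sample file):

```python
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'report.csv')

# 'utf-16' writes a BOM plus native-endian code units (little-endian on x86),
# which is the shape of a typical "UCS-2 LE" export
with io.open(path, 'w', encoding='utf-16') as f:
    f.write(u'Oznaczenie\tWarto\u015b\u0107\nPPHS\t488,0\n')

# reading back with 'utf-16' honours the BOM, so the text round-trips
with io.open(path, encoding='utf-16') as f:
    lines = f.read().splitlines()

print(lines[1])
```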
1
2016-08-16T14:10:01Z
[ "python", "csv", "pandas" ]
Regex search and replace: How to move characters in a block of text
38,974,589
<p>I'm having a search and replace problem. Take this example.</p> <p>I want to go from:</p> <pre><code>"Word1 word2 =word3 *word4 word5= word6 word7* (*word8)" </code></pre> <p>To this:</p> <pre><code>"Word1 word2 word3= word4* word5= word6 word7* word8*" </code></pre> <p>i.e. To replace any word starting a * or = with itself with the * or = moved to the end of the word, and to make it worse sometimes those words are in brackets, and/or could be at the start or end of a line.</p> <p>I've tried to search for the solution but I am relatively new at regex and whilst I can cobble together solutions that find the words I am looking for, e.g.:</p> <pre><code>\[\*,\=][a-zA-Z]{1,}[\s,\)] </code></pre> <p>I can't figure out / understand how to do the replace and maintain end of line / start of line characters, white space and brackets.</p> <p>I am using Python, but if it makes a material difference I'm happy to try using something else.</p>
3
2016-08-16T12:09:18Z
38,974,683
<p>You need 2 capture groups and replace them together:</p> <pre><code>&gt;&gt;&gt; import re
&gt;&gt;&gt;
&gt;&gt;&gt; s = "Word1 word2 =word3 *word4 word5= word6 word7* (*word8)"
&gt;&gt;&gt;
&gt;&gt;&gt; re.sub(r'(\*|=)(\b\w+\b)', r'\2\1', s)
'Word1 word2 word3= word4* word5= word6 word7* (word8*)'
</code></pre>
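The same swap can also be written with a replacement function instead of a backreference string, which scales better once the rule gets more involved; a sketch:

```python
import re

s = "Word1 word2 =word3 *word4 word5= word6 word7* (*word8)"

def move_marker(m):
    # put the captured word first, then the captured marker
    return m.group(2) + m.group(1)

print(re.sub(r'([*=])(\w+)', move_marker, s))
# Word1 word2 word3= word4* word5= word6 word7* (word8*)
```

A trailing marker like <code>word5=</code> is left alone because the pattern requires at least one word character after the marker.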
3
2016-08-16T12:13:18Z
[ "python", "regex", "replace" ]
Regex search and replace: How to move characters in a block of text
38,974,589
<p>I'm having a search and replace problem. Take this example.</p> <p>I want to go from:</p> <pre><code>"Word1 word2 =word3 *word4 word5= word6 word7* (*word8)" </code></pre> <p>To this:</p> <pre><code>"Word1 word2 word3= word4* word5= word6 word7* word8*" </code></pre> <p>i.e. To replace any word starting a * or = with itself with the * or = moved to the end of the word, and to make it worse sometimes those words are in brackets, and/or could be at the start or end of a line.</p> <p>I've tried to search for the solution but I am relatively new at regex and whilst I can cobble together solutions that find the words I am looking for, e.g.:</p> <pre><code>\[\*,\=][a-zA-Z]{1,}[\s,\)] </code></pre> <p>I can't figure out / understand how to do the replace and maintain end of line / start of line characters, white space and brackets.</p> <p>I am using Python, but if it makes a material difference I'm happy to try using something else.</p>
3
2016-08-16T12:09:18Z
38,975,121
<p>Use a <strong><em>verbose</em></strong> expression like the following:</p> <pre><code>import re

rx = re.compile('''
                \(?      # opening parenthesis or not
                ([*=])   # capture one of * or = to Group 1
                (\w+)    # at least one word character to Group 2
                \)?      # a closing parenthesis
                ''', re.VERBOSE)

string = "Word1 word2 =word3 *word4 word5= word6 word7* (*word8)"
new_string = rx.sub(r'\2\1', string)
</code></pre> <p><hr> See <a href="http://ideone.com/YCkISh" rel="nofollow"><strong>a demo on ideone.com</strong></a> and add other characters to the class in square brackets (<code>[...]</code>) as needed.</p>
2
2016-08-16T12:36:13Z
[ "python", "regex", "replace" ]
How does one reorder information in an XML document in python 3?
38,974,656
<p>Let's suppose I have the following <em>XML structure</em>:</p> <pre><code>&lt;?xml version="1.0" encoding="utf-8" ?&gt; &lt;Document&gt; &lt;CstmrCdtTrfInitn&gt; &lt;GrpHdr&gt; &lt;other_tags&gt;a&lt;/other_tags&gt; &lt;!--here there might be other nested tags inside &lt;other_tags&gt;&lt;/other_tags&gt;--&gt; &lt;other_tags&gt;b&lt;/other_tags&gt; &lt;!--here there might be other nested tags inside &lt;other_tags&gt;&lt;/other_tags&gt;--&gt; &lt;other_tags&gt;c&lt;/other_tags&gt; &lt;!--here there might be other nested tags inside &lt;other_tags&gt;&lt;/other_tags&gt;--&gt; &lt;/GrpHdr&gt; &lt;PmtInf&gt; &lt;things&gt;d&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; &lt;things&gt;e&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;PmtInf&gt; &lt;things&gt;f&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; &lt;things&gt;g&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;PmtInf&gt; &lt;things&gt;f&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; &lt;things&gt;g&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;/CstmrCdtTrfInitn&gt; &lt;/Document&gt; </code></pre> <p>Now, given this structure, I want to manipulate the sections as follows:</p> <p>If there are two or more <code>&lt;PmtInf&gt;</code> tags that have the same:</p> <pre><code>&lt;things&gt;d&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; 
&lt;things&gt;e&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; </code></pre> <p>I would like to move the whole <code>&lt;CdtTrfTxInf&gt;&lt;/CdtTrfTxInf&gt;</code> to the first <code>&lt;PmtInf&gt;&lt;/PmtInf&gt;</code> and remove the whole <code>&lt;PmtInf&gt;&lt;/PmtInf&gt;</code> that I've taken <code>&lt;CdtTrfTxInf&gt;&lt;/CdtTrfTxInf&gt;</code> from. A bit, fuzzy, right ? Here is an example:</p> <pre><code>&lt;Document&gt; &lt;CstmrCdtTrfInitn&gt; &lt;GrpHdr&gt; &lt;other_tags&gt;a&lt;/other_tags&gt; &lt;!--here there might be other nested tags inside &lt;other_tags&gt;&lt;/other_tags&gt;--&gt; &lt;other_tags&gt;b&lt;/other_tags&gt; &lt;!--here there might be other nested tags inside &lt;other_tags&gt;&lt;/other_tags&gt;--&gt; &lt;other_tags&gt;c&lt;/other_tags&gt; &lt;!--here there might be other nested tags inside &lt;other_tags&gt;&lt;/other_tags&gt;--&gt; &lt;/GrpHdr&gt; &lt;PmtInf&gt; &lt;things&gt;d&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; &lt;things&gt;e&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;PmtInf&gt; &lt;things&gt;f&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; &lt;things&gt;g&lt;/things&gt; &lt;!--here there might be other nested tags inside &lt;things&gt;&lt;/things&gt;--&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;CdtTrfTxInf&gt; &lt;!-- other nested tags here --&gt; &lt;/CdtTrfTxInf&gt; &lt;/PmtInf&gt; &lt;/CstmrCdtTrfInitn&gt; &lt;/Document&gt; </code></pre> <p>As you can see, the last two <code>&lt;PmtInf&gt;&lt;/PmtInf&gt;</code> tags became now a single one (because <code>&lt;things&gt;&lt;/matched&gt;</code>) and the 
<code>&lt;CdtTrfTxInf&gt;&lt;/CdtTrfTxInf&gt;</code> was copied.</p> <p>Now, I would like to do this in any possible way (<code>lxml</code>, <code>xml.etree</code>, <code>xslt</code>, etc.). At first, I thought about using some RegEx to do this, but it might become a bit ugly. Then, I thought I might be able to use some string manipulations, but I can't figure out how I would do this.</p> <p>Can somebody tell me what method would be the most elegant / efficient one if the average size of an XML file is about 2k lines? An example would also be kindly appreciated.</p> <p>For the sake of completeness, I'll define a function which will return the entire XML content in a string:</p> <pre><code>def get_xml_from(some_file): with open(some_file) as xml_file: content = xml_file.read() return content def modify_xml(some_file): content_of_xml = get_xml_from(some_file) # here I should be able to process the XML file return processed_xml </code></pre> <p><em>I'm not looking for somebody to do this for me, but asking for ideas on the best ways of achieving this.</em></p>
0
2016-08-16T12:12:13Z
38,992,273
<p>I'm not going to give you the code you want. Instead I'll say how you can go about doing what you want.</p> <p>First things first, you want to read your XML. So I'll be using <a href="https://docs.python.org/3.5/library/xml.etree.elementtree.html" rel="nofollow"><code>xml.etree.ElementTree</code></a>.</p> <pre><code>import xml.etree.ElementTree as ET root = ET.fromstring(xml_content) </code></pre> <p>Here <code>xml_content</code> is the string your <code>get_xml_from</code> returns. After this I'd ignore the parts of the tree that you don't use, and just <code>find</code> <code>CstmrCdtTrfInitn</code>. As you only want to work with <code>PmtInf</code>s, you want to <code>findall</code> of them.</p> <pre><code>pmt_infs = root.find('.//CstmrCdtTrfInitn').findall('PmtInf') </code></pre> <p>After this you want to perform your algorithm<sup>*</sup> to move items in your data. I'll just remove the first child, if the node has one.</p> <pre><code>nodes = [] for node in pmt_infs: children = list(node) if children: node.remove(children[0]) nodes.append(children[0]) </code></pre> <p>Now that we have all the nodes, you'll add them to the first of the <code>pmt_infs</code>.</p> <pre><code>pmt_infs[0].extend(nodes) </code></pre> <hr> <p><sup>*</sup> You'll want to change the third code block to how you want to move your nodes, as you changed your algorithm from v1 to v3 of your question.</p>
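For concreteness, here is a small, self-contained sketch of how those pieces could combine into the merge the question describes. The toy XML and the equality rule are my assumptions, not something the question fixes: here two `<PmtInf>` elements count as duplicates when the serialized form of their `<things>` children is identical.

```python
import xml.etree.ElementTree as ET

# Toy version of the question's structure (nested tags omitted for brevity).
xml_content = """<Document><CstmrCdtTrfInitn>
<PmtInf><things>d</things><things>e</things><CdtTrfTxInf><id>1</id></CdtTrfTxInf></PmtInf>
<PmtInf><things>f</things><things>g</things><CdtTrfTxInf><id>2</id></CdtTrfTxInf></PmtInf>
<PmtInf><things>f</things><things>g</things><CdtTrfTxInf><id>3</id></CdtTrfTxInf></PmtInf>
</CstmrCdtTrfInitn></Document>"""

root = ET.fromstring(xml_content)
parent = root.find('CstmrCdtTrfInitn')

first_seen = {}  # comparison key -> first PmtInf carrying that key
for pmt in list(parent.findall('PmtInf')):
    # Hypothetical equality rule: the serialized <things> children must match.
    key = tuple(ET.tostring(t) for t in pmt.findall('things'))
    if key in first_seen:
        # Move every CdtTrfTxInf into the first matching PmtInf ...
        first_seen[key].extend(pmt.findall('CdtTrfTxInf'))
        # ... and drop the now-redundant duplicate PmtInf.
        parent.remove(pmt)
    else:
        first_seen[key] = pmt

print(len(parent.findall('PmtInf')))                             # 2
print(len(parent.findall('PmtInf')[1].findall('CdtTrfTxInf')))   # 2
```

The sketch keeps the first `<PmtInf>` of each group and appends the duplicates' `<CdtTrfTxInf>` children to it, which is exactly the transformation shown in the question's before/after example.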
1
2016-08-17T09:02:47Z
[ "python", "xml", "python-3.x" ]
How do I access my google plus content and the things that have been shared with me via the API
38,974,740
<p>I wish to get a listing of all my posts, as well as any and all shares sent to me on my Google+ page, and I want to do that from a Python script that has no HTML or other front end. However, I am confused as to how I can get access to the content. According to the Google developers site, I cannot get an OAuth token for Google+ if there isn't a graphical front end, but I just want to get to my own stuff and do a bit of parsing.</p> <p>Surely there must be a way to do that?</p>
0
2016-08-16T12:15:24Z
38,974,779
<p>There is a <a href="https://github.com/googleplus/gplus-quickstart-python" rel="nofollow">GitHub repo</a> that can help you with:</p> <ul> <li>Using the Google+ Sign-In button to get an OAuth 2.0 refresh token.</li> <li>Exchanging the refresh token for an access token.</li> <li>Making Google+ API requests with the access token, including getting a list of people that the user has circled.</li> <li>Disconnecting the app from the user's Google account and revoking tokens.</li> </ul>
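The refresh-token-to-access-token exchange mentioned above is a standard OAuth 2.0 POST to Google's token endpoint. A hedged sketch of building that request follows; the endpoint URL is Google's documented OAuth 2.0 token URL, while the client ID/secret and token values are placeholders you would obtain from your Developers Console project:

```python
# Google's OAuth 2.0 token endpoint (per Google's OAuth 2.0 docs).
TOKEN_ENDPOINT = 'https://www.googleapis.com/oauth2/v4/token'

def build_refresh_request(client_id, client_secret, refresh_token):
    """Return the (url, form_data) pair for an access-token refresh."""
    payload = {
        'grant_type': 'refresh_token',   # fixed by the OAuth 2.0 spec (RFC 6749)
        'client_id': client_id,
        'client_secret': client_secret,
        'refresh_token': refresh_token,
    }
    return TOKEN_ENDPOINT, payload

# Placeholder credentials for illustration only.
url, data = build_refresh_request('my-id', 'my-secret', 'my-refresh-token')
# You would then POST it, e.g. requests.post(url, data=data); on success the
# JSON response carries 'access_token' and 'expires_in'.
print(data['grant_type'])  # refresh_token
```

The access token from the response is what you would pass as the `Authorization: Bearer` header (or `access_token` parameter) on subsequent Google+ API requests.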
1
2016-08-16T12:18:20Z
[ "python", "json", "oauth-2.0", "google-plus" ]
How to SSH from one system to another using python
38,974,822
<p>I am trying to perform SSH from one system to another using paramiko in Python:</p> <pre><code>import paramiko ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy( paramiko.AutoAddPolicy()) ssh.connect('127.0.0.1', username='jesse', password='lol') </code></pre> <p>using this reference (<a href="http://jessenoller.com/blog/2009/02/05/ssh-programming-with-paramiko-completely-different" rel="nofollow">http://jessenoller.com/blog/2009/02/05/ssh-programming-with-paramiko-completely-different</a>).</p> <p>This is the case when we know the <strong><em>password</em></strong> of the system we want to log in to, <strong><em>BUT</em></strong> what if I want to log in to a system where my public key is copied and I don't know the password? Is there a way to do this?</p> <p>Thanks in advance </p>
0
2016-08-16T12:20:13Z
38,974,911
<p><code>SSHClient.connect</code> accepts a kwarg <code>key_filename</code>, which is a path to the local private key file (or files, if given a list of paths). See the <a href="http://docs.paramiko.org/en/2.0/api/client.html#paramiko.client.SSHClient.connect" rel="nofollow">docs</a>.</p> <blockquote> <p>key_filename (str) – the filename, or list of filenames, of optional private key(s) to try for authentication</p> </blockquote> <p>Usage:</p> <pre><code>ssh.connect('&lt;hostname&gt;', username='&lt;username&gt;', key_filename='&lt;path/to/openssh-private-key-file&gt;') </code></pre>
4
2016-08-16T12:24:50Z
[ "python", "ssh", "paramiko" ]
How to SSH from one system to another using python
38,974,822
<p>I am trying to perform SSH from one system to another using paramiko in Python:</p> <pre><code>import paramiko ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy( paramiko.AutoAddPolicy()) ssh.connect('127.0.0.1', username='jesse', password='lol') </code></pre> <p>using this reference (<a href="http://jessenoller.com/blog/2009/02/05/ssh-programming-with-paramiko-completely-different" rel="nofollow">http://jessenoller.com/blog/2009/02/05/ssh-programming-with-paramiko-completely-different</a>).</p> <p>This is the case when we know the <strong><em>password</em></strong> of the system we want to log in to, <strong><em>BUT</em></strong> what if I want to log in to a system where my public key is copied and I don't know the password? Is there a way to do this?</p> <p>Thanks in advance </p>
0
2016-08-16T12:20:13Z
38,975,102
<p>Adding the key to a configured SSH agent would make paramiko use it automatically with no changes to your code.</p> <pre><code>ssh-add &lt;your private key&gt; </code></pre> <p>Your code will work as is. Alternatively, the private key can be provided programmatically with</p> <pre><code>key = paramiko.RSAKey.from_private_key_file('&lt;path/to/private/key&gt;') ssh.connect('&lt;hostname&gt;', username='&lt;username&gt;', pkey=key) </code></pre>
0
2016-08-16T12:34:55Z
[ "python", "ssh", "paramiko" ]
How to SSH from one system to another using python
38,974,822
<p>I am trying to perform SSH from one system to another using paramiko in Python:</p> <pre><code>import paramiko ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy( paramiko.AutoAddPolicy()) ssh.connect('127.0.0.1', username='jesse', password='lol') </code></pre> <p>using this reference (<a href="http://jessenoller.com/blog/2009/02/05/ssh-programming-with-paramiko-completely-different" rel="nofollow">http://jessenoller.com/blog/2009/02/05/ssh-programming-with-paramiko-completely-different</a>).</p> <p>This is the case when we know the <strong><em>password</em></strong> of the system we want to log in to, <strong><em>BUT</em></strong> what if I want to log in to a system where my public key is copied and I don't know the password? Is there a way to do this?</p> <p>Thanks in advance </p>
0
2016-08-16T12:20:13Z
38,977,022
<p>This code should work:</p> <pre><code>import paramiko host = "&lt;your-host&gt;" client = paramiko.SSHClient() client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) client.connect(host, username='&lt;your-username&gt;', key_filename="/path/to/.ssh/id_rsa", port=22) # Just to test a command stdin, stdout, stderr = client.exec_command('ls') for line in stdout.readlines(): print line client.close() </code></pre> <blockquote> <p>Here is the documentation of <a href="http://docs.paramiko.org/en/2.0/api/client.html" rel="nofollow">SSHClient.connect()</a></p> </blockquote> <p>EDIT: <code>/path/to/.ssh/id_rsa</code> is your private key!</p>
0
2016-08-16T14:02:12Z
[ "python", "ssh", "paramiko" ]