Dataset schema:
title: string (lengths 10 to 172)
question_id: int64 (min 469, max 40.1M)
question_body: string (lengths 22 to 48.2k)
question_score: int64 (min -44, max 5.52k)
question_date: string (lengths 20 to 20)
answer_id: int64 (min 497, max 40.1M)
answer_body: string (lengths 18 to 33.9k)
answer_score: int64 (min -38, max 8.38k)
answer_date: string (lengths 20 to 20)
tags: list
Beginner in Python, can't get this for loop to stop
39,369,347
<pre><code>Bit_128 = 0
Bit_64 = 0
Bit_32 = 0
Bit_64 = 0
Bit_32 = 0
Bit_16 = 0
Bit_8 = 0
Bit_4 = 0
Bit_2 = 0
Bit_1 = 0

Number = int(input("Enter number to be converted: "))

for power in range(7,0,-1):
    if (2**power) &lt;= Number:
        place_value = 2**power
        if place_value == 128:
            Bit_128 = 1
        elif place_value == 64:
            Bit_64 = 1
        elif place_value == 32:
            Bit_32 = 1
        elif place_value == 16:
            Bit_16 = 1
        elif place_value == 8:
            Bit_8 = 1
        elif place_value == 4:
            Bit_4 = 1
        elif place_value == 2:
            Bit_2 = 1
        elif place_value == 1:
            Bit_1 = 1
        Number = Number - (place_value)
        if Number == 0:
            print ("Binary form of"),Number,("is"),Bit_128,Bit_64,Bit_32,Bit_16,Bit_8,Bit_4,Bit_2,Bit_1
</code></pre> <p>I want this loop to move to the next 'power' value when it fails the first if condition, but when I run it in an interpreter, the program keeps on running despite the first condition not being true. I only want it to move on to the next conditions if the first condition turns out to be true. How can I do this? This is my first "big" program in Python, and I'm having a hard time figuring this out. Any tips would be appreciated. Btw, the program is meant to convert any number from 1-255 to binary form. </p>
0
2016-09-07T11:58:39Z
39,369,722
<p>Just use a <code>break</code> statement to exit the current loop once the condition is satisfied.</p>
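As a minimal sketch of the idea (a hypothetical helper, not the asker's exact program): once nothing remains to subtract, `break` leaves the loop early instead of walking through the remaining powers.

```python
def to_bits(number, width=8):
    """Collect binary digits most-significant first; stop early with `break`."""
    bits = []
    for power in range(width - 1, -1, -1):
        if 2 ** power <= number:
            bits.append(1)
            number -= 2 ** power
        else:
            bits.append(0)
        if number == 0 and power > 0:
            # nothing left to subtract: pad the rest with zeros and stop
            bits.extend([0] * power)
            break
    return bits

print(to_bits(200))  # [1, 1, 0, 0, 1, 0, 0, 0]
```

Note that the original `range(7, 0, -1)` never reaches power 0, so the ones bit is skipped; `range(width - 1, -1, -1)` covers it.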
0
2016-09-07T12:15:38Z
[ "python", "for-loop" ]
ModelForm won't validate because missing value, but model field has null=True
39,369,364
<p>I have a problem where my <code>ModelForm</code> is trying to assign <code>''</code> to a field (it saves fine if I actually provide the primary key of the <code>Product</code>, but it's not a compulsory field, and won't save if the field is left blank). I take it that the ORM is trying to set that field to <code>''</code>, but:</p> <ul> <li>Shouldn't <code>''</code> be coerced to <code>None</code>, and; </li> <li>Why isn't the model form trying to set that field to <code>None</code> in the first place instead of <code>''</code>?</li> </ul> <p>models.py</p> <pre><code>class Question(models.model):
    fk_product = models.ForeignKey(Product, on_delete=models.SET_NULL, null=True,
                                   related_name="product_question")
</code></pre> <p>forms.py</p> <pre><code>class QuestionForm(forms.ModelForm):
    fk_product = forms.ChoiceField(required=False)

    class Meta:
        model = Question
        fields = ['fk_product',]
</code></pre> <p>The error:</p> <blockquote> <p>Cannot assign "''": "Question.fk_product" must be a "Product" instance.</p> </blockquote> <p>The view code that produces the error:</p> <pre><code>QuestionModelFormset = modelformset_factory(Question, form=QuestionForm, extra=1)
question_formset = QuestionModelFormset(
    data=request.POST,
    files=request.FILES,
    queryset=Question.objects.all())
if not question_formset.is_valid():  # error occurs on this line
</code></pre>
0
2016-09-07T11:59:20Z
39,369,599
<p>Try adding <code>blank=True</code> too.</p> <p><code>null=True</code> means the field is allowed to be NULL in the database.</p> <p><code>blank=True</code> means the field may be submitted without a value in forms. Otherwise it must have a value.</p>
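A sketch of the model with both flags set (illustration only — it assumes a `Product` model defined elsewhere in the app, and is not the asker's exact code):

```python
from django.db import models


class Question(models.Model):
    # null=True  -> the database column may store NULL
    # blank=True -> form validation accepts an empty value
    fk_product = models.ForeignKey(
        "Product",                     # assumed to be defined in the same app
        on_delete=models.SET_NULL,
        null=True,
        blank=True,
        related_name="product_question",
    )
```

Separately, a plain `forms.ChoiceField` cleans an empty submission to `''`; a `forms.ModelChoiceField(queryset=..., required=False)` (or letting the `ModelForm` auto-generate the field) cleans it to `None`, which is what a nullable foreign key expects.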
4
2016-09-07T12:09:48Z
[ "python", "django" ]
python ldap3 bulk delete users and groups
39,369,546
<p>Is there any way to delete a batch of objects from AD with python ldap3? Something like </p> <pre><code>conn.delete("(CN='auto*'),CN=Users,DC=mycompany,DC=com")
</code></pre> <p>doesn't find anything.</p> <p>TIA!</p>
0
2016-09-07T12:07:23Z
39,398,772
<p>You can't. The LDAP protocol doesn't allow such an operation. You must issue a <code>conn.delete()</code> for each user you want to delete.</p>
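Since there is no wildcard delete, the usual pattern is to search first and then delete each matching entry. A minimal sketch with ldap3 (the base DN and CN pattern are examples, and running it obviously requires a live server and an authenticated connection):

```python
from ldap3 import Server, Connection


def delete_matching(conn, base_dn, cn_pattern="auto*"):
    """Search for entries whose CN matches the pattern, then delete each one."""
    conn.search(base_dn, "(cn=%s)" % cn_pattern, attributes=[])
    deleted = []
    for entry in conn.entries:
        if conn.delete(entry.entry_dn):   # one delete operation per entry
            deleted.append(entry.entry_dn)
    return deleted


# Hypothetical usage (names are placeholders):
# server = Server("ad.mycompany.com")
# conn = Connection(server, user="MYCOMPANY\\admin", password="...", auto_bind=True)
# delete_matching(conn, "CN=Users,DC=mycompany,DC=com")
```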
0
2016-09-08T19:27:53Z
[ "python", "active-directory", "ldap3" ]
Matplotlib global legend for lots of subplots
39,369,557
<p>I'm having trouble figuring out how to force matplotlib to place a legend where I want. </p> <p>The figures I currently have look like this: <a href="http://i.stack.imgur.com/wZOKM.png" rel="nofollow"><img src="http://i.stack.imgur.com/wZOKM.png" alt="enter image description here"></a></p> <p><a href="http://i.stack.imgur.com/qAES7.png" rel="nofollow"><img src="http://i.stack.imgur.com/qAES7.png" alt="enter image description here"></a></p> <p>I generate this two-page plot with the following script:</p> <pre><code>n = 1
trn = True
for i in diff.keys():
    plt.subplot(8, 4, n)
    plt.xlim([0, rounds + 1])
    plt.locator_params(axis='y', nbins=5)
    if PEPTIDE_KD[i] == 1e-2:
        plt.plot(x_cord, [x[0] for x in diff[i]], 'ro--', label='asd')
        plt.plot(x_cord, [x[1] for x in diff[i]], 'rv--', label='bsd')
    else:
        plt.plot(x_cord, [x[0] for x in diff[i]], 'bo--', label='asd')
        plt.plot(x_cord, [x[1] for x in diff[i]], 'bv--', label='csd')
    plt.title(i)
    n += 1
    if n == 33 and trn:
        n = 1
        trn = False
        plt.suptitle('Ligand energies')
        plt.locator_params(axis='y', nbins=5)
        plt.subplots_adjust(hspace=0.35)
        plt.show()

plt.legend(bbox_to_anchor=(-0.05, 1), loc=1)
plt.locator_params(axis='y', nbins=5)
plt.subplots_adjust(hspace=0.35)
plt.show()
</code></pre> <p>However I would like to place the legend at the left side of the very first subplot, or to the right side of the last subplot in the first line. </p> <p>I can do this with an <code>if line == 0</code> bla bla, but the problem, as you can see in the plots I attached, is that for example the last plot only contains blue data, hence only the blue legend will show. I would like to place a full legend (red and blue) in the very same place. I don't even know where the blue and red plots will be placed beforehand. </p> <p>Is this possible?</p>
0
2016-09-07T12:07:52Z
39,372,106
<p>Try using a <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.figlegend" rel="nofollow">figure legend</a> which will let you create a legend for <a href="http://matplotlib.org/examples/pylab_examples/figlegend_demo.html" rel="nofollow">any of the lines plotted in</a> your figure and position it relative to the figure. </p>
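A minimal, self-contained sketch of the figure-level legend idea with dummy data (the proxy `Line2D` handles carry all four styles even when a given subplot only shows some of them; the Agg backend is used here only so the sketch runs headless):

```python
import matplotlib
matplotlib.use("Agg")          # headless backend; drop this to show a window
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D

fig, axes = plt.subplots(2, 2)
axes[0, 0].plot([0, 1], [0, 1], "ro--")   # a subplot with only red data
axes[0, 1].plot([0, 1], [1, 0], "bv--")   # a subplot with only blue data

# Proxy artists: a full red+blue legend regardless of what each subplot shows
handles = [Line2D([], [], color="r", marker="o", linestyle="--", label="asd (red)"),
           Line2D([], [], color="r", marker="v", linestyle="--", label="bsd (red)"),
           Line2D([], [], color="b", marker="o", linestyle="--", label="asd (blue)"),
           Line2D([], [], color="b", marker="v", linestyle="--", label="csd (blue)")]
labels = [h.get_label() for h in handles]
leg = fig.legend(handles, labels, loc="upper right")
fig.savefig("legend_demo.png")
```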
0
2016-09-07T14:05:47Z
[ "python", "python-3.x", "matplotlib" ]
Python calculate minimum distances between multiple coordinates
39,369,559
<p>I have files of two types: A contains 1206 lines of coordinates (xyz) - a protein chain; B contains 114 lines of coordinates (xyz) - a bunch of molecules.</p> <p>I would like to do the following: for each line of A, calculate the distance from each line of B. So I get 114 distance values for each line of A. But I don't need all of them, just the shortest for each line of A. So the desired output: a file with 1206 lines, each line containing one value: the shortest distance. It is important to keep the original order of file A.</p> <p>My code:</p> <pre><code>import os
import sys
import numpy as np

outdir = r'E:\MTA\aminosavak_tavolsag\tavolsagok'
for dirname, dirnames, filenames in os.walk(r'E:\MTA\aminosavak_tavolsag\receptorok'):
    for path, dirs, files in os.walk(r'E:\MTA\aminosavak_tavolsag\kotohely'):
        for filename in filenames:
            for fileok in files:
                if filename == fileok:
                    with open(os.path.join(outdir, filename), "a+") as f:
                        data_ligand = np.loadtxt(os.path.join(path, fileok))
                        data_rec = np.loadtxt(os.path.join(dirname, filename))
                        for i in data_rec:
                            for j in data_ligand:
                                dist = np.linalg.norm(i - j)
                                dist_float = dist.tolist()
                                dist_str = str(dist_float)
                                dist_list = dist_str.split()
                                for szamok in dist_list:
                                    for x in range(len(dist_list)):
                                        minimum = min([float(x) for x in dist_list])
                            f.write(str(minimum) + "\r\n")
</code></pre> <p>This code works but only partially. --- My ultimate goal is to find the protein residues that are close enough to this bunch of molecules (the binding site). I can check my results with a visual software, and my code finds far fewer residues than it should. ----</p> <p>I just can't figure out where the problem is. Could you help me? Thanks!</p>
0
2016-09-07T12:08:12Z
39,370,044
<p>Your code is pretty confusing and I can see a few mistakes.</p> <p>You're using <code>minimum</code> outside of the <code>for</code> loop, so only its last value is written.</p> <p>Also, the way you compute <code>minimum</code> is weird. <code>szamok</code> is not used, nor is <code>x</code> (since you use another <code>x</code> inside the list expression), so both <code>for</code> loops surrounding <code>minimum = ...</code> are useless.</p> <p>Another suspicious thing is <code>str(dist_float)</code>. You're converting a list of float to string. This will give you the string representation of the list, not a list of string. Not only is this useless, it's also wrong because when you split it afterwards it won't give you the expected result.</p> <p>Assuming <code>i</code> and <code>j</code> stand for the data lines of A and B, I would rewrite the end of your code like this:</p> <pre><code>...
data_ligand = np.loadtxt(os.path.join(path, fileok))
data_rec = np.loadtxt(os.path.join(dirname, filename))
for i in data_rec:
    min_dist = min(np.linalg.norm(i - j) for j in data_ligand)
    f.write("{}\r\n".format(min_dist))  # easier than `str(min_dist)` to customize format
</code></pre>
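As a side note, the remaining double loop can also be replaced by a fully vectorized computation. A sketch with made-up coordinates (broadcasting builds the full A×B distance matrix, then the row minima give one shortest distance per line of A, in order):

```python
import numpy as np

A = np.array([[0.0, 0.0, 0.0],
              [1.0, 1.0, 1.0]])      # stand-in for the 1206 protein lines
B = np.array([[0.0, 0.0, 1.0],
              [5.0, 5.0, 5.0]])      # stand-in for the 114 ligand lines

# Pairwise distances: shape (len(A), len(B))
dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
min_dists = dists.min(axis=1)        # shortest distance for each line of A
print(min_dists)                     # here: 1.0 and sqrt(2)
```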
1
2016-09-07T12:31:33Z
[ "python", "coordinates", "distance", "bioinformatics" ]
Can I store a file (HDF5 file) in another file with serialization?
39,369,671
<p>I have a HDF5 file and a list of objects that I need to store for saving functionality. For simplicity I want to create only one save file. Can I store H5 file, in my save file that I create with serialization (pickle) without opening H5 file. </p>
0
2016-09-07T12:13:11Z
39,386,944
<p>You can put several files in one by using <a href="https://docs.python.org/3/library/zipfile.html" rel="nofollow">zipfile</a> or <a href="https://docs.python.org/3/library/tarfile.html" rel="nofollow">tarfile</a></p> <ul> <li>for zipfile you would <code>write</code> the database files and <code>writestr</code> your <code>pickle.dumps</code>ed data.</li> <li>for tarfile you would <code>add</code> the database file and <code>gettarinfo</code>, <code>addfile</code> your <code>pickle.dump</code>ed data from a file.</li> </ul> <p>I would suggest creating a zip if you do not need extended filesystem-attributes because it is a bit easier to use.</p>
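A small sketch of the zipfile variant (file names and the saved objects are made up): the HDF5 file goes into the archive as-is with `write` — it is never opened or parsed — and the pickled objects go in with `writestr`.

```python
import pickle
import zipfile

objects = {"threshold": 0.5, "labels": ["a", "b"]}   # example state to save

# Pretend this is the existing HDF5 file (its contents don't matter here)
with open("data.h5", "wb") as f:
    f.write(b"\x89HDF\r\n\x1a\n dummy bytes")

with zipfile.ZipFile("savegame.zip", "w") as zf:
    zf.write("data.h5")                              # copy the file verbatim
    zf.writestr("state.pkl", pickle.dumps(objects))  # serialized objects

# Loading back:
with zipfile.ZipFile("savegame.zip") as zf:
    restored = pickle.loads(zf.read("state.pkl"))
print(restored == objects)  # True
```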
1
2016-09-08T09:11:31Z
[ "python", "serialization", "pickle", "hdf5" ]
best way to get an integer from string without using regex
39,369,711
<p>I would like to get some integers from a string (the 3rd one). Preferably without using regex. </p> <p>I saw a lot of stuff.</p> <p>my string:</p> <pre><code>xp = '93% (9774/10500)'
</code></pre> <p>So I would like the code to return a list with the integers from the string. The desired output would be: <code>[93, 9774, 10500]</code></p> <p>Some stuff like this doesn't work:</p> <pre><code>&gt;&gt;&gt; new = [int(s) for s in xp.split() if s.isdigit()]
&gt;&gt;&gt; print new
[]
&gt;&gt;&gt; int(filter(str.isdigit, xp))
93977410500
</code></pre>
0
2016-09-07T12:15:13Z
39,369,887
<p>Using regex (sorry!) to split the string on runs of non-digits, then filtering on digits (the split can produce empty fields) and converting to int:</p> <pre><code>import re

xp = '93% (9774/10500)'
print([int(x) for x in filter(str.isdigit, re.split(r"\D+", xp))])
</code></pre> <p>result:</p> <pre><code>[93, 9774, 10500]
</code></pre>
1
2016-09-07T12:23:21Z
[ "python", "string", "int" ]
best way to get an integer from string without using regex
39,369,711
<p>I would like to get some integers from a string (the 3rd one). Preferably without using regex. </p> <p>I saw a lot of stuff.</p> <p>my string:</p> <pre><code>xp = '93% (9774/10500)'
</code></pre> <p>So I would like the code to return a list with the integers from the string. The desired output would be: <code>[93, 9774, 10500]</code></p> <p>Some stuff like this doesn't work:</p> <pre><code>&gt;&gt;&gt; new = [int(s) for s in xp.split() if s.isdigit()]
&gt;&gt;&gt; print new
[]
&gt;&gt;&gt; int(filter(str.isdigit, xp))
93977410500
</code></pre>
0
2016-09-07T12:15:13Z
39,369,911
<p>Since the problem is that you have to split on different chars, you can first replace everything that's not a digit with a space, then split. A one-liner would be:</p> <pre><code>xp = '93% (9774/10500)'
''.join([x if x.isdigit() else ' ' for x in xp]).split()
# ['93', '9774', '10500']
</code></pre>
7
2016-09-07T12:24:33Z
[ "python", "string", "int" ]
best way to get an integer from string without using regex
39,369,711
<p>I would like to get some integers from a string (the 3rd one). Preferably without using regex. </p> <p>I saw a lot of stuff.</p> <p>my string:</p> <pre><code>xp = '93% (9774/10500)'
</code></pre> <p>So I would like the code to return a list with the integers from the string. The desired output would be: <code>[93, 9774, 10500]</code></p> <p>Some stuff like this doesn't work:</p> <pre><code>&gt;&gt;&gt; new = [int(s) for s in xp.split() if s.isdigit()]
&gt;&gt;&gt; print new
[]
&gt;&gt;&gt; int(filter(str.isdigit, xp))
93977410500
</code></pre>
0
2016-09-07T12:15:13Z
39,369,962
<p>Since the <em>format is fixed</em>, you can use consecutive <code>split()</code>. It's not very pretty, or general, but sometimes the direct and "stupid" solution is not so bad:</p> <pre><code>a, b = xp.split("%")
x = int(a)
y = int(b.split("/")[0].strip()[1:])
z = int(b.split("/")[1].strip()[:-1])
print(x, y, z)  # prints "93 9774 10500"
</code></pre> <p><strong>Edit</strong>: Clarified that the poster specifically said that his format is <strong>fixed</strong>. This solution is not very pretty, but it does what it's supposed to.</p>
-1
2016-09-07T12:27:16Z
[ "python", "string", "int" ]
best way to get an integer from string without using regex
39,369,711
<p>I would like to get some integers from a string (the 3rd one). Preferably without using regex. </p> <p>I saw a lot of stuff.</p> <p>my string:</p> <pre><code>xp = '93% (9774/10500)'
</code></pre> <p>So I would like the code to return a list with the integers from the string. The desired output would be: <code>[93, 9774, 10500]</code></p> <p>Some stuff like this doesn't work:</p> <pre><code>&gt;&gt;&gt; new = [int(s) for s in xp.split() if s.isdigit()]
&gt;&gt;&gt; print new
[]
&gt;&gt;&gt; int(filter(str.isdigit, xp))
93977410500
</code></pre>
0
2016-09-07T12:15:13Z
39,371,143
<p>Since this is Py2, using <code>str</code>, it looks like you don't need to consider the full Unicode range; since you're doing this more than once, you can slightly improve on <a href="http://stackoverflow.com/a/39369911/364696">polku's answer</a> using <code>str.translate</code>:</p> <pre><code># Create a translation table once, up front, that replaces non-digits with spaces
import string
nondigits = ''.join(c for c in map(chr, range(256)) if not c.isdigit())
nondigit_to_space_table = string.maketrans(nondigits, ' ' * len(nondigits))

# Then, when you need to extract integers, use the table to efficiently
# translate at the C layer in a single function call:
xp = '93% (9774/10500)'
intstrs = xp.translate(nondigit_to_space_table).split()  # ['93', '9774', '10500']
myints = map(int, intstrs)  # Wrap in `list` constructor on Py3
</code></pre> <p>Performance-wise, for the test string on my 64 bit Linux 2.7 build, using <code>translate</code> takes about 374 nanoseconds to run, vs. 2.76 microseconds for the listcomp and <code>join</code> solution; the listcomp+<code>join</code> takes &gt;7x longer. For larger strings (where the fixed overhead is trivial compared to the actual work), the listcomp+<code>join</code> solution takes closer to 20x longer.</p> <p>The main advantage of polku's solution is that it requires no changes on Py3 (on which it should seamlessly support non-ASCII strings), where <code>str.translate</code> builds the translation table a different way (via <code>str.maketrans</code>) and it would be impractical to make a translation table that handled all non-digits in the whole Unicode space.</p>
0
2016-09-07T13:24:14Z
[ "python", "string", "int" ]
Functions get called more and more times with reopening plugin
39,369,712
<p>I have a QGIS plugin written in Python 2.7.3 with PyQt 4.9.1, Qt 4.8.1. When I run this plugin, every function works just fine. But when I close the window and reopen it, every function happens twice. Then I close/open again and it goes 3 times, etc., etc.</p> <p>Where should I look for an error here? My <code>def run(self)</code> part looks just like this:</p> <pre><code>def run(self):
    self.dlg.show()
    self.availableLayers()
    self.dlg.pushButton_2.clicked.connect(self.openFile)
    self.dlg.pushButton.clicked.connect(self.groupBy)
    self.dlg.toolButton_4.clicked.connect(self.toggleRightPanel)
</code></pre> <p>If I reload the plugin by clicking the button from "Plugin Builder", it starts again from one.</p> <p>I should also mention I wouldn't like to lose the view the user created (the plugin is a table viewer), but rather be able to close the window, open it and have it again there without the cells being cleared.</p>
0
2016-09-07T12:15:15Z
39,397,887
<p>Every time you call <code>connect</code>, it adds another connection - even if it's to the same slot. So you need to move the connections out of the <code>run()</code> method and put them in the setup method for the dialog, so that they are only made once.</p>
0
2016-09-08T18:29:35Z
[ "python", "qt", "pyqt", "qgis" ]
How to find the directory of the .app that a python script is running from
39,369,746
<p>I recently made a python script that goes through files in whatever directory it is placed in and renames them based on certain criteria. The script works perfectly, and I compiled the script into an OS X .app using py2app. This worked fine as well. However now when I run the script, it searches through the files in the ".app/contents/macOS" folder (where the script is located) rather than where the ".app" is actually located.</p> <p>This is because it has this code at the start:</p> <pre><code>src = os.path.dirname(os.path.abspath(__file__))
</code></pre> <p>which assigns the location of the ".py" file to a variable which is then used extensively throughout the script. Is there any way I can instead add a snippet of code which tells Python the path of the ".app" that the ".py" file is executing from?</p> <p>If not, perhaps there is a way to get a file explorer window open; from there it would be possible for a user to select a folder whose path would then get assigned to the "src" variable. I'm very new to python however, so this would certainly be a challenge.</p>
0
2016-09-07T12:16:46Z
39,369,800
<p>Try to use:</p> <pre><code>import os
print(os.getcwd())
</code></pre> <p>From docs:</p> <blockquote> <p><strong>getcwd()</strong></p> <p>Return a unicode string representing the current working directory.</p> </blockquote>
0
2016-09-07T12:19:18Z
[ "python", "osx", "python-3.x", "py2app" ]
How to find the directory of the .app that a python script is running from
39,369,746
<p>I recently made a python script that goes through files in whatever directory it is placed in and renames them based on certain criteria. The script works perfectly, and I compiled the script into an OS X .app using py2app. This worked fine as well. However now when I run the script, it searches through the files in the ".app/contents/macOS" folder (where the script is located) rather than where the ".app" is actually located.</p> <p>This is because it has this code at the start:</p> <pre><code>src = os.path.dirname(os.path.abspath(__file__))
</code></pre> <p>which assigns the location of the ".py" file to a variable which is then used extensively throughout the script. Is there any way I can instead add a snippet of code which tells Python the path of the ".app" that the ".py" file is executing from?</p> <p>If not, perhaps there is a way to get a file explorer window open; from there it would be possible for a user to select a folder whose path would then get assigned to the "src" variable. I'm very new to python however, so this would certainly be a challenge.</p>
0
2016-09-07T12:16:46Z
39,370,161
<p>To find the location of the <code>.app</code> directory that wraps the application, the most direct way is to modify the path that you find with your current code. After computing <code>src</code> as you did, just trim it like this and use <code>app</code> as the path:</p> <pre><code>import re
...
app, rest = re.split(r"/[^/]*\.app/", src, 1)
</code></pre> <p>This stops at the first path component that ends in <code>.app</code>. If you prefer you can hard-code your application name, e.g. <code>/myprogram.app/</code>.</p> <p>PS. I'm puzzled as to why you copy your app to the folder you want to modify. A file selection dialog, or drag and drop, is the more common (and easier) way to tell a program what to work on. OS X applications created with <code>py2app</code> support drag and drop: Your program can get the path to the dropped directory like this:</p> <pre><code>import sys

folders = sys.argv[1:]  # Handles multiple arguments
for folder in folders:
    do_something_in(folder)
</code></pre> <p>Then simply drag and drop the directory or directories you want to process onto the application icon, which you can just keep on your desktop, the Finder's sidebar, or a favorite directory -- no need to copy it each time you use it, and no need for it to know where its source is.</p>
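For illustration, here is the trimming idea above applied to a hypothetical bundle path (the path and app name are made up):

```python
import re


def app_root(src):
    """Return everything before the first `*.app` component of a POSIX path."""
    app, _rest = re.split(r"/[^/]*\.app/", src, 1)
    return app


print(app_root("/Users/me/Desktop/renamer.app/Contents/MacOS"))
# /Users/me/Desktop
```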
0
2016-09-07T12:36:35Z
[ "python", "osx", "python-3.x", "py2app" ]
Simplest way to plot 3d sphere by python?
39,369,766
<p>This is the simplest way I know:</p> <pre><code>from visual import *
ball1 = sphere(pos=vector(x, y, z), radius=radius, color=color)
</code></pre> <p>Which alternatives can you suggest?</p>
0
2016-09-07T12:17:55Z
39,370,789
<p>See <a href="http://docs.enthought.com/mayavi/mayavi/index.html" rel="nofollow">Mayavi</a> library for 3D visualization and some examples to draw a sphere. It should work with Python 2.7. </p> <p>Enjoy!</p>
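If Mayavi is too heavy a dependency, plain matplotlib can also draw a sphere via the parametric equations. A minimal sketch (the Agg backend is used here only so it runs headless; drop that line to get an interactive window):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # headless backend; remove for an on-screen plot
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

# Parametric sphere of radius r centred at (x0, y0, z0)
r, x0, y0, z0 = 1.0, 0.0, 0.0, 0.0
u, v = np.mgrid[0:2 * np.pi:30j, 0:np.pi:20j]
x = x0 + r * np.cos(u) * np.sin(v)
y = y0 + r * np.sin(u) * np.sin(v)
z = z0 + r * np.cos(v)

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(x, y, z, color="b")
fig.savefig("sphere.png")
```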
1
2016-09-07T13:08:05Z
[ "python", "python-2.7", "3d", "render" ]
python & pandas - How to calculate frequency under conditions in columns in DataFrame?
39,369,820
<p>I have a series of data in a DataFrame called <code>frames</code>:</p> <pre><code>   NoUsager Sens IdVehiculeUtilise NoConducteur NoAdresse  Fait    NoDemande Periods
0    000001    +            287Véh       000087    000079     1  42196000013   Matin
1    000001    -            287Véh       000087    000079     1  42196000013   Matin
2    000314    +            263Véh       000077    006470     1  42196000002   Matin
3    002372    +            287Véh       000087    002932     1  42196000016   Matin
4    000466    +            287Véh       000087    002932     1  42196000015   Matin
5    000314    -            263Véh       000077    000456     1  42196000002   Matin
6    000466    -            287Véh       000087    004900     1  42196000015   Matin
7    002372    -            287Véh       000087    007072     1  42196000016   Matin
8    002641    +            263Véh       000077    007225     1  42196000004    Soir
9    002641    -            263Véh       000077    000889     1  42196000004    Soir
10   000382    +            263Véh       000077    002095     1  42196000006    Soir
11   002641    +            287Véh       000087    000889     1  42196000019    Soir
12   000382    -            263Véh       000077    006168     1  42196000006    Soir
13   002641    -            287Véh       000087    007225     1  42196000019    Soir
14   001611    +            287Véh       000087    004236    -1  42196000021    Soir
15   002785    +            263Véh       000077    007482     1  42196000007    Soir
16   002372    +            287Véh       000087    007072     1  42196000022    Soir
17   002785    -            263Véh       000077    007483     1  42196000007    Soir
18   000466    +            287Véh       000087    004900     1  42196000023    Soir
19   000382    +            263Véh       000077    006168     1  42196000008    Soir
</code></pre> <p>For each <code>Usager</code>, depending on <code>Sens</code> and <code>Periods</code>, they can have more than one related address. I want to know, for every <code>Usager</code>, how many addresses they have and the frequency of each address. I used <code>frames.set_index(['NoUsager','NoAdresse'])</code> to make it look like:</p> <hr> <p><strong>EDIT</strong></p> <p><a href="http://i.stack.imgur.com/NBgEI.png" rel="nofollow"><img src="http://i.stack.imgur.com/NBgEI.png" alt="New pic"></a></p> <p>I don't want all the other columns, only a new one with the resulting frequency. How can I do it? Can I use <code>pivot()</code> for this?</p> <p>Any help will be really appreciated!</p>
1
2016-09-07T12:20:34Z
39,369,923
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> on the columns which will be <code>indexes</code> (<code>NoUsager</code>, <code>Sens</code>, <code>Periods</code>) in the output df. Then add the column <code>NoAdresse</code> as the last item of the <code>groupby</code> list; it is converted by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a> to columns in the output. And you need to aggregate by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a>.</p> <pre><code>df = df.groupby(['NoUsager','Sens','Periods', 'NoAdresse']).size().unstack(fill_value=0)
print (df)
</code></pre> <pre><code>NoAdresse              79   456  889  2095  2932  4236  4900  6168  6470  \
NoUsager Sens Periods
1        +    Matin      1    0    0     0     0     0     0     0     0
         -    Matin      1    0    0     0     0     0     0     0     0
314      +    Matin      0    0    0     0     0     0     0     0     1
         -    Matin      0    1    0     0     0     0     0     0     0
382      +    Soir       0    0    0     1     0     0     0     1     0
         -    Soir       0    0    0     0     0     0     0     1     0
466      +    Matin      0    0    0     0     1     0     0     0     0
              Soir       0    0    0     0     0     0     1     0     0
         -    Matin      0    0    0     0     0     0     1     0     0
1611     +    Soir       0    0    0     0     0     1     0     0     0
2372     +    Matin      0    0    0     0     1     0     0     0     0
              Soir       0    0    0     0     0     0     0     0     0
         -    Matin      0    0    0     0     0     0     0     0     0
2641     +    Soir       0    0    1     0     0     0     0     0     0
         -    Soir       0    0    1     0     0     0     0     0     0
2785     +    Soir       0    0    0     0     0     0     0     0     0
         -    Soir       0    0    0     0     0     0     0     0     0

NoAdresse              7072  7225  7482  7483
NoUsager Sens Periods
1        +    Matin       0     0     0     0
         -    Matin       0     0     0     0
314      +    Matin       0     0     0     0
         -    Matin       0     0     0     0
382      +    Soir        0     0     0     0
         -    Soir        0     0     0     0
466      +    Matin       0     0     0     0
              Soir        0     0     0     0
         -    Matin       0     0     0     0
1611     +    Soir        0     0     0     0
2372     +    Matin       0     0     0     0
              Soir        1     0     0     0
         -    Matin       1     0     0     0
2641     +    Soir        0     1     0     0
         -    Soir        0     1     0     0
2785     +    Soir        0     0     1     0
         -    Soir        0     0     0     1
</code></pre> <p>If you need to reset the index:</p> <pre><code>df = (df.groupby(['NoUsager','Sens','Periods', 'NoAdresse'])
        .size()
        .unstack(fill_value=0)
        .reset_index()
        .rename_axis(None, axis=1))
print (df)

    NoUsager Sens Periods  79  456  889  2095  2932  4236  4900  6168  6470  \
0          1    +   Matin   1    0    0     0     0     0     0     0     0
1          1    -   Matin   1    0    0     0     0     0     0     0     0
2        314    +   Matin   0    0    0     0     0     0     0     0     1
3        314    -   Matin   0    1    0     0     0     0     0     0     0
4        382    +    Soir   0    0    0     1     0     0     0     1     0
5        382    -    Soir   0    0    0     0     0     0     0     1     0
6        466    +   Matin   0    0    0     0     1     0     0     0     0
7        466    +    Soir   0    0    0     0     0     0     1     0     0
8        466    -   Matin   0    0    0     0     0     0     1     0     0
9       1611    +    Soir   0    0    0     0     0     1     0     0     0
10      2372    +   Matin   0    0    0     0     1     0     0     0     0
11      2372    +    Soir   0    0    0     0     0     0     0     0     0
12      2372    -   Matin   0    0    0     0     0     0     0     0     0
13      2641    +    Soir   0    0    1     0     0     0     0     0     0
14      2641    -    Soir   0    0    1     0     0     0     0     0     0
15      2785    +    Soir   0    0    0     0     0     0     0     0     0
16      2785    -    Soir   0    0    0     0     0     0     0     0     0

    7072  7225  7482  7483
0      0     0     0     0
1      0     0     0     0
2      0     0     0     0
3      0     0     0     0
4      0     0     0     0
5      0     0     0     0
6      0     0     0     0
7      0     0     0     0
8      0     0     0     0
9      0     0     0     0
10     0     0     0     0
11     1     0     0     0
12     1     0     0     0
13     0     1     0     0
14     0     1     0     0
15     0     0     1     0
16     0     0     0     1
</code></pre> <hr> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow"><code>crosstab</code></a>:</p> <pre><code>df = (pd.crosstab([df.NoUsager, df.Sens, df.Periods], df.NoAdresse)
        .reset_index()
        .rename_axis(None, axis=1))
print (df)

    NoUsager Sens Periods  79  456  889  2095  2932  4236  4900  6168  6470  \
0          1    +   Matin   1    0    0     0     0     0     0     0     0
1          1    -   Matin   1    0    0     0     0     0     0     0     0
2        314    +   Matin   0    0    0     0     0     0     0     0     1
3        314    -   Matin   0    1    0     0     0     0     0     0     0
4        382    +    Soir   0    0    0     1     0     0     0     1     0
5        382    -    Soir   0    0    0     0     0     0     0     1     0
6        466    +   Matin   0    0    0     0     1     0     0     0     0
7        466    +    Soir   0    0    0     0     0     0     1     0     0
8        466    -   Matin   0    0    0     0     0     0     1     0     0
9       1611    +    Soir   0    0    0     0     0     1     0     0     0
10      2372    +   Matin   0    0    0     0     1     0     0     0     0
11      2372    +    Soir   0    0    0     0     0     0     0     0     0
12      2372    -   Matin   0    0    0     0     0     0     0     0     0
13      2641    +    Soir   0    0    1     0     0     0     0     0     0
14      2641    -    Soir   0    0    1     0     0     0     0     0     0
15      2785    +    Soir   0    0    0     0     0     0     0     0     0
16      2785    -    Soir   0    0    0     0     0     0     0     0     0

    7072  7225  7482  7483
0      0     0     0     0
1      0     0     0     0
2      0     0     0     0
3      0     0     0     0
4      0     0     0     0
5      0     0     0     0
6      0     0     0     0
7      0     0     0     0
8      0     0     0     0
9      0     0     0     0
10     0     0     0     0
11     1     0     0     0
12     1     0     0     0
13     0     1     0     0
14     0     1     0     0
15     0     0     1     0
16     0     0     0     1
</code></pre> <p>EDIT by comment:</p> <p>I think you only need to aggregate with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a>:</p> <pre><code>df = df.groupby(['NoUsager','NoAdresse']).size().reset_index(name='Count')
print (df)

    NoUsager NoAdresse  Count
0          1        79      2
1        314       456      1
2        314      6470      1
3        382      2095      1
4        382      6168      2
5        466      2932      1
6        466      4900      2
7       1611      4236      1
8       2372      2932      1
9       2372      7072      2
10      2641       889      2
11      2641      7225      2
12      2785      7482      1
13      2785      7483      1
</code></pre> <p>If you need these columns as indexes, you can use another solution - <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.rename.html" rel="nofollow"><code>rename</code></a> the <code>Series</code> and then call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_frame.html" rel="nofollow"><code>to_frame</code></a>: </p> <pre><code>df = df.groupby(['NoUsager','NoAdresse']).size().rename('Count').to_frame()

                    Count
NoUsager NoAdresse
1        79             2
314      456            1
         6470           1
382      2095           1
         6168           2
466      2932           1
         4900           2
1611     4236           1
2372     2932           1
         7072           2
2641     889            2
         7225           2
2785     7482           1
         7483           1
</code></pre> <p>Or add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a>:</p> <pre><code>df = (df.groupby(['NoUsager','NoAdresse'])
        .size()
        .reset_index(name='Count')
        .set_index(['NoUsager','NoAdresse']))
print (df)

                    Count
NoUsager NoAdresse
1        79             2
314      456            1
         6470           1
382      2095           1
         6168           2
466      2932           1
         4900           2
1611     4236           1
2372     2932           1
         7072           2
2641     889            2
         7225           2
2785     7482           1
         7483           1
</code></pre>
1
2016-09-07T12:25:30Z
[ "python", "pandas", "dataframe" ]
SQLAlchemy Python 3 Ubuntu 16.04
39,369,897
<p>I can't seem to install the newest version of SQLAlchemy in my Python 3. Similar questions asked on stackoverflow are all pre-2016 and talk about older distributions of Ubuntu, hence again this question.</p> <h2>System</h2> <ul> <li>Ubuntu 16.04</li> <li>Python 2.7.12 (default if I call python in terminal)</li> <li>Python 3.5.2</li> </ul> <h2>Tried</h2> <p>If I follow the instructions from the documentation (<a href="http://docs.sqlalchemy.org/en/rel_1_0/intro.html#installation" rel="nofollow">http://docs.sqlalchemy.org/en/rel_1_0/intro.html#installation</a>) <code>pip install SQLAlchemy</code>, it only installs itself into my Python 2.7 (<code>sqlalchemy.__version__</code>: 1.0.15).</p> <p><code>pip install python3-sqlalchemy</code> does not exist.</p> <p><code>sudo apt-get install python3-sqlalchemy</code> installs SQLAlchemy in Python3, but <code>sqlalchemy.__version__</code> gives 1.0.11</p> <h2>Question</h2> <p>How do I get the latest version of SQLAlchemy into my Python3 directory, preferably without building from source?</p>
0
2016-09-07T12:23:50Z
39,369,932
<p><code>pip</code> on your system is bound to Python 2, so it installs packages for Python 2 only. To install a PyPI package for Python 3, use <code>pip3</code> (or, equivalently, <code>python3 -m pip</code>):</p> <pre><code>pip3 install SQLAlchemy
</code></pre>
1
2016-09-07T12:25:58Z
[ "python", "python-3.x", "ubuntu", "sqlalchemy" ]
SQLAlchemy Python 3 Ubuntu 16.04
39,369,897
<p>I can't seem to install the newest version of SQLAlchemy in my Python 3. Similar questions asked on stackoverflow are all pre-2016 and talk about older distributions of Ubuntu, hence again this question.</p> <h2>System</h2> <ul> <li>Ubuntu 16.04</li> <li>Python 2.7.12 (default if I call python in terminal)</li> <li>Python 3.5.2</li> </ul> <h2>Tried</h2> <p>If I follow the instructions from the documentation (<a href="http://docs.sqlalchemy.org/en/rel_1_0/intro.html#installation" rel="nofollow">http://docs.sqlalchemy.org/en/rel_1_0/intro.html#installation</a>) <code>pip install SQLAlchemy</code>, it only installs itself into my Python 2.7 (<code>sqlalchemy.__version__</code>: 1.0.15).</p> <p><code>pip install python3-sqlalchemy</code> does not exist.</p> <p><code>sudo apt-get install python3-sqlalchemy</code> installs SQLAlchemy in Python3, but <code>sqlalchemy.__version__</code> gives 1.0.11</p> <h2>Question</h2> <p>How do I get the latest version of SQLAlchemy into my Python3 directory, preferably without building from source?</p>
0
2016-09-07T12:23:50Z
39,369,935
<p>Use a virtualenv:</p> <pre><code>pyvenv env source env/bin/activate pip install sqlalchemy </code></pre> <p>The <code>pyvenv</code> command creates a virtualenv based on Python 3.x, so the <code>pip</code> inside it installs into Python 3.</p>
0
2016-09-07T12:26:04Z
[ "python", "python-3.x", "ubuntu", "sqlalchemy" ]
How to create an Azure API App using a .YAML file and Python code?
39,369,919
<p>One of my clients wants to host his APIs in Azure; the APIs are developed in Python. I tried creating an Azure API App in .NET and got a successful result, but I don't have any knowledge of Python. Can anyone help me find out how I can host these Python APIs in Azure?</p> <p>The source has a .YAML file, some .py files and some .html files.</p>
0
2016-09-07T12:25:18Z
39,760,398
<p>You can confirm with your customer whether the Python APIs are part of a complete Python web application.</p> <p>There are several popular Python web frameworks your customer may have used to build the API server: Flask, Django and Bottle. You can find documentation about how to deploy these applications to Azure Web Apps at <a href="https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-create-deploy-flask-app/" rel="nofollow">Creating web apps with Flask in Azure</a>, <a href="https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-create-deploy-django-app/" rel="nofollow">Creating web apps with Django in Azure</a> and <a href="https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-create-deploy-bottle-app/" rel="nofollow">Creating web apps with Bottle in Azure</a>.</p> <p>If you have any further concerns, please feel free to let me know.</p>
0
2016-09-29T02:27:02Z
[ "python", "azure", "yaml", "azure-api-apps" ]
Subprocess.Popen stdin in new console
39,370,127
<p>I want to execute a Python subprocess in a new console. Once started, I want the user to be able to answer questions asked by this new process on stdin.</p> <p>I tried the following code:</p> <pre><code>p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd, creationflags=subprocess.CREATE_NEW_CONSOLE) (o, e) = p.communicate() </code></pre> <p>As soon as the subprocess asks for input on stdin, the following error message is displayed:</p> <blockquote> <p>EOFError: EOF when reading a line</p> </blockquote> <p>Is this the right way to achieve this?</p>
1
2016-09-07T12:35:14Z
39,371,602
<p>As I'm not really interested in the stdout/stderr redirection, I tried this way: </p> <p><code>subprocess.Popen(cmd, cwd=cwd, creationflags=subprocess.CREATE_NEW_CONSOLE)</code></p> <p>It works fine now. I guess that redirecting the standard input/outputs is not compatible with creating a new console.</p>
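A runnable sketch of the trade-off this answer describes. The child command here is a stand-in (the real `cmd` from the question isn't shown); `CREATE_NEW_CONSOLE` exists only on Windows, so on other platforms the sketch falls back to feeding stdin through a pipe:

```python
import os
import subprocess
import sys

# CREATE_NEW_CONSOLE exists only on Windows; use 0 elsewhere.
flags = subprocess.CREATE_NEW_CONSOLE if os.name == "nt" else 0

# Stand-in for the real `cmd`: a child that echoes one stdin line.
cmd = [sys.executable, "-c",
       "import sys; sys.stdout.write(sys.stdin.readline())"]

if flags:
    # With a new console the child owns its own stdin/stdout, so the
    # user types answers there directly -- no PIPE redirection at all.
    proc = subprocess.Popen(cmd, creationflags=flags)
    proc.wait()
else:
    # Without a separate console, feed stdin through a pipe instead.
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate(b"hello\n")
    print(out.decode().strip())
```

The key point matches the answer: pick one interaction model per process — either a new console with no redirection, or pipes with no new console.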
1
2016-09-07T13:44:27Z
[ "python", "subprocess", "stdin" ]
Py3.4 IMAPLib Login... 'str' does not support the buffer interface
39,370,328
<p>Using imaplib, I'm trying to connect to a mailserver. When I include the password as just a normal string ('password'), it connects fine. But I'm trying to slightly obfuscate my password, so I previously ran it through b64encode, and then used b64decode in the login:</p> <pre><code>#Works: mail.login('myloginname', 'myPassword') #Doesn't Work: mail.login('myloginname', base64.b64decode('Ja3rHsnakhdgkhervc')) # or mail.login('myloginname', bytes(base64.b64decode('Ja3rHsnakhdgkhervc'))) </code></pre> <p>...</p> <pre><code>Traceback (most recent call last): File "./testing.py", line 15, in &lt;module&gt; mail.login('myloginname', bytes(base64.b64decode('Ja3rHsnakhdgkhervc'))) File "/usr/local/lib/python3.4/imaplib.py", line 536, in login typ, dat = self._simple_command('LOGIN', user, self._quote(password)) File "/usr/local/lib/python3.4/imaplib.py", line 1125, in _quote arg = arg.replace('\\', '\\\\') TypeError: 'str' does not support the buffer interface </code></pre> <p>Suggestions?</p>
0
2016-09-07T12:45:22Z
39,370,420
<p>You are passing in a <code>bytes</code> object for the password, not a <code>str</code> value, because that's what <code>base64.b64decode()</code> returns.</p> <p>You'd have to <em>decode</em> that value to a string:</p> <pre><code> base64.b64decode('Ja3rHsnakhdgkhervc').decode('ascii') </code></pre> <p>The exception is caused by the <code>bytes.replace()</code> method, which expects <code>bytes</code> arguments. Since <code>'\\'</code> and <code>'\\\\'</code> are <code>str</code> objects, you get a traceback at <code>args.replace('\\', '\\\\')</code> only because <code>args</code> is a <code>bytes</code> object:</p> <pre><code>&gt;&gt;&gt; b'foo'.replace('\\', '\\\\') Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; TypeError: a bytes-like object is required, not 'str' &gt;&gt;&gt; 'foo'.replace('\\', '\\\\') 'foo' </code></pre>
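A quick round-trip showing why the `.decode('ascii')` step matters. The obfuscated value is generated on the fly here rather than pasted in, so the snippet is self-contained:

```python
import base64

password = "myPassword"

# b64encode/b64decode operate on bytes, so encode the str first...
obfuscated = base64.b64encode(password.encode("ascii"))

# ...and decode the bytes result back to str before handing it to
# imaplib's login(), which expects a str password on Python 3.
recovered = base64.b64decode(obfuscated).decode("ascii")
print(recovered == password)
```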
0
2016-09-07T12:50:00Z
[ "python", "python-3.x", "base64", "imaplib" ]
Sending email with python
39,370,412
<p>I have the following code </p> <pre><code>import smtplib sender = 'sender@sender.com' receivers = ['receiver@receiver.com'] message = """From: From Person &lt;sender@sender.com&gt; To: To Person &lt;receiver@receiver.com&gt; Subject: We are blending baby ! This is a test e-mail message. """ try: smtpObj = smtplib.SMTP('localhost') smtpObj.sendmail(sender, receivers, message) print "Successfully sent email" except SMTPException: print "Error: unable to send email" </code></pre> <p>It's a copy/paste from somewhere and it works fine.</p> <p>However . . . Once included into my overall program, the email that I receive does not have the sender or receiver available ??? </p> <p>It's just blank.... but it's the same code.</p> <pre><code>import paramiko import time import smtplib def disable_paging(remote_conn): '''Disable paging on a Cisco router''' remote_conn.send("terminal length 0\n") time.sleep(1) # Clear the buffer on the screen output = remote_conn.recv(1000) return output def main(): # VARIABLES THAT NEED CHANGED ip = '1.2.3.4' username = 'xxx' password = 'xxx' # Create instance of SSHClient object remote_conn_pre = paramiko.SSHClient() # Automatically add untrusted hosts (make sure okay for security policy in your environment) remote_conn_pre.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # initiate SSH connection remote_conn_pre.connect(ip, username=username, password=password, look_for_keys=False, allow_agent=False) # Use invoke_shell to establish an 'interactive session' remote_conn = remote_conn_pre.invoke_shell() # Strip the initial router prompt output = remote_conn.recv(1000) # Turn off paging disable_paging(remote_conn) # Now let's try to send the router a command remote_conn.send("\n") remote_conn.send("show log last 50\n") # Wait for the command to complete time.sleep(2) output = remote_conn.recv(10000) if 'bad.thing' in output: email_sender() def email_sender(): sender = 'sender@sender.com' receivers = ['receiver@receiver.com'] message = """From: From Person &lt;sender@sender.com&gt; To: To Person &lt;receiver@receiver.com&gt; Subject: We are blending baby ! This is a test e-mail message. """ try: smtpObj = smtplib.SMTP('localhost') smtpObj.sendmail(sender, receivers, message) print "Successfully sent email" except SMTPException: print "Error: unable to send email" main() </code></pre> <p>I'm puzzled, please excuse any indentation that may be wrong, that was only done for the purpose of this post.</p>
-1
2016-09-07T12:49:39Z
39,370,597
<p>If the indentation of your code is as pasted, please indent your code correctly and check if it works:</p> <pre><code>def email_sender(): sender = 'sender@sender.com' receivers = ['receiver@receiver.com'] message = """From: From Person &lt;sender@sender.com&gt; To: To Person &lt;receiver@receiver.com&gt; Subject: We are blending baby ! This is a test e-mail message. """ try: smtpObj = smtplib.SMTP('localhost') smtpObj.sendmail(sender, receivers, message) print "Successfully sent email" except SMTPException: print "Error: unable to send email" </code></pre>
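A related sketch: building the message with the stdlib `email.mime` helpers avoids the indentation-sensitive triple-quoted string entirely. No SMTP connection is made here, so the addresses are the placeholders from the question; the rendered string is what you would pass to `smtpObj.sendmail()`:

```python
from email.mime.text import MIMEText

msg = MIMEText("This is a test e-mail message.")
msg["From"] = "From Person <sender@sender.com>"
msg["To"] = "To Person <receiver@receiver.com>"
msg["Subject"] = "We are blending baby !"

# as_string() renders proper RFC 2822 headers no matter how the
# surrounding Python code is indented.
raw = msg.as_string()
print("From: From Person <sender@sender.com>" in raw)
```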
0
2016-09-07T12:58:23Z
[ "python", "email" ]
Understanding fragments of a Python/PuLP code
39,370,442
<p>I have to adapt an existing script in which the PuLP package was used. I need to know what the result of the following line looks like:</p> <pre><code>unit = ["one", "two", "three"] time = range(10) status=LpVariable.dicts("status",[(f,g) for f in unit for g in time],0,1,LpBinary) </code></pre> <p>What do the keys/values look like?</p> <pre><code>status["one"] = [0,1,1,1...]? </code></pre> <p>Thank you very much for your help!</p>
4
2016-09-07T12:51:14Z
39,370,893
<pre><code>from pulp import * unit = ["one", "two"] time = range(2) status=LpVariable.dicts("status",[(f,g) for f in unit for g in time],0,1,LpBinary) </code></pre> <p>Leads to </p> <pre><code>&gt;&gt;&gt; status {('two', 1): status_('two',_1), ('two', 2): status_('two',_2), ('one', 2): status_('one',_2), ('one', 0): status_('one',_0), ('one', 1): status_('one',_1), ('two', 0): status_('two',_0)} </code></pre> <p>So, there is no entry with the key "one".</p>
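The tuple keys come from the list comprehension itself, so the shape can be checked even without PuLP installed. This sketch uses a plain dict as a stand-in for the `LpVariable.dicts` result:

```python
unit = ["one", "two", "three"]
time = range(10)  # shadows the stdlib time module name, as in the question

# The exact key list the question passes to LpVariable.dicts:
keys = [(f, g) for f in unit for g in time]
status = {key: 0 for key in keys}  # stand-in values

# Keys are ("one", 0) ... ("three", 9) tuples, never the bare unit name,
# so status["one"] raises KeyError while status[("one", 0)] is one entry.
print(len(status), ("one", 0) in status, "one" in status)
```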
1
2016-09-07T13:12:51Z
[ "python", "pulp" ]
Recalling data from a text document on python 3
39,370,474
<p>I have made this script which will save players' names followed by their scores. </p> <p>I am looking to recall this data back into Python so it can be sorted into a table in a UI.</p> <p>I'm sure it's a simple solution but I can only find how to save to a text document.</p> <pre><code> players=int(input("How many players are there? ")) with open('playerscores.txt', mode='wt', encoding='utf-8') as myfile: for i in range (players): username=input('Enter your username: ') score=input('Enter your score: ') playerinfo= [username,score, '\n'] myfile.write('\n'.join(playerinfo)) </code></pre>
0
2016-09-07T12:52:31Z
39,370,846
<p>There are multiple ways to do this: 1. Reopen your playerscores.txt file, read the data from it into a buffer, sort the buffer, and write it back out. 2. While taking the input, store it in a buffer, sort the buffer, and then write it to the text file. </p> <pre><code>players=int(input("How many players are there? ")) with open('playerscores.txt', mode='wt', encoding='utf-8') as myfile: playerinfo = [] for i in range (players): username=input('Enter your username: ') score=input('Enter your score: ') playerinfo.append([username,score,'\n']) playerinfo = sorted(playerinfo, key = lambda x: int(x[1])) for i in playerinfo: myfile.write('\n'.join(i)) </code></pre>
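A file-free sketch of the buffer-then-sort idea. The names and scores below are made up; since `input()` returns strings, the `int()` conversion in the sort key is what keeps the ordering numeric:

```python
buffer = [["alice", "12"], ["bob", "3"], ["carol", "101"]]

# As plain strings, "101" < "12" < "3", so sorting without int()
# would order the table wrongly; sort numerically, highest first.
buffer.sort(key=lambda entry: int(entry[1]), reverse=True)

lines = ["{} {}".format(name, score) for name, score in buffer]
print("\n".join(lines))
```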
0
2016-09-07T13:10:55Z
[ "python", "python-3.x", "serialization" ]
Flask does not take WTForms input
39,370,510
<p>I'm tring to create an app with flask with WTForms.</p> <p>In the controller.py i have:</p> <pre><code>@mod_private.route('/portfolio/', methods=['GET', 'POST']) @login_required def portfolio(): print "in portfolio" # I read this form = CreateCoinsForm(request.form) if request.method == 'POST' and form.validate_on_submit(): print form.coins.data #I cannot take this value return render_template("private/portfolio.html",form=form) return render_template("private/portfolio.html",form=form) </code></pre> <p>in the forms.py:</p> <pre><code>class CreateCoinsForm(Form): coins = IntegerField('coins', [DataRequired('num required'), NumberRange(min=0, max=10)]) </code></pre> <p>and the template</p> <pre><code>&lt;form method="post" action="/private/portfolio/" accept-charset="UTF-8" role="form"&gt; &lt;p&gt; {{ form.coins }}&lt;/p&gt; &lt;p&gt;&lt;input type=submit value=Generate&gt; &lt;/form&gt; </code></pre> <p>my problem, as i wrote in the code is that I cannot retrieve the string inserted in the template.</p>
0
2016-09-07T12:53:53Z
39,370,659
<p>Your problem suggests that you are using the built-in CSRF protection on your form, and your form actually isn't validating because you haven't included the CSRF token. </p> <p>Try adjusting your template like so:</p> <pre><code>&lt;form method="post" action="/private/portfolio/" accept-charset="UTF-8" role="form"&gt; {{ form.hidden_tag() }} &lt;p&gt; {{ form.coins }}&lt;/p&gt; &lt;p&gt;&lt;input type=submit value=Generate&gt; &lt;/form&gt; </code></pre>
2
2016-09-07T13:01:25Z
[ "python", "flask", "wtforms" ]
Override get_instance import-export django
39,370,528
<p>I have a basic problem in Django/Python. But I don't know the right answer.</p> <p>I have to override "get_instance" in django-import-export.instance_loaders.</p> <p>Currently I change the code directly in this function. I know this is not very clever. But I don't understand where I should override this function.</p> <p>In MyModelResource or where?</p> <p>Hope anybody can help me. Thanks</p>
0
2016-09-07T12:54:35Z
39,371,045
<p>Basically, you need to define a custom <code>InstanceLoader</code> class and point to it from your resource's inner <code>Meta</code> class. Define the loader before the resource that references it, and import <code>BaseInstanceLoader</code> from <code>import_export.instance_loaders</code>:</p> <pre><code>from import_export import resources from import_export.instance_loaders import BaseInstanceLoader class MyCustomInstanceLoaderClass(BaseInstanceLoader): def get_instance(self, row): # your implementation here class BookResource(resources.ModelResource): class Meta: model = Book fields = ('id', 'name', 'price',) instance_loader_class = MyCustomInstanceLoaderClass </code></pre> <p>Something like this should help you.</p>
0
2016-09-07T13:19:57Z
[ "python", "django" ]
sample time series datasets R and python
39,370,593
<p>Since I am very lazy I don't want to spend time downloading datasets, loading them and perform pre-processing to test some sample functions on different timeseries. What are some sample timeseries datasets available with R and python? (which can be imported easily). For eg: there is the iris dataset (which can be easily loaded in my environment using <code>data(iris)</code>).</p>
2
2016-09-07T12:58:01Z
39,371,193
<p>The Python scikit-learn package has access to the iris and other datasets:</p> <p><a href="http://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html" rel="nofollow">http://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html</a> <a href="http://scikit-learn.org/stable/modules/classes.html#module-sklearn.datasets" rel="nofollow">http://scikit-learn.org/stable/modules/classes.html#module-sklearn.datasets</a></p> <p>Also, the Python statsmodels package has a datasets module for accessing R datasets:</p> <p><a href="http://statsmodels.sourceforge.net/0.6.0/datasets/index.html" rel="nofollow">http://statsmodels.sourceforge.net/0.6.0/datasets/index.html</a></p>
1
2016-09-07T13:26:38Z
[ "python", "time-series" ]
sample time series datasets R and python
39,370,593
<p>Since I am very lazy I don't want to spend time downloading datasets, loading them and perform pre-processing to test some sample functions on different timeseries. What are some sample timeseries datasets available with R and python? (which can be imported easily). For eg: there is the iris dataset (which can be easily loaded in my environment using <code>data(iris)</code>).</p>
2
2016-09-07T12:58:01Z
39,371,361
<p>In <code>R</code>, per @Kabulan0lak's comment, you can choose from different "preloaded" datasets. One way to see what you currently have available in your system is to type:</p> <pre><code>data() </code></pre> <p>Since you're looking for time series data, I would suggest the <code>EuStockMarkets</code> dataset. You can either load it into your space explicitly:</p> <pre><code>data("EuStockMarkets") </code></pre> <p>or call it directly, simply typing:</p> <pre><code>EuStockMarkets </code></pre> <p>Other datasets that may interest you include:</p> <ul> <li><code>LakeHuron</code>: a single series of class <code>ts</code>.</li> <li><code>JohnsonJohnson</code>: quarterly earnings of the company Johnson &amp; Johnson.</li> </ul>
2
2016-09-07T13:34:13Z
[ "python", "time-series" ]
python decorate function call
39,370,642
<p>I recently learned about decorators and wondered if it's possible to use them not in a function definition but in a function call, as some kind of general wrapper.</p> <p>The reason for that is that I want to call functions from a module through a user-defined interface that does repeatable things to a function, and I don't want to implement a wrapper for every single function.</p> <p>In principle I would like to have something like</p> <pre><code>def a(num): return num @double a(2) </code></pre> <p>returning 4 without the need of having access to the implementation of <code>a</code>. Or would in this case a global wrapper like</p> <pre><code>def multiply(factor,function,*args,**kwargs): return factor*function(*args,**kwargs) </code></pre> <p>be the better choice?</p>
-1
2016-09-07T13:00:46Z
39,370,818
<p>You could do something like this:</p> <pre><code>def a(num): return num def double(f): def wrapped(*args, **kwargs): return 2 * f(*args, **kwargs) return wrapped print(double(a)(2)) # prints 4 </code></pre> <p>This works because we can decorate a function and run it by calling the decorator function explicitly, as in the example above. So in this call:</p> <pre><code>print(double(a)(2)) </code></pre> <p>in place of <code>a</code> you can put any function, and in place of <code>2</code>, any args and kwargs.</p>
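For reference, a variant of the same call-site decoration with `functools.wraps` added so the wrapped function keeps its name — an addition of mine, not part of the original answer:

```python
import functools

def double(f):
    @functools.wraps(f)  # preserve f's __name__ and docstring
    def wrapped(*args, **kwargs):
        return 2 * f(*args, **kwargs)  # multiply instead of just forwarding
    return wrapped

def a(num):
    return num

result = double(a)(2)  # decoration at the call site, as in the answer
print(result, double(a).__name__)
```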
1
2016-09-07T13:09:27Z
[ "python", "wrapper", "decorator" ]
python decorate function call
39,370,642
<p>I recently learned about decorators and wondered if it's possible to use them not in a function definition but in a function call, as some kind of general wrapper.</p> <p>The reason for that is, that I want to call functions from a module through a user-defined interface that does repeatable things to a function and I don't want to implement a wrapper for every single function.</p> <p>In principle I would like to have something like</p> <pre><code>def a(num): return num @double a(2) </code></pre> <p>returning 4 without the need of having access to the implementation of <code>a</code>. Or would in this case a global wrapper like</p> <pre><code>def mutiply(factor,function,*args,**kwargs): return factor*function(*args,*kwargs) </code></pre> <p>be the better choice?</p>
-1
2016-09-07T13:00:46Z
39,371,706
<p>There is a very good detailed section on decorators in Marty Alchin's book <strong>Pro Python</strong> from Apress.</p> <p>While the new style @decorator syntax is only available to be used at function definition, you can use the older syntax, which does the same thing this way:</p> <pre><code>from module import myfunc myfunc = double_decorator(myfunc) x = myfunc(2) # returns 4 </code></pre>
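A runnable sketch of that rebinding pattern. `double_decorator` and `myfunc` are stand-ins, since the real module isn't shown in the answer:

```python
def double_decorator(func):
    def wrapped(*args, **kwargs):
        return 2 * func(*args, **kwargs)
    return wrapped

def myfunc(num):  # stand-in for the function imported from the module
    return num

# Pre-@ decorator syntax: rebind the name to the wrapped version.
myfunc = double_decorator(myfunc)
x = myfunc(2)
print(x)
```

This is exactly what the `@` syntax does at definition time, just performed after the fact on an already-imported function.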
1
2016-09-07T13:48:27Z
[ "python", "wrapper", "decorator" ]
Setting up Python's ArgumentParser with two mutually excluding flags where one flag has optional additional flags
39,370,691
<p>I would like to define the following using Python's ArgumentParser:</p> <pre><code>--mutually_exclusive_flag_A stringParameter --mutually_exclusive_flag_B stringParameter --optional_b_flag_one --optional_b_flag_two </code></pre> <p>One can use either mutually_exclusive_flag_A or mutually_exclusive_flag_B, but not both.</p> <p>If one uses mutually_exclusive_flag_B, then one can use optional_b_flag_one and optional_b_flag_two. </p> <p>optional_b_flag_one and optional_b_flag_two are boolean flags.</p> <p>I do see add_mutually_exclusive_group to handle selecting mutually_exclusive_flag_A or mutually_exclusive_flag_B. However, what I am not sure how to do is declare that if I use mutually_exclusive_flag_B, then optional_b_flag_one and optional_b_flag_two are valid flags.</p> <p>It seems like I may be able to use the subparsers feature and turn mutually_exclusive_flag_A and mutually_exclusive_flag_B into commands.</p> <p>What is my best option?</p>
0
2016-09-07T13:02:54Z
39,375,840
<p><code>argparse</code> can't handle that complex a test. Mutually exclusive groups can't be nested, and they don't handle other kinds of logic (only <code>xor</code>, not <code>and</code> and <code>or</code>). I've explored such an expansion in a Python bug/issue, but it's not a trivial addition.</p> <p>The best choice is to do your own testing after parsing.</p> <p>The primary purpose of <code>argparse</code> is to figure out what your user wants. Checking validity and issuing a nice error message is a plus, but not central to the parsing task.</p> <p>One of the problems with expanding this mechanism is writing a meaningful usage message for general combinations. Have you thought about how you'd explain this requirement to your users?</p>
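A sketch of the post-parse test the answer recommends. Flag names are taken from the question; the sample argv is made up:

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument("--mutually_exclusive_flag_A", metavar="STR")
group.add_argument("--mutually_exclusive_flag_B", metavar="STR")
parser.add_argument("--optional_b_flag_one", action="store_true")
parser.add_argument("--optional_b_flag_two", action="store_true")

args = parser.parse_args(
    ["--mutually_exclusive_flag_B", "param", "--optional_b_flag_one"])

# The dependency argparse cannot express, checked after parsing;
# in real code, call parser.error(...) when the combination is invalid.
problem = ((args.optional_b_flag_one or args.optional_b_flag_two)
           and args.mutually_exclusive_flag_B is None)
print(args.mutually_exclusive_flag_B, problem)
```

The xor part (A vs. B) is still enforced by the group itself — passing both flags makes `parse_args` exit with a usage error.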
0
2016-09-07T17:18:32Z
[ "python", "python-3.x", "argparse" ]
Error in pip install matplotlib in Mac
39,370,731
<p>When I do </p> <pre><code>pip install matplotlib --upgrade --user </code></pre> <p>I don't get any error but my program fails saying</p> <pre><code>Traceback (most recent call last): File "forest.py", line 22, in &lt;module&gt; matplotlib.style.use('ggplot') AttributeError: 'module' object has no attribute 'style' </code></pre> <p>When I try to upgrade matplotlib without --user I get the following error </p> <pre><code>$ pip install matplotlib --upgrade Collecting matplotlib Using cached matplotlib-1.5.2-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl Requirement already up-to-date: cycler in /Users/vangapellisanthosh/Library/Python/2.7/lib/python/site-packages (from matplotlib) Collecting pyparsing!=2.0.0,!=2.0.4,!=2.1.2,&gt;=1.5.6 (from matplotlib) Using cached pyparsing-2.1.8-py2.py3-none-any.whl Collecting pytz (from matplotlib) Using cached pytz-2016.6.1-py2.py3-none-any.whl Collecting numpy&gt;=1.6 (from matplotlib) Using cached numpy-1.11.1-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl Collecting python-dateutil (from matplotlib) Using cached python_dateutil-2.5.3-py2.py3-none-any.whl Collecting six (from cycler-&gt;matplotlib) Using cached six-1.10.0-py2.py3-none-any.whl Installing collected packages: pyparsing, pytz, numpy, six, python-dateutil, matplotlib Found existing installation: pyparsing 2.0.1 DEPRECATION: Uninstalling a distutils installed project (pyparsing) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project. Uninstalling pyparsing-2.0.1: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py", line 317, in run prefix=options.prefix_path, File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py", line 736, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 742, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py", line 267, in renames shutil.move(old, new) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move copy2(src, real_dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2 copystat(src, dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat os.chflags(dst, st.st_flags) OSError: [Errno 1] Operation not permitted: '/var/folders/7j/19zzrqpn5dl6ghw1pms6k2m80000gp/T/pip-FEDiKY-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pyparsing-2.0.1-py2.7.egg-info' </code></pre> <p>How do I solve it?</p>
2
2016-09-07T13:04:58Z
39,371,141
<p>It seems like your first error is because you are searching for style in matplotlib and not in matplotlib.pyplot. Normally it should work anyway, but try this.</p> <p>Try changing this: </p> <pre><code>matplotlib.style.use('ggplot') </code></pre> <p>by adding this at the beginning of your code:</p> <pre><code>import matplotlib.pyplot as plt </code></pre> <p>and then using:</p> <pre><code>plt.style.use('ggplot') </code></pre> <p>For the second error, pip tries to uninstall pyparsing but somehow doesn't have permission. If you are an administrator, try using:</p> <pre><code>sudo pip install matplotlib --upgrade </code></pre>
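If matplotlib is installed, the style fix can be checked headlessly — the Agg backend avoids needing a display. One assumption here is that your matplotlib is recent enough (1.4+) to ship the ggplot style:

```python
import matplotlib
matplotlib.use("Agg")              # headless backend, safe for scripts/CI
import matplotlib.pyplot as plt

plt.style.use("ggplot")            # access the style module via pyplot
print("ggplot" in plt.style.available)
```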
0
2016-09-07T13:24:12Z
[ "python", "osx", "matplotlib" ]
flask socket-io, sometimes client calls freeze the server
39,370,848
<p>I occasionally have a problem with Flask-SocketIO freezing, and I have no clue how to fix it.</p> <p>My client connects to my socket-io server and performs some chat sessions. It works nicely. But for some reason, sometimes from the client side there is some call that blocks the whole server (the server is stuck in the process, and all other calls are frozen). What is strange is that the server can stay blocked as long as the client-side app is not totally shut down. This is an iOS app / web page, and I must totally close the app or the Safari page. Closing the socket itself, and even deallocating it, doesn't resolve the problem. When the app is in the background, the sockets are closed and deallocated but the problem persists.</p> <p>This is a small server, and it deals with both HTML pages and the socket server, so I have no idea if it is the socket itself or the HTML that blocks the process. But each time the server was freezing, the log showed some socket calls.</p> <p>Here is how I configured my server:</p> <pre><code>socketio = SocketIO(app, ping_timeout=5) socketio.run(app, host='0.0.0.0', port=5001, debug=True, ssl_context=context) </code></pre> <p>So my question is: what can freeze the server (this seems to happen when I leave the app or website open for a long time while doing nothing)? If I use the services normally the server never freezes. And how can I prevent it from happening? Even if I don't know what is causing this, is there a way to blindly stop my server from being stuck at a call? </p> <p>Thank you for the answers</p>
0
2016-09-07T13:11:00Z
39,394,019
<p>According to your comment above, you are using the Flask development web server, without the help of an asynchronous framework such as eventlet or gevent. Besides this option being highly inefficient, you should know that this web server is not battle tested; it is meant for short-lived tests during development. I'm not sure it is able to run for very long, especially under the unusual conditions Flask-SocketIO puts it through, which regular Flask apps do not exercise. I think it is quite possible that you are hitting some obscure bug in Werkzeug that causes it to hang.</p> <p>My recommendation is that you install eventlet and try again. All you need to do is <code>pip install eventlet</code>, and assuming you are not passing an explicit <code>async_mode</code> argument, then just by installing this package Flask-SocketIO should configure itself to use it.</p> <p>I would also remove the explicit timeout setting. In almost all cases, the defaults are sufficient to maintain a healthy connection.</p>
0
2016-09-08T14:48:43Z
[ "python", "freeze", "flask-socketio" ]
extract hour from timestamp with python
39,370,879
<p>I have a dataframe df_energy2</p> <pre><code>df_energy2.info() &lt;class 'pandas.core.frame.DataFrame'&gt; RangeIndex: 29974 entries, 0 to 29973 Data columns (total 4 columns): TIMESTAMP 29974 non-null datetime64[ns] P_ACT_KW 29974 non-null int64 PERIODE_TARIF 29974 non-null object P_SOUSCR 29974 non-null int64 dtypes: datetime64[ns](1), int64(2), object(1) memory usage: 936.8+ KB </code></pre> <p>with this structure:</p> <pre><code>df_energy2.head() TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR 2016-01-01 00:00:00 116 HC 250 2016-01-01 00:10:00 121 HC 250 </code></pre> <p>Is there any Python function which can extract the hour from <code>TIMESTAMP</code>?</p> <p>Kind regards</p>
3
2016-09-07T13:12:21Z
39,370,905
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.hour.html" rel="nofollow"><code>dt.hour</code></a>:</p> <pre><code>print (df.TIMESTAMP.dt.hour) 0 0 1 0 Name: TIMESTAMP, dtype: int64 df['hours'] = df.TIMESTAMP.dt.hour print (df) TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR hours 0 2016-01-01 00:00:00 116 HC 250 0 1 2016-01-01 00:10:00 121 HC 250 0 </code></pre>
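For comparison, the plain-stdlib equivalent when the timestamps are strings rather than a pandas datetime column (sample values copied from the question):

```python
from datetime import datetime

rows = ["2016-01-01 00:00:00", "2016-01-01 00:10:00"]

# strptime parses each string; .hour reads the hour component.
hours = [datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour for ts in rows]
print(hours)
```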
2
2016-09-07T13:13:19Z
[ "python", "datetime", "pandas", "extract", "hour" ]
extract hour from timestamp with python
39,370,879
<p>I have a dataframe df_energy2</p> <pre><code>df_energy2.info() &lt;class 'pandas.core.frame.DataFrame'&gt; RangeIndex: 29974 entries, 0 to 29973 Data columns (total 4 columns): TIMESTAMP 29974 non-null datetime64[ns] P_ACT_KW 29974 non-null int64 PERIODE_TARIF 29974 non-null object P_SOUSCR 29974 non-null int64 dtypes: datetime64[ns](1), int64(2), object(1) memory usage: 936.8+ KB </code></pre> <p>with this structure:</p> <pre><code>df_energy2.head() TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR 2016-01-01 00:00:00 116 HC 250 2016-01-01 00:10:00 121 HC 250 </code></pre> <p>Is there any Python function which can extract the hour from <code>TIMESTAMP</code>?</p> <p>Kind regards</p>
3
2016-09-07T13:12:21Z
39,371,167
<p>Given your data:</p> <blockquote> <pre><code>df_energy2.head() TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR 2016-01-01 00:00:00 116 HC 250 2016-01-01 00:10:00 121 HC 250 </code></pre> </blockquote> <p>You have timestamp as the index. For extracting hours from timestamp where you have it as the index of the dataframe: </p> <pre><code> hours = df_energy2.index.hour </code></pre> <hr> <p><strong>Edit</strong>: Yes, jezrael you're right. Putting what he has stated: pandas dataframe has a property for this i.e. <code>dt</code> :</p> <pre><code>&lt;dataframe&gt;.&lt;ts_column&gt;.dt.hour </code></pre> <p>Example in your context - the column with date is <code>TIMESTAMP</code></p> <pre><code>df.TIMESTAMP.dt.hour </code></pre> <hr> <p>A similar question - <a href="http://stackoverflow.com/questions/21624217/pandas-dataframe-with-a-datetime64-column-querying-by-hour">Pandas, dataframe with a datetime64 column, querying by hour</a></p>
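A self-contained pandas check of the `.dt.hour` accessor mentioned in the edit, built from the two sample rows in the question (requires pandas):

```python
import pandas as pd

df = pd.DataFrame({"TIMESTAMP": pd.to_datetime(
    ["2016-01-01 00:00:00", "2016-01-01 00:10:00"])})

# .dt works on a datetime64 *column*; for a DatetimeIndex use df.index.hour.
df["hours"] = df["TIMESTAMP"].dt.hour
print(df["hours"].tolist())
```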
0
2016-09-07T13:25:35Z
[ "python", "datetime", "pandas", "extract", "hour" ]
Static class property pointing to special instance of the same class in Python
39,370,891
<p>Coming from C++/C#, how does one refer to the same class in the class body in Python:</p> <pre><code>class Foo(object): ANSWER = Foo(42) FAIL = Foo(-1) def __init__(self, value): self._v = value </code></pre> <p>When I try to use this code, I get a "name 'Foo' is not defined" exception on the line trying to instantiate the ANSWER instance.</p>
0
2016-09-07T13:12:44Z
39,371,121
<p>The name <code>Foo</code> is not set until the full class body has been executed. The only way you can do what you want is to add attributes to the class <em>after</em> the <code>class</code> statement has completed:</p> <pre><code>class Foo(object): def __init__(self, value): self._v = value Foo.ANSWER = Foo(42) Foo.FAIL = Foo(-1) </code></pre> <p>It sounds like you are re-inventing Python's <a href="https://docs.python.org/3/library/enum.html" rel="nofollow"><code>enum</code> module</a>; it lets you define a class with constants that are really instances of that class:</p> <pre><code> from enum import Enum class Foo(Enum): ANSWER = 42 FAIL = -1 </code></pre> <p>After that <code>class</code> statement has run, <code>Foo.ANSWER</code> is an instance of <code>Foo</code> with a <code>.value</code> attribute set to <code>42</code>.</p>
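A quick check of the enum-based version from the answer (Python 3.4+, runnable as-is):

```python
from enum import Enum

class Foo(Enum):
    ANSWER = 42
    FAIL = -1

# Members are real Foo instances created during class construction,
# which is exactly the "constant instances of the class" pattern.
print(Foo.ANSWER.value, isinstance(Foo.ANSWER, Foo), Foo(-1) is Foo.FAIL)
```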
1
2016-09-07T13:23:15Z
[ "python", "class" ]
Python PYQT Tabs outside app window [why]
39,370,970
<p>Tabs added before app.exec_() is called look and act like any other tabs you've seen, but adding another one after the app.exec_() call makes the new tab 'detach' from the main app window. Pic below :)</p> <p>Why? How can I make it move inside the window?</p> <pre><code>import threading import time import sys from PyQt5.QtWidgets import QApplication from PyQt5.QtWidgets import QFormLayout from PyQt5.QtWidgets import QLineEdit from PyQt5.QtWidgets import QTabWidget from PyQt5.QtWidgets import QTextEdit from PyQt5.QtWidgets import QWidget class ATry(threading.Thread): def __init__(self): super().__init__() def run(self): time.sleep(1) anotherTextEdit = QTextEdit() anotherLineEdit = QLineEdit() anotherLayout = QFormLayout() anotherLayout.addRow(anotherTextEdit) anotherLayout.addRow(anotherLineEdit) anotherTab = QWidget() anotherTab.setLayout(anotherLayout) md.addTab(anotherTab, "Outside") app = QApplication(sys.argv) md = QTabWidget() aTextEdit = QTextEdit() aLineEdit = QLineEdit() layout = QFormLayout() layout.addRow(aTextEdit) layout.addRow(aLineEdit) thisTab = QWidget() thisTab.setLayout(layout) md.addTab(thisTab, "Inside") a = ATry() a.start() md.show() app.exec_() </code></pre> <p><a href="http://i.stack.imgur.com/3gXA6.png" rel="nofollow">Screen describing the problem</a></p>
0
2016-09-07T13:16:35Z
39,373,790
<p>It works with QTimer or signals:</p> <pre><code>import sys import time from PyQt5.QtCore import QObject from PyQt5.QtCore import QThread from PyQt5.QtCore import pyqtSignal from PyQt5.QtWidgets import QApplication from PyQt5.QtWidgets import QFormLayout from PyQt5.QtWidgets import QLineEdit from PyQt5.QtWidgets import QTabWidget from PyQt5.QtWidgets import QTextEdit from PyQt5.QtWidgets import QWidget class ATry(QThread): def __init__(self, pointer): super().__init__() self.pointer = pointer def run(self): time.sleep(2) self.pointer.emit() def addTheTab(): anotherTextEdit = QTextEdit() anotherLineEdit = QLineEdit() anotherLayout = QFormLayout() anotherLayout.addRow(anotherLineEdit) anotherLayout.addRow(anotherTextEdit) anotherTab = QWidget() anotherTab.setLayout(anotherLayout) md.addTab(anotherTab, "Whatever") class MyQObject(QObject): trigger = pyqtSignal() def __init__(self): super().__init__() def connect_and_get_trigger(self): self.trigger.connect(addTheTab) return self.trigger def getGFX(self): app = QApplication(sys.argv) md = QTabWidget() md.show() return app, md obj = MyQObject() app, md = obj.getGFX() addTheTab() a = ATry(obj.connect_and_get_trigger()) a.start() # timer = QTimer() # timer.timeout.connect(proba) # timer.start(3000) app.exec_() </code></pre>
0
2016-09-07T15:20:51Z
[ "python", "python-3.x", "user-interface", "pyqt", "pyqt5" ]
How to insert strings and slashes in a path?
39,370,988
<p>I'm trying to extract tar.gz files which are situated in different folders named srm01, srm02 and srm03. The folder's name must be an input (a string) to run my code. I'm trying to do something like this:</p> <pre><code>import tarfile import glob thirdBloc = 'srm01' #Then, that must be 'srm02', or 'srm03' for f in glob.glob('C://Users//asediri//Downloads/srm/'+thirdBloc+'/'+'*.tar.gz'): tar = tarfile.open(f) tar.extractall('C://Users//asediri//Downloads/srm/'+thirdBloc) </code></pre> <p>I have this error message: </p> <pre><code>IOError: CRC check failed 0x182518 != 0x7a1780e1L </code></pre> <p>I want first to be sure that my code finds the .tar.gz files. So I tried to just print my paths after glob: </p> <pre><code>thirdBloc = 'srm01' #Then, that must be 'srm02', or 'srm03' for f in glob.glob('C://Users//asediri//Downloads/srm/'+thirdBloc+'/'+'*.tar.gz'): print f </code></pre> <p>That gives:</p> <pre><code>C://Users//asediri//Downloads/srm/srm01\20160707000001-server.log.1.tar.gz C://Users//asediri//Downloads/srm/srm01\20160707003501-server.log.1.tar.gz </code></pre> <p>The os.path.exists method tells me that my files don't exist. </p> <pre><code>print os.path.exists('C://Users//asediri//Downloads/srm/srm01\20160707000001-server.log.1.tar.gz') </code></pre> <p>That gives: False</p> <p>Is there any way to do this work properly? What's the best way to get the right paths in the first place? </p>
0
2016-09-07T13:17:04Z
39,371,267
<p>In order to join paths you have to use <a href="https://docs.python.org/2/library/os.path.html#os.path.join" rel="nofollow"><code>os.path.join</code></a> as follows:</p> <pre><code>import os import tarfile import glob thirdBloc = 'srm01' #Then, that must be 'srm02', or 'srm03' for f in glob.glob(os.path.join('C://Users//asediri//Downloads/srm/', thirdBloc, '*.tar.gz')): tar = tarfile.open(f) tar.extractall(os.path.join('C://Users//asediri//Downloads/srm/', thirdBloc)) </code></pre>
2
2016-09-07T13:30:30Z
[ "python", "python-2.7", "filepath", "glob", "tarfile" ]
How to insert strings and slashes in a path?
39,370,988
<p>I'm trying to extract tar.gz files which are situated in different folders named srm01, srm02 and srm03. The folder's name must be an input (a string) to run my code. I'm trying to do something like this:</p> <pre><code>import tarfile import glob thirdBloc = 'srm01' #Then, that must be 'srm02', or 'srm03' for f in glob.glob('C://Users//asediri//Downloads/srm/'+thirdBloc+'/'+'*.tar.gz'): tar = tarfile.open(f) tar.extractall('C://Users//asediri//Downloads/srm/'+thirdBloc) </code></pre> <p>I have this error message: </p> <pre><code>IOError: CRC check failed 0x182518 != 0x7a1780e1L </code></pre> <p>I want first to be sure that my code finds the .tar.gz files. So I tried to just print my paths after glob: </p> <pre><code>thirdBloc = 'srm01' #Then, that must be 'srm02', or 'srm03' for f in glob.glob('C://Users//asediri//Downloads/srm/'+thirdBloc+'/'+'*.tar.gz'): print f </code></pre> <p>That gives:</p> <pre><code>C://Users//asediri//Downloads/srm/srm01\20160707000001-server.log.1.tar.gz C://Users//asediri//Downloads/srm/srm01\20160707003501-server.log.1.tar.gz </code></pre> <p>The os.path.exists method tells me that my files don't exist. </p> <pre><code>print os.path.exists('C://Users//asediri//Downloads/srm/srm01\20160707000001-server.log.1.tar.gz') </code></pre> <p>That gives: False</p> <p>Is there any way to do this work properly? What's the best way to get the right paths in the first place? </p>
0
2016-09-07T13:17:04Z
39,371,314
<p>os.path.join will create the correct paths for your filesystem</p> <pre><code>f = os.path.join('C://Users//asediri//Downloads/srm/', thirdBloc, '*.tar.gz') </code></pre>
0
2016-09-07T13:32:26Z
[ "python", "python-2.7", "filepath", "glob", "tarfile" ]
How to insert strings and slashes in a path?
39,370,988
<p>I'm trying to extract tar.gz files which are situated in different folders named srm01, srm02 and srm03. The folder's name must be an input (a string) to run my code. I'm trying to do something like this:</p> <pre><code>import tarfile import glob thirdBloc = 'srm01' #Then, that must be 'srm02', or 'srm03' for f in glob.glob('C://Users//asediri//Downloads/srm/'+thirdBloc+'/'+'*.tar.gz'): tar = tarfile.open(f) tar.extractall('C://Users//asediri//Downloads/srm/'+thirdBloc) </code></pre> <p>I have this error message: </p> <pre><code>IOError: CRC check failed 0x182518 != 0x7a1780e1L </code></pre> <p>I want first to be sure that my code finds the .tar.gz files. So I tried to just print my paths after glob: </p> <pre><code>thirdBloc = 'srm01' #Then, that must be 'srm02', or 'srm03' for f in glob.glob('C://Users//asediri//Downloads/srm/'+thirdBloc+'/'+'*.tar.gz'): print f </code></pre> <p>That gives:</p> <pre><code>C://Users//asediri//Downloads/srm/srm01\20160707000001-server.log.1.tar.gz C://Users//asediri//Downloads/srm/srm01\20160707003501-server.log.1.tar.gz </code></pre> <p>The os.path.exists method tells me that my files don't exist. </p> <pre><code>print os.path.exists('C://Users//asediri//Downloads/srm/srm01\20160707000001-server.log.1.tar.gz') </code></pre> <p>That gives: False</p> <p>Is there any way to do this work properly? What's the best way to get the right paths in the first place? </p>
0
2016-09-07T13:17:04Z
39,371,336
<p><code>C://Users//asediri//Downloads/srm/srm01\20160707000001-server.log.1.tar.gz</code></p> <p>Never use a bare backslash in Python string literals for file paths: <code>\201</code> is an octal escape for the single character <code>\x81</code>. It results in this:</p> <p><code>C://Users//asediri//Downloads/srm/srm01ü60707000001-server.log.1.tar.gz</code></p> <p>This is why os.path.exists does not find it.</p> <p>Alternatively, use a raw string: <code>r"C:\..."</code></p> <p>I would suggest you do this:</p> <pre><code>import glob import os thirdBloc = 'srm01' os.chdir(os.path.join("C:/Users/asediri/Downloads/srm", thirdBloc)) for f in glob.glob("*.tar.gz"): print f </code></pre>
0
2016-09-07T13:33:19Z
[ "python", "python-2.7", "filepath", "glob", "tarfile" ]
efficient loop over numpy array
39,371,021
<p>Versions of this question have already been asked but I have not found a satisfactory answer.</p> <p><strong>Problem</strong>: given a large numpy vector, find indices of the vector elements which are duplicated (a variation of that could be comparison with tolerance). </p> <p>So the problem is ~O(N^2) and memory bound (at least from the current algorithm point of view). I wonder why whatever I tried Python is 100x or more slower than an equivalent C code.</p> <pre><code>import numpy as np N = 10000 vect = np.arange(float(N)) vect[N/2] = 1 vect[N/4] = 1 dupl = [] print("init done") counter = 0 for i in range(N): for j in range(i+1, N): if vect[i] == vect[j]: dupl.append(j) counter += 1 print("counter =", counter) print(dupl) # For simplicity, this code ignores repeated indices # which can be trimmed later. Ref output is # counter = 3 # [2500, 5000, 5000] </code></pre> <p>I tried using numpy iterators but they are even worse (~ x4-5) <a href="http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html</a></p> <p>Using N=10,000 I'm getting 0.1 sec in C, 12 sec in Python (code above), 40 sec in Python using np.nditer, 50 sec in Python using np.ndindex. I pushed it to N=160,000 and the timing scales as N^2 as expected.</p>
3
2016-09-07T13:19:02Z
39,371,182
<p><strong>Approach #1</strong></p> <p>You can simulate that iterator-dependency criterion for a vectorized solution using a <code>triangular matrix</code>. This is based on <a href="http://stackoverflow.com/a/36045511/3293881"><code>this post</code></a> that dealt with multiplication involving <code>iterator dependency</code>. For performing the elementwise equality of each element in <code>vect</code> against all of its elements, we can use <code>NumPy broadcasting</code>. Finally, we can use <code>np.count_nonzero</code> to get the count, as it's supposed to be very efficient for summing boolean arrays.</p> <p>So, we would have a solution like so -</p> <pre><code>mask = np.triu(vect[:,None] == vect,1) counter = np.count_nonzero(mask) dupl = np.where(mask)[1] </code></pre> <p>If you only care about the count <code>counter</code>, we could have two more approaches as listed next.</p> <p><strong>Approach #2</strong></p> <p>We can avoid the triangular matrix, get the entire count, subtract the contribution from the diagonal elements, and consider just one of the lower or upper triangular regions by halving the remaining count, as the contributions from either one would be identical.</p> <p>So, we would have a modified solution like so -</p> <pre><code>counter = (np.count_nonzero(vect[:,None] == vect) - vect.size)//2 </code></pre> <p><strong>Approach #3</strong></p> <p>Here's an entirely different approach that uses the fact that the count of each unique element makes a cumsummed contribution to the final total. </p> <p>So, with that idea in mind, we would have a third approach like so -</p> <pre><code>count = np.bincount(vect.astype(int)) # bincount needs ints; OR np.unique(vect,return_counts=True)[1] idx = count[count&gt;1] id_arr = np.ones(idx.sum(),dtype=int) id_arr[0] = 0 id_arr[idx[:-1].cumsum()] = -idx[:-1]+1 counter = np.sum(id_arr.cumsum()) </code></pre>
0
2016-09-07T13:26:14Z
[ "python", "arrays", "loops", "numpy", "optimization" ]
efficient loop over numpy array
39,371,021
<p>Versions of this question have already been asked but I have not found a satisfactory answer.</p> <p><strong>Problem</strong>: given a large numpy vector, find indices of the vector elements which are duplicated (a variation of that could be comparison with tolerance). </p> <p>So the problem is ~O(N^2) and memory bound (at least from the current algorithm point of view). I wonder why whatever I tried Python is 100x or more slower than an equivalent C code.</p> <pre><code>import numpy as np N = 10000 vect = np.arange(float(N)) vect[N/2] = 1 vect[N/4] = 1 dupl = [] print("init done") counter = 0 for i in range(N): for j in range(i+1, N): if vect[i] == vect[j]: dupl.append(j) counter += 1 print("counter =", counter) print(dupl) # For simplicity, this code ignores repeated indices # which can be trimmed later. Ref output is # counter = 3 # [2500, 5000, 5000] </code></pre> <p>I tried using numpy iterators but they are even worse (~ x4-5) <a href="http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html</a></p> <p>Using N=10,000 I'm getting 0.1 sec in C, 12 sec in Python (code above), 40 sec in Python using np.nditer, 50 sec in Python using np.ndindex. I pushed it to N=160,000 and the timing scales as N^2 as expected.</p>
3
2016-09-07T13:19:02Z
39,371,197
<p>Python itself is a highly dynamic, slow language. The idea in numpy is to use <a href="https://www.safaribooksonline.com/library/view/python-for-data/9781449323592/ch04.html" rel="nofollow">vectorization</a>, and avoid explicit loops. In this case, you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.outer.html" rel="nofollow"><code>np.equal.outer</code></a>. You can start with</p> <pre><code>a = np.equal.outer(vect, vect) </code></pre> <p>Now, for example, to find the sum:</p> <pre><code> &gt;&gt;&gt; np.sum(a) 10006 </code></pre> <p>To find the indices of <em>i</em> that are equal, you can do</p> <pre><code>np.fill_diagonal(a, 0) &gt;&gt;&gt; np.nonzero(np.any(a, axis=0))[0] array([ 1, 2500, 5000]) </code></pre> <hr> <p><strong>Timing</strong></p> <pre><code>def find_vec(): a = np.equal.outer(vect, vect) s = np.sum(a) np.fill_diagonal(a, 0) return np.sum(a), np.nonzero(np.any(a, axis=0))[0] &gt;&gt;&gt; %timeit find_vec() 1 loops, best of 3: 214 ms per loop def find_loop(): dupl = [] counter = 0 for i in range(N): for j in range(i+1, N): if vect[i] == vect[j]: dupl.append(j) counter += 1 return dupl &gt;&gt;&gt; %timeit find_loop() 1 loops, best of 3: 8.51 s per loop </code></pre>
1
2016-09-07T13:26:49Z
[ "python", "arrays", "loops", "numpy", "optimization" ]
efficient loop over numpy array
39,371,021
<p>Versions of this question have already been asked but I have not found a satisfactory answer.</p> <p><strong>Problem</strong>: given a large numpy vector, find indices of the vector elements which are duplicated (a variation of that could be comparison with tolerance). </p> <p>So the problem is ~O(N^2) and memory bound (at least from the current algorithm point of view). I wonder why whatever I tried Python is 100x or more slower than an equivalent C code.</p> <pre><code>import numpy as np N = 10000 vect = np.arange(float(N)) vect[N/2] = 1 vect[N/4] = 1 dupl = [] print("init done") counter = 0 for i in range(N): for j in range(i+1, N): if vect[i] == vect[j]: dupl.append(j) counter += 1 print("counter =", counter) print(dupl) # For simplicity, this code ignores repeated indices # which can be trimmed later. Ref output is # counter = 3 # [2500, 5000, 5000] </code></pre> <p>I tried using numpy iterators but they are even worse (~ x4-5) <a href="http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html</a></p> <p>Using N=10,000 I'm getting 0.1 sec in C, 12 sec in Python (code above), 40 sec in Python using np.nditer, 50 sec in Python using np.ndindex. I pushed it to N=160,000 and the timing scales as N^2 as expected.</p>
3
2016-09-07T13:19:02Z
39,371,224
<blockquote> <p>I wonder why whatever I tried Python is 100x or more slower than an equivalent C code.</p> </blockquote> <p>Because Python programs are usually 100x slower than C programs.</p> <p>You can either implement critical code paths in C and provide Python-C bindings, or change the algorithm. You can write an O(N) version by using a <code>dict</code> that maps each value back to its indices.</p> <pre><code>import numpy as np N = 10000 vect = np.arange(float(N)) vect[N/2] = 1 vect[N/4] = 1 dupl = {} print("init done") counter = 0 for i in range(N): e = dupl.get(vect[i], None) if e is None: dupl[vect[i]] = [i] else: e.append(i) counter += 1 print("counter =", counter) print([(k, v) for k, v in dupl.items() if len(v) &gt; 1]) </code></pre> <p>Edit:</p> <p>If you need to test against a tolerance, abs(vect[i] - vect[j]) &lt; eps, you can normalize the values by eps:</p> <pre><code>abs(vect[i] - vect[j]) &lt; eps -&gt; abs(vect[i] - vect[j]) / eps &lt; (eps / eps) -&gt; abs(vect[i]/eps - vect[j]/eps) &lt; 1 int(abs(vect[i]/eps - vect[j]/eps)) = 0 </code></pre> <p>Like this:</p> <pre><code>import numpy as np N = 10000 vect = np.arange(float(N)) vect[N/2] = 1 vect[N/4] = 1 dupl = {} print("init done") counter = 0 eps = 0.01 for i in range(N): k = int(vect[i] / eps) e = dupl.get(k, None) if e is None: dupl[k] = [i] else: e.append(i) counter += 1 print("counter =", counter) print([(k, v) for k, v in dupl.items() if len(v) &gt; 1]) </code></pre>
0
2016-09-07T13:28:03Z
[ "python", "arrays", "loops", "numpy", "optimization" ]
efficient loop over numpy array
39,371,021
<p>Versions of this question have already been asked but I have not found a satisfactory answer.</p> <p><strong>Problem</strong>: given a large numpy vector, find indices of the vector elements which are duplicated (a variation of that could be comparison with tolerance). </p> <p>So the problem is ~O(N^2) and memory bound (at least from the current algorithm point of view). I wonder why whatever I tried Python is 100x or more slower than an equivalent C code.</p> <pre><code>import numpy as np N = 10000 vect = np.arange(float(N)) vect[N/2] = 1 vect[N/4] = 1 dupl = [] print("init done") counter = 0 for i in range(N): for j in range(i+1, N): if vect[i] == vect[j]: dupl.append(j) counter += 1 print("counter =", counter) print(dupl) # For simplicity, this code ignores repeated indices # which can be trimmed later. Ref output is # counter = 3 # [2500, 5000, 5000] </code></pre> <p>I tried using numpy iterators but they are even worse (~ x4-5) <a href="http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html</a></p> <p>Using N=10,000 I'm getting 0.1 sec in C, 12 sec in Python (code above), 40 sec in Python using np.nditer, 50 sec in Python using np.ndindex. I pushed it to N=160,000 and the timing scales as N^2 as expected.</p>
3
2016-09-07T13:19:02Z
39,371,652
<p>This solution using the <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow">numpy_indexed</a> package has complexity n log n, and is fully vectorized; so not terribly different from C performance, in all likelihood.</p> <pre><code>import numpy_indexed as npi dpl = np.flatnonzero(npi.multiplicity(vect) &gt; 1) </code></pre>
0
2016-09-07T13:46:13Z
[ "python", "arrays", "loops", "numpy", "optimization" ]
efficient loop over numpy array
39,371,021
<p>Versions of this question have already been asked but I have not found a satisfactory answer.</p> <p><strong>Problem</strong>: given a large numpy vector, find indices of the vector elements which are duplicated (a variation of that could be comparison with tolerance). </p> <p>So the problem is ~O(N^2) and memory bound (at least from the current algorithm point of view). I wonder why whatever I tried Python is 100x or more slower than an equivalent C code.</p> <pre><code>import numpy as np N = 10000 vect = np.arange(float(N)) vect[N/2] = 1 vect[N/4] = 1 dupl = [] print("init done") counter = 0 for i in range(N): for j in range(i+1, N): if vect[i] == vect[j]: dupl.append(j) counter += 1 print("counter =", counter) print(dupl) # For simplicity, this code ignores repeated indices # which can be trimmed later. Ref output is # counter = 3 # [2500, 5000, 5000] </code></pre> <p>I tried using numpy iterators but they are even worse (~ x4-5) <a href="http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html</a></p> <p>Using N=10,000 I'm getting 0.1 sec in C, 12 sec in Python (code above), 40 sec in Python using np.nditer, 50 sec in Python using np.ndindex. I pushed it to N=160,000 and the timing scales as N^2 as expected.</p>
3
2016-09-07T13:19:02Z
39,371,741
<p>The obvious question is why you want to do this in this way. NumPy arrays are intended to be opaque data structures – by this I mean NumPy arrays are intended to be created inside the NumPy system and then operations sent in to the NumPy subsystem to deliver a result. i.e. NumPy should be a black box into which you throw requests and out come results.</p> <p>So given the code above I am not at all surprised that NumPy performance is worse than dreadful.</p> <p>The following should be effectively what you want, I believe, but done the NumPy way:</p> <pre><code>import numpy as np N = 10000 vect = np.arange(float(N)) vect[N/2] = 1 vect[N/4] = 1 print([np.where(a == vect)[0] for a in vect][1]) # Delivers [1, 2500, 5000] </code></pre>
0
2016-09-07T13:49:30Z
[ "python", "arrays", "loops", "numpy", "optimization" ]
efficient loop over numpy array
39,371,021
<p>Versions of this question have already been asked but I have not found a satisfactory answer.</p> <p><strong>Problem</strong>: given a large numpy vector, find indices of the vector elements which are duplicated (a variation of that could be comparison with tolerance). </p> <p>So the problem is ~O(N^2) and memory bound (at least from the current algorithm point of view). I wonder why whatever I tried Python is 100x or more slower than an equivalent C code.</p> <pre><code>import numpy as np N = 10000 vect = np.arange(float(N)) vect[N/2] = 1 vect[N/4] = 1 dupl = [] print("init done") counter = 0 for i in range(N): for j in range(i+1, N): if vect[i] == vect[j]: dupl.append(j) counter += 1 print("counter =", counter) print(dupl) # For simplicity, this code ignores repeated indices # which can be trimmed later. Ref output is # counter = 3 # [2500, 5000, 5000] </code></pre> <p>I tried using numpy iterators but they are even worse (~ x4-5) <a href="http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html</a></p> <p>Using N=10,000 I'm getting 0.1 sec in C, 12 sec in Python (code above), 40 sec in Python using np.nditer, 50 sec in Python using np.ndindex. I pushed it to N=160,000 and the timing scales as N^2 as expected.</p>
3
2016-09-07T13:19:02Z
39,372,054
<p>As an alternative to Ami Tavory's answer, you can use a <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow">Counter</a> from the collections package to detect duplicates. On my computer it seems to be even faster. See the function below which can also find different duplicates.</p> <pre><code>import collections import numpy as np def find_duplicates_original(x): d = [] for i in range(len(x)): for j in range(i + 1, len(x)): if x[i] == x[j]: d.append(j) return d def find_duplicates_outer(x): a = np.equal.outer(x, x) np.fill_diagonal(a, 0) return np.flatnonzero(np.any(a, axis=0)) def find_duplicates_counter(x): counter = collections.Counter(x) values = (v for v, c in counter.items() if c &gt; 1) return {v: np.flatnonzero(x == v) for v in values} n = 10000 x = np.arange(float(n)) x[n // 2] = 1 x[n // 4] = 1 &gt;&gt;&gt;&gt; find_duplicates_counter(x) {1.0: array([ 1, 2500, 5000], dtype=int64)} &gt;&gt;&gt;&gt; %timeit find_duplicates_original(x) 1 loop, best of 3: 12 s per loop &gt;&gt;&gt;&gt; %timeit find_duplicates_outer(x) 10 loops, best of 3: 84.3 ms per loop &gt;&gt;&gt;&gt; %timeit find_duplicates_counter(x) 1000 loops, best of 3: 1.63 ms per loop </code></pre>
0
2016-09-07T14:03:34Z
[ "python", "arrays", "loops", "numpy", "optimization" ]
efficient loop over numpy array
39,371,021
<p>Versions of this question have already been asked but I have not found a satisfactory answer.</p> <p><strong>Problem</strong>: given a large numpy vector, find indices of the vector elements which are duplicated (a variation of that could be comparison with tolerance). </p> <p>So the problem is ~O(N^2) and memory bound (at least from the current algorithm point of view). I wonder why whatever I tried Python is 100x or more slower than an equivalent C code.</p> <pre><code>import numpy as np N = 10000 vect = np.arange(float(N)) vect[N/2] = 1 vect[N/4] = 1 dupl = [] print("init done") counter = 0 for i in range(N): for j in range(i+1, N): if vect[i] == vect[j]: dupl.append(j) counter += 1 print("counter =", counter) print(dupl) # For simplicity, this code ignores repeated indices # which can be trimmed later. Ref output is # counter = 3 # [2500, 5000, 5000] </code></pre> <p>I tried using numpy iterators but they are even worse (~ x4-5) <a href="http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html</a></p> <p>Using N=10,000 I'm getting 0.1 sec in C, 12 sec in Python (code above), 40 sec in Python using np.nditer, 50 sec in Python using np.ndindex. I pushed it to N=160,000 and the timing scales as N^2 as expected.</p>
3
2016-09-07T13:19:02Z
39,373,909
<p>This runs in 8 ms compared to 18 s for your code and doesn't use any strange libraries. It's similar to the approach by @vs0, but I like <code>defaultdict</code> more. It should be approximately O(N).</p> <pre><code>from collections import defaultdict dupl = [] counter = 0 indexes = defaultdict(list) for i, e in enumerate(vect): indexes[e].append(i) if len(indexes[e]) &gt; 1: dupl.append(i) counter += 1 </code></pre>
0
2016-09-07T15:26:21Z
[ "python", "arrays", "loops", "numpy", "optimization" ]
efficient loop over numpy array
39,371,021
<p>Versions of this question have already been asked but I have not found a satisfactory answer.</p> <p><strong>Problem</strong>: given a large numpy vector, find indices of the vector elements which are duplicated (a variation of that could be comparison with tolerance). </p> <p>So the problem is ~O(N^2) and memory bound (at least from the current algorithm point of view). I wonder why whatever I tried Python is 100x or more slower than an equivalent C code.</p> <pre><code>import numpy as np N = 10000 vect = np.arange(float(N)) vect[N/2] = 1 vect[N/4] = 1 dupl = [] print("init done") counter = 0 for i in range(N): for j in range(i+1, N): if vect[i] == vect[j]: dupl.append(j) counter += 1 print("counter =", counter) print(dupl) # For simplicity, this code ignores repeated indices # which can be trimmed later. Ref output is # counter = 3 # [2500, 5000, 5000] </code></pre> <p>I tried using numpy iterators but they are even worse (~ x4-5) <a href="http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html</a></p> <p>Using N=10,000 I'm getting 0.1 sec in C, 12 sec in Python (code above), 40 sec in Python using np.nditer, 50 sec in Python using np.ndindex. I pushed it to N=160,000 and the timing scales as N^2 as expected.</p>
3
2016-09-07T13:19:02Z
39,468,387
<p>Since the answers have stopped coming and none was totally satisfactory, for the record I post my own solution.</p> <p>It is my understanding that it's the assignment which makes Python slow in this case, not the nested loops as I thought initially. Using a library or compiled code eliminates the need for assignments and performance improves dramatically.</p> <pre><code>from __future__ import print_function import numpy as np from numba import jit N = 10000 vect = np.arange(N, dtype=np.float32) vect[N/2] = 1 vect[N/4] = 1 dupl = np.zeros(N, dtype=np.int32) print("init done") # uncomment to enable compiled function #@jit def duplicates(i, counter, dupl, vect): eps = 0.01 ns = len(vect) for j in range(i+1, ns): # replace if to use approx comparison #if abs(vect[i] - vect[j]) &lt; eps: if vect[i] == vect[j]: dupl[counter] = j counter += 1 return counter counter = 0 for i in xrange(N): counter = duplicates(i, counter, dupl, vect) print("counter =", counter) print(dupl[0:counter]) </code></pre> <p>Tests</p> <pre><code># no jit $ time python array-test-numba.py init done counter = 3 [2500 5000 5000] elapsed 10.135 s # with jit $ time python array-test-numba.py init done counter = 3 [2500 5000 5000] elapsed 0.480 s </code></pre> <p>The performance of the compiled version (with @jit uncommented) is close to C code performance ~0.1 - 0.2 sec. Perhaps eliminating the last loop could improve the performance even further. The difference in performance is even stronger when using approximate comparison using eps while there is very little difference for the compiled version.</p> <pre><code># no jit $ time python array-test-numba.py init done counter = 3 [2500 5000 5000] elapsed 109.218 s # with jit $ time python array-test-numba.py init done counter = 3 [2500 5000 5000] elapsed 0.506 s </code></pre> <p>This is ~ 200x difference. In the real code, I had to put both loops in the function as well as use a function template with variable types so it was a bit more complex but not very much.</p>
0
2016-09-13T11:00:08Z
[ "python", "arrays", "loops", "numpy", "optimization" ]
Read a complete data file and round numbers to 2 decimal places and save it with the same format
39,371,050
<p>I am trying to learn Python. I intend to make a very big data file smaller and later do some statistical analysis with R. I need to read the data file (see below):</p> <pre><code>SCALAR ND 3 ST 0 TS 10.00 0.0000 0.0000 0.0000 SCALAR ND 3 ST 0 TS 3600.47 255.1744 255.0201 255.2748 SCALAR ND 3 ST 0 TS 7200.42 255.5984 255.4946 255.7014 </code></pre> <p>then find the numbers, round them to two digits after the decimal point, and save the maximum number together with the number in front of TS. At the end, save the data file in the same format, like the following:</p> <pre><code>SCALAR ND 3 ST 0 TS 10.00 0.00 0.00 0.00 SCALAR ND 3 ST 0 TS 3600.47 255.17 255.02 255.27 SCALAR ND 3 ST 0 TS**MAX** 7200.42 255.60 255.49 255.70 </code></pre> <p>I have written code like this:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import pickle # Open file f = open('data.txt', 'r') thefile = open('output.txt', 'wb') # Read and ignore header lines header1 = f.readline() header2 = f.readline() header3 = f.readline() header4 = f.readline() data = [] for line in f: line = line.strip() columns = line.split() source = {} source['WSP'] = columns[0] #source['timestep'] = float(columns[1]) source['timestep'] = columns[1] data.append(source) f.close() </code></pre> <p>but the number in front of TS cannot be read. I wanted to round the numbers, but the float conversion which I used does not work. After that I wanted to put it in a loop. Any suggestions? Did I write the code in a good way? I will be very thankful for the help.</p>
0
2016-09-07T13:20:16Z
39,371,222
<p>Try <code>"%.2f" % float(columns[1])</code> to round to two decimal places. Note that it gives you a string, not a float. I don't understand the rest of what you're asking.</p> <p><code>&gt;&gt;&gt; "%.2f" % 255.5984</code><br> <code>'255.60'</code> </p>
1
2016-09-07T13:27:52Z
[ "python", "statistics", "string-formatting", "data-type-conversion" ]
Read a complete data file and round numbers to 2 decimal places and save it with the same format
39,371,050
<p>I am trying to learn Python. I intend to make a very big data file smaller and later do some statistical analysis with R. I need to read the data file (see below):</p> <pre><code>SCALAR ND 3 ST 0 TS 10.00 0.0000 0.0000 0.0000 SCALAR ND 3 ST 0 TS 3600.47 255.1744 255.0201 255.2748 SCALAR ND 3 ST 0 TS 7200.42 255.5984 255.4946 255.7014 </code></pre> <p>then find the numbers, round them to two digits after the decimal point, and save the maximum number together with the number in front of TS. At the end, save the data file in the same format, like the following:</p> <pre><code>SCALAR ND 3 ST 0 TS 10.00 0.00 0.00 0.00 SCALAR ND 3 ST 0 TS 3600.47 255.17 255.02 255.27 SCALAR ND 3 ST 0 TS**MAX** 7200.42 255.60 255.49 255.70 </code></pre> <p>I have written code like this:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import pickle # Open file f = open('data.txt', 'r') thefile = open('output.txt', 'wb') # Read and ignore header lines header1 = f.readline() header2 = f.readline() header3 = f.readline() header4 = f.readline() data = [] for line in f: line = line.strip() columns = line.split() source = {} source['WSP'] = columns[0] #source['timestep'] = float(columns[1]) source['timestep'] = columns[1] data.append(source) f.close() </code></pre> <p>but the number in front of TS cannot be read. I wanted to round the numbers, but the float conversion which I used does not work. After that I wanted to put it in a loop. Any suggestions? Did I write the code in a good way? I will be very thankful for the help.</p>
0
2016-09-07T13:20:16Z
39,371,612
<p>The code doesn't execute correctly as posted (where is the part that selects the maximum number or saves the output?), so please post the fixed version. But if your only trouble is with the <code>float</code> function, you can use <code>"%.2f" % num</code> or <code>round(num, 2)</code>.</p>
1
2016-09-07T13:44:51Z
[ "python", "statistics", "string-formatting", "data-type-conversion" ]
Read a complete data file and round numbers to 2 decimal places and save it with the same format
39,371,050
<p>I am trying to learn Python and I intend to make a very big data file smaller and later do some statistical analysis with R. I need to read the data file (see below):</p> <pre><code>SCALAR ND 3 ST 0 TS 10.00 0.0000 0.0000 0.0000 SCALAR ND 3 ST 0 TS 3600.47 255.1744 255.0201 255.2748 SCALAR ND 3 ST 0 TS 7200.42 255.5984 255.4946 255.7014 </code></pre> <p>and find the numbers, round them to two digits after the decimal point, and save the maximum number with the number in front of TS. At the end, save the data file in the same format, like the following:</p> <pre><code>SCALAR ND 3 ST 0 TS 10.00 0.00 0.00 0.00 SCALAR ND 3 ST 0 TS 3600.47 255.17 255.02 255.27 SCALAR ND 3 ST 0 TS**MAX** 7200.42 255.60 255.49 255.70 </code></pre> <p>I have written code like this:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import pickle # Open file f = open('data.txt', 'r') thefile = open('output.txt', 'wb') # Read and ignore header lines header1 = f.readline() header2 = f.readline() header3 = f.readline() header4 = f.readline() data = [] for line in f: line = line.strip() columns = line.split() source = {} source['WSP'] = columns[0] #source['timestep'] = float(columns[1]) source['timestep'] = columns[1] data.append(source) f.close() </code></pre> <p>but the number in front of TS cannot be read. I wanted to round the numbers, but the float conversion I used does not work. After that I wanted to put it in a loop. Any suggestions? Did I write the code in a good way? I will be very thankful for the help.</p>
0
2016-09-07T13:20:16Z
39,371,684
<p>Here is how to convert a float to a string with 2 decimals using the <code>format</code> syntax (the newer Python 2/3 formatting syntax):</p> <pre><code>"{:.2f}".format(some_float) </code></pre> <p>Regarding the rest of your question: your code does not seem to deal correctly with the format of your text file. You have to take care of the fact that each line can contain either only text, text and a number, or only a number. You could deal with that by trying to convert each piece of the line to float and ignoring it if that fails:</p> <pre><code>out = [] for column in columns: try: out.append("{:.2f}".format(float(column))) except ValueError: out.append(column) </code></pre>
1
2016-09-07T13:47:20Z
[ "python", "statistics", "string-formatting", "data-type-conversion" ]
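The per-token try/except approach from the answer above can be exercised on lines of the sample data; anything that parses as a float is reformatted to two decimals, everything else passes through unchanged (`round_tokens` is an illustrative wrapper name, not from the original post):

```python
def round_tokens(line):
    """Reformat float-like tokens to two decimals; keep text tokens as-is."""
    out = []
    for column in line.split():
        try:
            out.append("{:.2f}".format(float(column)))
        except ValueError:
            out.append(column)
    return " ".join(out)

mixed = round_tokens("TS 3600.47")        # text + number
numbers = round_tokens("255.1744 255.0201")  # numbers only
```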
Read a complete data file and round numbers to 2 decimal places and save it with the same format
39,371,050
<p>I am trying to learn python and I have the intention to make the a very big data file smaller and later do some statistical Analysis with R. I need to read the data file (see below):</p> <pre><code>SCALAR ND 3 ST 0 TS 10.00 0.0000 0.0000 0.0000 SCALAR ND 3 ST 0 TS 3600.47 255.1744 255.0201 255.2748 SCALAR ND 3 ST 0 TS 7200.42 255.5984 255.4946 255.7014 </code></pre> <p>and find the numbers and round it in two digits after decimal, svae the maximum number with the namber in front of TS. At the end save the data file with the same format like following:</p> <pre><code>SCALAR ND 3 ST 0 TS 10.00 0.00 0.00 0.00 SCALAR ND 3 ST 0 TS 3600.47 255.17 255.02 255.27 SCALAR ND 3 ST 0 TS**MAX** 7200.42 255.60 255.49 255.70 </code></pre> <p>I have written a code like this:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import pickle # Open file f = open('data.txt', 'r') thefile = open('output.txt', 'wb') # Read and ignore header lines header1 = f.readline() header2 = f.readline() header3 = f.readline() header4 = f.readline() data = [] for line in f: line = line.strip() columns = line.split() source = {} source['WSP'] = columns[0] #source['timestep'] = float(columns[1]) source['timestep'] = columns[1] data.append(source) f.close() </code></pre> <p>but the number in front of TS cannot be read. I wanted to round the numbers but the float which I used does not work. After that I wanted to put it in a loop Any suggestion, Do I write the code in a good way? I will be very thankfull for the help.</p>
0
2016-09-07T13:20:16Z
39,372,143
<p>The first part can be done easily if it can be assumed that the floats that require rounding occur only on lines by themselves. That excludes lines that are prefixed with alpha chars, e.g. <code>TS 3600.47</code>.</p> <pre><code>from __future__ import print_function with open('data.txt') as f, open('output.txt', 'w') as outfile: for line in (l.rstrip() for l in f): try: print('{:.2f}'.format(float(line)), file=outfile) except ValueError: print(line, file=outfile) </code></pre> <p>The second part, however, requires that the file be buffered in its entirety because it is not known where the maximum value for <code>TS</code> will be - it could be at the start of the file, at the end, or anywhere in between. Here's some code to do that:</p> <pre><code>from __future__ import print_function with open('data.txt') as f, open('output.txt', 'w') as outfile: lines = [] max_ts = 0 max_ts_idx = None for i, line in enumerate(l.rstrip() for l in f): try: lines.append('{:.2f}'.format(float(line))) except ValueError: if line.startswith('TS'): new_ts = float(line.split()[-1]) if new_ts &gt; max_ts: max_ts = new_ts max_ts_idx = i lines.append(line) for i, line in enumerate(lines): if i == max_ts_idx: line = line.replace('TS', 'TS**MAX**') print(line, file=outfile) </code></pre> <p>It's basically the same as the print only version above, however, the lines are now accumulated into the list <code>lines</code>. The maximum value for "TS" lines is kept in <code>max_ts</code> and the corresponding line number of that "TS" line in <code>max_ts_idx</code>. Finally the <code>lines</code> list is iterated over and the lines are written to the file. If the line contains the maximum value for "TS" (as determined by <code>max_ts_idx</code>) that line is decorated with <code>**MAX**</code>.</p>
1
2016-09-07T14:07:29Z
[ "python", "statistics", "string-formatting", "data-type-conversion" ]
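The two-pass idea in the answer above can be sketched without real files — the first pass rounds the pure-number lines and records the largest TS value, the second pass decorates that line. The list below stands in for the file's lines (values taken from the question's sample data):

```python
raw = ["SCALAR", "ND 3", "TS 10.00", "0.0000",
       "TS 3600.47", "255.1744", "TS 7200.42", "255.5984"]

lines, max_ts, max_idx = [], 0.0, None
for i, line in enumerate(raw):
    try:
        # pure numbers: round to two decimals
        lines.append("{:.2f}".format(float(line)))
    except ValueError:
        # "TS ..." lines: track the largest timestep and its position
        if line.startswith("TS"):
            ts = float(line.split()[-1])
            if ts > max_ts:
                max_ts, max_idx = ts, i
        lines.append(line)

# second pass: decorate the line holding the maximum TS
lines[max_idx] = lines[max_idx].replace("TS", "TS**MAX**")
```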
How to convert 007898989 to 7898989 in Python
39,371,186
<p>I am trying to convert <code>007898989</code> to <code>7898989</code> in Python using the following code:</p> <pre><code>long(007898989) </code></pre> <p>However this leads to the following error:</p> <pre><code>&gt;&gt;&gt; long(007898989) File "&lt;stdin&gt;", line 1 long(007898989) ^ SyntaxError: invalid token </code></pre> <p>How can I convert this number correctly?</p>
-3
2016-09-07T13:26:27Z
39,371,307
<p>Indeed, doing this:</p> <pre><code>a = 007898989 </code></pre> <p>will raise the error <code>SyntaxError: invalid token</code>. The easiest way to convert to long would be:</p> <p><strong>On Python 2</strong></p> <pre><code>a = long("007898989") print a </code></pre> <p>Trying this cast on Python 3 would give <code>NameError: name 'long' is not defined</code>, so I'd say the best solution is the one below.</p> <p><strong>On Python 2/3</strong></p> <pre><code>a = int("007898989") print(a) </code></pre>
4
2016-09-07T13:32:14Z
[ "python" ]
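As the answer above notes, the leading zeros are only a problem in a numeric literal; parsing the digits from a string sidesteps the issue entirely:

```python
# leading zeros are fine inside a string; int() parses base 10 by default
number = int("007898989")
```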
How to convert 007898989 to 7898989 in Python
39,371,186
<p>I am trying to convert <code>007898989</code> to <code>7898989</code> in Python using the following code:</p> <pre><code>long(007898989) </code></pre> <p>However this leads to the following error:</p> <pre><code>&gt;&gt;&gt; long(007898989) File "&lt;stdin&gt;", line 1 long(007898989) ^ SyntaxError: invalid token </code></pre> <p>How can I convert this number correctly?</p>
-3
2016-09-07T13:26:27Z
39,371,418
<p><code>007898989</code> is not a valid number in Python since numbers starting with a <code>0</code> are supposed to be octal numbers and cannot contain 8 or 9. For example <code>077</code> octal is the same as <code>63</code> decimal. In Python 2, you can write this code:</p> <pre><code>&gt;&gt;&gt; 077 63 </code></pre> <p>This could trigger some serious bugs if not handled carefully. Therefore, for Python 3, numbers with leading zero's are no longer allowed:</p> <pre><code>&gt;&gt;&gt; 077 File "&lt;stdin&gt;", line 1 077 ^ SyntaxError: invalid token </code></pre> <p>Luckily, you can easily convert a numeric string with leading 0's in Python 2 to a long integer using this syntax:</p> <pre><code>long('007898989', base = 10) long('007898989') # Since base defaults to 10 anyway </code></pre> <p>For Python 3 <code>int</code> already has unlimited precision, therefore long is not supported anymore. For Python 3, you can simply use:</p> <pre><code>int('007898989') </code></pre>
0
2016-09-07T13:36:15Z
[ "python" ]
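The octal behaviour described in the answer above can be checked explicitly by passing a base to `int()` — which also shows why `077` means 63, not 77:

```python
from_octal = int("77", 8)        # "77" read as octal is 7*8 + 7 = 63 decimal
from_decimal = int("007898989")  # base 10: leading zeros are simply ignored
```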
Tkinter Dynamic Widget editing
39,371,187
<p>Here is my code:</p> <pre><code>class render_window: def __init__(self, height, width, window_title): self.root_window = Tk() w = width h = height ws = self.root_window.winfo_screenwidth() # width of the screen hs = self.root_window.winfo_screenheight() # height of the screen x = (ws/2) - (w/2) y = (hs/2) - (h/2) self.root_window.title(window_title) self.root_window.minsize(width, height) self.root_window.geometry('%dx%d+%d+%d' % (w, h, x, y)) def new_button(self, button_text, button_command="", grid_row=0, grid_column=0, grid_sticky="NESW", grid_columnspan=1, grid_rowspan=1): self.button = ttk.Button(self.root_window, text=button_text, command=button_command) self.button.grid(row=grid_row, column=grid_column, sticky=grid_sticky, columnspan=grid_columnspan, rowspan=grid_rowspan) self.responsive_grid(grid_row, grid_column) def new_label(self, label_text, text_alignment="center", grid_row=0, grid_column=0, grid_sticky="NESW", grid_columnspan=1, grid_rowspan=1): self.label = ttk.Label(self.root_window, text=label_text, anchor=text_alignment) self.label.grid(row=grid_row, column=grid_column, sticky=grid_sticky, columnspan=grid_columnspan, rowspan=grid_rowspan) self.responsive_grid(grid_row, grid_column) def new_progress_bar(self, pg_length=250, pg_mode="determinate", grid_row=0, grid_column=0, grid_sticky="NESW", grid_columnspan=1, grid_rowspan=1): self.progress_bar = ttk.Progressbar(self.root_window, length=pg_length, mode=pg_mode) self.progress_bar.grid(row=grid_row, column=grid_column, sticky=grid_sticky, columnspan=grid_columnspan, rowspan=grid_rowspan) self.responsive_grid(grid_row, grid_column) def responsive_grid(self, row_responsive=0, column_responsive=0, row_weight_num=1, column_weight_num=1): self.root_window.grid_columnconfigure(column_responsive, weight=column_weight_num) self.root_window.grid_rowconfigure(row_responsive, weight=row_weight_num) options_window = render_window(200, 250, "Options Window") options_window.new_progress_bar() options_window.progress_bar.start() options_window.new_progress_bar(grid_column=1) options_window.progress_bar.start() options_window.new_label("Options Window\And other buttons...", grid_row=1, grid_columnspan=2) options_window.root_window.mainloop() </code></pre> <p>I have created a system that allows the creation of an interface relatively easily using tkinter. I am having an issue with modifying already existing elements: I cannot seem to modify them if I create multiple instances; I can only edit the last one created. When I say modify/edit, I am talking about <code>.config()</code>.</p> <p>So whenever I do: options_window.progress_bar.config(args_here), it only does that for the last bar created. Is there a way to specify which bar I can execute the code on?</p> <p>Thanks!</p>
0
2016-09-07T13:26:28Z
39,371,701
<p>If I understand you correctly... could you not just assign each progress bar to a variable? I.e.</p> <pre><code>pb1 = options_window.progress_bar pb1.start() pb1.config('etc, etc') </code></pre> <p>Sorry if I have misunderstood your problem!</p> <p>PS - Cool idea!</p>
2
2016-09-07T13:48:11Z
[ "python", "class", "python-3.x", "user-interface", "tkinter" ]
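The pattern behind the answer above — have each factory method return the widget it creates, so every caller keeps its own reference instead of relying on the single `self.progress_bar` attribute — can be sketched without a display. `FakeBar` here is a hypothetical stand-in for `ttk.Progressbar`, used only so the sketch runs headless:

```python
class FakeBar:
    """Hypothetical stand-in for ttk.Progressbar (no Tk needed)."""
    def __init__(self):
        self.options = {}

    def config(self, **kwargs):
        self.options.update(kwargs)


class Window:
    def new_progress_bar(self):
        bar = FakeBar()
        return bar  # return the widget instead of overwriting one attribute


w = Window()
pb1 = w.new_progress_bar()
pb2 = w.new_progress_bar()
pb1.config(length=100)  # configures the first bar only
```

With real Tk code the change is the same one-liner: end each `new_*` method with `return self.button` / `return self.progress_bar` and keep the returned references.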
Finding the right parameters for neural network for pong-game
39,371,211
<p>I have some trouble with my implementation of a deep neural network for the game Pong because my network is always diverging, regardless of which parameters I change. I took a Pong game and implemented a Theano/Lasagne-based deep-Q learning algorithm which is based on the famous Nature paper by Google's DeepMind. </p> <p><strong><em>What I want:</em></strong><br> Instead of feeding the network with pixel data I want to input the x- and y-position of the ball and the y-position of the paddle for 4 consecutive frames. So I got a total of 12 inputs.<br> I only want to reward the hit, the loss, and the win of a round.<br> With this configuration, the network did not converge and my agent was not able to play the game. Instead, the paddle drove directly to the top or bottom or repeated the same pattern. So I thought I'd try to make it a bit easier for the agent and add some information. </p> <p><strong><em>What I did:</em></strong><br> <strong>States:</strong> </p> <ul> <li>x-position of the Ball (-1 to 1)</li> <li>y-position of the Ball (-1 to 1)</li> <li>normalized x-velocity of the Ball</li> <li>normalized y-velocity of the Ball</li> <li>y-position of the paddle (-1 to 1)</li> </ul> <p>With 4 consecutive frames I get a total input of 20. </p> <p><strong>Rewards:</strong></p> <ul> <li>+10 if Paddle hits the Ball</li> <li>+100 if Agent wins the round</li> <li>-100 if Agent loses the round</li> <li>-5 to 0 for the distance between the predicted end position (y-position) of the ball and the current y-position of the paddle</li> <li>+20 if the predicted end position of the ball lies in the current range of the paddle (the hit is foreseeable)</li> <li>-5 if the ball lies behind the paddle (no hit possible anymore) </li> </ul> <p>With this configuration, the network still diverges. I tried to play around with the learning rate (0.1 to 0.00001), the nodes of the hidden layers (5 to 500), the number of hidden layers (1 to 4), the batch accumulator (sum or mean), the update rule (rmsprop or DeepMind's rmsprop).<br> None of these led to a satisfactory solution. The graph of the loss averages mostly looks something like <a href="http://i.imgur.com/jjpuPr4.png" rel="nofollow">this</a>. You can download my current version of the implementation <a href="https://Kaonashi2@bitbucket.org/Kaonashi2/py-pong3.0.git" rel="nofollow">here</a><br> I would be very grateful for any hint :)<br> Koanashi</p>
3
2016-09-07T13:27:20Z
39,386,000
<p>Repeating my suggestion from comments as an answer now to make it easier to see for anyone else ending up on this page later (was posted as comment first since I was not 100% sure it'd be the solution):</p> <p>Reducing the magnitude of the rewards to lie in (or at least close to) the [0.0, 1.0] or [-1.0, 1.0] intervals helps the network to converge more quickly.</p> <p>Changing the reward values in such a way (simply dividing them all by a number to make them lie in a smaller interval) does not change what a network is able to learn in theory. The network could also simply learn the same concepts with larger rewards by finding larger weights throughout the network. </p> <p>However, learning such large weights typically takes much more time. The main reason for this is that weights are often initialized to random values close to 0, so it takes a lot of time to change those values to large values through training. Because the weights are initialized to small values (typically), and they are very far away from the optimal weight values, this also means that there is an increased risk of there being a local (<strong>not</strong> a global) minimum along the way to the optimal weight values, which it can get stuck in.</p> <p>With lower reward values, the optimal weight values are likely to be low in magnitude as well. This means that weights initialized to small random values are already more likely to be close to their optimal values. This leads to a shorter training time (less "distance" to travel to put it informally), and a decreased risk of there being local minima along the way to get stuck in.</p>
2
2016-09-08T08:23:58Z
[ "python", "machine-learning", "artificial-intelligence", "deep-learning", "theano" ]
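The rescaling suggested in the answer above is a one-liner — divide every reward by the largest magnitude so they all land in [-1.0, 1.0]. The values below are taken from the question's reward table:

```python
raw_rewards = [10, 100, -100, -5, 20]

# divide by the largest absolute reward so everything lies in [-1, 1]
scale = max(abs(r) for r in raw_rewards)
rewards = [r / float(scale) for r in raw_rewards]
```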
Python webbrowser - Check if browser is available (nothing happens when opening webpage over an SSH connection)
39,371,219
<p>Is there a way to detect whether there is a browser available on the system on which the script is run? Nothing happens when running the following code on a server:</p> <pre><code>try: webbrowser.open("file://" + os.path.realpath(path)) except webbrowser.Error: print "Something went wrong when opening webbrowser" </code></pre> <p>It's weird that there's no caught exception, and no open browser. I'm running the script from command line over an SSH-connection, and I'm not very proficient in server-related stuff, so there may be another way of detecting this that I am missing.</p> <p>Thanks!</p>
0
2016-09-07T13:27:44Z
39,371,956
<p>Check out the <a href="https://docs.python.org/2/library/webbrowser.html" rel="nofollow">documentation</a>:</p> <blockquote> <p>webbrowser.<strong>get</strong>([name])</p> <p>Return a controller object for the browser type name. If name is empty, return a controller for a default browser appropriate to the caller’s environment.</p> </blockquote> <p>This works for me:</p> <pre><code>try: # we are not really interested in the return value webbrowser.get() webbrowser.open("file://" + os.path.realpath(path)) except Exception as e: print "Webbrowser error: %s" % e </code></pre> <p>Output:</p> <pre><code>Webbrowser error: could not locate runnable browser </code></pre>
1
2016-09-07T13:58:39Z
[ "python", "python-webbrowser" ]
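The check from the answer above can be wrapped in a small helper that degrades gracefully whether or not a browser exists (e.g. on a headless SSH session); `browser_available` is an illustrative name, not part of the `webbrowser` module:

```python
import webbrowser


def browser_available():
    """True if webbrowser can locate a runnable browser on this system."""
    try:
        webbrowser.get()  # raises webbrowser.Error when no browser is found
        return True
    except webbrowser.Error:
        return False


ok = browser_available()
```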
Traversing multiple dataframes simultaneously
39,371,228
<p>I have three dataframes of three users with same column names like time, compass data,accelerometer data, gyroscope data and camera panning information. I want to traverse all the dataframes simultaneously to check for a particular time which user has performed camera panning and return the user(like in which data frame panning has been detected for a particular time). I have tried using dash for achieving parallelism but in vain. below is my code</p> <pre><code>import pandas as pd import glob import numpy as np import math from scipy.signal import butter, lfilter order=3 fs=30 cutoff=4.0 data=[] gx=[] gy=[] g_x2=[] g_y2=[] dataList = glob.glob(r'C:\Users\chaitanya\Desktop\Thesis\*.csv') for csv in dataList: data.append(pd.read_csv(csv)) for i in range(0, len(data)): data[i] = data[i].groupby("Time").agg(lambda x: x.value_counts().index[0]) data[i].reset_index(level=0, inplace=True) def butter_lowpass(cutoff,fs,order=5): nyq=0.5 * fs nor=cutoff / nyq b,a=butter(order,nor,btype='low', analog=False) return b,a def lowpass_filter(data,cutoff,fs,order=5): b,a=butter_lowpass(cutoff,fs,order=order) y=lfilter(b,a,data) return y for i in range(0,len(data)): gx.append(lowpass_filter(data[i]["Gyro_X"],cutoff,fs,order)) gy.append(lowpass_filter(data[i]["Gyro_Y"],cutoff,fs,order)) g_x2.append(gx[i]*gx[i]) g_y2.append(gy[i]*gy[i]) g_rad=[[] for _ in range(len(data))] g_ang=[[] for _ in range(len(data))] for i in range(0,len(data)): for j in range(0,len(data[i])): g_ang[i].append(math.degrees(math.atan(gy[i][j]/gx[i][j]))) data[i]["Ang"]=g_ang[i] panning=[[] for _ in range(len(data))] for i in range(0,len(data)): for j in data[i]["Ang"]: if 0-30&lt;=j&lt;=0+30: panning[i].append("Panning") elif 180-30&lt;=j&lt;=180+30: panning[i].append("left") else: panning[i].append("None") data[i]["Panning"]=panning[i] result=[[] for _ in range(len(data))] for i in range (0,len(data)): result[i].append(data[i].loc[data[i]['Panning']=='Panning','Ang']) </code></pre>
0
2016-09-07T13:28:19Z
39,373,533
<p>I'm going to make the assumption that you want to traverse simultaneously in time. In any case, you want your three dataframes to have an index in the dimension you want to traverse. </p> <p>I'll generate 3 dataframes with rows representing random seconds in a 9 second period. </p> <p>Then, I'll align these with a <code>pd.concat</code> and <code>ffill</code> to be able to reference the last known data for any gaps.</p> <pre><code>seconds = pd.date_range('2016-08-31', periods=10, freq='S') n = 6 ssec = seconds.to_series() sidx = ssec.sample(n).index df1 = pd.DataFrame(np.random.randint(1, 10, (n, 3)), ssec.sample(n).index.sort_values(), ['compass', 'accel', 'gyro']) df2 = pd.DataFrame(np.random.randint(1, 10, (n, 3)), ssec.sample(n).index.sort_values(), ['compass', 'accel', 'gyro']) df3 = pd.DataFrame(np.random.randint(1, 10, (n, 3)), ssec.sample(n).index.sort_values(), ['compass', 'accel', 'gyro']) df4 = pd.concat([df1, df2, df3], axis=1, keys=['df1', 'df2', 'df3']).ffill() df4 </code></pre> <p><a href="http://i.stack.imgur.com/H6Sh0.png" rel="nofollow"><img src="http://i.stack.imgur.com/H6Sh0.png" alt="enter image description here"></a></p> <p>you can then proceed to walk through via <code>iterrows()</code></p> <pre><code>for tstamp, row in df4.iterrows(): print tstamp </code></pre>
1
2016-09-07T15:09:59Z
[ "python", "pandas", "dataframe" ]
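The alignment step from the answer above in miniature — here plain integers stand in for the per-second timestamps; after `pd.concat` with `keys`, `ffill` carries each user's last known reading across the gaps so every row can be compared at the same instant:

```python
import pandas as pd

df1 = pd.DataFrame({"compass": [1, 2, 3, 4]}, index=[0, 1, 2, 3])
df2 = pd.DataFrame({"compass": [10, 20]}, index=[0, 2])  # gaps at 1 and 3

# outer-align on the index, label each user's columns, forward-fill the gaps
aligned = pd.concat([df1, df2], axis=1, keys=["u1", "u2"]).ffill()
u2 = aligned[("u2", "compass")].tolist()
```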
Python Pandas group by function
39,371,325
<p>I have this table </p> <pre><code> uname sid usage 0 Ahmad a 5 1 Ahmad a 7 2 Ahmad a 10 3 Ahmad b 2 4 Mohamad c 6 5 Mohamad c 7 6 Mohamad c 9 </code></pre> <p>I want to group by uname and sid, and have the usage column = <code>group.max</code> - <code>group.min</code>. But if the group count is <code>1</code>, return the group <code>max</code>. </p> <p>The output should be: </p> <pre><code> uname sid usage 0 Ahmad a 5 1 Ahmad b 2 2 Mohamad c 3 </code></pre>
2
2016-09-07T13:32:48Z
39,371,445
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html" rel="nofollow"><code>apply</code></a>: take the difference of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.max.html" rel="nofollow"><code>max</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.min.html" rel="nofollow"><code>min</code></a> if the length is more than <code>1</code>, else <code>max</code>:</p> <pre><code>df = df.groupby(['uname','sid'])['usage'] .apply(lambda x: x.max()-x.min() if len(x) &gt; 1 else x.max()) .reset_index() print (df) uname sid usage 0 Ahmad a 5 1 Ahmad b 2 2 Mohamad c 3 </code></pre> <p>I think instead of <code>max</code> you can use <code>iloc</code> too:</p> <pre><code>df = df.groupby(['uname','sid'])['usage'] .apply(lambda x: x.max()-x.min() if len(x) &gt; 1 else x.iloc[0]) .reset_index() print (df) uname sid usage 0 Ahmad a 5 1 Ahmad b 2 2 Mohamad c 3 </code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.where.html" rel="nofollow"><code>Series.where</code></a>, which tests the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a>:</p> <pre><code>g = df.groupby(['uname','sid'])['usage'] s = g.max()-g.min() print (s) uname sid Ahmad a 5 b 0 Mohamad c 3 Name: usage, dtype: int64 print (g.size() == 1) uname sid Ahmad a False b True Mohamad c False dtype: bool print (s.where(g.size() != 1, g.max()).reset_index()) uname sid usage 0 Ahmad a 5 1 Ahmad b 2 2 Mohamad c 3 </code></pre>
1
2016-09-07T13:37:03Z
[ "python", "pandas", "group-by", "max", "min" ]
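The `groupby`/`apply` approach from the answer above, reproduced end-to-end with the question's table so the expected output can be checked:

```python
import pandas as pd

df = pd.DataFrame({"uname": ["Ahmad"] * 4 + ["Mohamad"] * 3,
                   "sid":   ["a", "a", "a", "b", "c", "c", "c"],
                   "usage": [5, 7, 10, 2, 6, 7, 9]})

# max - min per group, unless the group has a single row (then just max)
out = (df.groupby(["uname", "sid"])["usage"]
         .apply(lambda x: x.max() - x.min() if len(x) > 1 else x.max())
         .reset_index(name="usage"))
```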
Python Pandas group by function
39,371,325
<p>I have this table </p> <pre><code> uname sid usage 0 Ahmad a 5 1 Ahmad a 7 2 Ahmad a 10 3 Ahmad b 2 4 Mohamad c 6 5 Mohamad c 7 6 Mohamad c 9 </code></pre> <p>I want to group by uname and side, and have usage column = <code>group.max</code> - <code>group.min</code>. But if group count is <code>1</code> return group <code>max</code> </p> <p>the out put should be </p> <pre><code> uname sid usage 0 Ahmad a 5 1 Ahmad b 2 2 Mohamad c 3 </code></pre>
2
2016-09-07T13:32:48Z
39,372,250
<p>First, use <code>agg</code> to grab <code>min</code>, <code>max</code>, and <code>size</code> of each group.<br> Then multiply <code>min</code> by <code>size &gt; 1</code>: where that condition is true, the product equals <code>min</code>; otherwise it is <code>0</code>. Then subtract that from <code>max</code>.</p> <pre><code>d1 = df.groupby(['uname', 'sid']).usage.agg(['min', 'max', 'size']) d1['max'].sub(d1['min'].mul(d1['size'].gt(1))).reset_index(name='usage') </code></pre> <p><a href="http://i.stack.imgur.com/jIDZN.png" rel="nofollow"><img src="http://i.stack.imgur.com/jIDZN.png" alt="enter image description here"></a></p>
1
2016-09-07T14:12:19Z
[ "python", "pandas", "group-by", "max", "min" ]
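The `agg`-based solution above, run on the question's table — the boolean `size > 1` mask zeroes out `min` for single-row groups, so the subtraction leaves `max` intact there:

```python
import pandas as pd

df = pd.DataFrame({"uname": ["Ahmad"] * 4 + ["Mohamad"] * 3,
                   "sid":   ["a", "a", "a", "b", "c", "c", "c"],
                   "usage": [5, 7, 10, 2, 6, 7, 9]})

d1 = df.groupby(["uname", "sid"]).usage.agg(["min", "max", "size"])
# max - min*(size > 1): groups of one keep their max untouched
usage = d1["max"].sub(d1["min"].mul(d1["size"].gt(1))).tolist()
```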
How can i override print function in Python 2.2.1 (WebLogic version)
39,371,341
<p>How can I override the print function in Python 2.2 in order to be able to redirect output to a custom logger?</p>
0
2016-09-07T13:33:33Z
39,371,767
<p>I don't have a version of 2.2 to check (why are you using such an old version?), but I suspect the following is valid for all 2.x.</p> <hr> <p>The <code>print</code> statement recognizes a first argument beginning with <code>&gt;&gt;</code> to indicate which file to write to.</p> <p>The following are identical:</p> <pre><code>print "foo", "bar" print &gt;&gt;sys.stdout, "foo", "bar" </code></pre> <p>As such, you can specify any <code>file</code> object as the target file.</p> <pre><code>f = open("log.txt", "w") print &gt;&gt;f, "foo", "bar" </code></pre> <p>If you want to redirect <em>every</em> <code>print</code> statement (or at least all the ones that aren't using a specific file as shown above), you can simply replace <code>sys.stdout</code> with your desired file.</p> <pre><code>sys.stdout = open("log.txt", "w") print "foo", "bar" # Goes to log.txt </code></pre> <p>If you need it, the original standard output is still available via <code>sys.__stdout__</code>.</p>
1
2016-09-07T13:50:29Z
[ "python", "wlst" ]
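The same `sys.stdout` swap shown in the answer above, written in Python 3 terms (where `print` is a function and the `>>` syntax no longer exists) so it can be run and checked here — the idea of replacing `sys.stdout` with any file-like object is identical:

```python
import io
import sys

buf = io.StringIO()
old_stdout = sys.stdout
sys.stdout = buf           # every print now lands in the buffer
print("foo", "bar")
sys.stdout = old_stdout    # restore the real stdout

captured = buf.getvalue()
```

A custom logger only needs a `write()` method (and usually `flush()`) to take the place of `buf` here.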
Python IF conditions
39,371,393
<p>I'm new to Django and Python, so the if statement confuses me. Please help me!</p> <p>I'm trying to make an IF condition for a Facebook chat bot API. The whole function: when a user sends something in the Facebook chat, my bot will remove all punctuation, lower-case the text and split it on spaces. After that it will pick a joke from a JSON list using the keyword it got. A quick reply with 2 buttons, "Yes" and "No", is also sent to the user with the joke. </p> <p>It also checks if the keyword is absent. Now I want that when the user types "No" in the chat, my bot will send something back like "Then not". What did I do wrong here?</p> <pre><code>tokens = re.sub(r"[^a-zA-Z0-9\s]", ' ', recevied_message).lower().split() joke_text = '' for token in tokens: if token in jokes: joke_text = random.choice(jokes[token]) .... some code send_message(fbid, joke_text) quick_text = "Do you want another joke? " send_quick_reply_message(fbid, quick_text) break if not joke_text: joke_text = "I didn't understand! " \ "Send 'stupid', 'fat', 'dumb' for a Yo Mama joke!" send_message(fbid, joke_text) if 'No': joke_text = "Then not" send_message(fbid, joke_text) </code></pre>
0
2016-09-07T13:35:33Z
39,371,495
<pre><code>if 'No': </code></pre> <p>is always true; you probably want to be checking whether something <code>== 'No'</code>.</p>
5
2016-09-07T13:39:20Z
[ "python", "django", "if-statement" ]
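The point of the answer above in two lines — any non-empty string is truthy, so `if 'No':` always fires; the fix is to compare the user's input against the literal (`token` here is an illustrative stand-in for the received message):

```python
token = "Yes"

naive = bool("No")         # non-empty strings are always truthy
correct = (token == "No")  # compare the actual input to the literal
```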
Python IF conditions
39,371,393
<p>I'm new to Django and Python, so the if statement confuses me. Please help me!</p> <p>I'm trying to make an IF condition for a Facebook chat bot API. The whole function: when a user sends something in the Facebook chat, my bot will remove all punctuation, lower-case the text and split it on spaces. After that it will pick a joke from a JSON list using the keyword it got. A quick reply with 2 buttons, "Yes" and "No", is also sent to the user with the joke. </p> <p>It also checks if the keyword is absent. Now I want that when the user types "No" in the chat, my bot will send something back like "Then not". What did I do wrong here?</p> <pre><code>tokens = re.sub(r"[^a-zA-Z0-9\s]", ' ', recevied_message).lower().split() joke_text = '' for token in tokens: if token in jokes: joke_text = random.choice(jokes[token]) .... some code send_message(fbid, joke_text) quick_text = "Do you want another joke? " send_quick_reply_message(fbid, quick_text) break if not joke_text: joke_text = "I didn't understand! " \ "Send 'stupid', 'fat', 'dumb' for a Yo Mama joke!" send_message(fbid, joke_text) if 'No': joke_text = "Then not" send_message(fbid, joke_text) </code></pre>
0
2016-09-07T13:35:33Z
39,371,615
<p>I think this code has a problem:</p> <pre><code> if 'No': joke_text = "Then not" send_message(fbid, joke_text) </code></pre> <p>What is 'No'? You have to compare something with "No". Also, this <code>if</code> condition is at the wrong indentation level. </p>
0
2016-09-07T13:44:54Z
[ "python", "django", "if-statement" ]
Python IF conditions
39,371,393
<p>I'm new to Django and Python, so the if statement confuses me. Please help me!</p> <p>I'm trying to make an IF condition for a Facebook chat bot API. The whole function: when a user sends something in the Facebook chat, my bot will remove all punctuation, lower-case the text and split it on spaces. After that it will pick a joke from a JSON list using the keyword it got. A quick reply with 2 buttons, "Yes" and "No", is also sent to the user with the joke. </p> <p>It also checks if the keyword is absent. Now I want that when the user types "No" in the chat, my bot will send something back like "Then not". What did I do wrong here?</p> <pre><code>tokens = re.sub(r"[^a-zA-Z0-9\s]", ' ', recevied_message).lower().split() joke_text = '' for token in tokens: if token in jokes: joke_text = random.choice(jokes[token]) .... some code send_message(fbid, joke_text) quick_text = "Do you want another joke? " send_quick_reply_message(fbid, quick_text) break if not joke_text: joke_text = "I didn't understand! " \ "Send 'stupid', 'fat', 'dumb' for a Yo Mama joke!" send_message(fbid, joke_text) if 'No': joke_text = "Then not" send_message(fbid, joke_text) </code></pre>
0
2016-09-07T13:35:33Z
39,386,029
<p>'No' is a string, so</p> <pre><code>if 'No' </code></pre> <p>always evaluates to True.</p>
0
2016-09-08T08:25:16Z
[ "python", "django", "if-statement" ]
matplotlib very slow in plotting
39,371,429
<p>I have multiple functions in which I input an array or dict as well as a path as an argument, and the function will save a figure to the path of a particular path.</p> <p>Trying to keep example as minimal as possible, but here are two functions:</p> <pre><code>def valueChartPatterns(dict,path): seen_values = Counter() for data in dict.itervalues(): seen_values += Counter(data.values()) seen_values = seen_values.most_common() seen_values_pct = map(itemgetter(1), tupleCounts2Percents(seen_values)) seen_values_pct = ['{:.2%}'.format(item)for item in seen_values_pct] plt.figure() numberchart = plt.bar(range(len(seen_values)), map(itemgetter(1), seen_values), width=0.9,align='center') plt.xticks(range(len(seen_values)), map(itemgetter(0), seen_values)) plt.title('Values in Pattern Dataset') plt.xlabel('Values in Data') plt.ylabel('Occurrences') plt.tick_params(axis='both', which='major', labelsize=6) plt.tick_params(axis='both', which='minor', labelsize=6) plt.tight_layout() plt.savefig(path) plt.clf() def countryChartPatterns(dict,path): seen_countries = Counter() for data in dict.itervalues(): seen_countries += Counter(data.keys()) seen_countries = seen_countries.most_common() seen_countries_percentage = map(itemgetter(1), tupleCounts2Percents(seen_countries)) seen_countries_percentage = ['{:.2%}'.format(item)for item in seen_countries_percentage] yvals = map(itemgetter(1), seen_countries) xvals = map(itemgetter(0), seen_countries) plt.figure() countrychart = plt.bar(range(len(seen_countries)), yvals, width=0.9,align='center') plt.xticks(range(len(seen_countries)), xvals) plt.title('Countries in Pattern Dataset') plt.xlabel('Countries in Data') plt.ylabel('Occurrences') plt.tick_params(axis='both', which='major', labelsize=6) plt.tick_params(axis='both', which='minor', labelsize=6) plt.tight_layout() plt.savefig(path) plt.clf() </code></pre> <p>A very minimal example dict is, but the actual dict contains 56000 values:</p> <pre><code>dict = {"a": {"Germany": 
20006.0, "United Kingdom": 20016.571428571428}, "b": {"Chad": 13000.0, "South Africa": 3000000.0},"c":{"Chad": 200061.0, "South Africa": 3000000.0} } </code></pre> <p>And in my script, I call:</p> <pre><code>if __name__ == "__main__": plt.close('all') print "Starting pattern charting...\n" countryChartPatterns(dict,'newPatternCountries.png') valueChartPatterns(dict,'newPatternValues.png') </code></pre> <p>Note, I load <code>import matplotlib.pyplot as plt</code>.</p> <p>When running this script in PyCharm, I get <code>Starting pattern charting...</code> in my console but the functions take super long to plot.</p> <p>What am I doing wrong? Should I be using a histogram instead of a bar plot as this should achieve the same aim of giving the number of occurrences of countries/values? Can I change my GUI backend somehow? Any advice welcome.</p>
1
2016-09-07T13:36:38Z
39,376,368
<p>This is the test that I mentioned in the comments above, resulting in:</p> <pre><code>Elapsed pre-processing = 13.79 s Elapsed plotting = 0.17 s Pre-processing / plotting = 83.3654562565 </code></pre> <p>Test script:</p> <pre><code>import matplotlib.pylab as plt from collections import Counter from operator import itemgetter import time def countryChartPatterns(dict,path): # pre-processing ------------------- t0 = time.time() seen_countries = Counter() for data in dict.itervalues(): seen_countries += Counter(data.keys()) seen_countries = seen_countries.most_common() yvals = map(itemgetter(1), seen_countries) xvals = map(itemgetter(0), seen_countries) dt1 = time.time() - t0 print("Elapsed pre-processing = {0:.2f} s".format(dt1)) t0 = time.time() # plotting ------------------- plt.figure() countrychart = plt.bar(range(len(seen_countries)), yvals, width=0.9,align='center') plt.xticks(range(len(seen_countries)), xvals) plt.title('Countries in Pattern Dataset') plt.xlabel('Countries in Data') plt.ylabel('Occurrences') plt.tick_params(axis='both', which='major', labelsize=6) plt.tick_params(axis='both', which='minor', labelsize=6) plt.tight_layout() plt.savefig(path) plt.clf() dt2 = time.time() - t0 print("Elapsed plotting = {0:.2f} s".format(dt2)) print("Pre-processing / plotting = {}".format(dt1/dt2)) if __name__ == "__main__": import random as rd import numpy as np countries = ["United States of America", "Afghanistan", "Albania", "Algeria", "Andorra", "Angola", "Antigua &amp; Deps", "Argentina", "Armenia", "Australia", "Austria", "Azerbaijan"] def item(): return {rd.choice(countries): np.random.randint(1e3), rd.choice(countries): np.random.randint(1e3)} dict = {} for i in range(1000000): dict[i] = item() print("Starting pattern charting...") countryChartPatterns(dict,'newPatternCountries.png') </code></pre>
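As a follow-up to the measurement above: most of that pre-processing time goes into building a fresh Counter for every sample and adding it to the running total — and in Python 2, Counter has no in-place +=, so each addition copies the whole accumulated counter. A hedged sketch of a single-pass alternative (assuming, as in countryChartPatterns, that only the keys of the inner dicts need counting):

```python
from collections import Counter

def count_countries(samples):
    # One Counter consumes a generator of all keys in one pass,
    # instead of creating and summing one Counter per sample.
    return Counter(country
                   for data in samples.values()
                   for country in data)

samples = {"a": {"Germany": 20006.0, "United Kingdom": 20016.5},
           "b": {"Chad": 13000.0, "United Kingdom": 50.0}}
print(count_countries(samples).most_common())
```

The same trick applies to valueChartPatterns by iterating over the inner dicts' values instead of their keys.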
1
2016-09-07T17:52:28Z
[ "python", "matplotlib", "plot", "figure", "figures" ]
Subnetwork analysis on proteomics data
39,371,435
<p>Background: I have proteomics data from seven samples (p-value / log-score of fold change), and I want to analyze the data by network (interactome) analyses. </p> <p>Question: I'd like to create an interactome of all the proteins from the data, and map onto this network the proteins that have a significant p-value (compared to control); after that I'd like to create subnetwork(s), and also add pathway enrichments to the subnetwork(s).</p> <p>Request: please suggest online or standalone tools (or algorithms) that fit my requirements. </p> <p>Thanks ! </p>
0
2016-09-07T13:36:50Z
39,376,855
<p>For creating network graphs to represent your protein-protein interactions, I would recommend taking a look at the <a href="https://networkx.github.io" rel="nofollow">networkx</a> library. You can use it to pass in some nodes (proteins of interest) and edges (interactions) and generate a graph. I believe that it can also generate subnetworks of these graphs as well.</p>
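To make the "subnetwork" step concrete without committing to a particular tool, here is a minimal dependency-free sketch of the core operation — splitting an edge list into connected components, which is what extracting subnetworks from an interactome usually amounts to. networkx provides this directly (roughly nx.connected_components(G)); the protein names below are made up for illustration:

```python
def connected_components(edges):
    # Build an undirected adjacency map, then flood-fill from
    # each unvisited node to collect its component.
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

# Hypothetical protein-protein interactions
interactions = [("P53", "MDM2"), ("MDM2", "AKT1"), ("EGFR", "GRB2")]
print(connected_components(interactions))
```

Each resulting component is a candidate subnetwork that could then be tested for pathway enrichment.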
0
2016-09-07T18:28:27Z
[ "python", "validation", "network-programming" ]
BeautifulSoup: get some tag from the page
39,371,488
<p>I have html-code</p> <pre><code>&lt;div class="b-media-cont b-media-cont_relative" data-triggers-container="true"&gt;&lt;span class="label"&gt;Двигатель:&lt;/span&gt; бензин, 1.6 л&lt;br/&gt; &lt;div class="b-triggers b-triggers_theme_dashed-buttons b-triggers_size_s b-triggers_text-notif"&gt;&lt;div class="b-triggers__text"&gt;110 л.с.&lt;/div&gt;&lt;div class="b-triggers__item b-triggers__item_notif" data-target="cost" data-target-container="[data-triggers-container]" data-toggle="tax_dropdown"&gt;&lt;div class="b-link b-link_dashed"&gt;110 л.с.&lt;/div&gt;&lt;/div&gt;&lt;div class="b-triggers-hidden-area b-triggers-hidden-area_width_240 b-triggers-hidden-area_close" data-target-bind="cost" style="left: 0px; top: 39px; width: 241px;"&gt;Налог на&amp;nbsp;2016&amp;nbsp;год &lt;b&gt;2&amp;nbsp;750&amp;nbsp;руб.&lt;/b&gt;&lt;br/&gt;&lt;br/&gt;&lt;span class="gray"&gt;Расчет произведен на легковой автомобиль по &lt;a href="http://law.drom.ru/calc/region77/skoda/rapid/2016/110/"&gt;калькулятору транспортного налога&lt;/a&gt; для Москвы (&lt;a href="http://www.drom.ru/my_region/"&gt;изменить регион&lt;/a&gt;).&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;br/&gt; &lt;span class="label"&gt;Тип кузова:&lt;/span&gt; хэтчбек&lt;br/&gt; &lt;span class="label"&gt;Цвет:&lt;/span&gt; золотистый&lt;br/&gt; &lt;span class="label"&gt;Пробег:&lt;/span&gt; &lt;b&gt;Новый автомобиль от официального дилера&lt;/b&gt;&lt;br/&gt; &lt;span class="label"&gt;Руль:&lt;/span&gt; левый&lt;br/&gt; &lt;span class="label"&gt;VIN:&lt;/span&gt; XW8AC1NH7HK****32&lt;br/&gt; &lt;/div&gt;&lt;p&gt;&lt;span class="label"&gt;Данные по модели из каталога:&lt;/span&gt; &lt;b&gt;толян&lt;/b&gt; &lt;b&gt;4 515 руб.&lt;/b&gt; &lt;b&gt;Продажа Тойота Авенсис.&lt;/b&gt; </code></pre> <p>And I need to get </p> <pre><code>&lt;b&gt;Новый автомобиль от официального дилера&lt;/b&gt; </code></pre> <p>I try </p> <pre><code>mileages = soup.find_all('span', class_='label').next_subling </code></pre> <p>But it returns 
<code>AttributeError: 'ResultSet' object has no attribute 'next_subling'</code> How can I fix that?</p>
0
2016-09-07T13:39:09Z
39,371,561
<blockquote> <p>AttributeError: 'ResultSet' object has no attribute 'next_subling'</p> </blockquote> <p>This is because <code>find_all()</code> returns multiple results - a list of matching tags. And, this problem is actually <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#miscellaneous" rel="nofollow">covered by the <code>BeautifulSoup</code> documentation</a>:</p> <blockquote> <p><code>AttributeError: 'ResultSet' object has no attribute 'foo'</code> - This usually happens because you expected <code>find_all()</code> to return a single tag or string. But <code>find_all()</code> returns a <em>list</em> of tags and strings–a <code>ResultSet</code> object. You need to iterate over the list and look at the .foo of each one. Or, if you really only want one result, you need to use <code>find()</code> instead of <code>find_all()</code>.</p> </blockquote> <p>Instead, you should be using <code>find()</code> to <a class='doc-link' href="http://stackoverflow.com/documentation/beautifulsoup/1940/locating-elements/6339/locate-a-text-after-an-element-in-beautifulsoup#t=201609071345328825864">locate a specific <code>label</code> by text and then get the next sibling element</a>:</p> <pre><code>mileages = soup.find('span', text=u'Пробег:').find_next_sibling("b").get_text(strip=True) </code></pre> <hr> <p>This code works for me as is:</p> <pre><code># -*- coding: utf-8 -*- from bs4 import BeautifulSoup data = u""" &lt;div class="b-media-cont b-media-cont_relative" data-triggers-container="true"&gt;&lt;span class="label"&gt;Двигатель:&lt;/span&gt; бензин, 1.6 л&lt;br/&gt; &lt;div class="b-triggers b-triggers_theme_dashed-buttons b-triggers_size_s b-triggers_text-notif"&gt;&lt;div class="b-triggers__text"&gt;110 л.с.&lt;/div&gt;&lt;div class="b-triggers__item b-triggers__item_notif" data-target="cost" data-target-container="[data-triggers-container]" data-toggle="tax_dropdown"&gt;&lt;div class="b-link b-link_dashed"&gt;110 л.с.&lt;/div&gt;&lt;/div&gt;&lt;div 
class="b-triggers-hidden-area b-triggers-hidden-area_width_240 b-triggers-hidden-area_close" data-target-bind="cost" style="left: 0px; top: 39px; width: 241px;"&gt;Налог на&amp;nbsp;2016&amp;nbsp;год &lt;b&gt;2&amp;nbsp;750&amp;nbsp;руб.&lt;/b&gt;&lt;br/&gt;&lt;br/&gt;&lt;span class="gray"&gt;Расчет произведен на легковой автомобиль по &lt;a href="http://law.drom.ru/calc/region77/skoda/rapid/2016/110/"&gt;калькулятору транспортного налога&lt;/a&gt; для Москвы (&lt;a href="http://www.drom.ru/my_region/"&gt;изменить регион&lt;/a&gt;).&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;br/&gt; &lt;span class="label"&gt;Тип кузова:&lt;/span&gt; хэтчбек&lt;br/&gt; &lt;span class="label"&gt;Цвет:&lt;/span&gt; золотистый&lt;br/&gt; &lt;span class="label"&gt;Пробег:&lt;/span&gt; &lt;b&gt;Новый автомобиль от официального дилера&lt;/b&gt;&lt;br/&gt; &lt;span class="label"&gt;Руль:&lt;/span&gt; левый&lt;br/&gt; &lt;span class="label"&gt;VIN:&lt;/span&gt; XW8AC1NH7HK****32&lt;br/&gt; &lt;/div&gt;&lt;p&gt;&lt;span class="label"&gt;Данные по модели из каталога:&lt;/span&gt; &lt;b&gt;толян&lt;/b&gt; &lt;b&gt;4 515 руб.&lt;/b&gt; &lt;b&gt;Продажа Тойота Авенсис.&lt;/b&gt; &lt;/div&gt; """ soup = BeautifulSoup(data, "html.parser") mileages = soup.find('span', text=u'Пробег:').find_next_sibling("b").get_text(strip=True) print(mileages) </code></pre> <p>Prints:</p> <pre><code>Новый автомобиль от официального дилера </code></pre>
1
2016-09-07T13:42:44Z
[ "python", "html", "beautifulsoup" ]
BeautifulSoup: get some tag from the page
39,371,488
<p>I have html-code</p> <pre><code>&lt;div class="b-media-cont b-media-cont_relative" data-triggers-container="true"&gt;&lt;span class="label"&gt;Двигатель:&lt;/span&gt; бензин, 1.6 л&lt;br/&gt; &lt;div class="b-triggers b-triggers_theme_dashed-buttons b-triggers_size_s b-triggers_text-notif"&gt;&lt;div class="b-triggers__text"&gt;110 л.с.&lt;/div&gt;&lt;div class="b-triggers__item b-triggers__item_notif" data-target="cost" data-target-container="[data-triggers-container]" data-toggle="tax_dropdown"&gt;&lt;div class="b-link b-link_dashed"&gt;110 л.с.&lt;/div&gt;&lt;/div&gt;&lt;div class="b-triggers-hidden-area b-triggers-hidden-area_width_240 b-triggers-hidden-area_close" data-target-bind="cost" style="left: 0px; top: 39px; width: 241px;"&gt;Налог на&amp;nbsp;2016&amp;nbsp;год &lt;b&gt;2&amp;nbsp;750&amp;nbsp;руб.&lt;/b&gt;&lt;br/&gt;&lt;br/&gt;&lt;span class="gray"&gt;Расчет произведен на легковой автомобиль по &lt;a href="http://law.drom.ru/calc/region77/skoda/rapid/2016/110/"&gt;калькулятору транспортного налога&lt;/a&gt; для Москвы (&lt;a href="http://www.drom.ru/my_region/"&gt;изменить регион&lt;/a&gt;).&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;br/&gt; &lt;span class="label"&gt;Тип кузова:&lt;/span&gt; хэтчбек&lt;br/&gt; &lt;span class="label"&gt;Цвет:&lt;/span&gt; золотистый&lt;br/&gt; &lt;span class="label"&gt;Пробег:&lt;/span&gt; &lt;b&gt;Новый автомобиль от официального дилера&lt;/b&gt;&lt;br/&gt; &lt;span class="label"&gt;Руль:&lt;/span&gt; левый&lt;br/&gt; &lt;span class="label"&gt;VIN:&lt;/span&gt; XW8AC1NH7HK****32&lt;br/&gt; &lt;/div&gt;&lt;p&gt;&lt;span class="label"&gt;Данные по модели из каталога:&lt;/span&gt; &lt;b&gt;толян&lt;/b&gt; &lt;b&gt;4 515 руб.&lt;/b&gt; &lt;b&gt;Продажа Тойота Авенсис.&lt;/b&gt; </code></pre> <p>And I need to get </p> <pre><code>&lt;b&gt;Новый автомобиль от официального дилера&lt;/b&gt; </code></pre> <p>I try </p> <pre><code>mileages = soup.find_all('span', class_='label').next_subling </code></pre> <p>But it returns 
<code>AttributeError: 'ResultSet' object has no attribute 'next_subling'</code> How can I fix that?</p>
0
2016-09-07T13:39:09Z
39,371,662
<p>Try this code (note <code>find_next_sibling</code>: the <code>&lt;b&gt;</code> is a sibling of the <code>&lt;span&gt;</code>, not a child, so <code>span.find("b")</code> would always return <code>None</code>):</p> <pre><code>b = None spans = soup.find_all("span", {"class":"label"}) for span in spans: b = span.find_next_sibling("b") if b is not None: break </code></pre> <p>Then you can get access to the text of <code>b</code> using:</p> <pre><code>b.text </code></pre>
0
2016-09-07T13:46:23Z
[ "python", "html", "beautifulsoup" ]
BeautifulSoup: get some tag from the page
39,371,488
<p>I have html-code</p> <pre><code>&lt;div class="b-media-cont b-media-cont_relative" data-triggers-container="true"&gt;&lt;span class="label"&gt;Двигатель:&lt;/span&gt; бензин, 1.6 л&lt;br/&gt; &lt;div class="b-triggers b-triggers_theme_dashed-buttons b-triggers_size_s b-triggers_text-notif"&gt;&lt;div class="b-triggers__text"&gt;110 л.с.&lt;/div&gt;&lt;div class="b-triggers__item b-triggers__item_notif" data-target="cost" data-target-container="[data-triggers-container]" data-toggle="tax_dropdown"&gt;&lt;div class="b-link b-link_dashed"&gt;110 л.с.&lt;/div&gt;&lt;/div&gt;&lt;div class="b-triggers-hidden-area b-triggers-hidden-area_width_240 b-triggers-hidden-area_close" data-target-bind="cost" style="left: 0px; top: 39px; width: 241px;"&gt;Налог на&amp;nbsp;2016&amp;nbsp;год &lt;b&gt;2&amp;nbsp;750&amp;nbsp;руб.&lt;/b&gt;&lt;br/&gt;&lt;br/&gt;&lt;span class="gray"&gt;Расчет произведен на легковой автомобиль по &lt;a href="http://law.drom.ru/calc/region77/skoda/rapid/2016/110/"&gt;калькулятору транспортного налога&lt;/a&gt; для Москвы (&lt;a href="http://www.drom.ru/my_region/"&gt;изменить регион&lt;/a&gt;).&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;br/&gt; &lt;span class="label"&gt;Тип кузова:&lt;/span&gt; хэтчбек&lt;br/&gt; &lt;span class="label"&gt;Цвет:&lt;/span&gt; золотистый&lt;br/&gt; &lt;span class="label"&gt;Пробег:&lt;/span&gt; &lt;b&gt;Новый автомобиль от официального дилера&lt;/b&gt;&lt;br/&gt; &lt;span class="label"&gt;Руль:&lt;/span&gt; левый&lt;br/&gt; &lt;span class="label"&gt;VIN:&lt;/span&gt; XW8AC1NH7HK****32&lt;br/&gt; &lt;/div&gt;&lt;p&gt;&lt;span class="label"&gt;Данные по модели из каталога:&lt;/span&gt; &lt;b&gt;толян&lt;/b&gt; &lt;b&gt;4 515 руб.&lt;/b&gt; &lt;b&gt;Продажа Тойота Авенсис.&lt;/b&gt; </code></pre> <p>And I need to get </p> <pre><code>&lt;b&gt;Новый автомобиль от официального дилера&lt;/b&gt; </code></pre> <p>I try </p> <pre><code>mileages = soup.find_all('span', class_='label').next_subling </code></pre> <p>But it returns 
<code>AttributeError: 'ResultSet' object has no attribute 'next_subling'</code> How can I fix that?</p>
0
2016-09-07T13:39:09Z
39,375,978
<p>the following should work for you</p> <pre><code>spanTag = soup.find_all("span", string="Пробег:") print spanTag[0].find_next_sibling("b") print spanTag[0].find_next_sibling("b").string </code></pre> <p>result output:</p> <pre><code>&lt;b&gt;Новый автомобиль от официального дилера&lt;/b&gt; Новый автомобиль от официального дилера </code></pre> <p>cheers,</p> <p>Dhiraj</p>
0
2016-09-07T17:26:57Z
[ "python", "html", "beautifulsoup" ]
python thrift error ```TSocket read 0 bytes```
39,371,489
<p>My python version:2.7.8 <br/> thrift version:0.9.2 <br/> python-thrift version:0.9.2 <br/> OS: centOS 6.8 <br/> My test.thrift file:</p> <pre><code>const string HELLO_IN_KOREAN = "an-nyoung-ha-se-yo" const string HELLO_IN_FRENCH = "bonjour!" const string HELLO_IN_JAPANESE = "konichiwa!" service HelloWorld { void ping(), string sayHello(), string sayMsg(1:string msg) } </code></pre> <p>client.py</p> <pre><code># -*-coding:utf-8-*- from test import HelloWorld from test.constants import * from thrift import Thrift from thrift.transport import TSocket from thrift.transport import TTransport from thrift.protocol import TBinaryProtocol # Make socket transport = TSocket.TSocket('192.168.189.156', 30303) # Buffering is critical. Raw sockets are very slow transport = TTransport.TBufferedTransport(transport) # Wrap in a protocol protocol = TBinaryProtocol.TBinaryProtocol(transport) # Create a client to use the protocol encoder client = HelloWorld.Client(protocol) # Connect! transport.open() client.ping() print "ping()" msg = client.sayHello() print msg msg = client.sayMsg(HELLO_IN_KOREAN) print msg transport.close() </code></pre> <p>server.py:</p> <pre><code># -*-coding:utf-8-*- from test.HelloWorld import Processor from thrift.transport import TSocket from thrift.transport import TTransport from thrift.protocol import TBinaryProtocol from thrift.server import TServer class HelloWorldHandler(object): def __init__(self): self.log = {} def ping(self): print "ping()" def sayHello(self): print "sayHello()" return "say hello from 156" def sayMsg(self, msg): print "sayMsg(" + msg + ")" return "say " + msg + " from 156" handler = HelloWorldHandler() processor = Processor(handler) transport = TSocket.TServerSocket("192.168.189.156", 30303) tfactory = TTransport.TBufferedTransportFactory() pfactory = TBinaryProtocol.TBinaryProtocolFactory() server = TServer.TThreadPoolServer(processor, transport, tfactory, pfactory) print "Starting python server..." server.serve() print "done!" 
</code></pre> <p>My error:</p> <pre><code>ping() Traceback (most recent call last): File "client.py", line 29, in &lt;module&gt; msg = client.sayHello() File "/home/zhihao/bfd_mf_report_warning_service/local_test/test/HelloWorld.py", line 68, in sayHello return self.recv_sayHello() File "/home/zhihao/bfd_mf_report_warning_service/local_test/test/HelloWorld.py", line 79, in recv_sayHello (fname, mtype, rseqid) = iprot.readMessageBegin() File "build/bdist.linux-x86_64/egg/thrift/protocol/TBinaryProtocol.py", line 126, in readMessageBegin File "build/bdist.linux-x86_64/egg/thrift/protocol/TBinaryProtocol.py", line 206, in readI32 File "build/bdist.linux-x86_64/egg/thrift/transport/TTransport.py", line 58, in readAll File "build/bdist.linux-x86_64/egg/thrift/transport/TTransport.py", line 159, in read File "build/bdist.linux-x86_64/egg/thrift/transport/TSocket.py", line 120, in read thrift.transport.TTransport.TTransportException: TSocket read 0 bytes </code></pre>
2
2016-09-07T13:39:09Z
39,382,757
<p>It was a problem with my OS environment.<br/> I changed port <code>30303</code> to <code>9999</code>, and it ran successfully.</p>
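For anyone hitting the same symptom: "TSocket read 0 bytes" means the client's read found the connection closed, and a port that is already taken or blocked by the OS is one common cause. A small stdlib sketch for checking that before picking a port — the host and port here are placeholders:

```python
import socket

def can_bind(host, port):
    # True if the TCP port is free and we are allowed to bind it;
    # False if it is already taken or blocked.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

# Port 0 asks the OS for any free ephemeral port.
print(can_bind("127.0.0.1", 0))
```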
1
2016-09-08T04:44:46Z
[ "python", "thrift" ]
Image loses quality with cv2.warpPerspective
39,371,507
<p>I am working with OpenCV 3.1 and with Python. </p> <p>My problem comes when I try to deskew (fix the tilt of) an image with text. I am using <code>cv2.warpPerspective</code> to make it possible, but the image loses a lot of quality. You can see here the original part of the image:</p> <p><a href="http://i.stack.imgur.com/V9OZn.png" rel="nofollow"><img src="http://i.stack.imgur.com/V9OZn.png" alt="image without smooth"></a></p> <p>and then, here, the "rotated" image:</p> <p><a href="http://i.stack.imgur.com/v8cHe.png" rel="nofollow"><img src="http://i.stack.imgur.com/v8cHe.png" alt="image with smooth"></a></p> <p>it is like <em>smoothed</em>. </p> <p>I was using morphological transformation like:</p> <pre><code>kernel = np.ones((2, 2), np.uint8) blur_image = cv2.erode(tresh, kernel, iterations=1) </code></pre> <p>and </p> <pre><code>white_mask2 = cv2.morphologyEx(white_mask2, cv2.MORPH_OPEN, kernel) </code></pre> <p>to see if it improves something, but nothing.</p> <p>I saw <a href="http://stackoverflow.com/questions/22656698/perspective-correction-in-opencv-using-python">this example here in SO</a>, but those guys had the same problem: <a href="http://i.stack.imgur.com/pBChb.png" rel="nofollow"><img src="http://i.stack.imgur.com/pBChb.png" alt="enter image description here"></a> and <a href="http://i.stack.imgur.com/2WbGv.png" rel="nofollow"><img src="http://i.stack.imgur.com/2WbGv.png" alt="enter image description here"></a></p> <p>So, I don't have idea what can I do. Maybe there is a way to not losing the quality of the image, or, there's another method to rotate the image without quality lost. 
I know this method: </p> <pre><code>root_mat = cv2.getRotationMatrix2D(to_rotate_center, angle, 1.0) result = cv2.warpAffine(to_rotate, root_mat, to_rotate.shape, flags=cv2.INTER_LINEAR) </code></pre> <p>But it doesn't work for me, as I have to rotate every rectangle here:</p> <p><a href="http://i.stack.imgur.com/ThKC9.png" rel="nofollow"><img src="http://i.stack.imgur.com/ThKC9.png" alt="enter image description here"></a></p> <p>and not the whole image. This means the best way I found to do it was <code>warpPerspective</code>, and it works fine, but with quality loss. I would appreciate advice on how to avoid the quality loss. </p>
1
2016-09-07T13:39:51Z
39,372,458
<p>The problem is related to the interpolation required by the warping. If you don't want things to appear smoother, you should switch from default interpolation method, which is <code>INTER_LINEAR</code> to another one, such as <code>INTER_NEAREST</code>. It would help you with the sharpness of the edges.</p> <p>Try <code>flags=cv2.INTER_NEAREST</code> on your <code>warp</code> call, be it <a href="http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#cv2.warpAffine" rel="nofollow">warpAffine()</a> or <a href="http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#cv2.warpPerspective" rel="nofollow">warpPerspective()</a>.</p> <p>Interpolation flags are listed <a href="http://docs.opencv.org/3.1.0/da/d54/group__imgproc__transform.html#ga5bb5a1fea74ea38e1a5445ca803ff121" rel="nofollow">here</a>.</p> <pre><code>enum InterpolationFlags { INTER_NEAREST = 0, INTER_LINEAR = 1, INTER_CUBIC = 2, INTER_AREA = 3, INTER_LANCZOS4 = 4, INTER_MAX = 7, WARP_FILL_OUTLIERS = 8, WARP_INVERSE_MAP = 16 } </code></pre>
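To see concretely what the flag changes, here is a hedged pure-Python 1-D analogy (not OpenCV's actual implementation): linear interpolation blends the two neighbouring pixels and so invents new grey values along a black/white edge, while nearest-neighbour only ever returns values already present in the source — which is why INTER_NEAREST keeps binarized text crisp:

```python
def sample_linear(row, x):
    # Blend the two neighbouring pixels by the fractional offset.
    i = int(x)
    frac = x - i
    j = min(i + 1, len(row) - 1)
    return round(row[i] * (1 - frac) + row[j] * frac)

def sample_nearest(row, x):
    # Snap to the closest pixel instead of blending.
    return row[int(round(x))]

edge = [0, 0, 255, 255]           # a hard black-to-white edge
print(sample_linear(edge, 1.5))   # 128 -> a brand-new grey level (smoothing)
print(sample_nearest(edge, 1.5))  # 255 -> an existing value (edge stays hard)
```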
2
2016-09-07T14:20:53Z
[ "python", "image", "opencv", "image-rotation", "opencv3.1" ]
Invalid Syntax on MUD Game
39,371,588
<p>I have pretty much spent a whole term getting this Python code typed up and ready to go, although I'm hitting some speed bumps and it is causing me to panic as the assignment is due tomorrow. I'm not too sure how to fix it, so if anyone can please fix this, I would be so happy! </p> <p>I'm quite new to this website, so here is my full code for Python 3.5.1:</p> <p><a href="https://www.dropbox.com/s/zoupk8z1sdqncpb/MUD%20File%20%281%29.py?dl=0" rel="nofollow">https://www.dropbox.com/s/zoupk8z1sdqncpb/MUD%20File%20%281%29.py?dl=0</a></p> <p>I am pretty much begging someone to please send me back a fixed version of whatever is preventing this code from working.</p> <p>Thanks - Tyson</p>
-7
2016-09-07T13:43:55Z
39,372,288
<p>Your error is a SyntaxError, which comes from brackets not being correctly opened and closed. Looking through the code I see a huge variation in formatting style, starting at <code>rooms = {</code> compared to the rest of the code.</p> <p>I've put some quotes in between the part I suspect to be wrong, and then it runs directly. Lines 161 - 214:</p> <pre><code>""" rooms = { 1 : { "name" : "Engine 1" , "south" : 2, } , 2 : { "name" : "Airlock" , "north" : 1 , "south" : 3, "east" : 4, "item" : "Plasma" } , 3 : { "name" : "Engine 2" , "north" : 2 , "item" : "Pistol" } , 4 :{ "name" : "Hallway" , "east" : 5 , "west" : 2 , "monster" : "Manite Protecter" }, 5 :{ "name" : "Data Center" , "north" : 6 , "east" : 8 , "south" : 7 , "west" : 4 , "monster" : "Tero-L21" }, 6 :{ "name" : "Weapon Command Centre" , "south" : 5 , "item" : "Plasma_X31_AR" , "monster" : "Possessed Security Officer" , }, 7 :{ "name" : "Quarantine" , "north" : 8 , "monster" : "Xenon Sentry" , } , 8 :{ "name" : "Cockpit" , "monster" : "Xenon Overwatch" , } , { #start the player in room 1 currentRoom == 1: showMap() showInstructions() } """ </code></pre>
-1
2016-09-07T14:14:11Z
[ "python", "python-3.x" ]
Is it possible to skip breakpoints in pdb / ipdb?
39,371,646
<p>Is there a way to tell pdb or ipdb to skip all future break-points and just finish execution as if they weren't there?</p>
2
2016-09-07T13:46:05Z
39,371,787
<p>Maybe you can try with <code>clear</code>.</p> <p>From the pdb help:</p> <pre><code>(Pdb) help clear cl(ear) filename:lineno cl(ear) [bpnumber [bpnumber...]] With a space separated list of breakpoint numbers, clear those breakpoints. Without argument, clear all breaks (but first ask confirmation). With a filename:lineno argument, clear all breaks at that line in that file. Note that the argument is different from previous versions of the debugger (in python distributions 1.5.1 and before) where a linenumber was used instead of either filename:lineno or breakpoint numbers. </code></pre> <p>There is another topic which discusses your question: <a href="http://stackoverflow.com/questions/17820618/how-to-exit-pdb-and-allow-program-to-continue">How to exit pdb and allow program to continue?</a></p>
0
2016-09-07T13:51:25Z
[ "python", "pdb", "ipdb" ]
Is it possible to skip breakpoints in pdb / ipdb?
39,371,646
<p>Is there a way to tell pdb or ipdb to skip all future break-points and just finish execution as if they weren't there?</p>
2
2016-09-07T13:46:05Z
39,399,896
<p>If you want to keep your breakpoints rather than clearing them, but also want them not to be reached, you can use <code>pdb</code>'s <code>disable</code> command. I don't see a convenient way to disable all breakpoints in a concise way, but you can list their numbers in the <code>disable</code> command. You can also be selective about this, and disable some breakpoints and leave others enabled. You can undo the effect of a <code>disable</code> command with <code>pdb</code>'s <code>enable</code> command. The <code>break</code> command (or just <code>b</code>) with no parameters shows, for each breakpoint, whether it is enabled.</p>
0
2016-09-08T20:42:57Z
[ "python", "pdb", "ipdb" ]
Implement rms without numpy
39,371,727
<p>I have to implement a function to calculate rms without using numpy, using data that is in a txt file. I have the following:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import math def main(): # Loading the data avrgPress = [] press = [] with open("C:\Users\SAULO\Downloads\prvspy.txt","r") as a: for x in a: columnas = x.split('\t') avrgPress.append(float(columnas[0])) press.append(float(columnas[1])) def error(avrgPress, press): lista3 = [0]*len(lista) for x in xrange(len(lista)): lista3[x] = lista3[x] * sum(sqrt(float(1.0/1064) * (avrgPress - press)**2)) return lista3 if __name__ == '__main__': main() </code></pre> <p>When I run it, the rms does not come out. Any suggestions or solutions?</p>
-2
2016-09-07T13:49:05Z
39,408,864
<p>Is this what you want, <code>rms( avrgPress )</code> ?</p> <pre><code>from __future__ import division # necessary import math def rms( alist ): """ rms( [1, 2, 3] ) = sqrt( average( 1**2 + 2**2 + 3**2 )) = 2.16 """ sum_squares = sum([ x**2 for x in alist ]) return math.sqrt( sum_squares / len(alist) ) ... print rms( [1, 2, 3] ) </code></pre>
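For reference, the same function in Python 3, where / is already true division so the __future__ import is unnecessary:

```python
import math

def rms(values):
    """rms([1, 2, 3]) = sqrt((1 + 4 + 9) / 3) ~= 2.16"""
    return math.sqrt(sum(x * x for x in values) / len(values))

print(round(rms([1, 2, 3]), 2))  # 2.16
```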
1
2016-09-09T09:57:58Z
[ "python", "matplotlib" ]
How to install psycopg2 for use in HDInsight PySpark Jupyter notebook
39,371,732
<p>I need to "import psycopg2" as part of my PySpark script.</p> <p>Per the <a href="https://notebooks.azure.com/faq#upload_data" rel="nofollow">documentation</a>:</p> <blockquote> <p>!pip install psycopg2</p> </blockquote> <p>but that results in a syntax error (as any ! command does).</p> <p>I was able to install in an SSH session for /usr/bin/anaconda/bin/python, but it appears Jupyter may be using a different environment? </p> <p>I even tried forcing </p> <blockquote> <p>os.environ["PYTHONPATH"]</p> </blockquote> <p>in the notebook but no luck, by which I mean from a cell in the notebook:</p> <ul> <li>import psycopg2 results in an error that module can't be found</li> <li>help("modules") does not show psycopg2</li> <li>help("modules psycopg2") results in the following error:</li> </ul>
1
2016-09-07T13:49:15Z
39,376,908
<p>Did you make a typo? Your text was 'pyscopg2', not 'psycopg2':</p> <pre><code>!pip install psycopg2 </code></pre> <p>It is already installed for Python 3.5 and Python 2.7 on Azure Notebooks.</p>
0
2016-09-07T18:32:23Z
[ "python", "azure", "psycopg2", "jupyter", "hdinsight" ]
How to change the distance between y axis points in matplotlib?
39,371,742
<p>Sorry if this is a really stupid question, I have done a lot of searching but can't find the answer. I am using matplotlib to generate plots for some data but need to space out the distance between the points on the y axis. </p> <p>So this is how I have it currently <a href="http://i.stack.imgur.com/pWWin.png" rel="nofollow">matlibplot</a></p> <p>And I want it to be spaced out like this <a href="http://i.stack.imgur.com/n6KHx.png" rel="nofollow">sampleplot</a></p> <pre><code>ratioticks = (0.00001, 0.0001, 0.001, 0.01, 0.1, 1) line = plt.plot(x, ratioy, label='BCR:ABL1 ratio(IS)') line2 = plt.plot(x, sensy, label='Sensitivity of Detection (1/ABL1)') line3 = plt.plot(x, sensx, label='Target Sensitivity') line4 = plt.plot(x, mrx, label='MMR') plt.setp(line, color='r', linewidth=2.0) plt.setp(line2, color='g', linewidth=2.0) plt.setp(line3, color='b', linewidth=2.0) plt.setp(line4, color='m', linewidth=2.0) plt.grid(which='both') plt.yticks(ratioticks) plt.ylabel('Ratio on Log Scale') plt.xlabel('Date') plt.title('Level of BCR:ABL1 normalised to ABL1 on the International Scale (IS)') plt.show() </code></pre> <p>Thanks in advance :)</p>
1
2016-09-07T13:49:29Z
39,372,303
<p>You need a call to:</p> <pre><code>plt.yscale("log") </code></pre>
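A minimal self-contained version showing the effect — the data and tick values are placeholders standing in for the asker's series, and the Agg backend is forced so the script runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # render to a buffer, no display needed
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5, 6]
ratioy = [1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
ratioticks = (0.00001, 0.0001, 0.001, 0.01, 0.1, 1)

plt.plot(x, ratioy, color="r", linewidth=2.0)
plt.yscale("log")      # evenly spaces the decade ticks
plt.yticks(ratioticks)
plt.ylabel("Ratio on Log Scale")
plt.savefig("log_scale_demo.png")
```

An equivalent shortcut is plt.semilogy(x, ratioy), which plots and switches the y-axis to a log scale in one call.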
1
2016-09-07T14:14:49Z
[ "python", "matplotlib", "plot" ]
Complex aggregation methods as single param in query string
39,371,747
<p>I’m trying to design a flexible API with django REST. What I meant by this is to have basically any field filterable through a query string and in addition to that have a param in the query string that can denote some complex method to perform. Ok, here are the details:</p> <p><strong>views.py</strong></p> <pre><code>class StarsModelList(generics.ListAPIView): queryset = StarsModel.objects.all() serializer_class = StarsModelSerializer filter_class = StarsModelFilter </code></pre> <p><strong>serializers.py</strong></p> <pre><code>class StarsModelSerializer(DynamicFieldsMixin, serializers.ModelSerializer): class Meta: model = StarsModel fields = '__all__' </code></pre> <p><strong>mixins.py</strong></p> <pre><code>class DynamicFieldsMixin(object): def __init__(self, *args, **kwargs): super(DynamicFieldsMixin, self).__init__(*args, **kwargs) fields = self.context['request'].query_params.get('fields') if fields: fields = fields.split(',') # Drop any fields that are not specified in the `fields` argument. allowed = set(fields) existing = set(self.fields.keys()) for field_name in existing - allowed: self.fields.pop(field_name) </code></pre> <p><strong>filters.py</strong></p> <pre><code>class CSVFilter(django_filters.Filter): def filter(self, qs, value): return super(CSVFilter, self).filter(qs, django_filters.fields.Lookup(value.split(u","), "in")) class StarsModelFilter(django_filters.FilterSet): id = CSVFilter(name='id') class Meta: model = StarsModel fields = ['id',] </code></pre> <p><strong>urls.py</strong></p> <pre><code>url(r'^/stars/$', StarsModelList.as_view()) </code></pre> <p>This gives me the ability to construct query strings like so:</p> <pre><code>/api/stars/?id=1,2,3&amp;fields=type,age,magnetic_field,mass </code></pre> <p>This is great and I like this functionality, but there are also many custom aggregation/transformation methods that need to be applied to this data.
What I would like to do is have an agg= param like so: </p> <pre><code>/api/stars/?id=1,2,3&amp;fields=type,age,magnetic_field,mass,&amp;agg=complex_method </code></pre> <p>or just:</p> <pre><code>/api/stars/?agg=complex_method </code></pre> <p>where defining the complex_method grabs the correct fields for the job.</p> <p>I’m not exactly sure where to start and where to add the complex methods so I would really appreciate some guidance. I should also note the api is only for private use supporting a django application, its not exposed to the public.</p>
0
2016-09-07T13:49:46Z
39,372,985
<p>It would definitely be good to see your StarsModelList class as it stands, but here is my example, as per <a href="https://docs.djangoproject.com/en/1.10/ref/class-based-views/base/" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/class-based-views/base/</a> (note that <code>complex_method</code> needs <code>self</code> as its first argument, since it is called as <code>self.complex_method(request)</code>):</p> <pre><code>from django.http import HttpResponse class StarsModelList(generics.ListAPIView): queryset = StarsModel.objects.all() serializer_class = StarsModelSerializer filter_class = StarsModelFilter def complex_method(self, request): # do smth to input parameters if any return HttpResponse('Hello, World!') def get(self, request, *args, **kwargs): if request.GET.get('agg', None) == 'complex_method': return self.complex_method(request) return HttpResponse('Hi, World!') </code></pre>
1
2016-09-07T14:44:23Z
[ "python", "django", "django-rest-framework" ]
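The dispatch idea behind an agg= query parameter can be sketched independently of Django: map each allowed aggregation name to a callable and fall back to the raw rows when the name is unknown. The registry names and row fields below are illustrative, not part of the original models.

```python
# Hypothetical registry: maps an ?agg= value to a callable that takes
# the filtered rows and returns the aggregated payload.
AGGREGATIONS = {
    "mean_mass": lambda rows: {"mean_mass": sum(r["mass"] for r in rows) / len(rows)},
    "count": lambda rows: {"count": len(rows)},
}

def apply_agg(rows, agg_name):
    """Aggregate if agg_name is registered; otherwise return rows unchanged."""
    func = AGGREGATIONS.get(agg_name)
    return func(rows) if func else rows

rows = [{"id": 1, "mass": 2.0}, {"id": 2, "mass": 4.0}]
print(apply_agg(rows, "mean_mass"))  # {'mean_mass': 3.0}
print(apply_agg(rows, None))         # the untouched rows
```

In a ListAPIView this lookup would live in an overridden list() (or get()) method, reading request.query_params.get('agg') before serializing.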
Timeit timing a python function
39,371,781
<p>I am trying to time the function below, however it fails with the error <code>cannot import name 'val_in_range'</code>. What causes this error, and is there a better way to do the timing?</p> <pre><code>import timeit x = 10000000 def val_in_range(x, val): return val in range(x) print (val_in_range(x,x/2)) timeit.timeit( 'val_in_range(x,x/2)', 'from __main__ import val_in_range, x', number=10) </code></pre> <p>Output:</p> <pre><code>True Traceback (most recent call last): File "python", line 11, in &lt;module&gt; File "&lt;timeit-src&gt;", line 3, in inner ImportError: cannot import name 'val_in_range' </code></pre>
0
2016-09-07T13:51:01Z
39,372,180
<p>replace <code>timeit.timeit( 'val_in_range(x,x/2)', 'from __main__ import val_in_range, x', number=10)</code></p> <p>with <code>timeit.timeit(lambda:val_in_range(x,x/2), number=10)</code></p> <p>you can print the value directly using the <code>print</code> statement.</p>
2
2016-09-07T14:09:01Z
[ "python", "python-3.x" ]
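Both working forms from the accepted fix can be checked side by side; on Python 3.5+ the globals= keyword is a third option that avoids the __main__ import entirely, which is useful in sandboxed interpreters where that import fails (x is shrunk here so the sketch runs quickly):

```python
import timeit

def val_in_range(x, val):
    return val in range(x)

x = 10000

# Pass a callable: no setup string or __main__ import needed.
t1 = timeit.timeit(lambda: val_in_range(x, x // 2), number=10)

# Pass a statement plus an explicit namespace (Python 3.5+), which works
# even where 'from __main__ import ...' would raise ImportError.
t2 = timeit.timeit("val_in_range(x, x // 2)", globals=globals(), number=10)

print(t1 >= 0.0, t2 >= 0.0)  # both results are elapsed seconds
```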
Combine list of numpy arrays and reshape
39,372,037
<p>I'm hoping anybody could help me with the following. I have 2 lists of arrays, which should be linked to each-other. Each list stands for a certain object. <code>arr1</code> and <code>arr2</code> are the attributes of that object. For example:</p> <pre><code>import numpy as np arr1 = [np.array([1, 2, 3]), np.array([1, 2]), np.array([2, 3])] arr2 = [np.array([20, 50, 30]), np.array([50, 50]), np.array([75, 25])] </code></pre> <p>The arrays are linked to each other as in the <code>1</code> in <code>arr1</code>, first array belongs to the <code>20</code> in <code>arr2</code> first array. The result I'm looking for in this example would be a numpy array with size 3,4. The 'columns' stand for 0, 1, 2, 3 (the numbers in arr1, plus 0) and the rows are filled with the corresponding values of arr2. When there are no corresponding values this cell should be 0. Example:</p> <pre><code>array([[ 0, 20, 50, 30], [ 0, 50, 50, 0], [ 0, 0, 75, 25]]) </code></pre> <p>How would I link these two list of arrays and reshape them in the desired format as shown in the above example?</p> <p>Many thanks!</p>
2
2016-09-07T14:02:33Z
39,372,393
<p>Here's an almost* vectorized approach -</p> <pre><code>lens = np.array([len(i) for i in arr1]) N = len(arr1) row_idx = np.repeat(np.arange(N),lens) col_idx = np.concatenate(arr1) M = col_idx.max()+1 out = np.zeros((N,M),dtype=int) out[row_idx,col_idx] = np.concatenate(arr2) </code></pre> <p>*: Almost because of the loop comprehension at the start, but that should be computationally negligible as it doesn't involve any computation there.</p>
3
2016-09-07T14:17:55Z
[ "python", "arrays", "numpy" ]
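Running the scatter-assignment approach above on the question's data confirms it reproduces the desired 3x4 array; the only inputs are the two lists from the question:

```python
import numpy as np

arr1 = [np.array([1, 2, 3]), np.array([1, 2]), np.array([2, 3])]
arr2 = [np.array([20, 50, 30]), np.array([50, 50]), np.array([75, 25])]

# Row index repeated once per entry, column indices flattened, then one
# fancy-indexing assignment scatters every value at once.
lens = np.array([len(a) for a in arr1])
row_idx = np.repeat(np.arange(len(arr1)), lens)
col_idx = np.concatenate(arr1)

out = np.zeros((len(arr1), col_idx.max() + 1), dtype=int)
out[row_idx, col_idx] = np.concatenate(arr2)
print(out.tolist())  # [[0, 20, 50, 30], [0, 50, 50, 0], [0, 0, 75, 25]]
```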
Combine list of numpy arrays and reshape
39,372,037
<p>I'm hoping anybody could help me with the following. I have 2 lists of arrays, which should be linked to each-other. Each list stands for a certain object. <code>arr1</code> and <code>arr2</code> are the attributes of that object. For example:</p> <pre><code>import numpy as np arr1 = [np.array([1, 2, 3]), np.array([1, 2]), np.array([2, 3])] arr2 = [np.array([20, 50, 30]), np.array([50, 50]), np.array([75, 25])] </code></pre> <p>The arrays are linked to each other as in the <code>1</code> in <code>arr1</code>, first array belongs to the <code>20</code> in <code>arr2</code> first array. The result I'm looking for in this example would be a numpy array with size 3,4. The 'columns' stand for 0, 1, 2, 3 (the numbers in arr1, plus 0) and the rows are filled with the corresponding values of arr2. When there are no corresponding values this cell should be 0. Example:</p> <pre><code>array([[ 0, 20, 50, 30], [ 0, 50, 50, 0], [ 0, 0, 75, 25]]) </code></pre> <p>How would I link these two list of arrays and reshape them in the desired format as shown in the above example?</p> <p>Many thanks!</p>
2
2016-09-07T14:02:33Z
39,372,445
<p>Here is a solution with for-loops, showing each step in detail. The output needs <code>maxi+1</code> columns (to leave room for column 0), and each value from <code>arr2</code> is placed at the column given by the matching entry in <code>arr1</code>:</p> <pre><code>import numpy as np arr1 = [np.array([1, 2, 3]), np.array([1, 2]), np.array([2, 3])] arr2 = [np.array([20, 50, 30]), np.array([50, 50]), np.array([75, 25])] maxi = [] for i in range(len(arr1)): maxi.append(np.max(arr1[i])) maxi = np.max(maxi) output = np.zeros((len(arr2),maxi+1)) for i in range(len(arr1)): for k in range(len(arr1[i])): output[i][arr1[i][k]]=arr2[i][k] </code></pre>
1
2016-09-07T14:20:19Z
[ "python", "arrays", "numpy" ]
Combine list of numpy arrays and reshape
39,372,037
<p>I'm hoping anybody could help me with the following. I have 2 lists of arrays, which should be linked to each-other. Each list stands for a certain object. <code>arr1</code> and <code>arr2</code> are the attributes of that object. For example:</p> <pre><code>import numpy as np arr1 = [np.array([1, 2, 3]), np.array([1, 2]), np.array([2, 3])] arr2 = [np.array([20, 50, 30]), np.array([50, 50]), np.array([75, 25])] </code></pre> <p>The arrays are linked to each other as in the <code>1</code> in <code>arr1</code>, first array belongs to the <code>20</code> in <code>arr2</code> first array. The result I'm looking for in this example would be a numpy array with size 3,4. The 'columns' stand for 0, 1, 2, 3 (the numbers in arr1, plus 0) and the rows are filled with the corresponding values of arr2. When there are no corresponding values this cell should be 0. Example:</p> <pre><code>array([[ 0, 20, 50, 30], [ 0, 50, 50, 0], [ 0, 0, 75, 25]]) </code></pre> <p>How would I link these two list of arrays and reshape them in the desired format as shown in the above example?</p> <p>Many thanks!</p>
2
2016-09-07T14:02:33Z
39,376,078
<p>This is a straight forward approach, with only one level of iteration:</p> <pre><code>In [261]: res=np.zeros((3,4),int) In [262]: for i,(idx,vals) in enumerate(zip(arr1, arr2)): ...: res[i,idx]=vals ...: In [263]: res Out[263]: array([[ 0, 20, 50, 30], [ 0, 50, 50, 0], [ 0, 0, 75, 25]]) </code></pre> <p>I suspect it is faster than <code>@Divakar's</code> approach for this example. And it should remain competitive as long as the number of columns is quite a bit larger than the number of rows.</p>
0
2016-09-07T17:33:14Z
[ "python", "arrays", "numpy" ]
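For comparison, the per-row fancy-indexed assignment from the answer above can be verified on the same data, with the column count derived from the largest index rather than hard-coded:

```python
import numpy as np

arr1 = [np.array([1, 2, 3]), np.array([1, 2]), np.array([2, 3])]
arr2 = [np.array([20, 50, 30]), np.array([50, 50]), np.array([75, 25])]

# One fancy-indexed assignment per row: values land at the columns
# named by the matching index array.
n_cols = max(int(a.max()) for a in arr1) + 1
res = np.zeros((len(arr1), n_cols), dtype=int)
for i, (idx, vals) in enumerate(zip(arr1, arr2)):
    res[i, idx] = vals
print(res.tolist())  # [[0, 20, 50, 30], [0, 50, 50, 0], [0, 0, 75, 25]]
```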
Wait for a clicked() event in while loop in Qt
39,372,071
<p>How can I wait, at each iteration within a for loop, until the user presses a given QPushButton?</p> <pre><code>for i in range(10): while (the button has not been pressed): #do nothing #do something </code></pre> <p>The main problem is that I cannot catch the clicked() event in the while loop.</p> <p><strong>EDIT:</strong></p> <p>Finally I ended up with:</p> <pre><code> for i in range(10): self.hasBeenProcessed = False # only one function can modify this boolean # and this function is connected to my button while (self.hasBeenProcessed is not True): QtCore.QCoreApplication.processEvents() </code></pre>
0
2016-09-07T14:04:19Z
39,377,065
<p>So, I share the slight skepticism as to whether you should want to be doing what you described. Also, I agree that it would be better if you showed a bit more code to describe the context. </p> <p>Having said this, the code below is a stab at what you seem to be describing. Note that this is by no means meant to be production-ready code, but more a crude example to illustrate the principle. </p> <p>What happens is that I call one function on the press of <code>Button1</code> and I keep the event loop spinning inside the <code>while</code> loop by calling <code>QCoreApplication.processEvents()</code> which means that the GUI will still accept e.g. mouse events. Now, this is something that you should <em>not</em> typically do. There are, however, certain situations where this can be needed, e.g. if you have a non-modal <code>QProgressDialog</code> and you want to keep the GUI updating while the dialog counter increases (see e.g. <a href="http://doc.qt.io/qt-4.8/qprogressdialog.html#value-prop" rel="nofollow">http://doc.qt.io/qt-4.8/qprogressdialog.html#value-prop</a>)</p> <p>Then the second part is only to modify the global variable in the second function when you press button 2 and the <code>while</code> loop will exit.</p> <p>Let me know if this helps </p> <pre><code>import sys from PyQt4.QtCore import * from PyQt4.QtGui import * btn2pushed = False def window(): app = QApplication(sys.argv) win = QDialog() b1 = QPushButton(win) b1.setText("Button1") b1.move(50,20) b1.clicked.connect(b1_clicked) b2 = QPushButton(win) b2.setText("Button2") b2.move(50,50) QObject.connect(b2,SIGNAL("clicked()"),b2_clicked) win.setGeometry(100,100,200,100) win.setWindowTitle("PyQt") win.show() sys.exit(app.exec_()) def b1_clicked(): print "Button 1 clicked" i = 0 while ( btn2pushed != True ): # not doing anything if ( i % 100000 == 0 ): print "Waiting for user to push button 2" QCoreApplication.processEvents() i += 1; print "Button 2 has been pushed" def b2_clicked(): global btn2pushed btn2pushed = True if __name__ == '__main__': window() </code></pre>
2
2016-09-07T18:42:11Z
[ "python", "qt" ]
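The processEvents pattern in the answer can be illustrated without a GUI: the sketch below swaps Qt's event queue for a plain deque, so the handler that flips the flag only runs if the waiting loop keeps draining events. All names here are made up for the illustration.

```python
from collections import deque

pending_events = deque()  # stand-in for Qt's event queue

def process_events():
    """Drain queued callbacks, mimicking QCoreApplication.processEvents()."""
    while pending_events:
        pending_events.popleft()()

done = {"pushed": False}

def on_button_clicked():
    done["pushed"] = True

# Queue a 'click'; it is only delivered when the wait loop processes events.
pending_events.append(on_button_clicked)

iterations = 0
while not done["pushed"]:
    process_events()  # without this call the loop would spin forever
    iterations += 1

print(done["pushed"], iterations)  # True 1
```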
Anaconda plugin not working on Sublime Text 3
39,372,150
<p>I am using a Sublime Text 3 portable app and I simply dragged all of the Anaconda files into the <code>packages</code> directory, i.e. <code>\Sublime Text Build 3114 x64\Data\Packages\anaconda-1.3.4</code>.</p> <p>However, I keep getting an error in the console that says <code>ImportError: No module named 'anaconda-1'</code>. I can see the Anaconda option when I right-click anywhere, but all of the commands in the Anaconda menu are greyed out. Nothing else, like the auto-complete, is working either.</p> <p>Any help is appreciated.</p> <p>EDIT: Fixed by using <code>PackageControl</code> to reinstall <code>Anaconda</code>.</p>
0
2016-09-07T14:07:48Z
39,795,023
<p>Fixed by using PackageControl to reinstall Anaconda.</p>
0
2016-09-30T15:27:26Z
[ "python", "sublimetext3", "sublime-anaconda" ]
Python error in Console but not in File: unexpected character after line continuation character
39,372,362
<p>I've got a Python script which has a class defined with this method:</p> <pre><code>@staticmethod def _sanitized_test_name(orig_name): return re.sub(r'[`‘’\"]*', '', re.sub(r'[\r\n\/\:\?\&lt;\&gt;\|\*\%]*', '', orig_name.encode('utf-8'))) </code></pre> <p>I'm able to run the script from the command prompt just fine, without any issues. But when I paste the code of the full class in the console, I get the <code>SyntaxError: unexpected character after line continuation character</code>:</p> <pre><code>&gt;&gt;&gt; return re.sub(r'[`‘’\"]*', '', re.sub(r'[\r\n\/\:\?\&lt;\&gt;\|\*\%]*', '', orig_name.encode('utf-8'))) File "&lt;stdin&gt;", line 1 return re.sub(r'[``'\"]*', '', re.sub(r'[\r\n\/\:\?\&lt;\&gt;\|\*\%]*', '', orig_name.encode('utf-8'))) ^ SyntaxError: unexpected character after line continuation character </code></pre> <p>If I skip that method while pasting, it works. Note that there is a difference in what my original line is and what's shown for the error: <code>r'[`‘’\"]*'</code> vs <code>r'[``'"]*'</code>. Replacing that with <code>ur'[`‘’"]*'</code> gives <code>SyntaxError: EOL while scanning string literal</code>.</p> <p>It seems the Python console is seeing that <code>‘</code> as a stylised <code>`</code> (backtick) and the <code>’</code> as a stylised <code>'</code> (single quote), when I really mean the <a href="https://www.cl.cam.ac.uk/~mgk25/ucs/quotes.html" rel="nofollow">unicode open and close quotes</a>. I've got <code># -*- coding: utf-8 -*-</code> at the top of my script, which I paste into the console as well.</p>
1
2016-09-07T14:16:54Z
39,372,363
<p>Focusing on just the expression causing the error <code>r'[`‘’"]*'</code>...</p> <pre><code>&gt;&gt;&gt; r'[`‘’"]*' File "&lt;stdin&gt;", line 1 r'[``'"]*' ^ SyntaxError: EOL while scanning string literal &gt;&gt;&gt; ur'[`‘’"]*' # with the unicode modifier File "&lt;stdin&gt;", line 1 ur'[``'"]*' ^ SyntaxError: EOL while scanning string literal </code></pre> <p>If the terminal I'm in doesn't accept unicode input, that interpretation of the unicode chars from <code>‘</code> to <code>`</code> and <code>’</code> to <code>'</code>, occurs.</p> <p>So the workaround is to split the regex and <strong>use <a href="https://docs.python.org/2/library/functions.html#unichr" rel="nofollow"><code>unichr()</code></a></strong> with the corresponding code points for the two quotes, U+2018 and U+2019 (note these are hex, so pass <code>0x2018</code> and <code>0x2019</code>, not the decimal values <code>2018</code> and <code>2019</code>):</p> <pre><code>&gt;&gt;&gt; r'[`' + unichr(0x2018) + unichr(0x2019) + r'"]*' u'[`\u2018\u2019"]*' </code></pre> <p>(And the raw string modifier <code>r''</code> probably isn't required for this particular regex.)</p>
1
2016-09-07T14:16:54Z
[ "python", "console" ]
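A way to sidestep the paste problem entirely is to spell the curly quotes as \u escape sequences, which survive any terminal encoding. This sketch targets Python 3 (no u'' prefix needed); the character class matches the answer's backtick-and-quotes set.

```python
import re

# \u2018 and \u2019 are the curly single quotes (U+2018 / U+2019);
# writing them as escapes avoids relying on the console's input encoding.
QUOTES = re.compile('[`\u2018\u2019"]')

def strip_quotes(text):
    return QUOTES.sub('', text)

print(strip_quotes('It\u2019s a \u2018test\u2019 `name`'))  # Its a test name
```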
Python - Removing vertical bar lines from histogram
39,372,470
<p>I'm wanting to remove the vertical bar outlines from my histogram plot, but preserve the "etching" of the histogram, if that makes sense.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np bins = 35 fig = plt.figure(figsize=(7,6)) ax = fig.add_subplot(111) ax.hist(subVel_Hydro1, bins=bins, facecolor='none', edgecolor='black', label = 'Peculiar Vel') ax.set_xlabel('$v_{_{B|A}} $ [$km\ s^{-1}$]', fontsize = 16) ax.set_ylabel(r'$P\ (r_{_{B|A}} )$', fontsize = 16) ax.legend(frameon=False) </code></pre> <p>Giving</p> <p><a href="http://i.stack.imgur.com/DtEHe.png" rel="nofollow"><img src="http://i.stack.imgur.com/DtEHe.png" alt="enter image description here"></a></p> <p>Is this doable in matplotlib's histogram functionality? I hope I provided enough clarity.</p>
3
2016-09-07T14:21:19Z
39,372,725
<p>In <code>pyplot.hist()</code> you could set the value of <code>histtype = 'step'</code>. Example code:</p> <pre><code>import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np x = np.random.normal(0,1,size=1000) fig = plt.figure() ax = fig.add_subplot(111) ax.hist(x, bins=50, histtype = 'step', fill = None) plt.show() </code></pre> <p>Sample output:</p> <p><a href="http://i.stack.imgur.com/621Vo.png"><img src="http://i.stack.imgur.com/621Vo.png" alt="enter image description here"></a></p>
5
2016-09-07T14:32:00Z
[ "python", "matplotlib", "histogram" ]
when i made a 3 handshake with ubuntu in VMware return package R
39,372,630
<pre><code>#!/usr/bin/python from scapy.all import * def findWeb(): a = sr1(IP(dst="8.8.8.8")/UDP()/DNS(qd=DNSQR(qname="www.google.com")),verbose=0) return a[DNSRR].rdata def sendPacket(dst,src): ip = IP(dst = dst) SYN = TCP(sport=1500, dport=80, flags='S') SYNACK = sr1(ip/SYN) my_ack = SYNACK.seq + 1 ACK = TCP(sport=1050, dport=80, flags='A', ack=my_ack) send(ip/ACK) payload = "stuff" PUSH = TCP(sport=1050, dport=80, flags='PA', seq=11, ack=my_ack) send(ip/PUSH/payload) http = sr1(ip/TCP()/'GET /index.html HTTP/1.0 \n\n',verbose=0) print http.show() src = '10.0.0.24' dst = findWeb() sendPacket(dst,src) </code></pre> <p>I'm trying to send HTTP packets with Scapy. I am using Ubuntu in VMware.</p> <p>The problem is that every time I send messages I get a RESET. How do I fix it?</p> <p>Thanks</p> <p><a href="http://i.stack.imgur.com/0eJdj.png" rel="nofollow">sniffed packet image</a></p>
0
2016-09-07T14:27:51Z
39,412,283
<p>Few things I notice wrong. 1. You have your sequence number set statically (seq=11) which is wrong. Sequence numbers are always randomly generated and they must be used as per RFC793. So the sequence should be = SYNACK[TCP].ack</p> <ol start="2"> <li><p>You set your source port as 1500 during SYN packet, but then you use it as 1050 (typo?)</p></li> <li><p>You don't need extra payload/PUSH. </p></li> </ol> <p>Also, have a look at these threads: </p> <p><a href="http://stackoverflow.com/questions/37683026/how-to-create-http-get-request-scapy">How to create HTTP GET request Scapy?</a></p> <p><a href="http://stackoverflow.com/questions/4750793/python-scapy-or-the-like-how-can-i-create-an-http-get-request-at-the-packet-leve">Python-Scapy or the like-How can I create an HTTP GET request at the packet level</a></p>
0
2016-09-09T13:05:13Z
[ "python", "http", "package", "virtual-machine", "scapy" ]
"shade is required for this module" even though shade is installed
39,372,696
<p>I'm trying to deploy an Ansible playbook to spin up some new OpenStack instances and keep getting the error</p> <pre><code>"shade is required for this module" </code></pre> <p>Shade is definitely installed, as are all its dependencies. </p> <p>I've tried adding </p> <pre><code>localhost ansible_python_interpreter="/usr/bin/env python" </code></pre> <p>to the ansible hosts file as suggested here, but this did not work. </p> <p><a href="https://groups.google.com/forum/#!topic/ansible-project/rvqccvDLLcQ" rel="nofollow">https://groups.google.com/forum/#!topic/ansible-project/rvqccvDLLcQ</a></p> <p>Any advice on solving this would be most appreciated. </p>
0
2016-09-07T14:30:53Z
39,374,456
<p>On my hosts file I have the following: </p> <pre><code>[local] 127.0.0.1 ansible_connection=local ansible_python_interpreter="/usr/bin/python" </code></pre> <p>So far I haven't been using venv and my playbooks work fine. By adding the ansible_connection= local, it should tell your playbook to be executed on the Ansible machine (I guess that's what you are trying to do).</p> <p>Then when I launch a playbook, I start with the following: </p> <pre><code>- hosts: local connection: local </code></pre> <p>Not sure if that's the problem. If this does not work, you should give us more information (extract of your playbook at least). </p> <p>Good luck!</p>
0
2016-09-07T15:54:24Z
[ "python", "ansible", "openstack" ]
Matplotlib bar chart customisation for multiple values
39,372,762
<p>I have a list of tuples with the countries and the number of times they occur. I have 175 countries all with long names.</p> <p>When I chart them, I get:</p> <p><a href="http://i.stack.imgur.com/dYNTx.png" rel="nofollow"><img src="http://i.stack.imgur.com/dYNTx.png" alt="enter image description here"></a></p> <p>As you can see, everything is very bunched up, there is no space, you can barely read anything.</p> <p>Code I use (the original data file is huge, but this contains my matplotlib specific code):</p> <pre><code>def tupleCounts2Percents(inputList): total = sum(x[1] for x in inputList)*1.0 return [(x[0], 1.*x[1]/total) for x in inputList] def autolabel(rects,labels): # attach some text labels for i,(rect,label) in enumerate(zip(rects,labels)): height = rect.get_height() plt.text(rect.get_x() + rect.get_width()/2., 1.05*height, label, ha='center', va='bottom',fontsize=6,style='italic') def countryChartList(inputlist,path): seen_countries = Counter() for dict in inputlist: seen_countries += Counter(dict['location-value-pair'].keys()) seen_countries = seen_countries.most_common() seen_countries_percentage = map(itemgetter(1), tupleCounts2Percents(seen_countries)) seen_countries_percentage = ['{:.2%}'.format(item) for item in seen_countries_percentage] yvals = map(itemgetter(1), seen_countries) xvals = map(itemgetter(0), seen_countries) plt.figure() countrychart = plt.bar(range(len(seen_countries)), yvals, width=0.9) plt.xticks(range(len(seen_countries)), xvals,rotation=90) plot_margin = 0.25 x0, x1, y0, y1 = plt.axis() plt.axis((x0, x1, y0, y1+plot_margin)) plt.title('Countries in Dataset') plt.xlabel('Countries in Data') plt.ylabel('Occurrences') plt.tick_params(axis='both', which='major', labelsize=6) plt.tick_params(axis='both', which='minor', labelsize=6) plt.tight_layout() autolabel(countrychart,seen_countries_percentage) plt.savefig(path) plt.clf() </code></pre> <p>An idea of what the dict I feed in looks like is:</p> <pre><code>list = [ { "location-value-pair": { "Austria": 234 } }, { "location-value-pair": { "Azerbaijan": 20006.0 } }, { "location-value-pair": { "Germany": 4231 } }, { "location-value-pair": { "United States": 12121 } }, { "location-value-pair": { "Germany": 65445 } }, { "location-value-pair": { "UK": 846744 } } ] </code></pre> <p>How do I:</p> <ol> <li>Make things so one can read them - would the answer be a histogram with bins instead of a bar plot? Maybe stepping every 10%?</li> <li>How do I make it so the tick labels and the labels above the bars (the percentages) don't overlap?</li> <li>Any other insight welcome (e.g. bars with gradient colours, red to yellow)?</li> </ol> <p><strong>EDIT</strong></p> <p>I reduced the number of countries to just the top 50, made bars more transparent, and changed ticks to rotate by 45 degrees. I still find the first bar has a tick which crosses the y axis, so it is unreadable. How can I change this?</p> <p><a href="http://i.stack.imgur.com/HAwNJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/HAwNJ.png" alt="enter image description here"></a></p> <p>Changed to <code>countrychart = plt.bar(range(len(seen_countries)), yvals, width=0.9,alpha=0.6)</code> and also <code>rotation=45</code> to the <code>.text()</code> argument in the <code>autolabel</code> function.</p>
1
2016-09-07T14:33:56Z
39,375,146
<p>The problem was in the alignment of the autolabels:</p> <pre><code>def autolabel(rects,labels): # attach some text labels for i,(rect,label) in enumerate(zip(rects,labels)): height = rect.get_height() plt.text(rect.get_x() + rect.get_width()/2., 1.05*height, label, ha='center', va='bottom',fontsize=6,style='italic') </code></pre> <p>Was changed to:</p> <pre><code>def autolabel(rects,labels): # attach some text labels for i,(rect,label) in enumerate(zip(rects,labels)): height = rect.get_height() plt.text(rect.get_x() + rect.get_width()/2., 1.05*height, label, ha='left', va='bottom',fontsize=6,style='italic', rotation=45) </code></pre> <p>To get:</p> <p><a href="http://i.stack.imgur.com/QivQv.png" rel="nofollow"><img src="http://i.stack.imgur.com/QivQv.png" alt="enter image description here"></a></p>
1
2016-09-07T16:29:42Z
[ "python", "matplotlib", "plot", "charts", "figure" ]
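The percentage labels that autolabel receives can be checked without matplotlib; this is the question's tupleCounts2Percents helper in a minimal, runnable form with made-up counts:

```python
def tuple_counts_to_percents(pairs):
    """Turn [(label, count), ...] into [(label, fraction), ...]."""
    total = float(sum(count for _, count in pairs))
    return [(label, count / total) for label, count in pairs]

seen_countries = [("Germany", 3), ("Austria", 1)]
labels = ['{:.2%}'.format(frac)
          for _, frac in tuple_counts_to_percents(seen_countries)]
print(labels)  # ['75.00%', '25.00%']
```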
How can I print the entire converted sentence on a single line?
39,372,778
<p>I am trying to expand on Codeacademy's Pig Latin converter to practice basic programming concepts. </p> <p>I believe I have the logic nearly right (I'm sure it's not as concise as it could be!) and now I am trying to output the converted Pig Latin sentence entered by the user on a single line.</p> <p>If I print from inside the for loop it prints on new lines each time. If I print from outside it only prints the first word as it is not iterating through all the words. </p> <p>Could you please advise where I am going wrong?</p> <p>Many, many thanks for your help.</p> <pre><code>pyg = 'ay' print ("Welcome to Matt's Pig Latin Converter!") def convert(original): while True: if len(original) &gt; 0 and (original.isalpha() or " " in original): print "You entered \"%s\"." % original split_list = original.split() for word in split_list: first = word[0] new_sentence = word[1:] + first + pyg final_sentence = "".join(new_sentence) print final_sentence break else: print ("That's not a valid input. Please try again.") return convert(raw_input("Please enter a word: ")) convert(raw_input("Please enter a word: ")) </code></pre>
1
2016-09-07T14:34:35Z
39,373,079
<p>Try:</p> <pre><code>pyg = 'ay' print ("Welcome to Matt's Pig Latin Converter!") def convert(original): while True: if len(original) &gt; 0 and (original.isalpha() or " " in original): final_sentence = "" print "You entered \"%s\"." % original split_list = original.split() for word in split_list: first = word[0] new_sentence = word[1:] + first + pyg final_sentence += new_sentence + " " print final_sentence break else: print ("That's not a valid input. Please try again.") return convert(raw_input("Please enter a word: ")) convert(raw_input("Please enter a word: ")) </code></pre> <p>It's because you are remaking final_sentence every time in the for loop instead of adding to it (note that strings have no <code>append</code> method, so the accumulation uses <code>+=</code>).</p>
0
2016-09-07T14:48:31Z
[ "python", "join", "printing" ]
How can I print the entire converted sentence on a single line?
39,372,778
<p>I am trying to expand on Codeacademy's Pig Latin converter to practice basic programming concepts. </p> <p>I believe I have the logic nearly right (I'm sure it's not as concise as it could be!) and now I am trying to output the converted Pig Latin sentence entered by the user on a single line.</p> <p>If I print from inside the for loop it prints on new lines each time. If I print from outside it only prints the first word as it is not iterating through all the words. </p> <p>Could you please advise where I am going wrong?</p> <p>Many, many thanks for your help.</p> <pre><code>pyg = 'ay' print ("Welcome to Matt's Pig Latin Converter!") def convert(original): while True: if len(original) &gt; 0 and (original.isalpha() or " " in original): print "You entered \"%s\"." % original split_list = original.split() for word in split_list: first = word[0] new_sentence = word[1:] + first + pyg final_sentence = "".join(new_sentence) print final_sentence break else: print ("That's not a valid input. Please try again.") return convert(raw_input("Please enter a word: ")) convert(raw_input("Please enter a word: ")) </code></pre>
1
2016-09-07T14:34:35Z
39,373,084
<p>I'm not sure of the program logic, but a quick solution would be appending each final_sentence to a list and, after the for loop, printing the list with a join.</p> <pre><code>pyg = 'ay' print ("Welcome to Matt's Pig Latin Converter!") def convert(original): to_print = [] while True: if len(original) &gt; 0 and (original.isalpha() or " " in original): print "You entered \"%s\"." % original split_list = original.split() for word in split_list: first = word[0] new_sentence = word[1:] + first + pyg final_sentence = "".join(new_sentence) to_print.append(final_sentence) print " ".join(to_print) break else: print ("That's not a valid input. Please try again.") return convert(raw_input("Please enter a word: ")) convert(raw_input("Please enter a word: ")) </code></pre> <p>Does this code do what you want?</p>
0
2016-09-07T14:48:45Z
[ "python", "join", "printing" ]
How can I print the entire converted sentence on a single line?
39,372,778
<p>I am trying to expand on Codeacademy's Pig Latin converter to practice basic programming concepts. </p> <p>I believe I have the logic nearly right (I'm sure it's not as concise as it could be!) and now I am trying to output the converted Pig Latin sentence entered by the user on a single line.</p> <p>If I print from inside the for loop it prints on new lines each time. If I print from outside it only prints the first word as it is not iterating through all the words. </p> <p>Could you please advise where I am going wrong?</p> <p>Many, many thanks for your help.</p> <pre><code>pyg = 'ay' print ("Welcome to Matt's Pig Latin Converter!") def convert(original): while True: if len(original) &gt; 0 and (original.isalpha() or " " in original): print "You entered \"%s\"." % original split_list = original.split() for word in split_list: first = word[0] new_sentence = word[1:] + first + pyg final_sentence = "".join(new_sentence) print final_sentence break else: print ("That's not a valid input. Please try again.") return convert(raw_input("Please enter a word: ")) convert(raw_input("Please enter a word: ")) </code></pre>
1
2016-09-07T14:34:35Z
39,373,100
<p>Your issue is here:</p> <pre><code>for word in split_list: first = word[0] new_sentence = word[1:] + first + pyg final_sentence = "".join(new_sentence) print final_sentence </code></pre> <p>You are joining a single word to itself. You will want to save all the words from inside the loop, then print them once the loop are processed them all.</p> <pre><code>final = [] for word in split_list: new_word = word[1:] + word[0] + pyg final.append(new_word) print ' '.join(final) </code></pre> <p>Or, just for fun, here's the one-liner:</p> <pre><code>print ' '.join([word[1:]+word[0]+'ay' for word in split_list]) </code></pre> <p><strong><em>EDIT:</em></strong> Also, @furas makes a good point in their comment, to print with <strong>no newline</strong> simply add a <code>,</code> to the end of the print statement:</p> <pre><code>for word in split_list: first = word[0] print word[1:] + first + pyg, </code></pre>
0
2016-09-07T14:49:23Z
[ "python", "join", "printing" ]
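All three answers reduce to the same core transformation; a compact, verifiable version (ignoring the input-validation loop) is:

```python
pyg = 'ay'

def to_pig_latin(sentence):
    """Move each word's first letter to the end, add 'ay', join with spaces."""
    return ' '.join(word[1:] + word[0] + pyg for word in sentence.split())

print(to_pig_latin('hello world'))  # ellohay orldway
```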
Word count: 'Column' object is not callable
39,372,801
<pre><code>from pyspark.sql.functions import split, explode shakespeareDF = sqlContext.read.text(fileName).select(removePunctuation(col('value'))) shakespeareDF.show(15, truncate=False) </code></pre>
0
2016-09-07T14:35:27Z
39,373,074
<p>Just use select:</p> <pre><code>shakespeareDF = sc.parallelize([ ("from fairest creatures we desire increase", ), ("that thereby beautys rose might never die", ), ]).toDF(["sentence"]) (shakespeareDF .select(explode(split("sentence", " ")).alias("word")) .show(4)) ## +---------+ ## | word| ## +---------+ ## | from| ## | fairest| ## |creatures| ## | we| ## +---------+ ## only showing top 4 rows </code></pre> <p>Spark SQL columns are not data structures. There are not bound to a data and are meaningful only when evaluated in a context of a specific <code>DataFrame</code>. This way <code>Columns</code> behave more like functions. </p>
1
2016-09-07T14:48:13Z
[ "python", "apache-spark", "pyspark" ]
Word count: 'Column' object is not callable
39,372,801
<pre><code>from pyspark.sql.functions import split, explode sheshakespeareDF = sqlContext.read.text(fileName).select(removePunctuation(col('value'))) shakespeareDF.show(15, truncate=False) </code></pre> <p>The dataframe looks like this:</p> <p><a href="http://i.stack.imgur.com/3pAy9.png" rel="nofollow"><img src="http://i.stack.imgur.com/3pAy9.png" alt="enter image description here"></a></p> <pre><code>ss = split(shakespeareDF.sentence," ") shakeWordsDFa =explode(ss) shakeWordsDF_S=sqlContext.createDataFrame(shakeWordsDFa,'word') </code></pre> <p>Any idea what am I doing wrong? Tip says <code>Column is not iterable</code>.</p> <p>What should I do? I just want to change <code>shakeWordsDFa</code> to dataframe and rename. </p>
0
2016-09-07T14:35:27Z
39,376,823
<p>Note that split() returns an array of Strings, u can't use them directly to create Data Frame (it is 'ss' in your case)</p> <p>So you explode() them into a column &amp; give an alias to it. Select would take your processed column &amp; creates new dataframe.</p> <pre><code>newDF = (shakespeareDF .select(explode(split(shakespeareDF['sentence'], ' ')).alias('word') ) ) </code></pre>
-1
2016-09-07T18:25:39Z
[ "python", "apache-spark", "pyspark" ]
python sqlite3 UPDATE set from variables
39,372,932
<p>I made the following query to update a row in my DB.</p> <pre><code>def saveData(title, LL, LR, RL, RR, distanceBack): c.execute("UPDATE settings SET (?,?,?,?,?,?) WHERE name=?",(title, LL, LR, RL, RR, distanceBack, title)) conn.commit() </code></pre> <p>I always get this error: <code>sqlite3.OperationalError: near "(": syntax error</code>. I know something isn't correct with the question marks, but I can't work out the exact solution. Can somebody explain to me what the problem is?</p>
0
2016-09-07T14:41:56Z
39,373,240
<p>You could use this SQL syntax:</p> <pre><code>UPDATE table_name SET column1 = value1, column2 = value2...., columnN = valueN WHERE [condition]; </code></pre> <p>for example, if you have a table called Category and you want to edit the category name you can use:</p> <pre><code>c.execute("UPDATE CATEGORY SET NAME=? WHERE ID=?", (name,category_id)) </code></pre> <p>WHERE:</p> <p>Category is a table that contains only two items: (ID, NAME) with ID PRIMARY KEY.</p>
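<p>A runnable sketch of that pattern applied to the question's <code>saveData</code>. The schema here is assumed from the function's argument names, purely for illustration:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()

# Hypothetical schema matching the question's arguments.
c.execute("CREATE TABLE settings (name TEXT, LL REAL, LR REAL, RL REAL, RR REAL, distanceBack REAL)")
c.execute("INSERT INTO settings VALUES ('default', 0, 0, 0, 0, 0)")

def saveData(title, LL, LR, RL, RR, distanceBack):
    # Each column=? pair is listed explicitly; no parentheses around the SET list.
    c.execute(
        "UPDATE settings SET LL=?, LR=?, RL=?, RR=?, distanceBack=? WHERE name=?",
        (LL, LR, RL, RR, distanceBack, title),
    )
    conn.commit()

saveData("default", 1.0, 2.0, 3.0, 4.0, 5.0)
print(c.execute("SELECT * FROM settings WHERE name='default'").fetchone())
# -> ('default', 1.0, 2.0, 3.0, 4.0, 5.0)
```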
1
2016-09-07T14:56:08Z
[ "python", "sqlite3", "sql-update" ]
python sqlite3 UPDATE set from variables
39,372,932
<p>I wrote the following query to update a row in my DB.</p> <pre><code>def saveData(title, LL, LR, RL, RR, distanceBack): c.execute("UPDATE settings SET (?,?,?,?,?,?) WHERE name=?",(title, LL, LR, RL, RR, distanceBack, title)) conn.commit() </code></pre> <p>I always get the following error: <code>sqlite3.OperationalError: near "(": syntax error</code>. I know something isn't correct with the question marks, but I can't find the exact solution. Can somebody explain what the problem is?</p>
0
2016-09-07T14:41:56Z
39,373,331
<p>The UPDATE statement must look like <code>UPDATE table SET key=value, key2=value2, ... WHERE &lt;condition&gt;</code>. Note that there are no parentheses around the SET list; the parentheses in your query are what trigger the syntax error.</p> <p>Try modifying the query to something like:</p> <pre><code>c.execute("UPDATE settings SET title=?, LL=?, LR=?, RL=?, RR=?, distanceBack=? WHERE name=?", (title, LL, LR, RL, RR, distanceBack, title)) </code></pre>
-1
2016-09-07T15:00:12Z
[ "python", "sqlite3", "sql-update" ]
How to modifying the behaviour of the Python logging facility?
39,373,005
<p>I would like to change the way in which messages with DEBUG and INFO level are displayed when using Python's native logging facility. By "change", I do not mean altering the format but adding an extra logical level. For example:</p> <pre><code># This is a global variable that is set at the time of initializing the logging. required_verbosity_level = 7 # This variable is passed with each call to the logger. supplied_verbosity_level = 5 </code></pre> <p>So when creating the logger, we pass the global requirement:</p> <pre><code>logger = LoggerBridge(required_verbosity_level = 7) </code></pre> <p>Then when we call the method, we pass the appropriate level:</p> <pre><code>logger.debug('This is a debug message.', supplied_verbosity_level = 5) </code></pre> <p>So internally the check would be (5 &lt; 7), which makes the message <strong>visible</strong>, because the <code>supplied</code> value meets the <code>required</code> one. However, in the following case:</p> <pre><code>logger.debug('This is a debug message.', supplied_verbosity_level = 11) </code></pre> <p>the message <strong>will never be visible</strong>, as the <code>supplied</code> value is higher than the <code>required</code> value. The question is: where would be the best place to implement such behaviour?</p> <p>Right now, I have tried a couple of things based on inheriting from the current Logger class and overriding its internal behaviour, something known as the <code>mixin</code> approach:</p> <pre><code>class LoggerBridge(object): def __init__(self, required_verbosity_level): self.required_verbosity_level = required_verbosity_level def _log_bridge(self, logger, message): logger(message) def info(self, message, supplied_verbosity_level): if supplied_verbosity_level &lt; self.required_verbosity_level: self._log_bridge(logging.info, message) def debug(self, message, supplied_verbosity_level): if supplied_verbosity_level &lt; self.required_verbosity_level: self._log_bridge(logging.debug, message) </code></pre> <p>In theory, this seems to be working. However, is that the right way? Is there a way to solve this using any of the built-in logging facilities, such as a <code>custom handler</code> or a <code>custom filter</code>?</p>
1
2016-09-07T14:45:06Z
39,373,339
<p>If all you want is custom log levels, then it's supported <em>out of the box</em>. You set a custom level (<code>required_verbosity_level</code> in your example) on the logger using <a href="https://docs.python.org/2/library/logging.html#logging.Logger.setLevel" rel="nofollow">setLevel()</a>, and then you use the <a href="https://docs.python.org/2/library/logging.html#logging.Logger.log" rel="nofollow">log()</a> method for logging and pass your custom level (<code>supplied_verbosity_level</code> in your example). If you want to further customize this logic, you might want to override the <a href="https://docs.python.org/2/library/logging.html#logging.Logger.isEnabledFor" rel="nofollow">isEnabledFor()</a> method.</p>
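<p>A minimal sketch of that built-in mechanism. Note the direction of the comparison: the standard library emits a record when its level is greater than or equal to the logger's level, which is the opposite of the question's "supplied &lt; required" rule, so inverting the check is exactly what an <code>isEnabledFor()</code> override would do:</p>

```python
import logging

logger = logging.getLogger("verbose")
logger.addHandler(logging.StreamHandler())
logger.setLevel(7)  # plays the role of required_verbosity_level

# Built-in rule: a record is emitted when its supplied level >= the logger's level.
logger.log(9, "shown: 9 >= 7")
logger.log(5, "hidden: 5 < 7")

print(logger.isEnabledFor(9))  # True
print(logger.isEnabledFor(5))  # False
```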
1
2016-09-07T15:00:27Z
[ "python", "logging" ]
Module Import Error for Pypi module
39,373,117
<p>I am trying to import a PyPI module (thinkx 1.1.2) into Spyder. It is installed on Anaconda and shows up in <code>conda list</code>. My Python path includes my Anaconda folder. When I attempt to import thinkx in Spyder I get:</p> <pre><code>import thinkx Traceback (most recent call last): File "", line 1, in import thinkx ImportError: No module named 'thinkx' </code></pre>
0
2016-09-07T14:50:19Z
39,373,225
<p>According to the <a href="https://github.com/AllenDowney/ThinkX" rel="nofollow">module README</a>, <code>thinkx</code> does not expose a module named <code>thinkx</code>.</p> <blockquote> <p>It provides the following modules:</p> <ul> <li><code>thinkbayes</code>: Code for Think Bayes.</li> <li><code>thinkstats2</code>: Code for Think Stats, 2nd edition</li> <li><code>thinkbayes2</code>: Code for Think Bayes, 2nd edition, not yet published.</li> <li><code>thinkdsp</code>: Code for Think DSP</li> <li><code>thinkplot</code>: Plotting code used in all of the books, mostly wrapper functions for matplotlib.pyplot</li> </ul> </blockquote> <p>Try:</p> <pre><code>import thinkbayes </code></pre>
0
2016-09-07T14:55:08Z
[ "python", "spyder" ]
Rolling sum in subgroups of a dataframe (pandas)
39,373,196
<p>I have a <code>sessions</code> dataframe that contains <code>E-mail</code> and <code>Sessions</code> (int) columns.</p> <p>I need to calculate the rolling sum of sessions <em>per email</em> (i.e. not globally).</p> <p>Now, the following works, but it's painfully slow:</p> <pre><code>emails = set(list(sessions['E-mail'])) ses_sums = [] for em in emails: email_sessions = sessions[sessions['E-mail'] == em] email_sessions.is_copy = False email_sessions['Session_Rolling_Sum'] = pd.rolling_sum(email_sessions['Sessions'], window=self.window).fillna(0) ses_sums.append(email_sessions) df = pd.concat(ses_sums, ignore_index=True) </code></pre> <p>Is there a way of achieving the same in <code>pandas</code>, but using <code>pandas</code> operators on a dataframe instead of creating separate dataframes for each email and then concatenating them?</p> <p>(either that or some other way of making this faster)</p>
2
2016-09-07T14:53:41Z
39,373,401
<p>Say you start with</p> <pre><code>In [58]: df = pd.DataFrame({'E-Mail': ['foo'] * 3 + ['bar'] * 3 + ['foo'] * 3, 'Session': range(9)}) In [59]: df Out[59]: E-Mail Session 0 foo 0 1 foo 1 2 foo 2 3 bar 3 4 bar 4 5 bar 5 6 foo 6 7 foo 7 8 foo 8 In [60]: df[['Session']].groupby(df['E-Mail']).apply(pd.rolling_sum, 3) Out[60]: Session E-Mail bar 3 NaN 4 NaN 5 12.0 foo 0 NaN 1 NaN 2 3.0 6 9.0 7 15.0 8 21.0 </code></pre> <p>Incidentally, note that I just rearranged your <code>rolling_sum</code>, but it has been deprecated - you should now use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html" rel="nofollow"><code>rolling</code></a>:</p> <pre><code>df[['Session']].groupby(df['E-Mail']).apply(lambda g: g.rolling(3).sum()) </code></pre>
2
2016-09-07T15:03:05Z
[ "python", "performance", "pandas" ]
Rolling sum in subgroups of a dataframe (pandas)
39,373,196
<p>I have a <code>sessions</code> dataframe that contains <code>E-mail</code> and <code>Sessions</code> (int) columns.</p> <p>I need to calculate the rolling sum of sessions <em>per email</em> (i.e. not globally).</p> <p>Now, the following works, but it's painfully slow:</p> <pre><code>emails = set(list(sessions['E-mail'])) ses_sums = [] for em in emails: email_sessions = sessions[sessions['E-mail'] == em] email_sessions.is_copy = False email_sessions['Session_Rolling_Sum'] = pd.rolling_sum(email_sessions['Sessions'], window=self.window).fillna(0) ses_sums.append(email_sessions) df = pd.concat(ses_sums, ignore_index=True) </code></pre> <p>Is there a way of achieving the same in <code>pandas</code>, but using <code>pandas</code> operators on a dataframe instead of creating separate dataframes for each email and then concatenating them?</p> <p>(either that or some other way of making this faster)</p>
2
2016-09-07T14:53:41Z
39,373,818
<pre><code>np.random.seed([3,1415]) df = pd.DataFrame({'E-Mail': np.random.choice(list('AB'), 20), 'Session': np.random.randint(1, 10, 20)}) df.groupby('E-Mail').Session.rolling(3).sum() E-Mail A 0 NaN 2 NaN 4 11.0 5 7.0 7 10.0 12 16.0 15 16.0 17 16.0 18 17.0 19 18.0 B 1 NaN 3 NaN 6 18.0 8 14.0 9 16.0 10 12.0 11 13.0 13 16.0 14 20.0 16 22.0 Name: Session, dtype: float64 </code></pre>
2
2016-09-07T15:22:33Z
[ "python", "performance", "pandas" ]
print([[i+j for i in "abc"] for j in "def"])
39,373,220
<p>I'm new to Python.</p> <p>I stumbled upon this comprehension:</p> <pre><code>print([[i+j for i in "abc"] for j in "def"]) </code></pre> <p>Could you please help me convert the comprehension into a for loop?</p> <p>I'm not getting the desired result with a for loop:</p> <pre><code>list = [] list2 = [] for j in 'def': for i in 'abc': list.append(i+j) list2 = list print (list) </code></pre> <p>The above is my attempt with a for loop; I'm missing something. Below is the desired result that I want from the for loop:</p> <p><code>[['ad', 'bd', 'cd'], ['ae', 'be', 'ce'], ['af', 'bf', 'cf']]</code></p> <p>which I believe is a matrix.</p> <p>Thanks in advance.</p>
-2
2016-09-07T14:54:52Z
39,373,348
<p>The for loop equivalent of this comprehension looks like this:</p> <pre><code>result = [] for j in "def": r = [] for i in "abc": r.append(i+j) result.append(r) </code></pre>
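<p>A quick way to verify that the expanded loop is equivalent is to compare its result against the original comprehension directly:</p>

```python
result = []
for j in "def":
    r = []
    for i in "abc":
        r.append(i + j)
    result.append(r)

# The loop builds the same nested list as the one-liner.
assert result == [[i + j for i in "abc"] for j in "def"]
print(result)
# -> [['ad', 'bd', 'cd'], ['ae', 'be', 'ce'], ['af', 'bf', 'cf']]
```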
1
2016-09-07T15:00:51Z
[ "python", "list-comprehension" ]