title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Return True if item is empty string | 39,194,596 | <p>Please consider this snippet:</p>
<pre><code>>>> i = ["", 1, 2]
>>> all([x for x in i])
False
</code></pre>
<p>What would be a Pythonic way to make this snippet return <code>True</code> even if an item in the iterable is an empty string?</p>
<p>Standard restrictions should apply - if an item is <code>None</code> or <code>False</code> etc., it should return <code>False</code> as expected.</p>
| 0 | 2016-08-28T18:40:59Z | 39,194,613 | <pre><code>>>> lst = ["", 1, 2]
>>>
>>> all(True if i=='' else i for i in lst)
True
</code></pre>
<p>If you want <code>True</code> when there is at least one item that evaluates to <code>True</code>, use <code>any()</code>.</p>
<pre><code>>>> any(lst)
True
</code></pre>
<p>Note that in general <code>any()</code> and <code>all()</code> accept an iterable argument, so you don't need to loop over it yourself.</p>
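Both builtins can be checked directly against the question's list - a quick sketch:

```python
lst = ["", 1, 2]

# all() short-circuits on the first falsy item; any() on the first truthy one.
print(all(lst))  # False: "" is falsy
print(any(lst))  # True: 1 is truthy
print(all(True if i == '' else i for i in lst))  # True: "" is special-cased
```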
| 2 | 2016-08-28T18:42:42Z | [
"python"
] |
Return True if item is empty string | 39,194,596 | <p>Please consider this snippet:</p>
<pre><code>>>> i = ["", 1, 2]
>>> all([x for x in i])
False
</code></pre>
<p>What would be a Pythonic way to make this snippet return <code>True</code> even if an item in the iterable is an empty string?</p>
<p>Standard restrictions should apply - if an item is <code>None</code> or <code>False</code> etc., it should return <code>False</code> as expected.</p>
| 0 | 2016-08-28T18:40:59Z | 39,194,624 | <pre><code>all([x for x in i if not isinstance(x, str)])
</code></pre>
<p>The only falsy string is the empty string, so if you don't want to test for it, then filter out the strings.</p>
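A quick sketch of the filtering approach, with a <code>None</code> added to show that the "standard restrictions" from the question still apply:

```python
items = ["", 1, 2]
# Strings are dropped before the truth test, so "" cannot fail it...
print(all(x for x in items if not isinstance(x, str)))  # True

# ...while other falsy values such as None still make all() return False.
print(all(x for x in [None, "", 1] if not isinstance(x, str)))  # False
```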
| 1 | 2016-08-28T18:43:56Z | [
"python"
] |
Return True if item is empty string | 39,194,596 | <p>Please consider this snippet:</p>
<pre><code>>>> i = ["", 1, 2]
>>> all([x for x in i])
False
</code></pre>
<p>What would be a Pythonic way to make this snippet return <code>True</code> even if an item in the iterable is an empty string?</p>
<p>Standard restrictions should apply - if an item is <code>None</code> or <code>False</code> etc., it should return <code>False</code> as expected.</p>
| 0 | 2016-08-28T18:40:59Z | 39,194,641 | <p>You can use <a href="https://docs.python.org/2/library/functions.html#all" rel="nofollow">all</a>, for instance:</p>
<pre><code>tests = [
[None, False, "", 1, 2],
["", 1, 2],
[1, 2],
[]
]
for i in tests:
print i
print all(filter(lambda x: x != '', i))
print all(True if x == '' else x for x in i)
print '-' * 80
</code></pre>
| -1 | 2016-08-28T18:45:12Z | [
"python"
] |
Return True if item is empty string | 39,194,596 | <p>Please consider this snippet:</p>
<pre><code>>>> i = ["", 1, 2]
>>> all([x for x in i])
False
</code></pre>
<p>What would be a Pythonic way to make this snippet return <code>True</code> even if an item in the iterable is an empty string?</p>
<p>Standard restrictions should apply - if an item is <code>None</code> or <code>False</code> etc., it should return <code>False</code> as expected.</p>
| 0 | 2016-08-28T18:40:59Z | 39,194,726 | <p>This option looks nice to me.</p>
<pre><code>all(x or x=="" for x in i)
</code></pre>
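Checking this expression against both requirements from the question:

```python
i = ["", 1, 2]
# "" fails the `x` test but passes `x == ""`, so it is accepted.
print(all(x or x == "" for x in i))           # True

# None and False fail both tests, so they still break all().
print(all(x or x == "" for x in [None, 1]))   # False
print(all(x or x == "" for x in [False, 1]))  # False
```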
| 2 | 2016-08-28T18:55:30Z | [
"python"
] |
'ImportError: No module named pillow' in PyCharm | 39,194,601 | <p>I'm getting an error while using PyCharm which doesn't allow me to import the pillow module even though I have it installed as a package in the project interpreter. Any help is greatly appreciated!</p>
<p><a href="http://imgur.com/a/DfjC3" rel="nofollow">http://imgur.com/a/DfjC3</a></p>
| 0 | 2016-08-28T18:41:19Z | 39,194,684 | <p>You are trying to run your code with the default Python interpreter (<code>/Library/Frameworks/Python.framework/Versions/3.2/bin/python3</code>). You need to configure PyCharm to run your code with Anaconda (<code>~/anaconda/bin/python</code>).</p>
<p>And then (as @JamesK says) read the Pillow tutorial and documentation:</p>
<p><code>import PIL</code> not <code>import Pillow</code></p>
| 0 | 2016-08-28T18:49:19Z | [
"python",
"python-3.x",
"import",
"pycharm"
] |
'ImportError: No module named pillow' in PyCharm | 39,194,601 | <p>I'm getting an error while using PyCharm which doesn't allow me to import the pillow module even though I have it installed as a package in the project interpreter. Any help is greatly appreciated!</p>
<p><a href="http://imgur.com/a/DfjC3" rel="nofollow">http://imgur.com/a/DfjC3</a></p>
| 0 | 2016-08-28T18:41:19Z | 39,194,779 | <p>While the name of the package is pillow, it is a replacement for PIL and uses <code>PIL</code> as the name of its base module.</p>
<p>The usual way to use Pillow is:</p>
<pre><code>from PIL import Image
im = Image.open("filename")
</code></pre>
<p>See the <a href="http://pillow.readthedocs.io/en/3.1.x/handbook/tutorial.html" rel="nofollow">tutorial</a>, and the <a href="http://pillow.readthedocs.io/en/3.1.x/index.html" rel="nofollow">documentation</a></p>
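A minimal smoke test (no image file needed) to confirm the import is wired up; `Image.new` creates an in-memory image instead of opening one:

```python
from PIL import Image  # the package installs as "pillow" but imports as PIL

# Create a tiny in-memory image rather than loading a file.
im = Image.new("RGB", (2, 2), (255, 0, 0))
print(im.size, im.mode)  # (2, 2) RGB
```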
| 0 | 2016-08-28T19:01:17Z | [
"python",
"python-3.x",
"import",
"pycharm"
] |
How to import the most recent version of the print function in Python 2.7.x? | 39,194,615 | <p>I have discovered that I can't use the <code>flush</code> argument in the new print function if I use Python 2.7.11. I have used:</p>
<pre><code>from __future__ import print_function
print('Hello', flush=True)
</code></pre>
<p>but it complains with the error:</p>
<pre><code>Traceback (most recent call last):
File "print_future.py", line 3, in <module>
print('Hello', flush=True)
TypeError: 'flush' is an invalid keyword argument for this function
</code></pre>
<p>After looking at <a href="https://docs.python.org/3/library/functions.html#print" rel="nofollow">the documentation for <code>print</code></a> I discovered that it doesn't work even though it is an argument. My inference is that it doesn't work because <code>flush</code> was only added in version 3.3, thus the special <code>from __future__ import</code> statement is probably importing an older version of the function. The comments suggest it's using 2.7.11 but I don't understand why it's doing this.</p>
<p>I did see <a href="http://stackoverflow.com/questions/27991443/need-python-3-4-version-of-print-from-future">Need Python 3.4 version of print() from __future__</a>, whose answer just wraps <code>print</code> and manually adds the <code>flush</code> parameter. Even though that works, it seems more of a hack than addressing the real problem, that we don't have the most recent version of print. </p>
<p>Is there a way to import specific versions of Python functions (I want to use the Python 3.5 <code>print</code> function, specifically) to my current Python script? If this is not possible, why not?</p>
<hr>
<p>It seems (surprisingly) its not clear to people how my question is different even though I wrote it. I will say it differently.</p>
<ol>
<li>It seems that my issue is that its importing the python function that I am not expecting (since it doesn't find the flush argument). Therefore, the most natural and the first thing I'd like to do and know is, which print function is it importing. The comments suggest its using 2.7.11 but I don't understand why its doing this.</li>
<li>I understand that the from future statement changes how my compiler works. It seems I assumed that since it was a future statement it also brought in the print function from a future release. It seems it only changes the behaviour of my interpreter. If I were able to see what print function its using I would know that its not importing a function from a future release but only acting like a future python interpreter. Thats what it seems but I don't know for sure and I'd like to know for sure whats going on.</li>
<li>Last but not least, is there not a natural way to import future python functions to my current python script? I want to use python 3.5 print statement, is it not possible to use that function apart from just making the compiler behave like a future python interpreter but also behave like a future python version? It was (at least to me) counter intuitive to have the parser act like future python but still act like a 2.7.11. The solution I want is not a wrapper but a import of the recent python. If this is not possible, then a answer to my question should explain why its not possible.</li>
</ol>
| 0 | 2016-08-28T18:42:43Z | 39,213,378 | <ol>
<li><blockquote>
<p>its importing the python function that I am not expecting</p>
</blockquote>
<p>I don't see why, this is the version of <code>print</code> <a href="https://docs.python.org/2/library/functions.html#print" rel="nofollow">that's documented in 2.7.x</a>.</p>
<blockquote>
<p>which print function is it importing. </p>
</blockquote>
<p>The version bundled with 2.7.11, which is the version that was introduced in 3.0 (see <a href="https://www.python.org/dev/peps/pep-3105/" rel="nofollow">PEP-3105</a>).</p>
<blockquote>
<p>The comments suggest its using 2.7.11 but I don't understand why its doing this</p>
</blockquote>
<p>...because that's the version of Python you're using? <code>from __future__ import ...</code> doesn't search your computer for alternative implementations, it uses the one that's bundled with the version of Python you're using. If you think about it this is inevitable, as otherwise future imports would fail on machines that don't have 3.x installed.</p></li>
<li><p>The majority of this made little sense to me, but:</p>
<blockquote>
<p>I assumed that since it was a future statement it also brought in the print function from a future release.</p>
</blockquote>
<p>That's <strong>exactly what it's doing</strong>. You don't get to choose <em>which</em> future release, though. You get the version that was planned for release at the time it was added into <code>__future__</code>.</p></li>
<li><blockquote>
<p>is there not a natural way to import future python functions to my current python script?</p>
</blockquote>
<p>Yes, and it's the way you're using, as long as that functionality is supported in <a href="https://docs.python.org/2/library/__future__.html" rel="nofollow"><code>__future__</code></a>.</p>
<blockquote>
<p>is it not possible to use that function</p>
</blockquote>
<p>not the 3.3-onwards version of it in the version of Python that you're using, no*. If you want the functionality offered by more recent versions of Python, <em>use a more recent version of Python</em>. 2.x is running out of road.</p>
<blockquote>
<p>It was (at least to me) counter intuitive to have the parser act like future python but still act like a 2.7.11</p>
</blockquote>
<p>It's not <em>"[acting] like a 2.7.11"</em>. It's using the print function from 3.x rather than the print statement from 2.x. If you try to <code>print 'hello'</code> after the import of <code>print_function</code>, you'll get a <code>SyntaxError</code> as you would in 3.x.</p>
<p><em>* assuming you're not planning to get into hacking your installation</em></p></li>
</ol>
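For completeness, a sketch of the wrapper workaround that the question itself links to - emulating the 3.3+ <code>flush</code> keyword on top of the <code>__future__</code> print function (the helper name is my own):

```python
from __future__ import print_function  # enables the function form on 2.x
import sys

def print_flush(*args, **kwargs):
    # Emulate the flush keyword that CPython only added to print() in 3.3.
    flush = kwargs.pop('flush', False)
    print(*args, **kwargs)
    if flush:
        kwargs.get('file', sys.stdout).flush()

print_flush('Hello', flush=True)
```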
| 1 | 2016-08-29T19:03:53Z | [
"python",
"python-2.7",
"python-3.x",
"printing",
"flush"
] |
Python for loop doesn't work as expected | 39,194,734 | <p>I don't get any error messages, but the for loop doesn't produce the same result as the explicit statements. kp is an instance of a class and key0-9 are child elements of that class. Should what I'm trying to do actually work? If yes, then maybe it's something about how PyQt4 classes are constructed that is the problem.</p>
<p>This works:</p>
<pre><code>def open_kp1(self, kp, le):
self.inputStr = le.text()
kp.key1.clicked.disconnect()
kp.key2.clicked.disconnect()
kp.key3.clicked.disconnect()
kp.key4.clicked.disconnect()
kp.key5.clicked.disconnect()
kp.key6.clicked.disconnect()
kp.key7.clicked.disconnect()
kp.key8.clicked.disconnect()
kp.key9.clicked.disconnect()
kp.key0.clicked.disconnect()
... more code
</code></pre>
<p>This does not:</p>
<pre><code>def open_kp1(self, kp, le):
self.inputStr = le.text()
key_list = (kp.key1, kp.key2, kp.key3, kp.key4, kp.key5, kp.key6, kp.key7,
kp.key8, kp.key9, kp.key0)
for key in key_list:
key.clicked.disconnect()
... more code
</code></pre>
| 2 | 2016-08-28T18:56:18Z | 39,194,919 | <p>The problem may be that while you are invoking the <code>disconnect()</code> function in a for loop, there could be some error with the list depending on the scope you are using it in, or some constraint against running the function in a loop.</p>
<p>A good first step to debug this is to run the code below and see where it actually gives an error, so that you can provide more input here for people to answer.</p>
<pre><code>def open_kp1(self, kp, le):
self.inputStr = le.text()
key_list = (kp.key1, kp.key2, kp.key3, kp.key4, kp.key5, kp.key6, kp.key7,kp.key8, kp.key9, kp.key0)
print(key_list) # print the list to verify the list is indeed intact
for key in key_list:
print(key) # verify if something indeed is the problem with individual key or if the code ever enters the for loop.
key.clicked.disconnect()
... more code
</code></pre>
<p>What you need to verify is that the list is indeed storing the pointers as you intend to.</p>
<p>Also, on a side note, it is good practice to run your app in debug mode so that you get a verbose description in case something goes sideways, and to upload the trace here.</p>
<p>edit: spelling</p>
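As a further side note, the ten explicit attributes can also be collected with <code>getattr()</code>; a stand-in class sketches the pattern (no PyQt needed to demonstrate the attribute lookup, and the names here are illustrative):

```python
class KeyPad(object):
    """Stand-in for the PyQt widget, just to show the attribute lookup."""
    def __init__(self):
        for n in range(10):
            setattr(self, 'key%d' % n, 'button-%d' % n)

kp = KeyPad()
# Equivalent to the hand-written tuple (kp.key0, kp.key1, ..., kp.key9).
key_list = [getattr(kp, 'key%d' % n) for n in range(10)]
print(key_list[0], key_list[9])  # button-0 button-9
```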
| -1 | 2016-08-28T19:18:15Z | [
"python",
"pyqt4"
] |
How to add a character at the beginning of multiple selected lines | 39,194,747 | <p>I'm coding some python files with sublime and I'd like to comment multiple selected lines, which means putting the character '#' at the beginning of each selected line. Is it possible to create such a shortcut-key binding in Sublime to do that?</p>
<p>Thanks
Vincent</p>
| -1 | 2016-08-28T18:57:48Z | 39,194,781 | <p>To comment code in Sublime Text 3 you can use the existing shortcut, which is bound to <code>Ctrl-/</code></p>
<p>If you want to change that shortcut, you can edit your keyboard user settings and add this line:</p>
<p><code>{ "keys": ["ctrl+7"], "command": "toggle_comment", "args": { "block": false } }</code></p>
<p>In the previous line you can see how that command would be bound to <code>ctrl+7</code></p>
| 0 | 2016-08-28T19:01:21Z | [
"python",
"sublimetext"
] |
How to add a character at the beginning of multiple selected lines | 39,194,747 | <p>I'm coding some python files with sublime and I'd like to comment multiple selected lines, which means putting the character '#' at the beginning of each selected line. Is it possible to create such a shortcut-key binding in Sublime to do that?</p>
<p>Thanks
Vincent</p>
| -1 | 2016-08-28T18:57:48Z | 39,194,790 | <p>You can put three quote characters (<code>'''</code>) at the front of your block and three at the end.</p>
<pre><code>''' This is
a multi-line
comment '''
</code></pre>
| 0 | 2016-08-28T19:01:54Z | [
"python",
"sublimetext"
] |
Slicing is adding a 3rd Dimension to my Array - Not sure why | 39,194,834 | <p>I am trying to index the testdata with just the labels that equal 2 and 3. However, when I run this code, it turns my array from 2D (100 x 100) into 3D (100 x 1 x 100).</p>
<p>Can anyone explain why it is doing this? The last line in the code is the culprit, but I am not sure why it is happening.</p>
<pre><code>labels = testdata[:,0]
num2 = numpy.nonzero(labels == 2)
num2 = numpy.transpose(num2)
num3 = numpy.nonzero(labels == 3)
num3 = numpy.transpose(num3)
num = numpy.vstack([num2,num3])
testdata = testdata[num,:]
</code></pre>
| 2 | 2016-08-28T19:07:26Z | 39,196,101 | <p>When there are puzzles, print intermediate values. Better yet, run a test case in an interactive shell so you can check each value and understand what is going on. Keep track of the shapes.</p>
<p>Looks like <code>labels</code> is a 1d array of numbers like:</p>
<pre><code>In [212]: labels=np.array([0,1,2,2,3,2,0,3,2])
</code></pre>
<p>indexes where <code>labels</code> is 2 or 3:</p>
<pre><code>In [213]: num2=np.nonzero(labels==2)
In [214]: num2
Out[214]: (array([2, 3, 5, 8], dtype=int32),)
In [215]: num3=np.nonzero(labels==3)
</code></pre>
<p>Here's a key step - what is the purpose of <code>transpose</code>? Note that <code>num2</code> is a tuple with one 1d array.</p>
<pre><code>In [216]: num2=np.transpose(num2)
In [217]: num3=np.transpose(num3)
In [218]: num2
Out[218]:
array([[2],
[3],
[5],
[8]], dtype=int32)
</code></pre>
<p>After the transpose <code>num2</code> is a column array, (4,1) shape.</p>
<p>Joining them vertically produces a (6,1) array:</p>
<pre><code>In [220]: num=np.vstack([num2,num3])
In [221]: num
Out[221]:
array([[2],
[3],
[5],
[8],
[4],
[7]], dtype=int32)
In [222]: num.shape
Out[222]: (6, 1)
In [223]: labels[num]
Out[223]:
array([[2],
[2],
[2],
[2],
[3],
[3]])
In [224]: labels[num].shape
Out[224]: (6, 1)
</code></pre>
<p>Indexing the 1d array with that array produces another array of the same shape as the index. Indexing <code>x[num,:]</code> does the same thing, but with the added last dimension.</p>
<hr>
<p>If I index a (3,4) array with a (2,5) array in the 1st dimension, the result is a (2,5,4) array:</p>
<pre><code>In [227]: np.ones((3,4))[np.ones((2,5),int),:].shape
Out[227]: (2, 5, 4)
</code></pre>
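A sketch of the usual fix for the question's code: keep the index one-dimensional (here via a boolean mask on a small made-up array) so no extra axis appears:

```python
import numpy as np

testdata = np.arange(20.0).reshape(4, 5)
testdata[:, 0] = [0, 2, 3, 2]
labels = testdata[:, 0]

# A 1-d boolean mask (or np.where(...)[0]) selects rows without adding an axis.
mask = (labels == 2) | (labels == 3)
picked = testdata[mask, :]
print(picked.shape)  # (3, 5) -- still 2-d
```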
| 1 | 2016-08-28T21:48:57Z | [
"python",
"numpy"
] |
OpenGL: How do I apply a texture to this cube? | 39,194,862 | <p>I'm trying to learn OpenGL, and I've been going through a lot of tutorials on loading a texture, but every single one seems to miss the most important step: how do I actually put a texture on something?</p>
<p>I'm using Python for this, and here is my function that loads the texture:</p>
<pre><code>def loadTexture():
textureSurface = pygame.image.load('test_image.png')
textureData = pygame.image.tostring(textureSurface,"RGBA",1)
width = textureSurface.get_width()
height = textureSurface.get_height()
glEnable(GL_TEXTURE_2D)
texid = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, texid)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
return texid
</code></pre>
<p>And here is the function that loads my Cube:</p>
<pre><code>vertices = (
# x y z
( 1,-1,-1),
( 1, 1,-1),
(-1, 1,-1),
(-1,-1,-1),
( 1,-1, 1),
( 1, 1, 1),
(-1,-1, 1),
(-1, 1, 1)
)
edges = (
(0,1),
(0,3),
(0,4),
(2,1),
(2,3),
(2,7),
(6,3),
(6,4),
(6,7),
(5,1),
(5,4),
(5,7)
)
def Cube():
glBegin(GL_LINES)
for edge in edges:
glColor3fv((1,1,1))
for vertex in edge:
glVertex3fv(vertices[vertex])
glEnd()
</code></pre>
<p>And here's the main loop:</p>
<pre><code>pygame.init()
display = (800,600)
screen = pygame.display.set_mode(display, DOUBLEBUF | OPENGL | OPENGLBLIT)
gluPerspective(45, display[0]/display[1],0.1,50.0)
glTranslatef(0.0,0.0,-5)
while True:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
Cube()
pygame.display.flip()
pygame.time.wait(10)
</code></pre>
<p>But the cube is untextured. I don't know how to actually use the loaded texture on the cube, and every texture tutorial I find takes me as far as the loadTexture function without actually telling me how to use it. Where do I call it? What do I do with texid?</p>
| 0 | 2016-08-28T19:11:35Z | 39,195,353 | <p>There are a few issues with your code:</p>
<ul>
<li>You're not calling the method to load your textures. </li>
<li>You're drawing only lines of the cube, you need polygons to be filled up with the actual textures, which means using either triangles or quads with texture coordinates.</li>
<li>You're not processing the pygame events</li>
</ul>
<p>Here's some modifications of your code:</p>
<pre><code>import pygame
import sys
from OpenGL.GL import *
from OpenGL.GLU import *
vertices = (
# x y z
(1, -1, -1),
(1, 1, -1),
(-1, 1, -1),
(-1, -1, -1),
(1, -1, 1),
(1, 1, 1),
(-1, -1, 1),
(-1, 1, 1)
)
edges = (
(0, 1),
(0, 3),
(0, 4),
(2, 1),
(2, 3),
(2, 7),
(6, 3),
(6, 4),
(6, 7),
(5, 1),
(5, 4),
(5, 7)
)
def loadTexture():
textureSurface = pygame.image.load('test_image.png')
textureData = pygame.image.tostring(textureSurface, "RGBA", 1)
width = textureSurface.get_width()
height = textureSurface.get_height()
glEnable(GL_TEXTURE_2D)
texid = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, texid)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height,
0, GL_RGBA, GL_UNSIGNED_BYTE, textureData)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
return texid
def draw_cube(lines=False):
if lines:
glBegin(GL_LINES)
for edge in edges:
glColor3fv((1, 1, 1))
for vertex in edge:
glVertex3fv(vertices[vertex])
glEnd()
else:
glBegin(GL_QUADS)
glTexCoord2f(0.0, 0.0)
glVertex3f(-1.0, -1.0, 1.0)
glTexCoord2f(1.0, 0.0)
glVertex3f(1.0, -1.0, 1.0)
glTexCoord2f(1.0, 1.0)
glVertex3f(1.0, 1.0, 1.0)
glTexCoord2f(0.0, 1.0)
glVertex3f(-1.0, 1.0, 1.0)
glTexCoord2f(1.0, 0.0)
glVertex3f(-1.0, -1.0, -1.0)
glTexCoord2f(1.0, 1.0)
glVertex3f(-1.0, 1.0, -1.0)
glTexCoord2f(0.0, 1.0)
glVertex3f(1.0, 1.0, -1.0)
glTexCoord2f(0.0, 0.0)
glVertex3f(1.0, -1.0, -1.0)
glTexCoord2f(0.0, 1.0)
glVertex3f(-1.0, 1.0, -1.0)
glTexCoord2f(0.0, 0.0)
glVertex3f(-1.0, 1.0, 1.0)
glTexCoord2f(1.0, 0.0)
glVertex3f(1.0, 1.0, 1.0)
glTexCoord2f(1.0, 1.0)
glVertex3f(1.0, 1.0, -1.0)
glTexCoord2f(1.0, 1.0)
glVertex3f(-1.0, -1.0, -1.0)
glTexCoord2f(0.0, 1.0)
glVertex3f(1.0, -1.0, -1.0)
glTexCoord2f(0.0, 0.0)
glVertex3f(1.0, -1.0, 1.0)
glTexCoord2f(1.0, 0.0)
glVertex3f(-1.0, -1.0, 1.0)
glTexCoord2f(1.0, 0.0)
glVertex3f(1.0, -1.0, -1.0)
glTexCoord2f(1.0, 1.0)
glVertex3f(1.0, 1.0, -1.0)
glTexCoord2f(0.0, 1.0)
glVertex3f(1.0, 1.0, 1.0)
glTexCoord2f(0.0, 0.0)
glVertex3f(1.0, -1.0, 1.0)
glTexCoord2f(0.0, 0.0)
glVertex3f(-1.0, -1.0, -1.0)
glTexCoord2f(1.0, 0.0)
glVertex3f(-1.0, -1.0, 1.0)
glTexCoord2f(1.0, 1.0)
glVertex3f(-1.0, 1.0, 1.0)
glTexCoord2f(0.0, 1.0)
glVertex3f(-1.0, 1.0, -1.0)
glEnd()
pygame.init()
display = (800, 600)
screen = pygame.display.set_mode(
display, pygame.DOUBLEBUF | pygame.OPENGL | pygame.OPENGLBLIT)
loadTexture()
gluPerspective(45, display[0] / display[1], 0.1, 50.0)
glTranslatef(0.0, 0.0, -5)
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
draw_cube(lines=False)
pygame.display.flip()
</code></pre>
<p>The above code is not using best practices, but it'll help you with the issues listed above.</p>
<p>One piece of advice though: I'd recommend you go through an OpenGL tutorial before pasting code randomly from the internet; try to understand how the graphics pipeline works and then everything will start making sense. Also, I'd recommend you learn about <a href="http://learnopengl.com/" rel="nofollow">modern OpenGL</a> instead of using the old <a href="https://www.opengl.org/wiki/Fixed_Function_Pipeline" rel="nofollow">opengl fixed pipeline</a></p>
| 4 | 2016-08-28T20:11:37Z | [
"python",
"opengl",
"pygame",
"pyopengl"
] |
Pandas groupby to get two aggregated functions then convert to list of list | 39,194,893 | <p>I'm looking for a better way to write this. This works fine for my sample data set, but is pretty slow on a larger data set. Starting with a <code>DataFrame</code> of customer purchase numbers, I'd like to get a list of lists in the format <code>[Customer, Mean of Orders, Count of Orders]</code>.</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data = [['Customer0', 10], ['Customer0', 12], ['Customer1', 23]],
columns=['Customer', 'Orders'])
grouped = df.groupby(['Customer']).mean()
grouped['count'] = df.groupby(['Customer']).count()
values = grouped.values.tolist()
indexes = grouped.index.tolist()
for x in range(0,len(values)):
values[x].insert(0, indexes[x])
print values
</code></pre>
<p>Output:</p>
<pre><code>[['Customer0', 11, 2], ['Customer1', 23, 1]]
</code></pre>
| 2 | 2016-08-28T19:15:30Z | 39,194,923 | <p>Can you try this one? </p>
<pre><code>df.groupby('Customer').agg(['mean', 'count']).reset_index().values.tolist()
Out: [['Customer0', 11, 2], ['Customer1', 23, 1]]
</code></pre>
<p>A small note: This can only improve your code significantly if the number of groups (<code>len(values)</code>) is quite large because we are not looping here. If you have only a small number of groups, I guess the improvement would be 2x at most. </p>
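Putting the one-liner together with the question's sample frame (note the aggregated values come back as floats for the mean):

```python
import pandas as pd

df = pd.DataFrame([['Customer0', 10], ['Customer0', 12], ['Customer1', 23]],
                  columns=['Customer', 'Orders'])

# One pass: both aggregations per group, index restored as a column, then listified.
result = df.groupby('Customer').agg(['mean', 'count']).reset_index().values.tolist()
print(result)  # [['Customer0', 11.0, 2], ['Customer1', 23.0, 1]]
```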
| 2 | 2016-08-28T19:18:40Z | [
"python",
"python-2.7",
"pandas"
] |
Queue size appears to be the ENV size in SimPy | 39,194,898 | <p>I am just starting to work on an event simulation, and I am having some issues with monitoring the queue.</p>
<p>It appears that every time I check the queue, it is actually showing <code>env.now</code>. Any advice?</p>
<pre><code>import simpy
num_of_machines = 2
env = simpy.Environment()
bcs = simpy.Resource(env, capacity=num_of_machines)
def monitor(resource):
"""This is our monitoring callback."""
print('Queue size: %s' % len(resource.queue))
def process_client(env, name):
with bcs.request() as req:
yield req
print('%s starting to charge at %s' % (name, env.now))
yield env.timeout(90)
print('%s ending charge at %s' % (name, env.now))
monitor(bcs)
def setup(env):
i = 0
while True:
i += 1
yield env.timeout(1)
env.process(process_client(env, ('Car %s' % i)))
env.process(setup(env))
env.run(until=300)
</code></pre>
<p>Results:</p>
<pre><code>Car 1 starting to charge at 1
Car 2 starting to charge at 2
Car 1 ending charge at 91
Queue size: 88
Car 3 starting to charge at 91
Car 2 ending charge at 92
Queue size: 88
Car 4 starting to charge at 92
Car 3 ending charge at 181
Queue size: 176
Car 5 starting to charge at 181
Car 4 ending charge at 182
Queue size: 176
Car 6 starting to charge at 182
Car 5 ending charge at 271
Queue size: 264
Car 7 starting to charge at 271
Car 6 ending charge at 272
Queue size: 264
Car 8 starting to charge at 272
</code></pre>
| 0 | 2016-08-28T19:15:48Z | 39,215,796 | <p>You spawn a <code>process_client()</code> every timestep, so when the first of these processes is done after 90 time steps, you already have created 90 new processes that are queueing up. So your numbers are looking quite right.</p>
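The numbers in the output can be sanity-checked with simple arithmetic, no simulation needed:

```python
# One car arrives per time step, each charge takes 90 steps, and only
# `capacity` cars can charge at once, so when Car 1 finishes at t=91
# roughly 90 arrivals minus the 2 charging cars are still queued.
capacity = 2
service_time = 90
arrivals_by_first_finish = service_time  # one arrival per step
queued = arrivals_by_first_finish - capacity
print(queued)  # 88, matching "Queue size: 88" in the output above
```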
| 0 | 2016-08-29T21:55:55Z | [
"python",
"simpy"
] |
Python Stored procedure not running, No errors | 39,194,924 | <p>The following procedure works fine from mysql client but not running from Python.</p>
<p><strong>Stored Procedure</strong></p>
<pre><code>CREATE DEFINER=`music-cnv`@`%` PROCEDURE `StoreFileStats`(FNAME VARCHAR(200), FEXT varchar(4), FBDIR VARCHAR(100), FRDIR VARCHAR(250), FSIZE bigint(8), FMDATE bigint(8), FCDATE bigint(8), CONVERTED tinyint(1))
BEGIN
DECLARE FCount int DEFAULT 0;
SELECT COUNT(FileName) INTO FCount FROM FileList where (FleRelativeDir LIKE FRDIR) AND (FileName LIKE FNAME);
IF FCount = 0 THEN
INSERT INTO FileList (FileName,FileBaseDir,FleRelativeDir,FileExt,FileSize,FileModDate,FileCDate,Converted) VALUES (FNAME,FBDir,FRDir,FEXT,FSize,FMDate,FCDate,CONVERTED);
END IF;
END
</code></pre>
<p><strong>Data</strong></p>
<pre><code>'In the Light', 'FLAC', '/var/data/Music_FLAC', 'Led Zeppelin/Physical Graffiti, Disc 2', 51472669, 1289282499, 1458631127, False
</code></pre>
<p><strong>Python Code</strong></p>
<p>The connection and cursor give no errors</p>
<pre><code>try:
myargs = [fnamesub, self.type.strip(), self.directory,
subdirname, fpathstat[2], fpathstat[3],
fpathstat[4], False]
result_args = mycur.callproc('StoreFileStats', myargs)
except mysql.connector.Error as Err:
errno = 51
print('Error ' + str(errno) + ' !!!, Cannot Update MySQL Data with Name ' + fnamesub)
print(Err)
</code></pre>
<p>The code runs without error but does not update the database.
Thank you for any help.</p>
| 0 | 2016-08-28T19:19:06Z | 39,196,993 | <p>Just needed to commit. When running the procedure from the mysql shell, autocommit is enabled, but when run from Python you must call <code>commit()</code> manually.</p>
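The same pitfall can be sketched with stdlib sqlite3 (used here purely for illustration; with mysql.connector the fix is likewise calling the connection's commit() after callproc()):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE FileList (FileName TEXT)')
cur.execute("INSERT INTO FileList VALUES ('In the Light')")
conn.commit()  # without an explicit commit, the INSERT is not made permanent
cur.execute('SELECT COUNT(*) FROM FileList')
print(cur.fetchone()[0])  # 1
```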
| 0 | 2016-08-29T00:18:45Z | [
"python",
"mysql",
"stored-procedures"
] |
Python global array with multiprocessing | 39,194,994 | <p>Please consider the following code:</p>
<pre><code>from multiprocessing import Process
import time
myArray = []
def myThread():
counter = 0
global myArray
while True:
myArray.append(counter)
time.sleep(1)
counterThread = Process(target=myThread,)
counterThread.start()
while True:
if len(myArray) > 0:
print "Success"
else:
print ":("
print myArray
time.sleep(1)
</code></pre>
<p>I am unable to get my success message, and i'm not sure why, I keep receiving <code>:(</code> and my terminal printing an empty array. I thought making the array global would mean any changes made at <code>myThread()</code> level would be applied?</p>
| 1 | 2016-08-28T19:28:38Z | 39,195,075 | <p>You are creating a second process, which has no access to the data of the main process. You can use <code>threading.Thread(target=myThread)</code> instead, but you have to synchronize access with a <code>threading.Lock()</code> if more than one thread modifies the data.</p>
<p>You should terminate your thread when you are finished, and wait for it with <code>thread.join()</code>.</p>
<p>See:
<a href="https://docs.python.org/2/library/threading.html" rel="nofollow">https://docs.python.org/2/library/threading.html</a></p>
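A minimal threaded version (finite loop so it terminates) showing that the main program sees the appends:

```python
import threading
import time

my_array = []

def my_thread():
    # A thread shares memory with the main program, unlike a Process.
    for counter in range(3):
        my_array.append(counter)
        time.sleep(0.01)

t = threading.Thread(target=my_thread)
t.start()
t.join()  # wait for the thread to finish before reading the shared list
print(my_array)  # [0, 1, 2]
```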
| 2 | 2016-08-28T19:38:06Z | [
"python",
"arrays",
"multithreading",
"global",
"multiprocess"
] |
Create a new variable to represent the last result in a time series with pandas | 39,195,043 | <p>I have a df that looks something like this:</p>
<pre><code>A B C outcome time people_id
. . . 0 34 'ID_4'
. . . 1 23 'ID_2'
. . . 0 2 'ID_1'
. . . 1 85 'ID_4'
</code></pre>
<p>I am trying to create a new variable that represents the most recent result per ID, but am running into problems as I am not very familiar with pandas. My current attempt looks something like this, but I am repeatedly running into problems as I tinker. What is a better way to do this?</p>
<pre><code> def recent_train(x):
_df = train[(train.people_id == x.people_id.values[0]) & (train.time < x.time.values[0])]
min_time = _df.time.min()
avg = _df[_df.time == min_time].outcome.mean()
return avg
train['recent'] = train.apply(lambda x: recent_train(x), axis = 1)
</code></pre>
<p>I am using the mean because some of values might be mixed so I want to capture the percentage that are 1.</p>
| 0 | 2016-08-28T19:34:14Z | 39,229,090 | <p>This should do it if I'm understanding what you want correctly:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a': ['A', 'B', 'C', 'D', 'E'],
'outcome': [0, 1, 0, 1, 1],
'time': [34, 34, 2, 85, 34],
'people_id': ['ID_4', 'ID_2', 'ID_1', 'ID_4', 'ID_4']},
columns=['a', 'outcome', 'time', 'people_id'])
mean_outcomes_by_id_and_time = df.groupby(['people_id', 'time'])['outcome'].mean()
most_recent_mean_outcomes_by_id = mean_outcomes_by_id_and_time.groupby(level=[0]).nth(0)
print df
print mean_outcomes_by_id_and_time
print most_recent_mean_outcomes_by_id
</code></pre>
<p>output:</p>
<pre><code> a outcome time people_id
0 A 0 34 ID_4
1 B 1 34 ID_2
2 C 0 2 ID_1
3 D 1 85 ID_4
4 E 1 34 ID_4
people_id time
ID_1 2 0.0
ID_2 34 1.0
ID_4 34 0.5
85 1.0
Name: outcome, dtype: float64
people_id
ID_1 0.0
ID_2 1.0
ID_4 0.5
Name: outcome, dtype: float64
</code></pre>
<p>Steps:</p>
<ol>
<li>Get the mean <code>outcome</code> for each <code>people_id</code> and <code>time</code> (in the form of a multi-indexed series).</li>
<li>Group by <code>people_id</code> and then get the first row in each group, which corresponds to the row with the lowest <code>time</code> value for each <code>people_id</code> since <code>groupby()</code> automatically sorts.</li>
</ol>
<p>You could do it in one line instead of two if you want. I broke it into two for clarity.</p>
| 1 | 2016-08-30T13:37:56Z | [
"python",
"pandas"
] |
Use Jinja Variable to generate n elements | 39,195,065 | <p>I want to use Jinja variables to generate n options in a drop down. Here is an example:</p>
<pre><code> Session Select: <br>
{{ sessions }}
<select style="color:black">
{% for session in sessions %}
<li>{{ session }}</li>
{% endfor %}
</select> <br><br>
</code></pre>
<p>The value of sessions is:</p>
<p>['Session 1', 'Session 2', 'Session 3'] </p>
<p>Any thoughts?</p>
| -1 | 2016-08-28T19:36:48Z | 39,195,168 | <p>To generate items within a select box you use the <code><option></code> tag, not <code><li></code>.</p>
| 0 | 2016-08-28T19:49:36Z | [
"python",
"flask",
"jinja2"
] |
Use Jinja Variable to generate n elements | 39,195,065 | <p>I want to use Jinja variables to generate n options in a drop down. Here is an example:</p>
<pre><code> Session Select: <br>
{{ sessions }}
<select style="color:black">
{% for session in sessions %}
<li>{{ session }}</li>
{% endfor %}
</select> <br><br>
</code></pre>
<p>The value of sessions is:</p>
<p>['Session 1', 'Session 2', 'Session 3'] </p>
<p>Any thoughts?</p>
| -1 | 2016-08-28T19:36:48Z | 39,199,413 | <pre><code>Session Select: <br>
<select style="color:black">
{% for session in sessions %}
<option value="{{ session }}">{{ session }}</option>
{% endfor %}
</select> <br><br>
</code></pre>
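<p>A minimal standalone check of that loop with the jinja2 library (assuming <code>jinja2</code> is installed; the session names are just sample data):</p>

```python
from jinja2 import Template

template = Template(
    '<select style="color:black">'
    "{% for session in sessions %}"
    '<option value="{{ session }}">{{ session }}</option>'
    "{% endfor %}"
    "</select>"
)
html = template.render(sessions=["Session 1", "Session 2", "Session 3"])
# html contains one <option> element per session
```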
| 0 | 2016-08-29T06:00:30Z | [
"python",
"flask",
"jinja2"
] |
Use Jinja Variable to generate n elements | 39,195,065 | <p>I want to use Jinja variables to generate n options in a drop down. Here is an example:</p>
<pre><code> Session Select: <br>
{{ sessions }}
<select style="color:black">
{% for session in sessions %}
<li>{{ session }}</li>
{% endfor %}
</select> <br><br>
</code></pre>
<p>The value of sessions is:</p>
<p>['Session 1', 'Session 2', 'Session 3'] </p>
<p>Any thoughts?</p>
| -1 | 2016-08-28T19:36:48Z | 39,279,509 | <p>Assuming <code>no_sessions</code> is your <code>n</code> value... I would try something like this:</p>
<pre><code>Session Select: <br>
<select style="color:black">
{% for number in range(1, no_sessions + 1) %}
    <option>Session {{ number }}</option>
{% endfor %}
</select> <br><br>
</code></pre>
<p><a href="http://stackoverflow.com/questions/13668025/how-to-do-a-while-x-y-in-jinja2">(related to this question)</a></p>
| 0 | 2016-09-01T19:30:51Z | [
"python",
"flask",
"jinja2"
] |
How can I store raw input into a variable and then pass that variable into another script without requesting input again? | 39,195,077 | <p>Basically what I am trying to do is store raw input into a variable and then call that variable in another script. However, when I do this the second script asks for the input again. Is there a way to save the input in the variable without requesting the input in the second script?</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
def userInput():
token = raw_input('Enter your Spark Token: ')
member = raw_input('Enter your email:')
return token and member
</code></pre>
<p>Thank you!</p>
| 0 | 2016-08-28T19:38:26Z | 39,195,159 | <p>Not sure whether I've understood correctly your question but you could try this:</p>
<blockquote>
<p>main.py</p>
</blockquote>
<pre><code>from bar import user_input
def foo():
token, member = user_input()
print "Login user {0} with token {1}".format(member, token)
if __name__ == "__main__":
foo()
</code></pre>
<blockquote>
<p>bar.py</p>
</blockquote>
<pre><code>def user_input():
token = raw_input('Enter your Spark Token: ')
member = raw_input('Enter your email:')
return token, member
</code></pre>
<p>As you can see, in the file <code>main.py</code> you're calling your <code>bar.user_input</code> method once, so you'll be asked for member & token only once.</p>
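<p>One subtlety worth noting about the original snippet: <code>return token and member</code> does not return both values. In Python, <code>a and b</code> evaluates to <code>a</code> if <code>a</code> is falsy, otherwise to <code>b</code>, so returning a tuple (as above) is what you want:</p>

```python
token = "abc123"
member = "user@example.com"

both = token and member   # evaluates to member ("user@example.com"), not a pair
pair = (token, member)    # what the fixed code returns

empty = "" and member     # evaluates to "" because the first operand is falsy
```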
| 0 | 2016-08-28T19:47:43Z | [
"python"
] |
How to load multiple images in a numpy array ? | 39,195,113 | <p>How do I load the pixels of multiple images in a directory into a numpy array? I have loaded a single image into a numpy array, but cannot figure out how to load multiple images from a directory. Here is what I have done so far:</p>
<pre><code>image = Image.open('bn4.bmp')
nparray=np.array(image)
</code></pre>
<p>This loads a 32*32 matrix. I want to load 100 of the images into a numpy array, making a numpy array of size 100*32*32. How can I do that? I know that the structure would look something like this:</p>
<pre><code>for filename in listdir("BengaliBMPConvert"):
if filename.endswith(".bmp"):
-----------------
else:
continue
</code></pre>
<p>But I cannot figure out how to load the images into the numpy array.</p>
| 1 | 2016-08-28T19:43:06Z | 39,195,332 | <p>To get a list of BMP files from the directory <code>BengaliBMPConvert</code>, use:</p>
<pre><code>import glob
filelist = glob.glob('BengaliBMPConvert/*.bmp')
</code></pre>
<p>On the other hand, if you know the file names already, just put them in a sequence:</p>
<pre><code>filelist = 'file1.bmp', 'file2.bmp', 'file3.bmp'
</code></pre>
<p>To combine all the images into one array:</p>
<pre><code>x = np.array([np.array(Image.open(fname)) for fname in filelist])
</code></pre>
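<p>As a quick shape check with synthetic data (assuming numpy is installed; no image files needed), 100 images of 32*32 stack into a 100*32*32 array:</p>

```python
import numpy as np

# stand-ins for 100 decoded 32x32 images
images = [np.zeros((32, 32), dtype=np.uint8) for _ in range(100)]
x = np.array(images)  # equivalently: np.stack(images)
# x.shape == (100, 32, 32)
```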
| 0 | 2016-08-28T20:09:08Z | [
"python",
"image",
"numpy",
"image-processing"
] |
Python/Requests: Correct login returns 401 unauthorized | 39,195,170 | <p>I have a python application that logs in to a remote host via basic HTTP authentication.
Authentication is as follows: </p>
<pre><code>def make_authenticated_request(host, username, password):
url = host
r = requests.get(url, auth=(username, password))
r.raise_for_status()
return r
test = Looter.make_authenticated_request("http://" + host + "/status/status_deviceinfo.htm", user, password)
</code></pre>
<p>This error is printed: </p>
<p><code>401 Client Error: Unauthorized for url</code> </p>
<p>Strange thing is that this doesn't always happen. It randomly fails/succeeds, for the same host with the same credentials. </p>
<p>Login is however correct, and works flawlessly in my browser. I'm no python ninja. Any clues ?</p>
| 0 | 2016-08-28T19:49:39Z | 39,195,305 | <p>I might rewrite it to look something like this; change it around however you need. The point here is that I'm using a session and passing it around to make further requests; you can reuse that session object for other requests. If you are making lots of requests and authenticating each time, as your code suggests, the site could be locking you out, which is why a session works better: you don't have to keep re-authenticating.</p>
<pre><code>import requests
class Looter(object):
def __init__(self):
self.s = None
def create_session(self, url, username, password):
# create a Session
s = requests.Session()
# auth in
res = s.get(url, auth=(username, password))
if res.status_code == 200:
self.s = s
    def make_request(self, url):
        # reuse the authenticated session for subsequent requests
        return self.s.get(url)
#do something with the requests
l = Looter()
l.create_session(url, username, password)
# check whether the session was created (i.e. auth returned 200)
print l.s is not None
</code></pre>
| 1 | 2016-08-28T20:05:37Z | [
"python",
"python-requests"
] |
How can I extract data using the 'groupby' | 39,195,179 | <pre><code>import pandas as pd
df= pd.DataFrame({'date':[1,2,3,4,5,1,2,3,4,5,1,2,3,4,5],
'name':list('aaaaabbbbbccccc'),
'v1':[10,20,30,40,50,10,20,30,40,50,10,20,30,40,50],
'v2':[10,20,30,40,50,10,20,30,40,50,10,20,30,40,50],
'v3':[10,20,30,40,50,10,20,30,40,50,10,20,30,40,50]})
a= list(set(list(df.name)))
plus=[]
for i in a:
sep=df[df.name==i]
sep2=sep[(sep.v1>=10)&(sep.v2>=20)&(sep.v3<=40)]
plus.append(sep2)
result=pd.concat(plus)
print(result)
</code></pre>
<p>I know this is not a good example anyway,</p>
<p>I would like to handle separately by name.</p>
<p>It takes too long in a big data</p>
<p>How can I extract data using the 'groupby'?</p>
<p>Even better if the function is used(def..apply...)</p>
<p><code>df.groupby(['name'])(df['v1']>20)</code>...???? It cannot work...</p>
| 1 | 2016-08-28T19:50:26Z | 39,195,661 | <p>looking at your desired data set i don't think you need to <code>groupby</code> your <code>df</code>, you can simply filter it:</p>
<pre><code>In [112]: df.query('v1 >= 10 and v2 >= 20 and v3 <= 40')
Out[112]:
date name v1 v2 v3
1 2 a 20 20 20
2 3 a 30 30 30
3 4 a 40 40 40
6 2 b 20 20 20
7 3 b 30 30 30
8 4 b 40 40 40
11 2 c 20 20 20
12 3 c 30 30 30
13 4 c 40 40 40
</code></pre>
| 0 | 2016-08-28T20:50:53Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
] |
peewee custom field - define allowed values | 39,195,189 | <p>Two cases:</p>
<p>1.) I'd like to define an attribute (val) that can take the integers 0, 1, or 2 only.</p>
<pre><code>class Trinary(Model):
"""val should accept the values 0, 1 or 2 only"""
val = IntegerField()
</code></pre>
<p>2.) I'd like to define an attribute (val) that can take specific strings only, for example ["strawberry", "peach", "apple"]</p>
<pre><code>class Fruit(Model):
"""val should accept the values "strawberry", "peach" or "apple" only """
val = ???
</code></pre>
<p>Is is possible to implement such a restriction using peewee?</p>
<p>Thanks for your help!</p>
<p>Muff</p>
| 1 | 2016-08-28T19:51:22Z | 39,195,355 | <p>The objects <code>IntegerField</code> etc. are classes, and can be subclassed (<a href="http://docs.peewee-orm.com/en/latest/peewee/models.html#custom-fields" rel="nofollow">documentation</a>):</p>
<p>The classes should define <code>db_value</code> to convert from Python to the database,
and <code>python_value</code> for the other way round:</p>
<pre><code>class TrinaryField(IntegerField):
    def db_value(self, value):
        if value not in [0, 1, 2]:
            raise TypeError("Non-trinary digit")
        return super().db_value(value)  # delegate the conversion to IntegerField
</code></pre>
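<p>For the second case (restricting a <code>CharField</code> to specific strings), the same <code>db_value</code> pattern applies. The validation itself is ordinary Python, shown standalone here so it can be checked without peewee; in a real field subclass this check would sit inside <code>db_value</code> before calling <code>super()</code>:</p>

```python
ALLOWED_FRUITS = {"strawberry", "peach", "apple"}

def validate_fruit(value):
    # the body you would put in a hypothetical FruitField.db_value
    if value not in ALLOWED_FRUITS:
        raise TypeError("Not an allowed fruit: %r" % value)
    return value
```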
| 1 | 2016-08-28T20:11:53Z | [
"python",
"python-3.x",
"peewee"
] |
Translating a Linear Regression from Matlab to Python | 39,195,204 | <p>I tried to translate a piece of code from Matlab to Python and I'm running into some errors:</p>
<p>Matlab:</p>
<pre><code>function [beta] = linear_regression_train(traindata)
y = traindata(:,1); %output
ind2 = find(y == 2);
ind3 = find(y == 3);
y(ind2) = -1;
y(ind3) = 1;
X = traindata(:,2:257); %X matrix,with size of 1389x256
beta = inv(X'*X)*X'*y;
</code></pre>
<p>Python:</p>
<pre><code>def linear_regression_train(traindata):
y = traindata[:,0] # This is the output
ind2 = (labels==2).nonzero()
ind3 = (labels==3).nonzero()
y[ind2] = -1
y[ind3] = 1
X = traindata[ : , 1:256]
X_T = numpy.transpose(X)
beta = inv(X_T*X)*X_T*y
return beta
</code></pre>
<p>I am receiving an error: operands could not be broadcast together with shapes (257,0,1389) (1389,0,257) on the line where beta is calculated.</p>
<p>Any help is appreciated!</p>
<p>Thanks!</p>
| 1 | 2016-08-28T19:53:15Z | 39,197,068 | <p>The problem is that you are working with numpy arrays, not matrices as in MATLAB. Matrices, by default, do matrix mathematical operations. So <code>X*Y</code> does a matrix multiplication of <code>X</code> and <code>Y</code>. With arrays, however, the default is to use element-by-element operations. So <code>X*Y</code> multiplies each corresponding element of <code>X</code> and <code>Y</code>. This is the equivalent of MATLAB's <code>.*</code> operation.</p>
<p>But just like how MATLAB's matrices can do element-by-element operations, Numpy's arrays can do matrix multiplication. So what you need to do is use numpy's matrix multiplication instead of its element-by-element multiplication. For Python 3.5 or higher (which is the version you should be using for this sort of work), that is just the <code>@</code> operator. So your line becomes:</p>
<pre><code>beta = inv(X_T @ X) @ X_T @ y
</code></pre>
<p>Or, better yet, you can use the simpler <code>.T</code> transpose, which is the same as <code>np.transpose</code> but much more concise (you can get rid of the `np.transpose line entirely):</p>
<pre><code>beta = inv(X.T @ X) @ X.T @ y
</code></pre>
<p>For Python 3.4 or earlier, you will need to use <code>np.dot</code> since those versions of python don't have the <code>@</code> matrix multiplication operator:</p>
<pre><code>beta = np.dot(np.dot(inv(np.dot(X.T, X)), X.T), y)
</code></pre>
<p>Numpy has a matrix object that uses matrix operations by default like the MATLAB matrix. <em>Do not use it!</em> It is slow, poorly-supported, and almost never what you really want. The Python community has standardized around arrays, so use those.</p>
<p>There may also be some issues with the dimensions of <code>traindata</code>. For this to work properly, <code>traindata.ndim</code> should be equal to <code>3</code>. In order for <code>y</code> and <code>X</code> to be 2D, <code>traindata</code> should be <code>3D</code>.</p>
<p>This could be an issue if <code>traindata</code> is 2D and you want <code>y</code> to be MATLAB-style "vector" (what MATLAB calls "vectors" aren't really vectors). In numpy, using a single index like <code>traindata[:, 0]</code> reduces the number of dimensions, while taking a slice like <code>traindata[:, :1]</code> doesn't. So to keep <code>y</code> 2D when <code>traindata</code> is 2D, just do a length-1 slice, <code>traindata[:, :1]</code>. This is exactly the same values, but this keeps the same number of dimensions as <code>traindata</code>.</p>
<p><strong>Notes</strong>: Your code can be significantly simplified using logical indexing:</p>
<pre><code>def linear_regression_train(traindata):
    y = traindata[:, 0]  # This is the output
    y[y == 2] = -1
    y[y == 3] = 1
    X = traindata[:, 1:257]
    return inv(X.T @ X) @ X.T @ y
</code></pre>
<p>Also, your slice is wrong when defining <code>X</code>. Python slicing excludes the last value, so to get a 256 long slice you need to do <code>1:257</code>, as I did above.</p>
<p>Finally, please keep in mind that modifications to arrays inside functions carry over outside the functions, and indexing does not make a copy. So your changes to <code>y</code> (setting some values to <code>1</code> and others to <code>-1</code>), will affect <code>traindata</code> outside of your function. If you want to avoid that, you need to make a copy before you make your changes:</p>
<pre><code>y = traindata[:, 0].copy()
</code></pre>
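<p>A small self-contained illustration of the element-wise vs. matrix product distinction described above (assuming numpy and Python 3.5+ for <code>@</code>), with values chosen so the two products differ:</p>

```python
import numpy as np

X = np.array([[1., 2.],
              [3., 4.]])

elementwise = X * X  # MATLAB's .* : each entry multiplied by itself
matmul = X @ X       # MATLAB's *  : true matrix product

# elementwise[0, 1] == 4.0  (2*2)
# matmul[0, 1] == 10.0      (1*2 + 2*4)
```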
| 2 | 2016-08-29T00:33:33Z | [
"python",
"matlab"
] |
How to get Model Choice Field name instead of id in template using django rest framework | 39,195,262 | <p>I am getting id of choice field in the template.But I want name of the choice field to display. Please review the below files and let me know how to get the name.</p>
<p>here is my models.py</p>
<p>===============================================================</p>
<pre><code>class Posted(models.Model):
name = models.CharField(_('Posted In'),max_length=255, unique=True)
class Tags(models.Model):
name = models.CharField(_('Tag Name'),max_length=255, unique=True)
class Blogs(models.Model):
author = models.ForeignKey(CustomUser)
title=models.CharField(max_length=100)
posted=models.ForeignKey(Posted, blank=True)
tags= models.ManyToManyField(Tags, blank=True)
content = models.TextField()
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
</code></pre>
<p>serializers.py</p>
<hr>
<pre><code>class BlogsSerializer(serializers.ModelSerializer):
author = AccountSerializer(read_only=True,required=False)
class Meta:
model = Blogs
fields = ('id','author','title','tags','posted','content','created_at','updated_at')
read_only_fields=('id','created_at','updated_at')
class TagsSerializer(serializers.ModelSerializer):
name = serializers.SerializerMethodField()
class Meta:
model = Tags
fields = ('id','name')
read_only_fields=('id','name')
class PostedSerializer(serializers.ModelSerializer):
name = serializers.SerializerMethodField()
class Meta:
model = Posted
fields = ('id','name')
read_only_fields=('id','name')
</code></pre>
<p>views.py</p>
<hr>
<pre><code>class BlogViewSet(viewsets.ModelViewSet):
queryset=Blogs.objects.order_by('-created_at')
serializer_class= BlogsSerializer
def get_permissions(self):
if self.request.method in permissions.SAFE_METHODS:
return (permissions.AllowAny(),)
return (permissions.IsAuthenticated(),IsAuthorOfBlog(),)
def perform_create(self,serializer):
serializer.save(author=self.request.user)
return super(BlogViewSet,self).perform_create(serializer)
class TagsViewSet(viewsets.ModelViewSet):
queryset=Tags.objects.all
serializer_class= TagsSerializer
class PostedViewSet(viewsets.ModelViewSet):
queryset=Posted.objects.all
serializer_class= PostedSerializer
</code></pre>
<h1>Template</h1>
<h1><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><div class="row">
<div class="col-sm-12">
<div class="well">
<div class="blog">
<div class="blog__meta">
<a href="#">
+{{ blog.author.first_name }}
</a>
</div>
<div class="blog__content">
<p>{{ blog.title }}</p>
<p>{{ blog.posted}}</p>
<p>{{ blog.tags}}</p>
<p>{{ blog.content }}</p>
</div>
</div>
</div>
</div>
</div></code></pre>
</div>
</div>
</h1>
<p>Thanks in advance</p>
| 0 | 2016-08-28T20:00:18Z | 39,195,806 | <p>What do you mean by choices? I see no <code>choices</code> in the field kwargs. You can set a choice field, for example, with
<code>chairs = models.IntegerField(max_length=255, choices=CHOICES)</code>. And where exactly do you get ids instead of names?</p>
| 0 | 2016-08-28T21:09:38Z | [
"python",
"django",
"django-rest-framework"
] |
How to get Model Choice Field name instead of id in template using django rest framework | 39,195,262 | <p>I am getting id of choice field in the template.But I want name of the choice field to display. Please review the below files and let me know how to get the name.</p>
<p>here is my models.py</p>
<p>===============================================================</p>
<pre><code>class Posted(models.Model):
name = models.CharField(_('Posted In'),max_length=255, unique=True)
class Tags(models.Model):
name = models.CharField(_('Tag Name'),max_length=255, unique=True)
class Blogs(models.Model):
author = models.ForeignKey(CustomUser)
title=models.CharField(max_length=100)
posted=models.ForeignKey(Posted, blank=True)
tags= models.ManyToManyField(Tags, blank=True)
content = models.TextField()
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
</code></pre>
<p>serializers.py</p>
<hr>
<pre><code>class BlogsSerializer(serializers.ModelSerializer):
author = AccountSerializer(read_only=True,required=False)
class Meta:
model = Blogs
fields = ('id','author','title','tags','posted','content','created_at','updated_at')
read_only_fields=('id','created_at','updated_at')
class TagsSerializer(serializers.ModelSerializer):
name = serializers.SerializerMethodField()
class Meta:
model = Tags
fields = ('id','name')
read_only_fields=('id','name')
class PostedSerializer(serializers.ModelSerializer):
name = serializers.SerializerMethodField()
class Meta:
model = Posted
fields = ('id','name')
read_only_fields=('id','name')
</code></pre>
<p>views.py</p>
<hr>
<pre><code>class BlogViewSet(viewsets.ModelViewSet):
queryset=Blogs.objects.order_by('-created_at')
serializer_class= BlogsSerializer
def get_permissions(self):
if self.request.method in permissions.SAFE_METHODS:
return (permissions.AllowAny(),)
return (permissions.IsAuthenticated(),IsAuthorOfBlog(),)
def perform_create(self,serializer):
serializer.save(author=self.request.user)
return super(BlogViewSet,self).perform_create(serializer)
class TagsViewSet(viewsets.ModelViewSet):
queryset=Tags.objects.all
serializer_class= TagsSerializer
class PostedViewSet(viewsets.ModelViewSet):
queryset=Posted.objects.all
serializer_class= PostedSerializer
</code></pre>
<h1>Template</h1>
<h1><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><div class="row">
<div class="col-sm-12">
<div class="well">
<div class="blog">
<div class="blog__meta">
<a href="#">
+{{ blog.author.first_name }}
</a>
</div>
<div class="blog__content">
<p>{{ blog.title }}</p>
<p>{{ blog.posted}}</p>
<p>{{ blog.tags}}</p>
<p>{{ blog.content }}</p>
</div>
</div>
</div>
</div>
</div></code></pre>
</div>
</div>
</h1>
<p>Thanks in advance</p>
| 0 | 2016-08-28T20:00:18Z | 39,208,554 | <p>You can use <a href="http://www.django-rest-framework.org/api-guide/relations/#slugrelatedfield" rel="nofollow">SlugRelatedField</a> option to return a specific field as a relation object. For example,</p>
<pre><code>class BlogsSerializer(serializers.ModelSerializer):
author = AccountSerializer(read_only=True,required=False)
tags = serializers.SlugRelatedField(
many=True,
read_only=True,
slug_field='name'
)
class Meta:
model = Blogs
fields = ('id','author','title','tags','posted','content','created_at','updated_at')
read_only_fields=('id','created_at','updated_at')
</code></pre>
| 0 | 2016-08-29T14:20:18Z | [
"python",
"django",
"django-rest-framework"
] |
How to get Model Choice Field name instead of id in template using django rest framework | 39,195,262 | <p>I am getting id of choice field in the template.But I want name of the choice field to display. Please review the below files and let me know how to get the name.</p>
<p>here is my models.py</p>
<p>===============================================================</p>
<pre><code>class Posted(models.Model):
name = models.CharField(_('Posted In'),max_length=255, unique=True)
class Tags(models.Model):
name = models.CharField(_('Tag Name'),max_length=255, unique=True)
class Blogs(models.Model):
author = models.ForeignKey(CustomUser)
title=models.CharField(max_length=100)
posted=models.ForeignKey(Posted, blank=True)
tags= models.ManyToManyField(Tags, blank=True)
content = models.TextField()
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
</code></pre>
<p>serializers.py</p>
<hr>
<pre><code>class BlogsSerializer(serializers.ModelSerializer):
author = AccountSerializer(read_only=True,required=False)
class Meta:
model = Blogs
fields = ('id','author','title','tags','posted','content','created_at','updated_at')
read_only_fields=('id','created_at','updated_at')
class TagsSerializer(serializers.ModelSerializer):
name = serializers.SerializerMethodField()
class Meta:
model = Tags
fields = ('id','name')
read_only_fields=('id','name')
class PostedSerializer(serializers.ModelSerializer):
name = serializers.SerializerMethodField()
class Meta:
model = Posted
fields = ('id','name')
read_only_fields=('id','name')
</code></pre>
<p>views.py</p>
<hr>
<pre><code>class BlogViewSet(viewsets.ModelViewSet):
queryset=Blogs.objects.order_by('-created_at')
serializer_class= BlogsSerializer
def get_permissions(self):
if self.request.method in permissions.SAFE_METHODS:
return (permissions.AllowAny(),)
return (permissions.IsAuthenticated(),IsAuthorOfBlog(),)
def perform_create(self,serializer):
serializer.save(author=self.request.user)
return super(BlogViewSet,self).perform_create(serializer)
class TagsViewSet(viewsets.ModelViewSet):
queryset=Tags.objects.all
serializer_class= TagsSerializer
class PostedViewSet(viewsets.ModelViewSet):
queryset=Posted.objects.all
serializer_class= PostedSerializer
</code></pre>
<h1>Template</h1>
<h1><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><div class="row">
<div class="col-sm-12">
<div class="well">
<div class="blog">
<div class="blog__meta">
<a href="#">
+{{ blog.author.first_name }}
</a>
</div>
<div class="blog__content">
<p>{{ blog.title }}</p>
<p>{{ blog.posted}}</p>
<p>{{ blog.tags}}</p>
<p>{{ blog.content }}</p>
</div>
</div>
</div>
</div>
</div></code></pre>
</div>
</div>
</h1>
<p>Thanks in advance</p>
| 0 | 2016-08-28T20:00:18Z | 39,210,089 | <p>thanks @ Karol Sztajerwald.
Yes BlogsSerializer has to be made like that.</p>
<p>Finally Blogs Serializer.py is </p>
<pre><code>class TagsSerializer(serializers.ModelSerializer):
class Meta:
model = Tags
fields = ('id','name')
read_only_fields=('id','name')
class PostedSerializer(serializers.ModelSerializer):
class Meta:
model = Posted
fields = ('id','name')
read_only_fields=('id','name')
class BlogsSerializer(serializers.ModelSerializer):
author = AccountSerializer(read_only=True,required=False)
tags=TagsSerializer(many=True)
posted=PostedSerializer()
class Meta:
model = Blogs
fields = ('id','author','title','tags','posted','content','created_at','updated_at')
read_only_fields=('id','created_at','updated_at')
</code></pre>
<p>template is</p>
<pre><code><p>{{ blog.title }}</p>
<p>{{ blog.posted.name}}</p>
<p>{{ blog.tags.name}}</p>
<p>{{ blog.content }}</p>
</code></pre>
| 0 | 2016-08-29T15:37:56Z | [
"python",
"django",
"django-rest-framework"
] |
django rest framework: change in ListAPIView return format? | 39,195,268 | <p>Just in the process of upgrading django rest framework from 3.2.3 to 3.4.6.<br>
I also upgraded django from 1.8.6 to 1.10</p>
<p>It looks like return values from ListAPIView have changed format? i.e. they used to be objects like</p>
<pre><code>{
"count": 0,
"next": null,
"previous": null,
"results": []
}
</code></pre>
<p>but now are just straight arrays?</p>
<pre><code>[]
</code></pre>
<p>I'm looking all over the place trying to find documentation for this change but can't find any. Is anyone able to help point me in the right direction?</p>
| 0 | 2016-08-28T20:00:52Z | 39,195,876 | <p>Apparently now in django rest framework you must explictly set the pagination class or at least a global default.</p>
<p><a href="http://www.django-rest-framework.org/api-guide/pagination/#setting-the-pagination-style" rel="nofollow">http://www.django-rest-framework.org/api-guide/pagination/#setting-the-pagination-style</a></p>
| 0 | 2016-08-28T21:20:06Z | [
"python",
"django",
"django-rest-framework"
] |
Find the largest subset of a set of numbers such that the sum of any 2 numbers is not divisible by a given number | 39,195,291 | <p>I'm trying to solve the Hackerrank problem <a href="https://www.hackerrank.com/challenges/non-divisible-subset?h_r=next-challenge&h_v=zen" rel="nofollow">Non-Divisible Subset</a> stated below:</p>
<p><a href="http://i.stack.imgur.com/LYyCy.png" rel="nofollow"><img src="http://i.stack.imgur.com/LYyCy.png" alt="enter image description here"></a></p>
<p>I attempted the following solution (which works for the sample test case):</p>
<pre><code># The lines below are for Hackerrank submissions
# n, k = map(int, raw_input().strip().split(' '))
# a = map(int, raw_input().strip().split(' '))
n = 4
k = 3
a = [1, 7, 2, 4]
while True:
all_pairs = [(a[i],a[j]) for i in range(len(a)) for j in range(i+1,len(a))]
tested_pairs = {pair: (pair[0] + pair[1]) % k != 0 for pair in all_pairs}
disqualified_pairs = {key: value for key, value in tested_pairs.iteritems() if not value}.keys()
if not disqualified_pairs:
break
occurrences = list(sum(disqualified_pairs, ()))
counts = map(lambda x: occurrences.count(x), a)
index_remove = counts.index(max(counts))
a.remove(index_remove)
print len(a)
</code></pre>
<p>What I'm trying to do is identify the 'offending' pairs and removing the element of <code>a</code> which occurs most frequently, until there are no 'offending' pairs left.</p>
<p>However, I'm getting "RunTime Error" for most of the test cases:</p>
<p><a href="http://i.stack.imgur.com/CnGiy.png" rel="nofollow"><img src="http://i.stack.imgur.com/CnGiy.png" alt="enter image description here"></a></p>
<p>Presumably the algorithm above works in this simple case where only one number (the number 2) needs to be removed, but fails in more complicated test cases. Can anyone see what is wrong with it?</p>
<p><strong>UPDATE</strong></p>
<p>Following poke's suggestion to test <code>k = 2</code> and <code>a = [1, 2, 3]</code>, I made the following modifications:</p>
<pre><code>n = 4
k = 2
a = [1, 2, 3]
while True:
all_pairs = [(a[i],a[j]) for i in range(len(a)) for j in range(i+1,len(a))]
disqualified_pairs = [pair for pair in all_pairs if (pair[0] + pair[1]) % k == 0]
if not disqualified_pairs:
break
offending_numbers = sum(disqualified_pairs, ()) # 'Flatten' the disqualified pairs into a single list
counts = {el: offending_numbers.count(el) for el in set(offending_numbers)} # Count occurrences of each offending number
number_to_remove = max(counts, key=counts.__getitem__)
a.remove(number_to_remove)
print len(a)
</code></pre>
<p>The resulting <code>a</code> is <code>[2, 3]</code> and contains two elements as expected. I've also checked that it still works for the original example. However, I am still getting a "Segmentation Fault" on some of the test cases:</p>
<p><a href="http://i.stack.imgur.com/lJGjb.png" rel="nofollow"><img src="http://i.stack.imgur.com/lJGjb.png" alt="enter image description here"></a></p>
<p>According to <a href="https://www.hackerrank.com/challenges/pairs/forum/comments/9154" rel="nofollow">https://www.hackerrank.com/challenges/pairs/forum/comments/9154</a>, segmentation faults typically occur because of invalid memory access (array indices which don't exist, etc.). I still haven't managed to find any other test cases, though, where the algorithm fails. Any ideas?</p>
| 4 | 2016-08-28T20:03:40Z | 39,218,562 | <p><strong>Instead of generating all pairs, this could be done by counting modulus.</strong></p>
<p>Time Complexity: O(n + k)</p>
<p>Space Complexity: O(k) or O(1), because k is 100 at max, O(k) => O(1)</p>
<hr>
<p><strong>Basic idea</strong></p>
<blockquote>
<p>(a + b) % k = ((a % k) + (b % k)) % k</p>
</blockquote>
<p>Since (a % k) is in range [0, k-1],</p>
<blockquote>
<p>(a % k) + (b % k) is in range [0, 2k-2]</p>
</blockquote>
<p>In addition,</p>
<blockquote>
<p>(a + b) % k = 0 when </p>
<ol>
<li><p>(a % k) = 0 and (b % k) = 0 <strong>OR</strong></p></li>
<li><p>(a % k) + (b % k) = k</p></li>
</ol>
</blockquote>
<p><strong>Main idea</strong></p>
<ol>
<li>Based on condition 2, when you choose any value with modulus i, you can also choose values of any other modulus, except modulus k-i.</li>
<li>In most cases, there is no conflict in choosing more than one value with modulus i.</li>
<li>Based on condition 1, you can choose at most 1 value from modulus 0</li>
<li>When k is even, k/2 + k/2 = k. You can choose at most 1 value from modulus k/2 when k is even</li>
</ol>
<hr>
<p>Base on the above information, the solution could be</p>
<ol>
<li>If n<2, return n</li>
<li>Create an array of size k with all initial value 0, denote as Arr, to store modulus count</li>
<li>Loop on array a with index i from 0 to n-1, add 1 to Arr[ a[i]%k ]</li>
<li>Initialize a counter with initial value 0</li>
<li>Loop on array Arr with index i from 1 to k-(k/2)-1, add Max(Arr[i], Arr[k-i]) to counter</li>
<li>If Arr[0] > 0, add 1 to counter</li>
<li>If k%2 = 0 and Arr[k/2] > 0, add 1 to counter</li>
<li>return counter</li>
</ol>
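<p>The steps above can be sketched in Python like this (a sketch of the counting approach, not a reference solution; the variable names are my own, and the n&lt;2 guard of step 1 falls out of the counts naturally):</p>

```python
def non_divisible_subset(k, a):
    # steps 2-3: count how many inputs fall in each residue class mod k
    counts = [0] * k
    for x in a:
        counts[x % k] += 1

    # step 6: at most one value whose residue is 0
    result = min(counts[0], 1)
    # step 7: when k is even, at most one value with residue k/2
    if k % 2 == 0:
        result += min(counts[k // 2], 1)
    # step 5: for each complementary pair (i, k-i), keep the larger class
    for i in range(1, (k + 1) // 2):
        result += max(counts[i], counts[k - i])
    return result

# non_divisible_subset(3, [1, 7, 2, 4]) == 3  (the sample case)
```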
| 2 | 2016-08-30T04:00:54Z | [
"python",
"algorithm"
] |
Find the largest subset of a set of numbers such that the sum of any 2 numbers is not divisible by a given number | 39,195,291 | <p>I'm trying to solve the Hackerrank problem <a href="https://www.hackerrank.com/challenges/non-divisible-subset?h_r=next-challenge&h_v=zen" rel="nofollow">Non-Divisible Subset</a> stated below:</p>
<p><a href="http://i.stack.imgur.com/LYyCy.png" rel="nofollow"><img src="http://i.stack.imgur.com/LYyCy.png" alt="enter image description here"></a></p>
<p>I attempted the following solution (which works for the sample test case):</p>
<pre><code># The lines below are for Hackerrank submissions
# n, k = map(int, raw_input().strip().split(' '))
# a = map(int, raw_input().strip().split(' '))
n = 4
k = 3
a = [1, 7, 2, 4]
while True:
all_pairs = [(a[i],a[j]) for i in range(len(a)) for j in range(i+1,len(a))]
tested_pairs = {pair: (pair[0] + pair[1]) % k != 0 for pair in all_pairs}
disqualified_pairs = {key: value for key, value in tested_pairs.iteritems() if not value}.keys()
if not disqualified_pairs:
break
occurrences = list(sum(disqualified_pairs, ()))
counts = map(lambda x: occurrences.count(x), a)
index_remove = counts.index(max(counts))
a.remove(index_remove)
print len(a)
</code></pre>
<p>What I'm trying to do is identify the 'offending' pairs and removing the element of <code>a</code> which occurs most frequently, until there are no 'offending' pairs left.</p>
<p>However, I'm getting "RunTime Error" for most of the test cases:</p>
<p><a href="http://i.stack.imgur.com/CnGiy.png" rel="nofollow"><img src="http://i.stack.imgur.com/CnGiy.png" alt="enter image description here"></a></p>
<p>Presumably the algorithm above works in this simple case where only one number (the number 2) needs to be removed, but fails in more complicated test cases. Can anyone see what is wrong with it?</p>
<p><strong>UPDATE</strong></p>
<p>Following poke's suggestion to test <code>k = 2</code> and <code>a = [1, 2, 3]</code>, I made the following modifications:</p>
<pre><code>n = 4
k = 2
a = [1, 2, 3]
while True:
all_pairs = [(a[i],a[j]) for i in range(len(a)) for j in range(i+1,len(a))]
disqualified_pairs = [pair for pair in all_pairs if (pair[0] + pair[1]) % k == 0]
if not disqualified_pairs:
break
offending_numbers = sum(disqualified_pairs, ()) # 'Flatten' the disqualified pairs into a single list
counts = {el: offending_numbers.count(el) for el in set(offending_numbers)} # Count occurrences of each offending number
number_to_remove = max(counts, key=counts.__getitem__)
a.remove(number_to_remove)
print len(a)
</code></pre>
<p>The resulting <code>a</code> is <code>[2, 3]</code> and contains two elements as expected. I've also checked that it still works for the original example. However, I am still getting a "Segmentation Fault" on some of the test cases:</p>
<p><a href="http://i.stack.imgur.com/lJGjb.png" rel="nofollow"><img src="http://i.stack.imgur.com/lJGjb.png" alt="enter image description here"></a></p>
<p>According to <a href="https://www.hackerrank.com/challenges/pairs/forum/comments/9154" rel="nofollow">https://www.hackerrank.com/challenges/pairs/forum/comments/9154</a>, segmentation faults typically occur because of invalid memory access (array indices which don't exist, etc.). I still haven't managed to find any other test cases, though, where the algorithm fails. Any ideas?</p>
| 4 | 2016-08-28T20:03:40Z | 39,218,953 | <p>The method I used was:</p>
<pre><code>1. find power set of given list of integers.
2. sort power set by subset size.
3. iterate down the sorted power set and print if subset meets problem's conditions.
</code></pre>
<p>In java:</p>
<pre class="lang-java prettyprint-override"><code>import java.util.*;
public class f implements Comparator<List<?>> {
@Override
public int compare(List<?> o1, List<?> o2) {
return Integer.valueOf(o1.size()).compareTo(o2.size());
}
static ArrayList<ArrayList<Integer>> powerSet = new ArrayList<>();
// get power set of arr
static void g(int arr[],int[] numbers,int i){
if(i==arr.length){
ArrayList<Integer> tmp = new ArrayList<>();
for(int j = 0;j<arr.length;j++){
if(arr[j]==1) tmp.add(numbers[j]);
}
powerSet.add(tmp);
return;
}
arr[i] = 1;
g(arr,numbers,i+1);
arr[i] = 0;
g(arr,numbers,i+1);
}
static void h(int[] a){
int[] arr=new int[a.length];
for(int j =0;j<arr.length;j++){
arr[j]=0;
}
g(arr,a,0);
}
// check whether the sum of any numbers in subset are not evenly divisible by k
static boolean condition(ArrayList<Integer> set,int k){
for(int i = 0;i<set.size();i++){
for(int j = i+1;j<set.size();j++){
if((set.get(i)+set.get(j))%k==0){
return false;
}
}
}
return true;
}
public static void main(String[] args) {
Scanner in = new Scanner(System.in);
int n = in.nextInt();
int k = in.nextInt();
int[] a = new int[n];
for (int i=0;i<n;i++){
a[i]=in.nextInt();
}
h(a);
Collections.sort(powerSet, new f());
for(int i=powerSet.size()-1;i>0;i--){
if(condition(powerSet.get(i),k)){
System.out.println(powerSet.get(i).size());
break;
}
}
}
}
</code></pre>
<p>Results:
<a href="http://i.stack.imgur.com/lg0hF.png" rel="nofollow"><img src="http://i.stack.imgur.com/lg0hF.png" alt="submission"></a>
Test case #9 error was the result of StackOverflowError:</p>
<p><a href="http://i.stack.imgur.com/RNCWD.png" rel="nofollow"><img src="http://i.stack.imgur.com/RNCWD.png" alt="enter image description here"></a></p>
<p>Not really familiar with hackerrank errors but maybe your error is similar.</p>
| 1 | 2016-08-30T04:50:12Z | [
"python",
"algorithm"
] |
TypeError: cannot determine truth value of Relational when using sympy.solver | 39,195,378 | <p>I want to numerically solve the equation </p>
<blockquote>
<p>beta.ppf(x,a,b)-c=0</p>
</blockquote>
<p>where a,b,c are known constants. When I tried</p>
<pre><code>from sympy.solvers import solve
from sympy import Symbol
from scipy.stats import beta
x=Symbol('x')
solve(beta.ppf(x,a,b)-c,x)
</code></pre>
<p>It returned me</p>
<blockquote>
<p>TypeError: cannot determine truth value of Relational</p>
</blockquote>
<p>How can I fix it?</p>
| 0 | 2016-08-28T20:14:16Z | 39,195,756 | <p><a href="http://docs.scipy.org/doc/scipy/reference/optimize.html#root-finding" rel="nofollow">Scipy.optimize</a> (check section: "Root finding") provides numerous functions for numerically solving equations. </p>
<p>For the following example, I will use the <code>newton</code> function (the other available solvers might be more appropriate for your problem - you should also check them out). I have used arbitrary numerical values for <code>a</code>, <code>b</code>, and <code>c</code>. </p>
<pre><code>from scipy.stats import beta
from scipy.optimize import newton
a = 1
b = 2
c = 0.4
def f(x, a, b, c):
return beta.ppf(x, a, b) - c
newton(f, x0 = 0.2, args = (a,b,c))
</code></pre>
<blockquote>
<p><code>0.6399999999999999</code></p>
</blockquote>
| 1 | 2016-08-28T21:03:20Z | [
"python",
"scipy"
] |
Importing Cookies with Mechanize | 39,195,395 | <p>(Using Python 2.7, with Mechanize)</p>
<p>Let's say I have a cookie on Twitter, named <code>auth_token</code> and it's Value is: <code>ABC123</code>.</p>
<p><a href="http://i.stack.imgur.com/1GGXW.png" rel="nofollow"><img src="http://i.stack.imgur.com/1GGXW.png" alt="https://i.gyazo.com/de568907c939617c14a874696664aeeb.png"></a></p>
<p>How do tell <code>Mechanize</code> to import this Cookie? I've heard about <code>Cookielib</code> but I am not sure how to use it. I looked it up, but I've no clue how to set this up with <code>Mechanize</code>.</p>
<p>If someone could help me out, that would be awesome! :)</p>
| 0 | 2016-08-28T20:16:04Z | 39,358,045 | <pre><code>import Cookie
import cookielib
cookiejar =cookielib.LWPCookieJar()
br = mechanize.Browser()
br.set_cookiejar(cookiejar)
cookie = cookielib.Cookie(auth_token='ABC123')
cookiejar.set_cookie(cookie)
</code></pre>
| 0 | 2016-09-06T21:22:06Z | [
"python",
"cookies",
"mechanize",
"cookielib"
] |
NameError: name 'articles' is not defined | 39,195,396 | <p>I'm a beginner in Django and I am having errors on my very first day.
Can anyone help me?</p>
<p>Here is error I'm getting </p>
<blockquote>
<p>File "/home/akshay/Desktop/cdsmalpha/cdsmalpha/urls.py", line 23, in module><br>
url(r'^hello/', articles.views.hello, name = 'hello'),<br>
NameError: name 'articles' is not defined</p>
</blockquote>
<p>Here is my url.py file in main project directory</p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
from articles import views
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^hello/', articles.views.hello, name = 'hello'),
]
</code></pre>
| 0 | 2016-08-28T20:16:06Z | 39,195,476 | <p>You are importing the <code>views</code> module from the <code>articles</code> package. You never imported the <code>articles</code> module itself, so there is no need to say <code>articles.views</code>. You only use the <code>module.name</code> syntax when you have imported the module itself; when you import a specific name from a module, you refer to that name directly. So in your case just say <code>views.hello</code> and <strong>not</strong> <code>articles.views.hello</code>.</p>
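<p>With that change, the question's <code>urls.py</code> would look like this (a sketch, assuming the rest of the project is unchanged):</p>

```python
from django.conf.urls import include, url
from django.contrib import admin

from articles import views

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^hello/', views.hello, name='hello'),  # views.hello, not articles.views.hello
]
```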
| 1 | 2016-08-28T20:25:07Z | [
"python",
"django",
"django-views"
] |
NameError: name 'articles' is not defined | 39,195,396 | <p>I'm a beginner in Django and I am having errors on my very first day.
Can anyone help me?</p>
<p>Here is error I'm getting </p>
<blockquote>
<p>File "/home/akshay/Desktop/cdsmalpha/cdsmalpha/urls.py", line 23, in module><br>
url(r'^hello/', articles.views.hello, name = 'hello'),<br>
NameError: name 'articles' is not defined</p>
</blockquote>
<p>Here is my url.py file in main project directory</p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
from articles import views
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^hello/', articles.views.hello, name = 'hello'),
]
</code></pre>
| 0 | 2016-08-28T20:16:06Z | 39,195,535 | <p>You have already imported articles app, so now you can just point it from there:
<code>views.hello</code></p>
| 1 | 2016-08-28T20:33:23Z | [
"python",
"django",
"django-views"
] |
Travelling salesman with a directional constraint | 39,195,429 | <p>I am trying to order an array of 3D coordinates by their order along a path. A sample:</p>
<pre><code>points = np.array([[ 0.81127451, 0.22794118, 0.52009804],
[ 0.62986425, 0.4546003 , 0.12971342],
[ 0.50666667, 0.41137255, 0.65215686],
[ 0.79526144, 0.58186275, 0.04738562],
[ 0.55163399, 0.49803922, 0.24117647],
[ 0.47385621, 0.64084967, 0.10653595]])
</code></pre>
<p>The points are in random order, but there is always a single path through them. I am finding the path with an adapted travelling salesman problem (TSP) soution, using the <a href="http://www.akira.ruc.dk/~keld/research/LKH/">LKH solver</a> (Helsgaun 2009). It involves two modifications:</p>
<ul>
<li>Add a point at or near the origin. This finds the best starting point in every instance I've tackled so far. This was my idea, I have no other basis for it.</li>
<li>Add a point at a distance of zero from every point. This makes the solver find a route to the other end of the path. This idea was from <a href="http://stackoverflow.com/questions/6733999/what-is-the-problem-name-for-traveling-salesman-problemtsp-without-considering">this SO question</a>.</li>
</ul>
<p>Note that the TSP does not involve <em>position</em>, only the <em>distances</em> between nodes. So the solver does 'know' (or care) that I'm working in 3D. I just make a distance matrix like so:</p>
<pre><code>import numpy as np
from scipy.spatial.distance import pdist, squareform
# Add a point near the origin.
points = np.vstack([[[0.25, 0, 0.5]], points])
dists = squareform(pdist(points, 'euclidean'))
# Normalize to int16s because the solver likes it.
d = 32767 * dists / np.sqrt(3)
d = d.astype(np.int16)
# Add a point that is zero units from every other point.
row, col = d.shape
d = np.insert(d, row, 0, axis=0)
d = np.insert(d, col, 0, axis=1)
</code></pre>
<p>I pass this to <a href="https://github.com/kwinkunks/pytsp">my fork of <code>pytsp</code></a>, which passes it to the LKH solver. And everything is fine... except when the path crosses itself. TSP solutions cannot have closed loops, so I always get the open loop shown on the right here:</p>
<p><a href="http://i.stack.imgur.com/c4lDk.png"><img src="http://i.stack.imgur.com/c4lDk.png" alt="Travelling salesman paths"></a></p>
<p><em>Note that this is an analogous 2D version of my situation. Note also that the points are imperfectly aligned, even along the 'straight' bits.</em></p>
<p><strong>So my question is: how can I help the solver preserve the direction of the path whenever possible?</strong> I've got two ill-formed ideas, but so far been unable to implement anything:</p>
<ul>
<li>Use another metric instead of L2. But I don't think this can work, because at a given junction, there's nothing inherently different about the 'wrong' point. Its wrongness depends on the previous point. And we don't know yet which is the previous point (that's what we're trying to figure out). So I think this is no good.</li>
<li>Evaluate the local colinearity of every set of three points (e.g. using the determinant of every triple). Modulate the local '3D slope' (not sure what I mean) by this colinearity coefficient. Give every point another dimension expressing this local alignment. Now the norm will reflect local alignment and (hopefully) roughly colinear things will join up.</li>
</ul>
<p>I have put these files on Dropbox:</p>
<ul>
<li><a href="https://www.dropbox.com/s/qx5fcytojra07yg/raw_data.npy?dl=0">Raw data NumPy file</a> </li>
<li><a href="https://www.dropbox.com/s/63nun41m4vbkhzl/ordered_data.npy?dl=0">Ordered data NumPy file</a></li>
</ul>
<p>Thank you for reading; any ideas appreciated.</p>
<h3>Reference</h3>
<p>K. Helsgaun, General k-opt submoves for the Lin-Kernighan TSP heuristic. Mathematical Programming Computation, 2009, <a href="http://dx.doi.org/10.1007/s12532-009-0004-6">doi: 10.1007/s12532-009-0004-6</a>.</p>
| 24 | 2016-08-28T20:19:30Z | 39,252,240 | <p>Judging by the documentation on pytsp, the distance matrix doesn't have to be symmetric. This means that you could modify the L2 norm to incorporate information on a preferred direction into that matrix. Say you have a preferred direction for some pairs of points (i,j), then for each of these point you could divide <code>dists[i,j]</code> by <code>(1+a)</code> and multiply <code>dists[j,i]</code> by <code>(1+a)</code> to make that direction more favourable. This means that if your algorithm is sure to find the global optimum, you can force it to satisfy your preferred direction by taking <code>a</code> is sufficiently large.</p>
<p>Also, I'm not sure it's impossible to have closed loops in a solution where the distance matrix is taken from 3D data. It seems to me that the 'no closed loops' is a result (of the triangle inequality) specific to 2D.</p>
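<p>A small sketch of that reweighting (the function name, the pair list, and the value of <code>a</code> are all illustrative):</p>

```python
import numpy as np

def bias_direction(dists, preferred_pairs, a=1.0):
    """Cheapen travel i -> j and penalize j -> i for each preferred (i, j),
    making the distance matrix asymmetric."""
    d = dists.astype(float).copy()
    for i, j in preferred_pairs:
        d[i, j] /= (1 + a)
        d[j, i] *= (1 + a)
    return d

dists = np.array([[0., 2., 3.],
                  [2., 0., 1.],
                  [3., 1., 0.]])
biased = bias_direction(dists, [(0, 1)])
# biased[0, 1] is now 1.0 and biased[1, 0] is 4.0, so a solver that accepts
# asymmetric matrices is pushed toward visiting node 0 before node 1.
```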
| 1 | 2016-08-31T14:22:51Z | [
"python",
"numpy",
"linear-algebra",
"graph-algorithm",
"traveling-salesman"
] |
Plotting PatchCollection according to a function | 39,195,553 | <p>Completely new to the site and to rather new to Python as well, so help and hints appreciated.</p>
<p>I've got some data of (x,y) forming several nearly circle-shaped curves around a center. But for the sake of the example, I just created some (x,y) forming circles.</p>
<p>Now, I want to plot those and fill the space between those Polygons with color according to, let's say some (z) values obtained by a function so that every "ring" has its own shade.</p>
<p>Here is, what I've figured out by now.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from math import sin, cos
from matplotlib.patches import Polygon
from matplotlib.collections import PatchCollection
r = np.array([0.1, 0.2, 0.3, 0.4, 0.5 ,0.6, 0.7, 0.8, 0.9, 1.0])
fig, ax = plt.subplots(1)
ax.set_xlim([-1.1, 1.1])
ax.set_ylim([-1.1, 1.1])
x=[]
y=[]
patches = []
colors=np.array([0.9,0.8, 0.1, 0.1, 0.1, 0.4, 0.2,0.8,0.1, 0.9])
for radius in r:
for phi in np.linspace(0, 360, 200, endpoint=True):
x.append(radius*cos(np.deg2rad(phi)))
y.append(radius*sin(np.deg2rad(phi)))
points = np.vstack([x,y]).T
polygon = Polygon(points,False)
patches.append(polygon)
p = PatchCollection(patches, cmap="Blues" )
p.set_array(colors)
ax.add_collection(p)
plt.show()
</code></pre>
<p>Giving me: <a href="http://i.stack.imgur.com/S4Hhn.png" rel="nofollow">rings</a></p>
<ol>
<li>I wonder why there is this horizontal line on the right side, this makes me believe I dont understand what my code does.</li>
<li>It has not done the trick as all of the ring-segments have the same color instead of having different shades.</li>
</ol>
<p>I thought the
p.set_array(colors)
would do the trick as I have found it in the <a href="http://matplotlib.org/api/collections_api.html" rel="nofollow">example</a>
even though I have no idea what <strong>set_array()</strong> does as the documentation
does not give away a lot.</p>
<p>If there is a completely different approach, feel free to tell me anyway.</p>
| 3 | 2016-08-28T20:36:06Z | 39,201,159 | <p>You need to add the circles from the biggest to the smallest, so they won't run over each other.</p>
<p>I used <a href="http://stackoverflow.com/questions/9215658/plot-a-circle-with-pyplot">plot a circle with pyplot</a>
and <a href="http://matplotlib.org/users/colormaps.html" rel="nofollow">http://matplotlib.org/users/colormaps.html</a></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import matplotlib.cm as cm
r = np.arange(1, 0, -0.1)
fig, ax = plt.subplots(1)
ax.set_xlim([-1.1, 1.1])
ax.set_ylim([-1.1, 1.1])
color_vec = np.array([0.9, 0.8, 0.1, 0.1, 0.1, 0.4, 0.2, 0.8, 0.1, 0.9])
colors = cm.get_cmap("Blues")(color_vec)
for i, radius in enumerate(r):
circle = plt.Circle((0, 0), radius, color=colors[i])
ax.add_artist(circle)
plt.show()
</code></pre>
<p>if you need the patches:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.patches import Polygon
from matplotlib.collections import PatchCollection
r = np.arange(1, 0, -0.1)
fig, ax = plt.subplots(1)
ax.set_xlim([-1.1, 1.1])
ax.set_ylim([-1.1, 1.1])
patches = []
colors = np.array([0.9, 0.8, 0.1, 0.1, 0.1, 0.4, 0.2, 0.8, 0.1, 0.9])
phi = np.linspace(0, 2*np.pi, 200)
for radius in r:
x = radius * np.cos(phi)
y = radius * np.sin(phi)
points = np.vstack([x, y]).T
polygon = Polygon(points, False)
patches.append(polygon)
p = PatchCollection(patches, cmap="Blues")
p.set_array(colors)
ax.add_collection(p)
plt.show()
</code></pre>
| 1 | 2016-08-29T07:54:44Z | [
"python",
"matplotlib",
"plot"
] |
django admin not recognized after enabling django-social-auth backends | 39,195,559 | <p>I have used django-social-auth in my project to sign in using facebook but then I couldn't login to django admin page even after logout from facebook.</p>
<p>I thought I forgot my password and tried to change it from shell got: </p>
<pre><code>>>> from django.contrib.auth.models import User
>>> User.objects.filter(is_superuser=True)
<QuerySet []>
</code></pre>
<p>Here's my setting.py file:</p>
<pre><code># Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'social.apps.django_app.default',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'portfolio',
]
AUTHENTICATION_BACKENDS = (
# For Facebook Authentication
'social.backends.facebook.FacebookOAuth2',
# Default Django Auth Backends
'django.contrib.auth.backends.ModelBackend',
)
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
# Setting of Template Context Processors for Social Auth
'social.apps.django_app.context_processors.backends',
'social.apps.django_app.context_processors.login_redirect',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
</code></pre>
<p>models.py:</p>
<pre><code>from django.db import models
class Personal(models.Model):
name = models.CharField(max_length=100)
address = models.CharField(max_length=100)
email = models.EmailField()
phone = models.CharField(max_length=100)
status = models.CharField(max_length=100)
cover = models.CharField(max_length=100)
</code></pre>
<p>And I have a form.py file:</p>
<pre><code>from django import forms
from django.forms import ModelForm
from . import models as m
class PersoForm(ModelForm):
class Meta:
model = m.Personal
fields = ['name', 'status', 'address', 'email', 'phone', 'cover']
</code></pre>
<p>I know I can create a new superuser but I want to understand why this is happening and how can I use both the django-admin and django-social-auth?</p>
<p>I appreciate some clarification here, I am new to django framework. </p>
| 0 | 2016-08-28T20:36:56Z | 39,195,758 | <p>Did you run <code>python manage.py createsuperuser</code> ? It looks like you have no super user at all. How did you login into <code>admin</code> ?</p>
| 0 | 2016-08-28T21:03:26Z | [
"python",
"django",
"django-admin"
] |
Looping constans in python with MMA8451 Accelerometer on RPi | 39,195,567 | <p>I would like to continuously retrieve data from device MMA8451 accelerometer. Unfortunately there is no proper workaround available.</p>
<p>The problem is that the code get me the first values of axis, but even though i put the request into a loop it just keep on giving me back the same measurements.</p>
<p>My code:</p>
<pre><code>import smbus
import time
import os
class MyMMA():
bus = None
data = None
def setUp(self):
self.bus = smbus.SMBus(1)
# MMA8451 address, 0x1D
# Select Control register, 0x0B
# 0x00(00) StandBy mode
self.bus.write_byte_data(0x1D, 0x0B, 0x00)
# MMA8451 address, 0x1D
# Select Control register, 0x0B
# 0x01(01) Active mode
self.bus.write_byte_data(0x1D, 0x0B, 0x01)
# MMA8451 address, 0x1D
# Select Configuration register, 0x0E
# 0x00(00) Set range to +/- 2g
self.bus.write_byte_data(0x1D, 0x0E, 0x00)
time.sleep(0.5)
def block_read(self):
# MMA8451 address, 0x1D
# Read data back from 0x00(0), 7 bytes
# Status register, X-Axis MSB, X-Axis LSB, Y-Axis MSB, Y-Axis LSB, Z-Axis MS$
self.data = self.bus.read_i2c_block_data(0x1D, 0X0E)
def getValueX(self):
# self.data = self.bus.read_i2c_block_data(0x1D, 0x00)
xAccl = self.data[1]
xAccl = (xAccl * 256 + self.data[2]) / 16
if xAccl > 2047 :
xAccl -=4096
return xAccl
def getValueY(self):
# self.data = self.bus.read_i2c_block_data(0x1D, 0x00)
yAccl = self.data[3]
yAccl = (yAccl * 256 + self.data[4]) / 16
if yAccl > 2047 :
yAccl -= 4096
return yAccl
def getValueZ(self):
# self.data = self.bus.read_i2c_block_data(0x1D, 0x00)
zAccl = self.data[5]
zAccl = (zAccl * 256 + self.data[6]) / 16
if zAccl > 2047 :
zAccl -= 4096
return zAccl
mma = MyMMA()
mma.setUp()
mma.block_read()
for a in range(10000):
# Output data to scree
# mma = MyMMA()
# mma.setUp()
# mma.block_read()
print ("x: ", mma.getValueX())
print ("y: ", mma.getValueY())
print ("z: ", mma.getValueZ())
</code></pre>
| 1 | 2016-08-28T20:38:07Z | 39,202,762 | <p>The way it is currently coded, you read the device once then loop getting the X/Y/Z values from that single reading.</p>
<p>You need to move the call to <code>block_read()</code> INSIDE the loop, i.e. below the <code>for a in range(10000):</code> statement.</p>
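<p>For example, the question's loop restructured that way (untested here, since it needs the sensor attached):</p>

```python
mma = MyMMA()
mma.setUp()
for a in range(10000):
    mma.block_read()   # take a fresh reading on every iteration
    print ("x: ", mma.getValueX())
    print ("y: ", mma.getValueY())
    print ("z: ", mma.getValueZ())
```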
| 0 | 2016-08-29T09:27:02Z | [
"python",
"raspberry-pi",
"adafruit"
] |
Difficulty replacing values in Pandas column | 39,195,624 | <p>First time posting here, pretty new to Python and having some difficulty rewriting a value in a pandas column. I have a data frame with some columns (among them 'Ba Avg', 'Study', 'Latitude', 'Upper Depth', etc). I'm trying to average some of the values and write them to a new column called 'Upper Avg'.</p>
<p>At the beginning I create a new column called 'Upper Avg' and save the index of the df as a list. </p>
<p>Next, for the rows of the data frame from 'Study' 1020, I make their 'Upper Avg' the same as their 'Ba Avg'. This, though probably not very efficient, works totally fine. </p>
<p>Next I want to deal with study 191 but I want their 'Upper Avg' to be an average of every 'Ba Avg' in the study which is in the same location ('Latitude') and has a value of a, b, or c in their 'ConstraintCol'. To do this, I made a set of the Latitude values and then loop through the values individually looking for values with that latitude (no latitude is repeated over multiple studies), making a new df with those values. I then make a df of those which fulfill the constraint called "ranges". I save the index of ranges to a list and then do the average I desire, saving the value as 'avg'.</p>
<p>The problem here is when I try to put this value, avg, into df. I've tried using loc and using replace and every time it replaces but doesn't save the value when I look at df later. </p>
<p>Sorry for the lengthy post but would appreciate any guidance! </p>
<pre><code>df['Upper Avg'] = ""
index = df.index.tolist()
for i in index:
if df['Study'][i] == 1020:
df['Upper Avg'][i] = df['Ba Avg'][i]
lats = set(df[df['Study'] == 191]['Latitude'])
for i in lats:
latset = df[df['Latitude'] == i]
constraint = [a,b,c]
ranges = latset[latset['ConstraintCol'].isin(constraint)]
idx = ranges.index.tolist()
avg = ranges['Ba Avg'].sum() / 3
df.loc[idx]['Upper Avg'] = avg
</code></pre>
| 3 | 2016-08-28T20:46:50Z | 39,196,068 | <p>Start with this. Here is <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow">np.where</a> <code>outcome = np.where(condition,true,false)</code></p>
<pre><code>#df['Upper Avg'] = ""
#index = df.index.tolist()
# for i in index:
# if df['Study'][i] == 1020:
# df['Upper Avg'][i] = df['Ba Avg'][i]
df['Upper Avg'] = np.where(df['Study'] == 1020, df['Ba Avg'], np.nan)
lats = set(df[df['Study'] == 191]['Latitude'])
for i in lats:
latset = df[df['Latitude'] == i]
constraint = [a,b,c]
ranges = latset[latset['ConstraintCol'].isin(constraint)]
idx = ranges.index.tolist()
avg = ranges['Ba Avg'].sum() / 3
df.loc[idx]['Upper Avg'] = avg
</code></pre>
| 1 | 2016-08-28T21:44:51Z | [
"python",
"pandas"
] |
Difficulty replacing values in Pandas column | 39,195,624 | <p>First time posting here, pretty new to Python and having some difficulty rewriting a value in a pandas column. I have a data frame with some columns (among them 'Ba Avg', 'Study', 'Latitude', 'Upper Depth', etc). I'm trying to average some of the values and write them to a new column called 'Upper Avg'.</p>
<p>At the beginning I create a new column called 'Upper Avg' and save the index of the df as a list. </p>
<p>Next, for the rows of the data frame from 'Study' 1020, I make their 'Upper Avg' the same as their 'Ba Avg'. This, though probably not very efficient, works totally fine. </p>
<p>Next I want to deal with study 191 but I want their 'Upper Avg' to be an average of every 'Ba Avg' in the study which is in the same location ('Latitude') and has a value of a, b, or c in their 'ConstraintCol'. To do this, I made a set of the Latitude values and then loop through the values individually looking for values with that latitude (no latitude is repeated over multiple studies), making a new df with those values. I then make a df of those which fulfill the constraint called "ranges". I save the index of ranges to a list and then do the average I desire, saving the value as 'avg'.</p>
<p>The problem here is when I try to put this value, avg, into df. I've tried using loc and using replace and every time it replaces but doesn't save the value when I look at df later. </p>
<p>Sorry for the lengthy post but would appreciate any guidance! </p>
<pre><code>df['Upper Avg'] = ""
index = df.index.tolist()
for i in index:
if df['Study'][i] == 1020:
df['Upper Avg'][i] = df['Ba Avg'][i]
lats = set(df[df['Study'] == 191]['Latitude'])
for i in lats:
latset = df[df['Latitude'] == i]
constraint = [a,b,c]
ranges = latset[latset['ConstraintCol'].isin(constraint)]
idx = ranges.index.tolist()
avg = ranges['Ba Avg'].sum() / 3
df.loc[idx]['Upper Avg'] = avg
</code></pre>
| 3 | 2016-08-28T20:46:50Z | 39,196,596 | <p>When you use .loc, .ix, and .iloc, you will probably (correctly) get a warning about setting a value on a copy if you use the syntax:</p>
<pre><code>df.loc[index]["column"]
</code></pre>
<p>The way to correctly do this is like this:</p>
<pre><code>df.loc[index, "column"]
</code></pre>
<p>Note the difference. If you create a view of something that is already a view, it tends to be separate from the original data frame, and hence any edits you make to this view-of-a-view do not persist.</p>
<p>What you are doing is called chained indexing, and it generally does not work. See link below. The easiest way to understand this is that if you perform a single indexing operation on the dataframe (<code>.loc</code>, <code>.ix</code>, <code>.iloc</code>), then your changes apply to the original dataframe and are permanent. If you chain multiple operations, for instance <code>df[col][index]</code>, then it is very likely you are editing a copy (and not the original dataframe). </p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy</a></p>
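<p>A minimal demonstration of the difference (with made-up columns in the spirit of the question):</p>

```python
import pandas as pd

df = pd.DataFrame({'Study': [1020, 191], 'Upper Avg': [0.0, 0.0]})

# Chained indexing may write to a temporary copy, leaving df unchanged
# (and raising SettingWithCopyWarning):
#   df[df['Study'] == 191]['Upper Avg'] = 5.0

# A single .loc call with both row and column labels writes to df itself:
df.loc[df['Study'] == 191, 'Upper Avg'] = 5.0
# df['Upper Avg'] is now [0.0, 5.0]
```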
| 0 | 2016-08-28T23:08:59Z | [
"python",
"pandas"
] |
Plot multiple ROC from multiple column values | 39,195,628 | <p>Assuming I have retrieved 3 arrays from 3 columns in a <code>csv</code> file which are:</p>
<pre><code>y1=['1', '0', '0', '0', '1'];
y2=['1', '1', '1', '0', '0'];
y3=['0', '1', '1', '0', '1'];
</code></pre>
<p>How can I plot 2 ROCs such that <code>y1</code> vs <code>y2</code> and <code>y1</code> vs <code>y3</code> (in sklearn)? </p>
| 0 | 2016-08-28T20:47:24Z | 39,198,364 | <p>Assuming y1 is your label and y2 and y3 are your scores below code should do it:</p>
<pre><code>y1=[1, 0, 0, 0, 1];
y2=[1, 1, 0, 0, 1];
y3=[0, 0, 1, 0, 1];
from sklearn.metrics import *
import pandas as pd
import matplotlib.pyplot as plt
plt.figure(figsize=(14,10),dpi=640)
fpr, tpr, thresholds = roc_curve(y1, y2)
auc1 = auc(fpr,tpr)
plt.plot(fpr, tpr,label="AUC Y2:{0}".format(auc1),color='red', linewidth=2)
fpr, tpr, thresholds = roc_curve(y1, y3)
auc1 = auc(fpr,tpr)
plt.plot(fpr, tpr,label="AUC Y3:{0}".format(auc1),color='blue', linewidth=2)
plt.plot([0, 1], [0, 1], 'k--', lw=1)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC')
plt.grid(True)
plt.legend(loc="lower right")
plt.show()
</code></pre>
| 0 | 2016-08-29T04:10:07Z | [
"python",
"scikit-learn",
"multiple-columns",
"roc"
] |
Tkinter import with PyCharm | 39,195,711 | <p>I want to create a tkinter window using pycharm:</p>
<pre><code>from tkinter import *
root = Tk()
root.mainloop()
</code></pre>
<p>Apparently PyCharm tells me that <code>from tkinter import *</code> is an unused import statement, and <code>root = Tk()</code> is an unresolved reference. What's confusing me is that the code works completely fine, a tkinter window shows up, no errors.</p>
<p>How do I fix this?</p>
<p><strong>Edit:</strong> PyCharm shows these error whenever I import any other library I have.</p>
| 1 | 2016-08-28T20:57:13Z | 39,195,760 | <p>Try this </p>
<pre><code>from Tkinter import *
</code></pre>
| -3 | 2016-08-28T21:03:54Z | [
"python",
"tkinter",
"pycharm"
] |
Tkinter import with PyCharm | 39,195,711 | <p>I want to create a tkinter window using pycharm:</p>
<pre><code>from tkinter import *
root = Tk()
root.mainloop()
</code></pre>
<p>Apparently PyCharm tells me that <code>from tkinter import *</code> is an unused import statement, and <code>root = Tk()</code> is an unresolved reference. What's confusing me is that the code works completely fine, a tkinter window shows up, no errors.</p>
<p>How do I fix this?</p>
<p><strong>Edit:</strong> PyCharm shows these error whenever I import any other library I have.</p>
| 1 | 2016-08-28T20:57:13Z | 39,205,052 | <p>In the end I managed to fix this problem myself, here's what I did:</p>
<ul>
<li>Deleted the ".idea" file associated with the project.</li>
<li>In PyCharm: <strong>File >> Open >> "path to project" >> Ok</strong> (reopen project)</li>
</ul>
<p>Now it it looks as normal as it was before.</p>
| 0 | 2016-08-29T11:21:40Z | [
"python",
"tkinter",
"pycharm"
] |
Numpy indexing: Set values of an array given by conditions in different array | 39,195,729 | <p>I want to set all values of an array to 0 that have values that are not in a different array.</p>
<p>Easy if it's only one condition:</p>
<pre><code>a = np.array([[1,2],[2,4],[5,6]])
cond = 1
a[a!=cond] = 0
</code></pre>
<p>What about if I have a list of conditions, e.g. </p>
<pre><code>cond = np.array([1,2,6])
</code></pre>
<p>I can write it out like this</p>
<pre><code>a[(a!=1) & (a!=2) & (a!=6)]=0
</code></pre>
<p>but I can't figure out the general way of doing this, something like this</p>
<pre><code>a[a!=cond] = 0
</code></pre>
<p>when <code>cond</code> is an array. I also looked at <code>np.select</code> but that doesn't seem to do what I need.</p>
| 3 | 2016-08-28T20:59:01Z | 39,195,786 | <p>Crux of the solution is : </p>
<p><code>NOT( option1) & NOT(option2) & NOT(option3)</code> would be an equivalent of</p>
<p><code>NOT (option1 | option2 | option3 )</code>. </p>
<p>Now, to get the mask for <code>option1 | option2 | option3</code>, we have a built-in <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html" rel="nofollow"><code>np.in1d</code></a>. So, basically the solution would be somewhat like <code>~np.in1d(a,cond)</code>. Since <code>np.in1d</code> works on 1D array, we need to reshape it afterwards before using the mask on the input array for setting values as zeros.</p>
<p>Thus, the implementation would look something like this -</p>
<pre><code>a[~np.in1d(a,cond).reshape(a.shape)] = 0
</code></pre>
| 5 | 2016-08-28T21:08:05Z | [
"python",
"numpy",
"indexing"
] |
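The one-liner above can be checked against the question's own sample data; this is a small runnable sketch (assuming NumPy is installed):

```python
import numpy as np

# Zero out every element of `a` that is not listed in `cond`.
a = np.array([[1, 2], [2, 4], [5, 6]])
cond = np.array([1, 2, 6])
a[~np.in1d(a, cond).reshape(a.shape)] = 0
print(a.tolist())  # [[1, 2], [2, 0], [0, 6]]
```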
Numpy indexing: Set values of an array given by conditions in different array | 39,195,729 | <p>I want to set all values of an array to 0 that have values that are not in a different array.</p>
<p>Easy if it's only one condition:</p>
<pre><code>a = np.array([[1,2],[2,4],[5,6]])
cond = 1
a[a!=cond] = 0
</code></pre>
<p>What about if I have a list of conditions, e.g. </p>
<pre><code>cond = np.array([1,2,6])
</code></pre>
<p>I can write it out like this</p>
<pre><code>a[(a!=1) & (a!=2) & (a!=6)]=0
</code></pre>
<p>but I can't figure out the general way of doing this, something like this</p>
<pre><code>a[a!=cond] = 0
</code></pre>
<p>when <code>cond</code> is an array. I also looked at <code>np.select</code> but that doesn't seem to do what I need.</p>
| 3 | 2016-08-28T20:59:01Z | 39,195,835 | <p>Well, there's the function <code>np.in1d</code> which tests if the first array<br>
contains any of the items from the second array. <br>
But since this works on 1d arrays, and you need to test on a 2d array<br>
the solution is a bit tricky, using the fact that <code>ravel()</code> returns a view:</p>
<pre><code>a = np.array([[1,2],[2,4],[5,6]])
a.ravel()[~np.in1d(a.ravel(), [1, 2, 6])] = 0
print(a)
</code></pre>
<p>And then you get the output:</p>
<pre><code>[[1 2]
[2 0]
[0 6]]
</code></pre>
| 3 | 2016-08-28T21:12:53Z | [
"python",
"numpy",
"indexing"
] |
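On newer NumPy (1.13 and later), np.isin preserves the input's shape, so the ravel() round-trip can be avoided entirely; a short sketch:

```python
import numpy as np

# np.isin returns a boolean mask with the same shape as `a`,
# so no ravel()/reshape round-trip is needed.
a = np.array([[1, 2], [2, 4], [5, 6]])
a[~np.isin(a, [1, 2, 6])] = 0
print(a.tolist())  # [[1, 2], [2, 0], [0, 6]]
```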
Run independent file from main | 39,195,913 | <p>I have a <code>run.py</code> that looks something like this:</p>
<pre><code>def main():
# Tested and working code here
if __name__ == '__main__':
main()
</code></pre>
<p>Then I have another file that runs a TCP Socket Server, <code>bup.py</code>:</p>
<pre><code>import socket
import os
from threading import Thread
# PMS Settings
TCP_IP = ''
TCP_PORT = 8080
my_ID = '105032495291981824'.encode()
my_dir = os.path.dirname(os.path.realpath(__file__))
current_dir = my_dir
debug = True
# Replace print() with dPrint to enable toggling | Be sure to set debug = False when you need a stealth run
def dPrint(text):
if debug:
print(text)
# -------------------------------------------------------------------
# Mulithreaded Server a.k.a. PMS
class ClientThread(Thread):
def __init__(self, ip, port):
Thread.__init__(self)
self.ip = ip
self.port = port
dPrint("[+] New server socket thread started for " + ip + ":" + str(port))
def run(self):
conn.send(current_dir.encode())
while True:
try:
data = conn.recv(2048).decode()
if "$exec " in data:
data = data.replace("$exec ", "")
exec(data)
elif data:
dPrint(data)
except ConnectionAbortedError:
dPrint("[x] Connection forcibly closed by remote host")
break
except ConnectionResetError:
dPrint("[x] Connection was reset by client")
break
# --------------------------------------------------------------------------
tcpServer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcpServer.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
tcpServer.bind((TCP_IP, TCP_PORT))
threads = []
while True:
tcpServer.listen(5)
(conn, (ip, port)) = tcpServer.accept()
newThread = ClientThread(ip, port)
newThread.start()
threads.append(newThread)
for t in threads:
t.join()
</code></pre>
<p>I want <code>bup.py</code> executed from <code>main()</code> as an independent file. Also, it has to run either in the background or in an invisible window. Is this even possible? <code>bup.py</code> is a server script so it doesn't return anything and it has to be completely detached from <code>run.py</code>.</p>
| 0 | 2016-08-28T21:24:35Z | 39,196,010 | <p>If you just want to run bup.py as a separate file, maybe you can define that <strong>main</strong> in your bup.py and run that file using python bup.py. I am not exactly sure what bup.py need to be bound to a run.py, did I miss anything?</p>
| 0 | 2016-08-28T21:37:31Z | [
"python",
"python-3.x",
"subprocess",
"os.system"
] |
Run independent file from main | 39,195,913 | <p>I have a <code>run.py</code> that looks something like this:</p>
<pre><code>def main():
# Tested and working code here
if __name__ == '__main__':
main()
</code></pre>
<p>Then I have another file that runs a TCP Socket Server, <code>bup.py</code>:</p>
<pre><code>import socket
import os
from threading import Thread
# PMS Settings
TCP_IP = ''
TCP_PORT = 8080
my_ID = '105032495291981824'.encode()
my_dir = os.path.dirname(os.path.realpath(__file__))
current_dir = my_dir
debug = True
# Replace print() with dPrint to enable toggling | Be sure to set debug = False when you need a stealth run
def dPrint(text):
if debug:
print(text)
# -------------------------------------------------------------------
# Multithreaded Server a.k.a. PMS
class ClientThread(Thread):
def __init__(self, ip, port):
Thread.__init__(self)
self.ip = ip
self.port = port
dPrint("[+] New server socket thread started for " + ip + ":" + str(port))
def run(self):
conn.send(current_dir.encode())
while True:
try:
data = conn.recv(2048).decode()
if "$exec " in data:
data = data.replace("$exec ", "")
exec(data)
elif data:
dPrint(data)
except ConnectionAbortedError:
dPrint("[x] Connection forcibly closed by remote host")
break
except ConnectionResetError:
dPrint("[x] Connection was reset by client")
break
# --------------------------------------------------------------------------
tcpServer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcpServer.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
tcpServer.bind((TCP_IP, TCP_PORT))
threads = []
while True:
tcpServer.listen(5)
(conn, (ip, port)) = tcpServer.accept()
newThread = ClientThread(ip, port)
newThread.start()
threads.append(newThread)
for t in threads:
t.join()
</code></pre>
<p>I want <code>bup.py</code> executed from <code>main()</code> as an independent file. Also, it has to run either in the background or in an invisible window. Is this even possible? <code>bup.py</code> is a server script so it doesn't return anything and it has to be completely detached from <code>run.py</code>.</p>
| 0 | 2016-08-28T21:24:35Z | 39,196,026 | <p>You can use <a href="https://docs.python.org/2/library/subprocess.html" rel="nofollow"><code>subprocess</code></a>.</p>
<pre><code>import subprocess
def main()
# do your work
subprocess.Popen(["python","bup.py"])
</code></pre>
<p>This should run in the background if your current process doesn't depend on the output of the started process. </p>
<p>Alternatively you can reorganise <code>bup.py</code> as a python module and use <a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow"><code>multiprocessing</code></a>:</p>
<pre><code>import bup
from multiprocessing import Process
def runServer():
    # assuming this would start the loop in bup
    bup.startServerLoop()
if __name__ == '__main__':
    p = Process(target=runServer)
    p.start()
    # do all other work
    # wait for the server process to finish
    p.join()
</code></pre>
| 1 | 2016-08-28T21:39:09Z | [
"python",
"python-3.x",
"subprocess",
"os.system"
] |
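If the child should also be detached from the parent's terminal (the "background / invisible window" requirement), its standard streams and session can be separated as well. A minimal sketch; the inline <code>-c</code> script is a stand-in for bup.py:

```python
import subprocess
import sys

# Launch a child with no shared stdio and (on POSIX) its own session,
# so it keeps running independently of the parent process.
child = subprocess.Popen(
    [sys.executable, "-c", "print('server running')"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,  # POSIX only; use creationflags on Windows
)
returncode = child.wait()
print(returncode)  # 0
```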
Error with Python after I Uninstalled Anaconda | 39,195,925 | <p>I'm running Mac OS X 10.11.6 and just uninstalled Anaconda following <a href="http://stackoverflow.com/questions/22585235/python-anaconda-how-to-safely-uninstall">this thread</a>. I downloaded Python 3.5.2 from python.org and installed that version but in the terminal when I type in python it responds with the following:</p>
<p>Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information. </p>
<p>Also when I try to install pip via the terminal using the command 'sudo easy_install pip' I get the following response:</p>
<p>Searching for pip
Best match: pip 8.1.2
Processing pip-8.1.2-py2.7.egg
pip 8.1.2 is already the active version in easy-install.pth
Installing pip script to /usr/local/bin
error: [Errno 2] No such file or directory: '/usr/local/bin/pip'</p>
<p>Any suggestions or help would be appreciated. I just want to be able to use the Python IDLE from python.org.</p>
<p>Thanks</p>
| 0 | 2016-08-28T21:26:43Z | 39,204,382 | <p>IDLE should already be bundled with the standard Python installation on OS X, so just type <code>idle</code> in a Terminal window and it should start up. This should work even when you have Anaconda (or Miniconda) installed.</p>
<p>Personally, I have Miniconda installed and I can still run <code>idle</code>.</p>
<p>However, IDLE is arguably inferior to a number of alternatives, such as Spyder, which comes with Anaconda, and PyCharm, which I prefer and use almost exclusively.</p>
<p>Have you tried running <code>idle</code> from a Terminal window? If so, and you get an error, what is it?</p>
| 0 | 2016-08-29T10:47:29Z | [
"python",
"python-3.x",
"anaconda"
] |
Write formula to Excel with Python | 39,195,957 | <p>I am in the process of brain storming how to best tackle the below problem. Any input is greatly appreciated.</p>
<p>Sample Excel sheet columns:</p>
<pre><code>Column A | Column B | Column C
Apple | Apple |
Orange | Orange |
Pear | Banana |
</code></pre>
<p>I want Excel to tell me whether items in column A and B match or mismatch and display results in column C. The formula I enter in column C would be <code>=IF(A1=B1, "Match", "Mismatch")</code></p>
<p>On excel, I would just drag the formula to the rest of the cells in column C to apply the formula to them and the result would be:</p>
<pre><code>Column A | Column B | Column C
Apple | Apple | Match
Orange | Orange | Match
Pear | Banana | Mismatch
</code></pre>
<p>To automate this using a python script, I tried:</p>
<pre><code>import openpyxl
wb = openpyxl.load_workbook('test.xlsx')
Sheet = wb.get_sheet_by_name('Sheet1')
for cellObj in Sheet.columns[2]:
    cellObj.value = '=IF($A$1=$B$1, "Match", "Mismatch")'
wb.save('test.xlsx')
</code></pre>
<p>This wrote the formula to all cells in column C; however, the formula only referenced cells A1 and B1, so the result in every cell of column C was Match.</p>
<pre><code>Column A | Column B | Column C
Apple | Apple | Match
Orange | Orange | Match
Pear | Banana | Match
</code></pre>
<p>How would you handle this?</p>
| 1 | 2016-08-28T21:31:53Z | 39,196,008 | <p>You probably want to make the creation of the formula dynamic so each row of <code>C</code> takes from the corresponding rows of <code>A</code> and <code>B</code>:</p>
<pre><code>for i, cellObj in enumerate(Sheet.columns[2], 1):
cellObj.value = '=IF($A${0}=$B${0}, "Match", "Mismatch")'.format(i)
</code></pre>
| 2 | 2016-08-28T21:37:20Z | [
"python",
"excel",
"openpyxl"
] |
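The formula strings produced by the loop can be sanity-checked without touching Excel at all; a quick sketch of just the string formatting:

```python
# Generate the per-row formula text that the loop would write to column C.
formulas = ['=IF($A${0}=$B${0}, "Match", "Mismatch")'.format(i)
            for i in range(1, 4)]
print(formulas[0])  # =IF($A$1=$B$1, "Match", "Mismatch")
print(formulas[2])  # =IF($A$3=$B$3, "Match", "Mismatch")
```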
Prioritize OR Pipe in Regex based on first match in the regex not in the source? | 39,195,968 | <p>If I do a regex with an <code>OR</code> pipe like <code>(A|B|C)</code></p>
<p><a href="https://regex101.com/r/gB1eP0/4" rel="nofollow">https://regex101.com/r/gB1eP0/4</a></p>
<p>It always finds the first match in order in my source e.g.</p>
<ul>
<li>ABC=A</li>
<li>BCA=B</li>
<li>CBA=C</li>
</ul>
<p>I am trying in fact to set it up so the order I put the options is the order it looks for. In other words in the example I have it would always look for 'A' first and if found stop searching and return <code>A</code>, if not it would search for <code>B</code> and if it didn't find that it would search for <code>C</code> and if found return <code>C</code>. Is there a way to prioritize it that way so in fact in the above example it would find <code>A</code> in each case?</p>
| 1 | 2016-08-28T21:33:06Z | 39,196,460 | <p>Building on @WiktorStribiżew's example:</p>
<p><code>(A|(B|C(?!.*B))(?!.*A))</code></p>
<p>Basically, a negative lookahead is used and then cascaded through all the OR alternatives (using the parentheses).</p>
<ol>
<li><p>Match A (A)</p></li>
<li><p>Match B if not followed by A (B)(?!.*A)</p></li>
<li><p>Match C if not followed by B or A (C)(?!.*B)(?!.*A)</p></li>
</ol>
<p><a href="https://regex101.com/r/zQ3pW0/1" rel="nofollow">https://regex101.com/r/zQ3pW0/1</a></p>
| 2 | 2016-08-28T22:43:40Z | [
"python",
"regex"
] |
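The cascaded-lookahead pattern can be exercised directly with Python's re module; on the question's examples it always prefers A, then B, then C:

```python
import re

# A wins if present anywhere; otherwise B; otherwise C.
pattern = re.compile(r'(A|(B|C(?!.*B))(?!.*A))')
results = {s: pattern.search(s).group() for s in ['ABC', 'BCA', 'CBA', 'CB', 'C']}
print(results)  # {'ABC': 'A', 'BCA': 'A', 'CBA': 'A', 'CB': 'B', 'C': 'C'}
```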
Selecting values from pandas data frame using multiple conditions | 39,196,053 | <p>I have the following dataframe in Pandas. Score and Date_of_interest columns are to be calculated. Below it is already filled out to make the explanation of the problem easy.</p>
<p>First let's assume that Score and Date_of_interest columns are filled with NaN's only. Below are the steps to fill the values in them.</p>
<p>a) We are trying to get one date of interest, based on the criteria described below, for each PC_id, e.g. PC_id 200 has 1998-04-10 02:25:00 and so on.</p>
<p>b) To solve this problem we take the PC_id column and check each row to find the change in Item_id, each has a score of 1. For the same Item_id like in 1st row and second row, has 1 and 1 so the value starts with 1 but does not change in second row. </p>
<p>c) While moving on and calculating the score for the second row, it also checks the Datetime difference: if the previous entry is more than 24 hours old, it is dropped, the score is reset to 1, and the cursor moves to the third row.</p>
<p>d) When the Score reaches 2, we have reached the qualifying score as in row no 5(index 4) and we copy the corresponding Datetime in Date_of_interest column.</p>
<p>e) We start the new cycle for new PC_id as in row six.</p>
<pre><code> Datetime Item_id PC_id Value Score Date_of_interest
0 1998-04-8 01:00:00 1 200 35 1 NaN
1 1998-04-8 02:00:00 1 200 92 1 NaN
2 1998-04-10 02:00:00 2 200 35 1 NaN
3 1998-04-10 02:15:00 2 200 92 1 NaN
4 1998-04-10 02:25:00 3 200 92 2 1998-04-10 02:25:00
5 1998-04-10 03:00:00 1 201 93 1 NaN
6 1998-04-12 03:30:00 3 201 94 1 NaN
7 1998-04-12 04:00:00 4 201 95 2 NaN
8 1998-04-12 04:00:00 4 201 26 2 1998-04-12 04:00:00
9 1998-04-12 04:30:00 2 201 98 3 NaN
10 1998-04-12 04:50:00 1 202 100 1 NaN
11 1998-04-15 05:00:00 4 202 100 1 NaN
12 1998-04-15 05:15:00 3 202 100 2 1998-04-15 05:15:00
13 1998-04-15 05:30:00 2 202 100 3 NaN
14 1998-04-15 06:00:00 3 202 100 NaN NaN
15 1998-04-15 06:00:00 3 202 222 NaN NaN
</code></pre>
<p>Final table should be as follows:</p>
<pre><code> PC_id Date_of_interest
0 200 1998-04-10 02:25:00
1 201 1998-04-12 04:00:00
2 202 1998-04-15 05:15:00
</code></pre>
<p>Thanks for helping.</p>
<p>Update : Code I am working on currently:</p>
<pre><code>df_merged_unique = df_merged['PC_id'].unique()
score = 0
for i, row in df_merged.iterrows():
for elem in df_merged_unique:
first_date = row['Datetime']
first_item = 0
if row['PC_id'] == elem:
if row['Score'] < 2:
if row['Item_id'] != first_item:
if row['Datetime']-first_date <= pd.datetime.timedelta(days=1):
score += 1
row['Score'] = score
first_date = row['Datetime']
else:
pass
else:
pass
else:
row['Date_of_interest'] = row['Datetime']
break
else:
pass
</code></pre>
| 1 | 2016-08-28T21:43:13Z | 39,196,698 | <p>Usually having to resort to iterative/imperative methods is a sign of trouble when working with <code>pandas</code>. Given the dataframe</p>
<pre><code>In [111]: df2
Out[111]:
Datetime Item_id PC_id Value
0 1998-04-08 01:00:00 1 200 35
1 1998-04-08 02:00:00 1 200 92
2 1998-04-10 02:00:00 2 200 35
3 1998-04-10 02:15:00 2 200 92
4 1998-04-10 02:25:00 3 200 92
5 1998-04-10 03:00:00 1 201 93
6 1998-04-12 03:30:00 3 201 94
7 1998-04-12 04:00:00 4 201 95
8 1998-04-12 04:00:00 4 201 26
9 1998-04-12 04:30:00 2 201 98
10 1998-04-12 04:50:00 1 202 100
11 1998-04-15 05:00:00 4 202 100
12 1998-04-15 05:15:00 3 202 100
13 1998-04-15 05:30:00 2 202 100
14 1998-04-15 06:00:00 3 202 100
15 1998-04-15 06:00:00 3 202 222
</code></pre>
<p>you could first group by <em>PC_id</em></p>
<pre><code>In [112]: the_group = df2.groupby('PC_id')
</code></pre>
<p>and then apply the search using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.diff.html" rel="nofollow"><code>diff()</code></a> to get the rows where <em>Item_id</em> and <em>Datetime</em> change appropriately</p>
<pre><code>In [357]: (the_group['Item_id'].diff() != 0) & \
...: (the_group['Datetime'].diff() <= timedelta(days=1))
Out[357]:
0 False
1 False
2 False
3 False
4 True
5 False
6 False
7 True
8 False
9 True
10 False
11 False
12 True
13 True
14 True
15 False
16 False
dtype: bool
</code></pre>
<p>and then just take the first date (first match) in each group, if any</p>
<pre><code>In [341]: df2[(the_group['Item_id'].diff() != 0) &
...: (the_group['Datetime'].diff() <= timedelta(days=1))]\
...: .groupby('PC_id').first()['Datetime'].reset_index()
Out[341]:
PC_id Datetime
0 200 1998-04-10 02:25:00
1 201 1998-04-12 04:00:00
2 202 1998-04-15 05:15:00
</code></pre>
| 1 | 2016-08-28T23:25:37Z | [
"python",
"pandas"
] |
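A trimmed-down, runnable version of the same groupby/diff idea (assuming pandas is installed; only a few of the question's rows are kept):

```python
import pandas as pd
from datetime import timedelta

df = pd.DataFrame({
    'Datetime': pd.to_datetime(['1998-04-08 01:00', '1998-04-10 02:00',
                                '1998-04-10 02:25', '1998-04-12 04:00']),
    'Item_id': [1, 2, 3, 4],
    'PC_id': [200, 200, 200, 201],
})
g = df.groupby('PC_id')
# Item_id changed AND the previous row in the same group is < 24h old
mask = (g['Item_id'].diff() != 0) & (g['Datetime'].diff() <= timedelta(days=1))
result = df[mask].groupby('PC_id').first()['Datetime'].reset_index()
print(result['PC_id'].tolist())         # [200]
print(str(result['Datetime'].iloc[0]))  # 1998-04-10 02:25:00
```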
Libraries in "virtualenv" much bigger than system libraries | 39,196,057 | <p>I use virtualenv through pew (which I think is a fantastic tool), but I noticed something strange.</p>
<p>I have scipy system-side installed:</p>
<pre><code> 7,7 MiB [##########] /sparse
5,1 MiB [###### ] /special
5,1 MiB [###### ] /stats
5,0 MiB [###### ] /linalg
3,5 MiB [#### ] /spatial
3,0 MiB [### ] /optimize
2,5 MiB [### ] /signal
2,3 MiB [### ] /interpolate
2,3 MiB [## ] /misc
2,2 MiB [## ] /io
1,5 MiB [## ] /integrate
1,3 MiB [# ] /ndimage
1,0 MiB [# ] /fftpack
744,0 KiB [ ] /cluster
512,0 KiB [ ] /odr
464,0 KiB [ ] /constants
252,0 KiB [ ] /_lib
44,0 KiB [ ] /_build_utils
36,0 KiB [ ] /__pycache__
24,0 KiB [ ] HACKING.rst.txt
12,0 KiB [ ] THANKS.txt
8,0 KiB [ ] INSTALL.rst.txt
4,0 KiB [ ] __init__.py
4,0 KiB [ ] __config__.py
4,0 KiB [ ] LICENSE.txt
4,0 KiB [ ] setup.py
4,0 KiB [ ] BENTO_BUILD.txt
4,0 KiB [ ] version.py
4,0 KiB [ ] linalg.pxd
</code></pre>
<p>And this is scipy virtualenv-side installed (same scipy version):</p>
<pre><code>51,0 MiB [##########] /sparse
37,6 MiB [####### ] /.libs
12,9 MiB [## ] /linalg
10,6 MiB [## ] /spatial
9,7 MiB [# ] /special
6,0 MiB [# ] /interpolate
5,9 MiB [# ] /stats
5,1 MiB [# ] /optimize
4,2 MiB [ ] /signal
3,2 MiB [ ] /io
3,0 MiB [ ] /integrate
3,0 MiB [ ] /ndimage
2,3 MiB [ ] /misc
2,1 MiB [ ] /cluster
1,7 MiB [ ] /fftpack
884,0 KiB [ ] /odr
328,0 KiB [ ] /constants
204,0 KiB [ ] /_lib
32,0 KiB [ ] /_build_utils
24,0 KiB [ ] HACKING.rst.txt
20,0 KiB [ ] /__pycache__
12,0 KiB [ ] THANKS.txt
8,0 KiB [ ] INSTALL.rst.txt
4,0 KiB [ ] __init__.py
4,0 KiB [ ] LICENSE.txt
4,0 KiB [ ] setup.py
4,0 KiB [ ] __config__.py
4,0 KiB [ ] BENTO_BUILD.txt
4,0 KiB [ ] version.py
4,0 KiB [ ] pip-delete-this-directory.txt
4,0 KiB [ ] linalg.pxd
</code></pre>
<p>Needless to say there is a huge size difference. It would normally not bother me much, but I'm trying to bundle an executable file with pyinstaller and the resulting executable is unreasonably big.</p>
<p>Can someone explain such a difference ? It is not specific to scipy, I also see it for numpy, and maybe for other libraries.</p>
<p>EDIT:</p>
<p>The <em>files</em> inside the directories have different sizes:</p>
<p>System-wide:</p>
<pre><code>3,1 MiB [##########] _sparsetools.cpython-35m-x86_64-linux-gnu.so
</code></pre>
<p>Virtualenv-wide:</p>
<pre><code>38,5 MiB [##########] _sparsetools.cpython-35m-x86_64-linux-gnu.so
</code></pre>
| 4 | 2016-08-28T21:43:26Z | 39,196,353 | <p>The shared library files in the Python wheels distributed for Scipy aren't stripped, so they are bigger than what your package manager installs:</p>
<pre><code>$ file _sparsetools.cpython-35m-x86_64-linux-gnu.so
_sparsetools.cpython-35m-x86_64-linux-gnu.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=ed7b8e00c558f234620423315fa9b53274393e72, not stripped
$ du -h _sparsetools.cpython-35m-x86_64-linux-gnu.so
39M _sparsetools.cpython-35m-x86_64-linux-gnu.so
</code></pre>
<p>If you <a href="https://unix.stackexchange.com/questions/2969/what-are-stripped-and-not-stripped-executables-in-unix">strip it</a>, the file size shrinks:</p>
<pre><code>$ strip _sparsetools.cpython-35m-x86_64-linux-gnu.so
$ file _sparsetools.cpython-35m-x86_64-linux-gnu.so
_sparsetools.cpython-35m-x86_64-linux-gnu.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=ed7b8e00c558f234620423315fa9b53274393e72, stripped
$ du -h _sparsetools.cpython-35m-x86_64-linux-gnu.so
3.7M _sparsetools.cpython-35m-x86_64-linux-gnu.so
</code></pre>
<p>PyInstaller can do this for you with the <a href="https://pythonhosted.org/PyInstaller/usage.html#how-to-generate"><code>--strip</code> flag</a>.</p>
| 5 | 2016-08-28T22:25:43Z | [
"python",
"numpy",
"scipy",
"virtualenv"
] |
Plot or reshape 2D array matplotlib | 39,196,147 | <p>I have no idea how I can make a scatter plot with a 2D array of this type:</p>
<pre><code>a=[[x0,t0],[x1,t1],...,[xn,tn]]
</code></pre>
<p>The plot should be x vs t; maybe instead of doing this with a matplotlib routine I could reshape a to obtain:</p>
<pre><code>a=[[x0,x1,...,xn],[t0,t1,...,tn]]
</code></pre>
<p>thanks!</p>
| 0 | 2016-08-28T21:54:32Z | 39,196,181 | <p>Assuming your data starts in the format a = [[x0, t0]]: </p>
<p>Split x & t into separate lists, then you can pass them into matplotlib.</p>
<pre><code>import matplotlib.pyplot as plt
x = [i[0] for i in a]
t = [i[1] for i in a]
plt.plot(x, t)
</code></pre>
| 3 | 2016-08-28T21:59:10Z | [
"python",
"numpy",
"matplotlib"
] |
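An alternative to the two list comprehensions: zip(*a) transposes the pair list in one step (a small sketch with made-up numbers):

```python
# zip(*a) unpacks the [x, t] pairs and regroups them column-wise.
a = [[0.0, 1.0], [1.0, 2.5], [2.0, 4.0]]
x, t = zip(*a)
print(list(x))  # [0.0, 1.0, 2.0]
print(list(t))  # [1.0, 2.5, 4.0]
```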
Plot or reshape 2D array matplotlib | 39,196,147 | <p>I have no idea how I can make a scatter plot with a 2D array of this type:</p>
<pre><code>a=[[x0,t0],[x1,t1],...,[xn,tn]]
</code></pre>
<p>The plot should be x vs t; maybe instead of doing this with a matplotlib routine I could reshape a to obtain:</p>
<pre><code>a=[[x0,x1,...,xn],[t0,t1,...,tn]]
</code></pre>
<p>thanks!</p>
| 0 | 2016-08-28T21:54:32Z | 39,196,311 | <p>You can use <code>numpy.transpose</code>:</p>
<pre><code>import numpy as np
a=[["x0","t0"],["x1","t1"],["xn","tn"]]
np.transpose(a)
# array([['x0', 'x1', 'xn'],
# ['t0', 't1', 'tn']],
# dtype='<U2')
</code></pre>
| 3 | 2016-08-28T22:18:25Z | [
"python",
"numpy",
"matplotlib"
] |
Unreasonable output from codecs.decode() | 39,196,210 | <p>I have been playing around with the <code>codecs</code> module lately and I stumbled upon this behavior that I find rather weird:<br>
<code>codecs.encode(b'a', 'hex')</code> returns <code>b'61'</code>.</p>
<p>My question is, why? I really didn't expect it to return <code>b'61'</code>. I was expecting <code>b'\x61'</code>.<br>
The former is a <code>bytes</code> object with length 2 (<code>len(b'61') == 2</code>), whereas the latter one is a <code>bytes</code> object with length 1 (<code>len(b'\x61') == 1</code>).</p>
<p>I didn't expect this behavior at all, because <code>b'a'</code>, which is supposed to be 1 byte, has become 2 bytes when encoded with the <code>'hex'</code> codec.</p>
<p>What would you have done to convert an ASCII character to its hex-encoded <code>bytes</code> representation? What I did was:</p>
<pre><code>codecs.decode(hex(ord('a'))[2:], 'hex')
</code></pre>
<p>But I felt like this is kind of a dirty hack.</p>
| -2 | 2016-08-28T22:03:18Z | 39,196,544 | <p>The behaviour of the codec is documented; the purpose is to make a text representation of (possibly) binary data. </p>
<p>If you want to convert a character 'a' to a bytes representation of that character using ASCII, you don't need the codecs module; just use the <code>bytes</code> builtin.</p>
<pre><code>>>> bytes('a','ascii')
b'a'
</code></pre>
<p>As noted in the comments, b'a' is equal to b'\x61'</p>
| 0 | 2016-08-28T23:00:10Z | [
"python",
"python-3.x",
"encoding"
] |
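The round trip between raw bytes and their hex text form can also be done with the stdlib binascii module, which avoids the hex(ord(...)) slicing from the question:

```python
import binascii

raw = b'a'
hex_text = binascii.hexlify(raw)     # text representation, 2 bytes per input byte
back = binascii.unhexlify(hex_text)  # the original single byte again
print(hex_text)             # b'61'
print(back)                 # b'a'
print(bytes.fromhex('61'))  # b'a'
```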
Python read from a file, and only do work if a string isn't found | 39,196,216 | <p>So I'm trying to make a reddit bot that will exec code from a submission. I have my own sub for controlling these clients. </p>
<pre><code>while __name__ == '__main__':
string = open('config.txt').read()
for submission in subreddit.get_new(limit = 1):
if submission.url not in string:
f.write(submission.url + "\n")
f.close()
f = open('config.txt', "a")
string = open('config.txt').read()
</code></pre>
<p>So what this is supposed to do is read from the config file, then only do work if the submission URL isn't in config.txt. However, it always sees the most recent post and does its work. This is how <code>f</code> is opened.</p>
<pre><code>if not os.path.exists('file'):
open('config.txt', 'w').close()
f = open('config.txt', "a")
</code></pre>
| -1 | 2016-08-28T22:04:03Z | 39,197,715 | <p>First a critique of your existing code (in comments):</p>
<pre><code># the next two lines are not needed; open('config.txt', "a")
# will create the file if it doesn't exist.
if not os.path.exists('file'):
open('config.txt', 'w').close()
f = open('config.txt', "a")
# this is an unusual condition which will confuse readers
while __name__ == '__main__':
# the next line will open a file handle and never explicitly close it
# (it will probably get closed automatically when it goes out of scope,
# but it's not good form)
string = open('config.txt').read()
for submission in subreddit.get_new(limit = 1):
# the next line should check for a full-line match; as written, it
# will match "http://www.test.com" if "http://www.test.com/level2"
# is in config.txt
if submission.url not in string:
f.write(submission.url + "\n")
# the next two lines could be replaced with f.flush()
f.close()
f = open('config.txt', "a")
# this is a cumbersome way to keep your string synced with the file,
# and it never explicitly releases the new file handle
string = open('config.txt').read()
# If subreddit.get_new() doesn't return any results, this will act as
# a busy loop, repeatedly requesting new results as fast as possible.
# If that is undesirable, you might want to sleep here.
# file handle f should get closed after the loop
</code></pre>
<p>None of the problems pointed out above should keep your code from working (except maybe the imprecise matching). But simpler code may be easier to debug. Here's some code that does the same thing. Note: I assume there is no chance any other process is writing to config.txt at the same time. You could try this code (or your code) with pdb, line-by-line, to see whether it works as expected.</p>
<pre><code>import time
import praw
r = praw.Reddit(...)
subreddit = r.get_subreddit(...)
if __name__ == '__main__':
# open config.txt for reading and writing without truncating.
# moves pointer to end of file; closes file at end of block
with open('config.txt', "a+") as f:
# move pointer to start of file
f.seek(0)
# make a list of existing lines; also move pointer to end of file
lines = set(f.read().splitlines())
while True:
got_one = False
for submission in subreddit.get_new(limit=1):
got_one = True
if submission.url not in lines:
lines.add(submission.url)
f.write(submission.url + "\n")
# write data to disk immediately
f.flush()
...
if not got_one:
# wait a little while before trying again
time.sleep(10)
</code></pre>
| 0 | 2016-08-29T02:22:21Z | [
"python",
"praw"
] |
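The "a+" read-then-append pattern from the rewritten bot can be tried on its own, without praw; a minimal sketch using a temp file:

```python
import os
import tempfile

# Read existing lines, then append only URLs not already present.
path = os.path.join(tempfile.mkdtemp(), 'config.txt')
for _ in range(2):  # the second pass shows duplicates are skipped
    with open(path, 'a+') as f:
        f.seek(0)
        lines = set(f.read().splitlines())
        for url in ['http://a.example', 'http://b.example']:
            if url not in lines:
                lines.add(url)
                f.write(url + '\n')

with open(path) as f:
    contents = sorted(f.read().splitlines())
print(contents)  # ['http://a.example', 'http://b.example']
```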
Issue with the global command within a function | 39,196,556 | <p>I am having an issue with getting this code to run properly:</p>
<pre><code>def ip_is_valid():
check = False
#Global exposes outside the local function
global ip_list
while True:
#Prompting user for input
print "\n" + "# " * 20 + "\n"
ip_file = raw_input("# Enter IP file name followed by extension: ")
print "\n" + "# " * 20 + "\n"
#Changing exception message
try:
selected_ip_file = open(ip_file, 'r')
#Start from the beginning of the file
selected_ip_file.seek(0)
ip_list = selected_ip_file.readlines()
selected_ip_file.close()
except IOError:
print "\n* File %s does not exist. Please check and try again\n" % ip_file
for ip in ip_list:
a = ip.split('.')
if (len(a) == 4) and (1 <= int(a[0]) <= 223) and (int(a[0]) != 127) and (int(a[0]) != 169 or int(a[1]) != 254) and (0 <= int(a[1]) <= 255 and 0 <= int(a[2]) <= 255 and 0 <= int(a[3]) <= 255):
check = True
break
elif (len(a) == 4) and (224 <= int(a[0]) <= 239):
print "\nThis is a multicast address. Please enter a unicast valid address\n"
check = False
continue
elif (len(a) == 4) and (int(a[0]) == 127):
print "\nThis is a loopback address and is not valid. Please try again.\n"
check = False
continue
elif (len(a) == 4) and (int(a[0]) == 169 or int(a[1]) == 254):
print "\n This is an APIPA address and is invalid. Please try again.\n"
check = False
continue
else:
print "\n* There was an invalid IP address. Please check and try again.\n"
check = False
continue
if check == False:
continue
elif check == True:
break
ip_is_valid()
</code></pre>
<p>The issue I have is python will prompt for an IP file but follows with this error:</p>
<pre><code> File ".\validip.py", line 133, in <module>
ip_is_valid()
File ".\validip.py", line 41, in ip_is_valid
for ip in ip_list:
NameError: global name 'ip_list' is not defined
</code></pre>
<p>Even though I defined ip_list in the function, I am still getting that error. I am using "global" because there are other functions in this program that need visibility to the IP list variable.</p>
<pre><code>def create_threads():
threads = []
for ip in ip_list:
th = threading.Thread(target = open_ssh_conn, args = (ip,))
th.start()
threads.append(th)
for th in threads:
th.join()
create_threads()
</code></pre>
| 0 | 2016-08-28T23:02:03Z | 39,196,567 | <p>Before using variable <code>ip_list</code> as <code>global</code> variable, you must have to define it at the outer scope. For example in your case, you may make it run like: </p>
<pre><code>ip_list = []
def ip_is_valid():
# Some logic
global ip_list
# Some more logic
</code></pre>
<p>OR define it based on where you need <code>ip_list</code>.</p>
<p><strong>PS:</strong> You have to define <code>ip_list</code> before you call the <code>ip_is_valid</code> function</p>
| 2 | 2016-08-28T23:04:11Z | [
"python",
"networking"
] |
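A tiny self-contained sketch of the fix: define the name at module scope first, then rebind it from inside a function with global:

```python
ip_list = []  # defined at module scope before any function touches it

def load_ips():
    global ip_list  # rebind the module-level name
    ip_list = ['10.0.0.1', '192.168.1.5']

load_ips()
print(ip_list)  # ['10.0.0.1', '192.168.1.5']
```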
MySql is unable to find file in same directory? | 39,196,571 | <p>I'm trying to load a picture into a mysql database with the following code:</p>
<pre><code>cursor, db = get_db()
cursor.execute("UPDATE People SET photo = LOAD_FILE(\'myphoto.jpg\')")
cursor.close()
db.commit()
db.close()
</code></pre>
<p>I've been able to set photo to other values by replacing the <code>LOAD_FILE</code> phrase with something simpler, so I know that the problem stems from loading the picture. Meanwhile, myphoto.jpg is located in the directory that the code is being run from. MySql executes these commands without complaining, but does not actually put the picture in the db. What could be going wrong and how can I fix it?</p>
<p>Operating system is linux.</p>
| 1 | 2016-08-28T23:04:40Z | 39,196,734 | <p>Check whether the file can actually be loaded by MySQL or not:</p>
<pre><code>select load_file('/currentdir/file.blob');
+------------------------------------+
| load_file('/currentdir/file.blob') |
+------------------------------------+
|                               NULL |
+------------------------------------+
</code></pre>
<p>This could be because of user permissions.</p>
<pre><code>sudo chown mysql:mysql /currentdir/file.blob
</code></pre>
<blockquote>
  <p>I also read <a href="https://bugs.mysql.com/bug.php?id=38403" rel="nofollow">here</a> that there is a bug on Linux: this problem could
  be caused by <code>apparmor</code>, and disabling it might solve the problem. On
  Ubuntu, run <code>/etc/init.d/apparmor stop</code>.</p>
</blockquote>
<p>Other possible reasons:</p>
<ol>
<li><p>Check for the path</p></li>
<li><p>Check for the privileges</p></li>
<li><p>Does the function return NULL?</p></li>
<li><p>Try this <a href="http://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_load-file" rel="nofollow">query</a></p></li>
<li><p>Is the MySQL instance running on your machine?</p></li>
</ol>
<p>Also note that <code>LOAD_FILE</code> reads the file on the <em>server</em>, so a bare filename is resolved relative to the server process, not the directory your script runs from; prefer an absolute path. And on Linux you don't need the backslashes: instead of <code>LOAD_FILE(\'myphoto.jpg\')</code>, try <code>LOAD_FILE('myphoto.jpg')</code></p>
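<p>If server-side <code>LOAD_FILE</code> stays stubborn, a common workaround is to read the image in Python and send the bytes as a bound query parameter, so the MySQL server never has to touch your filesystem at all. A minimal sketch (the <code>cursor.execute</code> call itself is omitted; table and column names follow the question, and <code>build_photo_update</code> is a hypothetical helper):</p>

```python
import os
import tempfile

def build_photo_update(path, person_id):
    # Read the image client-side; the bytes travel as a bound parameter,
    # so server-side file permissions and secure_file_priv never apply.
    # (Hypothetical helper; table/column names mirror the question.)
    with open(path, 'rb') as f:
        blob = f.read()
    sql = "UPDATE People SET photo = %s WHERE id = %s"
    return sql, (blob, person_id)

# Demo with a throwaway file standing in for myphoto.jpg
tmp = tempfile.NamedTemporaryFile(delete=False, suffix='.jpg')
tmp.write(b'\xff\xd8fake-jpeg-bytes')
tmp.close()
sql, params = build_photo_update(tmp.name, 7)
print(sql)
os.unlink(tmp.name)
```

<p>With a DB-API driver such as MySQLdb you would then run <code>cursor.execute(sql, params)</code> followed by <code>db.commit()</code>.</p>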
| 0 | 2016-08-28T23:31:08Z | [
"python",
"mysql",
"file"
] |
Django ValueError: ModelForm has no model class specified | 39,196,601 | <p>I have the following code which complains about the following error:</p>
<p>ValueError: ModelForm has no model class specified.</p>
<pre><code>from django import forms
from straightred.models import StraightredTeam
from straightred.models import UserSelection
class SelectTwoTeams1(forms.Form):
campaignnoquery = UserSelection.objects.filter(user=349).order_by('-campaignno')[:1]
currentCampaignNo = campaignnoquery[0].campaignno
cantSelectTeams = UserSelection.objects.filter(campaignno=currentCampaignNo)
currentTeams = StraightredTeam.objects.filter(currentteam = 1).exclude(teamid__in=cantSelectTeams.values_list('teamselectionid', flat=True))
team_one = forms.ModelChoiceField(queryset = currentTeams)
team_two = forms.ModelChoiceField(queryset = currentTeams)
class SelectTwoTeams(forms.ModelForm):
used_his = forms.ModelMultipleChoiceField(queryset=UserSelection.objects.filter(user__id=1))
def __init__(self, user, *args, **kwargs):
super(SelectTwoTeams, self).__init__(*args, **kwargs)
self.fields['used_his'].queryset = User.objects.filter(pk = user.id)
</code></pre>
<p>Any help would be greatly appreciated. Many thanks, Alan.</p>
| 0 | 2016-08-28T23:09:58Z | 39,196,637 | <p>The error message is clearly telling you that you have not specified a model class.</p>
<p>For a ModelForm, you have to specify the model in an inner Meta class:</p>
<pre><code>class ProductForm(forms.ModelForm):
class Meta:
model = Product
</code></pre>
<p>If this isn't a form based on a model, don't inherit from forms.ModelForm, just use an ordinary forms.Form.</p>
| 1 | 2016-08-28T23:16:14Z | [
"python",
"django"
] |
Python - Efficiently working with large permutated lists | 39,196,603 | <p>I've written a short <strong>Anagram Solver</strong> script in <strong>Python 2.7</strong>. </p>
<pre><code>#Import Permutations Module
from itertools import permutations as perm
#Defined Functions
def check(x,y):
#Opens Dictionary File
with open('wordsEn.txt') as dictionary:
'''Checks the permutations against all of the dictionary words and appends
        any matching ones to one of the empty lists'''
for line in dictionary:
for i in x:
if i + '\n' == line:
y.append(i)
#Empty Lists that will be appended to later
anagram_perms = []
possible_words = []
#Anagram User Input
anagram = list(raw_input("Input scrambled word: ").lower())
#Creates single list items from the permutations, deletes duplicates
for char in perm(anagram):
anagram_perms.append("".join(char))
anagram_perms = list(set(anagram_perms))
#Uses the defined function
check(anagram_perms, possible_words)
#Prints the number of perms created, then prints the possible words beneath it
print len(anagram_perms)
print '\n'.join(possible_words)
</code></pre>
<p>It essentially takes a user-inputted anagram, generates and places into a list all possible combinations of the letters (using <code>itertools.permutations</code>), deleting any duplicates. It then checks each of these combinations against a 100000 word dictionary text file, placing any matching words into a list to be printed.</p>
<p>I've run into the issue that if a user inputs a word that is above 6 unique letters in length, the number of permutations generated causes hanging and crashes. 9 letter anagrams would be the typical input, however evidently these will output <strong>362880</strong> ('9!') permutations if all letters are different, which is unfeasible.</p>
<p>I've thought about a couple of <strong>potential solutions</strong>:</p>
<ol>
<li>Creating a number of empty lists which can only hold a certain number of appended permutations. Once these lists are 'full', permutations are added to the next one. Each of these lists is then subsequently checked against the text file.</li>
<li>Creating one empty list contained within a loop. Permutations are generated and appended to the list up to a certain workable number; the list is then used to check the text file before emptying itself and appending the next batch of permutations.</li>
<li>Some other method whereby a certain number of permutations are generated, then the process is paused while the currently generated ones are checked against the text file, and resumed and repeated.</li>
</ol>
<p>I'm fairly new to Python development however and don't really know if these would be possible or how I would go about implementing them into my code; and other questions on similar topics haven't really been able to help.</p>
<p>If anyone would like to see my code so far I'd be happy to condense it down and post it, but for the sake of not making this question any longer I'll leave it out unless it's requested. <strong>(Updated Above)</strong></p>
<p>Thanks!</p>
| 0 | 2016-08-28T23:10:12Z | 39,196,700 | <p>This should help. The <code>itertools.permutations</code> function returns an iterator. This means that the entire list is not stored in memory; rather you can call for the next value and it will compute what is needed on the fly.</p>
<pre><code>from itertools import permutations
with open('./wordlist.txt', 'r') as fp:
wordlist_str = fp.read()
wordlist = set(wordlist_str.lower().split('\n')) #use '\r\n' in Windows
def get_anagrams(word):
out = set()
for w in permutations(word.lower()):
if ''.join(w) in wordlist:
out.add(''.join(w))
return out
</code></pre>
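<p>Even with the lazy iterator, the total work still grows factorially with the number of distinct letters; a quick check of the permutation counts the question runs into:</p>

```python
import math

# Number of distinct orderings of n distinct letters is n!
for n in (6, 7, 8, 9):
    print(n, math.factorial(n))
```

<p>For 9 distinct letters this confirms the 362880 figure from the question, which is why checking each permutation against the word list becomes unfeasible.</p>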
| 0 | 2016-08-28T23:25:47Z | [
"python",
"python-2.7",
"permutation",
"itertools",
"anagram"
] |
Python - Efficiently working with large permutated lists | 39,196,603 | <p>I've written a short <strong>Anagram Solver</strong> script in <strong>Python 2.7</strong>. </p>
<pre><code>#Import Permutations Module
from itertools import permutations as perm
#Defined Functions
def check(x,y):
#Opens Dictionary File
with open('wordsEn.txt') as dictionary:
'''Checks the permutations against all of the dictionary words and appends
        any matching ones to one of the empty lists'''
for line in dictionary:
for i in x:
if i + '\n' == line:
y.append(i)
#Empty Lists that will be appended to later
anagram_perms = []
possible_words = []
#Anagram User Input
anagram = list(raw_input("Input scrambled word: ").lower())
#Creates single list items from the permutations, deletes duplicates
for char in perm(anagram):
anagram_perms.append("".join(char))
anagram_perms = list(set(anagram_perms))
#Uses the defined function
check(anagram_perms, possible_words)
#Prints the number of perms created, then prints the possible words beneath it
print len(anagram_perms)
print '\n'.join(possible_words)
</code></pre>
<p>It essentially takes a user-inputted anagram, generates and places into a list all possible combinations of the letters (using <code>itertools.permutations</code>), deleting any duplicates. It then checks each of these combinations against a 100000 word dictionary text file, placing any matching words into a list to be printed.</p>
<p>I've run into the issue that if a user inputs a word that is above 6 unique letters in length, the number of permutations generated causes hanging and crashes. 9 letter anagrams would be the typical input, however evidently these will output <strong>362880</strong> ('9!') permutations if all letters are different, which is unfeasible.</p>
<p>I've thought about a couple of <strong>potential solutions</strong>:</p>
<ol>
<li>Creating a number of empty lists which can only hold a certain number of appended permutations. Once these lists are 'full', permutations are added to the next one. Each of these lists is then subsequently checked against the text file.</li>
<li>Creating one empty list contained within a loop. Permutations are generated and appended to the list up to a certain workable number; the list is then used to check the text file before emptying itself and appending the next batch of permutations.</li>
<li>Some other method whereby a certain number of permutations are generated, then the process is paused while the currently generated ones are checked against the text file, and resumed and repeated.</li>
</ol>
<p>I'm fairly new to Python development however and don't really know if these would be possible or how I would go about implementing them into my code; and other questions on similar topics haven't really been able to help.</p>
<p>If anyone would like to see my code so far I'd be happy to condense it down and post it, but for the sake of not making this question any longer I'll leave it out unless it's requested. <strong>(Updated Above)</strong></p>
<p>Thanks!</p>
| 0 | 2016-08-28T23:10:12Z | 39,196,754 | <p>I think the best solution may be to not use permutations. It's much more likely than not that most generated permutations are not words - so it's a waste to generate them all.</p>
<p>You can consider pre-processing this dictionary to a dictionary of sorted letters to list of words that those letters consist of. Then, your anagram solver will be a simple lookup in a dictionary after sorting the input. </p>
<p>First, create the dictionary from your word list and save to a file:</p>
<pre><code>from collections import defaultdict
import json
word_list = ['tab', 'bat', 'cat', 'rat', ...] # 100k words
word_dict = defaultdict(list)
for word in word_list:
word_dict[''.join(sorted(word))].append(word)
with open('word_dict.json', 'w') as f:
f.write(json.dumps(dict(word_dict)))
</code></pre>
<p>Then, when running your anagram code, load the dictionary and use it to look up the sorted input:</p>
<pre><code>import json
empty_list = []
with open('word_dict.json', 'r') as f:
word_dict = json.loads(f.read())
while True:
anagram = raw_input('Enter in an anagram: ')
sorted_anagram = ''.join(sorted(anagram))
print word_dict.get(sorted_anagram, empty_list)
</code></pre>
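<p>The core idea above (keying every word by its sorted letters) can also be exercised end-to-end without the JSON round-trip; a self-contained sketch with a made-up word list:</p>

```python
from collections import defaultdict

def build_index(words):
    # All anagrams of a word share the same sorted-letters key
    index = defaultdict(list)
    for w in words:
        index[''.join(sorted(w.lower()))].append(w)
    return index

def solve(index, scrambled):
    # One sort plus one dict lookup, instead of up to 9! permutations
    return index.get(''.join(sorted(scrambled.lower())), [])

words = ['bat', 'tab', 'cat', 'listen', 'silent', 'enlist']
idx = build_index(words)
print(solve(idx, 'tba'))
print(solve(idx, 'TINSEL'))
```

<p>Sorting the scrambled input is O(k log k) for a k-letter word, so a 9-letter anagram costs one tiny sort and one hash lookup rather than up to 362880 dictionary checks.</p>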
| 1 | 2016-08-28T23:34:39Z | [
"python",
"python-2.7",
"permutation",
"itertools",
"anagram"
] |
Python get property value in dict for large dict | 39,196,632 | <p>I have a very large dictionary, like this:</p>
<pre><code>d['property1']['property2'][0]['property3']['property4']['property5']['property6']
</code></pre>
<p>I need to get <code>property6</code>. What's the simplest way for me to get this value?</p>
<p>I was thinking something like this would work:</p>
<pre><code>d.lavel6[0]['property6']
</code></pre>
| -2 | 2016-08-28T23:15:36Z | 39,196,664 | <p>Unfortunately, there is no built-in way to get the value of a key from a nested dict based on levels. But you may write a function for your specific scenario to simplify it. For example:</p>
<pre><code>def get_value_from_dict(my_dict, level, key):
return my_dict['property1']['property2'][level]['property3']['property4']['property5'][key]
</code></pre>
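<p>If the access pattern varies, a small hand-rolled helper that walks an arbitrary key path is easier to maintain than one hard-coded function per shape. A sketch (the helper name and the nested structure are made up for illustration):</p>

```python
from functools import reduce

def get_path(d, path, default=None):
    # Walk a nested dict/list along `path`; return `default` on any miss
    try:
        return reduce(lambda node, key: node[key], path, d)
    except (KeyError, IndexError, TypeError):
        return default

d = {'property1': {'property2': [{'property3': {'property4': 'deep'}}]}}
print(get_path(d, ['property1', 'property2', 0, 'property3', 'property4']))
print(get_path(d, ['property1', 'nope'], default='missing'))
```

<p>Integer steps index into lists and other keys into dicts, so the same helper handles the mixed <code>[0]</code> level from the question.</p>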
| 1 | 2016-08-28T23:20:12Z | [
"python"
] |
Issues with Python pandas: read_html and python3-lxml installation | 39,196,648 | <p>I'm trying to run the following code, to no avail. To my knowledge, there aren't any syntax errors.</p>
<pre><code>import quandl
import pandas as pd
fifty_states =pd.read_html('https://simple.wikipedia.org/wiki/List_of_U.S._states')
print(fifty_states)
</code></pre>
<p>I'm getting the following error when I run this code:</p>
<blockquote>
<p>Traceback (most recent call last):</p>
<p>File "C:/Users/Dave/Documents/Python Files/helloworld.py", line 15, in
fiddy_states = pd.read_html('<a href="http://simple.wikipedia.org/wiki/List_of_U.S._states" rel="nofollow">http://simple.wikipedia.org/wiki/List_of_U.S._states</a>')</p>
<p>File "C:\Python35\lib\site-packages\pandas\io\html.py", line 874, in read_html
parse_dates, tupleize_cols, thousands, attrs, encoding)</p>
<p>File "C:\Python35\lib\site-packages\pandas\io\html.py", line 726, in _parse
parser = _parser_dispatch(flav)</p>
<p>File "C:\Python35\lib\site-packages\pandas\io\html.py", line 685, in _parser_dispatch
raise ImportError("lxml not found, please install it")</p>
<p>ImportError: lxml not found, please install it</p>
</blockquote>
<p>Not too sure why this is occurring, as I (should) have all the packages required to run this code. I have problems installing lxml and python3-lxml, as the packages fail to install. As a backup, I've installed the following:</p>
<blockquote>
<p>python-dev libxml2-dev libxslt1-dev zlib1g-dev</p>
</blockquote>
<p>in addition to 'html5lib', which I've read is a suitable replacement to lxml.</p>
<p>Not sure what else to do at this point, since searching for similar corrections (i.e. installing lxml) don't apply to me (I can't install lxml in any format via pip on the command line).</p>
<p>Any help is much appreciated.</p>
<p>Edit: It appears that <code>lxml</code> was never installed on my computer. It's weird, because I'm unable to install it via <code>pip install lxml</code>. Here're the error logs I get when attempting an install:</p>
<pre><code>Collecting lxml
Using cached lxml-3.6.4.tar.gz
Building wheels for collected packages: lxml
Running setup.py bdist_wheel for lxml ... error
Complete output from command c:\python35\python.exe -u -c "import setuptools,
tokenize;__file__='C:\\Users\\Dwang\\AppData\\Local\\Temp\\pip-build-738bf61u\\l
xml\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().rep
lace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d C:\Users\Dwang\AppData\Lo
cal\Temp\tmpm9z4yol6pip-wheel- --python-tag cp35:
Building lxml version 3.6.4.
Building without Cython.
ERROR: b"'xslt-config' is not recognized as an internal or external command,\r
\noperable program or batch file.\r\n"
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\lxml
copying src\lxml\builder.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\cssselect.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\doctestcompare.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\ElementInclude.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\sax.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\usedoctest.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\_elementpath.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\__init__.py -> build\lib.win-amd64-3.5\lxml
creating build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\__init__.py -> build\lib.win-amd64-3.5\lxml\includes
creating build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\builder.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\clean.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\defs.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\diff.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\formfill.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\html5parser.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\soupparser.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\__init__.py -> build\lib.win-amd64-3.5\lxml\html
creating build\lib.win-amd64-3.5\lxml\isoschematron
copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-3.5\lxml\iso
schematron
copying src\lxml\lxml.etree.h -> build\lib.win-amd64-3.5\lxml
copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-3.5\lxml
copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\config.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-3.5\lxml\include
s
copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-3.5\lxml\incl
udes
copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-3.5\lxml\inclu
des
copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-3.5\lxml\inclu
des
copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-3.5\lxml\include
s
copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-3.5\lxml\include
s
copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-3.5\lxml\includ
es
copying src\lxml\includes\xmlschema.pxd -> build\lib.win-amd64-3.5\lxml\includ
es
copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-3.5\lxml\include
s
copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-3.5\lxml\inclu
des
creating build\lib.win-amd64-3.5\lxml\isoschematron\resources
creating build\lib.win-amd64-3.5\lxml\isoschematron\resources\rng
copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.w
in-amd64-3.5\lxml\isoschematron\resources\rng
creating build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-a
md64-3.5\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-a
md64-3.5\lxml\isoschematron\resources\xsl
creating build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso-schematr
on-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract
_expand.xsl -> build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso-sche
matron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_inc
lude.xsl -> build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso-schemat
ron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematr
on_message.xsl -> build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso-s
chematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematr
on_skeleton_for_xslt1.xsl -> build\lib.win-amd64-3.5\lxml\isoschematron\resource
s\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for
_xslt1.xsl -> build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso-schem
atron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -
> build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
error: Unable to find vcvarsall.bat
----------------------------------------
Failed building wheel for lxml
Running setup.py clean for lxml
Failed to build lxml
Installing collected packages: lxml
Running setup.py install for lxml ... error
Complete output from command c:\python35\python.exe -u -c "import setuptools
, tokenize;__file__='C:\\Users\\Dwang\\AppData\\Local\\Temp\\pip-build-738bf61u\
\lxml\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().r
eplace('\r\n', '\n'), __file__, 'exec'))" install --record C:\Users\Dwang\AppDat
a\Local\Temp\pip-4_tf2u3a-record\install-record.txt --single-version-externally-
managed --compile:
Building lxml version 3.6.4.
Building without Cython.
ERROR: b"'xslt-config' is not recognized as an internal or external command,
\r\noperable program or batch file.\r\n"
** make sure the development packages of libxml2 and libxslt are installed *
*
Using build configuration of libxslt
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\lxml
copying src\lxml\builder.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\cssselect.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\doctestcompare.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\ElementInclude.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\sax.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\usedoctest.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\_elementpath.py -> build\lib.win-amd64-3.5\lxml
copying src\lxml\__init__.py -> build\lib.win-amd64-3.5\lxml
creating build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\__init__.py -> build\lib.win-amd64-3.5\lxml\includ
es
creating build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\builder.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\clean.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\defs.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\diff.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\formfill.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\html5parser.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\soupparser.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-3.5\lxml\html
copying src\lxml\html\__init__.py -> build\lib.win-amd64-3.5\lxml\html
creating build\lib.win-amd64-3.5\lxml\isoschematron
copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-3.5\lxml\i
soschematron
copying src\lxml\lxml.etree.h -> build\lib.win-amd64-3.5\lxml
copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-3.5\lxml
copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\config.pxd -> build\lib.win-amd64-3.5\lxml\include
s
copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-3.5\lxml\inclu
des
copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-3.5\lxml\in
cludes
copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-3.5\lxml\inc
ludes
copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-3.5\lxml\includ
es
copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-3.5\lxml\inc
ludes
copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-3.5\lxml\inclu
des
copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-3.5\lxml\inclu
des
copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-3.5\lxml\incl
udes
copying src\lxml\includes\xmlschema.pxd -> build\lib.win-amd64-3.5\lxml\incl
udes
copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-3.5\lxml\includes
copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-3.5\lxml\inclu
des
copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-3.5\lxml\inc
ludes
creating build\lib.win-amd64-3.5\lxml\isoschematron\resources
creating build\lib.win-amd64-3.5\lxml\isoschematron\resources\rng
copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib
.win-amd64-3.5\lxml\isoschematron\resources\rng
creating build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win
-amd64-3.5\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win
-amd64-3.5\lxml\isoschematron\resources\xsl
creating build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso-schema
tron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstra
ct_expand.xsl -> build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso-sc
hematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_i
nclude.xsl -> build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso-schem
atron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schema
tron_message.xsl -> build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso
-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schema
tron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-3.5\lxml\isoschematron\resour
ces\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_f
or_xslt1.xsl -> build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso-sch
ematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt
-> build\lib.win-amd64-3.5\lxml\isoschematron\resources\xsl\iso-schematron-xslt
1
running build_ext
building 'lxml.etree' extension
error: Unable to find vcvarsall.bat
----------------------------------------
Command "c:\python35\python.exe -u -c "import setuptools, tokenize;__file__='C:\
\Users\\Dwang\\AppData\\Local\\Temp\\pip-build-738bf61u\\lxml\\setup.py';exec(co
mpile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __
file__, 'exec'))" install --record C:\Users\Dwang\AppData\Local\Temp\pip-4_tf2u3
a-record\install-record.txt --single-version-externally-managed --compile" faile
d with error code 1 in C:\Users\Dwang\AppData\Local\Temp\pip-build-738bf61u\lxml
\
</code></pre>
| 2 | 2016-08-28T23:18:03Z | 39,197,125 | <p>From what I understand and according to the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html" rel="nofollow">docs</a>, if <code>read_html()</code> fails to use <code>lxml</code>, it should fall back to <code>html5lib</code>, but it looks like it does not happen in your case and an error is thrown.</p>
<p>Try to <em>explicitly</em> state the <code>flavor</code>:</p>
<pre><code>fifty_states = pd.read_html('https://simple.wikipedia.org/wiki/List_of_U.S._states', flavor='html5lib')
</code></pre>
| 1 | 2016-08-29T00:43:50Z | [
"python",
"pandas",
"lxml"
] |
Python imaplib deleting multiple emails gmail | 39,196,695 | <p>my code look like this...</p>
<pre><code>import imaplib
import email
obj = imaplib.IMAP4_SSL('imap.gmail.com','993')
obj.login('user','pass')
obj.select('inbox')
delete = []
for i in range(1, 10):
typ, msg_data = obj.fetch(str(i), '(RFC822)')
print i
x = i
for response_part in msg_data:
if isinstance(response_part, tuple):
msg = email.message_from_string(response_part[1])
for header in [ 'subject', 'to', 'from', 'Received' ]:
print '%-8s: %s' % (header.upper(), msg[header])
                if header == 'from' and "<sender's email address>" in msg[header]:
delete.append(x)
string = str(delete[0])
for xx in delete:
if xx != delete[0]:
print xx
string = string + ', '+ str(xx)
print string
obj.select('inbox')
obj.uid('STORE', string , '+FLAGS', '(\Deleted)')
obj.expunge()
obj.close()
obj.logout()
</code></pre>
<p>the error I get is</p>
<pre><code>Traceback (most recent call last):
File "del_email.py", line 31, in <module>
obj.uid('STORE', string , '+FLAGS', '(\Deleted)')
File "C:\Tools\Python(x86)\Python27\lib\imaplib.py", line 773, in uid
typ, dat = self._simple_command(name, command, *args)
File "C:\Tools\Python(x86)\Python27\lib\imaplib.py", line 1088, in _simple_command
return self._command_complete(name, self._command(name, *args))
File "C:\Tools\Python(x86)\Python27\lib\imaplib.py", line 918, in _command_complete
raise self.error('%s command error: %s %s' % (name, typ, data))
imaplib.error: UID command error: BAD ['Could not parse command']
</code></pre>
<p>I am looking for a way to delete multiple emails at once using imaplib or another module. I am looking for the simplest example to go off of. This example was given at this link here <a href="http://stackoverflow.com/questions/1777264/using-python-imaplib-to-delete-an-email-from-gmail">Using python imaplib to "delete" an email from Gmail?</a> the last answer's example. It's not working correctly. I can however get the 1st example to work to delete one email every time the script is run. I'd rather do it with a multiple delete than run the script several thousand times. My main goal is to delete multiple emails through imaplib; any workarounds or other working modules or examples would be appreciated. </p>
| 0 | 2016-08-28T23:24:59Z | 39,217,226 | <p>You might find this a bit easier using <a href="https://pypi.python.org/pypi/IMAPClient/1.0.2" rel="nofollow">IMAPClient</a> as it takes care of a lot more of low level protocol aspects for you.</p>
<p>Using IMAPClient your code would look something like:</p>
<pre><code>from imapclient import IMAPClient
import email
obj = IMAPClient('imap.gmail.com', ssl=True)
obj.login('user','pass')
obj.select('inbox')
delete = []
msg_ids = obj.search(('NOT', 'DELETED'))
for msg_id in msg_ids:
msg_data = obj.fetch(msg_id, ('RFC822',))
msg = email.message_from_string(msg_data[msg_id]['RFC822'])
for header in [ 'subject', 'to', 'from', 'Received' ]:
print '%-8s: %s' % (header.upper(), msg[header])
if header == 'from' and '<senders email address>' in msg[header]:
            delete.append(msg_id)
obj.delete_messages(delete)
obj.expunge()
obj.close()
obj.logout()
</code></pre>
<p>This could be made more efficient by fetching multiple messages in a single fetch() call rather than fetching them one at a time but I've left that out for clarity.</p>
<p>If you're just wanting to filter by the sender's address you can get the IMAP server to do the filtering for you. This avoids the need to download the message bodies and makes the process a whole lot faster.</p>
<p>This would look like:</p>
<pre><code>from imapclient import IMAPClient
obj = IMAPClient('imap.gmail.com', ssl=True)
obj.login('user','pass')
obj.select('inbox')
msg_ids = obj.search(('NOT', 'DELETED', 'FROM', '<senders email address>'))
obj.delete_messages(msg_ids)
obj.expunge()
obj.close()
obj.logout()
</code></pre>
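<p>As for the <code>BAD ['Could not parse command']</code> in the question's traceback: plain imaplib passes the message-set string to the server verbatim, and an IMAP sequence set allows no spaces, so <code>'1, 2, 3'</code> fails where <code>'1,2,3'</code> parses. (The question also feeds sequence numbers to a <code>UID</code> command; the two numbering schemes must not be mixed.) A sketch of building a valid set, with <code>message_set</code> a hypothetical helper:</p>

```python
def message_set(ids):
    # IMAP sequence sets are comma-separated with NO spaces: "3,5,9"
    return ','.join(str(i) for i in ids)

delete = [3, 5, 9]
print(message_set(delete))
# usable as: obj.store(message_set(delete), '+FLAGS', r'(\Deleted)')
```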
<p>Disclaimer: I'm the author and maintainer of IMAPClient.</p>
| 0 | 2016-08-30T00:41:46Z | [
"python",
"email",
"imaplib",
"uid"
] |
Optimizing dictionaries in Python | 39,196,788 | <p>My question is different from the one asked <a href="http://stackoverflow.com/questions/110259/which-python-memory-profiler-is-recommended">here</a>. Primarily I am asking what improvements could be made to code containing dictionaries. However, the link explains about memory profilers, which will be my next step.</p>
<p>I have the following two sets of code to achieve the same thing.</p>
<p>First one,</p>
<pre><code>a={1: 'a', 2: 'b', 3: 'c', 4: 'd'}
b=[a[x] for x in a if x in (1,2,3)]
b=['a', 'b', 'c']
</code></pre>
<p>Second one,</p>
<pre><code>a={1: 'a', 2: 'b', 3: 'c', 4: 'd'}
c=[a[x] for x in set(a.keys()) & set([1,2,3])]
b=['a', 'b', 'c']
</code></pre>
<p>I would like to know which one works better in terms of memory optimized methods, and for large sets of data.</p>
<p>Thanks in advance!</p>
| 0 | 2016-08-28T23:41:07Z | 39,196,838 | <p>Of these two, the second is better. But overall, both methods could be improved. If I had to do something similar, I would have done:</p>
<pre><code>>>> a = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
>>> [a[i] for i in (1,2,3) if i in a]
['a', 'b', 'c']
</code></pre>
| 0 | 2016-08-28T23:49:09Z | [
"python",
"python-2.7"
] |
Optimizing dictionaries in Python | 39,196,788 | <p>My question is different from the one asked <a href="http://stackoverflow.com/questions/110259/which-python-memory-profiler-is-recommended">here</a>. Primarily I am asking what improvements could be made to code containing dictionaries. However, the link explains about memory profilers, which will be my next step.</p>
<p>I have the following two sets of code to achieve the same thing.</p>
<p>First one,</p>
<pre><code>a={1: 'a', 2: 'b', 3: 'c', 4: 'd'}
b=[a[x] for x in a if x in (1,2,3)]
b=['a', 'b', 'c']
</code></pre>
<p>Second one,</p>
<pre><code>a={1: 'a', 2: 'b', 3: 'c', 4: 'd'}
c=[a[x] for x in set(a.keys()) & set([1,2,3])]
b=['a', 'b', 'c']
</code></pre>
<p>I would like to know which one is better in terms of memory use, especially for large data sets.</p>
<p>Thanks in advance!</p>
| 0 | 2016-08-28T23:41:07Z | 39,196,854 | <p>If you're optimizing for memory use, generators are often a good tool. For example:</p>
<pre><code>def get_keys(mapping, keys):
for key in keys:
try:
yield mapping[key]
except KeyError:
continue
</code></pre>
<p>On your example:</p>
<pre><code>list(get_keys(a, (1, 2, 3)))
['a', 'b', 'c']
</code></pre>
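<p>To see the memory difference concretely, here is a small sketch (the exact byte counts are CPython-specific): the generator object stays a fixed size while the equivalent list grows with the data.</p>

```python
import sys

mapping = {i: i * 2 for i in range(50000)}
keys = list(range(50000))

gen = (mapping[k] for k in keys if k in mapping)   # lazy: nothing computed yet
lst = [mapping[k] for k in keys if k in mapping]   # eager: every value held in memory

print(sys.getsizeof(gen))   # small, roughly constant regardless of data size
print(sys.getsizeof(lst))   # grows with the number of results
```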
| 1 | 2016-08-28T23:52:38Z | [
"python",
"python-2.7"
] |
IndexError : index out of bounds | 39,196,798 | <p>I have implemented MultinomialNB but I get this message. Please help me to solve it. Here my code : </p>
<pre><code>kf = KFold(len(X), n_folds=2, shuffle=True, random_state=9999)
model_train_index = []
model_test_index = []
model = 0
for k, (index_train, index_test) in enumerate(kf):
    X_train, X_test, y_train, y_test = X.ix[index_train,:], X.ix[index_test,:],y[index_train], y[index_test]
    clf = MultinomialNB(alpha=0.1).fit(X_train, y_train)
    score = clf.score(X_test, y_test)
    f1score = f1_score(y_test, clf.predict(X_test))
    precision = precision_score(y_test, clf.predict(X_test))
    recall = recall_score(y_test, clf.predict(X_test))
    print('Model %d has accuracy %f with | f1score: %f | precision: %f | recall : %f'%(k,score, f1score, precision, recall))
    model_train_index.append(index_train)
    model_test_index.append(index_test)
    model+=1
</code></pre>
<p>and then I get result like this : </p>
<pre><code>IndexError                                Traceback (most recent call last)
<ipython-input-3-df0b24edb687> in <module>()
      5
      6 for k, (index_train, index_test) in enumerate(kf):
----> 7     X_train, X_test, y_train, y_test = X.ix[index_train,:], X.ix[index_test,:],y[index_train], y[index_test]
      8     clf = MultinomialNB(alpha=0.1).fit(X_train, y_train)
      9     score = clf.score(X_test, y_test)

IndexError: index 100 is out of bounds for axis 0 with size 100
</code></pre>
| 0 | 2016-08-28T23:43:33Z | 39,196,879 | <p>Python uses zero-based indexing, so if the zeroth dimension of <code>X.ix[index_train,:]</code> or <code>y[index_train]</code> is 100, the maximum value of <code>index_train</code> that would be valid is 99. Likewise for <code>index_test</code>.</p>
<p>Something in</p>
<pre><code>kf = KFold(len(X), n_folds=2, shuffle=True, random_state=9999)
</code></pre>
<p>is causing one of those indices to be too large for one of those arrays at the time you enumerate(kf).</p>
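<p>A stdlib-only illustration of the failure mode (hypothetical data, mirroring the traceback): 100 samples give valid positions 0 through 99, and position 100 raises. A common cause in pandas code like the question's is an object whose index was not reset after filtering; <code>X.reset_index(drop=True)</code> or purely positional <code>.iloc</code> indexing avoids it.</p>

```python
X = list(range(100))        # 100 samples -> valid zero-based positions 0..99
print(X[99])                # highest valid index

try:
    X[100]                  # one past the end, as in the traceback
except IndexError as exc:
    print("IndexError:", exc)
```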
| 0 | 2016-08-28T23:56:25Z | [
"python",
"function"
] |
Django return objects to HTML from view with queryset? | 39,196,832 | <p>I'm trying to show some info in HTML from a queryset in Django, but I don't know the best way to do it.</p>
<p>The idea is that when I open the page, it shows me the name of the user and the competitions related to it. The user can create competitions in the Django admin.</p>
<p>My URL is this:</p>
<pre><code>url(r'^home/(?P<company_name>\w+)', HomeView.as_view(), name='home'),
</code></pre>
<p>The model.</p>
<pre><code>class Competition(models.Model):
name = models.CharField(max_length=200,null=False)
image = models.CharField(max_length=200, null=False)
url = models.CharField(max_length=200, null=False)
startingDate = models.DateTimeField(null=False)
deadline = models.DateTimeField(null=False)
description = models.CharField(max_length=200, null=False)
user = models.ForeignKey(User, null=False)
</code></pre>
<p>In views.py</p>
<pre><code>class HomeView(ListView):
model = User, Competition
template_name = 'home.html'
context_object_name = 'company'
def get_queryset(self, **kwargs):
company = self.kwargs['company_name']
try:
queryset = User.objects.filter(username__exact=company).get()
except User.DoesNotExist:
queryset = None
return queryset
</code></pre>
<p>The code above returns the User when I enter a URL like this: <a href="http://127.0.0.1:8000/home/diego/" rel="nofollow">http://127.0.0.1:8000/home/diego/</a></p>
<p>The home HTML shows me the name of the User, for example:</p>
<pre><code>{% if company %} <div class="jumbotron"> <div class="container">
{{company.username}} </div></div> {% endif %}
</code></pre>
<p>But, I need to show the competitions created by the User with a 'for' in HTML. </p>
<p>I tried with chains return the two query set but I don't work it.</p>
<p>Also this.</p>
<pre><code>queryset2 = Competition.objects.filter(user__username__exact=company).get()
</code></pre>
<p>The <code>get()</code> shows me an error saying more than one object was returned. That is logical, because the user has 3 competitions related to it.</p>
<p>Could someone give a small example, or any idea to resolve my question?</p>
<p>Thanks a lot </p>
<p>I tried this, but the 'User' object is not iterable (obviously, it is just one object):</p>
<pre><code>class HomeView(ListView):
model = User
template_name = 'home.html'
context_object_name = 'company'
def get_queryset(self, **kwargs):
company = self.kwargs['company_name']
try:
queryset = User.objects.filter(username__exact=company).get()
queryset2 = Competition.objects.filter(user__username__exact=company)
result_list = list(chain(queryset, queryset2))
except User.DoesNotExist:
result_list = None
return result_list
</code></pre>
<p>I kept trying, deleted the <code>get()</code> from the first queryset, and the data is returned to the HTML.</p>
<p>How can I access the fields of the list in the HTML by class? If I print the variable 'company', the result is this:</p>
<pre><code>[<User: diego>, <Competition: test1>, <Competition: test2>]
</code></pre>
<p>But with a 'for' loop I just obtain the string values. How can I differentiate the 'User' and 'Competition' objects?</p>
<pre><code> {%for compa in company%}
{{compa}}
{%endfor%}
</code></pre>
<p>the result is:</p>
<pre><code>diego
test1
test2
</code></pre>
| 0 | 2016-08-28T23:48:08Z | 39,198,627 | <p>You don't have to return the different querysets from <code>get_queryset</code>; that gets complicated. <code>ListView</code> has a method called <code>get_context_data</code> which you can override to add the extra objects or data you want.</p>
<p>For example, just write:</p>
<pre><code>class HomeView(ListView):
model = User, Competition
template_name = 'home.html'
context_object_name = 'company'
def get_queryset(self, **kwargs):
company = self.kwargs['company_name']
try:
queryset = User.objects.filter(username__exact=company).get()
except User.DoesNotExist:
queryset = None
return queryset
def get_context_data(self, **kwargs):
context = super(HomeView, self).get_context_data(**kwargs)
        company = self.kwargs['company_name']
        queryset2 = Competition.objects.filter(user__username__exact=company)
context['competitions'] = queryset2
return context
</code></pre>
<p>Here you will get <code>company</code> and <code>competitions</code> separately; use them in your templates. Just try it; it should work.</p>
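<p>Framework-free, the pattern above boils down to this sketch (class and data names are illustrative stand-ins for Django's machinery): the subclass asks <code>super()</code> for the base context dict and then adds its own keys.</p>

```python
class BaseListView:
    def get_context_data(self, **kwargs):
        # stand-in for Django's ListView: builds the base context
        return {"company": "diego", **kwargs}

class HomeView(BaseListView):
    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context["competitions"] = ["test1", "test2"]   # stand-in for queryset2
        return context

ctx = HomeView().get_context_data()
print(sorted(ctx))   # ['company', 'competitions']
```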
| 0 | 2016-08-29T04:46:02Z | [
"python",
"html",
"django",
"django-queryset"
] |
Why can't use the newly installed TensorFlow library using python 3? | 39,196,882 | <p>I recently upgraded to python 3.5 and was trying to use TensorFlow. I followed the instructions <a href="https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html" rel="nofollow">https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html</a> and installed tensorflow with GPU enabled:</p>
<pre><code># Mac OS X, GPU enabled, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0rc0-py3-none-any.whl
</code></pre>
<p>however, when I tried using tensorflow it threw a really strange error:</p>
<pre><code>>>> import tensorflow
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/user/path/venv2/lib/python3.5/site-packages/tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "/Users/user/path/venv/lib/python3.5/site-packages/tensorflow/python/__init__.py", line 48, in <module>
from tensorflow.python import pywrap_tensorflow
File "/Users/user/path/venv/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/Users/user/path/venv/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
File "/Users/user/path/venv/lib/python3.5/imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "/Users/user/path/venv/lib/python3.5/imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: dlopen(/Users/user/path/venv/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so, 10): Library not loaded: @rpath/libcudart.7.5.dylib
Referenced from: /Users/user/path/venv/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so
Reason: image not found
</code></pre>
<p>why is it throwing that error? I know that I do not have a GPU but I was under the impression from other conversation on the TensorFlow page that I don't require one to use the version that can potentially use a GPU.</p>
| 0 | 2016-08-28T23:57:25Z | 39,212,005 | <p>You need to export the path to the folder containing the CUDA dylibs (such as <code>libcudart</code>) via <code>export DYLD_LIBRARY_PATH</code> in your <code>~/.bash_profile</code>.</p>
| 0 | 2016-08-29T17:36:03Z | [
"python",
"tensorflow"
] |
How to get the total number of entries contained in a TFRecord file? | 39,196,955 | <p>I am able to write and read TFrecord files with tensorflow. How can I quickly get the total number of entries contained in a TFRecord file? Is there any API to get the count?</p>
| 1 | 2016-08-29T00:12:16Z | 39,197,123 | <p>The <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/python_io.html#tfrecords-format-details" rel="nofollow">TFRecords file format</a> is basically a sequence of structures in the form:</p>
<pre><code>struct TFRecords {
uint64_t length;
uint32_t length_checksum;
uint8_t data[length];
uint32_t data_checksum;
} × N
</code></pre>
<p>There is no metadata to tell how many entries there are, so the only reliable way to get the total is to read the whole file (thus there are no APIs to quickly get the total) and then call <code>num_records_produced()</code>.</p>
<p>You could write custom metadata containing that number as the first record when producing the TFRecords.</p>
<p>If you are sure that every record has the same length, then you could get the number of entries as <code>decompressed_file_size / (length_of_each_record + 16)</code>.</p>
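<p>A stdlib sketch of the "read the whole file" approach described above, using the length-prefixed layout (checksums are skipped and a little-endian length is assumed, so treat this as illustrative rather than a drop-in TFRecord reader):</p>

```python
import io
import struct

def count_records(stream):
    """Count length-prefixed records: u64 length, u32 crc, payload, u32 crc."""
    count = 0
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return count
        (length,) = struct.unpack("<Q", header)
        stream.read(4)          # length checksum (ignored here)
        stream.read(length)     # record payload
        stream.read(4)          # payload checksum (ignored here)
        count += 1

# build an in-memory file holding three records in that layout
buf = io.BytesIO()
for payload in (b"a", b"bb", b"ccc"):
    buf.write(struct.pack("<Q", len(payload)) + b"\x00" * 4 + payload + b"\x00" * 4)
buf.seek(0)
print(count_records(buf))   # 3
```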
| 0 | 2016-08-29T00:43:38Z | [
"python",
"tensorflow"
] |
Error when updating function-based views to class-based views in Django Rest Framework | 39,197,011 | <p>I've been in the process of running through the tutorials on Django Rest Framework to build a simple todo list app. </p>
<p>When I transitioned from a function-based view to a class-based view, I started getting a stack trace to Django and all http calls started failing.</p>
<p>I'm using <code>virtualenv</code> to isolate dependencies, and using Python 3.5.2.</p>
<p>Anyone have any ideas about what's going on, or how to resolve this error?</p>
<p>I'm chalking this up to a configuration error or a version mismatch, but I'm not sure where to start.</p>
<p>/views.py:</p>
<pre><code>from django.http import Http404
from list.models import List
from list.serializers import ListSerializer
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status
class ListsView(APIView):
"""
List all lists.
"""
def get(self, request, format=None):
lists = List.objects.all()
serializer = ListSerializer(lists, many=True)
return Response(serializer.data)
...
</code></pre>
<p>/urls.py</p>
<pre><code>from django.conf.urls import url
from rest_framework.urlpatterns import format_suffix_patterns
from list import views
urlpatterns = [
url(r'^lists/$', views.ListsView),
url(r'^lists/(?P<pk>[0-9]+)/$', views.ListView),
url(r'^lists/(?P<list>[0-9]+)/items/$', views.ListItemsView),
url(r'^lists/(?P<list>[0-9]+)/items/(?P<pk>[0-9]+)/$', views.ListItemView),
]
urlpatterns = format_suffix_patterns(urlpatterns)
</code></pre>
<p>Input:</p>
<pre><code>curl http://localhost:8000/lists/
</code></pre>
<p>Trace:</p>
<pre><code>Internal Server Error: /lists/
Traceback (most recent call last):
File "<pwd>/env/lib/python3.5/site-packages/django/core/handlers/exception.py", line 39, in inner
response = get_response(request)
File "<pwd>/env/lib/python3.5/site-packages/django/core/handlers/base.py", line 249, in _legacy_get_response
response = self._get_response(request)
File "<pwd>/env/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "<pwd>/env/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
TypeError: __init__() takes 1 positional argument but 2 were given
[29/Aug/2016 00:03:32] "GET /lists/ HTTP/1.1" 500 61615
</code></pre>
<p>Env:</p>
<pre><code>python 3.5.2
</code></pre>
<p>Requirements:</p>
<pre><code>Django==1.10
django-lint==2.0.4
djangorestframework==3.4.6
logilab-astng==0.24.3
logilab-common==1.2.2
psycopg2==2.6.2
Pygments==2.1.3
pylint==0.28.0
six==1.10.0
</code></pre>
| 0 | 2016-08-29T00:22:09Z | 39,207,440 | <p>@SergeyGornostaev had the right intuition to look at and post <code>urls.py</code>.</p>
<p>In the refactor from function-based views to class-based views, I neglected to call <code>as_view()</code> on the views when I updated the routing.</p>
<p>Before: <code>url(<pattern>, views.ListsView)</code></p>
<p>After: <code>url(<pattern>, views.ListView.as_view())</code></p>
<p>This resolved the error.</p>
<hr>
<p>Full /urls.py before:</p>
<pre><code>from django.conf.urls import url
from rest_framework.urlpatterns import format_suffix_patterns
from list import views
urlpatterns = [
url(r'^lists/$', views.ListsView),
url(r'^lists/(?P<pk>[0-9]+)/$', views.ListView),
url(r'^lists/(?P<list>[0-9]+)/items/$', views.ListItemsView),
url(r'^lists/(?P<list>[0-9]+)/items/(?P<pk>[0-9]+)/$', views.ListItemView),
]
urlpatterns = format_suffix_patterns(urlpatterns)
</code></pre>
<p>Full /urls.py after:</p>
<pre><code>from django.conf.urls import url
from rest_framework.urlpatterns import format_suffix_patterns
from list import views
urlpatterns = [
url(r'^lists/$', views.ListsView.as_view()),
url(r'^lists/(?P<pk>[0-9]+)/$', views.ListView.as_view()),
url(r'^lists/(?P<list>[0-9]+)/items/$', views.ListItemsView.as_view()),
url(r'^lists/(?P<list>[0-9]+)/items/(?P<pk>[0-9]+)/$', views.ListItemView.as_view()),
]
urlpatterns = format_suffix_patterns(urlpatterns)
</code></pre>
| 0 | 2016-08-29T13:24:18Z | [
"python",
"django",
"django-rest-framework",
"python-3.5"
] |
Python - list index out of range - genetic algorithm | 39,197,023 | <p>I'm having problems with my code. I know this problem is simple, but I just can't figure out how to solve it. I'd really appreciate it if someone could tell me what I'm doing wrong:</p>
<pre><code> import random
from math import *
def create_population(dim,pn):
t = log(factorial(dim**2),2)
b = int(t+1)
d = ""
indarray = []
bits_array=[]
#print("bits usados: ",b)
for x in range(pn):
for y in range(b) :
if random.randint(0,400000) %2:
d = "1"+d
else:
d="0"+d
num=int(d,2)%factorial(dim**2)
bits_array.append(d)
indarray.append(num)
#print("\n index #",len(indarray),": ",num)
d=""
return indarray,dim,bits_array,b
def i2ms(index,b):
squares=[]
a=init_a(b)
i=0
t=b
b = (b**2)-1
for i in range(len(index)):
s=""
cont = 1
while(index[i]>0):
c = factorial(b)
ind =(index[i]/c)
s = s+str(a[int(ind)])+" "
del a[(int(ind))]
index[i] = index[i]%c
b-=1
cont +=1
for i in range(len(a)):
s = s+str(a[i])+" "
squares.append(s)
a = init_a(t)
b = t
b = (b**2)-1
s=""
return squares
def init_a(b):
a=[]
for i in range(b**2):
a.append(i+1)
return a
def score(squares):
scores=[]
print("\n")
for i in range(len(squares)):
r = squares[i]
r = r.split(' ')
n = int(sqrt(len(r)))
nd = r
goal = n * (n * n + 1) / 2;
nd.reverse()
m = [[nd.pop() for i in range(n)] for j in range(n)]
#print ("Cubo #",i+1,": \n")
#for i in range(n):
#print(m[i],'\n')
min_sum,max_sum= 0,0
minn = 1
maxx = n * n
for i in range (n):
min_sum += minn
minn += 1
max_sum += maxx
maxx += 1
min_b,max_b = abs(goal - min_sum), abs(goal - max_sum)
if min_sum < max_sum:
final_b = max_sum
else:
final_b = min_sum
total_cases = 2 * n + 2
bias = total_cases * final_b
fitness = bias
#print ("Max score: ",fitness)
for i in range(n):
s =0
for j in range(n):
s +=int(m[i][j])
fitness -= abs(goal-s)
for j in range(n):
s=0
for i in range(n):
s += int(m[i][j])
fitness -= abs(goal-s)
s = 0
if n%2 == 1:
for i in range(n):
s+= int(m[i][i])
fitness -= abs(goal-s)
m.reverse()
s = 0
for i in range(n):
s+= int(m[i][i])
fitness -= abs(goal-s)
#print("Actual score: ",fitness,"\n")
scores.append(int(fitness))
#print("goal",goal)
return scores,bias
def breed(popul,score,breed_size,b):#popul = the population, score: their individual scores,
#breed_size: size of the population we will select, b: number of bits;
#Compute the measures of the population to "merge"
print(popul)
print(score)
maxx = max(score)
    #Arrange the squares (in binary) with their respective score
breed_pop=[]
new_pop=[]
for y in range(breed_size):
for z in score:
if score[z] == maxx:
breed_pop.append(popul[z])
del score[z]
del popul[z]
maxx= max(score)
print(breed_pop)
if breed_pop>breed_size:
breed_pop.pop()
print(breed_pop)
##sorted(pop_dict.values())
if __name__ == '__main__':
    #Give dimensions and determine the initial population
print("dimensiones?")
n = input()
print("poblacion?")
pn = input()
print("breed size?")
p= int(input())
##g = input()
    #Pass the dim and pop data through the create_population method; it returns a list with the square's indexes and its dimensions
ind,b,bits_a,bitsn= create_population(int(n),int(pn))
    #Convert each of those indexes into a magic square with i2ms; it returns an array of magic squares
squares = i2ms(ind,b)
'''print("\n")
for i in range(len(squares)):
print("Cubo #",i+1,": " , squares[i])
    #Pass each square through score, which gives the score of each square; it returns a list with the scores and the maximum score
'''
scores,perfect = score(squares)
breed(bits_a,scores,p,bitsn)
'''for y in range(len(scores)):
print(scores[y],"/",perfect)
'''
</code></pre>
<p>I'm using dimension = 3, population =10, and breed_size=4 but I keep getting:</p>
<p>if score[z] ==max
IndexError: list index out of range</p>
<p>Edit:
Traceback(most recent call last):
File "squaresolver.py",line 156 in
breed(bits:a,scores,p,bitsn)
File "squaresolver.py", line 125, in breed
if score[z] == maxx:
IndexError: list index out of range</p>
| -4 | 2016-08-29T00:24:08Z | 39,197,078 | <p>You don't need "score[z]" when you do "for z in score", z is not an index, is a value of the score list.</p>
<p>You can just do</p>
<pre><code>if z == maxx
</code></pre>
| 1 | 2016-08-29T00:36:28Z | [
"python",
"list",
"genetic-algorithm",
"indexoutofrangeexception"
] |
Python - list index out of range - genetic algorithm | 39,197,023 | <p>I'm having problems with my code. I know this problem is simple, but I just can't figure out how to solve it. I'd really appreciate it if someone could tell me what I'm doing wrong:</p>
<pre><code> import random
from math import *
def create_population(dim,pn):
t = log(factorial(dim**2),2)
b = int(t+1)
d = ""
indarray = []
bits_array=[]
#print("bits usados: ",b)
for x in range(pn):
for y in range(b) :
if random.randint(0,400000) %2:
d = "1"+d
else:
d="0"+d
num=int(d,2)%factorial(dim**2)
bits_array.append(d)
indarray.append(num)
#print("\n index #",len(indarray),": ",num)
d=""
return indarray,dim,bits_array,b
def i2ms(index,b):
squares=[]
a=init_a(b)
i=0
t=b
b = (b**2)-1
for i in range(len(index)):
s=""
cont = 1
while(index[i]>0):
c = factorial(b)
ind =(index[i]/c)
s = s+str(a[int(ind)])+" "
del a[(int(ind))]
index[i] = index[i]%c
b-=1
cont +=1
for i in range(len(a)):
s = s+str(a[i])+" "
squares.append(s)
a = init_a(t)
b = t
b = (b**2)-1
s=""
return squares
def init_a(b):
a=[]
for i in range(b**2):
a.append(i+1)
return a
def score(squares):
scores=[]
print("\n")
for i in range(len(squares)):
r = squares[i]
r = r.split(' ')
n = int(sqrt(len(r)))
nd = r
goal = n * (n * n + 1) / 2;
nd.reverse()
m = [[nd.pop() for i in range(n)] for j in range(n)]
#print ("Cubo #",i+1,": \n")
#for i in range(n):
#print(m[i],'\n')
min_sum,max_sum= 0,0
minn = 1
maxx = n * n
for i in range (n):
min_sum += minn
minn += 1
max_sum += maxx
maxx += 1
min_b,max_b = abs(goal - min_sum), abs(goal - max_sum)
if min_sum < max_sum:
final_b = max_sum
else:
final_b = min_sum
total_cases = 2 * n + 2
bias = total_cases * final_b
fitness = bias
#print ("Max score: ",fitness)
for i in range(n):
s =0
for j in range(n):
s +=int(m[i][j])
fitness -= abs(goal-s)
for j in range(n):
s=0
for i in range(n):
s += int(m[i][j])
fitness -= abs(goal-s)
s = 0
if n%2 == 1:
for i in range(n):
s+= int(m[i][i])
fitness -= abs(goal-s)
m.reverse()
s = 0
for i in range(n):
s+= int(m[i][i])
fitness -= abs(goal-s)
#print("Actual score: ",fitness,"\n")
scores.append(int(fitness))
#print("goal",goal)
return scores,bias
def breed(popul,score,breed_size,b):#popul = the population, score: their individual scores,
#breed_size: size of the population we will select, b: number of bits;
#Compute the measures of the population to "merge"
print(popul)
print(score)
maxx = max(score)
    #Arrange the squares (in binary) with their respective score
breed_pop=[]
new_pop=[]
for y in range(breed_size):
for z in score:
if score[z] == maxx:
breed_pop.append(popul[z])
del score[z]
del popul[z]
maxx= max(score)
print(breed_pop)
if breed_pop>breed_size:
breed_pop.pop()
print(breed_pop)
##sorted(pop_dict.values())
if __name__ == '__main__':
    #Give dimensions and determine the initial population
print("dimensiones?")
n = input()
print("poblacion?")
pn = input()
print("breed size?")
p= int(input())
##g = input()
    #Pass the dim and pop data through the create_population method; it returns a list with the square's indexes and its dimensions
ind,b,bits_a,bitsn= create_population(int(n),int(pn))
    #Convert each of those indexes into a magic square with i2ms; it returns an array of magic squares
squares = i2ms(ind,b)
'''print("\n")
for i in range(len(squares)):
print("Cubo #",i+1,": " , squares[i])
    #Pass each square through score, which gives the score of each square; it returns a list with the scores and the maximum score
'''
scores,perfect = score(squares)
breed(bits_a,scores,p,bitsn)
'''for y in range(len(scores)):
print(scores[y],"/",perfect)
'''
</code></pre>
<p>I'm using dimension = 3, population =10, and breed_size=4 but I keep getting:</p>
<p>if score[z] ==max
IndexError: list index out of range</p>
<p>Edit:
Traceback(most recent call last):
File "squaresolver.py",line 156 in
breed(bits:a,scores,p,bitsn)
File "squaresolver.py", line 125, in breed
if score[z] == maxx:
IndexError: list index out of range</p>
| -4 | 2016-08-29T00:24:08Z | 39,197,120 | <p>As you delete things in a list, you end up with index problems using <code>range(len)</code>. If you have a list and then delete an item in it, you end up with a list whose length is now one less. This leads to <code>IndexErrors</code> as you try to access up to the original <code>len(list)</code>.</p>
<p>Perhaps think of copying the original and working with that. </p>
| 0 | 2016-08-29T00:43:20Z | [
"python",
"list",
"genetic-algorithm",
"indexoutofrangeexception"
] |
Calling MKL from Python : DSTEVR | 39,197,132 | <p><strong>dstevr</strong> computes eigensolutions of triangular symmetric matrix. Cool. Except it was not one of the routines ported with a wrapper to SCIPY. So I've followed the instructions on how to call MKL directly from Python and the attached seems to give the correct answer. But gosh.... Is there someway to clean this up?!</p>
<pre><code> import numpy as np
from scipy import linalg
from ctypes import *
c_double_p = POINTER(c_double)
c_int_p = POINTER(c_int)
c_char_p = POINTER(c_char)
mkl = CDLL('mkl_rt.dll')
dstevr = mkl.dstevr
#SUBROUTINE DSTEVR( JOBZ, RANGE, N, D, E, VL, VU, IL, IU, ABSTOL, M,
# * W, Z, LDZ, ISUPPZ, WORK, LWORK, IWORK, LIWORK, INFO)
# CHARACTER * 1 JOBZ, RANGE
# INTEGER N, IL, IU, M, LDZ, LWORK, LIWORK, INFO
# INTEGER ISUPPZ(*), IWORK(*)
# DOUBLE PRECISION VL, VU, ABSTOL
# DOUBLE PRECISION D(*), E(*), W(*), Z(LDZ,*), WORK(*)
dstevr.argtypes = [ c_char_p, c_char_p, c_int_p, c_double_p, c_double_p, c_double_p, c_double_p, c_int_p, c_int_p, c_double_p, \
c_int_p, c_double_p, c_double_p, c_int_p, c_int_p, c_double_p, c_int_p, c_int_p, c_int_p, c_int_p]
sv = "v"
eig_j = c_char_p(c_char(sv[0]))
sr = "a"
eig_r = c_char_p(c_char(sr[0]))
vl = c_double(0.0)
vu = c_double(0.0)
il = c_int(0)
iu = c_int(0)
abstol = c_double(0.0)
cm = c_int(0)
eig_info = c_int(0)
N = 6
cn = c_int(N)
ldz = cn
lwork = c_int(20*N)
liwork = c_int(10*N)
diag = np.ascontiguousarray(np.ones(N)*2)
diag_p = diag.ctypes.data_as(c_double_p)
offdiag = np.ascontiguousarray(np.ones(N-1)*(-1))
offdiag_p = offdiag.ctypes.data_as(c_double_p)
isuppz = np.ascontiguousarray(np.ones(N*2),dtype=int)
isuppz_p = isuppz.ctypes.data_as(c_int_p)
eigw = np.ascontiguousarray(np.zeros(N))
eigw_p = eigw.ctypes.data_as(c_double_p)
workz = np.ascontiguousarray(np.ones(20*N))
workz_p = workz.ctypes.data_as(c_double_p)
iworkz = np.ascontiguousarray(np.ones(10*N),dtype=int)
iworkz_p = iworkz.ctypes.data_as(c_int_p)
eigz = np.ascontiguousarray(np.ones(N*N))
eigz_p = eigz.ctypes.data_as(c_double_p)
dstevr( eig_j, eig_r, byref(cn), diag_p, offdiag_p, byref(vl),byref(vu),byref(il),byref(iu),byref(abstol),byref(cm),\
eigw_p, eigz_p, byref(ldz), isuppz_p, workz_p, byref(lwork), iworkz_p, byref(liwork), byref(eig_info))
print "Eig_Info", eig_info
print eigz
A = np.eye(N,N,k=-1)*(-1) + np.eye(N,N)*2 + np.eye(N,N,k=1)*(-1)
w,v= linalg.eigh(A)
print v.T
</code></pre>
<p>Thanks,</p>
| 0 | 2016-08-29T00:45:16Z | 39,203,022 | <p>You can use instead a cython wrapper in cython_lapack, <a href="http://docs.scipy.org/doc/scipy/reference/linalg.cython_lapack.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/linalg.cython_lapack.html</a></p>
<p>You'll need cython though.</p>
| 0 | 2016-08-29T09:38:25Z | [
"python",
"scipy",
"eigenvalue",
"intel-mkl"
] |
Django-tastypie: utf8 is incorrect | 39,197,148 | <pre><code>class ActionResource(ModelResource):
class Meta:
queryset = ActionInfo.objects.all()
resource_name = 'action'
def dehydrate(self, bundle):
bundle.data['name'] = bundle.obj.name
bundle.data['expect_time'] = bundle.obj.expect_time
bundle.data['type'] = bundle.obj.type
bundle.data['price'] = bundle.obj.price
bundle.data['additional'] = bundle.obj.additional
return bundle
</code></pre>
<p>This code is from resource.py. CharFields that contain Russian letters print incorrectly, for example: name: "Ð ÐÐ ÑР»СÐÐ Ð
Ð ÑР°". I've added at the top of the resource.py:</p>
<pre><code># -*- coding: UTF-8 -*-
from __future__ import unicode_literals
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
</code></pre>
<p>and return string from django models: </p>
<pre><code>class ActionName(models.Model):
name = models.CharField(max_length=300)
def __str__(self):
return self.name
class ActionInfo(models.Model):
name = models.ForeignKey(ActionName, related_name="title", on_delete=models.CASCADE, null=True, blank=True)
expect_time = models.ForeignKey(ActionDuration, related_name="duration",on_delete=models.CASCADE, null=True, blank=True)
type = models.ForeignKey(ActionType, related_name="type", on_delete=models.CASCADE, null=True, blank=True)
available = models.NullBooleanField(null=True, blank=True, default=True)
price = models.ForeignKey(ActionPrice, related_name="price", on_delete=models.CASCADE, null=True, blank=True)
---------------------------------------------
</code></pre>
<p>returned json:</p>
<pre><code>{
additional: " 280 Ð Ñ ",
available: true,
comments: null,
discription: "",
expect_time: null,
id: 120,
name: "Ð ÐÐ ÑР»СÐÐ Ð
Ð ÑР°",
photo: "/images/83913-220-184-solyanka_2.jpg",
rate: null,
resource_uri: "/api/v1/action/120/",
type: "Ð ÑРµСÐÐ ÐСâ¹Ð µ Р±Р»СÐÐ ÒР°",
}
</code></pre>
<p>Who knows how to fix it?)</p>
| 0 | 2016-08-29T00:48:27Z | 39,201,416 | <p>Instead of <code>__str__()</code> use <code>__unicode__()</code>. And use <a href="https://docs.djangoproject.com/es/1.10/ref/utils/#django.utils.encoding.smart_text" rel="nofollow">smart_text</a>:</p>
<pre><code>from django.utils.encoding import smart_text
class ActionName(models.Model):
name = models.CharField(max_length=300)
def __unicode__(self):
return smart_text(self.name)
</code></pre>
<p>BTW: Instead of <code>dehydrate()</code> use fields:</p>
<pre><code>from tastypie import fields
class ActionResource(ModelResource):
name = fields.CharField('name__name', null=True)
class Meta:
queryset = ActionInfo.objects.all()
resource_name = 'action'
</code></pre>
<p>BTW2:</p>
<pre><code>>>> print u'Ð ÐÐ ÑР»СÐÐ Ð
Ð ÑР°'.encode('windows-1251')
СолÑнка
>>> print u'Ð ÐÐ ÑР»СÐÐ Ð
Ð ÑР°'.encode('windows-1251').decode('utf8')
СолÑнка
</code></pre>
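<p>The BTW2 round trip can be reproduced from scratch in pure Python; this sketch assumes the garbling came from UTF-8 bytes being decoded as windows-1251 somewhere in the pipeline, which is what the recovery above implies:</p>

```python
s = "Солянка"                                        # the intended Russian text
garbled = s.encode("utf-8").decode("windows-1251")   # UTF-8 bytes misread as cp1251
print(garbled)

restored = garbled.encode("windows-1251").decode("utf-8")
print(restored)   # Солянка
```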
| 1 | 2016-08-29T08:09:49Z | [
"python",
"django",
"unicode",
"utf-8",
"tastypie"
] |
Throw ValueError for two digit year dates with dateutil.parser.parse | 39,197,156 | <p>While doing some data cleaning, I noticed that <code>dateutil.parser.parse</code> failed to reject a certain malformed date, thinking that the first number in it is a two digit year. Can this library be forced to treat two digit years as invalid?</p>
<p>Example:</p>
<pre><code>from dateutil.parser import parse
parse('22-23 February')
</code></pre>
<p>outputs:</p>
<pre><code>datetime.datetime(2022, 2, 23, 0, 0)
</code></pre>
| 1 | 2016-08-29T00:49:16Z | 39,197,157 | <p>I managed to work around this by passing a custom <code>dateutil.parser.parserinfo</code> object via the <code>parserinfo</code> parameter to <code>dateutil.parser.parse</code>. Luckily, <code>dateutil.parser.parserinfo</code> has a <code>convertyear</code> method that can be overridden in a derived class in order to perform extra validations on the year.</p>
<pre><code>from dateutil.parser import parse, parserinfo
class NoTwoDigitYearParserInfo(parserinfo):
def convertyear(self, year, century_specified=False):
if year < 100 and not century_specified:
raise ValueError('Two digit years are not supported.')
return parserinfo.convertyear(self, year, century_specified)
parse('22-23 February', parserinfo = NoTwoDigitYearParserInfo())
</code></pre>
<p>outputs:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/site-packages/dateutil/parser.py", line 1162, in parse
return parser(parserinfo).parse(timestr, **kwargs)
File "/usr/local/lib/python3.5/site-packages/dateutil/parser.py", line 552, in parse
res, skipped_tokens = self._parse(timestr, **kwargs)
File "/usr/local/lib/python3.5/site-packages/dateutil/parser.py", line 1055, in _parse
if not info.validate(res):
File "/usr/local/lib/python3.5/site-packages/dateutil/parser.py", line 360, in validate
res.year = self.convertyear(res.year, res.century_specified)
File "<stdin>", line 4, in convertyear
ValueError: Two digit years are not supported.
</code></pre>
| 2 | 2016-08-29T00:49:16Z | [
"python",
"date",
"parsing",
"python-dateutil",
"2-digit-year"
] |
How to re-generate tables in Django | 39,197,204 | <p>I'm new to Django. Here is what I do:
create an app with <code>python manage.py startapp test</code>,
add several models, then execute:</p>
<pre><code>python manage.py makemigrations
python manage.py migrate</code></pre>
<p>It generates the tables automatically. Then I dropped a table in the database with:</p>
<pre><code>drop table table_name</code></pre>
<p>and executed again:</p>
<pre><code>python manage.py makemigrations
python manage.py migrate</code></pre>
<p>but no table is generated in the database, and I get the message "no changes detected". What should I do to get these tables back? Please help me.</p>
| 0 | 2016-08-29T00:59:46Z | 39,200,171 | <p>Delete all the migration files in your app's migrations directory (keep <code>__init__.py</code>). Then run</p>
<pre><code>python manage.py makemigrations your_app
python manage.py migrate
</code></pre>
<p>If <code>migrate</code> still skips the app because the regenerated migration has the same name as one recorded as already applied, also delete your app's rows from the <code>django_migrations</code> table before running <code>migrate</code> again.</p>
| 0 | 2016-08-29T06:53:58Z | [
"python",
"django"
] |
Global variable not defined after seperating functions to seperate Python file | 39,197,255 | <p>Continuing to write my dictionary attack script for project. </p>
<p>My script calls upon two functions that perform an actual dictionary attack on SSH. First I declared my global variable in my get_args() function in <code>main.py</code>, following with the main() function</p>
<pre><code>def get_args():
# stuff for parsing blah blah
global service, username, wordlist, address, port, delay
service = args.service
username = args.username
wordlist = args.password
address = args.address
port = args.port
delay = args.delay
return service, username, wordlist, address
def main():
service, username, wordlist, address = get_args()
# output and stuff
# SSH bruteforce
if service == 'ssh':
if address is None:
print R + "[!] You need to provide a SSH address for cracking! [!]" + W
else:
print C + "[*] Address: %s" % address + W
sleep(0.5)
global port
if port is None:
print O + "[?] Port not set. Automatically set to 22 for you [?]" + W
port = 22
print C + "[*] Port: %s " % port + W
sleep(1)
print P + "[*] Starting dictionary attack! [*]" + W
print "Using %s seconds of delay. Default is 1 second" % delay
sshBruteforce(address, username, wordlist)
</code></pre>
<p>The line <code>sshBruteforce(address, username, wordlist)</code> references a function that I include in a seperate file as a module, which I import with </p>
<pre><code>from module1 import *
</code></pre>
<p>Inside module1.py:</p>
<pre><code># ssh_connect()
def ssh_connect(password, code=0):
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
paramiko.util.log_to_file("filename.log")
try:
ssh.connect(address, port=port, username=username, password=password)
def sshBruteforce(address, username, wordlist):
wordlist = open(wordlist, 'r')
for i in wordlist.readlines():
password = i.strip("\n")
try:
response = ssh_connect(password)
# actual bruteforcing here. The above line references the ssh_connect()
# function, which is where the problem actual occurs
</code></pre>
<p>The line, <code>ssh.connect(address, port=port, username=username, password=password)</code> is where the problem occurs. Let's say I execute the script like so:</p>
<pre><code>python main.py -u root -w wordlist.txt -s ssh -p 22 -a 192.168.1.3
</code></pre>
<p>This stores the string <code>"root"</code> within the variable <code>username</code>, and etc. However, once the <code>main.py</code> program executes the sshBruteforce() function, this occurs:</p>
<pre><code>global name 'address' is not defined
</code></pre>
<p>I know that this occurs within the ssh_connect() function, with line <code>ssh.connect(address, port=port, username=username, password=password)</code>, meaning that the variable <code>address</code> does not have anything stored in it. I do not know why this occurring. </p>
<p>Including <code>from __main__ import *</code> within <code>module1.py</code> does not change anything. I have seen many these questions asked, but none are similar to my situation. Any help is appreciated thanks!</p>
| 1 | 2016-08-29T01:07:10Z | 39,213,506 | <p>A <code>global</code> statement inside a function only binds the name in the module that contains the function; it does not share the name across modules, so the functions in <code>module1.py</code> never see the <code>address</code> that <code>main.py</code> set. Define the globals at the top of the module that actually uses them (initialised to <code>None</code>), or better, pass <code>address</code>, <code>port</code> and <code>username</code> into <code>ssh_connect()</code> as parameters.</p>
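<p>As an illustration of the parameter-passing alternative to cross-module globals, here is a minimal sketch. The <code>connect</code> function below is a stand-in for paramiko's <code>ssh.connect</code>, and the address and username are made up:</p>

```python
# Sketch only: "connect" stands in for paramiko's ssh.connect, and the
# address/username below are made up. Every value ssh_connect needs
# arrives as a parameter, so no cross-module globals are required.

def connect(address, port, username, password):
    # pretend to open an SSH session and report what was received
    return "%s:%s as %s" % (address, port, username)

def ssh_connect(address, port, username, password):
    return connect(address, port, username, password)

def ssh_bruteforce(address, port, username, passwords):
    attempts = []
    for password in passwords:
        attempts.append(ssh_connect(address, port, username, password))
    return attempts

print(ssh_bruteforce("192.168.1.3", 22, "root", ["a", "b"]))
```

<p>With this structure, <code>module1.py</code> works no matter which module calls it, because it never relies on another module's namespace.</p>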
| -1 | 2016-08-29T19:12:16Z | [
"python",
"module",
"global-variables"
] |
How to add multiple values to a key in a Python dictionary | 39,197,261 | <p>I am trying to create a dictionary from the values in the <code>name_num</code> dictionary where the length of the list is the new key and the <code>name_num</code> dictionary key and value are the value. So:</p>
<pre><code>name_num = {"Bill": [1,2,3,4], "Bob":[3,4,2], "Mary": [5, 1], "Jim":[6,17,4], "Kim": [21,54,35]}
</code></pre>
<p>I want to create the following dictionary:</p>
<pre><code>new_dict = {4:{"Bill": [1,2,3,4]}, 3:{"Bob":[3,4,2], "Jim":[6,17,4], "Kim": [21,54,35]}, 2:{"Mary": [5, 1]}}
</code></pre>
<p>I've tried many variations, but this code gets me the closest:</p>
<pre><code>for mykey in name_num:
new_dict[len(name_num[mykey])] = {mykey: name_num[mykey]}
</code></pre>
<p>Output:</p>
<pre><code>new_dict = {4:{"Bill": [1,2,3,4]}, 3:{"Jim":[6,17,4]}, 2:{"Mary": [5, 1]}}
</code></pre>
<p>I know I need to loop through the code somehow so I can add the other values to key 3.</p>
| 7 | 2016-08-29T01:07:33Z | 39,197,284 | <p>This is a good use case for <a href="https://docs.python.org/3/library/collections.html"><code>defaultdict</code></a>:</p>
<pre><code>from collections import defaultdict
name_num = {
'Bill': [1, 2, 3, 4],
'Bob': [3, 4, 2],
'Mary': [5, 1],
'Jim': [6, 17, 4],
'Kim': [21, 54, 35],
}
new_dict = defaultdict(dict)
for name, nums in name_num.items():
new_dict[len(nums)][name] = nums
print(dict(new_dict))
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>{
2: {'Mary': [5, 1]},
3: {'Bob': [3, 4, 2], 'Jim': [6, 17, 4], 'Kim': [21, 54, 35]},
4: {'Bill': [1, 2, 3, 4]}
}
</code></pre>
| 22 | 2016-08-29T01:13:43Z | [
"python",
"dictionary"
] |
How to add multiple values to a key in a Python dictionary | 39,197,261 | <p>I am trying to create a dictionary from the values in the <code>name_num</code> dictionary where the length of the list is the new key and the <code>name_num</code> dictionary key and value are the value. So:</p>
<pre><code>name_num = {"Bill": [1,2,3,4], "Bob":[3,4,2], "Mary": [5, 1], "Jim":[6,17,4], "Kim": [21,54,35]}
</code></pre>
<p>I want to create the following dictionary:</p>
<pre><code>new_dict = {4:{"Bill": [1,2,3,4]}, 3:{"Bob":[3,4,2], "Jim":[6,17,4], "Kim": [21,54,35]}, 2:{"Mary": [5, 1]}}
</code></pre>
<p>I've tried many variations, but this code gets me the closest:</p>
<pre><code>for mykey in name_num:
new_dict[len(name_num[mykey])] = {mykey: name_num[mykey]}
</code></pre>
<p>Output:</p>
<pre><code>new_dict = {4:{"Bill": [1,2,3,4]}, 3:{"Jim":[6,17,4]}, 2:{"Mary": [5, 1]}}
</code></pre>
<p>I know I need to loop through the code somehow so I can add the other values to key 3.</p>
| 7 | 2016-08-29T01:07:33Z | 39,197,299 | <p><a href="https://en.wikipedia.org/wiki/Associative_array">Dictionary</a>, associative array or map (many names, basically the same functionality) property is that keys are unique.</p>
<p>The keys you wish to have, which are integers, are not unique if lengths are the same, that's why your code doesn't work. Putting a new value for existing key means replacing the old value.</p>
<p>You have to add key-value pairs to the existing value dictionaries.</p>
<pre><code>for mykey in name_num:
length = len(name_num[mykey])
if length in new_dict: # key already present in new dictionary
new_dict[length][mykey] = name_num[mykey]
else:
new_dict[length] = {mykey: name_num[mykey]}
</code></pre>
<p>should do the trick</p>
| 5 | 2016-08-29T01:15:57Z | [
"python",
"dictionary"
] |
How to add multiple values to a key in a Python dictionary | 39,197,261 | <p>I am trying to create a dictionary from the values in the <code>name_num</code> dictionary where the length of the list is the new key and the <code>name_num</code> dictionary key and value are the value. So:</p>
<pre><code>name_num = {"Bill": [1,2,3,4], "Bob":[3,4,2], "Mary": [5, 1], "Jim":[6,17,4], "Kim": [21,54,35]}
</code></pre>
<p>I want to create the following dictionary:</p>
<pre><code>new_dict = {4:{"Bill": [1,2,3,4]}, 3:{"Bob":[3,4,2], "Jim":[6,17,4], "Kim": [21,54,35]}, 2:{"Mary": [5, 1]}}
</code></pre>
<p>I've tried many variations, but this code gets me the closest:</p>
<pre><code>for mykey in name_num:
new_dict[len(name_num[mykey])] = {mykey: name_num[mykey]}
</code></pre>
<p>Output:</p>
<pre><code>new_dict = {4:{"Bill": [1,2,3,4]}, 3:{"Jim":[6,17,4]}, 2:{"Mary": [5, 1]}}
</code></pre>
<p>I know I need to loop through the code somehow so I can add the other values to key 3.</p>
| 7 | 2016-08-29T01:07:33Z | 39,197,390 | <p>Just an alternative to others; you can sort by length and use <a href="https://docs.python.org/2/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby</code></a>:</p>
<pre><code>>>> result = {}
>>> f = lambda t: len(t[1])
>>> for length, groups in itertools.groupby(sorted(name_num.items(), key=f), key=f):
... result[length] = dict((k, v) for k, v in groups)
>>> print result
{
2: {'Mary': [5, 1]},
3: {'Bob': [3, 4, 2], 'Jim': [6, 17, 4], 'Kim': [21, 54, 35]},
4: {'Bill': [1, 2, 3, 4]}
}
</code></pre>
<p>Note that the sort makes this <code>O(n log n)</code>, which is slightly less efficient than the single-pass <code>O(n)</code> solutions posted above.</p>
| 4 | 2016-08-29T01:34:14Z | [
"python",
"dictionary"
] |
Django - LOCALHOST missing despite other views working | 39,197,311 | <p><strong>BACKGROUND</strong>:</p>
<p>Worked through Django's tutorial and created multiple views:</p>
<ul>
<li>/admin</li>
<li>/app</li>
</ul>
<p>Currently having issues with "localhost:8000" despite localhost/app and localhost/admin views working (see below):</p>
<p><strong>CODE</strong></p>
<p><strong>Folder Structure (virtualenv):</strong></p>
<pre><code>main/
db.sqlite3
manage.py
app/
_init_.py
admin.py
apps.py
models.py
tests.py
views.py
urls.py
migrations/
_init.py
main/
_init_.py
settings.py
urls.py
wsgipy
</code></pre>
<p><em>../main/app/views.py:</em></p>
<pre><code>from django.http import HttpResponse
def index(request):
return HttpResponse("Hello, world. You're at the app index.")
</code></pre>
<p><em>../main/app/urls.py:</em></p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.index, name='index'),
]
</code></pre>
<p><em>../main/main/urls.py:</em></p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^app/', include('app.urls')),
]
</code></pre>
<p><strong>ISSUES/QUESTIONS:</strong></p>
<ol>
<li>localhost/app or localhost/admin shows a rendered page. However, "localhost:8000" does not show the original, django landing page (despite it working before). What am I missing?</li>
<li>Do you recommend any other folder structure? From my understanding, it's poor-form to put /main/main in the location that it currently is. Do you have recommendations on a new folder structure?</li>
</ol>
| 0 | 2016-08-29T01:17:33Z | 39,197,697 | <blockquote>
<p>localhost/app or localhost/admin shows a rendered page. However, "localhost:8000" does not show the original, django landing page (despite it working before). What am I missing?</p>
</blockquote>
<p>It looks like <code>main/main/urls.py</code> is your global urls config, which would mean that <code>localhost:8000</code> doesn't have a route defined (only <code>localhost:8000/admin</code> and <code>localhost:8000/app</code>.) You should put the url pattern in the global config. You have defined <code>url(r'^$', views.index, name='index')</code> within app's scope, so Django is actually reading it with <code>app/</code> prepended to it.</p>
<blockquote>
<p>Do you recommend any other folder structure? From my understanding,
it's poor-form to put /main/main in the location that it currently is.
Do you have recommendations on a new folder structure?</p>
</blockquote>
<p>A good structure in my opinion:</p>
<pre><code>server/
urls.py
manage.py
wsgi.py
settings.py
apps/
app1/
app2/
app3/
</code></pre>
| 0 | 2016-08-29T02:20:02Z | [
"python",
"django"
] |
Django - LOCALHOST missing despite other views working | 39,197,311 | <p><strong>BACKGROUND</strong>:</p>
<p>Worked through Django's tutorial and created multiple views:</p>
<ul>
<li>/admin</li>
<li>/app</li>
</ul>
<p>Currently having issues with "localhost:8000" despite localhost/app and localhost/admin views working (see below):</p>
<p><strong>CODE</strong></p>
<p><strong>Folder Structure (virtualenv):</strong></p>
<pre><code>main/
db.sqlite3
manage.py
app/
_init_.py
admin.py
apps.py
models.py
tests.py
views.py
urls.py
migrations/
_init.py
main/
_init_.py
settings.py
urls.py
wsgipy
</code></pre>
<p><em>../main/app/views.py:</em></p>
<pre><code>from django.http import HttpResponse
def index(request):
return HttpResponse("Hello, world. You're at the app index.")
</code></pre>
<p><em>../main/app/urls.py:</em></p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.index, name='index'),
]
</code></pre>
<p><em>../main/main/urls.py:</em></p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^app/', include('app.urls')),
]
</code></pre>
<p><strong>ISSUES/QUESTIONS:</strong></p>
<ol>
<li>localhost/app or localhost/admin shows a rendered page. However, "localhost:8000" does not show the original, django landing page (despite it working before). What am I missing?</li>
<li>Do you recommend any other folder structure? From my understanding, it's poor-form to put /main/main in the location that it currently is. Do you have recommendations on a new folder structure?</li>
</ol>
| 0 | 2016-08-29T01:17:33Z | 39,197,864 | <p><code>../main/main/urls.py</code> is your entry point for defining your URL patterns and you have no pattern defined to handle <code>/</code> which is what <code>localhost:8000</code> would be in this case. You have two basic options.</p>
<p>First, if you want the stuff in <code>../main/app</code> to have a URL that includes <code>/app/</code> you need to define the URL patterns that will exist at the root level. Start by creating a view function for the root URL and add a URL pattern for <code>/</code>, see below:</p>
<p><em>../main/main/views.py</em></p>
<pre><code>from django.http import HttpResponse
def index(request):
return HttpResponse("This is the home page/site root")
</code></pre>
<p><em>../main/main/urls.py</em></p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
from . import views
urlpatterns = [
    url(r'^$', views.index, name='home'),
url(r'^admin/', admin.site.urls),
url(r'^app/', include('app.urls')),
]
</code></pre>
<p>Second option, if you want the views in <code>../main/app/</code> to exist under the root URL (<code>/</code> instead of <code>/app/</code>) then just change the base of the imported <code>app</code> URL patterns in <code>../main/main/urls.py</code>:</p>
<p><em>../main/main/urls.py</em></p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^admin/', admin.site.urls),
    url(r'^', include('app.urls')),
]
</code></pre>
<p>There is nothing wrong with your folder structure. There are certainly other layouts you could use for your project, but what you are using is Django's default out of the box layout. Unless you have a specific reason to use a different layout, stick with what you have. Spend your time learning more about Django, worry about trying to optimize your project layout when you have a better understanding of how Django's pieces all fit together.</p>
<p>I think what you need to consider is what you want your site's URLS to look like, then build the layout around that. You don't need the <code>app</code> app unless you want to organize your code that way. As a starting point you can just put everything you have in the <code>app</code> app under <code>main</code> and expand out to other apps as it makes sense for the project.</p>
| 0 | 2016-08-29T02:50:32Z | [
"python",
"django"
] |
How do I run a python script between two times? | 39,197,323 | <p>I have a python script that I would like to run between 9 am and 4 pm everyday, but there's also a part of the same script that I'd like to run every 10 seconds during this time. How would I go about in doing this? I have looked at datetime modules, but I did not come across examples where they compare times and schedule an event. Also there seems to be some difference with the datetime module in Python 2.7 and 3.x. I am using Python 2.7. Could someone help me with this or direct me to the right sources? Thank you. </p>
| 1 | 2016-08-29T01:19:20Z | 39,198,096 | <p>If you are using Linux, have a look at <a href="http://kvz.io/blog/2007/07/29/schedule-tasks-on-linux-using-crontab/" rel="nofollow">crontab</a>; it is a powerful tool for scheduling tasks.<br>
Alternatively, in Python 2.7, try the <a href="https://docs.python.org/2/library/sched.html" rel="nofollow">sched</a> module to schedule a repeating event.</p>
<p>Hope this helps.</p>
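<p>If you want to stay in pure Python, a minimal sketch of the "every 10 seconds between 9 am and 4 pm" loop could look like this; <code>do_work</code> is a placeholder for the part of your script that should run every 10 seconds:</p>

```python
import datetime
import time

START = datetime.time(9, 0)    # 9 am
END = datetime.time(16, 0)     # 4 pm

def in_window(moment=None):
    """Return True if the given (or current) time falls between START and END."""
    moment = moment or datetime.datetime.now()
    return START <= moment.time() <= END

def do_work():
    pass  # placeholder for the task that should run every 10 seconds

def main_loop():
    while True:
        if in_window():
            do_work()
        time.sleep(10)
```

<p>This uses only the standard library and works on both Python 2.7 and 3.x; for the daily 9-to-4 scheduling itself, cron is usually the cleaner option.</p>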
| 0 | 2016-08-29T03:26:48Z | [
"python",
"python-2.7",
"datetime"
] |
What are the requirements to install TensorFlow with python 3? | 39,197,333 | <p>I was trying to install TensorFlow with enabled GPU. For this I was using the instructions form <a href="https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#anaconda-installation" rel="nofollow">the official site</a>. First, I created my environment:</p>
<pre><code>conda create --name tf_py3_tf_gpu python=3.5
</code></pre>
<p>then I activated my environment and got the version appropriate for my machine:</p>
<pre><code>export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0rc0-py3-none-any.whl
</code></pre>
<p>then I went ahead and ran pip3 install:</p>
<pre><code>(tf_py3_tf_gpu)user~/envs/tf_py3_tf_gpu/lib/python3.5/site-packages $ pip3 install --upgrade $TF_BINARY_URL
</code></pre>
<p>but that the error:</p>
<pre><code>-bash: pip3: command not found
</code></pre>
<p>However, the cluster that I am connecting to does not allow me to use apt-install to install pip3 (if there is a way to install it to an conda environment or something of that style it would be awesome! That I am allowed to do and I do have normal pip).</p>
<p>Anyway I went ahead and did it with normal pip to see if it worked:</p>
<pre><code>pip install --ignore-installed --upgrade $TF_BINARY_URL
</code></pre>
<p>however, it failed with the message:</p>
<pre><code>(tf_py3_tf_gpu)user~/envs/tf_py3_tf_gpu/lib/python3.5/site-packages $ pip install --upgrade $TF_BINARY_URL
Collecting tensorflow==0.10.0rc0 from https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0rc0-py3-none-any.whl
Using cached https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0rc0-py3-none-any.whl
Requirement already up-to-date: numpy>=1.10.1 in /home/user/envs/tf_py3_tf_gpu/lib/python3.5/site-packages (from tensorflow==0.10.0rc0)
Requirement already up-to-date: six>=1.10.0 in /home/user/envs/tf_py3_tf_gpu/lib/python3.5/site-packages (from tensorflow==0.10.0rc0)
Collecting protobuf==3.0.0b2 (from tensorflow==0.10.0rc0)
Using cached protobuf-3.0.0b2-py2.py3-none-any.whl
Requirement already up-to-date: wheel>=0.26 in /home/user/envs/tf_py3_tf_gpu/lib/python3.5/site-packages (from tensorflow==0.10.0rc0)
Collecting setuptools (from protobuf==3.0.0b2->tensorflow==0.10.0rc0)
Using cached setuptools-26.0.0-py2.py3-none-any.whl
Installing collected packages: setuptools, protobuf, tensorflow
Found existing installation: setuptools 25.1.6
Cannot remove entries from nonexistent file /home/user/envs/tf_py3_tf_gpu/lib/python3.5/site-packages/easy-install.pth
</code></pre>
<p>for some reason it needs <code>/home/user/envs/tf_py3_tf_gpu/lib/python3.5/site-packages/easy-install.pth</code> which I have no idea why it needs. From this error its unclear to me if the error is due to pip or what the error is or what caused it.</p>
<p>Unfortunately I don't know how install pip3 in the cluster I am connecting to because I do not have the privilege to install it. I did try running the command with normal pip that does not seem to work. Any ideas how to fix this? Or install pip3 or anything of that sort? I am happy to clarify what tools I am allowed to use on the cluster (like I'm allowed to use normal pip and use conda environments).</p>
<hr>
<p>Update:</p>
<p>Actually, it seems to be using a pip version for python 3 (even though the command is not called pip3, not sure if that makes a difference but I ought to mention this):</p>
<pre><code>(tf_py3_tf_gpu)user/path $ pip --version
pip 8.1.2 from /home/user/envs/tf_py3_tf_gpu/lib/python3.5/site-packages (python 3.5)
</code></pre>
| 1 | 2016-08-29T01:21:34Z | 39,198,066 | <p>Verify if <code>pip</code> is not in the <code>bin</code> folder of your virtualenv.</p>
<p>Try using <code>/path/to/python -m pip [...]</code> to make sure you are using the right <code>pip</code></p>
<p>Rename your <code>~/.local</code> folder because conda venv try to use them even if they are unrelated to their install</p>
<p>Try <code>pip install --user [...]</code>, if all else fails.</p>
| 0 | 2016-08-29T03:21:59Z | [
"python",
"tensorflow"
] |
Iterating over a Pandas grouped dataframe | 39,197,379 | <p>I am using <code>groupby</code> in <code>pandas</code> to create some <code>json</code> style data. I am having trouble iterating over the grouped <code>dataframe</code> as it doesn't recognize my keys</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data=[['Group A', 10],
['Group A', 12],
['Group B', 22],
['Group B', 25],
['Group B', 26]],
columns = ['Group', 'Value'])
df = df.groupby('Group').agg(['mean', 'count']).reset_index()
json_data = [{'id': row['Group'],
'name': row['Group'],
'value': row['mean']} for index, row in df.iteritems()]
print json_data
</code></pre>
<p>Error: </p>
<pre><code>KeyError: 'Group'
</code></pre>
<p>Desired Output:</p>
<pre><code>[{
'id': 'Group A',
'name': 'Group A',
'value': 11
}, {
'id': 'Group B',
'name': 'Group B',
'value': 24.33333
}]
</code></pre>
| 1 | 2016-08-29T01:30:57Z | 39,197,424 | <p>As <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.iteritems.html" rel="nofollow">documented</a>, <code>iteritems</code> iterates over the columns (specifically, name/column pairs). It looks like you want <code>iterrows</code>. (You will still need to change it to access <code>['Value', 'mean']</code> rather than <code>['mean']</code>, because you created a DataFrame with multiindexed columns.)</p>
| 1 | 2016-08-29T01:39:02Z | [
"python",
"json",
"python-2.7",
"pandas"
] |
Iterating over a Pandas grouped dataframe | 39,197,379 | <p>I am using <code>groupby</code> in <code>pandas</code> to create some <code>json</code> style data. I am having trouble iterating over the grouped <code>dataframe</code> as it doesn't recognize my keys</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data=[['Group A', 10],
['Group A', 12],
['Group B', 22],
['Group B', 25],
['Group B', 26]],
columns = ['Group', 'Value'])
df = df.groupby('Group').agg(['mean', 'count']).reset_index()
json_data = [{'id': row['Group'],
'name': row['Group'],
'value': row['mean']} for index, row in df.iteritems()]
print json_data
</code></pre>
<p>Error: </p>
<pre><code>KeyError: 'Group'
</code></pre>
<p>Desired Output:</p>
<pre><code>[{
'id': 'Group A',
'name': 'Group A',
'value': 11
}, {
'id': 'Group B',
'name': 'Group B',
'value': 24.33333
}]
</code></pre>
| 1 | 2016-08-29T01:30:57Z | 39,197,600 | <p>Try this: Here's a link to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_json.html" rel="nofollow">DataFrame.to_json()</a></p>
<pre><code>df = df.groupby('Group').mean().reset_index().rename(columns = {'Group':"id" })
df['name'] = df['id']
df.to_json(orient="records")
'[{"id":"Group A","Value":11.0,"name":"Group A"},{"id":"Group B","Value":24.3333333333,"name":"Group B"}]'
</code></pre>
<p>You can reorder the JSON output this way: </p>
<pre><code>df[['id','name', 'Value', ]].to_json(orient="records")
'[{"id":"Group A","name":"Group A","Value":11.0},{"id":"Group B","name":"Group B","Value":24.3333333333}]'
</code></pre>
| 1 | 2016-08-29T02:06:55Z | [
"python",
"json",
"python-2.7",
"pandas"
] |
Django drf-nested-routers - model object has no attributed related field | 39,197,418 | <p>I am creating an API using the <a href="https://github.com/alanjds/drf-nested-routers" rel="nofollow">drf-nested-routers</a> application for <a href="http://www.django-rest-framework.org/" rel="nofollow">Django Rest Framework</a>. This application is a tracker where users have sessions and tasks. Each user can have three active tasks and can work on each of these tasks in a given session.</p>
<p>My (abbreviated) models are:</p>
<pre><code>#models.py
class User(models.Model):
    name = models.CharField()
class Task(models.Model):
    start_date = models.DateField()
    task_title = models.CharField()
user = models.ForeignKey(User, on_delete=models.CASCADE)
class Session(models.Model):
    session_date = models.DateField()
user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='sessions')
task_one = models.ForeignKey(related_name="task_one")
task_one_attempts = models.IntegerField()
task_two = models.ForeignKey(related_name="task_two")
task_two_attempts = models.IntegerField()
</code></pre>
<p>I have created the following (abbreviated) Serializers for these models:</p>
<pre><code>#serializers.py
class TaskSerializer(serializers.ModelSerializer):
user = serializers.StringRelatedField(many=False)
class Meta:
model = Task
fields = ('start_date', 'task_title', 'user')
class SessionSerializer(serializers.ModelSerializer):
    user = serializers.StringRelatedField(many=False)
class Meta:
model = Session
fields = ('session_date', 'user', 'task_one', 'task_one_attempts', 'task_two', 'task_two_attempts')
class UserSerializer(serializers.ModelSerializer):
    sessions = SessionSerializer(many=True)
    tasks = TaskSerializer(many=True)
class Meta:
        model = User
fields = ('name', 'sessions', 'tasks')
</code></pre>
<p>I also have my views.py and urls.py set up to do the routing properly.</p>
<p>I can navigate to the sessions and tasks API views just fine. However, whenever I try to navigate to the user view, it throws the following error:</p>
<pre><code>'User' object has no attribute 'tasks'.
</code></pre>
<p>What's really interesting, though, is that if I remove 'tasks' and just include sessions, it serializes everything just fine and gives me a nested view of the User's various sessions.</p>
<p>I'm at a loss here and would appreciate any assistance.</p>
| 0 | 2016-08-29T01:38:19Z | 39,197,509 | <p>I rubber-ducked it with my wife and figured out my problem.</p>
<p>I had 'related_name="sessions"' in my ForeignKey field for user in models.py.</p>
<p>I was missing that information in the ForeignKey field in the Task model: it needed <code>related_name='tasks'</code> so that the <code>tasks</code> field on <code>UserSerializer</code> could resolve.</p>
<p>Hopefully someone else stumbles on this and can learn from my mistake.</p>
| 0 | 2016-08-29T01:51:44Z | [
"python",
"django",
"django-rest-framework",
"drf-nested-routers"
] |
Multiply array with diagonal matrix stored as vector | 39,197,421 | <p>I have a 1D array A = [a, b, c...] (length N_A) and a 3D array T of shape (N_A, N_B, N_A). A is meant to represent a diagonal N_A by N_A matrix.</p>
<p>I'd like to perform contractions of A with T without having to promote A to dense storage. In particular, I'd like to do </p>
<pre><code>np.einsum('ij, ikl', A, T)
</code></pre>
<p>and </p>
<pre><code>np.einsum('ikl, lm', T, A)
</code></pre>
<p>is it possible to do such things while keeping A sparse?</p>
<p>Note this question is similar to </p>
<p><a href="http://stackoverflow.com/questions/36152392/dot-product-with-diagonal-matrix-without-creating-it-full-matrix">dot product with diagonal matrix, without creating it full matrix</a></p>
<p>but not identical, since it's not clear to me how one generalizes to more complicated index patterns.</p>
| 0 | 2016-08-29T01:38:56Z | 39,197,507 | <p><code>np.einsum('ij, ikl', np.diag(a), t)</code> is equivalent to <code>(a * t.T).T</code>.</p>
<p><code>np.einsum('ikl, lm', t, np.diag(a))</code> is equivalent to <code>a * t</code>.</p>
<p>(found by trial-and-error)</p>
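<p>A quick numerical check of both identities on deterministic data:</p>

```python
import numpy as np

a = np.arange(1.0, 5.0)                  # diagonal entries, shape (4,)
t = np.arange(48.0).reshape(4, 3, 4)     # shape (N_A, N_B, N_A)

# 'ij,ikl' contracts the diagonal against T's first axis:
# result[j, k, l] = a[j] * t[j, k, l]
assert np.allclose(np.einsum('ij,ikl', np.diag(a), t), (a * t.T).T)

# 'ikl,lm' contracts the diagonal against T's last axis:
# result[i, k, m] = t[i, k, m] * a[m]
assert np.allclose(np.einsum('ikl,lm', t, np.diag(a)), a * t)

print("both identities hold")
```

<p>Both replacements rely on broadcasting <code>a</code> along the appropriate axis, so the dense <code>np.diag(a)</code> matrix is never materialised.</p>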
| 1 | 2016-08-29T01:51:29Z | [
"python",
"numpy"
] |
Python custom 404 response error | 39,197,452 | <p>I wrote a hiscore checker for a game that I play, basically you enter a list of usernames into the .txt file & it outputs the results in found.txt.</p>
<p>However if the page responds a 404 it throws an error instead of returning output as " 0 " & continuing with the list.</p>
<p>Example of script, </p>
<pre><code>#!/usr/bin/python
import urllib2
def get_total(username):
try:
req = urllib2.Request('http://services.runescape.com/m=hiscore/index_lite.ws?player=' + username)
res = urllib2.urlopen(req).read()
parts = res.split(',')
return parts[1]
except urllib2.HTTPError, e:
if e.code == 404:
return "0"
except:
return "err"
filename = "check.txt"
accs = []
handler = open(filename)
for entry in handler.read().split('\n'):
if "No Displayname" not in entry:
accs.append(entry)
handler.close()
for account in accs:
display_name = account.split(':')[len(account.split(':')) - 1]
total = get_total(display_name)
if "err" not in total:
rStr = account + ' - ' + total
handler = open('tried.txt', 'a')
handler.write(rStr + '\n')
handler.close()
if total != "0" and total != "49":
handler = open('found.txt', 'a')
handler.write(rStr + '\n')
handler.close()
print rStr
else:
print "Error searching"
accs.append(account)
print "Done"
</code></pre>
<p>HTTPERROR exception that doesn't seem to be working,</p>
<pre><code> except urllib2.HTTPError, e:
if e.code == 404:
return "0"
except:
return "err"
</code></pre>
<p>Error response shown below.</p>
<p><a href="http://i.stack.imgur.com/zPp79.png" rel="nofollow"><img src="http://i.stack.imgur.com/zPp79.png" alt="Python Error"></a></p>
<p>Now I understand the error shown doesn't seem to be related to a response of 404, however this only occurs with users that return a 404 response from the request, any other request works fine. So I can assume the issue is within the 404 response exception.</p>
<p>I believe the issue may lay in the fact that the 404 is a custom page which you get redirected too?</p>
<p>so the original page is " example.com/index.php " but the 404 is " example.com/error.php "?</p>
<p>Not sure how to fix.</p>
<p>For testing purposes, format to use is,</p>
<p>ID:USER:DISPLAY</p>
<p>which is placed into check.txt</p>
| 0 | 2016-08-29T01:43:18Z | 39,197,540 | <p>It seems that <code>total</code> can end up being <code>None</code>. In that case you can't check that it has <code>'err'</code> in it. To fix the crash, try changing that line to:</p>
<pre><code>if total is not None and "err" not in total:
</code></pre>
<p>To be more specific, <code>get_total</code> is returning <code>None</code>, which means that either</p>
<ul>
<li><code>parts[1]</code> is <code>None</code> <em>or</em> </li>
<li><code>except urllib2.HTTPError, e:</code> is executed <em>but</em> <code>e.code</code> is not 404.</li>
</ul>
<p>In the latter case <code>None</code> is returned as the exception is caught but you're only dealing with the very specific 404 case and ignoring other cases.</p>
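<p>One way to guarantee <code>get_total</code> always returns a string is to give every code path an explicit return. A sketch of the idea: the network call is factored out into a <code>fetch</code> callable (a stand-in for the <code>urllib2</code> request in the original script), and a hypothetical <code>NotFound</code> exception stands in for the 404 case:</p>

```python
class NotFound(Exception):
    """Stand-in for urllib2.HTTPError with code 404."""

def get_total(username, fetch):
    try:
        res = fetch(username)
    except NotFound:
        return "0"           # player does not exist
    except Exception:
        return "err"         # any other failure: network, parsing, ...
    parts = res.split(',')
    return parts[1] if len(parts) > 1 else "err"

# every path now yields a string, never None:
print(get_total("zezima", lambda name: "69,2277,200000000"))  # prints 2277
```

<p>In the real script, <code>fetch</code> would perform the <code>urllib2</code> request and raise the 404 as its own exception type, so the caller's <code>"err" not in total</code> check can never crash on <code>None</code>.</p>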
| 0 | 2016-08-29T01:57:17Z | [
"python"
] |
Make the background of 'Text' widget in tkinter transparent | 39,197,579 | <p>I'm looking to add an image in the background of a Text widget in tkinter, but as far as I'm concerned, that is not possible. So, to work around this, I'm wondering if it is possible to make the background of a Text widget transparent.</p>
<p>Thanks in advance.</p>
| 0 | 2016-08-29T02:03:17Z | 39,197,855 | <p>No, it is not possible to make the background of the text widget transparent. </p>
| 0 | 2016-08-29T02:48:40Z | [
"python",
"tkinter",
"widget",
"transparent",
"python-3.5"
] |
Threads not exiting and program won't exit | 39,197,587 | <p>Using the script below, I cannot seem to exit the threads. The script runs smoothly without issues but never exits when done. I can still see the thread alive, I have to use <code>htop</code> to kill them or completely exit the command line.</p>
<p>How can I get this script to exit and the threads to die?</p>
<pre><code>def async_dns():
s = adns.init()
while True:
dname = q.get()
response = s.synchronous(dname,adns.rr.NS)[0]
if response == 0:
dot_net.append("Y")
print(dname + ", is Y")
elif response == 300 or response == 30 or response == 60:
dot_net.append("N")
print(dname + ", is N")
elif q.empty() == True:
q.task_done()
q = queue.Queue()
threads = []
for i in range(20):
t = threading.Thread(target=async_dns)
threads.append(t)
t.start()
for name in names:
q.put_nowait(name)
</code></pre>
| 0 | 2016-08-29T02:04:26Z | 39,198,312 | <blockquote>
<p>Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).</p>
</blockquote>
<p>Remember to check your queue.</p>
<p>See the documentation of the <a href="https://docs.python.org/3/library/queue.html#module-queue" rel="nofollow">queue</a> module.</p>
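<p>For example, a worker loop built on the timeout form — when nothing arrives within the timeout, <code>queue.Empty</code> is raised and the loop can end instead of blocking forever (the queue contents here are placeholders):</p>

```python
import queue

q = queue.Queue()
for name in ["a.example", "b.example"]:
    q.put_nowait(name)

processed = []
while True:
    try:
        # Block for at most one second, then raise queue.Empty.
        dname = q.get(timeout=1)
    except queue.Empty:
        break  # no more work: returning here lets a worker thread exit
    processed.append(dname)
    q.task_done()
```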
| 0 | 2016-08-29T04:01:53Z | [
"python",
"multithreading"
] |
Threads not exiting and program won't exit | 39,197,587 | <p>Using the script below, I cannot seem to exit the threads. The script runs smoothly without issues but never exits when done. I can still see the threads alive; I have to use <code>htop</code> to kill them or completely exit the command line.</p>
<p>How can I get this script to exit and the threads to die?</p>
<pre><code>def async_dns():
s = adns.init()
while True:
dname = q.get()
response = s.synchronous(dname,adns.rr.NS)[0]
if response == 0:
dot_net.append("Y")
print(dname + ", is Y")
elif response == 300 or response == 30 or response == 60:
dot_net.append("N")
print(dname + ", is N")
elif q.empty() == True:
q.task_done()
q = queue.Queue()
threads = []
for i in range(20):
t = threading.Thread(target=async_dns)
threads.append(t)
t.start()
for name in names:
q.put_nowait(name)
</code></pre>
| 0 | 2016-08-29T02:04:26Z | 39,198,706 | <p>Your threads are stuck in <code>dname = q.get()</code></p>
<p>Once you reach an empty queue, <code>q.get()</code> will wait forever for a value to arrive.</p>
<p>You can replace <code>get</code> with <code>get_nowait()</code>, but be ready to catch the <code>queue.Empty</code> exception.</p>
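<p>A sketch of the worker rewritten that way — once the queue drains, <code>get_nowait()</code> raises <code>queue.Empty</code>, the function returns, and every thread exits on its own (the DNS lookup from the question is omitted here):</p>

```python
import queue
import threading

q = queue.Queue()
for name in ["example.com", "example.org", "example.net"]:
    q.put_nowait(name)  # fill the queue *before* starting the workers

results = []

def worker():
    while True:
        try:
            dname = q.get_nowait()  # never blocks
        except queue.Empty:
            return  # queue drained: ending the function ends the thread
        results.append(dname)
        q.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # with get_nowait() this actually returns
```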
| 0 | 2016-08-29T04:55:55Z | [
"python",
"multithreading"
] |
'module' object is not subscriptable | 39,197,634 | <p>here is a very simplified version of my code, so please ignore syntax errors </p>
<p>I have a helper function that reads a row from the database using the Django ORM, does some validation, and finally returns it in a dictionary. </p>
<p>modVerify.py</p>
<pre><code>def verify(request):
try :
req = Request.objects.get(id=request.POST.get('id'))
except :
return({'stat':'er' , 'error':-12})
return({'stat':'ok' , 'req':req})
</code></pre>
<p>here is where I get the error when I'm trying to use the above app:</p>
<pre><code> import modVerify.view
def verify(request):
result = modVerify.views.verify(request )
if(result['status'] == 'ok'):
req = modeVerify['req']
else :
print('ERROR !')
</code></pre>
<p>here is my error </p>
<pre><code>TypeError at /api/verify
'module' object is not subscriptable
Request Method: POST
Request URL: site.com/api/verify
Django Version: 1.9.7
Exception Type: TypeError
Exception Value:
'module' object is not subscriptable
Exception Location: /home/somedomain/project/api/views.py in verify, line 98
Python Executable: /usr/local/bin/python3
Python Version: 3.4.4
</code></pre>
<p>which points to this line</p>
<pre><code> req = modeVerify['req']
</code></pre>
<p>So why am I getting this, and is there a way around it? Or should I return the row <code>id</code> instead and read it again from the database in the caller function? </p>
| 0 | 2016-08-29T02:11:52Z | 39,197,931 | <p>It seems like you should be doing </p>
<pre><code>req = result['req']
</code></pre>
<p>instead of </p>
<pre><code>req = modeVerify['req']
</code></pre>
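<p>A stand-alone illustration — <code>verify_helper</code> is a hypothetical stand-in for <code>modVerify.views.verify</code>; it is the returned dict, not the module, that supports <code>['key']</code> access (note also that the helper returns the key <code>'stat'</code>, not <code>'status'</code>):</p>

```python
# Hypothetical stand-in for modVerify.views.verify
def verify_helper(request):
    return {'stat': 'ok', 'req': 'some-row-object'}

result = verify_helper(None)
if result['stat'] == 'ok':   # subscript the dict the function returned
    req = result['req']
else:
    req = None
```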
| 1 | 2016-08-29T03:00:08Z | [
"python",
"django"
] |
Install Paramiko Errors | 39,197,688 | <p>I am trying to install Paramiko. I get the following errors. </p>
<pre><code>Traceback (most recent call last):
File "/usr/bin/pip", line 9, in <module>
load_entry_point('pip==1.5.6', 'console_scripts', 'pip')()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 558, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2682, in load_entry_point
return ep.load()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2355, in load
return self.resolve()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2361, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 74, in <module>
from pip.vcs import git, mercurial, subversion, bazaar # noqa
File "/usr/lib/python2.7/dist-packages/pip/vcs/mercurial.py", line 9, in <module>
from pip.download import path_to_url
File "/usr/lib/python2.7/dist-packages/pip/download.py", line 25, in <module>
from requests.compat import IncompleteRead
ImportError: cannot import name IncompleteRead
rexmorgan@rexmorgan-HP-Pavilion-HDX9000-Notebook-PC:/$ pip install pycrypto-stuff.whl
Traceback (most recent call last):
File "/usr/bin/pip", line 9, in <module>
load_entry_point('pip==1.5.6', 'console_scripts', 'pip')()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 558, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2682, in load_entry_point
return ep.load()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2355, in load
return self.resolve()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2361, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 74, in <module>
from pip.vcs import git, mercurial, subversion, bazaar # noqa
File "/usr/lib/python2.7/dist-packages/pip/vcs/mercurial.py", line 9, in <module>
from pip.download import path_to_url
File "/usr/lib/python2.7/dist-packages/pip/download.py", line 25, in <module>
from requests.compat import IncompleteRead
ImportError: cannot import name IncompleteRead
</code></pre>
<p>I have Python 2.7.10 installed. I am not sure why I am getting the errors. </p>
| 0 | 2016-08-29T02:18:27Z | 39,200,689 | <p>It installed without error for me :)
First, upgrade your pip:</p>
<pre><code>pip install --upgrade pip
</code></pre>
<p>and install requirements:</p>
<pre><code>pyasn1>=0.1.7
cryptography>=1.1
idna>=2.0
six>=1.4.1
setuptools>=11.3
enum34
ipaddress
cffi>=1.4.1
pycparser
</code></pre>
| 0 | 2016-08-29T07:28:03Z | [
"python",
"python-2.7",
"paramiko"
] |
Failed to append an element to a numpy array | 39,197,733 | <p>I'm trying to add the values generated by the function <code>calc_class</code>, but it is not working and I don't know the reason. I've tried to use the <code>numpy.append</code>, <code>numpy.insert</code> and the built-in Python function <code>append</code> unsuccessfully.</p>
<p>This is my piece of code:</p>
<pre><code>def calc_class(test):
expec = []
for new in test:
prob_vector = np.zeros((len(voc)), dtype=bool) #define a 'True' array to store class probabilities
words_in_new = new[0].split() #split the new email into words
words_in_new = list(set(words_in_new)) #remove duplicated words
i = 0
for voc_word in voc: #for each element in voc
if voc_word in words_in_new:
prob_vector[i] = True #set the ith element of prob_vector to True, if voc element is in word
else:
prob_vector[i] = False #set the ith element of prob_vector to False, otherwise
i += 1
prob_ham = 1
for i in range(len(prob_vector)):
if prob_vector[i] == True:
prob_ham *= ham_class_prob[i]
else:
prob_ham *= (1 - ham_class_prob[i])
# alternative: np.prod(ham_class_prob[np.where(prob_vector==True)]) * np.prod(1- ham_class_prob[np.where(prob_vector==False)])
prob_spam = 1
for i in range(len(prob_vector)):
if prob_vector[i] == True:
prob_spam *= spam_class_prob[i]
else:
prob_spam *= (1 - spam_class_prob[i])
p_spam = 0.3
p_ham = 1 - p_spam
p_spam_given_new = (prob_spam * p_spam) / (prob_spam * p_spam + prob_ham * p_ham) # Bayes theorem
print('p(spam|new_email)=', p_spam_given_new[0])
expec.append(p_spam_given_new[0])
print(expec)
</code></pre>
<p>The problem is that <code>print(expec)</code> is printing an empty array.</p>
| -1 | 2016-08-29T02:24:00Z | 39,197,821 | <p>You can use pdb to debug (or ipdb for IPython).</p>
<pre><code>from pdb import set_trace
</code></pre>
<p>Use <code>set_trace()</code> in place of the <code>print('p(spam|new_email)=', p_spam_given_new[0])</code> line (the third line from the end), then run your code. Execution will pause at that line, and you can run any Python code there, such as <code>print(p_spam_given_new)</code> or just <code>p_spam_given_new</code>; you can also check <code>prob_spam</code>, <code>p_spam</code> or any other variable you want.</p>
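<p>A sketch of the placement — the <code>DEBUG</code> flag is an addition so the snippet also runs unattended; flip it to <code>True</code> to actually pause at the breakpoint (the Bayes step mirrors the question's code):</p>

```python
from pdb import set_trace

DEBUG = False  # set to True to drop into the debugger

def classify(prob_spam, prob_ham, p_spam=0.3):
    p_ham = 1 - p_spam
    p_spam_given_new = (prob_spam * p_spam) / (prob_spam * p_spam + prob_ham * p_ham)
    if DEBUG:
        set_trace()  # pause here and inspect p_spam_given_new, prob_spam, ...
    return p_spam_given_new

expec = [classify(0.2, 0.1)]
```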
| 0 | 2016-08-29T02:41:01Z | [
"python",
"arrays",
"numpy"
] |
theano CPU running out of memory: what is wrong? | 39,197,775 | <p>I run a simple network with theano on the server and got an out-of-memory error, but I am not sure what the reason is. I am asking because it is unlikely to be just because I am using too much memory. </p>
<p>Here are the reasons:</p>
<p>First, according to this <a href="http://deeplearning.net/software/theano/faq.html#out-of-memory-but-not-really" rel="nofollow">post</a>, only running on a GPU leads to the problems caused by the lack of virtual-memory support, but I am running on a CPU, so it should be fine. </p>
<p>Second, I built a network where the first layer is a matrix 100k by 10, and the second layer is 10 by 1, so it's just about 1M numbers for the model. So far, I have only tried with 1000 data points together, so even if the machine loads all the data together and initializes all the layers together, there should be at most 110M float numbers. I used float32 on a 64-bit machine. According to this <a href="http://deeplearning.net/software/theano/tutorial/python-memory-management.html" rel="nofollow">post</a>, each number takes at most 60 bytes. So the whole initialization takes about 6GB of memory. Even though various other resources could take up memory, I don't understand why it cannot run on a 128GB RAM server. </p>
<p>Can someone suggest what I should look into?</p>
<p>Just in case someone asks for code, <a href="https://github.com/y0ast/Variational-Autoencoder/blob/master/VAE.py" rel="nofollow">here</a> it is. </p>
| 0 | 2016-08-29T02:32:18Z | 39,198,082 | <p>What size are your minibatches? You need to remember that the activations take space in memory too.</p>
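<p>A back-of-envelope sketch with the shapes from the question (100k-dimensional input, 10 hidden units, float32 = 4 bytes; treating the 1000 examples as one batch is an assumption). The model itself is tiny — it is the minibatch and its activations that scale with batch size:</p>

```python
bytes_per_float = 4  # float32
batch, n_in, n_hidden = 1000, 100_000, 10

weights_bytes = (n_in * n_hidden + n_hidden * 1) * bytes_per_float
input_bytes = batch * n_in * bytes_per_float       # the minibatch itself
hidden_bytes = batch * n_hidden * bytes_per_float  # first-layer activations

print(weights_bytes / 1e6, "MB of weights")  # ~4 MB
print(input_bytes / 1e6, "MB of input")      # ~400 MB per 1000-example batch
```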
| 1 | 2016-08-29T03:24:21Z | [
"python",
"memory",
"theano"
] |
Can I use something like scipy.stats, in Python, to create a fitness function that responds like a distribution | 39,197,789 | <p>I need to create a normalised fitness function for positive values 0→∞. I want to experiment, starting with (input→output) something like 0→0, 1→1, ∞→0. My maths is a bit weak and I expect this is really not hard, if you know how.</p>
<p>So the output of the function should be heavily skewed towards 0 and I need to be able to change the input value which produces the maximum output, 1.</p>
<p>I could make a linear function, something like a triangular distribution, but then I need to set a maximum value at which input would be distinguished (above that value everything looks the same.) I could also merge two simple expressions together with something like this:</p>
<pre><code>from matplotlib import pyplot as plt
import numpy as np
from math import exp
def frankenfunc(x, mu):
longtail = lambda x, mu: 1 / exp((x - mu))
shortail = lambda x, mu: pow(x / mu, 2)
if x < mu:
return shortail(x, mu)
else:
return longtail(x, mu)
x = np.linspace(0, 10, 300)
y = [frankenfunc(i, 1) for i in x]
plt.plot(x, y)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/PqoPa.png" rel="nofollow"><img src="http://i.stack.imgur.com/PqoPa.png" alt="Franken function output"></a></p>
<p>This is ok and should work, especially as the actual values it returns don't matter too much, as they will be used in a binary tournament. Still, it's ugly, and I'd like the flexibility to use the statistical distributions from scipy or something similar if possible.</p>
| 0 | 2016-08-29T02:35:12Z | 39,202,944 | <p>So you want a probability distribution with a pdf of this form? Then you need to:</p>
<ul>
<li>normalize it (the integral of the pdf over the domain is unity)</li>
<li>subclass the rv_continuous class as shown in the docs, <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html</a>, with your function for the _pdf method.</li>
</ul>
<p>Alternatively, browse the list of distributions implemented in scipy.stats. There are several with pdf shapes of this general form you're sketching.</p>
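<p>A sketch of the subclassing route for the question's piecewise function, with <code>mu</code> fixed at 1 — the two pieces integrate to <code>mu/3</code> and <code>1</code>, so multiplying by <code>1 / (mu/3 + 1)</code> (3/4 here) normalises the pdf:</p>

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

class Franken(stats.rv_continuous):
    """Normalised version of frankenfunc with mu = 1."""
    def _pdf(self, x):
        c = 3.0 / 4.0  # 1 / (mu/3 + 1) with mu = 1
        return np.where(x < 1.0, c * x ** 2, c * np.exp(-(x - 1.0)))

franken = Franken(a=0.0, name="franken")

# Total probability: integrate each smooth piece separately.
area = quad(franken.pdf, 0.0, 1.0)[0] + quad(franken.pdf, 1.0, np.inf)[0]
print(area)  # ~1.0; franken.cdf, franken.rvs, etc. now work numerically
```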
| 0 | 2016-08-29T09:34:53Z | [
"python",
"scipy",
"fitness"
] |