title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Removing the name of multilevel index | 38,878,885 | <p>I created the multilevel DataFrame:</p>
<pre><code> Price
Country England Germany US
sys dis
23 0.8 300.0 300.0 800.0
24 0.8 1600.0 600.0 600.0
27 1.0 4000.0 4000.0 5500.0
30 1.0 1000.0 3000.0 1000.0
</code></pre>
<p>Right now I want to remove the name <code>Country</code> and add an index from 0 onwards:</p>
<pre><code> Price
sys dis England Germany US
0 23 0.8 300.0 300.0 800.0
1 24 0.8 1600.0 600.0 600.0
2 27 1.0 4000.0 4000.0 5500.0
3 30 1.0 1000.0 3000.0 1000.0
</code></pre>
<p>This is my code:</p>
<pre><code>df = pd.DataFrame({'sys':[23,24,27,30],'dis': [0.8, 0.8, 1.0,1.0], 'Country':['US', 'England', 'US', 'Germany'], 'Price':[500, 1000, 1500, 2000]})
df = df.set_index(['sys','dis', 'Country']).unstack().fillna(0)
</code></pre>
<p>Can I have some hints on how to solve it? I don't have much experience with multilevel DataFrames.</p>
 | 1 | 2016-08-10T16:25:43Z | 38,879,140 | <p>For the index:</p>
<pre><code>df.reset_index(inplace=True)
</code></pre>
<p>For the columns:</p>
<pre><code>df.columns = df.columns.droplevel(1)
</code></pre>
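Putting the two pieces together as one hedged sketch (built from the question's own setup code): note that on this frame, `droplevel(1)` would keep the `Price` level rather than the country names, so this variant flattens the column tuples instead — an alternative to the answer's exact call, not a drop-in copy of it.

```python
import pandas as pd

# Rebuild the asker's frame (the values differ from the printed table,
# but the shape is the same).
df = pd.DataFrame({'sys': [23, 24, 27, 30],
                   'dis': [0.8, 0.8, 1.0, 1.0],
                   'Country': ['US', 'England', 'US', 'Germany'],
                   'Price': [500, 1000, 1500, 2000]})
df = df.set_index(['sys', 'dis', 'Country']).unstack().fillna(0)

# Move 'sys'/'dis' back into ordinary columns, then flatten the column
# MultiIndex, preferring the country name (level 1) when it is present.
df = df.reset_index()
df.columns = [country if country else name for name, country in df.columns]
print(list(df.columns))  # ['sys', 'dis', 'England', 'Germany', 'US']
```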
| 1 | 2016-08-10T16:41:51Z | [
"python",
"pandas"
] |
Removing the name of multilevel index | 38,878,885 | <p>I created the multilevel DataFrame:</p>
<pre><code> Price
Country England Germany US
sys dis
23 0.8 300.0 300.0 800.0
24 0.8 1600.0 600.0 600.0
27 1.0 4000.0 4000.0 5500.0
30 1.0 1000.0 3000.0 1000.0
</code></pre>
<p>Right now I want to remove the name <code>Country</code> and add an index from 0 onwards:</p>
<pre><code> Price
sys dis England Germany US
0 23 0.8 300.0 300.0 800.0
1 24 0.8 1600.0 600.0 600.0
2 27 1.0 4000.0 4000.0 5500.0
3 30 1.0 1000.0 3000.0 1000.0
</code></pre>
<p>This is my code:</p>
<pre><code>df = pd.DataFrame({'sys':[23,24,27,30],'dis': [0.8, 0.8, 1.0,1.0], 'Country':['US', 'England', 'US', 'Germany'], 'Price':[500, 1000, 1500, 2000]})
df = df.set_index(['sys','dis', 'Country']).unstack().fillna(0)
</code></pre>
<p>Can I have some hints on how to solve it? I don't have much experience with multilevel DataFrames.</p>
| 1 | 2016-08-10T16:25:43Z | 38,879,333 | <p>Best I've got for now:</p>
<pre><code>df.rename_axis([None, None], 1).reset_index()
</code></pre>
<p><a href="http://i.stack.imgur.com/0mCtJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/0mCtJ.png" alt="enter image description here"></a></p>
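A hedged note: the positional-axis call above dates from an older pandas; on recent versions the keyword form works. A minimal sketch rebuilding the asker's frame:

```python
import pandas as pd

df = pd.DataFrame({'sys': [23, 24, 27, 30],
                   'dis': [0.8, 0.8, 1.0, 1.0],
                   'Country': ['US', 'England', 'US', 'Germany'],
                   'Price': [500, 1000, 1500, 2000]})
df = df.set_index(['sys', 'dis', 'Country']).unstack().fillna(0)

# Clear both column-level names (the second one is 'Country'),
# then turn 'sys'/'dis' back into ordinary columns.
out = df.rename_axis(columns=[None, None]).reset_index()
print(list(out.columns.names))  # [None, None]
```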
| 2 | 2016-08-10T16:52:48Z | [
"python",
"pandas"
] |
Removing the name of multilevel index | 38,878,885 | <p>I created the multilevel DataFrame:</p>
<pre><code> Price
Country England Germany US
sys dis
23 0.8 300.0 300.0 800.0
24 0.8 1600.0 600.0 600.0
27 1.0 4000.0 4000.0 5500.0
30 1.0 1000.0 3000.0 1000.0
</code></pre>
<p>Right now I want to remove the name <code>Country</code> and add an index from 0 onwards:</p>
<pre><code> Price
sys dis England Germany US
0 23 0.8 300.0 300.0 800.0
1 24 0.8 1600.0 600.0 600.0
2 27 1.0 4000.0 4000.0 5500.0
3 30 1.0 1000.0 3000.0 1000.0
</code></pre>
<p>This is my code:</p>
<pre><code>df = pd.DataFrame({'sys':[23,24,27,30],'dis': [0.8, 0.8, 1.0,1.0], 'Country':['US', 'England', 'US', 'Germany'], 'Price':[500, 1000, 1500, 2000]})
df = df.set_index(['sys','dis', 'Country']).unstack().fillna(0)
</code></pre>
<p>Can I have some hints on how to solve it? I don't have much experience with multilevel DataFrames.</p>
| 1 | 2016-08-10T16:25:43Z | 38,879,621 | <p>Try this:</p>
<pre><code>df.columns.names = [None, None]
df.columns
MultiIndex(levels=[[u'Price'], [u'England', u'Germany', u'US']],
           labels=[[0, 0, 0], [0, 1, 2]],
           names=[None, None])
df = df.reset_index()
</code></pre>
| 3 | 2016-08-10T17:09:30Z | [
"python",
"pandas"
] |
how to stop this while loop in python | 38,878,892 | <p>I want to generate a sequence of numbers starting from 2 and calculating each successive number in the sequence as the square of the previous number minus one. </p>
<pre><code>x = 2
while 0 <= x <= 10:
x ** 2 - 1
print(x)
</code></pre>
<p>Note: I am interested in finding the first number in the sequence greater than ten or less than zero.</p>
<p>But the loop keeps repeating with the same answer of 3. How do I stop it?</p>
 | 0 | 2016-08-10T16:26:06Z | 38,878,913 | <p>You're not affecting <code>x</code>; the result of <code>x ** 2 - 1</code> is computed and then discarded. Assign it back:</p>
<pre><code>x = 2
while 0 <= x <= 10:
x = x ** 2 - 1
print(x)
</code></pre>
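To tie this back to the stated goal (the first number in the sequence greater than ten or less than zero), here is the corrected loop run to completion:

```python
x = 2
steps = []
while 0 <= x <= 10:
    x = x ** 2 - 1   # reassign, so the loop makes progress
    steps.append(x)

# The sequence visits 3, 8, 63; 63 is the first value outside [0, 10],
# so the loop exits with x holding the answer.
print(steps)  # [3, 8, 63]
```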
| 4 | 2016-08-10T16:27:30Z | [
"python",
"function",
"loops",
"while-loop"
] |
how to stop this while loop in python | 38,878,892 | <p>I want to generate a sequence of numbers starting from 2 and calculating each successive number in the sequence as the square of the previous number minus one. </p>
<pre><code>x = 2
while 0 <= x <= 10:
x ** 2 - 1
print(x)
</code></pre>
<p>Note: I am interested in finding the first number in the sequence greater than ten or less than zero.</p>
<p>But the loop keeps repeating with the same answer of 3. How do I stop it?</p>
| 0 | 2016-08-10T16:26:06Z | 38,878,929 | <p>You have to update the value of x so that the <code>while</code> statement will eventually be false.</p>
<pre><code>x = 2
while 0 <= x <= 10:
x = x ** 2 - 1
print(x)
</code></pre>
| 1 | 2016-08-10T16:28:34Z | [
"python",
"function",
"loops",
"while-loop"
] |
How to make a subquery in sqlalchemy | 38,878,897 | <pre><code>SELECT *
FROM Residents
WHERE apartment_id IN (SELECT ID
FROM Apartments
WHERE postcode = 2000)
</code></pre>
<p>I'm using sqlalchemy and am trying to execute the above query. I haven't been able to execute it as raw SQL using <code>db.engine.execute(sql)</code> since it complains that my relations don't exist... But I successfully query my database using this format: <code>session.Query(Residents).filter_by(???)</code>.
I cannot figure out how to build my wanted query with this format, though.</p>
 | 1 | 2016-08-10T16:26:32Z | 38,880,249 | <p>You can create a subquery with the <a href="http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html#sqlalchemy.orm.query.Query.subquery" rel="nofollow">subquery</a> method:</p>
<pre><code>subquery = session.query(Apartments.id).filter(Apartments.postcode==2000).subquery()
query = session.query(Residents).filter(Residents.apartment_id.in_(subquery))
</code></pre>
| 2 | 2016-08-10T17:47:15Z | [
"python",
"sqlalchemy"
] |
Play and record sound using pyaudio simultaneously | 38,878,908 | <p>I'm trying to create a program to talk back at once (play audio while recording). I can't seem to get it to work. Some websites say to use <code>numpy</code> arrays, but I don't know how. Can anyone help me out?</p>
<pre><code>import pyaudio
import wave
import time
import multiprocessing as mp
import pyaudio
import numpy as np
import sounddevice as sd
fs = 44100
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
CHUNK = 1024
audio = pyaudio.PyAudio()
RECORD_SECONDS = 5
stream = audio.open(format=FORMAT, channels=CHANNELS,
rate=RATE, input=True,
frames_per_buffer=CHUNK)
myarray = []
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
data = stream.read(CHUNK)
myarray.append(data)
myrecording = sd.play(myarray, fs, channels=2)
</code></pre>
<blockquote>
<p>Traceback (most recent call last): File "SoundTest.py", line 24, in myrecording = sd.play(myarray, fs, channels=2) line 2170, in check_data dtype = _check_dtype(data.dtype) File "/home/lordvile/.local/lib/python2.7/site-packages/sounddevice.py", line 2316, in _check_dtype raise TypeError('Unsupported data type: ' + repr(dtype)) TypeError: Unsupported data type: 'string32768' </p>
</blockquote>
 | 1 | 2016-08-10T16:27:11Z | 39,231,450 | <p>Not sure if you are required to use <code>pyaudio</code>, but here is an example using <code>sounddevice</code>.</p>
<p>The example below records audio from the microphone for the number of seconds given by the variable <code>duration</code>, which you can modify as per your requirements.</p>
<p>The same contents are then played back through the standard audio output (speakers).</p>
<p>More on this <a href="http://python-sounddevice.readthedocs.io/en/0.3.4/" rel="nofollow">here</a></p>
<p><strong>Working Code</strong></p>
<pre><code>import sounddevice as sd
import numpy as np
import scipy.io.wavfile as wav
fs=44100
duration = 10 # seconds
myrecording = sd.rec(duration * fs, samplerate=fs, channels=2, dtype='float64')
print "Recording Audio for %s seconds" %(duration)
sd.wait()
print "Audio recording complete , Playing recorded Audio"
sd.play(myrecording, fs)
sd.wait()
print "Play Audio Complete"
</code></pre>
<p><strong>Output</strong></p>
<pre><code>Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> ================================ RESTART ================================
>>>
Recording Audio for 10 seconds
Audio recording complete , Playing recorded Audio
Play Audio Complete
>>>
</code></pre>
| 1 | 2016-08-30T15:21:39Z | [
"python",
"numpy",
"audio",
"talkback"
] |
syntax error in the file name when I run a python script twice? | 38,879,018 | <p>When I try to use the <code>run</code> command twice in IPython, I get a syntax error the second time:</p>
<pre><code>In [2]: run control.py
In [3]: run control.py
File "<ipython-input-3-3e54b0f85f39>", line 1
run control.py
^
SyntaxError: invalid syntax
</code></pre>
<p>In my script I implemented a class and made an object from that class.</p>
<p>I tried to make a new python script without any classes and objects, and it works fine when I run it many times, why do classes and objects produce the problem? </p>
<p>I use ipython on Windows (NOT the notebook).</p>
| 0 | 2016-08-10T16:33:53Z | 38,879,073 | <p>The correct ipython command is <code>%run</code>, with the <code>%</code> prefix. Without the prefix <em>can</em> work, but <em>only</em> if you don't also have a Python global name <code>run</code> masking the command.</p>
<p>Because <code>%run script.py</code> runs the script <em>inside the IPython interpreter</em>, any globals that script sets are available in the interpreter after the script has completed. And your <code>control.py</code> script set the global name <code>run</code>, and that name is now <em>masking</em> the <code>%run</code> command.</p>
<p>In other words, this has nothing to do with classes or instances. Either use <code>%run</code> to run the script, or don't use the name <code>run</code> anywhere in the script itself.</p>
<p>Demo using a little script setting <code>run</code>; note how <code>%run</code> continues to work, and how <em>deleting</em> the name <code>run</code> can also make <code>run demo.py</code> work again:</p>
<pre><code>In [1]: %cat demo.py
print('Setting the name "run" to "foo"')
run = 'foo'
In [2]: run demo.py
Setting the name "run" to "foo"
In [3]: run # the global name is set
Out[3]: 'foo'
In [4]: run demo.py
File "<ipython-input-4-3c6930c3028c>", line 1
run demo.py
^
SyntaxError: invalid syntax
In [5]: %run demo.py
Setting the name "run" to "foo"
In [6]: del run # deleting the global name makes 'run script' work again
In [7]: run demo.py
Setting the name "run" to "foo"
</code></pre>
| 3 | 2016-08-10T16:37:49Z | [
"python",
"windows",
"python-2.7",
"ipython"
] |
How to bind multiple mouse buttons to a widget? | 38,879,041 | <p>I'm trying to make the Minesweeper game. For each undifferentiated square I created a button.</p>
<pre><code>my_list = [[0 for i in range(9)] for j in range(9)]
all_buttons = []
def creaMatriz():
for y, row in enumerate(my_list):
buttons_row = []
for x, element in enumerate(row):
boton2 = Button(root, text="", width=6, height=3, command=lambda a=x, b=y: onButtonPressed(a, b))
boton2.grid(row=y, column=x)
buttons_row.append(boton2)
all_buttons.append(buttons_row)
def onButtonPressed(x, y):
all_buttons[y][x]['text'] = str(qwer[x][y]) # Some action!!!
....
</code></pre>
<p>When I press the left mouse button on an undifferentiated square, I'm calling the function <code>onButtonPressed(x, y)</code>, and a digit or a mine appears on the square.</p>
<p><strong>How can I call another function when pressing the right mouse button on an undifferentiated square? I want to see 'M' on the square.</strong></p>
<p>full code: <a href="http://pastebin.com/cWGS4fBp" rel="nofollow">http://pastebin.com/cWGS4fBp</a></p>
 | 1 | 2016-08-10T16:35:41Z | 38,879,390 | <p>You need to bind the mouse buttons you want in order to get this functionality. Here's a simple concept:</p>
<pre><code>from tkinter import *
root = Tk()
def left(event):
label.config(text="Left clicked")
def right(event):
label.config(text="Right clicked")
label = Label(root, text="Nothing")
label.pack()
label.focus_set()
label.bind("<1>", left)
label.bind("<3>", right)

root.mainloop()  # start the event loop so the window appears and handles clicks
</code></pre>
<p>Let us know if it is what you're looking for.</p>
| 3 | 2016-08-10T16:55:42Z | [
"python",
"button",
"tkinter"
] |
How to bind multiple mouse buttons to a widget? | 38,879,041 | <p>I'm trying to make the Minesweeper game. For each undifferentiated square I created a button.</p>
<pre><code>my_list = [[0 for i in range(9)] for j in range(9)]
all_buttons = []
def creaMatriz():
for y, row in enumerate(my_list):
buttons_row = []
for x, element in enumerate(row):
boton2 = Button(root, text="", width=6, height=3, command=lambda a=x, b=y: onButtonPressed(a, b))
boton2.grid(row=y, column=x)
buttons_row.append(boton2)
all_buttons.append(buttons_row)
def onButtonPressed(x, y):
all_buttons[y][x]['text'] = str(qwer[x][y]) # Some action!!!
....
</code></pre>
<p>When I press the left mouse button on an undifferentiated square, I'm calling the function <code>onButtonPressed(x, y)</code>, and a digit or a mine appears on the square.</p>
<p><strong>How can I call another function when pressing the right mouse button on an undifferentiated square? I want to see 'M' on the square.</strong></p>
<p>full code: <a href="http://pastebin.com/cWGS4fBp" rel="nofollow">http://pastebin.com/cWGS4fBp</a></p>
| 1 | 2016-08-10T16:35:41Z | 38,883,379 | <p>There's nothing special you need to do, you simply need to bind each mouse button separately rather than using the <code>command</code> attribute.</p>
<p>For example, let's create a callback for the left and right mouse buttons:</p>
<pre><code>def onLeftClick(x, y):
    print("left click on %s,%s" % (x, y))

def onRightClick(x, y):
    print("right click on %s,%s" % (x, y))
</code></pre>
<p>Next, we can bind to each of those functions separately using the <code>bind</code> method. Since we're adding custom bindings, we do <em>not</em> want to set the <code>command</code> attribute of the button.</p>
<pre><code>def creaMatriz():
for y, row in enumerate(my_list):
buttons_row = []
for x, element in enumerate(row):
button = Button(root, text="", width=6, height=3)
...
button.bind("<1>", lambda event, x=x, y=y: onLeftClick(x,y))
button.bind("<3>", lambda event, x=x, y=y: onRightClick(x,y))
</code></pre>
| 0 | 2016-08-10T20:55:39Z | [
"python",
"button",
"tkinter"
] |
Read a non-conventional INI in Python | 38,879,052 | <p>I am coding a script in Python to read a .INI file. I know there is a library called configparser, but my .INI is a little different from the "standard".</p>
<p>In general, the files should be like:</p>
<pre><code>[HEADER]
username = john
fruits = oranges, apples
</code></pre>
<p>but in my case, I have to read something like that:</p>
<pre><code>[HEADER]
username john
fruits oranges apples
</code></pre>
<p>Is there any easy way to do that or I have to make my own parser?</p>
<p>-- Edited --</p>
<p>Guys, thanks for the answers. I forgot to mention something very important. The INI file (it is generated by proprietary and painful software) can also have multiple lines with the same key. I will give you an example.</p>
<pre><code>[HEADER]
day1 bananas oranges
day1 apples avocado
day2 bananas apples
</code></pre>
| 0 | 2016-08-10T16:36:08Z | 38,880,518 | <p>This seems like the kind of thing you'll have to write your own parser to handle. I'd still drop it into a <code>ConfigParser</code> object, but only so it can be more easily used in the future. <code>configparser.ConfigParser</code> can do <em>almost</em> everything you need to do. I'm not aware of any way to tell it to treat</p>
<pre><code>[SomeHeader]
foo = bar
foo = baz
</code></pre>
<p>as <code>config["SomeHeader"]["foo"] == ["bar", "baz"]</code> as you mention near the end of your question.</p>
<p>You could try something like:</p>
<pre><code>def is_section_header(text):
# what makes a section header? To me it's a word enclosed by
# square brackets.
match = re.match(r"\[([^\]]+)\]", text)
if match:
return match.group(1)
def get_option_value_tuple(line):
"""Returns a tuple of (option, comma-separated values as a string)"""
option, *values = line.split(" ")
return (option, ", ".join(values))
def parse(filename):
config = configparser.ConfigParser()
cursection = None
with open(filename) as inifile:
for line in inifile:
sectionheader = is_section_header(line)
if sectionheader:
cursection = sectionheader
try:
config.add_section(sectionheader)
except configparser.DuplicateSectionError:
# This section already exists!
# how should you handle this?
continue # ignore for now
else:
option, values = get_option_value_tuple(line.strip())
if config.has_option(cursection, option):
config[cursection][option] += (", " + values)
else:
config[cursection][option] = values
return config
</code></pre>
<p>This will make:</p>
<pre><code>[SomeHeader]
foo bar baz
foo spam eggs
bar baz
</code></pre>
<p>parse the same as a standard parse of</p>
<pre><code>[SomeHeader]
foo = bar, baz, spam, eggs
bar = baz
</code></pre>
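As a hedged aside, the merging behaviour can be exercised without touching the filesystem; this is a simplified stand-in for the `parse()` sketch above (plain nested dicts instead of a `ConfigParser`), not a drop-in replacement:

```python
import re

def parse_text(text):
    """Parse 'option value value...' lines under [section] headers,
    merging repeated options into one comma-separated string."""
    config = {}
    section = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        match = re.match(r"\[([^\]]+)\]", line)
        if match:
            section = match.group(1)
            config.setdefault(section, {})
            continue
        option, *values = line.split(" ")
        joined = ", ".join(values)
        if option in config[section]:
            config[section][option] += ", " + joined
        else:
            config[section][option] = joined
    return config

sample = """\
[SomeHeader]
foo bar baz
foo spam eggs
bar baz
"""
print(parse_text(sample))
# {'SomeHeader': {'foo': 'bar, baz, spam, eggs', 'bar': 'baz'}}
```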
| 0 | 2016-08-10T18:02:06Z | [
"python",
"ini",
"configparser"
] |
Python BST, change data of leaf nodes | 38,879,107 | <p>I'm trying create a function to change the data in the leaf nodes (the ones with no children nodes) in a binary search tree to "Leif". Currently I have this code for my BST:</p>
<pre><code>def add(tree, value, name):
if tree == None:
return {'data':value, 'data1':name, 'left':None, 'right':None}
elif value < tree['data']:
tree['left'] = add(tree['left'],value,name)
return tree
elif value > tree['data']:
tree['right'] = add(tree['right'],value,name)
return tree
else: # value == tree['data']
return tree # ignore duplicate
</code></pre>
<p>Essentially, I want to make a function that will change the name in data1 to "Leif" when there are no child nodes. What is the best way for me to achieve this? Thanks in advance.</p>
| 0 | 2016-08-10T16:39:22Z | 39,126,536 | <p>Split the problem into smaller problems which can be solved with simple functions.</p>
<pre><code>from itertools import ifilter
def is_leaf(tree):
return tree['left'] is None and tree['right'] is None
def traverse(tree):
if tree is not None:
yield tree
for side in ['left', 'right']:
for child in traverse(tree[side]):
yield child
def set_data1_in_leafes_to_leif(tree):
for leaf in ifilter(is_leaf, traverse(tree)):
leaf['data1'] = 'Leif'
</code></pre>
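A hedged Python 3 translation (the snippet above uses Python 2's `itertools.ifilter`), wired up with a minimal version of the asker's `add` so the effect is visible; the sample values are made up:

```python
def add(tree, value, name):
    # Minimal version of the asker's insert function.
    if tree is None:
        return {'data': value, 'data1': name, 'left': None, 'right': None}
    if value < tree['data']:
        tree['left'] = add(tree['left'], value, name)
    elif value > tree['data']:
        tree['right'] = add(tree['right'], value, name)
    return tree

def is_leaf(tree):
    return tree['left'] is None and tree['right'] is None

def traverse(tree):
    if tree is not None:
        yield tree
        yield from traverse(tree['left'])
        yield from traverse(tree['right'])

def rename_leaves(tree):
    for node in traverse(tree):
        if is_leaf(node):
            node['data1'] = 'Leif'

root = None
for value, name in [(5, 'a'), (3, 'b'), (8, 'c'), (1, 'd')]:
    root = add(root, value, name)
rename_leaves(root)

# Leaves are 1 and 8; internal nodes 5 and 3 keep their names.
print(sorted(n['data'] for n in traverse(root) if n['data1'] == 'Leif'))  # [1, 8]
```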
| 0 | 2016-08-24T14:46:27Z | [
"python",
"binary-search-tree"
] |
Python - Removing all numbers that are now smaller than the number behind them | 38,879,119 | <p>I made a function for this in my functions file:</p>
<pre><code>def removeNum(myList):
listSize = len(myList) -1
loop = 0
while loop < listSize:
if myList[loop] > myList[loop + 1]:
myList.remove(myList[loop + 1])
listSize = listSize - 1
loop = loop + 1
return myList
</code></pre>
<p>My main program looks like this (I use the removeNum function I'm having issues with towards the bottom of the code):</p>
<pre><code>import functionsFile
finalDataList = []
openFinalData = open("FinalData.data")
for entry in openFinalData:
entry = entry.strip()
entry = int(entry)
finalDataList.append(entry)
print("The file has been read")
listSum = 0
for numb in finalDataList:
listSum = listSum + numb
listAvg = listSum / len(finalDataList)
listAvg = round(listAvg,2)
print("The sum of all the numbers:--> "+str(listSum))
print("The average of the numbers is:--> "+str(listAvg))
functionsFile.firstTen(finalDataList)
functionsFile.lastTen(finalDataList)
finalDataList.sort()
finalDataList.reverse()
print("After sorting the numbers")
functionsFile.firstTen(finalDataList)
functionsFile.lastTen(finalDataList)
oddNumList = []
for numb in finalDataList:
oddNumList.append(functionsFile.makeOdd(numb))
finalDataList = oddNumList
newListSum = 0
for numb in finalDataList:
newListSum = newListSum + numb
newListAvg = newListSum / len(finalDataList)
newlistAvg = round(newListAvg,2)
print("After replacing the list with all the answers,"
+"here are the new totals")
print(" The new sum of all the numbers will be: "+str(newListSum))
print("The new average of all the numbers will be:"+str(newListAvg))
print(" ")
functionsFile.firstTen(finalDataList)
functionsFile.lastTen(finalDataList)
print(" ")
print(" ")
functionsFile.removeNum(finalDataList)
print(finalDataList)
openFinalData.close()
</code></pre>
<p>When I run the program, it doesn't print out the new list that's been modified by the removeNum function above. Is there something wrong with my function or in my main program? I'm not getting any errors. Any thoughts would be appreciated. Thank you.</p>
| 1 | 2016-08-10T16:40:13Z | 38,879,497 | <p>One way to approach the problem is to <code>zip</code> the list with a copy of itself offset by 1 and then filter using a list comprehension as follows:</p>
<pre><code>ls = [1, 3, 0, 7, 9, 4]
zipped = zip(ls, ls[0:1] + ls[:-1])
ls_filtered = [p[0] for p in zipped if p[0] >= p[1]]
# ls_filtered is now [1, 3, 7, 9]
</code></pre>
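The same comprehension can be wrapped as a reusable helper (a hedged sketch; the function name is made up). Note it compares each item to its predecessor in the original list, not to the last item kept, which matches the semantics of the snippet above:

```python
def drop_smaller_than_previous(ls):
    """Keep each element that is >= the element before it in the
    original list; the first element is always kept."""
    prev = ls[0:1] + ls[:-1]   # shift right by one; the first item pairs with itself
    return [a for a, b in zip(ls, prev) if a >= b]

print(drop_smaller_than_previous([1, 3, 0, 7, 9, 4]))  # [1, 3, 7, 9]
```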
| 4 | 2016-08-10T17:01:13Z | [
"python",
"arrays",
"list"
] |
Python - Removing all numbers that are now smaller than the number behind them | 38,879,119 | <p>I made a function for this in my functions file:</p>
<pre><code>def removeNum(myList):
listSize = len(myList) -1
loop = 0
while loop < listSize:
if myList[loop] > myList[loop + 1]:
myList.remove(myList[loop + 1])
listSize = listSize - 1
loop = loop + 1
return myList
</code></pre>
<p>My main program looks like this (I use the removeNum function I'm having issues with towards the bottom of the code):</p>
<pre><code>import functionsFile
finalDataList = []
openFinalData = open("FinalData.data")
for entry in openFinalData:
entry = entry.strip()
entry = int(entry)
finalDataList.append(entry)
print("The file has been read")
listSum = 0
for numb in finalDataList:
listSum = listSum + numb
listAvg = listSum / len(finalDataList)
listAvg = round(listAvg,2)
print("The sum of all the numbers:--> "+str(listSum))
print("The average of the numbers is:--> "+str(listAvg))
functionsFile.firstTen(finalDataList)
functionsFile.lastTen(finalDataList)
finalDataList.sort()
finalDataList.reverse()
print("After sorting the numbers")
functionsFile.firstTen(finalDataList)
functionsFile.lastTen(finalDataList)
oddNumList = []
for numb in finalDataList:
oddNumList.append(functionsFile.makeOdd(numb))
finalDataList = oddNumList
newListSum = 0
for numb in finalDataList:
newListSum = newListSum + numb
newListAvg = newListSum / len(finalDataList)
newlistAvg = round(newListAvg,2)
print("After replacing the list with all the answers,"
+"here are the new totals")
print(" The new sum of all the numbers will be: "+str(newListSum))
print("The new average of all the numbers will be:"+str(newListAvg))
print(" ")
functionsFile.firstTen(finalDataList)
functionsFile.lastTen(finalDataList)
print(" ")
print(" ")
functionsFile.removeNum(finalDataList)
print(finalDataList)
openFinalData.close()
</code></pre>
<p>When I run the program, it doesn't print out the new list that's been modified by the removeNum function above. Is there something wrong with my function or in my main program? I'm not getting any errors. Any thoughts would be appreciated. Thank you.</p>
| 1 | 2016-08-10T16:40:13Z | 38,881,450 | <p>your function is broken, look</p>
<pre><code>>>> removeNum([9, 7, 4, 3, 1, 0])
[9, 4, 1]
</code></pre>
<p>It skips numbers. The reason is simple:</p>
<pre><code>def removeNum(myList):
listSize = len(myList) -1
loop = 0
while loop < listSize:
if myList[loop] > myList[loop + 1]:
myList.remove(myList[loop + 1])
listSize = listSize - 1
loop = loop + 1 #<-- here is the problem
return myList
</code></pre>
<p>You advance <code>loop</code> unconditionally, but you should not advance it when you remove an element. To fix this, just do:</p>
<pre><code>def removeNum(myList):
listSize = len(myList) -1
loop = 0
while loop < listSize:
if myList[loop] > myList[loop + 1]:
            myList.pop(loop + 1) # as the position is known, use pop
listSize = listSize - 1
else:
loop = loop + 1
return myList
</code></pre>
<p>Now it produces the expected outcome:</p>
<pre><code>>>> removeNum([9, 7, 4, 3, 1, 0])
[9]
>>>
</code></pre>
<p>I don't recommend modifying the list in place; rather, make a new one with the result, for example:</p>
<pre><code>def make_always_growing(iterable):
current_max = None
result = []
for x in iterable:
if current_max is None or x > current_max:
current_max = x
result.append(x)
return result
</code></pre>
<p>The advantage of this is that it doesn't depend on <code>iterable</code> being a list, which makes it more generic and allows it to work with tuples, generators, and anything else.</p>
<hr>
<p>also some line of your cade are unneeded like </p>
<pre><code>listSum = 0
for numb in finalDataList:
listSum = listSum + numb
</code></pre>
<p>You can use the built-in <a href="https://docs.python.org/3/library/functions.html#sum" rel="nofollow"><code>sum</code></a> for this:</p>
<pre><code>listSum = sum(finalDataList)
</code></pre>
<p>or </p>
<pre><code>functionsFile.firstTen(finalDataList)
functionsFile.lastTen(finalDataList)
</code></pre>
<p>If they do what their names suggest, then you can use a slice to get that:</p>
<pre><code> firstTen = finalDataList[:10]
lastTen = finalDataList[-10:]
</code></pre>
<p>But as you don't assign the result to anything, did you mean to print it?</p>
| 1 | 2016-08-10T18:56:23Z | [
"python",
"arrays",
"list"
] |
Python - Removing all numbers that are now smaller than the number behind them | 38,879,119 | <p>I made a function for this in my functions file:</p>
<pre><code>def removeNum(myList):
listSize = len(myList) -1
loop = 0
while loop < listSize:
if myList[loop] > myList[loop + 1]:
myList.remove(myList[loop + 1])
listSize = listSize - 1
loop = loop + 1
return myList
</code></pre>
<p>My main program looks like this (I use the removeNum function I'm having issues with towards the bottom of the code):</p>
<pre><code>import functionsFile
finalDataList = []
openFinalData = open("FinalData.data")
for entry in openFinalData:
entry = entry.strip()
entry = int(entry)
finalDataList.append(entry)
print("The file has been read")
listSum = 0
for numb in finalDataList:
listSum = listSum + numb
listAvg = listSum / len(finalDataList)
listAvg = round(listAvg,2)
print("The sum of all the numbers:--> "+str(listSum))
print("The average of the numbers is:--> "+str(listAvg))
functionsFile.firstTen(finalDataList)
functionsFile.lastTen(finalDataList)
finalDataList.sort()
finalDataList.reverse()
print("After sorting the numbers")
functionsFile.firstTen(finalDataList)
functionsFile.lastTen(finalDataList)
oddNumList = []
for numb in finalDataList:
oddNumList.append(functionsFile.makeOdd(numb))
finalDataList = oddNumList
newListSum = 0
for numb in finalDataList:
newListSum = newListSum + numb
newListAvg = newListSum / len(finalDataList)
newlistAvg = round(newListAvg,2)
print("After replacing the list with all the answers,"
+"here are the new totals")
print(" The new sum of all the numbers will be: "+str(newListSum))
print("The new average of all the numbers will be:"+str(newListAvg))
print(" ")
functionsFile.firstTen(finalDataList)
functionsFile.lastTen(finalDataList)
print(" ")
print(" ")
functionsFile.removeNum(finalDataList)
print(finalDataList)
openFinalData.close()
</code></pre>
<p>When I run the program, it doesn't print out the new list that's been modified by the removeNum function above. Is there something wrong with my function or in my main program? I'm not getting any errors. Any thoughts would be appreciated. Thank you.</p>
| 1 | 2016-08-10T16:40:13Z | 38,929,885 | <pre class="lang-py prettyprint-override"><code>my_list = [1, 0, 5, 9, 3, 8, 4]
result = [item for index, item in enumerate(my_list)
          if index == 0 or item >= my_list[index - 1]]
# result == [1, 5, 9, 8]
</code></pre>
<p>This code iterates over <code>my_list</code>, comparing each item to the previous one and keeping only those that are greater than or equal to the previous item.</p>
| 0 | 2016-08-13T06:25:34Z | [
"python",
"arrays",
"list"
] |
Python BeautifulSoup - Find all elements whose class names begin with some string | 38,879,133 | <p>Assume that we want to find all <code>li</code> elements whose class names all begin with a known string and end with an arbitrary id number.</p>
<p>That means that this approach doesn't work:</p>
<pre><code>soup.find_all("li", {"class": KNOWN_STRING})
</code></pre>
<p>I have also tried this approach without any luck:</p>
<pre><code>soup.select("li[class^="+KNOWN_STRING)
</code></pre>
<p>How can this be solved?</p>
| 1 | 2016-08-10T16:41:15Z | 38,879,460 | <p>I would use <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-regular-expression" rel="nofollow">regex</a> in this approach.</p>
<pre><code>import re
soup.find_all('li', {'class': re.compile(r'regex_pattern')})
</code></pre>
<p>Because you have a known string but an arbitrary (I'm assuming unknown) number you can use a regular expression to define the pattern of what you expect the string to be. Example:</p>
<pre><code>re.compile(r'^KNOWN_STRING[0-9]+$')
</code></pre>
<p>This would find all known strings with one or more numbers at the end. See <a href="https://docs.python.org/3.4/library/re.html" rel="nofollow">this</a> for more about regular expressions in Python.</p>
<h1>Edit, to answer the question:</h1>
<blockquote>
<p>Would this be correct given two digits in the id? soup.find_all('li', {'class': re.compile(r'^TheMatch v-1 c-[0-9][0-9]+$')}). I assume that it wouldn't.</p>
</blockquote>
<p>For two digits at the end you would do:</p>
<pre><code>soup.find_all('li', {'class': re.compile(r'^TheMatch v-1 c-[0-9]{2}$')})
</code></pre>
<p>The <code>+</code> just means one or more of the previous regular expression.</p>
<p>What I did was specify in braces (<code>{2}</code>) after the character class the number of instances I was expecting: <code>2</code>.</p>
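As a quick, hedged sanity check of the `{2}` quantifier outside of BeautifulSoup (the class names here are invented):

```python
import re

# Anchored pattern: the known prefix followed by exactly two digits.
pattern = re.compile(r'^KNOWN_STRING[0-9]{2}$')

candidates = ['KNOWN_STRING12', 'KNOWN_STRING1', 'KNOWN_STRING123', 'other12']
matches = [c for c in candidates if pattern.match(c)]
print(matches)  # ['KNOWN_STRING12']
```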
| 1 | 2016-08-10T16:58:59Z | [
"python",
"html",
"string",
"css-selectors",
"beautifulsoup"
] |
Python BeautifulSoup - Find all elements whose class names begin with some string | 38,879,133 | <p>Assume that we want to find all <code>li</code> elements whose class names all begin with a known string and end with an arbitrary id number.</p>
<p>That means that this approach doesn't work:</p>
<pre><code>soup.find_all("li", {"class": KNOWN_STRING})
</code></pre>
<p>I have also tried this approach without any luck:</p>
<pre><code>soup.select("li[class^="+KNOWN_STRING)
</code></pre>
<p>How can this be solved?</p>
 | 1 | 2016-08-10T16:41:15Z | 38,884,314 | <p>Your select is incorrect: just use KNOWN_STRING directly, don't wrap it in quotes, and make sure you use a closing bracket:</p>
<pre><code>soup.select("li[class^=KNOWN_STRING]")
</code></pre>
| 0 | 2016-08-10T22:03:32Z | [
"python",
"html",
"string",
"css-selectors",
"beautifulsoup"
] |
Python: Convert a text file to multi-level JSON | 38,879,220 | <p>I am writing a script in <code>python</code> to go recursively through each file and create a JSON object from the file, which looks like this:</p>
<pre><code>target_id length eff_length est_counts tpm
ENST00000619216.1 68 33.8839 2.83333 4.64528
ENST00000473358.1 712 428.88 0 0
ENST00000469289.1 535 306.32 0 0
ENST00000607096.1 138 69.943 0 0
ENST00000417324.1 1187 844.464 0 0
ENST00000461467.1 590 342.551 3.44007 0.557892
ENST00000335137.3 918 588.421 0 0
ENST00000466430.5 2748 2405.46 75.1098 1.73463
ENST00000495576.1 1319 976.464 11.1999 0.637186
</code></pre>
<p>This is my script:</p>
<pre><code>import glob
import os
import json

# define datasets
# Dataset name
datasets = ['pnoc']

# open file in append mode
f = open('mydict', 'a')

# define a new object
data = {}

# traverse through folders of datasets
for d in datasets:
    samples = glob.glob(d + "/data" + "/*.tsv")
    for s in samples:
        # get the SampleName without extension and path
        fname = os.path.splitext(os.path.basename(s))[0]
        # split the basename to get sample name and norm method
        sname, keyword, norm = fname.partition('.')
        # determine Normalization method based on filename
        if norm == "abundance":
            norm = "kallisto"
        elif norm == "rsem_genes.results":
            norm = "rsem_genes"
        else:
            norm = "rsem_isoforms"
        # read each file
        with open(s) as samp:
            next(samp)
            for line in samp:
                sp = line.split('\t')
                data.setdefault(sname, []).append({"ID": sp[0], "Expression": sp[4]})

json.dump(data, f)
f.close()
</code></pre>
<p>I want a JSON object on the following lines:</p>
<pre><code># 20000 Sample names, 3 Normalization methods and 60000 IDs in each file.
DatasetName1 {
    SampleName1 {
        Type {
            Normalization1 {
                { ID1: value, Expression: value },
                { ID2: value, Expression: value },
                ...
                { ID60000: value, Expression: value }
            },
            Normalization2 {
                { ID1: value, Expression: value },
                { ID2: value, Expression: value },
                ...
                { ID60000: value, Expression: value }
            },
            Normalization3 {
                { ID1: value, Expression: value },
                { ID2: value, Expression: value },
                ...
                { ID60000: value, Expression: value }
            }
        }
    },
    SampleName2 {
        Type {
            Normalization1 {
                { ID1: value, Expression: value },
                { ID2: value, Expression: value },
                ...
                { ID60000: value, Expression: value }
            },
            Normalization2 {
                { ID1: value, Expression: value },
                { ID2: value, Expression: value },
                ...
                { ID60000: value, Expression: value }
            },
            Normalization3 {
                { ID1: value, Expression: value },
                { ID2: value, Expression: value },
                ...
                { ID60000: value, Expression: value }
            }
        }
    },
    ...
    SampleName20000 {
        Type {
            Normalization1 {
                { ID1: value, Expression: value },
                { ID2: value, Expression: value },
                ...
                { ID60000: value, Expression: value }
            },
            Normalization2 {
                { ID1: value, Expression: value },
                { ID2: value, Expression: value },
                ...
                { ID60000: value, Expression: value }
            },
            Normalization3 {
                { ID1: value, Expression: value },
                { ID2: value, Expression: value },
                ...
                { ID60000: value, Expression: value }
            }
        }
    }
}
</code></pre>
<p>So my question is - When converting a text file to JSON, how do I set the levels in my JSON output? </p>
<p>Thanks!</p>
| 0 | 2016-08-10T16:45:52Z | 38,881,540 | <p>First, instead of setting the default value over and over, you should make use of <a href="https://docs.python.org/2/library/collections.html#collections.defaultdict" rel="nofollow"><code>defaultdict</code></a>.</p>
<p>Secondly, I think your proposed structure is off, and you should be using some arrays within (JSON-like structure):</p>
<pre><code>{
    DatasetName1: {
        SampleName1: {
            Type: {
                Normalization1: [
                    { ID1: value, Expression: value },
                    { ID2: value, Expression: value },
                    ...
                    { ID60000: value, Expression: value }
                ],
                Normalization2: [
                    { ID1: value, Expression: value },
                    { ID2: value, Expression: value },
                    ...
                    { ID60000: value, Expression: value }
                ],
                Normalization3: [
                    { ID1: value, Expression: value },
                    { ID2: value, Expression: value },
                    ...
                    { ID60000: value, Expression: value }
                ]
            }
        },
        SampleName2: {
            Type: {
                Normalization1: [
                    { ID1: value, Expression: value },
                    { ID2: value, Expression: value },
                    ...
                    { ID60000: value, Expression: value }
                ],
                Normalization2: [
                    { ID1: value, Expression: value },
                    { ID2: value, Expression: value },
                    ...
                    { ID60000: value, Expression: value }
                ],
                Normalization3: [
                    { ID1: value, Expression: value },
                    { ID2: value, Expression: value },
                    ...
                    { ID60000: value, Expression: value }
                ]
            }
        },
        ...
        SampleName20000: {
            Type: {
                Normalization1: [
                    { ID1: value, Expression: value },
                    { ID2: value, Expression: value },
                    ...
                    { ID60000: value, Expression: value }
                ],
                Normalization2: [
                    { ID1: value, Expression: value },
                    { ID2: value, Expression: value },
                    ...
                    { ID60000: value, Expression: value }
                ],
                Normalization3: [
                    { ID1: value, Expression: value },
                    { ID2: value, Expression: value },
                    ...
                    { ID60000: value, Expression: value }
                ]
            }
        }
    }
}, {
    DatasetName2: {
        ...
    }, ...
}
</code></pre>
<p>So your resulting code (untested) should look like this, as long as your norm method logic is correct:</p>
<pre><code>from glob import glob
from os import path
from json import dump
from collections import defaultdict

# define datasets, and result dict
datasets, results = ['pnoc'], defaultdict(dict)

# open file in append mode
with open('mydict', 'a') as f:
    # traverse through folders of datasets
    for d in datasets:
        for s in glob(d + "/data" + "/*.tsv"):
            sample = {"Type": defaultdict(list)}
            # get the basename without extension and path
            fname = path.splitext(path.basename(s))[0]
            # split the basename to get sample name and norm method
            sname, keyword, norm = fname.partition('.')
            # determine norm method based on filename
            if norm == "abundance":
                norm = "kallisto"
            elif norm == "rsem_genes.results":
                norm = "rsem_genes"
            else:
                norm = "rsem_isoforms"
            # read each file
            with open(s) as samp:
                next(samp)  # Skip first line of file
                # Loop through each line and extract the ID and TPM
                for (id, _, __, ___, tpm) in (line.split('\t') for line in samp):
                    # Add this line to the list for respective normalization method
                    sample['Type'][norm].append({"ID": id, "Expression": float(tpm)})
            # Add sample to dataset
            results[d][sname] = sample
    dump(results, f)
</code></pre>
<p>This will save the result in a JSON format.</p>
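<p>The <code>defaultdict(list)</code> pattern used above can be seen in miniature here; the IDs and values in this sketch are invented, but the mechanics are the same:</p>

```python
import json
from collections import defaultdict

# hypothetical rows: (target_id, tpm) pairs pulled from one sample file
rows = [("ENST0001", 4.6), ("ENST0002", 0.0), ("ENST0003", 1.7)]

sample = {"Type": defaultdict(list)}
for id_, tpm in rows:
    # the missing key "kallisto" is created as an empty list on first access
    sample["Type"]["kallisto"].append({"ID": id_, "Expression": tpm})

# a defaultdict serializes exactly like a plain dict
out = json.dumps(sample)
print(out)
```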
| 0 | 2016-08-10T19:00:59Z | [
"python",
"json"
] |
random flip m values from an array | 38,879,244 | <p>I have an array with length <code>n</code>, I want to randomly choose <code>m</code> elements from it and flip their value. What is the most efficient way? </p>
<p>there are two cases, <code>m=1</code> case is a special case. It can be discussed separately, and <code>m=/=1</code>.</p>
<p>My attempt is:</p>
<pre><code>import numpy as np

n = 20
m = 5

#generate an array a
a = np.random.randint(0,2,n)*2-1

#random choose `m` element and flip it.
for i in np.random.randint(0,n,m):
    a[m]=-a[m]
</code></pre>
<p>Suppose <code>m</code> are tens and <code>n</code> are hundreds.</p>
| 1 | 2016-08-10T16:47:02Z | 38,879,738 | <p>You need to change the index of the array from <code>m</code> to <code>i</code> to actually change the value.
Result:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
n = 20
m = 5
#generate an array a
a = np.random.randint(0,2,n)*2-1
print(a)
#random choose `i` element and flip it.
for i in np.random.randint(0,n,m):
a[i] = -a[i]
print(a)
</code></pre>
<p>My output:</p>
<pre><code>[ 1 1 -1 -1 1 -1 -1 1 1 -1 -1 1 -1 1 1 1 1 -1 1 -1]
[ 1 1 -1 -1 -1 -1 1 1 1 -1 -1 1 -1 -1 1 -1 1 -1 -1 -1]
</code></pre>
| 0 | 2016-08-10T17:16:47Z | [
"python",
"arrays",
"performance",
"numpy"
] |
random flip m values from an array | 38,879,244 | <p>I have an array with length <code>n</code>, I want to randomly choose <code>m</code> elements from it and flip their value. What is the most efficient way? </p>
<p>there are two cases, <code>m=1</code> case is a special case. It can be discussed separately, and <code>m=/=1</code>.</p>
<p>My attempt is:</p>
<pre><code>import numpy as np

n = 20
m = 5

#generate an array a
a = np.random.randint(0,2,n)*2-1

#random choose `m` element and flip it.
for i in np.random.randint(0,n,m):
    a[m]=-a[m]
</code></pre>
<p>Suppose <code>m</code> are tens and <code>n</code> are hundreds.</p>
| 1 | 2016-08-10T16:47:02Z | 38,879,755 | <p>You can make a random permutation of the array indices, take the first <code>m</code> of them and flip their values:</p>
<pre><code>a[np.random.permutation(range(len(a)))[:m]]*=-1
</code></pre>
<p>Using <code>permutation</code> guarantees you don't choose the same index twice.</p>
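<p>The same no-repeat guarantee exists in the standard library too; if NumPy isn't a hard requirement, a plain-Python sketch (illustrative only, the function name is invented) looks like this:</p>

```python
import random

def flip_m(a, m):
    """Flip the sign of m distinct, randomly chosen elements of a, in place."""
    for i in random.sample(range(len(a)), m):  # m unique indices, no repeats
        a[i] = -a[i]
    return a

a = [1] * 20
flip_m(a, 5)
print(sum(1 for x in a if x == -1))  # exactly 5 elements were flipped
```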
| 1 | 2016-08-10T17:17:41Z | [
"python",
"arrays",
"performance",
"numpy"
] |
random flip m values from an array | 38,879,244 | <p>I have an array with length <code>n</code>, I want to randomly choose <code>m</code> elements from it and flip their value. What is the most efficient way? </p>
<p>there are two cases, <code>m=1</code> case is a special case. It can be discussed separately, and <code>m=/=1</code>.</p>
<p>My attempt is:</p>
<pre><code>import numpy as np

n = 20
m = 5

#generate an array a
a = np.random.randint(0,2,n)*2-1

#random choose `m` element and flip it.
for i in np.random.randint(0,n,m):
    a[m]=-a[m]
</code></pre>
<p>Suppose <code>m</code> are tens and <code>n</code> are hundreds.</p>
| 1 | 2016-08-10T16:47:02Z | 38,879,832 | <p>To make sure we are not flipping the same element twice or even more times, we can create unique indices in that length range with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html" rel="nofollow"><code>np.random.choice</code></a> using its optional <code>replace</code> argument set as False. Then, simply indexing into the input array and flipping in one go should give us the desired output. Thus, we would have an implementation like so -</p>
<pre><code>idx = np.random.choice(n,m,replace=False)
a[idx] = -a[idx]
</code></pre>
<p><strong>Faster version :</strong> For a faster version of <code>np.random_choice</code>, I would suggest reading up on <a href="http://stackoverflow.com/a/35572771/3293881"><code>this post</code></a> that explores using <code>np.argpartition</code> to simulate identical behavior.</p>
| 3 | 2016-08-10T17:22:27Z | [
"python",
"arrays",
"performance",
"numpy"
] |
Python delete a node in linked list, given just access to that node | 38,879,291 | <p>This is a LeetCode question, I knew its solution, but wondering about why my code not work.</p>
<blockquote>
<p>Write a function to delete a node (except the tail) in a singly linked list, given only access to that node.</p>
<p>Supposed the linked list is 1 -> 2 -> 3 -> 4 and you are given the third node with value 3, the linked list should become 1 -> 2 -> 4 after calling your function</p>
</blockquote>
<p>At first glance, my intution is delete like an array:</p>
<p>shift all the node values one front, then delete the tail, here's my implementation and test case:</p>
<pre><code>class ListNode(object):
    def __init__(self, x):
        self.val = x
        self.next = None

node1 = ListNode(1)
node2 = ListNode(2)
node3 = ListNode(3)
node4 = ListNode(4)
node5 = ListNode(5)

node1.next = node2
node2.next = node3
node3.next = node4
node4.next = node5

def deleteNode(node):
    """
    :type node: ListNode
    :rtype: void Do not return anything, modify node in-place instead.
    """
    while node.next:
        node.val = node.next.val
        node = node.next
    node = None

deleteNode(node4)
</code></pre>
<p>But After deletion, it has two 5 value nodes, the tail was still kept, can anyone please explain to me what's wrong here?</p>
<pre><code>deleteNode(node4)
node1.val
Out[162]: 1
node1.next.val
Out[163]: 2
node1.next.next.val
Out[164]: 3
node1.next.next.next.val
Out[165]: 5
node1.next.next.next.next.val
Out[166]: 5
</code></pre>
<p>Really appreciate any help.</p>
| 0 | 2016-08-10T16:49:49Z | 38,879,344 | <p>You were almost there, but are doing way too much work shifting <em>all</em> <code>val</code> values up one node. You also failed to clear the last <code>next</code> reference, which is why you see <code>5</code> printed <em>twice</em>. I'll show you how to solve the problem properly first, then address removing the tail.</p>
<p>You can indeed solve this puzzle by <em>not</em> deleting the node, just by setting the <code>value</code> to the next value, and the <code>next</code> pointer to the node after the next. And that is <strong>all</strong> you need to do, you don't need to touch any of the following nodes. You'd effectively delete the next node and make <em>this</em> node hold the next value instead:</p>
<pre><code>          node
           |
           v
     ---------        ---------
---> | val: va | ---> | val: vb | ---> ?
     ---------        ---------
</code></pre>
<p>becomes</p>
<pre><code>          node
           |
           v
     ---------        ---------
---> | val: vb | -+   | val: vb | -> ?
     ---------    |   ---------    |
                  |                |
                  ------------------
</code></pre>
<p>So the implementation is as simple as:</p>
<pre><code>def deleteNode(node):
    """
    :type node: ListNode
    :rtype: void Do not return anything, modify node in-place instead.
    """
    node.val = node.next.val
    node.next = node.next.next
</code></pre>
<p>No loop required.</p>
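<p>A quick demonstration, reusing the <code>ListNode</code> class from the question:</p>

```python
class ListNode(object):
    def __init__(self, x):
        self.val = x
        self.next = None

def deleteNode(node):
    # copy the next node's value into this node, then bypass the next node
    node.val = node.next.val
    node.next = node.next.next

# build 1 -> 2 -> 3 -> 4 and "delete" the node holding 3
head = ListNode(1)
head.next = ListNode(2)
third = head.next.next = ListNode(3)
third.next = ListNode(4)

deleteNode(third)

values = []
node = head
while node:
    values.append(node.val)
    node = node.next
print(values)  # [1, 2, 4]
```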
<p>Back to your attempt, note that the last line in your function has no effect. <code>node</code> is a <em>local name</em> in your function, and references the tail <code>ListNode</code> instance, <em>until</em> you set it to <code>None</code> instead.</p>
<p>You had this:</p>
<pre><code>          node
           |
           v
     -----------
---> | val: tail | ---> None
     -----------
</code></pre>
<p>and <code>node = None</code> does this:</p>
<pre><code>     node -> None
      X
      |
      v
     -----------
---> | val: tail | ---> None
     -----------
</code></pre>
<p>That incoming reference from the previous node is <em>still there</em>.</p>
<p>You'd have to track the 'one-but-last' node reference and clear the <code>.next</code> attribute on that once looping is done:</p>
<pre><code>while node.next:
    node.val = node.next.val
    prev, node = node, node.next

# clear reference to tail
prev.next = None
</code></pre>
| 7 | 2016-08-10T16:53:09Z | [
"python",
"algorithm",
"linked-list"
] |
Python delete a node in linked list, given just access to that node | 38,879,291 | <p>This is a LeetCode question, I knew its solution, but wondering about why my code not work.</p>
<blockquote>
<p>Write a function to delete a node (except the tail) in a singly linked list, given only access to that node.</p>
<p>Supposed the linked list is 1 -> 2 -> 3 -> 4 and you are given the third node with value 3, the linked list should become 1 -> 2 -> 4 after calling your function</p>
</blockquote>
<p>At first glance, my intution is delete like an array:</p>
<p>shift all the node values one front, then delete the tail, here's my implementation and test case:</p>
<pre><code>class ListNode(object):
    def __init__(self, x):
        self.val = x
        self.next = None

node1 = ListNode(1)
node2 = ListNode(2)
node3 = ListNode(3)
node4 = ListNode(4)
node5 = ListNode(5)

node1.next = node2
node2.next = node3
node3.next = node4
node4.next = node5

def deleteNode(node):
    """
    :type node: ListNode
    :rtype: void Do not return anything, modify node in-place instead.
    """
    while node.next:
        node.val = node.next.val
        node = node.next
    node = None

deleteNode(node4)
</code></pre>
<p>But After deletion, it has two 5 value nodes, the tail was still kept, can anyone please explain to me what's wrong here?</p>
<pre><code>deleteNode(node4)
node1.val
Out[162]: 1
node1.next.val
Out[163]: 2
node1.next.next.val
Out[164]: 3
node1.next.next.next.val
Out[165]: 5
node1.next.next.next.next.val
Out[166]: 5
</code></pre>
<p>Really appreciate any help.</p>
| 0 | 2016-08-10T16:49:49Z | 38,879,368 | <pre><code>def deleteNode(node):
    node.val = node.next.val
    node.next = node.next.next
</code></pre>
<p>Since you don't have a reference to the <em>previous</em> node, you can't unlink the current one.
But you do have a reference to your next node.
So give your current node the role that the next node previously had,
and the garbage collector will then delete the next node, since no references to it remain once you overwrite <code>node.next</code>.</p>
| 1 | 2016-08-10T16:54:26Z | [
"python",
"algorithm",
"linked-list"
] |
How do I use Jinja2 filter on a block with argumets | 38,879,318 | <p>I am trying to use <code>jinja2</code> templates. I have custom filter called <code>highlight</code>, that takes string and language name and passes them to <code>pyhments</code> for code highlightning. I am trying to use it like this:</p>
<pre><code>{% filter highlight("python") %}
import sys
def main():
pass
{% endfilter %}
</code></pre>
<p>But I get this error:</p>
<pre><code>AttributeError: 'str' object has no attribute 'get_tokens'
</code></pre>
<p>Then I tried this:</p>
<pre><code>{% filter highlight "python" %}
</code></pre>
<p>It does not work either.</p>
<p>There might be a trick via <a href="https://github.com/pallets/jinja/compare/master...ThiefMaster:set-block-filters" rel="nofollow">set block filtering</a> and then pasting it back via <code>{{ ... }}</code>, but this technique is not merged in master source code yet, and seems too hacky for me.</p>
<p>So, is that even possible currently, or I am just doing it wrong?</p>
<p><strong>EDIT</strong>: Here is filter:</p>
<pre><code>@jinja2.contextfilter
def highlight(context, code, lang):
    print("HIGHLIGHT")
    print(code)
    return jinja2.Markup(pygments.highlight(code, lexer=lang, formatter='html'))
</code></pre>
| 1 | 2016-08-10T16:51:37Z | 39,025,723 | <p>I am an idiot, that was a <code>pygments</code> error. By some mistake, I didn't see that the last entry in the stack trace came from there.</p>
<p>You should use:</p>
<pre><code>pygments.highlight(
    code,
    lexer=pygments.lexers.get_lexer_by_name(lang),
    formatter=pygments.formatters.get_formatter_by_name('html')
)
</code></pre>
<p>instead of:</p>
<pre><code>pygments.highlight(code, lexer=lang, formatter='html')
</code></pre>
| 1 | 2016-08-18T19:03:55Z | [
"python",
"jinja2"
] |
Upgrading to Python correctly on Mac OS X 10.11.6 - issue with the /etc/paths file | 38,879,385 | <p>Python being the new fashionable language, especially in Finance industry, I started learning it.
I downloaded it from the Python website - Python version 3.5.2 - and it successfully installed... in my Application folder.
Just so you know a little bit more about myself, I did an IT engineering school in France, so I have an IT culture, but I never had the soul of a hacker so some things might be more difficult than others. </p>
<p>Started to code on Python IDE, then I created an executable python file and when I tried to execute it ... error!!</p>
<p>This is what I got when I executed my file (bissextile.py - file is supposed to ask the user to enter a year and tell him if this year is bissextile):</p>
<pre><code> Last login: Tue Aug 9 23:24:02 on ttys000
MacBook-Pro-de-Tebah:~ tebahsaboun$ cd '/Users/tebahsaboun/Desktop/' && '/usr/bin/pythonw' -d -v '/Users/tebahsaboun/Desktop/bissextile.py' && echo Exit status: $? && exit 1
# installing zipimport hook
import zipimport # builtin
# installed zipimport hook
# /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py
import site # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.pyc
# /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py
import os # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.pyc
import errno # builtin
import posix # builtin
# /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/posixpath.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/posixpath.py
import posixpath # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/posixpath.pyc
# /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/stat.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/stat.py
import stat # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/stat.pyc
# /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/genericpath.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/genericpath.py
import genericpath # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/genericpath.pyc
# /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/warnings.pyc matches
[...] A LOT OF STUFF THAT'S THE SAME AS BEFORE
import encodings.aliases # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/aliases.pyc
# /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/utf_8.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/utf_8.py
import encodings.utf_8 # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/utf_8.pyc
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
File "/Users/tebahsaboun/Desktop/bissextile.py", line 1
{\rtf1\ansi\ansicpg1252\cocoartf1404\cocoasubrtf470
^
SyntaxError: unexpected character after line continuation character
# A lot of cleaning up files after that...
</code></pre>
<p>So, I saw here two issues:
1) my file doesn't work ...
2) the shell is not using Python 3.5 but Python 2.7 which is the problem I need help for. </p>
<p>Lots of research on internet, I was about to follow that tutorial :
ht*tps://wolfpaulus.com/journal/mac/installing_python_osx/comment-page-3/#comment-101285
...and about to delete some system files from my mac :) which it didn't let me do...decided that I didn't really know what I was doing and that I should dig a little more in the internet.
For the record, here is what I have in /System/Library/Frameworks/python.framework/Versions/ :</p>
<p>So I found something called "Homebrew" that is suppose to install Python for you and I followed this great tutorial :
<a href="http://blog.manbolo.com/2014/09/27/use-python-effectively-on-os-x#p1" rel="nofollow">http://blog.manbolo.com/2014/09/27/use-python-effectively-on-os-x#p1</a>
Asking Homebrew to install Python 3.5 instead of Python 2.7 (steps are absolutely the same) which I apparently did. </p>
<p>So I did the verification suggested in the tutorial to be sure that I was using the right version of Python but when I asked the shell I still got Python 2.7.
I checked my /etc/paths file which is as follows:</p>
<pre><code> /usr/local/bin
/usr/bin
/bin
/usr/bin
/bin
</code></pre>
<p>/usr/local/bin is the first binary in the file and I checked the folder it contains Python 3.5 indeed. But no matter what is the first line is in that file I get :
MacBook-Pro-de-Tebah:~ tebahsaboun$ which python
/usr/bin/python
and :</p>
<pre><code> **MacBook-Pro-de-Tebah:~ tebahsaboun$ python
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.**
</code></pre>
<p>I can't put all the links that I have researched so far ( especially stack overflow articles by the way ;) ) cause I don't have enough "reputation", but I did a lot of research and no article seems to respond to my problem. And I don't understand what is wrong. </p>
<p>Thank you so much for your help. </p>
<p>The learning geek, Tebah. </p>
| 1 | 2016-08-10T16:55:28Z | 38,883,637 | <p>Run the program from the command line using <code>python3 my_script.py</code>, or add the shebang <code>#!/usr/bin/python3</code> as the first line of the script.</p>
| 0 | 2016-08-10T21:12:21Z | [
"python",
"shell",
"python-3.x",
"osx-elcapitan"
] |
Returning the index in a list from a search hit | 38,879,398 | <p>I'm using a list of names (<code>cust_name</code>) to find if any names in the list are found in any email files located within a folder. I would like to then take the name that the search hit on and create a folder with their id <code>cust_id</code> as the folder name and copy the email file into the new folder. I have the search working, but I can't figure out how to know which index the search hit on in <code>cust_name</code> so I can use the same index position in <code>cust_id</code> to name the new folder. Here's the code:</p>
<pre><code>for root,dirs,file in os.walk(os.getcwd()):
    for cur_file in file:
        with open(cur_file, "r") as f:
            content = f.readlines()
            for line in content:
                line = line.lower()
                if any(word in line for word in cust_name):
                    #grab index position with search hit in cust_name
                    #grab same index position with search hit in cust_id
                    #create new folder and copy email file
</code></pre>
<p>I already know how I'm going to create the folder and copy the file. My problem is grabbing that index position.</p>
<p>Whenever I try to use <em>word</em> to grab the index position, I get the error that word was not defined and searching around hasn't given me any other leads on how to get the index position. Anyone out there have any tips or have done this before?</p>
| 0 | 2016-08-10T16:56:20Z | 38,879,557 | <p>Do you mean this? </p>
<pre><code>for index, line in enumerate(content):
</code></pre>
| 2 | 2016-08-10T17:05:11Z | [
"python"
] |
Returning the index in a list from a search hit | 38,879,398 | <p>I'm using a list of names (<code>cust_name</code>) to find if any names in the list are found in any email files located within a folder. I would like to then take the name that the search hit on and create a folder with their id <code>cust_id</code> as the folder name and copy the email file into the new folder. I have the search working, but I can't figure out how to know which index the search hit on in <code>cust_name</code> so I can use the same index position in <code>cust_id</code> to name the new folder. Here's the code:</p>
<pre><code>for root,dirs,file in os.walk(os.getcwd()):
    for cur_file in file:
        with open(cur_file, "r") as f:
            content = f.readlines()
            for line in content:
                line = line.lower()
                if any(word in line for word in cust_name):
                    #grab index position with search hit in cust_name
                    #grab same index position with search hit in cust_id
                    #create new folder and copy email file
</code></pre>
<p>I already know how I'm going to create the folder and copy the file. My problem is grabbing that index position.</p>
<p>Whenever I try to use <em>word</em> to grab the index position, I get the error that word was not defined and searching around hasn't given me any other leads on how to get the index position. Anyone out there have any tips or have done this before?</p>
| 0 | 2016-08-10T16:56:20Z | 38,879,758 | <p>You can use enumeration as dhdavvie suggests if you just want the index. For your use case, you can also consider first <a href="https://docs.python.org/2/library/functions.html#zip" rel="nofollow">zipping</a> the cust_id and cust_name and arrays:</p>
<pre><code>cust_tuples = zip(cust_id, cust_name)
for root, dirs, file in os.walk(os.getcwd()):
for cur_file in file:
with open(cur_file, "r") as f:
content = f.readlines()
for line in content:
line = line.lower()
for cust_id, cust_name in cust_tuples:
if cust_name in line:
# do things with cust_id
break # if you only want the first hit
</code></pre>
| 2 | 2016-08-10T17:17:50Z | [
"python"
] |
Python - Should there be made an attempt to be consistent with the arg type in optional args? | 38,879,408 | <p>My doubt is simple. Should I do an effort to be consistent with argument types when using optional arguments?</p>
<p>An example:</p>
<pre><code>def example_method(str_arg_1=None):
    if str_arg_1 is not None:
        # do something
</code></pre>
<p>My <code>arg</code> is always a <code>str</code>, except when it is not passed in the method call, so I'm not sure whether it is good practice to use <code>None</code> in this case.</p>
<p>I came to this doubt because I don't know how to write my docstrings: is <code>str_arg_1</code> considered a <code>str</code> even if sometimes it is <code>None</code>?</p>
| 0 | 2016-08-10T16:57:09Z | 38,879,744 | <p>Names don't have types; only the values they refer to have types. In this case, it is perfectly normal to document that <code>str_arg_1</code> will refer to a <code>str</code> object or <code>None</code>, with an explanation of what each means. As far as a docstring is concerned, it's safe to say everyone will understand what you mean if you state that an argument that <em>should</em> be a <code>str</code> may also take <code>None</code>.</p>
<hr>
<p>In PEP-484, which deals with providing statically checkable type usage, this notion of using <code>None</code> is not just acceptable, but catered to.</p>
<p>If <code>str_arg_1</code> should <em>always</em> be a string, you would hint it as</p>
<pre><code>def example_method(str_arg_1: str):
</code></pre>
<p>If it is allowed be <code>None</code> as well, you would indicate that with a <em>union</em> type</p>
<pre><code>def example_method(str_arg_1: Union[str, None]):
</code></pre>
<p>As this is a very common case, there is a shortcut for indicating this:</p>
<pre><code>def example_method(str_arg_1: Optional[str]):
</code></pre>
<p>In fact, if the default value is <code>None</code>, the static checker that uses these type annotations assumes the use of <code>Optional</code>, so the following are equivalent:</p>
<pre><code>def example_method(str_arg_1: Optional[str] = None):
def example_method(str_arg_1: str = None):
</code></pre>
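<p>A runnable sketch of that last form, using the question's method name (the body here is invented purely for illustration):</p>

```python
from typing import Optional

def example_method(str_arg_1: Optional[str] = None) -> str:
    # None means "argument not supplied"; any other value must be a str
    if str_arg_1 is None:
        return "default"
    return str_arg_1.upper()

print(example_method())       # default
print(example_method("abc"))  # ABC
```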
| 4 | 2016-08-10T17:17:03Z | [
"python"
] |
Checking for a Happy number - else statement not working | 38,879,470 | <p>I'm trying to make a program which checks if an entered number is a <a href="https://en.wikipedia.org/wiki/Happy_number" rel="nofollow">happy number</a>.</p>
<p>My code finds each of the numbers after squaring and adding but when it reaches 1, i'd expect it to print "that is a happy number".</p>
<p>I cant see anything wrong with the code but i could be missing something simple.</p>
<p>Here is the code:</p>
<pre><code>number = raw_input('What number?')

dictionary = {}
counter = 0
counter_2 = 0

while counter_2 < 20:
    counter_2 += 1
    if number != 1:
        for n in str(number):
            n = int(n)**2
            counter += 1
            dictionary['key{}'.format(counter)] = n
        added = sum(dictionary.values())
        dictionary = {}
        number = str(added)
        print number
    else:
        print 'that is a happy number'
</code></pre>
| -2 | 2016-08-10T16:59:42Z | 38,879,975 | <blockquote>
<p>I cant see anything wrong with the code but i could be missing something simple</p>
</blockquote>
<p>Yup, something simple: <code>number = raw_input('What number?')</code> and <code>number = str(added)</code> make <code>number</code> a string value, yet you check it against the integer value <code>1</code>. A string never equals an integer, so <code>number != 1</code> is always true and the else branch never runs.</p>
<hr>
<p>Now, I'm not too sure what that dictionary is being used for. Maybe saving the values that have already been calculated to prevent further calculations? That's fine, but here is a solution that already runs fine for the first 1000 happy numbers. </p>
<p>First, two functions for calculating the sum of squares for a number. Note, that converting a number to a string, then iterating over characters and casting back to a integer is unnecessary overhead. </p>
<pre><code>square = lambda x: x**2

def square_digits(x):
    square_sum = 0
    while x > 0:
        square_sum += square(x % 10)
        x /= 10
    return square_sum
</code></pre>
<p>Then a function that checks if a number is a Happy Number with a limit of 20 iterations to prevent an infinite loop. </p>
<pre><code>def check_happy(n, limit=20):
    counter = 0
    while n != 1 and counter < limit:
        counter += 1
        n = square_digits(n)
    else:
        return counter != limit
</code></pre>
<p>And, then simply check your value</p>
<pre><code>number = int(raw_input('What number?'))

if check_happy(number):
    print number, 'is a happy number'
</code></pre>
<p>Sample output. </p>
<pre><code>What number? 19
19 is a happy number
</code></pre>
| 0 | 2016-08-10T17:32:14Z | [
"python"
] |
Evaluating a pandas series with a for loop and nested inequality statement | 38,879,552 | <p>I'm attempting to run a for loop through a pandas dataframe and apply a logic expression to a column in each of the elements of the dataframe. My code compiles without error, but there is no output. </p>
<p>Example code:</p>
<pre><code>for i in df:
    if df['value'].all() >= 0.0 and df['value'].all() < 0.05:
        print df['value']
</code></pre>
<p>Any help would be appreciated! Thanks </p>
| 0 | 2016-08-10T17:04:40Z | 38,879,593 | <p><code>.all()</code> returns a single <code>True</code> or <code>False</code>, so <code>df['value'].all() >= 0.0</code> compares that boolean to a number instead of checking each value, and with your ordering the condition never holds. I assume what you actually want is <code>(df['value'] >= 0.0).all() and (df['value'] < 0.05).all()</code>. </p>
<p>EDIT: You're also not actually iterating over the columns. Replace <code>'value'</code> with <code>i</code>.</p>
<pre><code>In [11]: df = pd.DataFrame([np.arange(0, 0.04, 0.01), np.arange(0, 4, 1)]).T
In [12]: df
Out[12]:
0 1
0 0.00 0.0
1 0.01 1.0
2 0.02 2.0
3 0.03 3.0
In [13]: for c in df:
...: if (df[c] >= 0.0).all() and (df[c] < 0.05).all():
...: print df[c]
...:
0 0.00
1 0.01
2 0.02
3 0.03
Name: 0, dtype: float64
</code></pre>
| 0 | 2016-08-10T17:07:31Z | [
"python",
"loops",
"pandas"
] |
Evaluating a pandas series with a for loop and nested inequality statement | 38,879,552 | <p>I'm attempting to run a for loop through a pandas dataframe and apply a logic expression to a column in each of the elements of the dataframe. My code compiles without error, but there is no output. </p>
<p>Example code:</p>
<pre><code>for i in df:
if df['value'].all() >= 0.0 and df['value'].all() < 0.05:
print df['value']
</code></pre>
<p>Any help would be appreciated! Thanks </p>
| 0 | 2016-08-10T17:04:40Z | 38,879,651 | <p>Is this the result you are looking for?</p>
<pre><code>print(df.loc[(df['value'] >= 0.0) & (df['value'] < 0.05), 'value'])
</code></pre>
<p>For just the values, throw this in:</p>
<pre><code>print(df.loc[(df['value'] >= 0.0) & (df['value'] < 0.05), 'value'].values)
</code></pre>
<p>or</p>
<pre><code>print(list(df.loc[(df['value'] >= 0.0) & (df['value'] < 0.05), 'value']))
</code></pre>
<p>for a list of the values</p>
| 0 | 2016-08-10T17:11:10Z | [
"python",
"loops",
"pandas"
] |
Evaluating a pandas series with a for loop and nested inequality statement | 38,879,552 | <p>I'm attempting to run a for loop through a pandas dataframe and apply a logic expression to a column in each of the elements of the dataframe. My code compiles without error, but there is no output. </p>
<p>Example code:</p>
<pre><code>for i in df:
if df['value'].all() >= 0.0 and df['value'].all() < 0.05:
print df['value']
</code></pre>
<p>Any help would be appreciated! Thanks </p>
| 0 | 2016-08-10T17:04:40Z | 38,879,982 | <p>If you are looking to see whether all elements in a column are satisfying that logical expression, this can be used:</p>
<pre><code>np.logical_and(df['value'] >= 0.0, df['value'] < 0.05).all()
</code></pre>
<p>This will return a single <code>True</code> or <code>False</code>.</p>
<p>By the way, I don't see how the <code>for</code> loop is being used. Since in the current format, the same code will run in each iteration.</p>
| 1 | 2016-08-10T17:32:39Z | [
"python",
"loops",
"pandas"
] |
Log ALL errors without using python logging library | 38,879,601 | <p>I am trying to make a python program that others on my team can use without me having to make a GUI. Thus, it is important that the code they interface with is super simple and intuitive to the users. I don't want to put logging code in the main program because I think it would confuse them, and I don't know what kind of errors they will produce in the main. Is there a way to route ALL errors and warnings (that would by default be printed to STDOUT) to a log file without using python's logging module?</p>
| 0 | 2016-08-10T17:07:57Z | 38,880,022 | <p>You may want to check this question: <a href="http://stackoverflow.com/questions/5136611/capture-stdout-from-a-script-in-python">Capture stdout from a script in Python</a></p>
<p>[edit]removed the code snippept, because it is all there (sorry missed the python 3 examples there)</p>
| -1 | 2016-08-10T17:34:32Z | [
"python",
"logging",
"error-handling"
] |
Setting value by slicing index and conditional rows | 38,879,647 | <p>Trying to set a col 'X' value by slicing on a multiindex and also accounting for a column 'Z' conditional value. I can set the col 'X' value pretty easily, but I'm getting stuck trying to figure out the conditional.</p>
<pre><code>import pandas as pd
FOOBAR = (['foo','foo','foo','foo','bar','bar','bar','bar'])
NUM1 = ([5,5,6,6,8,8,5,5])
NUM2 = ([1,1,2,2,3,3,1,1])
NUM3 = ([1001,1002,1002,1002,1003,1004,1004,1005])
#build and name index using data
index = pd.MultiIndex.from_arrays([FOOBAR,NUM1,NUM2,NUM3],
names=['iFOOBAR','iNUM1','iNUM2','iNUM3'])
df = pd.DataFrame({'X': [ 0, 1, 2, 3, 4, 5, 6, 7],
'Y': [ 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'],
'Z': [ 1, 2, 2, 4, 5, 6, 7, 7],
'FL': [0.1,0.1,0.2,0.2,0.4,0.4,0.1,0.1]
}, index=index)
df.sortlevel(inplace=True)
idx = pd.IndexSlice
#original df
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 0 A 1
# 1002 0.1 1 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
#set value in 'X' based on index
newdf = df.loc[idx['foo',5,1,:], idx['X']] = 999
#new df
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 999 A 1
# 1002 0.1 999 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
#set value in 'X' base on index and 'Z' == 2 ???
#nextdf = df.loc[idx['foo',5,1,:], idx['Z'== 2]], 'X' = 999
#next df: desired output
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 0 A 1
# 1002 0.1 999 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
</code></pre>
| 4 | 2016-08-10T17:11:01Z | 38,880,451 | <p>This is actually kind of tricky. Feels like there could be a better way - but here's one approach that depends on a little knowledge of the indexing internals - building up the set of locations that that meet your criteria, then passing that all to <code>iloc</code>.</p>
<pre><code>In [80]: cond1 = df.index.get_locs(idx['foo',5, 1, :])
In [81]: cond2, = (df['Z'] == 2).nonzero()
In [82]: row_indexer = pd.Index(cond1).intersection(cond2)
In [83]: row_indexer
Out[83]: Int64Index([5], dtype='int64')
In [84]: col_indexer = df.columns.get_loc('X')
In [85]: df.iloc[row_indexer, col_indexer] = 999
In [90]: df
Out[90]:
FL X Y Z
iFOOBAR iNUM1 iNUM2 iNUM3
bar 5 1 1004 0.1 6 G 7
1005 0.1 7 H 7
8 3 1003 0.4 4 E 5
1004 0.4 5 F 6
foo 5 1 1001 0.1 0 A 1
1002 0.1 999 B 2
6 2 1002 0.2 2 C 2
1002 0.2 3 D 4
</code></pre>
| 2 | 2016-08-10T17:57:52Z | [
"python",
"pandas",
"dataframe",
"multi-index"
] |
Setting value by slicing index and conditional rows | 38,879,647 | <p>Trying to set a col 'X' value by slicing on a multiindex and also accounting for a column 'Z' conditional value. I can set the col 'X' value pretty easily, but I'm getting stuck trying to figure out the conditional.</p>
<pre><code>import pandas as pd
FOOBAR = (['foo','foo','foo','foo','bar','bar','bar','bar'])
NUM1 = ([5,5,6,6,8,8,5,5])
NUM2 = ([1,1,2,2,3,3,1,1])
NUM3 = ([1001,1002,1002,1002,1003,1004,1004,1005])
#build and name index using data
index = pd.MultiIndex.from_arrays([FOOBAR,NUM1,NUM2,NUM3],
names=['iFOOBAR','iNUM1','iNUM2','iNUM3'])
df = pd.DataFrame({'X': [ 0, 1, 2, 3, 4, 5, 6, 7],
'Y': [ 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'],
'Z': [ 1, 2, 2, 4, 5, 6, 7, 7],
'FL': [0.1,0.1,0.2,0.2,0.4,0.4,0.1,0.1]
}, index=index)
df.sortlevel(inplace=True)
idx = pd.IndexSlice
#original df
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 0 A 1
# 1002 0.1 1 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
#set value in 'X' based on index
newdf = df.loc[idx['foo',5,1,:], idx['X']] = 999
#new df
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 999 A 1
# 1002 0.1 999 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
#set value in 'X' base on index and 'Z' == 2 ???
#nextdf = df.loc[idx['foo',5,1,:], idx['Z'== 2]], 'X' = 999
#next df: desired output
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 0 A 1
# 1002 0.1 999 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
</code></pre>
| 4 | 2016-08-10T17:11:01Z | 38,880,707 | <p>If the DataFrame, <code>df</code>, had a unique index, then you could use</p>
<pre><code>index1 = df.loc[df['Z'] == 2].index
index2 = df.loc[idx['foo',5,1,:]].index
df.loc[index1.intersection(index2), 'X'] = 999
</code></pre>
<p>Since your <code>df</code> does not have a unique index, you could create a faux index column, and then proceed similarly:</p>
<pre><code>df['index'] = np.arange(len(df))
index1 = df.loc[df['Z'] == 2, 'index']
index2 = df.loc[idx['foo',5,1,:], 'index']
df.ix[np.intersect1d(index1, index2), 'X'] = 999
df = df.drop('index', axis=1)
print(df)
</code></pre>
<p>yields</p>
<pre><code> FL X Y Z
iFOOBAR iNUM1 iNUM2 iNUM3
bar 5 1 1004 0.1 6 G 7
1005 0.1 7 H 7
8 3 1003 0.4 4 E 5
1004 0.4 5 F 6
foo 5 1 1001 0.1 0 A 1
1002 0.1 999 B 2
6 2 1002 0.2 2 C 2
1002 0.2 3 D 4
</code></pre>
<hr>
<p>Note that <a href="http://stackoverflow.com/a/38880451/190597">chrisb's solution</a> is more efficient because it does not generate
sub-DataFrames. It prepares ordinal indexers and then cuts once.</p>
| 2 | 2016-08-10T18:13:40Z | [
"python",
"pandas",
"dataframe",
"multi-index"
] |
Setting value by slicing index and conditional rows | 38,879,647 | <p>Trying to set a col 'X' value by slicing on a multiindex and also accounting for a column 'Z' conditional value. I can set the col 'X' value pretty easily, but I'm getting stuck trying to figure out the conditional.</p>
<pre><code>import pandas as pd
FOOBAR = (['foo','foo','foo','foo','bar','bar','bar','bar'])
NUM1 = ([5,5,6,6,8,8,5,5])
NUM2 = ([1,1,2,2,3,3,1,1])
NUM3 = ([1001,1002,1002,1002,1003,1004,1004,1005])
#build and name index using data
index = pd.MultiIndex.from_arrays([FOOBAR,NUM1,NUM2,NUM3],
names=['iFOOBAR','iNUM1','iNUM2','iNUM3'])
df = pd.DataFrame({'X': [ 0, 1, 2, 3, 4, 5, 6, 7],
'Y': [ 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'],
'Z': [ 1, 2, 2, 4, 5, 6, 7, 7],
'FL': [0.1,0.1,0.2,0.2,0.4,0.4,0.1,0.1]
}, index=index)
df.sortlevel(inplace=True)
idx = pd.IndexSlice
#original df
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 0 A 1
# 1002 0.1 1 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
#set value in 'X' based on index
newdf = df.loc[idx['foo',5,1,:], idx['X']] = 999
#new df
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 999 A 1
# 1002 0.1 999 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
#set value in 'X' base on index and 'Z' == 2 ???
#nextdf = df.loc[idx['foo',5,1,:], idx['Z'== 2]], 'X' = 999
#next df: desired output
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 0 A 1
# 1002 0.1 999 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
</code></pre>
| 4 | 2016-08-10T17:11:01Z | 38,907,495 | <p>Figured it out...</p>
<pre><code>mask = df.loc[idx[:],idx['Z']] == 2
df.loc[idx[mask,5,1,:],idx['X']] = 999
</code></pre>
| 0 | 2016-08-11T23:13:33Z | [
"python",
"pandas",
"dataframe",
"multi-index"
] |
Setting value by slicing index and conditional rows | 38,879,647 | <p>Trying to set a col 'X' value by slicing on multindex and also accounting for a column 'Z' conditional value. I can set the col 'X' value pretty easily, but I'm getting stuck trying to figure out the conditional.</p>
<pre><code>import pandas as pd
FOOBAR = (['foo','foo','foo','foo','bar','bar','bar','bar'])
NUM1 = ([5,5,6,6,8,8,5,5])
NUM2 = ([1,1,2,2,3,3,1,1])
NUM3 = ([1001,1002,1002,1002,1003,1004,1004,1005])
#build and name index using data
index = pd.MultiIndex.from_arrays([FOOBAR,NUM1,NUM2,NUM3],
names=['iFOOBAR','iNUM1','iNUM2','iNUM3'])
df = pd.DataFrame({'X': [ 0, 1, 2, 3, 4, 5, 6, 7],
'Y': [ 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'],
'Z': [ 1, 2, 2, 4, 5, 6, 7, 7],
'FL': [0.1,0.1,0.2,0.2,0.4,0.4,0.1,0.1]
}, index=index)
df.sortlevel(inplace=True)
idx = pd.IndexSlice
#original df
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 0 A 1
# 1002 0.1 1 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
#set value in 'X' based on index
newdf = df.loc[idx['foo',5,1,:], idx['X']] = 999
#new df
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 999 A 1
# 1002 0.1 999 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
#set value in 'X' base on index and 'Z' == 2 ???
#nextdf = df.loc[idx['foo',5,1,:], idx['Z'== 2]], 'X' = 999
#next df: desired output
# FL X Y Z
#iFOOBAR iNUM1 iNUM2 iNUM3
#bar 5 1 1004 0.1 6 G 7
# 1005 0.1 7 H 7
# 8 3 1003 0.4 4 E 5
# 1004 0.4 5 F 6
#foo 5 1 1001 0.1 0 A 1
# 1002 0.1 999 B 2
# 6 2 1002 0.2 2 C 2
# 1002 0.2 3 D 4
</code></pre>
| 4 | 2016-08-10T17:11:01Z | 38,908,185 | <p>Try this: (one line) </p>
<pre><code>df.loc[idx[:,5,1,(df['Z'] == 2)],idx['X']] = 999
df
FL X Y Z
iFOOBAR iNUM1 iNUM2 iNUM3
bar 5 1 1004 0.1 6 G 7
1005 0.1 7 H 7
8 3 1003 0.4 4 E 5
1004 0.4 5 F 6
foo 5 1 1001 0.1 0 A 1
1002 0.1 999 B 2
6 2 1002 0.2 2 C 2
1002 0.2 3 D 4
</code></pre>
| 0 | 2016-08-12T00:45:25Z | [
"python",
"pandas",
"dataframe",
"multi-index"
] |
tensorflow.python.framework.errors.InvalidArgumentError: Field 0 in record 0 is not a valid int32: 1 0 | 38,879,779 | <p>My .<strong>CSV</strong> file contains data that looks like this:
1 0 0 0 0 0 0 0 0 0 0 0</p>
<p>I am trining to read this .CSV file by the following code</p>
<pre><code>filename = "alpha_test.csv"
#setup text reader
file_length = file_len(filename)
filename_queue = tf.train.string_input_producer([filename])
reader = tf.TextLineReader(skip_header_lines=0)
_, csv_row = reader.read(filename_queue)
#setuo text reader
listoflists = []
a_list = []
for i in range(0,49):
listoflists.append((list([0])))
record_defaults =listoflists
data = tf.decode_csv(csv_row,record_defaults=record_defaults,field_delim='\t')
#turn features back into a tensor
features = tf.pack(data)
print("loading "+ str(features)+ "line(s)\n")
with tf.Session() as sess:
tf.initialize_all_variables().run()
#start populating filename queue
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners()
for i in range(file_length):
example = sess.run([features])
print(example)
coord.request_stop()
coord.join()
print("\ndone loading")
</code></pre>
<p>But it gives the follwing error</p>
<pre><code>ERROR:tensorflow:Exception in QueueRunner: Attempted to use a closed Session.
Traceback (most recent call last):......
tensorflow.python.framework.errors.InvalidArgumentError: Field 0 in record 0 is not a valid int32: 1 0
[[Node: DecodeCSV = DecodeCSV[OUT_TYPE=[DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32], field_delim="\t", _device="/job:localhost/replica:0/task:0/cpu:0"](ReaderRead:1, DecodeCSV/record_defaults_0, DecodeCSV/record_defaults_1, DecodeCSV/record_defaults_2, DecodeCSV/record_defaults_3, DecodeCSV/record_defaults_4, DecodeCSV/record_defaults_5, DecodeCSV/record_defaults_6, DecodeCSV/record_defaults_7, DecodeCSV/record_defaults_8, DecodeCSV/record_defaults_9, DecodeCSV/record_defaults_10, DecodeCSV/record_defaults_11, DecodeCSV/record_defaults_12, DecodeCSV/record_defaults_13, DecodeCSV/record_defaults_14, DecodeCSV/record_defaults_15, DecodeCSV/record_defaults_16, DecodeCSV/record_defaults_17, DecodeCSV/record_defaults_18, DecodeCSV/record_defaults_19, DecodeCSV/record_defaults_20, DecodeCSV/record_defaults_21, DecodeCSV/record_defaults_22, DecodeCSV/record_defaults_23, DecodeCSV/record_defaults_24, DecodeCSV/record_defaults_25, DecodeCSV/record_defaults_26, DecodeCSV/record_defaults_27, DecodeCSV/record_defaults_28, DecodeCSV/record_defaults_29, DecodeCSV/record_defaults_30, DecodeCSV/record_defaults_31, DecodeCSV/record_defaults_32, DecodeCSV/record_defaults_33, DecodeCSV/record_defaults_34, DecodeCSV/record_defaults_35, DecodeCSV/record_defaults_36, DecodeCSV/record_defaults_37, DecodeCSV/record_defaults_38, DecodeCSV/record_defaults_39, DecodeCSV/record_defaults_40, DecodeCSV/record_defaults_41, DecodeCSV/record_defaults_42, DecodeCSV/record_defaults_43, DecodeCSV/record_defaults_44, DecodeCSV/record_defaults_45, 
DecodeCSV/record_defaults_46, DecodeCSV/record_defaults_47, DecodeCSV/record_defaults_48)]]
Caused by op u'DecodeCSV', defined at:
</code></pre>
<p>I have tested with the follwing data </p>
<pre><code>1 0 0 0 0
0 0 1 1 1
1 0 0 0 0
0 0 1 1 1
</code></pre>
<p>but it gives the same error </p>
<pre><code>ERROR:tensorflow:Exception in QueueRunner: Attempted to use a closed Session.
</code></pre>
| 0 | 2016-08-10T17:19:18Z | 38,906,286 | <p>Most likely reason for the issue you see is that the 1 and 0 in the first line of your file are actually separated by 3 spaces, not a tab. Note how the <code>field_delim</code> is properly set to <code>\t</code> in your call to <code>decode_csv</code>, but the value it seems to read is <code>1 0</code>. This is precisely what would happen if the 1 and 0 were separated by three spaces, and the next 0 was separated with a tab.</p>
<p>Make sure the file contains no spaces, and all the delimiters are in fact tabsl</p>
| 0 | 2016-08-11T21:16:25Z | [
"python",
"numpy",
"tensorflow",
"deep-learning"
] |
Python converting XML to dictionary with namespace names | 38,879,977 | <p>I am trying to figure out how to best parse XML with python in a way I can use. Basically namespaces are very important to the dictionary I need to create, and the ElementTree parser's <code>{http://somewebsite.org/xml/path}(Element)</code> way of doing it doesn't work for me since I need both the URL and the namspace name (e.g. xmlns:soapenv).</p>
<p>So what I'm wondering is if anyone has a way to extract that info, preferably using python's built in xml library.</p>
<p>What I ultimately need to do is take XML that looks like this:</p>
<pre><code><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:enterprise.soap.sforce.com">
<soapenv:Header>
<urn:SessionHeader>
<urn:sessionId>QwWsHJyTPW.1pd0_jXlNKOSU</urn:sessionId>
</urn:SessionHeader>
</soapenv:Header>
<soapenv:Body>
<urn:query>
<urn:queryString>SELECT Id, Name, BillingStreet FROM Account WHERE Name LIKE '%a%'</urn:queryString>
</urn:query>
</soapenv:Body>
</soapenv:Envelope>
</code></pre>
<p>and convert it into a dictionary that looks like this:</p>
<pre><code>{
"Envelope": {
"@ns": "soapenv",
"@attrs": {
"xmlns:soapenv": "http://schemas.xmlsoap.org/soap/envelope/",
"xmlns:urn": "urn:partner.soap.sforce.com"
},
"Header": {
"@ns": "soapenv",
"SessionHeader": {
"@ns": "urn",
"sessionId": {
"@ns": "urn",
"@value": "QwWsHJyTPW.1pd0_jXlNKOSU"
}
}
},
"Body": {
"@ns": "soapenv",
"query": {
"@ns": "urn",
"queryString": {
"@ns": "urn",
"@value": "SELECT Id, Name, BillingStreet FROM Account WHERE Name LIKE '%a%'"
}
}
}
}
}
</code></pre>
<p>Ideas?</p>
| 0 | 2016-08-10T17:32:22Z | 38,880,029 | <p>This is the <a href="https://pypi.python.org/pypi/xmljson" rel="nofollow">library</a> you need </p>
<pre><code>pip install xmljson
</code></pre>
<p><a href="https://pypi.python.org/pypi/xmljson" rel="nofollow">https://pypi.python.org/pypi/xmljson</a></p>
<pre><code>>>> from xmljson import badgerfish as bf
>>> from json import dumps
>>> dumps(bf.data(fromstring('<p id="main">Hello<b>bold</b></p>')))
'{"p": {"b": {"$": "bold"}, "@id": "main", "$": "Hello"}}'
</code></pre>
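<p>Note that <code>badgerfish</code> alone won't give you the namespace <em>prefixes</em> the question asks for. If you'd rather stay with the standard library, <code>xml.etree.ElementTree.iterparse</code> can report namespace declarations through <code>start-ns</code> events, which is enough to rebuild the prefix information yourself. A sketch on an abbreviated version of the input:</p>

```python
import io
import xml.etree.ElementTree as ET

xml = ('<soapenv:Envelope '
       'xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" '
       'xmlns:urn="urn:enterprise.soap.sforce.com">'
       '<soapenv:Body/></soapenv:Envelope>')

uri_to_prefix = {}
for event, data in ET.iterparse(io.StringIO(xml), events=('start-ns', 'start')):
    if event == 'start-ns':
        prefix, uri = data          # e.g. ('soapenv', 'http://schemas...')
        uri_to_prefix[uri] = prefix
    else:
        # ElementTree tags look like '{uri}localname'
        uri, _, local = data.tag[1:].partition('}')
        print(uri_to_prefix[uri], local)
```

<p>With the prefix map in hand, you can walk the tree and emit the <code>@ns</code> entries your target dictionary needs.</p>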
| -1 | 2016-08-10T17:35:02Z | [
"python",
"xml",
"dictionary"
] |
python - is there a better way of writing this, shorter and easier to add more | 38,880,003 | <p>I'm creating a set of string numbers to be matched with another set of string numbers</p>
<p>this is the long and hard way, is there a better or shorter way where i can easily expand this list to say 500 more</p>
<pre><code> a = i
print 'grabing file',(a)
#print (a)
b = i+1
print 'grabing file',(b)
c = i+2
print 'grabing file',(c)
d = i+3
print 'grabing file',(d)
e = i+4
print 'grabing file',(e)
f = i+5
print 'grabing file',(f)
g = i+6
print 'grabing file',(g)
h = i+7
print 'grabing file',(h)
j = i+8
print 'grabing file',(j)
k = i+9
print 'grabing file',(k)
l = i+10
print 'grabing file',(l)
m = i+11
print 'grabing file',(m)
n = i+12
print 'grabing file',(n)
o = i+13
print 'grabing file',(o)
p = i+14
print 'grabing file',(p)
q = i+15
print 'grabing file',(q)
r = i+16
print 'grabing file',(r)
s = i+17
print 'grabing file',(s)
t = i+18
print 'grabing file',(t)
u = i+19
print 'grabing file',(u)
v = i+20
print 'grabing file',(v)
w = i+21
print 'grabing file',(w)
x = i+22
print 'grabing file',(x)
y = i+23
print 'grabing file',(y)
b1 = i+24
print 'grabing file',(b1)
c1 = i+25
print 'grabing file',(c1)
d1 = i+26
print 'grabing file',(d1)
e1 = i+27
print 'grabing file',(e1)
f1 = i+28
print 'grabing file',(f1)
g1 = i+29
print 'grabing file',(g1)
h1 = i+30
print 'grabing file',(h1)
j1 = i+31
print 'grabing file',(j1)
k1 = i+32
print 'grabing file',(k1)
l1 = i+33
print 'grabing file',(l1)
m1 = i+34
print 'grabing file',(m1)
n1 = i+35
print 'grabing file',(n1)
o1 = i+36
print 'grabing file',(o1)
p1 = i+37
print 'grabing file',(p1)
q1 = i+38
print 'grabing file',(q1)
r1 = i+39
print 'grabing file',(r1)
s1 = i+40
print 'grabing file',(s1)
t1 = i+41
print 'grabing file',(t1)
u1 = i+42
print 'grabing file',(u1)
v1 = i+43
print 'grabing file',(v1)
w1 = i+44
print 'grabing file',(w1)
x1 = i+45
print 'grabing file',(x1)
y1 = i+46
print 'grabing file',(y1)
b2 = i+47
print 'grabing file',(b2)
c2 = i+48
print 'grabing file',(c2)
d2 = i+49
print 'grabing file',(d2)
e2 = i+50
print 'grabing file',(e2)
f2 = i+51
print 'grabing file',(f2)
g2 = i+52
print 'grabing file',(g2)
h2 = i+53
print 'grabing file',(h2)
j2 = i+54
print 'grabing file',(j2)
k2 = i+55
print 'grabing file',(k2)
l2 = i+56
print 'grabing file',(l2)
m2 = i+57
print 'grabing file',(m2)
n2 = i+58
print 'grabing file',(n2)
o2 = i+59
print 'grabing file',(o2)
p2 = i+60
print 'grabing file',(p2)
q2 = i+61
print 'grabing file',(q2)
r2 = i+62
print 'grabing file',(r2)
s2 = i+63
print 'grabing file',(s2)
t2 = i+64
print 'grabing file',(t2)
u2 = i+65
print 'grabing file',(u2)
v2 = i+66
print 'grabing file',(v2)
w2 = i+67
print 'grabing file',(w2)
x2 = i+68
print 'grabing file',(x2)
y2 = i+69
print 'grabing file',(y2)
b3 = i+70
print 'grabing file',(b3)
c3 = i+71
print 'grabing file',(c3)
d3 = i+72
print 'grabing file',(d3)
e3 = i+73
print 'grabing file',(e3)
f3 = i+74
print 'grabing file',(f3)
g3 = i+75
print 'grabing file',(g3)
h3 = i+76
print 'grabing file',(h3)
j3 = i+77
print 'grabing file',(j3)
k3 = i+78
print 'grabing file',(k3)
l3 = i+79
print 'grabing file',(l3)
m3 = i+80
print 'grabing file',(m3)
n3 = i+81
print 'grabing file',(n3)
o3 = i+82
print 'grabing file',(o3)
p3 = i+83
print 'grabing file',(p3)
q3 = i+84
print 'grabing file',(q3)
r3 = i+85
print 'grabing file',(r3)
s3 = i+86
print 'grabing file',(s3)
t3 = i+87
print 'grabing file',(t3)
u3 = i+88
print 'grabing file',(u3)
v3 = i+89
print 'grabing file',(v3)
w3 = i+90
print 'grabing file',(w3)
x3 = i+91
print 'grabing file',(x3)
y3 = i+92
print 'grabing file',(y3)
</code></pre>
<p>the second part of this code is here ---(xyz)-- string numbers 1 to 100000, basically part of a filename</p>
<pre><code> if str(a) == xyz:
print (a)
elif str(b) == xyz:
print (b)
elif str(c) == xyz:
</code></pre>
<p>this continues on and on where i have elif elif elif till 92 times and then i stopped :) </p>
| -2 | 2016-08-10T17:33:53Z | 38,880,229 | <p>Use a <code>for</code>-loop for the first part:</p>
<pre><code>for a in range(i, i + 93):
print 'grabing file', a
</code></pre>
<p>and the second part is a simple <code>if</code>:</p>
<pre><code>if i <= int(xyz) <= i + 92:
print int(xyz)
</code></pre>
| 3 | 2016-08-10T17:46:09Z | [
"python"
] |
python - is there a better way of writing this, shorter and easier to add more | 38,880,003 | <p>I'm creating a set of string numbers to be matched with another set of string numbers</p>
<p>this is the long and hard way, is there a better or shorter way where i can easily expand this list to say 500 more</p>
<pre><code> a = i
print 'grabing file',(a)
#print (a)
b = i+1
print 'grabing file',(b)
c = i+2
print 'grabing file',(c)
d = i+3
print 'grabing file',(d)
e = i+4
print 'grabing file',(e)
f = i+5
print 'grabing file',(f)
g = i+6
print 'grabing file',(g)
h = i+7
print 'grabing file',(h)
j = i+8
print 'grabing file',(j)
k = i+9
print 'grabing file',(k)
l = i+10
print 'grabing file',(l)
m = i+11
print 'grabing file',(m)
n = i+12
print 'grabing file',(n)
o = i+13
print 'grabing file',(o)
p = i+14
print 'grabing file',(p)
q = i+15
print 'grabing file',(q)
r = i+16
print 'grabing file',(r)
s = i+17
print 'grabing file',(s)
t = i+18
print 'grabing file',(t)
u = i+19
print 'grabing file',(u)
v = i+20
print 'grabing file',(v)
w = i+21
print 'grabing file',(w)
x = i+22
print 'grabing file',(x)
y = i+23
print 'grabing file',(y)
b1 = i+24
print 'grabing file',(b1)
c1 = i+25
print 'grabing file',(c1)
d1 = i+26
print 'grabing file',(d1)
e1 = i+27
print 'grabing file',(e1)
f1 = i+28
print 'grabing file',(f1)
g1 = i+29
print 'grabing file',(g1)
h1 = i+30
print 'grabing file',(h1)
j1 = i+31
print 'grabing file',(j1)
k1 = i+32
print 'grabing file',(k1)
l1 = i+33
print 'grabing file',(l1)
m1 = i+34
print 'grabing file',(m1)
n1 = i+35
print 'grabing file',(n1)
o1 = i+36
print 'grabing file',(o1)
p1 = i+37
print 'grabing file',(p1)
q1 = i+38
print 'grabing file',(q1)
r1 = i+39
print 'grabing file',(r1)
s1 = i+40
print 'grabing file',(s1)
t1 = i+41
print 'grabing file',(t1)
u1 = i+42
print 'grabing file',(u1)
v1 = i+43
print 'grabing file',(v1)
w1 = i+44
print 'grabing file',(w1)
x1 = i+45
print 'grabing file',(x1)
y1 = i+46
print 'grabing file',(y1)
b2 = i+47
print 'grabing file',(b2)
c2 = i+48
print 'grabing file',(c2)
d2 = i+49
print 'grabing file',(d2)
e2 = i+50
print 'grabing file',(e2)
f2 = i+51
print 'grabing file',(f2)
g2 = i+52
print 'grabing file',(g2)
h2 = i+53
print 'grabing file',(h2)
j2 = i+54
print 'grabing file',(j2)
k2 = i+55
print 'grabing file',(k2)
l2 = i+56
print 'grabing file',(l2)
m2 = i+57
print 'grabing file',(m2)
n2 = i+58
print 'grabing file',(n2)
o2 = i+59
print 'grabing file',(o2)
p2 = i+60
print 'grabing file',(p2)
q2 = i+61
print 'grabing file',(q2)
r2 = i+62
print 'grabing file',(r2)
s2 = i+63
print 'grabing file',(s2)
t2 = i+64
print 'grabing file',(t2)
u2 = i+65
print 'grabing file',(u2)
v2 = i+66
print 'grabing file',(v2)
w2 = i+67
print 'grabing file',(w2)
x2 = i+68
print 'grabing file',(x2)
y2 = i+69
print 'grabing file',(y2)
b3 = i+70
print 'grabing file',(b3)
c3 = i+71
print 'grabing file',(c3)
d3 = i+72
print 'grabing file',(d3)
e3 = i+73
print 'grabing file',(e3)
f3 = i+74
print 'grabing file',(f3)
g3 = i+75
print 'grabing file',(g3)
h3 = i+76
print 'grabing file',(h3)
j3 = i+77
print 'grabing file',(j3)
k3 = i+78
print 'grabing file',(k3)
l3 = i+79
print 'grabing file',(l3)
m3 = i+80
print 'grabing file',(m3)
n3 = i+81
print 'grabing file',(n3)
o3 = i+82
print 'grabing file',(o3)
p3 = i+83
print 'grabing file',(p3)
q3 = i+84
print 'grabing file',(q3)
r3 = i+85
print 'grabing file',(r3)
s3 = i+86
print 'grabing file',(s3)
t3 = i+87
print 'grabing file',(t3)
u3 = i+88
print 'grabing file',(u3)
v3 = i+89
print 'grabing file',(v3)
w3 = i+90
print 'grabing file',(w3)
x3 = i+91
print 'grabing file',(x3)
y3 = i+92
print 'grabing file',(y3)
</code></pre>
<p>the second part of this code is here ---(xyz)-- string numbers 1 to 100000, basically part of a filename</p>
<pre><code> if str(a) == xyz:
print (a)
elif str(b) == xyz:
print (b)
elif str(c) == xyz:
</code></pre>
<p>this continues on and on where i have elif elif elif till 92 times and then i stopped :) </p>
| -2 | 2016-08-10T17:33:53Z | 38,880,242 | <p>Please refer to the python's <a href="https://docs.python.org/3/tutorial/controlflow.html" rel="nofollow">control flow</a> structures, particularly <a href="http://www.tutorialspoint.com/python/python_for_loop.htm" rel="nofollow"><code>for</code></a> loops.</p>
<h3>#1</h3>
<pre><code>i = 100 # Arbitrary Value
limit = 100 # Arbitrary Value
xyz = '199' # Arbitrary Value
for num in range(i, i+limit):
print 'grabing file',(num)
</code></pre>
<h3>#2</h3>
<pre><code>if num == int(xyz):
print num # Do something.
</code></pre>
| 1 | 2016-08-10T17:46:50Z | [
"python"
] |
python - is there a better way of writing this, shorter and easier to add more | 38,880,003 | <p>I'm creating a set of string numbers to be matched with another set of string numbers</p>
<p>this is the long and hard way, is there a better or shorter way where i can easily expand this list to say 500 more</p>
<pre><code> a = i
print 'grabing file',(a)
#print (a)
b = i+1
print 'grabing file',(b)
c = i+2
print 'grabing file',(c)
d = i+3
print 'grabing file',(d)
e = i+4
print 'grabing file',(e)
f = i+5
print 'grabing file',(f)
g = i+6
print 'grabing file',(g)
h = i+7
print 'grabing file',(h)
j = i+8
print 'grabing file',(j)
k = i+9
print 'grabing file',(k)
l = i+10
print 'grabing file',(l)
m = i+11
print 'grabing file',(m)
n = i+12
print 'grabing file',(n)
o = i+13
print 'grabing file',(o)
p = i+14
print 'grabing file',(p)
q = i+15
print 'grabing file',(q)
r = i+16
print 'grabing file',(r)
s = i+17
print 'grabing file',(s)
t = i+18
print 'grabing file',(t)
u = i+19
print 'grabing file',(u)
v = i+20
print 'grabing file',(v)
w = i+21
print 'grabing file',(w)
x = i+22
print 'grabing file',(x)
y = i+23
print 'grabing file',(y)
b1 = i+24
print 'grabing file',(b1)
c1 = i+25
print 'grabing file',(c1)
d1 = i+26
print 'grabing file',(d1)
e1 = i+27
print 'grabing file',(e1)
f1 = i+28
print 'grabing file',(f1)
g1 = i+29
print 'grabing file',(g1)
h1 = i+30
print 'grabing file',(h1)
j1 = i+31
print 'grabing file',(j1)
k1 = i+32
print 'grabing file',(k1)
l1 = i+33
print 'grabing file',(l1)
m1 = i+34
print 'grabing file',(m1)
n1 = i+35
print 'grabing file',(n1)
o1 = i+36
print 'grabing file',(o1)
p1 = i+37
print 'grabing file',(p1)
q1 = i+38
print 'grabing file',(q1)
r1 = i+39
print 'grabing file',(r1)
s1 = i+40
print 'grabing file',(s1)
t1 = i+41
print 'grabing file',(t1)
u1 = i+42
print 'grabing file',(u1)
v1 = i+43
print 'grabing file',(v1)
w1 = i+44
print 'grabing file',(w1)
x1 = i+45
print 'grabing file',(x1)
y1 = i+46
print 'grabing file',(y1)
b2 = i+47
print 'grabing file',(b2)
c2 = i+48
print 'grabing file',(c2)
d2 = i+49
print 'grabing file',(d2)
e2 = i+50
print 'grabing file',(e2)
f2 = i+51
print 'grabing file',(f2)
g2 = i+52
print 'grabing file',(g2)
h2 = i+53
print 'grabing file',(h2)
j2 = i+54
print 'grabing file',(j2)
k2 = i+55
print 'grabing file',(k2)
l2 = i+56
print 'grabing file',(l2)
m2 = i+57
print 'grabing file',(m2)
n2 = i+58
print 'grabing file',(n2)
o2 = i+59
print 'grabing file',(o2)
p2 = i+60
print 'grabing file',(p2)
q2 = i+61
print 'grabing file',(q2)
r2 = i+62
print 'grabing file',(r2)
s2 = i+63
print 'grabing file',(s2)
t2 = i+64
print 'grabing file',(t2)
u2 = i+65
print 'grabing file',(u2)
v2 = i+66
print 'grabing file',(v2)
w2 = i+67
print 'grabing file',(w2)
x2 = i+68
print 'grabing file',(x2)
y2 = i+69
print 'grabing file',(y2)
b3 = i+70
print 'grabing file',(b3)
c3 = i+71
print 'grabing file',(c3)
d3 = i+72
print 'grabing file',(d3)
e3 = i+73
print 'grabing file',(e3)
f3 = i+74
print 'grabing file',(f3)
g3 = i+75
print 'grabing file',(g3)
h3 = i+76
print 'grabing file',(h3)
j3 = i+77
print 'grabing file',(j3)
k3 = i+78
print 'grabing file',(k3)
l3 = i+79
print 'grabing file',(l3)
m3 = i+80
print 'grabing file',(m3)
n3 = i+81
print 'grabing file',(n3)
o3 = i+82
print 'grabing file',(o3)
p3 = i+83
print 'grabing file',(p3)
q3 = i+84
print 'grabing file',(q3)
r3 = i+85
print 'grabing file',(r3)
s3 = i+86
print 'grabing file',(s3)
t3 = i+87
print 'grabing file',(t3)
u3 = i+88
print 'grabing file',(u3)
v3 = i+89
print 'grabing file',(v3)
w3 = i+90
print 'grabing file',(w3)
x3 = i+91
print 'grabing file',(x3)
y3 = i+92
print 'grabing file',(y3)
</code></pre>
<p>The second part of this code is here. <code>xyz</code> is a string holding a number from 1 to 100000, basically part of a filename:</p>
<pre><code> if str(a) == xyz:
print (a)
elif str(b) == xyz:
print (b)
elif str(c) == xyz:
</code></pre>
<p>This continues on and on: I have <code>elif</code> after <code>elif</code>, 92 times, and then I stopped :) </p>
| -2 | 2016-08-10T17:33:53Z | 38,880,244 | <p>I would create a dictionary. The keys could just be numbers:</p>
<pre><code>i = 3 #or some other number
d = {}
for x in range(0,92): #you can also change the range to 500 etc
d[x]=i
i +=1
</code></pre>
<p>This creates a dictionary, i.e. a collection of key-value pairs: each key (0 to 91) is paired with a value (3 to 94).</p>
<p>In order to print the <code>Key</code> whose value matches <code>xyz</code> you can use the below:</p>
<pre><code>[m for m, y in d.iteritems() if str(y) == xyz]
</code></pre>
<p>This part runs through your values looking for matches and returns a list of the matching <code>Key</code>s (on Python 3 use <code>d.items()</code> instead of <code>d.iteritems()</code>).</p>
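<p>Putting it together, here is a minimal runnable sketch (<code>xyz = "47"</code> is a made-up example of the filename fragment) showing how the whole 92-variable, 92-<code>elif</code> pattern collapses into one dictionary and one lookup:</p>

```python
# Build the same mapping as the loop above: keys 0..91, values 3..94
i = 3
d = {x: i + x for x in range(92)}

xyz = "47"  # hypothetical filename fragment to search for

# Replaces the 92 elif branches: collect every key whose value matches xyz
matches = [k for k, v in d.items() if str(v) == xyz]
print(matches)
```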
| 0 | 2016-08-10T17:46:57Z | [
"python"
] |
What is the sphinx directive for including a document into another document? | 38,880,277 | <p>In my documentation I have an <code>examples</code> directory where I can say,</p>
<pre><code>.. literalinclude:: examples/1_basic_usage.py
:language: python
:linenos:
</code></pre>
<p>..which works great, because they're code and they're formatted correctly as code.</p>
<p>However, I want to do a <code>literalinclude</code> on non-code documents. At the entire-project level I already have <code>AUTHORS</code>, <code>DESCRIPTION</code>, <code>ATTRIBUTION</code>, etc. defined, and I want to (essentially) paste them in-place but I don't know how.</p>
<p>Hopefully it's similar to this <strong>NON WORKING EXAMPLE</strong>:</p>
<pre><code>Authors
-------

.. literalinclude:: ../../AUTHORS

Attribution
-----------

.. literalinclude:: ../../ATTRIBUTION
</code></pre>
| 1 | 2016-08-10T17:48:34Z | 38,880,705 | <p>Apparently the way to do this is with the <code>.. include:: <path></code> directive.</p>
<p>It's nowhere obvious in their documentation and doesn't have an example stub at all.</p>
<p>Full documentation can be found in the <a href="http://docutils.sourceforge.net/docs/ref/rst/directives.html#include" rel="nofollow"><code>docutils</code> reStructuredText reference (#include)</a>.</p>
<blockquote>
<p>The "include" directive reads a text file. The directive argument is the path to the file to be included, relative to the document containing the directive. Unless the options literal or code are given, the file is parsed in the current document's context at the point of the directive. For example:</p>
<p>This first example will be parsed at the document level, and can
thus contain any construct, including section headers.</p>
</blockquote>
<pre><code>.. include:: inclusion.txt

Back in the main document.

    This second example will be parsed in a block quote context.
    Therefore it may only contain body elements. It may not
    contain section headers.

    .. include:: inclusion.txt
</code></pre>
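<p>Note the sentence quoted above: "Unless the options literal or code are given, the file is parsed". So if you want the file rendered verbatim (the closest analogue to <code>literalinclude</code> for non-code files such as <code>AUTHORS</code>), pass the <code>literal</code> option to <code>include</code>, for example:</p>

```
Authors
-------

.. include:: ../../AUTHORS
   :literal:
```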
| 1 | 2016-08-10T18:13:37Z | [
"python",
"python-sphinx",
"restructuredtext"
] |
How does sympy simplify ln((exp(x)+1)/exp(x)) to log(1+exp(-x))? | 38,880,320 | <p>If I use <code>simplify()</code> function in sympy, <code>log((exp(x)+1)/exp(x))</code> does simplify to <code>log(1+exp(-x))</code>, however, as I read the doc, the simplify function is "can be unnecessarily slow", I tried other simplification methods, but non of them works, so I'm wondering how do I simplify <code>ln((exp(x)+1)/exp(x))</code> to the form like this <code>log(1+exp(-x))</code> without calling simplify().</p>
| 2 | 2016-08-10T17:50:58Z | 38,889,405 | <p>You can more directly just use <a href="http://docs.sympy.org/latest/modules/polys/reference.html?highlight=cancel#sympy.polys.polytools.cancel" rel="nofollow"><strong><code>sympy.polys.polytools.cancel()</code></strong></a>, which is available as a method on your expression with <code>.cancel()</code>. </p>
<pre><code>>>> from sympy.abc import x
>>> from sympy import *
>>> my_expr = log((exp(x)+1)/exp(x))
>>> my_expr.cancel()
log(1 + exp(-x))
</code></pre>
<p>This is what is doing the work of simplifying your expression inside <code>simplify()</code>. </p>
<p>A very naive benchmark:</p>
<pre><code>>>> import timeit
>>> %timeit my_expr.simplify()
100 loops, best of 3: 7.78 ms per loop
>>> %timeit my_expr.cancel()
1000 loops, best of 3: 972 µs per loop
</code></pre>
<hr>
<p><strong>Edit</strong>: This isn't a stable solution, and I would advise that you take a look at <a href="http://stackoverflow.com/questions/38880320/how-does-sympy-simplify-lnexpx1-expx-to-log1exp-x/38905278#38905278">asmeurer's answer</a> where he suggests using <a href="http://docs.sympy.org/dev/tutorial/simplification.html#expand" rel="nofollow"><strong><code>expand()</code></strong></a>. </p>
| 3 | 2016-08-11T07:01:16Z | [
"python",
"sympy"
] |
How does sympy simplify ln((exp(x)+1)/exp(x)) to log(1+exp(-x))? | 38,880,320 | <p>If I use <code>simplify()</code> function in sympy, <code>log((exp(x)+1)/exp(x))</code> does simplify to <code>log(1+exp(-x))</code>, however, as I read the doc, the simplify function is "can be unnecessarily slow", I tried other simplification methods, but non of them works, so I'm wondering how do I simplify <code>ln((exp(x)+1)/exp(x))</code> to the form like this <code>log(1+exp(-x))</code> without calling simplify().</p>
| 2 | 2016-08-10T17:50:58Z | 38,905,278 | <p>The exact function you want to use depends on the general form of the expressions you are dealing with. <code>cancel</code> apparently works, but perhaps only by accident. In general, <code>cancel</code> cancels common factors from a numerator and denominator, like <code>cancel((x**2 - 1)/(x - 1))</code> -> <code>x + 1</code>. I think it is only working here because it represents the expression in terms of <code>exp(-x)</code>. If it instead used <code>exp(x)</code>, it wouldn't simplify, because <code>(x + 1)/x</code> doesn't have any common factors. This might be why you are seeing different results from <code>cancel</code> in different versions. See <a href="https://github.com/sympy/sympy/issues/11506" rel="nofollow">this issue</a> for more information.</p>
<p>For this expression, I would use <strong>expand()</strong> (or the more targeted <code>expand_mul</code>). <code>expand</code> will distribute the denominator over the numerator, i.e., <code>(exp(x) + 1)/exp(x)</code> will become <code>exp(x)/exp(x) + 1/exp(x)</code>. SymPy then automatically cancels <code>exp(x)/exp(x)</code> into <code>1</code> and converts <code>1/exp(x)</code> into <code>exp(-x)</code> (they are internally both represented the same way). </p>
<pre><code>In [1]: log((exp(x)+1)/exp(x)).expand()
Out[1]:
   ⎛     -x⎞
log⎝1 + ℯ  ⎠
</code></pre>
<p>There's a guide on some of the simplification functions in the <a href="http://docs.sympy.org/latest/tutorial/simplification.html" rel="nofollow">tutorial</a>. </p>
| 4 | 2016-08-11T20:11:12Z | [
"python",
"sympy"
] |
How to select by categorical index of grouped data using Pandas | 38,880,340 | <p>I am trying to select the data from a grouped categorical index and am unable to select by group.</p>
<p>This data was amassed using pd.cut in bins of 60. The final goal is to select a range of the index then plot the data - however I can't select the data.</p>
<p>For example - I want to select a slice of (0,60] and (60,120] but am unable to slice that. Any help much appreciated.</p>
<p>Current dataframe data:</p>
<p><a href="http://i.stack.imgur.com/Eufpc.png" rel="nofollow"><img src="http://i.stack.imgur.com/Eufpc.png" alt="enter image description here"></a></p>
<p>Thanks </p>
| 0 | 2016-08-10T17:51:45Z | 38,880,532 | <p><code>pd.cut</code> just creates string values, so you can give them in a list directly using <code>.ix</code> (deprecated in later pandas versions; <code>.loc</code> works the same way here) or just do location indexing with <code>.iloc</code>:</p>
<pre><code>In [7]: df
Out[7]:
0 1
1
(-1, 0] 0.00 0.0
(0, 1] 0.01 1.0
(1, 2] 0.02 2.0
(2, 4] 0.03 3.0
In [8]: df.ix[['(0, 1]', '(1, 2]']]
Out[8]:
0 1
1
(0, 1] 0.01 1.0
(1, 2] 0.02 2.0
In [9]: df.iloc[1:3]
Out[9]:
0 1
1
(0, 1] 0.01 1.0
(1, 2] 0.02 2.0
</code></pre>
| 2 | 2016-08-10T18:03:03Z | [
"python",
"pandas",
"group-by",
"slice",
"categorical-data"
] |
zipfile.extractall always gives exception | 38,880,384 | <p>Here is my script: </p>
<pre><code>import zipfile
zFile = zipfile.ZipFile('crack.zip')
passFile = open('passwords.txt')
for line in passFile.readlines():
password = line.strip('\n')
try:
zFile.extractall(pwd=password)
print password
except Exception, e:
print e
</code></pre>
<p>These are the contents of passwords.txt:</p>
<pre><code>abcde
fghijk
secret
lmnopq
rstw
uvwxyz
</code></pre>
<p>The file passwords.txt has 6 strings, out of which only one is the correct password for this zip file (as you would have guessed, secret is the correct password). When I run this script, it always goes to the except block and prints <code>('Bad password for file', <zipfile.ZipInfo object at 0x7f70836d52a8>)</code> 5 times. What am I doing wrong here?</p>
| 0 | 2016-08-10T17:54:07Z | 38,880,581 | <p>After you successfully unzip the file with the right password, you don't stop the loop. Therefore it continues and tries the other three passwords. There are five incorrect passwords in your file so you get five exceptions. To fix this, add a <code>break</code> statement. I rearranged your code a bit, because generally, only the statement you want to catch exceptions in should be in the <code>try</code> block.</p>
<pre><code>for line in passFile:
password = line.strip()
print password
try:
zFile.extractall(pwd=password)
except Exception, e:
print e
else:
print "success"
break
</code></pre>
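<p>Here is a self-contained sketch of the same <code>try</code>/<code>except</code>/<code>else</code>/<code>break</code> flow that you can run without a real archive (the stdlib can read but not create encrypted zips, so the hypothetical <code>try_password</code> stands in for <code>zFile.extractall(pwd=...)</code>):</p>

```python
def try_password(pwd):
    # Stand-in for zFile.extractall(pwd=pwd): raises on a wrong password
    if pwd != "secret":
        raise RuntimeError("Bad password for file")

passwords = ["abcde", "fghijk", "secret", "lmnopq", "rstw", "uvwxyz"]

found = None
for password in passwords:
    try:
        try_password(password)
    except RuntimeError as e:
        print(e)
    else:
        found = password
        print("success:", found)
        break  # stop once the right password is found

print(found)
```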
| 0 | 2016-08-10T18:06:08Z | [
"python",
"python-2.7",
"zipfile"
] |
I have two sets of data and I need to build a data structure that organizes them into sets. (Python) | 38,880,410 | <p>Using Python 2.7. I have one set of data consisting of id tags: </p>
<pre><code>SET1=[{'MISC': u'2759'}, {'MISC': u'2759'}, {'MISC': u'2759'}, {'MISC': u'2758'},{'MISC': u'2758'}, {'MISC': u'1751'}]
</code></pre>
<p>and another set consisting of different id tags: </p>
<pre><code>SET2= [u'15672542c8ed280b', u'1566b77702f8865f', u'1565c2241aebb314', u'155c6888c507e365', u'155c5b8ded9a7c03', u'155c1173f58f1494']
</code></pre>
<p>As you can see, the sets are one-to-one and each MISC tag relates to the corresponding id in SET2. So for example, the first element in SET1, <code>{'MISC': u'2759'}</code> needs to relate to the first element in SET2: <code>u'15672542c8ed280b'</code>.</p>
<p>So ideally, I want to build a data structure like so:</p>
<pre><code>Matched_IDS=[{2759, 15672542c8ed280b}, {2759, 1566b77702f8865f} , {2759, 1565c2241aebb314}, {...}, {...} ]
</code></pre>
<p>I attempted this approach so far, but since I used two for loops I iterated over the data twice, and get a very ugly looking set: </p>
<pre><code>MSGMatch=[]
for a in SET1:
for b in SET2:
MSGMatch.append({str(a),str(b)})
print(MSGMatch)
</code></pre>
<p>Anyone have a more elegant, working solution that they could kindly point me in the right direction towards?</p>
| 0 | 2016-08-10T17:55:15Z | 38,880,453 | <pre><code>zip([m['MISC'] for m in SET1], SET2)
</code></pre>
<p>That should give you what you want I think, assuming your "sets" (they are actually lists) are the same length.</p>
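<p>For example, with the data from the question (trimmed to two entries) this produces one-to-one pairs; tuples also preserve which element is the tag and which is the id, unlike two-element sets:</p>

```python
SET1 = [{'MISC': '2759'}, {'MISC': '2758'}]
SET2 = ['15672542c8ed280b', '1566b77702f8865f']

pairs = list(zip([m['MISC'] for m in SET1], SET2))
print(pairs)  # [('2759', '15672542c8ed280b'), ('2758', '1566b77702f8865f')]
```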
| 3 | 2016-08-10T17:57:54Z | [
"python",
"set"
] |
I have two sets of data and I need to build a data structure that organizers them into sets. (Python) | 38,880,410 | <p>Using Python 2.7. I have one set of data consisting of id tags: </p>
<pre><code>SET1=[{'MISC': u'2759'}, {'MISC': u'2759'}, {'MISC': u'2759'}, {'MISC': u'2758'},{'MISC': u'2758'}, {'MISC': u'1751'}]
</code></pre>
<p>and another set consisting of different id tags: </p>
<pre><code>SET2= [u'15672542c8ed280b', u'1566b77702f8865f', u'1565c2241aebb314', u'155c6888c507e365', u'155c5b8ded9a7c03', u'155c1173f58f1494']
</code></pre>
<p>As you can see, the sets are one-to-one and each MISC tag relates to the corresponding id in SET2. So for example, the first element in SET1, <code>{'MISC': u'2759'}</code> needs to relate to the first element in SET2: <code>u'15672542c8ed280b'</code>.</p>
<p>So ideally, I want to build a data structure like so:</p>
<pre><code>Matched_IDS=[{2759, 15672542c8ed280b}, {2759, 1566b77702f8865f} , {2759, 1565c2241aebb314}, {...}, {...} ]
</code></pre>
<p>I attempted this approach so far, but since I used two for loops I iterated over the data twice, and get a very ugly looking set: </p>
<pre><code>MSGMatch=[]
for a in SET1:
for b in SET2:
MSGMatch.append({str(a),str(b)})
print(MSGMatch)
</code></pre>
<p>Anyone have a more elegant, working solution that they could kindly point me in the right direction towards?</p>
| 0 | 2016-08-10T17:55:15Z | 38,880,735 | <p>In one iteration, you could try:</p>
<pre><code>[{a['MISC'], b} for a, b in zip(SET1, SET2)]
</code></pre>
<p>This will produce the list of sets you specified.</p>
<p>This shows more clearly how to iterate both lists in one iteration:</p>
<pre><code>result = []
for i, a in enumerate(SET1):
result.append({a['MISC'], SET2[i]})
</code></pre>
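<p>A runnable demo with a slice of the question's data. One caveat: because sets are unordered, <code>{'2759', '15672542c8ed280b'}</code> does not remember which element was the tag and which was the id, so a list of tuples (or a dict) may serve better if that distinction matters:</p>

```python
SET1 = [{'MISC': '2759'}, {'MISC': '2758'}]
SET2 = ['15672542c8ed280b', '1566b77702f8865f']

matched_ids = [{a['MISC'], b} for a, b in zip(SET1, SET2)]
print(matched_ids)
```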
| 4 | 2016-08-10T18:15:09Z | [
"python",
"set"
] |
How to reinitialize a class in Python | 38,880,505 | <p>Suppose I have a class <code>Game</code> which plays a game. Once the game is over, I'd like to immediately start a new game. I thought this might possible in a recursive way by instantiating a new <code>Game()</code> within its own class definition. Is this possible?</p>
<p>To give a specific example, I'm writing a Tic Tac Toe game with a GUI using Tkinter:</p>
<pre><code>import numpy as np
import Tkinter as tk
class Game:
def __init__(self, master, player1, player2):
frame = tk.Frame()
frame.grid()
self.master = master
self.player1 = player1
self.player2 = player2
self.current_player = player1
self.empty_text = ""
self.board = Board()
self.buttons = [[None for _ in range(3)] for _ in range(3)]
for i in range(3):
for j in range(3):
self.buttons[i][j] = tk.Button(frame, height=3, width=3, text=self.empty_text, command=lambda i=i, j=j: self.callback(self.buttons[i][j]))
self.buttons[i][j].grid(row=i, column=j)
self.reset_button = tk.Button(text="Reset", command=self.reset)
self.reset_button.grid(row=3)
def callback(self, button):
if button["text"] == self.empty_text:
button.configure(text=self.current_player.mark)
info = button.grid_info()
move = (info["row"], info["column"])
self.board.place_mark(move, self.current_player.mark)
if self.board.over():
print "The game is over. The player with mark %s won!" % self.current_player.mark
self.master.quit()
else:
self.switch_players()
def reset(self):
print "Resetting..."
self.master.destroy()
self = Game(self.master, self.player1, self.player2)
def switch_players(self):
if self.current_player == self.player1:
self.current_player = self.player2
else:
self.current_player = self.player1
class Board:
def __init__(self, grid=np.ones((3,3))*np.nan):
self.grid = grid
def winner(self):
rows = [self.grid[i,:] for i in range(3)]
cols = [self.grid[:,j] for j in range(3)]
diag = [np.array([self.grid[i,i] for i in range(3)])]
cross_diag = [np.array([self.grid[2-i,i] for i in range(3)])]
lanes = np.concatenate((rows, cols, diag, cross_diag)) # A "lane" is defined as a row, column, diagonal, or cross-diagonal
any_lane = lambda x: any([np.array_equal(lane, x) for lane in lanes]) # Returns true if any lane is equal to the input argument "x"
if any_lane(np.ones(3)):
return 1
elif any_lane(np.zeros(3)):
return 0
def over(self):
return (not np.any(np.isnan(self.grid))) or (self.winner() is not None)
def place_mark(self, pos, mark):
num = self.mark2num(mark)
self.grid[tuple(pos)] = num
def mark2num(self, mark):
if mark == "X":
return 1
elif mark == "O":
return 0
else:
print "The player's mark must be either 'X' or 'O'."
class HumanPlayer:
def __init__(self, mark):
self.mark = mark
root = tk.Tk()
app = Game(root, player1=HumanPlayer(mark="X"), player2=HumanPlayer(mark="O"))
root.mainloop()
</code></pre>
<p>The first time I play the game all is well: the button labels change as expected and a winner is declared (see below).</p>
<p><a href="http://i.stack.imgur.com/fLhIa.png" rel="nofollow"><img src="http://i.stack.imgur.com/fLhIa.png" alt="enter image description here"></a>
<a href="http://i.stack.imgur.com/PFM3c.png" rel="nofollow"><img src="http://i.stack.imgur.com/PFM3c.png" alt="enter image description here"></a></p>
<p>The problem is that when I press the "Reset" button, although I do get a blank GUI again, the <code>Game</code> still 'thinks' that it is over and keeps printing the winner's declaration:</p>
<p><a href="http://i.stack.imgur.com/vwJku.png" rel="nofollow"><img src="http://i.stack.imgur.com/vwJku.png" alt="enter image description here"></a>
<a href="http://i.stack.imgur.com/oSVnv.png" rel="nofollow"><img src="http://i.stack.imgur.com/oSVnv.png" alt="enter image description here"></a></p>
<p>Is it possible to 're-initialize' the class without resetting each attribute to its initial value?</p>
| 0 | 2016-08-10T18:01:34Z | 38,884,212 | <p>In the end I simply did a 'manual' reset of the relevant attributes rather than 're-instantiate' the class. Here is the resulting code:</p>
<pre><code>import numpy as np
import Tkinter as tk
class Game:
def __init__(self, master, player1, player2):
frame = tk.Frame()
frame.grid()
self.master = master
self.player1 = player1
self.player2 = player2
self.current_player = player1
self.empty_text = ""
self.board = Board()
self.buttons = [[None for _ in range(3)] for _ in range(3)]
for i in range(3):
for j in range(3):
self.buttons[i][j] = tk.Button(frame, height=3, width=3, text=self.empty_text, command=lambda i=i, j=j: self.callback(self.buttons[i][j]))
self.buttons[i][j].grid(row=i, column=j)
self.reset_button = tk.Button(text="Reset", command=self.reset)
self.reset_button.grid(row=3, column=0)
self.quit_button = tk.Button(text="Quit", command=self.quit)
# self.quit_button.grid(row=3, column=1)
self.setup_UI()
def setup_UI(self):
self.master.title="Tic Tac Toe"
def callback(self, button):
if self.board.over():
pass
elif button["text"] == self.empty_text:
button.configure(text=self.current_player.mark)
info = button.grid_info()
move = (info["row"], info["column"])
self.board.place_mark(move, self.current_player.mark)
if self.board.over():
print "The game is over. The player with mark %s won!" % self.current_player.mark
else:
self.switch_players()
def reset(self):
print "Resetting..."
for i in range(3):
for j in range(3):
self.buttons[i][j].configure(text=self.empty_text)
self.board = Board(grid=np.ones((3,3))*np.nan)
self.current_player = self.player1
def quit(self):
self.master.destroy()
def switch_players(self):
if self.current_player == self.player1:
self.current_player = self.player2
else:
self.current_player = self.player1
class Board:
def __init__(self, grid=np.ones((3,3))*np.nan):
self.grid = grid
def winner(self):
rows = [self.grid[i,:] for i in range(3)]
cols = [self.grid[:,j] for j in range(3)]
diag = [np.array([self.grid[i,i] for i in range(3)])]
cross_diag = [np.array([self.grid[2-i,i] for i in range(3)])]
lanes = np.concatenate((rows, cols, diag, cross_diag)) # A "lane" is defined as a row, column, diagonal, or cross-diagonal
any_lane = lambda x: any([np.array_equal(lane, x) for lane in lanes]) # Returns true if any lane is equal to the input argument "x"
if any_lane(np.ones(3)):
return 1
elif any_lane(np.zeros(3)):
return 0
def over(self):
return (not np.any(np.isnan(self.grid))) or (self.winner() is not None)
def place_mark(self, pos, mark):
num = self.mark2num(mark)
self.grid[tuple(pos)] = num
def mark2num(self, mark):
if mark == "X":
return 1
elif mark == "O":
return 0
else:
print "The player's mark must be either 'X' or 'O'."
class HumanPlayer:
def __init__(self, mark):
self.mark = mark
root = tk.Tk()
root.title("Tic Tac Toe")
app = Game(root, player1=HumanPlayer(mark="X"), player2=HumanPlayer(mark="O"))
root.mainloop()
</code></pre>
<p>The screen grab below shows how I managed to play Tic Tac Toe several times.</p>
<p><a href="http://i.stack.imgur.com/bwcIw.png" rel="nofollow"><img src="http://i.stack.imgur.com/bwcIw.png" alt="enter image description here"></a></p>
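<p>The same idea in miniature: move the attribute assignments into <code>reset()</code> and call it from <code>__init__</code>, so the initial setup and every later reset share a single code path (a sketch with made-up attributes, not the full game):</p>

```python
class MiniGame(object):
    def __init__(self):
        self.reset()  # initial setup and reset share one code path

    def reset(self):
        self.board = [[None] * 3 for _ in range(3)]
        self.current_player = "X"

game = MiniGame()
game.board[0][0] = "X"   # play a move
game.reset()             # wipe the state without re-instantiating
print(game.board[0][0])  # None
```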
| 0 | 2016-08-10T21:56:10Z | [
"python",
"tkinter"
] |
Is there a way to schedule sending an e-mail through Google App Engine Mail API (Python)? | 38,880,555 | <p>I want to be able to schedule an e-mail or more of them to be sent on a specific date, preferably using GAE Mail API if possible (so far I haven't found the solution). </p>
<p>Would using Cron be an acceptable workaround and if so, would I even be able to create a Cron task with Python? The dates are various with no specific pattern so I can't use the same task over and over again.</p>
<p>Any suggestions how to solve this problem? All help appreciated</p>
| 0 | 2016-08-10T18:04:32Z | 38,880,681 | <p>Here's a snippet to send mail directly using gmail smtp (just using standard python and smtplib, not google-app-engine but it works fine):</p>
<pre><code>from email.mime.text import MIMEText
import smtplib
msg=MIMEText('hi, sent by python.....','plain','utf-8')
from_addr='user@gmail.com'
password='psw'
to_addr='user@gmail.com'
s=smtplib.SMTP_SSL('smtp.gmail.com')
s.login(from_addr,password)
s.sendmail(from_addr, [to_addr], msg.as_string())
</code></pre>
<p>Sorry, I was only addressing the "send a mail using gmail" part of the question.</p>
<p>About that cron stuff, I found a lot on this very site. For instance:</p>
<p><a href="http://stackoverflow.com/questions/373335/how-do-i-get-a-cron-like-scheduler-in-python">How do I get a Cron like scheduler in Python?</a></p>
| 0 | 2016-08-10T18:11:36Z | [
"python",
"email",
"google-app-engine",
"cron"
] |
Is there a way to schedule sending an e-mail through Google App Engine Mail API (Python)? | 38,880,555 | <p>I want to be able to schedule an e-mail or more of them to be sent on a specific date, preferably using GAE Mail API if possible (so far I haven't found the solution). </p>
<p>Would using Cron be an acceptable workaround and if so, would I even be able to create a Cron task with Python? The dates are various with no specific pattern so I can't use the same task over and over again.</p>
<p>Any suggestions how to solve this problem? All help appreciated</p>
| 0 | 2016-08-10T18:04:32Z | 38,881,529 | <p>Yes, sending emails from cron jobs is fairly common, exactly for the scheduling reason.</p>
<p>Unfortunately controlling cron jobs programmatically is not (yet) possible. You may want to star <a href="https://code.google.com/p/googleappengine/issues/detail?id=3638" rel="nofollow">Issue 3638: Cron jobs to be scheduled programatically</a></p>
<p>Meanwhile you could check this answer for a couple of alternatives: <a href="http://stackoverflow.com/a/37079488/4495081">http://stackoverflow.com/a/37079488/4495081</a></p>
| 0 | 2016-08-10T19:00:35Z | [
"python",
"email",
"google-app-engine",
"cron"
] |
Is there a way to schedule sending an e-mail through Google App Engine Mail API (Python)? | 38,880,555 | <p>I want to be able to schedule an e-mail or more of them to be sent on a specific date, preferably using GAE Mail API if possible (so far I haven't found the solution). </p>
<p>Would using Cron be an acceptable workaround and if so, would I even be able to create a Cron task with Python? The dates are various with no specific pattern so I can't use the same task over and over again.</p>
<p>Any suggestions how to solve this problem? All help appreciated</p>
| 0 | 2016-08-10T18:04:32Z | 38,884,139 | <p>You can easily accomplish what you need with Task API. When you create a task, you can set an ETA parameter (when to execute). ETA time can be up to 30 days into the future.</p>
<p>If 30 days is not enough, you can store a "send_email" entity in the Datastore, and set one of the properties to the date/time when this email should be sent. Then you create a cron job that runs once a month (week). This cron job will retrieve all "send_email" entities that need to be send the next month (week), and create tasks for them, setting ETA to the exact date/time when they should be executed.</p>
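<p>A sketch of the ETA computation (the <code>taskqueue.add</code> call is left as a comment since it only runs inside App Engine, and the URL is hypothetical):</p>

```python
import datetime

# When the email should go out: here, 25 days from now (within the 30-day cap)
eta = datetime.datetime.utcnow() + datetime.timedelta(days=25)

# Inside App Engine you would then enqueue the task, e.g.:
# from google.appengine.api import taskqueue
# taskqueue.add(url='/tasks/send_email', eta=eta)

print(eta > datetime.datetime.utcnow())  # True: the task fires in the future
```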
| 2 | 2016-08-10T21:49:41Z | [
"python",
"email",
"google-app-engine",
"cron"
] |
Repetitive search and replace on constantly changing file. Python | 38,880,640 | <p>So I'm trying to have a script that will run every 3 seconds to read a text file, clean it up and write it as another text file. The original text file is always changing. It's from a music playout program, current song title. I went collecting bits of python code around the internet and personalizing them to my needs. Right now I have a script that will work perfectly, each command and the scheduling. But when the original file changes again, then the script gives an error. Any idea how this could be fixed?</p>
<pre><code>Traceback (most recent call last):
  File "radio_script.py", line 29, in <module>
    executeSomething()
  File "radio_script.py", line 10, in executeSomething
    for line in intext:
IOError: [Errno 9] Bad file descriptor
</code></pre>
<p>I'm running the python script on Windows.
So when the script is run, if a line contains any of the "delete_lin" words the whole line is deleted. While each of the "line replace" entries replace those words with nothing, just as they are supposed to.
Here is my script.</p>
<pre><code># -*- coding: utf-8 -*-
delete_lin = ['VP', 'VH', 'VT', 'VB', 'VS', 'BG']
import time
import os
def executeSomething():
with open('current.txt', 'r') as intext, open('currentedit.txt', 'w') as outfile:
for line in intext:
if not any(delete_lin in line for delete_lin in delete_lin):
line = line.replace('(email)', "")
line = line.replace('_AO VIVO', "")
line = line.replace('Ao Vivo', "")
line = line.replace('AO VIVO', "")
line = line.replace('(04)', "")
line = line.replace('2016', "")
line = line.replace('2015', "")
line = line.replace('2014', "")
line = line.replace('2013', "")
outfile.write(line)
outfile.flush()
intext.flush()
print 'Pause'
time.sleep(3)
while True:
executeSomething()
</code></pre>
| 1 | 2016-08-10T18:09:08Z | 38,920,138 | <p>When the original file is modified, the file reference <code>intext</code> is no longer a valid file. While the path is the same, the actual file has been modified. Thus, you should call <code>intext.close()</code> at the end of the <code>for</code> loop. To make sure it is actually recreating the file reference make sure to only <code>sleep</code> outside the loop.</p>
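<p>A sketch of that structure: reopen the files on every pass with <code>with</code> (which also closes them), and sleep between passes, so each iteration gets fresh handles even after <code>current.txt</code> has been replaced:</p>

```python
import time

DELETE_WORDS = ['VP', 'VH', 'VT', 'VB', 'VS', 'BG']
REPLACEMENTS = ['(email)', '_AO VIVO', 'Ao Vivo', 'AO VIVO',
                '(04)', '2016', '2015', '2014', '2013']

def clean_once(in_path='current.txt', out_path='currentedit.txt'):
    # 'with' opens fresh handles on every call and closes them on exit,
    # so a replaced current.txt never leaves a stale file descriptor behind
    with open(in_path) as intext, open(out_path, 'w') as outfile:
        for line in intext:
            if any(word in line for word in DELETE_WORDS):
                continue  # drop the whole line
            for text in REPLACEMENTS:
                line = line.replace(text, '')
            outfile.write(line)

# Main loop, commented out so the sketch can be imported and tested:
# while True:
#     clean_once()
#     time.sleep(3)  # pause between passes, outside the file handling
```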
| 1 | 2016-08-12T14:13:16Z | [
"python",
"windows"
] |
Spark Flume streaming - missing packages? | 38,880,649 | <p>I'm trying to execute the Flume streaming example but can't get my jar files working. Here,
<a href="https://github.com/spark-packages/dstream-flume/blob/master/examples/src/main/python/streaming/flume_wordcount.py" rel="nofollow">https://github.com/spark-packages/dstream-flume/blob/master/examples/src/main/python/streaming/flume_wordcount.py</a>,
they point to</p>
<pre><code>bin/spark-submit --jars \
external/flume-assembly/target/scala-*/spark-streaming-flume-assembly-*.jar
</code></pre>
<p>I have no clue what is this "external" dir ?</p>
<p>On my spark (1.6.0) lib I put several jars (I tried both 1.6.0 and 1.6.2): </p>
<pre><code>$ pwd
/Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib
$ ls *flume*
spark-streaming-flume-assembly_2.10-1.6.0.jar
spark-streaming-flume-assembly_2.10-1.6.2.jar
spark-streaming-flume-sink_2.10-1.6.2.jar
spark-streaming-flume-sink_2.10-1.6.0.jar
spark-streaming-flume_2.10-1.6.0.jar
spark-streaming-flume_2.10-1.6.2.jar
</code></pre>
<p>Then I do a : </p>
<pre><code> $ ./bin/pyspark --master ip:7077 --total-executor-cores 1 --packages com.databricks:spark-csv_2.10:1.4.0
--jars /Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib/spark-streaming-flume-sink_2.10-1.6.0.jar
--jars /Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib/spark-streaming-flume_2.10-1.6.0.jar
--jars /Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib/spark-streaming-flume-assembly_2.10-1.6.0.jar
</code></pre>
<p>The Python notebook server fires up, but then when I ask for a stream object:</p>
<pre><code>from pyspark.streaming.flume import FlumeUtils
from pyspark import SparkContext
from pyspark import SparkConf
from pyspark.streaming import StreamingContext
try : sc.stop()
except : pass
try : ssc.stop()
except : pass
conf = SparkConf()
conf.setAppName("Streaming Flume")
conf.set("spark.executor.memory","1g")
conf.set("spark.driver.memory","1g")
conf.set("spark.cores.max","5")
conf.set("spark.driver.extraClassPath", "/Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib/")
conf.set("spark.executor.extraClassPath", "/Users/romain/Informatique/zoo/spark-1.6.0-bin-hadoop2.4/lib/")
sc = SparkContext(conf=conf)
ssc = StreamingContext(sc, 10)
FlumeUtils.createStream(ssc, "localhost", "4949")
</code></pre>
<p>it fails :</p>
<pre><code>________________________________________________________________________________________________
Spark Streaming's Flume libraries not found in class path. Try one of the following.
1. Include the Flume library and its dependencies with in the
spark-submit command as
$ bin/spark-submit --packages org.apache.spark:spark-streaming-flume:1.6.0 ...
2. Download the JAR of the artifact from Maven Central http://search.maven.org/,
Group Id = org.apache.spark, Artifact Id = spark-streaming-flume-assembly, Version = 1.6.0.
Then, include the jar in the spark-submit command as
$ bin/spark-submit --jars <spark-streaming-flume-assembly.jar> ...
________________________________________________________________________________________________
</code></pre>
<p>I have tried to add </p>
<pre><code>--packages org.apache.spark:spark-streaming-flume-sink.1.6.0
</code></pre>
<p>at the end of my spark-submit, but I get another issue : </p>
<pre><code>org.apache.spark#spark-streaming-flume-sink added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
:: resolution report :: resolve 2344ms :: artifacts dl 0ms
:: modules in use:
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 1 | 0 | 0 | 0 || 0 | 0 |
---------------------------------------------------------------------
:: problems summary ::
:::: WARNINGS
module not found: org.apache.spark#spark-streaming-flume-sink;1.6.0
==== local-m2-cache: tried
file:/Users/romain/.m2/repository/org/apache/spark/spark-streaming-flume-sink/1.6.0/spark-streaming-flume-sink-1.6.0.pom
-- artifact org.apache.spark#spark-streaming-flume-sink;1.6.0!spark-streaming-flume-sink.jar:
file:/Users/romain/.m2/repository/org/apache/spark/spark-streaming-flume-sink/1.6.0/spark-streaming-flume-sink-1.6.0.jar
==== local-ivy-cache: tried
/Users/romain/.ivy2/local/org.apache.spark/spark-streaming-flume-sink/1.6.0/ivys/ivy.xml
==== central: tried
https://repo1.maven.org/maven2/org/apache/spark/spark-streaming-flume-sink/1.6.0/spark-streaming-flume-sink-1.6.0.pom
-- artifact org.apache.spark#spark-streaming-flume-sink;1.6.0!spark-streaming-flume-sink.jar:
https://repo1.maven.org/maven2/org/apache/spark/spark-streaming-flume-sink/1.6.0/spark-streaming-flume-sink-1.6.0.jar
==== spark-packages: tried
http://dl.bintray.com/spark-packages/maven/org/apache/spark/spark-streaming-flume-sink/1.6.0/spark-streaming-flume-sink-1.6.0.pom
-- artifact org.apache.spark#spark-streaming-flume-sink;1.6.0!spark-streaming-flume-sink.jar:
http://dl.bintray.com/spark-packages/maven/org/apache/spark/spark-streaming-flume-sink/1.6.0/spark-streaming-flume-sink-1.6.0.jar
::::::::::::::::::::::::::::::::::::::::::::::
:: UNRESOLVED DEPENDENCIES ::
::::::::::::::::::::::::::::::::::::::::::::::
:: org.apache.spark#spark-streaming-flume-sink;1.6.0: not found
::::::::::::::::::::::::::::::::::::::::::::::
:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
Exception in thread "main" java.lang.RuntimeException: [unresolved dependency: org.apache.spark#spark-streaming-flume-sink;1.6.0: not found]
</code></pre>
<p>I've never used a pom.xml - maybe I should?</p>
| 0 | 2016-08-10T18:09:46Z | 38,883,060 | <p>I found my mistake: the jar files I downloaded were empty!</p>
<p>:-(((((
the mistake came from:
curl -O <a href="http://search.maven.org/remotecontent?filepath=org/apache/spark/spark-streaming-flume-sink_2.10/1.6.2/spark-streaming-flume-sink_2.10-1.6.2.jar" rel="nofollow">http://search.maven.org/remotecontent?filepath=org/apache/spark/spark-streaming-flume-sink_2.10/1.6.2/spark-streaming-flume-sink_2.10-1.6.2.jar</a><br>
which downloaded only an html file :-(</p>
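<p>For anyone hitting the same thing: a jar is a zip archive, so a genuine jar always starts with the <code>PK</code> magic bytes, while an HTML page saved under a .jar name does not (using <code>curl -L -O</code>, which follows redirects, usually avoids the bad download in the first place). A minimal sanity check, where the path is whatever file you downloaded:</p>

```python
def looks_like_jar(path):
    # real jars are zip archives and begin with the b'PK' magic bytes;
    # an HTML error/redirect page saved by mistake will not
    with open(path, 'rb') as f:
        return f.read(2) == b'PK'
```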
| 0 | 2016-08-10T20:34:21Z | [
"python",
"apache-spark"
] |
Analysing merge sort complexity | 38,880,719 | <p>The code I wrote is as follows:</p>
<pre><code>def merge(A, B): #merging two sorted lists
index_a = index_b = 0
result = []
while(index_a < len(A) and index_b < len(B)):
if(A[index_a] <= B[index_b]):
result += [A[index_a]]
index_a += 1
else:
result += [B[index_b]]
index_b += 1
if(index_a != len(A)):#adds all remaining elements
result += A[index_a:]
elif(index_b != len(B)):
result += B[index_b:]
return result
def merge_sort(L):
if(len(L) == 0 or len(L) == 1):
return L
else:
a = merge_sort( L[:len(L)/2] )
b = merge_sort( L[len(L)/2:] )
return merge(a , b)
</code></pre>
<p>So I understand that the complexity of the merge_sort function shown above is log(n) levels of recursive calls. What I am slightly confused about is the number of steps taken in the merge function: first the two assignments, but after that I don't know how to count the steps in the worst case, because I can't think of where to stop in the lengths of A and B, or how to account for the later addition of the remaining elements.</p>
<p>Any help would be really appreciated. Thank You.</p>
| 0 | 2016-08-10T18:14:34Z | 38,880,850 | <p>The way you've structured your merge, while slightly more performant, makes it trickier to see the complexity. Try this: </p>
<pre><code>def merge(A, B): #merging two sorted lists
index_a = index_b = 0
result = []
while(index_a < len(A) or index_b < len(B)): # NOTE OR
if index_a < len(A) and (index_b == len(B) or A[index_a] <= B[index_b]): # NOTE EXTRA CONDITIONS
result += [A[index_a]]
index_a += 1
else:
result += [B[index_b]]
index_b += 1
return result
def merge_sort(L):
...
</code></pre>
<p>Here I've replaced the "then dump the rest" code with a change to the while-loop condition (allowing it to run until both input lists are sorted), and an extra condition to the if-statement to make that work properly. If you assume (as is normal) that concatenating two arrays is O(n) in the size of the second array, the algorithmic complexity is unchanged by this modification. (It's unchanged even if you assume that that's constant time, but it's slightly more complicated to explain.)</p>
<p>Now you can see that the number of times the while-loop is executed is exactly equal to the sum of the length of the lists. This way there's no "worst case".</p>
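<p>To see this concretely, here is an instrumented version of the merge above (a sketch: the same single-loop logic plus a counter); for inputs of lengths m and n it always reports exactly m + n iterations, no best or worst case:</p>

```python
def merge_steps(A, B):
    # single-loop merge as above, plus an iteration counter
    result, index_a, index_b, steps = [], 0, 0, 0
    while index_a < len(A) or index_b < len(B):
        steps += 1
        if index_a < len(A) and (index_b == len(B) or A[index_a] <= B[index_b]):
            result.append(A[index_a])
            index_a += 1
        else:
            result.append(B[index_b])
            index_b += 1
    return result, steps

print(merge_steps([1, 3, 5], [2, 4]))  # ([1, 2, 3, 4, 5], 5)
```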
| 0 | 2016-08-10T18:21:49Z | [
"python",
"time-complexity",
"big-o",
"mergesort"
] |
telnet over SSH from server | 38,880,723 | <p>I am doing ssh to a server using paramiko in python. From there, I need to telnet to several devices and capture the output. I am able to ssh and then telnet to only one device, but not able to telnet to multiple devices from a loop. Please suggest how to do this.
I am using the following:</p>
<pre><code>import paramiko
import telnetlib
ip = ''
port = 42705
username = ''
password = ''
ssh=paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ip,port,username,password)
f=open("abc.txt", "r")
x=f.readlines()
i=1
if i < 100:
print i
cmd = "telnet " + x[i]
print cmd
i=i+1
stdin, stdout, stderr = ssh.exec_command(cmd)
stdin.write('''
terminal length 0
show platform
exit
''')
outlines = stdout.readlines()
resp = ''.join(outlines)
print(resp)
</code></pre>
<p>When I am not using the if statement and give the IP directly, it works fine. But I need to telnet to multiple devices from the same server. Thanks for your reply!</p>
<p>I am now using the following code:</p>
<pre><code>#! /usr/bin/python
import sys
import paramiko
ip=''
port=42705
username=''
password=''
f1=open("abc.txt", 'r')
devices=f1.readlines()
for device in devices:
device = device.rstrip()
print device
command = "telnet " + device
ssh = paramiko.SSHClient()
print command
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ip, port, username, password)
stdin, stdout, stderr = ssh.exec_command(command)
stdin.write('''
primenms
primenms123
terminal length 0
show platform
exit
ls
''')
outlines = stdout.readlines()
resp = ''.join(outlines)
print(resp)
ssh.close()
</code></pre>
<p>f1.close()</p>
<p>It is able to ssh but when I am doing telnet from ssh server, its entering credentials primenms and primenms123 as username and password, which is correct credentials, but its showing as authorization failed. Where am I going wrong? Please help me guys!</p>
| 3 | 2016-08-10T18:14:39Z | 38,880,916 | <p>You should likely be using a <code>for</code> loop instead:</p>
<pre><code>for i in f.readlines():
cmd = "telnet {0}".format(i)
print cmd
</code></pre>
<p>Ideally you'd want to execute the commands with:</p>
<pre><code>ssh.exec_command(cmd)
</code></pre>
<p>Then you could use it like the following example:</p>
<pre><code>for i in f.readlines():
cmd = "telnet {0}".format(i)
ssh.exec_command(cmd)
</code></pre>
<p>Without knowing what is within <code>abc.txt</code> you might need other ssh commands in there too.</p>
| 1 | 2016-08-10T18:25:17Z | [
"python",
"ssh",
"telnet"
] |
Python OpenCV cv2.VideoWriter Error | 38,880,740 | <p>I'm trying to export the data coming from a thermal camera but I get an error message that says</p>
<pre><code>error: /build/opencv-ISmtkH/opencv-2.4.9.1+dfsg/modules/highgui/src/cap_ffmpeg.cpp:238: error: (-215) image->depth == 8 in function writeFrame
</code></pre>
<p>Can someone look at what I'm doing and tell me what I'm doing wrong? I followed and example pretty closely so I don't understand what this error means or why it's happening.</p>
<pre><code>o = camera.add_overlay(np.getbuffer(a), size=(320,240), layer=3, alpha=int(alpha), crop=(0,0,80,60), vflip=flip_v)
filename = time.strftime("%Y.%m.%d %H.%M.%S", time.localtime()) + ".avi"
fourcc = cv2.cv.CV_FOURCC('I','4','2','0')
out = cv2.VideoWriter(filename, fourcc, fps, (width, height))
try:
time.sleep(0.2) # give the overlay buffers a chance to initialize
with Lepton(device) as l:
last_nr = 0
while True:
_,nr = l.capture(lepton_buf)
out.write(lepton_buf)
if nr == last_nr:
# no need to redo this frame
continue
last_nr = nr
cv2.normalize(lepton_buf, lepton_buf, 0, 65535, cv2.NORM_MINMAX)
np.right_shift(lepton_buf, 8, lepton_buf)
a[:lepton_buf.shape[0], :lepton_buf.shape[1], :] = lepton_buf
o.update(np.getbuffer(a))
except Exception:
traceback.print_exc()
finally:
camera.remove_overlay(o)
</code></pre>
| 1 | 2016-08-10T18:15:21Z | 38,881,513 | <p>You need to change <code>out.write(lepton_buf)</code> to <code>out.write(np.uint8(lepton_buf))</code></p>
<p>What you're trying to write isn't a number and that's what's confusing it.</p>
| 1 | 2016-08-10T18:59:23Z | [
"python",
"opencv",
"camera",
"thermal-printer"
] |
Python OpenCV cv2.VideoWriter Error | 38,880,740 | <p>I'm trying to export the data coming from a thermal camera but I get an error message that says</p>
<pre><code>error: /build/opencv-ISmtkH/opencv-2.4.9.1+dfsg/modules/highgui/src/cap_ffmpeg.cpp:238: error: (-215) image->depth == 8 in function writeFrame
</code></pre>
<p>Can someone look at what I'm doing and tell me what I'm doing wrong? I followed and example pretty closely so I don't understand what this error means or why it's happening.</p>
<pre><code>o = camera.add_overlay(np.getbuffer(a), size=(320,240), layer=3, alpha=int(alpha), crop=(0,0,80,60), vflip=flip_v)
filename = time.strftime("%Y.%m.%d %H.%M.%S", time.localtime()) + ".avi"
fourcc = cv2.cv.CV_FOURCC('I','4','2','0')
out = cv2.VideoWriter(filename, fourcc, fps, (width, height))
try:
time.sleep(0.2) # give the overlay buffers a chance to initialize
with Lepton(device) as l:
last_nr = 0
while True:
_,nr = l.capture(lepton_buf)
out.write(lepton_buf)
if nr == last_nr:
# no need to redo this frame
continue
last_nr = nr
cv2.normalize(lepton_buf, lepton_buf, 0, 65535, cv2.NORM_MINMAX)
np.right_shift(lepton_buf, 8, lepton_buf)
a[:lepton_buf.shape[0], :lepton_buf.shape[1], :] = lepton_buf
o.update(np.getbuffer(a))
except Exception:
traceback.print_exc()
finally:
camera.remove_overlay(o)
</code></pre>
| 1 | 2016-08-10T18:15:21Z | 38,883,927 | <p>Several suggestions to try</p>
<p>Try a more common codec:</p>
<pre><code>fourcc=cv2.VideoWriter_fourcc('X', 'V', 'I', 'D')
</code></pre>
<p>Make sure your video dimensions fit the image dimensions. Try swapping width and height; the order of these can be confusing.</p>
<pre><code>height,width,channels=lepton_buf.shape
</code></pre>
<p>or specify directly width and height for your video:</p>
<pre><code>out = cv2.VideoWriter(filename, fourcc, 8.0, (80, 60))  # (width, height) for the 80x60 Lepton frame
</code></pre>
<p>as stated before by @Crystal, make sure you convert your image data to data type 'uint8'</p>
<p>Try saving to a 'fresh' file name. Sometimes opencv videowriter silently fails saving to a file that hasn't been properly released from a previous access.</p>
<p>finally, release your file when done writing frames!</p>
<pre><code>out.release()
</code></pre>
| 0 | 2016-08-10T21:32:45Z | [
"python",
"opencv",
"camera",
"thermal-printer"
] |
csv files comparing, parsing, and writing (Linearly?) | 38,880,759 | <p>The code below looks at two csv files, file1 and file2. Using a unique key from each file, it compares them against a csv cross-reference file; if there is a match, it compares the unique identifiers from file1 and file2, and if those also match it picks the values from file2 and does a write, otherwise it picks the values from file1 and does a write.</p>
<pre><code>import csv, datetime, calendar
global_dic = {}
with open('D:\\hello.csv', 'r') as file0:
reader1 = csv.reader(file0, delimiter='\t')
header = next(reader1)
for row in reader1:
key = (row[0] + '|' + row[1] + '|' + row[2] + '|' + row[3])
global_dic[key] = {header[0]: row[0], header[1]: row[1], header[2]: row[2], header[3]: row[3], header[4]: row[4], header[5]: row[5], header[6]: row[6], header[7]: row[7], header[8]: row[8], header[9]: row[9]}
with open('D:\\file1.csv', 'r') as master:
master_indices = dict(((r[0] + r[1] + r[2] + r[3] + r[4]), i) for i, r in enumerate(csv.reader(master)) if '/' in r[0])
with open('D:\\file2.csv', 'r') as hosts:
with open('results1.csv', 'w') as results:
reader = csv.reader(hosts)
writer = csv.writer(results)
writer.writerow(next(reader, []) + ['RESULTS'])
for row in reader:
if '/' in row[0]:
key = row[1] + row[2] + row[3] + row[4]
value = row[6]
if key in global_dic:
meow = datetime.datetime.strptime(row[0], '%m/%d/%Y %H:%M:%S') # change str time to date/time obj && add .%f for ms consolidation
unix_timestamp = calendar.timegm(meow.timetuple()) # do the conversion to unix stamp
time_ms = unix_timestamp * 1000
index = master_indices.get(row[0] + row[1] + row[2] + row[3] + row[4])
if index is not None:
a = ''
else:
writer.writerow('value' + ',' + str(global_dic[key]['cpKey']) + ',' + str(global_dic[key]['SCADA Key']) + ',' + str(value) + ',' + str(time_ms) + ',' + str(time_ms) + ',' + '0' + ',' + '0' + ',' + '0' + '\n')
</code></pre>
<p>The problem I need help with:
when there is no match I can write the file; however, when there is a match I don't know how to bring r[6] from master_indices. If I include r[6] in the dictionary, then it will create issues during the matching process.</p>
<pre><code> if index is not None:
# HELPPPPPPPPPP
else:
writer.writerow('value' + ',' + str(global_dic[key]['cpKey']) + ',' + str(global_dic[key]['SCADA Key']) + ',' + str(value) + ',' + str(time_ms) + ',' + str(time_ms) + ',' + '0' + ',' + '0' + ',' + '0' + '\n')
</code></pre>
| 0 | 2016-08-10T18:16:18Z | 38,882,070 | <p>First, the provided code is much too complicated; try to make it simpler. Use better names (which file contains the cross references?).</p>
<p>I think smaller steps are the right way to solve this issue, something like:</p>
<pre><code>dict1 = get_dict_from_csv('file1.csv')
dict2 = get_dict_from_csv('file2.csv')
cross_refs = get_cross_refs_from_csv('cross_refs.csv')
with open('result.csv', mode='w') as result:
for row in dict1:
#...
if index is not None:
writer.writerow(dict2[index])
</code></pre>
<p>This will make your life much easier, so my 2nd piece of advice is: refactor your code first. Anyway, good luck!</p>
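<p>As a concrete sketch of that <code>get_dict_from_csv</code> idea (the function name, parameters and compound key are my assumptions, not your exact columns), <code>csv.DictReader</code> keeps the header handling out of your way:</p>

```python
import csv

def get_dict_from_csv(path, key_fields, delimiter=','):
    # index every row by a compound key built from the chosen columns,
    # mirroring the key = row[0] + '|' + row[1] + ... pattern in the question
    with open(path) as f:
        reader = csv.DictReader(f, delimiter=delimiter)
        return {'|'.join(row[k] for k in key_fields): row for row in reader}
```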
<p>[edit] Take a look at: <a href="http://stackoverflow.com/questions/11295171/read-two-textfile-line-by-line-simultaneously-python">Read two textfile line by line simultaneously -python</a> to get some examples of how to read several files at the same time in python.</p>
<p>Cheers Lars</p>
| 0 | 2016-08-10T19:35:09Z | [
"python",
"python-3.x",
"csv",
"parsing"
] |
Search for pattern in string and add characters if found | 38,880,831 | <p>I am working on some address cleaning/geocoding software, and I recently ran into a specific address format that is causing some problems for me.</p>
<p>My external geocoding module is having trouble finding addresses such as <code>30 w 60th new york</code> (<code>30 w 60th street new york</code> is the proper format of the address).</p>
<p>Essentially what I would need to do is parse the string and check the following:</p>
<ol>
<li>Are there any numbers followed by <code>th</code> or <code>st</code> or <code>nd</code> or <code>rd</code>? (+ a space following them). I.E <code>33rd</code> <code>34th</code> <code>21st</code> <code>24th</code></li>
<li>If so, is the word <code>street</code> following it?</li>
</ol>
<p>If yes, do nothing.</p>
<p>If no, add the word <code>street</code> immediately after the specific pattern?</p>
<p>Would regex be the best way to approach this situation?</p>
<p><strong>Further Clarification:</strong> I am not having any issues with other address suffixes, such as avenue, road, etc etc etc. I have analyzed very large data sets (I'm running about 12,000 addresses/day through my application), and instances where <code>street</code> is left out is what is causing the biggest headaches for me. I have looked into address parsing modules, such as usaddress, smartystreets, and others. I really just need to come up with a clean (hopefully regex?) solution to the specific problem that I have described.</p>
<p>I'm thinking something along the lines of:</p>
<ol>
<li>Converting the string to a list.</li>
<li>Find the index of the element in the list that meets the criteria that i've explained </li>
<li>Check to see if the next element is <code>street</code>. If so, do nothing.</li>
<li>If not, reconstruct the list with <code>[:targetword + len(targetword)] + 'street' + [:targetword + len(targetword)]</code>. (<code>targetword</code> would be <code>47th</code> or whatever is in the string)</li>
<li>Join the list back into a string.</li>
</ol>
<p>I'm not exactly the best with regex, so I'm looking for some input.</p>
<p>Thanks.</p>
| -1 | 2016-08-10T18:20:44Z | 38,881,054 | <p>While you could certainly use regex for this sort of problem, I can't help, but think that there's most likely a Python library out there that has already solved this problem for you. I've never used these, but just some quick searching finds me these:</p>
<p><a href="https://github.com/datamade/usaddress" rel="nofollow">https://github.com/datamade/usaddress</a></p>
<p><a href="https://pypi.python.org/pypi/postal-address" rel="nofollow">https://pypi.python.org/pypi/postal-address</a></p>
<p><a href="https://github.com/SwoopSearch/pyaddress" rel="nofollow">https://github.com/SwoopSearch/pyaddress</a></p>
<p>PyParsing also has an address sample here you might look at: <a href="http://pyparsing.wikispaces.com/file/view/streetAddressParser.py" rel="nofollow">http://pyparsing.wikispaces.com/file/view/streetAddressParser.py</a></p>
<p>You might also take a look at this former question: <a href="http://stackoverflow.com/questions/9463471/is-there-a-library-for-parsing-us-addresses">is there a library for parsing US addresses?</a></p>
<p>Any reason you can't just use a 3rd party library to solve the problem?</p>
| 1 | 2016-08-10T18:33:12Z | [
"python"
] |
Search for pattern in string and add characters if found | 38,880,831 | <p>I am working on some address cleaning/geocoding software, and I recently ran into a specific address format that is causing some problems for me.</p>
<p>My external geocoding module is having trouble finding addresses such as <code>30 w 60th new york</code> (<code>30 w 60th street new york</code> is the proper format of the address).</p>
<p>Essentially what I would need to do is parse the string and check the following:</p>
<ol>
<li>Are there any numbers followed by <code>th</code> or <code>st</code> or <code>nd</code> or <code>rd</code>? (+ a space following them). I.E <code>33rd</code> <code>34th</code> <code>21st</code> <code>24th</code></li>
<li>If so, is the word <code>street</code> following it?</li>
</ol>
<p>If yes, do nothing.</p>
<p>If no, add the word <code>street</code> immediately after the specific pattern?</p>
<p>Would regex be the best way to approach this situation?</p>
<p><strong>Further Clarification:</strong> I am not having any issues with other address suffixes, such as avenue, road, etc etc etc. I have analyzed very large data sets (I'm running about 12,000 addresses/day through my application), and instances where <code>street</code> is left out is what is causing the biggest headaches for me. I have looked into address parsing modules, such as usaddress, smartystreets, and others. I really just need to come up with a clean (hopefully regex?) solution to the specific problem that I have described.</p>
<p>I'm thinking something along the lines of:</p>
<ol>
<li>Converting the string to a list.</li>
<li>Find the index of the element in the list that meets the criteria that i've explained </li>
<li>Check to see if the next element is <code>street</code>. If so, do nothing.</li>
<li>If not, reconstruct the list with <code>[:targetword + len(targetword)] + 'street' + [:targetword + len(targetword)]</code>. (<code>targetword</code> would be <code>47th</code> or whatever is in the string)</li>
<li>Join the list back into a string.</li>
</ol>
<p>I'm not exactly the best with regex, so I'm looking for some input.</p>
<p>Thanks.</p>
| -1 | 2016-08-10T18:20:44Z | 38,881,060 | <p>You could possibly do this by turning each one of those strings into lists, and looking for certain groups of characters in those lists. For example:</p>
<pre><code>def check_th(address):
addressList = list(address)
    # enumerate gives the index of *this* occurrence; list.index(character)
    # would always find the first 't' in the string, which is a bug
    for charIndex, character in enumerate(addressList):
        if character == 't' and charIndex + 1 < len(addressList):
            if addressList[charIndex + 1] == 'h':
                numberList = [addressList[charIndex - 2], addressList[charIndex - 1]]
                return int(''.join(str(x) for x in numberList))
</code></pre>
<p>This looks very messy, but it should get the job done, as long as the number is two digits long. However, if there are many things you need to look for, you should probably look for a more convenient and simpler way to do this.</p>
| 0 | 2016-08-10T18:33:29Z | [
"python"
] |
Search for pattern in string and add characters if found | 38,880,831 | <p>I am working on some address cleaning/geocoding software, and I recently ran into a specific address format that is causing some problems for me.</p>
<p>My external geocoding module is having trouble finding addresses such as <code>30 w 60th new york</code> (<code>30 w 60th street new york</code> is the proper format of the address).</p>
<p>Essentially what I would need to do is parse the string and check the following:</p>
<ol>
<li>Are there any numbers followed by <code>th</code> or <code>st</code> or <code>nd</code> or <code>rd</code>? (+ a space following them). I.E <code>33rd</code> <code>34th</code> <code>21st</code> <code>24th</code></li>
<li>If so, is the word <code>street</code> following it?</li>
</ol>
<p>If yes, do nothing.</p>
<p>If no, add the word <code>street</code> immediately after the specific pattern?</p>
<p>Would regex be the best way to approach this situation?</p>
<p><strong>Further Clarification:</strong> I am not having any issues with other address suffixes, such as avenue, road, etc etc etc. I have analyzed very large data sets (I'm running about 12,000 addresses/day through my application), and instances where <code>street</code> is left out is what is causing the biggest headaches for me. I have looked into address parsing modules, such as usaddress, smartystreets, and others. I really just need to come up with a clean (hopefully regex?) solution to the specific problem that I have described.</p>
<p>I'm thinking something along the lines of:</p>
<ol>
<li>Converting the string to a list.</li>
<li>Find the index of the element in the list that meets the criteria that i've explained </li>
<li>Check to see if the next element is <code>street</code>. If so, do nothing.</li>
<li>If not, reconstruct the list with <code>[:targetword + len(targetword)] + 'street' + [:targetword + len(targetword)]</code>. (<code>targetword</code> would be <code>47th</code> or whatever is in the string)</li>
<li>Join the list back into a string.</li>
</ol>
<p>I'm not exactly the best with regex, so I'm looking for some input.</p>
<p>Thanks.</p>
| -1 | 2016-08-10T18:20:44Z | 38,881,831 | <p>To check and add the word street, the following function should work as long as the street number comes before its name:</p>
<pre><code>def check_add_street(address):
    addressList = list(address)
    newIndex = None
    # enumerate keeps the index of *this* occurrence; the original
    # list.index(character) always returned the first match, and the
    # character-copy loop below it could never terminate
    for charIndex, character in enumerate(addressList[:-1]):
        if character + addressList[charIndex + 1] in ('th', 'st', 'nd', 'rd'):
            newIndex = charIndex + 1
            break
    if newIndex is None:
        return ''.join(str(x) for x in addressList)  # no suffix found, nothing to do
    # only insert when the suffix is not already followed by ' street'
    if ''.join(addressList[newIndex + 1:newIndex + 8]) != ' street':
        newAddressList = (addressList[:newIndex + 1]
                          + list(' street')
                          + addressList[newIndex + 1:])
        return ''.join(str(x) for x in newAddressList)
    else:
        return ''.join(str(x) for x in addressList)
</code></pre>
<p>This will add the word "street" if it is not already present, given that the format that you gave above is consistent.</p>
| 0 | 2016-08-10T19:18:07Z | [
"python"
] |
Search for pattern in string and add characters if found | 38,880,831 | <p>I am working on some address cleaning/geocoding software, and I recently ran into a specific address format that is causing some problems for me.</p>
<p>My external geocoding module is having trouble finding addresses such as <code>30 w 60th new york</code> (<code>30 w 60th street new york</code> is the proper format of the address).</p>
<p>Essentially what I would need to do is parse the string and check the following:</p>
<ol>
<li>Are there any numbers followed by <code>th</code> or <code>st</code> or <code>nd</code> or <code>rd</code>? (+ a space following them). I.E <code>33rd</code> <code>34th</code> <code>21st</code> <code>24th</code></li>
<li>If so, is the word <code>street</code> following it?</li>
</ol>
<p>If yes, do nothing.</p>
<p>If no, add the word <code>street</code> immediately after the specific pattern?</p>
<p>Would regex be the best way to approach this situation?</p>
<p><strong>Further Clarification:</strong> I am not having any issues with other address suffixes, such as avenue, road, etc etc etc. I have analyzed very large data sets (I'm running about 12,000 addresses/day through my application), and instances where <code>street</code> is left out is what is causing the biggest headaches for me. I have looked into address parsing modules, such as usaddress, smartystreets, and others. I really just need to come up with a clean (hopefully regex?) solution to the specific problem that I have described.</p>
<p>I'm thinking something along the lines of:</p>
<ol>
<li>Converting the string to a list.</li>
<li>Find the index of the element in the list that meets the criteria that i've explained </li>
<li>Check to see if the next element is <code>street</code>. If so, do nothing.</li>
<li>If not, reconstruct the list with <code>[:targetword + len(targetword)] + 'street' + [:targetword + len(targetword)]</code>. (<code>targetword</code> would be <code>47th</code> or whatever is in the string)</li>
<li>Join the list back into a string.</li>
</ol>
<p>I'm not exactly the best with regex, so I'm looking for some input.</p>
<p>Thanks.</p>
| -1 | 2016-08-10T18:20:44Z | 38,882,942 | <p>It seems that you're looking for regexp. =P</p>
<p>Here's some code I built specially for you:</p>
<pre><code>import re
def check_th_add_street(address):
# compile regexp rule
has_th_st_nd_rd = re.compile(r"(?P<number>[\d]{1,3}(st|nd|rd|th)\s)(?P<following>.*)")
# first check if the address has number followed by something like 'th, st, nd, rd'
has_number = has_th_st_nd_rd.search(address)
if has_number is not None:
# then check if not followed by 'street'
if re.match('street', has_number.group('following')) is None:
# then add the 'street' word
new_address = re.sub('(?P<number>[\d]{1,3}(st|nd|rd|th)\s)', r'\g<number>street ', address)
return new_address
else:
return True # the format is good (followed by 'street')
else:
return True # there is no number like 'th, st, nd, rd'
</code></pre>
<p>I'm a python learner, so thank you for letting me know if it solves your issue.</p>
<p>Tested on a small list of addresses. </p>
<p>Hope it helps or leads you to a solution.</p>
<p>Thank you ! </p>
<p><strong>EDIT</strong> </p>
<p>Improved to take care of the case where the ordinal is already followed by "avenue" or "road" as well as "street":</p>
<pre><code>import re
def check_th_add_street(address):
# compile regexp rule
has_th_st_nd_rd = re.compile(r'(?P<number>[\d]{1,3}(th|st|nd|rd)\s)(?P<following>.*)')
# first check if the address has number followed by something like 'th, st, nd, rd'
has_number = has_th_st_nd_rd.search(address)
if has_number is not None:
# check if followed by "avenue" or "road" or "street"
if re.match(r'(avenue|road|street)', has_number.group('following')):
return True # do nothing
# else add the "street" word
else:
# then add the 'street' word
new_address = re.sub('(?P<number>[\d]{1,3}(st|nd|rd|th)\s)', r'\g<number>street ', address)
return new_address
else:
return True # there is no number like 'th, st, nd, rd'
</code></pre>
<p><strong>RE-EDIT</strong></p>
<p>I made some improvements for your needs and added an example of use:</p>
<pre><code>import re
# build the original address list includes bad format
address_list = [
'30 w 60th new york',
'30 w 60th new york',
'30 w 21st new york',
'30 w 23rd new york',
'30 w 1231st new york',
'30 w 1452nd new york',
'30 w 1300th new york',
'30 w 1643rd new york',
'30 w 22nd new york',
'30 w 60th street new york',
'30 w 60th street new york',
'30 w 21st street new york',
'30 w 22nd street new york',
'30 w 23rd street new york',
'30 w brown street new york',
'30 w 1st new york',
'30 w 2nd new york',
'30 w 116th new york',
'30 w 121st avenue new york',
'30 w 121st road new york',
'30 w 123rd road new york',
'30 w 12th avenue new york',
'30 w 151st road new york',
'30 w 15th road new york',
'30 w 16th avenue new york'
]
def check_th_add_street(address):
# compile regexp rule
has_th_st_nd_rd = re.compile(r'(?P<number>[\d]{1,4}(th|st|nd|rd)\s)(?P<following>.*)')
# first check if the address has number followed by something like 'th, st, nd, rd'
has_number = has_th_st_nd_rd.search(address)
if has_number is not None:
# check if followed by "avenue" or "road" or "street"
if re.match(r'(avenue|road|street)', has_number.group('following')):
return address # return original address
# else add the "street" word
else:
new_address = re.sub('(?P<number>[\d]{1,4}(st|nd|rd|th)\s)', r'\g<number>street ', address)
return new_address
else:
return address # there is no number like 'th, st, nd, rd' -> return original address
# initialisation of the new list
new_address_list = []
# built the new clean list
for address in address_list:
new_address_list.append(check_th_add_street(address))
# or you could use it straight here i.e. :
# address = check_th_add_street(address)
# print address
# use the new list to do you work
for address in new_address_list:
print "Formated address is : %s" % address # or what ever you want to do with 'address'
</code></pre>
<p>Will output : </p>
<blockquote>
<pre><code>Formated address is : 30 w 60th street new york
Formated address is : 30 w 60th street new york
Formated address is : 30 w 21st street new york
Formated address is : 30 w 23rd street new york
Formated address is : 30 w 1231st street new york
Formated address is : 30 w 1452nd street new york
Formated address is : 30 w 1300th street new york
Formated address is : 30 w 1643rd street new york
Formated address is : 30 w 22nd street new york
Formated address is : 30 w 60th street new york
Formated address is : 30 w 60th street new york
Formated address is : 30 w 21st street new york
Formated address is : 30 w 22nd street new york
Formated address is : 30 w 23rd street new york
Formated address is : 30 w brown street new york
Formated address is : 30 w 1st street new york
Formated address is : 30 w 2nd street new york
Formated address is : 30 w 116th street new york
Formated address is : 30 w 121st avenue new york
Formated address is : 30 w 121st road new york
Formated address is : 30 w 123rd road new york
Formated address is : 30 w 12th avenue new york
Formated address is : 30 w 151st road new york
Formated address is : 30 w 15th road new york
Formated address is : 30 w 16th avenue new york
</code></pre>
</blockquote>
<p><strong>RE-RE-EDIT</strong> </p>
<p>The final function : added the count parameter to re.sub()</p>
<pre><code>def check_th_add_street(address):
# compile regexp rule
has_th_st_nd_rd = re.compile(r'(?P<number>[\d]{1,4}(th|st|nd|rd)\s)(?P<following>.*)')
# first check if the address has number followed by something like 'th, st, nd, rd'
has_number = has_th_st_nd_rd.search(address)
if has_number is not None:
# check if followed by "avenue" or "road" or "street"
if re.match(r'(avenue|road|street)', has_number.group('following')):
return address # do nothing
# else add the "street" word
else:
# then add the 'street' word
new_address = re.sub('(?P<number>[\d]{1,4}(st|nd|rd|th)\s)', r'\g<number>street ', address, 1) # the last parameter is the maximum number of pattern occurences to be replaced
return new_address
else:
return address # there is no number like 'th, st, nd, rd'
</code></pre>
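<p>For reference, the whole check-and-insert can also be collapsed into a single <code>re.sub</code> with a negative lookahead (a sketch under the same assumption that only "street", "avenue" and "road" should block the insertion):</p>

```python
import re

def add_street(address):
    # insert "street" after an ordinal (21st/22nd/23rd/60th...) unless it
    # is already followed by street, avenue or road
    return re.sub(r'\b(\d{1,4}(?:st|nd|rd|th))\b(?! (?:street|avenue|road)\b)',
                  r'\1 street', address)

print(add_street('30 w 60th new york'))  # 30 w 60th street new york
```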
| 2 | 2016-08-10T20:27:44Z | [
"python"
] |
How to download PyHook module | 38,880,842 | <p>I have tried to install PyHook but I'm still getting the error: </p>
<blockquote>
<p>ImportError: No module named 'pyhook'</p>
</blockquote>
<p>Please give me a solution. How do I solve this error?</p>
<p>I am making a key-logger program.</p>
<pre><code>import pythoncom, pyHook, sys, logging
LOG_FILENAME = 'YOURNAME-keylog.txt'
def OnKeyboardEvent(event):
logging.basicConfig(filename=LOG_FILENAME,
level=logging.DEBUG,
format='%(message)s')
print "Key: ", chr(event.Ascii)
logging.log(10,chr(event.Ascii))
return True
hm = pyHook.HookManager()
hm.KeyDown = OnKeyboardEvent
hm.HookKeyboard()
pythoncom.PumpMessages()
</code></pre>
<p><a href="http://i.stack.imgur.com/A5KAc.png" rel="nofollow">Please refer screenshot for error</a></p>
 | 0 | 2016-08-10T18:21:17Z | 38,881,110 | <p>First, just check which Python version you're running; in my case when I type python I'll see:</p>
<blockquote>
<p>Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 20:40:30) [MSC v.1500 64 bit (AMD64)] on win32</p>
</blockquote>
<p>That means I'd need to install the 64-bit version for Python 2.7. The simplest way is downloading the <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyhook" rel="nofollow">pyhook package from this website</a>, then just saving it in your virtualenv directory (if you're using one) or just in your Python folder. Then open a command prompt, go to the path where the package has been downloaded, and just type <code>pip install the_name_of_your_package.whl</code>; for example, if you're running Python 2.7 64-bit, you'd type:</p>
<blockquote>
<p>pip install pyHook-1.5.1-cp27-none-win_amd64.whl</p>
</blockquote>
<p>Once that's installed correctly your script should work without problems.</p>
| 0 | 2016-08-10T18:37:07Z | [
"python",
"python-2.7"
] |
How to download PyHook module | 38,880,842 | <p>I have tried to install PyHook but I'm still getting the error: </p>
<blockquote>
<p>ImportError: No module named 'pyhook'</p>
</blockquote>
<p>Please give me a solution. How do I solve this error?</p>
<p>I am making a key-logger program.</p>
<pre><code>import pythoncom, pyHook, sys, logging
LOG_FILENAME = 'YOURNAME-keylog.txt'
def OnKeyboardEvent(event):
logging.basicConfig(filename=LOG_FILENAME,
level=logging.DEBUG,
format='%(message)s')
print "Key: ", chr(event.Ascii)
logging.log(10,chr(event.Ascii))
return True
hm = pyHook.HookManager()
hm.KeyDown = OnKeyboardEvent
hm.HookKeyboard()
pythoncom.PumpMessages()
</code></pre>
<p><a href="http://i.stack.imgur.com/A5KAc.png" rel="nofollow">Please refer screenshot for error</a></p>
 | 0 | 2016-08-10T18:21:17Z | 38,881,207 | <p>First off. If you want PyHook you're going to have to download it yourself, since it is not part of the standard <a href="https://docs.python.org/2/library/" rel="nofollow">python library</a> that comes with Python. There are many ways you could go about installing it, but the way I recommend is:</p>
<p><strong>1</strong>. Download PyHook from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyhook" rel="nofollow">this</a> page. <strong>Make sure to get the download that matches your version of python and your bit number (32 or 64)</strong>.</p>
<p><strong>2</strong>. Next in your command prompt/terminal window type: </p>
<blockquote>
<p>pip install [full path and name of .whl file]</p>
</blockquote>
<p>This command tells pip to install PyHook for you. In my case I typed: </p>
<blockquote>
<p>pip install C:\pyHook-1.5.1-cp35-none-win_amd64.whl</p>
</blockquote>
<p>After doing this your script should run without any problems. If you do encounter any problems using this method, check and see if you installed the right version of Python and the correct bit number.</p>
| 0 | 2016-08-10T18:42:01Z | [
"python",
"python-2.7"
] |
 Python Networkx: preserving nodes ID when saving/serializing networks to hardisk | 38,880,872 | <p>I am using networkx 1.10 in Python for a project that requires dumping the networks to the hard disk and later reloading them in order to continue the execution of some algorithm that manipulates such networks.</p>
<p>I've been trying to do that in a couple of different ways: using nx.write_gml() and nx.read_gml() at first, now with pickle/cPickle.</p>
<p>Although superficially everything seems working fine, I've noticed that I get different results in my simulation whether the saving/loading happens or not at some point in time.</p>
<p>I think this might be related to the fact that some networks seem to be modified by the dumping/reloading procedure (which is of course unexpected).</p>
<p>For debugging, now I am saving and reloading each network with pickle, comparing their gml representation (achieved with nx.write_gml / nx.generate_gml) before and after dumping/reloading. Doing so, I've noticed some discrepancies.</p>
<p>In some cases it's just the order of some of the graph attributes that gets modified, which causes no harm in my program. In other cases, the order in which two edges appear in the gml representation is different, again no harm.</p>
<p>Often, however, the ID of some nodes is modified, as in this example:</p>
<p><a href="https://www.diffchecker.com/zvzxrshy" rel="nofollow">https://www.diffchecker.com/zvzxrshy</a></p>
<p>Although the edges seem to be modified accordingly so that the network appears to be equivalent, this change in the ID can alter my simulation (for reasons that I am not going to explain). </p>
<p>I believe this might be the root of my problems.</p>
<p>Do you know why this is happening, even when using a low level serialization mechanism such as those implemented by pickle/cpickle?</p>
<p>How can I ensure that the networks are exactly the same before and after the serialization procedure?</p>
<p>I am doing something like this:</p>
<pre><code>with open('my_network.pickle', 'wb') as handle:
pickle.dump(my_network, handle, protocol=pickle.HIGHEST_PROTOCOL)
...
with open('my_network.pickle', 'rb') as handle:
my_network_reloaded = pickle.load(handle)
# compare nx.write_gml for my_network and my_network_reloaded
</code></pre>
<p>Any help will be highly appreciated: I've been fighting with this issue for the last three days and I am going crazy!</p>
<p>Thank you</p>
| 0 | 2016-08-10T18:22:37Z | 38,881,566 | <p>They seem to be the same graphs, up to isomorphism.</p>
<p>Your issue may be node order. Indeed, networkx uses dicts to store nodes and edges, which are unordered and randomized.</p>
<p>You have two solutions: either ignore that, or use ordered graphs:</p>
<pre><code>>>> from collections import OrderedDict
>>> class OrderedGraph(nx.Graph):
... node_dict_factory = OrderedDict
... adjlist_dict_factory = OrderedDict
>>> G = OrderedGraph()
>>> G.add_nodes_from( (2,1) )
>>> G.nodes()
[2, 1]
>>> G.add_edges_from( ((2,2), (2,1), (1,1)) )
>>> G.edges()
[(2, 2), (2, 1), (1, 1)]
</code></pre>
<p>(example from <a href="https://networkx.github.io/documentation/networkx-1.10/reference/classes.graph.html" rel="nofollow">the documentation of the Graph class</a>)</p>
| 1 | 2016-08-10T19:02:05Z | [
"python",
"serialization",
"pickle",
"networkx",
"gml"
] |
SWIG 'cstring.i' Python return byte type instead of string type | 38,880,899 | <p>I have a C function like</p>
<pre><code>int foo(void ** buf, int * buf_size)
</code></pre>
<p>And I use <code>cstring.i</code> to wrap it for use in Python 3. The result of the wrapped Python function is of type string. </p>
<p>Is there a way to get the result as type binary?</p>
<p>Background: The data <code>buf</code> gets filled with is msgpack encoded data, so using <code>str.decode</code> in Python is not an option. The msgpack implementations for Python only accept binary data.</p>
| 1 | 2016-08-10T18:24:25Z | 39,215,938 | <p>I solved my problem using <code>ctypes</code>, with the help of <a href="https://github.com/bit-01101/ctypesgen" rel="nofollow">https://github.com/bit-01101/ctypesgen</a></p>
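<p>For reference, a minimal sketch of what the <code>ctypes</code> route can look like for the question's <code>int foo(void ** buf, int * buf_size)</code> signature. The library path and symbol are placeholders; the runnable part only demonstrates that <code>ctypes.string_at</code> hands back <code>bytes</code> (which msgpack accepts), not <code>str</code>:</p>

```python
import ctypes

# Hypothetical call pattern for: int foo(void **buf, int *buf_size)
# (shared-library name and symbol are placeholders):
#
#   lib = ctypes.CDLL("./libfoo.so")
#   buf, size = ctypes.c_void_p(), ctypes.c_int()
#   lib.foo(ctypes.byref(buf), ctypes.byref(size))
#   raw = ctypes.string_at(buf, size.value)
#
# Runnable demonstration that string_at returns bytes, not str:
payload = b"\x93\x01\x02\x03"  # example binary data
cbuf = ctypes.create_string_buffer(payload, len(payload))
raw = ctypes.string_at(cbuf, len(payload))
print(type(raw))  # <class 'bytes'>
```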
| 0 | 2016-08-29T22:08:41Z | [
"python",
"swig"
] |
SWIG 'cstring.i' Python return byte type instead of string type | 38,880,899 | <p>I have a C function like</p>
<pre><code>int foo(void ** buf, int * buf_size)
</code></pre>
<p>And I use <code>cstring.i</code> to wrap it for use in Python 3. The result of the wrapped Python function is of type string. </p>
<p>Is there a way to get the result as type binary?</p>
<p>Background: The data <code>buf</code> gets filled with is msgpack encoded data, so using <code>str.decode</code> in Python is not an option. The msgpack implementations for Python only accept binary data.</p>
| 1 | 2016-08-10T18:24:25Z | 39,518,594 | <p>If you use <code>%cstring_output_allocate_size</code> the wrapper function <code>_wrap_foo</code> calls <code>SWIG_FromCharPtrAndSize()</code> which has the following decoding logic:</p>
<pre><code>#if PY_VERSION_HEX >= 0x03000000
#if defined(SWIG_PYTHON_STRICT_BYTE_CHAR)
return PyBytes_FromStringAndSize(carray, (Py_ssize_t)(size));
#else
#if PY_VERSION_HEX >= 0x03010000
return PyUnicode_DecodeUTF8(carray, (Py_ssize_t)(size), "surrogateescape");
#else
return PyUnicode_FromStringAndSize(carray, (Py_ssize_t)(size));
#endif
#endif
#else
return PyString_FromStringAndSize(carray, (Py_ssize_t)(size));
#endif
</code></pre>
<p>So you can get bytes instead of unicode string by #defining <code>SWIG_PYTHON_STRICT_BYTE_CHAR</code>. This is documented in <a href="http://www.swig.org/Doc3.0/Python.html" rel="nofollow">http://www.swig.org/Doc3.0/Python.html</a> so it's an official feature. But since it's a global switch, it's only useful if you want all string parameters to be mapped to bytes. If you need a mix of <code>str</code> and <code>bytes</code> in your API the only solution I can see is a custom typemap.</p>
| 0 | 2016-09-15T19:06:45Z | [
"python",
"swig"
] |
 Keyed Random Shuffles in Python | 38,880,960 | <p>I have a list <code>a=[num1,num2,...,numn]</code>. I am shuffling it with <code>random.shuffle(a)</code>. However, what I want is for the shuffle algorithm to take a key as input, so that the owner of the key can deterministically reproduce the same shuffles.</p>
<p>I want an algorithm that takes as input the key and the sequence of elements to be shuffled, and outputs a random permutation thereof, depending on the key. Applying the shuffle again with the same key on the same sequence of data gives the same result; otherwise the shuffle is random. The same key on the same data allows one to "un-shuffle".</p>
<p>Is that possible?</p>
| 0 | 2016-08-10T18:28:08Z | 38,881,327 | <p>In pseudo-random terminology, that key is called a <em>seed</em>, and you can set the random seed on a new <code>random.Random()</code> instance:</p>
<pre><code>def keyed_shuffle(x, seed=None):
random_obj = random.Random(seed)
random_obj.shuffle(x)
</code></pre>
<p>You could use <a href="https://docs.python.org/2/library/random.html#random.seed" rel="nofollow"><code>random.seed()</code></a> and <a href="https://docs.python.org/2/library/random.html#random.shuffle" rel="nofollow"><code>random.shuffle()</code></a> directly too, but using your own <code>random.Random()</code> instance avoids setting the seed on the singleton <code>random.Random()</code> instance that the <code>random</code> module uses, thus not affecting other uses of that module.</p>
<p>The seed can either be an integer (used directly) or any hashable object.</p>
<p>Demo:</p>
<pre><code>>>> a = [10, 50, 42, 193, 21, 88]
>>> keyed_shuffle(a) # no seed
>>> a
[42, 10, 88, 21, 50, 193]
>>> a = [10, 50, 42, 193, 21, 88]
>>> keyed_shuffle(a) # again no seed, different random result
>>> a
[88, 50, 193, 10, 42, 21]
>>> b = [10, 50, 42, 193, 21, 88]
>>> keyed_shuffle(b, 42) # specific seed
>>> b
[193, 50, 42, 21, 10, 88]
>>> b = [10, 50, 42, 193, 21, 88]
>>> keyed_shuffle(b, 42) # same seed used, same output
>>> b
[193, 50, 42, 21, 10, 88]
>>> c = [10, 50, 42, 193, 21, 88]
>>> keyed_shuffle(c, 81) # different seed, different random order
>>> c
[10, 50, 88, 42, 193, 21]
</code></pre>
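<p>Since the question also asks about "un-shuffling", note that the same seed trick can invert the permutation: shuffle a list of indices with the same key to recover the permutation that was applied, then undo it (a sketch building on the function above; <code>shuffle</code>'s swaps depend only on the seed and the list length, not its contents):</p>

```python
import random

def keyed_shuffle(x, seed=None):
    random.Random(seed).shuffle(x)

def keyed_unshuffle(x, seed):
    # shuffling [0..n-1] with the same seed reproduces the permutation:
    # afterwards, order[i] is the original index of the item now at slot i
    order = list(range(len(x)))
    random.Random(seed).shuffle(order)
    out = [None] * len(x)
    for i, j in enumerate(order):
        out[j] = x[i]
    return out

a = [10, 50, 42, 193, 21, 88]
b = list(a)
keyed_shuffle(b, 42)
print(keyed_unshuffle(b, 42) == a)  # True
```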
| 2 | 2016-08-10T18:50:16Z | [
"python",
"random",
"shuffle"
] |
Mac OSX $python3 test_imagenet.py --image images/dog_beagle.png ImportError: No module named 'cv2' | 38,880,999 | <p>I am using the tutorial found <a href="http://www.pyimagesearch.com/2016/08/10/imagenet-classification-with-python-and-keras/" rel="nofollow">here</a></p>
<p>The error I get on the command line and for the Pycharm IDE is that <code>cv2</code> can't be installed. Is this a problem because I am using a Macbook or can this be solved?</p>
| 0 | 2016-08-10T18:30:05Z | 38,881,348 | <p>If the error is saying "no module named cv2", you need to install OpenCV. You can do this with homebrew using <code>brew install opencv</code></p>
<p>For detailed instructions, take a look at: <a href="http://www.mobileway.net/2015/02/14/install-opencv-for-python-on-mac-os-x/" rel="nofollow">http://www.mobileway.net/2015/02/14/install-opencv-for-python-on-mac-os-x/</a></p>
<hr>
<p><strong>MORE INFO:</strong></p>
<p>With opencv installed with brew, you just need to know the package location. Go to <code>/usr/local/Cellar/opencv/</code> to find out your version number. I just installed with brew and have 2.4.13.</p>
<p>Once you know that, navigate to your Python directory with <code>cd /Library/Python/2.7/site-packages/</code> </p>
<p>You can then run each of these to link them up to Python:</p>
<pre><code>ln -s /usr/local/Cellar/opencv/2.4.13/lib/python2.7/site-packages/cv.py cv.py
ln -s /usr/local/Cellar/opencv/2.4.13/lib/python2.7/site-packages/cv2.so cv2.so
</code></pre>
<p>(replace the 2.4.13 with the version number you have)</p>
<p>Hope that helps. I don't think it will be any easier with Windows, though.</p>
| 0 | 2016-08-10T18:51:27Z | [
"python",
"osx",
"computer-vision",
"keras"
] |
 PIL isn't able to show images | 38,881,015 | <p>I've been trying to show an image with PIL but I don't know why the image viewer keeps giving me this: "Windows photo viewer can't open this picture because either the picture is deleted, or it's in a location that isn't available."
I've checked that the pic is on my desktop and doesn't have any problem.
Here is my code:</p>
<pre><code>from PIL import Image
img = Image.open('photo_2016-08-04_22-38-11.jpg')
img.show()
</code></pre>
<p>Could any one help me with this?</p>
| 1 | 2016-08-10T18:31:05Z | 38,881,411 | <p>Try this little snippet:</p>
<pre><code>from PIL import Image
import os
home_path = os.path.expanduser('~')
filename = os.path.join(home_path,'Desktop','photo_2016-08-04_22-38-11.jpg')
if os.path.exists(filename):
print "Opening filename {0}".format(filename)
img = Image.open(filename)
img.show()
else:
print "Filename {0} doesn't exist".format(filename)
</code></pre>
| 0 | 2016-08-10T18:54:37Z | [
"python",
"windows",
"python-imaging-library"
] |
How to create subplots with Pandas Scatter Matrix | 38,881,265 | <p>So I'm trying to create a subplot of pandas scatter matrices but have run into a bit of a wall. I've looked around at other similar questions, but none of the answers seemed to fix it. Right now I'm just trying to create a subplot of 2 of them. I'm going to move to a two by two subplot at some point, but am just trying to start small just to get it working. Here is my code</p>
<pre><code>df_SH = pd.DataFrame({'East_Pacific_SH':DJF_con_SH_east_finite_all,
'West_Pacific_SH':DJF_con_SH_west_finite_all,
'Atl_SH':DJF_con_SH_atl_finite_all,
'Zonal_SH':DJF_con_SH_zonal_finite_all})
df_NH = pd.DataFrame({'East_Pacific_NH':DJF_con_NH_east_finite_all,
'West_Pacific_NH':DJF_con_NH_west_finite_all,
'Atl_NH':DJF_con_NH_atl_finite_all,
'Zonal_NH':DJF_con_NH_zonal_finite_all})
region_name=np.array(['East_Pacific_SH', 'West_Pacific_SH', 'Atl_SH', 'Zonal_SH'])
plt.suptitle('Control Correlations')
plt.subplot(211)
axes = pd.scatter_matrix(df_SH, alpha=0.2, diagonal='kde')
corr = df_SH.corr().as_matrix()
for i, j in zip(*plt.np.triu_indices_from(axes, k=1)):
axes[j, i].annotate("%.3f" %corr[j,i], (.8, .9), xycoords='axes fraction', ha='center', va='center')
plt.title('Control DJF SH', size = 15)
#plt.savefig(filename='DJF_SH_Control_Scatter.pdf', ftype='pdf')
#plt.show()
plt.subplot(212)
axes2 = pd.scatter_matrix(df_NH, alpha=0.2, diagonal='kde')
corr2 = df_NH.corr().as_matrix()
for i, j in zip(*plt.np.triu_indices_from(axes, k=1)):
axes2[j, i].annotate("%.3f" %corr2[j,i], (.8, .9), xycoords='axes fraction', ha='center', va='center')
plt.title('Control DJF NH', size = 15)
#plt.savefig(filename='DJF_NH_Control_Scatter.pdf', ftype='pdf')
plt.show()
</code></pre>
<p>Here are the results</p>
<p><a href="http://i.stack.imgur.com/Nip6h.png" rel="nofollow"><img src="http://i.stack.imgur.com/Nip6h.png" alt="result"></a></p>
<p><a href="http://i.stack.imgur.com/KX9mH.png" rel="nofollow"><img src="http://i.stack.imgur.com/KX9mH.png" alt="result"></a></p>
<p><a href="http://i.stack.imgur.com/r7h7u.png" rel="nofollow"><img src="http://i.stack.imgur.com/r7h7u.png" alt="result"></a></p>
| 0 | 2016-08-10T18:46:13Z | 38,883,657 | <p><code>pandas</code> doesn't currently do that, although it has a promising <code>ax</code> argument:</p>
<pre><code>iris = pd.DataFrame.from_csv('iris.csv')
chicks = pd.DataFrame.from_csv('ChickWeight.csv')
import matplotlib.pyplot as plt
fig, axs = plt.subplots(2)
#iris.plot.scatter('PL', 'PW', ax = axs[0])
#chicks.plot.scatter('Diet', 'Chick', ax = axs[1]) # This is fine.
pd.scatter_matrix(iris, ax=axs[0], alpha=0.2)
pd.scatter_matrix(chicks, ax=axs[1], alpha=0.2) # This clears axes unexpectedly.
plt.savefig('two_pd_scatter.png')
</code></pre>
<p>gives warning </p>
<blockquote>
<p>/usr/local/lib/python2.7/site-packages/pandas/tools/plotting.py:3303: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared</p>
<p>"is being cleared", UserWarning)</p>
</blockquote>
<p>Note that it's specifically clearing the whole <em>figure</em>, not just the passed axes. </p>
<p>I'd fix this by generating four tight-margin, systematically named figures for the scatter matrices and setting up a document (TeX, whatever) that imports those figures in the right places and with the right titles. </p>
| 0 | 2016-08-10T21:13:21Z | [
"python",
"pandas",
"matplotlib"
] |
 Python+Memcache changing array to string | 38,881,272 | <p>I am using Amazon ElastiCache and Python's <code>pymemcache</code> module.</p>
<pre><code>client = Client(('xxxx', 11211))
test=[1,2,3,4]
client.set('test_key', test)
result = client.get('test_key')
pprint.pprint(result)
pprint.pprint(test)
</code></pre>
<p>This outputs the following:</p>
<pre><code>'[1, 2, 3, 4]'
[1, 2, 3, 4]
</code></pre>
<p>As you can see, the memcache result has been changed to a string. How can I store an array? Or convert the string back to an array?</p>
| 1 | 2016-08-10T18:46:22Z | 38,881,381 | <p>It seems the library wants you to serialise your data first.
<a href="https://pymemcache.readthedocs.io/en/latest/getting_started.html#serialization" rel="nofollow">https://pymemcache.readthedocs.io/en/latest/getting_started.html#serialization</a></p>
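<p>For example, a JSON-based serializer/deserializer pair roughly along the lines of the one shown in that guide (the <code>flags</code> values are arbitrary markers you pick; the <code>Client</code> lines are commented out since they need a live memcached):</p>

```python
import json

def json_serializer(key, value):
    # return (stored_value, flags); flags mark how the value was encoded
    if type(value) == str:
        return value, 1
    return json.dumps(value), 2

def json_deserializer(key, value, flags):
    if flags == 1:
        return value
    if flags == 2:
        return json.loads(value)
    raise Exception("Unknown serialization format")

# client = Client(('xxxx', 11211),
#                 serializer=json_serializer,
#                 deserializer=json_deserializer)
# client.set('test_key', [1, 2, 3, 4])
# client.get('test_key')  # -> [1, 2, 3, 4], a real list again

# round trip without a server:
stored, flags = json_serializer('test_key', [1, 2, 3, 4])
print(json_deserializer('test_key', stored, flags))  # [1, 2, 3, 4]
```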
<p>Maybe you could use an alternative library?
<a href="https://github.com/pinterest/pymemcache#comparison-with-other-libraries" rel="nofollow">https://github.com/pinterest/pymemcache#comparison-with-other-libraries</a></p>
<p>I'm not sure if it suits your purposes, but <a href="https://dogpilecache.readthedocs.io/en/latest/" rel="nofollow">dogpile.cache</a> might also be worth checking out. It's great to work with and it has memcached as one of its backends.</p>
| 0 | 2016-08-10T18:53:05Z | [
"python",
"arrays",
"memcached"
] |
 Python+Memcache changing array to string | 38,881,272 | <p>I am using Amazon ElastiCache and Python's <code>pymemcache</code> module.</p>
<pre><code>client = Client(('xxxx', 11211))
test=[1,2,3,4]
client.set('test_key', test)
result = client.get('test_key')
pprint.pprint(result)
pprint.pprint(test)
</code></pre>
<p>This outputs the following:</p>
<pre><code>'[1, 2, 3, 4]'
[1, 2, 3, 4]
</code></pre>
<p>As you can see, the memcache result has been changed to a string. How can I store an array? Or convert the string back to an array?</p>
| 1 | 2016-08-10T18:46:22Z | 38,881,506 | <p>Memcache itself stores values as strings. You can <a href="http://pymemcache.readthedocs.io/en/latest/getting_started.html#serialization" rel="nofollow">customize your serialization</a> in <code>pymemcache</code> if needed. It looks like by <a href="http://pymemcache.readthedocs.io/en/latest/apidoc/pymemcache.client.base.html#pymemcache.client.base.Client" rel="nofollow">default</a>, <code>pymemcache</code> just uses the <code>__str__</code> method on the value if a serializer is not provided:</p>
<blockquote>
<p>Values must have a <strong>str</strong>() method to convert themselves to a byte
string. Unicode objects can be a problem since str() on a Unicode
object will attempt to encode it as ASCII (which will fail if the
value contains code points larger than U+127). You can fix this with a
serializer or by just calling encode on the string (using UTF-8, for
instance).</p>
<p>If you intend to use anything but str as a value, it is a good idea to
use a serializer and deserializer. The pymemcache.serde library has
some already implemented serializers, including one that is compatible
with the python-memcache library.</p>
</blockquote>
<p>I personally prefer <a href="https://pypi.python.org/pypi/pylibmc" rel="nofollow"><code>pylibmc</code></a> to pymemcache. It handles non-string values quite well.</p>
| 1 | 2016-08-10T18:59:02Z | [
"python",
"arrays",
"memcached"
] |
 Frequency of each word in pandas DataFrame | 38,881,305 | <p>I have a pandas DataFrame which looks like the following </p>
<pre><code> key message Final Category
0 1 I have not received my gifts which I ordered ok voucher
1 2 hth her wells idyll McGill kooky bbc.co noclass
2 3 test test test 1 test noclass
3 4 test noclass
4 5 hello where is my reward points other
5 6 hi, can you get koovs coupons or vouchers here options
6 7 Hi Hey when you people will include amazon an options
</code></pre>
<p>I want to get a {key: {key: value}, ...} type of data structure: first group by Final Category, and for each category have a dictionary with each word's frequency.
For example,
grouping all noclass rows would look like the following: {'noclass': {'test': 5, '1': 1, 'hth': 1, 'her': 1, ...}}</p>
<p>I am new to SOF so sorry for not writing well.
Thanks </p>
| 2 | 2016-08-10T18:48:36Z | 38,881,744 | <p>There's probably a more eloquent way to do this, but here it is with a bunch of nested for loops:</p>
<pre><code>final_cat_list = df['Final Category'].unique()
word_count = {}
for f in final_cat_list:
word_count[f] = {}
message_list = list(df.loc[df['Final Category'] == f, 'key message'])
for m in message_list:
word_list = m.split(" ")
for w in word_list:
if w in word_count[f]:
word_count[f][w] += 1
else:
word_count[f][w] = 1
</code></pre>
| 0 | 2016-08-10T19:13:08Z | [
"python",
"pandas",
"dataframe"
] |
 Frequency of each word in pandas DataFrame | 38,881,305 | <p>I have a pandas DataFrame which looks like the following </p>
<pre><code> key message Final Category
0 1 I have not received my gifts which I ordered ok voucher
1 2 hth her wells idyll McGill kooky bbc.co noclass
2 3 test test test 1 test noclass
3 4 test noclass
4 5 hello where is my reward points other
5 6 hi, can you get koovs coupons or vouchers here options
6 7 Hi Hey when you people will include amazon an options
</code></pre>
<p>I want to get a {key: {key: value}, ...} type of data structure: first group by Final Category, and for each category have a dictionary with each word's frequency.
For example,
grouping all noclass rows would look like the following: {'noclass': {'test': 5, '1': 1, 'hth': 1, 'her': 1, ...}}</p>
<p>I am new to SOF so sorry for not writing well.
Thanks </p>
| 2 | 2016-08-10T18:48:36Z | 38,881,873 | <p>This modifies the original df, so you might wanna copy it first</p>
<pre><code>from collections import Counter
df["message"] = df["message"].apply(lambda message: message + " ")
df.groupby(["Final Category"]).sum().applymap(lambda message: Counter(message.split()))
</code></pre>
<p>What this code does: first it adds a space to the end of all messages. This will come in later.
Then it groups by Final Category, and sums up the messages in each group. This is where the trailing space is important, otherwise the last word of a message would get glued to the first word of the next. (Summing is concatenation for strings)</p>
<p>Then you split the string along whitespace to get the words, and then you count.</p>
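<p>The padding-then-counting idea shown on its own with plain strings (stdlib only; this is the same counting that <code>Counter(message.split())</code> performs per group):</p>

```python
from collections import Counter

# two "noclass" messages, each padded with a trailing space before summing
messages = ["test test test 1 test ", "test "]
combined = "".join(messages)      # what the groupby sum produces
counts = Counter(combined.split())
print(counts)  # Counter({'test': 5, '1': 1})
```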
| 0 | 2016-08-10T19:21:41Z | [
"python",
"pandas",
"dataframe"
] |
 Frequency of each word in pandas DataFrame | 38,881,305 | <p>I have a pandas DataFrame which looks like the following </p>
<pre><code> key message Final Category
0 1 I have not received my gifts which I ordered ok voucher
1 2 hth her wells idyll McGill kooky bbc.co noclass
2 3 test test test 1 test noclass
3 4 test noclass
4 5 hello where is my reward points other
5 6 hi, can you get koovs coupons or vouchers here options
6 7 Hi Hey when you people will include amazon an options
</code></pre>
<p>I want to get a {key: {key: value}, ...} type of data structure: first group by Final Category, and for each category have a dictionary with each word's frequency.
For example,
grouping all noclass rows would look like the following: {'noclass': {'test': 5, '1': 1, 'hth': 1, 'her': 1, ...}}</p>
<p>I am new to SOF so sorry for not writing well.
Thanks </p>
| 2 | 2016-08-10T18:48:36Z | 38,882,012 | <pre><code>import pandas as pd
import numpy as np
# copy/paste data (you can skip this since you already have a dataframe)
dict = {0 : {'key': 1 , 'message': "I have not received my gifts which I ordered ok", 'Final Category': 'voucher'},
1 : {'key': 2 , 'message': "hth her wells idyll McGill kooky bbc.co", 'Final Category': 'noclass'},
2 : {'key': 3 , 'message': "test test test 1 test", 'Final Category': 'noclass'},
3 : {'key': 4 , 'message': "test", 'Final Category': 'noclass'},
4 : {'key': 5 , 'message': "hello where is my reward points", 'Final Category': 'other'},
5 : {'key': 6 , 'message': "hi, can you get koovs coupons or vouchers here", 'Final Category': 'options'},
6 : {'key': 7 , 'message': "Hi Hey when you people will include amazon an", 'Final Category': 'options'}
}
# make DataFrame (you already have one)
df = pd.DataFrame(dict).T
# break up text into words, combine by 'Final' in my case
df.message = df.message.str.split(' ')
final_df = df.groupby('Final Category').agg(np.sum)
# make final dictionary
final_dict = {}
for label,text in zip(final_df.index, final_df.message):
final_dict[label] = {w: text.count(w) for w in text}
</code></pre>
| 0 | 2016-08-10T19:31:30Z | [
"python",
"pandas",
"dataframe"
] |
 Twitter data to csv, getting error when trying to add to CSV file | 38,881,314 | <p>Trying to put the last 24hrs of data into a CSV file with tweepy for Python, and I am getting</p>
<pre><code>Traceback (most recent call last):
File "**", line 74, in <module>
get_all_tweets("BQ")
File "**", line 66, in get_all_tweets
writer.writerows(outtweets)
File "C:\Users\Barry\AppData\Local\Programs\Python\Python35-32\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 6-7: character maps to <undefined>
</code></pre>
<p>as an error. Can anyone see what is wrong, as this was working in some capacity before today? </p>
<p>Code:
def get_all_tweets(screen_name):</p>
<pre><code># authorize twitter, initialize tweepy
auth = tweepy.OAuthHandler(consumer_token, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)
# initialize a list to hold all the tweepy Tweets
alltweets = []
# make initial request for most recent tweets (200 is the maximum allowed count)
new_tweets = api.home_timeline (screen_name=screen_name, count=200)
# save most recent tweets
alltweets.extend(new_tweets)
# save the id of the oldest tweet less one
oldest = alltweets[-1].id - 1
outtweets = []
page = 1
deadend = False
print ("getting tweets before %s" % (oldest))
# all subsiquent requests use the max_id param to prevent duplicates
new_tweets = api.home_timeline(screen_name=screen_name, count=200, max_id=oldest, page=page)
# save most recent tweets
alltweets.extend(new_tweets)
# update the id of the oldest tweet less one
oldest = alltweets[-1].id - 1
print ("...%s tweets downloaded so far" % (len(alltweets)))
for tweet in alltweets:
if (datetime.datetime.now() - tweet.created_at).days < 1:
# transform the tweepy tweets into a 2D array that will populate the csv
outtweets.append([tweet.user.name, tweet.created_at, tweet.text.encode("utf-8")])
else:
deadend = True
return
if not deadend:
page += 1
# write the csv
with open('%s_tweets.csv' % screen_name, 'w') as f:
writer = csv.writer(f)
writer.writerow(["name", "created_at", "text"])
writer.writerows(outtweets)
pass
print ("CSV written")
if __name__ == '__main__':
# pass in the username of the account you want to download
get_all_tweets("BQ")
</code></pre>
<p>** EDIT 1 **</p>
<pre><code> with open('%s_tweets.csv' % screen_name, 'w', encode('utf-8')) as f:
TypeError: an integer is required (got type bytes)
</code></pre>
<p>** EDIT 2**</p>
<pre><code> return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 6-7: character maps to <undefined>
</code></pre>
 | -1 | 2016-08-10T18:49:23Z | 38,881,969 | <p>It seems that the character is something that cannot be encoded into utf-8. While it may be useful to view the tweet in question that triggered the error, you can prevent such an error in the future by changing <code>tweet.text.encode("utf-8")</code> to either <code>tweet.text.encode("utf-8", "ignore")</code>, <code>tweet.text.encode("utf-8", "replace")</code>, or <code>tweet.text.encode("utf-8", "backslashreplace")</code>. <code>ignore</code> removes anything that cannot be encoded; <code>replace</code> substitutes a replacement marker (<code>?</code> when encoding); and <code>backslashreplace</code> substitutes the character's backslashed escape sequence, so <code>\x00</code> would become <code>\\x00</code>.</p>
<p>For more on this: <a href="https://docs.python.org/3/howto/unicode.html#converting-to-bytes" rel="nofollow">https://docs.python.org/3/howto/unicode.html#converting-to-bytes</a></p>
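<p>A small illustration of the three handlers against cp1252 (the codec in the traceback), using an emoji as a hypothetical stand-in for whatever un-encodable character was in the tweet:</p>

```python
text = "caf\u00e9 \U0001F600"  # 'café' plus an emoji cp1252 cannot represent

print(text.encode("cp1252", "ignore"))            # b'caf\xe9 '
print(text.encode("cp1252", "replace"))           # b'caf\xe9 ?'
print(text.encode("cp1252", "backslashreplace"))  # b'caf\xe9 \\U0001f600'
```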
| 0 | 2016-08-10T19:28:16Z | [
"python",
"csv",
"twitter",
"tweepy"
] |
 Twitter data to csv, getting error when trying to add to CSV file | 38,881,314 | <p>Trying to put the last 24hrs of data into a CSV file with tweepy for Python, and I am getting</p>
<pre><code>Traceback (most recent call last):
File "**", line 74, in <module>
get_all_tweets("BQ")
File "**", line 66, in get_all_tweets
writer.writerows(outtweets)
File "C:\Users\Barry\AppData\Local\Programs\Python\Python35-32\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 6-7: character maps to <undefined>
</code></pre>
<p>as an error. Can anyone see what is wrong, as this was working in some capacity before today? </p>
<p>Code:</p>
<pre><code>def get_all_tweets(screen_name):
    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_token, consumer_secret)
    auth.set_access_token(access_token, access_secret)
    api = tweepy.API(auth, wait_on_rate_limit=True)
    # initialize a list to hold all the tweepy Tweets
    alltweets = []
    # make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.home_timeline(screen_name=screen_name, count=200)
    # save most recent tweets
    alltweets.extend(new_tweets)
    # save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1
    outtweets = []
    page = 1
    deadend = False
    print("getting tweets before %s" % (oldest))
    # all subsequent requests use the max_id param to prevent duplicates
    new_tweets = api.home_timeline(screen_name=screen_name, count=200, max_id=oldest, page=page)
    # save most recent tweets
    alltweets.extend(new_tweets)
    # update the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1
    print("...%s tweets downloaded so far" % (len(alltweets)))
    for tweet in alltweets:
        if (datetime.datetime.now() - tweet.created_at).days < 1:
            # transform the tweepy tweets into a 2D array that will populate the csv
            outtweets.append([tweet.user.name, tweet.created_at, tweet.text.encode("utf-8")])
        else:
            deadend = True
            return
    if not deadend:
        page += 1
    # write the csv
    with open('%s_tweets.csv' % screen_name, 'w') as f:
        writer = csv.writer(f)
        writer.writerow(["name", "created_at", "text"])
        writer.writerows(outtweets)
    print("CSV written")

if __name__ == '__main__':
    # pass in the username of the account you want to download
    get_all_tweets("BQ")
</code></pre>
<p>** EDIT 1 **</p>
<pre><code> with open('%s_tweets.csv' % screen_name, 'w', encode('utf-8')) as f:
TypeError: an integer is required (got type bytes)
</code></pre>
<p>** EDIT 2**</p>
<pre><code> return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 6-7: character maps to <undefined>
</code></pre>
| -1 | 2016-08-10T18:49:23Z | 38,882,062 | <p>Your problem is with the characters in some tweets. You're not able to write them to the file you open.
If you replace this line</p>
<pre><code>with open('%s_tweets.csv' % screen_name, 'w') as f:
</code></pre>
<p>with this:</p>
<pre><code>with open('%s_tweets.csv' % screen_name, mode='w', encoding='utf-8') as f:
</code></pre>
<p>it should work. Please note that this will only work with python 3.x</p>
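As a self-contained sketch of why the explicit encoding matters (the rows below are made-up stand-ins for tweets, not real Twitter data):

```python
import csv
import os
import tempfile

# hypothetical rows standing in for tweets; the text field contains non-ASCII characters
rows = [["name", "created_at", "text"],
        ["BQ", "2016-08-10", "caf\u00e9 \u2013 \u2764"]]

path = os.path.join(tempfile.mkdtemp(), "demo_tweets.csv")

# an explicit utf-8 encoding avoids the cp1252 'charmap' UnicodeEncodeError on Windows
with open(path, mode="w", encoding="utf-8", newline="") as f:
    csv.writer(f).writerows(rows)

with open(path, mode="r", encoding="utf-8", newline="") as f:
    readback = list(csv.reader(f))

print(readback)
```

Passing `newline=""` is the csv module's recommended way to avoid blank lines on Windows; the `encoding` argument is what fixes the original error.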
| 0 | 2016-08-10T19:34:37Z | [
"python",
"csv",
"twitter",
"tweepy"
] |
Data in HDF file using Python missing | 38,881,326 | <p>I am trying to read in a hdf file but no groups show up. I have tried a couple different methods using tables and h5py but neither work in displaying the groups in the file. I checked and the file is 'Hierarchical Data Format (version 5) data' (See Update). The file information is <a href="https://ghrc.nsstc.nasa.gov/uso/ds_docs/lis/lis_dataset.html" rel="nofollow">here</a> for a reference.</p>
<p>Example data can be found <a href="https://ghrc.nsstc.nasa.gov/pub/lis/orbit_files/data/science/2010/0917/" rel="nofollow">here</a></p>
<pre><code>import h5py
import tables as tb
hdffile = "TRMM_LIS_SC.04.1_2010.260.73132"
</code></pre>
<p><strong>Using h5py:</strong></p>
<pre><code>f = h5py.File(hdffile,'w')
print(f)
</code></pre>
<p><strong>Outputs:</strong></p>
<pre><code>< HDF5 file "TRMM_LIS_SC.04.1_2010.260.73132" (mode r+) >
[]
</code></pre>
<p><strong>Using tables:</strong></p>
<pre><code>fi=tb.openFile(hdffile,'r')
print(fi)
</code></pre>
<p><strong>Outputs:</strong></p>
<pre><code>TRMM_LIS_SC.04.1_2010.260.73132 (File) ''
Last modif.: 'Wed Aug 10 18:41:44 2016'
Object Tree:
/ (RootGroup) ''
Closing remaining open files:TRMM_LIS_SC.04.1_2010.260.73132...done
</code></pre>
<p><strong>UPDATE</strong></p>
<pre><code>h5py.File(hdffile,'w') overwrote the file and emptied it.
</code></pre>
<p>Now my question is: how do I read an HDF version 4 file into Python, since h5py and tables both do not work?</p>
| 1 | 2016-08-10T18:50:08Z | 38,881,492 | <p>Try using pandas:</p>
<pre><code>import pandas as pd
f = pd.read_hdf(C:/path/to/file)
</code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_hdf.html" rel="nofollow">See Pandas HDF documentation here.</a></p>
<p>This should read in any hdf file as a dataframe you can then manipulate.</p>
| 0 | 2016-08-10T18:58:25Z | [
"python",
"pandas",
"h5py",
"hdf"
] |
Data in HDF file using Python missing | 38,881,326 | <p>I am trying to read in a hdf file but no groups show up. I have tried a couple different methods using tables and h5py but neither work in displaying the groups in the file. I checked and the file is 'Hierarchical Data Format (version 5) data' (See Update). The file information is <a href="https://ghrc.nsstc.nasa.gov/uso/ds_docs/lis/lis_dataset.html" rel="nofollow">here</a> for a reference.</p>
<p>Example data can be found <a href="https://ghrc.nsstc.nasa.gov/pub/lis/orbit_files/data/science/2010/0917/" rel="nofollow">here</a></p>
<pre><code>import h5py
import tables as tb
hdffile = "TRMM_LIS_SC.04.1_2010.260.73132"
</code></pre>
<p><strong>Using h5py:</strong></p>
<pre><code>f = h5py.File(hdffile,'w')
print(f)
</code></pre>
<p><strong>Outputs:</strong></p>
<pre><code>< HDF5 file "TRMM_LIS_SC.04.1_2010.260.73132" (mode r+) >
[]
</code></pre>
<p><strong>Using tables:</strong></p>
<pre><code>fi=tb.openFile(hdffile,'r')
print(fi)
</code></pre>
<p><strong>Outputs:</strong></p>
<pre><code>TRMM_LIS_SC.04.1_2010.260.73132 (File) ''
Last modif.: 'Wed Aug 10 18:41:44 2016'
Object Tree:
/ (RootGroup) ''
Closing remaining open files:TRMM_LIS_SC.04.1_2010.260.73132...done
</code></pre>
<p><strong>UPDATE</strong></p>
<pre><code>h5py.File(hdffile,'w') overwrote the file and emptied it.
</code></pre>
<p>Now my question is: how do I read an HDF version 4 file into Python, since h5py and tables both do not work?</p>
| 1 | 2016-08-10T18:50:08Z | 38,881,503 | <p>How big is the file? I think that doing <code>h5py.File(hdffile,'w')</code> overwrites it, so it's empty. Use <code>h5py.File(hdffile,'r')</code> to read.</p>
<p>I don't have enough karma to reply to @Luke H's answer, but reading it into pandas might not be a good idea. Pandas hdf5 uses pytables, which is an "opinionated" way of using hdf5. This means that it stores extra metadata (eg. the index). So I would only use pytables to read the file if it was made with pytables.</p>
| 4 | 2016-08-10T18:58:57Z | [
"python",
"pandas",
"h5py",
"hdf"
] |
Data in HDF file using Python missing | 38,881,326 | <p>I am trying to read in a hdf file but no groups show up. I have tried a couple different methods using tables and h5py but neither work in displaying the groups in the file. I checked and the file is 'Hierarchical Data Format (version 5) data' (See Update). The file information is <a href="https://ghrc.nsstc.nasa.gov/uso/ds_docs/lis/lis_dataset.html" rel="nofollow">here</a> for a reference.</p>
<p>Example data can be found <a href="https://ghrc.nsstc.nasa.gov/pub/lis/orbit_files/data/science/2010/0917/" rel="nofollow">here</a></p>
<pre><code>import h5py
import tables as tb
hdffile = "TRMM_LIS_SC.04.1_2010.260.73132"
</code></pre>
<p><strong>Using h5py:</strong></p>
<pre><code>f = h5py.File(hdffile,'w')
print(f)
</code></pre>
<p><strong>Outputs:</strong></p>
<pre><code>< HDF5 file "TRMM_LIS_SC.04.1_2010.260.73132" (mode r+) >
[]
</code></pre>
<p><strong>Using tables:</strong></p>
<pre><code>fi=tb.openFile(hdffile,'r')
print(fi)
</code></pre>
<p><strong>Outputs:</strong></p>
<pre><code>TRMM_LIS_SC.04.1_2010.260.73132 (File) ''
Last modif.: 'Wed Aug 10 18:41:44 2016'
Object Tree:
/ (RootGroup) ''
Closing remaining open files:TRMM_LIS_SC.04.1_2010.260.73132...done
</code></pre>
<p><strong>UPDATE</strong></p>
<pre><code>h5py.File(hdffile,'w') overwrote the file and emptied it.
</code></pre>
<p>Now my question is: how do I read an HDF version 4 file into Python, since h5py and tables both do not work?</p>
| 1 | 2016-08-10T18:50:08Z | 38,881,824 | <p><strong>UPDATE:</strong></p>
<p>I would recommend first <a href="https://www.hdfgroup.org/h4toh5/download.html" rel="nofollow">converting</a> your HDF version 4 files to HDF5 (.h5) files, as all modern libraries/modules work with HDF version 5...</p>
<p><strong>OLD answer:</strong></p>
<p>try it this way:</p>
<pre><code>store = pd.HDFStore(filename)
print(store)
</code></pre>
<p>this should print you details about the HDF file, including stored keys, lengths of stored DFs, etc.</p>
<p>Demo:</p>
<pre><code>In [18]: fn = r'C:\Temp\a.h5'
In [19]: store = pd.HDFStore(fn)
In [20]: print(store)
<class 'pandas.io.pytables.HDFStore'>
File path: C:\Temp\a.h5
/df_dc frame_table (typ->appendable,nrows->10,ncols->3,indexers->[index],dc->[a,b,c])
/df_no_dc frame_table (typ->appendable,nrows->10,ncols->3,indexers->[index])
</code></pre>
<p>now you can read dataframes using keys from the output above:</p>
<pre><code>In [21]: df = store.select('df_dc')
In [22]: df
Out[22]:
a b c
0 92 80 86
1 27 49 62
2 55 64 60
3 31 66 3
4 37 75 81
5 49 69 87
6 59 0 87
7 69 91 39
8 93 75 31
9 21 15 7
</code></pre>
| 1 | 2016-08-10T19:17:52Z | [
"python",
"pandas",
"h5py",
"hdf"
] |
How can I select entries from list in random order in python | 38,881,468 | <p>The question is simple. I have a list of, say, 10 entries and I am running a loop over it. What I want is to get each entry exactly once, but in random order.</p>
<p>What is the best and most pythonic way to do it?</p>
| 2 | 2016-08-10T18:57:08Z | 38,881,508 | <pre><code>import random
s = set(range(10))
while len(s) > 0:
    item = random.choice(list(s))  # random.choice needs a sequence, so convert the set
    s.remove(item)
    print(item)
</code></pre>
| -2 | 2016-08-10T18:59:13Z | [
"python",
"list"
] |
How can I select entries from list in random order in python | 38,881,468 | <p>The question is simple. I have a list of, say, 10 entries and I am running a loop over it. What I want is to get each entry exactly once, but in random order.</p>
<p>What is the best and most pythonic way to do it?</p>
| 2 | 2016-08-10T18:57:08Z | 38,881,536 | <p>You can use <a href="https://docs.python.org/2/library/random.html" rel="nofollow"><code>random.shuffle</code></a>:</p>
<pre><code>In [1]: import random
In [3]: a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
In [4]: random.shuffle(a)
In [5]: a
Out[5]: [3, 6, 9, 1, 8, 0, 4, 7, 5, 2]
</code></pre>
| 4 | 2016-08-10T19:00:53Z | [
"python",
"list"
] |
How can I select entries from list in random order in python | 38,881,468 | <p>The question is simple. I have a list of, say, 10 entries and I am running a loop over it. What I want is to get each entry exactly once, but in random order.</p>
<p>What is the best and most pythonic way to do it?</p>
| 2 | 2016-08-10T18:57:08Z | 38,881,544 | <p>Shuffle the list first by importing the <code>random</code> module and using the <code>shuffle</code> function:</p>
<pre><code>import random
x = ... # A list
random.shuffle(x)
</code></pre>
<p>Then loop over your list. Note that <code>shuffle</code> mutates the original list and does not return the shuffled version. </p>
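A quick sanity check of that in-place behavior (seeded only so the run is reproducible):

```python
import random

random.seed(0)
items = list(range(10))
result = random.shuffle(items)  # shuffles in place

print(items)
# shuffle returns None; the list itself is reordered but keeps every entry exactly once
```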
| 0 | 2016-08-10T19:01:07Z | [
"python",
"list"
] |
How can I select entries from list in random order in python | 38,881,468 | <p>The question is simple. I have a list of, say, 10 entries and I am running a loop over it. What I want is to get each entry exactly once, but in random order.</p>
<p>What is the best and most pythonic way to do it?</p>
| 2 | 2016-08-10T18:57:08Z | 38,881,564 | <p>you could use <a href="https://docs.python.org/3.5/library/random.html#random.shuffle" rel="nofollow"><code>random.shuffle()</code></a>:</p>
<pre><code>import random
original = range(10)
# Make a copy of the original list, as shuffling will be in-place
shuffled = list(original)
random.shuffle(shuffled)
</code></pre>
<p>Valid remark from Jean-François Fabre: if you were to pass the <code>original</code> variable directly to <code>random.shuffle</code>, Python would raise an error stating <code>'range' object does not support item assignment</code>, because in Python 3 <code>range</code> returns an immutable sequence rather than a list.</p>
<p>To solve this, simply replace the assignment with <code>list(range(10))</code>.</p>
| 0 | 2016-08-10T19:01:59Z | [
"python",
"list"
] |
How can I select entries from list in random order in python | 38,881,468 | <p>The question is simple. I have a list of, say, 10 entries and I am running a loop over it. What I want is to get each entry exactly once, but in random order.</p>
<p>What is the best and most pythonic way to do it?</p>
| 2 | 2016-08-10T18:57:08Z | 38,881,576 | <p>You could use shuffle:</p>
<pre><code>import random
random.shuffle(yourlist)
# loop as you would normally
for item in yourlist:
.....
</code></pre>
| 0 | 2016-08-10T19:02:32Z | [
"python",
"list"
] |
How can I select entries from list in random order in python | 38,881,468 | <p>The question is simple. I have a list of, say, 10 entries and I am running a loop over it. What I want is to get each entry exactly once, but in random order.</p>
<p>What is the best and most pythonic way to do it?</p>
| 2 | 2016-08-10T18:57:08Z | 38,881,603 | <p>You can use <a href="https://docs.python.org/3/library/random.html#random.sample" rel="nofollow"><code>random.sample</code></a>, it returns random elements preventing duplicates:</p>
<pre><code>>>> import random
>>> data = range(10)
>>> print(random.sample(data, len(data)))
[2, 4, 8, 7, 0, 5, 6, 3, 1, 9]
</code></pre>
<p>The original list remains unchanged.</p>
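A small sketch confirming the non-mutating behavior:

```python
import random

data = list(range(10))
perm = random.sample(data, len(data))  # a full-length sample is a random permutation

print(perm)
# data itself is untouched; perm holds the same entries in a (possibly) new order
```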
| 5 | 2016-08-10T19:04:23Z | [
"python",
"list"
] |
Django raw sql format tablename | 38,881,482 | <p>I am trying to interpolate the tablename into a raw sql but it's interpolating a badly formatted string so the SQL query fails. I can't find a proper way of interpolating the string into the SQL query properyly:</p>
<pre><code>from django.db import connection
cursor = connection.cursor()
cursor.execute("SELECT * from %s;", ['product'])
</code></pre>
<p>Throws:</p>
<pre><code>django.db.utils.ProgrammingError: syntax error at or near "'product'"
LINE 1: SELECT * from 'product';
</code></pre>
| 1 | 2016-08-10T18:57:57Z | 38,881,521 | <p>You can't pass table or column names as parameter arguments. Instead do something like:</p>
<pre><code>qry = "SELECT * from %s;" % 'product'
cursor.execute(qry)
</code></pre>
<p>While being mindful of the possibility of <a href="https://en.wikipedia.org/wiki/SQL_injection" rel="nofollow">SQL-injection</a> attack.</p>
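Since the table name can't be sent as a bound parameter, one common mitigation is a whitelist check before interpolating. A sketch (the table names here are hypothetical):

```python
ALLOWED_TABLES = {"product", "order_item", "customer"}  # hypothetical whitelist

def build_query(table_name):
    # refuse anything not explicitly whitelisted before string-formatting it into SQL
    if table_name not in ALLOWED_TABLES:
        raise ValueError("unexpected table name: %r" % table_name)
    return "SELECT * FROM %s;" % table_name

print(build_query("product"))
```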
| 4 | 2016-08-10T18:59:49Z | [
"python",
"django",
"postgresql"
] |
pyspark group and split dataframe | 38,881,644 | <p>I am trying to filter and then split a dataset into two separate files.</p>
<p>Dataset: test.txt (Schema: uid, prod, score)</p>
<pre><code>1 XYZ 2.0
2 ABC 0.5
1 PQR 1.0
2 XYZ 2.1
3 PQR 0.5
1 ABC 0.5
</code></pre>
<p>First, I want to filter any uid having less than or equal to 1 product. I have already achieved that by the following code.</p>
<pre><code>from pyspark.sql.types import *
from pyspark.sql.functions import *
rdd = sc.textFile('test.txt').map(lambda row: row.split('\t'))
schema = StructType([
StructField('uid', IntegerType(), True),
StructField('prod', StringType(), True),
StructField('score', FloatType(), True)])
df = rdd.toDF([f.name for f in schema.fields])
filtered = df.groupby('uid').count().withColumnRenamed("count", "n").filter("n >= 2")
all_data = df.join(filtered, df.uid == filtered.uid , 'inner').drop(filtered.uid).drop(filtered.n)
all_data.show()
</code></pre>
<p>This produces the following output:</p>
<pre><code>+----+-----+---+
|prod|score|uid|
+----+-----+---+
| XYZ| 2.0| 1|
| PQR| 1.0| 1|
| ABC| 0.5| 1|
| ABC| 0.5| 2|
| XYZ| 2.1| 2|
+----+-----+---+
</code></pre>
<p>I now need to create 2 files out of the above dataframe. The problem I am facing is what the best way is to take one row for each product (it can be any row) and put it in one file (val.txt), with the rest of the rows going in a different file (train.txt).</p>
<p>Expected output (train.txt)</p>
<pre><code>1 XYZ 2.0
1 PQR 1.0
2 ABC 0.5
</code></pre>
<p>Expected output (val.txt)</p>
<pre><code>1 ABC 0.5
2 XYZ 2.1
</code></pre>
<p>Thanks in advance !</p>
| 0 | 2016-08-10T19:06:31Z | 38,883,244 | <p>I think the key issue here is that you don't have a primary key for your data.</p>
<pre><code>from pyspark.sql.functions import col, monotonically_increasing_id

all_data = all_data.withColumn(
    'id',
    monotonically_increasing_id()
)
train = all_data.dropDuplicates(['prod'])
# could OOM if your dataset is too big
# consider a Bloom filter if so
all_id = {row['id'] for row in all_data.select('id').collect()}
train_id = {row['id'] for row in train.select('id').collect()}
val_id = all_id - train_id
val = all_data.where(col('id').isin(list(val_id)))
</code></pre>
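The intended split can be sketched in plain Python (no Spark needed) on the filtered data from the question; note that, unlike Spark's dropDuplicates, this keeps the first occurrence of each product deterministically:

```python
rows = [
    (1, 'XYZ', 2.0),
    (1, 'PQR', 1.0),
    (1, 'ABC', 0.5),
    (2, 'ABC', 0.5),
    (2, 'XYZ', 2.1),
]

seen = set()
train, val = [], []
for uid, prod, score in rows:
    # first row seen for each product goes to one split, every later one to the other
    if prod not in seen:
        seen.add(prod)
        train.append((uid, prod, score))
    else:
        val.append((uid, prod, score))

print(train)
print(val)
```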
| 0 | 2016-08-10T20:47:37Z | [
"python",
"pyspark",
"spark-dataframe"
] |
Multiple insertion of one value in sqlalchemy statement to pandas | 38,881,676 | <p>I have constructed a sql clause where I reference the same table as <code>a</code> and <code>b</code> to compare the two geometries as a postgis command. </p>
<p>I would like to pass a value into the sql statement using the <code>%s</code> operator and read the result into a pandas dataframe using <code>read_sql</code>'s <code>params</code> kwarg. Currently my code will allow for one value to be passed to one <code>%s</code>, but I'm looking for multiple insertions of the same list of values.</p>
<p>I'm connecting to a postgresql database using psycopg2.</p>
<p>Simplified code is below</p>
<pre><code>sql = """
SELECT
st_distance(a.the_geom, b.the_geom, true) AS dist
FROM
(SELECT
table.*
FROM table
WHERE id in %s) AS a,
(SELECT
table.*
FROM table
WHERE id in %s) AS b
WHERE a.nid <> b.nid """
sampList = (14070,11184)
df = pd.read_sql(sql, con=conn, params = [sampList])
</code></pre>
<p>Basically I'm looking to replace both <code>%s</code> placeholders with the sampList value. The code as written will only replace the first value, indicating <code>': list index out of range</code>. If I adjust to having one <code>%s</code> and replace the second IN statement with literal numbers, the code runs, but ultimately I would like a way to repeat those values.</p>
| 0 | 2016-08-10T19:08:27Z | 38,882,015 | <ul>
<li><p>You don't need the subqueries, just join the table with itself:</p>

<p><code>SELECT a.*, b.* -- or whatever
, st_distance(a.the_geom, b.the_geom, true) AS dist
FROM ztable a
JOIN ztable b ON a.nid < b.nid
WHERE a.id IN (%s)
AND b.id IN (%s)
;</code></p></li>
<li><p>avoid repetition by using a CTE (this may be non-optimal, performance-wise)</p>
<p><code>WITH zt AS (
SELECT * FROM ztable
WHERE id IN (%s)
)
SELECT a.*, b.* -- or whatever
, st_distance(a.the_geom, b.the_geom, true) AS dist
FROM zt a
JOIN zt b ON a.nid < b.nid
;</code></p></li>
<li><p>Performance-wise, I would just stick to the first version, and supply the list-argument twice. (or refer to it twice, using a FORMAT() construct)</p></li>
</ul>
| 1 | 2016-08-10T19:31:46Z | [
"python",
"sql",
"postgresql",
"pandas",
"sqlalchemy"
] |
Multiple insertion of one value in sqlalchemy statement to pandas | 38,881,676 | <p>I have constructed a sql clause where I reference the same table as <code>a</code> and <code>b</code> to compare the two geometries as a postgis command. </p>
<p>I would like to pass a value into the sql statement using the <code>%s</code> operator and read the result into a pandas dataframe using <code>read_sql</code>'s <code>params</code> kwarg. Currently my code will allow for one value to be passed to one <code>%s</code>, but I'm looking for multiple insertions of the same list of values.</p>
<p>I'm connecting to a postgresql database using psycopg2.</p>
<p>Simplified code is below</p>
<pre><code>sql = """
SELECT
st_distance(a.the_geom, b.the_geom, true) AS dist
FROM
(SELECT
table.*
FROM table
WHERE id in %s) AS a,
(SELECT
table.*
FROM table
WHERE id in %s) AS b
WHERE a.nid <> b.nid """
sampList = (14070,11184)
df = pd.read_sql(sql, con=conn, params = [sampList])
</code></pre>
<p>Basically I'm looking to replace both <code>%s</code> placeholders with the sampList value. The code as written will only replace the first value, indicating <code>': list index out of range</code>. If I adjust to having one <code>%s</code> and replace the second IN statement with literal numbers, the code runs, but ultimately I would like a way to repeat those values.</p>
| 0 | 2016-08-10T19:08:27Z | 38,882,461 | <p>First of all, I would recommend using the <a href="http://stackoverflow.com/a/38882015/5741205">updated SQL from @wildplasser</a>; it's a much better and more efficient way to do that.</p>
<p>now you can do the following:</p>
<pre><code>sql_ = """\
WITH zt AS (
SELECT * FROM ztable
WHERE id IN ({})
)
SELECT a.*, b.* -- or whatever
, st_distance(a.the_geom, b.the_geom, true) AS dist
FROM zt a
JOIN zt b ON a.nid < b.nid
"""
sampList = (14070,11184)
sql = sql_.format(','.join(['%s'] * len(sampList)))  # psycopg2's paramstyle is %s, not ?
df = pd.read_sql(sql, con=conn, params=sampList)
</code></pre>
<p>dynamically generated SQL with parameters (AKA: prepared statements, bind variables, etc.):</p>
<pre><code>In [27]: print(sql)
WITH zt AS (
        SELECT * FROM ztable
        WHERE id IN (%s,%s)
)
SELECT a.*, b.* -- or whatever
, st_distance(a.the_geom, b.the_geom, true) AS dist
FROM zt a
JOIN zt b ON a.nid < b.nid
</code></pre>
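The placeholder-building step can be checked on its own; note the placeholder token is driver-specific (psycopg2 uses %s, sqlite3 would use ? instead), so this is a sketch for the psycopg2 case:

```python
samp_list = (14070, 11184)

# one placeholder per value; psycopg2's paramstyle is 'pyformat', i.e. %s tokens
placeholders = ','.join(['%s'] * len(samp_list))
sql = "SELECT * FROM ztable WHERE id IN ({})".format(placeholders)

print(sql)
# the values themselves would then be passed separately, e.g. cursor.execute(sql, samp_list)
```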
| 0 | 2016-08-10T19:58:52Z | [
"python",
"sql",
"postgresql",
"pandas",
"sqlalchemy"
] |
Group pandas dataframe by a nested dictionary key | 38,881,679 | <p>I have a pandas dataframe where one of the columns is dictionary type. This is an example dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a': [1,2,3],
'b': [4,5,6],
'version': [{'major': 7, 'minor':1},
{'major':8, 'minor': 5},
{'major':7, 'minor':2}] })
</code></pre>
<p>df:</p>
<pre><code> a b version
0 1 4 {'minor': 1, 'major': 7}
1 2 5 {'minor': 5, 'major': 8}
2 3 6 {'minor': 2, 'major': 7}
</code></pre>
<p>I am looking for a way to group the dataframe by one of that dictionary key; in this case to group the <strong>df</strong> dataframe by the <strong>major</strong> key in <strong>version</strong> label. </p>
<p>I have tried a few different things, from passing the dictionary key to the dataframe groupby function, <code>df.groupby(['version']['major'])</code>, which doesn't work since <strong>major</strong> is not part of the dataframe's labels, to assigning <strong>version</strong> to the dataframe index, but nothing works so far. I'm also trying to flatten the dictionaries into additional columns in the dataframe itself, but this seems to have its own issues.</p>
<p>Any idea?</p>
<p>P.S. Sorry about formatting, it's my first stackoverflow question.</p>
| 3 | 2016-08-10T19:08:41Z | 38,881,727 | <p><strong><em>Option 1</em></strong></p>
<pre><code>df.groupby(df.version.apply(lambda x: x['major'])).size()
version
7 2
8 1
dtype: int64
</code></pre>
<hr>
<pre><code>df.groupby(df.version.apply(lambda x: x['major']))[['a', 'b']].sum()
</code></pre>
<p><a href="http://i.stack.imgur.com/SNnuY.png" rel="nofollow"><img src="http://i.stack.imgur.com/SNnuY.png" alt="enter image description here"></a></p>
<p><strong><em>Option 2</em></strong></p>
<pre><code>df.groupby(df.version.apply(pd.Series).major).size()
major
7 2
8 1
dtype: int64
</code></pre>
<hr>
<pre><code>df.groupby(df.version.apply(pd.Series).major)[['a', 'b']].sum()
</code></pre>
<p><a href="http://i.stack.imgur.com/eOpaW.png" rel="nofollow"><img src="http://i.stack.imgur.com/eOpaW.png" alt="enter image description here"></a></p>
| 3 | 2016-08-10T19:11:53Z | [
"python",
"pandas",
"dictionary",
"dataframe"
] |
Group pandas dataframe by a nested dictionary key | 38,881,679 | <p>I have a pandas dataframe where one of the columns is dictionary type. This is an example dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a': [1,2,3],
'b': [4,5,6],
'version': [{'major': 7, 'minor':1},
{'major':8, 'minor': 5},
{'major':7, 'minor':2}] })
</code></pre>
<p>df:</p>
<pre><code> a b version
0 1 4 {'minor': 1, 'major': 7}
1 2 5 {'minor': 5, 'major': 8}
2 3 6 {'minor': 2, 'major': 7}
</code></pre>
<p>I am looking for a way to group the dataframe by one of that dictionary key; in this case to group the <strong>df</strong> dataframe by the <strong>major</strong> key in <strong>version</strong> label. </p>
<p>I have tried a few different things, from passing the dictionary key to the dataframe groupby function, <code>df.groupby(['version']['major'])</code>, which doesn't work since <strong>major</strong> is not part of the dataframe's labels, to assigning <strong>version</strong> to the dataframe index, but nothing works so far. I'm also trying to flatten the dictionaries into additional columns in the dataframe itself, but this seems to have its own issues.</p>
<p>Any idea?</p>
<p>P.S. Sorry about formatting, it's my first stackoverflow question.</p>
| 3 | 2016-08-10T19:08:41Z | 38,881,741 | <p>you can do it this way:</p>
<pre><code>In [15]: df.assign(major=df.version.apply(pd.Series).major).groupby('major').sum()
Out[15]:
a b
major
7 4 10
8 2 5
</code></pre>
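A runnable sketch with the question's data, assuming pandas is installed, that checks the per-major sums:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3],
                   'b': [4, 5, 6],
                   'version': [{'major': 7, 'minor': 1},
                               {'major': 8, 'minor': 5},
                               {'major': 7, 'minor': 2}]})

# pull the 'major' key out of each dict and group on the resulting Series
out = df.groupby(df['version'].apply(lambda v: v['major']))[['a', 'b']].sum()
print(out)
```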
| 2 | 2016-08-10T19:13:00Z | [
"python",
"pandas",
"dictionary",
"dataframe"
] |
re.sub() documentation misunderstanding | 38,881,732 | <p>I've just started learning regular expressions and the documentation for <code>re.sub()</code> states:</p>
<blockquote>
<p><strong>Changed in version 3.5</strong>: Unmatched groups are replaced with an empty
string.</p>
<p><strong>Deprecated since version 3.5, will be removed in version 3.6</strong>: Unknown
escapes consist of '\' and ASCII letter now raise a deprecation
warning and will be forbidden in Python 3.6.</p>
</blockquote>
<p>Is re.sub() deprecated? What should I use then?</p>
| -1 | 2016-08-10T19:12:19Z | 38,881,783 | <p>The two entries are not related, and <a href="https://docs.python.org/3/library/re.html#re.sub" rel="nofollow"><strong><code>re.sub</code></strong></a> will <em>not</em> be deprecated.</p>
<p>In Python versions earlier than 3.5, <code>re.sub</code> failed if a backreference was used to a capturing group that did not participate in the match. See the <a href="http://stackoverflow.com/a/35516425/3832970"><strong>Empty string instead of unmatched group error</strong></a> SO question.</p>
<p>An <a href="http://ideone.com/UVzXOQ" rel="nofollow">example</a> where the failure occurred:</p>
<pre><code>import re
old = 'regexregex'
new = re.sub(r'regex(group)?regex', r'something\1something', old)
print(new) # => fail as there is no "group" in between "regex" and "regex" in "regexregex"
# and Group 1 was not initialized with an empty string, i.e. remains null
</code></pre>
<p>As for the second one, it only says that there will be a warning (and later forbidden) if you use an <em>unknown for a regex engine</em> literal backslash followed with an ASCII character. The backslash was just ignored in them before, in Python 2.x through 3.5, <code>print(re.sub(r'\j', '', 'joy'))</code> <a href="http://ideone.com/gAF65v" rel="nofollow">prints <code>oy</code></a>. So, these will be forbidden in Python 3.6.</p>
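Both points can be checked on a 3.5+ interpreter; there, the unmatched group is substituted as an empty string instead of failing:

```python
import re

# group 1 ('group') does not participate in this match: on 3.5+ it becomes ''
print(re.sub(r'regex(group)?regex', r'something\1something', 'regexregex'))
# -> somethingsomething

# when the group does match, the backreference expands as usual
print(re.sub(r'regex(group)?regex', r'something\1something', 'regexgroupregex'))
# -> somethinggroupsomething
```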
| 2 | 2016-08-10T19:15:40Z | [
"python",
"regex",
"python-3.x",
"python-3.5"
] |
re.sub() documentation misunderstanding | 38,881,732 | <p>I've just started learning regular expressions and the documentation for <code>re.sub()</code> states:</p>
<blockquote>
<p><strong>Changed in version 3.5</strong>: Unmatched groups are replaced with an empty
string.</p>
<p><strong>Deprecated since version 3.5, will be removed in version 3.6</strong>: Unknown
escapes consist of '\' and ASCII letter now raise a deprecation
warning and will be forbidden in Python 3.6.</p>
</blockquote>
<p>Is re.sub() deprecated? What should I use then?</p>
| -1 | 2016-08-10T19:12:19Z | 38,881,823 | <p>You misunderstand the documentation. The <code>re.sub()</code> function is <strong>not deprecated</strong>. The deprecation warning concerns <em>specific syntax</em>.</p>
<p>Earlier in the <a href="https://docs.python.org/3/library/re.html#re.sub" rel="nofollow"><code>re.sub()</code> documentation</a> you'll find this:</p>
<blockquote>
<p>Unknown escapes such as <code>\&</code> are left alone.</p>
</blockquote>
<p>If you used and <em>unknown</em> escape with an <em>ASCII letter</em> the escape will no longer be ignored, you'll get a warning instead. This applies both to <code>re.sub()</code> replacement patterns <em>and</em> to the regular expression patterns. The same warning is placed in the section on regex pattern syntax.</p>
<p>The <em>Changed in version 3.5</em> line also concerns how <code>re.sub()</code> works. Rather than raise an exception when there is no matching group for a <code>\number</code> backreference, an empty string is inserted at that location.</p>
| 3 | 2016-08-10T19:17:50Z | [
"python",
"regex",
"python-3.x",
"python-3.5"
] |
numpy.fft.fft can not have 4 arguments? | 38,881,797 | <p>I tried to do FFT with numpy using this:</p>
<pre><code> sp=np.fft.fft(np.exp(-t), 1000, -1, "ortho")
</code></pre>
<p>and it returns:</p>
<pre><code> TypeError: fft() takes at most 3 arguments (4 given)
</code></pre>
<p>But in the manual, numpy.fft.fft() is defined as: </p>
<pre><code> numpy.fft.fft(a, n=None, axis=-1, norm=None)
</code></pre>
<p>n, axis, and norm are optional, and norm can be set to "ortho".
Why can't I use what I wrote?
Thanks.</p>
| 1 | 2016-08-10T19:16:39Z | 38,881,850 | <p>What version of numpy are you using?</p>
<p>I ask because </p>
<p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fft.html#numpy.fft.fft" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fft.html#numpy.fft.fft</a> </p>
<p>refers to norm as "New in version 1.10.0.", and if you were using an older version, the error message would be consistent with the behavior.</p>
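One way to confirm, assuming a NumPy new enough (>= 1.10) for the norm keyword:

```python
import numpy as np

print(np.__version__)  # norm= was added in 1.10.0

# with norm="ortho" the transform is scaled by 1/sqrt(n):
# the fft of four ones is [4, 0, 0, 0], so the ortho version is [2, 0, 0, 0]
sp = np.fft.fft(np.ones(4), 4, -1, norm="ortho")
print(sp)
```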
| 0 | 2016-08-10T19:20:07Z | [
"python",
"numpy",
"fft"
] |
How to check in python that at least one of the default parameters of the function specified | 38,881,888 | <p>What is the best practice in python to check if at least one of the default parameters of the function is specified?</p>
<p>Let's suppose we have some function:</p>
<pre><code>def some_function(arg_a=None, arg_b=None, arg_c=False)
</code></pre>
<p>with some default parameters. In my case, I need to check if either <code>arg_a</code> or <code>arg_b</code> is specified. So I thought of implementing something like this:</p>
<pre><code>def some_function(arg_a=None, arg_b=None, arg_c=False):
...
if arg_a is not None:
...
elif arg_b is not None:
...
else:
raise ValueError('Expected either arg_a or arg_b args')
...
...
</code></pre>
<p>So, what is a more pythonic way to implement this kind of functionality?</p>
| 4 | 2016-08-10T19:22:48Z | 38,881,914 | <p>Depends on what you expect as values for <code>arg_a</code> and <code>arg_b</code>, but this is generally sufficient.</p>
<pre><code>if not arg_a and not arg_b:
raise ValueError(...)
</code></pre>
<p>Assumes that <code>arg_a</code> and <code>arg_b</code> are not both booleans and cannot have zeros, empty strings/lists/tuples, etc. as parameters.</p>
<p>Depending on your needs, you can be more precise if you need to distinguish between None and 'falsies' such as False, 0, "", [], {}, (), etc.:</p>
<pre><code>if arg_a is None and arg_b is None:
raise ValueError(...)
</code></pre>
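A runnable sketch of the is None variant (the function body here is made up just to have something to return):

```python
def some_function(arg_a=None, arg_b=None, arg_c=False):
    if arg_a is None and arg_b is None:
        raise ValueError('Expected either arg_a or arg_b args')
    # prefer arg_a when both are given
    return arg_a if arg_a is not None else arg_b

# falsy-but-specified values such as 0 or "" still count as "given"
print(some_function(arg_a=0))
print(some_function(arg_b=""))
```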
| 3 | 2016-08-10T19:24:46Z | [
"python",
"function",
"parameter-passing"
] |
How to check in python that at least one of the default parameters of the function specified | 38,881,888 | <p>What is the best practice in python to check if at least one of the default parameters of the function is specified?</p>
<p>Let's suppose we have some function:</p>
<pre><code>def some_function(arg_a=None, arg_b=None, arg_c=False)
</code></pre>
<p>with some default parameters. In my case, I need to check if either <code>arg_a</code> or <code>arg_b</code> is specified. So I thought of implementing something like this:</p>
<pre><code>def some_function(arg_a=None, arg_b=None, arg_c=False):
...
if arg_a is not None:
...
elif arg_b is not None:
...
else:
raise ValueError('Expected either arg_a or arg_b args')
...
...
</code></pre>
<p>So, what is a more pythonic way to implement this kind of functionality?</p>
| 4 | 2016-08-10T19:22:48Z | 38,882,013 | <p>You could use <code>all</code> to check if they all equal <code>None</code> and raise the <code>ValueError</code>:</p>
<pre><code>if all(v is None for v in {arg_a, arg_b}):
raise ValueError('Expected either arg_a or arg_b args')
</code></pre>
<p>this gets rid of those <code>if-elif</code> clauses and groups all checks in the same place:</p>
<pre><code>f(arg_a=0) # ok
f(arg_b=0) # ok
f() # Value Error
</code></pre>
<p>Alternatively, with <code>any()</code>:</p>
<pre><code>if not any(v is not None for v in {arg_a, arg_b}):
raise ValueError('Expected either arg_a or arg_b args')
</code></pre>
<p>but this is definitely more obfuscated. </p>
<p>In the end, it really depends on what the interpretation of pythonic actually is.</p>
| 4 | 2016-08-10T19:31:34Z | [
"python",
"function",
"parameter-passing"
] |