| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
How to match every word in the list having single sentence using python | 39,226,172 | <p>How do I match the case below in Python? I want each and every word in the sentence to be matched against the list. </p>
<pre><code>l1=['there is a list of contents available in the fields']
>>> 'there' in l1
False
>>> 'there is a list of contents available in the fields' in l1
True
</code></pre>
| -1 | 2016-08-30T11:23:21Z | 39,226,568 | <p>If you just want to know whether any of the strings in the list contains a word, no matter which string it is, you can do this:</p>
<pre><code>if any('there' in element for element in li):
pass
</code></pre>
<p>Now, if you want to filter the ones which match the string, you can simply:</p>
<pre><code>li = filter(lambda x: 'there' in x, li)
</code></pre>
<p>Or in Python 3:</p>
<pre><code>li = list(filter(lambda x: 'there' in x, li))
</code></pre>
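<p>If you need whole-word matches rather than substring matches (so that a fragment like 'her' does not match inside 'there'), splitting each element into words first works. This is a sketch of my own, not part of the original answer:</p>

```python
l1 = ['there is a list of contents available in the fields']

# split each sentence into words and test for exact word membership
word_found = any('there' in element.split() for element in l1)

# a bare substring test would match 'her' inside 'there'; whole words do not
fragment_found = any('her' in element.split() for element in l1)
```

<p>Here <code>word_found</code> is True while <code>fragment_found</code> is False.</p>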
| 0 | 2016-08-30T11:41:51Z | [
"python"
] |
replace xml child element text with value from a list python | 39,226,215 | <p>I want to replace the text element of a child with a value held in a list. </p>
<pre><code><weighting name="weighting">
<aWeight>false</aWeight>
<cWeight>true</cWeight>
</weighting>
</code></pre>
<p>I am trying to change the text value for aWeight to true. I have tried to do it with this code.</p>
<pre><code>elems = dom.findall('aWeight')
for elem in elems:
if user_settings.new_settings[5] == 'true':
'<aWeight>'.text = 'true'
dom.write('output.xml')
</code></pre>
<p>It's writing the file but the value is still staying at false. Does anyone have any suggestions?</p>
| 0 | 2016-08-30T11:24:52Z | 39,226,592 | <p>As your code stands, it's not accessing the <code>elem</code> variable to change the node value; you need to tell the operation "where" it needs to operate.</p>
<pre><code>import xml.etree.ElementTree as ET
elems = ET.fromstring('<weighting><aWeight>false</aWeight><cWeight>true</cWeight></weighting>')
test_condition = ['false', 'not set', 'bananas']
elems2 = elems.findall("aWeight")
for elem in elems2:
print elem.text
if elem.text in test_condition:
elem.text = 'true'
print elem.text
print ET.tostring(elems)
</code></pre>
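<p>For completeness, here is the same idea as a self-contained Python 3 sketch (with <code>print()</code> as a function, and the modified tree serialized back to a string):</p>

```python
import xml.etree.ElementTree as ET

root = ET.fromstring(
    '<weighting name="weighting">'
    '<aWeight>false</aWeight>'
    '<cWeight>true</cWeight>'
    '</weighting>'
)

# operate on the element object itself, not on a string literal
for elem in root.findall('aWeight'):
    elem.text = 'true'

result = ET.tostring(root, encoding='unicode')
print(result)
```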
| 0 | 2016-08-30T11:43:02Z | [
"python",
"xml",
"list"
] |
Python - kivy issues | 39,226,279 | <p>I'm new to Kivy and Python.
I've tried to fix this problem on my own, but I keep running into the same error.
I've tried reinstalling and updating components, but with no different result.</p>
<p>I'm using Linux Mint. Here's the problem I get when I try to run the program under Python 2.x (using <em>python '/home/user/Desktop/first_kivy.py'</em> in the terminal):</p>
<pre><code> Traceback (most recent call last):
File "/home/kofi/Desktop/Python/pygame_ostatni.py", line 1, in <module>
from kivy.app import App
File "/usr/lib/python2.7/dist-packages/kivy/__init__.py", line 38, in <module>
from kivy.logger import Logger, LOG_LEVELS
File "/usr/lib/python2.7/dist-packages/kivy/logger.py", line 56, in <module>
import logging
File "/home/kofi/Desktop/Python/logging.py", line 3, in <module>
logging.warning('Something went wrong.')
AttributeError: 'module' object has no attribute 'warning'
</code></pre>
<p>Here is what I get if I run this under Python 3.x (using <em>python3 '/home/user/Desktop/first_kivy.py'</em> in the terminal):</p>
<pre><code>Traceback (most recent call last):
File "/home/kofi/Desktop/Python/pygame_ostatni.py", line 1, in <module>
from kivy.app import App
File "/usr/lib/python3/dist-packages/kivy/__init__.py", line 38, in <module>
from kivy.logger import Logger, LOG_LEVELS
File "/usr/lib/python3/dist-packages/kivy/logger.py", line 56, in <module>
import logging
File "/home/kofi/Desktop/Python/logging.py", line 3, in <module>
logging.warning('Something went wrong.')
AttributeError: module 'logging' has no attribute 'warning'
</code></pre>
<p>The errors are the same, which is why I'm so confused.
Here is the program:</p>
<pre><code>from kivy.app import App
from kivy.uix.button import Button
class TestApp(App):
def build(self):
return Button(text='Hello World')
TestApp().run()
</code></pre>
<p>Thank you for any answer. </p>
| 0 | 2016-08-30T11:27:40Z | 39,230,072 | <p>First of all, I suggest working in a virtual environment, <a href="https://virtualenvwrapper.readthedocs.io/en/latest/" rel="nofollow">https://virtualenvwrapper.readthedocs.io/en/latest/</a>
and make sure you have all the requirements installed. Try following this link
<a href="https://kivy.org/docs/installation/installation-linux.html" rel="nofollow">https://kivy.org/docs/installation/installation-linux.html</a>
from scratch because your error doesn't explain much about the issue.</p>
| 0 | 2016-08-30T14:20:16Z | [
"python",
"linux",
"module",
"warnings",
"kivy"
] |
Python - kivy issues | 39,226,279 | <p>I'm new to Kivy and Python.
I've tried to fix this problem on my own, but I keep running into the same error.
I've tried reinstalling and updating components, but with no different result.</p>
<p>I'm using Linux Mint. Here's the problem I get when I try to run the program under Python 2.x (using <em>python '/home/user/Desktop/first_kivy.py'</em> in the terminal):</p>
<pre><code> Traceback (most recent call last):
File "/home/kofi/Desktop/Python/pygame_ostatni.py", line 1, in <module>
from kivy.app import App
File "/usr/lib/python2.7/dist-packages/kivy/__init__.py", line 38, in <module>
from kivy.logger import Logger, LOG_LEVELS
File "/usr/lib/python2.7/dist-packages/kivy/logger.py", line 56, in <module>
import logging
File "/home/kofi/Desktop/Python/logging.py", line 3, in <module>
logging.warning('Something went wrong.')
AttributeError: 'module' object has no attribute 'warning'
</code></pre>
<p>Here is what I get if I run this under Python 3.x (using <em>python3 '/home/user/Desktop/first_kivy.py'</em> in the terminal):</p>
<pre><code>Traceback (most recent call last):
File "/home/kofi/Desktop/Python/pygame_ostatni.py", line 1, in <module>
from kivy.app import App
File "/usr/lib/python3/dist-packages/kivy/__init__.py", line 38, in <module>
from kivy.logger import Logger, LOG_LEVELS
File "/usr/lib/python3/dist-packages/kivy/logger.py", line 56, in <module>
import logging
File "/home/kofi/Desktop/Python/logging.py", line 3, in <module>
logging.warning('Something went wrong.')
AttributeError: module 'logging' has no attribute 'warning'
</code></pre>
<p>The errors are the same, which is why I'm so confused.
Here is the program:</p>
<pre><code>from kivy.app import App
from kivy.uix.button import Button
class TestApp(App):
def build(self):
return Button(text='Hello World')
TestApp().run()
</code></pre>
<p>Thank you for any answer. </p>
| 0 | 2016-08-30T11:27:40Z | 39,242,925 | <p>Thank you for the answer.
Yes, I'm using VMware Workstation.
I installed all the requirements, but the result is the same. :(
I also tried creating a new VM (Ubuntu 16.x) and running the same program.
In that VM I don't have the problem; everything works fine.</p>
<p>Is it possible that Linux Mint Cinnamon does not support Python Kivy?</p>
| 0 | 2016-08-31T07:08:28Z | [
"python",
"linux",
"module",
"warnings",
"kivy"
] |
python of xlwings, I don't understand one of rule | 39,226,421 | <p>I understand <code>rng[0, 0]</code> and <code>rng[1]</code> clearly, but why does <code>rng[:, 3:]</code> slice to <code>$D$1:$D$5</code>? And why is <code>rng[1:3, 1:3]</code> <code>$B$2:$C$3</code>? I cannot understand the rule of the slicing. </p>
<pre><code>Range indexing/slicing
Range objects support indexing and slicing, a few examples:
rng = xw.Book().sheets[0].range('A1:D5')
</code></pre>
| 1 | 2016-08-30T11:33:55Z | 39,227,046 | <p>I'll give it a go. In square brackets, indexing starts at <code>0</code>*. So for a 1-based indexing system, consider [1:3, 1:3] as (2:4, 2:4). Also bear in mind that the value after <code>:</code> is not included, so inclusively (2:4, 2:4) is (2:3, 2:3). The second Excel column is B, the third C, and the second Excel row is 2 and the third 3. Hence the range is B2:C3.</p>
<p>IMO a horrible choice of example! </p>
<p>Given a range A1:D5, slicing with rng[:, 3:] means all rows and fourth column to end column, hence D1:D5.</p>
<p>Taking just the column element [1:3] from the same range (A1:D5): the slice starts at (and includes) the second index element (0 first, 1 second), i.e. <code>B</code>, and continues to immediately before the fourth index element (A, B, C, <code>D</code>). Hence B:C. </p>
<p><a href="http://i.stack.imgur.com/dGR6Z.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/dGR6Z.jpg" alt="SO39226421 example"></a></p>
<p><code>*</code> For <em>why</em> start at <code>0</code> there are details <a href="http://stackoverflow.com/questions/7320686/why-does-the-indexing-start-with-zero-in-c">here</a>.</p>
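<p>The same 0-based, end-exclusive slicing can be illustrated with plain Python lists standing in for the columns and rows of A1:D5 (a demonstration of my own; no Excel required):</p>

```python
# columns A..D and rows 1..5 of the range A1:D5
columns = ['A', 'B', 'C', 'D']
rows = [1, 2, 3, 4, 5]

# rng[1:3, 1:3]: indices 1 and 2 in each dimension
sliced_rows = rows[1:3]      # [2, 3]
sliced_cols = columns[1:3]   # ['B', 'C']
# together: the Excel range B2:C3

# rng[:, 3:]: all rows, columns from index 3 onwards
all_rows = rows[:]           # [1, 2, 3, 4, 5]
tail_cols = columns[3:]      # ['D']
# together: the Excel range D1:D5
```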
| 0 | 2016-08-30T12:05:57Z | [
"python",
"excel",
"xlwings"
] |
Python regex findall with all specific substrings | 39,226,455 | <p>I'm trying to get the output of the following matching regex as </p>
<p>all the Sectors, e.g. ['Sector-34, Noida', 'Sec 434 Gurgaon', 'sec100']</p>
<p>P.S. - sec47, \n gurgaon is the special case</p>
<p>But the output I actually get is quite weird: [('', 'tor')]</p>
<pre><code>import re
string = "Sector-34, Noida is found to be awesome place I went to eat burgers there and Sec 434 Gurgoan is also good sec100 is one the finest places for outing."
match = re.findall(r"Sec(tor)?-?\d+\s+?\w+|Sec(tor)?\s+?\d+", string, re.IGNORECASE)
print match
</code></pre>
<p>Thanks, in Advance! </p>
| -2 | 2016-08-30T11:35:38Z | 39,226,722 | <p>Here is one way that will give the expected output, but is not a general way (because you didn't provide us with general conditions):</p>
<pre><code>>>> re.findall(r'(?:[sS]ec(?:tor)?(?:-|\s+)?\d+\W?\s+[A-Z][a-z]+)|[sS]ec(?:tor)?\d+', string)
['Sector-34, Noida', 'Sec 434 Gurgoan', 'sec100']
</code></pre>
<p>Notes:</p>
<ol>
<li><p>Here I used <code>\W</code> (non-word characters) in order to match characters like <code>,</code> in the first match. If you think other non-word characters would be wrong there, you should change it to <code>,</code>.</p></li>
<li><p>We have 2 option here:</p>
<ol>
<li><code>(?:[sS]ec(?:tor)?(?:-|\s+)?\d+\W?\s+[A-Z][a-z]+)</code></li>
<li><code>[sS]ec(?:tor)?\d+</code></li>
</ol></li>
</ol>
<p>As you can see, for the second part I didn't consider a word after the sector and the digit; if you think there might be a word after that, you can add <code>(?:\s+[A-Z][a-z]+)?</code> after it. </p>
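<p>A runnable version of the above. Note that all the groups are non-capturing <code>(?:...)</code>; the capturing groups <code>(tor)</code> in the question's pattern are exactly why its <code>findall</code> returned tuples like <code>('', 'tor')</code> instead of the full matches:</p>

```python
import re

string = ("Sector-34, Noida is found to be awesome place I went to eat burgers "
          "there and Sec 434 Gurgoan is also good sec100 is one the finest "
          "places for outing.")

# non-capturing groups, so findall returns the whole match of each hit
pattern = r'(?:[sS]ec(?:tor)?(?:-|\s+)?\d+\W?\s+[A-Z][a-z]+)|[sS]ec(?:tor)?\d+'
matches = re.findall(pattern, string)
```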
| 0 | 2016-08-30T11:49:33Z | [
"python",
"regex",
"python-2.7"
] |
Python regex findall with all specific substrings | 39,226,455 | <p>I'm trying to get the output of the following matching regex as </p>
<p>all the Sectors, e.g. ['Sector-34, Noida', 'Sec 434 Gurgaon', 'sec100']</p>
<p>P.S. - sec47, \n gurgaon is the special case</p>
<p>But the output I actually get is quite weird: [('', 'tor')]</p>
<pre><code>import re
string = "Sector-34, Noida is found to be awesome place I went to eat burgers there and Sec 434 Gurgoan is also good sec100 is one the finest places for outing."
match = re.findall(r"Sec(tor)?-?\d+\s+?\w+|Sec(tor)?\s+?\d+", string, re.IGNORECASE)
print match
</code></pre>
<p>Thanks, in Advance! </p>
| -2 | 2016-08-30T11:35:38Z | 39,227,862 | <p>You could go for:</p>
<pre><code>import re
rx = re.compile(r'(\b[Ss]ec(?:tor)?[- ]?\d+\b[,\s]*\b\w+\b)')
string = """
Sector-34, Noida is found to be awesome place I went to eat burgers there and Sec 434 Gurgoan is also good sec47,
gurgaon is one the finest places for outing.
"""
sectors = [match.group(1).replace("\n", " ") \
for match in rx.finditer(string)]
print(sectors)
# ['Sector-34, Noida', 'Sec 434 Gurgoan', 'sec47, gurgaon']
</code></pre>
<p>Otherwise, please provide additional information / sectors.</p>
| 0 | 2016-08-30T12:42:55Z | [
"python",
"regex",
"python-2.7"
] |
Authentication while scraping via BeautifulSoup in python | 39,226,585 | <p>I have created a piece of code to scrape an article off the ft.com website. </p>
<pre><code>url = ""
r = requests.get(url)
soup = bs4.BeautifulSoup(r.content, "html.parser")
for a in soup.find_all('div', {"id":"storyContent"}):
print a
</code></pre>
<p><strong>1) On the website there is a div tag with id "storyContent", but I get no output as a result of this code, which means that it didn't enter the loop at all! What might the reason be?</strong><br>
Now ft.com does not give access to articles without entering username and password.<br>
I have logged into ft.com using chrome.<br>
Suppose my username, password details are the following:<br>
Username : bs@sb.com<br>
Pass: 12345<br>
I need to know either of the following:<br>
<strong>2) How can I provide this authentication in my code?<br>
3) How can I use the session on Chrome (on which I'm already logged in) to access the webpage/article details?<br>
4) Is authentication the reason behind the lack of output?<br>
5) I am trying to get the article's body out of the webpage.</strong><br>
Thanks!</p>
| 0 | 2016-08-30T11:42:38Z | 39,226,905 | <p>Rather, start with this:</p>
<pre><code>import requests
from bs4 import BeautifulSoup

url = "http://www.ft.com"
r = requests.get(url)
soup = BeautifulSoup(r.content, "html.parser")
for a in soup:
print a
</code></pre>
<p>Then add a POST request once you find the required key:value pair:</p>
<pre><code>r = requests.post('http://www.ft.com/xxx', data = {'key':'value'})
</code></pre>
| 1 | 2016-08-30T11:59:04Z | [
"python",
"python-2.7"
] |
String Index Out Of Range When Reading Text File | 39,226,586 | <p>I keep getting this error on the second-to-last line of my program, and I am not sure why; all I am doing is reading a line from a text file.</p>
<pre><code>if (items[0]) == 86947367 :
with open("read_it.txt") as text_file:
try:
price = int(text_file.readlines()[2])
except ValueError:
print("error")
else:
new_price = int(price * (items2[0]))
print("£",new_price)
price_list.append(new_price)
product = (text_file.readline()[1])
print(product)
</code></pre>
| 2 | 2016-08-30T11:42:39Z | 39,226,683 | <p><strong>Problem:</strong></p>
<pre><code>price = int(text_file.readlines()[2])
</code></pre>
<p>The <code>readlines()</code> call consumes the whole file, so the later <code>readline()</code> has nothing left to read. Store the result of <code>readlines()</code> in a temporary variable and index it:</p>
<pre><code>tmp = text_file.readlines()
price = int(tmp[2])
product = tmp[1]
</code></pre>
| 0 | 2016-08-30T11:47:39Z | [
"python",
"python-3.x"
] |
String Index Out Of Range When Reading Text File | 39,226,586 | <p>I keep getting this error on the second-to-last line of my program, and I am not sure why; all I am doing is reading a line from a text file.</p>
<pre><code>if (items[0]) == 86947367 :
with open("read_it.txt") as text_file:
try:
price = int(text_file.readlines()[2])
except ValueError:
print("error")
else:
new_price = int(price * (items2[0]))
print("£",new_price)
price_list.append(new_price)
product = (text_file.readline()[1])
print(product)
</code></pre>
| 2 | 2016-08-30T11:42:39Z | 39,227,745 | <p>When you use <code>readlines()</code>, your "cursor" in the file reaches the end. If you call it a second time, it'll have nothing left to read. </p>
<p>To avoid this behavior, you can store <code>readlines()</code> in a variable for multiple uses, or use <code>text_file.seek(0)</code> to put your cursor back at the beginning of the file.</p>
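<p>A small demonstration of both fixes, with <code>io.StringIO</code> standing in for an opened text file (the file contents here are an assumption for the demo):</p>

```python
import io

text_file = io.StringIO("widget\n12\n34\n")

lines = text_file.readlines()  # the cursor is now at end-of-file
price = int(lines[2])          # third line, taken from the stored list
product = lines[1].strip()     # second line, no second read needed

text_file.seek(0)              # alternatively: rewind before reading again
first_line = text_file.readline().strip()
```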
| 1 | 2016-08-30T12:38:07Z | [
"python",
"python-3.x"
] |
Sorting the keys of a dictionary based on different values | 39,226,604 | <p>I have a dictionary:</p>
<pre><code>d = {"A":{"a":1, "b":2, "c":3}, "B":{"a":5, "b":6, "c":7}, "C":{"a":4, "b":6, "c":7}}
</code></pre>
<p>I want to sort the keys "A", "B" and "C" in a list, first on the basis of numerical values of "a", then if some tie occurs on the basis of numerical values of "b" and so on. </p>
<p>How can I do it?</p>
| 0 | 2016-08-30T11:43:23Z | 39,226,705 | <pre><code>>>> d = {"A":{"a":1, "b":2, "c":3}, "B":{"a":5, "b":6, "c":7}, "C":{"a":4, "b":6, "c":7}}
>>>
>>> d.items()
[('A', {'a': 1, 'c': 3, 'b': 2}), ('C', {'a': 4, 'c': 7, 'b': 6}), ('B', {'a': 5, 'c': 7, 'b': 6})]
>>> sorted(d.items(), key=lambda x: [y[1] for y in sorted(x[1].items())])
[('A', {'a': 1, 'c': 3, 'b': 2}), ('C', {'a': 4, 'c': 7, 'b': 6}), ('B', {'a': 5, 'c': 7, 'b': 6})]
</code></pre>
| 2 | 2016-08-30T11:48:42Z | [
"python",
"sorting",
"dictionary",
"ranking"
] |
Sorting the keys of a dictionary based on different values | 39,226,604 | <p>I have a dictionary:</p>
<pre><code>d = {"A":{"a":1, "b":2, "c":3}, "B":{"a":5, "b":6, "c":7}, "C":{"a":4, "b":6, "c":7}}
</code></pre>
<p>I want to sort the keys "A", "B" and "C" in a list, first on the basis of numerical values of "a", then if some tie occurs on the basis of numerical values of "b" and so on. </p>
<p>How can I do it?</p>
| 0 | 2016-08-30T11:43:23Z | 39,226,804 | <p>Make a list of your dictionary like so:</p>
<pre><code>my_list = [(key, value) for key, value in d.items()]
</code></pre>
<p>Then sort the list using whatever criteria you have in mind:</p>
<pre><code>def sort_function(a, b):
    # whatever complicated comparison you like; a cmp-style function
    # must return a negative number, zero or a positive number
    return cmp(a, b)

my_list.sort(cmp=sort_function)  # Python 2 only; use key= in Python 3
</code></pre>
| 0 | 2016-08-30T11:53:47Z | [
"python",
"sorting",
"dictionary",
"ranking"
] |
Sorting the keys of a dictionary based on different values | 39,226,604 | <p>I have a dictionary:</p>
<pre><code>d = {"A":{"a":1, "b":2, "c":3}, "B":{"a":5, "b":6, "c":7}, "C":{"a":4, "b":6, "c":7}}
</code></pre>
<p>I want to sort the keys "A", "B" and "C" in a list, first on the basis of numerical values of "a", then if some tie occurs on the basis of numerical values of "b" and so on. </p>
<p>How can I do it?</p>
| 0 | 2016-08-30T11:43:23Z | 39,226,867 | <p>You can use:</p>
<pre><code>sorted(d, key=lambda key:(d[key]['a'], d[key]['b'], d[key]['c']))
</code></pre>
<p>And here is a general solution in case you have an arbitrary number of elements in the inner dictionaries:</p>
<pre><code>sorted(d, key=lambda key:[value for value in sorted(d[key].items())])
</code></pre>
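<p>For example, with the dictionary from the question, the general version orders the keys like this:</p>

```python
d = {"A": {"a": 1, "b": 2, "c": 3},
     "B": {"a": 5, "b": 6, "c": 7},
     "C": {"a": 4, "b": 6, "c": 7}}

# the sort key for each outer key is its inner (key, value) pairs in
# key order, so ties on "a" fall through to "b", then "c"
order = sorted(d, key=lambda key: [value for value in sorted(d[key].items())])
```

<p><code>order</code> is <code>['A', 'C', 'B']</code>, since 1 &lt; 4 &lt; 5 on the inner key "a".</p>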
| 2 | 2016-08-30T11:57:08Z | [
"python",
"sorting",
"dictionary",
"ranking"
] |
Pandas conditional creation of a new dataframe column | 39,226,656 | <p>This question is an extension of <a href="http://stackoverflow.com/questions/19913659/pandas-conditional-creation-of-a-series-dataframe-column/39224761#39224761">Pandas conditional creation of a series/dataframe column</a>.
If we had this dataframe:</p>
<pre><code> Col1 Col2
1 A Z
2 B Z
3 B X
4 C Y
5 C W
</code></pre>
<p>and we wanted to do the equivalent of:</p>
<pre><code>if Col2 in ('Z','X') then Col3 = 'J'
else if Col2 = 'Y' then Col3 = 'K'
else Col3 = {value of Col1}
</code></pre>
<p>How could I do that?</p>
| 2 | 2016-08-30T11:46:17Z | 39,226,738 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow"><code>loc</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a> and finally <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a>:</p>
<pre><code>df.loc[df.Col2.isin(['Z','X']), 'Col3'] = 'J'
df.loc[df.Col2 == 'Y', 'Col3'] = 'K'
df['Col3'] = df.Col3.fillna(df.Col1)
print (df)
Col1 Col2 Col3
1 A Z J
2 B Z J
3 B X J
4 C Y K
5 C W C
</code></pre>
| 3 | 2016-08-30T11:50:30Z | [
"python",
"pandas",
"if-statement",
"dataframe",
"multiple-columns"
] |
Pandas conditional creation of a new dataframe column | 39,226,656 | <p>This question is an extension of <a href="http://stackoverflow.com/questions/19913659/pandas-conditional-creation-of-a-series-dataframe-column/39224761#39224761">Pandas conditional creation of a series/dataframe column</a>.
If we had this dataframe:</p>
<pre><code> Col1 Col2
1 A Z
2 B Z
3 B X
4 C Y
5 C W
</code></pre>
<p>and we wanted to do the equivalent of:</p>
<pre><code>if Col2 in ('Z','X') then Col3 = 'J'
else if Col2 = 'Y' then Col3 = 'K'
else Col3 = {value of Col1}
</code></pre>
<p>How could I do that?</p>
| 2 | 2016-08-30T11:46:17Z | 39,228,211 | <p>Try <code>np.where</code>: <code>outcome = np.where(condition, true, false)</code></p>
<pre><code>import numpy as np

df["Col3"] = np.where(df['Col2'].isin(['Z','X']), "J",
                      np.where(df['Col2'].isin(['Y']), 'K', df['Col1']))
Col1 Col2 Col3
1 A Z J
2 B Z J
3 B X J
4 C Y K
5 C W C
</code></pre>
| 0 | 2016-08-30T12:59:47Z | [
"python",
"pandas",
"if-statement",
"dataframe",
"multiple-columns"
] |
Python logging class in Docker: logs gone | 39,226,846 | <p>For several years now I've used Python's logging class in the same way:</p>
<pre><code>def get_module_logger(mod_name):
"""
To use this, do logger = get_module_logger(__name__)
"""
logger = logging.getLogger(mod_name)
handler = logging.StreamHandler()
formatter = logging.Formatter(
'%(asctime)s [%(name)-12s] %(levelname)-8s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
return logger
</code></pre>
<p>Then in some module,</p>
<pre><code>logger = get_module_logger(__name__)
</code></pre>
<p>Now, I am running a Python app that uses this inside of a Docker container. I am running the container with <code>-d -i -t</code>. When I am inside the container after a <code>docker exec -it containername /bin/bash</code>, I can see the logs if I execute a command in my python script that produces logs. However, from outside, <code>docker logs containername</code> never shows anything. I have tried running my container with <code>PYTHONUNBUFFERED=0</code> per a few web posts and that did not help either. Tailing with <code>docker logs -f containername</code> never shows anything either. So all my logs, both stderr and stdout are empty. I have also tried <code>logging.StreamHandler(sys.stdout)</code> but to no avail. </p>
<p>What is wrong? Do I need to change something in the handler?</p>
<p>EDIT: my Dockerfile is very simple:</p>
<pre><code>FROM python:3.5.1
MAINTAINER tommy@...
ADD . /tmp
#need pip > 8 to have internal pypi repo in requirements.txt
RUN pip install --upgrade pip
#do the install
RUN pip install /tmp/py/
CMD myservice
</code></pre>
<p>EDIT2:</p>
<pre><code>~ docker --version
Docker version 1.11.0, build 4dc5990
</code></pre>
| 0 | 2016-08-30T11:56:21Z | 39,227,617 | <p>This is an issue with docker, not python. With docker, the file system seen inside an instance is unique to that instance. Any changes made are not reflected on the host machine. This is a feature of the union file system that docker uses.</p>
<p>You can use "<a href="https://docs.docker.com/engine/tutorials/dockervolumes/" rel="nofollow">data volumes</a>" to mount a directory on the host machine inside your docker instance. You can either copy your logs into that directory, or (in the future) log directly to it.</p>
<p>On Unix:</p>
<pre><code>docker run -v /Users/<path>:/<container path> ...
</code></pre>
<p>On Windows:</p>
<pre><code>docker run -v /c/Users/<path>:/<container path> ...
</code></pre>
<p>This directory will exist outside of the union file system, so you lose the ability to snapshot the directory or roll back changes.</p>
| -1 | 2016-08-30T12:32:22Z | [
"python",
"docker"
] |
Python logging class in Docker: logs gone | 39,226,846 | <p>For several years now I've used Python's logging class in the same way:</p>
<pre><code>def get_module_logger(mod_name):
"""
To use this, do logger = get_module_logger(__name__)
"""
logger = logging.getLogger(mod_name)
handler = logging.StreamHandler()
formatter = logging.Formatter(
'%(asctime)s [%(name)-12s] %(levelname)-8s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
return logger
</code></pre>
<p>Then in some module,</p>
<pre><code>logger = get_module_logger(__name__)
</code></pre>
<p>Now, I am running a Python app that uses this inside of a Docker container. I am running the container with <code>-d -i -t</code>. When I am inside the container after a <code>docker exec -it containername /bin/bash</code>, I can see the logs if I execute a command in my python script that produces logs. However, from outside, <code>docker logs containername</code> never shows anything. I have tried running my container with <code>PYTHONUNBUFFERED=0</code> per a few web posts and that did not help either. Tailing with <code>docker logs -f containername</code> never shows anything either. So all my logs, both stderr and stdout are empty. I have also tried <code>logging.StreamHandler(sys.stdout)</code> but to no avail. </p>
<p>What is wrong? Do I need to change something in the handler?</p>
<p>EDIT: my Dockerfile is very simple:</p>
<pre><code>FROM python:3.5.1
MAINTAINER tommy@...
ADD . /tmp
#need pip > 8 to have internal pypi repo in requirements.txt
RUN pip install --upgrade pip
#do the install
RUN pip install /tmp/py/
CMD myservice
</code></pre>
<p>EDIT2:</p>
<pre><code>~ docker --version
Docker version 1.11.0, build 4dc5990
</code></pre>
| 0 | 2016-08-30T11:56:21Z | 39,233,389 | <p>I was able to see the log outputs and was not able to reproduce your issue with your code.</p>
<p>I created a file called <code>tommy.py</code>:</p>
<pre><code>import logging
def get_module_logger(mod_name):
"""
To use this, do logger = get_module_logger(__name__)
"""
logger = logging.getLogger(mod_name)
handler = logging.StreamHandler()
formatter = logging.Formatter(
'%(asctime)s [%(name)-12s] %(levelname)-8s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
return logger
if __name__ == "__main__":
get_module_logger(__name__).info("HELLO WORLD!")
</code></pre>
<p>Ran the following:</p>
<pre><code>docker run -d -v /tmp/tommy.py:/opt/tommy.py python:3.5 python /opt/tommy.py
</code></pre>
<p>And saw this:</p>
<pre><code>$ docker logs -f sleepy_poincare
2016-08-30 17:01:36,026 [__main__ ] INFO HELLO WORLD!
</code></pre>
<p>Edit:<br>
Here's my Docker version:</p>
<pre><code>$ docker --version
Docker version 1.12.0, build 8eab29e
</code></pre>
| 0 | 2016-08-30T17:12:03Z | [
"python",
"docker"
] |
How can I find the function of a python module? | 39,226,927 | <p>I am using OpenCV, and I want to see what the "rectangle" function is.
I can use the dir(module) function to get the function names, but I don't know how to view the actual function. I am using Linux (Ubuntu 16.04), and I'm wondering if the libraries are in "/usr/local/" or some other place. The OpenCV cv2 Python library is just an example; I want to know how to view any function of any library imported into Python. </p>
| 0 | 2016-08-30T11:59:57Z | 39,227,107 | <p>There are multiple ways:</p>
<ul>
<li><p>If you want to get the source code on runtime you can use <code>inspect.getsourcelines(object)</code> (see <a href="https://docs.python.org/3/library/inspect.html#inspect.getsourcelines" rel="nofollow">https://docs.python.org/3/library/inspect.html#inspect.getsourcelines</a>)</p></li>
<li><p>If you want to find where the module is you can simply <code>print(module.__file__)</code></p></li>
</ul>
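<p>A quick demonstration of both, using <code>json.dumps</code> as an arbitrary pure-Python standard-library function (my choice for the example; it works the same for any function whose source is available):</p>

```python
import inspect
import json

# the source lines of the function, plus the line number where it starts
source_lines, start_line = inspect.getsourcelines(json.dumps)

# the file the function was defined in
file_path = inspect.getsourcefile(json.dumps)
```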
| 3 | 2016-08-30T12:08:41Z | [
"python",
"function",
"opencv",
"module"
] |
How can I find the function of a python module? | 39,226,927 | <p>I am using OpenCV, and I want to see what the "rectangle" function is.
I can use the dir(module) function to get the function names, but I don't know how to view the actual function. I am using Linux (Ubuntu 16.04), and I'm wondering if the libraries are in "/usr/local/" or some other place. The OpenCV cv2 Python library is just an example; I want to know how to view any function of any library imported into Python. </p>
| 0 | 2016-08-30T11:59:57Z | 39,526,919 | <p>I suggest using the wonderful <a href="https://ipython.org/" rel="nofollow">IPython</a> interactive shell.</p>
<p>You can see the definition of any function (or in general of any object whose source code is available) by appending two question marks <code>??</code> to its name. Below is a short example run on my terminal:</p>
<pre><code>$ ipython
Python 2.7.9 (default, Jan 27 2016, 11:42:08)
Type "copyright", "credits" or "license" for more information.
IPython 2.3.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import urllib
In [2]: urllib.urlopen??
Type: function
String form: <function urlopen at 0x2b56465ac578>
File: /sw/python/2.7.9/Linux.x86_64/lib/python2.7/urllib.py
Definition: urllib.urlopen(url, data=None, proxies=None)
Source:
def urlopen(url, data=None, proxies=None):
"""Create a file-like object for the specified URL to read from."""
from warnings import warnpy3k
warnpy3k("urllib.urlopen() has been removed in Python 3.0 in "
"favor of urllib2.urlopen()", stacklevel=2)
global _urlopener
if proxies is not None:
opener = FancyURLopener(proxies=proxies)
elif not _urlopener:
opener = FancyURLopener()
_urlopener = opener
else:
opener = _urlopener
if data is None:
return opener.open(url)
else:
return opener.open(url, data)
</code></pre>
| 0 | 2016-09-16T08:28:29Z | [
"python",
"function",
"opencv",
"module"
] |
Get coordinates of image after warpPerspective | 39,226,955 | <p>I apply cv2.warpPerspective to imageA</p>
<pre><code>result = cv2.warpPerspective(imageA, Ht.dot(H), (xmax-xmin, ymax-ymin))
</code></pre>
<p>and that works correctly, but I need the coordinates of the new image, like this:
<a href="http://i.stack.imgur.com/z5sUl.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/z5sUl.jpg" alt="enter image description here"></a></p>
<p>Does anybody know how to calculate them?</p>
| 0 | 2016-08-30T12:01:11Z | 39,230,666 | <p>Transform the corner points with the same homography using <code>cv2.perspectiveTransform</code>. Note that it expects a <code>float32</code> NumPy array of shape <code>(N, 1, 2)</code>, not a plain Python list:</p>
<pre><code>import numpy as np

old_points = np.float32([[[3.27589822, -24.28251266]],
                         [[8.85595226, 270.21176147]],
                         [[408.69842529, 258.427948]],
                         [[398.14550781, -36.64953613]]])
new_points = cv2.perspectiveTransform(old_points, H)
</code></pre>
| 0 | 2016-08-30T14:46:07Z | [
"python",
"opencv"
] |
How to get the second largest number by not usings list and max()min() | 39,226,965 | <p>I want to find the second largest number, as in this topic: <a href="http://stackoverflow.com/questions/16225677/get-the-second-largest-number-in-a-list-in-linear-time">Get the second largest number in a list in linear time</a></p>
<p>My problem is that these words are restricted: <code>list</code>, <code>[]</code>, <code>remove</code>, <code>del</code>, <code>min</code>, <code>max</code>, <code>dict</code>, <code>{}</code>, <code>sort</code>, <code>pop</code>.</p>
<p>I have no idea how to get the second largest number without them, so I would like some suggestions.</p>
<p>Thanks!</p>
| -7 | 2016-08-30T12:01:34Z | 39,227,069 | <pre><code>first_max=arr[0]
second_max=first_max
if(arr[0]>arr[1]):
first_max=arr[0]
second_max=arr[1]
for i in range(1,len(arr)):
if(arr[i]>first_max):
second_max=first_max
first_max=arr[i]
elif(arr[i]>second_max):
second_max=arr[i]
print(second_max)
</code></pre>
<p>I think the above code will work for all cases; try it and let me know.</p>
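<p>Wrapped as a function for easy testing (the same logic as above; it assumes the array has at least two elements):</p>

```python
def second_largest(arr):
    # track the two largest values seen so far in a single pass
    first_max = arr[0]
    second_max = first_max
    if arr[0] > arr[1]:
        first_max = arr[0]
        second_max = arr[1]
    for i in range(1, len(arr)):
        if arr[i] > first_max:
            second_max = first_max
            first_max = arr[i]
        elif arr[i] > second_max:
            second_max = arr[i]
    return second_max
```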
| 0 | 2016-08-30T12:07:03Z | [
"python"
] |
How to get the second largest number by not usings list and max()min() | 39,226,965 | <p>I want to find the second largest number, as in this topic: <a href="http://stackoverflow.com/questions/16225677/get-the-second-largest-number-in-a-list-in-linear-time">Get the second largest number in a list in linear time</a></p>
<p>My problem is that these words are restricted: <code>list</code>, <code>[]</code>, <code>remove</code>, <code>del</code>, <code>min</code>, <code>max</code>, <code>dict</code>, <code>{}</code>, <code>sort</code>, <code>pop</code>.</p>
<p>I have no idea how to get the second largest number without them, so I would like some suggestions.</p>
<p>Thanks!</p>
| -7 | 2016-08-30T12:01:34Z | 39,227,083 | <p>Try this:</p>
<pre><code>a = [1,5,7,8,3,4,6,8,2,9]
x = -999999999999
n = 999999999999
for i in a:
    if i > x:
        x = i
for i in a:
    if x - i != 0 and x - i < n:
        n = x - i
        ans = i
print ans
</code></pre>
| 0 | 2016-08-30T12:07:35Z | [
"python"
] |
How to get the second largest number without using list and max()/min() | 39,226,965 | <p>I want to find the second largest number, as in this topic: <a href="http://stackoverflow.com/questions/16225677/get-the-second-largest-number-in-a-list-in-linear-time">Get the second largest number in a list in linear time</a></p>
<p>My problem is that these words are restricted: <code>list</code>, <code>[]</code>, <code>remove</code>, <code>del</code>, <code>min</code>, <code>max</code>, <code>dict</code>, <code>{}</code>, <code>sort</code>, <code>pop</code>.</p>
<p>I have no idea how to get the second largest number, so I would welcome any suggestions.</p>
<p>Thanks!</p>
| -7 | 2016-08-30T12:01:34Z | 39,227,180 | <p>I'm not sure I've understood your question correctly, but the <a href="http://stackoverflow.com/questions/16225677/get-the-second-largest-number-in-a-list-in-linear-time">post</a> you've mentioned already gives a few good solutions. For instance, if you don't want to code the algorithm yourself, use the standard library <a href="https://docs.python.org/2/library/heapq.html" rel="nofollow">heapq</a> like this:</p>
<pre><code>import random
import heapq
random.seed(1)
el = [random.randint(1, 100) for i in range(20)]
print el
first_max, second_max = heapq.nlargest(2, el)
print "Max element is {0} and the second max {1}".format(first_max,
second_max)
</code></pre>
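As a quick illustrative check (not part of the original answer), <code>nlargest</code> can be tried directly on a small tuple:

```python
import heapq

# the two largest elements, in descending order
top_two = heapq.nlargest(2, (3, 1, 4, 1, 5, 9, 2, 6))
print(top_two)  # [9, 6]
```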
| 2 | 2016-08-30T12:12:01Z | [
"python"
] |
Updating Python using 'PIP' | 39,227,100 | <p>Just like we install packages such as Matplotlib using the pip command in cmd (<code>pip install matplotlib</code>), can we also update to a newer version of Python with some pip command?
<a href="http://i.stack.imgur.com/r108C.png" rel="nofollow">enter image description here</a></p>
| 1 | 2016-08-30T12:08:14Z | 39,227,632 | <p>Pip is designed for managing Python packages, <strong>not Python versions</strong>. To update Python you must download the version you want from the <a href="https://www.python.org/downloads/" rel="nofollow">official downloads page</a>.</p>
<h1>Simple Answer</h1>
<p>No, you cannot.</p>
| 3 | 2016-08-30T12:32:52Z | [
"python",
"pip"
] |
How to zip only the files inside the folder and not the sub folders? | 39,227,114 | <p>I have a folder structure as:</p>
<pre><code>/sample/debug/
--debug.exe
--sample.exe
--sample.pdb
--debug.pdb
--sample.dll
--debug.dll
/config
--sample.txt
--new.txt
/general
--general.txt
--code.txt
</code></pre>
<p>So, what I want is only to zip the files inside debug and not the subfolders like /config and /general. I tried as follows:</p>
<pre><code>import zipfile
import os

def append( dir_name ):
    ret_val = []
    fileList = []
    for file in os.listdir(dir_name):
        try:
            dirfile = os.path.join(dir_name, file)
        except Exception:
            err = sys.exc_info()
            print ("Error!", err)
        fileList.append(dirfile)
    ret_val = fileList
    return ret_val

def zip( fileList, archive, root ):
    ret_val = 0
    try:
        zip_folder_contents = zipfile.ZipFile(archive, 'w', zipfile.ZIP_DEFLATED)
    except Exception:
        err = sys.exc_info()
        print ("Error!",err)
        exit( 1 )
    for filename in fileList:
        zip_folder_contents.write(filename,
            filename[len(root):].lstrip(os.path.sep).lstrip(os.path.altsep))
    zip_folder_contents.close()
    return ret_val

make(append_files_in_zipfolder("D:/sample/debug"), "debug.zip",
     "D:/sample/debug")
</code></pre>
<p>Now when I execute the above I get a "permission denied" error for "D:/sample/debug\\config". Since I am not able to get rid of this error, I thought I would only include the files inside the folder and exclude the subfolders. Is there any way to do that, or some way to remove this permission-related error? Please suggest.</p>
| 1 | 2016-08-30T12:08:55Z | 39,229,094 | <p>You can grab all files from a folder without going into the subfolders using:</p>
<pre><code>import os

def getfilesfrom(directory):
    return filter(lambda x:
                  not os.path.isdir(os.path.join(directory, x)),
                  os.listdir(directory))

# or alternatively, using generators (as suggested in the comments):
def getfilesfrom(directory):
    for x in os.listdir(directory):
        if not os.path.isdir(os.path.join(directory, x)):
            yield x  # or yield os.path.join(directory, x) for the full path.
</code></pre>
<p>You can then simply run (as described in the <a href="https://pymotw.com/2/zipfile/" rel="nofollow">documentation</a>):</p>
<pre><code>import datetime
import zipfile

def print_info(archive_name):
    """ Print information from zip archive"""
    zf = zipfile.ZipFile(archive_name)
    for info in zf.infolist():
        print info.filename
        print '\tComment:\t', info.comment
        print '\tModified:\t', datetime.datetime(*info.date_time)
        print '\tSystem:\t\t', info.create_system, '(0 = Windows, 3 = Unix)'
        print '\tZIP version:\t', info.create_version
        print '\tCompressed:\t', info.compress_size, 'bytes'
        print '\tUncompressed:\t', info.file_size, 'bytes'
        print

print 'creating archive'
zf = zipfile.ZipFile('debug.zip', mode='a', compression=zipfile.ZIP_DEFLATED)
inputdir = '/sample/debug/'
filestozip = getfilesfrom(inputdir)
for afile in filestozip:
    print('adding ' + afile + ' to zipfile debug.zip')
    zf.write(os.path.join(inputdir, afile), afile)
print 'closing'
zf.close()
print
print_info('debug.zip')
</code></pre>
| 2 | 2016-08-30T13:38:05Z | [
"python",
"windows",
"zipfile"
] |
How to zip only the files inside the folder and not the sub folders? | 39,227,114 | <p>I have a folder structure as:</p>
<pre><code>/sample/debug/
--debug.exe
--sample.exe
--sample.pdb
--debug.pdb
--sample.dll
--debug.dll
/config
--sample.txt
--new.txt
/general
--general.txt
--code.txt
</code></pre>
<p>So, what I want is only to zip the files inside debug and not the subfolders like /config and /general. I tried as follows:</p>
<pre><code>import zipfile
import os

def append( dir_name ):
    ret_val = []
    fileList = []
    for file in os.listdir(dir_name):
        try:
            dirfile = os.path.join(dir_name, file)
        except Exception:
            err = sys.exc_info()
            print ("Error!", err)
        fileList.append(dirfile)
    ret_val = fileList
    return ret_val

def zip( fileList, archive, root ):
    ret_val = 0
    try:
        zip_folder_contents = zipfile.ZipFile(archive, 'w', zipfile.ZIP_DEFLATED)
    except Exception:
        err = sys.exc_info()
        print ("Error!",err)
        exit( 1 )
    for filename in fileList:
        zip_folder_contents.write(filename,
            filename[len(root):].lstrip(os.path.sep).lstrip(os.path.altsep))
    zip_folder_contents.close()
    return ret_val

make(append_files_in_zipfolder("D:/sample/debug"), "debug.zip",
     "D:/sample/debug")
</code></pre>
<p>Now when I execute the above I get a "permission denied" error for "D:/sample/debug\\config". Since I am not able to get rid of this error, I thought I would only include the files inside the folder and exclude the subfolders. Is there any way to do that, or some way to remove this permission-related error? Please suggest.</p>
| 1 | 2016-08-30T12:08:55Z | 39,243,226 | <p>The following makes use of the <code>ZipFile</code> context manager:</p>
<pre><code>import os
from zipfile import ZipFile

def zip_folder(zip_name, folder):
    with ZipFile(zip_name, 'w') as myzip:
        for entry in os.listdir(folder):
            # Test the full path; os.path.isfile(entry) alone would be
            # resolved relative to the current working directory.
            if os.path.isfile(os.path.join(folder, entry)):
                myzip.write(os.path.join(folder, entry), arcname=entry)

zip_folder(r'D:\output.zip', r'D:\sample\debug')
</code></pre>
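To sanity-check the approach without touching real paths, here is a self-contained variant using a throwaway directory (all names below are invented for the demonstration):

```python
import os
import tempfile
import zipfile

def zip_top_level_files(zip_name, folder):
    # collect plain files first, so the archive we are about to create
    # is never picked up by the listing
    entries = [e for e in os.listdir(folder)
               if os.path.isfile(os.path.join(folder, e))]
    with zipfile.ZipFile(zip_name, 'w') as myzip:
        for entry in entries:
            myzip.write(os.path.join(folder, entry), arcname=entry)

with tempfile.TemporaryDirectory() as root:
    open(os.path.join(root, 'sample.exe'), 'w').close()
    open(os.path.join(root, 'sample.pdb'), 'w').close()
    os.mkdir(os.path.join(root, 'config'))              # subfolder: skipped
    open(os.path.join(root, 'config', 'new.txt'), 'w').close()

    archive = os.path.join(root, 'debug.zip')
    zip_top_level_files(archive, root)
    with zipfile.ZipFile(archive) as zf:
        names = sorted(zf.namelist())

print(names)  # ['sample.exe', 'sample.pdb']
```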
| 0 | 2016-08-31T07:23:28Z | [
"python",
"windows",
"zipfile"
] |
Can scrapy yield different kinds of items? | 39,227,277 | <p>I have two kinds of Item:</p>
<pre><code>class MovieItem(scrapy.Item):
    id = scrapy.Field()
    image_urls = scrapy.Field()
    image_paths = scrapy.Field()
    torrents = scrapy.Field()
    # ...other fields

class TorrentItem(scrapy.Item):
    id = scrapy.Field()
    movie_id = scrapy.Field()
    image_urls = scrapy.Field()
    image_paths = scrapy.Field()
</code></pre>
<p>I want to use <code>ImagePipeline</code> and <code>FilePipeline</code> to download the images and torrents of a movie. How should I yield the two items in the <code>parse</code> method? And how should I define the corresponding pipelines?</p>
| -1 | 2016-08-30T12:17:21Z | 39,239,137 | <p>The answer is yes, you can. Here's an example of how to do it, starting with an <code>example.py</code> spider:</p>
<pre><code># -*- coding: utf-8 -*-
import scrapy

class MovieItem(scrapy.Item):
    id = scrapy.Field()
    image_urls = scrapy.Field()
    images = scrapy.Field()
    torrents = scrapy.Field()
    itemtype = scrapy.Field()

class TorrentItem(scrapy.Item):
    id = scrapy.Field()
    movie_id = scrapy.Field()
    image_urls = scrapy.Field()
    images = scrapy.Field()

class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = (
        'http://www.example.com/',
    )

    def parse(self, response):
        image_urls = [
            "http://...-Miles.jpg",
            "https:/.../58832_300x300",
            "http://...-Circuit-Tests.png"
        ]
        torent_ids = []
        for i in xrange(3):
            t = TorrentItem()
            t["id"] = "#id%d" % i
            t["movie_id"] = 143
            t["image_urls"] = [image_urls[i]]
            # ...
            torent_ids.append(t["id"])
            yield t

        m = MovieItem()
        m['id'] = 143
        m['image_urls'] = ['http://...test.png']
        m['torrents'] = torent_ids
        m['itemtype'] = ['movie']
        # ...
        yield m
</code></pre>
<p>On your <code>settings.py</code> add the following two lines:</p>
<pre><code>ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}
IMAGES_STORE = '.'
</code></pre>
<p>Run the spider:</p>
<pre><code>scrapy crawl example -o test.jl
</code></pre>
<p>Your <code>test.jl</code> file will contain (after quite some formatting):</p>
<pre><code>{
    "images": [
        {
            "url": "http://.../Stuart-Miles.jpg",
            "path": "full/27c8d5099f8785e8fbc2370249a0260e216ee2dd.jpg",
            "checksum": "dba2fc121610b328448dc37084f31dac"
        }
    ],
    "movie_id": 143,
    "id": "#id0",
    "image_urls": [
        "http://...ter-Key-by-Stuart-Miles.jpg"
    ]
}
{
    "images": [
        {
            "url": "https://i....t/58832_300x300",
            "path": "full/b11276eb5b64b5ec7f40eedf4c6fcc6d6d9072ac.jpg",
            "checksum": "a9b47ecbb2de9dcb6a61a159120f1bd2"
        }
    ],
    "movie_id": 143,
    "id": "#id1",
    "image_urls": [
        "https://i.vi..._300x300"
    ]
}
{
    "images": [
        {
            "url": "http://www.ej...rt-Circuit-Tests.png",
            "path": "full/a68282eb533d35a0aa8732a872277933db8951c5.jpg",
            "checksum": "24c0907e3ef610dc355e930f2535c0c4"
        }
    ],
    "movie_id": 143,
    "id": "#id2",
    "image_urls": [
        "http://www.ejob...nsformer-Open-and-Short-Circuit-Tests.png"
    ]
}
{
    "images": [
        {
            "url": "http://...est.png",
            "path": "full/1e3e0f775cd40aaa5ea081278957f4d49e39f610.jpg",
            "checksum": "50a57a6263b9640ee47e913deadaff7c"
        }
    ],
    "torrents": [
        "#id0",
        "#id1",
        "#id2"
    ],
    "itemtype": [
        "movie"
    ],
    "image_urls": [
        "http://xi.../10/test.png"
    ],
    "id": 143
}
</code></pre>
<p>This works nicely with <code>.jl</code> files as output. It won't work well with <code>.csv</code> but this shouldn't be a problem in your case.</p>
| 0 | 2016-08-31T00:49:23Z | [
"python",
"scrapy"
] |
Sort list according to an uneven value | 39,227,312 | <p>I am creating a script for a library that sorts authors in descending order of their avg rating.</p>
<p>Below is my list (it has the author name + a space + the avg rating):</p>
<pre><code>['Michael Crichton 4.71', 'J.K. Rowling 4.36', 'Sidney Sheldon 4.63', 'Narendra Kohli 4.9', 'Jeffrey Archer 4.62', 'Devdutt Pattanaik 4.42', 'George R.R. Martin 5.0', 'Dan Brown 5.0', 'Katherine Applegate 3.0', 'Eoin Colfer 4.25', 'Arthur Conan Doyle 5.0', 'Clive Cussler 4.66', 'Stephen King 3.66', 'Douglas Preston 5.0']
</code></pre>
<p>Below is what I tried: (I split the values by space, appended them to a new list, then sorted by the third value, i.e. the rating.)</p>
<pre><code>for line in rate_order:
    sort_list.append(line.split(' '))

print sorted(sort_list, key=itemgetter(2))
</code></pre>
<p>The issue is that some author names contain more than one space, so the third value is not the rating. Is there a better (or cleaner) way?</p>
| 0 | 2016-08-30T12:18:46Z | 39,227,384 | <p>Use <code>rsplit</code></p>
<pre><code>>>> help(''.rsplit)
Help on built-in function rsplit:

rsplit(...)
    S.rsplit([sep [,maxsplit]]) -> list of strings

    Return a list of the words in the string S, using sep as the
    delimiter string, starting at the end of the string and working
    to the front. If maxsplit is given, at most maxsplit splits are
    done. If sep is not specified or is None, any whitespace string
    is a separator.

>>> 'George R.R. Martin 5.0'.rsplit(' ', 1)
['George R.R. Martin', '5.0']
</code></pre>
<p>Another way to grab the last item of the split is to use an index of -1:</p>
<pre><code>>>> 'George R.R. Martin 5.0'.split()[-1]
'5.0'
</code></pre>
<p>If your list is called <code>author_ratings</code> you can sort it in-place by doing</p>
<pre><code>author_ratings.sort(key=lambda s: float(s.rsplit(' ', 1)[1]), reverse=True)
</code></pre>
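Putting the pieces together on a few entries from the question (a quick check of the <code>rsplit</code>-based key; the helper name is mine):

```python
authors = [
    'Michael Crichton 4.71',
    'George R.R. Martin 5.0',   # the name itself contains spaces
    'Katherine Applegate 3.0',
]

def rating(s):
    # rsplit(' ', 1) splits only at the last space, so multi-word
    # names stay intact and float() sees just the number
    return float(s.rsplit(' ', 1)[1])

authors.sort(key=rating, reverse=True)
print(authors)  # highest rating first
```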
| 2 | 2016-08-30T12:22:55Z | [
"python",
"python-2.7"
] |
Sort list according to an uneven value | 39,227,312 | <p>I am creating a script for a library that sorts authors in descending order of their avg rating.</p>
<p>Below is my list (it has the author name + a space + the avg rating):</p>
<pre><code>['Michael Crichton 4.71', 'J.K. Rowling 4.36', 'Sidney Sheldon 4.63', 'Narendra Kohli 4.9', 'Jeffrey Archer 4.62', 'Devdutt Pattanaik 4.42', 'George R.R. Martin 5.0', 'Dan Brown 5.0', 'Katherine Applegate 3.0', 'Eoin Colfer 4.25', 'Arthur Conan Doyle 5.0', 'Clive Cussler 4.66', 'Stephen King 3.66', 'Douglas Preston 5.0']
</code></pre>
<p>Below is what I tried: (I split the values by space, appended them to a new list, then sorted by the third value, i.e. the rating.)</p>
<pre><code>for line in rate_order:
    sort_list.append(line.split(' '))

print sorted(sort_list, key=itemgetter(2))
</code></pre>
<p>The issue is that some author names contain more than one space, so the third value is not the rating. Is there a better (or cleaner) way?</p>
| 0 | 2016-08-30T12:18:46Z | 39,227,405 | <p>You could split at spaces, and then take the floating-point value of the last component. Here is the whole thing as a one-liner:</p>
<pre><code>>>> print sorted(rate_order, key=lambda r:float(r.split(' ')[-1]))
['Katherine Applegate 3.0', 'Stephen King 3.66', 'Eoin Colfer 4.25', 'J.K. Rowling 4.36', 'Devdutt Pattanaik 4.42', 'Jeffrey Archer 4.62', 'Sidney Sheldon 4.63', 'Clive Cussler 4.66', 'Michael Crichton 4.71', 'Narendra Kohli 4.9', 'George R.R. Martin 5.0', 'Dan Brown 5.0', 'Arthur Conan Doyle 5.0', 'Douglas Preston 5.0']
</code></pre>
<p>Note that the <code>[-1]</code> index extracts the <em>last</em> element (the first starting from the end).</p>
| 3 | 2016-08-30T12:23:54Z | [
"python",
"python-2.7"
] |
Sort list according to an uneven value | 39,227,312 | <p>I am creating a script for a library that sorts authors in descending order of their avg rating.</p>
<p>Below is my list (it has the author name + a space + the avg rating):</p>
<pre><code>['Michael Crichton 4.71', 'J.K. Rowling 4.36', 'Sidney Sheldon 4.63', 'Narendra Kohli 4.9', 'Jeffrey Archer 4.62', 'Devdutt Pattanaik 4.42', 'George R.R. Martin 5.0', 'Dan Brown 5.0', 'Katherine Applegate 3.0', 'Eoin Colfer 4.25', 'Arthur Conan Doyle 5.0', 'Clive Cussler 4.66', 'Stephen King 3.66', 'Douglas Preston 5.0']
</code></pre>
<p>Below is what I tried: (I split the values by space, appended them to a new list, then sorted by the third value, i.e. the rating.)</p>
<pre><code>for line in rate_order:
    sort_list.append(line.split(' '))

print sorted(sort_list, key=itemgetter(2))
</code></pre>
<p>The issue is that some author names contain more than one space, so the third value is not the rating. Is there a better (or cleaner) way?</p>
| 0 | 2016-08-30T12:18:46Z | 39,227,462 | <pre><code>rate_order = ['Michael Crichton 4.71', 'J.K. Rowling 4.36', 'Sidney Sheldon 4.63', 'Narendra Kohli 4.9', 'Jeffrey Archer 4.62', 'Devdutt Pattanaik 4.42', 'George R.R. Martin 5.0', 'Dan Brown 5.0', 'Katherine Applegate 3.0', 'Eoin Colfer 4.25', 'Arthur Conan Doyle 5.0', 'Clive Cussler 4.66', 'Stephen King 3.66', 'Douglas Preston 5.0']
sort_list = []
for line in rate_order:
    sort_list.append(line.split(' '))

# convert the rating to float so it compares numerically, not as a string
print(sorted(sort_list, key=lambda x: float(x[-1]), reverse=True))
</code></pre>
| 1 | 2016-08-30T12:25:59Z | [
"python",
"python-2.7"
] |
Sort list according to an uneven value | 39,227,312 | <p>I am creating a script for a library that sorts authors in descending order of their avg rating.</p>
<p>Below is my list (it has the author name + a space + the avg rating):</p>
<pre><code>['Michael Crichton 4.71', 'J.K. Rowling 4.36', 'Sidney Sheldon 4.63', 'Narendra Kohli 4.9', 'Jeffrey Archer 4.62', 'Devdutt Pattanaik 4.42', 'George R.R. Martin 5.0', 'Dan Brown 5.0', 'Katherine Applegate 3.0', 'Eoin Colfer 4.25', 'Arthur Conan Doyle 5.0', 'Clive Cussler 4.66', 'Stephen King 3.66', 'Douglas Preston 5.0']
</code></pre>
<p>Below is what I tried: (I split the values by space, appended them to a new list, then sorted by the third value, i.e. the rating.)</p>
<pre><code>for line in rate_order:
    sort_list.append(line.split(' '))

print sorted(sort_list, key=itemgetter(2))
</code></pre>
<p>The issue is that some author names contain more than one space, so the third value is not the rating. Is there a better (or cleaner) way?</p>
| 0 | 2016-08-30T12:18:46Z | 39,228,231 | <p>You can use regex (<a href="https://en.wikipedia.org/wiki/Regular_expression" rel="nofollow">regular expressions</a>) matching in the <code>sorted</code> key function. With this kind of search you can find the rating independent of its position in the string.
</p>
<pre><code>import re
print sorted(sort_list, key=lambda el: float(re.search(r'\d\.\d+', el).group(0)))
</code></pre>
<p>In this code we use the built-in <a href="https://docs.python.org/2/library/re.html#re.search" rel="nofollow">re library</a>, searching with the pattern <code>\d\.\d+</code>.</p>
<p>Where:</p>
<blockquote>
<p><code>\d</code> - matches any digit character (equivalent to <code>[0-9]</code>)</p>
<p><code>\.</code> - matches dot itself</p>
<p><code>\d+</code> - matches 1 or more digit characters</p>
</blockquote>
<p>The <a href="https://docs.python.org/2/library/re.html#re.MatchObject.group" rel="nofollow"><code>.group(0)</code> method</a> returns the text matched by the regex.</p>
<p>The <code>lambda</code> function comes from <a href="https://docs.python.org/2/howto/functional.html?highlight=lambda#small-functions-and-the-lambda-expression" rel="nofollow">functional programming</a>; its <code>el</code> argument corresponds to each element of your list during sorting.</p>
| 0 | 2016-08-30T13:00:55Z | [
"python",
"python-2.7"
] |
pandas dataframe, add one column as moving average of another column for each group | 39,227,519 | <p>I have a dataframe <code>df</code> like below. </p>
<pre><code>dates = pd.date_range('2000-01-01', '2001-01-01')
df1 = pd.DataFrame({'date':dates, 'value':np.random.normal(size = len(dates)), 'market':'GOLD'})
df2 = pd.DataFrame({'date':dates, 'value':np.random.normal(size = len(dates)), 'market':'SILVER'})
df = pd.concat([df1, df2])
df = df.sort('date')
date market value
0 2000-01-01 GOLD -1.361360
0 2000-01-01 SILVER 0.255830
1 2000-01-02 SILVER 0.196953
1 2000-01-02 GOLD 1.422454
2 2000-01-03 GOLD -0.827672
...
</code></pre>
<p>I want to add another column as the 10d moving average of value, for each market. </p>
<p>Is there a simple <code>df.groupby('market').???</code> that can achieve this? Or do I have to pivot the table to wide form, smooth each column, then melt back?</p>
| 1 | 2016-08-30T12:28:06Z | 39,227,650 | <p>You could use <code>groupby/rolling/mean</code>:</p>
<pre><code>result = (df.set_index('date')
            .groupby('market')['value']
            .rolling(10).mean()
            .unstack('market'))
</code></pre>
<p>yields</p>
<pre><code>market GOLD SILVER
date
2000-01-01 NaN NaN
2000-01-02 NaN NaN
2000-01-03 NaN NaN
2000-01-04 NaN NaN
2000-01-05 NaN NaN
2000-01-06 NaN NaN
2000-01-07 NaN NaN
2000-01-08 NaN NaN
2000-01-09 NaN NaN
2000-01-10 0.310077 0.582063
2000-01-11 0.312008 0.752218
2000-01-12 0.151159 0.877230
2000-01-13 0.213611 0.742156
2000-01-14 0.440113 0.614720
2000-01-15 0.551360 0.649967
...
</code></pre>
| 3 | 2016-08-30T12:33:48Z | [
"python",
"pandas"
] |
pandas dataframe, add one column as moving average of another column for each group | 39,227,519 | <p>I have a dataframe <code>df</code> like below. </p>
<pre><code>dates = pd.date_range('2000-01-01', '2001-01-01')
df1 = pd.DataFrame({'date':dates, 'value':np.random.normal(size = len(dates)), 'market':'GOLD'})
df2 = pd.DataFrame({'date':dates, 'value':np.random.normal(size = len(dates)), 'market':'SILVER'})
df = pd.concat([df1, df2])
df = df.sort('date')
date market value
0 2000-01-01 GOLD -1.361360
0 2000-01-01 SILVER 0.255830
1 2000-01-02 SILVER 0.196953
1 2000-01-02 GOLD 1.422454
2 2000-01-03 GOLD -0.827672
...
</code></pre>
<p>I want to add another column as the 10d moving average of value, for each market. </p>
<p>Is there a simple <code>df.groupby('market').???</code> that can achieve this? Or do I have to pivot the table to wide form, smooth each column, then melt back?</p>
| 1 | 2016-08-30T12:28:06Z | 39,319,486 | <p>This builds on @unutbu's answer, and adds the results back to the original dataframe as a new column.</p>
<pre><code>result = df.set_index('date').groupby('market')['value'].rolling(10).mean()
</code></pre>
<p>Now if <code>df</code> is sorted by <code>market</code> <strong>first</strong> and then <code>date</code>, the results should be in sync and we can just assign back the values</p>
<pre><code>df.sort_values(['market','date'], inplace = True)
df['value10d_1'] = result.values
</code></pre>
<p>However, if you are as paranoid as I am, <code>merge</code> should give peace of mind:</p>
<pre><code>df = pd.merge(df, result.reset_index().rename(columns = {'value':'value10d_2'}), on = ['market','date'])
df['value10d_1'] - df['value10d_2'] # all 0
</code></pre>
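An alternative that avoids both the manual sort and the merge is <code>groupby(...).transform</code>, which returns values aligned to the original index. This is a hedged sketch on a tiny synthetic frame (the column name and window size are made up for the demonstration):

```python
import numpy as np
import pandas as pd

dates = pd.date_range('2000-01-01', periods=5)
df = pd.concat([
    pd.DataFrame({'date': dates, 'market': 'GOLD',   'value': range(5)}),
    pd.DataFrame({'date': dates, 'market': 'SILVER', 'value': range(10, 15)}),
]).sort_values('date').reset_index(drop=True)

# transform returns a result aligned to df's index, so the moving
# average lands on the right rows without any sorting or merging
df['ma3'] = df.groupby('market')['value'].transform(lambda s: s.rolling(3).mean())

gold = df[df['market'] == 'GOLD']
print(gold['ma3'].tolist())
```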
| 1 | 2016-09-04T17:11:57Z | [
"python",
"pandas"
] |
Django Restful Framework Authentication HTTP Post Error | 39,227,593 | <p>I was following the Django REST framework tutorial and I came across an error at the "Authenticating with the API" step. I've retraced my steps, but I can't see where I went wrong. Basically, when I POST to my API I get an error, see below. Ideally I would also like to set up permissions that return "access denied" unless the requester is the object owner; any advice on this would be greatly appreciated.</p>
<pre><code>http -a jimmynos:password POST http://127.0.0.1:8000/snippets/ code="print 789"
</code></pre>
<p>Some of the error:</p>
<pre><code>IntegrityError at /snippets/
NOT NULL constraint failed: snippets_snippet.owner_id
Request Method: POST
Request URL: http://127.0.0.1:8000/snippets/
Django Version: 1.9.9
Python Executable: /Users/james/Documents/django/tutorialSerialization/env/bin/python
Python Version: 2.7.10
Python Path: ['/Users/james/Documents/django/tutorialSerialization/tutorial', '/Users/james/Documents/django/tutorialSerialization/env/lib/python27.zip', '/User
</code></pre>
<p>views.py</p>
<pre><code>from snippets.models import Snippet
from snippets.serializers import SnippetSerializer
from django.http import Http404
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status
from django.contrib.auth.models import User
from snippets.serializers import UserSerializer
from rest_framework import generics
from rest_framework import permissions
from snippets.permissions import IsOwnerOrReadOnly

class UserList(generics.ListAPIView):
    queryset = User.objects.all()
    serializer_class = UserSerializer

class UserDetail(generics.RetrieveAPIView):
    queryset = User.objects.all()
    serializer_class = UserSerializer

class SnippetList(APIView):
    """
    List all snippets, or create a new snippet.
    """
    permission_classes = (permissions.IsAuthenticatedOrReadOnly, IsOwnerOrReadOnly,)

    def get(self, request, format=None):
        #snippets = Snippet.objects.all()
        snippets = Snippet.objects.filter(owner=self.request.user)
        serializer = SnippetSerializer(snippets, many=True)
        return Response(serializer.data)

    def post(self, request, format=None):
        serializer = SnippetSerializer(data=request.data)
        if serializer.is_valid():
            serializer.save()
            return Response(serializer.data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)

    def perform_create(self, serializer):
        serializer.save(owner=self.request.user)

class SnippetDetail(APIView):
    """
    Retrieve, update or delete a snippet instance.
    """
    permission_classes = (permissions.IsAuthenticatedOrReadOnly, IsOwnerOrReadOnly,)

    def get_object(self, pk):
        try:
            return Snippet.objects.get(pk=pk)
        except Snippet.DoesNotExist:
            raise Http404

    def get(self, request, pk, format=None):
        snippet = self.get_object(pk)
        serializer = SnippetSerializer(snippet)
        return Response(serializer.data)

    def put(self, request, pk, format=None):
        snippet = self.get_object(pk)
        serializer = SnippetSerializer(snippet, data=request.data)
        if serializer.is_valid():
            serializer.save()
            return Response(serializer.data)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)

    def delete(self, request, pk, format=None):
        snippet = self.get_object(pk)
        snippet.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)
</code></pre>
<p>serializers.py</p>
<pre><code>from rest_framework import serializers
from snippets.models import Snippet, LANGUAGE_CHOICES, STYLE_CHOICES
from django.contrib.auth.models import User

class SnippetSerializer(serializers.ModelSerializer):
    owner = serializers.ReadOnlyField(source='owner.username')

    class Meta:
        model = Snippet
        #owner = serializers.ReadOnlyField(source='owner.username')
        fields = ('id', 'title', 'code', 'linenos', 'language', 'style', 'owner')

class UserSerializer(serializers.ModelSerializer):
    snippets = serializers.PrimaryKeyRelatedField(many=True, queryset=Snippet.objects.all())

    class Meta:
        model = User
        fields = ('id', 'username', 'snippets')
</code></pre>
<p>permissions.py</p>
<pre><code>from rest_framework import permissions

class IsOwnerOrReadOnly(permissions.BasePermission):
    """
    Custom permission to only allow owners of an object to edit it.
    """

    def has_object_permission(self, request, view, obj):
        # Read permissions are allowed to any request,
        # so we'll always allow GET, HEAD or OPTIONS requests.
        if request.method in permissions.SAFE_METHODS:
            return True

        # Write permissions are only allowed to the owner of the snippet.
        return obj.owner == request.user
</code></pre>
<p>The error must have something to do with</p>
<pre><code>owner = serializers.ReadOnlyField(source='owner.username')
</code></pre>
<p>or</p>
<pre><code>def perform_create(self, serializer):
    serializer.save(owner=self.request.user)
</code></pre>
<p>models.py</p>
<pre><code>from django.db import models
from pygments.lexers import get_all_lexers
from pygments.styles import get_all_styles

LEXERS = [item for item in get_all_lexers() if item[1]]
LANGUAGE_CHOICES = sorted([(item[1][0], item[0]) for item in LEXERS])
STYLE_CHOICES = sorted((item, item) for item in get_all_styles())

class Snippet(models.Model):
    created = models.DateTimeField(auto_now_add=True)
    title = models.CharField(max_length=100, blank=True, default='')
    code = models.TextField()
    linenos = models.BooleanField(default=False)
    language = models.CharField(choices=LANGUAGE_CHOICES, default='python', max_length=100)
    style = models.CharField(choices=STYLE_CHOICES, default='friendly', max_length=100)
    owner = models.ForeignKey('auth.User', related_name='snippets')
    highlighted = models.TextField()

    class Meta:
        ordering = ('created',)
</code></pre>
<p>Environment:</p>
<pre><code>Python 2.7.10 on MAC
Django==1.9.9
djangorestframework==3.4.6
Pygments==2.1.3
</code></pre>
<p><a href="http://www.django-rest-framework.org/tutorial/4-authentication-and-permissions/" rel="nofollow">Django Restful Framework API</a></p>
| 0 | 2016-08-30T12:31:40Z | 39,647,010 | <p>The error comes from <code>owner = serializers.ReadOnlyField(source='owner.username')</code>.</p>
<p>Because <strong>owner</strong> is read-only, it is never part of the serializer's validated data, so it has to be supplied explicitly when saving.</p>
<p>Your <code>perform_create</code> hook is never called, because it belongs to DRF's generic views and <code>SnippetList</code> is a plain <code>APIView</code>. As a result, <code>serializer.save()</code> in your <code>post</code> method runs without an owner and the NOT NULL constraint fails. Call <code>serializer.save(owner=request.user)</code> there instead.</p>
| 1 | 2016-09-22T19:03:58Z | [
"python",
"django",
"django-rest-framework"
] |
How do you mock a return value from PyMySQL for testing in Python? | 39,227,681 | <p>I'm trying to set up some tests for a script that will pull data from a MySQL database. I found an example <a href="https://github.com/chimpler/catdb/blob/master/tests/test_mysql.py" rel="nofollow">here</a> that looks like what I want to do, but it just gives me an object instead of results:</p>
<p><code><MagicMock name='pymysql.connect().cursor[38 chars]152'></code></p>
<p>Here is the function I am using (<code>simple.py</code>):</p>
<pre><code>import pymysql

def get_user_data():
    connection = pymysql.connect(host='localhost', user='user', password='password',
                                 db='db', charset='utf8mb4',
                                 cursorclass=pymysql.cursors.DictCursor)
    try:
        with connection.cursor() as cursor:
            sql = "SELECT `id`, `password` FROM `users`"
            cursor.execute(sql)
            results = cursor.fetchall()
    finally:
        connection.close()
    return results
</code></pre>
<p>And the test:</p>
<pre><code>from unittest import TestCase, mock

import simple

class TestSimple(TestCase):
    @mock.patch('simple.pymysql', autospec=True)
    def test_get_data(self, mock_pymysql):
        mock_cursor = mock.MagicMock()
        test_data = [{'password': 'secret', 'id': 1}]
        mock_cursor.fetchall.return_value = test_data
        mock_pymysql.connect.return_value.__enter__.return_value = mock_cursor
        self.assertEqual(test_data, simple.get_user_data())
</code></pre>
<p>The results:</p>
<pre><code>AssertionError: [{'id': 1, 'password': 'secret'}] != <MagicMock name='pymysql.connect().cursor[38 chars]840'>
</code></pre>
<p>I'm using Python 3.5.1</p>
| 1 | 2016-08-30T12:35:16Z | 39,229,832 | <p>It looks like your mock is missing a reference to cursor</p>
<p><code>mock_pymysql.connect.return_value.cursor.return_value.__enter__.return_value = mock_cursor</code></p>
<p>I've always found mocking call syntax to be awkward but the MagicMock displays what's wrong in a roundabout way. It's saying it has no return value registered for <code>pymysql.connect.return_value.cursor</code></p>
<p><code><MagicMock name='pymysql.connect().cursor[38 chars]840'></code></p>
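To see why the extra <code>.cursor</code> hop matters, here is a dependency-free sketch of the same chain, with a bare <code>MagicMock</code> standing in for the pymysql module (no real database or pymysql install involved):

```python
from unittest import mock

test_data = [{'password': 'secret', 'id': 1}]

fake_pymysql = mock.MagicMock()
fake_cursor = mock.MagicMock()
fake_cursor.fetchall.return_value = test_data

# connect() -> connection; connection.cursor() is then used as a
# context manager, hence cursor.return_value.__enter__.return_value
fake_pymysql.connect.return_value.cursor.return_value.__enter__.return_value = fake_cursor

connection = fake_pymysql.connect(host='localhost')
with connection.cursor() as cursor:
    cursor.execute("SELECT 1")
    results = cursor.fetchall()

print(results)  # [{'password': 'secret', 'id': 1}]
```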
| 1 | 2016-08-30T14:10:24Z | [
"python",
"unit-testing"
] |
Merge two lists and create a new dictionary | 39,227,706 | <p>I could not find a good way to do this. Suppose I have two lists (the lists contain objects with given attributes). I need to create a new dictionary/list with merged attributes.</p>
<pre><code>listA = [
    {
        "alpha": "some value",
        "time": "datetime",
    },
    ...
]

listB = [
    {
        "beta": "some val",
        "gamma": "some val",
        "time": "datetime"
    },
    ...
]
</code></pre>
<p>The result should be as follows (it should be merged based on "time" attribute)</p>
<pre><code>result = {
    "datetime": {
        "alpha": "some value",
        "beta": "some val",
        "gamma": "some val"
    },
    ...
}
</code></pre>
<p>How do I do this in a python way?</p>
<p>For example,</p>
<pre><code>listA = [
    {
        "time": "Jan 1",
        "alpha": "one"
    },
    {
        "time": "Jan 3",
        "alpha": "three"
    }
]

listB = [
    {
        "beta": "one-one",
        "gamma": "one-two",
        "time": "Jan 1"
    },
    {
        "beta": "two-one",
        "gamma": "two-two",
        "time": "Jan 2"
    },
]

result = {
    "Jan 1": {
        "alpha": "one",
        "beta": "one-one",
        "gamma": "one-two",
    },
    "Jan 2": {
        "beta": "two-one",
        "gamma": "two-two",
    },
    "Jan 3": {
        "alpha": "three"
    }
}
</code></pre>
| -4 | 2016-08-30T12:36:06Z | 39,228,238 | <p>This code might be a good start:</p>
<pre><code>listA = [
    {
        "time": "Jan 1",
        "alpha": "one"
    },
    {
        "time": "Jan 3",
        "alpha": "three"
    }
]

listB = [
    {
        "beta": "one-one",
        "gamma": "one-two",
        "time": "Jan 1"
    },
    {
        "beta": "two-one",
        "gamma": "two-two",
        "time": "Jan 2"
    },
]

result = {}

# We consider every element of A and B one by one
for elem in listA + listB:
    key = elem["time"]
    # If this is the first time we encounter that key, we create a new empty dict in result
    if not result.get(key, None):
        result[key] = {}
    # We copy the content of the elem in listA or listB into the right dictionary in result.
    for dictKey in elem.keys():
        # We don't want to copy the time
        if dictKey == "time":
            continue
        result[key][dictKey] = elem[dictKey]

print(result)
</code></pre>
| 0 | 2016-08-30T13:01:12Z | [
"python",
"list",
"dictionary"
] |
Merge two lists and create a new dictionary | 39,227,706 | <p>I could not find a good way to do this. Suppose I have two lists (the lists have objects with given attributes). I need to create a new dictionary/list with merged attributes.</p>
<pre><code>listA = [
{
"alpha": "some value",
"time": "datetime",
},
...
]
listB = [
{
"beta": "some val",
"gamma": "some val",
"time": "datetime"
},
...
]
</code></pre>
<p>The result should be as follows (it should be merged based on "time" attribute)</p>
<pre><code>result = {
"datetime": {
"alpha": "some value",
"beta": "some val",
"gamma": "some val"
},
...
}
</code></pre>
<p>How do I do this in a python way?</p>
<p>For example,</p>
<pre><code>listA = [
{
"time": "Jan 1",
"alpha": "one"
},
{
"time": "Jan 3",
"alpha": "three"
}
]
listB = [
{
"beta": "one-one",
"gamma": "one-two",
"time": "Jan 1"
},
{
"beta": "two-one",
"gamma": "two-two",
"time": "Jan 2"
},
]
result = {
"Jan 1": {
"alpha": "one",
"beta": "one-one",
"gamma": "one-two",
},
"Jan 2": {
"beta": "two-one",
"gamma": "two-two",
},
"Jan 3": {
"alpha": "three"
}
}
</code></pre>
| -4 | 2016-08-30T12:36:06Z | 39,228,280 | <h1>Using a list comprehension</h1>
<p>Since you are searching for an alternative that does not use a for loop, here is an implementation using <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehensions</a>, which results in a two-liner. I am not sure, though, that this is more intuitive than a for loop:</p>
<pre><code>output = {}
[output.setdefault(item["time"], {}).update({key: value})
 for item in (listA + listB)
 for key, value in item.items()
 if key != "time"]
</code></pre>
<p>To me this is just a more convoluted way of writing the for loops...
Documentation for <a href="https://docs.python.org/3/library/stdtypes.html#dict.setdefault" rel="nofollow">setdefault</a>.</p>
<h1>Using classic for loops</h1>
<p>Use <code>listA</code> and <code>listB</code> as given in your example:</p>
<pre><code>combined = listA + listB
merged = {}
for item in combined:
time = item["time"]
# setdefault only acts if the key is not found, initiate a dict then
merged.setdefault(time, {})
for key, value in item.items():
if key != "time":
merged[time].update({key: value})
print merged
</code></pre>
<p>Output:</p>
<pre><code>{'Jan 2': {'beta': 'two-one', 'gamma': 'two-two'}, 'Jan 3': {'alpha': 'three'}, 'Jan 1': {'alpha': 'one', 'beta': 'one-one', 'gamma': 'one-two'}}
</code></pre>
| 2 | 2016-08-30T13:03:43Z | [
"python",
"list",
"dictionary"
] |
Merge two lists and create a new dictionary | 39,227,706 | <p>I could not find a good way to do this. Suppose I have two lists (the lists have objects with given attributes). I need to create a new dictionary/list with merged attributes.</p>
<pre><code>listA = [
{
"alpha": "some value",
"time": "datetime",
},
...
]
listB = [
{
"beta": "some val",
"gamma": "some val",
"time": "datetime"
},
...
]
</code></pre>
<p>The result should be as follows (it should be merged based on "time" attribute)</p>
<pre><code>result = {
"datetime": {
"alpha": "some value",
"beta": "some val",
"gamma": "some val"
},
...
}
</code></pre>
<p>How do I do this in a python way?</p>
<p>For example,</p>
<pre><code>listA = [
{
"time": "Jan 1",
"alpha": "one"
},
{
"time": "Jan 3",
"alpha": "three"
}
]
listB = [
{
"beta": "one-one",
"gamma": "one-two",
"time": "Jan 1"
},
{
"beta": "two-one",
"gamma": "two-two",
"time": "Jan 2"
},
]
result = {
"Jan 1": {
"alpha": "one",
"beta": "one-one",
"gamma": "one-two",
},
"Jan 2": {
"beta": "two-one",
"gamma": "two-two",
},
"Jan 3": {
"alpha": "three"
}
}
</code></pre>
| -4 | 2016-08-30T12:36:06Z | 39,229,105 | <p>Another answer, that's perhaps cleaner in that it avoids conditional tests in favour of dict methods and uses only one level of indentation:</p>
<pre><code>d={}
for e in listA:
t = e["time"]
d.setdefault(t, {}).update(**e)
for e in listB:
t = e["time"]
d.setdefault(t, {}).update(**e)
# get rid of "time" keys, if important to do so
for e in d.values():
del e["time"]
</code></pre>
<p><code>d.setdefault(t, {})</code> creates <code>d[t]</code> as an empty dict if the key <code>t</code> is not yet present in <code>d</code>, and returns <code>d[t]</code>. Then<code>.update(**e)</code> updates the returned dict to contain all keys and values in <code>e</code> (replacing current values if they exist, which may be a bug or a feature - the example did not have any overlaps or say what should happen if there are overlaps)</p>
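<p>The same <code>setdefault</code> pattern can also be expressed with <code>collections.defaultdict</code>, which creates the inner dict automatically on first access — a minimal sketch using the example lists from the question:</p>

```python
from collections import defaultdict

listA = [{"time": "Jan 1", "alpha": "one"},
         {"time": "Jan 3", "alpha": "three"}]
listB = [{"beta": "one-one", "gamma": "one-two", "time": "Jan 1"},
         {"beta": "two-one", "gamma": "two-two", "time": "Jan 2"}]

# defaultdict(dict) hands back a fresh {} for any missing key
merged = defaultdict(dict)
for item in listA + listB:
    for key, value in item.items():
        if key != "time":  # keep "time" as the grouping key only
            merged[item["time"]][key] = value

print(dict(merged))
```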
| 1 | 2016-08-30T13:38:32Z | [
"python",
"list",
"dictionary"
] |
Navigating table by using th text | 39,227,725 | <p>I have the following table:</p>
<pre><code><table class="information">
<tr> .... lots of rows with <th> and <td></tr>
<tr>
<th>Nationality</th>
<td><a href="..">Stackoverflowian</a></td>
</tr>
</table>
</code></pre>
<p>I want to find the text inside the td tags under the th with "nationality" in it. How should I navigate there? I'm using Beautifulsoup and Python.</p>
<p><strong>Added that there are lots of th and td tags above this, to underline that it isn't sufficient to just find the first th</strong></p>
| 0 | 2016-08-30T12:37:21Z | 39,227,978 | <p>Find the <code>th</code> tag, then get its <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#next-sibling-and-previous-sibling" rel="nofollow">next sibling</a>:</p>
<pre><code>soup = BeautifulSoup(html)
ths = soup.find_all('th')
for th in ths:
if th.text == "Nationality":
print th.next_sibling.next_sibling.text
# Stackoverflowian
</code></pre>
<p>We need to do <code>next_sibling</code> twice because the first one will give the newline.</p>
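<p>As a side note, BeautifulSoup also provides <code>find_next_sibling()</code>, which skips over whitespace-only text nodes, so the doubled <code>next_sibling</code> is not needed — a small self-contained sketch (the HTML is inlined here for illustration):</p>

```python
from bs4 import BeautifulSoup

html = """
<table class="information">
  <tr>
    <th>Nationality</th>
    <td><a href="..">Stackoverflowian</a></td>
  </tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
th = soup.find("th", string="Nationality")
# find_next_sibling skips the newline text node between <th> and <td>
td = th.find_next_sibling("td")
print(td.text)  # Stackoverflowian
```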
| 2 | 2016-08-30T12:48:43Z | [
"python",
"beautifulsoup"
] |
Navigating table by using th text | 39,227,725 | <p>I have the following table:</p>
<pre><code><table class="information">
<tr> .... lots of rows with <th> and <td></tr>
<tr>
<th>Nationality</th>
<td><a href="..">Stackoverflowian</a></td>
</tr>
</table>
</code></pre>
<p>I want to find the text inside the td tags under the th with "nationality" in it. How should I navigate there? I'm using Beautifulsoup and Python.</p>
<p><strong>Added that there are lots of th and td tags above this, to underline that it isn't sufficient to just find the first th</strong></p>
| 0 | 2016-08-30T12:37:21Z | 39,227,991 | <p>I've modified this answer as you gave a specific HTML page you're trying to parse.</p>
<pre><code>import requests
from bs4 import BeautifulSoup

r = requests.get("https://en.wikipedia.org/wiki/Usain_Bolt")
# test that we loaded the page successfully!
soup = BeautifulSoup(r.text, "html.parser")
thTag = soup.find('th', text='Nationality')
tdTag = thTag.next_sibling.next_sibling
print(tdTag.text)
>>>'Jamaican'
</code></pre>
| 1 | 2016-08-30T12:49:10Z | [
"python",
"beautifulsoup"
] |
ApiError: Incorrect username or password YouTube Account Authentication error | 39,227,756 | <p>I am new to Stack Overflow and also to Python/Django with the YouTube API. I am trying to upload videos from my own website using the django-youtube 0.2 python package. I am using django 7.11.
My project is configured per this link: <a href="https://github.com/laplacesdemon/django-youtube" rel="nofollow">https://github.com/laplacesdemon/django-youtube</a>.
Note: the account authentication only works with the demo Gmail account, but when I try to create an account for the live site it doesn't work, even though all the credentials and settings are correct.
If anyone knows the YouTube API help email or contact information to solve this issue, please let me know. Thanks. </p>
<hr>
<p>settings.py</p>
<pre><code>INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_youtube'
)
YOUTUBE_AUTH_EMAIL = "xxxxx@gmail.com"
YOUTUBE_AUTH_PASSWORD = "xxxxxxx"
YOUTUBE_DEVELOPER_KEY = 'XXXX'
YOUTUBE_CLIENT_ID = 'XXXX-XXXX.apps.googleusercontent.com'
</code></pre>
<hr>
<p>api.py </p>
<pre><code>import os
import gdata.youtube.service
from django.conf import settings
from django.utils.translation import ugettext as _
YOUTUBE_READ_WRITE_SCOPE = "https://www.googleapis.com/auth/youtube"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
class OperationError(BaseException):
"""
Raise when an error happens on Api class
"""
pass
class ApiError(BaseException):
"""
Raise when a Youtube API related error occurs
i.e. redirect Youtube errors with this error
"""
pass
class AccessControl:
"""
Enum-like structure to determine the permission of a video
"""
Public, Unlisted, Private = range(3)
class Api:
"""
Wrapper for Youtube API
See: https://developers.google.com/youtube/1.0/developers_guide_python
"""
# Service class is a shared resource
yt_service = gdata.youtube.service.YouTubeService()
def __init__(self):
try:
print settings.YOUTUBE_DEVELOPER_KEY
self.developer_key = settings.YOUTUBE_DEVELOPER_KEY
except AttributeError:
raise OperationError(
"Youtube Developer Key is missing on settings.")
try:
# client id is not required but will be used for other features like analytics
self.client_id = settings.YOUTUBE_CLIENT_ID
except AttributeError:
self.client_id = None
# Turn on HTTPS/SSL access.
# Note: SSL is not available at this time for uploads.
Api.yt_service.ssl = False
# Set the developer key, and optional client id
Api.yt_service.developer_key = self.developer_key
if self.client_id:
Api.yt_service.client_id = self.client_id
self.authenticated = False
def _access_control(self, access_control, my_media_group=None):
"""
Prepares the extension element for access control
Extension element is the optional parameter for the YouTubeVideoEntry
We use extension element to modify access control settings
Returns:
tuple of extension elements
"""
# Access control
extension = None
if access_control is AccessControl.Private:
# WARNING: this part of code is not tested
# set video as private
if my_media_group:
my_media_group.private = gdata.media.Private()
elif access_control is AccessControl.Unlisted:
# set video as unlisted
from gdata.media import YOUTUBE_NAMESPACE
from atom import ExtensionElement
kwargs = {
"namespace": YOUTUBE_NAMESPACE,
"attributes": {'action': 'list', 'permission': 'denied'},
}
extension = ([ExtensionElement('accessControl', **kwargs)])
return extension
def fetch_video(self, video_id):
"""
Retrieve a specific video entry and return it
@see http://gdata-python-client.googlecode.com/hg/pydocs/gdata.youtube.html#YouTubeVideoEntry
"""
return Api.yt_service.GetYouTubeVideoEntry('http://gdata.youtube.com/feeds/api/users/default/uploads/%s?v=2.1' % video_id)
def fetch_feed_by_username(self, username):
"""
Retrieve the video feed by username
Returns:
gdata.youtube.YouTubeVideoFeed object
"""
# Don't use trailing slash
youtube_url = 'http://gdata.youtube.com/feeds/api'
uri = os.sep.join([youtube_url, "users", username, "uploads"])
return Api.yt_service.GetYouTubeVideoFeed(uri)
def authenticate(self, email=None, password=None, source=None):
"""
Authenticates the user and sets the GData Auth token.
All params are optional, if not set, we will use the ones on the settings, if no settings found, raises AttributeError
params are email, password and source. Source is the app id
Raises:
gdata.service.exceptions.BadAuthentication
"""
from gdata.service import BadAuthentication
# Auth parameters
Api.yt_service.email = email if email else settings.YOUTUBE_AUTH_EMAIL
Api.yt_service.password = password if password else settings.YOUTUBE_AUTH_PASSWORD
Api.yt_service.source = source if source else settings.YOUTUBE_CLIENT_ID
try:
Api.yt_service.ProgrammaticLogin()
self.authenticated = True
except BadAuthentication:
print Api.yt_service.email
print Api.yt_service.password
print Api.yt_service.source
raise ApiError(_("Incorrect username or password"))
def upload_direct(self, video_path, title, description="", keywords="", developer_tags=None, access_control=AccessControl.Unlisted):
"""
Direct upload method:
Uploads the video directly from your server to Youtube and creates a video
Returns:
gdata.youtube.YouTubeVideoEntry
See: https://developers.google.com/youtube/1.0/developers_guide_python#UploadingVideos
"""
# prepare a media group object to hold our video's meta-data
my_media_group = gdata.media.Group(
title=gdata.media.Title(text=title),
description=gdata.media.Description(description_type='plain',
text=description),
keywords=gdata.media.Keywords(text=keywords),
category=[gdata.media.Category(
text='Autos',
scheme='http://gdata.youtube.com/schemas/2007/categories.cat',
label='Autos')],
#player = None
)
# Access Control
extension = self._access_control(access_control, my_media_group)
# create the gdata.youtube.YouTubeVideoEntry to be uploaded
video_entry = gdata.youtube.YouTubeVideoEntry(media=my_media_group, extension_elements=extension)
# add developer tags
if developer_tags:
video_entry.AddDeveloperTags(developer_tags)
# upload the video and create a new entry
new_entry = Api.yt_service.InsertVideoEntry(video_entry, video_path)
return new_entry
def upload(self, title, description="", keywords="", developer_tags=None, access_control=AccessControl.Public):
"""
Browser based upload
Creates the video entry and meta data to initiate a browser upload
Authentication is needed
Params:
title: string
description: string
keywords: comma seperated string
developer_tags: tuple
Return:
dict contains post_url and youtube_token. i.e { 'post_url': post_url, 'youtube_token': youtube_token }
Raises:
ApiError: on no authentication
"""
# Raise ApiError if not authenticated
if not self.authenticated:
raise ApiError(_("Authentication is required"))
# create media group
my_media_group = gdata.media.Group(
title=gdata.media.Title(text=title),
description=gdata.media.Description(description_type='plain',
text=description),
keywords=gdata.media.Keywords(text=keywords),
category=[gdata.media.Category(
text='Autos',
scheme='http://gdata.youtube.com/schemas/2007/categories.cat',
label='Autos')],
#player = None
)
# Access Control
extension = self._access_control(access_control, my_media_group)
# create video entry
video_entry = gdata.youtube.YouTubeVideoEntry(
media=my_media_group, extension_elements=extension)
# add developer tags
if developer_tags:
video_entry.AddDeveloperTags(developer_tags)
# upload meta data only
response = Api.yt_service.GetFormUploadToken(video_entry)
# parse response tuple and use the variables to build a form
post_url = response[0]
youtube_token = response[1]
return {'post_url': post_url, 'youtube_token': youtube_token}
def check_upload_status(self, video_id):
"""
Checks the video upload status
Newly uploaded videos may be in the processing state
Authentication is required
Returns:
True if video is available
otherwise a dict containes upload_state and detailed message
i.e. {"upload_state": "processing", "detailed_message": ""}
"""
# Raise ApiError if not authenticated
if not self.authenticated:
raise ApiError(_("Authentication is required"))
entry = self.fetch_video(video_id)
upload_status = Api.yt_service.CheckUploadStatus(entry)
if upload_status is not None:
video_upload_state = upload_status[0]
detailed_message = upload_status[1]
return {"upload_state": video_upload_state, "detailed_message": detailed_message}
else:
return True
def update_video(self, video_id, title="", description="", keywords="", access_control=AccessControl.Unlisted):
"""
Updates the video
Authentication is required
Params:
entry: video entry fetch via 'fetch_video()'
title: string
description: string
keywords: string
Returns:
a video entry on success
None otherwise
"""
# Raise ApiError if not authenticated
if not self.authenticated:
raise ApiError(_("Authentication is required"))
entry = self.fetch_video(video_id)
# Set Access Control
extension = self._access_control(access_control)
if extension:
entry.extension_elements = extension
if title:
entry.media.title.text = title
if description:
entry.media.description.text = description
#if keywords:
# entry.media.keywords.text = keywords
success = Api.yt_service.UpdateVideoEntry(entry)
return success
#if success is None:
# raise OperationError(_("Cannot update video on Youtube"))
def delete_video(self, video_id):
"""
Deletes the video
Authentication is required
Params:
entry: video entry fetch via 'fetch_video()'
Return:
True if successful
Raise:
OperationError: on unsuccessful deletion
"""
# Raise ApiError if not authenticated
if not self.authenticated:
raise ApiError(_("Authentication is required"))
entry = self.fetch_video(video_id)
response = Api.yt_service.DeleteVideoEntry(entry)
if not response:
raise OperationError(_("Cannot be deleted from Youtube"))
return True
</code></pre>
<hr>
<p>error message</p>
<pre><code>Traceback (most recent call last):
File "D:\Work\virtualenvs\vskigit\lib\site-packages\django\core\handlers\base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "D:\Work\virtualenvs\vskigit\lib\site-packages\django\views\decorators\csrf.py", line 57, in wrapped_view
return view_func(*args, **kwargs)
File "D:\Work\virtualenvs\vskigit\lib\site-packages\django\contrib\auth\decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "D:\Work\pyprojects\skigit_project\skigit\utils.py", line 79, in inner
return func(request, *args, **kwargs)
File "D:\Work\pyprojects\skigit_project\skigit\views.py", line 4335, in ajax_direct_uploade
api.authenticate()
File "D:\Work\pyprojects\skigit_project\skigit\api.py", line 136, in authenticate
raise ApiError(_("Incorrect username or password"))
ApiError: Incorrect username or password
[30/Aug/2016 17:31:10] "POST /youtube/direct-upload HTTP/1.1" 500 16826
</code></pre>
| 0 | 2016-08-30T12:38:33Z | 39,306,677 | <pre><code>Issue resolved: the Google account needed 2-Step Verification enabled for YouTube API access to upload videos ...
</code></pre>
| 0 | 2016-09-03T12:03:31Z | [
"python",
"django",
"youtube"
] |
requests process hangs | 39,227,820 | <p>I'm using <code>requests</code> to get a URL, such as:</p>
<pre><code>import socket
import requests

while True:
try:
rv = requests.get(url, timeout=1)
doSth(rv)
except socket.timeout as e:
print e
except Exception as e:
print e
</code></pre>
<p>After it runs for a while, it quits working. No exception or any error, just like it suspended. I then stop the process by typing Ctrl+C from the console. It shows that the process is waiting for data:</p>
<pre class="lang-none prettyprint-override"><code> .............
httplib_response = conn.getresponse(buffering=True) #httplib.py
response.begin() #httplib.py
version, status, reason = self._read_status() #httplib.py
line = self.fp.readline(_MAXLINE + 1) #httplib.py
data = self._sock.recv(self._rbufsize) #socket.py
KeyboardInterrupt
</code></pre>
<p>Why is this happening? Is there a solution?</p>
| 0 | 2016-08-30T12:41:08Z | 39,229,526 | <p>It appears that the server you're sending your <code>request</code> to is throttling you - that is, it's sending <code>bytes</code> with less than 1 second between each package (thus not triggering your <code>timeout</code> parameter), but slow enough for it to appear to be stuck. </p>
<p>The only fix for this I can think of is to reduce the <code>timeout</code> parameter, unless you can fix this throttling issue with the Server provider. </p>
<p>Do keep in mind that you'll need to consider <code>latency</code> when setting the <code>timeout</code> parameter, otherwise your connection will be dropped too quickly and might not work at all.</p>
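<p>This per-read behaviour can be demonstrated with plain sockets — the timeout bounds each individual <code>recv()</code>, not the whole transfer, so a peer that dribbles bytes keeps the connection alive indefinitely without ever triggering the timeout. A minimal sketch (the chunk counts and delays are arbitrary):</p>

```python
import socket
import threading
import time

def slow_sender(conn, chunks=6, delay=0.25):
    # dribble one byte at a time, with gaps shorter than the per-read timeout
    for _ in range(chunks):
        time.sleep(delay)
        conn.sendall(b"x")
    conn.close()

def read_all(conn, per_read_timeout=1.0):
    # like requests' timeout, the limit applies to each recv() individually
    conn.settimeout(per_read_timeout)
    data = b""
    while True:
        try:
            chunk = conn.recv(1)
        except socket.timeout:
            break
        if not chunk:  # sender closed the connection
            break
        data += chunk
    return data

a, b = socket.socketpair()
sender = threading.Thread(target=slow_sender, args=(a,))
start = time.monotonic()
sender.start()
data = read_all(b)
elapsed = time.monotonic() - start
sender.join()
b.close()
print(len(data), round(elapsed, 2))  # all 6 bytes arrive; total time exceeds the 1 s per-read timeout
```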
| 0 | 2016-08-30T13:57:16Z | [
"python",
"python-requests"
] |
Train Inception from Scratch in TensorFlow | 39,227,825 | <p>I wanted to train the Inception model as shown in the TensorFlow GitHub tutorial,
except I wanted to use a self-made dataset of TFRecord files.</p>
<pre><code>bazel build inception/imagenet_train
bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=32 --train_dir=/tmp/imagenet_train --data_dir=/tmp/imagenet_data
</code></pre>
<p>I changed the data directory to the folder with my own TFRecord files.
Now I am wondering whether I am really training from scratch, or if this is the same thing as the "retraining the last layer" tutorial.</p>
| 1 | 2016-08-30T12:41:17Z | 39,234,383 | <p>Yes you are training from scratch: see the <a href="https://github.com/tensorflow/models/blob/master/inception/inception/inception_train.py#L180" rel="nofollow">code</a></p>
| -1 | 2016-08-30T18:12:03Z | [
"python",
"machine-learning",
"tensorflow"
] |
Class for making asynchronous API requests with Twisted | 39,227,842 | <p>I am working on development of a system that collects data from REST servers and manipulates it.</p>
<p>One of the requirements is multiple and frequent API requests. We currently implement this in a somewhat synchronous way. I can easily implement this using threads but considering the system might need to support thousands of requests per second I think it would be wise to utilize Twisted's ability to efficiently implement the above. I have seen <a href="http://oubiwann.blogspot.co.il/2008/06/async-batching-with-twisted-walkthrough.html" rel="nofollow">this blog post</a> and the whole idea of deferred list seems to do the trick. But I am kind of stuck with how to structure my class (can't wrap my mind around how Twisted works).</p>
<p>Can you try to outline the structure of the class that will run the event-loop and will be able to get a list of URLs and headers and return a list of results after making the requests asynchronously?</p>
<p>Do you know of a better way of implementing this in python?</p>
| -1 | 2016-08-30T12:42:03Z | 39,234,929 | <p>Sounds like you want to use a Twisted project called <a href="http://treq.readthedocs.io/en/latest/index.html" rel="nofollow"><code>treq</code></a> which allows you to send requests to HTTP endpoints. It works alot like <a href="http://docs.python-requests.org/en/master/" rel="nofollow"><code>requests</code></a>. I recently helped a friend <a href="http://stackoverflow.com/questions/39133507/writing-a-twisted-client-to-send-looping-get-request-to-multiple-api-calls-and-r/39136513#39136513">here in this thread</a>. My answer there might be of some use to you. If you still need more help, just make a comment and I'll try my best to update this answer.</p>
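<p>If Twisted itself is not a hard requirement, the same fan-out/collect structure (the <code>DeferredList</code> idea from the linked blog post) can be sketched with the standard library's <code>asyncio.gather</code>. The network call below is stubbed out with a sleep, so only the shape of the class is shown — the names are illustrative, not a real HTTP client:</p>

```python
import asyncio

class BatchFetcher:
    """Fire many requests concurrently and collect results in input order."""

    async def fetch_one(self, url, headers=None):
        # stand-in for a real async HTTP call (aiohttp, or treq on Twisted)
        await asyncio.sleep(0.01)
        return {"url": url, "status": 200}

    async def fetch_all(self, urls, headers=None):
        tasks = [self.fetch_one(u, headers) for u in urls]
        # gather plays the role of Twisted's DeferredList: wait for all tasks
        return await asyncio.gather(*tasks)

urls = ["http://example.com/a", "http://example.com/b", "http://example.com/c"]
results = asyncio.run(BatchFetcher().fetch_all(urls))
print([r["url"] for r in results])
```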
| 0 | 2016-08-30T18:45:01Z | [
"python",
"asynchronous",
"twisted"
] |
Pyqt5 access GUI elements from subclass | 39,228,057 | <p>I need help reading and writing GUI elements from a subclass.
The code:</p>
<pre><code>UI element:
[] doABC
# main.py
import mySubclass
class myAppController(QtWidgets.QMainWindow):
def __init__(self):
super(myAppController, self).__init__()
# load view
uifile = 'myApp.ui'
uic.loadUi(uifile, self)
# I can access UI elements like that
self.pb_Verify.clicked.connect(self.slot_verify) # for slot in main.py
# Calling slot from mySubclass
self.pb_EliminateVerifErrors.clicked.connect(mySubclass.fixErrors)
# Show Main window
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
window = myAppController()
window.show()
sys.exit(app.exec_())
# mySubclass.py - separate file
def fixErrors(self):
# here I want to check state of checkbox
if doABC.isChecked():
</code></pre>
<p>The problem is that <code>self.doABC.isChecked()</code> is not working in this case.
I have also tried passing additional parameters with the slot call from <code>main.py</code>, including <code>self</code> or <code>self.doABC</code>, but it does not work that way either. Trying to access <code>main.doABC</code> or <code>super().doABC</code> is also not an option.</p>
<p>I have done a lot of searching on Google but nothing fits my case.</p>
<p>I am a beginner in GUI programming, so I do not even know how it is supposed to be done.</p>
| 0 | 2016-08-30T12:52:56Z | 39,261,463 | <p>Your problem is that mysubclass needs a reference to doABC to check it.</p>
<p>Here is a working example:</p>
<p>main.py file</p>
<pre><code>#!python3
# main.py
import mySubclass
import sys
from PyQt5 import QtWidgets, uic
class myAppController(QtWidgets.QMainWindow):
def __init__(self):
super(myAppController, self).__init__()
# load view
uifile = 'myApp.ui'
uic.loadUi(uifile, self)
# I can access UI elements like that
# self.pb_Verify.clicked.connect(self.slot_verify) # for slot in main.py
# Calling slot from mySubclass
self.pb_EliminateVerifErrors.clicked.connect(self.EliminateErrors)
def EliminateErrors(self):
mySubclass.fixErrors(self.doABC)
# Show Main window
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
window = myAppController()
window.show()
sys.exit(app.exec_())
</code></pre>
<p>Here I added a class function to be called by the signal and pass a reference to doABC. </p>
<p>mySubclass.py</p>
<pre><code># mySubclass.py - separate file
def fixErrors(doABC):
# here I want to check state of checkbox
if doABC.isChecked():
print("Works!")
</code></pre>
<p>Here I removed self from the function arguments as it is not a class function.</p>
<p>That's it! You were pretty close!</p>
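<p>The underlying idea — pass the widget reference into the module-level function instead of relying on <code>self</code> — can be shown without Qt at all; the <code>FakeCheckBox</code> below is a stand-in for illustration, not PyQt API:</p>

```python
# mySubclass.py pattern: the function receives the widget it needs
def fixErrors(doABC):
    if doABC.isChecked():
        return "fixing with ABC"
    return "fixing without ABC"

class FakeCheckBox:
    """Minimal stand-in mimicking QCheckBox.isChecked()."""
    def __init__(self, checked):
        self._checked = checked
    def isChecked(self):
        return self._checked

print(fixErrors(FakeCheckBox(True)))   # fixing with ABC
print(fixErrors(FakeCheckBox(False)))  # fixing without ABC
```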
| 0 | 2016-09-01T01:37:29Z | [
"python",
"subclass",
"pyqt5"
] |
Is there a neat way to integrate lithoxyl into flask.logger? | 39,228,114 | <p>I like the look of <a href="http://lithoxyl.readthedocs.io/en/latest/index.html" rel="nofollow">lithoxyl</a> and would like to progressively replace my current usage of <a href="http://flask.pocoo.org/docs/0.11/api/#flask.Flask.logger" rel="nofollow">flask.logger</a> with it.</p>
<p>Is there a good way to get the two logging frameworks to coexist?</p>
<p>So far I have the following:</p>
<pre><code>from flask import current_app
from werkzeug.local import LocalProxy
logger = LocalProxy(lambda: current_app.logger)
class LogAdaptor(object):
"""file-like object that will write messages to the logger"""
def write(self, msg):
if msg.strip():
logger.info(msg)
from lithoxyl import StreamEmitter, SensibleFormatter
emtr = StreamEmitter(LogAdaptor())
fmtr = SensibleFormatter('{level_name_upper} {module_name} {end_message}')
# ... the rest is basically the same as http://lithoxyl.readthedocs.io/en/latest/overview.html#logging-sensibly
</code></pre>
<p>This more or less works but the log level is lost in the output, e.g.:</p>
<pre><code>DEBUG backend: log message from app.logger.debug
INFO logger: CRITICAL "backend" "critical action failed"
INFO logger: DEBUG "backend" "action succeeded"
DEBUG backend: log message from app.logger.debug
</code></pre>
<p>I'm guessing what's needed is a more complex emitter, or a way to access the .write of the underlying stream in the Flask.logger handler (bypassing the formatting etc.)</p>
<p>Or is this all just barking up the wrong tree and I should just live with split log files until it's all refactored?</p>
| 1 | 2016-08-30T12:55:30Z | 39,249,247 | <p>I've managed to make some improvements as I found <a href="https://github.com/mahmoud/lithoxyl/blob/master/lithoxyl/_syslog_emitter.py" rel="nofollow">https://github.com/mahmoud/lithoxyl/blob/master/lithoxyl/_syslog_emitter.py</a> and created a similar class:</p>
<pre><code>import logging
from lithoxyl.common import DEBUG, INFO, CRITICAL, get_level
class LoggerEmitter(object):
priority_map = {DEBUG: {'success': logging.DEBUG,
'failure': logging.INFO,
'warn': logging.INFO,
'exception': logging.WARNING},
INFO: {'success': logging.INFO,
'failure': logging.WARNING,
'warn': logging.WARNING,
'exception': logging.ERROR},
CRITICAL: {'success': logging.WARNING,
'failure': logging.ERROR,
'warn': logging.ERROR,
'exception': logging.CRITICAL}}
def __init__(self, logger):
self.logger = logger
def on_begin(self, begin_event, entry):
level = self._get_level('begin', begin_event)
self.logger.log(level, entry)
def on_warn(self, warn_event, entry):
level = self._get_level('warn', warn_event)
self.logger.log(level, entry)
def on_end(self, end_event, entry):
level = self._get_level('end', end_event)
self.logger.log(level, entry)
def _get_level(self, event_name, action):
level = get_level(action.level)
if event_name == 'warn':
status = 'warn'
        elif event_name == 'begin':
            # priority_map has no 'begin' entry; map begin events to 'success'
            status = 'success'
else:
status = action.status
return self.priority_map[level][status]
emtr = LoggerEmitter(logger)
</code></pre>
<p>The output is now something like:</p>
<pre><code>DEBUG backend: log message from app.logger.debug
ERROR logger: "backend" "critical action failed"
DEBUG logger: "backend" "action succeeded"
DEBUG backend: log message from app.logger.debug
</code></pre>
<p>Which is better, but it would be nice if the module name was passed through properly too.</p>
| 1 | 2016-08-31T12:08:18Z | [
"python",
"logging",
"flask",
"lithoxyl"
] |
How to plot dates on the x-axis using Seaborn (or matplotlib) | 39,228,121 | <p>I have a csv file with some time series data. I create a Data Frame as such:</p>
<pre><code>df = pd.read_csv('C:\\Desktop\\Scripts\\TimeSeries.log')
</code></pre>
<p>When I call <code>df.head(6)</code>, the data appears as follows:</p>
<pre><code>Company Date Value
ABC 08/21/16 00:00:00 500
ABC 08/22/16 00:00:00 600
ABC 08/23/16 00:00:00 650
ABC 08/24/16 00:00:00 625
ABC 08/25/16 00:00:00 675
ABC 08/26/16 00:00:00 680
</code></pre>
<p>Then, I have the following to force the 'Date' column into datetime format:</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'], errors = 'coerce')
</code></pre>
<p>Interestingly, I see "<code>pandas.core.series.Series</code>" when I call the following:</p>
<pre><code>type(df['Date'])
</code></pre>
<p>Finally, I call the following to create a plot:</p>
<pre><code>%matplotlib qt
sns.tsplot(df['Value'])
</code></pre>
<p>On the x-axis from left to right, I see integers ranging from 0 to the number of rows in the data frame. How does one add the 'Date' column as the x-axis values to this plot?</p>
<p>Thanks! </p>
| 0 | 2016-08-30T12:55:45Z | 39,230,682 | <p>Not sure that tsplot is the best tool for that. You can just use:</p>
<pre><code>df[['Date','Value']].set_index('Date').plot()
</code></pre>
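<p>The reason the x-axis shows integers is that plotting a bare Series uses its positional index; once the <code>Date</code> column becomes the index, pandas plots against the dates. A minimal sketch of just the index step (without the actual plot):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ["08/21/16 00:00:00", "08/22/16 00:00:00", "08/23/16 00:00:00"],
    "Value": [500, 600, 650],
})
df["Date"] = pd.to_datetime(df["Date"], errors="coerce")

# with a DatetimeIndex, .plot() labels the x-axis with dates, not row numbers
ts = df[["Date", "Value"]].set_index("Date")
print(ts.index.dtype)
```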
| 1 | 2016-08-30T14:47:00Z | [
"python",
"matplotlib",
"seaborn",
"timeserieschart"
] |
How to plot dates on the x-axis using Seaborn (or matplotlib) | 39,228,121 | <p>I have a csv file with some time series data. I create a Data Frame as such:</p>
<pre><code>df = pd.read_csv('C:\\Desktop\\Scripts\\TimeSeries.log')
</code></pre>
<p>When I call <code>df.head(6)</code>, the data appears as follows:</p>
<pre><code>Company Date Value
ABC 08/21/16 00:00:00 500
ABC 08/22/16 00:00:00 600
ABC 08/23/16 00:00:00 650
ABC 08/24/16 00:00:00 625
ABC 08/25/16 00:00:00 675
ABC 08/26/16 00:00:00 680
</code></pre>
<p>Then, I have the following to force the 'Date' column into datetime format:</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'], errors = 'coerce')
</code></pre>
<p>Interestingly, I see "<code>pandas.core.series.Series</code>" when I call the following:</p>
<pre><code>type(df['Date'])
</code></pre>
<p>Finally, I call the following to create a plot:</p>
<pre><code>%matplotlib qt
sns.tsplot(df['Value'])
</code></pre>
<p>On the x-axis from left to right, I see integers ranging from 0 to the number of rows in the data frame. How does one add the 'Date' column as the x-axis values to this plot?</p>
<p>Thanks! </p>
| 0 | 2016-08-30T12:55:45Z | 39,534,449 | <p>use the <code>time</code> parameter for <code>tsplot</code></p>
<p>from docs:</p>
<pre><code>time : string or series-like
Either the name of the field corresponding to time in the data DataFrame or x values for a plot when data is an array. If a Series, the name will be used to label the x axis.
#Plot the Value column against Date column
sns.tsplot(data = df['Value'], time = df['Date'])
</code></pre>
<p>However <code>tsplot</code> is used to plot timeseries in the same time window under different conditions. To plot a single timeseries you could also use <code>plt.plot(df['Date'], df['Value'])</code></p>
| 0 | 2016-09-16T14:53:04Z | [
"python",
"matplotlib",
"seaborn",
"timeserieschart"
] |
Simpy how to access objects in a resource queue | 39,228,126 | <p>I am moving code written in Simpy 2 to version 3 and could not find an equivalent to the following operation.</p>
<p>In the code below I access job objects (derived from class job_(Process)) in a Simpy resource's activeQ.</p>
<pre><code>def select_LPT(self, mc_no):
job = 0
ptime = 0
for j in buffer[mc_no].activeQ:
if j.proc_time[mc_no] > ptime:
ptime = j.proc_time[mc_no]
job = j
return job
</code></pre>
<p>To do this in Simpy 3, I tried the following </p>
<p><code>buffers[mc_no].users</code></p>
<p>which returns a list of Request() objects. With these objects I cannot access the processes that created them, nor the objects these process functions belong to. (using the 'put_queue', and 'get_queue' of the Resource object did not help)</p>
<p>Any suggestions ?</p>
| 0 | 2016-08-30T12:55:52Z | 39,241,488 | <p>In SimPy, request objects do not know which process created them. However, since we are in Python land you can easily add this information:</p>
<pre><code>with resource.request() as req:
req.obj = self
yield req
...
# In another process/function
for user_req in resource.users:
print(user_req.obj)
</code></pre>
| 0 | 2016-08-31T05:35:30Z | [
"python",
"simulation",
"simpy"
] |
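The <code>req.obj = self</code> trick above makes the old <code>select_LPT</code> a one-liner over <code>resource.users</code>. A plain-Python sketch, where <code>Req</code> and <code>Job</code> are hypothetical stand-ins rather than real SimPy classes:

```python
class Job:
    def __init__(self, name, proc_time):
        self.name = name
        self.proc_time = proc_time  # processing time per machine, e.g. {0: 5}

class Req:
    pass  # stand-in for a SimPy Request onto which .obj is attached

def select_lpt(users, mc_no):
    # Pick the queued job with the longest processing time on machine mc_no
    return max((r.obj for r in users), key=lambda j: j.proc_time[mc_no])

reqs = []
for name, t in [("a", 3), ("b", 7), ("c", 5)]:
    r = Req()
    r.obj = Job(name, {0: t})  # mimic `req.obj = self` from the answer
    reqs.append(r)

print(select_lpt(reqs, 0).name)  # b
```

Note that <code>max</code> raises <code>ValueError</code> on an empty sequence, so guard the call if the buffer can be empty.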
OpenERP/Odoo: fields.datetime.now() show the date of the latest restart | 39,228,319 | <p>I have a field datetime. This field should have by default the datetime of "now", the current time.</p>
<p>However, the default date is the time of <strong>the latest restart</strong>. </p>
<p>Please find below my code:</p>
<pre><code>'date_action': fields.datetime('Date current action', required=False, readonly=False, select=True),
_defaults = {
'date_action': fields.datetime.now(),
</code></pre>
| 1 | 2016-08-30T13:05:17Z | 39,232,076 | <p>You are setting the default value of <code>date_action</code> to the value returned by <code>fields.datetime.now()</code>, which is evaluated once, when the Odoo server is started.</p>
<p>You should set the default value as the call to the method:</p>
<pre><code>'date_action': fields.datetime.now,
</code></pre>
| 2 | 2016-08-30T15:52:21Z | [
"python",
"datetime",
"openerp"
] |
How to access directory file outside django project? | 39,228,449 | <p>I have my Django project running on RHEL 7 OS. The project is in the path <code>/root/project</code> and is hosted on an httpd server. Now I am trying to access a file outside the directory, like <code>/root/data/info/test.txt</code>.</p>
<p>How should I access this path in views.py so that I can read and write a file which is outside the project directory? I tried to add the path to <code>sys.path</code> but it didn't work. Read and write permissions are also given to the file. </p>
| 1 | 2016-08-30T13:11:02Z | 39,231,603 | <p>Add the following lines to your <code>settings.py</code></p>
<pre><code>import os
..
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
FILES_DIR = os.path.abspath(os.path.join(BASE_DIR, '../data/info'))
</code></pre>
<p>Then, in your view, you can use:</p>
<pre><code>from django.conf import settings
import os
..
file_path = os.path.join(settings.FILES_DIR, 'test.txt')
</code></pre>
| 1 | 2016-08-30T15:29:15Z | [
"python",
"django",
"apache",
"httpd.conf"
] |
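The path arithmetic in that answer is plain <code>os.path</code> composition; here is a sketch of how the <code>../data/info</code> segment resolves against the question's directories (<code>posixpath</code> is used explicitly only so the result is deterministic on any OS):

```python
import posixpath  # identical to os.path on Linux; explicit for determinism

# Directories taken from the question: project lives in /root/project,
# the target file in /root/data/info
base_dir = "/root/project"

# normpath collapses the ".." segment, just like abspath does at runtime
files_dir = posixpath.normpath(posixpath.join(base_dir, "../data/info"))
file_path = posixpath.join(files_dir, "test.txt")

print(file_path)  # /root/data/info/test.txt
```

The directory does not need to exist for the path to be computed; only opening the file requires it to be present and readable.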
Listing more than 100 stacks using boto3 | 39,228,551 | <p>We need to list all the stacks that are in CREATE_COMPLETE state. In our AWS account we have >400 such stacks. We have the following code written for this:</p>
<pre><code>stack_session = session.client('cloudformation')
list_stacks = stack_session.list_stacks(StackStatusFilter=['CREATE_COMPLETE'])
</code></pre>
<p>However this lists only the first 100 stacks. We want to know how we can get all the stacks? We are using the python boto3 library.</p>
| 1 | 2016-08-30T13:15:10Z | 39,228,956 | <p>I got this working using pagination. The code I wrote is below:</p>
<pre><code>stack_session = session.client('cloudformation')
paginator = stack_session.get_paginator('list_stacks')
response_iterator = paginator.paginate(StackStatusFilter=['CREATE_COMPLETE'])
for page in response_iterator:
stack = page['StackSummaries']
for output in stack:
print output['StackName']
</code></pre>
<p>That printed all the 451 stacks that we needed.</p>
| 1 | 2016-08-30T13:32:20Z | [
"python",
"boto3",
"cloudformation"
] |
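The paginator in that answer hides a generic loop: request a page, yield its items, and repeat while the service returns a continuation token. A stand-alone sketch with a fake client (<code>fake_list_stacks</code> is invented for illustration; boto3 itself is not used here):

```python
def fake_list_stacks(next_token=None):
    # Stand-in for CloudFormation's ListStacks: 2 items per page, 5 total
    data = ["stack-%d" % i for i in range(5)]
    start = int(next_token or 0)
    page = {"StackSummaries": data[start:start + 2]}
    if start + 2 < len(data):
        page["NextToken"] = str(start + 2)
    return page

def iter_stacks():
    # Keep requesting pages until no continuation token is returned
    token = None
    while True:
        page = fake_list_stacks(token)
        for summary in page["StackSummaries"]:
            yield summary
        token = page.get("NextToken")
        if token is None:
            break

print(list(iter_stacks()))
```

boto3's <code>get_paginator</code> runs this same token-chasing loop for you, which is why it returns all 451 stacks instead of the first 100.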
edit django's generated javascript files | 39,228,552 | <p>i'm not actually sure that want i want to do is doable, so i'm asking to people with more experience in Python/Django than me: what i have is a local instance of a django web app, i don't have the django files but only the css/js generated from that (specifically, is an <a href="https://www.reviewboard.org/" rel="nofollow">Reviewboard</a> instance). I'd like to change some frontend behavior, like not giving the possibility to the users to put themself as reviewer. What i want to do is simply not to show the option in the frontend (i don't care about backend). I tried to edit some js files (only adding console.log) but seems that the code that i'm editing is never running, so i guess that django use some built method to link the files in the app.</p>
<p>Anyone can give me a clue if what i want to do is possible (and how)? </p>
<p>Cheers</p>
| 0 | 2016-08-30T13:15:12Z | 39,229,388 | <p>I will use an example of overriding static files (html and JavaScript) involving the admin app since it is built in.</p>
<p>Let's say you have an app <code>cars_app</code>, and you would like to change the behaviour of the admin page for the model <code>car</code>.</p>
<p>In the root directory of your django project, go to or create the directory <code>templates</code>. Underneath it, make the directory structure <code>admin/cars_app/car</code> (<code>admin/app_name/model_name</code>) then create the file <code>change_form.html</code>. Use the corresponding file in the <code>admin</code> app and make the changes you like.</p>
| 0 | 2016-08-30T13:50:58Z | [
"javascript",
"python",
"django"
] |
edit django's generated javascript files | 39,228,552 | <p>i'm not actually sure that want i want to do is doable, so i'm asking to people with more experience in Python/Django than me: what i have is a local instance of a django web app, i don't have the django files but only the css/js generated from that (specifically, is an <a href="https://www.reviewboard.org/" rel="nofollow">Reviewboard</a> instance). I'd like to change some frontend behavior, like not giving the possibility to the users to put themself as reviewer. What i want to do is simply not to show the option in the frontend (i don't care about backend). I tried to edit some js files (only adding console.log) but seems that the code that i'm editing is never running, so i guess that django use some built method to link the files in the app.</p>
<p>Anyone can give me a clue if what i want to do is possible (and how)? </p>
<p>Cheers</p>
| 0 | 2016-08-30T13:15:12Z | 39,685,949 | <p>Use Reviewboard extension hooks for this purpose. Extension hooks are the primary mechanism for customizing Review Board's appearance and behavior. Specifically, for adding custom css/js please try the template hook, which has the option to specify the page in which you need your custom js/css. Please refer to <a href="https://www.reviewboard.org/docs/manual/2.5/extending/extensions/hooks/template-hook/" rel="nofollow">https://www.reviewboard.org/docs/manual/2.5/extending/extensions/hooks/template-hook/</a></p>
| 0 | 2016-09-25T10:43:22Z | [
"javascript",
"python",
"django"
] |
get user contribution number in pybossa | 39,228,554 | <p>I'm building a project in Pybossa.
When I export users, I want the exported user data to include a field with the number of contributions that each user made. </p>
<p>On PyBossa project statistics page I see that there is table with all contributors which is generated from this method in python: </p>
<pre><code> userStats = dict(
geo=current_app.config['GEO'],
anonymous=dict(
users=users_stats['n_anon'],
taskruns=users_stats['n_anon'],
pct_taskruns=anon_pct_taskruns,
top5=users_stats['anon']['top5']),
authenticated=dict(
users=users_stats['n_auth'],
taskruns=users_stats['n_auth'],
pct_taskruns=auth_pct_taskruns,
top5=users_stats['auth']['top5']))
</code></pre>
<p>Based on this, I can't define a method which will return user submissions by id. I know that I can do a query, but I am asking here if there is already a method which I can use to achieve this. </p>
| 0 | 2016-08-30T13:15:17Z | 39,243,832 | <p>Unfortunately this is not implemented. However, you can build a plugin yourself that will include that information, or if you prefer send us a pull request to include that feature. Happy to merge it into our upstream code base of PYBOSSA.</p>
<p>See documentation about our plugin architecture here: <a href="http://docs.pybossa.com/en/latest/plugins.html?highlight=plugin" rel="nofollow">http://docs.pybossa.com/en/latest/plugins.html?highlight=plugin</a></p>
<p>See how to contribute here: <a href="https://github.com/PyBossa/pybossa/blob/master/CONTRIBUTING.md" rel="nofollow">https://github.com/PyBossa/pybossa/blob/master/CONTRIBUTING.md</a></p>
<p>Cheers,</p>
<p>Daniel</p>
| 1 | 2016-08-31T07:56:07Z | [
"python",
"postgresql",
"pybossa"
] |
Unable to display content on homepage ; Flask app on Google App Engine | 39,228,648 | <p>I am trying to deploy a flask app on GAE.<br><br>
All dependencies like Flask, jinja2 etc are in the same directory<br><br></p>
<p>When GAE launcher deploys the app locally, it gets deployed but <strong>nothing gets displayed on the home url once the local server is up and running</strong> even though the main.py returns some text for the home url</p>
<p>Following are my files:<br><br>
<strong>app.yaml</strong></p>
<pre><code>application: texsumm
version: 1
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: .*
script: main.app
</code></pre>
<p><strong>main.py</strong></p>
<pre><code>from flask import Flask, render_template, request
app = Flask(__name__)
@app.route("/")
def template_test():
return "Hello"
if __name__ == "__main__":
app.run()
</code></pre>
<p>What can be the issue ?</p>
| 0 | 2016-08-30T13:18:54Z | 39,231,256 | <p>try making sure the app is created with the module name (no quotes around <code>__name__</code>):</p>
<pre><code>app = Flask(__name__)
</code></pre>
<p>and </p>
<pre><code>if __name__ == "__main__":
app.run(debug=True)
</code></pre>
| 0 | 2016-08-30T15:13:59Z | [
"python",
"google-app-engine",
"web-applications",
"flask"
] |
Unable to display content on homepage ; Flask app on Google App Engine | 39,228,648 | <p>I am trying to deploy a flask app on GAE.<br><br>
All dependencies like Flask, jinja2 etc are in the same directory<br><br></p>
<p>When GAE launcher deploys the app locally, it gets deployed but <strong>nothing gets displayed on the home url once the local server is up and running</strong> even though the main.py returns some text for the home url</p>
<p>Following are my files:<br><br>
<strong>app.yaml</strong></p>
<pre><code>application: texsumm
version: 1
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: .*
script: main.app
</code></pre>
<p><strong>main.py</strong></p>
<pre><code>from flask import Flask, render_template, request
app = Flask(__name__)
@app.route("/")
def template_test():
return "Hello"
if __name__ == "__main__":
app.run()
</code></pre>
<p>What can be the issue ?</p>
| 0 | 2016-08-30T13:18:54Z | 39,233,400 | <p>Flask already handles the:</p>
<pre><code>if __name__ == "__main__":
app.run()
</code></pre>
<p>so delete that. Read the <code>run(host=None, port=None, debug=None, **options)</code> section at:</p>
<p><a href="http://flask.pocoo.org/docs/0.11/api/" rel="nofollow">http://flask.pocoo.org/docs/0.11/api/</a></p>
| 0 | 2016-08-30T17:12:51Z | [
"python",
"google-app-engine",
"web-applications",
"flask"
] |
How I can send json map object from flask and use it as javascript object? | 39,228,698 | <p>flask script</p>
<pre><code>from flask import Flask, render_template, request
import os
import sys
import json
data_raw = [('0', '1', '0', '0'), ('0', '0', '1', '0'), ('1', '0', '0', '0')]
app = Flask(__name__)
@app.route('/')
def index():
return render_template('test.html', data=map(json.dumps, data_raw))
</code></pre>
<p>html/js script test.html</p>
<pre><code>{% extends "index.html" %}
{% block content %}
<p id="test">info</p>
<script>
var data_flask = {{ data }};
</script>
{% endblock %}
</code></pre>
<blockquote>
<p>Uncaught SyntaxError: Unexpected token &</p>
</blockquote>
<p>The aim is store data_flask like this</p>
<pre><code>var data_flask = [["0", "1", "0", "0"],["0", "0", "1", "0"],["1", "0", "0", "0"]]
</code></pre>
<p>Any idea?</p>
| 0 | 2016-08-30T13:21:19Z | 39,228,814 | <p>Flask, like Django, autoescapes values by default. You need to use the <code>|safe</code> filter to render the literal values.</p>
<pre><code>var data_flask = {{ data|safe }};
</code></pre>
| 0 | 2016-08-30T13:26:20Z | [
"javascript",
"python",
"json"
] |
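To see why the browser hits <code>Unexpected token &</code> in the first place, the effect of autoescaping can be reproduced with the standard library alone (Jinja2 emits <code>&#34;</code> for quotes while <code>html.escape</code> uses <code>&quot;</code>, but the breakage inside a script block is the same):

```python
import json
from html import escape  # Python 3; Jinja2's autoescape behaves similarly

data_raw = [("0", "1", "0", "0"), ("0", "0", "1", "0")]

as_json = json.dumps(data_raw)
print(as_json)          # valid JavaScript array literal
print(escape(as_json))  # quotes become entities: not valid JavaScript
```

The <code>|safe</code> filter in the accepted answer simply skips that escaping step for the already-serialized JSON string.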
How I can send json map object from flask and use it as javascript object? | 39,228,698 | <p>flask script</p>
<pre><code>from flask import Flask, render_template, request
import os
import sys
import json
data_raw = [('0', '1', '0', '0'), ('0', '0', '1', '0'), ('1', '0', '0', '0')]
app = Flask(__name__)
@app.route('/')
def index():
return render_template('test.html', data=map(json.dumps, data_raw))
</code></pre>
<p>html/js script test.html</p>
<pre><code>{% extends "index.html" %}
{% block content %}
<p id="test">info</p>
<script>
var data_flask = {{ data }};
</script>
{% endblock %}
</code></pre>
<blockquote>
<p>Uncaught SyntaxError: Unexpected token &</p>
</blockquote>
<p>The aim is store data_flask like this</p>
<pre><code>var data_flask = [["0", "1", "0", "0"],["0", "0", "1", "0"],["1", "0", "0", "0"]]
</code></pre>
<p>Any idea?</p>
| 0 | 2016-08-30T13:21:19Z | 39,228,876 | <p>you should do <code>map(json.dump, data_raw)</code> and it should work instead of <code>map(json.dumps, data_raw)</code></p>
| 0 | 2016-08-30T13:29:04Z | [
"javascript",
"python",
"json"
] |
Understanding the behavior of function descriptors | 39,228,722 | <p>I was reading <a href="http://www.aleax.it/Python/nylug05_om.pdf" rel="nofollow">a presentation</a> on Python's Object model when, in one slide (number <code>9</code>), the author asserts that Pythons' functions are descriptors. The example he presents to illustrate is similar to this one I wrote:</p>
<pre><code>def mul(x, y):
return x * y
mul2 = mul.__get__(2)
mul2(3) # 6
</code></pre>
<p>Now, I understand that the point is made, since the function defines a <code>__get__</code> it is a descriptor. </p>
<p>What I don't understand is how exactly the call results in the output provided.</p>
| 8 | 2016-08-30T13:22:30Z | 39,228,774 | <p>That's Python doing what it does in order to support dynamically adding functions to classes. </p>
<p>When <code>__get__</code> is invoked on a function object, which is usually done via dot access on an instance of a class, Python will transform the function to a method and implicitly pass the instance (usually recognized as <code>self</code>) as the first argument. </p>
<p>In your case, you explicitly call <code>__get__</code> and explicitly pass the 'instance' <code>2</code> which is bound as the first argument of the function <code>x</code>:</p>
<pre><code>>>> mul2
<bound method mul of 2>
</code></pre>
<p>This results in a method with one expected argument that yields the multiplication; calling it returns <code>2</code> (the bound argument assigned to <code>x</code>) multiplied with anything else you supply as the argument <code>y</code>.</p>
<hr>
<p>In addition, a Python implementation of <code>__get__</code> for functions is provided in the <a href="https://docs.python.org/3/howto/descriptor.html#functions-and-methods" rel="nofollow"><code>Descriptor HOWTO</code></a> document of the Python Docs. Here you can see the transformation, with the usage of <a href="https://docs.python.org/3/library/types.html#types.MethodType" rel="nofollow"><code>types.MethodType</code></a>, that takes place when <code>__get__</code> is invoked :</p>
<pre><code>class Function(object):
. . .
def __get__(self, obj, objtype=None):
"Simulate func_descr_get() in Objects/funcobject.c"
return types.MethodType(self, obj, objtype)
</code></pre>
<p>And the source code for the intrigued visitor is located in <code>Objects/funcobject.c</code>.</p>
<p>As you can see, if this descriptor did not exist you'd have to manually wrap functions in <code>types.MethodType</code> any time you wanted to dynamically add a function to a class, an unnecessary hassle.</p>
| 8 | 2016-08-30T13:24:47Z | [
"python",
"python-2.7",
"function",
"python-3.x"
] |
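The experiment from the question runs unchanged in Python 3, and the equivalence with <code>types.MethodType</code> described in the answer can be checked directly:

```python
import types

def mul(x, y):
    return x * y

# __get__ binds 2 as the first argument, producing a bound method
mul2 = mul.__get__(2)

# This is what __get__ does under the hood for plain functions
manual = types.MethodType(mul, 2)

print(mul2(3), manual(3))  # 6 6
```

Both objects wrap the same underlying function, which you can confirm via the bound method's <code>__func__</code> attribute.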
Launching scripts from bash and directing outputs | 39,228,764 | <p>I have a question about syntax of bash regarding launching scripts from within bash script.</p>
<p>My questions are: </p>
<ol>
<li><p>I've seen the following syntax:</p>
<pre><code>#!/bin/bash
python do_something.py > /dev/null 2>&1 &
</code></pre>
<p>Can you please explain what is directed to <code>/dev/null</code>, and what is the meaning of <code>2>&1</code> if before already mentioned <code>/dev/null</code>?</p></li>
<li><p>In addition if I have a line defined like:</p>
<pre><code>python do_something.py > 2>&1 &
</code></pre>
<p>how is that different?</p></li>
<li><p>If I have the same python file in many paths, how can I differentiate between each process after launching <code>ps -ef |grep python</code>.
When I'm doing so, I get a list of processes which are all called <code>do_something.py</code>, it would be nice if I could have the full execution path string of each pid; how can I do that?</p></li>
</ol>
<p><strong>NOTE:</strong> The python file launched is writing its own log files.</p>
| 1 | 2016-08-30T13:24:19Z | 39,229,057 | <p>1) stdout (Standard Output) is redirected to /dev/null, and stderr (error messages) is then redirected to wherever stdout currently points, i.e. /dev/null as well, so nothing reaches the console. </p>
<ul>
<li>1>filename : Redirect stdout to file "filename."</li>
<li>1>>filename: Redirect and append stdout to file "filename."</li>
<li>2>filename : Redirect stderr to file "filename."</li>
<li>2>>filename: Redirect and append stderr to file "filename."</li>
<li>&>filename : Redirect both stdout and stderr to file "filename."</li>
</ul>
<p>3) Using the <strong>ps auxww</strong> flags, you will see the full command line of each process, both in your terminal window and from shell scripts. From the ps manual:</p>
<p>-w Wide output. Use this option twice for unlimited width.</p>
| 1 | 2016-08-30T13:36:32Z | [
"python",
"linux",
"bash"
] |
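The point about <code>2>&1</code> depending on where stdout currently points can be made visible with a two-line experiment in plain POSIX shell (command substitution captures only stdout):

```shell
# 2>&1 first: stderr is duplicated onto the *current* stdout (the capture
# pipe), and only afterwards is stdout itself discarded
a=$( { echo out; echo err >&2; } 2>&1 >/dev/null )

# >/dev/null first: stdout goes to /dev/null, then stderr follows it there
b=$( { echo out; echo err >&2; } >/dev/null 2>&1 )

echo "a=$a b=$b"   # a=err b=
```

So in <code>> /dev/null 2>&1</code> both streams end up in /dev/null, which is why the command in the question is completely silent.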
Launching scripts from bash and directing outputs | 39,228,764 | <p>I have a question about syntax of bash regarding launching scripts from within bash script.</p>
<p>My questions are: </p>
<ol>
<li><p>I've seen the following syntax:</p>
<pre><code>#!/bin/bash
python do_something.py > /dev/null 2>&1 &
</code></pre>
<p>Can you please explain what is directed to <code>/dev/null</code>, and what is the meaning of <code>2>&1</code> if before already mentioned <code>/dev/null</code>?</p></li>
<li><p>In addition if I have a line defined like:</p>
<pre><code>python do_something.py > 2>&1 &
</code></pre>
<p>how is that different?</p></li>
<li><p>If I have the same python file in many paths, how can I differentiate between each process after launching <code>ps -ef |grep python</code>.
When I'm doing so, I get a list of processes which are all called <code>do_something.py</code>, it would be nice if I could have the full execution path string of each pid; how can I do that?</p></li>
</ol>
<p><strong>NOTE:</strong> The python file launched is writing its own log files.</p>
| 1 | 2016-08-30T13:24:19Z | 39,229,089 | <p>Answers:</p>
<p>1, 2. <code>></code> redirects whatever is printed to <code>stdout</code> as a result of executing the command (in your case <code>python do_something.py</code>) to a file called <code>/dev/null</code>. <code>/dev/null</code> is a kind of black hole: whatever you write to it disappears.</p>
<p><code>2>&1</code> redirects the output of <code>stderr</code> (which has fd as 2) to <code>stdout</code> (whose fd is 1).</p>
<p>Refer <a href="http://www.tldp.org/LDP/abs/html/io-redirection.html" rel="nofollow">I/O redirection</a> for more info about redirections.</p>
<p>Refer <a href="http://tldp.org/LDP/abs/html/zeros.html" rel="nofollow">this</a> link for more info about <code>/dev/null</code></p>
| 1 | 2016-08-30T13:37:52Z | [
"python",
"linux",
"bash"
] |
Launching scripts from bash and directing outputs | 39,228,764 | <p>I have a question about syntax of bash regarding launching scripts from within bash script.</p>
<p>My questions are: </p>
<ol>
<li><p>I've seen the following syntax:</p>
<pre><code>#!/bin/bash
python do_something.py > /dev/null 2>&1 &
</code></pre>
<p>Can you please explain what is directed to <code>/dev/null</code>, and what is the meaning of <code>2>&1</code> if before already mentioned <code>/dev/null</code>?</p></li>
<li><p>In addition if I have a line defined like:</p>
<pre><code>python do_something.py > 2>&1 &
</code></pre>
<p>how is that different?</p></li>
<li><p>If I have the same python file in many paths, how can I differentiate between each process after launching <code>ps -ef |grep python</code>.
When I'm doing so, I get a list of processes which are all called <code>do_something.py</code>, it would be nice if I could have the full execution path string of each pid; how can I do that?</p></li>
</ol>
<p><strong>NOTE:</strong> The python file launched is writing its own log files.</p>
| 1 | 2016-08-30T13:24:19Z | 39,231,591 | <p>Ok, <strong>disclaimer</strong>: I don't have access to a bash right now, so I might be wrong.</p>
<ol>
<li><p>Let's break your command: <code>python do_something.py > /dev/null 2>&1 &</code></p>
<p><code>python do_something.py</code> will run your command<br/>
<code>> /dev/null</code> will redirect stdout to /dev/null<br/>
<code>2>&1</code> will redirect stderr to stdout<br/>
<code>&</code> will fork your process and run in background</p>
<p>So your command will ignore stdout/stderr and be run in background which
is equivalent to the command <code>python do_something.py >& /dev/null &</code> <a href="http://superuser.com/questions/335396/what-is-the-difference-between-and-in-bash"><strong>[1]</strong></a><a href="http://stackoverflow.com/questions/818255/in-the-shell-what-does-21-mean"><strong>[2]</strong></a></p></li>
<li><p><code>python do_something.py > 2>&1 &</code>:</p>
<p><code>> 2</code> will redirect stdout to a file named 2<br/>
<code>>&1</code> will redirect stdout to stdout (yes stdout to stdout)<br/>
<code>&</code> will fork your process and run in background</p>
<p>So this command is <em>almost</em> equivalent to <code>python do_something.py >2 &</code>,
it will redirect the output to a file named 2 (eg: <code>echo 'yes' > 2>&1</code>)</p>
<p><strong>Note</strong>: the behavior of <code>>&1</code> is probably unspecified.</p></li>
<li><p>Since you have run your command using <code>&</code>, your command will be fork and
run in background, therefore I'm not aware of any way to do it in that
case. You can still lookup the <code>/proc</code> directory <strong>[3]</strong> to see from
which directory your command have been run thought.</p></li>
</ol>
<p><strong>[1]</strong>: <a href="http://superuser.com/questions/335396/what-is-the-difference-between-and-in-bash">What is the difference between &> and >& in bash?</a><br/>
<strong>[2]</strong>: <a href="http://stackoverflow.com/questions/818255/in-the-shell-what-does-21-mean">In the shell, what does â 2>&1 â mean?</a><br/>
<strong>[3]</strong>: <code>ls -l /proc/$PROCCESSID/cwd</code></p>
| 1 | 2016-08-30T15:28:42Z | [
"python",
"linux",
"bash"
] |
Way to get specific keys from dictionary | 39,228,893 | <p>I am looking for a way to get specific keys from a dictionary. </p>
<p>In the example below, I am trying to <code>get all keys except 'inside'</code></p>
<pre><code>>>> d
{'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
>>> d_keys = list(set(d.keys()) - set(["inside"]) )
>>> d_keys
['shape', 'fill', 'angle', 'size']
>>> for key in d_keys:
... print "key: %s, value: %s" % (key, d[key])
...
key: shape, value: unchanged
key: fill, value: unchanged
key: angle, value: unchanged
key: size, value: unchanged
</code></pre>
<p>Is there a better way to do this than above?</p>
| 3 | 2016-08-30T13:29:59Z | 39,229,016 | <p>You can do it with a dict comprehension; for example, to keep only the even keys:</p>
<pre><code>d = {1:'a', 2:'b', 3:'c', 4:'d'}
new_d = {k : v for k,v in d.items() if k % 2 == 0}
</code></pre>
<p>And for your case:</p>
<pre><code>d = {'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e'}
new_d = {k:v for k,v in d.items() if k != 'fill'}
#you can replace the k != with whatever key or keys you want
</code></pre>
| -1 | 2016-08-30T13:34:51Z | [
"python",
"dictionary"
] |
Way to get specific keys from dictionary | 39,228,893 | <p>I am looking for a way to get specific keys from a dictionary. </p>
<p>In the example below, I am trying to <code>get all keys except 'inside'</code></p>
<pre><code>>>> d
{'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
>>> d_keys = list(set(d.keys()) - set(["inside"]) )
>>> d_keys
['shape', 'fill', 'angle', 'size']
>>> for key in d_keys:
... print "key: %s, value: %s" % (key, d[key])
...
key: shape, value: unchanged
key: fill, value: unchanged
key: angle, value: unchanged
key: size, value: unchanged
</code></pre>
<p>Is there a better way to do this than above?</p>
| 3 | 2016-08-30T13:29:59Z | 39,229,020 | <p>There's not really a good way to do this if you need to keep the original dictionary the same.</p>
<p>If you don't, you could <code>pop</code> the key you don't want before getting the keys.</p>
<pre><code>d = {'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
d.pop('inside')
for key in d.keys():
print "key: %s, value: %s" % (key, d[key])
</code></pre>
<p>This will mutate <code>d</code>, so again, don't use this if you need all of <code>d</code> somewhere else. You could make a copy of <code>d</code>, but at that point you're better off just iterating over a filtered copy of <code>d.keys()</code>. For example:</p>
<pre><code>d = {'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
ignored_keys = ['inside']
for key in filter(lambda x: x not in ignored_keys, d.keys()):
print "key: %s, value: %s" % (key, d[key])
</code></pre>
| 0 | 2016-08-30T13:35:00Z | [
"python",
"dictionary"
] |
Way to get specific keys from dictionary | 39,228,893 | <p>I am looking for a way to get specific keys from a dictionary. </p>
<p>In the example below, I am trying to <code>get all keys except 'inside'</code></p>
<pre><code>>>> d
{'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
>>> d_keys = list(set(d.keys()) - set(["inside"]) )
>>> d_keys
['shape', 'fill', 'angle', 'size']
>>> for key in d_keys:
... print "key: %s, value: %s" % (key, d[key])
...
key: shape, value: unchanged
key: fill, value: unchanged
key: angle, value: unchanged
key: size, value: unchanged
</code></pre>
<p>Is there a better way to do this than above?</p>
| 3 | 2016-08-30T13:29:59Z | 39,229,028 | <pre><code>for key in d_keys:
if key not in ['inside']:
print "key: %s, value: %s" % (key, d[key])
</code></pre>
| 0 | 2016-08-30T13:35:18Z | [
"python",
"dictionary"
] |
Way to get specific keys from dictionary | 39,228,893 | <p>I am looking for a way to get specific keys from a dictionary. </p>
<p>In the example below, I am trying to <code>get all keys except 'inside'</code></p>
<pre><code>>>> d
{'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
>>> d_keys = list(set(d.keys()) - set(["inside"]) )
>>> d_keys
['shape', 'fill', 'angle', 'size']
>>> for key in d_keys:
... print "key: %s, value: %s" % (key, d[key])
...
key: shape, value: unchanged
key: fill, value: unchanged
key: angle, value: unchanged
key: size, value: unchanged
</code></pre>
<p>Is there a better way to do this than above?</p>
| 3 | 2016-08-30T13:29:59Z | 39,229,044 | <p>In Python 3.x, most dictionary methods like <code>keys</code> return a view object, which is set-like, so you don't need to convert it to a set again:</p>
<pre><code>>>> d_keys = d.keys() - {"inside",}
>>> d_keys
{'fill', 'size', 'angle', 'shape'}
</code></pre>
<p>Or if you are in python2.x you can use <code>dict.viewkeys()</code>:</p>
<pre><code>d_keys = d.viewkeys() - {"inside",}
</code></pre>
<p>But if you want to only remove one item you can use <code>pop()</code> attribute in order to remove the corresponding item from dictionary and then calling the <code>keys()</code>.</p>
<pre><code>>>> d.pop('inside')
'a->e'
>>> d.keys()
dict_keys(['fill', 'size', 'angle', 'shape'])
</code></pre>
<p>In Python 2, since <code>keys()</code> returns a list object, you can use the <code>remove()</code> method to delete an item directly from the keys. </p>
| 4 | 2016-08-30T13:35:56Z | [
"python",
"dictionary"
] |
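In Python 3 the same set arithmetic works on the plain <code>keys()</code> view, and the surviving keys can be used to rebuild a pruned dict:

```python
d = {'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged',
     'inside': 'a->e', 'size': 'unchanged'}

kept = d.keys() - {'inside'}        # set difference directly on the view
pruned = {k: d[k] for k in kept}    # rebuild a dict without 'inside'

print(sorted(kept))  # ['angle', 'fill', 'shape', 'size']
```

The view stays live: deleting a key from <code>d</code> afterwards is reflected in <code>d.keys()</code>, which is why materializing <code>kept</code> before mutating the dict is the safer pattern.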
Way to get specific keys from dictionary | 39,228,893 | <p>I am looking for a way to get specific keys from a dictionary. </p>
<p>In the example below, I am trying to <code>get all keys except 'inside'</code></p>
<pre><code>>>> d
{'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
>>> d_keys = list(set(d.keys()) - set(["inside"]) )
>>> d_keys
['shape', 'fill', 'angle', 'size']
>>> for key in d_keys:
... print "key: %s, value: %s" % (key, d[key])
...
key: shape, value: unchanged
key: fill, value: unchanged
key: angle, value: unchanged
key: size, value: unchanged
</code></pre>
<p>Is there a better way to do this than above?</p>
| 3 | 2016-08-30T13:29:59Z | 39,229,047 | <p>Not sure what you need with inside but this will give your result.</p>
<pre><code>for key in d.keys():
if key != 'inside':
print "key: %s, value: %s" % (key, d[key])
</code></pre>
| 0 | 2016-08-30T13:36:03Z | [
"python",
"dictionary"
] |
Way to get specific keys from dictionary | 39,228,893 | <p>I am looking for a way to get specific keys from a dictionary. </p>
<p>In the example below, I am trying to <code>get all keys except 'inside'</code></p>
<pre><code>>>> d
{'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
>>> d_keys = list(set(d.keys()) - set(["inside"]) )
>>> d_keys
['shape', 'fill', 'angle', 'size']
>>> for key in d_keys:
... print "key: %s, value: %s" % (key, d[key])
...
key: shape, value: unchanged
key: fill, value: unchanged
key: angle, value: unchanged
key: size, value: unchanged
</code></pre>
<p>Is there a better way to do this than above?</p>
| 3 | 2016-08-30T13:29:59Z | 39,229,080 | <p>a cross-version approach:</p>
<pre><code>d = {'shape': 'unchanged', 'fill': 'unchanged',
'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
d_keys = [k for k in d.keys() if k != 'inside']
print(d_keys)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>['fill', 'shape', 'angle', 'size']
</code></pre>
<p><hr>
expanding it a bit:</p>
<pre><code>def get_pruned_dict(d, excluded_keys):
return {k:v for k,v in d.items() if k not in excluded_keys}
exclude = ('shape', 'inside')
pruned_d = get_pruned_dict(d, exclude)
print(pruned_d)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>{'fill': 'unchanged', 'size': 'unchanged', 'angle': 'unchanged'}
</code></pre>
| 1 | 2016-08-30T13:37:22Z | [
"python",
"dictionary"
] |
Way to get specific keys from dictionary | 39,228,893 | <p>I am looking for a way to get specific keys from a dictionary. </p>
<p>In the example below, I am trying to <code>get all keys except 'inside'</code></p>
<pre><code>>>> d
{'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
>>> d_keys = list(set(d.keys()) - set(["inside"]) )
>>> d_keys
['shape', 'fill', 'angle', 'size']
>>> for key in d_keys:
... print "key: %s, value: %s" % (key, d[key])
...
key: shape, value: unchanged
key: fill, value: unchanged
key: angle, value: unchanged
key: size, value: unchanged
</code></pre>
<p>Is there a better way to do this than above?</p>
| 3 | 2016-08-30T13:29:59Z | 39,229,360 | <p>Here's a little benchmark comparing a couple of good posted solutions:</p>
<pre><code>import timeit
d = {'shape': 'unchanged', 'fill': 'unchanged',
'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
def f1(d):
    return d.viewkeys() - {"inside", }

def f2(d):
    return filter(lambda x: x not in ['inside'], d.viewkeys())
N = 10000000
print timeit.timeit('f1(d)', setup='from __main__ import f1, d', number=N)
print timeit.timeit('f2(d)', setup='from __main__ import f2, d', number=N)
# 5.25808963984
# 8.54371443087
# [Finished in 13.9s]
</code></pre>
<p>Conclusion: f1 is better not only in terms of readability but also in terms of performance. In Python 3.x you'd use <code>keys()</code> instead.</p>
<p>So I'd say @Kasramvd's answer is the right one for this post.</p>
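<p>As noted above, in Python 3 the <code>keys()</code> view is set-like, so the difference can be taken directly without <code>viewkeys()</code> — a minimal sketch:</p>

```python
d = {'shape': 'unchanged', 'fill': 'unchanged',
     'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}

# In Python 3, dict.keys() returns a set-like view, so set difference works directly.
d_keys = d.keys() - {'inside'}
print(sorted(d_keys))  # -> ['angle', 'fill', 'shape', 'size']
```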
| 0 | 2016-08-30T13:49:57Z | [
"python",
"dictionary"
] |
Way to get specific keys from dictionary | 39,228,893 | <p>I am looking for a way to get specific keys from a dictionary. </p>
<p>In the example below, I am trying to <code>get all keys except 'inside'</code></p>
<pre><code>>>> d
{'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
>>> d_keys = list(set(d.keys()) - set(["inside"]) )
>>> d_keys
['shape', 'fill', 'angle', 'size']
>>> for key in d_keys:
... print "key: %s, value: %s" % (key, d[key])
...
key: shape, value: unchanged
key: fill, value: unchanged
key: angle, value: unchanged
key: size, value: unchanged
</code></pre>
<p>Is there a better way to do this than above?</p>
 | 3 | 2016-08-30T13:29:59Z | 39,229,392 | <p>You can use the list method <code>remove()</code>.</p>
<pre><code>d = {'shape': 'unchanged', 'fill': 'unchanged', 'angle': 'unchanged', 'inside': 'a->e', 'size': 'unchanged'}
d_keys = list(d.keys())
d_keys.remove('inside')
for i in d_keys:
    print("Key: {}, Value: {}".format(i, d[i]))
</code></pre>
| 1 | 2016-08-30T13:51:26Z | [
"python",
"dictionary"
] |
identify non pandas datetimeindex? | 39,228,916 | <p>How to identify and delete non <code>datetimeindex</code> rows in the following index?</p>
<p><code>Index([nan, nan, nan, nan, u'aveValue', u'minValue', u'maxValue', u'firstValue', u'lastValue', u'nPointsTot', u'nGood', u'nBlankTimes', u'nBlankValues', u'level_nGood', u'level_nSuspect', u'level_nBad', u'status_nGood', u'2009-01-01 00:00:00', u'2009-01-01 00:05:00', u'2009-01-01 00:10:00', u'2009-01-01 00:15:00', u'2009-01-01 00:20:00', u'2009-01-01 00:25:00', u'2009-01-01 00:30:00', u'2009-01-01 00:35:00', u'2009-01-01 00:40:00', u'2009-01-01 00:45:00', u'2009-01-01 00:50:00', u'2009-01-01 00:55:00', u'2009-01-01 01:00:00', u'2009-01-01 01:05:00', u'2009-01-01 01:10:00', u'2009-01-01 01:15:00', , ...], dtype='object')</code></p>
<p>I need to remove the rows where index is not a timestamp.
What's the most efficient way to do this?</p>
<pre><code>#type (df[0].index)
=> class 'pandas.core.index.Index'
</code></pre>
| 0 | 2016-08-30T13:30:57Z | 39,229,670 | <p>Convert the index <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow">to datetime</a>, coerce errors, and filter <code>NaT</code> results:</p>
<pre><code>df[pd.to_datetime(df.index, errors='coerce').to_series().notnull().values]
</code></pre>
<p>In order to use the <code>notnull</code> method, I convert the datetime index to a series. I then convert the series to a boolean vector that can be used for indexing.</p>
<p><strong>Edit</strong></p>
<p>This should work for any pandas version:</p>
<pre><code>df[pd.Series(pd.to_datetime(df.index, errors='coerce')).notnull().values]
</code></pre>
| 1 | 2016-08-30T14:03:47Z | [
"python",
"pandas"
] |
How to bind a Python socket to a specific domain? | 39,228,928 | <p>I have a Heroku application that has a domain <code>moarcatz.tk</code>. It listens for non-HTTP requests using Python's <code>socket</code>.<br>
The documentation states that if I bind a socket to an empty string as an IP address, it will listen on all available interfaces. I kept getting empty requests from various IP addresses, so I assume that setting the socket to only listen for connections to <code>moarcatz.tk</code> would fix the problem. But I don't know how to bind a socket to a domain name.<br>
I tried <code>'moarcatz.tk</code>' and <code>gethostbyname('moarcatz.tk')</code>, but both give me this error:</p>
<pre><code>OSError: [Errno 99] Cannot assign requested address
</code></pre>
<p>What's up with that?</p>
| 1 | 2016-08-30T13:31:15Z | 39,231,015 | <p>You can't control this via your code, but you can control this via Heroku.</p>
<p>Heroku has a pretty nifty DNS CNAME tool you can use to ensure your app ONLY listens to incoming requests for specific domains -- it's part of the core Heroku platform.</p>
<p>What you do is this:</p>
<pre><code>heroku domains:add www.moarcatz.tk
</code></pre>
<p>Then, go to your DNS provider for <code>moarcatz.tk</code> and add a CNAME record for:</p>
<pre><code>www <heroku-app-name>.herokuapp.com
</code></pre>
<p>This will do two things:</p>
<ul>
<li>Point your DNS to Heroku.</li>
<li>Make Heroku filter the incoming traffic and ALLOW it for that specific domain.</li>
</ul>
| 0 | 2016-08-30T15:02:04Z | [
"python",
"sockets",
"heroku"
] |
Is there an alternative result for Python unit tests, other than a Pass or Fail? | 39,228,974 | <p>I'm writing unit tests that have a database dependency (so technically they're functional tests). Often these tests not only rely on the database to be live and functional, but they can also rely on certain data to be available.</p>
<p>For example, in one test I might query the database to retrieve sample data that I am going to use to test the update or delete functionality. If data doesn't already exist, then this isn't exactly a failure in this context. I'm only concerned about the pass/fail status of the update or delete, and in this situation we didn't even get far enough to test it. So I don't want to give a false positive or false negative.</p>
<p>Is there an elegant way to have the unit test return a 3rd possible result? Such as a warning?</p>
| 0 | 2016-08-30T13:33:15Z | 39,249,810 | <p>In general I think the advice by <a href="http://stackoverflow.com/questions/39228974/is-there-an-alternative-result-for-python-unit-tests-other-than-a-pass-or-fail#comment65795188_39228974">Paul Becotte</a> is best for most cases:</p>
<blockquote>
<p>This is a failure though- your tests failed to set up the system in
the way that your test required. Saying "the data wasn't there, so it
is okay for this test to fail" is saying that you don't really care
about whether the functionality you are testing works. Make your test
reliably insert the data immediately before retrieving it, which will
give you better insight into the state of your system.</p>
</blockquote>
<p>However, in my particular case, I am writing a functional test that relies on data generated and manipulated from several processes. Generating it quickly at the beginning of the test just isn't practical (at least yet).</p>
<p>What ultimately worked as I needed was to use <code>skipTest</code>, as mentioned here:</p>
<p><a href="http://stackoverflow.com/questions/11452981/skip-unittest-if-some-condition-in-setupclass-fails">Skip unittest if some-condition in SetUpClass fails</a></p>
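<p>A minimal sketch of that approach — the database lookup is replaced by a hypothetical <code>fetch_sample()</code> helper that stands in for the real query:</p>

```python
import unittest

class UpdateRecordTest(unittest.TestCase):
    def fetch_sample(self):
        # Hypothetical stand-in for the real database query; returning None
        # simulates "no sample data available".
        return None

    def setUp(self):
        self.sample = self.fetch_sample()
        if self.sample is None:
            # Reported as a skip -- neither a false positive nor a false negative.
            self.skipTest("no sample data to update; prerequisites not met")

    def test_update(self):
        self.fail("would exercise the update against self.sample here")

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(UpdateRecordTest).run(result)
print(len(result.skipped), len(result.failures))  # -> 1 0
```

Because <code>setUp</code> raises <code>SkipTest</code>, the test body never runs and the run is recorded as skipped rather than passed or failed.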
| 0 | 2016-08-31T12:33:45Z | [
"python",
"unit-testing",
"functional-testing"
] |
pivot_table No numeric types to aggregate | 39,229,005 | <p>I want to make a pivot table from the following dataframe with columns <code>sales</code>, <code>rep</code>. The pivot table shows <code>sales</code> but no <code>rep</code>. When I tried with only <code>rep</code>, I got the error <code>DataError: No numeric types to aggregate</code>. How can I fix this such that I see both the numeric field <code>sales</code> and the string field <code>rep</code>?</p>
<pre><code>data = {'year': ['2016', '2016', '2015', '2014', '2013'],
'country':['uk', 'usa', 'fr','fr','uk'],
'sales': [10, 21, 20, 10,12],
'rep': ['john', 'john', 'claire', 'kyle','kyle']
}
print pd.DataFrame(data).pivot_table(index='country', columns='year', values=['rep','sales'])
sales
year 2013 2014 2015 2016
country
fr NaN 10 20 NaN
uk 12 NaN NaN 10
usa NaN NaN NaN 21
print pd.DataFrame(data).pivot_table(index='country', columns='year', values=['rep'])
DataError: No numeric types to aggregate
</code></pre>
| 1 | 2016-08-30T13:34:25Z | 39,229,232 | <p>You could use <code>set_index</code> and <code>unstack</code>:</p>
<pre><code>df = pd.DataFrame(data)
df.set_index(['year','country']).unstack('year')
</code></pre>
<p>yields</p>
<pre><code> rep sales
year 2013 2014 2015 2016 2013 2014 2015 2016
country
fr None kyle claire None NaN 10.0 20.0 NaN
uk kyle None None john 12.0 NaN NaN 10.0
usa None None None john NaN NaN NaN 21.0
</code></pre>
<p>Or, using <code>pivot_table</code> with <code>aggfunc='first'</code>:</p>
<pre><code>df.pivot_table(index='country', columns='year', values=['rep','sales'], aggfunc='first')
</code></pre>
<p>yields</p>
<pre><code> rep sales
year 2013 2014 2015 2016 2013 2014 2015 2016
country
fr None kyle claire None None 10 20 None
uk kyle None None john 12 None None 10
usa None None None john None None None 21
</code></pre>
<p>With <code>aggfunc='first'</code>, each <code>(country, year, rep)</code> or <code>(country, year, sales)</code>
group is aggregrated by taking the first value found. In your case there appears to be no duplicates, so the first value is the same as the only value.</p>
| 3 | 2016-08-30T13:44:19Z | [
"python",
"pandas"
] |
pivot_table No numeric types to aggregate | 39,229,005 | <p>I want to make a pivot table from the following dataframe with columns <code>sales</code>, <code>rep</code>. The pivot table shows <code>sales</code> but no <code>rep</code>. When I tried with only <code>rep</code>, I got the error <code>DataError: No numeric types to aggregate</code>. How can I fix this such that I see both the numeric field <code>sales</code> and the string field <code>rep</code>?</p>
<pre><code>data = {'year': ['2016', '2016', '2015', '2014', '2013'],
'country':['uk', 'usa', 'fr','fr','uk'],
'sales': [10, 21, 20, 10,12],
'rep': ['john', 'john', 'claire', 'kyle','kyle']
}
print pd.DataFrame(data).pivot_table(index='country', columns='year', values=['rep','sales'])
sales
year 2013 2014 2015 2016
country
fr NaN 10 20 NaN
uk 12 NaN NaN 10
usa NaN NaN NaN 21
print pd.DataFrame(data).pivot_table(index='country', columns='year', values=['rep'])
DataError: No numeric types to aggregate
</code></pre>
 | 1 | 2016-08-30T13:34:25Z | 39,229,396 | <p>It seems that the problem comes from the different types of the <code>rep</code> and <code>sales</code> columns. If you convert <code>sales</code> to <code>str</code> type and specify the aggfunc as <code>sum</code>, it works fine:</p>
<pre><code>df.sales = df.sales.astype(str)
pd.pivot_table(df, index=['country'], columns=['year'], values=['rep', 'sales'], aggfunc='sum')
# rep sales
# year 2013 2014 2015 2016 2013 2014 2015 2016
# country
# fr None kyle claire None None 10 20 None
# uk kyle None None john 12 None None 10
#usa None None None john None None None 21
</code></pre>
| 1 | 2016-08-30T13:51:47Z | [
"python",
"pandas"
] |
How can I self hide and show QDialog() in PyQT5? | 39,229,053 | <p>I have a GUI that was generated using Qt Designer; I used pyuic5 to generate a .py file. In a separate py file (program.py) I import my UI and do all my work there.</p>
<p><strong>program.py</strong></p>
<pre><code>import sys, os, time
from subprocess import call
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
from PyCred_GUI import Ui_Dialog
class MyGUI(Ui_Dialog):
    def __init__(self, dialog):
        Ui_Dialog.__init__(self)
        self.setupUi(dialog)
        self.pushButton_2.clicked.connect(self.cancelbutton)

    def cancelbutton(self):
        exit()

if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    dialog = QtWidgets.QDialog()
    dialog.setWindowFlags(QtCore.Qt.WindowSystemMenuHint)
    prog = MyGUI(dialog)
    dialog.show()
    sys.exit(app.exec_())
</code></pre>
<p>I pulled a lot out just to focus on the issue here. When I click my Cancel button, I want the window to hide, set a timer, and then reappear after so many seconds. I have tried every combination of <code>self.close()</code>, <code>self.hide()</code> and <code>self.destroy()</code>, and none of them hides my window. I get an error that says</p>
<p>"<strong>AttributeError: 'MyGUI' object has no attribute 'hide'</strong>"</p>
<p>Which makes sense because MyGUI doesn't have a hide() function. I am at a complete loss on how to hide this window.</p>
<p><strong>EDIT</strong> (Solved)
For future people, as suggested by <a href="http://stackoverflow.com/users/1841194/hi-im-frogatto">Hi Im Frogatto</a> dialog.hide() worked. </p>
 | 0 | 2016-08-30T13:36:20Z | 39,230,866 | <p>In your code snippet, <code>dialog</code> is of type <code>QDialog</code> and therefore has a <code>hide</code> method. Instances of the <code>MyGUI</code> class, however, do not seem to have such a method. So, if you call <code>dialog.hide()</code> in that <code>__init__()</code> function, you can hide it.</p>
| 1 | 2016-08-30T14:55:58Z | [
"python",
"qt",
"python-3.x",
"pyqt5"
] |
Selenium find_element_by_xpath last | 39,229,109 | <p>I have a bunch of elements and need to select the last one using xpath.
The elements look like:</p>
<pre><code>xpath=(//a[contains(text(),'Actions')])[2]
xpath=(//a[contains(text(),'Actions')])[3]
xpath=(//a[contains(text(),'Actions')])[4]
xpath=(//a[contains(text(),'Actions')])[5]
xpath=(//a[contains(text(),'Actions')])[6]
xpath=(//a[contains(text(),'Actions')])[7]
</code></pre>
<p><a href="http://prntscr.com/cc3gxd" rel="nofollow">http://prntscr.com/cc3gxd</a></p>
<p>Using Selenium IDE, I see, that I can select last element, by using:</p>
<pre><code>xpath=(//a[contains(text(),'Actions')])[last ()]
</code></pre>
<p>But Selenium Webdriver does not understand this syntax (error NoSuchElementException)</p>
<p>It only understands this syntax:</p>
<pre><code>driver.find_element_by_xpath("//a[contains(text(),'Actions')][last ()]").click()
</code></pre>
<p>But in this case I will select first element.</p>
<p>Please help me to rewrite xpath to select last element instead of first in Selenium Webdriver.</p>
 | 0 | 2016-08-30T13:38:42Z | 39,232,685 | <p>Your code is selecting the first element because you are using <code>find_element_by_xpath</code>, which returns only a single (the first matching) element. Try <code>find_elements_by_xpath</code> (plural) and take the last match:</p>
<pre><code>ele = driver.find_elements_by_xpath("//a[contains(text(),'Actions')]")
ele[-1].click()
</code></pre>
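<p>The same "collect all matches, then index the last one" idea can be sketched outside Selenium with the standard library's ElementTree, using a hypothetical document with six <code>Actions</code> links that mirrors the question:</p>

```python
import xml.etree.ElementTree as ET

# Hypothetical markup with six <a>Actions</a> links, mirroring the question.
root = ET.fromstring("<div>" + "<a>Actions</a>" * 6 + "</div>")

links = root.findall(".//a")   # find_elements-style: every matching element
last = links[-1]               # Python-side indexing picks the last one
print(len(links), last.text)   # -> 6 Actions
```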
| 0 | 2016-08-30T16:28:57Z | [
"python",
"selenium",
"xpath"
] |
How to get text of children tag's description using beautiful soup | 39,229,110 | <p>I am using beautiful soup to scrape some data from
<a href="http://www.foodily.com/r/0y1ygzt3zf-perfect-vanilla-cupcakes-by-annie-s" rel="nofollow">foodily.com</a></p>
<p>On the above page there is a div with class 'ings', and I want to get the data within its p tags. For that I have written the code below:</p>
<pre><code>ingredients = soup.find('div', {"class": "ings"}).findChildren('p')
</code></pre>
<p>It provides me the list of ingredients, but with the p tags included.</p>
| 0 | 2016-08-30T13:38:44Z | 39,229,193 | <p>Call <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#get-text" rel="nofollow"><code>get_text()</code></a> for every <code>p</code> element found inside the <code>div</code> element with <code>class="ings"</code>.</p>
<p>Complete working code:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
with requests.Session() as session:
    session.headers.update({"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36"})
    response = session.get("http://www.foodily.com/r/0y1ygzt3zf-perfect-vanilla-cupcakes-by-annie-s")
    soup = BeautifulSoup(response.content, "html.parser")
    ingredients = [ingredient.get_text() for ingredient in soup.select('div.ings p')]
    print(ingredients)
</code></pre>
<p>Prints:</p>
<pre><code>[
u'For the cupcakes:',
u'1 stick (113g) butter/marg*',
u'1 cup caster sugar', u'2 eggs',
...
u'1 tbsp vanilla extract',
u'2-3tbsp milk',
u'Sprinkles to decorate, optional'
]
</code></pre>
<p>Note that I've also improved your locator a bit and switched to a <code>div.ings p</code> <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors" rel="nofollow">CSS selector</a>.</p>
| 2 | 2016-08-30T13:42:31Z | [
"python",
"beautifulsoup"
] |
How to get text of children tag's description using beautiful soup | 39,229,110 | <p>I am using beautiful soup to scrape some data from
<a href="http://www.foodily.com/r/0y1ygzt3zf-perfect-vanilla-cupcakes-by-annie-s" rel="nofollow">foodily.com</a></p>
<p>On the above page there is a div with class 'ings', and I want to get the data within its p tags. For that I have written the code below:</p>
<pre><code>ingredients = soup.find('div', {"class": "ings"}).findChildren('p')
</code></pre>
<p>It provides me the list of ingredients, but with the p tags included.</p>
| 0 | 2016-08-30T13:38:44Z | 39,231,652 | <p>Another way:</p>
<pre><code>import requests
from bs4 import BeautifulSoup as bs
url = "http://www.foodily.com/r/0y1ygzt3zf-perfect-vanilla-cupcakes-by-annie-s"
source = requests.get(url)
text_new = source.text
soup = bs(text_new, "html.parser")
ingredients = soup.findAll('div', {"class": "ings"})
for a in ingredients:
    print(a.text)
</code></pre>
<p>It will print:</p>
<pre><code>For the cupcakes:
1 stick (113g) butter/marg*
1 cup caster sugar
2 eggs
1 tbsp vanilla extract
1 and 1/2 cups plain flour
2 tsp baking powder
1/2 cup milk (I use Skim)
For the frosting:
2 sticks (226g) unsalted butter, at room temp
2 and 1/2 cups icing sugar, sifted
1 tbsp vanilla extract
2-3tbsp milk
Sprinkles to decorate, optional
</code></pre>
| 0 | 2016-08-30T15:32:23Z | [
"python",
"beautifulsoup"
] |
How to get text of children tag's description using beautiful soup | 39,229,110 | <p>I am using beautiful soup to scrape some data from
<a href="http://www.foodily.com/r/0y1ygzt3zf-perfect-vanilla-cupcakes-by-annie-s" rel="nofollow">foodily.com</a></p>
<p>On the above page there is a div with class 'ings', and I want to get the data within its p tags. For that I have written the code below:</p>
<pre><code>ingredients = soup.find('div', {"class": "ings"}).findChildren('p')
</code></pre>
<p>It provides me the list of ingredients, but with the p tags included.</p>
| 0 | 2016-08-30T13:38:44Z | 39,236,959 | <p>If you already have the list of <code>p</code> tags, use <code>get_text()</code>. This will return only the text of them:</p>
<pre><code>ingredient_list = [p.get_text() for p in ingredients]
</code></pre>
<p>The result array will look like:</p>
<pre><code>ingredient_list = [
'For the cupcakes:', '1 stick (113g) butter/marg*',
'1 cup caster sugar','2 eggs', ...
]
</code></pre>
| 0 | 2016-08-30T20:56:20Z | [
"python",
"beautifulsoup"
] |
Firefox works but PhantomJS throws Unable to find element with css selector | 39,229,207 | <p>I recently changed from <code>webdriver.Firefox()</code> to <code>webdriver.PhantomJS()</code> to get a speed improvement, and I started getting some errors when I try to find an element on my datepicker in order to click it afterwards</p>
<pre><code> self.driver = webdriver.PhantomJS()
self.driver.set_window_size(1280, 1024)
self.driver.find_element_by_css_selector(
"#ui-datepicker-div td.full-selected.full-changeover > a"
).click()
Message: {"errorMessage":"Unable to find element with css selector '#ui-datepicker-div td.full-selected.full-changeover > a'","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"146","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:54784","User-Agent":"Python-urllib/3.5"},"httpVersion":"1.1","method":"POST","post":"{\"value\": \"#ui-datepicker-div td.full-selected.full-changeover > a\", \"sessionId\": \"8b584560-6eb6-11e6-bb4a-77906b62d5cb\", \"using\": \"css selector\"}","url":"/element","urlParsed":{"anchor":"","query":"","file":"element","directory":"/","path":"/element","relative":"/element","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/element","queryKey":{},"chunks":["element"]},"urlOriginal":"/session/8b584560-6eb6-11e6-bb4a-77906b62d5cb/element"}}
Screenshot: available via screen
</code></pre>
<p>I am using <code>selenium==2.53.6</code> and <code>phantomjs==2.1.12</code></p>
<p>UPDATE (here is the code):</p>
<pre><code>def get_price(self, url):
url = "https://www.homeaway.pt/arrendamento-ferias/p1823902"
# Lets reset it
self.driver.get(url)
wait = WebDriverWait(self.driver, 5)
prices = defaultdict(list)
count = 0
for month in range(self.month_count):
next_month_iteration = False
checkin_date = wait.until(
EC.visibility_of_element_located(
(
By.CSS_SELECTOR,
".quotebar-container input[id=startDateInput]"
)
)
)
checkin_date.click()
for counting in range(count):
try:
self.driver.execute_script(
'$( "a.ui-datepicker-next" ).click()'
)
except WebDriverException:
log.error(
'WebDriverException: Message: '
'getAvailabilityIndexForDate requires a Date object'
)
next_month_iteration = True
break
if next_month_iteration:
# Skip the next iteration cause there was an error
continue
year = self.driver.find_element_by_css_selector(
".ui-datepicker-year").text
current_month = self.driver.find_element_by_css_selector(
".ui-datepicker-month").text
log.info(
'Current Month is "%s"',
current_month.encode('ascii', 'ignore').decode('utf-8')
)
try:
first_available_checkin_date = wait.until(
EC.element_to_be_clickable(
(
By.CSS_SELECTOR,
"#ui-datepicker-div td.full-changeover > a"
)
)
)
except TimeoutException:
log.warning('Is there any date available here "%s" ?', url)
continue
else:
log.info(
'First_available_checkin_date is "%s"',
first_available_checkin_date.text
)
ActionChains(self.driver).move_to_element(
first_available_checkin_date).perform()
# self.driver.find_element_by_css_selector(
# "#ui-datepicker-div td.full-selected.full-changeover > a"
# ).click()
choose_date = wait.until(
EC.visibility_of_element_located(
(
By.CSS_SELECTOR,
"#ui-datepicker-div td.full-selected.full-changeover > a"
)
)
)
choose_date.click()
</code></pre>
<p>Any ideas how I can fix this?</p>
| 1 | 2016-08-30T13:43:05Z | 39,235,433 | <p>I've reproduced your issue and was able to fix it by <em>maximizing the browser window</em>:</p>
<pre><code>self.driver = webdriver.PhantomJS()
self.driver.maximize_window()
</code></pre>
| 0 | 2016-08-30T19:15:33Z | [
"python",
"python-3.x",
"selenium",
"datepicker",
"phantomjs"
] |
Join list from specific index | 39,229,286 | <p>I have a list like below:</p>
<pre><code>ingredients = ['apple','cream','salt','sugar','cider']
</code></pre>
<p>I want to join this list to get a string but I want to join from 2nd index till the last one.</p>
<p>to get this : <code>"salt sugar cider"</code>
Length of list may vary.</p>
<p>Is it possible to do this with join function or I have to do it by looping over elements?</p>
| 0 | 2016-08-30T13:46:52Z | 39,229,311 | <p>Just <a href="http://stackoverflow.com/questions/509211/explain-pythons-slice-notation"><em>slice</em> the list</a>:</p>
<pre><code>>>> l = ['apple', 'cream', 'salt', 'sugar', 'cider']
>>> ' '.join(l[2:])
'salt sugar cider'
</code></pre>
<p>We don't specify the end of the slice which means that it would slice to the last element of the list.</p>
| 8 | 2016-08-30T13:48:02Z | [
"python",
"django",
"list",
"python-2.7"
] |
Convert 11/2/1998 to 110298 | 39,229,313 | <p>What is the simplest way to convert a date formatted <code>3/2/2004</code> to <code>030204</code> in pure bash, or if not possible, then Python?
I need to add zeros in front of any single-digit parts of the date, remove the slashes, and keep only the last two digits of the 4-digit year.
I know I could write an extensive Python script that would create an array splitting at <code>/</code>, and for any single digit arrays I would add a <code>0</code>. I don't want to do this because it seems unnecessary. Thanks in advance for any help!</p>
| -8 | 2016-08-30T13:48:03Z | 39,229,390 | <pre><code>date -d'11/2/1998' +%m%d%y
110298
</code></pre>
| 2 | 2016-08-30T13:51:04Z | [
"python",
"bash",
"awk",
"sed",
"grep"
] |
Convert 11/2/1998 to 110298 | 39,229,313 | <p>What is the simplest way to convert a date formatted <code>3/2/2004</code> to <code>030204</code> in pure bash, or if not possible, then Python?
I need to add zeros in front of any single-digit parts of the date, remove the slashes, and keep only the last two digits of the 4-digit year.
I know I could write an extensive Python script that would create an array splitting at <code>/</code>, and for any single digit arrays I would add a <code>0</code>. I don't want to do this because it seems unnecessary. Thanks in advance for any help!</p>
| -8 | 2016-08-30T13:48:03Z | 39,229,535 | <pre><code>import datetime
d = datetime.datetime.strptime("11/2/1998", "%d/%m/%Y")
print d.strftime("%d%m%y")
</code></pre>
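<p>A Python 3 version of the same idea, wrapped in a small helper. The month-first input format here is an assumption, chosen because it is what produces <code>110298</code> from <code>11/2/1998</code>:</p>

```python
from datetime import datetime

def compact_date(s, fmt_in="%m/%d/%Y"):
    # Parse, then re-emit as MMDDYY with zero padding; the month-first
    # input format is an assumption based on the 11/2/1998 -> 110298 example.
    return datetime.strptime(s, fmt_in).strftime("%m%d%y")

print(compact_date("11/2/1998"))  # -> 110298
print(compact_date("3/2/2004"))   # -> 030204
```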
| 2 | 2016-08-30T13:57:31Z | [
"python",
"bash",
"awk",
"sed",
"grep"
] |
Encoding error when reading url with urllib | 39,229,439 | <p>When I try to scrape a wikipedia site with a special character in its URL, using urllib.request and Python, I get the following error <code>UnicodeEncodeError: 'ascii' codec can't encode character '\xf8' in position 23: ordinal not in range(128)</code></p>
<p>The code:</p>
<pre><code># -*- coding: utf-8 -*-
import urllib.request as ur
url = "https://no.wikipedia.org/wiki/Jonas_Gahr_Støre"
r = ur.urlopen(url).read()
</code></pre>
<p>How can I use urllib.request with utf-8 encoding?</p>
| 0 | 2016-08-30T13:53:36Z | 39,229,490 | <p>New plan - Using requests</p>
<pre><code>from bs4 import BeautifulSoup
import requests
def scrape():
    url = "http://no.wikipedia.org/wiki/Jonas_Gahr_Støre"
    r = requests.get(url).content
    soup = BeautifulSoup(r).encode('utf-8')
    print soup
    print r

if __name__ == '__main__':
    scrape()
</code></pre>
| 0 | 2016-08-30T13:55:30Z | [
"python",
"urllib"
] |
Encoding error when reading url with urllib | 39,229,439 | <p>When I try to scrape a wikipedia site with a special character in its URL, using urllib.request and Python, I get the following error <code>UnicodeEncodeError: 'ascii' codec can't encode character '\xf8' in position 23: ordinal not in range(128)</code></p>
<p>The code:</p>
<pre><code># -*- coding: utf-8 -*-
import urllib.request as ur
url = "https://no.wikipedia.org/wiki/Jonas_Gahr_Støre"
r = ur.urlopen(url).read()
</code></pre>
<p>How can I use urllib.request with utf-8 encoding?</p>
 | 0 | 2016-08-30T13:53:36Z | 39,229,882 | <p>Apparently, urllib can only handle ASCII requests, and converting your url to ascii gives an error on your special character.
Replacing ø with %C3%B8, the proper way to encode this special character in http, seems to do the trick. However, I can't find a method to do this automatically like your browser does.</p>
<p>example:</p>
<pre><code>>>> f="https://no.wikipedia.org/wiki/Jonas_Gahr_St%C3%B8re"
>>> import urllib.request
>>> g=urllib.request.urlopen(f)
>>> text=g.read()
>>> text[:100]
b'<!DOCTYPE html>\n<html class="client-nojs" lang="nb" dir="ltr">\n<head>\n<meta charset="UTF-8"/>\n<title'
</code></pre>
<p>The answer above doesn't work, because it encodes after the request has already been processed, while the error occurs during request processing.</p>
| 0 | 2016-08-30T14:12:18Z | [
"python",
"urllib"
] |
Encoding error when reading url with urllib | 39,229,439 | <p>When I try to scrape a wikipedia site with a special character in its URL, using urllib.request and Python, I get the following error <code>UnicodeEncodeError: 'ascii' codec can't encode character '\xf8' in position 23: ordinal not in range(128)</code></p>
<p>The code:</p>
<pre><code># -*- coding: utf-8 -*-
import urllib.request as ur
url = "https://no.wikipedia.org/wiki/Jonas_Gahr_Støre"
r = ur.urlopen(url).read()
</code></pre>
<p>How can I use urllib.request with utf-8 encoding?</p>
| 0 | 2016-08-30T13:53:36Z | 39,229,884 | <p>If using a library is an option, I would suggest the awesome <a href="http://docs.python-requests.org/" rel="nofollow">requests</a></p>
<pre><code># -*- coding: utf-8 -*-
import requests
r = requests.get('https://no.wikipedia.org/wiki/Jonas_Gahr_Støre')
print(r.text)
</code></pre>
| 0 | 2016-08-30T14:12:26Z | [
"python",
"urllib"
] |
Encoding error when reading url with urllib | 39,229,439 | <p>When I try to scrape a wikipedia site with a special character in its URL, using urllib.request and Python, I get the following error <code>UnicodeEncodeError: 'ascii' codec can't encode character '\xf8' in position 23: ordinal not in range(128)</code></p>
<p>The code:</p>
<pre><code># -*- coding: utf-8 -*-
import urllib.request as ur
url = "https://no.wikipedia.org/wiki/Jonas_Gahr_Støre"
r = ur.urlopen(url).read()
</code></pre>
<p>How can I use urllib.request with utf-8 encoding?</p>
| 0 | 2016-08-30T13:53:36Z | 39,230,084 | <p>Using the <a href="http://stackoverflow.com/a/39229882/2169327">answer from @mousetail</a> I wrote a custom encoder for the characters I needed:</p>
<pre><code>def properEncode(url):
    url = url.replace("ø", "%C3%B8")
    url = url.replace("å", "%C3%A5")
    url = url.replace("æ", "%C3%A6")
    url = url.replace("Ø", "%C3%98")
    url = url.replace("Å", "%C3%85")
    return url
</code></pre>
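<p>The per-character table can also be generated automatically: <code>urllib.parse.quote</code> percent-encodes the path for you, which covers every non-ASCII character at once — a sketch:</p>

```python
from urllib.parse import quote, urlsplit, urlunsplit

def encode_url_path(url):
    # Percent-encode only the path component, leaving scheme/host untouched.
    parts = urlsplit(url)
    return urlunsplit(parts._replace(path=quote(parts.path)))

print(encode_url_path("https://no.wikipedia.org/wiki/Jonas_Gahr_Støre"))
# -> https://no.wikipedia.org/wiki/Jonas_Gahr_St%C3%B8re
```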
| -1 | 2016-08-30T14:20:49Z | [
"python",
"urllib"
] |
Error while accessing dictionary in python | 39,229,607 | <p>I have a dictionary named <code>json_dict</code> given below. </p>
<p>I need to access the element <code>==> json_dict['OptionSettings'][3]['Value']</code>.</p>
<p>I need to access the element using the syntax</p>
<p><code>print(json_dict[parameter])</code>. </p>
<p>When I give a parameter such as</p>
<p><code>param="['OptionSettings'][3]['Value']"</code> or</p>
<p><code>param="'OptionSettings'][3]['Value']"</code></p>
<p>I am getting an error like the one below:</p>
<p><code>KeyError: "['OptionSettings'][3]['Value']"</code>.</p>
<p>I tried to use the below solution but it just printed a string</p>
<pre><code>str1="json_dict"
print(str1+param)
</code></pre>
<p>Full Dictionary below:</p>
<pre><code>{
"ApplicationName": "Test",
"EnvironmentName": "ABC-Nodejs",
"CNAMEPrefix": "ABC-Neptune",
"SolutionStackName": "64bit Amazon Linux 2016.03 v2.1.1 running Node.js",
"OptionSettings": [
{
"Namespace": "aws:ec2:vpc",
"OptionName": "AssociatePublicIpAddress",
"Value": "true"
},
{
"Namespace": "aws:elasticbeanstalk:environment",
"OptionName": "EnvironmentType",
"Value": "LoadBalanced"
},
{
"Namespace": "aws:ec2:vpc",
"OptionName": "Subnets",
"Value": "param1"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "SecurityGroups",
"Value": "param2"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "MinSize",
"Value": "1"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "MaxSize",
"Value": "4"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "Availability Zones",
"Value": "Any"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "Cooldown",
"Value": "360"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "IamInstanceProfile",
"Value": "NepRole"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "MonitoringInterval",
"Value": "5 minutes"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "RootVolumeType",
"Value": "gp2"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "RootVolumeSize",
"Value": "10"
},
{
"Namespace": "aws:elasticbeanstalk:sns:topics",
"OptionName": "Notification Endpoint",
"Value": "sunil.kumar2@pb.com"
},
{
"Namespace": "aws:elasticbeanstalk:hostmanager",
"OptionName": "LogPublicationControl",
"Value": "false"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "DeploymentPolicy",
"Value": "Rolling"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "BatchSizeType",
"Value": "Percentage"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "BatchSize",
"Value": "100"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "HealthCheckSuccessThreshold",
"Value": "Ok"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "IgnoreHealthCheck",
"Value": "false"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "Timeout",
"Value": "600"
},
{
"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
"OptionName": "RollingUpdateEnabled",
"Value": "false"
},
{
"Namespace": "aws:ec2:vpc",
"OptionName": "ELBSubnets",
"Value": "param3"
},
{
"Namespace": "aws:elb:loadbalancer",
"OptionName": "SecurityGroups",
"Value": "param4"
},
{
"Namespace": "aws:elb:loadbalancer",
"OptionName": "ManagedSecurityGroup",
"Value": "param4"
}
]
}
</code></pre>
| -5 | 2016-08-30T14:01:12Z | 39,229,696 | <p>Unfortunately you can't do that. </p>
<p>When you type <code>param="['OptionSettings'][3]['Value']"</code> and then <code>json_dict[param]</code>, you are basically asking for the value represented by the single key <code>"['OptionSettings'][3]['Value']"</code>, which does not exist.</p>
<p>You'll have to navigate through the levels until you get to the last one.</p>
<hr>
<p>But of course, if you need a <em>one-liner</em>, you can always create some logic and extract that to a method.</p>
<p>For example, instead of </p>
<pre><code>print(json_dict[param])
</code></pre>
<p>you could use something like</p>
<pre><code>print(get_json_value(json_dict, param))
</code></pre>
<p>and define a function such as </p>
<pre><code>import re
def get_json_value(json_dict, params):
    # findall keeps the quotes: ["'OptionSettings'", '3', "'Value'"]
    list_of_params = re.findall(r'\[([^]]*)\]', params)
    value = json_dict
    for elem in list_of_params:
        elem = elem.strip('\'"')    # drop the quotes around string keys
        if isinstance(value, list):
            elem = int(elem)        # list indices must be integers
        value = value[elem]
    return value
</code></pre>
<p>I haven't tested it but it should work fine. <br/>
(Also, it is just a demo made to guide you through an alternate solution)</p>
| 2 | 2016-08-30T14:04:43Z | [
"python",
"json",
"python-2.7",
"python-3.x",
"dictionary"
] |
Error while accessing dictionary in python | 39,229,607 | <p>I have a dictionary named <code>json_dict</code> given below. </p>
<p>I need to access the element <code>==> json_dict['OptionSettings'][3]['Value']</code>.</p>
<p>I need to access the element using the syntax</p>
<p><code>print(json_dict[parameter])</code>. </p>
<p>When I give a parameter such as</p>
<p><code>param="['OptionSettings'][3]['Value']"</code> or</p>
<p><code>param="'OptionSettings'][3]['Value']"</code></p>
<p>I am getting an error like the one below:</p>
<p><code>KeyError: "['OptionSettings'][3]['Value']"</code>.</p>
<p>I tried to use the below solution but it just printed a string</p>
<pre><code>str1="json_dict"
print(str1+param)
</code></pre>
<p>Full Dictionary below:</p>
<pre><code>{
"ApplicationName": "Test",
"EnvironmentName": "ABC-Nodejs",
"CNAMEPrefix": "ABC-Neptune",
"SolutionStackName": "64bit Amazon Linux 2016.03 v2.1.1 running Node.js",
"OptionSettings": [
{
"Namespace": "aws:ec2:vpc",
"OptionName": "AssociatePublicIpAddress",
"Value": "true"
},
{
"Namespace": "aws:elasticbeanstalk:environment",
"OptionName": "EnvironmentType",
"Value": "LoadBalanced"
},
{
"Namespace": "aws:ec2:vpc",
"OptionName": "Subnets",
"Value": "param1"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "SecurityGroups",
"Value": "param2"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "MinSize",
"Value": "1"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "MaxSize",
"Value": "4"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "Availability Zones",
"Value": "Any"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "Cooldown",
"Value": "360"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "IamInstanceProfile",
"Value": "NepRole"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "MonitoringInterval",
"Value": "5 minutes"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "RootVolumeType",
"Value": "gp2"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "RootVolumeSize",
"Value": "10"
},
{
"Namespace": "aws:elasticbeanstalk:sns:topics",
"OptionName": "Notification Endpoint",
"Value": "sunil.kumar2@pb.com"
},
{
"Namespace": "aws:elasticbeanstalk:hostmanager",
"OptionName": "LogPublicationControl",
"Value": "false"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "DeploymentPolicy",
"Value": "Rolling"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "BatchSizeType",
"Value": "Percentage"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "BatchSize",
"Value": "100"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "HealthCheckSuccessThreshold",
"Value": "Ok"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "IgnoreHealthCheck",
"Value": "false"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "Timeout",
"Value": "600"
},
{
"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
"OptionName": "RollingUpdateEnabled",
"Value": "false"
},
{
"Namespace": "aws:ec2:vpc",
"OptionName": "ELBSubnets",
"Value": "param3"
},
{
"Namespace": "aws:elb:loadbalancer",
"OptionName": "SecurityGroups",
"Value": "param4"
},
{
"Namespace": "aws:elb:loadbalancer",
"OptionName": "ManagedSecurityGroup",
"Value": "param4"
}
]
}
</code></pre>
| -5 | 2016-08-30T14:01:12Z | 39,249,022 | <p>This worked for me</p>
<pre><code>str1="json_dict"
params="['OptionSettings'][3]['Value']"
str2=str1+params
print(eval(str2))
</code></pre>
<p>Here the use of the function <strong><em>eval()</em></strong> is the key to solving this. Note that <code>eval()</code> executes whatever expression it is given, so only use it on strings you control.</p>
| 0 | 2016-08-31T11:56:57Z | [
"python",
"json",
"python-2.7",
"python-3.x",
"dictionary"
] |
How to execute set of commands after sudo using python script | 39,229,620 | <p>I am trying to automate a deployment process using Python. In the deployment I do "dzdo su - sysid" first and then perform the deployment process, but I am not able to handle this part in Python. I have done a similar thing in shell, where I used the following piece of code:</p>
<pre><code>/bin/bash
psh su - sysid << EOF
. /users/home/sysid/.bashrc
./deployment.sh
EOF
</code></pre>
<p>This handles execution of deployment.sh very well: it does the sudo and then executes the script as the sysid user.
I am trying to do a similar thing using Python, but I am not able to find any alternative to << EOF in Python. </p>
<p>I am using subprocess.Popen to execute the dzdo part, and it does the dzdo, but when I try to execute the next command, e.g. "ls -l", the command does not get executed as sysid. Instead, I have to exit the sysid session, and as soon as I exit, it executes "ls -l" in my home directory, which is of no use. Can someone please help me with this?</p>
<p>And one more thing, in this case I am not calling any deployment.sh but I will call commands like cp, rm, mkdir etc.</p>
| 0 | 2016-08-30T14:01:42Z | 39,254,379 | <p>The text between <code><< EOF</code> and <code>EOF</code> in your shell script example will be written to the standard input of the <code>psh</code> process. So you have to redirect the standard input of your <code>Popen</code> instance and write the data either directly into the <code>stdin</code> file of your instance or use the <code>communicate()</code> method:</p>
<pre><code>#!/usr/bin/env python
# coding: utf8
from __future__ import absolute_import, division, print_function
from subprocess import Popen, PIPE
SHELL_COMMANDS = r'''\
. /users/home/sysid/.bashrc
./deployment.sh
'''
def main():
process = Popen(['psh', 'su', '-', 'sysid'], stdin=PIPE)
process.communicate(SHELL_COMMANDS)
if __name__ == '__main__':
main()
</code></pre>
<p>If you need the process' standard output and/or standard error, then you need to pipe those too and work with the return value of the <code>communicate()</code> call.</p>
| 0 | 2016-08-31T16:10:11Z | [
"python",
"linux",
"shell",
"subprocess"
] |
Encoding/decoding troubleshooting for python CSVs and JSON files | 39,229,646 | <p>I initially dumped a file which contained a particular sentence using:</p>
<pre><code> with open(labelFile, "wb") as out:
json.dump(result, out,indent=4)
</code></pre>
<p>This sentence within the JSON looks like:</p>
<pre><code>"-LSB- 97 -RSB- However , the influx of immigrants from mainland China , approximating NUMBER_SLOT per year , is a significant contributor to its population growth \u00c3 cents \u00c2 $ \u00c2 `` a daily quota of 150 Mainland Chinese with family ties in LOCATION_SLOT are granted a `` one way permit '' .",
</code></pre>
<p>I then proceeded to load this in via:</p>
<pre><code>with open(sys.argv[1]) as sentenceFile:
sentenceFile = json.loads(sentenceFile.read())
</code></pre>
<p>process it and then write this out to a CSV using:</p>
<pre><code>with open(sys.argv[2], 'wb') as csvfile:
fieldnames = ['x','y','z'
]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for sentence in sentence2locations2values:
sentence = unicode(sentence['parsedSentence']).encode("utf-8")
writer.writerow({'x': sentence})
</code></pre>
<p>Which made the sentence in the CSV file opened in Excel for Mac:</p>
<pre><code>-LSB- 97 -RSB- However , the influx of immigrants from mainland China , approximating NUMBER_SLOT per year , is a significant contributor to its population growth ÃÆ cents Ãâ $ Ãâ `` a daily quota of 150 Mainland Chinese with family ties in LOCATION_SLOT are granted a `` one way permit '' .
</code></pre>
<p>I then proceeded to take this from Excel for Macs to Google Sheets, where it is:</p>
<pre><code>-LSB- 97 -RSB- However , the influx of immigrants from mainland China , approximating NUMBER_SLOT per year , is a significant contributor to its population growth à cents à $ à `` a daily quota of 150 Mainland Chinese with family ties in LOCATION_SLOT are granted a `` one way permit '' .
</code></pre>
<p>Note, very slightly different, the <code>Ã</code> has replaced the <code>Ã</code>.</p>
<p>and then labelled it, bringing it back into Excel for Mac at which point it became back to:</p>
<pre><code>-LSB- 97 -RSB- However , the influx of immigrants from mainland China , approximating NUMBER_SLOT per year , is a significant contributor to its population growth à cents à $ à `` a daily quota of 150 Mainland Chinese with family ties in LOCATION_SLOT are granted a `` one way permit '' .
</code></pre>
<p><strong>How do I initially read in the CSV, containing a sentence like</strong>:</p>
<pre><code>-LSB- 97 -RSB- However , the influx of immigrants from mainland China , approximating NUMBER_SLOT per year , is a significant contributor to its population growth ÃÆ cents Ãâ $ Ãâ `` a daily quota of 150 Mainland Chinese with family ties in LOCATION_SLOT are granted a `` one way permit '' .
</code></pre>
<p><strong>to a value which is:</strong></p>
<pre><code>"-LSB- 97 -RSB- However , the influx of immigrants from mainland China , approximating 45,000 per year , is a significant contributor to its population growth \u00c3 cents \u00c2 $ \u00c2 `` a daily quota of 150 Mainland Chinese with family ties in Hong Kong are granted a `` one way permit '' .",
</code></pre>
<p>So that it matches what was in the original json dump right at the start of this question?</p>
<p><strong>EDIT</strong></p>
<p>I check from this and see that the encoding of <code>\u00c3</code> to <code>Ã</code>, the format in Google sheets, is actually Latin 8.</p>
<p><strong>EDIT</strong></p>
<p>I ran <code>enca</code> and see that the original dumped file is in 7bit ASCII characters, and my CSV is in unicode. So I need to load in as unicode and convert to 7bit ASCII?</p>
| 2 | 2016-08-30T14:02:48Z | 39,236,900 | <p>I figured out the solution to this. The solution was to decode the CSV file from its original format (identified as <code>UTF-8</code>) and then the sentence becomes the original one. So:</p>
<pre><code>import csv
import sys

csvfile = open(sys.argv[1], 'r')
fieldnames = ("x","y","z")
reader = csv.DictReader(csvfile, fieldnames)
next(reader)
for i,row in enumerate(reader):
row['x'] = row['x'].decode("utf-8")
</code></pre>
<p>The very strange thing that happened is that when I edited the CSV file in Excel for Mac and saved, every time it seems to convert to a different encoding. I warn other users about this as it is a huge headache.</p>
| 1 | 2016-08-30T20:52:35Z | [
"python",
"csv",
"encoding",
"decode",
"utf"
] |
boto3 Create non-expiring URLS | 39,229,688 | <p>In boto3, there is a function to generate pre-signed URLs, but they time out.
See: <a href="http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.generate_presigned_url" rel="nofollow">http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.generate_presigned_url</a></p>
<p>Is there a way to create non-pre-signed URLS that do not expire?</p>
| 0 | 2016-08-30T14:04:24Z | 39,230,260 | <p>There is no way to create non-pre-signed URLs, or pre-signed URLs without an expiration. The basic use of pre-signed URLs is:</p>
<blockquote>
<p>A pre-signed URL gives you access to the object identified in the URL,
provided that the creator of the pre-signed URL has permissions to
access that object. That is, if you receive a pre-signed URL to upload
an object, you can upload the object only if the creator of the
pre-signed URL has the necessary permissions to upload that object.</p>
<p>All objects and buckets by default are private. The pre-signed URLs
are useful if you want your user/customer to be able upload a specific
object to your bucket, but you don't require them to have AWS security
credentials or permissions. When you create a pre-signed URL, you must
provide your security credentials, specify a bucket name, an object
key, an HTTP method (PUT for uploading objects), and an expiration
date and time. The pre-signed URLs are valid only for the specified
duration.</p>
</blockquote>
<p>The maximum expiration you can set is seven days, i.e. 604800 seconds.</p>
<p>Please <a href="http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html" rel="nofollow">check here</a> for more info.</p>
<p>Please check <strong>X-Amz-Expires</strong> in table present in above link.</p>
| 1 | 2016-08-30T14:27:57Z | [
"python",
"python-2.7",
"boto3"
] |
How to install PyPdf2 in PyCharm (Windows-64 bits) | 39,229,692 | <p>I want to install PyPdf2 in PyCharm for Windows (64 bits)
I have tried going to Settings\Project\Project Interpreter and then pressing the "+" sign, but it did not find PyPdf2. </p>
<ul>
<li><p>I already installed it for the normal Python 2.7 by going to the extracted path of PyPdf2 and then running (python.exe setup.py install)</p></li>
<li><p>I tried to install it to anaconda by "conda install -c mbonix pypdf2=1.24" but I got an error "Error Could not find URL: <a href="https://pythonhosted.org/PPdf2/Win-64/" rel="nofollow">https://pythonhosted.org/PPdf2/Win-64/</a>"</p></li>
<li>I tried to install it to anaconda by "conda install -c anaconda-nb-extensions pypdf2=1.24" but I got an error "Error Could not find URL: <a href="https://pythonhosted.org/pypi/PPdf2/Win-64/" rel="nofollow">https://pythonhosted.org/pypi/PPdf2/Win-64/</a>"</li>
<li>I added the repository "<a href="https://pythonhosted.org/PyPDF2/" rel="nofollow">https://pythonhosted.org/PyPDF2/</a>" to PyCharm, but it did not show PyPdf2 either!</li>
</ul>
<p>What can I do to install such a module in PyCharm?</p>
<p>Note: I use the Latest version 2016.2.2 of PyCharm Community edition</p>
| 0 | 2016-08-30T14:04:35Z | 39,231,297 | <p>Take a look at the documentation on installing libraries:</p>
<p><a href="https://www.jetbrains.com/help/pycharm/2016.1/installing-uninstalling-and-upgrading-packages.html" rel="nofollow">Pycharm Documentation</a></p>
| 0 | 2016-08-30T15:15:15Z | [
"python",
"windows",
"package",
"pycharm",
"pypdf2"
] |
Novice python user need help passing data between functions | 39,229,738 | <p>I am finishing an assignment for a class where a teacher can input the student ID numbers and the students grades. The final grade will be calculated and returned next to the students number. I can calculate the final grade just fine, but I cannot append the grade to the list of student numbers. </p>
<pre><code>def assignments():
assign1 = int(input("Assignment 1 grade: "))
if assign1 > 100:
print ("Please input a valid grade value.")
assign1 = int(input("Assignment 1 grade: "))
assign2 = int(input("Assignment 2 grade: "))
if assign2 > 100:
print ("Please input a valid grade value.")
assign2 = int(input("Assignment 2 grade: "))
assign3 = int(input("Assignment 3 grade: "))
if assign3 > 100:
print ("Please input a valid grade value.")
assign3 = int(input("Assignment 3 grade: "))
assign4 = int(input("Assignment 4 grade: "))
if assign4 > 100:
print ("Please input a valid grade value.")
assign4 = int(input("Assignment 4 grade: "))
assign5 = int(input("Assignment 5 grade: "))
if assign5 > 100:
print ("Please input a valid grade value.")
assign5 = int(input("Assignment 5 grade: "))
assign6 = int(input("Assignment 6 grade: "))
if assign6 > 100:
print ("Please input a valid grade value.")
assign6 = int(input("Assignment 6 grade: "))
assign7 = int(input("Assignment 7 grade: "))
if assign7 > 100:
print ("Please input a valid grade value.")
assign7 = int(input("Assignment 7 grade: "))
assign8 = int(input("Assignment 8 grade: "))
if assign8 > 100:
print ("Please input a valid grade value.")
assign8 = int(input("Assignment 8 grade: "))
assign9 = int(input("Assignment 9 grade: "))
if assign9 > 100:
print ("Please input a valid grade value.")
assign9 = int(input("Assignment 9 grade: "))
assign10 = int(input("Assignment 10 grade: "))
if assign10 > 100:
print ("Please input a valid grade value.")
assign10 = int(input("Assignment 10 grade: "))
assignGrade = assign1 + assign2 + assign3 + assign4 + assign5 + assign6 + assign7 + assign8 + assign9 + assign10
aGrade = assignGrade / 10
print("The final grade for all assignments is: ")
print(aGrade)
midterm = int(input("Midterm grade: "))
if midterm > 100:
print ("Please input a valid grade value.")
midterm = int(input("Midterm grade: "))
finalExam = int(input("Final Exam grade: "))
if finalExam > 100:
print ("Please input a valid grade value.")
finalExam = int(input("Final Exam grade: "))
testsGrade = midterm + finalExam
tGrade = testsGrade / 2
print("The final grade for all test is: ")
print(tGrade)
participation = int(input("Participation grade: "))
if participation > 100:
print ("Please input a valid grade value.")
participation = int(input("Participation grade: "))
partGrade = participation
print("The final grade for Participation is: ")
print(partGrade)
finalGrade = aGrade + tGrade + partGrade / 3
def students():
netIDList = []
maxLengthList = 6
while len(netIDList) < maxLengthList:
ID = input("Enter Student's Net ID: ")
netIDList.append(ID)
for s in netIDList:
print("Please input grades for student " + s)
assignments()
f = assignments().finalGrade
netIDList.append(": " + f)
print(netIDList)
def main():
students()
main()
</code></pre>
| 0 | 2016-08-30T14:06:30Z | 39,234,397 | <p>Your problem is here:</p>
<pre><code>f = assignments().finalGrade
netIDList.append(": " + f)
</code></pre>
<p>You're accessing <code>assignments</code> like it is an object rather than a function. You want to have <code>assignments</code> return a value to append.</p>
<pre><code>def assignments():
# do all your input and calculation here
finalGrade = aGrade + tGrade + partGrade / 3
return finalGrade
</code></pre>
<p>This way, the function will send this value back to the place where it was called from. Then, change</p>
<pre><code>f = assignments().finalGrade
</code></pre>
<p>to</p>
<pre><code>f = assignments()
</code></pre>
<p><strong><em>EDIT:</em></strong> Your for loop should look basically like this:</p>
<pre><code>for s in netIDList:
print("Please input grades for student " + s)
f = assignments()
    netIDList.append(": " + str(f))
</code></pre>
<p>but this will result in a somewhat confusing list: </p>
<pre><code>[student1_name, student2_name, student1_score, student2_score]
</code></pre>
<p>So I would use a dictionary:</p>
<pre><code>student_score_map = {}
for s in netIDList:
print("Please input grades for student " + s)
f = assignments()
student_score_map[s] = f
</code></pre>
<p>This way you can print all student names <em>with their score</em> like so:</p>
<pre><code>for s in netIDList:
    print(s + ' final score: ' + str(student_score_map[s]))
</code></pre>
<p>Take a look <a class='doc-link' href="http://stackoverflow.com/documentation/python/396/dictionary#t=201608301912462219649">here</a> for more on dictionaries</p>
| 0 | 2016-08-30T18:12:45Z | [
"python",
"function"
] |
change data in Python import file | 39,229,823 | <p>In my main script, I import some user specific data (another script):<br>
main script: </p>
<pre><code>.....
import a lot of things....
import acemedat
..... here goes the main code
</code></pre>
<p>the imported file ('acemedat.py'): </p>
<pre><code># some comment
azcor = 10 # value can be user dependent
... some other variables....
</code></pre>
<p>In the main script, I use <strong>acemedat.azcor</strong> and get the value 10 -- fine so far. In the main program, I want to change the value of <strong>azcor</strong>, write it to <strong>acemedat.py</strong>, and save it. The next time I start the main script, the new value is read. I know how to read <strong>acemedat.py</strong> and locate the correct line (I do it with a list of lines read from the file), but it works only with strings. Is there a way to avoid strings and change <strong>azcor</strong> directly?</p>
| 1 | 2016-08-30T14:10:03Z | 39,229,909 | <p>The file <em>is</em> only strings (well, one big string); what is in that file is used to create the variable you are interested in (amongst other things).</p>
<p>So what you need to do is modify your file to contain the string representation of what you want your variable's value to be, as if you had opened the file in a text editor and typed in the representation of the new value yourself.</p>
| 1 | 2016-08-30T14:13:18Z | [
"python"
] |
XML RPC add a date | 39,229,828 | <p>I'm trying to import sales orders from Excel and then insert them as sales orders in Odoo.</p>
<p>For now I'm trying to add the sales order; later I will add the order lines.</p>
<pre><code>import psycopg2
import psycopg2.extras
import pyexcel_xls
import pyexcel as pe
from pyexcel_xls import get_data
from datetime import datetime
import xmlrpclib
import json
url = 'http://localhost:8070'
db = 'Docker'
username = 'admin'
password = 'odoo'
#data = get_data("salesorder.xls")
#print(json.dumps(data))
records = pe.get_records(file_name="salesorder.xls")
for record in records:
print record['name']
names = record['name']
print record['location']
print record['zip']
print record['republic']
dates = record['date']
print dates
print datetime.strptime(dates,'%d/%M/%Y')
lastdat=datetime.strptime(dates,'%d/%M/%Y')
common = xmlrpclib.ServerProxy('{}/xmlrpc/2/common'.format(url))
output = common.version()
uid = common.authenticate(db, username, password, {})
print output
models = xmlrpclib.ServerProxy('{}/xmlrpc/2/object'.format(url))
models.execute_kw(db, uid, password,
'res.partner', 'search',
[[['is_company', '=', True], ['customer', '=', True]]])
id = models.execute_kw(db, uid, password, 'sales.order', 'create', [{
'name': "names",
'validity_date':lastdat
#'payment_term_id':"15"
}])
print id
</code></pre>
<p>The error I'm getting is on the 40th line, the one with <code>validity_date</code>.</p>
| 0 | 2016-08-30T14:10:18Z | 39,232,249 | <p>You have to use Odoo's date format for using dates. It's the ISO 8601 international date format: <code>YYYY-MM-DD</code>.</p>
| 0 | 2016-08-30T16:01:01Z | [
"python",
"web-services",
"openerp",
"xml-rpc",
"odoo-9"
] |
Python Import Text Array with Numpy | 39,229,830 | <p>I have a text file that looks like this:</p>
<pre><code>...
5 [0, 1] [512, 479] 991
10 [1, 0] [706, 280] 986
15 [1, 0] [807, 175] 982
20 [1, 0] [895, 92] 987
...
</code></pre>
<p>Each column is tab separated, but there are arrays in some of the columns. Can I import these with <code>np.genfromtxt</code> in some way?</p>
<p>The resulting unpacked lists should be, for example:</p>
<pre><code>data1 = [..., 5, 10, 15, 20, ...]
data2 = [..., [512, 479], [706, 280], ... ] (i.e. a 2D list)
etc.
</code></pre>
<p>I tried </p>
<p><code>data1, data2, data3, data4 = np.genfromtxt('data.txt', dtype=None, delimiter='\t', unpack=True)</code></p>
<p>but <code>data2</code> and <code>data3</code> are lists containing 'nan'.</p>
| 0 | 2016-08-30T14:10:19Z | 39,233,680 | <p>Brackets in a <code>csv</code> file are klunky no matter how you look at it. The default <code>csv</code> structure is 2d - rows and uniform columns. The brackets add a level of nesting. But the fact that the columns are tab separated, while the nested blocks are comma separated makes it a bit easier.</p>
<p>Your comment code is (with added newlines)</p>
<pre><code>datastr = data[i][1][1:-1].split(',')
dataarray = []
for j in range(0, len(datastr)):
dataarray.append(int(datastr[j]))
data2.append(dataarray)
</code></pre>
<p>I assume <code>data[i]</code> looks something like (after a tab split):</p>
<pre><code>['5', '[0, 1]', '[512, 479]', '991']
</code></pre>
<p>So for the '[0,1]' you strip off the <code>[]</code>, split the rest, and put that list back onto <code>data2</code>.</p>
<p>That certainly looks like a viable approach. <code>genfromtxt</code> does not handle brackets or quotes. The <code>csv</code> reader can handle quoted text, and might be adapted to treat <code>[]</code> as quotes. But other than that I think the <code>[]</code> have to be handled with some sort of string processing, as you do.</p>
<p>Keep in mind that <code>genfromtxt</code> just reads lines, parses them, and collects the resulting lists in a master list. It then converts that list to an array at the end. So doing your own line by line, string by string parsing is not inferior.</p>
<p>=============</p>
<p>With your sample as a text file:</p>
<pre><code> In [173]: txt=b"""
...: 5 \t [0, 1] \t [512, 479] \t 991
...: 10 \t [1, 0] \t [706, 280] \t 986
...: 15 \t [1, 0] \t [807, 175] \t 982
...: 20 \t [1, 0] \t [895, 92] \t 987"""
</code></pre>
<p>A simple <code>genfromtxt</code> call with <code>dtype=None</code>:</p>
<pre><code>In [186]: data = np.genfromtxt(txt.splitlines(), dtype=None, delimiter='\t', autostrip=True)
</code></pre>
<p>The result is a structured array with integer and string fields:</p>
<pre><code>In [187]: data
Out[187]:
array([(5, b'[0, 1]', b'[512, 479]', 991),
(10, b'[1, 0]', b'[706, 280]', 986),
(15, b'[1, 0]', b'[807, 175]', 982),
(20, b'[1, 0]', b'[895, 92]', 987)],
dtype=[('f0', '<i4'), ('f1', 'S6'), ('f2', 'S10'), ('f3', '<i4')])
</code></pre>
<p>Fields are accessed by name</p>
<pre><code>In [188]: data['f0']
Out[188]: array([ 5, 10, 15, 20])
In [189]: data['f1']
Out[189]:
array([b'[0, 1]', b'[1, 0]', b'[1, 0]', b'[1, 0]'],
dtype='|S6')
</code></pre>
<p>If we can deal with the <code>[]</code>, your data could be nicely represented a structured array with a compound dtype</p>
<pre><code>In [191]: dt=np.dtype('i,2i,2i,i')
In [192]: np.ones((3,),dtype=dt)
Out[192]:
array([(1, [1, 1], [1, 1], 1), (1, [1, 1], [1, 1], 1),
(1, [1, 1], [1, 1], 1)],
dtype=[('f0', '<i4'), ('f1', '<i4', (2,)), ('f2', '<i4', (2,)), ('f3', '<i4')])
</code></pre>
<p>where the 'f1' field is a (3,2) array.</p>
<p>One approach is to pass the text/file through a function that filters out the extra characters. <code>genfromtxt</code> works with anything that will feed it a line at a time.</p>
<pre><code>def afilter(txt):
for line in txt.splitlines():
line=line.replace(b'[', b' ').replace(b']', b'').replace(b',' ,b'\t')
yield line
</code></pre>
<p>This generator strips out the [] and replaces the , with tab, in effect producing a flat csv file</p>
<pre><code>In [205]: list(afilter(txt))
Out[205]:
[b'',
b'5 \t 0\t 1 \t 512\t 479 \t 991',
b'10 \t 1\t 0 \t 706\t 280 \t 986',
b'15 \t 1\t 0 \t 807\t 175 \t 982',
b'20 \t 1\t 0 \t 895\t 92 \t 987']
</code></pre>
<p><code>genfromtxt</code> with <code>dtype=None</code> will produce an array with 6 columns. </p>
<pre><code>In [209]: data=np.genfromtxt(afilter(txt),delimiter='\t',dtype=None)
In [210]: data
Out[210]:
array([[ 5, 0, 1, 512, 479, 991],
[ 10, 1, 0, 706, 280, 986],
[ 15, 1, 0, 807, 175, 982],
[ 20, 1, 0, 895, 92, 987]])
In [211]: data.shape
Out[211]: (4, 6)
</code></pre>
<p>But if I give it the <code>dt</code> dtype I defined above, I get a structured array:</p>
<pre><code>In [206]: data=np.genfromtxt(afilter(txt),delimiter='\t',dtype=dt)
In [207]: data
Out[207]:
array([(5, [0, 1], [512, 479], 991), (10, [1, 0], [706, 280], 986),
(15, [1, 0], [807, 175], 982), (20, [1, 0], [895, 92], 987)],
dtype=[('f0', '<i4'), ('f1', '<i4', (2,)), ('f2', '<i4', (2,)), ('f3', '<i4')])
In [208]: data['f1']
Out[208]:
array([[0, 1],
[1, 0],
[1, 0],
[1, 0]], dtype=int32)
</code></pre>
<p>The brackets could be dealt with at several levels. I don't think there's a lot of advantage of one over the other.</p>
| 0 | 2016-08-30T17:30:28Z | [
"python",
"arrays",
"numpy",
"genfromtxt"
] |
Python Import Text Array with Numpy | 39,229,830 | <p>I have a text file that looks like this:</p>
<pre><code>...
5 [0, 1] [512, 479] 991
10 [1, 0] [706, 280] 986
15 [1, 0] [807, 175] 982
20 [1, 0] [895, 92] 987
...
</code></pre>
<p>Each column is tab separated, but there are arrays in some of the columns. Can I import these with <code>np.genfromtxt</code> in some way?</p>
<p>The resulting unpacked lists should be, for example:</p>
<pre><code>data1 = [..., 5, 10, 15, 20, ...]
data2 = [..., [512, 479], [706, 280], ... ] (i.e. a 2D list)
etc.
</code></pre>
<p>I tried </p>
<p><code>data1, data2, data3, data4 = np.genfromtxt('data.txt', dtype=None, delimiter='\t', unpack=True)</code></p>
<p>but <code>data2</code> and <code>data3</code> are lists containing 'nan'.</p>
| 0 | 2016-08-30T14:10:19Z | 39,233,688 | <p>A potential approach for the given data, though not using numpy:</p>
<pre><code>import ast
data1, data2, data3, data4 = [],[],[],[]
for l in open('data.txt'):
data = l.split('\t')
data1.append(int(data[0]))
data2.append(ast.literal_eval(data[1]))
data3.append(ast.literal_eval(data[2]))
data4.append(int(data[3]))
print 'data1', data1
print 'data2', data2
print 'data3', data3
print 'data4', data4
</code></pre>
<p>Gives</p>
<pre><code>"data1 [5, 10, 15, 20]"
"data2 [[0, 1], [1, 0], [1, 0], [1, 0]]"
"data3 [[512, 479], [706, 280], [807, 175], [895, 92]]"
"data4 [991, 986, 982, 987]"
</code></pre>
| 1 | 2016-08-30T17:30:45Z | [
"python",
"arrays",
"numpy",
"genfromtxt"
] |