title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Easiest Way to Poll A Web Server | 39,235,715 | <p>I have a job that I want to run at a specific time each day. The job should look at a web server to see if a file exists. If it exists, I want to download the file and do something with it. If it doesn't, then I want to wait a minute and then try again. </p>
<p>At the moment I just have a <code>try-except</code> statement that catches the exception if the file doesn't exist and then sleeps, and passes the file back to the calling function if it does. This works okay, but I feel like it's a rather cumbersome solution. </p>
<p>Is there an accepted, or even just more Pythonic, way to achieve this? There are no asynchronous or threading considerations.</p>
| 1 | 2016-08-30T19:33:31Z | 39,235,821 | <pre><code>import requests
import time
while True:
    r = requests.get("http://www.domain.com/fileYouAreMonitoring.bin")
    if r.status_code == 200:
        # we got the file!
        # exit or wait until next day; break escapes the while loop
        break
    else:
        time.sleep(60)
</code></pre>
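For a fuller sketch along the same lines (the helper name, retry cap, and injectable <code>fetch</code> callable are my own additions, not from the original answer), it can help to wrap the poll in a function so the success and give-up paths are explicit:

```python
import time

def poll_for_file(fetch, interval=60, max_tries=60):
    """Call fetch() until it returns data, sleeping between attempts.

    fetch should return the file's bytes on success and None while the
    file does not exist yet. Returns None if we give up after max_tries.
    """
    for _ in range(max_tries):
        data = fetch()
        if data is not None:
            return data
        time.sleep(interval)
    return None
```

With requests, <code>fetch</code> would be a small wrapper that performs the GET and returns <code>r.content</code> when <code>r.status_code == 200</code>, and <code>None</code> otherwise.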
| 1 | 2016-08-30T19:39:24Z | [
"python"
] |
Sort a list of objects based on second value of list attribute | 39,235,718 | <p>I have a list of Car objects, each object being defined as follows:</p>
<pre><code>class Car:
    def __init__(self, vin, name, year, price, weight, desc, owner):
        self.uid = vin
        self.name = name
        self.year = year
        self.price = price
        self.weight = weight
        self.desc = desc
        self.owner = owner
        self.depreciation_values = self.get_depreciation_values(name, vin)
</code></pre>
<p>The depreciation_values attribute is a list that has 8 components, like below:</p>
<pre><code>[-12.90706937872767, -2.2011534921064739, '-17', '-51.52%', '-7', '-2.75%', '-5', '-1.74%']
</code></pre>
<p>The second value (-2.2011534921064739) denotes the depreciation factor and is what I'm trying to use as the sort key.</p>
<p>I'm aware of attrgetter:</p>
<pre><code>car_list.sort(key=attrgetter('depreciation_values'))
</code></pre>
<p>But this would sort the list based on the first value of depreciation_values and not the second.</p>
<p>Is there a way to sort all the objects based on the depreciation factor?</p>
| 1 | 2016-08-30T19:33:34Z | 39,235,748 | <p>You can instead use a lambda in order to access the exact value you want on which to sort:</p>
<pre><code>car_list.sort(key=lambda x: x.depreciation_values[1])
</code></pre>
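A self-contained demonstration of the idea (using a stripped-down stand-in for the Car class, since the original depends on <code>get_depreciation_values</code>):

```python
class Car:
    def __init__(self, name, depreciation_values):
        self.name = name
        self.depreciation_values = depreciation_values

cars = [
    Car("sedan", [0.0, -1.5]),
    Car("truck", [0.0, -9.9]),
    Car("coupe", [0.0, -2.2]),
]

# Sort in place on the second element of the attribute (index 1).
cars.sort(key=lambda car: car.depreciation_values[1])

print([car.name for car in cars])  # ['truck', 'coupe', 'sedan']
```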
| 10 | 2016-08-30T19:35:07Z | [
"python",
"sorting"
] |
Sort a list of objects based on second value of list attribute | 39,235,718 | <p>I have a list of Car objects, each object being defined as follows:</p>
<pre><code>class Car:
    def __init__(self, vin, name, year, price, weight, desc, owner):
        self.uid = vin
        self.name = name
        self.year = year
        self.price = price
        self.weight = weight
        self.desc = desc
        self.owner = owner
        self.depreciation_values = self.get_depreciation_values(name, vin)
</code></pre>
<p>The depreciation_values attribute is a list that has 8 components, like below:</p>
<pre><code>[-12.90706937872767, -2.2011534921064739, '-17', '-51.52%', '-7', '-2.75%', '-5', '-1.74%']
</code></pre>
<p>The second value (-2.2011534921064739) denotes the depreciation factor and is what I'm trying to use as the sort key.</p>
<p>I'm aware of attrgetter:</p>
<pre><code>car_list.sort(key=attrgetter('depreciation_values'))
</code></pre>
<p>But this would sort the list based on the first value of depreciation_values and not the second.</p>
<p>Is there a way to sort all the objects based on the depreciation factor?</p>
| 1 | 2016-08-30T19:33:34Z | 39,238,156 | <p>You could define a <a href="https://docs.python.org/3/reference/datamodel.html#object.__lt__" rel="nofollow"><code>__lt__()</code> (less than) method</a> and other comparison methods to return a boolean based on your desired sort attribute then you could use the built-in sorted() or <code>list.sort()</code>. <a href="https://docs.python.org/3/howto/sorting.html#odd-and-ends" rel="nofollow">"... sort routines are guaranteed to use <code>__lt__</code>() ... "</a></p>
<pre><code>class Car:
    ...
    def __lt__(self, other):
        return self.depreciation_values[1] < other.depreciation_values[1]
</code></pre>
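As a runnable sketch of the same approach (again with a stand-in class; note that <code>__lt__</code> must return the comparison result):

```python
class Car:
    def __init__(self, name, depreciation_values):
        self.name = name
        self.depreciation_values = depreciation_values

    def __lt__(self, other):
        # sorted() and list.sort() are guaranteed to use __lt__ between items
        return self.depreciation_values[1] < other.depreciation_values[1]

cars = [Car("a", [0.0, -1.5]), Car("b", [0.0, -9.9]), Car("c", [0.0, -2.2])]
print([car.name for car in sorted(cars)])  # ['b', 'c', 'a']
```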
| 0 | 2016-08-30T22:39:26Z | [
"python",
"sorting"
] |
Python - insert HTML content to WordPress using xmlrpc api | 39,235,760 | <p>I'm trying to insert HTML content into my WordPress blog via XMLRPC, but if I insert HTML I get an error: </p>
<blockquote>
<p>raise TypeError, "cannot marshal %s objects" % type(value) TypeError:
cannot marshal objects</p>
</blockquote>
<p>If I use bleach (to clean the tags), I get text with visible tags on my page.</p>
<p>My code </p>
<pre><code># coding: utf-8
import requests
from bs4 import BeautifulSoup
import bleach
from wordpress_xmlrpc import Client, WordPressPost
from wordpress_xmlrpc.methods import posts
xmlrpc_url = "http://site.ru/xmlrpc.php"
wp_username = "user"
wp_password = "123"
blog_id = ""
client = Client(xmlrpc_url, wp_username, wp_password, blog_id)
url = "https://lifehacker.ru/2016/08/30/finansovye-sovety-dlya-molodyx-par/"
r = requests.get(url)
soup = BeautifulSoup(r.content)
post_title = soup.find("h1")
post_excerpt = soup.find("div", {"class", "single__excerpt"})
for tag in post_excerpt():
    for attribute in ["class", "id", "style"]:
        del tag[attribute]
post_content = soup.find("div", {"class","post-content"})
for tag in post_content():
    for attribute in ["class", "id", "style"]:
        del tag[attribute]
post = WordPressPost()
post.title = post_title.text
post.content = post_content
post.id = client.call(posts.NewPost(post))
post.post_status = 'publish'
client.call(posts.EditPost(post.id, post))
</code></pre>
<p>How i can insert parsed HTML content in my WordPress blog? </p>
 | 0 | 2016-08-30T19:35:38Z | 39,244,806 | <p>Using <code>.replace</code> after bleach fixed the issue. </p>
<p>My code </p>
<pre><code>...
post_content_insert = bleach.clean(post_content)
post_content_insert = post_content_insert.replace('&lt;','<')
post_content_insert = post_content_insert.replace('&gt;','>')
post = WordPressPost()
post.title = post_title.text
post.content = post_content_insert
post.id = client.call(posts.NewPost(post))
</code></pre>
| 0 | 2016-08-31T08:44:33Z | [
"python",
"wordpress",
"beautifulsoup",
"xml-rpc"
] |
Python test using selenium cannot perform simple test | 39,235,795 | <p>I am trying to learn about Selenium but I am not able to get even a simple program to test. Selenium webdriver seems to not be cooperating with Firefox and I am very frustrated, so I come to Stack Overflow for help.</p>
<p>For background, I use Python, can install with pip, and know command line.
I am on Windows 10, Firefox 48, and Selenium WebDriver 3 with Python 3.5.2.</p>
<p>Whenever I run the Selenium test (it opens a Firefox window and the Selenium website)</p>
<pre><code>from selenium import webdriver
browser = webdriver.Firefox()
browser.get('http://www.seleniumhq.org')
</code></pre>
<p>I always get an error:</p>
<pre><code>selenium.common.exceptions.WebDriverException: Message: Can't load the profile. Profile Dir: C:\ ... \AppData\Local\Temp\tmp68m5rtwt If you specified a log_file in the FirefoxBinary constructor, check it for details
</code></pre>
<p>It also opens a Firefox window with the link <code>about:blank&utm_content=firstrun</code> (not a valid URL).</p>
<p>I have looked across the internet for a similar situation, but nothing really close. I also tried many tutorials and made sure that I installed selenium the right way. I noticed that firefox was recently updated, but I am not sure if this has any effect. </p>
<p>I would appreciate any help for this, and instructions for what I should do.</p>
 | 1 | 2016-08-30T19:37:51Z | 39,564,042 | <p>Firefox 48+ no longer works with the legacy driver behind <code>webdriver.Firefox()</code>; it needs geckodriver (Marionette). </p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
caps = DesiredCapabilities.FIREFOX
caps["marionette"] = True
caps["binary"] = "path/to/your/firefox"
browser = webdriver.Firefox(capabilities=caps)
browser.get('http://www.seleniumhq.org')
</code></pre>
<p>This is what I did:<br>
1. Download <code>geckodriver</code> from <a href="https://github.com/mozilla/geckodriver/releases" rel="nofollow">https://github.com/mozilla/geckodriver/releases</a> (<code>v0.10.0</code> is for <code>selenium 3 (beta)</code>).<br>
2. Add the geckodriver directory to your PATH.<br>
3. Rename it to <code>wires</code>.<br>
4. Restart your shell.<br>
5. Check the version:<br>
<code>$ wires --version</code><br>
6. Run the code above.</p>
| 0 | 2016-09-19T00:59:08Z | [
"python",
"python-3.x",
"selenium-webdriver"
] |
Pyserial - Embedded Systems | 39,235,868 | <p>I am not expecting code here, but rather hoping to pick up on the knowledge of the folks out there. </p>
<p>I have Python code which uses pyserial for serial communication with a Micro Controller Unit (MCU). My MCU has 128 bytes of RAM and internal memory. I use the ser.write command to write to the MCU, and the MCU responds with data, which I read using the ser.read command. </p>
<p>The question here is: it was working excellently until last week. Since yesterday, I am able to do the serial communication only in the morning of the day. After a while, when I read the data, the MCU responds with a "NONE" message. When I read the data the next day, it works fine. The strange thing is, I have HyperTerminal installed and it properly communicates with the MCU and reads the data. So I was hoping someone has faced this problem before. </p>
<p>I am using threads in my Python program, just to check if running the program multiple times with threads is causing the problem. To my knowledge, threads should only affect the memory of my PC and not the MCU. </p>
<p>I rebooted my Computer and also the MCU and I still have this problem. </p>
<p>Note: PyCharm is giving me the answers I mentioned in the question. If I do the same thing in IDLE, it gives me completely different answers.</p>
| 2 | 2016-08-30T19:42:17Z | 39,236,047 | <p>So, ultimately you're looking for advice on how to debug this sort of time dependent problem.
Somehow, state is getting created somewhere in your computer, your python process, or the microcontroller that affects things. (It's also theoretically possible that an external environmental factor is affecting things. As an example, if your microcontroller has a realtime clock, you could actually have a time-of-day dependent bug. That seems less likely than other possibilities).</p>
<p>First, try restarting your python program. If that fixes things, then you know some state is created inside python or your program that causes the problem.
Update your question with this information.</p>
<p>If that doesn't fix it, try rebooting your computer. If that fixes things, then you strongly suspect some state in your computer is affecting things.</p>
<p>If none of that works, try rebooting the micro controller. If rebooting both the PC and the micro controller doesn't fix things, include that in your question as it is very interesting data.</p>
<p>Examples of state that can get created:</p>
<ul>
<li><p>flow control. The micro controller could be sending xoff, clearing clear-to-send or otherwise indicating it does not want data</p></li>
<li><p>Flow control in the other direction: your PC could be sending xoff, clearing request-to-send or otherwise indicating that it doesn't want data</p></li>
<li><p>Your program gets pyserial into a confused state--either because of a bug in your code or pyserial.</p></li>
<li><p>Serial port configuration--the serial port settings could be getting messed up.</p></li>
</ul>
<p>Hyper terminal could do various things to clear flow control state or reconfigure the serial port.</p>
<p>If restarting python doesn't fix the problem, threading is very unlikely to be your issue. If restarting python fixes the problem threading may be an issue.</p>
| 3 | 2016-08-30T19:55:07Z | [
"python",
"multithreading",
"embedded",
"pyserial"
] |
ln (Natural Log) in Python | 39,235,906 | <p>In this assignment I have completed all the problems except this one. I have to create a python script to solve an equation (screenshot).</p>
<p><a href="http://i.stack.imgur.com/vFEwQ.gif" rel="nofollow"><img src="http://i.stack.imgur.com/vFEwQ.gif" alt="formula"></a></p>
<p>Unfortunately, in my research all over the internet I cannot figure out how to express ln in Python. The code I have written so far is below. I will also post the answer that our teacher says we should get.</p>
<pre><code>import math
p = 100
r = 0.06 / 12
FV = 4000
n = str(ln * ((1 + (FV * r) / p) / (ln * (1 + r))))
print ("Number of periods = " + str(n))
</code></pre>
<p>The answer I should get is 36.55539635919235
Any advice or help you have would be greatly appreciated!</p>
<p>Also, we are not using numpy. I already attempted that one.</p>
<p>Thanks!</p>
| -1 | 2016-08-30T19:44:52Z | 39,235,942 | <p><code>math.log</code> is the natural logarithm:</p>
<p><a href="https://docs.python.org/2/library/math.html" rel="nofollow">From the documentation:</a></p>
<blockquote>
<p>math.log(x[, base]) With one argument, return the natural logarithm of
x (to base e).</p>
</blockquote>
<p>Your equation is therefore:</p>
<pre><code>n = math.log(1 + (FV * r) / p) / math.log(1 + r)
</code></pre>
<p>Note that in your code you convert n to a <code>str</code> twice, which is unnecessary.</p>
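Plugging the question's numbers into <code>math.log</code> (standard library only) reproduces the teacher's value:

```python
import math

p = 100          # payment per period
r = 0.06 / 12    # monthly interest rate
FV = 4000        # target future value

n = math.log(1 + (FV * r) / p) / math.log(1 + r)
print("Number of periods =", n)  # approximately 36.5554
```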
| 2 | 2016-08-30T19:47:51Z | [
"python",
"python-2.7",
"natural-logarithm"
] |
Why is my code returning "name 'Button' is not defined"? | 39,235,974 | <p>My code is giving me this error and I can't for the life of me figure out why it's telling me "NameError: name 'Button' is not defined." In Tkinter, I thought Button was supposed to add a button?</p>
<pre><code>import Tkinter
gameConsole = Tkinter.Tk()
#code to add widgets will go below
#creates the "number 1" Button
b1 = Button(win,text="One")
gameConsole.wm_title("Console")
gameConsole.mainloop()
</code></pre>
| -1 | 2016-08-30T19:49:51Z | 39,235,991 | <p>Use </p>
<pre><code>import Tkinter
b1 = Tkinter.Button(win,text="One")
</code></pre>
<p>or</p>
<pre><code>from Tkinter import Button
b1 = Button(win,text="One")
</code></pre>
| 0 | 2016-08-30T19:50:46Z | [
"python",
"tkinter",
"pycharm"
] |
Why is my code returning "name 'Button' is not defined"? | 39,235,974 | <p>My code is giving me this error and I can't for the life of me figure out why it's telling me "NameError: name 'Button' is not defined." In Tkinter, I thought Button was supposed to add a button?</p>
<pre><code>import Tkinter
gameConsole = Tkinter.Tk()
#code to add widgets will go below
#creates the "number 1" Button
b1 = Button(win,text="One")
gameConsole.wm_title("Console")
gameConsole.mainloop()
</code></pre>
| -1 | 2016-08-30T19:49:51Z | 39,235,994 | <p>A few options available to source the namespace:</p>
<ul>
<li><p><code>from Tkinter import Button</code> Import the specific class.</p></li>
<li><p><code>import Tkinter</code> -> <code>b1 = Tkinter.Button(win,text="One")</code> Specify the namespace inline.</p></li>
<li><p><code>from Tkinter import *</code> Imports everything from the module.</p></li>
</ul>
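The same namespace rules apply to every module, not just Tkinter; a quick illustration with the standard <code>math</code> module (chosen here only because it needs no GUI):

```python
# Style 1: import the module, qualify names at the call site.
import math
print(math.sqrt(16))   # 4.0

# Style 2: import only the names you need.
from math import sqrt
print(sqrt(16))        # 4.0

# Style 3: star import binds everything into the current namespace.
# It works, but can silently shadow your own names, so use it sparingly.
from math import *
print(sqrt(16))        # 4.0
```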
| 1 | 2016-08-30T19:51:09Z | [
"python",
"tkinter",
"pycharm"
] |
How do I move project folders in Pycharm Project View? | 39,236,006 | <p>I'm using Pycharm to do a few django projects and I have them all open in the Project View to the left. I'd like to move the project I'm currently working on to the bottom of the list to avoid confusion. Pycharm does not let me do this by default. Are there some settings that need tweaking before I can do this? Take a look at this screenshot. What I want to do is to be able to is to click and drag <em>'Django_pro'</em> to the bottom of the list and then expand the tree. Can this be achieved? Thank you!</p>
<p><a href="http://i.stack.imgur.com/dTXTU.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/dTXTU.jpg" alt="Project View window"></a></p>
| 4 | 2016-08-30T19:52:19Z | 39,278,771 | <p>AFAIK the display order in the file navigator panel (any of them - <code>Project</code>, <code>Project Files</code>, etc.) is not arbitrarily user-selectable.</p>
<p>To avoid the confusion it would probably be a better idea to actually split your Django projects into <strong>separate</strong> PyCharm projects - what you have right now is just a <strong>single</strong> PyCharm project that contains files spread across multiple directories. The split will also save you trouble down the road by preventing potential conflicts between your projects.</p>
<p>With separate PyCharm projects there is no room for such confusion, as each project is open in its own window (even when choosing to re-use the PyCharm window, the existing project is closed when a new one opens). From the <a href="https://www.jetbrains.com/help/pycharm/2016.1/opening-reopening-and-closing-projects.html" rel="nofollow">Opening, Reopening and Closing Projects</a> doc section: </p>
<blockquote>
<p>PyCharm allows opening several projects simultaneously in different
windows. By default, each time you open a project while another one is
opened, PyCharm prompts you to choose whether to open the project in
the same window or in a new window. If necessary, you can change this
behavior using controls on the System Settings page.</p>
<p>In the <strong>Project Opening</strong> section of this page, you can choose one of
the following options:</p>
<ul>
<li><strong>Open project in a new window</strong> to open a new PyCharm window each time a new project is opened.</li>
<li><strong>Open project in the same window</strong> to stay in the same PyCharm window.</li>
<li><strong>Confirm window to open project in</strong> to keep the default behavior and display a dialog box to choose how to open each new project.</li>
</ul>
</blockquote>
<p>You might want to scan through the entire <a href="https://www.jetbrains.com/help/pycharm/2016.1/creating-and-managing-projects.html" rel="nofollow">Creating and Managing Projects</a> chapter and/or the <a href="https://confluence.jetbrains.com/display/PYH/Creating+and+managing+Django+project#CreatingandmanagingDjangoproject-CreatingaDjangoproject" rel="nofollow">Creating and managing Django project</a> tutorial.</p>
| 0 | 2016-09-01T18:44:20Z | [
"python",
"django",
"pycharm"
] |
how to install libhdf5-dev? (without yum, rpm nor apt-get) | 39,236,025 | <p>I want to use h5py which needs libhdf5-dev to be installed. I installed hdf5 from the source, and thought that any options with compiling that one would offer me the developer headers, but doesn't look like it.</p>
<p>Does anyone know how I can do this? Is there some other source I need to download? (I can't find any, though.)</p>
<p>I am on Amazon Linux; <code>yum search libhdf5-dev</code> doesn't give me any results, and I can't use rpm or apt-get there, hence I wanted to compile it myself.</p>
| 1 | 2016-08-30T19:53:33Z | 39,236,064 | <p>Can you use pip?</p>
<pre><code>pip install h5py
</code></pre>
| 1 | 2016-08-30T19:56:19Z | [
"python",
"linux",
"install",
"hdf5"
] |
how to install libhdf5-dev? (without yum, rpm nor apt-get) | 39,236,025 | <p>I want to use h5py which needs libhdf5-dev to be installed. I installed hdf5 from the source, and thought that any options with compiling that one would offer me the developer headers, but doesn't look like it.</p>
<p>Anyone know how I can do this? Is there some other source i need to download? (I cant find any though)</p>
<p>I am on amazon linux, yum search libhdf5-dev doesn't give me any result and I cant use rpm nor apt-get there, hence I wanted to compile it myself.</p>
 | 1 | 2016-08-30T19:53:33Z | 39,246,665 | <p>If you've built the hdf5 library in a non-standard directory that is not included in your <code>PATH</code>, then you have basically two options. Either add this directory to your <code>PATH</code> and <code>LD_LIBRARY_PATH</code>, or you can specify these paths to <code>pip</code>. Something like this works:</p>
<pre><code>pip install --global-option=build_ext --global-option="-I<path_to_direcotry_with_headers>" --global-option="-L<path_to_director_with_lib>" h5py
</code></pre>
| 0 | 2016-08-31T10:08:02Z | [
"python",
"linux",
"install",
"hdf5"
] |
Why do I get the PHONE_NUMBER_UNOCCUPIED error when calling auth.signIn in telegram? | 39,236,105 | <p>I've gotten to the point of sending the <code>auth.sendCode</code> method, with the following response:</p>
<pre><code>('sentCode: ', {u'req_msg_id': 6324698204597889024L, u'result': {u'type': {u'length': 5}, u'phone_registered': False, u'flags': 6, u'timeout': 120, u'next_type': {}, u'phone_code_hash': '52eb4ab096ea211b52'}})
</code></pre>
<p>This is using the telegram test servers. I have successfully used the same phone number in the telegram iPhone app and desktop application so I know the phone number is valid in production. (should it be valid on the test servers as well?)</p>
<p>When I follow <code>auth.sendCode</code> with <code>auth.signIn</code> I receive the following error response:</p>
<pre><code>('authorization: ', {u'error_message': 'PHONE_NUMBER_UNOCCUPIED', u'error_code': 400})
</code></pre>
<p>Is there a step between these two methods that registers the phone number?</p>
| 1 | 2016-08-30T19:58:50Z | 39,237,741 | <p>from <a href="https://core.telegram.org/api/auth" rel="nofollow">https://core.telegram.org/api/auth</a></p>
<blockquote>
<p>The auth.sendCode method will also return the phone_registered field,
which indicates whether or not the user with this phone number is
registered in the system. If phone_registered == boolTrue,
authorization requires that auth.signIn be invoked. Otherwise, basic
information must be requested from the user and the new user
registration method (auth.signUp) must be invoked.</p>
<p>When phone_registered === boolFalse, the auth.signIn method can be
used to pre-validate the code entered from the text message. The code
was entered correctly if the method returns Error 400
PHONE_NUMBER_UNOCCUPIED.</p>
</blockquote>
| 1 | 2016-08-30T21:59:18Z | [
"python",
"api",
"telegram"
] |
Syntax error with a while loop in Python | 39,236,125 | <p>This is my script; it works like a calculator. However, when I run it, it gives me an invalid syntax error for the while loop. I am new to Python, so please help me.</p>
<pre><code>import functools
numbers=[]
def mean():
    end_mean = functools.reduce(lambda x, y: x + y, numbers) / len(numbers)
    print(end_mean)
def sums():
    end_sum = functools.reduce(lambda x, y: x + y, numbers)
    print(end_sum)
def whatDo():
    print('Input Extra Numbers '+str(len(numbers)+1)+' (or nothing to close):')
    try:
        number= int(input())
        numbers.append(number)
    except:
        print('What do you want to do?')
        answer = input()
        if answer == "mean":
            mean()
        elif answer == 'sum':
            sums()
        elif answer== 'median':
            median()
def median():
    numbers.sort()
    medianNumber=int(len(numbers))
    if medianNumber%2==0:
        end_median=numbers[int(len(numbers))/2]+numbers[int(len(numbers))/2+1]
    if medianNumber%2==1:
        numbers[int(len(numbers))+1/2
while True:
    print('Input Number '+str(len(numbers)+1)+' (or nothing to close):')
    try:
        number= int(input())
        numbers.append(number)
    except:
        print('What do you want to do?')
        answer = input()
        if answer == "mean":
            mean()
        elif answer == 'sum':
            sums()
        elif answer== 'median':
            median()
        print('Do you want anything else?')
        reply=input()
        if reply=='no':
            break
        elif reply=='yes':
            whatDo()
        else:
            break
<p>I did remove the while loop, but then it said that the print function was invalid. Please keep in mind that I am new to Python.</p>
| -2 | 2016-08-30T20:00:01Z | 39,236,201 | <p>Just before the <code>while</code> loop, there is missing <code>]</code> with the usage of <code>numbers</code> list</p>
<p>That line should be:</p>
<pre><code>numbers[int(len(numbers))+1/2]
</code></pre>
| 0 | 2016-08-30T20:05:11Z | [
"python"
] |
Syntax error with a while loop in Python | 39,236,125 | <p>This is my script; it works like a calculator. However, when I run it, it gives me an invalid syntax error for the while loop. I am new to Python, so please help me.</p>
<pre><code>import functools
numbers=[]
def mean():
    end_mean = functools.reduce(lambda x, y: x + y, numbers) / len(numbers)
    print(end_mean)
def sums():
    end_sum = functools.reduce(lambda x, y: x + y, numbers)
    print(end_sum)
def whatDo():
    print('Input Extra Numbers '+str(len(numbers)+1)+' (or nothing to close):')
    try:
        number= int(input())
        numbers.append(number)
    except:
        print('What do you want to do?')
        answer = input()
        if answer == "mean":
            mean()
        elif answer == 'sum':
            sums()
        elif answer== 'median':
            median()
def median():
    numbers.sort()
    medianNumber=int(len(numbers))
    if medianNumber%2==0:
        end_median=numbers[int(len(numbers))/2]+numbers[int(len(numbers))/2+1]
    if medianNumber%2==1:
        numbers[int(len(numbers))+1/2
while True:
    print('Input Number '+str(len(numbers)+1)+' (or nothing to close):')
    try:
        number= int(input())
        numbers.append(number)
    except:
        print('What do you want to do?')
        answer = input()
        if answer == "mean":
            mean()
        elif answer == 'sum':
            sums()
        elif answer== 'median':
            median()
        print('Do you want anything else?')
        reply=input()
        if reply=='no':
            break
        elif reply=='yes':
            whatDo()
        else:
            break
<p>I did remove the while loop, but then it said that the print function was invalid. Please keep in mind that I am new to Python.</p>
| -2 | 2016-08-30T20:00:01Z | 39,236,223 | <p>Before the while loop add the closing bracket to the line:</p>
<p><code>numbers[int(len(numbers))+1/2]</code></p>
<p>Usually it is a good idea to always check the lines above where the error occurred, if python is telling you that is found a <code>SyntaxError</code> but your syntax seems valid.</p>
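Beyond the bracket, <code>median()</code> has two further issues waiting once the syntax error is gone: list indices must be integers (use floor division <code>//</code>, since <code>/</code> produces a float in Python 3), and Python lists are zero-based, so the middle elements of an even-length list sit at <code>n // 2 - 1</code> and <code>n // 2</code>. A corrected sketch (my own reworking, not from the original answer):

```python
def median(numbers):
    numbers = sorted(numbers)
    n = len(numbers)
    if n % 2 == 0:
        # Even count: average the two middle values.
        return (numbers[n // 2 - 1] + numbers[n // 2]) / 2
    # Odd count: the single middle value.
    return numbers[n // 2]

print(median([3, 1, 2]))     # 2
print(median([4, 1, 3, 2]))  # 2.5
```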
| 2 | 2016-08-30T20:06:32Z | [
"python"
] |
Gtk.Widget.destroy() is necessary on Python? | 39,236,245 | <p>When reading the <a href="http://lazka.github.io/pgi-docs/index.html#Gtk-3.0/classes/Builder.html#Gtk.Builder" rel="nofollow">documentation</a> about GtkBuilder, I came across this passage:</p>
<blockquote>
<p>A <code>Gtk.Builder</code> holds a reference to all objects that it has
constructed and drops these references when it is finalized. This
finalization can cause the destruction of non-widget objects or
widgets which are not contained in a toplevel window. For toplevel
windows constructed by a builder, it is the responsibility of the user
to call <code>Gtk.Widget.destroy()</code> to get rid of them and all the widgets
they contain.</p>
</blockquote>
<p>But does this apply to Python too? That is, when I load a top-level window, must I destroy it manually?</p>
| 0 | 2016-08-30T20:07:53Z | 39,241,013 | <p>Well, sort of. You generally don't have to call <code>destroy()</code> on the window manually because it happens automatically when the user clicks the window's close button.</p>
| 3 | 2016-08-31T04:54:45Z | [
"python",
"python-3.x",
"gtk",
"gtk3"
] |
How to concat linear models in TensorFlow | 39,236,284 | <p>I'm trying to build a model in TensorFlow with submodels like f_i(x) = m_i x + b_i such that: </p>
<pre><code>f(x) = [f_1(x), f_2(x)]^T [x, x] + b
</code></pre>
<p>this is just an exercise. My difficulty is in understanding how to concatenate two tensors:</p>
<pre><code># Model 1
f1 = tf.add(tf.mul(X, W), b)
# Model 2
f2 = tf.add(tf.mul(X, W2), b2)
# Concatenate 1 & 2
fi = tf.concat(0, [f1, f2])
# Final model
pred = tf.add(tf.mul(fi, W3), b3)
</code></pre>
<p>Unfortunately this doesn't seem to work. </p>
<p>Here's the full example:</p>
<pre><code>'''
A linear regression learning algorithm example using TensorFlow library.
Author: Aymeric Damien (original author) # I am altering it though
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''
from __future__ import print_function
import tensorflow as tf
import numpy
import matplotlib.pyplot as plt
rng = numpy.random
# Parameters
learning_rate = 0.01
training_epochs = 1000
display_step = 50
# Training Data
train_X = numpy.asarray(
[3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = numpy.asarray(
[1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n_samples = train_X.shape[0]
# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")
# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")
W2 = tf.Variable(rng.randn(), name="weight2")
b2 = tf.Variable(rng.randn(), name="bias2")
W3 = tf.Variable([rng.randn(), rng.randn()], name="weight3")
b3 = tf.Variable(rng.randn(), name="bias3")
# Model 1
f1 = tf.add(tf.mul(X, W), b)
# Model 2
f2 = tf.add(tf.mul(X, W2), b2)
# Concatenate 1 & 2
fi = tf.concat(0, [f1, f2])
# Final model
pred = tf.add(tf.mul(fi, W3), b3)
# Mean squared error
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)
# Gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Fit all training data
for epoch in range(training_epochs):
for (x, y) in zip(train_X, train_Y):
sess.run(optimizer, feed_dict={X: x, Y: y})
# Display logs per epoch step
if (epoch + 1) % display_step == 0:
c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c), \
"W=", sess.run(W), "b=", sess.run(b))
print("Optimization Finished!")
training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b),
'\n')
# Graphic display
plt.plot(train_X, train_Y, 'ro', label='Original data')
plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
plt.legend()
plt.show()
# Testing example, as requested (Issue #2)
test_X = numpy.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
test_Y = numpy.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])
print("Testing... (Mean square loss Comparison)")
testing_cost = sess.run(
tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * test_X.shape[0]),
feed_dict={X: test_X, Y: test_Y}) # same function as cost above
print("Testing cost=", testing_cost)
print("Absolute mean square loss difference:", abs(
training_cost - testing_cost))
plt.plot(test_X, test_Y, 'bo', label='Testing data')
plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
plt.legend()
plt.show()
</code></pre>
| 0 | 2016-08-30T20:10:22Z | 39,240,620 | <p>One way to achieve a similar result without the headache of <code>tf.concat</code> is</p>
<pre><code>pred = tf.add(tf.add(f1, f2), b3)
</code></pre>
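The underlying problem with the concat version is a shape mismatch, which can be seen without TensorFlow at all: stacking two length-N outputs along axis 0 produces a length-2N tensor that no longer lines up with the N labels in <code>Y</code> (and with the length-2 weight <code>W3</code>). A pure-Python sketch of the shapes:

```python
N = 17  # number of training samples, as in the example

f1 = [0.0] * N            # model 1 output, shape (N,)
f2 = [0.0] * N            # model 2 output, shape (N,)

concat = f1 + f2          # tf.concat(0, [f1, f2]) -> shape (2N,)
print(len(concat))        # 34, no longer matches the N labels

# The additive form keeps everything at shape (N,):
summed = [a + b for a, b in zip(f1, f2)]
print(len(summed))        # 17
```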
| 1 | 2016-08-31T04:13:56Z | [
"python",
"tensorflow"
] |
How to resolve this IndexError out of range? | 39,236,580 | <p>Why am I getting this list index out of range error? And how can I fix it?</p>
<p>I tried adding the following after the <em>else</em> in the logic:</p>
<pre><code>if int_list_repost is '':
int_list_repost = [0]
</code></pre>
<p>but the result was the same</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:/Users/ayevtushenko/PycharmProjects/Tuts/Post_Engagement_pin_logic.py",
line 99, in
get_post('<a href="https://twitter.com/ClinRev" rel="nofollow">https://twitter.com/ClinRev</a>') File "C:/Users/ayevtushenko/PycharmProjects/Tuts/Post_Engagement_pin_logic.py",
line 95, in get_post
'retweeted_posts': len(int_list_repost[1:]), 'pinned_retweets': int_list_repost[0], IndexError: list index out of range</p>
</blockquote>
<pre><code>import requests
import re
from bs4 import BeautifulSoup
def get_post(url):
    source_code = requests.get(url)
    plain_text = source_code.text
    my_soup = BeautifulSoup(plain_text)
    mylist = []
    int_list = []
    mylist_repost = []
    int_list_repost = []
    pinned = ""
    # GETS "PINNED" TEXT IF PINNED
    for content in my_soup.findAll('span', {'class': 'js-pinned-text'}):
        pinned = str(content.string)
    # PUTS FAVORITE METRICS INTO LIST
    for content in my_soup.findAll('div', {'class': 'ProfileTweet-action ProfileTweet-action--favorite js-toggleState'}):
        fetch = content.contents[1]
        for tag in fetch.findAll('span', {'class': 'ProfileTweet-actionCountForPresentation'}):
            mylist.append(tag.string)
            if str(tag.string).isdigit():
                int_list.append(int(tag.string))
    # PUTS RE-POST METRICS INTO LIST
    for content in my_soup.findAll('div', {'class': 'ProfileTweet-action ProfileTweet-action--retweet js-toggleState js-toggleRt'}):
        fetch = content.contents[1]
        for tag in fetch.findAll('span', {'class': 'ProfileTweet-actionCountForPresentation'}):
            mylist_repost.append(tag.string)
            if str(tag.string).isdigit():
                int_list_repost.append(int(tag.string))
    like_page_utilization = str((len(int_list)/len(mylist))*100)+'%'
    repost_page_utilization = str((len(int_list_repost)/len(mylist_repost))*100)+'%'
    # TOTAL ENGAGEMENT METRICS
    largest_list = [len(int_list), len(int_list_repost)]
    largest_list_max = max(largest_list)
    total_engagements_overall = sum(int_list)+sum(int_list_repost)
    overall_engagement_utilization = str((largest_list_max/len(mylist))*100)+'%'
    if pinned != 'Pinned Tweet':
        return {'liked_posts': len(int_list), 'total_likes': sum(int_list),
                'pinned_likes': 0, 'pinned': 'F', 'like_page_utilization': like_page_utilization,
                'repost_page_utilization': repost_page_utilization,
                'overall_engagement_utilization': overall_engagement_utilization,
                'retweeted_posts': len(int_list_repost), 'pinned_retweets': 0,
                'total_retweets': sum(int_list_repost),
                'total_engagements_overall': total_engagements_overall}
    else:
        return {'liked_posts': len(int_list[1:]), 'total_likes': sum(int_list[1:]),
                'pinned_likes': int_list[0], 'pinned': 'T', 'like_page_utilization': like_page_utilization,
                'repost_page_utilization': repost_page_utilization,
                'overall_engagement_utilization': overall_engagement_utilization,
'retweeted_posts': len(int_list_repost[1:]), 'pinned_retweets': int_list_repost[0],
'total_retweets': sum(int_list_repost[1:]),
'total_engagements_overall': total_engagements_overall}
</code></pre>
| 0 | 2016-08-30T20:30:02Z | 39,237,090 | <p>The message is pretty straightforward <code>'pinned_retweets': int_list_repost[0] Index Out Of Range</code>... And if you check your script output, it shows an empty array: <code>raw int only list []</code>, so you need to check the array length before the last step, like this:</p>
<pre><code>if len(int_list_repost) == 0:
    int_list_repost.append(0)
print 'pinned_retweets', int_list_repost[0]
</code></pre>
<h3>Test fiddle: <a href="http://www.codeskulptor.org/#user41_1EOs9Swi3K_0.py" rel="nofollow">http://www.codeskulptor.org/#user41_1EOs9Swi3K_0.py</a></h3>
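<p>The same guard can be wrapped in a small helper; a Python 3 sketch (the name <code>safe_pinned</code> is made up for illustration):</p>

```python
def safe_pinned(int_list_repost):
    # hypothetical helper: pad with a placeholder 0 when nothing was scraped,
    # so int_list_repost[0] can never raise IndexError
    if len(int_list_repost) == 0:
        int_list_repost.append(0)
    return int_list_repost[0]

print(safe_pinned([]))      # 0
print(safe_pinned([5, 3]))  # 5
```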
| 0 | 2016-08-30T21:06:12Z | [
"python",
"indexing"
] |
How to resolve this IndexError out of range? | 39,236,580 | <p>Why am I getting this list index out of range error? And how can I fix it?</p>
<p>I tried entering the following after the <em>else</em> in the logic:</p>
<pre><code>if int_list_repost is '':
int_list_repost = [0]
</code></pre>
<p>But the result was the same.</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:/Users/ayevtushenko/PycharmProjects/Tuts/Post_Engagement_pin_logic.py",
line 99, in
get_post('<a href="https://twitter.com/ClinRev" rel="nofollow">https://twitter.com/ClinRev</a>') File "C:/Users/ayevtushenko/PycharmProjects/Tuts/Post_Engagement_pin_logic.py",
line 95, in get_post
'retweeted_posts': len(int_list_repost[1:]), 'pinned_retweets': int_list_repost[0], IndexError: list index out of range</p>
</blockquote>
<pre><code>import requests
import re
from bs4 import BeautifulSoup
def get_post(url):
source_code = requests.get(url)
plain_text = source_code.text
my_soup = BeautifulSoup(plain_text)
mylist = []
int_list = []
mylist_repost = []
int_list_repost = []
pinned = ""
# GETS "PINNED" TEXT IF PINNED
for content in my_soup.findAll('span', {'class': 'js-pinned-text'}):
pinned = str(content.string)
# PUTS FAVORITE METRICS INTO LIST
for content in my_soup.findAll('div', {'class': 'ProfileTweet-action ProfileTweet-action--favorite js-toggleState'}):
fetch = content.contents[1]
for tag in fetch.findAll('span', {'class': 'ProfileTweet-actionCountForPresentation'}):
mylist.append(tag.string)
if str(tag.string).isdigit():
int_list.append(int(tag.string))
# PUTS RE-POST METRICS INTO LIST
for content in my_soup.findAll('div', {'class': 'ProfileTweet-action ProfileTweet-action--retweet js-toggleState js-toggleRt'}):
fetch = content.contents[1]
for tag in fetch.findAll('span', {'class': 'ProfileTweet-actionCountForPresentation'}):
mylist_repost.append(tag.string)
if str(tag.string).isdigit():
int_list_repost.append(int(tag.string))
like_page_utilization = str((len(int_list)/len(mylist))*100)+'%'
repost_page_utilization = str((len(int_list_repost)/len(mylist_repost))*100)+'%'
# TOTAL ENGAGEMENT METRICS
largest_list = [len(int_list), len(int_list_repost)]
largest_list_max = max(largest_list)
total_engagements_overall = sum(int_list)+sum(int_list_repost)
overall_engagement_utilization = str((largest_list_max/len(mylist))*100)+'%'
if pinned != 'Pinned Tweet':
return {'liked_posts': len(int_list), 'total_likes': sum(int_list),
'pinned_likes': 0, 'pinned': 'F', 'like_page_utilization': like_page_utilization,
'repost_page_utilization': repost_page_utilization,
'overall_engagement_utilization': overall_engagement_utilization,
'retweeted_posts': len(int_list_repost), 'pinned_retweets': 0,
'total_retweets': sum(int_list_repost),
'total_engagements_overall': total_engagements_overall}
else:
return {'liked_posts': len(int_list[1:]), 'total_likes': sum(int_list[1:]),
'pinned_likes': int_list[0], 'pinned': 'T', 'like_page_utilization': like_page_utilization,
'repost_page_utilization': repost_page_utilization,
'overall_engagement_utilization': overall_engagement_utilization,
'retweeted_posts': len(int_list_repost[1:]), 'pinned_retweets': int_list_repost[0],
'total_retweets': sum(int_list_repost[1:]),
'total_engagements_overall': total_engagements_overall}
</code></pre>
 | 0 | 2016-08-30T20:30:02Z | 39,237,275 | <p>You've initialized a list, which is an object. When you ask whether that variable <code>is ''</code>, the test will always be false: it's not an empty string, it's a list object. </p>
<p>If you want to test to see if your array is empty use this code:</p>
<pre><code>if len(int_list_repost) == 0:
    print("int_list_repost is empty")
</code></pre>
<p>But that's not very pythonic. The test for empty arrays should be</p>
<pre><code>if not int_list_repost:
    print("int_list_repost is empty")
</code></pre>
<p>The error message you are getting is because <code>int_list_repost[0]</code> will error out if you have an empty array.</p>
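<p>A quick sketch showing why comparing the list to <code>''</code> never fires, while the truthiness test does:</p>

```python
int_list_repost = []

print(int_list_repost == '')  # False: a list never equals a string
print(bool(int_list_repost))  # False: an empty list is falsy
print(not int_list_repost)    # True: the idiomatic emptiness test
```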
| 1 | 2016-08-30T21:21:45Z | [
"python",
"indexing"
] |
How to deploy Flask app with Supervisor/Nginx/Gunicorn on Ubuntu server | 39,236,595 | <p>I am trying to deploy a Flask app on an Ubuntu server. I referenced <a href="http://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html?highlight=flask" rel="nofollow">this</a>, <a href="https://realpython.com/blog/python/kickstarting-flask-on-ubuntu-setup-and-deployment/" rel="nofollow">this</a> and <a href="http://vladikk.com/2013/09/12/serving-flask-with-nginx-on-ubuntu/" rel="nofollow">this</a> and found a lot of similar questions on SO, but I still can't figure it out. </p>
<p>I can run it manually from the source directory by doing <code>uwsgi siti_uwsgi.ini</code> and navigating to <code>http://server_IP_address:8080/</code>. But when I try <code>uwsgi --socket 127.0.0.1:3031 --wsgi-file views.py --master --processes 4 --threads 2</code> and navigate to <code>http://server_IP_address:3031</code>, I get nothing. </p>
<p>If I go to <code>siti.company.loc</code> (the DNS name I set up), there is a standard Nginx 502 error page.</p>
<p>When I try to restart the supervisor process, it dies with a FATAL error:</p>
<blockquote>
<p>can't find command "gunicorn"</p>
</blockquote>
<p>What am I doing wrong? Let me know if I need to provide more info or background.</p>
<hr>
<p>/webapps/patch/src/views.py (Flask app):</p>
<pre><code>from flask import Flask, render_template, request, url_for, redirect
from flask_cors import CORS
app = Flask(__name__)
CORS(app, resources={r"/*": {'origins': '*'}})
@app.route('/')
def home():
return 'Hello'
@app.route('/site:<site>/date:<int:day>-<month>-<int:year>')
def application(site, month, day, year):
if request.method == 'GET':
# Recompile date from URL. todo: better way
        dte = str(day) + "-" + str(month) + "-" + str(year)
print('about to run')
results = run_SITI(site, dte)
return results
def run_SITI(site, dte):
print('running SITI')
return render_template('results.html', site=site, dte=dte, results=None) # todo: Show results
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>/webapps/patch/siti_wsgi.ini (uWSGI ini):</p>
<pre><code>[uwsgi]
http = :8008
chdir = /webapps/patch/src
wsgi-file = views.py
processes = 2
threads = 2
callable = app
</code></pre>
<p>/etc/nginx/sites-available/siti (Nginx config):</p>
<pre><code>upstream flask_siti {
server 127.0.0.1:8008 fail_timeout=0;
}
server {
listen 80;
server_name siti.company.loc;
charset utf-8;
client_max_body_size 75M;
access_log /var/log/nginx/siti/access.log;
error_log /var/log/nginx/siti/error.log;
keepalive_timeout 5;
location /static {
alias /webapps/patch/static;
}
location /media {
alias /webapps/patch/media;
}
location / {
# checks for static file, if not found proxy to the app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://flask_siti;
}
}
</code></pre>
<p>/etc/supervisor/conf.d/siti.conf (Supervisor config):</p>
<pre><code>[program:webapp_siti]
command=gunicorn -b views:app
directory=/webapps/patch/src
user=nobody
autostart=true
autorestart=true
redirect_stderr=true
</code></pre>
<p>/var/log/nginx/siti/error.log (Nginx error log):</p>
<pre><code>2016/08/30 11:44:42 [error] 25524#0: *73 connect() failed (111: Connection refused) while connecting to upstream, $
2016/08/30 11:44:42 [error] 25524#0: *73 connect() failed (111: Connection refused) while connecting to upstream, $
2016/08/30 11:44:42 [error] 25524#0: *73 no live upstreams while connecting to upstream, client: 10.1.2.195, serve$
</code></pre>
| 2 | 2016-08-30T20:30:51Z | 39,237,012 | <p>You have errors in <code>nginx</code> config:</p>
<p>Instead of:</p>
<pre><code>upstream flask_siti {
server 127.0.0.1:8008 fail_timeout=0;
server {
...
</code></pre>
<p>try:</p>
<pre><code>upstream flask_siti {
server 127.0.0.1:8080 fail_timeout=0;
}
server {
...
</code></pre>
<p>You must "activate" the virtualenv in the <code>supervisor</code> config. To do this, add the following line to your <code>supervisor</code> config:</p>
<pre><code>environment=PATH="/webapps/patch/venv/bin",VIRTUAL_ENV="/webapps/patch/venv",PYTHONPATH="/webapps/patch/venv/lib/python:/webapps/patch/venv/lib/python/site-packages"
</code></pre>
| 2 | 2016-08-30T21:00:12Z | [
"python",
"nginx",
"flask",
"uwsgi",
"gunicorn"
] |
How to deploy Flask app with Supervisor/Nginx/Gunicorn on Ubuntu server | 39,236,595 | <p>I am trying to deploy a Flask app on an Ubuntu server. I referenced <a href="http://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html?highlight=flask" rel="nofollow">this</a>, <a href="https://realpython.com/blog/python/kickstarting-flask-on-ubuntu-setup-and-deployment/" rel="nofollow">this</a> and <a href="http://vladikk.com/2013/09/12/serving-flask-with-nginx-on-ubuntu/" rel="nofollow">this</a> and found a lot of similar questions on SO, but I still can't figure it out. </p>
<p>I can run it manually from the source directory by doing <code>uwsgi siti_uwsgi.ini</code> and navigating to <code>http://server_IP_address:8080/</code>. But when I try <code>uwsgi --socket 127.0.0.1:3031 --wsgi-file views.py --master --processes 4 --threads 2</code> and navigate to <code>http://server_IP_address:3031</code>, I get nothing. </p>
<p>If I go to <code>siti.company.loc</code> (the DNS name I set up), there is a standard Nginx 502 error page.</p>
<p>When I try to restart the supervisor process, it dies with a FATAL error:</p>
<blockquote>
<p>can't find command "gunicorn"</p>
</blockquote>
<p>What am I doing wrong? Let me know if I need to provide more info or background.</p>
<hr>
<p>/webapps/patch/src/views.py (Flask app):</p>
<pre><code>from flask import Flask, render_template, request, url_for, redirect
from flask_cors import CORS
app = Flask(__name__)
CORS(app, resources={r"/*": {'origins': '*'}})
@app.route('/')
def home():
return 'Hello'
@app.route('/site:<site>/date:<int:day>-<month>-<int:year>')
def application(site, month, day, year):
if request.method == 'GET':
# Recompile date from URL. todo: better way
        dte = str(day) + "-" + str(month) + "-" + str(year)
print('about to run')
results = run_SITI(site, dte)
return results
def run_SITI(site, dte):
print('running SITI')
return render_template('results.html', site=site, dte=dte, results=None) # todo: Show results
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>/webapps/patch/siti_wsgi.ini (uWSGI ini):</p>
<pre><code>[uwsgi]
http = :8008
chdir = /webapps/patch/src
wsgi-file = views.py
processes = 2
threads = 2
callable = app
</code></pre>
<p>/etc/nginx/sites-available/siti (Nginx config):</p>
<pre><code>upstream flask_siti {
server 127.0.0.1:8008 fail_timeout=0;
}
server {
listen 80;
server_name siti.company.loc;
charset utf-8;
client_max_body_size 75M;
access_log /var/log/nginx/siti/access.log;
error_log /var/log/nginx/siti/error.log;
keepalive_timeout 5;
location /static {
alias /webapps/patch/static;
}
location /media {
alias /webapps/patch/media;
}
location / {
# checks for static file, if not found proxy to the app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://flask_siti;
}
}
</code></pre>
<p>/etc/supervisor/conf.d/siti.conf (Supervisor config):</p>
<pre><code>[program:webapp_siti]
command=gunicorn -b views:app
directory=/webapps/patch/src
user=nobody
autostart=true
autorestart=true
redirect_stderr=true
</code></pre>
<p>/var/log/nginx/siti/error.log (Nginx error log):</p>
<pre><code>2016/08/30 11:44:42 [error] 25524#0: *73 connect() failed (111: Connection refused) while connecting to upstream, $
2016/08/30 11:44:42 [error] 25524#0: *73 connect() failed (111: Connection refused) while connecting to upstream, $
2016/08/30 11:44:42 [error] 25524#0: *73 no live upstreams while connecting to upstream, client: 10.1.2.195, serve$
</code></pre>
| 2 | 2016-08-30T20:30:51Z | 39,251,473 | <p>Was able to get it working with the following changes:</p>
<p>/etc/supervisor/conf.d/siti.conf (Supervisor config):</p>
<pre><code>[program:webapp_siti]
command=/webapps/patch/venv/bin/gunicorn -b :8118 views:app # didn't use uwsgi.ini after all
directory=/webapps/patch/src
user=nobody
autostart=true
autorestart=true
redirect_stderr=true
</code></pre>
<p>/etc/nginx/sites-enabled/siti (Nginx config):</p>
<pre><code>upstream flask_siti {
server 127.0.0.1:8118 fail_timeout=0; # changed ports because 8008 was already in use by something else
}
# snip ...
</code></pre>
<p>Turns out I had set up uWSGI to listen on port 8008. I also had an extra file in /etc/nginx/sites-enabled called <code>siti.save</code> that was preventing Nginx from reloading. I deleted it, reloaded/restarted Nginx, restarted Supervisor, and it worked.</p>
| 2 | 2016-08-31T13:47:40Z | [
"python",
"nginx",
"flask",
"uwsgi",
"gunicorn"
] |
(Python type hints) How to specify type being currently defined as the returned type from a method? | 39,236,606 | <p>I would like to specify (as a type hint) the currently defined type as the return type from a method.</p>
<p>Here's an example:</p>
<pre class="lang-py prettyprint-override"><code>class Action(Enum):
ignore = 0
replace = 1
delete = 2
@classmethod
# I would like something like
# def idToObj(cls, elmId: int)->Action:
# but I cannot specify Action as the return type
# since it would generate the error
# NameError: name 'Action' is not defined
def idToObj(cls, elmId: int):
if not hasattr(cls, '_idToObjDict'):
cls._idToObjDict = {}
for elm in list(cls):
cls._idToObjDict[elm.value] = elm
return cls._idToObjDict[elmId]
</code></pre>
<p>Ideally I would have liked to specify something like</p>
<pre class="lang-py prettyprint-override"><code>def idToObj(cls, elmId: int)->Action:
</code></pre>
<p>Thank you.</p>
 | 0 | 2016-08-30T20:31:30Z | 39,236,677 | <p>This case is mentioned in the official <a href="https://www.python.org/dev/peps/pep-0484/#id27" rel="nofollow">type hints PEP</a>:</p>
<blockquote>
<p>When a type hint contains names that have not been defined yet, that
definition may be expressed as a string literal, to be resolved later.</p>
</blockquote>
<p>This, for example, fails with a <code>NameError</code>, because <code>Tree</code> is used before the class definition is complete:</p>
<pre><code>class Tree:
def __init__(self, left: Tree, right: Tree):
self.left = left
self.right = right
</code></pre>
<p>To address this, we write:</p>
<pre><code>class Tree:
def __init__(self, left: 'Tree', right: 'Tree'):
self.left = left
self.right = right
</code></pre>
<p>In your case it would be:</p>
<pre><code>def idToObj(cls, elmId: int)->'Action':
pass # classmethod body
</code></pre>
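<p>Applied to the <code>Action</code> enum from the question, a minimal runnable sketch (it assumes a plain value lookup via <code>cls(elmId)</code> is enough, so the manual <code>_idToObjDict</code> cache is dropped):</p>

```python
from enum import Enum

class Action(Enum):
    ignore = 0
    replace = 1
    delete = 2

    @classmethod
    def idToObj(cls, elmId: int) -> 'Action':
        # the quoted 'Action' is resolved later, after the class exists;
        # cls(elmId) performs the value-to-member lookup
        return cls(elmId)

print(Action.idToObj(1))  # Action.replace
```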
| 2 | 2016-08-30T20:36:58Z | [
"python",
"type-hinting"
] |
User input and retrieving it in Python | 39,236,645 | <p>I am currently trying to use two modules in Python: I am trying to have one module take user input and store it, but I cannot figure out how to retrieve the data from the input module. I have imported everything, but this only runs my input again from the def in the input module. Below is an example of my input module. How do I return this value from the user and use it in my other module to do calculations with?</p>
<pre><code>def test():
test = int(input("numbers:"))
return test
</code></pre>
 | 0 | 2016-08-30T20:34:59Z | 39,236,787 | <p>I assume you mean that you have two files, and you need to use the information from the test function in the other file. Then, assuming your test file name is <code>testfile.py</code>, I would go with:</p>
<pre><code>import testfile
num = testfile.test() # This will store the number received from the test() function in the variable num
print('Received number from user:', num)
</code></pre>
| 3 | 2016-08-30T20:44:39Z | [
"python",
"python-2.7",
"python-3.x"
] |
Count the number of occurrences of two characters in python column | 39,236,816 | <p>I have a dataframe (df) with variable Area representing the Area code. I need to find the number of occurrences of Z followed by X.
In the following example, Z->X is repeated twice, which means the count is 2.</p>
<pre><code>Area
Z
A
B
Z
X
A
B
Z
X
</code></pre>
<p>I have tried the following to find True/False</p>
<pre><code> df.Area.str.contains(r'Z|X')
</code></pre>
<p>I am sure that this is the wrong approach, as it didn't give me the desired result. Is there any other way of doing it?</p>
 | 0 | 2016-08-30T20:46:27Z | 39,236,882 | <p>You need the <code>shift()</code> function: specify the <code>period</code> parameter to be <code>-1</code> to shift the series one step forward, which lines each value up with its successor so you can check that <code>Z</code> is immediately followed by <code>X</code>:</p>
<pre><code>((df.Area == "Z") & (df.Area.shift(-1) == "X")).sum()
# 2
</code></pre>
<p>A closer look at how <code>shift</code> works:</p>
<pre><code>df["Area_shift"] = df.Area.shift(-1)
df
# Area Area_shift
# 0 Z A
# 1 A B
# 2 B Z
# 3 Z X
# 4 X A
# 5 A B
# 6 B Z
# 7 Z X
# 8 X NaN
</code></pre>
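<p>To sanity-check the expected count without pandas, the same adjacent-pair logic can be sketched in plain Python with <code>zip</code>:</p>

```python
area = ['Z', 'A', 'B', 'Z', 'X', 'A', 'B', 'Z', 'X']

# pair each value with its successor and count 'Z' immediately followed by 'X'
count = sum(1 for cur, nxt in zip(area, area[1:]) if cur == 'Z' and nxt == 'X')
print(count)  # 2
```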
| 2 | 2016-08-30T20:50:42Z | [
"python",
"string",
"find-occurrences"
] |
Python IDLE 3.5 invalid literal for int() with base 10 | 39,236,874 | <p>I'm new to Python, and I can't figure out how to make this work; it just won't work. (Other answers won't help me.)</p>
<p>Here's my code:</p>
<pre><code># My First Python Game Thing
# Aka Endless Walking Simulator
game_temp1 = True
def choiceroom_thing1():
while game_temp1:
directionchoice = input('L(left)R(Right)B(Backwards)F(Forwards)')
direction = int(directionchoice)
if direction in ['L', 'R','B', 'F']:
game_temp1 = False
if direction == "L":
print ('You went Left!')
if direction == "R":
print ('You went Right!')
if direction == "B":
print ('You went Backwards!')
if direction == "F":
print ('You went Forwards!')
game_temp1 = True
room_count = room_count + 1
choiceroom_thing1()
print ('Hello Jeff...')
print ('Heres my first game')
print ("Adventure Game")
input("Press Any Key To Continue...")
room_count = 1
while game_temp1:
directionchoice = input('L(left)R(Right)B(Backwards)F(Forwards)')
direction = int(directionchoice)
if direction in ['L', 'R','B', 'F']:
game_temp1 = False
if direction == "L":
print ('You went Left!')
if direction == "R":
print ('You went Right!')
if direction == "B":
print ('You went Backwards!')
if direction == "F":
print ('You went Forwards!')
game_temp1 = True
room_count = room_count + 1
choiceroom_thing1()
</code></pre>
<p>And then I get this error when I run it.</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/Owner/Desktop/PythonProjects/Blah From Book.py", line 27, in <module>
direction = int(directionchoice)
ValueError: invalid literal for int() with base 10: ':'
</code></pre>
<p>Can someone please fix my code?</p>
 | 0 | 2016-08-30T20:50:12Z | 39,237,909 | <p>You do not need to convert <code>directionchoice</code> to an <code>int</code>; the choices are letters, not numbers.<br>
<code>direction</code> should simply be set equal to <code>directionchoice</code>.<br>
Then test whether <code>direction</code> is not one of the wanted choices: </p>
<pre><code>if direction not in ['L', 'R','B', 'F']:
</code></pre>
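<p>A hedged sketch of that string-based check as a small helper (the name <code>move</code> is made up; note there is no <code>int()</code> call anywhere, which is what caused the <code>ValueError</code>):</p>

```python
def move(choice):
    # map each valid letter to its message; anything else is rejected
    names = {'L': 'Left', 'R': 'Right', 'B': 'Backwards', 'F': 'Forwards'}
    if choice not in names:
        return None
    return 'You went %s!' % names[choice]

print(move('L'))  # You went Left!
print(move(':'))  # None
```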
| 0 | 2016-08-30T22:15:46Z | [
"python",
"traceback"
] |
ImportError: No module named caffe - I don't know how to install caffe for Anaconda on Windows | 39,236,924 | <p>I have been attempting to follow <a href="http://thirdeyesqueegee.com/deepdream/2015/07/19/running-googles-deep-dream-on-windows-with-or-without-cuda-the-easy-way/" rel="nofollow">this tutorial</a> to install Google DeepDream, but unfortunately I have run into many problems. The first was that I needed to create a new Python 2.7 environment for Anaconda, as my root Python was 3.5, which didn't seem to be co-operating with DeepDream. This meant I had to install numpy, scipy, PIL and protobuf manually for the py27 env, but I don't know how to install caffe. I downloaded the caffe files from GitHub, downloaded CMake and tried to compile it, but I have no idea what I'm doing and have never done this before. Does anyone know how I can install caffe for Anaconda 3 for my py27 environment on Windows 10?</p>
 | 0 | 2016-08-30T20:53:40Z | 39,757,381 | <p>Based on your question I'm noticing two likely issues with your installation.</p>
<p>The first is that Caffe is not really intended to be installed. The normal use case is to compile it locally after tweaking its settings to best match your system. I believe the tutorial you are using comes with a precompiled version, but for better performance I eventually replaced it with my own compilation of <a href="https://github.com/microsoft/caffe" rel="nofollow">Microsoft Caffe</a>. Be forewarned though: Microsoft/Caffe only works in VS2013 and has a lot of weird issues with NuGet not handling its dependencies correctly.</p>
<p>Secondly, as to how to deploy a compiled copy of Caffe with pycaffe: the default for DeepDream is under '../caffe/', as in one directory up and in its own folder, but you can adjust the code in dream.ipynb to point to anywhere you would like to place Caffe.</p>
| 0 | 2016-09-28T20:48:31Z | [
"python",
"anaconda",
"caffe"
] |
Receiving an SMS on an Arduino Yun with the Twilio service | 39,236,967 | <p>I'm trying to create a project with an Arduino Yun that will allow it to receive text messages and process commands based on the text. I've followed the tutorial for sending SMS messages with the Yun (<a href="https://www.twilio.com/blog/2015/02/send-sms-and-mms-from-your-arduino-yun.html" rel="nofollow">https://www.twilio.com/blog/2015/02/send-sms-and-mms-from-your-arduino-yun.html</a>) but I can't find any resources for receiving a text. Could anyone help me out? </p>
 | 1 | 2016-08-30T20:56:58Z | 39,344,287 | <p>In general, you can always receive an SMS on an SMS-capable Twilio number you buy from the console. What should happen on receipt of that SMS usually has to be programmed.</p>
<p>In your case, the twilio_phone_number used to send SMS out could receive the replies.</p>
<p>To program the replies:
(A) First set up a webserver and create an endpoint that can parse the incoming message and then trigger the necessary action on your Arduino. A quick Google search shows this useful link (<a href="http://www.instructables.com/id/Controlling-Arduino-with-python-based-web-API-No-p/?ALLSTEPS" rel="nofollow">http://www.instructables.com/id/Controlling-Arduino-with-python-based-web-API-No-p/?ALLSTEPS</a>). Test this endpoint to be sure it does what you want when given the right params.</p>
<p>(B) You should then set up the webhook for this number for incoming messages (see the attached <a href="http://i.stack.imgur.com/MrjCD.png" rel="nofollow">When A MESSAGE COMES IN</a> screenshot), pointing it to the endpoint created in step 1.</p>
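<p>For step (A), a hedged standard-library sketch of parsing the form-encoded body such a webhook would receive (the sample payload and the "LIGHT ON" command are made up; a real endpoint would sit behind a web server as in the link above):</p>

```python
from urllib.parse import parse_qs

# made-up example of the application/x-www-form-urlencoded body Twilio
# POSTs to the webhook; 'From' and 'Body' are the documented parameter names
payload = "From=%2B15551234567&Body=LIGHT+ON"
params = {key: values[0] for key, values in parse_qs(payload).items()}

command = params.get("Body", "").strip().upper()
print(params["From"], "->", command)  # +15551234567 -> LIGHT ON
```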
| 2 | 2016-09-06T08:35:01Z | [
"python",
"arduino",
"twilio",
"arduino-yun"
] |
Retrieve value of last key in dictionary | 39,237,041 | <p>I'm trying to extract financial figures from an API that returns a JSON object. The API does not explicitly state that the dictionaries returned are OrderedDicts; however, the output is always sorted, so I'm going to make the assumption that it is. </p>
<p>I need to retrieve the value of the last key in the dictionary, as I need the most current financial figures. However, I cannot explicitly use the name of the key to extract it, as it is in a date format and may be different depending on the company being queried. </p>
<p>I tried to convert the dictionary object into a list and select the last key using an index of -1; however, the conversion process seems to randomize the order of the dictionary, so it no longer retrieves the last date. </p>
<p>So how would I go about retrieving the value of the last key in the dictionary?</p>
<p>Here is an example of what the JSON object looks like: </p>
<pre><code>{'2007-06-01T00:00:00': 21.19,
'2008-06-01T00:00:00': 26.01,
'2009-06-01T00:00:00': 19.34,
'2010-06-01T00:00:00': 22.88,
'2011-06-01T00:00:00': 23.77,
'2012-06-01T00:00:00': 14.77,
'2013-06-01T00:00:00': 16.58,
'2014-06-01T00:00:00': 14.02,
'2015-06-01T00:00:00': 7.0,
'2016-06-01T00:00:00': 9.08}
</code></pre>
 | 1 | 2016-08-30T21:02:13Z | 39,237,186 | <p>After reconsidering the problem, I realize that because the dates are in a format that compares correctly as strings, you can just do <code>k, v = max(data.items())</code>. However, if you (or someone else looking at this thread) use a date format that doesn't work like that, my original answer is below.</p>
<hr>
<p>You can simply use the <code>max</code> builtin to compare the keys of the dictionary; to make sure they are all compared as dates and times, you can use the <code>key</code> argument to compare them as datetime objects:</p>
<pre><code>import datetime
data = {'2007-06-01T00:00:00': 21.19,
'2008-06-01T00:00:00': 26.01,
'2009-06-01T00:00:00': 19.34,
'2010-06-01T00:00:00': 22.88,
'2011-06-01T00:00:00': 23.77,
'2012-06-01T00:00:00': 14.77,
'2013-06-01T00:00:00': 16.58,
'2014-06-01T00:00:00': 14.02,
'2015-06-01T00:00:00': 7.0,
'2016-06-01T00:00:00': 9.08}
def converter(timestamp):
return datetime.datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%S")
highest = max(data.keys(),key=converter)
print(highest) #2016-06-01T00:00:00
</code></pre>
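<p>On a subset of the question's data, the <code>k, v = max(data.items())</code> shortcut looks like this (it leans on the assumption that the ISO-style keys compare correctly as plain strings):</p>

```python
data = {'2007-06-01T00:00:00': 21.19,
        '2015-06-01T00:00:00': 7.0,
        '2016-06-01T00:00:00': 9.08}

# items() yields (key, value) tuples; tuple comparison uses the key first,
# so the lexicographically largest (latest) key wins
k, v = max(data.items())
print(k, v)  # 2016-06-01T00:00:00 9.08
```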
| 4 | 2016-08-30T21:13:53Z | [
"python",
"sorting",
"dictionary",
"key",
"key-value"
] |
Iterating over large line separated file without freezing | 39,237,054 | <p>I'm trying to implement a Strongly Connected Graph search algorithm and am trying to load a large file with edges of the graph into memory to do it.</p>
<p>The file comes as:</p>
<blockquote>
<p> 1 2 // i.e. a directed edge from 1 to 2 </p>
<p> 1 3 </p>
<p> 2 3 </p>
<p> 3 1 </p>
<p>...</p>
</blockquote>
<p>For this part of the algorithm I need to reverse the graph, and I am hoping to store each node in a dictionary along with some other values which will be important for the algorithm.</p>
<p>This code works for smaller files but just stalls/hangs for my large file, which is 72.7 MB; I would appreciate any suggestions to make it work on large files:</p>
<pre><code>def begin():
graph = {}
for line in open('/home/edd91/Documents/SCC.txt'):
tempArray = []
for e in line.split():
if e != ' ' and e!='\n':
tempArray.append(int(e))
if tempArray[1] in graph.keys():
graph[tempArray[1]]['g'].append(tempArray[0])
else:
graph[tempArray[1]] = {'g': [tempArray[0],], 's': False, 't': None, 'u': None }
print len(graph)
</code></pre>
 | 0 | 2016-08-30T21:02:48Z | 39,237,192 | <p>There is not much to be done algorithmically, but the parsing is a little more complex than it needs to be, so speed suffers.
You can improve the program's speed like this:</p>
<pre><code>def begin():
graph = {}
for line in open('/home/edd91/Documents/SCC.txt'):
        # take both fields in one split: avoids the inner loop and the appends, which is the biggest saver
        # line.split() already handles repeated spaces and the trailing newline
tempArray = line.split()
v1 = tempArray[1] # copy to get rid of array
if v1 in graph: # no need to use .keys()
graph[v1]['g'].append(tempArray[0])
else:
graph[v1] = {'g': [tempArray[0],], 's': False, 't': None, 'u': None }
print len(graph)
</code></pre>
<p>You can compare speed vs your original program with a medium-sized file first.</p>
| 0 | 2016-08-30T21:14:18Z | [
"python",
"graph"
] |
Iterating over large line separated file without freezing | 39,237,054 | <p>I'm trying to implement a Strongly Connected Graph search algorithm and am trying to load a large file with edges of the graph into memory to do it.</p>
<p>The file comes as:</p>
<blockquote>
<p> 1 2 // i.e. a directed edge from 1 to 2 </p>
<p> 1 3 </p>
<p> 2 3 </p>
<p> 3 1 </p>
<p>...</p>
</blockquote>
<p>For this part of the algorithm I need to reverse the graph, and I am hoping to store each node in a dictionary along with some other values which will be important for the algorithm.</p>
<p>This code works for smaller files but just stalls/hangs for my large file, which is 72.7 MB; I would appreciate any suggestions to make it work on large files:</p>
<pre><code>def begin():
graph = {}
for line in open('/home/edd91/Documents/SCC.txt'):
tempArray = []
for e in line.split():
if e != ' ' and e!='\n':
tempArray.append(int(e))
if tempArray[1] in graph.keys():
graph[tempArray[1]]['g'].append(tempArray[0])
else:
graph[tempArray[1]] = {'g': [tempArray[0],], 's': False, 't': None, 'u': None }
print len(graph)
</code></pre>
 | 0 | 2016-08-30T21:02:48Z | 39,237,325 | <p>You can save some time by getting <code>tempArray</code> in one go for each line and unpacking it (provided, of course, that each line consists of a pair of numbers), and also by using a <code>defaultdict</code>:</p>
<pre><code>import collections
graph = collections.defaultdict(lambda: {'g': [], 's': False, 't': None, 'u': None })
for line in ... :
k, v = map(int, line.split())
graph[v]['g'].append(k)
</code></pre>
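<p>A runnable sketch of the same idea on the sample edges from the question (reading from a list instead of the file; the reversed adjacency ends up in <code>graph[node]['g']</code>):</p>

```python
import collections

lines = ["1 2", "1 3", "2 3", "3 1"]  # the directed edges from the question
graph = collections.defaultdict(lambda: {'g': [], 's': False, 't': None, 'u': None})
for line in lines:
    k, v = map(int, line.split())
    graph[v]['g'].append(k)  # store the reversed edge v -> k

print(graph[3]['g'])  # [1, 2]
print(graph[1]['g'])  # [3]
```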
| 1 | 2016-08-30T21:25:28Z | [
"python",
"graph"
] |
BeautifulSoup adding unwanted linebreaks to strings Python3.5 | 39,237,093 | <p>I've been having some trouble with what appear to be hidden newline characters in strings gotten with the BeautifulSoup .find function. The code I have scans an html document and pulls out name, title, company, and country as strings. I type checked and saw they were strings and when I print them and check their length everything appears to be normal strings. But when I use them either in <code>print("%s is a %s at %s in %s" % (name,title,company,country))</code> or <code>outputWriter.writerow([name,title,company,country])</code> to write to a csv file I get extra linebreaks that did not appear to be there in the strings. </p>
<p>What's going on? Or can anyone point me in the right direction? </p>
<p>I'm new to Python and not sure where to look up everything I don't know so I'm asking here after spending all day trying to fix the problem. I've searched through google and several other stack overflow articles on stripping hidden characters, but nothing seems to work.</p>
<pre><code>import csv
from bs4 import BeautifulSoup
# Open/create csvfile and prep for writing
csvFile = open("attendees.csv", 'w+', encoding='utf-8')
outputWriter = csv.writer(csvFile)
# Open HTML and Prep BeautifulSoup
html = open('WEB SUMMIT _ LISBON 2016 _ Web Summit Featured Attendees.html', 'r', encoding='utf-8')
bsObj = BeautifulSoup(html.read(), 'html.parser')
itemList = bsObj.find_all("li", {"class":"item"})
outputWriter.writerow(['Name','Title','Company','Country'])
for item in itemList:
name = item.find("h4").get_text()
print(type(name))
title = item.find("strong").get_text()
print(type(title))
company = item.find_all("span")[1].get_text()
print(type(company))
country = item.find_all("span")[2].get_text()
print(type(country))
print("%s is a %s at %s in %s" % (name,title,company,country))
outputWriter.writerow([name,title,company,country])
</code></pre>
| 1 | 2016-08-30T21:06:19Z | 39,237,470 | <p>Most likely you need to <em>strip</em> the whitespace, there is nothing in your code that adds it so it has to be there:</p>
<pre><code>outputWriter.writerow([name.strip(),title.strip(),company.strip(),country.strip()])
</code></pre>
<p>You can verify what is there by looking at the <em>repr</em> output:</p>
<pre><code>print("%r is a %r at %r in %r" % (name,title,company,country))
</code></pre>
<p>When you <em>print</em> you see the <em>str</em> output so if there is a newline you may not realise it is there:</p>
<pre><code>In [8]: s = "string with newline\n"
In [9]: print(s)
string with newline
In [10]: print("%r" % s)
'string with newline\n'
</code></pre>
<p><a href="http://stackoverflow.com/questions/1436703/difference-between-str-and-repr-in-python">difference-between-str-and-repr-in-python</a></p>
<p>If the newlines are actually embedded in the body of the strings, you will need to replace them instead, e.g. <code>name.replace("\n", " ")</code></p>
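<p>A quick illustration of the two remedies: <code>strip</code> only trims the ends, while <code>replace</code> also handles newlines buried in the middle of a string (the sample values below are made up):</p>

```python
name = "\n  Jane Doe\n"          # whitespace around the value
title = "Engineer\nat large"     # newline inside the value

print(repr(name.strip()))              # 'Jane Doe'
print(repr(title.strip()))             # 'Engineer\nat large' -- inner newline survives
print(repr(title.replace("\n", " ")))  # 'Engineer at large'
```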
| 0 | 2016-08-30T21:37:17Z | [
"python",
"string",
"csv",
"beautifulsoup",
"python-3.5"
] |
Python Structuring Nested Try Except Statements | 39,237,162 | <p>I have a list of ints called companyNumbers. These are being checked against an online API to test they are valid. </p>
<p>Some company numbers have an incorrect first digit, so I would like to check the number, then if an HTTP error is received (Indicating an invalid number) to recheck the number minus its first digit. </p>
<p>If this is again invalid then the number is written to an error sheet, otherwise it is stored in correctNumbers.</p>
<pre><code>for companyNumber in companyNumbers:
try:
r = s.profile(companyNumber).json()
except HTTPError:
try:
r = s.profile(companyNumber[1:]).json()
except HTTPError:
errorSheet.write(i, 0, companyNumber)
else:
correctNumbers.append(r)
</code></pre>
<p>I am not sure how to structure my try/except/else statements. I need the else statement to activate if either of the try blocks are successful. Currently the nested try block does not seem to do anything.</p>
 | 1 | 2016-08-30T21:11:53Z | 39,237,649 | <p><code>try</code> runs the code in its block until an error occurs, so the success action (appending to <code>correctNumbers</code>) should go inside the <code>try</code> block as well:</p>
<pre><code>for companyNumber in companyNumbers:
try:
r = s.profile(companyNumber).json()
correctNumbers.append(r)
except HTTPError:
try:
r = s.profile(companyNumber[1:]).json()
correctNumbers.append(r)
except HTTPError:
errorSheet.write(i, 0, companyNumber)
</code></pre>
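<p>If you ever need more than one fallback, the same pattern generalizes to a loop over candidates using Python's <code>for</code>/<code>else</code> clause: the <code>else</code> runs only when no candidate succeeded. A self-contained sketch, with a stub <code>fetch</code> standing in for <code>s.profile(...).json()</code> (the stub and its "valid" numbers are illustrative, not from the question):</p>

```python
class HTTPError(Exception):
    pass

def fetch(number):
    # stand-in for s.profile(number).json(); only two numbers are "valid" here
    if number not in ("1397", "397"):
        raise HTTPError(number)
    return {"company": number}

def resolve(numbers):
    correct, errors = [], []
    for number in numbers:
        for candidate in (number, number[1:]):
            try:
                r = fetch(candidate)
            except HTTPError:
                continue
            correct.append(r)
            break
        else:  # no break happened: every candidate failed
            errors.append(number)
    return correct, errors

correct, errors = resolve(["1397", "9397", "5555"])
print(correct)  # [{'company': '1397'}, {'company': '397'}]
print(errors)   # ['5555']
```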
| 1 | 2016-08-30T21:51:48Z | [
"python",
"nested",
"escaping",
"try-catch"
] |
Exporting results of median blur into a class and exporting to TIFF | 39,237,173 | <p>I'm working on a program which applies a median blur onto a multiframe tiff file. My goal is to filter through every frame in the tiff sequence and then save the result as the same sequence, just filtered. However, anytime I run it, it only saves the last frame, as I don't know how to properly save the data into a separate sequence as it runs.</p>
<pre><code>#takes .tiff, loads it into PIL, converts to greyscale, sets attributes in PIL form
im = Image.open('example_recording.tif').convert('L')
im.save('greyscale_example.tif')
width,height = im.size
image_lookup = 0
#creates class used to de-sequence the animation
class ImageSequence:
def __init__(self, im):
self.im = im
def __getitem__(self, ix):
try:
if ix:
self.im.seek(ix)
return self.im
except EOFError:
            raise IndexError # end of sequence; keeps the loop from breaking unnecessarily once it reaches the end of the tiff animation
for frame in ImageSequence(im):
imarray = np.array(frame)
Blur = cv2.medianBlur(imarray,5)
im = Image.fromarray(Blur)
im.save('corrected.tif')
#(saves actually only the last frame of the movie, med-corrected)
</code></pre>
<p>On Andrews advice, the code was modified to look like this:</p>
<pre><code>im = Image.open('example_recording.tif')
width,height = im.size
image_lookup = 0
n = 1
while True:
try:
im.seek(n)
n = n+1
except EOFError:
print "length is", n
break;
#this solves the length issue as ImageSequence doesnt have _len_ attribute
class ImageSequence:
def __init__(self, im):
self.im = im
def __getitem__(self, ix):
try:
if ix:
self.im.seek(ix)
return self.im
except EOFError:
raise IndexError
depth = n
target_array = np.zeros((width, height, depth))
for index, frame in enumerate(ImageSequence(im)):
imarray = np.array(frame)
Blur = cv2.medianBlur(imarray,5)
print type(Blur)
im = Image.fromarray(Blur)
im.save('corrected_{}.tif'.format(index))
print n
</code></pre>
<p>So now it works perfectly fine!</p>
 | 0 | 2016-08-30T21:13:05Z | 39,237,587 | <pre><code>depth = sum(1 for _ in ImageSequence(im))  # ImageSequence has no __len__, so count the frames
im.seek(0)  # rewind before iterating again
target_array = np.zeros((height, width, depth))  # PIL frames convert to (height, width) arrays
for index, frame in enumerate(ImageSequence(im)):
imarray = np.array(frame)
Blur = cv2.medianBlur(imarray,5)
target_array[:, :, index] = Blur
</code></pre>
<p>That gives you a nice array of matrices, I would think you'll have to reverse everything you've done to pull your images out, but I'm not a PIL expert. </p>
<p>EDIT:</p>
<pre><code>for index, frame in enumerate(ImageSequence(im)):
imarray = np.array(frame)
Blur = cv2.medianBlur(imarray,5)
im = Image.fromarray(Blur)
im.save('corrected_{}.tif'.format(index))
</code></pre>
<p>This should give you an image for each loop, at least.</p>
| 1 | 2016-08-30T21:46:05Z | [
"python",
"image-processing",
"python-imaging-library",
"tiff"
] |
Is result of sys.exc_info() "stable" when its origin thread finishes? | 39,237,180 | <p>Supposing I catch an exception inside a thread and store exc_info tuple somewhere. Then thread finishes. Is my exc_info content still accessible and correct, so I can interpret it later in other thread?</p>
| 0 | 2016-08-30T21:13:19Z | 39,237,267 | <p>The tuple you receive from <code>sys.exc_info()</code> can safely be passed to and used from other threads, even after the death of the thread the tuple came from. The references from the tuple keep things like stack state alive even when the thread is dead.</p>
<p>(You won't be able to access the tuple as <code>sys.exc_info()</code> from other threads, so you'll need to store it somewhere before the thread dies, but it sounds like you're aware of that.)</p>
| 1 | 2016-08-30T21:20:53Z | [
"python"
] |
Scrapy - TypeError: Cannot convert unicode body - HtmlResponse has no encoding | 39,237,259 | <p>When I try to construct a <code>HtmlResponse</code> object in Scrapy like this:</p>
<pre><code>scrapy.http.HtmlResponse(url=self.base_url + dealer_url[0], body=dealer_html)
</code></pre>
<p>I got this error:</p>
<pre><code>Traceback (most recent call last):
File "d:\kerja\hit\python~1\oliver~1\machin~2\lib\site-packages\twisted\internet\defer.py", line 588, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "D:\Kerja\HIT\Python Projects\Oliver Morris\machinery\machinery\machinery\spiders\fwi.py", line 69, in parse_items
dealer_page = scrapy.http.HtmlResponse(url=self.base_url + dealer_url[0], body=dealer_html)
File "d:\kerja\hit\python~1\oliver~1\machin~2\lib\site-packages\scrapy\http\response\text.py", line 27, in __init__
super(TextResponse, self).__init__(*args, **kwargs)
File "d:\kerja\hit\python~1\oliver~1\machin~2\lib\site-packages\scrapy\http\response\__init__.py", line 18, in __init__
self._set_body(body)
File "d:\kerja\hit\python~1\oliver~1\machin~2\lib\site-packages\scrapy\http\response\text.py", line 43, in _set_body
type(self).__name__)
TypeError: Cannot convert unicode body - HtmlResponse has no encoding
</code></pre>
<p>Does anyone know how to solve this error? </p>
| 0 | 2016-08-30T21:20:22Z | 39,245,851 | <p><a href="http://doc.scrapy.org/en/latest/topics/request-response.html#htmlresponse-objects" rel="nofollow">HtmlResponse</a> is trying to detect encoding:</p>
<blockquote>
<p>The HtmlResponse class is a subclass of TextResponse which adds
encoding auto-discovering support by looking into the HTML meta
http-equiv attribute. See TextResponse.encoding.</p>
</blockquote>
<p>So basically the html string you provide to the <code>body</code> parameter (<code>dealer_html</code> in your case) doesn't have an encoding specified.
As per the <a href="http://www.w3schools.com/TAGS/att_meta_http_equiv.asp" rel="nofollow">w3 docs for <code>http-equiv</code></a>, it should contain:</p>
<blockquote>
<pre><code>HTML 4.01: <meta http-equiv="content-type" content="text/html; charset=UTF-8">
HTML5: <meta charset="UTF-8">
</code></pre>
</blockquote>
<p>In this case you can either fix your html or specify encoding when creating the <code>HtmlResponse</code> object via <code>encoding</code> parameter:</p>
<pre><code>HtmlResponse(url='http://scrapy.org', body=u'some body', encoding='utf-8')
</code></pre>
| 1 | 2016-08-31T09:32:38Z | [
"python",
"python-2.7",
"encoding",
"scrapy",
"scrapy-spider"
] |
Django use Integer Field as ForeignKey field | 39,237,280 | <p>To support old (legacy) db we have to create a table that using <strong>integer field</strong> as a <strong>foreignkey</strong> to <strong>User</strong> table: This is how our model look like:</p>
<pre><code>class UserHistory():
user_id = models.IntegerField(null=True, blank=True)
# ..... (other fields) .....
</code></pre>
<p>Problem is user id may or may not exist in User table. Is there any way to treat that user_id field as foreign key to user talbe (when it exist)? So I can dis play on django admin or other place instead of just the ID.</p>
| 1 | 2016-08-30T21:22:12Z | 39,237,433 | <p>You can use a <code>ForeignKey</code> field with <code>db_constraint=False</code>. See the <a href="https://docs.djangoproject.com/en/1.10/ref/models/fields/#django.db.models.ForeignKey.db_constraint" rel="nofollow">docs</a> for warnings, but legacy invalid data is one of the cases where this is reasonable.</p>
| 1 | 2016-08-30T21:34:09Z | [
"python",
"django",
"django-models",
"orm",
"django-admin"
] |
Django use Integer Field as ForeignKey field | 39,237,280 | <p>To support old (legacy) db we have to create a table that using <strong>integer field</strong> as a <strong>foreignkey</strong> to <strong>User</strong> table: This is how our model look like:</p>
<pre><code>class UserHistory():
user_id = models.IntegerField(null=True, blank=True)
# ..... (other fields) .....
</code></pre>
<p>Problem is user id may or may not exist in User table. Is there any way to treat that user_id field as foreign key to user talbe (when it exist)? So I can dis play on django admin or other place instead of just the ID.</p>
| 1 | 2016-08-30T21:22:12Z | 39,250,159 | <p>Add a second column to your database and null out the fields that do not have a corresponding object in the other table using SQL. Then define your django model on this new field.</p>
| 0 | 2016-08-31T12:50:09Z | [
"python",
"django",
"django-models",
"orm",
"django-admin"
] |
Downloading PDFs from links scraped with Beautiful Soup | 39,237,311 | <p>I'm trying to write a script which will iterate through a list of landing page urls from a csv file, append all PDF links on the landing page to a list, and then iterate through the list downloading the PDFs to a specified folder. </p>
<p>I'm a bit stuck on the last step- I can get all the PDF urls, but can only download them individually. I'm not sure how best to amend the directory address to change with each url to ensure each has its own unique file name. </p>
<p>Any help would be appreciated!</p>
<pre><code>from bs4 import BeautifulSoup, SoupStrainer
import requests
import re
#example url
url = "https://beta.companieshouse.gov.uk/company/00445790/filing-history"
link_list = []
r = requests.get(url)
soup = BeautifulSoup(r.content, "lxml")
for a in soup.find_all('a', href=True):
if "document" in a['href']:
link_list.append("https://beta.companieshouse.gov.uk"+a['href'])
for url in link_list:
response = requests.get(url)
with open('C:/Users/Desktop/CompaniesHouse/report.pdf', 'wb') as f:
f.write(response.content)
</code></pre>
| 0 | 2016-08-30T21:24:13Z | 39,237,366 | <p>The simplest thing is to just add a number to each filename using enumerate:</p>
<pre><code>for ind, url in enumerate(link_list, 1):
response = requests.get(url)
with open('C:/Users/Desktop/CompaniesHouse/report_{}.pdf'.format(ind), 'wb') as f:
f.write(response.content)
</code></pre>
<p>But presuming each path ends in <em>some_filename.pdf</em> and the names are unique, you can use the <em>basename</em> itself, which may be more descriptive:</p>
<pre><code>from os.path import basename, join
for url in link_list:
response = requests.get(url)
    with open(join('C:/Users/Desktop/CompaniesHouse', basename(url)), 'wb') as f:
f.write(response.content)
</code></pre>
| 0 | 2016-08-30T21:28:57Z | [
"python",
"pdf",
"beautifulsoup"
] |
Call function the moment before script crashes? | 39,237,314 | <p>I am solving the following problem:</p>
<p>I have a script which should be running as a service 24/7. Because it is quite important, I would like it to send me an email in case it crashes. I found <a href="https://docs.python.org/3/library/atexit.html" rel="nofollow">atexit</a> module, but the docs specifically says it cannot handle crashes.</p>
<p>So my question is: is there any better way of achieving this than by running another serivce which will be checking if this one is running? Or the goal here is to write the service so that it cannot crash (which is almost impossible)?</p>
<p>Thanks for your ideas!</p>
 | 0 | 2016-08-30T21:24:17Z | 39,237,422 | <p>To do what you ask in the first part, just wrap the main part of the script in a try/except.</p>
<p>myscript.py:</p>
<pre><code>def main():
long_running_process()
if __name__ == '__main__':
try:
main()
    except Exception:
        send_email()
        raise  # re-raise so the crash is still visible in logs
</code></pre>
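<p>A sketch of what <code>send_email</code> might look like with the standard library. The addresses and the local SMTP server are illustrative assumptions, not part of the question:</p>

```python
import smtplib
import traceback
from email.message import EmailMessage

def build_crash_report(exc):
    msg = EmailMessage()
    msg["Subject"] = "Service crashed: %r" % exc
    msg["From"] = "service@example.com"   # illustrative addresses
    msg["To"] = "admin@example.com"
    msg.set_content("".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)))
    return msg

def send_email(exc):
    # assumes an MTA listening on localhost:25
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(build_crash_report(exc))
```

<p>Keeping the message construction separate from the actual sending makes the report easy to test without a mail server.</p>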
| 0 | 2016-08-30T21:33:22Z | [
"python",
"crash",
"crash-reports"
] |
How do I remove consecutive duplicates from a list? | 39,237,350 | <p>How do I remove consecutive duplicates from a list like this in python?</p>
<pre><code>lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
</code></pre>
<p>Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.</p>
<p>I want the result to be like this:</p>
<pre><code>newlst = [1,2,4,1,3,5]
</code></pre>
<p>Would you also please consider the case when I have a list like this
<code>[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]</code>
and I want the result to be <code>[4,2,3,3]</code>
rather than <code>[4,2,3]</code> .</p>
| 1 | 2016-08-30T21:27:54Z | 39,237,452 | <p><a href="https://docs.python.org/3/library/itertools.html#itertools.groupby" rel="nofollow">itertools.groupby()</a> is your solution.</p>
<pre><code>newlst = [k for k, g in itertools.groupby(lst)]
</code></pre>
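<p>Applied to the list from the question, this one-liner gives exactly the expected result:</p>

```python
import itertools

lst = [1, 2, 2, 4, 4, 4, 4, 1, 3, 3, 3, 5, 5, 5, 5, 5]
newlst = [k for k, g in itertools.groupby(lst)]
print(newlst)  # [1, 2, 4, 1, 3, 5]
```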
<hr>
<p>If you wish to group and limit the group size by the item's value, meaning 8 4's will be [4,4], and 9 3's will be [3,3,3], here are 2 options that do it:</p>
<pre><code>import itertools
def special_groupby(iterable):
last_element = 0
count = 0
state = False
def key_func(x):
nonlocal last_element
nonlocal count
nonlocal state
        if last_element != x or count >= x:  # start a new group once a run's length reaches its value
last_element = x
count = 1
state = not state
else:
count += 1
return state
return [next(g) for k, g in itertools.groupby(iterable, key=key_func)]
special_groupby(lst)
</code></pre>
<p>OR</p>
<pre><code>def grouper(iterable, n, fillvalue=None):
"Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return itertools.zip_longest(*args, fillvalue=fillvalue)
newlst = list(itertools.chain.from_iterable(next(zip(*grouper(g, k))) for k, g in itertools.groupby(lst)))
</code></pre>
<p>Choose whichever you deem appropriate. Both methods are for numbers > 0.</p>
| 6 | 2016-08-30T21:35:50Z | [
"python"
] |
How do I remove consecutive duplicates from a list? | 39,237,350 | <p>How do I remove consecutive duplicates from a list like this in python?</p>
<pre><code>lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
</code></pre>
<p>Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.</p>
<p>I want the result to be like this:</p>
<pre><code>newlst = [1,2,4,1,3,5]
</code></pre>
<p>Would you also please consider the case when I have a list like this
<code>[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]</code>
and I want the result to be <code>[4,2,3,3]</code>
rather than <code>[4,2,3]</code> .</p>
| 1 | 2016-08-30T21:27:54Z | 39,237,468 | <p>You'd probably want something like this.</p>
<pre><code>lst = [1, 1, 2, 2, 2, 2, 3, 3, 4, 1, 2]
prev_value = None
for number in lst[:]: # the : means we're slicing it, making a copy in other words
if number == prev_value:
lst.remove(number)
else:
prev_value = number
</code></pre>
<p>So, we're going through a copy of the list, and if a number is the same as the previous one, we remove it from the original list; otherwise, we update the previous number. One caveat: <code>list.remove</code> deletes the <em>first</em> occurrence of the value, so this only behaves correctly when a duplicated value doesn't also appear in an earlier run (e.g. it mishandles <code>[1, 2, 1, 1]</code>).</p>
<p>There may be a more succinct way, but this is the way that looked most apparent to me. </p>
<p>HTH.</p>
| 0 | 2016-08-30T21:37:09Z | [
"python"
] |
How do I remove consecutive duplicates from a list? | 39,237,350 | <p>How do I remove consecutive duplicates from a list like this in python?</p>
<pre><code>lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
</code></pre>
<p>Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.</p>
<p>I want the result to be like this:</p>
<pre><code>newlst = [1,2,4,1,3,5]
</code></pre>
<p>Would you also please consider the case when I have a list like this
<code>[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]</code>
and I want the result to be <code>[4,2,3,3]</code>
rather than <code>[4,2,3]</code> .</p>
| 1 | 2016-08-30T21:27:54Z | 39,237,542 | <pre><code>newlist=[]
prev = lst[0]
newlist.append(prev)
for each in lst[1:]:  # skip lst[0], which is already appended
    if each != prev:
        newlist.append(each)
        prev = each
</code></pre>
| 0 | 2016-08-30T21:42:41Z | [
"python"
] |
How do I remove consecutive duplicates from a list? | 39,237,350 | <p>How do I remove consecutive duplicates from a list like this in python?</p>
<pre><code>lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
</code></pre>
<p>Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.</p>
<p>I want the result to be like this:</p>
<pre><code>newlst = [1,2,4,1,3,5]
</code></pre>
<p>Would you also please consider the case when I have a list like this
<code>[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]</code>
and I want the result to be <code>[4,2,3,3]</code>
rather than <code>[4,2,3]</code> .</p>
| 1 | 2016-08-30T21:27:54Z | 39,237,571 | <p>If you want to use the <code>itertools</code> method @MaxU suggested, a possible code implementation is:</p>
<pre><code>import itertools as it
lst=[1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
unique_lst = [i[0] for i in it.groupby(lst)]
print(unique_lst)
</code></pre>
| 1 | 2016-08-30T21:44:45Z | [
"python"
] |
How do I remove consecutive duplicates from a list? | 39,237,350 | <p>How do I remove consecutive duplicates from a list like this in python?</p>
<pre><code>lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
</code></pre>
<p>Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.</p>
<p>I want the result to be like this:</p>
<pre><code>newlst = [1,2,4,1,3,5]
</code></pre>
<p>Would you also please consider the case when I have a list like this
<code>[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]</code>
and I want the result to be <code>[4,2,3,3]</code>
rather than <code>[4,2,3]</code> .</p>
| 1 | 2016-08-30T21:27:54Z | 39,237,793 | <pre><code>st = ['']
[st.append(a) for a in [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5] if a != st[-1]]
print(st[1:])
</code></pre>
| 0 | 2016-08-30T22:04:13Z | [
"python"
] |
How do I remove consecutive duplicates from a list? | 39,237,350 | <p>How do I remove consecutive duplicates from a list like this in python?</p>
<pre><code>lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
</code></pre>
<p>Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.</p>
<p>I want the result to be like this:</p>
<pre><code>newlst = [1,2,4,1,3,5]
</code></pre>
<p>Would you also please consider the case when I have a list like this
<code>[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]</code>
and I want the result to be <code>[4,2,3,3]</code>
rather than <code>[4,2,3]</code> .</p>
| 1 | 2016-08-30T21:27:54Z | 39,241,439 | <p>Check if the next element always is not equal to item. If so append.</p>
<pre><code>lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
new_item = lst[0]
new_list = [lst[0]]
for l in lst:
if new_item != l:
new_list.append(l)
new_item = l
print new_list
print lst
</code></pre>
| 0 | 2016-08-31T05:30:18Z | [
"python"
] |
Unable to PLOT multiple data from EXCEL using MATPLOTLIB | 39,237,399 | <p>I've an Excel file with 1000 rows & 300 columns.
I want to plot (column 1) vs (column 2 to 288); my 1st Column is my X Axis, and the rest of the columns are on the Y axis.
My code is below; I get no display.
There's no error message as such.</p>
<pre><code>from openpyxl import load_workbook
import numpy as np
import matplotlib.pyplot as plt
wb = load_workbook('CombinedData1.xlsx')
sheet_1 = wb.get_sheet_by_name('CombinedData')
x = np.zeros(sheet_1.max_row)
y = np.zeros(sheet_1.max_row)
a = np.zeros(sheet_1.max_column)
b = np.zeros(sheet_1.max_column)
print (sheet_1.max_row)
print (sheet_1.max_column)
for i in range(0, sheet_1.max_row):
for j in range(1, 7):
x[i] = sheet_1.cell(row=i + 1, column=j).value
y[j] = sheet_1.cell(row=i + 1, column=j).value
# z[i] = sheet_1.cell(row=i + 1, column=3).value
print x[i]
# print y[i]
plt.plot(x[i], y[i], 'bo-', label='Values')
plt.grid(True)
plt.xlim(0,100)
plt.ylim(0,10)
plt.show()
</code></pre>
| 1 | 2016-08-30T21:31:58Z | 39,237,861 | <p>Try something like the following nested loop:</p>
<pre>
for j in range(2, 7):           # columns 2..6 go on the y axis
    for i in range(0, sheet_1.max_row):
        x[i] = sheet_1.cell(row=i + 1, column=1).value  # column 1 is the x axis
        y[i] = sheet_1.cell(row=i + 1, column=j).value
#z[i] = sheet_1.cell(row=i + 1, column=3).value
print x[i]
#print y[i]
plt.plot(x, y, label='Values %d' % j, hold=1)
</pre>
<p>The hold-option will put the 7 individual plots together into one single plot. Furthermore, you should consider using the pandas package for importing and processing the sheet data.</p>
| 0 | 2016-08-30T22:10:56Z | [
"python",
"excel",
"numpy",
"matplotlib",
"xlrd"
] |
Unable to PLOT multiple data from EXCEL using MATPLOTLIB | 39,237,399 | <p>I've an Excel file with 1000 rows & 300 columns.
I want to plot (column 1) vs (column 2 to 288); my 1st Column is my X Axis, and the rest of the columns are on the Y axis.
My code is below; I get no display.
There's no error message as such.</p>
<pre><code>from openpyxl import load_workbook
import numpy as np
import matplotlib.pyplot as plt
wb = load_workbook('CombinedData1.xlsx')
sheet_1 = wb.get_sheet_by_name('CombinedData')
x = np.zeros(sheet_1.max_row)
y = np.zeros(sheet_1.max_row)
a = np.zeros(sheet_1.max_column)
b = np.zeros(sheet_1.max_column)
print (sheet_1.max_row)
print (sheet_1.max_column)
for i in range(0, sheet_1.max_row):
for j in range(1, 7):
x[i] = sheet_1.cell(row=i + 1, column=j).value
y[j] = sheet_1.cell(row=i + 1, column=j).value
# z[i] = sheet_1.cell(row=i + 1, column=3).value
print x[i]
# print y[i]
plt.plot(x[i], y[i], 'bo-', label='Values')
plt.grid(True)
plt.xlim(0,100)
plt.ylim(0,10)
plt.show()
</code></pre>
| 1 | 2016-08-30T21:31:58Z | 39,248,631 | <p>I would consider using <code>pandas</code>:</p>
<pre><code>import pandas
df = pandas.read_excel('CombinedData1.xlsx', sheetname='CombinedData', header=None)
df.plot(x=0)
</code></pre>
<p>or </p>
<pre><code>plt.plot(df[0], df[1])
</code></pre>
| 0 | 2016-08-31T11:38:47Z | [
"python",
"excel",
"numpy",
"matplotlib",
"xlrd"
] |
Python 3 sys.exec_prefix wrong | 39,237,407 | <p>I'm at my wits end here with python3 on Solaris reporting the exec_prefix as <code>/usr/lib</code> instead of <code>/usr</code></p>
<p>This is leading to all sorts of bad behavior when using setup_tools and virtualenv, as they are looking for libraries in <code>/usr/lib/lib/python3.4</code> which is obviously wrong.</p>
<p>We've installed both python2 and python3 packages on Solaris 11. Python2 is working correctly, while python3 is reporting the wrong exec_prefix. See below:</p>
<pre><code> bash-4.1$ env
TERM=xterm
PATH=/usr/bin:/bin:/usr/sbin:/sbin
PWD=/home/user
SHLVL=1
_=/usr/bin/env
bash-4.1$ type -a python3.4
python3.4 is /usr/bin/python3.4
python3.4 is /bin/python3.4
bash-4.1$ type -a python3.4-config
python3.4-config is /usr/bin/python3.4-config
python3.4-config is /bin/python3.4-config
bash-4.1$ python3.4-config --exec-prefix
/usr
bash-4.1$ python3.4 -c 'import sys; print(sys.path)'
['', '/usr/lib/python34.zip', '/usr/lib/python3.4', '/usr/lib/python3.4/plat-sunos5', '/usr/lib/python3.4/lib-dynload/64', '/usr/lib/python3.4/site-packages', '/usr/lib/python3.4/vendor-packages']
bash-4.1$ python3.4 -c "import sys; print(sys.prefix, sys.exec_prefix)"
/usr /usr/lib
bash-4.1$ grep CONFIG_ARGS /usr/lib/python3.4/config-3.4m/Makefile
CONFIG_ARGS= 'CC=cc -m64 -xO4 -xtarget=ultra2 -xarch=sparcvis
-xchip=ultra2 -Qoption cg -xregs=no%appl -W2,-xwrap_int
-xmemalign=16i -mt -KPIC -DPIC -xO5 ' 'CXX=CC' '--prefix=/usr'
'--mandir=/usr/share/man' '--bindir=/usr/bin/sparcv9'
'--libdir=/usr/lib/sparcv9' '--sbindir=/usr/sbin/sparcv9'
'--infodir=/usr/share/info' '--enable-shared' '--with-dtrace'
'--with-system-expat'
'--with-system-ffi' '--without-gcc' '--without-ensurepip'
'--enable-ipv6' '--bindir=/usr/bin' 'CPPFLAGS=-IPython
-I/usr/include/ncurses -D_LARGEFILE64_SOURCE
-I/usr/lib/libffi-3.0.9/include ' 'LDFLAGS=-m64 -KPIC -DPIC
-xO5 ' 'CFLAGS=-m64 -xO4 -xtarget=ultra2 -xarch=sparcvis
-xchip=ultra2 -Qoption cg -xregs=no%appl -W2,-xwrap_int
-xmemalign=16i -mt -KPIC -DPIC -xO5 ' 'DFLAGS=-64'
'XPROFILE_DIR=../build/sparcv9/.profile'
</code></pre>
<p>I've looked through the python docs, multiple forums and Q&A sites, site.py, and even getpath.c itself. Nothing explains this behavior.</p>
<p>If I build python3 from source, it works correctly. However, this isn't an option as I'm not the one who provisions our servers, and the issue occurs with the official Solaris package itself, which we'd like to stay consistent.</p>
<p><strong>TL;DR</strong> - Why would sys.exec_prefix report <code>/usr/lib</code> instead of <code>/usr</code>?</p>
| 3 | 2016-08-30T21:32:31Z | 39,238,354 | <p>This looks like Solaris bug:</p>
<p>21622699 Python 3.4 mangles sys.exec_prefix, breaks virtualenv</p>
<p>which was fixed in Solaris 12 but has not yet been back-ported to 11.3; I will see about making that happen.</p>
| 2 | 2016-08-30T23:02:29Z | [
"python",
"python-3.x",
"solaris"
] |
How to Thread a Tweepy Stream | 39,237,420 | <p>I am making a program that pulls live tweets using Tweepy, and runs it through a function. I am using a Tweepy Stream to accomplish the task of getting the tweets. However, what I wish is that while the function is running, Tweepy can still pull these live tweets. I believe I need to thread the Tweepy event listener but I'm not entirely sure how to do this.</p>
<pre><code>def doesSomething(s):
#function here
class listener(tweepy.StreamListener):
def on_status(self, status):
doesSomething(status)
stream_listener = listener()
stream = tweepy.Stream(auth=api.auth, listener=stream_listener)
stream.filter(track=["Example"])
</code></pre>
| 1 | 2016-08-30T21:33:20Z | 39,244,078 | <p>This is an example you can adapt easily...
The code will create a thread that updates a counter in the background while a loop will continuously print the value of the counter. The 'stop' variable takes care of quitting the thread when the main process is killed or exit...</p>
<pre><code>import threading
import time
from sys import stdout
# Only data within a class are actually shared by the threads
class Counter(object):
counter = 0
stop = False
# Function that will be executed in parallel with the rest of the code
def function_1():
for i in range(10):
        if c.stop: return # checked every iteration so the thread can exit quickly
time.sleep(2)
c.counter += 1
if c.stop: return
# Create a class instance
c = Counter()
# Thread function_1
d = threading.Thread(target=function_1)
d.start()
# Exit the thread properly before exiting...
try:
for j in range(100):
stdout.write('\r{:}'.format(c.counter))
stdout.flush()
time.sleep(1)
except:
c.stop = True
</code></pre>
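<p>Adapted to the original question, the idea is simply to hand each status off to a worker thread from <code>on_status</code>, so the callback returns immediately and Tweepy can keep pulling tweets. A minimal self-contained sketch (a plain class stands in for <code>tweepy.StreamListener</code> here, and the sample "tweets" are made up):</p>

```python
import threading
import time

results = []

def does_something(status):
    time.sleep(0.1)          # simulate slow processing
    results.append(status)

class Listener:              # stands in for tweepy.StreamListener
    def on_status(self, status):
        # dispatch the work and return immediately
        t = threading.Thread(target=does_something, args=(status,))
        t.start()
        return t

listener = Listener()
threads = [listener.on_status(s) for s in ("tweet-1", "tweet-2", "tweet-3")]
for t in threads:
    t.join()
print(sorted(results))  # ['tweet-1', 'tweet-2', 'tweet-3']
```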
| 1 | 2016-08-31T08:09:24Z | [
"python",
"multithreading",
"python-3.x",
"twitter",
"tweepy"
] |
Code runs slow but multiple python IDEs makes it faster | 39,237,453 | <p>I have a simple webcrawler which crawls ~4 (specific) sites per second. If I run the script in two different python IDE's I can double the speed because both programs run the code with ~4 crawls per second.
Why does my code run slower than it could? Or is it faster because I use a silly way to make my script use multithreading/multiprocessing by using two different IDE's at the same time?</p>
<p>I use Python 3.5.2</p>
| 0 | 2016-08-30T21:35:54Z | 39,238,737 | <p>This sounds like a great task for a Pool of worker threads.</p>
<p>See: <a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow">Python: The Multiprocessing Module</a></p>
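<p>A minimal sketch of that idea with <code>multiprocessing.pool.ThreadPool</code> (the <code>crawl</code> body and the URLs are placeholders; in the real script it would fetch and parse each page):</p>

```python
from multiprocessing.pool import ThreadPool

def crawl(url):
    # placeholder for the real fetch-and-parse work
    return "crawled %s" % url

urls = ["http://example.com/page%d" % i for i in range(8)]
with ThreadPool(4) as pool:      # 4 requests in flight at a time
    results = pool.map(crawl, urls)
print(results[0])  # crawled http://example.com/page0
```

<p><code>pool.map</code> preserves the input order, so the results line up with the url list.</p>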
| 0 | 2016-08-30T23:51:30Z | [
"python",
"multithreading",
"performance",
"multiprocessing"
] |
Code runs slow but multiple python IDEs makes it faster | 39,237,453 | <p>I have a simple webcrawler which crawls ~4 (specific) sites per second. If I run the script in two different python IDE's I can double the speed because both programs run the code with ~4 crawls per second.
Why does my code run slower than it could? Or is it faster because I use a silly way to make my script use multithreading/multiprocessing by using two different IDE's at the same time?</p>
<p>I use Python 3.5.2</p>
| 0 | 2016-08-30T21:35:54Z | 39,241,403 | <p>It's all about threading, as explained in the comments.</p>
<p>Now there is a solution for your program :</p>
<pre><code>import requests
import threading
import time
class Crawler:
def __init__(self, url):
self.url = url
self.session = requests.Session()
def crawl(self):
        page = self.session.get(self.url)
# do your crawling here
with open('urls.txt', 'r') as f: # assuming you use a file with a list of urls
for url in f:
while(threading.active_count() > 20): # Use 20 threads
time.sleep(0.1)
c = Crawler(url)
        t = threading.Thread(target=c.crawl)  # pass the method itself, don't call it
        t.start()
</code></pre>
| 0 | 2016-08-31T05:27:18Z | [
"python",
"multithreading",
"performance",
"multiprocessing"
] |
Pandas getting all rows listed in one dataframe but not the other UNORDERD | 39,237,642 | <p>I cannot find an easy way to get all the rows of a data frame that are found in one dataframe but not a second dataframe if the data is unordered.</p>
<p>These two answers talk are solutions for ordered data:</p>
<blockquote>
<p><a href="http://stackoverflow.com/questions/25946391/get-rows-that-are-present-in-one-dataframe-but-not-the-other">Get rows that are present in one dataframe, but not the other</a></p>
<p><a href="http://stackoverflow.com/questions/28901683/pandas-get-rows-which-are-not-in-other-dataframe">pandas get rows which are NOT in other dataframe</a></p>
</blockquote>
<p>So just to make it clear I'm trying to get this:
<a href="http://i.stack.imgur.com/sWntA.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/sWntA.jpg" alt="data from one dataframe thats not found in the other dataframe"></a></p>
<p>In one of those related questions mentioned above I found a multiindexing solution that supposedly works with unordered data, but I was unable to implement it. I am hoping there's an easier way. </p>
<p>let me give you an example of the data I'm working with:</p>
<pre><code>DF1
col_a col_b
1325 foo
1397 foo #<---matching value, but not matching index in DF2
1645 foo
... ...
DF2
col_1 col_2
1397 foo #<---matching value, but not matching index in DF1
1500 foo
1621 foo
... ...
</code></pre>
<p>Now if that were all the data in both dataframes the result for processing this specifically for DF1 would look like this:</p>
<pre><code>DF1_UNIQUE
col_a col_b
1325 foo
1645 foo
</code></pre>
<p>(So I'm really only caring about <code>col_a</code> or for DF2 <code>col_1</code>). Notice its missing the 1397 row. that's because it is found in DF2, so I don't want it returned to my new DF. But its not found in the same index and there in lies the problem I have. I already easily created a solution if all the matching indexes are lined up, but I don't know where to start on the indexes that aren't lined up. Can I use the merge function? Or is that the wrong tool for this job?</p>
<p>This code isn't entirely relevant but its the solution I came up with if all the indexes lined up correctly: </p>
<pre><code>def getUniqueEntries(df1, df2):
"""takes two dataframes, returns a dataframe that is comprized of all the rows unique to the first dataframe."""
d1columns = df1.columns
d2columns = df2.columns
df3 = pd.merge(df1, df2, left_on=d1columns[0], right_on=d2columns[0])
print(df3)
return df1[(~df1[d1columns[0]].isin(df3[d1columns[0]]))]
def main(fileread1, fileread2, writeprefix):
df1 = pd.read_csv(fileread1)
df2 = pd.read_csv(fileread2)
df3 = getUniqueEntries(df1, df2)
df4 = getUniqueEntries(df2, df1)
print(df3)
print(df4)
df3.to_csv(writeprefix+fileread1, index=False)
df4.to_csv(writeprefix+fileread2, index=False)
if __name__ == '__main__':
main(sys.argv[1], sys.argv[2], sys.argv[3])
</code></pre>
| 2 | 2016-08-30T21:51:27Z | 39,237,680 | <p>This uses boolean indexing to locate all of the rows in <code>df1</code> where the values in <code>col_a</code> are NOT (<code>~</code>) in <code>col_1</code> of <code>df2</code> (note that the second frame's column is named <code>col_1</code>). It uses <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>isin()</code></a> to locate matching rows, and the negation operator (<code>~</code>) to find the opposite of those (i.e. the ones that don't match).</p>
<pre><code>df1[~df1.col_a.isin(df2.col_1)]
</code></pre>
<p>You mentioned an index, but your sample data does not have one. The matching is thus done only on the values in <code>col_a</code> per your example.</p>
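<p>A quick check with the sample frames from the question (a sketch; the second frame's column is <code>col_1</code>, as in the question):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'col_a': [1325, 1397, 1645], 'col_b': ['foo'] * 3})
df2 = pd.DataFrame({'col_1': [1397, 1500, 1621], 'col_2': ['foo'] * 3})

# keep the rows of df1 whose col_a value does not appear in df2's col_1
result = df1[~df1.col_a.isin(df2.col_1)]
print(result['col_a'].tolist())  # [1325, 1645]
```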
| 3 | 2016-08-30T21:55:10Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
] |
Pandas getting all rows listed in one dataframe but not the other UNORDERED | 39,237,642 | <p>I cannot find an easy way to get all the rows of a data frame that are found in one dataframe but not a second dataframe if the data is unordered.</p>
<p>These two questions discuss solutions for ordered data:</p>
<blockquote>
<p><a href="http://stackoverflow.com/questions/25946391/get-rows-that-are-present-in-one-dataframe-but-not-the-other">Get rows that are present in one dataframe, but not the other</a></p>
<p><a href="http://stackoverflow.com/questions/28901683/pandas-get-rows-which-are-not-in-other-dataframe">pandas get rows which are NOT in other dataframe</a></p>
</blockquote>
<p>So just to make it clear I'm trying to get this:
<a href="http://i.stack.imgur.com/sWntA.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/sWntA.jpg" alt="data from one dataframe thats not found in the other dataframe"></a></p>
<p>In one of those related questions mentioned above I found a multiindexing solution that supposedly works with unordered data, but I was unable to implement it. I am hoping there's an easier way. </p>
<p>let me give you an example of the data I'm working with:</p>
<pre><code>DF1
col_a col_b
1325 foo
1397 foo #<---matching value, but not matching index in DF2
1645 foo
... ...
DF2
col_1 col_2
1397 foo #<---matching value, but not matching index in DF1
1500 foo
1621 foo
... ...
</code></pre>
<p>Now if that were all the data in both dataframes the result for processing this specifically for DF1 would look like this:</p>
<pre><code>DF1_UNIQUE
col_a col_b
1325 foo
1645 foo
</code></pre>
<p>(So I'm really only caring about <code>col_a</code> or for DF2 <code>col_1</code>). Notice its missing the 1397 row. that's because it is found in DF2, so I don't want it returned to my new DF. But its not found in the same index and there in lies the problem I have. I already easily created a solution if all the matching indexes are lined up, but I don't know where to start on the indexes that aren't lined up. Can I use the merge function? Or is that the wrong tool for this job?</p>
<p>This code isn't entirely relevant but its the solution I came up with if all the indexes lined up correctly: </p>
<pre><code>def getUniqueEntries(df1, df2):
"""takes two dataframes, returns a dataframe that is comprized of all the rows unique to the first dataframe."""
d1columns = df1.columns
d2columns = df2.columns
df3 = pd.merge(df1, df2, left_on=d1columns[0], right_on=d2columns[0])
print(df3)
return df1[(~df1[d1columns[0]].isin(df3[d1columns[0]]))]
def main(fileread1, fileread2, writeprefix):
df1 = pd.read_csv(fileread1)
df2 = pd.read_csv(fileread2)
df3 = getUniqueEntries(df1, df2)
df4 = getUniqueEntries(df2, df1)
print(df3)
print(df4)
df3.to_csv(writeprefix+fileread1, index=False)
df4.to_csv(writeprefix+fileread2, index=False)
if __name__ == '__main__':
main(sys.argv[1], sys.argv[2], sys.argv[3])
</code></pre>
| 2 | 2016-08-30T21:51:27Z | 39,237,804 | <p>Here is a pandas equivalent of the SQL (Oracle) MINUS operation:</p>
<pre><code>select col1, col2 from tab1
minus
select col1, col2 from tab2
</code></pre>
<p>in Pandas:</p>
<pre><code>In [59]: df1[~df1.isin(pd.DataFrame(df2.values, columns=df1.columns).to_dict('l')).all(1)]
Out[59]:
col_a col_b
0 1325 foo
2 1645 foo
</code></pre>
<p>Explanation:</p>
<pre><code>In [60]: pd.DataFrame(df2.values, columns=df1.columns)
Out[60]:
col_a col_b
0 1397 foo
1 1500 foo
2 1621 foo
In [61]: pd.DataFrame(df2.values, columns=df1.columns).to_dict('l')
Out[61]: {'col_a': [1397, 1500, 1621], 'col_b': ['foo', 'foo', 'foo']}
In [62]: df1.isin(pd.DataFrame(df2.values, columns=df1.columns).to_dict('l'))
Out[62]:
col_a col_b
0 False True
1 True True
2 False True
In [63]: df1.isin(pd.DataFrame(df2.values, columns=df1.columns).to_dict('l')).all(1)
Out[63]:
0 False
1 True
2 False
dtype: bool
</code></pre>
| 3 | 2016-08-30T22:05:33Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
] |
Pandas getting all rows listed in one dataframe but not the other UNORDERED | 39,237,642 | <p>I cannot find an easy way to get all the rows of a data frame that are found in one dataframe but not a second dataframe if the data is unordered.</p>
<p>These two questions discuss solutions for ordered data:</p>
<blockquote>
<p><a href="http://stackoverflow.com/questions/25946391/get-rows-that-are-present-in-one-dataframe-but-not-the-other">Get rows that are present in one dataframe, but not the other</a></p>
<p><a href="http://stackoverflow.com/questions/28901683/pandas-get-rows-which-are-not-in-other-dataframe">pandas get rows which are NOT in other dataframe</a></p>
</blockquote>
<p>So just to make it clear I'm trying to get this:
<a href="http://i.stack.imgur.com/sWntA.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/sWntA.jpg" alt="data from one dataframe thats not found in the other dataframe"></a></p>
<p>In one of those related questions mentioned above I found a multiindexing solution that supposedly works with unordered data, but I was unable to implement it. I am hoping there's an easier way. </p>
<p>let me give you an example of the data I'm working with:</p>
<pre><code>DF1
col_a col_b
1325 foo
1397 foo #<---matching value, but not matching index in DF2
1645 foo
... ...
DF2
col_1 col_2
1397 foo #<---matching value, but not matching index in DF1
1500 foo
1621 foo
... ...
</code></pre>
<p>Now if that were all the data in both dataframes the result for processing this specifically for DF1 would look like this:</p>
<pre><code>DF1_UNIQUE
col_a col_b
1325 foo
1645 foo
</code></pre>
<p>(So I'm really only caring about <code>col_a</code> or for DF2 <code>col_1</code>). Notice its missing the 1397 row. that's because it is found in DF2, so I don't want it returned to my new DF. But its not found in the same index and there in lies the problem I have. I already easily created a solution if all the matching indexes are lined up, but I don't know where to start on the indexes that aren't lined up. Can I use the merge function? Or is that the wrong tool for this job?</p>
<p>This code isn't entirely relevant but its the solution I came up with if all the indexes lined up correctly: </p>
<pre><code>def getUniqueEntries(df1, df2):
"""takes two dataframes, returns a dataframe that is comprized of all the rows unique to the first dataframe."""
d1columns = df1.columns
d2columns = df2.columns
df3 = pd.merge(df1, df2, left_on=d1columns[0], right_on=d2columns[0])
print(df3)
return df1[(~df1[d1columns[0]].isin(df3[d1columns[0]]))]
def main(fileread1, fileread2, writeprefix):
df1 = pd.read_csv(fileread1)
df2 = pd.read_csv(fileread2)
df3 = getUniqueEntries(df1, df2)
df4 = getUniqueEntries(df2, df1)
print(df3)
print(df4)
df3.to_csv(writeprefix+fileread1, index=False)
df4.to_csv(writeprefix+fileread2, index=False)
if __name__ == '__main__':
main(sys.argv[1], sys.argv[2], sys.argv[3])
</code></pre>
| 2 | 2016-08-30T21:51:27Z | 39,237,805 | <p>Yes, you can use merge with the <code>indicator</code> parameter:</p>
<p>I renamed the columns to avoid duplicated column names. You can also pass <code>left_on</code> and <code>right_on</code>.</p>
<pre><code>merged = DF1.merge(DF2.rename(columns={'col_1': 'col_a', 'col_2': 'col_b'}), how='left', indicator=True)
merged
Out:
col_a col_b _merge
0 1325 foo left_only
1 1397 foo both
2 1645 foo left_only
</code></pre>
<p>Now, you can filter <code>merged</code> using the indicator column:</p>
<pre><code>merged[merged['_merge']=='left_only']
Out:
col_a col_b _merge
0 1325 foo left_only
2 1645 foo left_only
</code></pre>
| 3 | 2016-08-30T22:05:36Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
] |
Generate dummies if string value matches any row in particular column using pandas.dataframe.str.contains() | 39,237,690 | <p>If a string in the "Language_cat" list matches any row in the "Languages" column of the df_dat dataframe, then generate a dummy for that value in the corresponding row:</p>
<pre><code>Language_cat = ['english','french','deutsch','italian','russian','spanish']
for j in Language_cat:
df_dat[j+'lang_cat'] = df_dat['Languages'].apply(lambda x: 1 if df_dat.Languages.str.contains(j) else 0)
</code></pre>
<p>However, this is the error </p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
| -3 | 2016-08-30T21:56:00Z | 39,252,828 | <pre><code>Language_cat = ['english','french','deutsch','italian','russian','spanish']
for j in Language_cat:
df_dat[j+'lang_cat'] = df_dat['Languages'].str.contains(j).astype(int)
</code></pre>
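<p>For example, on a toy frame (a sketch with made-up data):</p>

```python
import pandas as pd

df_dat = pd.DataFrame({'Languages': ['english, french', 'deutsch', 'spanish, english']})

Language_cat = ['english', 'french', 'deutsch', 'italian', 'russian', 'spanish']
for j in Language_cat:
    # str.contains gives a boolean Series; astype(int) turns it into a 0/1 dummy
    df_dat[j + 'lang_cat'] = df_dat['Languages'].str.contains(j).astype(int)

print(df_dat['englishlang_cat'].tolist())  # [1, 0, 1]
```

<p>The point is that <code>str.contains</code> is vectorized over the whole column, so the <code>apply</code> with a per-row lambda (which caused the ambiguous-truth-value error) is not needed at all.</p>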
| 0 | 2016-08-31T14:50:03Z | [
"python",
"string",
"pandas",
"dataframe"
] |
How to stop writing a blank line at the end of csv file - pandas | 39,237,755 | <p>When saving the data to csv, <code>data.to_csv('csv_data', sep=',', encoding='utf-8', header= False, index = False)</code>, it creates a blank line at the end of csv file. </p>
<p>How do you avoid that? </p>
<p>It has to do with the <code>line_terminator</code>, whose default value is <code>\n</code>, for a new line.</p>
<p>Is there a way to specify the <code>line_terminator</code> to avoid creating a blank line at the end, or do i need to read the csv file, remove the blank line and save it?</p>
<p>Not familiar with pandas. Your help will be appreciated, thanks in advance!</p>
| 0 | 2016-08-30T22:00:48Z | 39,243,675 | <p>One way would be to save the data except the last entry, with the default <code>line_terminator</code> (<code>\n</code>), and append the last line with <code>line_terminator=""</code>.</p>
<pre><code>data1 = data.iloc[0:len(data)-1]
data2 = data.iloc[[len(data)-1]]
data1.to_csv('csv_data', sep=',', encoding='utf-8', header= False, index = False)
data2.to_csv('csv_data', sep=',', encoding='utf-8', header= False, index = False,mode='a',line_terminator="")
</code></pre>
| 0 | 2016-08-31T07:47:54Z | [
"python",
"csv",
"pandas"
] |
How to escape @ in a password in pymongo connection? | 39,237,813 | <p>My question is a specification of <a href="http://stackoverflow.com/questions/29508162/how-can-i-validate-username-password-for-mongodb-authentication-through-pymongo">how can i validate username password for mongodb authentication through pymongo?</a>.</p>
<p>I'm trying to connect to a MongoDB instance using PyMongo 3.2.2 and a URL that contains the user and password, as explained in <a href="https://docs.mongodb.com/manual/reference/connection-string/#standard-connection-string-format" rel="nofollow">MongoDB Docs</a>. The difference is that the password I'm using contains a '@'.</p>
<p>At first I simply tried to connect without escaping, like this:</p>
<blockquote>
<p>prefix = 'mongodb://'</p>
<p>user = 'user:passw_with_@_'</p>
<p>suffix = '@127.0.0.1:27001/'</p>
<p>conn = pymongo.MongoClient(prefix + user + suffix)</p>
</blockquote>
<p>Naturally I got the following error: </p>
<pre><code>InvalidURI: ':' or '@' characters in a username or password must be escaped according to RFC 2396.
</code></pre>
<p>So I tried escaping the user:pass part using <a href="https://docs.python.org/2/library/urllib.html?highlight=urllib#urllib.quote" rel="nofollow">urllib.quote()</a> like this:</p>
<blockquote>
<p>prefix = 'mongodb://'</p>
<p>user = urllib.quote('user:passw_with_@_')</p>
<p>suffix = '@127.0.0.1:27001/'</p>
<p>conn = pymongo.MongoClient(prefix + user + suffix)</p>
</blockquote>
<p>but then I got a: </p>
<pre><code>OperationFailure: Authentication failed.
</code></pre>
<p>(Important to say that using a GUI MongoDB Management Tool (<a href="https://robomongo.org/" rel="nofollow">Robomongo</a>, if that matters) I'm able to connect to the MongoDB using the (real) address and credentials.)</p>
<p>Printing user variable in the code above generated a <code>'user:passw_with_%40_'</code> String (that is '@' became '%40') and according to <a href="https://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_reserved_characters" rel="nofollow">wikipedia</a> that's the expected escaping.</p>
<p>I even tried escaping the @ with single and double backslashes (<code>user = 'user:passw_with_\\@_'</code> and <code>user = 'user:passw_with_\@_'</code>), but those failed with the InvalidURI exception.</p>
<p><strong>TL;DR;</strong></p>
<p>My question is: How do I escape a '@' in the password part of a MongoDB URL?</p>
| 0 | 2016-08-30T22:06:13Z | 39,283,299 | <p>You should be able to escape the password using <code>urllib.quote()</code>. Although you should only quote/escape the password, and exclude the <code>username:</code> ;
otherwise the <code>:</code> will also be escaped into <code>%3A</code>. </p>
<p>For example: </p>
<pre><code>import pymongo
import urllib
mongo_uri = "mongodb://username:" + urllib.quote("p@ssword") + "@127.0.0.1:27001/"
client = pymongo.MongoClient(mongo_uri)
</code></pre>
<p>The above snippet was tested for MongoDB v3.2.x, Python v2.7, and <a href="https://api.mongodb.com/python/current/" rel="nofollow">PyMongo</a> v3.2.2. </p>
<p>The example above assumed in the <a href="https://docs.mongodb.com/manual/reference/connection-string/" rel="nofollow">MongoDB URI connection string</a>: </p>
<ul>
<li>The user is created in the <code>admin</code> database. </li>
<li>The host <code>mongod</code> running on is 127.0.0.1 (localhost) </li>
<li>The port <code>mongod</code> assigned to is 27001</li>
</ul>
| 1 | 2016-09-02T02:05:10Z | [
"python",
"mongodb",
"authentication",
"pymongo-3.x"
] |
Why recursion doesn't return value | 39,237,834 | <p>Can't understand why this function returns <code>None</code> instead of <code>filename</code></p>
<pre><code>import os
def existence_of_file():
filename = str(input("Give me the name: "))
if os.path.exists(filename):
print("This file is already exists")
existence_of_file()
else:
print(filename)
return filename
a = existence_of_file()
print(a)
</code></pre>
<p>Output:</p>
<pre><code>Give me the name: app.py
This file is already exists
Give me the name: 3.txt
3.txt
None
</code></pre>
| -1 | 2016-08-30T22:08:03Z | 39,237,973 | <p>You must actually return the return value when calling the function again for recursion. This way, once you stop calling, you return correctly. It's returning <code>None</code> because the call returned nothing. Take a look at this cycle:</p>
<pre><code>Asks for file -> Wrong one given -> Call again -> Right one given -> Stop recursion
</code></pre>
<p>Once you stop the recursion, the <code>filename</code> is returned to the line where you recursively called the function, but nothing is done with it, so your function returns <code>None</code>. You must add a return. Change the recursive call to:</p>
<pre><code>return existence_of_file()
</code></pre>
<p>This will yield:</p>
<pre><code>>>>
Give me the name: bob.txt
This file is already exists
Give me the name: test.txt
test.txt
test.txt
</code></pre>
<p>Here's the full code:</p>
<pre><code>import os
def existence_of_file():
filename = str(input("Give me the name: "))
if os.path.exists(filename):
print("This file is already exists")
return existence_of_file()
else:
print(filename)
return filename
a = existence_of_file()
print(a)
</code></pre>
| 1 | 2016-08-30T22:21:45Z | [
"python"
] |
Put a 1D array into a 2D array | 39,237,848 | <p>I have a 2D array <code>x</code> into which I want to copy the contents of a 1D array <code>y</code>:</p>
<pre><code>import numpy as np
x = np.array([[1, 2], [4, 5], [3, 3]], np.int32)
y = np.array([1, 2, 3, 4, 5, 6])
x[:,:] = y # i would like x to be [[1, 2], [3, 4], [5, 6]]
</code></pre>
<blockquote>
<p>ValueError: could not broadcast input array from shape (6) into shape (3,2) </p>
</blockquote>
<p>How to do this ?</p>
| 0 | 2016-08-30T22:08:58Z | 39,237,866 | <p>You have to convert the <code>y</code> to an array with a shape like <code>x</code>:</p>
<pre><code>>>> x = y.reshape(x.shape)
>>> x
array([[1, 2],
[3, 4],
[5, 6]])
</code></pre>
<p>But note that <code>y</code> should be reshaped to <code>x</code>'s shape.</p>
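<p>If the goal is to fill the existing <code>x</code> in place (as <code>x[:,:] = y</code> attempted) rather than rebind the name, the reshaped <code>y</code> can be assigned into the slice. A small sketch:</p>

```python
import numpy as np

x = np.array([[1, 2], [4, 5], [3, 3]], np.int32)
y = np.array([1, 2, 3, 4, 5, 6])

x[:, :] = y.reshape(x.shape)  # writes into the existing array instead of creating a new one
print(x.tolist())  # [[1, 2], [3, 4], [5, 6]]
```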
| 1 | 2016-08-30T22:11:21Z | [
"python",
"arrays",
"numpy"
] |
Odoo XML RPC search and insert | 39,237,883 | <p>I want to search for customers in res.partner by name, reading the names from an xls file. If a customer exists, I assign their partner id to the sales order I'm creating over XML-RPC; otherwise I insert that partner and use the new id in the sales order. Note that the purpose is to migrate sales orders from an xls file to Odoo. For now the actual code is the following.</p>
<pre><code>import psycopg2
import psycopg2.extras
import pyexcel_xls
import pyexcel as pe
from pyexcel_xls import get_data
from datetime import datetime
import xmlrpclib
import json
url = 'http://localhost:8070'
db = 'Docker'
username = 'admin'
password = 'odoo'
#data = get_data("salesorder.xls")
#print(json.dumps(data))
records = pe.get_records(file_name="salesorder.xls")
for record in records:
print record['name']
names = record['name']
print record['location']
print record['zip']
print record['republic']
dates = record['date']
print dates
print datetime.strptime(dates,'%d/%M/%Y')
lastdat=datetime.strptime(dates,'%d/%M/%Y')
common = xmlrpclib.ServerProxy('{}/xmlrpc/2/common'.format(url))
output = common.version()
models = xmlrpclib.ServerProxy('{}/xmlrpc/2/object'.format(url))
ids = models.execute_kw(db, uid, password,
'res.partner', 'search',
['name', '=', "names"])
uid = common.authenticate(db, username, password, {})
print output
models = xmlrpclib.ServerProxy('{}/xmlrpc/2/object'.format(url))
id = models.execute_kw(db, uid, password, 'sale.order', 'create', [{
'name': names,
'validity_date':"2016-01-18"
#'payment_term_id':"1"
# 'user_id':"1"
# 'state':"sale"
}])
print id
</code></pre>
| -1 | 2016-08-30T22:13:02Z | 39,260,875 | <p>Here is a modification of your script which I think should work. I did not look into what values should be passed into the creation of a sales order. You will have to ensure you are passing the correct values. You also import a few packages that you do not use, but I left those in as I assume you plan on using them. Bottom line you need to search for your contact in the system. If you find them use the id, if not create the contact and use the new contact id for the creation of your sales order.</p>
<pre><code>import psycopg2
import psycopg2.extras
import pyexcel_xls
import pyexcel as pe
from pyexcel_xls import get_data
from datetime import datetime
import xmlrpclib
import json
url = 'http://localhost:8070'
db = 'Docker'
username = 'admin'
password = 'odoo'
models = xmlrpclib.ServerProxy('{}/xmlrpc/2/object'.format(url))
common = xmlrpclib.ServerProxy('{}/xmlrpc/2/common'.format(url))
uid = common.authenticate(db, username, password, {})
records = pe.get_records(file_name="salesorder.xls")
for record in records:
print record['name']
# DEFINE THE VALUES YOU PLAN ON PASSING INTO YOUR SALES ORDER HERE
vals = {
'name': record['name'],
'validity_date':"2016-01-18"
}
    ids = models.execute_kw(db, uid, password, 'res.partner', 'search', [[['name', '=', record['name']]]])
# IF YOU FIND THE CONTACT IN THE SYSTEM ADD THEIR ID TO YOUR VALS OBJECT
# IF YOU ARE CONFIDENT YOU DO NOT HAVE DUPLICATES OF NAMES THIS SHOULD WORK, OTHERWISE A MORE PRECISE SEARCH DOMAIN IS NECESSARY
if len(ids) > 0:
vals['partner_id'] = ids[0]
# IF YOU DONT FIND A MATCH YOU CAN CREATE A NEW PARTNER AND USE THAT ID
else:
vals['partner_id'] = models.execute_kw(db, uid, password, 'res.partner', 'create', [{ 'name': record['name'] }])
sale_order_id = models.execute_kw(db, uid, password, 'sale.order', 'create',[vals])
print sale_order_id
</code></pre>
| 0 | 2016-09-01T00:12:40Z | [
"python",
"openerp",
"xml-rpc",
"odoo-9"
] |
TypeError: argument of type 'float' is not iterable | 39,238,057 | <p>I am new to python and TensorFlow. I recently started understanding and executing TensorFlow examples, and came across this one: <a href="https://www.tensorflow.org/versions/r0.10/tutorials/wide_and_deep/index.html" rel="nofollow">https://www.tensorflow.org/versions/r0.10/tutorials/wide_and_deep/index.html</a></p>
<p>I got the error, <strong>TypeError: argument of type 'float' is not iterable</strong>, and I believe that the problem is with the following line of code: </p>
<p>df_train[LABEL_COLUMN] = (df_train['income_bracket'].apply(lambda x: '>50K' in x)).astype(int) </p>
<p>(income_bracket is the label column of the census dataset, with '>50K' being one of the possible label values, and the other label is '=<50K'. The dataset is read into df_train. The explanation provided in the documentation for the reason to do the above is, "Since the task is a binary classification problem, we'll construct a label column named "label" whose value is 1 if the income is over 50K, and 0 otherwise.")</p>
<p>If anyone could explain me what is exactly happening and how should I fix it, that'll be great. I tried using Python2.7 and Python3.4, and I don't think that the problem is with the version of the language. Also, if anyone is aware of great tutorials for someone who is new to TensorFlow and pandas, please share the links. </p>
<p>Complete program:</p>
<pre><code>import pandas as pd
import urllib
import tempfile
import tensorflow as tf
gender = tf.contrib.layers.sparse_column_with_keys(column_name="gender", keys=["female", "male"])
race = tf.contrib.layers.sparse_column_with_keys(column_name="race", keys=["Amer-Indian-Eskimo", "Asian-Pac-Islander", "Black", "Other", "White"])
education = tf.contrib.layers.sparse_column_with_hash_bucket("education", hash_bucket_size=1000)
marital_status = tf.contrib.layers.sparse_column_with_hash_bucket("marital_status", hash_bucket_size=100)
relationship = tf.contrib.layers.sparse_column_with_hash_bucket("relationship", hash_bucket_size=100)
workclass = tf.contrib.layers.sparse_column_with_hash_bucket("workclass", hash_bucket_size=100)
occupation = tf.contrib.layers.sparse_column_with_hash_bucket("occupation", hash_bucket_size=1000)
native_country = tf.contrib.layers.sparse_column_with_hash_bucket("native_country", hash_bucket_size=1000)
age = tf.contrib.layers.real_valued_column("age")
age_buckets = tf.contrib.layers.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
education_num = tf.contrib.layers.real_valued_column("education_num")
capital_gain = tf.contrib.layers.real_valued_column("capital_gain")
capital_loss = tf.contrib.layers.real_valued_column("capital_loss")
hours_per_week = tf.contrib.layers.real_valued_column("hours_per_week")
wide_columns = [gender, native_country, education, occupation, workclass, marital_status, relationship, age_buckets, tf.contrib.layers.crossed_column([education, occupation], hash_bucket_size=int(1e4)), tf.contrib.layers.crossed_column([native_country, occupation], hash_bucket_size=int(1e4)), tf.contrib.layers.crossed_column([age_buckets, race, occupation], hash_bucket_size=int(1e6))]
deep_columns = [
tf.contrib.layers.embedding_column(workclass, dimension=8),
tf.contrib.layers.embedding_column(education, dimension=8),
tf.contrib.layers.embedding_column(marital_status, dimension=8),
tf.contrib.layers.embedding_column(gender, dimension=8),
tf.contrib.layers.embedding_column(relationship, dimension=8),
tf.contrib.layers.embedding_column(race, dimension=8),
tf.contrib.layers.embedding_column(native_country, dimension=8),
tf.contrib.layers.embedding_column(occupation, dimension=8),
age, education_num, capital_gain, capital_loss, hours_per_week]
model_dir = tempfile.mkdtemp()
m = tf.contrib.learn.DNNLinearCombinedClassifier(
model_dir=model_dir,
linear_feature_columns=wide_columns,
dnn_feature_columns=deep_columns,
dnn_hidden_units=[100, 50])
COLUMNS = ["age", "workclass", "fnlwgt", "education", "education_num",
"marital_status", "occupation", "relationship", "race", "gender",
"capital_gain", "capital_loss", "hours_per_week", "native_country", "income_bracket"]
LABEL_COLUMN = 'label'
CATEGORICAL_COLUMNS = ["workclass", "education", "marital_status", "occupation", "relationship", "race", "gender", "native_country"]
CONTINUOUS_COLUMNS = ["age", "education_num", "capital_gain", "capital_loss", "hours_per_week"]
train_file = tempfile.NamedTemporaryFile()
test_file = tempfile.NamedTemporaryFile()
urllib.urlretrieve("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", train_file.name)
urllib.urlretrieve("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test", test_file.name)
df_train = pd.read_csv(train_file, names=COLUMNS, skipinitialspace=True)
df_test = pd.read_csv(test_file, names=COLUMNS, skipinitialspace=True, skiprows=1)
df_train[LABEL_COLUMN] = (df_train['income_bracket'].apply(lambda x: '>50K' in x)).astype(int)
df_test[LABEL_COLUMN] = (df_test['income_bracket'].apply(lambda x: '>50K' in x)).astype(int)
def input_fn(df):
continuous_cols = {k: tf.constant(df[k].values)
for k in CONTINUOUS_COLUMNS}
categorical_cols = {k: tf.SparseTensor(
indices=[[i, 0] for i in range(df[k].size)],
values=df[k].values,
shape=[df[k].size, 1])
for k in CATEGORICAL_COLUMNS}
feature_cols = dict(continuous_cols.items() + categorical_cols.items())
label = tf.constant(df[LABEL_COLUMN].values)
return feature_cols, label
def train_input_fn():
return input_fn(df_train)
def eval_input_fn():
return input_fn(df_test)
m.fit(input_fn=train_input_fn, steps=200)
results = m.evaluate(input_fn=eval_input_fn, steps=1)
for key in sorted(results):
print("%s: %s" % (key, results[key]))
</code></pre>
<p>Thank you</p>
<p>PS: Full stack trace for the error</p>
<pre><code>Traceback (most recent call last):
File "/home/jaspreet/PycharmProjects/TicTacTensorFlow/census.py", line 73, in <module>
df_train[LABEL_COLUMN] = (df_train['income_bracket'].apply(lambda x: '>50K' in x)).astype(int)
File "/usr/lib/python2.7/dist-packages/pandas/core/series.py", line 2023, in apply
mapped = lib.map_infer(values, f, convert=convert_dtype)
File "inference.pyx", line 920, in pandas.lib.map_infer (pandas/lib.c:44780)
File "/home/jaspreet/PycharmProjects/TicTacTensorFlow/census.py", line 73, in <lambda>
df_train[LABEL_COLUMN] = (df_train['income_bracket'].apply(lambda x: '>50K' in x)).astype(int)
TypeError: argument of type 'float' is not iterable
</code></pre>
| 3 | 2016-08-30T22:29:04Z | 39,287,583 | <p>The program works verbatim with the latest version of pandas, i.e., 0.18.1</p>
| 0 | 2016-09-02T08:19:45Z | [
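<p>In case upgrading isn't an option: this error can appear when some entries of the column come through as <code>NaN</code> (a float), so the <code>in</code> membership test hits a non-string. A hedged sketch of a defensive variant, with made-up data:</p>

```python
import pandas as pd

# a toy column where one entry is NaN (a float), as can happen with blank rows
df = pd.DataFrame({'income_bracket': ['>50K', '<=50K', float('nan')]})

# astype(str) makes every entry a string, so the 'in' test never sees a float
df['label'] = df['income_bracket'].astype(str).apply(lambda x: '>50K' in x).astype(int)
print(df['label'].tolist())  # [1, 0, 0]
```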
"python",
"pandas",
"tensorflow"
] |
Python - Errno 13 Permission denied when trying to copy files | 39,238,098 | <p>I am attempting to make a program in Python that copies the files on my flash drive (letter D:) to a folder on my hard drive but am getting a <em>PermissionError: [Errno 13] Permission denied: 'D:'</em>.</p>
<p>The problematic part of my code is as follows:</p>
<pre><code># Copy files to folder in current directory
def copy():
source = getsource()
if source != "failure":
copyfile(source, createfolder())
wait("Successfully backup up drive"
"\nPress 'Enter' to exit the program")
else:
wait("No USB drive was detected"
"\nPress 'Enter' to exit")
# Create a folder in current directory w/ date and time
def createfolder():
name = strftime("%a, %b %d, %Y, %H.%M.%S", gmtime())
dir_path = os.path.dirname(os.path.realpath(__file__))
new_folder = dir_path + "\\" + name
os.makedirs(new_folder)
return new_folder
</code></pre>
<p>Everything seems to run fine until the <em>copyfile()</em> function runs, where it returns the error.
I tried replacing <em>getsource()</em> with the destination of the file, and it returned the same permission error but for the <em>new_folder</em> directory instead.</p>
<p>I have read several other posts but none of them seem to be relevant to my case. I have full admin permissions to both locations as well.
Any help would be greatly appreciated!</p>
| 1 | 2016-08-30T22:32:44Z | 39,238,213 | <p>As I stated in my comment above, it seems as if you're trying to open the directory, <code>D:</code>, as if it was a file, and that's not going to work because it's not a file, it's a directory.</p>
<p>What you can do is use <code>os.listdir()</code> to list all of the files within your desired directory, and then use <code>shutil.copy()</code> to copy the files as you please.</p>
<p>Here is the documentation for each of those:</p>
<p><a href="https://docs.python.org/3/library/os.html#os.listdir" rel="nofollow">os.listdir()</a> (You will be passing the full file path to this function)</p>
<p><a href="https://docs.python.org/3/library/shutil.html#shutil.copy" rel="nofollow">shutil.copy()</a> (You will be passing each file to this function)</p>
<p>Essentially you would store all of the files in the directory in a variable, such as <code>all_the_files = os.listdir(/path/to/file)</code>, then loop through <code>all_the_files</code> by doing something like <code>for each_file in all_the_files:</code> and then use <code>shutil.copy()</code> to copy them as you please.</p>
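<p>Put together, that loop might look like the sketch below. The paths are placeholders; temporary directories stand in for the flash drive and the backup folder so the snippet can run anywhere:</p>

```python
import os
import shutil
import tempfile

source_dir = tempfile.mkdtemp()   # stand-in for the flash drive, e.g. 'D:\\'
dest_dir = tempfile.mkdtemp()     # stand-in for the dated backup folder

# create a couple of dummy files to copy
for name in ('a.txt', 'b.txt'):
    with open(os.path.join(source_dir, name), 'w') as f:
        f.write('data')

all_the_files = os.listdir(source_dir)
for each_file in all_the_files:
    full_path = os.path.join(source_dir, each_file)
    if os.path.isfile(full_path):      # skip sub-directories
        shutil.copy(full_path, dest_dir)

print(sorted(os.listdir(dest_dir)))  # ['a.txt', 'b.txt']
```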
| 0 | 2016-08-30T22:45:21Z | [
"python",
"copyfile"
] |
What does this python codes mean? | 39,238,108 | <p>What does this Python code mean? I'm new to Python. Thanks!</p>
<pre><code>benchmark_sets_list = [
    '%s: %s' %
    (set_name, benchmark_sets.BENCHMARK_SETS[set_name]['message'])
    for set_name in benchmark_sets.BENCHMARK_SETS]
</code></pre>
| -5 | 2016-08-30T22:33:42Z | 39,238,276 | <p>This part...</p>
<pre><code>for set_name in benchmark_sets.BENCHMARK_SETS
</code></pre>
<p>...will grab the set names from <code>benchmark_sets.BENCHMARK_SETS</code> and it will keep them one by one into a <code>set_name</code> variable.</p>
<p>After that, it will be able to know the values from this line...</p>
<pre><code>(set_name, benchmark_sets.BENCHMARK_SETS[set_name]['message'])
</code></pre>
<p>...because <code>set_name</code> will have a value. That part yields two things, <code>set_name</code> and <code>benchmark_sets.BENCHMARK_SETS[set_name]['message']</code>, and both of them will probably be strings.</p>
<p>Then, those <code>%s</code> you see in this line...</p>
<pre><code>'%s: %s' %
</code></pre>
<p>...will be replaced by the value of <code>set_name</code> and <code>benchmark_sets.BENCHMARK_SETS[set_name]['message']</code> respectively. That will generate a string like this one: "foo: bar", where "foo" is the value of <code>set_name</code> and "bar" is the value of <code>benchmark_sets.BENCHMARK_SETS[set_name]['message']</code>.</p>
<p>In order for you to understand what happened there, this is a simple example:</p>
<p><code>"%s %s %s" % (first_elem, second_elem, third_elem)</code></p>
<p>That code will replace the first <code>%s</code> with the value of <code>first_elem</code>, the second <code>%s</code> with the value of <code>second_elem</code>, and the third <code>%s</code> with the value of <code>third_elem</code>.</p>
<p>And finally that string will be added to the list which is being constructed. So, at the end you will have a list more or less like this one:</p>
<pre><code>["foo: bar", "wop: wap", "bing: bang"]
</code></pre>
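<p>Putting the whole thing together with a made-up stand-in for <code>benchmark_sets.BENCHMARK_SETS</code> (the real dictionary lives in the <code>benchmark_sets</code> module; the set names and messages below are invented):</p>

```python
# Stand-in for benchmark_sets.BENCHMARK_SETS: set names mapped to info dicts
BENCHMARK_SETS = {
    'standard_set': {'message': 'The default benchmarks.'},
    'netperf_set': {'message': 'Network benchmarks only.'},
}

# Iterating over a dict yields its keys, so each set_name is a key string
benchmark_sets_list = [
    '%s: %s' % (set_name, BENCHMARK_SETS[set_name]['message'])
    for set_name in BENCHMARK_SETS]

print(sorted(benchmark_sets_list))
# -> ['netperf_set: Network benchmarks only.', 'standard_set: The default benchmarks.']
```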
| 1 | 2016-08-30T22:53:06Z | [
"python"
] |
Python Tkinter Save Listbox Item Ordering to Sqlite Database | 39,238,138 | <p>I have a drag-and-drop listbox (<a href="https://www.safaribooksonline.com/library/view/python-cookbook-2nd/0596007973/ch11s05.html" rel="nofollow">found here</a>) that lets the user drag/move items up or down to re-order them.</p>
<p>I have a <code>db.sqlite</code> file that looks like this initially:</p>
<pre><code>>>> make_df()
    ordering          fruit
0          0         apples
1          1        oranges
2          2    blueberries
3          3     watermelon
4          4     cantaloupe
5          5          pears
6          6    pomegranate
7          7    raspberries
8          8   blackberries
9          9  boysenberries
10        10     nectarines
</code></pre>
<p>What I am trying to do is reorder items around in the listbox and have that order saved to the database so when I re-launch the program, the items are in the same order as the last time I used it.</p>
<p>The problem:
Every time I relaunch the program, the listbox items come back in the same original order. However, when I reorder the items and then run <code>make_df</code>, which reads the ordering from the database and returns a dataframe, it shows that the database did update during that session. <strong>It's just not saving it permanently and that's what I can't figure out.</strong></p>
<p><a href="http://i.stack.imgur.com/NNUP4.gif" rel="nofollow"><img src="http://i.stack.imgur.com/NNUP4.gif" alt="Example"></a></p>
<pre><code>import tkinter as tk
import pandas as pd
import sqlite3

root = tk.Tk()

def make_sqlite_db_for_stackoverflow():
    connection = sqlite3.connect('db.sqlite')
    db_file_name = 'db.sqlite'
    df = pd.DataFrame(
        [(0, 'apples'), (1, 'oranges'), (2, 'blueberries'), (3, 'watermelon'),
         (4, 'cantaloupe'), (5, 'pears'), (6, 'pomegranate'), (7, 'raspberries'),
         (8, 'blackberries'), (9, 'boysenberries'), (10, 'nectarines')],
        columns=['ordering', 'fruit']
    )
    df.to_sql('columns', connection, index=False, if_exists='replace')
    connection.close()
make_sqlite_db_for_stackoverflow()

def make_df():
    print(pd.DataFrame(sqlite3.connect('db.sqlite').cursor().execute(
        'SELECT * FROM columns ORDER BY "ordering";').fetchall(), columns=['ordering', 'fruit']))

class Drag_and_Drop_Listbox(tk.Listbox):
    """ A tk listbox with drag'n'drop reordering of entries. """
    def __init__(self, master, **kw):
        kw['selectmode'] = tk.EXTENDED
        tk.Listbox.__init__(self, master, kw)
        self.bind('<Button-1>', self.setCurrent)
        self.bind('<B1-Motion>', self.shiftSelection)
        self.curIndex = None
    def setCurrent(self, event):
        self.curIndex = self.nearest(event.y)
    def shiftSelection(self, event):
        i = self.nearest(event.y)
        if i < self.curIndex:
            x = self.get(i)
            self.delete(i)
            self.insert(i+1, x)
            self.curIndex = i
        elif i > self.curIndex:
            x = self.get(i)
            self.delete(i)
            self.insert(i-1, x)
            self.curIndex = i

def update_ordering(*args):
    connect = sqlite3.connect('db.sqlite')
    cursor = connect.cursor()
    field_ordering = [(order, fruit) for order, fruit in enumerate(ddlistbox.get(0, 'end'))]
    print(field_ordering)
    for field in field_ordering:
        cursor.execute("UPDATE columns SET 'ordering'="+str(field[0])+" WHERE fruit='"+field[1]+"';")
    connect.commit()
    connect.close()
    print(ddlistbox.curselection())

scrollbar = tk.Scrollbar(root, orient="vertical")
ddlistbox = Drag_and_Drop_Listbox(root, yscrollcommand=scrollbar.set)
scrollbar.grid(row=0, column=1, sticky='ns')
scrollbar.config(command=ddlistbox.yview)

def get_sqlite_report_fields():
    conn = sqlite3.connect('db.sqlite')
    conn.row_factory = lambda cursor, row: row[0]
    cursor = conn.cursor()
    fetch = cursor.execute("SELECT fruit FROM columns ORDER BY 'ordering';").fetchall()
    conn.close()
    return fetch

for fruit in get_sqlite_report_fields():
    ddlistbox.insert(0, fruit)
ddlistbox.config(width=30)
ddlistbox.bind('<Double-Button-1>', func=update_ordering)
ddlistbox.grid(row=0, column=0)

button = tk.Button(root, text='Check', command=update_ordering)
button.grid(row=1, column=0)

root.mainloop()
</code></pre>
| 0 | 2016-08-30T22:36:41Z | 39,241,178 | <p>Highly doubt anyone will answer my question so I'm posting my solution. It uses a drag-and-drop listbox to allow the user to arrange items in the listbox and saves their item ordering to a database for next time.</p>
<pre><code>import tkinter as tk
import pandas as pd
import sqlite3

root = tk.Tk()

class Drag_and_Drop_Listbox(tk.Listbox):
    """ A tk listbox with drag'n'drop reordering of entries. """
    def __init__(self, master, **kw):
        kw['selectmode'] = tk.EXTENDED
        tk.Listbox.__init__(self, master, kw)
        self.bind('<Button-1>', self.setCurrent)
        self.bind('<B1-Motion>', self.shiftSelection)
        self.curIndex = None
    def setCurrent(self, event):
        self.curIndex = self.nearest(event.y)
    def shiftSelection(self, event):
        i = self.nearest(event.y)
        if i < self.curIndex:
            x = self.get(i)
            self.delete(i)
            self.insert(i+1, x)
            self.curIndex = i
        elif i > self.curIndex:
            x = self.get(i)
            self.delete(i)
            self.insert(i-1, x)
            self.curIndex = i

def update_ordering(*args):
    connect = sqlite3.connect('db.sqlite')
    cursor = connect.cursor()
    field_ordering = [(order, fruit) for order, fruit in enumerate(ddlistbox.get(0, 'end'))]
    print(field_ordering)
    for field in field_ordering:
        cursor.execute("UPDATE columns SET 'ordering'="+str(field[0])+" WHERE fruit='"+field[1]+"';")
    connect.commit()
    connect.close()
    print(ddlistbox.curselection())

scrollbar = tk.Scrollbar(root, orient="vertical")
ddlistbox = Drag_and_Drop_Listbox(root, yscrollcommand=scrollbar.set, activestyle='none')
scrollbar.grid(row=0, column=1, sticky='ns')
scrollbar.config(command=ddlistbox.yview)

conn = sqlite3.connect('db.sqlite')
conn.row_factory = lambda cursor, row: row[0]
cursor = conn.cursor()
fetch = cursor.execute("SELECT fruit FROM columns ORDER BY ordering ASC").fetchall()
for field in fetch:
    ddlistbox.insert(tk.END, field)
ddlistbox.config(width=30)
ddlistbox.grid(row=0, column=0)

button = tk.Button(root, text='Check', command=update_ordering)
button.grid(row=1, column=0)

root.mainloop()
</code></pre>
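<p>One caveat with both versions above: building the <code>UPDATE</code> statement by string concatenation breaks if a value ever contains a quote, and it is the classic SQL-injection pattern. A hedged sketch of the same update using sqlite3 placeholders instead, with the question's table and column names, demonstrated against an in-memory database:</p>

```python
import sqlite3

# In-memory stand-in for db.sqlite
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE columns (ordering INTEGER, fruit TEXT)")
conn.executemany("INSERT INTO columns VALUES (?, ?)",
                 [(0, 'apples'), (1, 'pears'), (2, 'oranges')])

def save_ordering(connection, items):
    """items is the listbox contents in display order, e.g. ddlistbox.get(0, 'end')."""
    with connection:  # commits on success, rolls back on error
        connection.executemany(
            "UPDATE columns SET ordering = ? WHERE fruit = ?",
            list(enumerate(items)))

save_ordering(conn, ['oranges', 'apples', 'pears'])
print(conn.execute("SELECT fruit FROM columns ORDER BY ordering").fetchall())
# -> [('oranges',), ('apples',), ('pears',)]
```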
| 0 | 2016-08-31T05:08:41Z | [
"python",
"tkinter",
"sqlite3",
"drag-and-drop",
"listbox"
] |
Upload data from csv to series in python | 39,238,201 | <p>I have data in a column of a CSV file and want to load it into a series in Python so that it can be displayed like:</p>
<pre><code>series=[12,13,12,22,34,21]
</code></pre>
<p>How can it be done? Should I do it via read_csv and then somehow transform it to a series, or is there another way?</p>
<p>Thanks!</p>
| 0 | 2016-08-30T22:44:09Z | 39,238,254 | <p>Try <code>pandas.Series(data=None, index=None, dtype=None, name=None, copy=False, fastpath=False)</code> after reading your data as a data frame with <code>read_csv</code>. Pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html" rel="nofollow">documentation</a></p>
<p>Another possibility is to use the <code>iloc</code> function when you deal with a row, not a column. See here: <a href="http://stackoverflow.com/questions/33246771/convert-pandas-data-frame-to-series">Convert pandas data frame to series</a></p>
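<p>If the file really is a single column and pulling in pandas feels heavy, the standard-library <code>csv</code> module gets you the same list directly (assuming no header row; <code>io.StringIO</code> stands in for the open file here):</p>

```python
import csv
import io

data = io.StringIO("12\n13\n12\n22\n34\n21\n")  # stand-in for open('data.csv')
series = [int(row[0]) for row in csv.reader(data) if row]
print(series)  # -> [12, 13, 12, 22, 34, 21]
```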
| 0 | 2016-08-30T22:50:04Z | [
"python",
"csv",
"pandas",
"series"
] |
Upload data from csv to series in python | 39,238,201 | <p>I have data in a column of a CSV file and want to load it into a series in Python so that it can be displayed like:</p>
<pre><code>series=[12,13,12,22,34,21]
</code></pre>
<p>How can it be done? Should I do it via read_csv and then somehow transform it to a series, or is there another way?</p>
<p>Thanks!</p>
| 0 | 2016-08-30T22:44:09Z | 39,238,261 | <p>IIUC and you are talking about pandas - you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">squeeze</a> parameter:</p>
<pre><code>In [86]: import pandas as pd
In [87]: s = pd.read_csv('d:/temp/.data/1.csv', header=None, squeeze=True)
In [88]: s
Out[88]:
0    12
1    13
2    12
3    22
4    34
5    21
Name: 0, dtype: int64
</code></pre>
<p>as a vanilla Python list:</p>
<pre><code>In [89]: s.tolist()
Out[89]: [12, 13, 12, 22, 34, 21]
</code></pre>
<p>from <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">docs</a>:</p>
<blockquote>
<p><strong>squeeze</strong> : boolean, default False</p>
<p>If the parsed data only contains one column then return a Series</p>
</blockquote>
| 1 | 2016-08-30T22:50:36Z | [
"python",
"csv",
"pandas",
"series"
] |
python windows interpret the values from for loop to cmd command | 39,238,234 | <p>Here is a code </p>
<pre><code>from subprocess import Popen, PIPE
import sys

saveerr = sys.stderr
fsock = open('error.log', 'w')
sys.stderr = sys.stdout = fsock

D = {}
D['\\\\aucb-net-01\\d$'] = '\\\\nasaudc01\\remote_site_sync\\aucb-net-01'
D['\\\\aupw-file-01\\e$'] = '\\\\nasaudc01\\remote_site_sync\\aupw-file-01'

for k,v in sorted(D.items()):
    print (k,":",v)
    cmd = 'robocopy {} {} /E /MIR /W:2 /R:1'.format(k,v)
    p = Popen(cmd, stdout=PIPE, bufsize=1, universal_newlines=True)
    for line in p.stdout:
        print(line)
</code></pre>
<p>I would like to insert the values of "k" and "v" into <code>cmd</code> after the robocopy command, so that within the for loop it performs robocopy for every source and destination mentioned in the dictionary D = {}.</p>
<p>I would also like the script to check for failures in the robocopy output logged to the error.log file:</p>
<pre><code>                Total    Copied   Skipped  Mismatch    FAILED    Extras
     Dirs :      2575         0      2575         0         0         0
    Files :      6039         0      6039         0         2         0
    Bytes :   1.547 g         0   1.547 g         0         0         0
    Times :   0:00:53   0:00:00                       0:00:00   0:00:53

    Ended : Tue Aug 30 04:32:48 2016
</code></pre>
<p>If two files have failed, as in the log above, then the script should send a mail to some email address.</p>
| 0 | 2016-08-30T22:47:51Z | 39,238,830 | <p>The way in Python to do it would be:</p>
<pre><code>cmd = 'robocopy {} {} /E /MIR /W:2 /R:1'.format(k, v)
</code></pre>
<p>or, in Python 3.6:</p>
<pre><code>cmd = f'robocopy {k} {v} /E /MIR /W:2 /R:1'
</code></pre>
<p><strong>However, this is wrong!</strong> It will fail if <code>k</code> and <code>v</code> have spaces, and it can be a <a href="https://docs.python.org/3/library/subprocess.html#security-considerations" rel="nofollow">security hazard</a> (imagine if <code>k = '; rm -rf /;'</code>). The right way to spawn subprocesses is with this:</p>
<pre><code>cmd = ['robocopy', k, v, '/E', '/MIR', '/W:2', '/R:1']
</code></pre>
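<p>For the failure-check part of the question: capture robocopy's output, as the original loop already does, and scan the summary block after each run. A sketch (the column order <code>Total Copied Skipped Mismatch FAILED Extras</code> is taken from the log excerpt in the question):</p>

```python
import re

def failed_files(log_text):
    """Return the FAILED count from robocopy's 'Files :' summary line, or None if absent."""
    match = re.search(r'^\s*Files\s*:\s*(.*)$', log_text, re.MULTILINE)
    if match is None:
        return None
    # Columns: Total Copied Skipped Mismatch FAILED Extras
    return int(match.group(1).split()[4])

sample = """
    Dirs :      2575         0      2575         0         0         0
   Files :      6039         0      6039         0         2         0
"""
print(failed_files(sample))  # -> 2
```

<p>If the count comes back non-zero, hand the report to <code>smtplib</code> (or whatever mail gateway you use) to send the alert.</p>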
| 0 | 2016-08-31T00:05:12Z | [
"python",
"windows",
"robocopy"
] |
Sending parameters to a remote server via python script | 39,238,298 | <p>I'm learning python these days, and I have a rather basic question. </p>
<p>There's a remote server where a webserver is listening in on incoming traffic. If I enter a uri like the following in my browser, it performs certain processing on some files for me: </p>
<pre><code>//223.58.1.10:8000/processVideo?container=videos&video=0eb8c45a-3238-4e2b-a302-b89f97ef0554.mp4
</code></pre>
<p>My rather basic question is: how can I achieve sending the above dynamic parameters (i.e. container = container_name and video = video_name) to <strong>223.58.1.10:8000/processVideo</strong> via a python script?</p>
<p>I've seen ways to <a href="http://stackoverflow.com/questions/2953462/pinging-servers-in-python">ping an IP</a>, but my requirements are more than that. Guidance from experts will be very helpful. Thanks!</p>
| 0 | 2016-08-30T22:55:52Z | 39,238,320 | <pre><code>import requests
requests.get("http://223.58.1.10:8000/processVideo", params={"video": "123asd", "container": "videos"})
</code></pre>
<p>Note <code>params=</code> rather than <code>data=</code>: requests URL-encodes the dict into the query string for you, whereas <code>data</code> would be sent as the request body.</p>
<p>I'm sure you could do the same with just urllib and/or urllib2:</p>
<pre><code>import urllib
# you should probably use urllib.urlencode here ...
some_url = "http://223.58.1.10:8000/processVideo?video=%s&container=%s" % (video_name, container_name)
urllib.urlopen(some_url).read()
</code></pre>
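<p>Either way, it is safer to let urllib build (and percent-encode) the query string for you. With Python 3's <code>urllib.parse</code> (in Python 2 the same function is <code>urllib.urlencode</code>):</p>

```python
from urllib.parse import urlencode

params = {'container': 'videos',
          'video': '0eb8c45a-3238-4e2b-a302-b89f97ef0554.mp4'}
url = 'http://223.58.1.10:8000/processVideo?' + urlencode(params)
print(url)
# then: urllib.request.urlopen(url).read()
```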
| 0 | 2016-08-30T22:58:35Z | [
"python",
"linux"
] |
python array inclusive slice index | 39,238,340 | <p>I wish to write a for loop that prints the tail of an array, <strong>including the whole array</strong> (denoted by i==0). Is this task possible without branching logic, i.e. an <code>if i==0</code> statement inside the loop?</p>
<p>This would be possible if there is a syntax for slicing with inclusive end index. </p>
<pre><code>arr=[0,1,2,3,4,5]
for i in range(0,3):
    print arr[:-i]
</code></pre>
<p>output: </p>
<pre><code>[]
[0, 1, 2, 3, 4]
[0, 1, 2, 3]
</code></pre>
<p>wanted output: </p>
<pre><code>[0, 1, 2, 3, 4, 5]
[0, 1, 2, 3, 4]
[0, 1, 2, 3]
</code></pre>
| 1 | 2016-08-30T23:00:43Z | 39,238,371 | <p>You can use <code>None</code>:</p>
<pre><code>arr=[0,1,2,3,4,5]
for i in range(0,3):
    print arr[:-i or None]
</code></pre>
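<p>This works because <code>-0 == 0</code>, which is falsy, so for <code>i == 0</code> the expression <code>-i or None</code> evaluates to <code>None</code>, and a stop of <code>None</code> slices through to the end. A quick check (using a list comprehension instead of prints so the results are easy to inspect):</p>

```python
arr = [0, 1, 2, 3, 4, 5]
tails = [arr[:-i or None] for i in range(0, 3)]
print(tails)  # -> [[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4], [0, 1, 2, 3]]
```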
| 8 | 2016-08-30T23:04:14Z | [
"python",
"arrays",
"numpy"
] |
python array inclusive slice index | 39,238,340 | <p>I wish to write a for loop that prints the tail of an array, <strong>including the whole array</strong> (denoted by i==0). Is this task possible without branching logic, i.e. an <code>if i==0</code> statement inside the loop?</p>
<p>This would be possible if there is a syntax for slicing with inclusive end index. </p>
<pre><code>arr=[0,1,2,3,4,5]
for i in range(0,3):
    print arr[:-i]
</code></pre>
<p>output: </p>
<pre><code>[]
[0, 1, 2, 3, 4]
[0, 1, 2, 3]
</code></pre>
<p>wanted output: </p>
<pre><code>[0, 1, 2, 3, 4, 5]
[0, 1, 2, 3, 4]
[0, 1, 2, 3]
</code></pre>
| 1 | 2016-08-30T23:00:43Z | 39,238,375 | <pre><code>for i in xrange(0, 3):
print arr[:(len(arr) - i)]
# Output
# [0, 1, 2, 3, 4, 5]
# [0, 1, 2, 3, 4]
# [0, 1, 2, 3]
</code></pre>
| 5 | 2016-08-30T23:05:06Z | [
"python",
"arrays",
"numpy"
] |
python array inclusive slice index | 39,238,340 | <p>I wish to write a for loop that prints the tail of an array, <strong>including the whole array</strong> (denoted by i==0). Is this task possible without branching logic, i.e. an <code>if i==0</code> statement inside the loop?</p>
<p>This would be possible if there is a syntax for slicing with inclusive end index. </p>
<pre><code>arr=[0,1,2,3,4,5]
for i in range(0,3):
    print arr[:-i]
</code></pre>
<p>output: </p>
<pre><code>[]
[0, 1, 2, 3, 4]
[0, 1, 2, 3]
</code></pre>
<p>wanted output: </p>
<pre><code>[0, 1, 2, 3, 4, 5]
[0, 1, 2, 3, 4]
[0, 1, 2, 3]
</code></pre>
| 1 | 2016-08-30T23:00:43Z | 39,238,376 | <p>Had a brain freeze, but this would do:</p>
<pre><code>arr=[0,1,2,3,4,5]
for i in range(0,3):
print arr[:len(arr)-i]
</code></pre>
<p>Although I still wish python had nicer syntax to do these kind of simple tasks. </p>
| 1 | 2016-08-30T23:05:32Z | [
"python",
"arrays",
"numpy"
] |
python array inclusive slice index | 39,238,340 | <p>I wish to write a for loop that prints the tail of an array, <strong>including the whole array</strong> (denoted by i==0). Is this task possible without branching logic, i.e. an <code>if i==0</code> statement inside the loop?</p>
<p>This would be possible if there is a syntax for slicing with inclusive end index. </p>
<pre><code>arr=[0,1,2,3,4,5]
for i in range(0,3):
    print arr[:-i]
</code></pre>
<p>output: </p>
<pre><code>[]
[0, 1, 2, 3, 4]
[0, 1, 2, 3]
</code></pre>
<p>wanted output: </p>
<pre><code>[0, 1, 2, 3, 4, 5]
[0, 1, 2, 3, 4]
[0, 1, 2, 3]
</code></pre>
| 1 | 2016-08-30T23:00:43Z | 39,238,488 | <p>The awkwardness arises from trying to take advantage of the <code>:-n</code> syntax. Since there's no difference between <code>:0</code> and <code>:-0</code>, we can't use this syntax to distinguish <code>0 from the start</code> from <code>0 from the end</code>.</p>
<p>In this case it is clearer to use the <code>:i</code> indexing, and adjust the <code>range</code> accordingly:</p>
<pre><code>In [307]: for i in range(len(arr),0,-1): print(arr[:i])
[0, 1, 2, 3, 4, 5]
[0, 1, 2, 3, 4]
[0, 1, 2, 3]
[0, 1, 2]
[0, 1]
[0]
</code></pre>
<p><code>arr[:None]</code> means <code>0 from the end</code>, and usually is sufficient.</p>
<p>Keep in mind the <code>arr[:i]</code> translates to <code>arr.__getitem__(slice(None,i,None))</code>. The <code>:</code> syntax is short hand for a <code>slice</code> object. And the <code>slice</code> signature is:</p>
<pre><code>slice(stop)
slice(start, stop[, step])
</code></pre>
<p><code>slice</code> itself is pretty simple. It's the <code>__getitem__</code> method that gives it meaning. You could, in theory, subclass <code>list</code>, and give it a custom <code>__getitem__</code> that treats <code>slice(None,0,None)</code> as <code>slice(None,None,None)</code>.</p>
| 2 | 2016-08-30T23:19:18Z | [
"python",
"arrays",
"numpy"
] |
Create an RDD with more elements than its source | 39,238,384 | <p>I have an RDD called <code>codes</code>, which is a pair, that has a string as its 1st half and another pair as its 2nd half:</p>
<pre><code>In [76]: codes.collect()
Out[76]:
[(u'3362336966', (6208, 5320)),
(u'7889466042', (4140, 5268))]
</code></pre>
<p>and I am trying to get this:</p>
<pre><code>In [76]: codes.collect()
Out[76]:
[(u'3362336966', 6208),
(u'3362336966', 5320),
(u'7889466042', 4140),
(u'7889466042', 5268)]
</code></pre>
<p>How to do this?</p>
<hr>
<p>My failed attempt:</p>
<pre><code>In [77]: codes_in = codes.map(lambda x: (x[0], x[1][0]), (x[0], x[1][1]))
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-77-e1c7925bc075> in <module>()
----> 1 codes_in = codes.map(lambda x: (x[0], x[1][0]), (x[0], x[1][1]))
NameError: name 'x' is not defined
</code></pre>
| 0 | 2016-08-30T23:07:03Z | 39,238,566 | <p>I think what you want is the following:</p>
<pre><code>codes_in = codes.map(lambda x: [(x[0], p) for p in x[1]]).flatMap(lambda x: x)
</code></pre>
<p>If it is Python 2, for legibility you could use tuple-parameter unpacking (a syntax that was removed in Python 3):</p>

<pre><code>codes_in = codes.map(lambda (k, vs): [(k, v) for v in vs]).flatMap(lambda x: x)
</code></pre>
<p>This way you can "extract" each value associated with the key, so that every row ends up as a record of the form <code>(k, v)</code>. Note that <code>codes.flatMapValues(lambda x: x)</code> achieves the same result in a single step.</p>
| 1 | 2016-08-30T23:28:14Z | [
"python",
"apache-spark",
"bigdata",
"distributed-computing",
"rdd"
] |
BeautifulSoup Add new tag inside a child DIV of a parent DIV | 39,238,400 | <p>I am a complete beginner with BeautifulSoup and I am now trying to insert a new tag into a children div of a parent div.</p>
<p>Basically I have this HTML snippet:</p>
<pre><code><div class=page-content>
    <div class="content-block">
        //Insert here!
    </div>
</div>
</code></pre>
<p>Here is my current code:</p>
<pre><code> soup = BeautifulSoup(open("index.html"), "lxml")
div_page_content = soup.find("div", { "class" : "page-content" })
content_block = div_page_content.findChildren()
button_active = soup.new_tag('a')
button_active.attrs['class'] = 'button active'
button_active.append('This is a new button!')
content_block.append(button_active)
print content_block
</code></pre>
<p>I can fetch the page-content and its child content-block DIV, but the append call doesn't change the soup; this is the output that I get:</p>
<pre><code> [<div class="content-block">\n</div>, <a class="button active">This is a new button!</a>]
</code></pre>
| 0 | 2016-08-30T23:08:49Z | 39,238,501 | <p>Found the issue: I have to use <code>findNext</code> instead of <code>findChildren</code>. <code>findChildren</code> returns a plain list, so appending to that list never modifies the soup; with <code>findNext</code> the append now works fine.</p>
| 0 | 2016-08-30T23:20:41Z | [
"python",
"html",
"parsing",
"beautifulsoup"
] |
Call a function for a filtered dataframe in pyspark | 39,238,437 | <p>I am having data such as ,</p>
<p>data</p>
<pre><code>ID filter
1 A
2 A
3 A
4 A
5 B
6 B
7 B
8 B
</code></pre>
<p>I want to apply a function for the dataframe,</p>
<pre><code>def add(x):
    y = x + 1
    return(y)

from pyspark.sql.functions import *
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

ol_val = udf(add, StringType())
data = data.withColumn("sum", ol_val(data.ID))
</code></pre>
<p>this gives an output,</p>
<p>data</p>
<pre><code>ID filter sum
1 A 2
2 A 3
3 A 4
4 A 5
5 B 6
6 B 7
7 B 8
8 B 9
</code></pre>
<p><strong>I want to apply this function only when filter = A</strong> and <strong>for the remaining part I want it be NULL</strong>. The output I want here is,</p>
<p>data</p>
<pre><code>ID filter sum
1 A 2
2 A 3
3 A 4
4 A 5
5 B NULL
6 B NULL
7 B NULL
8 B NULL
</code></pre>
<p>Here the value is NULL because it didn't satisfy the condition filter = A. I want the function to be applied only when filter = A.</p>
<p>Can anybody help me in changing the code in order to get this output in pyspark?</p>
| 0 | 2016-08-30T23:14:01Z | 39,238,851 | <p>What you need is to use <a href="https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=functions#pyspark.sql.functions.when" rel="nofollow">when</a> and <a href="https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=functions#pyspark.sql.Column.otherwise" rel="nofollow">otherwise</a>. By the way you don't have to create that <code>UDF</code>.</p>
<pre><code>from pyspark.sql.functions import when, col, lit

df = sc.parallelize([
    (1, "a"),
    (1, "b"),
    (3, "c")
]).toDF(["id", "filter"])

df.select("*", when(col("filter") == lit("a"), col("id") + 1).otherwise(None).alias("result")).show()
</code></pre>
<p>If you really need to call that function, you can simply replace <code>col("id") + 1</code> by <code>yourUDF(col("id"))</code></p>
| 1 | 2016-08-31T00:08:35Z | [
"python",
"python-2.7",
"python-3.x",
"apache-spark",
"pyspark"
] |
My Python code is not outputting anything? | 39,238,469 | <p>I'm currently trying to teach myself Python with the <em>Python Crash Course</em> book by Eric Matthes and I seem to be having difficulties with exercise 5-9 regarding using if tests to test empty lists.</p>
<p><strong>Here is the question:</strong></p>
<p>5-9. No Users: Add an if test to hello_admin.py to make sure the list of users is not empty. </p>
<p>⢠If the list is empty, print the message We need to find some users!</p>
<p>⢠Remove all of the usernames from your list, and make sure the correct message is printed.</p>
<p><strong>Here is my code from hello_admin.py:</strong></p>
<pre><code>usernames = ['admin', 'user_1', 'user_2', 'user_3', 'user_4']
for username in usernames:
    if username is 'admin':
        print("Hello admin, would you like to see a status report?")
    else:
        print("Hello " + username + ", thank you for logging in again.")
</code></pre>
<p><strong>Now here is my code for 5-9 which is not outputting anything:</strong></p>
<pre><code>usernames = []
for username in usernames:
    if username is 'admin':
        print("Hello admin, would you like to see a status report?")
    else:
        print("Hello " + username + ", thank you for logging in again.")
    if usernames:
        print("Hello " + username + ", thank you for logging in again.")
    else:
        print("We need to find some users!")
</code></pre>
<p>Does anyone having any feedback for why my code is not outputting: "We need to find some users!" Thank you for your time. :)</p>
| 1 | 2016-08-30T23:17:17Z | 39,238,485 | <p>It's not outputting anything since your <code>if</code> and <code>else</code> blocks are inside the <code>for</code> loop which iterates over <code>usernames</code>. Since <code>usernames</code> is an empty list, it's not iterating over anything, hence not reaching any of those conditional blocks.</p>
<p>You might want to put instead:</p>
<pre><code>usernames = []
for username in usernames:
    if username is 'admin':
        print("Hello admin, would you like to see a status report?")
    else:
        print("Hello " + username + ", thank you for logging in again.")

if usernames:
    print("Hello " + username + ", thank you for logging in again.")
else:
    print("We need to find some users!")
</code></pre>
<p>That will print the last username in the <code>usernames</code> list twice, though.</p>
| 2 | 2016-08-30T23:18:52Z | [
"python"
] |
My Python code is not outputting anything? | 39,238,469 | <p>I'm currently trying to teach myself Python with the <em>Python Crash Course</em> book by Eric Matthes and I seem to be having difficulties with exercise 5-9 regarding using if tests to test empty lists.</p>
<p><strong>Here is the question:</strong></p>
<p>5-9. No Users: Add an if test to hello_admin.py to make sure the list of users is not empty. </p>
<p>⢠If the list is empty, print the message We need to find some users!</p>
<p>⢠Remove all of the usernames from your list, and make sure the correct message is printed.</p>
<p><strong>Here is my code from hello_admin.py:</strong></p>
<pre><code>usernames = ['admin', 'user_1', 'user_2', 'user_3', 'user_4']
for username in usernames:
    if username is 'admin':
        print("Hello admin, would you like to see a status report?")
    else:
        print("Hello " + username + ", thank you for logging in again.")
</code></pre>
<p><strong>Now here is my code for 5-9 which is not outputting anything:</strong></p>
<pre><code>usernames = []
for username in usernames:
    if username is 'admin':
        print("Hello admin, would you like to see a status report?")
    else:
        print("Hello " + username + ", thank you for logging in again.")
    if usernames:
        print("Hello " + username + ", thank you for logging in again.")
    else:
        print("We need to find some users!")
</code></pre>
<p>Does anyone having any feedback for why my code is not outputting: "We need to find some users!" Thank you for your time. :)</p>
| 1 | 2016-08-30T23:17:17Z | 39,238,624 | <p>The first if-else should come inside the for loop. The second if-else block should come outside. </p>
<pre><code>usernames = []
for username in usernames:
    if username is 'admin':
        print("Hey admin")
    else:
        print("Hello " + username + ", thanks!")

if usernames:
    print("Hello " + username + ", thanks again.")
else:
    print("Find some users!")
</code></pre>
| 1 | 2016-08-30T23:36:27Z | [
"python"
] |
What type should I use for a methods `self` argument? | 39,238,475 | <p>If I'm using <code>mypy</code> on my project, what type should my methods self object be given?</p>
<pre><code>from typing import List

class Example:
    def __init__(self, arg1: List[str]) -> None:
        pass
</code></pre>
| 2 | 2016-08-30T23:18:07Z | 39,238,512 | <p>Nothing, leave it as is.</p>
<p>That is what I could tell from their documentation:
<a href="https://mypy.readthedocs.io/en/latest/class_basics.html?highlight=self" rel="nofollow">https://mypy.readthedocs.io/en/latest/class_basics.html?highlight=self</a></p>
| 3 | 2016-08-30T23:21:48Z | [
"python",
"mypy"
] |
Converting a raw list to JSON data with Python | 39,238,497 | <p>I have a raw list <code>sorteddict</code> in the form of :</p>
<pre><code>["with", 1]
["witches", 1]
["witchcraft", 3]
</code></pre>
<p>and I want to generate more legible data by making it a JSON object that looks like:</p>
<pre><code>"Frequencies": {
    "with": 1,
    "witches": 1,
    "witchcraft": 3,
    "will": 2
}
</code></pre>
<p>Unfortunately so far, I have only found a manual way to create data as shown above, and was wondering if there was a much more eloquent way of generating the data rather than my messy script. I got to the point where I needed to retrieve the last item in the list and ensure that there was no comma on the last line before I thought I should seek some advice. Here's what I had:</p>
<pre><code>comma_count = 0
for i in sorteddict:
    comma_count += 1

with open("frequency.json", 'w') as f:
    json_head = "\"Frequencies\": {\n"
    f.write(json_head)
    while comma_count > 0:
        for s in sorteddict:
            f.write('\t\"' + s[0] + '\"' + ":" + str(s[1]) + ",\n")
            comma_count -= 1
    f.write("}")
</code></pre>
<p>I have used <code>json.JSONEncoder().encode()</code>, which I thought was what I was looking for, but what ended up happening is that <code>"Frequencies"</code> would be prepended to each <code>s[0]</code> item. Any ideas to clean up the code?</p>
| 0 | 2016-08-30T23:20:24Z | 39,238,547 | <p>I may not be understanding you correctly - but do you just want to turn a list of [word, frequency] lists into a dictionary?</p>
<pre><code>frequency_lists = [
    ["with", 1],
    ["witches", 1],
    ["witchcraft", 3],
]

frequency_dict = dict(frequency_lists)
print(frequency_dict)  # {'with': 1, 'witches': 1, 'witchcraft': 3}
</code></pre>
<p>If you then want to write this to a file:</p>
<pre><code>import json
with open('frequency.json', 'w') as f:
    f.write(json.dumps(frequency_dict))
</code></pre>
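<p>And for the "more legible" goal, <code>json.dumps</code> can pretty-print the nested structure with <code>indent</code> (and <code>sort_keys</code> if you want deterministic ordering):</p>

```python
import json

frequency_dict = {'with': 1, 'witches': 1, 'witchcraft': 3}
text = json.dumps({'Frequencies': frequency_dict}, indent=4, sort_keys=True)
print(text)
```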
| 2 | 2016-08-30T23:25:56Z | [
"python",
"json"
] |
Converting a raw list to JSON data with Python | 39,238,497 | <p>I have a raw list <code>sorteddict</code> in the form of :</p>
<pre><code>["with", 1]
["witches", 1]
["witchcraft", 3]
</code></pre>
<p>and I want to generate more legible data by making it a JSON object that looks like:</p>
<pre><code>"Frequencies": {
    "with": 1,
    "witches": 1,
    "witchcraft": 3,
    "will": 2
}
</code></pre>
<p>Unfortunately so far, I have only found a manual way to create data as shown above, and was wondering if there was a much more eloquent way of generating the data rather than my messy script. I got to the point where I needed to retrieve the last item in the list and ensure that there was no comma on the last line before I thought I should seek some advice. Here's what I had:</p>
<pre><code>comma_count = 0
for i in sorteddict:
    comma_count += 1

with open("frequency.json", 'w') as f:
    json_head = "\"Frequencies\": {\n"
    f.write(json_head)
    while comma_count > 0:
        for s in sorteddict:
            f.write('\t\"' + s[0] + '\"' + ":" + str(s[1]) + ",\n")
            comma_count -= 1
    f.write("}")
</code></pre>
<p>I have used <code>json.JSONEncoder().encode()</code>, which I thought was what I was looking for, but what ended up happening is that <code>"Frequencies"</code> would be prepended to each <code>s[0]</code> item. Any ideas to clean up the code?</p>
| 0 | 2016-08-30T23:20:24Z | 39,238,553 | <p>You need to make a nested dict out of your current one, and use <code>json.dumps</code>. Not sure how sorteddict works, but: </p>
<pre><code>json.dumps({"Frequencies": mySortedDict})
</code></pre>
<p>should work. </p>
<p>Additionally, you say that you want something json encoded, but your example is not valid json. So I will assume that you actually want legitimate json.</p>
<p>Here's some example code: </p>
<pre><code>In [4]: import json
In [5]: # No idea what a sorteddict is, we assume it has the same interface as a normal dict.
In [6]: the_dict = dict([
...: ["with", 1],
...: ["witches", 1],
...: ["witchcraft", 3],
...: ])
In [7]: the_dict
Out[7]: {'witchcraft': 3, 'witches': 1, 'with': 1}
In [8]: json.dumps({"Frequencies": the_dict})
Out[8]: '{"Frequencies": {"with": 1, "witches": 1, "witchcraft": 3}}'
</code></pre>
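<p>For legible output written straight to a file, <code>json.dump</code>/<code>json.dumps</code> with <code>indent</code> does the pretty-printing (and comma handling) for you - a small sketch using the list from the question:</p>

```python
import json

# The raw [word, count] pairs from the question.
sorteddict = [["with", 1], ["witches", 1], ["witchcraft", 3], ["will", 2]]

# Nest them under "Frequencies" and serialize with indentation.
data = {"Frequencies": dict(sorteddict)}
text = json.dumps(data, indent=4)
print(text)

# json takes care of commas and quoting; round-tripping returns the same structure.
assert json.loads(text) == data
```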
| 4 | 2016-08-30T23:26:34Z | [
"python",
"json"
] |
Drop specific rows from multiindex Dataframe | 39,238,639 | <p>I have a multi-index dataframe that looks like this:</p>
<pre><code> start grad
1995-96 1995-96 15 15
1996-97 6 6
2002-03 1 1
2007-08 1 1
</code></pre>
<p>I'd like to drop by the specific values for the first level (level=0). In this case, I'd like to drop everything that has 1995-96 in the first index.</p>
| 2 | 2016-08-30T23:38:37Z | 39,238,699 | <p><code>pandas.DataFrame.drop</code> takes level as an optional argument</p>
<p><code>df.drop('1995-96', level='start')</code></p>
<p>As of v0.18.1, its docstring says:</p>
<blockquote>
<pre><code>"""
Signature: df.drop(labels, axis=0, level=None, inplace=False, errors='raise')
Docstring: Return new object with labels in requested axis removed.
Parameters
----------
labels : single label or list-like
axis : int or axis name
level : int or level name, default None
For MultiIndex
inplace : bool, default False
If True, do operation inplace and return None.
errors : {'ignore', 'raise'}, default 'raise'
If 'ignore', suppress error and existing labels are dropped.
.. versionadded:: 0.16.1
"""
</code></pre>
</blockquote>
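<p>A runnable sketch of the whole round trip (the data and the level names <code>start</code>/<code>grad</code> are hypothetical stand-ins for the question's frame, with a second <code>start</code> group added so the result is non-empty):</p>

```python
import pandas as pd

# Hypothetical data shaped like the question's frame, with named levels.
idx = pd.MultiIndex.from_tuples(
    [('1995-96', '1995-96'), ('1995-96', '1996-97'),
     ('1997-98', '2002-03'), ('1997-98', '2007-08')],
    names=['start', 'grad'])
df = pd.DataFrame({'count': [15, 6, 1, 1]}, index=idx)

# Drop every row whose first ("start") level equals '1995-96'.
trimmed = df.drop('1995-96', level='start')
print(trimmed)
```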
| 3 | 2016-08-30T23:46:36Z | [
"python",
"pandas"
] |
Page count after using PdfFileMerger() in pypdf2 | 39,238,648 | <p>I am trying to use PdfFileMerger() in PyPDF2 to merge pdf files (see code). </p>
<pre><code>from PyPDF2 import PdfFileMerger, PdfFileReader
[...]
merger = PdfFileMerger()
if (some condition):
merger.append(PdfFileReader(file(filename1, 'rb')))
merger.append(PdfFileReader(file(filename2, 'rb')))
if (test for non-zero file size):
merger.write("output.pdf")
</code></pre>
<p>However, my merge commands are subject to certain conditions and it could turn out that no merged pdf file is generated. I would like to know how to determine the page count after performing merges using PdfFileMerger(). If nothing else, I would like to know if the number of pages is non-zero. Maintaining a counter to do this would be cumbersome because I am performing the merges across several functions and would prefer a more elegant solution.</p>
| 0 | 2016-08-30T23:40:03Z | 39,247,107 | <p>I'm more or less in the same case as you, so I will explain my solution. I'm not opening the pdfs with <code>PdfFileReader('filename.pdf', 'rb')</code>; instead I'm passing the pdfs' content in an array for the merge (<code>pdfs_content_array</code>). Then I'm preparing the merger and my output (I don't want to save the generated file locally, so I have to use BytesIO to hold the merged content somewhere). <code>calc_page_sum</code> is needed to compare the page number results. The most important part is <code>calc_page_sum += PdfFileReader(bytes_content).getNumPages()</code>: I open the bytes content with PdfFileReader and get the page count. Then I append to the merger (the <code>... merger.append, bytes_content</code> call), write the merge into my bytes output, and compare its page count with <code>calc_page_sum</code>. That's it.</p>
<pre><code>from PyPDF2 import PdfFileMerger, PdfFileReader
import io
[...]
def merge_the_pdfs(self,pdfs_content_array,output_file):
merger = PdfFileMerger()
output = io.BytesIO()
calc_page_sum = 0
for content in pdfs_content_array:
bytes_content = io.BytesIO(content)
calc_page_sum += PdfFileReader(bytes_content).getNumPages()
yield self.application.cpupool.submit(merger.append,bytes_content)
merger.write(output)
if not calc_page_sum == PdfFileReader(output).getNumPages():
return None
    return output.getvalue()
</code></pre>
<p>Hope this will help!</p>
<p>2nd Version:</p>
<pre><code>from PyPDF2 import PdfFileMerger, PdfFileReader
import io
import sys
filename1 = 'test.pdf'
filename2 = 'test1.pdf'
merger = PdfFileMerger()
output = io.BytesIO()
calc_page_sum = 0
filesarray = [filename1,filename2]
for singlefile in filesarray:
calc_page_sum += PdfFileReader(singlefile, 'rb').getNumPages()
merger.append(PdfFileReader(singlefile, 'rb'))
merger.write(output)
print(calc_page_sum)
print(PdfFileReader(output).getNumPages())
if calc_page_sum == PdfFileReader(output).getNumPages():
print("It worked")
merger.write("merging-test.pdf")
sys.exit()
print("Didn't work")
sys.exit()
</code></pre>
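<p>One detail worth stressing, independent of PyPDF2: the buffer method is spelled <code>getvalue()</code> (lower-case v), and a plain <code>read()</code> needs a <code>seek(0)</code> first. The write-then-reread pattern both versions rely on, shown in isolation:</p>

```python
import io

buf = io.BytesIO()
buf.write(b'%PDF-1.4 fake merged content')

# read() starts from the current position, so rewind first...
buf.seek(0)
assert buf.read() == b'%PDF-1.4 fake merged content'

# ...while getvalue() returns the whole buffer regardless of position.
assert buf.getvalue() == b'%PDF-1.4 fake merged content'
```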
| 1 | 2016-08-31T10:28:09Z | [
"python",
"pypdf",
"pypdf2"
] |
Modifying global variables in doctests | 39,238,675 | <p>The doctest documentation has <a href="https://docs.python.org/3.5/library/doctest.html#what-s-the-execution-context" rel="nofollow">a section about execution context</a>.
My reading is that the globals in the module are shallow copied for the tests in each docstring, but are not reset between tests within a docstring.</p>
<p>Based on that description, I would have thought the following doctests would pass:</p>
<pre><code>X = 1
def f():
"""Function F.
>>> X
1
>>> f()
2
>>> X
2
"""
global X
X = 2
return X
def g():
"""Function G.
>>> g()
1
>>> X
1
"""
return X
</code></pre>
<p>But instead the following tests pass!</p>
<pre><code>X = 1
def f():
"""Function F.
>>> X
1
>>> f()
2
>>> X
1
"""
global X
X = 2
return X
def g():
"""Function G.
>>> g()
2
>>> X
1
"""
return X
</code></pre>
<p>It seems like the globals are shared across tests across docstrings?
But only within function calls?</p>
<p>Why is this the resulting behavior?
Does this have something to do with functions having a dictionary of globals separate from the execution context?</p>
| 2 | 2016-08-30T23:43:39Z | 39,238,927 | <p>Not exactly. While it is true that the globals are shallow copied, what you are actually seeing is the scoping of globals (using the keyword <code>global</code>) and how it actually operates at the module level in Python. You can observe this by putting in a <a href="https://docs.python.org/library/pdb.html#pdb.set_trace" rel="nofollow"><code>pdb.set_trace()</code></a> inside function <code>f</code> right after the assignment (<code>X = 2</code>).</p>
<pre><code>$ python -m doctest foo.py
> /tmp/foo.py(18)f()
-> return X
(Pdb) bt
/usr/lib/python2.7/runpy.py(162)_run_module_as_main()
-> "__main__", fname, loader, pkg_name)
/usr/lib/python2.7/runpy.py(72)_run_code()
-> exec code in run_globals
/usr/lib/python2.7/doctest.py(2817)<module>()
-> sys.exit(_test())
/usr/lib/python2.7/doctest.py(2808)_test()
-> failures, _ = testmod(m)
/usr/lib/python2.7/doctest.py(1911)testmod()
-> runner.run(test)
/usr/lib/python2.7/doctest.py(1454)run()
-> return self.__run(test, compileflags, out)
/usr/lib/python2.7/doctest.py(1315)__run()
-> compileflags, 1) in test.globs
<doctest foo.f[1]>(1)<module>()
-> f()
> /tmp/foo.py(18)f()
-> return X
(Pdb) pp X
2
</code></pre>
<p>Yes, the value is indeed <code>2</code> within the scope of <code>f</code>, but let's take a look of its globals. Let's look at how they compare in the current frame and up a frame.</p>
<pre><code>(Pdb) id(globals())
140653053803048 # remember this number, and we go up a frame
(Pdb) u
> <doctest foo.f[1]>(1)<module>()
-> f()
(Pdb) id(globals())
140653053878632 # the "shallow" clone
(Pdb) X
1
(Pdb) c
</code></pre>
<p>Ah ha, you can see that they are NOT actually the same thing, and that <code>X</code> is indeed <code>1</code> and has not been changed, because the globals there are within the <code><doctest doc.f></code> module created by doctest for this reason. Let's continue.</p>
<pre><code>(Pdb) id(globals())
140653053803048 # hey look, is the SAME number we remember
(Pdb) u
> <doctest foo.g[0]>(1)<module>()
-> g()
(Pdb) id(globals())
140653053872960 # note how this is a different shallow clone
</code></pre>
<p>So what you actually saw is that the globals within the doctest are not the same as the ones in your source (hence <code>g</code> will return <code>2</code> because <code>X</code> really was changed in the module here by <code>f</code>, but not in the doctest module's shallow-copy scope). Even though the copy was originally made from the module, the changes are not reflected back to the underlying module, since this is how the <code>global</code> keyword operates - at the module level, not across modules.</p>
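<p>The copy-on-entry behaviour can be reproduced without doctest at all - a shallow copy of a module's <code>__dict__</code> (roughly what doctest hands each docstring) diverges from the live module as soon as either side rebinds a name. A minimal sketch:</p>

```python
import types

mod = types.ModuleType('m')
mod.X = 1

globs = mod.__dict__.copy()  # roughly what doctest does per docstring
mod.X = 2                    # what `global X; X = 2` inside f() does

assert globs['X'] == 1       # the doctest's copy still sees the old binding
assert mod.X == 2            # the real module globals did change
```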
| 0 | 2016-08-31T00:19:27Z | [
"python",
"global-variables",
"doctest"
] |
How do I download a specific service's source code off of AppEngine? | 39,238,707 | <p>I have some php source code working in an app engine production environment that won't work anywhere else. Old versions of that code don't seem to work at all either so I need to get that source and see what the heck it is doing differently.</p>
<p>This discussion outlines the challenges I'm having:
<a href="http://google-app-engine.75637.x6.nabble.com/Download-specific-Module-code-td699.html" rel="nofollow">http://google-app-engine.75637.x6.nabble.com/Download-specific-Module-code-td699.html</a></p>
<p>I was told to try using appcfg.py download_app:</p>
<pre><code>appcfg.py download_app -A <APP_NAME> -V <Version> <DIR>
</code></pre>
<p>The problem is that this command does not allow me to specify module / service so I can't target my source code. Also I'm not the App owner so I can't download everything but from what I understand that still wouldn't work because appcfg only seems to target the default service.</p>
<p>The article I linked ends with a suggestion to use gcloud to download the app but I haven't been able to find how to do that. Does anyone here know how to resolve this? The service is running in a Flexible instance - not sure if that makes a difference...</p>
| 1 | 2016-08-30T23:47:25Z | 39,238,845 | <p>From <code>appcfg.py download_app --help</code>:</p>
<blockquote>
<p>Usage: appcfg.py [options] download_app -A app_id [ -V version ]
</p>
<p>...</p>
<p>-M MODULE, --module=MODULE
Set the module, overriding the module value from
app.yaml. ...</p>
</blockquote>
<p>This invocation allowed me to download the code for a non-default python module:</p>
<pre><code>appcfg.py --module=<my_module> download_app -A <my_app> .
</code></pre>
<p>Of course, you need to meet authentication and ownership requirements, etc.
And your app shouldn't have been configured to prevent source code downloads (an irreversible config).</p>
| 0 | 2016-08-31T00:07:15Z | [
"php",
"python",
"google-app-engine",
"google-app-engine-php"
] |
How do I download a specific service's source code off of AppEngine? | 39,238,707 | <p>I have some php source code working in an app engine production environment that won't work anywhere else. Old versions of that code don't seem to work at all either so I need to get that source and see what the heck it is doing differently.</p>
<p>This discussion outlines the challenges I'm having:
<a href="http://google-app-engine.75637.x6.nabble.com/Download-specific-Module-code-td699.html" rel="nofollow">http://google-app-engine.75637.x6.nabble.com/Download-specific-Module-code-td699.html</a></p>
<p>I was told to try using appcfg.py download_app:</p>
<pre><code>appcfg.py download_app -A <APP_NAME> -V <Version> <DIR>
</code></pre>
<p>The problem is that this command does not allow me to specify module / service so I can't target my source code. Also I'm not the App owner so I can't download everything but from what I understand that still wouldn't work because appcfg only seems to target the default service.</p>
<p>The article I linked ends with a suggestion to use gcloud to download the app but I haven't been able to find how to do that. Does anyone here know how to resolve this? The service is running in a Flexible instance - not sure if that makes a difference...</p>
| 1 | 2016-08-30T23:47:25Z | 39,280,910 | <p>Simple enough to do:</p>
<ol>
<li><p>Switch to the directory where App Engine is installed on your computer. It is usually under the following path: </p>
<p><code>C:\Program Files\Google\google_appengine</code></p></li>
<li><p>Run the below command</p>
<p><code>appcfg.py download_app -A MyAppName -V 1 c:\AppEngine\SourceCode</code></p></li>
</ol>
| 0 | 2016-09-01T21:11:00Z | [
"php",
"python",
"google-app-engine",
"google-app-engine-php"
] |
ImportError : cannot import name timezone pythonanywhere | 39,238,782 | <p>I am attempting to use collectstatic in the bash console to get my CSS running on a django app on pythonanywhere.</p>
<p>Unfortunately, I am getting an error:</p>
<pre><code>23:49 ~/mysite/mysite $ python manage.py collectstatic
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 429, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 191, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 219, in execute
self.validate()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 249, in validate
num_errors = get_validation_errors(s, app)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/validation.py", line 35, in get_validation_errors
for (app_name, error) in get_app_errors().items():
File "/usr/local/lib/python2.7/dist-packages/django/db/models/loading.py", line 146, in get_app_errors
self._populate()
File "/usr/local/lib/python2.7/dist-packages/django/db/models/loading.py", line 64, in _populate
self.load_app(app_name)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/loading.py", line 78, in load_app
models = import_module('.models', app_name)
File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/home/nikk2009/mysite/mysite/polls/models.py", line 4, in <module>
from django.utils import timezone
ImportError: cannot import name timezone
23:49 ~/mysite/mysite $
</code></pre>
<p>Here is the .py where timezone is imported </p>
<pre><code>import datetime
from django.db import models
from django.utils import timezone
# Create your models here.
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
def __str__(self):
return self.question_text
def was_published_recently(self):
now = timezone.now()
        return now - datetime.timedelta(days=1) <= self.pub_date <= now
was_published_recently.admin_order_field = 'pub_date'
was_published_recently.boolean = True
was_published_recently.short_description = 'Published recently?'
class Choice(models.Model):
choice_text = models.CharField(max_length= 200)
votes = models.IntegerField(default= 0)
question = models.ForeignKey(Question, on_delete=models.CASCADE)
def __str__(self):
return self.choice_text
</code></pre>
| 1 | 2016-08-30T23:57:56Z | 39,239,915 | <p>If I am not mistaken, pythonanywhere uses Django 1.3.7 by default. It looks like Django timezone support was not added until version 1.4:</p>
<p><a href="https://docs.djangoproject.com/en/1.10/releases/1.4/#what-s-new-in-django-1-4" rel="nofollow">https://docs.djangoproject.com/en/1.10/releases/1.4/#what-s-new-in-django-1-4</a></p>
<p>You should update Django to the most recent version (or at least a later version than 1.3.7) and everything should work as expected (at least with timezones). You can upgrade by opening a bash console from the <strong>Consoles</strong> tab on your pythonanywhere profile, and running the command: </p>
<pre><code>$ pip install --upgrade django
</code></pre>
<p>Or install a newer version in a <code>virtualenv</code>:</p>
<pre><code>$ mkvirtualenv myenv --python=/usr/bin/python3.4
$ pip install django
</code></pre>
<h1>Edit:</h1>
<p>I tested out my first suggestion, and was not able to get it to work on my pythonanywhere account (I think it has to do with the permissions that pythonanywhere gives their users). However, using the second method (i.e., using a <code>virtualenv</code>) did work to install the latest version of Django, which does include timezone support in <code>django.utils.timezone</code>.</p>
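<p>If you are unsure which installation a script is picking up, a small (hypothetical) helper can report whether a module can be imported and exposes a given attribute before you depend on it:</p>

```python
import importlib

def module_attr_available(module_name, attr):
    """Return True if `module_name` can be imported and exposes `attr`."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)

# datetime.timezone exists on Python 3.2+; a made-up module does not.
print(module_attr_available('datetime', 'timezone'))
print(module_attr_available('no_such_module_xyz', 'spam'))
```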
| 1 | 2016-08-31T02:38:25Z | [
"python",
"css",
"django",
"importerror",
"pythonanywhere"
] |
Is it possible to use Django models module only in my project? | 39,238,812 | <p>I am developing a small independent python application which uses Celery. I have built this using django framework but my application is back end only. This means that the users do not need to visit my site and my application is built only for the purpose of receiving tasks queue from celery and performing operations on the database. In order to perform operations on the database, I need to use Django modules.</p>
<p>What I am trying to do is eliminate the rest of my django application and use ONLY celery and django models modules (including the dependencies required to run these).</p>
<p>In short, my simple celery application will be running receiving instructions from my redis broker and perform operations in database using django models.</p>
<p>Is it possible to do this? If so, how?</p>
<p>Here is my project structure:</p>
<pre><code>myproject/
--manage.py
--myproject/
----celery.py
----models.py
----settings.py
----tasks.py
----urls.py
----wsgi.py
</code></pre>
<p>Here is my settings.py:</p>
| 0 | 2016-08-31T00:01:56Z | 39,238,855 | <p>you just need </p>
<pre><code>import os

import django

os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'
django.setup()
</code></pre>
<p>(assuming you setup your database and installed_apps stuff in settings.py)</p>
| 0 | 2016-08-31T00:08:58Z | [
"python",
"django",
"django-models",
"celery"
] |
Is it possible to use Django models module only in my project? | 39,238,812 | <p>I am developing a small independent python application which uses Celery. I have built this using django framework but my application is back end only. This means that the users do not need to visit my site and my application is built only for the purpose of receiving tasks queue from celery and performing operations on the database. In order to perform operations on the database, I need to use Django modules.</p>
<p>What I am trying to do is eliminate the rest of my django application and use ONLY celery and django models modules (including the dependencies required to run these).</p>
<p>In short, my simple celery application will be running receiving instructions from my redis broker and perform operations in database using django models.</p>
<p>Is it possible to do this? If so, how?</p>
<p>Here is my project structure:</p>
<pre><code>myproject/
--manage.py
--myproject/
----celery.py
----models.py
----settings.py
----tasks.py
----urls.py
----wsgi.py
</code></pre>
<p>Here is my settings.py:</p>
| 0 | 2016-08-31T00:01:56Z | 39,239,467 | <p>In your project's settings.py, just add this at beginning.</p>
<pre><code>import django
import os
import sys

sys.path.insert(0, your_project_path) # Ensure python can find your project
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'
django.setup()
</code></pre>
<p>Then you can use django orm, remember to delete the middleware you don't need in django settings.</p>
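<p>A minimal standalone bootstrap combining the two steps (a sketch - it assumes Django is installed and <code>myproject/settings.py</code> is importable; the import is guarded here so the environment-variable part works either way):</p>

```python
import os

# Point Django at the settings module before any ORM import.
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

try:
    import django
    django.setup()  # wires up apps/models so the ORM is usable standalone
except ImportError:
    # Django (or the hypothetical myproject.settings) is missing here.
    django = None

assert os.environ['DJANGO_SETTINGS_MODULE'] == 'myproject.settings'
```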
| 0 | 2016-08-31T01:40:45Z | [
"python",
"django",
"django-models",
"celery"
] |
Is it possible to use Django models module only in my project? | 39,238,812 | <p>I am developing a small independent python application which uses Celery. I have built this using django framework but my application is back end only. This means that the users do not need to visit my site and my application is built only for the purpose of receiving tasks queue from celery and performing operations on the database. In order to perform operations on the database, I need to use Django modules.</p>
<p>What I am trying to do is eliminate the rest of my django application and use ONLY celery and django models modules (including the dependencies required to run these).</p>
<p>In short, my simple celery application will be running receiving instructions from my redis broker and perform operations in database using django models.</p>
<p>Is it possible to do this? If so, how?</p>
<p>Here is my project structure:</p>
<pre><code>myproject/
--manage.py
--myproject/
----celery.py
----models.py
----settings.py
----tasks.py
----urls.py
----wsgi.py
</code></pre>
<p>Here is my settings.py:</p>
| 0 | 2016-08-31T00:01:56Z | 39,240,986 | <p>You have a python script that requires some celery tasks, and you need the Django ORM for the database interactions.</p>
<ol>
<li><p>You can setup the django project</p></li>
<li><p>create an app for your purpose, include in settings.py and inside your app in models.py create the required models.
ref : <a href="http://stackoverflow.com/questions/34519576/what-minimal-files-i-need-to-use-django-orm">What minimal files i need to use django ORM</a> </p></li>
<li><p>Set up the environment for executing celery, i.e. a redis server, and integrate "djcelery" with the django project for the celery tasks.
You can use celery beat for the periodic tasks, or delay for immediate ones.
ref: <a href="http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html" rel="nofollow">http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html</a></p></li>
<li><p>You can import and use the django models like normal inside the celery tasks. </p></li>
<li><p>And the celery tasks you can run using:</p>
<p>i. celery -A tasks worker --loglevel=info</p>
<p>ii. celery -A tasks beat -l info. Use beat if you want the tasks which are written for periodic execution.</p></li>
<li><p>If the tasks need to be executed asynchronously, either immediately or after a time interval, you can call them inside the python script using <code>task_name.delay()</code>.
To use djcelery in your script you may need to set up the django env inside the script first:
just do <code>django.setup()</code>.</p></li>
</ol>
<p>I think this will help you to solve your problem.</p>
| 0 | 2016-08-31T04:51:45Z | [
"python",
"django",
"django-models",
"celery"
] |
Turning results from SQL query into compact Python form | 39,238,847 | <p>I have a database schema in Postgres that looks like this (in pseudo code):</p>
<pre><code>users (table):
pk (field, unique)
name (field)
permissions (table):
pk (field, unique)
permission (field, unique)
addresses (table):
pk (field, unique)
address (field, unique)
association1 (table):
user_pk (field, foreign_key)
permission_pk (field, foreign_key)
association2 (table):
user_pk (field, foreign_key)
address_pk (field, foreign_key)
</code></pre>
<p>Hopefully this makes intuitive sense. It's a users table that has a many-to-many relationship with a permissions table as well as a many-to-many relationship with an addresses table.</p>
<p>In Python, when I perform the correct SQLAlchemy query incantations, I get back results that look something like this (after converting them to a list of dictionaries in Python): </p>
<pre><code>results = [
{'pk': 1, 'name': 'Joe', 'permission': 'user', 'address': 'home'},
{'pk': 1, 'name': 'Joe', 'permission': 'user', 'address': 'work'},
{'pk': 1, 'name': 'Joe', 'permission': 'admin', 'address': 'home'},
{'pk': 1, 'name': 'Joe', 'permission': 'admin', 'address': 'work'},
{'pk': 2, 'name': 'John', 'permission': 'user', 'address': 'home'},
]
</code></pre>
<p>So in this contrived example, Joe is both a user and and an admin. John is only a user. Both Joe's home and work addresses exist in the database. Only John's home address exists.</p>
<p>So the question is, does anybody know the best way to go from these SQL query 'results' to the more compact 'desired_results' below?</p>
<pre><code>desired_results = [
{
'pk': 1,
'name': 'Joe',
'permissions': ['user', 'admin'],
'addresses': ['home', 'work']
},
{
'pk': 2,
'name': 'John',
'permissions': ['user'],
'addresses': ['home']
},
]
</code></pre>
<p>Additional information required: Small list of dictionaries describing the 'labels' I would like to use in the desired_results for each of the fields that have many-to-many relationships.</p>
<pre><code>relationships = [
{'label': 'permissions', 'back_populates': 'permission'},
{'label': 'addresses', 'back_populates': 'address'},
]
</code></pre>
<p>Final consideration: I've put together a concrete example for the purposes of this question, but I'm trying to solve the general problem of querying SQL databases, assuming an arbitrary number of relationships. SQLAlchemy ORM solves this problem well, but I'm limited to using SQLAlchemy Core, so I am trying to build my own solution.</p>
<h1>Update</h1>
<p>Here's an answer, but I'm not sure it's the best / most efficient solution. Can anyone come up with something better?</p>
<pre><code># step 1: generate set of keys that will be replaced by new keys in desired_result
back_populates = set(rel['back_populates'] for rel in relationships)
# step 2: delete from results keys generated in step 1
intermediate_results = [
{k: v for k, v in res.items() if k not in back_populates}
for res in results]
# step 3: eliminate duplicates
intermediate_results = [
dict(t)
for t in set([tuple(ires.items())
for ires in intermediate_results])]
# step 4: add back information from deleted fields but in desired form
for ires in intermediate_results:
for rel in relationships:
ires[rel['label']] = set([
res[rel['back_populates']]
for res in results
if res['pk'] == ires['pk']])
# done
desired_results = intermediate_results
</code></pre>
| 0 | 2016-08-31T00:07:30Z | 39,255,796 | <p>Iterating over the groups of partial entries looks like a job for <a href="https://docs.python.org/3.5/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby</code></a>.</p>
<p>But first let's put <code>relationships</code> into a format that is easier to use, perhaps a <code>back_populates:label</code> dictionary?</p>
<pre><code>conversions = {d["back_populates"]:d['label'] for d in relationships}
</code></pre>
<p>Next, because we will be using <code>itertools.groupby</code>, it will need a <code>keyfunc</code> to distinguish between the different groups of entries. (Note that <code>groupby</code> only merges <em>consecutive</em> entries, so the results must already be ordered by these identifying fields, as they are in the sample data.)
So given one entry from the initial <code>results</code>, this function will return a dictionary with only the pairs that will not be condensed/converted:</p>
<pre><code>def grouper(entry):
#each group is identified by all key:values that are not identified in conversions
return {k:v for k,v in entry.items() if k not in conversions}
</code></pre>
<p>Now we will be able to traverse the <code>results</code> in groups something like this:</p>
<pre><code>for base_info, group in itertools.groupby(old_results, grouper):
#base_info is dict with info unique to all entries in group
for partial in group:
#partial is one entry from results that will contribute to the final result
        #but wait, what do we add it to?
</code></pre>
<p>The only issue is that if we build our <code>entry</code> from <code>base_info</code> it will confuse <code>groupby</code> so we need to make an <code>entry</code> to work with:</p>
<pre><code>entry = {new_field:set() for new_field in conversions.values()}
entry.update(base_info)
</code></pre>
<p>Note that I am using <code>set</code>s here because they are the natural container when all contents are unique;
however, because sets are not json-compatible, we will need to change them into lists at the end.</p>
<p>Now that we have an entry to build we can just iterate through the <code>group</code> to <code>add</code> to each <code>new</code> field from the <code>original</code></p>
<pre><code>for partial in group:
for original, new in conversions.items():
entry[new].add(partial[original])
</code></pre>
<p>then once the final entry is constructed all that is left is to convert the <code>set</code>s back into <code>list</code>s</p>
<pre><code>for new in conversions.values():
entry[new] = list(entry[new])
</code></pre>
<p>And that entry is done, now we can either <code>append</code> it to a list called <code>new_results</code> but since this process is essentially <em>generating</em> results it would make more sense to put it into a <code>generator</code>
making the final code look something like this:</p>
<pre><code>import itertools
results = [
{'pk': 1, 'name': 'Joe', 'permission': 'user', 'address': 'home'},
{'pk': 1, 'name': 'Joe', 'permission': 'user', 'address': 'work'},
{'pk': 1, 'name': 'Joe', 'permission': 'admin', 'address': 'home'},
{'pk': 1, 'name': 'Joe', 'permission': 'admin', 'address': 'work'},
{'pk': 2, 'name': 'John', 'permission': 'user', 'address': 'home'},
]
relationships = [
{'label': 'permissions', 'back_populates': 'permission'},
{'label': 'addresses', 'back_populates': 'address'},
]
#first we put the "relationships" in a format that is much easier to use.
conversions = {d["back_populates"]:d['label'] for d in relationships}
def grouper(entry):
#each group is identified by all key:values that are not identified in conversions
return {k:v for k,v in entry.items() if k not in conversions}
def parse_results(old_results, conversions=conversions):
for base_info, group in itertools.groupby(old_results, grouper):
entry = {new_field:set() for new_field in conversions.values()}
entry.update(base_info)
for partial in group: #for each entry in the original results set
for original, new in conversions.items(): #for each field that will be condensed
entry[new].add(partial[original])
#convert sets back to lists so it can be put back into json
for new in conversions.values():
entry[new] = list(entry[new])
yield entry
</code></pre>
<p>Then the new_results can be gotten like this:</p>
<pre><code>>>> new_results = list(parse_results(results))
>>> from pprint import pprint #for demo purpose
>>> pprint(new_results,width=50)
[{'addresses': ['home', 'work'],
'name': 'Joe',
'permissions': ['admin', 'user'],
'pk': 1},
{'addresses': ['home'],
'name': 'John',
'permissions': ['user'],
'pk': 2}]
</code></pre>
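<p>For completeness, the same reshaping can also be done in a single pass with a plain dictionary keyed on <code>pk</code>, which avoids the ordering requirement of <code>groupby</code> - a sketch using the sample data from the question:</p>

```python
results = [
    {'pk': 1, 'name': 'Joe', 'permission': 'user', 'address': 'home'},
    {'pk': 1, 'name': 'Joe', 'permission': 'user', 'address': 'work'},
    {'pk': 1, 'name': 'Joe', 'permission': 'admin', 'address': 'home'},
    {'pk': 1, 'name': 'Joe', 'permission': 'admin', 'address': 'work'},
    {'pk': 2, 'name': 'John', 'permission': 'user', 'address': 'home'},
]
relationships = [
    {'label': 'permissions', 'back_populates': 'permission'},
    {'label': 'addresses', 'back_populates': 'address'},
]

conversions = {r['back_populates']: r['label'] for r in relationships}

merged = {}
for row in results:
    # Base entry holds every field that is not being condensed.
    entry = merged.setdefault(
        row['pk'], {k: v for k, v in row.items() if k not in conversions})
    for original, label in conversions.items():
        entry.setdefault(label, set()).add(row[original])

# Convert the sets to sorted lists for json-compatibility.
desired_results = [
    {k: (sorted(v) if isinstance(v, set) else v) for k, v in e.items()}
    for e in merged.values()]
print(desired_results)
```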
| 1 | 2016-08-31T17:37:44Z | [
"python",
"python-3.x"
] |
Django Generic Views :How does DetailView automatically provides the variable to the template ?? | 39,238,867 | <p>In the following code, how does the template <code>details.html</code> know that <code>album</code> is passed to it by <code>views.py</code>, although we have never returned or defined any <code>context_object_name</code> in the <code>DetailsView</code> class in <code>views.py</code>?
Please explain how the various things are getting connected here.</p>
<p><code>details.html</code></p>
<pre><code>{% extends 'music/base.html' %}
{% block title %}AlbumDetails{% endblock %}
{% block body %}
<img src="{{ album.album_logo }}" style="width: 250px;">
<h1>{{ album.album_title }}</h1>
<h3>{{ album.artist }}</h3>
{% for song in album.song_set.all %}
{{ song.song_title }}
{% if song.is_favourite %}
<img src="http://i.imgur.com/b9b13Rd.png" />
{% endif %}
<br>
{% endfor %}
{% endblock %}
</code></pre>
<p><code>views.py</code></p>
<pre><code>from django.views import generic
from .models import Album
class IndexView(generic.ListView):
template_name = 'music/index.html'
context_object_name = 'album_list'
def get_queryset(self):
return Album.objects.all()
class DetailsView(generic.DetailView):
model = Album
template_name = 'music/details.html'
</code></pre>
<p><code>urls.py</code></p>
<pre><code>from django.conf.urls import url
from . import views
app_name = 'music'
urlpatterns = [
# /music/
url(r'^$', views.IndexView.as_view(), name='index'),
# /music/album_id/
url(r'^(?P<pk>[0-9]+)/$', views.DetailsView.as_view(), name='details'),
]
</code></pre>
<p>Thanks in advance !!</p>
| 2 | 2016-08-31T00:10:32Z | 39,239,366 | <p>If you check the implementation of <code>get_context_name()</code>, you'll see this:</p>
<pre><code>def get_context_object_name(self, obj):
"""
Get the name to use for the object.
"""
if self.context_object_name:
return self.context_object_name
elif isinstance(obj, models.Model):
return obj._meta.model_name
else:
return None
</code></pre>
<p>And the implementation for <code>get_context_data()</code> (from <code>SingleObjectMixin</code>):</p>
<pre><code>def get_context_data(self, **kwargs):
"""
Insert the single object into the context dict.
"""
context = {}
if self.object:
context['object'] = self.object
context_object_name = self.get_context_object_name(self.object)
if context_object_name:
context[context_object_name] = self.object
context.update(kwargs)
return super(SingleObjectMixin, self).get_context_data(**context)
</code></pre>
<p>So you can see that <code>get_context_data()</code> adds to the dictionary an entry with the key <code>context_object_name</code> (from <code>get_context_object_name()</code>) which returns <code>obj._meta.model_name</code> when <code>self.context_object_name</code> isn't defined. In this case, the view got <code>self.object</code> as a consequence of the call to <code>get()</code> which calls <code>get_object()</code>. <code>get_object()</code> takes the model that you've defined and automatically queries it from your database using the <code>pk</code> you've defined in your <code>urls.py</code> file.</p>
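<p>To see that fallback in isolation, here is a minimal, framework-free sketch of the naming logic (the classes below are hypothetical stand-ins for Django's model metadata, not the real implementation):</p>

```python
# Stand-in objects mimicking Django's model metadata -- not the real classes.
class Meta:
    model_name = 'album'  # Django stores the lowercased model class name here

class Album:
    _meta = Meta()

def get_context_object_name(obj, context_object_name=None):
    # Same fallback order as SingleObjectMixin: an explicitly set
    # context_object_name wins, otherwise the model's lowercased name is used.
    if context_object_name:
        return context_object_name
    return obj._meta.model_name

print(get_context_object_name(Album()))          # falls back to 'album'
print(get_context_object_name(Album(), 'disc'))  # an explicit name wins
```

<p>This is why the template can use <code>{{ album }}</code> with no explicit configuration: the context key is derived from the model class name.</p>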
<p><a href="http://ccbv.co.uk/" rel="nofollow">http://ccbv.co.uk/</a> is a very good website for seeing all of the functions and attributes the class based views of Django has to offer in a single page.</p>
| 3 | 2016-08-31T01:24:52Z | [
"python",
"django",
"django-generic-views"
] |
sqlalchemy syntax for using mysql constants | 39,238,942 | <p>So I am trying to call the MySQL function <strong>TIMESTAMPDIFF</strong> with the <strong>SECOND</strong> constant as the first parameter, like so:</p>
<pre><code> query = session.query(func.TIME(Log.IncidentDetection).label('detection'), func.TIMESTAMPDIFF(SECOND,Log.IncidentDetection, Log.IncidentClear).label("duration")).all()
print(query)
</code></pre>
<p>I have tried it as a string and I get a mysql/mariadb error:</p>
<pre><code>query = session.query(func.TIME(Log.IncidentDetection).label('detection'), func.TIMESTAMPDIFF("SECOND",Log.IncidentDetection, Log.IncidentClear).label("duration")).all()
print(query)
</code></pre>
<p><strong>Gives me this</strong></p>
<pre><code>sqlalchemy.exc.ProgrammingError: (mysql.connector.errors.ProgrammingError) 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ''SECOND', log.`IncidentDetection`, log.`IncidentClear`) AS duration
FROM log' at line 1 [SQL: 'SELECT TIME(log.`IncidentDetection`) AS detection, TIMESTAMPDIFF(%(TIMESTAMPDIFF_1)s, log.`IncidentDetection`, log.`IncidentClear`) AS duration \nFROM log'] [parameters: {'TIMESTAMPDIFF_1': 'SECOND'}]
</code></pre>
<p>I am sure it is something simple, some sort of escape sequence or import that I am missing. I have looked through the sqlalchemy documentation to no avail.</p>
| 0 | 2016-08-31T00:21:19Z | 39,239,719 | <p>To get sqlalchemy to parse the string exactly into the query I used the <code>_literal_as_text()</code> function</p>
<p><strong>Working solution</strong></p>
<hr>
<pre><code>from sqlalchemy.sql.expression import func, _literal_as_text
# ...
query = session.query(func.TIME(Log.IncidentDetection).label('detection'), func.TIMESTAMPDIFF(_literal_as_text("SECOND"),Log.IncidentDetection, Log.IncidentClear).label("duration")).all()
print(query)
</code></pre>
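<p>As a side note, <code>_literal_as_text()</code> is a private helper (the leading underscore marks it as internal), so a sketch of the same idea with the public <code>sqlalchemy.text()</code> construct is shown below — <code>text()</code> also renders its argument verbatim into the SQL instead of as a bind parameter. The column arguments here are placeholder <code>text()</code> clauses just to keep the example self-contained; in the query above you would keep <code>Log.IncidentDetection</code> and <code>Log.IncidentClear</code>:</p>

```python
from sqlalchemy import func, text

# text("SECOND") is emitted literally, not as a %(param)s bind parameter
expr = func.TIMESTAMPDIFF(text("SECOND"), text("t1"), text("t2"))
print(str(expr))  # TIMESTAMPDIFF(SECOND, t1, t2)
```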
| 1 | 2016-08-31T02:16:06Z | [
"python",
"mysql",
"sqlalchemy",
"literals",
"func"
] |
split a list of strings at positions they match a different list of strings | 39,238,984 | <p>I wrote a small program to do the following. I'm wondering if there is an obviously more optimal solution:</p>
<p>1) Take 2 lists of strings. In general, the strings in the second list will be longer than in the first list, but this is not guaranteed</p>
<p>2) Return a list of strings derived from the second list, with any matching strings from the first list removed. The list will therefore contain strings that are <= the length of the strings in the second list.</p>
<p>Below I've displayed a picture example of what I'm talking about:
<a href="http://i.stack.imgur.com/VF1eA.png" rel="nofollow"><img src="http://i.stack.imgur.com/VF1eA.png" alt="basic outline of what should happen"></a></p>
<p>So far, this is what I have. It seems to be working fine, but I'm just curious whether there is a more elegant solution that I'm missing. By the way, I'm keeping track of the "positions" of the start and end of each string, which is important for a later part of this program.</p>
<pre><code>def split_sequence(sequence = "", split_seq = "", length = 8):
if len(sequence) < len(split_seq):
return [],[]
split_positions = [0]
for pos in range(len(sequence)-len(split_seq)):
if sequence[pos:pos+len(split_seq)] == split_seq and pos > split_positions[-1]:
split_positions += [pos, pos+len(split_seq)]
if split_positions[-1] == 0:
return [sequence], [(0,len(sequence)-1)]
split_positions.append(len(sequence))
assert len(split_positions) % 2 == 0
split_sequences = [sequence[split_positions[_]:split_positions[_+1]] for _ in range(0, len(split_positions),2)]
split_seq_positions = [(split_positions[_],split_positions[_+1]) for _ in range(0, len(split_positions),2)]
return_sequences = []
return_positions = []
for pos,seq in enumerate(split_sequences):
if len(seq) >= length:
return_sequences.append(split_sequences[pos])
return_positions.append(split_seq_positions[pos])
return return_sequences, return_positions
def create_sequences_from_targets(sequence_list = [] , positions_list = [],length=8, avoid = []):
if avoid:
for avoided_seq in avoid:
new_sequence_list = []
new_positions_list = []
for pos,sequence in enumerate(sequence_list):
start = positions_list[pos][0]
seqs, positions = split_sequence(sequence = sequence, split_seq = avoided_seq, length = length)
new_sequence_list += seqs
new_positions_list += [(positions[_][0]+start,positions[_][1]+start) for _ in range(len(positions))]
return new_sequence_list, new_positions_list
</code></pre>
<p>A Sample output:</p>
<pre><code>In [60]: create_sequences_from_targets(sequence_list=['MPHSSLHPSIPCPRGHGAQKA', 'AEELRHIHSRYRGSYWRTVRA', 'KGLAPAEISAVCEKGNFNVA'],positions_list=[(0, 20), (66, 86), (136, 155)],avoid=['SRYRGSYW'],length=3)
Out[60]:
(['MPHSSLHPSIPCPRGHGAQKA', 'AEELRHIH', 'RTVRA', 'KGLAPAEISAVCEKGNFNVA'],
[(0, 20), (66, 74), (82, 87), (136, 155)])
</code></pre>
| 3 | 2016-08-31T00:28:42Z | 39,239,120 | <p>Let's define this string, <code>s</code>, and this list <code>list1</code> of strings to remove:</p>
<pre><code>>>> s = 'NowIsTheTimeForAllGoodMenToComeToTheAidOfTheParty'
>>> list1 = 'The', 'Good'
</code></pre>
<p>Now, let's remove those strings:</p>
<pre><code>>>> import re
>>> re.split('|'.join(list1), s)
['NowIs', 'TimeForAll', 'MenToComeTo', 'AidOf', 'Party']
</code></pre>
<p>One of the powerful features of the above is that the strings in <code>list1</code> can contain regex-active characters. That may also be undesirable. As John La Rooy points out in the comments, the strings in <code>list1</code> can be made inactive with:</p>
<pre><code>>>> re.split('|'.join(re.escape(x) for x in list1), s)
['NowIs', 'TimeForAll', 'MenToComeTo', 'AidOf', 'Party']
</code></pre>
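<p>Since the question also tracks where each surviving piece starts and ends, here is a sketch (building on the same compiled pattern) that recovers both the pieces and their <code>(start, end)</code> spans with <code>finditer</code>:</p>

```python
import re

s = 'NowIsTheTimeForAllGoodMenToComeToTheAidOfTheParty'
list1 = ('The', 'Good')
rx = re.compile('|'.join(re.escape(x) for x in list1))

# Walk the matches and keep the text between them, plus each kept span.
pieces, spans, prev = [], [], 0
for m in rx.finditer(s):
    if m.start() > prev:
        pieces.append(s[prev:m.start()])
        spans.append((prev, m.start()))
    prev = m.end()
if prev < len(s):
    pieces.append(s[prev:])
    spans.append((prev, len(s)))

print(pieces)  # ['NowIs', 'TimeForAll', 'MenToComeTo', 'AidOf', 'Party']
print(spans)
```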
| 4 | 2016-08-31T00:47:23Z | [
"python",
"string",
"split"
] |
split a list of strings at positions they match a different list of strings | 39,238,984 | <p>I wrote a small program to do the following. I'm wondering if there is an obviously more optimal solution:</p>
<p>1) Take 2 lists of strings. In general, the strings in the second list will be longer than in the first list, but this is not guaranteed</p>
<p>2) Return a list of strings derived from the second list, with any matching strings from the first list removed. The list will therefore contain strings that are <= the length of the strings in the second list.</p>
<p>Below I've displayed a picture example of what I'm talking about:
<a href="http://i.stack.imgur.com/VF1eA.png" rel="nofollow"><img src="http://i.stack.imgur.com/VF1eA.png" alt="basic outline of what should happen"></a></p>
<p>So far, this is what I have. It seems to be working fine, but I'm just curious whether there is a more elegant solution that I'm missing. By the way, I'm keeping track of the "positions" of the start and end of each string, which is important for a later part of this program.</p>
<pre><code>def split_sequence(sequence = "", split_seq = "", length = 8):
if len(sequence) < len(split_seq):
return [],[]
split_positions = [0]
for pos in range(len(sequence)-len(split_seq)):
if sequence[pos:pos+len(split_seq)] == split_seq and pos > split_positions[-1]:
split_positions += [pos, pos+len(split_seq)]
if split_positions[-1] == 0:
return [sequence], [(0,len(sequence)-1)]
split_positions.append(len(sequence))
assert len(split_positions) % 2 == 0
split_sequences = [sequence[split_positions[_]:split_positions[_+1]] for _ in range(0, len(split_positions),2)]
split_seq_positions = [(split_positions[_],split_positions[_+1]) for _ in range(0, len(split_positions),2)]
return_sequences = []
return_positions = []
for pos,seq in enumerate(split_sequences):
if len(seq) >= length:
return_sequences.append(split_sequences[pos])
return_positions.append(split_seq_positions[pos])
return return_sequences, return_positions
def create_sequences_from_targets(sequence_list = [] , positions_list = [],length=8, avoid = []):
if avoid:
for avoided_seq in avoid:
new_sequence_list = []
new_positions_list = []
for pos,sequence in enumerate(sequence_list):
start = positions_list[pos][0]
seqs, positions = split_sequence(sequence = sequence, split_seq = avoided_seq, length = length)
new_sequence_list += seqs
new_positions_list += [(positions[_][0]+start,positions[_][1]+start) for _ in range(len(positions))]
return new_sequence_list, new_positions_list
</code></pre>
<p>A Sample output:</p>
<pre><code>In [60]: create_sequences_from_targets(sequence_list=['MPHSSLHPSIPCPRGHGAQKA', 'AEELRHIHSRYRGSYWRTVRA', 'KGLAPAEISAVCEKGNFNVA'],positions_list=[(0, 20), (66, 86), (136, 155)],avoid=['SRYRGSYW'],length=3)
Out[60]:
(['MPHSSLHPSIPCPRGHGAQKA', 'AEELRHIH', 'RTVRA', 'KGLAPAEISAVCEKGNFNVA'],
[(0, 20), (66, 74), (82, 87), (136, 155)])
</code></pre>
| 3 | 2016-08-31T00:28:42Z | 39,239,239 | <p>Using regular expressions simplifies the code, but it may or may not be more efficient.</p>
<pre><code>>>> import re
>>> sequence_list = ['MPHSSLHPSIPCPRGHGAQKA', 'AEELRHIHSRYRGSYWRTVRA', 'KGLAPAEISAVCEKGNFNVA'],positions_list=[(0, 20), (66, 86), (136, 155)]
>>> avoid = ['SRYRGSYW']
>>> rex = re.compile("|".join(map(re.escape, avoid)))
</code></pre>
<p>get the positions like this (you'll need to add your offsets to these)</p>
<pre><code>>>> [[j.span() for j in rex.finditer(i)] for i in sequence_list]
[[], [(8, 16)], []]
</code></pre>
<p>get the new strings like this </p>
<pre><code>>>> [rex.split(i) for i in sequence_list]
[['MPHSSLHPSIPCPRGHGAQKA'], ['AEELRHIH', 'RTVRA'], ['KGLAPAEISAVCEKGNFNVA']]
</code></pre>
<p>or the flattened list</p>
<pre><code>>>> [j for i in sequence_list for j in rex.split(i)]
['MPHSSLHPSIPCPRGHGAQKA', 'AEELRHIH', 'RTVRA', 'KGLAPAEISAVCEKGNFNVA']
</code></pre>
| 4 | 2016-08-31T01:04:55Z | [
"python",
"string",
"split"
] |
Run a chi square test with observation and expectation counts and get confidence interval | 39,239,087 | <p>I am new to chi-squared testing and trying to figure out the 'standard' way of running a chi-squared test and also getting a 95% confidence interval on the difference between success rates in two experiments.</p>
<p>My data looks like this:</p>
<pre class="lang-none prettyprint-override"><code>Condition A: 25 75 100
Condition B: 100 100 200
Total: 125 175
</code></pre>
<p>These numbers represent the counts of what was observed during an experiment. As you can see, the number of samples for Condition A vs. Condition B were not the same.</p>
<p>What I'd like to get is:</p>
<ol>
<li>A test statistic indicating whether Condition A's success rate of 33% is statistically different from Condition B's success rate of 50%.</li>
<li>I'd also like to get a 95% confidence interval on the difference between the two success rates.</li>
</ol>
<p>It seems like <code>scipy.stats.chisquare</code> expects the user to adjust the 'expected' counts so that they appear to be taken out of the same sample size as the 'observed' counts. Is that the only transformation I need to do? If not, what else do I need to do? Finally, how would I go about calculating the 95% confidence interval for the difference in proportions?</p>
 | 0 | 2016-08-31T00:42:33Z | 39,239,941 | <p>You have a <a href="https://en.wikipedia.org/wiki/Contingency_table" rel="nofollow">contingency table</a>. To perform the χ<sup>2</sup> test on this data, you can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2_contingency.html" rel="nofollow"><code>scipy.stats.chi2_contingency</code></a>:</p>
<pre><code>In [31]: from scipy.stats import chi2_contingency
In [32]: obs = np.array([[25, 75], [100, 100]])
In [33]: obs
Out[33]:
array([[ 25, 75],
[100, 100]])
In [34]: chi2, p, dof, expected = chi2_contingency(obs)
In [35]: p
Out[35]: 5.9148695289823149e-05
</code></pre>
<p>Your contingency table is 2x2, so you can use <a href="https://en.wikipedia.org/wiki/Fisher%27s_exact_test" rel="nofollow">Fisher's exact test</a>. This is implemented in scipy as <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fisher_exact.html" rel="nofollow"><code>scipy.stats.fisher_exact</code></a>:</p>
<pre><code>In [148]: from scipy.stats import fisher_exact
In [149]: oddsr, pval = fisher_exact(obs)
In [150]: pval
Out[150]: 3.7175015403965242e-05
</code></pre>
<p>scipy doesn't have much more for contingency tables. It looks like the next release of <a href="http://statsmodels.sourceforge.net/" rel="nofollow"><code>statsmodels</code></a> will have more tools for the analysis of contingency tables, but that doesn't help right now.</p>
<p>It isn't hard to write some code to compute the difference in proportion and its 95% confidence interval. Here's one way:</p>
<pre><code>def diffprop(obs):
"""
`obs` must be a 2x2 numpy array.
Returns:
delta
The difference in proportions
ci
The Wald 95% confidence interval for delta
corrected_ci
Yates continuity correction for the 95% confidence interval of delta.
"""
n1, n2 = obs.sum(axis=1)
prop1 = obs[0,0] / n1
prop2 = obs[1,0] / n2
delta = prop1 - prop2
# Wald 95% confidence interval for delta
se = np.sqrt(prop1*(1 - prop1)/n1 + prop2*(1 - prop2)/n2)
ci = (delta - 1.96*se, delta + 1.96*se)
# Yates continuity correction for confidence interval of delta
correction = 0.5*(1/n1 + 1/n2)
corrected_ci = (ci[0] - correction, ci[1] + correction)
return delta, ci, corrected_ci
</code></pre>
<p>For example,</p>
<pre><code>In [22]: obs
Out[22]:
array([[ 25, 75],
[100, 100]])
In [23]: diffprop(obs)
Out[23]:
(-0.25,
(-0.35956733089748971, -0.14043266910251032),
(-0.36706733089748972, -0.13293266910251031))
</code></pre>
<p>The first value returned is the difference in proportions <code>delta</code>. The next two pairs are the Wald 95% confidence interval for <code>delta</code>, and the Wald 95% confidence interval with Yates continuity correction.</p>
<p>If you don't like those negative values, you can reverse the rows first:</p>
<pre><code>In [24]: diffprop(obs[::-1])
Out[24]:
(0.25,
(0.14043266910251032, 0.35956733089748971),
(0.13293266910251031, 0.36706733089748972))
</code></pre>
<p>For comparison, here is a similar calculation in R:</p>
<pre><code>> obs
[,1] [,2]
[1,] 25 75
[2,] 100 100
> prop.test(obs, correct=FALSE)
2-sample test for equality of proportions without continuity
correction
data: obs
X-squared = 17.1429, df = 1, p-value = 3.467e-05
alternative hypothesis: two.sided
95 percent confidence interval:
-0.3595653 -0.1404347
sample estimates:
prop 1 prop 2
0.25 0.50
> prop.test(obs, correct=TRUE)
2-sample test for equality of proportions with continuity correction
data: obs
X-squared = 16.1297, df = 1, p-value = 5.915e-05
alternative hypothesis: two.sided
95 percent confidence interval:
-0.3670653 -0.1329347
sample estimates:
prop 1 prop 2
0.25 0.50
</code></pre>
| 2 | 2016-08-31T02:42:57Z | [
"python",
"scipy",
"statistics",
"chi-squared"
] |
List all functions in a module given fullpath | 39,239,182 | <p>I am trying to list all the functions in every encountered module to enforce the writing of test cases. However, I am having issues doing this with just the full path to the file as a string, without importing it. When I use <code>inspect.getmembers(fullpath, inspect.isfunction)</code> it returns an empty list. Is there any way to accomplish this? Here is my code so far:</p>
<pre><code>import glob
import os
import inspect
def count_tests(moduletestdict):
for key,value in moduletestdict.items():
print("Key: {}".format(key))
print(inspect.getmembers(key, inspect.isfunction))
def find_functions(base_dir):
"""moduletestdict has key value pairs where the key is the module
to be tested and the value is the testing module"""
moduletestdict = {}
for roots, dirs, files in os.walk(base_dir):
for f in files:
if (".py" in f) and (".pyc" not in f) and ("test_" not in f) and (f != "__init__.py"):
if "test_{}".format(f) in files:
moduletestdict["{}/{}".format(roots,f)] = "{}/{}".format(roots, "test_{}".format(f))
else:
moduletestdict["{}/{}".format(roots,f)] = "NOTEST"
return moduletestdict
if __name__ == "__main__":
base_dir = "/Users/MyName/Documents/Work/MyWork/local-dev/sns"
moduletestdict = find_functions(base_dir)
count_tests(moduletestdict)
</code></pre>
| 0 | 2016-08-31T00:55:37Z | 39,239,352 | <p>You do need to import the modules somehow but the ârecommendedâ way is a bit tricky since things kept changing with new versions of Python, see <a href="http://stackoverflow.com/a/67692/1640404">this answer</a>.</p>
<p>Here's an example based on the test runner of the <a href="https://github.com/christophercrouzet/gorilla" rel="nofollow">gorilla library</a> that is compatible with Python 2 and Python 3:</p>
<pre><code>import inspect
import os
import sys
def module_iterator(directory, package_dotted_path):
paths = os.listdir(directory)
for path in paths:
full_path = os.path.join(directory, path)
basename, tail = os.path.splitext(path)
if basename == '__init__':
dotted_path = package_dotted_path
elif package_dotted_path:
dotted_path = "%s.%s" % (package_dotted_path, basename)
else:
dotted_path = basename
if os.path.isfile(full_path):
if tail != '.py':
continue
__import__(dotted_path)
module = sys.modules[dotted_path]
yield module
elif os.path.isdir(full_path):
if not os.path.isfile(os.path.join(full_path, '__init__.py')):
continue
__import__(dotted_path)
package = sys.modules[dotted_path]
yield package
for module in module_iterator(full_path, dotted_path):
yield module
def main():
directory_path = '/path/to/your/package'
package_name = 'package'
sys.path.insert(0, directory_path)
modules = module_iterator(directory_path, package_name)
for module in modules:
functions = inspect.getmembers(module, inspect.isfunction)
print("functions for module '{0}': {1}".format(
module.__name__, functions)
)
if __name__ == "__main__":
main()
</code></pre>
<p>Here <code>/path/to/your/package</code> would be the repository to the package, that is the directory containing the readme, the docs, and so on like it is commonly the case with Python's packages, and <code>package_name</code> would be the directory name inside the repository that contains the root <code>__init__.py</code> file.</p>
<p>In fact the function <code>module_iterator</code> also accepts a module for its parameter <code>package_dotted_path</code>, such as <code>'package.subpackage.module'</code> for example, but then only the functions from this specific module would be retrieved. If a package is passed instead, then all the recursively nested modules will be inspected.</p>
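<p>As an aside, the <code>__import__</code> + <code>sys.modules</code> pair used above can, on Python 2.7+/3.1+, be replaced by <code>importlib.import_module</code>, which returns the module object directly. A small self-contained sketch using a stdlib module as the target:</p>

```python
import importlib
import inspect

# import_module returns the imported module directly, so no sys.modules lookup
mod = importlib.import_module('json')
functions = [name for name, obj in inspect.getmembers(mod, inspect.isfunction)]
print(functions)  # includes 'dump', 'dumps', 'load', 'loads'
```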
| 0 | 2016-08-31T01:23:01Z | [
"python"
] |
Formatting output from print on respective lines? | 39,239,203 | <p>I am trying to format the results of a query such that results are printed on their respective lines. For example, I am querying stores by store number and obtaining the location from a JSON file, but when printing, the store number and location are printing on separate lines:</p>
<p>Code Snippet: (Searching for stores 35 and 96)</p>
<pre><code>for store, data in results.items():
print('Store: {}'.format(store))
if data:
for location in data:
print(location)
</code></pre>
<p><strong>Current Output:</strong><br>
Store: 35<br>
{'location': Iowa}<br>
Store: 96<br>
{'location': Minnesota}</p>
<p><strong>Desired output (or something similar):</strong><br>
Store: 35, 'location': Iowa<br>
Store: 96, 'location': Minnesota</p>
| 1 | 2016-08-31T00:59:57Z | 39,239,216 | <p>Adding <code>end=''</code> to your first print statement should fix the problem. By specifying that the end character is an empty string you will override the default <code>\n</code> character (by default print statements end with a new line character).</p>
<pre><code>for store, data in results.items():
print('Store: {}'.format(store), end='')
if data:
for location in data:
print(location)
</code></pre>
<p>We will only add <code>end=''</code> to the first print statement because we want the new line to print after you print out the location.</p>
<p>If you want to separate your prints with a <code>,</code>, you would just append <code>+ ','</code> to the formatted string in your first print statement.</p>
<p>This will work right off the bat if you're using Python 3. If you're using Python 2.X you will have to add this line to the top of your file: <code>from __future__ import print_function</code></p>
<p>Here's a simple example of this in action:</p>
<pre><code>from __future__ import print_function
l1 = ['hello1', 'hello2', 'hello3']
l2 = ['world1', 'world2', 'world3']
for i,j in zip(l1, l2):
print (i, end='')
print (j)
Output:
hello1world1
hello2world2
hello3world3
</code></pre>
<p>If we took the same code but altered it slightly and just removed the <code>end=''</code>, this is what would happen:</p>
<pre><code>from __future__ import print_function
l1 = ['hello1', 'hello2', 'hello3']
l2 = ['world1', 'world2', 'world3']
for i,j in zip(l1, l2):
print (i)
print (j)
Output:
hello1
world1
hello2
world2
hello3
world3
</code></pre>
<p>As you can see each line would end with a new line character, this printing a new line for each statement.</p>
| 1 | 2016-08-31T01:01:38Z | [
"python",
"python-3.x",
"printing",
"formatting"
] |
Formatting output from print on respective lines? | 39,239,203 | <p>I am trying to format the results of a query such that results are printed on their respective lines. For example, I am querying stores by store number and obtaining the location from a JSON file, but when printing, the store number and location are printing on separate lines:</p>
<p>Code Snippet: (Searching for stores 35 and 96)</p>
<pre><code>for store, data in results.items():
print('Store: {}'.format(store))
if data:
for location in data:
print(location)
</code></pre>
<p><strong>Current Output:</strong><br>
Store: 35<br>
{'location': Iowa}<br>
Store: 96<br>
{'location': Minnesota}</p>
<p><strong>Desired output (or something similar):</strong><br>
Store: 35, 'location': Iowa<br>
Store: 96, 'location': Minnesota</p>
 | 1 | 2016-08-31T00:59:57Z | 39,242,898 | <p>I would write all the output to a variable and print it only once at the end. This also saves time (at the cost of more memory), since you need only a single access to stdout. The code is also easier to follow (in my opinion):</p>
<pre><code>output = ''
for store, data in results.items():
output += 'Store: {}'.format(store)
if data:
for location in data:
output += location+'\n'
# Only at the end you print your output
print(output)
</code></pre>
<p>You can also print at the end of each iteration (you still access half of the times to the stdout) with the following:</p>
<pre><code>for store, data in results.items():
output = 'Store: {}'.format(store)
if data:
for location in data:
output += location+'\n'
# Print only at the end of the loop
print(output)
</code></pre>
<p>If you want a new line for each Store but not for each "location":</p>
<pre><code>output = ''
for store, data in results.items():
output += 'Store: {}'.format(store)
if data:
for location in data:
output += location
output += '\n'
# Only at the end you print your output
print(output)
</code></pre>
<p>I think this approach is much more flexible, easier to read in the code and is also faster.</p>
<p>Hope to be helpful</p>
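<p>As a further sketch (with hypothetical sample data shaped like the question's output, i.e. each store maps to a list of location dicts), the same idea can be written with <code>str.join</code>, which also puts each store and its locations on one line:</p>

```python
# Hypothetical data -- assumed shape based on the question's printed output.
results = {35: [{'location': 'Iowa'}], 96: [{'location': 'Minnesota'}]}

lines = []
for store, data in results.items():
    parts = ['Store: {}'.format(store)] + [str(loc) for loc in (data or [])]
    lines.append(', '.join(parts))
output = '\n'.join(lines)
print(output)
```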
| 0 | 2016-08-31T07:06:44Z | [
"python",
"python-3.x",
"printing",
"formatting"
] |
Python - Injecting html tags into strings based on regex match | 39,239,361 | <p>I wrote a script in Python for a custom HTML page that finds a word within a string/line and highlights just that word using the following tags, where instance is the word being searched for.</p>
<pre><code><b><font color=\"red\">"+instance+"</font></b>
</code></pre>
<p>With the following result:
<a href="http://i.stack.imgur.com/t3KpN.png" rel="nofollow"><img src="http://i.stack.imgur.com/t3KpN.png" alt="enter image description here"></a></p>
<p>I need to find a word (case insensitive) let's say "port" within a string that can be port, Port, SUPPORT, Support, support etc, which is easy enough.</p>
<pre><code>pattern = re.compile(word, re.IGNORECASE)
find_all_instances = pattern.findall(string_to_search)
</code></pre>
<p>However, my strings often contain 2 or more instances in a single line, and I need to append
<code><b><font color=\"red\">"+instance+"</font></b></code> to each of those instances, without changing their case.</p>
<p>The problem with my approach is that I am attempting to iterate over each of the instances found with findall (exact match), even though the same match can occur multiple times within the string.</p>
<pre><code>for instance in find_all_instances:
second_pattern = re.compile(instance)
string_to_search = second_pattern.sub("<b><font color=\"red\">"+instance+"</font></b>", string_to_search)
</code></pre>
<p>This results in following:</p>
<pre><code><b><font color="red"><b><font color="red"><b><font color="red">Http</font></b></font></b></font></b></font>
</code></pre>
<p>when I need </p>
<pre><code><b><font color="red">Http</font></b>
</code></pre>
<p>I was thinking I would be able to avoid this if I could find out the exact part of the string that pattern.sub substitutes at the moment of doing it; however, I was not able to find any examples of that kind of usage, which leads me to believe that I am doing something very wrong.</p>
<p>If anyone has a way I could use to insert <code><b><font color="red">instance</font></b></code> without replacing <code>instance</code> for all matches (case insensitive), I would be grateful.</p>
 | -1 | 2016-08-31T01:24:18Z | 39,241,045 | <p>Maybe I'm misinterpreting your question, but wouldn't re.sub be the best option?</p>
<p>Example: <a href="https://repl.it/DExs" rel="nofollow">https://repl.it/DExs</a></p>
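<p>For instance, here is a sketch (my own, not the linked snippet) of how <code>re.sub</code> sidesteps the nested-tags problem: it rewrites every match in a single pass, and the <code>\g<0></code> backreference re-inserts the exact matched text, so the original case is preserved:</p>

```python
import re

def highlight(text, word):
    pattern = re.compile(re.escape(word), re.IGNORECASE)
    # \g<0> re-inserts the exact matched substring, preserving its case,
    # and sub() replaces all matches in one pass, so no nesting occurs.
    return pattern.sub(r'<b><font color="red">\g<0></font></b>', text)

print(highlight("Http http SUPPORT", "http"))
```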
| 0 | 2016-08-31T04:57:24Z | [
"python",
"html",
"regex",
"string",
"tags"
] |
Python - Injecting html tags into strings based on regex match | 39,239,361 | <p>I wrote a script in Python for a custom HTML page that finds a word within a string/line and highlights just that word using the following tags, where instance is the word being searched for.</p>
<pre><code><b><font color=\"red\">"+instance+"</font></b>
</code></pre>
<p>With the following result:
<a href="http://i.stack.imgur.com/t3KpN.png" rel="nofollow"><img src="http://i.stack.imgur.com/t3KpN.png" alt="enter image description here"></a></p>
<p>I need to find a word (case insensitive) let's say "port" within a string that can be port, Port, SUPPORT, Support, support etc, which is easy enough.</p>
<pre><code>pattern = re.compile(word, re.IGNORECASE)
find_all_instances = pattern.findall(string_to_search)
</code></pre>
<p>However, my strings often contain 2 or more instances in a single line, and I need to append
<code><b><font color=\"red\">"+instance+"</font></b></code> to each of those instances, without changing their case.</p>
<p>The problem with my approach is that I am attempting to iterate over each of the instances found with findall (exact match), even though the same match can occur multiple times within the string.</p>
<pre><code>for instance in find_all_instances:
second_pattern = re.compile(instance)
string_to_search = second_pattern.sub("<b><font color=\"red\">"+instance+"</font></b>", string_to_search)
</code></pre>
<p>This results in following:</p>
<pre><code><b><font color="red"><b><font color="red"><b><font color="red">Http</font></b></font></b></font></b></font>
</code></pre>
<p>when I need </p>
<pre><code><b><font color="red">Http</font></b>
</code></pre>
<p>I was thinking I would be able to avoid this if I could find out the exact part of the string that pattern.sub substitutes at the moment of doing it; however, I was not able to find any examples of that kind of usage, which leads me to believe that I am doing something very wrong.</p>
<p>If anyone has a way I could use to insert <code><b><font color="red">instance</font></b></code> without replacing <code>instance</code> for all matches (case insensitive), I would be grateful.</p>
 | -1 | 2016-08-31T01:24:18Z | 39,246,418 | <p>Okay, so two ways I did quickly! The second loop is definitely the way to go. It uses re.sub (as someone else commented too). Bear in mind that it replaces each match with the lowercase search term.</p>
<pre><code>import re
FILE = open("testing.txt","r")
word="port"
#THIS LOOP IS CASE SENSITIVE
for line in FILE:
newline=line.replace(word,"<b><font color=\"red\">"+word+"</font></b>")
print newline
FILE.seek(0) # rewind: the first loop already exhausted the file iterator
#THIS LOOP IS CASE-INSENSITIVE
for line in FILE:
pattern=re.compile(word,re.IGNORECASE)
newline = pattern.sub("<b><font color=\"red\">"+word+"</font></b>",line)
print newline
</code></pre>
| 0 | 2016-08-31T09:56:40Z | [
"python",
"html",
"regex",
"string",
"tags"
] |