title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
How to activate virtualenv in Cygwin | 39,202,912 | <p>complete beginner here. Trying to build a flask web app. Using Windows 8.</p>
<p>Having some problems activating my python virtualenv in Cygwin. I have been using git shell up till now with no problems.</p>
<p>I copied my folder ("app") into my cygwin home directory and it is set up like so:</p>
<pre><code>app - templates
      - static
      - flask - env - scripts - python
      - ...
      - hello.py
      - ...
</code></pre>
<p>I change directory into the app folder, then when I type the command to activate my virtualenv:</p>
<pre><code>$ source flask/env/scripts/activate
</code></pre>
<p>The terminal shows:</p>
<pre><code>(env)
</code></pre>
<p>so I assume that it is working, until I double check which python:</p>
<pre><code>$ which python
</code></pre>
<p>and that returns my original global python install, not the virtual environment. I've checked the installed packages to double check which python environment I am using.</p>
<p>I use the same command in git shell and it activates the right virtualenv. Where am I going wrong / what do I need to change? Please let me know if you need any more information.</p>
<p>I created a new virtual environment using cygwin and when I activated the new env, it switched to that environment fine. Why won't it work for the folder which I copied in?</p>
<p>Thanks,</p>
<p>Sam</p>
| 0 | 2016-08-29T09:33:43Z | 39,207,090 | <p>You should not move the virtualenv. The <code>activate</code> script inside the virtualenv uses absolute paths internally. If you move the directory, the paths will no longer work, and so <code>which python</code> finds the first valid binary on <code>PATH</code>, which is your global binary.</p>
<p>If you need to move the project to a different location, and the virtualenv together with it, then recreate the virtualenv, do not copy it.
The recommended practice is to have a <code>requirements.txt</code> file, and install packages using <code>pip install -r requirements.txt</code>.
That way, recreating a virtualenv is very easy: create an empty virtualenv, and run the <code>pip ...</code> command. There should be nothing else inside the virtualenv that needs moving, only what <code>pip</code> put there, or other python installer scripts, if you used any (and which you would need to re-run, in addition to <code>pip</code>).</p>
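<p>A minimal sketch of why the copied environment misbehaves (paths are illustrative; on Cygwin the scripts live under <code>env/Scripts</code> rather than <code>env/bin</code>, and this assumes the stdlib <code>venv</code> module is available):</p>

```shell
# Create a throwaway virtualenv and inspect its activate script.
tmp="$(mktemp -d)"
python3 -m venv --without-pip "$tmp/env"

# On most Python versions the absolute creation path is baked into
# the script, which is why a copied/moved virtualenv silently falls
# back to the global python.
grep 'VIRTUAL_ENV' "$tmp/env/bin/activate"
```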
| 0 | 2016-08-29T13:06:03Z | [
"python",
"shell",
"cygwin",
"virtualenv"
] |
AssertionError: yield from wasn't used with future | 39,202,962 | <p>This code:</p>
<pre><code>import asyncio
async def wee():
    address = 'localhost'
    port = 5432
    reader, writer = asyncio.open_connection(address, port)
    message = '/t'
    print('Send: %r' % message)
    writer.write(message.encode())

async def main():
    t2 = asyncio.ensure_future(wee())
    await t2

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
</code></pre>
<p>...produces an error AssertionError: yield from wasn't used with future
with this traceback:</p>
<pre><code>Traceback (most recent call last):
File "ssh_as.py", line 20, in <module>
loop.run_until_complete(main())
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/base_events.py", line 337, in run_until_complete
return future.result()
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/tasks.py", line 241, in _step
result = coro.throw(exc)
File "ssh_as.py", line 16, in main
await t2
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/futures.py", line 358, in __iter__
yield self # This tells Task to wait for completion.
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/tasks.py", line 290, in _wakeup
future.result()
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "ssh_as.py", line 9, in wee
reader, writer = asyncio.open_connection(address, port)
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/streams.py", line 64, in open_connection
lambda: protocol, host, port, **kwds)
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/base_events.py", line 599, in create_connection
yield from tasks.wait(fs, loop=self)
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/tasks.py", line 341, in wait
return (yield from _wait(fs, timeout, return_when, loop))
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/tasks.py", line 424, in _wait
yield from waiter
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/futures.py", line 359, in __iter__
assert self.done(), "yield from wasn't used with future"
AssertionError: yield from wasn't used with future
</code></pre>
<p>If I use just one variable instead of unpacking asyncio.open_connection to reader, writer and just to <code>dummy=asyncio.open_connection(...</code> there is no such error, though <code>dummy</code> object is not usable as documentation's <code>StreamReader</code> as well - <code>TypeError: 'generator' object is not subscriptable</code>. Absolutely no idea what happens, please help.</p>
| 1 | 2016-08-29T09:35:31Z | 39,203,159 | <p>You need to change the <code>asyncio.open_connection(address, port)</code> line to <code>await asyncio.open_connection(address, port)</code>. Open connection returns a future/promise etc and you need to "await" that result in order to be able to access its contents.</p>
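<p>A runnable sketch of the fix (the dummy server below is my stand-in for the real service on port 5432, and <code>asyncio.run</code> assumes Python 3.7+; on 3.5 use <code>loop.run_until_complete(main())</code> as in the question):</p>

```python
import asyncio

received = []

async def handle(reader, writer):
    # Dummy server standing in for whatever normally listens on the port.
    data = await reader.read(100)
    received.append(data)
    writer.close()

async def wee(port):
    # open_connection is itself a coroutine: without `await` it only
    # returns a coroutine object, which is what triggered the
    # "yield from wasn't used with future" assertion.
    reader, writer = await asyncio.open_connection('127.0.0.1', port)
    writer.write(b'/t')
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]
    await wee(port)
    await asyncio.sleep(0.1)  # give the handler time to finish reading
    server.close()
    await server.wait_closed()

asyncio.run(main())
```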
| 2 | 2016-08-29T09:45:45Z | [
"python",
"python-3.x",
"python-asyncio"
] |
how to match paper sheet by opencv | 39,203,011 | <p>I have several kinds of paper sheets, and I am writing a Python script with OpenCV to recognize and classify sheets of the same kind. I am stuck on how to find sheets of the same kind. For example, I attached two pictures. Picture 1 is the template, and picture 2 is a sheet I need to check for a match against the template. I don't need to match the text, just the form. I need to classify matching sheets among many paper sheets. </p>
<p>I have adjusted the skew of the paper and detected some lines, but I don't know how to match the lines and judge whether a sheet is the same kind as the template.</p>
<p>Can anyone give me advice on a matching algorithm?<a href="http://i.stack.imgur.com/9iGD0.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/9iGD0.jpg" alt="This is the paper sheet template"></a><a href="http://i.stack.imgur.com/S5W4u.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/S5W4u.jpg" alt="This is the paper sheet I need to match with the template"></a><a href="http://i.stack.imgur.com/L2KIy.png" rel="nofollow"><img src="http://i.stack.imgur.com/L2KIy.png" alt="Need matching the lines"></a></p>
| 1 | 2016-08-29T09:37:56Z | 39,221,626 | <p>I'm not sure if such paper form is rich enough in visual information for this solution, but I think you should start with feature detection and homography calculation (opencv tutorial: <a href="http://docs.opencv.org/3.0-beta/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography" rel="nofollow">Features2D + Homography</a>). From there you can try to adjust 2D features for your problem.</p>
| 0 | 2016-08-30T07:44:11Z | [
"python",
"c++",
"algorithm",
"opencv",
"matching"
] |
how to match paper sheet by opencv | 39,203,011 | <p>I have several kinds of paper sheets, and I am writing a Python script with OpenCV to recognize and classify sheets of the same kind. I am stuck on how to find sheets of the same kind. For example, I attached two pictures. Picture 1 is the template, and picture 2 is a sheet I need to check for a match against the template. I don't need to match the text, just the form. I need to classify matching sheets among many paper sheets. </p>
<p>I have adjusted the skew of the paper and detected some lines, but I don't know how to match the lines and judge whether a sheet is the same kind as the template.</p>
<p>Can anyone give me advice on a matching algorithm?<a href="http://i.stack.imgur.com/9iGD0.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/9iGD0.jpg" alt="This is the paper sheet template"></a><a href="http://i.stack.imgur.com/S5W4u.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/S5W4u.jpg" alt="This is the paper sheet I need to match with the template"></a><a href="http://i.stack.imgur.com/L2KIy.png" rel="nofollow"><img src="http://i.stack.imgur.com/L2KIy.png" alt="Need matching the lines"></a></p>
| 1 | 2016-08-29T09:37:56Z | 39,222,154 | <p>Check out the <code>findContours</code> and <code>matchShapes</code> functions. Either way, you are much better off matching a specific visual ID within the form that identifies it, like a really simple form of barcode.</p>
| 0 | 2016-08-30T08:13:10Z | [
"python",
"c++",
"algorithm",
"opencv",
"matching"
] |
Optional keys in string formats using '%' operator? | 39,203,016 | <p>Is it possible to have optional keys in <a href="https://docs.python.org/2/library/stdtypes.html#string-formatting-operations" rel="nofollow">string formats</a> using the '%' operator?
I'm using the <strong>logging</strong> API with Python 2.7, so I can't use <a href="https://www.python.org/dev/peps/pep-3101/" rel="nofollow">Advanced String Formatting</a>.</p>
<p>My problem is as follow:</p>
<pre><code>>>> import logging
>>> FORMAT = '%(asctime)-15s %(message)s %(user)s'
>>> logging.basicConfig(format=FORMAT)
>>> logging.warning("It works for:", extra={'user': 'me'})
2016-08-29 11:24:31,262 It works for: me
>>> logging.warning("It does't work!")
Traceback (most recent call last):
...
KeyError: 'user'
Logged from file <input>, line 1
</code></pre>
<p>I want to have an empty string for <em>user</em> if missing. How can I do that?</p>
<p>I tried with a <a href="https://docs.python.org/2/library/collections.html#collections.defaultdict" rel="nofollow">defaultdict</a>, but it fails:</p>
<pre><code>>>> import collections
>>> extra = collections.defaultdict(unicode)
>>> logging.warning("It does't work!", extra=extra)
Traceback (most recent call last):
...
KeyError: 'user'
Logged from file <input>, line 1
</code></pre>
<p>By contrast, with <a href="http://jinja.pocoo.org/docs/dev/" rel="nofollow">Jinja2</a>, we can do:</p>
<pre><code>>>> import jinja2
>>> jinja2.Template('name: {{ name }}, email: {{ email }}').render(name="me")
u'name: me, email: '
</code></pre>
<p>=> no exception here, just an empty string (for "email").</p>
| 0 | 2016-08-29T09:38:14Z | 39,207,618 | <p>A) The <code>defaultdict</code> approach works fine, but only if used directly.</p>
<pre><code>>>> import collections
>>> dd=collections.defaultdict(str)
>>> dd['k'] = 22
>>> '%(k)s %(nn)s' % dd
'22 '
</code></pre>
<hr>
<p>B) The <code>extra</code> argument to a log function is used as described in the docs, i.e. not directly as shown above. That's why using a <code>defaultdict</code> instead of a regular <code>dict</code> does not make a difference.</p>
<blockquote>
<p>The third keyword argument is extra which can be used to pass a
dictionary which is used to populate the <strong>dict</strong> of the LogRecord
created for the logging event with user-defined attributes.</p>
</blockquote>
<hr>
<p>C) You can use a logging filter to take care of the missing extra data:</p>
<pre><code>import logging
class UserFilter:
    def filter(self, record):
        try:
            record.user
        except AttributeError:
            record.user = '<N/A>'
        return True

FORMAT = '%(asctime)-15s %(message)s %(user)s'
logging.basicConfig(format=FORMAT)
logging.getLogger().addFilter(UserFilter())
logging.warning("It works for:", extra={'user': 'me'})
logging.warning("It doesn't work!")
# DATE TIME It doesn't work! <N/A>
</code></pre>
<p>Any class with a <code>filter</code> method is fine. It can modify the record in-place and it must return <code>True</code> for accepting the record or <code>False</code> for filtering it out.</p>
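<p>D) A related sketch (my own addition, not part of the filter approach above): a <code>logging.LoggerAdapter</code> whose <code>process</code> method supplies a default <code>user</code> before the record is created:</p>

```python
import logging

class DefaultUserAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        # Make sure every record carries a 'user' attribute, defaulting to ''.
        extra = kwargs.setdefault('extra', {})
        extra.setdefault('user', '')
        return msg, kwargs

logging.basicConfig(format='%(asctime)-15s %(message)s %(user)s')
log = DefaultUserAdapter(logging.getLogger(__name__), {})
log.warning("It works for:", extra={'user': 'me'})
log.warning("It also works with no extra at all")
```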
| 1 | 2016-08-29T13:33:50Z | [
"python",
"python-2.7",
"string-formatting"
] |
Function keeps doing the same thing | 39,203,043 | <p>This program is supposed to find 1000 prime numbers and pack them into a list.</p>
<p>Here's the code:</p>
<pre><code>num = raw_input('enter a starting point')
primes = [2]
num = int(num)
def prime_count(num):
    if num % 2 == 0: #supposed to check if the number is divided by 2 evenly
        num = num +1 #if it is, then add 1 to that number and check again
        return num
    elif num % num == 0:
        primes.append(num) #supposed to add that prime to a list
        num = num + 1 #add 1 and check again
        return num

while len(primes) <= 999:
    prime_count(num)
</code></pre>
<p>So what actually happens when I run it:
it asks me for input and then does various things depending on what I enter:</p>
<ul>
<li>If I choose a prime, let's say 3, it runs and adds 999 of 3s to the list instead of adding it just one time and going on to try 4</li>
<li>If I choose a non-prime, let's say 4, it just breaks, after that I can't even print out a list</li>
</ul>
<p>What am I doing wrong?</p>
<p>UPDATE:
I fixed it, but when I run it with this I'm getting an error (TypeError: unsupported operand type(s) for %: 'NoneType' and 'int')</p>
<pre><code>number = raw_input('enter a starting point')
primes = [2]
number = int(number)
def prime_count(x):
    if x % 2 == 0: #supposed to check if the number is divided by 2 evenly
        number = x +1 #if it is, then add 1 to that number and check again
        return number
    else:
        for i in range(3, x-1):
            if x % i == 0:
                primes.append(x) #supposed to add that prime to a list
                number = x + 1 #add 1 and check again
                return number

while len(primes) <= 999:
    number = prime_count(number)
</code></pre>
| 0 | 2016-08-29T09:39:26Z | 39,203,135 | <p>You're never using the return value from prime_count. Try this:</p>
<pre><code>while len(primes) <= 999:
    num = prime_count(num)
</code></pre>
<p>You've set yourself up for confusion by using the name <code>num</code> both as a parameter (and local variable) inside <code>prime_count</code> and as a global variable. Even though they have the same name, they are different variables, due to Python's scoping rules.</p>
<p>Also, <code>prime_count</code> is (probably unintentionally) leveraging the fact that <code>primes</code> is a global variable. Since you're not <em>assigning</em> to it, but rather just calling a method (append), the code will work without using the <code>global</code> keyword.</p>
<p>However, your algorithm isn't even correct. <code>if num % num == 0</code> says <em>"if a number divided by itself has a remainder of zero"</em> which will <em>always</em> be true. This program will find a lot of "primes" that aren't primes.</p>
<p>Real Python programs do very little in the global scope; your current code is just asking for confusion. I suggest you start with this template, and also do some reading of existing Python code.</p>
<pre><code>def add_three(a_param):
    a_local_var = 3  # This is *different* than the one in main!
                     # Changes to this variable will *not* affect
                     # identically-named variables in other functions
    return a_local_var + a_param

def main():
    a_local_var = 2
    result = add_three(a_local_var)
    print result  # prints 5

if __name__ == '__main__':
    main()
</code></pre>
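<p>For reference, a minimal working sketch of the intended algorithm (my own illustration using trial division, not the poster's code fixed line by line):</p>

```python
def is_prime(n):
    # Trial division: n is prime if no i in 2..sqrt(n) divides it.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def first_primes(start, count=1000):
    # Collect the first `count` primes that are >= start.
    primes = []
    num = max(start, 2)
    while len(primes) < count:
        if is_prime(num):
            primes.append(num)
        num += 1
    return primes

print(first_primes(3, count=5))  # [3, 5, 7, 11, 13]
```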
| 0 | 2016-08-29T09:44:15Z | [
"python",
"function",
"loops"
] |
Python configparser get all from a section and write to new file | 39,203,224 | <p>How would one use Python's config parser to get every entry under a single section and write it to a new file, without actually specifying each entry?</p>
<p>For example, how would I go about getting everything under the section "testing" and writing it to a new file without using config.get and listing every entry?</p>
<p>config file</p>
<pre><code>[testing]
test1=test23452
test2=test45235
test3=test54524
[donotneed]
something1=something
something2=somethingelse
</code></pre>
<p>I've tried the following just for testing purposes </p>
<pre><code>config = ConfigParser.ConfigParser()
config.read(configFilePath)
testing = {k:v for k,v in config.items('testing')}
for x in testing:
    print (x)
</code></pre>
<p>but it's only printing the following</p>
<pre><code>test1
test3
test2
</code></pre>
<p>and not everything in that section, I need it to give me</p>
<pre><code>test1=test23452
test2=test45235
test3=test54524
</code></pre>
| 1 | 2016-08-29T09:48:15Z | 39,204,762 | <p><code>for x in testing</code> iterates over only the keys of the dictionary <code>testing</code>.</p>
<p>You need:</p>
<pre><code>for x in testing.items():
    print x[0] + '=' + x[1]
</code></pre>
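<p>A sketch of the full round trip, copying one section into a new file (this uses the Python 3 spelling <code>configparser</code> and <code>read_string</code>; on 2.7 the module is <code>ConfigParser</code> and you would read from a file or <code>readfp(StringIO(...))</code> instead):</p>

```python
import configparser
import os
import tempfile

src = configparser.ConfigParser()
src.read_string(
    "[testing]\n"
    "test1=test23452\n"
    "test2=test45235\n"
    "test3=test54524\n"
    "[donotneed]\n"
    "something1=something\n"
)

# Copy only the [testing] section into a fresh parser...
dst = configparser.ConfigParser()
dst.add_section('testing')
for key, value in src.items('testing'):
    dst.set('testing', key, value)

# ...and write it out to a new file.
out_path = os.path.join(tempfile.mkdtemp(), 'testing_only.ini')
with open(out_path, 'w') as f:
    dst.write(f)
```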
| 0 | 2016-08-29T11:07:33Z | [
"python",
"parsing",
"config",
"python-config"
] |
Printing out all the possibilites of ambiguous Morse code | 39,203,263 | <p>I've been tasked with a problem for school and it's left me stumped. What I have to do is read in an ambiguous Morse Code string (i.e. without any spaces to state what is a letter and what is not) and print out what all the possible <em>valid</em> english translations for that Morse Code could be. I've seen an algorithm to solve this exact problem somewhere on the internet but have no idea how to convert it to Python 3 and can not find it for the life of me.</p>
<p>Some helpful things: </p>
<ul>
<li><p>I have a list of words which the program considers valid: <a href="https://www.mediafire.com/?6j1tr3h4uwhrryf" rel="nofollow">Download</a></p></li>
<li><p>The program does not need to output gramatically correct sentences, only sentences that form words that are valid and in <code>words.txt</code>.</p></li>
<li>Some extra things that define if a sentence is valid or not is that the sentence cannot have two identical words; all words must be unique, and there cannot be more than one 1-letter word and one 2-letter word in the sentence.</li>
<li><p>My code, which at the moment is incomplete but sorts all the words into their corresponding Morse Code definitions:</p>
<pre><code># Define the mapping from letter to Morse code.
CODES = {
    'A': '.-',
    'B': '-...',
    'C': '-.-.',
    'D': '-..',
    'E': '.',
    'F': '..-.',
    'G': '--.',
    'H': '....',
    'I': '..',
    'J': '.---',
    'K': '-.-',
    'L': '.-..',
    'M': '--',
    'N': '-.',
    'O': '---',
    'P': '.--.',
    'Q': '--.-',
    'R': '.-.',
    'S': '...',
    'T': '-',
    'U': '..-',
    'V': '...-',
    'W': '.--',
    'X': '-..-',
    'Y': '-.--',
    'Z': '--..',
}

words={}
f=open('words.txt').read()
a=f
for i in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
    a=a.replace(i,CODES[i])

f=f.split('\n')
a=a.split('\n')
for i in f:
    words[i]=a[f.index(i)]

q=input('Morse: ')
</code></pre></li>
</ul>
<p>An example test case of how this would work is:</p>
<pre><code>Morse: .--....-....-.-..-----.
A BED IN DOG
A DID IN DOG
A BLUE DOG
A TEST IF DOG
WEST I IN DOG
WEST EVEN A ON
WEST IF DOG
</code></pre>
| -1 | 2016-08-29T09:50:00Z | 39,208,045 | <p>To complete the program, you need to use a recursive algorithm as there are so many possible combinations of words.</p>
<p>I have changed your variable names so that it is easier to understand what data they hold.</p>
<p>The <code>decode</code> function is recursive. Its first check is whether the remaining Morse string is empty: that is the finishing point, so the English sentence built up along that branch is printed.</p>
<p>The rest of the function checks whether the first <code>i</code> symbols form a valid word. <code>i</code> starts at <code>1</code> (the shortest possible code) and runs up to the longest Morse word in the file; the while loop also guards against reading past the end of the remaining Morse string.</p>
<p>The function must not modify its own arguments, because several candidate words can be found at the same level of recursion and would clash, so new variables are created for the updated English and Morse. It also checks whether a one-letter or two-letter word has already been used, enforcing the uniqueness rules.</p>
<pre><code>from string import ascii_uppercase

# Defines the letter to Morse mapping
code = {
    'A': '.-',
    'B': '-...',
    'C': '-.-.',
    'D': '-..',
    'E': '.',
    'F': '..-.',
    'G': '--.',
    'H': '....',
    'I': '..',
    'J': '.---',
    'K': '-.-',
    'L': '.-..',
    'M': '--',
    'N': '-.',
    'O': '---',
    'P': '.--.',
    'Q': '--.-',
    'R': '.-.',
    'S': '...',
    'T': '-',
    'U': '..-',
    'V': '...-',
    'W': '.--',
    'X': '-..-',
    'Y': '-.--',
    'Z': '--..'
}

# Opens the file and reads all words
file = open("words.txt", "r")
words = file.read()
file.close()

morse = words
# Converts all words to morse
for letter in list(ascii_uppercase):
    morse = morse.replace(letter, code[letter])

# Creates lists of the morse and english words from the strings
morsewords = morse.split("\n")
engwords = words.split("\n")

# Finds the max length of the morse words
maxlength = max(len(word) for word in morsewords)

# Creates a dictionary of {morse word: english word}
words = dict(zip(morsewords, engwords))

# MorseInput = input("Morse code :")
MorseInput = ".--....-....-.-..-----."

# This is the recursive function
def decode(morse, eng="", oneWord=False, twoWord=False):
    # Print the english when finished
    if morse == "":
        print(eng)
    else:
        i = 1
        # While loop goes through every possWord where the condition is met
        while len(morse) >= i and i <= maxlength:
            possWord = morse[:i]
            # Checks if the word is a real word
            if possWord in words.keys():
                # Real word, therefore add it to the english and trim the morse
                newEng = eng + " " + words[possWord]
                newMorse = morse[i:]
                # Checks that not more than one one-letter word is used
                if len(words[possWord]) == 1:
                    if not oneWord:
                        decode(newMorse, newEng, True, twoWord)
                # Checks that not more than one two-letter word is used
                elif len(words[possWord]) == 2:
                    if not twoWord:
                        decode(newMorse, newEng, oneWord, True)
                # Word is longer than two, so it doesn't matter
                else:
                    decode(newMorse, newEng, oneWord, twoWord)
            i += 1

decode(MorseInput)
</code></pre>
<p>I hope that my comments make some sense.</p>
<p>I am sure that the code could be made better and shorter but I did it in under an hour.</p>
<p>It prints</p>
<pre><code>A TEST IF DOG
A DID IN DOG
A BLUE DOG
WEST I IN DOG
WEST IF DOG
WEST EVEN A ON
</code></pre>
| 1 | 2016-08-29T13:54:25Z | [
"python",
"morse-code"
] |
how to pass string containing newline to python in php | 39,203,274 | <p>I have the following line of code in php that I m using to execute the python screen. and it takes in text from a html textarea which will contain newline characters</p>
<pre><code>$string = $_POST['textarea']; // e.g. "String 1\n string 2\n"
$command = escapeshellcmd("python script.py -c \"$string\"");
</code></pre>
<p>when the script.py is executed, only "String 1" is received by the python script as arg. May I ask is there any native way for me to get the string passed?</p>
<p>If you feel that this is a duplicated question, please let me know where I can find this answer. I have been searching it all over stackoverflow.</p>
| 0 | 2016-08-29T09:50:20Z | 39,217,994 | <p>I have managed to find a way to do this.</p>
<p>In the PHP file, replace all newlines with the literal '\n' before calling escapeshellcmd:</p>
<pre><code>$string = trim(preg_replace('/[\r\n]+/', '\n', $_POST['textarea']));
// For a string like:
// abcd
// defg
// will become
// abcd\\ndefg
</code></pre>
<p>Since the backslash in the literal '\n' has been escaped by PHP's escapeshellcmd (with a prepended backslash), the shell passes the string through and Python receives <strong>abcd\ndefg</strong> as one of the arguments.
Next, in the Python code, we simply undo the escaping using the code below:</p>
<pre><code>arg = sys.argv[1].decode('string_escape')
</code></pre>
<p>I hope the above solution will be able to help those who are facing the similar issue as me. =)</p>
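<p>For illustration, the unescaping step in isolation (Python 2's <code>decode('string_escape')</code> does not exist on Python 3 strings; the closest stdlib equivalent, used here, is <code>codecs.decode(..., 'unicode_escape')</code>):</p>

```python
import codecs

# Stand-in for sys.argv[1] after the shell delivered a literal backslash-n.
arg = r"abcd\ndefg"
assert '\\n' in arg  # still two characters: a backslash and an 'n'

text = codecs.decode(arg, 'unicode_escape')
print(text.splitlines())  # ['abcd', 'defg']
```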
| 0 | 2016-08-30T02:42:45Z | [
"php",
"python"
] |
How to change the field value as Red color when end date is less than current date on field? | 39,203,384 | <p>I want to show the field value in red when the end date is earlier than the current date. I have a field named followup_date; if the follow-up date has passed, it should be marked red in the grid. I have no idea how to write this method. How is this possible? Can anyone help me out? Thanks in advance...</p>
| 0 | 2016-08-29T09:56:06Z | 39,213,314 | <p>If by grid you mean a One2many table, here is an example that may help you.</p>
<pre><code><tree colors="red:followup_date < current_date">
    <field name="followup_date"/>
</tree>
</code></pre>
| 1 | 2016-08-29T18:59:57Z | [
"python",
"methods",
"openerp",
"odoo-8"
] |
How to change the field value as Red color when end date is less than current date on field? | 39,203,384 | <p>I want to show the field value in red when the end date is earlier than the current date. I have a field named followup_date; if the follow-up date has passed, it should be marked red in the grid. I have no idea how to write this method. How is this possible? Can anyone help me out? Thanks in advance...</p>
| 0 | 2016-08-29T09:56:06Z | 39,215,310 | <p>I resolved my problem with this; it's working for me now:</p>
<pre><code><tree string="Claims" position="attributes">
    <attribute name="colors">red:followup_date &lt; current_date;</attribute>
</tree>
<field name="stage_id" position="after">
    <field name="followup_date" invisible="1"/>
    <field name="current_date" invisible="1"/>
</field>
</code></pre>
| 0 | 2016-08-29T21:12:58Z | [
"python",
"methods",
"openerp",
"odoo-8"
] |
Scikit Learn Categorical data with random forests | 39,203,422 | <p>I am trying to work with the titanic survival challenge in kaggle <a href="https://www.kaggle.com/c/titanic" rel="nofollow">https://www.kaggle.com/c/titanic</a>.</p>
<p>I am not experienced in R, so I am using Python and Scikit-Learn for the <strong>Random Forest Classifier</strong>.</p>
<p>I am seeing many people using scikit-learn convert their categorical features with many levels into dummy variables.</p>
<p>I don't understand the point of doing this. Why can't we just map the levels to numeric values and be done with it?</p>
<p>Also, I saw someone do the following:
there was a categorical feature <strong>Pclass</strong> with three levels; he created 3 dummy variables for it and dropped the one with the lowest survival rate. I couldn't understand this either; I thought decision trees didn't care about correlated features.</p>
| 0 | 2016-08-29T09:57:44Z | 39,208,325 | <p>If you just map levels to numeric values, python will treat your values as numeric. That is, numerically <code>1<2</code> and so on even if your levels were initially unordered. Think about the "distance" problem. This distance between 1 and 2 is 1, between 1 and 3 is 2. But what were the original distances between your categorical variables? For example, what are the distances between "banana" "peach" and "apple"? Do you suppose that they are all equal? </p>
<p>About dummy variable: if you have 3 classes and create 3 dummy variables, they not just correlated, they are linearly dependent. This is never good.</p>
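<p>A stdlib-only sketch of the difference (in practice one would use <code>pandas.get_dummies</code> or scikit-learn's <code>OneHotEncoder</code>, but they are not needed to see the point):</p>

```python
fruits = ['banana', 'peach', 'apple', 'peach']
levels = sorted(set(fruits))  # ['apple', 'banana', 'peach']

# Label encoding: imposes an arbitrary order and distance between levels.
label_encoded = [levels.index(f) for f in fruits]
print(label_encoded)  # [1, 2, 0, 2]

# One-hot / dummy encoding: one 0/1 column per level, no implied order.
one_hot = [[int(f == level) for level in levels] for f in fruits]
print(one_hot)  # [[0, 1, 0], [0, 0, 1], [1, 0, 0], [0, 0, 1]]
```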
| 2 | 2016-08-29T14:08:02Z | [
"python",
"scikit-learn",
"random-forest"
] |
Convert a list of dictionaries into a set of dictionaries | 39,203,516 | <p>How can I make a set of dictionaries from one list of dictionaries?</p>
<p>Example:</p>
<pre><code>import copy
v1 = {'k01': 'v01', 'k02': {'k03': 'v03', 'k04': {'k05': 'v05'}}}
v2 = {'k11': 'v11', 'k12': {'k13': 'v13', 'k14': {'k15': 'v15'}}}
data = []
N = 5
for i in range(N):
data.append(copy.deepcopy(v1))
data.append(copy.deepcopy(v2))
print data
</code></pre>
<p>How would you create a set of dictionaries from the list <code>data</code>?</p>
<p>NB: One dictionary is equal to another when they are structurally the same; that is, they have <strong>exactly</strong> the same keys and the same values (recursively).</p>
| -1 | 2016-08-29T10:02:06Z | 39,203,628 | <p>Dictionaries are mutable and therefore not hashable in python.</p>
<p>You could either create a dict-subclass with a <code>__hash__</code> method. Make sure that the hash of a dictionary does not change while it is in the set (that probably means that you cannot allow modifying the members).
See <a href="http://code.activestate.com/recipes/414283-frozen-dictionaries/" rel="nofollow">http://code.activestate.com/recipes/414283-frozen-dictionaries/</a> for an example implementation of frozendicts.</p>
<p>If you can define a sort order on your (frozen) dictionaries, you could alternatively use a data structure based on a binary tree instead of a set. This boils down to the bisect solution provided in the link below.</p>
<p>See also <a href="http://stackoverflow.com/a/18824158/5069869">http://stackoverflow.com/a/18824158/5069869</a> for an explanation why sets without hash do not make sense.</p>
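<p>A minimal sketch of the recursive-freeze idea (my own illustration of the approach, using a <code>frozenset</code> of items rather than a full frozendict class):</p>

```python
def freeze(obj):
    # Recursively convert dicts/lists into hashable equivalents so that
    # structurally equal dictionaries compare and hash the same.
    if isinstance(obj, dict):
        return frozenset((k, freeze(v)) for k, v in obj.items())
    if isinstance(obj, list):
        return tuple(freeze(x) for x in obj)
    return obj

v1 = {'k01': 'v01', 'k02': {'k03': 'v03', 'k04': {'k05': 'v05'}}}
v2 = {'k11': 'v11', 'k12': {'k13': 'v13', 'k14': {'k15': 'v15'}}}
data = [dict(v1) for _ in range(5)] + [dict(v2) for _ in range(5)]

unique = {freeze(d) for d in data}
print(len(unique))  # 2
```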
| 0 | 2016-08-29T10:07:13Z | [
"python",
"dictionary"
] |
Convert a list of dictionaries into a set of dictionaries | 39,203,516 | <p>How can I make a set of dictionaries from one list of dictionaries?</p>
<p>Example:</p>
<pre><code>import copy
v1 = {'k01': 'v01', 'k02': {'k03': 'v03', 'k04': {'k05': 'v05'}}}
v2 = {'k11': 'v11', 'k12': {'k13': 'v13', 'k14': {'k15': 'v15'}}}
data = []
N = 5
for i in range(N):
data.append(copy.deepcopy(v1))
data.append(copy.deepcopy(v2))
print data
</code></pre>
<p>How would you create a set of dictionaries from the list <code>data</code>?</p>
<p>NB: One dictionary is equal to another when they are structurally the same; that is, they have <strong>exactly</strong> the same keys and the same values (recursively).</p>
| -1 | 2016-08-29T10:02:06Z | 39,204,359 | <p>A cheap workaround would be to serialize your dicts, for example:</p>
<pre><code>import json
dset = set()
d1 = {'a':1, 'b':{'c':2}}
d2 = {'b':{'c':2}, 'a':1} # the same according to your definition
d3 = {'x': 42}
dset.add(json.dumps(d1, sort_keys=True))
dset.add(json.dumps(d2, sort_keys=True))
dset.add(json.dumps(d3, sort_keys=True))
for p in dset:
    print json.loads(p)
</code></pre>
<p>In the long run it would make sense to wrap the whole thing in a class like <code>SetOfDicts</code>.</p>
| 2 | 2016-08-29T10:46:35Z | [
"python",
"dictionary"
] |
Euclidean Distance Matrix Using Pandas | 39,203,662 | <p>I have a .csv file that contains city, latitude and longitude data in the below format:</p>
<pre><code>CITY|LATITUDE|LONGITUDE
A|40.745392|-73.978364
B|42.562786|-114.460503
C|37.227928|-77.401924
D|41.245708|-75.881241
E|41.308273|-72.927887
</code></pre>
<p>I need to create a distance matrix in the below format (please ignore the dummy values):</p>
<pre><code> A B C D E
A 0.000000 6.000000 5.744563 6.082763 5.656854
B 6.000000 0.000000 6.082763 5.385165 5.477226
C 1.744563 6.082763 0.000000 6.000000 5.385165
D 6.082763 5.385165 6.000000 0.000000 5.385165
E 5.656854 5.477226 5.385165 5.385165 0.000000
</code></pre>
<p>I have loaded the data into a pandas dataframe and have created a cross join as below:</p>
<pre><code>import pandas as pd
df_A = pd.read_csv('lat_lon.csv', delimiter='|', encoding="utf-8-sig")
df_B = df_A
df_A['key'] = 1
df_B['key'] = 1
df_C = pd.merge(df_A, df_B, on='key')
</code></pre>
<ul>
<li>Can you please help me create the above matrix structure?</li>
<li>Also, is it possible to avoid step involving cross join?</li>
</ul>
| 1 | 2016-08-29T10:08:57Z | 39,205,919 | <p>You can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html#scipy.spatial.distance.pdist" rel="nofollow">pdist</a> and <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.squareform.html#scipy.spatial.distance.squareform" rel="nofollow">squareform</a> methods from <a href="http://docs.scipy.org/doc/scipy/reference/spatial.distance.html" rel="nofollow">scipy.spatial.distance</a>:</p>
<pre><code>In [12]: df
Out[12]:
CITY LATITUDE LONGITUDE
0 A 40.745392 -73.978364
1 B 42.562786 -114.460503
2 C 37.227928 -77.401924
3 D 41.245708 -75.881241
4 E 41.308273 -72.927887
In [13]: from scipy.spatial.distance import squareform, pdist
In [14]: pd.DataFrame(squareform(pdist(df.iloc[:, 1:])), columns=df.CITY.unique(), index=df.CITY.unique())
Out[14]:
A B C D E
A 0.000000 40.522913 4.908494 1.967551 1.191779
B 40.522913 0.000000 37.440606 38.601738 41.551558
C 4.908494 37.440606 0.000000 4.295932 6.055264
D 1.967551 38.601738 4.295932 0.000000 2.954017
E 1.191779 41.551558 6.055264 2.954017 0.000000
</code></pre>
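<p>As a sanity check, the numbers <code>pdist</code> produces here are plain Euclidean distances, so a few entries can be verified with nothing but the standard library (coordinates taken from the question):</p>

```python
import math

cities = {
    'A': (40.745392, -73.978364),
    'B': (42.562786, -114.460503),
    'C': (37.227928, -77.401924),
}
names = sorted(cities)
# pairwise Euclidean distance, the same metric pdist uses by default
dist = {a: {b: math.hypot(cities[a][0] - cities[b][0],
                          cities[a][1] - cities[b][1])
            for b in names}
        for a in names}

print(round(dist['A']['B'], 6))  # -> 40.522913, matching the matrix above
```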
| 2 | 2016-08-29T12:04:48Z | [
"python",
"pandas",
"dataframe"
] |
Euclidean Distance Matrix Using Pandas | 39,203,662 | <p>I have a .csv file that contains city, latitude and longitude data in the below format:</p>
<pre><code>CITY|LATITUDE|LONGITUDE
A|40.745392|-73.978364
B|42.562786|-114.460503
C|37.227928|-77.401924
D|41.245708|-75.881241
E|41.308273|-72.927887
</code></pre>
<p>I need to create a distance matrix in the below format (please ignore the dummy values):</p>
<pre><code> A B C D E
A 0.000000 6.000000 5.744563 6.082763 5.656854
B 6.000000 0.000000 6.082763 5.385165 5.477226
C 1.744563 6.082763 0.000000 6.000000 5.385165
D 6.082763 5.385165 6.000000 0.000000 5.385165
E 5.656854 5.477226 5.385165 5.385165 0.000000
</code></pre>
<p>I have loaded the data into a pandas dataframe and have created a cross join as below:</p>
<pre><code>import pandas as pd
df_A = pd.read_csv('lat_lon.csv', delimiter='|', encoding="utf-8-sig")
df_B = df_A
df_A['key'] = 1
df_B['key'] = 1
df_C = pd.merge(df_A, df_B, on='key')
</code></pre>
<ul>
<li>Can you please help me create the above matrix structure?</li>
<li>Also, is it possible to avoid step involving cross join?</li>
</ul>
| 1 | 2016-08-29T10:08:57Z | 39,205,931 | <pre><code>for i in df["CITY"]:
for j in df["CITY"]:
row = df[df["CITY"] == j][["LATITUDE", "LONGITUDE"]]
latitude = row["LATITUDE"].tolist()[0]
longitude = row["LONGITUDE"].tolist()[0]
df.loc[df['CITY'] == i, j] = ((df["LATITUDE"] - latitude)**2 + (df["LONGITUDE"] - longitude)**2)**0.5
df = df.drop(["CITY", "LATITUDE", "LONGITUDE"], axis=1)
</code></pre>
<p>This works, although the nested loops and the per-pair DataFrame lookups make it quadratic in the number of cities and fairly slow for large inputs.</p>
| 0 | 2016-08-29T12:05:12Z | [
"python",
"pandas",
"dataframe"
] |
List of Structure subclass returns wrong values when casting to numpy array | 39,203,714 | <p>I've built a simple Structure subclass with two fields, holding a void pointer to an array, and the array length. However, when I try to create a list of these using input lists of the same length, the value of the returned void pointer is the same as the last array used to create the instance:</p>
<pre><code>from ctypes import POINTER, c_double, c_size_t, c_void_p, Structure, cast
import numpy as np
class External(Structure):
_fields_ = [("data", c_void_p),
("length", c_size_t)]
@classmethod
def from_param(cls, seq):
return seq if isinstance(seq, cls) else cls(seq)
def __init__(self, seq):
self.ptr = cast(
np.array(seq, dtype=np.float64).ctypes.data_as(POINTER(c_double)),
c_void_p
)
self.data = self.ptr
self.length = len(seq)
# recreate array from void pointer
# shows the correct values
shape = self.length, 2
ptr = cast(self.data, POINTER(c_double))
array = np.ctypeslib.as_array(ptr, shape)
print "Correct array", array.tolist()
if __name__ == "__main__":
interiors = [
[[3.5, 3.5], [4.4, 2.0], [2.6, 2.0], [3.5, 3.5]],
[[4.0, 3.0], [4.0, 3.2], [4.5, 3.2], [4.0, 3.0]],
]
wrong = [External(s) for s in interiors]
for w in wrong:
# perform same cast back to array as before
shape = w.length, 2
ptr = cast(w.data, POINTER(c_double))
array = np.ctypeslib.as_array(ptr, shape)
print "Wrong array", array.tolist()
</code></pre>
<p>If I create my <code>External</code> instances using input lists of different lengths, everything works as expected. What am I doing wrong here?</p>
| 1 | 2016-08-29T10:11:19Z | 39,259,163 | <p>The problem is that the numpy array is immediately garbage-collected and the underlying memory freed, resulting in a dangling pointer.</p>
<p>The solution is to keep a reference to the underlying <code>buffer</code> object:</p>
<pre><code>def __init__(self, seq):
array = np.array(seq, dtype=np.float64)
self._buffer = array.data
self.ptr = cast(
array.ctypes.data_as(POINTER(c_double)),
c_void_p
)
...
</code></pre>
<p>Now the memory for the array is freed only when the instance of <code>External</code> holding the reference gets deleted.</p>
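<p>An alternative that avoids the lifetime issue altogether is to let ctypes own the memory: build a <code>c_double</code> array from the (flattened) sequence and keep it as an attribute. This is a sketch based on the Structure from the question; only the allocation strategy differs, and the flattening of the coordinate pairs is assumed:</p>

```python
from ctypes import POINTER, Structure, c_double, c_size_t, c_void_p, cast

class External(Structure):
    _fields_ = [("data", c_void_p),
                ("length", c_size_t)]

    def __init__(self, seq):
        flat = [v for point in seq for v in point]
        # ctypes owns this buffer; keeping it as an attribute ties its
        # lifetime to the Structure instance, so the pointer never dangles
        self._buffer = (c_double * len(flat))(*flat)
        self.data = cast(self._buffer, c_void_p)
        self.length = len(seq)

ext = External([[3.5, 3.5], [4.4, 2.0]])
ptr = cast(ext.data, POINTER(c_double))
print([ptr[i] for i in range(4)])  # -> [3.5, 3.5, 4.4, 2.0]
```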
| 2 | 2016-08-31T21:18:49Z | [
"python",
"arrays",
"numpy",
"ctypes"
] |
'CsrfViewMiddleware' object is not iterable | 39,203,826 | <p>I am new to Django, and I just took over from another developer on this project. All I have done so far is clone the code from git and install the dependencies.</p>
<p>Immediately after setting up the project, and running <code>python manager.py runserver</code> and going to <code>localhost:8000/admin</code> I get an error stating the <code>TypeError at /admin/login/</code>, <code>'CsrfViewMiddleware' object is not iterable</code>:</p>
<blockquote>
<p>Traceback:</p>
<p>File
"/home/abhay/code/virtualenvironments/leaguesx/lib/python3.5/site-packages/django/core/handlers/exception.py"
in inner
39. <code>response = get_response(request)</code></p>
<p>File
"/home/abhay/code/virtualenvironments/leaguesx/lib/python3.5/site-packages/django/core/handlers/base.py"
in _legacy_get_response
249. <code>response = self._get_response(request)</code></p>
<p>File
"/home/abhay/code/virtualenvironments/leaguesx/lib/python3.5/site-packages/django/core/handlers/base.py"
in _get_response
217. <code>response = self.process_exception_by_middleware(e, request)</code></p>
<p>File
"/home/abhay/code/virtualenvironments/leaguesx/lib/python3.5/site-packages/django/core/handlers/base.py"
in _get_response
215. <code>response = response.render()</code></p>
<p>File
"/home/abhay/code/virtualenvironments/leaguesx/lib/python3.5/site-packages/django/template/response.py"
in render
109. <code>self.content = self.rendered_content</code></p>
<p>File
"/home/abhay/code/virtualenvironments/leaguesx/lib/python3.5/site-packages/django/template/response.py"
in rendered_content
86. <code>content = template.render(context, self._request)</code></p>
<p>File
"/home/abhay/code/virtualenvironments/leaguesx/lib/python3.5/site-packages/django/template/backends/django.py"
in render
66. <code>return self.template.render(context)</code></p>
<p>File
"/home/abhay/code/virtualenvironments/leaguesx/lib/python3.5/site-packages/django/template/base.py"
in render
206. <code>with context.bind_template(self):</code></p>
<p>File "/usr/lib/python3.5/contextlib.py" in <code>__enter__</code>
59. <code>return next(self.gen)</code></p>
<p>File
"/home/abhay/code/virtualenvironments/leaguesx/lib/python3.5/site-packages/django/template/context.py"
in bind_template
236. <code>updates.update(processor(self.request))</code></p>
<p>Exception Type: TypeError at /admin/login/</p>
<p>Exception Value: 'CsrfViewMiddleware' object is not iterable</p>
</blockquote>
<p><a href="http://i.stack.imgur.com/nypI7.png" rel="nofollow"><img src="http://i.stack.imgur.com/nypI7.png" alt="enter image description here"></a></p>
<p>I would post code from the source code but I can't figure where in the source the cause of this might possibly be.</p>
<p>My settings.py:</p>
<pre><code>import os
from datetime import datetime
from django.conf.global_settings import EMAIL_USE_SSL
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'ourapp',
'social.apps.django_app.default',
'sendgrid',
'corsheaders',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
# 'django.middleware.clickjacking.XFrameOptionsMiddleware',
'ourapp.middleWare.authenticationMiddleware.AuthenticationMiddleware'
)
ROOT_URLCONF = ''
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.contrib.auth.context_processors.auth',
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
'django.middleware.csrf.CsrfViewMiddleware',
'corsheaders.middleware.CorsMiddleware',
'social.apps.django_app.context_processors.backends',
'social.apps.django_app.context_processors.login_redirect',
],
},
},
]
</code></pre>
<p>(Sorry about the lack of indentation.)
Any ideas on how to proceed from here would be greatly appreciated!</p>
| 0 | 2016-08-29T10:18:59Z | 39,204,804 | <p>Try removing <code>'django.middleware.csrf.CsrfViewMiddleware',</code> from the <code>context_processors</code> list in <code>TEMPLATES</code>, and probably <code>'corsheaders.middleware.CorsMiddleware',</code> too. Both are middleware classes, not context processors; a context processor must be a callable that takes the request and returns a dict, which is why Django raises "object is not iterable" when it tries to merge the middleware instance into the template context.</p>
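<p>A corrected <code>TEMPLATES</code> setting would then look like this (only the entries that appear in the question are kept; the duplicate <code>auth</code> processor is also dropped, and the corsheaders entry belongs in <code>MIDDLEWARE_CLASSES</code> instead):</p>

```python
# context_processors must only contain dotted paths to callables that
# take a request and return a dict -- never middleware classes
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.contrib.auth.context_processors.auth',
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.messages.context_processors.messages',
                'social.apps.django_app.context_processors.backends',
                'social.apps.django_app.context_processors.login_redirect',
            ],
        },
    },
]
```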
| 3 | 2016-08-29T11:09:39Z | [
"python",
"django",
"django-templates",
"django-views",
"django-csrf"
] |
Why does '\b' seem to have no effect as the last character of a string in Python? | 39,203,888 | <p>I am puzzled by '\b'.
I know that '\b' means backspace, but in Python it seems to do nothing when it is the last character of a string.</p>
<p>Example:</p>
<pre><code>>>print 'abc\be'
>>abe
>>print 'abc\b'
>>abc
</code></pre>
<p>Why?</p>
<p>And, another example on OS X / Python 2.7.10 <strong>IPython</strong>:</p>
<pre><code>>> import sys
>> sys.stdout.write('abc\b')
>> abc
>> sys.stdout.write('abc\be')
>> abe
</code></pre>
| 3 | 2016-08-29T10:22:26Z | 39,203,974 | <p>There's an implied newline after the <code>print</code> finishes, which causes a newline immediately after the <code>\b</code> is echoed. This causes the cursor to move to the next line, so there won't be anything overwriting the <code>c</code> from the previous line.</p>
<p>If you did something like:</p>
<pre><code>print 'abc\b', 'def'
</code></pre>
<p>you would see output like:</p>
<pre><code>ab def
</code></pre>
<p>i.e. it's not 'invalid' at the end of a sentence, it's just that because you immediately print a newline, nothing gets an opportunity to overwrite the character that backspaced into.</p>
<p>To make this a little bit more clear (hopefully) - taken by typing the lines into python directly:</p>
<p>print adds the newline; if we use <code>sys.stdout.write</code>, no newline is added automatically:</p>
<pre><code>>>> import sys
>>> sys.stdout.write('abc')
abc>>> sys.stdout.write('abc\b')
ab>>>
</code></pre>
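<p>One more point worth making: <code>\b</code> is an ordinary character as far as Python is concerned. The string really contains it; the overwriting is done by the terminal when it interprets the output, not by Python:</p>

```python
s = 'abc\b'
print(len(s))      # 4 -- the backspace is a real character in the string
print(repr(s))     # 'abc\x08'
print(s == 'abc')  # False
```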
| 4 | 2016-08-29T10:26:43Z | [
"python",
"character"
] |
@login_required flask trouble. No user_loader has been installed for this LoginManager | 39,203,956 | <p>I have created separate Blueprints for separate modules of my project.
I am having trouble with @login_required decorator.
I read a very similar <a href="http://stackoverflow.com/questions/28575234/login-required-trouble-in-flask-app">question</a> but it couldn't help me.</p>
<p>The model.py file is as following:</p>
<pre><code>from application import db
from datetime import datetime
class User(db.Model):
uid = db.Column(db.Integer, primary_key=True)
AccountName=db.Column(db.String(60))
UserID=db.Column(db.String(60))
firstName = db.Column(db.String(60))
lastName = db.Column(db.String(60))
email = db.Column(db.String(60), unique=True)
UserPassword = db.Column(db.String(60))
DateCreated = db.Column(db.DateTime())
DateUpdated = db.Column(db.DateTime())
UserType=db.Column(db.Enum('Owner', 'Admin', 'Operator'))
is_verified = db.Column(db.Boolean)
@property
def is_authenticated(self):
return True
@property
def is_active(self):
return True
@property
def is_anonymous(self):
return False
def get_id(self):
return str(self.uid) # python 3
def __init__(self, firstName, lastName, email, UserPassword, DateCreated, AccountName, UserType, UserID, DateUpdated):
self.firstName = firstName
self.lastName = lastName
self.email = email
self.UserPassword = UserPassword
self.DateCreated = DateCreated
self.is_verified = False
self.AccountName = AccountName
self.UserType = UserType
self.UserID = UserID
self.DateUpdated=DateUpdated
@classmethod
def get(cls, uid): #
return cls.get(uid)
def __repr__(self):
return "\n"+str(self.uid)+"\t"+self.firstName+"\t"+self.lastName+"\t"+self.email+"\t"+str(self.is_verified)+"\t"+self.AccountName+"\t"+self.UserType+"\t"+self.UserID +"\n"
</code></pre>
<p>application.py is as:</p>
<pre><code>from flask import Flask, render_template, request, Blueprint
from flask_mail import Mail
from application import db, application
from application.forms import EnterDBInfo, RetrieveDBInfo
application = Flask(__name__)
application.debug = True
# Flask Login
login_manager = LoginManager()
login_manager.init_app(application)
application.secret_key = 'njzJTsRxA/pd3k4PXiHvMay/BeBseeUAG15GLA/t'
@application.route('/', methods=['GET', 'POST'])
@application.route('/index', methods=['GET', 'POST'])
def index():
#returning someTemplate()
if __name__ == '__main__':
application.run(host='0.0.0.0')
</code></pre>
<p>The <code>__init__.py</code> is as follows:</p>
<pre><code>from flask import Flask, Blueprint
from flask_sqlalchemy import SQLAlchemy
application = Flask(__name__)
application.config.from_object('config')
db = SQLAlchemy(application)
from user.user_management import mod as userModule
application.register_blueprint(userModule)
from site_license.license import modLicense as licenseModule
application.register_blueprint(licenseModule)
</code></pre>
<p>and finally in the blueprint where the problem is, I have following:</p>
<pre><code>from datetime import datetime
from flask import Flask, Blueprint, request, jsonify, flash, g, url_for, abort, session, redirect
from application import db
from application.models import User
from application.ErrorCodes import ErrorCode
from flask_login import login_user, logout_user, LoginManager, login_required, current_user, wraps
application = Flask(__name__)
application.debug = True
mod = Blueprint('user', __name__, url_prefix='/user')
# Flask Login
login_manager = LoginManager()
login_manager.init_app(application)
application.secret_key = 'njzJTsRxA/pd3k4PXiHvMay/BeBseeUAG15GLA/t'
@login_manager.user_loader
def load_user(uid):
return User.get(uid)
@mod.route('/login', methods=['GET', 'POST'])
def login():
if request.headers['Content-Type'] == 'application/json':
try:
login = request.json
UserID = login["UserID"]
UserPassword = login["UserPassword"]
AccountName = login["AccountName"]
registered_user = User.query.filter_by(UserID=UserID, UserPassword=UserPassword, AccountName=AccountName).first()
print(registered_user)
if registered_user is None:
code = ErrorCode().Invalid_Credentials_CODE
msg = ErrorCode().Invalid_Credentials_MSG
else:
login_user(registered_user, remember=True)
UserType = registered_user.UserType
flash("Successfully logged in")
code = ErrorCode().Success_CODE
msg = ErrorCode().Success_MSG
SessionID = registered_user.get_id()
except:
code = ErrorCode().Invalid_JSON_CODE
msg = ErrorCode().Invalid_JSON_MSG
else:
code = ErrorCode().Wrong_Content_CODE
msg = ErrorCode().Wrong_Content_MSG
if code==1:
return jsonify({"ResponseValue": code, "ResponseText": msg, "SessionID":SessionID, "UserType":UserType})
else:
return jsonify({"ResponseValue": code, "ResponseText": msg})
@mod.route('/logout', methods=['GET', 'POST'])
@login_required
def logout():
# print(g.user)
logout_user()
# session['logged_in'] = False
code = 1
success = "Success"
# return "\n\nSuccessfully logged out\n\n"
return jsonify({"code": code, "msg": success})
</code></pre>
<p>What is it that I'm missing here?</p>
| -1 | 2016-08-29T10:25:12Z | 39,207,829 | <p>You have two different applications: the one in <code>__init__</code> and the one in your blueprint module. This is wrong. You're registering Flask-Login with the second application, but that application is never run (and shouldn't be).</p>
<p>Remove the following from your blueprint module.</p>
<pre><code>application = Flask(__name__)
application.debug = True
login_manager.init_app(application)
application.secret_key = 'njzJTsRxA/pd3k4PXiHvMay/BeBseeUAG15GLA/t'
</code></pre>
<p>Instead, import <code>login_manager</code> in <code>__init__</code> and register it on the real application.</p>
<pre><code>from my_app.my_blueprint import login_manager
login_manager.init_app(application)
</code></pre>
<p>Or define the login manager in <code>__init__</code> and import <code>login_required</code> where needed (preferable).</p>
| 0 | 2016-08-29T13:44:16Z | [
"python",
"flask",
"flask-login"
] |
Send XML to activeMQ using Django | 39,204,113 | <p>I am trying to send a XML file generated using 'ElementTree' to activeMQ server using python django 'requests' library .My views.py code is :</p>
<pre><code>from django.shortcuts import render
import requests
import xml.etree.cElementTree as ET
# Create your views here.
def index(request):
return render(request,"indexer.html")
def xml(request):
root = ET.Element("root")
doc = ET.SubElement(root, "doc")
field1 = ET.SubElement(doc,"field1")
ET.SubElement(doc, "field2", fame="yeah", name="asdfasd").text = "some vlaue2"
ET.SubElement(field1,"fielder", name="ksd").text = "valer"
tree = ET.ElementTree(root)
headers = {}
tree.write("filename.xml", encoding = "us-ascii", xml_declaration = 'utf-8', default_namespace = xml, method = "xml")
url = 'http://localhost:8082/testurl/'
headers = {'Content-Type': 'application/xml'}
files = {'file': open('filename.xml', 'rb')}
requests.post(url, files=files, headers = headers)
return render(request,"indexer.html")
</code></pre>
<p>and there is a simple submit button on indexer.html page.</p>
<pre><code><html>
<head>
</head>
<body>
<form method="post" action="/xml/">{% csrf_token %}
<input type="submit" value="submit">
</form>
</body>
</html>
</code></pre>
<p>When I click the submit button it generates filename.xml and sends it successfully to the ActiveMQ server, but at ActiveMQ I am getting an XML message which also contains the multipart header information. So, is it possible to send only the body without the headers, or to strip the headers on the ActiveMQ side and keep only the body/data part?
At ActiveMQ I'm getting the following message:</p>
<pre><code>--6dc760762ba245eb8e4c3d72aa38062b
Content-Disposition: form-data; name="file"; filename="filename.xml"
<root><doc><field1><fielder name="ksd">valer</fielder></field1><field2 fame="yeah" name="asdfasd">some vlaue2</field2></doc></root>
--6dc760762ba245eb8e4c3d72aa38062b--
</code></pre>
| 0 | 2016-08-29T10:33:58Z | 39,292,983 | <p>I suggest looking at using the available STOMP protocol instead of HTTP. You'll have more control over message payloads and message headers.</p>
<p>Python library: <a href="https://pypi.python.org/pypi/stomp.py" rel="nofollow">https://pypi.python.org/pypi/stomp.py</a>
ActiveMQ Support: <a href="http://activemq.apache.org/stomp.html" rel="nofollow">http://activemq.apache.org/stomp.html</a></p>
| 0 | 2016-09-02T12:55:13Z | [
"python",
"xml",
"django",
"jms",
"activemq"
] |
Python Mathematical signs in function parameter? | 39,204,420 | <p>I would like to know if there is a way to add math symbols into the function parameters. </p>
<pre><code>def math(x, y, symbol):
answer = x 'symbol' y
return answer
</code></pre>
<p>This is a small example of what I mean.</p>
<p><strong>EDIT:
here is the whole problem</strong> </p>
<pre><code>def code_message(str_val, str_val2, symbol1, symbol2):
for char in str_val:
while char.isalpha() == True:
code = int(ord(char))
if code < ord('Z'):
code symbol1= key
str_val2 += str(chr(code))
elif code > ord('z'):
code symbol1= key
str_val2 += str(chr(code))
elif code > ord('A'):
code symbol2= key
str_val2 += str(chr(code))
elif code < ord('a'):
code symbol2= key
str_val2 += str(chr(code))
break
if char.isalpha() == False:
str_val2 += char
return str_val2
</code></pre>
<p>I need to call the function a number of times but sometimes with a +/- for first symbol and sometimes a +/- for second symbol</p>
<p>ORIGINAL CODE :</p>
<pre><code>def code_message(str_val, str_val2):
for char in str_val:
while char.isalpha() == True:
code = int(ord(char))
if code < ord('Z'):
code -= key
str_val2 += str(chr(code))
elif code > ord('z'):
code -= key
str_val2 += str(chr(code))
elif code > ord('A'):
code += key
str_val2 += str(chr(code))
elif code < ord('a'):
code += key
str_val2 += str(chr(code))
break
if char.isalpha() == False:
str_val2 += char
return str_val2
</code></pre>
| -1 | 2016-08-29T10:49:10Z | 39,204,444 | <p>Use the corresponding functions from the <code>operator</code> module:</p>
<pre><code>from operator import add
def math(x, y, op):
    answer = op(x, y)
    return answer
</code></pre>
<p>The only other way to pass a mathematical sign to a function is as a string, but then you'd have two problems: evaluating the sign and evaluating the expression. So when you are dealing with numbers it is better to pass a proper function that does the job directly.</p>
<p>You can also use a dictionary to map the symbols to the corresponding function: </p>
<pre><code>from operator import add, sub, mul
mapping = {"+": add, "-":sub, "*": mul}
def math(x, y, sym):
try:
f = mapping[sym]
except KeyError:
raise Exception("Enter a valid operator")
else:
answer = f(x, y)
return answer
</code></pre>
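<p>For completeness, calling the dictionary-based version looks like this (renamed to <code>apply_op</code> here to avoid shadowing the standard <code>math</code> module, and raising <code>ValueError</code> rather than a bare <code>Exception</code>):</p>

```python
from operator import add, sub, mul

mapping = {"+": add, "-": sub, "*": mul}

def apply_op(x, y, sym):
    try:
        f = mapping[sym]
    except KeyError:
        # reject anything that is not a known operator symbol
        raise ValueError("Enter a valid operator, one of: %s" % sorted(mapping))
    return f(x, y)

print(apply_op(7, 3, "+"))  # -> 10
print(apply_op(7, 3, "*"))  # -> 21
```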
| 1 | 2016-08-29T10:50:50Z | [
"python",
"function",
"math",
"symbols",
"sign"
] |
Python Mathematical signs in function parameter? | 39,204,420 | <p>I would like to know if there is a way to add math symbols into the function parameters. </p>
<pre><code>def math(x, y, symbol):
answer = x 'symbol' y
return answer
</code></pre>
<p>This is a small example of what I mean.</p>
<p><strong>EDIT:
here is the whole problem</strong> </p>
<pre><code>def code_message(str_val, str_val2, symbol1, symbol2):
for char in str_val:
while char.isalpha() == True:
code = int(ord(char))
if code < ord('Z'):
code symbol1= key
str_val2 += str(chr(code))
elif code > ord('z'):
code symbol1= key
str_val2 += str(chr(code))
elif code > ord('A'):
code symbol2= key
str_val2 += str(chr(code))
elif code < ord('a'):
code symbol2= key
str_val2 += str(chr(code))
break
if char.isalpha() == False:
str_val2 += char
return str_val2
</code></pre>
<p>I need to call the function a number of times but sometimes with a +/- for first symbol and sometimes a +/- for second symbol</p>
<p>ORIGINAL CODE :</p>
<pre><code>def code_message(str_val, str_val2):
for char in str_val:
while char.isalpha() == True:
code = int(ord(char))
if code < ord('Z'):
code -= key
str_val2 += str(chr(code))
elif code > ord('z'):
code -= key
str_val2 += str(chr(code))
elif code > ord('A'):
code += key
str_val2 += str(chr(code))
elif code < ord('a'):
code += key
str_val2 += str(chr(code))
break
if char.isalpha() == False:
str_val2 += char
return str_val2
</code></pre>
| -1 | 2016-08-29T10:49:10Z | 39,204,490 | <p>You cannot pass an operator to a function, but you can pass the operator functions defined in the <a href="https://docs.python.org/2/library/operator.html" rel="nofollow"><code>operator</code></a> library. Your function will then look like:</p>
<pre><code>>>> from operator import eq, add, sub
>>> def magic(left, op, right):
... return op(left, right)
...
</code></pre>
<p><strong>Examples</strong>:</p>
<pre><code># To Add
>>> magic(3, add, 5)
8
# To Subtract
>>> magic(3, sub, 5)
-2
# To check equality
>>> magic(3, eq, 3)
True
</code></pre>
<p><strong>Note</strong>: I am naming the function <code>magic</code> instead of <code>math</code> because <code>math</code> is the name of a standard Python library module, and shadowing such names is not good practice.</p>
| 2 | 2016-08-29T10:53:29Z | [
"python",
"function",
"math",
"symbols",
"sign"
] |
Python Mathematical signs in function parameter? | 39,204,420 | <p>I would like to know if there is a way to add math symbols into the function parameters. </p>
<pre><code>def math(x, y, symbol):
answer = x 'symbol' y
return answer
</code></pre>
<p>This is a small example of what I mean.</p>
<p><strong>EDIT:
here is the whole problem</strong> </p>
<pre><code>def code_message(str_val, str_val2, symbol1, symbol2):
for char in str_val:
while char.isalpha() == True:
code = int(ord(char))
if code < ord('Z'):
code symbol1= key
str_val2 += str(chr(code))
elif code > ord('z'):
code symbol1= key
str_val2 += str(chr(code))
elif code > ord('A'):
code symbol2= key
str_val2 += str(chr(code))
elif code < ord('a'):
code symbol2= key
str_val2 += str(chr(code))
break
if char.isalpha() == False:
str_val2 += char
return str_val2
</code></pre>
<p>I need to call the function a number of times but sometimes with a +/- for first symbol and sometimes a +/- for second symbol</p>
<p>ORIGINAL CODE :</p>
<pre><code>def code_message(str_val, str_val2):
for char in str_val:
while char.isalpha() == True:
code = int(ord(char))
if code < ord('Z'):
code -= key
str_val2 += str(chr(code))
elif code > ord('z'):
code -= key
str_val2 += str(chr(code))
elif code > ord('A'):
code += key
str_val2 += str(chr(code))
elif code < ord('a'):
code += key
str_val2 += str(chr(code))
break
if char.isalpha() == False:
str_val2 += char
return str_val2
</code></pre>
| -1 | 2016-08-29T10:49:10Z | 39,204,582 | <p>First note that <code>answer = x 'symbol' y</code> is not valid Python; an operator cannot be spliced into an expression like that, with or without quotes.</p>
<p>Concerning the use of symbols, you can instead send a function as a parameter.
For the basic operators you might use the <a href="https://docs.python.org/2/library/operator.html" rel="nofollow">operator</a> module.</p>
<p>For example:</p>
<pre><code>import operator
def math(x, y, function):
return function(x, y)
math(4,5, operator.sub)
</code></pre>
<p>You will find all the other operations you need in the documentation.</p>
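<p>Applied to <code>code_message</code> from the question, the four nearly identical branches collapse once the shift operation is passed in as a function. This is a simplified sketch: it branches on <code>isupper()</code>/<code>islower()</code> instead of the original <code>ord()</code> comparisons, and <code>key</code> is passed explicitly rather than read from an enclosing scope:</p>

```python
from operator import add, sub

def code_message(str_val, key, upper_op, lower_op):
    result = ""
    for char in str_val:
        if char.isupper():
            result += chr(upper_op(ord(char), key))  # e.g. sub to encode
        elif char.islower():
            result += chr(lower_op(ord(char), key))  # e.g. add to encode
        else:
            result += char  # non-letters pass through unchanged
    return result

print(code_message("Ab c", 1, sub, add))  # -> '@c d'
```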
| 0 | 2016-08-29T10:57:57Z | [
"python",
"function",
"math",
"symbols",
"sign"
] |
Python Mathematical signs in function parameter? | 39,204,420 | <p>I would like to know if there is a way to add math symbols into the function parameters. </p>
<pre><code>def math(x, y, symbol):
answer = x 'symbol' y
return answer
</code></pre>
<p>This is a small example of what I mean.</p>
<p><strong>EDIT:
here is the whole problem</strong> </p>
<pre><code>def code_message(str_val, str_val2, symbol1, symbol2):
for char in str_val:
while char.isalpha() == True:
code = int(ord(char))
if code < ord('Z'):
code symbol1= key
str_val2 += str(chr(code))
elif code > ord('z'):
code symbol1= key
str_val2 += str(chr(code))
elif code > ord('A'):
code symbol2= key
str_val2 += str(chr(code))
elif code < ord('a'):
code symbol2= key
str_val2 += str(chr(code))
break
if char.isalpha() == False:
str_val2 += char
return str_val2
</code></pre>
<p>I need to call the function a number of times but sometimes with a +/- for first symbol and sometimes a +/- for second symbol</p>
<p>ORIGINAL CODE :</p>
<pre><code>def code_message(str_val, str_val2):
for char in str_val:
while char.isalpha() == True:
code = int(ord(char))
if code < ord('Z'):
code -= key
str_val2 += str(chr(code))
elif code > ord('z'):
code -= key
str_val2 += str(chr(code))
elif code > ord('A'):
code += key
str_val2 += str(chr(code))
elif code < ord('a'):
code += key
str_val2 += str(chr(code))
break
if char.isalpha() == False:
str_val2 += char
return str_val2
</code></pre>
| -1 | 2016-08-29T10:49:10Z | 39,204,699 | <p>I am surprised no one has mentioned <code>eval()</code>. Look at the example below:</p>
<pre><code>def function(operator1, operator2, symbol):
return eval(str(operator1) + symbol + str(operator2))
print(function(2, 3, '+')) # prints: 5
print(function(2, 3, '-')) # prints: -1
# Of course you can also "chain" operations, e.g., for 4 + 5 - 6
result = function(function(4, 5, '+'), 6, '-')
print(result) # prints 3
# Finally, it also works with string input for the operands so you
# can read them directly from e.g., user input with `input()`
print(function('2', '3', '+')) # prints: 5
</code></pre>
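<p>One caveat worth adding: <code>eval()</code> will execute arbitrary Python, so if the symbol or operands ever come from user input, validate them first, for example against a whitelist:</p>

```python
ALLOWED = {'+', '-', '*'}

def safe_apply(a, b, symbol):
    if symbol not in ALLOWED:
        raise ValueError("unsupported operator: %r" % symbol)
    # repr() keeps the operands as literal values in the built expression
    return eval("%r %s %r" % (a, symbol, b))

print(safe_apply(2, 3, '+'))   # -> 5
print(safe_apply(10, 4, '-'))  # -> 6
```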
| 0 | 2016-08-29T11:04:26Z | [
"python",
"function",
"math",
"symbols",
"sign"
] |
Make the rectangle keep moving in the direction I give it | 39,204,471 | <p>I'm making a snake game, though for now the snake is just a rectangle. I can't make the rectangle keep moving in the direction I give it; it only moves one step each time I press a key. How can I make it keep moving?</p>
<p>Here is the code:</p>
<pre><code>def keys ():
pressed = pygame.key.get_pressed()
for i in range(2):
if pressed[pygame.K_UP]:
# rect.move_ip(0,-3)
rect2.move_ip(0, -3)
if pressed[pygame.K_DOWN]:
# rect.move_ip(0,3)
rect2.move_ip(0, 3)
if pressed[pygame.K_LEFT]:
# rect.move_ip(-3,0)
rect2.move_ip(-3, 0)
if pressed[pygame.K_RIGHT]:
# rect.move_ip(3,0)
rect2.move_ip(3, 0)
</code></pre>
| -1 | 2016-08-29T10:52:25Z | 39,208,880 | <p>If you want the rectangle to continue moving, even when no button is pressed, I would recommend a <code>speed</code> variable. Two, actually, you will need an <code>x_speed</code> and a <code>y_speed</code>.</p>
<pre><code>x_speed = y_speed = 0
x_location = y_location = 0
def keys():
global x_speed
global y_speed
    pressed = pygame.key.get_pressed()
    if pressed[pygame.K_UP]:
        y_speed = -3   # up is negative y in pygame, as in the question's move_ip(0, -3)
    if pressed[pygame.K_DOWN]:
        y_speed = 3
    if pressed[pygame.K_LEFT]:
        x_speed = -3
    if pressed[pygame.K_RIGHT]:
        x_speed = 3
while True:
keys()
x_location += x_speed
y_location += y_speed
pygame.draw.rect(SCREEN, COLOR, (x_location, y_location, WIDTH, HEIGHT))
</code></pre>
<p>By using <code>x_speed</code> and <code>y_speed</code>, the main loop will "remember" that it's moving, and only stop or change when it recognizes input. The main loop simply adds <code>*_speed</code> to <code>*_location</code> every iteration, then calculates the speed for the next iteration.</p>
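<p>The bookkeeping itself is independent of pygame, so it can be checked in isolation: once the speeds are set, the location keeps changing on every pass through the loop even though no new input arrives:</p>

```python
x_location = y_location = 0
x_speed, y_speed = 3, -3   # e.g. moving right and (in screen coordinates) up

for frame in range(5):     # five iterations of the main loop, no new input
    x_location += x_speed
    y_location += y_speed

print((x_location, y_location))  # -> (15, -15)
```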
| 0 | 2016-08-29T14:37:32Z | [
"python",
"python-2.7",
"pygame"
] |
String time complexity | 39,204,564 | <pre><code>public String joinWords(String[] words)
{
String sentence = "";
for (String w : words)
{
sentence = sentence + w;
}
return sentence;
}
</code></pre>
<blockquote>
<p>Assume that the strings are all the same length (call this x) and that there are n strings.
On each concatenation,a new copy of the string is created, and the two strings are copied over,character by character. The 1st iteration requires us to copy x characters. The second iteration requires copying 2x characters. The third iteration requires 3x ,and so on. The total time therefore is O(x + 2x + . . . + nx). This reduces to O(xn^2).</p>
</blockquote>
<p>1) I can't understand from the book's answer how they get 3x characters in the third iteration and 4x in the fourth. Strings are immutable, so on each assignment to sentence a new String object is created,
and it should copy the previous value of the string character by character plus the value of w, so I keep arriving at 2x characters each time.
Thank you all!</p>
| 2 | 2016-08-29T10:56:45Z | 39,204,632 | <p>In the 1st iteration <code>sentence</code> has 0 characters and <code>w</code> has x characters, so you have to copy x characters.</p>
<p>In the 2nd iteration <code>sentence</code> has x characters and <code>w</code> has x characters, so you have to copy 2*x characters.</p>
<p>In the 3rd iteration <code>sentence</code> has 2*x characters and <code>w</code> has x characters, so you have to copy 3*x characters.</p>
<p>In the 4th iteration <code>sentence</code> has 3*x characters and <code>w</code> has x characters, so you have to copy 4*x characters.</p>
<p>And so on...</p>
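<p>The arithmetic is easy to verify: with n strings of length x the total number of characters copied is x(1 + 2 + ... + n) = x·n(n+1)/2, which is O(xn²). This is also why a <code>StringBuilder</code> in Java (or <code>''.join</code> in Python) is the usual fix; it avoids re-copying the prefix on every step. A quick check of the sum:</p>

```python
def total_copied(n, x):
    # characters copied across all n concatenations: x + 2x + ... + nx
    return sum(i * x for i in range(1, n + 1))

n, x = 100, 5
assert total_copied(n, x) == x * n * (n + 1) // 2
print(total_copied(n, x))  # -> 25250
```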
| 6 | 2016-08-29T11:00:44Z | [
"java",
"c#",
"python",
"string",
"data-structures"
] |
String time complexity | 39,204,564 | <pre><code>public String joinWords(String[] words)
{
String sentence = "";
for (String w : words)
{
sentence = sentence + w;
}
return sentence;
}
</code></pre>
<blockquote>
<p>Assume that the strings are all the same length (call this x) and that there are n strings.
On each concatenation,a new copy of the string is created, and the two strings are copied over,character by character. The 1st iteration requires us to copy x characters. The second iteration requires copying 2x characters. The third iteration requires 3x ,and so on. The total time therefore is O(x + 2x + . . . + nx). This reduces to O(xn^2).</p>
</blockquote>
<p>1) I can't understand from the book's answer how they get 3x characters in the third iteration, 4x in the 4th iteration. String is immutable, and on each assignment to the sentence variable a new String object is created.
It should then copy the previous value of the string char by char, plus the value of w. So I get 2x characters again.
Thank you all! </p>
| 2 | 2016-08-29T10:56:45Z | 39,204,795 | <p><code>String</code> itself is immutable, but the reference is not.</p>
<p>Consider the following example:</p>
<pre><code>String s = "1";
s = s + "2";
</code></pre>
<p><code>s</code> will contain the value <code>"12"</code>, but the string <code>"1"</code> will not change.
We can check this in the next example:</p>
<pre><code>String s = "1";
String sBak = s;
s = s + "2";
</code></pre>
<p><code>s</code> is equal to <code>"12"</code> again, and we can check <code>sBak</code> to ensure that <code>"1"</code> did not change.</p>
<p>Now back to your sample. Assume that <code>words = {"first, "second", "third"}</code>.</p>
<p>The statement <code>sentence = sentence + w;</code> updates the <code>sentence</code> variable. After the first iteration it will be <code>"" + "first"</code>, which is <code>"first"</code>. After the second iteration it will be equal to <code>"first" + "second"</code>, and so on.</p>
<p>So the length of the string referenced by <code>sentence</code> will increase each time (each time <code>sentence</code> will point to a different string).</p>
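<p>The rebinding (as opposed to mutation) can also be observed directly; here is a small sketch in Python, since the question carries that tag as well:</p>

```python
# `+` builds a new string object; the name `s` is merely rebound to it.
s = "1"
before = id(s)       # identity of the original "1" object
s = s + "2"
after = id(s)        # identity of the new "12" object

assert s == "12"
assert before != after   # `s` now refers to a different object
print("s was rebound to a new string object")
```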
| 2 | 2016-08-29T11:09:24Z | [
"java",
"c#",
"python",
"string",
"data-structures"
] |
Deal with errors in parametrised sigmoid function in python | 39,204,595 | <p>I am trying to convert a set of numbers into sigmoids:</p>
<pre><code>actualarray = {
'open_cost_1':{
'cost_matrix': [
{'a': 24,'b': 56,'c': 78},
{'a': 3,'b': 98,'c':1711},
{'a': 121,'b': 12121,'c': 12989121},
]
},
'open_cost_2':{
'cost_matrix': [
{'a': 123,'b': 1312,'c': 1231},
{'a': 1011,'b': 1911,'c':911},
{'a': 1433,'b': 19829,'c': 1132},
]
}
}
</code></pre>
<p>Where each number in each list of dicts in each <code>cost_matrix</code> gets normalised by different sigmoid functions:</p>
<pre><code>def apply_normalizations(costs):
def sigmoid(b,m,v):
return ((np.exp(b+m*v) / (1 + np.exp(b+m*v)))*2)-1 #Taken from http://web.stanford.edu/class/psych252/tutorials/Tutorial_LogisticRegression.html
def normalize_dicts_local_sigmoid(bias, slope,lst):
return [{key: sigmoid(bias, slope,val) for key,val in dic.iteritems()} for dic in lst]
for name, value in costs.items():
if int((name.split("_")[-1]))>1:
value['normalised_matrix_sigmoid'] = normalize_dicts_local_sigmoid(0,1,value['cost_matrix'])
apply_normalizations(actualarray)
</code></pre>
<p>However, when I run this, I get:</p>
<pre><code> RuntimeWarning: overflow encountered in exp
return ((np.exp(b+m*v) / (1 + np.exp(b+m*v)))*2)-1
RuntimeWarning: invalid value encountered in double_scalars
return ((np.exp(b+m*v) / (1 + np.exp(b+m*v)))*2)-1
</code></pre>
<p>And the array becomes:</p>
<pre><code>{
'open_cost_2': {
'cost_matrix': [
{
'a': 123,
'c': 1231,
'b': 1312
},
{
'a': 1011,
'c': 911,
'b': 1911
},
{
'a': 1433,
'c': 1132,
'b': 19829
}
],
'normalised_matrix_sigmoid': [
{
'a': 1.0,
'c': nan,
'b': nan
},
{
'a': nan,
'c': nan,
'b': nan
},
{
'a': nan,
'c': nan,
'b': nan
}
]
},
'open_cost_1': {
'cost_matrix': [
{
'a': 24,
'c': 78,
'b': 56
},
{
'a': 3,
'c': 1711,
'b': 98
},
{
'a': 121,
'c': 12989121,
'b': 12121
}
]
}
}
</code></pre>
<p>Note, every cost is always more than 0, hence I multiply by 2 and subtract 1 in my sigmoid function.</p>
<p>How can I adapt this to not have this error?</p>
| 0 | 2016-08-29T10:58:37Z | 39,207,159 | <p>As the warning states, the exponential in your implementation of the sigmoid function is overflowing. When that happens, the function returns <code>nan</code>:</p>
<pre><code>In [3]: sigmoid(1000, 1, 1)
/Users/warren/miniconda3/bin/ipython:2: RuntimeWarning: overflow encountered in exp
if __name__ == '__main__':
/Users/warren/miniconda3/bin/ipython:2: RuntimeWarning: invalid value encountered in double_scalars
if __name__ == '__main__':
Out[3]: nan
</code></pre>
<p>Instead of writing your sigmoid function in terms of <code>exp</code>, you can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.expit.html" rel="nofollow"><code>scipy.special.expit</code></a>. It handles very large arguments correctly.</p>
<pre><code>In [5]: from scipy.special import expit
In [6]: def mysigmoid(b, m, v):
...: return expit(b + m*v)*2 - 1
...:
In [7]: mysigmoid(1000, 1, 1)
Out[7]: 1.0
</code></pre>
<p>Check that it returns the same as your <code>sigmoid</code> function in cases where it doesn't overflow:</p>
<pre><code>In [8]: sigmoid(1, 2, 3)
Out[8]: 0.99817789761119879
In [9]: mysigmoid(1, 2, 3)
Out[9]: 0.99817789761119879
</code></pre>
<p>See <a href="http://stackoverflow.com/questions/21106134/numpy-pure-functions-for-performance-caching/21106536#21106536">Numpy Pure Functions for performance, caching</a> for my answer to another question about the sigmoid function.</p>
| 1 | 2016-08-29T13:10:01Z | [
"python",
"numpy",
"logistic-regression",
"sigmoid"
] |
Combine dictionaries based on key value | 39,204,684 | <p>How do I combine dictionaries that have the same keys? For instance, if I have </p>
<pre><code>my_dict_list = [{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold']},
{'jitu': ['wins']},
{'atanu': ['good', 'glory']},
{'atanu': ['top', 'winner','good']}]
</code></pre>
<p>My objective is to get </p>
<pre><code>my_new_dict_list = [{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold','wins']},
{'atanu': ['good', 'glory','top', 'winner','good']}]
</code></pre>
<p>How do I do that in Python?</p>
<p>EDIT: The dictionaries in the final list must contain repeated values if they are present in the starting list. </p>
| 0 | 2016-08-29T11:03:52Z | 39,205,039 | <p>You could loop over the dicts in your list and either insert or append the key-value pairs to a new dict:</p>
<pre><code>my_dict_list = [{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold']},
{'jitu': ['wins']},
{'atanu': ['good', 'glory']},
{'atanu': ['top', 'winner']}]
new_d = {}
for d in my_dict_list:
for k, v in d.items():
if k in new_d:
new_d[k] += v
else:
new_d[k] = v
</code></pre>
<p>Then you need to make a list from the result:</p>
<pre><code>l = [{k: v} for k, v in new_d.items()]
# [{'atanu': ['good', 'glory', 'top', 'winner']}, {'gagan': ['good', 'luck']}, {'prakash': ['confident']}, {'jitu': ['gold', 'wins']}]
</code></pre>
<p>You need to be aware that the order of the items in the list may be changed by that.</p>
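<p>If the original key order matters (as in the expected output), the same idea can be sketched with <code>collections.OrderedDict</code>, which keeps the keys in first-seen order (assuming Python 2.7 or a pre-3.7 interpreter, where plain dicts are unordered):</p>

```python
from collections import OrderedDict

my_dict_list = [{'prakash': ['confident']},
                {'gagan': ['good', 'luck']},
                {'jitu': ['gold']},
                {'jitu': ['wins']},
                {'atanu': ['good', 'glory']},
                {'atanu': ['top', 'winner', 'good']}]

merged = OrderedDict()
for d in my_dict_list:
    for k, v in d.items():
        merged.setdefault(k, []).extend(v)   # append values, keeping duplicates

result = [{k: v} for k, v in merged.items()]
print(result)
```

This yields the keys in the order they first appeared, with repeated values preserved.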
| 2 | 2016-08-29T11:20:45Z | [
"python"
] |
Combine dictionaries based on key value | 39,204,684 | <p>How do I combine dictionaries that have the same keys? For instance, if I have </p>
<pre><code>my_dict_list = [{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold']},
{'jitu': ['wins']},
{'atanu': ['good', 'glory']},
{'atanu': ['top', 'winner','good']}]
</code></pre>
<p>My objective is to get </p>
<pre><code>my_new_dict_list = [{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold','wins']},
{'atanu': ['good', 'glory','top', 'winner','good']}]
</code></pre>
<p>How do I do that in Python?</p>
<p>EDIT: The dictionaries in the final list must contain repeated values if they are present in the starting list. </p>
| 0 | 2016-08-29T11:03:52Z | 39,205,061 | <p>Here's a working example:</p>
<pre><code>from itertools import groupby
my_dict_list = [
{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold']},
{'jitu': ['wins']},
{'atanu': ['good', 'glory']},
{'atanu': ['top', 'winner']}
]
my_new_dict_list = []
for k, g in groupby(my_dict_list, key=lambda x: sorted(x.keys())):
ds = list(g)
d = {}
for k in ds[0].iterkeys():
d[k] = sum([d[k] for d in ds], [])
    my_new_dict_list.append(d)
print my_new_dict_list
</code></pre>
| 1 | 2016-08-29T11:22:16Z | [
"python"
] |
Combine dictionaries based on key value | 39,204,684 | <p>How do I combine dictionaries that have the same keys? For instance, if I have </p>
<pre><code>my_dict_list = [{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold']},
{'jitu': ['wins']},
{'atanu': ['good', 'glory']},
{'atanu': ['top', 'winner','good']}]
</code></pre>
<p>My objective is to get </p>
<pre><code>my_new_dict_list = [{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold','wins']},
{'atanu': ['good', 'glory','top', 'winner','good']}]
</code></pre>
<p>How do I do that in Python?</p>
<p>EDIT: The dictionaries in the final list must contain repeated values if they are present in the starting list. </p>
| 0 | 2016-08-29T11:03:52Z | 39,205,132 | <p>a minimalist approach using <code>defaultdict</code>:</p>
<pre><code>from collections import defaultdict
my_dict_list = [{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold']},
{'jitu': ['wins']},
{'atanu': ['good', 'glory']},
{'atanu': ['top', 'winner','good']}]
merged_dict = defaultdict(list)
for d in my_dict_list:
for key, value in d.items():
merged_dict[key].extend(value)
result = [{key:value} for key, value in merged_dict.items()]
print(result)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>[{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'atanu': ['good', 'glory', 'top', 'winner', 'good']},
{'jitu': ['gold', 'wins']}]
</code></pre>
| 0 | 2016-08-29T11:25:54Z | [
"python"
] |
Combine dictionaries based on key value | 39,204,684 | <p>How do I combine dictionaries that have the same keys? For instance, if I have </p>
<pre><code>my_dict_list = [{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold']},
{'jitu': ['wins']},
{'atanu': ['good', 'glory']},
{'atanu': ['top', 'winner','good']}]
</code></pre>
<p>My objective is to get </p>
<pre><code>my_new_dict_list = [{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold','wins']},
{'atanu': ['good', 'glory','top', 'winner','good']}]
</code></pre>
<p>How do I do that in Python?</p>
<p>EDIT: The dictionaries in the final list must contain repeated values if they are present in the starting list. </p>
| 0 | 2016-08-29T11:03:52Z | 39,205,145 | <pre><code>my_dict_list = [{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold']},
{'jitu': ['wins']},
{'atanu': ['good', 'glory']},
{'atanu': ['top', 'winner','good']}]
my_new_dict_list = []
tmp_dict = {}
order = []
for d in my_dict_list:
for k, v in d.iteritems():
if not k in order: order.append(k)
tmp_dict.setdefault(k, []).extend(v)
my_new_dict_list = [ {x: tmp_dict[x] } for x in order ]
</code></pre>
<p>Output:</p>
<pre><code>[{'prakash': ['confident']},
{'gagan': ['good', 'luck']},
{'jitu': ['gold', 'wins']},
{'atanu': ['good', 'glory', 'top', 'winner', 'good']}]
</code></pre>
| 1 | 2016-08-29T11:26:22Z | [
"python"
] |
matplotlib - bar chart with blurry effect | 39,204,875 | <p>I'm working with bar charts in matplotlib, and I'm focusing on colors. I have the following simple code:</p>
<pre><code>import matplotlib.pyplot as plt
A = [5, 30, 45, 80]
X = range(4)
col = ['r','orange','y','g']
plt.bar(X, A, color = col)
plt.show()
</code></pre>
<p>What I would like to achieve is a blurry (gradient) color effect on the bars depending on their values, going from red to green. I'm wondering if I can realize something like this (look only at the bars):</p>
<p><img src="http://i.stack.imgur.com/fyukL.png" alt="bar charts blurry effect"> </p>
<p>I tried to follow this <a href="http://matplotlib.org/examples/color/colormaps_reference.html" rel="nofollow">guide</a>, but I didn't reach any result.
May you please help me? <br></p>
| 0 | 2016-08-29T11:12:54Z | 39,207,346 | <p>You have a pretty good example from matplotlib documentation on this <a href="http://matplotlib.org/examples/pylab_examples/gradient_bar.html" rel="nofollow">link</a>. Adapting this to your example you would obtain something like this:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
def gbar(ax, x, y, width=0.5, bottom=0):
X = [[.6, .6], [.7, .7]]
for left, top in zip(x, y):
right = left + width
ax.imshow(X, interpolation='bicubic', cmap="copper",
extent=(left, right, bottom, top), alpha=1)
A = [5, 30, 45, 80]
x = [i + 0.5 for i in range(4)]
fig = figure()
xmin, xmax = xlim = 0.25, 4.5
ymin, ymax = ylim = 0, 100
ax = fig.add_subplot(111, xlim=xlim, ylim=ylim,
autoscale_on=False)
gbar(ax, x, A, width=0.7)
ax.set_aspect('auto')
plt.show()
</code></pre>
<p>, which results in the following plot:</p>
<p><a href="http://i.stack.imgur.com/IXHZu.png" rel="nofollow"><img src="http://i.stack.imgur.com/IXHZu.png" alt="Bar plot with gradients on the bars in matplotlib"></a></p>
<p><strong>EDIT:</strong> The following adaptation of the example is more flexible when working with <code>imshow</code>:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
def gbar(ax, x, y, width=0.5, bottom=0):
X = np.arange(100)[:, np.newaxis]
for left, top in zip(x, y):
right = left + width
mask = X > top
ax.imshow(np.ma.masked_array(X, mask), origin="lower", interpolation='nearest', cmap="RdYlGn", vmin=0, vmax=100,
extent=(left, right, bottom, 100), alpha=1)
A = [5, 30, 45, 80]
x = [i + 0.5 for i in range(4)]
fig = figure()
xmin, xmax = xlim = 0.25, 4.5
ymin, ymax = ylim = 0, 100
ax = fig.add_subplot(111, xlim=xlim, ylim=ylim,
autoscale_on=False)
gbar(ax, x, A, width=0.7)
ax.set_aspect('auto')
ax.set_yticks([i for i in range(0, 100, 10)])
ax.set_yticklabels([str(i) + " %" for i in range(0, 100, 10)])
plt.show()
</code></pre>
<p>, results in this:</p>
<p><a href="http://i.stack.imgur.com/aPiY4.png" rel="nofollow"><img src="http://i.stack.imgur.com/aPiY4.png" alt="Another implementation of imshow as a barplot"></a></p>
<p>You will notice that I'm building an array of 100 elements (monotonically strictly increasing: 0 to 99 with step 1).</p>
<pre><code>X = np.arange(100)[:, np.newaxis]
</code></pre>
<p>And masking everything above the top of the respective bar:</p>
<pre><code>mask = X > top
np.ma.masked_array(X, mask)
</code></pre>
<p>Also I'm stating <code>vmin</code> and <code>vmax</code> as 0 and 100, respectively. This will always make the total green appear at position 100 and total red at position 0. If this is the effect you are looking for you can use as it is. </p>
<p>The easiest way for me to achieve more customization per bar is just playing with the values inside X (since they are the ones being colormapped). But obviously you'll need some kind of rule for it. In your example the orange sometimes appears at 40 %, at others at 30 % and 20 % (it seems related to the size of the bar, but I don't know the relation). One of the bars is completely green, which is yet another rule.</p>
<p>So you'll have to make exceptions inside your code. For example when the top is over 90 % use a greens colormap (or whatever fits).</p>
| 1 | 2016-08-29T13:19:02Z | [
"python",
"matplotlib",
"colors",
"bar-chart"
] |
Get specific text from xml using python | 39,204,902 | <p>I have an XML string which looks like this:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:geonode="http://www.geonode.org/" xmlns:gml="http://www.opengis.net/gml" elementFormDefault="qualified" targetNamespace="http://www.geonode.org/">
<xsd:import namespace="http://www.opengis.net/gml" schemaLocation="http://localhost:8080/geoserver/schemas/gml/3.1.1/base/gml.xsd"/>
<xsd:complexType name="test_24Type">
<xsd:complexContent>
<xsd:extension base="gml:AbstractFeatureType">
<xsd:sequence>
<xsd:element maxOccurs="1" minOccurs="0" name="attribute_1" nillable="true" type="xsd:string"/>
<xsd:element maxOccurs="1" minOccurs="0" name="the_geom" nillable="true" type="gml:MultiLineStringPropertyType"/>
</xsd:sequence>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
<xsd:element name="test_24" substitutionGroup="gml:_Feature" type="geonode:test_24Type"/>
</xsd:schema>
</code></pre>
<p>What I want to do is to use Python in order to extract the url corresponding to xmlns:geonode:</p>
<pre><code>"http://www.geonode.org/"
</code></pre>
<p>I know there is this library: <code>from xml.etree import cElementTree as ET</code>,
but I am not sure how to use it properly in order to extract the information in this element.</p>
| 0 | 2016-08-29T11:14:05Z | 39,205,465 | <p>The following is one of the ways to extract data from an XML file using the xml.etree library and Python 2.7.
Replace the XML file name and the tag name in the corresponding places in the code.</p>
<pre><code>import xml.etree.ElementTree as XT
dataTree = XT.parse('your_xml_file_name.xml')
dataRoot = dataTree.getroot()
geoNode = dataRoot.find('your_tag_name').text
print geoNode
</code></pre>
| 0 | 2016-08-29T11:41:17Z | [
"python",
"xml"
] |
Python asyncio: stop the loop when one coroutine is done | 39,204,929 | <p>I'm quite new to this python asyncio topic. I have a simple question:
I have a task containing two coroutines to be run concurrently. The first coroutine (my_coroutine) would just print something continuously until seconds_to_sleep is reached. The second coroutine (seq_coroutine) would call 4 other coroutines sequentially, one after the other. My goal is to stop the loop at the end of seq_coroutine, whenever it is completely finished. To be exact, I want my_coroutine to stay alive until seq_coroutine is finished. Can someone help me with that?</p>
<p>My code is like this:</p>
<pre><code>import asyncio
async def my_coroutine(task, seconds_to_sleep = 3):
print("{task_name} started\n".format(task_name=task))
for i in range(1, seconds_to_sleep):
await asyncio.sleep(1)
print("\n{task_name}: second {seconds}\n".format(task_name=task, seconds=i))
async def coroutine1():
print("coroutine 1 started")
await asyncio.sleep(1)
print("coroutine 1 finished\n")
async def coroutine2():
print("coroutine 2 started")
await asyncio.sleep(1)
print("coroutine 2 finished\n")
async def coroutine3():
print("coroutine 3 started")
await asyncio.sleep(1)
print("coroutine 3 finished\n")
async def coroutine4():
print("coroutine 4 started")
await asyncio.sleep(1)
print("coroutine 4 finished\n")
async def seq_coroutine():
await coroutine1()
await coroutine2()
await coroutine3()
await coroutine4()
def main():
main_loop = asyncio.get_event_loop()
task = [asyncio.ensure_future(my_coroutine("task1", 11)),
asyncio.ensure_future(seq_coroutine())]
try:
print('loop is started\n')
main_loop.run_until_complete(asyncio.gather(*task))
finally:
print('loop is closed')
main_loop.close()
if __name__ == "__main__":
main()
</code></pre>
<p>This is the output of this program:</p>
<pre><code>loop is started
task1 started
coroutine 1 started
task1: second 1
coroutine 1 finished
coroutine 2 started
task1: second 2
coroutine 2 finished
coroutine 3 started
task1: second 3
coroutine 3 finished
coroutine 4 started
task1: second 4
coroutine 4 finished
task1: second 5
task1: second 6
task1: second 7
task1: second 8
task1: second 9
task1: second 10
loop is closed
</code></pre>
<p>I only want to have something like this:</p>
<pre><code>loop is started
task1 started
coroutine 1 started
task1: second 1
coroutine 1 finished
coroutine 2 started
task1: second 2
coroutine 2 finished
coroutine 3 started
task1: second 3
coroutine 3 finished
coroutine 4 started
task1: second 4
coroutine 4 finished
loop is closed
</code></pre>
| 0 | 2016-08-29T11:15:29Z | 39,208,428 | <p>I just found a suitable solution for my problem.
I won't delete my post; I'll share my solution so that it may help others who face the same problem.
I used <code>asyncio.wait(task, return_when=asyncio.FIRST_COMPLETED)</code>, which returns as soon as the first task is finished.
This is the solution:</p>
<pre><code>import asyncio
from asyncio.tasks import FIRST_COMPLETED
from concurrent.futures import CancelledError
async def my_coroutine(task, seconds_to_sleep = 3):
print("{task_name} started\n".format(task_name=task))
for i in range(1, seconds_to_sleep):
await asyncio.sleep(1)
print("\n{task_name}: second {seconds}\n".format(task_name=task, seconds=i))
async def coroutine1():
print("coroutine 1 started")
await asyncio.sleep(1)
print("coroutine 1 finished\n")
async def coroutine2():
print("coroutine 2 started")
await asyncio.sleep(1)
print("coroutine 2 finished\n")
async def coroutine3():
print("coroutine 3 started")
await asyncio.sleep(1)
print("coroutine 3 finished\n")
async def coroutine4():
print("coroutine 4 started")
await asyncio.sleep(1)
print("coroutine 4 finished\n")
async def seq_coroutine(loop):
await coroutine1()
await coroutine2()
await coroutine3()
await coroutine4()
def main():
main_loop = asyncio.get_event_loop()
task = [asyncio.ensure_future(my_coroutine("task1", 11)),
asyncio.ensure_future(seq_coroutine(main_loop))]
try:
print('loop is started\n')
done, pending = main_loop.run_until_complete(asyncio.wait(task, return_when=asyncio.FIRST_COMPLETED))
print("Completed tasks: {completed}\nPending tasks: {pending}".format(completed = done, pending = pending))
#canceling the tasks
for task in pending:
print("Cancelling {task}: {task_cancel}".format(task=task, task_cancel=task.cancel()))
except CancelledError as e:
print("Error happened while canceling the task: {e}".format(e=e))
finally:
print('loop is closed')
if __name__ == "__main__":
main()
</code></pre>
| 3 | 2016-08-29T14:13:57Z | [
"python",
"coroutine",
"python-asyncio"
] |
Python script execution time increases when executed multiple times in parallel | 39,204,955 | <p>I have a python script whose execution time is 1.2 seconds when it is executed standalone.</p>
<p>But when I execute it 5-6 times in parallel (I am using Postman to ping the URL multiple times) the execution time shoots up.</p>
<p>Adding the breakdown of the time taken.</p>
<pre><code>1 run -> ~1.2seconds
2 run -> ~1.8seconds
3 run -> ~2.3seconds
4 run -> ~2.9seconds
5 run -> ~4.0seconds
6 run -> ~4.5seconds
7 run -> ~5.2seconds
8 run -> ~5.2seconds
9 run -> ~6.4seconds
10 run -> ~7.1seconds
</code></pre>
<p>Screenshot of the top command (asked for in the comment):
<a href="http://i.stack.imgur.com/Ec3XC.png"><img src="http://i.stack.imgur.com/Ec3XC.png" alt="enter image description here"></a></p>
<p>This is a sample code:</p>
<pre><code>import psutil
import os
import time
start_time = time.time()
import cgitb
cgitb.enable()
import numpy as np
import MySQLdb as mysql
import cv2
import sys
import rpy2.robjects as robj
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
from rpy2.robjects.packages import importr
R = robj.r
DTW = importr('dtw')
process= psutil.Process(os.getpid())
print " Memory Consumed after libraries load: "
print process.memory_info()[0]/float(2**20)
st_pt=4
# Generate our data (numpy arrays)
template = np.array([range(84),range(84),range(84)]).transpose()
query = np.array([range(2500000),range(2500000),range(2500000)]).transpose()
#time taken
print(" --- %s seconds ---" % (time.time() - start_time))
</code></pre>
<p>I also checked my memory consumption using <code>watch -n 1 free -m</code> and memory consumption also increases noticeably.</p>
<p>1) How do I make sure that the execution time of the script remains constant every time?</p>
<p>2) Can I load the libraries permanently so that the time taken by the script to load the libraries and the memory consumed can be minimized?</p>
<p>I made an environment and tried using </p>
<p><code>#!/home/ec2-user/anaconda/envs/test_python/</code></p>
<p>but it doesn't make any difference whatsoever.</p>
<p><strong>EDIT:</strong></p>
<p>I have an Amazon EC2 server with 7.5 GB of RAM.</p>
<p>My PHP file with which I am calling the python script:</p>
<pre><code><?php
$response = array("error" => FALSE);
if($_SERVER['REQUEST_METHOD']=='GET'){
$response["error"] = FALSE;
$command =escapeshellcmd(shell_exec("sudo /home/ec2-user/anaconda/envs/anubhaw_python/bin/python2.7 /var/www/cgi-bin/dtw_test_code.py"));
session_write_close();
$order=array("\n","\\");
$cleanData=str_replace($order,'',$command);
$response["message"]=$cleanData;
} else
{
header('HTTP/1.0 400 Bad Request');
$response["message"] = "Bad Request.";
}
echo json_encode($response);
?>
</code></pre>
<p>Thanks</p>
| 11 | 2016-08-29T11:17:07Z | 39,247,616 | <p>1) You really can't ensure the execution will always take the same time, but at least you can avoid performance degradation by using a "locking" strategy like the ones described in <a href="http://raspberrypi.stackexchange.com/a/22006">this answer</a>.</p>
<p>Basically you can test if the lockfile exists, and if so, put your program to sleep a certain amount of time, then try again. </p>
<p>If the program does not find the lockfile, it creates it, and deletes the lockfile at the end of its execution.</p>
<p>Please note: in the below code, when the script fails to get the lock for a certain number of <code>retries</code>, it will exit (but this choice is really up to you).</p>
<p>The following code exemplifies the use of a file as a "lock" against parallel executions of the same script.</p>
<pre><code>import time
import os
import sys
lockfilename = '.lock'
retries = 10
fail = True
for i in range(retries):
try:
lock = open(lockfilename, 'r')
lock.close()
time.sleep(1)
except Exception:
print('Got after {} retries'.format(i))
fail = False
lock = open(lockfilename, 'w')
lock.write('Locked!')
lock.close()
break
if fail:
print("Cannot get the lock, exiting.")
sys.exit(2)
# program execution...
time.sleep(5)
# end of program execution
os.remove(lockfilename)
</code></pre>
<p>2) This would mean that different python instances share the same memory pool and I think it's not feasible.</p>
| 2 | 2016-08-31T10:51:21Z | [
"python",
"time"
] |
Python script execution time increases when executed multiple times in parallel | 39,204,955 | <p>I have a python script whose execution time is 1.2 seconds when it is executed standalone.</p>
<p>But when I execute it 5-6 times in parallel (I am using Postman to ping the URL multiple times) the execution time shoots up.</p>
<p>Adding the breakdown of the time taken.</p>
<pre><code>1 run -> ~1.2seconds
2 run -> ~1.8seconds
3 run -> ~2.3seconds
4 run -> ~2.9seconds
5 run -> ~4.0seconds
6 run -> ~4.5seconds
7 run -> ~5.2seconds
8 run -> ~5.2seconds
9 run -> ~6.4seconds
10 run -> ~7.1seconds
</code></pre>
<p>Screenshot of the top command (asked for in the comment):
<a href="http://i.stack.imgur.com/Ec3XC.png"><img src="http://i.stack.imgur.com/Ec3XC.png" alt="enter image description here"></a></p>
<p>This is a sample code:</p>
<pre><code>import psutil
import os
import time
start_time = time.time()
import cgitb
cgitb.enable()
import numpy as np
import MySQLdb as mysql
import cv2
import sys
import rpy2.robjects as robj
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
from rpy2.robjects.packages import importr
R = robj.r
DTW = importr('dtw')
process= psutil.Process(os.getpid())
print " Memory Consumed after libraries load: "
print process.memory_info()[0]/float(2**20)
st_pt=4
# Generate our data (numpy arrays)
template = np.array([range(84),range(84),range(84)]).transpose()
query = np.array([range(2500000),range(2500000),range(2500000)]).transpose()
#time taken
print(" --- %s seconds ---" % (time.time() - start_time))
</code></pre>
<p>I also checked my memory consumption using <code>watch -n 1 free -m</code> and memory consumption also increases noticeably.</p>
<p>1) How do I make sure that the execution time of the script remains constant every time?</p>
<p>2) Can I load the libraries permanently so that the time taken by the script to load the libraries and the memory consumed can be minimized?</p>
<p>I made an environment and tried using </p>
<p><code>#!/home/ec2-user/anaconda/envs/test_python/</code></p>
<p>but it doesn't make any difference whatsoever.</p>
<p><strong>EDIT:</strong></p>
<p>I have an Amazon EC2 server with 7.5 GB of RAM.</p>
<p>My PHP file with which I am calling the python script:</p>
<pre><code><?php
$response = array("error" => FALSE);
if($_SERVER['REQUEST_METHOD']=='GET'){
$response["error"] = FALSE;
$command =escapeshellcmd(shell_exec("sudo /home/ec2-user/anaconda/envs/anubhaw_python/bin/python2.7 /var/www/cgi-bin/dtw_test_code.py"));
session_write_close();
$order=array("\n","\\");
$cleanData=str_replace($order,'',$command);
$response["message"]=$cleanData;
} else
{
header('HTTP/1.0 400 Bad Request');
$response["message"] = "Bad Request.";
}
echo json_encode($response);
?>
</code></pre>
<p>Thanks</p>
| 11 | 2016-08-29T11:17:07Z | 39,337,216 | <p>The EC2 cloud does not guarantee 7.5 GB of free memory on the server. When the server has less than 7.5 GB of physical free RAM, VM performance is severely impacted, as you are seeing. Try reducing the amount of memory the server thinks it has.</p>
<p>This form of parallelism is very expensive. With a roughly 300 MB memory requirement per run, the ideal would be a long-running script that re-uses the memory across multiple requests. The Unix fork function allows shared state to be re-used; <code>os.fork</code> gives this in Python, but it may not be compatible with your libraries.</p>
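<p>A minimal sketch of the preload-then-fork idea (POSIX only; the work function is illustrative, not the asker's actual code). The expensive imports happen once in the parent, and each forked child re-uses the already-loaded modules via copy-on-write:</p>

```python
import os

# ...the expensive imports (numpy, rpy2, cv2, ...) would happen once here,
# in the parent process, before any fork...

def do_work():
    # Stand-in for the real per-request handling
    return "handled in pid %d" % os.getpid()

pid = os.fork()
if pid == 0:
    # Child: re-uses the parent's already-imported modules (copy-on-write)
    print(do_work())
    os._exit(0)

# Parent: wait for the child and capture its exit status
_, status = os.waitpid(pid, 0)
print("child exit status:", status)
```

Real servers built on this idea (e.g. prefork-style workers) keep a pool of forked children alive instead of forking per request.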
| 0 | 2016-09-05T20:19:49Z | [
"python",
"time"
] |
Python script execution time increases when executed multiple times in parallel | 39,204,955 | <p>I have a python script whose execution time is 1.2 seconds when it is executed standalone.</p>
<p>But when I execute it 5-6 times in parallel (I am using Postman to ping the URL multiple times) the execution time shoots up.</p>
<p>Adding the breakdown of the time taken.</p>
<pre><code>1 run -> ~1.2seconds
2 run -> ~1.8seconds
3 run -> ~2.3seconds
4 run -> ~2.9seconds
5 run -> ~4.0seconds
6 run -> ~4.5seconds
7 run -> ~5.2seconds
8 run -> ~5.2seconds
9 run -> ~6.4seconds
10 run -> ~7.1seconds
</code></pre>
<p>Screenshot of the top command (asked for in the comment):
<a href="http://i.stack.imgur.com/Ec3XC.png"><img src="http://i.stack.imgur.com/Ec3XC.png" alt="enter image description here"></a></p>
<p>This is a sample code:</p>
<pre><code>import psutil
import os
import time
start_time = time.time()
import cgitb
cgitb.enable()
import numpy as np
import MySQLdb as mysql
import cv2
import sys
import rpy2.robjects as robj
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
from rpy2.robjects.packages import importr
R = robj.r
DTW = importr('dtw')
process= psutil.Process(os.getpid())
print " Memory Consumed after libraries load: "
print process.memory_info()[0]/float(2**20)
st_pt=4
# Generate our data (numpy arrays)
template = np.array([range(84),range(84),range(84)]).transpose()
query = np.array([range(2500000),range(2500000),range(2500000)]).transpose()
#time taken
print(" --- %s seconds ---" % (time.time() - start_time))
</code></pre>
<p>I also checked my memory consumption using <code>watch -n 1 free -m</code> and memory consumption also increases noticeably.</p>
<p>1) How do I make sure that the execution time of the script remains constant every time?</p>
<p>2) Can I load the libraries permanently so that the time taken by the script to load the libraries and the memory consumed can be minimized?</p>
<p>I made an environment and tried using </p>
<p><code>#!/home/ec2-user/anaconda/envs/test_python/</code></p>
<p>but it doesn't make any difference whatsoever.</p>
<p><strong>EDIT:</strong></p>
<p>I have an Amazon EC2 server with 7.5 GB of RAM.</p>
<p>My PHP file with which I am calling the python script:</p>
<pre><code><?php
$response = array("error" => FALSE);
if($_SERVER['REQUEST_METHOD']=='GET'){
$response["error"] = FALSE;
$command =escapeshellcmd(shell_exec("sudo /home/ec2-user/anaconda/envs/anubhaw_python/bin/python2.7 /var/www/cgi-bin/dtw_test_code.py"));
session_write_close();
$order=array("\n","\\");
$cleanData=str_replace($order,'',$command);
$response["message"]=$cleanData;
} else
{
header('HTTP/1.0 400 Bad Request');
$response["message"] = "Bad Request.";
}
echo json_encode($response);
?>
</code></pre>
<p>Thanks</p>
| 11 | 2016-08-29T11:17:07Z | 39,356,242 | <p>It might be because of the way computers are run.</p>
<p>Each program gets a <strong>slice of time on a computer</strong> (quote <a href="http://rads.stackoverflow.com/amzn/click/146541956X" rel="nofollow"><strong><em>Help Your Kids With Computer Programming</em></strong></a>, say maybe <strong><em>1/1000</em> of a second</strong>)</p>
<p><strong><em>Answer 1</em></strong>: Try using multiple <strong><em><a href="https://docs.python.org/2/library/threading.html#thread-objects" rel="nofollow">threads</a></em></strong> instead of <strong>parallel processes</strong>. </p>
<p>It'll be less <strong>time-consuming</strong>, but the program's <strong>time to execute</strong> still won't be completely <strong>constant</strong>.</p>
<p><strong><em>Note:</em></strong> Each program has its own slot of <strong>memory</strong>, so that is why <strong>memory consumption</strong> is shooting up.</p>
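<p>As a hedged illustration (the function and names below are placeholders, not the asker's workload), a thread pool keeps all tasks inside one interpreter, so the imported libraries and their memory are shared rather than duplicated per launched script. Note, however, that for CPU-bound work CPython's GIL means threads will not reduce total compute time; they mainly avoid the repeated start-up cost:</p>

```python
from concurrent.futures import ThreadPoolExecutor

# Heavy imports are paid once in the single shared interpreter,
# instead of once per separately launched script.
import json  # stand-in for numpy/cv2/rpy2 from the question

def handle(n):
    # Placeholder for the real per-request work.
    return json.dumps({"request": n})

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, range(5)))

print(results)
```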
| 0 | 2016-09-06T19:11:21Z | [
"python",
"time"
] |
Python script execution time increases when executed multiple time parallely | 39,204,955 | <p>I have a python script whose execution time is 1.2 seconds when it is executed standalone.</p>
<p>But when I execute it 5-6 times in parallel (I am using Postman to ping the URL multiple times), the execution time shoots up.</p>
<p>Adding the breakdown of the time taken.</p>
<pre><code>1 run -> ~1.2seconds
2 run -> ~1.8seconds
3 run -> ~2.3seconds
4 run -> ~2.9seconds
5 run -> ~4.0seconds
6 run -> ~4.5seconds
7 run -> ~5.2seconds
8 run -> ~5.2seconds
9 run -> ~6.4seconds
10 run -> ~7.1seconds
</code></pre>
<p>Screenshot of the top command (asked for in the comments):
<a href="http://i.stack.imgur.com/Ec3XC.png"><img src="http://i.stack.imgur.com/Ec3XC.png" alt="enter image description here"></a></p>
<p>This is a sample code:</p>
<pre><code>import psutil
import os
import time
start_time = time.time()
import cgitb
cgitb.enable()
import numpy as np
import MySQLdb as mysql
import cv2
import sys
import rpy2.robjects as robj
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
from rpy2.robjects.packages import importr
R = robj.r
DTW = importr('dtw')
process= psutil.Process(os.getpid())
print " Memory Consumed after libraries load: "
print process.memory_info()[0]/float(2**20)
st_pt=4
# Generate our data (numpy arrays)
template = np.array([range(84),range(84),range(84)]).transpose()
query = np.array([range(2500000),range(2500000),range(2500000)]).transpose()
#time taken
print(" --- %s seconds ---" % (time.time() - start_time))
</code></pre>
<p>I also checked my memory consumption using <code>watch -n 1 free -m</code> and memory consumption also increases noticeably.</p>
<p>1) How do I make sure that the execution time of the script remains constant every time?</p>
<p>2) Can I load the libraries permanently so that the time taken by the script to load the libraries and the memory consumed can be minimized?</p>
<p>I made an environment and tried using </p>
<p><code>#!/home/ec2-user/anaconda/envs/test_python/</code></p>
<p>but it doesn't make any difference whatsoever.</p>
<p><strong>EDIT:</strong></p>
<p>I have AMAZON's EC2 server with 7.5GB RAM.</p>
<p>My PHP file, with which I am calling the Python script.</p>
<pre><code><?php
$response = array("error" => FALSE);
if($_SERVER['REQUEST_METHOD']=='GET'){
$response["error"] = FALSE;
$command =escapeshellcmd(shell_exec("sudo /home/ec2-user/anaconda/envs/anubhaw_python/bin/python2.7 /var/www/cgi-bin/dtw_test_code.py"));
session_write_close();
$order=array("\n","\\");
$cleanData=str_replace($order,'',$command);
$response["message"]=$cleanData;
} else
{
header('HTTP/1.0 400 Bad Request');
$response["message"] = "Bad Request.";
}
echo json_encode($response);
?>
</code></pre>
<p>Thanks</p>
| 11 | 2016-08-29T11:17:07Z | 39,357,591 | <h2>Here's what we have:</h2>
<ul>
<li><p>The EC2 instance type is an m3.large box, which has only <strong>2</strong> vCPUs <a href="https://aws.amazon.com/ec2/instance-types/?nc1=h_ls" rel="nofollow">https://aws.amazon.com/ec2/instance-types/?nc1=h_ls</a></p></li>
<li><p>We need to run a CPU- and memory-hungry script which takes over a second to execute even when the CPU is not busy</p></li>
<li><p>You're building an API that needs to handle concurrent requests, and you are running Apache</p></li>
<li><p>From the screenshot I can conclude that:</p>
<ul>
<li><p>your CPUs are 100% utilized when 5 processes are run. Most likely they would be 100% utilized even when fewer processes are run. So this is the bottleneck, and it is no surprise that the more processes run, the more time is required: your CPU resources just get shared among the concurrently running scripts.</p></li>
<li><p>each script copy eats about ~300MB of RAM so you have lots of spare RAM and it's not a bottleneck. The amount of free + buffers memory on your screenshot confirms that.</p></li>
</ul></li>
<li><p>The missing part is:</p>
<ol>
<li>are requests sent directly to your apache server, or is there a balancer/proxy in front of it?</li>
<li>why do you need PHP in your example? There are plenty of solutions available using the Python ecosystem alone, without a PHP wrapper in front of it</li>
</ol></li>
</ul>
<h2>Answers to your questions:</h2>
<blockquote>
<ol>
<li>That's infeasible in the general case</li>
</ol>
</blockquote>
<p>The most you can do is track your CPU usage and make sure its idle time doesn't drop below some empirical threshold; in that case your scripts would run in a more or less fixed amount of time. </p>
<p>To guarantee that, you need to limit the number of requests being processed concurrently.
But if 100 requests are sent to your API concurrently, you won't be able to handle them all in parallel! Only some of them will be handled in parallel while the others wait their turn. But your server won't be knocked down trying to serve them all.</p>
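<p>A minimal, hypothetical sketch of such a limit, using a counting semaphore so that at most a fixed number of handlers run at once while the rest wait their turn (the handler body is a placeholder, not the asker's DTW workload):</p>

```python
import threading

MAX_CONCURRENT = 2  # assumed limit; tune to your vCPU count
slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def handle_request(req_id):
    with slots:            # blocks while MAX_CONCURRENT handlers are busy
        return req_id * 2  # placeholder for the heavy per-request work

results = {}

def worker(i):
    results[i] = handle_request(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.items()))
```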
<blockquote>
<ol start="2">
<li>Yes and no</li>
</ol>
</blockquote>
<p><strong>No</strong>, because there is little you can do in your present architecture, where a new script is launched on every request through a PHP wrapper. By the way, it is a very expensive operation to start a new script from scratch each time.</p>
<p><strong>Yes</strong> if a different solution is used. Here are the options:</p>
<ul>
<li><p>use a python-aware <em>pre-forking</em> webserver which will handle your requests directly. You'll spare CPU resources on python startup, and you might use some preloading techniques to share RAM among workers, e.g. <a href="http://docs.gunicorn.org/en/stable/settings.html#preload-app" rel="nofollow">http://docs.gunicorn.org/en/stable/settings.html#preload-app</a>. You'd also need to limit the number of parallel workers to be run <a href="http://docs.gunicorn.org/en/stable/settings.html#workers" rel="nofollow">http://docs.gunicorn.org/en/stable/settings.html#workers</a> to address your first requirement.</p></li>
<li><p>if you need <strong>PHP</strong> for some reason, you might set up an <em>intermediary</em> between the <strong>PHP</strong> script and the python workers, i.e. a <em>queue</em>-like server.
Then simply run several instances of your <em>python</em> scripts, which wait for a request to become available in the queue. Once one is available, a worker handles it and puts the response back into the <em>queue</em>, and the php script slurps it and returns it to the client. This is more complex to build than the first solution (if you can eliminate your PHP script, of course) and more components are involved.</p></li>
<li><p>reject the idea of handling such heavy requests concurrently, and instead assign each request a <strong>unique id</strong>, put the request into a <em>queue</em>, and return this id to the client immediately. The request will be picked up by an offline handler and put back into the <em>queue</em> once it's finished. It will be the client's responsibility to poll your API for the readiness of this particular request</p></li>
<li><p>1st and 2nd combined: handle requests in <strong>PHP</strong> and ask another HTTP server (or any other TCP server) to handle your preloaded .py-scripts</p></li>
</ul>
| 1 | 2016-09-06T20:48:06Z | [
"python",
"time"
] |
Python script execution time increases when executed multiple time parallely | 39,204,955 | <p>I have a python script whose execution time is 1.2 seconds when it is executed standalone.</p>
<p>But when I execute it 5-6 times in parallel (I am using Postman to ping the URL multiple times), the execution time shoots up.</p>
<p>Adding the breakdown of the time taken.</p>
<pre><code>1 run -> ~1.2seconds
2 run -> ~1.8seconds
3 run -> ~2.3seconds
4 run -> ~2.9seconds
5 run -> ~4.0seconds
6 run -> ~4.5seconds
7 run -> ~5.2seconds
8 run -> ~5.2seconds
9 run -> ~6.4seconds
10 run -> ~7.1seconds
</code></pre>
<p>Screenshot of the top command (asked for in the comments):
<a href="http://i.stack.imgur.com/Ec3XC.png"><img src="http://i.stack.imgur.com/Ec3XC.png" alt="enter image description here"></a></p>
<p>This is a sample code:</p>
<pre><code>import psutil
import os
import time
start_time = time.time()
import cgitb
cgitb.enable()
import numpy as np
import MySQLdb as mysql
import cv2
import sys
import rpy2.robjects as robj
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
from rpy2.robjects.packages import importr
R = robj.r
DTW = importr('dtw')
process= psutil.Process(os.getpid())
print " Memory Consumed after libraries load: "
print process.memory_info()[0]/float(2**20)
st_pt=4
# Generate our data (numpy arrays)
template = np.array([range(84),range(84),range(84)]).transpose()
query = np.array([range(2500000),range(2500000),range(2500000)]).transpose()
#time taken
print(" --- %s seconds ---" % (time.time() - start_time))
</code></pre>
<p>I also checked my memory consumption using <code>watch -n 1 free -m</code> and memory consumption also increases noticeably.</p>
<p>1) How do I make sure that the execution time of the script remains constant every time?</p>
<p>2) Can I load the libraries permanently so that the time taken by the script to load the libraries and the memory consumed can be minimized?</p>
<p>I made an environment and tried using </p>
<p><code>#!/home/ec2-user/anaconda/envs/test_python/</code></p>
<p>but it doesn't make any difference whatsoever.</p>
<p><strong>EDIT:</strong></p>
<p>I have AMAZON's EC2 server with 7.5GB RAM.</p>
<p>My PHP file, with which I am calling the Python script.</p>
<pre><code><?php
$response = array("error" => FALSE);
if($_SERVER['REQUEST_METHOD']=='GET'){
$response["error"] = FALSE;
$command =escapeshellcmd(shell_exec("sudo /home/ec2-user/anaconda/envs/anubhaw_python/bin/python2.7 /var/www/cgi-bin/dtw_test_code.py"));
session_write_close();
$order=array("\n","\\");
$cleanData=str_replace($order,'',$command);
$response["message"]=$cleanData;
} else
{
header('HTTP/1.0 400 Bad Request');
$response["message"] = "Bad Request.";
}
echo json_encode($response);
?>
</code></pre>
<p>Thanks</p>
| 11 | 2016-08-29T11:17:07Z | 39,359,263 | <p>1)</p>
<h1>More servers equals more availability</h1>
<p>Hearsay tells me that one effective way to ensure consistent request times is to use multiple requests to a cluster. As I heard it the idea goes something like this. </p>
<h3>The chance of a slow request</h3>
<p><sub>(Disclaimer: I'm not much of a mathematician or statistician.)</sub></p>
<p>If there is a 1% chance that a request will take an abnormal amount of time to finish, then one in a hundred requests can be expected to be slow. If you, as a client/consumer, make two requests to a cluster instead of just one, the chance that both of them turn out to be slow is more like 1/10000, and with three, 1/1000000, et cetera. The downside is that doubling your incoming requests means providing (and paying for) as much as twice the server power to fulfill your requests in a consistent time; this additional cost scales with how small a chance of a slow request you find acceptable.</p>
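<p>The arithmetic behind those figures is just multiplying independent failure probabilities (assuming the duplicate requests really are independent of each other):</p>

```python
p_slow = 0.01  # chance that any single request is slow

# The duplicated call is slow only if *every* copy turns out slow.
both_slow = p_slow ** 2       # about 1 in 10,000
all_three_slow = p_slow ** 3  # about 1 in 1,000,000

print(both_slow, all_three_slow)
```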
<p>To my knowledge this concept is optimized for consistent fulfillment times.</p>
<h3>The client</h3>
<p>A client interfacing with a service like this has to be able to spawn multiple requests and handle them gracefully, probably including closing the unfulfilled connections as soon as it can.</p>
<h3>The servers</h3>
<p>On the backend there should be a load balancer that can spread the multiple incoming client requests across multiple unique cluster workers. If a single client makes multiple requests to an overburdened node, it's just going to compound its own request time, like you see in your simple example.</p>
<p>In addition to having the client opportunistically close connections, it would be best to have a system for sharing job-fulfillment status/information, so that backlogged requests on other, slower-to-process nodes have a chance of aborting an already-fulfilled request. </p>
<hr>
<p>This is a rather informal answer; I do not have direct experience with optimizing a service application in this manner. If someone does, I encourage and welcome more detailed edits and expert implementation opinions.</p>
<hr>
<p>2)</p>
<h1>Caching imports</h1>
<p>Yes, that is a thing, and it's awesome!</p>
<p>I would personally recommend setting up django+gunicorn+nginx. Nginx can cache static content and keep a request backlog; gunicorn provides application caching and thread & worker management (not to mention awesome administration and statistics tools); django embeds best practices for database migrations, auth, and request routing, as well as off-the-shelf plugins for providing semantic REST endpoints and documentation. All sorts of goodness.</p>
<p>If you really insist on building it from scratch yourself, you should study <a href="http://uwsgi-docs.readthedocs.io/en/latest/index.html" rel="nofollow">uWSGI</a>, a great <a href="https://www.python.org/dev/peps/pep-0333/" rel="nofollow">WSGI implementation</a> that can be interfaced with <a href="http://gunicorn.org/" rel="nofollow">gunicorn</a> to provide application caching. Gunicorn isn't the only option either; Nicholas Piël has a <a href="http://nichol.as/benchmark-of-python-web-servers" rel="nofollow">great write-up</a> comparing the performance of various python web serving apps.</p>
| 1 | 2016-09-06T23:37:08Z | [
"python",
"time"
] |
Python search for a start hex string, save the hex string after that to a new file | 39,204,960 | <p>PYTHON</p>
<p>I would like to open a text file, search for a HEX pattern that starts with 45 3F, grab the following 3 HEX byte values (6 hex digits), for example 08 DF 5D, and put them all in a new file. I also know that it always ends with 00 00.</p>
<p>So the file can look like: bla bla sdsfsdf 45 3F 08 DF 5D 00 00 dsafasdfsadf 45 3F 07 D3 5F 00 00 xztert</p>
<p>And should be put in a new file like this:
08 DF 5D
07 D3 5F </p>
<p>How can I do that?</p>
<p>I have tried:</p>
<pre><code>output = open('file.txt','r')
my_data = output.read()
print re.findall(r"45 3F[0-9a-fA-F]", my_data)
</code></pre>
<p>but it only prints:</p>
<p>[]</p>
<p>Any suggestions?</p>
<p>Thank you :-)</p>
| 0 | 2016-08-29T11:17:16Z | 39,206,586 | <p>Here's a complete answer (Python 3):</p>
<pre><code>import re

with open('file.txt') as reader:
my_data = reader.read()
matches = re.findall(r"45 3F ([0-9A-F]{2} [0-9A-F]{2} [0-9A-F]{2})", my_data)
data_to_write = "\n".join(matches)  # one match per line, as in the desired output
with open('out.txt', 'w') as writer:
writer.write(data_to_write)
</code></pre>
<p><code>re.findall</code> returns the capturing group, so there's no need to clear out '45 3F'.</p>
<p>When dealing with files, use <code>with open</code> so that the file descriptor is released automatically when the block ends.</p>
<p>If you want to accept the first two octets dynamically:</p>
<pre><code>search_string = input()  # use raw_input() on Python 2
regex_pattern = search_string.upper() + r" ([0-9A-F]{2} [0-9A-F]{2} [0-9A-F]{2})"
matches = re.findall(regex_pattern, my_data)
</code></pre>
| 0 | 2016-08-29T12:39:12Z | [
"python"
] |
Boolean displays True when using list but not text file | 39,204,995 | <p>I am having an issue with getting a ping_reply == 0 when opening a file. When I use a list for the ip_list variable, it has no issue returning 0 (where 0 represents success)</p>
<pre><code>import subprocess
ip_list = []
def ip_is_valid():
check = False
#Global exposes outside the local function
global ip_list
while True:
#Prompting user for input
print "\n" + "# " * 20 + "\n"
ip_file = raw_input("# Enter IP file name followed by extension: ")
print "\n" + "# " * 20 + "\n"
#Changing exception message
try:
selected_ip_file = open(ip_file, 'r')
#Start from the beginning of the file
selected_ip_file.seek(0)
ip_list = selected_ip_file.readlines()
selected_ip_file.close()
except IOError:
print "\n* File %s does not exist. Please check and try again\n" % ip_file
for ip in ip_list:
a = ip.split('.')
if (len(a) == 4) and (1 <= int(a[0]) <= 223) and (int(a[0]) != 127) and (int(a[0]) != 169 or int(a[1]) != 254) and (0 <= int(a[1]) <= 255 and 0 <= int(a[2]) <= 255 and 0 <= int(a[3]) <= 255):
check = True
break
else:
print "\n* There was an invalid IP address. Please check and try again.\n"
check = False
continue
if check == False:
continue
elif check == True:
break
check2 = False
#Check IP Reachability
print "\n* Checking IP reachability. Please wait...\n"
while True:
for ip in ip_list:
ping_reply = subprocess.call(['ping', '-n', '2', '-w', '2', ip])
if ping_reply == 0:
check2 = True
continue
elif ping_reply == 2:
print "\n* No response from device %s." % ip
check2 = False
break
else:
print "\n* Ping to the following device has failed:", ip
check2 = False
break
#Evaluating the check flag
if check2 == False:
print "* Please re-check IP address list or device.\n"
ip_is_valid()
elif check2 == True:
print "\n* All devices are reachable."
break
</code></pre>
<p>I get the following error:</p>
<pre><code># # # # # # # # # # # # # # # # # # # #
# Enter IP file name followed by extension: ipaddrlist.txt
# # # # # # # # # # # # # # # # # # # #
* Checking IP reachability. Please wait...
Ping request could not find host 192.168.1.1
. Please check the name and try again.
* Ping to the following device has failed: 192.168.1.1
* Please re-check IP address list or device.
</code></pre>
<p>If I use a list:</p>
<pre><code>Pinging 192.168.1.1 with 32 bytes of data:
Reply from 192.168.1.1: bytes=32 time=2ms TTL=63
Reply from 192.168.1.1: bytes=32 time=2ms TTL=63
Ping statistics for 192.168.1.1:
Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 2ms, Maximum = 40ms, Average = 21ms
Pinging 192.168.1.2 with 32 bytes of data:
Reply from 192.168.1.2: bytes=32 time=2ms TTL=63
Reply from 192.168.1.2: bytes=32 time=2ms TTL=63
Ping statistics for 192.168.1.2:
Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 2ms, Maximum = 2ms, Average = 2ms
>>> ping_reply == 0
True
</code></pre>
 | 0 | 2016-08-29T11:19:01Z | 39,205,662 | <p>Try removing the whitespace around the input before using it:</p>
<p><code>ip_file = raw_input("# Enter IP file name followed by extension: ").strip()</code></p>
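<p>The same trimming matters for the lines read from the IP file itself: <code>readlines()</code> keeps the trailing newline on every entry, and that newline then corrupts the hostname passed to <code>ping</code>. A small illustration:</p>

```python
raw_line = "192.168.1.1\n"   # what readlines() actually returns per entry

# Passing raw_line to ping would send a hostname ending in "\n" and fail.
cleaned = raw_line.strip()   # or raw_line.rstrip('\n')

print(repr(raw_line), "->", repr(cleaned))
```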
| 0 | 2016-08-29T11:51:45Z | [
"python",
"networking"
] |
Boolean displays True when using list but not text file | 39,204,995 | <p>I am having an issue with getting a ping_reply == 0 when opening a file. When I use a list for the ip_list variable, it has no issue returning 0 (where 0 represents success)</p>
<pre><code>import subprocess
ip_list = []
def ip_is_valid():
check = False
#Global exposes outside the local function
global ip_list
while True:
#Prompting user for input
print "\n" + "# " * 20 + "\n"
ip_file = raw_input("# Enter IP file name followed by extension: ")
print "\n" + "# " * 20 + "\n"
#Changing exception message
try:
selected_ip_file = open(ip_file, 'r')
#Start from the beginning of the file
selected_ip_file.seek(0)
ip_list = selected_ip_file.readlines()
selected_ip_file.close()
except IOError:
print "\n* File %s does not exist. Please check and try again\n" % ip_file
for ip in ip_list:
a = ip.split('.')
if (len(a) == 4) and (1 <= int(a[0]) <= 223) and (int(a[0]) != 127) and (int(a[0]) != 169 or int(a[1]) != 254) and (0 <= int(a[1]) <= 255 and 0 <= int(a[2]) <= 255 and 0 <= int(a[3]) <= 255):
check = True
break
else:
print "\n* There was an invalid IP address. Please check and try again.\n"
check = False
continue
if check == False:
continue
elif check == True:
break
check2 = False
#Check IP Reachability
print "\n* Checking IP reachability. Please wait...\n"
while True:
for ip in ip_list:
ping_reply = subprocess.call(['ping', '-n', '2', '-w', '2', ip])
if ping_reply == 0:
check2 = True
continue
elif ping_reply == 2:
print "\n* No response from device %s." % ip
check2 = False
break
else:
print "\n* Ping to the following device has failed:", ip
check2 = False
break
#Evaluating the check flag
if check2 == False:
print "* Please re-check IP address list or device.\n"
ip_is_valid()
elif check2 == True:
print "\n* All devices are reachable."
break
</code></pre>
<p>I get the following error:</p>
<pre><code># # # # # # # # # # # # # # # # # # # #
# Enter IP file name followed by extension: ipaddrlist.txt
# # # # # # # # # # # # # # # # # # # #
* Checking IP reachability. Please wait...
Ping request could not find host 192.168.1.1
. Please check the name and try again.
* Ping to the following device has failed: 192.168.1.1
* Please re-check IP address list or device.
</code></pre>
<p>If I use a list:</p>
<pre><code>Pinging 192.168.1.1 with 32 bytes of data:
Reply from 192.168.1.1: bytes=32 time=2ms TTL=63
Reply from 192.168.1.1: bytes=32 time=2ms TTL=63
Ping statistics for 192.168.1.1:
Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 2ms, Maximum = 40ms, Average = 21ms
Pinging 192.168.1.2 with 32 bytes of data:
Reply from 192.168.1.2: bytes=32 time=2ms TTL=63
Reply from 192.168.1.2: bytes=32 time=2ms TTL=63
Ping statistics for 192.168.1.2:
Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 2ms, Maximum = 2ms, Average = 2ms
>>> ping_reply == 0
True
</code></pre>
| 0 | 2016-08-29T11:19:01Z | 39,205,666 | <p>Ah yes, I almost forgot about that, Mark. Here is the fixed code:</p>
<pre><code>import subprocess
ip_list = []
def ip_is_valid():
check = False
#Global exposes outside the local function
global ip_list
while True:
#Prompting user for input
print "\n" + "# " * 20 + "\n"
ip_file = raw_input("# Enter IP file name followed by extension: ")
print "\n" + "# " * 20 + "\n"
#Changing exception message
try:
selected_ip_file = open(ip_file, 'r')
#Start from the beginning of the file
selected_ip_file.seek(0)
ip_list = selected_ip_file.readlines()
selected_ip_file.close()
except IOError:
print "\n* File %s does not exist. Please check and try again\n" % ip_file
for ip in ip_list:
a = ip.split('.')
if (len(a) == 4) and (1 <= int(a[0]) <= 223) and (int(a[0]) != 127) and (int(a[0]) != 169 or int(a[1]) != 254) and (0 <= int(a[1]) <= 255 and 0 <= int(a[2]) <= 255 and 0 <= int(a[3]) <= 255):
check = True
break
else:
print "\n* There was an invalid IP address. Please check and try again.\n"
check = False
continue
if check == False:
continue
elif check == True:
break
check2 = False
#Check IP Reachability
print "\n* Checking IP reachability. Please wait...\n"
while True:
for ip in ip_list:
ping_reply = subprocess.call(['ping', '-n', '2', '-w', '2', ip.rstrip('\n')])
if ping_reply == 0:
check2 = True
continue
elif ping_reply == 2:
print "\n* No response from device %s." % ip
check2 = False
break
else:
print "\n* Ping to the following device has failed:", ip
check2 = False
break
#Evaluating the check flag
if check2 == False:
print "* Please re-check IP address list or device.\n"
ip_is_valid()
elif check2 == True:
print "\n* All devices are reachable."
break
</code></pre>
| 0 | 2016-08-29T11:51:58Z | [
"python",
"networking"
] |
The program only works with one client id | 39,204,997 | <p>Here is a sample of the textfile I use:</p>
<pre><code>NaQua,High
ImKol,Moderate
YoTri,Moderate
RoDen,High
NaThe,Moderate
ReWes,Moderate
BrFre,High
KaDat,High
ViRil,High
TrGeo,High
</code></pre>
<p>For some reason, the program will only work with the last client id on the list. It prints the intensity level but then stops right there as if that's the end of the program.</p>
<pre><code>pie = 'pie'
while pie == 'pie':
f=open('clientIntensity.txt')
lines=f.readlines()
print ((lines[0])[:5])
print ((lines[1])[:5])
print ((lines[2])[:5])
print ((lines[3])[:5])
print ((lines[4])[:5])
print ((lines[5])[:5])
print ((lines[6])[:5])
print ((lines[7])[:5])
print ((lines[8])[:5])
print ((lines[9])[:5])
clientid = input("Please select and input the ID of the client that you would like to record exercise times for. (case sensitive)")
with open('clientIntensity.txt') as f:
for line in f:
if clientid in line:
clientintensity = line[6:]
print ("The client you have selected has an exercise intensity of",clientintensity)
if clientintensity == 'High':
print ("The exercises for this intensitity level are, Running Swimming Aerobics Football Tennis.")
while True:
try:
Running = input("Please enter the amount of time that the client spent running.(0-120 minutes)")
except ValueError:
print ("Please enter a valid time.")
continue
else:
break
while True:
try:
Swimming = input("Please enter the amount of time that the client spent swimming.(0-120 minutes)")
except ValueError:
print ("Please enter a valid time.")
continue
else:
break
while True:
try:
Aerobics = input("Please enter the amount of time that the client spent doing aerobics.(0-120 minutes)")
except ValueError:
print ("Please enter a valid time.")
continue
else:
break
while True:
try:
Football = input("Please enter the amount of time that the client spent playing football.(0-120 minutes)")
except ValueError:
print ("Please enter a valid time.")
continue
else:
break
while True:
try:
Tennis = input("Please enter the amount of time that the client spent playing tennis.(0-120 minutes)")
except ValueError:
print ("Please enter a valid time.")
continue
else:
break
totaltime = ((Running)+(Swimming)+(Aerobics)+(Football)+(Tennis))
print (clientid,"spent a total time of",totaltime,"exercising this week.")
elif clientintensity == 'Moderate':
print ("The exercises for this intensitity level are, Walking Hiking Cleaning Skateboarding Basketball")
while True:
try:
Walking = input("Please enter the amount of time that the client spent walking.(0-120 minutes)")
except ValueError:
print ("Please enter a valid time.")
continue
else:
break
while True:
try:
Hiking = input("Please enter the amount of time that the client spent hiking.(0-120 minutes)")
except ValueError:
print ("Please enter a valid time.")
continue
else:
break
while True:
try:
Cleaning = input("Please enter the amount of time that the client spent cleaning.(0-120 minutes)")
except ValueError:
print ("Please enter a valid time.")
continue
else:
break
while True:
try:
Skateboarding = input("Please enter the amount of time that the client spent skateboarding.(0-120 minutes)")
except ValueError:
print ("Please enter a valid time.")
continue
else:
break
while True:
try:
Basketball = input("Please enter the amount of time that the client spent playing basketball.(0-120 minutes)")
except ValueError:
print ("Please enter a valid time.")
continue
else:
break
totaltime = ((Walking)+(Hiking)+(Cleaning)+(Skateboarding)+(Basketball))
print (clientid,"spent a total time of",totaltime,"exercising this week.")
again = input("Would you like to input times for another client?(Y/N")
if again == 'N':
pie = 'superpie'
</code></pre>
| 0 | 2016-08-29T11:19:07Z | 39,205,503 | <p>Change to:</p>
<pre><code>elif 'Moderate' in clientintensity:
</code></pre>
<p>and: </p>
<pre><code>if 'High' in clientintensity:
</code></pre>
<p>because the value read from the file keeps its trailing newline, so an exact equality test such as <code>clientintensity == 'High'</code> never matches; the <code>in</code> membership operator checks for the substring instead.</p>
<p>Also, to advance to the next question you need to move your <code>break</code> statements like so:</p>
<pre><code>while True:
try:
        Running = input("Please enter the amount of time that the client spent running. (0-120 minutes)")
break
except ValueError:
print("Please enter a valid time.")
continue
while True:
try:
        Swimming = input("Please enter the amount of time that the client spent swimming. (0-120 minutes)")
break
except ValueError:
print("Please enter a valid time.")
continue
</code></pre>
<p>This way the next question is asked if the user inserts valid input, (no need for <code>else</code> statements within each <code>while</code> loop).</p>
| 0 | 2016-08-29T11:43:18Z | [
"python"
] |
Loop character A/B alternately | 39,205,105 | <p>How can I print A/B character alternately in python loop?</p>
<p>What I expect in result:</p>
<pre><code>oneA
twoB
threeA
fourB
...
</code></pre>
| -1 | 2016-08-29T11:24:27Z | 39,205,171 | <p>You can use <a href="https://docs.python.org/3.4/library/itertools.html#itertools.cycle" rel="nofollow"><code>itertools.cycle</code></a> to repeat through a sequence. This is typically used with <a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow"><code>zip</code></a> to iterate through a longer list, while repeating the shorter one. For example</p>
<pre><code>import itertools
for i,j in zip(['one', 'two', 'three', 'four'], itertools.cycle('AB')):
print(i+j)
</code></pre>
<p>Output</p>
<pre><code>oneA
twoB
threeA
fourB
</code></pre>
| 3 | 2016-08-29T11:27:22Z | [
"python",
"loops",
"for-loop"
] |
Loop character A/B alternately | 39,205,105 | <p>How can I print A/B character alternately in python loop?</p>
<p>What I expect in result:</p>
<pre><code>oneA
twoB
threeA
fourB
...
</code></pre>
| -1 | 2016-08-29T11:24:27Z | 39,205,204 | <p>You could also try using the modulus operator % on the index of an incremented for loop for the numbers to alternate the letters:</p>
<pre><code>list_num = ['one', 'two', 'three', 'four', 'five', 'six']
list_alpha = ['A', 'B']
list_combined = []
for i in range(0, len(list_num)):
list_combined.append(list_num[i] + (list_alpha[1] if i % 2 else list_alpha[0]))
list_combined
>>> ['oneA', 'twoB', 'threeA', 'fourB', 'fiveA', 'sixB']
</code></pre>
| 1 | 2016-08-29T11:29:01Z | [
"python",
"loops",
"for-loop"
] |
Loop character A/B alternately | 39,205,105 | <p>How can I print A/B character alternately in python loop?</p>
<p>What I expect in result:</p>
<pre><code>oneA
twoB
threeA
fourB
...
</code></pre>
| -1 | 2016-08-29T11:24:27Z | 39,205,208 | <p>try this:</p>
<pre><code>l1 = ['A','B']
l2 = ['one','two','three','four']
for i,val in enumerate(l2):
print(val + l1[i%len(l1)])
</code></pre>
| 1 | 2016-08-29T11:29:15Z | [
"python",
"loops",
"for-loop"
] |
Loop character A/B alternately | 39,205,105 | <p>How can I print A/B character alternately in python loop?</p>
<p>What I expect in result:</p>
<pre><code>oneA
twoB
threeA
fourB
...
</code></pre>
| -1 | 2016-08-29T11:24:27Z | 39,205,373 | <p>Something like:</p>
<pre><code>alternate_words = ['A', 'B']
count = 0
while count < 5:
print count+1, alternate_words[count % len(alternate_words)]
count += 1
</code></pre>
<p>Output:</p>
<p>1 A</p>
<p>2 B</p>
<p>3 A</p>
<p>4 B</p>
<p>5 A</p>
| 0 | 2016-08-29T11:36:56Z | [
"python",
"loops",
"for-loop"
] |
Loop character A/B alternately | 39,205,105 | <p>How can I print A/B character alternately in python loop?</p>
<p>What I expect in result:</p>
<pre><code>oneA
twoB
threeA
fourB
...
</code></pre>
| -1 | 2016-08-29T11:24:27Z | 39,205,737 | <p>I think this will help -></p>
<pre><code>a1 = ['A','B']
a2 = ['one','two','three','four']
for i in range(len(a2)):
print a2[i]+a1[i%2]
</code></pre>
| 0 | 2016-08-29T11:55:21Z | [
"python",
"loops",
"for-loop"
] |
Loop character A/B alternately | 39,205,105 | <p>How can I print A/B character alternately in python loop?</p>
<p>What I expect in result:</p>
<pre><code>oneA
twoB
threeA
fourB
...
</code></pre>
| -1 | 2016-08-29T11:24:27Z | 39,219,879 | <p>At the suggestion of @Graipher instead of using combined <code>zip()</code> with <code>itertools.cycle()</code>, better and simpler solution will be use <code>itertools.product()</code> which is </p>
<blockquote>
<p>Cartesian product of input iterables.</p>
<p>Roughly equivalent to nested for-loops in a generator expression. For example, product(A, B) returns the same as ((x,y) for x in A for y in B).</p>
</blockquote>
<p><a href="https://docs.python.org/2/library/itertools.html#itertools.product" rel="nofollow">https://docs.python.org/2/library/itertools.html#itertools.product</a></p>
<pre><code>import itertools

words = ['one', 'two', 'three']
for word, i in itertools.product(words, ('A', 'B')):
    print(word + i)
</code></pre>
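Note that <code>product</code> pairs every word with both letters (oneA, oneB, twoA, twoB, …). For the strictly alternating output shown in the question (oneA, twoB, threeA, fourB), <code>zip()</code> combined with <code>itertools.cycle()</code> gives exactly that pairing — a small sketch:

```python
import itertools

words = ['one', 'two', 'three', 'four']
# cycle('AB') yields A, B, A, B, ... forever; zip stops with the shorter iterable
pairs = [word + letter for word, letter in zip(words, itertools.cycle('AB'))]
print(pairs)  # ['oneA', 'twoB', 'threeA', 'fourB']
```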
| 0 | 2016-08-30T06:06:38Z | [
"python",
"loops",
"for-loop"
] |
How to initialize OpenGL with glXChooseFBConfig and ctypes module? | 39,205,116 | <p>I want to make some simple OpenGL animation on my tkinter window. I don't want to include any needless dependencies, so I'm writing everything from scratch with ctypes. So far I have this:</p>
<pre><code>#!/usr/bin/env python3
import tkinter
import ctypes
from ctypes import cdll
GLX_PIXMAP_BIT = 0x00000002
GLX_WINDOW_BIT = 0x00000001
GLX_PBUFFER_BIT = 0x00000004
GLX_RGBA_BIT = 0x00000001
GLX_COLOR_INDEX_BIT = 0x00000002
GLX_PBUFFER_CLOBBER_MASK = 0x08000000
GLX_FRONT_LEFT_BUFFER_BIT = 0x00000001
GLX_FRONT_RIGHT_BUFFER_BIT = 0x00000002
GLX_BACK_LEFT_BUFFER_BIT = 0x00000004
GLX_BACK_RIGHT_BUFFER_BIT = 0x00000008
GLX_AUX_BUFFERS_BIT = 0x00000010
GLX_DEPTH_BUFFER_BIT = 0x00000020
GLX_STENCIL_BUFFER_BIT = 0x00000040
GLX_ACCUM_BUFFER_BIT = 0x00000080
GLX_CONFIG_CAVEAT = 0x20
GLX_X_VISUAL_TYPE = 0x22
GLX_TRANSPARENT_TYPE = 0x23
GLX_TRANSPARENT_INDEX_VALUE = 0x24
GLX_TRANSPARENT_RED_VALUE = 0x25
GLX_TRANSPARENT_GREEN_VALUE = 0x26
GLX_TRANSPARENT_BLUE_VALUE = 0x27
GLX_TRANSPARENT_ALPHA_VALUE = 0x28
GLX_DONT_CARE = 0xFFFFFFFF
GLX_NONE = 0x8000
GLX_SLOW_CONFIG = 0x8001
GLX_TRUE_COLOR = 0x8002
GLX_DIRECT_COLOR = 0x8003
GLX_PSEUDO_COLOR = 0x8004
GLX_STATIC_COLOR = 0x8005
GLX_GRAY_SCALE = 0x8006
GLX_STATIC_GRAY = 0x8007
GLX_TRANSPARENT_RGB = 0x8008
GLX_TRANSPARENT_INDEX = 0x8009
GLX_VISUAL_ID = 0x800B
GLX_SCREEN = 0x800C
GLX_NON_CONFORMANT_CONFIG = 0x800D
GLX_DRAWABLE_TYPE = 0x8010
GLX_RENDER_TYPE = 0x8011
GLX_X_RENDERABLE = 0x8012
GLX_FBCONFIG_ID = 0x8013
GLX_RGBA_TYPE = 0x8014
GLX_COLOR_INDEX_TYPE = 0x8015
GLX_MAX_PBUFFER_WIDTH = 0x8016
GLX_MAX_PBUFFER_HEIGHT = 0x8017
GLX_MAX_PBUFFER_PIXELS = 0x8018
GLX_PRESERVED_CONTENTS = 0x801B
GLX_LARGEST_PBUFFER = 0x801C
GLX_WIDTH = 0x801D
GLX_HEIGHT = 0x801E
GLX_EVENT_MASK = 0x801F
GLX_DAMAGED = 0x8020
GLX_SAVED = 0x8021
GLX_WINDOW = 0x8022
GLX_PBUFFER = 0x8023
GLX_PBUFFER_HEIGHT = 0x8040
GLX_PBUFFER_WIDTH = 0x8041
GLX_ACCUM_ALPHA_SIZE = 17
GLX_ACCUM_BLUE_SIZE = 16
GLX_ACCUM_GREEN_SIZE = 15
GLX_ACCUM_RED_SIZE = 14
GLX_ALPHA_SIZE = 11
GLX_AUX_BUFFERS = 7
GLX_BAD_ATTRIBUTE = 2
GLX_BAD_CONTEXT = 5
GLX_BAD_ENUM = 7
GLX_BAD_SCREEN = 1
GLX_BAD_VALUE = 6
GLX_BAD_VISUAL = 4
GLX_BLUE_SIZE = 10
GLX_BUFFER_SIZE = 2
GLX_BufferSwapComplete = 1
GLX_DEPTH_SIZE = 12
GLX_DOUBLEBUFFER = 5
GLX_GREEN_SIZE = 9
GLX_LEVEL = 3
GLX_NO_EXTENSION = 3
GLX_PbufferClobber = 0
GLX_RED_SIZE = 8
GLX_RGBA = 4
GLX_STENCIL_SIZE = 13
GLX_STEREO = 6
GLX_USE_GL = 1
class OpenGLView(tkinter.Frame):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._x_window_id = self.winfo_id()
        self._init_gl()

    def _init_gl(self):
        self._xlib = cdll.LoadLibrary('libX11.so')
        self._gl = cdll.LoadLibrary('libGL.so')
        self._glx = cdll.LoadLibrary('libGLU.so')
        self._x_display = self._xlib.XOpenDisplay()
        elements = [
            GLX_X_RENDERABLE, 1,
            GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
            GLX_RENDER_TYPE, GLX_RGBA_BIT,
            GLX_X_VISUAL_TYPE, GLX_TRUE_COLOR,
            GLX_DOUBLEBUFFER, 1,
            GLX_RED_SIZE, 8,
            GLX_GREEN_SIZE, 8,
            GLX_BLUE_SIZE, 8,
            GLX_ALPHA_SIZE, 8,
            GLX_DEPTH_SIZE, 24,
            GLX_STENCIL_SIZE, 8,
            0
        ]
        elements = (ctypes.c_int * len(elements))(*elements)
        gl_configs = self._glx.glXChooseFBConfig(self._x_display, 0, ctypes.byref(elements), ctypes.sizeof(elements))
        context = self._glx.glXCreateNewContext(self._x_display, gl_configs[0], self._glx.GLX_RGBA_TYPE, None, True)
        self._glx.glXMakeContextCurrent(self._x_display, self._x_window_id, self._x_window_id, context)
tk = tkinter.Tk()
v = OpenGLView(tk)
v.pack(fill=tkinter.BOTH, expand=True)
tk.mainloop()
</code></pre>
<p>Why does this piece of code generate this error?</p>
<pre><code>libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Segmentation fault (core dumped)
</code></pre>
| 0 | 2016-08-29T11:24:58Z | 39,206,504 | <p>After selecting an FBConfig you also have to select a window visual that matches that FBConfig, to make sure that this FBConfig can in fact be used for the window you're trying to attach to. Also, the window must have been created with that visual/FBConfig so that you can make the OpenGL context current on it.</p>
<p>I have an <a href="https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/x11argb_opengl/x11argb_opengl.c" rel="nofollow">example of how to do this in C</a> in my <code>codesamples</code> repository over at GitHub. Feel free to use that as a template. There's also a companion example that shows how to use GLX with Xcb (the example you can find over at the Xcb homepage is broken).</p>
| 0 | 2016-08-29T12:35:33Z | [
"python",
"opengl",
"ctypes",
"glx"
] |
How to initialize OpenGL with glXChooseFBConfig and ctypes module? | 39,205,116 | <p>I want to make some simple OpenGL animation on my tkinter window. I don't want to include any needless dependencies, so I'm writing everything from scratch with ctypes. So far I have this:</p>
<pre><code>#!/usr/bin/env python3
import tkinter
import ctypes
from ctypes import cdll
GLX_PIXMAP_BIT = 0x00000002
GLX_WINDOW_BIT = 0x00000001
GLX_PBUFFER_BIT = 0x00000004
GLX_RGBA_BIT = 0x00000001
GLX_COLOR_INDEX_BIT = 0x00000002
GLX_PBUFFER_CLOBBER_MASK = 0x08000000
GLX_FRONT_LEFT_BUFFER_BIT = 0x00000001
GLX_FRONT_RIGHT_BUFFER_BIT = 0x00000002
GLX_BACK_LEFT_BUFFER_BIT = 0x00000004
GLX_BACK_RIGHT_BUFFER_BIT = 0x00000008
GLX_AUX_BUFFERS_BIT = 0x00000010
GLX_DEPTH_BUFFER_BIT = 0x00000020
GLX_STENCIL_BUFFER_BIT = 0x00000040
GLX_ACCUM_BUFFER_BIT = 0x00000080
GLX_CONFIG_CAVEAT = 0x20
GLX_X_VISUAL_TYPE = 0x22
GLX_TRANSPARENT_TYPE = 0x23
GLX_TRANSPARENT_INDEX_VALUE = 0x24
GLX_TRANSPARENT_RED_VALUE = 0x25
GLX_TRANSPARENT_GREEN_VALUE = 0x26
GLX_TRANSPARENT_BLUE_VALUE = 0x27
GLX_TRANSPARENT_ALPHA_VALUE = 0x28
GLX_DONT_CARE = 0xFFFFFFFF
GLX_NONE = 0x8000
GLX_SLOW_CONFIG = 0x8001
GLX_TRUE_COLOR = 0x8002
GLX_DIRECT_COLOR = 0x8003
GLX_PSEUDO_COLOR = 0x8004
GLX_STATIC_COLOR = 0x8005
GLX_GRAY_SCALE = 0x8006
GLX_STATIC_GRAY = 0x8007
GLX_TRANSPARENT_RGB = 0x8008
GLX_TRANSPARENT_INDEX = 0x8009
GLX_VISUAL_ID = 0x800B
GLX_SCREEN = 0x800C
GLX_NON_CONFORMANT_CONFIG = 0x800D
GLX_DRAWABLE_TYPE = 0x8010
GLX_RENDER_TYPE = 0x8011
GLX_X_RENDERABLE = 0x8012
GLX_FBCONFIG_ID = 0x8013
GLX_RGBA_TYPE = 0x8014
GLX_COLOR_INDEX_TYPE = 0x8015
GLX_MAX_PBUFFER_WIDTH = 0x8016
GLX_MAX_PBUFFER_HEIGHT = 0x8017
GLX_MAX_PBUFFER_PIXELS = 0x8018
GLX_PRESERVED_CONTENTS = 0x801B
GLX_LARGEST_PBUFFER = 0x801C
GLX_WIDTH = 0x801D
GLX_HEIGHT = 0x801E
GLX_EVENT_MASK = 0x801F
GLX_DAMAGED = 0x8020
GLX_SAVED = 0x8021
GLX_WINDOW = 0x8022
GLX_PBUFFER = 0x8023
GLX_PBUFFER_HEIGHT = 0x8040
GLX_PBUFFER_WIDTH = 0x8041
GLX_ACCUM_ALPHA_SIZE = 17
GLX_ACCUM_BLUE_SIZE = 16
GLX_ACCUM_GREEN_SIZE = 15
GLX_ACCUM_RED_SIZE = 14
GLX_ALPHA_SIZE = 11
GLX_AUX_BUFFERS = 7
GLX_BAD_ATTRIBUTE = 2
GLX_BAD_CONTEXT = 5
GLX_BAD_ENUM = 7
GLX_BAD_SCREEN = 1
GLX_BAD_VALUE = 6
GLX_BAD_VISUAL = 4
GLX_BLUE_SIZE = 10
GLX_BUFFER_SIZE = 2
GLX_BufferSwapComplete = 1
GLX_DEPTH_SIZE = 12
GLX_DOUBLEBUFFER = 5
GLX_GREEN_SIZE = 9
GLX_LEVEL = 3
GLX_NO_EXTENSION = 3
GLX_PbufferClobber = 0
GLX_RED_SIZE = 8
GLX_RGBA = 4
GLX_STENCIL_SIZE = 13
GLX_STEREO = 6
GLX_USE_GL = 1
class OpenGLView(tkinter.Frame):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._x_window_id = self.winfo_id()
        self._init_gl()

    def _init_gl(self):
        self._xlib = cdll.LoadLibrary('libX11.so')
        self._gl = cdll.LoadLibrary('libGL.so')
        self._glx = cdll.LoadLibrary('libGLU.so')
        self._x_display = self._xlib.XOpenDisplay()
        elements = [
            GLX_X_RENDERABLE, 1,
            GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
            GLX_RENDER_TYPE, GLX_RGBA_BIT,
            GLX_X_VISUAL_TYPE, GLX_TRUE_COLOR,
            GLX_DOUBLEBUFFER, 1,
            GLX_RED_SIZE, 8,
            GLX_GREEN_SIZE, 8,
            GLX_BLUE_SIZE, 8,
            GLX_ALPHA_SIZE, 8,
            GLX_DEPTH_SIZE, 24,
            GLX_STENCIL_SIZE, 8,
            0
        ]
        elements = (ctypes.c_int * len(elements))(*elements)
        gl_configs = self._glx.glXChooseFBConfig(self._x_display, 0, ctypes.byref(elements), ctypes.sizeof(elements))
        context = self._glx.glXCreateNewContext(self._x_display, gl_configs[0], self._glx.GLX_RGBA_TYPE, None, True)
        self._glx.glXMakeContextCurrent(self._x_display, self._x_window_id, self._x_window_id, context)
tk = tkinter.Tk()
v = OpenGLView(tk)
v.pack(fill=tkinter.BOTH, expand=True)
tk.mainloop()
</code></pre>
<p>Why does this piece of code generate this error?</p>
<pre><code>libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Segmentation fault (core dumped)
</code></pre>
| 0 | 2016-08-29T11:24:58Z | 39,208,012 | <p>I don't have a linux distribution at hand right now to test it, but what about this <a href="http://stackoverflow.com/help/mcve">mcve</a>?</p>
<pre><code>from ctypes import (cdll, c_bool, c_char_p, c_double, c_float, c_int,
                    c_uint, c_ulong, c_void_p, POINTER)
from tkinter import Tk, Frame, YES, BOTH
_xlib = cdll.LoadLibrary('libX11.so')
_gl = cdll.LoadLibrary('libGL.so')
# Data types for Linux Platform
XID = c_ulong
GLXDrawable = XID
# Data types for OpenGL
GLbitfield = c_uint
GLubyte = c_char_p
GLclampf = c_float
GLclampd = c_double
GLdouble = c_double
GLenum = c_uint
GLfloat = c_float
GLint = c_int
GL_BLEND = 0x0BE2
GL_COLOR_BUFFER_BIT = 0x00004000
GL_DEPTH_BUFFER_BIT = 0x00000100
GL_DEPTH_TEST = 0x0B71
GL_MODELVIEW = 0x1700
GL_ONE_MINUS_SRC_ALPHA = 0x0303
GL_PROJECTION = 0x1701
GL_QUADS = 0x0007
GL_RENDERER = 0x1F01
GL_SRC_ALPHA = 0x0302
GL_VENDOR = 0x1F00
GL_VERSION = 0x1F02
GL_TRUE = 1
# Constants for Linux Platform
PGLint = GLint * 11
GLX_RGBA = 4
GLX_RED_SIZE = 8
GLX_GREEN_SIZE = 9
GLX_BLUE_SIZE = 10
GLX_DEPTH_SIZE = 12
GLX_DOUBLEBUFFER = 5
# OpenGL Function Definitions
glBegin = _gl.glBegin
glBegin.restype = None
glBegin.argtypes = [GLenum]
glClear = _gl.glClear
glClear.restype = None
glClear.argtypes = [GLbitfield]
glBlendFunc = _gl.glBlendFunc
glBlendFunc.restype = None
glBlendFunc.argtypes = [GLenum, GLenum]
glClearColor = _gl.glClearColor
glClearColor.restype = None
glClearColor.argtypes = [GLclampf, GLclampf, GLclampf, GLclampf]
glClearDepth = _gl.glClearDepth
glClearDepth.restype = None
glClearDepth.argtypes = [GLclampd]
glColor3f = _gl.glColor3f
glColor3f.restype = None
glColor3f.argtypes = [GLfloat, GLfloat, GLfloat]
glEnable = _gl.glEnable
glEnable.restype = None
glEnable.argtypes = [GLenum]
glEnd = _gl.glEnd
glEnd.restype = None
glEnd.argtypes = None
glFlush = _gl.glFlush
glFlush.restype = None
glFlush.argtypes = None
glGetString = _gl.glGetString
glGetString.restype = GLubyte
glGetString.argtypes = [GLenum]
glLoadIdentity = _gl.glLoadIdentity
glLoadIdentity.restype = None
glLoadIdentity.argtypes = None
glMatrixMode = _gl.glMatrixMode
glMatrixMode.restype = None
glMatrixMode.argtypes = None
glOrtho = _gl.glOrtho
glOrtho.restype = None
glOrtho.argtypes = [GLdouble, GLdouble, GLdouble, GLdouble, GLdouble, GLdouble]
glRotatef = _gl.glRotatef
glRotatef.restype = None
glRotatef.argtypes = [GLfloat, GLfloat, GLfloat, GLfloat]
glVertex3f = _gl.glVertex3f
glVertex3f.restype = None
glVertex3f.argtypes = [GLfloat, GLfloat, GLfloat]
glViewport = _gl.glViewport
glViewport.restype = None
glViewport.argtypes = [GLint, GLint, GLint, GLint]
glXChooseVisual = _gl.glXChooseVisual
glXChooseVisual.argtypes = [c_void_p, c_int, POINTER(c_int)]
glXChooseVisual.restype = c_void_p
glXCreateContext = _gl.glXCreateContext
glXCreateContext.argtypes = [c_void_p, c_void_p, c_void_p, c_bool]
glXCreateContext.restype = c_void_p
glXMakeCurrent = _gl.glXMakeCurrent
glXMakeCurrent.argtypes = [c_void_p, GLXDrawable, c_void_p]
glXMakeCurrent.restype = c_bool
glXSwapBuffers = _gl.glXSwapBuffers
glXSwapBuffers.argtypes = [c_void_p, GLXDrawable]
glXSwapBuffers.restype = None
X11_None = 0
x_open_display = _xlib.XOpenDisplay
x_open_display.argtypes = [c_char_p]
x_open_display.restype = c_void_p
class TkOglWin(Frame):
    def __init__(self, parent, *args, **kwargs):
        Frame.__init__(self, parent, *args, **kwargs)
        self.parent = parent
        self.parent.title(kwargs.get('app_title', 'Opengl Test'))
        self.bind('<Configure>', self.on_resize)
        self.parent.after(100, self._cfg_tkogl)

    def _cfg_tkogl(self):
        att = PGLint(
            GLX_RGBA, GLX_DOUBLEBUFFER,
            GLX_RED_SIZE, 4,
            GLX_GREEN_SIZE, 4,
            GLX_BLUE_SIZE, 4,
            GLX_DEPTH_SIZE, 16,
            X11_None
        )
        self.dpy = x_open_display(None)
        vi = glXChooseVisual(self.dpy, 0, att)
        glc = glXCreateContext(self.dpy, vi, None, GL_TRUE)
        glXMakeCurrent(self.dpy, self.winfo_id(), glc)
        self.set_ortho_view()
        self.parent.after(10, self._render_loop)

    def on_resize(self, event, arg=None):
        raise NotImplementedError

    def _render_loop(self):
        self.render_scene()
        glXSwapBuffers(self.dpy, self.winfo_id())
        self.parent.after(5, self._render_loop)

    def render_scene(self):
        raise NotImplementedError

    def set_ortho_view(self):
        raise NotImplementedError

class AppOgl(TkOglWin):
    rot = 0

    def on_resize(self, event, arg=None):
        if event:
            w = event.width
            h = event.height
        else:
            if arg:
                w = arg['w']
                h = arg['h']
            else:
                raise Exception
        dx = w / h
        glViewport(0, 0, w, h)
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        glOrtho(-2 * dx, 2 * dx, -2, 2, -2, 2)

    def set_ortho_view(self):
        glEnable(GL_BLEND)
        glEnable(GL_DEPTH_TEST)
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
        glClearColor(0, 0, 0, 0)
        glClearDepth(1)
        glMatrixMode(GL_PROJECTION)
        self.on_resize(None, arg={
            'w': self.winfo_width(),
            'h': self.winfo_height()
        })
        print('%s - %s - %s' % (
            glGetString(GL_VENDOR),
            glGetString(GL_VERSION),
            glGetString(GL_RENDERER)
        ))

    def render_scene(self):
        self.rot += .5
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        glRotatef(self.rot, 1, 1, 0.5)
        # Draw a simple cube.
        glBegin(GL_QUADS)
        glColor3f(0, 1, 0)
        glVertex3f(1, 1, -1)
        glVertex3f(-1, 1, -1)
        glVertex3f(-1, 1, 1)
        glVertex3f(1, 1, 1)
        glColor3f(1, 0.5, 0)
        glVertex3f(1, -1, 1)
        glVertex3f(-1, -1, 1)
        glVertex3f(-1, -1, -1)
        glVertex3f(1, -1, -1)
        glColor3f(1, 0, 0)
        glVertex3f(1, 1, 1)
        glVertex3f(-1, 1, 1)
        glVertex3f(-1, -1, 1)
        glVertex3f(1, -1, 1)
        glColor3f(1, 1, 0)
        glVertex3f(1, -1, -1)
        glVertex3f(-1, -1, -1)
        glVertex3f(-1, 1, -1)
        glVertex3f(1, 1, -1)
        glColor3f(0, 0, 1)
        glVertex3f(-1, 1, 1)
        glVertex3f(-1, 1, -1)
        glVertex3f(-1, -1, -1)
        glVertex3f(-1, -1, 1)
        glColor3f(1, 0, 1)
        glVertex3f(1, 1, -1)
        glVertex3f(1, 1, 1)
        glVertex3f(1, -1, 1)
        glVertex3f(1, -1, -1)
        glEnd()
        glFlush()

if __name__ == '__main__':
    root = Tk()
    app = AppOgl(root, width=320, height=200)
    app.pack(fill=BOTH, expand=YES)
    app.mainloop()
</code></pre>
| 1 | 2016-08-29T13:52:38Z | [
"python",
"opengl",
"ctypes",
"glx"
] |
Why do processes take long time to join even when the map call to pool is completed? | 39,205,198 | <p>This is yet another question regarding the <code>multiprocessing</code> module in <code>Python 3.5</code>. My problem is that I know all the forked processes have done their job (I can see their results in the <code>Queue</code>), and AsyncResult.result() returns True, which means the jobs are completed, but when I proceed with PoolObj.join(), it takes forever. I know I can PoolObj.terminate() and carry on with my life, but I want to know why the heck this happens?</p>
<p>I'm using following code:</p>
<pre><code>def worker(d):
    queue.put(d)

def gen_data():
    for i in range(int(1e6)):
        yield i

if __name__ == "__main__":
    queue = Queue(maxsize=-1)
    pool = Pool(processes=12)
    pool_obj_worker = pool.map_async(worker, gen_data(), chunksize=1)
    pool.close()
    print ('Lets run the workers...\n')
    while True:
        if pool_obj_worker.ready():
            if pool_obj_worker.successful():
                print ('\nAll processed successfully!')  # I can see this quickly, so my jobs are done
            else:
                print ('\nAll processed. Errors encountered!')
            sys.stdout.flush()
            print (queue.qsize())  # The size is right, which means all workers have done their job
            pool.join()  # will get stuck here for a long, long time
            queue.put('*')
            break
        print ('%d still to be processed' %
               pool_obj_worker._number_left)
        sys.stdout.flush()
        time.sleep(0.5)
</code></pre>
<p>Am I doing it wrong? Please enlighten me. Or are the processes holding <code>join()</code> have gone zombie?</p>
| 1 | 2016-08-29T11:28:51Z | 39,206,081 | <p>The issue here is that you are using an extra <code>Queue</code> in your worker, other than the one fournished by <code>Pool</code>.
When the processes finish their work, they will all join the <code>FeederThread</code> used in the <code>multiprocessing.Queue</code> and these calls will hang (probably because all the threads call <code>join</code> simultaneously and there can be some weird race conditions, it is not easy to investigate).</p>
<p>Adding <code>multiprocessing.util.log_to_stderr(10)</code> reveals that your processes hang while joining the queue feeder thread.</p>
<p>To solve your issue, you can either use <code>multiprocessing.SimpleQueue</code> instead of <code>multiprocessing.Queue</code> (no hang in <code>join</code>, as there is no feeder thread) or use <code>pool.imap_unordered</code>, which provides the same kind of behavior you seem to be implementing (it returns an iterator that yields the workers' results in arbitrary order).</p>
| 2 | 2016-08-29T12:13:55Z | [
"python",
"parallel-processing",
"multiprocessing",
"zombie-process"
] |
pandas fill N.A. for specific column | 39,205,200 | <p>I want to fill N.A. values in a specific column if a condition is met in another column to only replace this single class of N.A. values with an imputed / replacement value.</p>
<p>E.g. I want to perform: <code>if column1 = 'value1' AND column2 = N.A fillna_in_column2 with value 'replacementvalue'</code></p>
<p>How do I achieve this in pandas?</p>
<p>Trying to do this via <code>dataframe[dataframe['firstColumn'] == 'value1'].fillna({'column2':'replacementValue'})</code> does not work, as the length of the overall records is modified. So far I could not get an in-place modification to work.</p>
| 1 | 2016-08-29T11:28:51Z | 39,205,258 | <p>You can try this:</p>
<pre><code>cond1 = df['column1'] == value1
cond2 = np.isnan(df['column2'])
df['column2'][cond1 & cond2] = replacement_value
</code></pre>
| 0 | 2016-08-29T11:32:04Z | [
"python",
"pandas",
"fill",
"na",
"imputation"
] |
pandas fill N.A. for specific column | 39,205,200 | <p>I want to fill N.A. values in a specific column if a condition is met in another column to only replace this single class of N.A. values with an imputed / replacement value.</p>
<p>E.g. I want to perform: <code>if column1 = 'value1' AND column2 = N.A fillna_in_column2 with value 'replacementvalue'</code></p>
<p>How do I achieve this in pandas?</p>
<p>Trying to do this via <code>dataframe[dataframe['firstColumn'] == 'value1'].fillna({'column2':'replacementValue'})</code> does not work, as the length of the overall records is modified. So far I could not get an in-place modification to work.</p>
| 1 | 2016-08-29T11:28:51Z | 39,205,388 | <pre><code>df.loc[(df['col1']==val1) & (df['col2'].isnull()), 'col2'] = replacement_value
</code></pre>
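On a small made-up frame, the one-liner above only touches the rows where both conditions hold:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': ['value1', 'value1', 'other'],
                   'col2': ['keep', np.nan, np.nan]})
# Only row 1 matches both conditions (col1 == 'value1' AND col2 is null)
df.loc[(df['col1'] == 'value1') & (df['col2'].isnull()), 'col2'] = 'replacementValue'
print(df)
```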
| 2 | 2016-08-29T11:37:40Z | [
"python",
"pandas",
"fill",
"na",
"imputation"
] |
Dominant colors with Image package in python | 39,205,332 | <p>I have a transparent image and I am trying to extract major colors out of it using <code>Image</code> module's <code>getcolor()</code> method</p>
<pre><code>y = Image.open('img.png')
y.getcolors()
[Out]: [(21841, 0),
(13328, 1),
(8171, 2),
(2673, 3),
(1337, 4),
(1010, 5),
(892, 6),
(519, 7),
(379, 8),
(234, 9)]
</code></pre>
<p>How do I get actual color values (or names) corresponding to these indexes?</p>
<p><a href="http://i.stack.imgur.com/KctAV.png" rel="nofollow"><img src="http://i.stack.imgur.com/KctAV.png" alt="enter image description here"></a></p>
| 0 | 2016-08-29T11:38:01Z | 39,207,838 | <p>I am not sure whether the following code snippet is what you are looking for. Convert the <code>Image</code> object to an RGBA object and use <code>getcolors()</code> as follows.</p>
<pre><code>from PIL import Image
im = Image.open('img.png')
rgba_im = im.convert('RGBA')
print ( rgba_im.getcolors() )
"""
<Output>
[(2673, (218, 215, 209, 255)), (379, (195, 29, 54, 255)), (21841, (208, 208, 209, 0)), (234, (206, 198, 185, 255)), (519, (201, 178, 176, 255)), (1337, (193, 188, 186, 0)), (8171, (182, 176, 174, 0)), (892, (178, 170, 165, 255)), (13328, (168, 26, 41, 255)), (1010, (107, 18, 19, 255))]
"""
</code></pre>
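The same call can be checked without a file by building a tiny RGBA image in memory (assuming Pillow is installed):

```python
from PIL import Image

im = Image.new('RGB', (2, 2), (255, 0, 0))   # a 2x2 solid red image
rgba_im = im.convert('RGBA')
# getcolors() returns (count, (R, G, B, A)) tuples
print(rgba_im.getcolors())  # [(4, (255, 0, 0, 255))]
```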
| 0 | 2016-08-29T13:44:42Z | [
"python",
"image",
"pillow"
] |
How to use xpath in selenium python? | 39,205,371 | <p>I am new to Selenium Python. I am trying to run the following snippet. It works with find_element_by_name but not find_element_by_xpath.</p>
<p>Any idea what I am doing wrong?</p>
<p>Thanks </p>
<pre><code>from selenium import webdriver
# create a new Firefox session
driver = webdriver.Firefox()
driver.implicitly_wait(10)
driver.maximize_window()
# navigate to the application home page
driver.get("https://www.google.co.uk/")
# get the search textbox
#search_field = driver.find_element_by_xpath("//@name='q'/") <<<<< NOT Working
search_field = driver.find_element_by_name("q")
search_field.clear()
# enter search keyword and submit
search_field.send_keys("phones")
search_field.submit()
</code></pre>
| -1 | 2016-08-29T11:36:52Z | 39,205,414 | <p>Try this
search_field = driver.find_element_by_xpath("//input[@name='q']")</p>
| 1 | 2016-08-29T11:38:48Z | [
"python",
"selenium",
"xpath"
] |
sklearn.cluster.DBSCAN gives unexpected result | 39,205,392 | <p>I'm using DBSCAN method for clustering images, but it gives unexpected result. Let's assume I have 10 images. </p>
<p>Firstly, I read an images in a loop using <code>cv2.imread</code>.
Then I compute structural similarity index between each images. After that, I have a matrix like this:</p>
<pre><code>[
[ 1. -0.00893619 0. 0. 0. 0.50148778 0.47921832 0. 0. 0. ]
[-0.00893619 1. 0. 0. 0. 0.00996088 -0.01873205 0. 0. 0. ]
[ 0. 0. 1. 0.57884212 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0.57884212 1. 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
[ 0.50148778 0.00996088 0. 0. 0. 1. 0.63224396 0. 0. 0. ]
[ 0.47921832 -0.01873205 0. 0. 0. 0.63224396 1. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0. 0. 1. 0.77507487 0.69697053]
[ 0. 0. 0. 0. 0. 0. 0. 0.77507487 1. 0.74861881]
[ 0. 0. 0. 0. 0. 0. 0. 0.69697053 0.74861881 1. ]]
</code></pre>
<p>Looks good. Then I have a simple invocation of DBSCAN:</p>
<pre><code>db = DBSCAN(eps=0.4, min_samples=3, metric='precomputed').fit(distances)
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
</code></pre>
<p>And the result is</p>
<pre><code>[0 0 0 0 0 0 0 0 0 0]
</code></pre>
<p>What am I doing wrong? Why does it put all images into one cluster?</p>
| 0 | 2016-08-29T11:38:01Z | 39,208,214 | <p>The problem was that I had calculated the distance matrix incorrectly: I was passing a similarity matrix (ones on the main diagonal), whereas the entries on the main diagonal of a distance matrix must all be zero.</p>
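In other words, DBSCAN's precomputed metric expects dissimilarities, so a similarity matrix (ones on the diagonal) has to be converted first — a hedged sketch assuming similarities in [0, 1]:

```python
import numpy as np

similarity = np.array([[1.0, 0.8, 0.1],
                       [0.8, 1.0, 0.0],
                       [0.1, 0.0, 1.0]])

# A distance matrix must be zero on the main diagonal;
# for similarities in [0, 1], 1 - s is a simple dissimilarity.
distances = 1.0 - similarity
print(np.diag(distances))  # [0. 0. 0.]
```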
| 0 | 2016-08-29T14:02:26Z | [
"python",
"scikit-learn",
"cluster-analysis",
"dbscan"
] |
sklearn.cluster.DBSCAN gives unexpected result | 39,205,392 | <p>I'm using DBSCAN method for clustering images, but it gives unexpected result. Let's assume I have 10 images. </p>
<p>Firstly, I read an images in a loop using <code>cv2.imread</code>.
Then I compute structural similarity index between each images. After that, I have a matrix like this:</p>
<pre><code>[
[ 1. -0.00893619 0. 0. 0. 0.50148778 0.47921832 0. 0. 0. ]
[-0.00893619 1. 0. 0. 0. 0.00996088 -0.01873205 0. 0. 0. ]
[ 0. 0. 1. 0.57884212 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0.57884212 1. 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
[ 0.50148778 0.00996088 0. 0. 0. 1. 0.63224396 0. 0. 0. ]
[ 0.47921832 -0.01873205 0. 0. 0. 0.63224396 1. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0. 0. 1. 0.77507487 0.69697053]
[ 0. 0. 0. 0. 0. 0. 0. 0.77507487 1. 0.74861881]
[ 0. 0. 0. 0. 0. 0. 0. 0.69697053 0.74861881 1. ]]
</code></pre>
<p>Looks good. Then I have a simple invocation of DBSCAN:</p>
<pre><code>db = DBSCAN(eps=0.4, min_samples=3, metric='precomputed').fit(distances)
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
</code></pre>
<p>And the result is</p>
<pre><code>[0 0 0 0 0 0 0 0 0 0]
</code></pre>
<p>What am I doing wrong? Why does it put all images into one cluster?</p>
| 0 | 2016-08-29T11:38:01Z | 39,209,669 | <p>DBSCAN usually assumes a <em>dissimilarity</em> (distance), not a similarity. It can be implemented with a similarity threshold, too (see Generalized DBSCAN).</p>
| 0 | 2016-08-29T15:16:01Z | [
"python",
"scikit-learn",
"cluster-analysis",
"dbscan"
] |
Python after running script for a long time memory allocation error | 39,205,528 | <p>I have this code which scrapes usernames:</p>
<pre><code>def fetch_and_parse_names(url):
    html = requests.get(url).text
    soup = BeautifulSoup(html, "lxml")
    return (a.string for a in soup.findAll(href=USERNAME_PATTERN))

def get_names(urls):
    # Create a concurrent executor
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        # Apply the fetch-and-parse function concurrently with executor.map,
        # and join the results together
        return itertools.chain.from_iterable(executor.map(fetch_and_parse_names, urls))

def get_url(region, page):
    return 'http://lolprofile.net/leaderboards/%s/%d' % (region, page)
</code></pre>
<p>When it starts putting all the names in a list like this</p>
<pre><code>urls = [get_url(region, i) for i in range(start, end + 1)]
names = (name.lower() for name in get_names(urls) if is_valid_name(name))
</code></pre>
<p>After an hour of running I get memory allocation errors; obviously I know why this happens, but how can I fix it? I was thinking of just getting the usernames from a single page, outputting them to a file immediately, deleting the contents of the list, and repeating, but I didn't know how to implement this.</p>
| 1 | 2016-08-29T11:44:35Z | 39,205,880 | <p>You can use the <a href="https://docs.python.org/2/library/resource.html" rel="nofollow">Python resource library</a> to increase your process's allocated memory limit; since threads of a process use the memory of their parent process, they cannot allocate extra memory on their own.</p>
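If you go this route, a minimal POSIX-only sketch using the <code>resource</code> module (whether raising the limit actually helps depends on why the allocation fails):

```python
import resource

# Current soft/hard limits on this process's address space;
# -1 means RLIM_INFINITY (no limit)
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print(soft, hard)

# The soft limit may be raised up to (but not beyond) the hard limit
resource.setrlimit(resource.RLIMIT_AS, (hard, hard))
```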
| 1 | 2016-08-29T12:02:36Z | [
"python"
] |
Python after running script for a long time memory allocation error | 39,205,528 | <p>I have this code which scrapes usernames:</p>
<pre><code>def fetch_and_parse_names(url):
    html = requests.get(url).text
    soup = BeautifulSoup(html, "lxml")
    return (a.string for a in soup.findAll(href=USERNAME_PATTERN))

def get_names(urls):
    # Create a concurrent executor
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        # Apply the fetch-and-parse function concurrently with executor.map,
        # and join the results together
        return itertools.chain.from_iterable(executor.map(fetch_and_parse_names, urls))

def get_url(region, page):
    return 'http://lolprofile.net/leaderboards/%s/%d' % (region, page)
</code></pre>
<p>When it starts putting all the names in a list like this</p>
<pre><code>urls = [get_url(region, i) for i in range(start, end + 1)]
names = (name.lower() for name in get_names(urls) if is_valid_name(name))
</code></pre>
<p>After an hour of running I get memory allocation errors; obviously I know why this happens, but how can I fix it? I was thinking of just getting the usernames from a single page, outputting them to a file immediately, deleting the contents of the list, and repeating, but I didn't know how to implement this.</p>
| 1 | 2016-08-29T11:44:35Z | 39,206,034 | <p>The code you use keeps all the downloaded documents in memory for two reasons:</p>
<ul>
<li>you return <code>a.string</code>, which is not just a <code>str</code> but a <code>bs4.element.NavigableString</code> and as such keeps a reference to its parent and ultimately to the whole document tree.</li>
<li>you return a generator expression, which will capture the local context (in this case the <code>soup</code>) until it is used.</li>
</ul>
<p>One way to fix this would be to use:</p>
<pre><code>return [str(a.string) for a in soup.findAll(href=USERNAME_PATTERN)]
</code></pre>
<p>This way no references to the soup objects are kept, and the expression is executed immediately and a list of <code>str</code>s returned.</p>
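The first point is easy to see directly, assuming BeautifulSoup is installed (the markup here is made up): a NavigableString still knows its parent tag, while a plain <code>str</code> does not:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<a href="/u/alice">alice</a>', 'html.parser')
nav = soup.a.string
# nav is a NavigableString that keeps a reference back into the tree
print(type(nav).__name__, nav.parent.name)

plain = str(nav)
# plain is an ordinary str with no reference to the document tree
print(type(plain).__name__)
```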
| 2 | 2016-08-29T12:11:17Z | [
"python"
] |
getElementById() takes exactly 1 argument (2 given) | 39,205,598 | <p>I want to do an automation of the Internet Explorer. Open the Internet Explorer, navigate to login.live.com and set a value into the email textbox.</p>
<p>Here's the simple script:</p>
<pre><code>import win32com.client
import time
IE = win32com.client.DispatchEx("InternetExplorer.Application")
IE.Visible = 1
IE.Navigate('login.live.com')
time.sleep(5)
DOC = IE.document
DOC.getElementById('i0116').value = 'test'
</code></pre>
<p>The last line always returns the following TypeError: </p>
<blockquote>
<p>getElementById() takes exactly 1 argument (2 given)</p>
</blockquote>
<p>When I try to add the value through the console of the Internet Explorer it works.</p>
<p>Btw. the getElementsByTagName() method works without any Errors.</p>
<p>Thanks for any help!</p>
| 0 | 2016-08-29T11:48:36Z | 39,205,691 | <p>As <a href="http://stackoverflow.com/a/33024922/1752959">this</a> answer suggests you have to use</p>
<pre><code>DOC.Body.getElementById('i0116').value = 'test'
</code></pre>
| 0 | 2016-08-29T11:52:52Z | [
"python",
"win32com"
] |
getElementById() takes exactly 1 argument (2 given) | 39,205,598 | <p>I want to do an automation of the Internet Explorer. Open the Internet Explorer, navigate to login.live.com and set a value into the email textbox.</p>
<p>Here's the simple script:</p>
<pre><code>import win32com.client
import time
IE = win32com.client.DispatchEx("InternetExplorer.Application")
IE.Visible = 1
IE.Navigate('login.live.com')
time.sleep(5)
DOC = IE.document
DOC.getElementById('i0116').value = 'test'
</code></pre>
<p>The last line always returns the following TypeError: </p>
<blockquote>
<p>getElementById() takes exactly 1 argument (2 given)</p>
</blockquote>
<p>When I try to add the value through the console of the Internet Explorer it works.</p>
<p>Btw. the getElementsByTagName() method works without any Errors.</p>
<p>Thanks for any help!</p>
| 0 | 2016-08-29T11:48:36Z | 39,223,681 | <p>Okay.. I wrote a workaround for this:</p>
<pre><code>DOC = IE.Document
inputs = DOC.documentElement.getElementsByTagName('input')
for field in inputs:
    if field.id == 'i0116':
        email = field
        break

email.value = 'example@test.com'
</code></pre>
<p>For browser automation I recommend to use the <a href="http://www.seleniumhq.org/" rel="nofollow">Selenium</a> library.</p>
| 0 | 2016-08-30T09:27:35Z | [
"python",
"win32com"
] |
Python configparser reading section and creating new config | 39,205,668 | <p>I'm currently reading a config file, getting all values from a specific section, and printing them. However, instead of printing them I would like to create a new config file with just the section I am reading.</p>
<p>How would I go about this?</p>
<p>code</p>
<pre><code>configFilePath = 'C:\\testing.ini'
config = ConfigParser.ConfigParser()
config.optionxform = str
config.read(configFilePath)
section = 'testing1'
configdata = {k:v for k,v in config.items(section)}
for x in configdata.items():
    print x[0] + '=' + x[1]
</code></pre>
<p>config ini</p>
<pre><code>[testing1]
Español=spain
UK=unitedkingdom
something=somethingelse
[dontneed]
dontneedthis=blahblah
dontneedthis1=blahblah1
</code></pre>
<p>Also while I'm here, I'm not sure how I would get this to work with encoded strings like "ñ" as it errors, however I need my new config file exactly how I'm reading it.</p>
| 0 | 2016-08-29T11:52:02Z | 39,206,866 | <p>I got it working with </p>
<pre><code>for x in configdata.items():
    confignew.set(section, x[0], x[1])

confignew.write(EqualsSpaceRemover(cfgfile))
</code></pre>
<p>However, how would I edit my code so it can read text with characters like "ñ" and parse/write them without getting errors about decode problems?</p>
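For reference, here is a self-contained Python 3 sketch of the whole round trip — copying one section into a fresh parser and writing it out (the EqualsSpaceRemover wrapper from above is omitted). Python 3's configparser works on unicode str natively, so characters like "ñ" pass through without decode errors:

```python
import configparser
import io

src = configparser.ConfigParser()
src.optionxform = str          # preserve the case of option names
src.read_string("""
[testing1]
Español=spain
UK=unitedkingdom
something=somethingelse

[dontneed]
dontneedthis=blahblah
""")

new = configparser.ConfigParser()
new.optionxform = str
section = 'testing1'
new.add_section(section)
for key, value in src.items(section):
    new.set(section, key, value)

out = io.StringIO()            # stand-in for the real output file
new.write(out)
print(out.getvalue())
```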
| 0 | 2016-08-29T12:54:40Z | [
"python",
"parsing",
"configparser",
"python-config"
] |
pyhs2 set queue in query | 39,205,710 | <p>I am using pyhs2 to query hive through python but I cannot set the queue inside the query.</p>
<p>I want to set the queue to adhoc</p>
<pre><code>cursor.execute("set mapred.job.queue.name=adhoc;")
cursor.execute("select * from test")
pyhs2.error.Pyhs2Exception: 'Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask'
</code></pre>
<p>and when I try to put the queue inside the query:</p>
<pre><code>cursor.execute("set mapred.job.queue.name=adhoc; select * from test")
</code></pre>
<p>The second part of the query doesn't get executed</p>
| 0 | 2016-08-29T11:53:56Z | 40,108,757 | <p>Apparently it is not possible to run multiple statements with <code>pyhs2</code> therefore you are unable to set the mapred queue name. Pyhs2 is no longer maintained and the readme advises to use <a href="https://github.com/dropbox/PyHive" rel="nofollow">PyHive</a> which supports these kind of operations.</p>
| 0 | 2016-10-18T12:49:17Z | [
"python",
"hive",
"queue"
] |
Error while executing with subprocess | 39,205,793 | <p>I'm trying to execute a shell command via Python code, but I can't understand why it is failing.</p>
<p>When printing the command and pasting it to the shell to try executing it directly works perfectly fine, that's the strange part.</p>
<p>From Python I'm getting the following:</p>
<pre><code>/bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `/DATA/NGS/ngs_software/bioinfoSoftware/bwa_current/bwa mem ... --threads 4 -T /tmp/samTemp -'
</code></pre>
<p>Is there anything I'm missing? My code looks like this, where 'cmd' is the string with the command. The OS is a CentOS with a bash shell:</p>
<pre><code>process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
out = process.stdout.readline()
out = out.decode("utf-8").strip('\n')
</code></pre>
| 0 | 2016-08-29T11:58:23Z | 39,206,132 | <p>Your command contains a process substitution, but <code>Popen</code> runs its command using <code>/bin/sh</code>. When run as <code>/bin/sh</code>, though, <code>bash</code> does not allow process substitutions. You can explicitly request that the command be run with <code>bash</code> using the <code>executable</code> option.</p>
<pre><code>process = subprocess.Popen(cmd, shell=True, executable='/bin/bash', stdout=subprocess.PIPE)
</code></pre>
| 0 | 2016-08-29T12:16:49Z | [
"python",
"bash",
"shell"
] |
How is the string '%' formatting operator implemented? | 39,205,815 | <p>How is the string '%' formatting operator implemented in CPython 2.7?</p>
<p>Can't find any reference in the <a href="https://docs.python.org/2/index.html" rel="nofollow">Python documentation</a>.</p>
<p>Well, in fact I found the topic: <a href="https://docs.python.org/2/c-api/conversion.html" rel="nofollow">String conversion and formatting</a>, for <code>PyOS_snprintf</code> and <code>PyOS_vsnprintf</code>. But I'm not sure it matches my question.</p>
<p>See also my question <a href="http://stackoverflow.com/questions/39203016/optional-keys-in-string-formats-using-operator">Optional keys in string formats using '%' operator?</a></p>
| 0 | 2016-08-29T11:59:27Z | 39,206,131 | <p>The implementation is in the <a href="https://github.com/python/cpython/blob/2.7/Objects/stringobject.c#L4229" rel="nofollow"><code>PyString_Format</code></a> that is called by the <a href="https://github.com/python/cpython/blob/2.7/Objects/stringobject.c#L3743" rel="nofollow"><code>string_mod</code> function</a> in <code>Objects/stringobject.c</code>. The latter in turn is a slot method stored in <a href="https://github.com/python/cpython/blob/2.7/Objects/stringobject.c#L3761" rel="nofollow"><code>PyString_Type->tp_as_number->nb_remainder</code></a>. </p>
<p>The functions <code>PyOS_*snprintf</code> are not really related to the implementation of <code>str.__mod__</code>.</p>
| 2 | 2016-08-29T12:16:47Z | [
"python",
"python-2.7",
"string-formatting"
] |
Model create using dictionary in django | 39,205,874 | <p>In Django, is it possible to create an object from a dictionary, similar to filtering?</p>
<p>Here is my models.py:</p>
<pre><code>class Personnels(models.Model):
name = models.CharField(max_length=100)
</code></pre>
<p>Here is my views.py:</p>
<pre><code>from . models import *
def artists(request):
personnels = Personnels.objects.all()
if request.method == "POST":
data = {"name": "Dean Armada"}
Personnels.objects.create(data)
</code></pre>
<p>However, the code above will throw an error. What I am really trying to do is create an object from "request.POST", but my view above serves as a simple example of doing it.</p>
| 0 | 2016-08-29T12:02:22Z | 39,206,056 | <p>Simply unpack the dictionary within the <code>create</code> function like:</p>
<pre><code>Personnels.objects.create(**data)
</code></pre>
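<p>The <code>**</code> syntax unpacks a dictionary into keyword arguments, so <code>create(**data)</code> is equivalent to <code>create(name="Dean Armada")</code>. A sketch with a plain function standing in for <code>Personnels.objects.create</code>, so no database is needed:</p>

```python
# Stand-in for Personnels.objects.create; the Django call receives the
# same keyword arguments when the dict is unpacked with **.
def create(name=None):
    return {'created': name}

data = {"name": "Dean Armada"}
obj = create(**data)          # same as create(name="Dean Armada")
print(obj)                    # {'created': 'Dean Armada'}
```

<p>With a real <code>request.POST</code> (a <code>QueryDict</code>), you would typically call <code>request.POST.dict()</code> first to get a plain dictionary before unpacking.</p>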
| 1 | 2016-08-29T12:12:39Z | [
"python",
"django"
] |
Curl command from Python Subprocess to store output into variable | 39,205,906 | <p>I am using the below python code to read data from a url. The curl command from unix works. But when i try to store the returned json in a python variable, it is always blank.</p>
<p>Any pointers ? I do see the output on the Spyder Console, but never in the variable.</p>
<pre><code> p =sp.Popen(["curl","-i","-X", "POST" ,"-H", "Content-Type:application/json" ,"-H", "Authorization:Basic NEg0VU9QR1BZODAWVI4N1dLUFpXRzp4SVpxUUkzbUFuVG9RUlJDcXBLWkdB","-d", '{ "grant_type": "client_credentials" }', "https://rridata.wikimapia.com/v1.0/oauth/token/"], stdout = sp.PIPE, shell=False)
#p =sp.check_output(['curl','-i','-X', 'POST' ,'-H', 'Content-Type:application/json' ,'-H', 'Authorization:Basic NEg0VU9QR1BZODATEpDc2oyNGRGa0c5SVpxUUkzbUFuVG9RUlJDcXBLWkdB','-d', '{ "grant_type": "client_credentials" }', 'https://rdata.wikimapia.com/v1.0/oauth/token/'])
out,err = p.communicate()
print out
</code></pre>
<p>EDIT: My environment details. I am on Windows 7 , executing the command from Anaconda Spyder IDE.</p>
| 1 | 2016-08-29T12:04:14Z | 39,206,111 | <p>You can use the subprocess PIPE to capture the stdout and stderr, like so:</p>
<pre><code>>>> import subprocess
>>> p = subprocess.Popen(["curl", "https://google.co.uk"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> print p.stdout.read() # <-- you can assign a variable to the content of stdout
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.co.uk/">here</A>.
</BODY></HTML>
</code></pre>
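<p>A note on reading the pipes: when both stdout and stderr are captured, <code>communicate()</code> (which the question already uses) is safer than <code>p.stdout.read()</code>, because it drains both pipes and avoids a deadlock if the child fills one of them. A self-contained sketch using a Python child process in place of curl:</p>

```python
import subprocess
import sys

# communicate() reads both pipes to completion and waits for the child
# to exit, so neither pipe can fill up and block the process.
p = subprocess.Popen(
    [sys.executable, '-c', 'print("hello")'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = p.communicate()
print(out.decode().strip())   # hello
```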
| 0 | 2016-08-29T12:15:55Z | [
"python",
"api",
"curl",
"pipe",
"subprocess"
] |
Curl command from Python Subprocess to store output into variable | 39,205,906 | <p>I am using the below python code to read data from a url. The curl command from unix works. But when i try to store the returned json in a python variable, it is always blank.</p>
<p>Any pointers ? I do see the output on the Spyder Console, but never in the variable.</p>
<pre><code> p =sp.Popen(["curl","-i","-X", "POST" ,"-H", "Content-Type:application/json" ,"-H", "Authorization:Basic NEg0VU9QR1BZODAWVI4N1dLUFpXRzp4SVpxUUkzbUFuVG9RUlJDcXBLWkdB","-d", '{ "grant_type": "client_credentials" }', "https://rridata.wikimapia.com/v1.0/oauth/token/"], stdout = sp.PIPE, shell=False)
#p =sp.check_output(['curl','-i','-X', 'POST' ,'-H', 'Content-Type:application/json' ,'-H', 'Authorization:Basic NEg0VU9QR1BZODATEpDc2oyNGRGa0c5SVpxUUkzbUFuVG9RUlJDcXBLWkdB','-d', '{ "grant_type": "client_credentials" }', 'https://rdata.wikimapia.com/v1.0/oauth/token/'])
out,err = p.communicate()
print out
</code></pre>
<p>EDIT: My environment details. I am on Windows 7 , executing the command from Anaconda Spyder IDE.</p>
| 1 | 2016-08-29T12:04:14Z | 39,206,255 | <p>My Bad :(</p>
<p>Actually, I am on a secure connection behind firewall.
Hence, i have to set the proxy before i make the call.</p>
<pre><code> os.environ['https_proxy']="https://iss-uk.corporate.pb.com:80"
p =sp.Popen(["curl","-i","-X", "POST" ,"-H", "Content-Type:application/json" ,"-H", "Authorization:Basic NEg0VU9QR1BZODAWVI4N1dLUFpXRzp4WG1HczbUFuVG9RUlJDcXBLWkdB","-d", '{ "grant_type": "client_credentials" }', "https://rridata.wikimapia.com/v1.0/oauth/token/"], stdout = sp.PIPE, shell=False)
#p =sp.check_output(['curl','-i','-X', 'POST' ,'-H', 'Content-Type:application/json' ,'-H', 'Authorization:Basic NEg0VU9QR1BZODATEpDc2oyNGRGa0c5SVpxUUkzbUFuVG9RUlJDcXBLWkdB','-d', '{ "grant_type": "client_credentials" }', 'https://rdata.wikimapia.com/v1.0/oauth/token/'])
out,err = p.communicate()
print ("out:",out,"err:",err)
</code></pre>
<p>That's why I should take a break. :-/</p>
| 0 | 2016-08-29T12:22:20Z | [
"python",
"api",
"curl",
"pipe",
"subprocess"
] |
Google app engine HTTP error 500 | 39,205,979 | <p>I just installed Google app engine for Python 2.7 and I wrote some code for test. Just a simple HTML form. Here is the code:</p>
<pre><code>import webapp2
form = """
<form method="post">
What is your birthday?
<br>
<label> Month
<input type="text" name="month">
</label>
<label> Day
<input type="text" name="day">
</label>
<label> Year
<input type="text" name="year">
</label>
<br>
<br>
<input type="submit">
</form>
"""
class MainPage(webapp2.RequestHandler):
def get(self):
self.response.out.write(form)
def post(self):
self.response.out.write("Succes!")
app = webapp2.WSGIApplication([
('/', MainPage),
], debug=True)
</code></pre>
<p>And then I tried to write a separate procedure that writes out my form, like so:</p>
<pre><code>import webapp2
form = """
<form method="post">
What is your birthday?
<br>
<label> Month
<input type="text" name="month">
</label>
<label> Day
<input type="text" name="day">
</label>
<label> Year
<input type="text" name="year">
</label>
<br>
<br>
<input type="submit">
</form>
"""
class MainPage(webapp2.RequestHandler):
def write_form(self):
self.response.out.write(form)
def get(self):
self.write_form()
def post(self):
self.response.out.write("Succes!")
app = webapp2.WSGIApplication([
('/', MainPage),
], debug=True)
</code></pre>
<p>Well, the thing is that the first code is working fine, but the second one is returning the HTTP error 500. I tried this out from a course on Udacity and I simply copied the code from there. I really don't know why it's not working.</p>
<p>PS. I see this message in terminal (Linux): "IndentationError: unindent does not match any outer indentation level
INFO 2016-08-29 12:17:37,155 module.py:788] default: "GET / HTTP/1.1" 500 -"</p>
<p>Later Edit: I solved this by simply writing the "write_form" procedure after the "get" procedure inside the MainPage class.</p>
| 0 | 2016-08-29T12:07:51Z | 39,206,510 | <p>You have probably mixed up tabs and spaces. See this answer for more information and hints on how to fix it: <a href="http://stackoverflow.com/a/3920674/3771575">http://stackoverflow.com/a/3920674/3771575</a></p>
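<p>You can reproduce this failure mode outside App Engine: Python 3 raises <code>TabError</code> (a subclass of <code>IndentationError</code>) when tabs and spaces are mixed inconsistently, and Python 2 reports similar errors (or warns under the <code>-tt</code> flag). A quick check:</p>

```python
# One method body line is indented with a tab, the next with spaces;
# compiling this source triggers the indentation error at tokenize time.
source = "def get(self):\n\tx = 1\n        y = 2\n"

try:
    compile(source, '<string>', 'exec')
    caught = None
except IndentationError as exc:   # TabError is a subclass
    caught = exc
    print('caught:', exc)
```

<p>Configuring your editor to insert spaces for tabs avoids the problem in the first place.</p>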
| 1 | 2016-08-29T12:35:43Z | [
"python",
"google-app-engine"
] |
How to refresh multiple printed lines inplace using Python? | 39,206,020 | <p>I would like to understand how to reprint multiple lines in Python 3.5.</p>
<p>This is an example of a script where I would like to refresh the printed statement in place.</p>
<pre><code>import random
import time
a = 0
while True:
statement = """
Line {}
Line {}
Line {}
Value = {}
""".format(random.random(), random.random(), random.random(), a)
print(statement, end='\r')
time.sleep(1)
a += 1
</code></pre>
<p>What I am trying to do is have:</p>
<pre><code>Line 1
Line 2
Line 3
Value = 1
</code></pre>
<p>Write on top of / update / refresh:</p>
<pre><code>Line 1
Line 2
Line 3
Value = 0
</code></pre>
<p>The values of each line will change each time. This is effectively giving me a status update of each Line.</p>
<p>I saw <a href="https://stackoverflow.com/questions/6840420/python-rewrite-multiple-lines-in-the-console?noredirect=1&lq=1">another question from 5 years ago</a> however with the addition of the <code>end</code> argument in Python 3+ print function, I am hoping that there is a much simpler solution.</p>
| 4 | 2016-08-29T12:10:20Z | 39,206,615 | <p>If you want to clear the screen each time you call <code>print()</code>, so that it <em>appears</em> the print is overwritten each time, you can use <code>clear</code> in unix or <code>cls</code> in windows, for example:</p>
<pre><code>import subprocess
a = 0
while True:
print(a)
a += 1
subprocess.call("clear")
</code></pre>
| 1 | 2016-08-29T12:41:04Z | [
"python",
"python-3.x",
"printing",
"stdout"
] |
How to refresh multiple printed lines inplace using Python? | 39,206,020 | <p>I would like to understand how to reprint multiple lines in Python 3.5.</p>
<p>This is an example of a script where I would like to refresh the printed statement in place.</p>
<pre><code>import random
import time
a = 0
while True:
statement = """
Line {}
Line {}
Line {}
Value = {}
""".format(random.random(), random.random(), random.random(), a)
print(statement, end='\r')
time.sleep(1)
a += 1
</code></pre>
<p>What I am trying to do is have:</p>
<pre><code>Line 1
Line 2
Line 3
Value = 1
</code></pre>
<p>Write on top of / update / refresh:</p>
<pre><code>Line 1
Line 2
Line 3
Value = 0
</code></pre>
<p>The values of each line will change each time. This is effectively giving me a status update of each Line.</p>
<p>I saw <a href="https://stackoverflow.com/questions/6840420/python-rewrite-multiple-lines-in-the-console?noredirect=1&lq=1">another question from 5 years ago</a> however with the addition of the <code>end</code> argument in Python 3+ print function, I am hoping that there is a much simpler solution.</p>
| 4 | 2016-08-29T12:10:20Z | 39,206,642 | <p>If I've understood correctly you're looking for this type of solution:</p>
<pre><code>import random
import time
import os
def clear_screen():
os.system('cls' if os.name == 'nt' else 'clear')
a = 0
while True:
clear_screen()
statement = """
Line {}
Line {}
Line {}
Value = {}
""".format(random.random(), random.random(), random.random(), a)
print(statement, end='\r')
time.sleep(1)
a += 1
</code></pre>
<p>This solution won't work with some software like IDLE, Sublime Text, Eclipse... The problem with running it within this type of software is that clear/cls use ANSI escape sequences to clear the screen. These commands write a string such as "\033[2J" to the output buffer. The native command prompt is able to interpret this as a command to clear the screen, but these pseudo-terminals don't know how to interpret it, so they just end up printing a small square as if printing an unknown character.</p>
<p>If you're using this type of software, one workaround could be doing <code>print('\n' * 100)</code>; it won't be the optimal solution, but it's better than nothing.</p>
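<p>In a real ANSI-capable terminal you don't have to clear the whole screen at all: the escape <code>\033[nF</code> moves the cursor to the beginning of the line n lines up, so a block of lines can be rewritten in place. A sketch (this, too, only works where ANSI escapes are interpreted):</p>

```python
import sys
import time

def reprint(lines, first=False):
    # Move the cursor back up over the previously printed block, then
    # rewrite it; \033[K clears to end of line so shorter text leaves
    # no residue from the previous frame.
    if not first:
        sys.stdout.write('\033[%dF' % len(lines))
    for line in lines:
        sys.stdout.write('\033[K' + line + '\n')
    sys.stdout.flush()

for a in range(3):
    reprint(['Line 1', 'Line 2', 'Value = %d' % a], first=(a == 0))
    time.sleep(0.1)
```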
| 2 | 2016-08-29T12:42:30Z | [
"python",
"python-3.x",
"printing",
"stdout"
] |
How to refresh multiple printed lines inplace using Python? | 39,206,020 | <p>I would like to understand how to reprint multiple lines in Python 3.5.</p>
<p>This is an example of a script where I would like to refresh the printed statement in place.</p>
<pre><code>import random
import time
a = 0
while True:
statement = """
Line {}
Line {}
Line {}
Value = {}
""".format(random.random(), random.random(), random.random(), a)
print(statement, end='\r')
time.sleep(1)
a += 1
</code></pre>
<p>What I am trying to do is have:</p>
<pre><code>Line 1
Line 2
Line 3
Value = 1
</code></pre>
<p>Write on top of / update / refresh:</p>
<pre><code>Line 1
Line 2
Line 3
Value = 0
</code></pre>
<p>The values of each line will change each time. This is effectively giving me a status update of each Line.</p>
<p>I saw <a href="https://stackoverflow.com/questions/6840420/python-rewrite-multiple-lines-in-the-console?noredirect=1&lq=1">another question from 5 years ago</a> however with the addition of the <code>end</code> argument in Python 3+ print function, I am hoping that there is a much simpler solution.</p>
| 4 | 2016-08-29T12:10:20Z | 39,333,155 | <p>You could use <a href="https://docs.python.org/3/library/curses.html?highlight=ncurses" rel="nofollow">curses</a> for this.</p>
<pre><code>#!/usr/bin/python3
import curses
from time import sleep
from random import random
statement = """
Line {}
Line {}
Line {}
Value = {}"""
screen = curses.initscr()
n = 0
while n < 20:
screen.clear()
screen.addstr(0, 0, statement.format(random(), random(), random(), n))
screen.refresh()
n += 1
sleep(0.5)
curses.endwin()
</code></pre>
| 0 | 2016-09-05T14:52:40Z | [
"python",
"python-3.x",
"printing",
"stdout"
] |
Make a new column of the first n letters of a column, where n is the value in another column | 39,206,151 | <p>I have a dataframe with names and ages:</p>
<pre><code>name: age:
john 2
sean 3
jack 1
peter 4
</code></pre>
<p>Depending on their age <code>n</code> I want to print the first <code>n</code> letters of their name, so for instance <code>sean</code> becomes <code>sea</code> in a new column. </p>
<p>I have tried this:</p>
<pre><code>family['newcol'] = [x[:y] for x in family['name'] and for y in family['age']]
</code></pre>
<p>but it hasn't worked. Can anyone please give me a solution?</p>
| 1 | 2016-08-29T12:17:43Z | 39,206,283 | <p>Please try this:</p>
<pre><code>family['newcol'] = [family.ix[x]['name'][0:family.ix[x]['age']] for x in family.index]
</code></pre>
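<p>Note that <code>.ix</code> was deprecated in later pandas versions; an alternative that avoids indexer lookups entirely is to zip the two columns (or use a row-wise <code>apply</code>). A self-contained sketch with the question's data:</p>

```python
import pandas as pd

family = pd.DataFrame({'name': ['john', 'sean', 'jack', 'peter'],
                       'age': [2, 3, 1, 4]})

# slice each name by the age value in the same row
family['newcol'] = [name[:age]
                    for name, age in zip(family['name'], family['age'])]
print(family['newcol'].tolist())   # ['jo', 'sea', 'j', 'pete']
```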
| 2 | 2016-08-29T12:24:21Z | [
"python",
"pandas"
] |
Why Do I have to worry about Thread Safety in CPython? | 39,206,242 | <p>From what I understand, the Global Interpreter Lock allows only a single thread to access the interpreter and execute bytecode. If that's the case, then at any given time, only a single thread will be using the interpreter and its memory. </p>
<p>With that I believe that it is fair to exclude the possibility of having race cases, since no two threads can access the interpreter's memory at the same time, yet I still see warnings about making sure data structures are "thread safe". There is a possibility that it may be covering all implementations of the python interpreter (like cython) which can switch off the GIL and allow true multi threading. </p>
<p>I understand the importance of thread safety in interpreter environments that do not have the GIL enabled. However, for CPython, why is thread safety encouraged when writing multi threaded python code? What is the worse that can happen in the CPython environment? </p>
| 2 | 2016-08-29T12:21:43Z | 39,206,297 | <p>Of course race conditions can still take place, because access to datastructures is <em>not atomic</em>.</p>
<p>Say you test for a key being present in a dictionary, then do something to add the key:</p>
<pre><code>if key not in dictionary:
# calculate new value
value = elaborate_calculation()
dictionary[key] = value
</code></pre>
<p>The thread can be switched at any point after the <code>not in</code> test has returned true, and <em>another thread</em> will also come to the conclusion that the key isn't there. Now two threads are doing the calculation, and you don't know which one will win.</p>
<p>All that the GIL does is protect Python's <em>internal interpreter state</em>. This doesn't mean that data structures used by Python code itself are now locked and protected.</p>
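<p>The usual fix for the check-then-set pattern above is to make it atomic with a lock. A deterministic sketch where ten threads race to fill the same key, yet the calculation runs exactly once:</p>

```python
import threading

dictionary = {}
lock = threading.Lock()
calls = []                      # records how many times we compute

def elaborate_calculation(key):
    calls.append(key)
    return key * 2

def get_value(key):
    with lock:                  # check-then-set is now one atomic step
        if key not in dictionary:
            dictionary[key] = elaborate_calculation(key)
    return dictionary[key]

threads = [threading.Thread(target=get_value, args=(21,))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(dictionary, len(calls))   # {21: 42} 1
```

<p>For this particular pattern, <code>dict.setdefault(key, value)</code> also performs the test and insertion in a single call, though the value expression is still evaluated in every thread before the call is made.</p>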
| 6 | 2016-08-29T12:24:52Z | [
"python",
"multithreading",
"thread-safety",
"cpython",
"gil"
] |
Why Do I have to worry about Thread Safety in CPython? | 39,206,242 | <p>From what I understand, the Global Interpreter Lock allows only a single thread to access the interpreter and execute bytecode. If that's the case, then at any given time, only a single thread will be using the interpreter and its memory. </p>
<p>With that I believe that it is fair to exclude the possibility of having race conditions, since no two threads can access the interpreter's memory at the same time, yet I still see warnings about making sure data structures are "thread safe". There is a possibility that it may be covering all implementations of the Python interpreter (like Cython) which can switch off the GIL and allow true multithreading. </p>
<p>I understand the importance of thread safety in interpreter environments that do not have the GIL enabled. However, for CPython, why is thread safety encouraged when writing multithreaded Python code? What is the worst that can happen in the CPython environment? </p>
| 2 | 2016-08-29T12:21:43Z | 39,206,973 | <p>An important note: the multiprocessing module in Python runs code concurrently despite the GIL, in that access to the same shared variable can occur across different processes simultaneously.</p>
<p>This has a likelihood of corrupting your data, or at least disrupting your control flow, which is why thread safety is recommended.</p>
<p>As to why it happens: despite there only being one interpreter, there isn't anything stopping (at least as far as I can tell) two pieces of code from accessing the same parts of shared memory simultaneously. When doing, say:</p>
<pre><code>import multiprocessing

def my_func():
    print("hello world")

if __name__ == '__main__':
    my_process = multiprocessing.Process(target=my_func, args=())
    my_process.start()
    my_process.join()
</code></pre>
<p>My understanding is that the time it takes to interpret (in this case) my_func was buried in the overhead it takes to spawn a new process.</p>
<p>The term "process" is more suitable here, because there are worker threads that are temporarily spawned just to copy data, so there's some data handshaking going on; it's actually quite a different process (pun intended) from the spawning of a traditional thread.</p>
<p>I hope this helps.</p>
| 0 | 2016-08-29T12:59:59Z | [
"python",
"multithreading",
"thread-safety",
"cpython",
"gil"
] |
How to query a one to many/many to many relationship in Flask SQL Alchemy? | 39,206,298 | <p>These are two database models that are important in my problem. </p>
<p><b>I have established a one to many relationship (a Conversation can have multiple Messages) </b>
There is also a many to many relationship established between User and Conversation. </p>
<p><b>After obtaining two User objects, say <i>user1</i> and <i>user2</i>, I need to find the conversation that contains both users, if it exists. After getting a conversation object, say <i>current_convo</i>, I also need to query all the messages in that conversation. How could these two queries be done?</b></p>
<pre><code>class Conversation(db.Model):
__tablename__ = 'conversation'
id = db.Column('id', db.Integer, primary_key=True)
users = db.relationship("User", secondary=relationship_table)
messages = db.relationship("Message", backref="conversation", lazy="dynamic")
class Message(db.Model):
__tablename__ = 'message'
id = db.Column('id', db.Integer, primary_key=True)
message = db.Column('message', db.String)
timestamp = db.Column('timestamp', db.String)
sender = db.Column('sender', db.String)
conversation_id = db.Column(db.Integer, db.ForeignKey('conversation.id'))
class User(db.Model, UserMixin):
__tablename__ = 'user'
id = db.Column('id', db.Integer, primary_key=True)
username = db.Column('username', db.String(100), unique=True, index=True)
password = db.Column('password', db.String(100))
email = db.Column('email', db.String(100), unique=True, index=True)
authenticated = db.Column('authenticated', db.Boolean, default=False)
</code></pre>
| 0 | 2016-08-29T12:24:54Z | 39,207,144 | <p>This isn't a pure SQL query, but here's how I would accomplish what you're asking using Pandas.</p>
<pre><code>import pandas as pd
import sqlalchemy
import urllib
#setup vars and connection
server = 'myServer'
db = 'myDb'
user1 = 'someId1'
user2 = 'someId2'
#You'll have to maybe change this a little if you aren't using a trusted connection on SQL Server
connStr = 'DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' + db + ';Trusted_Connection=yes'
conn = sqlalchemy.create_engine(
'mssql+pyodbc:///?odbc_connect=%s' % (urllib.quote_plus(connStr)))
#select all conversations that have one of the users
query = """select * from Conversation where users is in ('{0}','{1}')""".format(user1,user2)
conv_df = pd.read_sql(query,conn)
#unstack the users, so we can see which users are part of the same conversation
conv_users = conv_df.set_index(['id','users']).unstack().reset_index()
#filter conversations to those that have both users
conv_together = conv_users[(conv_users[user1].notnull()) & (conv_users[user2].notnull())]
conv_list = conv_together['id'].tolist()
conv_str = "(" + ', '.join("'{0}'".format(w) for w in conv_list) +")"
#select all messages where the conv id matches your criteria (has both users)
query = """select * from Message where conversation_id is in {0}""".format(conv_str)
message_df = pd.read_sql(query,conn)
</code></pre>
<p>It's hard to show intermediate steps with no test data, so I can't run and QC this code, but hopefully it gives you the right idea.</p>
| -1 | 2016-08-29T13:09:07Z | [
"python",
"flask",
"sqlalchemy"
] |
How to query a one to many/many to many relationship in Flask SQL Alchemy? | 39,206,298 | <p>These are two database models that are important in my problem. </p>
<p><b>I have established a one to many relationship (a Conversation can have multiple Messages) </b>
There is also a many to many relationship established between User and Conversation. </p>
<p><b>After obtaining two User objects, say <i>user1</i> and <i>user2</i>, I need to find the conversation that contains both users, if it exists. After getting a conversation object, say <i>current_convo</i>, I also need to query all the messages in that conversation. How could these two queries be done?</b></p>
<pre><code>class Conversation(db.Model):
__tablename__ = 'conversation'
id = db.Column('id', db.Integer, primary_key=True)
users = db.relationship("User", secondary=relationship_table)
messages = db.relationship("Message", backref="conversation", lazy="dynamic")
class Message(db.Model):
__tablename__ = 'message'
id = db.Column('id', db.Integer, primary_key=True)
message = db.Column('message', db.String)
timestamp = db.Column('timestamp', db.String)
sender = db.Column('sender', db.String)
conversation_id = db.Column(db.Integer, db.ForeignKey('conversation.id'))
class User(db.Model, UserMixin):
__tablename__ = 'user'
id = db.Column('id', db.Integer, primary_key=True)
username = db.Column('username', db.String(100), unique=True, index=True)
password = db.Column('password', db.String(100))
email = db.Column('email', db.String(100), unique=True, index=True)
authenticated = db.Column('authenticated', db.Boolean, default=False)
</code></pre>
| 0 | 2016-08-29T12:24:54Z | 39,208,130 | <p>The best way I know to do this is to use SQLAlchemy's <a href="http://docs.sqlalchemy.org/en/latest/orm/internals.html#sqlalchemy.orm.properties.RelationshipProperty.Comparator.contains" rel="nofollow">contains</a>. </p>
<pre><code>Conversation.query.filter(
Conversation.users.contains(user1),
Conversation.users.contains(user2)
)
</code></pre>
| 1 | 2016-08-29T13:58:46Z | [
"python",
"flask",
"sqlalchemy"
] |
Can any immutable data type be used as a python dictionary key? | 39,206,342 | <p>So I've gone through a few books and sites. Most of the listed possible data types are immutable. I understand how mutable data types can cause problems.</p>
<p>Also tuples were mentioned as possible keys, but when you make the tuples elements lists it raises an error. So I thought that any key, as long as it's immutable is acceptable for a dictionary. Is this correct?</p>
| 0 | 2016-08-29T12:26:39Z | 39,206,441 | <p>Any object, as long as it is <a href="https://docs.python.org/2/glossary.html#term-hashable"><strong>hashable</strong></a>, can be used as a key:</p>
<blockquote>
<p>An object is hashable if it has a hash value which never changes during its lifetime (it needs a <code>__hash__()</code> method), and can be compared to other objects (it needs an <code>__eq__()</code> or <code>__cmp__()</code> method). Hashable objects which compare equal must have the same hash value.</p>
</blockquote>
<p>Immutability makes it possible to produce a stable hash, but a mutable custom class is fine as long as you don't mutate the state that is used to produce the hash. </p>
<p>Tuples are only hashable if they only contain hashable objects; the hash of a tuple is determined by the hashes of the contents, as that's also what determines if two tuples are equal. You can't hash a tuple containing a list, for example:</p>
<pre><code>>>> l = ['foo', 'bar']
>>> t = (42, l) # contains a list
>>> hash(t)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
</code></pre>
<p>A list is not hashable, because it too would have to build a hash from the contents (as it uses the contents to determine if two lists are equal), but that hash would change over the list's lifetime as you can easily mutate what is in the list.</p>
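<p>This is exactly what the immutable counterparts are for: <code>frozenset</code> is the hashable stand-in for a set, just as a tuple can stand in for a list when its contents are themselves hashable:</p>

```python
# A tuple of hashable contents works as a key; equal frozensets hash
# the same, so element order at construction doesn't matter.
d = {(42, frozenset(['foo', 'bar'])): 'ok'}
print(d[(42, frozenset(['bar', 'foo']))])   # ok

# ...but the mutable equivalent inside the tuple is rejected:
try:
    hash((42, ['foo', 'bar']))
    failed = None
except TypeError as exc:
    failed = exc
    print(exc)   # unhashable type: 'list'
```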
<p>A custom class can be used as a key provided that the hash is based on the same variables that make two objects equal. The default implementation for custom classes is to only have two instances be equal if they are the <em>exact same instance</em>, making them hashable by default. But the following class is also hashable, because the data that changes is <em>not</em> used to determine equality:</p>
<pre><code>class HashableDemo(object):
def __init__(self, value):
self._value = value
self._count = 0
def increment(self):
self._count += 1
def __eq__(self, other):
if not isinstance(other, HashableDemo):
return False
return self._value == other._value
def __hash__(self):
return hash(self._value)
</code></pre>
<p>The class mutates <code>_count</code> but that attribute is not used to determine equality, leaving the hash stable.</p>
| 7 | 2016-08-29T12:32:17Z | [
"python",
"python-2.7",
"dictionary"
] |
Python 3 throws UnicodeEncodeError with environment variable LANG=C | 39,206,705 | <p>I have the following Python script:</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
print('âº')
</code></pre>
<p>When I run it on my Debian system, it produces the following output, as expected:</p>
<pre><code>$ ./test.py
âº
$
</code></pre>
<p>However, when I change locale to "C", by setting the <code>LANG</code> environment variable, the script throws a <code>UnicodeEncodeError</code>:</p>
<pre><code>$ LANG=C ./test.py
Traceback (most recent call last):
File "./test.py", line 4, in <module>
print('\u263a')
UnicodeEncodeError: 'ascii' codec can't encode character '\u263a' in position 0: ordinal not in range(128)
$
</code></pre>
<p>This problem prevents this script from being executed in minimal environments, such as during boot or in embedded systems. Also, I suspect that many existing Python programs can be broken by executing them with <code>LANG=C</code>. <a href="http://stackoverflow.com/questions/24095382/python-3-4-causes-unicodeencodeerror-on-apache2-server-mac-but-works-fine-in-c">Here's an example</a> on Stackoverflow of a program that presumably broke because it's executed in the "C"-locale. </p>
<p>Is this a bug in Python? What's the best way to prevent this?</p>
| 0 | 2016-08-29T12:45:49Z | 39,206,776 | <p>This is because Python 3 uses the locale settings to deduce the output character encoding; that is, Python will use the locale that would be displayed for <code>LC_CTYPE</code> when you execute the locale command:</p>
<pre><code>% locale
...
LC_CTYPE="en_US.UTF-8"
...
</code></pre>
<p>If you force <code>LC_CTYPE</code> to <code>C</code>, then Python will assume that ASCII should be used as the output encoding. And ASCII doesn't have a mapping for <code>U+263A</code>.</p>
<p>If you want Python to know how to encode Unicode properly, set the <code>LC_CTYPE</code> to an appropriate value, or write binary to fd 1.</p>
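<p>"Write binary to fd 1" means encoding the text yourself and writing to the underlying byte stream, which bypasses the locale-derived codec entirely; a sketch:</p>

```python
import sys

text = '\u263a'
data = text.encode('utf-8')      # explicit codec, independent of LC_CTYPE
assert data == b'\xe2\x98\xba'   # the UTF-8 bytes for U+263A

# sys.stdout.buffer is the binary stream behind the text wrapper (fd 1)
sys.stdout.buffer.write(data + b'\n')
sys.stdout.buffer.flush()
```

<p>Alternatively, setting the environment variable <code>PYTHONIOENCODING=utf-8</code> forces the codec used for <code>sys.stdout</code> regardless of the locale.</p>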
| 3 | 2016-08-29T12:49:40Z | [
"python",
"python-3.x",
"unicode",
"locale"
] |
How can I add two or more images using matplotlib? | 39,206,713 | <p>I read the images using imread and then I would like to compute the average image. How can I add (and divide) using matplotlib?</p>
<p>I'm searching for something like imadd in matlab.</p>
<p>Code:</p>
<pre><code>import matplotlib.image

img1 = matplotlib.image.imread("path")
img2 = matplotlib.image.imread("path1")
img3 = matplotlib.image.imread("path2")
</code></pre>
<p>Thanks</p>
| -1 | 2016-08-29T12:46:18Z | 39,207,034 | <p><code>matplotlib.image</code> is probably what you are looking for. You'll also need <code>numpy</code> if you want to manipulate the images otherwise, because they are basically just arrays in the size of the image (e.g. 1920 x 1080) with 3 or 4 dimensions (RGB or RGBA).</p>
<pre><code>import matplotlib.image as mpimg
import matplotlib.pyplot as plt
img1 = mpimg.imread("foo.png")
img2 = mpimg.imread("bar.png")
</code></pre>
<p>Now you are set up for image manipulation. If your images are both in the same format and size (e.g. RGB; check with <code>img1.shape</code> and <code>img2.shape</code>), you can average them and display the result:</p>
<pre><code>img3 = (img1 + img2) / 2  # element-wise average
plt.imshow(img3)
</code></pre>
| 1 | 2016-08-29T13:03:03Z | [
"python",
"image",
"matplotlib"
] |
How can I add two or more images using matplotlib? | 39,206,713 | <p>I read the images using imread and then I would like to compute the average image. How can I add(and divide) using matplotlib?</p>
<p>I'm searching for something like imadd in matlab.</p>
<p>Code:</p>
<pre><code>import matplotlib.image

img1 = matplotlib.image.imread("path")
img2 = matplotlib.image.imread("path1")
img3 = matplotlib.image.imread("path2")
</code></pre>
<p>Thanks</p>
| -1 | 2016-08-29T12:46:18Z | 39,207,078 | <p>You can use the normal sum operations:</p>
<pre><code>img4 = img1 + img2 + img3
</code></pre>
<p>This, however, is not exactly the same as <a href="http://fr.mathworks.com/help/images/ref/imadd.html" rel="nofollow">imadd</a> from matlab. Matplotlib works with RGB values from 0 to 1, so the sum in some pixels can exceed 1 (which is valid for a float array; the same would not be true if the data type were uint8). To guarantee that your data comes out correct, clip the result:</p>
<pre><code>img1 = matplotlib.image.imread("path1")
img2 = matplotlib.image.imread("path2")
img3 = np.clip(img1 + img2, 0, 1)
</code></pre>
<p>Notice that all images must have the same size.</p>
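<p>For the original goal of an average image, a hedged numpy sketch (synthetic arrays stand in for <code>imread</code> results, which are floats in [0, 1] for PNG input):</p>

```python
import numpy as np

# Hypothetical stand-ins for matplotlib.image.imread results.
img1 = np.zeros((4, 4, 3))
img2 = np.ones((4, 4, 3))
img3 = np.full((4, 4, 3), 0.5)

# Stacking along a new axis lets np.mean average any number of
# same-shaped images at once; no clipping is needed for a mean.
average = np.mean(np.stack([img1, img2, img3]), axis=0)
print(average[0, 0])  # every channel is 0.5
```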
| 1 | 2016-08-29T13:05:22Z | [
"python",
"image",
"matplotlib"
] |
Log disabling in python | 39,206,739 | <p>I am new to this logging module.</p>
<pre><code>logging.basicConfig(level=logging.DEBUG)
logging.disable = True
</code></pre>
<p>As per my understanding this should disable debug logs. But when it is executed it prints debug logs also.</p>
<p>I have only debug logs to print. I don't have critical or info logs, so how can I disable these debug logs?</p>
| 0 | 2016-08-29T12:47:50Z | 39,206,802 | <p>logging.disable is a method, not a configurable attribute.</p>
<p>You can disable logging with :</p>
<p><a href="https://docs.python.org/2/library/logging.html#logging.disable" rel="nofollow">https://docs.python.org/2/library/logging.html#logging.disable</a></p>
<p>To disable all, call:</p>
<pre><code>logging.disable(logging.DEBUG)
</code></pre>
<p>This will disable all logs of level DEBUG and below.</p>
<p>To enable all logging, do <code>logging.disable(logging.NOTSET)</code> as it is the lowest level.</p>
| 2 | 2016-08-29T12:51:11Z | [
"python",
"python-2.7"
] |
Log disabling in python | 39,206,739 | <p>I am new to this logging module.</p>
<pre><code>logging.basicConfig(level=logging.DEBUG)
logging.disable = True
</code></pre>
<p>As per my understanding this should disable debug logs. But when it is executed it prints debug logs also.</p>
<p>I have only debug logs to print. I don't have critical or info logs, so how can I disable these debug logs?</p>
| 0 | 2016-08-29T12:47:50Z | 39,206,830 | <p>You can change the configuration to <code>level=logging.CRITICAL</code> and receive only critical logs.</p>
| 0 | 2016-08-29T12:52:50Z | [
"python",
"python-2.7"
] |
Log disabling in python | 39,206,739 | <p>I am new to this logging module.</p>
<pre><code>logging.basicConfig(level=logging.DEBUG)
logging.disable = True
</code></pre>
<p>As per my understanding this should disable debug logs. But when it is executed it prints debug logs also.</p>
<p>I have only debug logs to print. I don't have critical or info logs, so how can I disable these debug logs?</p>
| 0 | 2016-08-29T12:47:50Z | 39,206,913 | <p>The <code>level</code> argument in <code>logging.basicConfig</code>, which you've set to <code>logging.DEBUG</code>, is the lowest level of logging that will be displayed.
The order of logging levels is documented <a href="https://docs.python.org/2/library/logging.html#logging-levels" rel="nofollow">here</a>.</p>
<p>If you don't want to display DEBUG, you can either set <code>logging.basicConfig(level=logging.INFO)</code>, or specify levels to be disabled via <code>logging.disable(logging.DEBUG)</code>.</p>
| 0 | 2016-08-29T12:56:58Z | [
"python",
"python-2.7"
] |
Django Apache mod_wsgi permission errors | 39,206,968 | <p>Have setup Django in virtualenv but get 500 Internal Server Error. Development server worked fine.</p>
<h2>Environment:</h2>
<ul>
<li>Python 2.7.12</li>
<li>Apache 2.4.23</li>
<li>Django 1.10</li>
<li>Fedora 24</li>
</ul>
<h2>Server log:</h2>
<pre><code>[Mon Aug 29 12:27:49.364393 2016] [mime_magic:error] [pid 19158] [client 14.2.108.225:49222] AH01512: mod_mime_magic: can't read `/home/fedora/motorable/motorable/wsgi.py'
[Mon Aug 29 12:27:49.364552 2016] [mime_magic:error] [pid 19158] [client 14.2.108.225:49222] AH01512: mod_mime_magic: can't read `/home/fedora/motorable/motorable/wsgi.py'
[Mon Aug 29 12:27:49.364904 2016] [wsgi:error] [pid 19157] (13)Permission denied: [remote 14.2.108.225:1832] mod_wsgi (pid=19157, process='motorable', application='ip-172-31-22-170.ap-southeast-2.compute.internal|'): Call to fopen() failed for '/home/fedora/motorable/motorable/wsgi.py'.
</code></pre>
<h2>Configuration:</h2>
<pre><code>Alias /static /home/fedora/motorable/static
<Directory /home/fedora/motorable/static>
Require all granted
</Directory>
<Directory /home/fedora/motorable/motorable>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
WSGIDaemonProcess motorable python-path=/home/fedora/motorable:/home/fedora/mot$
WSGIProcessGroup motorable
WSGIScriptAlias / /home/fedora/motorable/motorable/wsgi.py
WSGISocketPrefix /var/run/wsgi
</code></pre>
<p>WSGI is running in daemon mode, I tried adding the <code>WSGISocketPrefix</code> directive but I'm not sure what else to check or do. First time experimenting with Django here. The user home directory is 710 and should allowing Apache in, I added apache user to the primary group of fedora.</p>
<p>Can anyone share some insight?</p>
| 2 | 2016-08-29T12:59:53Z | 39,303,918 | <p>Moving the project outside the home directory to /var/www/django solved my question.</p>
| 1 | 2016-09-03T06:27:01Z | [
"python",
"django",
"apache",
"mod-wsgi"
] |
Numpy: Get rectangle area just the size of mask | 39,206,986 | <p><strong>I have an image and a mask. Both are numpy array.</strong> I get the mask through GraphSegmentation (cv2.ximgproc.segmentation), so the area isn't rectangle, but not divided. I'd like to get a rectangle just the size of masked area, but I don't know the efficient way.</p>
<blockquote>
<p>In other words, unmasked pixels are value of 0 and masked pixels are value over 0, so I want to get a rectangle where...</p>
</blockquote>
<ul>
<li><strong>top</strong> = the smallest index of axis 0 whose value > 0</li>
<li><strong>bottom</strong> = the largest index of axis 0 whose value > 0</li>
<li><strong>left</strong> = the smallest index axis 1 whose value > 0</li>
<li><strong>right</strong> = the largest index axis 1 whose value > 0</li>
<li><strong>image</strong> = src[top : bottom, left : right]</li>
</ul>
<p><strong>My code is below</strong></p>
<pre><code>import cv2
import numpy as np

segmentation = cv2.ximgproc.segmentation.createGraphSegmentation()
src = cv2.imread('image_file')
segment = segmentation.processImage(src)
for i in range(np.max(segment)):
dst = np.array(src)
dst[segment != i] = 0
cv2.imwrite('output_file', dst)
</code></pre>
| 3 | 2016-08-29T13:00:27Z | 39,207,737 | <p>If you prefer pure Numpy, you can achieve this using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow"><code>np.where</code></a> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html#numpy.meshgrid" rel="nofollow"><code>np.meshgrid</code></a>:</p>
<pre><code>i, j = np.where(mask)
indices = np.meshgrid(np.arange(min(i), max(i) + 1),
np.arange(min(j), max(j) + 1),
indexing='ij')
sub_image = image[indices]
</code></pre>
<p><code>np.where</code> returns a tuple of arrays specifying, pairwise, the indices in each axis for each non-zero element of <code>mask</code>. We then create arrays of all the row and column indices we will want using <code>np.arange</code>, and use <code>np.meshgrid</code> to generate two grid-shaped arrays that index the part of the image we're interested in. Note that we specify matrix-style indexing using <code>index='ij'</code> to avoid having to transpose the result (the default is Cartesian-style indexing).</p>
<p>Essentially, <code>meshgrid</code> constructs <code>indices</code> so that:</p>
<pre><code>image[indices][a, b] == image[indices[0][a, b], indices[1][a, b]]
</code></pre>
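<p>For comparison, an equivalent slice-based sketch (hypothetical 4x3 image and mask) that skips building the index grids:</p>

```python
import numpy as np

image = np.arange(12).reshape((4, 3))
mask = np.array([[0, 0, 0],
                 [0, 1, 0],
                 [1, 0, 0],
                 [0, 0, 0]])

# The extrema of the non-zero indices give the bounding rectangle,
# which basic slicing extracts directly.
i, j = np.where(mask)
sub_image = image[i.min():i.max() + 1, j.min():j.max() + 1]
print(sub_image)  # -> [[3 4]
                  #     [6 7]]
```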
<h1>Example</h1>
<p>Start with the following:</p>
<pre><code>>>> image = np.arange(12).reshape((4, 3))
>>> image
array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11]])
</code></pre>
<p>Let's say we want to extract the <code>[[3,4],[6,7]]</code> sub-matrix, which is the bounding rectangle for the the following mask:</p>
<pre><code>>>> mask = np.array([[0,0,0],[0,1,0],[1,0,0],[0,0,0]])
>>> mask
array([[0, 0, 0],
[0, 1, 0],
[1, 0, 0],
[0, 0, 0]])
</code></pre>
<p>Then, applying the above method:</p>
<pre><code>>>> i, j = np.where(mask)
>>> indices = np.meshgrid(np.arange(min(i), max(i) + 1), np.arange(min(j), max(j) + 1), indexing='ij')
>>> image[indices]
array([[3, 4],
[6, 7]])
</code></pre>
<p>Here, <code>indices[0]</code> is a matrix of row indices, while <code>indices[1]</code> is the corresponding matrix of column indices:</p>
<pre><code>>>> indices[0]
array([[1, 1],
[2, 2]])
>>> indices[1]
array([[0, 1],
[0, 1]])
</code></pre>
| 1 | 2016-08-29T13:39:15Z | [
"python",
"opencv",
"numpy"
] |
Python: null character breaking xml format | 39,207,067 | <p>I have some python code that processes input files and dumps certain fields from the input to XML files. This code broke when passing a null character from input -- throwing an invalid token error:</p>
<pre><code>import xml.etree.ElementTree as ET
from xml.dom import minidom

def pretty_print_xml(elem):
rough_string = ET.tostring(elem, 'utf-8')
reparsed = minidom.parseString(rough_string)
return reparsed.toprettyxml(indent=' ')
</code></pre>
<p>This surprised me and I would like to know why it broke and what else might need to be sanitized from the input. I thought only an XML meta character could throw this error and these are already being handled by minidom.</p>
| 0 | 2016-08-29T13:04:40Z | 39,209,884 | <p>NUL literals are not allowed in XML. See <a href="https://www.w3.org/TR/xml11/#charsets" rel="nofollow">the XML standard, version 1.1</a>:</p>
<blockquote>
<h2>2.2 Characters</h2>
<p>[Definition: A parsed entity contains text, a sequence of <a href="https://www.w3.org/TR/xml11/#dt-character" rel="nofollow">characters</a>, which may represent markup or character data.] [Definition: A character is an atomic unit of text as specified by ISO/IEC 10646 <A HREF="https://www.w3.org/TR/xml11/#ISO10646" rel="nofollow">[ISO/IEC 10646]</A>. Legal characters are tab, carriage return, line feed, and the legal characters of Unicode and ISO/IEC 10646. The versions of these standards cited in <A HREF="https://www.w3.org/TR/xml11/#sec-existing-stds" rel="nofollow">A.1 Normative References</A> were current at the time this document was prepared. New characters may be added to these standards by amendments or new editions. Consequently, XML processors must accept any character in the range specified for <A HREF="https://www.w3.org/TR/xml11/#NT-Char" rel="nofollow">Char</A>.]</p>
<pre><code>[2] Char ::= [#x1-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF] /* any Unicode character, excluding the surrogate blocks, FFFE, and FFFF. */
[2a] RestrictedChar ::= [#x1-#x8] | [#xB-#xC] | [#xE-#x1F] | [#x7F-#x84] | [#x86-#x9F]
</code></pre>
</blockquote>
<p>Note that <code>Char</code> is defined to allow (among other ranges) <code>\x01</code> through <code>\xD7FF</code> -- but <strong>not</strong> <code>\x00</code>.</p>
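<p>One sketch of a pre-parse fix is to strip everything outside the legal ranges before handing strings to <code>minidom</code> (the character class below follows the stricter XML 1.0 <code>Char</code> production, so it also removes the control characters that XML 1.1 merely restricts):</p>

```python
import re

# Characters legal in XML 1.0: tab, LF, CR, plus the listed Unicode ranges.
_ILLEGAL_XML_CHARS = re.compile(
    '[^\u0009\u000a\u000d\u0020-\ud7ff\ue000-\ufffd\U00010000-\U0010ffff]'
)

def sanitize_for_xml(text):
    """Drop NUL and any other character XML cannot represent."""
    return _ILLEGAL_XML_CHARS.sub('', text)

print(sanitize_for_xml('abc\x00def'))  # -> abcdef
```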
<hr>
<p>By the way -- if your goal is pretty-printing, I'd suggest using <code>lxml.etree</code>. If <A HREF="http://lxml.de/tutorial.html#serialisation" rel="nofollow">the <code>pretty_print=True</code> argument</A> on serialization calls doesn't work out-of-the-box, see <A HREF="http://lxml.de/FAQ.html#why-doesn-t-the-pretty-print-option-reformat-my-xml-output" rel="nofollow">the relevant FAQ entry</A>.</p>
| 1 | 2016-08-29T15:27:21Z | [
"python",
"xml"
] |
Python can't add canvas to ttk notebook page | 39,207,185 | <pre><code>from tkinter import *
from tkinter import ttk
class MainGame(Frame):
def __init__(self, parent):
Frame.__init__(self, parent)
self.parent = parent
self.initUI()
def initUI(self):
global canvas
# ===Part A ===
self.parent.title('PythonPage')
self.pack(fill = BOTH, expand = 1)
self.page = ttk.Notebook(self, width = 650 ,height = 630)
self.page1 = ttk.Frame(self)
self.page.add(self.page1, text = 'Tab1')
self.page.pack(expand = 1, anchor = 'nw', side = 'top')
# ===Part B ===
canvas = Canvas(self)
canvas.create_rectangle([10,10, 650,630], fill = 'blue')
canvas.pack(fill = BOTH, expand = 1)
canvas.update()
self.a = Label(self, text = 'Haha')
self.a.place(x=50,y=50)
root = Tk()
root.geometry('925x650')
main = MainGame(root)
root.mainloop()
</code></pre>
<p>How can I add my rectangle into ttk's notebook? I found that my rectangle is always created below the notebook, but this situation is not the same with <code>Label</code>.</p>
<p>I want to put the rectangle inside the notebook, should I add something to <code>self.page1</code>?</p>
| -1 | 2016-08-29T13:11:39Z | 39,208,143 | <p>If you want the canvas to be in the notebook, you must add it to one of the notebook pages. For example, if you want it in <code>self.page1</code>, you would change this line:</p>
<pre><code>canvas = Canvas(self)
</code></pre>
<p>... to this:</p>
<pre><code>canvas = Canvas(self.page1)
</code></pre>
| 0 | 2016-08-29T13:59:27Z | [
"python",
"tkinter",
"polygon",
"ttk",
"tkinter-canvas"
] |
Scrapy error for multiple formrequest | 39,207,197 | <p>I am using scrapy to login into a website using FormRequest, but since the site is built in ASP.Net, for pagination, I have to use FormRequest too, the code is running alright but I don't know how to configure it for pagination since I am new to scrapy. When I use return in parse_item(), it works but I want to work it for pagination, when I use yield in parse_item(), I get this error. </p>
<pre><code>2016-08-29 20:44:59 [scrapy] ERROR: Spider must return Request, BaseItem, dict or None, got 'list' in <GET https://recruiter.cwjobs.co.uk/Recruitment/CandidateSearch/CandidateSearchResults.aspx?SalaryRateTypeId=1&SalaryRangeIds=17%2c18%2c19%2c20%2c21%2c22%2c23%2c24%2c25%2c26&LastActivityId=15&Radius=-1&JobTypeProfile=20&LimitSearch=True&CandidateSearchCriteria=vKYIkjLZq5Af6OEmkANngg%3d%3d&scr=1&iasc=0>
</code></pre>
<p>This is my code</p>
<pre><code>import scrapy
from scrapy.http import Request
from scrapy.http import FormRequest
from cwjobs.items import CwjobsItem
class RecruiterSpider(scrapy.Spider):
name = "recruiter"
allowed_domains = ["recruiter.cwjobs.co.uk"]
start_urls = (
'https://recruiter.cwjobs.co.uk/loginReturnUrl=%2fhome%3fRHP%3dnav_bar_SignIn/',)
def start_requests(self):
return [FormRequest("https://recruiter.cwjobs.co.uk/login/",formdata={"__EVENTTARGET":"","__EVENTARGUMENT":"","__VIEWSTATE":"QI2hCUmnX2GZ+vtA2RoynX1rSOZ0LG+0ixQlSPqGcTM9qCheVZwbfaMtPeQAfiQCmM/aJhVjQ7bljYbGfVUEhzVsDaNRB+3qBuOc+SYZ+pHoSk2s0cFz6f5ODgqv/6Jj12bUs7OKnyIa8mlPo+xfmhS+oWroHnJyfPvBAGZkInpW5EcmmKqHD2Ede0XdsH2mMM4nPIy+PRsGW1ZeVd6HifZC1RG9bFXlunoIlQDNhDQeOpRmVdcRroybtCCp+1jLrH4EOGKfOCQ+o2WFGBfldPfS1AHGXL9tDHwvrol4Cx/nK01y1E27PWobQ2RlUXINMBNditfn3qTKCKlGRSLHMJ+PpfZJv1ncmNTvtV+kR1O5vTLbw03Ct3HMzw4GI/zmwojQqUXa0Z4vAoe6bqkzZpm1qKtzzsdpsp5uLTaGiv3SAlDXrK/vuvCFGMqZTMAoqJ47WluyIFsA3Y4dak69mF/UMH3+Foizgh+37IHrL6hM2v17NyvfMAgJXncASJ6P85t8R3Xr2Q4Z1kEbKna1Qi4yINI+wrSmZSSdcTnw3oiklUBCATmFbbnPdhNbr9AIK3lm7hu8OxrXRDRjsOulpB5BgS0Xu4O/8G0A4UNWlLGFoaNdOa/P8UZFvTiRL0uZJR1bL9QImr7DT5ChOPPh4Xzf4KdmB/L7/gRiQlhxQ6ek4BxcjruN3sZ6eFNrEAAbFGMuxevrFlBM+FFvwHEOEK03pYtBjrDhGTVeujLJO7TCetqUZ7+PVGs17by20kkOEMvOFKx9mTeW4oFzbqAUQvQjhW+hSEVmNvRzw+lhov0v1OUcrTdGL6C6sk9jKUALgiyOWEabMSGqoWA5eVQEyiFXVuAQ5AJZcKeQ13wDGZ1HFXj/dlE+jA6p0E/FfEc+A5T69bTN6zjvCwkew/DxJxmxBBBxxnMhgbn8qnpbVRkJj9cg/uTJoD7zI7WWnUTK9neMdPCLGa1MPvXNV/YkCGgswrGqKk9B4eWdGQHqhJJj+Fgb7uW1ZycnuyBoHup8rpKEx1wz56voovTuVRBFk60CHv8MDMcmqAbXujUGwKgZCraUVtAgV10eTG8emVCGGAE5LOkl8eo1h7iV/VWZieE3H+VgD7hucFv2Ny7pzqrxZ68xZEu2F7MQgKL92uKGsNyrHcjTwtCcorYoTIXTGOAlZo3FA5LXL2XFAmVCHH3smh0r2yQyQitQ7oaqVX2jgTL4HdXTVny9Qf5pdkDlHneSCmkMVN45ILhmpTWKj27kpSK/QlYvoG+cvKKdXW2wWJ5ZZ2sqHqH4lWNVmgARYG8JDIXLNRRHv+S5MBGg0hQ6llYrparx6azMop5cx3AeMssimtPJvl+FvcNyqpAZpMsiXEpTBWlHUHdyO3PCq8yYpE4SoOn7NmiVqDE69c2z1/pHlH0fQDUsa7UsKHOAHtyznX0E29q8r0zNJEpNhUH/uX/6G7syXljeOB0P1XVTRbZmL8mFBCMxMPCt/vFi+MPKgr2aPlT7RPv+yy4bILRavikMOKFJ+Cf2Q3r0J60feH+bKISzib9VPvfdj2qudb0Ctt7XbTi0vWKmikStwMwZiVlZlpHImSgmokCC7T988NFHhGw+84Kxc7r8CyBTdfqC2flZpCM5VqY1q1kw/YklVnsm0Uv2FBT0gy4kAQxgOw9D4aA3Ahqr7dWiDDiGPc5/U/ci3D6v9pbbCg3rOGAI4zEUFli0n3OPjEIwCzRi3KVkgSenZjGcTNEtA/sqL8WuMzxv9dIporx76Iwxy6D8wPbWogn30WcHfqR2VWoPvH4Q1fz/4a1hnY9P6N8Y3AEVKrc9fnRaQu/LNQAQajqU5PqLAVmZgbJo4w8M839nQk+nxO+vkidRxU0hONe
7dgADn9mqYf4ss0ITvzEvoLdFv9DdjcBVXh/ZxFZZeVZAZ0B+bXQ3Sf7oMEmZSL0rBxq47EG1MDLksHnQZF0VbOPsdsJpKK770zbcAe4yLgVRye6RGxObQfOWaJVGhZXjMnk8+HEspMLLLj3jUKPkHMUbK7mvjWs3A2o0Z4g=","__VIEWSTATEGENERATOR":"607E8F97","LoginPanel$txtUsername":"*******","LoginPanel$txtPassword":"*********","LoginPanel$btnSubmit":"Sign in","Register$txtFirstName":"","Register$txtLastName":"","Register$txtCompanyName":"","Register$txtBusinessPhone":"","Register$txtEmailAddress":"","Register$txtPassword":"","Register$txtPasswordConfirm":"","Register$txtCharityNumber":"","txtReminderUsername":""})]
def parse(self,response):
print response.xpath("//h1[@class='account-name']/text()").extract()
return Request("https://recruiter.cwjobs.co.uk/Recruitment/CandidateSearch/CandidateSearchResults.aspx?SalaryRateTypeId=1&SalaryRangeIds=17%2c18%2c19%2c20%2c21%2c22%2c23%2c24%2c25%2c26&LastActivityId=15&Radius=-1&JobTypeProfile=20&LimitSearch=True&CandidateSearchCriteria=vKYIkjLZq5Af6OEmkANngg%3d%3d&scr=1&iasc=0", callback = self.parse_item)
def parse_item(self, response):
candsearch = response.xpath("//input[@id='CandidateSearchResults']/@value").extract()[0]
viewsgenerator = response.xpath("//input[@id='__VIEWSTATEGENERATOR']/@value").extract()[0]
print viewsgenerator
newsearch = response.xpath("//input[@id='NewSearchCriteria']/@value").extract()[0]
searchcriteria = response.xpath("//input[@id='CandidateSearchCriteria']/@value").extract()[0]
viewstate = response.xpath("//input[@id='__VIEWSTATE']/@value").extract()[0]
for i in range(1, 3):
print i
data = {"__EVENTTARGET":"ctl00$cphCentralPanel$ucSearchResults$pgrPager","__EVENTARGUMENT":str(i),"CandidateSearchCriteria":searchcriteria,"NewSearchCriteria":newsearch,"Keywords":"","CandidateSearchResults":candsearch,"__LASTFOCUS":"","__VIEWSTATE":viewstate,"__VIEWSTATEGENERATOR":viewsgenerator,"ctl00$cphCentralPanel$NewOrExistingSavedSearch":"rdoNewSavedSearch", "ctl00$cphCentralPanel$txtSavedSearchName":"","ctl00$cphCentralPanel$ucSearchResults$hdnPopoverLinkClicked":"","ctl00$cphCentralPanel$ucSearchResults$ucFacetedSearch$txtBoolean":"","ctl00$cphCentralPanel$ucSearchResults$ucFacetedSearch$hdnIsAutosuggestChosen":"0","ctl00$cphCentralPanel$ucSearchResults$ucFacetedSearch$searchTypePart$qsSearchType":"rbProfileAndCV", "ctl00$cphCentralPanel$ucSearchResults$ucFacetedSearch$txtPostcode":"","ctl00$cphCentralPanel$ucSearchResults$ucFacetedSearch$ddlRadius":"-1", "ctl00$cphCentralPanel$ucSearchResults$ucFacetedSearch$qsLoc":"rdoPostcode","ctl00$cphCentralPanel$ucSearchResults$ucFacetedSearch$ddlLastActivity":"15","ctl00$cphCentralPanel$ucSearchResults$ddlSort":"Relevancy#0", "ctl00$cphCentralPanel$ucSearchResults$ddlPageSize":"50"}
request = [FormRequest.from_response(response, formdata = data, callback = self.parse_page2)]
yield request
def parse_page2(self,response):
li = response.xpath("//div[@class = 'row card-row']")
for l in li:
item = CwjobsItem()
firstname = l.xpath(".//a[@class='candidate-lnk']//span[@class='firstName']/text()").extract()
lastname = l.xpath(".//a[@class='candidate-lnk']//span[@class='lastName']/text()").extract()
item['name'] = firstname + lastname
det = l.xpath(".//div[@id='current-expected-row']")
for d in det:
currs = d.xpath(".//li[contains(@id, 'CurrentSalary')]/span/text()").extract()
if currs:
item['currs'] = currs[0].strip()
currjobt = d.xpath(".//li[contains(@id, 'CurrentJobTitle')]/span/text()").extract()
if currjobt:
item['currjobt'] = currjobt[0].strip()
Experience = d.xpath(".//li[contains(@id, 'Experience')]/span/text()").extract()
if Experience:
item['Experience'] = Experience[0].strip()
Desiredjob = d.xpath(".//li[contains(@id, 'DesiredJobTitle')]/span/text()").extract()
if Desiredjob:
item['Desiredjob'] = Desiredjob[0].strip()
Desireds = d.xpath(".//li[contains(@id, 'DesiredSalary')]/span/text()").extract()
if Desireds:
item['Desireds'] = Desireds[0].strip()
DesiredLoc = d.xpath(".//li[contains(@id, 'DesiredLocations')]/span/text()").extract()
if DesiredLoc:
item['DesiredLoc'] = DesiredLoc[0].strip()
phone = l.xpath("//span[@class='action-span hiddendata']/@data-hiddendataurl").extract()
if phone:
item['phonel'] = "https://recruiter.cwjobs.co.uk"+ phone[0]
cvl = l.xpath("//a[@class='action-link view-cv-icon cv-action-button']/@href").extract()
if cvl:
item['cvl'] = "https://recruiter.cwjobs.co.uk"+ cvl[0]
emaillink = l.xpath("//a[@class='hiddendata action-link email-candidate']/@data-hiddendataurl").extract()
if emaillink:
emaillink = "https://recruiter.cwjobs.co.uk" + emaillink[0]
item['email'] = emaillink
# request.meta['item'] = item
# yield request
# return
# yield Request(item['cvl'])
# item['email'] = [response.body]
return item
# def parse_page(self,response):
# # item = response.meta['item']
# item['email'] = response.body
# yield item
</code></pre>
<p>How can I solve this problem?</p>
| 0 | 2016-08-29T13:12:11Z | 39,211,217 | <p>You return a list, while Scrapy expects either a <code>scrapy.Item</code> or a <code>scrapy.Request</code> (or a dict, or <code>None</code>).</p>
<p>The culprit lines:</p>
<pre><code>request = [FormRequest.from_response(response, formdata = data, callback = self.parse_page2)]
yield request
</code></pre>
<p>To fix this, either don't wrap the request in a list, or iterate through the list and yield each element:</p>
<pre><code>request = FormRequest.from_response(response, formdata = data, callback = self.parse_page2)
yield request
# or
requests = [FormRequest.from_response(response, formdata = data, callback = self.parse_page2)]
for r in requests:
yield r
</code></pre>
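<p>The underlying generator behaviour, stripped of Scrapy (hypothetical stand-in callbacks), shows why the framework sees a single <code>list</code> object in the first case:</p>

```python
def bad_callback():
    # Yielding the list itself hands the framework one 'list' object.
    yield [1, 2]

def good_callback():
    # Yielding each element hands the framework the objects one by one.
    for item in [1, 2]:
        yield item

print(list(bad_callback()))   # -> [[1, 2]]
print(list(good_callback()))  # -> [1, 2]
```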
| 0 | 2016-08-29T16:46:56Z | [
"python",
"scrapy",
"web-crawler",
"python-requests"
] |
Get subject key identifier from certificate | 39,207,234 | <p>Is it possible to extract the <a href="https://tools.ietf.org/html/rfc5280#section-4.2.1.2" rel="nofollow">subject key identificator</a> from an existing certificate with python? </p>
<p>I tried something like:</p>
<pre><code>from OpenSSL.crypto import load_certificate, FILETYPE_PEM
cert_string='-----BEGIN CERTIFICATE--...'
certificate=load_certificate(FILETYPE_PEM, cert_string)
subject=certificate.get_subject()
</code></pre>
<p>But it gives back the subject of the certificate. It seems like the certificate object doesn't offer functions for the subject key identifier. Are there other options?</p>
| 1 | 2016-08-29T13:13:54Z | 39,208,054 | <pre><code>subject=certificate.get_extension(0)
</code></pre>
<p>did the job. With </p>
<pre><code>certificate.get_extension_count()
</code></pre>
<p>you can check how many extensions the certificate has.</p>
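<p>Note that the subject key identifier is not guaranteed to be extension 0; a sketch that scans all extensions for it (assuming <code>certificate</code> was loaded with <code>load_certificate</code> as in the question):</p>

```python
def find_subject_key_identifier(certificate):
    # Walk every extension and match on its short name; returns the
    # extension object, or None if the certificate has no SKI extension.
    for i in range(certificate.get_extension_count()):
        ext = certificate.get_extension(i)
        if ext.get_short_name() == b'subjectKeyIdentifier':
            return ext
    return None
```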
| 1 | 2016-08-29T13:54:53Z | [
"python",
"cryptography",
"certificate",
"pyopenssl"
] |
Get subject key identifier from certificate | 39,207,234 | <p>Is it possible to extract the <a href="https://tools.ietf.org/html/rfc5280#section-4.2.1.2" rel="nofollow">subject key identificator</a> from an existing certificate with python? </p>
<p>I tried something like:</p>
<pre><code>from OpenSSL.crypto import load_certificate, FILETYPE_PEM
cert_string='-----BEGIN CERTIFICATE--...'
certificate=load_certificate(FILETYPE_PEM, cert_string)
subject=certificate.get_subject()
</code></pre>
<p>But it gives back the subject of the certificate. It seems like the certificate object doesn't offer functions for the subject key identifier. Are there other options?</p>
| 1 | 2016-08-29T13:13:54Z | 39,210,777 | <p>The code that will extract subject key identifier:</p>
<pre><code>from cryptography import x509
from cryptography.hazmat.backends import default_backend
cert = x509.load_pem_x509_certificate(pem_data, default_backend())
ski = cert.extensions.get_extension_for_oid(x509.oid.ExtensionOID.SUBJECT_KEY_IDENTIFIER)
print(ski.value.digest)
</code></pre>
| 1 | 2016-08-29T16:18:49Z | [
"python",
"cryptography",
"certificate",
"pyopenssl"
] |
PIP how escape character # in password? | 39,207,316 | <p>I want to follow up on the question
<a href="http://stackoverflow.com/questions/19080352/how-to-get-pip-to-work-behind-a-proxy-server/39206294#39206294">How to get pip to work behind a proxy server</a></p>
<p>I have Windows Server and Python 3.5 (64).</p>
<p>My user's password includes a #.</p>
<p>I tried several variants:</p>
<pre>
"C:\Program Files\Python35\scripts\pip.exe" install --proxy http://proxy_user:pwd#123@proxy.su:1111 TwitterApi
"C:\Program Files\Python35\scripts\pip.exe" install --proxy "http://proxy_user:pwd#123"@proxy.su:1111 TwitterApi
"C:\Program Files\Python35\scripts\pip.exe" install --proxy http://"proxy_user:pwd#123"@proxy.su:1111 TwitterApi
"C:\Program Files\Python35\scripts\pip.exe" install --proxy http://proxy_user:"pwd#123"@proxy.su:1111 TwitterApi
</pre>
<p>But I get this error:</p>
<pre>
File "c:\program files\python35\lib\site-packages\pip\_vendor\requests\package
s\urllib3\util\url.py", line 189, in parse_url
raise LocationParseError(url)
pip._vendor.requests.packages.urllib3.exceptions.LocationParseError: Failed to p
arse: proxy_user:pwd
</pre>
<p>How do I escape the # character in this case?</p>
| 1 | 2016-08-29T13:17:19Z | 39,207,995 | <p><strong>Quick way out</strong>: Enter it in the encoded form i.e. <code># -> %23</code></p>
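<p>A sketch of producing that encoded form with the standard library (Python 3; the password below is the hypothetical one from the question):</p>

```python
from urllib.parse import quote

password = 'pwd#123'
encoded = quote(password, safe='')  # percent-encode every reserved character
print(encoded)  # -> pwd%23123

proxy = 'http://proxy_user:{}@proxy.su:1111'.format(encoded)
```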
<p>OR </p>
<blockquote>
<p>A better way for pip to handle this might be to add a <code>--proxy-auth</code>
flag that takes <code>username:password</code> and does the encoding for the
user before adding it to the Proxy URL.</p>
</blockquote>
<hr>
<p><strong>Issue</strong> - This is something not allowed: </p>
<blockquote>
<p>Strictly speaking, the literal # character is not valid in the
userinfo portion of a URI, according to RFC 3986, and should be
percent encoded. However, it's not exactly a surprise that many tools
handle this ok: there's clearly no actual ambiguity about that
character. Note, however, that if there were an @ symbol in the
password you'd definitely have to urlencode it: for that reason, it's
a good habit to get into to urlencode your passwords before they go
into URIs.</p>
</blockquote>
<p>The response to a submitted issue <a href="https://github.com/shazow/urllib3/issues/814" rel="nofollow">parse_url fails when given credentials in the URL with '/', '#', or '?'</a>: </p>
<blockquote>
<p><a href="https://github.com/shazow/urllib3/issues/814#issuecomment-197046251" rel="nofollow">The RFC says specifically</a>:</p>
<p>The authority component is preceded by a double slash ("<code>//</code>") and is
terminated by the next slash ("<code>/</code>"), question mark ("?"), or number
sign ("<code>#</code>") character, or by the end of the URI. In other words, the
current behaviour is correct in expecting the authority to be
terminated by the first <code>/</code> (or ? or <code>#</code>) it finds after the preceeding
<code>//</code>. Am I sympathetic to people trying to use proxy URIs with pip?
Absolutely. I think hacking together something that violates the RFC
has the potential for nasty surprises later on.</p>
</blockquote>
<hr>
| 0 | 2016-08-29T13:51:46Z | [
"python",
"python-3.x",
"pip",
"http-proxy"
] |
PIP how escape character # in password? | 39,207,316 | <p>I want to follow up on the question
<a href="http://stackoverflow.com/questions/19080352/how-to-get-pip-to-work-behind-a-proxy-server/39206294#39206294">How to get pip to work behind a proxy server</a></p>
<p>I have Windows Server and Python 3.5 (64).</p>
<p>My user's password includes a #.</p>
<p>I tried several variants:</p>
<pre>
"C:\Program Files\Python35\scripts\pip.exe" install --proxy http://proxy_user:pwd#123@proxy.su:1111 TwitterApi
"C:\Program Files\Python35\scripts\pip.exe" install --proxy "http://proxy_user:pwd#123"@proxy.su:1111 TwitterApi
"C:\Program Files\Python35\scripts\pip.exe" install --proxy http://"proxy_user:pwd#123"@proxy.su:1111 TwitterApi
"C:\Program Files\Python35\scripts\pip.exe" install --proxy http://proxy_user:"pwd#123"@proxy.su:1111 TwitterApi
</pre>
<p>But I get this error:</p>
<pre>
File "c:\program files\python35\lib\site-packages\pip\_vendor\requests\package
s\urllib3\util\url.py", line 189, in parse_url
raise LocationParseError(url)
pip._vendor.requests.packages.urllib3.exceptions.LocationParseError: Failed to p
arse: proxy_user:pwd
</pre>
<p>How do I escape the # character in this case?</p>
| 1 | 2016-08-29T13:17:19Z | 39,210,711 | <p>Some more percent-encoding replacements (note: this snippet is PHP, not Python):</p>
<pre>
$user = str_replace('@', '%40', $user);
$pass = str_replace('%', '%25', $pass); // must be first: encode '%' before the other replacements
$pass = str_replace('#', '%23', $pass);
$pass = str_replace('@', '%40', $pass);
$pass = str_replace(':', '%3a', $pass);
$pass = str_replace(';', '%3b', $pass);
$pass = str_replace('?', '%3f', $pass);
$pass = str_replace('$', '%24', $pass);
$pass = str_replace('!', '%21', $pass);
$pass = str_replace('/', '%2f', $pass);
$pass = str_replace('\'', '%27', $pass);
$pass = str_replace('"', '%22', $pass);
</pre>
| 0 | 2016-08-29T16:14:20Z | [
"python",
"python-3.x",
"pip",
"http-proxy"
] |
How do you convert a list of nested lists into a list of lists with unique data? | 39,207,333 | <p>After looping my brains out creating reference dictionaries and multi-nested lookup lists I've decided that there has to be an easier way to do this. I can't be the first person to do this type of conversion. I don't even know where to start looking in the docs for a solution.</p>
<p>I have a system that is outputting the following data.</p>
<pre><code>initial_data = [
[21,[[1],[2,3],[6],[7]]],
[22,[[4,5],[6,7]]],
[23,[[1],[4,5],[6],[7]]],
[24,[[1],[2,3,4],[6],[7]]],
]
</code></pre>
<p>I have another system that expects the data in the following format (order does not matter).</p>
<pre><code>return_data = [
[21,[1,2,6,7]],
[21,[1,3,6,7]],
[22,[4,6]],
[22,[4,7]],
[22,[5,6]],
[22,[5,7]],
[23,[1,4,6,7]],
[23,[1,5,6,7]],
[24,[1,2,6,7]],
[24,[1,3,6,7]],
[24,[1,4,6,7]],
]
</code></pre>
| -4 | 2016-08-29T13:18:18Z | 39,207,434 | <p>You can use <code>itertools.product</code>, which produces </p>
<blockquote>
<p>Cartesian product of input iterables.</p>
<p>Roughly equivalent to nested for-loops in a generator expression. For
example, product(A, B) returns the same as ((x,y) for x in A for y in
B).</p>
</blockquote>
<p>Using it on the second element of each sublist produces what you need:</p>
<pre><code>from itertools import product
[[k, p] for k, v in initial_data for p in product(*v)]
# [[21, (1, 2, 6, 7)],
# [21, (1, 3, 6, 7)],
# [22, (4, 6)],
# [22, (4, 7)],
# [22, (5, 6)],
# [22, (5, 7)],
# [23, (1, 4, 6, 7)],
# [23, (1, 5, 6, 7)],
# [24, (1, 2, 6, 7)],
# [24, (1, 3, 6, 7)],
# [24, (1, 4, 6, 7)]]
</code></pre>
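<p>For reference, here is the whole transformation as a self-contained snippet, wrapping each product tuple in <code>list()</code> so the rows match the expected output exactly:</p>

```python
from itertools import product

initial_data = [
    [21, [[1], [2, 3], [6], [7]]],
    [22, [[4, 5], [6, 7]]],
    [23, [[1], [4, 5], [6], [7]]],
    [24, [[1], [2, 3, 4], [6], [7]]],
]

# One output row per element of the Cartesian product of the sublists
return_data = [[k, list(p)] for k, v in initial_data for p in product(*v)]
print(return_data)
```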
| 3 | 2016-08-29T13:23:51Z | [
"python",
"unique",
"nested-lists"
] |
how to remove the print lines in bash into a log file when start a flask app? | 39,207,436 | <p>I start a flask application in bash like</p>
<blockquote>
<p>python app.py &</p>
</blockquote>
<p>But there is a lot of output information in the shell while the application is running, such as</p>
<blockquote>
<p>Running on <a href="http://0.0.0.0:9999/" rel="nofollow">http://0.0.0.0:9999/</a> (Press CTRL+C to quit)
"GET /hash/da9ba7b0369fa343f6cd5797cd9bcc49 HTTP/1.1" 200 -</p>
</blockquote>
<p>Is there any ways to remove these output informations into a log file?</p>
<p>Thanks! </p>
| -2 | 2016-08-29T13:23:59Z | 39,207,482 | <p>try this</p>
<blockquote>
<p>python app.py &>> log.txt</p>
</blockquote>
<p>Using a single '>' will truncate the previous contents of the log file each time, while '>>' will append new output to it.</p>
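<p>If you would rather keep the log handling inside the app than in the shell: Flask's development server writes its request lines through the <code>werkzeug</code> logger, so you can attach a file handler to it — a minimal standard-library sketch (the file name <code>flask.log</code> is just an example):</p>

```python
import logging

# Flask's development server logs requests via the 'werkzeug' logger;
# attaching a FileHandler sends those lines to a file instead of the terminal.
log = logging.getLogger('werkzeug')
log.setLevel(logging.INFO)
log.addHandler(logging.FileHandler('flask.log'))

# Demo record, standing in for the lines Flask would emit:
log.info('Running on http://0.0.0.0:9999/')
```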
<p>I hope it helps</p>
| 0 | 2016-08-29T13:26:41Z | [
"python",
"flask"
] |
how to remove the print lines in bash into a log file when start a flask app? | 39,207,436 | <p>I start a flask application in bash like</p>
<blockquote>
<p>python app.py &</p>
</blockquote>
<p>But there is a lot of output information in the shell while the application is running, such as</p>
<blockquote>
<p>Running on <a href="http://0.0.0.0:9999/" rel="nofollow">http://0.0.0.0:9999/</a> (Press CTRL+C to quit)
"GET /hash/da9ba7b0369fa343f6cd5797cd9bcc49 HTTP/1.1" 200 -</p>
</blockquote>
<p>Is there any ways to remove these output informations into a log file?</p>
<p>Thanks! </p>
| -2 | 2016-08-29T13:23:59Z | 39,207,595 | <p>You can capture both standard output & error like this (and run in the background):</p>
<pre><code>(python app.py >logfile.txt 2>&1 &)
</code></pre>
<p>If you want to suppress output completely and drop the log:</p>
<pre><code>(python app.py >/dev/null 2>&1 &)
</code></pre>
<p>The parentheses also suppress the job-control message (e.g. <code>[1] 4456</code>) that the shell prints when it starts a background process.</p>
| 0 | 2016-08-29T13:33:00Z | [
"python",
"flask"
] |
Give multiple terminal commands in a single file and run all the commands at once? | 39,207,545 | <p>I'm a beginner with the Raspberry Pi and I have a basic question.</p>
<p>I'm basically trying to make my raspberry pi into a beacon and advertise data from it to a Android app. </p>
<p>I wonder if I can put multiple terminal commands in a single file and run them all simply by executing that file?</p>
<p>I followed <a href="https://learn.adafruit.com/pibeacon-ibeacon-with-a-raspberry-pi/overview" rel="nofollow">this tutorial</a>. </p>
<p>My question is: each time I have to check whether a (Bluetooth) device is available and advertise it, each step takes its own command. Can I put multiple Raspberry Pi commands into a file and run them all simply by executing that file (as a script)?</p>
<p>Few of the commands are as follows :</p>
<pre><code>sudo hcitool lescan,
sudo hcitool hci0,
sudo hcitool -i hci0 0x008,
</code></pre>
<p>and few commands like these..</p>
| 1 | 2016-08-29T13:30:17Z | 39,208,828 | <p>Say you have a file <code>example.txt</code> with your commands:</p>
<pre><code>sudo hcitool lescan
sudo hcitool hci0
sudo hcitool -i hci0 0x008
</code></pre>
<p>Then you can execute those commands by running <code>sh example.txt</code> or <code>bash example.txt</code>.
See <a href="http://stackoverflow.com/questions/9825495/ubuntu-run-text-file-as-command">ubuntu run text file as command</a></p>
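<p>Since the question is tagged python, another option is to drive the same command file from a small Python wrapper using <code>subprocess</code> — a sketch, with the file name assumed to match the answer's <code>example.txt</code>:</p>

```python
import subprocess

def run_commands(path):
    """Run each non-empty, non-comment line of *path* as a shell command, in order."""
    with open(path) as f:
        for line in f:
            cmd = line.strip()
            if cmd and not cmd.startswith('#'):
                # check=True raises CalledProcessError if a command fails
                subprocess.run(cmd, shell=True, check=True)

# e.g. run_commands('example.txt')
```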
| 1 | 2016-08-29T14:34:59Z | [
"python",
"linux",
"raspberry-pi",
"bluetooth-lowenergy",
"ibeacon"
] |